Application Security in the AI Era
In their recent paper titled “Hype Cycle”, Gartner provides an AI temperature check. The research firm that advises executives and IT leaders on what to buy and what to prioritize offers clear guidance on which security ideas are early and messy, which are overhyped, and which are becoming normal. Developers should care because those decisions become your reality: which tools get standardized, what shows up as pull request (PR) gates in continuous integration and continuous delivery (CI/CD), and what “secure enough to ship” means this year.
It matters because it influences what security leaders buy and roll out, and those choices quickly show up in your day-to-day: new pull request (PR) gates in continuous integration and continuous delivery (CI/CD), new “must fix before release” thresholds, and new platforms and integrations you’re expected to adopt. It also points out that many security tools overlap, which is why companies are pushing toward fewer “platform” tools instead of a pile of point solutions.
Developers should care for two reasons. First, it’s a preview of what’s about to land in your workflow: more automated checks and more fixes owned by engineering. Second, it’s a signal that consolidation decisions are coming, and those choices can either reduce friction or make your pipeline miserable.
Three Planning Bets
Gartner leads with three planning assumptions:
• By 2027, at least 30% of AppSec exposures will result from vibe coding practices.
• By 2026, at least 40% of organizations will default to AppSec testing vendors for AI-based auto-remediation of vulnerable code.
• Through 2029, over 50% of successful attacks against AI agents will exploit access control issues using direct or indirect prompt injection.
That third point is the big shift: prompt injection becomes an authorization problem to prevent agents from taking actions they shouldn’t (e.g. deleting data in a production database).
The Four Themes
AppSec in 2025 falls into four themes. Two are AI-driven. Two are long-standing realities that are now the default and need to work at scale.
AI changes things
- AI-augmented development: Developers are using AI to produce more code faster. That boosts productivity, but it also increases the amount of change moving through your system and the amount of code that gets merged without the same level of human review per line.
- GenAI-enabled applications: More products now include generative artificial intelligence (GenAI) features and agents. That creates a new attack surface (especially when agents can take actions), via prompt injection. For example, a clever enough prompt may tell your backend to delete the entire production database. Bottom line: Your app now has a user interface for attackers, and that UI is English.
Old problems, now unavoidable
- Streamlining DevSecOps: This is not new; it’s the default model. Developers run the checks and fix most issues in the pull request / continuous integration and continuous delivery (CI/CD) flow, while security teams define the rules and monitor compliance. The main goal now is reducing friction: fewer low-value alerts, clearer ownership, and better prioritization so engineers focus on what actually matters.
- Code + workload together: You can’t judge risk from code alone anymore. You also need runtime context like what is exposed to the internet, what the workload can reach, and what permissions it has. Plain example: a missing authorization check on /admin/exportUsers is bad either way, but if that service gets exposed through an application programming interface (API) gateway, it becomes “anyone on the internet can export users.” That’s why AppSec has to connect code findings to runtime facts like exposure, permissions, and reachability. This is particularly of interest now that AI assisted code vastly increases the amount of code a single developer can write.
Topic: Vibe Coding
What It Is: Rapid AI-built prototypes where people focus on results more than reading the generated code.
Example Security Concern: AI generates a “shareable link” endpoint for uploads and it works, but it uses predictable IDs and no expiry; now anyone who guesses a link can download files. Do next: Keep it out of production. Use a sandbox and a standard process for what can graduate.
Practical Move: Don’t ban it. Raise the verification bar on critical paths (auth, exports, payments) and don’t rubber-stamp massive diffs.
Topic: Agents + tool use (prompt injection becomes authorization)
What It Is: The model can call tools (refunds, data lookups, exports), turning natural language into potentially unwanted actions.
Example Security Concern: Agent can call issueRefund(orderId); a user convinces it to refund an order that isn’t theirs because the refund tool trusts the agent instead of enforcing per-user authorization.
Practical Move: Enforce permissions at the tool/API boundary. Prompts are not a security control.
Topic: Model Context Protocol
What It Is: A standard for two-way communication between AI models and tools/data sources.
Example Security Concerns: Model has access to getCustomerRecord() for support; a support rep should only access certain customers/fields; if the tool doesn’t enforce this, the model becomes a bypass around access rules.
Practical Move: Treat each tool like a privileged API: narrow scopes, per-user authorization, audit logs. Avoid externally accessible remote MCP servers in production right now
The items above increase risk because they make it easier to ship code quickly or let software take actions on a user’s behalf. The next set is different. These aren’t “AI adds risk” items — they’re controls meant to reduce risk by testing AI behavior, catching misuse in production, and helping developers fix issues faster.
Topic: AI Code Security Assistants (ACSA)
What It Is: AI help that identifies issues and assists remediation.
Example Security Concerns: Assistant “fixes injection” by refactoring a query but accidentally removes a tenant filter; if the change is auto-accepted, the fix creates cross-tenant access.
Practical Move: Require boundary tests (auth + tenant isolation) for any AI-suggested fix that touches data access.
Topic: AI Security Testing
What It Is: AI-focused testing to find exposures in AI-enabled apps and agents.
Example Security Concerns: That it may not catch all of the possible issues: such as prompt injection attacks.
Practical Move: Add adversarial tests before any public AI feature that retrieves sensitive data or triggers actions.
Topic: AI Runtime Defense
What It Is: Production visibility and prevention for attacks like prompt injection.
Example Security Concerns: Runtime defense logs full prompts/tool outputs for debugging; without scrubbing, you store sensitive customer data and secrets in logs.
Practical Move: Monitor tool calls and sensitive actions, but scrub logs and set hard policies for high-risk actions.
Definition of Done for AI-Generated Changes
For any AI-assisted change that touches auth, data access, exports, payments, or admin actions:
1. Authorization never breaks
Example: /admin/* fails for normal users in an automated test.
2. Tenant / customer isolation never breaks
Example: user A can’t access user B’s data even with a guessed identifier.
3. Sensitive data doesn’t leak via logs or errors
Example: no tokens, secrets, or customer payloads in logs.
4. Input validation stays strict
Example: reject unknown fields and invalid types.
5. Dependency changes are treated like code changes
Example: software composition analysis (SCA) blocks known critical vulnerable versions.
6. Huge diffs don’t get rubber-stamped
Example: break up massive diffs or add targeted tests that prove boundaries didn’t change.
7. Agents enforce permissions at the tool boundary
Example: issueRefund() checks caller identity/role every time (prompt doesn’t matter).
TL;DR
AI-assisted coding is the new abstraction layer. Don’t fight it. Shift trust from “a human wrote it” to “we can prove it’s safe.”
The real risks aren’t “AI wrote code.” They’re:
- Shipping more code per unit of human attention
- Agents turning natural language into real actions (prompt injection becomes an access control problem)
- Easier wiring of models to powerful tools (model context protocol (MCP) makes over-permissioning and exposure easier)

Need to modernize AppSec for the AI era?
If AI assisted development is accelerating your delivery, we can help you secure it without slowing teams down. Talk with IntelliTect about AppSec strategy, AI risk, and DevSecOps optimization.
