Security Debt: How Today's AI Implementations Will Haunt Us Tomorrow

Introduction

In the rush to implement AI capabilities, organizations are unknowingly accumulating significant security debt—technical decisions made today that will create security vulnerabilities tomorrow. Just as we're still battling legacy systems vulnerable to decades-old exploits, hastily deployed AI systems are creating the next generation of security headaches.

After testing more and more GenAI applications and applications generated by AI. It's clear we're building tomorrow's security debt today with rushed AI deployments and vibe-coded applications.

The Parallel with Web Application Development

In the early 2000s, companies rushed to establish web presence without adequate security considerations. The result? Decades of SQL injection, XSS, and CSRF vulnerabilities that continue to plague organizations. Today's AI implementation rush follows a disturbingly similar pattern.

Then vs. Now:

  • 2000: "Just get the website up!"
  • 2023: "Just get the AI working!"
  • 2005: "We'll fix the security issues later."
  • 2023: "We'll add guardrails once it's in production."

The Unique Challenges of AI Security Debt

Unlike traditional security debt, AI systems present unique challenges that make remediation particularly complex:

Non-Deterministic Behavior

AI systems, particularly large language models, exhibit non-deterministic behavior. The same input can produce different outputs across multiple runs, making vulnerability detection and verification extraordinarily difficult. When security testing can't be definitively reproduced, it creates dangerous blind spots in your security posture.

Agent, Model and Tool Interdependencies

Today's AI implementations often involve chains of models, frameworks and tools working together. Each connection point introduces potential vulnerabilities, but organizations rarely document these dependencies comprehensively. As these systems evolve, the original understanding of data flow and trust boundaries becomes lost.

Surface Level Understanding

Many AI models function as "black boxes," making it difficult to understand why specific outputs are generated. This lack of explainability creates perfect hiding places for vulnerabilities that can remain undetected for years.

Evolving Attack Techniques

The techniques for attacking AI systems are developing rapidly. Systems designed today based on current understanding of threats will likely be vulnerable to attack techniques discovered next year—but by then, these systems may be deeply embedded in critical business processes.

Where AI Security Debt Accumulates

Prompt Engineering Without Boundaries

Many organizations are implementing LLMs with minimal input validation, relying solely on prohibitions in the system prompt itself ("Don't do X"). As we've already seen, these safeguards are easily circumvented through clever prompting, yet they're being deployed in production systems today.

Excessive Model Permissions

Current implementations often give AI models unnecessarily broad access to tools and APIs. The principle of least privilege is frequently overlooked in the excitement of demonstrating AI capabilities.

Insufficient Monitoring and Logging

Many organizations implement AI without comprehensive logging of prompts, completions, and actions taken by the system. This will create massive blind spots when investigating future security incidents.

Integration with Legacy Systems

AI systems are being connected to legacy systems without proper security boundaries, creating new attack vectors into previously isolated systems.

The Coming Wave of AI Vulnerabilities

By 2026, we can expect to see:

  1. AI-Specific CVEs: Common Vulnerabilities and Exposures specifically targeting design flaws in today's AI implementations
  2. AI Security Remediation Projects: Major initiatives to fix fundamental security flaws in production AI systems
  3. Compliance Requirements: New regulatory frameworks requiring AI security reviews and remediations
  4. Security Debt-Driven Rewrites: Complete rebuilds of AI systems because the security debt has become unmanageable

How to Avoid Creating AI Security Debt Today

Apply Secure Development Practices to AI

Treat AI development with the same rigor as application development:

  • Conduct threat modeling sessions specific to AI components.
  • Implement proper SDLC practices for AI models and systems.
  • Establish clear requirements for AI security testing.

Document Dependencies and Trust Boundaries

Create comprehensive architecture diagrams that show:

  • Data flows between models and systems.
  • Trust boundaries and security controls.
  • Authentication and authorization mechanisms.

Implement Defence in Depth

Don't rely solely on the AI model's internal controls:

  • Add input validation before requests reach the model.
  • Implement output filtering and verification.
  • Create architectural boundaries that limit potential damage.
  • Maintain Software Bill of Materials for Model, RAG and Agent frameworks.

Future-Proof Your Monitoring

Implement comprehensive logging and monitoring that captures:

  • All inputs to AI systems.
  • All outputs and actions taken.
  • Context information necessary for future forensics.
  • Keeping Prompt and Response pairs is critical.

Regular Security Reviews

Schedule periodic reviews of AI implementations with security specialists who understand both traditional application security and AI-specific vulnerabilities.

Conclusion

The security debt we're accumulating in today's AI implementations will inevitably come due. Organizations that recognize this pattern from previous technology waves will take steps today to minimize that debt, while others will find themselves facing costly and complex remediation projects in the coming years.

The most dangerous assumption? That AI systems are too novel for traditional security principles to apply. They actually require both traditional and new AI-specific controls. So please check the code coming out and please don't give the new shiny AI Agent a service account with domain administrator privileges.

Need help evaluating the security of your AI implementations? Contact our team of experts who specialize in testing and securing AI systems against the latest threats.

Jamie Baxter

Principal at Appsurent

© 2025 Appsurent Cyber Security. All rights reserved.