From SQL Injection to Prompt Injection: Why Security History Repeats Itself

Introduction: Old Vulnerabilities in New Packages

The more things change, the more they stay the same. In 1998, Rain Forest Puppy published one of the first formal advisories about SQL injection, revealing how attackers could manipulate database queries by inserting malicious SQL commands into input fields. Fast forward to today, and we're witnessing the rise of prompt injection attacks against AI systems—a remarkably similar vulnerability with potentially more severe consequences.

Both SQL injection and prompt injection share the same fundamental flaw: trusting user (or attacker)-controlled input in a privileged execution context. Whether typed into a web form or ingested through a document by an AI agent, the core vulnerability remains unchanged—the system fails to properly separate code from data.

Understanding the Parallels

To appreciate how history is repeating itself, let's examine the similarities:

1. Trust Boundaries Violation

SQL Injection: Occurs when an application constructs SQL statements by concatenating user input without proper validation:

Prompt Injection: Occurs when an AI system incorporates user input into its instructions without proper boundaries. Either directly through a LLM chat interface (direct) or it can be from a article, product description or  CV (indirect prompt injection)

2. Privilege Escalation Path

SQL Injection: Allows attackers to execute commands with the database privileges of the application.

Prompt Injection: Allows attackers to execute instructions with the capabilities granted to the AI model which can include tools, databases and other agents which may have access to data throughout the environment.

3. Failure to Validate Input

SQL Injection: Stems from inadequate input validation before database processing.

Prompt Injection: Stems from inadequate input validation before prompt processing.

The Critical Difference: Deterministic vs. Non-Deterministic Systems

While the parallels are striking, there's a fundamental difference that makes AI security significantly more challenging: determinism.

SQL databases are deterministic systems—the same query executed against the same database state will always return the same result. This predictability makes security testing straightforward: if a test demonstrates that an attack doesn't work once, it won't work on subsequent attempts (assuming the system remains unchanged).

LLMs, however, are fundamentally non-deterministic. The same prompt can produce different outputs across multiple runs due to the probabilistic nature of these models. This means:

  1. Inconsistent security boundaries: An attack prompt that fails on the first attempt might succeed on the fifth try.
  2. Unreliable testing: Security testing can't definitively prove the absence of vulnerabilities, only their presence in specific instances.
  3. Temporal vulnerabilities: Vulnerabilities may only manifest under specific conditions or with specific phrasings.

Example: The Inconsistency Challenge

Consider this simple security test scenario:

  1. A security team tests a prompt injection attack against a GenAI system (for our purposes a LLM)
  2. The attack fails 10 times in a row
  3. The team concludes the system is secure against that attack
  4. In production, an attacker tries the same attack 100 times with slight variations
  5. On the 17th attempt, the attack succeeds due to the non-deterministic nature of the model

This scenario rarely occurs with traditional vulnerabilities like SQL injection but becomes commonplace when dealing with AI systems.

The Inherent Conflict in AI Design

This non-determinism is further complicated by an inherent conflict in AI system design: LLMs are extensively trained to be helpful and responsive to user requests, yet security requires them to sometimes refuse or restrict responses.

This creates a fundamental tension. The more helpful an AI becomes, the more vulnerable it may be to carefully crafted requests that circumvent security guardrails. A clever turn of phrase or an indirect approach may be all it takes to bypass restrictions that worked in testing.

Even with guardrails and filter/supervisor LLMs,  encoding bypasses or indirect questioning will eventually find gaps. Multiple layers of protection that appear robust individually can still be circumvented through persistent attempts that exploit the probabilistic nature of responses.

Real-World Implications

The consequences of this new vulnerability landscape are significant:

1. Security Testing Challenges

Traditional security testing methodologies are insufficient for AI systems. Fuzzing a system once with known attack patterns is no longer adequate. Testing must account for:

  • Multiple attempts of the same attack pattern
  • Variations in phrasing and approach
  • Different execution contexts
  • Temperature and sampling settings

2. Shifting Security Perimeters

As AI systems get more capable, their potential to be weaponized increases. An LLM with access to sensitive systems through integration with internal tools or agents becomes a significant attack vector. The security perimeter now includes:

  • The prompt interface
  • Connected tools, agents and plugins
  • The underlying model itself
  • Data retrieval systems (like RAG)

3. Evolving Attack Techniques

New attack techniques specifically targeting these non-deterministic properties are emerging.

  • Iterative jailbreaking: Repeatedly attempting slightly modified attacks until one succeeds
  • Indirect prompt injection: Attacking through seemingly benign content that the AI processes
  • Model confounding: Deliberately creating ambiguous contexts that confuse the model's safety mechanisms
  • Context manipulation: Restructuring queries to make harmful content appear legitimate

The Imperative for Defence in Depth

Given these challenges, good security hygiene across the entire system becomes even more crucial. We need defense in depth because the front gate—the LLM itself—can't be guaranteed to always stand firm.

Effective Defensive Strategies

  1. Input Validation Before Prompt Processing
    • Sanitize and validate user inputs before they reach the LLM
    • Implement pattern matching to detect potential attack vectors
    • Create allowlists for safe commands and inputs
  2. Output Filtering and Validation
    • Implement post-processing of AI outputs before presenting them to users
    • Scan for potentially harmful content or instructions
    • Validate outputs against expected patterns
  3. Architectural Boundaries
    • Isolate AI components from critical systems
    • Implement explicit permission models for AI tool access
    • Create separate execution environments for untrusted inputs
  4. Comprehensive Logging and Monitoring
    • Log all inputs, outputs, and actions taken by AI systems
    • Implement anomaly detection for unusual patterns of interaction
    • Preserve context information for security forensics
  5. Model-Specific Controls
    • Fine-tune models with explicit security examples
    • Implement content filtering at the model level
    • Use multiple models in sequence as checks and balances

Looking Ahead: Learning from History

The security community has spent decades developing robust defenses against SQL injection and similar vulnerabilities. We must apply these hard-won lessons to AI security rather than reinventing the wheel.

Key principles that remain relevant:

  1. Never trust user input: All user-provided content should be treated as potentially malicious
  2. Maintain clear boundaries: Separation between instructions and data is critical, ensure the LLM is not trained or given access to senstive data.
  3. Implement defence in depth: Multiple layers of security controls are necessary
  4. Regular security testing: Continuous, comprehensive testing with evolving scenarios
  5. Assume compromise: Design systems to limit damage when (not if) controls fail

Conclusion: The More Things Change...

Have we really learned our lessons from the past? Or are we facing an even tougher challenge that demands a return to fundamental security principles throughout our systems?

The emerging landscape of AI security suggests both. We're facing many of the same core vulnerabilities we've battled for decades, but with a non-deterministic twist that makes them significantly harder to mitigate.

As organizations rush to implement AI capabilities, the security community must quickly adapt traditional security principles to these new systems. Those who recognize the patterns from previous technology waves will be better positioned to prevent history from repeating itself—at least in its most damaging forms.

For security professionals, the message is clear: AI security isn't an entirely new discipline. It's an evolution of established security principles applied to systems with new properties and risks. By understanding both the similarities and the critical differences, we can build more secure AI implementations that don't repeat the mistakes of the past.

Need help evaluating the security of your AI implementations? Contact our team of experts who specialize in testing and securing AI systems against the latest threats.

Jamie Baxter

Principal at Appsurent

© 2025 Appsurent Cyber Security. All rights reserved.