Large language models (LLMs) are a type of AI that can generate text, translate languages, write different kinds of creative content and answer your questions in an informative way. Unfortunately, they are becoming the latest emerging attack surface.

To address this, the Open Worldwide Application Security Project (OWASP) created a working group consisting of an international team of around 500 experts with over 125 active contributors with the purpose of identifying the top 10 security issues that developers and security teams need to address when building applications with LLMs. 

The OWASP LLM top 10 list identifies the 10 most critical security risks for LLMs. These risks include (listed in order of criticality):

  1. Prompt injection: Occurs when an attacker can inject malicious code or data into the prompt that is used to interact with an LLM; this can lead to the LLM generating unintended or malicious output, or to the LLM being taken over by the attacker.
     
  2. Insecure output handling: Occurs when LLM output is not properly sanitized or validated; this can lead to sensitive information being leaked, or to the LLM being used to carry out attacks against other systems.
     
  3. Training data poisoning: Occurs when an attacker can introduce malicious data into the training data used to train an LLM; this can cause the LLM to generate biased or incorrect output, or to be susceptible to attacks.
     
  4. Model denial of service: Occurs when an attacker can overwhelm an LLM with requests, causing it to crash or to become unresponsive; this can prevent legitimate users from using the LLM.
     
  5. Supply chain vulnerabilities: Occurs when an attacker can compromise the supply chain for an LLM, such as the software used to train the LLM or the hardware on which it is running. This can allow the attacker to gain access to the LLM or to install malicious code on it.
     
  6. Sensitive information disclosure: Occurs when sensitive information, such as passwords or credit card numbers, is not properly protected by an LLM; this can lead to this information being leaked to attackers.
     
  7. Insecure plugin design: Occurs when plugins or extensions for an LLM are not properly designed or implemented; this can allow attackers to exploit these plugins to gain access to the LLM or to take control of it.
     
  8. Excessive agency: Occurs when an LLM is given too much control over a system or process; this can allow the LLM to be used to carry out unauthorized actions, such as deleting data or changing system settings.
     
  9. Overreliance: Lack of human-in-the-loop validation of LLM output can expose systems or people to wrong or unsuitable content from LLMs, leading to problems in information, communication, security and even law.
     
  10. Model theft: This is stealing, copying, or leaking LLM models; the consequences are financial loss, losing edge over competitors and exposing confidential information.
image

These are just some of the security risks that enterprise CISOs need to be aware of when deploying LLMs in their organizations. By understanding these risks CISOs can take steps to mitigate them and protect their systems from attack.

Here are a few suggestions to address the threats identified in LLM's top 10:

  • Monitor all LLM inputs and outputs in real-time for signs of manipulation or data exfiltration using AI-enhanced detection tools; set strict thresholds for anomalous activity.
     
  • Implement robust identity and access management to control who can access and use LLM models and data; require multi-factor authentication for any human-LLM interactions.
     
  • Use data sanitization and scrubbing techniques; ensure only appropriate data makes it into the training data.  
     
  • Continuously scan for new vulnerabilities and misconfigurations in all LLM frameworks, libraries and dependencies; keep everything patched and updated, including 3rd-party packages, open source models, crowd-sourced data and plugins.
     
  • Use a secure development lifecycle (SDLC), a process for developing software that is secure by design; it includes steps such as threat modeling, code review and penetration testing.
     
  • Train and educate employees on the security risks associated with LLMs and how to mitigate them.
     
  • Mandate documented AIBOMs from LLM vendors that contain detailed information on data provenance, model architecture, training methodology and performance benchmarks; review AIBOMs thoroughly prior to procurement and deployment decisions.
     
  • Have a plan for responding to LLM attacks – the event of an LLM attack, it is important to have a plan for responding; this plan should include isolating the LLM, investigating the attack and remediating the vulnerabilities that were exploited.

By following these suggestions, cyber teams can help to mitigate the security risks associated with LLMs and protect their organizations from attack. There is also the OWASP AI Security and Privacy Guide, a useful guide for understanding risks and mitigation for LLMs  

Additionally, as LLMs become more prevalent in application development, the current CVE system (a framework to catalog and manage vulnerabilities) will need to incorporate elements related to natural language processing exploits coupled with application code vulnerabilities. This new threat landscape of dealing with software that recognizes both human and programming languages will have an entirely new set of attack vectors that these frameworks need to acknowledge.

LLMs are a powerful technology with a wide range of potential applications. However, they also pose a number of security risks. Enterprise CISOs need to be aware of these risks and take steps to mitigate them. By following the suggestions in this article, CISOs can help to protect their organizations from attack.