Many organizations are building or extending apps that leverage LLMs to take advantage of the capabilities AI brings. This, however, raises major security concerns (LLM applications even have their own OWASP Top 10 list). I’ve been looking into these concerns to keep developers and their orgs informed, and I invite you to read our new article (AI Prompt and Inference Pipeline Threats | Pangea), which covers security threats in apps that use AI/LLMs.
This article focuses primarily on threats involving user prompts and the inference pipeline. The risks covered include prompt injection attacks, data poisoning, backdoors, agent risks (e.g., excessive agency), and sensitive data disclosure.
You can find this article and 30 others on the Secure by Design Education Hub (pangea.com/securebydesign/). As always, we welcome your thoughts and feedback, and we’d love for you to share these articles and the site with anyone who may find them valuable.