💉(WIP) Offensive Approach for Prompt Injection Attacks

As we discussed earlier, Prompt Injection can be used to exploit many different vulnerabilities in your Generative AI Application.

Security engineers may not be scientists, but security work can be practical in fascinating ways. The best way to secure a complex system is to test it in a practical, simplified way.

Let's jump into the approach.

  1. Analyze the Architecture - Recon

Offensive testing starts with good reconnaissance. Security is still security, so you need to know your target. I assume you're performing approved white-box testing as part of your role.

  • Read all the architectural references, even for small plugins that may impact the system. Then re-draw the architecture in your own way. My approach is to track the user input's journey through the system and re-draw that journey in security terms, as in the example and sketch below.

Simplified example:

Prompt --> Welcoming Guardrail
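To make the re-drawn journey more concrete, here is a minimal Python sketch that models each hop the prompt takes as a simple data structure annotated with security questions. Only "Welcoming Guardrail" comes from the example above; the other stage names (system prompt assembly, the LLM call, tool execution) are hypothetical placeholders you would swap for whatever your own architecture actually contains.

```python
from dataclasses import dataclass, field

@dataclass
class Stage:
    """One hop in the user input's journey through the application."""
    name: str
    trust_boundary: bool                      # does input cross into a more privileged context here?
    injection_notes: list[str] = field(default_factory=list)

# Hypothetical journey of a prompt. Only "Welcoming Guardrail" appears in the
# original example; the rest are placeholders to adapt to your architecture.
journey = [
    Stage("Prompt (user input)", trust_boundary=False,
          injection_notes=["attacker-controlled by definition"]),
    Stage("Welcoming Guardrail", trust_boundary=True,
          injection_notes=["can it be bypassed with encodings or language switching?"]),
    Stage("System prompt assembly", trust_boundary=True,
          injection_notes=["is user text concatenated next to privileged instructions?"]),
    Stage("LLM call", trust_boundary=False,
          injection_notes=["which instructions win when user and system text conflict?"]),
    Stage("Tool / plugin execution", trust_boundary=True,
          injection_notes=["can model output trigger actions the user could not call directly?"]),
]

# Print the journey in "security language": each hop plus the questions to test.
for stage in journey:
    marker = "[TRUST BOUNDARY]" if stage.trust_boundary else ""
    print(f"--> {stage.name} {marker}")
    for note in stage.injection_notes:
        print(f"      test: {note}")
```

Walking the printed output top to bottom gives the same flow as the arrow diagram above, but annotated with the questions you will try to answer during the actual injection testing.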
