👹(WIP) AI/ML/LLM Application Security Testing
Any AI/ML/LLM-related application should undergo penetration testing and red teaming activities scaled to its size and needs. We should give these systems the same attention we give traditional applications; from a risk perspective there is no difference. If anything, due to the model's capabilities and its improvisational nature, high-profile risks are more likely.
In application security testing, I prefer to examine these systems under two headings:
- Predictive AI System Security Testing
- Generative AI System Security Testing
In general, both have a model and the data the model is trained on. However, first-category systems use machine learning models to forecast future events or behaviours based on historical data, while second-category systems generate entirely new output based on what they have learned from the provided data. The two categories share architectural parts and therefore a joint attack surface: both rely on training data, models, and tokenization as central components, and both are exposed to similar tokenization issues, privacy attacks, and model bias risks.
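To make the shared "privacy attack" surface concrete, here is a minimal membership-inference sketch in Python. It is only an illustration under assumed conditions (a scikit-learn-style classifier and synthetic data); real targets will need model-specific probes.

```python
# Minimal membership-inference sketch: members (training samples) often get
# higher confidence than non-members, hinting at memorisation/privacy risk.
# The synthetic data and generic classifier are assumptions for illustration.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def true_label_confidence(clf, X, y):
    """Confidence the model assigns to each sample's true label."""
    probs = clf.predict_proba(X)
    return probs[np.arange(len(y)), y]

# A large gap between the two means suggests the model "remembers" its
# training data, which an attacker can exploit to infer membership.
print("members (train):   ", true_label_confidence(model, X_train, y_train).mean())
print("non-members (test):", true_label_confidence(model, X_test, y_test).mean())
```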
All experienced and skilled security testers share a common approach: know your target! So, whether the target is predictive or generative, you should know it. The first questions to ask are:
- What kind of model does it use?
- How is it trained, and how will the fine-tuning cycles work, if there are any?
- What is the data source? Could it be poisoned? (See the poisoning sketch after this list.)
- Where does tokenization happen? What kinds of tokens can you use to manipulate the model via inputs? Are there any special tokens that can trigger behaviours such as prediction, for example the MASK token? (See the mask-token sketch after this list.)
- What kind of data storage is used?
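The data-source question can be demonstrated with a short label-flipping experiment. This is a hedged sketch on synthetic data (the dataset and the 20% flip rate are assumptions), not a statement about any particular pipeline:

```python
# Label-flipping poisoning sketch: an attacker who controls part of the data
# source flips labels and degrades the trained model. Synthetic data only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Flip 20% of the training labels, simulating a poisoned ingestion pipeline
# (e.g. a scraped or user-feedback data source the attacker can influence).
rng = np.random.default_rng(1)
flip = rng.random(len(y_train)) < 0.20
poisoned_labels = np.where(flip, 1 - y_train, y_train)
poisoned = LogisticRegression(max_iter=1000).fit(X_train, poisoned_labels)

print("clean model accuracy:   ", clean.score(X_test, y_test))
print("poisoned model accuracy:", poisoned.score(X_test, y_test))
```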
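For the special-token question, a quick way to see mask-driven prediction in action is a fill-mask probe. This assumes a Hugging Face BERT-style model (`bert-base-uncased` here) purely for illustration; the target application may use a different tokenizer and different special tokens:

```python
# Fill-mask probe: if user input can smuggle the tokenizer's mask token into
# text the model processes, the model will "predict" the hidden word, which
# can expose behaviour the application never intended to offer.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

# Hypothetical probe sentence; the mask token is taken from the tokenizer.
probe = f"The service stores its secrets in the {fill.tokenizer.mask_token} file."
for candidate in fill(probe)[:3]:
    print(candidate["token_str"], round(candidate["score"], 3))
```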