October 16, 2024October 17, 2024 OWASP (Few words about AISec p2) Table of Contents This post describes different programs within OWASP AI Security functions.This post is also the collection of different studies. Lot’s of reading through the endless Internet.Lot’s of links to drafts, university papers, github repos and other studies. Machine Learning T10 (part 2) This is the second part of Owasp’s Machine Learning T10. Last time I described the current top10 with small descriptions borrowed from OWASP.org.This another OWASP site describes a lot of more information from different T10 attacks. There are also translations to different languages. Source: https://genai.owasp.org/llm-top-10/ I don’t list T10 attacks again since you can learn about them from GenAI Owasp Site. Machine Learning T10 related information There are lot’s of related information based on OWASP Machine Learning. Here are few of themVulnerability & Risk databases:Catalogued vulnerabilities and risks that were present in real-world AI and ML systems:AI Vulnerability Database (AVID) by AI Risk and Vulnerability AllianceAI Risk Database (airisk.io)AI Risk Repository (airisk.mit.edu)Security Vulnerability Analyses of Large Language Models (LLMs) through Extension of the Common Vulnerability Scoring System (CVSS) FrameworkAI incidents:OECD AI Incidents Monitor (AIM)AI Incidents Database (AIID)AIAAIC Repository AI/ML security guidelines:Various guidelines on ML and AI Security and SafetyOWASP AI Security and Privacy GuideETSI.org’s Securing Artificial Intelligence (SAI)Biden&Harris Administraton – Ensuring Safe, Secure and Trustworthy AIOthers:All the other resources related to ML Security – threat modelling resources, risk assessments frameworksTrusted AI Adversarial Robustness ToolboxENISA – Securing Machine Learning AlgorithmsAwesome AI Security- A curated list of AI security resources (*)Awesome ML Security – A curated list of awesome ML security references, guidance, tools (*)Awesome Attacks on ML Privacy – curated list of papers related to privacy attacks (*)(*) Part of these have scientific papers from different Universities) Getting Started with AI Security - the Checklist The OWASP Top 10 for LLM Applications Cybersecurity and Governance Checklist is intended for people who are striving to stay ahead in the fast-moving AI world, aiming not just to leverage AI for corporate success but also to protect against the risks of hasty or insecure AI implementations. These leaders and teams must create tactics to grab opportunities, combat challenges, and mitigate risks.LLM Applications Cybersecurity and Governance Checklist v1.1 – English Guide for Preparing and Responding to Deepfake Events Deepfakes—hyper-realistic digital forgeries—have gained significant attention as the rapid development of generative AI has made it easier to produce convincingly realistic videos and audio recordings that can deceive even the most discerning viewers.Key strategies that the guide endorses include:Focusing on process adherence rather than visual or auditory detection of fakesImplementing and maintaining strong financial controls and verification proceduresCultivating a culture of awareness and skepticism towards unusual requests.Developing and regularly updating incident response plans.Guide for Preparing and Responding to Deepfake Events From the OWASP Top 10 for LLM Applications Team LLM AI Agents Cobus Greyling, Chief Evangelist @ Kore.ai writes about AI Agent’s. It is a program that uses one or more Large Language Models or Foundational Models as its backbone, enabling it to operate autonomously. AI Agents can handle highly ambiguous questions by decomposing them through a chain of thought process, similar to human reasoning. These agents have access to a variety of tools, including programs, APIs, web searches, and more, to perform tasks and find solutions.This recent research from Microsoft called the OmniParser is a general screen parsing tool, designed to extract information from UI screenshots into structured bounding boxes and labels, thereby enhancing GPT-4V’s performance in action prediction across various user tasks.Complex tasks can often be broken down into multiple steps, each requiring the model’s ability to:1. Understand the current UI screen by analysing the overall content and functions of detected icons labeled with numeric IDs, and2. Predict the next action on the screen to complete the task.Read more here or the Microsoft Research. OpenAgents GitHub available for the code. You can test it. This image illustrates the various components that make up an AI Agent, including its web browsing capabilities and its ability to export phone screens, desktop views, and web browsers. Source: www.cobusgreyling.com AI Threat Map The purpose of the AI Threat Model is to help defenders prepare their organizations by understanding the different types of threats and implement appropriate controls.Sandy Dunn , CISO, Board Member AIML and her AI Threat Map. She created the map after ChatGPT’s release in November 2022 to understand the deluge of AI/ML threats and vulnerabilities information.It has just been updated to version 1.9. Click here to see it Full. Link goes to her GitHub project.The AI Threat Map includes seven categories of Threats:Threats from AI ModelsThreats Using AI ModelsThreat to AI ModelsAI Legal & Regulatory ThreatThreats NOT using AI ModelsThreat of AI DependencyThreat Not Understanding AI ModelsUse the AI Threat Map to:Illustrates the challenge of balancing the different types of threatsIdentify and plan for all types of AI ThreatsQuick identification of weak or high risk areas to prioritize Image from Sandy Dunn AI Threat Map PDF AI Red Teaming Research Initiative: AI Red Teaming & Evaluation OWASP outlook in AI Red Teaming: The Power of Adversarial Thinking in AI Security – can be found here.OWASP GAI Red Teaming Methodologies, Guidelines & Best Practices draft can be found here. OWASP LLM System Guardrails & AI Red Teaming OWASP LLM Newsletter September 2024 OWASP LLM Newsletter September 2024 What is AI Red Teaming? “AI Red Teaming is a systematic, adversarial approach, employed by human testers, to identify issues/problems in systems that have Generative AI components. The tests include tests for unsafe material, Inaccuracies, out-of-scope responses and identify unknown risks that come to light from live usage/new discovery/benchmarks. Developers can then use that information to retrain/augment the models or develop “guardrail” rules to mitigate risk” – Krishna Sankar Krisna Sankar – A Distinguished Engineer of GenAI Red Teaming & Security Guardrails introduced for me the topic of AI Redteaming in his blog. Source: Krishna Sankar - AI Red Teaming blog What's the difference between traditional Red-Teaming and AI Red-Teaming? A question which really interests me. Well one answer is given by Dr. Josh Harguess (cranium.ai).” “Red teaming” and “AI red teaming” are two approaches used in security and assessment practices to test and improve systems. While traditional red teaming focuses on evaluating the security of physical and cyber systems through simulated adversary attacks, AI red teaming specifically addresses the security, robustness, and trustworthiness of artificial intelligence systems.”Read the whole GREAT article here.The Venn diagram below illustrates the overlap among cybersecurity, traditional red teaming, and AI red teaming. SOURCE: Dr. Josh Harguess (Cranium.ai) Conclusion There are lots of going on in AI Security area. Large Language Model security is one the most studied field but AI Red Teaming and other Security studies are evolving. This is definitely something I want to be part of. So let’s continue and participate to these. First part of this series called “Few words about AI Security” can be found here.Next part is related to my work with Microsoft products -> Microsoft AI Security. Share on Social Media x facebook linkedinwhatsapp Discover more from Jussi Metso Subscribe to get the latest posts sent to your email. Type your email… Subscribe AI SECURITY
SECURITY NIS2.0 – The new EU-wide cybersecurity directive and how Microsoft solutions can help October 19, 2023October 20, 2023 Table of Contents Summary for the C-LEVEL NIS2.0 is the new EU directive on network… Read More
AI Microsoft Security Copilot – Can your SOC live without it? December 3, 2023December 3, 2023 Table of Contents Microsoft is bringing Copilot also for the Security field called Security Copilot…. Read More
AI Security Copilot refresh February 8, 2025February 8, 2025 Microsoft Security Copilot is a generative AI-powered security solution that helps increase the efficiency and capabilities of defenders to improve security outcomes at machine speed and scale. Read More