Curso Advanced AI Security: Understanding and Mitigating Risks in LLM and GenAI

  • RPA | IA | AGI | ASI | ANI | IoT | PYTHON | DEEP LEARNING

Curso Advanced AI Security: Understanding and Mitigating Risks in LLM and GenAI

16 horas
Visão Geral

Este Advanced AI Security: Understanding and Mitigating Risks in LLM and GenAI,  foi desenvolvido para fornecer aos profissionais técnicos e arquitetos seniores uma compreensão abrangente dos riscos de segurança associados aos Large Language Models (LLMs) e à Inteligência Artificial Gerada (GenAI). Os participantes irão se aprofundar nas vulnerabilidades inerentes aos LLMs e aos sistemas GenAI, explorar técnicas de modelagem de ameaças e obter insights práticos sobre estratégias de mitigação. Ao desmistificar o funcionamento dos LLMs e do GenAI, os participantes estarão equipados com o conhecimento e as ferramentas necessárias para enfrentar eficazmente os desafios de segurança em ambientes orientados por IA.

Objetivo

Após realizar este Curso Advanced AI Security: Understanding and Mitigating Risks in LLM and GenAI, você será capaz de:

  • Entenda os riscos de segurança exclusivos apresentados por LLMs e GenAI.
  • Aprenda técnicas para modelagem de ameaças em sistemas LLM e GenAI.
  • Desmistifique o funcionamento dos LLMs e identifique ameaças associadas.
  • Desmistifique a operação dos sistemas GenAI e reconheça ameaças potenciais.
  • Explore estratégias de mitigação para lidar com vulnerabilidades LLM e GenAI.
  • Obtenha proficiência na implementação das 10 principais práticas de segurança da OWASP em ambientes LLM.
Publico Alvo

Profissionais técnicos, incluindo engenheiros de software, especialistas em segurança cibernética e arquitetos seniores, envolvidos no projeto, desenvolvimento ou proteção de sistemas de IA. Os participantes devem ter uma compreensão básica dos conceitos de inteligência artificial e dos princípios de segurança cibernética.

Materiais
Português/Inglês + Exercícios + Lab Pratico
Conteúdo Programatico

Understanding LLM Vulnerabilities and Threat Modeling

  1. Introduction to Large Language Models (LLMs)
  2. Risks and security challenges in LLMs
  3. Threat modeling methodologies for LLM systems
  4. Identifying common vulnerabilities in LLM architectures
  5. Case studies and real-world examples of LLM security incidents
  6. Hands-on threat modeling exercises for LLM systems

Exploring GenAI Threats and Mitigation Strategies

  1. Overview of Generated Artificial Intelligence (GenAI)
  2. Demystifying the operation of GenAI systems
  3. Threat landscape for GenAI applications
  4. Mitigation strategies for GenAI vulnerabilities
  5. Implementation of OWASP Top 10 security practices in LLM environments
  6. Best practices for securing AI-powered applications
  7. Case studies and real-world examples of GenAI security incidents
  8. Hands-on threat modeling exercises for GenAI systems

Delivery Format:

The course will be delivered through a combination of lectures, interactive discussions, hands-on exercises, and case studies. Participants will have the opportunity to engage with industry experts and collaborate with peers to deepen their understanding of AI security concepts and practices. Threat modeling exercises will be incorporated throughout the course to provide practical experience in assessing and mitigating security risks in LLM and GenAI systems.

  1. Threats and risks associated with LLMs,
  2. Prompt injection
  3. Insecure Output Handling
  4. Training Data Poisoning
  5. Supply Chain vulnerabilities
  6. Insecure Plugin Design
  7. Overreliance
  8. Model-Theft
  9. Excessive Agency
  10. Model Denial of Service
  11. Leverage GenAI Security Best Practices & Frameworks

Google Secure AI framework (SAIF)

  1. Overview of Best Practices
  2. Proactive threat detection and response for LLMs
  3. Leveraging threat intelligence, and automating defenses against LLM threats.
  4. Platform security controls to ensure consistency
  5. Enforcing least privilege permissions for LLM usage and development.
  6. Adaptation of application security controls to LLM-specific threats and risks
  7. Feedback loop when deploying and releasing LLM applications.
  8. Contextualize AI risks in surrounding business processes.

AI Risk Management Program

  1. Reduce AI Data Pipeline Attack Surface & LLM Data Validation
  2. Protecting the AI data pipeline
  3. Threat Management and Least Privilege
  4. LLM Application Security
  5. GenAI security controls
  6. Targeting GenAI-associated risks and threats
  7. Governance oversight

Analyzing Threats and risks associated with LLMs/GenAI

  1. Prompt injection
  2. Insecure Output Handling
  3. Training Data Poisoning
  4. Supply Chain vulnerabilities
  5. Insecure Plugin Design
  6. Overreliance
  7. Model-Theft
  8. Excessive Agency
  9. Model Denial of Service

LLM/GenAI Threat Modeling Maps

  1. Weakness and Vulnerability Analysis (WVA)
  2. Categorizing Threats with STRIDE
  3. STRIDE Threat Categorization
  4. Categorizing Threats with DREAD
  5. Process for Attack Simulation and Threat Analysis (PASTA)
  6. Common Attack Pattern Enumeration and Classification (CAPEC)
  7. Common Vulnerability Scoring System (CVSS)
TENHO INTERESSE

Cursos Relacionados

Curso AI ML Toolkits with Kubeflow Foundation

24 horas

Curso Container Management with Docker

24 Horas

Curso Machine Learning Python & R In Data Science

32 Horas

Curso Docker for Developers and System Administrators

16 horas

Curso artificial inteligence AI for Everyone Foundation

16 horas

Curso IA Inteligência Artificial e Código Aberto Foundation

16 horas

Curso Artificial Intelligence with Azure

24 Horas

Curso RPA Robotic Process Automation Industria 4.0

32 horas