Kafka Deep-Dive Course

16 hours
Overview

Kafka is the most widely used queueing and message-bus system in the world, and knowing how to use it correctly is essential to keeping it stable and performing well.

This hands-on, 16-hour Kafka Deep-Dive course is designed to go deep into the details and share plenty of practical advice and important information. Learn directly from our experts, who consult on Kafka projects every day, how Kafka should be used and which mistakes to avoid.

Objective

After completing this Kafka Deep-Dive course, you will be able to:

  • Use Kafka correctly.
  • Work with message formats and schemas.
  • Understand message delivery guarantees.
  • Maximize throughput and minimize latency.
  • Design for durability and high availability.
  • Monitor and tune performance.
Materials
English + Exercises + Hands-on Lab
Course Content

Module 1 - Introduction

  • Basic overview of Kafka
  • Broker, Consumer, Producer and Consumer groups
  • Topics, partitions and replicas
  • Logs, segments and lag
  • Working with Kafka (a minimal producer/consumer sketch follows this list)
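
The sketch below is a minimal illustration of these building blocks, assuming a local broker on localhost:9092 and a topic named demo-topic (both placeholders): a producer writes a keyed record to the topic, and a consumer that is part of a consumer group reads it back together with its partition and offset.

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringDeserializer;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class BasicProducerConsumer {
      public static void main(String[] args) {
        Properties producerProps = new Properties();
        producerProps.put("bootstrap.servers", "localhost:9092");
        producerProps.put("key.serializer", StringSerializer.class.getName());
        producerProps.put("value.serializer", StringSerializer.class.getName());

        // The key determines the partition, so records with the same key stay ordered.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
          producer.send(new ProducerRecord<>("demo-topic", "user-1", "hello kafka"));
        }

        Properties consumerProps = new Properties();
        consumerProps.put("bootstrap.servers", "localhost:9092");
        consumerProps.put("group.id", "demo-group"); // members of a group split the partitions
        consumerProps.put("key.deserializer", StringDeserializer.class.getName());
        consumerProps.put("value.deserializer", StringDeserializer.class.getName());
        consumerProps.put("auto.offset.reset", "earliest");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps)) {
          consumer.subscribe(List.of("demo-topic"));
          ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
          for (ConsumerRecord<String, String> record : records) {
            System.out.printf("partition=%d offset=%d value=%s%n",
                record.partition(), record.offset(), record.value());
          }
        }
      }
    }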

Module 2 - Creating a throughput optimized pipeline

  • Measuring a naive pipeline with default values
  • Compression impact
  • Message formats (Avro/JSON/Protobuf)
  • Bulking (batching)
  • Acks (see the producer-tuning sketch after this list)
  • Delivery semantics - At most once / At least once
  • Tweaking broker thread pools
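
The sketch below shows producer settings that typically trade a little latency for throughput; the broker address, topic name and the chosen values (lz4 compression, 128 KB batches, 20 ms linger, acks=1) are illustrative assumptions, not recommendations for every workload.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.ByteArraySerializer;

    public class ThroughputTunedProducer {
      public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getName());

        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "lz4"); // compression impact: fewer bytes per batch
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 128 * 1024);  // bulking: larger batches per partition
        props.put(ProducerConfig.LINGER_MS_CONFIG, 20);           // wait up to 20 ms to fill a batch
        props.put(ProducerConfig.ACKS_CONFIG, "1");               // acks: only the partition leader must confirm

        try (KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props)) {
          byte[] payload = new byte[1024];
          for (int i = 0; i < 100_000; i++) {
            // Fire-and-forget sends: no per-record blocking, which keeps the pipeline full.
            producer.send(new ProducerRecord<>("throughput-test", payload));
          }
          producer.flush();
        }
      }
    }

Raising linger.ms and batch.size usually increases throughput at the cost of latency, which is exactly the trade-off revisited in Module 3.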

Module 3 - How to measure and improve latency

  • Product wants to see their metrics within 5 seconds - now what?
  • Measuring latency (see the latency-probe sketch after this list)
  • Understanding Kafka artificial delays
  • What about delays coming from our application?
  • Consumer lag analysis and responses to downstream issues
  • We just made throughput optimizations, what is their impact on latency?
  • Can't I just add more partitions?
  • How does the broker state impact our latency?
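
A minimal latency probe, assuming a placeholder topic and group id: it approximates end-to-end latency as the difference between the consumer's wall clock and the record timestamp (CreateTime, i.e. set on the producer side, by default), so clock skew between hosts will distort the numbers.

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class LatencyProbe {
      public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "latency-probe");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
          consumer.subscribe(List.of("metrics-events"));
          while (true) {
            for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofMillis(500))) {
              // Record timestamp is CreateTime by default, i.e. assigned by the producer.
              long endToEndMs = System.currentTimeMillis() - record.timestamp();
              System.out.printf("partition=%d offset=%d end-to-end=%d ms%n",
                  record.partition(), record.offset(), endToEndMs);
            }
          }
        }
      }
    }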

Module 4 - Durability and fault tolerance

  • Are we highly available?
  • In-sync replicas & acks (see the topic-setup sketch after this list)
  • Replication lag
  • Quotas
  • Data retention - compacted topics vs retention by time/size
  • Retry mechanisms in Kafka
  • Consumer lag analysis and responses to broker outages
  • Schema evolution with schema registry
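
A minimal sketch of creating a durability-oriented topic with the AdminClient; the topic name, partition count, replication factor and config values are illustrative assumptions that show where in-sync replicas, retention and compaction are configured.

    import java.util.List;
    import java.util.Map;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.NewTopic;

    public class DurableTopicSetup {
      public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
          // 12 partitions, 3 replicas per partition.
          NewTopic orders = new NewTopic("orders", 12, (short) 3)
              .configs(Map.of(
                  "min.insync.replicas", "2",  // with acks=all, writes survive one replica being down
                  "retention.ms", "604800000", // time-based retention: 7 days
                  "cleanup.policy", "delete"   // use "compact" for changelog-style topics
              ));

          admin.createTopics(List.of(orders)).all().get();
        }
      }
    }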

Module 5 - Ordering, Transactions and Exactly Once Semantics

  • Ordering guarantees in Kafka - Writing a custom partition selection
  • Idempotency
  • Transactions - Read Committed vs Read Uncommitted (see the sketch after this list)
  • Manual offset commits
  • Consumer group rebalances and their impact (Consumer assignment strategies, Group member IDs)
  • Analysis of temporary failures (Overview of timeouts configurations and other potential failures)
  • Avoiding Consumer Group rebalances shaking things up due to temporary failures
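
A minimal sketch of an idempotent, transactional producer; the transactional.id, topic names and keys are placeholders. Records sent inside the transaction become visible atomically, and only to consumers running with isolation.level=read_committed.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.KafkaException;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class TransactionalProducerSketch {
      public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");            // no duplicates on retry
        props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "payments-writer-1"); // stable id across restarts

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
          producer.initTransactions();
          try {
            producer.beginTransaction();
            producer.send(new ProducerRecord<>("payments", "order-42", "CAPTURED"));
            producer.send(new ProducerRecord<>("payment-events", "order-42", "notify"));
            producer.commitTransaction(); // both records become visible atomically
          } catch (KafkaException e) {
            // Read-committed consumers never see the aborted records.
            // Fatal errors (e.g. a fenced producer) should close the producer instead.
            producer.abortTransaction();
            throw e;
          }
        }
      }
    }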

Module 6 - Deployment, Sizing and Costs

  • Right number of partitions (a sizing sketch follows this list)
  • Analyzing Replication costs
  • Calculating compute costs
  • Data retention costs
  • Separation of concerns (multi-cluster vs. single cluster, etc.)
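
A back-of-the-envelope sketch of a commonly used partition-count heuristic: take the larger of target throughput divided by per-partition producer throughput and target throughput divided by per-partition consumer throughput. All numbers below are illustrative assumptions, not benchmarks.

    public class PartitionSizing {
      public static void main(String[] args) {
        double targetMBps = 300;          // required aggregate throughput for the topic
        double producerPerPartition = 25; // measured MB/s one producer can push into one partition
        double consumerPerPartition = 15; // measured MB/s one consumer can drain from one partition

        long partitions = (long) Math.ceil(Math.max(
            targetMBps / producerPerPartition,   // 300 / 25 = 12
            targetMBps / consumerPerPartition)); // 300 / 15 = 20

        System.out.println("Suggested minimum partition count: " + partitions); // 20
      }
    }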

Module 7 - Production-grade Kafka

  • Fault tolerance with Consumers and Producers
  • Load testing and making sure our Kafka cluster and applications are up to the task
  • Transactions - Read Committed vs Read Uncommitted
  • Monitoring and what's behind the metrics (a consumer-lag sketch follows this list)
  • Load distribution
  • Takeaways for your daily work
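
A minimal consumer-lag check with the AdminClient, comparing a group's committed offsets against the latest offsets of the same partitions; the group id and broker address are placeholders.

    import java.util.Map;
    import java.util.Properties;
    import java.util.stream.Collectors;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.ListOffsetsResult;
    import org.apache.kafka.clients.admin.OffsetSpec;
    import org.apache.kafka.clients.consumer.OffsetAndMetadata;
    import org.apache.kafka.common.TopicPartition;

    public class ConsumerLagCheck {
      public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
          // Offsets the group has committed, per partition.
          Map<TopicPartition, OffsetAndMetadata> committed =
              admin.listConsumerGroupOffsets("payments-service")
                  .partitionsToOffsetAndMetadata().get();

          // Latest (end) offsets for the same partitions.
          Map<TopicPartition, OffsetSpec> latestSpec = committed.keySet().stream()
              .collect(Collectors.toMap(tp -> tp, tp -> OffsetSpec.latest()));
          Map<TopicPartition, ListOffsetsResult.ListOffsetsResultInfo> latest =
              admin.listOffsets(latestSpec).all().get();

          committed.forEach((tp, meta) -> {
            long lag = latest.get(tp).offset() - meta.offset();
            System.out.printf("%s lag=%d%n", tp, lag);
          });
        }
      }
    }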