Overview
The Mojo for AI/ML Engineering course is designed to enable Artificial Intelligence and Machine Learning engineers to build high-performance AI pipelines, algorithms, and systems using the Mojo language.
The course covers Mojo both as a replacement for and a complement to Python in performance-critical scenarios. It explores intensive numerical computing, parallelism, SIMD, memory control, and integration with AI frameworks, showing how to extract the most from modern hardware for large-scale Machine Learning, Deep Learning, and Data Science workloads.
Syllabus
Module 1 – Mojo for AI/ML Engineers – Foundations
- Introduction to Mojo for AI workloads
- Why Mojo for Machine Learning performance
- Mojo vs Python in AI pipelines
- Overview of Mojo execution model
- Use cases in ML and Deep Learning
Module 2 – Numeric Computing with Mojo
- Numeric types and precision
- Vector and matrix representations
- Efficient array operations
- Memory layout for numerical data
- Performance considerations
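As a taste of the typed numeric code covered in Module 2, the sketch below shows a dot product with explicit numeric types. It assumes a recent Mojo toolchain; `InlineArray` and its variadic constructor are taken from Mojo's standard library as of recent releases, so treat the exact names as assumptions rather than a fixed reference.

```mojo
# Dot product over fixed-size arrays with explicit numeric types.
# InlineArray is assumed from recent Mojo standard-library releases.

fn dot(a: InlineArray[Float32, 4], b: InlineArray[Float32, 4]) -> Float32:
    var acc: Float32 = 0.0
    for i in range(4):
        acc += a[i] * b[i]  # elementwise multiply-accumulate
    return acc

fn main():
    var a = InlineArray[Float32, 4](1.0, 2.0, 3.0, 4.0)
    var b = InlineArray[Float32, 4](1.0, 1.0, 1.0, 1.0)
    print(dot(a, b))
```

Because every type is known at compile time, the compiler can keep the accumulator in a register and unroll the loop, which is the starting point for the performance considerations discussed in this module.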
Module 3 – Data Processing Pipelines
- High-performance data ingestion
- Batch processing optimization
- Feature engineering with Mojo
- Data transformation pipelines
- Zero-copy data handling
Module 4 – Memory Management for ML
- Stack vs heap in AI workloads
- Buffer reuse strategies
- Memory ownership for tensors
- Avoiding unnecessary allocations
- Memory optimization patterns
Module 5 – Parallel Programming for ML
- Data parallelism strategies
- Task parallelism in ML pipelines
- Parallel preprocessing
- Scaling across CPU cores
- Safe parallel execution
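A minimal sketch of the data-parallelism pattern from Module 5, using `parallelize` from Mojo's `algorithm` module. The API names (`parallelize`, `UnsafePointer.alloc`) are assumed from recent Mojo releases and may differ in older toolchains; the point is the shape of the pattern, not a definitive implementation.

```mojo
# Data-parallel map across CPU cores with algorithm.parallelize.
# Each work item writes to a distinct output slot, so no locking is needed:
# safety comes from partitioning the data, not from synchronization.
from algorithm import parallelize
from memory import UnsafePointer

fn main():
    alias n = 8
    var out = UnsafePointer[Float64].alloc(n)

    @parameter
    fn square(i: Int):
        # Work item i touches only out[i].
        out[i] = Float64(i) * Float64(i)

    parallelize[square](n)  # distribute n work items across available cores

    for i in range(n):
        print(out[i])
    out.free()
```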
Module 6 – SIMD for Machine Learning
- Vectorized numerical operations
- SIMD acceleration for tensors
- Loop vectorization techniques
- Reducing branch divergence
- SIMD performance tuning
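The vectorized operations listed in Module 6 build on Mojo's first-class `SIMD` type. The sketch below shows the core idea under recent Mojo semantics: arithmetic on a `SIMD` value applies to all lanes in a single operation, and `reduce_add` performs a horizontal reduction, a pattern common in tensor kernels.

```mojo
# Elementwise SIMD arithmetic: four float32 lanes computed per operation.
fn main():
    var a = SIMD[DType.float32, 4](1.0, 2.0, 3.0, 4.0)
    var w = SIMD[DType.float32, 4](0.5, 0.5, 0.5, 0.5)
    var r = a * w + w        # multiply-add across all four lanes at once
    print(r)                 # vector of per-lane results
    print(r.reduce_add())    # horizontal sum down to a scalar
```

In real kernels the lane width is usually chosen to match the target hardware rather than hard-coded, which is where the loop-vectorization and tuning topics above come in.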
Module 7 – Mojo for Model Training
- Accelerating training loops
- Optimizing loss calculations
- Gradient computation optimization
- Batch and mini-batch processing
- Training performance benchmarks
Module 8 – Mojo for Model Inference
- Low-latency inference pipelines
- Batch vs real-time inference
- Memory-efficient inference
- Throughput optimization
- CPU-based inference acceleration
Module 9 – Integration with Python and ML Frameworks
- Calling Mojo from Python
- Hybrid Python + Mojo architectures
- Using Mojo inside existing ML projects
- Gradual migration strategies
- Best practices for interoperability
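Interoperability in Module 9 runs in both directions; the sketch below shows the Python-from-Mojo direction using `Python.import_module` from Mojo's standard interop API. It assumes a Python environment with numpy installed and a recent Mojo toolchain.

```mojo
# Calling into the Python ecosystem from Mojo.
# The numpy objects below live in the CPython interpreter; Mojo holds
# references to them and forwards the method calls.
from python import Python

fn main() raises:
    var np = Python.import_module("numpy")
    var x = np.arange(10)
    print(x.sum())  # computed by numpy, printed from Mojo
```

This hybrid pattern lets an existing Python ML project stay intact while hot paths are gradually moved into Mojo, which is the migration strategy the module develops.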
Module 10 – Performance Benchmarking
- Measuring training performance
- Measuring inference latency
- Python vs Mojo benchmarks
- Profiling CPU and memory usage
- Interpreting performance results
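A minimal timing harness of the kind used in Module 10, based on Mojo's standard `time` module. `perf_counter_ns` is assumed from recent Mojo releases (older versions exposed `time.now()` instead), so adjust the import to your toolchain.

```mojo
# Wall-clock timing of a hot loop with the standard time module.
from time import perf_counter_ns

fn main():
    var start = perf_counter_ns()
    var acc: Float64 = 0.0
    for i in range(1_000_000):
        acc += Float64(i)
    var elapsed_ms = Float64(perf_counter_ns() - start) / 1e6
    print("sum:", acc)
    print("elapsed (ms):", elapsed_ms)
```

The same harness applied to a Python baseline and a Mojo rewrite of the same loop gives the head-to-head numbers the module teaches you to interpret.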
Module 11 – Real-World AI Use Cases
- Large-scale data processing
- Classical ML optimization
- Deep Learning preprocessing
- AI pipelines at scale
- Industry case studies
Module 12 – Final Project
- End-to-end ML pipeline in Python
- Identification of performance bottlenecks
- Partial rewrite using Mojo
- Parallel and SIMD optimization
- Final benchmark and technical report