Welcome to AI Engineering & MLOps! 🎓
This curriculum is designed to take you from core understanding to confident delivery through weekly applied practice, measurable outcomes, and portfolio evidence.
Each week builds progressively with practical tasks, implementation checkpoints, and reflection points so you can convert knowledge into repeatable professional performance.
Your success is our priority. Stay consistent with weekly execution, document your work, and use feedback loops to continuously improve your delivery quality.
Detailed Weekly Curriculum
Python Environments, Notebooks, and Reproducible Workspaces
- Analyze the principles of Python Environments, Notebooks, and Reproducible Workspaces and connect them to the course outcomes with architecture-level rigor.
- Evaluate Python Environments, Notebooks, and Reproducible Workspaces in a guided scenario using realistic tools, constraints, and quality gates.
- Map the trade-offs, risks, and decision points for Python Environments, Notebooks, and Reproducible Workspaces, then record your rationale for stakeholder review.
- Deliver and justify a portfolio-ready evaluation brief for Python Environments, Notebooks, and Reproducible Workspaces with measurable success criteria and next actions.
- Track measurable progress using rubric scores, defect/risk trends, and evidence completeness each week.
- Run a short retrospective focused on what to retain, improve, and scale into the following week.
- Incorporate peer or mentor feedback and revise the week deliverable to professional publication quality.
- Publish the week output into your cumulative portfolio with concise outcome narrative and proof artifacts.
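To make the reproducibility objective concrete, here is a minimal sketch that records the interpreter version, installed package versions, and a fixed random seed into a manifest file. The file name and seed value are illustrative choices, not course requirements.

```python
import json
import platform
import random
import sys
from importlib import metadata

SEED = 42  # illustrative seed; any fixed value works as long as it is recorded

def capture_environment(path="environment_manifest.json"):
    """Write interpreter, platform, and package versions to a manifest file."""
    manifest = {
        "python_version": sys.version,
        "platform": platform.platform(),
        "packages": {dist.metadata["Name"]: dist.version
                     for dist in metadata.distributions()},
        "seed": SEED,
    }
    with open(path, "w") as fh:
        json.dump(manifest, fh, indent=2, sort_keys=True)
    return manifest

if __name__ == "__main__":
    random.seed(SEED)      # seed the standard-library RNG for repeatable runs
    capture_environment()  # record what this run actually used
```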
Data Collection, Labeling, and Dataset Hygiene
- Analyze the principles of Data Collection, Labeling, and Dataset Hygiene and connect them to the course outcomes with architecture-level rigor.
- Evaluate Data Collection, Labeling, and Dataset Hygiene in a guided scenario using realistic tools, constraints, and quality gates.
- Map the trade-offs, risks, and decision points for Data Collection, Labeling, and Dataset Hygiene, then record your rationale for stakeholder review.
- Deliver and justify a portfolio-ready evaluation brief for Data Collection, Labeling, and Dataset Hygiene with measurable success criteria and next actions.
- Track measurable progress using rubric scores, defect/risk trends, and evidence completeness each week.
- Run a short retrospective focused on what to retain, improve, and scale into the following week.
- Incorporate peer or mentor feedback and revise the week deliverable to professional publication quality.
- Publish the week output into your cumulative portfolio with concise outcome narrative and proof artifacts.
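A first hygiene pass usually checks for duplicate rows, missing values, and label balance. The sketch below assumes a pandas DataFrame with a column named `label`; both the column name and the toy data are placeholders.

```python
import pandas as pd

def hygiene_report(df: pd.DataFrame, label_col: str = "label") -> dict:
    """Basic dataset hygiene checks: duplicates, missing values, label balance."""
    return {
        "n_rows": len(df),
        "n_duplicate_rows": int(df.duplicated().sum()),
        "missing_per_column": df.isna().sum().to_dict(),
        "label_distribution": df[label_col].value_counts(normalize=True).to_dict(),
    }

# Toy example; real data would come from your collection and labeling steps.
df = pd.DataFrame({"text": ["a", "b", "b", None], "label": [0, 1, 1, 1]})
print(hygiene_report(df))
```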
Exploratory Data Analysis and Feature Quality Checks
- Analyze the principles of Exploratory Data Analysis and Feature Quality Checks and connect them to the course outcomes with architecture-level rigor.
- Evaluate Exploratory Data Analysis and Feature Quality Checks in a guided scenario using realistic tools, constraints, and quality gates.
- Map the trade-offs, risks, and decision points for Exploratory Data Analysis and Feature Quality Checks, then record your rationale for stakeholder review.
- Deliver and justify a portfolio-ready evaluation brief for Exploratory Data Analysis and Feature Quality Checks with measurable success criteria and next actions.
- Track measurable progress using rubric scores, defect/risk trends, and evidence completeness each week.
- Run a short retrospective focused on what to retain, improve, and scale into the following week.
- Incorporate peer or mentor feedback and revise the week deliverable to professional publication quality.
- Publish the week output into your cumulative portfolio with concise outcome narrative and proof artifacts.
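One way to operationalize feature quality checks is to flag columns that are constant, mostly missing, or extremely high-cardinality. The thresholds below are illustrative defaults, not standards.

```python
import pandas as pd

def feature_quality_flags(df: pd.DataFrame,
                          max_missing: float = 0.2,
                          max_cardinality: int = 100) -> pd.DataFrame:
    """Flag columns that are constant, mostly missing, or very high-cardinality."""
    rows = []
    for col in df.columns:
        series = df[col]
        n_unique = int(series.nunique(dropna=True))
        missing_rate = float(series.isna().mean())
        rows.append({
            "column": col,
            "missing_rate": missing_rate,
            "n_unique": n_unique,
            "is_constant": n_unique <= 1,
            "too_missing": missing_rate > max_missing,
            "too_many_levels": series.dtype == object and n_unique > max_cardinality,
        })
    return pd.DataFrame(rows)
```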
Feature Engineering and Dataset Splitting Strategy
- Analyze the principles of Feature Engineering and Dataset Splitting Strategy and connect them to the course outcomes with architecture-level rigor.
- Evaluate Feature Engineering and Dataset Splitting Strategy in a guided scenario using realistic tools, constraints, and quality gates.
- Map the trade-offs, risks, and decision points for Feature Engineering and Dataset Splitting Strategy, then record your rationale for stakeholder review.
- Deliver and justify a portfolio-ready evaluation brief for Feature Engineering and Dataset Splitting Strategy with measurable success criteria and next actions.
- Track measurable progress using rubric scores, defect/risk trends, and evidence completeness each week.
- Run a short retrospective focused on what to retain, improve, and scale into the following week.
- Incorporate peer or mentor feedback and revise the week deliverable to professional publication quality.
- Publish the week output into your cumulative portfolio with concise outcome narrative and proof artifacts.
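As a concrete splitting strategy, the sketch below produces stratified train/validation/test partitions with scikit-learn; the 60/20/20 proportions and the seed are assumptions you would tune to your dataset.

```python
from sklearn.model_selection import train_test_split

def stratified_three_way_split(X, y, test_size=0.2, val_size=0.2, seed=42):
    """Split into train/validation/test while preserving class proportions."""
    X_trainval, X_test, y_trainval, y_test = train_test_split(
        X, y, test_size=test_size, stratify=y, random_state=seed)
    # The validation fraction is relative to what remains after the test split.
    rel_val = val_size / (1.0 - test_size)
    X_train, X_val, y_train, y_val = train_test_split(
        X_trainval, y_trainval, test_size=rel_val,
        stratify=y_trainval, random_state=seed)
    return X_train, X_val, X_test, y_train, y_val, y_test
```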
Baseline Models and Classical Machine Learning Workflow
- Analyze the principles of Baseline Models and Classical Machine Learning Workflow and connect them to the course outcomes with architecture-level rigor.
- Evaluate Baseline Models and Classical Machine Learning Workflow in a guided scenario using realistic tools, constraints, and quality gates.
- Map the trade-offs, risks, and decision points for Baseline Models and Classical Machine Learning Workflow, then record your rationale for stakeholder review.
- Deliver and justify a portfolio-ready evaluation brief for Baseline Models and Classical Machine Learning Workflow with measurable success criteria and next actions.
- Track measurable progress using rubric scores, defect/risk trends, and evidence completeness each week.
- Run a short retrospective focused on what to retain, improve, and scale into the following week.
- Incorporate peer or mentor feedback and revise the week deliverable to professional publication quality.
- Publish the week output into your cumulative portfolio with concise outcome narrative and proof artifacts.
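A baseline only has value if something trivial is measured alongside the first real model. This sketch compares a majority-class DummyClassifier to a simple scaled logistic regression using cross-validation; accuracy is used only for brevity.

```python
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def compare_to_baseline(X, y, cv=5):
    """Cross-validated accuracy for a trivial baseline and a simple model."""
    baseline = DummyClassifier(strategy="most_frequent")
    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    return {
        "baseline_accuracy": cross_val_score(baseline, X, y, cv=cv).mean(),
        "model_accuracy": cross_val_score(model, X, y, cv=cv).mean(),
    }
```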
Model Evaluation, Metrics, and Error Analysis
- Analyze the principles of Model Evaluation, Metrics, and Error Analysis and connect them to the course outcomes with architecture-level rigor.
- Evaluate Model Evaluation, Metrics, and Error Analysis in a guided scenario using realistic tools, constraints, and quality gates.
- Map the trade-offs, risks, and decision points for Model Evaluation, Metrics, and Error Analysis, then record your rationale for stakeholder review.
- Deliver and justify a portfolio-ready evaluation brief for Model Evaluation, Metrics, and Error Analysis with measurable success criteria and next actions.
- Track measurable progress using rubric scores, defect/risk trends, and evidence completeness each week.
- Run a short retrospective focused on what to retain, improve, and scale into the following week.
- Incorporate peer or mentor feedback and revise the week deliverable to professional publication quality.
- Publish the week output into your cumulative portfolio with concise outcome narrative and proof artifacts.
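Error analysis becomes actionable when metrics are broken down by slice. The sketch below combines scikit-learn's classification report and confusion matrix with a per-slice error rate; the slice labels (for example, region or data source) are whatever grouping your problem provides.

```python
import pandas as pd
from sklearn.metrics import classification_report, confusion_matrix

def evaluate_with_slices(y_true, y_pred, slices):
    """Overall metrics plus error rate per data slice."""
    report = classification_report(y_true, y_pred, output_dict=True)
    matrix = confusion_matrix(y_true, y_pred)
    frame = pd.DataFrame({
        "slice": list(slices),
        "error": [t != p for t, p in zip(y_true, y_pred)],
    })
    per_slice_error = (frame.groupby("slice")["error"]
                            .mean()
                            .sort_values(ascending=False))
    return report, matrix, per_slice_error
```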
Hyperparameter Tuning and Experiment Tracking
- Analyze the principles of Hyperparameter Tuning and Experiment Tracking and connect them to the course outcomes with architecture-level rigor.
- Evaluate Hyperparameter Tuning and Experiment Tracking in a guided scenario using realistic tools, constraints, and quality gates.
- Map the trade-offs, risks, and decision points for Hyperparameter Tuning and Experiment Tracking, then record your rationale for stakeholder review.
- Deliver and justify a portfolio-ready evaluation brief for Hyperparameter Tuning and Experiment Tracking with measurable success criteria and next actions.
- Track measurable progress using rubric scores, defect/risk trends, and evidence completeness each week.
- Run a short retrospective focused on what to retain, improve, and scale into the following week.
- Incorporate peer or mentor feedback and revise the week deliverable to professional publication quality.
- Publish the week output into your cumulative portfolio with concise outcome narrative and proof artifacts.
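Even without a tracking server, every trial should leave a record. The sketch below runs a small scikit-learn grid search and appends each trial's parameters and score to a JSON-lines log; the parameter grid, scoring metric, and log file name are illustrative.

```python
import json
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

def tune_and_log(X, y, log_path="experiments.jsonl"):
    """Grid-search a small space and log every trial's params and mean score."""
    grid = GridSearchCV(
        RandomForestClassifier(random_state=42),
        param_grid={"n_estimators": [100, 300], "max_depth": [None, 10]},
        cv=3, scoring="f1_macro")
    grid.fit(X, y)
    with open(log_path, "a") as fh:
        for params, score in zip(grid.cv_results_["params"],
                                 grid.cv_results_["mean_test_score"]):
            fh.write(json.dumps({"params": params,
                                 "mean_f1_macro": float(score)}) + "\n")
    return grid.best_params_, grid.best_score_
```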
Packaging Models for Reuse and Deployment
- Analyze the principles of Packaging Models for Reuse and Deployment and connect them to the course outcomes with architecture-level rigor.
- Evaluate Packaging Models for Reuse and Deployment in a guided scenario using realistic tools, constraints, and quality gates.
- Map the trade-offs, risks, and decision points for Packaging Models for Reuse and Deployment, then record your rationale for stakeholder review.
- Deliver and justify a portfolio-ready evaluation brief for Packaging Models for Reuse and Deployment with measurable success criteria and next actions.
- Track measurable progress using rubric scores, defect/risk trends, and evidence completeness each week.
- Run a short retrospective focused on what to retain, improve, and scale into the following week.
- Incorporate peer or mentor feedback and revise the week deliverable to professional publication quality.
- Publish the week output into your cumulative portfolio with concise outcome narrative and proof artifacts.
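Packaging means shipping the fitted model together with the metadata a consumer needs to trust it. This sketch saves a model with joblib next to a small JSON "model card"; the directory layout and field names are assumptions.

```python
import json
import os
from datetime import datetime, timezone

import joblib

def package_model(model, metrics: dict, out_dir="artifacts"):
    """Save a fitted model plus the metadata a consumer needs to reuse it."""
    os.makedirs(out_dir, exist_ok=True)
    joblib.dump(model, f"{out_dir}/model.joblib")
    metadata = {
        "created_at": datetime.now(timezone.utc).isoformat(),
        "model_class": type(model).__name__,
        "metrics": metrics,
    }
    with open(f"{out_dir}/model_card.json", "w") as fh:
        json.dump(metadata, fh, indent=2)
```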
Data Versioning and Artifact Management
- Evaluate the principles of Data Versioning and Artifact Management and connect them to the course outcomes with architecture-level rigor.
- Design a working approach to Data Versioning and Artifact Management in a guided scenario using realistic tools, constraints, and quality gates.
- Resolve the trade-offs, risks, and decision points for Data Versioning and Artifact Management, then record your rationale for stakeholder review.
- Deliver and justify a portfolio-ready evaluation brief for Data Versioning and Artifact Management with measurable success criteria and next actions.
- Track measurable progress using rubric scores, defect/risk trends, and evidence completeness each week.
- Run a short retrospective focused on what to retain, improve, and scale into the following week.
- Incorporate peer or mentor feedback and revise the week deliverable to professional publication quality.
- Publish the week output into your cumulative portfolio with concise outcome narrative and proof artifacts.
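The core idea behind data versioning tools is a content hash per file, so any change to the dataset is detectable. The sketch below writes such a manifest with the standard library; it illustrates the principle and is not the API of DVC or any specific tool.

```python
import hashlib
import json
from pathlib import Path

def write_data_manifest(data_dir="data", manifest_path="data_manifest.json"):
    """Record a content hash and size per file so dataset changes are detectable."""
    manifest = {}
    for path in sorted(Path(data_dir).rglob("*")):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            manifest[str(path)] = {"sha256": digest, "bytes": path.stat().st_size}
    Path(manifest_path).write_text(json.dumps(manifest, indent=2))
    return manifest
```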
MLflow for Experiment Tracking and Model Registry
- Evaluate the principles of MLflow for Experiment Tracking and Model Registry and connect them to the course outcomes with architecture-level rigor.
- Design a working MLflow setup for experiment tracking and model registry use in a guided scenario using realistic tools, constraints, and quality gates.
- Resolve the trade-offs, risks, and decision points for MLflow for Experiment Tracking and Model Registry, then record your rationale for stakeholder review.
- Deliver and justify a portfolio-ready evaluation brief for MLflow for Experiment Tracking and Model Registry with measurable success criteria and next actions.
- Track measurable progress using rubric scores, defect/risk trends, and evidence completeness each week.
- Run a short retrospective focused on what to retain, improve, and scale into the following week.
- Incorporate peer or mentor feedback and revise the week deliverable to professional publication quality.
- Publish the week output into your cumulative portfolio with concise outcome narrative and proof artifacts.
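A minimal MLflow run looks like the sketch below: set an experiment, open a run, and log parameters, metrics, and the model artifact. The experiment name, dataset, and hyperparameter values are arbitrary examples, and the tracking URI is left at MLflow's local default.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

mlflow.set_experiment("demo-experiment")  # experiment name is an arbitrary example

X, y = load_iris(return_X_y=True)
with mlflow.start_run():
    model = LogisticRegression(max_iter=500, C=0.5)
    model.fit(X, y)
    mlflow.log_param("C", 0.5)
    mlflow.log_metric("train_accuracy", accuracy_score(y, model.predict(X)))
    mlflow.sklearn.log_model(model, "model")  # stored under the run's artifacts
```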
Containerizing Training and Inference Workloads
- Evaluate the principles of Containerizing Training and Inference Workloads and connect them to the course outcomes with architecture-level rigor.
- Design a working approach to Containerizing Training and Inference Workloads in a guided scenario using realistic tools, constraints, and quality gates.
- Resolve the trade-offs, risks, and decision points for Containerizing Training and Inference Workloads, then record your rationale for stakeholder review.
- Deliver and justify a portfolio-ready evaluation brief for Containerizing Training and Inference Workloads with measurable success criteria and next actions.
- Track measurable progress using rubric scores, defect/risk trends, and evidence completeness each week.
- Run a short retrospective focused on what to retain, improve, and scale into the following week.
- Incorporate peer or mentor feedback and revise the week deliverable to professional publication quality.
- Publish the week output into your cumulative portfolio with concise outcome narrative and proof artifacts.
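Independent of the Dockerfile itself, a container-friendly workload reads all of its configuration from environment variables so the same image can be reused across environments. The variable names and defaults below are assumptions for illustration.

```python
import json
import os

def load_config_from_env() -> dict:
    """Read runtime configuration from environment variables, the usual pattern
    for containerized jobs (values are supplied at container start time)."""
    return {
        "data_path": os.environ.get("DATA_PATH", "/data/train.csv"),
        "model_dir": os.environ.get("MODEL_DIR", "/models"),
        "learning_rate": float(os.environ.get("LEARNING_RATE", "0.1")),
        "epochs": int(os.environ.get("EPOCHS", "10")),
    }

if __name__ == "__main__":
    config = load_config_from_env()
    print(json.dumps(config, indent=2))  # a real entrypoint would start training here
```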
Serving Models with FastAPI or Inference APIs
- Evaluate the principles of Serving Models with FastAPI or Inference APIs and connect them to the course outcomes with architecture-level rigor.
- Design a working approach to Serving Models with FastAPI or Inference APIs in a guided scenario using realistic tools, constraints, and quality gates.
- Resolve the trade-offs, risks, and decision points for Serving Models with FastAPI or Inference APIs, then record your rationale for stakeholder review.
- Deliver and justify a portfolio-ready evaluation brief for Serving Models with FastAPI or Inference APIs with measurable success criteria and next actions.
- Track measurable progress using rubric scores, defect/risk trends, and evidence completeness each week.
- Run a short retrospective focused on what to retain, improve, and scale into the following week.
- Incorporate peer or mentor feedback and revise the week deliverable to professional publication quality.
- Publish the week output into your cumulative portfolio with concise outcome narrative and proof artifacts.
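A minimal FastAPI inference service can be this small: load the packaged model at startup, validate the request with a Pydantic model, and return a typed response. The model path, feature layout, and module name are hypothetical.

```python
from typing import List

import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="demo-model-service")
model = joblib.load("artifacts/model.joblib")  # hypothetical path from the packaging week

class PredictRequest(BaseModel):
    features: List[float]  # one flat feature vector per request

class PredictResponse(BaseModel):
    prediction: float

@app.post("/predict", response_model=PredictResponse)
def predict(request: PredictRequest) -> PredictResponse:
    """Score a single feature vector with the loaded model."""
    value = model.predict([request.features])[0]
    return PredictResponse(prediction=float(value))

# Run locally with: uvicorn serve:app --reload   (assuming this file is serve.py)
```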
Batch Inference versus Real-Time Inference
- Evaluate the principles of Batch Inference versus Real-Time Inference and connect them to the course outcomes with architecture-level rigor.
- Design a working approach to choosing between Batch Inference and Real-Time Inference in a guided scenario using realistic tools, constraints, and quality gates.
- Resolve the trade-offs, risks, and decision points for Batch Inference versus Real-Time Inference, then record your rationale for stakeholder review.
- Deliver and justify a portfolio-ready evaluation brief for Batch Inference versus Real-Time Inference with measurable success criteria and next actions.
- Track measurable progress using rubric scores, defect/risk trends, and evidence completeness each week.
- Run a short retrospective focused on what to retain, improve, and scale into the following week.
- Incorporate peer or mentor feedback and revise the week deliverable to professional publication quality.
- Publish the week output into your cumulative portfolio with concise outcome narrative and proof artifacts.
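For contrast with the request/response API above, batch inference scores an entire file in one job. The sketch assumes the input CSV contains only feature columns; all paths are placeholders.

```python
import joblib
import pandas as pd

def score_batch(input_path="inference_input.csv",
                output_path="predictions.csv",
                model_path="artifacts/model.joblib") -> int:
    """Batch inference: read a whole file, score every row, write the results.
    Contrast with the real-time API, which scores one request at a time."""
    model = joblib.load(model_path)
    frame = pd.read_csv(input_path)          # assumed to contain only feature columns
    frame["prediction"] = model.predict(frame.values)
    frame.to_csv(output_path, index=False)
    return len(frame)
```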
Workflow Orchestration with Airflow or Kubeflow
- Evaluate the principles of Workflow Orchestration with Airflow or Kubeflow and connect them to the course outcomes with architecture-level rigor.
- Design a working approach to Workflow Orchestration with Airflow or Kubeflow in a guided scenario using realistic tools, constraints, and quality gates.
- Resolve the trade-offs, risks, and decision points for Workflow Orchestration with Airflow or Kubeflow, then record your rationale for stakeholder review.
- Deliver and justify a portfolio-ready evaluation brief for Workflow Orchestration with Airflow or Kubeflow with measurable success criteria and next actions.
- Track measurable progress using rubric scores, defect/risk trends, and evidence completeness each week.
- Run a short retrospective focused on what to retain, improve, and scale into the following week.
- Incorporate peer or mentor feedback and revise the week deliverable to professional publication quality.
- Publish the week output into your cumulative portfolio with concise outcome narrative and proof artifacts.
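A toy Airflow DAG using the TaskFlow API is sketched below, assuming Airflow 2.x (older releases use `schedule_interval` instead of `schedule`). The task bodies are placeholders for the real validation, training, and evaluation steps.

```python
from datetime import datetime

from airflow.decorators import dag, task

@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def training_pipeline():
    """Toy daily pipeline: validate data, train, then evaluate."""

    @task
    def validate_data() -> str:
        return "data/validated.csv"        # placeholder for a real validation step

    @task
    def train_model(data_path: str) -> str:
        return "artifacts/model.joblib"    # placeholder for a real training step

    @task
    def evaluate(model_path: str) -> None:
        print(f"evaluating {model_path}")  # placeholder for real metric reporting

    evaluate(train_model(validate_data())) # dependencies follow the data flow

training_pipeline()
```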
CI/CD for Machine Learning Pipelines
- Evaluate the principles of CI/CD for Machine Learning Pipelines and connect them to the course outcomes with architecture-level rigor.
- Design a working approach to CI/CD for Machine Learning Pipelines in a guided scenario using realistic tools, constraints, and quality gates.
- Resolve the trade-offs, risks, and decision points for CI/CD for Machine Learning Pipelines, then record your rationale for stakeholder review.
- Deliver and justify a portfolio-ready evaluation brief for CI/CD for Machine Learning Pipelines with measurable success criteria and next actions.
- Track measurable progress using rubric scores, defect/risk trends, and evidence completeness each week.
- Run a short retrospective focused on what to retain, improve, and scale into the following week.
- Incorporate peer or mentor feedback and revise the week deliverable to professional publication quality.
- Publish the week output into your cumulative portfolio with concise outcome narrative and proof artifacts.
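The CI configuration itself is YAML and tool-specific, but the quality gate it enforces can be a plain script: fail the build when the candidate model misses a metric threshold. The metric name, file, and threshold below are assumptions.

```python
import json
import sys

THRESHOLD = 0.80  # illustrative minimum acceptable score for promotion

def gate(metrics_path="metrics.json", metric_name="f1_macro") -> int:
    """Return a non-zero exit code when the candidate misses the bar, so CI fails."""
    with open(metrics_path) as fh:
        metrics = json.load(fh)
    score = metrics[metric_name]
    print(f"{metric_name}={score:.4f} (threshold {THRESHOLD})")
    return 0 if score >= THRESHOLD else 1

if __name__ == "__main__":
    sys.exit(gate())
```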
Automated Testing for Data Pipelines and Models
- Evaluate the principles of Automated Testing for Data Pipelines and Models and connect them to the course outcomes with architecture-level rigor.
- Design a working approach to Automated Testing for Data Pipelines and Models in a guided scenario using realistic tools, constraints, and quality gates.
- Resolve the trade-offs, risks, and decision points for Automated Testing for Data Pipelines and Models, then record your rationale for stakeholder review.
- Deliver and justify a portfolio-ready evaluation brief for Automated Testing for Data Pipelines and Models with measurable success criteria and next actions.
- Track measurable progress using rubric scores, defect/risk trends, and evidence completeness each week.
- Run a short retrospective focused on what to retain, improve, and scale into the following week.
- Incorporate peer or mentor feedback and revise the week deliverable to professional publication quality.
- Publish the week output into your cumulative portfolio with concise outcome narrative and proof artifacts.
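Automated tests for pipelines and models tend to start with schema checks and prediction sanity checks. The pytest sketch below uses an assumed two-feature schema and a DummyClassifier stand-in for the real model.

```python
import pandas as pd
import pytest
from sklearn.dummy import DummyClassifier

EXPECTED_COLUMNS = {"feature_a", "feature_b", "label"}  # assumed schema for the example

@pytest.fixture
def sample_frame():
    return pd.DataFrame({"feature_a": [0.1, 0.2],
                         "feature_b": [1.0, 2.0],
                         "label": [0, 1]})

def test_schema_has_expected_columns(sample_frame):
    assert EXPECTED_COLUMNS.issubset(sample_frame.columns)

def test_model_predictions_match_input_length(sample_frame):
    X = sample_frame[["feature_a", "feature_b"]]
    y = sample_frame["label"]
    model = DummyClassifier(strategy="most_frequent").fit(X, y)
    assert len(model.predict(X)) == len(X)
```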
Feature Stores and Reusable Data Pipelines
- Translate the principles of Feature Stores and Reusable Data Pipelines into design decisions and link them to the course outcomes with architecture-level rigor.
- Optimize a working implementation of Feature Stores and Reusable Data Pipelines in a guided scenario using realistic tools, constraints, and quality gates.
- Settle the trade-offs, risks, and decision points for Feature Stores and Reusable Data Pipelines at an architectural level, then record your rationale for stakeholder review.
- Deliver and defend a portfolio-ready evaluation brief for Feature Stores and Reusable Data Pipelines with measurable success criteria and next actions.
- Track measurable progress using rubric scores, defect/risk trends, and evidence completeness each week.
- Run a short retrospective focused on what to retain, improve, and scale into the following week.
- Incorporate peer or mentor feedback and revise the week deliverable to professional publication quality.
- Publish the week output into your cumulative portfolio with concise outcome narrative and proof artifacts.
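The guarantee a feature store gives you is a point-in-time-correct join: each event only sees feature values that existed before it. The pandas `merge_asof` sketch below illustrates that idea with assumed column names (`entity_id`, `event_time`, `feature_time`); it is not the API of Feast or any specific feature store.

```python
import pandas as pd

def point_in_time_join(events: pd.DataFrame, features: pd.DataFrame,
                       key: str = "entity_id") -> pd.DataFrame:
    """For each event, attach the latest feature values known before the event time,
    which is the leakage-avoidance guarantee a feature store provides."""
    events = events.sort_values("event_time")
    features = features.sort_values("feature_time")
    return pd.merge_asof(events, features,
                         left_on="event_time", right_on="feature_time",
                         by=key, direction="backward")
```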
Monitoring Model Performance and Data Drift
- Translate the principles of Monitoring Model Performance and Data Drift into design decisions and link them to the course outcomes with architecture-level rigor.
- Optimize a working implementation of Monitoring Model Performance and Data Drift in a guided scenario using realistic tools, constraints, and quality gates.
- Settle the trade-offs, risks, and decision points for Monitoring Model Performance and Data Drift at an architectural level, then record your rationale for stakeholder review.
- Deliver and defend a portfolio-ready evaluation brief for Monitoring Model Performance and Data Drift with measurable success criteria and next actions.
- Track measurable progress using rubric scores, defect/risk trends, and evidence completeness each week.
- Run a short retrospective focused on what to retain, improve, and scale into the following week.
- Incorporate peer or mentor feedback and revise the week deliverable to professional publication quality.
- Publish the week output into your cumulative portfolio with concise outcome narrative and proof artifacts.
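A common drift signal is the Population Stability Index between the training distribution and live traffic. The sketch below computes PSI for one numeric feature; the bin count is a convention, and the frequently cited 0.2 alert threshold is a rule of thumb rather than a standard.

```python
import numpy as np

def population_stability_index(reference, current, bins=10, eps=1e-6):
    """PSI between a reference (training) sample and a current (production) sample.
    Values in `current` outside the reference range are ignored in this simple version."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_frac = np.histogram(reference, bins=edges)[0] / len(reference)
    cur_frac = np.histogram(current, bins=edges)[0] / len(current)
    ref_frac = np.clip(ref_frac, eps, None)  # avoid log(0) on empty bins
    cur_frac = np.clip(cur_frac, eps, None)
    return float(np.sum((cur_frac - ref_frac) * np.log(cur_frac / ref_frac)))
```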
Retraining Triggers and Model Lifecycle Governance
- Translate the principles of Retraining Triggers and Model Lifecycle Governance into design decisions and link them to the course outcomes with architecture-level rigor.
- Optimize a working implementation of Retraining Triggers and Model Lifecycle Governance in a guided scenario using realistic tools, constraints, and quality gates.
- Settle the trade-offs, risks, and decision points for Retraining Triggers and Model Lifecycle Governance at an architectural level, then record your rationale for stakeholder review.
- Deliver and defend a portfolio-ready evaluation brief for Retraining Triggers and Model Lifecycle Governance with measurable success criteria and next actions.
- Track measurable progress using rubric scores, defect/risk trends, and evidence completeness each week.
- Run a short retrospective focused on what to retain, improve, and scale into the following week.
- Incorporate peer or mentor feedback and revise the week deliverable to professional publication quality.
- Publish the week output into your cumulative portfolio with concise outcome narrative and proof artifacts.
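A retraining trigger is ultimately a small, auditable policy. The sketch below combines a drift score with a live-performance floor; both thresholds are placeholders that should come from the governance policy for the system.

```python
def should_retrain(drift_score: float, live_metric: float,
                   drift_threshold: float = 0.2, metric_floor: float = 0.75) -> bool:
    """Trigger retraining when input drift is high or live performance drops too low.
    Thresholds here are illustrative, not recommended defaults."""
    return drift_score > drift_threshold or live_metric < metric_floor

# Example: noticeable drift even though performance is still acceptable -> retrain.
print(should_retrain(drift_score=0.31, live_metric=0.82))  # True
```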
Cloud Training Jobs and Managed ML Services
- Translate the principles of Cloud Training Jobs and Managed ML Services into design decisions and link them to the course outcomes with architecture-level rigor.
- Optimize a working setup for Cloud Training Jobs and Managed ML Services in a guided scenario using realistic tools, constraints, and quality gates.
- Settle the trade-offs, risks, and decision points for Cloud Training Jobs and Managed ML Services at an architectural level, then record your rationale for stakeholder review.
- Deliver and defend a portfolio-ready evaluation brief for Cloud Training Jobs and Managed ML Services with measurable success criteria and next actions.
- Track measurable progress using rubric scores, defect/risk trends, and evidence completeness each week.
- Run a short retrospective focused on what to retain, improve, and scale into the following week.
- Incorporate peer or mentor feedback and revise the week deliverable to professional publication quality.
- Publish the week output into your cumulative portfolio with concise outcome narrative and proof artifacts.
Security, Secrets, and Access Control for ML Systems
- Translate the principles of Security, Secrets, and Access Control for ML Systems into design decisions and link them to the course outcomes with architecture-level rigor.
- Optimize a working implementation of Security, Secrets, and Access Control for ML Systems in a guided scenario using realistic tools, constraints, and quality gates.
- Settle the trade-offs, risks, and decision points for Security, Secrets, and Access Control for ML Systems at an architectural level, then record your rationale for stakeholder review.
- Deliver and defend a portfolio-ready evaluation brief for Security, Secrets, and Access Control for ML Systems with measurable success criteria and next actions.
- Track measurable progress using rubric scores, defect/risk trends, and evidence completeness each week.
- Run a short retrospective focused on what to retain, improve, and scale into the following week.
- Incorporate peer or mentor feedback and revise the week deliverable to professional publication quality.
- Publish the week output into your cumulative portfolio with concise outcome narrative and proof artifacts.
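One baseline control is that credentials never appear in source or notebooks; jobs read them from the environment (or a secret manager that injects them there) and fail fast when they are missing. The variable name below is a placeholder.

```python
import os

class MissingSecretError(RuntimeError):
    pass

def require_secret(name: str) -> str:
    """Fetch a secret from the environment and fail fast if it is absent,
    so credentials are never hardcoded in source or notebooks."""
    value = os.environ.get(name)
    if not value:
        raise MissingSecretError(f"environment variable {name} is not set")
    return value

# Example usage; the variable name stands in for whatever your secret manager injects.
# db_password = require_secret("DB_PASSWORD")
```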
Responsible AI, Bias, Explainability, and Auditability
- Translate the principles of Responsible AI, Bias, Explainability, and Auditability into design decisions and link them to the course outcomes with architecture-level rigor.
- Optimize working controls for Responsible AI, Bias, Explainability, and Auditability in a guided scenario using realistic tools, constraints, and quality gates.
- Settle the trade-offs, risks, and decision points for Responsible AI, Bias, Explainability, and Auditability at an architectural level, then record your rationale for stakeholder review.
- Deliver and defend a portfolio-ready evaluation brief for Responsible AI, Bias, Explainability, and Auditability with measurable success criteria and next actions.
- Track measurable progress using rubric scores, defect/risk trends, and evidence completeness each week.
- Run a short retrospective focused on what to retain, improve, and scale into the following week.
- Incorporate peer or mentor feedback and revise the week deliverable to professional publication quality.
- Publish the week output into your cumulative portfolio with concise outcome narrative and proof artifacts.
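A first-pass bias check is to compare selection rates across groups; a large gap is a prompt for deeper review, not a verdict. The sketch below computes per-group selection rates and the demographic-parity gap with pandas; the group labels are whatever protected or proxy attribute applies to your use case.

```python
import pandas as pd

def selection_rate_by_group(predictions, groups) -> pd.Series:
    """Share of positive predictions per group."""
    frame = pd.DataFrame({"pred": list(predictions), "group": list(groups)})
    return frame.groupby("group")["pred"].mean()

def demographic_parity_gap(predictions, groups) -> float:
    """Difference between the highest and lowest group selection rates."""
    rates = selection_rate_by_group(predictions, groups)
    return float(rates.max() - rates.min())
```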
Capstone Build: End-to-End MLOps Platform
- Translate the principles behind the Capstone Build: End-to-End MLOps Platform into design decisions and link them to the course outcomes with architecture-level rigor.
- Optimize the capstone's end-to-end MLOps platform in a guided scenario using realistic tools, constraints, and quality gates.
- Settle the trade-offs, risks, and decision points for the capstone platform at an architectural level, then record your rationale for stakeholder review.
- Deliver and defend a portfolio-ready evaluation brief for the capstone platform with measurable success criteria and next actions.
- Track measurable progress using rubric scores, defect/risk trends, and evidence completeness each week.
- Run a short retrospective focused on what to retain, improve, and scale into the following week.
- Incorporate peer or mentor feedback and revise the week deliverable to professional publication quality.
- Publish the week output into your cumulative portfolio with concise outcome narrative and proof artifacts.
Capstone Review, Hardening, and Stakeholder Handover
- Translate the principles of Capstone Review, Hardening, and Stakeholder Handover into design decisions and link them to the course outcomes with architecture-level rigor.
- Optimize the capstone review, hardening, and handover process in a guided scenario using realistic tools, constraints, and quality gates.
- Settle the trade-offs, risks, and decision points for Capstone Review, Hardening, and Stakeholder Handover at an architectural level, then record your rationale for stakeholder review.
- Deliver and defend a portfolio-ready evaluation brief for Capstone Review, Hardening, and Stakeholder Handover with measurable success criteria and next actions.
- Track measurable progress using rubric scores, defect/risk trends, and evidence completeness each week.
- Run a short retrospective focused on what to retain, improve, and scale into the following week.
- Incorporate peer or mentor feedback and revise the week deliverable to professional publication quality.
- Publish the week output into your cumulative portfolio with concise outcome narrative and proof artifacts.
Capstone Projects
Project 1: Reproducible Training Pipeline Build
Create a production-oriented training pipeline with tracked experiments and versioned artifacts.
- Pipeline code with data validation and reproducible training configuration
- Experiment tracking report with model comparison and metric thresholds
- Technical note on feature design, assumptions, and failure cases
Project 2: Model Serving and Monitoring Stack
Deploy a model inference service and implement monitoring for performance and drift detection.
- Inference API with model registry integration and deployment automation
- Observability dashboard for latency, reliability, and prediction quality
- Operations guide covering rollback, retraining triggers, and alert response
Project 3: Enterprise MLOps Handover Package
Deliver a full MLOps platform handover with architecture, governance, and roadmap priorities.
- End-to-end architecture pack with security and compliance controls
- Stakeholder presentation linking business goals to ML service objectives
- 90-day improvement roadmap for reliability, cost, and model quality