Learning Content
Module Overview
AI projects differ from traditional software projects in critical ways: requirements evolve as you learn from data, timelines are harder to predict, success metrics can be ambiguous, and technical uncertainty is higher. Standard waterfall or even agile methodologies need adaptation for AI.
This module teaches you how to successfully manage AI projects from kickoff through deployment, with specific focus on Malta business contexts. You'll learn how to scope AI projects realistically, manage stakeholder expectations, handle iterative development, and deliver value incrementally.
🔑 Key Concept: AI Projects Are Experimental
Unlike traditional software, where the feasibility of requirements is largely known upfront, AI projects involve experimentation. You won't know whether 85% accuracy is achievable until you try. Manage AI projects with an experimental mindset: hypotheses, tests, learning, iteration.
Why AI Projects Fail: Common Pitfalls
- Unclear Success Criteria: "Make it smarter" isn't measurable. Need specific KPIs (e.g., "Reduce fraud losses by 60%" or "Achieve 80%+ churn prediction accuracy")
- Unrealistic Timelines: Expecting production AI in 6 weeks when data prep alone needs 3 months
- Data Not Ready: Starting ML modeling before data quality, accessibility, and labeling are addressed
- Perfectionism: Waiting for 99% accuracy when 85% would deliver massive business value
- Lack of Domain Expertise: Data scientists working in isolation without business context, building technically sound but business-irrelevant models
- No Production Plan: Building great experimental models that never deploy to production due to integration challenges
The AI Project Lifecycle
Phase 1: Scoping & Planning (Weeks 1-2)
Activities:
- Define business problem and success metrics (specific, measurable)
- Assess data readiness (availability, quality, labels)
- Define MVP scope (minimum viable product) vs. future phases
- Identify stakeholders and form project team
- Create project charter with goals, constraints, and success criteria
Key Deliverable: Project charter document signed by executive sponsor, defining scope, success metrics, timeline, budget, and team
Phase 2: Data Preparation (Weeks 3-8, often 40-60% of project time)
Activities:
- Collect and centralize data from disparate sources
- Clean data (handle missing values, errors, outliers, duplicates)
- Label data if supervised learning (historical labels or expert annotation)
- Explore data patterns (EDA - exploratory data analysis)
- Create train/validation/test splits (see the sketch after this phase)
- Build data pipelines for automated refresh
Key Deliverable: Clean, labeled, accessible dataset ready for ML modeling + data quality report
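To make the cleaning and splitting steps concrete, here is a minimal sketch in Python; the file name, column names, and split ratios are illustrative assumptions, not details from the case study.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Load the consolidated raw data (hypothetical file and column names)
df = pd.read_csv("players_raw.csv")

# Basic cleaning: drop duplicates, drop rows without a churn label,
# and remove impossible negative deposit totals
df = df.drop_duplicates()
df = df.dropna(subset=["churned"])
df = df[df["total_deposits"] >= 0]

# Stratified 60/20/20 train/validation/test split so the churn rate
# is preserved in every split
train, temp = train_test_split(df, test_size=0.4, stratify=df["churned"], random_state=42)
valid, test = train_test_split(temp, test_size=0.5, stratify=temp["churned"], random_state=42)

print(f"train={len(train)}, validation={len(valid)}, test={len(test)}")
```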
Phase 3: Model Development & Experimentation (Weeks 9-14)
Activities:
- Select candidate ML algorithms appropriate for problem
- Engineer features (transform raw data into model inputs)
- Train initial models and evaluate performance (see the sketch after this phase)
- Iterate: try different algorithms, features, hyperparameters
- Validate models on holdout test data
- Document model performance, limitations, and recommendations
Key Deliverable: Trained model(s) meeting success criteria + model documentation + performance report
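A minimal sketch of the train-evaluate-iterate loop described above, assuming a cleaned tabular dataset with numeric features and a binary `churned` label (file and column names are hypothetical):

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("players_clean.csv")  # hypothetical output of the data preparation phase
X = df.drop(columns=["churned"])
y = df["churned"]

# Hold back a validation split for comparing candidates; the final test set
# should stay untouched until the chosen model is evaluated once at the end
X_train, X_valid, y_train, y_valid = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

candidates = {
    "logistic_regression (baseline)": LogisticRegression(max_iter=1000),
    "gradient_boosting": GradientBoostingClassifier(),
}
for name, model in candidates.items():
    model.fit(X_train, y_train)
    accuracy = accuracy_score(y_valid, model.predict(X_valid))
    print(f"{name}: {accuracy:.1%} validation accuracy")
```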
Phase 4: Integration & Deployment (Weeks 15-18)
Activities:
- Build APIs or batch pipelines for model serving (see the sketch after this phase)
- Integrate model with existing business systems
- Implement monitoring and logging
- Create user interfaces if needed (dashboards, alerts, workflows)
- Conduct user acceptance testing (UAT)
- Deploy to production with rollback plan
Key Deliverable: Production AI system integrated with business workflows + deployment documentation
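One common way to expose a trained model through an API is a small HTTP service. The sketch below uses FastAPI; the endpoint, feature names, and model file are assumptions for illustration, not the MAIA platform's actual interface.

```python
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("churn_model.joblib")  # hypothetical serialized model from Phase 3

class PlayerFeatures(BaseModel):
    session_frequency: float
    deposit_recency_days: float
    gameplay_diversity: float

@app.post("/predict")
def predict(features: PlayerFeatures):
    # Feature order must match the order used during training
    row = [[features.session_frequency,
            features.deposit_recency_days,
            features.gameplay_diversity]]
    churn_probability = float(model.predict_proba(row)[0][1])
    return {"churn_probability": churn_probability, "high_risk": churn_probability >= 0.5}
```

Run locally with `uvicorn serving:app --reload` (assuming the file is saved as `serving.py`); in production, monitoring and logging would wrap this endpoint.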
Phase 5: Monitoring & Iteration (Ongoing)
Activities:
- Monitor model performance in production (accuracy, latency, errors)
- Detect model drift (performance degradation over time; see the sketch after this phase)
- Collect user feedback and business impact metrics
- Retrain models periodically with new data
- Iterate based on learnings (new features, algorithm improvements)
Key Deliverable: Ongoing performance reports + model update cadence + continuous improvement roadmap
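A minimal drift check, assuming predictions and eventual outcomes are logged so that recent production accuracy can be compared against the accuracy measured at deployment; the file name, columns, and thresholds are illustrative.

```python
import pandas as pd

BASELINE_ACCURACY = 0.84   # accuracy measured on the held-out test set at deployment
DRIFT_TOLERANCE = 0.05     # trigger retraining if production accuracy drops by more than this

# Hypothetical log of predictions joined with the outcomes observed later
log = pd.read_csv("prediction_log.csv", parse_dates=["scored_at"])
recent = log[log["scored_at"] >= log["scored_at"].max() - pd.Timedelta(days=30)]

production_accuracy = (recent["predicted_churn"] == recent["actual_churn"]).mean()

if production_accuracy < BASELINE_ACCURACY - DRIFT_TOLERANCE:
    print(f"Drift detected: {production_accuracy:.1%} vs. baseline {BASELINE_ACCURACY:.0%}; schedule retraining")
else:
    print(f"Performance stable at {production_accuracy:.1%}")
```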
Agile AI: Adapting Scrum for Machine Learning
Traditional 2-Week Sprints Don't Work Well for AI: ML experimentation doesn't fit neatly into sprints. Training a model might take 3 days; evaluating the results and deciding on the next experiment might take another 2. Artificial sprint boundaries interrupt learning cycles.
Adapted Approach: Milestone-Based with Rapid Experimentation
- Milestone 1 (Weeks 1-8): Data readiness milestone. Definition of Done: Clean, labeled dataset accessible for modeling.
- Milestone 2 (Weeks 9-14): Model performance milestone. Definition of Done: Model achieving [X]% accuracy on test set.
- Milestone 3 (Weeks 15-18): Production deployment milestone. Definition of Done: Model live, serving predictions, integrated with business systems.
- Milestone 4 (Ongoing): Optimization milestone. Definition of Done: Model improvements based on production performance.
Weekly Standups Instead of Daily: AI work involves deep thinking and experimentation, and daily standups interrupt flow. Weekly check-ins are sufficient unless blockers arise.
Experimentation Log Instead of Backlog: Track ML experiments (algorithms tried, features tested, results) rather than traditional user stories. Use tools like MLflow, Weights & Biases, or simple spreadsheets.
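As an illustration, an experiment can be recorded with MLflow in a few lines; the experiment name, parameters, and metric values below are made up for the example.

```python
import mlflow

# One run per experiment: what was tried, what it achieved, what was decided
mlflow.set_experiment("churn-prediction")

with mlflow.start_run(run_name="gradient-boosting-v1"):
    mlflow.log_param("algorithm", "gradient_boosting")
    mlflow.log_param("feature_set", "base + session_frequency_trend")
    mlflow.log_metric("test_accuracy", 0.84)
    mlflow.set_tag("decision", "candidate for validation review")
```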
Managing Stakeholder Expectations
Challenge: Non-technical stakeholders often have unrealistic AI expectations shaped by media hype. Your job is managing expectations without killing enthusiasm.
Strategies:
- Set Range Expectations, Not Guarantees: "We're targeting 80-85% accuracy based on similar projects, but won't know exactly until we test with our data."
- Explain Uncertainty Upfront: "AI projects are experimental. We might discover our data isn't sufficient and need to adjust scope."
- Show Progress Incrementally: Share early results every 2 weeks (even if imperfect) so stakeholders see progress, not a black box.
- Celebrate Learning, Not Just Success: "We discovered Feature X doesn't help prediction—that's valuable learning that saved us from heading in the wrong direction."
- Translate Technical to Business: Don't report "an F1 score of 0.83"—say "the model catches roughly 83% of fraud attempts while keeping false alarms low" (see the sketch below)
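To see why that business phrasing maps to recall and precision rather than to the F1 score itself, here is a tiny worked example with made-up predictions:

```python
from sklearn.metrics import f1_score, precision_score, recall_score

# Illustrative labels: 1 = fraud, 0 = legitimate
y_true = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 1, 0, 0, 0, 0, 0, 1]

recall = recall_score(y_true, y_pred)        # share of fraud attempts the model catches
precision = precision_score(y_true, y_pred)  # share of fraud alerts that are genuine
f1 = f1_score(y_true, y_pred)                # harmonic mean of the two

print(f"The model catches {recall:.0%} of fraud attempts and "
      f"{precision:.0%} of its alerts are genuine (F1 = {f1:.2f}).")
```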
Malta Case Study: iGaming Project Management Success
Project: Malta iGaming operator implementing player churn prediction AI (covered in previous modules)
Initial Plan (Week 0):
- Success Metric: Achieve 80%+ accuracy predicting player churn 30 days in advance
- MVP Scope: Churn prediction only (not game recommendations or other features)
- Timeline: 16-week project to production deployment
- Budget: €45K (MAIA platform license + data engineer contractor + AI PM time)
- Team: AI Product Manager (internal, 50% time), Data Engineer (contractor, 3 months), MAIA platform support
Milestone 1: Data Preparation (Weeks 1-6)
- Week 1-2: Data discovery—identified player data across 3 databases (gaming platform, CRM, payment system)
- Week 3-4: Built data warehouse consolidating player data (used Snowflake, €2K setup)
- Week 5: Challenge: Discovered 30% of players missing email engagement data (data quality issue)
- Week 5 Decision: Rather than delay 6 weeks to backfill missing data, decided to proceed with available features. Documented as known limitation.
- Week 6: Final dataset ready: 450,000 players, 18 months history, 35 features, churn labels (churned = no login in 60 days; see the labeling sketch after this milestone)
- ✓ Milestone 1 Achieved: Dataset ready (1 week late but acceptable)
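A minimal sketch of how a label rule like "churned = no login in 60 days" can be derived from raw login history; the table name, columns, and snapshot date are hypothetical.

```python
import pandas as pd

# Hypothetical login history: one row per player login event
logins = pd.read_csv("logins.csv", parse_dates=["login_at"])

snapshot_date = pd.Timestamp("2024-01-01")   # date at which labels are computed
last_login = logins.groupby("player_id")["login_at"].max()

# Label a player as churned if their last login is 60 or more days before the snapshot
days_since_login = (snapshot_date - last_login).dt.days
churn_labels = (days_since_login >= 60).astype(int).rename("churned")

print(churn_labels.value_counts())
```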
Milestone 2: Model Development (Weeks 7-12)
- Week 7: Baseline model trained using MAIA platform (logistic regression). Accuracy: 68% (below target)
- Week 8-9: Feature engineering—added derived features (session frequency trends, deposit recency, gameplay diversity). Accuracy improved to 77%
- Week 10: Tried advanced algorithms (gradient boosting). Accuracy: 84% (exceeded target!)
- Week 11: Model validation and explainability analysis (neurosymbolic reasoning showed key churn indicators: deposit decline + session frequency drop)
- Week 12: Finalized model, documented performance and limitations
- ✓ Milestone 2 Achieved: 84% accuracy on test set (exceeded 80% target)
Milestone 3: Deployment (Weeks 13-16)
- Week 13: Built API endpoint using MAIA platform (5 days vs. 3-4 weeks if custom built)
- Week 14: Integrated with CRM system—daily batch predictions, high-risk players flagged in CRM (see the batch-scoring sketch after this milestone)
- Week 15: Challenge: Marketing team requested real-time predictions, not daily batch. Scope creep risk.
- Week 15 Decision: Deployed daily batch as MVP. Documented real-time as Phase 2 enhancement (3 months later). Avoided scope creep.
- Week 16: User acceptance testing with retention team, training on how to use predictions, go-live
- ✓ Milestone 3 Achieved: Production system live on schedule
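For illustration, the daily batch job from Week 14 could look roughly like the sketch below: score today's active players and write the high-risk ones to a file the CRM imports. Paths, feature columns, and the 0.7 threshold are assumptions, not details from the project.

```python
import joblib
import pandas as pd

model = joblib.load("churn_model.joblib")           # trained model from Milestone 2
players = pd.read_csv("active_players_today.csv")   # hypothetical daily feature extract

feature_columns = [c for c in players.columns if c != "player_id"]
players["churn_probability"] = model.predict_proba(players[feature_columns])[:, 1]

# Flag high-risk players for the retention team (threshold is illustrative)
high_risk = players[players["churn_probability"] >= 0.7]
high_risk[["player_id", "churn_probability"]].to_csv("crm_high_risk_flags.csv", index=False)

print(f"Flagged {len(high_risk)} high-risk players for CRM import")
```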
Post-Launch: Monitoring & Optimization (Ongoing)
- Month 1: Accuracy in production: 86% (better than test set—good sign). Retention team acting on 400 high-risk players/week.
- Month 2: Business impact measured: 23% churn reduction among flagged players. ROI validated.
- Month 3: Model retrained with 3 additional months of data. Accuracy maintained at 85%.
- Month 6: Added real-time prediction capability (Phase 2 from Week 15 decision)
Project Management Keys to Success:
- Clear Success Metric: 80%+ accuracy was specific, measurable, achievable. Everyone knew what "done" looked like.
- Pragmatic Decisions: Week 5 (data quality issue) and Week 15 (scope creep) decisions prioritized delivering MVP over perfection.
- Milestone Structure: Organized around meaningful deliverables (data, model, deployment) rather than artificial 2-week sprints.
- Transparency: Weekly progress reports to stakeholders showing current accuracy, challenges, and learnings kept everyone aligned.
- Managed Expectations: Targeted 80%, delivered 84%, but communicated uncertainty throughout—no surprise disappointments.
Risk Management for AI Projects
| Risk | Likelihood | Impact | Mitigation |
| --- | --- | --- | --- |
| Insufficient Data Quality | High | High | Conduct data assessment BEFORE project kickoff. Budget 40-60% of time for data prep. |
| Can't Achieve Target Accuracy | Medium | High | Set range targets (75-85%), not fixed. Define "good enough" threshold. Have fallback plan. |
| Integration Complexity | Medium | Medium | Involve IT/engineering team early. Assess integration feasibility in scoping phase. |
| Scope Creep | High | Medium | Define MVP strictly. Document Phase 2 features. Require exec approval for scope changes. |
| Key Person Dependency | Medium | High | Document everything. Cross-train team members. Use platforms reducing dependency on specific ML experts. |
| Regulatory Rejection | Low-Medium | High | Engage legal/compliance early. For MGA/MFSA, use explainable AI. Prepare audit trails. |
Key Takeaways
- AI projects are experimental—manage with hypotheses, tests, and iterations rather than fixed requirements
- Typical AI project timeline: 16-24 weeks (40-60% data prep, 30-40% modeling, 20-30% deployment)
- Define clear, measurable success criteria upfront (e.g., "80%+ accuracy" not "improve predictions")
- Use milestone-based management (data ready, model trained, production deployed) rather than rigid 2-week sprints
- Expect and plan for data quality challenges—they're the #1 cause of AI project delays
- Manage scope creep aggressively—define MVP, document Phase 2 features, require exec approval for changes
- Communicate progress transparently with weekly updates showing current performance, even if imperfect
- For Malta businesses: Engage MGA/MFSA compliance early if regulated industry; explainability matters for regulatory acceptance
- De-risk by starting small (single use case MVP) before expanding to multiple AI initiatives