How I work
I treat ML as a product system, not a notebook. I ship with reproducibility, measurement, and operational clarity:
data validation, versioned artefacts, and deployment discipline. I’m deliberate about trade-offs — latency, cost, robustness,
and auditability — and I prefer evidence over claims.
Reproducibility
Evaluation
Observability
Production Readiness
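To make the data-validation discipline above concrete, here is a minimal sketch of a batch-level contract check at the front of a pipeline. The column names and rules are illustrative placeholders, not a real project's schema.

```python
import pandas as pd

# Illustrative contract: required columns and basic integrity rules.
REQUIRED_COLUMNS = {"user_id", "event_ts", "amount"}

def validate_batch(df: pd.DataFrame) -> pd.DataFrame:
    """Fail fast if an incoming batch violates the data contract."""
    missing = REQUIRED_COLUMNS - set(df.columns)
    if missing:
        raise ValueError(f"Missing columns: {sorted(missing)}")
    if df["user_id"].isna().any():
        raise ValueError("Null user_id values in batch")
    if (df["amount"] < 0).any():
        raise ValueError("Negative amounts in batch")
    return df
```

Checks like this run before training or serving ever sees the data, so a bad batch stops the run instead of silently degrading a model.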
What I build end-to-end
Ingestion → data modelling → feature engineering → training + evaluation → artefact registry →
containerisation → serving APIs → monitoring and retraining triggers. For GenAI use cases: retrieval, chunking,
reranking, guardrails, and evaluation harnesses.
Feature Engineering
Model Registry
Dockerisation
FastAPI
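The serving-API step in the pipeline above typically looks something like this minimal FastAPI sketch. The model path, feature names, and endpoint are illustrative assumptions, not a specific deployment.

```python
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="example-model-service")     # illustrative service name
model = joblib.load("artifacts/model.joblib")    # versioned artefact baked into the container image

class Features(BaseModel):
    tenure_months: float
    monthly_spend: float

@app.post("/predict")
def predict(features: Features) -> dict:
    # Single-row inference; request logging and monitoring hooks are omitted for brevity.
    proba = model.predict_proba([[features.tenure_months, features.monthly_spend]])[0][1]
    return {"probability": float(proba)}
```

Keeping the artefact versioned and loaded from the image (rather than fetched ad hoc) is what makes the containerisation and registry steps upstream pay off at serve time.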
Technical focus areas
This site highlights systems that connect modelling quality with engineering quality. The goal is to present a single, consistent profile:
a full-stack ML engineer who can both build models and deploy them reliably.
AI & Modelling
Supervised learning, feature engineering, uncertainty-aware modelling, model evaluation, calibration, and robust baselines.
When using LLMs: retrieval-augmented generation, prompt + tool orchestration, and evaluation-driven iteration.
Feature Engineering
Evaluation
RAG
Guardrails
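For the retrieval-augmented generation work above, the retrieval-and-grounding step can be sketched with a toy lexical retriever; in practice an embedding index and reranker would replace TF-IDF, and guardrails would wrap the prompt. All names here are illustrative.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def retrieve(query: str, chunks: list[str], k: int = 3) -> list[str]:
    """Toy lexical retriever standing in for a vector store + reranker."""
    vectorizer = TfidfVectorizer().fit(chunks + [query])
    chunk_vecs = vectorizer.transform(chunks)
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, chunk_vecs)[0]
    top = scores.argsort()[::-1][:k]
    return [chunks[i] for i in top]

def build_prompt(query: str, context: list[str]) -> str:
    """Assemble a grounded prompt; templates and guardrails live in config in a real system."""
    joined = "\n---\n".join(context)
    return (
        "Answer using only the context below. If the answer is not in the context, say so.\n\n"
        f"Context:\n{joined}\n\nQuestion: {query}"
    )
```

Evaluation-driven iteration then means scoring retrieval hit rate and answer groundedness on a fixed question set before and after each change, rather than eyeballing outputs.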
MLOps & Engineering
CI/CD for ML, container-first delivery, reproducible training, deployment automation, and monitoring-ready outputs.
I design for reliability: versioning, provenance, and fast rollback paths.
CI/CD for ML
Docker
Model Serving
Monitoring
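Monitoring-ready outputs usually mean drift statistics that can gate automated retraining. Below is a minimal Population Stability Index sketch for one feature; the ~0.2 threshold is a common rule of thumb, not a universal constant.

```python
import numpy as np

def population_stability_index(reference: np.ndarray, live: np.ndarray, bins: int = 10) -> float:
    """PSI between a training-time reference distribution and live traffic for one feature."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    live = np.clip(live, edges[0], edges[-1])      # keep live values inside the reference range
    ref_frac = np.histogram(reference, bins=edges)[0] / len(reference)
    live_frac = np.histogram(live, bins=edges)[0] / len(live)
    ref_frac = np.clip(ref_frac, 1e-6, None)       # avoid log(0) on empty bins
    live_frac = np.clip(live_frac, 1e-6, None)
    return float(np.sum((live_frac - ref_frac) * np.log(live_frac / ref_frac)))

# Illustrative trigger: flag the feature (or kick off retraining) when PSI exceeds ~0.2.
```

Emitting a number like this per feature per window is what turns "monitoring" into an actionable retraining trigger rather than a dashboard.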
Recruiter-friendly summary (ATS-ready)
Full Stack Machine Learning Engineer / Data Scientist building scalable ML pipelines and deployment systems.
Keywords: CI/CD for ML, Dockerisation, model serving, inference scaling, monitoring, automated retraining, RAG/LLMOps,
evaluation harness, data governance, and production trade-offs.