Demo
The app accepts weather sensor readings and returns a classification.
Input features
Temperature · Pressure · Humidity · Wind speed · Wind degree · Rain 1h · Rain 3h · Snow · Clouds
Prediction classes
clear · cloudy · drizzly · foggy · hazy · misty · rain · smoky · thunderstorm
Response time
< 5 ms
Example input
Temp: 72°F
Pressure: 1013
Humidity: 45%
Wind: 8 mph
Model output
clear
Latency
1.24 ms
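The nine input features above map to a fixed-order vector before they reach the model. A minimal sketch of that step, assuming hypothetical feature names and treating absent precipitation fields as zero (the real app's layout may differ):

```python
# Hypothetical feature names and ordering; the production layout may differ.
FEATURES = ["temperature", "pressure", "humidity", "wind_speed",
            "wind_deg", "rain_1h", "rain_3h", "snow", "clouds"]

def to_vector(reading):
    """Arrange a raw sensor reading into the fixed 9-feature order;
    fields missing from the reading (e.g. rain, snow) default to zero."""
    return [float(reading.get(name, 0.0)) for name in FEATURES]

# The example reading shown above:
example = {"temperature": 72, "pressure": 1013, "humidity": 45, "wind_speed": 8}
vec = to_vector(example)  # 9 floats, ready for the classifier
```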
What it solves
Clear outcomes, no marketing language.
- Classifies weather from raw sensor readings without manual interpretation.
- Demonstrates a full ML deployment lifecycle: train → containerise → deploy → test.
- Automated pipeline from code push to production rollout with zero manual steps.
CI/CD Pipeline
Push to main triggers the full build → test → deploy chain.
Push
Code merged to main branch.
CI
Install, smoke test, unit test, integration test.
Build
Docker image built and pushed to DockerHub.
Deploy
K8s deployment updated, rollout verified.
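The push → CI → build chain can be sketched as a GitHub Actions workflow. This is illustrative only: the file name, job names, and test paths are assumptions, not the project's actual workflow.

```yaml
# .github/workflows/ci.yml — illustrative sketch, not the actual workflow
name: CI
on:
  push:
    branches: [main]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.10"
      - run: pip install -r requirements.txt          # install
      - run: pytest tests/                            # smoke, unit, integration
  build:
    needs: test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: docker build -t example/weather-classifier:${{ github.sha }} .
```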
Stack
Each layer chosen for a reason.
Flask
Lightweight web framework. Simple routing for the form input and result display.
Minimal overhead for a single-endpoint app like this.
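The routing described above amounts to very little Flask code. A minimal sketch, with a placeholder template and a hard-coded prediction standing in for the real model call:

```python
from flask import Flask, request, render_template_string

app = Flask(__name__)

# Placeholder page; the real app's templates are assumed to be richer.
PAGE = "<form method=post><input type=submit value=Classify></form><p>{{ result }}</p>"

@app.route("/", methods=["GET", "POST"])
def index():
    result = ""
    if request.method == "POST":
        # In the real app, the pickled classifier runs here on the form input.
        result = "clear"
    return render_template_string(PAGE, result=result)
```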
scikit-learn
Trained classifier pickled to disk. Fast inference, no GPU needed, small footprint.
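The serve path is just a pickle round trip: serialise the fitted estimator once at train time, deserialise it at startup, and reuse it for every request. Sketched here with a stub object in place of the actual scikit-learn classifier (which is not reproduced):

```python
import pickle

class StubClassifier:
    """Stand-in with the same predict() interface as the pickled
    scikit-learn estimator; the real model is trained offline."""
    def predict(self, X):
        return ["clear"] * len(X)

# Train time: dump the fitted model to disk (an in-memory blob here).
blob = pickle.dumps(StubClassifier())

# App startup: load once, then call predict() per request.
model = pickle.loads(blob)
labels = model.predict([[72, 1013, 45, 8, 0, 0, 0, 0, 0]])
```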
Docker
Reproducible builds. Same image runs in CI, staging, and production.
Based on python:3.10-slim for minimal size.
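An illustrative Dockerfile for the setup described above; file names and the serve port are assumptions:

```dockerfile
# Sketch only — file names and port are assumptions
FROM python:3.10-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 5000
CMD ["python", "app.py"]
```

Copying `requirements.txt` before the rest of the source keeps the dependency layer cached between builds.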
Kubernetes
Deployment with 2 replicas, NodePort service.
Demonstrates orchestration, scaling, and rollout strategy.
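The two-replica Deployment plus NodePort Service might look like the following; resource names, labels, and ports are illustrative assumptions:

```yaml
# Sketch of the Deployment and Service; names and ports are assumptions
apiVersion: apps/v1
kind: Deployment
metadata:
  name: weather-classifier
spec:
  replicas: 2
  selector:
    matchLabels:
      app: weather-classifier
  template:
    metadata:
      labels:
        app: weather-classifier
    spec:
      containers:
        - name: web
          image: example/weather-classifier:latest
          ports:
            - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: weather-classifier
spec:
  type: NodePort
  selector:
    app: weather-classifier
  ports:
    - port: 5000
      nodePort: 30080
```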
GitHub Actions
CI runs tests and builds the Docker image. CD triggers on CI success
and performs a rolling K8s update via a self-hosted runner.
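A CD workflow triggered by CI completion can be sketched with the `workflow_run` event; workflow, deployment, and image names are assumptions:

```yaml
# Illustrative CD sketch — runs only after the CI workflow succeeds
name: CD
on:
  workflow_run:
    workflows: ["CI"]
    types: [completed]
jobs:
  deploy:
    if: ${{ github.event.workflow_run.conclusion == 'success' }}
    runs-on: self-hosted   # runner with kubectl access to the cluster
    steps:
      - run: >
          kubectl set image deployment/weather-classifier
          web=example/weather-classifier:${{ github.event.workflow_run.head_sha }}
      - run: kubectl rollout status deployment/weather-classifier
```

`kubectl rollout status` blocks until the rolling update completes, so the job fails visibly if the new image never becomes ready.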
Ansible
Playbooks for provisioning: installing Minikube, kubectl, creating
the systemd service, and deploying/scaling the app.
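A provisioning playbook in this shape might look like the following sketch; host group, file paths, and the pinned kubectl version are assumptions:

```yaml
# Illustrative playbook fragment — paths and versions are assumptions
- hosts: k8s_host
  become: true
  tasks:
    - name: Install kubectl
      ansible.builtin.get_url:
        url: https://dl.k8s.io/release/v1.29.0/bin/linux/amd64/kubectl
        dest: /usr/local/bin/kubectl
        mode: "0755"

    - name: Install the Minikube systemd unit
      ansible.builtin.copy:
        src: files/minikube.service
        dest: /etc/systemd/system/minikube.service

    - name: Start and enable Minikube
      ansible.builtin.systemd:
        name: minikube
        state: started
        enabled: true
        daemon_reload: true
```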
Decisions
The tradeoffs that shaped this project.
Pickle over ONNX
Pickle is simpler for sklearn models. ONNX adds complexity without benefit at this scale.
Self-hosted runner
CD deploys to a real cluster via a self-hosted GitHub Actions runner,
demonstrating actual infrastructure management.
NodePort over Ingress
Simpler for a single-service demo. Demonstrates K8s networking without the
overhead of an ingress controller.
Next steps
Planned improvements, kept realistic.
- Retrain model with a larger, more recent dataset and serve via a model registry.
- Add Helm chart for cluster configuration management.
- Implement health checks and readiness probes in the K8s deployment.
- Add monitoring with Prometheus metrics endpoint.