Image Style Transfer
Upload any photo and apply a famous art style: Van Gogh, Munch, Picasso, and more. Uses fast neural style transfer models running on the server with OpenCV DNN.
Models: 7 pre-trained styles
Backend: Flask + OpenCV DNN
Inference: CPU (Hetzner CX23)
Max input: 10 MB / 1024 px
Live demo
Upload a photo, pick a style, and hit Transfer.
Click or drag an image here
JPEG, PNG, WebP (max 10 MB)
Pick a style
- Starry Night (Van Gogh)
- The Scream (Edvard Munch)
- La Muse (Picasso)
- Candy (pop art)
- Mosaic (tile art)
- Udnie (Francis Picabia)
- Feathers (abstract)
What it solves
Clear outcomes, no marketing language.
- Demonstrates end-to-end ML: model loading, image preprocessing, inference, post-processing, and delivery.
- Uses OpenCV's DNN module, far lighter than PyTorch/TensorFlow; CPU-only inference keeps the server bill small.
- Sub-10-second transforms on a 2-vCPU Hetzner CX23 with no GPU.
- Models are cached in memory after the first load, so subsequent requests are faster.
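The in-memory cache from the last point can be sketched as a lazy, thread-safe dictionary keyed by style name. This is a hypothetical illustration, not the project's actual code: `loader` stands in for whatever loads a `.t7` network (e.g. `cv2.dnn.readNetFromTorch`).

```python
import threading

_models = {}            # style name -> loaded network
_lock = threading.Lock()

def get_model(style, loader):
    """Return the cached network for `style`, loading it on first use only."""
    with _lock:
        if style not in _models:
            _models[style] = loader(style)
        return _models[style]

# Usage: the loader runs once per style; later calls hit the cache.
calls = []
def fake_loader(name):
    calls.append(name)
    return f"net:{name}"

a = get_model("candy", fake_loader)
b = get_model("candy", fake_loader)
```

The lock matters because Flask may serve concurrent requests; without it, two simultaneous first requests for the same style could load the model twice.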
Approach
Architecture and model pipeline.
1. Upload: the user sends the image and a style name via multipart POST.
2. Preprocess: decode, resize to ≤1024 px, build a DNN blob with mean subtraction.
3. Inference: feed-forward pass through the pre-trained fast neural style network (Torch .t7).
4. Output: de-normalise, clip, encode as JPEG, return as base64.
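The preprocess and output steps above can be sketched in plain NumPy. This mirrors what `cv2.dnn.blobFromImage` does internally; the per-channel BGR mean values are an assumption, taken from the values commonly used with the Johnson et al. `.t7` models.

```python
import numpy as np

# Per-channel BGR mean commonly used with these .t7 models (assumed values).
MEAN = np.array([103.939, 116.779, 123.68], dtype=np.float32)

def to_blob(img_bgr):
    """HWC uint8 BGR image -> NCHW float32 blob with mean subtraction,
    analogous to cv2.dnn.blobFromImage(img, mean=MEAN, swapRB=False)."""
    x = img_bgr.astype(np.float32) - MEAN          # subtract per-channel mean
    return x.transpose(2, 0, 1)[np.newaxis, ...]   # HWC -> NCHW

def from_output(out):
    """Network output NCHW -> displayable HWC uint8: add the mean back, clip."""
    x = out[0].transpose(1, 2, 0) + MEAN
    return np.clip(x, 0, 255).astype(np.uint8)

# Round-trip sanity check on a tiny dummy image (no network in between).
img = np.zeros((4, 4, 3), dtype=np.uint8)
blob = to_blob(img)
restored = from_output(blob)
```

In the real pipeline, `blob` is fed to `net.setInput(...)` / `net.forward()` and `from_output` is applied to the network's result rather than to the blob itself.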
Decisions & tradeoffs
Why this and not that.
OpenCV DNN vs PyTorch
A full PyTorch install is ~700 MB. OpenCV's DNN module loads the same .t7 models in ~30 MB total. On a 4 GB VPS, every megabyte matters.
Pre-trained vs arbitrary style
Arbitrary style transfer (e.g. Magenta) requires a much larger model and TensorFlow. Pre-trained feed-forward nets give instant results with predictable quality.
CPU-only inference
A CX23 has no GPU. The Johnson et al. feed-forward architecture is fast enough for single images on CPU (~3–8 seconds, depending on resolution).
Base64 response vs file download
Returning base64 lets the frontend show the result immediately and offer a download button without a second round-trip.
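The base64 response can be built with the standard library alone. A minimal sketch, assuming the payload is a JSON object with a data URI (the exact field names and URI format here are illustrative, not the project's actual contract):

```python
import base64
import json

def make_response(jpeg_bytes, style):
    """Wrap encoded JPEG bytes as a base64 data URI inside a JSON payload,
    so the frontend can assign it directly to an <img> src attribute."""
    b64 = base64.b64encode(jpeg_bytes).decode("ascii")
    return json.dumps({"style": style,
                       "image": f"data:image/jpeg;base64,{b64}"})

# Usage with a stand-in byte string (a real response would carry JPEG data).
resp = json.loads(make_response(b"\xff\xd8\xff", "candy"))
```

The same data URI doubles as the download link target, which is how a single response can serve both the preview and the download button.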