Inspiration

Deepfakes are one of the fastest-growing threats to digital trust. With AI-generated faces now often indistinguishable from real ones to the human eye, we wanted a tool that not only detects manipulation but also explains why it flagged an image.

What it does

Deepfake Detector analyzes any uploaded image using a dual-model AI pipeline and returns a fake probability score, along with a Grad-CAM heatmap overlay highlighting which facial regions most influenced the prediction.
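As a sketch, the result the backend hands to the frontend might look like the dictionary below. The field names are illustrative assumptions for this write-up, not the project's actual API schema:

```python
# Hypothetical shape of one analysis result (field names are illustrative,
# not the real API schema).
result = {
    "fake_probability": 0.87,     # blended ensemble score in [0, 1]
    "cnn_score": 0.91,            # pixel-level CNN component
    "frequency_score": 0.81,      # FFT anomaly component
    "verdict": "likely fake",     # human-readable label for the score meter
    "heatmap_png_base64": "...",  # Grad-CAM overlay as a base64-encoded PNG
}
```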

How we built it

  • Backend: FastAPI + PyTorch, pairing a pretrained deepfake classification model with FFT-based frequency-domain analysis
  • Ensemble: weighted blend of 60% pixel-level CNN score and 40% frequency anomaly score
  • Explainability: Grad-CAM heatmaps highlight suspicious facial regions
  • Frontend: React + Vite + TailwindCSS with an animated score meter
  • Deployment: Hugging Face Spaces (CPU tier, Docker)
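The 60/40 ensemble above reduces to a weighted average of the two model outputs. A minimal sketch (the function name and signature are ours, not the project's):

```python
def ensemble_score(cnn_prob: float, freq_anomaly: float,
                   w_cnn: float = 0.60, w_freq: float = 0.40) -> float:
    """Blend the pixel-level CNN probability with the FFT anomaly score.

    Both inputs are expected in [0, 1]; with the default 60/40 weights the
    output is also a probability-like score in [0, 1].
    """
    return w_cnn * cnn_prob + w_freq * freq_anomaly
```

Keeping the weights as parameters makes it easy to re-tune the blend if one branch proves more reliable on a new generator family.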

Challenges

The biggest challenge was model calibration: GAN-generated faces carry subtle artifacts that vary by generator architecture, so a classifier tuned against one generator family can misfire on another. Combining frequency-domain analysis with the CNN classifier significantly improved detection reliability across generators.
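One common way to score the frequency-domain side is to measure how much spectral energy falls outside a low-frequency window, since GAN upsampling tends to leave periodic high-frequency artifacts. A NumPy sketch under that assumption (the function name and cutoff are illustrative, not the project's actual implementation):

```python
import numpy as np

def high_freq_energy_ratio(gray: np.ndarray, cutoff_frac: float = 0.25) -> float:
    """Fraction of 2-D spectral energy outside a centered low-frequency window.

    `gray` is a 2-D grayscale image array. Higher values mean more energy in
    high frequencies, where GAN upsampling artifacts often concentrate.
    """
    # Power spectrum, shifted so the DC component sits at the center.
    spec = np.abs(np.fft.fftshift(np.fft.fft2(gray))) ** 2
    h, w = spec.shape
    ch, cw = int(h * cutoff_frac), int(w * cutoff_frac)
    # Energy inside the central low-frequency window vs. total energy.
    low = spec[h // 2 - ch : h // 2 + ch, w // 2 - cw : w // 2 + cw].sum()
    return float(1.0 - low / spec.sum())
```

A flat image scores near 0 (all energy at DC), while white noise scores high because its energy is spread across all frequencies.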

ML Performance

  • AUC: 0.94
  • F1 Score: 0.91
  • Accuracy: 93.2%
  • Inference time: ~1.2s per image on CPU

Built With

FastAPI · PyTorch · Grad-CAM · React · Vite · TailwindCSS · Docker · Hugging Face Spaces