In a world where first impressions often make or break career opportunities, the ability to deliver a confident, well-structured, and emotionally intelligent interview performance has become a must-have skill. But let’s be honest — most of us don’t have access to professional coaching, constant feedback, or a mirror that tells us how we come across.
That’s exactly why we built InterviewLY — an AI-powered interview coaching platform that analyzes how you speak, move, emote, and engage — all from a single uploaded video.
🧠 Think of InterviewLY as your private communication coach — powered by machine learning, computer vision, and sentiment analysis — available anytime, anywhere.
What is InterviewLY?
InterviewLY is a web-based application that uses artificial intelligence and video analysis to give users real-time, personalized feedback on their interview performance. Whether you're preparing for your first job or your fifth promotion, InterviewLY helps you practice smarter by giving you deep insights into your:
- Speech tone and clarity
- Body posture and alignment
- Eye contact and camera engagement
- Hand gestures and physical expressiveness
- Facial emotions and emotional intelligence
All it takes is one uploaded video.
How Did We Build It?
InterviewLY was developed as a capstone project in machine learning and AI. It was built completely from scratch using open-source tools, deep learning libraries, and a modular design approach.
🔗 Project Repository:
https://github.com/unnatikdm/InterviewLY
Core Features
🎤 Sentiment Analysis
- Uses Faster-Whisper to convert speech into text.
- Runs TextBlob to detect tone (positive, neutral, negative), filler words ("um", "like", "you know"), and grammar clarity.
- Highlights speech hesitations and suggests improvements for clearer communication.
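To make the filler-word step concrete, here is a minimal sketch of how hesitation markers can be counted in a transcript. It assumes the transcript is already a plain string (as a speech-to-text model would produce); the filler list and function name are illustrative, not InterviewLY's actual code:

```python
import re
from collections import Counter

# Illustrative list of hesitation markers; the real app may use a different set.
FILLERS = ["um", "uh", "like", "you know", "so", "actually"]

def count_fillers(transcript: str) -> Counter:
    """Count occurrences of common filler words/phrases in a transcript."""
    text = transcript.lower()
    counts = Counter()
    for filler in FILLERS:
        # \b word boundaries so "like" doesn't match "unlike"
        counts[filler] = len(re.findall(r"\b" + re.escape(filler) + r"\b", text))
    return +counts  # unary + drops zero-count entries

print(count_fillers("So, um, I think, like, teamwork is, you know, key."))
```

A high filler count per minute is an easy, objective signal to surface alongside the tone score.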
🧍 Posture Analysis
- Uses MediaPipe Pose to detect shoulder and hip alignment in each video frame.
- Calculates a posture score based on how often you maintain a straight posture.
- Offers posture improvement tips for more confident body language.
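The scoring idea can be sketched in a few lines: measure how far the shoulder line tilts from horizontal in each frame, then report the fraction of "level" frames. The coordinates below stand in for MediaPipe Pose landmarks; the threshold and function names are assumptions for illustration:

```python
import math

def shoulder_tilt_deg(left, right):
    """Angle (degrees) of the shoulder line relative to horizontal.
    left/right are (x, y) landmark coordinates, e.g. from MediaPipe Pose."""
    dx = right[0] - left[0]
    dy = right[1] - left[1]
    return abs(math.degrees(math.atan2(dy, dx)))

def posture_score(frames, max_tilt=10.0):
    """Percentage of frames where the shoulders stay within max_tilt degrees."""
    good = sum(1 for l, r in frames if shoulder_tilt_deg(l, r) <= max_tilt)
    return 100.0 * good / len(frames)

frames = [((0.3, 0.50), (0.7, 0.51)),   # nearly level: good
          ((0.3, 0.50), (0.7, 0.70))]   # slumped to one side: bad
print(posture_score(frames))  # -> 50.0
```

The same per-frame-flag-then-aggregate pattern underlies most of the visual modules.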
👁️ Eye Contact Analysis
- Uses MediaPipe Face Mesh to track eye and nose landmarks.
- Measures whether you're looking directly into the camera — simulating real eye contact with an interviewer.
- Gives a percentage score and visual feedback on eye contact habits.
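One simple way to turn face-mesh landmarks into a percentage, sketched below: when the face points at the camera, the nose sits roughly midway between the eyes, so frames where it drifts off-center count against the score. The heuristic, tolerance, and names are assumptions, not the project's exact logic:

```python
def looking_at_camera(nose_x, left_eye_x, right_eye_x, tolerance=0.08):
    """Heuristic: the nose should sit roughly midway between the eyes
    when the face points at the camera (x in normalized [0, 1] coords)."""
    midpoint = (left_eye_x + right_eye_x) / 2
    return abs(nose_x - midpoint) <= tolerance * abs(right_eye_x - left_eye_x)

def eye_contact_score(frames):
    """Percentage of frames classified as direct eye contact."""
    hits = sum(1 for f in frames if looking_at_camera(*f))
    return round(100.0 * hits / len(frames), 1)

# (nose_x, left_eye_x, right_eye_x) per frame
frames = [(0.50, 0.42, 0.58),   # centered
          (0.55, 0.42, 0.58),   # turned away
          (0.505, 0.42, 0.58)]  # near-centered
print(eye_contact_score(frames))  # -> 66.7
```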
🖐️ Gesture Analysis
- Uses MediaPipe Hands to classify common gestures like "thumbs up", "open palm", or "fist".
- Visualizes how often each gesture was used and whether your hands were expressive enough.
- Encourages natural, confident hand movements.
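Once MediaPipe Hands tells you which fingers are extended, the classification itself can be a small rule table. This sketch assumes per-finger boolean flags have already been derived from the hand landmarks; the labels match the ones named above:

```python
def classify_gesture(thumb, index, middle, ring, pinky):
    """Map per-finger 'extended' flags (e.g. derived from MediaPipe Hands
    landmarks) to a coarse gesture label."""
    fingers = (index, middle, ring, pinky)
    if thumb and not any(fingers):
        return "thumbs up"
    if thumb and all(fingers):
        return "open palm"
    if not thumb and not any(fingers):
        return "fist"
    return "other"

print(classify_gesture(True, False, False, False, False))  # -> thumbs up
```

Counting these labels per frame yields the gesture-frequency chart shown in the dashboard.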
😊 Emotion Detection
- Uses DeepFace to analyze facial expressions in each video frame.
- Detects emotions like happiness, sadness, anger, surprise, and neutrality.
- Reports on dominant emotions and how expressive your face is during your responses.
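The aggregation step can be sketched like this: collect one emotion label per frame (as DeepFace's dominant-emotion output would provide), then reduce them to a dominant emotion and a percentage breakdown. The function name and report shape are illustrative:

```python
from collections import Counter

def emotion_report(frame_emotions):
    """Summarize per-frame emotion labels into a dominant emotion
    and a percentage breakdown suitable for a pie chart."""
    counts = Counter(frame_emotions)
    total = len(frame_emotions)
    breakdown = {e: round(100.0 * n / total, 1) for e, n in counts.items()}
    dominant = counts.most_common(1)[0][0]
    return dominant, breakdown

frames = ["happy", "happy", "neutral", "happy", "surprise"]
print(emotion_report(frames))
```

A narrow breakdown (one emotion near 100%) is itself useful feedback: it suggests a flat, unexpressive delivery.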
💼 Streamlit Integration
- Brought all modules into a single interactive web dashboard.
- Added dynamic metrics, charts, buttons, and navigation across modules.
- Designed clean pages for easy video uploads, feedback viewing, and modular analysis.
📺 Home + Navigation
- Designed `app.py` and `interview.py` for the homepage, video upload, and route control.
- Built the landing UI with animated titles, CSS-powered buttons, and feature highlights.
- Ensured a smooth flow from upload → preview → analyze → feedback.
The Tech Stack
| Component | Tech |
|---|---|
| UI/Frontend | Streamlit, HTML/CSS inside Streamlit |
| Video Processing | OpenCV, MediaPipe |
| Audio Transcription | Faster-Whisper |
| Sentiment Analysis | TextBlob |
| Emotion Detection | DeepFace |
| Hand Gesture Detection | MediaPipe Hands |
| Pose Detection | MediaPipe Pose |
| Plots & Charts | Matplotlib |
| Backend Logic | Python 3.8+ |
| Deployment | Streamlit Cloud |
A Peek at the Workflow
- User lands on the homepage with a giant title and an inviting "🎤 Start Interview" button.
- They upload a `.mp4` video of themselves answering mock interview questions.
- The app extracts video frames and audio, then applies the models to analyze the user's performance.
- Each module presents results on a separate page: eye contact, posture, sentiment, gesture, and emotion.
- For every analysis, the app shows:
  - A score or percentage
  - A pie chart breakdown
  - A written summary
  - Actionable tips to improve
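Each module's output follows the same shape, which keeps the dashboard code uniform. A sketch of that per-module feedback structure, with an illustrative shape and names rather than InterviewLY's exact internal API:

```python
def build_feedback(module, score, breakdown, tips):
    """Assemble one module's results into the structure each page renders:
    a score, a chart-ready breakdown, a summary line, and tips."""
    return {
        "module": module,
        "score": score,
        "breakdown": breakdown,          # fed to a pie chart
        "summary": f"{module} score: {score}%",
        "tips": tips,
    }

report = build_feedback(
    "Eye Contact", 72,
    {"direct": 72, "away": 28},
    ["Look at the lens, not the screen."],
)
print(report["summary"])  # -> Eye Contact score: 72%
```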
Challenges We Overcame
- Deployment Errors: Some libraries like `torch`, `tensorflow`, or `moviepy` crashed on Streamlit Cloud. We optimized our `requirements.txt`, removed unnecessary packages, and used lightweight tools like `faster-whisper` and `textblob` instead.
- System Dependencies: FFmpeg was needed for audio processing, but Streamlit Cloud doesn't install it by default. We fixed this with a `packages.txt` hack to install FFmpeg on their servers.
- Cross-Team Integration: With multiple contributors working on different files, maintaining consistency in style, routing, and performance required clean modular planning.
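The FFmpeg fix above is smaller than it sounds: Streamlit Cloud installs any apt packages listed in a `packages.txt` file at the repo root, so the whole "hack" amounts to one line:

```text
ffmpeg
```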
Why This Matters
InterviewLY is more than just a cool AI project — it solves a real-world problem: how to prepare for interviews with objective, real-time feedback.
- Much of communication is non-verbal; recruiters often cite figures as high as 70%.
- Most candidates overestimate their delivery skills.
- Very few tools provide behavioral feedback automatically.
With InterviewLY, we're filling this gap by offering a free, accessible, and AI-powered mirror — one that doesn’t judge, but teaches.
Try It, Fork It, Expand It
Want to make InterviewLY better? It’s fully open-source. You can:
- Add language support (Hindi, Marathi, etc.)
- Replace TextBlob with transformer models like BERT for deeper sentiment analysis
- Build a feedback history dashboard
- Integrate with WhatsApp or email for sending reports
GitHub Repo:
https://github.com/unnatikdm/InterviewLY
Final Thoughts
InterviewLY doesn’t just help you improve how you interview. It helps you become more aware of how the world sees you.
It’s not about perfection — it’s about progress, powered by AI.
So go ahead — upload your video, press play, and let the coach begin.