The FeelGPT application
This project introduces an innovative chatbot designed to foster meaningful conversations about human emotions. Through gentle prompt engineering, the chatbot encourages self-reflection and emotional understanding in a safe, supportive space.
A standout feature is its optional camera integration, enabling real-time analysis of facial expressions and gestures to personalize interactions.
Powered by advanced Large Language Models (LLMs) for seamless, human-like conversation and enhanced by Visage|SDK for real-time facial expression analysis, the chatbot adapts to users' emotional cues, creating a truly personalized experience.
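As a rough sketch of how the emotion signal could condition the chatbot, the snippet below maps per-frame emotion probabilities (the shape of the scores object and the threshold are assumptions, not the actual Visage|SDK output format) to a dominant emotion label that is blended into the LLM system prompt:

```typescript
// Hypothetical shape of per-frame emotion probabilities; the real
// Visage|SDK output format may differ.
type EmotionScores = Record<string, number>;

// Pick the dominant emotion, falling back to "neutral" when no score
// clears the confidence threshold (threshold value is an assumption).
function dominantEmotion(scores: EmotionScores, threshold = 0.4): string {
  let best = "neutral";
  let bestScore = threshold;
  for (const [emotion, score] of Object.entries(scores)) {
    if (score > bestScore) {
      best = emotion;
      bestScore = score;
    }
  }
  return best;
}

// Blend the detected emotion into the system prompt sent to the model.
function buildSystemPrompt(emotion: string): string {
  return `You are a supportive conversational partner. The user currently appears ${emotion}; respond with matching empathy.`;
}
```

When the camera is disabled, the fallback label keeps the prompt neutral, so the text-only conversation still works unchanged.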
Additional features include a comprehensive chat history that lets users revisit past conversations, as well as a mood tracker and graph that visually map emotional trends over time, offering insight into the user's emotional journey.
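The mood graph needs per-day aggregates of the recorded mood values. A minimal sketch of that aggregation step, assuming mood entries carry an ISO timestamp and a numeric score (both field names and the scale are assumptions):

```typescript
// Hypothetical mood entry; score scale is an assumption (e.g. -1..1).
type MoodEntry = { timestamp: string; score: number };

// Group entries by calendar day and average the scores, producing the
// series the frontend graph can plot directly.
function dailyAverages(entries: MoodEntry[]): Map<string, number> {
  const sums = new Map<string, { total: number; count: number }>();
  for (const { timestamp, score } of entries) {
    const day = timestamp.slice(0, 10); // "YYYY-MM-DD"
    const agg = sums.get(day) ?? { total: 0, count: 0 };
    agg.total += score;
    agg.count += 1;
    sums.set(day, agg);
  }
  const averages = new Map<string, number>();
  for (const [day, { total, count }] of sums) {
    averages.set(day, total / count);
  }
  return averages;
}
```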
Users can also enable email notifications with a customizable time and frequency, receiving a summary of the previous day's emotional highlights and a link to start a new conversation.
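Scheduling those emails reduces to computing the next send time from the user's preferences. A minimal sketch, assuming preferences are stored as a preferred hour plus a frequency in days (the preference shape is an assumption):

```typescript
// Hypothetical notification preferences; field names are assumptions.
type NotificationPrefs = { hour: number; frequencyDays: number };

// Compute the next send time: today at the preferred hour, or
// `frequencyDays` later if that moment has already passed.
function nextSendTime(prefs: NotificationPrefs, now: Date): Date {
  const next = new Date(now);
  next.setHours(prefs.hour, 0, 0, 0);
  if (next <= now) {
    next.setDate(next.getDate() + prefs.frequencyDays);
  }
  return next;
}
```

A backend job runner would poll or queue against these timestamps; the summary content itself comes from the previous day's conversation data.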
FEATURES:
- hybrid chat interaction
- history overview
- mood tracker
- graph
- email notification system
- user dashboard and update section
ARCHITECTURE OVERVIEW
The FeelGPT system is organized into three primary components: Frontend, Backend, and Database.
The frontend modules handle all direct user interactions, including text input, camera-based emotion detection, and displaying the chatbot’s responses.
The backend manages the core logic, processes user input and emotion data, and integrates external services for Natural Language Processing (NLP) and emotion recognition.
The database modules manage all data storage, including user profiles, conversation history, and emotional data. These modules ensure data security and efficient retrieval.
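To illustrate how these layers meet, the sketch below shows a possible document shape for one chat message as it might be stored in Azure Cosmos DB for MongoDB; all field names are assumptions, not the project's actual schema:

```typescript
// Hypothetical document shape for one stored chat message.
interface MessageDoc {
  userId: string;
  role: "user" | "assistant";
  text: string;
  emotion: string | null; // null when the camera is disabled
  createdAt: string;      // ISO timestamp, for chronological retrieval
}

// Assemble a message document on the backend before persisting it.
function buildMessageDoc(
  userId: string,
  role: "user" | "assistant",
  text: string,
  emotion: string | null,
  now: Date = new Date()
): MessageDoc {
  return { userId, role, text, emotion, createdAt: now.toISOString() };
}
```

Keeping the emotion label on each message is what lets the history view, the mood tracker, and the daily email summary all read from the same collection.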
USED TECHNOLOGIES
- Node.js
- React
- Visage|SDK FaceAnalysis
- Phi-3.5 (SLM)
- Azure Cosmos DB for MongoDB
- Azure