dc.description.abstract |
EmoSense is a technology-driven solution designed to help visually impaired individuals better understand the emotions of those around them. Social interactions can be
challenging for individuals with visual impairments, and emotions play a crucial role in
how people communicate and interact with one another. Understanding the emotional
states of others can be especially difficult for visually impaired individuals. Therefore,
EmoSense aims to leverage artificial intelligence to provide haptic or audio
feedback describing the emotions of the person the wearer is interacting with.
The EmoSense system consists of a motor-driven camera and an AI system that detects human speech, then tracks, captures, and analyzes the facial expressions of individuals
in the wearer's surroundings. The system classifies each facial expression into one of five
emotion categories: happiness, sadness, anger, surprise, or neutral. The system delivers
this information as haptic vibration or audio that the wearer can use to navigate social
interactions more effectively.
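The feedback stage described above can be sketched in a few lines. The abstract names five emotion categories and two feedback modes; the specific vibration patterns, function names, and command tuples below are illustrative assumptions, not the project's actual encoding.

```python
# Hypothetical sketch of EmoSense's feedback stage (not the project's real code):
# translate a classifier's output index into a haptic or audio feedback command.

EMOTIONS = ["happiness", "sadness", "anger", "surprise", "neutral"]

# Assumed encoding: each emotion maps to a distinct vibration pattern,
# expressed as (pulse_count, pulse_duration_ms).
HAPTIC_PATTERNS = {
    "happiness": (1, 200),
    "sadness":   (2, 200),
    "anger":     (3, 200),
    "surprise":  (1, 500),
    "neutral":   (0, 0),
}

def feedback_for(class_index, mode="haptic"):
    """Map a predicted emotion class index to a feedback command tuple."""
    emotion = EMOTIONS[class_index]
    if mode == "audio":
        # Audio mode: speak the emotion label (e.g. via Bluetooth earphones).
        return ("speak", emotion)
    # Haptic mode: drive a vibration motor with the emotion's pattern.
    pulses, duration_ms = HAPTIC_PATTERNS[emotion]
    return ("vibrate", pulses, duration_ms)
```

In a deployed system these command tuples would be routed to a text-to-speech engine or a motor driver on the Raspberry Pi; here they simply make the mapping between classifier output and wearer feedback explicit.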
EmoSense represents a significant step forward in improving the quality of life for
visually impaired individuals: by giving them a clearer understanding of the emotional
states of those around them, it enables them to participate more fully in social events
and interactions.
Keywords: real-time emotion recognition, visually impaired, Raspberry Pi, webcam,
vibration motor, servo motor, 2-Axis Pan and Tilt Mount Kit, microphone, Bluetooth
earphones, OpenCV, machine learning, CNN, facial detection, haptic/audio feedback. |
en_US |