The Internet of Things (IoT) has already revolutionized how we interact with our environment, and the next frontier is the integration of emotion-sensing capabilities. This emerging field, known as the Internet of Emotional Things (IOET), promises a future where devices can perceive and respond to human emotions.
While the potential benefits of IOET are vast, they are accompanied by significant ethical concerns. Emotion-sensing technologies raise critical questions about privacy, data security, and the potential for manipulation. Unfettered access to emotional data could lead to targeted advertising, social discrimination, and even the erosion of free will.
This project, advised by Dr. Richmond Y. Wong, Dr. Noura Howell, Dr. Jay Bolter, Dr. Michael Nitsche, and Dr. Anne Sullivan of Georgia Institute of Technology, proposes user-centric design solutions, specifically 'Notices and Choices', as a crucial step towards responsible development of IOET. By empowering users with clear information and control over their emotional data, we can navigate towards a future where technology enhances human well-being without compromising our autonomy.
To bring this project to fruition, I first sought to understand the problem and define the research question. I then explored the problem along two parallel paths and finally materialized solutions in the form of a design space and prototypes.
I built several prototypes of a central hub device that allows users to control all emotion-sensitive technologies within their home. These prototypes showcase user-centric notices and choices in action.
Understand
The Internet of Emotional Things (IOET) imagines a future where technology senses and reacts to human emotions via devices that collect data from facial expressions, voice tone, and biometrics such as heart rate. While these technologies promise personalized experiences, experts worry about privacy, manipulation, and bias in emotion-reading algorithms.
Emotion-sensitive technologies recognize, interpret, and respond to human emotions using sensors, algorithms, and data analytics. They are applied in human-computer interaction, virtual assistants, mental health monitoring, and beyond.
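To make this sense-interpret-respond pipeline concrete, here is a minimal sketch of how it could fit together. The sensor types, emotion labels, thresholds, and function names below are hypothetical and purely illustrative, not drawn from any specific product.

```typescript
// Minimal sketch of an emotion-sensing pipeline. All type names,
// emotion labels, and thresholds are hypothetical illustrations.

type SensorReading =
  | { kind: "facialExpression"; landmarks: number[] }
  | { kind: "voiceTone"; pitchHz: number; energy: number }
  | { kind: "heartRate"; bpm: number };

interface EmotionalState {
  emotion: "calm" | "stressed";
  confidence: number; // 0..1
  inferredAt: Date;
}

// Stand-in for the recognition step; a real system would run a
// trained classifier over the fused sensor signals.
function inferEmotion(readings: SensorReading[]): EmotionalState {
  const elevated = readings.some(
    (r) => r.kind === "heartRate" && r.bpm > 100
  );
  return {
    emotion: elevated ? "stressed" : "calm",
    confidence: 0.6,
    inferredAt: new Date(),
  };
}

// The "respond" step: a device adapts its behavior to the state.
function respond(state: EmotionalState): string {
  return state.emotion === "stressed"
    ? "Dim the lights and lower the music volume."
    : "No adjustment needed.";
}

console.log(respond(inferEmotion([{ kind: "heartRate", bpm: 112 }])));
```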
Users lack clear choices and control over their emotional data. This makes it difficult for them to understand how it is collected and used, and hinders their ability to make informed decisions. Establishing clear user notice and choice mechanisms is crucial for building a responsible future for the Internet of Emotional Things.
So, how do we design for consent and awareness in IOET? To devise solutions, I explored the problem further along two parallel paths.
Explore
Design Futuring
This project involved creating fictional designs of IOET products for a company called Emo Sense. These products included digital and physical interfaces across various applications like chat apps, home improvement apps, and smart devices.
What kind of surveillance problems could arise in IOET?
By envisioning possible future scenarios, the project aimed to identify surveillance issues that could arise with widespread IOET adoption. These include:
Constant emotional monitoring through text, voice, and content engagement.
Manipulation and exploitation through targeted advertising based on emotions.
Data breaches exposing highly personal emotional states.
Emotional profiling leading to limited content options, restricted user experiences, and unfair discrimination.
Public Survey
The public survey played a crucial role in understanding public attitudes toward and concerns about IOET.
The results also showed that users want to be informed when companies or governments use their emotional data.
For an expanded look at the survey responses, please scroll to the end of the project.
Materialize
By following the proposed design space framework and prioritizing user-centric design, companies can create privacy notices that empower users, foster trust, and promote responsible data practices. The prototypes of the IOET Control Hub showcase how users encounter various types of notices and choices, and focus on four key dimensions: Type, Functionality, Timing, and Modality.
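One way to make these four dimensions concrete before walking through the prototypes is to model a notice as a data structure. The sketch below is a hypothetical encoding, not code from the prototypes themselves; all field names and values are assumptions chosen for illustration.

```typescript
// Hypothetical model of a privacy notice along the four design-space
// dimensions explored by the IOET Control Hub prototypes.

interface PrivacyNotice {
  type: "informational" | "rights-based" | "contextual"; // Type
  functionality: {
    // Functionality: what the user can actually do with the notice
    choices: Array<"allowAll" | "allowSelected" | "deny">;
    defaultChoice: "allowAll" | "allowSelected" | "deny";
  };
  timing: "atSetup" | "justInTime" | "periodic"; // Timing
  modality: "text" | "visual" | "audio"; // Modality
}

// Example instance resembling the 'Emo Home' notice described below.
const emoHomeNotice: PrivacyNotice = {
  type: "informational",
  functionality: {
    choices: ["allowAll", "allowSelected", "deny"],
    defaultChoice: "deny", // assumption: privacy-protective default
  },
  timing: "atSetup",
  modality: "text",
};
```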
The 'Emo Home' App displays a notice informing users about emotional data tracking and potential sharing with third parties. Users can choose to allow all tracking, select specific emotions to be tracked, or deny tracking entirely.
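A hedged sketch of how these three choices could be resolved in code follows; the emotion labels and the shape of the stored consent are assumptions for illustration, not the prototype's actual implementation.

```typescript
// Sketch of per-emotion consent resolution for the 'Emo Home' flow.
// Emotion labels and the consent shape are illustrative assumptions.

type Consent =
  | { mode: "allowAll" }
  | { mode: "allowSelected"; emotions: Set<string> }
  | { mode: "deny" };

function mayTrack(consent: Consent, emotion: string): boolean {
  switch (consent.mode) {
    case "allowAll":
      return true;
    case "allowSelected":
      return consent.emotions.has(emotion);
    case "deny":
      return false;
  }
}

// Example: the user allowed tracking only for "joy".
const choice: Consent = { mode: "allowSelected", emotions: new Set(["joy"]) };
console.log(mayTrack(choice, "joy"));   // true
console.log(mayTrack(choice, "anger")); // false
```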
The 'Emo Chat' App presents a "privacy rights-based choice" notice. The notice informs users of their rights to express themselves freely, control data collection, and learn about tone detection.
Users can access detailed information about the technology used and choose to review and edit their communication based on the app’s analysis. This contextualized choice considers the user’s relationship with the recipient and suggests turning off tone detection for positive relationships.
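As a rough sketch of that contextualized logic (the relationship labels and function names are hypothetical, not from the prototype), the suggestion could be computed like this:

```typescript
// Sketch of the contextualized suggestion described above: if the
// user's relationship with a recipient is positive, the app proposes
// disabling tone detection. Relationship labels are hypothetical.

type Relationship = "positive" | "neutral" | "strained" | "unknown";

interface ToneDetectionSuggestion {
  enableToneDetection: boolean;
  rationale: string;
}

function suggestToneDetection(rel: Relationship): ToneDetectionSuggestion {
  if (rel === "positive") {
    return {
      enableToneDetection: false,
      rationale:
        "You communicate comfortably with this contact; tone analysis may be unnecessary.",
    };
  }
  return {
    enableToneDetection: true,
    rationale: "Tone review is offered before sending.",
  };
}

console.log(suggestToneDetection("positive").enableToneDetection); // false
```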
The 'Emo Music' App explores modality through a visual illustration of human emotions. A notice informs users that their emotional data is used to enhance their music experience. Users can choose to allow or deny emotional data tracking, impacting the way the app curates music selection.
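A minimal sketch of how that allow/deny choice could branch the curation logic, with illustrative names throughout:

```typescript
// Sketch of how the allow/deny choice could gate curation in a player
// like 'Emo Music'; function and parameter names are illustrative.
function curatePlaylist(trackingAllowed: boolean, detectedMood?: string): string {
  if (trackingAllowed && detectedMood) {
    // Emotion-aware path: curation reflects the detected mood.
    return `Mood-matched playlist for "${detectedMood}"`;
  }
  // Privacy-preserving path: fall back to listening history alone.
  return "Standard playlist based on listening history only";
}
```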
How Do Notices and Choices Address The Problems?
Privacy notices and choices can explain data collection and offer opt-out options, limiting the creation of detailed emotional profiles.
Notices and choices can disclose how emotion data is used and allow users to control personalized experiences, reducing manipulation risks.
While notices and choices can't prevent data breaches, they can explain security measures and empower users with data encryption or deletion options; a deletion request is sketched after this list.
Notices and choices can help prevent explicit discrimination based on emotions, but stronger regulations are needed for a truly secure environment.
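As an illustration of the deletion option mentioned above, here is a hedged sketch of a user-initiated deletion request. The endpoint URL, payload shape, and function name are assumptions for illustration, not an actual product API.

```typescript
// Hypothetical sketch of a user-initiated deletion request for stored
// emotional data. The endpoint and payload shape are assumptions,
// not an actual product API.

interface DeletionRequest {
  userId: string;
  scope: "all" | { emotions: string[] }; // delete everything or by label
  requestedAt: string; // ISO 8601 timestamp
}

async function requestDeletion(req: DeletionRequest): Promise<boolean> {
  const res = await fetch("https://api.example.com/emotional-data/delete", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(req),
  });
  return res.ok; // true if the service accepted the request
}
```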
Limitations of Notices and Choices for Emotion AI:
Privacy notices might not disclose everything, making it hard to understand how emotional data is truly used.
Companies might still find ways to exploit emotions indirectly, even with user choices.
Data breaches can still expose users' emotional states despite privacy notices.
Subtle discrimination based on emotions might occur even with clear notices, requiring strong regulations.
User control over emotional data collection and usage might be limited by companies' design choices.
Frequent privacy pop-ups in emotion-sensitive tech can overwhelm users, leading to notice fatigue, rushed choices, and a weakened understanding of data usage.
Deep Dive Into User Research:
The public survey provided valuable insights into the public’s perception of Emotion AI and IOET. Recognizing both the potential benefits and the associated anxieties allows developers and policymakers to work towards a future of IOET that is transparent and respects individual privacy.
Conclusion
The Internet of Emotional Things and Emotion AI hold immense potential to transform our lives. However, this potential can only be realized if we prioritize user privacy, ethical considerations, and responsible design principles. Notices and choices are a powerful tool in this endeavor, empowering users to navigate the IOET landscape with confidence. By fostering a future built on trust and transparency, we can ensure that technology enhances human well-being and emotional intelligence, instead of diminishing them.
Thanks for viewing the project!