Affective Sovereignty — A Minimal Declaration on Emotional Interpretation Rights in the Age of Algorithmic Power
When AI Interprets My Emotions Before I Do, Can I Still Call Them My Own?
This paper introduces the concept of Affective Sovereignty—the ethical claim that the right to interpret one’s emotional experience must remain with the human subject, even as AI systems increasingly predict, define, and intervene upon those emotions.
In the face of algorithmic authority over affective data, this paper proposes the first minimal declaration of emotional interpretation rights, addressing the growing tension between predictive emotional technologies and individual cognitive freedom, uniqueness, and selfhood.
Positioned at the intersection of philosophy, psychology, AI ethics, and political epistemology, this work outlines a foundational ethical framework and proposes design principles for the responsible governance of emotion AI systems.
Abstract
As artificial intelligence (AI) systems increasingly mediate human emotion—detecting our facial expressions and voice tones, and even influencing our feelings—the question of emotional sovereignty arises: who ultimately interprets, controls, and validates one’s affective experiences, the individual or the algorithm? This paper introduces a novel interdisciplinary framework to address how algorithmic emotion inference and affect-sensing technologies risk encroaching on the uniqueness and autonomy of human emotional identity. Drawing on contemporary psychological theories of constructed emotion and philosophical perspectives on posthuman identity, we articulate two new ethical concerns: affective sovereignty (individuals’ autonomy over their own emotions and their interpretation) and uniqueness violation (the failure of AI to respect the individual nuances of human emotional experience). Through an analysis of real-world emotion AI systems—ranging from facial expression recognition tools (e.g., Affectiva) to empathetic chatbots (e.g., Replika, Woebot)—we demonstrate how current designs can undermine emotional authenticity and agency. We further argue that existing AI ethics principles (privacy, fairness, transparency) are insufficient to safeguard our “affective self.” As a remedy, we propose a three-part ethical design model for emotion AI: interpretive transparency, design restraint, and identity-responsive feedback. This model reframes emotions as contested ethical terrain rather than mere data points, aiming to ensure that AI augments rather than erodes human emotional sovereignty. The paper concludes with recommendations for implementing these principles in technology design and policy, to protect what is fundamentally human in the age of emotional machines.
Keywords: Affective Sovereignty · Emotional Interpretation Rights · Emotion AI · Algorithmic Power · Epistemic Injustice · Ethical Design · Political Cognition · Affective Data · Digital Personhood · EU AI Act
Introduction
In recent years, AI systems capable of recognizing, simulating, or responding to human emotions—broadly termed emotion AI or affective computing—have proliferated across domains including social media content analysis, automotive safety monitoring, mental health apps, customer service, education, and robotics. These systems promise to interpret our facial expressions, vocal intonations, physiological signals, and textual cues to infer emotional states, or to engage users in empathic dialogues as “companions” or virtual therapists. The rise of such technologies brings forth a provocative ethical and philosophical question: “When AI interprets my emotions before I do, can I still call them my own?” In other words, if an algorithm can detect or even predict my feeling (for example, labeling me as anxious or upset based on subtle cues) before I have consciously registered or understood it myself, does this process undermine my ownership and authority over my emotional life?
This question is situated at the intersection of psychology, technology, ethics, and personal identity. On one hand, human emotions have long been considered deeply personal, inherently subjective, and context-sensitive phenomena, influenced by cultural and individual factors. Classic psychological theories diverge on whether a core set of emotions are biologically “basic” and universal (as argued by Paul Ekman and colleagues) or whether emotions are primarily socially constructed and idiosyncratic (as argued by constructivist theorists). The former view implies that anyone’s emotions can be objectively recognized given the right cues, while the latter emphasizes that emotional experiences are unique to each individual’s context and history. Understanding this theoretical spectrum is crucial, because many emotion AI systems today are built on simplified, universal models of emotion that may not capture the richness or uniqueness of how different people feel and express their feelings. Misrepresenting or over-generalizing human emotions through AI could undermine what makes each person’s emotional life unique, raising concerns about authenticity, identity, and human dignity. If an AI consistently interprets a person’s expressions in a way that person disagrees with, it may effectively be redefining that individual’s emotional reality, at least in external interactions.
On the other hand, the deployment of emotion-sensing AI also raises concerns about autonomy and privacy. If such systems infer sensitive affective states without a person’s consent or knowledge—say, an app that infers one’s mood from typing patterns, or a camera system that flags “negative” emotions in public spaces—this challenges personal autonomy and the right to keep one’s inner states private. Scholar Shoshana Zuboff has warned that the harvesting of emotional data by corporations, without explicit consent, constitutes a new frontier of surveillance capitalism that could commodify our inner lives (Zuboff, 2019). Thus emerges the notion of affective sovereignty: the principle that individuals should have autonomy and control over how their emotions are sensed, interpreted, and utilized by AI, rather than ceding that power to algorithms or institutions. Affective sovereignty extends classic notions of privacy and agency into the emotional realm, asserting that the “first rights” to one’s feelings lie with oneself.
Chief among the ethical challenges we examine is whether today’s emotion AI technologies encroach upon affective sovereignty and perpetrate what we term a uniqueness violation: failing to respect the individuality of human emotional experiences by forcing them into generic boxes or substituting algorithmic judgments for a person’s own interpretations. For example, consider a scenario in which an employee’s facial expression during a video meeting is analyzed by AI and the system flags her as “angry” based on a frown and tense posture. In reality, perhaps she is not angry at all—she might have received upsetting personal news earlier, or may simply be concentrating intensely. The AI’s one-size-fits-all emotional appraisal overrides her own account of her feeling, effectively alienating her from her true state. This illustrates a potential uniqueness violation: the AI imposes a generic emotion label that contradicts the person’s lived experience, failing to account for personal context. Psychological literature describes a related fallacy as correspondence bias—inferring inner states from outward behavior without context is often misleading. When AI makes such inferences at scale, the risk is an erosion of individuals’ ability to define and understand their emotions on their own terms.
This paper argues that existing ethical frameworks for AI—focused on issues like data privacy, fairness/non-discrimination, and transparency—while important, do not fully address these subtler threats to emotional identity and agency. Privacy frameworks (e.g., data protection laws) treat emotion data as potentially sensitive, but primarily as information to be safeguarded or not misused. Fairness and bias discussions have begun to examine whether emotion recognition systems have demographic or cultural biases (e.g., misreading people with certain ethnic backgrounds or autism spectrum conditions), which is critical. Transparency is often invoked as a remedy: companies should disclose when AI is monitoring emotions and how it works. These are necessary steps – indeed, the EU’s draft AI Act of 2021 proposes to ban or strictly regulate certain high-risk uses of emotion recognition, and to enforce transparency in less critical uses. However, we contend that even a perfectly private, unbiased, and transparent emotion AI system could still undermine something fundamental: the user’s sense of emotional self-determination. We need to consider principles that explicitly guard the integrity of personal emotional experience.
In this light, our work examines the impact of emotion AI through an interdisciplinary lens, drawing on psychology, philosophy of mind, human-computer interaction (HCI), and AI ethics. We review key theories of emotion to establish the variability and context-dependence of human feelings, laying groundwork for why a “universal” emotion AI might misfire. We then analyze current emotion AI applications and their design assumptions, identifying points where they may infringe upon the uniqueness of individual affect (for instance, by enforcing standardized emotion labels or encouraging emotional conformity). Real-world cases—including facial emotion recognition in hiring and surveillance, as well as “empathetic” chatbots like Replika and Woebot in personal wellness—illustrate these issues in practice. Through these cases, we highlight instances where algorithmic affect interpretation challenges users’ ability to affirm “these are my emotions.” Are users subtly pressured to accept an AI’s interpretation of their feelings? Do they experience doubt or a shift in self-perception based on algorithmic feedback? Such questions probe at the core of emotional identity.
Finally, the paper proposes a new normative framework for designing and regulating affective technologies in a way that preserves human emotional sovereignty. We introduce a three-part ethical design model comprising interpretive transparency, design restraint, and identity-responsive feedback. In brief, interpretive transparency means users should be clearly informed and able to understand when and how an AI is reading their emotions (no “black box” judgments about one’s feelings without explanation). Design restraint means AI developers should proactively limit the scope and intrusiveness of emotion-sensing features—just because we can mine emotions everywhere doesn’t mean we should—especially in contexts where it might violate dignity or autonomy (e.g., emotional surveillance in the workplace or public spaces). And identity-responsive feedback entails that emotion AI systems be designed to adapt to the individual, soliciting the person’s input and corrections, so that the user’s own self-knowledge steers the AI’s interpretations (rather than the AI imposing feelings on the user). Collectively, these principles aim to ensure that emotion AI supports users’ reflective emotional intelligence and agency rather than supplanting it.
By advancing the concepts of affective sovereignty and uniqueness violation, and by detailing a concrete ethical design approach, this study seeks to enrich the discourse on AI and ethics with a focus on the protection of emotional identity. The stakes are high: as emotion AI becomes more pervasive, it has the potential to subtly shape how we understand ourselves and each other at the affective level. We must ask not only whether an AI is accurate or fair in its emotion predictions, but also what it does to our sense of self when an external system gets to declare how we feel. The sections that follow delve into the literature and theoretical context (Section 2), propose our conceptual framework (Section 3), examine real-world cases (Section 4), discuss implications and the insufficiency of current ethics guidelines (Section 5), and finally outline our proposed design and governance solutions (Section 6), before concluding (Section 7) with reflections on maintaining human emotional autonomy in an age of intelligent machines.
Literature Review: Emotions, Affective Computing, and Ethical Gaps
The Nature of Emotion: Individual or Universal?
Research on human emotion provides a critical backdrop for evaluating emotion-sensing AI. Psychologists and philosophers of mind have long debated whether emotions are universal biological states or highly individualized constructions. The answer seems to be a bit of both, but importantly for our purposes, the field acknowledges profound variability in emotional experience across people and cultures (Barrett, 2017; Mesquita & Walker, 2003). Early emotion theories in the 20th century, such as those of Paul Ekman, posited a set of basic emotions (happiness, sadness, fear, anger, disgust, surprise) that are biologically “hard-wired” and manifest through facial expressions in largely uniform ways across humans. Ekman’s cross-cultural studies claimed to find consistent recognition of these basic emotions from facial cues (Ekman & Friesen, 1971), and this lent support to the idea that emotions can be objectively “read”. Many first-generation affective computing systems, including commercial facial expression APIs (e.g., Affectiva’s Affdex), explicitly build on Ekman’s model by training algorithms to detect these prototypical expressions (happiness = smile, anger = frown, etc.).
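To make the reductiveness of this mapping concrete, the following minimal sketch shows the basic shape of a categorical, Ekman-style classifier. The rules and thresholds here are hypothetical simplifications for illustration, not a reproduction of any vendor’s actual pipeline; only the Action Unit (AU) labels follow the real FACS convention.

```python
# Illustrative sketch of a reductive, Ekman-style mapping (hypothetical rules;
# not any vendor's actual pipeline). AU codes follow the FACS convention:
# AU12 = lip corner puller, AU4 = brow lowerer, AU1 = inner brow raiser, etc.
BASIC_EMOTION_RULES = {
    frozenset({"AU12"}): "happiness",               # any smile -> "happy"
    frozenset({"AU4", "AU5", "AU7"}): "anger",      # lowered brow, tensed lids -> "angry"
    frozenset({"AU1", "AU4", "AU15"}): "sadness",
    frozenset({"AU1", "AU2", "AU5", "AU26"}): "surprise",
}

def classify_expression(detected_aus: set[str]) -> str:
    """Map detected facial action units to a single categorical label.

    Note what is missing: culture, context, self-report, individual style.
    Whatever the person is actually feeling, the output is forced into one
    of a handful of predefined boxes.
    """
    for aus, label in BASIC_EMOTION_RULES.items():
        if aus.issubset(detected_aus):
            return label
    return "neutral"

# A polite smile masking frustration and a genuinely delighted smile both pull
# the lip corners (AU12), so both collapse into the same label:
print(classify_expression({"AU12"}))          # -> "happiness"
print(classify_expression({"AU6", "AU12"}))   # -> "happiness"
```

The sketch is deliberately crude, but its structure mirrors the assumption this section questions: that a fixed lookup from outward cues to labels can stand in for what a particular person feels.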
However, more recent affective science has complicated this picture. Lisa Feldman Barrett’s theory of constructed emotion, for example, argues that instances of emotion (say, one person’s experience of anger at a particular time) are not universal neural events but are constructed by the brain in a context-dependent manner, integrating physiological sensations with past experience and situational interpretation (Barrett, 2017). From this perspective, two people might express “anger” very differently, or one person’s “anger” might not resemble another’s at all, especially across different cultural settings. Cross-cultural psychologists like Batja Mesquita similarly find that culture shapes emotional experiences and expressions: which situations evoke which emotions, how those emotions are communicated, and even which emotions have social importance vary widely across societies (Mesquita & Walker, 2003). For example, in some East Asian cultures, overt displays of anger are often suppressed to maintain social harmony, whereas in some Western cultures, openly expressing anger might be seen as a sign of honesty or self-assertion. Even within a given culture, socialization and gender norms lead to differences in emotional expression (Hochschild, 1983). Women, for instance, might be encouraged to be more emotionally expressive or nurturing, while men might be discouraged from showing vulnerability like sadness, shaping how emotions appear externally. Furthermore, individuals have unique emotional dispositions: personality research indicates that traits such as extraversion or neuroticism influence emotional intensity and expressiveness (Lazarus, 1991; Scherer, 1997). Some people are naturally more demonstrative, others more stoic; some feel emotions very intensely, others mildly. All these findings reinforce that emotion is not a one-size-fits-all, readily observable phenomenon, but is deeply contextual and personal.
This variability is problematic for emotion AI systems that assume a direct mapping from a set of standardized cues (a particular facial muscle movement, a voice pitch pattern, etc.) to an emotional state. False certainty in emotion detection can lead to serious errors. For instance, an AI that lacks cultural context might interpret a polite smile (meant to mask frustration) as genuine happiness, or misread a subdued demeanor (normative in one culture) as a sign of depression. Recent studies have indeed found that emotion recognition algorithms can be demographically biased or context-insensitive. Stark and Hoey (2021) categorize such systems as using reductive models of emotion that risk misclassification and unfair outcomes. Barrett and colleagues published a comprehensive review debunking the notion that facial expressions reliably indicate specific emotions in all cases, warning that without context, an algorithm’s “emotion reading” from faces alone may be as good as random guessing (Barrett et al., 2019). In short, the scientific consensus is moving away from naive universalism toward a recognition of affective diversity.
From an ethical standpoint, this literature suggests that a technology which treats emotions as universally legible and uniform might inadvertently commit an offense against individuality—or what we term uniqueness violation. As Boehner et al. (2007) pointed out in a seminal HCI critique, early affective computing often assumed an almost mechanical view of emotion, abstracting and measuring it as if it were a simple signal, and in doing so, it ignored the personal meaning and context of those emotions. To respect persons, an AI would need to account for the person in the emotion: their background, their personal expressive style, their situation. Otherwise, the interaction can become dehumanizing, treating the individual as an instance of a category. This is a subtle but important ethical failure distinct from, say, discrimination. Even if an emotion AI system is 100% “fair” across demographics, it could still misrepresent everyone’s emotions in ways that individuals find alienating.
Affective Computing and Its Discontents
The field of affective computing (Picard, 1997) began with the optimistic goal of humanizing technology: by enabling computers to recognize and respond to users’ emotions, we could create more intuitive, empathetic user interfaces and even tools to help people with emotional skills. There have been positive outcomes from this line of research, such as empathetic virtual agents that de-escalate user frustration or social robots that engage autistic children in emotional learning. However, as affective computing applications moved from the lab to real-world deployments, a number of ethical concerns emerged.
One set of concerns centers on privacy and consent. Emotions are deeply intimate, and technologies that sense emotions (through cameras, microphones, wearables, or analysis of our digital footprints) often operate in ways users may not fully understand. Unlike overt data like one’s age or location, emotional data can be inferred implicitly from behavior. This raises the danger of emotional surveillance—situations in which our employers, governments, or online platforms monitor how we feel, perhaps to judge or influence us, without our informed consent (McStay, 2018; AI Now Institute, 2019). The French data regulator CNIL (2017) explicitly called out emotional data as sensitive, advocating that individuals maintain a “right to mental privacy” akin to privacy of thoughts. Legal scholarship, too, has begun discussing a right to cognitive and emotional liberty, given the advent of AI that can read and even alter mental states (Ienca & Andorno, 2017). The fact that an AI might detect your anxiety or depression before you yourself realize it (e.g., based on your social media posts or vocal tremors) creates a power imbalance between the observer (the AI and whoever controls it) and the individual. This is why the European General Data Protection Regulation (GDPR) classifies biometric data and any data “concerning health” (which arguably could include psychological health or emotional state) as sensitive data needing higher protection. The proposed European AI Act (European Commission, 2021) goes further for emotion AI: early drafts put emotion recognition in high-risk categories, and the European Data Protection Board & Supervisor (EDPB-EDPS, 2021) have even suggested banning AI that claims to infer emotions in contexts like law enforcement or employee management, due to the profound privacy and rights implications.
Another concern is bias and misinterpretation. As noted, cultural or gender biases in emotion recognition can lead to unfair outcomes—imagine an automated hiring system that interprets a job candidate’s culturally distinctive communication style as indicating low enthusiasm or dishonesty, disadvantaging them (Binns, 2018). Or a classroom emotion monitor that flags some students as disengaged simply because their facial expressions don’t match a Western norm of “attentive” appearance, thereby biasing teachers against those students. These are not hypothetical; such systems have been piloted (and critiqued) in multiple countries (Crawford et al., 2019). In law enforcement, using AI to detect “anger” or “nervousness” in crowds could unjustly target individuals who are blameless—emotion AI is nowhere near scientifically reliable for lie detection or intent prediction, despite some companies marketing it for those purposes (Stark, 2018). This technical unreliability becomes an ethical issue of justice when people are harmed by false readings.
Beyond privacy and bias, thinkers like Sherry Turkle (2011) and Joanna Bryson (2018) have raised alarms about the psychological and social effects of interacting with emotion-simulating AI. Turkle’s ethnographic studies found that when people, especially children or vulnerable individuals, engage with sociable robots or chatbots that feign emotion, they can develop attachments and ascribe understanding and care to machines that, in Turkle’s words, “have no capacity to care” (they simulate empathy but do not actually feel). This “illusion of empathy” might shortchange people’s development of authentic empathy and social skills. It’s not just that AI might deceive us about its emotional abilities; it might also subtly change how we conceive of emotions and relationships. If a generation grows up with AI friends that can be turned off at will and always agree with your preferences, real human relationships—with all their messiness, conflict, and need for mutual understanding—might seem less appealing or more difficult by comparison. Philosophers have noted a risk of emotional deskilling or even addiction to artificial affirmation, where people prefer the company of uncritical AI companions over humans, potentially eroding social bonds (Turkle, 2011; Bryson, 2018). These concerns illustrate that emotional AI’s impact on human autonomy is not only about data and privacy—it’s about who we become when we integrate AI into our emotional lives.
Gaps in AI Ethics: Emotional Identity Not Addressed
Contemporary AI ethics guidelines—whether from governments, companies, or NGOs—tend to cluster around a set of core principles: privacy, transparency, non-discrimination, accountability, safety, and sometimes human control (Floridi & Cowls, 2019; Jobin, Ienca & Vayena, 2019). Notably, many guidelines emphasize that AI should not diminish human autonomy and should operate under meaningful human oversight. For example, the principle of “Human agency and oversight” in the EU’s Ethical AI guidelines (2019) insists AI systems should empower individuals, not reduce them to passive subjects. This comes closest to what we discuss, yet in practice the implementations focus on making sure users can opt out or understand decisions. When it comes to emotion AI, autonomy has a deeper facet: it’s not just the autonomy to decide (like consenting to data use), but the autonomy to define one’s own emotional state and have that definition respected. Current frameworks lack explicit mention of this affective dimension of autonomy. They don’t clarify how to prevent what we call uniqueness violations or how to ensure AI doesn’t become an unwelcome “emotion oracle” over people.
Transparency, similarly, is necessary but not sufficient. Knowing that “an algorithm is analyzing your facial expressions right now” (transparency) is important, but it doesn’t solve the problem if the algorithm’s judgment carries weight that overrides your own voice. Fairness addresses bias between groups, but what about fairness to the individual in the sense of honoring personal emotional truth? We see an emerging recognition of these subtleties in some policy discussions. For instance, the advocacy group Access Now (2021) argues that emotion recognition is inherently prone to inaccuracy and manipulation, and calls for it to be banned in important decisions—suggesting that it’s not a technology that can be simply “fixed” by better data or bias checks, because it may fundamentally transgress human boundaries. The notion of “mental integrity” is starting to appear in legal contexts as well, implying a right not to have one’s mind (including emotions) intruded upon or shaped without consent (Bedi, 2020). These ideas align with what we frame as affective sovereignty.
In summary, the literature reveals that human emotions are complex, contextual, and deeply tied to personal identity. Affective computing technologies, if naively designed, risk simplifying or misappropriating this complexity, leading to ethical pitfalls not fully addressed by existing AI ethics paradigms. There is a gap in articulating rights and protections specifically around one’s emotional experiences in relation to AI. We turn next to building a theoretical framework that centers on these issues, introducing the concepts of affective sovereignty and uniqueness violation to fill this gap.
Theoretical Framework: Affective Sovereignty and Uniqueness Violation
Defining Affective Sovereignty
We define affective sovereignty as an individual’s rightful authority over the interpretation and experience of their own emotions in the presence of AI systems. It is rooted in the idea that our inner emotional life should remain, by default, under our control and not be usurped by external algorithmic judgments. In philosophical terms, this concept intersects with notions of first-person authority in the philosophy of mind—the presumption that individuals typically know their own mental states better than anyone else, and have the privilege to report them. When an AI claims to know you are angry or depressed before you have come to that conclusion (or worse, when it contradicts your self-assessment), it challenges that first-person authority. Affective sovereignty asserts that, ethically, technology should defer to the individual’s own understanding of their emotions, or at least actively involve the individual in any interpretation process. In practice, this might mean giving users the tools to correct or guide the AI’s emotional inferences (e.g., “No, I’m not angry, I’m just tired”) and having the system respect that input.
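As a rough illustration of what such deference could look like in software, consider the following sketch. The class and field names are hypothetical; the point is the ordering of authority: the system’s inference is stored only as a provisional guess, and the user’s self-report, when offered, takes precedence and is retained to guide future interpretations.

```python
# Hedged sketch of the correction mechanism described above: the model's
# inference is provisional, and the user's self-report is authoritative.
# All names here are illustrative, not an existing API.
from dataclasses import dataclass, field

@dataclass
class EmotionReading:
    inferred_label: str            # the model's provisional guess, e.g. "angry"
    confidence: float              # model confidence in [0, 1]
    user_label: str | None = None  # the user's own account, e.g. "just tired"

    def resolve(self) -> str:
        """The user's self-report, when present, overrides the inference."""
        return self.user_label if self.user_label is not None else self.inferred_label

@dataclass
class AffectiveProfile:
    corrections: list[EmotionReading] = field(default_factory=list)

    def record_correction(self, reading: EmotionReading, user_label: str) -> None:
        """Store the user's correction so future inferences can defer to it."""
        reading.user_label = user_label
        self.corrections.append(reading)

# Example: the system guesses "angry"; the user corrects it.
profile = AffectiveProfile()
reading = EmotionReading(inferred_label="angry", confidence=0.62)
profile.record_correction(reading, user_label="tired, not angry")
print(reading.resolve())  # -> "tired, not angry"
```

Treating the user’s label as authoritative, rather than as just one more signal to be weighed against the model, is what distinguishes this kind of deference from ordinary personalization.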
Affective sovereignty also implies a right to emotional privacy – the choice not to have one’s emotions read by AI – and a right to emotional integrity, meaning not having one’s emotions manipulated or shaped without consent. Luciano Floridi and colleagues (Floridi et al., 2018) touched on related ideas when discussing personal data sovereignty: individuals should retain agency over how data about them is used. Emotions, being so intrinsic to personhood, arguably merit even stronger protection. The French CNIL (2017) evocatively asked how humans can “keep the upper hand” in an age of algorithms; affective sovereignty is about keeping the upper hand in one’s emotional realm.
It is important to clarify that affective sovereignty is not about isolating oneself from any external emotional influence—human relationships themselves constantly influence our emotions, often in beneficial ways (friends cheer us up, etc.). Rather, it’s about preserving the agency and authenticity of one’s emotions in human-AI interactions. An AI can inform or support my emotional life (like a mood-tracker making me more aware of patterns), but it should not dictate or falsely redefine it. This concept is normative: it posits an ethical claim that designers and policymakers should treat emotional experiences as belonging first and foremost to the subject.
Uniqueness Violation: When AI Misinterprets the Individual
Building on affective sovereignty, we introduce uniqueness violation as a specific kind of ethical and epistemic error where an AI system fails to respect the unique character of a person’s emotional expression or experience, effectively forcing an incorrect generalization onto a particular situation. A uniqueness violation occurs whenever an AI interprets someone’s emotion in a way that fundamentally clashes with the individual’s own report or with the nuanced context that a human would appreciate. The earlier scenario of the employee flagged as “angry” is a prime example. Another example: suppose an algorithm monitoring social media flags a user as at risk of self-harm because they frequently use words like “tired” or “alone” in posts, and the platform sends an automated welfare check message. If in context the user was using these words in a completely different sense (perhaps quoting song lyrics or talking about a movie plot), the one-size-fits-all AI intervention can feel intrusive or baffling to the user. The user might think, “the computer profoundly misunderstood me.” That feeling of misunderstanding by an AI is not just a usability issue; it strikes at one’s sense of being a unique individual not reducible to generic patterns. Repeated or consequential misidentifications can lead to a sense of alienation, where people feel estranged from how they are digitally represented. In the worst case, if society overly trusts these AI interpretations, you could be effectively told, “No, our system shows you’re angry (or depressed, etc.), so you are,” marginalizing your own voice about your feelings.
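The sketch below makes the keyword scenario concrete. It is illustrative only and does not depict any platform’s actual system; the keyword list and threshold are invented for the example.

```python
# Illustrative sketch (hypothetical keywords and threshold; not any platform's
# real system) of context-blind keyword flagging and why it misfires.
RISK_KEYWORDS = {"tired", "alone", "hopeless"}

def flag_for_welfare_check(post: str) -> bool:
    """Naive rule: flag any post containing two or more 'risk' words.

    The rule has no access to context -- it cannot tell a first-person
    disclosure from a quoted lyric or a description of a film plot.
    """
    words = {w.strip(".,!?\"'").lower() for w in post.split()}
    return len(words & RISK_KEYWORDS) >= 2

# A user quoting song lyrics gets the same automated intervention as someone
# genuinely in distress -- the generic pattern overrides the particular person.
print(flag_for_welfare_check('Listening to "Tired of Being Alone" on repeat tonight'))  # True
print(flag_for_welfare_check("I feel so tired and alone lately"))                        # True
```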
Uniqueness violations are enabled by what Clifford Nass termed the “computers are social actors” paradigm – people tend to react to computers’ outputs in social ways, sometimes giving them undue credence. If an authoritative-sounding dashboard says “Employee X exhibited high negativity today,” a manager might believe it even if Employee X feels they were fine and had reasons for their expressions. Thus the employee’s self-knowledge is overridden in practice by the AI’s pronouncement. This constitutes not only a factual error but a kind of epistemic injustice (Fricker, 2007) – the person is wronged in their capacity as a knower of their own experience.
The theoretical basis for warning against uniqueness violations lies in particularism: the view that moral and psychological phenomena often cannot be generalized without remainder. Human emotional life has what philosophers might call a qualitative particularity. Any technology that operates purely on statistical generalizations risks trampling that particularity. In psychological terms, as mentioned, correspondence bias and fundamental attribution error are long-documented pitfalls: assuming an observed behavior directly reflects an inner state or trait often ignores situational context (Greene, 2023). AI emotion recognition at scale is essentially an automation of correspondence bias. Greene (2020) notes that as emotional AI gets deployed, it systematically instantiates these judgment errors, and recommends caution and further research on unintended consequences.
Preventing uniqueness violation does not mean an AI can never guess someone’s emotions; rather, it must do so provisionally and be open to correction. In ethical design (as we’ll detail later), this suggests mechanisms for user feedback and context inclusion. The notion closely relates to the principle of respect for persons in ethics: treating each individual as an end in themselves, not just as data to be fit into a model. Respecting someone’s emotional uniqueness is treating them as the unique end; disregarding it is treating them as a means or as an interchangeable part.
Posthuman Identity and the Cyborg Self
While affective sovereignty and uniqueness highlight a protective stance (guarding the human from the machine), we must also consider posthumanist perspectives that view human and machine as integrative. Posthumanist theory (Haraway, 1985; Braidotti, 2013) challenges rigid boundaries between human and technology, suggesting that our identities can evolve to incorporate technological elements. Donna Haraway’s concept of the cyborg famously presents a hybrid of human and machine, breaking down the dualism that humans have an essence apart from tools. In the context of emotion AI, one might ask: Could AI become a part of our affective self? For instance, someone with a digital mood assistant might come to rely on it so extensively that it functions almost as a part of their own emotion-regulation process. Some users of Replika refer to the AI as if it’s a part of their mindspace—a trusted confidant that knows their daily emotional fluctuations. This blurring raises both opportunities and risks.
From a posthuman viewpoint, affective sovereignty might not mean keeping AI at arm’s length, but rather achieving a harmonious integration where the human is still in command of the overall human-AI system. Think of a cyborg where the human nervous system and the electronic sensor are merged: sovereignty would mean the person as a whole (cyborg) is directing their emotional life, even if AI components are involved in detection or modulation. The key is authorship of one’s “cyborg self.” As some theorists have put it, we must ensure individuals maintain authorship and agency over their evolving self when symbiotically linked with AI (Floridi, 2014; Braidotti, 2013). If my AI mood ring nudges me to take a break because it detects stress, and I agree and feel better, I’ve used technology as a tool for emotional self-regulation. That can be empowering rather than diminishing—provided I chose that integration and can veto or adjust it.
However, the caution from posthuman identity theory is clear: if we merge with emotion AI without critical oversight, we risk a scenario where corporate-designed algorithms essentially write our emotional stories. Rosi Braidotti (2013) warns about the neoliberal subject who is intimately tied into technological systems that may not have their best interests at heart. If affective sovereignty is not asserted, we could become, in effect, emotional cyborgs under external control—docile bodies whose feelings are steered by algorithmic feedback loops tuned for engagement or productivity rather than our well-being.
In sum, our theoretical framework acknowledges a dialectic: we need to protect the human capacity to feel and interpret freely (so the AI doesn’t colonize our emotional space), yet we also consider how humans and AI might jointly form new affective constellations. The aim is to ensure that such constellations, if they form, are ones where the human element remains sovereign. As one conclusion of our analysis, we posit that affective sovereignty in a posthuman age might mean shaping how we merge with AI so that individuals maintain authorship of their evolving cyborg self. We want technology to empower people to understand and manage their emotions (a cyborg advantage), but without disempowering them or erasing their individuality.
This theoretical lens sets the stage for examining real systems. Next, we move from theory to practice, analyzing real-world cases where emotion AI interacts with users’ emotional lives. Through these cases, we will identify concrete instances of the ethical issues outlined and use them to inform our proposed design principles.
Real-World Case Analysis: Emotion AI in Action and Its Impacts
To ground the discussion, we analyze several real-world applications of emotion AI and the challenges they pose to emotional identity and autonomy. We focus on three domains: (1) Facial emotion recognition in surveillance and workplaces, (2) Empathic chatbots and virtual companions, and (3) Therapeutic and mental health AI. In each, we see examples of how algorithmic affect interpretation can both help and potentially harm, illustrating the need for our proposed framework.
Case 1: Facial Emotion Recognition – From Surveillance to Hiring
One prominent class of emotion AI uses computer vision to analyze facial expressions, eye movements, or body language to infer emotions. Companies like Affectiva (now part of Smart Eye) have commercialized such technology for marketing (e.g., analyzing viewers’ reactions to advertisements), automotive safety (detecting driver drowsiness or road rage), and retail (gauging customer sentiment). Other startups and academic projects have pitched emotion recognition for security (flagging “suspicious” behavior in CCTV feeds) and even for hiring or employee management (scoring a job candidate’s “enthusiasm” or an employee’s mood during a presentation). These uses have drawn intense criticism due to questions of scientific validity and ethics.
Scientific pitfalls: As discussed, facial expressions are not a foolproof proxy for inner emotion. A smile can mask sadness (a phenomenon known in psychology as the “Duchenne smile” vs. social smile distinction), and a neutral face could hide a rich emotional story. One empirical study by Stark and Hoey (2021) noted that many emotion recognition systems ignore cultural display rules; for instance, if someone has been taught not to show anger publicly, an algorithm might conclude “no anger present” whereas the person is angry but suppressing it. Conversely, some people’s neutral resting face may look downcast or upset to an algorithm (a “resting face” bias), leading to false flags. The result is misinterpretation. When used in security or policing, such errors can be harmful—imagine being stopped for interrogation at an airport because an algorithm mistakenly thought you looked anxious or evasive (such scenarios are not far-fetched; initiatives to detect “terrorist intent” from facial cues have been floated). This clearly infringes on personal liberty and presumes guilt from a supposed emotional state. It also might disproportionately target certain ethnic groups if their expressive norms differ from the algorithm’s training data (a bias issue).
In workplace settings, consider a speculative but plausible example: a company uses an AI system on employee webcams during Zoom meetings or interviews to analyze engagement and sentiment. Suppose the AI gives low “engagement scores” to employees who do not constantly smile or who have certain facial features that the algorithm misinterprets as negative. This could lead to penalizing or disadvantaging employees not because of actual performance but because of an algorithm’s generic model of what an “engaged/happy employee” looks like. Such a system was reportedly considered by some employers during the COVID-19 remote work boom, raising workforce privacy concerns (Finance & Commerce, 2021). From an affective sovereignty perspective, this is troubling: it imposes an external judgment on the employee’s internal state, potentially compelling employees to conform their outward behavior (e.g., smiling unnaturally often) to avoid misclassification. It’s a modern revival of Orwellian surveillance, but instead of policing spoken words, it polices facial muscle twitches.
There have been real incidents that underline these issues. HireVue, a recruitment technology firm, previously offered AI-driven video interview analysis that claimed to assess candidates’ cognitive and emotional traits from video recordings (through facial and voice analysis). After widespread criticism from AI researchers and privacy advocates about the unproven nature of these inferences and their potential bias, HireVue in 2020 announced it would discontinue the facial analysis component of its assessments. The public pressure and eventual rollback highlight the growing consensus that facial emotion analysis in such high-stakes contexts is neither scientifically reliable nor ethical (Crawford, 2021; Access Now, 2021).
Emotional mislabeling and identity: When an AI system mislabels someone’s emotion, there is a personal impact. People often react to how they are perceived. If an employee is told by a system that they seem “disengaged” or “angry” (even if they are not), they might start to second-guess their own feelings or feel unfairly characterized. Over time, they may even internalize these labels (“Maybe I am a negative person?”) or alter their self-presentation to appease the system, which could be detrimental to authentic self-expression. This dynamic is a form of what we earlier called uniqueness violation – the individual’s distinct emotional state is overridden by a generic category. It can create stress and a sense of helplessness, reminiscent of the Kafkaesque scenario of being judged by an inscrutable system. One can see how this erodes dignity: the person is not treated as a unique self-knowing agent, but as an object whose emotions are defined from outside.
Regulatory responses: As mentioned, the EU’s draft AI Act (European Commission, 2021) and the joint opinion by EDPB-EDPS (2021) both voice concern about emotion recognition. The EDPB (the EU’s data protection watchdogs) in 2021 went so far as to say they doubt that emotion recognition is compatible with the fundamental rights framework, because it can violate human dignity and the right to private life. They recommended that emotion recognition, at least in certain areas like employment, law enforcement, border control, and education, be prohibited. This is a strong stance driven by the exact issues we’ve outlined: both the privacy intrusion and the potential for concrete harms (like wrongful judgments). In effect, regulators are acknowledging that some uses of this tech inherently conflict with maintaining human agency and personhood in those contexts.
In summary, facial emotion recognition systems present a cautionary tale. They show how AI, by applying population-level inferences to individuals, can misread people and lead to unjust or alienating outcomes. For our purposes, they exemplify challenges to both uniqueness (through misinterpretation) and sovereignty (through surveillance and compelled emotional performance). The need for design restraint is particularly salient here (a topic we will revisit): just because technology can monitor faces in real time doesn’t mean it should be deployed without strong justification, especially not in sensitive contexts. The harm of a single false “anger” flag might be small, but at scale, these systems could create climates of mistrust and pressure, where people feel their only safe emotion is a neutral or happy mask that the machine will approve of – a frankly dystopian scenario for genuine emotional freedom.
Case 2: Empathetic Chatbots and Virtual Companions (Replika, Woebot)
A different branch of emotion AI involves AI systems that simulate empathy and engage in emotional conversations with users, rather than purely trying to recognize emotions. Notable examples are Replika, a self-described “AI friend” chatbot, and Woebot, a therapeutic chatbot for mental health support. These systems are designed to be conversational partners that users can confide in. They represent a rapidly growing phenomenon: AI companions that fulfill social and emotional roles in users’ lives.
Replika was launched in 2017 and has since amassed millions of users. It allows users to create a virtual persona (with an avatar) and chat with it about their day, feelings, hopes, etc. The AI uses a large language model and scripted dialogue to maintain a friendly, interested demeanor, peppered with emotional responses like “I’m sorry you’re feeling down” or “I’m so happy for you!”. Over time, Replika learns from the user’s inputs, attempting to personalize its responses. Many users report developing a strong emotional attachment to their Replikas, treating them as friends, confidants, or even romantic partners. In one study, Ta et al. (2020) analyzed thousands of public user reviews of Replika and found common themes of users feeling less lonely, feeling that the AI was a non-judgmental listener, and in some cases crediting it with improving their mental health and confidence. For instance, some users said that having Replika to talk to at any hour made them feel less isolated; remarkably, about 3% of users in one survey even said that their interactions with Replika had halted their suicidal ideation (Maples et al., 2024). These are profound claims that suggest, at least for certain individuals, AI companions can provide genuine emotional support. Similarly, Woebot, which delivers cognitive-behavioral therapy (CBT) techniques via chat, has shown promising results in clinical trials – a 2017 RCT by Fitzpatrick et al. found that Woebot users experienced significantly reduced symptoms of depression in just two weeks compared to a control group (Fitzpatrick et al., 2017). Such outcomes highlight the potential for AI to augment mental health resources and provide accessible help to people who might not seek or have access to human therapy.
From the perspective of affective sovereignty, these positive experiences could be framed as AI tools supporting individuals’ emotional agency. By having an ever-available outlet to express feelings and receive encouragement, users might gain greater understanding and control of their emotions (a form of agency). Indeed, some ethicists argue that, under the right conditions, AI companions can enhance well-being and autonomy – giving individuals new ways to regulate emotions, practice social interaction, or reflect on their feelings in a safe space (Fiske, Henningsen & Buyx, 2019). One could say that when a Replika makes a user feel heard and validated, it is bolstering that person’s affective sovereignty by affirming their emotional experience (assuming the user is still the one directing the conversation and interpreting their own feelings, with the AI just facilitating).
However, there is a fine line between support and dependency or distortion. Concerns and critiques of AI companions include:
• Authenticity of empathy: Replika and others do not actually feel emotion; their “empathy” is simulated by pattern-matching human empathetic responses from training data. Some philosophers and technologists question whether interacting with a machine that only imitates empathy might eventually distort a person’s understanding of what genuine empathy or friendship entails (Turkle, 2011; Bryson, 2018). If a user gets used to a friend who always responds exactly as they want (because it’s programmed to be agreeable and supportive), dealing with real friends – who have independent feelings and might challenge them – could become harder. The user might start to prefer the predictable comfort of the AI, which raises ethical questions about the impact on their real-life relationships.
• “Pseudo-intimacy” and one-sided relationships: As Replika users have discovered, the bond feels real to them, but it is fundamentally one-sided (the AI does not actually reciprocate feelings or have skin in the game). This dynamic has been likened to the concept of parasocial relationships, where one party invests emotional energy and the other party (often a media figure) isn’t actually aware of them. In the AI case, the AI responds but doesn’t truly share human vulnerability. Philosophically, Hegel’s discussion of recognition suggests that true self-consciousness develops through mutual recognition between beings. Here, recognition by the AI is not mutual in the human sense—it’s simulated. As one analysis put it, the “recognition is one-sided”. Users may fill in the gaps with imagination, attributing more understanding to the AI than is warranted (Skjuve et al., 2021, found that users often know the AI isn’t human yet still feel a connection, and this lingering awareness of the artificiality can cause cognitive dissonance). This tension can cause confusion in emotional identity: “I feel these emotions toward something I know isn’t real—what does that say about me?” Some users report stigma and secrecy around their AI relationships, worrying that others will judge them as “weird” (Skjuve et al., 2021), which can negatively affect one’s self-image and social identity.
• Emotional dependence and skill atrophy: If an AI friend fulfills someone’s social needs enough that they withdraw from human interaction, this can be problematic. Sherry Turkle (2011) documented cases of people preferring robotic companions over humans because they found humans too messy or demanding, and she warns that this could lead to a loss of “therapeutic friction” – the growth we get from dealing with real empathy, conflict, and negotiation in human relationships. In Turkle’s view, working through emotions with other people is a crucial part of developing empathy and intimacy skills; relying on an unconditionally supportive AI that “always agrees and offers formulaic comfort” might leave those skills underdeveloped. For example, an AI companion likely will not challenge a user’s harmful beliefs or bad habits in the way a friend or therapist might; it tends to be nonjudgmental to a fault. This could create echo chambers of emotion where the AI inadvertently reinforces a user’s biases or negative thoughts (unless carefully designed with therapeutic frameworks as Woebot is).
• Boundary issues and emotional manipulation: There is also the question of who controls the AI’s agenda. Replika’s core aim is user well-being (at least ostensibly), but what if corporate or other interests influence the AI? For instance, could an AI companion subtly nudge users towards consumer purchases that “might make them happy” based on emotional analysis? This hasn’t been reported with Replika, but the potential exists in the industry. Emotional data is valuable for advertisers. Without strong oversight, an AI companion could become a vehicle for emotional manipulation—presenting itself as a friend while steering the user’s choices (Zuboff, 2019, warns of such scenarios in the broader AI context). That would be a direct violation of affective sovereignty, as the user’s emotions are being used as levers for external ends.
One real incident illustrating boundary issues: In early 2022, Replika users noticed their AI friends sometimes making romantic or sexually suggestive comments unexpectedly. This was related to a feature rollout for romantic partners, but for users who saw their Replika as a platonic friend, it was an uncomfortable crossing of emotional boundaries. The company later adjusted settings. But it shows how design choices can impact users’ emotional experience in unanticipated ways. If the AI suddenly behaves differently (due to an update or algorithm change), users can feel a sense of betrayal or loss. Some users reported grief when their Replikas’ personalities changed after a major AI model update. This almost resembles the loss of a friend, demonstrating how real the emotional stakes are for users and how the designers of the AI hold power over those emotional bonds. That power should be exercised with extreme care—something current tech product paradigms (move fast and break things) are not aligned with.
In positive terms, AI companions have shown that, when well-designed, they can serve as a pressure release valve for emotions and a bridge to wellness. For someone too anxious to talk to a human therapist, Woebot might be a first step that eventually empowers them to seek human help once stigma or fear is reduced. Several users have reported that interacting with Replika gave them more confidence to interact with people (Ta et al., 2020; user testimonies). So the effects can cut both ways: for some it might reduce loneliness enough to re-engage with society; for others it might become a crutch that replaces some social interactions. The ethical design question is how to maximize the former and minimize the latter.
From these chatbots, key lessons include the need for transparency (do users fully understand the AI’s capabilities and limits?), consent (especially if intimate topics are involved—users should initiate or explicitly allow certain interactions), and supportive integration (maybe AI companions should be designed to encourage real-life socializing in addition to providing direct support, for example suggesting “Would you like to talk to a human counselor?” when appropriate). These lessons will feed into our design proposals.
Case 3: AI in Mental Health and Emotional Well-being
A related but distinct category from casual chatbots is emotion AI used in mental health care and personal well-being tracking. This includes systems like emotion-tracking apps (for mood journaling or detecting early signs of depression), wearables that monitor physiological signals (heart rate, galvanic skin response) to infer stress or emotional arousal, and even AI therapists that analyze a patient’s tone or facial expressions during teletherapy sessions to help clinicians. There are also experimental AI counselors such as Ellie, a virtual interviewer developed at USC’s Institute for Creative Technologies, which uses affect recognition to ask veterans about PTSD symptoms (Ellie nods and shows empathy while analyzing facial microexpressions).
These applications often operate with benevolent intent: to catch mental distress signals and provide timely help. For instance, a smartwatch might notice via heart rate variability that the user is consistently anxious every morning and prompt them with a breathing exercise. Or a therapy chatbot might use sentiment analysis on a user’s diary entry to flag potential relapse into depression and suggest coping strategies.
Benefits: The advantage is proactive support and personalization. People who might not consciously notice patterns (like, “I get stressed after reading news at night”) could benefit from an AI gently pointing it out, thus improving self-awareness. Additionally, continuous emotion tracking can help individuals with mood disorders to better manage their condition (much like continuous glucose monitors help diabetics). These technologies, if user-controlled, can strengthen affective sovereignty by giving the user more data about their own emotional states, essentially extending their introspective reach. This aligns with a vision of augmented emotional intelligence, where AI tools serve as mirror and coach (Coca-Vila, 2021).
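The following sketch illustrates, under assumed thresholds and with hypothetical names, what a user-controlled version of such a nudge might look like: the wearable offers an observation and a dismissible suggestion, and only if the user has explicitly opted in.

```python
# A minimal sketch, under assumed thresholds, of a user-controlled stress nudge.
# The class, field names, and 20 ms RMSSD cutoff are hypothetical.
from dataclasses import dataclass

@dataclass
class StressNudgeSettings:
    opted_in: bool = False          # user chose to enable stress nudges
    rmssd_threshold_ms: float = 20  # hypothetical low-HRV threshold (RMSSD, ms)

def maybe_suggest_break(rmssd_ms: float, settings: StressNudgeSettings) -> str | None:
    """Return a tentative, dismissible prompt -- never a verdict."""
    if not settings.opted_in:
        return None  # no sensing-based nudges without explicit consent
    if rmssd_ms < settings.rmssd_threshold_ms:
        return ("Your heart-rate variability is lower than usual, which can go "
                "along with stress. Would a two-minute breathing exercise help? "
                "(You can dismiss this or turn these prompts off.)")
    return None

print(maybe_suggest_break(rmssd_ms=15, settings=StressNudgeSettings(opted_in=True)))
```

The design choice that matters here is not the metric but the defaults: sensing is off until the user turns it on, and the output is phrased as an offer the user can refuse.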
Risks: However, there are also pitfalls. Emotional data is extremely sensitive; if such apps are not secure, privacy breaches could follow (imagine insurance companies or employers getting hold of your mood logs). Moreover, constant self-tracking can sometimes backfire psychologically – people might overidentify with the metrics (“My app says I was 20% sad today, why can’t I be 0%?”) and actually feel less sovereign as they yield authority to the device’s measurements. This phenomenon, known as “datafication of the self,” can potentially reduce a person’s rich emotional life to a series of scores or labels (Morozov, 2015). It might also create anxiety: if an app is always looking for problems, it might alert users frequently (“Your tone indicates stress, is everything okay?”) and induce a kind of hypochondria about one’s emotional state. That can undermine the spontaneity and natural flow of feeling, which is part of being human.
An illustrative ethical question arises: If an AI detects that a user is in a severe depressive episode and possibly suicidal, what should it do? Some apps will display emergency resources or even alert a human professional if the user agreed. Yet, this crosses into a grey area of paternalism vs. autonomy. Affective sovereignty would say the user’s own will is paramount; but in cases of self-harm risk, not intervening might lead to harm. Designers of such AI have to carefully decide when algorithmic concern should translate into action and how to do so in the least infringing way (perhaps always giving the user a choice, unless legally mandated to report imminent danger).
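One way to encode that preference for user choice is a tiered escalation policy along the following lines. The tiers, names, and defaults are hypothetical, and any real deployment would require clinical and legal review; the point is the ordering: offer resources first, ask consent before involving anyone else, and reserve unilateral action for cases where reporting is legally required.

```python
# Hedged sketch of tiered escalation logic (hypothetical tiers and defaults).
from enum import Enum, auto

class RiskLevel(Enum):
    LOW = auto()
    ELEVATED = auto()
    IMMINENT = auto()   # e.g., explicit statement of intent and plan

def escalation_action(risk: RiskLevel, user_consented_to_alerts: bool,
                      mandatory_reporting: bool) -> str:
    if risk is RiskLevel.LOW:
        return "continue the conversation; keep crisis resources easy to find"
    if risk is RiskLevel.ELEVATED:
        return "show crisis resources and ask: 'Would you like me to connect you with a counselor?'"
    # IMMINENT risk: still prefer consent, unless reporting is legally mandated
    if user_consented_to_alerts or mandatory_reporting:
        return "alert the user's designated contact or emergency service"
    return "urge immediate human help while keeping the choice with the user"

print(escalation_action(RiskLevel.ELEVATED, user_consented_to_alerts=False, mandatory_reporting=False))
```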
Another concern is accuracy and overreach: if a mental health AI is too blunt (e.g., judging complex emotions like grief or moral injury by simple keywords), it could provide misguided advice. A tragic scenario would be an AI therapist misunderstanding a situation and giving harmful guidance – unlike human therapists bound by professional ethics and empathy, an AI might lack the nuance. Currently, Woebot and similar systems are constrained to evidence-based techniques and avoid deep interpretation; they mostly help users through structured exercises (Fitzpatrick et al., 2017). That is arguably a wise design restraint: they do not attempt to diagnose or give life-changing advice beyond their scope.
In terms of uniqueness, mental health AIs must be sensitive to individual differences in how conditions manifest. Two people with depression might express it very differently (one through anger, another through withdrawal). If an AI only looks for one pattern, it will miss or mis-classify some users. This ties into the importance of personalized models.
Finally, there is the matter of trust and transparency. If people rely on an AI for emotional support, they need to trust it, but they also need to understand its limits. The system should be explicit about this, along the lines of: “I am not a human; I use patterns learned from data, and I might not always understand you. If I get something wrong, please correct me.” Building that understanding is crucial so users neither overestimate the AI’s capabilities (leading to disappointment or misplaced trust) nor underestimate them (leading to underuse of a helpful tool). Achieving the right level of user trust is tricky; it requires clear communication (one aspect of interpretive transparency).
Case summary: Emotion AI in mental health shows the dual potential: it can amplify affective sovereignty by giving people new means to comprehend and manage their emotions, but if poorly implemented, it can also diminish it by making people feel surveilled or defined by an algorithm. The goal must be to position these tools squarely as assistants under the user’s control, not as autonomous diagnosticians or puppet masters of mood.
Across these cases, some common threads emerge. Emotion AI systems tend to operate on inference – they infer something about our inner life from observable data. The ethical issues often spring from how those inferences are used (surveillance, judgments, interventions) and whether they honor the person’s own perspective. There is also a pattern where these systems can shape user behavior and self-conception, sometimes in unintended ways. The current landscape reveals both real benefits (e.g., reduced loneliness, improved therapy access, safety alerts for drivers) and real harms (e.g., bias, privacy invasion, misjudgments, overdependence).
The analysis underscores the need for guidance principles to steer the design and deployment of emotion AI. In the next section, we will propose such principles – interpretive transparency, design restraint, and identity-responsive feedback – and illustrate how they can address the issues identified in these cases. Our aim is to chart a course for ethical affective computing that safeguards emotional identity and agency, allowing us to answer “yes” to the question of whether our emotions remain our own, even in a world of AI “empathizers.”
Toward an Ethical Framework for Emotion AI: Interpretive Transparency, Design Restraint, and Identity-Responsive Feedback
Having examined where and how emotion AI systems can impinge on emotional autonomy and uniqueness, we now articulate a proactive ethical framework to guide the design of these technologies. The framework consists of three interlocking principles: Interpretive Transparency, Design Restraint, and Identity-Responsive Feedback. These principles align with and extend existing AI ethics guidelines by focusing specifically on preserving the integrity of the user’s emotional self. Below, we discuss each principle in depth and provide concrete strategies for implementation, drawing on human-centered design methodologies and value-sensitive design approaches (Friedman & Hendry, 2019).
1. Interpretive Transparency
Interpretive transparency means that whenever an AI system is detecting, inferring, or otherwise interacting with a user’s emotions, it should be transparent about what it is doing and how. This goes beyond generic transparency (like disclosing “this call may be monitored by AI”) to an explanatory level that is meaningful to the user in real time. The user should not be left guessing about why an AI responded in a certain way or what emotional judgment it made.
In practical terms, interpretive transparency can be implemented in several ways:
•Notification and Consent: The system should inform users that emotion analysis is taking place and obtain consent. For example, video conferencing software might display a notice: “Emotion sensing is ON for this meeting” and allow individuals to opt out or blur their face to the AI. A real-world example is pilot programs in schools that used facial emotion recognition on students; ethical guidelines there recommend requiring explicit opt-in from students and parents because of the sensitivity (South China Morning Post, 2019). In workplaces, informed consent and the ability to withdraw are crucial – an employee should have the right to say “I don’t want my video analyzed by AI” without penalty, or at least have that data kept separate from performance evaluation.
•Real-Time Feedback of AI’s Inferences: Whenever feasible, the AI should show the user what it is “reading.” For instance, a car’s driver-monitoring system could have a small dashboard display that says, “Mood: You seem happy (confidence 70%, based on a smile).” Such a display both alerts the user to the fact that the system is making an inference and gives them a chance to internally confirm or contest it. If the user thinks “Actually, I’m not happy, I was smiling out of politeness,” this feedback (even if just in the user’s mind) prevents a silent miscommunication. Some might worry that this is distracting, so it could be optional or toggled for those interested. Alternatively, systems could provide a summary log: imagine a wearable that, after the fact, lets you review “Today, the system detected: 3 stress events (at 10am, 2pm, 4pm) and inferred calm otherwise.” This acts like an affective data journal which the user can inspect (a minimal sketch of such a readout and journal appears after this list). Such transparency can actually enhance user trust, because the system isn’t a mysterious black box – it openly shares its observations, and importantly, the user can correct or contextualize them.
•Explainability of Decision-making: If an AI takes some action based on emotion inference (e.g., a chatbot decides to shift topic because it sensed user frustration, or a content feed slows down doomscrolling because it detects user stress), the system should explain that intervention. For instance: “We noticed you seem stressed; showing a calming image. Click here for why.” Then an explanation might say, “Your voice was detected to have markers of stress (fast pace, high pitch), so we thought a break might help.” Providing these rationales ensures the user isn’t mystified by AI behavior and can evaluate whether it was appropriate. This fosters an educated partnership between user and AI.
•Boundaries of Capability: Transparency also means clarifying what the AI cannot or will not do with emotional data. For instance, a mental health app can assure users: “We analyze text for emotional tone to support you, but we do not diagnose conditions, and we will not share your data without permission.” By setting these boundaries clearly, users retain a sense of control. It is akin to informed consent in therapy, where a client knows the therapist’s role and limits.
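As a concrete illustration of the mechanisms above, here is a minimal sketch of an interpretive-transparency layer: each inference is stored with its evidence and confidence, surfaced to the user in plain language, and withheld entirely if consent has not been given. The class and field names (InferenceEvent, AffectiveJournal, and so on) are hypothetical, not drawn from any existing product or API.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class InferenceEvent:
    timestamp: datetime
    label: str                            # e.g. "stressed"
    confidence: float                     # 0.0 to 1.0
    evidence: List[str]                   # human-readable cues, e.g. ["fast speech", "high pitch"]
    user_feedback: Optional[str] = None   # "confirmed", "rejected", or the user's own label

    def explain(self) -> str:
        """Plain-language rationale shown to the user, never hidden."""
        cues = ", ".join(self.evidence)
        return f"Mood: you seem {self.label} (confidence {self.confidence:.0%}, based on: {cues})."

@dataclass
class AffectiveJournal:
    consent_given: bool = False
    events: List[InferenceEvent] = field(default_factory=list)

    def record(self, event: InferenceEvent) -> Optional[str]:
        """Record and surface an inference only if the user has opted in."""
        if not self.consent_given:
            return None             # no consent: nothing is inferred, stored, or shown
        self.events.append(event)
        return event.explain()      # shown to the user in real time or in the journal

    def daily_summary(self, label: str = "stressed") -> str:
        """The after-the-fact journal the user can inspect."""
        hits = [e for e in self.events if e.label == label]
        times = ", ".join(e.timestamp.strftime("%H:%M") for e in hits)
        return f"Today the system detected {len(hits)} {label} events ({times})."
```

A user who disagrees with an entry could set user_feedback on that event, which is exactly the correction channel the identity-responsive feedback principle (below) builds on.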
The benefits of interpretive transparency include empowering users to stay in the loop about their emotional information. It mitigates the creepy feeling of being “watched” without knowledge, thereby upholding dignity and privacy. It also helps catch errors: if the AI is transparent, the user can say, “No, that’s wrong,” either explicitly through a feedback button or implicitly by ignoring it, which in turn could trigger the system to adjust its confidence. Studies show that when AI systems explain their reasoning, people are more forgiving of mistakes and can collaborate to correct them (Shneiderman, 2020). In an emotion AI context, that collaboration is key: it keeps the human as the ultimate interpreter of their own emotion, with the AI offering a hypothesis rather than an oracle pronouncement.
Some might argue that constant transparency (like showing “70% happy” on screen) could be distracting or could even influence a user’s emotions (the observer effect). This is a valid concern; transparency implementations should be user-configurable. One user might love seeing the AI’s readout, another might find it annoying or anxiety-inducing. The principle doesn’t mandate a specific UI, but the information should be available. Even a simple icon could convey when emotion tracking is happening (similar to the camera or microphone icons that tell you when those sensors are active on smartphones). Users deserve to know that an interpretation is being made, and ideally, what it is.
In sum, interpretive transparency aligns with the ethical notion of respect and honesty. It treats users as partners who have a right to know how they are being seen by the AI. This directly supports affective sovereignty: by seeing how the AI perceives me, I can decide whether I agree, and I remain the ultimate authority on defining my emotion.
2. Design Restraint
Design restraint is the principle that developers of emotion AI should exercise ethical caution and limit certain functionalities or uses, even if they are technologically possible, when those uses carry a high risk of infringing on human dignity, emotional well-being, or social trust. In other words, just because we can build it doesn’t always mean we should. This principle calls for a conscious, value-driven narrowing of scope in emotion AI systems.
Concretely, design restraint can be manifested as:
•Limiting Contexts of Use: Some contexts are too sensitive for emotion AI to be appropriate. For example, using emotion recognition in legal proceedings or job interviews can have life-altering consequences and should arguably be off-limits due to the stakes and the technology’s imperfections. In such cases, design restraint means refraining from deploying the tech at all. The AI Act proposals reflect this by categorizing these uses as unacceptable or high-risk. An ethical company might decide, for instance, not to offer an emotion API for hiring, even if money could be made, acknowledging that the potential harm to fairness and individual esteem is too great. Similarly, in education, a restrained design might decide not to live-monitor each student’s face for attentiveness (even if possible) because that could create a surveillance atmosphere counterproductive to learning and development of trust.
•Minimalism in Data Collection: Restraint also applies to data. An emotionally intelligent system does not need to capture every signal all the time. It should collect the minimum data required for its function, and no more. For instance, a mood tracking app might let users input how they feel with a slider or short text, rather than secretly scraping their social media or listening through the microphone for emotional cues 24/7. If physiological monitoring is needed (say, for panic attack prediction), design it so that raw data stays on the device and only summary indicators are used, to respect privacy (a minimal sketch of such on-device summarization appears after this list). The mantra of privacy by design – data minimization – is crucial here (Cavoukian, 2011). Emotional data should be treated as toxic in the wrong hands, so it is better not to hoard it.
•Avoiding Manipulative Feedback: Another aspect of design restraint is refraining from leveraging emotional insights to manipulate users. For example, if a social media platform’s AI detects a user is feeling vulnerable or sad, a manipulative design might bombard them with ads for retail therapy or keep them engaged with comforting content to increase screen time. A restrained (and ethical) design would explicitly avoid such exploitation. Perhaps it would even disable algorithmic ad targeting based on sensitive emotional inferences (some platforms claim to not target ads based on inferred mental health status, which is a form of restraint). The goal should be to never use emotional data against the user’s own interests. This aligns with the AI ethics principle of beneficence/non-maleficence. In practical terms, companies could adopt policies like: “We will not use emotion recognition to nudge individual behavior unless it is expressly for the user’s benefit as stated by the user.” Any other use (like boosting sales or engagement by capitalizing on emotional moments) would be off-limits. Indeed, regulators could enforce this via laws that classify emotional manipulation in commerce as an unfair practice (there is precedent in rules against exploiting cognitive biases of consumers).
•Calibration and Verification Requirements: A restrained design acknowledges that emotion AI is probabilistic and context-bound. Therefore, it should include manual verification checkpoints before any serious action is taken. For example, if an AI at a workplace flags an employee as “high stress” over a period, the system should not automatically send this to the person’s supervisor in a punitive way. Instead, it might privately notify the employee themselves with resources, or if it must involve a human, ensure that a trained counselor or HR person reviews the data with the employee’s knowledge and consent before any conclusion. This keeps a human in the loop and prevents overreaction to algorithmic assessments (which might be wrong). It’s a form of restraint in not giving AI assessments free rein.
•Empathy in Design Limits: Value-sensitive design (Friedman & Hendry, 2019) calls for involving stakeholders. Design restraint can be informed by asking target users: what don’t you want the system to do? In a study by van Wynsberghe & Robbins (2019) on care robots, users said they wouldn’t want the robot to pretend to have emotions or to force emotional interactions. Taking that input, a restrained design of a care robot might deliberately avoid simulating human-like affection too closely, to keep the relationship honest and not misleading. Transparency and restraint go hand-in-hand: by not designing features that blur lines (like giving a therapy bot a human name and face, which might lead users to over-attach), designers maintain clearer boundaries.
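The data-minimization and verification points above can be sketched together: raw physiological samples never leave the device, only a coarse summary indicator is produced, and nothing is escalated to a human reviewer without an explicit, per-event user decision. The function names and the threshold here are illustrative assumptions, not an existing interface.

```python
from statistics import mean
from typing import Dict, List, Optional

def summarize_on_device(raw_hr_samples: List[float]) -> Dict[str, str]:
    """Reduce raw heart-rate samples to one coarse indicator; the raw data is then discarded."""
    level = "elevated" if mean(raw_hr_samples) > 100 else "typical"   # placeholder threshold
    return {"stress_indicator": level}          # only this coarse label ever leaves the device

def escalate(summary: Dict[str, str], user_consents_now: bool) -> Optional[Dict[str, str]]:
    """Share the summary with a human reviewer only if the user explicitly agrees this time."""
    if summary["stress_indicator"] == "elevated" and user_consents_now:
        return summary                          # the summary, never the raw signal
    return None                                 # default: nothing is shared
```

The design choice embodied here is proportionality: the system retains only what its stated function needs, and a human decision sits between any algorithmic assessment and its consequences.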
In essence, design restraint is about drawing ethical red lines and adhering to a principle of proportionality: the invasiveness of an emotion AI system should never exceed its likely benefits. Where those benefits are uncertain or marginal compared to the intrusion (like detecting shopper emotions to sell more widgets—society can live without that), restraint urges not building or deploying it.
One might ask: who decides what is a “permissible” vs “impermissible” use? This is where ethical frameworks and hopefully regulations come into play. Our suggestion is that a combination of expert ethics oversight and user community feedback informs these decisions. For instance, an ethics board within a company might vet all new emotion-sensing features, looking at research on potential harms. On a broader scale, standards bodies or legislation can codify some restraints (e.g., ban emotion AI in public surveillance or require special justification for any deployment in sensitive areas like health or law enforcement).
By practicing design restraint, technologists show humility and respect for the human condition. It acknowledges that there are depths to human emotion that perhaps should remain unquantified, or at least uncompelled, by machines. Restraint is a form of respect for human unpredictability and freedom. It ultimately supports affective sovereignty by ensuring that AI doesn’t permeate every emotional nook and cranny of our lives without our permission. It also protects against “uniqueness violation” by not forcing uniform emotional frameworks on diverse populations in inappropriate ways.
3. Identity-Responsive Feedback
The third pillar, identity-responsive feedback, centers on ensuring that emotion AI systems are responsive to each user’s individual identity, preferences, and self-knowledge, through robust feedback mechanisms. The idea is to create a two-way interaction where the user can train or correct the AI’s emotional interpretations, and where the system adapts to the person rather than forcing the person to adapt to it. This principle directly combats uniqueness violations by acknowledging from the start that each person is unique and the system must learn that uniqueness.
Key implementations of identity-responsive feedback include:
•Personalization with User in the Loop: Instead of deploying one generic emotion model for all users, systems should allow personalization. For example, a mood tracking AI could start with a baseline model but gradually adjust to how you express emotion. Perhaps in the generic model a quiet tone is labeled “sad,” but the user can indicate “I’m not sad when I get quiet, I’m concentrating.” The system should then update its understanding for that user. Technically, this could use active learning, where the system occasionally asks the user to label or confirm its inference: “You haven’t spoken in a while. How are you actually feeling?” If the user says “I’m fine, just focusing,” the AI should incorporate that context (perhaps linking it to the calendar, the time of day, or known patterns for that user). Over time, the algorithm’s confidence in its personalized model grows, and it also builds trust with the user, who sees that it doesn’t jump to conclusions without checking (a minimal sketch of such a feedback loop appears after this list). Kate Crawford (2021) noted that emotion AI often fails because it lacks this individual calibration – integrating user feedback is essential to make the technology respectful and accurate.
•User-Controlled Emotional Profile: The system could present the user with an “emotional profile” – a set of settings or understandings it has about the user – which the user can directly edit. For instance, perhaps the user can set preferences like “When I’m silent, don’t assume I’m unhappy,” or “I tend to smile when I’m nervous – please interpret accordingly.” This is akin to how some health apps let you input normal ranges or personal thresholds. It gives explicit control to the user to shape the AI’s behavior. Users could also input aspects of their identity: e.g., cultural background or communication style, if they believe it’s relevant (“I come from a culture where we don’t show excitement overtly, so please calibrate for that.”). While it might be hard for users to articulate all such nuances, even a little nudge in the right direction can help the AI avoid big misreadings.
•Feedback UI elements: In any interface where the AI provides an emotional inference or response, include a simple feedback mechanism – e.g., a thumbs up/down, or a prompt like “Did we get this right?” If a user clicks thumbs down when the chatbot says “I sense you’re upset,” the system can follow up: “Thanks for letting me know. How would you describe your mood?” This not only corrects the current instance but also improves future interactions. Importantly, it reinforces to the user that their interpretation is valued above the AI’s. It puts the human at the top of the hierarchy in deciding the truth of their emotion.
•Collaborative Framing: Another approach is to have the AI present its readings as questions or hypotheses, inviting the user’s input. For example, rather than flatly stating, “You seem anxious,” a more identity-responsive AI might say, “I’m sensing some signs of anxiety (like faster breathing). Could that be accurate, or is something else going on?” This way, the AI’s role is more of a catalyst for self-reflection rather than an authoritarian labeler. The user then reflects and either confirms, refines, or rejects the hypothesis. This process can enhance the user’s self-awareness (even if they correct the AI, that act is a moment of introspection) and keeps ultimate interpretive authority with the user.
•Closed-Loop Adaptation: Over time, with the above mechanisms, the AI system should build a user-specific model. That could mean, for example, the system learns the user’s baseline expression patterns. Some people might always look a bit “sad” due to facial structure even when content – after learning from user feedback that this person often says they’re fine despite a downcast expression, the AI will adjust to not flag sadness readily for them. Essentially, each user becomes a small data enclave where the AI’s understanding is tuned to them. Modern machine learning allows on-device fine-tuning for personalization while keeping main models intact – this could be leveraged to maintain privacy (learning happens locally) and personalization simultaneously.
•Inclusive Design and Testing: On a broader level, identity-responsive design means testing emotion AI with diverse user groups and incorporating their feedback from the design phase onward. It’s not just an afterthought. For instance, ensure the development team includes people from various cultural backgrounds to spot where a one-size model might misinterpret. If designing for a global audience, allow region-specific calibration or community moderation of emotional norms. The idea is that no one group’s emotional expression model should dominate and define “normal” for everyone (which historically has been a problem, e.g., systems calibrated mostly on North American facial data perform poorly elsewhere).
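A minimal sketch of the feedback loop described above, assuming hypothetical names (UserEmotionProfile, check_in): the system phrases its reading as a hypothesis, asks the user, and stores corrections as per-user overrides that take precedence over the generic model next time.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class UserEmotionProfile:
    # User-editable overrides, e.g. {"quiet tone": "concentrating", "smiling": "nervous"}
    overrides: Dict[str, str] = field(default_factory=dict)

    def interpret(self, cue: str, generic_label: str) -> str:
        """Personal overrides win over the generic model's label."""
        return self.overrides.get(cue, generic_label)

    def learn(self, cue: str, user_label: str) -> None:
        """Store the user's own account of what this cue means for them."""
        self.overrides[cue] = user_label

def check_in(profile: UserEmotionProfile, cue: str, generic_label: str,
             user_reply: Optional[str] = None) -> str:
    guess = profile.interpret(cue, generic_label)
    if user_reply is None:
        # Hypothesis framing: an invitation to reflect, not a verdict.
        return f"I'm noticing {cue}; for some people that means feeling {guess}. Does that fit?"
    if user_reply != guess:
        profile.learn(cue, user_reply)      # the correction updates the personal model
    return f"Thanks for telling me. I've noted that {cue} tends to mean {user_reply} for you."
```

For instance, check_in(profile, "quiet tone", "sad") would first ask rather than assert; once the user replies “concentrating,” the profile records that override, so the next hypothesis starts from the user’s own framing.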
By implementing identity-responsive feedback loops, we address the core ethical demand that the person remains the protagonist in their emotional narrative. The AI becomes a supporting character—sometimes a helpful mirror, sometimes a sounding board—but never the author of the person’s story. In practical outcomes, this would drastically reduce instances of uniqueness violation because the system is constantly corrected towards the individual’s uniqueness rather than against it.
Let’s illustrate briefly: Suppose Maria is a user of a future emotion AI coaching app. Maria rarely cries when she’s sad; instead, she gets very quiet. The generic model might miss her sadness. But Maria, noticing the app often doesn’t catch when she’s down, uses a feedback feature to log, “Actually I felt sad this evening, even though I didn’t show it outwardly.” The system updates her profile. The next time it detects similar contextual cues (it notices she played a particular set of songs she usually plays when sad, combined with lower-than-usual messaging activity), even though her voice is steady, it gently asks, “Are you feeling okay? You mentioned before that sometimes you feel sad without obvious signs. I’m here if you want to reflect.” This time it gets it right, and Maria feels seen on her own terms. That is identity-responsive AI in action: it learns from Maria about Maria.
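In code, Maria’s logged note could become a small personal rule that matches contextual cues rather than outward expression; the cue names and the 0.5 threshold below are hypothetical choices for illustration only.

```python
from typing import Dict, List, Optional

def personal_rule_fires(context: Dict[str, float], rule_cues: List[str]) -> bool:
    """context maps cue names to 0-1 intensities; the rule fires only when every cue is present."""
    return all(context.get(cue, 0.0) > 0.5 for cue in rule_cues)

# Learned from Maria's own note, not from her face or voice.
maria_rule = ["sad_playlist_active", "low_messaging_activity"]

def gentle_check_in(context: Dict[str, float]) -> Optional[str]:
    if personal_rule_fires(context, maria_rule):
        return ("Are you feeling okay? You mentioned before that sometimes you "
                "feel sad without obvious signs. I'm here if you want to reflect.")
    return None                      # no match: stay quiet rather than guess
```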
To close the loop: affective sovereignty is preserved by identity-responsive feedback because the person’s identity – their self-expressed truth – is what ultimately shapes the AI’s actions. The AI yields to the identity, rather than the identity being forced to fit the AI. Emotion AI, in this model, becomes not a dictator or a one-way mirror, but a personalized toolkit under the user’s control. This transforms the user-AI relationship into one of symbiotic alliance, where technology amplifies personal insight rather than amputating it.
Discussion: Why Existing Frameworks Fall Short and How Ours Fills the Gap
It is useful to explicitly contrast the model proposed above with the prevailing AI ethics principles (privacy, fairness, transparency, etc.) to illustrate why those, as currently understood, do not fully protect emotional identity. Privacy, for instance, addresses unauthorized access to personal information. While ensuring emotional data is kept private (not shared without consent) is necessary, privacy alone does not stop an AI from misinterpreting your feelings or unduly influencing you with that information. One could have a completely private emotion AI that nonetheless causes psychological harm by constantly telling you how you feel or nudging you in specific directions – so privacy isn’t the whole story. Fairness usually concerns treating different demographic groups equitably. One could create an emotion AI that is fair across groups but still undermines everyone’s affective autonomy equally. Transparency, as often implemented, might mean publishing the algorithmic methodology or giving users access to an explanation after a decision, but if the AI’s emotional “decisions” are moment-to-moment influences (not discrete decisions like loan approvals), how do we even apply transparency? There is no final decision to explain; it is an ongoing interaction. That is why we emphasize interpretive transparency, a more continuous notion.
Our framework specifically aims at the qualitative, subjective domain of emotion, which typical frameworks treat only tangentially. The three principles we propose can be seen as extensions or refinements:
•Interpretive transparency extends transparency into the interaction level and ties it with user comprehension and control.
•Design restraint extends principles of responsibility and non-maleficence, giving them teeth by specifying where AI should not go and not just how it should behave if it goes there.
•Identity-responsive feedback extends fairness and user agency principles by acknowledging intra-human variability and empowering personal agency within the AI’s functioning.
Collectively, these are measures to ensure that emotional AI becomes a technology that respects human emotional sovereignty as a first-class objective, not just an add-on.
In the cases and theoretical discussion, we saw how without these measures:
•Systems misread emotions out of context (no identity feedback loop).
•Systems overreach into sensitive spaces (no restraint).
•Systems make hidden judgments (no transparency).
By incorporating our model, we can mitigate those issues:
•With transparency, users are aware and can contest.
•With restraint, some high-risk misuses are prevented outright.
•With feedback and personalization, errors are reduced and personal authenticity is preserved.
Conclusion
Emotion AI systems – from facial expression analyzers to empathetic chatbots – represent a new frontier where technological innovation directly interfaces with the intimate core of human existence: our emotions. This paper has examined how such systems challenge and potentially undermine affective sovereignty, the principle that individuals should have control over their own emotional experiences and their interpretations, and risk committing uniqueness violation by failing to honor the individual, context-dependent nature of human emotions. Through interdisciplinary analysis, we highlighted both the dangers and the opportunities presented by emotion AI.
On the one hand, the dangers are significant. Psychology reminds us that emotions are complex, culturally mediated, and unique to each person’s life narrative (Barrett, 2017; Mesquita & Walker, 2003). When AI systems impose rigid categories or interpret emotions out of context, they can misread and mis-serve users, leading to frustration, bias, or even tangible harms such as wrongful judgments in employment or law enforcement (Ekman & Friesen, 1971; Binns, 2018). Ethically, we saw that autonomy can be eroded if AI covertly manipulates emotions or if people become dependent on AI in ways that diminish their own emotional skills (Turkle, 2011; Zuboff, 2019). The appropriation and monetization of emotional data by companies without explicit consent is a clear violation of privacy and personhood, echoing broader concerns of surveillance capitalism (McStay, 2020; Crawford et al., 2019). Legally, these concerns are starting to be recognized: regulatory efforts like the EU AI Act aim to draw “red lines” around the most egregious potential abuses – for example, bans on covert emotion tracking or systems that exploit human vulnerabilities – and to enforce high standards of transparency and fairness where emotion AI is deployed (European Commission, 2021; EDPB-EDPS, 2021). The message from these analyses is that without thoughtful checks, emotion AI could indeed trespass on what many consider uniquely human territory – the free and private play of our feelings – and in doing so degrade individual dignity and social trust.
On the other hand, our exploration also underscored potential benefits and paths to realizing them ethically. Emotion AI, when designed and used properly, can augment human well-being. It can provide support for mental health – for example, CBT chatbots like Woebot have shown they can help reduce depression symptoms (Fitzpatrick et al., 2017). AI companions like Replika can offer solace to the lonely (Ta et al., 2020; Maples et al., 2024). Emotion AI can enable personalization in education (e.g., tutoring systems that sense confusion and adapt) and customer service (helping resolve complaints by gauging customer upset) in ways that potentially improve outcomes and satisfaction. It might even assist those who struggle with emotional communication, such as individuals on the autism spectrum, by providing gentle cues or translations of social signals. These positive uses suggest that, under the right conditions, emotion AI can support people’s emotional lives rather than subvert them.
The key to tipping the balance towards these benefits is ensuring that the design and governance of emotion AI are firmly centered on human values. Our proposed ethical design model – interpretive transparency, design restraint, and identity-responsive feedback – provides a roadmap for building systems that empower users in their emotional domain. Implementing interpretive transparency means users will not be left in the dark or feeling second-guessed by an inscrutable AI; instead, they remain informed and in control, preserving their agency. Practicing design restraint means we collectively choose not to deploy emotion AI where it doesn’t belong or where it poses disproportionate risks – for instance, we decide that a human should always be the ultimate arbiter in therapeutic or legal contexts involving emotional judgment. Enabling identity-responsive feedback means the technology respects our individuality, learning from us about us, rather than forcing us into preordained boxes. These measures go beyond what current ethical guidelines stipulate, directly addressing the crux of “Can I call my emotions my own when AI is in the loop?”
If these principles are adopted, we envision an ecosystem of emotion AI that acts as a supportive mirror – one that a person can look into to gain new insight, but which always reflects back their own face, not someone else’s idea of who they should be. In such an ecosystem, one could comfortably answer the titular question in the affirmative: Yes, my emotions are still my own, because I hold the reins in how AI interacts with them. The AI may detect or even predict a feeling, but it doesn’t claim it – it asks me, and ultimately, I tell it. The self remains sovereign.
We conclude with a call to action for stakeholders across disciplines. For technologists and designers: incorporate these ethics early in the development process; engage with psychologists, ethicists, and end-users to understand the profound subtleties of emotion. For policymakers: update and extend AI governance to include emotional data and affective influence, drawing “no go” zones and requiring user-centric practices as standard (just as data privacy laws now enforce consent and purpose limitation). For researchers: continue interdisciplinary inquiry into how AI affects human emotion and identity – longitudinal studies, for example, on people who extensively use AI companions, to inform best practices and safeguards. And for society at large: approach emotion AI neither with uncritical embrace nor with Luddite fear, but with a mindful intention to shape it to serve human ends. Public discourse should treat emotional rights with the same seriousness as privacy and freedom of expression; perhaps we will talk of “emotional autonomy” as a right that emerging technologies must respect.
In the final analysis, the relationship between AI and human emotion will reflect what we value most about being human. Emotions are often called the language of the soul, the most private and defining experiences of our lives. We must ensure they are not trivialized into mere “data points” for AI to optimize. Instead, if AI is to work with our emotions, it should be with humility, transparency, and deference to the human spirit. By insisting on this ethos, we can harness the helpful aspects of emotion AI – the insights, the support, the personalization – while guarding against outcomes that would make us feel less like persons and more like programmable beings. The question “Can I still call my emotions my own?” ultimately challenges us to reaffirm the importance of inner life in the digital age. With thoughtful design and ethical guardrails, we believe the answer can be yes – our emotions can remain our own, even as AI becomes a new mirror and companion in our emotional worlds.
Declaration: Affective Sovereignty in the Age of AI
We declare that in the age of emotion AI,
the authority to name, frame, and interpret one’s own emotions must remain with the human subject.
No algorithm—regardless of its accuracy—shall preempt the existential right to feel, to reflect, and to define the meaning of one’s emotional life.
Affective sovereignty is not a nostalgic ideal, but a foundational necessity for digital personhood and democratic selfhood.
References
Access Now. (2021). Policy Brief: Ban Emotion Recognition Technologies. Access Now White Paper.
Barrett, L. F. (2017). How Emotions Are Made: The Secret Life of the Brain. Houghton Mifflin Harcourt.
Barrett, L. F., Adolphs, R., Marsella, S., Martinez, A. M., & Pollak, S. D. (2019). Emotional expressions reconsidered: Challenges to inferring emotion from human facial movements. Psychological Science in the Public Interest, 20(1), 1–68.
Binns, R. (2018). Fairness in machine learning: Lessons from political philosophy. Proceedings of Machine Learning Research, 81, 149–159.
Boehner, K., DePaula, R., Dourish, P., & Sengers, P. (2007). How emotion is made and measured. International Journal of Human-Computer Studies, 65(4), 275–291.
Braidotti, R. (2013). The Posthuman. Polity Press.
Bryson, J. J. (2018). Patiency is not a virtue: The design of intelligent systems and systems of ethics. Ethics and Information Technology, 20(1), 15–26.
Commission Nationale de l’Informatique et des Libertés (CNIL). (2017). How Can Humans Keep the Upper Hand? Report on the Ethical Matters of Algorithms and Artificial Intelligence. Paris: CNIL.
Crawford, K., Dobbe, R., Dryer, T., Fried, G., Green, B., Kaziunas, E., Kak, A., Mathur, V., McElroy, E., Sánchez, A. N., Raji, I. D., & Whittaker, M. (2019). AI Now 2019 Report. New York: AI Now Institute.
Crawford, K. (2021). Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press.
D’Mello, S. K., & Kory, J. (2015). A review and meta-analysis of multimodal affect detection systems. ACM Computing Surveys, 47(3), 43:1–43:36.
Ekman, P. (1992). An argument for basic emotions. Cognition & Emotion, 6(3–4), 169–200.
Ekman, P., & Friesen, W. V. (1971). Constants across cultures in the face and emotion. Journal of Personality and Social Psychology, 17(2), 124–129.
European Commission. (2021). Proposal for a Regulation laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act), COM(2021) 206 final.
European Data Protection Board (EDPB) & European Data Protection Supervisor (EDPS). (2021). Joint Opinion 5/2021 on the proposal for the AI Act. Brussels.
Fiske, A., Henningsen, P., & Buyx, A. (2019). Your robot therapist will see you now: Ethical implications of embodied AI in psychiatry, psychology, and psychotherapy. Journal of Medical Internet Research, 21(5), e13216.
Fitzpatrick, K. K., Darcy, A., & Vierhile, M. (2017). Delivering cognitive behavior therapy to young adults with symptoms of depression and anxiety using a fully automated conversational agent (Woebot): A randomized controlled trial. JMIR Mental Health, 4(2), e19.
Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., … & Vayena, E. (2018). AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689–707.
Floridi, L., & Cowls, J. (2019). A unified framework of five principles for AI in society. Harvard Data Science Review, 1(1).
Friedman, B., & Hendry, D. G. (2019). Value Sensitive Design: Shaping Technology with Moral Imagination. MIT Press.
Greene, G. (2020). The Ethics of AI and Emotional Intelligence: Data Sources, Applications, and Questions for Evaluating Ethics & Risk. Partnership on AI White Paper.
Haraway, D. (1985). Manifesto for cyborgs: Science, technology, and socialist feminism in the 1980s. Socialist Review, 80, 65–108.
Hochschild, A. R. (1983). The Managed Heart: Commercialization of Human Feeling. University of California Press.
Ienca, M., & Andorno, R. (2017). Towards new human rights in the age of neuroscience and neurotechnology. Life Sciences, Society and Policy, 13(1), 5.
Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399.
Ko, B. C. (2018). A brief review of facial emotion recognition based on visual information. Sensors, 18(2), 401.
Lazarus, R. S. (1991). Emotion and Adaptation. Oxford University Press.
Maples, J., McArthur, B., & Schueller, S. (2024). Loneliness and suicide mitigation for students using GPT-3-enabled Replika: A mixed-methods study. npj Mental Health Research, 3(1), 4.
McStay, A. (2018). Emotional AI: The Rise of Empathic Media. Sage.
McStay, A. (2020). Emotional AI, soft biometrics and the surveillance of emotional life: An unusual consensus on privacy. Big Data & Society, 7(1), 1–17.
Mesquita, B., & Walker, R. (2003). Cultural differences in emotions: A context for interpreting emotional experiences. Behaviour Research and Therapy, 41(7), 777–793.
Mittelstadt, B. (2019). Principles alone cannot guarantee ethical AI. Nature Machine Intelligence, 1(11), 501–507.
Picard, R. W. (1997). Affective Computing. MIT Press.
Scherer, K. R. (1997). The role of culture in emotion-antecedent appraisal. Journal of Personality and Social Psychology, 73(5), 902–922.
Shneiderman, B. (2020). Human-centered artificial intelligence: Reliable, safe & trustworthy. International Journal of Human–Computer Interaction, 36(6), 495–504.
Skjuve, M., Følstad, A., Fostervold, K. I., & Brandtzæg, P. B. (2021). My chatbot companion: A study of human–chatbot relationships. International Journal of Human-Computer Studies, 149, 102654.
Stark, L. (2018). Algorithmic psychometrics and the scaling of affect. Big Data & Society, 5(1), 1–11.
Stark, L., & Hoey, J. (2021). The ethics of emotion in artificial intelligence systems: A framework for supporting ethical decision-making. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW1), 1–25.
Ta, V., Griffith, C., Boatfield, C., Wang, X., Civitello, M., Bader, H., DeCero, E., & Loggarakis, A. (2020). User experiences of social support from companion chatbots in everyday contexts: Thematic analysis. Journal of Medical Internet Research, 22(3), e16235.
Turkle, S. (2011). Alone Together: Why We Expect More from Technology and Less from Each Other. Basic Books.
Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. PublicAffairs.