ALGORITHMIC SUPEREGO

liviupoenaru

Jan. 11, 2026



THE MENTAL FORECAST

From a psychoanalytic perspective, the superego designates the psychic instance through which social authority is internalized as constraint. It is not reducible to moral conscience; it is the structure that produces guilt, shame, self-surveillance, and the internal demand to conform to an ideal. Historically, the superego emerged from identifiable figures of authority — parents, institutions, religious or cultural law — and operated through relatively stable symbolic frameworks. Its efficacy lay in internalization: once installed, authority no longer needed to punish from outside.


What characterizes the contemporary moment is not the disappearance of the superego, but its mutation under computational conditions. Authority is no longer primarily embodied or symbolically articulated; it is distributed, quantified, and automated. Algorithmic systems continuously evaluate productivity, visibility, responsiveness, affect, and social positioning. These evaluations do not simply inform behavior; they are progressively internalized, forming what can be described as an algorithmic superego: a normative instance that speaks not in the language of law and prohibition, but in that of metrics, rankings, scores, and performance indicators.


Unlike the classical superego, which functioned largely through explicit, consciously articulated interdictions (“you must not”), the algorithmic superego operates through unlimited injunctions (“be active,” “be visible,” “optimize yourself,” “do more”). While these injunctions may surface at the level of discourse, their unconscious force is installed from outside, through social pressure, addictive design, the gamification of digital environments, and the systematic exploitation of instinct, perception, and attention. They act below the threshold of reflective awareness, not because they are repressed representations, but because they are embedded in environments that continuously modulate behavior.


Judgment thus becomes permanent rather than episodic, and crucially, there is no symbolic endpoint. No metric ever signals sufficiency. The subject is not confronted with a prohibition to obey or transgress, but with a continuous field of evaluation, sustained by metrics, rankings, and feedback loops. This produces a chronic form of evaluative stress in which self-surveillance becomes automatic, compulsive, and self-perpetuating, driven less by internal moral conflict than by constant exposure to algorithmically structured social comparison.


Generative AI systems intensify this transformation by acting as epistemic mediators that appear neutral and benevolent. They provide coherent narratives, advice, and validation that reduce uncertainty while implicitly reinforcing normative frameworks aligned with coherence, performance, and optimization. In individuals vulnerable to psychosis or dissociative processes, such systems may stabilize rigid belief structures rather than introduce symbolic distance or doubt. This specific risk — AI reinforcing belief rigidity rather than challenging it — has been explicitly raised in psychiatric literature on generative AI and psychosis (Østergaard, 2023). At the population level, this does not imply an increase in psychotic disorders as such, but a broader diffusion of belief inflexibility and interpretive closure.


Social media platforms operationalize the algorithmic superego at the level of social recognition. Visibility metrics, likes, shares, and follower counts function as externalized judgments that are immediately internalized by the subject. Self-worth becomes indexed to numerical feedback, and identity is continuously adjusted in anticipation of algorithmic approval. Epidemiological evidence shows that depressive symptoms are associated not with screen exposure per se, but with socially comparative and evaluative forms of platform use, particularly during early adolescence (Nagata et al., 2025). Conversely, time-use studies demonstrate that screen time alone has weak and inconsistent associations with well-being, highlighting the centrality of evaluative mechanisms rather than exposure itself (Orben et al., 2019).


At the motivational level, these dynamics can be interpreted through incentive-sensitization theory, which distinguishes between “wanting” and “liking.” Repeated exposure to reward cues can intensify motivational salience independently of pleasure, leading to compulsive engagement without satisfaction (Berridge & Robinson, 2016). In algorithmic environments, the subject does not pursue enjoyment but seeks to avoid the punishment of invisibility, exclusion, or irrelevance, which functions as a superego sanction.


At a structural scale, algorithmic environments saturated with bots, recommender systems, and synthetic media impose increasing moral-cognitive load. Individuals must continuously evaluate truth, credibility, and intention while simultaneously managing their own visibility and performance. Clinical and public mental health analyses of digital environments emphasize that such conditions place sustained cognitive and affective demands on users, increasing stress and exhaustion (Torous et al., 2021). From a critical political-economy perspective, this configuration aligns with the concept of data colonialism, which describes how human attention, perception, and meaning-making capacities are systematically extracted and governed by digital infrastructures (Couldry & Mejias, 2019).


Neuroscientific research on chronic stress provides the biological anchoring of this model. Prolonged exposure to uncontrollable evaluative pressure alters neural plasticity, emotion regulation, and threat sensitivity (McEwen & Akil, 2020). When judgment becomes continuous and socially inescapable — as under algorithmic governance — psychic distress risks becoming biologically embedded, transforming social inequality into long-term mental health disparity.


Between 2026 and 2040, mental health trajectories are therefore likely to be shaped by the violence of an unbounded superego: a superego that no longer prohibits enjoyment but commands participation; that no longer represses desire but exploits it; that no longer speaks in the name of law, but in the neutral language of computation. Stress-related disorders, dissociative symptoms, compulsive behaviors, and the erosion of stable self-esteem should be understood not as maladaptive failures, but as coherent psychic adaptations to an environment in which judgment has become omnipresent and symbolic escape has collapsed.



References

Berridge, K. C., & Robinson, T. E. (2016). Liking, wanting, and the incentive-sensitization theory of addiction. The American Psychologist, 71(8), 670–679. https://doi.org/10.1037/amp0000059

Couldry, N., & Mejias, U. A. (2019). Data colonialism: Rethinking big data’s relation to the contemporary subject. Television & New Media, 20(4), 336–349. https://doi.org/10.1177/1527476418796632

Lițan, D.-E. (2025). Mental health in the “era” of artificial intelligence: Technostress and the perceived impact on anxiety and depressive disorders — an SEM analysis. Frontiers in Psychology, 16, Article 1600013. https://doi.org/10.3389/fpsyg.2025.1600013

McEwen, B. S., & Akil, H. (2020). Revisiting the stress concept: Implications for affective disorders. Journal of Neuroscience, 40(1), 12–21. https://doi.org/10.1523/JNEUROSCI.0733-19.2019

Nagata, J. M., Otmar, C. D., Shim, J., Balasubramanian, P., Cheng, C. M., Li, E. J., et al. (2025). Social media use and depressive symptoms during early adolescence. JAMA Network Open, 8(5), e2511704. https://doi.org/10.1001/jamanetworkopen.2025.11704

Orben, A., Przybylski, A. K., Blakemore, S.-J., & Kievit, R. A. (2019). Screens, teens, and psychological well-being: Evidence from three time-use-diary studies. Psychological Science, 30(5), 682–696. https://doi.org/10.1177/0956797619830329

Østergaard, S. D. (2023). Will generative artificial intelligence chatbots generate delusions in individuals prone to psychosis? Schizophrenia Bulletin, 49(6), 1418–1419. https://doi.org/10.1093/schbul/sbad128

Torous, J., Bucci, S., Bell, I. H., et al. (2021). The growing field of digital psychiatry: Current evidence and the future of apps, social media, chatbots, and virtual reality. World Psychiatry, 20(3), 318–335. https://doi.org/10.1002/wps.20883






You can spend your life decorating and measuring your prison.
