
AI as a Medium of Power in Psychotherapy

Behavioral health companies are surging ahead with researching and developing products and systems driven by artificial intelligence (AI), perhaps before enough people have examined the potential impacts. AI in the field of psychotherapy will change power at institutional, cultural, and therapeutic levels. It will be impossible to stop the development, adoption, and application of these technologies, so it's important that, as mental health professionals, we consider the ethical use of AI before these systems appear in our own practices.


AI in the therapy room

Overview of AI integration in psychotherapy

Some of the initial purposes for integrating AI into behavioral health systems relate to diagnosis and documentation. Designing software to assist with these tasks isn’t new; within the field, there already exist many products to help therapists. However, AI has proven adept at accurately identifying the most parsimonious diagnosis from an intake interview, and it’s an efficient note taker. When used for diagnosis and documentation, AI can simply do a better job than current software.


But that's not the limit of how AI is being developed for use in psychotherapy. It's being explored as a way to train therapists, to provide them with in-session interventions, and even to reach the lofty goal of creating therapeutic AI chatbots. Even if we set aside the profit motives behind these products and assume that every company making them has the best intentions, we still have to understand how they might impact and alter power dynamics at the level of institutions, culture, and therapy itself.


Those with institutional power who know little to nothing of the art but a lot about medical and insurance models will have agendas that are at odds with most modern perspectives on psychotherapy. Power has the ability to constrain what is considered valid within psychotherapy: What is normal? What is disease? Who has authority to speak? What treatments can be used? Institutional and cultural authorities will shape the invisible rules and patterns that guide how we talk about and understand AI-driven behavioral health systems and influence what is considered true, relevant, or acceptable in psychology and psychotherapy.


And so, what is the cost of psychotherapists not participating in the development of AI in behavioral health? AI models can be trained. AI systems must be designed by people. But if therapists don't engage, they are ceding the training and design to those with less understanding. We need to have robust models trained on all of the psychotherapy orientations and many cultural perspectives to have the best chance at preserving ethical approaches to therapy. Therapists must be willing to help train psychotherapy AI models and design AI mental health systems if they want to be part of the discursive formations. Trying to run away from AI will not be effective.


There must also be a middle ground between giving AI all of the judgment and power in the course of therapy and rejecting it completely. To find this middle way will require engagement and discussion. We’ll need to train future generations of therapists to understand that the therapeutic relationship is the most sacred part of therapy. If AI is creating a barrier to that, then its scope within therapy must be diminished.


The adoption of AI will also require therapists to embrace deep ethics more than ever before. Regardless of what technology or process we’re focused on in the context of psychotherapy, the same ethics apply. It doesn’t matter if we’re talking about general-purpose software or something potentially much more powerful, like AI; therapists must still evaluate whether or not they are adhering to the ethical code of psychotherapy. The ethics spelled out by the American Counseling Association will be the ones of primary interest in this examination: autonomy, beneficence, fidelity, justice, nonmaleficence, and veracity. Though they wrote in a different era, the philosopher Michel Foucault and the multidisciplinary psychoanalyst Ernest Becker offer theories that can help us better understand these ethical concerns.


Looking at AI through two lenses

Imagine power as something that doesn't just go back and forth between people or groups, like a game of catch. Instead, it's more like setting up a game board, where the rules you create influence all the moves that players can make. So, when someone exercises power, they're not just telling others what to do; they're shaping what actions are even possible in the first place. It's as if a chess player made a move that limits every future move their opponent can make; that's how power is exercised. The power of AI is fundamentally changing the game board of psychotherapy.


Power shapes our nervous systems from the womb and beyond. According to Foucault, power is exercised through a complex network of social practices, knowledge systems, and discursive formations rather than being wielded by a sovereign entity or held by an elite class. Power creates new possibilities but suppresses others. It shapes identities, guides conversations, and normalizes certain behaviors and truths. The power of AI wielded by institutions will create new realities while simultaneously destroying other possible paths. If woven into psychotherapy, it’ll constrain and expand the knowledge we have access to in our work, change the tools of the trade, and alter the professional language we use.


Ernest Becker suggests that people are often unconsciously motivated by a deep-seated fear of death. This fear drives us to find ways to feel like we’re part of something lasting. If we can connect with experiences that symbolize immortality or leave behind a legacy that outlives our physical existence, then the fear can be temporarily assuaged. But when people don’t look into existential concerns and instead participate in a lifelong denial of death, they may renounce freedom and responsibility for their own lives and defer to authority in its many forms, including merely conforming to the dominant culture. Becker’s theories are important because AI may seduce psychotherapists and clients into deferring to it as an authority in order to lessen their own anxieties. Instead of psychotherapy being used to facilitate the growth of responsibility that adults must develop, AI-driven psychotherapy could create the ultimate dependent relationship among therapists, clients, and AI tools.


AI will influence our cultural values and ideas about heroism, another important part of Becker’s theories. Once AI begins to outperform humans in significant ways, its presence will challenge our understanding of our own achievements and of what we’re contributing to a personal legacy. If AI’s power leads people to feel increasingly insignificant, then its use in psychotherapy could threaten both the therapeutic relationship and people’s willingness to seek help for mental health struggles.


Effects on behavioral health institutions

Foucault (1978) writes, "The exercise of power is not simply a relationship between partners, individual or collective; it is a way in which certain actions may structure the field of other possible actions." The integration of AI has the potential to structure the therapeutic process in new ways, potentially constraining and influencing both the therapist's and the patient's actions.

There is a chasm between therapists who want to treat the “whole human being” and those who treat a constellation of symptoms. The DSM-driven approach to therapy focuses on the latter, which makes more sense in the context of psychiatry, where the DSM originated. While this was not historically the case, psychiatry today focuses more on medication management, the sole purpose of which is to affect symptoms. The relational, biopsychosocial approach to psychotherapy, while also concerned with symptoms, extends into concerns about the human condition. The long-running tension between these approaches could be exacerbated by AI-driven tools.


AI will do a much better job assisting with therapy if the game board is constrained to focus more on symptoms. AI is about prediction: What sequence comes next logically? But people are dynamic systems more akin to global weather. The number of permutations of synapses in the human brain exceeds the estimated number of atoms in the universe.* And so, therapy cannot be about prediction. If it were, the field would not exist because every therapist would fail miserably. We can’t anticipate when a relatively stable patient might suddenly have their first psychotic breakdown. We can’t predict what impact an intervention will have on someone. If we could, psychotherapy could be manualized. And AI, if used improperly, could merely be the next failed attempt at manualizing something that is beyond the scale of computing by many orders of magnitude.

Institutions that attempt to influence behavioral health systems—to set up the game board tilted toward prediction and manualization, to focus on symptom reduction over the human condition and spirit—will likely only trigger rebellion against AI en masse. AI can put words to creativity, emotion, and even empathy, but its empathy is like that of a psychopath, who has an intellectual or superficial appreciation of the inner state of others but lacks depth of heart and emotion. It’s the beating heart within the therapist that helps facilitate the healing. It’s the intuitive, unconscious magic that happens when right-brain processes intertwine between two people, when psychotherapy is facilitated by a compassionate, ethical person, that helps someone grow and change. I’m not suggesting that psychopaths can’t be competent psychotherapists (I’m sure there are a few). But without doing a formal study, my hunch is that psychopaths, in general, would underperform other personalities in the role of psychotherapist. And so, we must not cede the treatment to AI.

Regardless of the obvious deduction that humans are best suited to the role of therapist, an effort to “cure” people more efficiently will, as usual, be the agenda of many institutions. Their ability to influence may mean that therapists have to agree to use certain AI tools or instruments as part of therapy in order to be reimbursed. Once these tools become part of the process, the institutions no longer have to be concerned with exerting power directly on psychotherapy. By shaping the tools of therapy, they can sculpt the field.


"The success of disciplinary power derives no doubt from the use of simple instruments: hierarchical observation, normalizing judgment and their combination in a procedure that is specific to it, the examination," writes Foucault (1977). The reason control and order work so well is because they rely on basic tools. These systems watch people closely to decide what is “normal” behavior using tests and inspections.


AI will likely be viewed as the objective inspector of therapy by some with institutional power. But AI will not be objective. It will prescribe its views on what is normal behavior. It will then suggest to therapists interventions that lead to "normal" behavior. And what will be considered normal will be based upon the designers of the systems and those who tell the designers what to design. As those who practiced psychotherapy prior to AI become fewer and fewer, the hammering out of a narrowly defined set of psychotherapy outcomes could become unquestioned. To not adopt the AI systems could make one unemployable or force some to adopt a cash-only practice. If more therapists operate outside of the insurance systems, humane psychotherapy would be available only to the wealthiest of clients; the working class and poor would be relegated to using the AI-driven approaches that may largely aim to normalize them rather than humanize them.


Effects on culture that will impact mental health

Becker proposed that human civilization and the cultures that arise from it are primarily psychological constructs to shield us from the dread of our mortality. He wrote, “Culture is a structure of lies that we must have in order to maintain our psychic equilibrium” (Becker, 1973). This perspective also hints at how culture paves avenues for heroism so that we can find significance in the face of inevitable death. Heroism isn’t just saving someone from drowning or finding a cure for a disease; it can be the smallest of psychological boosts to self-esteem that prevent us from falling into the void of despair. Finishing a work project or retiling the bathroom to look more modern makes us feel like we have some significance.

As AI advances, it will threaten our need for significance. Imagine a near future where AI dominates communication, particularly among the cognitive class. Many will feel that their years of study and training have been rendered valueless as AI takes the professional stage. This warping of culture could have significant psychological impacts on us. In professional settings where humans are reduced to being the biological conduits of AI communication, the search for significance becomes more challenging.

Foucault’s correlative theory in “Discipline and Punish” explores the relationship between knowledge and power. He wrote, "There is no power relation without the correlative constitution of a field of knowledge, nor any knowledge that does not presuppose and constitute at the same time power relations” (Foucault, 1977). This interplay becomes crucial in understanding the evolving dynamics of artificial intelligence in our culture and knowledge systems.

A shift in authority towards AI, in line with Foucault's power-knowledge theory, requires a reevaluation of the role of psychotherapists. We will grapple with the need to protect our clients’ right to self-identity. In a data-driven society where AI's interpretations heavily influence societal norms and individual opportunities, what chance will we have as mental health professionals to promote justice for our clients? The challenge lies in preserving humanity within therapy against the backdrop of an increasingly AI-centric culture.

When AI outperforms us in most ways, including knowing more about where we fit into society than we do ourselves, the psychological impact will likely be a common theme of psychotherapy. As AI's interpretation of individual data shapes access to societal resources and institutions, the concept of identity undergoes a transformation. In the future, the question “Who am I?” might better be phrased as, "Who does AI think I am?" Other questions arise: How can society and culture preserve the essence of heroism and human significance in an AI-driven world? What must psychotherapists do to protect the right of people to choose their own identity? What can we do to preserve justice and client autonomy when a complex, invisible collection of data is feeding interrelated AI systems that are trying to impose interpretations of a person within a game board that cares mostly about productivity and capital?

Moreover, in an AI-dominant culture, our acts of heroism may merely be rebelling against the interpretations that AI projects upon us while asserting our own sense of self. Psychotherapy may be largely centered around this process as the AI-driven culture attempts to define us based upon AI power-knowledge. If people are coming to therapy largely as a result of lacking their own autonomy to choose who they wish to be, any use of AI in the treatment could have a negative impact on the willingness of people to engage with it or trust it. If people are looking for a refuge from a world transformed by AI that has rendered them even smaller than they previously felt, any use of AI at all could do harm and subsequently be unethical.


Effects on therapeutic content

"People create the reality they need in order to discover themselves," writes Becker (1975). It could be argued that AI tools in therapy may influence what content the therapist focuses on. If the field moves away from cocreation of realities between the therapist and the person in treatment, construction of the types of realities essential for self-discovery may be further from reach. Furthermore, if a disembodied system that can never die is shaping psychotherapy for the people, does it still respect patient autonomy? Whatever uses of AI emerge, ethical therapists must ensure that the client is cocreating new realities with another human being and not with an algorithm-driven, word-prediction service that has never breathed.

People in therapy must also be free to discuss every aspect of human existence. There’s been no shortage of criticism already regarding censorship and bias in AI models. If therapeutic models are biased against particular views or desires, the AI systems will potentially stigmatize people in therapy who express them. I found that using AI to create psychotherapy notes was fruitless because AI determined that common topics violated content guidelines. How can we create a container where every type of content can be examined and processed when the tools we use filter out everything that isn’t “positive and constructive,” a refrain a popular AI model uses when it considers a topic taboo or forbidden? If AI analyzes transcripts of psychotherapy sessions and pathologizes or stigmatizes the things that are spoken about, we’ll be reversing decades of work focused on creating room in therapy to explore the full range of a person’s fears, desires, and fantasies without shame.

This range of content includes death. "To live fully is to live with an awareness of the rumble of terror that underlies everything," writes Becker (1973). If we are outsourcing our search for what is meaningful to systems that don’t die and cannot understand death in an embodied way, we’re at risk of further eroding our ability to tolerate and accept death. We cannot cede control to AI systems that don’t feel the significance of exploring death and that may simply treat it as another topic that isn’t “positive and constructive.”

But according to Becker’s theories, we may welcome avoiding it. People have a long history of relinquishing their autonomy to authority figures whom they believe can protect them from destruction. Rather than cultivating a present-moment awareness of our mortality and then using that as a vehicle to choose a life based upon our own desires and values, we may seek comfort in the god-like powers of AI. Instead of encouraging the psychotherapist to dive headlong with their patients into the exploration of insignificance, temporality, and eventual demise, AI systems endorsed by insurance companies may shift them away from those areas, and all participants may breathe a sigh of relief. Psychotherapists, who also aren’t immune to the impulse to deny or avoid existential dread, may become complicit in taking this topic off the table, robbing people of the opportunity to explore it and create their own meaningful paths.


Effects on the therapeutic relationship

In psychotherapy, the therapist typically holds significant power, making client autonomy inherently fragile. To compound the responsibility of the therapist, the most comprehensive meta-analysis of psychotherapy outcomes concludes that, outside of client factors, the therapeutic relationship accounts for most of the variance in success. Since therapists have no ability to control client factors, whether psychotherapy succeeds lies largely outside of the therapist's influence. What remains within that influence is mostly the quality of the relationship between the person who is suffering and the psychotherapist.

Foucault illuminated the multifaceted nature of power, which becomes increasingly relevant in the context of AI-assisted therapy. Power would no longer be limited to the interaction between therapist and patient but would expand to include the suggestions of AI. Therapists could begin to overtly or covertly coerce patients toward AI’s suggestions, bypassing the much more laborious and slow-moving process of a therapy that centers on the quality of the therapeutic relationship.

"Psychiatric power is not founded on the strength of a discourse of truth but on the strength of a true discourse" (Foucault, 2006). This quote from "Psychiatric Power: Lectures at the Collège de France" raises questions about the authoritative weight given to AI-driven insights and diagnoses in the therapeutic process. As AI systems begin to analyze and influence therapy sessions, the authenticity of the therapeutic discourse may be compromised. If AI frameworks inadvertently penalize therapists' honesty and vulnerability, the therapeutic relationship could shift from a genuine, empowering interaction to a form of power that constrains self-actualization, autonomy, and choice, pushing clients towards symptom reduction and functionality as defined by dominant societal narratives.

Creativity is a vital component of human psychological healing, yet it’s fraught with risks and complexities that an AI, with its inherently rational basis, may fail to appreciate. "The road to creativity passes so close to the madhouse and often detours or ends there," writes Becker (1973). A completely rational system built on AI may fail to understand the importance of going through regressions and psychological unraveling in the process of letting go of unhealthy conditioned responses. An AI system that’s trained on suggesting interventions that will "normalize" people could, in the process, dehumanize them, take away their autonomy, and interfere in a process of self-actualization that a disembodied machine could never understand. Only a strong therapeutic relationship built over many sessions can create the conditions that will allow some of the most traumatized people to let down their defenses enough to explore the darkest realms of their psyches with a trusted companion—the therapist.


Case studies of ethical and unethical uses of AI systems in psychotherapy

Now that we’ve explored the ethics related to employing the power of AI in therapy, let’s clarify the potential applications of AI-driven behavioral health systems by looking at two vignettes. The first will explore an ethical use, and the second will illustrate an unethical approach. With these examples, we can paint a picture of how critical it is that the therapist be the arbiter of ethical applications of AI.


Illustration of the ethical use of AI in psychotherapy

It’s been five years since Dr. Alex started using AI tools. Today is the fifth session with Jordan. As part of the informed consent process when Jordan first met with Dr. Alex, the use of AI tools was explained in detail, and Jordan was given the opportunity to opt out of using the AI-driven system. Dr. Alex explained the benefits and potential negative aspects of the system. Jordan thought the benefits outweighed the drawbacks and consented to have the AI tools be part of therapy. Dr. Alex explained that at any time during treatment, Jordan had the right to withdraw consent to use AI tools, and that Dr. Alex would comply.

During today’s session, Dr. Alex asked Jordan about a particularly sensitive topic. When Jordan responded, the AI tool noticed that Jordan’s anxiety spiked. Dr. Alex broke eye contact for a moment to read the alert displayed on the tablet that was running the software.

“It seems like you’re having quite a bit of anxiety about this,” said Dr. Alex.

“Is that what the AI is saying?” asked Jordan.

“Yes, although it was also very clear to me,” said Dr. Alex.

“This is just too much right now. I feel like I just need it to be you and me in the room right now. Can you turn that off?” asked Jordan.

“Of course,” said Dr. Alex, turning off the tablet and putting it in a desk drawer.

The session continued, and Dr. Alex provided supportive reflection and presence. When the session ended, Jordan thanked Dr. Alex for being fully present and scheduled the next appointment. Jordan left the session feeling cared for by Dr. Alex.

In this scenario, Dr. Alex used AI ethically. The pros and cons of AI were explained during informed consent, adhering to the ethic of veracity. During the session, Dr. Alex stopped using the tablet when Jordan asked, respecting patient autonomy. Additionally, Dr. Alex demonstrated fidelity by putting the device away upon request, which was promised during informed consent. As a result, the session was beneficial and did no harm to the therapeutic relationship or Jordan.


Illustration of the unethical use of AI in psychotherapy

Dr. Riley is very excited about how AI will help her be the best therapist she can be. She is new to the field and wants to make certain that the people who come to see her don’t view her as incompetent or inexperienced. She also wants to develop her professional reputation and have her clients believe in her abilities. As a result, during informed consent, she doesn’t mention how she uses AI in her practice.

Today, she had a new client, Emma, who had just joined a telehealth session. Dr. Riley welcomed Emma, who was nervous to meet because it was her first time trying therapy, and she had many reservations. Dr. Riley had used AI tools to provide a tentative diagnosis for Emma based upon the intake interview she had completed the week before.

“That fits with the bipolar I diagnosis,” said Dr. Riley in response to Emma explaining that she almost didn’t make the session since she was feeling very depressed.

Emma felt jarred at hearing the diagnosis foisted upon her within the first ten minutes of the session. Dr. Riley looked at the AI dashboard in the corner of the telehealth screen, which alerted her to the spike in anxiety.

“You’re feeling very anxious,” said Dr. Riley with an air of self-satisfied certainty.

“I guess,” said Emma, feeling exposed, vulnerable, and overwhelmed at hearing a diagnosis that could have serious implications.

Dr. Riley continued to look at the AI dashboard that was suggesting that Emma’s anxiety was too high. It recommended that Dr. Riley resource the client. Emma felt alone and disconnected from the experience.

“It looks like your anxiety is so high that we should do some mindful breathing before continuing. I can show you a few techniques,” Dr. Riley said while reading over a few mindfulness exercises produced by the AI system.

At that moment, the screen displayed a notification: “Patient has disconnected from the session.”

This scenario, while perhaps comically extreme, clearly demonstrates the employment of AI tools in an unethical way. Dr. Riley violated the ethic of veracity by not explaining the use of AI in treatment. She also didn’t respect patient autonomy by failing to ask Emma if she wanted to know her diagnosis before providing it to her. Her use of AI caused severe harm because her dependence on the AI tool got in the way of forming a therapeutic relationship. Dr. Riley’s unethical use of AI resulted in Emma having a panic attack, leaving the session, and giving up on psychotherapy as an option for her.

These are only two of an infinite number of scenarios that could play out in the future. In reality, scenarios will be more complex. Therapists will not perfectly employ AI in therapy, but with a clear understanding of the ethical and unethical uses of these systems, therapists can minimize the potential for harm and increase the likelihood that AI is used for the benefit of the people in therapy, not to reduce therapist anxiety or for other reasons.


What else can psychotherapists do?

The major criticism of earlier orientations to psychotherapy is that the psychoanalyst took on the role of objective observer. Modern relational theories understand that there is no objective observer. Two-person treatment is rooted in intersubjectivity and in the awareness that the participation of a particular therapist fundamentally alters the treatment. The implementation of AI may seduce therapists into retreating back into the safety of a belief in objectivity. And if you don’t believe this is a possibility within the field of therapy, look at the impact that smartphones and social media have had on human interactions and relationships within the span of fifteen years. In the process of therapists trying to reduce their own anxieties, the art of therapy may regress rather than remain human, full of vulnerability and anxious moments of profound uncertainty. We must not go to that extreme. Therapists must preserve the therapeutic relationship and continue to promote human experience as the authority.


At the other extreme, to simply reject AI outright will not be effective. There will be no stopping psychotherapy transcripts from being used to train AI models. It’s already happening, and it will continue to happen. Trying to block it out will mean that technologists and bureaucrats design the systems in a vacuum rather than doing so with the meaningful guidance of seasoned therapists who have a depth of insight into the human condition and the therapeutic relationship. To opt out completely means to cede the development of AI models and interventions to those who perhaps are less concerned about including the full range of human experience, cultural perspectives, and therapeutic content in psychotherapy.


The therapeutic relationship is the most important factor within psychotherapy, and we shouldn’t outsource it to AI. But we can leverage AI to help improve it. This is the middle ground for narrowly defining AI’s role and keeping it constrained to ethical purposes. With the use of patient feedback and perhaps other instruments, AI can be used to assess the strength of the therapeutic relationship and to provide suggestions for opportunities to repair or strengthen it. Soliciting client feedback has already been determined to be an integral component of developing therapist expertise. AI tools can be created to help with this—so long as we heed all of the previously mentioned warnings.

Responsibility and power still lie with us as therapists. Our duty is still to benefit the person we are working with and to do no harm. We cannot excuse ourselves from the task of considering whether the power of new technologies is helping or harming merely because the field has changed. Even if AI’s power has changed reality so that institutions and the culture reward those who practice with AI-assisted techniques and systems, we, as individual therapists, must be the gatekeepers for what is ethical, the practitioners of what is humane, and the ones speaking out when justice is no longer being served.


Notes

*To understand the claim regarding the permutations of synaptic connections being greater than the number of atoms in the known universe, we need to break it down into two parts: the estimated number of synapses in the human brain and the estimated number of atoms in the universe.

1. Synapses in the human brain:

  • The human brain contains approximately 86 billion neurons.

  • Each neuron forms synapses with other neurons. The number of synapses per neuron can vary widely, from about 1,000 to over 10,000 in some cases.

  • A conservative average estimate might be around 1,000 synapses per neuron. Multiplying 86 billion neurons by 1,000 synapses gives us about 86 trillion synapses in the human brain.

2. Atoms in the universe:

  • Estimates of the total number of atoms in the observable universe range from 10⁷⁸ to 10⁸² atoms.

3. Permutations of synapses:

  • A permutation refers to an arrangement or ordering of items. The number of permutations of synapses would theoretically consider all possible ways these synapses could be arranged or connected.

  • However, calculating the exact number of permutations of synapses is not straightforward and would result in an astronomically high number, far exceeding the simple multiplication of neurons and synapses.

4. Comparison and analysis:

  • The claim refers to the permutations of synaptic connections (potential arrangements or patterns of connectivity) rather than the synapses themselves.

  • Even with 86 trillion synapses, the number of possible permutations of these synapses (how they could be interconnected) is an extremely large number, potentially surpassing many common estimates of the number of atoms in the universe.

  • However, it's important to note that not all theoretical permutations of synaptic connections are biologically plausible or meaningful. The brain organizes and reorganizes these connections in specific, non-random ways.

While the claim serves as a powerful metaphor for the complexity of the human brain, it should be understood more as an illustrative hyperbole rather than a precise scientific fact. The actual number of meaningful synaptic connections and arrangements in the brain is immense and contributes to the vast computational power and complexity of the brain, but a direct comparison to the number of atoms in the universe is more symbolic than literal.
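For readers who want a rough sense of the scale involved, here is a minimal back-of-envelope sketch. It is an illustration only: it assumes the conservative figure of roughly 8.6 × 10¹³ synapses from above, treats them as freely orderable (which, as noted, overstates what is biologically plausible), and uses Stirling's approximation.

\[
\log_{10}(n!) \approx n\left(\log_{10} n - \log_{10} e\right), \qquad n = 8.6\times10^{13}
\]
\[
\log_{10}(n!) \approx 8.6\times10^{13}\,(13.93 - 0.43) \approx 1.2\times10^{15}
\]
\[
n! \approx 10^{\,1.2\times10^{15}} \gg 10^{82}
\]

Even counting only the possible orderings of the synapses, the result is a number with on the order of a quadrillion digits, compared with the roughly 79 to 83 digits of the estimates for atoms in the observable universe, which is why the comparison above is best read as illustrative rather than literal.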


Author bio and disclosure

I am a licensed psychotherapist and co-founder/AI researcher for Sessions Health, a popular behavioral health EHR. The ethical and moral decisions related to using AI in behavioral health software are a significant part of my responsibilities.


AI disclosure

ChatGPT was used as a research tool and as a sounding board for some of my arguments. It wrote the explanation in the notes section and generated the image for the essay.


References

Becker, E. (1973). The denial of death. Free Press.

Becker, E. (1975). Escape from evil. Free Press.

Foucault, M. (1977). Discipline and punish: The birth of the prison (A. Sheridan, Trans.). Vintage Books.

Foucault, M. (1978). The history of sexuality, Vol. 1: An introduction (R. Hurley, Trans.). Pantheon Books.

Foucault, M. (2006). Psychiatric power: Lectures at the Collège de France, 1973–1974 (G. Burchell, Trans.; J. Lagrange, Ed.). Picador.
