ChatGPT — Jungian?

Sarah M. Inoue
2 min read · Dec 21, 2022

ChatGPT was all over the news last week as reporters and professors tried out the AI. The underlying software is based on GPT-3.5, and both GPT-3.5 and GPT-3 prompted a similar week of experimentation and wonder when they appeared. All three use machine learning: a computer is fed an enormous amount of text and trained to interpret it and produce similar text of its own.
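
To make that idea concrete, here is a minimal sketch in Python of the same read-then-generate loop. It uses a toy bigram (word-pair) model with a made-up corpus, not GPT's actual mechanism; the real models are huge neural networks that predict the next token with billions of learned parameters rather than a lookup table. The sketch only illustrates the principle of learning from text and then emitting statistically similar text.

```python
import random
from collections import defaultdict

# A toy stand-in for "feed a computer text, then have it produce
# similar text": a bigram (Markov chain) model. Real GPT models are
# large neural networks, not lookup tables, but both learn from a
# corpus and then generate text that statistically resembles it.

corpus = "the cat sat on the mat and the dog sat on the rug".split()

# "Training": record which words have been seen following each word.
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

# "Generation": start with a word and repeatedly sample a plausible
# next word, much as GPT repeatedly predicts the next token.
word = "the"
output = [word]
for _ in range(8):
    choices = following.get(word)
    if not choices:  # dead end: this word was never seen mid-corpus
        break
    word = random.choice(choices)
    output.append(word)

print(" ".join(output))  # e.g. "the dog sat on the mat and the cat"
```

GPT does, at heart, the same next-word guessing, just with a vastly larger corpus and a vastly more sophisticated model of what is likely to come next.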

In other words, GPT read the Internet and can now answer questions with text similar to what it has read. GPT has read our collective narrative by taking in large portions of the Internet (the Common Crawl dataset), books, and Wikipedia.

I listened to a This American Life story about a woman using GPT-3 to write about her sister’s death (https://www.thisamericanlife.org/757/the-ghost-in-the-machine, Act One: Ghostwriter). Listening to it, I felt that GPT had tapped into the collective unconscious that Jung identified. I know very little about Jung’s work, but the idea of a collective unconscious had been mentioned to me, and thinking about GPT-3 and GPT-3.5 ingesting all of these texts and then creating a narrative around death brought it to mind. GPT-3 and GPT-3.5 have a better sense of our cultural narrative than any one person could.

For many of us, watching television and Hollywood movies and reading the literary canon gives us an overarching sense of the normative or cultural narrative for our lives. None of us can read or process everything, so each of us carries only a personal understanding of that narrative, the story of how life is supposed to work, one deeply shaped by our family, our culture, and our personal history. GPT has no personal history to bias it, although I suspect it is strongly biased toward the people who write long stories on the Internet. For that reason, psychologists and deconstructionists could use GPT to uncover some of the normative structures in our lives.

I hope they do. I’m intrigued not by what GPT might say, but by what its answers reveal about us.

For example, the woman in the This American Life story asks GPT-3 to write about her sister, starting with a prompt that tells the model her sister has cancer. GPT writes a story in which the sister recovers, and the two share a happy moment at the end. I find it fascinating that pooling all of our stories about young people with cancer yields a happy ending. What would happen if we limited the training inputs to stories from a single country? Would most countries produce happy endings? Would stories from lower-income countries be more likely to end unhappily?

Instead of focusing on how GPT and ChatGPT talk, we can look at what their synthesis of our collective writing tells us about us.
