
OpenAI and MIT Study: Findings on the Impacts of AI Use

OpenAI and MIT conducted a study to analyze the impacts of AI use and its implications for users’ emotional well-being. Here are the key insights of that study.

It seems like we’re always talking about what AI can do for us. You know, help us boost productivity, answer questions, or brainstorm ideas when we’re stuck. However, how often do we stop and ask what it’s doing to us?

OpenAI and the MIT Media Lab set out to explore that question through a large-scale research collaboration, “Investigating Affective Use and Emotional Well-being on ChatGPT”. 

The study focused on how interactions with ChatGPT affected users’ emotional well-being — what they found could change how we think about using and developing AI going forward.

This article breaks down the key insights of the study, including how the researchers conducted it, what they found, and why it matters.

Key Takeaways (TL;DR)

  • OpenAI and the MIT Media Lab conducted two parallel studies to help better understand how people use ChatGPT and its impacts on emotional well-being.
    • Study 1: Conducted by OpenAI with an automated analysis of close to 40 million ChatGPT interactions combined with user surveys.
    • Study 2: Conducted by MIT Media Lab with nearly 1,000 participants, as a Randomized Controlled Trial (RCT).
  • The study found that most people had more neutral, task-based interactions with ChatGPT, but a small group formed something like emotional bonds or considered ChatGPT to be a friend.
  • “Voice conversations resulted in more positive well-being if use was brief, but worse outcomes with prolonged daily use.” - OpenAI
  • ‘Engaging’ voice-based chat often led to more emotional responses from ChatGPT, but the study highlights that this didn’t impact users negatively.
  • Heavy use and lower well-being were linked with affective use of ChatGPT.

Study Overview: Participants, Structure, and Methods

To help them understand how people use ChatGPT and how it affects them, OpenAI and the MIT Media Lab carried out two parallel studies.

Study 1: OpenAI’s analysis of ChatGPT user behavior

For their part of the study, the OpenAI team analyzed close to 40 million ChatGPT conversations to understand how people emotionally engaged with the AI (gaining insight into affective use patterns).

They had a particular interest in what they called power users, or those who ranked among the top 1,000 voice users on any given day.

With user privacy in mind, OpenAI developed a set of automated classifiers called EmoClassifiers V1 to scan conversations for emotional and behavioral cues, like:

  • Feelings of isolation
  • Openness about emotions or struggles
  • Dependence on ChatGPT for emotional support
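
To make the idea concrete, here’s a toy sketch of how automated classifiers like these might flag cues across a batch of conversations. To be clear, this is purely illustrative: the cue names, keyword lists, and function names are our own stand-ins, not OpenAI’s actual EmoClassifiers V1, which relies on model-based classification rather than simple keyword matching.

```python
# Toy sketch of conversation-level cue flagging. This is NOT EmoClassifiers V1;
# the cue names and keyword heuristics below are illustrative stand-ins only.

CUE_KEYWORDS = {
    "loneliness": ["alone", "lonely", "no one to talk to"],
    "self_disclosure": ["i feel", "i've been struggling", "to be honest"],
    "dependence": ["i need you", "can't cope without"],
}

def flag_cues(message: str) -> set[str]:
    """Return the emotional cues detected in a single user message."""
    text = message.lower()
    return {cue for cue, keywords in CUE_KEYWORDS.items()
            if any(kw in text for kw in keywords)}

def conversation_flag_rates(conversations: list[list[str]]) -> dict[str, float]:
    """Share of conversations in which each cue appears at least once."""
    counts = dict.fromkeys(CUE_KEYWORDS, 0)
    for messages in conversations:
        flagged = set().union(*(flag_cues(m) for m in messages), set())
        for cue in flagged:
            counts[cue] += 1
    total = max(len(conversations), 1)
    return {cue: count / total for cue, count in counts.items()}

sample = [
    ["Can you summarize this report for me?"],         # neutral, task-based
    ["I feel like I have no one to talk to lately."],  # loneliness + disclosure
]
print(conversation_flag_rates(sample))
# -> {'loneliness': 0.5, 'self_disclosure': 0.5, 'dependence': 0.0}
```

Aggregating flag rates this way, rather than reading individual conversations, is also how an approach like this can preserve user privacy at scale.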

At the same time, they also surveyed over 4,000 users to compare how people described their emotional experiences with what showed up in the conversation data. They were especially interested in the differences between power users and more typical ones.

Study 2: MIT Media Lab’s randomized controlled trial of ChatGPT use over time

Then, MIT Media Lab tested how different types of ChatGPT interactions might affect emotional health over time (to obtain “causal insights” into the impact of different features like model personality and usage type on users). 

They ran a 28-day randomized controlled trial with 981 participants randomly assigned to one of nine experimental conditions, each a combination of:

  • Modality: text-only, neutral voice, or engaging voice
  • Prompt type: personal (emotionally reflective), non-personal (task-based), or open-ended
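
As a quick illustration of the design, the sketch below builds the full 3 × 3 cross of modalities and prompt types, yielding the nine conditions, and randomly assigns participants to them. The condition labels come from the article; the uniform random assignment is an assumption for the example, not MIT’s actual randomization protocol.

```python
import itertools
import random

# Condition labels from the article; the 3 x 3 cross yields nine conditions.
MODALITIES = ["text-only", "neutral voice", "engaging voice"]
PROMPT_TYPES = ["personal", "non-personal", "open-ended"]
CONDITIONS = list(itertools.product(MODALITIES, PROMPT_TYPES))

def assign_participants(n: int, seed: int = 42) -> dict[int, tuple[str, str]]:
    """Assign n participants uniformly at random to one of the nine conditions.
    (Illustrative only -- not the study's actual randomization procedure.)"""
    rng = random.Random(seed)
    return {pid: rng.choice(CONDITIONS) for pid in range(n)}

assignments = assign_participants(981)  # 981 participants, as in the trial
print(len(CONDITIONS))                  # -> 9
print(assignments[0])                   # e.g. ('neutral voice', 'open-ended')
```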

As with the OpenAI study, the MIT team had a particular interest in whether changing the voice affected participants’ emotions. For example, they designed the “engaging” voice to sound more expressive and human to see if it would elicit stronger reactions from participants.

The researchers assigned each participant in the personal or non-personal groups one prompt to use per day, while the open-ended group could use ChatGPT however they wanted. 

Regardless of group, though, they expected all participants to use ChatGPT for at least five minutes daily.

Emotional well-being metrics

To track the participants’ emotional and behavioral changes throughout the study, the researchers monitored psychosocial outcomes using the following scales:

  • UCLA Loneliness Scale
  • Lubben Social Network Scale
  • Affective Dependence Scale
  • Problematic ChatGPT Use Scale

Participants also completed daily check-ins and post-study surveys to reflect on their experiences. 

Like the OpenAI team, the MIT researchers also used automated classifiers in their analysis.

Key Findings of the OpenAI and MIT Study

This combination of research methods yielded mixed results: 

  • Most participants just used ChatGPT to get things done. 
  • However, voice-based chat led to more negative effects with prolonged use. 
  • And users with lower well-being showed more signs of engaging emotionally with the chatbot.

Let’s take a closer look.

Most users had neutral interactions, but some formed emotional bonds

Across both studies, most participants used ChatGPT in neutral, task-based ways, not for emotional conversations.

“Emotional engagement with ChatGPT is rare in real-world usage.” - OpenAI

However, the researchers did find that a small group, most notably among power or heavy users, showed signs of higher emotional attachment. 

These participants showed attachment by:

  • Referring to ChatGPT as a friend
  • Using affectionate or personal language
    • Described as “emotionally expressive interactions” - OpenAI

Where there were signs of emotional engagement, power users tended to show higher rates.

Image Source: OpenAI and MIT Study (Figure 5, page 9)

When looking at usage, OpenAI found that power users of Advanced Voice Mode:

  • Opened up more often to seek support
  • Shared more personal thoughts or questions
  • Used more emotionally charged and affectionate language

Essentially, the classifiers found that these power user conversations showed higher rates of validation-seeking and emotional openness with ChatGPT. 

Yet, overall, the study notes that the proportion of users likely to view ChatGPT as a friend was low among both groups (‘control’ and ‘power’), stating that “these views remain a minority in both groups.”

So, even though most used ChatGPT as just another tool, some (albeit a small group) did start to form more of a personal connection.

‘Engaging’ voice-based chat typically led to more emotional responses from ChatGPT

In the MIT Media Lab trial, researchers found that participants who used a voice mode, particularly the “engaging” one, were typically more likely to receive emotional responses from ChatGPT than those using the neutral voice.

“Using a more engaging voice model, as opposed to a neutral voice model significantly increased the affective cues from the model, but the impact on user affective cues was less clear.” - OpenAI and MIT Study

In MIT’s blog summarizing the research, they further highlight that, “Importantly, using a more engaging voice did not lead to more negative outcomes for users over the course of the study compared to neutral voice or text conditions.”

So, the key takeaway is that although ChatGPT was more likely to respond with affective cues when participants used the ‘engaging’ voice-based chat, this did not negatively impact users overall.

Heavy use and low well-being were linked with emotional engagement

Personal factors did have an impact. Participants who did show signs of emotional engagement with ChatGPT tended to have a few things in common:

  • Heavy use of ChatGPT
  • Lower well-being before the study
  • Engagement in personal conversations with the chatbot

“...users who spent more time using the model and users who self-reported greater loneliness and less socialization were more likely to engage in affective use of the model.” - OpenAI and MIT Study

A closer look at the survey responses in the context of emotional well-being

Another aspect to note is the signs of attachment that the classifiers flagged (as documented in the first part of the study across ‘control’ and ‘power’ users).

For instance, users turned to ChatGPT for comfort, worried about losing access, or relied on it for emotional support.

Image Source: OpenAI and MIT Study (Figure 4, page 8)

This was reflected in some of the survey responses, depicted in the chart above, with users essentially describing ChatGPT as comforting or supportive.

Final Thoughts: Why This Study Matters

Ultimately, OpenAI and MIT Media Lab’s study revealed that most people do indeed use ChatGPT as a tool. However, it also found that for a small group, these AI conversations can become more personal.

As interactions with AI writing tools and chatbots like ChatGPT get more human-like, we may start to feel that AI chatbots are more emotionally aware, even when we know perfectly well that they’re machines.

This can lead to all kinds of responses from users, with some feeling more supported, some more dependent, and others uncomfortable with the whole situation in the first place (a response often referred to as AI anxiety).

Why are the findings of this study important to take note of?

The results of this study indicate that the way we use AI tools can shape how we handle emotions, ask for help, and relate to other people.

That’s what makes this study worth paying attention to. 

Not just because it shows us how people are using AI today, but because it hints at where that use might take us next.

Maintain transparency in the age of AI with the Originality.ai AI Checker and identify whether the text you’re reading is human-written or AI-generated.


Jess Sawyer

Jess Sawyer is a seasoned writer and content marketing expert with a passion for crafting engaging and SEO-optimized content. With several years of experience in digital marketing, Jess has honed her skills in creating content that not only captivates audiences but also ranks high in search engine results.
