List of Companies Banning ChatGPT

Jonathan Gillham

With the versatile capabilities of generative AI tools like ChatGPT, many large companies are taking steps to prohibit their use internally among staff and within organizational policies and procedures. While research by BlackBerry shows that nearly 75% of IT decision-makers believe such bans demonstrate "excessive control" over bring-your-own and corporate devices, that hasn't stopped some of the biggest organizations in the world from blocking, or at least limiting, use of the platform.

Here is a list of major companies and organizations that have implemented policies and restrictions on ChatGPT usage:

  • Accenture
  • Amazon
  • Goldman Sachs Group
  • Verizon Communications
  • Apple
  • Bank of America
  • Samsung
  • Citigroup
  • Commonwealth Bank of Australia
  • Deutsche Bank
  • iHeartRadio
  • JPMorgan Chase
  • Wells Fargo
  • World Economic Forum
  • Italian Data Protection Authority
  • Calix
  • Northrop Grumman Corporation
  • Spotify
  • PwC
  • CNET
  • Mishcon de Reya

Looking at the list, you may notice industry leaders like Accenture, Amazon, Goldman Sachs Group, Verizon Communications, law firm Mishcon de Reya, defense company Northrop Grumman Corporation, and even tech giant Apple. But the big question is, why?

Why Are Companies Banning ChatGPT?

Of course, there are many reasons that companies may choose to ban any kind of program in the workplace. They may not want third-party applications used for business purposes, or they may consider them a distraction for employees. But when it comes to ChatGPT in particular, two reasons stand out: privacy concerns and data protection, and damage to corporate reputation.

Privacy Concerns and Data Protection

Perhaps the number one reason that companies are banning ChatGPT is to protect their confidential information. In fact, almost 70% of respondents to a BlackBerry survey said they were blocking ChatGPT for this reason. They fear that such a powerful tool could potentially open their companies up to cybersecurity threats and other unknown risks around disclosure of sensitive data and internal communications.

And their cautiousness isn't unfounded. Take Samsung, for example. The company banned the generative AI tool after discovering that an employee had uploaded sensitive code to it.

Amazon has similar reasons for limiting the use of ChatGPT among employees. They've found examples of responses from the generative AI program that look suspiciously similar to the company's confidential information.

Damage to Corporate Reputation

But protecting against potential leaks of source code and other internal data isn't the only concern. In the same survey, nearly 60% of respondents worried that ChatGPT could negatively affect their corporate reputation.

And again, companies have good reason to exercise caution here. While it can help increase efficiency around the office, there are still some serious limitations to ChatGPT that could cause irreparable damage to a company's reputation.

For example, while it can generate human-like responses, ChatGPT has trouble understanding context and generally lacks common sense and emotional intelligence. This can lead to responses that many would consider inappropriate, and if any of those responses made it into marketing materials, it could land a company in hot water.

Together, these measures highlight the cautious approach these organizations are taking to navigate the potential risks associated with large language models like ChatGPT, including both legal and reputational consequences.

The Launch of ChatGPT Enterprise

ChatGPT Enterprise, launched in August 2023, is geared towards businesses and includes built-in features, such as robust privacy measures and advanced data analysis capabilities, designed to address these concerns.

ChatGPT Enterprise confirms that:

  1. Business data sent to ChatGPT Enterprise, along with any usage data, will not be used for model training
  2. All interactions with ChatGPT Enterprise are encrypted in transit and at rest

In light of the lawsuit claiming that OpenAI was secretly using personal data to train ChatGPT, this first measure is especially crucial. But only time will tell if it will be enough to ease concern over the potential cybersecurity risks and other issues that are causing organizations to take a step back from the generative AI program.

Final Thoughts

The launch of ChatGPT Enterprise introduces an opportunity for companies to rethink their reservations about AI adoption, especially those that rely heavily on formal processes and compliance documentation. If it allows them to protect confidential data and preserve their corporate reputation, this solution could pave the way for a more welcoming approach to the potential business benefits of integrating artificial intelligence tools within large organizations.
