AI has undeniably opened up new frontiers in content creation, from text and images to audio, video and much more. And while AI offers unprecedented opportunities to automate tasks and give voice (or art) to our creativity, there's also growing concern about the societal costs of AI-generated content that can't be detected.
This type of content, so convincing that it becomes indistinguishable from work produced by humans, has serious implications for trust, credibility and ethics. Here are a few of the costs society as a whole must consider, and what they mean for us as we move from the information age into the age of AI.
Deepfakes are realistic, AI-generated videos that alter a person's expressions or voice, or even swap their face with someone else's. They can make politicians appear to say things they never said or appear in places they've never been. These videos have already created controversy for their ability to deceive and manipulate people.
Meanwhile, AI-powered text generators like ChatGPT can create convincing news articles, academic papers and social media posts that are nearly indistinguishable from content written by humans. Both technologies have legitimate real-world applications (such as using deepfakes to portray a subject at different ages in a biographical film, or to restore classic films and dated-looking CGI), but our inability to detect AI-generated content at first glance can undermine trust and comes with numerous risks.
With the ability to generate undetectable deepfakes and synthetic text, criminals can now weaponize text, videos and photos to commit identity theft and fraud. For example, a deepfake video could feature a CEO announcing a fake corporate merger, triggering waves of buying and selling that manipulate and destabilize the markets.
AI-generated emails can be made to look like they came from executives, tricking employees into sharing passwords or other credentials that compromise the security of the business or organization.
Misinformation refers to false or misleading information shared without the intent to deceive, whereas disinformation involves the deliberate creation of content designed to manipulate others. Disinformation tends to be more organized and may involve fake social media accounts, doctored images or fabricated videos, while misinformation more often spreads through misunderstandings, misinterpretation of data or incorrect reporting. Both deepfake videos and AI-generated articles can fuel either one, with severe repercussions for how the public perceives critical issues.
Beyond spreading misinformation and disinformation, undetectable AI can take an emotional and psychological toll on society as well. Deepfakes that manipulate personal videos can cause a great deal of distress and shatter reputations. AI-generated fake social media comments or reviews can also harm mental health, particularly among more vulnerable populations.
As AI becomes more sophisticated and harder to detect, the public's trust in digital media as a whole is starting to erode. People are growing increasingly skeptical of the content they encounter online, and that skepticism can extend to institutions and the media, further dividing society into fractured groups.
Sooner or later, society will need to confront the implications of deceptive content. AI as a tool is neutral in and of itself, but transparency and accountability in how content is created will be paramount as people grapple with the ethical issues surrounding AI in all its forms.
One of the more sinister side effects of AI-generated content is its potential to amplify and aggravate already inflamed social and political divisions. Today, algorithms can create content that appeals to specific political and social ideologies, reinforcing existing beliefs and pushing individuals deeper into their own familiar echo chambers.
Over time, a human author may reveal their biases through their writing, but AI can appear impartial even when its output carries misinformation or a slant that isn't entirely accurate. This further polarizes the public, making it even harder for people to find common ground or engage in constructive debate.
It may not seem like it on the surface, but undetectable AI-generated content can also exacerbate existing economic disparities. Imagine a product launched in a country where AI-generated content is cheap and readily available, while its competitor launches from an economically disadvantaged area. The retailer in the former country can flood the market with AI-generated but convincingly real positive reviews, drowning out its competitor.
This unequal access to AI can create a snowball effect, steadily widening the gap between those who have access to such tools and those who don't.
It sounds rather dystopian, but AI-generated content can be used to exploit individuals in ways that society as a whole simply isn't prepared for. Imagine targeted advertising taken to an extreme: ads that aren't just tailored to your demographics, but are created on the fly by AI to exploit your emotional state, detected through the posts you write and share on social media or your interactions on third-party sites.
This practice, known as "surveillance capitalism," involves corporations and governments alike potentially misusing AI's capabilities for ongoing data collection, refinement and analysis to manipulate citizens into consuming more and more.
We've already seen the beginnings of what can happen when AI is used in academic, legal and journalistic settings. Fake research articles, fabricated court cases and data, and convincing-sounding reports can pass initial scrutiny. In addition, deepfake "interviews" could be used to manufacture a scandal or spread misinformation, damaging reputations and manipulating public opinion.
Because we hold the law, journalism and academia in such high regard as bastions of credibility and truth, AI's potential to undermine their authenticity could leave society not only divided and fragmented, but inherently skeptical of expertise, even when it comes from authoritative sources.
We've already seen what can happen when profits are put over people: lost jobs in fields traditionally reliant on human creativity and skill. Journalists, writers, graphic designers, video editors and many others now find themselves going head to head with machines that can produce similar content in a fraction of the time and at a fraction of the cost.
Automation is by no means a new concern, and historically it has tended to produce economic shifts rather than outright job loss. AI threatens to turn that pattern on its head, and the transition can be jarring and painful for many, deepening income inequality.
Although not as dire as scandalous deepfake videos or disrupted elections, the widespread use of AI-generated content has implications for human creativity as well. Now that AI tools can easily generate art, music and writing, people will lean on these technologies to give them exactly what they want, at the expense of developing their own creative skills.
Over time, this over-reliance on AI can lead to a kind of "creative atrophy," leaving the next generation less capable of the innovative thinking and outside-the-box problem solving that society needs to progress.
Because AI models are trained on large and diverse datasets, the content they generate can trend toward broad but shallow treatments of culture and ethics. The result is a kind of cultural and ethical homogenization, where local traditions and minority perspectives get lost in the shuffle of content crafted for mass appeal.
We may not realize it in our day-to-day lives, but undetectable AI-generated content has already created a host of legal quagmires and unforeseen challenges. For example, who is responsible when an AI generates defamatory content? Traditional legal frameworks were never designed for these cases, which creates uncertainty and the real potential for miscarriages of justice.
So what can we do about these issues, and how can we keep society from falling into these traps? The pace of AI development is far outstripping our capacity to rein it in. Although today's AI detection tools like Originality.AI can go a long way toward detecting AI-generated text, AI continues to evolve at an unprecedented pace. Governments have floated the idea of regulation, but enforcing laws around AI-generated content is, much like the legal issues mentioned above, complex and requires the cooperation of a wide range of local, national and international groups.
When we consider AI's sheer potential to enable identity theft, fraud, misinformation, erosion of trust and other serious societal challenges, it's easy to say that something, anything, must be done to counter it. Technical advances, public awareness and government involvement each offer their own way of tackling the issue, but the one thing we cannot do as a society is simply sit back and see how things unfold.
With this in mind, it's vital that society take a proactive stance and face these risks head-on. By leveraging all of the tools and expertise at our disposal, including AI detectors and legal, technological, educational and governmental solutions, we can tame the unwieldy beast that is AI and turn it into a tool for positive growth.