The rise of artificial intelligence (AI) tools has begun to reshape many facets of academic research and communication.
As these tools are further integrated into scholarly workflows, questions arise about the extent to which AI-generated content is influencing academic literature.
Neuroscience, with its multidisciplinary overlap across biology, psychology, computer science, and medicine, is a compelling and linguistically complex area of academic literature.
Within neuroscience journal publications, abstracts play a critical role in communicating findings to a wide academic and clinical audience.
This makes neuroscience abstracts a key focal point for understanding shifts in scientific communication.
In this study, we analyze the prevalence of abstracts classified as Likely AI among neuroscience-related publications from 2018 to 2025. By leveraging our proprietary Originality.ai AI detection tool to evaluate abstracts, we aim to quantify shifts in the use of generative AI tools within academic publishing, focusing specifically on neuroscience abstracts.
Our analysis is shaped by two central research questions: How has the prevalence of Likely AI abstracts in neuroscience changed between 2018 and 2025? And how do Likely AI levels before the launch of ChatGPT compare with those after?
By addressing these questions, this study sheds light on how generative AI is reshaping scholarly communication in the field of neuroscience.
Interested in learning about how AI is impacting other areas of academia? Read our analysis on abstracts in AI journals and AI climate change paper abstracts.
The prevalence of AI-generated abstracts in neuroscience-related journals has shown a clear upward trend over the period studied, from 2018 to 2025.
From 2022 to 2025: Likely AI neuroscience abstracts increased 314%
Why look at the timeframe of 2022 to 2025? ChatGPT launched in late 2022, dramatically increasing the popularity of AI and LLMs.
The Likely AI levels in 2022 and 2025 stand in notable contrast to the years studied before ChatGPT launched (2018 to 2021), when Likely AI abstracts consistently made up less than 10% of the sample.
This exceptional growth highlights a transformative shift in scientific communication within the field of neuroscience, suggesting that AI-generated content is becoming normalized in academic publishing.
In 2025, just under half (46.4%) of the neuroscience abstracts analyzed were classified as Likely AI.
This sharp rise is likely tied to the growing accessibility and performance of large language models and other generative technologies.
Neuroscience research requires high levels of precision, clarity, and domain-specific knowledge.
While AI tools can reduce linguistic barriers and improve efficiency, they also raise concerns: generative models can and do hallucinate, producing confident but inaccurate or unsupported statements.
Further, questions around the disclosure of AI use, the incorporation of AI detection into peer-review processes, and updates to editorial standards must be addressed as AI tools become increasingly integrated across industries.
This study offers a data-driven view of the evolving role of AI in scientific communication within neuroscience.
The findings reveal a 314% increase in Likely AI abstracts from 2022 to 2025 (implying a rise from roughly 11% of abstracts in 2022 to 46.4% in 2025), coinciding with the release of popular AI tools like ChatGPT and underscoring the rapid normalization of AI-assisted writing in this field.
Such a significant shift cannot be overlooked.
It calls for the development of clear policies around disclosure of AI use, tools for distinguishing between human- and AI-generated content, and new frameworks for understanding authorship and accountability in an era of hybrid scientific writing.
Wondering whether a post, abstract, or review you’re reading might be AI-generated? Use the Originality.ai AI detector to find out.
Methodology
We examined the prevalence of AI-generated abstracts in neuroscience-related scientific literature from 2018 to 2025. Using academic metadata from OpenAlex and the Originality.ai detection tool, we collected and analyzed abstracts over multiple years.
Data Collection
Abstracts were retrieved via the OpenAlex API for each year from 2018 to 2025, filtered by the keyword “neuroscience”, and up to 500 non-empty abstracts were collected per year. A custom function reconstructed the original text from OpenAlex’s inverted index. Metadata such as title, journal, and publication date were preserved.
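For illustration, here is a minimal sketch of this collection step in Python. It assumes OpenAlex’s documented works endpoint with cursor pagination; the function names and exact filters are ours, not necessarily those used in the study.

```python
import requests

OPENALEX_URL = "https://api.openalex.org/works"

def reconstruct_abstract(inverted_index):
    """Rebuild abstract text from OpenAlex's abstract_inverted_index,
    which maps each word to the list of positions where it appears."""
    positions = {}
    for word, idxs in inverted_index.items():
        for i in idxs:
            positions[i] = word
    return " ".join(positions[i] for i in sorted(positions))

def fetch_abstracts(year, keyword="neuroscience", limit=500):
    """Page through OpenAlex works for one year, keeping non-empty abstracts."""
    records, cursor = [], "*"
    while len(records) < limit and cursor:
        resp = requests.get(OPENALEX_URL, params={
            "search": keyword,
            "filter": f"publication_year:{year},has_abstract:true",
            "per-page": 200,      # OpenAlex's maximum page size
            "cursor": cursor,     # cursor pagination for deep result sets
        }, timeout=30)
        resp.raise_for_status()
        data = resp.json()
        for work in data["results"]:
            inv = work.get("abstract_inverted_index")
            if not inv:
                continue
            source = (work.get("primary_location") or {}).get("source") or {}
            records.append({
                "year": year,
                "title": work.get("title"),
                "journal": source.get("display_name"),
                "publication_date": work.get("publication_date"),
                "abstract": reconstruct_abstract(inv),
            })
        cursor = data["meta"].get("next_cursor")
    return records[:limit]
```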
AI Detection
Each abstract (min. 50 words) was scanned using Originality.ai, which returned a confidence score and binary classification for AI authorship. Scans were repeated (up to three times) in case of errors, with handling for timeouts and rate limits.
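A hedged sketch of that retry logic is below. The endpoint, header name, and response fields follow Originality.ai’s public API documentation as we understand it, and the 0.5 cutoff is purely illustrative, not the study’s actual threshold.

```python
import time
import requests

SCAN_URL = "https://api.originality.ai/api/v1/scan/ai"  # per public API docs

def scan_abstract(text, api_key, max_attempts=3):
    """Submit one abstract for AI detection, retrying on timeouts and rate limits."""
    if len(text.split()) < 50:                   # enforce the 50-word minimum
        return {"scan_status": "skipped_too_short"}
    for attempt in range(1, max_attempts + 1):
        try:
            resp = requests.post(
                SCAN_URL,
                headers={"X-OAI-API-KEY": api_key},
                json={"content": text},
                timeout=60,
            )
            if resp.status_code == 429:          # rate limited: back off, retry
                time.sleep(2 ** attempt)
                continue
            resp.raise_for_status()
            ai_score = resp.json()["score"]["ai"]  # confidence the text is AI
            return {
                "ai_likely_score": ai_score,
                "is_ai_generated": ai_score >= 0.5,  # illustrative cutoff
                "scan_status": "ok",
            }
        except requests.exceptions.Timeout:
            time.sleep(2 ** attempt)             # exponential backoff, then retry
    return {"scan_status": "failed"}
```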
Output and Storage
Results were compiled into a master dataset using pandas and exported as a CSV. Fields included: year, title, abstract, publication_date, journal, ai_likely_score, is_ai_generated, and scan_status.
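A minimal sketch of this compilation step: `rows` is assumed to be the list of per-abstract dictionaries produced by the collection and scanning steps above, and the file name is a placeholder.

```python
import pandas as pd

FIELDS = ["year", "title", "abstract", "publication_date", "journal",
          "ai_likely_score", "is_ai_generated", "scan_status"]

def build_dataset(rows, path="neuroscience_abstract_scans.csv"):
    """Compile per-abstract records into the master dataset and export a CSV."""
    df = pd.DataFrame(rows, columns=FIELDS)
    df.to_csv(path, index=False)
    # Yearly share of Likely AI abstracts among successful scans, for trend charts
    return df[df["scan_status"] == "ok"].groupby("year")["is_ai_generated"].mean()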
This approach enabled scalable tracking of AI-generated content in scientific abstracts, supporting further analysis and trend visualization.
Tools for conducting a plagiarism check between two documents online are important because they help ensure the originality and authenticity of written work. Plagiarism undermines the value of professional and educational institutions, as well as the integrity of the authors who write articles. By checking for plagiarism, you can ensure the work that you produce is original or properly attributed to the original author. This helps prevent the distribution of copied and misrepresented information.
Text comparison is the process of taking two or more pieces of text and comparing them to identify similarities, differences, and potential plagiarism. The objective of a text comparison is to see whether one of the texts has been copied or paraphrased from another. This text compare tool for plagiarism check between two documents has been built to streamline that process by finding discrepancies with ease.
Text comparison tools work by analyzing and comparing the contents of two or more text documents to find similarities and differences between them. This is typically done by breaking the texts down into smaller units such as sentences or phrases, and then calculating a similarity score based on the number of identical or nearly identical units. The comparison may be based on the exact wording of the text, or it may take into account synonyms and other variations in language. The results of the comparison are usually presented in the form of a report or visual representation, highlighting the similarities and differences between the texts.
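As a rough illustration of that unit-level approach, the sketch below splits two texts into sentences and counts near-identical pairs using Python’s standard difflib; real tools use more sophisticated matching, and the 0.9 threshold is an arbitrary choice for the example.

```python
import difflib
import re

def shared_sentence_ratio(text_a, text_b, threshold=0.9):
    """Share of sentences in text_a with a near-identical match in text_b.
    Sentences are the comparison units; difflib's ratio() is the score."""
    split = lambda t: [s for s in re.split(r"(?<=[.!?])\s+", t.strip()) if s]
    sents_a, sents_b = split(text_a), split(text_b)
    matched = sum(
        any(difflib.SequenceMatcher(None, a, b).ratio() >= threshold
            for b in sents_b)
        for a in sents_a
    )
    return matched / len(sents_a) if sents_a else 0.0
```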
String comparison is a fundamental operation in text comparison tools that involves comparing two sequences of characters to determine if they are identical or not. This comparison can be done at the character level or at a higher level, such as the word or sentence level.
The most basic form of string comparison is the equality test, where the two strings are compared character by character and a Boolean result indicating whether they are equal or not is returned. More sophisticated string comparison algorithms use heuristics and statistical models to determine the similarity between two strings, even if they are not exactly the same. These algorithms often use techniques such as edit distance, which measures the minimum number of operations (such as insertions, deletions, and substitutions) required to transform one string into another.
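For instance, the classic Levenshtein edit distance can be computed with a small dynamic program; this is a textbook sketch, not any particular tool’s implementation.

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of insertions, deletions, and substitutions
    needed to transform string a into string b."""
    prev = list(range(len(b) + 1))        # distances from "" to prefixes of b
    for i, ca in enumerate(a, start=1):
        curr = [i]                        # deleting the first i characters of a
        for j, cb in enumerate(b, start=1):
            curr.append(min(
                prev[j] + 1,              # deletion
                curr[j - 1] + 1,          # insertion
                prev[j - 1] + (ca != cb), # substitution (free when equal)
            ))
        prev = curr
    return prev[-1]

print(levenshtein("kitten", "sitting"))   # 3
```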
Another common technique for string comparison is n-gram analysis, where the strings are divided into overlapping sequences of characters (n-grams) and the frequency of each n-gram is compared between the two strings. This allows for a more nuanced comparison that takes into account partial similarities, rather than just exact matches.
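One simple variant of n-gram comparison uses character trigrams and set overlap (Jaccard similarity) rather than full frequency counts; a sketch:

```python
def char_ngrams(text, n=3):
    """Set of overlapping character n-grams in text."""
    return {text[i:i + n] for i in range(len(text) - n + 1)}

def ngram_similarity(a, b, n=3):
    """Jaccard similarity of two strings' n-gram sets: |A & B| / |A | B|."""
    ga, gb = char_ngrams(a, n), char_ngrams(b, n)
    return len(ga & gb) / len(ga | gb) if ga or gb else 1.0

print(ngram_similarity("plagiarism check", "plagiarism checker"))  # high, below 1.0
```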
String comparison is a crucial component of text comparison tools, as it forms the basis for determining the similarities and differences between texts. The results of the string comparison can then be used to generate a report or visual representation of the similarities and differences between the texts.
Syntax highlighting is a feature of text editors and integrated development environments (IDEs) that helps to visually distinguish different elements of a code or markup language. It does this by coloring different elements of the code, such as keywords, variables, functions, and operators, based on a predefined set of rules.
The purpose of syntax highlighting is to make the code easier to read and understand, by drawing attention to the different elements and their structure. For example, keywords may be colored in a different hue to emphasize their importance, while comments or strings may be colored differently to distinguish them from the code itself. This helps to make the code more readable, reducing the cognitive load of the reader and making it easier to identify potential syntax errors.
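For example, the widely used Pygments library implements this kind of rule-based coloring (it is one common implementation, not necessarily what any given editor uses). The sketch below tokenizes a snippet and emits HTML whose CSS classes distinguish keywords, names, and comments.

```python
from pygments import highlight
from pygments.lexers import PythonLexer
from pygments.formatters import HtmlFormatter

code = 'def add(a, b):\n    return a + b  # simple helper'

# The lexer splits the code into tokens (keyword, name, comment, ...);
# the formatter wraps each token class in a styled <span>.
print(highlight(code, PythonLexer(), HtmlFormatter()))
```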
With our tool it’s easy: just enter or upload some text, click the “Compare text” button, and the tool will automatically display the diff between the two texts.
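As a minimal illustration of producing such a diff (not our production implementation), Python’s standard difflib marks removed lines with “-” and added lines with “+”:

```python
import difflib

old = ["The quick brown fox", "jumps over the lazy dog."]
new = ["The quick brown fox", "leaps over the lazy dog."]

# unified_diff yields header lines plus '-'/'+' markers for changed lines
for line in difflib.unified_diff(old, new, fromfile="a.txt", tofile="b.txt",
                                 lineterm=""):
    print(line)
```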
Using text comparison tools is much easier, more efficient, and more reliable than proofreading a piece of text by hand. Eliminate the risk of human error by using a tool to detect and display the text difference within seconds.
We have support for the file extensions .pdf, .docx, .odt, .doc and .txt. You can also enter your text or copy and paste text to compare.
The tool never saves any data. When you hit “Upload”, we simply read the text and paste it into the text area, so with our text compare tool, no data ever enters our servers.