AI-generated content is becoming increasingly common in every sector, and scientific abstracts are no exception. In 2024, the American Society of Clinical Oncology (ASCO) reported a significant increase in the use of large language models (LLMs) for writing scientific abstracts in its study, “Characterizing the Increase in Artificial Intelligence Content Detection in Oncology Scientific Abstracts From 2021 to 2023.”
The ASCO study evaluated the performance of three AI content detectors (Originality.ai, GPTZero, and Sapling) in identifying AI-generated content in scientific abstracts submitted to the ASCO Annual Meetings from 2021 to 2023.
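Detector performance of this kind is typically summarized with accuracy and false-positive rate computed from a labeled test set. The sketch below illustrates that calculation; the data and predictions are invented for illustration and are not the study's actual results.

```python
# Hypothetical sketch: scoring an AI-content detector against toy ground truth.
# The labels and predictions below are invented, not the ASCO study's data.

def evaluate(preds, labels):
    """Return (accuracy, false_positive_rate) for binary predictions.

    preds/labels: lists of booleans, True = flagged/actually AI-generated.
    """
    tp = sum(p and l for p, l in zip(preds, labels))          # true positives
    tn = sum((not p) and (not l) for p, l in zip(preds, labels))  # true negatives
    fp = sum(p and (not l) for p, l in zip(preds, labels))    # false positives
    accuracy = (tp + tn) / len(labels)
    fpr = fp / (fp + tn) if (fp + tn) else 0.0
    return accuracy, fpr

# Toy ground truth: which abstracts are actually AI-generated.
labels = [True, True, False, False, False, True]
# Toy predictions from a hypothetical detector.
preds = [True, False, False, True, False, True]

acc, fpr = evaluate(preds, labels)
print(f"accuracy={acc:.2f}, false_positive_rate={fpr:.2f}")
```

A low false-positive rate matters most here: falsely flagging a human-written abstract as AI-generated carries a real reputational cost for the author.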
The study analyzed 15,553 oncology scientific abstracts from those meetings and found that predicted AI-generated content increased significantly from 2021 to 2023. Logistic regression models were used to evaluate the association of predicted AI content with submission year and abstract characteristics.
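A logistic regression of a binary "flagged as AI content" outcome on submission year yields an odds ratio: how much the odds of an abstract being flagged change per year. The sketch below fits such a model by gradient ascent on synthetic data; the flag rates per year are invented for illustration, not the study's estimates.

```python
import math
import random

# Hedged sketch of the analysis step: logistic regression of a binary
# "predicted AI content" flag on submission year. The data are synthetic
# (the real study analyzed 15,553 ASCO abstracts).

random.seed(0)

def synth(year, p, n=500):
    """Generate toy (year, flag) rows where the flag fires with probability p."""
    return [(year, 1 if random.random() < p else 0) for _ in range(n)]

# Invented flag rates rising over the three submission years.
rows = synth(2021, 0.05) + synth(2022, 0.10) + synth(2023, 0.20)

# Fit log-odds(flag) = b0 + b1 * (year - 2021) by gradient ascent
# on the average log-likelihood.
b0, b1 = 0.0, 0.0
lr = 0.5
n = len(rows)
for _ in range(5000):
    g0 = g1 = 0.0
    for year, y in rows:
        x = year - 2021
        p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
        g0 += y - p          # gradient w.r.t. intercept
        g1 += (y - p) * x    # gradient w.r.t. slope
    b0 += lr * g0 / n
    b1 += lr * g1 / n

odds_ratio = math.exp(b1)  # per-year multiplicative change in odds of a flag
print(f"slope={b1:.3f}, odds ratio per year={odds_ratio:.2f}")
```

In practice an analysis like the study's would use a statistics package (e.g. statsmodels or R's `glm`) and adjust for abstract characteristics as additional covariates; the hand-rolled fit above only shows the year term.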
The table below explains why the researchers chose GPTZero, Originality.ai, and Sapling for this study and why they excluded other AI-generated text detection tools.
The table below shows the characteristics of ASCO Annual Meeting abstracts and authors from 2021 to 2023.
Originality.ai excels at detecting AI-generated content in scientific abstracts. Its high accuracy, low false-positive rate, and adaptability across abstract characteristics make it a critical tool for researchers, publishers, and academic institutions committed to preserving the integrity of scientific literature. AI-generated text detection tools like these are particularly important for maintaining trust in scientific research and publications.
In the constantly evolving realm of AI-generated content, the veracity of information is of utmost importance. With several fact-checking solutions available, discerning their efficacy becomes crucial. Originality.ai, known for its transparency and accuracy in AI content detection, has recently ventured into the domain of fact-checking. But how does our solution stack up against well-established giants like ChatGPT or emerging contenders like Llama-2? This study aims to answer that question.
We believe it is crucial for AI content detectors' reported accuracy to be open, transparent, and accountable. Everyone seeking AI-detection services deserves to know which detector is the most accurate for their specific use case.