Following changes to the Higher Education Act (HEA) made by the One Big Beautiful Bill Act (signed into law in 2025), the US Department of Education announced its intention to establish committees.
The purpose of these committees would be to consider the changes to the federal student loan programs and Pell Grants, among other updates. As part of this process, the Department of Education held a public hearing that accepted written comments and feedback.
But how much of that feedback was authentically human, and how much was likely AI?
This study analyzed hundreds of those public submissions to determine the prevalence of likely AI-generated content with industry-leading Originality.ai AI Detection.
For more information on this study’s data, check out the methodology.
As a quick overview, Workforce Pell marks a significant change to federal student aid.
The policy extends Pell Grant eligibility to programs between 150 and 600 clock hours and 8 to 15 weeks in length, improving the accessibility of education, particularly for short-term, career-focused training.
Keep reading for more details.
Ahead of December negotiations, the Department of Education gathered public input on the HEA changes through the federal regulatory portal, Regulations.gov.
These submissions were designed to inform the rulemaking process, making them extremely important. The feedback addressed Pell Grants, loan forgiveness, student loans, eligibility, repayment, and more.
We processed each one through the industry-leading Originality.ai AI detection tool to determine the likelihood of AI-generated content. Submissions containing fewer than 100 words were excluded, leaving 707 comments for our analysis.
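The exclusion step above is simple to reproduce. The sketch below shows a minimal version of the word-count filter, assuming comments arrive as dictionaries with a "text" field (the sample data and field names are illustrative, not the study's actual records):

```python
# Sketch of the filtering step: drop submissions under 100 words before
# analysis. Comment structure here is an assumption for illustration.

def word_count(text: str) -> int:
    """Count whitespace-separated words in a comment."""
    return len(text.split())

def filter_comments(comments: list[dict], min_words: int = 100) -> list[dict]:
    """Keep only submissions with at least `min_words` words."""
    return [c for c in comments if word_count(c["text"]) >= min_words]

# Example: a short comment is excluded, a longer one is kept.
comments = [
    {"id": 1, "text": "Support Pell."},  # 2 words, excluded
    {"id": 2, "text": "word " * 150},    # 150 words, kept
]
kept = filter_comments(comments)
print([c["id"] for c in kept])  # → [2]
```

Each surviving comment would then be scored by the detection tool; only the threshold logic is shown here.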
These are our findings.
Of those 707 submissions analyzed, the results were striking. Nearly one in three were flagged as likely AI-generated.
Specifically, 215 submissions (30.41%) were classified as likely AI, while 491 (69.45%) were identified as likely human-written. One submission (0.14%) returned an error.

That nearly a third of public responses to a federal rulemaking process were likely machine-generated raises important questions about the authenticity and representativeness of the feedback informing Workforce Pell policy, among other changes to the HEA.
Public comment periods exist to capture genuine stakeholder perspectives from those directly affected by regulatory decisions.
When a significant proportion of that input may not reflect authentic human experience or expertise, it risks diluting the voices the process was designed to elevate.
Beyond the overall split, our study examined which topics attracted the highest concentration of likely AI-generated responses. Using keyword matching, each submission was categorized into thematic groups based on the keywords it contained.
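Keyword-based categorization of this kind can be sketched in a few lines. Note that the theme names and keyword lists below are illustrative assumptions, not the study's actual taxonomy:

```python
# Minimal sketch of keyword-based thematic grouping. THEMES is a
# hypothetical taxonomy, not the one used in the study.

THEMES = {
    "eligibility": ["eligib", "qualify", "clock hour"],
    "repayment": ["repay", "monthly payment", "interest"],
    "loan forgiveness": ["forgiveness", "pslf", "public service"],
}

def categorize(text: str) -> list[str]:
    """Return every theme whose keywords appear in the comment text."""
    lowered = text.lower()
    return [theme for theme, keywords in THEMES.items()
            if any(kw in lowered for kw in keywords)]

print(categorize("Borrowers in public service deserve loan forgiveness."))
# → ['loan forgiveness']
```

Substring matching like this is deliberately loose (e.g. "eligib" catches both "eligible" and "eligibility"); a production pipeline might instead use stemming or regular expressions with word boundaries.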

Some of the themes most central to the changes made up the greatest proportion of likely AI comments (percentages determined out of the 215 likely AI responses).
These are the very topics on which regulators need the most authentic stakeholder input: who should qualify, how programs are held accountable, and whether costs remain fair.
The high concentration of likely AI responses in these categories suggests that the themes most critical to policy outcomes may also be the most vulnerable to artificial amplification.
By contrast, more personal policy areas made up the lowest proportion of likely AI comments (percentages determined out of the 215 likely AI responses).
These are categories that tend to reflect more individual, experience-driven concerns, such as loan repayment, public service careers, or personal finance.
The finding that these categories were the least prevalent among probable AI responses suggests that when feedback is rooted in lived experience, rather than policy positioning, it is more likely to be authentically human-written.
The data also revealed a clear pattern in submission length: likely AI-generated responses were notably longer than their human-written counterparts.

The average likely AI response was 250 words, well above the 185-word average for human-written submissions.
The gap widens further when looking at character counts: likely AI responses averaged 1,468 characters, compared to 961 characters for human-written feedback (excluding whitespace).
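The two length metrics used above, word count and whitespace-excluded character count, can be computed as follows (the sample string is a placeholder, not study data):

```python
# Sketch of the length metrics: word count, plus character count with all
# whitespace (spaces, tabs, newlines) excluded.

def length_metrics(text: str) -> tuple[int, int]:
    """Return (word_count, character_count_excluding_whitespace)."""
    words = text.split()  # splitting on whitespace discards it entirely
    chars = sum(len(w) for w in words)
    return len(words), chars

words, chars = length_metrics("Pell Grants  help\nstudents.")
print(words, chars)  # → 4 23
```

Averaging these per classification (likely AI vs. likely human) yields comparisons like the 250-word vs. 185-word figures reported above.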
Of course, length alone is not an indicator of quality or authenticity.
However, it does raise concern that AI text could carry outsized influence in a process designed to review genuine public input.
To place these study findings in the proper context, let’s explain what Workforce Pell is and why it’s important.
According to the Association of American Universities, the HEA is what authorizes financial support programs for students at the federal level.
This means that it authorizes some of the federal government’s most important student financial assistance programs for postsecondary and higher education.
In simple terms, Workforce Pell is an extension of the federal Pell Grant program that provides financial aid to students enrolled in short-term, career-focused training programs.
Workforce Pell is part of the One Big Beautiful Bill Act (OBBB), which made statutory changes to the Higher Education Act (HEA). The OBBB Act was signed into law in July 2025; the full bill is available through the U.S. Congress.
The OBBB Act updates Pell Grant eligibility to programs between 150 and 600 clock hours and 8 to 15 weeks in length.
As noted by UPCEA (the Online and Professional Education Association), this marks a policy shift toward financial aid support for shorter programs. Implementation is set for July 1, 2026.
A publication on Implementing Workforce Pell from the National Skills Coalition explains that this is particularly helpful for those preparing for roles such as welders, HVAC technicians, and IT support specialists.
This policy is important for educators and institutions, as it brings an influx of federal funding for short-term credential programs, improving the accessibility of education.
However, some have raised concerns about the rushed timeline, data infrastructure readiness, and uneven readiness across states (some states already have their own initiatives funding short-term education).
As background, before the discussions began, the Department of Education noted its intention (in July 2025) to create committees.
The aim of these committees was to consider the changes to the HEA, such as the federal student loan programs and the Pell Grant program, among others.
Then, the Department of Education published draft rules covering short-term Pell Grant provisions and how institutions should factor in non-federal grant aid when awarding Pell, part of the rulemaking process under the OBBB.
The December 8-12 session aimed to finalize these proposals within a single week.
The Department of Education’s docket describes the hearing as follows:
“Public Hearing related to recent statutory changes to the Title IV, HEA programs included in Pub. L. 119-21, known as the One Big Beautiful Bill Act, that President Trump signed into law on July 4, 2025, as well as to implement other Administration priorities.”
In conclusion, this study shows that just over 30% of the 707 public feedback submissions analyzed from the rulemaking docket were likely AI-generated.
Further, some of the most policy-critical categories made up the highest proportion of the likely AI feedback and may have commanded outsized attention during review.
Public comment periods are the cornerstone of democratic policymaking.
As AI tools become more accessible, regulators, educators, and advocates must consider how to preserve the integrity of these processes, such as by implementing industry-leading AI detection methods like the Originality.ai AI detector.
Read more Originality.ai AI Studies on the impact of AI content across industries.
The study evaluates the prevalence and characteristics of likely AI‑generated public comments submitted to the U.S. Department of Education docket ED‑2025‑OPE‑0151.
Data Sources
Public Comment Dataset: A consolidated dataset derived from public comments submitted to docket ED‑2025‑OPE‑0151.
The docket summary states that feedback was gathered for the: “Public Hearing related to recent statutory changes to the Title IV, HEA programs included in Pub. L. 119-21, known as the One Big Beautiful Bill Act, that President Trump signed into law on July 4, 2025, as well as to implement other Administration priorities.”
The dataset was curated from publicly available Regulations.gov submissions compiled into a .csv in an On EdTech post on LinkedIn; this study evaluated that .csv.
AI‑Detection Output: Each comment was processed using Originality.ai, producing a probability score indicating whether the text was likely AI‑generated. Entries containing fewer than 100 words were excluded.
Keywords were then used to identify topical themes within the feedback. Additionally, word count and character count were analyzed.
