AI Studies

30% of Public Feedback on Higher Education Act Changes (Student Financial Aid) was Likely AI

In 2025, the US Department of Education held a public hearing, accepting comments and feedback about changes to the Higher Education Act for Federal student financial assistance programs. At Originality.ai, we analyzed how much of that feedback was Likely AI.

Following changes to the Higher Education Act (HEA) as a result of the One Big Beautiful Bill Act (signed into law in 2025), the US Department of Education intended to establish committees. 

The purpose of these committees would be to consider the changes to the Federal student loan programs and Pell Grants (among other changes). As part of this, the Department of Education held a public hearing, which accepted written comments and feedback.

But how much of that feedback was authentically human and how much was Likely AI?

This study analyzed hundreds of those public submissions to determine the prevalence of likely AI-generated content with industry-leading Originality.ai AI Detection.

Key Takeaways (TL;DR)

Study Findings:

  • Finding 1: Nearly 1 in 3 public comments (30.41%) were flagged as Likely AI.
  • Finding 2: Some of the most policy-critical themes made up the highest proportion of likely AI responses, such as eligibility-related feedback, accountability, and cost/affordability. Personal experience topics, like loan repayment, made up the lowest proportion of Likely AI comments.
  • Finding 3: Likely AI responses were typically longer at an average of 250 words vs. 185 words for human-written submissions.

For more information on this study’s data, check out the methodology.

Insight: Why are changes to the HEA, such as Pell grants, important?

As a quick overview, the changes to Pell Grants mark a significant shift in federal student aid.

The policy updates Pell Grant eligibility to programs between 150 and 600 clock hours and 8 to 15 weeks in length, improving the accessibility of education, particularly for short-term and career-focused training.

Keep reading for more details.

The data source for the public comments for this study

Ahead of December negotiations and discussions, the Department of Education gathered public input on HEA changes through a regulatory government page.

These submissions were designed to inform the rulemaking process, making them extremely important. The feedback provided addressed Pell, loan forgiveness, student loans, eligibility, repayment, and more. 

  • Phil Hill of Phil Hill & Associates consolidated all 1,124 public comments from the docket to improve transparency and enable AI-assisted research.
  • This study used that compiled dataset as its foundation. 

We processed each one through the industry-leading Originality.ai AI detection tool to determine the likelihood of AI-generated content. Submissions containing fewer than 100 words were excluded, leaving 707 comments for our analysis.
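As a minimal sketch (not the study's actual pipeline), the 100-word exclusion step can be expressed with a hypothetical helper like this:

```python
# Hypothetical sketch of the length filter described above:
# submissions with fewer than 100 words are excluded before analysis.
def filter_submissions(comments, min_words=100):
    """Keep only comments containing at least `min_words` words."""
    return [c for c in comments if len(c.split()) >= min_words]

long_comment = "word " * 120   # 120 words -> kept
short_comment = "word " * 50   # 50 words -> excluded
kept = filter_submissions([long_comment, short_comment])
print(len(kept))  # -> 1
```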

These are our findings.

30% of the Feedback Was Likely AI

Of those 707 submissions analyzed, the results were striking. Nearly one in three were flagged as likely AI-generated.

Specifically, 215 submissions (30.41%) were classified as likely AI, while 491 (69.45%) were identified as likely human-written. One submission (0.14%) returned an error.
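The percentages follow directly from the reported counts, as a quick check confirms:

```python
# Reproducing the headline percentages from the reported counts.
total = 707
likely_ai, likely_human, errored = 215, 491, 1

assert likely_ai + likely_human + errored == total
print(round(100 * likely_ai / total, 2))     # -> 30.41
print(round(100 * likely_human / total, 2))  # -> 69.45
print(round(100 * errored / total, 2))       # -> 0.14
```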

Public Feedback on Higher Education Act Changes Study

Why is checking for Likely AI feedback important?

The fact that nearly a third of public responses to a federal rulemaking process were likely machine-generated raises important questions about the authenticity and representativeness of the feedback informing Workforce Pell policy, among other changes to the HEA.

Public comment periods exist to capture genuine stakeholder perspectives from those directly affected by regulatory decisions.

When a significant proportion of that input may not reflect authentic human experience or expertise, it risks diluting the voices the process was designed to elevate.

Likely AI Responses by Feedback Theme

Beyond the overall split, our study examined which topics attracted the highest concentration of likely AI-generated responses. Using keyword matching, each submission was categorized into thematic groups based on the keywords it contained.
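The keyword-matching step might look like the sketch below. The theme names and keyword lists here are illustrative assumptions, as the study's actual lists are not published. Note that a single comment can match several themes, which is why theme percentages can sum to more than 100%.

```python
# Illustrative keyword-based theme tagging. THEMES is a hypothetical
# mapping; the study's actual keyword lists are not published.
THEMES = {
    "eligibility": ["eligibility", "eligible", "qualify"],
    "accountability": ["accountability", "accountable", "oversight"],
    "cost_affordability": ["cost", "affordability", "tuition"],
}

def tag_themes(comment):
    """Return every theme whose keywords appear in the comment text."""
    text = comment.lower()
    return {theme for theme, keywords in THEMES.items()
            if any(kw in text for kw in keywords)}

print(sorted(tag_themes("Who should qualify, and is the cost fair?")))
# -> ['cost_affordability', 'eligibility']
```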

Likely AI Feedback on Higher Education Act Changes by Theme

Categories with the highest levels of Likely AI responses

Some of the themes that are most central to the changes made up the greatest proportion of likely AI comments (percentages determined out of the 215 Likely AI responses). 

  • Eligibility-related feedback led at 83.72% 
  • This was followed by Accountability at 76.74%
  • Then, Cost/Affordability at 68.84%

These are the very topics on which regulators need the most authentic stakeholder input: who should qualify, how programs are held accountable, and whether costs remain fair.

The high concentration of likely AI responses in these categories suggests that the themes most critical to policy outcomes may also be the most vulnerable to artificial amplification.

Categories with the least Likely AI responses 

By contrast, more personal policy areas made up the lowest proportion of likely AI comments (percentages determined out of the 215 Likely AI responses).

  • Taxpayer-related feedback had the lowest rate at just 2.33%
  • This was followed by income-driven and repayment keywords at 10.70%
  • Forgiveness (such as for keywords like loan relief) at 13.49%
  • Then, themes around public service loan forgiveness at 20.47%

These are categories that tend to reflect more individual, experience-driven concerns, such as loan repayment, public service careers, or personal finance.

The finding that these categories were the least prevalent among probable AI responses suggests that when feedback is rooted in lived experience, rather than policy positioning, it is more likely to be authentically human-written.

Likely AI Responses Were Longer on Average

The data also revealed a clear pattern in submission length: likely AI-generated responses were notably longer than their human-written counterparts.

Length of Likely AI vs. Human-Written Comments

The average likely AI response was 250 words, which is much longer than the 185-word average for human-written submissions. 

That gap widens further when you look at character volumes. Likely AI responses averaged 1,468 characters, compared to 961 characters for human-written feedback (not including whitespace).
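A simple way to compute these per-group averages (a sketch, assuming whitespace is excluded from character counts as in the figures above):

```python
# Average word count and non-whitespace character count for a group
# of comments.
def length_stats(comments):
    """Return (average word count, average non-whitespace chars)."""
    n = len(comments)
    avg_words = sum(len(c.split()) for c in comments) / n
    avg_chars = sum(sum(1 for ch in c if not ch.isspace())
                    for c in comments) / n
    return avg_words, avg_chars

words, chars = length_stats(["short comment",
                             "a slightly longer comment here"])
print(words, chars)  # -> 3.5 19.0
```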

Of course, length alone is not an indicator of quality or authenticity. 

However, it does raise concern that AI text could carry outsized influence in a process designed to review genuine public input.

The Higher Education Act, Workforce Pell, and How it Impacts Student Financial Support

To place these study findings in the proper context, let’s explain what Workforce Pell is and why it’s important.

What is the Higher Education Act (HEA)?

According to the Association of American Universities, the HEA is what authorizes financial support programs for students at the federal level. 

This means that it authorizes some of the federal government’s most important student financial assistance programs for postsecondary and higher education.

What is Workforce Pell?

In simple terms, Workforce Pell extends the federal Pell Grant program to students enrolled in short-term, career-focused training programs.

Workforce Pell is part of the One Big Beautiful Bill Act (OBBB), which made statutory changes to the Higher Education Act (HEA). The OBBB Act was signed into law in July 2025. The full bill is available through the U.S. Congress.

How does it impact student financial aid?

The OBBB Act updates Pell Grant eligibility to programs between 150 and 600 clock hours and 8 to 15 weeks in length.
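Expressed as a predicate (a hypothetical helper; whether the statutory bounds are inclusive is an assumption here, and the regulation's exact boundary handling may differ):

```python
# Hypothetical check of the eligibility window described above:
# 150-600 clock hours and 8-15 weeks (bounds assumed inclusive).
def workforce_pell_eligible(clock_hours, weeks):
    return 150 <= clock_hours <= 600 and 8 <= weeks <= 15

print(workforce_pell_eligible(300, 10))  # -> True
print(workforce_pell_eligible(700, 10))  # -> False
```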

As noted by UPCEA (the Online and Professional Education Association), this marks a policy shift toward financial aid support for shorter programs. Implementation is set for July 1, 2026.

A publication on Implementing Workforce Pell from the National Skills Coalition explains that this is particularly helpful for those preparing for roles such as welders, HVAC technicians, and IT support specialists.

This policy is very important for educators and institutions, as it brings with it an influx of federal funding for short-term credential programs, improving the accessibility of education.

However, some have raised concerns about the rushed timeline, data infrastructure readiness, and state-by-state readiness (some states already have their own initiatives for short-term education financial support).

What were the Dec. 2025 Department of Education Discussions?

For background: in July 2025, before the discussions began, the Department of Education announced its intention to create committees.

The aim of these committees was to consider the changes to the HEA, such as student loan programs at a Federal level and the Pell grant program, among others.

Then, the Department of Education published draft rules covering short-term Pell Grant provisions and how institutions should factor in non-federal grant aid when awarding Pell, part of the rulemaking process under the OBBB. 

The December 8-12 session aimed to finalize these proposals within a single week.

The Department of Education’s docket describes the hearing as follows:

“Public Hearing related to recent statutory changes to the Title IV, HEA programs included in Pub. L. 119-21, known as the One Big Beautiful Bill Act, that President Trump signed into law on July 4, 2025, as well as to implement other Administration priorities.”

Final Thoughts

In conclusion, this study shows that 30.41% of the 707 public feedback submissions analyzed from the rulemaking docket were likely AI-generated.

Further, some of the most policy-critical categories made up the highest proportion of the Likely AI feedback as well, and may have commanded more attention during review.

Public comment periods are the cornerstone of democratic policymaking. 

As AI tools become more accessible, regulators, educators, and advocates must consider how to preserve the integrity of these processes, implementing industry-leading AI detection methods like the Originality.ai AI detector.

Read more Originality.ai AI Studies on the impact of AI content across industries.

Methodology

The study evaluates the prevalence and characteristics of likely AI‑generated public comments submitted to the U.S. Department of Education docket ED‑2025‑OPE‑0151.

Data Sources

Public Comment Dataset: A consolidated dataset derived from public comments submitted to docket ED‑2025‑OPE‑0151. 

The docket summary states that feedback was gathered for the: “Public Hearing related to recent statutory changes to the Title IV, HEA programs included in Pub. L. 119-21, known as the One Big Beautiful Bill Act, that President Trump signed into law on July 4, 2025, as well as to implement other Administration priorities.”

The dataset was curated from the publicly available Regulations.gov submissions compiled and shared in an On EdTech post on LinkedIn; this study evaluated the .csv compiled from that post.

AI‑Detection Output: Each comment was processed using Originality.ai, producing a probability score indicating whether the text was likely AI‑generated. Entries containing fewer than 100 words were excluded.

Keywords were then used to identify topical themes within the feedback. Additionally, word count and character count were analyzed.

Madeleine Lambert

Madeleine Lambert is the Director of Marketing and Sales at Originality.ai, with over a decade of experience in SEO and content creation. She previously owned and operated a successful content marketing agency, which she scaled and exited. Madeleine specializes in digital PR—contact her for media inquiries and story collaborations.

