
AI and Higher Ed: An Impending Collapse (opinion)


“We pretend to work and they pretend to pay us.” That’s what everyday Soviets said in the 1970s and 1980s, as the Soviet Union teetered toward collapse.

American higher education today is facing a similar crisis of confidence.

Most people within academia seem content to ignore the signs of impending collapse and continue on as if the status quo is inevitable. Sustained increases in tuition, expansion of the administrative bureaucracy, relentless fundraising drives and a preoccupation with buzzwords such as “efficiency” dominate the academic ecosystem. Efficiency in today’s academic parlance seems to mean teaching the most students (i.e., maximizing revenue) with the least overhead (i.e., employing the fewest or lowest-paid faculty). This endless drive for efficiency is the biggest crisis in higher education today.


For at least the last two academic cycles, people have recognized that artificial intelligence (AI) is poised to play a serious role in American higher education. At first, the challenge was how to detect whether students were using AI to complete assignments. Once ChatGPT was released for public consumption, it became clear that the software could do a fair bit of work on behalf of the enterprising student. Simply insert your prompt and input a few parameters, and the chatbot would return a rather cogent piece of writing. The only questions became 1) how much students needed to alter the chatbot’s output before submission and 2) how faculty could spot such artificial intervention.

Faculty debates centered on how to identify AI-generated work and what the appropriate response would be. Do we make a charge of plagiarism? Using a chatbot seems to be a form of academic dishonesty, but from whom is the student copying? Like many faculty, I saw some clear examples of AI in student essay submissions. Thankfully, since I employed a specific rubric in my classes, I was able to disregard whether the student acted alone and simply grade the essay on how well it met each expectation. The fact that AI-generated content tended to include a lot of fluff, frequently lacked precision and direct quotes, and often reflected a hesitancy to take strong positions made it all the easier to detect, and made its use less attractive to my students given the severe grade implications.

If complications around grading AI-enhanced or AI-sourced work represented a challenge to the integrity of the education system, we could rest easy knowing that we would be able to persevere indefinitely and overcome. But alas we cannot. The most severe issue that threatens to upend the system is not the challenge of detecting AI in students’ work, but the fact that universities are now encouraging a wholesale embrace of AI.

Universities across the United States—especially the self-proclaimed cutting-edge or innovative ones—are declaring that AI is the future and that we must teach students how to master AI in order to prepare for their careers. We faculty are urged to leverage AI in the classroom accordingly. What does this look like, you might ask? In part, it means asking faculty to think about how AI can be used to create assignments and lesson plans, how it can aid in research, and how it might help grade student work.

Using AI as a teaching tool seems innocuous enough—after all, if an instructor uses AI to create questions for a test, prompts for an essay, or a slideshow for student consumption, it would presumably all be based on the material delivered in the course, with the AI using as its source the same corpus of information. Or so it should be.

Using AI to aid in research also seems innocent enough. Before, I had to use keywords to search through databases and catalogs and then read through an enormous amount of material. Taking notes, organizing my thoughts and developing an argument was an inherently time-consuming and inefficient process. I might read hundreds of pages of material and then realize that the direction I’d taken was in vain, requiring me to start fresh. AI promises to expand my search and deliver summaries that I can process more efficiently as I seek a direction for my scholarship. I can now use my time more wisely thanks to AI, so the story goes. All of this efficiency means that I can conduct even more research, or that I can free up my time to teach students more effectively.

And so, we get to the crux of the issue: using AI to grade student work.

Grading represents a significant time allotment for most faculty in higher education. Essays probably take the longest to grade, but multiple-choice tests and discussion posts can similarly require significant outlays of effort to evaluate fairly. Feedback on assignments represents a pillar of education, an opportunity to guide students and challenge them to think critically. Grading for my discussion seminars, which are based on a participation portion and an argumentative essay portion, is manageable with my courses capped at 21 students. I can devote the time needed to help students and award them a score commensurate with their displayed abilities (ideally as demonstrated through progress over the course of the semester). But once the class size grows beyond 21, my ability to grade and use feedback as a learning tool diminishes.

Here we return to the drive for efficiency. Universities have already embraced more part-time faculty, a reliance on grading assistants (usually drawn from the ranks of other students, who work for much less money) and large class sizes to maximize profitability. All institutions need to remain solvent, so this in and of itself is not a sin. Yet the continued pushing of these boundaries has meant that the actual student experience has been in decline for decades. AI promises to make it worse. One can scale up the number of students in a course and scale down the paid facilitators of said course by using AI. The machine can take a rubric and grade thousands of student submissions—no matter how complex—in a minuscule amount of time. It doesn’t take a big imagination to envision the college administrator calculating how much more profitable a course would be in such a scenario.

Herein lies the trap. If students learn how to use AI to complete assignments and faculty use AI to design courses, assignments, and grade student work, then what is the value of higher education? How long until people dismiss the degree as an absurdly overpriced piece of paper? How long until that trickles down and influences our economic and cultural output? Simply put, can we afford a scenario where students pretend to learn and we pretend to teach them?

Robert Niebuhr is a teaching professor and honors faculty fellow at Arizona State University.
