A Model for the Future: AI in Higher Education
- Aidan Brown
Large Language Models (LLMs) such as OpenAI’s ChatGPT and Google’s Gemini are no longer niche technologies; they are in widespread use. Despite their popularity, dedicated higher-education programs for teaching and practicing Artificial Intelligence (AI) competencies remain scarce. Programs such as the University of Maryland’s Artificial Intelligence Interdisciplinary Institute provide the infrastructure and coursework for students to develop the AI competencies many say are needed to succeed in an AI-dominated future.
Education Curricula Lag Behind Ethics Policy
Large Language Models are no longer a niche technology; many people in the Western world have used them. Their widespread adoption has prompted debate about the ethics of their use, particularly in higher education. A majority of institutions have begun implementing AI policies, primarily related to academic integrity, yet far fewer have developed comprehensive curricula centered on AI and its role in our future. As of early 2026, sources suggest that as many as 193 US higher education institutions offer some form of bachelor's degree involving AI, often as a concentration within a more traditional degree such as computer science. Dedicated programs are rarer still: according to U.S. News, of the more than 1,700 bachelor's-granting colleges and universities in the United States, only 26 offer degrees in Artificial Intelligence.
To equip students for an AI-dominated future, universities must integrate AI literacy and competency into their curricula rather than merely allowing or regulating its use. A student’s success will depend on leveraging AI more effectively than their peers, enabling them to work faster and produce more impactful work. To discuss the future of AI in higher education, I sat down with Alyssa Ryan, the Program Director of Academic Program Operations at the Artificial Intelligence Interdisciplinary Institute at Maryland (AIM) at the University of Maryland.

The University of Maryland founded AIM in April 2024 to improve collaboration, education, and ethics in the fast-growing field of AI. AIM’s goals include developing new programs, such as majors, minors, and certificates that teach AI skills, as well as increasing the number of courses covering AI and AI competencies. These programs aim to prepare students for an AI-dominated world by providing the essential skills to succeed, both inside and outside the classroom. The Institute also promotes interdisciplinary AI research and funds other AI-related needs, such as new computing resources. Beyond Maryland, Purdue University is one of the few other universities that have laid out detailed AI policies and programs.
Bridging the AI Gap
In addition to her role at AIM, Alyssa Ryan is a PhD student at UMD’s College of Information. She studies digital accessibility in video games and AI, and its integration into assistive and adaptive technology. Ryan came to AIM through her interest in AI and its role in our future, inspired in particular by her own research and how it fits into education. When AIM was founded, she became its inaugural program director. “What was interesting about it was this idea of creating undergraduate degree programs in AI from two core disciplines, while also maintaining interdisciplinarity,” Ryan said. She emphasized that the study of AI should have not just a technical component but also an ethical one, a goal AIM has shared since its inception.
University AI policy can be split into two distinct fronts: regulating AI use in higher education and teaching AI skills. On the first front, AI is already in the classroom. A 2025 Higher Education Policy Institute survey found that 92% of students used AI, a nearly 25% increase from the previous year. Many universities are now curbing or eliminating the use of AI in writing, tests, and other assignments. Restricting AI in the classroom protects the development of students' critical thinking skills: when students over-rely on AI, it becomes a crutch that replaces their thinking and diminishes their intellectual growth. A recent study found that in an essay-writing task, LLM users not only could not accurately quote their own work but also showed lower brain activation, underperforming at the neural, linguistic, and behavioral levels. Studies like this show detrimental effects on critical thinking, analysis, and decision-making when AI is overused. The responsibility therefore falls on both parties: students must limit their use of AI for tasks such as essay writing so they develop the cognitive skills critical to their education and careers, and universities must develop strong guidelines for AI use, similar to those for other issues of academic integrity, such as plagiarism.
Separately, schools are grappling with how to develop students’ AI skills. This could be anything from the technical aspects of AI (such as how LLMs work) to its ethical implications or resource use. One interdisciplinary goal is producing AI advocates: people who can push for responsible use of the technology to improve other aspects of their jobs while prioritizing ethical concerns and keeping humans at the center.
Does the most significant path forward lie in regulating AI use in the classroom, or in teaching students the skills to be proactive AI advocates? And does AI integration vary across departments within a college or university? The answer is nuanced. Ryan described how different departments she worked with took different approaches. Departments that adopted AI quickly and built technical skills needed little additional classroom integration; they were already using it (in coding courses, for example) and better understood its limitations. Students need both: an understanding of the technology and of its constraints.
Ryan explained that needs vary across disciplines but emphasized that, regardless of field, nearly everyone should engage with AI. As needs evolve across fields, it is important to build a well-rounded understanding of AI and then delve into specific use cases.

Defining AI Proficiency Across Fields
As needs change across disciplines, so does what defines success in each area. Professors are among the most important teachers of AI competency and play a vital role in student success, making faculty proficiency a top priority. Faculty workshops are an important part of AIM and UMD’s approach to the AI future, improving instructors’ technical skills, which in turn benefits students. Ryan described how the role of AI in a curriculum shifts with a student’s education level: teachers may allow more AI use in upper-level courses where students already have a solid grasp of the content, while overreliance on AI early in education may be harmful. AI can accelerate writing and research, but it also has the potential to erode the analytical and critical thinking skills central to a student’s development. A further ethical risk of overreliance is accidental plagiarism: a recent study extracted copyrighted training data (such as books) from several prominent LLMs, suggesting that even with safeguards, heavy use of AI-generated output could inadvertently lead students to plagiarize if an LLM reproduces copyrighted text.
What does a knowledgeable and successful graduate look like to AIM? This varies between the two AI degrees the University of Maryland offers: a BS and a BA. “The Bachelor of Science degree is a very technical degree, and it’s setting students up to be successful designers, computational scientists, and algorithm assistants around AI…while also understanding that there is a social component to their work, ethics,” Ryan said. “On the flip side of that is the Bachelor of Arts degree, and they will have a technical core, but I don’t expect students to be doing algorithmic work in the same way; I imagine them being AI ethicists and really leading with the difficult questions and conversing across the technologists and philosophers of the world.”
Building Programs for an Ever-Evolving Field
In such a fast-growing field, AI may look completely different five years from now, rendering a rigid curriculum obsolete. Ryan emphasized teaching broad fundamentals rather than narrow specifics, so that key concepts like interdisciplinary work and ethical considerations sit at the core of a student’s education, helping them remain resilient in a changing world. “Part of why I really see value in what we do in AIM is because we are not siloing the students, and I think any of the students who leave here who see the value in an interdisciplinary education, I would think are successful,” Ryan said.
AI education will also become increasingly important at all levels of education as it becomes more common in our everyday lives, something that Ryan pointed out: “AI literacy is also becoming more and more of a thing in K through 12… it won’t be solely the responsibility of higher education institutions to lessen the gap there.” Students who learn to live with and leverage new technologies will succeed most in the future workplace, and universities must incorporate these skills into their curricula or risk becoming obsolete. A curriculum that teaches fundamental skills and the importance of understanding the nuances of new technology will prepare the next generation of responsible AI users, who will shape our future.