Training the Next Generation of AI Architects: Q&A with Eugene Santos Jr.
Jan 20, 2026 | by Catha Mayor
Dartmouth Engineering has launched a new artificial intelligence (AI) track in its Master of Engineering (MEng) degree program, a reflection of the growing impact of AI and the School's foundational role in the field. In this interview, MEng Program Director Eugene Santos Jr., the Sydney E. Junkins 1887 Professor of Engineering and a leading AI researcher, discusses the new track and its goal of preparing responsible, human-centered "AI architects" for society.
Eugene Santos Jr., Dartmouth's Sydney E. Junkins 1887 Professor of Engineering, is director of the MEng program and a leading AI researcher. (Photo by Rob Strong '04)
What was the inspiration for the new AI track?
Santos: This is the age of AI explosion. We've had many ages of AI in the past, but it's really taken off, because now we have real-world—not just theoretical—applications. Dartmouth is where the term "AI" started, and we've been advancing AI for a long time, such as my work in core AI on computational intentions for reasoning, and, of course, George Cybenko's work such as his universal approximation theorem for neural networks. Recently, however, especially at Thayer, many more faculty have been delving into both core and applied AI. So the timing was right.
What can students expect?
Santos: We want to teach the mathematical foundations so that when something goes right—or when something goes wrong—you understand why. You'll be able to assess: Is this the right solution? And more importantly, what's the next solution? And, I'm not just talking about this in terms of research. I'm also talking about how this will apply to technology created by commercial businesses. I don't know if this is the right term, but I view students as "AI architects" or "AI framers." Not just framers of big-picture concepts, but people who can drill down and assess, "No, we shouldn't go in this direction," and find the right roads.
Are there certain unchanging principles in AI?
Santos: What's unchanging are those mathematical foundations. We can have changing algorithms, better algorithms, better representations. Those will always evolve, but if you don't have the base foundations, you'll keep repeating the same errors, and that's what we want to prevent. I could teach you how to use a particular API or a particular AI tool or large language model. Sure, now you know how to do it, but can you answer, "Is this the right model?" Or, "Why is it failing?" Or even better, "How could you have known ahead of time that it was likely to fail?" Without the background, you end up building something that's just not that great.
"AI is about humans. You need to have that understanding because it is humans who ultimately interact with AI. There's not going to be just AI out in the wild. That's why it has to be human-centered."
—Professor Eugene Santos Jr.
Are the courses hands-on and project-based?
Santos: All the classes for the AI track are project-focused, especially some of the foundation classes. In the advanced classes, you'll rely on what you learned and, potentially, on some of the pieces you built earlier. One of the hottest topics right now is agentic AI, which is actually an old idea about agents and agency. When you have those, you have these little pieces that you fit together. That's why the word architect comes into play, because you're architecting each of them. That's where our projects come in and what we build on.
How does Thayer's systems-based approach fit in?
Santos: That will always be a natural part of it. AI is very complex, and the many pieces of it may seem like magic. But as we put those pieces together—say, take a large language frontier model and combine it with a planning system and a first-order logic reasoning system—we want students to ask, "What's going on here? What does it mean to combine them?" Systems-based projects use these principles so we can have some control of the output. There are so many people out there trying different things and that's great, but in the end, a large percentage are just hacking things. Hacking has a place, but do I want to deploy something that was hacked together? There are so often unexpected outcomes, and you need a systems-based approach to avoid that.
What about the role of ethics?
Santos: One of the most important things we present is the appropriate use of AI, the ethics of AI, and the impacts of AI. As you try to build an AI model, you need to understand where you're deploying it in society, its particular uses, and then the impact of that. And, also consider the extremes—both good and bad—because either could happen. That will be an essential part of our projects.
We also need to make sure people are AI literate. The definition is still evolving, but at its core, AI literacy is about knowing when to use AI and how to use it appropriately. AI assessment is building it, deploying it, and solving a particular problem, but AI literacy is about using it. Not everyone has the knowledge to be an "AI architect," but every day, people use tools like Claude in their work process and need the ability to assess whether they should use it in a certain way or not.
As engineers, we're trying to solve problems that no one has solved before. So for our curriculum, it means going all the way down to the deeper levels of assessment to demonstrate how AI can properly enable solutions in powerful ways, especially in cases where no other solution has been found.
How does this relate to Thayer's human-centered philosophy?
Santos: AI is really about humans. You could try to argue that AI is not human, but how do we define intelligence? We define intelligence with respect to ourselves. You need to have that understanding because it is humans who ultimately interact with AI. There's not going to be just AI out in the wild. That's why it has to be human-centered. There are many directions AI can go, but what do you want to do? Do you want it to solve our problems? If it has anything to do with solving world problems, humans are involved.
This program is something that will continue to evolve. It has to. Just since fall of 2022, when ChatGPT blasted onto the stage, the technology has changed—not only the scale of things but also the different techniques and ways of looking at it. We will watch how the evolution goes as we update things. The important thing is we're not shooting for trends and fads. We want to give students the foundation, so you have the right tools and framework to assess all the different paths to the future.
