Take Aim: The Hottest Problems in Artificial Intelligence

Meet the Experts

James Lester, PhD
Dr. James Lester is the Goodnight Distinguished University Professor in Artificial Intelligence and Machine Learning at North Carolina State University. He is the director of the Center for Educational Informatics and the National Science Foundation AI Institute for Engaged Learning. His research centers on transforming education with artificial intelligence. His current work ranges from AI-driven narrative-centered learning environments and virtual agents for learning to multimodal learning analytics and sketch-based learning environments.

Dr. Lester received a National Science Foundation CAREER Award, four Best Paper Awards, and the International Federation for Autonomous Agents and Multiagent Systems Influential Paper Award. At North Carolina State University, he has been recognized with the Alumni Association Outstanding Research Award, the Outstanding Teacher Award, and the Alexander Quarles Holladay Medal for Excellence. His research is supported by the National Science Foundation, the US Department of Education’s Institute of Education Sciences, the National Institutes of Health, and the Army Futures Command. He has served as Editor-in-Chief of the International Journal of Artificial Intelligence in Education. He is a fellow of the Association for the Advancement of Artificial Intelligence (AAAI).

Jung-Eun Kim, PhD

Dr. Jung-Eun Kim is an assistant professor of computer science at North Carolina State University. She received her BS and MS in computer science and engineering at Seoul National University, and her PhD in computer science at the University of Illinois at Urbana-Champaign. Before joining the faculty at NCSU, Dr. Kim was an assistant professor of electrical engineering and computer science at Syracuse University and an associate research scientist in computer science at Yale University.

Dr. Kim’s research focuses on the bias and efficiency of AI and deep learning. She is particularly interested in the tradeoffs between performance, resource consumption, and other factors when designing robust, reliable, and sustainable models. She has served on the program committee of the Association for the Advancement of Artificial Intelligence (AAAI), the International Joint Conference on Artificial Intelligence (IJCAI), the Design Automation Conference (DAC), and the Design, Automation, and Test in Europe (DATE) Conference.

Reimagining Education

“A really interesting question about human learning is: what is this going to look like in the age of AI?” Dr. Lester says. “When you have AI systems that are incredibly capable, what is it going to mean to be a student? What is it going to mean to be a lifelong learner? And how can we develop AI technologies that’ll create learning experiences that are highly personalized to everybody?”

Generative AI dovetails with many branches of traditional education theory. One-on-one tutoring has repeatedly been found to be extremely effective for learners across diverse populations and subject matters, but it has so far been infeasible to scale. LLM-powered chatbots, conversing in natural language, change the math.

They might also change the structure: Dr. Lester and his colleagues are using generative AI to create interactive, narrative-centered learning experiences for K-12 students. They’re also exploring embodied conversational agents—a form of virtual expert—and multimodal learning analytics that track what works and what doesn’t work when it comes to keeping students engaged and learning.

Making AI-powered education work isn’t the only issue. Challenges remain around explainability: when an AI system recommends a learning path, parents, teachers, and students need to understand the rationale behind that recommendation, but deep learning models are famously difficult to inspect, and their reasoning is not human. As with other AI applications, deep philosophical and ethical questions lurk just below the surface of every potential breakthrough. Engineers, engineering students, and other researchers need to think deeply about the ethical challenges associated with what are otherwise very data-driven activities.

“It is going to be possible to create learning environments that are deeply engaging and highly effective, but we have to be wise about how we create these technologies, and we have to be really thoughtful about how we’re going to begin implementing them in classrooms, both K-12 and higher ed,” Dr. Lester says. “But if we do it right, it’s an opportunity to develop experiences for people that will let us thrive as a society.”

Teaming Humans and AI

AI systems need humans to unlock their potential. The collaborative interaction between the two is sometimes referred to as “teaming,” wherein the different strengths of each are leveraged toward the same goal. Practically every field, engineering or otherwise, will likely use AI, just as so many use software today; proficiency with AI could become as fundamental as proficiency with the keyboard. Education professionals will need to explore how to integrate not only AI-powered learning but also AI literacy into their core curricula.

“We know AI skills are going to be important,” Dr. Lester says. “So how do we put that into K-12 systems, and how do we put that into higher ed?”

Building teaming skills in K-12 education and higher learning is one challenge. But it’s another for engineers to build up their own teaming skills to work better with and alongside AI. Dr. Lester encourages new and aspiring engineering students to take AI courses and develop a foundational understanding of statistics, which functions as the underlying logic of many deep-learning algorithms. Given the potential ubiquity of AI systems in the near future, the advice might also be extended to other disciplines.

“It’s an interesting time,” Dr. Lester says. “Because few have had the opportunity to acquire these skills yet. There’s no previous generation that had them.”

AI Sustainability

To the layman, AI is a magic box. To the engineer, it’s the product of both innovative design and intelligent tradeoffs, pulled together from various possible components. And while today’s AI applications dazzle on the front end, continuing on their current trajectory may not be sustainable: to become more powerful, they require more data, more training, and more computing power. Issues like carbon emissions, data storage, and e-waste quickly scale up.

“Artificial intelligence and machine learning technologies have been drastically advanced in the last decade or so, but have not necessarily had resource consumption in mind,” Dr. Kim says. “However, nowadays, as the datasets are getting ever bigger and the compute requirements keep increasing with more generative models and large language models, resource consumption—especially energy consumption—becomes an issue that we all need to pay attention to.”

Once factored in, sustainability changes the calculus of AI development. Reducing unnecessary model complexity can save time and resources in the long run. Clean energy usage, responsible data management, and carbon offset initiatives can help mitigate sustainability concerns further. Like climate change, however, the ongoing challenge will be properly incentivizing the future over the present: sustainability can slow down development, and thus delay profitability.
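One concrete way to see how reducing model complexity saves resources is a back-of-the-envelope memory estimate for quantization, a common deployment technique that stores weights at lower numeric precision. The sketch below is illustrative only; the parameter count is a hypothetical stand-in for a large model, not a measurement from any real system.

```python
# Rough estimate of memory saved by quantizing model weights from
# 32-bit floats to 8-bit integers. The 7B parameter count is a
# hypothetical example, not a reference to any specific model.

def model_memory_mb(num_params: int, bytes_per_param: int) -> float:
    """Approximate weight storage in megabytes."""
    return num_params * bytes_per_param / 1e6

params = 7_000_000_000  # e.g., a 7-billion-parameter language model

fp32_mb = model_memory_mb(params, 4)  # 32-bit floats: 4 bytes each
int8_mb = model_memory_mb(params, 1)  # 8-bit integers: 1 byte each

print(f"fp32 weights: {fp32_mb:,.0f} MB")  # 28,000 MB
print(f"int8 weights: {int8_mb:,.0f} MB")  # 7,000 MB
print(f"saved: {100 * (1 - int8_mb / fp32_mb):.0f}%")  # 75%
```

The same arithmetic scales to energy and bandwidth: smaller weights mean less data moved per inference, which is often where much of the power goes.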

AI Bias & Reliability

A large vocabulary, well-wielded, can hide a plethora of imperfections. That’s certainly true for the large language models powering the most popular AI chatbots today, which can easily, if unintentionally, deceive users. Instances of “hallucination,” where chatbots provide incorrect answers with certainty, are only one small aspect of the problem. Much more insidious is the concept of AI bias, which can be accidentally built into many algorithms and models.

“Deep learning suffers from a bias problem,” Dr. Kim says. “This happens because there is a discrepancy between the data that a model learns from and the data the model sees in the real world. The discrepancy can come from many factors, such as an imbalanced or biased training dataset, and misleading or noisy features embedded in data samples. Those make a model brittle and unreliable when it is fielded, which can cause unfair, biased, distorted, or harmful results.”
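The training/deployment discrepancy Dr. Kim describes can be illustrated with a deliberately trivial toy: a “model” that simply predicts the most common label it saw in training. The data below is fabricated for illustration; no real dataset or model is implied.

```python
# Toy illustration of how an imbalanced training set yields a model
# that looks accurate overall yet fails entirely on the minority class.
from collections import Counter

def train_majority_classifier(labels):
    """Return the most frequent training label as the sole prediction."""
    return Counter(labels).most_common(1)[0][0]

# Heavily imbalanced training data: 95 negatives, 5 positives.
train_labels = [0] * 95 + [1] * 5
prediction = train_majority_classifier(train_labels)  # always predicts 0

# A test set with the same imbalance makes the model look good...
test_labels = [0] * 95 + [1] * 5
accuracy = sum(prediction == y for y in test_labels) / len(test_labels)
# ...but the minority class is never detected.
recall_pos = sum(1 for y in test_labels if y == 1 and prediction == 1) / 5

print(accuracy)    # 0.95 overall accuracy
print(recall_pos)  # 0.0 recall on the minority class
```

Real models are subtler, but the mechanism is the same: aggregate metrics computed on data that mirrors the training distribution can hide brittle behavior on underrepresented groups.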

Engineers need to take concrete steps to design more unbiased AI systems. They can do so by using more diverse and representative training data, regularly auditing their algorithms for fairness, working across disciplines and demographics, and favoring transparent and explainable models. It’s easier said than done: unconscious bias is pervasive throughout modern society. But small steps, and intentionality, count.
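One of the audit steps above can be sketched concretely: comparing a model’s positive-prediction (“selection”) rate across demographic groups. The groups, decisions, and the four-fifths threshold below are illustrative assumptions; the threshold is a widely used heuristic from employment-selection guidance, not a universal standard.

```python
# Minimal fairness-audit sketch: compare selection rates across two
# groups and flag a large disparity. All data here is made up.

def selection_rate(decisions):
    """Fraction of decisions that are positive (1 = selected)."""
    return sum(decisions) / len(decisions)

# Hypothetical binary decisions for two demographic groups.
group_a = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]  # 8 of 10 selected
group_b = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]  # 3 of 10 selected

rate_a = selection_rate(group_a)
rate_b = selection_rate(group_b)
disparity = min(rate_a, rate_b) / max(rate_a, rate_b)

print(f"group A: {rate_a:.0%}, group B: {rate_b:.0%}")  # 80%, 30%
print(f"disparity ratio: {disparity:.2f}")              # 0.38
if disparity < 0.8:  # four-fifths rule of thumb
    print("Potential disparate impact: investigate before deployment.")
```

An audit like this is a starting point, not a verdict: a low ratio signals that the model, its features, and its training data deserve closer scrutiny.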

Ethics & Safety in AI

The future of AI is mined with theoretical, ethical, and practical obstacles. Powerful tools can be used for negative as well as positive ends. Unforeseen consequences sometimes overshadow the achievements of otherwise revolutionary technology, as evidenced by social media. The incentives of corporations and markets rarely align with the incentives of broader society, and engineers need to be thoughtful about the future they create with AI. Moving recklessly could have disastrous results.

“I encourage new and aspiring engineers to care about issues beyond performance,” Dr. Kim says. “We appreciate the performance advancement of AI and machine learning that many researchers have achieved. It made possible many things that we utilize today, which were impossible even 15 years ago. Now, it is time to think about what we may have missed or may miss beyond such performance advancement, in terms of environment, society, security, etc. It should not necessarily slow down technical advancement, though.”

Engineers and researchers must maintain a commitment to building AI systems that behave in ways consistent with humanity’s societal values, a concept generally referred to as AI alignment. It can sound vague, but it’s essential: without specifying clear objectives and prioritizing ethics and safety, the chances of calamitous unintended consequences increase. That choice to act hastily or thoughtfully, selfishly or compassionately, is present in every step of AI design and development. Get it right, and the upside is unbounded.

“The opportunity to carve out a future, given how powerful this technology is going to be, is before us,” Dr. Lester says. “That doesn’t mean there aren’t serious challenges to confront as well. But we can develop a kind of roadmap that will let us as a society build out applications of the technology that will best serve us, as opposed to lots of other possible futures that we could pursue but want to avoid.”

Related Features

An Expert’s Guide to Using Digital Twins

Today, digital twins are not limited to just physical objects. With the rise of virtual and augmented reality technologies, digital twins can now replicate entire environments and systems in a virtual space. This has opened up new possibilities for testing and simulation, allowing companies to reduce costs and risks associated with physical prototypes.

Artificial Intelligence in Job Recruitment: How AI Can Identify Top Talent

Diversity and inclusivity aren’t purely idealistic goals. A growing body of research shows that greater diversity, particularly within executive teams, is closely correlated with greater profitability. Today’s businesses are highly incentivized to identify a diverse pool of top talent, but they’ve still struggled to achieve it. Recent advances in AI could help.

Artificial Intelligence Systems & Specializations: An Interview with Microsoft’s Sha Viswanathan

The ability of a computer to learn and problem solve (i.e., machine learning) is what makes AI different from any other major technological advances we’ve seen in the last century. More than simply assisting people with tasks, AI allows the technology to take the reins and improve processes without any help from humans.

Building Web3: Expert Interview on Non-Fungible Tokens (NFTs)

Unlike fungible items, which are interchangeable and can be exchanged like-for-like, non-fungible tokens (NFTs) are verifiably unique. Broadly speaking, NFTs take what amounts to a cryptographic signature, ascribe it to a particular digital asset, and then log it on a blockchain’s distributed ledger.

Building Web3: Smart Contracts, Solidity, and the Ethereum Network

First proposed by computer scientist Nick Szabo in the 1990s and later pioneered by the Ethereum blockchain, which launched in 2015, smart contracts are programs that execute themselves when certain predetermined conditions are met.