What happens when AI goes wrong? Probably not the Terminator or the Matrix, despite what Hollywood suggests, but something that could still harm a human: a self-driving car that gets into an accident, or an algorithm that discriminates against certain people. Fortunately, Penn has innovative researchers like Eric Wong, who build tools to make sure AI works correctly!
You may have already seen Eric on campus, or perhaps teaching his advanced graduate class. Like the Class of 2026, who are quickly learning their way around Levine Hall, Eric is one of the C.I.S. Department’s newest faculty members. An Assistant Professor working in machine learning, Eric earned his Ph.D. at Carnegie Mellon and was a postdoctoral researcher in MIT’s Computer Science and Artificial Intelligence Laboratory.
With the semester in full swing, Eric Wong is busy teaching course 7000-05: Debugging Data and Models. When asked what he is looking forward to most about teaching at Penn Engineering, Eric stated,
“One of the key skills that students will learn is how to tinker with AI systems in order to debug and identify their failure modes. I’m excited to see the new ways in which Penn Engineering students will break AI systems, as well as the innovations they come up with to repair them!”
The initiatives that Penn Engineering has launched recently, particularly the ASSET Center, are what drew Eric to the C.I.S. Department. “Penn Engineering is well-situated to ensure that the tools and systems we develop as computer scientists actually satisfy the needs and requirements of those that want to use them,” said Eric. He will be one of many faculty members working with ASSET to develop reliable and trustworthy AI systems, work that coincides with his own research.
Some of Eric’s specialized interests in this field include “verifying safety properties of an AI-system, designing interpretable systems, and debugging the entire AI pipeline (i.e. the data, models, and algorithms).” His research aims to make AI systems debuggable, so that users can understand a system’s decision process and learn how to inspect its defects. Eric is also interested in interdisciplinary work that connects these methods to fields outside of engineering. Collaborators in medicine, security, autonomous driving, and energy would “ensure that the fundamental methods we develop are guided by real-world issues with AI reliability.”
As AI is developed and deployed at a rapid rate, Eric worries that “it is only a matter of time before the ‘perfect storm’ induces a catastrophic accident for a deployed AI system.” By teaching methods for debugging AI systems, he strives to give his students the tools and knowledge to build safer and more trustworthy AI for the future. He hopes that, through his research and his teaching in the classroom, students will take the time to “critically examine their own system” before sending it out into the world.
When Eric is not making sure AI systems are at the top tier of trustworthiness and reliability, he enjoys trying to recreate the recipes of meals he orders at restaurants. Trying to “reverse engineer its creation process” is harder than it might seem. Eric mentioned, “It does not always look the same as the original, nor does it always taste as good, but sometimes it works!” Maybe someday that too will be something an AI can do (correctly)!