Picture this: you’re getting ready to watch a movie on Netflix, popcorn in hand, and several films pop up that have been curated just for you. What are you going to do: choose one from the list recommended by the underlying AI algorithm, or worry about how this list was generated and whether you should trust it? Now, imagine you are at the doctor’s office and the physician decides to consult an online system to figure out what dosage of medicine you, as the patient, should take. Would you feel comfortable having a course of treatment chosen for you by artificial intelligence? And what would the future of medicine look like if the doctor were not involved in the decision at all?
This is where the ASSET Center comes into play. The initiative, led by the C.I.S. Department in Penn Engineering, focuses on the trustworthiness, explainability, and safety of AI systems. The faculty members and students who are part of the Center have tasked themselves with finding ways to build this trust between an AI system and its users. Through new collaborations between departments across Penn, innovative research projects, and student engagement, ASSET aims to unlock AI capabilities that have never been achieved before.
I recently spoke with Rajeev Alur, Zisman Family Professor in the C.I.S. Department and inaugural director of ASSET. He elaborated on our Netflix example to explain what trust between an AI system and a user means, and when that trust becomes absolutely critical for the adoption of AI by society. Based on the movies and shows a user watches, Netflix is able to give several recommendations, and it is the user’s choice whether to go for something new. While the recommendations may be decent picks, “there is no guarantee or assurance that what they are recommending is foolproof or safe,” says Rajeev. Although AI is useful for choosing what to watch, users need a higher level of assurance in more critical applications, such as when a patient is receiving treatment from a doctor. This high assurance becomes important in two cases. One is when the system is completely autonomous, or what is called a “closed loop system”; the other is when the system is making a recommendation to a physician who decides what course of action to take. In the latter case, the AI does not make the decision directly, but its recommendation may still be highly influential. In many clinical settings, AI systems are already in place that suggest the courses of treatment that best suit the patient, with a physician consulting and tweaking these choices. What ASSET is looking to bring to the medical field are autonomous AI systems that are trustworthy and safe in their decision making for their users.
“The ultimate goal is to create trust between AI and its users. One way to do this is to have an explanation and the other one is to have higher guarantees that this decision the AI-system is making is going to be correct,” Rajeev explains.
For ASSET to succeed, the Center must nurture connections throughout Penn Engineering and beyond its walls. Within C.I.S., machine learning and AI experts are working together with faculty members in formal methods and programming languages to develop tools and ideas for AI safety. Outside of C.I.S., Rajeev explains that robotics faculty in ESE and MEAM are interested in working with the Center to design control systems and software that use AI techniques. Going beyond Penn Engineering, ASSET is dedicated to building connections with Penn’s Perelman School of Medicine. “There is a great opportunity because Penn Medicine is right here and there are lots of exciting projects going on. They all want to use AI for a variety of applications and we have started a dialogue with them…This will all be a catalyst to having new research collaborations,” says Rajeev.
In keeping with the idea of autonomous AI discussed earlier, one of ASSET’s flagship projects is called Verisig. The goal of this project, a collaboration between Rajeev Alur, Insup Lee, and George Pappas, is to “give verified guarantees regarding correct behaviors of AI-based systems” (Rajeev Alur). In a case study performed by the Center, researchers verified the controller of an autonomous F1/10 racing car, checking the design and safety of the controller so that the car is guaranteed to avoid collisions. The purpose of this project is to further understand assured autonomy: if the controller of a small car can be shown to be trustworthy and safe, these methods may eventually be generalized to AI applications in the medical field.
How to get involved
The best way for students to get involved with ASSET is to attend the Center’s Seminar Series. The seminars take place every Wednesday in Levine 307 from noon to 1 p.m., and the great thing about them is that any Penn student can join. Incredible speakers are lined up through the Fall and Spring semesters this school year, so instead of turning on Netflix and letting the system choose your next bingeworthy show, join ASSET every Wednesday for exciting talks about creating safe and trustworthy AI!