In the Spotlight: Osbert Bastani and Integrating Machine Learning into Real-world Settings

Osbert Bastani, Assistant Professor in the Computer and Information Science Department in the School of Engineering, University of Pennsylvania

Many students and faculty alike may recognize the face above as Osbert Bastani's. That's because this Assistant Professor is not a new member of the Penn Engineering team: Osbert joined the Computer and Information Science Department as a Research Assistant Professor in 2018, specializing in programming languages and machine learning.

“Penn has a great group of faculty working on interesting research problems, and they are all incredibly supportive of junior faculty. I’ve been fortunate enough to collaborate with Penn CIS faculty in a range of disciplines, from programming languages to NLP to theory, and I hope to have the chance to collaborate with many more.” (Osbert Bastani)

Osbert began his research career in programming languages. A major challenge in this field is "verifying correctness properties for software systems deployed in safety-critical settings." Because machine learning is increasingly being incorporated into these systems, he explains, verification has become even harder. In his research, he is tackling an overarching question: "How can one possibly hope to verify that a neural network guiding a self-driving car correctly detects all obstacles?" While progress has been made in trustworthy machine learning, there is still a long road ahead to solid solutions.

Working with Ph.D. students on a variety of topics and research projects is what he has most looked forward to as he entered this new role at the start of the Fall semester. Since the school year began, he has been teaching Applied Machine Learning (CIS 4190/5190) with Department Chair Zachary Ives. When asked how the semester is going, Osbert replied:

“I’ve been very fortunate to have strong students with very diverse interests, meaning I’ve had the opportunity to learn a great deal from them on a variety of topics ranging from convex duality for reinforcement learning to graph terms in linear logic. An incoming PhD student and I are now learning about diffusion models in deep learning, which are really exciting!” (Osbert Bastani)

While teaching, Osbert is also involved in several research projects on trustworthy machine learning in real-world settings. One project, which raises several questions about fairness and interpretability, involves "building a machine learning pipeline to help allocate limited inventories of essential medicines to health facilities in Sierra Leone." In addition, during a summer internship at Meta, one of Osbert's students has been "developing deep reinforcement learning algorithms that can learn from very little data by pretraining on a huge corpus of human videos."

Osbert Bastani wears many hats in the CIS Department. Not only is he involved in teaching and research projects with students, but he is also a member of several groups within the department, including PRECISE, PRiML, PLClub, and the ASSET Center. He encourages all students to attend the seminars each group holds for the opportunity to learn about research both within and beyond their own areas.

Just as Osbert works to solve problems in the classroom and in his research, he does much the same outside of work as well! An avid board game player, he frequents The Board and Brew, a restaurant just down the street from Penn. He and his wife have played through the restaurant's entire collection of the game "Unlock!". The Board and Brew has great food and several hundred games to choose from, and it comes highly recommended by Osbert himself!

The ASSET Center: Enabling Trust Between AI and its User

Picture this: you’re getting ready to watch a movie on Netflix, popcorn in hand, and several films pop up that have been curated just for you. What are you going to do: choose one from the list recommended by the underlying AI algorithm, or worry about how this list was generated and whether you should trust it? Now, think about being at the doctor’s office when the physician consults an online system to figure out what dosage of medicine you, the patient, should take. Would you feel comfortable having a course of treatment chosen for you by artificial intelligence? What will the future of medicine look like if the doctor is not involved in making the decision at all?

This is where the ASSET Center comes into play. This initiative, led by the C.I.S. Department in Penn Engineering, focuses on the trustworthiness, explainability, and safety of AI systems. The faculty members and students who are part of this Center have tasked themselves with finding ways to build this trust between an AI system and its user. Through new collaborations among departments across Penn, innovative research projects, and student engagement, ASSET aims to unlock AI capabilities that have never been achieved before.

Rajeev Alur, Zisman Family Professor and inaugural director of ASSET

I recently spoke with Rajeev Alur, Zisman Family Professor in the C.I.S. Department and inaugural director of ASSET. He elaborated on our Netflix example to explain trust between an AI system and a user, and when that trust is absolutely critical for the adoption of AI by society. Based on the movies and shows a user watches, Netflix offers several recommendations, and it is the user’s choice whether to try something new. While the recommendations may be decent picks, “there is no guarantee or assurance that what they are recommending is foolproof or safe,” says Rajeev. Although AI is useful for choosing what to watch, the user needs a higher level of assurance in more critical applications, such as when a patient is receiving treatment from a doctor. This high assurance becomes important in two cases. One is when the system is completely autonomous, or what is called a “closed loop system”; the other is when the system makes a recommendation to a physician who decides what course of action to take. In the latter case, the AI does not make the decision directly, but its recommendation may still be highly influential. In many clinical settings, there are already AI systems in place that propose courses of treatment that best suit the patient, and a physician consults and tweaks these choices. What ASSET is looking to bring to the medical field are autonomous AI systems that are trustworthy and safe in their decision making for their users.

“The ultimate goal is to create trust between AI and its users. One way to do this is to have an explanation, and the other is to have higher guarantees that the decision the AI system is making is going to be correct,” Rajeev explains.

Collaborations

For ASSET to succeed, the center must nurture connections throughout Penn Engineering and beyond its walls. Within C.I.S., machine learning and AI experts are working with faculty in formal methods and programming languages to develop tools and ideas for AI safety. Outside of C.I.S., Rajeev explains, robotics faculty in ESE and MEAM are interested in designing control systems and software that use AI techniques in the Center. Going beyond Penn Engineering, ASSET is dedicated to making connections with Penn’s Perelman School of Medicine. “There is a great opportunity because Penn Medicine is right here and there are lots of exciting projects going on. They all want to use AI for a variety of applications and we have started a dialogue with them…This will all be a catalyst to having new research collaborations”, says Rajeev.

Research Projects

F1Tenth Racing Car that is used in competitions

In keeping with the idea of autonomous AI discussed earlier, one of ASSET’s flagship projects is called Verisig. The goal of this project, a collaboration among Rajeev Alur, Insup Lee, and George Pappas, is to “give verified guarantees regarding correct behaviors of AI-based systems” (Rajeev Alur). In a case study performed by the Center, researchers verified the controller of an autonomous F1/10 racing car, checking its design and safety so that the car is guaranteed to avoid collisions. The purpose of this project is to further the understanding of assured autonomy: if the controller of a small car can be shown to be trustworthy and safe, these methods may eventually be generalized to AI applications in the medical field.

How to get involved

The best way for students to get involved with ASSET is to attend the center’s Seminar Series. Seminars take place every Wednesday in Levine 307 from noon to 1pm, and the great thing about them is that any Penn student can join. There are incredible speakers lined up through the Fall and Spring semesters this school year, so instead of turning on Netflix and letting the system choose your next bingeworthy show, join ASSET every Wednesday for exciting talks about creating safe and trustworthy AI!

Senior Design Presentations Highlight Creativity and Perseverance in Uncertain Times

By Ebonee Johnson

A still from video footage of Team 17 demonstrating their design, Personalizing Physical Therapy via Muscular Feedback.

On Friday, April 24th, CIS students gathered to present their projects at the Spring 2020 Senior Design Alumni Presentations.

Due to the COVID-19 crisis and its effects, “gathered” took on a virtual connotation: seniors were asked to attend using the video call platform Zoom.

CIS Associate Professor Ani Nenkova facilitated the event, and despite a few technical difficulties, Penn perseverance prevailed. Robert Zajac, one of 14 panelists, believes the presentations shone bright even though communication during these final months of development was undoubtedly more difficult.

“Ordinarily you would have a more human element when seeing the project in person,” said Zajac. “But I think the understanding among everyone is that we have to just do the best we can and try extra hard to have empathy for each other.”

Students were separated into breakout rooms (or sessions) on the call, with 3-4 teams presenting in each session for a group of assigned panelists. The seniors used PowerPoint presentations to demonstrate their work, covering frontend and backend analysis, algorithm and tech evaluation, and planned improvements.

Team members Jack Buttimer, Matthew Kongsiri and Siyuan Liu (Team 7) introduced an interface they called Newsfluence, which aims to help users recognize their own biases by allowing them to see how media outlets are comparatively reporting. Studies the group encountered suggested that those most susceptible to fake news are “tech illiterate,” so their top-priority metrics were interface friendliness, intuitiveness and seamlessness.

Like Newsfluence, many of the groups currently have working models of their projects available.

The Online OH Queue project – winner in the social impact category and created by team members Steven Bursztyn, Christopher Fischer, Monal Garg, Karen Shen and Marshall Vail (Team 19) – has already been effectively helping students and faculty schedule electronic office hours during this global crisis.

“It is rare in this class to have a project deployed and used by 1000+ people before the end of the Spring semester,” said Nenkova via email. “This year is a first for that.” 

Zajac, who received both his BSE and MSE in Computer Science in 2019, sat in on the presentation of a document reader called CoParse. Group members Jacob Beckerman, Josh Doman, Sarah Herman and James Xue (Team 5) posed the question, “What if we could navigate contracts like we navigate the web?” and replied with an app that allowed for just that. The project was awarded the panel’s technical sophistication honor.

“CoParse was a smart legal document reader that used natural language processing to build a rich representation of documents,” said Zajac. “It was impressive because the team received interest from industry in trying the product, which is not always common for senior design projects.”

Zajac, currently a software engineer at Two Sigma, imparts words of wisdom to future presenters:

“Start your presentation with the problem you’re solving and explain why it’s meaningful! If we don’t first agree on why the problem is meaningful, we can’t begin to talk about the technical solution.”

The complete list of winners is as follows:

Technical sophistication: Team 5 (Jacob Beckerman, Josh Doman, Sarah Herman and James Xue): CoParse: Performance-enhancing document reader for legal contracts

Creativity: Team 14 (Alexander Chea, Yi Ching, Vijay Ramanujan, Anelia Valtchanova, Leon Wu): Gamifying Physical Therapy Using Virtual Reality

Societal impact: Team 19 (Steven Bursztyn, Christopher Fischer, Monal Garg, Karen Shen, Marshall Vail): Online OH Queue

Alumni’s choice: Team 16 (Suyog Bobhate, Tsz Lam, William Sun, Zeyu Zhao, Zhilei Zheng): Data Synchronization

Click HERE for a full list of participants and panelists