With the increasing complexity of AI systems, it has become harder for lay users to understand these systems at an intuitive level and to work with them effectively. The onus is therefore on us, as AI system developers, to equip these systems with capabilities that allow them to interact and collaborate effectively with humans in the loop. In this tutorial, we will introduce the problem of human-aware decision making and the challenges associated with generating agent behavior in these settings. In particular, we will discuss state-of-the-art work on capturing and reasoning with human mental models to achieve fluent coordination, and show how such models allow an agent to generate interpretable as well as privacy-preserving behavior and to provide explanations. This half-day tutorial is aimed at researchers and graduate students with a background and/or interest in real-world AI systems that are meant to interact and collaborate with people.
The audience will walk away with an introduction to the problem of human-aware decision making and the challenges associated with generating agent behavior in these settings. In this tutorial, we will explore settings with an AI agent in the actor's role and a human in the observer's role. The tutorial will motivate three important problems: generation of interpretable behavior, explanation as model reconciliation, and generation of privacy-preserving and adversarial behavior, all centered around the idea that human-aware AI systems need to model both the human's mental model and the human's expectations of the agent's behavior. All tutorial discussions will be grounded in published work on human-aware decision making, both by the proposers and by researchers across different AI communities and disciplines. By the end of the tutorial, the audience will take home an appreciation of the challenges involved in human-aware planning scenarios and of the different agent behaviors these scenarios call for.
Subbarao Kambhampati is a professor of Computer Science at Arizona State University. He studies fundamental problems in planning and decision making, motivated in particular by the challenges of human-aware AI systems. He is a Fellow of AAAI and AAAS and a past president of AAAI.
Tathagata Chakraborti works at IBM Research on human-AI interaction and explainable AI. He received back-to-back IBM PhD Fellowships and an honorable mention for the ICAPS Best Dissertation Award, and was invited to draft the landscaping primer for the Partnership on AI (PAI) Pillar on Collaborations Between People and AI Systems.
Sarath Sreedharan is a fourth-year Ph.D. student at Arizona State University, working in the Yochan lab under Prof. Subbarao Kambhampati. His research interests include explanations for automated planning and human-aware decision making. His research has been featured at conferences such as AAMAS, ICAPS, ICRA, IJCAI, and HRI, and in journals such as AIJ.
Anagha Kulkarni is a fifth-year Ph.D. student at Arizona State University, working in the Yochan lab led by Prof. Subbarao Kambhampati. Her research interests include human-aware AI planning and privacy-preserving planning for AI systems. Her research has been featured at conferences such as AAAI, AAMAS, ICAPS, and ICRA.