DIST Usability Laboratory

Current Projects

ASIST: A Robust and Adaptive Agent that Supports High-Performance Teams

[Figure: ToM architecture]

The ability to infer another human's beliefs, desires, intentions, and behavior from observed actions, and to use those inferences to predict future actions, has been described as Theory of Mind (ToM). In conjunction with researchers at CMU, we are developing AI agents capable of making such ToM inferences: in the first year for a single human, next for dyads, and by year four for human teams with heterogeneous roles.
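One common way to operationalize this kind of inference is Bayesian inverse planning: maintain a belief over the other agent's possible goals, update it from each observed action, and predict the next action by marginalizing over that belief. The sketch below is purely illustrative and is not the ASIST agent's architecture; the goals, actions, and likelihood values are hypothetical.

```python
# Illustrative Bayesian inverse-planning sketch of ToM-style goal inference.
# The goals, actions, and likelihoods below are hypothetical placeholders.
import numpy as np

GOALS = ["fetch_medkit", "clear_rubble", "search_room"]
ACTIONS = ["move_to_medkit", "move_to_rubble", "open_door"]

# P(action | goal): how likely each action is if the human holds a given goal.
likelihood = np.array([
    [0.7, 0.1, 0.2],   # fetch_medkit
    [0.1, 0.7, 0.2],   # clear_rubble
    [0.2, 0.2, 0.6],   # search_room
])

def infer_goal(observed_actions, prior=None):
    """Update a belief over goals after each observed action (Bayes' rule)."""
    belief = np.full(len(GOALS), 1 / len(GOALS)) if prior is None else prior.copy()
    for a in observed_actions:
        belief = belief * likelihood[:, ACTIONS.index(a)]
        belief /= belief.sum()
    return belief

def predict_next_action(belief):
    """Predict the next action by marginalizing the action model over goals."""
    return ACTIONS[int(np.argmax(belief @ likelihood))]

belief = infer_goal(["move_to_medkit", "move_to_medkit"])
print(dict(zip(GOALS, belief.round(3))), "->", predict_next_action(belief))
```

Scaling the same idea from a single human to dyads and then to teams with heterogeneous roles is what makes the problem hard: the space of goals and the action model grow with the number of teammates and roles.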

Robust Human-Machine Teaming


In the past, robots were programmed to perform tasks and to interact with humans. We are now exploring ways to learn optimal policies for teaming and to adapt robots and humans so that they cooperate more effectively.

Trustworthy Interaction with Robot Swarms

Successful interaction with autonomy depends crucially on human trust and its influence on reliance. In our work we are developing both normative models of performance based on proper calibration of trust, and adaptive models by which autonomous systems can use indices of human trust to adapt their characteristics and improve the performance of the overall system. While this would be challenging even for a single robot, we are working with very large (1,000+) swarms in CUDA-based simulations and a small swarm of 10 TurtleBots in the lab.
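As an illustration of the kind of trust-calibration loop involved, the sketch below assumes a simple model in which trust tracks observed automation reliability and drives a rely-or-intervene decision; the model form, parameters, and outcomes are hypothetical and are not the lab's normative or adaptive models.

```python
# Toy trust-calibration sketch (hypothetical model, for illustration only).
# Trust moves toward observed automation reliability; the operator relies on
# the swarm only when trust exceeds self-confidence in manual control.
from dataclasses import dataclass

@dataclass
class TrustModel:
    trust: float = 0.5          # current trust in the automation, in [0, 1]
    learning_rate: float = 0.2  # how quickly trust tracks observed outcomes

    def update(self, automation_succeeded: bool) -> float:
        """Nudge trust toward 1 after a success, toward 0 after a failure."""
        target = 1.0 if automation_succeeded else 0.0
        self.trust += self.learning_rate * (target - self.trust)
        return self.trust

    def rely_on_automation(self, self_confidence: float) -> bool:
        """Rely on the automation when trust exceeds manual self-confidence."""
        return self.trust > self_confidence

model = TrustModel()
for outcome in [True, True, False, True]:   # hypothetical mission outcomes
    model.update(outcome)
print(round(model.trust, 3), model.rely_on_automation(self_confidence=0.6))
```

A normative model in this spirit asks what trust should be, given the automation's actual reliability; an adaptive system instead observes indices of the operator's trust and adjusts its own behavior, for example by requesting confirmation more often when trust appears miscalibrated.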

Recent Projects

Formal Models of Human Control of Cyber Physical Systems

[Figure: anesthesia]

While formal models of human cognition have advanced greatly, formal verification of human-machine systems has remained limited to "engineering models" of human performance, such as characterizing the human operator as an optimal controller. In this project we are developing a methodology for deriving models of human performance that capture the architectural idiosyncrasies of human cognition in an analytic form suitable for formal verification. Work with a swarm-control task is complete, and current efforts address the highly nonlinear problem of fluid maintenance in anesthesia.

The Influence of Cultural Factors on Trust in Automation

This project seeks to develop a validated measure of trust in automation and to investigate cultural differences in the concept and in the resulting behavior, using samples from the US, Taiwan, and Turkey.

Cognitive Compliant Command for Multirobot Teams

In this project we are developing methods for commanding robot teams of various sizes and levels of autonomy. We have conducted studies examining the feasibility of scheduling operator attention to enable supervision of multiple robots. In other work we have investigated approaches allowing human supervision of robot swarms.
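To make the attention-scheduling idea concrete, the toy sketch below assumes each robot has a fixed "neglect tolerance" (how long it can run unsupervised) and services robots earliest-deadline-first; the robot names, tolerances, and service time are hypothetical, and this is not the policy used in the studies above.

```python
# Toy earliest-deadline-first scheduler for operator attention across robots.
# All values are hypothetical; real neglect tolerances vary with the task.
import heapq

def schedule_attention(neglect_times, service_time, horizon):
    """Yield (time, robot) servicing decisions for a single operator.

    neglect_times: robot -> seconds it can operate without supervision.
    service_time:  seconds of operator attention each servicing consumes.
    horizon:       total simulated seconds.
    """
    # Priority queue keyed by when each robot's neglect tolerance expires.
    deadlines = [(neglect_times[r], r) for r in neglect_times]
    heapq.heapify(deadlines)
    t = 0.0
    while t < horizon:
        _, robot = heapq.heappop(deadlines)      # robot whose tolerance expires soonest
        yield t, robot
        t += service_time                        # operator attention is consumed
        heapq.heappush(deadlines, (t + neglect_times[robot], robot))

for when, robot in schedule_attention({"uav1": 20, "uav2": 35, "ugv1": 50},
                                       service_time=5, horizon=60):
    print(f"t={when:5.1f}s  attend {robot}")
```

Even in this simple form, the schedule makes the core constraint visible: as the team grows or neglect tolerances shrink, the operator's attention saturates, which is what motivates scheduling it explicitly rather than supervising robots ad hoc.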

Modeling Synergies in Large Human-Machine Networked Systems

This research is mathematically and empirically based, drawing on human data and models to characterize human behavior within the system. The work has involved human control of multiple robots, cognitive modeling of human operators, and experiments and models of multi-human/multi-robot teams.

Cultural Models, Collaborations and Negotiation

Researchers from the University of Pittsburgh will lead the design, conduct, and analysis of data from online negotiation experiments. Observing interactive negotiation is necessary to capture the processes needed to analyze and understand the dynamics of cooperation and negotiation, and the tipping points that could lead to beneficial or disastrous effects.