Robots, autonomous systems, and other agents possessing artificial intelligence are at the forefront of the world’s technological development. These systems have the potential to shape how humans live in the twenty-first century and beyond. They will shape our physical world, economy, work, and even our social lives. Having intelligent machines in every part of our lives requires multiple levels of trust between humans and autonomous agents/robots. These levels range from physical safety and confidence that the agents will do the task they are made to do, all the way to trust that the agent is making the correct decision or performing the correct action in ambiguous, uncertain, or even ethics-dependent situations.

A few major questions surrounding this topic are: 1) how can trust between two or more agents be objectively measured, 2) what are the antecedents of trust between humans and machines, 3) what laws, rules, and/or regulations will help ensure trust between robotic and human agents, and 4) are there standard testbeds or methodologies that should be shared by the research community to advance a common understanding of trust?

This workshop will feature handpicked experts in the relevant fields giving lectures on these topics. The workshop will also accept paper submissions from researchers working in this area. Authors of the top papers will have the chance to present their work via an oral presentation and/or a poster session. The current plan includes a total of four invited speakers and up to five top-paper presentations; the number of poster presentations will depend on the quality of the submitted documents. Three of the invited speakers and two top papers are planned for the morning session, followed by an independent lunch. The afternoon session will begin with the fourth invited speaker and three more top-paper presentations.
Finally, there will be a poster session, followed by a 30-minute moderated discussion session and closing remarks. The primary goals of this workshop are to continue the international dialog on trust research, encourage discussion across disciplines, and identify important future research questions to advance the study of trust in robotics, autonomous systems, and artificial intelligence. It will serve as an international extension of previous workshops held in the U.S., which assisted in defining a forward-looking research agenda of interest to potential sponsors and fostered awareness and coordination across related research efforts (Atkinson, Friedland, and Lyons, 2012; Gratch, Friedland, and Knott, 2015).
An additional desired outcome of this workshop is to promote research relationships between the U.S. and the rest of the world. To facilitate this, we will also set aside dedicated time for non-U.S. researchers to propose, or “pitch,” projects that they think would be appropriate for funding as a U.S. and non-U.S. collaboration. These pitches will be done in a private setting and are planned to occur during the poster session.
© Workshop on Future Trust in Robotics, Autonomous Systems, and Artificial Intelligence at the 17th IEEE-RAS International Conference on Ubiquitous Robots