Workshop Summary
Thank you all for your contributions to the 2023 IROS Workshop on Advances in Multi-Agent Learning - Coordination, Perception, and Control. Thanks to your engagement, we had an excellent workshop with a great set of contributed papers and talks from our invited speakers. We are pleased to announce the following awards and contributed papers for the workshop:
Best Paper Award w/ Oral Presentation - $500 Cash Prize
Oral Presentations
Poster Presentations
Support
This workshop is proudly sponsored by the IEEE RAS Technical Committee on Multi-Robot Systems, Google DeepMind, and Avian AG.
Call for Papers
We are pleased to announce the full-day workshop “Advances in Multi-Agent Learning - Coordination, Perception, and Control” will take place as part of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2023) on October 1st in Detroit, USA. We invite the submission of short papers (maximum 4 pages, plus “n” pages for references) describing current work and early results on the workshop’s themes, to be presented during an interactive poster session. We will also have short oral presentations for spotlighted talks.
We welcome submissions on all topics related to multi-agent learning, including but not limited to:
- Perception across robot teams
- Multi-robot coordination and communication
- Learning in multi-agent robot contexts
- Task allocation
- Resource management
- Planning
- Control
- Game theory
- Algorithmic approaches for multi-task expertise and distillation
Please submit your papers in two-column IEEE format via our CMT website.
Short papers will be peer-reviewed in single-blind fashion. If you are interested and qualified to act as a reviewer, please send an email to Drew Hanover with the subject line “Reviewer Request - IROS 2023 Workshop”.
Accepted papers will be presented in-person during an interactive poster session. At least one author will need to be present at the venue for each poster. Accepted papers will be made available on the workshop website after IROS.
Important Dates
- Submissions Open: July 14th, 2023
- Submission Deadline: August 20th, 2023
- Notification of Publication Decision: August 31st, 2023
- Workshop Date: October 1st, 2023
Information
The world around us is inherently “multi-agent”. When you drive your car or walk through a crowd, you are engaging in a collaborative and/or competitive environment with other agents. This makes the world an exciting place: individuals must learn game-oriented strategies across all aspects of life to maximize their returns, whether longevity, wealth, or happiness. The environment is typically only partially observable, necessitating decision-making strategies that can account for uncertainty in highly complex domains.
In this workshop, we aim to explore these ideas by presenting the latest advancements in multi-agent reinforcement learning and game theory from top researchers and practitioners in the field. The workshop will cover a broad range of topics, including multi-robot coordination and communication, task allocation, resource management, planning, control, and game theory. Additionally, we encourage topics on algorithmic approaches for multi-task expertise and distillation.
The workshop will include practical demonstrations and hands-on sessions to provide participants with the requisite tools to enable robots to interact in a highly complex, multi-agent world.
Location and Date
October 1st, 2023, in Detroit, Michigan (IROS Conference Room 2993)
Schedule
Time | Activity |
---|---|
9:00 | Opening Remarks: Drew Hanover & Ben Moran |
9:10-9:40 | Peter Stone |
9:45-10:15 | Dimitra Panagou |
10:15-10:30 | Q & A |
10:30-11:00 | Coffee Break |
11:05-11:35 | Andrew Davison |
11:40-12:10 | Ben Moran |
12:15-12:30 | Q & A |
12:30-1:30 | Lunch |
1:35-2:10 | Mac Schwager |
2:15-3:30 | Poster Session and Coffee Break |
3:30-4:30 | Contributed Papers Lightning Talks |
4:30-5:00 | Awards Session and Conclusion |
Speakers
Ben Moran
DeepMind made headlines in 2016 after its AlphaGo program beat professional Go player Lee Sedol, a world champion, in a five-game match, which was the subject of a documentary film. A more general program, AlphaZero, beat the most powerful programs for Go, chess, and shogi (Japanese chess) after a few days of play against itself using reinforcement learning. Ben will present his recent research at DeepMind on using self-play techniques to train humanoid robots to play soccer.
Mac Schwager
I study distributed algorithms for control, estimation, and learning in groups of autonomous aircraft, autonomous cars, and robots. Here are a few of my current and past research topics:
- Distributed control for the deployment of aerial camera networks
- Agile control of quadrotor swarms
- Monitoring and controlling environmental phenomena with aerial robots
- Cooperative manipulation with teams of mobile manipulators
- Trustworthiness, deception, and adversaries in multi-robot systems
- Human-swarm interfaces
- Autonomous drone racing
Andrew Davison
I hold the position of Professor of Robot Vision at the Department of Computing, Imperial College London, and lead the Dyson Robotics Laboratory at Imperial College, where we are working on vision and AI technology for next-generation home robotics. I also lead the Robot Vision Research Group, though most of my activity is now within the Dyson Lab.
I work in computer vision and robotics; specifically, my main research has concerned SLAM (Simultaneous Localisation and Mapping) using vision, with a particular emphasis on methods that work in real-time with commodity cameras. I pioneered visual SLAM from the mid-1990s onwards and brought the SLAM acronym and methods from robotics to single-camera computer vision with the breakthrough MonoSLAM algorithm in 2003, which enabled long-term, drift-free, real-time SLAM from a single camera for the first time, inspiring many researchers and industry developments in robotics and in inside-out tracking for VR and AR.
Peter Stone
Dr. Peter Stone holds the Truchard Foundation Chair in Computer Science at the University of Texas at Austin. He is Associate Chair of the Computer Science Department, as well as Director of Texas Robotics. In 2013 he was awarded the University of Texas System Regents’ Outstanding Teaching Award and in 2014 he was inducted into the UT Austin Academy of Distinguished Teachers, earning him the title of University Distinguished Teaching Professor. Professor Stone’s research interests in Artificial Intelligence include machine learning (especially reinforcement learning), multiagent systems, and robotics. Professor Stone received his Ph.D. in Computer Science in 1998 from Carnegie Mellon University. From 1999 to 2002 he was a Senior Technical Staff Member in the Artificial Intelligence Principles Research Department at AT&T Labs - Research. He is an Alfred P. Sloan Research Fellow, Guggenheim Fellow, AAAI Fellow, IEEE Fellow, AAAS Fellow, ACM Fellow, Fulbright Scholar, and 2004 ONR Young Investigator. In 2007 he received the prestigious IJCAI Computers and Thought Award, given biannually to the top AI researcher under the age of 35, and in 2016 he was awarded the ACM/SIGAI Autonomous Agents Research Award. Professor Stone co-founded Cogitai, Inc., a startup company focused on continual learning, in 2015, and currently serves as Executive Director of Sony AI America.
Dimitra Panagou
Title: Task Allocation and Trust-based Adaptation and Control in Adversarial Multi-Robot Systems
Multi-robot systems and robotic networks are vulnerable to attacks in the “cyber” domain (e.g., malicious or faulty information shared via communication or acquired via sensing) and/or to attacks that target the “physical” domain (e.g., the actuators or the entire physical system/network). Despite tremendous progress, there are still open problems, including but not limited to how we can obtain less conservative models of, and responses to, such attacks (beyond worst-case assumptions). In this talk, I will present an overview of our recent work on safety-driven task allocation and on trust-based adaptation and control for multi-agent systems against adversarial agents.
Dimitra Panagou received the Diploma and PhD degrees in Mechanical Engineering from the National Technical University of Athens, Greece, in 2006 and 2012, respectively. In September 2014 she joined the Department of Aerospace Engineering, University of Michigan, as an Assistant Professor. Since July 2022 she has been an Associate Professor in the newly established Department of Robotics, with a courtesy appointment in the Department of Aerospace Engineering, University of Michigan. Prior to joining the University of Michigan, she was a postdoctoral research associate with the Coordinated Science Laboratory, University of Illinois, Urbana-Champaign (2012-2014), a visiting research scholar with the GRASP Lab, University of Pennsylvania (June 2013, Fall 2010), and a visiting research scholar with the Mechanical Engineering Department, University of Delaware (Spring 2009). Her research program spans nonlinear systems and control; multi-agent systems and networks; motion and path planning; human-robot interaction; and navigation, guidance, and control of aerospace vehicles. She is particularly interested in the development of provably correct methods for the safe and secure (resilient) operation of autonomous systems in complex missions, with applications in robot/sensor networks and multi-vehicle systems (ground, marine, aerial, space). She is a recipient of the NASA Early Career Faculty Award, the AFOSR Young Investigator Award, and the NSF CAREER Award, and a Senior Member of the IEEE and the AIAA.