If I can do it, you can do it!
Interactive learning as the key to reliable intent detection
Claudio Castellini
FAU Erlangen-Nürnberg, Germany
ABSTRACT
Intent detection is a quintessential part of human-machine interaction and proves especially hard in rehabilitation and assistive robotics – decades of research in signal processing, machine learning and mechatronics have still failed to produce a dexterous, flexible, and reliable human-machine interface (HMI) for prostheses and exoskeletons. In this talk I’ll sketch the state of the art in the field, try to highlight a few pitfalls and possible ways ahead, and illustrate what I consider to be the main path to success: interactive (human and machine) learning, leading to tight coupling, optimal control, and embodiment of robotic artefacts.
SPEAKER BIO
I am a researcher in medical robotics, focussing on rehabilitation and assistive robotics, human-machine interfaces and interaction, and applied machine learning. In 1998 I obtained a degree in Electronic (Biomedical) Engineering from the University of Genoa, and in 2005 a Ph.D. in Artificial Intelligence (Mathematical Logic) from the School of Informatics of the University of Edinburgh. I then turned my attention to robotics for the disabled and spent 4.5 years as a post-doctoral fellow at the Advanced Robotics Laboratory of the University of Genoa. In 2009 I landed at the German Aerospace Centre where, as of now, I am a team leader and senior researcher at the Institute of Robotics and Mechatronics.
In 2021 I was appointed full professor at the Chair of Medical Robotics at the University of Erlangen-Nuremberg. To date, I have (co-)authored just short of 120 scientific papers, I am involved in several research projects, and I serve, or have served, on a number of international editorial boards and committees.