Workshop at ICRA in Singapore, May 29, 2017

 

ROOMS 4211/4212 (please double check)

Organizers

F. Bonsignorio (CORRESPONDING CONTACT PERSON)

The BioRobotics Institute, SSSA

Viale Rinaldo Piaggio 34,

56025 Pontedera (Pisa), Italy

tel: +39 339 84 06 011


 

Signe A. Redfield

Naval Research Lab

4555 Overlook Ave, SW

Washington, D.C.  20375


 

A. P. del Pobil

Department of Engineering and Computer Science, Universitat Jaume I

12071 Castellon, Spain


Abstract

In robotics research, the replicability and reproducibility of results, as well as their objective evaluation and comparison, are very difficult to put into practice. Controlling for environmental conditions is hard, and defining comparable metrics and identifying goal similarity across different domains are poorly understood. Even determining the information required to enable replication of results has been the subject of extensive discussion. Worse still, there is no solid theoretical foundation for experimental replicability in robotics. This situation impairs both research progress and technology transfer. Significant progress has been made in these respects in recent years, and this workshop will provide a curated view of the state of the art.

Content

In robotics research, the replicability and reproducibility of results, as well as their objective evaluation and comparison, are very difficult to put into practice. Controlling for environmental conditions is hard, and defining comparable metrics and identifying goal similarity across different domains are poorly understood. Even determining the information required to enable replication of results has been the subject of extensive discussion. Worse still, there is no solid theoretical foundation for experimental replicability in robotics. This situation impairs both research progress and technology transfer. Significant progress has been made in these respects in recent years, and this workshop will provide a curated view of the state of the art.
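
To make the discussion on the information required to enable replication more concrete, the sketch below lists the kind of metadata an experiment record might carry. It is an illustrative assumption only: the field names and example values are hypothetical, not a proposed standard or a format endorsed by the special issue.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative sketch only: the fields below are assumptions about the kind of
# metadata a replicable robotics experiment report might include, not a standard.
@dataclass
class ExperimentRecord:
    title: str                # short experiment identifier
    robot_platform: str       # hardware used (model, sensors, actuators)
    software_versions: dict   # e.g. {"ros": "indigo", "opencv": "3.1"}
    environment: str          # description of the test environment
    protocol: str             # step-by-step procedure followed
    metrics: List[str]        # quantities measured (e.g. success rate, ATE)
    datasets: List[str] = field(default_factory=list)      # data / logs released
    random_seeds: List[int] = field(default_factory=list)  # seeds, if applicable

# Hypothetical example of a filled-in record.
record = ExperimentRecord(
    title="example-grasping-trial",
    robot_platform="hypothetical 7-DoF arm with RGB-D camera",
    software_versions={"ros": "indigo", "moveit": "0.7"},
    environment="cluttered tabletop, fixed lighting",
    protocol="20 grasp attempts per object, objects placed at random poses",
    metrics=["grasp success rate", "planning time"],
)
print(record)
```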

The importance and timeliness of this topic are highlighted in the editorial and in the Turning Point column interview of the September 2015 issue of the IEEE Robotics and Automation Magazine (RAM).

That special issue (http://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=100) is composed of replicable experiments, demonstrating the improvements in the state of the art and identifying areas where further work is still needed.  The workshop proposers also organized the 2015 IEEE RAS Summer School dedicated to these topics, held in September 2015 in Benicassim, Castellon, Spain (http://ieee-raspebras2015.org).  The attendees could verify the improved understanding of the problem and the growing maturity of the approaches, at least for specific topics, as is also shown by the RAM Special Issue cited above.

This workshop aims to gather researchers active in academia and industry to share the ideas developed so far and to discuss the challenges still ahead. We will discuss how to design, plan and perform reproducible and measurable research in robotics and, above all, issues regarding generalization and replication of results. Moreover, we will consider how the need for reproducibility affects publication formats, problem definitions and evaluation tools.  Robotics is a wide and diverse science and engineering field, and we will try to cover the different approaches to reproducibility required by its different aspects.  Finally, epistemological issues in robotics research and its evaluation will be presented and discussed, related to performance measurement, methods for the objective comparison of different algorithms and systems (including shared concepts for task and capability representation), and the replication of published results.
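
As an illustration of what an objective, reproducible comparison of two algorithms can look like in practice, the minimal sketch below pairs per-trial scores and reports the mean difference with a seeded bootstrap confidence interval. The data, function name and evaluation protocol are assumptions made for illustration, not a procedure prescribed by the workshop.

```python
import random
import statistics

# Minimal sketch (assumed, not a workshop-endorsed protocol): compare two
# algorithms on the same paired trials and report the mean difference with a
# bootstrap confidence interval; the fixed seed makes the resampling reproducible.
def bootstrap_mean_diff(scores_a, scores_b, n_resamples=10000, seed=0):
    assert len(scores_a) == len(scores_b), "paired trials required"
    rng = random.Random(seed)
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    resampled = []
    for _ in range(n_resamples):
        sample = [rng.choice(diffs) for _ in diffs]
        resampled.append(statistics.mean(sample))
    resampled.sort()
    lo = resampled[int(0.025 * n_resamples)]
    hi = resampled[int(0.975 * n_resamples)]
    return statistics.mean(diffs), (lo, hi)

# Hypothetical per-trial success indicators for two algorithms on the same tasks.
alg_a = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]
alg_b = [1, 0, 0, 1, 0, 0, 1, 1, 0, 0]
mean_diff, ci = bootstrap_mean_diff(alg_a, alg_b)
print(f"mean difference: {mean_diff:.2f}, 95% CI: ({ci[0]:.2f}, {ci[1]:.2f})")
```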

Agenda

9:00 Introduction: 

F. Bonsignorio

 

Session 1:  Performance Measurement and Standards


9:30 G. Virk, InnoTecUK, UK

Topic: Benchmarking standardization vs. Reproducible Robotics

 

10:00 Coffee Break

 

Session 2:  Objective Comparison 1


10:30 J. Dias, Khalifa University, UAE

Topic: Benchmarking and Assessment in Robotics Acting in Dynamic Environments - Facts and Figures from the MBZIRC International Robotic Challenge 2017

 

11:00 D. Scaramuzza  and Zichao Zhang, University of Zurich, Zurich, Switzerland

Topic: How to run reproducible visual SLAM experiments

  

11:30 S. Carpin, University of California Merced, California, USA

Topic: How to run reproducible visual grasping experiments

 

12:00 W. Burgard and C. Dornhege, University of Freiburg, Freiburg, Germany

Topic:  Shakey 2016 - How Much Does it Take to Redo Shakey the Robot?

 

12:30 Lunch Break

 

Session 3:  Replication of Published Results


13:30 Francesco Amigoni, Politecnico di Milano, Milano, Italy

Topic: Experiments, tests, and benchmarks in robotics: Is a demarcation possible?

 

14:00 F. Bonsignorio, The BioRobotics Institute, Scuola Superiore S. Anna and Heron Robots, Italy

Topic: Reproducible Research in Robotics: state of the art and road ahead

 

14:30 Contributed:

F. Lier, P. Lücking, S. Wachsmuth, CITEC, Bielefeld University, Germany

J. de Leeuw, Cognitive Science Department, Vassar College, USA

S. Sabanovic, School of Informatics and Computing, Indiana University, USA

R. Goldstone, Dep. of Psychological and Brain Sciences, Indiana University, USA

Can we Reproduce it? Toward the Implementation of good Experimental Methodology in Interdisciplinary Robotics Research


14:45 Contributed:

C. Kertesz, M. Turunen, University of Tampere, Tampere, Finland

Reproducible Testing for Behavior-Based Robotics

 

15:00 Coffee Break

 

Session 4:  Task and Capability Epistemology

 

15:30 Signe Redfield, Naval Research Laboratory, USA

Topic: Shared representations for Reproducible Robotics

 

Session 5:  Objective Comparison 2


16:00 Contributed:

T. Krajník, M. Hanheide, T. Duckett, University of Lincoln, UK

K. Kusumam, University of Nottingham, UK

Towards Automated Benchmarking of Robotic Experiments

 

16:15 Contributed:

J. Mattila, J. Koivumaki, Laboratory of Automation and Hydraulics, Tampere University of Technology, Tampere, Finland

W. Zhu, Canadian Space Agency, Longueuil (St-Hubert), QC, Canada

Objective Performance Evaluation of Robotic Control Using Numerical Indicators

 

16:30 A. Kumar Pandey, Softbank Robotics, Paris, France

Topic: Benchmarking and Reproducible Robotics in social robotics

 

17:00 Conclusion and Discussion

 

Intended Audience

The intended audience for this workshop includes researchers and practitioners from academia, the military, and industry, as well as people involved in research publishing inside and outside RAS. Reproducibility of research results and performance evaluation are key for research progress, result exploitation, and publishing across all disciplines.

Format

Each session will include 15-20 minute talks interleaved with discussions.  The discussions will be based both on prepared questions for the speakers and the audience, distributed weeks before the event, and on questions solicited from the audience during the event and from the web audience before it. We will encourage remote participation and interaction through streaming, hangouts and social media.

The talks provide context for the discussion periods and ensure that the audience, both early-career and experienced, has enough information to participate and interact, becoming engaged in the workshop and ideally continuing the discussions through the breaks.

Topics

The proposed workshop is meant as a community town hall on the state of the art and the road ahead.  The authors of the best contributions will be invited to submit to a refereed edited book or to a special issue of a high-impact robotics journal.

Topics of interest

• Replication of experiments in robotics

• Metrics of dexterity, adaptivity, flexibility, robustness

• Metrics for visual servoing effectiveness and efficiency

• Metrics for shared control effectiveness and efficiency

• Benchmarking autonomy and robustness to changes in the environment/task

• Shared concept development for comparison across tasks and capabilities

• Scalable autonomy measurements

• Reporting experiments in robotics

• Epistemological issues

• Examples of good practice

• Evaluation of experimental robotics work

• Proposals for the promotion of good experimental work

Support 

This workshop is supported by the IEEE RAS TC-Pebras. It will also serve as the TC-Pebras technical committee meeting.

These are core issues for RAS and the IEEE, and certainly for TC-Pebras.

 

Additional information