RoboTrio
Collecting, modeling and evaluating social interactions between a robot and human partners in a digital interview and form-filling task.


Content


These resources -- the RoboTrio corpus -- were recorded in French at the GIPSA laboratory during the summer of 2018. They belong to the RoboTrio project (funded by a CNRS PEPS grant), which involves GIPSA-Lab (Grenoble), LPL (Aix-en-Provence) and INT (Marseille).

 

More details can be found here: http://www.gipsa-lab.grenoble-inp.fr/~frederic.elisei/RoboTrio/

 

The corpus is built around a collaborative game played simultaneously by two humans, who sit in front of a social robot acting as game animator and referee. The robot is teleoperated by a human pilot: the pilot's gaze, eye vergence, head orientation, lip and jaw articulation, and speech are captured in real time and drive the robot. In this immersive teleoperation setup, the pilot sees through the robot's stereo cameras and hears through the robot's ears, leading to a high level of embodiment. The pilot thereby demonstrates that the robot's sensors and actuators are a viable basis for conducting a natural interaction with humans and successfully performing the intended task (social interaction with gaze and speech turn-taking in a gaming scenario). Data streams and events tied to both perception and action are logged; they were primarily intended for building autonomous behaviour models for a social robot (the Nina robot, a modified iCub).
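
The teleoperation software itself is not part of the corpus; purely as an illustration of the capture-and-drive loop described above, here is a hypothetical Python sketch (every API name in it is invented for this example):

    import time

    def teleoperation_step(pilot, robot, log):
        # Capture the pilot's state in real time (all APIs hypothetical).
        state = {
            "gaze": pilot.read_gaze(),           # eye direction and vergence
            "head": pilot.read_head_pose(),      # head orientation
            "mouth": pilot.read_lip_jaw(),       # lip and jaw articulation
            "speech": pilot.read_audio_frame(),  # speech
        }
        # Drive the robot's actuators with the captured state.
        robot.set_eyes(state["gaze"])
        robot.set_neck(state["head"])
        robot.set_mouth(state["mouth"])
        robot.play_speech(state["speech"])
        # Close the immersion loop: robot perception back to the pilot.
        pilot.show_stereo(robot.camera_frames())
        pilot.play_binaural(robot.ear_audio())
        # Log timestamped perception and action events, as in the corpus.
        log.append({"t": time.time(), **state})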

 

23 experiments were recorded (around 20 minutes each). The pilot is always the same person, while the two human players change from one experiment to the next. The two players of a given experiment are either both male or both female.

 

Details

 

  1. Example_Interaction-Photo.jpg:
    shows the setup, with the two humans facing the robot. One can also see the two cameras in the robot's mobile eyes, and its human-like ears (rich HRTF to support audio localization). On the table in front of the robot, a support (black-and-white square marker) anchors an augmented-reality display that the pilot uses to conduct the game, announce the themes, and validate the players' answers. Two fixed cameras, each directed toward one human player, are also visible. They are not used by the robot; they serve the later analysis of the corpus (head tracking, human gaze, prephonatory gestures, face analysis).
  2. Example_Interaction-From_expe_13.mp4:
    this video excerpt illustrates the nature of the in-game interaction. The montage shows what the two fixed cameras recorded (top row), what the pilot sees as he directs his head and gaze (bottom row: stereo view, virtual tablet), and what the humans see (center: the robot's face with gaze and blinks, neck movements, and speech articulation).
  3. Audio_Humans:
    23 wav files with stereo recordings of the human players; one file per experiment (see the loading sketch after this list).
  4. Transcriptions_Humans:
    46 Praat files, one per human player; two files per experiment.
  5. Audio_Pilot:
    23 mono wav files, recorded with an ambient microphone near the robot; they capture mainly the pilot, but also the robot's motors, the robot's power supplies, and the human players.
  6. Transcriptions_Pilot:
    2 ELAN files and 2 video files (mkv format), for experiments 19 and 23. These illustrate the full transcription of the corpus, including the pilot's speech and his actions: keywords, references to the players, direction, themes of the game, and the players' answers.
    Please contact us if you need more of these data.
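
The audio and transcription files can be read with standard tools. Below is a minimal, non-authoritative Python sketch: it loads one experiment's stereo human recording, prints the interval annotations of a Praat TextGrid (assuming the Praat transcription files are TextGrids, read here with the third-party textgrid package), and lists the time-aligned annotations of an ELAN .eaf file (plain XML). All file names are placeholders, and the channel-to-player mapping is an unverified assumption.

    import xml.etree.ElementTree as ET

    from scipy.io import wavfile  # pip install scipy
    import textgrid               # pip install textgrid

    # Stereo recording of the two human players (placeholder file name).
    rate, samples = wavfile.read("Audio_Humans/expe_13.wav")
    left, right = samples[:, 0], samples[:, 1]  # assumption: one channel per player
    print(f"{len(samples) / rate:.1f} s of audio at {rate} Hz")

    # Interval annotations of one player's Praat TextGrid (placeholder file name).
    tg = textgrid.TextGrid.fromFile("Transcriptions_Humans/expe_13_player1.TextGrid")
    for tier in tg:  # assumes interval tiers, the usual case for speech transcriptions
        for interval in tier:
            if interval.mark.strip():
                print(f"{tier.name}: {interval.minTime:.2f}-{interval.maxTime:.2f} {interval.mark}")

    # Time-aligned annotations of an ELAN .eaf file (placeholder file name).
    root = ET.parse("Transcriptions_Pilot/expe_19.eaf").getroot()
    slots = {ts.get("TIME_SLOT_ID"): int(ts.get("TIME_VALUE"))
             for ts in root.iter("TIME_SLOT") if ts.get("TIME_VALUE")}
    for tier in root.iter("TIER"):
        for ann in tier.iter("ALIGNABLE_ANNOTATION"):
            start = slots.get(ann.get("TIME_SLOT_REF1"))
            end = slots.get(ann.get("TIME_SLOT_REF2"))
            if start is None or end is None:
                continue  # skip annotations whose time slots carry no explicit value
            print(f"{tier.get('TIER_ID')}: {start / 1000:.2f}-{end / 1000:.2f} s "
                  f"{ann.findtext('ANNOTATION_VALUE')}")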

 

Contact address

 

frederic.elisei@gipsa-lab.grenoble-inp.fr