NINA
Humanoid Robot

Videos

Beaming of the iCub NINA with a chessboard, using a Sony HMD with an Arrington eyetracker: head (21/7/2014)
Beaming of the iCub NINA using an HTC Vive HMD with an embedded SMI eyetracker: head, eyes, jaw and lips (20/6/2017)
NINA replicating a real interview conducted by Alessandra Juphard with an elderly subject. The videos are filmed from the interviewee's perspective. NINA holds a dummy tablet to give the impression that it is triggering the displays (the words to be learnt are shown on another tablet placed on the table in front of the subject) and taking care of the scoring.
  1. First test (19/5/2016): behaviors are triggered by events: expressive text-to-speech synthesis, gazing, pointing & clicking, etc.
  2. Latest performance (10/6/2016): adding the iris, blinks and new gaze events, etc.
First demonstration of autonomous interviews (using Google speech recognition): latest performance (10/5/2017), Autonomous Nina (a minimal recognition-to-behavior sketch follows this list).
Replaying an interview session (2018/07) from the PEPS RoboTrio project.
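
To illustrate the event-triggered pipeline mentioned above, here is a minimal sketch of a recognition-to-behavior loop. It is only an assumption-laden illustration: it uses the Python SpeechRecognition package (Google Web Speech backend) as a stand-in for the speech recognition used in the demo, and trigger_behavior() together with the behavior names ("gaze", "say", "score") are hypothetical placeholders for the robot-side commands (expressive TTS, gazing, pointing), not NINA's actual API.

    # Minimal sketch of an event-triggered interview step: listen, recognize, dispatch behaviors.
    # Assumes the Python SpeechRecognition package (pip install SpeechRecognition pyaudio).
    import speech_recognition as sr

    def trigger_behavior(name: str, **params) -> None:
        """Hypothetical placeholder for sending a command to the robot middleware."""
        print(f"[robot] {name} {params}")

    def interview_step(recognizer: sr.Recognizer, mic: sr.Microphone) -> None:
        with mic as source:
            recognizer.adjust_for_ambient_noise(source)   # calibrate to room noise
            audio = recognizer.listen(source, timeout=5)   # wait for the subject's answer
        try:
            # Assumed French-language interview; the language code is an assumption.
            answer = recognizer.recognize_google(audio, language="fr-FR")
        except sr.UnknownValueError:
            trigger_behavior("say", text="Could you repeat, please?")  # ask to repeat
            return
        # Event-triggered reactions: gaze at the subject, acknowledge, score the answer.
        trigger_behavior("gaze", target="subject")
        trigger_behavior("say", text=f"You said: {answer}")
        trigger_behavior("score", value=answer)

    if __name__ == "__main__":
        interview_step(sr.Recognizer(), sr.Microphone())

In the real system the recognized answer would feed the scoring shown on the subject's tablet; here it is simply echoed back to keep the sketch self-contained.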


First video: driven by a human pilot, the teleoperated robot interacts with two human players during a collaborative game.


In the bottom part of the second video, one can see what the robot's stereo cameras (mobile eyes) capture, including the augmented reality display that shows the score and collects the answers.

The static cameras directed at the users (top half of the second video) are not used by the robot or the pilot. They collect HD video to help create the training data for a future autonomous behaviour model: detecting whether users look at the robot or at their partner, and visually labeling the corpus.