CNRS Senior Researcher (Directeur de Recherche)
FLUENCE (e-FRAN 2016-2020)
  • Partners: GIPSA-Lab, LPNC (S. Valdois, coordinator), LIDILEM (M. Masperi), LANSAD (N. Chalon)
  • Objectives: evaluate computerized reading assistance for improving the fluency of young readers. GIPSA-Lab tests the RAKE (Reading Assistance by Karaoke) system, which highlights parts of the text (syllables, words, breath groups) as they are read by an expert reader. The ELARGIR method consists of repeated Reading-while-Listening (RwL) tasks using the RAKE technology, paced by peer evaluation.
  • Collaborations: Rectorat Grenoble (M. Zanoni, C. Lequette, J. Eudes), LSE (M. Bianco)
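As a rough illustration of the karaoke-style highlighting idea (not the actual RAKE implementation), given a hypothetical word-level alignment of the expert reading, the unit to highlight at a given playback time can be found by a simple time lookup:

```python
from bisect import bisect_right

# Hypothetical alignment of an expert reading: (onset in seconds, text unit).
# Units could equally be syllables or breath groups, as in RAKE.
alignment = [
    (0.0, "Le"), (0.3, "petit"), (0.8, "chat"), (1.4, "dort"),
]

def unit_at(alignment, t):
    """Return the text unit being read at playback time t (seconds)."""
    onsets = [onset for onset, _ in alignment]
    i = bisect_right(onsets, t) - 1       # last unit whose onset <= t
    return alignment[i][1] if i >= 0 else None
```

For example, `unit_at(alignment, 1.0)` returns `"chat"`; the display layer would then highlight that unit in the rendered text.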
SOMBRERO (ANR 2015-2019)
  • Partners: GIPSA-Lab (G. Bailly, coordinator), Lab-STICC (P. De Loor), LIG (H. Fiorino), LIP (M. Dubois), Aldebaran Robotics (R. Gelin)
  • Objectives: train humanoid robots (Nina and Roméo), i.e. provide them with socio-communicative
    abilities through immersive teleoperation (“beaming”) by human pilots. The SOMBRERO project has three main objectives:
    • Realize beaming experiments involving adults interacting with a humanoid robot in cooperative, situated, goal-directed tasks. The targeted task is a series of tests used in neuropsychological assessment. We focus on the socio-communicative behavior that should accompany the monitoring of the task: checking correct comprehension of the instructions, seamless execution of the task by the interlocutor, positive feedback and encouragement given throughout the performance, and correction of errors, misunderstandings or imprecisions,
    • Develop and implement autonomous socio-communicative behaviors in the robot's cognitive architecture via statistical modeling of the multimodal behaviors recorded during the prior robot-mediated interactions,
    • Assess these behaviors and the achieved social embodiment with acceptance measures and analysis of user attitudes.
  • Collaborations: Grenoble & Brest hospitals (neurology departments)
ORTHOLEARN (ANR 2012-2014)
  • Partners: GIPSA-Lab, LPNC (S. Valdois, coordinator), LpnCog (S. Pacton), IUHC (M. Zorman) & Play Bac Editions (E. Serreno-Despres)
  • Objectives: study the role of the visual attention span in orthographic processing and orthographic learning. GIPSA-Lab will implement and study the impact of synchronous reading/listening of audiobooks on the acquisition of orthography.
  • Resources: audiobooks for Windows (each archive includes one directory with all resources for synchronous reading):
ROBOTEX/RHIN (ANR 2009-2019)
  • Partners: GIPSA-Lab, LAAS (P. Laumon, coordinator), ISIR, LIRMM, ETIS (P. Gaussier), PPRIME (P. Lacouture), IRCCyN (C. Chevalier), INRIA Rennes (F. Chaumette)
  • Objectives: develop a talking iCub robot
AMORCES (ANR 2008-2011)
  • Partners: GIPSA-Lab, LAAS (R. Alami, coordinator), SBRI (P.F. Dominey), LAMIH (R. Mandiau) & GREYC (A.-I. Mouaddib)
  • Objectives: this project studies decisional and operational human-robot interaction, and more specifically, the impact of verbal and non-verbal communication on the execution of collaborative tasks between a robot and a human partner.
CASSIS (PHC Sakura 2009-2010)
  • Partners: GIPSA-Lab (coordinator), ENST (G. Chollet), ESCPI (B. Denby), NAIST (T. Toda), Wakayama University (H. Kawahara)
  • Objectives: this project studies silent speech interfaces, i.e. computer-assisted communication with silent speech production, using technologies such as ultrasound imaging, electro-articulography, electromyography and stethoscopic microphones.
ARTIS (ANR 2009-2012)
  • Partners: LORIA (Y. Laprie, coordinator), ENST (S. Maeda) & IRIT (R. André-Obrecht)
  • Objectives: Acoustic-to-articulatory inversion for speech pronunciation training. GIPSA-Lab promotes a data-driven approach with direct (GMM-based) versus language-specific (HMM-based) mappings.
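A minimal sketch of the direct, GMM-based mapping principle (with toy, hand-set parameters, not the project's trained models): a joint Gaussian mixture over stacked acoustic/articulatory features is assumed already trained, and the articulatory estimate for a new acoustic observation is the posterior-weighted sum of the per-component conditional means:

```python
import math

# Toy 2-component joint GMM, assumed pre-trained, with 1-D acoustic and
# articulatory features for clarity. Each tuple holds:
# (weight, acoustic mean, acoustic variance, articulatory mean, cross-covariance)
components = [
    (0.5, 0.0, 0.25, 1.0, 0.1),
    (0.5, 2.0, 0.25, -1.0, -0.1),
]

def estimate_articulation(a):
    """MMSE estimate of the articulatory value given the acoustic value a."""
    # Likelihood of a under each component's acoustic marginal.
    liks = [w * math.exp(-(a - ma) ** 2 / (2 * va)) / math.sqrt(2 * math.pi * va)
            for (w, ma, va, mx, cxa) in components]
    total = sum(liks)
    est = 0.0
    for lik, (w, ma, va, mx, cxa) in zip(liks, components):
        post = lik / total                  # posterior responsibility P(m | a)
        cond = mx + cxa / va * (a - ma)     # conditional mean E[x | a, m]
        est += post * cond
    return est
```

With these toy values, an acoustic observation near 0.0 yields an articulatory estimate near 1.0, and one near 2.0 yields an estimate near -1.0; in practice the mapping operates on multidimensional feature vectors with full covariance blocks.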
ARTUS (RNRT 2002-2005)
  • Partners: LIS (J.-M. Chassery & P. Bas, coordinators), EUDIASYC (F. Davoine), TSI/ENST (N. Moreau), ARTE Labs (J.P. Léoni), Attitude Studio (R. Brun) & Nextamp (P. Nguyen)
  • Objectives: a virtual speech cuer as an alternative to teletext for deaf televiewers. Gestures are watermarked into the audiovisual stream.
MOTHER (ICP 2000) & VESALE (BQR INP 2003-2004)
  • Partners: TU Berlin (PHC PROCOPE 2006-2007 "Static and dynamic replication of talking faces" with S. Fagel) & FT R&D (contract 2005-2007 "Animation de visages parlants" [talking-face animation] with G. Breton)
  • Objectives: Virtual clones of speakers
  • Resources:
    • Original speakers: SF (German), HL (French), CD (Australian), OC (English) & PDB (English, model)
    • Rescalings: SF driven by HL & CD

Grenoble Images Parole Signal Automatique laboratoire

UMR 5216 CNRS - Grenoble INP - Université Joseph Fourier - Université Stendhal