BAILLY
Gérard
CNRS Research Director (Directeur de Recherche)
Projects
ORTHOLEARN (ANR 2012-2014)
  • Partners: GIPSA-Lab, LPNC (S. Valdois, coordinator), LpnCog (S. Pacton), IUHC (M. Zorman) & Play Bac Editions (E. Serreno-Despres)
  • Objectives: investigate the role of the visual attention span in orthographic processing and orthographic learning. GIPSA-Lab will implement and study the impact of synchronous reading/listening of audiobooks on orthographic acquisition.
  • Resources: audiobooks for Windows (each archive includes one directory with all resources for synchronous reading):
ROBOTEX/RHIN (ANR 2009-2019)
  • Partners: GIPSA-Lab, LAAS (P. Laumon, coordinator), ISIR, LIRMM, ETIS (P. Gaussier), PPRIME (P. Lacouture), IRCCyN (C. Chevalier), INRIA Rennes (F. Chaumette)
  • Objectives: Develop a talking iCub robot
AMORCES (ANR 2008-2011)
  • Partners: GIPSA-Lab, LAAS (R. Alami, coordinator), SBRI (P.F. Dominey), LAMIH (R. Mandiau) & GREYC (A.-I. Mouaddib)
  • Objectives: This project studies decisional and operational human-robot interaction and, more specifically, the impact of verbal and non-verbal communication on the execution of collaborative tasks between a robot and a human partner.
CASSIS (PHC Sakura 2009-2010)
  • Partners: GIPSA-Lab (coordinator), ENST (G. Chollet), ESPCI (B. Denby), NAIST (T. Toda), Wakayama University (H. Kawahara)
  • Objectives: This project studies silent speech interfaces, i.e. computer-assisted communication with silent speech production, using various technologies such as ultrasound imaging, electromagnetic articulography, electromyography and stethoscopic microphones.
ARTIS (ANR 2009-2012)
  • Partners: LORIA (Y. Laprie, coordinator), ENST (S. Maeda) & IRIT (R. André-Obrecht)
  • Objectives: Acoustic-to-articulatory inversion for speech pronunciation training. GIPSA-Lab promotes a data-driven approach with direct (GMM-based) versus language-specific (HMM-based) mappings.
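  • Illustration (not a project deliverable): a minimal sketch of a direct GMM-based acoustic-to-articulatory mapping, which fits a joint Gaussian mixture on paired acoustic/articulatory frames and inverts new acoustic frames by minimum-mean-square-error regression. The feature dimensions, component count and the use of scikit-learn are assumptions made for the example only.

    import numpy as np
    from scipy.stats import multivariate_normal
    from sklearn.mixture import GaussianMixture

    def fit_joint_gmm(acoustic, articulatory, n_components=8, seed=0):
        """Fit a joint GMM on stacked [acoustic | articulatory] feature frames."""
        joint = np.hstack([acoustic, articulatory])
        return GaussianMixture(n_components=n_components,
                               covariance_type="full",
                               random_state=seed).fit(joint)

    def invert(gmm, acoustic, dim_x):
        """MMSE mapping: E[articulatory | acoustic] under the joint GMM."""
        preds = []
        for x in acoustic:
            cond_means, resp = [], []
            for w, mu, cov in zip(gmm.weights_, gmm.means_, gmm.covariances_):
                mu_x, mu_y = mu[:dim_x], mu[dim_x:]
                sxx, syx = cov[:dim_x, :dim_x], cov[dim_x:, :dim_x]
                # responsibility of this component for the observed acoustic frame
                resp.append(w * multivariate_normal(mu_x, sxx).pdf(x))
                # conditional mean of the articulatory part given this frame
                cond_means.append(mu_y + syx @ np.linalg.solve(sxx, x - mu_x))
            resp = np.asarray(resp) / np.sum(resp)
            preds.append(np.sum(resp[:, None] * np.asarray(cond_means), axis=0))
        return np.asarray(preds)

    # Toy usage with random data standing in for MFCC / EMA frames
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 12))                              # acoustic frames
    Y = X[:, :4] * 0.5 + rng.normal(scale=0.1, size=(500, 4))   # articulatory frames
    Y_hat = invert(fit_joint_gmm(X, Y), X[:10], dim_x=12)
    print(Y_hat.shape)                                          # (10, 4)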
ARTUS (RNRT 2002-2005)
  • Partners: LIS (J.-M. Chassery & P. Bas, coordinators), Heudiasyc (F. Davoine), TSI/ENST (N. Moreau), ARTE Labs (J.P. Léoni), Attitude Studio (R. Brun) & Nextamp (P. Nguyen)
  • Objectives: A virtual speech cuer as an alternative to teletext for deaf televiewers. The cuer's gestures are watermarked into the audiovisual stream.
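  • Illustration (not a project deliverable): the sketch below shows the general idea of hiding a low-rate side channel in the broadcast signal, here with naive least-significant-bit embedding in 16-bit PCM audio. The actual ARTUS watermarking had to be far more robust (to coding, editing and transmission); the function names and payload format are hypothetical.

    import numpy as np

    def embed_payload(samples: np.ndarray, payload: bytes) -> np.ndarray:
        """Hide payload bits in the least-significant bits of int16 PCM samples."""
        bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
        if bits.size > samples.size:
            raise ValueError("payload larger than audio buffer")
        marked = samples.copy()
        marked[:bits.size] = (marked[:bits.size] & ~1) | bits
        return marked

    def extract_payload(samples: np.ndarray, n_bytes: int) -> bytes:
        """Read back n_bytes of payload from the sample LSBs."""
        return np.packbits((samples[:n_bytes * 8] & 1).astype(np.uint8)).tobytes()

    # Round-trip check: a serialized cue frame hidden in one second of 48 kHz audio
    rng = np.random.default_rng(0)
    audio = rng.integers(-2**15, 2**15, 48000).astype(np.int16)
    frame = b"hand_shape=3;hand_pos=chin"      # hypothetical cue parameters
    assert extract_payload(embed_payload(audio, frame), len(frame)) == frame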
MOTHER (ICP 2000) & VESALE (BQR INP 2003-2004)
  • Partners: TU Berlin (PHC PROCOPE 2006-2007 "static and dynamic replication of talking faces" with S. Fagel) & FT R&D (2005-2007 contract "Animation de visages parlants", i.e. talking-face animation, with G. Breton)
  • Objectives: Virtual clones of speakers
  • Resources:
    • Original speakers: SF (German), HL (French), CD (Australian), OC (English) & PDB (English, model)
    • Rescalings: SF driven by HL & CD

Grenoble Images Parole Signal Automatique laboratoire

UMR 5216 CNRS - Grenoble INP - Université Joseph Fourier - Université Stendhal