The CRISSP research team carries out fundamental and applied research in the field of automatic speech processing and social robotics.

In particular, we aim to:

  • capture, analyze, and model the various verbal and co-verbal signals involved in communicative interaction;
  • enhance the socio-communicative capabilities of humanoid robots;
  • develop voice technologies that exploit the multimodal nature of speech (sound, vision, gestures), in particular to help people with disabilities (voice substitution, speech rehabilitation systems, communication aids for the hearing-impaired, reading aids);
  • better understand, through modeling and simulation, some of the processes involved in speech and language acquisition, perception, and control.

The main research themes of the CRISSP team are:

  • Text-based speech synthesis, with a focus on expressivity, reactivity (incremental TTS), prosody modeling, audiovisual synthesis (avatars), and gesture control;
  • Human-robot interaction: analysis, modeling, and generation of verbal and co-verbal signals (e.g. gaze, head movements);
  • Acoustic-articulatory modeling (inversion, synthesis, silent speech interfaces, biofeedback);
  • Automatic processing of gesture-based language, with a focus on Cued Speech.

The team is involved in three chairs of MIAI, the Grenoble-based 3IA artificial intelligence institute.

Keywords: speech, multimodality, humanoid robotics