
COGNITIVE ROBOTICS, INTERACTIVE SYSTEMS, & SPEECH PROCESSING
Team managers: Gérard BAILLY, Thomas HUEBER

 

The CRISSP team conducts theoretical, experimental, and technological research in the field of speech communication. More precisely, we aim at:

    • Modeling verbal and co-verbal speech signals in face-to-face interaction involving humans, virtual avatars (talking heads), and humanoid robots.
    • Understanding the human speech production process by modeling the relationships between speech articulation and speech acoustics.
    • Studying the communication of people with hearing impairment.
    • Designing speech technologies for people with disabilities, language learning, and multimedia applications.

The three research axes of the CRISSP team are:

    • Cognitive robotics: improving the socio-communicative skills of humanoid robots.
    • Interactive systems: designing real-time, reactive communicative systems that exploit the different modalities of speech (audio, visual, gesture, etc.).
    • Speech processing: articulatory synthesis, acoustic-articulatory inversion, speech synthesis, and voice conversion (a toy sketch of acoustic-articulatory inversion follows this list).
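
As a toy illustration of the speech-processing axis, the sketch below frames acoustic-to-articulatory inversion as a supervised regression problem. The data, feature dimensions, and model (a small scikit-learn neural network) are hypothetical stand-ins for illustration only, not the team's actual methods or datasets.

    # Toy sketch (assumptions only): acoustic-to-articulatory inversion as regression.
    # Real systems map acoustic features (e.g., MFCCs) to articulator positions
    # (e.g., EMA sensor coordinates); here both are simulated at random.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Hypothetical dimensions: 13 acoustic features and 12 articulatory coordinates per frame
    n_frames, n_acoustic, n_articulatory = 5000, 13, 12
    X = rng.normal(size=(n_frames, n_acoustic))                # stand-in acoustic features
    W = rng.normal(size=(n_acoustic, n_articulatory))
    y = np.tanh(X @ W) + 0.05 * rng.normal(size=(n_frames, n_articulatory))  # stand-in trajectories

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

    # Small neural network mapping each acoustic frame to an articulatory frame
    model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=1000, random_state=0)
    model.fit(X_train, y_train)

    rmse = np.sqrt(np.mean((model.predict(X_test) - y_test) ** 2))
    print(f"frame-wise RMSE on held-out frames: {rmse:.3f}")

In practice such a mapping would be trained on parallel acoustic-articulatory recordings (e.g., from electromagnetic articulography), often with sequence models rather than a frame-wise regressor.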

Domains of expertise of the CRISSP team

    • Audio signal processing (analysis, coding, denoising, source separation); a minimal denoising sketch follows this list.
    • Speech processing (analysis, transformation, conversion/morphing, text-to-speech synthesis, articulatory synthesis/inversion)
    • Statistical machine learning
    • Acquisition of multimodal articulatory data (using electromagnetic articulography, ultrasound imaging, MRI, EMG, etc.)
    • Acquisition of social signals (eye gaze, body posture, head movements, etc.) during face-to-face interaction
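
As a companion example, the following minimal sketch illustrates one classic approach to the denoising item above (spectral subtraction) on a synthetic signal. The signal, parameters, and SciPy-based implementation are illustrative assumptions, not the team's actual processing pipeline.

    # Toy sketch (assumptions only): single-channel denoising by spectral subtraction.
    import numpy as np
    from scipy.signal import stft, istft

    rng = np.random.default_rng(0)
    sr = 16000
    t = np.arange(sr) / sr
    clean = 0.5 * np.sin(2 * np.pi * 220 * t)            # stand-in "speech" signal
    clean[:3200] = 0.0                                   # leading silence, used to estimate noise
    noisy = clean + 0.05 * rng.normal(size=clean.shape)  # additive white noise

    # Short-time Fourier transform of the noisy signal
    _, _, Z = stft(noisy, fs=sr, nperseg=512)

    # Estimate the noise magnitude spectrum from the first (speech-free) frames,
    # subtract it from every frame's magnitude, and keep the noisy phase.
    noise_mag = np.abs(Z[:, :5]).mean(axis=1, keepdims=True)
    mag = np.maximum(np.abs(Z) - noise_mag, 0.0)
    _, denoised = istft(mag * np.exp(1j * np.angle(Z)), fs=sr, nperseg=512)

    # Compare signal-to-noise ratio before and after (higher is better)
    def snr(ref, sig):
        n = min(len(ref), len(sig))
        return 10 * np.log10(np.sum(ref[:n] ** 2) / np.sum((ref[:n] - sig[:n]) ** 2))

    print(f"SNR noisy: {snr(clean, noisy):.1f} dB, denoised: {snr(clean, denoised):.1f} dB")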

 

Team members

(updated 18/12/2015)

 

Contact: Gérard Bailly and Thomas Hueber (email: firstname.lastname@gipsa-lab.fr)



News
Publication: Biosignal-Based Spoken Communication

Special issue edited by Tanja Schultz, Thomas Hueber, Dean J. Krusienski, and Jonathan S. Brumberg
Published in: IEEE/ACM Transactions on Audio, Speech, and Language Processing, volume 25, no. 12, December 2017

Read more



Latest publications of the team

Audio-visual synchronization in reading while listening to texts: Effects on visual behavior and verbal learning

Emilie Gerbier, Gérard Bailly, Marie-Line Bosse. Audio-visual synchronization in reading while listening to texts: Effects on visual behavior and verbal learning. Computer Speech and Language, Elsevier, 2018, 47 (January), pp. 79-92. doi:10.1016/j.csl.2017.07.003. hal-01575227.

Which prosodic features contribute to the recognition of dramatic attitudes?

Adela Barbulescu, Rémi Ronfard, Gérard Bailly. Which prosodic features contribute to the recognition of dramatic attitudes? Speech Communication, Elsevier: North-Holland, 2017, 95, pp. 78-86. doi:10.1016/j.specom.2017.07.003. hal-01643330.

Introduction to the Special Issue on Biosignal-Based Spoken Communication

Thomas Hueber, T. Schultz, D. J. Krusienski, J. S. Brumberg. Introduction to the Special Issue on Biosignal-Based Spoken Communication. IEEE/ACM Transactions on Audio, Speech, and Language Processing, Institute of Electrical and Electronics Engineers, 2017, 25 (12), pp. 2254-2256. doi:10.1109/TASLP.2017.2768838. hal-01652752.


All publications of the team
GIPSA-lab, 11 rue des Mathématiques, Grenoble Campus BP46, F-38402 SAINT MARTIN D'HERES CEDEX - +33 (0)4 76 82 71 31