RHUM
ROBOTS in HUMAN ENVIRONMENTS

Join the robotique discussion list

Connect to https://listes.grenoble-inp.fr/sympa/home or send "SUBSCRIBE robotique firstname family_name" to sympa@grenoble-inp.fr

News

Agenda


10/7/2019 - 13:30-16:30 GIPSA-Lab
B314
RHUM workshop. This workshop features presentations of the MIAI chairs dealing with robotics and pitches from M2R & PhD students.
  • MIAI chairs
    • Nicolas Marchand ("AIBot" chaired by C. Prieur) AI and dynamical systems: new paradigms for control and robots
      Abstract: The chair develops new paradigms and methodologies for control systems and robotics based on AI methods. The research program has three challenges: Learning (sensors/actuators and dynamical models; perception and navigation systems), Acting (robust AI-enabled optimal control; cooperation of robots), and Safety and certification (AI-powered autonomous systems & safety; guaranteed performance).
    • Gérard Bailly ("Collaborative intelligent systems" chaired by G. Bailly & J.-L. Crowley) Cognitive robotics
      Abstract: Cognitive robotics is concerned with endowing a robot with intelligent behavior by allowing it to learn and reason about how to behave in response to complex goals in a complex world. Cognitive robotics may be considered the engineering branch of embodied cognitive science and embedded cognition. Within the chair, we will explore robot-embodied & robot-embedded cognition via immersive teleoperation of a humanoid robot. We will in particular explore how human pilots experience robot-mediated interactions and how the robot can learn from human-driven social interactions. A special focus will be given to goal- and partner-adaptive behavioural models.
    • Note that the "Audio-visual machine perception and interaction for companion robots" chair, held by X. Alameda-Pineda & R. Horaud, will not be presented.
  • 2019 interns financed by RHUM
    • Thuc-Long Ha (supervised by D. Pellier & O. Aycard) Integrating perception, decision, and action for mobile industrial robots
      Abstract: In the realistic industrial scenario of a future factory, a number of robots and machines provide manufacturing services to organize and maintain the supply chain of a facility. Production orders are flexible so that a variety of products can be created, the purpose being cost-effective production of low volumes. Such a factory requires more flexible actions, where mobile robots are a natural choice instead of large assembly lines. In this context, the thesis aims to integrate the perception of the robot with decision making in artificial intelligence to achieve time-constrained production in the RoboCup Logistics League (RCLL) competition scenario.
    • Juliette Rengot (supervised by G. Bailly, X. Alameda & F. Elisei) Automatic detection of gazes directed toward a robot
      Abstract: Gaze is an essential tool in human communication. Enabling a robot to detect, in real time, whether one of the humans in its field of vision is looking at it will improve the quality of human-robot interactions. In this presentation, we will propose CNN models to provide this ability to NINA. Their performance will be analysed and compared to the state of the art.
    • Jing Xiao (supervised by T. Fraichard & E. Gomez-Balderas) Estimating visual salience for human-robot motion
      Abstract: Nowadays more and more robotic technologies are applied to assist people in their daily life. When robots and humans share the same living space, the way robots move among people becomes important. Classical navigation strategies focus on preventing robots from colliding; but since humans are social entities, problems arise when robots navigate in a human-centered environment. This project aims to explore, in simulation, whether estimated visual salience can be used to address human-robot motion.
  • PhD students financed by RHUM
    • Omar Samir Mohammed (Supervised by Gérard Bailly & Damien Pellier) Deep learning methods for style extraction and transfer
      Abstract: How can we learn, transfer and extract styles using deep neural networks? This PhD explores these questions in two cases: online handwriting and online sketch-drawing data. We present a framework to perform such a study: multidimensional evaluation metrics, a deep neural network paradigm and detailed experimentation. We first show the potential of this framework to extract verbose styles. Then, we show how to leverage neural networks to transfer style information between different tasks.
    • Matteo Ciocca (Supervised by Thierry Fraichard & Pierre-Brice Wieber) Provably Safe Navigation of Biped Robots Among People
      Abstract: I address the problem of maintaining balance and avoiding collisions for biped robots moving in dynamic and uncertain environments, e.g. moving among humans. I control a biped robot with a control scheme called Model Predictive Control (MPC): an iterative control process that computes a sequence of actions for the robot over a limited time horizon. On the balance front, my contribution is a guarantee that, at each iteration of the MPC process, it is possible to compute a sequence of actions that makes the robot stop in a few footsteps and maintain its balance forever. On the collision front, the state-of-the-art safety level called Passive Safety (the robot is at rest when collisions happen) was previously combined with our MPC framework for biped robots. My contribution is to investigate the effect of iterating the MPC process less often, e.g. to save computational power, on the collision-avoidance performance when the biped robot navigates among people (while still guaranteeing Passive Safety). I am currently working on the last contribution of my PhD thesis: another safety level that aims to mitigate the number of collisions and to reduce injuries to people when collisions happen. (A toy sketch of the receding-horizon MPC loop is given after this agenda entry.)
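      The abstract above summarizes the MPC principle: at every control period a sequence of future actions is recomputed over a limited time horizon and only the first action is applied. The toy Python sketch below illustrates this receding-horizon loop on a 1-D double integrator; the model, horizon length and cost weights are arbitrary choices made for this example and are not taken from the thesis, which additionally handles balance and collision constraints.

        # Minimal receding-horizon (MPC) sketch for a 1-D double integrator.
        # This is a generic illustration of the iterative MPC principle, NOT the
        # walking controller of the thesis: the model, horizon and cost weights
        # below are arbitrary choices made for the example.
        import numpy as np

        dt = 0.1                                  # control period [s]
        A = np.array([[1.0, dt], [0.0, 1.0]])     # state: [position, velocity]
        B = np.array([[0.5 * dt**2], [dt]])       # input: acceleration

        N = 20                                    # prediction horizon (steps)
        Q = np.diag([10.0, 1.0])                  # penalty on state deviation
        R = 0.1                                   # penalty on control effort

        def mpc_step(x0, x_ref):
            """Solve for the action sequence over the horizon, return its first element."""
            n, m = 2, 1
            # Batch prediction matrices: X = Sx @ x0 + Su @ U
            Sx = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
            Su = np.zeros((N * n, N * m))
            for i in range(N):
                for j in range(i + 1):
                    Su[i * n:(i + 1) * n, j * m:(j + 1) * m] = np.linalg.matrix_power(A, i - j) @ B
            # Write the quadratic cost as a linear least-squares problem in U.
            Qs = np.kron(np.eye(N), np.sqrt(Q))   # valid because Q is diagonal
            Rs = np.sqrt(R) * np.eye(N * m)
            H = np.vstack([Qs @ Su, Rs])
            r = np.concatenate([Qs @ (np.tile(x_ref, N) - Sx @ x0), np.zeros(N * m)])
            U, *_ = np.linalg.lstsq(H, r, rcond=None)
            return U[0]                           # apply only the first action

        # Receding-horizon loop: re-plan at every iteration from the measured state.
        x = np.array([2.0, 0.0])                  # start 2 m from the goal, at rest
        x_ref = np.array([0.0, 0.0])              # goal: reach the origin and stop
        for k in range(100):
            u = mpc_step(x, x_ref)
            x = A @ x + B.flatten() * u           # simulate one step of the plant
        print("final state:", x)                  # should be close to [0, 0]

      Because the whole plan is recomputed from the measured state at every iteration, properties such as the balance and collision guarantees discussed in the abstract can be re-checked online at each step.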
31/5/2018 - 9:00-16:30 GIPSA-Lab
Salle Mt Blanc
RHUM workshop (see the announcement). This workshop features talks from two distinguished invited speakers (see below) as well as presentations of RHUM researchers and pitches from PhD students.
  • Invited speakers
  • RHUM Researchers
  • PhD students
    • Jose Grimaldo (INRIA; supervised by T. Fraichard) Human inspired effort distribution during collision avoidance in human-robot motion
    • Matteo Ciocca (INRIA; supervised by P.-B. Wieber & T. Fraichard) Safety Strategies for Biped Walking in Human Environments
    • Nestor Bohorquez (INRIA; supervised by P.-B. Wieber) Design of safe control laws for the locomotion of biped robots
    • Nahuel Villa (INRIA; supervised by P.-B. Wieber) Robust Control for Biped Robots
    • Duc-Canh Nguyen (GIPSA-Lab; supervised by G. Bailly & F. Elisei) Learning interactive behaviors for HRI
    • Rémi Cambuzat (GIPSA-Lab & INRIA; supervised by G. Bailly, O. Simonin & A. Spalanzani) Immersive control of telepresence robots
    • Omar Samir Mohammed (GIPSA-Lab & LIG; supervised by G. Bailly & D. Pellier) Perception to action by deep learning
    • Ying Siu Liang (LIG; supervised by Damien Pellier and Sylvie Pesty) A Robot Programming Framework for Non-Experts
19/5/2016 Gymnase de la piscine universitaire Challenge (PersyCup)
3/4/2016 - 9:00-16:30 GIPSA-Lab
Salle Mt Blanc
RHUM workshop (see the announcement)
  • Gordon Cheng (TU München) Robots doing their best without speech in Human Environments
  • Ludovic Righetti (MPI Tuebingen) Exploiting contact interactions for robust manipulation & locomotion skills
  • Radu Horaud (Perception/INRIA & LJK) Audio-Visual Scene Analysis for HRI
  • Damien Pellier (MAGMA/LIG) Robot Programming by Demonstration in Cobotic Environment
  • Ernesto Gomez-Balderas (AGPIG/GIPSA) Mini-UAV teleoperation in a structured environment
  • Jérôme Maisonnasse (FAB-Lab/MSTIC) Fablab & robotics
30/11/2015 & 1/12/2015 INSA Lyon Colloque J. Cartier "Robotique, Services et Santé"
  • Service robotics and interaction in complex environments
  • Smart homes for health
  • Learning and gesture assistance for medical staff
8/10/2015 - 14:00-16:00 GIPSA-Lab B314 Workshop "Learning interactive behavioral models" organized by G. Bailly after A. Mihoub's defense
  • Olivier Pietquin: Interaction management as a stochastic game
  • Mohamed Chetouani: What could we capture from the social interaction layer?
  • Frederic Bevilacqua: Modelling expressive movements: case studies from music, dance and sound design
  • Abdel-Illah Mouaddib: Dealing with short-term and frequent human-robot interaction in public space
2/10/2015 MJK Working lunch. Comment on posters & demos and identify joint research subjects (involving at least two teams).
21/5/2015 Gymnase de la piscine universitaire Challenge (PersyCup)

Synopsis

The Action-Team Robots in Human Environments addresses several challenges in the two domains of personal and service robots. Our aims are to:
  1. Improve the coordination and collaboration between the different teams contributing to robotics within Persyval-lab, a significant challenge being the very different scientific backgrounds that are brought together: vision, sensor and signal processing, control theory, planning, machine learning, multi-agent systems, social interactions.
  2. Tackle three scientific problems related to active perception, navigation in human environments, and learning and adaptation of robot behaviors for social interaction, naturally with a significant emphasis on experiments.
  3. Support the development of emerging topics that draw interest in our community, such as ethics and soft robotics.
  4. Demonstrate our progress and collaboration through two distinct challenges combining the scientific problems mentioned earlier: one related to service robots, with a robot able to navigate seamlessly in a crowd of humans (autonomously or tele-operated with shared autonomy), and one related to personal robots, with a robot able to interact socially with a group of humans.
  5. Support a Robotics Academy that gathers efforts on teaching robotics both in theory and in practice, with significant use of student experimental challenges.
This project is financed by the ANR (ANR-11-LABX-0025-01).