I have participated in and managed many projects
during my academic career, in cooperation with researchers
at GIPSA-lab and at other laboratories in France and abroad.
My projects have focused on methods in statistical signal
processing, especially neural networks, source separation
and its applications in audio-video speech processing,
hyperspectral imaging and biomedical engineering (ECG,
EEG/MEG/fMRI), brain-computer interfaces and processing on
the Riemannian manifold.
These projects have been funded by the French National
Research Agency (ANR), the European Research Council, or
private companies.
During the last six years, I worked on a European Research
Council project: Challenges on Extraction and Separation of
Sources (CHESS). This is an ERC Advanced Grant, which began
in March 2013 for five years, with 2.5 million euros of
funding.
Summary of the ERC CHESS project
The CHESS project addresses three challenges in the
extraction and separation of sources.
The first challenge concerns source separation for
multimodal recordings. In fact, multimodal recordings can be
due to different devices (e.g. EEG and MEG in brain
imaging), different time (space) windows for studying
dynamics of data along time (space), or different subjects
(e.g. patients) recorded by the same device. Although these
situations are very different, from a theoretical point of
view they all require jointly processing multiple datasets
(one per modality) with interactions between them. This challenge
relates to data fusion, but the main goal in CHESS is to
develop comprehensive foundations and generic methods for
multimodal processing instead of designing ad hoc
algorithms.
We propose a new source separation model assuming
multidimensional sources and multimodal recordings. This
model extends independent component analysis (ICA),
independent vector analysis (IVA) and independent subspace
analysis (ISA). Results, some of them based on a
generalization of Schur’s Lemma, show that multimodality,
provided that hard or soft interactions exist between
datasets, leads to relaxed conditions for source
identifiability and uniqueness.
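For readers unfamiliar with the classical single-dataset setting that these models extend, the following minimal NumPy sketch of symmetric FastICA on toy sources shows what plain ICA recovers; the sources, mixing matrix and parameters are illustrative assumptions, not data or code from the project:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)
# Two independent non-Gaussian toy sources (illustrative only)
s = np.c_[np.sign(np.sin(3 * t)), rng.laplace(size=t.size)]
A = np.array([[1.0, 0.5], [0.4, 1.0]])    # "unknown" mixing matrix
x = s @ A.T                               # observed linear mixtures

# Center and whiten the observations
x = x - x.mean(axis=0)
d, E = np.linalg.eigh(np.cov(x.T))
z = (x @ E) / np.sqrt(d)

# Symmetric FastICA with a tanh nonlinearity
W = np.linalg.qr(rng.normal(size=(2, 2)))[0]
for _ in range(200):
    g = np.tanh(z @ W.T)
    W_new = (g.T @ z) / len(z) - np.diag((1 - g**2).mean(axis=0)) @ W
    U, _, Vt = np.linalg.svd(W_new)       # symmetric decorrelation
    W = U @ Vt
s_hat = z @ W.T                           # sources, up to permutation and scale
```

The permutation and scale ambiguity visible in `s_hat` is exactly the kind of indeterminacy that the identifiability results above characterize for the multimodal case.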
The core of multimodal models is the interaction between
datasets. For multi-device or multi-temporal datasets, we
develop a general and flexible framework suited to a vast
class of models with interaction, e.g. when datasets share
common, correlated or weakly related factors, or factors
varying across datasets. This leads to algorithmic
implementations based on nonconvex optimization with
constraints: the cost function contains a classical data-fit
term, complemented by regularization terms modeling the
interactions between the datasets.
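A toy sketch of such a cost function, with one data-fit term per dataset and a quadratic regularization term softly coupling the factors; the sizes, the data and the alternating least-squares solver are illustrative assumptions, not the CHESS algorithms:

```python
import numpy as np

rng = np.random.default_rng(1)
k, n = 2, 500
shared = rng.normal(size=(k, n))            # latent factors common to both modalities
S1_true = shared + 0.02 * rng.normal(size=(k, n))
S2_true = shared + 0.02 * rng.normal(size=(k, n))
A1, A2 = rng.normal(size=(4, k)), rng.normal(size=(6, k))
X1, X2 = A1 @ S1_true, A2 @ S2_true         # two observed datasets

# Cost: ||B1 S1 - X1||^2 + ||B2 S2 - X2||^2 + lam * ||S1 - S2||^2
lam = 1.0                                   # strength of the soft interaction term
I = np.eye(k)
S1, S2 = rng.normal(size=(k, n)), rng.normal(size=(k, n))
B1, B2 = rng.normal(size=(4, k)), rng.normal(size=(6, k))
for _ in range(100):
    # Factor updates: ridge-like solves that couple S1 and S2
    S1 = np.linalg.solve(B1.T @ B1 + lam * I, B1.T @ X1 + lam * S2)
    S2 = np.linalg.solve(B2.T @ B2 + lam * I, B2.T @ X2 + lam * S1)
    # Loading updates: plain least squares (the coupling does not involve B)
    B1 = X1 @ S1.T @ np.linalg.inv(S1 @ S1.T)
    B2 = X2 @ S2.T @ np.linalg.inv(S2 @ S2.T)
```

Each update exactly minimizes the cost in one block of variables, so the iteration decreases the nonconvex objective monotonically; the `lam` term is what forces the two factorizations to share a common set of factors.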
More generally, the performance of jointly processing
multimodal recordings is usually assumed to be better than
that achieved using a single recording from one modality.
However, the literature contains results that contradict
this claim. We therefore studied, using an
information-theoretic approach, the benefits and
disadvantages of using two or more modalities. Our results
explain how different sampling rates, SNRs in each modality
and correlation between modalities influence estimation
performance.
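A standard two-sensor fusion example illustrates this kind of analysis: with correlated noise, the best linear unbiased combination of two modalities can still beat the best single modality, but the gain depends on the SNRs and on the noise correlation. The values below are toy assumptions, not results from the project:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
theta = 1.0                    # quantity observed by both modalities
s1, s2, rho = 1.0, 2.0, 0.8    # noise std per modality and noise correlation (toy)

# Correlated Gaussian noise across the two modalities
cov = [[s1**2, rho * s1 * s2], [rho * s1 * s2, s2**2]]
noise = rng.multivariate_normal([0, 0], cov, size=n)
x1, x2 = theta + noise[:, 0], theta + noise[:, 1]

# Best linear unbiased fusion w*x1 + (1-w)*x2, minimizing the variance
w = (s2**2 - rho * s1 * s2) / (s1**2 + s2**2 - 2 * rho * s1 * s2)
fused = w * x1 + (1 - w) * x2
# fused.var() falls below the variance of the better single modality,
# by an amount governed by the two SNRs and by rho
```

Note that here the optimal weight exceeds 1 (the noisier modality enters with a negative weight), one example of how correlation between modalities reshapes the estimation problem.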
The second challenge focuses on source separation in
nonlinear mixtures. A new generic approach consists in
replacing the time-invariant nonlinear mixture of sources by
a time-varying linear mixture of the derivatives of the
sources. This idea requires only mild conditions, namely
that the nonlinear model be differentiable and the sources
be smooth enough. It leads to theoretical proofs of
identifiability and to new algorithms. A second, also very
generic, approach is based on the fact that the Gaussian
process property is lost under polynomial nonlinear mixing.
Thus, Gaussianity can be used as a criterion for separating
colored sources satisfying a Gaussian process model, using
simple second-order statistics. The main
applications are focused on processing signals coming from
ion-sensitive or gas sensor arrays.
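The first idea can be checked numerically with the chain rule: if x(t) = f(s(t)) with f differentiable and s smooth, then dx/dt = J_f(s(t)) ds/dt, i.e. a time-varying *linear* mixture of the source derivatives. A toy NumPy sketch, where the mixture f is an arbitrary illustrative choice:

```python
import numpy as np

# Smooth toy sources and a differentiable nonlinear mixture (arbitrary choice)
t = np.linspace(0, 1, 1001)
s1, s2 = np.sin(2 * np.pi * t), np.cos(6 * np.pi * t)
x1 = s1 + 0.3 * s2**2          # x = f(s), nonlinear in s2
x2 = np.tanh(s1) + s2          # nonlinear in s1

dt = t[1] - t[0]
dx1, dx2 = np.gradient(x1, dt), np.gradient(x2, dt)
ds1, ds2 = np.gradient(s1, dt), np.gradient(s2, dt)

# Chain rule: [dx1; dx2] = J_f(s(t)) [ds1; ds2], a time-varying LINEAR mixture
J = lambda a, b: np.array([[1.0, 0.6 * b],                # Jacobian of f at (a, b)
                           [1 / np.cosh(a)**2, 1.0]])
pred = np.array([J(a, b) @ np.array([da, db])
                 for a, b, da, db in zip(s1, s2, ds1, ds2)])
# pred matches (dx1, dx2) up to numerical differentiation error
```

The mixing matrix J_f(s(t)) changes at every instant, which is exactly why the problem becomes a time-varying linear separation problem in the derivatives.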
The third challenge (extraction of sources in high- or
low-dimensional data) has been explored in several
multimodal applicative frameworks: PCG/ECG-based
non-invasive fetal heart extraction, audio-video speech
separation, gaze-EEG recordings, and hyperscanning.
Typically, we design methods that exploit simple hints
about the sources of interest: hints can be properties like
quasi-periodicity or simple binary information coming from
one modality. For hyperscanning, we showed that the
approximate joint diagonalizer of a set of matrices is
related to the geometric mean of those matrices. This
finding links blind source separation to classification on
the Riemannian manifold.
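For two symmetric positive definite matrices (e.g. covariance matrices), the geometric mean has the closed form G = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2} and satisfies the defining property G A^{-1} G = B. The sketch below illustrates only this property on random matrices; it is not the hyperscanning pipeline:

```python
import numpy as np

def spd_sqrt(C):
    # Symmetric square root of an SPD matrix via eigendecomposition
    C = (C + C.T) / 2
    d, V = np.linalg.eigh(C)
    return V @ np.diag(np.sqrt(d)) @ V.T

def geometric_mean(A, B):
    # Riemannian geometric mean A # B of two SPD matrices
    As = spd_sqrt(A)
    Asi = np.linalg.inv(As)
    return As @ spd_sqrt(Asi @ B @ Asi) @ As

rng = np.random.default_rng(3)
M = rng.normal(size=(3, 3)); A = M @ M.T + 3 * np.eye(3)
M = rng.normal(size=(3, 3)); B = M @ M.T + 3 * np.eye(3)
G = geometric_mean(A, B)
# Defining property of the geometric mean: G A^{-1} G = B
```

For two matrices, the eigenvectors of A^{-1} B already diagonalize both exactly; the interesting case, studied in CHESS, is the approximate joint diagonalization of larger sets of matrices and its link to their geometric mean.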
More details, and especially all the CHESS publications, can
be found on the ERC web pages of the CHESS project or on the
open-access site HAL, using the acronym CHESS in the
“European project” topics.
International collaborations
Since 2004, I have had a strong collaboration with Prof. M.
Babaie-Zadeh of Sharif University of Technology (Tehran,
Iran) and his research team. The cooperation is supported by
bilateral (France-Iran) funding in the framework of the
Gundishapur program, and we have jointly supervised several
PhDs in cotutelle. Our joint research is focused on sparse
component analysis (SCA), source separation in
under-determined mixtures and applications in various
domains, from image denoising and biomedical engineering to
digital communications.
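A minimal sketch of the sparsity idea behind SCA in the under-determined case: with more sources than sensors, an l1-regularized fit can still recover a sparse source vector. Here plain ISTA is used on toy data; the sizes and parameters are illustrative assumptions, not the methods of this collaboration:

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 6, 12                   # fewer sensors than sources: under-determined
A = rng.normal(size=(m, n)) / np.sqrt(m)
s = np.zeros(n); s[2], s[9] = 1.5, -2.0    # sparse toy source vector
y = A @ s                                   # observed mixtures

# ISTA (iterative soft-thresholding) for min 0.5*||A x - y||^2 + lam*||x||_1
lam = 0.01
L = np.linalg.eigvalsh(A.T @ A).max()       # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(5000):
    g = x - A.T @ (A @ x - y) / L           # gradient step on the data-fit term
    x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)   # soft threshold
# x recovers the support and approximate amplitudes of s
```

Although the linear system is under-determined, the l1 penalty selects the sparse solution, which is the key mechanism exploited by SCA for separating under-determined mixtures.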
I also have cooperations on ECG modeling and noninvasive
fetal ECG extraction with Prof. M. Shamsollahi of Sharif
University of Technology (Tehran, Iran), Prof. R. Sameni of
Shiraz University (Iran) and Prof. G. Clifford (MIT, USA,
then Oxford, UK, and now Atlanta Univ., USA). The first
results, developed by Prof. R. Sameni during his PhD, have
been patented and are exploited by the US company MindChild.
Since 2010, I have also had a regular cooperation with
Prof. L. Duarte at the Univ. of Campinas (Brazil). This
cooperation is mainly focused on source separation in
nonlinear mixtures, with applications in chemical sensing.
During the ERC CHESS project, in addition to the above
collaborations, I had a strong cooperation with Prof. T.
Adali (Univ. of Maryland, Baltimore County) on the fusion
of multimodal data. This cooperation has led to special
sessions and tutorials at international conferences, a
special issue of the Proceedings of the IEEE, and joint
papers.