Postdoctoral


Nested cortical models for human and non-human primate inter-species comparisons

Olivier Coulon, Adrien Meguerditchian

Nested cortical models for human and non-human primate inter-species comparisons.
O. Coulon (Institut de Neurosciences de la Timone)
A. Meguerditchian (Laboratoire de Psychologie Cognitive, AMU, Marseille, France)
W. Hopkins (Neuroscience Institute and Language Research Center, Georgia State University, Atlanta USA)

post-doctoral position
duration: 2 years
location: MeCA team, Institut de Neurosciences de la Timone, Marseille, France.

The MeCA team at INT has developed a model of human cortical organization that provides a statistical description of the relative position, orientation, and long-range alignment of cortical sulci on the surface of the cortex [1]. This model can be instantiated on the cortical surface of any individual (extracted from MR images), and supports inter-subject comparison and cortical parcellation [2]. The goal of this project is to build new models for non-human primate species. Starting from the human model, a nested sub-model can be developed for chimpanzees, from which in turn a model for baboons can be built, then again for macaques. This series of nested models will define a hierarchy of cortical complexity, and will provide the means to transport any cortical information (functional, anatomical, geometrical) from one species to another and to perform direct inter-species comparisons. A proof of concept has already been proposed for humans and chimpanzees [3, Fig. 1]. The post-doctoral fellow will develop complete models for chimpanzees, baboons, and macaques, and apply them to study local cortical expansion across species, as well as to compare the localization of known cortical asymmetries across species. Models and associated tools will be made available to the neuroimaging community via the BrainVisa software platform.
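To make the notion of transport concrete, here is a toy sketch (illustrative assumptions only, not the MeCA pipeline): once two cortical surfaces carry the same rectangular parameterization, as produced by the HIP-HOP method [1], moving a scalar map from one surface to the other reduces to resampling in the shared (u, v) coordinate space. The grid size and coordinates below are invented for the example.

import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Toy setup: a scalar map defined on surface A, expressed in the common
# rectangular (u, v) parameter space shared by both surfaces.
u = np.linspace(0.0, 1.0, 64)
v = np.linspace(0.0, 1.0, 64)
map_A = np.sin(4 * np.pi * u)[:, None] * np.cos(2 * np.pi * v)[None, :]

# Hypothetical (u, v) coordinates of the vertices of surface B.
uv_B = np.random.rand(10000, 2)

# Transport = resampling map_A at B's coordinates in the shared space.
interp = RegularGridInterpolator((u, v), map_A)
map_on_B = interp(uv_B)
print(map_on_B.shape)  # one value per vertex of surface B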
The candidate will use existing tools and adapt them to new species. MR image databases will be provided for each species. Basic knowledge of programming languages such as Matlab or Python is expected, as well as a strong interest in neuroimaging and/or computational anatomy.

[1] Auzias G, Lefèvre J, Le Troter A, Fischer C, Perrot M, Régis J, Coulon O (2013). Model-driven harmonic parameterization of the cortical surface: HIP-HOP. IEEE Trans Med Imaging, 32(5):873-887.
[2] Auzias G, Coulon O, Brovelli A (2016). MarsAtlas: A cortical parcellation atlas for functional mapping. Human Brain Mapping, 37(4):1573-1592.
[3] Coulon O, Auzias G, Lemercier P, Hopkins W (2018). Nested cortical organization models for human and non-human primate inter-species comparisons. Int. Conference of the Organization for Human Brain Mapping.








Cumulative culture in nonhuman primates and the evolution of language

Joël Fagot, Nicolas Claidière

Transverse question 1: “Precursors of Language”
Partners: Joël Fagot (1), Nicolas Claidière (1), Piera Filippi (2) and Noël Nguyen (2).
(1) LPC – Laboratoire de Psychologie Cognitive, Aix-Marseille University
(2) LPL – Laboratoire Parole et Langage, Aix-Marseille University
Contact: Dr. J. Fagot, joel.fagot@univ-amu.fr, Webpage: https://lpc.univ-amu.fr/fr/profile/fagot-joel
Call for a two-year post-doc on: 
Cumulative culture in nonhuman primates and the evolution of language
Children learn a language by being exposed to the speech of speakers of that language; they then become speakers themselves. This process of iterated learning largely explains why languages evolve through time: with every generation, the changes introduced by new learners are passed on to future generations. Experiments involving transmission chains can capture this process. For instance, Kirby, Cornish, and Smith (2008) introduced a non-structured language (random associations between a set of visual stimuli and artificially constructed labels) as input to transmission chains and found that this language became progressively more structured and easier to learn. However, the importance of iterated learning in determining the structure of a language is difficult to evaluate in humans, because humans have necessarily already acquired a language before participating in experiments. That first acquisition inevitably guides the evolution of the experimental language according to the principles just described (participants are biased by their first language). Studies with non-human animals, such as baboons, can overcome this difficulty, and the proposed project is to explore the effect of iterated learning on language-like structures in the baboon, a nonhuman, nonlinguistic primate species.
The post-doc will be based at the CNRS primate station in Rousset (near Aix-en-Provence), and will work with a world-unique "primate cognition and behavior platform" where baboons can interact freely with experiments presented on touch screens (for a range of experiments using this system, see https://www.youtube.com/watch?v=6Ofd8cHVCYM). This platform has previously been used to present transmission-chain experiments to baboons. In relation to this project, previous experiments have revealed that transmission chains promote the appearance of typically linguistic features (structure, systematicity and lineage specificity; see e.g. Claidière et al., 2014). The post-doc will explore this line of research further. A major challenge will be to extend our previously used visual pattern reproduction task to sound patterns, which may lend themselves to the emergence of a combinatorial structure along the transmission chain.
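As a rough illustration of the transmission-chain logic (a hypothetical sketch, not the experimental software): each generation reproduces the previous pattern with occasional copying errors, and a weak bias, here an assumed preference for compact shapes, is enough for structured patterns to accumulate along the chain.

import random

GRID = 4   # 4x4 grid, loosely inspired by the touch-screen task
N_ON = 4   # number of lit cells per pattern

def neighbours(cell):
    r, c = divmod(cell, GRID)
    return {rr * GRID + cc
            for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
            if 0 <= rr < GRID and 0 <= cc < GRID}

def compactness(pattern):
    # number of adjacent pairs of lit cells: higher = more "structured"
    return sum(len(neighbours(cell) & pattern) for cell in pattern) // 2

def reproduce(pattern, error=0.2, n_attempts=3):
    # The learner makes a few noisy copies; the most compact one wins,
    # a stand-in for an assumed weak cognitive bias towards structure.
    attempts = []
    for _ in range(n_attempts):
        copy = set(pattern)
        if random.random() < error:  # one mis-touched cell
            copy.remove(random.choice(sorted(copy)))
            copy.add(random.choice([c for c in range(GRID * GRID)
                                    if c not in copy]))
        attempts.append(copy)
    return max(attempts, key=compactness)

pattern = set(random.sample(range(GRID * GRID), N_ON))  # random seed pattern
for generation in range(50):                            # the transmission chain
    pattern = reproduce(pattern)
print("compactness after 50 generations:", compactness(pattern))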
We are looking for highly motivated candidates with a PhD in Biology or Psychology, preferably with a focus on evolutionary mechanisms and/or language-related issues. Candidates are also expected to have very good programming and data-analysis skills. Previous experience with nonhuman primates would be a plus.
Candidates should contact Dr. Joël Fagot at joel.fagot@univ-amu.fr
References: 
Claidière, N., Smith, K., Kirby, S. & Fagot, J. (2014). Cultural evolution of systematically structured behaviour in a non-human primate. Proc. R. Soc. B, 281, 20141541.

Kirby, S., Cornish, H. & Smith, K. (2008). Cumulative cultural evolution in the laboratory: an experimental approach to the origins of structure in human language. Proc. Natl Acad. Sci. USA, 105, 10681–10686. (doi:10.1073/pnas.0707835105)





Functional Connectivity Dynamics of the Language Network

Andrea Brovelli, Demian Battaglia

Functional Connectivity Dynamics of the Language Network

Supervisors
Andrea Brovelli (Institut de Neurosciences de la Timone - www.andrea-brovelli.net/)
Demian Battaglia (Institut de Neurosciences des Systèmes - www.demian-battaglia.net)
Frédéric Richard (Institut de Mathématiques de Marseille - www.latp.univ-mrs.fr/~richard/)

Scientific context and state-of-the-art
Language is a network process arising from the complex interaction of regions of the frontal and temporal lobes, connected anatomically via the dorsal and ventral pathways (Friederici and Gierhan, 2013; Fedorenko and Thompson-Schill, 2014; Chai et al., 2016). An open question is how these brain areas coordinate to support language. Functional Connectivity (FC) analysis provides a methodological framework to address this question. FC analysis encompasses various forms of statistical dependency between neural signals, ranging from linear correlation to more sophisticated measures quantifying directional influences between brain regions, such as Granger causality (Brovelli et al., 2004, 2015). Recently, however, it has become clear that a time-resolved analysis of FC, also known as Functional Connectivity Dynamics (FCD), can yield a novel perspective on brain network dynamics (Hutchison et al., 2013; Allen et al., 2014). Indeed, we have shown that non-trivial resting-state FCD is expected to stem from complex dynamics in cortical networks (Hansen et al., 2015) and that the fluency of FCD correlates with cognitive performance at the single-subject level across the human lifespan (Battaglia et al., 2017). In task-related conditions, FCD analyses have shown that visuomotor transformations follow a schedule of recruitment of different networks over time intervals on the order of hundreds of milliseconds (Brovelli et al., 2017).
Objective of the research project
These recent advances open up the possibility of tackling one of the long-term objectives of the ILCB, which is to characterise how language-related brain regions communicate. Progress on this challenge, however, is limited by the lack of knowledge about the underlying neurophysiological mechanisms.
The objective of the Post-Doc research project is to characterise the neural correlates that could be used to track information transfer between brain regions in task-related conditions. First, the post-doctoral researcher will optimise current tools for estimating source-level brain activity (both power and phase information of neural oscillations) from magnetoencephalographic (MEG) data using an atlas-based approach (Auzias et al., 2016). Information transfer between brain regions will then be quantified by means of FC and FCD analyses based on different metrics, including multivariate spectral methods, directional influences such as Granger causality, and information-theoretic quantities that can track information storage, sharing and transfer (Kirst et al., 2016). These metrics will be applied to different potential correlates of brain communication, such as power-to-power correlations, phase-to-phase relations and phase-to-amplitude couplings. The analysis of FC and FCD representations and the extraction of functional modules will then be performed using graph theory and temporal network representations (Holme and Saramäki, 2012; Brovelli et al., 2017).
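For illustration, the core of an FCD analysis can be sketched as follows (a minimal sketch with toy data and placeholder window parameters, not the project's actual pipeline): FC patterns are estimated in sliding windows, and the FCD matrix stores the similarity between every pair of windowed FC patterns.

import numpy as np

def fcd_matrix(ts, win=100, step=10):
    # ts: (n_parcels, n_samples) source-level activity time series.
    # Returns the FCD matrix: correlation between the FC patterns
    # estimated in successive sliding windows.
    n_parcels, n_samples = ts.shape
    iu = np.triu_indices(n_parcels, k=1)
    fc_stream = []
    for start in range(0, n_samples - win + 1, step):
        fc = np.corrcoef(ts[:, start:start + win])  # windowed FC
        fc_stream.append(fc[iu])                    # keep upper triangle
    return np.corrcoef(np.array(fc_stream))         # (n_windows, n_windows)

# Toy usage: random data standing in for MEG source estimates.
rng = np.random.default_rng(0)
fcd = fcd_matrix(rng.standard_normal((20, 1000)))
print(fcd.shape)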
To do so, we will exploit two MEG datasets: a first dataset collected by Andrea Brovelli, in which participants performed finger movements in response to the presentation of numerical digits (a simple visuomotor task), and a second dataset collected by Xavier Alario, in which participants were required to name objects depicted on a screen (a naming task).

Profile of the Post-Doc candidate
The Post-Doc candidate will have a PhD in cognitive or computational neuroscience, bioengineering, physics or applied mathematics. Proficient computational skills (Matlab and/or Python) and experience in the analysis of MEG data are required. Experience in the cognitive bases of language is welcome.
Contacts
Candidates should send their CV, 1 or 2 reference letters and a motivation letter to:
	Andrea Brovelli 	andrea.brovelli@univ-amu.fr
	Demian Battaglia	demian.battaglia@univ-amu.fr
Frédéric Richard	frederic.richard@univ-amu.fr

References
Allen EA, Damaraju E, Plis SM, Erhardt EB, Eichele T, Calhoun VD (2014) Tracking whole-brain connectivity dynamics in the resting state. Cereb Cortex 24:663–676.
Auzias G, Coulon O, Brovelli A (2016) MarsAtlas: A cortical parcellation atlas for functional mapping. Hum Brain Mapp 37:1573–1592.
Battaglia D, Thomas B, Hansen ECA, Chettouf S, Daffertshofer A, McIntosh AR, Zimmermann J, Ritter P, Jirsa V (2017) Functional Connectivity Dynamics of the Resting State across the Human Adult Lifespan. Available at: http://dx.doi.org/10.1101/107243.
Brovelli A, Badier J-M, Bonini F, Bartolomei F, Coulon O, Auzias G (2017) Dynamic Reconfiguration of Visuomotor-Related Functional Connectivity Networks. J Neurosci 37:839–853.
Brovelli A, Chicharro D, Badier J-M, Wang H, Jirsa V (2015) Characterization of Cortical Networks and Corticocortical Functional Connectivity Mediating Arbitrary Visuomotor Mapping. J Neurosci 35:12643–12658.
Brovelli A, Ding M, Ledberg A, Chen Y, Nakamura R, Bressler SL (2004) Beta oscillations in a large-scale sensorimotor cortical network: directional influences revealed by Granger causality. Proc Natl Acad Sci U S A 101:9849–9854.
Chai LR, Mattar MG, Blank IA, Fedorenko E, Bassett DS (2016) Functional Network Dynamics of the Language System. Cereb Cortex 26:4148–4159.
Fedorenko E, Thompson-Schill SL (2014) Reworking the language network. Trends Cogn Sci 18:120–126.
Friederici AD, Gierhan SME (2013) The language network. Curr Opin Neurobiol 23:250–254.
Holme P, Saramäki J (2012) Temporal networks. Phys Rep 519:97–125.
Hutchison RM, Womelsdorf T, Allen EA, Bandettini PA, Calhoun VD, Corbetta M, Della Penna S, Duyn JH, Glover GH, Gonzalez-Castillo J, Handwerker DA, Keilholz S, Kiviniemi V, Leopold DA, de Pasquale F, Sporns O, Walter M, Chang C (2013) Dynamic functional connectivity: promise, issues, and interpretations. Neuroimage 80:360–378.
Kirst C, Timme M, Battaglia D (2016) Dynamic information routing in complex networks. Nat Commun 7:11061.




Speech monitoring in conversation with human and artificial intelligence interlocutors

Elin Runnqvist, Magalie Ochs

Speech monitoring in conversation with human and artificial intelligence interlocutors

•	Post-doctoral project proposal supervised by Elin Runnqvist (LPL) and Magalie Ochs (LSIS)
•	Collaborators: Noël Nguyen (LPL), Kristof Strijkers, (LPL) & Martin Pickering (University of Edinburgh)
•	QT4: “Cerebral and cognitive underpinnings of conversational interactions”
 
Traditionally, researchers have focused on either production or comprehension to investigate the mechanisms underlying language processing. In recent years, however, the focus has shifted towards examining both production and comprehension by studying language processing in a conversational setting. While this trend has begun in many key fields of language processing, not all research domains have taken up this exciting new challenge. With the current project, we would examine how interaction with another interlocutor might impact the processes involved in error monitoring (i.e., the detection and repair of errors) during language production. While little to no research has examined monitoring in a conversational setting, there are monitoring models that take dialogue into account (e.g., Pickering & Garrod, 2014). In the current proposal, we would test the predictions put forward by these models by employing several different tasks (e.g., the SLIP task, Runnqvist et al., 2016; the network description task, Declerck et al., 2016), and by manipulating several variables related to the speaker, to task demands and to the different levels of linguistic representation, using both behavioral and electrophysiological methods. Furthermore, the use of an artificial agent as a conversational partner for parts of the project will allow for the manipulation of conversational variables (e.g., location or type of feedback), and will further allow us to examine whether the patterns observed for humans are similar, speaking to the issue of whether monitoring is an automatic or a controlled process. Both a virtual agent and a humanoid robot (Furhat) would be used to measure the effect of physical presence. Finally, multimodal aspects such as head nodding and smiling would be manipulated (e.g., Ochs et al., 2017). The end goal of this project is twofold: concerning language processing, the objective is to better understand monitoring in conversation and its relation to monitoring in isolation; concerning artificial intelligence, the goal is to further our understanding of the linguistic, social and emotional factors that are essential for successful human-robot interactions.





Multi-modal averaging of neuroimaging data using multi-view machine learning. Methods and applications

Sylvain Takerkart, Hachem Kadri

Multi-modal averaging of neuroimaging data using multi-view machine learning.
Methods and applications.

QT6: Machine learning and deep learning
Supervisors: Sylvain Takerkart (INT), Hachem Kadri (LIS), François-Xavier Dupé (LIS)


In neuroimaging, traditional group analyses rely on warping the functional data recorded in different individuals onto a template brain. This template brain is constructed from brain anatomy, either using standard templates (such as those provided with software libraries like SPM or FSL) or using a population-specific template (which can, e.g., be computed with tools included in the ANTs and FreeSurfer packages). Once the data are projected onto such a common space, the General Linear Model (GLM) is applied to identify commonalities across subjects.

In other words, this can be viewed as two successive averaging steps: first, the anatomical averaging that produces the template brain; second, the functional averaging that is performed through the GLM. Because the computation of the template brain is not a linear operation, these two steps are not commutative. The final result is therefore biased by the choice of this order, a bias that can be substantial in regions where inter-individual anatomical variability is strong. In particular, brain regions involved in language processing, such as the inferior frontal gyrus, are strongly affected by this bias.
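For illustration, the conventional second step, functional averaging through the GLM, can be sketched as below (toy data and assumed shapes; with a one-sample group design, the group GLM reduces to a t-test per vertex on the warped individual maps):

import numpy as np
from scipy import stats

n_subjects, n_vertices = 20, 5000
rng = np.random.default_rng(0)

# Step 1 (stand-in): individual contrast maps already warped onto the
# template; in reality this anatomical step is non-linear, which is
# precisely why the two averaging steps do not commute.
warped_maps = rng.standard_normal((n_subjects, n_vertices)) + 0.3

# Step 2: with a one-sample group design, the GLM reduces to a t-test
# against zero at each vertex of the template surface.
t_vals, p_vals = stats.ttest_1samp(warped_maps, popmean=0.0, axis=0)
print((p_vals < 0.001).sum(), "vertices above threshold")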

We propose here a new framework that frees us from this methodological bias by performing both averaging operations simultaneously. Intuitively, this means that the anatomical averaging will exploit the functional information, and that the functional group analysis will directly draw on individual brain anatomy. We frame this problem as a multi-view machine learning question. The tasks of the post-doctoral fellow will consist in (1) designing and implementing an algorithm that can efficiently address this question, and (2) testing it on a variety of real MRI datasets available throughout the ILCB teams.
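As a hint of what the multi-view framing looks like in code, here is a deliberately simple baseline (illustration only, not the algorithm to be developed): canonical correlation analysis between an "anatomical view" and a "functional view" of the same subjects, which recovers the shared structure that a joint averaging method would exploit. All dimensions and data below are synthetic.

import numpy as np
from sklearn.cross_decomposition import CCA

# Synthetic subjects sharing latent factors expressed in both views.
n_subjects = 40
rng = np.random.default_rng(1)
shared = rng.standard_normal((n_subjects, 2))
anat = shared @ rng.standard_normal((2, 30)) \
    + 0.1 * rng.standard_normal((n_subjects, 30))
func = shared @ rng.standard_normal((2, 50)) \
    + 0.1 * rng.standard_normal((n_subjects, 50))

# CCA finds paired projections of the two views that maximally correlate.
cca = CCA(n_components=2).fit(anat, func)
anat_c, func_c = cca.transform(anat, func)
print(np.corrcoef(anat_c[:, 0], func_c[:, 0])[0, 1])  # close to 1 here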

The first task will be conducted under the supervision of Sylvain Takerkart (INT, Banco team, Neuro-Computing Center), as well as François-Xavier Dupé and Hachem Kadri (LIS, Qarma team), who have been collaborating for several years on the design of new machine learning methods for neuroimaging. The second task will involve applying this new method to various existing fMRI datasets recorded by the ILCB teams, such as experiments dedicated to: (1) studying plasticity in the auditory cortex, with a comparison of pianists and controls using a tonotopy paradigm (D. Schön, INS; S. Takerkart, INT); (2) understanding speaker recognition processes in the vocal brain (V. Aglieri, S. Takerkart, P. Belin, INT); and (3) examining hierarchical processing in the inferior frontal gyrus (T. Chaminade, INT). The expected benefit is an improved sensitivity of group studies, in both univariate and multivariate settings. Finally, a software tool will be released publicly so that all ILCB members, as well as the scientific community at large, can benefit from this new method.


 


Discriminating the different cognitive processes of voice recognition

Jean-François Bonastre, Christine Meunier

Proposal by JF Bonastre (with LPL, notably Christine Meunier and Alain Ghio)

*** Proposal for a post-doc on discriminating the different cognitive processes of voice recognition.
The aim is to compare the processes at work in the recognition of familiar voices, of ordinary voices, and of "alarm signals" (which may be specific voices or predator calls).
Experiments recording brain activity (MEG?) would be of great interest to verify whether the same cognitive processes are involved.
Simulation and comparison using neural networks will broaden the scope of the results.

 


Time as a functional mechanism of sensorimotor integration in speech and language?

Benjamin Morillon, Kristof Strijkers


Time as a functional mechanism of sensorimotor integration in speech and language?

Supervisors: Benjamin Morillon (INS) & Kristof Strijkers (LPL)

(potential) ILCB collaborators: Daniele Schon (INS), Andrea Brovelli (INT), Elin Runnqvist (LPL), Marie Montant (LPC)
(potential) external collaborators: Anne-Lise Giraud (UNIGE), Sonja Kotz (UM), Friedemann Pulvermuller (FUB) 

ILCB PhD & Postdoctoral Topic Proposal
Primary QT: QT3
Secondary QT: QT5

While traditional models proposed a strict separation between the activation of motor and sensory systems for the production versus the perception of speech, respectively, most researchers now agree that there is much more functional interaction between sensory and motor activation during language behavior. Despite this growing consensus that the integration of sensorimotor knowledge plays an important role in the processing of speech and language, there is much less consensus on what that exact role may be, or on the functional mechanics that could underpin it. Indeed, many questions from various perspectives remain open in the current state of the art: Is the role of sensorimotor activation modality-specific, in that it serves a different functionality in perception than in production? Is it only relevant for the processing of speech sounds, or does it also play a role in language processing and meaning understanding in general? Can sensory codes be used to predict motor behavior (production), and can motor codes be used to predict sensory outcomes (perception)? And if so, how are such predictions implemented at the mechanistic level (e.g., does different oscillatory entrainment between sensory and motor systems reflect different dynamical and/or representational properties of speech and language processing)? And in which manner can such sensorimotor integration go from arbitrary speech sounds to well-structured meaningful words and language behavior? The goal of this project is to advance our understanding of these open questions (in different 'sub-topics'), taking advantage of the complementary knowledge of the supervisors: B. Morillon is an expert on the cortical dynamics of sensorimotor activation in the perception of speech, and K. Strijkers is an expert on the cortical dynamics of sensorimotor activation in the production and perception of language. At the center of the project, and as its connecting red thread, is the supervisors' shared interest in the role of 'time' (temporal coding) as a potential key factor that causes sensorimotor activation to bind during the processing of speech and language. On this view, 'time' transcends its classical notion as a processing vehicle (i.e., the simple propagation of activation from sensory to motor systems and vice versa) and may reflect representational knowledge of speech and language. One of the main goals of the current project is thus to test the hypothesis that temporal information between sensory and motor codes serves a key role in the production and perception of speech and language. More specifically, we will explore whether sensorimotor integration during speech and language processing reflects: (a) the prediction of temporal information; (b) the temporal structuring of speech sounds and articulatory movement; (c) the temporal binding of phonemic and even lexical elements in language.
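As an illustration of how oscillatory coupling between sensory and motor signals can be quantified, here is a generic phase-locking value sketch (a standard measure, but the band, sampling rate and signals below are toy assumptions, not the project's analysis choices):

import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def plv(x, y, fs, band):
    # Phase-locking value between x and y within a frequency band (Hz).
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    phase_x = np.angle(hilbert(filtfilt(b, a, x)))
    phase_y = np.angle(hilbert(filtfilt(b, a, y)))
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

# Toy usage: two noisy 4 Hz signals with a fixed phase lag.
fs = 250
t = np.arange(0, 10, 1 / fs)
x = np.sin(2 * np.pi * 4 * t) + 0.5 * np.random.randn(t.size)
y = np.sin(2 * np.pi * 4 * t + 0.8) + 0.5 * np.random.randn(t.size)
print(plv(x, y, fs, (3.0, 5.0)))  # near 1 for a consistent lag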

We will consider PhD candidates and post-doctoral researchers to conduct research on any of the three topics specified above (a-c). Interested candidates can contact us via email (Benjamin Morillon: bnmorillon@gmail.com; Kristof Strijkers: Kristof.strijkers@gmail.com), including a CV and a motivation letter (1-2 pages). A strong background in speech and language processing and/or knowledge of spatiotemporal neurophysiological techniques and analyses will be considered a strong plus.




Contour, rhythm or content? What does the dog's brain grasp from human speech?

Florence Gaunet, Thierry Legou


Contour, rhythm or content? What does the dog's brain grasp from human speech?

Florence Gaunet (LPC), Thierry Legou (LPL) & Prof. Anne-Lise Giraud (Geneva Univ / IMERA position from Feb to June 2019)

Implications: QT1 (primary: involvement of motor representations in speech perception), QT3 (secondary: the animal as a model for the study of language)
Request: Postdoc or doctoral grant
Summary: We intend to explore dogs' neural and perceptual responses to syllabic speech, in order to understand auditory speech processing in a species with reduced articulatory production capabilities, and therefore reduced motor control. It might be the case that dogs perceive speech using only the acoustic cues that they can themselves produce, i.e. short "syllable-like" intonated sounds. Alternatively, they might be sensitive to cues that they cannot produce at all. Given dogs' expertise in using human speech, the findings will provide insights into the mechanisms of speech processing in the brain, i.e. the extent to which motor representations are involved in speech perception.


Human-Machine Interaction, Artificial agent, Affective computing, Social signal processing

Magalie Ochs


Human-Machine Interaction, Artificial agent, Affective computing, Social signal processing. 




Efficiency of a Virtual Reality Headset to improve reading in people with low vision

Eric Castet


Efficiency of a Virtual Reality Headset to improve reading in people with low vision.
People with low vision, in contrast to blind people, have not lost the entirety of their visual functions. The leading cause of low vision in Western countries is AMD (Age-related Macular Degeneration), a degenerative, non-curable retinal disease occurring mostly after the age of 60. Recent projections estimate that the total number of people with AMD in Europe will be between 19 and 26 million in 2040.
The most important wish of people with AMD is to improve their ability to read using their remaining functional vision. Capitalizing on recent technological developments in virtual reality headsets, we have developed a VR reading platform (implemented on the Samsung Gear VR headset). This platform provides a dynamic system that lets readers use augmented-vision tools specifically designed for reading (Aguilar & Castet, 2017), as well as text simplification techniques currently being tested in our lab. Our project is to assess whether this reading platform can improve reading performance both quantitatively (reading speed, accuracy, ...) and qualitatively (comfort, stamina, ...). Experiments will be performed in the ophthalmology department of the University Hospital of La Timone (Marseille).