06/07/2017 Deep learning models of perception and cognition by Marco Zorzi (University of Padova, Italy)

Thursday 6/7 @ 14:00
St. Charles, Salle des Voûtes

Deep learning models of perception and cognition

Marco Zorzi
University of Padova, Italy
Deep learning in stochastic recurrent neural networks with many layers of neurons (“deep networks”) is a recent breakthrough in neural computation research. These networks build a hierarchy of progressively more complex representations of the sensory data through unsupervised learning. Using examples from research in my laboratory, I will show that deep learning models represent a major step forward for connectionist modeling in psychology and cognitive neuroscience. I will also focus on a new model of letter perception, which shows that learning written symbols can recycle the visual primitives of natural images, thereby requiring only limited domain-specific tuning.
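The layer-by-layer unsupervised learning described above can be pictured with a toy sketch. This is not the Padova group's actual models (which use stochastic recurrent networks); it is a minimal stand-in in which each layer is a small tied-weight autoencoder trained without labels on the previous layer's output, so the stack builds progressively more compact representations of the data. All sizes and parameters are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_autoencoder(data, n_hidden, epochs=200, lr=0.5, seed=0):
    """Train a one-layer tied-weight autoencoder; return encoder weights/bias."""
    rng = np.random.default_rng(seed)
    n_vis = data.shape[1]
    W = rng.normal(0, 0.1, (n_vis, n_hidden))
    b_h, b_v = np.zeros(n_hidden), np.zeros(n_vis)
    for _ in range(epochs):
        h = sigmoid(data @ W + b_h)          # encode
        recon = sigmoid(h @ W.T + b_v)       # decode with tied weights
        err = recon - data                   # reconstruction error
        d_recon = err * recon * (1 - recon)  # backprop through decoder
        d_h = (d_recon @ W) * h * (1 - h)    # backprop through encoder
        W -= lr * (data.T @ d_h + (h.T @ d_recon).T) / len(data)
        b_h -= lr * d_h.mean(axis=0)
        b_v -= lr * d_recon.mean(axis=0)
    return W, b_h

def pretrain_stack(data, layer_sizes):
    """Greedy layer-wise pretraining: each layer re-represents the
    previous layer's features; no labels are involved at any point."""
    reps, params = data, []
    for n_hidden in layer_sizes:
        W, b = train_autoencoder(reps, n_hidden)
        params.append((W, b))
        reps = sigmoid(reps @ W + b)  # feed features to the next layer
    return params, reps

# Toy "sensory" data: 40 binary patterns of 16 pixels
rng = np.random.default_rng(1)
X = (rng.random((40, 16)) > 0.5).astype(float)
params, top = pretrain_stack(X, [8, 4])
print(top.shape)  # (40, 4): a compact top-level representation
```

The point of the sketch is only the training schedule: the second layer never sees the raw input, mirroring the hierarchy-building idea in the abstract.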

30/06/2017 ILCB lunch-talk by Robert Hartsuiker (University of Ghent, Belgium)

Three decades of structural priming research: implications for syntactic representation, domain-specificity of syntax, and multilingualism

About thirty years ago, Kay Bock discovered structural priming, the tendency for speakers and listeners to recycle syntactic structures they have recently encountered. A recent meta-analysis of 70 published papers (Mahowald et al., 2017) shows that structural priming (as well as its enhancement by lexical overlap between prime and target sentence) is highly robust. Here, I look back at three decades of structural priming research, with a particular emphasis on the theoretical implications for syntactic representation, on the organization of the syntactic representations of multiple languages in multilinguals, and on the question of whether structural processing is domain-specific or is shared with other cognitive domains, such as music or math. I then look forward to an ongoing research line on the late acquisition of syntax in a second language. I will describe our account of this acquisition process, according to which syntactic representations start out as separate for each language but merge as the learner's proficiency increases, and show the results of an artificial language learning study designed to test this account.


-To plan for the lunch buffet, attendance must be confirmed by sending an email to

Please let us know if you have any dietary restrictions (vegetarian, allergies, etc.).

-Speaker suggestions (warmly encouraged) for September-June should be sent to

12.00-13.00 Talk (Salle de conférences, LPL) by Robert Hartsuiker (University of Ghent, Belgium)
13.00-......  Lunch buffet (garden, LPL, Aix-en-Provence)

06/06/2017 Language Production from the Bottom Up

Melissa Redford*, Pascal Perrier**, Caterina Petrone***,  Serge Pinto***, Kristof Strijkers***, F-Xavier Alario****

*IMeRA, University of Oregon **GIPSA, Université de Grenoble,  ***LPL, BLRI, ****LPC, BLRI

• Date(s): 06/06/2017
• Venue: Fondation IMéRA, la Maison des astronomes, 2 place le Verrier, 13004 Marseille
• Time: 9h20-16h
• Organisers: Melissa Redford & Serge Pinto
• Admission: free entry
• Contact: redford(arobase)




This workshop will consider spoken language production from the bottom up; that is, from speech motor control at the level of sound production to connected speech planning and lexical access. The goal is to brainstorm an approach to language production that would incorporate into theory the emergence of structure/function from underlying dynamical processes as well as physiological constraints and lifespan changes to the underlying system of representation and control.



9:20-9:45. F-Xavier ALARIO & Pierre LIVET – « Welcome »

9:45-10:30. Melissa REDFORD – « Development: constraints that shape the psychology of language »


10:45-11:30. Pascal PERRIER – « Speech motor control: Some hypotheses on sequence planning and speaker adaptation »

11:30-12:15. Caterina PETRONE – « Prosody and individual behavior in speech production planning »


13:30-14:15. Kristof STRIJKERS – « Neural coding in language: How neurophysiology may constrain psycholinguistic theory »

14:15-15:00. F-Xavier ALARIO – « Phonological constraints in determiner production »


15:15-15:30. Melissa REDFORD – « Summary & discussion »

15:30-16:00. Serge PINTO – « Cognisud moving forward »

Discourse Prosody and Sentence Processing in Prelingually Deaf Teenagers with Cochlear Implants


Katherine Demuth
(Macquarie University)

17h30, LPL

The past few years have seen major improvements in the early diagnosis of hearing loss, in early intervention, and in the devices themselves. Much of the assessment of language development has focussed on the early years, with assessments of hearing levels, intelligibility, vocabulary size, and other standardized measures showing good to excellent attainment levels in many children fitted with hearing aids or cochlear implants (CIs). However, much less is known about the language abilities of school-aged children with hearing loss, many of whom still experience challenges in making themselves understood, understanding others, and fully engaging in social interaction. This talk discusses results from two recent studies of discourse interactions and sentence processing by prelingually deaf teenage CI users, showing that they are less interactive, use prosodic cues differently for certain discourse functions, and are much slower at sentence processing than their normal-hearing peers. This raises many questions regarding the nature of their language model and how it might be enhanced to achieve more efficient language processing and production.


Music and Language comprehension in the brain – a surprising connection


Richard Kunert
(Max Planck Institute for Psycholinguistics and Radboud University Nijmegen, 
Donders Institute for Brain, Cognition and Behavior, Nijmegen)

14h Salle des voûtes,
fac St Charles, Pôle 3 C
How the comprehension of instrumental music and of spoken or written language is implemented in the brain remains mysterious. In this talk I present a new way to approach this issue. In a series of studies we investigated music and language comprehension at the same time in order to gain insights into both. Specifically, we asked whether these two kinds of stimuli are subserved by common neural circuitry despite their obvious differences. It turns out that structural properties of language and instrumental music are processed in a common brain area. Does this truly imply shared structural processing, or is it just a general effect related to attention? It turns out that the effects of music on attention are actually limited. The findings I will present therefore suggest that music and language processing share very limited, and surprisingly specialized, neural circuitry. More broadly, an interdisciplinary approach such as the one applied here can open the way to asking ever more focused questions about brain organization.


Research group seminars - ILCB cross-cutting question

"Language and motor action: a shared system?"


13h30-17h, Salle des voûtes, pôle 3C
Campus Saint-Charles, Faculté des Sciences, Marseille
What if language and action were one and the same? Could speech have a gestural origin? Could motor tasks aid the recovery of impaired language functions?
Close links between language and the motor system have been demonstrated by a growing number of behavioural and neuroscience studies, whether they concern rhythms, handwriting, sensorimotor interactions, precision grip, tool making and use, praxic and motor disorders, or the gestural communication of our primate cousins and of the human species (i.e., the hand movements that accompany speech, referential pointing, preverbal gestures in young children, sign language, etc.). The perception and control of gestures thus appear to interact or interfere directly with certain language functions and with language acquisition. Beyond the diversity of approaches across the ILCB laboratories, this question interests us for its potential clinical and theoretical implications concerning how language works, how it develops, and its phylogenetic origins. With this first workshop, we aim to take stock of ongoing work in the ILCB laboratories on this cross-cutting question in psychology, neuroscience, language sciences, cognitive science, and primatology.



13h30-15h: Motor influence on the perception and production of spoken and written language

Thierry Chaminade (INT)
"A shared syntax for language and tool making?"

Marieke Longcamp (LNC) & Sarah Palmis (PhD student, LNC)
"Neural bases of the interactions between orthographic and motor processes during handwriting production"

Benjamin Morillon (INS)
"The motor origin of temporal predictions in auditory attention"

Marc Sato (LPL)
"Perceiving and acting: the sensorimotor nature of speech"

Daniele Schon (INS) & Céline Hidalgo (PhD student, INS)
"Resonance between musical rhythm and conversational rhythm: effects of active rhythmic training in deaf children"

Jean-Luc Velay (LNC)
"The influence of manual activity on reading comprehension"

15h00-15h30 - COFFEE BREAK

15h30-16h15 - Clinical disorders

Serge Pinto (LPL)
"Motor speech disorders in movement disorders: pathophysiology and management"

Véronique Sabadell (speech-language pathologist, La Timone; collaboration with INS)
"A gesture-based approach to aphasia rehabilitation"

Agnès Trébuchon (INS) & Alexia Fasola (PhD student, INS)
"Co-verbal gesture in the pathological model of focal epilepsy: what does it tell us about the organization of the patient's language?"

16h15-17h - A comparative approach across species

Florence Gaunet (LPC) & Thierry Legou (LPL)
"Referential gestures and vocalizations of Canis familiaris towards humans: implications for the origins of communication and language"

Adrien Meguerditchian (LPC)
"The origins of language: actions, gestures and the brain in non-human primates"

Marie Montant (LPC)
"Embodied cognition in human and non-human primates"

The referential value of prosody: A comparative approach to the study of animal vocal communication

Piera Filippi

11h Salle B011, bât. B
5 avenue Pasteur, Aix-en-Provence LPL
Recent studies addressing animal vocal communication have challenged the traditional view of meaning in animal communication as the context-specific denotation of a call. These studies have identified a central aspect of animal vocal communication in the ability to recognize the emotional state of signalers, or to trigger appropriate behaviors in response to vocalizations. This theoretical perspective is conceptually sound from an evolutionary point of view, as it assumes that, rather than merely referring to an object or an event, animals' vocalizations are designed to trigger (intentionally, or not) reactions that may be adaptive for both listeners and signalers. Crucially, changes in emotional states may be reflected in prosodic modulation of the voice. Research focusing on the expression of emotional states through vocal signals suggests that prosodic correlates of emotional vocalizations are shared across mammalian vocal communication systems. In a recent empirical study, we showed that human participants use specific acoustic correlates (differences in fundamental frequency and spectral center of gravity) to judge the emotional content of vocalizations across amphibians, reptiles, and mammals. These results suggest that fundamental mechanisms of vocal emotional expression are widely shared among vocalizing vertebrates and could represent an ancient signaling system. But what is the evolutionary link between the ability to interpret emotional information in animal vocalizations and the ability for human linguistic communication? I suggest that this link lies in the ability to modulate emotional sounds with the aim of triggering behaviors within social interactions. Hence, I will emphasize the key role of the interactional value of prosody in relation to the evolution and ontogenetic development of language.
Within this framework, I will report on recent empirical data on humans, showing that the prosodic modulation of the voice is dominant over verbal content and faces in emotion communication. This finding aligns with the hypothesis that prosody is evolutionarily older than the emergence of segmental articulation, and might have paved the way to its origins. Finally, implications for the study of the cognitive relationship between linguistic prosody and the ability for music, which has often been identified as the evolutionary precursor of language, will be discussed.
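The acoustic correlates named in the abstract (fundamental frequency and spectral center of gravity) are easy to make concrete. The sketch below is a minimal NumPy illustration, not the analysis pipeline of the studies described: a crude autocorrelation-based F0 estimate and an amplitude-weighted mean frequency, applied to a synthetic 220 Hz tone.

```python
import numpy as np

def spectral_centroid(signal, sr):
    """Spectral center of gravity: the amplitude-weighted mean frequency."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    return np.sum(freqs * spectrum) / np.sum(spectrum)

def f0_autocorr(signal, sr, fmin=50, fmax=500):
    """Crude F0 estimate: strongest autocorrelation peak in a plausible lag range."""
    ac = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + np.argmax(ac[lo:hi])
    return sr / lag

sr = 16000
t = np.arange(0, 0.1, 1.0 / sr)
tone = np.sin(2 * np.pi * 220 * t)  # a synthetic 220 Hz "vocalization"
print(f0_autocorr(tone, sr))        # ≈ 220 Hz
print(spectral_centroid(tone, sr))  # ≈ 220 Hz for a pure tone
```

For real animal vocalizations one would of course use windowed analysis and a robust pitch tracker, but these two quantities are the measures the abstract refers to.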

Common ground for action-perception coupling and its consequences for speech processing


Sonja A. Kotz
(Dept. of Neuropsychology and Psychopharmacology, Maastricht University, The Netherlands & Dept. of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany)

11h Salle A003, bât. A
5 avenue Pasteur, Aix-en-Provence LPL
While the role of forward models in predicting sensory consequences of action is well anchored in a cortico-cerebellar interface, it is an open question whether this interface is action specific or extends to perceptual consequences of sensory input (e.g. Knolle et al., 2012; 2013 a&b). Considering the functional relevance of a temporo-cerebellar-thalamo-cortical circuitry that aligns with well known cerebellar-thalamo-cortical connectivity patterns, one may consider that cerebellar computations apply similarly to incoming information coding action, sensation, or even higher level cognition such as speech and language (e.g. Ramnani, 2006; Kotz & Schwartze, 2010, 2016): (i) they simulate cortical information processing and (ii) cerebellar-thalamic output may provide a possible source for internally generated cortical activity that predicts the outcome of information processing in cortical target areas (Knolle et al., 2012; Schwartze & Kotz, 2013). I will discuss new empirical and patient evidence (motor-auditory coupling and auditory only) in support of these considerations and present an extended cortico-subcortical framework encompassing action-perception coupling, perception, and multimodal speech.

Learning new words: Implications for speech processing and for lexical memory


James M. McQueen
(Radboud University, Nijmegen, The Netherlands)
14h30 Salle de conférences B011, bât. B
5 avenue Pasteur, Aix-en-Provence LPL
Listeners are able to recognise words in spite of considerable variation in how words are realized physically. For example, Mary may need to recognise an English word spoken by Jacques, a non-native speaker that Mary has never heard before. Evidence from behavioural (eye-tracking) and neuroscientific (EEG and fMRI) studies on novel word learning will be presented which suggests that listeners cope with the variation in spoken words through abstracting away from the episodic details of particular experienced word forms. This process can be seen in on-line speech recognition: the way a novel realization of a new word is processed is based on phonological knowledge previously abstracted from other words. The need for abstraction also shapes lexical memory: sleep-enhanced memory consolidation processes support the transfer of newly-learned words from episodic memory to long-term lexical memory, making generalization across modalities possible. Listeners can recognise, for example, newly-learned words that they have previously read but that they have never heard before.

Figures of speech in the brain: The role of metaphoricity, familiarity, concreteness, and lateralization in language comprehension

Bálint Forgács
(Laboratoire Psychologie de la Perception (LPP) Université Paris Descartes)
11h Salle de conférences B011, bât. B
5 avenue Pasteur, Aix-en-Provence LPL
Debate runs hot over how metaphors relate to literal language, the steps by which we understand them, and how our brains deal with them. In my talk I am going to show fMRI and divided visual half-field data arguing against a unique role for the right cerebral hemisphere and for literal language in metaphor comprehension. If the relevant psycholinguistic factors are controlled for (such as context, emotional valence or imageability), classical left-lateralized regions seem to compute not just dead but even novel metaphors. Moreover, the latter do not seem to evoke the so-called electrophysiological concreteness effect either, contrary to the claims of the strong version of embodiment. Based on the new evidence, I am going to present a novel model of how the neural systems dedicated to language could compute figures of speech so swiftly, and why the lateralization debate could be viewed from a different perspective.

Man and Machine during Natural Language Processing: A Neurocognitive Approach

Chris Biemann and Markus J. Hofmann
Language Technology, Universität Hamburg
General and Biological Psychology, University of Wuppertal
11h00 Salle des voûtes,
3, Place Victor Hugo - 13331 Marseille
While state-of-the-art NLP models lack a theory that systematically accounts for human performance at all levels of linguistic analysis, neurocognitive simulation models of orthographic and phonological memory have so far lacked a level of implemented semantic representations. To overcome these limitations, the authors of this talk decided to initiate a long-term cooperation.

In part 1 of this talk, we introduce unsupervised methods from language technology that capture semantic information. We present a range of methods that extract semantic representations from corpora, as opposed to using manually created norms. We show how we applied language models based on n-grams, topic modelling, and the word2vec neural model across three different corpora to account for behavioral, brain-electric and eye-movement data. We used a benchmark that has become standard for neurocognitive simulation models in psychology: we thus reproducibly accounted for half of the item-level variance in the cloze-completion-based word predictability from sentence context, and in the resulting N400 and single-fixation-duration data of the Potsdam sentence corpus.
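As a toy illustration of the n-gram side of this work (not the actual models, which are trained on far larger corpora than the few sentences used here), word predictability from sentence context can be read off a smoothed bigram model; the quantity usually related to N400 amplitude and fixation durations is its negative log, the surprisal:

```python
import math
from collections import Counter

def bigram_model(corpus_tokens, alpha=0.1):
    """Add-alpha smoothed bigram model; returns a P(word | previous word) function."""
    unigrams = Counter(corpus_tokens)
    bigrams = Counter(zip(corpus_tokens, corpus_tokens[1:]))
    vocab = len(unigrams)
    def prob(prev, w):
        return (bigrams[(prev, w)] + alpha) / (unigrams[prev] + alpha * vocab)
    return prob

# A tiny toy corpus standing in for real training data
corpus = ("the dog chased the cat . the dog chased the ball . "
          "the cat watched the dog .").split()
p = bigram_model(corpus)

# Predictability of a word from its left context, and its surprisal in bits
print(p("the", "dog") > p("the", "ball"))  # True: "dog" is more expected after "the"
print(-math.log2(p("the", "dog")))         # lower surprisal = easier processing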

In part 2 we discuss how relatively straightforward NLP methods can be used to define semantic processes in a neurocognitive simulation model. To extend an interactive activation model with a semantic layer, we used the log likelihood that two words occur together in the sentences of a large corpus more often than predictable from single-word frequency. The resulting Associative Read-Out Model (AROM) is an extension of the Multiple Read-Out Model. Here, we use it to account for association ratings and semantically induced false memories in human performance and in P200/N400 brain-electric data. Then, we present a sequential version of the AROM accounting for primed lexical decision, and for the resulting semantic competition in the left (and right!) inferior frontal gyrus of the human brain. Finally, we envision two routes of reading, complementing the form-based aspects of linguistic representations with one of the most defining features of words: they carry meaning.
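The log-likelihood association measure described above is, in the NLP literature, commonly Dunning's G² computed over a 2x2 co-occurrence table; the sketch below is a minimal version with made-up counts, not the corpus statistics behind the AROM:

```python
import math

def log_likelihood_ratio(k11, k12, k21, k22):
    """Dunning's G-squared for a 2x2 sentence co-occurrence table:
    k11 = sentences containing both words, k12/k21 = only one word,
    k22 = sentences containing neither word."""
    def xlogx(x):
        return x * math.log(x) if x > 0 else 0.0
    total = k11 + k12 + k21 + k22
    row1, row2 = k11 + k12, k21 + k22
    col1, col2 = k11 + k21, k12 + k22
    # Equivalent to 2 * sum(observed * log(observed / expected))
    return 2 * (xlogx(k11) + xlogx(k12) + xlogx(k21) + xlogx(k22)
                - xlogx(row1) - xlogx(row2) - xlogx(col1) - xlogx(col2)
                + xlogx(total))

# A strongly associated word pair scores higher than a weakly associated one
print(log_likelihood_ratio(100, 50, 50, 9800) >
      log_likelihood_ratio(10, 500, 500, 9000))  # True
```

In a model like the AROM, such scores (one per word pair) would weight the links of the semantic layer, so that reading one word pre-activates its strong associates.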

Institute of Language, Communication, and the Brain


10h Salle Pouillon, Faculté Saint Charles, Marseille
10h Presentation of the ILCB
10h30 Inaugural lecture by Professor Stanislas Dehaene (Collège de France)
"The languages of the brain: how is syntax encoded at the neural level?"
11h30 Addresses by officials
12h Cocktail reception

The ILCB (Institute of Language, Communication and the Brain) has just been selected under the "Instituts Convergences" scheme of the Programme d'Investissements d'Avenir. It is one of only five such institutes to be created in France, a major event for our site and, beyond it, for our scientific community. The Aix-Marseille site (reinforced by the Université d'Avignon) is now recognized as one of the world leaders in the study of the brain bases of language and communication. Our scientific (10 research units), human (150 people) and technological (6 experimental platforms) environment indeed positions the ILCB as a structure unique in the world in this field.
The ILCB is an ambitious project with multiple objectives. In terms of research, our aim is to better understand how the brain works by focusing on the processing of language and communication. Our objective is to build a unifying framework that brings together, for the first time, knowledge acquired in different disciplines: linguistics, neuroscience, psychology, computer science, mathematics and medicine. We also aim to produce applied results in fields ranging from neurology (e.g. support for the surgical treatment of epilepsy, control of speech production in Parkinsonian patients) to human-machine interaction (e.g. brain-computer interfaces, support for automatic dialogue understanding), as well as speech-language rehabilitation of speech and communication disorders (e.g. dyslexia, autism, aphasia, schizophrenia). Finally, we intend to be strongly involved in initial training (creation of a master's programme and a doctoral programme) and in continuing education (creation of a university diploma and a specialized master's for professionals).
Please reply by 15/11/2016.

Semantic processing beyond categories: Influences of semantic richness, associations and social-communicative contexts on language production


Rasha Abdel Rahman
(Humboldt-Universität zu Berlin)
Salle des Voûtes
Site Saint-Charles LPC/BLRI
The ultimate goal of speaking is to convey meaning. However, while semantic-categorical relations are well-investigated, little is known about other aspects of meaning processing during speech planning. In this talk I will present evidence on how message-inherent attributes, for instance, the semantic richness or emotional content of the message, shape language production. Furthermore, I will discuss the flexible nature of the language production system that can adapt to different contexts from ad-hoc relations to associations and social-communicative situations. Together, these findings demonstrate a high level of flexibility of the language production system as a basis for intimate - and thus far underinvestigated - relations between language production, emotion and social cognition.

Workshop on Linguistic Complexity


Salle de conférences B011, bât. B
5 avenue Pasteur, Aix-en-Provence
9h - 9h40
Identifying and removing complexity in syntax: the case of French anticausatives
Geraldine Legendre, Cognitive Science Department, Johns Hopkins University and Département d’Etudes Cognitives, Ecole Normale Supérieure

9h40 - 10h20
Information processing and sentence complexity 
Ted Gibson, MIT Department of Brain and Cognitive Sciences

10h40 - 11h20
Methods for making the complex simpler
Paul Smolensky, Cognitive Science Department, Johns Hopkins University and Département d’Etudes Cognitives, Ecole Normale Supérieure

11h20 - 12h
The future of the BLRI: towards a new Institute
Philippe Blache, LPL, Aix-Marseille Université

14h - 14h40
Linguistic complexity: The contributions of language-specific vs. domain-general mechanisms
Ev Fedorenko, Harvard Medical School / Massachusetts General Hospital

14h40 - 15h20
Integrating different complexity notions into a computational analysis of readability and proficiency
Detmar Meurers, University of Tübingen

16h - 17h30
General discussion

Annotating Information Structure in Authentic Data: From Expert Annotation to Crowd Sourcing Experiments


Detmar Meurers, Kordula De Kuthy
(University of Tübingen)
Salle de conférences B011, bât. B 5 avenue Pasteur, Aix-en-Provence
While the formal pragmatic concepts in information structure, such as the focus of an utterance, are precisely defined in theoretical linguistics and potentially very useful in conceptual and practical terms, it has turned out to be difficult to reliably annotate such notions in corpus data (Ritz et al., 2008; Calhoun et al., 2010). We present a large-scale focus annotation effort designed to overcome this problem. Our annotation study is based on the task-based corpus CREG (Ott et al., 2012), which consists of answers to explicitly given reading comprehension questions. We compare focus annotation by trained annotators with a crowd-sourcing setup making use of untrained native speakers. Given the task context and an annotation process incrementally making the question form and answer type explicit, the trained annotators reach substantial agreement for focus annotation. Interestingly, the crowd-sourcing setup also supports high-quality annotation for specific subtypes of data. To refine the crowd-sourcing setup, we introduce the Consensus Cost as a measure of agreement within the crowd. We investigate the usefulness of Consensus Cost as a measure of crowd annotation quality both intrinsically, in relation to the expert gold standard, and extrinsically, by integrating focus annotation information into a system performing Short Answer Assessment taking into account the Consensus Cost. Finally, we turn to the question of whether the relevance of focus annotation can be extrinsically evaluated. We show that automatic short-answer assessment indeed significantly improves for focus-annotated data.
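The abstract does not spell out how Consensus Cost is computed, so the following is only one plausible instantiation (an assumption, not De Kuthy & Meurers' actual definition): the share of crowd workers whose label for an item deviates from the majority vote.

```python
from collections import Counter

def consensus_cost(labels):
    """A simple crowd-agreement measure (hypothetical instantiation):
    the fraction of annotators whose label deviates from the majority vote.
    0.0 = unanimous crowd; values approaching 1 = heavy disagreement."""
    majority_label, count = Counter(labels).most_common(1)[0]
    return 1.0 - count / len(labels)

# Per-item crowd labels for a focus-annotation task
unanimous = ["focus"] * 5
split = ["focus", "focus", "background", "background", "focus"]
print(consensus_cost(unanimous))  # 0.0
print(consensus_cost(split))      # 0.4 (2 of 5 disagree with the majority)
```

Items with a high cost could then be routed to expert annotators or down-weighted when the annotations feed the short-answer assessment system, which matches the extrinsic use described above.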

Workshop: Speech coordination and entrainment


Fred Cummins (1), Mariapaola D’Imperio (2), Leonardo Lancia (3), Daniele Schön (4)
((1) Univ. Dublin, (2) LPL/AMU, (3) BLRI, (4) INS)
LPL, salle B011, 5 avenue Pasteur, Aix-en-Provence
The workshop "Speech coordination and entrainment" will take place on 22 April, with a keynote by Fred Cummins (Dublin).

If you would like to attend, please fill in the following form:

Programme :

9h30 Fred Cummins, Univ. Dublin: Prayer, Protest and Football: the Puzzles of Joint Speech

10h30 Mariapaola D’Imperio, LPL : Direct imitation of metrical and intonational patterns and the production-perception link

Coffee break

11h30 Leonardo Lancia, BLRI : Effects of inter-speaker coordination on the stability of speech production patterns

12h10 Daniele Schön, INS : Music to speech entrainment

12h50 The turkeys: What about the turkeys?

13h Lunch and improvised interactions

Fred Cummins Abstract

Joint speech is an umbrella term covering choral speech, synchronous speech, chant, and all forms of speech where many people say the same thing at the same time. Under an orthodox linguistic analysis, there is nothing here to study, as the formal symbolic structures of joint speech do not appear to differ from those of language arising in other forms of practice. As a result, there is essentially no body of scientific inquiry into practices of joint speaking. Yet joint speaking practices are ubiquitous, ancient, and deeply integrated into rituals and domains to which we accord the highest significance.

I will discuss Joint Speech as found in prayer, protest, classrooms, and sports stadia around the world. If we merely take the time to look, there is much to be found in joint speech that is crying out for elaboration and investigation. I will attempt to sketch the terra incognita that opens up and present a few initial findings (phonetic, anthropological, neuroscientific) that suggest that Joint Speech is far from being a peripheral and exotic special case. It is, rather, a central example of language use that must inform our theories of what language, languaging and subjects are.

Dissociating Prediction and Attention Components in Language


Ruth de Diego-Balaguer
(ICREA Research Professor - Cognition and Brain Plasticity Unit, Universitat de Barcelona)
10h00 Salle de conférences B011, bât. B 5 avenue Pasteur, Aix-en-Provence LPL
Speech is composed of sequences of syllables, words and phrases. These elements unfold in time in specific orders. Thus, acquiring a language requires not only learning each of these representations but also their temporal organisation. The areas making up the dorsal stream for language have been proposed to play a role in the processing of sequential information. In this talk I will present novel behavioural, developmental and neuroimaging evidence indicating that the roles of fronto-parietal and fronto-temporal connectivity within this dorsal stream can be dissociated in language learning. In addition, I will present data indicating that learning non-adjacent dependencies in language, a core mechanism for the acquisition of syntactic rules, involves both the ability to predict forthcoming elements implicitly and the ability to endogenously orient attention based on the learned predictive cues. This type of learning implies an interface between the language and attention networks during the early stages of language acquisition.

Perceptual adaptation and speech motor control: A new perspective on some well known mechanisms


Douglas Shiller
(Université de Montréal, Faculté de médecine)
11h00 Salle de conférences B011, bât. B
5 avenue Pasteur, Aix-en-Provence LPL
Acoustic speech signals are notoriously variable within and between talkers. To aid in the linguistic decoding of such noisy signals, it is well known that listeners employ a number of perceptual mechanisms to help reduce the impact of linguistically irrelevant acoustic variation. Rapid perceptual accommodation to differences in age and gender is achieved, in part, through vowel-extrinsic normalization, whereby the immediately preceding speech signal provides a frame-of-reference within which talker-specific vowel category boundaries are determined (Ladefoged & Broadbent, 1957). Listeners also draw upon higher-order linguistic information to facilitate phonetic processing of noisy or ambiguous speech acoustic signals, as illustrated by the well-known lexical effect on perceptual category boundaries (Ganong, 1980). 
Since their discovery many decades ago, these adaptive perceptual mechanisms have been considered primarily as processes supporting the decoding of ambiguous speech signals originating from other talkers. Here, I will describe two recent studies demonstrating that such adaptive processes can also alter the processing of self-generated speech acoustic signals (i.e., auditory feedback), and by extension, the sensorimotor control of speech production. The results provide strong support for the idea that short-term auditory-perceptual plasticity rapidly transfers to the sensory processes guiding speech motor function. The findings will be discussed within the context of current models of speech production, in particular those that highlight a role for auditory-feedback in the fine-tuning of predictive, feed-forward control processes.
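Vowel-extrinsic normalization of the kind Ladefoged & Broadbent demonstrated is often operationalized in phonetics as talker-relative scaling of formants. The sketch below uses Lobanov-style z-scoring as a stand-in (an illustrative choice, not the procedure from the studies described): each talker's vowels are expressed relative to that talker's own formant distribution, which removes uniform talker differences such as vocal-tract length.

```python
import numpy as np

def lobanov_normalize(formants):
    """Extrinsic talker normalization (Lobanov-style): z-score each
    talker's formant values against that talker's own distribution,
    so category positions become talker-relative."""
    f = np.asarray(formants, dtype=float)
    return (f - f.mean(axis=0)) / f.std(axis=0)

# Two talkers producing the "same" three vowels (F1, F2 in Hz);
# talker B's vocal tract scales all frequencies by 1.3
talker_a = [[300, 2300], [500, 1500], [700, 1100]]
talker_b = [[390, 2990], [650, 1950], [910, 1430]]

norm_a = lobanov_normalize(talker_a)
norm_b = lobanov_normalize(talker_b)
print(np.allclose(norm_a, norm_b))  # True: talker differences are factored out
```

The frame-of-reference idea in the abstract is exactly this: the preceding speech supplies the talker-specific statistics against which incoming vowels are judged.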

Questions théoriques et expérimentales sur la liaison et l'acquisition du français / Theoretical and Experimental Issues in Liaison and the Acquisition of French


Géraldine Legendre*, Jennifer Culbertson**, Paul Smolensky*, Sophie Wauquier***
(*Johns Hopkins University, Dpt. of Cognitive Science, **University of Edinburgh, ***Université de Paris 8)
LPL, 5 av. Pasteur, Aix-en-Provence (salle B011)
Programme :

09h30: Welcome of participants

09h45: Introduction by Philippe Blache (LPL)

Morning Session (Chairperson: Amandine Michelas, LPL)

10h-11h: Verbal pro-clitic liaison in the early acquisition of subject-verb agreement: Evidence from grammaticality preference and comprehension, Géraldine Legendre (Johns Hopkins University)

11h-12h: Generality of phonological knowledge in early acquisition: Evidence from spontaneous and elicited production of liaison, Jennifer Culbertson (University of Edinburgh)

12h-14h: Lunch

Afternoon Session (Chairperson: Sophie Herment, LPL)

14h-15h: Acquisition of liaison in L1 and L2: the role of implicit vs explicit learning, literacy and typology, Sophie Wauquier (Université de Paris 8)

15h-16h: Weighted blending of competing analyses for a unified account of the heterogeneous liaison evidence, Paul Smolensky (Johns Hopkins University)

16h-17h: General discussion

Learning to take turns : The role of linguistic and interactional cues in children's conversation


Marisa Casillas
(Max Planck Institute for Psycholinguistics)
LPL, salle de conférences B011, bât. B, 5 avenue Pasteur, Aix-en-Provence
Children begin taking turns with their caregivers long before their first words emerge. But as their turns begin to change from vocalizations to true, verbal utterances, children face a major challenge in integrating linguistic cues into their previously functional non-verbal turn-taking systems. I will present an overview of children's turn-taking behaviors from infancy through young childhood and will review recent corpus and experimental work on how children's response timing is affected by linguistic planning and how their spontaneous predictions about upcoming turns change as they develop.

Analysis, Cognitive, and Neural Modeling of Language-Related Brain Potentials


Peter Beim Graben
(Bernstein Center for Computational Neuroscience Berlin, Humboldt-Universität zu Berlin)
LPL, salle de conférences B011, bât. B, 5 avenue Pasteur, Aix-en-Provence
How is the human language faculty neurally implemented in the brain? What are the neural correlates of linguistic computations? To what extent are neuromorphic cognitive architectures feasible, and could they eventually lead to new diagnostic and treatment methods in clinical linguistics (such as linguistic prosthetics)? These questions, at the interface of neurolinguistics, computational linguistics and computational neuroscience, are addressed by the emerging discipline of computational neurolinguistics. In my presentation I will give an overview of my own research in computational neurolinguistics in the framework of language-related brain potentials (ERPs). By means of a paradigmatic ERP experiment on the processing and resolution of local ambiguities in German [1], I first introduce a novel method for identifying ERP components such as the P600 as "recurrence domains" in neuronal dynamics [2]. In a second step, I use a neuro-computational approach, called a "nonlinear dynamical automaton" (NDA) [1], in order to construct a context-free "limited repair parser" [3] for processing the linguistic stimuli of the study. Finally, I demonstrate how the time-discrete evolution of the NDA can be embedded into continuous time using winner-less competition in neural population models [4]. This leads to a representation of the automaton's configurations as recurrence domains in the neural network that can be correlated with experimentally measured ERPs through subsequent statistical modeling [5,6].

Statistical learning as an individual ability


Ram Frost
(The Hebrew University of Jerusalem - Department of Psychology)
Fédération de Recherche 3 C, 3 place Victor Hugo, Marseille
Most research in Statistical Learning (SL) has focused on participants' mean success rate in detecting statistical contingencies at the group level. In recent years, however, researchers have shown increasing interest in individual abilities in SL. What determines an individual's efficacy in detecting regularities in SL? What does it predict? Is it stable across modalities? We explore these questions by trying to understand, through a novel methodology, the source of variance in performance in a visual SL task. The theoretical implications for a mechanistic explanation of SL will be discussed.

Apes, Language and the Brain


Bill Hopkins
(Georgia State University)
Amphi de sciences naturelles, 3 place Victor Hugo, Marseille
(Labex BLRI et la SFR)
For more than 150 years, philosophers and scientists have pondered the uniqueness of human language, with a particular fascination with the linguistic, cognitive and neural capacities of great apes. A majority of the scientific work on this topic has come from so-called "ape-language" studies. With the advent of modern imaging technologies, the question of human language uniqueness can now be addressed from a neurological perspective. In this presentation, I discuss the neurobiology of language from the standpoint of comparative studies on the evolution of Broca's and Wernicke's areas in primates, notably chimpanzees.

BLRI Scientific Board


9h-9h45 Introduction, presentation of the Labex and its policy: Philippe and Jo

9h45-10h30 The CREX and the platforms: Thierry

10h30-11h Break

11h-11h45 Axis 1

11h45-14h Lunch + Posters

14h-14h45 Axis 2

14h45-15h30 Axis 3

15h30-16h Break

16h-16h45 Axis 4

16h45-17h30 Axis 5

If you talk to a man in a language he understands, that goes to his head. If you talk to him in his language, that goes to his heart


Albert Costa
We are constantly making decisions of many different sorts, from mundane ones, such as which clothes to wear every morning or where to go for lunch, to more consequential ones, such as whether we can afford the price of a nice holiday on a Pacific island or whether an investment plan is too risky; decision making is an everyday activity. It is well known that our decisions often depart from a purely rational cost-benefit economic analysis, and that they are in fact biased by several factors that prompt intuitive responses, which often drive the decision made. In this talk, I will describe several studies in which the language in which problems are presented has a pervasive effect on decision-making. These studies cover economic, moral and intellectual decisions. Together, the evidence suggests that a reduction in the emotional resonance prompted by the problem leads to a reduction in the impact of intuitive processes on decision-making. This evidence not only helps us understand the forces driving decision-making, but also has important implications for a world in which people are commonly faced with problems in a foreign language.

Gesture as a Window Onto Conceptualization.


Gale Stam
(National Louis University)
Salle de conférences B011, bât. B, 5 avenue Pasteur, Aix-en-Provence
According to McNeill (1992, 2005, 2012), gestures are as much a part of language as speech is. Together, gesture and speech develop from a 'growth point' that has both imagistic and verbal aspects. This model of verbal thought is "a 'language-imagery' or language-gesture dialectic" in which thought, language, and gesture develop over time and influence each other (McNeill, 2005, p. 25).
Research on both the perception of speech and gesture (Kelly, Kravitz & Hopkins, 2004) and the production of speech and gesture (Marstaller & Burianová, 2014) has shown that the same areas of the brain are involved in both. In addition, empirical research (e.g., Chase & Wittman, 2013; Goldin-Meadow, Wein, & Chang, 1992; Goldin-Meadow & Alibali, 1995; Iverson & Goldin-Meadow, 2005; McNeill & Duncan, 2000; Özçalışkan & Goldin-Meadow, 2005, 2009; Stam, 1998, 2006, 2008, 2010b, 2014) on co-speech gestures indicates that gestures provide information about speakers' thinking and conceptualizations that speech alone does not. Research on the light gestures can shed on the second language acquisition process and on second language teaching has been growing (for reviews, see Stam, 2013; Stam & McCafferty, 2008). One area in particular where gestures have been shown to provide an enhanced window onto the mind is that of motion events and thinking for speaking (Stam, 2007). This talk will discuss how gestures allow us to see speakers' conceptualizations in first language and second language thinking for speaking. It will present evidence from several studies (Stam, 2010a, 2015).

Reverse engineering early language learning


Emmanuel Dupoux
(Ecole des Hautes Etudes en Sciences Sociales, Laboratoire de Sciences Cognitives et Psycholinguistique)
Salle des Voûtes, Fédération de Recherche 3 C, 3 place Victor Hugo, Marseille
Decades of research on early language acquisition have documented how infants quickly and robustly acquire their native tongue(s) across large variations in their input and environment. The mechanisms that enable such a feat remain, however, poorly understood. The proposal here is to supplement experimental investigations with a quantitative approach based on tools from machine learning and language technologies, applied to corpora of infant-directed input. I illustrate the power of this approach through a reanalysis of some previous claims regarding the nature and function of Infant-Directed as opposed to Adult-Directed Speech (IDS vs. ADS). I also revisit current ideas about the learning of phoneme categories, a problem long thought to involve only bottom-up statistical learning. In contrast, I show that a bottom-up strategy does not scale up to real speech input, and that phoneme learning requires the joint learning not only of phonemes and word forms but also of prosodic and semantic representations. I discuss a global learning architecture in which provisional linguistic representations are gradually learned in parallel, and present some predictions for language learning in infants.

To inhibit or not to inhibit during bilingual language control.


Mathieu Declerck
Salle des Voûtes, Fédération de Recherche 3 C, 3 place Victor Hugo, Marseille
One of the major topics in the language control literature specifically, and in the bilingual literature in general, is inhibition, which entails the reduction of non-target language activation and thus interference resolution. In this talk I will discuss the existing evidence for inhibitory control processes at work during language switching, a commonly used task for investigating the mechanisms underlying language control. More specifically, asymmetrical switch costs, n-2 language repetition costs, and reversed language proficiency effects in mixed-language blocks will be discussed in relation to inhibition. Finally, since several models assume little to no implementation of inhibition in highly proficient bilinguals, the role of language proficiency will also be considered.

Do visual and attentional factors predict reading skills?


Veronica Montani
LPC, Campus St. Charles, 3 place Victor Hugo, Marseille
Visual-attentional abilities play a prominent role in reading. Reading rate is constrained by the number of letters acquired at each fixation, i.e., the visual span, which in turn seems to be mainly determined by the effect of crowding. Spatial attention, on the other hand, is critically involved in the reading process, in particular in the phonological decoding of unfamiliar strings. I will briefly review studies that investigated the role of low-level processing factors in reading and their possible implication in reading disorders. Furthermore, I will present new data showing the distinct contributions of different visual-attentional factors to various reading measures, such as text reading, word naming and eye movements.

How do central processes cascade into peripheral processes in written language production?


Sonia Kandel
(LPNC & Gipsa-Lab Grenoble (Univ. Grenoble Alpes, CNRS))
16h Fédération de Recherche 3 C (Comportement, Cerveau, Cognition), 3 place Victor Hugo, Marseille
With the arrival of the internet, tablets and smartphones, many people spend more time writing than speaking (email, chat, SMS, etc.). Despite the importance of writing in our society, studies investigating written language production are scarce. In addition, most studies have investigated written production either from a central point of view (i.e., spelling processing) or from a peripheral approach (i.e., motor production), without questioning their relation. We believe, instead, that central and peripheral processing cannot be investigated independently: there is a functional interaction between spelling and motor processing. The production of a letter does not merely depend on its shape, and its specifications for stroke order and direction, but also on the way we encode it orthographically. For example, the movements producing the letters PAR in the orthographically irregular word PARFUM (perfume) differ from those in the regular word PARDON (pardon). Spelling processes cascade into motor production: the nature of the spelling processes activated before movement initiation determines how the cascade operates during movement production, and lexical and sub-lexical processes do not spread into motor execution to the same extent.

Probabilistic Graphical Models of Dyslexia


('Sagol' school of neuroscience, Tel-Aviv University)
11h Salle de conférences B011, bât. B 5 avenue Pasteur, Aix-en-Provence
Reading is a complex cognitive faculty, and errors in it take diverse forms. To capture the complex structure of reading errors, we propose a novel way of analyzing them using probabilistic graphical models. Our study focuses on three inquiries: (a) we examine which graphical model best captures the hidden structure of reading errors; (b) we draw on the results of (a) to resolve a theoretical debate on whether dyslexia is a monolithic or heterogeneous disorder; (c) we examine whether a graphical model can diagnose dyslexia similarly to how experts do. We explore three different models: an LDA-based model and two Naïve Bayes models, which differ in their assumptions about the generative process of reading errors. The models are trained on a large corpus of reading errors. Our results show that the LDA-based model best captures patterns of reading errors and may therefore contribute to the understanding of dyslexia and to the diagnostic procedure. We also demonstrate that patterns of reading errors are best described by a model assuming multiple dyslexia subtypes, thereby supporting the heterogeneous view of dyslexia. Finally, a Naïve Bayes model, which shares assumptions with diagnostic practice, best replicates the labels given by clinicians and can therefore be used to automate the diagnosis process.
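The Naïve Bayes idea mentioned above can be illustrated with a toy multinomial Naïve Bayes classifier over reading-error counts. This is only a sketch: the error categories, subtype labels and training data below are invented for illustration and are not the study's corpus or models.

```python
import math
from collections import Counter

# Hypothetical data: each reader is a bag of reading-error types,
# labelled with an (assumed) dyslexia subtype.
TRAIN = [
    (Counter(letter_migration=8, omission=1), "letter-position"),
    (Counter(letter_migration=6, substitution=2), "letter-position"),
    (Counter(neglect_left=7, omission=2), "neglect"),
    (Counter(neglect_left=9, substitution=1), "neglect"),
]

def fit(data, alpha=1.0):
    """Estimate log-priors and Laplace-smoothed per-class log-likelihoods."""
    vocab = {e for errors, _ in data for e in errors}
    classes = {label for _, label in data}
    counts = {c: Counter() for c in classes}
    n_docs = Counter(label for _, label in data)
    for errors, label in data:
        counts[label].update(errors)
    log_prior = {c: math.log(n_docs[c] / len(data)) for c in classes}
    log_like = {}
    for c in classes:
        total = sum(counts[c].values()) + alpha * len(vocab)
        log_like[c] = {e: math.log((counts[c][e] + alpha) / total) for e in vocab}
    return vocab, log_prior, log_like

def predict(errors, vocab, log_prior, log_like):
    """Most probable subtype for a new reader's error profile."""
    def score(c):
        return log_prior[c] + sum(n * log_like[c][e]
                                  for e, n in errors.items() if e in vocab)
    return max(log_prior, key=score)

vocab, log_prior, log_like = fit(TRAIN)
print(predict(Counter(letter_migration=5, omission=1), vocab, log_prior, log_like))
```

Each reader's profile is scored as log-prior plus summed per-error log-likelihoods; Laplace smoothing keeps unseen error types from zeroing out a class.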

The influence of expertise on perception, cognition, and brain connectivity


Stefan Elmer
(Universität Zürich)
11h Salle de conférences B011, bât. B 5 avenue Pasteur, Aix-en-Provence
A better understanding of the neural underpinnings of exceptional perceptual and cognitive skills has important educational, societal and clinical implications (for example, in the context of developmental dyslexia, aphasia, and foreign language learning). Here, I will present recent data collected from professional musicians with and without absolute pitch, as well as from simultaneous language interpreters, to reveal how expertise and training influence the functional and structural malleability of perceptual and cognitive subdivisions of the human brain. In the same context, I will also provide some evidence for transfer effects from musicianship to specific aspects of speech processing. Finally, since there is currently no doubt that perceptual and cognitive functions do not work in isolation but are embedded in neuronal assemblies consisting of networks influencing each other reciprocally, I will propose some novel methodological approaches for evaluating functional and structural connectivity within small-scale perceptual-cognitive networks in musicians with and without absolute pitch.

Towards an Online Rhyming Dictionary for Mexican Spanish


Alfonso Medina
14h LIA, chemin des Meinajariès 84911 Avignon cedex 9 (Labex BLRI)
Rhyming dictionaries are a kind of reverse dictionary: they group words according to rhyming patterns. Rhymes can share exact sequences of vowel and consonant sounds towards the end of a word (consonant rhyme) or just similar vowel sounds (assonant rhyme). These dictionaries are thus based on pronunciation, not on spelling patterns. Also, since consonance and assonance depend on the stressed syllable, words that end with a stressed syllable are grouped together, those whose stressed syllable is the next-to-last appear together, and so on.

In addition, word pronunciation may vary over time and across geographical and social dialects. In Spanish, this is particularly clear when loanwords (for instance, Anglicisms and Gallicisms) are considered. In fact, they tend to keep their original spelling, at least in the Mexican variant, which is the most widely spoken one. For example, the following loanwords, common in Mexican Spanish, rhyme: flash, collage, garage, cottage, squash. Their last syllable is stressed, and they are ordered in reverse according to their sounds and not their letters (respectively, /fláʃ/, /ko.láʃ/, /ga.ráʃ/, /ko.táʃ/ and /es.kwáʃ/).

The project described here takes the current nomenclature of the Diccionario del español de México to automatically generate a rhyming dictionary. Also, since the results of an online query to such a dictionary can be quite large, a procedure was developed to rank them semantically. The idea is to measure the similarity of the query word's definition to each of the definitions of the rhyming words; these words are then ordered from highest to lowest similarity to the query.
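The grouping principle described above, keying each word on the segment from its stressed vowel to the end, can be sketched in a few lines. The transcription format, function names and data below are illustrative assumptions, not the project's actual code.

```python
from collections import defaultdict

# Hypothetical syllabified transcriptions: "." separates syllables and the
# acute accent marks the stressed vowel.
STRESSED = "áéíóú"
PLAIN = dict(zip(STRESSED, "aeiou"))

def rhyme_key(transcription):
    """Segment from the stressed vowel to the end of the word; words sharing
    this key form a consonant (full) rhyme group."""
    flat = transcription.replace(".", "")
    for i, ch in enumerate(flat):
        if ch in STRESSED:
            return PLAIN[ch] + flat[i + 1:]
    return flat  # no marked stress: fall back to the whole word

WORDS = {"flash": "fláʃ", "collage": "ko.láʃ", "garage": "ga.ráʃ",
         "cottage": "ko.táʃ", "squash": "es.kwáʃ"}

groups = defaultdict(list)
for word, ipa in WORDS.items():
    groups[rhyme_key(ipa)].append(word)
print(dict(groups))  # all five loanwords share the key "aʃ"
```

An assonant-rhyme key could be derived the same way by keeping only the vowels of this segment.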

The bilingual brain: Plasticity and processing from cradle to grave


Manuel Carreiras

(Basque Center on Cognition, Brain and Language, Donostia-San Sebastián, Spain; IKERBASQUE, Basque Foundation for Science, Bilbao, Spain)
Fédération de Recherche 3 C (Comportement, Cerveau, Cognition), 3 place Victor Hugo, Marseille
Most people either learn more than one language from birth or invest a lot of time and effort in learning a second language. Bilingualism and second language learning are thus an interesting case for investigating cognitive and brain plasticity. In this talk I will describe behavioral and neuroimaging evidence on the cognitive and brain mechanisms that adults and infants (monolinguals, bilinguals and second language learners) use for processing language. In particular, I will address whether proficient second language learners use similar or different brain mechanisms during processing, and what the neural consequences (structural and functional) of dealing with two languages are.

MEG data analysis using regions of interest


Valérie Chanoine
(Labex BLRI)
Salle de conférences B011, bât. B 5 avenue Pasteur, Aix-en-Provence
This tutorial looks at a type of MEG data analysis known as region-of-interest (ROI) analysis.
Using the "Brainstorm" software, we present two approaches to ROI analysis.
The first relies on functional regions identified in the task performed during the MEG recording, while the second relies on anatomical regions defined from a reference brain atlas.

Matlab and Brainstorm level: beginner.

Fan Cao, Chotiga Pattamadilok, Johannes Ziegler


Fan Cao (1), Chotiga Pattamadilok (2), Johannes Ziegler (3)
((1)Michigan State University), (2) LPL UMR7309 CNRS AMU, (3) LPC UMR7290 CNRS AMU)
9h30-12h30 Salle de conférences B011, bât. B 5 avenue Pasteur, Aix-en-Provence

Cross-linguistic and neurolinguistic perspectives on reading and speech processing

Neural specialization and reading ability 
Fan Cao (Michigan State University) 

The brain becomes specialized through exposure to the environment. One piece of evidence comes from how the language system shapes brain function. In a cross-linguistic developmental study, we show growing divergence between Chinese reading and English reading from children to adults. We found that specialization is positively correlated with proficiency: for example, there is reduced specialization in children with reading disability. Another example is the proficiency effect in bilinguals, where we found greater specialization with higher proficiency in a group of late Chinese-English bilinguals. We also found that specialization can be facilitated by providing more effective instruction. In a series of training studies, we compared writing training with visual-only learning in English learners of Chinese, and found that writing training evoked a more native-like brain network, suggesting greater specialization and accommodation. In summary, the brain becomes specialized with language experience, and optimal instruction promotes the process of specialization.

How does learning to read modify speech processing? Chotiga Pattamadilok (Laboratoire Parole et Langage) 

Behavioral and brain imaging studies have demonstrated that learning to read and write changes the way the brain processes spoken language. However, the cognitive and neural mechanisms underlying this modification are still under debate. Two complementary hypotheses have been proposed. According to the "online" account, strong connections between spoken and written language result in the automatic co-activation of both codes when one processes language, such that hearing a spoken word activates, in real time, its corresponding written form, and vice versa. According to the "offline" or developmental account, learning to read induces more profound changes within the spoken language system itself, probably by restructuring the nature of the phonological representations. Evidence supporting both hypotheses will be discussed.

A cross-language perspective on reading, reading development and dyslexia Johannes Ziegler (Laboratoire de Psychologie Cognitive) 

Many theories assume that different languages or writing systems afford different reading styles. One idea that has been around since the early 70s is that opaque writing systems favor a "Chinese" style of reading (a direct route to meaning) whereas transparent writing systems favor a "Phoenician" style (an indirect route that is phonologically mediated). However, research on reading development and dyslexia across languages draws a different picture, one in which the core reading processes are very similar across languages. The main differences are related to consistency and orthographic complexity: these variables affect the granularity of the computations rather than the computations themselves.

BLRI Labex Day


9:00 Welcome

9:30 Introduction to the day

9:45-10:15 Agnès Trebuchon: Organization and Disorganization of the Cerebral Dynamics of Language

10:30-11:00 Maud Champagne-Lavau: Pragmatic aspects of language and theory of mind in healthy subjects and in schizophrenia

MEG Catsem project: from experimental design to statistical analyses for averaging and source localization


Valérie Chanoine, Christelle Zielinski
(Labex BLRI)
LPL, salle de conférences B011, 5 av. Pasteur, Aix-en-Provence
In the context of a BLRI project carried out with magnetoencephalography (MEG), we offer a practical review of the usual steps in MEG data processing. After a brief description of the experimental design (a semantic decision task with visual or auditory word presentation), we will draw on two software packages commonly used for MEG (namely "Fieldtrip" and "Brainstorm") to cover, in a very hands-on way, MEG data preprocessing (filtering, signal cleaning), averaging (yielding evoked fields), and the visualization of brain activity through topographies or source localization. We will also present a statistical tool provided by the "Brainstorm" software for performing multiple comparisons.

Beginner level in MEG and statistics.
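The tutorial itself works in Fieldtrip and Brainstorm (Matlab). As a language-neutral illustration of the averaging step it covers, here is a minimal Python sketch in which baseline-corrected trials are averaged so that noise shrinks as 1/sqrt(n) and an evoked response emerges; all numbers are invented.

```python
import math
import random

random.seed(0)
fs, n_trials, n_times = 1000, 100, 600
t = [i / fs for i in range(n_times)]

# A synthetic single-channel "evoked field": a Gaussian peak around 170 ms
# (arbitrary units) buried in strong trial-by-trial noise.
signal = [3.0 * math.exp(-((ti - 0.17) / 0.03) ** 2) for ti in t]
trials = [[s + random.gauss(0.0, 5.0) for s in signal] for _ in range(n_trials)]

# Baseline correction: subtract each trial's mean over its first 50 samples,
# standing in for a pre-stimulus window.
for trial in trials:
    base = sum(trial[:50]) / 50
    for i in range(n_times):
        trial[i] -= base

# Averaging across trials shrinks the noise by 1/sqrt(n_trials),
# which is what lets the evoked response emerge.
evoked = [sum(trial[i] for trial in trials) / n_trials for i in range(n_times)]
peak_latency = t[max(range(n_times), key=lambda i: abs(evoked[i]))]
print(f"evoked peak near {peak_latency * 1000:.0f} ms")
```

A single trial is dominated by noise here, yet the average recovers a clear peak near the true latency.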

Should Verbal Behavior have been burned?


Marc Richelle
(University of Liège, Belgium)
Salle des Voûtes, Fédération de Recherche 3 C, 3 place Victor Hugo, Marseille
Starting from my personal experience of encountering Verbal Behavior (and its author), and from the vicissitudes the book has gone through, notably its marginalization, even more pronounced in the French-speaking world than elsewhere, I will try to restore its significance, and also to understand the reasons, beyond Chomsky's critique, that have made it a largely overlooked text.

Introduction to preprocessing in EEGLAB


Deirdre Bolger
(Labex BLRI)
14h Salle de cours A003, bât. A 5 avenue Pasteur, Aix-en-Provence
In this presentation we will introduce the processing of electrophysiological data using EEGLAB, the Matlab toolbox. We will cover EEG data preprocessing using basic methods: filtering, segmentation and baseline correction, and averaging at the individual and group levels. Since EEGLAB relies heavily on ICA (Independent Component Analysis), we will present the use of this method for correcting certain artifacts frequently encountered in electrophysiological signals. We will also show how to create a "batch" that automates the preprocessing steps. This presentation is aimed at anyone interested in processing electrophysiological data with EEGLAB. Knowledge of Matlab is not required, but it will help in understanding how the batches are created.

Matlab and EEGLAB level: beginner.
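The "batch" idea, a fixed sequence of preprocessing steps applied identically to every dataset, can be sketched outside EEGLAB. The Python toy below chains a crude moving-average filter, epoching around event onsets, baseline correction and averaging; the function names and parameters are stand-ins, not EEGLAB's API.

```python
def moving_average(x, width=3):
    """Crude low-pass filter: centered moving average."""
    half = width // 2
    out = []
    for i in range(len(x)):
        window = x[max(0, i - half): i + half + 1]
        out.append(sum(window) / len(window))
    return out

def epoch(x, onsets, pre=3, post=4):
    """Cut a window of pre+post samples around each event onset."""
    return [x[o - pre: o + post] for o in onsets
            if o - pre >= 0 and o + post <= len(x)]

def baseline_correct(ep, n_base=2):
    """Subtract the mean of the first n_base (pre-stimulus) samples."""
    base = sum(ep[:n_base]) / n_base
    return [v - base for v in ep]

def run_batch(x, onsets):
    """Filter -> epoch -> baseline-correct -> average, in a fixed order."""
    filtered = moving_average(x)
    epochs = [baseline_correct(e) for e in epoch(filtered, onsets)]
    return [sum(e[i] for e in epochs) / len(epochs)
            for i in range(len(epochs[0]))]

# A flat signal with identical spikes at two known onsets:
sig = [0.0] * 40
for onset in (10, 30):
    sig[onset] = 4.0
erp = run_batch(sig, onsets=[10, 30])
print(erp)
```

Because every step is a plain function, running the same pipeline over many subjects' files is just a loop over `run_batch`, which is the point of batching.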

Neuroanatomical correlates of developmental dyslexia.


Irène Altarelli
(Brain and Learning Lab., University of Geneva)
Fédération de Recherche 3 C, Salle des Voûtes, 3 place Victor Hugo, Marseille
Developmental dyslexia is a specific learning disorder that impacts reading abilities despite normal education, intelligence and perception. The aim of the present work is to determine its neuroanatomical correlates, with the broader goal of identifying associations between genetic variants, brain anatomy and cognitive impairments. To this end, three studies were conducted, comparing magnetic resonance images of dyslexic and control subjects. In a first study, we analysed a variety of cortical measures with both a region-of-interest and a global vertex-by-vertex approach. In a second study, we focused on the ventral temporo-occipital regions, looking at the structure of functionally defined areas. We defined, subject by subject, the location of cortical regions preferentially responding to written words, faces or houses. A cortical thickness reduction in dyslexic subjects was observed in the left-hemisphere word-responsive region, an effect exclusively driven by dyslexic girls. Finally, in a third study we examined the anatomical asymmetry of the planum temporale, a region whose importance in dyslexia has been widely debated. By manually labelling this structure, we observed an abnormal pattern of asymmetry in dyslexic boys only. To conclude, a number of anatomical correlates of dyslexia have emerged from the work presented here, offering a better characterisation of its brain basis. Importantly, our results also stress the importance of gender, a long-neglected factor in dyslexia research.

Introduction to data processing with Matlab


Deirdre Bolger, Christelle Zielinski
5 avenue Pasteur, Aix-en-Provence
The goal of this tutorial is to provide the basics needed to use Matlab for data processing. In particular, we will show how to access data, visualize it, and represent it in the time and frequency domains. We will focus on handling electroencephalography data, notably with the EEGlab toolbox. This tutorial is aimed at Matlab beginners.

Prosodic and Social Dimensions of Entrainment in Dialogue


Julia Hirschberg
(Columbia University)
5 avenue Pasteur, Aix-en-Provence
When people speak together, they often adapt aspects of their speaking style to the style of their conversational partner. This phenomenon goes by many names, including adaptation, alignment, and entrainment, inter alia. In this talk, I will describe experiments on prosodic entrainment in English and Mandarin in the Columbia Games Corpus and the Tongji Games Corpus, large corpora of speech recorded from subjects playing a series of computer games. I will discuss how prosodic entrainment is related to turn-taking behaviors and to several measures of task and dialogue success. I will also discuss experiments relating entrainment to several social dimensions, including likeability and dominance. This is joint work with Stefan Benus, Agustín Gravano, Ani Nenkova, Rivka Levitan, Laura Willson, and Zhihua Xia.

Sensorimotor processing of speech: brain-inspired approaches to automatic speech recognition


Luciano Fadiga, Alessandro D'Ausilio, Leonardo Badino

(University of Ferrara and Istituto Italiano di Tecnologia, Genoa)
11h Salle de conférences B011, bât. B 5 avenue Pasteur, Aix-en-Provence

14h30 Salle de conférences B011, bât. B 5 avenue Pasteur, Aix-en-Provence

A working meeting will be held with the speakers

The signal in EEG and MEG


Deirdre Bolger*, Valérie Chanoine*, Christelle Zielinski*, Thierry Legou**
(*BLRI, **LPL)
14h Salle de conférences B011, bât. B 5 avenue Pasteur, Aix-en-Provence

Electroencephalography (EEG) and magnetoencephalography (MEG) are two non-invasive surface techniques for tracking the electrical activity of the brain. They rely to a large extent on signal processing.
In this tutorial, we introduce the basics of signal processing: the characteristics of a signal, its time-domain representation (Shannon's sampling theorem) and its frequency-domain representation (the Fourier transform).
The principles of filtering (FIR and IIR filters) and of signal averaging, as applied to EEG and MEG data, will then be covered.
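As a small illustration of the two representations discussed, time-domain samples taken above the Nyquist rate and the frequency-domain view given by the Fourier transform, here is a direct DFT sketch in Python; the sampling rate and component frequencies are arbitrary choices.

```python
import cmath
import math

fs = 200                       # sampling rate (Hz), > 2 * highest frequency (Shannon)
n = 200                        # one second of signal
x = [math.sin(2 * math.pi * 10 * t / fs)            # 10 Hz component
     + 0.5 * math.sin(2 * math.pi * 40 * t / fs)    # 40 Hz component
     for t in range(n)]

def dft(x):
    """Direct O(n^2) discrete Fourier transform."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

spectrum = [abs(c) for c in dft(x)]
# With n = fs, bin k corresponds to k Hz; the two components dominate:
peaks = sorted(range(n // 2), key=lambda k: spectrum[k], reverse=True)[:2]
print(sorted(peaks))
```

The same decomposition is what underlies the spectral view of EEG/MEG signals; in practice one would use an FFT rather than this direct form.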

A Vocal Brain: Cerebral Processing of Voice Information


Pascal Belin
(Institut des Neurosciences de La Timone, Marseille, France)
11h Salle de conférences B011, bât. B 5 avenue Pasteur, Aix-en-Provence
The human voice carries speech but also a wealth of socially-relevant, speaker-related information. Listeners routinely perceive precious information on the speaker's identity (gender, age), affective state (happy, scared), as well as more subtle cues on perceived personality traits (attractiveness, dominance, etc.), strongly influencing social interactions. Using voice psychoacoustics and neuroimaging techniques, we examine the cerebral processing of person-related information in perceptual and neural voice representations. Results indicate a cerebral architecture of voice cognition sharing many similarities with the cerebral organization of face processing, with the main types of information in voices (identity, affect, speech) processed in interacting, but partly dissociable functional pathways.

The poetry of synapses, or how words give us pleasure or fear


Arthur Jacobs
(Freie Universität Berlin, Dahlem Institute for Neuroimaging of Emotion (D.I.N.E.))
16h Salle des Voûtes Fédération de Recherche 3 C (Comportement, Cerveau, Cognition), 3 place Victor Hugo, Marseille
Reading is not merely an information-processing activity: it involves affective and aesthetic responses that go well beyond what current models of reading describe. Words can please us or hurt us; texts can make us happy or make us cry. But how can "symbolic" stimuli evoke emotional responses? Which neurocognitive processes underlie them, and how do emotional reactions to fictional narratives differ from reactions to factual ones? In this seminar I address these questions within the framework of a neuropoetic model of literary reading that integrates elements of rhetoric, aesthetics and cognitive poetics with concepts from neurolinguistics and psychonarratology (Jacobs, 2011; 2013). The model's predictions are discussed in the light of data from empirical studies on word recognition, poetry reception, and sentence and text comprehension.

Language Pathology axis


14h - 18h Salle de conférences B011, bât. B 5 avenue Pasteur, Aix-en-Provence
Dyslexia and speech therapy: how can research inform clinical practice?

14:00 Introduction: Antoine Giovanni & Johannes Ziegler
14:20-14:50 Michel Habib: Neuroanatomy of dyslexia: latest contributions to understanding the mechanisms of learning disorders
14:50-15:20 Liliane Sprenger-Charolles: Are there subtypes of dyslexia? If so, which ones?
15:20-15:30 BREAK

15:30-16:00 Stéphanie Bellocchi & Stéphanie Ducrot: Dyslexia and visuo-attentional deficits

16:00-16:30 Julie Chobert & Mireille Besson: Dyslexia and music, new avenues for intervention

16:00-16:10 BREAK

16:10-16:40 Pascale Colé: Language abilities in dyslexic adults

16:40-17:00 Johannes Ziegler: Learning to read and dyslexia: heterogeneity and the contribution of computational modelling

17:00-18:00 Round table, general discussion: research and clinical practice

Reinforcement learning: from modelling neural processes to robotic applications


Mehdi Khamassi
(Institut des Systèmes Intelligents et de Robotique, UPMC)
10h Centre d'Enseignement et de Recherche en Informatique (CERI), chemin des Meinajariès 84911 Avignon cedex 9 (Labex BLRI)
For some fifteen years, the phasic activity of dopaminergic neurons has been regarded as the neural substrate of reward-prediction-error (RPE) signals. These signals turn out to be very close to the error signals generated by reinforcement-learning (RL) algorithms. Moreover, numerous studies have shown that RL algorithms describe animal and human learning well in Pavlovian conditioning tasks. This has led to the development of a growing number of computational reinforcement-learning models of the neural processes underlying such learning. The first part of the talk will introduce these models and their formalism, and present data showing that they capture certain learning-related brain activity well.
However, these reinforcement-learning models are beginning to show their limits, one of which is scaling up to the real world. Once we leave the perfect, simplified simulations of laboratory tasks and consider the realistic setting of a robot interacting with its environment, these algorithms struggle to cope with noise, uncertainty and delays. Applying these learning models to robot control shows that achieving good performance additionally requires hypotheses about how they interact with other learning systems and with other cognitive processes such as perception, mapping and navigation. In particular, I will show how algorithms combining two types of learning, known as model-free and model-based, give robots better behavioural performance and also account for a larger set of experimental data, especially in navigation tasks.
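To make the RPE/RL correspondence concrete, here is a minimal model-free temporal-difference learner on a toy two-armed bandit; the prediction error `delta` plays the role attributed to phasic dopamine. All parameters and reward probabilities are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
p_reward = [0.8, 0.2]          # hypothetical reward probabilities of the two actions
Q = np.zeros(2)                # learned action values
alpha, beta = 0.1, 3.0         # learning rate, softmax inverse temperature

for _ in range(2000):
    probs = np.exp(beta * Q) / np.exp(beta * Q).sum()   # softmax action selection
    a = rng.choice(2, p=probs)
    r = float(rng.random() < p_reward[a])
    delta = r - Q[a]           # reward-prediction error (RPE)
    Q[a] += alpha * delta      # error-driven value update
```

After training, the learned values approximate the true reward probabilities, which is the sense in which such models "describe" conditioning data.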

(Direct and inverse) reinforcement learning for interactive systems


Olivier Pietquin
(SequeL team, University Lille 1, LIFL CNRS UMR 8022, INRIA Lille)
Centre d'Enseignement et de Recherche en Informatique (CERI), chemin des Meinajariès 84911 Avignon cedex 9 (Labex BLRI)
Reinforcement learning is a branch of machine learning distinguished from the others by its objective: optimizing a sequence of decisions, taking into account the temporal and, above all, goal-directed nature of behaviour. This biologically inspired method is based on the machine accumulating numerical rewards delivered after each decision. The learned behaviour is the one that maximizes accumulated reward in the long run, yielding an optimal sequence of decisions. This learning paradigm was introduced into the field of spoken dialogue systems about fifteen years ago to optimize interaction strategies. Such systems must decide which dialogue acts to produce at each turn of an interaction with a user. These decisions should lead to the most natural and efficient interaction possible, even though the information gathered is error-prone (owing to imperfect recognition and understanding of spoken language). It is difficult to define formally what a perfect interaction would be; a user can, however, provide an a posteriori evaluation of the interaction that serves as a reward signal. Nevertheless, several problems still stand in the way of using these methods effectively for human-machine interaction. One of them is defining the reward to give the machine so that it behaves naturally: using user satisfaction has shown its limits and is difficult to predict automatically.
In this talk, we will present the inverse reinforcement learning paradigm, which aims to estimate the reward function optimized by a (presumed optimal) human operator and to transfer it to the machine so as to obtain similar behaviour in an interaction task.
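As a rough sketch of the inverse-RL idea, one family of methods (feature matching, in the spirit of apprenticeship learning) estimates a linear reward from the states an expert visits more often than a baseline would. The states, features and visit counts below are entirely invented toy numbers, and this is only the first projection step of such an algorithm, not a full implementation:

```python
import numpy as np

phi = np.eye(4)                               # one-hot features for 4 toy states
expert_visits = np.array([1, 2, 10, 37])      # hypothetical expert state-visit counts

mu_expert = (expert_visits / expert_visits.sum()) @ phi   # expert feature expectations
mu_uniform = np.full(4, 0.25) @ phi                        # uniform-baseline expectations

w = mu_expert - mu_uniform                     # reward weights, one projection step
reward = phi @ w                               # recovered per-state reward
```

The recovered reward is highest for the state the expert frequents most, which is exactly the signal an interactive system would then optimize in place of a hand-crafted satisfaction score.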

Workshop on EEG source analysis: theory and practice V


Eduardo Martinez-Montes
(Head of the Neuroinformatics Department, Cuban Neuroscience Center)
10h - 17h Fédération de Recherche 3 C (Comportement, Cerveau, Cognition), 3 place Victor Hugo, Marseille (Labex BLRI)
Morning 1st part: Theoretical aspects
9:30 am - 12:30 pm

1. The measurements: Genesis EEG/MEG
2. Forward Problem
3. Inverse Problem
4. Minimum Norm / Low Resolution Electromagnetic Tomography (LORETA) in detail
5. Source localization with Multiple Penalized Least Squares regression
6. Bayesian Model Averaging in detail

Afternoon 2nd part: Practical issues:
General Pipeline for EEG source localization
2:30 pm - 5:00 pm

1. Data recording and preprocessing
2. Head model definition: Forward Problem
Electrodes positions
Grid inside the brain
Electric Lead Field Computation
3. Source localization: Inverse Problem
4. Visualization
Discussion: specific problems in source analysis...
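The inverse-problem step of the pipeline can be sketched in a few lines. Below is a minimal minimum-norm estimate, J = Lᵀ(LLᵀ + λI)⁻¹m, where a random matrix stands in for the real lead field that the head model would provide; dimensions and the noise level are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_sources = 32, 200
L = rng.normal(size=(n_sensors, n_sources))     # hypothetical lead-field matrix

j_true = np.zeros(n_sources)
j_true[50] = 1.0                                # one active source
m = L @ j_true + rng.normal(0, 0.01, n_sensors) # simulated sensor measurements

lam = 1e-2 * np.trace(L @ L.T) / n_sensors      # Tikhonov regularization parameter
J = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_sensors), m)
```

Real solvers (LORETA and the penalized-regression and Bayesian variants on the programme) differ mainly in how they replace or augment this L2 penalty.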

If you are interested, please send an email to:

Electrophysiological tools to assess brain activity


Eduardo Martinez-Montes
(Head of the Neuroinformatics Department, Cuban Neuroscience Center)
15h Salle des Voûtes Fédération de Recherche 3 C (Comportement, Cerveau, Cognition), 3 place Victor Hugo, Marseille (Labex BLRI)
The development of a wide variety of neuroimaging methods based on Magnetic Resonance Imaging has opened new ways for studying brain organization and functioning with high spatial resolution. It is also becoming increasingly clear that the complex functions of the human brain such as cognition, language, attention and several pathologies are not fully explained by only considering the activation of spatially fixed cerebral structures. Rather we need to consider brain functioning as networks of structures that dynamically interact through electrical signals coded by different frequencies. To thoroughly investigate the dynamics of such networks it is important to also use other techniques such as EEG and MEG, which provide direct measurements of the brain electrical activity with high temporal resolution. In this talk, we will present classical and novel strategies for the analysis of EEG\MEG data, from statistical measures for detection of the Event Related Brain Dynamics, to methods of dimensionality reduction and the combination of multidimensional analysis with source localization methods. We will describe new approaches for obtaining space-time-frequency characterization of brain electrical activity at the level of neural sources.
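One of the dimensionality-reduction strategies mentioned above can be illustrated with a truncated SVD: a single spatio-temporal "component" (one topography times one time course) is recovered from noisy multichannel data. The channel count, frequency and noise level are toy values:

```python
import numpy as np

rng = np.random.default_rng(2)
n_channels, n_times = 16, 300
topography = rng.normal(size=n_channels)                  # spatial pattern
time_course = np.sin(2 * np.pi * 5 * np.linspace(0, 1, n_times))  # 5 Hz activity
X = np.outer(topography, time_course) + rng.normal(0, 0.3, (n_channels, n_times))

U, s, Vt = np.linalg.svd(X, full_matrices=False)
X1 = s[0] * np.outer(U[:, 0], Vt[0])          # best rank-1 approximation of X
```

The leading right singular vector recovers the underlying time course (up to sign), which is the basic operation behind many multidimensional EEG/MEG decompositions.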

Vocalisations of Captive Guinea Baboons


Caralyn Kemp
(Labex BLRI)
11h salle de conférences B011, bât. B 5 avenue Pasteur, Aix-en-Provence
As part of a larger study investigating vocal production in Guinea baboons, I have been examining the vocalisations of a captive group at the CNRS primate station in Rousset. The main goal of this aspect of the project was to produce a large-scale database in order to 1) characterise the vocal repertoire of this baboon species, 2) determine the acoustic features of the vocalisations, and 3) test the descriptive adequacy of existing categories. Twelve vocalisations were distinguishable by ear, but not all vocalisations were produced by all age and sex groups. A single vocalisation type could occur in different contexts and some vocalisations had a large degree of variability. This database will be useful for new researchers and care staff who work with Guinea baboons in captivity as well as aiding in our understanding of the evolution of vocal communication across baboon taxa and within primate genera.

Labex BLRI Day


salle de conférence B0011, bât. B 5 avenue Pasteur, Aix-en-Provence
During this day, each research axis will present a scientific review of its topic, reporting on the state of our knowledge in the area, identifying obstacles, and proposing research perspectives. The goal of the day is to produce a first synthesis that will allow us to set a direction for our work and put forward proposals for action. In parallel, the first projects funded by the BLRI will be presented as posters.

What Freud got right about speech errors


Gary S. Dell
(University of Illinois, Urbana-Champaign, USA)
Salle des Voûtes, Fédération de Recherche 3 C, 3 place Victor Hugo, Marseille
Most people associate Sigmund Freud with the assertion that speech errors reveal repressed thoughts, a claim that does not have a great deal of support. I will mention some other things that Freud said about slips, showing that these, in contrast to the repression notion, do fit well with some modern theories of language production. I will illustrate using the interactive two-step theory of lexical access during production, which we have used to understand aspects of aphasic speech error patterns.

Social meaning and speech perception


Benjamin Munson
(University of Minnesota, USA)
11h salle de conférence B0011, bât. B 5 avenue Pasteur, Aix-en-Provence
It is well established that listeners may categorize ambiguous sounds differently when they are led to believe something about the person who produced them, such as their age, social class, gender, or regional background (Hay, Drager & Nolan, 2006; McGowan 2011; Staum Casasanto 2008; Strand & Johnson 1996). This talk will review a set of studies designed to examine two aspects of this phenomenon. First, are the effects different depending on the specific social meaning ascribed to the variation (i.e., what is it about gender that makes listeners change their categorizations when the speaker is suggested to be a man or a woman?). Second, do these effects occur relatively early and automatically in processing, or do they reflect ambiguity resolution that occurs relatively late in processing?

Prosodic Constraints on Children's Variable Production of Grammatical Morphemes


Katherine Demuth
(Macquarie University, New South Wales, Australia)
Salle de conférences, B011, LPL, 5 av. Pasteur, Aix-en-Provence
Language acquisition researchers have long observed that children's early use of grammatical morphemes is highly variable. It is generally thought that this is due to incomplete syntactic or semantic representations. However, recent crosslinguistic research has found that the variable production of grammatical morphemes such as articles and verbal inflections is phonologically conditioned. Thus, children are more likely to produce grammatical morphemes in simple phonological contexts than in those that are more complex. This suggests that some of the variability in children's early production (and perception) of grammatical morphemes may be due to phonological context effects, and that some aspects of children's syntactic/semantic representations may be in place earlier than typically assumed. This raises important theoretical and methodological issues for investigating syntactic knowledge in L1 acquisition, but also in bilinguals, L2 children and adults, and those with language impairment (SLI, bilinguals, children with hearing loss). Implications for understanding the mechanisms underlying language processing, the 'perception-production' gap, and a developmental model of speech planning, are discussed.

Synergies in Language Acquisition


Mark Johnson
(Macquarie University, New South Wales, Australia)
10h Salle de conférence B011, bât.B 5 avenue Pasteur, Aix-en-Provence
Each human language contains an unbounded number of different sentences. How can something so large and complex possibly be learnt? Over the past decade and a half we've learned how to define probability distributions over grammars and the linguistic structures they generate, making it possible to define statistical models that learn regularities of complex linguistic structures. Bayesian approaches are particularly attractive because they can exploit "prior" (e.g., innate) knowledge as well as learn statistical generalizations from the input.

This talk compares two different Bayesian models of language acquisition. A staged learner learns the components of language independently of each other, while a joint learner learns them simultaneously. A joint learner can take advantage of synergistic dependencies between linguistic components to bootstrap acquisition in ways that a staged learner cannot. We use Bayesian models to show that there are dependencies between word reference, syllable structure and the lexicon that a learner could exploit to synergistically improve language acquisition.
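The core Bayesian move described above (combining prior knowledge with statistical evidence) can be shown in miniature. Here a toy learner weighs two hypothetical "grammars" that generate construction A with different probabilities, starting from a prior that slightly favors one of them; all numbers are invented:

```python
from math import comb

def likelihood(p, k, n):
    """Binomial likelihood of k uses of construction A in n utterances."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

prior = {"G1": 0.4, "G2": 0.6}         # prior belief (standing in for innate bias)
rates = {"G1": 0.7, "G2": 0.3}         # each grammar's probability of producing A
k, n = 8, 10                            # observed input: 8 uses of A in 10 utterances

post_unnorm = {g: prior[g] * likelihood(rates[g], k, n) for g in prior}
Z = sum(post_unnorm.values())
posterior = {g: v / Z for g, v in post_unnorm.items()}
```

Despite the prior favoring G2, the evidence overwhelms it; joint learners apply the same arithmetic simultaneously across several linked components.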

Is lexical selection by competition?


Robert Hartsuiker 
(Ghent University)
16h Salle des Voûtes Fédération de Recherche 3 C (Comportement, Cerveau, Cognition) 3 place Victor Hugo, Marseille (Labex BLRI)
There are several contrasting views on the mechanisms of lexical selection in language production. On one view, words compete with each other for selection, so that the time to select one word depends on the activation of competitors. This competitive view is often thought to be supported by semantic interference in picture-word tasks (name the picture, ignore the distractor word). But on another view, the time to select a word depends only on the activation of the highest activated lexical unit. This account is consistent with semantic facilitation in some versions of the picture-word task, but requires an additional mechanism to account for semantic interference effects. Our work of the last few years has tested whether this mechanism is one of self-monitoring and covert error repair. On this view, the distractor sometimes gets ahead of the picture name in the production process. To prevent the inadvertent naming of the distractor, it therefore needs to be filtered out covertly, and the more difficult it is to detect and rule out the distractor, the more naming will be delayed. To test this account, we have conducted behavioral and EEG experiments that manipulated parameters we suspect the self-monitoring system to be sensitive to, such as the lexical status of the distractor, context (i.e., composition of the list of stimuli), and even the taboo status of the distractor word. Based on my review of this evidence, I will argue that response exclusion by self-monitoring is a viable alternative to lexical selection by competition.

ERP studies on the processing of agreement in spoken language (talk in French)


Phaedra Royle
(École d'orthophonie et d'audiologie, Université de Montréal)
16h Salle de conférences B011, bât. B 5 avenue Pasteur, Aix-en-Provence (Labex BLRI)
To study the acquisition of gender-agreement processing in French, we developed an event-related potential (ERP) study with an auditory-visual design, pairing pictures with sentences that target agreement within the noun phrase. This domain of French is difficult to master, given the irregular nature of gender-agreement marking. The incongruent conditions contained gender errors on the determiner (Det-N, "la soulier brun") and on the adjective (N-Adj, "le soulier brune"), as well as semantically incongruent conditions in which the noun and the picture mismatched (e.g., the child sees [a brown shoe] and hears "a brown FISH"). Two groups took part in the study. The first, young French-speaking adults, were randomly assigned to two subgroups: with or without a task. This manipulation tested whether the ERPs obtained without a task were similar to those obtained with a grammaticality-judgment task, since the children would not be asked to make judgments. Both adult groups showed the expected ERPs (N400, LAN, P600), plus a PMN (phoneme mismatch negativity) in the semantic and Det-N conditions. The task effect was substantial, in particular for the P600, whose amplitude was considerably larger in the task group.

The second group of participants consisted of children aged 5 to 9 years (N=50). No task was required of them other than paying attention to the story being told (an alien learning French during her intergalactic journey to Quebec). Our preliminary analyses show that this paradigm elicits components in children that are both similar to (N400, P600) and different from (LAN) those of adults. The adult data show that grammaticality judgments are not a precondition for the appearance of certain linguistic ERP components in auditory tasks. The waveform differences between the semantic and (un)grammatical conditions support a dissociation between grammatical-checking processes and lexical access in agreement verification. The preliminary child data demonstrate the usefulness of this paradigm for studying the acquisition of grammatical and lexical-semantic processes in typically developing children, who may have mastered agreement yet still make production errors.

Prosodic phrasing and ambiguity resolution as revealed by brain potentials


Karsten Steinhauer
(McGill University, School of Communication Sciences & Disorders, Montréal)
16h Salle de conférences B011, bât. B 5 avenue Pasteur, Aix-en-Provence (Labex BLRI)
Prosodic phrasing has a major impact on our interpretation of utterances. For example, the sentence "Mary said Peter's brother was the nicest girl at the party" results in confusion, unless it is presented with prosodic boundaries before and after "said Peter's brother". Event-related brain potentials (ERPs) provide an excellent tool to investigate the temporal dynamics of language processing in real-time. In the domain of prosody, distinct ERP components immediately reflect both the processing of prosodic boundaries as well as the subsequent integration of prosody with other types of linguistic information. In my talk, I will give an overview of this research area. After a brief introduction to ERPs, I will review a number of auditory and visual ERP studies and address questions such as: How much time does our brain need to take advantage of prosodic cues? When do children's brains learn to use this information? Does prosodic information play a role during silent reading? Are the brain mechanisms underlying prosodic phrasing in speech comparable to those involved in musical phrasing? How do we integrate multiple (conflicting) boundaries within the same utterance?

Neurocomputation & Language Processing


Axis 1
9h30 Salle des Voûtes Fédération de Recherche 3 C (Comportement, Cerveau, Cognition) 3 place Victor Hugo (Labex BLRI)
Programme:
* 9:30-10:00: presentation of the objectives of the Neurocomputation & Language Processing axis
* 10:00-11:30: two presentations on deep learning
"Deep learning of orthographic representations in baboons", Thomas Hannagan
"Deep neural nets and signal labeling", Thierry Artières

* 11:30-12:00: conclusions and prospects for multidisciplinary collaborations on the axis theme.

Grunt, yak, wahoo: baboon speak.


Caralyn KEMP
16h Salle de conférences B011, bât. B 5 avenue Pasteur, Aix-en-Provence (Labex BLRI)
Primates vocalise to maintain contact with conspecifics, warn of predators, alert group members to food and to advertise territory, sexual availability and size, but we know surprisingly little about how and why these calls are produced. Can they be varied and is this context dependent? Are these calls vocal responses to emotional states or can they be produced voluntarily? How does the production of these calls compare to human speech? Studying these questions not only helps us to understand what our closest relatives are saying, but also helps us to understand the evolution of our own speech. As part of a larger study considering these questions, I am examining the vocalisations of a captive group of Guinea baboons at the Primate Cognition and Behavior Platform in Rousset. The main goal of this aspect of the project is to produce a large-scale database in order to 1) characterise the vocal repertoire of this baboon species, 2) determine the acoustic features of the vocalisations, and 3) test the descriptive adequacy of existing categories. Determining the precise repertoire of baboon vocalisations will allow us to specify the 'acoustic space' that the vocal tract of baboons can produce and how this compares to human vowel production. Taking into consideration the social context in which these vocalisations are produced and how specific situations alter vocal production, we aim to determine whether the baboons are capable of producing these calls voluntarily.

Sonifying handwriting movements for the diagnosis and the rehabilitation of movement disorders.


Jérémy DANNA
16h Salle de conférences B011, bât. B 5 avenue Pasteur, Aix-en-Provence (Labex BLRI)
Except for the slight scratching of the pen, handwriting is a silent activity. Transforming it into an audible one might sound curious. However, because audition is particularly well suited to perceiving fine temporal and dynamic differences, using sounds to gain information about handwriting movements seems judicious. We use sonification, which consists of adding synthetic sounds to silent movements in order to support information-processing activities. The idea is to associate a melodious, flowing sound with fluent handwriting, and a dissonant, squeaking sound with jerky handwriting. By sonifying the relevant variables of handwriting in dysgraphic children or in Parkinsonian patients, it could be possible to detect their handwriting troubles 'by ear' alone.

My talk will be organized in two parts. First, I will present an experiment showing that adding relevant auditory information is sufficient for discriminating the handwriting of dysgraphic children from the skilled handwriting of proficient children 'by ear' alone. I will also present an experiment in progress in which real-time auditory feedback is supplied to help dysgraphic children improve their handwriting movements. Second, I will present the BLRI project, which uses computerized analysis and sonification of handwriting movements for the early diagnosis of Parkinson's disease.
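A minimal sketch of the kind of mapping such a system needs: a movement-dysfluency measure computed from pen coordinates, which could then drive a sound parameter. Normalized jerk is a standard smoothness measure, but the trajectories, sampling rate and "dissonance" framing here are invented for illustration (a real system would stream this value to a synthesizer):

```python
import numpy as np

def normalized_jerk(x, y, dt):
    """Dimensionless jerk-based dysfluency score for a 2-D pen trajectory."""
    vx, vy = np.gradient(x, dt), np.gradient(y, dt)
    ax, ay = np.gradient(vx, dt), np.gradient(vy, dt)
    jx, jy = np.gradient(ax, dt), np.gradient(ay, dt)
    duration = dt * (len(x) - 1)
    path = np.sum(np.hypot(np.diff(x), np.diff(y)))
    return np.sqrt(0.5 * np.sum(jx**2 + jy**2) * dt * duration**5 / path**2)

dt = 0.005                                    # 200 Hz tablet sampling (assumed)
t = np.arange(0, 1, dt)
smooth = np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)               # fluent loop
jerky = smooth[0] + 0.05 * np.sin(2 * np.pi * 25 * t), smooth[1]    # tremor added

d_smooth = normalized_jerk(*smooth, dt)       # low score -> "melodious" sound
d_jerky = normalized_jerk(*jerky, dt)         # high score -> "dissonant" sound
```

The jerky trace scores far higher than the smooth one, which is exactly the contrast the sonification makes audible.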

Speech perception across the adult lifespan with clinically normal hearing.


(MRC Institute of Hearing Research, Nottingham, UK)
16h Salle des Voûtes Fédération de Recherche 3 C (Comportement, Cerveau, Cognition) 3 place Victor Hugo
Subjective reports suggest that older listeners experience increased listening difficulties in noisy environments, and experimental investigations seem to confirm this age-dependent deficit. However, older persons are generally unaware of their peripheral hearing status (i.e., the presence of a hearing loss) and most published studies used lax audiometric inclusion criteria. Hence, lower speech intelligibility could, at least partially, be explained by a reduction in audibility with age. Also, most aging studies limited their age comparison to groups of "young" (e.g. ≤ 30 years) and "older" listeners (e.g. ≥ 60 years), making it impossible to pinpoint the onset of the putative age effect. This talk will present two cross-sectional investigations of central age effects on speech perception, using participants with clinically normal hearing. Performance on supra-threshold temporal-processing and a battery of cognitive tasks (including tests of processing speed, working memory and attention) was assessed, and compared with speech identification in quiet and in different (steady and fluctuating) background noises. To determine when during adulthood a decline with age in these abilities first becomes apparent, participants were sampled continuously from the entire adult age range (18-91 years). Despite a large individual variability, the results show an age-dependent decline in speech identification, especially above 70 years. Sensitivity to temporal information and cognitive performance deteriorated as early as middle age, and both correlated with speech-in-noise perception.

In conclusion, even when peripheral hearing sensitivity is clinically normal, the identification of speech in noise declines with age, and this deficit co-occurs with changes in retro-cochlear auditory processing and cognitive function.

Decomposition makes things worse: A discrimination learning approach to the time course of understanding compounds in reading


Harald Baayen
(Eberhard Karls University, Tübingen, Allemagne)
Salle des Voûtes Fédération de Recherche 3 C (Comportement, Cerveau, Cognition) 3 place Victor Hugo, Marseille (Labex BLRI)
The current literature on morphological processing is dominated by the view that reading a complex word is a two-stage process, with an early blind morphemic decomposition process followed by a late process of semantic recombination (Taft, 2004; Rastle and Davis, 2008a). Various behavioral and magneto- and electroencephalography studies suggest semantic recombination takes place approximately 300-500 ms post onset of the visual stimulus (Lavric et al., 2007). However, eye-tracking studies show that both simple and complex words are read at a rate of 4 to 5 words/second (Rayner, 1998). We report an eye-tracking experiment tracing the reading of English compounds in simple sentences. For about 33% of the trials, a single fixation sufficed for understanding the meaning of the compound. For such trials, the meaning of the compound was available already some 140 ms after the eye first landed on the modifier. All first fixations also revealed an effect of the semantic relatedness of the modifier and head constituents, gauged with a latent semantic analysis (LSA) similarity measure. These results indicate a much earlier involvement of semantics than predicted by the first-form-then-meaning scenario. Second and subsequent fixation durations revealed that at later processing stages very different semantic processes were involved, gauged by modifier-compound and head-compound LSA similarity measures. Computational modeling of the first fixation with naive discriminative learning (Baayen et al., 2011) indicated that the early (and only the early) semantic effect arises from the sensitivity of the model's connection weights to the collocational co-occurrence statistics of the orthographic and semantic information carried by word trigrams. We interpret the LSA effects arising at later fixations as reflecting semantic processes seeking to resolve the uncertainty about the targeted meaning, an uncertainty that arises as an unintended and time-costly side effect of later fixations causing the head's meaning to be co-activated along with the compound's meaning. Instead of viewing blind morphological decomposition as the gateway through which meaning is reached, we think that when the meaning of the head becomes available, owing to the (non-morphological) nature of visual information uptake when the initial landing position of the eye is non-optimal, understanding comes at greater cognitive cost: decomposition makes things worse. We speculate that the late semantic effects in the electrophysiological literature, especially those around the N400 time window, reflect late semantic clean-up operations.
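Naive discriminative learning rests on error-driven (Rescorla-Wagner) updates linking form cues directly to meanings. The toy sketch below uses invented words, letter bigrams as cues (the actual model uses letter unigrams/bigrams over much larger lexica), and two semantic outcomes; it shows how shared sublexical cues come to activate the right meanings without any decomposition step:

```python
import numpy as np

def cues(word):
    """Letter bigrams with word-boundary markers, the form cues of the model."""
    w = "#" + word + "#"
    return [w[i:i + 2] for i in range(len(w) - 1)]

# Toy learning events: (word, set of semantic outcomes)
events = [("handbag", {"HAND", "BAG"}), ("hand", {"HAND"}), ("bag", {"BAG"})]
outcomes = ["HAND", "BAG"]
cue_set = sorted({c for w, _ in events for c in cues(w)})
idx = {c: i for i, c in enumerate(cue_set)}
W = np.zeros((len(cue_set), len(outcomes)))   # cue-to-outcome association weights

rng = np.random.default_rng(3)
for _ in range(5000):                          # error-driven (delta-rule) learning
    word, meanings = events[rng.integers(len(events))]
    present = [idx[c] for c in cues(word)]
    act = W[present].sum(axis=0)               # summed activation per outcome
    target = np.array([float(o in meanings) for o in outcomes])
    W[present] += 0.01 * (target - act)        # Rescorla-Wagner update

def activate(word):
    return dict(zip(outcomes, W[[idx[c] for c in cues(word)]].sum(axis=0)))
```

After training, the compound's cues activate both constituent meanings while the simple words activate only their own, illustrating how discriminative weights absorb co-occurrence statistics of form and meaning.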

Simplicity and Expressivity Compete in Cultural Evolution: Linguistic Structure is the Result.


(University of Edinburgh, UK)
11h Salle des Voûtes Fédération de Recherche 3 C (Comportement, Cerveau, Cognition) 3 place Victor Hugo
Language, like other human behaviours, exhibits striking systematic structure. For example, two central design features of human language are the way in which sentences are composed of recombinable words, and the way in which those words in turn are created out of combinations of reusable sounds. These properties make language unique among communication systems and enable us to convey an open-ended array of messages.

Recently, researchers have turned to cultural evolution as a possible mechanism to explain systematic structure such as this in language. In this talk, I will briefly present a series of experiments and a computational model that demonstrate why this is a promising avenue for research. Using diffusion chain methods in the laboratory, we can observe how behaviour evolves as it is transmitted through repeated cycles of learning and production (a process known as "iterated learning"). Across a wide range of experimental contexts, we observe an apparent universal: behaviour transmitted by iterated learning becomes increasingly compressible. When combined with a pressure to also be expressive, this may be sufficient to deliver up the structural design features of language.

Although this work is focussed on human language as a test case, the conclusions are quite general. Cultural transmission by iterated learning is an adaptive process that delivers systematic structure for free.
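The compressibility result can be illustrated with a toy iterated-learning chain. Every detail below (the meaning space, signal alphabet, bottleneck size, and generalisation rule) is an illustrative assumption, not the actual experimental design; the point is only that transmission through a learning bottleneck makes the language more compressible:

```python
import random
import zlib

random.seed(0)

# Meanings are feature pairs: four shapes crossed with two colours.
MEANINGS = [(shape, color) for shape in "ABCD" for color in "xy"]

def random_language():
    """Generation 0: an arbitrary (holistic) signal for each meaning."""
    return {m: "".join(random.choice("pqrst") for _ in range(4))
            for m in MEANINGS}

def learn(language, bottleneck=5):
    """A learner observes only `bottleneck` meaning-signal pairs and must
    generalise to the rest by reusing the signal of an observed meaning
    that shares a feature -- a crude stand-in for inductive learning."""
    observed = dict(random.sample(sorted(language.items()), bottleneck))
    learned = {}
    for m in MEANINGS:
        if m in observed:
            learned[m] = observed[m]
        else:
            similar = [s for o, s in observed.items()
                       if o[0] == m[0] or o[1] == m[1]]
            learned[m] = similar[0] if similar else random.choice(
                list(observed.values()))
    return learned

def compressed_size(language):
    """Compressibility proxy: zlib-compressed length of all signals."""
    return len(zlib.compress("".join(language[m] for m in MEANINGS).encode()))

lang = random_language()
first = compressed_size(lang)
for _ in range(20):          # iterated learning: transmit down a chain
    lang = learn(lang)
last = compressed_size(lang)
print(first, last)
```

After repeated transmission the language reuses a handful of signals, so its compressed size shrinks relative to the holistic starting language.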

Quantitative models of early language acquisition


Emmanuel Dupoux
11h Salle des Voûtes Fédération de Recherche 3 C (Comportement, Cerveau, Cognition) 3 place Victor Hugo, Marseille (Labex BLRI)
The past 40 years of psycholinguistic research have shown that infants learn their first language at an impressive speed. During the first year of life, even before they start to talk, infants converge on the basic building blocks of the phonological structure of their language. Yet the mechanisms they use to achieve this early phonological acquisition are still not well known. We show that a modeling approach based on machine learning algorithms and speech technology, applied to large speech databases, can help shed light on the early pattern of development. First, we argue that because of acoustic variability, phonemes cannot be acquired directly from the acoustic signal; only highly context- and talker-dependent phones or phone fragments can be extracted in a bottom-up way. Second, words cannot be acquired directly from the acoustic signal either, but a small number of protowords or sentence fragments can be extracted on the basis of repetition frequency. Third, these two kinds of protolinguistic units can interact with one another in order to converge on more abstract units. The proposal is therefore that the different levels of the phonological system are acquired in parallel, through successively more precise approximations. This accounts for the largely overlapping development of lexical and phonological knowledge during the first year of life.
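The repetition-frequency idea behind protoword extraction can be sketched as counting recurring substrings in an unsegmented stream. This is a toy stand-in (the stream, thresholds, and the use of text instead of acoustic representations are all illustrative assumptions):

```python
from collections import Counter

def protoword_candidates(stream, min_len=3, max_len=5, min_count=3):
    """Extract frequently repeated substrings from an unsegmented
    symbol stream -- a toy stand-in for bottom-up protoword discovery
    by repetition frequency."""
    counts = Counter()
    for n in range(min_len, max_len + 1):
        for i in range(len(stream) - n + 1):
            counts[stream[i:i + n]] += 1
    return [s for s, c in counts.most_common() if c >= min_count]

# Toy "speech" stream with the recurring chunk "baby" (invented example).
stream = "gobabydathebabynowbabyok"
print(protoword_candidates(stream))
```

The recurring chunk surfaces as a candidate without any prior segmentation, mirroring how protowords could be isolated before phonemes or words are known.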

Mapping language functions with cortical electrical stimulation


Jean-François Demonet
16h Salle des Voûtes Fédération de Recherche 3 C (Comportement, Cerveau, Cognition) 3 place Victor Hugo, Marseille (Labex BLRI)

Sound change and its relationship to variation in production and categorization in perception.


Jonathan Harrington
(Institute of Phonetics and Speech Processing, Ludwig-Maximilians University of Munich, Germany)
10h Salle de conférences B011, bât. B 5 avenue Pasteur, Aix-en-Provence
In some models (Lindblom et al., 1995; Bybee, 2002), sound change is associated with the type of synchronic reduction that occurs in prosodically weak and semantically predictable contexts. In other models (Ohala, 1993), sound change can be brought about through listeners' misperception of coarticulation in speech production. The talk will draw upon both models in order to explore whether coarticulatory misperception is more likely in prosodically weak contexts. In order to do so, the magnitude of trans-consonantal vowel coarticulation was investigated in /pV1pV2l/ non-words with the pitch accent falling either on the first or second syllable and in which V1 = /ʊ, ʏ/ and V2 = /e, o/. The analysis of these words produced by 20 L1-German speakers showed that prosodic weakening caused vowel undershoot in /ʊ/ but had little effect on V2-on-V1 coarticulation. In a perception experiment, a V1 = /ʊ-ʏ/ continuum was synthesised and the same speakers made forced-choice judgements to the same non-words with the prosody manipulated such that stress was perceived on V1 or on V2. Listeners compensated for V2-on-V1 coarticulation; however, the magnitude of compensation was smaller in the prosodically weak than in the strong context. The general conclusion is that segmental context influences both the dynamics of speech production and perceptual categorization, but not always in the same way: it is this divergence between the two which may be especially likely in prosodically weak contexts and which may, in turn, facilitate sound change.

References
Bybee, J. (2002). Word frequency and context of use in the lexical diffusion of phonetically conditioned sound change. Language Variation and Change, 14, 261–290.
Lindblom, B., Guion, S., Hura, S., Moon, S. J., & Willerman, R. (1995). Is sound change adaptive? Rivista di Linguistica, 7, 5–36.
Ohala, J. J. (1993). Sound change as nature's speech perception experiment. Speech Communication, 13, 155–161.

The communicative basis of word order


16h Salle des Voûtes Fédération de Recherche 3 C (Comportement, Cerveau, Cognition) 3 place Victor Hugo, Marseille (Labex BLRI)
Some recent evidence suggests that subject-object-verb (SOV) may be the default word order for human language. For example, SOV is the preferred word order in a task where participants gesture event meanings (Goldin-Meadow et al. 2008). Critically, SOV gesture production occurs not only for speakers of SOV languages, but also for speakers of SVO languages, such as English, Chinese, Spanish (Goldin-Meadow et al. 2008) and Italian (Langus & Nespor, 2010). The gesture-production task therefore plausibly reflects default word order independent of native language. However, this leaves open the question of why there are so many SVO languages (41.2% of languages; Dryer, 2005). We propose that the high percentage of SVO languages cross-linguistically is due to communication pressures over a noisy channel (Jelinek, 1975; Brill & Moore, 2000; Levy et al. 2009). In particular, we propose that people understand that the subject will tend to be produced before the object (a near universal cross-linguistically; Greenberg, 1963). Given this bias, people will produce SOV word order – the word order that Goldin-Meadow et al. show is the default – when there are cues in the input that tell the comprehender who the subject and the object are. But when the roles of the event participants are not disambiguated by the verb, then the noisy channel model predicts either (i) a shift to the SVO word order, in order to minimize the confusion between SOV and OSV, which are minimally different; or (ii) the invention of case marking, which can also disambiguate the roles of the event participants. We test the predictions of this hypothesis and provide support for it using gesture experiments in English, Japanese and Korean. We also provide evidence for the noisy channel model in language understanding in English.
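The noisy-channel argument for SVO can be sketched with a toy noise model in which one adjacent pair of constituents may be transposed in transmission (an illustrative assumption, not the actual model of the cited papers): SOV is a single such error away from role-reversed OSV, whereas SVO is not a single error away from OVS.

```python
def adjacent_swaps(order):
    """All orders reachable from `order` by one adjacent transposition,
    a toy model of noise between speaker and comprehender."""
    out = set()
    for i in range(len(order) - 1):
        lst = list(order)
        lst[i], lst[i + 1] = lst[i + 1], lst[i]
        out.add("".join(lst))
    return out

def confusable_with_role_reversal(order):
    """Can one noise step turn this order into the same order with
    subject and object exchanged (flipping who did what to whom)?"""
    reversed_roles = order.replace("S", "#").replace("O", "S").replace("#", "O")
    return reversed_roles in adjacent_swaps(order)

for order in ["SOV", "SVO", "VSO", "OSV", "OVS", "VOS"]:
    print(order, confusable_with_role_reversal(order))
```

Under this sketch, when the verb does not disambiguate the event roles, shifting to a verb-medial order removes the one-error confusion between SOV and OSV, in line with the prediction described above.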



Salle des Voûtes Fédération de Recherche 3 C (Comportement, Cerveau, Cognition) 3 place Victor Hugo, Marseille (Labex BLRI) 
What cognitive and neural mechanisms do we use to understand language? Since Broca's and Wernicke's seminal discoveries in the 19th century, a broad array of brain regions has been implicated in linguistic processing, spanning the frontal, temporal and parietal lobes, both hemispheres, and subcortical and cerebellar structures. However, characterizing the precise contribution of these different structures to linguistic processing has proven challenging. In this talk I will argue that high-level linguistic processing - including understanding individual word meanings and combining them into more complex structures/meanings - is accomplished by the joint engagement of two functionally and computationally distinct brain systems. The first comprises the classic "language regions" on the lateral surfaces of the left frontal and temporal lobes, which appear to be functionally specialized for linguistic processing (e.g., Fedorenko et al., 2011; Monti et al., 2009, 2012). The second is the fronto-parietal "multiple demand" network, a set of regions that are engaged across a wide range of cognitive demands (e.g., Duncan, 2001, 2010). Most past neuroimaging work on language processing has not explicitly distinguished between these two systems, especially in the frontal lobes, where subsets of each system reside side by side within the region referred to as "Broca's area" (Fedorenko et al., in press). Using methods that surpass traditional neuroimaging methods in sensitivity and functional resolution (Fedorenko et al., 2010; Nieto-Castañon & Fedorenko, in press; Saxe et al., 2006), we are beginning to characterize the important roles played by both domain-specific and domain-general brain regions in linguistic processing.

Rudiments of language in non-human primates?


(Université de Rennes 1, Institut universitaire de France)
11h Amphi Fabry Bât 5 3 place Victor Hugo, Marseille (Labex BLRI) 
The vocal communication of non-human primates has long been considered to be determined solely by genetics and emotion, encouraging theorists of the origins of human language to look for its precursors elsewhere, notably in the gestures of great apes. Yet studies conducted over the past ten years, particularly on the calls of forest guenons, demonstrate parallels with several fundamental features of language (e.g. semantics, affixation, syntax, prosody, conversation, vocal accommodation and convergence). The differences between human language and monkey vocal communication, which are comparable social acts, would therefore be more quantitative than qualitative in nature.

Not all skilled readers have cracked the code: The role of lexical expertise in skilled reading


Sally Andrews
(University of Sydney)
16h Salle des Voûtes Fédération de Recherche 3 C (Comportement, Cerveau, Cognition) 3 place Victor Hugo, Marseille (Labex BLRI)
Most theories and computational models of skilled reading have been built upon average data for unselected samples of university students, reflecting an implicit assumption that all skilled readers read in the same way. I will review evidence that challenges this assumption by demonstrating that individual differences in measures of written language proficiency predict systematic variability in both the early stages of lexical retrieval indexed by masked priming, and in tasks assessing the contribution of lexical retrieval to sentence processing. These data highlight the critical role played by precise lexical representations in supporting optimally efficient reading.

Entropy Reduction and Asian Languages


John Hale
(Cornell University, NY, USA)
10h Salle de conférences B011, bât. B 5 avenue Pasteur, Aix-en-Provence (Labex BLRI)
This talk presents a particular conceptualization of human language understanding as information processing. From this viewpoint, understanding a sentence word-by-word is a kind of incomplete perception problem in which comprehenders become more certain over time about the linguistic structure of the utterance they are trying to understand. The Entropy Reduction hypothesis holds that the scale of these certainty increases reflects psychological effort. This claim revives the application of information theory to psycholinguistics, which had languished since the 1950s. In contrast to that earlier work, however, modern applications of information theory to language understanding use generative grammars to specify the relevant structures and their probabilities. This representation makes it possible to apply standard techniques from computational linguistics to work out weighted "expectations" about as-yet-unheard words. The talk exemplifies the general theory using examples from Chinese, Japanese and Korean. The prenominal character of relative clauses in these languages is an important test case for any general cognitive theory of sentence processing.
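A minimal sketch of the Entropy Reduction idea, assuming a toy "grammar" that is just a weighted list of complete sentences (the sentences and weights are invented; the actual framework uses probabilistic generative grammars rather than a finite sentence list):

```python
import math

def entropy(probs):
    """Shannon entropy (in bits) of a distribution over analyses."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def renormalize(weights):
    total = sum(weights.values())
    return {k: v / total for k, v in weights.items()}

# Toy grammar: weighted set of complete sentences (invented weights).
sentences = renormalize({
    "the horse raced past": 4.0,
    "the horse raced quickly": 2.0,
    "the horse fell down": 1.0,
    "the dog fell down": 1.0,
})

def entropy_reduction(sentences, prefix_old, prefix_new):
    """Entropy over sentences consistent with the old prefix, minus the
    entropy once the next word is heard, floored at zero -- the quantity
    the Entropy Reduction hypothesis links to processing effort."""
    def cond(prefix):
        consistent = {s: p for s, p in sentences.items()
                      if s.startswith(prefix)}
        return entropy(renormalize(consistent).values())
    return max(0.0, cond(prefix_old) - cond(prefix_new))

# Predicted effort for hearing "horse", then "raced":
print(entropy_reduction(sentences, "the", "the horse"))
print(entropy_reduction(sentences, "the horse", "the horse raced"))
```

Each incoming word narrows the set of consistent analyses; the size of the resulting drop in uncertainty is the predicted processing cost at that word.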

All roads may lead to Rome, but they are not all equal: the problem of lexical access in production


Michael ZOCK
(Laboratoire d'Informatique Fondamentale, CNRS & Aix-Marseille Université)
16h Salle des Voûtes Fédération de Recherche 3 C (Comportement, Cerveau, Cognition) 3 place Victor Hugo, Marseille (Labex BLRI)
Everyone has encountered the following problem: you are looking for a word (or a person's name) that you know, without being able to access it in time. Work by psychologists has shown that people in this state know a great deal about the word they are seeking (meaning, number of syllables, origin, etc.), and that the words they confuse it with resemble it strikingly (initial letter or sound, syntactic category, semantic field, etc.). My long-term goal is to build a program that exploits this state of affairs to help a speaker or writer (re)find the word on the tip of their tongue. To this end, I plan to add to an existing electronic dictionary an association index (collocations encountered in a large corpus). In other words, I propose to build a dictionary analogous to the human one, which, in addition to conventional information (definition, written form, grammatical information), would contain links (associations) allowing navigation between ideas (concepts) and their expressions (words). Such a dictionary would thus allow access to the sought information either by form (lexical: analysis), by meaning (concepts: production), or by both. My approach rests on several hypotheses.

(1) Search strategies in our mental dictionary depend, of course, on how words are represented in our brain. Unfortunately, we still lack a precise map of this organization. As for search, it operates essentially along two axes. The first describes the passage from ideas to their expression (ideas, form, sounds). This is the natural order of things: starting from meaning, we move toward the expression (the word's spoken or written form) via lexical concepts (lemmas in Levelt's theory). The second axis is closer to what may be regarded as a form of word organization: it represents their frequent or typical usage in discourse, a graph of co-occurrences or associations. There are thus two complementary ideas: (a) the expression of ideas in the narrow sense (the passage from concepts to words), and (b) the roles these ideas (concepts/words) can play within a sentence (discourse, the possible contexts of words); this context, moreover, often specifies the meaning of words. If the first axis represents the natural route in production, taken in practically all circumstances (plan A), the second axis (the associative route) is the fallback (plan B), used when plan A fails. The first process is automatic (fast and unconscious), while the second is controlled, hence slow and accessible to consciousness. The latter is the one that interests me, since it reflects the situation an author is in when turning to a dictionary or thesaurus.

(2) The mental dictionary is a vast network whose nodes are concepts or words (lemmas or expressions) and whose links are essentially associations. Since everything is connected, everything can be found, at least in principle: one need only follow the right links. Searching for a word would therefore consist of entering this network and following links until the elusive term is (re)found.

(3) The mental dictionary is both a dictionary and an encyclopedia. Since words are used to encode knowledge of the world, that knowledge can be called upon to help us find the word we are looking for (thus the word 'chopsticks' could be reached from 'Chinese restaurant' or from 'type of cutlery'). Everything makes us think of something; everything is associated with something. Anything can therefore be evoked by a related term, even indirectly (associative chains; multi-step search).

(4) The information enabling this kind of navigation (a semantic atlas) is found not only in our brain but also in our productions (linguistic manifestations: sentences, texts). Since these traces are a form of externalization of how ideas (concepts/words) are organized in our brain, they can be used to build an analogous model. The result would be a semantic atlas or map allowing authors to orient themselves and find the word that momentarily escapes them. Such is my ambition. The aim of this talk is to show how such a resource could be built and how it could be used.
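The multi-step associative lookup described in the hypotheses above can be sketched as bounded search over a small association graph, using the abstract's example of reaching the word for chopsticks ('baguettes') from 'Chinese restaurant' or 'cutlery'. The network below is invented for illustration; a real associative index would be built from corpus collocations:

```python
from collections import deque

# Toy association network (links invented for illustration), in the
# spirit of the associative index proposed in the abstract.
associations = {
    "chinese restaurant": {"chopsticks", "rice", "spring rolls"},
    "cutlery": {"fork", "knife", "spoon", "chopsticks"},
    "rice": {"cereal", "chopsticks"},
    "fork": {"cutlery"},
}

def candidates_from_cues(network, cues, max_steps=2):
    """Words reachable from EVERY cue within `max_steps` association
    links -- the plan-B lookup used when direct (plan-A) access fails."""
    def reachable(start):
        seen, frontier = {start}, deque([(start, 0)])
        while frontier:
            node, depth = frontier.popleft()
            if depth == max_steps:
                continue
            for nxt in network.get(node, ()):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, depth + 1))
        return seen - {start}
    sets = [reachable(c) for c in cues]
    result = sets[0]
    for s in sets[1:]:
        result &= s
    return result

print(candidates_from_cues(associations, ["chinese restaurant", "cutlery"]))
```

Intersecting the neighbourhoods of several cues narrows the candidates sharply, which is why a tip-of-the-tongue search benefits from combining whatever partial knowledge the speaker still has.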