DIVERSITY IN LANGUAGE, CULTURE & COGNITION

This series, co-organized by Laura Speed and Gunter Senft, takes place weekly on Thursdays. The colloquium series brings together speakers from within and outside of the university invited by members of the Meaning, Culture and Cognition group, the Languages in Contact group and the Language and Cognition group. For further information, please contact Laura Speed. All meetings begin at 3.45pm at the MPI.

PREVIOUS SPEAKERS

08 JUNE 2017

JACK WILSON
University of Salford
15.45 – 17.00 | MPI room 163

The expression of direction and orientation in two Modern South Arabian languages

Until fairly recently, most linguistic fieldwork relied on written records of spoken data or audio-only recordings. The recent increase in research focusing on audio-visual data, with emphasis on the co-expressiveness of speech and gesture, has led to a greater understanding of the relationship between language, gesture and thought. In this paper, we discuss gesture and what it illuminates linguistically in two Modern South Arabian languages: Mehri and Śḥerɛ̄t.

Gesture researchers have highlighted the close relationship between linguistic structure and gesture segmentation. For example, in English motion descriptions, the manner of movement can be realised as part of the meaning of the verb (e.g., She rolled down the hill) whereas in Turkish it is realised as a separate linguistic unit: yuvarlan-arak cadde-den iniyor (lit. ‘(s/he) descends on the street, as (s/he) rolls.’). English speakers are more likely to produce gestures that depict conflated manner and path, whereas Turkish speakers are more likely to produce separate gestures. Kita and Özyürek (2003) have used these findings to suggest that speech and gesture form an incredibly tight bond in the process of packaging information for speaking.

In this paper, we argue that during descriptions relating to movement through space, Mehri and Śḥerɛ̄t speakers seem to separate orientation and direction both in speech and gesture, for example: ḳəfēdī məns əl-ḥaydiš ḥaymal (lit. ‘go down from it [direction], on your right hand [orientation]’). In producing this utterance, the Mehri speaker produced two gestures: the first relating to the direction (‘down’), and the second to the orientation (‘on your right hand’). In a cognate expression in English, direction and orientation are expressed linguistically within the verb phrase (go down the right side of it), with accompanying gestures conflating direction and orientation.

01 JUNE 2017

ASHLEY MICKLOS
Max Planck Institute for Psycholinguistics
15.45 – 17.00 | MPI room 163

Doing repair in silent gesture: Indicating trouble, achieving alignment, and what it gets you in language evolution

As problems of understanding and alignment arise in conversational interaction, we must find a means to indicate to our interlocutor that we have misunderstood, and potentially the reason for our misunderstanding. Thus, we engage in repair sequences that are aimed at boosting our communicative success and interactive alignment. A similar, and perhaps exaggerated, problem occurs when building a communication system, as we face the challenge of aligning novel forms to meanings. Novel communication tasks positioned in the field of language evolution have shown (counterintuitively) that repair in these contexts facilitates communicative efficiency but not communicative success. In this vein, I will present the qualitative and quantitative findings of an iterated referential communication game in which participants could only use silent gesture to signal easily confusable meanings. Participants played in one of two conditions: one restricted repair opportunities while the other promoted them. I will detail how participants initiated and engaged in repair sequences and how these strategies relied on facial gestures. I will also show how repair affected communicative success and communicative efficiency, pointing to the benefits of repair in negotiating new form-meaning mappings. I will conclude with a call to consider the natural ecology of language use in studies of language evolution, focusing on features of face-to-face interaction which drive negotiation and alignment.

18 MAY 2017

ANDREA MARTIN
Max Planck Institute for Psycholinguistics
15.45 – 17.00 | MPI room 163

Linking linguistic and cortical computation via hierarchy and time

Human language is a fundamental biological signal with computational properties that differ from other perception-action systems: hierarchical relationships between words, phrases, and sentences, and the unbounded ability to combine smaller units into larger ones, resulting in a “discrete infinity” of expressions. These properties have long made language hard to account for from a biological systems perspective and within models of cognition. One way to begin to reconcile the language faculty with both these domains is to hypothesize that, when hierarchical linguistic representation became an efficient solution to a computational problem posed to the organism, the brain repurposed an available neurobiological subroutine. Under such an account, a single mechanism must have the capacity to perform multiple, functionally-related computations, e.g., detect and represent the linguistic signal, and perform other cognitive functions, while, ideally, oscillating like the human brain. We show that a well-supported symbolic-connectionist model of analogy (Discovery Of Relations by Analogy; Doumas, Hummel, & Sandhofer, 2008) oscillates while processing sentences – despite being built for an entirely different purpose (learning relational concepts and performing analogical reasoning). The model processes hierarchical representations of sentences, and while doing so, it exhibits oscillatory patterns of activation that closely resemble the human cortical response to the same stimuli (cf. Ding, Melloni, Zhang, Tian, & Poeppel, 2016). From the model, we derive an explicit computational mechanism for how the brain could convert perceptual features into hierarchical representations across multiple timescales, providing a linking hypothesis between linguistic and cortical computation. 
We argue that this computational mechanism – using time to encode hierarchy across a layered network, while preserving (de)compositionality – can satisfy the computational requirements of language, in addition to performing other cognitive functions. Our results suggest a formal and mechanistic alignment between representational structure building and cortical oscillations that has broad implications for discovering the first principles of linguistic computation in the human brain.

11 MAY 2017

FRANCESCA STRIK LIEVERS
University of Milano-Bicocca
15.45 – 17.00 | MPI room 163

Linguistic synaesthesia

Common expressions such as ‘sweet music’, and more complex and creative ones such as ‘Blows with a perfume of songs’ (Swinburne) are examples of linguistic synaesthesia. Several studies have shown that cross-modal associations in linguistic synaesthesia are not random. In most cases it is a hearing- or sight-related element that is qualified in terms of one of the other senses, rather than the reverse (cf. ‘sweet music’ vs. ‘musical sweetness’, the former being more likely to occur and sounding more “natural” than the latter). How can this tendency be explained? Does the explanation lie in (multisensory) perception, in cognition, or in language structure?

To answer this question, at least two issues must be addressed. First, extensive data, from many languages, are needed to verify the tendency and to determine whether it can be considered universal. To this end, I introduce a semi-automatic procedure that can be used for the identification of synaesthesia. Second, a clear definition of linguistic synaesthesia has to be provided. Based on a review of alternative accounts, I argue that synaesthesia is a metaphor, and that different types of synaesthetic metaphor conform to the general tendency observed for cross-modal associations to different degrees (e.g., ‘sweet music’ does, while ‘Blows with a perfume of songs’ does not). Finally, I discuss how the tendency itself can be accounted for by a combination of different factors.

18 APR 2017

CHARLES SPENCE
University of Oxford
15.45 – 17.00 | MPI room 163

Crossmodal correspondences: Looking for links between sound symbolism & synaesthesia & their application to multisensory marketing

“Are lemons fast or slow?” “Is carbonated water round or angular?” Most people agree on their answers to these questions. These are examples of correspondences, that is, the tendency for a feature in one sensory modality, either physically present or merely imagined, to be matched (or associated) with a feature, either physically present or merely imagined, in another modality. Crossmodal correspondences appear to exist between all pairings of the senses, and have been shown to affect everything from people’s speeded responses to their performance in unspeeded psychophysical tasks. While some correspondences are culture-specific (e.g., the correspondence between angularity and bitterness), others are likely to be universal (e.g., the correspondence between auditory pitch and visual or haptic size). Intriguingly, some animals (e.g., chimpanzees), as well as young infants, appear to be sensitive to certain crossmodal correspondences. In this talk, I will discuss a number of the explanations that have been put forward to account for the existence of crossmodal correspondences. I will also examine the relationship between crossmodal correspondences and sound symbolism, and tackle the thorny question of whether crossmodal correspondences should be thought of as a kind of synaesthesia that is common to us all. Finally, I will highlight some of the latest marketing applications now emerging from basic research on crossmodal correspondences, in the design of everything from beverage labels to the music you listen to while drinking your coffee (or cognac), and the dishes that are starting to appear at modernist restaurants.

Spence, C. (2011). Crossmodal correspondences: A tutorial review. Attention, Perception, & Psychophysics, 73, 971-995.

Spence, C. (2012). Managing sensory expectations concerning products and brands: Capitalizing on the potential of sound and shape symbolism. Journal of Consumer Psychology, 22, 37-54.

Spence, C. (2012). Synaesthetic marketing: Cross sensory selling that exploits unusual neural cues is finally coming of age. The Wired World in 2013, November, 104-107.

13 APR 2017

YASAMIN MOTAMEDI
Max Planck Institute for Psycholinguistics
15.45 – 17.00 | MPI room 163

Artificial sign language learning: a method for evolutionary linguistics

Previous research in evolutionary linguistics has made wide use of artificial language learning (ALL) paradigms, where learners are taught artificial languages in laboratory experiments and are subsequently tested about the language they have learnt. The ALL framework has proved particularly useful in the study of the evolution of language, allowing the manipulation of specific linguistic phenomena that cannot be isolated for study in natural languages. Furthermore, using ALL in populations of learners, for example with iterated learning methods, has highlighted the importance of cultural evolutionary processes in the evolution of linguistic structure.

In this talk, I present a methodology for studying the evolution of language in experimental populations. In the artificial sign language learning (ASLL) methodology I demonstrate, participants learn manual signaling systems that are used to interact with other participants. The research I present aims to provide a controlled study of the evolutionary mechanisms that drive linguistic structure, whilst offering an experimental companion to some of the only available evidence of language emergence and evolution in natural languages: emerging sign languages.

I detail two experiments that investigate the role of cultural evolutionary processes in the evolution of systematic linguistic structure. In the first study, I demonstrate how the combination of interaction and transmission to new learners leads to structured and efficient systems, in comparison to either interaction or transmission alone. In the second experiment, I expand the method to investigate complex grammatical constructions, investigating the emergence of systematic spatial modulation in novel communication systems, and providing a comparison to research on the emergence of spatial reference systems in natural sign languages.

The findings from these experiments offer a more precise understanding of the roles that different cultural mechanisms play in the evolution of language, and further build a bridge between data collected from natural languages in the early stages of their evolution and the more controllable environments of experimental linguistic research.

30 MAR 2017

JUSTIN SULIK
Max Planck Institute for Psycholinguistics
15.45 – 17.00 | MPI room 163

Perspective taking in context

When two people don’t share a language, they are often able to communicate successfully by creating signals such as spontaneous gestures. Even when they share a language but lack a conventional way to refer to a particular something, they can use old words in new ways to get their point across.

In all such cases, however, a signaler typically has several ways they could convey their meaning: they could signal ‘snake’ by imitating its hissing or by gesturing its winding motion. They could describe a bank as ‘where you go to deposit money’ or as ‘what Santander is’. Since some of these options might be more informative than others from the recipient’s perspective, the question is whether or not people are able to take their interlocutors’ perspective and generate an informative signal.

We present the results of a novel signaling task that focuses on the contribution of salience, shared world knowledge and contextual constraint, and conclude that (1) in general, people signal based on what is salient from their own perspective, not their interlocutors’ point of view, even though they share world knowledge, (2) contextual constraint can boost perspective taking, and (3) some people are better than others at taking perspective, but different cognitive mechanisms predict success in different contexts.

23 MAR 2017

BILL THOMPSON
Vrije Universiteit Brussel / MPI
15.45 – 17.00 | MPI room 163

Culture as computation

The generative enterprise in linguistics is associated with two major propositions about language and its place in nature. The first has been to argue that language can be understood as a computational innovation. The second has been to argue that the primary explanandum in the evolution of language is the emergence of a capacity for individual human brains to represent a specific formal class of grammatical structures. I’d like to argue that the first of these propositions is broadly right, but the second is too restrictive. Humans have achieved something computationally remarkable by building languages together, but the unique computational system responsible for this achievement is not (just) Merge: it is our species-unique platform for storing and re-using computations performed by earlier individuals — cumulative cultural evolution. I’ll present some of the work I have been doing to try to move towards a computational understanding of language at this level of analysis, using probabilistic inference as a unifying model for acquisition, transmission, and social interaction.
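
The dynamic described here, probabilistic inference as a unifying model of acquisition and transmission, can be sketched as a toy iterated-learning chain. The code below is a hypothetical illustration only, not the speaker's actual model: the candidate "languages" (here, just biased-coin parameters), the prior, and all parameter values are invented. Each simulated learner infers a language from the previous learner's productions, then produces data for the next learner.

```python
import random

HYPOTHESES = [0.2, 0.5, 0.8]             # candidate "languages": P(signal = 1)
PRIOR = {0.2: 0.4, 0.5: 0.2, 0.8: 0.4}   # shared prior over languages (invented values)

def likelihood(h, data):
    """P(data | hypothesis) for a sequence of binary signals."""
    p = 1.0
    for d in data:
        p *= h if d == 1 else (1 - h)
    return p

def learn(data):
    """Acquisition as inference: pick the MAP hypothesis given data and the prior."""
    return max(HYPOTHESES, key=lambda h: PRIOR[h] * likelihood(h, data))

def produce(h, n, rng):
    """Generate n signals from the learned language."""
    return [1 if rng.random() < h else 0 for _ in range(n)]

def iterate(generations, n_data, seed=0):
    """Transmit the language down a chain of learners, recording each generation."""
    rng = random.Random(seed)
    h = rng.choice(HYPOTHESES)           # the chain starts from a random language
    history = [h]
    for _ in range(generations):
        data = produce(h, n_data, rng)   # previous generation's output...
        h = learn(data)                  # ...becomes the next learner's input
        history.append(h)
    return history

print(iterate(10, 5))
```

In models of this general kind, the distribution of languages across generations comes to reflect the learners' prior, which is one way of making precise the idea that cumulative cultural transmission, not just individual representation, shapes linguistic structure.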

16 MAR 2017

MARCUS PERLMAN
Max Planck Institute for Psycholinguistics
15.45 – 17.00 | MPI room 163

The potential for iconicity in vocalization, and the origins of language

Arguments for a gestural origin of language often assume that gestures afford vastly more potential for iconicity than vocalizations. Therefore, it is reasoned, gestures are necessary to bootstrap the formation of spoken symbols. To the contrary, in this talk, I present results from several “vocal” charades experiments that highlight the considerable potential for iconicity in vocalizations. The findings show that 1) people are able to create iconic vocalizations to represent a diverse array of meanings, and that 2) naïve listeners are able to understand these vocalizations. The findings also suggest that 3) people are able to use iconic vocalizations to ground the formation of more word-like symbols. In conclusion, I argue for the more balanced hypothesis that language originated from iconic signals in both gesture and vocalization alike.

09 MAR 2017

CHIARA BARBIERI
Max Planck Institute for the Science of Human History
15.45 – 17.00 | MPI room 163

Matches and mismatches between genetic and linguistic diversity

The coevolution of languages and genes represents the ultimate Darwinian paradigm for the reconstruction of population dynamics in time and space, and remains one of the most frequently invoked parallels between cultural and biological diversity. In recent years, scholars have focused on the congruence of linguistic and genetic histories to shed light on population origin, diversification and contact. Popular case studies include the diffusion of major language families, such as Indo-European and Austronesian, but smaller, regional cases of population contact have also been examined.

Mismatches between linguistic and genetic variation are usually disregarded as an exception to the general pattern. But how often do these mismatches actually occur? Can we estimate the incidence of language shift and reconstruct more realistic models of cultural evolution? And which circumstances drive such discontinuities in cultural transmission?

In this talk I will examine the two sides of the gene-language mismatch. The first concerns genetically diverse populations who speak languages of the same family. I will use the case study of the Quechua language family (western South America) to illustrate different patterns of genetic relatedness between speakers of the proposed subfamilies, in particular focusing on the northern and southern varieties. The genetic results support a complex interaction of demographic and cultural forces to account for the spread and diversification of Quechua.

The second side of the mismatch concerns genetically similar populations who speak unrelated languages. This phenomenon, which occurs in various regions of the world, can be directly associated with the formation and maintenance of language barriers within human groups. Such linguistic boundaries, while permeable to gene flow, are sustained by socio-cultural forces, for example multilingualism and group identity marking. I propose to address these interaction dynamics with large-scale datasets which take into account the environmental, historical and social contexts.

The final aim of these lines of research is to develop a more realistic understanding of the complex mechanisms behind cultural transmission and cultural change. The fluidity of cultural features through time and space not only impacts our ability to trace back human prehistory, but also influences the definition of “population” as the unit of research.

02 MAR 2017

SANDER LESTRADE
Radboud University Nijmegen
15.45 – 17.00 | MPI room 163

Modeling language evolution: Principles and parameters

Language is the product of two qualitatively very different evolutionary processes. Through biological evolution, people slowly became “language ready” as the social, physical, and cognitive prerequisites of language were established. In the subsequent process of cultural evolution, it was mostly the language itself that changed as a result of being used and learned across many generations of speakers.

A fundamental task for the study of language evolution is to determine what goes where. Given the nature of the two processes and our current knowledge about the genome and the development of the brain, it is desirable to explain language-particular phenomena as much as possible through cultural evolution, reserving biological evolution for more basic capacities only. Still, it needs to be shown how exactly language-particular features could have emerged from more general principles.

For this, I have developed an artificial intelligence computer simulation of cultural language evolution. It is assumed that speakers initially have a lexicon of referential items only. In addition, principles of change are implemented that have been established independently in the literature.

In this talk, I will first show how it is possible to model the emergence of such sophisticated grammatical phenomena as case marking, person indexing, and pronominal paradigms. Next, I will discuss some of the design choices that have to be made to do this in a valid way.

16 FEB 2017

STEPHEN LEVINSON
Max Planck Institute for Psycholinguistics
15.45 – 17.00 | MPI room 163

Spatial cognition and language evolution

In this talk I will argue that language evolution may have been closely tied to spatial cognition. We are natively poor navigators, compared to many animal species (although we make up for this with cultural prostheses, including language). This may have to do with the recruitment of the human hippocampus for things other than spatial navigation, namely memory and language. That cooption of spatial mechanisms may have left a deep mark on the conceptual structure of language, providing conceptual primitives differentially exploited in different languages – easily illustrated in the spatial domain. Reasons for that recycling of neuronal circuitry from space to language may have to do with the natural preoccupations in human communication with spatial and social concerns, both of which have a network structure coded in the hippocampus. Above all, gesture – a spatial modality ideal for indicating spatial concepts – seems to have anteceded spoken language in human communication, and may have been the Trojan horse facilitating the invasion of spatial circuitry by language. A crucial additional ingredient, explaining why other animals haven’t gone the same route, is the development of an interactional infrastructure for communication, which is exclusive to humans. A long-standing strand of linguistic thought, together with increasing evidence about the deep history of language (including its gestural origins), seems compatible with this story.

09 FEB 2017

MARTINE BRUIL
Universiteit Leiden
15.45 – 17.00 | MPI room 163

Evidentials as Sentence Types and Sentence Type Modifiers
Evidence from South American Indigenous Languages

In all languages, speakers can express how they acquired the information that they are transmitting. Some languages use lexical means and others grammatical means, that is, evidentials, to do this. There is tremendous crosslinguistic variation in the expression of evidentiality: languages express different evidential meanings, such as direct evidentiality, indirect evidentiality, inferentiality, reportativity, etc., and these meanings are expressed by different elements, such as verbal affixes, clitics, particles, etc. In my dissertation (Bruil, 2014), I argued that this variation is not just a superficial one; it is due to the fact that evidentials can function within different domains of the language, such as tense, aspect, and sentence-typing. I will focus on the latter type of evidentials in this talk.

It has been known for a long time that evidentials and sentence types interact (Aikhenvald, 2004). Sentence-typing is the grammatical marking of the function of a sentence (König & Siemund, 2007). Examples of crosslinguistically common sentence types are declarative, interrogative, and imperative. Nevertheless, it has not been discussed in the typological literature that the effects of this interaction can tell us more about the semantics of the evidential. An interesting example of a language in which evidentiality and sentence-typing interact is Ecuadorian Siona. In this language, the reportative is part of the sentence-typing system. The reportative, a verb form that speakers use when they report what someone else has said, is mutually exclusive with the assertive, interrogative, and imperative sentence types. It is semantically different from the other sentence types because speakers do not vouch for the information when they use a reportative; they just present the information without saying whether it is true or false. Because the Ecuadorian Siona reportative behaves both structurally and semantically as a sentence type, I analyze it as a fourth sentence type in the language.

There are languages in which evidentials show behavior similar to that of the Ecuadorian Siona reportative: when they are used, the function of the sentence changes. This is the case for the Cuzco Quechua reportative clitic. When this clitic is used in a declarative sentence, speakers do not vouch for the information they are presenting. However, this clitic cannot be analyzed as a sentence type, because it is not mutually exclusive with other sentence types. It can, for instance, be used in content questions, with the effect that the speaker asks the question on behalf of someone else. The function of the sentence has changed: the role of the inquirer is shifted away from the speaker to a non-speech act participant. Therefore, I analyze this type of evidential as a sentence type modifier. In this talk, I will discuss and compare the semantic features of both of these evidential types.

References

Aikhenvald, A. Y. (2004). Evidentiality. Oxford: Oxford University Press.

Bruil, M. (2014). Clause-typing and evidentiality in Ecuadorian Siona. PhD Thesis, Leiden University.

König, E., & Siemund, P. (2007). Speech Act Distinctions in Grammar. In T. Shopen, Language Typology and Syntactic Description, Second Edition (Vol. 1, pp. 276-324). Cambridge: Cambridge University Press.

02 FEB 2017

TESSA VERHOEF
University of California
15.45 – 17.00 | MPI room 163

Iconicity and systematicity in the emergence of patterns in sign language

In sign languages and gesture, systematic preferences have been found for the use of different iconic naming strategies when representing tools. In this talk, I will present experiments that were conducted to explore the influence of biases in gestural representation on the emergence of conventionalized patterns in sign languages. The first experiment maps out the initial biases people have for pairing ACTION and OBJECT concepts related to tools (e.g. ‘using a toothbrush’ and ‘a toothbrush’) with HANDLING (showing how you hold it) and INSTRUMENT (showing what it looks like) strategies in an online experiment with 720 participants. In line with earlier findings (Padden et al., 2015; Ortega & Ozyurek, 2016), we show that non-signers have a strong preference for HANDLING forms. We also find a strong preference for mapping HANDLING to ACTION and INSTRUMENT to OBJECT, demonstrating clear biases for the use of iconic strategies. The second experiment investigates the effects of these biases on the learnability of artificial languages. In addition to reflecting naturalness on an item-by-item basis, languages can also vary in systematicity across sets of items (i.e. the extent to which all ACTIONS pattern the same way, and all OBJECTS pattern the same way). As expected, we found unsystematic languages to be harder to learn than systematic ones. Surprisingly, languages that are systematic but with a mapping that violates the bias seem just as learnable as systematic languages that follow the bias. Moreover, participants seem to need only a few examples before they detect and accept the unexpected pattern. The patterns we see in natural sign languages, however, are often only partially systematic; the third experiment therefore explores the learnability and direction of change in artificial languages that merely show tendencies towards systematic patterns. Here we see a clear influence of the tension between initial preferences and systematicity.
Together, these studies help improve our understanding of the subtle interplay between learning biases and gestural preferences and how these affect the emergence of patterns in language.

26 JAN 2017

PIERA FILIPPI
Vrije Universiteit Brussel
15.45 – 17.00 | MPI room 163

The referential value of prosody: A comparative approach to the study of vocal communication

Recent studies addressing animal vocal communication have challenged the traditional view of meaning in animal communication as the context-specific denotation of a call. These studies have identified a central aspect of animal vocal communication in the ability to recognize the emotional state of signalers, or to trigger appropriate behaviors in response to vocalizations. This theoretical perspective is conceptually sound from an evolutionary point of view, as it assumes that, rather than merely referring to an object or an event, animals’ vocalizations are designed to trigger (intentionally, or not) reactions that may be adaptive for both listeners and signalers. Crucially, changes in emotional states may be reflected in prosodic modulation of the voice. Research focusing on the expression of emotional states through vocal signals suggests that prosodic correlates of emotional vocalizations are shared across mammalian vocal communication systems. In a recent empirical study, we showed that human participants are able to identify the emotional content of vocalizations across amphibia, reptilia, and mammalia. These results suggest that fundamental mechanisms of vocal emotional expression are widely shared among vocalizing vertebrates and could represent an ancient signaling system. But what is the evolutionary link between the ability to interpret emotional information in animal vocalizations and the capacity for human linguistic communication? I suggest that this link lies in the ability to modulate emotional sounds with the aim of triggering behaviors within social interactions. Hence, I will emphasize the key role of the interactional value of prosody in relation to the evolution and ontogenetic development of language. Within this framework, I will report on recent empirical data on humans, showing that the prosodic modulation of the voice is dominant over verbal content and faces in emotion communication.
This finding aligns with the hypothesis that prosody is evolutionarily older than segmental articulation, and might have paved the way for its emergence. Finally, implications for the study of the cognitive relationship between linguistic prosody and the ability for music, which has often been identified as the evolutionary precursor of language, will be discussed.

19 JAN 2017

ALAN NIELSEN
Max Planck Institute for Psycholinguistics
15.45 – 17.00 | MPI room 163

Systematicity, Iconicity, and the structure of the lexicon

In recent years, the proposal that the relationship between words and their meanings is entirely arbitrary has been heavily criticised. Recent findings have suggested that parts of the lexicon are non-arbitrary in two ways: iconicity refers to direct relationships between words and meanings, whereas systematicity refers to relationships between sets of words and sets of meanings. Both of these types of non-arbitrariness are suggested to have important implications for language learning.

In this talk I will present the results of a series of experiments investigating these claims, demonstrating:

  • That systematicity and iconicity both enhance learning in certain contexts

BUT

  • That each has inherent limitations
  • That systematicity and iconicity are often conflated in the literature
  • That broader claims about the centrality of non-arbitrariness for language learning should be tempered until we have a better understanding of the processes involved

I conclude by suggesting a number of productive avenues for future research, including the need to account for the different distribution of non-arbitrariness both within and between the world’s languages.

08 DEC 2016

HANNAH LITTLE | Max Planck Institute for Psycholinguistics

15.45 – 17.00 | MPI room 163

It’s not all in your Mind: Modality Effects on the Emergence of Combinatorial Structure

The majority of work in evolutionary linguistics has focused on the effects of cognitive biases on linguistic structure in relation to both learning and communication. In contrast, work on the evolution of speech primarily focuses on the physical features of the vocal tract. In this talk, I bridge the gap between these two bodies of work and investigate how the physical properties of a linguistic modality (speech or sign) affect the emergence of combinatorial structure in language. I will present a collection of artificial signalling experiments which use continuous audio signals. In these experiments, I manipulate signal spaces, and the mappings between signal and meaning spaces, to investigate how features of linguistic modalities might affect the emergence of combinatorial structure. The results of these experiments have implications not only for the emergence of structure in real-world languages, but also for the design of artificial signal spaces in experimental work and for the validity of generalisations from previous experimental results.

24 NOV 2016

MATTHEW BAERMAN | University of Surrey

15.45 – 17.00 | MPI room 163

The elusiveness of inflectional objects

We talk about the structures of language in terms of discrete units: phonemes, morphemes, words, and so on. Of course this is a simplifying assumption, nowhere more so than in the realm of inflectional morphology. The building blocks of word forms often follow patterns at cross-purposes to the functional categories that we otherwise believe they express, in effect describing a covert system of contrasts. Using data from two languages – Nuer (a Western Nilotic language of South Sudan) and Seri (a language isolate of Mexico) – I show some particularly striking ways in which inflectional objects elude precise classification in terms of functional categories.

17 NOV 2016

ALEX CARSTENSEN | Radboud University

15.45 – 17.00 | MPI room 163

Universals and variation in spatial language and thought

Why do languages parcel human experience into categories in the ways they do? And to what extent do these categories in language shape our view of the world? In this talk, I’ll evaluate the role of cognitive universals and linguistic relativity in spatial reasoning and explore the mechanisms underlying contributions from each. This research draws on semantic typology, nonlinguistic behavioral experiments, and simulated language evolution in the lab, finding convergent evidence in support of a set of universal features of spatial cognition that may be somewhat adjusted by language. I will argue that strong universal pressures constrain variation in spatial cognition across languages, and that the semantic structure of language in turn shapes nonlinguistic cognition, but not always in the way we might imagine.

10 NOV 2016

NEIL COHN | Tilburg University

15.45 – 17.00 | MPI room 163

Cross-Cultural Diversity and the Cognition of Visual Narrative

Just how is it that our brains understand the drawings and sequential images found in visual narratives like comics? Building on contemporary theories from the language sciences, I will present a provocative theory: that the structure and cognition of drawings and sequential images is similar to language. This talk will explore two facets of these “visual languages.” First, it will cover the basic linguistic structure of the “narrative grammar” that governs sequential images, and will describe corpus research suggesting cross-cultural variation in the visual languages used by comics of the world. Just as verbal and signed languages differ, so too do visual languages. Second, we will show that manipulation of sequential images evokes similar ERP effects as sentences (i.e., N400s, P600s, anterior negativities), and connected with corpus research, that culturally dependent patterns modulate readers’ processing of visual narratives. Altogether, this work explores emerging research from the linguistic and cognitive sciences that challenges conventional wisdom with a new paradigm of thinking about the connections between language and graphic communication.

2 NOV 2016

JEFF ZACKS | Washington University

15.45 – 17.00 | MPI room 163

Event Comprehension in Language and Perception

The experience of events unfolding over time is fundamental to perceptual experience and to discourse understanding. In this talk, I will describe a theory that relates the subjective experience of events in perception and language comprehension to computational mechanisms of prediction error monitoring and memory updating. Briefly, Event Segmentation Theory proposes that comprehenders maintain a working memory representation of the current event and use it to guide predictions about what will happen in the near future. When prediction error spikes, they update their model. Data from individual differences, neuropsychology, and neuroimaging suggest that this mechanism is functionally significant for discourse comprehension and memory, and that it can be impaired by neurological injury or disease. New results indicate that it is possible to improve the encoding of event structure and that this may improve subsequent memory. Such results have implications for storytelling across media, and for the remediation of memory disorders in conditions including healthy aging, Alzheimer’s disease, and post-traumatic stress disorder.

27 OCT 2016

DAN DEDIU | Max Planck Institute for Psycholinguistics

15.45 – 17.00 | MPI room 163

Vocal tract anatomy and language: non-linguistic factors may influence language diversity and evolution

Several strands of evidence have recently emerged suggesting that language is also shaped by non-linguistic factors. In this talk I will explore several such proposals, focusing on a particular subtype, namely biases with a biological component. I will give a glimpse of work-in-progress exploring the link between vocal tract anatomy and physiology and phonetic/phonological diversity, and discuss the relevance of such “biased cultural evolution” for understanding language evolution, language change and present-day linguistic diversity.

20 OCT 2016

SEAN ROBERTS | Max Planck Institute for Psycholinguistics

15.45 – 17.00 | MPI room 163

Language adapts to interaction

Language appears to be adapted to constraints from many domains such as production, transmission, memory, processing and acquisition. These adaptations and constraints have formed the basis for theories of language evolution, but arguably the primary ecology of language is face-to-face conversation. Taking turns at talk, repairing problems in communication and organising conversation into contingent sequences seem completely natural to us, but are in fact highly organised, tightly integrated systems which are not shared by any other species.

In this talk I discuss how one might link features of real-time interaction to different levels of language evolution: the evolution of a capacity for language; the initial emergence of linguistic systems; and the ongoing cultural evolution of languages. I argue that a full explanation of the origin and structures of languages needs to take into account the ecology in which language is used: face-to-face interactive communication. I will review some studies that try to address this, using methods such as computational models, lab experiments and corpus analyses.

27 SEPT 2016

LERA BORODITSKY | University of California San Diego

15.45 – 17.00 | MPI room 163

13 APR 2016

FERNANDO MARMOLEJO RAMOS | Stockholm University

15.45 – 17.00 | MPI room 163

The valence-space metaphor posits that emotion concepts map onto vertical space such that positive concepts are in upper locations and negative ones in lower locations. Whilst previous studies have demonstrated this pattern for positive and negative emotions (e.g. ‘joy’ and ‘sadness’), the spatial location of neutral emotions (e.g. ‘surprise’) has not been investigated, and little is known about the effect of linguistic background. In this study we first characterised the emotions joy, surprise and sadness via ratings of their concreteness, imageability, context availability and valence before examining the allocation of these emotions in vertical space. Participants from six linguistic groups completed either a rating task used to characterise the emotions or a word allocation task to implicitly assess where these emotions are positioned in vertical space. Our findings suggest that, regardless of language, gender, handedness and age, positive emotions are located in upper spatial locations and negative emotions in lower spatial locations. Additionally, we found that the neutral emotional valence of surprise is reflected in this emotion being mapped mid-way between upper and lower locations on the vertical plane. This novel finding indicates that the location of a concept on the vertical plane mimics the concept’s degree of emotional valence. Keywords: emotions; embodiment; spatial cognition; social cognition; metaphorical mapping

20 APR 2016

FRANS HINSKENS (Meertens Instituut) | STEF GRONDELAERS (CLS) | PIETER MUYSKEN (CLS)

15.45 – 17.00 | MPI room 163

Perspectives on ethnolect research

In this triple-decker sandwich talk we will present some perspectives on ethnolect research as carried out at CLS and the Meertens Institute.

In the first part, Pieter Muysken will survey the study of Dutch ethnolects in broad outline and in a historical perspective, briefly discussing a number of ethnolects that have emerged (and sometimes disappeared again) in the past. He will go on to describe the main research questions, the methodology, the data and some of the results of our Roots of Ethnolects project, which has now been carried out for ten years at the Meertens Institute and Radboud University.

In the second part, Stef Grondelaers will report on recent work on attitudes towards ethnolects. Building on speech clips from the Roots of Ethnolects-corpus, Grondelaers and his colleagues carried out a speaker evaluation experiment to investigate whether a (comparatively light) Moroccan accent of Dutch could be considered standard Dutch or not. While there appears to be increasing tolerance for regional accents in Netherlandic Standard Dutch, the acceptance of non-native, ethnic accents clearly is a different matter…

In the third part, Frans Hinskens will discuss morphosyntactic variation in ethnolects with a focus on grammatical gender, both in determiners and in adnominal inflection. Standard Dutch as well as the Nijmegen and Amsterdam varieties distinguish common and neuter gender; in our data neuter gender varies greatly. Variability in the interactions appears to be conditioned both linguistically and socially. With regard to the internal conditioning, grammatical (word class and the like) and semantic (animacy) dimensions have been studied. Apart from the speakers’ age and city of residence, the social dimensions also include background of the speaker and background of the interlocutor.

11 MAY 2016

MARIA LARSSON | Stockholm University

15.45 – 17.00 | MPI room 163

Reminders of the recent and distant past: Odor-based context dependent memory

Even though rarely thought of, all environmental spaces contain odor information. It has been proposed that the preconditions for episodic olfactory memory may not be optimal. For example, environmental olfactory information often goes unnoticed and barely evokes attention in humans, and the semantic activations that are a prerequisite for optimal episodic memory functioning are typically restricted. Still, it is highly likely that olfactory information will become part of a memory representation that is linked to a specific event. This implies that an event-congruent exposure to an odor carries the potential to trigger all, or parts of, a previous episode. Indeed, available evidence shows that odors may serve as powerful reminders of past experiences. This is demonstrated by studies exploring the nature of odor-evoked autobiographical memories and by controlled experimental paradigms in which odors have been embedded in a learning context and later reinstated at retrieval, where increased recollection of the target information is often observed. These observations converge on the notion that odor memories are retained over long periods of time.

This work was supported by a grant from The Swedish Foundation for Humanities and Social Sciences (M14-0375:1) to Maria Larsson.

25 MAY 2016

MARISA CASILLAS | Max Planck Institute for Psycholinguistics

15.45 – 17.00 | MPI room 163

Communicative development in a Mayan village

Most current developmental language theories assume that Western-style caregiver-child interaction is the basis for children’s linguistic development. In contrast, there is a great deal of ethnographic evidence showing wide variation in caregiving styles across cultures. For example, Mayan mothers traditionally aim to keep their infants calm (i.e. not highly socially stimulated; Brown, 2011, 2014; Gaskins, 2006). Nevertheless, Mayan children attain normal linguistic competence as adults. Taking these ethnographic findings as a starting point, I use modern quantitative methods from developmental psycholinguistics to ask (a) what are Mayan children’s early linguistic experiences like, and (b) how does their linguistic experience relate to their linguistic development? To answer these questions, I have collected day-long natural speech recordings and linguistic experimental data from 55 children under age five in one Mayan village. I present some initial findings from these data and discuss their relevance for theories of early language experience and linguistic development.

7 JUN 2016

FARZAD SHARIFIAN | Monash University

15.45 – 17.00 | MPI room 163

21 SEPT 2016

GARY LUPYAN | University of Wisconsin-Madison

15.45 – 17.00 | MPI room 163

Beyond the mapping metaphor: the role of words in human cognition

A common assumption in psychology and linguistics is that words map onto pre-existing meanings. I will argue that this mapping metaphor is mistaken and that words play a much more central role in creating meaning than is generally acknowledged. I will present a range of empirical evidence for the functions of language beyond communication, focusing on categorization and visual perception. On the presented view, many of the unique aspects of human cognition stem from the power of words to flexibly create categories from perceptual representations, allowing language to act as a high-level control system for the mind. This view has immediate consequences for understanding the cognitive consequences of using different languages.

6 APR 2016

GABRIELA PÉREZ BÁEZ | Smithsonian – National Museum of Natural History

15.45 – 17.00 | MPI room 163

Diidxa Za (Isthmus Zapotec, Otomanguean), like many other Mesoamerican languages, readily uses body part terms (BPTs) to refer to object parts. Several questions have been raised in the literature as to the role of metaphor in the semantic extension of BPTs in Mesoamerican languages. MacLaury 1989 presents a global mapping explanation for Ayoquesco Zapotec focusing on the role of metaphor but without explaining the varying degrees of productivity of individual body part-derived meronyms. The role of metaphor is questioned in Levinson 1994, following an analysis of Tseltal Maya meronyms. Based on extensive Diidxa Za data collected in elicitation and experimental tasks, I propose a Structure Mapping Theory (Gentner 1983, Gentner & Markman 1997, Gentner et al. 2001, inter alia) approach to explain that the different degrees of productivity of body part-derived meronyms correspond to the three types of comparisons proposed by the theory: literal similarity, analogy and abstraction. The semantic extension of less productive BPTs depends on the mapping of attributes over relations (literal similarity). BPTs of greater productivity are extended based on the mapping of relations over attributes (analogy). In the process of semantic extension of a reduced set of the 6 most productive BPTs, the base domain is an abstraction of the human body. This approach provides a refined explanation of the process of global mapping in the extension of BPTs as spatial relators in a Zapotec language. It also provides answers to some questions raised in Levinson 1994 to explain the degree of flexibility with which BPTs can be mapped onto object parts without regard to basic attributes such as the number of parts of the base domain, and cases where the geometry of the target domain has few or no discernible parts.
