
Systematic reviews 101: Internal and External Validity

Who remembers last summer when I started writing a series of posts on systematic literature reviews?

I apologise for neglecting it for so long, but here is a quick write-up on assessing the studies you are including in your review for internal and external validity, with special reference to experiments in artificial language learning and evolutionary linguistics (though this is relevant to any field which aspires to adopt the scientific method).

In the first post in the series, I outlined the differences between narrative and systematic reviews. One of the defining features of a systematic review is that it is not written with a specific hypothesis in mind. The literature search (which my next post will be about) is conducted with predefined inclusion criteria and, as a result, you will end up with a pile of studies to review regardless of their conclusions, or indeed regardless of their quality. Because there is no filter to catch bad science, we need methods for assessing the quality of a study or experiment, which is what this post is about.

(This will also help with DESIGNING a valid experiment, as well as assessing the validity of other people’s.)

What is validity?

Validity is the extent to which a conclusion is a well-founded one given the design and analysis of an experiment. It comes in two different flavours: external validity and internal validity.

External Validity

External validity is the extent to which the results of an experiment or study can be extrapolated to different situations. This is EXTREMELY important in evolutionary linguistics, because the whole point of such experiments is to extrapolate your results to a different situation (i.e. the emergence of linguistic structure in our ancestors), and we don’t have access to our ancestors to experiment on.

Here are some of the things that affect an experiment’s external validity (in linguistics/psychology):

  • Participant characteristics (age (especially important in language learning experiments), gender, etc.)
  • Sample size
  • Type of learning/training (important in artificial language learning experiments)
  • Characteristics of the input (e.g. the nature of the structure in an input language)
  • Modality of the artificial language (how similar to actual linguistic modalities?)
  • Modality of output measures (how the outcome was measured and analysed)
  • The task from which the output was produced (straightforward imitation or communication or some other task)

Internal Validity

Internal validity is the extent to which an experiment reduces its own systematic error within the circumstances in which it is performed.

Here are some of the things that affect an experiment’s internal validity:

  • Selection bias (who ends up in the experiment and who gets put in which condition)
  • Performance bias (differences between conditions other than the ones of interest, e.g. running people in condition one in the morning and condition two in the afternoon)
  • Detection bias (how the outcome measures are coded and interpreted; blinding coders to which condition a participant is in is paramount to reduce the researcher’s bias towards finding a difference between conditions. A lot of recent retractions have been down to failures to guard against detection bias.)
  • Attrition bias (ignoring drop-outs, especially if one condition is especially stressful, causing high drop-out rates and therefore bias in the participants who completed it. This probably isn’t a big problem in most evolutionary linguistics research, but may be in other psychological work.)

Different types of bias will be relevant to different fields of research and different research questions, so it may be worth devising your own validity scoring method to apply to the studies within your review (a toy sketch follows below). But remember to be explicit about what your scoring criteria are, and about the pros and cons of the studies you are writing about.
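By way of illustration, here is a minimal sketch in Python of what such an explicit scoring rubric might look like. The criteria and weights are invented for this example; a real rubric would need to be justified for your own field:

```python
# A toy validity-scoring rubric. The criteria and weights below are
# invented for illustration; a real rubric must be justified per field.

CRITERIA = {
    # internal validity
    "random_allocation":  2,  # participants randomly assigned to conditions
    "blinded_coding":     2,  # coders blind to condition during analysis
    "attrition_reported": 1,  # drop-outs reported and comparable across conditions
    # external validity
    "sample_described":   1,  # age, gender, language background reported
    "task_ecological":    1,  # output task resembles the situation extrapolated to
}

def validity_score(study: dict) -> float:
    """Weighted proportion of criteria a study satisfies (0 to 1)."""
    total = sum(CRITERIA.values())
    met = sum(weight for criterion, weight in CRITERIA.items()
              if study.get(criterion, False))
    return met / total

# Score one hypothetical study record.
study = {"random_allocation": True, "blinded_coding": False,
         "attrition_reported": True, "sample_described": True,
         "task_ecological": False}
print(round(validity_score(study), 2))  # 0.57
```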

Hopefully this introduction has helped you think about validity in experiments in your own area of interest, and to take an objective view when assessing the quality of studies you are reviewing, or indeed conducting.

 


PhD Opportunities: The Wellsprings of Linguistic Diversity

PhD positions are available at ANU, working with a team of people investigating diversity and cultural evolution.  The call is below:

Applications are now being sought for three PhD positions on the project ‘The Wellsprings of Linguistic Diversity’, funded by the Australian Research Council for the period mid-2014 to mid-2019.

Each PhD student will undertake substantial fieldwork on variation in a particular speech community: Western Arnhem Land (Bininj Gun-wok and neighbouring areas), Vanuatu (Sa and adjoining languages, South Pentecost Island), and Samoa (Samoan). Support will include a four-year stipend ($29,844 p/a), generous fieldwork funding, and embedding of the doctoral research in the dynamic team setting of the project, as well as in the newly established ARC Centre of Excellence for the Dynamics of Language. Positions will start in early February 2015.

The project is led by Prof. Nick Evans and the project team including postdocs Dr Murray Garde, Dr Ruth Singer, and Dr Dineke Schokkin and doctoral scholar Eri Kashima (fieldworkers), postdoc Dr Mark Ellison (computational modelling), and consultants Profs. Miriam Meyerhoff and Catherine Travis (variationist sociolinguistics) and Emeritus Prof. Andy Pawley (Samoan).

The project’s goal is to understand why linguistic diversity evolves differentially in different parts of the world, through a combination of detailed sociolinguistic case studies of small-scale speech communities in their anthropological setting, and computational modelling of how micro-variation engenders macro-variation over iterations of transmission. The three high-diversity field sites are western Arnhem Land (Bininj Gun-wok and neighbouring languages), the Morehead district of Southern New Guinea (Nen, Nambu, Idi), and South Pentecost Island, Vanuatu (Sa and neighbouring languages). Samoa (Samoan) supplies a low-diversity comparator to the Vanuatu site, and controls from small speech communities in global languages (English and Spanish) will be obtained by other investigators on the project.

A fuller description of the project can be downloaded from http://chl.anu.edu.au/school/laureate.php

General information about the doctoral program in School of Culture, History and Language at the ANU College of Asia and the Pacific can be found at http://chl.anu.edu.au/school/students_phd.php

Specific enquiries should be directed to Nick Evans (nicholas.evans@anu.edu.au) and completed application dossiers sent to geoff.sjollema@anu.edu.au. Completed applications should include the following information:
(a)    CV with educational qualifications, any publications and other relevant experience (e.g. fieldwork, relevant internships)
(b)    a two-page statement setting out your preferred field site or sites, what skills and personal attributes you will bring to the project, and what you see as the most interesting and challenging issues you will need to solve
(c)    if available, other materials supporting your case (e.g. relevant articles or other materials)

Deadline:  Aug 3rd 2014, midnight, AEST

Once awards are made, successful applicants will be notified and then guided through making a formal application for enrolment status through the regular ANU system.


Syntax came before phonology?

A new paper by Collier et al., entitled “Language evolution: syntax before phonology?”, has just appeared in the Proceedings of the Royal Society B.

The abstract is here:

Phonology and syntax represent two layers of sound combination central to language’s expressive power. Comparative animal studies represent one approach to understand the origins of these combinatorial layers. Traditionally, phonology, where meaningless sounds form words, has been considered a simpler combination than syntax, and thus should be more common in animals. A linguistically informed review of animal call sequences demonstrates that phonology in animal vocal systems is rare, whereas syntax is more widespread. In the light of this and the absence of phonology in some languages, we hypothesize that syntax, present in all languages, evolved before phonology.

This is essentially a paper about the distinction between combinatorial and compositional structure and the emergence narrative of duality of patterning. I wrote a post about this a few months ago, see here. The paper focusses on evidence from non-human animals and also evidence from human languages, including Al-Sayyid Bedouin Sign Language, looking at differences and similarities between human abilities and those of other animals.

Peter Marler outlined different types of call combinations found in animal communication by making a distinction between ‘phonological syntax’ (combinatorial structure), which he claims is widespread in animals, and ‘lexical syntax’ (compositional structure), which he claims has yet to be described in animals (I can’t find a copy of the 1998 paper which Collier et al. cite, but he talks about this on his homepage here). Collier et al., however, disagree, and review several animal communication systems which they claim fall under a definition of “lexical syntax”.

They start by defining what they mean by the different levels of structure within language (I talk about this here). They present the following relatively uncontroversial table:

[Table from Collier et al. (2014) defining the levels of structure in language]

Evidence from non-human species

The paper reviews evidence from four species: 1) winter wrens (though you could arguably lump all birdsong in with their analysis for this one), 2) Campbell monkeys, 3) putty-nosed monkeys and 4) banded mongooses.

1) Birdsong is argued to be combinatorial: whatever the combination of notes or syllables, the songs always have the same purpose, and so the “meaning” cannot be argued to be a result of the combination.

2) In contrast to Marler, the authors argue that Campbell monkeys have compositional structure in their calls. The monkeys give a ‘krak’ call when there is a leopard near, and a ‘hok’ call when there is an eagle. Interestingly, they can add an ‘-oo’ to either of these calls to change their meanings: ‘krak-oo’ denotes any general disturbance and ‘hok-oo’ denotes a disturbance in the canopy. One can argue, then, that this “-oo” has the same meaning of “disturbance” no matter what construction it is in, and that “hok” generally means “above”, hinting at compositional structure.

3) The authors also discuss putty-nosed monkeys, which were also discussed in this paper by Scott-Phillips and Blythe (again, discussed here). Scott-Phillips and Blythe arrive at the conclusion that the calls of putty-nosed monkeys are combinatorial (i.e. the combined effect of two signals does not amount to the combined meaning of those two signals):

[Figure 1 from Scott-Phillips & Blythe, whose caption is quoted below]

“Applied to the putty-nosed monkey system, the symbols in this figure are: a, presence of eagles; b, presence of leopards; c, absence of food; A, ‘pyow’; B, ‘hack’ call; C = A + B ‘pyow–hack’; X, climb down; Y, climb up; Z ≠ X + Y, move to a new location. Combinatorial communication is rare in nature: many systems have a signal C = A + B with an effect Z = X + Y; very few have a signal C = A + B with an effect Z ≠ X + Y.”
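To make this definition concrete, here is a toy rendering in Python (the encoding is my own; the signal and effect names are taken from the quoted caption). A system is combinatorial in Scott-Phillips and Blythe’s sense when the effect of the composite signal differs from the union of the component effects:

```python
# Toy rendering of the definition above (my own encoding; signal and
# effect names are taken from the quoted caption).

effects = {
    ("pyow",):        {"climb down"},             # X
    ("hack",):        {"climb up"},               # Y
    ("pyow", "hack"): {"move to a new location"}, # Z
}

def is_combinatorial(composite: tuple) -> bool:
    """True if the composite's effect Z differs from the union X + Y."""
    summed = set().union(*(effects[(part,)] for part in composite))
    return effects[composite] != summed

print(is_combinatorial(("pyow", "hack")))  # True: Z != X + Y
```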

Collier et al., however, argue that this example is not necessarily combinatorial, as the pyow-hack sequences could be interpreted as idiomatic, or as having much more abstract meanings such as ‘move-on-ground’ and ‘move-in-air’. For this analysis to hold weight, one must assume the monkeys are able to use contextual information to make inferences about meaning, which is a pretty controversial claim, but Collier et al. argue that it shouldn’t be considered so far-fetched given the presence of compositionality in the calls of Campbell monkeys.

4) The authors also discuss banded mongooses, which emit close calls while looking for food. Their calls begin with an initial noisy segment that encodes the caller’s identity, and this is stable across all contexts. In searching and moving contexts, a second tonal harmonic varies in length consistently with context. So one could argue that identity and context are being systematically encoded into their call sequences, with one-to-one mappings between signal and meaning.

(One can’t help but think that a discussion of the possibility of compositionality in bee dances is a missed opportunity here.)

Syntax before phonology?

The authors use the above (very sketchy and controversial) examples of compositional structure to make the case that syntax came before phonology. Indeed, there are languages which lack a level of phonological patterning (the go-to example being Al-Sayyid Bedouin Sign Language). However, I would argue that the emergence of combinatoriality is, in large part, the result of the modality one is using to produce language. My current work is looking at how the size and dimensionality of a signal space, as well as how mappable that signal space is to a meaning space (to enable iconicity), can massively affect the emergence of a combinatorial system, and I don’t think it’s crazy to suggest that the modality used will affect the emergence narrative for duality of patterning.

Collier et al. attempt to use some evidence from spoken languages with large inventories, or instances where single phonemes in spoken languages are highly context-dependent meaningful elements, to back up a story where syntax might have come first in spoken language. But given the physical and perceptual constraints of a spoken system, it’s really hard for me to imagine how a productive syntactic system could have existed without a level of phonological patterning. The paper makes the point that it is theoretically possible (which is really interesting), but I’m not convinced that it is likely (though this paper by Juliette Blevins is well worth a read).

Whilst I don’t disagree with Collier et al.’s conclusion that phonological patterning is most likely the product of cultural evolution, I feel that the physical constraints of a linguistic modality will massively affect the emergence of such a system, and arguing for an over-arching emergence story without considering non-cognitive factors is an oversight.

References

Collier, K., Bickel, B., van Schaik, C., Manser, M., & Townsend, S. (2014). Language evolution: syntax before phonology? Proceedings of the Royal Society B: Biological Sciences, 281(1788), 20140263. DOI: 10.1098/rspb.2014.0263


Why Disagree? Some Critical Remarks on the Integration Hypothesis of Human Language Evolution

Shigeru Miyagawa, Shiro Ojima, Robert Berwick and Kazuo Okanoya have recently published a new paper in Frontiers in Psychology, which can be seen as a follow-up to the 2013 Frontiers paper by Miyagawa, Berwick and Okanoya (see Hannah’s post on this paper). While the earlier paper introduced what they call the “Integration Hypothesis of Human Language Evolution”, the follow-up paper seeks to provide empirical evidence for this theory and discusses potential challenges to the Integration Hypothesis.

The basic idea of the Integration Hypothesis, in a nutshell, is this: “All human language sentences are composed of two meaning layers” (Miyagawa et al. 2013: 2), namely “E” (for “expressive”) and “L” (for “lexical”). For example, sentences like “John eats a pizza”, “John ate a pizza”, and “Did John eat a pizza?” are supposed to have the same lexical meaning, but they vary in their expressive meaning. Miyagawa et al. point to some parallels between expressive structure and birdsong on the one hand and lexical structure and the alarm calls of non-human primates on the other. More specifically, “birdsongs have syntax without meaning” (Miyagawa et al. 2014: 2), whereas alarm calls consist of “isolated uttered units that correlate with real-world references” (ibid.). Importantly, however, even in human language, the Expression Structure (ES) only admits one layer of hierarchical structure, while the Lexical Structure (LS) does not admit any hierarchical structure at all (Miyagawa et al. 2013: 4). The unbounded hierarchical structure of human language (“discrete infinity”) comes about through recursive combination of both types of structure.

This is an interesting hypothesis (“interesting” being a convenient euphemism for “well, perhaps not that interesting after all”). Let’s have a closer look at the evidence brought forward for this theory.

Miyagawa et al. “focus on the structures found in human language” (Miyagawa et al. 2014: 1), particularly emphasizing the syntactic structure of sentences and the internal structure of words. In a sentence like “Did John eat pasta?”, the lexical items John, eat, and pasta constitute the LS, while the auxiliary do, being a functional element, is seen as belonging to the expressive layer. In a more complex sentence like “John read the book that Mary wrote”, the VP and NP nodes are allocated to the lexical layer, while the DP and CP nodes are allocated to the expressive layer.

Fig. 9 from Miyagawa et al. (2013), illustrating how unbounded hierarchical structure emerges from recursive combination of E- and L-level structures

As pointed out above, LS elements cannot directly combine with each other according to Miyagawa et al. (the ungrammaticality of e.g. John book and want eat pizza is taken as evidence for this), while ES is restricted to one layer of hierarchical structure. Discrete infinity then arises through recursive application of two rules:

(i) EP → E LP
(ii) LP → L EP
Rule (i) states that the E category can combine with LP to form an E-level structure. Rule (ii) states that the L category can combine with an E-level structure to form an L-level structure. Together, these two rules suffice to yield arbitrarily deep hierarchical structures.
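To see how these two rules generate unbounded depth, here is a minimal sketch in Python (my own illustration, not code from the paper):

```python
# Expanding rules (i) EP -> E LP and (ii) LP -> L EP recursively:
# each rule embeds the other category one level deeper, so two
# finite rules yield arbitrarily deep, strictly alternating structure.

def expand(category: str, depth: int):
    if depth == 0:
        return category                       # leave the phrase unexpanded
    if category == "EP":                      # rule (i): EP -> E LP
        return ("E", expand("LP", depth - 1))
    if category == "LP":                      # rule (ii): LP -> L EP
        return ("L", expand("EP", depth - 1))
    return category

print(expand("EP", 4))
# ('E', ('L', ('E', ('L', 'EP')))) -- E and L alternate, as in the figure above
```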

The alternation between lexical and expressive elements, as exemplified in Figure (3) from the 2014 paper (= Figure 9 from the 2013 paper, reproduced above), is thus essential to their theory since they argue that “inside E and L we only find finite-state processes” (Miyagawa et al. 2014: 3). Several phenomena, most notably Agreement and Movement, are explained as “linking elements” between lexical and functional heads (cf. also Miyagawa 2010). A large proportion of the 2014 paper is therefore dedicated to phenomena that seem to argue against this hypothesis.

For example, word-formation patterns that can be applied recursively seem to provide a challenge for the theory, cf. example (4) in the 2014 paper:

(4) a. [anti-missile]
b. [anti-[anti-missile]missile] missile

The ostensible point is that this formation can involve center embedding, which would constitute a non-finite state construction.

However, they propose a different explanation:

When anti- combines with a noun such as missile, the sequence anti-missile is a modifier that would modify a noun with this property, thus, [anti-missile]-missile,  [anti-missile]-defense. Each successive expansion forms via strict adjacency, (…) without the need to posit a center embedding, non-regular grammar.

Similarly, reduplication is re-interpreted as a finite state process. Furthermore, they discuss N+N compounds, which seem to violate “the assumption that L items cannot combine directly — any combination requires intervention from E.” However, they argue that the existence of linking elements in some languages provides evidence “that some E element does occur between the two L’s”. Their example is German Blume-n-wiese ‘flower meadow’; others include Freundeskreis ‘circle of friends’ or Schweinshaxe ‘pork knuckle’. It is commonly assumed that linking elements arose from grammatical markers such as genitive -s, e.g. Königswürde ‘royal dignity’ (from des Königs Würde ‘the king’s dignity’). In this example, the origin of the linking element is still transparent. The -es- in Freundeskreis, by contrast, is an example of a so-called unparadigmatic linking element since it literally translates to ‘circle of a friend’. In this case as well as in many others, the linking element cannot be traced back directly to a grammatical affix. Instead, it seems plausible to assume that the former inflectional suffix was reanalyzed as a linking element from the paradigmatic cases and subsequently used in other compounds as well.

To be sure, the historical genesis of German linking elements doesn’t shed much light on their function in present-day German, which is subject to considerable debate. Keeping in mind that these items evolved gradually, however, raises the question of how the E and L layers of compounds were linked in earlier stages of German (or any other language that has linking elements). In addition, there are many German compounds without a linking element, and in other languages such as English, “linked” compounds like craft-s-man are the exception rather than the rule. Miyagawa et al.’s solution seems a bit too easy to me: “In the case of teacup, where there is no overt linker, we surmise that a phonologically null element occurs in that position.”

As an empiricist, I am of course very skeptical towards any kind of null element. One could possibly rescue their argument by adopting concepts from Construction Grammar and assigning E status to the morphological schema [N+N], regardless of the presence or absence of a linking element, but then again, from a Construction Grammar point of view, assuming a fundamental dichotomy between E and L structures doesn’t make much sense in the first place. That said, I must concede that the E vs. L distinction reflects basic properties of language that play a role in any linguistic theory, but especially in Construction Grammar and in Cognitive Linguistics. On the one hand, it reflects the rough distinction between “open-class” and “closed-class” items, which plays a key role in Talmy’s (2000) Cognitive Semantics and in the grammaticalization literature (cf. e.g. Hopper & Traugott 2003). As many grammaticalization studies have shown, most if not all closed-class items are “fossils” of open-class items. The abstract concepts they encode (e.g. tense or modality) are highly relevant to our everyday experience and, consequently, to our communication, which is why they got grammaticized in the first place. As Rose (1973: 516) put it, there is no need for a word-formation affix deriving denominal verbs meaning “grasp NOUN in the left hand and shake vigorously while standing on the right foot in a 2 ½ gallon galvanized pail of corn-meal-mush”. But again, being aware of the historical emergence of these elements raises the question of whether a principled distinction between the meanings of open-class vs. closed-class elements is warranted.

On the other hand, the E vs. L distinction captures the fundamental insight that languages pair form with meaning. Although they are explicitly talking about the “duality of semantics”, Miyagawa et al. frequently allude to formal properties of language, e.g. by linking up syntactic structures with the E layer:

The expression layer is similar to birdsongs; birdsongs have specific patterns, but they do not contain words, so that birdsongs have syntax without meaning (Berwick et al., 2012), thus it is of the E type.

While the “expression” layer thus seems to account for syntactic and morphological structures, which are traditionally regarded as purely “formal” and meaningless, the “lexical” layer captures the referential function of linguistic units, i.e. their “meaning”. But what is meaning, actually? The LS as conceptualized by Miyagawa et al. only covers the truth-conditional meaning of sentences, or their “conceptual content”, as Langacker (2008) calls it. From a usage-based perspective, however, “an expression’s meaning consists of more than conceptual content – equally important to linguistic semantics is how that content is shaped and construed.” (Langacker 2002: xv) According to the Integration Hypothesis, this “construal” aspect is taken care of by closed-class items belonging to the E layer. However, the division of labor envisaged here seems highly idealized. For example, tense and modality can be expressed using open-class (lexical) items and/or relying on contextual inference, e.g. German Ich gehe morgen ins Kino ‘I go to the cinema tomorrow’.

It is a truism that languages are inherently dynamic, exhibiting a great deal of synchronic variation and diachronic change. Given this dynamicity, it seems hard to defend the hypothesis that a fundamental distinction between E and L structures which cannot combine directly can be found universally in the languages of the world (which is what Miyagawa et al. presuppose). We have already seen that in the case of compounds, Miyagawa et al. have to resort to null elements in order to uphold their hypothesis. Furthermore, it seems highly likely that some of the “impossible lexical structures” mentioned as evidence for the non-combinability hypothesis are grammatical at least in some creole languages (e.g. John book, want eat pizza).

In addition, it seems somewhat odd that E- and L-level structures as “relics” of evolutionarily earlier forms of communication are sought (and expected to be found) in present-day languages, which have been subject to millennia of development. This wouldn’t be a problem if the authors were not dealing with meaning, which is not only particularly prone to change and variation, but also highly flexible and context-dependent. But even if we assume that the existence of E-layer elements such as affixes and other closed-class items draws on innate dispositions, it seems highly speculative to link the E layer with birdsong and the L layer with primate calls on semantic grounds.

The idea that human language combines features of birdsong with features of primate alarm calls is certainly not too far-fetched, but the way this hypothesis is defended in the two papers discussed here seems strangely halfhearted and, all in all, quite unconvincing. What is announced as “providing empirical evidence” turns out to be a mostly introspective discussion of made-up English example sentences, and if the English examples aren’t convincing enough, the next best language (e.g. German) is consulted. (To be fair, in his monograph, Miyagawa (2010) takes a broader variety of languages into account.) In addition, much of the discussion is purely theory-internal and thus reminiscent of what James has so appropriately called “Procrustean Linguistics“.

To their credit, Miyagawa et al. do not rely exclusively on theory-driven analyses of made-up sentences but also take some comparative and neurological studies into account. Thus, the Integration Hypothesis – quite unlike the “Mystery” paper (Hauser et al. 2014) co-authored by Berwick and published in, you guessed it, Frontiers in Psychology (and insightfully discussed by Sean) – might be seen as a tentative step towards bridging the gap pointed out by Sverker Johansson in his contribution to the “Perspectives on Evolang” section in this year’s Evolang proceedings:

A deeper divide has been lurking for some years, and surfaced in earnest in Kyoto 2012: that between Chomskyan biolinguistics and everybody else. For many years, Chomsky totally dismissed evolutionary linguistics. But in the past decade, Chomsky and his friends have built a parallel effort at elucidating the origins of language under the label ‘biolinguistics’, without really connecting with mainstream Evolang, either intellectually or culturally. We have here a Kuhnian incommensurability problem, with contradictory views of the nature of language.

On the other hand, one could also see the Integration Hypothesis as deepening the gap, since it entirely draws on generative (or “biolinguistic”) presuppositions about the nature of language which are not backed by independent empirical evidence. Therefore, to conclusively support the Integration Hypothesis, much more evidence from many different fields would be necessary, and the theoretical presuppositions it draws on would have to be scrutinized on empirical grounds as well.

References

Hauser, Marc D.; Yang, Charles; Berwick, Robert C.; Tattersall, Ian; Ryan, Michael J.; Watumull, Jeffrey; Chomsky, Noam; Lewontin, Richard C. (2014): The Mystery of Language Evolution. In: Frontiers in Psychology 4. doi: 10.3389/fpsyg.2014.00401

Hopper, Paul J.; Traugott, Elizabeth Closs (2003): Grammaticalization. 2nd ed. Cambridge: Cambridge University Press.

Johansson, Sverker (2014): Perspectives on Evolang. In: Cartmill, Erica A.; Roberts, Séan; Lyn, Heidi; Cornish, Hannah (eds.): The Evolution of Language. Proceedings of the 10th International Conference. Singapore: World Scientific, 14.

Langacker, Ronald W. (2002): Concept, Image, and Symbol. The Cognitive Basis of Grammar. 2nd ed. Berlin, New York: De Gruyter (Cognitive Linguistics Research, 1).

Langacker, Ronald W. (2008): Cognitive Grammar. A Basic Introduction. Oxford: Oxford University Press.

Miyagawa, Shigeru (2010): Why Agree? Why Move? Unifying Agreement-Based and Discourse-Configurational Languages. Cambridge: MIT Press (Linguistic Inquiry, Monographs, 54).

Miyagawa, Shigeru; Berwick, Robert C.; Okanoya, Kazuo (2013): The Emergence of Hierarchical Structure in Human Language. In: Frontiers in Psychology 4. doi: 10.3389/fpsyg.2013.00071

Miyagawa, Shigeru; Ojima, Shiro; Berwick, Robert C.; Okanoya, Kazuo (2014): The Integration Hypothesis of Human Language Evolution and the Nature of Contemporary Languages. In: Frontiers in Psychology 5. doi: 10.3389/fpsyg.2014.00564

Rose, James H. (1973): Principled Limitations on Productivity in Denominal Verbs. In: Foundations of Language 10, 509–526.

Talmy, Leonard (2000): Toward a Cognitive Semantics. 2 vol. Cambridge, Mass: MIT Press.

P.S.: After writing three posts in a row in which I criticised all kinds of studies and papers, I hereby promise that in my next post, I will thoroughly recommend a book and return to a question raised only in passing in this post. [*suspenseful cliffhanger music*]


The evolution of phonetic capabilities: causes, constraints and consequences

At next year’s International Congress of Phonetic Sciences in Glasgow there will be a special interest group on the evolution of our phonetic capabilities. It will focus on the interaction between biological and cultural evolution, and encourages work from different modalities too. The call for papers is below:

In recent years, there has been a resurgence in research in the evolution of language and speech. New techniques in computational and mathematical modelling, experimental paradigms, brain and vocal tract imaging, corpus analysis and animal studies, as well as new archeological evidence, have allowed us to address questions relevant to the evolution of our phonetic capabilities.

This workshop invites contributions from researchers addressing the emergence of our phonetic capabilities. We are interested in empirical evidence from models and experiments which explore the evolutionary pressures behind the emergence of our phonetic capabilities, in both biological and cultural evolution, and the consequences biological constraints will have on processes of cultural evolution and vice versa. Contributions are welcome to cover not only the evolution of our physical ability to produce structured signals in different modalities, but also cognitive or functional processes that have a bearing on the emergence of phonemic inventories. We are also interested in contributions which look at the interaction between the two areas mentioned above, which are often dealt with separately in the field: that is, the interaction between the physical constraints imposed by a linguistic modality and the cognitive constraints born of learning biases and functional factors, and the consequences this interaction will have on emerging linguistic systems and inventories.

Contributions must meet the same submission requirements as those on the main ICPhS 2015 call for papers page.

Contributions can be sent as an attachment to hannah@ai.vub.ac.be by 16th February 2015.

The deadline is obviously quite far away, but feel free to use the same email address above to ask any questions about suitability of possible submissions or anything else.


Talking Heads at EvoLangX

This year saw the 10th instalment of the EvoLang Conference, and it was also the 15th anniversary of Luc Steels’ Talking Heads Experiment (brief review here). In celebration, the Evolutionary Linguistics Association organised a birthday party in Replugged (Vienna). The party not only featured some excellent tuneage by Replicated Typo’s very own Sean Roberts along with Bill Thompson, Tessa Verhoef and me, but it also featured, very aptly, a Talking Heads tribute band headed by none other than Luc Steels himself! For those of you who were there (or weren’t there), you can now relive (or see for the first time) the experience through YouTube (extra points for spotting your favourite evolutionary linguists dancing their little socks off):


QWERTY: The Next Generation

 

[This is a guest post by

Oh wait, I'm not a guest anymore. Thanks to James for inviting me to become a regular contributor to Replicated Typo. I hope I will have some interesting things to say about the evolution of language, cognition, and culture, and I promise that I'll try to keep my next posts a bit shorter than the guest post two weeks ago.

Today I'd like to pick up on an ongoing discussion over at Language Log. In a series of blog posts in early 2012, Mark Liberman took issue with the so-called "QWERTY effect". The QWERTY effect seems like an ideal topic for my first regular post, as it is tightly connected to some key topics of Replicated Typo: cultural evolution, the cognitive basis of language, and, potentially, spurious correlations. In addition, Liberman's coverage of the QWERTY effect has spawned an interesting discussion about research blogging (cf. Littauer et al. 2014).

But what is the QWERTY effect, actually? According to Kyle Jasmin and Daniel Casasanto (Jasmin & Casasanto 2012), the written form of words can influence their meaning, more particularly, their emotional valence. The idea, in a nutshell, is this: Words that contain more characters from the right-hand side of the QWERTY keyboard tend to "acquire more positive valences" (Jasmin & Casasanto 2012). Casasanto and his colleagues tested this hypothesis with a variety of corpus analyses and valence rating tasks.

Whenever I tell fellow linguists who haven't heard of the QWERTY effect about these studies, their reactions are quite predictable, ranging from "WHAT?!?" to "asdf". But unlike other commenters, I don't want to reject out of hand the idea that a QWERTY effect exists. Indeed, there is abundant evidence that "right" is commonly associated with "good". In his earlier papers, Casasanto provides quite convincing experimental evidence for the bodily basis of the cross-linguistically well-attested metaphors RIGHT IS GOOD and LEFT IS BAD (e.g. Casasanto 2009). In addition, it is fairly obvious that at the end of the 20th century, computer keyboards started to play an increasingly important role in our lives. Also, it seems legitimate to assume that in a highly literate society, written representations of words form an important part of our linguistic knowledge. Given these factors, the QWERTY effect is not such an outrageous idea. However, measuring it by determining the "Right-Side Advantage" of words in corpora is highly problematic, since a variety of potential confounding factors are not taken into account.

Finding the Right Name(s)


Frequencies of some (almost) randomly selected names in the USA.

In a new CogSci paper, Casasanto, Jasmin, Geoffrey Brookshire, and Tom Gijssels present five new experiments to support the QWERTY hypothesis. Since I am based at a department with a strong focus on onomastics, I found their investigation of baby names particularly interesting. Drawing on data from the US Social Security Administration website, they analyze all names that have been given to more than 100 babies in every year from 1960 to 2012. To determine the effect of keyboard position, they use a measure they call “Right-Side Advantage” (RSA): [(# right-side letters) − (# left-side letters)]. They find

“that the mean RSA has increased since the popularization of the QWERTY keyboard, as indicated by a correlation between the year and average RSA in that year (1960–2012, r = .78, df = 51, p = 8.6 × 10⁻¹²).”

In addition,

“Names invented after 1990 (n = 38,746) use more letters from the right side of the keyboard than names in use before 1990 (n = 43,429; 1960–1990 mean RSA = -0.79; 1991–2012 mean RSA = -0.27, t(81277.66) = 33.3, p < 2.2 × 10⁻¹⁶ [...]). This difference remained significant when length was controlled by dividing each name’s RSA by the number of letters in the name (t(81648.1) = 32.0, p < 2.2 × 10⁻¹⁶).”
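As an aside, the RSA measure itself is easy to compute. Here is a minimal sketch; the left/right letter sets below follow the conventional QWERTY hand split used in this literature, but the exact assignment is my assumption, so check the original papers before relying on it:

```python
# Sketch of the Right-Side Advantage (RSA). The letter sets follow the
# conventional QWERTY hand split (an assumption; check the papers for
# the exact assignment used).

LEFT = set("qwertasdfgzxcvb")   # letters typed with the left hand
RIGHT = set("yuiophjklnm")      # letters typed with the right hand

def rsa(name: str) -> int:
    """(# right-side letters) - (# left-side letters)."""
    letters = [c for c in name.lower() if c.isalpha()]
    return sum(c in RIGHT for c in letters) - sum(c in LEFT for c in letters)

print(rsa("Liam"))  # 2: l, i, m are right-side; a is left-side
print(rsa("Mary"))  # 0: m, y right-side; a, r left-side
```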

Mark Liberman has already pointed to some problematic aspects of this analysis (but see also Casasanto et al.’s reply). They do not justify why they choose the timeframe of 1960-2012 (although data are available from 1880 onwards), nor do they explain why they only include names given to at least 100 children in each year. Liberman shows that the results look quite different if all available data are taken into account – although, admittedly, an increase in right-side characters from 1990 onwards can still be detected. In their response, Casasanto et al. try to clarify some of these issues. They present an analysis of all names back to 1880 (well, not all names, but all names attested in every year since 1880), and they explain:

“In our longitudinal analysis we only considered names that had been given to more than 100 children in *every year* between 1960 and 2012. By looking at longitudinal changes in the same group of names, this analysis shows changes in names’ popularity over time. If instead you only look at names that were present in a given year, you are performing a haphazard collection of cross-sectional analyses, since many names come and go. The longitudinal analysis we report compares the popularity of the same names over time.”

I am not sure what to think of this. On the one hand, this is certainly a methodologically valid approach. On the other hand, I don’t agree that it is necessarily wrong to take all names into account. Given that 3,625 of all name types are attested in every year from 1960 to 2013 and that only 927 of all name types are attested in every year from 1880 to 2013 (the total number of types being 90,979), the vast majority of names is simply not taken into account in Casasanto et al.’s approach. This is all the more problematic given that parents have become increasingly individualistic in naming their children: The mean number of people sharing one and the same name has decreased in absolute terms since the 1960s. If we normalize these data by dividing them by the total number of name tokens in each year, we find that the mean relative frequency of names has continuously decreased over the timespan covered by the SSA data.
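For concreteness, here is a minimal sketch of that normalization step, assuming the SSA data sit locally as one file per year named yobYYYY.txt with columns name, sex, count (the file layout is an assumption about the SSA download format):

```python
# Sketch of the normalization step. Assumes the SSA data sit locally as
# one file per year named yobYYYY.txt with columns name, sex, count
# (an assumption about the download format).
import pandas as pd

def mean_relative_frequency(year: int) -> float:
    df = pd.read_csv(f"yob{year}.txt", names=["name", "sex", "count"])
    total_tokens = df["count"].sum()            # babies named that year
    return (df["count"] / total_tokens).mean()  # mean share per name type

rel_freq = {year: mean_relative_frequency(year) for year in range(1960, 2014)}
```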


Mean frequency of a name (i.e. mean number of people sharing one name) in absolute and relative terms, respectively.

Thus, Casasanto et al. use a sample that might not be very representative of how people name their babies. If the QWERTY effect is a general phenomenon, it should also be found when all available data are taken into account.

As Mark Liberman has already shown, this is indeed the case – although some quite significant ups and downs in the frequency of right-side characters can be detected well before the QWERTY era. But is this rise in frequency from 1990 onwards necessarily due to the spread of QWERTY keyboards – or is there an alternative explanation? Liberman has already pointed to “the popularity of a few names, name-morphemes, or name fragments” as potential factors determining the rise and fall of mean RSA values. In this post, I’d like to take a closer look at one of these potential confounding factors.

Sonorous Sounds and “Soft” Characters

When I saw Casasanto et al.’s data, I immediately wondered whether the change in character distribution could be explained in terms of phoneme distribution. My PhD advisor, Damaris Nübling, has done some work (see e.g. here [in German]) showing an increasing tendency towards names with a higher proportion of sonorous sounds in Germany. More specifically, she demonstrates that German baby names are becoming more “androgynous”, in that male names tend to assume features that used to be characteristic of (German) female names (e.g. hiatus; final full vowel; increase in the overall number of sonorous phonemes). Couldn’t a similar trend be detectable in American baby names?


Names showing particularly strong frequency changes among those names that appear among the Top 20 most frequent names at least once between 1960 and 2013.

If we take a cursory glance at those names that can be found among the Top 20 most frequent names of at least one year since 1960 and if we single out those names that experienced a particularly strong increase or decrease in frequency, we find that, indeed, sonorous names seem to become more popular. Those names that gain in popularity are characterized by lots of vowels, diphthongs (Aiden, Jayden, Abigail), hiatus (Liam, Zoey), as well as nasals and liquids (Lily, Liam).
To be sure, these cursory observations are not significant in and of themselves. To test in a bit more detail the hypothesis that phonological changes can (partly) account for the QWERTY effect, I basically split the sonority scale in half. I categorized characters typically representing vowels and sonorants as “soft sound characters” and those typically representing obstruents as “hard sound characters”. This is of course a ridiculously crude distinction entailing some problematic classifications. A more thorough analysis would have to take into account the fact that in many cases, one letter can stand for a variety of different phonemes. But as this is just an exploratory analysis for a blog post, I’ll go with this crude binary distinction. In addition, we can justify this binary categorization with an argument presented above: we can assume that the written representations of words are an important part of the linguistic knowledge of present-day language users. Thus, parents will probably not only be concerned with how a name sounds – they will also consider what it looks like in written form. Hence, there might be a preference for characters that prototypically represent “soft sounds”, irrespective of the sounds they actually stand for in a concrete case. But this is highly speculative and would have to be investigated in an entirely different experimental setup (e.g. with a psycholinguistic study using nonce names).


Distribution of “hard sound” vs. “soft sound” characters on the QWERTY keyboard.

Note that the characters representing “soft sounds” and “hard sounds”, respectively, are distributed unequally over the QWERTY keyboard. Given that most “soft sound characters” are also right-side characters, it is hardly surprising that we can detect not only an increase in the “Right-Side Advantage” (as well as the “Right-Side Ratio”, see below) of baby names, but also an increase in the mean “Soft Sound Ratio” (SSR: # of soft sound characters / total # of characters). This increase is significant for the time from 1960 to 2013 irrespective of the sample we use: a) all names attested since 1960, b) names attested in every year since 1960, c) names attested in every year since 1960 more than 100 times.
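As a sketch, the SSR can be computed as follows; the “soft” character set below is my own plausible rendering of the crude vowel-plus-sonorant split described above, not necessarily the exact set used in the analysis:

```python
# Sketch of the Soft Sound Ratio (SSR). The "soft" set below is one
# plausible rendering of the crude vowel-plus-sonorant split described
# above, not necessarily the exact character set used in the analysis.

SOFT = set("aeiouylmnrw")  # letters typically spelling vowels/sonorants

def ssr(name: str) -> float:
    """(# soft sound characters) / (total # of characters)."""
    letters = [c for c in name.lower() if c.isalpha()]
    return sum(c in SOFT for c in letters) / len(letters)

print(ssr("Liam"))  # 1.0  -- all four letters count as "soft"
print(ssr("Jack"))  # 0.25 -- only <a> counts as "soft"
```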


“Soft Sound Ratio” in three different samples: a) All names attested in the SSA data; b) all names attested in every year since 1960; c) all names attested in every year since 1960 at least 100 times.

Note that both the “Right-Side Advantage” and the “Soft Sound Ratio” are particularly high in names only attested after 1990. (For the sake of (rough) comparability, I use the relative frequency of right-side characters here, i.e. Right Side Ratio = # of right-side letters / total number of letters.)


“Soft Sound Ratio” and “Right-Side Ratio” for names only attested after 1990.

Due to the considerable overlap between right-side and “soft” characters, both the QWERTY Effect and the “Soft Sound” Hypothesis might account for the changes that can be observed in the data. If the QWERTY hypothesis is correct, we should expect an increase for all right-side characters, even those that stand for “hard” sounds. Conversely, we should expect a decrease in the relative frequency of left-side characters, even if they typically represent “soft” sounds. Indeed, the frequency of “Right-Side Hard Characters” does increase – in the time from 1960 to the mid-1980s. In the QWERTY era, by contrast, <h>, <p>, <k>, and <j> suffer a significant decrease in frequency. The frequency of “Left-Side Soft Characters”, by contrast, increases slightly from the late 1960s onwards.


Frequency of left-side “soft” characters and right-side “hard” characters in all baby names attested from 1960 to 2013.

Further potential challenges to the QWERTY Effect and possible alternative experimental setups
The commenters over at Language Log have also been quite creative in coming up with possible alternative explanations, and in challenging the QWERTY hypothesis by showing that random collections of letters show similarly strong patterns of increase or decrease. Thus, the increase in the frequency of right-side letters in baby names is perhaps equally well, if not better, explained by factors independent of character positions on the QWERTY keyboard. Of course, this does not prove that there is no such thing as a QWERTY effect. But as countless cases discussed on Replicated Typo have shown, taking multiple factors into account and considering alternative hypotheses is crucial in the study of cultural evolution. Although the phonological form of words is an obvious candidate as a potential confounding factor, it is not discussed at all in Casasanto et al.’s CogSci paper. However, it is briefly mentioned in Jasmin & Casasanto (2012: 502):

“In any single language, it could happen by chance that words with higher RSAs are more positive, due to sound–valence associations. But despite some commonalities, English, Dutch, and Spanish have different phonological systems and different letter-to-sound mappings.”

While this is certainly true, the sound systems and letter-to-sound mappings of these languages (as well as German and Portuguese, which are investigated in the new CogSci paper) are still quite similar in many respects. To rule out the possibility of sound–valence associations, it would be necessary to investigate the phonological makeup of positively vs. negatively connotated words in much more detail.


Right-Side Advantage (RSA) for male vs. female names in two different samples (all names attested in the SSA data and all names attested in every year since 1960).

The SSA name lists provide another means of critically examining the QWERTY hypothesis, since they differentiate between male and female names. If the QWERTY effect does play a significant role in parents’ name choices, we would expect it to be equally strong for boys’ and girls’ names – or at least approximately so.


Right-Side Ratio for three different samples (all names attested in the SSA lists, all names attested in every year since 1960, all names attested in every year since 1960 at least 100 times).

On the hypothesis that other factors such as trend names play a much more important role, by contrast, differences between the developments of male vs. female names are to be expected. Indeed, the data reveal some differences between the RSA / RSR development of boys vs. girls names. At the same time, however, these differences show that the “Soft Sound Hypothesis” can only partly account for the QWERTY Effect since the “Soft Sound Ratios” of male vs. female names develop roughly in parallel.


“Soft Sound Ratio” of male vs. female names.

Given the complexity of cultural phenomena such as naming preferences, we would of course hardly expect one factor alone to determine people’s choices. The QWERTY Effect, like the “Soft Sound” Preference, might well be one factor governing parents’ naming decisions. However, the experimental setups used so far to investigate the QWERTY hypothesis are much too prone to spurious correlations to provide convincing evidence for the idea that words with a higher RSA assume more positive valences because of their number of right-side letters.

Granted, the amount of experimental evidence assembled by Casasanto et al. for the QWERTY effect is impressive. Nevertheless, the correlations they find may well be spurious ones. Don’t get me wrong – I’m absolutely in favor of bold hypotheses (e.g. about Neanderthal language). But as a corpus linguist, I doubt that such a subtle preference can be meaningfully investigated using corpus-linguistic methods. As a corpus linguist, you’re always dealing with a lot of variables you can’t control for. This is not too big a problem if your research question is framed appropriately and if potential confounding factors are explicitly taken into account. But when it comes to a possible connection between single letters and emotional valence, the number of potential confounding factors just seems to outweigh the significance of an effect as subtle as the correlation between time and average RSA of baby names. In addition, some of the presumptions of the QWERTY studies would have to be examined independently: Does the average QWERTY user really use their left hand for typing left-side characters and their right hand for typing right-side characters – or are there significant differences between individual typing styles? How fluent is the average QWERTY user in typing? (The question of typing fluency is discussed in passing in the 2012 paper.)

The study of naming preferences entails even more potentially confounding variables. For example, if we assume that people want their children’s names to be as beautiful as possible not only in phonological, but also in graphemic terms, we could speculate that the form of letters (round vs. edgy or pointed) and the position of letters within the graphemic representation of a name play a more or less important role. In addition, you can’t control for, say, all names of persons that were famous in a given year and thus might have influenced parents’ naming choices.

If corpus analyses are, in my view, an inappropriate method to investigate the QWERTY effect, then what about behavioral experiments? In their 2012 paper, Jasmin & Casasanto reported an experiment in which they elicited valence judgments for pseudowords in order to rule out possible frequency effects:

“In principle, if words with higher RSAs also had higher frequencies, this could result in a spurious correlation between RSA and valence. Information about lexical frequency was not available for all of the words from Experiments 1 and 2, complicating an analysis to rule out possible frequency effects. In the present experiment, however, all items were novel and, therefore, had frequencies of zero.”

Note, however, that they used phonologically well-formed stimuli such as pleek or ploke. These can be expected to yield associations to existing words such as, say, peak and poke, or speak and spoke, etc. It would be interesting to repeat this experiment with phonologically ill-formed pseudowords. (After all, participants were told they were reading words in an alien language – why shouldn’t this language consist only of consonants?) Furthermore, Casasanto & Chrysikou (2011) have shown that space-valence mappings can change fairly quickly following a short-term handicap (e.g. being unable to use your right hand as a right-hander). Considering this, it would be interesting to perform experiments using a different kind of keyboard, e.g. an ABCDE keyboard, a KALQ keyboard, or – perhaps the best solution – a keyboard in which the right and the left side of the QWERTY keyboard are simply inverted. In a training phase, participants would have to become acquainted with the unfamiliar keyboard design. In the test phase, then, pseudowords that don’t resemble words in the participants’ native language should be used to figure out whether an ABCDE-, KALQ-, or reverse-QWERTY effect can be detected.

 

References

Casasanto, D. (2009). Embodiment of Abstract Concepts: Good and Bad in Right- and Left-Handers. Journal of Experimental Psychology: General 138, 351–367.

Casasanto, D., & Chrysikou, E. G. (2011). When Left Is “Right”. Motor Fluency Shapes Abstract Concepts. Psychological Science 22, 419–422.

Casasanto, D., Jasmin, K., Brookshire, G., & Gijssels, T. (2014). The QWERTY Effect: How typing shapes word meanings and baby names. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Conference of the Cognitive Science Society. Austin, TX: Cognitive Science Society.

Jasmin, K., & Casasanto, D. (2012). The QWERTY Effect: How Typing Shapes the Meanings of Words. Psychonomic Bulletin & Review 19, 499–504.

Littauer, R., Roberts, S., Winters, J., Bailes, R., Pleyer, M., & Little, H. (2014). From the Savannah to the Cloud. Blogging Evolutionary Linguistics Research. In L. McCrohon, B. Thompson, T. Verhoef, & H. Yamauchi, The Past, Present, and Future of Language Evolution Research. Student Volume following the 9th International Conference on the Evolution of Language (pp. 121–131).

Nübling, D. (2009). Von Monika zu Mia, von Norbert zu Noah. Zur Androgynisierung der Rufnamen seit 1945 auf prosodisch-phonologischer Ebene. Beiträge zur Namenforschung 44.


Defining iconicity and its repercussions in language evolution

There was an awful lot of talk about iconicity at this year’s EvoLang conference (as well as in previous years) – about its ability to bootstrap communication systems and solve symbol grounding problems – and this has led to talk of its possible role in the emergence of human language. Some work has been more sceptical than others about the role of iconicity, so I thought it would be useful to do a wee overview of some of the talks I saw in relation to how different presenters define iconicity (though this is by no stretch a comprehensive overview).

As with almost everything, how people define iconicity differs across studies. In a recent paper, Monaghan, Shillcock, Christiansen & Kirby (2014) identify two forms of iconicity in language: absolute iconicity and relative iconicity. Absolute iconicity is where some linguistic feature imitates a referent, e.g. onomatopoeia or gestural pantomime. Relative iconicity is where there is a systematic signal-meaning mapping, i.e. a correlation between similar signals and similar meanings. Relative iconicity usually only becomes clear when the whole meaning and signal spaces can be observed together and systematic relations can be observed between them.
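Since relative iconicity only becomes visible at the level of the whole system, one way to operationalize it is to correlate pairwise signal distances with pairwise meaning distances. Here is a minimal sketch with toy data (the data and distance choices are mine; a real analysis would use a Mantel test, since pairwise distances are not independent):

```python
# Correlating pairwise signal distances with pairwise meaning distances.
# Toy data and distance choices are mine; a real analysis would use a
# Mantel test, since pairwise distances are not independent.
from itertools import combinations
from scipy.stats import pearsonr

signals = {"big": "bomo", "medium": "bomi", "small": "kiti"}
meanings = {"big": 3.0, "medium": 2.0, "small": 1.0}  # size on a toy scale

def lev(a: str, b: str) -> int:
    """Plain recursive Levenshtein distance (fine for tiny strings)."""
    if not a: return len(b)
    if not b: return len(a)
    cost = int(a[-1] != b[-1])
    return min(lev(a[:-1], b) + 1, lev(a, b[:-1]) + 1, lev(a[:-1], b[:-1]) + cost)

pairs = list(combinations(signals, 2))
signal_d = [lev(signals[x], signals[y]) for x, y in pairs]
meaning_d = [abs(meanings[x] - meanings[y]) for x, y in pairs]
r, p = pearsonr(signal_d, meaning_d)
print(r)  # positive r: similar meanings tend to get similar signals
```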

Liz Irvine gave a talk on the core assumption that iconicity played a big role in bootstrapping language. She teases apart the distinction above by calling absolute iconicity “diagrammatic iconicity” and relative iconicity “imagic iconicity”. Imagic iconicity can be broken down even further, and can be measured on a continuum either in terms of how signals are used and interpreted by language users, or simply by objectively looking at meaning-signal mappings, where signs can be non-arbitrary but not necessarily treated as iconic by language users. Irvine claims that this distinction is important in assessing the role of iconicity in the emergence of language. She argues that diagrammatic or absolute iconicity may aid adults in understanding new signs, but it doesn’t necessarily aid early language learning in infants; imagic, or relative, iconicity is a better candidate to aid language acquisition and language emergence, where language users do not interpret the signal-meaning mappings explicitly as being iconic, even though they are non-arbitrary.

Irvine briefly discusses the idea that ape gestures are not iconic from the perspective of their users. Marcus Perlman, Nathaniel Clark and Joanne A. Tanner presented work on whether iconicity exists in ape gesture. They define iconic gestures as those which in any way resemble or depict their meanings, and break these down into pantomimed actions, directive touches and visible directives, which are all arguably examples of absolute iconicity. Following Irvine’s arguments, this broad definition of iconicity may not be so useful when drawing up scenarios for language evolution, and the authors try to provide a more detailed and nuanced analysis drawing on the interpretation of signs from the ape’s perspective. Current theories of iconicity in ape gesture maintain that any iconicity is an artefact of the gesture’s development through inheritance and ritualisation. However, the authors argue that these theories do not account for the variability and creativity seen in iconic ape gestures, which may help frame iconicity from the perspective of its user.

It’s difficult to analyse iconicity from an ape’s perspective; however, it should be much easier to get at how humans perceive and interpret different types of iconicity via experiments. I think that experimental design can help get at this, as can analysis from a user perspective via post-experimental questionnaires, or even post-experimental experiments (where naive participants are asked to rate to what degree a sign represents a meaning).

Gareth Roberts and Bruno Galantucci presented a study whose hypothesis was that a modality’s capacity for iconicity may inhibit the emergence of combinatorial structure (phonological patterning) in a system. This hypothesis may explain why emerging sign languages, which have more capacity for iconicity than spoken languages, can have fully expressive systems without a level of combinatorial structure (see here). They used the now famous paradigm from Galantucci’s 2005 experiment here. They asked participants to communicate a variety of meanings: either lines, which could be represented through absolute iconicity within the modality provided, or circles in various shades of green, which could not be iconically represented. The experiment showed that, indeed, the signals used for circles were made up of combinatorial elements, whereas the lines retained iconicity throughout the experiment. This is a great experiment and I really like it; however, I worry that it only looks at the two extreme ends of the iconicity continuum, and does not consider the effects of relative iconicity, or nuances of signal-meaning relations. In de Boer and Verhoef (2012), a mathematical model shows that shared topology between signal and meaning spaces will generate an iconic system with signal-meaning mapping, but mismatched topologies will generate systems with conventionalised structure. I think it is important that experimental work now looks into more subtle differences between signal and meaning spaces and the effects these differences will have on structure in emerging linguistic systems in the lab, and also into how participants’ interpretation of any iconicity or structure in a system affects the nature of that iconicity or structure. I’m currently running some experiments exploring this myself, so watch this space!

References

Where possible, I’ve linked to studies as I’ve cited them.

All other studies cited are included in Erica A. Cartmill, Seán Roberts, Heidi Lyn & Hannah Cornish, eds., The Evolution of Language: Proceedings of the 10th International Conference (EvoLang 10). It’s only £87.67 on Amazon (but it may be wiser to email the authors if you don’t have a friend with a copy).
