
Underwood and Sellers 2015: Beyond Whig History to Evolutionary Thinking

In the middle of their most interesting and challenging paper, How Quickly Do Literary Standards Change?, Underwood and Sellers have two paragraphs in which they raise the specter of Whig history and banish it. In the process they take some gratuitous swipes at Darwin and Lamarck and, by implication, at the idea that evolutionary thinking can be of benefit to literary history. I find these two paragraphs confused and confusing and so feel a need to comment on them.

Here’s what I’m doing: First, I present those two paragraphs in full, without interruption. That’s so you can get a sense of how their thought hangs together. Second, and the bulk of this post, I repeat those two paragraphs, in full, but this time with inserted commentary. Finally, I conclude with some remarks on evolutionary thinking in the study of culture.

Beware of Whig History

By this point in their text Underwood and Sellers have presented their evidence and their basic, albeit unexpected, finding that change in English-language poetry from 1820 to 1919 is continuous and in the direction of standards implicit in the choices made by 14 selective periodicals. They’ve even offered a generalization that they think may well extend beyond the period they’ve examined (p. 19): “Diachronic change across any given period tends to recapitulate the period’s synchronic axis of distinction.” While I may get around to discussing that hypothesis – which I like – in another post, we can set it aside for the moment.

I’m interested in two paragraphs they write in the course of showing how difficult it will be to tease a causal model out of their evidence. Those paragraphs are about Whig history. Here they are in full and without interruption (pp. 20-21):

Nor do we actually need a causal explanation of this phenomenon to see that it could have far-reaching consequences for literary history. The model we’ve presented here already suggests that some things we’ve tended to describe as rejections of tradition — modernist insistence on the concrete image, for instance — might better be explained as continuations of a long-term trend, guided by established standards. Of course, stable long-term trends also raise the specter of Whig history. If it’s true that diachronic trends parallel synchronic principles of judgment, then literary historians are confronted with material that has already, so to speak, made a teleological argument about itself. It could become tempting to draw Lamarckian inferences — as if Keats’s sensuous precision and disillusionment had been trying to become Swinburne all along.

We hope readers will remain wary of metaphors that present historically contingent standards as an impersonal process of adaptation. We don’t see any evidence yet for analogies to either Darwin or Lamarck, and we’ve insisted on the difficulty of tracing causality exactly to forestall those analogies. On the other hand, literary history is not a blank canvas that acquires historical self-consciousness only when retrospective observers touch a brush to it. It’s already full of historical observers. Writing and reviewing are evaluative activities already informed by ideas about “where we’ve been” and “where we ought to be headed.” If individual writers are already historical agents, then perhaps the system of interaction between writers, readers, and reviewers also tends to establish a resonance between (implicit, collective) evaluative opinions and directions of change. If that turns out to be true, we would still be free to reject a Whiggish interpretation, by refusing to endorse the standards that happen to have guided a trend. We may even be able to use predictive models to show how the actual path of literary history swerved away from a straight line. (It’s possible to extrapolate a model of nineteenth-century reception into the twentieth, for instance, and then describe how actual twentieth-century reception diverged from those predictions.) But we can’t strike a blow against Whig history simply by averting our eyes from continuity. The evidence we’re seeing here suggests that literary-historical trends do turn out to be relatively coherent over long timelines.

I agree with those last two sentences. It’s how Underwood and Sellers get there that has me a bit puzzled.
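
One concrete thing in that passage is the parenthetical about extrapolation. Here’s a minimal sketch, with entirely synthetic numbers, of what it might look like to fit a trend on nineteenth-century reception scores, project it forward, and measure how the twentieth century diverges – my gloss on their suggestion, not their code:

```python
# A minimal sketch (not Underwood & Sellers' code) of the extrapolation
# they describe: fit a trend on nineteenth-century model scores, project
# it into the twentieth century, and measure how far actual reception
# diverges from the projection. All data here is synthetic.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "reception" scores for volumes published 1820-1949.
years = np.arange(1820, 1950)
scores = 0.003 * (years - 1820) + rng.normal(0, 0.05, len(years))
scores[years >= 1900] += 0.1 * np.sin((years[years >= 1900] - 1900) / 8.0)  # a made-up swerve

# Fit the trend only on the nineteenth century.
train = years < 1900
slope, intercept = np.polyfit(years[train], scores[train], 1)

# Extrapolate into the twentieth century and measure divergence.
predicted = slope * years + intercept
divergence = scores - predicted
print("mean 20thC divergence from the 19thC trend:", divergence[~train].mean().round(3))
```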


Underwood and Sellers 2015: Cosmic Background Radiation, an Aesthetic Realm, and the Direction of 19thC Poetic Diction

I’ve read and been thinking about Underwood and Sellers 2015, How Quickly Do Literary Standards Change?, both the blog post and the working paper. I’ve got a good many thoughts about their work and its relation to the superficially quite different work that Matt Jockers did on influence in chapter nine of Macroanalysis. I am, however, somewhat reluctant to embark on what might become another series of long-form posts, which I’m likely to need in order to sort out the intuitions and half-thoughts that are buzzing about in my mind.

What to do?

I figure that at the least I can just get it out there, quick and crude, without a lot of explanation. Think of it as a mark in the sand. More detailed explanations and explorations can come later.

19th Century Literary Culture has a Direction

My central thought is this: Both Jockers on influence and Underwood and Sellers on literary standards are looking at the same thing: long-term change in 19th Century literary culture has a direction – where that culture is understood to include readers, writers, reviewers, publishers and the interactions among them. Underwood and Sellers weren’t looking for such a direction, but have (perhaps somewhat reluctantly) come to realize that that’s what they’ve stumbled upon. Jockers seems a bit puzzled by the model of influence he built (pp. 167-168); but in any event, he doesn’t recognize it as a model of directional change. That interpretation of his model is my own.

When I say “direction” what do I mean?

That’s a very tricky question. In their full paper Underwood and Sellers devote two long paragraphs (pp. 20-21) to warding off the spectre of Whig history – the horror! the horror! In the Whiggish view, history has a direction, and that direction is a progression from primitive barbarism to the wonders of (current Western) civilization. When they talk of direction, THAT’s not what Underwood and Sellers mean.

But just what DO they mean? Here’s a figure from their work:

[Figure: “19C Direction,” from Underwood and Sellers]

Notice that we’re depicting time along the X-axis (horizontal), from roughly 1820 at the left to 1920 at the right. Each dot in the graph, regardless of color (red, gray) or shape (triangle, circle), represents a volume of poetry, and its position on the X-axis is the volume’s publication date.

But what about the Y-axis (vertical)? That’s tricky, so let us set that aside for a moment. The thing to pay attention to is the overall relation of these volumes of poetry to that axis. Notice that as we move from left to right, the volumes seem to drift upward along the Y-axis, a drift that’s easily seen in the trend line. That upward drift is the direction that Underwood and Sellers are talking about. That upward drift was not at all what they were expecting.

Drifting in Space

But what does the upward drift represent? What’s it about? It represents movement in some space, and that space stands for poetic diction or language. What we see along the Y-axis is a one-dimensional reduction or projection of a space that in fact has 3200 dimensions. Now, that’s not how Underwood and Sellers characterize the Y-axis. That’s my reinterpretation of that axis. I may or may not get around to writing a post in which I explain why that’s a reasonable interpretation.
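
To make that reinterpretation concrete, here’s a toy sketch. It assumes a regularized logistic regression of the general sort Underwood and Sellers describe; the data, labels, and parameter values below are all invented for illustration:

```python
# A toy illustration of reading the Y-axis as a one-dimensional
# projection of a 3200-dimensional feature space: the decision function
# of a logistic regression assigns each volume exactly one number.
# All data below is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_volumes, n_features = 720, 3200

# Fake word-frequency features for 720 volumes, plus a label for
# whether each volume was "reviewed" in a selective periodical.
X = rng.normal(0, 1, (n_volumes, n_features))
reviewed = rng.integers(0, 2, n_volumes)
X[reviewed == 1] += 0.05  # reviewed volumes shifted slightly in feature space

model = LogisticRegression(C=0.01, max_iter=1000).fit(X, reviewed)

# Each volume's position on the "Y-axis": one number per 3200-dim vector.
y_axis = model.decision_function(X)
print(y_axis.shape)  # (720,)
```

Plot those numbers against publication date and you get exactly the kind of picture shown above: each dot a volume, its height a projection from a very high-dimensional space.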


What the Songbird Said Radio Programme

BBC Radio 4 has a new radio programme about songbirds and human language, including contributions from Simon Fisher, Katie Slocombe and Johan Bolhuis, among others.

You can listen here:

http://bbc.in/1KAO2Cq

And here’s the synopsis:

Could birdsong tell us something about the evolution of human language? Language is arguably the single thing that most defines what it is to be human and unique as a species. But its origins – and its apparent sudden emergence around a hundred thousand years ago – remain mysterious and perplexing to researchers. But could something called vocal learning provide a vital clue as to how language might have evolved? The ability to learn and imitate sounds – vocal learning – is something that humans share with only a few other species, most notably, songbirds. Charles Darwin noticed this similarity as far back as 1871 in the Descent of Man and in the last couple of decades, research has uncovered a whole host of similarities in the way humans and songbirds perceive and process speech and song. But just how useful are animal models of vocal communication in understanding how human language might have evolved? Why is it that there seem to be parallels with songbirds but little evidence that our closest primate relatives, chimps and bonobos, share at least some of our linguistic abilities?

Computational Construction Grammar and Constructional Change

—————————-
Call For Participation
—————————-

Computational Construction Grammar and Constructional Change
Annual Conference of the Linguistic Society of Belgium
8 June 2015, Vrije Universiteit Brussel, Belgium

http://ai.vub.ac.be/bkl-2015

After several decades in scientific purgatory, language evolution has reclaimed its place as one of the most important branches in linguistics, and it is increasingly recognised as one of the most crucial sources of evidence for understanding human cognition. This renewed interest is accompanied by exciting breakthroughs in the science of language. Historical linguists can now couple their expertise to powerful methods for retrieving and documenting which changes have taken place. At the same time, construction grammar is increasingly being embraced in all areas of linguistics as a fruitful way of making sense of all these empirical observations. Construction grammar has also enthused formal and computational linguists, who have developed sophisticated tools for exploring issues in language processing and learning, and how new forms of grammar may emerge in speech populations.

Separately, linguists and computational linguists can therefore explain which changes take place in language and how these changes are possible. When working together, however, they can also address the question of why language evolves over time and how it emerged in the first place. This year, the BKL-CBL conference therefore brings together top researchers from both fields to put evidence and methods from both perspectives on the table, and to take up the challenge of uniting these efforts.

————————
Invited Speakers
————————
The conference features presentations by five keynote speakers.
* Graeme Trousdale (University of Edinburgh)
* Luc Steels (VUB/ IBE Barcelona)
* Kristin Davidse (University of Leuven)
* Peter Petré (University of Lille)
* Arie Verhagen (University of Leiden)

————————
Poster Presentations
————————
We are still accepting 500-word abstracts for poster presentations. All presentations must represent original, unpublished work not currently under review elsewhere. Work presented at the conference may be selected as a contribution to a special issue of the Belgian Journal of Linguistics (Summer 2016).

————————
Important dates
————————
* Abstract Submission: 29 May 2015
* Notification of acceptance: 1 June 2015
* Conference: 8 June 2015

————————
Introductory tutorial on Fluid Construction Grammar
————————
Learn how to write your own operational grammars in Fluid Construction Grammar in our tutorial on 7 and 9 June. The tutorial is practically oriented and mainly consists of hands-on exercises. Participation is free but registration is required.

————————
Organising Committee
————————
* Katrien Beuls, Vrije Universiteit Brussel, Belgium
* Remi van Trijp, Sony Computer Science Laboratories, Paris, France


Follow-up on Dennett and Mental Software

This is a follow-up to a previous post, Dennett’s WRONG: the Mind is NOT Software for the Brain. In that post I agreed with Tecumseh Fitch [1] that the hardware/software distinction for digital computers is not valid for the mind/brain. Dennett wants to retain the distinction [2], however, and I argued against that. Here are some further clarifications and considerations.

1. Technical Usage vs. Redescription

I asserted that Dennett’s desire to talk of mental software (or whatever) has no technical justification. All he wants is a different way of describing the same mental/neural processes that we’re investigating.

What did I mean?

Dennett used the term “virtual machine”, which has a technical, if a bit diffuse, meaning in computing. But little or none of that technical meaning carries over to Dennett’s use when he talks of, for example, “the long-division virtual machine [or] the French-speaking virtual machine”. There’s no suggestion in Dennett that a technical knowledge of the digital technique would give us insight into neural processes. So his usage is just a technical label without technical content.

2. Substrate Neutrality

Dennett has emphasized the substrate neutrality of computational and informatic processes. Practical issues of fabrication and operation aside, a computational process will produce the same result regardless of whether it is implemented in silicon, vacuum tubes, or gears and levers. I have no problem with this.
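
A throwaway sketch may help make the idea vivid. The one-bit adder below is defined once, abstractly, and run on two different “substrates” (stand-ins for silicon and for gears and levers); the results cannot differ:

```python
# A toy illustration of substrate neutrality: the same abstract
# computation (a one-bit full adder built from NAND) run on two
# different "substrates". Only the primitive differs; the results
# cannot.
def nand_silicon(a, b):          # stand-in for a transistor gate
    return 1 - (a & b)

def nand_gears(a, b):            # stand-in for gears and levers
    return 0 if (a, b) == (1, 1) else 1

def full_adder(a, b, carry_in, nand):
    # Every gate below is expressed in terms of the supplied NAND.
    def xor(x, y):
        t = nand(x, y)
        return nand(nand(x, t), nand(y, t))
    def and_(x, y):
        return nand(nand(x, y), nand(x, y))
    def or_(x, y):
        return nand(nand(x, x), nand(y, y))
    s1 = xor(a, b)
    total = xor(s1, carry_in)
    carry_out = or_(and_(a, b), and_(s1, carry_in))
    return total, carry_out

for bits in [(0, 0, 0), (0, 1, 0), (1, 1, 0), (1, 1, 1)]:
    assert full_adder(*bits, nand_silicon) == full_adder(*bits, nand_gears)
print("identical results on both substrates")
```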

As I see it, taken only this far we’re talking about humans designing and fabricating devices and systems. The human designers and fabricators have a “transcendental” relationship to their devices. They can see and manipulate them whole, top to bottom, inside and out.

But of course, Dennett wants this to extend to neural tissue as well. Once we know the proper computational processes to implement, we should be able to implement a conscious intelligent mind in digital technology that will not be meaningfully different from a human mind/brain. The question here, it seems to me, is whether this is possible in principle.

Dennett has recently come to the view that living neural tissue has properties lacking in digital technology [3, 4, 5]. What does that do to substrate neutrality?


Dennett’s WRONG: the Mind is NOT Software for the Brain

And he more or less knows it; but he wants to have his cake and eat it too. It’s a little late in the game to be learning new tricks.

I don’t know just when people started casually talking about the brain as a computer and the mind as software, but it’s been going on for a long time. But it’s one thing to use such language in casual conversation. It’s something else to take it as a serious way of investigating mind and brain. Back in the 1950s and 1960s, when computers and digital computing were still new and the territory – both computers and the brain – relatively unexplored, one could reasonably proceed on the assumption that brains are digital computers. But an opposed assumption – that brains cannot possibly be computers – was also plausible.

The second assumption strikes me as beside the point for those of us who find computational ideas essential to thinking about the mind, for we can proceed without the stronger assumption that the mind/brain is just a digital computer. It seems to me that that stronger assumption is now past its sell-by date.

The major problem is that living neural tissue is quite different from silicon and metal. Silicon and metal passively take on the impress of purposes and processes humans program into them. Neural tissue is a bit trickier. As for Dennett, no one championed the computational mind more vigorously than he did, but now he’s trying to rethink his views, and that’s interesting to watch.

The Living Brain

In 2014 Tecumseh Fitch published an article in which he laid out a computational framework for “cognitive biology” [1]. In that article he pointed out why the software/hardware distinction doesn’t really work for brains (p. 314):

Neurons are living cells – complex self-modifying arrangements of living matter – while silicon transistors are etched and fixed. This means that applying the “software/hardware” distinction to the nervous system is misleading. The fact that neurons change their form, and that such change is at the heart of learning and plasticity, makes the term “neural hardware” particularly inappropriate. The mind is not a program running on the hardware of the brain. The mind is constituted by the ever-changing living tissue of the brain, made up of a class of complex cells, each one different in ways that matter, and that are specialized to process information.

Yes, though I’m just a little antsy about that last phrase – “specialized to process information” – as it suggests that these cells “process” information in the way that clerks process paperwork: moving it around, stamping it, denying it, approving it, amending it, and so forth. But we’ll leave that alone.

One consequence of the fact that the nervous system is made of living tissue is that it is very difficult to undo what has been learned into the detailed micro-structure of this tissue. It’s easy to wipe a hunk of code or data from a digital computer without damaging the hardware, but it’s almost impossible to do anything like that with a mind/brain. How do you remove a person’s knowledge of Chinese history, or their ability to speak Basque, and nothing else, and do so without physical harm? It’s impossible.
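
Here’s a toy illustration of why. Nothing in it models real neurons; it only shows that in distributed storage of the Hebbian sort, a stored item can’t be cleanly subtracted unless you know it exactly – an imperfect “erasure” leaves most of the trace behind:

```python
# A toy sketch (nothing here models real neurons) of why unlearning is
# hard in distributed storage. Several patterns share one weight matrix;
# removing one of them cleanly would require knowing it exactly, and an
# imperfect erasure leaves most of the memory trace behind.
import numpy as np

rng = np.random.default_rng(2)
n = 200
patterns = rng.choice([-1, 1], (4, n)).astype(float)

# Hebbian storage: every pattern is written into the same matrix.
W = sum(np.outer(p, p) for p in patterns) / n

def trace_strength(W, p):
    """How strongly pattern p is still present in the weights."""
    return float(W @ p @ p) / n

# Attempted erasure of pattern 0 using a 75%-accurate estimate of it.
estimate = patterns[0] * rng.choice([1, 1, 1, -1], n)
W_wiped = W - np.outer(estimate, estimate) / n

print("pattern 0 before:", round(trace_strength(W, patterns[0]), 2))
print("pattern 0 after :", round(trace_strength(W_wiped, patterns[0]), 2))
print("pattern 1 after :", round(trace_strength(W_wiped, patterns[1]), 2))
```

Scrub harder – subtract more of your imperfect estimate – and you start damaging the other patterns that share those same weights. That is the bind the living brain is in.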


ICPhS Phonetic Evolution Meeting. 12/8/2015 in Glasgow

At this year’s International Congress of Phonetic Sciences in Glasgow, there is a special interest satellite meeting on the evolution of phonetic capabilities.

Title: The Evolution of Phonetic Capabilities: Causes, Constraints and Consequences

Date: Wednesday 12th August 2015

Time: 13.30 – 18.30

Place: Glasgow SECC, Boisdale 1

Registration is £10, and can be completed through the ICPhS registration page under “Registration only with no accommodation”.

If you would like to register only for this meeting, without registering for the main ICPhS conference, you can do so by emailing contact@icphs2015.info

For any other queries, contact hannah@ai.vub.ac.be

About:

In recent years, there has been a resurgence in research in the evolution of language and speech. New techniques in computational and mathematical modelling, experimental paradigms, brain and vocal tract imaging, corpus analysis and animal studies, as well as new archeological evidence, have allowed us to address questions relevant to the evolution of our phonetic capabilities. The workshop will focus on recent work addressing the emergence of our phonetic capabilities, with a special focus on the interaction between biological and cultural evolution.

Program:

The Evolution of Phonetic Capabilities: Causes, Constraints and Consequences

Wednesday 12th August – Glasgow SECC, Boisdale 1

13.50 – 14.00  Welcome
14.00 – 14.20  Introduction (Hannah Little)
14.20 – 14.50  Laryngeal Articulatory Function and Speech Origins (John H. Esling, Allison Benner & Scott R. Moisik)
14.50 – 15.20  Anatomical biasing and clicks: Preliminary biomechanical modeling (Scott R. Moisik & Dan Dediu)
15.20 – 15.50  Exploring potential climate effects on the evolution of human sound systems (Seán G. Roberts, Caleb Everett & Damián Blasi)
15.50 – 16.20  Coffee Break
16.20 – 16.50  General purpose cognitive processing constraints and phonotactic properties of the vocabulary (Padraic Monaghan & Willem H. Zuidema)
16.50 – 17.20  Simulating the interaction of functional pressure, redundancy and category variation in phonetic systems (Bodo Winter & Andy Wedel)
17.20 – 17.50  Universality in Cultural Transmission (Bill Thompson)
17.50 – 18.30  Discussion Panel (chaired by Bart de Boer)


On the Direction of Cultural Evolution: Lessons from the 19th Century Anglophone Novel

I’ve got another working paper available (title above):

Most of the material in this document was in an earlier working paper, Cultural Evolution: Literary History, Popular Music, Cultural Beings, Temporality, and the Mesh, which also has a great deal of material that isn’t in this paper. I’ve created this version so that I can focus on the issue of directionality, and so I’ve dropped all the material that didn’t relate to that issue. The last section, The Universe and Time, is new, as is this introduction.

* * * * *

Abstract: Matthew Jockers has analyzed a corpus of 19th century American and British novels (Macroanalysis 2013). Using standard techniques from natural language processing (NLP), Jockers created a 600-dimensional design space for a corpus of 3300 novels. There is no temporal information in that space, but when the novels are grouped according to close similarity that grouping generates a diagonal through the space that, upon inspection, is aligned with the direction of time. That implies that the process that created those novels is a directional one. Certain (kinds of) novels are necessarily earlier than others because that is how the causal mechanisms (whatever they are) work. This result has implications for our understanding of cultural evolution in general and of the relationship between cultural evolution and biological evolution.

1. Introduction: Direction in Design Space, Telos?
2. The Direction of Cultural Evolution: The Child is Father of the Man
3. Nineteenth Century English-Language Novels
4. Macroanalysis: Styles
5. Macroanalysis: Themes
6. Influence and Large Scale Direction
7. The 19th Century Anglophone Novel
8. Why Did Jockers Get That Result?
9. What Remains to be Done?
10. Literary History, Temporal Orders, and Many Worlds
11. The Universe and Time

Introduction: Evolving Along a Direction in Design Space

In 2013 Matthew Jockers published Macroanalysis: Digital Methods & Literary History. I devoted considerable blogging effort to it in 2014, including most, but not all, of the material in this working paper. In Jockers’ final study he operationalized the idea of influence by calculating the similarity between each pair of texts in his corpus of roughly 3300 19th century English-language novels. The rationale is obvious enough: If novelist K was influenced by novelist F, then you would expect her novels to resemble those of F more than those of C, whom K had never even read.

Jockers examined this data by creating a directed graph in which each text was represented by a node and each text (node) was connected only to those texts to which it had a high degree of resemblance. This is the resulting graph:

[Figure 9.3 from Macroanalysis: Jockers’ influence graph]

It is, alas, almost impossible to read this graph as represented here. But Jockers, of course, had interactive access to it and to all the data and calculations behind it. What is particularly interesting, though, is that the graph lays out the novels more or less in chronological order, from left to right (notice the coloring of the graph), though there was no temporal information in the underlying data. Much of the material in the rest of this working paper deals with that most interesting result (in particular, sections 2, 6, 7, 8, and 10).
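
To make the pipeline concrete, here’s a minimal sketch – emphatically not Jockers’ code – of the kind of procedure he describes: represent each novel as a feature vector, link each novel only to its most similar peers, and check whether those links are also close in time. The data is invented, with a slow random drift standing in for whatever the real causal mechanisms are:

```python
# A minimal sketch of a Jockers-style similarity graph on synthetic data.
import numpy as np

rng = np.random.default_rng(3)
n_novels, n_features = 300, 600
years = np.sort(rng.integers(1800, 1900, n_novels))

# Features drift slowly year by year (a random walk), with per-novel noise.
drift = rng.normal(0, 0.2, (100, n_features)).cumsum(axis=0)
X = drift[years - 1800] + rng.normal(0, 1, (n_novels, n_features))

# Cosine similarity between every pair of novels.
unit = X / np.linalg.norm(X, axis=1, keepdims=True)
sim = unit @ unit.T
np.fill_diagonal(sim, -np.inf)

# Each novel's five most similar peers: a crude, undirected stand-in
# for Jockers' thresholded graph.
neighbours = np.argsort(sim, axis=1)[:, -5:]

# If style drifts with time, stylistic neighbours are temporal neighbours.
gap = np.abs(years[neighbours] - years[:, None]).mean()
baseline = np.abs(years[:, None] - years[None, :]).mean()
print(f"mean year-gap to stylistic neighbours: {gap:.1f} (random baseline: {baseline:.1f})")
```

No date enters the similarity calculation, yet the graph’s neighbourhoods track chronology; that is the sense in which temporal order can fall out of purely formal features.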

What I want to do here is, first of all, reframe my treatment of Jockers’ analysis in terms of something we might call a design space (a phrase I take from Dan Dennett, though I believe it is a common one in certain intellectual circles). Then I emphasize the broader metaphysical implications of Jockers’ analysis.


Has Dennett Undercut His Own Position on Words as Memes?

Early in 2013 Dan Dennett had an interview posted at John Brockman’s Edge site, The Normal Well-Tempered Mind. He opened by announcing that he’d made a mistake early in his career, that he’d opted for a conception of the brain-as-computer that was too simple. He’s now trying to revamp his sense of what the computational brain is like. He said a bit about that in that interview, and a bit more in a presentation he gave later in the year: If brains are computers, what kind of computers are they? He made some remarks in that presentation that undermine his position on words as memes, though he doesn’t seem to realize that.

Here’s the abstract of that talk:

Our default concepts of what computers are (and hence what a brain would be if it was a computer) include many clearly inapplicable properties (e.g., powered by electricity, silicon-based, coded in binary), but other properties are no less optional, but not often recognized: Our familiar computers are composed of millions of basic elements that are almost perfectly alike – flipflops, registers, or-gates – and hyper-reliable. Control is accomplished by top-down signals that dictate what happens next. All subassemblies can be designed with the presupposition that they will get the energy they need when they need it (to each according to its need, from each according to its ability). None of these is plausibly mirrored in cerebral computers, which are composed of billions of elements (neurons, astrocytes, …) that are no-two-alike, engaged in semi-autonomous, potentially anarchic or even subversive projects, and hence controllable only by something akin to bargaining and political coalition-forming. A computer composed of such enterprising elements must have an architecture quite unlike the architectures that have so far been devised for AI, which are too orderly, too bureaucratic, too efficient.

While there’s nothing in that abstract that seems to undercut his position on memes, and he affirmed that position toward the end of the talk, we need to look at some of the details.

The Material Mind is a Living Thing

The details concern Terrence Deacon’s recent book, Incomplete Nature: How Mind Emerged from Matter (2013). Rather than quote from Dennett’s remarks in the talk, I’ll quote from his review, “Aching Voids and Making Voids” (The Quarterly Review of Biology, Vol. 88, No. 4, December 2013, pp. 321-324). The following passage may be a bit cryptic, but short of reading the relevant chapters in Deacon’s book (which I’ve not done) and providing summaries, there’s not much I can do, though Dennett says a bit more both in his review and in the video.

Here’s the passage:

But if we are going to have a proper account of information that matters, which has a role to play in getting work done at every level, we cannot just discard the sender and receiver, two homunculi whose agreement on the code defines what is to count as information for some purpose. Something has to play the roles of these missing signal-choosers and signal-interpreters. Many—myself included—have insisted that computers themselves can serve as adequate stand-ins. Just as a vending machine can fill in for a sales clerk in many simplified environments, so a computer can fill in for a general purpose message-interpreter. But one of the shortcomings of this computational perspective, according to Deacon, is that by divorcing information processing from thermodynamics, we restrict our theories to basically parasitical systems, artifacts that depend on a user for their energy, for their structure maintenance, for their interpretation, and for their raison d’être.

In the case of words the signal choosers and interpreters are human beings and the problem is precisely that they have to agree on “what is to count as information for some purpose.” By talking of words as memes, and of memes as agents, Dennett sweeps that problem under the conceptual rug.


Causality in linguistics: Nodes and edges in causal graphs

This coming week I’ll be at the Causality in the Language Sciences conference. One of the topics of discussion will be how to integrate theories of causality into linguistic work. Bayesian Causal Graphs are a core approach to causality, and seem like a useful framework for thinking about linguistic problems. However, it’s not entirely clear whether all questions in linguistics can be represented using causal graphs. In this post, I’ll discuss some possible uses of Bayesian Causal Graphs, and test the fit of some actual data to some causal structures. (And please forgive my basic understanding of causality theory!)

Causal graphs are composed of states connected by edges.  A change or activation of a state causes a change in another.  States and causes can be categorical and absolute, or statistical and even complex in their relations.  Causal graphs are often introduced with the following kind of structure, taken from Pearl’s seminal book on Causality.  The season causes it to rain (in winter) and causes the sprinkler to come on (in summer).  Both the sprinkler being on and rain independently cause the grass to be wet.  If the grass is wet, the grass becomes slippery:

[Figure: Pearl’s causal graph – season, sprinkler, rain, wet grass, slippery grass]
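
For concreteness, here’s that graph as a tiny forward-sampling program. The graph structure is Pearl’s; the probabilities are invented for illustration:

```python
# Pearl's sprinkler example as a forward-sampling Bayesian network:
# season -> rain, season -> sprinkler, {rain, sprinkler} -> wet -> slippery.
# The conditional probabilities below are made up.
import random

def sample():
    season = random.choice(["winter", "summer"])
    rain = random.random() < (0.6 if season == "winter" else 0.1)
    sprinkler = random.random() < (0.05 if season == "winter" else 0.7)
    wet = rain or sprinkler            # either cause suffices
    slippery = wet and random.random() < 0.9
    return season, rain, sprinkler, wet, slippery

# Estimate P(rain | slippery) by simple rejection sampling.
draws = [s for s in (sample() for _ in range(100_000)) if s[4]]
print("P(rain | grass is slippery) ~", sum(s[1] for s in draws) / len(draws))
```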

This example is easy to understand because each state is binary and (in this simple world) each causal effect is immediate and direct.  However, finding a similar example for linguistics is tricky.  Linguists may simply not agree on what the nodes are or what the edges represent.

