“Where does he get this?” I heard another workshop participant exclaim after David N. Evans’ flash-animation eye-blink slide illustrating the natural coordination of the reading mind with the biological moistening mechanisms that lubricate the eyeball.
“Stern and Dunham (1990) … noted task demands affect when one blinks (referred to as blink location). For example, readers tend to place their blinks at ‘semantically appropriate places in the text,’ such as the end of a sentence, paragraph, or page” (italics in original, bold added)
Coordination between when (timing) and where (location, the “places”) is the focal point of most of my research. The two examples, eye blinking during the reading of English text and eye blinks serving as grammar and prosody in signed (ASL-specific) utterances, inspire a hypothesis about Mikhail Bakhtin’s original, conceptual use of the term “utterance” in his analyses of discourse in novels and the uptake of the term by researchers of spontaneous spoken language in real (nonfictional) face-to-face interaction. Could Bakhtin have, intuitively or subconsciously, noted a physiological coordination of eye blinks with spoken production? Or felt his own blinking while he read?
David swears he did not color-coordinate his wardrobe with the background, but he did follow Deaf norms and tell us (hundreds of participants) that someone had informed him of the match. That’s a concrete example of the kind of co-incidence of space (place/location) and time (during the moment of his presentation) that we all could learn to follow. (A Facebook group, perhaps, tracking David’s presentation wardrobe until the next RID conference in Atlanta, 2011?!) During and after his workshop, I have been remembering various sources of information about the eyes and vision. For instance, Eye Movement Desensitization and Reprocessing (EMDR), a treatment for trauma.
The way I understand EMDR (simply) is, “Memories are linked in networks that contain related thoughts, images, emotions, and sensations.” If I recall the explanations of EMDR when it was first introduced to me, the network of memories can include particular (specific, repeated) eye movements, which can be deliberately altered through practice, disrupting parts of the linkage that re-create the emotions of the trauma. “Learning occurs when new associations are forged with material already stored in memory.” I also thought about a recent lesson from a yoga teacher about using the opposite-side eye ‘to lead’ when turning, because it provides the perceptual system with different input than leading with the eye on the same side (i.e., when turning to the right, the right eye tends to go there first, leading the rest of the body into that future time and space). By disrupting the habitual routine, we train ourselves to be more open to the unexpected, instead of relying on typical expectations.
Also fresh in mind is my friend Anuj’s recent PhD defense on the topic of Risk Perception and Awareness Training for young/new drivers, in which eye gaze is tracked and discussed with students, improving their awareness and thus reducing the risk of accidental death. I was struck by how unaware drivers are of
- the significance of looking,
- knowing where to look, and
- being deliberate about what one is looking for.
I frequently witness a similar unconsciousness with hearing (non-deaf) people when they “see” a Deaf person (or an interpreter) signing but do not realize this is language! Most people know it is rude to interrupt another person while they are talking, but this very basic etiquette often vanishes when the mode of communication is visual instead of auditory. Part of the rudeness stems, I suspect, not just from different conceptions of time (the hurry-hurry of hearing life, the long goodbyes of deaf life) but also from different perceptual experiences of time. You could say that an ASL brain is processing in one dimension, while a spoken English brain is processing in another. When people accustomed to using only one of the two languages communicate with each other (with or without an interpreter), a phase accommodation must be made, by one or both. When an interpreter is involved, this process of dimensional juggling or phase shifting becomes blatantly obvious. There are recurring patterns of the co-incidence of time/timing and space/place during interpretation that compose sites of cultural co-creation, as well as opportunities for repeating oppression, practicing empowerment, and experimenting with cooperation.
Notes:
* re: “Hymnal” for the conference handouts booklet: “I told the interpreters to use that word,” David explained in ASL. The interpreter voiced this into English, adding (deadpan), “it would not have been the word choice the interpreter would otherwise have used.”
David N. Evans
Stern & Dunham (1990). The ocular system. In J. T. Cacioppo & L. G. Tassinary (Eds.), Principles of psychophysiology: Physical, social, and inferential elements. Cambridge University Press.
Prosody Examples (includes link to a video and PowerPoint from Seattle Central Community College)
Bakhtin’s Theory of the Utterance, John Shotter, University of New Hampshire
Eye Movement Desensitization and Reprocessing (EMDR): Theory: The Adaptive Information Processing Model, based on F. Shapiro (1995, 2001, 2002)
Driver’s Education: Risk Perception and Awareness Training, Dr. Anuj Pradhan
Eye gaze is being used in Parkinson’s research too:
Looking at language (8/6/2009)
Hey Steph,
Thanks for the blog entry! Reading about your experience and subsequent thoughts about the workshop was really cool; I appreciate you taking the time to do this. And I swear I didn’t color coordinate my outfit with the stage backdrop! =)
I would love to see your research if you’d be so inclined to share. This whole subject of eye gaze & blinks is so rich and needs so much more in the way of research! Any additional thoughts, information, or insights are most definitely welcome.
The moving/blinking eyes weren’t a Flash animation. I made them in the Keynote presentation using two pictures and motion paths (took about an hour to create). I’m glad it reinforced the presented information for participants.
See you in Atlanta! =)
Hey David,
I’m so glad you commented because I needed to re-read this entry right about now! 🙂 I can use it as background information for my hearing students as they prepare to meet Deaf author Seth Gore and a few other Deaf folk from the local community.
If we can catch some moments in Atlanta, I’d be happy to talk about my research with you. It’s about language broadly, especially discourses: how we create our worlds through language, and especially how our language about language (how we talk about language in general) can create trajectories of meaning, for good and also not-so-good.
Be well!
[…] who have been pushing the envelope on mentoring within the field. (David Evans, btw, was sharply dressed, as usual.) They’re working on a very intensive, face-to-face model of interaction that engages deep […]