Monday 2 December 2013

Context, context, context

A word depends for its exact meaning on the context in which it is used. This is obviously true for ambiguous words: words with two or more different meanings. English examples are ‘bank’ (river bank, money bank), ‘break’ (coffee break, to break something into pieces), etc.

But even words that have only a single meaning depend on context for the specifics of that meaning. People who use sign language know this very well. ‘Small’ in ‘a small elephant’ is a different kind of small than in ‘a small ladybug’. ‘My dog’ can be a miniature dog or a big Great Dane.

Context is even more important for lipreaders, because so many words look alike, or are only partially visible. For lipreaders, most of the words they see are ‘fuzzy’ patterns: patterns that do not have a direct link to one single word or meaning in their brain.
In lipreading, you will get some of the information from the speaker’s mouth, but the pattern that you see may activate two to six words in your brain, or maybe more. There rarely is a clear one-to-one link to a single word. So you have to use context to select the correct alternative. At first, this may require conscious attention: Did he say ‘ball’? But that doesn’t make sense, so it must have been ‘mall’!

Later, you will probably be able to choose the correct alternative automatically, because you will not be lipreading word-by-word anymore, but in larger ‘chunks’. The words before and after will help your brain decide what the fuzzy bit in the middle was.
But there’s more than one context! There are (at least!) four:
  • Sentence context. “I’m tired, I’m going ….” “I want a pizza with .....” If you get the first words of the sentence right, you need very little information from the lips to get the last words too. Sentence context helps – if you know the language, and if the speaker speaks grammatically correct English!
  • Topic context. In a conversation about a football match, you can expect football words. In a talk at a bus-stop in England, expect weather words: “Third rainy day in a row, isn’t it?” Topic context is partly determined by the situation: shop, bus-stop, office, pub. Especially with strangers, we use fixed phrases and ‘routines’ in standard situations. Any real conversation is of course less dependent on context: with friends in a pub, you may discuss work, relationships, health, football, your new car, or plans for the weekend. Once you know the topic of the conversation, this will help you recognize the words. It’s one of the reasons why lipreaders don’t appreciate speakers who jump from one topic to the next. And why it’s always good to ask a speaker what he or she is talking about: “I missed that, are you still talking about Sunday’s match?”
    It's also why it's so difficult to lipread a speaker who talks about a topic that you know little about. 

Two trickier kinds of context:
  • The speaker’s intentions. This is where mindreading helps. What is he or she trying to tell you, and why? Is it a joke? A warning? Instructions? A question? Body language and facial expression may give you some cues. The better you know the speaker, the easier it will be to guess what’s on his or her mind, and what you can expect him or her to talk about.
  • Your own intentions and expectations. Included in these are all of the three contexts above, as well as your emotions, personality, life experience and whatever else you have stored in your brain. If you expect someone to speak English, you will not be able to lipread a speaker of German. If you expect someone to talk about the weather, you may not recognize a question about bus-times. If you expect someone to talk about football, you may miss a message about a missing pet. If you consider yourself a bad lipreader, you may not be able to get a single word, no matter how predictable it is. If you are afraid to fail, you may hide behind your Mona Lisa smile and never respond, let alone contradict someone, out of fear that you misread what the person said. Or: you will dominate each and every conversation, because when you speak, you don’t have to listen!

Context really is a double-edged sword. It can help, and it can kill. No, not persons, but definitely conversations!

This explains why good lipreaders are such good listeners: they may not actually hear what the other person is saying, but they will read between the lines and get the underlying message. Many people with good hearing, on the other hand, may hear all the words, but will interpret them in the context of their own expectations: they only hear what they want to hear, or expect to hear.


So yes, always use context. And be aware that it is a double-edged sword.

Monday 16 September 2013

Lipreading at work

Deborah Tannen, author of “You Just Don’t Understand: Women and Men in Conversation” (1991) about mis-communication between men and women, also wrote “Talking from 9 to 5: How Women’s and Men’s Conversational Styles Affect Who Gets Heard, Who Gets Credit, and What Gets Done at Work” (1994). The book describes the different communication styles of men versus women in working environments. Although her data were mostly collected in offices in the USA, some of her observations may have a wider validity, and may explain some of the problems that lipreaders, and people with hearing problems in general, experience in work situations.

Tannen’s main message is that conversation is a ritual, with unwritten rules and subconscious expectations. When someone does not play according to these unwritten rules, we experience cognitive dissonance (my word, not Tannen’s). What happens is not what you expect to happen. So you have to do some thinking. Were your expectations wrong, is the other person wrong, or is there something else that can explain what just happened? Very often, our response is emotional: “This is not fair!” Or, in the game-like terms Tannen mostly uses to describe conversations at work: it’s a ‘one-down’. Usually, feelings are hurt. The result: consciously or subconsciously, the person affected will try to get even, and/or will try to avoid further conversations with the person who caused the cognitive dissonance.

This happens between men and women at work. Different rules, different expectations, resulting in different subgroups. Different subgroups, each with their own conversational rituals.

This happens in meetings and in employee-employer talks. But as Tannen writes “On the job, the meat of the work that has to be done is held together, made pleasant and possible, by the ketchup, relish, and bun of conversational rituals.” (page 43)
“Talk at work is not confined to talk about work. Many moments are spent in casual chat that establishes the friendly working environment that is the necessary backdrop to getting work done. (..) Both women and men know that their small talk is just that – “small” compared to the “big” talk about work – but differences in small-talk habits can become very big when they get in the way of the easy day-to-day working relationships that make us feel comfortable at work and keep the lines of communication open for the big topics when they arise.” (page 64)

And that’s exactly where lipreaders, and people with hearing problems in general, will get in trouble. They may manage in one-to-one “big talk”: factual communication about work, where context, speaker, and conversational rituals are familiar and fairly predictable. They get the hamburger, but not the ketchup, relish or bun. Colleagues meet in groups: near the coffee machine, smoking outside, in the cafeteria. The lipreader who tries to blend in there notices too late who is speaking, doesn’t know the context, misses the point of the joke and laughs too late. Or: asks someone to repeat what was said. Or: responds with a remark that doesn’t fit the ritual. Or: tries to monopolize the conversation, because when you talk, you don’t have to listen. In all cases: rituals broken, cognitive dissonance, hurt feelings.

The lipreader may blame the others: they forget that I’m hard-of-hearing, they never take my needs into account, why can’t they speak one at a time, take their cigarettes out of their mouths, turn down the background music. Or, depending on the lipreader’s personality and/or frame of mind at the time, the lipreader blames him- or herself: Stupid me! Next time, I will stay at my desk. Or at home.

One of the problems is that these rituals are automatic, subconscious. The other person may remember that the lipreader has ‘special needs’ for a week, or a day, or 2 minutes, but – especially in conversational speech – will quickly go back on automatic pilot, and forget to speak clearly.

Lipreaders shouldn’t be surprised. It’s what they do themselves. You’d expect lipreaders to be very good lipspeakers, because they know first-hand how important it is to speak clearly, one at a time, in well-organized messages. Yes, dear lipreader, you may think that that is what you do. But please ask another lipreader for more objective feedback.

And yes, people with hearing problems are partly to blame, themselves. They hide their hearing aids, they get upset when someone speaks very – slowly – or – VERY LOUDLY especially for them, because they so desperately want to be seen as ‘normal’. Whatever that is.

Men AND women, hearing AND not hearing: don’t let your emotions take over when someone breaks a ritual! Cognitive dissonance is good! It makes you – and the other person – switch off the automatic pilot, it makes you a more aware participant. Real-life differences feed your brain!


PS: Small-talk solutions?

  • Befriend your colleagues on Facebook and regularly check their pages;
  • Become the editor of a weekly or monthly company newsletter or company bulletin board, and ask everyone to mail you their small and big news;
  • Find a buddy who will keep you updated;
  • Decide – and explain to your colleagues – that, because of your hearing problems and/or personality, you don’t do small talk. Several deaf people who were finally able to hear "small-talk" after receiving a cochlear implant report their disappointment: "Is that all hearing people talk about?" Yes, it is. Small-talk is mostly a feel-good ritual. It's not hamburger, it's ketchup.

Tuesday 10 September 2013

Lipreading Myth 3: only 30%?

Almost all publications about lipreading say that lipreaders 'get' only 30% of the information that hearing people do. Lipreaders have to guess the missing 70%, which makes lipreading so very difficult and demanding. 

The bad news: yes, lipreading is very difficult and demanding.

The good news: the 30% rule may be true in (some) experimental conditions, but has no predictive value for real life.

Where does the 30% come from? 

Spoken English uses approximately 44 different phonemes (http://en.wikipedia.org/wiki/Phonemes_of_English). When you present these in isolation, or in short (meaningless or meaningful) words without sound, subjects can discriminate approximately 12 different 'visemes': groups of phonemes that look the same on the lips.

For the lipreader, some phonemes are indistinguishable. The usual example is 'b - p - m': 3 different phonemes that look exactly the same to the lipreader. Words like 'pat', 'bat', 'mat' cannot be distinguished by lipreading.
Other phonemes are invisible and cannot be seen at all: 'Kate', 'gate', 'hate' all look like 'ate'.

Depending on the research conditions, the speaker, and the subjects, a lipreader of English can see 5 different vowel groups and 7 different consonant groups. These groups are sometimes called 'visemes'. English has 44 auditorily distinguishable phonemes, but only approximately 12 visually distinguishable visemes (numbers and groupings vary, depending on the methods, speakers, etc.). Twelve out of 44 is roughly 30%. So: lipreaders 'do it' with 30% of the information.
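
To make this concrete, here is a minimal sketch in Python. The phoneme-to-viseme grouping is purely illustrative (real inventories vary per study, as noted above); it only shows how different words can collapse into one and the same visual pattern:

    # Toy phoneme-to-viseme mapping (illustrative only; real groupings
    # vary with method, speaker, and study).
    VISEME = {
        'p': 'V1', 'b': 'V1', 'm': 'V1',   # look identical on the lips
        't': 'V2', 'd': 'V2', 'n': 'V2',
        'k': None, 'g': None, 'h': None,   # essentially invisible
        'ae': 'V3', 'ei': 'V4',            # two toy vowel groups
    }

    def viseme_string(phonemes):
        # The visible pattern a lipreader gets: invisible phonemes drop out.
        return tuple(VISEME[p] for p in phonemes if VISEME[p] is not None)

    for word, phons in [('pat', ('p', 'ae', 't')), ('bat', ('b', 'ae', 't')),
                        ('mat', ('m', 'ae', 't')), ('Kate', ('k', 'ei', 't')),
                        ('gate', ('g', 'ei', 't')), ('hate', ('h', 'ei', 't'))]:
        print(word, '->', viseme_string(phons))
    # pat, bat and mat print the same pattern; Kate, gate and hate all
    # print the pattern of 'ate' - exactly the ambiguity described above.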

But!

In everyday life, people do not speak in single, one-syllable words. Well OK, some people do: 'Hi', 'yes', 'nope', 'wow', 'bye'.
But not without context! Someone is leaving and says 'bye'. The lipreader sees something that could be 'pie', 'my', or 'bye'. Does he or she have to guess three times before getting the right answer? Eh... no?

Plus

  • Longer words are easier to lipread than one-syllable words, because longer words have fewer 'look-a-likes'. Without context, it will be difficult to guess 'bye' correctly. 'See you later' (not one word? OK, not in writing, but in spoken language it looks like one word!) has fewer look-a-likes and is easy to recognize, especially in the context of someone leaving (see the sketch after this list). Invisible or indistinguishable phonemes can be identified 100% correctly in the context of a meaningful word. Context helps!
  • Some speakers articulate so clearly that you CAN see the difference between 'P' and 'B'. Other speakers, on the other hand, barely open their mouths. All you see is 'open - closed - open - closed'. With a good 'lipspeaker', a lipreader may get 100% of the information. With a bad lipspeaker: 0% + a lot of frustration!
  • You don't need all the phonemes to get all the information! Even hearing people don't hear all phonemes correctly; their brains automatically fill in the gaps. Texters don't need all letters to understand each other; they can use abbreviations like FBOW (for better or worse), ATB (all the best), and many others. Do texters get only ... 20% of the information that old-fashioned writers get? No. They get maybe 20% of the letters, but experienced texters can get 100% of the information!
  • In the context of a sentence, even invisible ..... can be guessed! Of course, to be able to use context, you need foreknowledge of the language, the speaker and the topic...  
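
Here is a second minimal sketch, along the same lines as the one above, grouping a toy lexicon into look-a-like sets (again with made-up, purely illustrative transcriptions). One short visual pattern matches three different words; the longer phrase has no look-a-likes at all:

    from collections import defaultdict

    # Toy viseme labels per phoneme (illustrative, as above).
    V = {'b': 'V1', 'p': 'V1', 'm': 'V1', 's': 'V2', 't': 'V2', 'l': 'V2',
         'y': 'V3', 'ai': 'V4', 'ii': 'V5', 'uu': 'V6', 'ei': 'V7', 'er': 'V8'}

    lexicon = {'bye': ('b', 'ai'), 'pie': ('p', 'ai'), 'my': ('m', 'ai'),
               'see you later': ('s', 'ii', 'y', 'uu', 'l', 'ei', 't', 'er')}

    # Words that map to the same viseme string cannot be told apart by
    # lipreading alone; only context can decide between them.
    groups = defaultdict(list)
    for word, phons in lexicon.items():
        groups[tuple(V[p] for p in phons)].append(word)

    for look_alikes in groups.values():
        print(look_alikes)
    # ['bye', 'pie', 'my']    <- one pattern, three candidate words
    # ['see you later']       <- longer pattern, no look-a-likes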

The moral of the story:

Yes, lipreading is difficult. Yes, lipreaders have to do it with information that has many more 'gaps' than what hearing people get. But foreknowledge and context can help you fill in many of the gaps!

And no, lipreaders should not be afraid to make mistakes. Few speakers are 100% predictable. And those who are, are often boring... 

Thursday 5 September 2013

Lipreading Myth 2: 93% of all communication is nonverbal

You may have heard (or said yourself!) that research has shown that communication (getting a message across) depends mainly on nonverbal cues or body language.

To be exact: that 93% of a message is transmitted by means of nonverbal cues or body language, and that only 7% depends on spoken words. 

Not true, of course, because this would make lipreaders much, much better communicators than blind people; it would make telephone and radio pretty much as useless for hearing people as they are for hard-of-hearing and deaf people. And it would take all the fun out of whispering in the dark.

And what about reading: no non-verbal cues whatsoever! If you believe that 93% of communication is non-verbal, you may as well stop reading now.


The research quoted – actually: mis-quoted – in the statements about the importance of nonverbal communication was done by Albert Mehrabian in the 1960s. Only two studies, with a number of limitations.

Most importantly: Mehrabian wasn’t looking at ‘communication’ in general; he was looking at inconsistent communication of attitudes. Basically: if someone says a positive word (for instance: ‘dear’) with a negative tone of voice, what do you believe? The word, or the tone of voice? The word, or a picture of a facial expression?
In that specific experimental context – a tape recording of a single word spoken by an unknown speaker, body language shown as photographs of facial expressions, and female subjects tested in a laboratory setting – most subjects based their judgement (like or dislike) on the actual word in 7% of the cases, on tone of voice in 38% of the cases, and on facial expression in 55% of the cases. 38 + 55 = 93. So: nonverbal wins in 93% of the cases!

Yes, true, if you’re marketing nonverbal. But for the rest of us, communicating fairly consistently and not just about attitudes: the message really, truly is in the words!

Of course, nonverbal cues can help. IF the nonverbal cues are consistent with the message. Which they will usually be, when a person speaks the truth. Even then, you have to take into account what are sometimes (USA only?) called the 3 C’s of nonverbal communication:

  • Context: when you interpret a nonverbal cue, you have to take the context (situation, topic) into account. Shivering may mean that someone is cold. Or scared. Or antsy. Looking away may mean that someone is bored. Or uncomfortable. Or: that he or she heard a door open, or a phone ringing.
  • Clusters: you have to look at groups (clusters) of nonverbal cues. A single nonverbal cue in isolation can mean many different things. So actually, this is context too: interpret nonverbal cues in the context of other nonverbal cues. Someone is shivering? Any other cues that the person is cold, or scared, or antsy? A person looks away: any other cues that he or she is bored, or uncomfortable, or heard a sound that you missed?
  • Congruence: nonverbal cues have to be consistent with each other AND with the verbal message. So basically, that’s context too.  If the person says that he or she is cold, or scared or antsy AND shivers, AND you are pretty sure the person is not lying, trust the nonverbal cues! 
If you are now thinking that 'reading' nonverbal cues resembles lipreading: it does! The lipreader also has to take into account the 3 C's: context, clusters, and congruence. Actually, these 3 C's are important for ALL communication. 

Mehrabian’s female subjects in the ’60s mostly believed the nonverbal cues, but they didn’t have the 3 C’s: no context, no clusters, and no congruence. And they may well have guessed wrong.

In real life: yes, use nonverbal cues to help you understand a speaker. But when cues are inconsistent or don’t match: don’t try and guess. Ask.

If you're not convinced, watch this video from YouTube. First without sound or subtitles, then with. Words carry more than 7% of the message, don't they? 
Video: Busting the Mehrabian myth

Wednesday 4 September 2013

Lipreading Myth 1: Lipreaders can do it

A number of times a year, I receive e-mails asking me for the name of a lipreading expert. In one case, someone had recorded video at a wedding, where all the guests had said something .... yes, and that was the problem: what did the guests say? The sound was very bad, and no-one could hear what the guests said. So could I please caption the video for them?
No.

There's a YouTube video of the new pope meeting the Belgian king and queen. Some Royalty TV programme mailed me: could I please tell them what the Belgian king said to the pope? There was no sound; they weren't even sure what language was being spoken. Please send us the text before tonight's programme.
Sorry, can't do.

Someone involved in a lawsuit had video of the suspects, shot from far away. Could I please tell them what the suspects were saying?
No.

I know that – in the UK and the USA – there are specially trained 'forensic' lipreaders. For more info about this, see Wikipedia:
http://en.wikipedia.org/wiki/Forensic_speechreading

But really, it is a myth that an expert lipreader can tell you what is being said by a speaker on video. There is too much uncertainty. 

To be able to lipread what someone says, you have to be familiar with 1. the language, 2. the speaker, 3. the topic, and 4. the context. The face and mouth-movements of the speaker have to be very clearly visible. And even then, an expert lipreader will not be able to tell you with 100% certainty what was said.

Recognition, or matching, on the other hand, is quite easy. Once you know what a person says, you can recognize the visual patterns and check whether what you see matches the words that you are expecting. 
That's probably why many hearing people think that lipreading is easy: "Oh, yes! Of course, now I can see what the pope / suspect says!"

In many cases, however, many different words / phrases will match what you see. 
For funny examples, watch the Bad Lip Reading videos on YouTube:

"I'm proud to say yo' mama took a Cosby sweater"




Monday 2 September 2013

Research!

I just stumbled across a recent report by Ruth Campbell, the UK (world?) expert on lipreading/speechreading:

"This report aims to clarify what is and is not possible in relation to speechreading, and to the development of speechreading skills. It has been designed to be used by agencies which may wish to make use of speechreading for a variety of reasons, but it focuses on requirements in relation to understanding silent speech for information gathering purposes."


Highly recommended! 

http://www.ucl.ac.uk/dcal/research/research-projects/images/speechreading

Lipread.eu becomes Lipreading.eu

Unfortunately, the European LipRead project came to an early end. Some of the activities started under the LipRead umbrella, however, will be continued on the www.lipreading.eu website. 

I'll continue to use this blog for my 'thinking in progress' about lipreading, learning to lipread, and teaching lipreading.