Monday 16 September 2013

Lipreading at work

Deborah Tannen, author of “You Just Don’t Understand: Women and Men in Conversation” (1991), about miscommunication between men and women, also wrote “Talking from 9 to 5: How Women’s and Men’s Conversational Styles Affect Who Gets Heard, Who Gets Credit, and What Gets Done at Work” (1994). The book describes the different communication styles of men and women in working environments. Although her data were mostly collected in offices in the USA, some of her observations may have a wider validity and may explain some of the problems that lipreaders, and people with hearing problems in general, experience in work situations.

Tannen’s main message is that conversation is a ritual, with unwritten rules and subconscious expectations. When someone does not play according to these unwritten rules, we experience cognitive dissonance (my word, not Tannen’s): what happens is not what you expect to happen. So you have to do some thinking. Were your expectations wrong, is the other person wrong, or is there something else that can explain what just happened? Very often, our response is emotional: “This is not fair!” Or, in the game-like terms Tannen uses to describe conversations at work: it’s a one-down. Usually, feelings are hurt. The result: consciously or subconsciously, the person affected will try to get even, and/or will try to avoid further conversations with the person who caused the cognitive dissonance.

This happens between men and women at work. Different rules, different expectations, resulting in different subgroups. Different subgroups, each with their own conversational rituals.

This happens in meetings and in employee–employer talks. And not only there. As Tannen writes: “On the job, the meat of the work that has to be done is held together, made pleasant and possible, by the ketchup, relish, and bun of conversational rituals.” (page 43)
“Talk at work is not confined to talk about work. Many moments are spent in casual chat that establishes the friendly working environment that is the necessary backdrop to getting work done. (..) Both women and men know that their small talk is just that – “small” compared to the “big” talk about work – but differences in small-talk habits can become very big when they get in the way of the easy day-to-day working relationships that make us feel comfortable at work and keep the lines of communication open for the big topics when they arise.” (page 64)

And that’s exactly where lipreaders, and people with hearing problems in general, will get in trouble. They may manage in one-on-one "big talk": factual communication about work, when context, speaker, and conversational rituals are familiar and fairly predictable. They get the hamburger, but not the ketchup, relish, or bun. Colleagues meet in groups: near the coffee machine, smoking outside, in the cafeteria. The lipreader who tries to blend in there notices too late who is speaking, doesn’t know the context, misses the point of the joke and laughs too late. Or: asks someone to repeat what was said. Or: responds with a remark that doesn’t fit the ritual. Or: tries to monopolize the conversation, because when you talk, you don’t have to listen. In all cases: rituals broken, cognitive dissonance, hurt feelings.

The lipreader may blame the others: they forget that I’m hard of hearing, they never take my needs into account, why can’t they speak one at a time, take their cigarettes out of their mouths, turn down the background music? Or, depending on the lipreader’s personality and/or frame of mind at the time, the lipreader blames him- or herself: Stupid me! Next time, I will stay at my desk. Or at home.

One of the problems is that these rituals are automatic, subconscious. The other person may remember that the lipreader has ‘special needs’ for a week, or a day, or two minutes, but – especially in conversational speech – will quickly go back on automatic pilot and forget to speak clearly.

Lipreaders shouldn’t be surprised. It’s what they do themselves. You’d expect lipreaders to be very good lipspeakers, because they know first-hand how important it is to speak clearly, one at a time, in well-organized messages. Yes, dear lipreader, you may think that that is what you do. But please ask another lipreader for more objective feedback.

And yes, people with hearing problems are partly to blame themselves. They hide their hearing aids; they get upset when someone speaks very – slowly – or – VERY LOUDLY – especially for them, because they so desperately want to be seen as ‘normal’. Whatever that is.

Men AND women, hearing AND not hearing: don’t let your emotions take over when someone breaks a ritual! Cognitive dissonance is good! It makes you – and the other person – switch off the automatic pilot; it makes you a more aware participant. Real-life differences feed your brain!


PS: Small-talk solutions?

  • Befriend your colleagues on Facebook and regularly check their pages;
  • Become the editor of a weekly or monthly company newsletter or company bulletin board and ask everyone to mail you their small and big news;
  • Find a buddy who will keep you updated;
  • Decide – and explain to your colleagues – that, because of your hearing problems and/or personality, you don’t do small talk. Several deaf people who were finally able to hear "small talk" after receiving a cochlear implant report their disappointment: "Is that all hearing people talk about?" Yes, it is. Small talk is mostly a feel-good ritual. It's not hamburger, it's ketchup.

Tuesday 10 September 2013

Lipreading Myth 3: only 30%?

Almost all publications about lipreading say that lipreaders 'get' only 30% of the information that hearing people do. Lipreaders have to guess the missing 70%, which makes lipreading so very difficult and demanding. 

The bad news: yes, lipreading is very difficult and demanding.

The good news: the 30% rule may be true in (some) experimental conditions, but has no predictive value for real life.

Where does the 30% come from? 

Spoken English uses approximately 44 different phonemes (http://en.wikipedia.org/wiki/Phonemes_of_English). When you present these in isolation, or in short (meaningless or meaningful) words without sound, subjects can discriminate approximately 12 different 'visemes': groups of phonemes that look alike on the lips.

For the lipreader, some phonemes are indistinguishable. The usual example is 'b - p - m': three different phonemes that look exactly the same to the lipreader. Words like 'pat', 'bat', and 'mat' cannot be distinguished by lipreading.
Other phonemes are invisible: they cannot be seen at all. 'Kate', 'gate', and 'hate' all look like 'ate'.

Depending on the research conditions, the speaker, and the subjects, a lipreader of English can see about 5 different vowel groups and 7 different consonant groups: roughly 12 visually distinguishable visemes for the 44 auditorily distinguishable phonemes of English (numbers and groupings vary, depending on the methods, speakers, etc.). And 12 out of 44 is about 27%, rounded up to 30%. So: lipreaders 'do it' with 30% of the information.
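
For the curious, here is a minimal sketch of the arithmetic in Python. The tiny letter-to-viseme mapping is an illustrative assumption, not a validated inventory; real studies work with phonemes and disagree about the exact groupings.

    # The oft-quoted "30%" is just the viseme-to-phoneme ratio, rounded up.
    n_phonemes = 44   # auditorily distinguishable units of spoken English
    n_visemes = 12    # visually distinguishable groups, in many studies
    print(f"{n_visemes / n_phonemes:.0%}")   # 27% -- quoted as "30%"

    # Why phonemes collapse: b, p, and m share one viseme, so 'pat',
    # 'bat', and 'mat' present exactly the same visual pattern.
    viseme = {"b": "V1", "p": "V1", "m": "V1", "a": "V2", "t": "V3"}
    for word in ("pat", "bat", "mat"):
        print(word, "->", "-".join(viseme[c] for c in word))   # V1-V2-V3, three times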

But!

In everyday life, people do not speak in single, one-syllable words. Well, OK, some people do: 'Hi', 'yes', 'nope', 'wow', 'bye'.
But not without context! Someone is leaving and says 'bye'. The lipreader sees something that could be 'pie', 'my', or 'bye'. Does he or she have to guess three times before getting the right answer? Eh... no.

Plus

  • Longer words are easier to lipread than one-syllable words, because longer words have fewer 'look-alikes'. Without context, it will be difficult to guess 'bye' correctly. 'See you later' (not one word? OK, not in writing, but in spoken language it looks like one word!) has fewer look-alikes and is easy to recognize, especially in the context of someone leaving (see the sketch after this list). Invisible or indistinguishable phonemes can be identified 100% correctly in the context of a meaningful word. Context helps!
  • Some speakers articulate so clearly that you CAN see the difference between 'P' and 'B'. Other speakers, on the other hand, barely open their mouths. All you see is 'open - closed - open - closed'. With a good 'lipspeaker', a lipreader may get 100% of the information. With a bad lipspeaker: 0% + a lot of frustration!
  • You don't need all the phonemes to get all the information! Even hearing people don't hear all phonemes correctly; their brains automatically fill in the gaps. Texters don't need all letters to understand each other; they can use abbreviations like FBOW (for better or worse), ATB (all the best), and many others. Do texters get only... 20% of the information that old-fashioned writers get? No. They get maybe 20% of the letters, but experienced texters can get 100% of the information!
  • In the context of a sentence, even invisible ..... can be guessed! Of course, to be able to use context, you need foreknowledge of the language, the speaker and the topic...  
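
To make the look-alike point concrete, here is a toy counter. The phoneme spellings and the viseme grouping are, again, simplified assumptions for illustration only: short words collide, longer phrases rarely do.

    # Collapse words to viseme strings and group the ones a lipreader
    # cannot tell apart.
    from collections import defaultdict

    VISEME = {"b": "V1", "p": "V1", "m": "V1",   # bilabials collide
              "ai": "V2", "s": "V3", "ee": "V4",
              "y": "V5", "u": "V6", "l": "V7", "t": "V8", "er": "V9"}

    LEXICON = {                      # word -> rough phoneme sequence
        "bye": ["b", "ai"],
        "pie": ["p", "ai"],
        "my":  ["m", "ai"],
        "see you later": ["s", "ee", "y", "u", "l", "ai", "t", "er"],
    }

    groups = defaultdict(list)
    for word, phonemes in LEXICON.items():
        groups["-".join(VISEME[p] for p in phonemes)].append(word)

    for pattern, words in groups.items():
        print(pattern, "->", words)
    # V1-V2 -> ['bye', 'pie', 'my']                : a three-way tie
    # V3-V4-V5-V6-V7-V2-V8-V9 -> ['see you later'] : unique, easy to recognize

Add the context 'someone is leaving', and even the three-way tie resolves instantly.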

The moral of the story:

Yes, lipreading is difficult. Yes, lipreaders have to do it with information that has many more 'gaps' than hearing people do. But foreknowledge and context can help you fill in many of the gaps!

And no, lipreaders should not be afraid to make mistakes. Few speakers are 100% predictable. And those who are, are often boring... 

Thursday 5 September 2013

Lipreading Myth 2: 93% of all communication is nonverbal

You may have heard (or said yourself!) that research has shown that communication (getting a message across) depends mainly on nonverbal cues or body language.

To be exact: that 93% of a message is transmitted by means of nonverbal cues or body language, and that only 7% depends on spoken words. 

Not true, of course, because this would make lipreaders much, much better communicators than blind people; it would make telephone and radio pretty much as useless for hearing people as they are for hard-of-hearing and deaf people. And it would take all the fun out of whispering in the dark.

And what about reading: no non-verbal cues whatsoever! If you believe that 93% of communication is non-verbal, you may as well stop reading now.


The research quoted – actually: misquoted – in the statements about the importance of nonverbal communication was done by Albert Mehrabian in the 1960s. Only two studies, with a number of limitations.

Most importantly: Mehrabian wasn’t looking at ‘communication’ in general, he was looking at inconsistent communication of attitudes. Basically: if someone says a positive word (for instance: ‘dear’) with a negative tone of voice, what do you believe? The word, or the tone of voice? The word, or a picture of a facial expression? 
In that specific experimental context – a tape recording of a single word spoken by an unknown speaker, body language shown as photographs of facial expressions, and female subjects tested in a laboratory setting – most subjects based their judgement (like – dislike) on the actual word in 7% of the cases, on tone of voice in 38% of the cases, and on facial expression in 55% of the cases. 38 + 55 = 93. So: nonverbal wins in 93% of the cases!

Yes, true, if you’re marketing nonverbal. But for the rest of us, communicating fairly consistently and not just about attitudes: the message really, truly is in the words!
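
For completeness: Mehrabian’s split is often compressed into a weighted ‘total liking’ formula. The sketch below is only an illustration – the -1 to +1 scoring scale is my assumption, and the weights were derived from, and only apply to, the narrow single-word experiments described above.

    # Mehrabian's oft-quoted weighting: 7% verbal, 38% vocal, 55% facial.
    # Valid (at best) for single words carrying inconsistent attitude cues.
    def perceived_liking(verbal, vocal, facial):
        """Weighted sum of the three channels, each scored from -1 to +1."""
        return 0.07 * verbal + 0.38 * vocal + 0.55 * facial

    # A positive word said with a negative tone and a negative face:
    print(round(perceived_liking(verbal=+1, vocal=-1, facial=-1), 2))   # -0.86: 'dislike' wins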

Of course, nonverbal cues can help. IF the nonverbal cues are consistent with the message. Which they will usually be, when a person speaks the truth. Even then, you have to take into account what are sometimes (USA only?) called the 3 C’s of nonverbal communication:

  • Context: when you interpret a nonverbal cue, you have to take the context (situation, topic) into account. Shivering may mean that someone is cold. Or scared. Or antsy. Looking away may mean that someone is bored. Or uncomfortable. Or: that he or she heard a door open, or a phone ringing.
  • Clusters: you have to look at groups (clusters) of nonverbal cues. A single nonverbal cue in isolation can mean many different things. So actually, this is context too: interpret nonverbal cues in the context of other nonverbal cues. Someone is shivering? Any other cues that the person is cold, or scared, or antsy? A person looks away: any other cues that he or she is bored, or uncomfortable, or heard a sound that you missed?
  • Congruence: nonverbal cues have to be consistent with each other AND with the verbal message. So basically, that’s context too. If the person says that he or she is cold, or scared, or antsy, AND shivers, AND you are pretty sure the person is not lying: trust the nonverbal cues!
If you are now thinking that 'reading' nonverbal cues resembles lipreading: it does! The lipreader also has to take into account the 3 C's: context, clusters, and congruence. Actually, these 3 C's are important for ALL communication. 

Mehrabian’s female subjects in the ‘60s mostly believed the nonverbal cues, but they didn’t have the 3 C’s: no context, no clusters, and no congruence. And they may well have guessed wrong. 

In real life: yes, use nonverbal cues to help you understand a speaker. But when cues are inconsistent or don’t match: don’t try to guess. Ask.

If you're not convinced, watch this video from YouTube. First without sound or subtitles, then with. Words carry more than 7% of the message, don't they? 
Video: Busting the Mehrabian myth

Wednesday 4 September 2013

Lipreading Myth 1: Lipreaders can do it

A number of times a year I receive e-mails asking me for the name of a lipreading expert. In one case, someone had recorded video at a wedding, and all the guests had said something... yes, and that was the problem: what did the guests say? The sound was very bad, and no one could hear what the guests said. So can I please caption the video for them?
No.

There's a YouTube video of the new pope meeting the Belgian king and queen. Some royalty TV programme mailed me: can I please tell them what the Belgian king said to the pope? There was no sound; they weren't even sure what language was being spoken. Please send us the text before tonight's programme.
Sorry, can't do.

Someone involved in a lawsuit had video of the suspects, shot from far away. Can I please tell them what the suspects were saying?
No.

I know that – in the UK and the USA – there are specially trained 'forensic' lipreaders. For more info about this, see Wikipedia:
http://en.wikipedia.org/wiki/Forensic_speechreading

But really, it is a myth that an expert lipreader can tell you what is being said by a speaker on video. There is too much uncertainty. 

To be able to lipread what someone says, you have to be familiar with (1) the language, (2) the speaker, (3) the topic, and (4) the context. The face and mouth movements of the speaker have to be very clearly visible. And even then, an expert lipreader will not be able to tell you with 100% certainty what was said.

Recognition, or matching, on the other hand, is quite easy. Once you know what a person says, you can recognize the visual patterns and check whether what you see matches the words that you are expecting. 
That's probably why many hearing people think that lipreading is easy: "Oh, yes! Of course, now I can see what the pope / suspect says!"

In many cases, however, many different words or phrases will match what you see.
For funny examples, watch the Bad Lip Reading videos on YouTube:

"I'm proud to say yo' mama took a Cosby sweater"

Monday 2 September 2013

Research!

I just stumbled across a recent report by Ruth Campbell, the UK (world?) expert on lipreading/speechreading:

"This report aims to clarify what is and is not possible in relation to speechreading, and to the development of speechreading skills. It has been designed to be used by agencies which may wish to make use of speechreading for a variety of reasons, but it focuses on requirements in relation to understanding silent speech for information gathering purposes."


Highly recommended! 

http://www.ucl.ac.uk/dcal/research/research-projects/images/speechreading

Lipread.eu becomes Lipreading.eu

Unfortunately, the European LipRead project came to an early end. Some of the activities started under the LipRead umbrella, however, will be continued on the www.lipreading.eu website. 

I'll continue to use this blog for my 'thinking in progress' about lipreading, learning to lipread, and teaching lipreading.