Tag Archives: Voice User Interface Design

Cross-linguistic & Cross-cultural Voice Interaction Design

31 Jan

(update at the end)

2010 saw the first SpeechTEK Conference to take place outside the US: SpeechTEK Europe 2010 in London. This year’s European conference, SpeechTEK Europe 2011, will again take place in London (25 – 26 May 2011), but this time it will be preceded, on Tuesday 24 May, by a special Workshop on Cross-linguistic & Cross-cultural Voice Interaction Design organised by the Association for Voice Interaction Design (AVIxD). The main goal of AVIxD is to bring together voice interaction and experience designers from both Industry and Academia and, among other things, to “eliminate apathy and antipathy toward the need for good design of automated voice services” (that’s my favourite!). This is the first AVIxD Workshop to take place in Europe and I am honoured to have been appointed Co-Chair alongside Caroline Leathem-Collins from EIG.

Participation is free to AVIxD members and just £25 for non-members (which may be applied towards AVIxD membership). However, in order to participate in the workshop, you need to submit a brief position paper in English (approx. 500 words) on any of the Workshop’s special topics of interest (see the CFP below). The deadline for electronic submissions is Friday 25 March, so you need to hurry if you want to be part of it!

Here’s the full Call for (Position) Papers from the AVIxD site:

Call for Position Papers

First European AVIxD Workshop

Cross-linguistic & Cross-cultural Voice Interaction Design

Tuesday, 24 May 2011 (just prior to SpeechTEK Europe 2011), 1 – 7 PM

London, England

The Association for Voice Interaction Design (AVIxD) invites you to join us for our first voice interaction design workshop held in Europe, Cross-linguistic & Cross-cultural Voice Interaction Design. The AVIxD workshop is a hands-on day-long session in which voice user interface practitioners come together to debate a topic of interest to the speech community. The workshop is a unique opportunity for them to meet with their peers and delve deeply into a single topic.

As in previous years with the AVIxD Workshops held in the US, we will write papers based on our discussions which we will then publish on www.avixd.org. Please visit our website to see papers from previous workshops, and for more details on the purpose of the organization and how you can be part of it.

In order to participate in the workshop, individuals must submit a position paper of approximately 500 words in English. Possible topics to touch upon in your submission (to be discussed in depth during the workshop) include:

  1. Language choice and user demographics
  2. Presentation of the language options to the caller and caller preference
  3. Creation and (co-)maintenance of dialogue designs, grammars, prompts across languages
  4. Political and sociolinguistic issues in system prompt choices and recognition grammars, such as code-switching, formal versus informal registers
  5. Guidelines for application localization, translation, and interpretation
  6. Setting expectations regarding availability of multilingual agents; language- and culture-sensitive persona definition
  7. Coordinating usability testing and tuning across diverse linguistic / cultural groups
  8. Language choice and modality preference

We always encourage the use of specific examples from applications you’ve worked on in your position paper.

Participation is free to AVIxD members; non-members will be charged £25, which may be applied towards AVIxD membership at the workshop. Please submit your position papers via email no later than Friday 25 March 2011 to cfp@avixd.org. Letters of acceptance will be sent out on 30 March 2011.

We look forward to engaging with the European speech design community to discuss the particular challenges of designing speech solutions for users from diverse linguistic and cultural backgrounds. Feel free to contact either of the co-chairs below if you have any questions.

Caroline Leathem-Collins, EIG  (caroline {at} eiginc {dot} com)

Maria Aretoulaki, DialogCONNECTION Ltd (maria {at} dialogconnection {dot} com)

UPDATE

SpeechTEK Europe 2011 has come and gone and I’ve got many interesting things to report (as I have been tweeting through my @dialogconnectio Twitter account).

But first, here are the slides for my presentation at the main conference on the outcome of the AVIxD Workshop on Cross-linguistic & Cross-cultural Voice Interaction Design organised by the Association for Voice Interaction Design (AVIxD). I only had 12 hours to prepare them – including sleep and the London tube commute – so I practically kept working on them until shortly before the Session! Still, I think the slides capture the breadth and depth of the topics discussed, or at least touched upon, at the Workshop. Several people are now writing up on all these topics and there should be one or more White Papers on them very soon (by the end of July, we hope!). So the slides did their job after all!

Get the slides in PDF here:  Maria Aretoulaki – SpeechTEK Europe 2011 presentation.

2010 in review – Not bad at all :)

3 Jan

The stats helper monkeys at WordPress.com mulled over how this blog did in 2010, and here’s a high level summary of its overall blog health:

Healthy blog!

The Blog-Health-o-Meter™ reads Wow.

Crunchy numbers

A Boeing 747-400 passenger jet can hold 416 passengers. This blog was viewed about 7,600 times in 2010. That’s about 18 full 747s.

In 2010, there were 9 new posts, not bad for the first year! There were 32 pictures uploaded, taking up a total of 5MB. That’s about 3 pictures per month.

The busiest day of the year was September 15th with 326 views. The most popular post that day was The Social Media scene in Manchester (UK) is very sociable!.

Where did they come from?

The top referring sites in 2010 were linkedin.com, mail.live.com, mail.yahoo.com, twitter.com, and facebook.com.

Some visitors came searching, mostly for voice activated lift, scottish voice activated lift, voice activated lift scotland, and scottish voice activated elevator.

Attractions in 2010

These are the posts and pages that got the most views in 2010.

1

The Social Media scene in Manchester (UK) is very sociable! September 2010
32 comments

2

The voice-activated lift won’t do Scottish! July 2010
4 comments

3

Speech Recognition for Dummies May 2010
12 comments

4

The Loneliness of the long-distance … VUI Designer! June 2010
5 comments

5

About May 2010
1 comment

The eternal battle between the VUI Designer & the Customer

7 Dec

I promised some time ago to put up the slides of my presentation at this year’s SpeechTEK Europe 2010 in London, the first SpeechTEK to have taken place outside the US. My presentation, “The Eternal Battle Between the VUI Designer and the Customer”, was on Wednesday 26th May 2010 and opened the “Voice User Interface Design: Major Issues” Session. It went down really well; afterwards, several people in the audience told me about their own experiences and asked me for tips on how to deal with similar issues.

Here is a PDF with the presentation slides:

Maria Aretoulaki – “The Eternal Battle Between the VUI Designer and the Customer” (SpeechTEK Europe 2010 presentation)

Maria Aretoulaki – SpeechTEK Europe 2010 presentation UPDATED ppt

And here’s the gist of it:

VUI Design is preoccupied with the conception, design, implementation, testing, and tuning of solutions that work in the most efficient, secure and least irritating manner for the user. Well, realistically, that’s what VUI Design can achieve. In an ideal world, the VUI Designer would strive to create speech applications that – apart from taking into consideration the customer’s financial and brand requirements – also fit the caller’s needs, goals and preferences. The initial Requirements analysis should bring both into focus. This much is already known and accepted by both VUI Designers and customers.

The problems start just after everyone leaves the meeting room and work begins on the implementation. Call flow design, system persona development and prompt crafting, and even recognition grammars, all seem to fall victim to a war of words and attitudes between the VUI Design expert, who has seen systems developed and spurned before, and the customer, with their tech-savvy business team, technical architects and programming geniuses, who all think they know what callers want and how call flows should be structured, prompt wording crafted and grammars written, just because they hold strong opinions! Even the results of Usability tests are open to different interpretations by each side.

This presentation pinpoints common pitfalls in the communication between a VUI Designer and customer employees and recommends ways to resolve conflicts and disagreements on the application design and implementation.

Credits:

SpeechTEK Europe 2010 was organised by:

Information Today, Inc.
143 Old Marlton Pike
Medford NJ 08055 U.S.A.
Phone 1 (609) 654-6266.
http://www.infotoday.com

The voice-activated lift won’t do Scottish! (Burnistoun S1E1 – ELEVEN!)

28 Jul

Voice recognition technology? …  In a lift? … In Scotland? … You ever TRIED voice recognition technology? It don’t do Scottish accents!

Today I found this little gem on YouTube and thought I must share it: apart from being hilarious, it says a thing or two about speech recognition and speech-activated applications. It’s all based on the urban myth that speech recognisers cannot understand regional accents, such as Scottish and Irish.

Scottish Elevator – Voice Recognition – ELEVEN!

(YouTube – Burnistoun – Series 1 , Episode 1 [ Part 1/3 ])

What? No Buttons?!

These two Scottish guys enter a lift somewhere in Scotland and find that there are no buttons for floor selection, so they quickly realise it’s a “voice-activated elevator”, as the system calls itself. They want to go to the 11th floor and they first pronounce it the Scottish way:

/eh leh ven/

That doesn’t seem to work at all.

“You need to try an American accent”, says one of them, so they try to mimic one, sadly very unsuccessfully:

/ee leh ven/

Then they try a quite funny, Cockney-like English accent:

/ä leh ven/

to no avail.

VUI Sin No. 1: Being condescending to your users

The system prompts them to “Please speak slowly and clearly”, which is exactly what they had been doing all along! Instead, it should have said something along the lines of “I’m afraid I didn’t get that. Let’s try again.” and later “I’m really sorry, but I don’t seem to understand what you’re saying. Maybe you would like to try one more time?”. Of course, not having any buttons in the lift means these guys could be stuck in there forever! That’s another fatal usability error: both modalities, speech and button presses, should have been available, to cater for different user groups (easy accents, tricky accents) and different use contexts (people who have their hands full with carrier bags vs people who can press a button!).
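That escalation strategy can be sketched in a few lines. This is a minimal illustration, assuming a simple per-state no-match counter: the first two prompt wordings come from the suggestions above, while the keypad fallback line is a hypothetical example of offering the second modality instead of looping forever.

```python
# Escalating no-match handling: apologetic prompts that take the blame,
# then a fallback to a second modality rather than an endless reprompt loop.

NO_MATCH_PROMPTS = [
    "I'm afraid I didn't get that. Let's try again.",
    "I'm really sorry, but I don't seem to understand what you're saying. "
    "Maybe you would like to try one more time?",
]

def next_prompt(no_match_count):
    """Pick the system prompt for the n-th consecutive recognition failure."""
    if no_match_count < len(NO_MATCH_PROMPTS):
        return NO_MATCH_PROMPTS[no_match_count]
    # Final escalation: offer the other modality instead of blaming the user.
    return "Please use the keypad to select your floor."
```

Note that the system never tells the user to speak more clearly or calmly; it takes responsibility for the failure, which is exactly what the lift in the sketch gets wrong.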

I’m gonna teach you a lesson!

One of them tries to teach the system the Scottish accent: “I keep saying it until she understands Scottish!”, a very reasonable expectation, which would work particularly well with a speaker-dependent dictation system of the kind you’ve got on your PC, laptop or hand-held device. This speaker-independent one (’cos you can’t really have your personal lift in each building you enter!) will need rather more than a single conversation to learn anything! It requires time spent analysing the recordings, their transcriptions and semantic interpretations, comparing what the system understood with what the user actually said, and using those observations to tune the whole system. We are talking at least a week in most cases. They would die of dehydration and starvation by then!

VUI Sin No.2: Patronising your users until they explode

After a while, the system makes it worse by saying what no system should ever dare say to a user’s face: “Please state which floor you would like to go to in a clear and calm manner.” Patronising or what! The guys’ reaction is not surprising: “Why is it telling people to be calm?! … ’cos Scottish people would be going out for MONTHS at it!”.

Well, that’s not actually true. These days off-the-shelf speech recognition software is optimised to work with most main accents in a language, yes, including Glaswegian! Millions of real-world utterances spoken by thousands of people with all possible accents in a language (and this for many different languages too) are used to statistically train the recognition software to work equally well with most of them and for most of the time. These utterances are collected from applications that are already live and running somewhere in the world for the corresponding language. The more real-world data available, the better the software can be tuned and the more accurate the recognition of “weird” pronunciations will be, even when you take the software out of the box.

VUI Best Practice: Tune your application to cater for YOUR user population

An additional safeguarding and optimising technique is tuning the pronunciations for a specific speech recognition application.  So when you already know that your system will be deployed in Scotland, you’d better add the Scottish pronunciation for each word explicitly in the recognition lexicon.  This includes manually adding /eh leh ven/ , as the standard /ee leh ven/ pronunciation is not likely to work very well. Given that applications are usually restricted to a specific domain anyway (selecting floors in a lift, getting your bank account balance, choosing departure and arrival train times etc.), this only needs to be done for the core words and phrases in your application, rather than the whole English, French, or Farsi language! So do not despair, there’s hope for freedom (of speech) even for the Scottish! 🙂
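Here is a minimal sketch of that lexicon-tuning idea. The phoneme strings follow the informal transcriptions above; the lexicon format and function names are illustrative assumptions, not the dictionary syntax of any real recogniser.

```python
# Toy pronunciation lexicon: each word maps to a list of accepted
# phoneme strings. Real recognisers use their own dictionary formats;
# this just illustrates adding a regional variant for a deployment.

lexicon = {
    "eleven": ["ee leh ven"],  # standard pronunciation, out of the box
}

def add_pronunciation(word, phones):
    """Add a regional variant without removing the standard one."""
    lexicon.setdefault(word, []).append(phones)

def recognise(phones):
    """Return the word whose lexicon entry matches the heard phonemes."""
    for word, variants in lexicon.items():
        if phones in variants:
            return word
    return None  # would trigger a no-match reprompt in a real system

# Before tuning, the Scottish pronunciation fails...
assert recognise("eh leh ven") is None

# ...so for a Scotland deployment we add it explicitly:
add_pronunciation("eleven", "eh leh ven")
assert recognise("eh leh ven") == "eleven"
```

Because the lift only ever needs floor numbers, the tuning effort stays tiny: a handful of core words, not the whole language.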

For a full transcript of the video, check out EnglishCentral.

Does Your Customer Know What They are Signing off??

3 Jun

Just back from SpeechTEK Europe 2010, the first SpeechTEK to take place outside the US, which was great fun. I gave a presentation on “The Eternal Battle Between the VUI Designer and the Customer”, which went down quite well (more on that in my next blog). I heard many interesting new ideas about how normal people view normal communication channels to a company or organisation (the Web is prevailing, but multimodality and cross-channel communication will be indispensable in a couple of years), heard about new applications of speech and touchtone and the challenges they are facing, and met up with loads of people I know in the field, from companies I’ve worked for and cities I have worked in. I have also started a few projects and collaborations as a result (again, to be announced in my next blog). For now, though, I would like to share my presentation at SpeechTEK 2007 in New York on Monday 20th August 2007 (how time passes!), entitled “Does Your Customer Know What They are Signing off?”.

Maria Aretoulaki – SpeechTEK 2007 presentation – opening slide

As it says in the accompanying blurb: “This presentation stresses the importance of incremental and modular descriptions of system functionality for targeted and phased reviews and testing. This strategy ensures clarity, consistency, and maintainability beyond the project lifetime and eliminates the need for changes midproject, thus both managing customer expectations and protecting the service provider from ad-hoc requests.”

Here is a PDF with the presentation slides:

Maria Aretoulaki – SpeechTEK 2007 presentation : “Does Your Customer Know What They are Signing off?”

You can also get the Powerpoint file from the SpeechTEK site itself at: http://conferences.infotoday.com/stats/documents/default.aspx?id=29&lnk=http%3A%2F%2Fconferences.infotoday.com%2Fdocuments%2F27%2FB105_Aretoulaki.pps

The idea is to have a standardised way to document speech application design both in terms of call flow depictions and in terms of functionality description. In addition, 3 different tiers of functionality and call flow representation are proposed, from the more abstract High-Level design (what range of tasks can a system perform?), to the rather detailed Macro-Level (all the user interaction and back-end processes and their interdependencies), to the very detailed Micro-Level which documents every single condition, system prompt and related recognition grammar.

Maria Aretoulaki – 3-tier speech app design representation

The point is that, in every speech project, a number of people with very different backgrounds, roles and expectations are involved, from the Business-minded, to the Techie, to the Usability expert: from Account Managers to Marketing Strategists, to Call Centre Managers, IT Managers, System Architects, Programmers, and the VUI Designers themselves (more on these different characters in my next blog with my SpeechTEK 2010 presentation). The 3 different tiers of speech design representation and documentation are ideal for catering to the diverse information needs of these very different groups.

The Business and Marketing guys understand better the High-Level representation, with its list of things the system can do in different cases. The Call Centre Managers and some very involved (and worried!) business guys on the customer side feel better when they see the Macro-Level detail, because they feel they have more information and therefore more control over what is being designed and implemented. It is also something very concrete to sign off (and therefore difficult to dispute at will later on). The VUI Designer, the System Architect and the various application developers really need the excruciating detail of the Micro-Level: every single condition (including every case where things go wrong) needs to be documented, along with every different prompt the system will utter (including when it doesn’t recognise or even hear what the caller says), and every speech recognition grammar that is activated whenever the system expects a reaction from the caller / user.

The inherent modularity and incremental nature of the design representation mean that it can be more easily maintained, more readily modified, and even more straightforwardly adopted and adapted for other speech and multimodal applications in the future. So everybody’s happy 🙂
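As a rough illustration, the three tiers could be modelled as increasingly detailed data structures, using the lift example from the previous post. The class and field names here are my own illustrative assumptions, not the notation used in the actual presentation.

```python
from dataclasses import dataclass

# High-Level: just the task inventory (what can the system do?).
high_level = ["select floor"]

# Macro-Level: the flow and back-end dependencies for each task.
macro_level = {
    "select floor": {"backend": "lift controller", "next": "confirm"},
}

# Micro-Level: one fully specified dialogue state per interaction step,
# down to the exact prompt wording and the active recognition grammar.
@dataclass
class DialogueState:
    prompt: str        # exact system wording
    grammar: list      # phrases the recogniser accepts at this point
    on_no_match: str   # where to go when recognition fails

micro_level = {
    "select floor": DialogueState(
        prompt="Which floor would you like?",
        grammar=["eleven", "ground", "basement"],
        on_no_match="reprompt",
    ),
}
```

Each tier adds detail to the one above it rather than replacing it, which is what makes the representation modular and incrementally reviewable.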

I gave this presentation when I was Head of Speech Design at Vicorp, although the basic ideas behind it matured during the time I was Senior VUI Designer at Intervoice (now Convergys).

Credits:

SpeechTEK 2007 was organised by:

Information Today, Inc.
143 Old Marlton Pike
Medford NJ 08055 U.S.A.
Phone 1 (609) 654-6266.
http://www.infotoday.com