Tag Archives: smartphone app

XOWi: The wearable Voice Recognition Personal Assistant

30 Oct

I just found out about the new venture of my colleagues, Ahmed Bouzid and Weiye Ma, and I’m all excited and want to spread the word!

They came up with the idea of a Wearable and hence Ubiquitous Personal Voice Assistant, XOWi (pronounced Zoe). The basic concept is that XOWi is small and unobtrusive (you wear it like a badge or pin it somewhere near you) but still connects to your smartphone and, through that, to all kinds of apps and websites for communicating with people (Facebook, Twitter, eBay) and controlling data and information (selecting TV channels, switching the aircon on). Moreover, it is completely voice-driven, so it is completely hands- and eyes-free. This means that it won’t distract you (if you’re driving, reading, working), and if you have any vision impairment or disability, you are still completely connected and able to communicate. So, XOWi truly turns Star Trek into reality! The video below explains the concept:


The following diagram illustrates the type of application context:

XOWi architecture

And here is how it works:

Ahmed and Weiye have turned to Kickstarter for crowdfunding. If they manage to get $100,000 by 21st November, XOWi will become a product and I will get one for my birthday in March 2014! 😀 Join the Innovators and support the next generation of smart communicators!

TECHGRUMPS: technology addictions & the rise of a new social (un)conscience

26 Apr

On Easter Sunday (24 April 2011), I was happy and honoured to take part in the live recording of the latest TECHGRUMPS podcast, Techgrumps 27: Non geeks go raw like sushi (sic!).  80 minutes of whinging about the latest technology trends, as well as the uses of said technology.

My contribution to the grump world is complaining about the social terror of checking your smartphone notifications every 5 minutes, whatever the (social) context, and about the new de facto exhibitionism of all facets of your personal life through the various social media (a stark contrast to my earlier blog posts on the Social Media Scenes in Manchester and London!). Hear me complain, from the 10th to the 32nd minute, about:

  • people spending more time updating their current location and taking photos and videos at a gig than dancing, singing and enjoying said gig (check the phone screens in the two photos below, which I took at a Jamiroquai gig earlier this month)

Jamiroquai at MEN Arena Manchester (19 Apr 2011)

Jamiroquai at MEN Arena Manchester (19 Apr 2011)

  • people checking their Facebook or Twitter notifications on their phone in the middle of a philosophical conversation (usually initiated by the person without a smartphone ;))
  • people checking their phone every 5 minutes in the middle of a film at the cinema, just in case someone has texted them or has posted a witticism on Twitter or Facebook (and that’s even when the film is NOT horrible)
  • people needing to offload very personal information and details of their daily routines every hour of the day to their wide social media audiences, which consist mainly of remote acquaintances rather than close friends (and who are usually not remotely interested in said details either)

This excessive notification checking, irrespective of the current social situation, is of course partly due to the availability of the technology itself, i.e. the integration of Facebook or Twitter on your phone, internet on-the-go, dedicated notification sounds for texts, Facebook, Twitter, chat etc. So, in all fairness, it is hard not to check your phone when you do get a notification (sound). For all you know, it could be a missed call from a loved one who has been in an accident, or an email confirming that new contract. Nevertheless, it seems that we are all sucked into a world of instantly available information and an overflow of personal and less personal data that we don’t seem able to escape from. As a result, we are missing the NOW, the experience of the current moment and of the person(s) standing opposite us in real life. This obsessive behaviour can be construed as rude and anti-social by the people in the immediate surroundings who are not checking their phones, but – more than anything – it indicates a shift in general social conscience and social mores, whereby the remote online acquaintance in the US you have never met in your life is allocated by default the same or more (potential?) value than the close offline friend sitting next to you here and now. So new types of shallow relationships are cropping up. Whether someone has retweeted you is becoming more important than whether someone actually lends an open ear to you at a cafe to discuss your problems over a cup of coffee.

This need to connect and be “approved” by as many people as possible, whether real close friends, Facebook “friends” or Twitter followers you are not even remotely interested in, must have its roots in the basic human need for love, approval and the sense of belonging (to the right groups). Still, it seems that our whole lives are run by this new need for exhibitionism, and that we are practically controlled, indirectly, by our ubiquitous and international audience, which is, or may be, reading.

Having suffered the social media notification terror myself when sitting at my laptop, I refuse to use that functionality or indeed the internet on my (admittedly palaeolithic) phone. Even the thought of getting a free smartphone scares me! My time away from my laptop is my treasured time OFFLINE and I want it to remain that way! I have already spent thousands of invaluable hours chained to my laptop obsessing over emails and notifications in the past 20 years, hours that have sadly been subtracted from MY LIFE! So this is not a rant about Social Media – which often genuinely help in the democratisation of Governments, processes and opinion. This is a rant about Social Media abuse and its infliction on others as well as on ourselves.

It sounds very heavy, but the whole podcast is actually full of witty jokes and hearty laughter! And there are several more techy topics covered, as you can see on the podcast page: from the “native” IE to Firebug, Wikimedia, LaTeX, and the latest iPhone personal information storage scare. Enjoy!

Techgrumps 27: Non geeks go raw like sushi

Speech Recognition for Dummies

20 May

OK, I often have to explain to people what I do, and in most cases I get an enquiring and mystified look! What is Speech Recognition, let alone VUI Design?! So I guess I have to go back to basics for a bit and explain what Speech Recognition is and what speech recognition applications involve.

What is Speech Recognition then?

Speech Recognition is the conversion of speech to text. The words that you speak are turned into a written representation of those words for the computer to process further (figure out what you want in order to decide what to do or say next). This is not an exact science because – even among us humans – speech recognition is difficult and fraught with misunderstandings or incomplete understanding. How many times have you had to repeat your name to someone (both in person and on the phone)? How many times have you had someone cracking up with laughter because they thought you said something different to what you actually said? These are examples of human speech recognition failing magnificently! So it is no wonder that machines do it even less well. It’s all guesswork really.

In the case of machine speech recognition, the machine will have a kind of lexicon at its disposal with possible words in the corresponding language (English, French, German etc.) and their phonetic representation. This phonetic representation describes the ways that people are most likely to pronounce that specific word (think of Queen’s English, or Hochdeutsch for German, at this point). Now if you bring regional accents and foreigners speaking the language into the equation, things get even more complicated. The very same letter combinations or whole words are pronounced completely differently depending on whether you are from London, Liverpool, Newcastle, Edinburgh, Dublin, Sydney, New York, or New Orleans. Likewise, the very same English letter combinations and words will sound even more different when spoken by a Greek, a German or a Japanese person. In order to deal with those cases, speech recognition lexica are augmented with additional “pronunciations” for each problematic word. So the machine can hear 3 different versions of the same word spoken by different people and still recognise it as one and the same word. Sorted! Of course, you don’t need to go to all this trouble for every possible word or phrase in the language you are covering with your speech application. You only need to go to such lengths for words and phrases that are relevant to your specific application (and domain), as well as for accents that are representative of your end-user population. If an app is going to be used mainly in England, you are better off covering Punjabi and Chinese pronunciations of your English app words rather than Japanese or German variants. There will of course be Japanese and German users of your system, but they represent a much smaller percentage of your user population and we can’t have everything!!
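To make the lexicon idea concrete, here is a minimal sketch in Python – purely my own illustration, where the words and the ARPAbet-style phoneme strings are invented rather than taken from any real recogniser:

```python
# A toy pronunciation lexicon: each word maps to several plausible
# pronunciations (ARPAbet-style phoneme strings), covering accent variation.
# Words and phoneme strings are invented for illustration.
LEXICON = {
    "tomato": [
        "T AH M AA T OW",   # roughly the British-style pronunciation
        "T AH M EY T OW",   # roughly the American-style pronunciation
    ],
    "schedule": [
        "SH EH JH UW L",    # typical UK variant
        "S K EH JH UW L",   # typical US variant
    ],
}

def words_matching(phonemes):
    """Return every word that the heard phoneme string could stand for."""
    return [word for word, prons in LEXICON.items() if phonemes in prons]

# Two speakers pronounce the same word differently; both map to "tomato".
print(words_matching("T AH M AA T OW"))  # ['tomato']
print(words_matching("T AH M EY T OW"))  # ['tomato']
```

Adding a pronunciation variant is then just one more string in the list, which is exactly why lexica grow to match the accents of the expected user population.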

Speech recognition may be based on text representations of words and their phonetic “translation” (pronunciations) but the whole process is actually statistical. What you say to the system will be processed by the system as a wave signal like this one here:

Speech signal for “.. and sadly crime experts predict that one day even a friendly conversation between mother and daughter will be conducted at gunpoint” 🙂 (Based on the Channel 4 comedy series “Brass Eye” – Season 1)

So the machine will have to figure out what you’re saying by chopping this signal up into parts, each representing a word that makes sense in the context of the surrounding words. Unfortunately the same signal can potentially be chopped up in several different ways, each representing a different string of words and of course a different meaning! There’s a famous example of the following ambiguous string:

signal for “How to Wreck a Nice Beach” err I mean “How to recognise Speech”!! (Taken from FNLP 2010: Lecture 1: Copyright (C) 2010 Henry S. Thompson)

The same speech signal can be heard as “How to wreck a nice beach” or .. “How to recognise speech“!!! They sound very similar actually!! So you can see the types of problems that we humans, let alone machines, are faced with when trying to recognise each other!
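As a toy illustration of how a recogniser might break such a tie, here is a sketch (again my own, with invented bigram scores rather than real corpus statistics) that scores the two competing word strings and keeps the more probable one:

```python
# Two competing readings of the same (imaginary) speech signal, scored with
# invented bigram log-probabilities. Real recognisers estimate such scores
# from large corpora; the numbers below are made up for illustration.
BIGRAM_LOGPROB = {
    ("how", "to"): -0.5,
    ("to", "recognise"): -2.0,
    ("recognise", "speech"): -1.0,
    ("to", "wreck"): -4.0,
    ("wreck", "a"): -1.5,
    ("a", "nice"): -1.2,
    ("nice", "beach"): -2.5,
}

def score(words):
    """Sum the bigram log-probs; unseen word pairs get a heavy penalty."""
    return sum(BIGRAM_LOGPROB.get(pair, -10.0) for pair in zip(words, words[1:]))

candidates = [
    ["how", "to", "recognise", "speech"],
    ["how", "to", "wreck", "a", "nice", "beach"],
]
print(max(candidates, key=score))  # the more probable word string wins
```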

Speech Recognition Techniques

The approach to speech recognition described above, which uses hand-crafted lexica, is the standard “manual” approach. This is effective and sufficient for applications that represent very limited domains, e.g. ordering a printer or getting your account balance. The lexica and the corresponding manual “grammars” can describe most relevant phrases that are likely to be spoken by the user population. Any other phrases will be just irrelevant one-offs that can be ignored without negatively affecting the performance of the system.
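As a rough sketch of what such a hand-crafted grammar boils down to in practice (the phrases, patterns and action names below are invented for illustration):

```python
import re

# A toy hand-crafted "grammar" for a very limited banking domain.
# Each pattern maps a recognised phrase to an application action.
GRAMMAR = [
    (re.compile(r"\b(account\s+)?balance\b"),            "GET_BALANCE"),
    (re.compile(r"\btransfer\s+\d+\s+pounds?\b"),        "TRANSFER"),
    (re.compile(r"\b(talk|speak)\s+to\s+an?\s+agent\b"), "AGENT"),
]

def interpret(utterance):
    """Return the first action whose pattern matches the recognised text."""
    text = utterance.lower()
    for pattern, action in GRAMMAR:
        if pattern.search(text):
            return action
    return None  # an irrelevant one-off: the app can ignore it or reprompt

print(interpret("I'd like my account balance please"))  # GET_BALANCE
print(interpret("What's the weather like?"))            # None
```

Everything the users might say has to be anticipated by the designer, which is precisely why this approach only scales to narrow domains.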

For anything more complex and advanced, there is the “statistical” approach. This involves the collection of large amounts of real-world speech data, preferably in your application domain: medical data for medical apps, online shopping data for a catalogue ordering app etc. The statistical recogniser will be run over this data multiple times, resulting in statistical representations of the most likely and meaningful combinations of sounds in the specific human language (English, German, French, Urdu etc.). This type of speech recogniser is much more robust and accurate than a “symbolic” recogniser (which uses the manual approach), because it can accurately predict sound and word combinations that could not have been pre-programmed in a hand-crafted grammar. Thus statistical recognisers have much better coverage of what people actually say (rather than what the programmer or linguist thinks people say). Sadly, most speech apps (the Interactive Voice Response systems or IVRs, for instance, used in Call Centre automation) are based on the manual symbolic approach rather than the fancy statistical one, because the latter requires considerable amounts of data, and this data is not readily available (especially for a new app that has never existed before). A lot of time would need to be spent recording relevant human-to-human conversations, and even more time analysing them in a useful manner. Even when data is available, things such as cost and privacy protection get in the way of either acquiring it or putting it to use.
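Here is a minimal sketch of the statistical idea. An invented three-line “corpus” stands in for the hours of recorded calls; a real recogniser works on sounds as well as words, but word bigrams show the principle:

```python
from collections import Counter

# A toy "corpus" standing in for hours of transcribed domain conversations.
corpus = [
    "i want my account balance",
    "i want to pay my bill",
    "i want to speak to an agent",
]

unigrams, bigrams = Counter(), Counter()
for sentence in corpus:
    words = sentence.split()
    unigrams.update(words)
    bigrams.update(zip(words, words[1:]))

def bigram_prob(w1, w2):
    """P(w2 | w1), estimated purely from the corpus counts."""
    return bigrams[(w1, w2)] / unigrams[w1] if unigrams[w1] else 0.0

print(bigram_prob("i", "want"))      # 1.0 - "i" is always followed by "want"
print(bigram_prob("want", "to"))     # ~0.67
print(bigram_prob("want", "pizza"))  # 0.0 - never seen, so not covered
```

The last line also shows the catch: anything not present in the data gets probability zero, which is why you need so much of it.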

Speech Recognition Applications

By now you should have realised how complex speech recognition is at the best of times, let alone how difficult it is to recognise people with different regional accents, linguistic backgrounds, and .. even moods or health conditions! (more on that later) Now let’s look at the different types of speech recognition applications. First of all, we should distinguish between speaker-dependent and speaker-independent apps.

Speaker-dependent applications involve the automatic speech recognition of a single person / speaker. It could be the dictation system that you’ve installed on your PC to take notes down, or to start writing emails and letters. It could be the hand-held dictation system that you carry around as a doctor or a lawyer, composing a medical report on your patients or talking to your clients while walking up and down the room. It could even be your standard mobile phone or smartphone / iPhone / Android that you use to call (voice dial) one of your saved contacts, search through your music library for a track with a simple voice command (or two), or even to tweet. All these are speaker-dependent applications in that the corresponding recogniser has been trained to work with your voice and your voice only. You may have trained it with as little as 5 minutes of speech (or longer, or shorter, in other cases), but it will work sufficiently well with your voice, even if you’ve got a cold (and therefore a hoarse voice) or you’re feeling low (and are therefore quieter than usual). Give it to your mate or colleague, though, and it will break down or misrecognise them in some way. The same recogniser would have to be retrained to work with any other speaker.

Enter speaker-independent speech recognition systems! These have been trained on huge amounts of real-world data from thousands of speakers of all kinds of different linguistic, ethnic, regional, or educational backgrounds. As a result, these systems can recognise anyone: you, your mate, all your colleagues, and anyone else you are likely to meet in the future. They are not tied to the way you pronounce things, your physiology or your voiceprint; they have been developed to work with any human (or indeed any machine pretending to be a human, come to think of it!). So when you buy off-the-shelf speech recognition software, it’s going to work immediately with any speaker, even if badly in some cases. You can later customise it to work for your specific app world and for your target user population, usually with some external help (enter Professional Services providers). Speaker-independent applications can work on any phone (mobile or landline) and are used mainly to (partly) automate Call Centres and Helplines, e.g. speech and DTMF IVRs for online shopping, telephone banking or e-Government apps. OK, speech recognition on a mobile can be tricky, as the signal may not be good, i.e. intermittent, the line could be crackling, and of course there is the additional problem of background noise, since you are most likely to use it out in the busy streets or in some kind of loud environment. Speaker-independent recognition is also used to create voice portals, i.e. speech-enabled versions of websites for greater accessibility and usability (think of disabled Web users). Moreover, a speaker-independent recogniser is also used for voicemail transcription, that is, when the voicemails you have received on your phone are transcribed automatically and sent to you as text messages, for instant and – importantly – discreet accessibility. These are B2B applications, which means that the solution is sold to a company (a Call Centre, a Bank, a Government organisation). In contrast, speaker-dependent apps are sold to an individual, so they are B2C apps, sold directly to the end customer.

Because speaker-independent apps have to work with any speaker calling from any device or channel (even the web – think of Skype), the corresponding speech recogniser is usually stored on a server or in the cloud somewhere. Speaker-dependent apps, on the other hand, are stored locally on your personal PC, laptop, Mac, mobile phone or handheld.

And to clear up any potential confusion beforehand: when you ring up an automated Call Centre IVR from your mobile (for instance, to pay a utilities bill), you are using a speech recogniser stored in the server rooms of that Call Centre, the company, its reseller or its solution provider. So in that case, although you are using your unique voice on your personal mobile phone, the recogniser does not reside on it. The same holds for voicemail transcription, curiously! Although you are using your unique voice on your personal phone to leave a voicemail on your mate’s phone, the speech recogniser used for the automatic transcription of your mate’s voicemail will be residing on some secret server somewhere, perhaps at the headquarters of their mobile provider or whoever is charging your mate for this handy service. In contrast, when you use a dictation / voice-to-text app on your smartphone to voice dial one of your contacts, your personal voiceprint, created during training and stored on the device, is used for the speech recognition process. So recognition is a built-in feature. Nowadays there is, however, a third case: if you are using your smartphone to search for an Indian restaurant on Google Maps, the recogniser actually resides in the cloud, on Google’s servers, rather than on the device. So there are increasingly more permutations of system configurations now!
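For a present-day taste of that third, cloud-based configuration, here is a minimal sketch using the third-party Python SpeechRecognition package – purely my choice for illustration, not the stack any of the systems above necessarily use; the audio file name is hypothetical:

```python
# A minimal sketch of cloud-based, speaker-independent recognition, using
# the third-party SpeechRecognition package (pip install SpeechRecognition).
# The file name "query.wav" is a hypothetical recording.
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.AudioFile("query.wav") as source:
    audio = recognizer.record(source)  # read the whole file into memory

try:
    # The audio is sent off to Google's servers: the recogniser lives in
    # the cloud, not on the device that did the recording.
    print(recognizer.recognize_google(audio))
except sr.UnknownValueError:
    print("Speech was unintelligible")
```

Note that nothing here is trained on your voice: any speaker’s audio can be sent, which is exactly the speaker-independent, server-side configuration described above.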

There are many off-the-shelf speech recognition software packages out there. Nuance is one of the biggest technology providers for both speaker-independent and speaker-dependent / dictation apps. Other automatic speech recognition (ASR) software companies are Loquendo, Telisma, and LumenVox. Companies specialising in speaker-dependent / dictation systems are Philips, Grundig and Olympus, among others. However, Microsoft has also long been active in speech processing, and lately Google has been catching up very fast.

The sky is the limit, as the saying goes!