Tag Archives: DialogCONNECTION

My baby, DialogCONNECTION, is 11!

4 Dec

This week, my company, DialogCONNECTION Limited, turned 11 years old! 🎉 🥂 😁

It feels like only yesterday that, in December 2008, I registered it with Companies House and became its Company Director (with multiple hats).

My very first client project was for the NHS Business Services Authority on their EHIC Helpline (which will hopefully survive the Brexit negotiations). Back then, whenever I told anyone what my company does (VUI Design for Speech IVRs), I was greeted by blank stares of incomprehension. It did feel a bit lonely at times!

Many more clients and thousands of long hours, long days and working weekends later, here we are in December 2019, and I suddenly find myself surrounded by VUI Designers and Voice Strategists who have now seen the potential and inescapable nature of speech interfaces and have followed in my footsteps. I feel vindicated, especially since I started in Voice back in 1996 with my Post-Doc in Spoken Dialogue Management at the University of Erlangen! 😎 (Yet another thing I’m hugely grateful to the EU for!)

We started with Voice-First VUI Design back in 1996, well before Samsung’s BIXBY (2017), Google’s ASSISTANT (2016), Amazon’s ALEXA (2014), Apple’s SIRI (2010) and even before the world started using GOOGLE for internet searches (1998)!

http://dialogconnection.com/who-designs-for-you.html

It’s quite frustrating to realise that many of these newcomers have never heard of an IVR (Interactive Voice Response) system, but they will eventually learn. 🤓 For the past 25 years it was the developers who insisted they could design conversational interfaces without any (Computational) Linguistics, Natural Language Processing (NLP) or Speech Recognition (ASR) background, and who therefore didn’t need a VUI Designer; we were an allegedly superfluous luxury and a rarity in those days. In the past couple of years it’s been the shiny Marketing people, who make a living from their language mastery, and the edgy GUI Designers, who excel in visual design, who think they can design voice interfaces too, yet still know nothing about NLP or ASR.

What they don’t know is that, by modifying just the wording of what your system says (prompt tuning), you can achieve dramatically better speech recognition and NLU accuracy, because the user is covertly “guided” to say what we expect (and have covered in the grammar). The same holds for tuned grammars (for out-of-vocabulary words), word pronunciations (for local and foreign accents), tuned VUI designs (for error recovery strategies) and tuned ASR engine parameters (for timeouts and barge-ins). It’s all about knowing how the ASR software and our human brain’s language software work.
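As a toy illustration of the idea (hypothetical phrases and a simple string matcher, not a real ASR engine), a directed prompt narrows what callers are likely to say, so the recogniser only has to pick the closest match from a small grammar instead of coping with unbounded phrasings:

```python
import difflib

def recognize(utterance, grammar, threshold=0.6):
    """Toy 'recogniser': return the closest in-grammar phrase,
    or None (i.e. reject and re-prompt) if nothing is a plausible match."""
    def score(phrase):
        return difflib.SequenceMatcher(None, utterance.lower(), phrase).ratio()
    best = max(grammar, key=score)
    return best if score(best) >= threshold else None

# An open prompt ("How can I help?") invites anything, so the grammar would
# have to cover endless wordings. A directed prompt ("Would you like sales
# or support?") covertly guides the caller toward what the grammar covers.
directed_grammar = ["sales", "support", "sales please", "support please"]

print(recognize("er, sales please", directed_grammar))     # matches an in-grammar phrase
print(recognize("my router is on fire", directed_grammar)) # out-of-grammar: reject, re-prompt
```

The same principle is why tuned prompts, grammars and error-recovery strategies matter so much in real IVRs: every word the system says shapes the space of answers the recogniser has to handle.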

Excited to see what the next decade is going to bring for DialogCONNECTION and the next quarter of a century for Voice! Stay tuned!

UBIQUITOUS VOICE: Essays from the Field now on Kindle!

14 Oct

In 2018, a new book on “Voice First” came out on Amazon and I was proud and deeply honoured, as it includes one of my articles! Now it has come out on Kindle as an e-Book and we are even more excited at the prospect of a much wider reach!

“Ubiquitous Voice: Essays from the Field”: Thoughts, insights and anecdotes on Speech Recognition, Voice User Interfaces, Voice Assistants, Conversational Intelligence, VUI Design, Voice UX issues, solutions, Best practices and visions from the veterans!

I have been part of this effort since its inception, working alongside some of the pioneers in the field who now represent the Market Leaders (GOOGLE, AMAZON, NUANCE, SAMSUNG VIV …). Excellent job by our tireless and intrepid Editor, Lisa Falkson!

My contribution “Convenience + Security = Trust: Do you trust your Intelligent Assistant?” is on data privacy concerns and social issues associated with the widespread adoption of voice activation. It is thus platform-, ASR-, vendor- and company-agnostic.

You can get the physical book here and the Kindle version here.

Prepare to be enlightened, guided and inspired!

Meet META, the Meta-cognitive skills Training Avatar!

16 Jun

METALOGUE logo

EU FP7 logo

 

Since November 2013, I’ve had the opportunity to participate in the EU-funded FP7 R&D project METALOGUE through my company, DialogCONNECTION Ltd, one of its 10 Consortium Partners. The project aims to develop a natural, flexible, and interactive Multi-perspective and Multi-modal Dialogue system with meta-cognitive abilities; a system that can:

  • monitor, reason about, and provide feedback on its own behaviour, intentions and strategies, and the dialogue itself,
  • guess the intentions of its interlocutor,
  • and accordingly plan the next step in the dialogue.

The system tries to dynamically adapt both its strategy and behaviour (speech and non-verbal aspects) in order to influence the dialogue partner’s reaction, and, as a result, the progress of the dialogue over time, and thereby also achieve its own goals in the most advantageous way for both sides.

The project is in its 3rd and final year (ending in Oct 2016) and has a budget of € 3,749,000 (EU contribution: € 2,971,000). METALOGUE brings together 10 Academic and Industry partners from 5 EU countries (Germany, Netherlands, Greece, Ireland, and UK).

 

METALOGUE focuses on interactive and adaptive training situations, where negotiation skills play a key role in the decision-making processes. Reusable and customisable software components and algorithms have been developed, tested and integrated into a prototype platform, which provides learners with a rich and interactive environment that motivates them to develop meta-cognitive skills, by stimulating creativity and responsibility in the decision-making, argumentation, and negotiation process. The project is producing a virtual trainer, META, a Training Avatar capable of engaging in natural interaction, currently in English (with German and Greek to follow), using gestures, facial expressions, and body language.

METALOGUE Avatar

Pilot systems have been developed for 2 different user scenarios: a) debating and b) negotiation, both tested and evaluated by English-speaking students at the Hellenic Youth Parliament. We are currently targeting various industry verticals, in particular Call Centres, e.g. to semi-automate and enhance Call Centre Agent training.

 

And here’s META in action!

 

In this video, our full-body METALOGUE Avatar is playing the role of a business owner who is negotiating a smoking ban with a local Government Councillor. It is still imperfect (e.g. there is some slight latency before replying – and an embarrassing repetition at some point!), but you can also see the realistic facial expressions, gaze, gestures, and body language, and even selective and effective pauses. It can process natural spontaneous speech in a pre-specified domain (the smoking ban, in this case) and has reached an ASR error rate below 24% (down from almost 50% two years ago!). The idea is to use such an Avatar in Call Centres to provide extra training support on top of existing training courses and workshops. It’s not about replacing the human trainer, but rather about empowering and motivating Call Centre Trainee Agents who are trying to learn how to read their callers and how to successfully negotiate deals and even complaints with them in an optimal way.


 

My company, DialogCONNECTION, is charged with the task of attracting interest and feedback from industry to gauge the relevance and effectiveness of the METALOGUE approach in employee training contexts (esp. negotiation and decision-making). We are looking in particular for Call Centres: both small and agile (serving multiple small clients) and large (and probably plagued by the well-known agent burn-out syndrome). Ideally, you would give us access to real-world Call Centre Agent-Caller/Customer recordings, or even simulated Trainer-Trainee phone calls that are used for situational Agent training (either already available or collected specifically for the project). A total of just 15 hours of audio (and video, if available) would suffice to train the METALOGUE speech recognisers and the associated acoustic and language models, as well as its metacognitive models.

However, if you don’t want to commit your organisation’s data, any type of input and feedback would make us happy! As an innovative pioneering research project, we really need guidance, evaluation and any input from the real world of industry! So, if we have sparked your interest in any way and you want to get involved and give it a spin, please get in touch!

The 2015 stats are in!

20 Jan

The WordPress.com stats monkeys prepared an annual report for this blog.

Top blog posts in 2015 were “A.I.: from Sci-Fi to Science reality”, the ever-popular older “Speech Recognition for Dummies”, and the classic “Voice-activated lift won’t do Scottish! (Burnistoun S1E1 – ELEVEN!)”.

Scottish Elevator – Voice Recognition – ELEVEN!

(YouTube – Burnistoun – Series 1 , Episode 1 [ Part 1/3 ])

Voice recognition technology? …  In a lift? … In Scotland? … You ever TRIED voice recognition technology? It don’t do Scottish accents!

🙂

 

So, we had 4,500 unique visitors in 2015! Thank you!

A New York City subway train holds 1,200 people. This blog was viewed about 4,500 times in 2015. If it were a NYC subway train, it would take about 4 trips to carry that many people.

Check out some more stats in the full WordPress report.

Happy 2016! 🙂

Call Centre Training e-poll

11 Oct

As part of our METALOGUE project, we have created an electronic poll (e-poll).


Our goal is to collect actual real-world requirements from Call Centre professionals that will inform our system pilot design and implementation. Through this and a number of other e-polls, we are asking some basic questions on Call Centre Agent training goals, Call Centre Agent preferences, target functionality of an automated agent training tool, etc.

We are inviting anyone from the Industry to participate, from Call Centre Operators and Managers and Agent Trainers to Call Centre Agents (experienced and novice). Feel free to add your own input and comments.

If you can also use the Contact form below to indicate whether you are a Call Centre Operator / Manager, Trainer, or Agent (or all of the above!), we would be able to collect some data on the demographics of the e-poll respondents.

Thank you in advance!

Award to our METALOGUE Presentation Trainer!

26 Sep

Fantastic news!

Our EU project METALOGUE has won the EC-TEL 2014 Technology-Enhanced Learning Best Demo Award! Our METALOGUE Partners at the Dutch Open University (OUNL) demonstrated and won the audience over with their Presentation Trainer, a public speaking instructor which tracks and analyses the user’s body posture and movements, speaking cadence and voice volume, and provides instructional feedback on their non-verbal communication skills (sensor-based learning). Congratulations to our OUNL partners!

EC-TEL 2014 Best Demo Award

EU FP7 logo

The first time ever someone ordered a pizza with a computer!

23 Jan

“In 1974 Donald Sherman, whose speech was limited by a neurological disorder called Moebius Syndrome, used a new-fangled device designed by John Eulenberg to dial up a pizzeria. The first call went to Dominos, which hung up. They were apparently too busy becoming a behemoth. Mercifully, a humane pizzeria – Mr. Mike’s – took the call, and history was made. It all plays out below, and we hope that Mr. Mike’s is still thriving all these years later….” (Smithsonian.com Blog)

Speech synthesis on this computer was rather slow, and it also apparently required questions to be phrased as “Yes/No” questions, so that it could simply generate a “Yes” or a “No”. Still, it could also synthesize other phrases, such as the pizza toppings (pepperoni and mushrooms, salami ...), the complex delivery address (the Michigan State Computer Science Department), as well as the contact number for callback. So not bad at all!

I was touched by the patience and kindness of the pizza place employee. He would patiently wait for up to 5 seconds for any answer, which must have been unnerving in itself! And now he is part of History! Good on him!! And well done to the Michigan State University‘s Artificial Language Laboratory and Dr. John Eulenberg!