Archive | IVR Design Best Practice

2 for 1: Sponsor a Top Speech, NLP & Robotics Event (SPECOM & ICR 2017)

9 Dec


Joint SPECOM 2017 and ICR 2017 Conference

The 19th International Conference on Speech and Computer (SPECOM 2017) and the 2nd International Conference on Interactive Collaborative Robotics (ICR 2017) will be jointly held in Hatfield, Hertfordshire on 12-16 September 2017.

SPECOM has been established as one of the major international scientific events in the areas of speech technology and human-machine interaction over the last 20 years. It attracts scientists and engineers from several European, American and Asian countries, and every year the Programme Committee consists of internationally recognised experts in speech technology and human-machine interaction from diverse countries and institutes, which ensures the scientific quality of the proceedings.

SPECOM TOPICS: Affective computing; Applications for human-machine interaction; Audio-visual speech processing; Automatic language identification; Corpus linguistics and linguistic processing; Forensic speech investigations and security systems; Multichannel signal processing; Multimedia processing; Multimodal analysis and synthesis; Signal processing and feature extraction; Speaker identification and diarization; Speaker verification systems; Speech analytics and audio mining; Speech and language resources; Speech dereverberation; Speech disorders and voice pathologies; Speech driving systems in robotics; Speech enhancement; Speech perception; Speech recognition and understanding; Speech translation automatic systems; Spoken dialogue systems; Spoken language processing; Text mining and sentiment analysis; Text-to-speech and speech-to-text systems; Virtual and augmented reality.

Since last year, SPECOM has been jointly organised with the ICR conference, extending its scope to human-robot interaction. This year the joint conferences will include three Special Sessions co-organised by academic institutes from Europe, the USA, Asia and Australia.

ICR 2017 Topics: Assistive robots; Child-robot interaction; Collaborative robotics; Educational robotics; Human-robot interaction; Medical robotics; Robotic mobility systems; Robots at home; Robot control and communication; Social robotics; Safety robot behaviour.

Special Session 1: Natural Language Processing for Social Media Analysis

The exploitation of natural language from social media data is an intriguing task in the fields of text mining and natural language processing (NLP), with plenty of applications in social sciences and social media analytics. In this special session, we call for research papers in the broader field of NLP techniques for social media analysis. The topics of interest include (but are not limited to): sentiment analysis in social media and beyond (e.g., stance identification, sarcasm detection, opinion mining), computational sociolinguistics (e.g., identification of demographic information such as gender, age), and NLP tools for social media mining (e.g., topic modeling for social media data, text categorization and clustering for social media).

Special Session 2: Multilingual and Low-Resourced Languages Speech Processing in Human-Computer Interaction

Multilingual speech processing has been an active topic for many years. Over the last few years, the availability of big data in a vast variety of languages and the convergence of speech recognition and synthesis approaches to statistical parametric techniques (mainly deep neural networks) have put this field at the centre of research interest, with special attention to low- or even zero-resourced languages. In this special session, we call for research papers in the field of multilingual speech processing. The topics include (but are not limited to): multilingual speech recognition and understanding, dialectal speech recognition, cross-lingual adaptation, text-to-speech synthesis, spoken language identification, speech-to-speech translation, multi-modal speech processing, keyword spotting, emotion recognition and deep learning in speech processing.

Special Session 3: Real-Life Challenges in Voice and Multimodal Biometrics

Complex passwords and cumbersome dongles are now obsolete. Biometric technology offers a secure and user-friendly solution for authentication and has been employed in various real-life scenarios. This special session seeks to bring together researchers, professionals, and practitioners to present and discuss recent developments and challenges in real-life applications of biometrics. Topics of interest include (but are not limited to):

Biometric systems and applications; Identity management and biometrics; Fraud prevention; Anti-spoofing methods; Privacy protection of biometric systems; Uni-modalities, e.g. voice, face, fingerprint, iris, hand geometry, palm print and ear biometrics; Behavioural biometrics; Soft-biometrics; Multi-biometrics; Novel biometrics; Ethical and societal implications of biometric systems and applications.

Delegates’ profile

Speech technology, human-machine interaction and human-robot interaction attract a multidisciplinary group of students and scientists from computer science, signal processing, machine learning, linguistics, social sciences, natural language processing, text mining, dialogue systems, affective modelling, interactive interfaces, collaborative and social robotics, and intelligent and adaptive systems. Approximately 150 delegates are expected to attend the joint SPECOM and ICR conferences.

Who should sponsor:

  • Research Organisations
  • Universities and Research Labs
  • Research and Innovation Projects
  • Academic Publishers
  • Innovative Companies

Sponsorship Levels

Depending on their sponsorship level, sponsors will be able to disseminate their research, innovation and/or commercial activities through distributed leaflets/brochures and/or three-day booths in the common area hosting the coffee breaks and poster sessions.

Location

The joint SPECOM and ICR conferences will be held in the College Lane Campus of the University of Hertfordshire, in Hatfield. Hatfield is located 20 miles (30 kilometres) north of London and is connected to the capital via the A1(M) and direct trains to London King’s Cross (20 minutes), Finsbury Park (16 minutes) and Moorgate (35 minutes). It is easily accessible from 3 international airports (Luton, Stansted and Heathrow) via public transportation.

Contact:

Iosif Mporas

i.mporas@herts.ac.uk 

School of Engineering and Technology

University of Hertfordshire

Hatfield, UK

The winning Omni-Channel Customer Experience

8 Oct


I just saw this really interesting and eye-opening infographic on LinkedIn and had to share.

It presents some really interesting statistics on current customer behaviour and attitudes towards brands and how customers communicate with them. For example:

  • The probability of successfully cross-selling or up-selling to a potential customer is just 5-20%, but a whopping 60-70% with an existing customer. (So, watch it and respect your existing customers please!)
  • More than 63% of customer service enquiries are initiated through social channels. (So, watch your social media streams!)

Read more here

Call Centre Training e-poll

11 Oct

As part of our METALOGUE project, we have created an electronic poll (e-poll).


Our goal is to collect real-world requirements from Call Centre professionals that will inform the design and implementation of our pilot system. Through this and a number of other e-polls, we are asking some basic questions on Call Centre Agent training goals, Call Centre Agent preferences, the target functionality of an automated agent-training tool, and so on.

We are inviting anyone from the industry to participate: Call Centre Operators and Managers, Agent Trainers, and Call Centre Agents (experienced and novice alike). Feel free to add your own input and comments.

If you also use the Contact form below to indicate whether you are a Call Centre Operator/Manager, Trainer, or Agent (or all of the above!), we will be able to collect some data on the demographics of the e-poll respondents.

Thank you in advance!

Develop your own Android voice app!

26 Dec

Voice Application Development for Android

My colleague Michael F. McTear has a new and very topical book out: Voice Application Development for Android, co-authored with Zoraida Callejas. Apart from a hands-on, step-by-step yet condensed guide to voice application development, you get the source code to develop your own Android apps for free!

Get the book here or through Amazon. And have a look at the source code here.
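
To give a flavour of what is involved, here is a minimal sketch of launching Android's stock speech recogniser via the platform's RecognizerIntent API. This is my own illustrative snippet, not code from the book; the activity name and prompt text are invented.

```kotlin
// Illustrative sketch only -- not code from the book. It fires Android's
// built-in speech-to-text dialog and reads back the top hypothesis.
import android.app.Activity
import android.content.Intent
import android.os.Bundle
import android.speech.RecognizerIntent
import android.widget.Toast

class VoiceDemoActivity : Activity() {  // hypothetical activity name

    private val speechRequestCode = 1001  // arbitrary request code

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        startListening()
    }

    // Launch the stock speech recogniser with a free-form language model
    private fun startListening() {
        val intent = Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH).apply {
            putExtra(
                RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM
            )
            putExtra(RecognizerIntent.EXTRA_PROMPT, "Say something!")
        }
        startActivityForResult(intent, speechRequestCode)
    }

    // Collect the recognition results when the dialog returns
    override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
        super.onActivityResult(requestCode, resultCode, data)
        if (requestCode == speechRequestCode && resultCode == RESULT_OK) {
            val matches = data?.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS)
            val heard = matches?.firstOrNull() ?: "Nothing recognised"
            Toast.makeText(this, heard, Toast.LENGTH_LONG).show()
        }
    }
}
```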

Exciting times ahead for do-it-yourself Android speech app development!

The AVIxD 49 VUI Tips in 45 Minutes!

6 Nov


The illustrious Association for Voice Interaction Design (AVIxD) organised a workshop at SpeechTEK in August 2010, whose goal was “to provide VUI designers with as many tips as possible during the session”. Initially the goal was 30 tips in 45 minutes, but they got overexcited and came up with a whopping 49 tips in the end! The session was moderated by Jenni McKienzie, and the panelists were David Attwater, Jon Bloom, Karen Kaushansky, and Julie Underdahl. The list dates back 3 years now, but it is by no means outdated. It is some of the soundest advice you will find on designing better voice-recognition IVRs, and I hated it being buried in a PDF!

So I am audaciously plagiarising and bringing them to you here: the 49 VUI Tips for Better Voice User Interface Design! Or go and read the PDF yourselves here:

[Scanned images of the 49 VUI Tips]

 

Have you got a VUI Tip you can’t find in this list that you’d like to share? Tell us here!

 

The first time ever someone ordered a pizza with a computer!

23 Jan

“In 1974 Donald Sherman, whose speech was limited by a neurological disorder called Moebius Syndrome, used a new-fangled device designed by John Eulenberg to dial up a pizzeria. The first call went to Dominos, which hung up. They were apparently too busy becoming a behemoth. Mercifully, a humane pizzeria – Mr. Mike’s – took the call, and history was made. It all plays out below, and we hope that Mr. Mike’s is still thriving all these years later….” (Smithsonian.com Blog)

Speech synthesis on this computer was rather slow, and it apparently also required explicitly phrased “Yes/No” questions just to generate a “Yes” or a “No”. Still, it could also synthesize other phrases, such as the pizza toppings (pepperoni and mushrooms, salami ...), the complex delivery address (the Michigan State Computer Science Department), as well as a contact number for callback. So, not bad at all!

I was touched by the patience and kindness of the pizza place employee. He would patiently wait for up to 5 seconds for an answer, which must have been unnerving in itself! And now he is part of history! Good on him!! And well done to Michigan State University’s Artificial Language Laboratory and Dr. John Eulenberg!

TEDxManchester 2012: Voice Recognition FTW!

12 Sep

After the extensive TEDxSalford report, and the TEDxManchester Best-of, it’s about time I posted the YouTube video of my TEDxManchester talk!

TEDxManchester took place on Monday 13th February this year at one of the iconic Manchester locations (and my “local”), the Cornerhouse. Among the luminary speakers were people I have always admired, such as the radio goddess Mary Anne Hobbs, and people I have become very close friends with over the years (which has led to an equal amount of admiration), such as Ian Forrester (@cubicgarden to most of us). You can check out their respective talks, as well as some awesome others, in my TEDxManchester report below.

My TEDxManchester talk

I spoke about the weird and wonderful world of voice recognition (“Voice Recognition FTW!”): from the inaccurate, and far too often funny, simple voice-to-text apps and dictation systems on your smartphone, to the most frustrating automated Call Centres, to the next-generation, sophisticated SIRI, and everything in between. I explained why things go wrong and when things can go wonderfully right. The answer is “CONTEXT”: the more of it you have, the more accurate and relevant the interpretation of the user's intention will be, and the more relevant and impressive the system's reaction or reply will be.
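
To make the point concrete, here is a toy sketch of how context can re-rank a speech recogniser's candidate interpretations. This is my own illustration, not material from the talk; all names, scores and the simple additive bonus are invented for the example.

```kotlin
// Toy illustration only -- invented names and scores, not code from the talk.
// Context words (e.g. the dialogue state or the screen the user is on)
// are used to re-rank an ASR n-best list.
data class Hypothesis(val text: String, val acousticScore: Double)

fun rerank(nBest: List<Hypothesis>, contextWords: Set<String>): List<Hypothesis> =
    nBest.sortedByDescending { h ->
        // Count how many words of the hypothesis appear in the context
        val overlap = h.text.lowercase().split(" ").count { it in contextWords }
        h.acousticScore + 0.1 * overlap  // simple additive context bonus
    }

fun main() {
    // Pretend the user is on a food-ordering screen
    val context = setOf("order", "pizza", "delivery", "toppings")
    val nBest = listOf(
        Hypothesis("recognise speech", 0.62),
        Hypothesis("wreck a nice beach", 0.61),
        Hypothesis("order a pizza", 0.58)
    )
    // With the ordering context, "order a pizza" overtakes the
    // acoustically stronger but irrelevant hypotheses.
    println(rerank(nBest, context).first().text)  // -> order a pizza
}
```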

Here, finally, is my TEDxManchester video on YouTube.

And below are my TEDxManchester slides.