Archive | June, 2016

Meet META, the Meta-cognitive skills Training Avatar!

16 Jun

METALOGUE logo

EU FP7 logo

 

Since November 2013, I’ve had the opportunity to participate in the EU-funded FP7 R&D project METALOGUE through my company, DialogCONNECTION Ltd, one of its 10 Consortium Partners. The project aims to develop a natural, flexible, and interactive Multi-perspective and Multi-modal Dialogue system with meta-cognitive abilities – a system that can:

  • monitor, reason about, and provide feedback on its own behaviour, intentions and strategies, and the dialogue itself,
  • guess the intentions of its interlocutor,
  • and accordingly plan the next step in the dialogue.

The system dynamically adapts both its strategy and its behaviour (speech and non-verbal aspects) to influence the dialogue partner’s reaction and, as a result, the progress of the dialogue over time, thereby also pursuing its own goals in the way that is most advantageous for both sides.
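
To make this monitor / infer / plan / adapt loop a little more concrete, here is a minimal sketch in Python. It is purely illustrative: the class, function, and label names are my own invention and are not METALOGUE components, which work on speech, gesture, and far richer dialogue-act and strategy representations.

    # Illustrative sketch only: all names below are hypothetical, not METALOGUE code.
    from dataclasses import dataclass, field

    @dataclass
    class DialogueState:
        history: list = field(default_factory=list)   # (speaker, utterance) pairs
        strategy: str = "hold_firm"                    # current negotiation strategy

    def infer_intention(utterance: str) -> str:
        """Crude stand-in for the system's intention-recognition component."""
        text = utterance.lower()
        if "no" in text or "refuse" in text:
            return "reject_offer"
        if "?" in text:
            return "request_information"
        return "make_offer"

    def monitor(state: DialogueState) -> str:
        """Meta-cognitive step: review the dialogue so far and the system's own progress."""
        rejections = sum(1 for speaker, u in state.history
                         if speaker == "partner" and infer_intention(u) == "reject_offer")
        return "stalled" if rejections >= 2 else "progressing"

    def plan_next_move(state: DialogueState, partner_intention: str) -> str:
        """Adapt the strategy based on self-monitoring, then choose the next dialogue act."""
        if monitor(state) == "stalled":
            state.strategy = "concede"                 # change tack to keep the negotiation moving
        if partner_intention == "request_information":
            return "provide_information"
        return "propose_compromise" if state.strategy == "concede" else "restate_offer"

    # One toy turn: the partner rejects an offer and the system plans its reply.
    state = DialogueState()
    state.history.append(("partner", "No, we refuse a total smoking ban."))
    print(plan_next_move(state, infer_intention(state.history[-1][1])))  # -> restate_offer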

The project is in its 3rd and final year (ending in Oct 2016) and has a budget of € 3,749,000 (EU contribution: € 2,971,000). METALOGUE brings together 10 Academic and Industry partners from 5 EU countries (Germany, the Netherlands, Greece, Ireland, and the UK).

 

METALOGUE focuses on interactive and adaptive training situations, where negotiation skills play a key role in decision-making processes. Reusable and customisable software components and algorithms have been developed, tested, and integrated into a prototype platform, which provides learners with a rich, interactive environment that motivates them to develop meta-cognitive skills by stimulating creativity and responsibility in the decision-making, argumentation, and negotiation process. The project is producing a virtual trainer, META, a Training Avatar capable of engaging in natural interaction (currently in English, with German and Greek to be added in the future) using gestures, facial expressions, and body language.

METALOGUE Avatar

Pilot systems have been developed for 2 different user scenarios: a) debating and b) negotiation, both tested and evaluated by English-speaking students at the Hellenic Youth Parliament. We are currently targeting various industry verticals, in particular Call Centres, e.g. to semi-automate and enhance Call Centre Agent Training.

 

And here’s META in action!

 

In this video, our full-body METALOGUE Avatar plays the role of a business owner negotiating a smoking ban with a local government Councillor. It is still imperfect (e.g. there is some slight latency before replying – and an embarrassing repetition at one point!), but you can also see the realistic facial expressions, gaze, gestures, and body language, and even selective and effective pauses. It can process natural, spontaneous speech in a pre-specified domain (here, the smoking ban) and has reached an ASR error rate below 24% (down from almost 50% two years ago!). The idea is to use such an Avatar in Call Centres to provide extra training support on top of existing training courses and workshops. It’s not about replacing the human trainer, but about empowering and motivating Call Centre Trainee Agents who are trying to learn how to read their callers and how to successfully negotiate deals, and even complaints, with them.
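
For context on what that error rate measures: ASR accuracy is conventionally reported as word error rate (WER), i.e. the word-level edit distance between a reference transcript and the recogniser’s hypothesis, divided by the number of reference words. The short sketch below is my own illustration of that calculation (assuming WER is the metric meant here), not project code.

    def wer(reference: str, hypothesis: str) -> float:
        """Word error rate: word-level edit distance / number of reference words."""
        ref, hyp = reference.split(), hypothesis.split()
        # dp[i][j] = minimum edits to turn the first i reference words into the first j hypothesis words
        dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            dp[i][0] = i                # i deletions
        for j in range(len(hyp) + 1):
            dp[0][j] = j                # j insertions
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                substitution = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
                dp[i][j] = min(substitution, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
        return dp[len(ref)][len(hyp)] / len(ref)

    # One substitution and one deletion against a 5-word reference -> 40% WER.
    print(wer("the ban starts next month", "a ban starts month"))  # 0.4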

My company, DialogCONNECTION, is charged with attracting interest and feedback from industry, to gauge the relevance and effectiveness of the METALOGUE approach in employee training contexts (esp. negotiation and decision-making). We are looking in particular for Call Centres: both small and agile ones (serving multiple small clients) and large ones (probably plagued by the well-known agent burn-out syndrome). Ideally, you would give us access to real-world Call Centre Agent-Caller/Customer recordings, or even simulated Trainer – Trainee phone calls used for situational Agent training (either already available or collected specifically for the project). A total of just 15 hours of audio (and video, if available) would suffice to train the METALOGUE speech recognisers and the associated acoustic and language models, as well as its metacognitive models.

However, if you don’t want to commit your organisation’s data, any type of input and feedback would make us happy! As an innovative, pioneering research project, METALOGUE really needs guidance, evaluation, and input from the real world of industry! So, if we have sparked your interest in any way and you want to get involved and give it a spin, please get in touch!

The Future of A.I. and the Mind

6 Jun

Just listened with great interest and amusement to Neil deGrasse Tyson’s StarTalk Radio podcast “Gazing into the Future with Ray Kurzweil”, where “your personal Astrophysicist” and Hayden Planetarium Director interviews the Futurist about his predictions on the future of A.I., the mind, and humankind in general.

 

The interview spans all kinds of topics: from embodied and disembodied Artificial Intelligence (think Robots vs the Cloud), to embodied and disembodied Human Intelligence (think getting hooked up with 3D-printed body parts vs uploading your “brain” – or at least your memories – onto a computer chip), Nanorobots running through your bloodstream, and even (Sex) Robots flirting with Polyamorous Roboticists (sic!).

 

 

Kurzweil is criticised by the other interviewee, Neuroscientist Gary Marcus, for being so specific about the dates by which each milestone is going to happen (eliminate poverty by 2020? Eliminate disease by 2030? That would be nice). But they all agree that the future is coming – in some ways it’s already here – and that the moral questions posed, or that should be posed, are valid and tricky to answer: What’s the future of work? Can machines turn us into paperclips? Will we all be unemployed in the future? Can machines turn evil, or will circumstances turn them destructive? (Think of self-driving cars inadvertently colliding with one another in the vicinity of magnets.) There are as many optimistic as pessimistic scenarios, but we are reassured that SkyNet is unlikely to become reality any time soon.

And we may simulate human intelligence and even language, but they maintain it’s unrealistic to try and simulate the human brain itself, with its billions of neurons and synapses, in search of the foundations of this intelligence and linguistic aptitude.

But then again, as one of the sayings quoted in the show goes, “When something doesn’t work, it’s A.I.; when it does, it’s just (clever) Engineering”. Quite!