Tag Archives: multilingual

2 for 1: Sponsor a Top Speech, NLP & Robotics Event (SPECOM & ICR 2017)

9 Dec



Joint SPECOM 2017 and ICR 2017 Conference

The 19th International Conference on Speech and Computer (SPECOM 2017) and the 2nd International Conference on Interactive Collaborative Robotics (ICR 2017) will be jointly held in Hatfield, Hertfordshire on 12-16 September 2017.

SPECOM has established itself as one of the major international scientific events in speech technology and human-machine interaction over the last 20 years. It attracts scientists and engineers from several European, American and Asian countries, and every year the Programme Committee consists of internationally recognized experts in speech technology and human-machine interaction from diverse countries and institutes, which ensures the scientific quality of the proceedings.

SPECOM TOPICS: Affective computing; Applications for human-machine interaction; Audio-visual speech processing; Automatic language identification; Corpus linguistics and linguistic processing; Forensic speech investigations and security systems; Multichannel signal processing; Multimedia processing; Multimodal analysis and synthesis; Signal processing and feature extraction; Speaker identification and diarization; Speaker verification systems; Speech analytics and audio mining; Speech and language resources; Speech dereverberation; Speech disorders and voice pathologies; Speech driving systems in robotics; Speech enhancement; Speech perception; Speech recognition and understanding; Speech translation automatic systems; Spoken dialogue systems; Spoken language processing; Text mining and sentiment analysis; Text-to-speech and speech-to-text systems; Virtual and augmented reality.

Since last year, SPECOM has been jointly organised with the ICR conference, extending its scope to human-robot interaction. This year the joint conferences will feature three Special Sessions co-organised by academic institutes from Europe, the USA, Asia and Australia.

ICR 2017 Topics: Assistive robots; Child-robot interaction; Collaborative robotics; Educational robotics; Human-robot interaction; Medical robotics; Robotic mobility systems; Robots at home; Robot control and communication; Social robotics; Safety robot behaviour.

Special Session 1: Natural Language Processing for Social Media Analysis

The exploitation of natural language from social media data is an intriguing task in the fields of text mining and natural language processing (NLP), with plenty of applications in social sciences and social media analytics. In this special session, we call for research papers in the broader field of NLP techniques for social media analysis. The topics of interest include (but are not limited to): sentiment analysis in social media and beyond (e.g., stance identification, sarcasm detection, opinion mining), computational sociolinguistics (e.g., identification of demographic information such as gender, age), and NLP tools for social media mining (e.g., topic modeling for social media data, text categorization and clustering for social media).

Special Session 2: Multilingual and Low-Resourced Languages Speech Processing in Human-Computer Interaction

Multilingual speech processing has been an active research topic for many years. Over the last few years, the availability of big data in a vast variety of languages and the convergence of speech recognition and synthesis approaches towards statistical parametric techniques (mainly deep neural networks) have placed this field at the centre of research interest, with special attention to low- or even zero-resourced languages. In this special session, we call for research papers in the field of multilingual speech processing. The topics include (but are not limited to): multilingual speech recognition and understanding, dialectal speech recognition, cross-lingual adaptation, text-to-speech synthesis, spoken language identification, speech-to-speech translation, multi-modal speech processing, keyword spotting, emotion recognition and deep learning in speech processing.

Special Session 3: Real-Life Challenges in Voice and Multimodal Biometrics

Complex passwords and cumbersome dongles are becoming obsolete. Biometric technology offers a secure and user-friendly solution for authentication and has been employed in various real-life scenarios. This special session seeks to bring together researchers, professionals and practitioners to present and discuss recent developments and challenges in real-life applications of biometrics. Topics of interest include (but are not limited to):

Biometric systems and applications; Identity management and biometrics; Fraud prevention; Anti-spoofing methods; Privacy protection of biometric systems; Uni-modalities, e.g. voice, face, fingerprint, iris, hand geometry, palm print and ear biometrics; Behavioural biometrics; Soft-biometrics; Multi-biometrics; Novel biometrics; Ethical and societal implications of biometric systems and applications.

Delegates’ profile

Speech technology, human-machine interaction and human-robot interaction attract a multidisciplinary group of students and scientists from computer science, signal processing, machine learning, linguistics, social sciences, natural language processing, text mining, dialogue systems, affective modelling, interactive interfaces, collaborative and social robotics, and intelligent and adaptive systems. Approximately 150 delegates are expected to attend the joint SPECOM and ICR conferences.

Who should sponsor:

  • Research Organisations
  • Universities and Research Labs
  • Research and Innovation Projects
  • Academic Publishers
  • Innovative Companies

Sponsorship Levels

Depending on their sponsorship level, sponsors will be able to disseminate their research, innovation and/or commercial activities by distributing leaflets/brochures and/or via three-day booths in the common area shared with the coffee breaks and poster sessions.


The joint SPECOM and ICR conferences will be held in the College Lane Campus of the University of Hertfordshire, in Hatfield. Hatfield is located 20 miles (30 kilometres) north of London and is connected to the capital via the A1(M) and direct trains to London King’s Cross (20 minutes), Finsbury Park (16 minutes) and Moorgate (35 minutes). It is easily accessible from 3 international airports (Luton, Stansted and Heathrow) via public transportation.


Iosif Mporas


School of Engineering and Technology

University of Hertfordshire

Hatfield, UK

Cross-linguistic & Cross-cultural Voice Interaction Design

31 Jan

(update at the end)

2010 saw the first SpeechTEK Conference held outside the US, SpeechTEK Europe 2010 in London. This year's European Conference, SpeechTEK Europe 2011, will again take place in London (25-26 May 2011), but this time it will be preceded, on Tuesday 24th May, by a special Workshop on Cross-linguistic & Cross-cultural Voice Interaction Design organised by the Association for Voice Interaction Design (AVIxD). The main goal of AVIxD is to bring together voice interaction and experience designers from both industry and academia and, among other things, to “eliminate apathy and antipathy toward the need for good design of automated voice services” (that’s my favourite!). This is the first AVIxD Workshop to take place in Europe and I am honoured to have been appointed Co-Chair alongside Caroline Leathem-Collins from EIG.

Participation is free for AVIxD members and just £25 for non-members (which may be applied towards AVIxD membership). However, in order to participate in the workshop, you need to submit a brief position paper in English (approx. 500 words) on any of the workshop's topics of interest (see the CFP below). The deadline for electronic submissions is Friday 25 March, so you need to hurry if you want to be part of it!

Here’s the full Call for (Position) Papers from the AVIxD site:

Call for Position Papers

First European AVIxD Workshop

Cross-linguistic & Cross-cultural Voice Interaction Design

Tuesday, 24 May 2011 (just prior to SpeechTEK Europe 2011), 1 – 7 PM

London, England

The Association for Voice Interaction Design (AVIxD) invites you to join us for our first voice interaction design workshop held in Europe, Cross-linguistic & Cross-cultural Voice Interaction Design. The AVIxD workshop is a hands-on day-long session in which voice user interface practitioners come together to debate a topic of interest to the speech community. The workshop is a unique opportunity for them to meet with their peers and delve deeply into a single topic.

As in previous years with the AVIxD Workshops held in the US, we will write papers based on our discussions which we will then publish on www.avixd.org. Please visit our website to see papers from previous workshops, and for more details on the purpose of the organization and how you can be part of it.

In order to participate in the workshop, individuals must submit a position paper of approximately 500 words in English. Possible topics to touch upon in your submission (to be discussed in depth during the workshop) include:

  1. Language choice and user demographics
  2. Presentation of the language options to the caller and caller preference
  3. Creation and (co-)maintenance of dialogue designs, grammars, prompts across languages
  4. Political and sociolinguistic issues in system prompt choices and recognition grammars, such as code-switching, formal versus informal registers
  5. Guidelines for application localization, translation, and interpretation
  6. Setting expectations regarding availability of multilingual agents
  7. Language- and culture-sensitive persona definition
  8. Coordinating usability testing and tuning across diverse linguistic / cultural groups
  9. Language choice and modality preference

We always encourage the use of specific examples from applications you’ve worked on in your position paper.

Participation is free to AVIxD members; non-members will be charged £25, which may be applied towards AVIxD membership at the workshop. Please submit your position papers via email no later than Friday 25 March 2011 to cfp@avixd.org. Letters of acceptance will be sent out on 30 March 2011.

We look forward to engaging with the European speech design community to discuss the particular challenges of designing speech solutions for users from diverse linguistic and cultural backgrounds. Feel free to contact either of the co-chairs below if you have any questions.

Caroline Leathem-Collins, EIG  (caroline {at} eiginc {dot} com)

Maria Aretoulaki, DialogCONNECTION Ltd (maria {at} dialogconnection {dot} com)


SpeechTEK Europe 2011 has come and gone and I’ve got many interesting things to report (as I have been tweeting through my @dialogconnectio Twitter account).

But first, here are the slides for my presentation at the main conference on the outcome of the AVIxD Workshop on Cross-linguistic & Cross-cultural Voice Interaction Design organised by the Association for Voice Interaction Design (AVIxD). I only had 12 hours to prepare them – including sleep and the London tube commute – so I had to keep working on them practically until shortly before the session! Still, I think the slides capture the breadth and depth of the topics discussed, or at least touched upon, at the workshop. Several people are now writing up all these topics, and one or more white papers should follow very soon (by the end of July, we hope!). So the slides did their job after all!

Get the slides in PDF here: Maria Aretoulaki – SpeechTEK Europe 2011 presentation.