Predicting when to laugh with structured classification - IMS - Information, Multimodality and Signal Team
Conference paper - Year: 2014

Predicting when to laugh with structured classification

Abstract

Today, Embodied Conversational Agents (ECAs) are emerging as natural media for interacting with machines. Applications are numerous, and ECAs can reduce the technological gap between people by providing user-friendly interfaces. Yet ECAs are still unable to produce social signals appropriately during their interaction with humans, which tends to make the interaction less natural. In particular, very little attention has been paid to the use of laughter in human-avatar interactions, despite the crucial role laughter plays in human-human interaction. In this paper, a method for predicting the most appropriate moment for an ECA to laugh is proposed. Imitation learning via a structured classification algorithm is used for this purpose and is shown to produce behavior similar to humans' in a practical application: the yes/no game.
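The abstract casts the problem as imitation learning: a classifier is trained on human demonstrations to map a dialogue state to a laugh/no-laugh decision. The following is a minimal sketch of that idea, not the paper's algorithm; the perceptron learner and the dialogue-state features (partner laughing, silence duration) are illustrative assumptions.

```python
# Sketch: imitation learning cast as binary classification.
# A simple perceptron is trained on hypothetical dialogue-state features,
# each labelled with the human expert's decision: 1 = laugh now, 0 = stay silent.

def train_perceptron(data, labels, epochs=50, lr=0.1):
    """Learn weights w and bias b that mimic the expert's laugh decisions."""
    w = [0.0] * len(data[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(data, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred  # -1, 0, or +1
            # Standard perceptron update: move toward misclassified examples
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    """Return 1 (laugh) if the learned score is positive, else 0 (silent)."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Hypothetical expert demonstrations: features = [partner_laughing, silence_duration]
demos = [([1.0, 0.2], 1), ([1.0, 0.8], 1), ([0.0, 0.1], 0), ([0.0, 0.9], 0)]
X = [x for x, _ in demos]
Y = [y for _, y in demos]
w, b = train_perceptron(X, Y)
```

Structured classification generalizes this binary setup to richer output spaces (e.g. timing over a sequence of dialogue turns), which is the setting the paper evaluates on the yes/no game.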
Main file: supelec887.pdf (1.14 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-01104739, version 1 (19-01-2015)

License

Attribution - NonCommercial - NoDerivatives

Identifiers

  • HAL Id: hal-01104739, version 1

Cite

Bilal Piot, Olivier Pietquin, Matthieu Geist. Predicting when to laugh with structured classification. InterSpeech 2014, Sep 2014, Singapore, Singapore. pp.1786-1790. ⟨hal-01104739⟩
