Show simple item record

dc.contributor.author: O'Dwyer, Jonny
dc.contributor.author: Murray, Niall
dc.contributor.author: Flynn, Ronan
dc.date.accessioned: 2019-05-09T09:01:34Z
dc.date.available: 2019-05-09T09:01:34Z
dc.date.copyright: 2018-02
dc.date.issued: 2018
dc.identifier.citation: O'Dwyer, J., Murray, N., Flynn, R. (2018). Affective computing using speech and eye gaze: a review and bimodal system proposal for continuous affect prediction. arXiv preprint arXiv:1805.06652.
dc.identifier.other: Software Research Institute - Articles
dc.identifier.uri: https://research.thea.ie/handle/20.500.12065/2680
dc.description.abstract: Speech has been a widely used modality in the field of affective computing. Recently, however, there has been growing interest in multi-modal affective computing systems, which incorporate both verbal and non-verbal features for affective computing tasks. Such systems are advantageous for emotion assessment of individuals in audio-video communication environments such as teleconferencing, healthcare, and education. A review of the literature shows that eye gaze extracted from video has remained a largely unexploited modality for continuous affect prediction. This work presents a review of the literature on the emotion classification and continuous affect prediction sub-fields of affective computing for both the speech and eye gaze modalities. Additionally, continuous affect prediction experiments using speech and eye gaze are presented. A baseline system built with open-source software is proposed, and its performance is assessed on a publicly available audio-visual corpus. System performance is further assessed in a cross-corpus, cross-lingual experiment. The experimental results suggest that eye gaze is an effective supportive modality for speech in a bimodal continuous affect prediction system: adding eye gaze to speech in a simple feature-fusion framework yields prediction improvements of 6.13% for valence and 1.62% for arousal.
dc.format: PDF
dc.language.iso: en
dc.rights: Attribution 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/
dc.subject: Human-computer interaction
dc.subject: Artificial intelligence
dc.subject: User interfaces (Computer systems)
dc.title: Affective computing using speech and eye gaze: a review and bimodal system proposal for continuous affect prediction.
dc.type: info:eu-repo/semantics/article
dc.description.peerreview: yes
dc.identifier.orcid: https://orcid.org/0000-0002-5919-0596
dc.rights.access: Open Access
dc.rights.access: http://creativecommons.org/licenses/by/4.0/
dc.rights.accessrights: info:eu-repo/semantics/openAccess
dc.subject.department: Software Research Institute AIT
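
The abstract describes a bimodal system in which speech and eye gaze features are combined by simple feature fusion for continuous affect prediction. The following is a minimal sketch of feature-level (early) fusion under stated assumptions: the feature dimensionalities, the random placeholder data, the scikit-learn regressor, and the CCC evaluation metric are illustrative choices, not the toolchain or model reported in the paper.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVR

def ccc(x, y):
    # Concordance correlation coefficient, a metric commonly used in the
    # continuous affect prediction literature (not named in this record).
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (x.var() + y.var() + (mx - my) ** 2)

rng = np.random.default_rng(0)

# Hypothetical pre-extracted per-frame features: acoustic descriptors for
# speech and gaze direction/movement descriptors from video. Shapes and
# label ranges are placeholder assumptions.
n_frames = 1000
speech_feats = rng.normal(size=(n_frames, 88))  # assumed dimensionality
gaze_feats = rng.normal(size=(n_frames, 12))    # assumed dimensionality
valence = rng.uniform(-1, 1, size=n_frames)     # continuous gold labels

# Early fusion: concatenate the two modalities' feature vectors per frame,
# then train a single regressor on the fused representation.
fused = np.hstack([speech_feats, gaze_feats])
model = make_pipeline(StandardScaler(), LinearSVR())
model.fit(fused, valence)
print("train CCC:", ccc(valence, model.predict(fused)))

In practice, the fused predictor would be compared against a speech-only baseline on held-out data to quantify the contribution of the gaze modality, which is how a relative improvement such as the reported 6.13% for valence would be measured.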



Except where otherwise noted, this item's license is described as Attribution 4.0 International.