Journal on Multimodal User Interfaces

Papers
(The TQCC of Journal on Multimodal User Interfaces is 8. The table below lists the papers above that threshold, based on CrossRef citation counts [max. 250 papers]. It covers publications from the past four years, i.e., from 2020-05-01 to 2024-05-01.)
Article (Citations)
“Let me explain!”: exploring the potential of virtual agents in explainable AI interaction design (48)
The effects of spatial auditory and visual cues on mixed reality remote collaboration (37)
The combination of visual communication cues in mixed reality remote collaboration (36)
Multimodal interfaces and communication cues for remote collaboration (28)
A survey of challenges and methods for Quality of Experience assessment of interactive VR applications (25)
Psychophysical comparison of the auditory and tactile perception: a survey (24)
A BCI video game using neurofeedback improves the attention of children with autism (24)
Effects of personality traits on user trust in human–machine collaborations (22)
Exploring interaction techniques for 360 panoramas inside a 3D reconstructed scene for mixed reality remote collaboration (21)
Sharing gaze rays for visual target identification tasks in collaborative augmented reality (16)
Words of encouragement: how praise delivered by a social robot changes children’s mindset for learning (15)
A gaze-based interactive system to explore artwork imagery (15)
MUMBAI: multi-person, multimodal board game affect and interaction analysis dataset (14)
Neighborhood based decision theoretic rough set under dynamic granulation for BCI motor imagery classification (12)
Internet-based tailored virtual human health intervention to promote colorectal cancer screening: design guidelines from two user studies (11)
Comparing mind perception in strategic exchanges: human-agent negotiation, dictator and ultimatum games (11)
Non-native speaker perception of Intelligent Virtual Agents in two languages: the impact of amount and type of grammatical mistakes (11)
Developing a scenario-based video game generation framework for computer and virtual reality environments: a comparative usability study (10)
Training public speaking with virtual social interactions: effectiveness of real-time feedback and delayed feedback (9)
Facial expression and action unit recognition augmented by their dependencies on graph convolutional networks (9)
Verbal empathy and explanation to encourage behaviour change intention (9)
Grounding behaviours with conversational interfaces: effects of embodiment and failures (9)
fNIRS-based classification of mind-wandering with personalized window selection for multimodal learning interfaces (9)
Multimodal, visuo-haptic games for abstract theory instruction: grabbing charged particles (8)
Combining haptics and inertial motion capture to enhance remote control of a dual-arm robot (8)
SoundSight: a mobile sensory substitution device that sonifies colour, distance, and temperature (8)
Multimodal analysis of personality traits on videos of self-presentation and induced behavior (8)
An audiovisual interface-based drumming system for multimodal human–robot interaction (8)