*Apologies if you receive multiple copies of this message.
Dear researchers in related fields,
This is Higashinaka from Nagoya University.
I would like to announce a special issue of the journal Advanced Robotics (Taylor & Francis),
"Multimodal Processing and Robotics for Dialogue Systems".
The submission deadline is 31 January 2023.
https://think.taylorandfrancis.com/special_issues/advanced-robotics-multimo…
https://www.rsj.or.jp/content/files/pub/ar/CFP/CFP_37_21.pdf
We look forward to your submissions.
If you know anyone who might be interested,
please feel free to forward this email.
Thank you very much.
/////////////////////////////////////////////////////////////////////////
[Call for Papers]
Advanced Robotics Special Issue on
Multimodal Processing and Robotics for Dialogue Systems
Co-Editors:
Prof. David Traum (University of Southern California, USA)
Prof. Gabriel Skantze (KTH Royal Institute of Technology, Sweden)
Prof. Hiromitsu Nishizaki (University of Yamanashi, Japan)
Prof. Ryuichiro Higashinaka (Nagoya University, Japan)
Dr. Takashi Minato (RIKEN/ATR, Japan)
Prof. Takayuki Nagai (Osaka University, Japan)
Publication in Vol. 37, Issue 21 (Nov 2023)
SUBMISSION DEADLINE: 31 Jan 2023
In recent years, as seen in smart speakers such as Google Home and
Amazon Alexa, there has been remarkable progress in spoken dialogue
system technology that converses with users through human-like utterances.
In the future, such dialogue systems are expected to support our daily
activities in various ways. However, dialogue in daily activities is
more complex than that with smart speakers; even with current spoken
dialogue technology, it is still difficult to maintain a successful
dialogue in various situations. For example, in customer service
through dialogue, operators need to respond appropriately to the
different ways of speaking and the requests of various customers. In
such cases, we humans can switch our manner of speaking depending on
the type of customer, and can carry out the dialogue successfully by
using not only our voice but also our gaze and facial expressions.
This type of human-like interaction is far from possible with
existing spoken dialogue systems. Humanoid robots have the potential
to realize such interaction, because they can recognize not only the
user's voice but also facial expressions and gestures using various
sensors, and can express themselves in various ways, such as through
gestures and facial expressions, using their bodies. These many means
of expression have the potential to sustain dialogue in a manner
different from that of conventional dialogue systems.
The combination of such robots and dialogue systems can greatly expand
the possibilities of dialogue systems while at the same time
presenting a variety of new challenges. Various research and
development efforts are currently underway to address these new
challenges, including the Dialogue Robot Competition at IROS 2022.
In this special issue, we invite a wide range of papers on multimodal
dialogue systems and dialogue robots, their applications, and
fundamental research. Prospective papers are invited to cover,
but are not limited to, the following topics on multimodal
dialogue systems and robots:
*Spoken dialogue processing
*Multimodal processing
*Speech recognition
*Text-to-speech
*Emotion recognition
*Motion generation
*Facial expression generation
*System architecture
*Natural language processing
*Knowledge representation
*Benchmarking
*Evaluation method
*Ethics
*Dialogue systems and robots for competition
Submission:
The full-length manuscript (either a PDF or MS Word file) should be
sent by 31 January 2023 to the office of Advanced Robotics, the
Robotics Society of Japan, through the journal's online submission
system (https://www.rsj.or.jp/AR/submission). Sample manuscript
templates and detailed instructions for authors are available on the
journal's website.