※Please accept our apologies if you receive multiple copies of this message.
Dear colleagues,
This is Higashinaka from Nagoya University.
The submission deadline for the previously announced Advanced Robotics
special issue on "Multimodal Processing and Robotics for Dialogue Systems"
was originally set to January 31, 2023. In response to numerous requests
for an extension and other inquiries, we have extended the deadline by one
month.
Submission deadline: February 28, 2023
Special issue URL:
https://think.taylorandfrancis.com/special_issues/advanced-robotics-multimo…
We look forward to your submissions. Thank you very much.
/////////////////////////////////////////////////////////////////////////
[Call for Papers]
Advanced Robotics Special Issue on
Multimodal Processing and Robotics for Dialogue Systems
Co-Editors:
Prof. David Traum (University of Southern California, USA)
Prof. Gabriel Skantze (KTH Royal Institute of Technology, Sweden)
Prof. Hiromitsu Nishizaki (University of Yamanashi, Japan)
Prof. Ryuichiro Higashinaka (Nagoya University, Japan)
Dr. Takashi Minato (RIKEN/ATR, Japan)
Prof. Takayuki Nagai (Osaka University, Japan)
Publication in Vol. 37, Issue 21 (Nov 2023)
SUBMISSION DEADLINE: 28 Feb 2023
In recent years, as seen in smart speakers such as Google Home and Amazon
Alexa, there has been remarkable progress in spoken dialogue system
technology for conversing with users through human-like utterances. In the
future,
such dialogue systems are expected to support our daily activities in
various ways. However, dialogue in daily activities is more complex than
that with smart speakers; even with current spoken dialogue technology, it
is still difficult to maintain a successful dialogue in various
situations. For example, in customer service conducted through dialogue,
operators must respond appropriately to the different speaking styles and
requests of various customers. In such cases, we humans can adjust our
manner of speaking depending on the type of customer and carry out the
dialogue successfully by using not only our voice but also our gaze and
facial expressions.
This type of human-like interaction is far beyond the reach of existing
spoken dialogue systems. Humanoid robots have the potential to realize
such interaction because they can recognize not only the user's voice
but also facial expressions and gestures using various sensors, and can
express themselves in various ways, such as through gestures and facial
expressions, using their bodies. These rich means of expression may allow
them to sustain dialogue in a manner different from conventional
dialogue systems.
The combination of such robots and dialogue systems can greatly expand the
possibilities of dialogue systems while at the same time presenting a
variety of new challenges. Various research and development efforts are
currently underway to address these challenges, including the "Dialogue
Robot Competition" at IROS2022.
In this special issue, we invite a wide range of papers on multimodal
dialogue systems and dialogue robots, their applications, and fundamental
research. Prospective contributions may cover, but are not limited to, the
following topics on multimodal dialogue systems and robots:
*Spoken dialogue processing
*Multimodal processing
*Speech recognition
*Text-to-speech
*Emotion recognition
*Motion generation
*Facial expression generation
*System architecture
*Natural language processing
*Knowledge representation
*Benchmarking
*Evaluation method
*Ethics
*Dialogue systems and robots for competition
Submission:
The full-length manuscript (either a PDF or MS Word file) should be sent
by 28 Feb 2023 to the office of Advanced Robotics, the Robotics Society of
Japan, through the journal's online submission system
(https://www.rsj.or.jp/AR/submission). Sample manuscript templates and
detailed instructions for authors are available on the journal's website.
Note that the word count includes references; captions and author
biographies are not included.
For special issues, longer papers can be accepted with the editors'
approval. Please contact the editors before submission if your manuscript
exceeds the word limit.