Dear members of the Database Society of Japan,
This is Sakai from Waseda University.
I would like to share a call for participation in the NTCIR-17 FairWeb-1 task (a web search task that considers group fairness).
The task registration deadline is April 15. We hope you will consider participating.
Japanese-language information (note that the dates etc. predate the latest revision) is available here:
https://waseda.box.com/webdb2022fairweb1
https://waseda.box.com/webdb2022fairweb1slides
***
SECOND CALL FOR TASK PARTICIPATION: NTCIR-17 FairWeb-1 Task (Registration due April 15, 2023 AoE)
http://sakailab.com/fairweb1/
# Overview
Intro slide deck: https://waseda.box.com/fairweb1intro2023feb
The FairWeb-1 task is a new English web search task that considers both relevance and group fairness.
Each of our search topics seeks information about a specific entity type: R (researchers), M (movies), or Y (YouTube content); accordingly, we have three topic types:
R-topics (e.g., information retrieval researchers)
M-topics (e.g., Daniel Craig 007 movies)
Y-topics (e.g., Coldplay covers on YouTube)
For evaluating group fairness, we consider the following attribute sets containing either ordinal or nominal groups:
R-topics: h-index (ordinal) AND gender (nominal)
M-topics: #ratings on IMDb (ordinal) AND geographic region (nominal)
Y-topics: #subscribers of the YouTube account (ordinal)
For R- and M-topics, we will evaluate intersectional group fairness.
For example, for each R-topic, we want the search engine result page (SERP)
to contain information about researchers with varying levels of h-index (not just high h-index people), AND with different genders (not just "men").
# Task input/output
INPUT:
- a search topic (R, M, or Y)
- attribute set(s) and target distribution(s)
OUTPUT:
- a TREC-style run (a SERP for each topic)
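For readers unfamiliar with the format: a TREC-style run lists one ranked document per line in the standard six-column layout "topic Q0 docid rank score runtag". A minimal sketch of writing such a file in Python (the topic IDs, document IDs, scores, and run tag below are hypothetical placeholders, not actual FairWeb-1 identifiers):

```python
# Write a minimal TREC-style run file: one line per retrieved document,
# six whitespace-separated columns: topic Q0 docid rank score runtag.
# All IDs and scores here are made up for illustration.

ranked_results = {
    "R001": [("doc-aaa", 12.3), ("doc-bbb", 11.7), ("doc-ccc", 9.4)],
    "M002": [("doc-ddd", 8.8), ("doc-eee", 7.5)],
}

def write_trec_run(results, run_tag, path):
    with open(path, "w") as f:
        for topic_id, docs in results.items():
            # Documents for a topic must appear in decreasing score order;
            # ranks start at 1.
            ordered = sorted(docs, key=lambda d: -d[1])
            for rank, (doc_id, score) in enumerate(ordered, start=1):
                f.write(f"{topic_id} Q0 {doc_id} {rank} {score} {run_tag}\n")

write_trec_run(ranked_results, "MYRUN-1", "myrun1.txt")
```

The second column is the literal string "Q0" (a historical artifact of the TREC format), and the run tag in the last column identifies the submitted system.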
# Evaluation method
We will use the Group Fairness and Relevance (GFR) framework: please see the slide deck for details.
# Timeline
February 1-March 10, 2023 Topic development
February 15, 2023 Pilot relevance assessments for the sample topics and a few pilot runs released
March 15, 2023 Topics released
April 15, 2023 Task registrations due
May 16, 2023 Run submissions due
May 17-July 31, 2023 Entity annotations; runs evaluated
August 1, 2023 Evaluation results and draft overview released
September 1, 2023 Draft participant papers due
November 1, 2023 Camera ready papers due
December 2023 NTCIR-17@NII, Tokyo, Japan
# Inquiries:
fairweb1org(a)list.waseda.jp
# Organisers:
Sijie Tao, Nuo Chen, Tetsuya Sakai (Waseda University, Japan)
Zhumin Chu (Tsinghua University, P.R.C.)
Nicola Ferro (University of Padua, Italy)
Maria Maistro (University of Copenhagen, Denmark)
Ian Soboroff (NIST, USA)
Hiromi Arai (RIKEN AIP, Japan)
###
Professor Tetsuya Sakai (tetsuya(a)waseda.jp)
Department of Computer Science and Engineering, Waseda University
http://sakailab.com/tetsuya/
* Apologies if you receive multiple copies of this message.
Dear researchers,
This is Higashinaka from Nagoya University.
The submission deadline for the previously announced Advanced Robotics special issue on
"Multimodal Processing and Robotics for Dialogue Systems" was January 31, 2023;
in response to numerous extension requests and inquiries, we have extended the deadline by one month.
Submission deadline: February 28, 2023
Special issue URL:
https://think.taylorandfrancis.com/special_issues/advanced-robotics-multimo…
We look forward to your submissions.
/////////////////////////////////////////////////////////////////////////
[Call for Papers]
Advanced Robotics Special Issue on
Multimodal Processing and Robotics for Dialogue Systems
Co-Editors:
Prof. David Traum (University of Southern California, USA)
Prof. Gabriel Skantze (KTH Royal Institute of Technology, Sweden)
Prof. Hiromitsu Nishizaki (University of Yamanashi, Japan)
Prof. Ryuichiro Higashinaka (Nagoya University, Japan)
Dr. Takashi Minato (RIKEN/ATR, Japan)
Prof. Takayuki Nagai (Osaka University, Japan)
Publication in Vol. 37, Issue 21 (Nov 2023)
SUBMISSION DEADLINE: 28 Feb 2023
In recent years, as seen in smart speakers such as Google Home and Amazon
Alexa, there has been remarkable progress in spoken dialogue system
technology that converses with users through human-like utterances. In the future,
such dialogue systems are expected to support our daily activities in
various ways. However, dialogue in daily activities is more complex than
that with smart speakers; even with current spoken dialogue technology, it
is still difficult to maintain a successful dialogue in various
situations. For example, in customer service through dialogue, operators
must respond appropriately to the varied speaking styles and requests of
different customers. In such cases, we humans can switch our manner of
speaking depending on the type of customer, and can carry out the dialogue
successfully by using not only our voice but also our gaze and facial
expressions.
This type of human-like interaction is far from possible with the existing
spoken dialogue systems. Humanoid robots have the possibility to realize
such an interaction, because they can recognize not only the user's voice
but also facial expressions and gestures using various sensors, and can
express themselves in various ways such as gestures and facial expressions
using their bodies. Their many means of expression have the potential to
sustain dialogue successfully in a manner different from conventional
dialogue systems.
The combination of such robots and dialogue systems can greatly expand the
possibilities of dialogue systems, while at the same time, providing a
variety of new challenges. Various research and development efforts are
currently underway to address these new challenges, including "dialogue
robot competition" at IROS2022.
In this special issue, we invite a wide range of papers on multimodal
dialogue systems and dialogue robots, their applications, and fundamental
research. Prospective contributed papers are invited to cover, but are not
limited to, the following topics on multimodal dialogue systems and robots:
*Spoken dialogue processing
*Multimodal processing
*Speech recognition
*Text-to-speech
*Emotion recognition
*Motion generation
*Facial expression generation
*System architecture
*Natural language processing
*Knowledge representation
*Benchmarking
*Evaluation method
*Ethics
*Dialogue systems and robots for competition
Submission:
The full-length manuscript (either a PDF file or an MS Word file) should be
sent by 28th Feb 2023 to the office of Advanced Robotics, the Robotics
Society of Japan, through the journal's online submission system
(https://www.rsj.or.jp/AR/submission). Sample manuscript templates and
detailed instructions for authors are available on the journal's website.
Note that the word count includes references; captions and author bios are
not included.
For special issues, longer papers can be accepted if the editors approve.
Please contact the editors before the submission if your manuscript exceeds
the word limit.
Dear members of the Database Society of Japan,
This is Onizuka from Osaka University.
A new workshop co-located with SIGMOD,
Simplicity in Management of Data (SiMoD),
is being launched (this is its first edition).
Since the systems presented in recent papers tend to be complex, the workshop
takes the opposite stance: its main aim is to solicit papers on simple yet
important core ideas (simple ideas that work well in practice).
Both idea papers and experiment papers are invited; idea papers in particular
are short, at up to 4 pages.
*Novel Ideas (up to 4 pages):* Papers in this track should present
early-stage, original ideas that were not proposed in the past. The
submission should be concise and distill the core idea on specific
problems being solved, with reasonable evidence (e.g., preliminary
experimental results) showing the idea's applicability. We also
encourage the authors to include the limitations of the proposed ideas.
Submissions in this track should not exceed four pages, including
everything, such as references.
The submission deadline is March 15. Please see the workshop page for details.
https://sfu-dis.github.io/simod/
--
Makoto Onizuka (鬼塚 真)
E-mail: onizuka@ist.osaka-u.ac.jp
TEL: 06-6879-7750 / FAX: 06-6879-7743
URL: http://www-bigdata.ist.osaka-u.ac.jp/
Big Data Engineering Lab (Onizuka Lab), Department of Multimedia Engineering,
Graduate School of Information Science and Technology, Osaka University,
1-5 Yamadaoka, Suita, Osaka 565-0871, Japan