Dear members of the Database Society of Japan,
This is Komamizu from Nagoya University.
MUWS 2025, an international workshop on the theme of multimodal human
understanding, is calling for research on multimodal information from the
Web, social media, and news media.
* Track 1: Human-Centred Multimodal Understanding
(emotions, impressions, bias, semiotic applications, etc.)
* Track 2: Multimodal Analysis of Impactful World Events
(comparison of cultural, emotional, and social impact; datasets provided)
Submission deadline: July 11, 2025 (AoE)
Workshop date: October 27, 2025 (Dublin, Ireland)
Submission format: English, ACM style, double-blind (2-4 or 8 pages)
Details and submission page:
https://muws-workshop.github.io/
If you are interested, please consider submitting.
-----------------------------------------------------------------------------------------------------------------------------
CALL FOR PAPERS
MUWS 2025 - The 4th International Workshop on Multimodal Human
Understanding for the Web and Social Media
co-located with ACM Multimedia 2025 in Dublin, Ireland.
October 27, 2025, Dublin, Ireland
More Info:
https://muws-workshop.github.io/
-----------------------------------------------------------------------------------------------------------------------------
* Aim and Scope
Multimodal human understanding and analysis is an emerging research
area that cuts across several disciplines, such as Computer Vision,
Natural Language Processing (NLP), Speech Processing, Human-Computer
Interaction, and Multimedia. Several multimodal learning techniques
have recently shown the benefit of combining multiple modalities in
image-text, audio-visual and video representation learning and various
downstream multimodal tasks. At the core, these methods focus on
modelling the modalities and their complex interactions by using large
amounts of data, different loss functions and deep neural network
architectures. However, many Web and social media applications also
require modelling the human, including an understanding of human
behaviour and perception. For this, it becomes important to consider
interdisciplinary approaches, drawing on the social sciences,
semiotics, and psychology. The core challenges are understanding
various cross-modal relations, quantifying biases such as social
biases, and assessing the applicability of models to real-world
problems. Interdisciplinary
theories such as semiotics or Gestalt psychology can provide
additional insights into perceptual understanding through signs and
symbols across multiple modalities. In general, these theories
provide a compelling view of multimodality and perception that can
further expand computational research and multimedia applications on
the Web and social media.
The theme of the MUWS workshop, multimodal human understanding,
includes various interdisciplinary challenges related to social bias
analyses, multimodal representation learning, detection of human
impressions or sentiment, hate speech, and sarcasm in multimodal data,
multimodal rhetoric and semantics, and related topics. The MUWS
workshop will be an interactive event and include keynotes by relevant
experts, poster and demo sessions, research presentations and
discussion.
* Track 1: Human-Centred Multimodal Understanding
The goal of this track is to attract researchers working on multimodal
understanding topics (in NLP, CV, Digital Humanities, and other related
fields) with a focus on human-centred aspects. We seek original
application-oriented and theoretical papers, as well as position papers,
that bridge text and multimedia data. This track will cover novel
research that targets (but is not limited to) the following topics of
interest:
* Multimodal modelling of human impressions in the context of the Web
and social media
* Incorporating multi-disciplinary theories such as semiotics or
Gestalt theory into multimodal approaches and analyses
* Human-centred aspects in Vision and Language models
* Measuring and analysing cultural, social and multilingual biases in
the context of the Web and social media
* Cross-modal and semantic relations in multimodal web data
* Multimodal human perception understanding
* Multimodal sentiment/emotion/sarcasm recognition
* Multimodal hate speech detection
* Multimodal misinformation detection
* Multimodal content understanding and analysis
* Multimodal rhetoric in online media
* Track 2: Multimodal Understanding Through Impactful World Events
The goal of this track is to provide a dataset that facilitates the
development of AI solutions for relevant and impactful research
questions in order to bring together researchers working on similar
topics such as multimedia and multimodal AI. For this purpose, we
release news and social media data with both images and text related to
events that had a global impact, e.g., the 2024 United States
presidential election, the 2025 German federal election, or the
DeepSeek.AI R1 model & stock market crash. The datasets cover
multimodal content in various languages published in different regions
of the world. This allows the study of how the same event is portrayed
in countries with different cultural, economic, and regional
backgrounds. To foster research, we will provide various research
questions along with the datasets, which include, but are not limited
to:
* Geographical Proximity: How can news values in multimodal news, such
as the location of an event, affect human perception?
* Multimodal Cultural Bias: How are world-wide events perceived across
different cultures or languages?
* Framing of Elites: How do multimodal framing techniques employed by
news outlets differ in their portrayal of elite figures, e.g.,
politicians during major electoral events?
* Sentiment across Cultures: How does the sentiment expressed in news
articles and social media posts, in both textual and visual modalities,
vary across different countries covering the same event?
* Societal Impact: How do world-wide events affect the masses with
regard to potential perceived consequences, and what is the role of
each data modality in that perception?
The goal for participants is to develop novel research ideas based on
the datasets, without being required to compete against each other.
Each submitted work is expected to target some of the research
questions while studying a unique aspect of the problem using one of
the datasets below:
* Dataset 1: Tweets posted by news companies around the world about
the Ukraine-Russia Conflict.
* Dataset 2: News bias categories (left, right, center) for mainstream events
More info can be found here:
https://muws-workshop.github.io/cfp/
* Submission Instructions:
We welcome contributions for short (2-4 pages) and long (8 pages)
papers (plus an unlimited number of pages for references) that
address the topics of interest.
* Long research papers should present complete work with evaluations
on related topics.
* Short research papers should present preliminary results or more
focused contributions; we also welcome 2-page reports for work focused
on Track 2.
Papers should follow the ACM Proceedings style. All submissions must
be written in English, must be formatted according to the proceedings
style, and should adhere to double-blind review requirements.
Submission Page:
https://openreview.net/group?id=acmmm.org/ACMMM/2025/Workshop/MUWS
* Important Dates:
Submission deadline: July 11th, 2025 (Anywhere on Earth)
Paper notification: July 24th, 2025
Camera ready: August 3rd, 2025
* Organizing Committee
Sherzod Hakimov, University of Potsdam, Germany
Marc A. Kastner, Hiroshima City University, Japan
Eric Müller-Budack, TIB - Leibniz Information Centre for Science and
Technology, Germany
David Semedo, NOVA University of Lisbon, Portugal
Takahiro Komamizu, Nagoya University, Japan
Contact
All questions about the workshop should be emailed to:
muws-workshop AT listserv.dfn.de
--
Takahiro Komamizu
Ph.D. in Engineering
Nagoya University
E-mail: taka-coma(a)acm.org