Dear members of the Database Society of Japan,
This is Morishima from the University of Tsukuba. I am forwarding the CFP for DASFAA 2026.
DASFAA is an international conference with deep ties to the Japanese database community.
We would be grateful if you would consider submitting. Thank you very much.
Atsuyuki Morishima
We are pleased to announce the Call for Papers for the 31st
International Conference on Database Systems for Advanced
Applications (DASFAA 2026), which will be held in Jeju Island, South
Korea.
DASFAA is a leading international conference that brings together
researchers, practitioners, and developers to discuss the latest
innovations in database systems and advanced applications. Its
long-standing history has established it as a premier venue for
research in database technologies, data science, and AI-enabled
applications. We invite high-quality, original research contributions
covering a broad range of topics in these areas.
Topics of interest include (but are not limited to):
Query processing, indexing, and storage
ML/AI for databases and data intelligence
Graph, vector, and temporal databases
Cloud, distributed, and HTAP systems
Data science, LLMs, retrieval-augmented systems
Privacy, security, and applications in healthcare, bioinformatics, and beyond
Important Dates (All deadlines are 23:59 AoE):
[Research Track]:
Abstract Submission: October 20, 2025
Paper Submission: October 27, 2025
Notification: January 19, 2026
Camera-Ready Submission: February 14, 2026
[Industry Track]:
Paper Submission: November 3, 2025
Notification: January 19, 2026
Camera-Ready Submission: February 14, 2026
For full details, including submission instructions, formatting
requirements, and the COI policy, please visit:
https://dasfaa2026.github.io/DASFAA2026/calls/research-papers.html
https://dasfaa2026.github.io/DASFAA2026/calls/industry-papers.html
We look forward to your submissions and hope to see you in beautiful
Jeju Island!
Best regards,
Jaeyoung Do (Seoul National University)
Yi Cai (South China University of Technology)
Publicity Co-Chairs, DASFAA 2026
========================
--
Atsuyuki Morishima <morishima-office(a)ml.cc.tsukuba.ac.jp>
Dean, Institute of Library, Information and Media Science
Leader of the Crowd-in-the-Loop Society Research Team, Institute of
Library, Information and Media Science
Center for Artificial Intelligence Research, University of Tsukuba
iSchools Asia Pacific Regional Chair and Executive Committee Member
https://fusioncomplab.org/people/atsuyuki/
The Crowd4U Initiative
http://crowd4u.org
ISeee Project
http://crowd4u.org/projects/iseee
My working hours may not be your working hours. Please do not feel
obliged to reply outside your normal work schedule.
Dear members of the Database Society of Japan,
This is Komamizu from Nagoya University.
MUWS 2025, an international workshop on multimodal human understanding,
is soliciting research on multimodal information on the Web, social media, and news media.
* Track 1: Human-centred multimodal understanding
(emotion, impressions, bias, semiotic applications, etc.)
* Track 2: Multimodal analysis of impactful world events
(comparison of cultural, emotional, and social impact; datasets provided)
Submission deadline: July 11, 2025 (AoE)
Workshop date: October 27, 2025 (Dublin, Ireland)
Submission format: English, ACM style, double-blind (2-4 pages or 8 pages)
Details and submission page: https://muws-workshop.github.io/
If you are interested, please consider submitting.
-----------------------------------------------------------------------------------------------------------------------------
CALL FOR PAPERS
MUWS 2025 - The 4th International Workshop on Multimodal Human
Understanding for the Web and Social Media
co-located with ACM Multimedia 2025 in Dublin, Ireland.
October 27, 2025, Dublin, Ireland
More Info: https://muws-workshop.github.io/
-----------------------------------------------------------------------------------------------------------------------------
* Aim and Scope
Multimodal human understanding and analysis is an emerging research
area that cuts across several disciplines, including Computer Vision,
Natural Language Processing (NLP), Speech Processing, Human-Computer
Interaction, and Multimedia. Several multimodal learning techniques
have recently shown the benefit of combining multiple modalities in
image-text, audio-visual and video representation learning and various
downstream multimodal tasks. At the core, these methods focus on
modelling the modalities and their complex interactions by using large
amounts of data, different loss functions and deep neural network
architectures. However, for many Web and social media applications,
there is a need to model the human, including an understanding of
human behaviour and perception. For this, it becomes important to
consider interdisciplinary approaches, including social sciences,
semiotics and psychology. The core is understanding various
cross-modal relations, quantifying biases such as social bias, and
assessing the applicability of models to real-world problems. Interdisciplinary
theories such as semiotics or gestalt psychology can provide
additional insights and analysis on perceptual understanding through
signs and symbols via multiple modalities. In general, these theories
provide a compelling view of multimodality and perception that can
further expand computational research and multimedia applications on
the Web and Social media.
The theme of the MUWS workshop, multimodal human understanding,
includes various interdisciplinary challenges related to social bias
analyses, multimodal representation learning, detection of human
impressions or sentiment, hate speech, sarcasm in multimodal data,
multimodal rhetoric and semantics, and related topics. The MUWS
workshop will be an interactive event and include keynotes by relevant
experts, poster and demo sessions, research presentations and
discussion.
* Track 1: Human-Centred Multimodal Understanding
This track aims to attract researchers working on multimodal
understanding (NLP, CV, Digital Humanities, and other related fields)
with a focus on human-centred aspects. We seek original
application-oriented, theoretical, and position papers that bridge
text and multimedia data. This track covers novel research that
targets (but is not limited to) the following topics of interest:
* Multimodal modelling of human impressions in the context of the Web
and social media
* Incorporating multi-disciplinary theories such as semiotics or
Gestalt-theory into multimodal approaches and analyses
* Human-centred aspects in Vision and Language models
* Measuring and analysing cultural, social and multilingual biases in
the context of the Web and social media
* Cross-modal and semantic relations in multimodal web data
* Multimodal human perception understanding
* Multimodal sentiment/emotion/sarcasm recognition
* Multimodal hate speech detection
* Multimodal misinformation detection
* Multimodal content understanding and analysis
* Multimodal rhetoric in online media
* Track 2: Multimodal Understanding Through Impactful World Events
The goal of this track is to provide a dataset that facilitates the
development of AI solutions for relevant and impactful research
questions, and to bring together researchers working on related
topics such as multimedia and multimodal AI. For this purpose, we
release news and social media data with both image and text related to
events that attracted global impact, e.g., the 2024 United States
presidential election, the 2025 German federal election, or the
DeepSeek.AI R1 model & stock market crash. The datasets cover
multimodal content in various languages published in different regions
of the world. This allows the study of how the same event is portrayed
in countries with different cultural, economic, and regional
backgrounds. To foster research, we will provide various research
questions along with the dataset which include, but are not limited
to:
* Geographical Proximity: How can news values in multimodal news, such
as the location of an event, affect human perception?
* Multimodal Cultural Bias: How are world-wide events perceived across
different cultures or languages?
* Framing of Elites: How do multimodal framing techniques employed by
news outlets differ in their portrayal of elite figures, e.g.,
politicians during major electoral events?
* Sentiment across Cultures: How does the sentiment expressed in news
articles and social media posts, in both textual and visual modalities,
vary across different countries covering the same event?
* Societal Impact: How do world-wide events affect the masses with
regard to potential perceived consequences, and what is the role of
each data modality in that perception?
The goal for participants is to develop novel research ideas based on
the datasets, without requiring them to compete against each other.
Each submitted work is expected to target some of the research
questions while studying a unique aspect of the problem using one of
the datasets below:
* Dataset 1: Tweets posted by news companies around the world about
the Ukraine-Russia Conflict.
* Dataset 2: News with bias categories (left, right, center) for mainstream events
More info can be found here: https://muws-workshop.github.io/cfp/
* Submission Instructions:
We welcome contributions of short (2-4 pages) and long (8 pages)
papers (plus an unlimited number of pages for references) that
address the topics of interest.
* Long research papers should present complete work with evaluations
on related topics.
* Short research papers should present preliminary results or more
focused contributions. We also welcome 2-page reports for work focused
on Track 2.
Papers should follow the ACM Proceedings style. All submissions must
be written in English, must be formatted according to the proceedings
style, and should adhere to double-blind review requirements.
Submission Page:
https://openreview.net/group?id=acmmm.org/ACMMM/2025/Workshop/MUWS
*Important Dates:
Submission deadline: July 11th, 2025 (Anywhere on Earth)
Paper notification: July 24th, 2025
Camera ready: August 3rd, 2025
* Organizing Committee
Sherzod Hakimov, University of Potsdam, Germany
Marc A. Kastner, Hiroshima City University, Japan
Eric Müller-Budack, TIB - Leibniz Information Centre for Science and
Technology, Germany
David Semedo, NOVA University of Lisbon, Portugal
Takahiro Komamizu, Nagoya University, Japan
Contact
All questions about the workshop should be emailed to:
muws-workshop AT listserv.dfn.de
--
Takahiro Komamizu
Ph.D. in Engineering
Nagoya University
E-mail: taka-coma(a)acm.org
Dear members of the Database Society of Japan,
(Please accept our apologies if you receive multiple copies of this message.)
This is Yamada from Scalar, Inc.
I am sending the call for poster presentations for xSIG 2025.
xSIG 2025 will be held on August 6 as part of SWoPP 2025, which takes place August 4-6 in Takamatsu, Kagawa Prefecture.
Please make use of it as a venue for presenting early-stage research, as well as for introducing previously published work or research projects.
Presentations of work scheduled for oral presentation at xSIG or at the SWoPP SIG sessions are also welcome.
We look forward to many presentations. Please consider submitting.
xSIG 2025 Call for Posters
========================================
https://xsig.ipsj.or.jp/2025/
Call for Posters
--------------------
xSIG 2025 invites poster presentations covering the broad, cross-SIG range of fields of all sponsoring and co-sponsoring SIGs.
- Research at any stage, including new research concepts, problem statements, and early-stage work aimed at future development
- Research to be presented orally at xSIG 2025 or at the SWoPP 2025 SIG sessions
- Previously published research
- Introductions of research and development projects
We plan to present awards to students for outstanding poster presentations.
(Authors who have already received an award for an xSIG 2025 oral presentation paper are ineligible.)
We welcome presentations from researchers of all career stages, not only students and young researchers.
We also encourage authors presenting orally at xSIG 2025 or at the SWoPP 2025 SIG sessions to present posters; we hope the poster session will foster in-depth discussion building on the oral presentations.
However, if the number of poster submissions is large, we may give priority to posters that are NOT presented orally at xSIG 2025 or the SWoPP 2025 SIG sessions, in order to provide presentation opportunities to more people, and decline some posters. We may also decline presentations outside the fields related to xSIG. Thank you for your understanding.
The poster session will be held in person at the xSIG/SWoPP venue.
Submission
--------------------
Submissions are made via Google Forms (https://forms.gle/pojFwz2eFZt6c4Vo6).
Only a title, author information, and an abstract (about 200-400 characters in Japanese or 100-200 words in English) are required; no paper or extended abstract is needed.
The title and author information will be listed on the xSIG website.
Presentation
--------------------
Please present your poster in the xSIG 2025 poster session.
You will attach your poster (up to A0 size is assumed) with thumbtacks to a board provided at the venue.
For a view of the poster venue, please see:
https://r15296411.theta360.biz/t/9258586a-0f9b-11ec-98da-063b2b63adb9-1
(We plan to use both the Communication Plaza and the Citizens' Gallery.)
We will not officially distribute electronic versions of the posters, but we plan to provide a space on the SWoPP Slack where presenters can upload them.
Poster Schedule
--------------------
- Submission deadline: 17:00 JST, Monday, June 30, 2025
- Notification: around Tuesday, July 1, 2025
- Poster session: Wednesday, August 6, 2025
========================================
xSIG 2025 Call for Posters
========================================
https://xsig.ipsj.or.jp/2025/
Call for Posters
--------------------
xSIG 2025 solicits poster presentations from the wide, cross-SIG
range of fields of all sponsors and co-sponsors of xSIG.
- research at any stage, such as new concepts and problem statements
- research presented orally at xSIG 2025 or SWoPP 2025
- research previously published elsewhere
- introductions of research and development projects
We are planning to give poster awards to students for outstanding
poster presentations. (Authors already awarded for oral presentation
papers at xSIG 2025 are ineligible for the poster awards.) We hope
that researchers of all career stages, not only students and young
researchers, will present here. xSIG 2025 encourages authors of oral
presentations at xSIG 2025 and SWoPP 2025 to introduce their research
again in the poster session for in-depth discussion.
If the number of submissions exceeds the available capacity, we may
decline some posters corresponding to oral presentation papers at xSIG
and SWoPP in order to provide presentation opportunities for more
people. We may also decline posters outside the scope of xSIG's
interests. The poster session will be held only on-site at the
xSIG/SWoPP venue.
Submission
--------------------
Enter the author information, title, and abstract of your poster. The
abstract should be around 200-400 characters (in Japanese) or 100-200
words (in English). The title and author(s) will be listed on the xSIG
website.
Submission System
--------------------
Poster submission page
https://forms.gle/pojFwz2eFZt6c4Vo6
Poster Presentation
--------------------
Present your poster in the xSIG 2025 poster session. You will need to
attach your poster (maximum size A0) to the board provided at the
venue using thumbtacks.
Please also refer to the following page to see what the poster venue
will look like:
https://r15296411.theta360.biz/t/9258586a-0f9b-11ec-98da-063b2b63adb9-1
While we will not officially provide electronic versions of the
posters, we plan to offer a space on the SWoPP Slack where presenters
can upload their posters.
Poster schedule
--------------------
- Submission deadline: 17:00, June 30, 2025 (JST)
- Author notification: after July 1, 2025
- Poster session: August 6, 2025
--
Hiroyuki Yamada <hiroyuki.yamada(a)scalar-labs.com>
CTO at Scalar, Inc.