Dear DBJAPAN members,
Regarding the ABC2024 challenge on activity recognition from nursing behavior videos: the registration deadline is March 6.
We have already received plenty of registrations, but we would like to announce it once more.
*** Call For Challenge Participants ***
Dear Researchers/Authors,
You are invited to participate in the 6th ABC Challenge (
https://abc-research.github.io/challenge2024), in conjunction with the 6th
International Conference on Activity and Behavior Computing at Nakatsu and
Kitakyushu, Japan (Hybrid), on May 28 - 31, 2024.
The 6th ABC Challenge is the Activity Recognition of Nurse Training
Activity Using Skeleton and Video Dataset with Generative AI. The activity
types are the individual actions of endotracheal suctioning.
https://abc-research.github.io/challenge2024/
The dataset provides skeleton data for training and testing, and video
data for training only.
Because of camera placement limitations, many of the skeleton samples
capture only part of the body.
Participants are required to recognize the activities from the skeleton
data, using generative AI or LLMs (hereafter, generative AI) in a creative
way.
Because the data were collected in a practical experiment, camera locations
were constrained (e.g., to avoid showing faces, and by the size of the
room), so not all body parts could be captured, and many skeleton samples
have missing body parts. In addition, generative AI has been a hot topic in
recent years, and its momentum continues to grow. To explore its potential
in the field of activity recognition, participants are required to utilize
generative AI in a creative way.
Endotracheal suctioning (ES) is a necessary practice carried out in
settings such as intensive care units, or even at home by a patient's
family. It involves the removal of pulmonary secretions from a patient with
an artificial airway in place. The procedure is associated with
complications and risks, including bleeding and infection. There is
therefore a need for an activity recognition system that can help ensure
patient safety and let nurses reflect on and improve their skills while
they conduct this complicated procedure. Activity recognition can help
nurses better manage and raise the quality of their work, and can be used
to evaluate their performance during ES. Recognizing the activities is the
first step toward determining the order of actions and assessing a nurse's
skills.
CHALLENGE GOAL & TASK
The goal of this challenge is to recognize 9 activities in endotracheal
suctioning (ES) using skeleton data for training/testing and video data for
training only. Participants are required to use generative AI in a creative
way. For evaluation, we will consider the F1 score and the paper contents;
specifically, we will consider the average F1 score over all subjects.
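As an illustration of the evaluation, the following sketch computes a macro F1 score per subject and then averages over subjects. The call does not spell out the exact averaging scheme, so treat the per-subject macro averaging here as an assumption, not the organizers' official metric.

```python
def macro_f1(y_true, y_pred, labels):
    """Macro-averaged F1 over the given label set."""
    f1s = []
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

def average_f1(per_subject, labels=range(9)):
    """per_subject: dict mapping subject id -> (y_true, y_pred) lists.
    Returns the mean of each subject's macro F1 over the 9 ES activity ids."""
    labels = list(labels)
    scores = [macro_f1(t, p, labels) for t, p in per_subject.values()]
    return sum(scores) / len(scores)
```

A leaderboard-style script would call `average_f1` on the predictions for the held-out test subjects.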
The data we provide are part of the dataset used in our previous work,
entitled “Toward Recognizing Nursing Activity in Endotracheal Suctioning
Using Video-based Pose Estimation” [1]. The authors of that work proposed
an algorithm to define and track the main subject, and mitigated
missing-keypoint problems caused by the pose estimation algorithm by
smoothing the keypoints.
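To make the missing-keypoint problem concrete, here is a hypothetical sketch that fills gaps in one keypoint coordinate track by linear interpolation across frames. This only illustrates the kind of smoothing the referenced work describes; it is not the authors' actual algorithm, and the `None`-for-missing representation is an assumption.

```python
def interpolate_track(track):
    """track: list of floats or None (one keypoint coordinate per frame).
    Returns a copy with None values filled by linear interpolation."""
    out = list(track)
    known = [i for i, v in enumerate(out) if v is not None]
    if not known:
        return out  # nothing observed; leave the track as-is
    for i in range(len(out)):
        if out[i] is not None:
            continue
        prev = max((k for k in known if k < i), default=None)
        nxt = min((k for k in known if k > i), default=None)
        if prev is None:          # before the first observation: hold value
            out[i] = out[nxt]
        elif nxt is None:         # after the last observation: hold value
            out[i] = out[prev]
        else:                     # in between: linear interpolation
            w = (i - prev) / (nxt - prev)
            out[i] = out[prev] * (1 - w) + out[nxt] * w
    return out
```

In practice this would be applied per keypoint and per coordinate (x and y) before feeding the skeleton sequence to a recognizer.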
[1] Hoang Anh Vy Ngo, Quynh N Phuong Vu, Noriyo Colley, Shinji Ninomiya,
Satoshi Kanai, Shunsuke Komizunai, Atsushi Konno, Misuzu Nakamura, Sozo
Inoue: “Toward Recognizing Nursing Activity in Endotracheal Suctioning
Using Video-based Pose Estimation”, The 5th International Conference on
Activity and Behavior Computing, 2023, (Germany).
DATASET OVERVIEW
There are two types of data in this dataset:
Video: recorded from in front of the nurse, across the patient mannequin.
The video will be released for training only.
Pose skeleton (keypoints): extracted from the videos using YOLOv7. These
data will be released for both training and testing.
There are a total of 9 activities in the ES procedure. The activity types
and their ids are listed below:
{ 0: “Catheter preparation”, 1: “Temporal removement of an artificial
airway”, 2: “Suctioning phlegm”, 3: “Refitting the artificial airway”, 4:
“Catheter disinfection”, 5: “Discarding gloves”, 6: “Positioning”, 7:
“Auscultation”, 8: “Others” }
IMPORTANT DATES
Registration closes: Mar 6, 2024
Submission of results: Mar 20, 2024
Submission of paper: Mar 27, 2024
Review sent to participants: Apr 10, 2024
Camera-ready papers: Apr 17, 2024
Conference: May 29 - 31, 2024
PRIZES
The winning team will be awarded 100,000 JPY.
The registration fee for the 1st and 2nd runner-up teams will be waived.
Each participating team will receive a participation certificate.
ORGANIZERS
Haru Kaneko, Kyushu Institute of Technology
Anh Vy Ngô, Kyushu Institute of Technology
Ryuya Munemoto, Kyushu Institute of Technology
Iqbal Hassan, Kyushu Institute of Technology
Tahera Hossain, Aoyama Gakuin University
Sozo Inoue, Kyushu Institute of Technology
FAQ
Submit your questions to abc2024(a)sozolab.jp with the subject *Challenge
title*
Dear members of the Database Society of Japan,
This is Oguchi from Ochanomizu University.
As already announced, DASFAA 2024 will be held on July 2-5, 2024 at the
Nagaragawa Convention Center in Gifu City.
Below is the call for tutorial proposals:
https://www.dasfaa2024.org/call-for-tutorial-proposals/
The submission deadline is set for April 1, so if you have a tutorial
proposal in mind, please consider submitting.
Best regards.
-------------------------------------------------
The 29th International Conference on Database Systems for Advanced
Applications (DASFAA 2024)
July 2-5, 2024, Gifu, Japan
CALL FOR TUTORIAL PROPOSALS
https://www.dasfaa2024.org/call-for-tutorial-proposals/
The goal of the DASFAA 2024 tutorial track is to offer the conference
attendees an introduction to the state-of-the-art topics in research,
development and applications in database technologies and advanced data
management, processing and analysis systems. We invite tutorial proposals
from active researchers, engineers, and practitioners on topics of
potential interest to attendees of the main conference, such as big data.
Note that a tutorial should not focus only on the presenters' previous
work, but should also give a good review of all significant work in the
topic area. Each tutorial will be 1.5 hours long.
Important Dates
All deadlines are 23:59 Anywhere on Earth (AoE) time.
Proposal submission due: April 1, 2024
Acceptance notification: April 15, 2024
Instructions for submission
Tutorial proposals should be submitted electronically, in PDF format (max.
4 pages, LNCS style), through the CMT system by April 1, 2024 (23:59
Anywhere on Earth (AoE) time). Each proposal should include the following
information.
- Tutorial title
- Contact information of the presenters: names, affiliations, addresses,
and email addresses
- Motivation for the tutorial
- Brief outline of the tutorial content
- Specific goals and objectives
- Whether the tutorial was presented elsewhere in the past (and, if so,
what the attendance was)
- Significant references of the tutorial
- Short biographies of the presenters
Tutorial proposals will be reviewed by the organizers based on the
importance of the topic and the balance between coverage and depth of
content. Acceptance notifications will be sent by April 15,
2024. Tutorials will be presented during the main DASFAA-2024 conference
as parallel sessions. The tutorial slides will be made available on the
conference website. The conference registration fee will be waived for
tutorial speakers. An honorarium will also be paid for the tutorial
presenter(s). The topics of accepted tutorials and the names of the
presenters will be included in the conference proceedings.
Contact Information
If you have any questions, please feel free to contact us at
dasfaa2024tutorial(a)db.is.i.nagoya-u.ac.jp.
Tutorial Chairs
- Masato Oguchi, Ochanomizu University, Japan
- Wenjie Zhang, University of New South Wales, Australia
- RAGE Uday Kiran, University of Aizu, Japan
===============================================
Masato OGUCHI, Dr.Eng.
Department of Information Sciences
Ochanomizu University
https://www.is.ocha.ac.jp/~oguchi/
https://www.is.ocha.ac.jp/~oguchi_lab/
http://ogl.is.ocha.ac.jp/
oguchi(a)is.ocha.ac.jp