Text-based 3D Animal Fine-Grained Retrieval (TextANIMAR)


TextANIMAR Challenge 

The main objective of this challenge is to advance artificial intelligence research on 3D animal models. The rapid development of 3D technologies has produced a remarkable number of 3D models. Therefore, 3D model retrieval has drawn significant attention and benefits many real-life applications, such as video games, art, film, and virtual reality. Compared with searching for general 3D objects of a given category, fine-grained retrieval of 3D animal models is much more challenging due to the large variation in animal breeds and poses.

This track proposes a realistic and promising setting for fine-grained 3D animal model retrieval: searching for relevant 3D animal models in a dataset using sentences entered by users. This allows users to quickly access 3D models through natural-language descriptions.

Tentative Schedule

Participant Information

Please contact the task organizers with any questions on these points.

Dataset Description

Our TextANIMAR2023 dataset has the following structure:

Submission Instructions

Participants are required to submit a CSV file named <Team Name>_TextANIMAR2023.csv to CodaLab (https://codalab.lisn.upsaclay.fr/competitions/11093). The file must be compressed to submission.zip before being submitted to CodaLab. Each team may make at most 25 submissions.

Given N models and Q queries, each row lists the retrieval results for one query in descending order of relevance. A sample CSV file is shown below, followed by a short sketch of how such a file can be produced.


<Query ID 1>, <Model ID top-1>, <Model ID top-2>, <Model ID top-3>, ..., <Model ID top-N>

<Query ID 2>, <Model ID top-1>, <Model ID top-2>, <Model ID top-3>, ..., <Model ID top-N>

...

<Query ID Q>, <Model ID top-1>, <Model ID top-2>, <Model ID top-3>, ..., <Model ID top-N>
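For reference, here is a minimal Python sketch (not official tooling) that writes a submission file in this format and compresses it into submission.zip. The team name, query IDs, and model IDs are placeholders, and ranked_results stands in for whatever ranked output your retrieval system produces.

import csv
import zipfile

# Placeholder: for each query ID, the list of all N model IDs ranked from
# most to least relevant. Replace with the output of your retrieval system.
ranked_results = {
    "query_001": ["model_042", "model_013", "model_007"],
    "query_002": ["model_007", "model_042", "model_013"],
}

team_name = "MyTeam"  # placeholder team name
csv_path = team_name + "_TextANIMAR2023.csv"

# One row per query: the query ID followed by every model ID in ranked order.
with open(csv_path, "w", newline="") as f:
    writer = csv.writer(f)
    for query_id, ranked_models in ranked_results.items():
        writer.writerow([query_id] + ranked_models)

# Compress the CSV into submission.zip, as required for the CodaLab upload.
with zipfile.ZipFile("submission.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    zf.write(csv_path)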


Each team also needs to send a working notes paper to the organizers (ltnghia@fit.hcmus.edu.vn) by the submission deadline, with the email subject "[TextANIMAR2023] <Team Name> Working notes paper submission".

The working notes paper should be four pages long in two-column IEEE format. You are allowed to add a fifth page that contains only references. Your paper should cite the Challenge Overview paper written by the organizers; it contains all the necessary information on the challenge definition and the dataset, so you do not need to describe the challenge or the dataset again. Instead, you can devote the four pages exclusively to presenting the motivation for your approach, explaining your method, presenting and analyzing your results, and giving an outlook on future work.

Evaluation Methodology

The metrics used for this track are:
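The official metrics and evaluation code are defined by the organizers in the Challenge Overview paper and are not reproduced here. As a purely illustrative sketch, under the assumption that standard ranked-retrieval measures such as precision at k and mean average precision are of interest, the snippet below shows how such measures can be computed from a ranked result list and per-query sets of relevant model IDs. The function names and data layout are assumptions for the example, not the official evaluation.

from typing import Dict, List, Set

def precision_at_k(ranked: List[str], relevant: Set[str], k: int = 10) -> float:
    # Fraction of the top-k retrieved models that are relevant to the query.
    return sum(1 for model_id in ranked[:k] if model_id in relevant) / k

def average_precision(ranked: List[str], relevant: Set[str]) -> float:
    # Average of the precision values at each rank where a relevant model appears.
    hits, precisions = 0, []
    for rank, model_id in enumerate(ranked, start=1):
        if model_id in relevant:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(relevant) if relevant else 0.0

def mean_average_precision(results: Dict[str, List[str]],
                           ground_truth: Dict[str, Set[str]]) -> float:
    # Mean of the per-query average precision over all Q queries.
    return sum(average_precision(results[q], ground_truth[q])
               for q in results) / len(results)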

Leaderboard

Organizers

References

Trung-Nghia Le, Tam V. Nguyen, Minh-Quan Le, Trong-Thuan Nguyen, Viet-Tham Huynh, Trong-Le Do, Khanh-Duy Le, Mai-Khiem Tran, Nhat Hoang-Xuan, Thang-Long Nguyen-Ho, Vinh-Tiep Nguyen, Tuong-Nghiem Diep, Khanh-Duy Ho, Xuan-Hieu Nguyen, Thien-Phuc Tran, Tuan-Anh Yang, Kim-Phat Tran, Nhu-Vinh Hoang, Minh-Quang Nguyen, E-Ro Nguyen, Minh-Khoi Nguyen-Nhat, Tuan-An To, Trung-Truc Huynh-Le, Nham-Tan Nguyen, Hoang-Chau Luong, Truong Hoai Phong, Nhat-Quynh Le-Pham, Huu-Phuc Pham, Trong-Vu Hoang, Quang-Binh Nguyen, Hai-Dang Nguyen, Akihiro Sugimoto, Minh-Triet Tran, "TextANIMAR: Text-based 3D Animal Fine-Grained Retrieval", arXiv preprint arXiv:2304.06053, 2023.

© Copyright Software Engineering Laboratory, University of Science, VNU-HCM, Vietnam