Seminar by Mr. Xiao Luwei, Post-Doc, CCDS, 16 Sept 2024, CIL Meeting Room (N4-B1A-02)
Title: AesExpert: Towards Multi-modality Foundation Model for Image Aesthetics Perception
Date: 16th September, 2024
Time: 11.30 AM to 12.00 PM
Venue: CIL Meeting Room (N4-B1A-02)
Presenter: Mr. XIAO LUWEI
Abstract: The highly abstract nature of image aesthetics perception (IAP) poses a significant challenge for current multimodal large language models (MLLMs). The lack of human-annotated multi-modality aesthetic data further exacerbates this dilemma, leaving MLLMs short of aesthetics perception capabilities. To address this challenge, we first introduce a comprehensively annotated Aesthetic Multi-Modality Instruction Tuning (AesMMIT) dataset, which serves as the cornerstone for building multi-modality aesthetics foundation models. Specifically, to align MLLMs with human aesthetics perception, we construct a corpus-rich aesthetic critique database with 21,904 diverse-sourced images and 88K human natural-language feedback comments, collected via progressive questions ranging from coarse-grained aesthetic grades to fine-grained aesthetic descriptions. To ensure that MLLMs can handle diverse queries, we further prompt GPT to refine the aesthetic critiques and assemble the large-scale aesthetic instruction tuning dataset, i.e., AesMMIT, which consists of 409K multi-typed instructions that activate stronger aesthetic capabilities. Based on the AesMMIT dataset, we fine-tune open-source general foundation models, obtaining multi-modality Aesthetic Expert models, dubbed AesExpert. Extensive experiments demonstrate that the proposed AesExpert models deliver significantly better aesthetic perception performance than state-of-the-art MLLMs, including the most advanced GPT-4V and Gemini-Pro-Vision.
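As a rough illustration of the instruction-tuning data construction described in the abstract, the Python sketch below shows how a single human critique (a coarse aesthetic grade plus a fine-grained description) might be expanded into multiple instruction-response records. The field names, prompt wording, and the refine_with_gpt placeholder are assumptions for illustration only, not the actual AesMMIT schema or pipeline.

    # Minimal sketch of turning one human aesthetic critique into several
    # instruction-tuning records, in the spirit of the AesMMIT construction.
    # Field names, prompts, and the GPT-refinement stub are hypothetical.
    from dataclasses import dataclass
    from typing import Dict, List

    @dataclass
    class AestheticCritique:
        image_path: str   # path to the rated image
        grade: str        # coarse-grained aesthetic grade, e.g. "good"
        description: str  # fine-grained natural-language critique

    def refine_with_gpt(text: str) -> str:
        """Placeholder for the GPT-based critique refinement step."""
        return text.strip()

    def build_instructions(critique: AestheticCritique) -> List[Dict[str, str]]:
        """Expand one critique into multi-typed instruction/response pairs."""
        refined = refine_with_gpt(critique.description)
        return [
            {   # coarse-grained query
                "image": critique.image_path,
                "instruction": "How would you rate the overall aesthetic quality of this image?",
                "response": f"The aesthetic quality is {critique.grade}.",
            },
            {   # fine-grained query
                "image": critique.image_path,
                "instruction": "Describe the aesthetic strengths and weaknesses of this image.",
                "response": refined,
            },
        ]

    if __name__ == "__main__":
        sample = AestheticCritique(
            image_path="images/0001.jpg",
            grade="good",
            description="Pleasing warm tones and a clear subject, though the horizon is slightly tilted.",
        )
        for record in build_instructions(sample):
            print(record)

In this sketch, each critique yields both a coarse-grained and a fine-grained record, mirroring the progressive questioning described in the abstract; the resulting records would then be aggregated into the instruction-tuning corpus used for fine-tuning.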
Biography: Luwei Xiao is a third-year Ph.D. candidate in the School of Computer Science and Technology at East China Normal University, where he is supervised by Prof. Liang He. He is concurrently a joint Ph.D. student under the supervision of Prof. Erik Cambria at NTU. His research focuses on multi-modal interaction and sentiment analysis. Prior to his doctoral studies, he obtained his M.Eng. degree from South China Normal University in 2021. He has authored several publications in respected journals and conference proceedings, including IPM and ICME. He has also served as a peer reviewer for top-tier conferences such as AAAI and ACM Multimedia.