AI-Empowered Analytics on Joint Attention 

In a related study, we applied AI-empowered analytics to joint attention analysis. This research examines how students use joint attention, operationalized as synchronized gaze detection, during collaborative learning. Using machine learning models such as YOLOv7 for gaze detection and object detection, we analyze video recordings to capture moment-by-moment gaze targets and detected objects. The method achieves a precision level of 60% to 89%.

How do students apply joint attention (synchronized gaze detection) in collaborative learning?
    • Machine learning models: gaze detection, object detection (YOLOv7)
    • Input: video file
    • Output: moment-by-moment gaze detection and object detection
    • Precision level: 60%–89%
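The pipeline above outputs a gaze target per student per frame, which can then be labeled by joint-attention type. The sketch below is a minimal, hypothetical illustration of that labeling step (the gaze labels, category names, and classification rule are simplifying assumptions, not the study's actual implementation; in practice the labels would come from the YOLOv7 detections):

```python
from collections import Counter

# Hypothetical per-frame gaze-target labels for a dyad, one list per student.
# Placeholder data; in the study these come from YOLOv7-based detection.
student_a = ["own_screen", "own_screen", "partner_screen", "partner_face"]
student_b = ["own_screen", "own_screen", "partner_screen", "own_screen"]

def classify_joint_attention(gaze_a, gaze_b):
    """Label each frame with a simplified joint-attention type:
    'common'  - both students attend to their own screens (Common Attention),
    'mutual'  - both students attend to the partner's screen or face
                (Mutual Attention),
    'none'    - otherwise (no joint attention under this simplified rule)."""
    partner_targets = {"partner_screen", "partner_face"}
    labels = []
    for a, b in zip(gaze_a, gaze_b):
        if a == "own_screen" and b == "own_screen":
            labels.append("common")
        elif a in partner_targets and b in partner_targets:
            labels.append("mutual")
        else:
            labels.append("none")
    return labels

labels = classify_joint_attention(student_a, student_b)
print(Counter(labels))  # frame counts per joint-attention type
```

Aggregating these per-frame labels over a session yields the group-level attention proportions that the analysis compares.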


    This study reports that higher-performing groups tend to pay more joint attention to their own screens (referred to as Common Attention) than lower-performing groups. Conversely, lower-performing groups focus more on their partner's screen or face (referred to as Mutual Attention). ANOVA results indicate significant differences in joint attention behaviors between higher- and lower-performing groups. These findings underscore the role of joint attention in collaborative learning and show how it varies with group performance. Understanding these patterns can help educators design interventions to improve collaborative learning outcomes.

    • Higher-performing groups tended to pay more joint attention to their own screens (Common Attention) than lower-performing groups;
    • Lower-performing groups paid more attention to their partner's screen or face (Mutual Attention) than higher-performing groups.

    Lyu, Q., Chen, W., Wang, X., Su, J., & Heng, K. H. J. G. (2024). Investigating Collaborators' Joint Attention in Tech-rich Learning Environment - A Deep Learning-based Analysis Approach. Education and Information Technologies (under review).