Quantifying cross-species primate facial cues
POSTER
Abstract
Faces are central to primate social communication, conveying emotions, social cues, and individual identity. The advent of deep learning has heralded a new era of robust, high-throughput video-based facial analysis; however, non-human primates present unique challenges due to morphological variability and environmental complexity. Overcoming these challenges requires a dataset devoted to cross-species primate faces on which to train deep learning models for high-throughput face analysis. Here, we introduce PrimateFace, the first large-scale, cross-species dataset of primate images, annotated with face bounding boxes and facial landmarks and capturing a diverse array of settings, facial expressions, developmental stages, and social interactions. Of the 500,000 images, 5,000 are further annotated for face recognition and facial action units. Using a self-supervised approach, we first pretrained models on unlabeled PrimateFace images, then fine-tuned the representations on smaller annotated subsets for supervised facial analysis tasks. We contribute notebook tutorials with fine-tuned transformer and lightweight models, including the self-supervised DINO and efficient MobileViT architectures, demonstrating competitive performance and generalizability across primate species and tasks. PrimateFace opens new avenues for cross-species facial analysis and new opportunities to study primate social cognition.
* This work was supported by the NIH (R37MH109728) and the NRSA T32 NIDCD-NIH Training Grant in Audition and Communication (CANAC).
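To make the pretrain-then-fine-tune recipe described in the abstract concrete, the sketch below loads a DINO self-supervised ViT backbone from the official torch.hub entry and attaches a small regression head for facial landmark prediction. This is a minimal sketch, not the released tutorial code: the landmark count, batch, and training data are hypothetical placeholders.

```python
import torch
import torch.nn as nn

# Load a DINO self-supervised ViT-S/16 backbone from the official torch.hub entry.
backbone = torch.hub.load("facebookresearch/dino:main", "dino_vits16")

NUM_LANDMARKS = 68  # hypothetical count; set to the dataset's landmark scheme


class LandmarkRegressor(nn.Module):
    """DINO backbone plus a linear head predicting (x, y) per landmark."""

    def __init__(self, backbone, embed_dim=384, num_landmarks=NUM_LANDMARKS):
        super().__init__()
        self.backbone = backbone
        self.num_landmarks = num_landmarks
        self.head = nn.Linear(embed_dim, num_landmarks * 2)

    def forward(self, images):
        feats = self.backbone(images)  # (B, embed_dim) CLS-token embedding
        return self.head(feats).view(-1, self.num_landmarks, 2)


model = LandmarkRegressor(backbone)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.MSELoss()

# One fine-tuning step on a dummy batch standing in for annotated face crops,
# with landmark coordinates normalized to [0, 1].
images = torch.randn(8, 3, 224, 224)
targets = torch.rand(8, NUM_LANDMARKS, 2)
loss = criterion(model(images), targets)
loss.backward()
optimizer.step()
```

The same backbone could be reused for the other annotated tasks (face detection, recognition, action units) by swapping the head; freezing the backbone and training only the head is a common lighter-weight variant of this recipe.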
Presenters
- Felipe Parodi (University of Pennsylvania)
Authors
- Felipe Parodi (University of Pennsylvania)
- Jordan K Matelsky (University of Pennsylvania)
- Michael L Platt (University of Pennsylvania)
- Konrad P Kording (University of Pennsylvania)