Abstract: Understanding how humans observe and interpret actions is vital for social interaction. Point-light displays (PLDs), which depict actions using only joint movements, are widely used to study this process. Recently, PLAViMoP — an open-access database of 3D PLDs covering everyday actions, fine-motor skills, sports movements, facial expressions, social interactions, and robotic actions — was introduced to facilitate the use of PLDs. PLAViMoP includes a search engine and metadata for each sequence, including movement type, label, actor sex, and age. Complementing the database, we present here a novel methodology that integrates eye-tracking data into the PLD reference frame, allowing gaze behavior and action kinematics to be analyzed jointly in a unified dataset. This combined approach offers new insights into action perception and has broad applications in health, sports, and occupational settings. It also offers a promising tool for continuous psychophysical studies of biological-motion perception.
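The core idea — expressing gaze and joint kinematics in one reference frame and one table — can be sketched as follows. This is a minimal illustration under assumed data layouts (timestamped gaze samples and per-frame 2D joint positions); it is not the PLAViMoP file format or the authors' actual pipeline. The function name and arguments are hypothetical.

```python
# Hedged sketch: resample gaze samples onto PLD frame timestamps so gaze
# behavior and joint kinematics can be analyzed jointly in one dataset.
# Data layout is assumed for illustration, not taken from PLAViMoP.
import numpy as np

def align_gaze_to_pld(gaze_t, gaze_xy, pld_t, joints_xy):
    """Interpolate gaze onto PLD frame times and find the nearest joint.

    gaze_t    : (G,)      gaze sample timestamps (s)
    gaze_xy   : (G, 2)    gaze positions in PLD/screen coordinates
    pld_t     : (F,)      PLD frame timestamps (s)
    joints_xy : (F, J, 2) 2D joint positions per frame
    Returns per-frame gaze, nearest-joint index, and gaze-to-joint distance.
    """
    # Linearly interpolate each gaze coordinate at the PLD frame times.
    gx = np.interp(pld_t, gaze_t, gaze_xy[:, 0])
    gy = np.interp(pld_t, gaze_t, gaze_xy[:, 1])
    gaze = np.stack([gx, gy], axis=1)                          # (F, 2)
    # Euclidean distance from gaze to every joint in every frame.
    dist = np.linalg.norm(joints_xy - gaze[:, None, :], axis=2)  # (F, J)
    nearest = dist.argmin(axis=1)
    return {"t": pld_t, "gaze": gaze, "nearest_joint": nearest,
            "dist": dist[np.arange(len(pld_t)), nearest]}

# Toy usage: two static joints; gaze moves from joint 0 toward joint 1.
gaze_t = np.array([0.0, 0.1, 0.2])
gaze_xy = np.array([[0.0, 0.0], [0.5, 0.5], [1.0, 1.0]])
pld_t = np.array([0.0, 0.2])
joints_xy = np.array([[[0.0, 0.0], [1.0, 1.0]],
                      [[0.0, 0.0], [1.0, 1.0]]])
out = align_gaze_to_pld(gaze_t, gaze_xy, pld_t, joints_xy)
```

Once aligned this way, the resulting table can support the kinds of continuous psychophysical analyses the abstract alludes to, such as measuring which joint the gaze tracks during an action.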