We are interested in how we see faces, how we process that information, and how we ultimately use it to guide our social interactions with others.
Our primary focus is on gender and sexuality categorisation – in other words, how we assign people to sexuality groups using visual cues. One obvious route is intentional signalling, such as wearing clothing or hairstyles that are culturally attributed to a group. However, many studies have shown that we can also use more subtle biological cues to make these decisions accurately, including the structure and movement of facial features.
We are currently using sophisticated computational techniques to investigate the role of facial expression and movement in categorising sexuality and gender. By transferring motion from one face to another, we can separate facial structure from facial movement. Importantly, the PCA-based technique we use describes faces quantifiably, allowing a detailed analysis of the components of facial movement.
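To illustrate the general idea of a PCA-based, quantifiable description of facial movement (this is only a minimal NumPy sketch on synthetic stand-in data, not our actual pipeline; the frame count, landmark count, and variable names are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 200 video frames of 10 facial landmarks (x, y),
# flattened to one 20-dimensional vector per frame.
n_frames, n_landmarks = 200, 10
frames = rng.normal(size=(n_frames, n_landmarks * 2))

# Centre the frames on the mean face and find principal components via SVD.
mean_face = frames.mean(axis=0)
centred = frames - mean_face
U, S, Vt = np.linalg.svd(centred, full_matrices=False)

# Each frame is now described by a set of component scores: a compact,
# quantifiable representation of that frame's deviation from the mean face.
scores = centred @ Vt.T            # (n_frames, 20) projections onto components
explained = S**2 / np.sum(S**2)    # fraction of movement variance per component

# A frame can be approximated from only its leading components, so a few
# numbers per frame capture most of the facial movement.
k = 5
approx = mean_face + scores[:, :k] @ Vt[:k]
```

Because the components are shared across faces, motion described in this space can in principle be analysed, compared, or reapplied to a different face.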
Motion transferred from my own face to a virtual ‘avatar’ face. This face has never existed in reality; it is a combination of multiple real faces and therefore has no true identity.