Physiological signals measurement and spoofing detection from face video
Thesis event information
Date and time of the thesis defence
Place of the thesis defence
L10, Linnanmaa
Topic of the dissertation
Physiological signals measurement and spoofing detection from face video
Doctoral candidate
Master of Science Zitong Yu
Faculty and unit
University of Oulu Graduate School, Faculty of Information Technology and Electrical Engineering, Center for Machine Vision and Signal Analysis
Subject of study
Computer Science and Engineering
Opponent
Doctor of Philosophy Julian Fierrez, Universidad Autonoma de Madrid
Custos
Academy Professor Guoying Zhao, University of Oulu, Faculty of Information Technology and Electrical Engineering, Center for Machine Vision and Signal Analysis
Physiological signals measurement and spoofing detection from face video
Human faces contain rich biometric and physiological clues, which makes both identity recognition and physiological state monitoring from face videos feasible. On the one hand, subtle color changes in the facial skin reveal information about an individual's heart pulse, which forms the basis of remote photoplethysmography (rPPG) signal measurement. Benefiting from computer vision technology, physiological signals can be reconstructed from face videos under laboratory-controlled conditions. On the other hand, face anti-spoofing (FAS) is vital for biometric security, as face recognition systems are vulnerable to various presentation attacks.
In the first part of this thesis, three end-to-end spatio-temporal methods are presented for reliable rPPG signal recovery. To efficiently exploit contextual clues from both spatial and temporal perspectives, several handcrafted and automatically searched spatio-temporal networks are proposed. Moreover, a negative-Pearson-based temporal loss, cross-entropy-based frequency constraints, and rPPG-related auxiliary supervision (e.g., skin segmentation) are proposed for accurate rPPG signal recovery.
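To give a concrete picture of the temporal constraint mentioned above, the sketch below shows a negative Pearson loss in PyTorch: minimizing 1 − r pushes the predicted rPPG waveform to follow the trend of the reference PPG signal. This is an illustrative sketch, not the thesis code; the tensor shapes and the small epsilon term are assumptions made for the example.

```python
import torch

def neg_pearson_loss(pred: torch.Tensor, gt: torch.Tensor) -> torch.Tensor:
    """Negative Pearson correlation between predicted and ground-truth
    rPPG signals, averaged over the batch. Assumed shapes: (batch, time).

    Minimizing 1 - r aligns the predicted waveform's trend with the
    reference PPG (illustrative sketch of the temporal loss idea).
    """
    # Remove per-sample means so only the waveform shape matters.
    pred = pred - pred.mean(dim=1, keepdim=True)
    gt = gt - gt.mean(dim=1, keepdim=True)
    cov = (pred * gt).sum(dim=1)
    denom = torch.sqrt((pred ** 2).sum(dim=1) * (gt ** 2).sum(dim=1) + 1e-8)
    r = cov / denom
    return (1.0 - r).mean()
```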
In the second part of this thesis, seven deep-learning-based FAS methods are presented to address the problem of learning intrinsic spoof representations, which is crucial for real-world deployment under unseen scenarios and attack types. On the one hand, novel convolutional operators and networks are designed for generalized, lightweight, and multi-modal FAS. On the other hand, several material-based pixel-wise supervision signals (e.g., depth and reflection) are proposed together with an advanced pyramid supervision strategy.
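One way to picture a texture-sensitive convolutional operator of the kind described above is a central-difference-style convolution, sketched below in PyTorch. It blends a vanilla convolution with a term built from local intensity differences around the kernel centre, which emphasizes the fine-grained gradients that help separate spoof materials from live skin. This is a sketch under stated assumptions (the `theta` blending factor and the 3×3 kernel are choices made for the example), not necessarily the exact operator proposed in the thesis.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CentralDifferenceConv2d(nn.Module):
    """Central-difference-style convolution (illustrative sketch).

    Output = conv(x, w) - theta * x(p0) * sum_n w(p_n), i.e. a vanilla
    convolution minus a centre-pixel term weighted by the summed kernel.
    theta = 0 recovers ordinary convolution; theta = 1 keeps only the
    difference term.
    """
    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3, theta: float = 0.7):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size,
                              padding=kernel_size // 2, bias=False)
        self.theta = theta

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.conv(x)
        # 1x1 convolution with the summed kernel acts only on the centre pixel.
        kernel_sum = self.conv.weight.sum(dim=(2, 3), keepdim=True)
        out_center = F.conv2d(x, kernel_sum, padding=0)
        return out - self.theta * out_center
```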
Finally, based on the evidence that spoofs such as 3D face masks cannot present live heart pulses, a novel facial rPPG-based method using a vision transformer is proposed to extract discriminative periodic liveness clues for challenging 3D mask attack detection.
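To make the periodicity argument concrete, the toy function below scores liveness from a recovered rPPG trace by how much spectral energy concentrates around a single heart-rate peak: a live face yields a sharp pulse peak, while a static 3D mask gives a flat, noise-like spectrum. The sampling rate, frequency band, and peak window are assumptions for this sketch; the thesis method instead learns such periodic clues with a vision transformer.

```python
import numpy as np

def periodicity_liveness_score(rppg: np.ndarray, fps: float = 30.0,
                               hr_band=(0.7, 4.0)) -> float:
    """Toy liveness cue: fraction of spectral energy near the dominant
    peak inside the human heart-rate band (0.7-4 Hz, i.e. 42-240 bpm).
    Higher values suggest a live, periodic pulse; lower values suggest
    a mask or other static spoof. Illustrative sketch only.
    """
    sig = rppg - rppg.mean()
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fps)
    power = np.abs(np.fft.rfft(sig)) ** 2
    band = (freqs >= hr_band[0]) & (freqs <= hr_band[1])
    if not band.any() or power[band].sum() == 0:
        return 0.0
    peak_freq = freqs[band][np.argmax(power[band])]
    # Energy within +/- 0.1 Hz of the dominant peak vs. total band energy.
    near_peak = band & (np.abs(freqs - peak_freq) <= 0.1)
    return float(power[near_peak].sum() / power[band].sum())
```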