The 18th IEEE Int. Conf. on Advanced Video and Signal-Based Surveillance
Hybrid, 29 November - 2 December 2022
Domain Adaptive Object Detection

Date: Wednesday, November 30

Vishal M. Patel

Vishal M. Patel is an Associate Professor in the Department of Electrical and Computer Engineering (ECE) at Johns Hopkins University. Prior to joining Hopkins, he was an A. Walter Tyson Assistant Professor in the Department of ECE at Rutgers University and a member of the research faculty at the University of Maryland Institute for Advanced Computer Studies (UMIACS). He received his Ph.D. in Electrical Engineering from the University of Maryland, College Park, MD, in 2010. He has received a number of awards, including the 2021 IEEE Signal Processing Society (SPS) Pierre-Simon Laplace Early Career Technical Achievement Award, the 2021 NSF CAREER Award, the 2021 IAPR Young Biometrics Investigator Award (YBIA), the 2016 ONR Young Investigator Award, the 2016 Jimmy Lin Award for Invention, the A. Walter Tyson Assistant Professorship Award, Best Paper Awards at IEEE AVSS 2017 and 2019, IEEE BTAS 2015, IAPR ICB 2018, and IEEE ICIP 2021, and two Best Student Paper Awards at IAPR ICPR 2018. He is an Associate Editor of the IEEE Transactions on Pattern Analysis and Machine Intelligence and the Pattern Recognition Journal, and serves on the Machine Learning for Signal Processing (MLSP) Committee of the IEEE Signal Processing Society. He serves as the Vice President of Conferences for the IEEE Biometrics Council.


Object detection is one of the most fundamental problems in image- and video-based surveillance applications. Recent advances in deep learning have led to the development of accurate and efficient models for object detection. However, learning highly accurate models relies on the availability of large-scale annotated datasets. As a result, model performance drops drastically when models are evaluated on label-scarce datasets with visually distinct images. Domain adaptation aims to mitigate this degradation. In this talk, I will present some of our recent work on domain adaptive object detection. In particular, unsupervised, source-free, and fully test-time adaptive object detection methods will be presented.

Combatting DeepFakes

Date: Thursday, December 1

Siwei Lyu

Siwei Lyu is a SUNY Empire Innovation Professor at the Department of Computer Science and Engineering, the Director of UB Media Forensic Lab (UB MDFL), and the founding Co-Director of the Center for Information Integrity (CII) of the University at Buffalo, State University of New York. Dr. Lyu's research interests include digital media forensics, computer vision, and machine learning. Before joining UB, Dr. Lyu was an Assistant Professor from 2008 to 2014, a tenured Associate Professor from 2014 to 2019, and a Full Professor from 2019 to 2020 at the Department of Computer Science, University at Albany, State University of New York. From 2005 to 2008, he was a Post-Doctoral Research Associate at the Howard Hughes Medical Institute and the Center for Neural Science of New York University. He was an Assistant Researcher at Microsoft Research Asia (then Microsoft Research China) in 2001.

Dr. Lyu has published over 190 refereed journal and conference papers, with more than 13,000 citations and an h-index of 51. He is the recipient of the IEEE Signal Processing Society Best Paper Award (2011), the National Science Foundation CAREER Award (2010), SUNY Albany's Presidential Award for Excellence in Research and Creative Activities (2017), the SUNY Chancellor's Award for Excellence in Research and Creative Activities (2018), a Google Faculty Research Award (2019), and the IEEE Region 1 Technological Innovation (Academic) Award (2021). Dr. Lyu served on the IEEE Signal Processing Society's Information Forensics and Security Technical Committee (2016-2021) and was on the Editorial Board of IEEE Transactions on Information Forensics and Security (2016-2021). Dr. Lyu is a Fellow of the IEEE and the IAPR.


AI techniques, especially deep neural networks (DNNs), have significantly improved the realism of falsified multimedia, with a deeply disconcerting impact on society. In particular, AI-based face forgery, known as DeepFake, is one of the most recent AI techniques to attract increasing attention due to its ease of use and powerful performance. In this talk, I will give an overview of my research group's recent work in DeepFake forensics, including detection, i.e., identifying forged content, and obstruction, i.e., preventing the synthesis of DeepFakes.

Biases in Biometrics Recognition and Multimodal AI

Date: Friday, December 2

Julian Fierrez

Julian Fierrez received the MSc and PhD degrees in telecommunications engineering from Universidad Politecnica de Madrid, Spain, in 2001 and 2006, respectively. Since 2004 he has been with Universidad Autonoma de Madrid, where he has been a Full Professor since 2022. From 2007 to 2009 he was a visiting researcher at Michigan State University in the USA under a Marie Curie fellowship. His research covers signal and image processing, AI fundamentals and applications, HCI, forensics, and biometrics for security and human behavior analysis. He is actively involved in large EU projects on these topics (e.g., BIOSECURE, TABULA RASA, and BEAT in the past; now IDEA-FAST, PRIMA, and TRESPASS-ETN). Since 2016 he has been an Associate Editor of Elsevier's Information Fusion and IEEE Trans. on Information Forensics and Security, and since 2018 also of IEEE Trans. on Image Processing. He has been General Chair of the IAPR Iberoamerican Congress on Pattern Recognition (CIARP 2018) and the Iberian Conference on Pattern Recognition and Image Analysis (IbPRIA 2019). Since 2020 he has been a member of the ELLIS Society. Prof. Fierrez has received best paper awards at AVBPA, ICB, IJCB, ICPR, ICPRS, and Pattern Recognition Letters. He is also the recipient of several world-class research distinctions, including the EBF European Biometric Industry Award 2006, the EURASIP Best PhD Award 2012, a Medal in the Young Researcher Awards 2015 from the Spanish Royal Academy of Engineering, and the Miguel Catalan Award to the Best Researcher under 40 in the Community of Madrid in the general area of Science and Technology. In 2017 he was also awarded the IAPR Young Biometrics Investigator Award, given every two years to a single researcher worldwide under the age of 40 whose research work has had a major impact in biometrics.


In the last few years, the Artificial Intelligence research community has shown growing interest in studying bias effects when machine learning methods are applied to large amounts of data. These bias effects can stem from the data itself or from the learning process, which nowadays is clearly dominated by deep learning methods that are often quite opaque. When such learning processes underlie AI applications that deal with personal information, or whose outcomes affect people's lives, biases can result in unfair AI-based automated decision-making and harmful, undesired discrimination among population groups. This keynote will discuss the current state of the topic, with special emphasis on AI applications involving face biometrics. Recent methods and approaches for reducing undesired discrimination towards fair biometrics will also be discussed.


Universidad Autónoma de Madrid, Escuela Politécnica Superior
Biometrics and Data Pattern Analytics - BiDA-Lab