23rd International Conference of the Biometrics Special Interest Group (BIOSIG 2024)

Date: 26.-27.09.2024
Duration: 10:00-17:00
Location: Fraunhofer IGD
Fraunhoferstraße 5
Darmstadt
This event is recognised as continuing education within the meaning of the T.I.S.P. recertification

Program

Thursday, September 26
09:45
Marta Gomez-Barrero
Universität der Bundeswehr München
BIOSIG Conference Opening
CHAIR: Prof. Dr. Marta Gomez-Barrero
09:55
Anna Stratmann
Bundesamt für Sicherheit in der Informationstechnik (BSI), Germany
KEYNOTE
OFIQ – The way forward in facial image quality assessment

Facial image quality assessment plays a crucial role in biometric authentication across all areas of application. Ensuring the reliability and accuracy of facial images is paramount for these applications to function effectively. By evaluating image quality components, such as pose or focus, we can enhance the performance of facial recognition systems and minimise the risk of errors.

Additionally, in a world where biometric systems increasingly interact with each other, the interoperability of biometric algorithms becomes ever more important, making a vendor-independent approach indispensable. Here, OFIQ, the Open Source Face Image Quality framework, brings a transparent, open-source approach to this important task.
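Individual quality components of the kind OFIQ reports can be computed from simple image statistics. As a toy illustration (this is not OFIQ's actual implementation, which follows the ISO/IEC 29794-5 standard), a Laplacian-variance focus measure might look like:

```python
def sharpness(img):
    """Toy focus measure: variance of a 4-neighbour Laplacian response.
    Low variance suggests a blurred, out-of-focus image. Illustration
    only; OFIQ's real measures follow ISO/IEC 29794-5."""
    h, w = len(img), len(img[0])
    resp = [img[y - 1][x] + img[y + 1][x] + img[y][x - 1] + img[y][x + 1]
            - 4 * img[y][x]
            for y in range(1, h - 1) for x in range(1, w - 1)]
    mean = sum(resp) / len(resp)
    return sum((r - mean) ** 2 for r in resp) / len(resp)

flat = [[128] * 4 for _ in range(4)]                                   # uniform patch
edges = [[255 if (x + y) % 2 else 0 for x in range(4)] for y in range(4)]  # high contrast
```

A uniform patch scores zero; a high-contrast patch scores high, so the value can be thresholded into a pass/fail quality component.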

10:55 Break (30 min)
CHAIR: Fadi Boutros
11:25
Tashvik Dhamija
Centre Inria d'Université Côte d'Azur
Local Distributional Smoothing for Noise-invariant Fingerprint Restoration

Existing fingerprint restoration models fail to generalize to severely noisy fingerprint regions. To achieve noise-invariant fingerprint restoration, this paper proposes to regularize the restoration model by enforcing local distributional smoothing, i.e., by encouraging it to produce similar outputs for clean and perturbed fingerprints. Notably, the perturbations are learnt via virtual adversarial training so as to generate the most difficult noise patterns for the restoration model. The proposed method achieves improved generalization on two publicly available databases of noisy fingerprints.
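The shape of such a smoothing penalty can be sketched in a few lines. The following toy Python sketch uses a stand-in linear "restoration model" and crude random-search perturbations in place of the paper's learnt virtual adversarial ones; all names and values are illustrative:

```python
import random

def restore(x, w):
    # stand-in for a fingerprint restoration network: simple scaling
    return [w * v for v in x]

def dist(a, b):
    return sum((u - v) ** 2 for u, v in zip(a, b))

def lds_penalty(x, w, eps=0.1, trials=16):
    """Penalise how much the model output changes under the worst small
    input perturbation of norm eps. Virtual adversarial training finds
    this direction by power iteration; random search stands in here."""
    clean = restore(x, w)
    worst = 0.0
    for _ in range(trials):
        d = [random.uniform(-1.0, 1.0) for _ in x]
        n = max(sum(v * v for v in d) ** 0.5, 1e-12)
        perturbed = [xv + eps * dv / n for xv, dv in zip(x, d)]
        worst = max(worst, dist(restore(perturbed, w), clean))
    return worst
```

Adding this penalty to the restoration loss pushes the model to give matching outputs for clean and perturbed inputs, which is the noise-invariance the abstract describes.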

CAST Members can download the documents here.
11:45
Barbara de Oliveira Koop
Federal University of Technology - Parana (UTFPR)
Analyzing Similarity and Quality of Neonate Fingerprint Images subject to a Super-Resolution Technique

Conventional biometric recognition systems are of limited use for neonate fingerprints due to their diminutive size, which motivates both hardware and software advances. Super-resolution techniques, for example, are promising alternatives for transforming low-quality fingerprints into images on which recognition algorithms work reasonably well. However, it has remained unclear to what extent super-resolved images preserve fundamental fingerprint characteristics, and hence whether super-resolution could replace high-resolution scanners. In this paper, we explore whether a super-resolution network applied to low-resolution fingerprints attains comparable quality and retains similarity to fingerprints captured with high-resolution scanners. We conducted experiments on four different low-resolution levels. Results reveal that the structural composition, pixel integrity, and NFIQ2 scores of upscaled 1500 ppi images are more similar to those captured with a 3000 ppi scanner than those of the other resolutions, allowing for cost reductions in scanner investments.

12:05
Tugce Arican
University of Twente
Finger Vein Comparison Redefined: Embracing Local Representations for Efficiency

Pose variations in finger vein images present a significant challenge for reliable and accurate finger vein recognition. Although various methods have been proposed to detect and correct these variations, they are often computationally expensive and fail to ensure precise alignment. We propose leveraging local finger vein representations learned by an auto-encoder to enable pose-variation-tolerant comparison without requiring additional alignment steps. Our proposed approach not only decreases the Equal Error Rate on the SD-HMT dataset from 7.0% to 3.52% in cross-dataset comparisons, but also reduces the comparison time for one image pair from 22.1 seconds to 0.64 seconds compared to a pose-correction pipeline. These results show the promise of our approach for real-world applications where efficiency and accuracy are paramount.
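For reference, the Equal Error Rate cited above is the operating point where the false match and false non-match rates coincide. A minimal sketch with made-up comparison scores (not the paper's data):

```python
def eer(genuine, impostor):
    """Scan candidate thresholds and return the mean of FMR and FNMR at
    the point where the two error rates are closest to each other."""
    best_gap, best_rate = 1.0, 1.0
    for t in sorted(set(genuine) | set(impostor)):
        fmr = sum(s >= t for s in impostor) / len(impostor)   # false matches
        fnmr = sum(s < t for s in genuine) / len(genuine)     # false non-matches
        if abs(fmr - fnmr) < best_gap:
            best_gap, best_rate = abs(fmr - fnmr), (fmr + fnmr) / 2
    return best_rate

genuine = [0.9, 0.8, 0.75, 0.6, 0.55]   # mated comparison scores (toy)
impostor = [0.7, 0.5, 0.4, 0.35, 0.3]   # non-mated comparison scores (toy)
```

Lower EER means genuine and impostor score distributions overlap less, which is what the reported drop from 7.0% to 3.52% expresses.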

12:25 Lunch Break (60 min)
CHAIR: Massimiliano Todisco
13:25
Robert Nichols
Hochschule Darmstadt
On the Use of Synthetic Hand Images for Biometric Recognition

Recognition of subjects based on images of their hands does not enjoy the same level of attention as, e.g., face recognition; therefore, scientific studies on this topic are limited. Nevertheless, its importance in forensic scenarios is considerable: investigators often face the difficult task of identifying a suspect with little more than an image depicting a partial hand, i.e., its palmar or dorsal aspect.

However, the large amounts of data needed for robust recognition systems are often unavailable. Recent advancements in the area of generative artificial intelligence have demonstrated impressive capabilities in terms of image fidelity and performance, in turn implying the possibility of substituting or augmenting real datasets with synthetically generated samples. In this paper, we explore generating hand images with latent diffusion models (LDM) conditioned on state-of-the-art hand recognition systems. Our experimental results indicate the possible future viability of generating fully synthetic identity-preserving mated samples. We identify interesting behaviour of the involved algorithms to encourage future work in this area and ultimately facilitate the development of robust, privacy-preserving and unbiased biometric systems.

13:45
Anton Firc
Brno University of Technology
Diffuse or Confuse: A Diffusion Deepfake Speech Dataset

Advancements in artificial intelligence and machine learning have significantly improved synthetic speech generation. This paper explores diffusion models, a novel method for creating realistic synthetic speech. We create a diffusion dataset using available tools and pretrained models. Additionally, this study assesses the quality of diffusion-generated deepfakes versus non-diffusion ones and their potential threat to current deepfake detection systems. Findings indicate that the detection of diffusion-based deepfakes is generally comparable to non-diffusion deepfakes, with some variability based on detector architecture. Re-vocoding with diffusion vocoders shows minimal impact, and the overall speech quality is comparable to non-diffusion methods.

14:05
Junichi Yamagishi
National Institute of Informatics
A Preliminary Case Study on Long-Form In-the-Wild Audio Spoofing Detection

Audio spoofing detection has become increasingly important due to the rise in real-world cases. Current spoofing detectors, referred to as spoofing countermeasures (CM), are mainly trained and focused on audio waveforms with a single speaker and short duration. This study explores spoofing detection in more realistic scenarios, where the audio is long in duration and features multiple speakers and complex acoustic conditions. We test state-of-the-art AASIST under this challenging scenario, looking at the impact of multiple variations such as duration, speaker, and acoustic conditions on CM performance. Our work reveals key issues with current methods and suggests preliminary ways to improve them. We aim to make spoofing detection more applicable in more in-the-wild scenarios. This research is an important step towards creating better systems that can handle the varied challenges of audio spoofing in real-world applications.

14:25 Break (30 min)
CHAIR: Christian Rathgeb
14:55
Chengcheng Liu
Xi'an Jiaotong University
Differential Morphing Attack Detection via Triplet-Based Metric Learning and Artifact Extraction

Face morphing attacks have been demonstrated to pose significant security risks to face recognition systems.

Therefore, developing reliable Morphing Attack Detection (MAD) techniques has recently become an important research priority. This paper considers the failure of regular face recognition systems in handling morphed faces from the perspective of the identity feature space, arguing that the morphed image creates a "pathway" between the two subjects involved in the morphing generation process. This "pathway" causes automatic face recognition systems to misidentify the morphed face as either of the contributing subjects. Based on this insight, and considering the characteristics of the Differential Morphing Attack Detection (D-MAD) scenario (i.e., both the potential morphing attack image and the trusted live face image are accessible), we propose an end-to-end D-MAD solution based on metric learning with triplet feature separation and artifact analysis. The proposed approach aims to break the "pathway" created by the morphed image in the identity feature space and to incorporate latent artifact features of the potential morphed image for D-MAD. Comparative results on different public benchmarks indicate that the proposed solution performs satisfactorily against state-of-the-art algorithms.
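Triplet feature separation of this kind is typically built on the standard triplet margin loss. A minimal sketch with toy embeddings; the exact assignment of live, bona fide, and morphed images to the triplet roles is an assumption here, not taken from the paper:

```python
import math

def l2(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard triplet margin loss: pull the anchor toward the positive
    embedding and push it at least `margin` further from the negative.
    In a D-MAD setting the negative could be the morphed image's identity
    embedding (assumption), which would "break the pathway" by separating
    morphs from bona fide embeddings in feature space."""
    return max(0.0, l2(anchor, positive) - l2(anchor, negative) + margin)
```

The loss is zero once the negative is at least `margin` further from the anchor than the positive, so minimising it enforces the desired separation.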

15:15
Michele Panariello
EURECOM
2D-Malafide: Adversarial Attacks Against Face Deepfake Detection Systems

We introduce 2D-Malafide, a novel and lightweight adversarial attack designed to deceive face deepfake detection systems. Building upon the concept of 1D convolutional perturbations explored in the speech domain, our method leverages 2D convolutional filters to craft perturbations which significantly degrade the performance of state-of-the-art face deepfake detectors. Unlike traditional additive noise approaches, 2D-Malafide optimises a small number of filter coefficients to generate robust adversarial perturbations which are transferable across different face images. Experiments, conducted using the FaceForensics++ dataset, demonstrate that 2D-Malafide substantially degrades detection performance in both white-box and black-box settings, with larger filter sizes having the greatest impact. Additionally, we report an explainability analysis using GradCAM which illustrates how 2D-Malafide misleads detection systems by altering the image areas used most for classification. Our findings highlight the vulnerability of current deepfake detection systems to convolutional adversarial attacks as well as the need for future work to enhance detection robustness through improved image fidelity constraints.
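Mechanically, the attack reduces to convolving the input image with a small learned filter. A minimal sketch of that application step (the adversarial optimisation of the coefficients, which is the paper's contribution, is omitted):

```python
def apply_filter(img, kernel):
    """Filter an image with a small 2D kernel (stride 1, no padding,
    no kernel flip, as is common in deep-learning "convolutions").
    With an identity kernel the border-cropped image passes through
    unchanged; 2D-Malafide instead learns the few coefficients that
    most degrade a deepfake detector's decision."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(img), len(img[0])
    return [[sum(kernel[i][j] * img[y + i][x + j]
                 for i in range(kh) for j in range(kw))
             for x in range(w - kw + 1)]
            for y in range(h - kh + 1)]

identity = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
```

Because only the filter coefficients are optimised rather than per-pixel noise, the same learned kernel transfers across different face images, as the abstract notes.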

15:35
Ana Filipa Sequeira
INESC TEC
An End-to-End Framework to Classify and Generate Privacy-Preserving Explanations in Pornography Detection

The proliferation of explicit material online, particularly pornography, has emerged as a paramount concern in our society. While state-of-the-art pornography detection models already show some promising results, their decision-making processes are often opaque, raising ethical issues. This study focuses on uncovering the decision-making process of such models, specifically fine-tuned convolutional neural networks and transformer architectures. We compare various explainability techniques to illuminate the limitations, potential improvements, and ethical implications of using these algorithms. Results show that models trained on diverse and dynamic datasets tend to have more robustness and generalisability when compared to models trained on static datasets. Additionally, transformer models demonstrate superior performance and generalisation compared to convolutional ones. Furthermore, we implemented a privacy-preserving framework during explanation retrieval, which contributes to developing secure and ethically sound biometric applications.

15:55 Break (30 min)
CHAIRS: Fadi Boutros & Christian Rathgeb
16:25
Poster Session (90 min)

1

Ana-Teodora Radutoiu
Download Poster (only for members)
Download Paper (only for members)

DTU Compute

A Study on the Next Generation of Digital Travel Credentials

2

Junichi Yamagishi

National Institute of Informatics

Exploring Active Data Selection Strategies for Continuous Training in Deepfake Detection

3

Thanh Tung Linh Nguyen

Hochschule Darmstadt

NeutrEx-Light: Efficient Expression Neutrality Estimation for Facial Image Quality Assessment

4

-

-

-

5

Catherine A Jasserand

University of Groningen

Deceptive Deepfakes: Is the Law Coping With AI-Altered Representations of Ourselves?

6

Iluminada Baturone

University of Seville

Exploring Vein Biometrics on Ordinary Smartphones Using CNNs and Transfer Learning with Open and Closed Sets

7

Claudia Franco-Moreno

University of Seville

Combining CRYSTALS-Kyber Homomorphic Encryption with Garbled Circuits for Biometric Authentication

8

Scott DL Wellington

University of Bath

Quantifying Source Speaker Leakage in One-to-One Voice Conversion

9

Kiran Raja

NTNU

Demographic Variability in Face Image Quality Measures

10

Surendra Singh

Clarkson University

Securing Biometric Data: Fully Homomorphic Encryption in Multimodal Iris and Face Recognition

19:00 Barbecue and get-together (120 min)
Friday, September 27

CHAIR: Naser Damer
09:00
Anderson Rocha
University of Campinas (Unicamp), Brazil
KEYNOTE
The Convergence Revolution and Digital Forensics: Exploring the Nexus of Bits, Atoms, Neurons, and Genes

In the rapidly evolving landscape of technology, the convergence of digital realms such as nanotechnology, biotechnology, internet of things, robotics and AI presents both unprecedented opportunities and formidable challenges. This talk aims to delve into the intricate interplay of information technology (Bits), material science (Atoms), neural networks (Neurons), and genetic engineering (Genes). We will explore how this synergistic blend is driving a revolution, reshaping industries, and redefining what it means to be human in the digital age.

As we stand at the crossroads of this technological convergence, our discussion will pivot around critical themes such as trust and security in a digitized world, and the field of digital forensics. How can we build trust in systems that merge human intelligence with artificial intelligence? What are the implications for security when our biological data becomes part of the digital framework? And how does digital forensic investigation evolve as these disparate fields merge?

Join us in this thought-provoking session to unravel the complexities of the convergence revolution. We will not only address the challenges but also highlight the opportunities that lie ahead. Whether you are a tech enthusiast, a professional in the field, or simply intrigued by the future of technology, this talk promises to offer valuable insights and stimulate a rich conversation about our shared digital and physical future.

10:00 Break (30 min)
CHAIR: Kiran Raja
10:30
André Dörsch
Hochschule Darmstadt
Detection and Mitigation of Bias in Under Exposure Estimation for Face Image Quality Assessment

The increasing employment of large-scale biometric systems such as the European "Entry-Exit System" and planned national initiatives such as the "Live Enrolment" procedure require quality assessment algorithms to ensure reliable recognition accuracy. Among other factors, facial image quality, and hence face recognition accuracy, can be negatively impacted by underexposure. Therefore, quality assessment algorithms analyse the exposure of live-captured facial images. To this end, mainly handcrafted measures have been proposed, which are also referenced in current standards. However, this work shows that handcrafted measures, which use basic statistical approaches to analyse facial brightness patterns, exhibit racial bias. It is found that these algorithms disproportionately classify images of black people as underexposed because they do not take into account natural differences in skin color, particularly when relying on average pixel brightness values. To ensure fair biometric quality assessment, we have fine-tuned a data-efficient image transformer (DeiT) on synthetic data, outperforming state-of-the-art exposure measures in recognition accuracy and biometric fairness. We benchmark our proposed model against several algorithms, demonstrating superior performance by achieving the lowest Equal Error Rate (EER) of approximately 7%. Our findings highlight the importance of developing robust and fair biometric classification methods to mitigate discrimination and ensure fair performance for all users, regardless of their skin color.
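The kind of handcrafted measure criticised here can be stated in a few lines. A sketch of such a global-brightness rule; the threshold value is illustrative and not taken from any standard:

```python
def underexposed(face_pixels, threshold=80):
    """Naive handcrafted check: flag the image as underexposed when the
    mean brightness of the face-region pixels falls below a fixed
    threshold (illustrative value). Because this ignores natural
    skin-tone differences, darker skin lowers the mean and triggers
    false "underexposed" flags; this is the bias the paper measures."""
    return sum(face_pixels) / len(face_pixels) < threshold
```

A learned model such as the fine-tuned DeiT can instead condition on the whole facial appearance rather than a single global statistic, which is why it can separate lighting from skin tone.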

10:50
Philipp Srock
Hochschule Darmstadt
Classifying Face Beauty Based on Retouched Images

The use of face image filters that modify facial attractiveness (beauty) has increased considerably in recent years. The result of this process is referred to as a "retouched image," and the perception of less or more beauty has become an interesting research topic. Our assumption is that there is a correlation between beauty scores and how well a machine-learning binary classifier can detect retouching: the more a given filter changes perceived beauty, the easier it is for an AI model to detect. This paper sets out to test these assumptions.

11:10
Lambert A Igene
Clarkson University
Development of Novel 3D Face Masks for Assessment of Face Recognition Systems

Face recognition is integral to identity management and access control systems, but the rise of biometric face recognition has raised concerns about the vulnerability of Presentation Attack Detection (PAD) to various attacks. This paper focuses on developing and evaluating novel, low-cost 3D presentation attack instruments (PAIs) to test the performance of advanced biometric systems. Our PAIs include specialized bobbleheads, white resin 3D-printed masks, white filament 3D masks, masks with image projections, and projections on generic mannequin heads. This project aims to create a diverse, cost-effective PAI dataset to enhance PAD training. We evaluated the effectiveness of these 3D PAIs using three algorithms (Fraunhofer IGD, CLFM, and FaceMe) from the Liveness Detection Face Competition 2021 (LivDet Face, 2021). Additionally, we conducted matching comparisons between PAIs and live samples using the ArcFace face matcher, achieving a True Accept Rate (TAR) of 70% for bobbleheads and 43% for white resin with projection. Our preliminary assessment indicates an Average Classification Error Rate (ACER) of 10.6%, demonstrating the potential of these new PAIs to improve PAD training.
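For context, the reported ACER averages the two presentation attack detection error rates defined in ISO/IEC 30107-3. A minimal sketch with toy decisions (not the paper's data):

```python
def acer(attack_accepted, bonafide_accepted):
    """Average Classification Error Rate per ISO/IEC 30107-3:
    APCER = fraction of attack presentations wrongly accepted as bona fide,
    BPCER = fraction of bona fide presentations wrongly rejected.
    Each list holds booleans meaning "classified as bona fide"."""
    apcer = sum(attack_accepted) / len(attack_accepted)
    bpcer = sum(not d for d in bonafide_accepted) / len(bonafide_accepted)
    return (apcer + bpcer) / 2
```

Averaging the two rates keeps a PAD system from looking good by simply rejecting (or accepting) everything, since each extreme maximises one of the two terms.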

11:30 Lunch Break (60 min)
CHAIR: Ana Filipa Sequeira
12:30
Eman Alomari
Swansea University
Ear-based Person Recognition using Pix2Pix GAN augmentation

This study presents a robust framework that leverages advanced deep-learning techniques for ear-based human recognition. Faced with the challenge of limited dataset sizes, our approach builds on a generative adversarial network (GAN) method, namely Pix2Pix, to augment the dataset. We demonstrate that this approach can produce complementary images for ear recognition. More specifically, Pix2Pix is employed to generate the missing side in ear image pairs (i.e., creating corresponding left-ear images for right-ear images and vice versa). This augmentation can substantially increase the dataset size, making it more diverse and of significantly greater use for training. The employed dataset consists of several right-ear images and only one left-ear image per individual. A series of corresponding synthetic left-ear images is generated with Pix2Pix to augment the available data and mitigate the dataset's lack of left-ear images. The experimental framework uses the EarNet model and conducts comparative evaluations before and after Pix2Pix augmentation on the AMI Ear dataset. By employing Pix2Pix, the proposed approach can effectively double the size of a dataset and thereby significantly increase its utility in real-world application scenarios. The resulting accuracy reaches 98% on the AMI dataset, demonstrating that this technique can improve model performance for ear-based human recognition.

12:50
Afzal Hossain
Clarkson University
Deep Learning Approach for Ear Recognition and Longitudinal Evaluation in Children

Ear recognition as a biometric modality is becoming increasingly popular, with promising broader application areas. While current applications involve adults, one of the challenges in ear recognition for children is the rapid structural changes in the ear as they age. This work introduces a foundational longitudinal dataset collected from children aged 4 to 14 years over a 2.5-year period and evaluates ear recognition performance in this demographic. We present a deep learning based approach for ear recognition, using an ensemble of VGG16 and MobileNet, focusing on both adult and child datasets, with an emphasis on longitudinal evaluation for children.

13:10
Daisuke Imoto
National Institute of Police Science
Running Gait Biometrics at a Distance: A Novel Silhouette/3D Running Gait Dataset in Forensic Scenarios

Gait biometrics have recently been adopted in security systems and forensic applications, but issues stemming from the coexistence of viewing and speed variations have not yet been resolved. In this study, we constructed a novel multi-view and multi-speed-mode (walking and running) silhouette/3D gait dataset, which we call the "NRIPS-run dataset," containing 53 individuals (or identities) derived from treadmill silhouette sequences under four speed conditions (walking: 3 and 5 km/h; running: 7 and 9 km/h). Based on the dataset, cross-view and cross-speed-mode matching including running-speed conditions, as well as state-of-the-art deep learning-based gait recognition methods, were evaluated in both identification and verification scenarios. The analyses revealed that accuracy tended to be slightly higher under running-speed conditions than under walking-speed conditions for same-speed comparisons, whereas cross-view and cross-speed-mode matching remained challenging in both scenarios. Our findings are expected to accelerate further developments in vision-based running gait biometrics.

13:30 Break (30 min)
CHAIR: Marta Gomez-Barrero
14:00
Héctor Delgado
Microsoft
KEYNOTE
How reliable are the ‘99% accuracy’ claims in audio deepfake detection?

Synthetic voices generated by AI, or voice deepfakes, are a serious threat to the security and privacy of people and organizations. Voice spoofing and presentation attacks can impersonate, trick, or influence others, with possibly severe outcomes. Over the last decade, the community has made great progress in detecting synthetic voices, with reported true positive and true negative rates as high as 99%.

However, in recent years some research has begun to question the generalization capability of state-of-the-art spoofing detectors, in particular with respect to perturbations due to real-world or artificial signal manipulation. We have expanded on these early notions by arguing that more realistic datasets, capturing characteristics of real-world use of spoofing technologies, are integral to developing general and robust methods.

In this presentation we will demonstrate with practical examples that this is the case, and that the high accuracy figures reported may give a false sense of protection against audio deepfakes. We will show how the current methodologies on audio deepfake detection research may result in excellent performance on research benchmarks but can completely fail in the wild, and how an improved methodology can help achieve more trustworthy systems in real-life situations.

15:00
Marta Gomez-Barrero
Universität der Bundeswehr München
Awards / Closing Remarks
15:10 End

Information and Contact

If you have any questions please contact:

Moderator

Marta Gomez-Barrero
RI CODE, Universität der Bundeswehr München
Email:

Administration

Simone Zimmermann
CAST e.V.
Tel.: +49 6151 869-230
Email:

Routing

CAST e.V.
Rheinstraße 75
64295 Darmstadt

Upcoming CAST Events

hot topic "EU DORA" 24.10.2024
Cybersicherheit für den Mittelstand 21.11.2024
Forensik / Internetkriminalität 28.11.2024
ID:SMART Workshop 2025 19.-20.02.2025