Seminar Privacy and Security

  • Type: Seminar
  • Chair: KASTEL Strufe
  • Semester: Winter 25/26
  • Lecturers:

    Prof. Dr. Thorsten Strufe
    Patricia Guerra Balboa

  • SWS: 2
  • Course no.: 2400118

Content

The seminar covers current topics from the research field of technical privacy protection.

These include, for example:

  • Anonymous communication
  • Network security
  • Anonymized online services
  • Anonymous digital payment systems
  • Assessing the anonymity of online services
  • Anonymized data publication (differential privacy, k-anonymity)
  • Transparency- and awareness-enhancing systems
  • Support for media literacy
Language of instruction: English

Organisation

In this seminar, students will research one of the given topics (see below) to write a short seminar paper during the semester. At the end, they will present their paper to their peers and engage in discussions about the various topics.

The seminar aims to convey three aspects:

  • Technical knowledge in various areas of security and privacy.
  • Basic skills related to scientific research, paper writing, and scientific style.
  • The basics of the scientific process, including how conferences and peer review work.

Schedule

28.10.2025, 14:00–15:30 (Room 252, 50.34) Introduction (Organization & Topics)
02.11.2025 Topic preferences due
03.11.2025 Topic assignment
04.11.2025, 14:00–15:30 (Room 252, 50.34) Lecture (Basic Skills)
01.02.2026 Paper submission deadline
08.02.2026 Reviews due
15.02.2026 Revision deadline
~ 17.02.2026 Presentations

Registration

Registration is done in ILIAS and will start on October 1, 2025. You can find the link at the top of this page. There will be a limited number of slots available, which will be distributed on a first-come, first-served basis.

Topics

The following is a preliminary list of available topics. Note that the topics will also be introduced during the Introduction session.

#1 A literature review of cross-sensor gait recognition

Supervisor: Julian Todt

Gait, the way we walk, is a highly distinctive biometric trait that has been shown to enable identity inference via a wide range of sensors such as thermal cameras or lidar. The sensor landscape, i.e. which types of sensors are used in various situations, is ever-changing because of new technologies, privacy concerns, and legal requirements. Together with the ever-increasing capabilities of machine learning, particularly in the area of biometric recognition, this prompts questions about the possibility of cross-sensor recognition, where labeled training data from one sensor enables successful recognition on recordings from another sensor.

For this seminar topic, the goal is to review the existing literature on cross-sensor gait recognition. This includes surveying which combinations of sensors have been studied and what efficacy has been demonstrated. Furthermore, the existing approaches should be analyzed and compared.

References:

  • Dongjiang Cao et al. “Cross Vision-RF Gait Re-identification with Low-cost RGB-D Cameras and mmWave Radars”. In: Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 6.3 (Sept. 2022), 102:1–102:25. doi: 10.1145/3550325.
  • Wenxuan Guo et al. “Camera-LiDAR Cross-Modality Gait Recognition”. en. In: Computer Vision – ECCV 2024. Ed. by Aleš Leonardis et al. Cham: Springer Nature Switzerland, 2025, pp. 439–455. isbn: 978-3-031-72754-2. doi: 10.1007/978-3-031-72754-2_25.

#2 Advances in Streaming Data Protection

Supervisor: Patricia Guerra Balboa

As more applications rely on streaming data (from sensor readings and user interactions to financial transactions), ensuring privacy under the constraints of limited memory, one-pass processing, and evolving data becomes a critical challenge. Local Differential Privacy (LDP), which provides strong privacy guarantees without requiring trust in a central curator, brings new complexities when applied to streaming scenarios, such as managing cumulative privacy loss and adapting to changing data distributions.

In this seminar, students will investigate recent advances in LDP for streaming, evaluate the trade-offs between accuracy, privacy, and efficiency, and discuss open questions like optimal composition strategies, adaptive mechanisms, and the design of privacy-preserving algorithms that can handle long-term, dynamic data.
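
To make the LDP setting concrete, the following is a minimal sketch (our own, not taken from the references) of generalized randomized response, a basic LDP frequency-estimation protocol of the kind analyzed in [1]. In a streaming setting, every additional time step a user reports consumes further privacy budget, which is exactly the cumulative-loss problem mentioned above.

```python
import math
import random
from collections import Counter

def krr_perturb(value, domain, epsilon):
    """Generalized randomized response (k-RR): report the true value
    with probability p, otherwise a uniformly random other value."""
    k = len(domain)
    p = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    if random.random() < p:
        return value
    return random.choice([v for v in domain if v != value])

def krr_estimate(reports, domain, epsilon):
    """Debias the noisy report counts into unbiased frequency estimates."""
    k, n = len(domain), len(reports)
    p = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    q = (1 - p) / (k - 1)  # probability of reporting any specific other value
    counts = Counter(reports)
    return {v: (counts[v] - n * q) / (p - q) for v in domain}

domain = ["a", "b", "c", "d"]
true_values = random.choices(domain, weights=[5, 3, 1, 1], k=10_000)
reports = [krr_perturb(v, domain, epsilon=1.0) for v in true_values]
print(krr_estimate(reports, domain, epsilon=1.0))
```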

References:

  • Wang, T., Blocki, J., Li, N., & Jha, S. (2017). Locally differentially private protocols for frequency estimation. In 26th USENIX Security Symposium (USENIX Security 17) (pp. 729-745).
  • Ye, Q., Hu, H., Li, N., Meng, X., Zheng, H., & Yan, H. (2021, May). Beyond value perturbation: Local differential privacy in the temporal setting. In IEEE INFOCOM 2021-IEEE Conference on Computer Communications (pp. 1-10). IEEE.

#3 The Cost of Protecting against Active Adversaries

Supervisor: Daniel Schadt

Mix networks are a common method to implement anonymous communication systems. In a mix network, messages travel along a series of mix nodes, each one shuffling the message batches before the messages continue along their path. However, making such a system secure against adversaries that can drop or delay packets in the network is challenging: An expected packet that does not arrive, or arrives far too late, can already signal a relationship between sender and recipient.
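
As a toy illustration (our own, highly simplified) of why dropped packets are so damaging: even if a cascade of mixes shuffles each batch perfectly, an adversary who drops one sender's packet at the entry can still link that sender to the recipient whose message never arrives.

```python
import random

def mix_cascade(batch, rounds=3):
    """Each mix node shuffles its input batch, hiding which input
    became which output, before forwarding to the next node."""
    out = list(batch)
    for _ in range(rounds):
        random.shuffle(out)
    return out

senders = {"alice": "to_x", "bob": "to_y", "carol": "to_z"}

# Honest run: all messages arrive, and the shuffling hides the mapping.
print(sorted(mix_cascade(list(senders.values()))))

# Active adversary: drop alice's packet at the entry. The recipient
# whose message is now missing is linked to alice, despite the mixing.
tampered = [m for s, m in senders.items() if s != "alice"]
arrived = set(mix_cascade(tampered))
missing = set(senders.values()) - arrived
print("linkable:", missing)  # {'to_x'}: alice was talking to x
```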

To solve this problem, various improvements to mix networks have been proposed: Ando et al. use special "merging" and "checkpoint" onions to protect against packet drops [1]. Furthermore, Bruisable Onions [2] and their successor, Peony Onion Encryption [3], help prevent delay attacks. While these papers analyze the "theoretical" cost of such defense mechanisms, it is unclear what that means in practical terms.

The goal of this seminar is to take a closer look at the proposed protocols and evaluate their performance in a more tangible way. In particular, comparisons should be made to existing practical formats (Sphinx [4]) or systems (Loopix [5], Nym).

References:

  1. Megumi Ando, Anna Lysyanskaya, Eli Upfal. On the Complexity of Anonymous Communication Through Public Networks. ITC 2021. https://doi.org/10.4230/LIPIcs.ITC.2021.9
  2. Megumi Ando, Anna Lysyanskaya, Eli Upfal. Bruisable Onions: Anonymous Communication in the Asynchronous Model. TCC 2024. https://doi.org/10.1007/978-3-031-78011-0_16
  3. Megumi Ando, Miranda Christ, Kashvi Gupta, Tal Malkin, Dane Smith. Full Anonymity in the Asynchronous Setting from Peony Onion Encryption. https://ia.cr/2025/1067
  4. George Danezis, Ian Goldberg. Sphinx: A Compact and Provably Secure Mix Format. SP 2009. https://doi.org/10.1109/SP.2009.15
  5. Ania Piotrowska, Jamie Hayes, Tariq Elahi, Sebastian Meiser, George Danezis. The Loopix Anonymity System. USENIX Security 2017. https://www.usenix.org/system/files/conference/usenixsecurity17/sec17-piotrowska.pdf

#4 Selective Failure Attacks on Private Information Retrieval

Supervisor: Christoph Coijanovic

Private Information Retrieval (PIR) is a useful building block of anonymous communication because it allows a client to retrieve a record from a server's database without the server knowing which record the client wants. Most PIR protocols are vulnerable to selective failure attacks: Suppose the server suspects that a client is interested in the i-th record. If the server corrupts only that record, the client's post-retrieval behavior may reveal whether it really wanted the i-th record (e.g., by re-retrieving due to the corruption).
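
To make this concrete, here is a minimal sketch (our own, simplified) of classic two-server XOR-PIR together with the selective failure signal; names and sizes are illustrative.

```python
import secrets
from functools import reduce

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def answer(db, indices):
    """A server XORs together the requested records; a random index
    set on its own reveals nothing about which record is wanted."""
    return reduce(xor_bytes, (db[j] for j in indices), bytes(len(db[0])))

def retrieve(db1, db2, i, n):
    """Two-server XOR-PIR: random set S to server 1, S xor {i} to server 2.
    All records except i cancel out, so the XOR of the answers is record i."""
    s1 = {j for j in range(n) if secrets.randbits(1)}
    s2 = s1 ^ {i}
    return xor_bytes(answer(db1, s1), answer(db2, s2))

n = 8
db = [secrets.token_bytes(4) for _ in range(n)]

# Selective failure: the servers corrupt only record 5. The corruption
# cancels out of the XOR unless record 5 is the one being retrieved, so
# the client's output is wrong exactly when it wanted record 5; any
# visible reaction (e.g., a retry) then confirms the servers' suspicion.
bad = list(db)
bad[5] = xor_bytes(bad[5], b"\x01\x00\x00\x00")

print("wanted 2, corrupted?", retrieve(bad, bad, 2, n) != db[2])  # False
print("wanted 5, corrupted?", retrieve(bad, bad, 5, n) != db[5])  # True
```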

Recently, there has been much interest in making PIR secure against selective failure attacks [1-5].

The goal of this seminar paper is to understand how selective failure attacks work, under what conditions a PIR scheme is vulnerable to them, and how they can be mitigated.

References:

  1. Colombo, Simone et al. “Authenticated private information retrieval.” IACR Cryptology ePrint Archive (2023).
  2. Dietz, Marian and Stefano Tessaro. “Fully Malicious Authenticated PIR.” IACR Cryptology ePrint Archive (2023).
  3. Wang, Yinghao et al. “Crust: Verifiable And Efficient Private Information Retrieval with Sublinear Online Time.” IACR Cryptology ePrint Archive (2023): 1607.
  4. Castro, Leo de and Keewoo Lee. “VeriSimplePIR: Verifiability in SimplePIR at No Online Cost for Honest Servers.” IACR Cryptology ePrint Archive (2024): 341.
  5. Falk, Brett Hemenway et al. “Malicious Security for PIR (almost) for Free.” IACR Cryptology ePrint Archive (2024): 964.

#5 Neural Network Architectures for Private Machine Learning

Supervisor: Felix Morsbach

Deep learning models are vulnerable to inference attacks, in which an attacker is able to reconstruct (parts of) the training data used to train the model. To prevent such attacks, multiple defenses exist, for example training the model with differential privacy guarantees or coarsening the output of the model. However, modifications to the learning setup, such as the architecture or hyperparameters, can also significantly influence the vulnerability of deep learning models [1-3].
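
As a rough illustration of the first defense mentioned above, the following is a minimal numpy sketch (ours) of the per-example gradient clipping and noising at the core of DP-SGD-style training; constants and shapes are illustrative, and a real implementation would also track the privacy budget across steps.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1, clip=1.0, sigma=1.0):
    """One DP-SGD-style step: clip each example's gradient to bound its
    influence, sum, add Gaussian noise scaled to the clip norm, average."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip / max(norm, 1e-12)))
    noisy_mean = (np.sum(clipped, axis=0)
                  + np.random.normal(0, sigma * clip, size=params.shape)
                  ) / len(per_example_grads)
    return params - lr * noisy_mean

params = np.zeros(4)
grads = [np.random.randn(4) for _ in range(32)]  # stand-in per-example gradients
params = dp_sgd_step(params, grads)
print(params)
```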

The goal of this project is a) to survey the current literature on architectural modifications for improving private learning and b) to classify them into appropriately chosen categories.

References:

  1. Morsbach, Felix, Tobias Dehling, and Ali Sunyaev. “Architecture Matters: Investigating the Influence of Differential Privacy on Neural Network Design.” In NeurIPS 2021 Workshop on Privacy in Machine Learning (PriML 2021). Virtual Conference, 2021. https://doi.org/10.5445/IR/1000140769.
  2. Klause, Helena, Alexander Ziller, Daniel Rueckert, Kerstin Hammernik, and Georgios Kaissis. “Differentially Private Training of Residual Networks with Scale Normalisation,” March 1, 2022. http://arxiv.org/abs/2203.00324.
  3. De, Soham, Leonard Berrada, Jamie Hayes, Samuel L. Smith, and Borja Balle. “Unlocking High-Accuracy Differentially Private Image Classification through Scale.” arXiv, June 16, 2022. https://doi.org/10.48550/arXiv.2204.13650.

#6 Differentially Private Top-k Selection

Supervisor: Àlex Miranda Pascual

Often, it is necessary to determine the most popular option from a large pool of candidates. For instance, this enables us to determine a group's favorite films, the most popular restaurant menu item, and the areas with the highest disease prevalence. However, revealing such information could expose the users who selected these films or dishes, or reveal who suffers from the disease. Therefore, there is a need to select the top-k elements from a set of possibilities in a private manner.

Differential privacy (DP) has become the de facto mathematical standard for privacy-preserving data analysis. In particular, the problem of top-k selection has been extensively addressed in the context of differential privacy. In this seminar, you will survey different DP top-k selection proposals and answer questions such as how the proposals relate in terms of privacy and utility and which ones work best.
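
As a first intuition, here is a minimal sketch (our own) in the spirit of the one-shot approach referenced below: noise every count once with Laplace noise and release the k items with the largest noisy counts. The noise scale shown is only indicative; calibrating it correctly for a given privacy guarantee is precisely where the surveyed proposals differ.

```python
import numpy as np

def one_shot_top_k(counts, k, epsilon):
    """Add Laplace noise to every count once, then report the k items
    with the largest noisy counts. (The peeling approach would instead
    re-run a noisy selection k times, removing the winner each round.)"""
    noisy = {item: c + np.random.laplace(scale=2 * k / epsilon)
             for item, c in counts.items()}
    return sorted(noisy, key=noisy.get, reverse=True)[:k]

film_votes = {"film_a": 130, "film_b": 120, "film_c": 40, "film_d": 35}
print(one_shot_top_k(film_votes, k=2, epsilon=1.0))
```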

References: Some DP top-k selection mechanisms: the sparse vector technique [1,2], peeling [3,5], and one-shot [4,5]. Chapters 1-3 of [6] provide an introduction to data privacy and to differential privacy.

  1. M. Lyu, D. Su, and N. Li, “Understanding the sparse vector technique for differential privacy,” Proc. VLDB Endow., vol. 10, no. 6, pp. 637–648, Feb. 2017, doi: 10.14778/3055330.3055331.
  2. Y. Liu, S. Wang, Y. Liu, F. Li, and H. Chen, “Unleash the power of ellipsis: Accuracy-enhanced sparse vector technique with exponential noise,” Proc. VLDB Endow., vol. 18, no. 2, pp. 187–199, Oct. 2024, doi: 10.14778/3705829.3705838.
  3. D. Durfee and R. Rogers, “Practical differentially private top-k selection with pay-what-you-get composition,” Proc. Int. Conf. Neural Inf. Process. Syst., pp. 3532–3542, 2019.
  4. G. Qiao, W. Su, and L. Zhang, “Oneshot differentially private top-k selection,” Proc. Int. Conf. Mach. Learn., PMLR vol. 139, pp. 8672–8681, Jul. 2021.
  5. M. Shekelyan and G. Loukides, “Differentially private top-k selection via canonical Lipschitz mechanism,” Jan. 31, 2022, arXiv:2201.13376. doi: 10.48550/arXiv.2201.13376.
  6. C. Dwork and A. Roth, “The algorithmic foundations of differential privacy,” Found. Trends Theor. Comput. Sci., vol. 9, no. 3–4, pp. 211–407, Aug. 2014, doi: 10.1561/0400000042.

#7 Quantifying Sample Quality in Biometric Recognition

Supervisor: Matin Fallahi

Biometric recognition systems depend directly on input sample quality. Samples can be degraded by noise, blur, occlusions, motion, misalignment, sensor artifacts, or acquisition errors. Poor-quality samples reduce recognition accuracy, increase false rejection and false acceptance rates, and undermine system robustness. Therefore, quantifying sample quality is essential to enable preprocessing, adaptive decision-making, multimodal fusion, and rejection of unreliable inputs.

In this seminar, we explore different methods for quantifying and estimating the quality of biometric samples. These methods range from signal-based and image-based indicators to statistical quality metrics and machine learning models designed to predict quality scores in an automated way.
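
As a small, concrete example of a signal-based indicator, the following sketch (our own) scores sharpness via the variance of the Laplacian response; low values suggest blur. Real biometric quality measures are considerably richer than this.

```python
import numpy as np

def laplacian_variance(gray):
    """A classic signal-based sharpness indicator: convolve a grayscale
    image with the Laplacian kernel and take the variance of the response."""
    k = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(gray[i:i + 3, j:j + 3] * k)
    return out.var()

sharp = np.random.rand(64, 64)    # high-frequency stand-in image
blurry = np.full((64, 64), 0.5)   # flat, low-detail stand-in image
print(laplacian_variance(sharp) > laplacian_variance(blurry))  # True
```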

References:

#8 Using Explainable AI to Analyze Image and Video Anonymization

Supervisor: Islam Amar

Deep learning–based anonymization techniques are increasingly used to conceal personal identifiers in images and videos while preserving their utility for downstream tasks. However, evaluating anonymization solely through recognition accuracy or re-identification rates often overlooks why some anonymization methods succeed or fail. Explainable AI (XAI) methods offer a way to visualize the internal decision processes of face and person recognition models, highlighting which image regions or features drive identification. By applying saliency and attention-based explanation techniques (e.g., Grad-CAM, activation maps, or attention rollout) to original and anonymized inputs, it becomes possible to reveal residual identity cues that persist after anonymization.
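
To illustrate, here is a minimal Grad-CAM sketch in PyTorch (our own); the randomly initialized ResNet-18 backbone stands in for a trained face recognition model, and the choice of layer4 as the explained layer is an assumption.

```python
import torch
from torchvision.models import resnet18

model = resnet18(weights=None).eval()  # stand-in for a recognition model
activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    # Capture the feature maps and register a hook for their gradients.
    activations["a"] = out.detach()
    out.register_hook(lambda g: gradients.update(g=g.detach()))

model.layer4.register_forward_hook(fwd_hook)  # last conv block

x = torch.randn(1, 3, 224, 224)  # stand-in (anonymized or original) image
score = model(x)[0].max()        # logit of the predicted identity/class
score.backward()

# Grad-CAM: weight each channel by its average gradient, sum, ReLU.
weights = gradients["g"].mean(dim=(2, 3), keepdim=True)
cam = torch.relu((weights * activations["a"]).sum(dim=1)).squeeze(0)
cam /= cam.max() + 1e-8          # normalized 7x7 saliency map
print(cam.shape)
```

Comparing such maps for original and anonymized inputs shows whether the regions driving identification have actually changed.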

The goal of this topic is to collect and summarize XAI methods used in the context of facial or body identification, apply them to anonymized vs. original samples, and compare how different anonymization techniques affect the regions responsible for identification.

References:

  • Towards Visual Saliency Explanations of Face Verification (WACV 2024)
  • Explanation of Face Recognition via Saliency Maps (2023)
  • Towards Interpretable Face Recognition (ICCV 2019)
  • Explainable Face Recognition (2020)

#9 Multi-Perspective Anonymization in Multi-Camera Environments

Supervisor: Islam Amar

In many real-world settings such as smart cities, transportation hubs, and hospitals, visual data is collected simultaneously from multiple cameras with overlapping fields of view. Traditional anonymization techniques often treat each camera independently, which can lead to inconsistencies across views and make it easier for adversaries to link identities between cameras. Recent research has started addressing this challenge through multi-perspective anonymization, using 3D-aware generative models, multimodal architectures, or geometry-based consistency mechanisms to ensure that anonymized appearances remain coherent across different viewpoints and modalities.

At the same time, emerging work on event-based vision, gait, and skeleton modalities shows that anonymization must be considered not only in standard RGB frames but also across multiple sensing perspectives and modalities. Datasets such as CRxK enable realistic multi-camera evaluation, and studies on de-anonymization attacks highlight the risks of inconsistent anonymization in shared spaces like the Metaverse.

The goal of this topic is to review and compare recent approaches to multi-view and multimodal anonymization, summarize their underlying techniques (e.g., 3D face replacement, identity disentanglement, gait or skeleton anonymization), and analyze how well they preserve privacy across camera views using benchmark datasets.

References:

  • Achieving Privacy-Preserving Multi-View Consistency with Advanced 3D-Aware Face De-identification (MMAsia 2023)
  • De-anonymization Attacks on Metaverse (2023)
  • CRxK Dataset: A Multi-View Surveillance Video Dataset for Reenacted Crimes in Korea (2023)
  • An Architecture for Automatic Multimodal Video Data Anonymization to Ensure Data Protection (2023)
  • Event Anonymization: Privacy-Preserving Person Re-Identification and Pose Estimation in Event-Based Vision (2024)
  • Person Re-Identification without Identification via Event Anonymization (2024)
  • Anonymization for Skeleton Action Recognition (2023)
  • Learning Gait Representation From Massive Unlabelled Walking Videos: A Benchmark (GaitBench, 2022)

#10 Verifiable Context Propagation

Supervisor: Fritz Windisch

In today's cloud infrastructures, service meshes are commonly used to ensure confidentiality, integrity, and authentication. This is usually achieved with proxies that encrypt traffic using mTLS, thereby delivering all three desired security characteristics. While impersonating a service is thus unlikely, a service may still be compromised by an attacker and then act on the attacker's behalf.

To combat this, verifiable context propagation schemes exist. Context propagation schemes forward information about the context of a request in order to provide additional information for verifying the request's legitimacy. Verifiable context propagation uses cryptographic primitives to prove the origin of the tracing information. The gained insight can subsequently be used in policies to filter out illegitimate requests.
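
As a minimal sketch of the core idea (our own, using the pyca/cryptography library), a service signs the context it propagates so that a downstream policy can verify the claimed origin; how keys are distributed and what exactly is signed differ between the proposed schemes.

```python
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical setup: each service holds a signing key whose public part
# is known to the mesh (e.g., distributed alongside its mTLS identity).
frontend_key = Ed25519PrivateKey.generate()

# The frontend signs the context it attaches to an outgoing request.
context = {"origin": "frontend", "trace_id": "abc123", "user": "alice"}
payload = json.dumps(context, sort_keys=True).encode()
signature = frontend_key.sign(payload)

# A downstream service (or its policy engine) verifies the origin before
# accepting the request; a compromised service cannot forge this context.
try:
    frontend_key.public_key().verify(signature, payload)
    print("context verified, apply policy on:", context)
except InvalidSignature:
    print("reject: context not signed by claimed origin")
```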

The goal of the topic is to compile an overview of mechanisms that forward context information with requests in either a verifiable or non-verifiable way. The identified contributions should then be discussed along criteria such as the applicable domain, protection against an attacker compromising a service, the included information, and other advantages and drawbacks of the given solution.

References: