Seminar: Privacy and Technical Data Protection
- Type: Seminar
- Chair: Practical IT Security
- Semester: Summer 2025
- Lecturers: Prof. Dr. Thorsten Strufe, Patricia Guerra Balboa
- SWS: 2
- Course number (LVNr.): 2400087
- Note: In person
Content
The seminar covers current topics from the research field of technical data protection; see the preliminary list of topics below for examples.
Presentation language: English
Organisation
In this seminar, students will research one of the given topics (see below) to write a short seminar paper during the semester. At the end, they will present their paper to their peers and engage in discussions about the various topics.
The seminar aims to teach students three things:
- Technical knowledge in various areas of security and privacy.
- Basic skills related to scientific research, paper writing, and scientific style.
- The basics of the scientific process: how conferences and peer review work.
Schedule
April 23, 2025, 14:00–15:30 | Introduction (Organization & Topics) | Room 252 (50.34)
April 30, 2025, 14:00–15:30 | Kickoff presentation & topic preferences due | Room 252 (50.34)
May 1, 2025 | Topic assignment |
July 7, 2025 | Paper submission deadline & Campus registration for the exam |
July 14, 2025 | Reviews due |
July 21, 2025 | Revision deadline (final submission) |
August 4, 2025 (tentative) | Presentations | tbd
Registration
Registration is done via ILIAS; please join the course there. You can find the link at the top of this page. There is a limited number of slots, which will be distributed on a first-come, first-served basis.
Preliminary list of topics
This is a preliminary list of available topics. Further topics may be added until the introductory session.
#1 Selective Failure Attacks on Private Information Retrieval
Supervisor: Christoph Coijanovic
Private Information Retrieval (PIR) is a useful building block of anonymous communication because it allows a client to retrieve a record from a server's database without the server knowing which record the client wants. Most PIR protocols are vulnerable to selective failure attacks: Suppose the server suspects that a client is interested in the ith record. If the server corrupts only that record, the client's post-retrieval behavior may reveal whether it really wanted the ith record (e.g., by re-retrieving due to the corruption).
Recently, there has been much interest in making PIR secure against selective failure attacks [1-5].
The goal of this seminar paper is to understand how selective failure attacks work, under what conditions a PIR scheme is vulnerable to them, and how they can be mitigated.
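To make the attack logic concrete, here is a minimal Python sketch. It does not implement a real PIR scheme: the oblivious retrieval is modeled as a black box, and the record contents and digests are made up for illustration. The point is only how corrupting a single record turns observable client behavior (a retry) into a signal about the query target.

```python
import hashlib

# A minimal sketch of the attack logic only, assuming an idealized PIR
# black box (this is NOT a real PIR scheme): pir_retrieve() hides the
# target index from the server, but the server can still corrupt records.

def make_db(n):
    records = [f"record-{i}".encode() for i in range(n)]
    # Assume the client knows digests of all records (e.g., from a catalog).
    digests = [hashlib.sha256(r).hexdigest() for r in records]
    return records, digests

def pir_retrieve(db, target):
    # Stand-in for a real PIR protocol: the server learns nothing about target.
    return db[target]

def client_fetch(db, digests, target):
    """Return True if the client retries, which the server can observe."""
    record = pir_retrieve(db, target)
    # The integrity check fails only if the retrieved record was corrupted.
    return hashlib.sha256(record).hexdigest() != digests[target]

records, digests = make_db(8)
suspected = 5
tampered = list(records)
tampered[suspected] = b"garbage"  # server corrupts only the suspected record

for target in (2, 5):
    retried = client_fetch(tampered, digests, target)
    # The retry happens iff target == suspected, leaking the client's interest.
    print(f"target={target}: server observes retry = {retried}")
```

Roughly speaking, the authenticated and verifiable PIR schemes in the references aim to make such failures either detectable before the client acts on them or independent of the queried index.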
References:
- Colombo, Simone, et al. “Authenticated Private Information Retrieval.” IACR Cryptology ePrint Archive (2023).
- Dietz, Marian, and Stefano Tessaro. “Fully Malicious Authenticated PIR.” IACR Cryptology ePrint Archive (2023).
- Wang, Yinghao, et al. “Crust: Verifiable and Efficient Private Information Retrieval with Sublinear Online Time.” IACR Cryptology ePrint Archive (2023): 1607.
- de Castro, Leo, and Keewoo Lee. “VeriSimplePIR: Verifiability in SimplePIR at No Online Cost for Honest Servers.” IACR Cryptology ePrint Archive (2024): 341.
- Falk, Brett Hemenway, et al. “Malicious Security for PIR (almost) for Free.” IACR Cryptology ePrint Archive (2024): 964.
#2 Quantum Homomorphic Encryption (QHE)
Supervisor: Shima Hassanpour
Survey the existing approaches proposed for quantum homomorphic encryption. Categorise them in terms of method and privacy solution.
References:
- Rohde, P.P., Fitzsimons, J.F., Gilchrist, A.: Quantum walks with encrypted data. Phys. Rev. Lett. 109(15), 150501 (2012)
- Zhang, Yuan-Jing, et al. “Quantum homomorphic encryption based on quantum obfuscation.” 2020 International Wireless Communications and Mobile Computing (IWCMC). IEEE, 2020.
#3 Quantum Physical Unclonable Function (PUF)
Supervisor: Shima Hassanpour
Survey the existing quantum PUF-based protocols proposed as cryptographic primitives and investigate their adversarial models.
References:
- Arapinis, Myrto, et al. “Quantum physical unclonable functions: Possibilities and impossibilities.” Quantum 5 (2021): 475.
- Nilesh, Kumar, Christian Deppe, and Holger Boche. “Quantum PUF and its Applications with Information Theoretic Analysis.” 2024 IEEE 10th World Forum on Internet of Things (WF-IoT). IEEE, 2024.
#4 Tagging Attacks on Tor
Supervisor: Daniel Schadt
Tor is a widely used network that anonymizes a user's connections by routing them over a series of volunteer-run relays. However, Tor is susceptible to so-called tagging attacks, where a relay early in the circuit modifies the data stream in a way that a relay further down the path can recognize and undo, thereby identifying the circuit's source.
Such attacks have been known since Tor's inception, but their impact was long considered minor, as Tor already allows for time-based end-to-end correlation anyway. Their severity has since been re-assessed, and the project is now looking for solutions.
Your task in this seminar is to summarize those assessments [1, 2], and compare proposed countermeasures [3, 4, 5].
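As background for that comparison, the following toy sketch shows why malleable onion layers enable tagging. The XOR-keystream layers, three-hop setup, and fixed-length cells are simplifying assumptions, not Tor's actual relay cryptography.

```python
import secrets

# Toy sketch, assuming XOR-keystream onion layers (NOT Tor's actual relay
# cryptography): malleability lets colluding relays tag and de-tag cells.

MSG_LEN = 16

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

keys = [secrets.token_bytes(MSG_LEN) for _ in range(3)]  # one key per hop
payload = b"hello, anon user"  # exactly 16 bytes

# The client wraps the payload in three onion layers.
cell = payload
for k in reversed(keys):
    cell = xor(cell, k)

tag = secrets.token_bytes(MSG_LEN)

cell = xor(cell, keys[0])   # malicious entry relay peels its layer...
cell = xor(cell, tag)       # ...and XORs a secret tag into the cell

cell = xor(cell, keys[1])   # honest middle relay notices nothing

cell = xor(cell, keys[2])   # colluding exit relay peels the last layer;
detagged = xor(cell, tag)   # removing the tag restores a valid payload,
                            # linking this circuit to the malicious entry
print("tag confirmed at exit:", detagged == payload)
```

The proposed countermeasures replace these malleable layers with non-malleable (wide-block or authenticated) relay encryption, so that any mid-path modification destroys the cell irrecoverably.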
References:
- The 23rd Raccoon: How I Learned to Stop Ph34ring NSA and Love the Base Rate Fallacy, https://archive.torproject.org/websites/lists.torproject.org/pipermail/tor-dev/2008-September/002493.html
- The 23rd Raccoon: Analysis of the Relative Severity of Tagging Attacks, https://archive.torproject.org/websites/lists.torproject.org/pipermail/tor-dev/2012-March/003347.html
- Nick Mathewson: AEZ for relay cryptography, Tor proposal 261, https://spec.torproject.org/proposals/261-aez-crypto.html
- Tomer Ashur, Orr Dunkelman, Atul Luykx: Using ADL for relay cryptography, Tor proposal 295, https://spec.torproject.org/proposals/295-relay-crypto-with-adl.html
- Degabriele et al.: Counter Galois Onion: Fast Non-Malleable Onion Encryption for Tor, https://eprint.iacr.org/2025/583.pdf
#5 Lightweight Privacy Protections for Video Data
Supervisor: Simon Hanisch
For privacy protection techniques to be effective, they must be applied as close as possible to the sensor collecting the data. To achieve this, we need lightweight privacy protections that can run on embedded hardware, close to the recording sensors. These privacy protections must be fast enough to handle large data streams while still effectively anonymizing them. The goal of this seminar topic is to find and compare suitable privacy protections that are lightweight enough to be used in an embedded scenario.
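As one example of what "lightweight" can mean in practice, here is a hypothetical minimal sketch using OpenCV's classic Haar-cascade face detector plus Gaussian blurring, which runs on modest CPUs without a GPU. The camera index and blur kernel size are assumptions for illustration, and this baseline is far weaker than the learned anonymizers in the references.

```python
import cv2

# Hypothetical minimal sketch (an illustration, not one of the surveyed
# systems): Haar-cascade face detection plus Gaussian blur, cheap enough
# to run on an embedded CPU next to the camera.

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def anonymize_frame(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        # Blur each detected face region in place.
        face = frame[y:y + h, x:x + w]
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(face, (51, 51), 0)
    return frame

cap = cv2.VideoCapture(0)  # e.g., the embedded camera
ok, frame = cap.read()
if ok:
    cv2.imwrite("anonymized.png", anonymize_frame(frame))
cap.release()
```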
References:
- PrivacyLens: On-Device PII Removal from RGB Images using Thermally-Enhanced Sensing
- CIAGAN: Conditional Identity Anonymization Generative Adversarial Networks
#6 Survey on Gait-based Acoustic Identification
Supervisor: Julian Todt
Gait—the way that we walk—is a very distinctive biometric trait that can reliably be used to identify individuals. Most commonly, this identification uses videos or motion captures of an individual's gait. However, there have also been studies that use sound recordings of gait to identify individuals.
The goal of this seminar topic is to survey the literature on gait-based identification using acoustic recordings and compare the approaches.
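For orientation, most such systems follow a feature-extraction-plus-classifier pipeline. The sketch below is a hypothetical minimal instance of that pattern; the MFCC features, SVM classifier, file names, and sample rate are all assumptions for illustration, not taken from the cited papers.

```python
import numpy as np
import librosa
from sklearn.svm import SVC

# Hedged sketch of a generic acoustic-gait pipeline: extract compact
# spectral features per walking clip, then train a simple classifier.

def footstep_features(wav_path):
    # MFCCs are a common compact representation of footstep acoustics.
    audio, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
    # Summarize the clip by the per-coefficient mean and spread.
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Hypothetical labeled recordings: one short walking clip per sample.
train_files = ["alice_walk1.wav", "alice_walk2.wav", "bob_walk1.wav", "bob_walk2.wav"]
train_labels = ["alice", "alice", "bob", "bob"]

X = np.stack([footstep_features(f) for f in train_files])
clf = SVC().fit(X, train_labels)
print(clf.predict([footstep_features("unknown_walk.wav")]))
```

The surveyed papers differ mainly in the model plugged into this pipeline (e.g., HMMs in Geiger et al.) and in how robust the features are to footwear, surface, and recording distance.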
References:
- Wei Xu, ZhiWen Yu, Zhu Wang, Bin Guo, and Qi Han. 2019. AcousticID: Gait-based Human Identification Using Acoustic Signal. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 3, 3, Article 115 (September 2019), 25 pages. https://doi.org/10.1145/3351273
- Jürgen T. Geiger, Maximilian Kneißl, Björn W. Schuller, and Gerhard Rigoll. 2014. Acoustic Gait-based Person Identification using Hidden Markov Models. In Proceedings of the 2014 Workshop on Mapping Personality Traits Challenge and Workshop (MAPTRAITS '14). Association for Computing Machinery, New York, NY, USA, 25–30. https://doi.org/10.1145/2668024.2668027
#7 Analyzing Pufferfish Privacy
Supervisor: Patricia Guerra Balboa
Traditional privacy models often assume that individuals’ data are uncorrelated—an assumption that breaks down in many real-world scenarios, such as time series, social networks, or genomic datasets, where one person's data may reveal information about others. Pufferfish Privacy offers a flexible framework to explicitly define secrets and adversary knowledge, enabling privacy guarantees even in the presence of complex correlations.
In this seminar, students will delve into foundational and recent papers in the field, critically analyzing the strengths and limitations of current approaches. We'll also discuss open challenges, including practical mechanism design, and how to effectively model adversaries in correlated settings.
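For orientation, the framework's central guarantee (paraphrased from Kifer & Machanavajjhala, first reference below) can be stated as follows: a mechanism $\mathcal{M}$ satisfies $\varepsilon$-Pufferfish$(\mathbb{S}, \mathbb{Q}, \Theta)$ if, for every data distribution $\theta \in \Theta$ the adversary may believe, every pair of discriminative secrets $(s_i, s_j) \in \mathbb{Q}$ with positive probability under $\theta$, and every output $w$:

```latex
e^{-\varepsilon}
\;\le\;
\frac{\Pr[\mathcal{M}(X) = w \mid s_i, \theta]}
     {\Pr[\mathcal{M}(X) = w \mid s_j, \theta]}
\;\le\;
e^{\varepsilon}
```

Roughly, differential privacy is recovered as the special case where the secrets are of the form "record present" vs. "record absent" and $\Theta$ contains only distributions with independent records; correlated choices of $\theta$ are exactly what the general framework adds.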
References:
- Kifer, D., & Machanavajjhala, A. (2014). Pufferfish: A framework for mathematical privacy definitions. ACM Transactions on Database Systems (TODS), 39(1), 1-36.
- Song, S., Wang, Y., & Chaudhuri, K. (2017, May). Pufferfish privacy mechanisms for correlated data. In Proceedings of the 2017 ACM International Conference on Management of Data (pp. 1291-1306).
- Nuradha, T., & Goldfeld, Z. (2023). Pufferfish privacy: An information-theoretic study. IEEE Transactions on Information Theory, 69(11), 7336-7356.
#8 Advances in Streaming Data Protection
Supervisor: Patricia Guerra Balboa
As more applications rely on streaming data (from sensor readings and user interactions to financial transactions), ensuring privacy under the constraints of limited memory, one-pass processing, and evolving data becomes a critical challenge. Local Differential Privacy (LDP), which provides strong privacy guarantees without requiring trust in a central curator, brings new complexities when applied to streaming scenarios, such as managing cumulative privacy loss and adapting to changing data distributions.
In this seminar, students will investigate recent advances in LDP for streaming, evaluate the trade-offs between accuracy, privacy, and efficiency, and discuss open questions like optimal composition strategies, adaptive mechanisms, and the design of privacy-preserving algorithms that can handle long-term, dynamic data.
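As a baseline for that discussion, the sketch below implements generalized randomized response, the basic LDP frequency-estimation primitive underlying protocols like those in Wang et al.; the domain, epsilon, and data are made-up illustration values. In a streaming setting, the key complication is that each user would spend privacy budget like this at every time step, which is exactly what the composition strategies studied in this topic try to control.

```python
import math
import random
from collections import Counter

# Minimal sketch of generalized randomized response, a basic LDP
# frequency-estimation primitive; domain, epsilon, and data are made up.

def grr_perturb(value, domain, eps):
    """Report the true value with prob p, otherwise a uniform other value."""
    p = math.exp(eps) / (math.exp(eps) + len(domain) - 1)
    if random.random() < p:
        return value
    return random.choice([v for v in domain if v != value])

def grr_estimate(reports, domain, eps):
    """Debias the noisy report counts into unbiased frequency estimates."""
    n, k = len(reports), len(domain)
    p = math.exp(eps) / (math.exp(eps) + k - 1)
    q = (1 - p) / (k - 1)
    counts = Counter(reports)
    return {v: (counts[v] / n - q) / (p - q) for v in domain}

domain = ["red", "green", "blue"]
true_values = ["red"] * 700 + ["green"] * 200 + ["blue"] * 100
reports = [grr_perturb(v, domain, eps=1.0) for v in true_values]
print(grr_estimate(reports, domain, eps=1.0))  # roughly 0.7 / 0.2 / 0.1
```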
References:
- Wang, T., Blocki, J., Li, N., & Jha, S. (2017). Locally differentially private protocols for frequency estimation. In 26th USENIX Security Symposium (USENIX Security 17) (pp. 729-745).
- Ye, Q., Hu, H., Li, N., Meng, X., Zheng, H., & Yan, H. (2021, May). Beyond value perturbation: Local differential privacy in the temporal setting. In IEEE INFOCOM 2021-IEEE Conference on Computer Communications (pp. 1-10). IEEE.
#9 Choosing things privately with Differential Privacy
Supervisor: Àlex Miranda Pascual
Many statistical analyses involve selecting an element from a set of possibilities. For example, if we want to predict traffic jams, we need to select, from the set of all roads, the one most likely to be congested. At the same time, this selection process involves users' private information: to know the most congested road, we may need access to individuals' current locations.
Differential Privacy (DP) has become the formal, de facto mathematical standard for privacy-preserving data analysis in such situations. Hence, we find various algorithms in the literature that allow us to “choose things privately”. Now the question arises: which one do we want to choose? The goal of this seminar topic is to understand the state of the art of DP “choosing” mechanisms and to compare them in terms of privacy and utility, building a systematization that tells us which one is worth using depending on the circumstances.
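One classic such mechanism, described in both Desfontaines' post and Dwork & Roth, is the exponential mechanism. Below is a minimal sketch applied to the road example; the congestion counts are made-up illustration data, and the sensitivity-1 assumption holds only if one person can change each road's count by at most 1.

```python
import math
import random

# Minimal sketch of the exponential mechanism (see Dwork & Roth, Sec. 3.4):
# sample candidate r with probability proportional to exp(eps*u(r)/(2*Δu)).

def exponential_mechanism(candidates, utility, eps, sensitivity):
    max_u = max(utility[c] for c in candidates)  # shift for numerical stability
    weights = [
        math.exp(eps * (utility[c] - max_u) / (2 * sensitivity))
        for c in candidates
    ]
    return random.choices(candidates, weights=weights, k=1)[0]

# Utility = cars observed per road; one person moves any single count by at
# most 1, so the utility's sensitivity is 1.
congestion = {"A1": 412, "B10": 397, "ring road": 405}
road = exponential_mechanism(list(congestion), congestion, eps=1.0, sensitivity=1)
print("privately selected road:", road)
```

Alternatives like permute-and-flip (McKenna & Sheldon) target the same selection task with better utility in some regimes, which is the kind of trade-off this topic systematizes.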
References:
- Desfontaines, D. (2023, October). Choosing things privately with the exponential mechanism. https://desfontain.es/privacy/choosing-things-privately.html. (Ted is writing things, personal blog)
- Dwork, C., Roth, A., et al. (2014). The algorithmic foundations of differential privacy. Foundations and Trends® in Theoretical Computer Science, 9 (3–4), 211–407.
- McKenna, R., & Sheldon, D. R. (2020). Permute-and-flip: A new mechanism for differentially private selection. Advances in Neural Information Processing Systems, 33, 193–203.
#10 Neural Network Architectures for Private Machine Learning
Supervisor: Felix Morsbach
Deep learning models are vulnerable to inference attacks, in which an attacker is able to reconstruct (parts of) the training data used to train the model. To prevent such attacks, multiple defenses exist, for example training the model with differential privacy guarantees or coarsening the model's output. However, modifications to the learning setup, such as the architecture or hyperparameters, can also significantly influence the vulnerability of deep learning models [1-3].
The goal of this project is a) to survey the current literature on architectural modifications for improving private learning and b) to classify them into appropriately chosen categories.
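For context, most of this literature builds its architectural arguments around DP-SGD (Abadi et al., 2016), whose core step the hedged numpy sketch below reproduces: clip each example's gradient, aggregate, and add Gaussian noise. The batch shape and constants are illustrative assumptions, not values from the cited papers.

```python
import numpy as np

# Hedged sketch of the core DP-SGD step: clip per-example gradients to a
# fixed L2 norm, sum them, and perturb the sum with Gaussian noise.

def dp_sgd_gradient(per_example_grads, clip_norm=1.0, noise_multiplier=1.1):
    """per_example_grads: array of shape (batch_size, n_params)."""
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    # Scale every example's gradient down to L2 norm at most clip_norm.
    clipped = per_example_grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    noisy_sum = clipped.sum(axis=0) + np.random.normal(
        scale=noise_multiplier * clip_norm, size=per_example_grads.shape[1]
    )
    return noisy_sum / len(per_example_grads)

batch_grads = np.random.randn(32, 10)  # toy batch of per-example gradients
print(dp_sgd_gradient(batch_grads)[:3])
```

Architecture choices matter here because clipping and noise interact with gradient magnitudes, normalization layers, and model width, which is what the papers below investigate.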
References:
- Morsbach, Felix, Tobias Dehling, and Ali Sunyaev. “Architecture Matters: Investigating the Influence of Differential Privacy on Neural Network Design.” In NeurIPS 2021 Workshop on Privacy in Machine Learning (PriML 2021). Virtual Conference, 2021. https://doi.org/10.5445/IR/1000140769.
- Klause, Helena, Alexander Ziller, Daniel Rueckert, Kerstin Hammernik, and Georgios Kaissis. “Differentially Private Training of Residual Networks with Scale Normalisation,” March 1, 2022. http://arxiv.org/abs/2203.00324.
- De, Soham, Leonard Berrada, Jamie Hayes, Samuel L. Smith, and Borja Balle. “Unlocking High-Accuracy Differentially Private Image Classification through Scale.” arXiv, June 16, 2022. https://doi.org/10.48550/arXiv.2204.13650.
#11 Biometric Foundation Models
Supervisor: Matin Fallahi
This seminar topic invites students to explore the concept of biometric foundation models—large, pre-trained models that learn general representations from biometric data (e.g., face, voice, EEG). The student will investigate how these models are built, their advantages over traditional biometric systems, and their potential applications.
References:
- Shahreza, Hatef Otroshi, and Sébastien Marcel. "Foundation Models and Biometrics: A Survey and Outlook." Authorea Preprints (2025).
- Salice, Dario. Foundations and Opportunities of Biometrics. 2024.