Seminar Privacy and Technical Data Protection
- Type: Seminar
- Chair: KASTEL Strufe
- Semester: Summer 2026
- Lecturers: Prof. Dr. Thorsten Strufe, Patricia Guerra Balboa
- SWS: 2
- Course no.: 2400087
- Note: In-person
| Content | The seminar covers current topics from the research field of technical data protection, including, for example, the topics listed below. |
| Presentation language | English |
Organisation
In this seminar, students will research one of the given topics (see below) and write a short seminar paper during the semester. At the end, they will present their paper to their peers and engage in discussions about the various topics.
The seminar aims to teach students three things:
- Technical knowledge in various areas of security and privacy.
- Basic skills related to scientific research, paper writing, and scientific style.
- The basics of the scientific process, including how conferences and peer review work.
Important dates
| April 21, 2026, 14:00–15:30 | Introduction (Organization & Topics) | Room 252 (50.34) |
| April 26, 2026, 23:59 | Topic preferences due | |
| April 27, 2026 | Topic assignment | |
| April 28, 2026, 14:00–15:30 | Basic skills | Room 252 (50.34) |
| July 5, 2026 | Paper submission deadline & Campus registration for the exam | |
| July 12, 2026 | Reviews due | |
| July 19, 2026 | Revision deadline (final submission) | |
| ~July 24, 2026 (tentative) | Presentations | Room 252 (50.34) |
Registration
Registration is done via ILIAS; please join the course. You can find the link at the top of this page. A limited number of slots is available, distributed on a first-come, first-served basis.
Preliminary list of topics
The preliminary list of topics is given below.
#1 Correlation-based attacks against DP mechanisms
Supervisor: Patricia Guerra Balboa
Differential Privacy (DP) has become the formal and de facto mathematical standard for privacy-preserving disclosure. Recently, however, several articles have pointed out shortcomings of this notion, and strong correlation in the data is one example. DP inherently assumes that the database is a simple, independent random sample. This implies that the records in the database are identically distributed (i.e., they follow the same probability distribution) and independent (in particular, uncorrelated). Unfortunately, this is not the case for many kinds of data and use cases, such as trajectory data.
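As background, the guarantee itself can be stated in one line (following Dwork and Roth [1]); note that nothing in this inequality constrains how the records of D are generated, which is why correlated data undermines the intended interpretation of the guarantee rather than the mathematics:

```latex
% (epsilon, delta)-differential privacy: a randomized mechanism M is
% (epsilon, delta)-DP if, for all neighboring databases D and D'
% (differing in a single record) and all sets S of outputs,
\Pr[\mathcal{M}(D) \in S] \;\le\; e^{\varepsilon} \cdot \Pr[\mathcal{M}(D') \in S] + \delta .
```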
The goal of this project is to investigate existing empirical attacks that exploit correlation to infer sensitive information about users, and to determine how realistic the theoretical threat that correlation poses to DP is compared to these existing attacks.
References:
- Dwork, C., & Roth, A. (2014). The algorithmic foundations of differential privacy. Foundations and Trends® in Theoretical Computer Science, 9(3–4), 211–407.
- Humphries, T., Oya, S., Tulloch, L., Rafuse, M., Goldberg, I., Hengartner, U., & Kerschbaum, F. (2023, July). Investigating membership inference attacks under data dependencies. In 2023 IEEE 36th Computer Security Foundations Symposium (CSF) (pp. 473–488). IEEE.
- Buchholz, E., Abuadbba, A., Wang, S., Nepal, S., & Kanhere, S. S. (2022, December). Reconstruction attack on differential private trajectory protection mechanisms. In Proceedings of the 38th Annual Computer Security Applications Conference (pp. 279–292).
- Yang, B., Sato, I., & Nakagawa, H. (2015). Bayesian differential privacy on correlated data. In Proceedings of the 2015 ACM SIGMOD International Conference on Management of Data (pp. 747–762).
#2 Deep-Dive: Verifiable Time-Lock Puzzles and Their Applicability to Anonymous Communication
Supervisor: Christoph Coijanovic
When a time-lock puzzle is generated, it is guaranteed that it cannot be solved in fewer than t sequential computation steps. The 'verifiability' property enables a potential solver to verify that the puzzle's solution has a useful property before attempting to solve it. The first objective of this seminar topic, based on the recent paper by Xin and Papadopoulos [1], is to understand and explain the settings, constructions and performance implications of verifiable time-lock puzzles.
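For orientation, the sketch below shows the classic RSA time-lock puzzle of Rivest, Shamir, and Wagner that this line of work builds on: the creator uses the factorization of n as a trapdoor, while the solver must perform t sequential squarings. It uses toy parameters and is not the verifiable construction of Xin and Papadopoulos [1].

```python
# Minimal sketch of the classic RSA time-lock puzzle (Rivest, Shamir,
# Wagner) with toy parameters; NOT the verifiable construction of Xin
# and Papadopoulos, only the underlying primitive.
import random

def generate_puzzle(message: int, t: int):
    """Creator side: the factorization of n is a trapdoor, so the puzzle
    can be locked in a handful of modular operations."""
    p, q = 1000003, 1000033          # toy primes; use ~1024-bit primes in practice
    n, phi = p * q, (p - 1) * (q - 1)
    a = random.randrange(2, n)       # almost surely coprime to n
    e = pow(2, t, phi)               # reduce the exponent 2^t modulo phi(n)
    b = pow(a, e, n)                 # equals a^(2^t) mod n, computed quickly
    return n, a, t, (message + b) % n

def solve_puzzle(n: int, a: int, t: int, ciphertext: int) -> int:
    """Solver side: t sequential modular squarings, believed hard to
    parallelize; this is the time-lock."""
    b = a
    for _ in range(t):
        b = b * b % n
    return (ciphertext - b) % n

n, a, t, c = generate_puzzle(message=42, t=10_000)
print(solve_puzzle(n, a, t, c))  # 42
```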
One potential application of time-lock puzzles is in verifiable mix networks. Mix networks unlink senders from their messages by shuffling messages based on delays defined by the senders. To ensure that messages are still shuffled in the presence of malicious mix nodes, several techniques have been proposed to make mix networks verifiable [2]. Intuitively, time-lock puzzles could be used by clients in a mix network to ensure that servers delay messages for the desired amount of time. The second objective is to verify this intuition and to compare the time-lock approach with the techniques surveyed in Haines and Müller's SoK [2].
References:
- J. Xin and D. Papadopoulos, "Check-Before-you-Solve: Verifiable Time-Lock Puzzles", 2025 IEEE Symposium on Security and Privacy (SP)
- T. Haines and J. Müller, "SoK: Techniques for Verifiable Mix Nets," 2020 IEEE 33rd Computer Security Foundations Symposium (CSF)
#3 The privacy of Signal in the face of strong adversaries
Supervisor: Daniel Schadt
Signal [1] is considered state-of-the-art when it comes to secure messaging, and its cryptographic protocol offers strong message confidentiality and integrity. However, when it comes to privacy, we usually consider more than just encryption of messages: metadata, such as "who communicates with whom" or "how much do people communicate," is just as important as message content and can leak sensitive information. Signal claims a focus on privacy, but only within its threat model.
In this seminar, you will give an overview of the privacy protections that Signal offers (or does not offer) against various adversaries outside of Signal's threat model (a compromised Signal server, a malicious ISP, a global network adversary). You will look into features like Signal's Sealed Sender [2, 3] and to what extent they protect privacy. Finally, you can compare Signal to other messengers.
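To make the Sealed Sender idea concrete, the following conceptual sketch uses libsodium sealed boxes via PyNaCl (an assumed dependency). It is not Signal's actual protocol, which additionally involves sender certificates and delivery tokens [2], but it shows the core trick: the sender's identity travels only inside the ciphertext, so the relaying server learns the recipient but not the sender.

```python
# Conceptual sketch of the sealed-sender idea using libsodium sealed
# boxes (PyNaCl). NOT Signal's actual protocol; it only illustrates that
# the server can route on the recipient while the sender's identity
# stays inside the encrypted envelope.
from nacl.public import PrivateKey, SealedBox

recipient_key = PrivateKey.generate()

# Sender: put own identity INSIDE the encrypted envelope, not on the outside.
envelope = SealedBox(recipient_key.public_key).encrypt(b"from:alice|msg:hello")

# Server: sees only (recipient, envelope); there is no sender field to log.
route_to = "bob"

# Recipient: decrypts and learns who the sender is.
plaintext = SealedBox(recipient_key).decrypt(envelope)
print(plaintext)  # b'from:alice|msg:hello'
```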
References:
- https://signal.org
- https://signal.org/blog/sealed-sender/
- https://www.cs.umd.edu/users/kaptchuk/publications/ndss21.pdf
#4 User Attestation of Smart Cameras
Supervisor: Simon Hanisch
Video anonymization can be used to remove identifiable information from videos, such as faces, body shapes and movements. To minimize the attack surface of videos, anonymization should happen as close as possible to the recording device, ideally on the device itself. Modern smart cameras have enough computing power to run video anonymization in real time, making this a reality. However, a new challenge now arises: people who are being recorded want to verify that proper video anonymization is running on the smart camera. This seminar topic aims to investigate how smart cameras can effectively attest to the recorded individuals that such anonymization is running on the device.
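As a primer, the sketch below shows the generic measure-then-sign attestation flow that the referenced systems build on (hypothetical names and keys, not a specific TPM or TEE API): the camera signs a hash of the software it runs, and a bystander's verifier compares this measurement against a known-good value.

```python
# Minimal sketch of the attestation idea (hypothetical flow, not a real
# TPM/TEE API): the camera signs a hash of the anonymization software it
# runs, and a verifier app checks it against a known-good value.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# Device key; in practice burned into a TPM/TEE at manufacturing time.
device_key = ed25519.Ed25519PrivateKey.generate()

def camera_quote(firmware: bytes) -> tuple[bytes, bytes]:
    """Camera side: measure the running software and sign the measurement."""
    measurement = hashlib.sha256(firmware).digest()
    return measurement, device_key.sign(measurement)

def verify_quote(measurement: bytes, sig: bytes, good_hash: bytes) -> bool:
    """Verifier side: check the signature, then compare to the expected hash."""
    try:
        device_key.public_key().verify(sig, measurement)
    except InvalidSignature:
        return False
    return measurement == good_hash

fw = b"anonymizer-v1.2"  # stand-in for the real anonymization binary
m, s = camera_quote(fw)
print(verify_quote(m, s, hashlib.sha256(fw).digest()))  # True
```

The hard open problem this topic targets is everything around this core: how bystanders obtain the device's public key and the known-good hash, and why they should trust them.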
References:
- User-Based Attestation for Trustworthy Visual Sensor Networks
- TrustCAM: security and privacy-protection for an embedded smart camera based on trusted computing
- Security and Privacy Protection in Visual Sensor Networks: A Survey
#5 A Literature Review of Gait Anonymization Methods
Supervisor: Julian Todt
Gait – the way we walk – is a very distinctive biometric trait that enables robust identification of individuals. Gait can be captured without the consent of the individual, from far away and via a range of sensing technologies. One approach to mitigate the resulting privacy risks is anonymizing the recorded data, i.e. removing the identifying information while preserving the utility of the data. Various anonymization methods have been proposed, using numerous underlying approaches, for a range of different use-cases.
The goal of this seminar topic is to review the existing literature on gait anonymization methods. Approaches should be analyzed and compared, in particular with regards to their use-cases and evaluation methodology.
References:
- Hanisch, Simon, Julian Todt, and Thorsten Strufe. "Pantomime: Motion Data Anonymization using Foundation Motion Models." arXiv preprint arXiv:2501.07149 (2025).
- Hirose, Yuki, Kazuaki Nakamura, Naoko Nitta, and Noboru Babaguchi. 2019. “Anonymization of Gait Silhouette Video by Perturbing Its Phase and Shape Components.” 2019 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), November, 1679–85. https://doi.org/10.1109/APSIPAASC47483.2019.9023196.
- Tieu, Ngoc-Dung T., Huy H. Nguyen, Hoang-Quoc Nguyen-Son, Junichi Yamagishi, and Isao Echizen. 2017. “An Approach for Gait Anonymization Using Deep Learning.” 2017 IEEE Workshop on Information Forensics and Security (WIFS), December, 1–6. https://doi.org/10.1109/WIFS.2017.8267657.
#6 Differentially Private Learning Algorithms
Supervisor: Felix Morsbach
Stochastic gradient descent (SGD) is an iterative optimization algorithm for finding the parameters that provide the best fit between predicted and actual outputs, and is widely used in many machine learning models. With the rise in popularity of private learning, differentially private SGD (DP-SGD) has been firmly established as the de facto standard algorithm for learning models with differential privacy guarantees. However, there have also been many alternative proposals to DP-SGD that likewise allow the training of differentially private models.
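For reference, a minimal sketch of one DP-SGD step in the spirit of Abadi et al. (first reference below) is shown here; the model (linear, squared loss), hyperparameters, and data are illustrative assumptions, and a real implementation would also track the privacy budget with an accountant.

```python
# Minimal sketch of one DP-SGD step: per-example gradient clipping
# followed by Gaussian noise. Model and hyperparameters are illustrative.
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip_norm=1.0, noise_mult=1.0,
                rng=np.random.default_rng(0)):
    grads = []
    for xi, yi in zip(X, y):
        g = 2 * (xi @ w - yi) * xi                               # per-example gradient
        g *= min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))   # clip to norm C
        grads.append(g)
    noise = rng.normal(0.0, noise_mult * clip_norm, size=w.shape)
    g_priv = (np.sum(grads, axis=0) + noise) / len(X)            # noisy average
    return w - lr * g_priv

rng = np.random.default_rng(1)
X, w_true = rng.normal(size=(64, 3)), np.array([1.0, -2.0, 0.5])
y = X @ w_true
w = np.zeros(3)
for _ in range(200):
    w = dp_sgd_step(w, X, y)
print(w)  # roughly approaches w_true, up to clipping/noise error
```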
The goal of this seminar topic is to a) survey the current literature on alternative proposals for differentially private machine learning training, b) classify them into appropriately chosen categories, and c) analyze their applicability to standard machine learning tasks.
References:
- Abadi, Martin, Andy Chu, Ian Goodfellow, et al. “Deep Learning with Differential Privacy.” Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, 2016, 308–18. https://doi.org/10.1145/2976749.2978318.
- Papernot, Nicolas, Shuang Song, Ilya Mironov, Ananth Raghunathan, Kunal Talwar, and Ulfar Erlingsson. “Scalable Private Learning with PATE.” International Conference on Learning Representations, 2018. https://openreview.net/forum?id=rkZB1XbRZ.
- Feng, Ce, Nuo Xu, Wujie Wen, Parv Venkitasubramaniam, and Caiwen Ding. “Spectral-DP: Differentially Private Deep Learning through Spectral Perturbation and Filtering.” IEEE Symposium on Security and Privacy (SP), 2023. https://doi.org/10.1109/SP46215.2023.10179457.
#7 Survey on DP accounting
Supervisor: Àlex Miranda Pascual
In recent years, differential privacy (DP) has become the standard privacy notion. It ensures that any attacker is unable to distinguish whether an individual has participated in the database or not. The privacy parameters of DP, ε and δ, allow us to quantify the level of privacy provided by a DP mechanism, and thus finding tight estimates is necessary in order to understand the privacy loss caused by these mechanisms. In addition, DP is well known for its useful properties, such as composition, which tells us that the combination of k (ε, δ)-DP mechanisms is (kε, kδ)-DP. However, this bound is not necessarily tight for all compositions of mechanisms, which can lead to an overestimation of the privacy loss. Computing the actual tight bounds, or good approximations, is not straightforward and has been extensively studied in the subfield called DP accounting. DP accounting provides practitioners with tools to track and bound the cumulative privacy loss of multiple iterations and to determine how many operations can be safely performed. Consisting of mathematical theorems and empirical approximations, DP accounting can be used to determine the privacy guarantees of composed and sampled mechanisms, which are common in DP machine learning.
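To illustrate why accounting matters, the sketch below compares basic composition with the advanced composition theorem from Dwork and Roth [1]; both are closed-form upper bounds, whereas the numerical accountants in [2–4] are typically tighter still.

```python
# Sketch of two classic composition bounds from Dwork & Roth [1]. Both
# are upper bounds on the privacy loss of k-fold composition; modern
# accountants (FFT-based [2], characteristic functions [4]) are tighter.
import math

def basic_composition(eps, delta, k):
    """k-fold composition of (eps, delta)-DP is (k*eps, k*delta)-DP."""
    return k * eps, k * delta

def advanced_composition(eps, delta, k, delta_prime=1e-6):
    """Advanced composition: (eps', k*delta + delta')-DP with
    eps' = sqrt(2k ln(1/delta')) * eps + k * eps * (e^eps - 1)."""
    eps_prime = (math.sqrt(2 * k * math.log(1 / delta_prime)) * eps
                 + k * eps * (math.exp(eps) - 1))
    return eps_prime, k * delta + delta_prime

print(basic_composition(0.1, 1e-8, k=100))     # ~(10.0, 1e-06)
print(advanced_composition(0.1, 1e-8, k=100))  # ~(6.3, 2e-06): much tighter eps
```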
The aim of this seminar is to investigate the subfield of DP accounting by surveying the current literature, identifying cases where tight bounds exist, and finding and comparing different accounting methods.
References: DP and its properties are defined in [1]. References [2, 3, 4] consist of theoretical papers on DP accounting. References [5, 6] focus on more applied results such as DP stochastic gradient descent (DP-SGD).
- Dwork, C., & Roth, A. (2014). The algorithmic foundations of differential privacy. Foundations and Trends® in Theoretical Computer Science, 9(3–4), 211–407. https://doi.org/10.1561/0400000042
- Koskela, A., Jälkö, J., & Honkela, A. (2020). Computing Tight Differential Privacy Guarantees Using FFT. Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, 2560–2569. https://proceedings.mlr.press/v108/koskela20b.html
- Sommer, D. M., Meiser, S., & Mohammadi, E. (2019). Privacy loss classes: The central limit theorem in differential privacy. Proceedings on Privacy Enhancing Technologies. https://petsymposium.org/popets/2019/popets-2019-0029.php
- Zhu, Y., Dong, J., & Wang, Y.-X. (2022). Optimal accounting of differential privacy via characteristic function. Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, 4782–4817. https://proceedings.mlr.press/v151/zhu22c.html
- Chua, L., Ghazi, B., Kamath, P., Kumar, R., Manurangsi, P., Sinha, A., & Zhang, C. (2024). How private are DP-SGD implementations? Proceedings of the 41st International Conference on Machine Learning, 8904–8918. https://proceedings.mlr.press/v235/chua24a.html
- Chua, L., Ghazi, B., Harrison, C., Kamath, P., Kumar, R., Leeman, E. J., Manurangsi, P., Sinha, A., & Zhang, C. (2025). Balls-and-Bins sampling for DP-SGD. Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, 946–954. https://proceedings.mlr.press/v258/chua25a.html
#8 Self-Supervised Learning for Biosignal Time Series
Supervisor: Matin Fallahi
In many biosignal domains, such as electroencephalography (EEG), electrocardiography (ECG), and electromyography (EMG), labeled data is limited, while annotation is often costly and prone to noise. Self-supervised learning addresses this challenge by learning informative representations from unlabeled time-series signals through auxiliary objectives. In this context, the seminar will examine the main categories of self-supervised methods, the design of pretext tasks and augmentations, and the transferability of the learned representations to downstream biosignal analysis tasks.
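As a flavor of how such pretext tasks look in practice, here is a minimal, illustrative sketch of building positive pairs for a contrastive objective from an unlabeled biosignal window; the augmentations and their parameters are assumptions for demonstration only.

```python
# Minimal sketch of a contrastive pretext task for biosignal windows:
# two random augmentations (jitter + per-channel scaling) of the same
# unlabeled window form a positive pair; other windows act as negatives.
# Augmentation choices and magnitudes are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def augment(window: np.ndarray) -> np.ndarray:
    """window: (channels, samples), e.g. a few seconds of EEG."""
    jitter = rng.normal(0.0, 0.05, size=window.shape)          # additive noise
    scale = rng.uniform(0.8, 1.2, size=(window.shape[0], 1))   # per-channel gain
    return window * scale + jitter

def positive_pair(window: np.ndarray):
    return augment(window), augment(window)

x = rng.normal(size=(8, 256))    # stand-in 8-channel window
v1, v2 = positive_pair(x)
# An encoder would then be trained (e.g., with a contrastive loss such
# as NT-Xent) to map v1 and v2 close together and away from other windows.
print(v1.shape, v2.shape)        # (8, 256) (8, 256)
```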
#9 Using Explainable AI to Analyze Image and Video Anonymization
Supervisor: Islam Amar
Deep learning–based anonymization techniques are increasingly used to conceal personal identifiers in images and videos while preserving their utility for downstream tasks. However, evaluating anonymization solely through recognition accuracy or re-identification rates often overlooks why some anonymization methods succeed or fail. Explainable AI (XAI) methods offer a way to visualize the internal decision processes of face and person recognition models, highlighting which image regions or features drive identification. By applying saliency and attention-based explanation techniques (e.g., Grad-CAM, activation maps, or attention rollout) to original and anonymized inputs, it becomes possible to reveal residual identity cues that persist after anonymization.
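To make this concrete, below is a minimal Grad-CAM sketch in PyTorch; `model` and `target_layer` are placeholders for a face or person recognition backbone and its last convolutional block, and the method chosen in the seminar may of course differ.

```python
# Minimal Grad-CAM sketch in PyTorch. `model` and `target_layer` are
# assumed placeholders for a recognition backbone and its last conv block.
import torch

def grad_cam(model, target_layer, image, class_idx):
    acts, grads = {}, {}
    h1 = target_layer.register_forward_hook(
        lambda m, i, o: acts.update(a=o))
    h2 = target_layer.register_full_backward_hook(
        lambda m, gi, go: grads.update(g=go[0]))
    score = model(image)[0, class_idx]   # logit of the target identity
    model.zero_grad()
    score.backward()
    h1.remove(); h2.remove()
    # Weight each activation map by its average gradient, then ReLU.
    w = grads["g"].mean(dim=(2, 3), keepdim=True)   # (1, C, 1, 1)
    cam = torch.relu((w * acts["a"]).sum(dim=1))    # (1, H, W)
    return cam / (cam.max() + 1e-8)

# Usage idea for this topic: compare the heatmaps of the same identity
# logit on an original and an anonymized image to locate residual cues.
# cam_orig = grad_cam(model, model.layer4, img, class_idx=identity)
# cam_anon = grad_cam(model, model.layer4, anon_img, class_idx=identity)
```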
The goal of this topic is to collect and summarize XAI methods used in the context of facial or body identification, apply them to anonymized vs. original samples, and compare how different anonymization techniques affect the regions responsible for identification.
References:
- Towards Visual Saliency Explanations of Face Verification (WACV 2024)
- Explanation of Face Recognition via Saliency Maps (2023)
- Towards Interpretable Face Recognition (ICCV 2019)
- Explainable Face Recognition (2020)
#10 Privacy Leaks in the lower layers of 5G Networks
Supervisor: Fritz Windisch
Currently used by billions of users [0], 5G serves numerous privacy-sensitive use cases such as healthcare, robotics, and industry. 5G was therefore designed with more security and privacy in mind, especially on the higher layers: authentication protocols were rethought, and the protection goals required by the standard were strengthened.
However, this protection does not yet fully extend to the lower layers, as recent studies have indicated [1]. The goal of this topic is to provide a deep dive into privacy issues on the lower layers of 5G and to suggest how to resolve them.
References:
- Ericsson Mobility Report (November 2025)
- M Chlosta et al.: 5G SUCI-catchers: still catching them all? - https://doi.org/10.1145/3448300.3467826
#11 Privacy for Non-Spatial Trajectory Attributes
Supervisor: Patricia Guerra Balboa
This seminar focuses on privacy for non-spatial trajectory attributes, such as time, duration, or total traveled distance. While most existing privacy-preserving approaches concentrate on protecting location data, they often overlook these additional attributes, which can also reveal sensitive information about individuals.
For instance, knowing how long a trip takes or how frequently someone travels can expose behavioral patterns, even without precise location data. This highlights an important gap in current research.
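As a toy illustration of the problem, the sketch below protects a single trip duration with the Laplace mechanism under worst-case sensitivity; the clipping bounds and ε are assumed values, not taken from the cited papers. The large noise this requires is precisely why specialized mechanisms for non-spatial attributes are worth studying.

```python
# Minimal sketch: protecting a non-spatial attribute (trip duration in
# minutes) with the Laplace mechanism. Bounds and epsilon are assumptions.
import numpy as np

def private_duration(duration_min: float, eps: float = 1.0,
                     low: float = 0.0, high: float = 120.0,
                     rng=np.random.default_rng(0)) -> float:
    d = min(max(duration_min, low), high)   # clip to bound the sensitivity
    sensitivity = high - low                # one record can shift d by at most this
    return d + rng.laplace(0.0, sensitivity / eps)

print(private_duration(37.5))  # noisy duration; with worst-case sensitivity
                               # the noise scale (120 min) swamps the signal
```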
The goal of this seminar is to explore and systematize existing approaches that address these non-spatial aspects of trajectory privacy. We aim to classify the different methods, understand their strengths and limitations, and ultimately identify open challenges and opportunities for improvement in this emerging area.
References:
- Miranda-Pascual, À., Guerra-Balboa, P., Parra-Arnau, J., Forné, J., & Strufe, T. (2023). SoK: Differentially private publication of trajectory data. Proceedings on Privacy Enhancing Technologies.
- Buchholz, E., Abuadbba, A., Wang, S., Nepal, S., & Kanhere, S. S. (2024). SoK: Can trajectory generation combine privacy and utility? arXiv preprint arXiv:2403.07218.
- Ye, Q., Hu, H., Li, N., Meng, X., Zheng, H., & Yan, H. (2021, May). Beyond value perturbation: Local differential privacy in the temporal setting. In IEEE INFOCOM 2021 - IEEE Conference on Computer Communications (pp. 1–10). IEEE.