Seminar in Privacy
- Type: seminar
- Chair: KASTEL Strufe
- Semester: summer of 2026
- Lecturers: Prof. Dr. Thorsten Strufe, Patricia Guerra-Balboa
- SWS: 2
- Lv-No.: 2400087
- Information: in person (Präsenz)
Topics
The seminar covers current topics from the research area of technical data protection. These include, for example, the topics listed below.
Language: English
Organisation
In this seminar, students will research one of the given topics (see below) to write a short seminar paper during the semester. At the end, they will present their paper to their peers and engage in discussions about the various topics.
The seminar aims to teach students three things:
- Technical knowledge in various areas of security and privacy.
- Basic skills related to scientific research, paper writing, and scientific style.
- The basics of the scientific process: how conferences and peer review work.
Important dates
| April 21, 2026, 14:00–15:30 | Introduction (Organization & Topics) | Room 252 (50.34) |
| April 26, 2026, 23:59 | Topic preferences due | |
| April 27, 2026 | Topic assignment | |
| April 28, 2026, 14:00–15:30 | Basic skills | Room 252 (50.34) |
| July 5, 2026 | Paper submission deadline & Campus registration for the exam | |
| July 12, 2026 | Reviews due | |
| July 19, 2026 | Revision deadline (final submission) | |
| ~July 24, 2026 (tentative) | Presentations | Room 252 (50.34) |
Registration
Registration is done in ILIAS; please join the course there. You can find the link at the top of this page. There is a limited number of slots, which will be distributed on a first-come, first-served basis.
Preliminary list of topics
A preliminary list of topics will be published here.
#1 Correlation-based attacks against DP mechanisms
Supervisor: Patricia Guerra-Balboa
Differential Privacy (DP) has become the de facto formal mathematical standard for privacy-preserving disclosure. Recently, however, several articles have exposed shortcomings of this notion; strong correlation in data is one example. DP inherently assumes that the database is a simple, independent random sample: the records are identically distributed (i.e., follow the same probability distribution) and independent (in particular, uncorrelated). Unfortunately, this does not hold for many kinds of data and use cases, such as trajectory data.
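The independence assumption can be made concrete with a minimal sketch (plain Python, hypothetical helper names) of the standard Laplace mechanism: noise is calibrated to the sensitivity of a counting query, which is 1 only when records are independent.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # Inverse-CDF sampling from a zero-mean Laplace distribution.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(records: list[int], epsilon: float) -> float:
    # Counting query: sensitivity is 1 for independent records, so
    # Laplace noise with scale 1/epsilon yields epsilon-DP.
    return sum(records) + laplace_noise(1.0 / epsilon)

# With independent records, changing one person's record shifts the true
# count by at most 1, which the noise hides. If records are correlated,
# e.g. every record duplicates the first person's value, one person's
# change shifts the count by len(records), and the same noise scale no
# longer hides their influence.
independent = [1, 0, 1, 1, 0]
fully_correlated = [independent[0]] * len(independent)
print(dp_count(independent, epsilon=1.0))
```

The attacks in the references below exploit exactly this gap between the assumed and the actual influence of one individual on the output.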
The goal of this project is to investigate existing empirical attacks that exploit correlation to infer sensitive information about users, and to assess how realistic the theoretical threat that correlation poses to DP is in light of these attacks.
References:
- Dwork, C., Roth, A., et al. (2014). The algorithmic foundations of differential privacy. Foundations and Trends® in Theoretical Computer Science, 9 (3-4), 211-407.
- Humphries, T., Oya, S., Tulloch, L., Rafuse, M., Goldberg, I., Hengartner, U., & Kerschbaum, F. (2023, July). Investigating membership inference attacks under data dependencies. In 2023 IEEE 36th Computer Security Foundations Symposium (CSF) (pp. 473-488). IEEE.
- Buchholz, E., Abuadbba, A., Wang, S., Nepal, S., & Kanhere, S. S. (2022, December). Reconstruction attack on differential private trajectory protection mechanisms. In Proceedings of the 38th Annual Computer Security Applications Conference (pp. 279-292).
- Yang, B., Sato, I., & Nakagawa, H. (2015). Bayesian differential privacy on correlated data. In Proceedings of the 2015 ACM SIGMOD International Conference on Management of Data (pp. 747-762).
#2 Deep-Dive: Verifiable Time-Lock Puzzles and Their Applicability to Anonymous Communication
Supervisor: Christoph Coijanovic
When a time-lock puzzle is generated, it is guaranteed that it cannot be solved in fewer than t sequential computation steps. The 'verifiability' property enables a potential solver to verify that the puzzle's solution has a useful property before attempting to solve it. The first objective of this seminar topic, based on the recent paper by Xin and Papadopoulos [1], is to understand and explain the settings, constructions and performance implications of verifiable time-lock puzzles.
One potential application of time-lock puzzles is in verifiable mix networks. Mix networks unlink senders from their messages by shuffling messages based on delays defined by the senders. To ensure that messages are still shuffled in the presence of malicious mix nodes, several techniques have been proposed to make mix networks verifiable [2]. Intuitively, time-lock puzzles could be used by clients in a mix network to ensure that servers delay messages for the desired amount of time. The second objective is to verify this intuition and compare it with other approaches based on Haines and Müller's SoK.
References:
- J. Xin and D. Papadopoulos, "Check-Before-you-Solve: Verifiable Time-Lock Puzzles", 2025 IEEE Symposium on Security and Privacy (SP)
- T. Haines and J. Müller, "SoK: Techniques for Verifiable Mix Nets," 2020 IEEE 33rd Computer Security Foundations Symposium (CSF)
#3 The privacy of Signal in the face of strong adversaries
Supervisor: Daniel Schadt
Signal [1] is considered the state of the art in secure messaging, and its cryptographic mechanisms offer strong message confidentiality and integrity. Privacy, however, encompasses more than message encryption: metadata, such as who communicates with whom or how much people communicate, is just as important as message content and can leak sensitive information. Signal claims a focus on privacy, but only within its own threat model.
In this seminar, you will give an overview of the privacy protections that Signal offers (or does not offer) against adversaries outside of its threat model (a compromised Signal server, a malicious ISP, a global network adversary). You will look into features like Signal's Sealed Sender [2, 3] and to what extent they protect privacy. Finally, you can compare Signal to other messengers.
References:
- https://signal.org
- https://signal.org/blog/sealed-sender/
- https://www.cs.umd.edu/users/kaptchuk/publications/ndss21.pdf
#4 User Attestation of Smart Cameras
Supervisor: Simon Hanisch
Video anonymization removes identifiable information from videos, such as faces, body shapes, and movements. To minimize the attack surface, anonymization should happen as close as possible to the recording device, ideally on the device itself. Modern smart cameras have enough computing power to run video anonymization in real time, making this feasible. A new challenge then arises: people who are being recorded want to verify that proper video anonymization is actually running on the smart camera. This seminar topic investigates how smart cameras can effectively attest to recorded individuals that anonymization is running on the device.
References: