Seminar Privacy and Security

  • Type: seminar
  • Chair: KIT-Fakultäten - KIT-Fakultät für Informatik - KASTEL – Institut für Informationssicherheit und Verlässlichkeit - KASTEL Strufe
  • Semester: Winter 2023/2024
  • Lecturer:

    Prof. Dr. Thorsten Strufe
    M. Sc. Patricia Guerra-Balboa

  • SWS: 2
  • Lv-No.: 2400118

The seminar covers current topics in the research area of technical data protection.

This includes for example:

  • Anonymous communication
  • Network security
  • Anonymous online services
  • Anonymous digital payment systems
  • Evaluation of the anonymity of online services
  • Anonymized publication of data (differential privacy, k-anonymity)
  • Transparency/awareness enhancing systems
  • Support in understanding media

Language: English

Tentative Schedule
  • 24.10.2023, 11:30 – 13:00 (originally 14:00 – 15:30), 50.34 Room 252: Introduction (Organization & Topics)
  • 29.10.2023 Topic preferences due
  • 30.10.2023 Topic assignment
  • 02.11.2023 (originally 31.10.2023), 9:45 – 11:15: Basic Skills
  • 28.01.2024 Paper submission deadline
  • 04.02.2024 Reviews due
  • 11.02.2024 Revision deadline
  • ~20.02.2024 Presentations


This is the list of the topics that you can choose from. Each topic also has an introductory slide in the introduction slide set.

In addition to the introduction slides, we also provide the Basic Skills lecture slides.

#1 Continuous Group Key Agreement

Supervisor: Christoph Coijanovic

Group communication is very popular in everyday life. Like all online communication, it needs to be secure. In addition to confidentiality, "security" in this context includes forward secrecy and post-compromise security. To achieve these goals, a Continuous Group Key Agreement (CGKA) protocol can be used, where group members derive a shared key that can be updated on demand.
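The core idea can be illustrated with a toy hash ratchet. This is a minimal sketch, not the MLS construction (real CGKAs such as TreeKEM use a key tree so that member updates cost only logarithmically many messages); all names here are ours:

```python
import hashlib

def ratchet(chain_key: bytes) -> tuple[bytes, bytes]:
    """Derive the next chain key and a one-time message key.

    Deleting the old chain key after each step gives forward
    secrecy: compromising the current state does not reveal
    keys derived in earlier epochs. Mixing in fresh randomness
    on an update (not shown) is what enables post-compromise
    security.
    """
    next_chain = hashlib.sha256(b"chain" + chain_key).digest()
    message_key = hashlib.sha256(b"msg" + chain_key).digest()
    return next_chain, message_key

# Toy group: every member holds the same chain key and steps it
# in lockstep, so all members always agree on the current key.
state = hashlib.sha256(b"initial group secret").digest()
keys = []
for _ in range(3):
    state, k = ratchet(state)
    keys.append(k)
```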

The Internet Engineering Task Force (IETF) is in the process of standardizing CGKA as Messaging Layer Security (MLS) [1], and there is a lot of academic interest in the topic, e.g.:

  • Hashimoto et al. consider how to hide metadata [2].
  • Balbas et al. and Kajita et al. add administration [3,4].
  • Alwen et al. make CGKA more resilient to unreliable infrastructure [5].

Your task is to survey the existing literature on CGKA. The goal of this survey is to find the state of the art in CGKA and to determine how compatible the different approaches are with each other and with the IETF MLS standard.

[1] Barnes, Richard et al. "The Messaging Layer Security (MLS) Protocol." IETF RFC 9420 (2023)
[2] Hashimoto, Keitaro et al. "How to Hide MetaData in MLS-Like Secure Group Messaging: Simple, Modular, and Post-Quantum." ACM CCS (2022)
[3] Balbás, David et al. "Cryptographic Administration for Secure Group Messaging." IACR Cryptol. ePrint Arch. (2022)
[4] Kaisei Kajita et al. "Continuous Group Key Agreement with Flexible Authorization and Its Applications." IWSPA (2023)
[5] Alwen, Joël et al. "Fork-Resilient Continuous Group Key Agreement." IACR Cryptol. ePrint Arch. (2023)

#2 Privacy Protections for Mixed Reality

Supervisor: Simon Hanisch

Virtual Reality (VR) and Augmented Reality (AR) continue to evolve and become more integrated into our daily lives. The devices required to realize VR/AR connect the virtual world with the physical world by continuously monitoring their users and their surroundings. This continuous monitoring raises new privacy issues, as the people it captures are recorded in unprecedented detail and volume. The goal of this seminar is to explore what privacy technologies exist for users of VR/AR and how they can be categorized and compared.


#3 Attack Resilience of DP

Supervisor: Patricia Guerra-Balboa

Differential Privacy (DP) has become the formal and de facto mathematical standard for privacy-preserving data sharing. Differential Privacy mitigates the risk of Membership Inference Attacks (MIAs) by perturbing data, making it extremely difficult for malicious actors to extract identifying information, even when other auxiliary data sources are available. However, while its resilience against MIAs is direct and has been successfully proven in the uncorrelated case, its resilience against other types of attacks, such as attribute inference or reconstruction attacks, has not been formalized.

The goal of this project is to investigate the different types of attacks in the area of statistical disclosure control and to analyze the protection that DP provides against them. If possible, we will also perform these analyses under different neighborhood definitions of DP.
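As background, the classic Laplace mechanism illustrates the kind of perturbation DP relies on. A minimal sketch (illustrative only; the function name is ours, not from any particular library):

```python
import numpy as np

def laplace_count(true_count: int, epsilon: float, rng) -> float:
    """Answer a counting query with epsilon-DP.

    A counting query has sensitivity 1 (adding or removing one
    record changes the count by at most 1), so Laplace noise
    with scale 1/epsilon satisfies epsilon-DP.
    """
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# Smaller epsilon means a larger noise scale and thus stronger
# protection, at the cost of less accurate answers.
rng = np.random.default_rng(0)
noisy = laplace_count(1000, epsilon=0.5, rng=rng)
```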

#4 Correlation-based attacks against DP protection mechanisms

Supervisor: Patricia Guerra-Balboa

Differential Privacy (DP) has become the formal and de facto mathematical standard for privacy-preserving disclosure. Recently, however, several articles have pointed out shortcomings of this notion; strong correlation in the data is one example. DP inherently assumes that the database is a simple, independent random sample. This implies that the records in the database are identically distributed (i.e., follow the same probability distribution) and independent (in particular, uncorrelated). Unfortunately, this does not hold for many kinds of data and use cases, such as trajectory data.

The goal of this project is to investigate existing empirical attacks that exploit correlation to infer sensitive information about users, and to determine how realistic the theoretical threat that correlation poses to DP is in light of these existing attacks. Time permitting, we will go deeper into the particular case of trajectory data and the possible attacks exploiting autocorrelation in time series.


  • Dwork, C., Roth, A., et al. (2014). The algorithmic foundations of differential privacy. Foundations and Trends® in Theoretical Computer Science, 9 (3-4), 211-407.
  • Wang, H., Xu, Z., Jia, S., Xia, Y., & Zhang, X. (2021). Why current differential privacy schemes are inapplicable for correlated data publishing? World Wide Web, 24 (1), 1-23.
  • Yang, B., Sato, I., & Nakagawa, H. (2015). Bayesian differential privacy on correlated data. In Proceedings of the 2015 acm sigmod international conference on management of data (pp. 747-762).
  • Zhang, T., Zhu, T., Liu, R., & Zhou, W. (2022). Correlated data in differential privacy: definition and analysis. Concurrency and Computation: Practice and Experience, 34 (16), e6015.

#5 A Literature-based Privacy Analysis of Event Cameras

Supervisor: Julian Todt

Neuromorphic vision sensors (event cameras) have recently seen increased attention. As opposed to traditional cameras, they capture changes in brightness (so-called events) instead of brightness itself. This has led to their adoption in various applications such as autonomous vehicles and robotics. There have also been claims that they are more privacy-friendly due to their different mode of operation compared with traditional cameras. More recently, however, there has been work on reconstructing traditional images from event cameras using machine learning, which has in turn prompted the development of anonymization methods. In this seminar, your goal is to analyze the privacy of event cameras and compare it to that of traditional cameras. After an extensive review of the existing literature on event cameras, with a focus on identification methods, anonymization methods, and other privacy aspects, you will analyze their privacy impact and compare it to traditional RGB cameras.
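To make the sensing model concrete, the following hypothetical sketch derives events from two conventional frames by thresholding the change in log brightness, mimicking (in a very idealized way) what an event sensor reports; all names and the threshold value are our own illustrative choices:

```python
import numpy as np

def frames_to_events(prev, curr, threshold=0.2):
    """Emit one event per pixel whose log brightness changed by
    more than `threshold` between two frames.

    Returns (y, x, polarity) triples: +1 for a brightening
    pixel, -1 for a darkening one. Static pixels emit nothing,
    which is the basis of the privacy-friendliness claims.
    """
    diff = np.log1p(curr.astype(float)) - np.log1p(prev.astype(float))
    ys, xs = np.nonzero(np.abs(diff) > threshold)
    polarity = np.sign(diff[ys, xs]).astype(int)
    return list(zip(ys.tolist(), xs.tolist(), polarity.tolist()))

prev = np.zeros((4, 4), dtype=np.uint8)
curr = prev.copy()
curr[1, 2] = 255          # exactly one pixel brightens sharply
events = frames_to_events(prev, curr)
```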


  • Ahmad, Shafiq, Pietro Morerio, and Alessio Del Bue. "Person Re-Identification without Identification via Event Anonymization." arXiv, August 17, 2023.
  • Cedric Scheerlinck, Henri Rebecq, Daniel Gehrig, Nick Barnes, Robert Mahony, Davide Scaramuzza. "Fast Image Reconstruction with an Event Camera." Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2020, pp. 156-163
  • A. Sokolova and A. Konushin, "Human identification by gait from event-based camera," 2019 16th International Conference on Machine Vision Applications (MVA), Tokyo, Japan, 2019, pp. 1-6, doi: 10.23919/MVA.2019.8758019.

#6 Anonymous Communication in Practice

Supervisor: Daniel Schadt

While various designs of anonymous communication networks have been proposed in academia, only a few projects are (or were) actively developed and used in practice. Tor is the most well-known one, but other systems existed historically (e.g., JAP, Mixminion) or are being developed now (e.g., Nym). The goal of this seminar is to provide an overview of a few practical systems, comparing them in both their theoretical claims and guarantees as well as their practical features and implementations.

#7 Neural Mechanisms of Speech Processing

Supervisor: Matin Fallahi

This seminar delves into the neural mechanisms underpinning speech processing in the brain. Participants will engage with contemporary research papers on this subject and write a concise paper summarizing their findings. The objective is to explore how brain activity, as recorded through MEG, EEG, fMRI, and other techniques, can be modeled during language-centric tasks like reading, listening, and speaking. Analyzing primary research papers will foster critical thinking skills and provide a richer understanding of recent advancements in cognitive neuroscience. The focus of this seminar is modeling brain activity during language-related tasks, which is pivotal for unraveling the complex neural networks that facilitate language processing.


  • Millet, J., Caucheteux, C., Boubenec, Y., Gramfort, A., Dunbar, E., Pallier, C., & King, J. R. (2022). Toward a realistic model of speech processing in the brain with self-supervised learning. Advances in Neural Information Processing Systems, 35, 33428–33443.
  • Caucheteux, C., Gramfort, A., & King, J. R. (2023). Evidence of a predictive coding hierarchy in the human brain listening to speech. Nature Human Behaviour.

#8 Attacks on Biometric Authentication Systems

Supervisor: Matin Fallahi

This seminar offers a comprehensive exploration of the vulnerabilities inherent in biometric authentication systems. Participants will delve into the existing literature to understand the various types of attacks that can compromise the integrity and reliability of biometric authentication. The primary objective is to dissect the attack surface of biometric authentication systems to identify potential weak points that malicious actors could exploit. Participants will review a range of academic papers to comprehend the nature and extent of the threats these systems face, thereby fostering a deeper understanding of the security challenges in biometric authentication. The seminar endeavors to answer the critical question: what types of attacks target biometric authentication systems, and how can these vulnerabilities be mitigated to bolster security? The investigation will encompass spoofing attacks, presentation attacks, and other threat vectors that could undermine the efficacy of biometric authentication systems.


  • Roberts C. Biometric attack vectors and defences. computers & security. 2007 Feb 1;26(1):14-25.
  • Galbally J, McCool C, Fierrez J, Marcel S, Ortega-Garcia J. On the vulnerability of face verification systems to hill-climbing attacks. Pattern Recognition. 2010 Mar 1;43(3):1027-38.
  • Wang X, Yan Z, Zhang R, Zhang P. Attacks and defenses in user authentication systems: A survey. Journal of Network and Computer Applications. 2021 Aug 15;188:103080.

#9 qRAM architecture

Supervisor: Shima Hassanpour

Quantum random access memory (qRAM) is a quantum architecture that can serve as a fast implementation of a quantum oracle, acting as an interface between classical data and quantum algorithms. The goal of this seminar is to survey existing physical implementation ideas.


  • Jaques, Samuel, and Arthur G. Rattew. "QRAM: A Survey and Critique." arXiv preprint arXiv:2305.10310 (2023).
  • O’Sullivan, James, et al. "Random-access quantum memory using chirped pulse phase encoding." Physical Review X 12.4 (2022): 041014.

#10 Private Set Intersection

Supervisor: Shima Hassanpour

Private Set Intersection (PSI) is a fundamental privacy-preserving primitive: two parties jointly compute the intersection of their private sets without revealing any elements outside the intersection. The aim of this seminar topic is to survey existing quantum approaches that have been proposed to solve the PSI problem.
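For intuition about the functionality itself, a naive classical "hash-and-compare" sketch is shown below. It is not a secure PSI protocol: with low-entropy items either party can brute-force the hashes, which is exactly why real PSI protocols, classical or quantum, rely on oblivious techniques instead. The function name and inputs are illustrative:

```python
import hashlib

def hashed_psi(client_set, server_set):
    """Naive 'PSI': parties exchange item hashes instead of raw
    items, then the client intersects locally.

    Illustrates the desired functionality only; hashing alone
    does not hide low-entropy items from a brute-force attacker.
    """
    h = lambda item: hashlib.sha256(item.encode()).hexdigest()
    server_hashes = {h(x) for x in server_set}  # sent to client
    return {x for x in client_set if h(x) in server_hashes}

common = hashed_psi({"alice", "bob", "carol"}, {"bob", "dave"})
```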


  • Amoretti, Michele. "Private Set Intersection with Delegated Blind Quantum Computing." In 2021 IEEE Global Communications Conference (GLOBECOM), pp. 1-6. IEEE, 2021.
  • Shi, Run-Hua, and Yi-Fei Li. "Quantum private set intersection cardinality protocol with application to privacy-preserving condition query." IEEE Transactions on Circuits and Systems I: Regular Papers 69, no. 6 (2022): 2399-2411.

#11 Zero-trust: Verification first

Supervisor: Fritz Windisch

The additional security challenges that networks face today, due to the advent of IoT and new attack angles on infrastructure (e.g., insider attackers), have prompted the creation of the zero-trust paradigm. In modern networks, the previous model of a perimeter defense and a trusted demilitarized zone can no longer be used. Zero-trust describes the practice of enforcing authentication for all accessed resources and assigning the minimal possible privileges to all requests on a network, in order to make networks more secure and resilient. A second direction of zero-trust lies in the isolation of network devices, blocking all but the required communication flows of network tenants.

The goal of the work is to give an overview of zero-trust: its approaches, current limitations, and potential future developments.


#12 Network slicing: Isolation of network devices in software-defined networks

Supervisor: Fritz Windisch

In order to protect network connections in critical contexts and to isolate devices at the network level, the concept of network slicing has been established. Recently, network slicing has gained additional attention due to its use in 5G and subsequent standards. Network slicing is an integral part of making networks more resilient and secure, even in zero-trust contexts. As such, it is of utmost importance to be able to provide network slicing even in dynamic software-defined networks (SDN).

The goal of the work is to give an overview of network slicing and existing approaches (single-domain and multi-domain), alongside their security claims and limitations. As there are many projects on network slicing, focusing on the most recent developments is sufficient.


#13 An Introduction to Differentially Private Stochastic Gradient Descent

Supervisor: Felix Morsbach

Stochastic gradient descent (SGD) is an iterative optimization algorithm for finding the parameters that provide the best fit between predicted and actual outputs, and it is widely used to train machine learning models. Private learning has attracted increasing attention in recent years and has, in particular, produced a differentially private (DP) counterpart to SGD, called DP-SGD.

The goal of this seminar topic is to develop a tutorial covering DP-SGD and other DP optimization algorithms for machine learning. The tutorial should cover how optimization algorithms can generally be made differentially private, what DP optimization algorithms exist, and what composition theorems are used to ensure DP.
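The core DP-SGD step (in the spirit of Abadi et al.) can be sketched as follows. This is a simplified numpy illustration with our own names, not a full implementation; real libraries also track the cumulative privacy cost across iterations via a composition/accounting method:

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr, clip_norm,
                noise_mult, rng):
    """One DP-SGD update: clip each per-example gradient to
    clip_norm, sum, add Gaussian noise scaled to the clipping
    bound, average, and take a gradient step.

    Clipping bounds each example's influence (the sensitivity);
    the noise then provides the differential privacy guarantee.
    """
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    noise = rng.normal(0.0, noise_mult * clip_norm,
                       size=params.shape)
    noisy_grad = (np.sum(clipped, axis=0) + noise) / len(clipped)
    return params - lr * noisy_grad

rng = np.random.default_rng(0)
params = np.zeros(2)
grads = [np.array([3.0, 4.0]), np.array([0.3, 0.4])]  # norms 5, 0.5
new_params = dp_sgd_step(params, grads, lr=0.1, clip_norm=1.0,
                         noise_mult=1.0, rng=rng)
```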


  • Abadi, Martin, Andy Chu, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. “Deep Learning with Differential Privacy.” In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, 308–18. 2016.
  • Chaudhuri, Kamalika, Claire Monteleoni, and Anand D. Sarwate. "Differentially Private Empirical Risk Minimization." The Journal of Machine Learning Research 12 (July 1, 2011): 1069–1109.
  • Shokri, Reza, and Vitaly Shmatikov. "Privacy-Preserving Deep Learning." In Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, 1310–21. CCS ’15. Denver, Colorado, USA: Association for Computing Machinery, 2015.
  • Song, Shuang, Kamalika Chaudhuri, and Anand D. Sarwate. "Stochastic Gradient Descent with Differentially Private Updates." In 2013 IEEE Global Conference on Signal and Information Processing, 245–48, 2013.
  • Bassily, Raef, Adam Smith, and Abhradeep Thakurta. "Private Empirical Risk Minimization: Efficient Algorithms and Tight Error Bounds." In 2014 IEEE 55th Annual Symposium on Foundations of Computer Science, 464–73, 2014.
  • Papernot, Nicolas, Shuang Song, Ilya Mironov, Ananth Raghunathan, Kunal Talwar, and Ulfar Erlingsson. "Scalable Private Learning with PATE." In International Conference on Learning Representations 2018, 2018.

#14 Out of the Lab: What Can Membership Inference Attacks Actually Do?

Supervisor: Felix Morsbach

Membership inference attacks are able to infer information about the data used for training machine learning models. This vulnerability of machine learning models is often used as an argument in discussions on the privacy risks of machine learning and artificial intelligence in general. However, while there is much research on this topic and many demonstrations of the vulnerability exist, they are usually based on academic or non-sensitive datasets. Whether (and how) these attacks in their current form actually pose privacy risks is debatable.

The goal of this project is to a) investigate the privacy implications a membership inference attack could have and b) assess whether current state-of-the-art membership inference attacks would actually be capable of causing such harm.
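As a point of reference, one of the simplest membership inference baselines thresholds the model's loss on a candidate sample, exploiting the fact that overfit models assign lower loss to their training members. A hypothetical sketch with made-up loss values (the function name, threshold, and numbers are our own):

```python
import numpy as np

def loss_threshold_mia(losses, threshold):
    """Predict 'member' when a sample's loss is below threshold.

    Overfit models memorize training data and thus tend to
    assign it lower loss than unseen data; stronger attacks
    (e.g., shadow models) refine this basic signal.
    """
    return losses < threshold

# Hypothetical losses for an overfit model: training members
# were memorized (low loss), non-members were not (high loss).
member_losses = np.array([0.05, 0.10, 0.02])
nonmember_losses = np.array([0.90, 1.20, 0.75])
preds_members = loss_threshold_mia(member_losses, threshold=0.5)
preds_nonmembers = loss_threshold_mia(nonmember_losses, threshold=0.5)
```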


  • Shokri, R., M. Stronati, C. Song, and V. Shmatikov. "Membership Inference Attacks Against Machine Learning Models." In 2017 IEEE Symposium on Security and Privacy (SP), 3–18, 2017.
  • Liu, Bo, Ming Ding, Sina Shaham, Wenny Rahayu, Farhad Farokhi, and Zihuai Lin. "When Machine Learning Meets Privacy: A Survey and Outlook." ACM Computing Surveys 54, no. 2 (March 5, 2021): 31:1-31:36.