Seminar Privatsphäre und Sicherheit

  • Type: seminar
  • Chair: KIT-Fakultäten - KIT-Fakultät für Informatik - KASTEL – Institut für Informationssicherheit und Verlässlichkeit - KASTEL Strufe
  • Semester: Winter 2024/2025
  • Lecturer:

    Prof. Dr. Thorsten Strufe
    Patricia Guerra Balboa

  • SWS: 2
  • Lv-No.: 2400118
Content

The seminar covers current topics in the research area of technical data protection.

This includes for example:

  • Anonymous communication
  • Network security
  • Anonymous online services
  • Anonymous digital payment systems
  • Evaluation of the anonymity of online services
  • Anonymized publication of data (differential privacy, k-anonymity)
  • Transparency/awareness enhancing systems
  • Support in understanding media
Language: English

About

In this seminar, students will research one of the given topics (see below) to write a short seminar paper during the semester. At the end, they will present their paper to their peers and engage in discussions about the various topics.

The seminar aims to teach students three things:

  • Technical knowledge in various areas of security and privacy.
  • Basic skills in scientific research, paper writing, and scientific writing style.
  • The basics of the scientific process, including how conferences and peer review work.

Organisation

To register for the course, please sign up in the ILIAS course. There is a limited number of slots available.

Schedule:

  • October 22, 2024, 14:00–15:30: Introduction (Organization & Topics), Room 252 (50.34)
  • October 27, 2024: Topic preferences due
  • October 28, 2024: Topic assignment
  • October 29, 2024, 14:00–15:30: Lecture on Basic Skills, Room 252 (50.34)
  • January 26, 2025: Paper submission deadline
  • February 2, 2025: Reviews due
  • February 9, 2025: Revision deadline
  • ~ February 18, 2025: Presentations (TBD)

Topics

This is a preliminary list of available topics.

#1 Selective forwarding attacks in modern networks

Supervisor: Fritz Windisch

Selective forwarding attacks are typically used to hinder communication between certain parties without being noticed. By dropping packets strategically, an adversary can try to degrade the performance of a connection without this being too obvious, for example by dropping acknowledgements and thus triggering a flood of retransmissions. Traditionally, selective forwarding attacks have been studied intensively in resource-constrained and dynamic networks such as wireless ad-hoc networks. Even in modern networks, routing traffic along multiple paths does not fully prevent the problem. This is especially critical in mission-critical environments with strict QoS requirements, such as remote surgery, where individual nodes run by different operators might behave maliciously.

The goal of this topic is to collect state-of-the-art approaches for detecting and mitigating selective forwarding attacks and to rate them by their applicability in different domains (wireless/wired), their claimed accuracy, and their advantages and drawbacks.
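
To make the ACK-dropping example above concrete, here is a small simulation sketch (my own illustration, not part of the seminar material; all parameters are made up): a stop-and-wait sender retransmits each packet until it sees an acknowledgement, while a malicious relay silently drops a fraction of the returning ACKs.

```python
import random

def run_transfer(num_packets: int, ack_drop_prob: float, seed: int = 0) -> int:
    """Total number of (re)transmissions a stop-and-wait sender needs when a
    malicious relay silently drops each returning ACK with probability ack_drop_prob."""
    rng = random.Random(seed)
    transmissions = 0
    for _ in range(num_packets):
        acked = False
        while not acked:
            transmissions += 1                 # transmit (or retransmit) the packet
            # The data packet itself is forwarded normally; only the ACK may be
            # dropped, which makes the loss hard to attribute to the relay.
            acked = rng.random() >= ack_drop_prob
    return transmissions

if __name__ == "__main__":
    for p in (0.0, 0.1, 0.3, 0.5):
        total = run_transfer(num_packets=1000, ack_drop_prob=p)
        print(f"ACK drop probability {p:.1f}: {total} transmissions for 1000 packets")
```

Even moderate ACK-drop rates roughly multiply the traffic by 1/(1 - p), which illustrates why such an attack degrades performance while remaining hard to pin on a single relay.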

References:

#2 Out of the Lab: Privacy Threats of Machine Learning

Supervisor: Felix Morsbach

Machine learning models have been demonstrated to be vulnerable to inference attacks (e.g. membership inference). Such attacks can infer information about the data used for training a given machine learning model. This can violate the confidentiality of the training data and, in scenarios in which the training data comprises personal information, might even cause a privacy (and/or data protection) violation.

This threat is often used as an argument in discussions on the privacy implications of machine learning. However, whether the information gained from such inference attacks actually poses a privacy risk is debatable. For example, the demonstrated attacks are usually conducted in sterile lab environments, and the information gained from such an attack is probabilistic.

Determining the privacy risk due to the vulnerability of machine learning models to inference attacks is difficult. Thus, as one part of this puzzle, the goal of this project is to survey the machine learning privacy inference attack literature and categorize possible privacy harms of said attacks.
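
As a hedged illustration of the kind of attack discussed here, the sketch below runs a minimal loss-threshold membership inference attack against a deliberately overfit model; the synthetic data, model choice, and threshold are assumptions made purely for illustration and do not reproduce any specific paper's setup.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, y_train = X[:1000], y[:1000]      # "members" (training data)
X_out, y_out = X[1000:], y[1000:]          # "non-members"

# Deliberately overfit target model; overfitting amplifies membership leakage.
target = RandomForestClassifier(n_estimators=50, max_depth=None, random_state=0)
target.fit(X_train, y_train)

def per_example_loss(model, X, y):
    """Cross-entropy loss of the true label for each example."""
    probs = model.predict_proba(X)
    return -np.log(np.clip(probs[np.arange(len(y)), y], 1e-12, None))

loss_in = per_example_loss(target, X_train, y_train)
loss_out = per_example_loss(target, X_out, y_out)

# Attack: guess "member" whenever the example's loss falls below a threshold.
threshold = np.median(np.concatenate([loss_in, loss_out]))
guesses = np.concatenate([loss_in, loss_out]) < threshold
truth = np.concatenate([np.ones_like(loss_in), np.zeros_like(loss_out)]).astype(bool)
print(f"membership inference accuracy: {(guesses == truth).mean():.2f} (0.5 = random guessing)")
```

Note that the attack's output is exactly the kind of probabilistic signal discussed above: an accuracy above 0.5 demonstrates leakage in the lab, but translating it into a concrete privacy harm is the open question of this topic.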

References:

  • Shokri, R., M. Stronati, C. Song, and V. Shmatikov. “Membership Inference Attacks Against Machine Learning Models.” In 2017 IEEE Symposium on Security and Privacy (SP), 3–18, 2017. https://doi.org/10.1109/SP.2017.41
  • Liu, Bo, Ming Ding, Sina Shaham, Wenny Rahayu, Farhad Farokhi, and Zihuai Lin. “When Machine Learning Meets Privacy: A Survey and Outlook.” ACM Computing Surveys 54, no. 2 (March 5, 2021): 31:1-31:36. https://doi.org/10.1145/3436755.
  • Citron, Danielle Keats, and Daniel J. Solove. "Privacy harms." BUL Rev. 102 (2022): 793.

#3 Choosing things privately with Differential Privacy

Supervisor: Patricia Guerra Balboa

Many statistical analyses involve selecting an element from a set of possibilities. For example, if we want to predict a traffic jam, we need to select, from the set of all possible roads, the most congested one, i.e., the road on which a driver is most likely to get stuck. At the same time, this selection process involves private information about the users: to determine the most congested road, we may need access to individuals' current locations.

Differential Privacy (DP) has become the formal, de facto mathematical standard for privacy-preserving data analysis in such situations. Hence, we find various algorithms in the literature that allow us to “choose things privately”. Now the question arises: which one should we choose? The goal of this seminar topic is to understand the state of the art of DP selection mechanisms and to compare them in terms of privacy and utility, building a systematization that tells us which mechanism is worth using under which circumstances.
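
As a hedged illustration of one such selection mechanism, the sketch below implements the exponential mechanism discussed in the references, applied to a made-up version of the congested-road example; the road names, scores, and sensitivity are assumptions for illustration only.

```python
import math
import random

def exponential_mechanism(candidates, scores, epsilon, sensitivity, rng=None):
    """Select one candidate with probability proportional to
    exp(epsilon * score / (2 * sensitivity)) -- the exponential mechanism."""
    rng = rng or random.Random()
    weights = [math.exp(epsilon * s / (2 * sensitivity)) for s in scores]
    total = sum(weights)
    r = rng.uniform(0, total)
    cumulative = 0.0
    for candidate, weight in zip(candidates, weights):
        cumulative += weight
        if r <= cumulative:
            return candidate
    return candidates[-1]

# Hypothetical inputs: the "most congested road" example, where the score is the
# number of users on each road and adding/removing one user changes a score by 1.
roads = ["Road A", "Road B", "Road C", "Road D"]
congestion = [120, 340, 85, 40]
print(exponential_mechanism(roads, congestion, epsilon=0.5, sensitivity=1))
```

Other mechanisms from the references, such as report noisy max or permute-and-flip, solve the same selection problem with different privacy/utility trade-offs, which is exactly what the systematization in this topic should capture.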

References:

  • Desfontaines, D. (2023, 10). Choosing things privately with the exponential mechanism. https://desfontain.es/privacy/choosing-things-privately.html (Ted is writing things (personal blog))
  • Dwork, C., Roth, A., et al. (2014). The algorithmic foundations of differential privacy. Foundations and Trends® in Theoretical Computer Science, 9 (3–4), 211–407.
  • McKenna, R., & Sheldon, D. R. (2020). Permute-and-flip: A new mechanism for differentially private selection. Advances in Neural Information Processing Systems, 33, 193–203.

#4 A survey on Beamforming-based Wireless Sensing

Supervisor: Julian Todt

Wireless Sensing is a type of Joint Communication and Sensing (JCaS) that uses WiFi networks to extract information about the network's environment while transmitting data. For example, it has been shown that activity recognition and person identification are possible based on the signal propagation characteristics in WiFi networks. The most commonly used characteristics are the Received Signal Strength Indicator (RSSI) and Channel State Information (CSI), which implicitly contain information about the environment. More recently, access points use beamforming to increase bandwidth, for which devices send explicit information about the environment to the access point, unencrypted. Beamforming is therefore a promising new data source for Wireless Sensing, as the required information is supposedly easier to extract.

In this seminar, the student should survey wireless sensing approaches that use beamforming. The goal is to categorize existing approaches in this area, summarize the state of the art, and draw conclusions.

References:

  • Cao, Jiannong, and Yanni Yang. Wireless Sensing: Principles, Techniques and Applications. Wireless Networks. Cham: Springer International Publishing, 2022. https://doi.org/10.1007/978-3-031-08345-7
  • Ma, Yongsen, Gang Zhou, and Shuangquan Wang. “WiFi Sensing with Channel State Information: A Survey.” ACM Computing Surveys 52, no. 3 (May 31, 2020): 1–36. https://doi.org/10.1145/3310194
  • Haque, Khandaker Foysal, Milin Zhang, Francesca Meneghello, and Francesco Restuccia. “BeamSense: Rethinking Wireless Sensing with MU-MIMO Wi-Fi Beamforming Feedback.” arXiv, March 16, 2023. http://arxiv.org/abs/2303.09687

#5 Survey on Linkability Attacks in 5G-AKA

Supervisor: Kamyar Abedi

The 5G Authentication and Key Agreement (5G-AKA) protocol is a critical authentication protocol in 5G networks. It ensures mutual authentication between the User Equipment (UE) and the network while maintaining user privacy. However, despite improvements over its predecessor, 5G-AKA remains vulnerable to certain attacks, particularly those aiming to compromise user privacy through linkability attacks.

Linkability attacks pose a significant threat to the privacy and security of users in 5G networks. The vulnerability arises from the exposure of certain identifiers or cryptographic material during the authentication process, allowing adversaries, especially those in roaming networks, to link different sessions to the same user. Understanding the nature of these attacks and reviewing the countermeasures proposed in the literature are crucial steps toward reinforcing 5G network security.

The primary goal of this seminar project is to survey existing research on linkability attacks in 5G-AKA and the countermeasures proposed to mitigate them, as follows:

  1. Explore how linkability attacks exploit vulnerabilities in the 5G-AKA protocol.
  2. Analyze the technical details of a few key research papers that have contributed to this area, focusing on their methods, attack models, and findings.
  3. Examine the countermeasures proposed in these papers.

Through a detailed exploration of the technical aspects of these attacks and defenses, this seminar project aims to contribute to the understanding of linkability issues in 5G-AKA and of the effectiveness of the proposed countermeasures.
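
To give a feel for the underlying mechanism, the following toy model (my own deliberate simplification of AKA-style authentication, not the 3GPP protocol) shows the well-known failure-message linkability idea: replaying a previously captured challenge lets an attacker tell the target subscriber apart from any other, because only the target answers with a synchronization failure rather than a MAC failure.

```python
import hmac, hashlib, os

def mac(key: bytes, *parts: bytes) -> bytes:
    """Toy stand-in for the AKA message authentication function."""
    return hmac.new(key, b"".join(parts), hashlib.sha256).digest()

class UE:
    def __init__(self, key: bytes):
        self.key = key
        self.sqn = 0  # highest sequence number accepted so far

    def respond(self, rand: bytes, sqn: int, tag: bytes) -> str:
        if not hmac.compare_digest(tag, mac(self.key, rand, sqn.to_bytes(8, "big"))):
            return "MAC_FAILURE"          # challenge was not produced for this UE
        if sqn <= self.sqn:
            return "SYNC_FAILURE"         # valid MAC but stale sequence number (replay)
        self.sqn = sqn
        return "OK"

# The network authenticates the target once; the attacker records the challenge.
target, other = UE(os.urandom(32)), UE(os.urandom(32))
rand, sqn = os.urandom(16), 1
recorded = (rand, sqn, mac(target.key, rand, sqn.to_bytes(8, "big")))
assert target.respond(*recorded) == "OK"

# Later, the attacker replays the recorded challenge to an unknown UE:
# SYNC_FAILURE => it is the target, MAC_FAILURE => it is someone else.
print("replay to target:", target.respond(*recorded))   # SYNC_FAILURE
print("replay to other :", other.respond(*recorded))    # MAC_FAILURE
```

The papers below formalize this and related linkability vectors for 5G-AKA and discuss protocol changes that remove the distinguishable failure behaviour.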

References:

  • A. Koutsos, "The 5G-AKA Authentication Protocol Privacy," 2019 IEEE European Symposium on Security and Privacy (EuroS&P), Stockholm, Sweden, 2019, pp. 464-479, doi: 10.1109/EuroSP.2019.00041.
  • Wang, Yuchen, Zhenfeng Zhang, and Yongquan Xie. "Privacy-Preserving and Standard-Compatible AKA Protocol for 5G." 30th USENIX Security Symposium (USENIX Security 21). 2021.

#6 Private Set Intersection

Supervisor: Shima Hassanpour

Secure Multiparty Computation (SMPC) is a family of protocols that allow distributed parties to compute a function on their private inputs. Private Set Intersection (PSI) is one of the most important privacy problems in this domain. The challenge in PSI is how two or more parties can compute the intersection of their sets without revealing any information about the elements outside the intersection. The aim of this seminar topic is to survey existing approaches that have been proposed to solve the PSI problem.
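
As a hedged sketch of one classic family of PSI protocols covered by the surveys below, the following toy implements Diffie-Hellman-style PSI: both parties blind hashed elements with secret exponents, and because exponentiation commutes, doubly blinded values match exactly for elements in both sets. The modulus, exponents, and hashing here are illustrative assumptions, far from a secure instantiation.

```python
import hashlib

P = 2**255 - 19          # toy prime modulus (illustrative only)

def h(element: str) -> int:
    """Hash an element into the multiplicative group mod P (toy hashing)."""
    return int.from_bytes(hashlib.sha256(element.encode()).digest(), "big") % P

alice_set = {"alice@example.com", "bob@example.com", "carol@example.com"}
bob_set = {"bob@example.com", "carol@example.com", "dave@example.com"}
a_key, b_key = 0x1234567, 0x7654321   # secret exponents (toy values; random in practice)

# Round 1: each party blinds the hashes of its own elements with its secret key.
alice_blinded = {x: pow(h(x), a_key, P) for x in alice_set}
bob_blinded = {pow(h(y), b_key, P) for y in bob_set}

# Round 2: each party re-blinds the other's values. Exponentiation commutes, so an
# element in both sets ends up as the same value h(x)^(a*b) regardless of order.
alice_double = {x: pow(v, b_key, P) for x, v in alice_blinded.items()}
bob_double = {pow(v, a_key, P) for v in bob_blinded}

intersection = {x for x, v in alice_double.items() if v in bob_double}
print(sorted(intersection))   # ['bob@example.com', 'carol@example.com']
```

Other approaches from the references, such as OT-extension-based PSI, achieve much better performance at scale; comparing these families is part of the survey.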

References:

  • Pinkas, Benny, Thomas Schneider, and Michael Zohner. "Scalable private set intersection based on OT extension." ACM Transactions on Privacy and Security (TOPS) 21.2 (2018): 1-35.
  • Morales, Daniel, Isaac Agudo, and Javier Lopez. "Private set intersection: A systematic literature review." Computer Science Review 49 (2023): 100567.

#7 qRAM architecture

Supervisor: Shima Hassanpour

Quantum random access memory (qRAM) is a quantum architecture that can serve as a fast implementation of a quantum oracle, acting as an interface between classical data and quantum algorithms. The goal of this seminar is to survey existing physical implementation ideas.

References:

  • Jaques, Samuel, and Arthur G. Rattew. "QRAM: A Survey and Critique." arXiv preprint arXiv:2305.10310 (2023).
  • Xu, Shifan, et al. "Systems architecture for quantum random access memory." Proceedings of the 56th Annual IEEE/ACM International Symposium on Microarchitecture. 2023.
  • Asaka, Ryo, Kazumitsu Sakai, and Ryoko Yahagi. "Quantum random access memory via quantum walk." Quantum Science and Technology 6.3 (2021): 035004.
  • Giovannetti, Vittorio, Seth Lloyd, and Lorenzo Maccone. "Quantum random access memory." Physical review letters 100.16 (2008): 160501.

#8 Composition in Differential Privacy

Supervisor: Àlex Miranda Pascual

There is no doubt that differential privacy (DP) is one of the most widely used and well-known privacy metrics in the literature. DP has a clear advantage over other privacy notions: It possesses the key property of composability. A new DP mechanism can be formed by composing a finite number of given DP mechanisms. Moreover, the DP composition theorems offer an accurate and reliable means of quantifying any privacy loss incurred by the newly composed DP mechanism. For these reasons, the advantages of DP composition are indisputable and widely recognised throughout the privacy community. Composability is essential for building most DP algorithms. Furthermore, without composition, it is impossible to calculate the privacy protection of adaptive updates (e.g., in a streaming scenario or model learning).

These factors contribute to the considerable popularity of composition as a tool. The majority of papers that employ composition use only the simplest results, namely independent sequential and parallel composition. However, there are numerous other composition techniques that are less commonly employed and remain relatively unknown. The objective of the seminar is to examine the various composition techniques used in the existing literature and to categorize them based on their applicability, input requirements, and the formula of the final privacy loss.
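
For orientation, the following minimal sketch spells out the two classical results mentioned above as simple budget-accounting functions (pure epsilon-DP only); the per-query budgets in the example are assumptions for illustration.

```python
def sequential_composition(epsilons):
    """Mechanisms run on the *same* data: the privacy budgets add up."""
    return sum(epsilons)

def parallel_composition(epsilons):
    """Mechanisms run on *disjoint* partitions of the data:
    the total budget is only the worst single budget."""
    return max(epsilons)

# Hypothetical example: five counting queries, each answered with epsilon = 0.2.
per_query = [0.2] * 5
print("same data (sequential):", sequential_composition(per_query))        # 1.0
print("disjoint partitions (parallel):", parallel_composition(per_query))  # 0.2
```

The more advanced results to be classified in this topic (advanced composition, adaptive composition, granularity-aware generalizations) replace these simple formulas with tighter or more broadly applicable bounds.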

References: Desfontaines [1] provides an excellent introduction to composability and some of the lesser-known composition theorems to be included in the classification. For an overview of the classical composition results, please refer to the beginning of Section 3.5 of [2] and [3]. Additionally, Guerra-Balboa et al. [4] provide an introduction to composition and some theorem generalizations.

  1. D. Desfontaines, “Open problem(s) - How generic can composition results be?”, DifferentialPrivacy.org, https://differentialprivacy.org/open-problems-how-generic-can-composition-be/
  2. C. Dwork and A. Roth, “The algorithmic foundations of differential privacy”, Found. Trends Theor. Comput. Sci., vol. 9, no. 3–4, pp. 211–407, August 2014, doi: 10.1561/0400000042.
  3. F. McSherry, “Privacy integrated queries”, in Proceedings of the 2009 ACM SIGMOD International Conference on Management of Data (SIGMOD), Association for Computing Machinery, Jun. 2009. doi: 10.1145/1559845.1559850.
  4. P. Guerra-Balboa, À. Miranda-Pascual, J. Parra-Arnau, and T. Strufe, “Composition in differential privacy for general granularity notions”, in 2024 IEEE 37th Computer Security Foundations Symposium (CSF), Jul. 2024, pp. 680–696. doi: 10.1109/CSF61375.2024.00004.

#9 The Evolution of Hidden Services in Tor

Supervisor: Daniel Schadt

Tor is a well-known and widely used system for anonymous communication. As one of its features it provides hidden services, which are services whose location/identity is protected from those accessing them. While hidden services have been part of Tor since its inception [1] and their core concept has stayed the same, many improvements have been made over the years to better protect the identity and availability of those services.

The goal of this seminar is to provide an overview of the evolution of the hidden service protocol and the improvements made to it to withstand more types of attacks, such as the introduction of Vanguards or Proof-of-Work. The improvements should be classified based on what aspect they improve (e.g. privacy, availability, performance, ...).

References:

#10 Privacy Protections for Mixed Reality

Supervisor: Simon Hanisch

Virtual Reality (VR) and Augmented Reality (AR) continue to evolve and become more integrated into our daily lives. The devices required to realize VR/AR connect the virtual world with the physical world by continuously monitoring their users and their surroundings. This continuous monitoring raises new privacy issues, as the people it captures are recorded in unprecedented quality and quantity. The goal of this seminar is to explore what privacy technologies exist for users of VR/AR and how they can be categorized and compared.

References:

#11 Selective Failure Attacks on Private Information Retrieval

Supervisor: Christoph Coijanovic

Private Information Retrieval (PIR) is a useful building block of anonymous communication because it allows a client to retrieve a record from a server's database without the server knowing which record the client wants. Most PIR protocols are vulnerable to selective failure attacks: Suppose the server suspects that a client is interested in the ith record. If the server corrupts only that record, the client's post-retrieval behavior may reveal whether it really wanted the ith record (e.g., by re-retrieving due to the corruption).

Recently, there has been much interest in making PIR secure against selective failure attacks [1-5].

The goal of this seminar paper is to understand how selective failure attacks work, under what conditions a PIR scheme is vulnerable to them, and how they can be mitigated.
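
To make the setting concrete, the sketch below implements the classic two-server XOR-based PIR scheme (my own toy illustration, not one of the referenced constructions) and then shows how planting a corrupted value at a suspected position surfaces in the client's output exactly when the client asked for that record.

```python
import secrets

DB_SIZE = 8
database = [(i * 37 + 5) % 251 for i in range(DB_SIZE)]   # toy records (small integers)

def answer(db, query_bits):
    """Server side: XOR together the records selected by the query bit-vector."""
    out = 0
    for record, bit in zip(db, query_bits):
        if bit:
            out ^= record
    return out

def retrieve(index, db_at_server1, db_at_server2):
    """Client side: send a random bit-vector to server 1 and the same vector with
    the target bit flipped to server 2; the XOR of both answers is the record."""
    q1 = [secrets.randbelow(2) for _ in range(DB_SIZE)]
    q2 = list(q1)
    q2[index] ^= 1
    return answer(db_at_server1, q1) ^ answer(db_at_server2, q2)

print(retrieve(3, database, database) == database[3])   # True: honest servers

# Selective failure, informally: the servers plant a corrupted value at the position
# they suspect the client wants. The corruption reaches the client's output exactly
# when the client really asked for that record; any observable reaction to the bad
# record (e.g. an immediate re-retrieval) then leaks the client's interest.
corrupted = list(database)
corrupted[3] ^= 0xFF
print(retrieve(3, corrupted, corrupted) == database[3])   # False: suspicion confirmed
print(retrieve(5, corrupted, corrupted) == database[5])   # True: corruption cancels out
```

The referenced papers consider the analogous problem for single-server and authenticated PIR and propose ways to let the client detect or tolerate such corruptions without reacting observably.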

References:

  1. Colombo, Simone et al. “Authenticated private information retrieval.” IACR Cryptology ePrint Archive (2023).
  2. Dietz, Marian and Stefano Tessaro. “Fully Malicious Authenticated PIR.” IACR Cryptology ePrint Archive (2023).
  3. Wang, Yinghao et al. “Crust: Verifiable And Efficient Private Information Retrieval with Sublinear Online Time.” IACR Cryptol. ePrint Arch. 2023 (2023): 1607.
  4. Castro, Leo de and Keewoo Lee. “VeriSimplePIR: Verifiability in SimplePIR at No Online Cost for Honest Servers.” IACR Cryptol. ePrint Arch. 2024 (2024): 341.
  5. Falk, Brett Hemenway et al. “Malicious Security for PIR (almost) for Free.” IACR Cryptol. ePrint Arch. 2024 (2024): 964.

#12 Homomorphic Encryption for Privacy-Preserving Biometric Authentication

Supervisor: Matin Fallahi

Biometric systems offer a promising alternative to passwords, but privacy concerns are growing over the sensitive information that can be extracted from biometric templates. Meanwhile, Homomorphic Encryption (HE) allows computations on encrypted data without revealing the underlying information, making HE a potential solution to protect biometric data during authentication. This seminar will explore how HE can address these privacy concerns. We will investigate recent research in this area, focusing on threat models.
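
As one hedged illustration of the basic pattern, the sketch below assumes the python-paillier library (`phe`) and shows a server homomorphically computing the squared Euclidean distance between its stored reference template and a client's encrypted probe. The decision step is simplified: here the client decrypts the distance, whereas deployed protocols add a private comparison so that only the accept/reject decision is revealed. The templates and threshold are toy values.

```python
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

reference = [3, 1, 4, 1, 5]     # server's enrolled template (plaintext, toy values)
probe = [3, 0, 4, 2, 5]         # client's fresh measurement

# Client: encrypt each feature and its square, then send only ciphertexts.
enc_probe = [public_key.encrypt(x) for x in probe]
enc_probe_sq = [public_key.encrypt(x * x) for x in probe]

# Server: homomorphically assemble sum_i (x_i^2 - 2*y_i*x_i + y_i^2) without ever
# decrypting the probe (ciphertext addition, multiplication by plaintext scalars).
enc_dist = public_key.encrypt(0)
for ex, ex2, y in zip(enc_probe, enc_probe_sq, reference):
    enc_dist = enc_dist + ex2 - ex * (2 * y) + (y * y)

# Client: decrypt the distance and compare it against a matching threshold.
distance = private_key.decrypt(enc_dist)
print("squared distance:", distance, "-> accept" if distance <= 4 else "-> reject")
```

Who holds the keys, who learns the distance, and how the final comparison is performed are exactly the threat-model questions this seminar topic should examine across the recent literature.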

References: