Privacy Enhancing Technologies

  • Type: seminar
  • Semester: summer term 2020
  • Place:

    online

  • Lecturer:

    Dr. Patricia Arias Cabarcos, M.Sc. Christiane Kuhn, Dr. Javier Parra-Arnau, M.Sc. Simon Hanisch, M.Sc. Thomas Agrikola

About the seminar

This seminar addresses current research on privacy-enhancing technologies and data protection. Topics are drawn from publications in relevant journals, conferences, and books, and are listed below.

 

The seminar follows the idea of a scientific conference.

  • Each participant will be assigned a topic (topic suggestions will be posted below) on which they write a short paper over the following 6-8 weeks. The paper is then submitted for review. At this point the content of the paper must be complete and the paper fully written; unfinished submissions will be excluded from the remainder of the course.
  • Each participant will then peer-review 2 or 3 papers written by other participants within the next 1-2 weeks. "Reviewing" means reading the paper in depth and making concrete suggestions for improvement.
  • Each participant will receive the reviews of their own work and will revise their paper in the following 1-2 weeks.
  • At the end of the term (date to be announced) the conference will take place. Each participant gives a presentation, which is followed by a short discussion. The grade takes into account the preparation and presentation of the work as well as the quality of the reviews and the report.
     

You can register for this seminar via the corresponding ILIAS course.

Important dates

April 20: Organization intro (online, ILIAS video); topics presentation (online, ILIAS video)
April 27: Kickoff "Reading, Writing, Presenting" (online, ILIAS video)
April 28: Topic preferences due
May 1: Topic assignment
June 30: Paper submission deadline: https://easychair.org/my/conference?conf=spets20
July 7: Reviews due
July 14: Revision deadline
July 24: Presentations

 

 

Topics

Topic 1: Understanding the sparse vector technique of differential privacy
Supervisor: Dr. Javier Parra-Arnau

The sparse vector technique (SVT) is a fundamental tool for designing differentially private algorithms [1]. Its unique feature is that it can output some query answers without apparently paying any privacy cost. Because of these potential savings on the privacy budget, many variants of SVT have been proposed and employed in privacy-preserving data mining and publishing. However, most variants of SVT are actually not private [2].
The aim of this seminar work is to identify the misunderstandings that likely led to these flawed variants, and to study a version of SVT that provides even better utility [2].
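To make the mechanism concrete, the following is a minimal sketch of the standard SVT as analyzed in [2]; the function name, the equal split of the budget, and the use of NumPy are illustrative choices, not part of any specific paper's implementation:

```python
import numpy as np

def sparse_vector(queries, threshold, epsilon, c, sensitivity=1.0, rng=None):
    """Answer a stream of queries with above/below-threshold flags,
    stopping after c 'above' answers. The budget epsilon is split
    between noise on the threshold (added once) and noise on each
    query answer (fresh per query)."""
    rng = np.random.default_rng() if rng is None else rng
    eps1, eps2 = epsilon / 2.0, epsilon / 2.0
    rho = rng.laplace(scale=sensitivity / eps1)  # threshold noise, drawn once
    answers, above_count = [], 0
    for q in queries:
        # fresh Laplace noise for every query; scale grows with c
        nu = rng.laplace(scale=2.0 * c * sensitivity / eps2)
        if q + nu >= threshold + rho:
            answers.append(True)
            above_count += 1
            if above_count >= c:  # budget for 'above' answers exhausted
                break
        else:
            answers.append(False)
    return answers
```

The seemingly "free" answers are the below-threshold ones: only the (at most c) above-threshold outputs consume budget, which is exactly the subtlety that many published variants get wrong.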

[1] C. Dwork, “Differential privacy,” in Proc. Int. Colloq. Automata, Lang., Program. Springer-Verlag, 2006, pp. 1–12.
[2] M. Lyu, D. Su, N. Li, "Understanding the Sparse Vector Technique for Differential Privacy", Sep. 2016, https://arxiv.org/pdf/1603.01699.pdf

 

 

Topic 2: Privacy protection against gait recognition
Supervisor: M. Sc. Simon Hanisch

Human behavior is unique from individual to individual and is influenced by physiological and psychological factors. For example, human gait is unique to a person and is influenced by characteristics like the proportions of the body.
Gait recognition is a popular biometric identification method used to identify people in video footage for surveillance purposes. Compared to face recognition, gait is easier to observe, since it does not require the individual to face the camera directly. Gait is also harder to obfuscate: changing it requires permanent behavioral changes, which is far more difficult than, for example, wearing a mask.
The research problem at hand is to survey techniques for anonymizing gait in videos. Special focus should be on techniques that preserve the utility of the videos.

[1] https://ieeexplore.ieee.org/document/8267657
[2] https://www.sciencedirect.com/science/article/abs/pii/S2214212618304629?via%3Dihub
[3] https://ieeexplore.ieee.org/document/5686920

 

 

Topic 3: Artificial Intelligence meets Privacy Policies
Supervisor: Dr. Patricia Arias Cabarcos

We have the right to be informed about how our data are collected when using online services and what choices are available to protect our privacy. This information is communicated through privacy policies, which are typically overly complex, ambiguous, and too long to be understandable. Research has shown that we would need hundreds of hours per year to read privacy policies [1], and even investing that amount of time, comprehension is not guaranteed (see footnotes 1 and 2). Amid this complexity, it is not surprising that most users simply accept policies without reading them [2], despite the privacy threats that may come with this decision. A recent approach to this problem lies in applying artificial intelligence (AI) to simplify interpretation. Examples include the automated extraction of meaningful information from policies [3] and conversational interfaces where a bot directly answers user questions about a privacy policy [4].

The goal of this seminar work is to survey, categorize, and analyze AI-based proposals to make privacy policies more user-friendly.

[1] McDonald, A.M. and Cranor, L.F., 2008. The cost of reading privacy policies. Isjlp, 4, p.543.
[2] Obar, J.A. and Oeldorf-Hirsch, A., 2020. The biggest lie on the internet: Ignoring the privacy policies and terms of service policies of social networking services. Information, Communication & Society, 23(1), pp.128-147.
[3] Zimmeck, S. and Bellovin, S.M., 2014. Privee: An architecture for automatically analyzing web privacy policies. In 23rd {USENIX} Security Symposium ({USENIX} Security 14) (pp. 1-16).
[4] Harkous, H., Fawaz, K., Lebret, R., Schaub, F., Shin, K.G. and Aberer, K., 2018. Polisis: Automated analysis and presentation of privacy policies using deep learning. In 27th {USENIX} Security Symposium ({USENIX} Security 18) (pp. 531-548). See also: https://pribot.org/
[5] Tesfay, W.B., Hofmann, P., Nakamura, T., Kiyomoto, S. and Serna, J., 2018, April. I read but don't agree: Privacy policy benchmarking using machine learning and the eu gdpr. In Companion Proceedings of the The Web Conference 2018 (pp. 163-166).

1 https://theconversation.com/website-privacy-options-arent-much-of-a-choice-since-theyre-hard-to-find-and-use-124631

2 https://explore.usableprivacy.org/browse/readability/SMOGIndex?view=human

 

 

Topic 4: A survey on Neuroprivacy
Supervisor: Dr. Patricia Arias Cabarcos

The field of Brain-Computer Interfaces has evolved far beyond the medical realm into domains like entertainment, wellness, and marketing. For example, we can nowadays find brain-controlled games or meditation applications (see footnote 1) to be enjoyed with consumer-grade electroencephalogram (EEG) readers. But collecting user brainwaves for these services opens the door to new types of privacy abuse [1]. Since brainwaves correlate with our mental states, cognitive abilities, and medical conditions, it is not hard to imagine brain spyware that infers emotions, prejudices, interests, health disorders, personality traits, or other private data to be used perniciously. Indeed, the feasibility of some of these attacks has already been demonstrated [3]. Furthermore, EEG data abuse poses risks to individual freedom of thought, and therefore to society, which calls for new legal frameworks to regulate "Neuroprivacy" (see footnote 2) [4].

The goal of this seminar work is to survey the state of the art on Neuroprivacy, identifying risks and analyzing existing technical and legal solutions to protect users.

[1] Bonaci, T., Calo, R. and Chizeck, H.J., 2015. App stores for the brain: privacy and security in brain-computer interfaces. IEEE Technology and Society Magazine, 34(2), pp.32-39.
[2] Ienca, M., Andorno, R. Towards new human rights in the age of neuroscience and neurotechnology. Life Sci Soc Policy 13, 5 (2017).
[3] Martinovic, I., Davies, D., Frank, M., Perito, D., Ros, T. and Song, D., 2012. On the feasibility of side-channel attacks with brain-computer interfaces. In Presented as part of the 21st {USENIX} security symposium ({USENIX} Security 12) (pp. 143-158).
[4] Committee on Science and Law, 2005. Are Your Thoughts Your Own?: “Neuroprivacy” and the Legal Implications of Brain Imaging. Record-Association of the Bar of the City of New-York. 60(2), p.407.
[5] Samuel, S. "Brain-reading tech is coming. The law is not ready to protect us." Vox, Dec. 2019. Online: https://www.vox.com/2019/8/30/20835137/facebook-zuckerberg-elon-musk-brain-mind-reading-neuroethics

1 https://store.neurosky.com/collections/apps

2 Also called "brain privacy"; defined as the rights people have regarding the imaging, extraction, and analysis of neural data from their brains.

 

 

Topic 5: Onion Routing Properties
Supervisor: M. Sc. Christiane Kuhn

To protect users' privacy while browsing online, Tor (The Onion Router) was designed and is now the dominant tool in practice. Various scientific works try to formally capture the protection that the underlying technique, called onion routing, offers. However, these works use differing models and ideal functionalities, and arrive at different properties that they require a secure onion routing protocol to fulfill.
The goal of this seminar work is to provide an overview of these different properties, compare them, and explain their differences.

[1] M. Backes, I. Goldberg, A. Kate, and E. Mohammadi. Provably secure and practical onion routing. In 2012 IEEE 25th Computer Security Foundations Symposium, pages 369–385. IEEE, 2012.
[2] A. Kate, G. M. Zaverucha, and I. Goldberg. Pairing-based onion routing with improved forward secrecy. ACM Transactions on Information and System Security (TISSEC), 13(4):29, 2010.
[3] G. Danezis and I. Goldberg. Sphinx: A compact and provably secure mix format. In Security and Privacy, 2009 30th IEEE Symposium on, pages 269–282. IEEE, 2009.
[4] C. Kuhn, M. Beck, and T. Strufe. Breaking and (Partially) Fixing Provably Secure Onion Routing. arXiv e-prints, page arXiv:1910.13772, 2019.

 

Topic 6: Secure Search on Encrypted Data
Supervisor: M.Sc. Thomas Agrikola

Protocols for secure search on encrypted data enable clients to (1) securely upload a data array x = (x[1], . . . , x[n]) to an untrusted honest-but-curious server, where data may be uploaded over time, and (2) securely issue repeated search queries q, retrieving the first matching element (i', x[i']) with i' = min{i | isMatch(x[i], q) = 1}. Further matching elements can be retrieved with additional interaction. The secure search protocol introduced in [1] uses fully homomorphic encryption (FHE) to encrypt both the data and the queries, and improves on previous secure search protocols in terms of post-processing and speed.
The goal of this seminar work is to understand the protocol of [1] and compare it to other secure search protocols from the literature.
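As a point of reference, the functionality the protocol computes can be sketched in plaintext as follows; the function and parameter names are illustrative, and the real protocol of [1] evaluates this under FHE so the server learns neither the data, the query, nor the result:

```python
def first_match(x, q, is_match):
    """Plaintext reference of the secure search functionality:
    return the first (1-based index, element) pair whose element
    matches the query q, i.e. (i', x[i']) with
    i' = min{i | is_match(x[i], q) == 1}, or None if no element matches."""
    for i, elem in enumerate(x, start=1):
        if is_match(elem, q):
            return (i, elem)
    return None
```

For example, `first_match([4, 7, 7, 2], 7, lambda e, q: e == q)` returns `(2, 7)`: only the first match is reported, and later matches would require additional interaction, mirroring the protocol's interface.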

[1] Adi Akavia et al. “Setup-Free Secure Search on Encrypted Data: Faster and Post-Processing Free”. In: PoPETs 2019.3 (2019), pp. 87–107. doi: 10.2478/popets-2019-0038. url: https://doi.org/10.2478/popets-2019-0038.