
Inclusive Privacy Consenting in Public Video Surveillance and Future Directions

Ankur Chattopadhyay, Northern Kentucky University, Jordan Sommer, University of Wisconsin – Green Bay, and Robert Ruska Jr., Northern Kentucky University

September 2020

Mainstream public video-surveillance systems are generally not designed to accommodate under-privileged and under-represented subjects [1, 2]. These subjects include physically-challenged individuals, visually-impaired people, and senior citizens. With the recent enactment of the General Data Protection Regulation (GDPR) [3, 4], all surveillance data subjects, including these under-served populations, need to be offered an opportunity to provide consent, under the Opt-in and Opt-out rule, regarding being recorded on closed-circuit television (CCTV) and other security cameras. Research advances in privacy-enhancing innovations [5, 6] and privacy-mediating features [6, 7] have helped visual-surveillance systems evolve towards offering more subject-centric privacy consenting options. However, these state-of-the-art technologies do not account for under-served people. To be more inclusive in its privacy-mediating design, a video-surveillance system needs to address the difficulties that such technologies pose to under-served subjects. In this article, we discuss this potential gap in research and explore the need for more inclusive privacy consenting functionalities within video-surveillance systems.

I. INCLUSIVITY ISSUES – VIDEO SURVEILLANCE

A. Under-served Populations
Existing literature [1, 2, 8] suggests that the needs of people with disabilities have gone largely unrecognized in public-service technologies involving artificial intelligence (AI). However, recent efforts have been made to make these under-served populations more visible and to understand their challenges. There are multiple perspectives, including ethical grounds, on identifying the needs of these under-served populations. Moreover, the differences in the abilities of these individuals are linked to a history of certain disadvantages [9, 10]. In short, when it comes to studying the needs of disabled individuals, determining disparities in human abilities is a complex matter. From a simplified viewpoint [11], much of today's technology usage revolves around the user's perception, and because disabilities affect an individual's ability to perceive, this should be considered a distinct disadvantage for individuals with disabilities.

B. Ubiquity of Surveillance Cameras
Surveillance cameras are ubiquitous today; a survey in the UK [5, 6, 12] has estimated one surveillance camera in a public space for every 11 people. As of 2018, the consumer video-surveillance market is worth more than $1 billion, reaching over 300 million cameras worldwide [6]. With this global deployment of public video surveillance, and with subjects being captured on camera without their consent, there is a compelling case of privacy invasion and a growing list of privacy concerns [5, 6, 7, 12]. On top of that, with the enforcement of GDPR, it is not just privacy that is at stake; the way privacy-mediation is handled with data subjects also matters [7, 13]. Enabling real-time analytics on video streams from surveillance cameras can be acceptable from a public standpoint if surveillance systems can successfully address the privacy concerns of surveilled subjects and can additionally offer privacy-mediation options to all individuals. It can also be argued that, in order to gain more widespread and universal acceptance, video-surveillance systems need to be more inclusive in design, i.e., they need to provide privacy consenting options that are accessible to all types of individuals, including the under-served populations [14, 15].

C. Opt-in Opt-out Facilities
As part of the required compliance with the new GDPR policy, modern video-surveillance systems need to offer surveilled subjects a fair opportunity to provide or withhold consent to being captured on camera [3, 4]. However, the reality of the situation is that subjects or bystanders, especially under-served people, have little say in whether they agree to being recorded [1, 2, 13]. There has been some recent research progress on this front, with potential techniques having been proposed to handle subject consent as privacy-mediation for visual surveillance [6, 7]. One such approach focuses on gestural interaction with surveillance cameras, enabling subjects to signal their individual preferences about being captured to the camera devices [7]. The benefit of such a privacy-mediation arrangement is that surveilled subjects can explicitly express consent (Opt-in) or disapproval (Opt-out), which is essential for GDPR compliance. Results from this recent research indicate that it is feasible to find human gestures that are suitable, understandable, and socially acceptable for implementing the Opt-in and Opt-out feature in video surveillance. However, a question remains: can this proposed gestural interaction-based design also work for under-served individuals?
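To make the Opt-in/Opt-out idea more concrete, the following minimal Python sketch illustrates one possible privacy-mediation pipeline built with OpenCV. It is an assumption-laden illustration, not the system described in [7]: the function classify_consent_gesture is a hypothetical placeholder for a real gesture classifier, and the bundled Haar-cascade face detector and Gaussian blurring stand in for production-grade subject detection and redaction. The sketch redacts, by default, any subject who has not explicitly opted in, which keeps the feed privacy-preserving when a consent signal is missing or cannot be interpreted.

```python
# Illustrative sketch only (not the system of [7]): detect subjects in a
# surveillance frame, query a consent signal, and redact anyone who has not
# explicitly opted in. Assumes opencv-python is installed;
# classify_consent_gesture() is a hypothetical stub for a gesture classifier.
import cv2

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def classify_consent_gesture(frame, face_box):
    """Hypothetical stub: return 'opt_in', 'opt_out', or 'unknown'.

    A real implementation would run a gesture/pose classifier on the region
    around the detected subject; here we default to 'opt_out' so that
    unresolved subjects are redacted (privacy-preserving by default).
    """
    return "opt_out"

def apply_consent_policy(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.1, 5):
        if classify_consent_gesture(frame, (x, y, w, h)) != "opt_in":
            # Redact (blur) subjects who have not explicitly opted in.
            frame[y:y+h, x:x+w] = cv2.GaussianBlur(
                frame[y:y+h, x:x+w], (51, 51), 0)
    return frame

cap = cv2.VideoCapture(0)  # stand-in for a surveillance camera stream
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("privacy-mediated feed", apply_consent_policy(frame))
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```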

II. TOWARDS INCLUSIVE PRIVACY CONSENTING

As discussed in the previous sections, a potentially unanswered question is: how can privacy consenting in video-surveillance systems be designed more inclusively to accommodate under-served people? Future research addressing this open question should focus on incorporating new functionalities and features within video-surveillance settings that are inclusive of different user abilities, identities, and values [16]. This process would involve re-evaluating existing video-surveillance system designs for accessibility and inclusion [1, 2, 8, 9, 11]. Researchers need to carefully study the challenges posed by current privacy-mediation technologies to different groups of under-served populations, given that the needs of people with disabilities vary from one group to another [14, 15]. Hence, design enhancements made for one group of under-served people may or may not work for another under-served group. Evaluation results of inclusive security and privacy prototypes from other information-technology fields and associated work can help provide design guidance here [14, 15, 16]. For instance, existing work in assistive technologies for visually-impaired users can help in this regard.
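As one concrete illustration of what such an inclusive design enhancement could look like, the short Python sketch below models a consent registry that accepts Opt-in/Opt-out signals through multiple channels, so that subjects who cannot perform camera-visible gestures can still express a preference (for example, via a companion app, a pre-registered preference, or a trusted ally acting on their behalf). All class names, channels, and fields are hypothetical and are not drawn from the cited systems; the sketch merely shows one way to avoid privileging a single consent modality.

```python
# Illustrative sketch only: a GDPR-style consent registry that accepts
# Opt-in/Opt-out signals through multiple channels, so that subjects who
# cannot perform camera-visible gestures (e.g., some physically-challenged
# or visually-impaired individuals) can still express a preference.
# All names are hypothetical, not taken from the cited systems.
from dataclasses import dataclass
from enum import Enum
from typing import Dict, Optional

class Channel(Enum):
    GESTURE = "gesture"                # camera-visible gesture, as in [7]
    MOBILE_APP = "mobile_app"          # companion app or wearable signal
    PRE_REGISTERED = "pre_registered"  # preference filed in advance
    ALLY_PROXY = "ally_proxy"          # trusted ally acting for the subject

class Consent(Enum):
    OPT_IN = "opt_in"
    OPT_OUT = "opt_out"

@dataclass
class ConsentRecord:
    subject_id: str
    decision: Consent
    channel: Channel

class ConsentRegistry:
    def __init__(self) -> None:
        self._records: Dict[str, ConsentRecord] = {}

    def record(self, rec: ConsentRecord) -> None:
        # Latest signal wins regardless of channel, so no single
        # modality is privileged over the others.
        self._records[rec.subject_id] = rec

    def decision_for(self, subject_id: str) -> Consent:
        rec: Optional[ConsentRecord] = self._records.get(subject_id)
        # Default-deny: subjects with no recorded consent are treated
        # as having opted out and should be redacted.
        return rec.decision if rec else Consent.OPT_OUT

# Example: a pre-registered opt-out and an app-based opt-in.
registry = ConsentRegistry()
registry.record(ConsentRecord("subject-42", Consent.OPT_OUT, Channel.PRE_REGISTERED))
registry.record(ConsentRecord("subject-17", Consent.OPT_IN, Channel.MOBILE_APP))
print(registry.decision_for("subject-42").value)  # opt_out
print(registry.decision_for("subject-99").value)  # opt_out (no record)
```

The default-deny rule in decision_for() treats subjects with no recorded preference as opted out, which keeps the hypothetical system privacy-preserving whenever a consent signal is missing or ambiguous.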

Overall, it is desirable to create inclusive privacy design patterns on how to better accommodate people with different disabilities, and even anti-patterns on how to avoid imposing difficulties on under-served people. Prior work on inclusive privacy in related disciplines [8] can be referenced to explain why some prominent theoretical approaches to privacy, developed for traditional privacy challenges, yield unsatisfactory results when applied to scenarios involving under-served people. The construct of "contextual integrity" ties adequate privacy protection to the norms of specific contexts, demanding that information gathering be appropriate to that context and obey its governing norms of distribution [17]. It can be argued that public video surveillance violates the right to privacy because it violates contextual integrity. Similarly, it can be reasoned that the exclusion of accommodations for under-served populations also defies contextual integrity. Based on these arguments, today's flawed video-surveillance design can be seen as a source of "unequal and unfair" treatment of those surveilled, leading to inequality and bias [1, 2].

III. SUMMARY

There are multiple possible ways in which current privacy technologies within video surveillance can be extended to make these systems more inclusive and user-friendly in order to support under-served populations. Although researchers have developed several assistive devices and technologies to help under-served populations facilitate social interactions, technological aids addressing the unique challenges of meeting their privacy-related needs have not been looked into until very recently. Before implementing an inclusive privacy consenting design, it is important to consult potential data subjects, especially from under-served sections of the population, in order to understand their concerns, preferences, specific requirements, and challenges with current technologies that might not be evident to researchers [14, 15]. Since assistive technologies are abandoned at a high rate, a detailed requirements analysis and a thoughtful user-centric approach are essential to an inclusive privacy-mediation design that successfully accommodates people with unique impairments. Looking forward, privacy risk assessments in video-surveillance systems should explicitly take into account under-served subjects, such as disabled and elderly people, who may be unable to provide gestural consent or to communicate their privacy-related choices clearly and easily. In summary, it is high time that video-surveillance system developers focus their attention on the privacy consenting needs of under-served people by pursuing a more inclusive design that caters to a diverse set of subjects, including individuals with disabilities and impairments. Furthermore, this would help them achieve improved compliance with the latest GDPR-related requirements.

References 

  1. Morris, M. R. “AI and Accessibility: A Discussion of Ethical Considerations.” arXiv preprint arXiv:1908.08939, 2019.
  2. Guo, A., Kamar, E., Vaughan, J.W., Wallach, H. and Morris, M.R. “Toward Fairness in AI for People with Disabilities: A Research Roadmap,” arXiv:1907.02227 [cs], 2019.
  3. Šidlauskas, Aurimas. “Video Surveillance and the GDPR.” (2019).
  4. Barnoviciu, Eduard, Veta Ghenescu, Serban-Vasile Carata, Marian Ghenescu, Roxana Mihaescu, and Mihai Chindea. “GDPR compliance in Video Surveillance and Video Processing Application.” in 2019 International Conference on Speech Technology and Human-Computer Dialogue (SpeD), pp. 1-6. IEEE, 2019.
  5. Chattopadhyay, Ankur. “Developing an Innovative Framework for Design and Analysis of Privacy Enhancing Video Surveillance.” PhD diss., University of Colorado, Colorado Springs. Kraemer Family Library, 2016.
  6. Das, A., et al. “Assisting Users in a World Full of Cameras: A Privacy-Aware Infrastructure for Computer Vision Applications,” in 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 1387–1396, 2017.
  7. Koelle, M., Ananthanarayan, S., Czupalla, S., Heuten, W., and Boll, S. “Your smart glasses’ camera bothers me – exploring opt-in and opt-out gestures for privacy mediation,” Proceedings of the 10th Nordic Conference on Human-Computer Interaction – NordiCHI ’18, pp. 473–481, 2018.
  8. Wang, Y. “Inclusive Security and Privacy,” IEEE Security Privacy, vol. 16, no. 4, pp. 82–87, July 2018.
  9. Dosono, B., Hayes, J., and Wang, Y. “‘I’m Stuck!’: A Contextual Inquiry of People with Visual Impairments in Authentication,” Eleventh Symposium on Usable Privacy and Security, pp. 151-168, 2015.
  10. Krahn, G. L., Walker, D. K., and Correa-De-Araujo, R. “Persons with Disabilities as an Unrecognized Health Disparity Population,” Am J Public Health, vol. 105, no. Suppl 2, pp. S198–S206, Apr. 2015.
  11. Bigham, J. P., and Carrington, P. “Learning from the Front: People with Disabilities as Early Adopters of AI.” (2018).
  12. Čas, J., et al. “Introduction: Surveillance, privacy and security.” Surveillance, Privacy and Security. Routledge, pp. 1-12, 2017.
  13. Pappachan, P., et al., “Towards Privacy-Aware Smart Buildings: Capturing, Communicating, and Enforcing Privacy Policies and Preferences,” in 2017 IEEE 37th International Conference on Distributed Computing Systems Workshops (ICDCSW), pp. 193-198, 2017.
  14. Hayes, J., Kaushik, S., Price, C. E. and Wang, Y. “Cooperative Privacy and Security: Learning from People with Visual Impairments and Their Allies,” Fifteenth Symposium on Usable Privacy and Security (SOUPS 2019), 2019.
  15. Ahmed, T., et al. “Understanding the Physical Safety, Security, and Privacy Concerns of People with Visual Impairments,” IEEE Internet Comput., vol. 21, no. 3, pp. 56-63, 2017.
  16. Wobbrock, J.O., et al. “Ability-Based Design: Concept, Principles, and Examples,” ACM Trans. Access. Comput., vol. 3, no. 3, pp. 9:1-9:27, 2011.
  17. Nissenbaum, H. “Privacy as Contextual Integrity,” Washington Law Review, vol. 79, p. 39, 2004.

 

Ankur Chattopadhyay earned his Ph.D. in Computer Science from the University of Colorado at Colorado Springs (UCCS), and is currently an Assistant Professor of Cybersecurity in the Computer Science Department at Northern Kentucky University (NKU). He joined NKU in January 2020, and his research interests include visual privacy, visual trust, computer science & cybersecurity education, privacy-enhancing computer vision & pattern recognition, adversarial thinking & learning in machine vision, and inclusive privacy & security in visual surveillance. He is currently an Editorial Board Member with the IEEE Future Directions Newsletter in Technology, Policy and Ethics. He is an active professional member of IEEE and ACM. He has over 30 peer-reviewed publications, including conference papers, newsletter articles and journal papers. He has more than 15 years of work experience in both academia and industry. Ankur is originally from Kolkata, India, where he earned his Bachelor's in Computer Engineering from the Institute of Engineering & Management (IEM) and was employed with Tata Consultancy Services, a global computer consultancy firm, for almost 7 years. Before joining NKU, he was an Assistant Professor of Computer Science at the University of Wisconsin – Green Bay (UWGB), where he founded and directed the Center of Cybersecurity Education & Outreach. He was the principal investigator (PI) and the project director of the first-ever NSA/NSF GenCyber program in the state of Wisconsin at UWGB, where he led and hosted the GenCyber program for three years. He has also worked with Google and Microsoft as the PI/project lead for the Google IgniteCS and Microsoft TechSpark grant programs at UWGB. His industry profile includes multiple roles like IT Analyst, Software Engineer, and Embedded Systems Engineer.

Jordan Sommer completed a bachelor’s in Computer Science with an emphasis in Information Assurance and Security from the University of Wisconsin at Green Bay (UWGB). His capstone research project focused on inclusive privacy-related issues in public video surveillance, and he has done research, as an undergraduate student, under the supervision of Dr. Ankur Chattopadhyay.

Robert Ruska Jr. is currently a graduate (Master’s) student in the College of Informatics at Northern Kentucky University (NKU), and is presently doing research with Dr. Ankur Chattopadhyay. He completed his bachelor’s in Computer Science, with an emphasis in Information Assurance and Security, from the University of Wisconsin at Green Bay (UWGB). As an undergraduate student, he worked as a lab admin and as a research assistant under Dr. Ankur Chattopadhyay at the UWGB cyber-center. He is a U.S. Army Veteran, and has always had an interest in cybersecurity. He plans on pursuing a Master’s in Cybersecurity at NKU in the near future. His research interests include cybersecurity, cyber-education and inclusive privacy.

Editor: 

Dr. Kashif Saleem is a research scientist, currently working at the Center of Excellence in Information Assurance (CoEIA), King Saud University, as an Assistant Professor since 2012. He received his M.E. and Ph.D. degrees in Electrical Engineering from University Technology Malaysia in 2007 and 2011, respectively. He took professional trainings and certifications from the University of the Aegean, Massachusetts Institute of Technology (MIT), IBM, Microsoft, and Cisco. He has authored several research publications that have been presented and published in renowned conferences, books, and top-tier journals. His professional services include Associate and Guest Editorships, Chair, TPC Member, Invited Speaker, and reviewer for several journals, conferences and workshops. Dr. Saleem has acquired and is running funded scientific research projects in KSA, the EU, and other parts of the world. His research interests mainly include data communication and security, intelligent algorithms, and biologically-inspired computing for IoT, M2M communication, WSN, WMN, MANET, and cyber-physical systems.