
The HAIC public outreach initiative aims to make cybersecurity more accessible to a broader audience. As part of this initiative, we are organizing HAIC Talks, a series of public lectures on contemporary topics in cybersecurity. In the style of studia generalia, these lectures are free and open to everyone. No background knowledge in cybersecurity is required. HAIC Talks are made possible through the generous support of the Aalto University School of Science.

Sign up for our HAIC Talks mailing list to hear about future events.




 


Description: We are seeing a persistent gap between the theoretical security of, e.g., cryptographic algorithms and real-world vulnerabilities, data breaches, and possible attacks. Software developers – despite being computer experts – are rarely security experts, and security and privacy are usually, at best, of secondary importance to them. They may not have training in security and privacy or even be aware of the possible implications, and they may be unable to allocate time or effort to ensure that security and privacy best practices and design principles are upheld for their end-users. Understanding their education and mindsets, their processes, the tools that they use, and their pitfalls is the foundation for shifting development practices to be more secure. This talk will give an overview of security challenges for developers and research avenues to address them.

About the speaker: Yasemin Acar is a Research Group Leader at MPI-SP, where she focuses on human factors in computer security. Her research centers humans, their comprehension, behaviors, wishes, and needs. She aims to better understand how software can enhance users’ lives without putting their data at risk. Her recent focus has been on human factors in secure development, investigating how to help software developers implement secure software development practices. Her research has shown that working with developers on these issues can resolve problems before they ever affect end users. She was a visiting scholar at the National Institute of Standards and Technology in 2019, where she researched how users of smart homes want to have their security and privacy protected. She received the John Karat Usable Privacy and Security Student Research Award for the community’s outstanding student in 2018. Her work has also been honored by the National Security Agency in their best cybersecurity paper competition in 2016.

Venue: Online

Time: 16:00-17:30. The lecture will be approximately 60 minutes, after which there will be time for questions.

Registration: Please register to get the Zoom meeting information.


October 29, 2020: Learning from the People: From Normative to Descriptive Solutions to Problems in Security, Privacy & Machine Learning – with Elissa Redmiles

This talk is part of the Secure Systems Demo Day 2020 program.

Description: A variety of experts — computer scientists, policy makers, judges — constantly make decisions about best practices for computational systems. They decide which features are fair to use in a machine learning classifier predicting whether someone will commit a crime, and which security behaviors to recommend and require from end-users. Yet, the best decision is not always clear. Studies have shown that experts often disagree with each other, and, perhaps more importantly, with the people for whom they are making these decisions: the users.

This raises a question: Is it possible to learn best practices directly from the users? The field of moral philosophy suggests yes, through the process of descriptive decision-making, in which we infer best practice from people’s observed preferences rather than relying on experts’ normative (prescriptive) determinations. In this talk, I will explore the benefits and challenges of applying such a descriptive approach to computationally relevant decisions: (i) optimizing security prompts for an online system; (ii) determining which features are fair to include in a classifier and which decision makers should evaluate fairness; and (iii) defining standards for ethical virtual reality content.

 

You can find presentation slides here: Learning from the People: From Normative to Descriptive Solutions to Problems in Security, Privacy & Machine Learning


About the speaker: Elissa M. Redmiles is a Faculty Member and Research Group Leader of the Digital Harm group at the Max Planck Institute for Software Systems. She additionally serves as a consultant and researcher at multiple institutions, including Microsoft Research and Facebook. Dr. Redmiles uses computational, economic, and social science methods to understand users’ security, privacy, and online safety-related decision-making processes. Much of her work focuses specifically on investigating inequalities that arise in these decision-making processes and mitigating those inequalities through the design of systems that facilitate safety equitably across users. Dr. Redmiles’ work has been featured in popular press publications such as Scientific American, Wired, Business Insider, Newsweek, Schneier on Security, and CNET, and has been recognized with multiple Distinguished Paper Awards at USENIX Security as well as the John Karat Usable Privacy and Security Research Award. Dr. Redmiles received her B.S. (Cum Laude), M.S., and Ph.D. in Computer Science from the University of Maryland. As a graduate student, she was supported by an NSF Graduate Research Fellowship, a National Defense Science and Engineering Graduate Fellowship, and a Facebook Fellowship.

With registration you get participation links to both online events. The Secure Systems Demo Day is an annual meet-up for researchers in academia and industry and gives an overview of the current information security research going on in Finland’s capital area.


You can find presentation slides here: 5th Generation Crime-fighting in Cyberspace: Lawful Intercept in 5G Networks