CACR is pleased to offer recordings of their speaker series on their YouTube site. For all events prior to January 2025, a historical record is available on this page for ease of reference.
Watch the Recording on YouTube
Lawyers lead the investigations for many cybersecurity incidents, ranging from data breaches to ransomware, in part because they can often shield any materials produced after a breach from discovery under either attorney-client privilege or work product immunity.
Moreover, by limiting and shaping the documentation that is produced by breached firms’ personnel and third-party consultants in the wake of a cyberattack, attorneys can limit the availability of potentially damaging information to plaintiffs’ attorneys, regulators, or media, even if their attorney-client privilege and work product immunity arguments falter.
This talk draws on a project involving over sixty interviews with a broad range of actors in the cybersecurity landscape—including lawyers, forensic investigators, insurers, and regulators—to explore the impact of legal leadership on cybersecurity investigations and reveal how, in their zeal to preserve the confidentiality of incident response efforts, lawyers may sometimes undermine the long-term cybersecurity of both their clients and society more broadly.
Josephine Wolff is associate professor of cybersecurity policy at the Fletcher School of Law and Diplomacy at Tufts University. Her research interests include liability for cybersecurity incidents, international Internet governance, cyber-insurance, cybersecurity workforce development, and the economics of information security. Her first book You'll See This Message When It Is Too Late: The Legal and Economic Aftermath of Cybersecurity Breaches was published by MIT Press in 2018. Her second book Cyberinsurance Policy: Rethinking Risk in an Age of Ransomware, Computer Fraud, Data Breaches, and Cyberattacks was published by MIT Press in 2022. Her writing on cybersecurity has also appeared in Slate, The New York Times, The Washington Post, The Atlantic, and Wired. Prior to joining Fletcher, she was an assistant professor of public policy at the Rochester Institute of Technology and a fellow at the New America Cybersecurity Initiative and Harvard's Berkman Klein Center for Internet & Society.
Watch the Recording on YouTube
The convergence of multiple policy and societal vectors makes this a unique moment in time for multinational corporations.
As a result of this convergence, AI development, training, and deployment -- especially where collection and use of sensitive data is required -- are facing significant headwinds, and managing digital incidents (e.g., deepfakes, privacy and AI litigation, highly tailored cyberattacks) is requiring significant resources.
We are spending (and will continue allocating) significant resources to address what amounts to administrative compliance issues (e.g., correct language on privacy notices) and far fewer resources to address the real potential harms we as a society care about and could be facing from this convergence.
Stan Crosley is the founder and managing partner of Crosley Law Offices (est. 2010), and, along with Fred Cate, in 2022 created and launched Red Barn Strategy. Stan has more than 25 years of privacy and data strategy experience and is the former Chief Privacy Officer at Eli Lilly and Company, where he initiated and implemented the privacy program in 2000 as one of the first CPOs in the United States. Stan is an adjunct professor at the Maurer School of Law, a senior fellow with the Future of Privacy Forum, and a senior strategist with the Information Accountability Foundation. Stan was recently named a Westin Emeritus Fellow by the International Association of Privacy Professionals (IAPP), one of only 50 globally among a professional association of 85,000 members.
Stan was a co-founder of the International Pharmaceutical and Medical Device Privacy Consortium, which he chaired for its first decade. He is a former member of the board of the IAPP and a former co-chair of the HHS/ONC Privacy and Security Workgroup. Stan's experience extends from in-house chief privacy officer, to attorney with three separate large law firms, to appointments in academia, research NGOs, non-profit advisory boards, and federal government committees, and he is a frequent speaker on data strategy, digital governance, and data protection at conferences around the world. Crosley Law and Red Barn Strategy are incredibly fortunate to work with some of the largest and most successful multinational corporations in the world on data strategy and data governance, as well as small start-ups and non-profits, across the business ecosystem, including Apple, Pfizer, Lilly, Abbott, Walgreens, Regeneron, Microsoft, Amgen, Natera, Edwards Life Sciences, Chipotle, Roche, Nike, US Golf Association, Moderna, Indiana University Foundation, and many others.
Watch the Recording on YouTube
There are two strategic and longstanding questions about cyber risk that organizations largely have been unable to answer: What is an organization's estimated risk exposure and how does its security compare with peers? Answering both requires industry-wide data on security posture, incidents, and losses that, until recently, have been too sensitive for organizations to share.
Now, privacy enhancing technologies (PETs) such as cryptographic computing can enable the secure computation of aggregate cyber risk metrics from a peer group of organizations while leaving sensitive input data undisclosed. As these new aggregate data become available, analysts need ways to integrate them into cyber risk models that can produce more reliable risk assessments and allow comparison to a peer group.
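As a purely illustrative sketch (not the protocol used in this work), additive secret sharing -- one simple cryptographic computing technique -- lets a peer group learn an aggregate, such as a total incident count, while each firm's own figure stays undisclosed:

```python
import random

PRIME = 2**61 - 1  # all arithmetic is done modulo a large prime

def share(secret, n_parties):
    """Split a private value into n additive shares that sum to it mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def aggregate(all_shares):
    """Each aggregator sums one share per firm; the partial sums recombine
    to the group total without any single party seeing a firm's input."""
    partial_sums = [sum(col) % PRIME for col in zip(*all_shares)]
    return sum(partial_sums) % PRIME

# Three firms with private annual incident counts
private_counts = [12, 7, 30]
all_shares = [share(c, 3) for c in private_counts]
print(aggregate(all_shares))  # → 49, the aggregate, with no individual count revealed
```

Each individual share is uniformly random, so no aggregator learns anything about a firm's input; only the recombined total is meaningful.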
This paper proposes a new framework for benchmarking cyber posture against peers and estimating cyber risk within specific economic sectors using the new variables emerging from secure computations. We introduce a new top-line variable called the “Defense Gap Index” representing the weighted security gap between an organization and its peers that can be used to forecast an organization’s own security risk based on historical industry data. We apply this approach in a specific sector using data collected from 25 large firms, in partnership with an industry ISAO, to build an industry risk model and provide tools back to participants to estimate their own risk exposure and privately compare their security posture with their peers.
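The paper's formal definition of the Defense Gap Index is not reproduced here; as a hypothetical illustration only, a weighted gap between a firm's security-control scores and its peers' medians might be computed like this (the control names, scores, and weights are invented):

```python
from statistics import median

def defense_gap_index(firm_scores, peer_scores, weights):
    """Hypothetical weighted gap between a firm and its peer group.

    firm_scores: {control: score in [0, 1]} for the firm
    peer_scores: {control: [peer scores]} from the secure aggregate data
    weights:     {control: importance weight}
    A positive result means the firm lags its peers overall.
    """
    total_w = sum(weights.values())
    gap = 0.0
    for control, w in weights.items():
        peer_median = median(peer_scores[control])
        gap += w * (peer_median - firm_scores[control])
    return gap / total_w

firm = {"patching": 0.6, "mfa": 0.9, "backups": 0.5}
peers = {"patching": [0.7, 0.8, 0.75], "mfa": [0.8, 0.85, 0.9], "backups": [0.9, 0.8, 0.85]}
weights = {"patching": 3, "mfa": 2, "backups": 1}
print(round(defense_gap_index(firm, peers, weights), 3))  # → 0.117
```

In the paper's framework such a top-line index, computed against securely aggregated industry data, would then feed a risk model rather than stand alone.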
Taylor Reynolds, Ph.D., is the research director of MIT's Internet Policy Research Initiative. In this role, he leads the development of this interdisciplinary field of research to help policymakers address cybersecurity and Internet public policy challenges. He is responsible for building the community of researchers and students from departments and research labs across MIT, executing the strategic plan, and overseeing the day-to-day operations of the Initiative. Taylor's current research focuses on three areas: leveraging cryptographic tools for measuring cyber risk, privacy enhancing technologies, and international AI policy.
Taylor was previously a senior economist at the OECD and led the organization’s Information Economy Unit covering policy issues such as the role of information and communication technologies in the economy, digital content, the economic impacts of the Internet and green ICTs. His previous work at the OECD concentrated on telecommunication and broadcast markets with a particular focus on broadband.
Before joining the OECD, Taylor worked at the International Telecommunication Union, the World Bank and the National Telecommunications and Information Administration (United States). Taylor has an MBA from MIT and a Ph.D. in Economics from American University in Washington, DC.
Cyber Persistent Engagement and Defend Forward
The United States has shifted its approach to the challenge of cyber insecurity through the adoption of a National Cyber Strategy focused on persistently engaging in the limitation, frustration, and disruption of adversary cyber campaigns. The DoD strategy of Defend Forward reconceptualizes how to manage strategic competition in and through cyberspace. The United Kingdom, Netherlands, South Korea, Japan, and others have all adopted in the last year a more anticipatory footing toward reducing cyber insecurity. This talk will examine the core theoretical logic behind this shift—the concept of initiative persistence and what it means for education, workforce development and whole of nation-plus postures.
Dr. Richard J. Harknett is professor and director of the School of Public and International Affairs and chair of the Center for Cyber Strategy and Policy at the University of Cincinnati. He co-directs the Ohio Cyber Range Institute, a state-wide organization supporting education, workforce, economic, and research development in cybersecurity. He served as Scholar-in-Residence at U.S. Cyber Command and National Security Agency. He has presented both policy briefings and academic research in 11 countries, on Capitol Hill, and to various US Federal and State government agencies. Professor Harknett has held two Fulbright Scholar appointments: in Cyber Studies at Oxford University, UK and in International Relations at the Diplomatic Academy, Vienna, Austria, where he holds a professorial lecturer appointment. He has authored over 60 publications including the co-authored book Cyber Persistence Theory: Redefining National Security in Cyberspace (Oxford University Press, 2022) and has contributed to raising over $50 million in institutional and research grant and philanthropic support.
Watch the Recording on YouTube
Cybersecurity is essential to the basic functioning of our economy, the operation of our critical infrastructure, the strength of our democracy, the privacy of our data and communications, and our national defense. Last year, the Biden-Harris Administration released the National Cybersecurity Strategy which details the comprehensive approach the Administration is taking to better secure cyberspace and ensure the United States is in the strongest possible position to realize all the benefits and potential of our digital future. To realize the bold affirmative vision laid out in the Strategy, the Administration also took the novel step of publishing a National Cybersecurity Strategy Implementation Plan to ensure transparency and a continued path for coordination.
In this discussion with Stephen Viña, Senior Advisor at the Office of the National Cyber Director, we will explore the application of the National Cybersecurity Strategy to urgent, present-day issues such as the security of our critical infrastructure and cyber workforce needs and how the strategy sets the agenda for the Office of the National Cyber Director.
Stephen Viña is the Senior Advisor to the Deputy National Cyber Director for National Cybersecurity in the Office of the National Cyber Director (ONCD). In this role, Stephen supports the execution of the Division's activities, develops and coordinates cybersecurity planning, policies, and programs, and leads the office's cyber insurance initiatives. Previously at ONCD, Stephen was the inaugural Assistant National Cyber Director for Legislative Affairs, where he led the office's relationship with Congress.
Prior to joining ONCD, Stephen was a Senior Vice President at Marsh, where he served as a cyber insurance broker and claims specialist, helping organizations manage their cyber risks and recover financial losses after a cyber incident. Earlier in his career, Stephen spent nearly fifteen years on Capitol Hill advising Members of Congress on security issues. During this time, Stephen helped pass several major pieces of cyber legislation and held leadership positions in both the House and Senate, including Chief Counsel for Homeland Security on the Senate Homeland Security and Governmental Affairs Committee and Subcommittee Staff Director on the House Committee on Homeland Security.
Stephen began his professional career as a Legislative Attorney at the Congressional Research Service where he focused on homeland security matters. He was also an Adjunct Professor for Texas A&M University School of Law and American University where he taught public policy and cybersecurity courses.
Stephen holds a law degree from Texas Wesleyan University School of Law in Fort Worth, Texas (now Texas A&M University School of Law) and is a Certified Information Privacy Professional (CIPP/US) and member of the Hispanic National Bar Association.
Building a movement: cybersecurity clinics for all
Co-hosted with CEW&T
What does it take to build an international movement for cybersecurity clinics? What is different about defending the most vulnerable organizations from cyberattack? This talk will explore how the Consortium of Cybersecurity Clinics grew from a few isolated efforts to an international network of university-based clinics with members on four continents. Where should the movement for cybersecurity clinics go from here?
Drawing on research and practice, we'll also discuss what can be learned from the experience of UC Berkeley's Citizen Clinic, which helps non-profit organizations build the capabilities they need to proactively defend themselves against digital threats, enabling them to focus on fulfilling their missions and driving social change. Since 2018 the Citizen Clinic has worked with civil society organizations at higher risk of politically motivated cyberattack to provide the tools and knowledge they need to defend themselves online. What is unique about the cybersecurity environment for this sector, and what insights can we derive to help other public-interest and community organizations?
Ann Cleaveland is the executive director of the Center for Long-Term Cybersecurity, a multidisciplinary research center at the University of California, Berkeley. She also chairs the Consortium of Cybersecurity Clinics, which she co-founded in 2021. Cleaveland has held leadership positions in philanthropy, non-profit management, and industry. She previously served as interim executive director of the Berkeley Institute for Data Science and as the senior director of strategic planning at the ClimateWorks Foundation. She received an MBA in Sustainable Management from the Presidio Graduate School and a B.A. from Rice University. Her research interests include cybersecurity futures, digital risk communications, and governance of cyber risk.
Lessons from 25 years of digital technology negotiation at the United Nations
Co-hosted with the Hamilton Lugar School and the Luddy School
What can be learned from the UN negotiations on cyber in the context of international security (2004-2021) and those on lethal autonomous weapon systems (2014-present) that is applicable to developing a shared understanding of Responsible AI (RAI) in the military domain and accelerating international operationalization of RAI practices?
In this discussion with former UNIDIR Deputy Director Kerstin Vignard, we will explore what can be learned from how the international community has approached the development of norms of responsible State behavior in the absence of appetite for new treaties. Would a similar approach focusing on reaffirming existing international law, agreement on norms, identification of confidence-building measures, and the development of capacity-building initiatives suffice in the field of military applications of AI? Or have these approaches proven too slow to keep pace with the speed of innovation while excluding key stakeholders, such as technologists and the private sector?
Ms. Vignard is an international security policy professional with interests at the nexus of international policy, technology, and responsible innovation. Her areas of expertise include AI, autonomous technologies, cyber, and human enhancement.
Following a 25-year career at the United Nations, in 2021 Vignard joined the Johns Hopkins University Applied Physics Laboratory (APL), where she works on a range of issues related to improving technical advice to multilateral policy fora and engaging technologists on the ethical, legal, and social implications of innovation. Prior to joining APL, Vignard was the deputy director of the UN's international security think tank, UNIDIR (2012-2019), and UNIDIR's chief of projects and publications (2005-2012). Vignard was responsible for building UNIDIR's Security and Technology programme and established UNIDIR's workstreams on AI-enabled weapon systems and cyber issues. From 2019 to 2021, Vignard was on special assignment leading the UNIDIR team supporting the Chairmen of the Group of Governmental Experts (GGE) on Cyber Security and the first Open-Ended Working Group on ICTs. She also led UNIDIR's team supporting four previous cyber GGEs.
In 2021 Vignard was named to the list “100 brilliant women in AI ethics” and is a research scholar affiliated with the Institute for Assured Autonomy.
MITRE ATLAS: community-driven tools for AI security and assurance
Co-hosted with the Kelley School
This presentation focused on the ongoing capability developments and community collaborations around MITRE ATLAS™, a globally accessible, living knowledge base of adversary tactics and techniques based on real-world attack observations and realistic demonstrations from AI red teams and security groups. There are a growing number of vulnerabilities in AI-enabled systems as the incorporation of AI increases the attack surfaces of existing systems beyond those of traditional cyberattacks. ATLAS™ helps raise community awareness and readiness for these unique threats, vulnerabilities, and risks in the broader AI assurance landscape.
Dr. Liaghati discussed the latest community efforts focused on capturing cross-community data on real-world AI incidents in AI security and assurance, growing community understanding of vulnerabilities that can arise when using open-source models or data, building new open-source tools for threat emulation and AI red teaming, and developing mitigations to defend against AI security threats.
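ATLAS is published as structured, machine-readable data. As a rough, hypothetical illustration of how an ATT&CK-style knowledge base of tactics and techniques can be represented and queried (the field names and tactic mappings below are invented for illustration, not the actual ATLAS schema):

```python
from dataclasses import dataclass

@dataclass
class Technique:
    # Illustrative ATT&CK/ATLAS-style entry; fields are hypothetical.
    technique_id: str
    name: str
    tactic: str

KB = [
    Technique("AML.T0043", "Craft Adversarial Data", "ml-attack-staging"),
    Technique("AML.T0020", "Poison Training Data", "resource-development"),
    Technique("AML.T0024", "Exfiltration via ML Inference API", "exfiltration"),
]

def by_tactic(kb, tactic):
    """Return all techniques mapped to a given adversary tactic."""
    return [t for t in kb if t.tactic == tactic]

print([t.name for t in by_tactic(KB, "exfiltration")])
# → ['Exfiltration via ML Inference API']
```

Defenders typically use this kind of lookup to map observed adversary behavior against the knowledge base and identify relevant mitigations.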
Christina Liaghati, Ph.D., is the AI Strategy Execution & Operations Manager for MITRE’s AI & Autonomy Innovation Center and the lead for MITRE ATLAS.
Working across a collaborative global community of industry, government, and academia, she passionately drives research and development in AI security and assurance for everyone working to leverage AI-enabled systems. Serving the community with MITRE's not-for-profit, objective perspective, Dr. Liaghati is dedicated to working together to create and openly share actionable tools, capabilities, data, and frameworks like MITRE ATLAS, an ATT&CK-style framework of the threats and vulnerabilities of AI-enabled systems.
Not so simple: self-regulation as a cybersecurity solution
Co-hosted with the Maurer School
This talk considered the viability of self-regulation as a solution to cybersecurity concerns. The basic concept of self-regulation is straightforward: rather than rely on external institutions such as Congress, the courts, or agencies to regulate the conduct of an industry, allow the industry itself to establish rules to regulate the conduct of firms within it. Self-regulation is often advocated as a solution to address challenges in high-tech industries -- including cybersecurity challenges. This is unsurprising, given the pace, opacity, and complexity of these technologies. Yet the contours of self-regulation are not well developed, most notably including the problems to which it is well suited as a solution. Drawing on ongoing research on the jurisprudence of self-regulation -- the internal considerations needed to give self-regulatory efforts external validity -- this talk considered whether, and under what circumstances, self-regulation presents a viable approach to various aspects of the cybersecurity challenge.
Gus Hurwitz is a senior fellow and academic director at the University of Pennsylvania Carey Law School Center for Technology, Innovation, and Competition (CTIC). His work builds on his background in law, economics, and computer science to study how technology orders social and economic institutions. He has written extensively on technology regulation, including cybersecurity and privacy issues, and is the author of the book, Cybersecurity: An Interdisciplinary Problem (with Derek Bambauer, David Thaw, and Charlotte Tschider). He has previously taught at the University of Nebraska and George Mason University, prior to which he was a trial attorney with the Department of Justice Antitrust Division. Prior to law school he was a computer scientist at Los Alamos National Lab.
The REN-ISAC: the untold story
Co-hosted with the Luddy School
This year marked the 20th anniversary of the founding of the enormously successful Research and Education Networking Information Sharing and Analysis Center (REN-ISAC) at Indiana University. It was officially founded February 21, 2003, with the signing of an agreement between IU and the then-National Infrastructure Protection Center (NIPC) in the Department of Homeland Security. This agreement was signed by me, as the then-IU vice president for information technology, and Admiral Jim Plehal, the then-deputy director of the NIPC. Its membership has grown to over 750 higher education institutions globally, and it now plays a key role in the cybersecurity of this sector.
However, the complete story of the founding of the REN-ISAC dates back earlier to late 2000 and is largely unknown and untold. The REN-ISAC in its final form emerged out of a complex series of events and activities during 2001 and 2002.
This talk was given to mark the 20th anniversary of the REN-ISAC and described some of this history. But it also sought to illustrate how sustained strategic investments and initiatives in cybersecurity, dating back to IU's response to its first true modern cybersecurity incident in early 1997, along with subsequent initiatives and investments in advanced high-performance networking, were key factors that enabled IU to establish the REN-ISAC.
It also paid tribute to all who have worked so hard over the last 20 years to make the REN-ISAC so successful.
Michael A. McRobbie served as the 18th president of Indiana University from July 1, 2007, to June 30, 2021. Prior to stepping down from the IU presidency, he was among the country's longest-serving presidents of a major public research university.
He was appointed university chancellor on July 1, 2021, the position held by IU's legendary Herman Wells from 1962 to 2000. McRobbie is only the third person to be appointed to this position in IU's 200-year history. This appointment recognized his extensive past achievements and contributions to IU and anticipated his continued work in support of the university's core missions. He also holds the titles of president emeritus and university professor.
Among the major achievements of McRobbie's 14-year tenure as president were:
McRobbie also oversaw the development of The Bicentennial Strategic Plan for Indiana University, a comprehensive set of strategic initiatives for all IU's campuses which will serve as a foundation for the university's next 100 years of excellence.
McRobbie joined IU in 1997 as the university’s first vice president for information technology and chief information officer. In 2003, he was appointed to the additional position of vice president for research. And in 2006, he was named interim provost and vice president for academic affairs for IU’s Bloomington campus.
As president, McRobbie chaired the board of the IU Foundation, responsible for IU’s fundraising campaigns, and served as vice chair of the board of IU Health, the largest hospital system in Indiana with an operating budget of over $6 billion.
McRobbie is university professor and holds faculty appointments in computer science, philosophy, cognitive science, informatics, and computer technology. He is a computer scientist and philosopher and has been an active researcher in high performance computing and networking, artificial intelligence, automated reasoning, and various areas of logic. He has been principal investigator on numerous large grants totaling in excess of $100 million, has published a number of books and articles, and has served on many editorial boards and conference committees.
A native of Australia, McRobbie became a U.S. citizen in 2010 and now holds dual American and Australian citizenship. He holds a B.A. with first class honors from the University of Queensland and a Ph.D. from the Australian National University.
McRobbie has served on numerous national and international committees in many areas of higher education, science, research, national security, and related fields. He has chaired the boards of the Association of American Universities (AAU), the Big Ten Athletics Conference, and Internet2, and served on the board of the Association of Public and Land-Grant Universities (APLU). He served as co-chair of the National Academies of Sciences, Engineering, and Medicine’s Committee on the Future of Voting which produced the highly influential report Securing the Vote: Protecting American Democracy.
McRobbie is an elected fellow or member of the American Academy of Arts and Sciences, the American Association for the Advancement of Science, the Australian Academy of Humanities, and the Council on Foreign Relations. He has been awarded six honorary doctorates and has received numerous honors and awards from universities and institutions around the world.
Advances in private information retrieval
Charalampos Papamanthou, Ph.D., is the co-director of the Yale Applied Cryptography Laboratory and an associate professor of computer science at Yale University. Previously, he was the director of the Maryland Cybersecurity Center and an associate professor of electrical and computer engineering at the University of Maryland, College Park, where he joined in 2013 after a postdoc at UC Berkeley.
Papamanthou works on applied cryptography and computer security -- especially on technologies, systems, and theory for secure and private cloud computing. In 2022, he received the ACM CCS Test-of-Time Award for his work on searchable encryption. He has also received the JP Morgan Faculty Research Award, an NSF CAREER award, the Google Faculty Research Award, the Yahoo! Faculty Research Engagement Award, the NetApp Faculty Fellowship (twice), the UMD Invention of the Year Award, the Jimmy Lin Award for Invention, and the George Corcoran Award for Excellence in Teaching. He was also a finalist for the 2020 Facebook Privacy Research award.
Papamanthou's research has been funded by federal agencies (NSF, NIST, and NSA) and by industry (JP Morgan, Google, Yahoo!, NetApp, VMware, Amazon, Algorand, Ergo, Ethereum, and Protocol Labs). His Ph.D. is in computer science from Brown University (2011) and he also holds an MSc in computer science from the University of Crete (2005), where he was a member of ICS-FORTH. His work has received more than 10,000 citations and he has published in venues and journals spanning theoretical and applied cryptography, systems and database security, graph algorithms and visualization, and operations research.
Social engineering in research, education, and application
In-person only; no livestream.
Aunshul Rege, PhD, is an associate professor and director of the Cybersecurity in Application, Research, and Education (CARE) Lab at Temple University, USA. She holds a PhD and MA in Criminal Justice, an MA and BA in Criminology, and a BSc in Computer Science. Her research focuses on critical infrastructure and cybersecurity, ransomware, cyberadversarial decision-making and adaptation, and cybersecurity education. Her research projects have been funded by several National Science Foundation and Department of Energy/Idaho National Lab grants. She is the organizer and host of the summer social engineering competitions for high school, undergraduate, and graduate students. Dr. Rege serves as the research lead for the Social Engineering Community at DEF CON and serves as an Advisory Board member for Black Girls Hack, Raices Cyber, and Breaking Barriers Women in Cybersecurity. She is currently working on a book project titled “Cybercrime and Social Engineering” with New York University Press.
Margaret Hu is a professor of law and director of the Digital Democracy Lab at William & Mary Law School. She is also a research affiliate with the Institute for Computational and Data Sciences at Penn State University. Her research interests include the intersection of national security, cybersurveillance, AI, and civil rights. Previously, she served as senior policy advisor for the White House Initiative on Asian Americans and Pacific Islanders and as special policy counsel for immigration-related discrimination in the Civil Rights Division, U.S. Department of Justice, in Washington, D.C.
Alan Rozenshtein & Chinmayi Sharma
Alan Rozenshtein is an associate professor of law at the University of Minnesota Law School. He is a senior editor at Lawfare, a term member of the Council on Foreign Relations, a member of the Scholars Strategy Network, and a visiting faculty fellow at the University of Nebraska College of Law. He previously served as an attorney advisor in the Office of Law and Policy in the National Security Division of the U.S. Department of Justice, where his work focused on operational, legal, and policy issues relating to cybersecurity and foreign intelligence. He was also a special assistant United States attorney for the District of Maryland.
Chinmayi Sharma is a scholar in residence at the Robert Strauss Center for International Security and Law and a lecturer at the University of Texas at Austin School of Law. Her research and teaching focus on cybersecurity law and policy. She is a member of the Internet Law Foundry and was a Yale Cyber Leadership Fellow. She has written extensively for Lawfare, primarily on the topics of cybersecurity and government surveillance. Before joining the Strauss Center, Chinmayi worked at Harris, Wiltshire & Grannis LLP in Washington, D.C., focusing primarily on spectrum policy and privacy matters, and clerked for Chief Judge Michael F. Urbanski of the Western District of Virginia. Prior to law school, Chinmayi was also the founder of a technology company that developed custom data management and data analytics software solutions.
Hyrum Anderson is a Distinguished Engineer at Robust Intelligence, focusing on solutions to promote the integrity of machine learning systems. Previously, he conducted Microsoft's first AI Red Team exercises and founded the Microsoft AI Red Team to assess the security and privacy of deployed ML systems. As Chief Scientist at Endgame and Principal Researcher at Mandiant, he conducted applied research in ML for information security. He also served as Principal Staff at Sandia National Laboratories and as Associate Staff at MIT Lincoln Laboratory. He has organized several public competitions to promote the security of ML at http://mlsec.io. He co-founded and sits on the governing board of the Conference on Applied Machine Learning in Information Security (CAMLIS). He received his PhD in Electrical Engineering (Machine Learning + Signal Processing) from the University of Washington, and MS and BS degrees in Electrical Engineering (Signal and Image Processing + Remote Sensing) from Brigham Young University.
Fueled by massive amounts of data, models produced by machine-learning (ML) algorithms, especially deep neural networks (DNNs), are being used in diverse domains where trustworthiness is a concern, including automotive systems, finance, health care, natural language processing, and malware detection. Of particular concern is the use of ML algorithms in cyber-physical systems (CPS), such as self-driving cars and aviation, where an adversary can cause serious consequences. Interest in this area of research has simply exploded. In this work, we will emphasize the need for a security mindset in trustworthy machine learning, and then cover some lessons learned.
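As a toy illustration of the class of attack at issue, the classic fast gradient sign method (FGSM) perturbs an input in the direction that most increases a model's loss. The sketch below applies it to a simple logistic-regression model rather than a DNN, purely for self-containment:

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """Fast gradient sign method against a logistic-regression classifier.

    Shifts x by eps in the sign of the loss gradient -- the standard recipe
    for crafting an adversarial example, shown here on a toy linear model.
    """
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))  # predicted P(y = 1)
    grad_x = (p - y) * w                     # d(cross-entropy loss)/dx
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=20)
b = 0.0
x = w / np.linalg.norm(w)  # a point the model scores confidently as class 1

adv = fgsm_perturb(x, w, b, y=1, eps=0.5)
print(w @ x + b > 0, w @ adv + b < w @ x + b)  # → True True: the attack lowers the score
```

The same idea scales to DNNs, where small, human-imperceptible perturbations can flip a model's prediction entirely.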
Somesh Jha received his B.Tech from Indian Institute of Technology, New Delhi in Electrical Engineering. He received his Ph.D. in Computer Science from Carnegie Mellon University under the supervision of Prof. Edmund Clarke (a Turing award winner). Currently, Dr. Jha is the Lubar Professor in the Computer Sciences Department at the University of Wisconsin (Madison). His work focuses on analysis of security protocols, survivability analysis, intrusion detection, formal methods for security, and analyzing malicious code. Recently, he has focused his interest on privacy and adversarial ML (AML). Jha has published several articles in highly refereed conferences and prominent journals. He has won numerous best-paper and distinguished-paper awards. Jha is a fellow of the ACM, IEEE, and AAAS and winner of the IIT-Delhi Distinguished Alumni Award.
Welcome to the new possibilities in the cybersecurity workforce
Imagine a workforce where inclusive cultures drive nonuniformity and foster a powerful diversity of thought. One that challenges the status quo and inspires an environment where all genders, identities, cultures, ethnicities, races, backgrounds, and experiences come together in the shared space of the cybersecurity workforce. Where the obstacles that existed before are tackled strategically and patiently. And where a new reality of a gender-balanced workforce emerges.
For us, at Women in CyberSecurity (WiCyS), this new workforce is tomorrow's reality that we work towards building every day. It's more than a mission; it is the core of our existence and the driver of our actions. During this webinar, Lynn will share the story and power of the WiCyS community, showcasing how far women have come and how much further there is to go in building a bigger table that represents us all.
Lynn Dohm, executive director, Women in Cybersecurity (WiCyS), brings more than 25 years of organizational and leadership experience to the WiCyS team. She has successfully collaborated with businesses, nonprofits and NSF-funded grants and helped produce outcomes that aligned with their cybersecurity business goals. As a solution-oriented strategist, Lynn focuses on nonprofits, facilitating process improvements, coordinating project management and using resourceful operations to achieve strategic objectives.
Lynn has long been committed to cybersecurity education and for the last 14 years held active roles in grant-funded programs and nonprofits that assist in providing educational solutions for the cybersecurity workforce. She is passionate about the need for diverse mindsets, skill sets, and perspectives to solve problems that never previously existed and aims to facilitate learning opportunities and discussions on leading with inclusion, equity, and allyship. Lynn lives each day fulfilled as she continues to crusade, along with the strong and committed community of women, allies and advocates within the WiCyS organization, to bridge the cybersecurity workforce gap and improve the recruitment, retention, and advancement of women in cybersecurity.
In addition to Lynn being awarded Top 100 Women in Cybersecurity by Cyber Defense Magazine, she accepted the Nonprofit of the Year Award for WiCyS in 2020 and 2021, is on numerous cybersecurity judging panels and advisory boards, and is an inaugural member of (ISC)2’s DEI Task Force. She has been interviewed on TV and radio throughout the nation and is a keynote presenter, panelist, and moderator for multiple international conferences, events, and organizations.
More about WiCyS >>
A panel discussion with Indiana Office of Technology Chief Information Officer Tracy Barnes and Indiana Cybersecurity Program Director Chetrice Mosley-Romero, moderated by IU Cybersecurity Program Chair Scott Shackleford.
Indiana Office of Technology, Chief Information Officer Tracy Barnes was appointed by Gov. Eric J. Holcomb in March 2020. In this role, Tracy oversees the Indiana Office of Technology and provides strategic oversight of the State’s technology portfolio, as well as leadership on technology and cybersecurity policy.
Tracy brings significant business leadership and information technology experience to his role, having previously served as chief of staff for the Lieutenant Governor, deputy auditor and IT director for the Indiana Auditor of State.
Indiana Cybersecurity Program Director Chetrice Mosley-Romero works collaboratively with public and private stakeholders to administer the development and implementation of the state’s cybersecurity strategy and policy through the Governor’s Executive Council on Cybersecurity.
Prior to her current role, she was the executive director of External Affairs for the Indiana Utility Regulatory Commission where she led the public relations, policy, and consumer affairs divisions.
Technology platforms are central to modern life—and pose unprecedented challenges for society. Independent study and oversight are essential for understanding platform harms, including intrusive privacy practices, proliferating misinformation, and barriers to competition. But there is a platform data crisis: researchers, journalists, and regulators lack the tools needed to hold platforms accountable. In the first part of this seminar, I will describe the platform data crisis and discuss recent industry and government proposals to address it. These proposals would make meaningful progress, but also have significant limitations in the types of research and researchers they would enable. In the second part of the seminar, I will present Rally and WebScience, a new platform and toolkit for conducting real-world platform accountability research. Rally facilitates crowd-sourced, browser-based research studies, and WebScience provides reusable functionality for implementing studies. We launched Rally and WebScience in 2021, and they are already being used for multiple platform accountability projects.
Jonathan Mayer is an assistant professor at Princeton University, where he holds appointments in the Department of Computer Science and the School of Public and International Affairs. Before joining the Princeton faculty, he served as technology counsel to United States Senator Kamala D. Harris, as the chief technologist of the Federal Communications Commission Enforcement Bureau, and as a technology advisor at the California Department of Justice. Professor Mayer holds a Ph.D. in computer science from Stanford University and a J.D. from Stanford Law School.
The Oversight Board was created in 2020 to help Facebook answer some of the most difficult questions regarding online speech. Over the past year, the board has grown as an institution and started issuing binding decisions and non-binding policy recommendations alike. In doing so, the board has encountered many challenges and opportunities -- some foreseen and others not -- that are worthy of reflection. Exploring these lessons will help identify some of the promising avenues to uphold human rights in the digital age as well as ongoing areas of concern.
Eli Sugarman is content director of the Oversight Board. The Oversight Board makes decisions on what content Facebook and Instagram should allow or remove, based on respect for freedom of expression and human rights.
Russell Buchan
The academic literature on spying is dominated by two myths. First, international relations theorists draw a direct line between spying and the maintenance of international peace and security. Their argument is that, given the anarchic structure of the world order, spying allows States to identify threats to their national security and take action against them. Second, influenced by this realist understanding of international relations, international lawyers claim that international law is silent on the topic of espionage and, as a result, this activity is permitted by that legal framework. This paper debunks these myths. This paper argues that, because spying involves the unauthorized collection of confidential information, it can inhibit international cooperation, destabilize international relations, and undermine the international community’s efforts to address threats to international peace and security. Next, this paper submits that the intelligence community is bound by international law in the same way and to the same extent as any other organ of the State. Although States have not (yet?) devised an ‘international law of spying,’ this paper avers that there is a range of general principles and specialised regimes of international law that apply to spying and, by extension, to cyber-enabled spying. How international law applies to cyber espionage is demonstrated through an assessment of the recent hacks against SolarWinds and Microsoft.
Russell Buchan is senior lecturer in international law at the University of Sheffield, UK. Dr. Buchan is the author of International Law and the Construction of the Liberal Peace (Hart, 2013) and Cyber Espionage in International Law (Hart, 2018) and he is the co-author of Regulating the Use of Force in International Law: Stability and Change (Edward Elgar Publishing, 2021). He has also authored or co-authored many journal articles and book chapters on the topic of international law.
Kim Milford
Incident management isn’t for the faint of heart. It requires foresight and the ability to plan for the unpredictable. It requires diligence in sticking to SOPs. It requires trust in your processes and your personnel. It requires emotional fortitude to manage stress and anxiety, not only for those involved in responding to the incident but also for the victims. This talk will explore the basics of incident management, with a particular look at how needs have changed in recent years.
Kim Milford serves as executive director of the Research & Education Networks Information Sharing & Analysis Center (REN-ISAC), a higher education and research network cyber-threat information sharing alliance. As executive director, Ms. Milford works with her team in service to, and in coordination with, global research and education institutions, partners, and sponsors to provide timely cybersecurity news reports, alerts and advisories, and analysis of cybersecurity threats and mitigation solutions.
Gary McGraw
Machine learning (ML) appears to have made impressive progress on many tasks including image classification, machine translation, autonomous vehicle control, playing complex games, and more. This has led to much breathless popular press coverage of artificial intelligence and has elevated deep learning to an almost magical status in the eyes of the public. ML, however, especially of the deep learning sort, is not magic. ML has become so popular that its application, though often poorly understood and partially motivated by hype, is exploding. In my view, this is not necessarily a good thing. I am concerned with the systemic risk introduced by adopting ML in a haphazard fashion.
Our research at the Berryville Institute of Machine Learning is focused on understanding and categorizing security engineering risks introduced by ML at the design level. Though the idea of addressing security risk in ML is not a new one, most previous work has focused on either particular attacks against running ML systems (a kind of dynamic analysis) or on operational security issues surrounding ML.
This talk focuses on the results of an architectural risk analysis (sometimes called a threat model) of ML systems in general. A list of the top five (of 78 known) ML security risks will be presented.
Gary McGraw, Ph.D., is co-founder of the Berryville Institute of Machine Learning. He is a globally recognized authority on software security and the author of eight best-selling books on this topic. His titles include Software Security, Exploiting Software, Building Secure Software, Java Security, and Exploiting Online Games. He is editor of the Addison-Wesley Software Security series. Dr. McGraw has also written over 100 peer-reviewed scientific publications. McGraw serves on the advisory boards of Code DX, Maxmyinterest, Runsafe Security, and Secure Code Warrior.
He has also served as a board member of Cigital and Codiscope (acquired by Synopsys) and as advisor to Black Duck (acquired by Synopsys), Dasient (acquired by Twitter), Fortify Software (acquired by HP), and Invotas (acquired by FireEye). McGraw produced the monthly Silver Bullet Security Podcast for IEEE Security & Privacy magazine for thirteen years. His dual Ph.D. is in Cognitive Science and Computer Science from Indiana University where he serves on the Dean’s Advisory Council for the Luddy School of Informatics, Computing, and Engineering.
Advances in machine learning have enabled new applications that border on science fiction. Autonomous cars, data analytics, adaptive communication, and self-aware software systems are now revolutionizing markets by achieving or exceeding human performance. In this talk, I discuss the rapidly evolving use of machine learning in security-sensitive contexts and explore why many systems are vulnerable to nonobvious and potentially dangerous manipulation. We will examine sensitivity in applications where misuse might lead to harm—for instance, forcing adaptive networks into an unstable state, crashing an autonomous vehicle, or bypassing an adult content filter. I explore how currently accepted wisdom about threats and defenses should be viewed (and sometimes refuted) in light of the functional and security challenges of real-world systems. The talk concludes with a discussion of the technological, economic, and societal challenges we face as a result of the rise of machine learning as a fundamental construct of computational systems.
Patrick McDaniel is the William L. Weiss Professor of Information and Communications Technology in the School of Electrical Engineering and Computer Science at Pennsylvania State University and a fellow of IEEE, ACM, and the AAAS. He is the director of the Institute for Networking and Security Research (INSR), a research institute focused on the study of networking and security in computing environments. His research focuses on a wide range of topics in computer and network security and technical public policy, with particular interests in mobile device security, adversarial machine learning, systems security, program analysis, and the integrity and security of election systems.
In this session, Meredith Harper will share her unique journey to become the vice president and global chief information security officer of Eli Lilly and Company. Along the way, Harper, as a minority and woman in technology, used challenging experiences to build her skills, her brand, and her teams while breaking the glass ceiling. She will spend time sharing those experiences to demonstrate how diversity, equity, and inclusion matter.
Meredith Harper joined Eli Lilly & Company in August 2018 as deputy chief information security officer. As of April 2019, she transitioned to the role of vice president, chief information security officer for Lilly’s global information security program. Over her 26-year career, she has emerged as a strategic leader who is not just interested in processes, goals, and objectives, but most of all she is passionate about her greatest assets -- her human capital. Her success has been attributed to her ability to manage large-scale complex programs that cross functional areas while advancing the skill sets of her team members.
Dr. Anthony Fauci ticked off the timeline, "First notice at the end of December, hit China in January, hit the rest of the world in February, March, April, May, early June." COVID-19 spread like wildfire. This disease turned out to be Fauci's "worst nightmare."
Pandemics end because we shut down the infection source or vaccinate against it. But if these techniques don't work, then we contact-trace. For COVID-19, manual contact tracing can be too slow. Phone-based apps might be able to speed this up but raise lots of issues.
We need to know: Is an app efficacious? Does the app help or hinder the efforts of human-based contact tracing, a practice central to ending epidemics? If not -- and efficacy must be measured across different communities -- there is no reason to consider its use any further. Is the use of the app equitable? What are the social and legal protections for people who receive an exposure notification? Does a contact-tracing app improve public health more effectively than other efforts? Does the public support its use? Without public support, apps fail.
The next pandemic will be different from COVID-19. Now is the time to decide what sorts of medical and social interventions we will make and what choices we want. The choices we make now will reverberate.
Susan Landau is Bridge Professor in Cyber Security and Policy at The Fletcher School and the School of Engineering, Department of Computer Science, Tufts University, and visiting professor, Department of Computer Science, University College London. Landau works at the intersection of cybersecurity, national security, law, and policy. Landau's new book, “People Count: Contact-Tracing Apps and Public Health,” will be published in April 2021. She has also written “Listening In: Cybersecurity in an Insecure Age,” which came about because of her congressional testimony in the Apple/FBI case, “Surveillance or Security: Risks Posed by New Communications Technologies,” and, with Whitfield Diffie, “Privacy on the Line: The Politics of Wiretapping and Encryption.” Landau has frequently briefed US and European policymakers on encryption, surveillance, and cybersecurity issues. She has been a senior staff privacy analyst at Google, a distinguished engineer at Sun Microsystems, and a faculty member at Worcester Polytechnic Institute, the University of Massachusetts Amherst, and Wesleyan University. She is a member of the Cybersecurity Hall of Fame and of the Information System Security Hall of Fame, and is a fellow of the American Association for the Advancement of Science and of the Association for Computing Machinery, as well as having been a Guggenheim and Radcliffe Fellow.
We’ll start by providing an overview of the OmniSOC and the services it provides. Then we’ll look at the OmniSOC internship program, focusing on why we think the program is valuable and why students should consider applying. Finally, we’ll dive into how we adjusted to COVID-19 this past summer and how it may impact the internship program going forward.
Scott Orr began his career at IUPUI in 1988 when he joined what later became known as the Computer Network Center within the School of Engineering and Technology. He played a key role in its mission to provide computer and networking support for academic and research efforts and during his final year there, served as the Acting Manager. He briefly worked for Commnet Plus, Inc. where he provided firewall consulting services for large companies. He returned to IUPUI in 1997 to oversee all aspects of computing services within the Computer Science department. Scott has also served as an Adjunct Faculty at IUPUI.
Keith Lehigh is the University Information Security Officer at IU. Keith started at IU in 2003 as a sysadmin and unofficial security analyst for Research Technologies, culminating in a stint working on the PolarGrid project, which took him to the ends of the earth. In 2009, Keith joined the UISO as a security engineer, primarily tasked with running network monitoring services. In 2017, Keith took over leading the UISO security engineering team. The engineering team focuses on assisting with incident response investigations, managing university-wide security services such as network monitoring and vulnerability scanning, and providing general security consulting to IU.
Professor Hollis will survey the most prominent existing regulatory mechanisms for combating foreign election interference today — international law, domestic law, and technical measures — and explain the gaps and challenges each faces. To supplement these responses, he'll discuss calls for democracies to develop and apply cyber-norms — socially constructed shared expectations of appropriate behavior for members of a particular community. Based on his work with Jan Neutze of Microsoft, his central claim is that States and other stakeholders should affirmatively construct international norms tailored to the challenges of online foreign election interference, including delineating “out-of-bounds” behavior vis-à-vis foreign elections and setting expectations for assistance or cooperation when such behavior occurs. Of course, cyber-norms are not a salve for all wounds. Yet, as Professor Hollis will discuss, they may build off new norm candidates from the G7 and the Paris Call for Trust and Security in Cyberspace to highlight how cyber-norms can provide critical tools to a broad, multi-layered, and multi-disciplinary response to the threat of foreign election interference.
Note: This talk will be based on a forthcoming chapter from his book with Jens Ohlin: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3635782
Duncan B. Hollis is Laura H. Carnell Professor of Law at Temple University Law School in Philadelphia. He is editor of the award-winning Oxford Guide to Treaties (2012, 2nd ed., 2020), International Law (7th ed., 2018, with Allen Weiner), and Defending Democracies: Combating Foreign Election Interference in a Digital Age (forthcoming 2021, with Jens Ohlin). Professor Hollis is a Non-Resident Scholar at the Carnegie Endowment for International Peace and an elected member of the American Law Institute. In 2016, Professor Hollis was elected by the General Assembly of the Organization of American States to a four-year term on the OAS’s Inter-American Juridical Committee. There, he served as Rapporteur for a project on improving the transparency of how States understand international law to apply in cyberspace.
Russian interference in the 2016 US Presidential election produced the biggest political scandal in a generation, marking the beginning of an ongoing attack on democracy. In the run-up to the 2020 election, Russia was found to have engaged in more “information operations,” a practice that has been increasingly adopted by other countries. In this talk, Ohlin makes the case that these operations violate international law, not as a cyberwar or a violation of sovereignty, but as a profound assault on democratic values protected by the international legal order under the rubric of self-determination. He argues that, in order to confront this new threat to democracy, countries must prohibit outsiders from participating in elections, enhance transparency on social media platforms, and punish domestic actors who solicit foreign interference.
Professor Ohlin’s work stands at the intersection of four related fields: criminal law, criminal procedure, public international law, and the laws of war. His latest research project involves foreign election interference. His book, Election Interference: International Law and the Future of Democracy, is forthcoming from Cambridge University Press in 2020. He is co-editor, with Claire Finkelstein, of the Oxford Series in Ethics, National Security, and the Rule of Law, a steering-board member of an international working group researching secondary liability for international crimes, and a co-editor of the forthcoming Oxford Handbook on International Criminal Justice.
Hopefully, by the time of this talk, all voters will have the option of voting by mail, depositing their voted ballots in secure drop boxes, or casting their ballots in safe polling places, possibly during early voting. To protect the security of the voters’ ballots, no internet voting, including mobile phone voting, should be allowed, though some will be.
But, despite the current challenges, it will not be sufficient to have secure and easily available voting. By using mathematically sound post-election ballot audits, states can check that vote-tabulating scanners – which are computers and therefore vulnerable to hacking – report the correct outcomes.
I’ll survey the current status of the election, explain why internet voting is fundamentally insecure, and, as time permits, discuss post-election ballot audits and how they are conducted.
Bio
Barbara Simons is the board chair of Verified Voting, a member of the Board of Advisors of the U.S. Election Assistance Commission (appointed by Sen. Reid and reappointed by Sen. Schumer), and the co-author of the book, “Broken Ballots: Will Your Vote Count?”. She has been a leader on technology policy issues for more than 40 years and co-authored numerous reports and studies on how to improve our voting systems.
Simons was President of ACM, the nation’s oldest and largest educational and scientific society for computing professionals, from July 1998 until June 2000. Simons earned her Ph.D. in computer science from the University of California, Berkeley, and she also co-founded the Reentry Program for Women and Minorities in the U.C. Berkeley Computer Science Department.
Attribution of cyberattacks requires identifying those responsible for bad acts, including states, and accurate attribution is a crucial predicate in contexts as diverse as criminal indictments, insurance coverage disputes, and cyberwar. But the difficult technical side of attribution is just the precursor to highly contested legal and policy questions about when and how to accuse governments of responsibility. Although politics may largely determine whether attributions are made public, this talk argues that when cyberattacks are publicly attributed to states, such attributions should be governed by legal standards.
Kristen Eichensehr is a professor of law at the University of Virginia. Eichensehr joined the Law School in 2020 after serving on the faculty of the UCLA School of Law.
She writes and teaches about cybersecurity, foreign relations, international law and national security law. She has written articles on, among other things, the attribution of state-sponsored cyberattacks, the important roles that private parties play in cybersecurity, the constitutional allocation of powers between the president and Congress in foreign relations, and the role of foreign sovereign amici in the Supreme Court. She received the 2018 Mike Lewis Prize for National Security Law Scholarship for her article, “Courts, Congress, and the Conduct of Foreign Relations,” published in the University of Chicago Law Review.
Eichensehr received her J.D. from Yale Law School, where she served as executive editor of the Yale Law Journal and articles editor of the Yale Journal of International Law.
This event is co-hosted with the Ostrom Workshop.
Rachana Ananthakrishnan
What are the unique aspects of the research community that drive product requirements for this market? How has everyday digital life changed researcher expectations for tools they use at work? How does building a product differ from engineering a software application? Ananthakrishnan will consider such questions and provide insights on how cyberinfrastructure providers have been shaped by these factors. She will show how product-driven organizations are more effective than software development teams at building, supporting, and growing products that are responsive to user needs.
Against the backdrop of unconscious competence, how do we identify and design solutions that are both frictionless and secure? Drawing from her experiences as part of the Globus team and her role shaping the product used by thousands of researchers worldwide, she will cover various facets from technology, product, and compliance, to design, business models, and organizational culture. Ananthakrishnan will weave in her journey from a Hoosier student to software developer and product manager to her current leadership position delivering secure enterprise-grade services in a consumer-friendly form. She will also share her professional fulfillment from serving this community and crafting solutions that enable researchers to solve meaningful global-scale problems.
Rachana Ananthakrishnan is a proud alumna of Indiana University Bloomington, receiving a master’s in computer science in 2002. She is part of the leadership team at Globus (globus.org), an initiative of the University of Chicago and Argonne National Lab that delivers products to the global research community and operates as a sustainable non-profit within the university. As head of products, she leads a team of software engineers, product managers, and system operators to deliver scalable and secure data management solutions as Software-as-a-Service and Platform-as-a-Service. She also serves on the board of directors of the Canadian research organization WestGrid and is a member of the InCommon Community Assurance and Trust Board.
Her expertise and interests are in providing secure and usable solutions to empower scientists across various domains to advance their research. In her prior roles as software engineer, solutions architect, and product manager, she has worked on security and data management solutions for several projects including Earth System Grid (ESG), Biomedical Informatics Research Network (BIRN), and Extreme Science and Engineering Discovery Environment (XSEDE).
Eva Galperin is director of cybersecurity at the Electronic Frontier Foundation (EFF).
Bio
Eva Galperin is director of cybersecurity at the Electronic Frontier Foundation (EFF). Prior to 2007, when she came to work for EFF, Eva worked in security and IT in Silicon Valley and earned degrees in political science and international relations from SFSU. Her work is primarily focused on providing privacy and security for vulnerable populations around the world. To that end, she has applied the combination of her political science and technical background to everything from organizing EFF's Tor Relay Challenge, to writing privacy and security training materials (including Surveillance Self Defense and the Digital First Aid Kit), and publishing research on malware in Syria, Vietnam, and Kazakhstan. When she is not collecting new and exotic malware, she practices aerial circus arts and learns new languages.
Computer security is no longer about data; it's about life and property. This change makes an enormous difference, and will shake up our industry in many ways. First, data authentication and integrity will become more important than confidentiality. And second, our largely regulation-free internet will become a thing of the past.
Soon we will no longer have a choice between government regulation and no government regulation. Our choice is between smart government regulation and stupid government regulation.
Given this future, it's vital that we look back at what we've learned from past attempts to secure these systems, and forward at what technologies, laws, regulations, economic incentives, and social norms we need to secure them in the future.
Bruce Schneier is an internationally renowned security technologist, called a “security guru” by The Economist. He is the author of over one dozen books—including his latest, Click Here to Kill Everybody—as well as hundreds of articles, essays, and academic papers. His influential newsletter “Crypto-Gram” and his blog “Schneier on Security” are read by over 250,000 people. He has testified before Congress, is a frequent guest on television and radio, has served on several government committees, and is regularly quoted in the press. Schneier is a fellow at the Berkman Klein Center for Internet & Society at Harvard University; a lecturer in public policy at the Harvard Kennedy School; a board member of the Electronic Frontier Foundation, AccessNow, and the Tor Project; and an advisory board member of the Electronic Privacy Information Center and VerifiedVoting.org.
Learn more about Bruce Schneier's work at his website Schneier on Security: schneier.com
Executives can avoid the dreaded “break-up” by understanding that the CISO role has evolved over the years into a much more business-centric position. The Board needs a dynamic CIO/CISO who can speak to cybersecurity gaps in a logical, business-minded approach (not just a technical one). In this session, we will examine how this might differ among public and private sector organizations, as well as provide some practical approaches to maturing cybersecurity programs and metrics, and how these need to adapt over time.
Bryan Sacks proudly serves as the Chief Information Security Officer for the State of Indiana. He is responsible for establishing the vision and strategy for the State, with responsibilities spanning the State’s Information Sharing and Analysis Center (IN-ISAC), the Security Operations Center (SOC), Security Operations and Engineering, and Risk & Compliance teams.
The enormous financial success of online advertising platforms is partially due to the precise targeting and delivery features they offer. Recently, such platforms have been criticized for allowing advertisers to discriminate against users belonging to sensitive groups, e.g., to exclude users of a certain race or gender from receiving their ads. In this talk, I discuss two threads of work that aim to understand the extent of discrimination in ads on online platforms.
Alan Mislove is a Professor and Associate Dean and Director of Undergraduate Programs at the Khoury College of Computer Sciences at Northeastern University, which he joined in 2009. He received his B.A., M.S., and Ph.D. in computer science from Rice University in 2002, 2005, and 2009, respectively. Prof. Mislove’s research concerns distributed systems and networks, with a focus on using social networks to enhance the security, privacy, and efficiency of newly emerging systems. His work comprises over 50 peer-reviewed papers, has received over 12,500 citations, and has been supported by over $6M in grants from government agencies and industrial partners. He is a recipient of an NSF CAREER Award (2011), a Google Faculty Award (2012), a Facebook Secure the Internet grant (2018), the ACM SIGCOMM Test of Time Award (2017), the IETF Applied Networking Research Prize (2018, 2019), the USENIX Security Distinguished Paper Award (2017), the NDSS Distinguished Paper Award (2018), and the IEEE Cybersecurity Award for Innovation (2017), and his work has been covered by the Wall Street Journal, the New York Times, and the CBS Evening News.
As a whole, our attention, privacy, and behavioral autonomy are common goods, and we must protect them.
Nikolas Guggenberger is a clinical lecturer in law, a research scholar in law, and the executive director of the Information Society Project at Yale Law School. His research focuses on the intersection of law and technology, specifically platform regulation, privacy, the automation of law, and the future of private law. He has frequently served as an expert witness and on advisory committees, mainly on matters relating to financial technology, financial markets regulation, digital policy, and media law.
This talk will cover recent categorizations of Cybersecurity as a Complex System, and how this complexity may present challenges to building a shared Security Operations Center. As Geoff E at NCSC eloquently states "...let's begin by accepting that we are not entirely the masters of the systems we are creating; currently there are limits to our abilities to predict and control them. Once we have accepted this, we are then able to think more critically, and ultimately, make more informed security decisions." (https://www.ncsc.gov.uk/blog-post/mice-and-cyber)
Susan Ramsey, MSCIT, GIAC, CEH - Susan Ramsey is a Security Engineer and a Risk Assessor at the National Center for Atmospheric Research. She holds multiple professional certifications and a Master of Science in Computer Information Technology from Regis University, and is pursuing a second MS in Information Security Engineering through SANS Institute. She has over twenty years of experience in Enterprise IT, from operations and technical support to program management and consulting.
Cybersecurity - knowing why we’re doing what it is we do
The purpose of this presentation is to advance critical thinking about cybersecurity. The goal is to encourage ongoing discussion and a search for ways to facilitate cybersecurity actions that proactively and cost-effectively achieve the clearly identified reasons for undertaking them in the first place. That is, to proactively ensure that the cybersecurity actions themselves do not become the goal.
Gary Stoneburner is a member of the senior professional staff of the Johns Hopkins Applied Physics Laboratory (JHU/APL), where he supports APL and government sponsors as a system security engineer. His prior experience includes civil service at NIST, where he was one of two US technical representatives to the international Common Criteria project, the lead for NIST’s first publication on managing cyber-related risks, and a major contributor to many of the NIST cybersecurity publications. Previously he was with The Boeing Company, where he served as lead hardware engineer for a very high assurance router (TCSEC, Orange Book Class A1) and as the company’s security architect. He is an Army Signal Officer with 8 years of active duty and retired from the reserves, where his assignments included Deputy Chief IA Branch, J6, USSOUTHCOM; technical advisor to the INSCOM (US Army Intelligence and Security Command) accreditor; Deputy Team Chief, Army Information Operations Red Team; and Watch Officer, Army Global Network Operations and Security Center. In addition, he is retired from the state defense force of Maryland, having served as the defense force CIO and Assistant Chief of Staff, Signal.
Librarians at the forefront in the fight for privacy: Lessons from the Library Freedom Project
The future is here, and it's not pretty. Facebook knows more about us than we know about ourselves, and they're facing endless scandals about how they've misused that data. Digital DNA testing companies get breached, "smart" devices accidentally record private conversations, bounty hunters buy location information direct from internet providers, and government surveillance is just as pervasive as it was when Edward Snowden tried to warn us all back in 2013. Does our right to privacy exist anymore in this cyberpunk dystopia? And who can help us protect it? Librarians can. With a longstanding ethical commitment to privacy and a welcoming and trusted environment, these defenders of intellectual freedom are fighting back against the normalization of surveillance with the help of Library Freedom Project. Library Freedom Project brings together the expertise of librarians, hackers, attorneys, policy wonks, and activists to create practical privacy trainings for librarians so that they can defend the privacy of their patrons. Alison Macrina, director of LFP, will talk about how to fight surveillance and make privacy tools mainstream and ubiquitous through the trusted space of the library.
Alison Macrina is the founder and director of Library Freedom Project. She is also a librarian, internet activist, and a core contributor to the Tor Project. Alison is passionate about connecting surveillance to other issues of injustice, and works to demystify privacy and security topics for ordinary users.