Past Projects

  • Applied Cryptography
  • Secure Voting
  • Communication Networks
  • Cloud Computing
  • Trusted Computing
  • SecurityEmpire
  • Information Assurance Education
  • Cybersecurity Assessment Tools

Applied Cryptography

Verifiable Randomness

We are designing, implementing, and analyzing new algorithms for generating verifiably random bits. Applications of this work include Random-Sample Elections (see below), where verifiably random bits are needed for selecting random samples and election audits.

CISA members that work in this area include Alan T. Sherman and Christopher D. Nguyen.

Secure Voting

Random-Sample Elections

Random-Sample Elections work by randomly selecting voters and auditing tallies in a novel way. Anyone can verify online that neither the selection nor the outcome could have been manipulated by anyone, including governments. Voter privacy is protected, yet voters are unable to sell their votes. Voters may also be better motivated and informed, since each vote carries more weight and each voter can meaningfully investigate and study the single issue that voter is asked to help decide.
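A key requirement is that anyone can recompute the voter sample from public inputs. As an illustrative sketch (not the project's actual protocol), the sample can be drawn deterministically from the public voter roll and a published random seed:

```python
import hashlib
import random

def select_sample(voter_roll: list[str], seed: bytes, k: int) -> list[str]:
    """Deterministically draw k voters from a public roll using a public seed.
    Anyone holding the same roll and seed recomputes the identical sample,
    so the selection cannot be secretly manipulated."""
    rng = random.Random(hashlib.sha256(seed).digest())
    # Sort the roll first so the sample is independent of input ordering.
    return rng.sample(sorted(voter_roll), k)

# Hypothetical example: a 1000-voter roll and a seed from a public beacon.
roll = [f"voter-{i:04d}" for i in range(1000)]
sample = select_sample(roll, b"public-beacon-output", 10)
# Re-running with the same inputs yields the identical sample.
assert sample == select_sample(roll, b"public-beacon-output", 10)
```

In practice the seed itself must come from verifiable randomness, such as the commit-then-reveal approach described under Verifiable Randomness above.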

CISA members that work on this project include Alan T. Sherman and Christopher D. Nguyen.

Scantegrity

Scantegrity is a family of security enhancements for optical scan voting systems, providing such systems with end-to-end (E2E) verifiability of election results. Each version of the system uses privacy-preserving confirmation codes to allow a voter to verify that their ballot is included unmodified in the final tally. Because the system relies on cryptographic techniques, the ability to validate an election outcome is software independent and also independent of faults in the physical chain of custody of the paper ballots. The system was developed by a team of researchers including cryptographers David Chaum and Ron Rivest.
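The following toy sketch illustrates the role of confirmation codes in the abstract; it is a hypothetical construction for intuition, not Scantegrity's actual cryptography. The authority derives a short code for each (ballot, choice) pair from a secret key, and after the election publishes (ballot id, code) pairs, so a voter can check that their ballot was counted without the public board revealing which choice they made:

```python
import hashlib
import hmac
import secrets

# Hypothetical secret key held only by the election authority.
KEY = secrets.token_bytes(32)

def confirmation_code(ballot_id: str, choice: str) -> str:
    """Short code printed next to a choice on the ballot. Without the key,
    a published code reveals nothing about which choice it corresponds to."""
    mac = hmac.new(KEY, f"{ballot_id}:{choice}".encode(), hashlib.sha256)
    return mac.hexdigest()[:6].upper()

# After the election, the public board lists (ballot_id, code) for each cast vote.
board = {("B-0042", confirmation_code("B-0042", "Alice"))}

# The voter who marked "Alice" on ballot B-0042 checks the code from their receipt:
assert ("B-0042", confirmation_code("B-0042", "Alice")) in board
```

The real systems add cryptographic mixes and audits so that even the authority cannot link published codes to voters' choices; this sketch captures only the voter-side check.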

Scantegrity II prints the confirmation codes in invisible ink to improve usability and dispute resolution. Two versions of the system are currently under research and development: Scantegrity III and Remotegrity. Scantegrity III further improves usability through the addition of a receipt printer. Remotegrity is an Internet version of the Scantegrity system.

CISA members that have worked on this project include Alan T. Sherman, Richard T. Carback III, Russell A. Fink, and John Conway.

This description is a reorganization of the information presented on the Scantegrity website.

Punchscan

Punchscan is the predecessor of the Scantegrity system. It was the first vote-capture system to offer full end-to-end (E2E) verifiability of election results. Punchscan moves beyond ordinary paper audit trails, offering a far more robust and accessible way for voters to become involved in election oversight. The system was invented by cryptographer David Chaum.

CISA members that worked on this project include Alan T. Sherman and Richard T. Carback III.

This description was taken from the Scantegrity website.

Communication Networks

Challenged Sensor Internetworks

We are exploring mechanisms to combine heterogeneous, wireless sensor networks into delay and disruption tolerant internetworks. Our work describes properties of these systems and provides algorithms for overlay path discovery, congestion modeling, and fragmentation. This work enables unique concepts such as the Solar System Internet.

CISA members that worked in this area include Edward J. Birrane.

Cloud Computing

Secure Cloud Computation

CISA members that worked in this area include F. John Krautheim.

Cloud Forensics

When investigating suspected crimes in and against Infrastructure-as-a-Service (IaaS) cloud computing environments, forensic examiners are poorly equipped to deal with the technological and legal challenges. Because data in the cloud are remote, distributed, and elastic, these challenges include understanding the cloud environment, acquiring and analyzing data remotely, and visualizing changes in virtualized data. Today, digital forensics for cloud computing is challenging at best. This work identifies important issues in this new field and develops practical forensic tools and techniques to facilitate forensic examinations in the cloud.

We are working to develop practical forensic tools and techniques to facilitate forensic examinations of the cloud. Forensics capabilities for cloud computing stand to impact cloud adoption on a global scale. As a result of this work, corporate decision makers, government policy makers, researchers, law enforcement, and forensic examiners will be better able to evaluate the risks of cloud computing, conduct forensic exams, and guide future research and innovation.

For an example search warrant for IaaS cloud computing, see this page.

CISA members that work on this project include Josiah Dykstra and Alan T. Sherman.

Trusted Computing

Trusted Platform Modules

Trusted Platform Modules (TPMs) are secure cryptoprocessors that provide cryptographic primitives and services to otherwise insecure hardware. Services they provide include pseudorandom number generation, remote attestation (signed reports of platform measurements), sealing (encryption bound to platform state), and binding (encryption with a TPM-held key). The hardware is tamper-resistant; it destroys its cryptographic keys if it detects tampering.

We have applied TPMs to provide integrity to the voting process by ensuring correctness of booted software. Scantegrity uses TPMs to increase assurance without being dependent on TPMs for security.
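The integrity guarantee for booted software rests on the TPM's platform configuration registers (PCRs), which can be updated only by hashing a new measurement into the current register value. The following minimal sketch of the extend operation is illustrative and not tied to any particular TPM library:

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style PCR extend: new PCR = H(old PCR || H(measurement)).
    The register can never be set directly, only extended."""
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

# A measured boot hashes each stage into the PCR in order.
pcr = bytes(32)  # PCRs start zeroed at platform reset
for stage in [b"bootloader", b"kernel", b"voting-app"]:
    pcr = extend(pcr, stage)

# Changing any stage, or the boot order, produces a different final PCR value,
# so data sealed to the expected value would fail to unseal on a tampered boot.
```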

CISA members that worked in this area include Russell A. Fink and Richard T. Carback III.

SecurityEmpire

In any secure environment, the user is the weakest link in security. SecurityEmpire is a new interactive multiplayer computer game, aimed at high school students, that teaches Information Assurance (IA) concepts without assuming any prior security experience. The game challenges players to build green energy systems (e.g., solar, geothermal, wind) while engaging in sound IA practices and avoiding security missteps.

The project addresses two threats to cyber safety: users act without thinking about the consequences of their actions; and users lack awareness of basic IA concepts. This project contributes to the DHS cyber initiative “Stop. Think. Connect.”.

In contrast with traditional teaching methods, educational games hold promise for greater student engagement and learning. For students with access to computers, computer games offer a better chance of engagement than board and card games: such students already spend many hours playing computer games, and computer games can be copied and distributed more cheaply and efficiently.

The game will be fielded and tested at Meade Senior High School (MHS) in Anne Arundel County as a stand-alone web game accessible via CISA servers. A second version is being developed as a Facebook application. Students will test the game, provide feedback, and suggest improvements.

Play the game now!

This work is jointly led by Alan T. Sherman (CISA), Marc Olano (Game Development Track), and Linda Oliva (Dept. of Education), and is supported by the National Science Foundation (NSF).

CISA members that work on this project include Alan T. Sherman and Oliver Kubik.

A research paper about SecurityEmpire was presented at 3GSE’14.

Information Assurance Education

Cyber Battle Lab

The Cyber Battle Lab is a joint venture with Capitol College. CISA serves as a member of the Advisory Board.

CISA members that work on this project include Alan T. Sherman.

Cyber Defense Exercises (CDX)

The CDX project is the predecessor to the Cyber Battle Lab.

Cyber defense exercises (CDXs) are hands-on information assurance exercises used in the UMBC computer science undergraduate and graduate curricula. Each exercise is organized in a flexible fashion to facilitate varied use for different courses, levels, and available time. During each exercise, students engage in structured activities using a virtual machine that is run in a lab or on a laptop from a mobile cart that can be rolled into any classroom. The virtual machines are configured to permit a student to make mistakes safely while acting as the system administrator, without adversely affecting any other users or systems.

CISA members that worked on these exercises include Richard T. Carback III.

Cybersecurity Assessment Tools

To help universities better prepare the substantial number of cybersecurity professionals needed, we are creating infrastructure for a rigorous evidence-based improvement of cybersecurity education. For more information, visit our project website.

Creating a Cybersecurity Concept Inventory: A Status Report on the CATS Project.

We report on the status of our Cybersecurity Assessment Tools (CATS) project, which is creating and validating a concept inventory for cybersecurity that assesses the quality of instruction of any first course in cybersecurity. In fall 2014, we carried out a Delphi process that identified core concepts of cybersecurity. In spring 2016, we interviewed twenty-six students to uncover their understandings and misconceptions about these concepts. In fall 2016, we generated our first assessment tool: a draft Cybersecurity Concept Inventory (CCI) comprising approximately thirty multiple-choice questions. Each question targets a concept; incorrect answers are based on misconceptions observed during the interviews. This year we are validating the draft CCI using cognitive interviews, expert reviews, and psychometric testing. In this paper, we highlight our progress to date in developing the CCI. Refer to the paper here.

CISA members that worked on this project include Alan T. Sherman, David DeLatte, Enis Golaszewski, Michael Neary, Konstantinos Patsourakos, Dhananjay Phatak, Travis Scheponik, and Linda Oliva (Dept. of Education) from the University of Maryland, Baltimore County, in collaboration with Geoffrey L. Herman (CS) and Julia Thompson (CS) from the University of Illinois at Urbana-Champaign.

Appeared in the proceedings of the 2017 National Cyber Summit (June 6-8, 2017, Huntsville, AL).

Identifying Core Concepts of Cybersecurity: Results of Two Delphi Processes.

This paper presents and analyzes results of two Delphi processes that polled cybersecurity experts to rate cybersecurity topics based on importance, difficulty, and timelessness. These ratings can be used to identify core concepts: cross-cutting ideas that connect knowledge in the discipline. The first Delphi process identified core concepts that should be learned in any first course on cybersecurity. The second identified core concepts that any cybersecurity professional should know upon graduating from college. Despite the rapidly growing demand for cybersecurity professionals, it is not clear what defines foundational cybersecurity knowledge. Initial data from the Delphi processes lay a foundation for defining the core concepts of the field and, consequently, provide a common starting point to accelerate the development of rigorous cybersecurity education practices. These results provide a foundation for developing evidence-based educational cybersecurity assessment tools that will identify and measure effective methods for teaching cybersecurity. The Delphi results can also be used to inform the development of curricula, learning exercises, and other educational materials and policies. Refer to the paper here.

CISA members that worked on this project include Alan T. Sherman, Geet Parekh, David DeLatte, Dhananjay Phatak, Travis Scheponik, and Linda Oliva (Dept. of Education) from the University of Maryland, Baltimore County, in collaboration with Geoffrey L. Herman (CS) from the University of Illinois at Urbana-Champaign.

This work was supported in part by the U.S. Department of Defense under CAE-R Grant H98230-15-1-0294 and Grant H98230-15-1-0273, in part by the National Science Foundation under Grant SFS 1241576, and in part by the NSF under a subcontract of INSuRE under Grant 1344369.

This article was accepted and published in a 2017 issue of an IEEE journal.

Cybersecurity: Exploring core concepts through six scenarios.

The authors introduce and explain the core concepts of cybersecurity through six engaging practical scenarios. Presented as case studies, the scenarios illustrate how experts may reason through security challenges, managing trust and information in the adversarial cyber world. The concepts revolve around adversarial thinking, including understanding the adversary; defining security goals; identifying targets, vulnerabilities, threats, and risks; and devising defenses. They also include dealing with confidentiality, integrity, availability (known as the “CIA triad”), authentication, key management, physical security, and social engineering. The authors hope that these scenarios will inspire students to explore this vital area more deeply. The target audience is anyone who is interested in learning about cybersecurity, including those with little to no background in the field. This article will also interest those who teach cybersecurity and are seeking examples and structures for explaining its concepts. For students and educators, the authors include selected misconceptions they observed in student responses to the scenarios. The contributions are novel educational case studies, not original technical research. The scenarios comprise responding to an e-mail about lost luggage containing specifications of a new product, delivering packages by drones, explaining a suspicious database input error, designing a corporate network that separates public and private segments, verifying compliance with the Nuclear Test Ban Treaty, and exfiltrating a USB stick from a top-secret government facility. Refer to the paper here.

CISA members that worked on this project include Alan T. Sherman, David DeLatte, Michael Neary, Dhananjay Phatak, Travis Scheponik, and Linda Oliva (Dept. of Education) from the University of Maryland, Baltimore County, in collaboration with Geoffrey L. Herman (CS) and Julia Thompson (CS) from the University of Illinois at Urbana-Champaign.

This work was supported in part by the U.S. Department of Defense under CAE-R grants H98230-15-1-0294 and H98230-15-1-0273, and by the National Science Foundation under SFS grant 1241576.

This paper was published online on 27 Sep 2017 in the journal Cryptologia.

How Students Reason about Cybersecurity Concepts.

Despite the documented need to train and educate more cybersecurity professionals, we have little rigorous evidence to inform educators on effective ways to engage, educate, or retain cybersecurity students. To begin addressing this gap in our knowledge, we are conducting a series of think-aloud interviews with cybersecurity students to study how students reason about core cybersecurity concepts. We have recruited these students from three diverse institutions: University of Maryland, Baltimore County, Prince George’s Community College, and Bowie State University. During these interviews, students grapple with security scenarios designed to probe student understanding of cybersecurity, especially adversarial thinking. We are analyzing student statements using a structured qualitative method, novice-led paired thematic analysis, to document student misconceptions and problematic reasoning. We intend to use these findings to develop Cybersecurity Assessment Tools that can help us assess the effectiveness of pedagogies. These findings can also inform the development of curricula, learning exercises, and other educational materials and policies. Refer to the paper here.

CISA members that worked on this project include Alan T. Sherman, David DeLatte, Dhananjay Phatak, Travis Scheponik, and Linda Oliva (Dept. of Education) from the University of Maryland, Baltimore County, in collaboration with Geoffrey L. Herman (CS) and Julia Thompson (CS) from the University of Illinois at Urbana-Champaign.

This work was supported in part by the U.S. Department of Defense under CAE-R grants H98230-15-1-0294 and H98230-15-1-0273 and by the National Science Foundation under SFS grant 1241576.

Student Misconceptions about Cybersecurity Concepts: Analysis of Think-Aloud Interviews.

We conducted an observational study to document student misconceptions about cybersecurity using thematic analysis of 25 think-aloud interviews. By understanding patterns in student misconceptions, we provide a basis for developing rigorous evidence-based recommendations for improving teaching and assessment methods in cybersecurity and inform future research. This study is the first to explore student cognition and reasoning about cybersecurity. We interviewed students from three diverse institutions. During these interviews, students grappled with security scenarios designed to probe their understanding of cybersecurity, especially adversarial thinking. We analyzed student statements using a structured qualitative method, novice-led paired thematic analysis, to document patterns in student misconceptions and problematic reasoning that transcend institutions, scenarios, or demographics. Themes generated from this analysis describe a taxonomy of misconceptions but not their causes or remedies. Four themes emerged: overgeneralizations, conflated concepts, biases, and incorrect assumptions. Together, these themes reveal that students generally failed to grasp the complexity and subtlety of possible vulnerabilities, threats, risks, and mitigations, suggesting a need for instructional methods that engage students in reasoning about complex scenarios with an adversarial mindset. These findings can guide teachers’ attention during instruction and inform the development of cybersecurity assessment tools that enable cross-institutional assessments that measure the effectiveness of pedagogies. Refer to the paper here.

CISA members that worked on this project include Alan T. Sherman, Enis Golaszewski, Konstantinos Patsourakos, and Linda Oliva (Dept. of Education) from the University of Maryland, Baltimore County, in collaboration with Geoffrey L. Herman (CS) and Julia D. Thompson (CS) from the University of Illinois at Urbana-Champaign.

Investigating Crowdsourcing to Generate Distractors for Multiple-Choice Assessments.

We present and analyze results from a pilot study that explores how crowdsourcing can be used in the process of generating distractors (incorrect answer choices) for multiple-choice concept inventories (conceptual tests of understanding). To our knowledge, we are the first to propose and study this approach. Using Amazon Mechanical Turk, we collected approximately 180 open-ended responses to several question stems from the Cybersecurity Concept Inventory of the Cybersecurity Assessment Tools Project and from the Digital Logic Concept Inventory. We generated preliminary distractors by filtering responses, grouping similar responses, selecting the four most frequent groups, and refining a representative distractor for each of these groups.
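The group-and-select step can be sketched as follows. The naive exact-match normalization here is only a stand-in for the manual grouping of similar responses described above, and the example responses are invented:

```python
from collections import Counter

def top_distractor_groups(responses: list[str], k: int = 4) -> list[tuple[str, int]]:
    """Group open-ended responses (here by whitespace/case normalization, a
    crude proxy for similarity grouping) and return the k most frequent
    groups as candidate distractors with their counts."""
    normalized = [" ".join(r.lower().split()) for r in responses]
    return Counter(normalized).most_common(k)

# Invented example responses to a question stem.
responses = [
    "The password is too short",
    "the  password is too short",
    "Weak encryption",
    "No firewall",
    "weak encryption",
    "Phishing attack",
]
groups = top_distractor_groups(responses)
# The most frequent groups become drafts to refine into distractors.
```

In a real pipeline, each selected group would then be refined by hand into a single well-worded distractor, as the paragraph above describes.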

We analyzed our data in two ways. First, we compared the responses and resulting distractors with those from the aforementioned inventories. Second, we obtained feedback from additional Amazon Mechanical Turk subjects on the resulting new draft test items (including distractors). Challenges in using crowdsourcing include controlling the selection of subjects and filtering out responses that do not reflect genuine effort. Despite these challenges, our results suggest that crowdsourcing can be a very useful tool for generating effective distractors (those attractive to subjects who do not understand the targeted concept). Our results also suggest that this method is faster, easier, and cheaper than the traditional method of having one or more experts draft distractors, building on think-aloud interviews with subjects to uncover their misconceptions. Our results are significant because generating effective distractors is one of the most difficult steps in creating multiple-choice assessments. Refer to the paper here.

CISA members that worked on this project include Alan T. Sherman, Travis Scheponik, Enis Golaszewski, and Linda Oliva (Dept. of Education) from the University of Maryland, Baltimore County, in collaboration with Geoffrey L. Herman (CS) and Spencer Offenberger (CS) from the University of Illinois at Urbana-Champaign, and Peter A. H. Peterson (CS) from the University of Minnesota Duluth.

This work was supported in part by the U.S. Department of Defense under CAE-R grants H98230-15-1-0294, H98230-15-1-0273, H98230-17-1-0349, and H98230-17-1-0347; and by the National Science Foundation under SFS grants 1241576, 1753681, and 1819521, and DGE grant 1820531.