Deleting Collected Digital Evidence by Exploiting a Widely Adopted Hardware Write Blocker
In this primary work we call attention to the importance of integrating security testing into the process of testing digital forensic tools. We postulate that digital forensic tools are increasing in features (such as network imaging), becoming networkable, and are being proposed as forensic cloud services. This raises the need for testing the security of these tools, especially since digital evidence integrity is of paramount importance. At the time of conducting this work, little to no published anti-forensic research had focused on attacks against the forensic tools/process. We used the TD3, a popular, validated, touch-screen disk duplicator and hardware write blocker with networking capabilities, and designed an attack that corrupted the integrity of the destination drive (the drive holding the duplicated evidence) without the user's knowledge. By also modifying and repackaging the firmware update, we illustrated that a potential adversary is capable of leveraging a phishing attack scenario in order to deceive digital forensic practitioners into updating the device with a malicious operating system. The same attack scenario may also be carried out by a disgruntled insider. The results also raise the question of whether security standards should be drafted and adopted by digital forensic tool makers.
Bytewise Approximate Matching: The Good, The Bad, and The Unknown
Hash functions are established and well known in digital forensics, where they are commonly used for proving integrity and for file identification (i.e., hash all files on a seized device and compare the fingerprints against a reference database). However, with respect to the latter operation, an active adversary can easily defeat this approach because traditional hashes are designed to be sensitive to any alteration of the input; the output changes significantly if even a single bit is flipped. Therefore, researchers developed approximate matching, a newer and less prominent area conceived as a more robust counterpart to traditional hashing. Since the conception of approximate matching, the community has constructed numerous algorithms, extensions, and additional applications for this technology, and is still working on novel concepts to improve the status quo. In this survey article, we conduct a high-level review of the existing literature from a non-technical perspective and summarize the existing body of knowledge in approximate matching, with a special focus on bytewise algorithms. Our contribution allows researchers and practitioners to receive an overview of the state of the art of approximate matching so that they may understand the capabilities and challenges of the field. In short, we present the terminology, use cases, classification, requirements, testing methods, algorithms, applications, and a list of primary and secondary literature.
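To make the contrast concrete, here is a minimal, illustrative sketch: a cryptographic hash (SHA-256) changes completely when a single bit of the input is flipped, while a toy byte-level similarity measure still scores the two inputs as nearly identical. The n-gram Jaccard comparison below is an assumed stand-in for real bytewise approximate matching algorithms such as ssdeep or sdhash, not a reimplementation of any of them.

```python
# Contrast a cryptographic hash (flips completely on a one-bit change) with a
# toy byte-level similarity score. The n-gram Jaccard comparison is only a
# stand-in for real bytewise approximate matching algorithms.
import hashlib

def ngrams(data: bytes, n: int = 7) -> set:
    """Collect the set of n-byte substrings of the input."""
    return {data[i:i + n] for i in range(len(data) - n + 1)}

def toy_similarity(a: bytes, b: bytes) -> float:
    """Jaccard similarity of n-gram sets, in [0, 1]."""
    sa, sb = ngrams(a), ngrams(b)
    return len(sa & sb) / len(sa | sb) if (sa | sb) else 1.0

original = b"The quick brown fox jumps over the lazy dog" * 100
tampered = bytearray(original)
tampered[len(tampered) // 2] ^= 0x01          # flip a single bit
tampered = bytes(tampered)

print(hashlib.sha256(original).hexdigest())   # completely different digests...
print(hashlib.sha256(tampered).hexdigest())
print(f"toy similarity: {toy_similarity(original, tampered):.3f}")  # ...yet nearly 1.0 similar
```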
CuFA: A More Formal Definition for Digital Forensic Artifacts
The term “artifact” currently does not have a formal definition within the domain of cyber/digital forensics, resulting in a lack of standardized reporting, linguistic understanding between professionals, and efficiency. In this paper we propose a new definition based on a survey we conducted, literature usage, prior definitions of the word itself, and similarities with archival science. This definition includes required fields that all artifacts must have and encompasses the notion of curation. Thus, we propose using a new term – curated forensic artifact (CuFA) – to address items which have been cleared for entry into a CuFA database (one implementation, the Artifact Genome Project, abbreviated as AGP, is under development and briefly outlined). An ontological model encapsulates these required fields while utilizing a lower-level taxonomic schema. We use the Cyber Observable eXpression (CybOX) project due to its rising popularity and rigorous classifications of forensic objects. Additionally, we suggest some improvements on its integration into our model and identify higher-level location categories to illustrate tracing an object from creation through investigative leads. Finally, a step-wise procedure for researching and logging CuFAs is devised to accompany the model.
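As a purely illustrative aside, the sketch below shows what a curated-artifact record with required fields might look like in code. The field names are assumptions chosen for illustration; they are not the schema defined in the paper, the AGP database, or the CybOX object model.

```python
# Hypothetical sketch of a curated forensic artifact (CuFA) record. The field
# names are illustrative assumptions, not the paper's actual schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class CuFA:
    name: str                      # human-readable artifact name
    location: str                  # where the artifact was found (path, registry key, ...)
    origin: str                    # application/OS activity that created it
    evidentiary_value: str         # why it matters to an investigation
    curated_by: str                # analyst or process that vetted the entry
    curated_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# Example entry an analyst might submit for curation before database entry.
artifact = CuFA(
    name="Chrome visited URLs",
    location="~/.config/google-chrome/Default/History",
    origin="Google Chrome browsing activity",
    evidentiary_value="Reconstructs a user's web activity timeline",
    curated_by="examiner-01",
)
print(artifact)
```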
“Cyber World” as a Theme for a University-wide First-year Common Course
Nowadays we all live in a cyber world and use the internet for emailing, banking, streaming video, shopping, reading news, or other activities. Given all the time people spend online, it is important that all students (regardless of their major) learn some basics about living in a cyber world, e.g., strategies for online safety, the impact of artificial intelligence, digital forensics, or ancestry.com. To facilitate students from many majors to learn about important issues related to the internet, eight faculty from a variety of disciplines at the University of New Haven integrated the theme of Cyber World into our team-taught, first-year experience course, also referred to as the “Common Course.” The Common Course’s primary purpose is to enable students to develop evidence-based arguments and to challenge their own and others’ assumptions in relation to that evidence. Each Common Course class focuses on a broad topic (e.g., Justice, Happiness, or Identity) that instructors use as a touch point to facilitate critical thinking. In Cyber World, however, the topic is given stronger focus, and all students in the class are expected to come away with specific cyber-related knowledge. A special challenge is that the majority of the 160 students are from non-STEM majors. Given the varied background of students, this course covers a variety of topics such as sharing DNA with ancestry.com, protecting against identity theft, detecting fake news, and oversharing personal information. The course is taught by eight faculty members from four different colleges having expertise in a variety of disciplines. An important side effect of this faculty diversity is that interdisciplinary collaborations among faculty are promoted. Our paper has three significant contributions: (1) We present the eight topics related to living in a cyber world that we chose for this course, including our rationale for why they are appropriate and relevant; (2) We summarize how we integrated the Cyber World topics into the structure of the Common Course, which includes a discussion of the challenges we faced; and (3) We summarize some initial results on how students perceived their experience as well as how they performed compared to other Common Course sections/topics.
A Cyber Forensics Needs Analysis Survey: Revisiting the Domain's Needs a Decade Later
The number of successful cyber attacks continues to increase, threatening financial and personal security worldwide. Cyber/digital forensics is undergoing a paradigm shift in which evidence is frequently massive in size, demands live acquisition, and may be insufficient to convict a criminal residing in another legal jurisdiction. This paper presents the findings of the first broad needs analysis survey in cyber forensics in nearly a decade, aimed at obtaining an updated consensus of professional attitudes in order to optimize resource allocation and to prioritize problems and possible solutions more efficiently. Results from the 99 respondents gave compelling testimony that the following will be necessary in the future: 1) better education/training/certification (opportunities, standardization, and skill-sets); 2) support for cloud and mobile forensics; 3) backing for and improvement of open-source tools; 4) research on encryption, malware, and trail obfuscation; 5) revised laws (specific, up-to-date, and which protect user privacy); 6) better communication, especially between/with law enforcement (including establishing new frameworks to mitigate problematic communication); and 7) more personnel and funding.
DROP (DRone Open source Parser) Your Drone: Forensic Analysis of the DJI Phantom III
The DJI Phantom III drone has already been used for malicious activities (to drop bombs, remote surveillance and plane watching) in 2016 and 2017. At the time of writing, DJI was the drone manufacturer with the largest market share. Our work presents the primary thorough forensic analysis of the DJI Phantom III drone, and the primary account for proprietary file structures stored by the examined drone. It also presents the forensically sound open source tool DRone Open source Parser (DROP) that parses proprietary DAT files extracted from the drone's nonvolatile internal storage. These DAT files are encrypted and encoded. The work also shares preliminary findings on TXT files, which are also proprietary, encrypted, encoded files found on the mobile device controlling the drone. These files provided a slew of data such as GPS locations, battery levels, flight time, etc. By extracting data from the controlling mobile device and the drone, we were able to correlate data and link the user to a specific device based on extracted metadata. Furthermore, results showed that the best mechanism to forensically acquire data from the tested drone is to manually extract the SD card by disassembling the drone. Our findings illustrated that the drone should not be turned on: turning it on not only changes data on the drone by creating a new DAT file, but may also delete stored data if the drone's internal storage is full.
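For readers unfamiliar with parsing binary flight logs, the sketch below shows the general pattern of walking fixed-size telemetry records. The record layout (timestamp, latitude, longitude, battery percentage) is a hypothetical assumption made purely for illustration; the actual DAT format handled by DROP is proprietary, encoded, and not reproduced here.

```python
# Hypothetical sketch of parsing fixed-size telemetry records from an
# already-decoded flight log. The record layout below is an assumption for
# illustration only and differs from the real, proprietary DAT format.
import struct
from typing import Iterator, NamedTuple

class TelemetryRecord(NamedTuple):
    timestamp: float
    latitude: float
    longitude: float
    battery_pct: float

RECORD_FMT = "<dddf"                       # little-endian, 28 bytes per record
RECORD_SIZE = struct.calcsize(RECORD_FMT)

def parse_records(path: str) -> Iterator[TelemetryRecord]:
    """Yield telemetry records from a decoded binary log file."""
    with open(path, "rb") as fh:
        while chunk := fh.read(RECORD_SIZE):
            if len(chunk) < RECORD_SIZE:
                break                      # ignore a trailing partial record
            yield TelemetryRecord(*struct.unpack(RECORD_FMT, chunk))

if __name__ == "__main__":
    for rec in parse_records("decoded_flight_log.bin"):   # hypothetical file name
        print(rec.timestamp, rec.latitude, rec.longitude, rec.battery_pct)
```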
Forensic State Acquisition from Internet of Things (FSAIoT): A General Framework and Practical Approach for IoT Forensics through IoT Device State Acquisition
IoT device forensics is a difficult problem given that manufactured IoT devices are not standardized, many store little to no historical data, and are always connected, making them extremely volatile. The goal of this paper was to address these challenges by presenting a primary account of a general framework and practical approach we term Forensic State Acquisition from Internet of Things (FSAIoT). We argue that by leveraging the acquisition of the state of IoT devices (e.g. whether an IoT lock is open or locked), it becomes possible to paint a clear picture of events that have occurred. To this end, FSAIoT consists of a centralized Forensic State Acquisition Controller (FSAC) employed in three state collection modes: controller to IoT device, controller to cloud, and controller to controller. We present a proof of concept implementation using openHAB -- a device agnostic open source IoT device controller -- and self-created scripts, to resemble an FSAC implementation. Our proof of concept employed an Insteon IP camera as a controller to device test, an Insteon Hub as a controller to controller test, and a Nest thermostat as a controller to cloud test. Our findings show that it is possible to practically pull forensically relevant state data from IoT devices. Future work and open research problems are shared.
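A minimal sketch of the kind of timestamped state logging FSAIoT describes is given below, assuming an openHAB instance that exposes its standard REST API at /rest/items. The host, port, polling interval, and CSV log format are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of timestamped IoT state logging in the spirit of FSAIoT's
# controller-based collection, assuming an openHAB instance with its standard
# REST API at /rest/items. Host, port, interval, and log format are assumptions.
import csv
import time
from datetime import datetime, timezone

import requests

OPENHAB_URL = "http://localhost:8080/rest/items"   # assumed local openHAB controller
POLL_SECONDS = 30

def poll_states(log_path: str = "iot_state_log.csv") -> None:
    """Append a timestamped snapshot of every item's state to a CSV log."""
    with open(log_path, "a", newline="") as fh:
        writer = csv.writer(fh)
        while True:
            snapshot_time = datetime.now(timezone.utc).isoformat()
            for item in requests.get(OPENHAB_URL, timeout=10).json():
                # Each openHAB item reports at least a name and a current state.
                writer.writerow([snapshot_time, item["name"], item["state"]])
            fh.flush()
            time.sleep(POLL_SECONDS)

if __name__ == "__main__":
    poll_states()
```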
First Year Students' Experience in a Cyber World Course - An Evaluation
Although cybersecurity is a major present concern, it is not a required subject in university. In response, we developed Cyber World, which introduces students to eight highly important cybersecurity topics (primarily taught by non-cybersecurity experts). We embedded it into our critical-thinking Common Course (core curriculum), which is a team-taught first-year experience required for all students. Cyber World was first taught in Fall 2018 to a cohort of over 150 students from various majors at the University of New Haven. This article presents the evaluation of our Fall-taught course. In detail, we compare the performance of Cyber World students to other Common Course sections that ran in parallel and conclude that, despite the higher workload, students performed equally well. Furthermore, we assess the students' development throughout the course with respect to their cybersecurity knowledge, where our results indicate a significant gain of knowledge. Note that this article also presents the idea and topics of Cyber World; however, a detailed explanation has been released previously.
Transitioning Virtual-Only Group Therapy for Substance Use Disorder Patients to a Hybrid Model
Tyler S Oesterle, Nicholas L Bormann, Domenic A Ochal, Stephan Arndt, and Scott A Breitinger (Mayo Clinic, Rochester, MN; University of Iowa, Iowa City, IA).
Telehealth is associated with a myriad of benefits; however, little is known regarding substance use disorder (SUD) treatment outcomes when participants join group therapy sessions in a combined in-person and virtual setting (hybrid model). We sought to determine if treatment completion rates differed. Patients and Methods: Policy changes caused by the COVID-19 pandemic created a naturalistic, observational cohort study at seven intensive outpatient (IOP) programs in rural Minnesota. Virtual-only delivery occurred 6/1/2020-6/30/2021, while hybrid groups occurred 7/1/2021-7/31/2022. Data were evaluated retrospectively for participants who initiated and discharged treatment during the study period. Participants were IOP group members 18 years and older with a SUD diagnosis who both entered and discharged treatment during the 26-month period. A consecutive sample of 1502 participants (181-255 per site) was available, with 644 removed: 576 discharged after the study conclusion, 49 were missing either enrollment or discharge data, 14 transferred sites during treatment, and 5 initiated treatment before the study initiation. Helmert contrasts evaluated the impact of hybrid group exposure. Results: A total of 858 individuals were included. Data were deidentified and not drawn from the medical chart, preventing specific demographics; however, the overall IOP sample for 2020-2022, from which the sample was derived, was 29.8% female, and 64.1% were 18-40 years of age. For completed treatment, hybrid group exposure relative to virtual-only had a univariate odds ratio of 1.88 (95% CI: 1.50-2.41, p < 0.001). No significant difference was seen across IOP sites. Conclusion: These results describe a novel hybrid group approach to virtual care for SUDs with outcome data not previously documented in the literature. While virtual treatment delivery can increase access, these results suggest a benefit is derived from including an in-person option. Further research is needed to identify how an in-person component may change dynamics and whether it can be replicated in virtual-only models. Keywords: substance-related disorders, telemedicine, group psychotherapy, comparative study, outcome assessment, health care.
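As a side note on the reported statistic, the sketch below shows how a univariate odds ratio and its Wald 95% confidence interval are computed from a 2x2 table of treatment completion by group. The counts are hypothetical placeholders, not the study's data; they only illustrate the arithmetic behind a figure such as OR = 1.88 (95% CI: 1.50-2.41).

```python
# Sketch of computing a univariate odds ratio and Wald 95% CI from a 2x2
# table of treatment completion by group. The counts are hypothetical
# placeholders, NOT the study's data.
import math

# rows: hybrid vs virtual-only; columns: completed vs did not complete
a, b = 300, 150   # hybrid:       completed, not completed (hypothetical)
c, d = 200, 200   # virtual-only: completed, not completed (hypothetical)

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # standard error of ln(OR)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI: {lo:.2f}-{hi:.2f}")
```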
Skeletons for Distributed Topological Computation
Parallel implementation of topological algorithms is highly desirable, but the challenges, from reconstructing algorithms around independent threads through to runtime load balancing, have proven to be formidable. This problem, made all the more acute by the diversity of hardware platforms, has led to new kinds of implementation platform for computational science, with sophisticated runtime systems managing and coordinating large thread counts to keep processing elements heavily utilized. While simpler and more portable than direct management of threads, these approaches still entangle program logic with resource management. Similar kinds of highly parallel runtime systems have also been developed for functional languages. Here, however, language support for higher-order functions allows a cleaner separation between the algorithm and 'skeletons' that express generic patterns of parallel computation. We report results on using this technique to develop a distributed version of the Joint Contour Net, a generalization of the Contour Tree to multifields. We present performance comparisons against a recent Haskell implementation using shared-memory parallelism, and initial work on a skeleton for distributed memory implementation that utilizes an innovative strategy to reduce inter-process communication overheads.
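To illustrate the skeleton idea in a generic way, the sketch below defines a higher-order map-reduce skeleton that hides parallel resource management from the caller. The paper's implementation is in Haskell over distributed memory; this Python sketch is only an illustration of separating algorithm logic from the parallel runtime, not the authors' code.

```python
# Generic illustration of the algorithmic-skeleton idea: a higher-order
# map-reduce skeleton that hides parallel resource management from the caller.
from concurrent.futures import ProcessPoolExecutor
from functools import reduce
from typing import Callable, Iterable, Optional, TypeVar

A = TypeVar("A")
B = TypeVar("B")

def map_reduce_skeleton(
    worker: Callable[[A], B],
    combine: Callable[[B, B], B],
    chunks: Iterable[A],
    max_workers: Optional[int] = None,
) -> B:
    """Apply `worker` to each chunk in parallel, then fold results with `combine`."""
    with ProcessPoolExecutor(max_workers=max_workers) as pool:
        partials = pool.map(worker, chunks)
        return reduce(combine, partials)

# Example: histogram a partitioned scalar field, merging per-chunk counts.
def local_histogram(chunk: list) -> dict:
    hist = {}
    for value in chunk:
        hist[value] = hist.get(value, 0) + 1
    return hist

def merge_histograms(h1: dict, h2: dict) -> dict:
    for key, count in h2.items():
        h1[key] = h1.get(key, 0) + count
    return h1

if __name__ == "__main__":
    data = [[1, 2, 2, 3], [3, 3, 4], [1, 4, 4, 4]]
    print(map_reduce_skeleton(local_histogram, merge_histograms, data))
```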