27 research outputs found

    Cybersecurity: Past, Present and Future

    Full text link
The digital transformation has created a new digital space known as cyberspace. This new cyberspace has improved the workings of businesses, organizations, governments, society as a whole, and the day-to-day life of individuals. With these improvements come new challenges, and one of the main challenges is security; the security of this new cyberspace is called cybersecurity. Cyberspace has given rise to new technologies and environments such as cloud computing, smart devices, and the Internet of Things (IoT). To keep pace with these advances, research must expand and new cybersecurity methods and tools must be developed to secure these domains and environments. This book is an effort to introduce the reader to the field of cybersecurity, highlight current issues and challenges, and provide future directions to mitigate or resolve them. The main specializations of cybersecurity covered in the book are software security, hardware security, the evolution of malware, biometrics, cyber intelligence, and cyber forensics. We must learn from the past, evolve our present, and improve the future; based on this objective, the book covers the past, present, and future of these main specializations. The book also examines upcoming areas of research in cyber intelligence, such as hybrid augmented intelligence and explainable artificial intelligence (AI). Human-AI collaboration can significantly increase the performance of a cybersecurity system, and interpreting and explaining machine learning models, i.e., explainable AI, is an emerging field of study with considerable potential to improve the role of AI in cybersecurity. Comment: Author's copy of the book published under ISBN: 978-620-4-74421-
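To make the explainable-AI point above concrete, here is a minimal sketch (not from the book) of one common XAI technique, permutation feature importance, applied to a toy anomaly classifier. The feature names and data are synthetic placeholders.

```python
# Minimal sketch (not from the book): explaining a toy network-anomaly
# classifier with permutation feature importance, one common XAI technique.
# Feature names and data here are synthetic placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

feature_names = ["bytes_sent", "bytes_recv", "duration", "dst_port_entropy"]
X, y = make_classification(n_samples=2000, n_features=4, n_informative=3,
                           n_redundant=1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in accuracy:
# the larger the drop, the more the model relies on that feature.
result = permutation_importance(clf, X_test, y_test, n_repeats=10, random_state=0)
for name, mean_drop in sorted(zip(feature_names, result.importances_mean),
                              key=lambda t: -t[1]):
    print(f"{name:>18s}: {mean_drop:.3f}")
```

An analyst could use such a ranking to see which traffic features drive an alert, which is the kind of human-AI collaboration the book highlights.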

    Toward Secure Services from Untrusted Developers

    Get PDF
We present a secure service prototype built from untrusted, contributed code. The service manages private data for a variety of different users, and user programs frequently require access to other users' private data. However, aside from covert timing channels, no part of the service can corrupt private data or leak it between users or outside the system without permission from the data's owners. Instead, owners may choose to reveal their data in a controlled manner. This application model is demonstrated by Muenster, a job search website that protects both the integrity and secrecy of each user's data. In spite of running untrusted code, Muenster and other services can prevent overt leaks because the untrusted modules are constrained by the operating system to follow pre-specified security policies, which are nevertheless flexible enough for programmers to do useful work. We build Muenster atop Asbestos, a recently described operating system based on a form of decentralized information flow control.
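As a rough intuition for the decentralized information flow control that Asbestos enforces, the sketch below models data carrying secrecy tags and a flow check that only the tag's owner can relax. This is a simplified illustration, not the Asbestos API; all names are hypothetical.

```python
# Simplified illustration (not the Asbestos API) of decentralized information
# flow control: data carries secrecy tags, and a flow is allowed only when the
# destination's label dominates (is a superset of) the source's label.
from dataclasses import dataclass, field

@dataclass
class Labeled:
    value: object
    secrecy: frozenset = field(default_factory=frozenset)  # e.g. {"alice"}

def send(src: Labeled, dst_label: frozenset, declassify_by=frozenset()):
    # An owner may declassify (remove) its own tag to reveal data deliberately.
    effective = src.secrecy - frozenset(declassify_by)
    if not effective <= dst_label:
        raise PermissionError("flow would leak tags: %s" % sorted(effective - dst_label))
    return Labeled(src.value, effective | dst_label)

alice_resume = Labeled("resume.pdf contents", frozenset({"alice"}))
public_channel = frozenset()  # an untrusted module's public output label

send(alice_resume, public_channel, declassify_by={"alice"})  # allowed: owner consents
# send(alice_resume, public_channel)  # would raise PermissionError: leak of "alice"
```

The point mirrors the abstract: untrusted code can compute over labeled data freely, but any attempt to export it without the owner's declassification is blocked by the reference monitor.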

    Security, Privacy, Confidentiality and Integrity of Emerging Healthcare Technologies: A Framework for Quality of Life Technologies to be HIPAA/HITECH Compliant, with Emphasis on Health Kiosk Design

    Get PDF
This dissertation research focused on the following: 1. Determined possible vulnerabilities that exist in multi-user kiosks and the computer systems that make up multi-user kiosk systems. 2. Developed an evaluation system and audit checklist for multi-user kiosk systems, adapted from the Office for Civil Rights (OCR) audit protocols, to address the vulnerabilities identified in our research. 3. Improved the design of a multi-user health kiosk to meet the HIPAA/HITECH standards by incorporating privacy and security (P&S) policies. 4. Explored the feasibility and preliminary efficacy of an intervention to assess the magnitude of differences in users' perceived risk of P&S breaches, as well as the correlation between perceived risk and their intention to use a multi-user health kiosk. A gap analysis demonstrated that we successfully incorporated 81% of our P&S policies into the current design of our kiosk, which is undergoing pilot testing; this exceeds our initial target of 50%. A repeated measures ANOVA was performed on baseline and six-month follow-up data from 36 study participants to measure the magnitude of the change in their perceived risk. The ANOVA found no significant group-by-time (Time*Group) interaction, F(2, 33) = 0.27, p = .77, ηp² = .02, a significant effect of time, F(1, 33) = 4.73, p = .04, ηp² = .13, and no significant group effect, F(2, 33) = 1.27, p = .30, ηp² = .07. The study intervention was thus able to significantly reduce users' perceived risk over time (baseline to six-month follow-up), even though the magnitude of the change was small. We were, however, unable to perform the correlation analysis as intended, since all the kiosk participants included in the analysis intended to use the kiosk both at baseline and at six-month follow-up. These findings will help direct research into methods to reduce perceived risk, as well as into using education and communication to affect human behavior and reduce risky behavior in both internal and external use of new health IT applications and technologies. They could then serve as a framework to drive policy on the privacy and security of health applications, technologies, and health IT systems.
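The dissertation does not state which statistical software was used; the sketch below only illustrates how a comparable group-by-time mixed ANOVA on perceived-risk scores could be run in Python with the pingouin package, on made-up long-format data.

```python
# Illustrative sketch only: the dissertation does not state which software was
# used. This shows how a comparable group x time mixed ANOVA on perceived-risk
# scores could be run with the `pingouin` package on hypothetical data.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)
n = 36  # participants, split across hypothetical study groups
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n), 2),
    "group":   np.repeat(rng.choice(["A", "B", "C"], size=n), 2),
    "time":    np.tile(["baseline", "six_months"], n),
    "perceived_risk": rng.normal(loc=3.0, scale=0.8, size=2 * n),
})

# Mixed ANOVA: 'time' is the within-subject factor, 'group' is between-subjects.
aov = pg.mixed_anova(data=df, dv="perceived_risk", within="time",
                     subject="subject", between="group", effsize="np2")
print(aov[["Source", "F", "p-unc", "np2"]])
```

The "Interaction" row of the output corresponds to the Time*Group effect reported in the abstract, and "np2" is the partial eta-squared effect size.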

    Establishing the digital chain of evidence in biometric systems

    Get PDF
Traditionally, a chain of evidence or chain of custody refers to the chronological documentation, or paper trail, showing the seizure, custody, control, transfer, analysis, and disposition of evidence, whether physical or electronic. Whether in the criminal justice system, military applications, or natural disasters, ensuring the accuracy and integrity of such chains is of paramount importance. Intentional or unintentional alteration, tampering, or fabrication of digital evidence can lead to undesirable effects. We find that, despite the consequences at stake, historically no unique protocol or standardized procedure has existed for establishing such chains; current practices rely on traditional paper trails and handwritten signatures as the foundation of chains of evidence. Copying, fabricating, or deleting electronic data is easier than ever, and establishing equivalent digital chains of evidence has become both necessary and desirable. We propose to treat a chain of digital evidence as a multi-component validation problem that ensures access control, confidentiality, integrity, and non-repudiation of origin. Our framework includes techniques from cryptography, keystroke analysis, digital watermarking, and hardware source identification. The work offers contributions to many of the fields used in the formation of the framework. Related to biometric watermarking, we provide a means for watermarking iris images without significantly impacting biometric performance. Specific to hardware fingerprinting, we establish the ability to verify the source of an image captured by biometric sensing devices such as fingerprint sensors and iris cameras. Related to keystroke dynamics, we establish that user stimulus familiarity is a driver of classification performance. Finally, example applications of the framework are demonstrated with data collected in crime scene investigations, people-screening activities at ports of entry, naval maritime interdiction operations, and mass fatality incident disaster responses.
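To ground the integrity and non-repudiation goals mentioned above, here is a minimal sketch, not the dissertation's framework, of a hash-chained custody log in which each entry commits to its predecessor and is signed with an Ed25519 key (using the `cryptography` package); all record fields and identifiers are illustrative.

```python
# Minimal sketch (not the dissertation's framework): a hash-chained evidence
# log where each entry commits to the previous entry and is signed, giving
# integrity and non-repudiation of origin for custody records.
import hashlib, json, time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

signing_key = Ed25519PrivateKey.generate()   # held by the evidence custodian
verify_key = signing_key.public_key()
FIELDS = ("prev_hash", "timestamp", "action", "item_id", "actor")

def append_entry(chain, action, item_id, actor):
    prev_hash = chain[-1]["entry_hash"] if chain else "0" * 64
    record = {"prev_hash": prev_hash, "timestamp": time.time(),
              "action": action, "item_id": item_id, "actor": actor}
    payload = json.dumps(record, sort_keys=True).encode()
    record["entry_hash"] = hashlib.sha256(payload).hexdigest()
    record["signature"] = signing_key.sign(payload).hex()
    chain.append(record)

def verify_chain(chain):
    prev = "0" * 64
    for rec in chain:
        payload = json.dumps({k: rec[k] for k in FIELDS, }, sort_keys=True).encode() if False else \
                  json.dumps({k: rec[k] for k in FIELDS}, sort_keys=True).encode()
        assert rec["prev_hash"] == prev, "broken chain link"
        assert rec["entry_hash"] == hashlib.sha256(payload).hexdigest(), "tampered entry"
        verify_key.verify(bytes.fromhex(rec["signature"]), payload)  # raises if forged
        prev = rec["entry_hash"]

log = []
append_entry(log, "seized", "iris_image_042", "officer_17")
append_entry(log, "transferred", "iris_image_042", "lab_tech_03")
verify_chain(log)  # passes only if no record was altered, reordered, or forged
```

Altering any field of any record breaks either the hash link or the signature check, which is the property a digital chain of custody needs before layering on the watermarking, keystroke, and hardware-fingerprinting components described in the abstract.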

    Forensic Methods and Tools for Web Environments

    Get PDF
The Web is one of the most exciting and dynamic areas of development in today's technology. However, with such activity, innovation, and ubiquity has come a set of new challenges for digital forensic examiners, making their jobs even more difficult. For examiners to become as effective with evidence from the Web as they currently are with more traditional evidence, they need (1) methods that guide them in how to approach this new type of evidence and (2) tools that accommodate web environments' unique characteristics. In this dissertation, I present my research to alleviate the difficulties forensic examiners currently face with respect to evidence originating from web environments. First, I introduce a framework for web environment forensics, which elaborates on and addresses the key challenges examiners face and outlines a method for approaching web-based evidence. Next, I describe my work to identify extensions installed on encrypted web thin clients using only a sound understanding of these systems' inner workings and the metadata of the encrypted files. Finally, I discuss my approach to reconstructing the timeline of events on encrypted web thin clients by using service provider APIs as a proxy for directly analyzing the device. In each of these research areas, I also introduce structured formats that I customized to accommodate the unique features of the evidence sources while also facilitating tool interoperability and information sharing.
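The sketch below conveys the general idea behind identifying extensions from encrypted-file metadata, not the dissertation's actual method or data: observed (rounded) file sizes on a thin client are matched against a hypothetical catalog of known extensions' file-size profiles.

```python
# Simplified sketch of the idea (not the dissertation's actual method or data):
# match the observed sizes of encrypted extension files on a web thin client
# against a hypothetical catalog of known extensions' file-size profiles.
from collections import Counter

# Hypothetical catalog: extension id -> multiset of (rounded) file sizes in bytes.
KNOWN_EXTENSIONS = {
    "adblock_example":   Counter({4096: 3, 12288: 1, 65536: 2}),
    "notetaker_example": Counter({20480: 2, 8192: 4}),
}

def score_match(observed_sizes, profile, tolerance_buckets=1):
    """Fraction of a profile's files with a size match among the observed files."""
    observed = Counter(observed_sizes)
    matched = 0
    for size, count in profile.items():
        # Encryption may pad files, so also accept neighbouring 4 KiB size buckets.
        candidates = [size + i * 4096
                      for i in range(-tolerance_buckets, tolerance_buckets + 1)]
        available = sum(observed.get(c, 0) for c in candidates)
        matched += min(count, available)
    return matched / sum(profile.values())

def identify_extensions(observed_sizes, threshold=0.8):
    return [ext for ext, profile in KNOWN_EXTENSIONS.items()
            if score_match(observed_sizes, profile) >= threshold]

# Example: sizes gathered from encrypted-file metadata on a hypothetical image.
print(identify_extensions([4096, 4096, 4096, 12288, 65536, 65536, 8192]))
```

A real catalog would be built from the extension store itself, and the matching would use more metadata than size alone, but the example shows why no decryption is needed.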

    Last-Mile TLS Interception: Analysis and Observation of the Non-Public HTTPS Ecosystem

    Get PDF
Transport Layer Security (TLS) is one of the most widely deployed cryptographic protocols on the Internet, providing confidentiality, integrity, and a certain degree of authenticity for the communications between clients and servers. Following Snowden's revelations on US surveillance programs, the adoption of TLS has steadily increased. However, encrypted traffic prevents legitimate inspection, so security solutions such as personal antiviruses and enterprise firewalls may intercept encrypted connections in search of malicious or unauthorized content. The end-to-end property of TLS is thus broken by these TLS proxies (a.k.a. middleboxes) for arguably laudable reasons, yet doing so may pose a security risk. While TLS clients and servers have been analyzed to some extent, such proxies had remained unexplored until recently. We propose a framework for analyzing client-end TLS proxies and apply it to 14 consumer antivirus and parental control applications as they break end-to-end TLS connections. Overall, the security of TLS connections was systematically worsened compared to the guarantees provided by modern browsers. Next, we explore the non-public HTTPS ecosystem, composed of locally trusted, proxy-issued certificates, from the user's perspective and from several countries in residential and enterprise settings. We focus our analysis on the long tail of interception events. We characterize the customers of network appliances, ranging from small and medium businesses and institutes to hospitals, hotels, resorts, insurance companies, and government agencies. We also discover regional cases of traffic-interception malware/adware that mostly rely on the same Software Development Kit (i.e., NetFilter). Our scanning and analysis techniques allow us to identify more middleboxes and intercepting apps than previously found from privileged server vantage points looking at billions of connections. We further perform a longitudinal study over six years of the evolution of a prominent traffic-intercepting adware found in our dataset: Wajam. We expose the TLS interception techniques it has used and the weaknesses it has introduced on hundreds of millions of user devices. This study also (re)opens the neglected problem of privacy-invasive adware by showing how adware sometimes evolves to be more robust than even advanced malware and poses significant detection and reverse-engineering challenges. Overall, whether beneficial or not, TLS interception often has detrimental impacts on security without the end user being alerted.
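As a small client-side illustration of how proxy-issued certificates become visible, the sketch below fetches the certificate actually presented for a well-known host and flags it when the issuer is not among a few expected public CAs. This is only a heuristic and is not the authors' framework; the expected-issuer list is a placeholder assumption.

```python
# Illustrative heuristic (not the paper's framework): detect likely last-mile
# TLS interception by comparing the issuer of the certificate presented for a
# well-known host against public CA organizations we expect to see.
# The expected-issuer list below is a placeholder assumption.
import socket, ssl

EXPECTED_ISSUER_ORGS = {"Google Trust Services", "DigiCert Inc", "Let's Encrypt"}

def observed_issuer(host, port=443):
    ctx = ssl.create_default_context()   # uses the local (possibly augmented) trust store
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()      # parsed leaf certificate
    issuer = dict(rdn[0] for rdn in cert["issuer"])
    return issuer.get("organizationName", issuer.get("commonName", "unknown"))

org = observed_issuer("www.google.com")
if org not in EXPECTED_ISSUER_ORGS:
    print(f"Possible TLS interception: leaf certificate issued by '{org}'")
else:
    print(f"Issuer '{org}' matches an expected public CA")
```

When an antivirus or middlebox intercepts the connection, the leaf certificate is issued by its locally trusted root rather than a public CA, which is exactly the non-public ecosystem the thesis studies at scale.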