
    A Policy Examination of Digital Multimedia Evidence in Police Department Standard Operating Procedures (SOPs)

    2020 will be a year forever marked by the Covid-19 pandemic. The year will also be remembered for the death of George Floyd at the hands of police officer Derek Chauvin. The death was recorded on a bystander’s cell phone and broadcast for the world to see. The video proved pivotal in the prosecution and conviction of Chauvin for Floyd’s death, and it underscored the importance of incorporating video evidence into the investigation and prosecution of crime. Today, police use a variety of video evidence to assist in their investigations. In some cases it may be a small part of the case, whereas in others it may provide vital evidence. There has been an explosion in the number of video sources from which police can now gather evidence. Cellphone videos, private security cameras on homes and businesses, social media postings, and police body cameras all provide possible evidence that must be collected, extracted, and analyzed. In 2019, there were 40 million professionally installed video recording systems and 224 million smartphones in the U.S. alone. Along with the approximately 400,000 body cameras worldwide, an enormous volume of video is available to investigators. It is important for police departments to acquire this video evidence in accordance with legal requirements and industry best practices in order to avoid future legal challenges to the evidence. This study analyzes how police departments around the country handle video evidence in their Standard Operating Procedures (SOPs), using legal requirements and industry best practices as a guideline. The author chose to concentrate on two of the main legal challenges facing law enforcement working with digital evidence today: authentication and integrity. Although sometimes used interchangeably, authentication and integrity present two different challenges when working with digital evidence. Authentication means establishing that the evidence put forth at trial is what the party admitting it into evidence claims it to be. Integrity means ensuring the evidence has not been altered since its original form. In this study, the author concentrates on authentication and integrity specifically in relation to Digital Multimedia Evidence (DME): information of probative value stored in binary form, including but not limited to tape, film, magnetic, and optical media, and/or the information contained therein. The author created a rubric combining best practices identified by industry leaders with legal guidelines set forth by the Federal Rules of Evidence, court cases, and law reviews. The rubric evaluated each department’s SOPs across three phases: Training, Process, and Documentation.

    Informational Power on Twitter: A Mixed-methods Exploration of User Knowledge and Technological Discourse About Information Flows

    Following a number of recent examples in which social media users have been confronted by information flows that did not match their understandings of the platforms, there is a pressing need to examine public knowledge of information flows on these systems, to map how this knowledge lines up against the extant flows of these systems, and to explore the factors that contribute to the construction of knowledge about these systems. There is an immediacy to this issue because, as social media sites become further entrenched as dominant vehicles for communication, knowledge about these technologies will play an ever-increasing role in users’ abilities to gauge the risks of information disclosure, to understand and respond to global information flows, to make meaningful decisions about use and participation, and to take part in conversations about how information flows in these spaces should be governed. Ultimately, knowledge about how information flows through these platforms helps shape users’ informational power. This dissertation responds to that need by investigating the extant state of information flows on the popular social media platform Twitter, examining user knowledge about information flows on Twitter, and exploring how Twitter, Inc.’s messaging to users may affect users’ knowledge construction. Through a mixed-methods approach, comprising a technical analysis of the Twitter platform informed by science and technology studies, a quantitative analysis of survey data gathered from Twitter users and non-users that tested knowledge of different aspects of information flows on Twitter, and a critical discourse analysis of Twitter’s messaging to users in the new-user orientation process, this dissertation theorizes how junctures and disjunctures among the three can impact individual power. Findings suggest that while many of the protocols and algorithmic functions associated with real-time information production and consumption on Twitter are well understood by users and clearly articulated by Twitter, Inc., other aspects of information flows on the platform are not as well understood by users, nor explained in as much detail by Twitter, Inc.: the commodification of user-generated content, the long-term lifecycle of Tweets (such as the archival of Twitter by the Library of Congress), and the differential global flows of information. This dissertation describes the resulting state of users’ informational power as one of “information flow solipsism.”

    Becoming Artifacts: Medieval Seals, Passports and the Future of Digital Identity

    What does a digital identity token have to do with medieval seals? Is the history of passports of any use for enabling the discovery of Internet users' identity when crossing virtual domain boundaries during their digital browsing and transactions? The agility of the Internet architecture and its simplicity of use have been the engines of its growth and success with users worldwide. As it turns out, therein also lies its crux. Internet industry participants have argued that the critical problem businesses face on the Internet is the absence of an identity layer from the core protocols of its logical infrastructure. As a result, cyberspace parallels a global territory without any identification mechanism that is reliable, consistent, and interoperable across domains. This dissertation investigates the steps being taken by Internet stakeholders to resolve these identity problems, through the lenses of historical instances in which similar challenges were tackled by social actors. Social science research addressing Internet identity issues is barely nascent. Research on identification systems in general is either characterized by a paucity of historical perspective or scantily references digital technology and online identification processes. This research is designed to bridge that gap. The general question at its core is: how do social actors, events, or processes enable the historical emergence of authoritative identity credentials for the public at large? The work pursues that line of inquiry through three broad historical case studies: first, the medieval experience with seals used as identity tokens in the signing of deeds that transferred rights, particularly estate rights; second, the modern national state, with its claim to the right to know all individuals on its territory through credentials such as the passport or the national identity card; and finally, viewed from the United States, the ongoing efforts to build an online digital identity infrastructure. Following a process-tracing approach to historical case study, this inquiry presents enlightening connections between the three identity frameworks while further characterizing each. We see how the medieval doctrines of the Trinity and the Eucharist, developed by schoolmen within the Church, accommodated seals as markers of identity, and how the modern state seized on the term 'nationality', which emerged only in the 19th century, and made it into a legal fiction critical to its identification project. Furthermore, this investigation brings analytical insights that enable us to locate the dynamics driving the emergence of these identity systems. An ordering of the contributing factors into sequential categories is proposed in a sociohistorical approach to explain the causal mechanisms at work across these large phenomena. Finally, this research proposes historically informed projections of scenarios as possible pathways to the realization of authoritative digital identity. But that is the beginning of yet another story of identity.

    Applications of Artificial Intelligence to Cryptography

    This paper considers some recent advances in the field of Cryptography using Artificial Intelligence (AI). It specifically considers the applications of Machine Learning (ML) and Evolutionary Computing (EC) to analyze and encrypt data. A short overview is given of Artificial Neural Networks (ANNs) and the principles of Deep Learning using Deep ANNs. In this context, the paper considers: (i) the implementation of EC and ANNs for generating unique and unclonable ciphers; (ii) ML strategies for detecting the genuine randomness (or otherwise) of finite binary strings for applications in Cryptanalysis. The aim of the paper is to provide an overview of how AI can be applied to encrypting data and undertaking cryptanalysis of such data and other data types in order to assess the cryptographic strength of an encryption algorithm, e.g. to detect patterns in intercepted data streams that are signatures of encrypted data. This includes some of the authors’ prior contributions to the field, which are referenced throughout. Applications are presented that include the authentication of high-value documents such as bank notes with a smartphone. This involves using the antenna of a smartphone to read (in the near field) a flexible radio-frequency tag that couples to an integrated circuit with a non-programmable coprocessor. The coprocessor retains ultra-strong encrypted information generated using EC that can be decrypted online, thereby validating the authenticity of the document through the Internet of Things with a smartphone. The application of optical authentication methods using a smartphone and optical ciphers is also briefly explored.
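    The paper's actual ML strategies are not reproduced here, but the randomness-detection task in (ii) can be illustrated with simple statistics. Below is a minimal sketch, assuming common indicators (bit balance, run counts, block entropy, compressibility) that a classifier could use to separate structured strings from random-looking, e.g. encrypted, ones; the feature choices are illustrative assumptions, not the authors' published method.

```python
# Illustrative sketch: statistical features of a finite binary string that a
# classifier could use to judge whether the string "looks" random (e.g.
# encrypted) or structured. Feature choices are assumptions, not the
# authors' published method.
import math
import zlib
from collections import Counter

def bit_features(bits: str) -> dict:
    """Extract simple randomness indicators from a string of '0'/'1' chars."""
    n = len(bits)
    # Monobit balance: close to 0.5 for random data.
    freq = bits.count("1") / n
    # Runs statistic: number of maximal runs of identical bits.
    runs = 1 + sum(1 for a, b in zip(bits, bits[1:]) if a != b)
    # Shannon entropy of 8-bit blocks: near the maximum for random data.
    counts = Counter(bits[i:i + 8] for i in range(0, n - 7, 8))
    total = sum(counts.values())
    entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
    # Compression ratio: random data is nearly incompressible.
    raw = int(bits, 2).to_bytes((n + 7) // 8, "big")
    ratio = len(zlib.compress(raw)) / len(raw)
    return {"freq": freq, "runs": runs / n, "entropy": entropy, "zlib_ratio": ratio}

# A periodic string scores zero block entropy and compresses heavily;
# output of a cryptographic generator should sit near the random baseline.
print(bit_features("01" * 512))
```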

    Architecture for Provenance Systems

    This document covers the logical and process architectures of provenance systems. The logical architecture identifies key roles and their interactions, whereas the process architecture discusses distribution and security. A fundamental aspect of our presentation is its technology-independent nature, which makes it reusable: the principles set out in this document may be applied to different technologies.

    CookiExt: Patching the browser against session hijacking attacks

    Session cookies constitute one of the main attack targets against client authentication on the Web. To counter these attacks, modern web browsers implement native cookie protection mechanisms based on the HttpOnly and Secure flags. While there is a general understanding of the effectiveness of these defenses, no formal result has so far been proved about the security guarantees they convey. With the present paper we provide the first such result, presenting a mechanized proof of noninterference that assesses the robustness of the HttpOnly and Secure cookie flags against both web and network attackers with the ability to perform arbitrary XSS code injection. We then develop CookiExt, a browser extension that provides client-side protection against session hijacking, based on appropriate flagging of session cookies and automatic redirection over HTTPS for HTTP requests carrying these cookies. Our solution improves over existing client-side defenses by combining protection against both web and network attacks, while being designed to minimise its effects on the user's browsing experience. Finally, we report on the experiments we carried out to practically evaluate the effectiveness of our approach.
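    To make the defense concrete, the following is a minimal sketch of the kind of policy the abstract describes, not CookiExt's actual implementation (the real tool is a browser extension): heuristically detected session cookies are flagged HttpOnly and Secure, and HTTP requests that would carry them are upgraded to HTTPS. The name heuristics and function names are assumptions for illustration.

```python
# Illustrative model of a CookiExt-style client-side policy; the heuristics
# and names here are assumptions, not the extension's actual code.
from urllib.parse import urlsplit, urlunsplit

SESSION_HINTS = ("sess", "sid", "auth", "token")  # assumed name heuristics

def looks_like_session_cookie(name: str) -> bool:
    return any(h in name.lower() for h in SESSION_HINTS)

def harden_set_cookie(header: str) -> str:
    """Add HttpOnly and Secure attributes to a Set-Cookie header value."""
    name = header.split("=", 1)[0].strip()
    if not looks_like_session_cookie(name):
        return header
    attrs = {a.strip().lower() for a in header.split(";")[1:]}
    if "httponly" not in attrs:
        header += "; HttpOnly"
    if "secure" not in attrs:
        header += "; Secure"
    return header

def upgrade_request(url: str, cookie_names: set[str]) -> str:
    """Rewrite an HTTP request to HTTPS if it would carry a session cookie."""
    parts = urlsplit(url)
    if parts.scheme == "http" and any(map(looks_like_session_cookie, cookie_names)):
        parts = parts._replace(scheme="https")
    return urlunsplit(parts)

print(harden_set_cookie("sessionid=abc123; Path=/"))
print(upgrade_request("http://example.com/account", {"sessionid"}))
```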

    Lex, Lies & Videotape


    Identification of User Behavioural Biometrics for Authentication using Keystroke Dynamics and Machine Learning

    This thesis focuses on effectively classifying the behavior of users accessing computing devices in order to authenticate them. The authentication is based on keystroke dynamics, which captures users’ behavioral biometrics and applies machine learning concepts to classify them. The users type a strong passcode, ".tie5Roanl", so that their typing pattern can be recorded. Anonymized data from 94 users were collected to carry out the research. From the raw data, features were extracted from attributes based on button-press and action-timestamp events. A support vector machine classifier performs multi-class classification with a one-vs-one decision function to distinguish users. To reduce classification error, it is essential to identify the important features in the raw data; to that end, an efficient feature extraction algorithm has been developed to generate features from the attributes and obtain high classification performance. To handle the multi-class problem, a random forest classifier is used to identify users effectively. In addition, mRMR feature selection has been applied to improve the classification performance metrics and to confirm the identity of users based on the way they access computing devices. From the results, we conclude that device information and touch pressure contribute effectively to identifying each user. Among these, features containing device information improve the system’s performance metrics by adding a token-based authentication layer. Based upon the results, random forest yields the better classification results for this dataset. The research contributes to the field of cyber-security by forming a robust authentication system using machine learning algorithms.
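    The thesis's exact feature set, mRMR step, and dataset are not reproduced here, but the pipeline it describes can be sketched with standard keystroke-dynamics timing features (hold times and inter-key latencies) and a random-forest classifier. The synthetic data, feature choices, and parameters below are assumptions for illustration.

```python
# Minimal sketch of keystroke-dynamics features and random-forest user
# classification, using standard timing features from the literature;
# the data here is synthetic, not the thesis's 94-user dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def timing_features(press: np.ndarray, release: np.ndarray) -> np.ndarray:
    """Per-key hold times plus down-down and up-down latencies between keys."""
    hold = release - press                # how long each key is held down
    dd = press[1:] - press[:-1]           # down-down latency between keys
    ud = press[1:] - release[:-1]         # up-down (flight) time between keys
    return np.concatenate([hold, dd, ud])

rng = np.random.default_rng(0)
n_keys, n_users, reps = 10, 5, 40         # e.g. the 10 keys of ".tie5Roanl"
X, y = [], []
for user in range(n_users):
    base = rng.uniform(0.05, 0.25, size=2 * n_keys)      # user's own rhythm
    for _ in range(reps):
        t = base + rng.normal(0, 0.01, size=base.size)   # natural variation
        press = np.cumsum(t[:n_keys])
        release = press + np.abs(t[n_keys:])
        X.append(timing_features(press, release))
        y.append(user)

X_train, X_test, y_train, y_test = train_test_split(
    np.array(X), np.array(y), test_size=0.25, random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(f"hold-out accuracy: {clf.score(X_test, y_test):.2f}")
```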

    Performing authenticity: James Hogg and the portable short story

    James Hogg (1770-1835), the labouring-class writer from Selkirkshire in Scotland, was subject to the marginalising forces of class prejudice and economic inequality during his lifetime. In the 184 years since his death, Hogg’s place in literary histories of the Scottish (and British) Romantic era has been characterised as peripheral and dissident. This thesis re-examines Hogg’s relationship to marginality by arguing that he had agency in the construction of his identity as an outsider. It foregrounds the formal characteristics of the short story, historicised within specific contexts of publication, in shaping Hogg’s performance of authenticity in relation to class, place, and language. In doing so, the thesis argues that authenticity is a performative function of text and form, rather than a natural essence of authorship and authorial biography. The performance of authenticity functions within and through the Hoggian short story’s characteristic portability. I argue that portability incorporates two dialogically related elements: materiality and narrative aesthetics. First, portability is defined in its literal sense of material transference: Hogg’s short stories moved between contexts and media of publication, from one periodical to another, from periodical to book, and from one geographic location to another. Second, the thesis argues that portability is constructed within the short stories themselves, providing a unifying framework for Hogg’s interrogative narrative praxis, identified elsewhere in Hogg studies as a cross-generic aesthetic of his fiction and poetry. Those narrative aesthetics are grounded in the formal characteristics and historical contexts specific to the short story form and its mutable contexts of publication.
