    A framework for forensic reconstruction of spontaneous ad hoc networks

    Spontaneous ad hoc networks are distinguished by rapid deployment for a specific purpose, with no forward planning or pre-design of their topology. Often these networks spring up through necessity whenever a network is required urgently but briefly. This may be in a disaster-recovery setting; in military use, where the network is often unplanned but the devices are pre-installed with security settings; in educational networks; or in networks created as a one-off for a meeting, such as in a business organisation. Wireless networks generally pose problems for forensic investigators because of the open nature of the medium, but if logging procedures and pre-planned connections are in place, past messages, including nefarious activity, can often be traced through normal forensic practice. However, the often urgent nature of the spontaneous ad hoc communication requirements of these networks leads to the acceptance onto the network of anyone with a wireless device. Additionally, the identity of the network members, their location and their number are all unknown. With no centre of control, such as a central server or wireless access point, forensically reconstructing the network topology and tracing a malicious message or other inappropriate or criminal activity would seem impossible. This research aims to demonstrate that forensic reconstruction is possible in these types of networks, and provides initial results on how forensic investigators can best undertake these investigations.

    Neuroprediction and A.I. in Forensic Psychiatry and Criminal Justice: A Neurolaw Perspective

    Advances in the use of neuroimaging in combination with A.I., and specifically the use of machine learning techniques, have led to the development of brain-reading technologies which, in the near future, could have many applications, such as lie detection, neuromarketing or brain-computer interfaces. Some of these could, in principle, also be used in forensic psychiatry. The application of these methods in forensic psychiatry could, for instance, help to increase the accuracy of risk assessment and to identify possible interventions. This technique could be referred to as ‘A.I. neuroprediction,’ and involves identifying potential neurocognitive markers for the prediction of recidivism. However, the future implications of this technique and the role of neuroscience and A.I. in violence risk assessment remain to be established. In this paper, we review and analyze the literature concerning the use of brain-reading A.I. for neuroprediction of violence and rearrest, in order to identify possibilities and challenges in the future use of these techniques in the fields of forensic psychiatry and criminal justice, considering legal implications and ethical issues. The analysis suggests that additional research is required on A.I. neuroprediction techniques, and there is still a great need to understand how they can be implemented in risk assessment in the field of forensic psychiatry. Besides the alluring potential of A.I. neuroprediction, we argue that its use in criminal justice and forensic psychiatry should be subjected to thorough harms/benefits analyses, not only once these technologies become fully available but also while they are being researched and developed.

    Proceedings of the 15th Australian Digital Forensics Conference, 5-6 December 2017, Edith Cowan University, Perth, Australia

    Conference Foreword This is the sixth year that the Australian Digital Forensics Conference has been held under the banner of the Security Research Institute, which is in part due to the success of the security conference program at ECU. As with previous years, the conference continues to attract quality papers from local and international authors. Eight papers were submitted and, following a double-blind peer review process, five were accepted for final presentation and publication. Conferences such as these are simply not possible without willing volunteers who follow through with the commitment they have initially made, and I would like to take this opportunity to thank the conference committee for their tireless efforts in this regard. These efforts have included, but not been limited to, the reviewing and editing of the conference papers, and helping with the planning, organisation and execution of the conference. Particular thanks go to those international reviewers who took the time to review papers for the conference, irrespective of the fact that they are unable to attend this year. To our sponsors and supporters, a vote of thanks for both the financial and moral support provided to the conference. Finally, to the student volunteers and staff of the ECU Security Research Institute, your efforts as always are appreciated and invaluable. Yours sincerely, Conference Chair: Professor Craig Valli, Director, Security Research Institute. Congress Organising Committee. Congress Chair: Professor Craig Valli. Committee Members: Professor Gary Kessler (Embry Riddle University, Florida, USA); Professor Glenn Dardick (Embry Riddle University, Florida, USA); Professor Ali Babar (University of Adelaide, Australia); Dr Jason Smith (CERT Australia, Australia); Associate Professor Mike Johnstone (Edith Cowan University, Australia); Professor Joseph A. Cannataci (University of Malta, Malta); Professor Nathan Clarke (University of Plymouth, Plymouth, UK); Professor Steven Furnell (University of Plymouth, Plymouth, UK); Professor Bill Hutchinson (Edith Cowan University, Perth, Australia); Professor Andrew Jones (Khalifa University, Abu Dhabi, UAE); Professor Iain Sutherland (Glamorgan University, Wales, UK); Professor Matthew Warren (Deakin University, Melbourne, Australia). Congress Coordinator: Ms Emma Burk

    Comparison of screwdriver tips to the resultant toolmarks

    The subjective nature of correlating tools and toolmarks has been called into question since the 1993 United States Supreme Court ruling in Daubert v. Merrell Dow Pharmaceuticals, Inc. This has led law enforcement agencies and officials to place an emphasis on developing objective techniques with known error rates to replace traditional subjective comparisons. Additionally, if such objective techniques could be automated, the heavy workloads currently faced by forensic examiners could be reduced. Development of a semi-automatic process that utilizes a three-dimensional profilometer shows potential as a technique that may yield statistically verifiable results, remove the subjective nature currently inherent in toolmark evaluation, and be automated. This work involves characterizing a number of consecutively manufactured tools with a scanning electron microscope (SEM) and comparing each tool to the resultant mark. By using software to analyze the roughness of both a tool and a toolmark (evaluated by SEM and profilometry), the two surfaces can be statistically compared and a correlation determined in a region of best fit. The project has sought to answer two distinct questions: can a toolmark be related to a particular tool (and only that tool) on a statistical basis? Can a series of toolmarks be obtained and compared in an automated manner to yield a statistically valid match? Providing answers to these questions based upon quantitative techniques rather than subjective analysis removes the uncertainties raised by the Daubert decision. The methods employed have the potential for automation, thereby offering a means of decreasing examiner workload. Thus, successful completion of this project could lead to the development of an automated system that produces statistically valid and verifiable results.
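    The "region of best fit" correlation described above can be pictured as a sliding-window comparison of surface-roughness profiles. The sketch below is a minimal illustration only, not the authors' actual method: the 1-D height profiles, the window size and the use of Pearson correlation are all assumptions made for this example.

    ```python
    import numpy as np

    def best_fit_correlation(tool_profile, mark_profile, window=200):
        """Slide a window of the tool's roughness profile along the toolmark
        profile and return the highest Pearson correlation and its offset.
        Inputs are 1-D arrays of surface heights (e.g. from a profilometer)."""
        tool = np.asarray(tool_profile, dtype=float)
        mark = np.asarray(mark_profile, dtype=float)
        ref = tool[:window]                      # reference segment of the tool
        best_r, best_offset = -1.0, 0
        for start in range(len(mark) - window + 1):
            seg = mark[start:start + window]
            r = np.corrcoef(ref, seg)[0, 1]      # correlation of the two windows
            if r > best_r:
                best_r, best_offset = r, start
        return best_r, best_offset
    ```

    A high correlation at some offset marks the candidate region of best fit; a statistical threshold on that correlation (with an empirically estimated error rate) is what would make the comparison objective rather than subjective.
    
    
    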

    Portraits, Likenesses, Composites? Facial Difference in Forensic Art

    The police composite sketch is arguably the most fundamental example of forensic art, and one which enjoys considerable cultural prominence. Intended to produce a positive identification of a specific individual, composites are a form of visual intelligence rather than hard evidence. Based on verbal descriptions drawn from memory of highly contingent and possibly traumatic events, composites are by definition unique and precarious forensic objects, representing an epistemological paradox in their definition as simultaneously ‘artistic impression’ and ‘pictorial statement’. And despite decades of operational use, only in recent years has the field of cognitive psychology begun to fully understand and address the conditions that affect recognition rates both positively and negatively. How might composites contribute to our understanding of representational concepts such as ‘likeness’ and ‘accuracy’? And what role does visual medium – drawn, photographic or computerized depiction – play in the legibility of these images? Situated within the broader context of forensic art practices, this paper proceeds from an understanding that the face is simultaneously crafted as an analogy of the self and a forensic technology. In other words, the face is a space where concepts of identification and identity, sameness and difference (often uncomfortably) converge. With reference to selected examples from laboratory research, field application and artistic practice, I consider how composites, through their particular techniques and form, contribute to subject-making, and how they embody the fugitive, in literal and figurative terms.

    Digital Forensics AI: Evaluating, Standardizing and Optimizing Digital Evidence Mining Techniques

    The impact of AI on numerous sectors of our society, and its successes over the years, indicate that it can assist in resolving a variety of complex digital forensics investigative problems. Forensic analysis can make use of machine learning models’ pattern detection and recognition capabilities to uncover hidden evidence in digital artifacts that would have been missed if the analysis were conducted manually. Numerous works have proposed ways of applying AI to digital forensics; nevertheless, scepticism regarding the opacity of AI has impeded the domain’s adequate formalization and standardization. In this paper, we present three critical instruments necessary for the development of sound machine-driven digital forensics methodologies. We cover various methods for evaluating, standardizing and optimizing techniques applicable to artificial intelligence models used in digital forensics. Additionally, we describe several applications of these instruments in digital forensics, emphasizing strengths and weaknesses that may be critical to the methods’ admissibility in a judicial process.

    Network Analysis with Stochastic Grammars

    Digital forensics requires significant manual effort to identify items of evidentiary interest from the ever-increasing volume of data in modern computing systems. One of the tasks digital forensic examiners conduct is mentally extracting and constructing insights from unstructured sequences of events. This research assists examiners with the association and individualization analysis processes that make up this task through the development of a Stochastic Context-Free Grammar (SCFG) knowledge representation for digital forensic analysis of computer network traffic. SCFG is leveraged to provide context to the low-level data collected as evidence and to build behavior profiles. Upon discovering patterns, the analyst can begin the association or individualization process to answer criminal investigative questions. Three contributions resulted from this research. First, domain characteristics suitable for SCFG representation were identified, and a step-by-step approach to adapting SCFG to novel domains was developed. Second, a novel iterative graph-based method of identifying similarities in context-free grammars was developed to compare behavior patterns represented as grammars. Finally, the SCFG capabilities were demonstrated by performing association and individualization, reducing the suspect pool and the volume of evidence to examine in a computer network traffic analysis use case.
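    As a rough illustration of how an SCFG assigns probabilities to sequences of network events, the toy grammar below is entirely hypothetical: its rules, event tokens and probabilities are invented for this sketch and are not taken from the research. It uses the standard CYK-style inside algorithm over a grammar in Chomsky normal form.

    ```python
    from collections import defaultdict

    # Hypothetical toy SCFG over network-event tokens (illustrative only).
    # Binary rules A -> B C and lexical rules A -> 'token', each with a
    # probability; probabilities for each left-hand side sum to 1.
    BINARY = {                                    # (A, (B, C)) -> P
        ("SESSION", ("SCAN", "LOGIN")): 0.6,
        ("SESSION", ("LOGIN", "XFER")): 0.4,
    }
    LEXICAL = {                                   # (A, terminal) -> P
        ("SCAN", "port_scan"): 1.0,
        ("LOGIN", "ssh_login"): 1.0,
        ("XFER", "file_transfer"): 1.0,
    }

    def inside_probability(tokens, start="SESSION"):
        """CYK-style inside algorithm: total probability that `start`
        derives the observed event sequence under the toy SCFG."""
        n = len(tokens)
        chart = defaultdict(float)                # (i, j, nonterminal) -> prob
        for i, tok in enumerate(tokens):          # fill lexical spans of length 1
            for (nt, term), p in LEXICAL.items():
                if term == tok:
                    chart[(i, i + 1, nt)] += p
        for span in range(2, n + 1):              # combine adjacent spans
            for i in range(n - span + 1):
                j = i + span
                for k in range(i + 1, j):
                    for (nt, (b, c)), p in BINARY.items():
                        chart[(i, j, nt)] += p * chart[(i, k, b)] * chart[(k, j, c)]
        return chart[(0, n, start)]
    ```

    For example, `inside_probability(["port_scan", "ssh_login"])` yields 0.6, while a sequence the grammar cannot derive yields 0.0. In the research's setting, such probabilities over event sequences are what allow behavior profiles to be compared and unlikely (anomalous) activity to be flagged.
    
    
    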