
    Digital Forensics Tool Interface Visualization

    Recent trends show digital devices being used with increasing frequency in the commission of crimes. Investigating crimes involving these devices is labor-intensive for the practitioner, as digital forensics tools present possible evidence in tabular lists for manual review. This research investigates how enhanced digital forensics tool interface visualization techniques can improve the investigator's cognitive capacity to discover criminal evidence more efficiently. This paper presents visualization graphs and contrasts their properties with the outputs of The Sleuth Kit (TSK) digital forensic program, exhibiting the textual interface alongside the enhanced data presentation to show the latter's effectiveness. Further demonstrated is the potential of the computer interface to present to the digital forensic practitioner an abstract, graphic view of an entire dataset of computer files. Enhanced interface design of digital forensic tools means more rapidly linking suspicious evidence to a perpetrator. This study introduces a mixed methodology of ethnography and cognitive load measures. Ethnographically defined tasks, developed from interviews with digital forensics subject matter experts (SMEs), shape the context for the cognitive measures. Cognitive load testing of digital forensics first responders using both a textual and a visualized application established a quantitative mean of the mental workload during operation of the applications under test. A dependent-samples (paired) t-test compared the operators' mean workloads on the two applications against the null hypothesis of no significant difference. Results of the study indicate a significant difference, affirming the hypothesis that a visualized application would reduce the cognitive workload of the first-responder analyst.
With the supported hypothesis, this work contributes to the body of knowledge by validating a method of measurement and by providing empirical evidence that a visualized digital forensics interface enables more efficient performance by the analyst, saving labor costs and compressing the time required for the discovery phase of a digital investigation.
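The dependent-samples t-test described above can be sketched in a few lines. The workload scores below are hypothetical stand-ins (the study's actual data are not given here); the computation is the standard paired t statistic on per-participant differences between the two interfaces.

```python
# Minimal sketch of a paired-samples t-test on cognitive-workload scores.
# The scores (0-100, NASA-TLX style) are hypothetical, not the study's data.
import math

textual    = [72.0, 68.5, 80.0, 75.5, 70.0, 78.0, 74.5, 69.0]
visualized = [55.0, 60.0, 62.5, 58.0, 54.5, 61.0, 57.0, 52.0]

diffs = [t - v for t, v in zip(textual, visualized)]
n = len(diffs)
mean_d = sum(diffs) / n
var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)  # sample variance
t_stat = mean_d / math.sqrt(var_d / n)                   # paired t, df = n - 1

print(f"mean difference = {mean_d:.2f}, t({n - 1}) = {t_stat:.2f}")
```

A t statistic exceeding the critical value (about 2.365 for df = 7 at a two-tailed alpha of .05) would, as in the study, reject the null hypothesis of equal workloads.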

    The Lair: a resource for exploratory analysis of published RNA-Seq data

    Increased emphasis on reproducibility of published research in the last few years has led to the large-scale archiving of sequencing data. While these data can, in theory, be used to reproduce results in papers, they are difficult to use in practice. We introduce a series of tools for processing and analyzing RNA-Seq data in the Sequence Read Archive that together have allowed us to build an easily extendable resource for analysis of the data underlying published papers. Our system makes the exploration of data easily accessible and usable without technical expertise. Our database and associated tools can be accessed at The Lair: http://pachterlab.github.io/lair. Funded by National Institutes of Health grants R01 HG006129, R01 DK094699 and R01 HG008164. Peer Reviewed

    Defrauding the Public Interest: A Critical Examination of Reengineered Audit Processes and the Likelihood of Detecting Fraud

    In the past few years, most of the major international public accounting firms have reengineered their audit processes to improve the cost effectiveness of completing an audit and to focus on value-added services for clients. The reengineered audit processes generally focus on a client’s business processes and the information systems used by the client to generate financial information. In essence, the new audit approaches deemphasize direct testing of the underlying transactions and account balances. Such an approach emphasizes analytical procedures as the main source of substantive evidence. During this same time period, however, the profession (through the AICPA) explicitly acknowledged its responsibility for fraud detection. The main premise of this paper is that the increased emphasis on systems assessments is at odds with the profession’s position regarding fraud detection, because most material frauds originate at the top levels of the organization, where controls and systems are least prevalent and effective. As such, the profession may be paying lip service to fraud detection while at the same time changing the audit process in a manner that is less effective at detecting the most common frauds.

    ALE Meta-Analysis Workflows Via the Brainmap Database: Progress Towards A Probabilistic Functional Brain Atlas

    With the ever-increasing number of studies in human functional brain mapping, an abundance of data has been generated that is ready to be synthesized and modeled on a large scale. The BrainMap database archives peak coordinates from published neuroimaging studies, along with the corresponding metadata that summarize the experimental design. BrainMap was designed to facilitate quantitative meta-analysis of neuroimaging results reported in the literature and supports the use of the activation likelihood estimation (ALE) method. In this paper, we present a discussion of the potential analyses that are possible using the BrainMap database and coordinate-based ALE meta-analyses, along with some examples of how these tools can be applied to create a probabilistic atlas and an ontological system for describing function–structure correspondences.
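The core of coordinate-based ALE can be sketched simply: each reported peak is blurred with a Gaussian kernel, each experiment contributes a modeled activation map, and the maps are combined as a union of probabilities. The 1D sketch below uses hypothetical peak coordinates and kernel width, not BrainMap data, and collapses each experiment's map to the maximum over its peaks.

```python
# Minimal 1D sketch of the ALE idea: peaks blurred with a Gaussian,
# per-experiment maps combined as a probabilistic union.
# Peak coordinates and FWHM are hypothetical, not BrainMap data.
import math

def gaussian(x, mu, fwhm):
    sigma = fwhm / (2 * math.sqrt(2 * math.log(2)))  # FWHM -> standard deviation
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

# Each inner list holds peak coordinates (mm) reported by one experiment.
experiments = [[-10.0, 12.0], [8.0], [10.0, -12.0]]
fwhm = 10.0  # smoothing kernel width (mm), hypothetical

def ale_score(x):
    p_none = 1.0
    for peaks in experiments:
        ma = max(gaussian(x, mu, fwhm) for mu in peaks)  # modeled activation map
        p_none *= (1.0 - ma)      # probability no experiment activates here
    return 1.0 - p_none           # union: activation in at least one experiment

print(f"ALE near a peak cluster: {ale_score(10.0):.3f}; far away: {ale_score(50.0):.3f}")
```

Voxels near convergent peaks score close to 1, while voxels far from any reported focus score near 0, which is what lets ALE localize consistent activation across studies.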

    The BrainMap strategy for standardization, sharing, and meta-analysis of neuroimaging data

    Background: Neuroimaging researchers have developed rigorous community data and metadata standards that encourage meta-analysis as a method for establishing robust and meaningful convergence of knowledge of human brain structure and function. Capitalizing on these standards, the BrainMap project offers databases, software applications, and other associated tools for supporting and promoting quantitative coordinate-based meta-analysis of the structural and functional neuroimaging literature. Findings: In this report, we describe recent technical updates to the project and provide an educational description for performing meta-analyses in the BrainMap environment. Conclusions: The BrainMap project will continue to evolve in response to the meta-analytic needs of biomedical researchers in the structural and functional neuroimaging communities. Future work on the BrainMap project regarding software and hardware advances is also discussed.

    Explorations on the relationship between happiness and sustainable design

    Through understanding the way in which design can contribute in a holistic way to sustainability, this thesis investigates and proposes the design methods and characteristics of sustainable products, services or systems capable of contributing to our happiness, hence shaping and promoting society towards sustainable lifestyles. It presents the first indications of the relationship between Happiness and Sustainable Design. The review of a vast array of phenomena (Happiness, Sustainable Lifestyles/Society, Sustainable Product Design, Consumption Behaviour, and the emerging Role of the Designer) shed light on this relationship, as well as making evident the social gap this represents within sustainable design. This led to the development of an Initial Theory to bridge this gap, which then proposed the development of new design theories and tools and also a radical evolution of the design discipline. Preliminary testing with sustainable design thinkers validated this theory and pointed out other interesting avenues along which to develop and test it further. Subsequently, through an exploratory and iterative approach, with the Initial Theory at the heart of the research, the Design for Happiness workshop framework emerged and took shape. Two pilot studies and a first study facilitated its planning, development and implementation, which ultimately led to a strong Design Process and Tool-Kit. In addition, two Main Studies confirmed its effectiveness and put forward a robust conceptual design outcome, the trials of which demonstrated its success and high potential to contribute to Happiness and Sustainable Lifestyles. Overall, the results and findings of this research demonstrated that material changes can take place without forgoing the social networks which feed our happiness.
The Design for Happiness workshop framework is a practical proposal that encourages multidisciplinary groups to reinterpret the relationship between objects and users, hence approaching design from a different perspective that results in innovative conceptual designs. Here, the designer becomes a process facilitator who shares design tools, encouraging participation in the construction of collective and integrated design visions and scenarios. Creativity and Sustainability are pivotal pillars of this proposal, and its success is anchored in its capacity to deliver a collection of experiences that contribute to happiness through the ways of living they invite. It also challenges the evolution of the Design discipline and its consequent theoretical development. The relationship between Design, Sustainability and Happiness is new territory. This research is the first on the subject of Sustainable Design and Happiness, therefore offering a groundbreaking opportunity for design, designers, and its practical applications.

    A survey of frequent subgraph mining algorithms

    Graph mining is an important research area within the domain of data mining. The field of study concentrates on the identification of frequent subgraphs within graph data sets. The research goals are directed at: (i) effective mechanisms for generating candidate subgraphs (without generating duplicates) and (ii) how best to process the generated candidate subgraphs so as to identify the desired frequent subgraphs in a way that is computationally efficient and procedurally effective. This paper presents a survey of current research in the field of frequent subgraph mining and proposes solutions to address the main research issues.
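The base step shared by most frequent subgraph miners can be illustrated concretely: enumerate the smallest candidate subgraphs (single labeled edges), count each candidate's support as the number of graphs containing it, and keep those meeting a minimum support threshold. The graph dataset and threshold below are hypothetical, and label pairs are normalized so duplicates are not generated.

```python
# Minimal sketch of the first level of frequent subgraph mining:
# support counting for single-edge candidates. Graph data is hypothetical.
from collections import Counter

# Each graph: (node-index -> label mapping, list of edges over node indices)
graphs = [
    ({0: "C", 1: "O", 2: "C"}, [(0, 1), (1, 2)]),
    ({0: "C", 1: "O"},         [(0, 1)]),
    ({0: "C", 1: "N", 2: "O"}, [(0, 1), (0, 2)]),
]

min_support = 2  # a candidate is frequent if it appears in >= 2 graphs

support = Counter()
for labels, edges in graphs:
    # Count each distinct labeled edge once per graph (set semantics),
    # sorting labels so C-O and O-C are the same candidate (no duplicates).
    seen = {tuple(sorted((labels[u], labels[v]))) for u, v in edges}
    support.update(seen)

frequent = {edge for edge, count in support.items() if count >= min_support}
print(frequent)
```

Real algorithms such as gSpan then extend only these frequent candidates edge by edge, since any subgraph containing an infrequent edge is itself infrequent (the anti-monotone property), which is what keeps the search computationally tractable.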

    A Domain Specific Language for Digital Forensics and Incident Response Analysis

    One of the longstanding conceptual problems in digital forensics is the dichotomy between the need for verifiable and reproducible forensic investigations and the lack of practical mechanisms to accomplish them. After nearly four decades of professional digital forensic practice, investigator notes are still the primary source of reproducibility information, and much of it is tied to the functions of specific, often proprietary, tools. The lack of a formal means of specification for digital forensic operations results in three major problems. Specifically, there is a critical lack of: a) standardized and automated means to scientifically verify the accuracy of digital forensic tools; b) methods to reliably reproduce forensic computations (and their results); and c) a framework for interoperability among forensic tools. Additionally, there is no standardized means for communicating software requirements between users, researchers and developers, resulting in a mismatch in expectations. Combined with the exponential growth in data volume and the complexity of the applications and systems to be investigated, all of these concerns result in major case backlogs and inherently reduce the reliability of digital forensic analyses. This work proposes a new approach to the specification of forensic computations, such that the above concerns can be addressed on a scientific basis with a new domain specific language (DSL) called nugget. DSLs are specialized languages that aim to address the concerns of particular domains by providing practical abstractions. Successful DSLs, such as SQL, can transform an application domain by providing a standardized way for users to communicate what they need without specifying how the computation should be performed.
This is the first effort to build a DSL for (digital) forensic computations, with the following research goals: 1) provide an intuitive formal specification language that covers core types of forensic computations and common data types; 2) provide a mechanism to extend the language to incorporate arbitrary computations; 3) provide a prototype execution environment that allows the fully automatic execution of the computation; 4) provide a complete, formal, and auditable log of computations that can be used to reproduce an investigation; 5) demonstrate cloud-ready processing that can match the growth in data volumes and complexity.
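The declarative "what, not how" style the abstract attributes to DSLs like SQL can be illustrated with a toy interpreter. The one-line specification syntax below is a hypothetical invention for this sketch, not the actual nugget language; it shows how a declarative request can be executed automatically while emitting the kind of auditable log the research goals call for.

```python
# Toy illustration of a declarative forensic computation with an audit log.
# The spec syntax and operation names are hypothetical, NOT actual nugget.
import hashlib

# A hypothetical specification: states WHAT to compute, not HOW.
spec = "md5 of evidence.img"

def run(spec, store):
    op, _, target = spec.split(" ", 2)   # e.g. ("md5", "of", "evidence.img")
    data = store[target]                 # look up the evidence bytes
    digest = hashlib.new(op, data).hexdigest()
    # An auditable log entry lets another analyst reproduce the computation.
    log = f"{op}({target}) -> {digest}"
    return digest, log

store = {"evidence.img": b"hello world"}  # in-memory stand-in for a disk image
digest, log = run(spec, store)
print(log)
```

Because the specification names only the operation and its target, the same request could be dispatched to any verified tool backend, which is the interoperability and reproducibility argument the abstract makes.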