
    A Unified Forensics Analysis Approach to Digital Investigation

    Digital forensics is now essential in addressing cybercrime and cyber-enabled crime, and it can potentially play a role in almost every other type of crime. Given the continuous development and prevalence of technology, its widespread adoption across society, and the digital footprints that result, the analysis of these technologies can help support investigations. The abundance of interconnected technologies and telecommunication platforms has significantly changed the nature of digital evidence. Subsequently, digital forensic cases involve an enormous volume of heterogeneous data scattered across multiple evidence sources, technologies, applications, and services. It is indisputable that the spread of, and connections between, existing technologies have raised the need to integrate, harmonise, unify and correlate evidence across data sources in an automated fashion. Unfortunately, the current state of the art in digital forensics consists of siloed approaches focussed upon specific technologies or upon support of a particular part of a digital investigation. Due to this shortcoming, the digital investigator examines each data source independently, trawls through interconnected data across various sources, and often has to conduct data correlation manually, restricting the investigator's ability to answer high-level questions in a timely manner with a low cognitive load. Therefore, this research paper investigates the limitations of the current state of the art in the digital forensics discipline and categorises common investigation crimes, together with the corresponding digital analyses they require, to define the characteristics of a next-generation approach. Based on these observations, it discusses the future capabilities of the next-generation unified forensics analysis tool (U-FAT), with a workflow example that illustrates the data unification, correlation and visualisation processes within the proposed method.
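
    The unify-then-correlate workflow that U-FAT envisions can be pictured with a minimal sketch, assuming hypothetical evidence records normalised to a common schema and then clustered by a shared actor identifier within a time window; the field names and the clustering rule are illustrative assumptions, not the paper's actual design.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical, simplified illustration of the unify-then-correlate idea:
# records from different evidence sources are normalised into a common
# schema, then grouped when they share an actor and fall within a small
# time window. Field names are assumptions, not U-FAT's interface.

def normalise(source, raw):
    """Map a source-specific record onto a unified schema."""
    return {
        "source": source,
        "actor": raw.get("user") or raw.get("account"),
        "timestamp": datetime.fromisoformat(raw["time"]),
        "event": raw.get("action", "unknown"),
    }

def correlate(records, window=timedelta(minutes=5)):
    """Group unified records by actor, keeping bursts within `window`."""
    by_actor = defaultdict(list)
    for rec in sorted(records, key=lambda r: r["timestamp"]):
        by_actor[rec["actor"]].append(rec)
    correlated = []
    for actor, recs in by_actor.items():
        cluster = [recs[0]]
        for rec in recs[1:]:
            if rec["timestamp"] - cluster[-1]["timestamp"] <= window:
                cluster.append(rec)
            else:
                correlated.append((actor, cluster))
                cluster = [rec]
        correlated.append((actor, cluster))
    return correlated

phone = normalise("phone", {"user": "alice", "time": "2021-03-01T10:00:00", "action": "sms_sent"})
laptop = normalise("laptop", {"account": "alice", "time": "2021-03-01T10:02:30", "action": "file_deleted"})
for actor, cluster in correlate([phone, laptop]):
    print(actor, [(r["source"], r["event"]) for r in cluster])
```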

    Proposed L-Shape Pattern on UFS ACM For Risk Analysis

    In the cloud age, there is tremendous growth in business, services, resources, and cloud technology. This growth comes with risks of unsafety, disorder, and uncertainty due to unauthorized access and theft of confidential proprietary data. Our objective is to model access around Read, Write, and Execute permissions to resolve these issues. We develop an L-Shape pattern model matching the UFS ACM to minimize access based on the RIGHT and ROLE of the resources and to maximize quality of service for safety and high availability. Preventive, detective, and corrective (PDC) services are the major roles through which all levels of management coordinate and control the multiple technologies and resources working simultaneously. The result is a more ordered, accountable, and actionable real-time access control mechanism for the scalability, reliability, performance, and high availability of computational services. We apply this UFS ACM mechanism on the UNIX operating system to achieve safer, more certain, unified, and step-by-step normalization. This paper covers a wide range of areas, including optimization, normalization, fuzzy logic, and risk assessment.
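
    A minimal sketch of the Read/Write/Execute minimisation idea, assuming an illustrative role-to-rights policy table checked against standard UNIX mode bits; the role names and policy are hypothetical, not the paper's UFS ACM specification.

```python
import stat

# Hypothetical sketch of rwx minimisation: each role is granted only the
# rights it needs, and an access request must pass BOTH the role policy
# and the file's UNIX mode bits. Roles and policy are invented examples.

POLICY = {
    "auditor":  {"read"},                      # detective role: inspect only
    "operator": {"read", "execute"},           # corrective role: run services
    "admin":    {"read", "write", "execute"},  # preventive role: full control
}

MODE_BITS = {"read": stat.S_IRUSR, "write": stat.S_IWUSR, "execute": stat.S_IXUSR}

def allowed(role, action, file_mode):
    """Grant access only when the role holds the right AND the mode permits it."""
    return action in POLICY.get(role, set()) and bool(file_mode & MODE_BITS[action])

# 0o540: owner may read and execute, but not write.
print(allowed("operator", "execute", 0o540))  # True
print(allowed("operator", "write", 0o540))    # False: right not in role
print(allowed("admin", "write", 0o540))       # False: mode bit absent
```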

    Brain Radiation Information Data Exchange (BRIDE): Integration of experimental data from low-dose ionising radiation research for pathway discovery

    Background: The underlying molecular processes representing stress responses to low-dose ionising radiation (LDIR) in mammals are just beginning to be understood. In particular, LDIR effects on the brain and their possible association with neurodegenerative disease are currently being explored using omics technologies. Results: We describe a lightweight approach for the storage, analysis and distribution of relevant LDIR omics datasets. The data integration platform, called BRIDE, contains information from the literature as well as experimental information from transcriptomics and proteomics studies. It deploys a hybrid, distributed solution using both local storage and cloud technology. Conclusions: BRIDE can act as a knowledge broker for LDIR researchers, to facilitate molecular research on the systems biology of the LDIR response in mammals. Its flexible design can capture a range of experimental information for genomics, epigenomics, transcriptomics, and proteomics. The data collection is available at:
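
    The hybrid local-plus-cloud design can be pictured with a minimal sketch: look a dataset up in a local store first and fall back to a cloud endpoint only on a miss. The directory layout, URL, and function name below are hypothetical placeholders, not BRIDE's published interface.

```python
import json
import pathlib
import urllib.request

# Minimal sketch of a hybrid storage lookup: prefer the local copy of a
# dataset, fall back to a cloud endpoint, and cache the result locally.
# The directory and URL are placeholders, not BRIDE's real locations.

LOCAL_DIR = pathlib.Path("bride_cache")
CLOUD_BASE = "https://example.org/bride/datasets"  # hypothetical endpoint

def fetch_dataset(dataset_id):
    """Return dataset metadata, preferring the local copy over the cloud."""
    local = LOCAL_DIR / f"{dataset_id}.json"
    if local.exists():
        return json.loads(local.read_text())
    with urllib.request.urlopen(f"{CLOUD_BASE}/{dataset_id}.json") as resp:
        data = json.loads(resp.read().decode("utf-8"))
    LOCAL_DIR.mkdir(exist_ok=True)
    local.write_text(json.dumps(data))  # cache locally for next time
    return data
```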

    A Specification and Discovery Environment for Software Component Reuse in Distributed Software Development

    Our work aims to develop an effective solution for the discovery and reuse of software components in existing, commonly used development environments. We propose an ontology for describing and discovering atomic software components. The description covers both the functional properties and the non-functional properties of software components, expressed as QoS parameters. Our search process is based on a function that computes the semantic distance between a component's interface signature and the signature of a given query, thus achieving an appropriate comparison. We also use the notion of "subsumption" to compare the inputs and outputs of the query and of the components. After selecting the appropriate components, the non-functional properties are used as a distinguishing factor to refine the search result. We propose an approach, based on the shared ontology, for discovering composite components when no atomic component is found. To integrate the resulting component into the project under development, we developed an integration ontology and two services, an "input/output convertor" and an "output matching" service.
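
    A toy sketch of the described search process, assuming a Jaccard-style distance over parameter types as a stand-in for the paper's semantic distance measure; the component records and QoS fields are invented for illustration.

```python
# Toy sketch: rank candidate components by a signature distance to the
# query, then refine the shortlist with non-functional QoS properties.
# The Jaccard-style distance is a stand-in for the paper's semantic
# measure, and the component records are invented examples.

def signature_distance(query_types, component_types):
    """Jaccard distance between two sets of parameter/return types."""
    q, c = set(query_types), set(component_types)
    return 1.0 - len(q & c) / len(q | c) if (q | c) else 0.0

def discover(query, components, max_distance=0.7):
    # Step 1: functional match on input/output signatures.
    matches = [
        comp for comp in components
        if signature_distance(query["types"], comp["types"]) <= max_distance
    ]
    # Step 2: refine with QoS (here: lowest latency first).
    return sorted(matches, key=lambda comp: comp["qos"]["latency_ms"])

components = [
    {"name": "CsvParser", "types": {"str", "list"}, "qos": {"latency_ms": 12}},
    {"name": "XmlParser", "types": {"str", "dict"}, "qos": {"latency_ms": 8}},
]
query = {"types": {"str", "dict"}}
print([c["name"] for c in discover(query, components)])  # ['XmlParser', 'CsvParser']
```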

    A Survey of Semantic Integration Approaches in Bioinformatics

    Technological advances in computer science and data analysis are continuously providing huge volumes of biological data, which are available on the web. Such advances require powerful techniques for data integration to extract pertinent knowledge and information for a specific question. Biomedical exploration of these big data often requires the use of complex queries across multiple autonomous, heterogeneous and distributed data sources. Semantic integration is an active area of research in several disciplines, such as databases, information integration, and ontology. We provide a survey of approaches and techniques for integrating biological data, focusing on those developed in the ontology community.

    Automatic Ontology-Based Model Evolution for Learning Changes in Dynamic Environments

    Knowledge engineering relies on ontologies, since they provide formal descriptions of real-world knowledge. However, ontology development is still a nontrivial task. From the view of knowledge engineering, ontology learning is helpful in generating ontologies semi-automatically or automatically from scratch. It not only improves the efficiency of the ontology development process but also has been recognized as an interesting approach for extending preexisting ontologies with new knowledge discovered from heterogeneous forms of input data. Driven by the great potential of ontology learning, we present an automatic ontology-based model evolution approach to account for highly dynamic environments at runtime. This approach can extend initial models expressed as ontologies to cope with rapid changes encountered in surrounding dynamic environments at runtime. The main contribution of our presented approach is that it analyzes heterogeneous semi-structured input data for learning an ontology, and it makes use of the learned ontology to extend an initial ontology-based model. Within this approach, we aim to automatically evolve an initial ontology-based model through the ontology learning approach. Therefore, this approach is illustrated using a proof-of-concept implementation that demonstrates the ontology-based model evolution at runtime. Finally, a threefold evaluation process of this approach is carried out to assess the quality of the evolved ontology-based models. First, we consider a feature-based evaluation for evaluating the structure and schema of the evolved models. Second, we adopt a criteria-based evaluation to assess the content of the evolved models. Finally, we perform an expert-based evaluation to assess the initial and evolved models' coverage from an expert's point of view. The experimental results reveal that the quality of the evolved models is relevant in considering the changes observed in the surrounding dynamic environments at runtime. Jabla, R.; Khemaja, M.; Buendía García, F.; Faiz, S. (2021). Automatic Ontology-Based Model Evolution for Learning Changes in Dynamic Environments. Applied Sciences. 11(22):1-30. https://doi.org/10.3390/app112210770
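
    The learn-then-extend step can be pictured with a minimal sketch using rdflib (the paper does not name its toolchain here): a concept extracted from semi-structured input is added to an initial ontology as a subclass of an existing class. The namespace and concept names are hypothetical.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, RDFS

# Minimal sketch of extending an initial ontology-based model with a
# learned concept, using rdflib. The namespace and class names are
# hypothetical placeholders, not the paper's actual ontology.

EX = Namespace("http://example.org/env#")

g = Graph()
g.bind("ex", EX)
g.add((EX.Sensor, RDF.type, OWL.Class))  # concept in the initial ontology

def extend_ontology(graph, learned_term, parent):
    """Add a learned concept as a subclass of an existing one."""
    new_class = EX[learned_term]
    graph.add((new_class, RDF.type, OWL.Class))
    graph.add((new_class, RDFS.subClassOf, parent))
    graph.add((new_class, RDFS.label, Literal(learned_term)))
    return new_class

# Pretend "HumiditySensor" was extracted from incoming semi-structured data.
extend_ontology(g, "HumiditySensor", EX.Sensor)
print(g.serialize(format="turtle"))
```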