602 research outputs found

    A Four Layer Bayesian Network for Product Model Based Information Mining

    Business and engineering knowledge in AEC/FM is captured mainly implicitly in project and corporate document repositories. Even with the increasing integration of model-based systems into project information spaces, a large percentage of the information exchange will continue to rely on isolated and rather poorly structured text documents. In this paper we propose an approach that uses product model data as a primary source of engineering knowledge to support the externalisation of information from relevant construction documents, to provide domain-specific information retrieval, and to help re-organise and re-contextualise documents in accordance with the user’s discipline-specific tasks and information needs. We suggest a retrieval and mining framework that combines methods for analysing text documents, filtering product models and reasoning on Bayesian networks to represent the content of text repositories explicitly in personalisable semantic content networks. We describe the proposed basic network, which can be realised in the short term using minimal product model information, as well as various extensions towards a full-fledged, value-adding integration of document-based and model-based information.
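The abstract above describes inferring document content from product-model concepts via a Bayesian network. As an illustrative sketch only (a tiny two-layer network, not the paper's four-layer design, with invented probabilities), naive-Bayes-style inference over concept nodes and term nodes might look like this:

```python
# Hypothetical two-layer network: product-model concepts as parents,
# document terms as children. All probability values are invented.
priors = {"Wall": 0.5, "Slab": 0.5}
likelihoods = {
    "Wall": {"insulation": 0.8, "reinforcement": 0.3},
    "Slab": {"insulation": 0.2, "reinforcement": 0.9},
}

def posterior(observed_terms):
    """P(concept | observed terms), normalised over the concept nodes."""
    scores = {}
    for concept, prior in priors.items():
        p = prior
        for term in observed_terms:
            # Small default probability for terms absent from the table
            p *= likelihoods[concept].get(term, 0.01)
        scores[concept] = p
    total = sum(scores.values())
    return {c: p / total for c, p in scores.items()}

# A document mentioning "insulation" shifts belief towards the Wall concept
print(posterior(["insulation"]))
```

A full realisation would add the paper's additional layers (e.g. documents and user context) as further conditional dependencies in the same style.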

    Flexible information retrieval: some research trends

    In this paper some research trends in the field of Information Retrieval are presented. The focus is on the definition of flexible systems, i.e. systems that can represent and manage the vagueness and uncertainty that are characteristic of the process of information searching and retrieval. In particular, the application of soft computing techniques, notably fuzzy set theory, is considered.
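In fuzzy-set approaches to retrieval of the kind surveyed above, a document belongs to the result set of a query to a degree between 0 and 1 rather than strictly matching or not. A minimal sketch, with an invented index and Zadeh's min operator for conjunctive queries:

```python
# Hypothetical fuzzy index: degree (0..1) to which each document
# is "about" a term. Values are invented for illustration.
index = {
    "doc1": {"retrieval": 0.9, "fuzzy": 0.2},
    "doc2": {"retrieval": 0.6, "fuzzy": 0.8},
}

def fuzzy_and(doc, terms):
    """Membership of doc in the result set of 'term1 AND term2 AND ...'
    using the min operator of fuzzy set theory."""
    return min(index[doc].get(t, 0.0) for t in terms)

def rank(terms):
    """Rank documents by their fuzzy membership in the query result set."""
    return sorted(index, key=lambda d: fuzzy_and(d, terms), reverse=True)

print(rank(["retrieval", "fuzzy"]))  # doc2 outranks doc1 on this query
```

The graded membership is what lets such systems model the vagueness of a user's information need instead of forcing a Boolean match.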

    Content And Multimedia Database Management Systems

    A database management system is a general-purpose software system that facilitates the processes of defining, constructing, and manipulating databases for various applications. The main characteristic of the ‘database approach’ is that it increases the value of data through its emphasis on data independence. DBMSs, and in particular those based on the relational data model, have been very successful at the management of administrative data in the business domain. This thesis investigates data management in multimedia digital libraries and its implications for the design of database management systems. The main problem of multimedia data management is providing access to the stored objects. The content structure of administrative data is easily represented in alphanumeric values, so database technology has primarily focused on handling the objects’ logical structure. In the case of multimedia data, however, representation of content is far from trivial and is not supported by current database management systems.

    Automated Identification of Digital Evidence across Heterogeneous Data Resources

    Digital forensics has become an increasingly important tool in the fight against cyber and computer-assisted crime. However, with an increasing range of technologies at people’s disposal, investigators find themselves having to process and analyse many systems with large volumes of data (e.g., PCs, laptops, tablets, and smartphones) within a single case. Unfortunately, current digital forensic tools operate in an isolated manner, investigating systems and applications individually. The heterogeneity and volume of evidence place time constraints and a significant burden on investigators. Examples of heterogeneity include applications such as messaging (e.g., iMessenger, Viber, Snapchat, and WhatsApp), web browsers (e.g., Firefox and Google Chrome), and file systems (e.g., NTFS, FAT, and HFS). Being able to analyse and investigate evidence from across devices and applications in a universal and harmonised fashion would enable investigators to query all data at once. In addition, successfully prioritising evidence and reducing the volume of data to be analysed reduces the time taken and the cognitive load on the investigator. This thesis focuses on the examination and analysis phases of the digital investigation process. It explores the feasibility of dealing with big and heterogeneous data sources in order to correlate the evidence across these sources in an automated way. A novel approach was therefore developed to address the heterogeneity issues of big data using three algorithms: the harmonisation, clustering, and automated identification of evidence (AIE) algorithms. The harmonisation algorithm provides an automated framework for merging similar datasets by characterising similar metadata categories and then harmonising them into a single dataset. This overcomes the heterogeneity issues and makes examination and analysis easier, allowing evidential artefacts across devices and applications to be queried at once through the harmonised categories. Based on the merged datasets, the clustering algorithm identifies the evidential files and isolates the non-related files based on their metadata. The AIE algorithm then tries to identify the cluster holding the largest number of evidential artefacts, searching by two methods: criminal profiling activities and information from the criminals themselves. Related clusters are subsequently identified through timeline analysis and a search for artefacts associated with the files in the first cluster. A series of experiments using real-life forensic datasets was conducted to evaluate the algorithms across five categories of datasets (i.e., messaging, graphical files, file system, internet history, and emails), each containing data from different applications across different devices. The results of the characterisation and harmonisation process show that the algorithm can merge all fields successfully, with the exception of some binary-based data found within the messaging datasets (contained within Viber and SMS). The error occurred because of a lack of information for the characterisation process to make a useful determination; on further analysis, however, it was found to have minimal impact on the subsequently merged data. The results of the clustering process and the AIE algorithm showed that the two algorithms can work together to identify more than 92% of evidential files.
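The core of the harmonisation step described above is mapping source-specific metadata fields onto shared category names so that records from different applications can be queried together. A minimal sketch, assuming hand-written mappings for two hypothetical messaging sources (the thesis's algorithm infers such mappings automatically via characterisation):

```python
# Hypothetical field mappings to one common schema; field and app
# names here are illustrative, not taken from the thesis.
MAPPINGS = {
    "whatsapp": {"msg_body": "content", "ts": "timestamp", "from": "sender"},
    "viber":    {"text": "content", "time": "timestamp", "author": "sender"},
}

def harmonise(source, record):
    """Rename source-specific metadata fields to common category names;
    unmapped fields pass through unchanged."""
    return {MAPPINGS[source].get(k, k): v for k, v in record.items()}

merged = [
    harmonise("whatsapp", {"msg_body": "hi", "ts": 1, "from": "alice"}),
    harmonise("viber", {"text": "hello", "time": 2, "author": "bob"}),
]
# Both records now share the keys content/timestamp/sender,
# so a single query can run across both applications' data.
```

Clustering and evidence identification would then operate on this merged, uniformly keyed dataset.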

    A study of different representation conventions during investigatory sensemaking

    Background: During the process of conducting investigations, users structure information externally to help them make sense of what they know and what they need to know. Software-based visual representations may be a natural place for doing this, but there are a number of types of information structuring that might be supported and hence designed for. Further, there might be important differences in how well different representational conventions support sensemaking. There are questions about what type of representational support might allow these users to be more effective when interacting with information. Aim: To explore the impact that different types of external representational structuring have on performance and user experience during intelligence-type investigations. Intelligence analysis represents a difficult example domain where sensemaking is needed. We have a particular interest in the role that timeline representations might play, given evidence that people are naturally predisposed to make sense of complex social scenarios by constructing narratives. From this we attempt to quantify possible benefits of timeline representation during investigatory sensemaking, compared with argumentation representation. Method: Participants performed a small investigation using the IEEE 2011 VAST challenge dataset in which they structured information either as a timeline, as an argumentation, or as they wished (freeform). 30 participants took part in the study. The study used three levels of a between-participants independent variable of representation type. The dependent variables were performance (in terms of recall, precision, efficiency and understanding) and user experience (in terms of cognitive load, engagement and confidence in understanding). Result: The results show that the freeform condition experienced a lower cognitive load than the other two conditions, timeline and argument. A post hoc exploratory analysis was conducted to better understand the information behaviour and structuring activities across conditions, and in particular the types of structuring that participants performed in the freeform condition. The analysis resulted in an Embedded Representational Structuring Theory (ERST) that helps to characterise and describe representations primarily in terms of their elements and their relations. Conclusion: The results suggest that: (a) people experience lower cognitive load when they are free to structure information as they wish, (b) during their investigations, they create complex heterogeneous representations consisting of various entities and multiple relation types, and (c) their structuring activities can be described by a finite set of structuring conventions.

    A history of AI and Law in 50 papers: 25 years of the international conference on AI and Law

    Developing Materials Informatics Workbench for Expediting the Discovery of Novel Compound Materials
