4 research outputs found

    Hybrid-search and storage of semi-structured information

    Thesis (M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998. Includes bibliographical references (p. 113-118).

    Given today's tangle of digital information, one of the hardest tasks for users of information systems is finding anything in the mess. For a number of well-documented reasons, including the rapid growth in the Internet's popularity and the falling cost of storage, the amount of information on the net, as well as on a user's local computer, has increased dramatically in recent years. Although this readily available information should be extremely beneficial for computer users, paradoxically it has become much harder to find anything. Many different solutions have been proposed for the general information-seeking task, but few if any have addressed the needs of individuals or leveraged the benefits of single-user interaction. The Haystack project is an attempt to meet the needs of the individual user. Once a user's information is represented in Haystack, the types of questions users may ask are highly varied. In this thesis we propose a means of representing information in a robust framework within Haystack. We then describe a mechanism by which the diverse questions of the individual can be answered. This novel method works by combining existing information systems; we call the combined system a hybrid-search system.

    by Eytan Adar. M.Eng.
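The idea of combining existing information systems into one hybrid-search layer can be sketched as below. This is a minimal illustration, not the thesis's actual design: the function names, scoring scheme and weights are all hypothetical, standing in for whatever subsystems Haystack actually combines.

```python
# Hypothetical sketch of a hybrid-search combiner: one subsystem scores
# documents by keyword match, another by structured metadata, and the
# hybrid layer merges the two rankings with a weighted sum.

def keyword_scores(query, docs):
    """Score each doc by how many query terms its text contains."""
    terms = set(query.lower().split())
    return {doc_id: sum(t in text.lower() for t in terms)
            for doc_id, (text, _meta) in docs.items()}

def metadata_scores(filters, docs):
    """Score each doc by how many metadata filters it satisfies."""
    return {doc_id: sum(meta.get(k) == v for k, v in filters.items())
            for doc_id, (_text, meta) in docs.items()}

def hybrid_search(query, filters, docs, w_text=0.6, w_meta=0.4):
    """Combine both subsystems' scores; return doc ids, best first."""
    kw, md = keyword_scores(query, docs), metadata_scores(filters, docs)
    combined = {d: w_text * kw[d] + w_meta * md[d] for d in docs}
    return sorted(combined, key=combined.get, reverse=True)

docs = {
    "a": ("notes on information retrieval systems", {"author": "adar"}),
    "b": ("shopping list", {"author": "someone"}),
}
print(hybrid_search("information retrieval", {"author": "adar"}, docs))
```

Either subsystem alone would rank these documents; the point of the hybrid layer is that a query can mix free text with structured constraints in a single request.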

    A framework for the analysis and evaluation of enterprise models

    Bibliography: leaves 264-288.

    The purpose of this study is the development and validation of a comprehensive framework for the analysis and evaluation of enterprise models. The study starts with an extensive literature review of modelling concepts and an overview of the various reference disciplines concerned with enterprise modelling. This overview is more extensive than usual in order to accommodate readers from different backgrounds. The proposed framework is based on the distinction between the syntactic, semantic and pragmatic model aspects and populated with evaluation criteria drawn from an extensive literature survey.

    In order to operationalize and empirically validate the framework, an exhaustive survey of enterprise models was conducted. From this survey, an XML database of more than twenty relatively large, publicly available enterprise models was constructed. A strong emphasis was placed on the interdisciplinary nature of this database, and models were drawn from ontology research, linguistics and analysis patterns, as well as the traditional fields of data modelling, data warehousing and enterprise systems. The resultant database forms the test bed for the detailed framework-based analysis, and its public availability should constitute a useful contribution to the modelling research community.

    The bulk of the research is dedicated to implementing and validating specific analysis techniques to quantify the various model evaluation criteria of the framework. The aim for each of the analysis techniques is that it can, where possible, be automated and generalised to other modelling domains. The syntactic measures and analysis techniques originate largely from the disciplines of systems engineering, graph theory and computer science. Various metrics to measure model hierarchy, architecture and complexity are tested and discussed. It is found that many are not particularly useful or valid for enterprise models.
    Hence some new measures are proposed to assist with model visualization, and an original "model signature" consisting of three key metrics is proposed.

    Perhaps the most significant contribution of the research lies in the development and validation of a significant number of semantic analysis techniques, drawing heavily on current developments in lexicography, linguistics and ontology research. Some novel and interesting techniques are proposed to measure, inter alia, domain coverage, model genericity, quality of documentation, perspicuity and model similarity. Model similarity in particular is explored in depth by means of various similarity and clustering algorithms, as well as ways to visualize the similarity between models.

    Finally, a number of pragmatic analysis techniques are applied to the models. These include face validity, degree of use, authority of the model author, availability, cost, flexibility, adaptability, model currency, maturity and degree of support. This analysis relies mostly on searching for and ranking certain specific information details, often involving a degree of subjective interpretation, although more specific quantitative procedures are suggested for some of the criteria.

    To aid future researchers, a separate chapter lists some promising analysis techniques that were investigated but found to be problematic from a methodological perspective. More interestingly, this chapter also presents a strong conceptual case for how the proposed framework and the analysis techniques associated with its various criteria can be applied to many other information systems research areas. The case is presented on the grounds of the underlying isomorphism between the various research areas, and is illustrated by suggesting the application of the framework to evaluate web sites, algorithms, software applications, programming languages, system development methodologies and user interfaces.
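One common way to compute the kind of model similarity described above is set overlap between the models' element names. The sketch below is purely illustrative, assuming Jaccard similarity and invented toy models; the thesis's actual similarity and clustering algorithms are not reproduced here.

```python
# Illustrative model-similarity measure: Jaccard overlap between each
# model's set of element names (the model contents are invented).

def jaccard(a, b):
    """|A ∩ B| / |A ∪ B| for two sets of element names."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

models = {
    "orders_v1": {"Customer", "Order", "OrderLine", "Product"},
    "orders_v2": {"Customer", "Order", "Invoice", "Product"},
    "hr":        {"Employee", "Department", "Position"},
}

# Pairwise similarity matrix over all distinct model pairs.
names = sorted(models)
sims = {(m, n): jaccard(models[m], models[n])
        for m in names for n in names if m < n}
for pair, score in sims.items():
    print(pair, round(score, 2))
```

A matrix like `sims` is the usual input to hierarchical clustering or to a similarity heat-map, which matches the abstract's mention of clustering and of visualizing similarity between models.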

    Data mining and integration of heterogeneous bioinformatics data sources

    In this thesis, we present a novel approach to interoperability based on biological relationships: relationship-based integration uses different relationship types, each with a relationship-closeness value, to link gene expression datasets with other information available in public bioinformatics data sources. These relationships provide flexible linkage for biologists to discover linked data across the biological universe. Relationship closeness is a variable that measures the closeness of the biological entities in a relationship and is a characteristic of the relationship. The novelty of this approach is that it allows a user to link a gene expression dataset with heterogeneous data sources dynamically and flexibly, to facilitate comparative genomics investigations. Our research has demonstrated that using different relationships allows biologists to analyse experimental datasets in different ways, shortens the time needed to analyse the datasets and makes the analysis easier to undertake. It thus gives biologists more power to experiment with changing threshold values and linkage types. This is achieved in our framework by introducing the Soft Link Model (SLM) and a Relationship Knowledge Base (RKB), which is built and used by the SLM. The Integration and Data Mining of Bioinformatics Data sources system (IDMBD) is implemented as a proof-of-concept prototype to demonstrate the linkage technique described in the thesis.

    EThOS - Electronic Theses Online Service, United Kingdom
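The threshold-and-type filtering over a relationship knowledge base described above can be sketched as follows. The SLM and RKB names come from the abstract, but the data, closeness values and function signature here are invented for illustration.

```python
# Hypothetical sketch of soft linking over a relationship knowledge base:
# each entry is (source entity, target entity, relationship type,
# closeness in [0, 1]); the user picks the type and closeness threshold.

RKB = [
    ("geneA", "pathway1", "participates_in", 0.9),
    ("geneA", "geneB",    "homolog_of",      0.7),
    ("geneA", "diseaseX", "associated_with", 0.4),
]

def soft_link(entity, rel_type=None, min_closeness=0.5):
    """Return (target, type, closeness) links for an entity, optionally
    restricted to one relationship type, above a closeness threshold."""
    return [(tgt, rel, c) for src, tgt, rel, c in RKB
            if src == entity
            and (rel_type is None or rel == rel_type)
            and c >= min_closeness]

print(soft_link("geneA"))                     # pathway1 and geneB links
print(soft_link("geneA", min_closeness=0.3))  # all three links
```

Because the threshold and relationship type are ordinary parameters, the same gene expression dataset can be re-linked under different settings without rebuilding the knowledge base, which is the flexibility the abstract claims for the approach.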