40 research outputs found

    An Efficient Architecture for Information Retrieval in P2P Context Using Hypergraph

    Full text link
    Peer-to-peer (P2P) data-sharing systems now generate a significant portion of Internet traffic and have emerged as an accepted way to share enormous volumes of data. The need for widely distributed information systems supporting virtual organizations has given rise to a new category of P2P systems called schema-based. In such systems each peer is a database management system in itself, exposing its own schema. In this setting, the main objective is efficient search across peer databases, processing each incoming query without overly consuming bandwidth. The usability of these systems depends on successful techniques to find and retrieve data; however, efficient and effective routing of content-based queries remains an open problem in P2P networks. This work is an attempt to show that mining algorithms in the P2P context may significantly improve the efficiency of such methods. Our proposed method is based on a combination of clustering with hypergraphs. We use ECCLAT to build an approximate clustering and to discover meaningful clusters with slight overlapping, and the MTMINER algorithm to extract all minimal transversals of a hypergraph (the clusters) for query routing. The set of clusters improves the robustness of the query-routing mechanism and the scalability of the P2P network. We compare the performance of our method with a baseline on the query-routing problem. Our experimental results show that the proposed method achieves strong performance and scalability with respect to important criteria such as response time, precision and recall. Comment: 20 pages, 8 figures
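The transversal step can be pictured with a naive enumeration. This is only a rough sketch of what a minimal transversal is (a smallest vertex set intersecting every hyperedge), not the MTMINER algorithm itself; the vertex and cluster names are invented for illustration:

```python
from itertools import combinations

def minimal_transversals(vertices, edges):
    """Enumerate all minimal transversals (hitting sets) of a hypergraph:
    vertex subsets that intersect every hyperedge and contain no smaller
    hitting subset. Iterating by increasing size guarantees minimality."""
    found = []
    for size in range(1, len(vertices) + 1):
        for cand in combinations(sorted(vertices), size):
            s = set(cand)
            if all(s & edge for edge in edges):
                # keep only if no previously found (smaller) transversal is inside it
                if not any(t <= s for t in found):
                    found.append(s)
    return found

# three overlapping "clusters" of peers viewed as hyperedges
clusters = [{"a", "b"}, {"b", "c"}, {"a", "c"}]
print(minimal_transversals({"a", "b", "c"}, clusters))  # three transversals of size 2
```

Each transversal touches every cluster, so routing a query to the peers in any one transversal reaches all clusters; real algorithms such as MTMINER avoid this exponential enumeration.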

    The Semantic Grid: A future e-Science infrastructure

    No full text
    e-Science offers a promising vision of how computer and communication technology can support and enhance the scientific process. It does this by enabling scientists to generate, analyse, share and discuss their insights, experiments and results in an effective manner. The underlying computer infrastructure that provides these facilities is commonly referred to as the Grid. At this time, there are a number of grid applications being developed and there is a whole raft of computer technologies that provide fragments of the necessary functionality. However, there is currently a major gap between these endeavours and the vision of e-Science in which there is a high degree of easy-to-use and seamless automation and in which there are flexible collaborations and computations on a global scale. To bridge this practice–aspiration divide, this paper presents a research agenda whose aim is to move from the current state of the art in e-Science infrastructure to the future infrastructure that is needed to support the full richness of the e-Science vision. Here the future e-Science research infrastructure is termed the Semantic Grid (the relationship of the Semantic Grid to the Grid is meant to connote the one that exists between the Semantic Web and the Web). In particular, we present a conceptual architecture for the Semantic Grid. This architecture adopts a service-oriented perspective in which distinct stakeholders in the scientific process, represented as software agents, provide services to one another, under various service level agreements, in various forms of marketplace. We then focus predominantly on the issues concerned with the way that knowledge is acquired and used in such environments, since we believe this is the key differentiator between current grid endeavours and those envisioned for the Semantic Grid.

    Linked Data Entity Summarization

    Get PDF
    On the Web, the amount of structured and Linked Data about entities is constantly growing. Descriptions of single entities often include thousands of statements, and it becomes difficult to comprehend the data unless a selection of the most relevant facts is provided. This doctoral thesis addresses the problem of Linked Data entity summarization. The contributions involve two entity summarization approaches, a common API for entity summarization, and an approach for entity data fusion.
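The selection problem can be pictured with a toy heuristic. This is not one of the thesis's approaches; it simply ranks an entity's statements by how rare their predicates are across the whole dataset and keeps the top k, and all triples below are invented:

```python
from collections import Counter

def summarize(entity_triples, all_triples, k=2):
    """Keep the k statements whose predicates are rarest dataset-wide,
    a crude 'informativeness' proxy for entity summarization."""
    predicate_freq = Counter(p for _, p, _ in all_triples)
    return sorted(entity_triples, key=lambda t: predicate_freq[t[1]])[:k]

data = [
    ("e1", "type", "Person"), ("e2", "type", "City"),
    ("e1", "name", "Ada"),    ("e2", "name", "Paris"),
    ("e1", "fieldOfWork", "Mathematics"),
]
e1 = [t for t in data if t[0] == "e1"]
print(summarize(e1, data))  # the rare 'fieldOfWork' statement ranks first
```

Common predicates such as `type` and `name` say little that distinguishes an entity, which is why frequency-based signals are a natural baseline for summarizers.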

    Data-driven conceptual modeling: how some knowledge drivers for the enterprise might be mined from enterprise data

    Get PDF
    As organizations perform their business, they analyze, design and manage a variety of processes represented in models with different scopes and scales of complexity. Specifying these processes requires a certain level of modeling competence. However, this requirement is not always matched by the capability of the person(s) responsible for defining and modeling an organization's or enterprise's operations. On the other hand, an enterprise typically collects records of all events that occur during the operation of its processes. Records such as the start and end of tasks in a process instance, state transitions of objects impacted by process execution, and the messages exchanged during process execution are maintained in enterprise repositories as various logs: event logs, process logs, effect logs, message logs, etc. Furthermore, the volume of data generated by enterprise process execution has grown manyfold in just a few years. On top of this, models are often considered the dashboard view of an enterprise: they represent an abstraction of the underlying reality of an enterprise, and they serve as knowledge drivers through which an enterprise can be managed. Data-driven extraction offers the capability to mine these knowledge drivers from enterprise data and to leverage the mined models to establish the set of enterprise data that conforms with the desired behaviour. This thesis aims to generate models, or knowledge drivers, from enterprise data to provide a dashboard view of the enterprise in support of analysts. The rationale for this is stated as the requirement to improve an existing process or to create a new one; models can also serve as a collection of effectors through which an organization or enterprise is managed.
The enterprise data referred to above have been identified as process logs, effect logs, message logs, and invocation logs. The approach in this thesis is to mine these logs to generate process, requirement, and enterprise architecture models, and to discover how goals are fulfilled based on collected operational data. The research question has been formulated as: is it possible to derive the knowledge drivers from the enterprise data that represent the running operation of the enterprise; in other words, is it possible to use the available data in the enterprise repository to generate the knowledge drivers? Chapter 2 reviews the literature that provides the background needed to explore this question. Chapter 3 presents how process semantics can be mined. Chapter 4 suggests a way to extract a requirements model. Chapter 5 presents a way to discover the underlying enterprise architecture, and Chapter 6 presents a way to mine how goals are orchestrated. Overall findings are discussed in Chapter 7 to derive some conclusions.

    Seventh Biennial Report : June 2003 - March 2005

    No full text

    Context-aware information delivery for mobile construction workers

    Get PDF
    The potential of mobile Information Technology (IT) applications to support the information needs of mobile construction workers has long been understood. However, existing mobile IT applications in the construction industry have underlying limitations, including their inability to respond to the changing user context, a lack of semantic awareness and poor integration with the desktop-based infrastructure. This research argues that awareness of the user context (such as user role, preferences, task-at-hand, location, etc.) can enhance mobile IT applications in the construction industry by providing a mechanism to deliver highly specific information to mobile workers through intelligent interpretation of their context. Against this background, the aim of this research was to investigate the applicability of context-aware information delivery (CAID) technologies in the construction industry. The research methodology consisted of various methods. A literature review on context-aware and enabling technologies was undertaken and a conceptual framework developed, addressing the key issues of context-capture, context-inference and context-integration. To illustrate the application of CAID in realistic construction situations, five futuristic deployment scenarios were developed and analysed with several industry and technology experts. From the analysis, a common set of user needs was drawn up. These needs were subsequently translated into system design goals, which acted as a key input to the design and evaluation of a prototype system implemented on a Pocket-PC platform. The main achievements of this research include the development of a CAID framework for mobile construction workers, the demonstration of CAID concepts in realistic construction scenarios, an analysis of the construction industry's needs for CAID, and the implementation and validation of the prototype to demonstrate the CAID concepts.
The research concludes that CAID has the potential to significantly improve support for mobile construction workers and identifies the requirements for its effective deployment in the construction project delivery process. However, the industry needs to address various identified barriers to realise the full potential of CAID.

    A multi-matching technique for combining similarity measures in ontology integration

    Get PDF
    Ontology matching is a challenging problem in many applications, and is a major issue for interoperability in information systems. It aims to find semantic correspondences between a pair of input ontologies, which remains a labor-intensive and expensive task. This thesis investigates the problem of ontology matching in both theoretical and practical aspects and proposes a solution methodology called multi-matching. The methodology is validated using standard benchmark data and its performance is compared with available matching tools. The proposed methodology provides a framework for users to apply different individual matching techniques. It then proceeds by searching and combining the match results to provide a desired match result in reasonable time. In addition to existing applications of ontology matching, such as ontology engineering, ontology integration, and exploiting the Semantic Web, the thesis proposes a new approach to ontology integration as a backbone application for the proposed matching techniques. In terms of theoretical contributions, we introduce new search strategies and propose a structure similarity measure to match the structures of ontologies. In terms of practical contributions, we developed a research prototype, called MLMAR (Multi-Level Matching Algorithm with Recommendation analysis technique), which implements the proposed multi-level matching technique and applies heuristics as optimization techniques. Experimental results show the practical merits and usefulness of MLMAR.
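The idea of combining individual matchers can be sketched generically. This is not the MLMAR prototype: the two measures, the equal weights, and the 0.5 threshold below are arbitrary assumptions, and the concept labels are invented:

```python
from difflib import SequenceMatcher

def token_jaccard(a, b):
    """Token-overlap similarity between two concept labels."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def char_similarity(a, b):
    """Character-level similarity (Ratcliff/Obershelp via difflib)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def combined(a, b, w=(0.5, 0.5)):
    # weighted aggregation of two individual similarity measures
    return w[0] * token_jaccard(a, b) + w[1] * char_similarity(a, b)

def match_concepts(onto1, onto2, threshold=0.5):
    """Greedy one-way matching: best-scoring counterpart above a threshold."""
    pairs = []
    for c1 in onto1:
        best = max(onto2, key=lambda c2: combined(c1, c2))
        score = combined(c1, best)
        if score >= threshold:
            pairs.append((c1, best, round(score, 2)))
    return pairs

print(match_concepts(["Author", "Book Title"], ["author", "title of book"]))
```

Running several cheap measures and aggregating them is a common way to offset the weaknesses of any single measure; real systems additionally exploit ontology structure, as the structure similarity measure in this thesis does.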

    Exploiting general-purpose background knowledge for automated schema matching

    Full text link
    The schema matching task is an integral part of the data integration process. It is usually the first step in integrating data. Schema matching is typically very complex and time-consuming. It is, therefore, for the largest part carried out by humans. One reason for the low amount of automation is the fact that schemas are often defined with deep background knowledge that is not itself present within the schemas. Overcoming the problem of missing background knowledge is a core challenge in automating the data integration process. In this dissertation, the task of matching semantic models, so-called ontologies, with the help of external background knowledge is investigated in depth in Part I. Throughout this thesis, the focus lies on large, general-purpose resources, since domain-specific resources are rarely available for most domains. Besides new knowledge resources, this thesis also explores new strategies to exploit such resources. A technical base for the development and comparison of matching systems is presented in Part II. The framework introduced here allows for simple and modularized matcher development (with background knowledge sources) and for extensive evaluations of matching systems. One of the largest structured sources of general-purpose background knowledge are knowledge graphs, which have grown significantly in size in recent years. However, exploiting such graphs is not trivial. In Part III, knowledge graph embeddings are explored, analyzed, and compared. Multiple improvements to existing approaches are presented. In Part IV, numerous concrete matching systems which exploit general-purpose background knowledge are presented. Furthermore, exploitation strategies and resources are analyzed and compared. This dissertation closes with a perspective on real-world applications.
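A minimal illustration of what a knowledge graph embedding provides is the classic TransE scoring function: relations act as translations in vector space, so plausible triples have the head plus the relation landing near the tail. This is a generic textbook sketch, not one of the dissertation's systems, and the 2-d vectors are toy values:

```python
def transe_score(h, r, t):
    """TransE plausibility of triple (h, r, t): the translation h + r
    should land near t, so the score is the negative L2 distance."""
    return -sum((hv + rv - tv) ** 2 for hv, rv, tv in zip(h, r, t)) ** 0.5

# toy 2-d embeddings: 'capital_of' translates paris exactly onto france
paris, france, berlin = (0.0, 1.0), (1.0, 1.0), (4.0, 4.0)
capital_of = (1.0, 0.0)
print(transe_score(paris, capital_of, france))  # perfect fit: distance zero
print(transe_score(paris, capital_of, berlin))  # far lower score: implausible
```

Because such embeddings place related entities near each other, vector distance between entities from two schemas can serve as a matching signal, which is one reason embeddings are attractive as background knowledge for matchers.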

    Eighth Biennial Report : April 2005 – March 2007

    No full text