
    A novel Big Data analytics and intelligent technique to predict driver's intent

    The modern age offers great potential for automatically predicting a driver's intent, thanks to the increasing miniaturization of computing technologies, rapid advances in communication technologies and the continuous connectivity of heterogeneous smart objects. Inside the cabin and engine of modern cars, dedicated computer systems need the ability to exploit the wealth of information generated by heterogeneous data sources with different contextual and conceptual representations. Processing and utilizing this diverse and voluminous data involves many challenges concerning the design of the computational technique used to perform the task. In this paper, we investigate the various data sources available in the car and the surrounding environment that can be used as inputs to predict the driver's intent and behavior. As part of investigating these potential data sources, we conducted experiments on the e-calendars of a large number of employees and reviewed a number of available geo-referencing systems. Through the results of a statistical analysis, and by computing location-recognition accuracy, we explore in detail the potential of calendar location data for detecting the driver's intentions. To exploit the numerous diverse data inputs available in modern vehicles, we investigate the suitability of different Computational Intelligence (CI) techniques and propose a novel fuzzy computational modelling methodology. Finally, we outline the impact of applying advanced CI and Big Data analytics techniques in modern vehicles on the driver and on society in general, and discuss the ethical and legal issues arising from the deployment of intelligent self-learning cars.
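
    The abstract does not give the concrete model, but a minimal sketch of the kind of fuzzy rule such a methodology might contain is shown below. The membership functions, input names and the single rule are illustrative assumptions, not the paper's actual design.

        # Minimal sketch of one fuzzy rule for driver-intent prediction.
        # All inputs, shapes and the rule itself are invented for illustration.

        def triangular(x, a, b, c):
            """Triangular membership function peaking at b."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

        def intent_to_exit(speed_kmh, indicator_on, distance_to_junction_m):
            """Fuzzy degree of belief that the driver intends to leave the road."""
            slowing = triangular(speed_kmh, 0, 40, 80)                 # "moderate speed"
            near = triangular(distance_to_junction_m, 0, 50, 200)      # "close to junction"
            # Mamdani-style AND as minimum; the indicator acts as a crisp factor.
            return min(slowing, near) * (1.0 if indicator_on else 0.5)

        print(intent_to_exit(35, True, 60))  # ~0.88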

    INFRAWEBS semantic web service development on the base of knowledge management layer

    The paper gives an overview of the ongoing FP6-IST INFRAWEBS project and describes the main layers and software components embedded in an application-oriented realisation framework. An important part of INFRAWEBS is the Semantic Web Unit (SWU), a collaboration platform and interoperable middleware for the ontology-based handling and maintenance of Semantic Web Services (SWS). The framework provides knowledge about a specific domain and relies on ontologies to structure and exchange this knowledge with the semantic service development modules. The INFRAWEBS Designer and Composer are sub-modules of the SWU responsible for creating Semantic Web Services using a Case-Based Reasoning approach. The Service Access Middleware (SAM) is responsible for building up the communication channels between users and the various other modules, and serves as a generic middleware for the deployment of Semantic Web Services. This software toolset provides a development framework for creating and maintaining the full life-cycle of Semantic Web Services with specific application support.
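
    As a rough illustration of the Case-Based Reasoning step mentioned above (not INFRAWEBS code; the case format and the Jaccard similarity are assumptions), retrieving the stored service case most similar to a new goal might look like this:

        # Toy CBR retrieval: pick the stored service case whose covered
        # concepts best overlap a new goal. Cases and concepts are invented.

        def jaccard(a, b):
            a, b = set(a), set(b)
            return len(a & b) / len(a | b) if a | b else 0.0

        cases = {  # hypothetical SWS cases indexed by the concepts they cover
            "BookFlight": {"flight", "payment", "itinerary"},
            "BookHotel": {"hotel", "payment", "reservation"},
        }

        goal = {"flight", "payment"}
        best = max(cases, key=lambda name: jaccard(cases[name], goal))
        print(best)  # BookFlight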

    Hybrid Ontology for Semantic Information Retrieval Model Using Keyword Matching Indexing System

    Ontology development is the process of growing and elucidating the concepts of an information domain shared by a group of users. Incorporating ontologies into information retrieval is a natural way to improve the relevance of the information retrieved for users. Matching keywords against a historical or information domain is central in recent approaches to finding the best match for a specific input query. This research presents an improved querying mechanism for information retrieval that integrates ontology queries with keyword search. The ontology-based query is translated into a first-order predicate logic query, which is used for routing the query to the appropriate servers. Matching algorithms represent an active area of research in computer science and artificial intelligence. In text matching, it is more reliable to model the semantics of both the query and the text when establishing a semantic match. This research improves the semantic matching between input queries and the information in the ontology. The contributed algorithm is a hybrid method based on matching instances extracted from the queries against the information domain. Both the queries and the information domain are subjected to semantic matching in order to discover the best match and to improve the execution process. In conclusion, the hybrid ontology approach in the semantic web retrieves documents more effectively than a standard ontology.
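
    A hedged sketch of the hybrid idea follows: expand a keyword query with ontology-derived terms before scoring documents. The tiny ontology, the documents and the overlap-count scoring are illustrative assumptions, not the paper's algorithm.

        # Hybrid keyword + ontology retrieval sketch: expand the query with
        # related ontology terms, then score documents by term overlap.

        ontology = {"car": {"vehicle", "automobile"}, "engine": {"motor"}}

        def expand(query_terms):
            expanded = set(query_terms)
            for term in query_terms:
                expanded |= ontology.get(term, set())
            return expanded

        def score(doc_terms, query_terms):
            return len(expand(query_terms) & set(doc_terms))

        docs = {"d1": ["automobile", "motor", "repair"], "d2": ["train", "schedule"]}
        query = ["car", "engine"]
        print(sorted(docs, key=lambda d: -score(docs[d], query)))  # ['d1', 'd2']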

    Selection of Composable Web Services Driven by User Requirements

    Building a composite application based on Web services has become a real challenge given the large and diverse service space available nowadays, especially when considering the various functional and non-functional capabilities that Web services may offer and users may require. In this paper, we propose an approach for facilitating Web service selection according to user requirements. These requirements specify the needed functionality and the expected QoS, as well as the composability between each pair of services. The originality of our approach lies in the use of Relational Concept Analysis (RCA), an extension of Formal Concept Analysis (FCA). Using RCA, we classify services by their calculated QoS levels and composability modes. We use a real case study of 901 services to show how to accomplish an efficient selection of services satisfying a specified set of functional and non-functional requirements.
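
    For readers unfamiliar with the formalism, the sketch below shows the plain FCA derivation operators that RCA extends, applied to a made-up service/QoS context; the services and attributes are assumptions for illustration only.

        # Plain Formal Concept Analysis sketch (the basis that RCA extends):
        # the two derivation operators over a small service/attribute context.

        context = {  # service -> set of QoS/composability attributes it satisfies
            "s1": {"highAvailability", "lowLatency"},
            "s2": {"highAvailability"},
            "s3": {"lowLatency", "cheap"},
        }

        def intent(objects):            # attributes shared by all given objects
            attrs = [context[o] for o in objects]
            return set.intersection(*attrs) if attrs else set()

        def extent(attributes):         # objects having all the given attributes
            return {o for o, a in context.items() if attributes <= a}

        # A formal concept is a pair (extent, intent) closed under both maps.
        ext = extent({"highAvailability"})
        print(ext, intent(ext))  # {'s1', 's2'} {'highAvailability'}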

    Activating Generalized Fuzzy Implications from Galois Connections

    This paper deals with the relation between fuzzy implications and Galois connections, aiming to raise awareness that fuzzy implications are indispensable for generalising Formal Concept Analysis. The concrete goal of the paper is to make evident that Galois connections, which are at the heart of some of the generalizations of Formal Concept Analysis, can be interpreted as fuzzy incidences. Thus knowledge processing, discovery, exploration and visualization, as well as data mining, are new research areas for fuzzy implications, as they are areas where Formal Concept Analysis has a niche. F. J. Valverde-Albacete was partially supported by EU FP7 project LiMoSINe (contract 288024). C. Peláez-Moreno was partially supported by the Spanish Government CICYT project 2011-268007/TEC.
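
    For orientation, the standard definitions behind these terms (textbook formulations, not taken verbatim from the paper) are:

        % Antitone Galois connection between powersets, and the residuated
        % (fuzzy) implication that generalises the incidence relation.
        \[
          f : 2^G \to 2^M,\qquad g : 2^M \to 2^G,\qquad
          A \subseteq g(B) \iff B \subseteq f(A).
        \]
        % In the fuzzy setting, the incidence I : G x M -> [0,1] induces
        \[
          A^{\uparrow}(m) \;=\; \bigwedge_{g \in G} \bigl(A(g) \rightarrow I(g,m)\bigr),
        \]
        % where the implication is the residuum of a t-norm \otimes:
        \[
          x \rightarrow y \;=\; \sup\{\, z \mid x \otimes z \le y \,\}.
        \]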

    Using conceptual graphs for clinical guidelines representation and knowledge visualization

    The intrinsic complexity of the medical domain requires tools to assist the clinician and improve patient health care. Clinical practice guidelines and protocols (CGPs) are documents aimed at guiding decisions and criteria in specific areas of healthcare. They have been represented using several languages, but these are difficult to understand without a formal background. This paper uses the conceptual graph formalism to represent CGPs. The originality here is the use of a graph-based approach in which reasoning is based on graph-theory operations, supporting sound logical reasoning in a visual manner. It allows users to have maximal understanding of, and control over, each step of the knowledge reasoning process in the exploitation of CGPs. The application example concentrates on a protocol for the management of adult patients with hyperosmolar hyperglycemic state in the Intensive Care Unit.
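
    The core graph operation of conceptual-graph reasoning is projection. A drastically simplified single-edge sketch is shown below; the type hierarchy and triples are invented examples, not taken from the protocol in the paper.

        # Simplified CG projection check: a query edge maps into a fact edge
        # if its concept types are equal or more general. Data are invented.

        specialises = {"HHSPatient": "Patient", "Insulin": "Drug"}

        def is_a(t, general):
            while t is not None:
                if t == general:
                    return True
                t = specialises.get(t)
            return False

        facts = {("HHSPatient", "receives", "Insulin")}
        query = ("Patient", "receives", "Drug")

        ok = any(is_a(s, query[0]) and r == query[1] and is_a(o, query[2])
                 for s, r, o in facts)
        print(ok)  # True: the guideline rule applies to this patient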

    Fuzzy concept analysis for semantic knowledge extraction

    The availability of controlled vocabularies, ontologies and similar resources is an enabling feature for providing added value in knowledge management. Nevertheless, the design, maintenance and construction of domain ontologies are human-intensive and time-consuming tasks. Knowledge Extraction consists of automatic techniques aimed at identifying and defining the relevant concepts and relations of the domain of interest by analyzing structured (relational databases, XML) and unstructured (text, documents, images) sources. Specifically, the knowledge extraction methodology defined in this research work enables automatic ontology/taxonomy construction from existing resources in order to obtain useful information. For instance, the experimental results take into account data produced with Web 2.0 tools (e.g., RSS feeds, enterprise wikis, corporate blogs), text documents, and so on. The final results of the Knowledge Extraction methodology are taxonomies or ontologies represented in a machine-oriented manner by means of semantic web technologies such as RDFS, OWL and SKOS. The resulting knowledge models have been applied to different goals. On the one hand, the methodology has been applied to extract ontologies and taxonomies and to semantically annotate text. On the other hand, the resulting ontologies and taxonomies are exploited to enhance information retrieval performance, to categorize incoming data and to provide an easy way to find interesting resources (such as faceted browsing). Specifically, the following objectives have been addressed in this research work: Ontology/Taxonomy Extraction, concerning the automatic extraction of hierarchical conceptualizations (taxonomies) and of relations expressed by means of typical description logic constructs (ontologies); Information Retrieval, the definition of a technique to perform concept-based retrieval of information according to user queries; Faceted Browsing, automatically providing faceted browsing capabilities according to the categorization of the extracted contents; and Semantic Annotation, the definition of a text analysis process aimed at automatically annotating the subjects and predicates identified. The experimental results have been obtained in several application domains: e-learning, enterprise human resource management, and clinical decision support systems. Future challenges go in the following direction: investigating approaches to support ontology alignment and merging applied to knowledge management.
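
    One of the stated outputs is a taxonomy serialised in SKOS. A minimal sketch of that serialisation step (assuming the rdflib package; the concepts and namespace URI are placeholders, not the thesis's data) might be:

        # Serialise an extracted taxonomy as SKOS using rdflib.
        from rdflib import Graph, Namespace, Literal
        from rdflib.namespace import RDF, SKOS

        EX = Namespace("http://example.org/taxonomy/")
        g = Graph()
        for child, parent in [("NeuralNetwork", "MachineLearning"),
                              ("MachineLearning", "ComputerScience")]:
            g.add((EX[child], RDF.type, SKOS.Concept))
            g.add((EX[child], SKOS.prefLabel, Literal(child)))
            g.add((EX[child], SKOS.broader, EX[parent]))

        print(g.serialize(format="turtle"))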

    Integrating Semantic Web Services Ranking Mechanisms Using a Common Preference Model

    Service ranking has long been acknowledged to play a fundamental role in helping users to select the best offerings among services retrieved from a search request. There exist many ranking mechanisms, each one providing ad hoc preference models that offer different levels of expressiveness. Consequently, applying a single mechanism to a particular scenario constrains the user to define preferences based on that mechanism's facilities. Furthermore, a more flexible solution that uses several independent mechanisms will face interoperability issues because of the differences between the preference models provided by each ranking mechanism. In order to overcome these issues, we propose a Preference-based Universal Ranking Integration (PURI) framework that enables the combination of several ranking mechanisms using a common, holistic preference model. Using PURI, different ranking mechanisms are seamlessly and transparently integrated, offering a single façade for defining preferences using highly expressive facilities that are not only decoupled from the concrete mechanisms that perform the ranking process, but also make it possible to exploit synergies from the combination of integrated mechanisms. We also thoroughly present a particular application scenario in the SOA4All EU project and evaluate the benefits and applicability of PURI in further domains.
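
    A hedged sketch of the integration idea follows: normalise the scores produced by independent ranking mechanisms and combine them under one preference model. The weighted sum, the weights and the example services are assumptions, not PURI's actual model.

        # Combine two independent ranking mechanisms via score normalisation.

        def normalise(scores):
            lo, hi = min(scores.values()), max(scores.values())
            return {k: (v - lo) / (hi - lo) if hi > lo else 0.0
                    for k, v in scores.items()}

        qos_rank = {"svcA": 0.9, "svcB": 0.4, "svcC": 0.7}   # mechanism 1 (higher better)
        price_rank = {"svcA": 120, "svcB": 60, "svcC": 90}   # mechanism 2 (lower better)

        combined = {
            s: 0.6 * normalise(qos_rank)[s]
               + 0.4 * (1 - normalise(price_rank)[s])
            for s in qos_rank
        }
        print(max(combined, key=combined.get))  # svcA with these example weights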

    LearnFCA: A Fuzzy FCA and Probability Based Approach for Learning and Classification

    Formal Concept Analysis (FCA) is a mathematical theory based on lattice and order theory used for data analysis and knowledge representation. Over the past several years, many of its extensions have been proposed and applied in several domains, including data mining, machine learning, knowledge management, the semantic web, software development, chemistry, medicine, data analytics, biology and ontology engineering. This thesis reviews the state of the art of the theory of Formal Concept Analysis and its various extensions that have been developed and well studied in the past several years. We discuss their historical roots and reproduce the original definitions and derivations with illustrative examples. Further, we provide a literature review of its applications and the various approaches adopted by researchers in the areas of data analysis and knowledge management, with emphasis on learning and classification problems. We propose LearnFCA, a novel approach based on fuzzy FCA and probability theory for learning and classification problems. LearnFCA uses an enhanced version of FuzzyLattice, which has been developed to store class labels and probability vectors and can be used to classify instances with encoded and unlabelled features. We evaluate LearnFCA on encodings from three datasets - MNIST, Omniglot and cancer images - with interesting results and varying degrees of success. Adviser: Dr Jitender Deogu
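
    A toy sketch of the idea the abstract describes is given below: lattice nodes carry a fuzzy intent plus a class-probability vector, and a new instance is classified through its best-matching node. The node contents, features and membership measure are invented, not LearnFCA's actual definitions.

        # Classify via the best-matching fuzzy node's class probabilities.

        nodes = [  # (fuzzy intent over 3 features, class-probability vector)
            ({"f1": 0.9, "f2": 0.1, "f3": 0.8}, {"cat": 0.8, "dog": 0.2}),
            ({"f1": 0.2, "f2": 0.9, "f3": 0.3}, {"cat": 0.1, "dog": 0.9}),
        ]

        def membership(instance, intent):
            # degree to which the instance satisfies the node's fuzzy intent
            return min(1 - abs(instance[f] - v) for f, v in intent.items())

        def classify(instance):
            intent, probs = max(nodes, key=lambda n: membership(instance, n[0]))
            return max(probs, key=probs.get)

        print(classify({"f1": 0.85, "f2": 0.2, "f3": 0.7}))  # cat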

    Web-based strategies in the manufacturing industry

    The explosive growth of Internet-based architectures is allowing efficient access to information resources over geographically dispersed areas. This fact is exerting a major influence on current manufacturing practices. Business activities involving customers, partners, employees and suppliers are being rapidly and efficiently integrated through networked information management environments. Therefore, efforts are required to take advantage of distributed infrastructures that can satisfy information integration and collaborative work strategies in corporate environments. In this research, Internet-based distributed solutions focused on the manufacturing industry are proposed. Three different systems have been developed for the tooling sector, specifically for the company Seco Tools UK Ltd (industrial collaborator). They are summarised as follows.

    SELTOOL is a Web-based open tool selection system involving the analysis of technical criteria to establish the appropriate selection of inserts, toolholders and cutting data for turning, threading and grooving operations. It is oriented to Seco customers world-wide. SELTOOL provides an interactive, cross-referenced way of searching for tooling parameters, rather than the conventional representation schemes provided by catalogues. Mechanisms were developed to filter, convert and migrate data from different formats to the (SQL-based) database used by SELTOOL.

    TTS (Tool Trials System) is a Web-based system developed by the author and two other researchers to support Seco sales engineers and technical staff, who perform tooling trials in geographically dispersed machining centres and benefit from sharing the data and results generated by these tests. Through TTS, tooling engineers (authorised users) can submit and retrieve highly specific technical tooling data for both milling and turning operations. Moreover, tooling engineers can avoid executing new tool trials when another engineer has previously carried out the same trials in a physically distant place, simply by consulting the recorded results. The system incorporates encrypted security features suitable for restricted use on the World Wide Web.

    An urgent need exists for tools that make sense of raw data, extracting useful knowledge from the increasingly large collections of data now being constructed and made available in networked information environments. This explosive growth in the availability of information is overwhelming the capability of traditional information management systems to provide efficient ways of detecting anomalies and significant patterns in large sets of data. Inexorably, the tooling industry is generating valuable experimental data. It is a promising and largely unexplored sector for the application of knowledge-capturing systems. Hence, to address this issue, a knowledge discovery system called DISKOVER was developed. DISKOVER is an integrated Java application consisting of five data mining modules that can be operated through the Internet. Kluster and Q-Fast are two of these modules, entirely developed by the author. Fuzzy-K was developed by the author in collaboration with another research student in the group at Durham. The final two modules (R-Set and MQG) were developed by another member of the Durham group. To develop Kluster, a complete clustering methodology was proposed.

    Kluster is a clustering application able to combine the analysis of quantitative as well as categorical data (conceptual clustering) to establish data classification processes. This module incorporates two original contributions: consistent indicators to measure the quality of the final classification, and the application of optimisation methods to the groups obtained. Kluster gives users the possibility of introducing case studies to generate cutting parameters for particular input requirements. Fuzzy-K is an application that has the advantages of hierarchical clustering while applying fuzzy membership functions to support the generation of similarity measures. The implementation of fuzzy membership functions helped to optimise the grouping of categorical data containing missing or imprecise values. As the tooling database is accessed through the Internet, which is a relatively slow access platform, it was decided to rely on faster information retrieval mechanisms. Q-Fast is an SQL-based exploratory data analysis (EDA) application implemented for this purpose.
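
    A minimal sketch of the kind of mixed-type distance such a method needs is shown below: a Gower-like combination of scaled numeric gaps and categorical matches, with a soft penalty for missing categorical values in the spirit of Fuzzy-K. The feature names, records and 0.5 penalty are assumptions, not the thesis's actual formulas.

        # Gower-like distance over mixed numeric/categorical tooling records.

        def mixed_distance(a, b, numeric, categorical, ranges):
            parts = []
            for f in numeric:
                parts.append(abs(a[f] - b[f]) / ranges[f])   # scaled numeric gap
            for f in categorical:
                if a[f] is None or b[f] is None:
                    parts.append(0.5)   # fuzzy "unknown" instead of a hard mismatch
                else:
                    parts.append(0.0 if a[f] == b[f] else 1.0)
            return sum(parts) / len(parts)

        r1 = {"speed": 220, "feed": 0.15, "coating": "TiN"}
        r2 = {"speed": 180, "feed": 0.20, "coating": None}
        print(mixed_distance(r1, r2, ["speed", "feed"], ["coating"],
                             {"speed": 100, "feed": 0.1}))  # ~0.47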