
    Improving root cause analysis through the integration of PLM systems with cross supply chain maintenance data

    The purpose of this paper is to demonstrate a system architecture for integrating Product Lifecycle Management (PLM) systems with cross supply chain maintenance information to support root-cause analysis. By integrating product data from PLM systems with warranty claims, vehicle diagnostics and technical publications, engineers were able to improve root-cause analysis and close information gaps. Data collection was achieved via in-depth semi-structured interviews and workshops with experts from the automotive sector. Unified Modelling Language (UML) diagrams were used to design the proposed system architecture. A user scenario is also presented to demonstrate the functionality of the system.
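
    A minimal sketch of the kind of cross-source linkage such an architecture enables, assuming hypothetical field names (part_id, design_revision, diagnostic trouble codes); the paper's actual data model is specified in UML and is not reproduced here:

```python
# Illustrative only: joining PLM product data with warranty claims and vehicle
# diagnostics to see whether failures cluster around a particular design revision.
# All field names and records are invented for this sketch.
from collections import Counter
from dataclasses import dataclass

@dataclass
class PlmPart:
    part_id: str
    design_revision: str
    supplier: str

@dataclass
class WarrantyClaim:
    claim_id: str
    part_id: str
    dtc_code: str  # diagnostic trouble code reported with the claim

def failures_by_revision(parts, claims):
    """Count warranty claims per (design revision, diagnostic code) pair."""
    parts_by_id = {p.part_id: p for p in parts}
    counts = Counter()
    for c in claims:
        part = parts_by_id.get(c.part_id)
        if part is not None:
            counts[(part.design_revision, c.dtc_code)] += 1
    return counts

parts = [PlmPart("P-100", "RevA", "SupplierX"), PlmPart("P-101", "RevB", "SupplierX")]
claims = [WarrantyClaim("C-1", "P-100", "P0301"), WarrantyClaim("C-2", "P-100", "P0301")]
print(failures_by_revision(parts, claims))
```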

    Tourism and the smartphone app: capabilities, emerging practice and scope in the travel domain.

    Based on its advanced computing capabilities and ubiquity, the smartphone has rapidly been adopted as a tourism travel tool. With a growing number of users and a wide variety of applications emerging, the smartphone is fundamentally altering our current use and understanding of the transport network and tourism travel. Based on a review of smartphone apps, this article evaluates the current functionalities used in the domestic tourism travel domain and highlights where the next major developments lie. Then, at a more conceptual level, the article analyses how the smartphone mediates tourism travel and the role it might play in more collaborative and dynamic travel decisions to facilitate sustainable travel. Some emerging research challenges are discussed.

    XSRL: An XML web-services request language

    One of the most serious challenges that web-service enabled e-marketplaces face is the lack of formal support for expressing service requests against UDDI-resident web-services in order to solve a complex business problem. In this paper we present a web-service request language (XSRL) developed on the basis of AI planning and the XML database query language XQuery. This framework is designed to handle and execute XSRL requests and is capable of performing planning actions under uncertainty on the basis of refinement and revision as new service-related information is accumulated (via interaction with the user or UDDI) and as execution circumstances necessitate change.
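
    Since the XSRL syntax itself is not shown here, the following is only an illustrative Python sketch of a planning-style request that is refined as new service information arrives; the goal names, constraints and registry structure are invented for illustration and are not the paper's framework:

```python
# Toy representation of a composite service request and a refinement step,
# in the spirit of planning-based request handling. Names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class ServiceGoal:
    name: str                 # e.g. "book_flight"
    constraints: dict         # e.g. {"max_price": 300}
    candidate_services: list = field(default_factory=list)  # filled from a registry lookup

@dataclass
class Request:
    goals: list

def refine(request, registry):
    """Refine the plan as new service information becomes available:
    attach candidate services (stand-in registry lookup) to each unresolved goal."""
    for goal in request.goals:
        if not goal.candidate_services:
            goal.candidate_services = registry.get(goal.name, [])
    return request

registry = {"book_flight": ["AirlineA/booking", "AirlineB/booking"]}
req = Request([ServiceGoal("book_flight", {"max_price": 300})])
print(refine(req, registry).goals[0].candidate_services)
```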

    Experimental Case Studies for Investigating E-Banking Phishing Techniques and Attack Strategies

    Phishing is a form of electronic identity theft in which a combination of social engineering and web site spoofing techniques is used to trick a user into revealing confidential information with economic value. The problem with social engineering attacks is that there is no single solution that eliminates them completely, since they deal largely with the human factor. This is why empirical experiments are crucial for studying and analyzing malicious and deceptive phishing website attack techniques and strategies. In this paper, three different kinds of phishing experiment case studies have been conducted to shed light on social engineering attacks, such as phone phishing and phishing website attacks, in order to design effective countermeasures and analyze the efficiency of security awareness efforts regarding phishing threats. Results and reactions to our experiments show the importance of conducting phishing awareness training for all users and of doubling our efforts in developing phishing prevention techniques. Results also suggest that traditional standard security phishing factor indicators are not always effective for detecting phishing websites, and that alternative intelligent phishing detection approaches are needed.
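
    As a rough illustration of the "traditional standard" indicators the results call into question, here is a hedged sketch of simple URL heuristics; the features and thresholds are illustrative, not those used in the experiments, and a learned classifier would weight such signals rather than simply count them:

```python
# Illustrative rule-of-thumb phishing URL indicators; values are not from the paper.
from urllib.parse import urlparse
import re

def url_indicators(url):
    parsed = urlparse(url)
    host = parsed.hostname or ""
    return {
        "uses_ip_address": bool(re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host)),
        "has_at_symbol": "@" in url,
        "many_subdomains": host.count(".") >= 4,
        "suspicious_keyword": any(k in url.lower() for k in ("login", "verify", "update", "secure")),
        "long_url": len(url) > 75,
    }

def naive_score(url):
    """Count how many indicators fire; a real detector would learn weights instead."""
    return sum(url_indicators(url).values())

print(naive_score("http://192.168.0.1/secure-login/update.php?acct=1"))
```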

    Description of the terminological concept in an ontology

    Terminologie & Ontologie: Théories et Applications, TOTh 2017 Conference, Chambéry, 8–9 June 2017.
    Ontology editors are tools developed to classify and describe objects in a database that will be used by a computer program for any of a number of purposes. Since such a tool allows elements to be grouped and classified, we decided it could be applied to the management of terminological concepts. To do so, we have formalised the description of terminological concepts by means of characteristics and values, so that they match the form required by ontologies. We then show how to implement the conceptual information in the ontology editor Protégé, and more specifically how to represent concepts, descriptions of concepts and terminological definitions. We also analyse the advantages and drawbacks of this way of representing concepts, as well as outlining the future work that we are developing in relation to it.
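
    A minimal sketch, not the authors' Protégé model, of describing a concept through characteristic/value pairs so that it matches the form an ontology editor expects; the example concept and characteristic names are invented:

```python
# Illustrative frame-like description of a terminological concept.
# The concept "glacier" and its characteristics are hypothetical examples.
concept = {
    "label": "glacier",
    "characteristics": {
        "NATURE": "mass of ice",
        "FORMATION": "accumulation and compaction of snow",
        "MOVEMENT": "flows under its own weight",
    },
}

def to_definition(c):
    """Assemble a terminological definition from characteristic/value pairs."""
    traits = "; ".join(f"{k.lower()}: {v}" for k, v in c["characteristics"].items())
    return f'{c["label"]}: {traits}'

print(to_definition(concept))
```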

    Lightweight Ontologies

    Ontologies are explicit specifications of conceptualizations. They are often thought of as directed graphs whose nodes represent concepts and whose edges represent relations between concepts. The notion of concept is understood as defined in Knowledge Representation, i.e., as a set of objects or individuals. This set is called the concept extension or the concept interpretation. Concepts are often lexically defined, i.e., they have natural language names which are used to describe the concept extensions (e.g., concept mother denotes the set of all female parents). Therefore, when ontologies are visualized, their nodes are often shown with corresponding natural language concept names. The backbone structure of the ontology graph is a taxonomy in which the relations are “is-a”, whereas the remaining structure of the graph supplies auxiliary information about the modeled domain and may include relations like “part-of”, “located-in”, “is-parent-of”, and many others.
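
    A small sketch of this graph view, assuming networkx purely for convenience (any graph representation would do): lexically named concept nodes, an "is-a" taxonomy backbone, and auxiliary relations as extra labelled edges:

```python
# Lightweight ontology as a labelled directed graph (illustrative example).
import networkx as nx

G = nx.DiGraph()
G.add_edge("mother", "parent", relation="is-a")          # taxonomy backbone
G.add_edge("parent", "person", relation="is-a")
G.add_edge("mother", "child", relation="is-parent-of")   # auxiliary relation

# Extract just the "is-a" edges, i.e. the backbone taxonomy.
backbone = [(u, v) for u, v, d in G.edges(data=True) if d["relation"] == "is-a"]
print(backbone)
```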

    FilteredWeb: A Framework for the Automated Search-Based Discovery of Blocked URLs

    Various methods have been proposed for creating and maintaining lists of potentially filtered URLs to allow for measurement of ongoing internet censorship around the world. Whilst testing a known resource for evidence of filtering can be relatively simple, given appropriate vantage points, discovering previously unknown filtered web resources remains an open challenge. We present a new framework for automating the process of discovering filtered resources through the use of adaptive queries to well-known search engines. Our system applies information retrieval algorithms to isolate characteristic linguistic patterns in known filtered web pages; these are then used as the basis for web search queries. The results of these queries are then checked for evidence of filtering, and newly discovered filtered resources are fed back into the system to detect further filtered content. Our implementation of this framework, applied to China as a case study, shows that this approach is demonstrably effective at detecting significant numbers of previously unknown filtered web pages, making a significant contribution to the ongoing detection of internet filtering as it develops. Our tool is currently deployed and has been used to discover 1355 domains that are poisoned within China as of Feb 2017, 30 times more than are contained in the most widely used public filter list. Of these, 759 are outside of the Alexa Top 1000 domains list, demonstrating the capability of this framework to find more obscure filtered content. Further, our initial analysis of filtered URLs, and the search terms that were used to discover them, gives further insight into the nature of the content currently being blocked in China.
    Comment: To appear in "Network Traffic Measurement and Analysis Conference 2017" (TMA2017).
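
    A compressed sketch of the feedback loop described above, with the search-engine call and the in-country filtering test stubbed out; the real system uses proper information retrieval term weighting and network measurements rather than the raw word counts used here for illustration:

```python
# Illustrative adaptive discovery loop: seed pages -> characteristic terms ->
# search queries -> filtering checks -> new seeds. Stubs stand in for real services.
from collections import Counter
import re

def characteristic_terms(pages, top_n=5):
    """Pick frequent terms from known-filtered pages to seed search queries
    (a stand-in for the information retrieval algorithms the paper uses)."""
    counts = Counter()
    for text in pages:
        counts.update(re.findall(r"[a-z]{4,}", text.lower()))
    return [term for term, _ in counts.most_common(top_n)]

def search(query):
    # Stub: would query a well-known search engine API.
    return []

def is_filtered(url):
    # Stub: would test the URL for evidence of filtering from an in-country vantage point.
    return False

def discover(seed_pages, rounds=3):
    filtered_pages = list(seed_pages)
    found_urls = set()
    for _ in range(rounds):
        for term in characteristic_terms(filtered_pages):
            for url in search(term):
                if url not in found_urls and is_filtered(url):
                    found_urls.add(url)
                    # Newly discovered filtered content feeds back into term extraction.
                    filtered_pages.append(url)
    return found_urls

print(characteristic_terms(["example page text about some blocked topic topic topic"]))
```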
