    Network Analysis, Creative System Modelling and Decision Support: The NetSyMoD Approach

    This paper presents the NetSyMoD approach, where NetSyMoD stands for Network Analysis – Creative System Modelling – Decision Support. It represents the outcome of several years of research at FEEM in the field of natural resources management, environmental evaluation and decision-making, within the Natural Resources Management Research Programme. NetSyMoD is a flexible and comprehensive methodological framework, which uses a suite of support tools aimed at facilitating the involvement of stakeholders or experts in decision-making processes. The main phases envisaged for the process are: (i) the identification of relevant actors, (ii) the analysis of social networks, (iii) creative system modelling of the reality being considered (i.e. the local socio-economic and environmental system), and (iv) the analysis of the alternative options available for the management of the specific case (e.g. alternative projects, plans, strategies). The strategies for participation are necessarily context-dependent, and thus not all NetSyMoD phases may be needed in every application. Furthermore, the practical solutions for their implementation may differ significantly from one case to another, depending not only on the context, but also on the available human and financial resources. The various applications of NetSyMoD nonetheless share the same approach to problem analysis and communication within a group of actors, based upon the use of creative thinking techniques, the formalisation of human-environment relationships through the DPSIR framework, and the use of multi-criteria analysis through the mDSS software.
    Keywords: Social Network, Integrated Analysis, Participatory Modelling, Decision Support
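
    A minimal sketch of the weighted-sum style of multi-criteria ranking that a decision support tool such as mDSS performs is given below; the criteria, weights and option scores are hypothetical, and this is an illustration of the general technique rather than the mDSS implementation.

# Minimal sketch of weighted-sum multi-criteria ranking of alternative options.
# Hypothetical data; not the mDSS algorithm or API.

def rank_options(options, weights):
    """Rank alternatives by the weighted sum of their normalised criterion scores."""
    ranked = []
    for name, scores in options.items():
        total = sum(weights[c] * scores[c] for c in weights)
        ranked.append((total, name))
    return sorted(ranked, reverse=True)

# Hypothetical alternatives scored 0-1 against three illustrative criteria.
options = {
    "plan_A": {"economic": 0.7, "environmental": 0.4, "social": 0.8},
    "plan_B": {"economic": 0.5, "environmental": 0.9, "social": 0.6},
}
weights = {"economic": 0.4, "environmental": 0.4, "social": 0.2}

for score, name in rank_options(options, weights):
    print(f"{name}: {score:.2f}")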

    Two-echelon freight transport optimisation: unifying concepts via a systematic review

    Multi-echelon distribution schemes are one of the most common strategies adopted by transport companies with the aim of reducing costs, but their identification in the scientific literature is not always easy due to a lack of unified terminology. This paper presents the main concepts of two-echelon distribution via a systematic review, specifically a meta-narrative analysis, in order to identify and unify the main concepts, issues and methods that can be helpful to scientists and transport practitioners. The problem of system cost optimisation in two-echelon freight transport systems is defined. Moreover, the main variants are synthetically presented and discussed. Finally, future research directions are proposed.
    Keywords: location-routing problems, multi-echelon distribution, cross-docking, combinatorial optimisation, systematic review
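
    The system cost referred to above can be illustrated with a deliberately simplified sketch: first-echelon trips from the depot to intermediate satellites, handling at the satellites, and second-echelon deliveries to customers. The unit costs and distances below are hypothetical and are not taken from the paper.

# Simplified illustration of a two-echelon system cost.
# Hypothetical cost coefficients and route lengths; not from the reviewed models.

def system_cost(first_leg_km, second_leg_km, satellite_loads,
                cost_km_truck=1.2, cost_km_van=0.6, handling_per_unit=0.05):
    """Total cost = first-echelon transport + second-echelon transport + satellite handling."""
    transport_first = cost_km_truck * sum(first_leg_km)
    transport_second = cost_km_van * sum(second_leg_km)
    handling = handling_per_unit * sum(satellite_loads)
    return transport_first + transport_second + handling

# Hypothetical instance: two first-echelon routes, three second-echelon routes.
print(system_cost(first_leg_km=[40, 55], second_leg_km=[12, 9, 15],
                  satellite_loads=[300, 450]))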

    CBR and MBR techniques: review for an application in the emergencies domain

    The purpose of this document is to provide an in-depth analysis of current reasoning engine practice and of the integration strategies for Case Based Reasoning (CBR) and Model Based Reasoning (MBR) that will be used in the design and development of the RIMSAT system. RIMSAT (Remote Intelligent Management Support and Training) is a European Commission funded project designed to: (a) provide an innovative, 'intelligent', knowledge based solution aimed at improving the quality of critical decisions, and (b) enhance the competencies and responsiveness of individuals and organisations involved in highly complex, safety critical incidents, irrespective of their location. In other words, RIMSAT aims to design and implement a decision support system that applies Case Based Reasoning as well as Model Based Reasoning technology to the management of emergency situations. This document is part of a deliverable for the RIMSAT project, and although it has been written in close contact with the requirements of the project, it provides an overview wide enough to serve as a state of the art in integration strategies between CBR and MBR technologies.
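
    As an illustration of the retrieve step in Case Based Reasoning, the kind of mechanism RIMSAT combines with Model Based Reasoning, the sketch below matches a new emergency against stored cases by weighted feature similarity. The case features, weights and plans are hypothetical and do not reflect the RIMSAT design.

# Minimal sketch of CBR retrieval over emergency cases.
# Hypothetical case base and weights; not the RIMSAT implementation.

def similarity(query, case, weights):
    """Weighted exact-match similarity between a query and a stored case."""
    score = sum(w for f, w in weights.items() if query.get(f) == case.get(f))
    return score / sum(weights.values())

case_base = [
    {"incident": "chemical_spill", "scale": "large", "location": "urban", "plan": "evacuate_zone"},
    {"incident": "fire", "scale": "small", "location": "industrial", "plan": "contain_on_site"},
]
weights = {"incident": 0.5, "scale": 0.3, "location": 0.2}
query = {"incident": "chemical_spill", "scale": "small", "location": "urban"}

best = max(case_base, key=lambda c: similarity(query, c, weights))
print("Most similar past case suggests plan:", best["plan"])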

    The next generation of the web: an organisational perspective

    The web has revolutionised information sharing, management, interoperability and knowledge discovery. The union of the two prominent web frameworks, Web 2.0 and the Semantic Web, is often referred to as Web 3.0. This paper explores the basics behind the two paradigms, assesses their influence on organisational change and considers their effectiveness in supporting innovative solutions. It then outlines the challenges of combining the two web paradigms to form Web 3.0 and critically evaluates the impact that Web 3.0 will have on the social organisation. The research carried out follows action research principles and adopts an investigative and reviewing approach to the emerging trends and patterns that develop from the web's changing use, examining the underpinning enabling technologies that facilitate access, innovation and organisational change.

    AI Solutions for MDS: Artificial Intelligence Techniques for Misuse Detection and Localisation in Telecommunication Environments

    This report considers the application of Artificial Intelligence (AI) techniques to the problem of misuse detection and misuse localisation within telecommunications environments. A broad survey of techniques is provided, covering inter alia rule based systems, model-based systems, case based reasoning, pattern matching, clustering and feature extraction, artificial neural networks, genetic algorithms, artificial immune systems, agent based systems, data mining and a variety of hybrid approaches. The report then considers the central issue of event correlation, which is at the heart of many misuse detection and localisation systems. The notion of being able to infer misuse by the correlation of individual temporally distributed events within a multiple data stream environment is explored, and a range of techniques is reviewed, covering model based approaches, 'programmed' AI and machine learning paradigms. It is found that, in general, correlation is best achieved via rule based approaches, but that these suffer from a number of drawbacks, such as the difficulty of developing and maintaining an appropriate knowledge base, and the lack of ability to generalise from known misuses to new unseen misuses. Two distinct approaches are evident. One attempts to encode knowledge of known misuses, typically within rules, and uses this to screen events. This approach cannot generally detect misuses for which it has not been programmed, i.e. it is prone to issuing false negatives. The other attempts to 'learn' the features of event patterns that constitute normal behaviour and, by observing patterns that do not match expected behaviour, detect when a misuse has occurred. This approach is prone to issuing false positives, i.e. inferring misuse from innocent patterns of behaviour that the system was not trained to recognise. Contemporary approaches are seen to favour hybridisation, often combining detection or localisation mechanisms for both abnormal and normal behaviour, the former to capture known cases of misuse, the latter to capture unknown cases. In some systems, these mechanisms even update each other to increase detection rates and lower false positive rates. It is concluded that hybridisation offers the most promising future direction, but that a rule or state based component is likely to remain, being the most natural approach to the correlation of complex events. The challenge, then, is to mitigate the weaknesses of canonical programmed systems so that learning, generalisation and adaptation are more readily facilitated.
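
    The hybrid idea described above can be sketched as a rule component that flags known misuse signatures, combined with a crude learned profile of normal behaviour that flags unseen deviations. The events, rules and thresholds below are hypothetical and do not correspond to any particular misuse detection system.

# Illustrative hybrid misuse detector: known-signature rules plus an anomaly check.
# Hypothetical call-record features and thresholds; not a real MDS implementation.

KNOWN_MISUSE_RULES = [
    lambda e: e["calls_per_min"] > 200,                       # known flooding pattern
    lambda e: e["dest"] == "premium" and e["duration"] < 2,   # known premium-rate fraud pattern
]

def train_normal_profile(events):
    """Learn a per-feature mean from misuse-free traffic."""
    n = len(events)
    return {k: sum(e[k] for e in events) / n for k in ("calls_per_min", "duration")}

def detect(event, profile, tolerance=3.0):
    """Flag known misuses first, then deviations from the learned normal profile."""
    if any(rule(event) for rule in KNOWN_MISUSE_RULES):
        return "known misuse"
    for k, mean in profile.items():
        if event[k] > tolerance * mean:
            return "anomalous (possible unknown misuse)"
    return "normal"

profile = train_normal_profile([
    {"calls_per_min": 10, "duration": 120, "dest": "national"},
    {"calls_per_min": 14, "duration": 90, "dest": "national"},
])
print(detect({"calls_per_min": 300, "duration": 60, "dest": "national"}, profile))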

    Scalable attack modelling in support of security information and event management

    While assessing security on single devices can be performed using vulnerability assessment tools, modelling more intricate attacks, which incorporate multiple steps on different machines, requires more advanced techniques. Attack graphs are a promising technique; however, they face a number of challenges. An attack graph is an abstract description of what attacks are possible against a specific network. Nodes in an attack graph represent the state of a network at a point in time, while arcs between nodes indicate the transformation of a network from one state to another via the exploit of a vulnerability. Using attack graphs allows system and network configuration information to be correlated and analysed to indicate imminent threats. This approach is limited by several serious issues, including the state-space explosion, due to the exponential nature of the problem, and the difficulty of visualising an exhaustive graph of all potential attacks. Furthermore, the lack of availability of information regarding exploits, in a standardised format, makes it difficult to model atomic attacks in terms of exploit requirements and effects. The objective of this thesis is to address these issues and to present a proof of concept solution. It describes a proof of concept implementation of an automated attack graph based tool to assist in the evaluation of network security, assessing whether a sequence of actions could lead to an attacker gaining access to critical network resources. Key objectives are the investigation of attacks that can be modelled, discovery of attack paths, development of techniques to strengthen networks based on attack paths, and testing scalability for larger networks. The proof of concept framework, Network Vulnerability Analyser (NVA), sources vulnerability information from the National Vulnerability Database (NVD), a comprehensive, publicly available vulnerability database, transforming it into atomic exploit actions. NVA combines these with a topological network model, using an automated planner to identify potential attacks on network devices. Automated planning is an area of Artificial Intelligence (AI) which focuses on computational deliberation over action sequences by measuring their expected outcomes, and this technique is applied to support discovery of the best possible solution to the attack graph that is created. Through the use of heuristics developed for this study, unpromising regions of an attack graph are avoided. Effectively, this prevents the state-space explosion problem associated with modelling large scale networks, enumerating only critical paths rather than an exhaustive graph. SGPlan5 was selected as the most suitable automated planner for this study and was integrated into the system, employing the network and exploit models to construct critical attack paths. A critical attack path indicates the most likely attack vector to be used in compromising a targeted device. Critical attack paths are identified by SGPlan5 using a heuristic that searches the state-space for the attack which yields the highest aggregated severity score. CVSS severity scores were selected as a means of guiding state-space exploration since they are currently the only publicly available metric which can measure the impact of an exploited vulnerability. Two analysis techniques have been implemented to further support the user in making an informed decision as to how to prevent identified attacks.
    Evaluation of NVA was broken down into a demonstration of its effectiveness in two case studies, and an analysis of its scalability potential. Results demonstrate that NVA can successfully enumerate the expected critical attack paths and also use this information to establish a solution to identified attacks. Additionally, performance and scalability testing illustrate NVA's success in application to realistically sized larger networks.
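
    The idea of guiding attack-path discovery by aggregated CVSS severity can be sketched as a greedy best-first search over a toy network model; this illustrates the principle only, not the NVA or SGPlan5 implementation, and the hosts, exploits and scores are hypothetical.

# Heuristic sketch: search for the attack path with the highest aggregated CVSS score.
# Hypothetical network and exploit scores; not the NVA/SGPlan5 planner.

import heapq

# Each exploit: (precondition host, postcondition host, CVSS base score).
EXPLOITS = [
    ("internet", "web_server", 7.5),
    ("web_server", "app_server", 6.8),
    ("app_server", "database", 9.0),
    ("web_server", "database", 4.3),
]

def critical_path(start, target):
    """Greedy best-first search: expand the partial path with the highest severity so far."""
    # heapq is a min-heap, so the score is negated to pop the most severe path first.
    frontier = [(-0.0, start, [])]
    while frontier:
        neg_score, host, path = heapq.heappop(frontier)
        if host == target:
            return -neg_score, path
        for pre, post, cvss in EXPLOITS:
            if pre == host and post not in [p[1] for p in path]:
                heapq.heappush(frontier, (neg_score - cvss, post, path + [(pre, post)]))
    return None

print(critical_path("internet", "database"))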

    Towards ad-hoc situation determination

    Get PDF
    Toolkits such as PlaceLab [1] have been successful in making location information freely available for use in experimental ubiquitous computing applications. As users' expectations of ubiquitous computing applications grow, we envisage a need for tools that can deliver a much richer set of contextual information. The high-level situation of the current environment is a key contextual element, and this position paper focuses on a method to provide this information for an ad-hoc group of people and devices. The contributions of this paper are i) a demonstration of how information retrieval (IR) techniques can be applied to situation determination in context-aware systems, ii) a proposal of a novel approach to situation determination that combines these adapted IR techniques with a process of cooperative interaction, and iii) a report of preliminary results. The approach offers a high level of utility and accuracy, with a greater level of automation than other contemporary approaches.
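
    The IR-style matching described above can be illustrated by treating the currently observed context as a query and candidate situations as documents, ranked by cosine similarity over their term vectors. The vocabularies and situation labels below are hypothetical and are not the authors' system.

# Minimal sketch of IR-based situation determination via cosine similarity.
# Hypothetical situation descriptions and context terms.

from collections import Counter
from math import sqrt

SITUATIONS = {
    "meeting": "projector laptop slides quiet seated group indoor",
    "lecture": "projector speaker audience seated rows indoor",
    "coffee_break": "standing chatter cups kitchen informal",
}

def cosine(a, b):
    """Cosine similarity between two bags of words."""
    ca, cb = Counter(a.split()), Counter(b.split())
    dot = sum(ca[t] * cb[t] for t in ca)
    norm = sqrt(sum(v * v for v in ca.values())) * sqrt(sum(v * v for v in cb.values()))
    return dot / norm if norm else 0.0

# Context terms reported by the ad-hoc group of people and devices.
observed_context = "laptop projector seated quiet indoor group"

best = max(SITUATIONS, key=lambda s: cosine(observed_context, SITUATIONS[s]))
print("Most likely situation:", best)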