
    An Unexpected Journey: Towards Runtime Verification of Multiagent Systems and Beyond

    The Trace Expression formalism derives from work started in 2012 and is mainly used to specify and verify interaction protocols at runtime, but other applications have been devised. More specifically, this thesis describes how to extend and apply the formalism in the engineering process of distributed artificial intelligence systems (such as multiagent systems). The thesis extends the state of the art through four different contributions:
    1. Theoretical: the original formalism is extended to also represent parametric and probabilistic specifications (parametric trace expressions and probabilistic trace expressions, respectively).
    2. Algorithmic: algorithms are proposed for verifying trace expressions at runtime in a decentralized way. The algorithms are designed to be as general as possible, but their implementation and experimentation address scenarios where the modelled and observed events are communicative events (interactions) inside a multiagent system.
    3. Application: the thesis analyzes the relations between runtime and static verification (e.g. model checking), proposing hybrid integrations in both directions. It first proposes a trace expression model checking approach, showing how to statically verify an LTL property against a trace expression specification, and then presents a novel approach for supporting static verification through the addition of monitors at runtime (post-process).
    4. Implementation: the thesis presents RIVERtools, a tool supporting the writing, syntactic analysis and decentralization of trace expressions.
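
    As a concrete illustration of the runtime-verification idea, the following is a minimal sketch of a monitor that checks a stream of communicative events against a simple interaction protocol. The protocol, the event encoding and the ProtocolMonitor class are illustrative assumptions, not the thesis's actual Trace Expression machinery or the RIVERtools API.

```python
# Sketch: runtime verification of an interaction protocol.
# Protocol (assumed for illustration): every request must eventually
# receive a reply, and every reply must match a pending request.

class ProtocolMonitor:
    def __init__(self):
        self.pending = set()  # ids of requests still awaiting a reply

    def observe(self, event):
        """Consume one communicative event; raise on a protocol violation."""
        kind, msg_id = event
        if kind == "request":
            self.pending.add(msg_id)
        elif kind == "reply":
            if msg_id not in self.pending:
                raise RuntimeError(f"reply {msg_id} has no matching request")
            self.pending.discard(msg_id)
        else:
            raise RuntimeError(f"unexpected event kind: {kind!r}")

    def end_of_trace(self):
        """Check that no request was left unanswered."""
        if self.pending:
            raise RuntimeError(f"unanswered requests: {sorted(self.pending)}")

monitor = ProtocolMonitor()
for ev in [("request", 1), ("request", 2), ("reply", 1), ("reply", 2)]:
    monitor.observe(ev)   # each event is checked as it is observed
monitor.end_of_trace()    # the observed trace satisfies the protocol
```

    A decentralized version, in the spirit of the thesis's algorithmic contribution, would run one such monitor per agent or per protocol fragment rather than a single centralized one.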

    Holistic, data-driven, service and supply chain optimisation: linked optimisation.

    The intensity of competition and technological advancement in the business environment has led companies to collaborate and cooperate as a means of survival. This creates a chain of companies and business components with unified business objectives. However, managing the decision-making processes (such as scheduling, ordering, delivering and allocating) at the various business components while maintaining a holistic objective is a huge business challenge, as these operations are complex and dynamic. This is because the overall chain of business processes is widely distributed across all the supply chain participants; therefore, no individual collaborator has a complete overview of the processes. Increasingly, such decisions are automated and strongly supported by optimisation algorithms: manufacturing optimisation, B2B ordering, financial trading, transportation scheduling and allocation. However, most of these algorithms do not incorporate the complexity associated with interacting decision-making systems like supply chains. It is well known that decisions made at one point in a supply chain can have significant consequences that ripple through linked production and transportation systems. Recently, global shocks to supply chains (COVID-19, climate change, the blockage of the Suez Canal) have demonstrated the importance of these interdependencies and the need to create supply chains that are more resilient and have a significantly reduced impact on the environment. Such interacting decision-making systems need to be considered through an optimisation process; however, the interactions between them are not modelled. Modelling such interactions is therefore an opportunity to provide computational extensions to current optimisation paradigms. This research study aims to develop a general framework for formulating and solving holistic, data-driven optimisation problems in service and supply chains. The research achieved this aim and contributes to scholarship by, firstly, considering the complexities of supply chain problems from a linked-problem perspective, which leads to a formalism for characterising linked optimisation problems as a model for supply chains. Secondly, the research adopts a method for creating a linked optimisation problem benchmark by linking existing classical benchmark sets, using a mix of classical optimisation problems, typically relating to supply chain decision problems, to describe different modes of linkage in linked optimisation problems (a one-way linkage is sketched below). Thirdly, several techniques for linking fragmented supply chain data have been proposed in the literature to identify data relationships; this thesis explores some of these techniques and combines them in specific ways to improve the data discovery process. Lastly, the research investigates resilient state-of-the-art optimisation algorithms presented in the literature and designs suitable algorithmic approaches, inspired by the existing algorithms and the nature of the problem linkages, to address different problem linkages in supply chains.
    Considering the research findings and future perspectives, the study demonstrates the suitability of algorithms for different linked structures involving two sub-problems, which suggests further investigation of issues such as the suitability of algorithms for more complex structures, benchmark methodologies, holistic goals and evaluation, process mining, game theory and dependency analysis.
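
    To illustrate the notion of a linked optimisation problem, here is a minimal sketch in which the solution of one classical sub-problem becomes the input data of another. The two toy sub-problems, their greedy solvers and the one-way linkage are illustrative assumptions, not the thesis's actual benchmark or algorithms.

```python
# Sketch: a one-way "linked optimisation" structure in which the
# solution of an upstream production problem becomes the input data
# of a downstream transportation problem. Both toy problems and the
# greedy solvers are assumptions for illustration only.

def solve_production(capacity, items):
    """Greedy knapsack: pick products to make under a capacity limit."""
    chosen, used = [], 0
    # prefer items with the highest value per unit of capacity
    for name, size, value in sorted(items, key=lambda it: -it[2] / it[1]):
        if used + size <= capacity:
            chosen.append(name)
            used += size
    return chosen

def solve_transport(products, truck_capacity):
    """First-fit packing: assign the produced goods to trucks."""
    trucks = []
    for p in products:
        for load in trucks:
            if len(load) < truck_capacity:
                load.append(p)
                break
        else:
            trucks.append([p])
    return trucks

# The linkage: the downstream problem instance is constructed from the
# upstream solution, so upstream decisions ripple into transport costs.
items = [("A", 3, 9), ("B", 2, 5), ("C", 4, 6)]  # (name, size, value)
produced = solve_production(capacity=6, items=items)
plan = solve_transport(produced, truck_capacity=2)
print(produced)  # ['A', 'B']
print(plan)      # [['A', 'B']]
```

    Solving the two sub-problems jointly rather than sequentially is exactly where a holistic, linked formulation can outperform the chained greedy decisions shown here.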

    Hazard elimination using backwards reachability techniques in discrete and hybrid models

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, February 2002. Includes bibliographical references (leaves 173-181). One of the most important steps in hazard analysis is determining whether a particular design can reach a hazardous state and, if it can, how to change the design to ensure that it does not. In most cases this is done through testing, simulation, or even less rigorous processes, none of which provide much confidence for complex systems. Because state spaces for software can be enormous (which is why testing is not an effective way to accomplish the goal), the innovative Hazard Automaton Reduction Algorithm (HARA) starts at a hypothetical unsafe state and uses backwards reachability techniques to obtain enough information to determine how to design the system so that this state cannot be reached. State machine models are very powerful, but they also present greater challenges in terms of reachability, including the backwards reachability needed to implement HARA. The key to solving the backwards reachability problem lies in converting the state machine model into a controls state-space formulation and creating a state transition matrix. Each successive step backward from the hazardous state then involves only one n-by-n matrix manipulation, so only a finite number of matrix manipulations is necessary to determine whether one state is reachable from another, providing the same information as a complete backwards reachability graph of the state machine model. Unlike model checking, the computational cost does not grow as steeply with the number of backward states that must be visited to show that the design is safe or to redesign it to be safe. The functionality and optimality of this approach are proved in both the discrete and hybrid cases. The new approach, HARA combined with backwards-reachability controls techniques, was demonstrated on a black-box model of a real aircraft altitude switch. The algorithm is being implemented in a commercial specification language (SpecTRM-RL), which is formally extended to include continuous and hybrid models. An analysis is performed of the safety of a medium-term conflict detection algorithm (MTCD) for aircraft that is being developed and tested by Eurocontrol for use in European Air Traffic Control. Validating such conflict detection algorithms currently challenges researchers worldwide; model checking is unsatisfactory for this problem in general because backwards reachability via model checking lacks a termination guarantee, a problem the new state-space controls approach does not encounter. By Natasha Anita Neogi, Ph.D.
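
    The matrix view of backwards reachability described above can be sketched in a few lines. The four-state automaton below is an illustrative assumption, and the fixed-point iteration (at most n matrix-vector steps) stands in for the thesis's n-by-n manipulations on SpecTRM-RL models.

```python
# Sketch: backwards reachability over a boolean state transition matrix.
# The four-state chain automaton is an assumption for illustration;
# state 3 plays the role of the hazardous state.
import numpy as np

# T[i, j] = True iff the system can step from state i to state j.
T = np.array([
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
    [0, 0, 0, 0],
], dtype=bool)

def backwards_reachable(T, hazard):
    """Return every state from which `hazard` is reachable (itself included)."""
    n = T.shape[0]
    reached = np.zeros(n, dtype=bool)
    reached[hazard] = True
    for _ in range(n):  # the fixed point is found in at most n steps
        # one matrix-vector step: predecessors of the current set
        step = (T.astype(int) @ reached.astype(int)) > 0
        updated = reached | step
        if np.array_equal(updated, reached):
            break
        reached = updated
    return np.flatnonzero(reached)

# A design is safe only if none of its initial states appear in this set.
print(backwards_reachable(T, hazard=3))  # -> [0 1 2 3]
```

    Because the iteration terminates after at most n steps, this formulation avoids the non-termination risk of backwards reachability in general model checking that the abstract mentions.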

    Complexity Reduction in Image-Based Breast Cancer Care

    The diversity of malignancies of the breast requires personalized diagnostic and therapeutic decision making in a complex situation. This thesis contributes in three clinical areas: (1) For clinical diagnostic image evaluation, computer-aided detection and diagnosis of mass and non-mass lesions in breast MRI are developed. 4D texture features characterize mass lesions; for non-mass lesions, a combined detection/characterisation method exploits the bilateral symmetry of the breast's contrast agent uptake. (2) To improve clinical workflows, a breast MRI reading paradigm is proposed, exemplified by a breast MRI reading workstation prototype that is operated using multi-touch gestures instead of mouse and keyboard. The concept is extended to mammography screening, introducing efficient navigation aids. (3) Contributions to finite element modeling of breast tissue deformations tackle two clinical problems: surgery planning and the prediction of breast deformation in an MRI biopsy device.
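
    As a rough illustration of the bilateral-symmetry idea for non-mass lesion detection, the following sketch mirrors the contrast-uptake map of one breast onto the other and flags strongly asymmetric voxels. The synthetic data and the simple thresholding rule are illustrative assumptions, not the method developed in the thesis.

```python
# Sketch: flagging asymmetric contrast-agent uptake by mirroring one
# breast onto the other. The synthetic uptake map and the threshold
# are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)
uptake = rng.random((64, 128))        # toy uptake map: left | right breast
uptake[20:24, 100:104] += 2.0         # synthetic focal enhancement (right)

left, right = uptake[:, :64], uptake[:, 64:]
asymmetry = np.abs(right - np.fliplr(left))   # mirror left onto right

# voxels whose uptake deviates strongly from the mirrored side are
# candidate (non-mass) lesion locations
threshold = asymmetry.mean() + 3 * asymmetry.std()
candidates = np.argwhere(asymmetry > threshold)
print(len(candidates), "candidate voxels flagged")
```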

    Smart Monitoring and Control in the Future Internet of Things

    The Internet of Things (IoT) and related technologies promise pervasive and smart applications which, in turn, have the potential to improve the quality of life of people living in a connected world. According to the IoT vision, all things can cooperate amongst themselves and be managed from anywhere via the Internet, allowing tight integration between the physical and cyber worlds and thus improving efficiency, promoting usability, and opening up new application opportunities. Nowadays, IoT technologies have been successfully exploited in several domains, providing both social and economic benefits. Realizing the full potential of the next generation of the Internet of Things still requires further research efforts concerning, for instance, the identification of new architectures, methodologies, and infrastructures dealing with distributed and decentralized IoT systems; the integration of IoT with cognitive and social capabilities; the enhancement of the sensing–analysis–control cycle; the integration of consciousness and awareness in IoT environments; and the design of new algorithms and techniques for managing IoT big data. This Special Issue is devoted to advancements in technologies, methodologies, and applications for IoT, together with emerging standards and research topics which would lead to the realization of the future Internet of Things.

    UTPA Graduate Catalog 1998-2000

    https://scholarworks.utrgv.edu/edinburglegacycatalogs/1078/thumbnail.jp

    The archive saga: shepherds of data, documents and code, and their will to order

    This thesis is an enquiry into expert-intensive work to support the aggregation and mass-dissemination of scientific articles. The enquiry draws on computer-supported actions and interactions between the people who worked as daily operators and the submitters who worked the system from remote as end-users. These recorded actions and interactions render visible matters of practical responsibility and competence with reference to the reportable-accountable use of objects and devices. As a scholarly contribution, this thesis draws on the work started by Garfinkel and his followers, i.e., the study programme of ethnomethodology. It poses a simple question: how do the people involved in these computer-supported actions and interactions do just what they do? In attempting to answer that question, the enquiry shows that competent and accountable use of system supports is manifested with reference to particular phenomena that need constant management (e.g. warnings, anomalies, unknown entities, failing processes, disorderly work objects, and spurious actions); in short, phenomena of disorder. Records of how these phenomena are detected and attended to, and how particular problems are solved, reveal a complex relationship between computational functions and subtle human judgements. Without making generalised theoretical claims about this relationship, I conclude that remarkably little is known about the actual lived work that takes place in the operation and use of computer systems that are worked specifically to detect and cope with 'anomalies' and thus require detective and reactive labour. I offer this enquiry into the goings-on at this particular site as a singular opportunity to respecify problems of disorder.