
    Workshop on Modelling of Objects, Components, and Agents, Aarhus, Denmark, August 27-28, 2001

    This booklet contains the proceedings of the workshop on Modelling of Objects, Components, and Agents (MOCA'01), held August 27-28, 2001. The workshop is organised jointly by the CPN group at the Department of Computer Science, University of Aarhus, Denmark, and the "Theoretical Foundations of Computer Science" Group at the University of Hamburg, Germany. The papers are also available in electronic form via the web page: http://www.daimi.au.dk/CPnets/workshop01

    A modelling and simulation framework for health care systems.

    In this paper, we propose a new modeling methodology named MedPRO for addressing organizational problems of health care systems. It is based on a metamodel with three different views: a process view (care pathways of patients), a resource view (activities of relevant resources), and an organization view (dependence and organization of resources). The resulting metamodel can be instantiated for a specific health care system and converted into an executable model for simulation by means of a special class of Petri nets (PNs), called Health Care Petri Nets (HCPNs). HCPN models also serve as a basis for short-term planning and scheduling of health care activities. As a result, the MedPRO methodology leads to a fast-prototyping tool for easy and rigorous modeling and simulation of health care systems. A case study is presented to show the benefits of the MedPRO methodology.
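    The abstract describes converting the metamodel into an executable Petri-net model. As a rough illustration of what "executable" means here, the sketch below is a minimal place/transition net simulator in Python; the places, transitions, and care-pathway steps are invented and far simpler than the Health Care Petri Nets the paper defines.

```python
# Minimal place/transition Petri net sketch of a toy care pathway.
# Places, transitions, and token counts are hypothetical; HCPNs in the
# paper are a richer, specialised class of Petri nets.

class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)   # place -> token count
        self.transitions = {}          # name -> (input weights, output weights)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= w for p, w in inputs.items())

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"transition {name} is not enabled")
        inputs, outputs = self.transitions[name]
        for p, w in inputs.items():
            self.marking[p] -= w
        for p, w in outputs.items():
            self.marking[p] = self.marking.get(p, 0) + w

# Toy pathway: patients wait, see a doctor (a shared resource), then leave.
net = PetriNet({"waiting": 3, "doctor_free": 1, "done": 0})
net.add_transition("start_consult", {"waiting": 1, "doctor_free": 1}, {"in_consult": 1})
net.add_transition("end_consult", {"in_consult": 1}, {"doctor_free": 1, "done": 1})

while net.enabled("start_consult") or net.enabled("end_consult"):
    for t in ("start_consult", "end_consult"):
        if net.enabled(t):
            net.fire(t)
print(net.marking)   # all three patients eventually reach "done"
```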

    Semantics and Verification of UML Activity Diagrams for Workflow Modelling

    This thesis defines a formal semantics for UML activity diagrams that is suitable for workflow modelling. The semantics allows verification of functional requirements using model checking. Since a workflow specification prescribes how a workflow system behaves, the semantics is defined and motivated in terms of workflow systems. As workflow systems are reactive and coordinate activities, the defined semantics reflects these aspects. In fact, two formal semantics are defined, which are completely different. Both semantics are defined directly in terms of activity diagrams and not by a mapping of activity diagrams to some existing formal notation. The requirements-level semantics, based on the Statemate semantics of statecharts, assumes that workflow systems are infinitely fast w.r.t. their environment and react immediately to input events (this assumption is called the perfect synchrony hypothesis). The implementation-level semantics, based on the UML semantics of statecharts, does not make this assumption. Due to the perfect synchrony hypothesis, the requirements-level semantics is unrealistic, but easy to use for verification. On the other hand, the implementation-level semantics is realistic, but difficult to use for verification. A class of activity diagrams and a class of functional requirements are identified for which the outcome of the verification does not depend upon the particular semantics being used, i.e., both semantics give the same result. For such activity diagrams and such functional requirements, the requirements-level semantics is as realistic as the implementation-level semantics, even though the requirements-level semantics makes the perfect synchrony hypothesis. The requirements-level semantics has been implemented in a verification tool. The tool interfaces with a model checker by translating an activity diagram into model checker input according to the requirements-level semantics. The model checker checks the desired functional requirement against the input model. If the model checker returns a counterexample, the tool translates this counterexample back into the activity diagram by highlighting a path corresponding to the counterexample. The tool supports verification of workflow models that have event-driven behaviour, data, real time, and loops. Only model checkers supporting strong fairness model checking turn out to be useful. The feasibility of the approach is demonstrated by using the tool to verify some real-life workflow models.
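    To illustrate the general idea of checking a workflow property and returning a counterexample path (the thesis itself translates activity diagrams into the input language of an existing model checker rather than exploring states directly), the following is a minimal explicit-state reachability check in Python; the toy workflow, its states, and the invariant are invented for illustration.

```python
# Minimal explicit-state invariant check over a toy workflow state space.
# BFS explores reachable states and reports a counterexample trace when
# the invariant is violated.

from collections import deque

def check_invariant(initial, successors, invariant):
    """Return None if the invariant holds in every reachable state, otherwise
    a counterexample path from the initial state to a violating state."""
    parent = {initial: None}
    queue = deque([initial])
    while queue:
        state = queue.popleft()
        if not invariant(state):
            path = []
            while state is not None:
                path.append(state)
                state = parent[state]
            return list(reversed(path))
        for nxt in successors(state):
            if nxt not in parent:
                parent[nxt] = state
                queue.append(nxt)
    return None

# Toy workflow: an order must be approved before it is shipped.
def successors(state):
    approved, shipped = state
    out = []
    if not approved:
        out.append((True, shipped))    # approve the order
    if not shipped:
        out.append((approved, True))   # ship (erroneously allowed before approval)
    return out

violation = check_invariant((False, False), successors,
                            invariant=lambda s: not (s[1] and not s[0]))
print(violation)   # path ending in a state that is shipped but not approved
```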

    On the dynamic inventory routing problem in humanitarian logistics: a simulation optimization approach using agent-based modeling

    In the immediate aftermath of any disaster event, operational decisions are made to relieve the affected population and minimize casualties and human suffering. To do so, humanitarian logistics planners should be supported by strong decision-making tools to better respond to disaster events. One of the most important decisions is the delivery of the correct amount of humanitarian aid at the right moment to the right place. This decision should be made considering the dynamism of disaster response operations, where information is not known beforehand and varies over time. For instance, word-of-mouth effects and shortages can change demand at distribution points and thereby impact the operational decisions. Therefore, the inventory and transportation decisions should be made constantly to better serve the affected people. This work presents a simulation-optimization approach to make disaster relief distribution decisions dynamically. An agent-based simulation model solves the inventory routing problem dynamically, considering changes in the humanitarian supply chain over the planning horizon. Additionally, the inventory routing schemes are produced by a proposed mathematical model that aims to minimize the level of shortage and the inventory at risk (associated with the risk of losing it). The computational proposal is implemented in the ANYLOGIC and CPLEX software. Finally, a case study motivated by the 2017 Mocoa, Colombia landslide is developed using real data and is presented to be used in conjunction with the proposed framework. Computational experiments show the impact of word-of-mouth and of the decision-making frequency on shortages and service levels at distribution points. Therefore, considering changes in demand over the planning horizon contributes to lowering shortages and to making better distribution plans in the response phase of a disaster.
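    A minimal sketch of the simulate-then-optimise loop described above, assuming a greedy allocation in place of the CPLEX mathematical model and invented demand, capacity, and word-of-mouth figures; the actual work couples an AnyLogic agent-based model with CPLEX.

```python
# Simplified dynamic relief-distribution loop. The optimisation step is a
# greedy stand-in for the paper's mathematical model; all numbers are invented.

import random

random.seed(1)

def allocate(stock_available, unmet):
    """Greedy stand-in for the inventory routing optimisation: serve the
    largest unmet demands first until the available stock runs out."""
    shipments = {p: 0 for p in unmet}
    for point in sorted(unmet, key=unmet.get, reverse=True):
        qty = min(unmet[point], stock_available)
        shipments[point] = qty
        stock_available -= qty
    return shipments

points = ["A", "B", "C"]
inventory = {p: 5 for p in points}    # stock held at each distribution point
base_demand = {p: 10 for p in points}
wom_factor = 0.2                      # word-of-mouth demand amplification per period

for period in range(5):
    # Demand is not known beforehand and grows over time via word-of-mouth.
    demand = {p: int(base_demand[p] * (1 + wom_factor * period) * random.uniform(0.8, 1.2))
              for p in points}
    unmet = {p: max(0, demand[p] - inventory[p]) for p in points}
    shipments = allocate(stock_available=20, unmet=unmet)
    for p in points:
        available = inventory[p] + shipments[p]
        served = min(demand[p], available)
        inventory[p] = available - served
        print(f"t={period} point={p} demand={demand[p]} "
              f"shipped={shipments[p]} shortage={demand[p] - served}")
```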

    A Methodology for Internet of Things Business Modeling and Analysis using Agent-Based Simulation

    The Internet of Things (IoT) is a new vision of an integrated network covering physical objects that are able to collect and exchange data. It enables previously unconnected devices and objects to become connected by equipping them with communication technology such as sensors and radio-frequency identification (RFID) tags. As technology progresses towards new paradigms such as IoT, there is a need for an approach to identify the significance of these projects. Conventional simulation modeling and data analysis approaches are either unable to capture the system complexity or suffer from a lack of the data needed to build predictions. Agent-based modeling (ABM) offers an efficient simulation scheme to capture this complexity and provides a potential solution. Two case studies are proposed in this research. The first is a conceptual case study addressing the use of agent-based simulation to verify the effectiveness of IoT business models; its objective is to assess the feasibility of such an application for the market in the city of Orlando (Florida, United States). The second case study uses ABM to simulate the operational behavior of 7,420 refrigeration units in one of the largest retail organizations in Saudi Arabia and to assess the economic feasibility of IoT implementation by estimating the return on investment (ROI).
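    As a rough sketch of the ROI comparison in the second case study, the following Monte Carlo estimate contrasts yearly costs with and without IoT monitoring of the refrigeration units. The unit count matches the abstract, but every probability and cost figure is an invented placeholder, and the thesis itself uses an agent-based model rather than this simplified loop.

```python
# Rough Monte Carlo cost comparison for IoT monitoring of refrigeration units.
# All probabilities and costs below are invented for illustration.

import random

random.seed(42)

N_UNITS = 7420            # refrigeration units in the case study
FAILURE_P = 0.03          # assumed yearly probability a unit develops a fault
BREAKDOWN_COST = 4000.0   # assumed cost of an unnoticed failure (spoilage, repair)
EARLY_FIX_COST = 600.0    # assumed cost when sensors flag the fault early
SENSOR_COST = 50.0        # assumed per-unit yearly cost of the IoT retrofit
DETECTION_RATE = 0.9      # assumed share of faults the sensors catch in time

def yearly_cost(with_iot):
    cost = N_UNITS * SENSOR_COST if with_iot else 0.0
    for _ in range(N_UNITS):
        if random.random() < FAILURE_P:
            caught_early = with_iot and random.random() < DETECTION_RATE
            cost += EARLY_FIX_COST if caught_early else BREAKDOWN_COST
    return cost

baseline = yearly_cost(with_iot=False)
monitored = yearly_cost(with_iot=True)
investment = N_UNITS * SENSOR_COST
roi = (baseline - monitored) / investment
print(f"baseline={baseline:,.0f}  with IoT={monitored:,.0f}  ROI={roi:.2f}")
```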

    A Framework For Workforce Management An Agent Based Simulation Approach

    In today's advanced technology world, enterprises are in a constant state of competition. As the intensity of competition increases, the need to continuously improve organizational performance has never been greater. Managers at all levels must be on a constant quest to find ways to maximize their enterprises' strategic resources. Enterprises can develop sustained competitiveness only if their activities create value in unique ways. There should be an emphasis on transferring this competitiveness to the resources they have on hand and the resources they can develop for use in this environment. The significance of human capital is even greater now, as the intangible value and the tacit knowledge of an enterprise's resources should be strategically managed to achieve a greater level of continuous organizational success. This research effort seeks to provide managers with means for accurate decision making in their workforce management. A framework for modeling and managing human capital to achieve effective workforce planning strategies is built to assist enterprises in their long-term strategic organizational goals.

    A Spatial Agent-based Model for Volcanic Evacuation of Mt. Merapi

    Natural disasters, especially volcanic eruptions, are hazardous events that frequently happen in Indonesia. As a country within the "Ring of Fire", Indonesia has hundreds of volcanoes, and Mount Merapi is the most active. Historical studies of this volcano have revealed that there is potential for a major eruption in the future. Therefore, long-term disaster management is needed. To support disaster management, physical and socially-based research has been carried out, but there is still a gap in the development of evacuation models. This modelling is necessary to evaluate the possibility of unexpected problems in the evacuation process, since hazard occurrences and population behaviour are uncertain. The aim of this research was to develop an agent-based model (ABM) of volcanic evacuation to improve the effectiveness of evacuation management in Merapi. Besides the potential use of the results locally in Merapi, the development process of this evacuation model contributes by advancing the knowledge of ABM development for large-scale evacuation simulation in other contexts. Its novelty lies in (1) integrating a hazard model derived from historical records of the spatial impact of eruptions, (2) formulating and validating an individual evacuation decision model in ABM based on various interrelated factors revealed by literature reviews and surveys, enabling the modelling of reluctant people, (3) formulating the integration of multi-criteria evaluation (MCE) in ABM to build a spatio-temporal dynamic model of risk (STDMR) that represents the changing risk resulting from changes in hazard level, hazard extent, and the movement of people, and (4) formulating an evacuation staging method based on MCE using geographic and demographic criteria. The volcanic evacuation model represents the relationships between physical and human agents, consisting of the volcano, stakeholders, the population at risk, and the environment. Experimentation with several evacuation scenarios in Merapi using the developed ABM shows that the simultaneous strategy is superior in reducing risk, but the staged scenario is the most effective in minimising the potential for road traffic problems during evacuation events in Merapi. Staged evacuation can be a good option when there is enough time to evacuate. However, if the evacuation time is limited, the simultaneous strategy is preferable, and appropriate traffic management should be prepared to avoid traffic problems when that option is chosen.
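    The staging method rests on multi-criteria evaluation (MCE). A minimal sketch of a weighted-sum MCE ranking split into evacuation stages is shown below; the settlements, criteria, weights, and figures are invented for illustration and are not taken from the thesis.

```python
# Weighted-sum multi-criteria evaluation (MCE) sketch for evacuation staging:
# score settlements on normalised criteria, evacuate the highest-risk half first.

settlements = {
    #                km from crater, population, hazard zone (3 = highest)
    "Kaliurang":      (6.0,  4200, 3),
    "Hargobinangun":  (9.5,  7800, 2),
    "Umbulharjo":     (7.2,  5100, 3),
    "Wonokerto":      (12.0, 6400, 1),
}
weights = {"proximity": 0.5, "population": 0.2, "hazard": 0.3}

def normalise(values, invert=False):
    """Rescale a criterion to [0, 1]; invert when smaller raw values mean more risk."""
    lo, hi = min(values), max(values)
    return [(hi - v) / (hi - lo) if invert else (v - lo) / (hi - lo) for v in values]

names = list(settlements)
dist, pop, haz = zip(*settlements.values())
scores = {}
for name, d, p, h in zip(names,
                         normalise(dist, invert=True),   # nearer -> higher risk
                         normalise(pop),
                         normalise(haz)):
    scores[name] = (weights["proximity"] * d +
                    weights["population"] * p +
                    weights["hazard"] * h)

# Split the ranking into two stages: the highest-scoring half evacuates first.
ranked = sorted(scores, key=scores.get, reverse=True)
stages = {1: ranked[: len(ranked) // 2], 2: ranked[len(ranked) // 2:]}
print(stages)
```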

    Data and Design: Advancing Theory for Complex Adaptive Systems

    Complex adaptive systems exhibit certain types of behaviour that are difficult to predict or understand using reductionist approaches, such as linearization or assuming conditions of optimality. This research focuses on the complex adaptive systems associated with public health. These are noted for being driven by many latent forces, shaped centrally by human behaviour. Dynamic simulation techniques, including agent-based models (ABMs) and system dynamics (SD) models, have been used to study the behaviour of complex adaptive systems, including in public health. While much has been learned, such work is still hampered by important limitations. Models of complex systems can themselves be quite complex, which increases the difficulty of explaining unexpected model behaviour, whether that behaviour comes from model code errors or reflects new learning. Model complexity also leads to model designs that are hard to adapt to growing knowledge about the subject area, further reducing model-generated insights. In the current literature on dynamic simulation of public health behaviour, few models explicitly capture psychological theories of human behaviour. Given that human behaviour, especially health and risk behaviour, is so central to understanding processes in public health, this work explores several methods to improve the utility and flexibility of dynamic models in public health. This work is undertaken in three projects. The first uses a machine learning algorithm, the particle filter, to augment a simple ABM in the presence of continuous disease prevalence data from the modelled system. It is shown that using the particle filter improves the accuracy of the ABM, although, compared with previous work combining SD with a particle filter, the ABM has some limitations, which are discussed. The second presents a model design pattern that focuses on scalability and modularity to improve the development time, testability, and flexibility of a dynamic simulation for tobacco smoking. This method also supports a general pattern for constructing hybrid models (those that contain elements of multiple methods, such as agent-based and system dynamics), and is demonstrated with a stylized example of tobacco smoking in a human population. The final line of work implements this modular design pattern, with differing mechanisms of addiction dynamics, within a rich behavioural model of tobacco purchasing and consumption. It integrates the results from a discrete choice experiment, a widely used economic method for studying human preferences, and compares and contrasts four independent addiction modules under different population assumptions. A number of important insights are discussed: no single module was universally more accurate across all human subpopulations, demonstrating the benefit of exploring a diversity of approaches; increasing the number of parameters does not necessarily improve a module's predictions, since the overall least accurate module had the second-highest number of parameters; and slight changes in module structure can lead to drastic improvements, implying the need to be able to iteratively learn from model behaviour.
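    The first project augments an ABM with a particle filter. Below is a minimal bootstrap particle filter sketch in Python against an invented toy infection process and invented prevalence observations; it only illustrates the predict, weight, and resample cycle, not the thesis's actual model or data.

```python
# Minimal bootstrap particle filter over a toy infection process. Particles
# carry a latent infection count, are advanced by a noisy process model,
# weighted against observed prevalence, and resampled.

import math
import random

random.seed(0)

N_PARTICLES = 500
POPULATION = 1000

def step(infected):
    """Toy process model: infections spread stochastically, some recover."""
    new = sum(random.random() < 0.002 * infected
              for _ in range(POPULATION - infected))
    recovered = sum(random.random() < 0.1 for _ in range(infected))
    return max(0, min(POPULATION, infected + new - recovered))

def weight(observed, predicted, tol=25.0):
    """Gaussian-style likelihood: closer predictions get larger weights."""
    return math.exp(-((observed - predicted) / tol) ** 2)

observations = [12, 20, 31, 48, 70]           # invented prevalence data
particles = [random.randint(5, 20) for _ in range(N_PARTICLES)]

for obs in observations:
    particles = [step(p) for p in particles]                        # predict
    weights = [weight(obs, p) for p in particles]                   # update
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    particles = random.choices(particles, weights, k=N_PARTICLES)   # resample
    estimate = sum(particles) / N_PARTICLES
    print(f"observed={obs:3d}  filtered estimate={estimate:6.1f}")
```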