Digital twin as risk-free experimentation aid for techno-socio-economic systems
Environmental uncertainties and hyperconnectivity force techno-socio-economic systems to introspect and adapt in order to succeed and survive. Current practice is chiefly intuition-driven, which is inconsistent with the need for precision and rigour. We propose that this can be addressed through digital twins, combining results from Modelling & Simulation, Artificial Intelligence, and Control Theory to create a risk-free ‘in silico’ experimentation aid that helps to: (i) understand why the system is the way it is, (ii) prepare for possible outlier conditions, and (iii) identify plausible solutions for mitigating those outlier conditions in an evidence-backed manner. We use reinforcement learning to systematically explore the digital twin's solution space. Our proposal is significant because it extends the effective use of digital twins to new problem domains with greater impact potential. Our approach contributes a meta-model for a simulatable digital twin of industry-scale techno-socio-economic systems, an agent-based implementation of the digital twin, and an architecture that serves as a risk-free experimentation aid supporting simulation-based, evidence-backed decision-making. We also discuss validation of this approach, the associated technology infrastructure, and the architecture through a representative sample of industry-scale real-world use cases.
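The exploration loop described in the abstract, reinforcement learning probing a simulatable twin, can be sketched minimally as an epsilon-greedy agent that runs candidate interventions through a stand-in simulator and keeps a value estimate per action. The `simulate` interface, the toy reward, and the "capacity lever" are assumptions for illustration, not the paper's actual meta-model or agent-based implementation.

```python
import random

def explore_twin(simulate, actions, episodes=1000, eps=0.2, alpha=0.1, seed=0):
    """Epsilon-greedy exploration of a 'digital twin' simulator.

    `simulate` stands in for the in-silico experimentation aid: it takes a
    candidate intervention and returns a scalar outcome. Only the twin is
    touched, so the exploration is risk-free."""
    rng = random.Random(seed)
    q = {a: 0.0 for a in actions}             # value estimate per intervention
    for _ in range(episodes):
        if rng.random() < eps:                # explore a random intervention
            a = rng.choice(actions)
        else:                                 # exploit the current best
            a = max(q, key=q.get)
        q[a] += alpha * (simulate(a) - q[a])  # incremental value update
    return q

# Toy twin: outcome peaks when a hypothetical capacity lever is set to 2.
q = explore_twin(lambda a: 1.0 - 0.1 * (a - 2) ** 2, actions=[0, 1, 2, 3])
```

After enough episodes the value estimates rank intervention 2 highest, which is the evidence-backed recommendation the twin would surface to a decision-maker.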
Querying histories of organisation simulations
Industrial Dynamics involves system modelling, simulation, and evaluation leading to policy making. Traditional approaches to Industrial Dynamics use expert knowledge to build top-down models that have been criticised for not taking into account the adaptability and sociotechnical features of modern organisations. Furthermore, such models require a priori knowledge of policy-making theorems. This paper advances recent research on bottom-up, agent-based organisational modelling for Industrial Dynamics by presenting a framework in which simulations produce histories that can be used to establish a range of policy-based theorems. The framework is presented and evaluated using a case study implemented with a toolset called ES.
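A minimal sketch of what querying a simulation history might look like: the history is a list of event records and a query is a predicate over them. The event schema and the `query_history` helper are hypothetical illustrations; the abstract does not describe the ES toolset's actual query interface.

```python
def query_history(history, predicate):
    """Return the events in a simulation history satisfying `predicate`.
    A history is a list of event records (dicts); the schema below is an
    illustrative assumption, not the ES toolset's format."""
    return [event for event in history if predicate(event)]

# One simulated run of a small organisation model:
history = [
    {"tick": 1, "agent": "clerk",   "event": "task_start", "backlog": 4},
    {"tick": 2, "agent": "clerk",   "event": "task_done",  "backlog": 3},
    {"tick": 3, "agent": "manager", "event": "reassign",   "backlog": 3},
]
# Policy-style query: when was any agent's backlog above 3?
overloaded = query_history(history, lambda e: e["backlog"] > 3)
```

A policy-based theorem would then be a property checked over many such histories, e.g. "after a reassignment, no agent's backlog exceeds 3".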
A model based approach for complex dynamic decision-making
The current state of practice and state of the art in decision-making aids are inadequate for modern organisations that deal with significant uncertainty and business dynamism. This paper highlights the limitations of prevalent decision-making aids and proposes a model-based approach that advances the modelling abstraction and analysis machinery for complex dynamic decision-making. In particular, it proposes a meta-model to comprehensively represent an organisation, establishes the relevance of model-based simulation as an analysis technique, introduces advancements over actor technology to address the analysis needs, and proposes a method to utilise the proposed modelling abstraction, analysis technique, and analysis machinery in an effective and convenient manner. The approach is illustrated using a near-real-life case study from a business process outsourcing organisation.
Actor based behavioural simulation as an aid for organisational decision making
Decision-making is a critical activity for most modern organisations seeking to stay competitive in a rapidly changing business environment. Effective organisational decision-making requires a deep understanding of various organisational aspects, such as the organisation's goals, structure, business-as-usual operational processes, the environment in which it operates, and the inherent characteristics of the change drivers that may impact it. The size of a modern organisation, its socio-technical characteristics, inherent uncertainty, volatile operating environment, and the prohibitively high cost of incorrect decisions make decision-making a challenging endeavour.
While enterprise modelling and simulation technologies have evolved into a mature discipline for understanding a range of engineering, defence, and control systems, their application in organisational decision-making remains limited. Organisational decision-making approaches prevalent in practice are largely qualitative. Moreover, they mostly rely on human experts, who are often aided only by primitive technologies such as spreadsheets and visual diagrams.
This thesis argues that existing modelling and simulation technologies are neither suitable for representing organisation and decision artefacts in a comprehensive and machine-interpretable form, nor do they comprehensively address the analysis needs. An approach that advances the modelling abstraction and analysis machinery for organisational decision-making is proposed. In particular, this thesis proposes a domain-specific language to represent the aspects of an organisation relevant to decision-making, establishes the relevance of a bottom-up simulation technique as a means of analysis, and introduces a method to utilise the proposed modelling abstraction, analysis technique, and analysis machinery in an effective and convenient manner.
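The bottom-up, actor-based simulation style this line of work builds on can be illustrated with a bare-bones actor: a named mailbox plus a behaviour function, driven by a cooperative scheduler. This is a generic sketch of the actor model, not the thesis's actual analysis machinery.

```python
from collections import deque

class Actor:
    """A minimal actor: a named mailbox plus a behaviour function."""
    def __init__(self, name, behaviour):
        self.name, self.behaviour = name, behaviour
        self.mailbox = deque()

    def send(self, msg):
        self.mailbox.append(msg)        # asynchronous message delivery

    def step(self):
        if self.mailbox:                # process at most one message per tick
            self.behaviour(self, self.mailbox.popleft())

def run(actors, steps):
    """Cooperative scheduler: each tick, every actor handles one message."""
    for _ in range(steps):
        for actor in actors:
            actor.step()

# A worker actor that records the tasks it processes:
processed = []
worker = Actor("worker", lambda a, msg: processed.append((a.name, msg)))
worker.send("task-1")
worker.send("task-2")
run([worker], steps=2)
```

An organisational simulation composes many such actors (workers, managers, customers) and observes the behaviour that emerges from their message exchanges, rather than prescribing it top-down.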
EU Cohesion Policy Implementation - Evaluation Challenges and Opportunities
This open access book is the result of the 1st International Conference on Evaluating Challenges in the Implementation of EU Cohesion Policy (EvEUCoP 2022). It presents recent findings, sparks discussion, and reveals new research paths addressing the use of novel methodologies and approaches to tackle the challenges and opportunities unveiled by the implementation of the EU cohesion policy. The authors cover a wide range of topics, including the monitoring of data; the clarity of indicators in measuring the impact of interventions; novel evaluation methods addressing mid-term and terminal assessment; and case studies and applications on evaluations of the thematic objectives under the scrutiny of the cohesion policy, namely: • research, technological development, and innovation; • information and communication technologies; • the shift toward a low-carbon economy. During the 2014-2020 programming period, member states were required to undertake assessments to evaluate the efficacy, efficiency, and impact of each operational program. Such evaluations are generally concerned with the compliance of projects and activities with programmatic priorities, as well as with funds' absorption capacity, and refer to ex-ante and ex-post assessments. Hence, this book proposes novel methodologies addressing the mid-term and terminal assessments that enable an efficiency appraisal of the operational programs and that can support decision-makers in selecting the projects that should be awarded funding.
Information-theoretic and stochastic methods for managing the quality of service and satisfaction in healthcare systems
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. This research investigates and develops a new approach to the management of service quality, with an emphasis on patient and staff satisfaction, in the healthcare sector. The challenge of measuring the quality of service in healthcare requires us to view the problem from multiple perspectives: at the philosophical level, the true nature of quality is still debated; at the psychological level, an accurate conceptual representation is problematic; whilst at the physical level, an accurate measurement of the concept remains elusive to practitioners and academics. This research focuses on the problem of quality measurement in the healthcare sector. Its contributions are fourfold. Firstly, it argues that, from a technological point of view, research to date into quality of service in healthcare has not considered methods of real-time measurement and monitoring; this research identifies the key elements necessary for developing a real-time quality monitoring system for the healthcare environment. Secondly, a unique index is proposed for the monitoring and improvement of healthcare performance using an information-theoretic entropy formalism. The index is formulated on the basis of five key performance indicators and was tested as a Healthcare Quality Index (HQI) based on three key quality indicators, dignity, confidence, and communication, in an Accident and Emergency department. Thirdly, using an M/G/1 queuing model and its underlying Little's Law, the concept of Effective Satisfaction in healthcare is proposed. The concept is based on a Staff-Patient Satisfaction Relation Model (S-PSRM), developed using a patient satisfaction model and an empirically tested model for measuring staff satisfaction with workload (service time). The argument is made that a synergy between patient satisfaction and staff satisfaction is the key to sustainable improvement in healthcare quality. The final contribution is the proposal of a Discrete Event Simulation (DES) modelling platform as a descriptive model that captures the random and stochastic nature of the healthcare service provision process, demonstrating the applicability of the proposed quality measurement models.
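The queueing result underpinning the third contribution can be made concrete. For an M/G/1 queue, the Pollaczek-Khinchine formula gives the mean waiting time, and Little's Law (L = lambda * W) converts times into queue lengths. The numbers below are illustrative, not the thesis's A&E data.

```python
def mg1_metrics(lam, es, es2):
    """Mean performance of an M/G/1 queue via the Pollaczek-Khinchine
    formula, plus Little's Law (L = lambda * W).

    lam -- arrival rate; es -- E[S], mean service time; es2 -- E[S^2]."""
    rho = lam * es                        # server utilisation
    assert rho < 1, "queue must be stable (rho < 1)"
    wq = lam * es2 / (2 * (1 - rho))      # mean wait in queue (P-K formula)
    w = wq + es                           # mean time in system
    return {"rho": rho, "Wq": wq, "W": w, "L": lam * w}

# Exponentially distributed service with mean 1 (so E[S^2] = 2),
# patients arriving at rate 0.8 per unit time:
m = mg1_metrics(lam=0.8, es=1.0, es2=2.0)   # Wq = 4.0, L = 4.0
```

Such a model links a staff-side quantity (service time, a proxy for workload) to a patient-side quantity (waiting), which is the kind of coupling the S-PSRM formalises.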
Social, environmental and economic impacts of alternative energy and fuel supply chains
Energy supply, being a vital element of a country's development, must nowadays meet diverse sustainability criteria, be they economic, environmental, or social. The main goal of this research is to present a methodological framework for the evaluation of alternative energy and fuel Supply Chains (SCs). It comprises a broad topology (representation) encompassing all the well-known energy and fuel SCs under a unified scheme, a set of performance measures and indices, and a mathematical model formulated as a Multi-objective Linear Program, extended to incorporate binary decisions as well (Multi-objective Mixed-Integer Linear Programming). Basic characteristics of the modelling approach include its adaptability to different levels of energy SC decisions, different time frames, and multiple stakeholders. The model is evaluated for a set of Greek islands in the Aegean Archipelago, examining both the existing energy supply options and future, more sustainable Energy Supply Chain (ESC) configurations. The results reveal the social and environmental costs that are underestimated under traditional evaluation of energy supply options, as well as the benefits that renewable-energy-based applications may produce in terms of social security and employment.
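The binary part of such a multi-objective model can be illustrated by brute force: enumerate build/skip decisions over candidate supply options and keep the non-dominated portfolios. The option names and figures below are invented for illustration; the actual study formulates and solves a Multi-objective Mixed-Integer Linear Program rather than enumerating.

```python
from itertools import product

# Hypothetical per-option figures (illustrative, not the study's data):
# (annual cost, CO2 emissions, jobs created) for each candidate option.
OPTIONS = {
    "diesel_plant": (100, 90, 5),
    "wind_farm":    (120, 10, 8),
    "solar_pv":     (110, 15, 6),
    "cable_link":   (140, 40, 3),
}

def pareto_front(k=2):
    """Enumerate build/skip decisions (the model's binary variables), keep
    portfolios of exactly k options, and drop dominated ones. Jobs are
    negated so that every objective is minimised."""
    portfolios = []
    for choice in product([0, 1], repeat=len(OPTIONS)):
        if sum(choice) != k:
            continue
        picked = [v for c, v in zip(choice, OPTIONS.values()) if c]
        cost, co2, jobs = (sum(vals) for vals in zip(*picked))
        portfolios.append((cost, co2, -jobs))
    return [p for p in portfolios
            if not any(all(q[i] <= p[i] for i in range(3)) and q != p
                       for q in portfolios)]

front = pareto_front()
```

With these toy numbers the wind-plus-solar portfolio survives on emissions while the diesel portfolios survive on cost, exposing exactly the cost-versus-externality trade-off the framework is designed to surface.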
Learning Augmented Optimization for Network Softwarization in 5G
The rapid uptake of mobile devices and applications is posing unprecedented traffic burdens on existing networking infrastructures. To maximize both user experience and return on investment, networking and communications systems are evolving to the next generation, 5G, which is expected to support more flexibility, agility, and intelligence in provisioned services and infrastructure management. Fulfilling these tasks is challenging, as today's networks are increasingly heterogeneous, dynamic, and large. Network softwarization is one of the critical enabling technologies for implementing these requirements in 5G. In addition to the problems investigated in preliminary research on this technology, many newly emerging application requirements and advanced optimization and learning technologies introduce further challenges and opportunities for its full application in practical production environments. This motivates this thesis to develop a new learning-augmented optimization technology, which merges advanced optimization and learning techniques to meet the distinct characteristics of the new application environment. The key contributions of this thesis are as follows:
• We first develop a stochastic solution to augment the optimization of Network Function Virtualization (NFV) services in dynamic networks. In contrast to the dominant NFV solutions designed for deterministic networking environments, the inherent network dynamics and uncertainties of 5G infrastructure are impeding the rollout of NFV in many emerging networking applications. Chapter 3 therefore investigates the network utility degradation that arises when implementing NFV in dynamic networks, and proposes a robust NFV solution that fully respects the underlying stochastic features. By exploiting the hierarchical decision structure of this problem, a distributed computing framework with two-level decomposition is designed to facilitate a distributed implementation of the proposed model in large-scale networks.
• Next, Chapter 4 aims to intertwine traditional optimization and learning technologies. To reap the merits of both while avoiding their limitations, promising integrative approaches are investigated that combine traditional optimization theory with advanced learning methods. An online optimization process is then designed to learn the system dynamics for the network slicing problem, another critical challenge for network softwarization. Specifically, we first present a two-stage slicing optimization model with time-averaged constraints and objective to safeguard network slicing operations in time-varying networks. Directly solving this problem offline is intractable, since future system realizations are unknown before decisions are made. To address this, we combine historical learning with Lyapunov stability theory and develop a learning-augmented online optimization approach. This enables the system to learn a safe slicing solution from both historical records and real-time observations. We prove that the proposed solution is always feasible and nearly optimal, up to a constant additive factor. Finally, simulation experiments demonstrate the considerable improvement achieved by the proposals.
• Traditional solutions for optimizing stochastic systems often require solving a base optimization program repeatedly until convergence; in each iteration, the base program has the same model structure and differs only in its input data. These properties motivate the work of Chapter 5, in which we apply recent deep learning techniques to abstract the core structure of an optimization model and then use the learned model to directly generate solutions to the equivalent optimization problem. To this end, an encoder-decoder based learning model is developed in Chapter 5 to improve the optimization of network slices. To facilitate solving the constrained combinatorial optimization program in a deep learning manner, we design a problem-specific decoding process that integrates the program's constraints and problem context information into the training process. Once trained, the deep learning model can directly generate the solution to any specific problem instance, avoiding the extensive computation of traditional approaches, which re-solve the whole combinatorial optimization problem from scratch for every instance. With the help of the REINFORCE gradient estimator, the obtained deep learning model achieves significantly reduced computation time and optimality loss in our experiments.
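The online approach of Chapter 4 rests on a standard Lyapunov device: a virtual queue that accumulates constraint violation, with each slot's decision greedily trading off immediate cost against queue backlog. The sketch below is the generic drift-plus-penalty pattern with an invented cost function and constraint, not the thesis's slicing model.

```python
def drift_plus_penalty(demands, V=5.0, x_max=3, cost=lambda x: x * x):
    """Generic Lyapunov drift-plus-penalty control (a sketch of the pattern,
    not the thesis's slicing model). The virtual queue Q tracks violation of
    the time-averaged constraint avg(x_t) >= avg(d_t); each slot greedily
    minimises V*cost(x) + Q*(d_t - x) over feasible allocations."""
    Q, xs = 0.0, []
    for d in demands:
        x = min(range(x_max + 1), key=lambda y: V * cost(y) + Q * (d - y))
        Q = max(Q + d - x, 0.0)        # virtual queue update
        xs.append(x)
    return xs, Q

# Constant unit demand: allocations lag briefly while Q builds up, then
# track the demand while the virtual queue stays bounded.
xs, q_final = drift_plus_penalty([1] * 50)
```

Larger V weights cost more heavily at the price of a larger (but still bounded) queue, which is the feasibility-versus-optimality trade-off behind the "nearly optimal, up to a constant additive factor" guarantee.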