A dynamic network model with persistent links and node-specific latent variables, with an application to the interbank market
We propose a dynamic network model where two mechanisms control the
probability of a link between two nodes: (i) the existence or absence of this
link in the past, and (ii) node-specific latent variables (dynamic fitnesses)
describing the propensity of each node to create links. Assuming a Markov
dynamics for both mechanisms, we propose an Expectation-Maximization algorithm
for model estimation and inference of the latent variables. The estimated
parameters and fitnesses can be used to forecast the presence of a link in the
future. We apply our methodology to the e-MID interbank network for which the
two linkage mechanisms are associated with two different trading behaviors in
the process of network formation, namely preferential trading and trading
driven by node-specific characteristics. The empirical results allow us to recognise preferential lending in the interbank market and indicate how a method that does not account for time-varying network topologies tends to overestimate preferential linkage. (19 pages, 6 figures)
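As a sketch of how such a two-mechanism link model can be parameterized, the snippet below combines a link-persistence term with node-specific fitnesses through a logistic function. The functional form, parameter values, and names (`alpha`, `theta`) are illustrative assumptions, not the paper's exact specification, and the sketch covers only the forecasting step once the EM estimation is done:

```python
import numpy as np

rng = np.random.default_rng(0)

def link_probability(prev_adj, theta, alpha):
    """P(link i->j at t): logistic in (i) the previous link state and
    (ii) the sum of the two nodes' latent fitnesses."""
    logits = alpha * prev_adj + theta[:, None] + theta[None, :]
    return 1.0 / (1.0 + np.exp(-logits))

# Illustrative forecast for a 5-node network with already-estimated quantities.
theta = rng.normal(-1.0, 0.5, size=5)              # latent node fitnesses
alpha = 2.0                                        # link-persistence parameter
prev_adj = (rng.random((5, 5)) < 0.2).astype(float)
np.fill_diagonal(prev_adj, 0.0)                    # no self-loops

p_next = link_probability(prev_adj, theta, alpha)  # forecast for t+1
np.fill_diagonal(p_next, 0.0)
print(np.round(p_next, 2))
```

With `alpha > 0`, an existing link raises the probability of its own recurrence (the persistence channel), while high-fitness nodes attract links regardless of history (the characteristics channel), which is the separation the paper exploits.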
Modelling and managing systemic risks in supply chains
A structured review of the supply chain and risk management literature supports an analysis of the sources and types of risks anticipated in supply chains and networks. We discuss alternative modelling approaches, such as Bayesian Belief Nets (BBN), System Dynamics, and Fault and Event Trees, which are evaluated against the criteria characterizing systemic risks that emerge from the literature review. Finally, we briefly present an empirical pilot case study conducted with a public sector organization in charge of a pharmaceutical distribution network to explore the feasibility of a BBN modelling approach.
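As a toy illustration of the kind of BBN reasoning evaluated here, consider a three-node net in which two binary risk drivers feed one outcome node. The structure, variable names, and probabilities are invented for illustration and are not taken from the study:

```python
# Toy Bayesian Belief Net: supplier failure (S) and demand surge (D) -> stockout.
p_supplier_fail = 0.10          # P(S=1): supplier disruption
p_demand_surge = 0.20           # P(D=1): unexpected demand surge

# Conditional probability table P(stockout=1 | S, D)
cpt = {(0, 0): 0.01, (0, 1): 0.15, (1, 0): 0.30, (1, 1): 0.70}

# Marginal risk: sum over parent states of P(stockout | s, d) * P(s) * P(d).
p_stockout = sum(
    cpt[(s, d)]
    * (p_supplier_fail if s else 1 - p_supplier_fail)
    * (p_demand_surge if d else 1 - p_demand_surge)
    for s in (0, 1) for d in (0, 1)
)
print(f"Marginal P(stockout) = {p_stockout:.3f}")

# Diagnostic reasoning: P(S=1 | stockout=1) by Bayes' rule.
p_joint_s1 = sum(
    cpt[(1, d)] * p_supplier_fail
    * (p_demand_surge if d else 1 - p_demand_surge)
    for d in (0, 1)
)
print(f"P(supplier failure | stockout) = {p_joint_s1 / p_stockout:.3f}")
```

The diagnostic query is what distinguishes BBNs from fault trees in this setting: the same net that predicts disruption risk can, after an observed stockout, apportion blame across its upstream causes.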
Expert Elicitation for Reliable System Design
This paper reviews the role of expert judgement to support reliability
assessments within the systems engineering design process. Generic design
processes are described to give the context and a discussion is given about the
nature of the reliability assessments required in the different systems
engineering phases. It is argued that, as far as meeting reliability
requirements is concerned, the whole design process is more akin to a
statistical control process than to a straightforward statistical problem of
assessing an unknown distribution. This leads to features of the expert
judgement problem in the design context which are substantially different from
those seen, for example, in risk assessment. In particular, the role of experts
in problem structuring and in developing failure mitigation options is much
more prominent, and there is a need to take into account the reliability
potential for future mitigation measures downstream in the system life cycle.
An overview is given of the stakeholders typically involved in large scale
systems engineering design projects, and this is used to argue the need for
methods that expose potential judgemental biases in order to generate analyses
that can be said to provide rational consensus about uncertainties. Finally, a
number of key points are developed with the aim of moving toward a framework
that provides a holistic method for tracking reliability assessment through the
design process.

Comment: This paper is commented on in [arXiv:0708.0285], [arXiv:0708.0287], and [arXiv:0708.0288]; rejoinder in [arXiv:0708.0293]. Published at http://dx.doi.org/10.1214/088342306000000510 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org).
Implementing Bayesian networks for ISO 31000:2018-based maritime oil spill risk management: State-of-art, implementation benefits and challenges, and future research directions
The risk of a large-scale oil spill remains significant in marine environments as international maritime transport continues to grow. The environmental as well as the socio-economic impacts of a large-scale oil spill could be substantial. Oil spill models and modeling tools for Pollution Preparedness and Response (PPR) can support effective risk management. However, there is a lack of integrated approaches that consider oil spill risks comprehensively, learn from all information sources, and treat the system uncertainties in an explicit manner. Recently, the international ISO 31000:2018 risk management framework has been suggested as a suitable basis for supporting oil spill PPR risk management. Bayesian networks (BNs) are graphical models that express uncertainty in a probabilistic form and can thus support decision-making processes when risks are complex and data are scarce. While BNs have increasingly been used for oil spill risk assessment (OSRA) for PPR, no link between the BN literature and the ISO 31000:2018 framework has previously been made. This study explores how Bayesian risk models can be aligned with the ISO 31000:2018 framework by offering a flexible approach to integrating various sources of probabilistic knowledge. To gain insight into the current utilization of BNs for oil spill risk assessment and management (OSRA-BNs) for maritime oil spill preparedness and response, a literature review was performed. The review focused on articles presenting BN models that analyze the occurrence of oil spills, consequence mitigation in terms of offshore and shoreline oil spill response, and the impacts of spills on the variables of interest. Based on the results, the study discusses the benefits of applying BNs within the ISO 31000:2018 framework, as well as the challenges and further research needs.
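A minimal sketch of the occurrence-response-impact chain that the reviewed OSRA-BNs share, assuming the open-source pgmpy library is available (pip install pgmpy); the network structure and all probabilities are invented placeholders, not values from the reviewed models:

```python
# Sketch of a three-node oil spill risk BN, using the assumed pgmpy library.
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination
from pgmpy.models import BayesianNetwork

# Spill occurrence and response effectiveness jointly drive shoreline impact.
model = BayesianNetwork([("Spill", "Impact"), ("Response", "Impact")])

cpd_spill = TabularCPD("Spill", 2, [[0.98], [0.02]])        # P(no spill), P(spill)
cpd_response = TabularCPD("Response", 2, [[0.30], [0.70]])  # P(weak), P(effective)
cpd_impact = TabularCPD(
    "Impact", 2,
    # Columns: (Spill, Response) = (0,0), (0,1), (1,0), (1,1)
    values=[[0.99, 0.995, 0.30, 0.70],   # P(low impact | ...)
            [0.01, 0.005, 0.70, 0.30]],  # P(high impact | ...)
    evidence=["Spill", "Response"], evidence_card=[2, 2],
)
model.add_cpds(cpd_spill, cpd_response, cpd_impact)
assert model.check_model()

# Risk analysis query: impact distribution given that a spill has occurred.
inference = VariableElimination(model)
print(inference.query(["Impact"], evidence={"Spill": 1}))
```

Conditioning on evidence in this way maps naturally onto the ISO 31000:2018 steps of risk analysis and risk evaluation, which is the alignment the study argues for.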
Risk Assessment of Malicious Attacks Against Power Systems
The new scenarios of malicious attack call for deeper consideration, especially when critical systems are at stake. In this framework, infrastructural systems, including power systems, represent a possible target due to the huge impact their failure can have on society. Malicious attacks differ in nature from other, more traditional threats to power systems, since they embed a strategic interaction between the attacker and the defender (a characteristic that cannot be found in natural events or systemic failures). This difference has not been systematically analyzed in the existing literature, and new approaches and tools are therefore needed. This paper presents a mixed-strategy game-theory model able to capture the strategic interactions between malicious agents that may be willing to attack power systems and the system operators, with their related bodies, that are in charge of defending them. At the game equilibrium, the strategies of the two players, in terms of attacking/protecting the critical elements of the system, can be obtained. The information about the attack probability of the various elements can be used to assess the risk associated with each of them, and the efficiency of defense resource allocation is evidenced in terms of the corresponding risk. Reference defense plans related to online defense actions and defense actions with a time delay can be obtained according to their respective time constraints. Moreover, risk sensitivity to defense/attack-resource variation is also analyzed. The model is applied to a standard IEEE RTS-96 test system for illustrative purposes and, on the basis of that system, some peculiar aspects of malicious attacks are pointed out.
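For the zero-sum special case, the attacker's equilibrium mixed strategy in such a game can be computed with a small linear program. The 3x3 damage matrix below is invented for illustration and is not the paper's IEEE RTS-96 case:

```python
# Solve a zero-sum attacker-defender game for the attacker's mixed strategy.
# The attacker maximizes the game value v subject to: for every defended
# element j, expected damage sum_i x_i * A[i, j] >= v, with x a probability
# vector over attacked elements.
import numpy as np
from scipy.optimize import linprog

A = np.array([  # damage A[i, j]: attacker targets element i, defender protects j
    [0.0, 4.0, 4.0],
    [5.0, 0.0, 5.0],
    [3.0, 3.0, 0.0],
])
m, n = A.shape

# Decision variables: [x_1..x_m, v]; minimize -v (i.e., maximize v).
c = np.concatenate([np.zeros(m), [-1.0]])
# Constraints v - sum_i x_i * A[i, j] <= 0, one row per defender choice j.
A_ub = np.hstack([-A.T, np.ones((n, 1))])
b_ub = np.zeros(n)
A_eq = np.concatenate([np.ones(m), [0.0]]).reshape(1, -1)  # sum(x) = 1
b_eq = [1.0]
bounds = [(0, 1)] * m + [(None, None)]                     # v is free

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=bounds, method="highs")
x, v = res.x[:m], res.x[m]
print("attack probabilities:", np.round(x, 3), "game value:", round(v, 3))
```

The resulting attack probabilities are exactly the quantities that feed the element-wise risk assessment described in the abstract; the defender's strategy is obtained from the dual program.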
Untangling the hotel industry's inefficiency: An SFA approach applied to a renowned Portuguese hotel chain
This paper explores the technical efficiency of four hotels from the Teixeira Duarte Group, a renowned Portuguese hotel chain. An efficiency ranking of these four hotel units located in Portugal is established using Stochastic Frontier Analysis. This methodology makes it possible to discriminate between measurement error and systematic inefficiencies in the estimation process, enabling investigation of the main causes of inefficiency. Several suggestions concerning efficiency improvement are put forward for each hotel studied.
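A compact sketch of the normal-half-normal stochastic frontier that underlies such an analysis, with simulated data standing in for the hotel panel; the single-input production function and all parameter values are illustrative assumptions:

```python
# Normal-half-normal stochastic frontier: ln y = b0 + b1*x + v - u, u >= 0,
# where v is symmetric noise (measurement error) and u is one-sided inefficiency.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 500
x = rng.uniform(0, 2, n)
u = np.abs(rng.normal(0, 0.3, n))        # one-sided inefficiency
v = rng.normal(0, 0.2, n)                # symmetric noise
log_y = 1.0 + 0.8 * x + v - u

def neg_loglik(params):
    """Log-likelihood of the composed error eps = v - u:
    f(eps) = (2/sigma) * phi(eps/sigma) * Phi(-eps*lambda/sigma)."""
    b0, b1, log_sv, log_su = params
    sv, su = np.exp(log_sv), np.exp(log_su)
    sigma = np.hypot(sv, su)             # sqrt(sv^2 + su^2)
    lam = su / sv
    eps = log_y - b0 - b1 * x
    ll = (np.log(2) - np.log(sigma) + norm.logpdf(eps / sigma)
          + norm.logcdf(-eps * lam / sigma))
    return -ll.sum()

res = minimize(neg_loglik, x0=[0.5, 0.5, np.log(0.5), np.log(0.5)],
               method="Nelder-Mead")
b0, b1, log_sv, log_su = res.x
print(f"frontier: {b0:.2f} + {b1:.2f}*x, "
      f"sigma_v={np.exp(log_sv):.2f}, sigma_u={np.exp(log_su):.2f}")
```

The separate estimates of sigma_v and sigma_u are what let SFA attribute a hotel's shortfall from the frontier to genuine inefficiency rather than measurement error, which is the discrimination the abstract highlights.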
A Systematic Framework to Optimize and Control Monoclonal Antibody Manufacturing Process
Since the approval of the first therapeutic monoclonal antibody in 1986, monoclonal antibodies have become an important class of drugs within the biopharmaceutical industry, with indications and superior efficacy across multiple therapeutic areas, such as oncology and immunology. Although there have been great advances in this field, challenges remain that hinder or delay the development and approval of new antibodies.
For example, we have seen issues in manufacturing, such as poor quality, process inconsistency and large manufacturing cost, which can lead to production failures, delays in approval and drug shortages. Recently, the development of new technologies, such as Process Analytical Technology (PAT), and the use of statistical tools, such as Quality by Design (QbD), Design of Experiments (DoE) and Statistical Process Control (SPC), have enabled us to identify critical process parameters and attributes, and to monitor manufacturing performance.
However, these methods might not be reliable or comprehensive enough to accurately describe the relationship between critical process parameters and attributes, and they may still lack the ability to forecast manufacturing performance. In this work, by utilizing multiple modeling approaches, we have developed a systematic framework to optimize and control the monoclonal antibody manufacturing process.
In our first study, we leverage a DoE-PCA approach to unambiguously identify critical process parameters in order to improve process yield and cost of goods, followed by the use of Monte Carlo simulation to validate the impact of those parameters on these attributes. In our second study, we use a Bayesian approach to predict product quality for future manufacturing batches, so that mitigation strategies can be put in place if the data suggest a potential deviation. Finally, we use a neural network model to accurately characterize the impurity reduction of each purification step, and ultimately use this model to develop acceptance criteria for the feed based on predetermined process specifications. Overall, the work in this thesis demonstrates that the framework is powerful and reliable for process optimization, monitoring and control.
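As an illustration of the Monte Carlo validation step described above, one can propagate assumed distributions on critical process parameters through a fitted response model for yield; the quadratic response surface, parameter distributions, and specification limit below are invented placeholders, not values from the thesis:

```python
# Monte Carlo propagation of process-parameter variation to predicted yield.
import numpy as np

rng = np.random.default_rng(42)
n_batches = 100_000

# Assumed operating distributions for two critical process parameters.
ph = rng.normal(7.0, 0.05, n_batches)      # culture pH
temp = rng.normal(36.5, 0.3, n_batches)    # temperature, deg C

def predicted_yield(ph, temp):
    """Illustrative fitted response surface (e.g., from a DoE regression)."""
    return 3.0 - 8.0 * (ph - 7.0) ** 2 - 0.5 * (temp - 36.5) ** 2  # g/L

y = predicted_yield(ph, temp)
spec = 2.5  # hypothetical lower specification limit, g/L
print(f"mean yield: {y.mean():.2f} g/L")
print(f"P(yield < {spec} g/L): {(y < spec).mean():.4f}")
```

The simulated out-of-specification probability is the kind of quantity that can then drive the batch-level mitigation and acceptance-criteria decisions described for the Bayesian and neural network studies.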
The History of the Quantitative Methods in Finance Conference Series. 1992-2007
This report charts the history of the Quantitative Methods in Finance (QMF) conference from its beginning in 1993 to the 15th conference in 2007. It lists alphabetically the 1037 speakers who presented at all 15 conferences and the titles of their papers.