
    Deriving the Qubit from Entropy Principles

    The Heisenberg uncertainty principle is one of the most famous features of quantum mechanics. However, the non-determinism implied by the Heisenberg uncertainty principle --- together with other prominent aspects of quantum mechanics such as superposition, entanglement, and nonlocality --- poses deep puzzles about the underlying physical reality, even while these same features are at the heart of exciting developments such as quantum cryptography, algorithms, and computing. These puzzles might be resolved if the mathematical structure of quantum mechanics were built up from physically interpretable axioms, but it is not. We propose three physically based axioms which together characterize the simplest quantum system, namely the qubit. Our starting point is the class of all no-signaling theories. Each such theory can be regarded as a family of empirical models, and we proceed to associate entropies, i.e., measures of information, with these models. To do this, we move to phase space and impose the condition that entropies are real-valued. This requirement, which we call the Information Reality Principle, arises because in order to represent all no-signaling theories (including quantum mechanics itself) in phase space, it is necessary to allow negative probabilities (Wigner [1932]). Our second and third principles take two important features of quantum mechanics and turn them into deliberately chosen physical axioms. One axiom is an Uncertainty Principle, stated in terms of entropy. The other axiom is an Unbiasedness Principle, which requires that whenever there is complete certainty about the outcome of a measurement in one of three mutually orthogonal directions, there must be maximal uncertainty about the outcomes in each of the two other directions. Comment: 8 pages, 3 figures
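
    As an informal illustration of the Unbiasedness Principle stated above, the short sketch below (not taken from the paper; the state, observables, and entropy measure are illustrative choices) computes the Shannon entropies of spin measurements along three mutually orthogonal directions for a qubit prepared with certainty along one of them: the certain direction yields zero entropy, while the other two yield the maximal one bit each.

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy (in bits) of a probability vector, ignoring zero entries."""
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

# Pauli observables for three mutually orthogonal spin directions.
sigma = {
    "x": np.array([[0, 1], [1, 0]], dtype=complex),
    "y": np.array([[0, -1j], [1j, 0]], dtype=complex),
    "z": np.array([[1, 0], [0, -1]], dtype=complex),
}

# A qubit state with complete certainty about the z-measurement: |0><0|.
rho = np.array([[1, 0], [0, 0]], dtype=complex)

for axis, op in sigma.items():
    # Outcome probabilities from the spectral projectors of the observable.
    eigvals, eigvecs = np.linalg.eigh(op)
    probs = np.array([np.real(np.vdot(eigvecs[:, k], rho @ eigvecs[:, k]))
                      for k in range(2)])
    print(f"axis {axis}: probabilities {np.round(probs, 3)}, "
          f"entropy {shannon_entropy(probs):.3f} bits")

# Expected: 0 bits along z (complete certainty), 1 bit along x and y
# (maximal uncertainty), as the Unbiasedness Principle requires.
```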

    Approaches to displaying information to assist decisions under uncertainty

    The estimation of the costs of a product or project, and the decisions based on these forecasts, are subject to much uncertainty relating to factors such as unknown future developments. This has been addressed repeatedly in research studies focusing on different aspects of uncertainty; unfortunately, these insights have not yet been adopted in practice. One reason can be found in the inadequate representation of uncertainty. This paper introduces an experiment that uses different approaches to displaying cost-forecasting information in order to gauge how uncertainty is considered in the subsequent decision-making process. Three approaches to displaying cost-forecasting information, including the uncertainty involved in the data, were tested: a three-point trend forecast, a bar chart, and a fan diagram. Furthermore, the effects of using different levels of contextual information about the decision problem were examined. The results show that decision makers tend to simplify the level of uncertainty from a possible range of future outcomes to the limited form of a point estimate. Furthermore, the contextual information made the participants more aware of uncertainty. In addition, the fan diagram prompted 75.0% of the participants to consider uncertainty even if they had not used this type of diagram before; it was therefore identified as the most suitable method of graphical information display for encouraging decision makers to consider the uncertainty in cost forecasting.
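
    A fan diagram of the kind tested above can be produced by plotting nested prediction intervals that widen with the forecast horizon. The sketch below is only illustrative: the cost series, the Gaussian-style interval widths, and the chosen coverage levels are assumptions, not data from the study.

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative cost forecast: a central estimate with widening uncertainty.
periods = np.arange(1, 13)                 # forecast horizon (months)
point_estimate = 100 + 2.5 * periods       # central cost forecast
spread = 1.5 * np.sqrt(periods)            # uncertainty grows with horizon

fig, ax = plt.subplots()
# Draw nested bands (roughly 95%, 80%, 50% intervals) from widest to narrowest.
for z, alpha in [(1.96, 0.20), (1.28, 0.30), (0.67, 0.40)]:
    ax.fill_between(periods,
                    point_estimate - z * spread,
                    point_estimate + z * spread,
                    color="tab:blue", alpha=alpha, linewidth=0)
ax.plot(periods, point_estimate, color="tab:blue", label="point forecast")
ax.set_xlabel("forecast period (months)")
ax.set_ylabel("cost")
ax.set_title("Fan diagram: cost forecast with uncertainty bands")
ax.legend()
plt.show()
```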

    Distribution-free stochastic simulation methodology for model updating under hybrid uncertainties

    In the real world, a significant challenge faced in the safe operation and maintenance of infrastructures is the lack of available information or data. This results in a large degree of uncertainty and the requirement for robust and efficient uncertainty quantification (UQ) tools in order to derive the most realistic estimates of the behavior of structures. While the probabilistic approach has long been utilized as an essential tool for the quantitative mathematical representation of uncertainty, a common criticism is that the approach often involves unsubstantiated subjective assumptions because of the scarcity or imprecision of available information. To avoid the inclusion of subjectivity, the concepts of imprecise probabilities have been developed, and the distributional probability-box (p-box) has gained the most attention among various types of imprecise probability models since it can straightforwardly provide a clear separation between aleatory and epistemic uncertainty. This thesis concerns the realistic consideration and numerically efficient calibration and propagation of aleatory and epistemic uncertainties (hybrid uncertainties) based on the distributional p-box. Recent developments, including the Bhattacharyya distance-based approximate Bayesian computation (ABC) and non-intrusive imprecise stochastic simulation (NISS) methods, have strengthened the subjective assumption-free approach for uncertainty calibration and propagation. However, these distributional p-box methods rely on prior knowledge that determines a specific distribution family for the p-box. The target of this thesis is hence to develop a distribution-free approach for the calibration and propagation of hybrid uncertainties, strengthening the subjective assumption-free UQ approach. To achieve the above target, this thesis presents five main developments to improve the Bhattacharyya distance-based ABC and NISS frameworks. The first development improves the scope of application and efficiency of the Bhattacharyya distance-based ABC. A dimension reduction procedure is proposed to evaluate the Bhattacharyya distance when the system under investigation is described by time-domain sequences. Moreover, an efficient Bayesian inference method within the Bayesian updating with structural reliability methods (BUS) framework is developed by combining BUS with the adaptive Kriging-based reliability method, namely AK-MCMC. The second development of the distribution-free stochastic model updating framework is based on the combined application of the staircase density functions and the Bhattacharyya distance. The staircase density functions can approximate a wide range of distributions arbitrarily closely; this makes it possible to perform the Bhattacharyya distance-based ABC without limiting hypotheses on the distribution families of the parameters to be updated. The aforementioned two developments are then integrated in the third development to provide a solution to the latest edition (2019) of the NASA UQ challenge problem. The model updating tasks under very challenging conditions, where prior information on the aleatory parameters is extremely limited apart from a common boundary, are successfully addressed based on the above distribution-free stochastic model updating framework.
Moreover, the NISS approach, which simplifies the high-dimensional optimization to a set of one-dimensional searches by a first-order high-dimensional model representation (HDMR) decomposition with respect to each design parameter, is developed to efficiently solve the reliability-based design optimization tasks. This challenge, at the same time, elucidates the limitations of the current developments, hence the fourth development aims at addressing the limitation that the staircase density functions are designed for univariate random variables and cannot account for parameter dependencies. In order to calibrate the joint distribution of correlated parameters, the distribution-free stochastic model updating framework is extended by characterizing the aleatory parameters using Gaussian copula functions whose marginal distributions are staircase density functions. This further strengthens the assumption-free approach for uncertainty calibration, in which no prior information on the parameter dependencies is required. Finally, the fifth development of the distribution-free uncertainty propagation framework is based on another application of the staircase density functions to the NISS class of methods, and it is applied for efficiently solving the reliability analysis subproblem of the NASA UQ challenge 2019. The above five developments have successfully strengthened the assumption-free approach for both uncertainty calibration and propagation thanks to the ability of the staircase density functions to approximate arbitrary distributions. The efficiency and effectiveness of these developments are demonstrated on real-world applications, including the NASA UQ challenge 2019.
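
    A central ingredient shared by several of these developments is the Bhattacharyya distance between the distributions of simulated and observed system features. The sketch below is a minimal, assumption-laden illustration of how that distance can be evaluated from samples: the shared histogram bins and the univariate feature are simplifications, and the thesis's actual feature extraction and ABC acceptance scheme are not reproduced here. In an ABC setting, candidate parameter sets whose simulated features yield a sufficiently small distance to the measurements would be accepted.

```python
import numpy as np

def bhattacharyya_distance(sim, obs, n_bins=20):
    """Binned Bhattacharyya distance between two univariate sample sets.

    Both sets are histogrammed on shared bin edges so the resulting
    probability mass functions are directly comparable.
    """
    edges = np.histogram_bin_edges(np.concatenate([sim, obs]), bins=n_bins)
    p, _ = np.histogram(sim, bins=edges)
    q, _ = np.histogram(obs, bins=edges)
    p = p / p.sum()
    q = q / q.sum()
    bc = np.sum(np.sqrt(p * q))          # Bhattacharyya coefficient
    return -np.log(max(bc, 1e-12))       # distance; guard against log(0)

# Toy usage: the distance shrinks as the simulated output approaches the data.
rng = np.random.default_rng(0)
observed = rng.normal(1.0, 0.5, size=1000)
for shift in [2.0, 1.0, 0.0]:
    simulated = rng.normal(1.0 + shift, 0.5, size=1000)
    print(f"shift {shift:3.1f}: d_B = {bhattacharyya_distance(simulated, observed):.3f}")
```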

    A Model for Assessing Performance in Electronic Marketplaces

    Theories of inter-organisational co-ordination propose that information processing capabilities (structure, process and technology) must be aligned with information processing needs (environmental, partnership and task uncertainty), and that the fit between the two is a strong determinant of performance. Electronic marketplaces dominate new developments in electronic commerce. While traditional models predominantly deal with one-to-one relationships, electronic marketplaces are mainly characterized as being one-to-many and many-to-many in nature. Such developments mean that performance depends on more than just the fit between information processing needs and information processing capabilities. Consequently, measuring performance has become much more difficult. Drawing on current research, the authors develop a theoretical model examining issues such as trust, investment and ownership. The paper details the development of the model and proposes a research strategy for testing it.

    Random Neural Networks and Optimisation

    In this thesis we introduce new models and learning algorithms for the Random Neural Network (RNN), and we develop RNN-based and other approaches for the solution of emergency management optimisation problems. With respect to RNN developments, two novel supervised learning algorithms are proposed. The first is a gradient-descent algorithm for an RNN extension model that we have introduced, the RNN with synchronised interactions (RNNSI), which was inspired by the synchronised firing activity observed in brain neural circuits. The second algorithm is based on modelling the signal-flow equations of the RNN as a nonnegative least squares (NNLS) problem. NNLS is solved using a limited-memory quasi-Newton algorithm specifically designed for the RNN case. Regarding the investigation of emergency management optimisation problems, we examine combinatorial assignment problems that require fast, distributed and close-to-optimal solutions under information uncertainty. We consider three different problems with the above characteristics, associated with the assignment of emergency units to incidents with injured civilians (AEUI), the assignment of assets to tasks under execution uncertainty (ATAU), and the deployment of a robotic network to establish communication with trapped civilians (DRNCTC). AEUI is solved by training an RNN tool with instances of the optimisation problem and then using the trained RNN for decision making; training is achieved using the developed learning algorithms. For the solution of the ATAU problem, we introduce two different approaches. The first is based on mapping parameters of the optimisation problem to RNN parameters, and the second on solving a sequence of minimum cost flow problems on appropriately constructed networks with estimated arc costs. For the exact solution of the DRNCTC problem, we develop a mixed-integer linear programming formulation, which is based on network flows. Finally, we design and implement distributed heuristic algorithms for the deployment of robots when the civilian locations are known or uncertain.
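
    For readers unfamiliar with the model underlying these developments, the classical random neural network has an analytical steady state: the firing probability of each neuron satisfies q_i = lambda+_i / (r_i + lambda-_i), where the arrival rates lambda+/- themselves depend on the other neurons' q values. The sketch below solves these signal-flow equations by fixed-point iteration; the rates and weights are arbitrary toy values, and the thesis's RNNSI extension and NNLS-based learning are not shown.

```python
import numpy as np

def rnn_steady_state(W_plus, W_minus, Lambda, lam, r, tol=1e-10, max_iter=1000):
    """Fixed-point iteration for the steady-state firing probabilities q of a
    random neural network:  q_i = lambda+_i / (r_i + lambda-_i), with
    lambda+_i = Lambda_i + sum_j q_j * W_plus[j, i]
    lambda-_i = lam_i    + sum_j q_j * W_minus[j, i]
    """
    q = np.zeros(len(Lambda))
    for _ in range(max_iter):
        lam_plus = Lambda + q @ W_plus     # total excitatory arrival rates
        lam_minus = lam + q @ W_minus      # total inhibitory arrival rates
        q_new = np.clip(lam_plus / (r + lam_minus), 0.0, 1.0)
        if np.max(np.abs(q_new - q)) < tol:
            return q_new
        q = q_new
    return q

# Toy 3-neuron network with arbitrary (illustrative) rates and weights.
rng = np.random.default_rng(1)
W_plus = rng.uniform(0.0, 0.5, size=(3, 3)); np.fill_diagonal(W_plus, 0.0)
W_minus = rng.uniform(0.0, 0.5, size=(3, 3)); np.fill_diagonal(W_minus, 0.0)
Lambda = np.array([0.4, 0.2, 0.1])   # exogenous excitatory arrival rates
lam = np.array([0.1, 0.1, 0.1])      # exogenous inhibitory arrival rates
r = W_plus.sum(axis=1) + W_minus.sum(axis=1) + 0.5  # neuron firing rates

print(np.round(rnn_steady_state(W_plus, W_minus, Lambda, lam, r), 4))
```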

    A Probabilistic Approach for Multiscale Poroelastic Modeling of Mature Organic-Rich Shales

    Organic-rich shales have been recognized as one of the most important energy resources in the world due to their ubiquitous presence. However, numerous engineering challenges stand in the way of exploiting these geomaterials with their multiscale microstructure. This work addresses an important aspect of these challenges in understanding the complex behavior of organic-rich source rocks, namely their anisotropic poroelastic behavior at multiple scales. To this end, we utilize a framework obtained by combining experimental characterization, physically based modeling and uncertainty quantification that spans and integrates scales from the nanoscale to the macroscale. The multiscale models play a crucial role in predicting macroscale mechanical properties of organic-rich shales based on the available information on poromechanical properties at the microscale. Recently, a three-level multiscale model has been developed that spans from the nanometer length scale of organic-rich shales to the scale of the macroscopic composite. This approach is powerful in capturing the homogenized/effective properties and behavior of these geomaterials. However, it ignores the fluctuation/uncertainty in mechanical and compositional model parameters. As such, the robustness and reliability of its estimates can be questioned in view of different sources of uncertainty, which in turn affect the requisite information on which the models are built. In this research, we aim to develop a framework to systematically incorporate the main sources of uncertainty in modeling the multiscale behavior of organic-rich shales, and thus take the existing model one step forward. In particular, we identify and model the uncertainty in the main model parameters at each scale, such as porosity and elastic properties. To that end, the maximum entropy principle and random matrix theory are utilized to construct probabilistic descriptions of model parameters based on available information. Then, to propagate uncertainty across scales, Monte Carlo simulation is carried out, and probabilistic descriptions of macroscale properties are constructed. Furthermore, a global sensitivity analysis is carried out to characterize the contribution of each source of uncertainty to the overall response. Finally, the methodological developments are validated against both simulations and an experimental test database.
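
    As a schematic of the Monte Carlo propagation and sensitivity screening described above, the sketch below pushes illustrative input distributions through a deliberately simple stand-in homogenization (a Voigt-Reuss average with a porosity knock-down) rather than the thesis's multiscale poroelastic model, and ranks input influence with Spearman correlations instead of a full global sensitivity analysis. All distributions and parameter values are assumptions.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(42)
n_samples = 10_000

# Illustrative probabilistic inputs (NOT the thesis's calibrated values):
porosity = rng.beta(2.0, 8.0, n_samples)                # porosity [-]
E_solid = rng.lognormal(np.log(50.0), 0.1, n_samples)   # inorganic solid modulus [GPa]
E_kerogen = rng.lognormal(np.log(6.0), 0.2, n_samples)  # organic (kerogen) modulus [GPa]
f_kerogen = rng.uniform(0.05, 0.25, n_samples)          # kerogen volume fraction [-]

# Stand-in two-phase homogenization: Voigt-Reuss (Hill) average of the
# solid/kerogen mixture, followed by a simple porosity knock-down.
f_s = 1.0 - f_kerogen
E_voigt = f_s * E_solid + f_kerogen * E_kerogen
E_reuss = 1.0 / (f_s / E_solid + f_kerogen / E_kerogen)
E_macro = 0.5 * (E_voigt + E_reuss) * (1.0 - porosity) ** 2

print(f"macroscale modulus: mean {E_macro.mean():.1f} GPa, std {E_macro.std():.1f} GPa, "
      f"95% interval [{np.percentile(E_macro, 2.5):.1f}, {np.percentile(E_macro, 97.5):.1f}] GPa")

# Crude sensitivity screen: rank correlation of each input with the output.
for name, x in [("porosity", porosity), ("E_solid", E_solid),
                ("E_kerogen", E_kerogen), ("f_kerogen", f_kerogen)]:
    rho, _ = spearmanr(x, E_macro)
    print(f"Spearman rho({name}, E_macro) = {rho:+.2f}")
```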

    Uncertainty quantification for radio interferometric imaging: II. MAP estimation

    Uncertainty quantification is a critical missing component in radio interferometric imaging that will only become increasingly important as the big-data era of radio interferometry emerges. Statistical sampling approaches to perform Bayesian inference, like Markov Chain Monte Carlo (MCMC) sampling, can in principle recover the full posterior distribution of the image, from which uncertainties can then be quantified. However, for massive data sizes, like those anticipated from the Square Kilometre Array (SKA), it will be difficult if not impossible to apply any MCMC technique due to its inherent computational cost. We formulate Bayesian inference problems with sparsity-promoting priors (motivated by compressive sensing), for which we recover maximum a posteriori (MAP) point estimators of radio interferometric images by convex optimisation. Exploiting recent developments in the theory of probability concentration, we quantify uncertainties by post-processing the recovered MAP estimate. Three strategies to quantify uncertainties are developed: (i) highest posterior density credible regions; (ii) local credible intervals (cf. error bars) for individual pixels and superpixels; and (iii) hypothesis testing of image structure. These forms of uncertainty quantification provide rich information for analysing radio interferometric observations in a statistically robust manner. Our MAP-based methods are approximately 10^5 times faster computationally than state-of-the-art MCMC methods and, in addition, support highly distributed and parallelised algorithmic structures. For the first time, our MAP-based techniques provide a means of quantifying uncertainties for radio interferometric imaging for realistic data volumes and practical use, and scale to the emerging big-data era of radio astronomy. Comment: 13 pages, 10 figures; see the companion article in this arXiv listing.
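
    A minimal sketch of the MAP estimation step is given below, under simplifying assumptions: a small dense random matrix stands in for the interferometric measurement operator, and a plain proximal-gradient (ISTA) loop stands in for the paper's convex solver. The subsequent uncertainty quantification, which post-processes the MAP estimate (e.g. thresholding the objective to obtain approximate highest posterior density credible regions), is not reproduced here.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||x||_1 (soft thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def map_estimate(y, Phi, mu, n_iter=500):
    """MAP estimate for  min_x  0.5 * ||y - Phi x||_2^2 + mu * ||x||_1
    via proximal gradient (ISTA). Phi is a dense toy measurement operator
    standing in for the interferometric measurement model."""
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2   # 1 / Lipschitz constant of the gradient
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ x - y)           # gradient of the data-fidelity term
        x = soft_threshold(x - step * grad, step * mu)
    return x

# Toy problem: recover a sparse "image" from noisy compressive measurements.
rng = np.random.default_rng(3)
n, m = 200, 80
x_true = np.zeros(n)
x_true[rng.choice(n, 8, replace=False)] = rng.uniform(1.0, 3.0, 8)
Phi = rng.normal(size=(m, n)) / np.sqrt(m)
y = Phi @ x_true + 0.01 * rng.normal(size=m)

x_map = map_estimate(y, Phi, mu=0.02)
print(f"relative reconstruction error: "
      f"{np.linalg.norm(x_map - x_true) / np.linalg.norm(x_true):.3f}")
```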