
    A Relational Approach to Quantum Mechanics, Part I: Formulation

    Non-relativistic quantum mechanics is reformulated here based on the idea that relational properties among quantum systems, rather than the independent properties of a single quantum system, are the most fundamental elements from which to construct quantum mechanics. This idea, combined with the emphasis that measurement of a quantum system is a bidirectional interaction process, leads to a new framework for calculating the probability of an outcome when measuring a quantum system. In this framework, the most basic variable is the relational probability amplitude. Probability is calculated as a summation of weights from the alternative measurement configurations. The properties of quantum systems, such as superposition and entanglement, are manifested through the rules for counting the alternatives. The wave function and reduced density matrix are derived from the relational probability amplitude matrix. They are found to be secondary mathematical tools that equivalently describe a quantum system without explicitly calling out the reference system. The Schrödinger equation is obtained when there is no entanglement in the relational probability amplitude matrix. The Feynman path integral is used to calculate the relational probability amplitude, and is further generalized to formulate the reduced density matrix. In essence, quantum mechanics is reformulated as a theory that describes physical systems in terms of relational properties.
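    The abstract's claim that the wave function and reduced density matrix follow from the relational probability amplitude matrix can be illustrated with a toy calculation. The matrix R below is invented, and the reading rho = R R† (summing over the reference system's states) is an assumption based on the abstract's wording, not the paper's exact construction:

```python
import numpy as np

# Hypothetical relational probability amplitude matrix R between a system
# (rows) and a reference/apparatus (columns). One natural reading of the
# abstract is that the reduced density matrix arises as rho = R R^dagger,
# i.e. by summing over the reference system's states.
R = np.array([[1.0,  1.0j],
              [1.0, -1.0j]]) / 2.0

rho = R @ R.conj().T

print(np.isclose(np.trace(rho).real, 1.0))  # True: unit trace, valid density matrix
print(np.allclose(rho, rho.conj().T))       # True: Hermitian
# Here rho is maximally mixed, reflecting entanglement between the system
# and the reference encoded in this particular R.
```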

    An Analysis of Motorists’ Route Choice Using Stated Preference Techniques

    This paper presents some results of an analysis of motorists' route choice based on stated preference responses. This is done for both an inter-urban and an urban route choice context. The nature of the study is exploratory, the analysis being based upon a pilot survey of some 79 motorists undertaken in March/April 1984. The quality and nature of the responses are assessed in terms of a 'rationality' test and also through a consideration of lexicographic forms of decision making. The formal quantitative analysis examines the ranked preferences of motorists by means of an ordered multinomial logit model. Detailed results are presented for various formulations of the representative utility function to assess the influence of various relevant variables upon route choice and to identify the best explanation of motorists' stated route preferences in both route choice contexts. Values of time are derived for a variety of model specifications as part of this consideration of the usefulness of the ranking approach to an analysis of motorists' route choice.
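    As a minimal illustration of how values of time fall out of a logit-type route choice model: once utility coefficients on travel time and travel cost have been estimated, their ratio gives the implied value of time. The coefficients below are invented for illustration, not taken from the paper:

```python
# Hypothetical coefficients as they might come out of an ordered multinomial
# logit fit of route choice (both values assumed, not from the study):
beta_time = -0.04   # utility change per minute of travel time
beta_cost = -0.002  # utility change per pence of travel cost

# The implied value of time is the marginal rate of substitution between
# time and cost, i.e. the ratio of the two coefficients.
value_of_time = beta_time / beta_cost  # pence per minute
print(value_of_time)  # ≈ 20 pence per minute under these assumed coefficients
```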

    The Distribution of Individual Values of Time: An Empirical Study Using Stated Preference Data

    This paper reports the findings of further work undertaken on the Stated Preference (SP) data collected as part of the Department of Transport's Value of Time Project. The latter study estimated values of time in a variety of different circumstances and for a number of modes of travel, and also examined how the value of time varied according to socio-economic factors. Although values of time were allowed to vary across individuals by segmenting the data according to socio-economic factors, the SP data permit the estimation of values of time at the individual level, whereupon a distribution of individual values can be obtained. The Department of Transport expressed some interest in what information could be provided by the SP data about the distribution of values of in-vehicle time across individuals. Individual values of in-vehicle time have been estimated for each of the five SP experiments which had previously been conducted, to obtain a distribution for each survey context and also pooled across surveys to derive a population distribution. The problems of estimating values at the individual level with the SP data available are considered and the findings are compared with the average values previously derived in the Value of Time Study. How these individual values of time vary with socio-economic factors is also considered.
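    A sketch of the pooling step described above, with entirely synthetic data: each respondent contributes one individual value-of-time estimate, and the pooled sample is summarized as a distribution rather than a single average. The sample size and the lognormal shape are assumptions for illustration only, not results from the study:

```python
import numpy as np

# Entirely synthetic: one value-of-time estimate per respondent, pooled
# into a population distribution. Sample size and distributional shape
# are assumed for illustration.
rng = np.random.default_rng(1)
n_respondents = 200
individual_vot = rng.lognormal(mean=2.5, sigma=0.4, size=n_respondents)  # pence/min

# Summarizing the distribution gives more information than the single
# segment-average value used in earlier work.
summary = {
    "mean":   float(np.mean(individual_vot)),
    "median": float(np.median(individual_vot)),
    "p90":    float(np.percentile(individual_vot, 90)),
}
print(summary)
```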

    Optimization Methods for Inverse Problems

    Optimization plays an important role in solving many inverse problems. Indeed, the task of inversion often either involves or is fully cast as the solution of an optimization problem. In this light, the non-linear, non-convex, and large-scale nature of many of these inversions gives rise to some very challenging optimization problems. The inverse problem community has long been developing various techniques for solving such optimization tasks. However, other, seemingly disjoint communities, such as that of machine learning, have developed, almost in parallel, interesting alternative methods which might have stayed under the radar of the inverse problem community. In this survey, we aim to change that. In doing so, we first discuss current state-of-the-art optimization methods widely used in inverse problems. We then survey recent related advances in addressing similar challenges in problems faced by the machine learning community, and discuss their potential advantages for solving inverse problems. By highlighting the similarities among the optimization challenges faced by the inverse problem and machine learning communities, we hope that this survey can serve as a bridge bringing together these two communities and encouraging the cross-fertilization of ideas.
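    A minimal sketch of the kind of optimization problem discussed above: Tikhonov-regularized linear least squares solved by gradient descent, one of the workhorse first-order methods shared by the inverse problem and machine learning communities. The problem data are synthetic, and this is a generic illustration rather than any specific method from the survey:

```python
import numpy as np

# Inverse problem as optimization: recover x from b = A x by minimizing
#   f(x) = 0.5 * ||A x - b||^2 + 0.5 * lam * ||x||^2   (Tikhonov regularization)
rng = np.random.default_rng(0)
A = rng.normal(size=(20, 5))       # synthetic forward operator
x_true = rng.normal(size=5)
b = A @ x_true                     # noiseless synthetic observations
lam = 1e-3

# Gradient descent with step 1/L, where L = sigma_max(A)^2 + lam is the
# Lipschitz constant of the gradient.
x = np.zeros(5)
step = 1.0 / (np.linalg.norm(A, 2) ** 2 + lam)
for _ in range(5000):
    grad = A.T @ (A @ x - b) + lam * x
    x -= step * grad

print(np.allclose(x, x_true, atol=1e-2))  # True: recovered up to regularization bias
```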

    Consistent Second-Order Conic Integer Programming for Learning Bayesian Networks

    Bayesian Networks (BNs) represent conditional probability relations among a set of random variables (nodes) in the form of a directed acyclic graph (DAG), and have found diverse applications in knowledge discovery. We study the problem of learning the sparse DAG structure of a BN from continuous observational data. The central problem can be modeled as a mixed-integer program with an objective function composed of a convex quadratic loss function and a regularization penalty subject to linear constraints. The optimal solution to this mathematical program is known to have desirable statistical properties under certain conditions. However, the state-of-the-art optimization solvers are not able to obtain provably optimal solutions to the existing mathematical formulations for medium-size problems within reasonable computational times. To address this difficulty, we tackle the problem from both computational and statistical perspectives. On the one hand, we propose a concrete early stopping criterion to terminate the branch-and-bound process in order to obtain a near-optimal solution to the mixed-integer program, and establish the consistency of this approximate solution. On the other hand, we improve the existing formulations by replacing the linear "big-M" constraints that represent the relationship between the continuous and binary indicator variables with second-order conic constraints. Our numerical results demonstrate the effectiveness of the proposed approaches.
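    A small numeric sketch of why replacing big-M constraints with conic (perspective-style) constraints can tighten the continuous relaxation. The constants are arbitrary, and the constraint forms are a generic illustration of the big-M-versus-perspective idea, not the paper's exact formulation:

```python
# Big-M model: a continuous coefficient beta and binary indicator g satisfy
#   -M * g <= beta <= M * g,
# and the quadratic loss uses beta^2 directly. A conic alternative introduces
# an auxiliary s with s * g >= beta^2 (a rotated second-order cone once g is
# relaxed to [0, 1]) and puts s in the objective instead of beta^2. For
# fractional g, s must be at least beta^2 / g, which dominates beta^2 — a
# strictly tighter continuous relaxation at such points.
M, g, beta = 10.0, 0.25, 1.0

big_m_feasible = abs(beta) <= M * g   # True: big-M admits this fractional point
big_m_loss_bound = beta ** 2          # 1.0: loss contribution under big-M
conic_loss_bound = beta ** 2 / g      # 4.0: minimum s under the conic constraint

print(big_m_feasible, big_m_loss_bound, conic_loss_bound)  # True 1.0 4.0
```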