9,021 research outputs found

    Mathematical Basis for Physical Inference

    Full text link
    While the axiomatic introduction of a probability distribution over a space is common, its use for making predictions, using physical theories and prior knowledge, suffers from a lack of formalization. We propose to introduce, in the space of all probability distributions, two operations, the OR and the AND operation, that give this space the structure necessary for making inferences about possible values of physical parameters. While physical theories are often assumed to be analytical, we argue that consistent inference requires replacing analytical theories with probability distributions over the parameter space, and we propose a systematic way of obtaining such "theoretical correlations", using the OR operation on the results of physical experiments. Predicting the outcome of an experiment or solving "inverse problems" are then examples of the use of the AND operation. This leads to a simple and complete mathematical basis for general physical inference. Comment: 24 pages, 4 figures
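    A minimal numerical sketch of one plausible reading of these operations, assuming (this is an assumption, not the paper's definition) that the OR of several densities is their normalized sum and the AND their normalized pointwise product over a discretized parameter axis:

```python
import numpy as np

x = np.linspace(-5.0, 5.0, 2001)   # parameter axis
dx = x[1] - x[0]

def normalize(p):
    """Rescale a nonnegative density so it integrates to one on the grid."""
    return p / (p.sum() * dx)

def gaussian(mu, sigma):
    return normalize(np.exp(-0.5 * ((x - mu) / sigma) ** 2))

def OR(*densities):
    """Disjunction: pool the results of several experiments into one density."""
    return normalize(np.sum(densities, axis=0))

def AND(*densities):
    """Conjunction: combine a theoretical density with new data."""
    return normalize(np.prod(densities, axis=0))

theory = OR(gaussian(0.0, 1.0), gaussian(0.5, 0.8))   # a "theoretical correlation"
measurement = gaussian(0.3, 0.5)
posterior = AND(theory, measurement)                  # inference on the parameter
print("posterior mean ≈", float((x * posterior).sum() * dx))
```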

    A heuristic approach for path planning for redundant robots

    Full text link
    A new method to solve trajectory generation for a redundant manipulator is proposed. It avoids traditional, computationally intensive methods by relying on human experience. The proposed method uses a fuzzy logic controller to generate the magnitude of the angles needed to move the end effector to the next target point. The inputs to the controller are the desired displacement of the end effector and the elements of the Jacobian matrix that correspond to the considered joint, while the output is the angle magnitude of the joint needed to reach the target point. An algorithm is used to determine the sign of the output from the fuzzy logic controller. Inverse kinematics is used to bring the end effector to the target point. Several fuzzy logic controllers combined with heuristic algorithms are used to avoid obstacles in the workspace and to avoid self-collision of the links.
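    A toy sketch of such a controller is given below; the triangular membership functions, rule base, and universes of discourse are illustrative assumptions rather than the authors' tuned design, and the sign logic and inverse kinematics loop are omitted:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def joint_step(displacement, jacobian_element):
    """Fuzzy estimate of the joint-angle magnitude (sign handled separately)."""
    d = min(abs(displacement), 0.1)       # normalized displacement universe
    j = min(abs(jacobian_element), 1.0)   # normalized |J_ij| universe
    d_small, d_large = tri(d, -0.1, 0.0, 0.1), tri(d, 0.0, 0.1, 0.2)
    j_small, j_large = tri(j, -1.0, 0.0, 1.0), tri(j, 0.0, 1.0, 2.0)

    angle = np.linspace(0.0, 0.2, 401)    # candidate magnitudes [rad]
    small = tri(angle, -0.1, 0.0, 0.1)
    medium = tri(angle, 0.0, 0.1, 0.2)
    large = tri(angle, 0.1, 0.2, 0.3)

    # Illustrative Mamdani rule base (min = AND), clipped and aggregated by max.
    agg = np.maximum.reduce([
        np.minimum(d_small, small),                 # small move -> small step
        np.minimum(min(d_large, j_large), medium),  # large move, effective joint
        np.minimum(min(d_large, j_small), large),   # large move, weak joint
    ])
    return float((angle * agg).sum() / (agg.sum() + 1e-12))  # centroid defuzzification

print(joint_step(displacement=0.08, jacobian_element=0.2))
```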

    Strange Attractors in Dissipative Nambu Mechanics : Classical and Quantum Aspects

    Full text link
    We extend the framework of Nambu-Hamiltonian mechanics to include dissipation in R^3 phase space. We demonstrate that it accommodates the phase space dynamics of low dimensional dissipative systems such as the much studied Lorenz and Rössler strange attractors, as well as the more recent constructions of Chen and Leipnik-Newton. The rotational, volume preserving part of the flow preserves in time a family of two intersecting surfaces, the so-called Nambu Hamiltonians. They foliate the entire phase space and are, in turn, deformed in time by dissipation, which represents the irrotational part of the flow. It is given by the gradient of a scalar function and is responsible for the emergence of the strange attractors. Based on our recent work on Quantum Nambu Mechanics, we provide an explicit quantization of the Lorenz attractor through the introduction of non-commutative phase space coordinates as Hermitian N × N matrices in R^3. They satisfy the commutation relations induced by one of the two Nambu Hamiltonians, the second one generating a unique time evolution. Dissipation is incorporated quantum mechanically in a self-consistent way having the correct classical limit, without the introduction of external degrees of freedom. Due to its phase space volume contraction it violates the quantum commutation relations. We demonstrate that the Heisenberg-Nambu evolution equations for the quantum Lorenz system give rise to an attracting ellipsoid in the 3N^2-dimensional phase space. Comment: 35 pages, 4 figures, LaTeX
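    As a purely classical illustration, the sketch below integrates the Lorenz system written as the sum of a divergence-free part and a gradient (dissipative) part, the kind of rotational/irrotational split described above; the explicit Nambu Hamiltonians and the quantum construction of the paper are not reproduced here:

```python
import numpy as np

sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0

def v_rot(s):
    """Divergence-free ("rotational") part of the Lorenz flow."""
    x, y, z = s
    return np.array([sigma * y, rho * x - x * z, x * y])

def v_dis(s):
    """Dissipative part: gradient of D = -(sigma*x^2 + y^2 + beta*z^2)/2."""
    x, y, z = s
    return np.array([-sigma * x, -y, -beta * z])

def f(s):
    return v_rot(s) + v_dis(s)   # sums to the usual Lorenz equations

def rk4_step(s, dt):
    k1 = f(s)
    k2 = f(s + 0.5 * dt * k1)
    k3 = f(s + 0.5 * dt * k2)
    k4 = f(s + dt * k3)
    return s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

s = np.array([1.0, 1.0, 1.0])
for _ in range(10000):           # the trajectory settles onto the strange attractor
    s = rk4_step(s, 1e-3)
print("state after t = 10:", s)
print("constant divergence of the flow:", -(sigma + 1.0 + beta))
```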

    Fuzzy System Identification Based Upon a Novel Approach to Nonlinear Optimization

    Get PDF
    Fuzzy systems are often used to model the behavior of nonlinear dynamical systems in process control industries because the model is linguistic in nature, uses a natural-language rule set, and because they can be included in control laws that meet the design goals. However, because the rigorous study of fuzzy logic is relatively recent, there is a shortage of well-defined and understood mechanisms for the design of a fuzzy system. One of the greatest challenges in fuzzy modeling is to determine a suitable structure, parameters, and rules that minimize an appropriately chosen error between the fuzzy system, a mathematical model, and the target system. Numerous methods for establishing a suitable fuzzy system have been proposed; however, none are able to demonstrate the existence of a structure, parameters, or rule base that will minimize the error between the fuzzy system and the target system. The piecewise linear approximator (PLA) is a mathematical construct that can be used to approximate an input-output data set with a series of connected line segments. The number of segments in the PLA is generally selected by the designer to meet a given error criterion. Increasing the number of segments will generally improve the approximation. If the location of the breakpoints between segments is known, it is a straightforward process to select the PLA parameters to minimize the error. However, if the location of the breakpoints is not known, a mechanism is required to determine their locations. While algorithms exist that will determine the location of the breakpoints, they do not minimize the error between data and the model. This work will develop theory that shows that an optimal solution to this nonlinear optimization problem exists and demonstrates how it can be applied to fuzzy modeling. This work also demonstrates that a fuzzy system restricted to a particular class of input membership functions, output membership functions, conjunction operator, and defuzzification technique is equivalent to a PLA. Furthermore, this work develops a new nonlinear optimization technique that minimizes the error between a PLA and an arbitrary one-dimensional set of input-output data and solves the optimal breakpoint problem. This nonlinear optimization technique minimizes the approximation error of several classes of nonlinear functions leading up to the generalized PLA. While direct application of this technique is computationally intensive, several paths are available for investigation that may ease this limitation. An algorithm is developed based on this optimization theory that is significantly more computationally tractable. Several potential applications of this work are discussed, including the ability to model the nonlinear portions of Hammerstein and Wiener systems.
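    The sketch below illustrates the breakpoint problem for a single-breakpoint PLA: for a fixed breakpoint the continuous two-piece fit is an ordinary linear least-squares problem, and a scan over candidate breakpoints exposes the nonlinear optimization that the dissertation addresses. The hinge basis and the brute-force scan are illustrative assumptions, not the author's algorithm:

```python
import numpy as np

def two_piece_fit(x, y, b):
    """Least-squares continuous PLA with one breakpoint at b; returns (SSE, coefficients)."""
    # Basis: 1, x, and the hinge max(x - b, 0), which keeps the two pieces continuous at b.
    A = np.column_stack([np.ones_like(x), x, np.maximum(x - b, 0.0)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(np.sum((A @ coef - y) ** 2)), coef

x = np.linspace(0.0, 1.0, 200)
y = np.tanh(8.0 * (x - 0.4))                  # nonlinear target to approximate

candidates = np.linspace(0.05, 0.95, 181)     # candidate breakpoint locations
errors = [two_piece_fit(x, y, b)[0] for b in candidates]
best = candidates[int(np.argmin(errors))]
print("best breakpoint ≈", round(float(best), 3), "| SSE ≈", round(min(errors), 4))
```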

    Applying the structural equation model rule-based fuzzy system with genetic algorithm for trading in currency market

    Get PDF
    The present study uses the structural equation model (SEM) to analyze the correlations between various economic indices pertaining to latent variables, such as the New Taiwan Dollar (NTD) value, the United States Dollar (USD) value, and the USD index. In addition, a risk factor based on the volatility of currency returns is considered to develop a risk-controllable fuzzy inference system. Rational, linguistic, knowledge-based fuzzy rules are established from the SEM model and then optimized using a genetic algorithm. The empirical results reveal that the fuzzy logic trading system using the SEM indeed outperforms the buy-and-hold strategy. Moreover, when the risk factor of currency volatility is considered, the performance is significantly better. Notably, the trading strategy is clearly affected when the USD value or the volatility of currency returns shifts into either a higher or a lower state. Keywords: knowledge-based systems, fuzzy sets, structural equation model (SEM), genetic algorithm (GA), currency volatility
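    An illustrative sketch of the kind of fuzzy trading rule such a system builds (a USD signal and currency-return volatility mapped to a position); the membership shapes, rule base, and the parameters a genetic algorithm would tune are assumptions, not the paper's fitted system:

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def position(usd_signal, volatility, params=(0.0, 0.5, 1.0)):
    """Return a position in [-1, 1]; `params` are the knobs a GA would tune."""
    lo, mid, hi = params
    usd_up   = tri(usd_signal,  0.0,  1.0, 2.0)   # USD strengthening
    usd_down = tri(usd_signal, -2.0, -1.0, 0.0)   # USD weakening
    calm     = tri(volatility, -hi, lo, mid)      # low-risk regime
    risky    = tri(volatility, lo, mid, 2.0 * hi) # high-risk regime
    # Rules: trade with the USD signal when markets are calm, stand aside when risky.
    long_w, short_w, flat_w = min(usd_up, calm), min(usd_down, calm), risky
    total = long_w + short_w + flat_w + 1e-12
    return (long_w - short_w) / total             # weighted-average defuzzification

print(position(usd_signal=1.2, volatility=0.2))
```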

    Vision-Based Localization Algorithm Based on Landmark Matching, Triangulation, Reconstruction, and Comparison

    No full text
    Many generic position-estimation algorithms are vulnerable to ambiguity introduced by non-unique landmarks. In addition, the available high-dimensional image data is not fully exploited when these techniques are extended to vision-based localization. This paper presents the landmark matching, triangulation, reconstruction, and comparison (LTRC) global localization algorithm, which is reasonably immune to ambiguous landmark matches. It extracts natural landmarks for the (rough) matching stage before generating the list of possible position estimates through triangulation. Reconstruction and comparison then rank the possible estimates. The LTRC algorithm has been implemented in an interpreted language on a robot equipped with a panoramic vision system. Empirical data show a marked improvement in accuracy compared with the established random sample consensus method. LTRC is also robust against inaccurate map data.
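    The sketch below covers only the triangulation stage under a two-landmark, known-bearing simplification (an assumption for illustration); the landmark matching, reconstruction, and comparison stages of LTRC are not shown:

```python
import numpy as np

def bearing_to(robot, landmark):
    """Absolute bearing (radians, world frame) from the robot toward a landmark."""
    d = np.asarray(landmark, float) - np.asarray(robot, float)
    return np.arctan2(d[1], d[0])

def triangulate(p1, b1, p2, b2):
    """Intersect the rays cast back from two matched landmarks toward the robot."""
    d1 = -np.array([np.cos(b1), np.sin(b1)])   # direction from landmark 1 toward robot
    d2 = -np.array([np.cos(b2), np.sin(b2)])
    A = np.column_stack([d1, -d2])             # solve p1 + t1*d1 = p2 + t2*d2
    t = np.linalg.solve(A, np.asarray(p2, float) - np.asarray(p1, float))
    return np.asarray(p1, float) + t[0] * d1

true_pos = np.array([1.0, 1.0])                # ground truth, used only to fake bearings
L1, L2 = np.array([4.0, 1.0]), np.array([1.0, 5.0])
est = triangulate(L1, bearing_to(true_pos, L1), L2, bearing_to(true_pos, L2))
print(est)                                     # -> approximately [1. 1.]
```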

    Epistemic Uncertainty Quantification in Scientific Models

    Get PDF
    In the field of uncertainty quantification (UQ), epistemic uncertainty often refers to the kind of uncertainty whose complete probabilistic description is not available, largely due to our lack of knowledge about the uncertainty. Quantifying the impact of epistemic uncertainty is naturally difficult, because most existing stochastic tools rely on the specification of probability distributions and thus do not readily apply to epistemic uncertainty, and few studies and methods deal with it. A recent contribution is [J. Jakeman, M. Eldred, D. Xiu, Numerical approach for quantification of epistemic uncertainty, J. Comput. Phys. 229 (2010) 4648-4663], where a framework for the numerical treatment of epistemic uncertainty was proposed. In this paper, we first present a new method, similar to that of Jakeman et al. but significantly extending its capabilities. Most notably, the new method (1) does not require the encapsulation problem to be posed on a bounded domain such as a hypercube; and (2) does not require the solution of the encapsulation problem to converge pointwise. In the current formulation, the encapsulation problem may reside in an unbounded domain, and, more importantly, its numerical approximation may be sought in the L^p norm. These features make the new approach more flexible and amenable to practical implementation. Both the mathematical framework and numerical analysis are presented to demonstrate the effectiveness of the new approach. We then apply the method to one of the more restrictive uncertainty models, fuzzy logic, where the p-distance and the weighted expected value and variance are defined to assess the accuracy of the solutions. Finally, we briefly introduce our future work on epistemic uncertainty quantification using evidence theory.
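    A hedged sketch of the encapsulation idea: the model response is approximated once over a sampling of the epistemic parameter (here an unbounded Gaussian sampling and a least-squares polynomial surrogate, both illustrative choices), and any membership function or distribution supplied later is post-processed against that single surrogate:

```python
import numpy as np

def model(z):
    """Stand-in for an expensive forward model of one epistemic parameter z."""
    return np.exp(-z) * np.sin(3.0 * z)

# 1) Offline: sample an encapsulating set of the epistemic parameter and fit a
#    surrogate; no probability distribution is assumed at this stage.
z_samples = np.random.default_rng(0).normal(0.0, 2.0, 400)    # unbounded sampling
coeffs = np.polyfit(z_samples, model(z_samples), deg=8)       # least-squares fit
surrogate = np.poly1d(coeffs)

# 2) Online: once an epistemic description arrives (here a fuzzy triangular
#    membership on [0, 2]), compute weighted summaries from the surrogate alone.
z = np.linspace(0.0, 2.0, 2001)
dz = z[1] - z[0]
membership = np.maximum(1.0 - np.abs(z - 1.0), 0.0)
w = membership / (membership.sum() * dz)                      # normalized weight
mean = (w * surrogate(z)).sum() * dz                          # weighted expected value
var = (w * (surrogate(z) - mean) ** 2).sum() * dz             # weighted variance
print("weighted expected value ≈", round(mean, 4), "| weighted variance ≈", round(var, 4))
```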

    The fully connected N-dimensional skeleton: probing the evolution of the cosmic web

    Full text link
    A method to compute the full hierarchy of the critical subsets of a density field is presented. It is based on a watershed technique and uses a probability propagation scheme to improve the quality of the segmentation by circumventing the discreteness of the sampling. It can be applied within spaces of arbitrary dimension and geometry. This recursive segmentation of space yields, for a d-dimensional space, a succession of d−1 n-dimensional subspaces that fully characterize the topology of the density field. The final 1D manifold of the hierarchy is the fully connected network of the primary critical lines of the field: the skeleton. It corresponds to the subset of lines linking maxima to saddle points, and provides a definition of the filaments that compose the cosmic web as a precise physical object, making it possible to compute any of its properties, such as length, curvature, and connectivity. When the skeleton extraction is applied to the initial conditions of cosmological N-body simulations and to their present day non-linear counterparts, it is shown that the time evolution of the cosmic web, as traced by the skeleton, is well accounted for by the Zel'dovich approximation. Comparing this skeleton to the initial skeleton undergoing the Zel'dovich mapping shows that two effects compete during the formation of the cosmic web: on the one hand, a general dilation of the larger filaments that is captured by a simple deformation of the skeleton of the initial conditions; on the other hand, the shrinking, fusion and disappearance of the more numerous smaller filaments. Other applications of the N-dimensional skeleton and its peak patch hierarchy are discussed. Comment: Accepted for publication in MNRAS
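    As a toy illustration of the first level of such a hierarchy, the sketch below assigns each cell of a 2D density field to the maximum reached by steepest ascent, a crude peak-patch segmentation; the paper's method is a watershed with probability propagation in arbitrary dimension, which this deterministic hill-climb does not reproduce:

```python
import numpy as np

rng = np.random.default_rng(1)
density = rng.random((64, 64))
# Smooth the field with a separable box filter so that distinct peaks emerge.
for axis in (0, 1):
    density = (np.roll(density, 1, axis) + density + np.roll(density, -1, axis)) / 3.0

def ascend(field, i, j):
    """Follow the steepest-ascent neighbour until a local maximum is reached."""
    while True:
        window = field[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
        di, dj = np.unravel_index(np.argmax(window), window.shape)
        ni, nj = max(i - 1, 0) + di, max(j - 1, 0) + dj
        if (ni, nj) == (i, j):
            return i, j
        i, j = ni, nj

labels = {}
patch = np.zeros(density.shape, dtype=int)
for i in range(density.shape[0]):
    for j in range(density.shape[1]):
        peak = ascend(density, i, j)
        patch[i, j] = labels.setdefault(peak, len(labels))  # one label per maximum
print("number of peak patches:", len(labels))
```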