
    Performance bounds for coupled models

    No full text
    Two models are called "coupled" when a non-empty set of the underlying parameters is related through a differentiable implicit function. The goal is to estimate the parameters of both models by merging all datasets, that is, by processing them jointly. In this context, we show that the parameter estimation accuracy under a general class of dataset distributions always improves when compared to an equivalent uncoupled model. Finally, we illustrate our results with the fusion of multiple tensor datasets.
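
    A minimal numerical sketch of the idea (not the paper's derivation): a scalar parameter observed through two linear models coupled by a known relation, where merging the two datasets yields a lower mean-squared error than using one dataset alone. The model structure, noise levels, and the 0.5 coupling factor are illustrative assumptions.

```python
# Hedged illustration: joint vs. separate estimation of a shared scalar parameter
# theta observed through two coupled linear models (theta2 = 0.5 * theta1 assumed known).
import numpy as np

rng = np.random.default_rng(0)
theta_true = 2.0
n, trials = 50, 2000
err_sep, err_joint = [], []

for _ in range(trials):
    x1, x2 = rng.normal(size=n), rng.normal(size=n)
    y1 = theta_true * x1 + rng.normal(scale=1.0, size=n)        # model 1
    y2 = 0.5 * theta_true * x2 + rng.normal(scale=1.0, size=n)  # model 2, coupled

    # Uncoupled: estimate theta from dataset 1 only (ordinary least squares).
    theta_sep = (x1 @ y1) / (x1 @ x1)

    # Coupled: merge both datasets, exploiting the known coupling.
    X = np.concatenate([x1, 0.5 * x2])
    Y = np.concatenate([y1, y2])
    theta_joint = (X @ Y) / (X @ X)

    err_sep.append((theta_sep - theta_true) ** 2)
    err_joint.append((theta_joint - theta_true) ** 2)

print("MSE uncoupled:", np.mean(err_sep))
print("MSE coupled  :", np.mean(err_joint))  # consistently smaller
```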

    Sensor Scheduling for Energy-Efficient Target Tracking in Sensor Networks

    Full text link
    In this paper, we study the problem of tracking an object moving randomly through a network of wireless sensors. Our objective is to devise strategies for scheduling the sensors to optimize the tradeoff between tracking performance and energy consumption. We cast the scheduling problem as a Partially Observable Markov Decision Process (POMDP), where the control actions correspond to the set of sensors to activate at each time step. Using a bottom-up approach, we consider different sensing, motion, and cost models with increasing levels of difficulty. At the first level, the sensing regions of the different sensors do not overlap and the target is only observed within the sensing range of an active sensor. Then, we consider sensors with overlapping sensing ranges, such that the tracking error, and hence the actions of the different sensors, are tightly coupled. Finally, we consider scenarios wherein the target locations and the sensors' observations take values in continuous spaces. Exact solutions are generally intractable even for the simplest models due to the dimensionality of the information and action spaces. Hence, we devise approximate solution techniques and, in some cases, derive lower bounds on the optimal tradeoff curves. The generated scheduling policies, albeit suboptimal, often provide close-to-optimal energy-tracking tradeoffs.
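
    A heavily simplified sketch of the tradeoff (not the paper's POMDP solution): a target performs a random walk on a ring of cells, one cell per sensor, and a myopic rule activates a sensor only when the belief is sufficiently uncertain, trading tracking error against energy. The ring topology, perfect detections, and the threshold rule are assumptions made for brevity.

```python
# Hedged sketch: myopic sensor scheduling for a target doing a random walk on a
# ring of N cells. Each sensor covers one cell and, when activated, reports
# perfectly whether the target is there.
import numpy as np

rng = np.random.default_rng(1)
N, T, energy_cost = 10, 200, 0.3

belief = np.full(N, 1.0 / N)          # posterior over the target position
pos, energy, track_err = int(rng.integers(N)), 0.0, 0.0

for _ in range(T):
    pos = (pos + int(rng.choice([-1, 1]))) % N            # target random walk
    belief = 0.5 * (np.roll(belief, 1) + np.roll(belief, -1))  # prediction step
    # Myopic rule: only pay for a measurement if the belief is uncertain enough.
    if belief.max() < 1.0 - energy_cost:
        s = int(belief.argmax())      # activate the sensor covering the most likely cell
        energy += 1.0
        if pos == s:                  # Bayes update for a perfect detection model
            belief[:] = 0.0
            belief[s] = 1.0
        else:
            belief[s] = 0.0
            belief /= belief.sum()
    track_err += 1.0 - belief[pos]    # probability mass missing from the true cell

print(f"avg tracking error {track_err / T:.3f}, avg energy {energy / T:.3f}")
```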

    Collective Decision-Making in Ideal Networks: The Speed-Accuracy Tradeoff

    Full text link
    We study collective decision-making in a model of human groups, with network interactions, performing two-alternative choice tasks. We focus on the speed-accuracy tradeoff, i.e., the tradeoff between a quick decision and a reliable decision, for individuals in the network. We model the evidence aggregation process across the network using a coupled drift diffusion model (DDM) and consider the free response paradigm in which individuals take their time to make the decision. We develop reduced DDMs as decoupled approximations to the coupled DDM and characterize their efficiency. We determine high-probability bounds on the error rate and the expected decision time for the reduced DDM. We show the effect of the decision-maker's location in the network on their decision-making performance under several threshold selection criteria. Finally, we extend the coupled DDM to the coupled Ornstein-Uhlenbeck model for decision-making in two-alternative choice tasks with recency effects, and to the coupled race model for decision-making in multiple-alternative choice tasks. Comment: to appear in IEEE TCN
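
    A hedged simulation sketch of a coupled DDM, assuming Laplacian-type coupling of the accumulators on a small illustrative graph; the thresholds, drift, and coupling gain are not taken from the paper. It estimates each node's error rate and mean decision time under the free response paradigm.

```python
# Hedged sketch of a coupled drift-diffusion model (DDM) on a 4-node network.
import numpy as np

rng = np.random.default_rng(2)
A = np.array([[0, 1, 1, 0],           # assumed interaction graph
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], float)
L = np.diag(A.sum(1)) - A              # graph Laplacian
n, drift, sigma, k = 4, 0.2, 1.0, 1.0
dt, thresh, trials = 1e-3, 1.0, 500

times, errors, counts = np.zeros(n), np.zeros(n), np.zeros(n)
for _ in range(trials):
    x = np.zeros(n)                    # evidence accumulated by each node
    done = np.zeros(n, bool)
    t = 0.0
    while not done.all() and t < 50.0:
        noise = rng.normal(scale=sigma * np.sqrt(dt), size=n)
        x = x + (drift - k * (L @ x)) * dt + noise   # coupled evidence accumulation
        t += dt
        hit = (~done) & (np.abs(x) >= thresh)        # free response: first crossing
        times[hit] += t
        errors[hit] += (x[hit] < 0)                  # drift > 0, so a negative crossing is an error
        counts[hit] += 1
        done |= hit

print("mean decision time per node:", times / counts)
print("error rate per node        :", errors / counts)
```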

    Optics Program Simplifies Analysis and Design

    Get PDF
    Engineers at Goddard Space Flight Center partnered with software experts at Mide Technology Corporation, of Medford, Massachusetts, through a Small Business Innovation Research (SBIR) contract to design the Disturbance-Optics-Controls-Structures (DOCS) Toolbox, a software suite for performing integrated modeling for multidisciplinary analysis and design. The DOCS Toolbox integrates various discipline models into a coupled process math model that can then predict system performance as a function of subsystem design parameters. The system can be optimized for performance; design parameters can be traded; parameter uncertainties can be propagated through the math model to develop error bounds on system predictions; and the model can be updated based on component-, subsystem-, or system-level data. The Toolbox also allows the definition of process parameters as explicit functions of the coupled model and includes a number of functions that analyze the coupled system model and provide for redesign. The product is being sold commercially by Nightsky Systems Inc., of Raleigh, North Carolina, a spinoff company that was formed by Mide specifically to market the DOCS Toolbox. Commercial applications include use by any contractors developing large space-based optical systems, including Lockheed Martin Corporation, The Boeing Company, and Northrop Grumman Corporation, as well as companies providing technical audit services, like General Dynamics Corporation.

    Error bounds on block Gauss-Seidel solutions of coupled multiphysics problems

    Get PDF
    Mathematical models in many fields often consist of coupled sub-models, each of which describes a different physical process. For many applications, the quantity of interest from these models may be written as a linear functional of the solution to the governing equations. Mature numerical solution techniques for the individual sub-models often exist. Rather than derive a numerical solution technique for the full coupled model, it is therefore natural to investigate whether these techniques may be used by coupling them in a block Gauss-Seidel fashion. In this study, we derive two a posteriori bounds for such linear functionals. These bounds may be used on each Gauss-Seidel iteration to estimate the error in the linear functional computed using the single-physics solvers, without actually solving the full, coupled problem. We demonstrate the use of the bound first on a model problem from linear algebra, and then on a linear ordinary differential equation example. We then investigate the effectiveness of the bound using a non-linear coupled fluid-temperature problem. One of the bounds derived is very sharp for most linear functionals considered, allowing us to predict very accurately when to terminate our block Gauss-Seidel iteration.
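
    A small sketch of the setting, assuming a generic 2x2 block linear system rather than a discretized multiphysics problem: block Gauss-Seidel alternates single-physics solves, and a crude adjoint/residual indicator tracks the error in a linear functional. Unlike the bounds derived in the paper, this indicator uses the full coupled adjoint solve and is meant only to illustrate the quantities involved.

```python
# Hedged sketch: block Gauss-Seidel on [[A, B], [C, D]] [u; v] = [f; g], with a
# Cauchy-Schwarz bound on the error in the linear functional J(u, v) = q^T [u; v].
import numpy as np

rng = np.random.default_rng(3)
n = 5
A = np.eye(n) * 4 + rng.normal(scale=0.1, size=(n, n))   # "fluid" sub-model
D = np.eye(n) * 4 + rng.normal(scale=0.1, size=(n, n))   # "temperature" sub-model
B = rng.normal(scale=0.2, size=(n, n))                    # coupling blocks
C = rng.normal(scale=0.2, size=(n, n))
f, g, q = rng.normal(size=n), rng.normal(size=n), rng.normal(size=2 * n)

K = np.block([[A, B], [C, D]])
J_exact = q @ np.linalg.solve(K, np.concatenate([f, g]))
z = np.linalg.solve(K.T, q)            # adjoint (dual) solution for the functional

u, v = np.zeros(n), np.zeros(n)
for it in range(10):
    u = np.linalg.solve(A, f - B @ v)  # single-physics solve for sub-model 1
    v = np.linalg.solve(D, g - C @ u)  # single-physics solve for sub-model 2
    w = np.concatenate([u, v])
    residual = np.concatenate([f, g]) - K @ w
    bound = np.linalg.norm(z) * np.linalg.norm(residual)  # |J error| <= ||z|| ||r||
    print(f"iter {it}: J error = {abs(q @ w - J_exact):.2e}, bound = {bound:.2e}")
```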

    Efficient Finite Difference Method for Computing Sensitivities of Biochemical Reactions

    Full text link
    Sensitivity analysis of biochemical reactions aims at quantifying the dependence of the reaction dynamics on the reaction rates. The computation of the parameter sensitivities, however, poses many computational challenges when taking stochastic noise into account. This paper proposes a new finite difference method for efficiently computing sensitivities of biochemical reactions. We employ propensity bounds of reactions to couple the simulation of the nominal and perturbed processes. The exactness of the simulation is preserved by applying the rejection-based mechanism. At each simulation step, the nominal and perturbed processes under our coupling strategy are synchronized and often jump together, increasing their positive correlation and hence reducing the variance of the estimator. The distinctive feature of our approach, in comparison with existing coupling approaches, is that it only needs to maintain a single data structure storing the propensity bounds of reactions during the simulation of the nominal and perturbed processes. Our approach allows computing sensitivities of many reaction rates simultaneously. Moreover, the data structure does not need to be updated frequently, which reduces the computational cost. This feature is especially useful when applied to large reaction networks. We benchmark our method on biological reaction models to demonstrate its applicability and efficiency. Comment: 29 pages with 6 figures, 2 tables
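
    A simplified sketch of coupled finite-difference sensitivity estimation for a birth-death (immigration-death) process, using a shared stream of uniform random numbers to correlate the nominal and perturbed simulations. This common-random-number coupling is a stand-in for, not an implementation of, the paper's rejection-based propensity-bound scheme; the rate constants and horizon are illustrative.

```python
# Hedged sketch: finite-difference sensitivity of E[X(T)] w.r.t. the birth rate k
# for the process X -> X+1 (rate k), X -> X-1 (rate mu*X), with coupled simulations.
import numpy as np

rng = np.random.default_rng(4)
mu, T_end, x0 = 0.1, 10.0, 0

def ssa(k, uniforms):
    """Gillespie SSA driven by a fixed stream of uniform pairs (the coupling device)."""
    x, t, i = x0, 0.0, 0
    while i + 1 < len(uniforms):
        a1, a2 = k, mu * x                 # reaction propensities
        a0 = a1 + a2
        tau = -np.log(uniforms[i]) / a0    # time to the next reaction
        if t + tau > T_end:
            break
        t += tau
        x += 1 if uniforms[i + 1] * a0 < a1 else -1
        i += 2
    return x

k, h, trials = 1.0, 0.05, 2000
est = []
for _ in range(trials):
    u = rng.random(4000)                   # shared randomness couples both runs
    est.append((ssa(k + h, u) - ssa(k, u)) / h)

print("FD sensitivity d E[X(T)]/dk ~", np.mean(est))
print("exact value for this model   ", (1 - np.exp(-mu * T_end)) / mu)
```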

    Exploring multimodal data fusion through joint decompositions with flexible couplings

    Full text link
    A Bayesian framework is proposed to define flexible coupling models for joint tensor decompositions of multiple data sets. Under this framework, a natural formulation of the data fusion problem is to cast it in terms of a joint maximum a posteriori (MAP) estimator. Data-driven scenarios of joint posterior distributions are provided, including general Gaussian priors and non-Gaussian coupling priors. We present and discuss implementation issues of algorithms used to obtain the joint MAP estimator. We also show how this framework can be adapted to tackle the problem of joint decompositions of large datasets. In the case of a conditional Gaussian coupling with a linear transformation, we give theoretical bounds on the data fusion performance using the Bayesian Cramér-Rao bound. Simulations are reported for hybrid coupling models ranging from simple additive Gaussian models to Gamma-type models with positive variables, and to the coupling of data sets that are inherently of different sizes due to the different resolutions of the measurement devices. Comment: 15 pages, 7 figures, revised version
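
    A toy sketch of a flexibly coupled joint decomposition, assuming two data matrices whose factor matrices are softly tied by a Gaussian coupling penalty, so the MAP objective reduces to a regularized least-squares problem; the paper treats tensors and richer coupling priors. Dimensions, noise level, and the penalty weight lam are illustrative.

```python
# Hedged sketch: joint MAP-style factorization of two matrices whose factors A1, A2
# are softly coupled by a Gaussian prior A1 ~ A2 (penalty weight lam), via gradient descent.
import numpy as np

rng = np.random.default_rng(5)
m, n1, n2, r = 20, 30, 25, 3
A_true = rng.normal(size=(m, r))
Y1 = A_true @ rng.normal(size=(r, n1)) + 0.1 * rng.normal(size=(m, n1))
Y2 = A_true @ rng.normal(size=(r, n2)) + 0.1 * rng.normal(size=(m, n2))

A1, A2 = rng.normal(size=(m, r)), rng.normal(size=(m, r))
S1, S2 = rng.normal(size=(r, n1)), rng.normal(size=(r, n2))
lam, lr = 1.0, 1e-3

for _ in range(2000):
    R1, R2 = A1 @ S1 - Y1, A2 @ S2 - Y2          # residuals of each data set
    A1 -= lr * (R1 @ S1.T + lam * (A1 - A2))      # data fit + coupling prior
    A2 -= lr * (R2 @ S2.T + lam * (A2 - A1))
    S1 -= lr * (A1.T @ R1)
    S2 -= lr * (A2.T @ R2)

loss = (np.linalg.norm(A1 @ S1 - Y1) ** 2 + np.linalg.norm(A2 @ S2 - Y2) ** 2
        + lam * np.linalg.norm(A1 - A2) ** 2)
print("joint MAP objective:", loss)
```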

    A Linear Programming Approach to Error Bounds for Random Walks in the Quarter-plane

    Full text link
    We consider the approximation of the performance of random walks in the quarter-plane. The approximation is in terms of a random walk with a product-form stationary distribution, which is obtained by perturbing the transition probabilities along the boundaries of the state space. A Markov reward approach is used to bound the approximation error. The main contribution of this work is the formulation of a linear program that provides the approximation error.
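
    A brute-force illustration of the setup, assuming a simple truncated quarter-plane walk: the "original" walk differs from a product-form walk only in its behaviour on the horizontal boundary, and the error in a performance measure F = E[i + j] is computed directly. The paper's linear program bounds this error without ever solving the original walk; all transition rates here are illustrative.

```python
# Hedged sketch: a quarter-plane random walk, truncated to an N x N grid, whose
# boundary perturbation is reset to recover a walk with a product-form stationary
# distribution; the resulting error in F = E[i + j] is computed by brute force.
import numpy as np

N = 30                                  # truncation of the quarter-plane
lam1, lam2, mu1, mu2 = 0.15, 0.10, 0.25, 0.20

def stationary(up_rate_on_axis):
    """Stationary distribution of the truncated walk; blocked moves become self-loops."""
    P = np.zeros((N * N, N * N))
    idx = lambda i, j: i * N + j
    for i in range(N):
        for j in range(N):
            lam2_here = up_rate_on_axis if j == 0 else lam2   # boundary behaviour
            moves = [(i + 1, j, lam1), (i, j + 1, lam2_here),
                     (i - 1, j, mu1), (i, j - 1, mu2)]
            stay = 1.0
            for (ii, jj, p) in moves:
                if 0 <= ii < N and 0 <= jj < N:
                    P[idx(i, j), idx(ii, jj)] += p
                    stay -= p
            P[idx(i, j), idx(i, j)] += stay
    # Solve pi P = pi with sum(pi) = 1 as a least-squares problem.
    A = np.vstack([P.T - np.eye(N * N), np.ones(N * N)])
    b = np.zeros(N * N + 1); b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi.reshape(N, N)

F = lambda pi: sum(pi[i, j] * (i + j) for i in range(N) for j in range(N))

pi_orig = stationary(up_rate_on_axis=0.18)   # original walk (no product form)
pi_prod = stationary(up_rate_on_axis=lam2)   # product-form approximating walk
print("F original     :", F(pi_orig))
print("F product-form :", F(pi_prod))
print("approximation error:", abs(F(pi_orig) - F(pi_prod)))
```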