
    Environmental boundary tracking and estimation using multiple autonomous vehicles

    In this paper, we develop a framework for environmental boundary tracking and estimation by modeling the boundary as a hidden Markov model (HMM) with separate observations collected from multiple sensing vehicles. For each vehicle, a tracking algorithm is developed based on Page's cumulative sum algorithm (CUSUM), a method for change-point detection, so that individual vehicles can autonomously track the boundary in a density field with measurement noise. Based on the data collected from the sensing vehicles and prior knowledge of the dynamic model of boundary evolution, we estimate the boundary by solving an optimization problem whose cost function accounts for both the prediction and the current observation. Examples and simulation results are presented to verify the effectiveness of this approach.
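
    Below is a minimal Python sketch of the kind of change-point statistic Page's CUSUM test uses: a one-sided detector for a shift in the mean of noisy scalar readings, such as a vehicle's concentration measurements as it crosses the boundary. The mean levels, noise level, and threshold are illustrative assumptions, not parameters from the paper, and the multi-vehicle estimation step is not reproduced.

    import numpy as np

    def cusum_crossing(measurements, mu_out, mu_in, sigma, threshold):
        """One-sided CUSUM (Page's test) for a shift from mean mu_out
        (outside the region) to mu_in (inside) in Gaussian readings.
        Returns the first index at which the statistic exceeds `threshold`,
        or None if no change is detected."""
        s = 0.0
        for k, z in enumerate(measurements):
            # Log-likelihood ratio increment for a Gaussian mean shift.
            llr = ((z - mu_out) ** 2 - (z - mu_in) ** 2) / (2.0 * sigma ** 2)
            s = max(0.0, s + llr)  # reset at zero, per Page's rule
            if s > threshold:
                return k
        return None

    # Toy usage: the vehicle crosses the boundary after sample 50.
    rng = np.random.default_rng(0)
    readings = np.concatenate([rng.normal(0.0, 0.5, 50), rng.normal(2.0, 0.5, 50)])
    print(cusum_crossing(readings, mu_out=0.0, mu_in=2.0, sigma=0.5, threshold=5.0))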

    Consistent Dynamic Mode Decomposition

    We propose a new method for computing Dynamic Mode Decomposition (DMD) evolution matrices, which we use to analyze dynamical systems. Unlike the majority of existing methods, our approach is based on a variational formulation consisting of data-alignment penalty terms and constitutive orthogonality constraints. Our method does not make any assumptions about the structure or size of the data, and is thus applicable to a wide range of problems, including nonlinear scenarios and extremely small observation sets. In addition, our technique is robust to noise that is independent of the dynamics, and it does not require the input data to be sequential. Our key idea is to introduce a regularization term for the forward and backward dynamics. The resulting minimization problem is solved efficiently using the Alternating Direction Method of Multipliers (ADMM), which requires two Sylvester equation solves per iteration. Our numerical scheme converges empirically and is similar to a provably convergent ADMM scheme. We compare our approach to various state-of-the-art methods on several benchmark dynamical systems.
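
    The abstract does not spell out the variational/ADMM scheme, so the following Python sketch only illustrates the forward and backward least-squares DMD operators that such a forward/backward regularizer couples; it is a baseline under assumed snapshot matrices X and Y (columns x_k and y_k with y_k ≈ A x_k), not the authors' method.

    import numpy as np

    def forward_backward_dmd(X, Y):
        """Least-squares forward and backward evolution operators from
        snapshot pairs (columns of X and Y).  The norm of B @ A - I is a
        rough consistency measure; the paper's variational formulation
        enforces this kind of forward/backward agreement explicitly."""
        A = Y @ np.linalg.pinv(X)   # forward:  minimizes ||Y - A X||_F
        B = X @ np.linalg.pinv(Y)   # backward: minimizes ||X - B Y||_F
        eigvals, modes = np.linalg.eig(A)
        consistency = np.linalg.norm(B @ A - np.eye(A.shape[0]))
        return A, B, eigvals, modes, consistency

    # Toy usage on a noisy linear system x_{k+1} = A_true x_k.
    rng = np.random.default_rng(1)
    A_true = np.array([[0.9, -0.2], [0.2, 0.9]])
    X = rng.normal(size=(2, 200))
    Y = A_true @ X + 0.01 * rng.normal(size=(2, 200))
    A, B, eigvals, modes, c = forward_backward_dmd(X, Y)
    print(np.abs(eigvals), c)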

    Zero Shot Learning with the Isoperimetric Loss

    We introduce the isoperimetric loss as a regularization criterion for learning the map from a visual representation to a semantic embedding, to be used to transfer knowledge to unknown classes in a zero-shot learning setting. We use a pre-trained deep neural network model as a visual representation of image data, a Word2Vec embedding of class labels, and linear maps between the visual and semantic embedding spaces. However, the spaces themselves are not linear, and we postulate the sample embedding to be populated by noisy samples near otherwise smooth manifolds. We exploit the graph structure defined by the sample points to regularize the estimates of the manifolds by inferring the graph connectivity using a generalization of the isoperimetric inequalities from Riemannian geometry to graphs. Surprisingly, this regularization alone, paired with the simplest baseline model, outperforms the state-of-the-art among fully automated methods in zero-shot learning benchmarks such as AwA and CUB. This improvement is achieved solely by learning the structure of the underlying spaces by imposing regularity. Comment: Accepted to AAAI-2
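
    As a hedged illustration of the "simplest baseline model" the abstract pairs with its regularizer, here is a Python sketch of a ridge-regression linear map from pre-trained visual features to Word2Vec-style class embeddings, with unseen classes predicted by nearest embedding. The isoperimetric graph regularization itself is not reproduced, and all array shapes and data are placeholders.

    import numpy as np

    def fit_linear_map(V, S, lam=1e-2):
        """Ridge-regression map W from visual features V (n x d_v) to class
        embeddings S (n x d_s): argmin_W ||V W - S||_F^2 + lam ||W||_F^2."""
        d_v = V.shape[1]
        return np.linalg.solve(V.T @ V + lam * np.eye(d_v), V.T @ S)

    def predict_unseen(W, v, class_embeddings):
        """Map an image's features into the semantic space and return the
        index of the most similar (cosine) unseen-class embedding."""
        s = v @ W
        sims = class_embeddings @ s / (
            np.linalg.norm(class_embeddings, axis=1) * np.linalg.norm(s) + 1e-12)
        return int(np.argmax(sims))

    # Placeholder data: 100 training images, 512-d features, 300-d embeddings.
    rng = np.random.default_rng(2)
    V, S = rng.normal(size=(100, 512)), rng.normal(size=(100, 300))
    W = fit_linear_map(V, S)
    unseen = rng.normal(size=(10, 300))   # embeddings of 10 unseen classes
    print(predict_unseen(W, rng.normal(size=512), unseen))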

    Canale di Reno and the Paladozza: historical research and redevelopment project

    Urban-scale redevelopment design for Via Riva di Reno, Piazzale Azzarita, the Giardino Decorato Onore Civile, and Piazza Resistenza, together with historical research on the Bolognese canal system, the Porto area, and the Palazzetto dello Sport.

    Logic Programming approaches for routing fault-free and maximally-parallel Wavelength Routed Optical Networks on Chip (Application paper)

    One promising trend in digital system integration consists of boosting on-chip communication performance by means of silicon photonics, thus materializing the so-called Optical Networks-on-Chip (ONoCs). Among these, wavelength routing can be used to route a signal to its destination by uniquely associating a routing path with the wavelength of the optical carrier. Such wavelengths should be chosen so as to minimize interference among optical channels and to avoid routing faults. As a result, selecting the physical parameters of such networks requires solving complex constrained optimization problems. In previous work, published in the proceedings of the International Conference on Computer-Aided Design, we proposed and solved the problem of computing the maximum parallelism obtainable in the communication between any two endpoints while avoiding misrouting of optical signals. The underlying technology, only briefly mentioned in that paper, is Answer Set Programming (ASP). In this work, we detail the ASP approach we used to solve this problem. Another important design issue is to select the wavelengths of the optical carriers so that they are spread across the available spectrum, in order to reduce the likelihood that unintended routing faults arise due to imperfections in the manufacturing process. We show how to address this problem in Constraint Logic Programming on Finite Domains (CLP(FD)). This paper is under consideration for possible publication in Theory and Practice of Logic Programming. Comment: Paper presented at the 33rd International Conference on Logic Programming (ICLP 2017), Melbourne, Australia, August 28 to September 1, 2017. 16 pages, LaTeX, 5 figures
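
    The paper's models are written in ASP and CLP(FD), which are not reproduced here. As a rough, language-agnostic illustration of the second problem (spreading carrier wavelengths across the spectrum), the Python sketch below brute-forces the assignment that maximizes the minimum pairwise channel spacing among candidates passing a feasibility check. The 16-channel grid and the trivial feasibility predicate are assumptions, not the paper's constraints.

    from itertools import combinations

    def best_spread(channels, n_carriers, feasible):
        """Among all size-n_carriers subsets of `channels` satisfying
        `feasible`, return the one with the largest minimum pairwise spacing
        (a crude stand-in for the CLP(FD) optimization in the paper)."""
        best, best_gap = None, -1
        for cand in combinations(sorted(channels), n_carriers):
            if not feasible(cand):
                continue
            gap = min(b - a for a, b in zip(cand, cand[1:]))
            if gap > best_gap:
                best, best_gap = cand, gap
        return best, best_gap

    # Hypothetical 16-channel grid; a real feasibility predicate would encode
    # the routing-fault constraints, here replaced by a trivial placeholder.
    grid = range(16)
    print(best_spread(grid, 4, feasible=lambda c: 0 in c))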

    Characterization of radially symmetric finite time blowup in multidimensional aggregation equations

    This paper studies the transport of a mass $\mu$ in $\mathbb{R}^d$, $d \geq 2$, by a flow field $v = -\nabla K * \mu$. We focus on kernels $K = |x|^\alpha/\alpha$ for $2-d \leq \alpha < 2$, for which smooth densities are known to develop singularities in finite time. For this range we prove the existence for all time of radially symmetric measure solutions that are monotone decreasing as a function of the radius, thus allowing for continuation of the solution past the blowup time. The monotonicity constraint on the data is consistent with the typical blowup profiles observed in recent numerical studies of these singularities. We prove that monotonicity is preserved for all time, even after blowup, in contrast to the case $\alpha > 2$, where radially symmetric solutions are known to lose monotonicity. In the case of the Newtonian potential ($\alpha = 2-d$), under the assumption of radial symmetry the equation can be transformed into the inviscid Burgers equation on a half line. This enables us to prove preservation of monotonicity using the classical theory of conservation laws. In the case $2-d < \alpha < 2$ and at the critical exponent $p$ we exhibit initial data in $L^p$ for which the solution immediately develops a Dirac mass singularity. This extends recent work on the local ill-posedness of solutions at the critical exponent. Comment: 30 pages
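
    A sketch (with constants up to normalization, not quoted from the paper) of the radial reduction mentioned above for the Newtonian case: writing the equation in terms of the mass inside a ball of radius $r$ turns it into an inviscid Burgers equation on a half line.

    Let $m(r,t) = \int_{|x| \le r} \mu(x,t)\,dx$. For radial data and the Newtonian
    kernel, Newton's theorem gives
    \[
      v(x,t) = -\frac{m(|x|,t)}{\sigma_d\,|x|^{d-1}}\,\frac{x}{|x|},
    \]
    where $\sigma_d$ is the surface area of the unit sphere in $\mathbb{R}^d$. Since the
    mass inside a ball is transported by the flow, $\partial_t m + v\,\partial_r m = 0$,
    and the change of variables $s = r^d/d$ (so that $\partial_r = r^{d-1}\partial_s$)
    yields
    \[
      \partial_t m - \frac{1}{\sigma_d}\, m\,\partial_s m = 0, \qquad s \ge 0,
    \]
    an inviscid Burgers equation on the half line.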

    The regularity of the boundary of a multidimensional aggregation patch

    Let $d \geq 2$ and let $N(y)$ be the fundamental solution of the Laplace equation in $\mathbb{R}^d$. We consider the aggregation equation $\frac{\partial \rho}{\partial t} + \operatorname{div}(\rho v) = 0$, $v = -\nabla N * \rho$, with initial data $\rho(x,0) = \chi_{D_0}$, where $\chi_{D_0}$ is the indicator function of a bounded domain $D_0 \subset \mathbb{R}^d$. We now fix $0 < \gamma < 1$ and take $D_0$ to be a bounded $C^{1+\gamma}$ domain (a domain with boundary of class $C^{1+\gamma}$). Then we have the following Theorem: If $D_0$ is a $C^{1+\gamma}$ domain, then the initial value problem above has a solution given by $\rho(x,t) = \frac{1}{1-t}\,\chi_{D_t}(x)$, $x \in \mathbb{R}^d$, $0 \leq t < 1$, where $D_t$ is a $C^{1+\gamma}$ domain for all $0 \leq t < 1$.
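
    The $\frac{1}{1-t}$ amplitude can be motivated by a short computation (a sketch, not taken from the paper): inside the patch the divergence of the velocity equals $-\rho$, so the density along particle paths obeys a Riccati-type equation.

    Inside $D_t$, since $\Delta N = \delta$,
    \[
      \operatorname{div} v = -\Delta (N * \rho) = -\rho,
    \]
    and along a particle trajectory $\dot{X} = v(X,t)$ the continuity equation gives
    \[
      \frac{D\rho}{Dt} = -\rho\,\operatorname{div} v = \rho^2, \qquad \rho(0) = 1
      \;\Longrightarrow\; \rho(t) = \frac{1}{1-t},
    \]
    which is the amplitude in the theorem and explains the blowup of the density at $t = 1$.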

    A Harmonic Extension Approach for Collaborative Ranking

    We present a new perspective on graph-based methods for collaborative ranking in recommender systems. Unlike user-based or item-based methods, which compute a weighted average of the ratings given by nearest neighbors, or low-rank approximation methods using convex optimization and the nuclear norm, we formulate matrix completion as a series of semi-supervised learning problems and propagate the known ratings to the missing ones globally on the user-user or item-item graph. The semi-supervised learning problems are expressed as Laplace-Beltrami equations on a manifold, namely harmonic extension, and can be discretized by a point integral method. We show that our approach does not impose a low-rank Euclidean subspace on the data points, but instead minimizes the dimension of the underlying manifold. Our method, named LDM (low-dimensional manifold), turns out to be particularly effective at generating rankings of items, showing decent computational efficiency and robust ranking quality compared to state-of-the-art methods.
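
    A minimal Python sketch of harmonic extension on a graph, in the spirit of the semi-supervised formulation above: known ratings act as boundary data and the discrete Laplace equation is solved on the remaining nodes. It uses a plain graph Laplacian rather than the point integral method of the paper, and the toy graph and ratings are placeholders.

    import numpy as np

    def harmonic_extension(W, labels, labeled_idx):
        """Extend known values harmonically over a weighted graph.

        W           : (n, n) symmetric weight matrix of the user-user (or
                      item-item) graph.
        labels      : values at the labeled nodes (known ratings).
        labeled_idx : indices of the labeled nodes.

        Solves L_uu f_u = -L_ul f_l, the discrete Laplace equation with the
        known ratings as boundary data."""
        n = W.shape[0]
        L = np.diag(W.sum(axis=1)) - W                  # graph Laplacian
        u = np.setdiff1d(np.arange(n), labeled_idx)     # unlabeled nodes
        f = np.zeros(n)
        f[labeled_idx] = labels
        f[u] = np.linalg.solve(L[np.ix_(u, u)], -L[np.ix_(u, labeled_idx)] @ labels)
        return f

    # Toy usage: a 5-node path graph with ratings known at the two endpoints.
    W = np.zeros((5, 5))
    for i in range(4):
        W[i, i + 1] = W[i + 1, i] = 1.0
    print(harmonic_extension(W, labels=np.array([1.0, 5.0]), labeled_idx=np.array([0, 4])))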

    A Model for Optimal Human Navigation with Stochastic Effects

    We present a method for optimal path planning of human walking paths in mountainous terrain, using a control-theoretic formulation and a Hamilton-Jacobi-Bellman equation. Previous models for human navigation were entirely deterministic, assuming perfect knowledge of the ambient elevation data and of human walking velocity as a function of the local slope of the terrain. Our model includes a stochastic component that accounts for uncertainty in the problem, and thus leads to a Hamilton-Jacobi-Bellman equation with viscosity. We discuss the model in the presence and absence of stochastic effects, and suggest numerical methods for simulating it. We discuss two different notions of an optimal path when there is uncertainty in the problem. Finally, we compare the optimal paths suggested by the model at different levels of uncertainty, and observe that as the size of the uncertainty tends to zero (and thus the viscosity in the equation tends to zero), the optimal path tends toward the deterministic optimal path.
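
    The abstract does not state the equation explicitly; as a hedged reference point, a generic stationary Hamilton-Jacobi-Bellman equation with a viscosity term produced by small Brownian noise in the walker's dynamics has the form below, and the vanishing-noise limit removes the Laplacian, recovering the deterministic first-order equation.

    For controlled dynamics $dX = f(X,a)\,dt + \sqrt{2\varepsilon}\,dW$ and running cost
    $r(x,a)$, the value function $u$ formally satisfies
    \[
      \varepsilon\,\Delta u(x) + \min_{a}\bigl\{ f(x,a)\cdot\nabla u(x) + r(x,a) \bigr\} = 0,
    \]
    and letting the noise level $\varepsilon \to 0$ removes the viscosity term, so the
    optimal trajectories approach those of the deterministic model, consistent with the
    limiting behaviour described in the abstract.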