On Unconstrained Quasi-Submodular Function Optimization
With the extensive application of submodularity, its generalizations are
constantly being proposed. However, most of them are tailored for special
problems. In this paper, we focus on quasi-submodularity, a universal
generalization, which satisfies weaker properties than submodularity but still
enjoys favorable performance in optimization. Similar to the diminishing return
property of submodularity, we first define a corresponding property called the
"single sub-crossing"; then we propose two algorithms for unconstrained
quasi-submodular function minimization and maximization, respectively. The
proposed algorithms return reduced lattices at each iteration and guarantee
that the objective function value strictly decreases or increases,
respectively, after each iteration. Moreover, any local and global
optima are guaranteed to be contained in the reduced lattices. Experimental results
verify the effectiveness and efficiency of the proposed algorithms on lattice
reduction.
Comment: 11 pages
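The shrinking-lattice idea in the abstract above can be illustrated with a small sketch. This is a hypothetical illustration, not the paper's algorithm: it maintains a lattice [A, B], pulls an element into A when adding it strictly decreases the objective at the bottom, and drops it from B when removing it strictly decreases the objective at the top, so minimizers remain bracketed by the lattice. The modular toy objective and its weights are assumptions for demonstration.

```python
# Hypothetical sketch of a lattice-reduction loop for unconstrained
# set-function minimization. Not the paper's quasi-submodular algorithm;
# it only illustrates the shrinking-lattice idea on a modular objective.

def reduce_lattice(F, ground):
    """Shrink the lattice [A, B] so minimizers of F stay inside.

    A starts empty, B starts as the full ground set. An element joins A
    when adding it strictly decreases F at the bottom of the lattice,
    and leaves B when removing it strictly decreases F at the top.
    """
    A, B = set(), set(ground)
    changed = True
    while changed:
        changed = False
        for e in sorted(B - A):
            if F(A | {e}) < F(A):      # strict decrease -> e joins A
                A.add(e)
                changed = True
            elif F(B - {e}) < F(B):    # strict decrease -> e leaves B
                B.discard(e)
                changed = True
    return A, B

# Toy modular objective (assumed data): F(S) = sum of element weights.
weights = {"a": -2, "b": 3, "c": -1, "d": 0}
F = lambda S: sum(weights[e] for e in S)

A, B = reduce_lattice(F, weights)
# A = {'a', 'c'}, B = {'a', 'c', 'd'}: every minimizer of F lies in [A, B].
```

For this modular F the loop converges in one sweep; for a genuinely quasi-submodular objective the moves would rely on the single sub-crossing property to justify that no optimum is discarded.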
Motivating Time as a First Class Entity
In hard real-time applications, programs must not only be functionally correct but must also meet timing constraints. Unfortunately, little work has been done to allow a high-level incorporation of timing constraints into distributed real-time programs. Instead, the programmer is required to ensure system timing through a complicated synchronization process or through low-level programming, making it difficult to create and modify programs. In this report, we describe six features that must be integrated into a high-level language and underlying support system in order to promote time to a first-class position in distributed real-time programming systems: expressibility of time, real-time communication, enforcement of timing constraints, fault tolerance to violations of constraints, ensuring distributed system state consistency in the time domain, and static timing verification. For each feature we describe what is required, what related work has been performed, and why this work does not provide sufficient capabilities for distributed real-time programming. We then briefly outline an integrated approach that provides these six features using a high-level distributed programming language and system tools such as compilers, operating systems, and timing analyzers to enforce and verify timing constraints.
Non Parametric Distributed Inference in Sensor Networks Using Box Particles Messages
This paper deals with the problem of inference in distributed systems where the probability model is stored in a distributed fashion. Graphical models provide powerful tools for modeling this kind of problem. Inspired by the box particle filter, which combines interval analysis with particle filtering to solve temporal inference problems, this paper introduces a belief propagation-like message-passing algorithm that uses bounded-error methods to solve the inference problem defined on an arbitrary graphical model. We show the theoretical derivation of the novel algorithm and test its performance on the problem of calibration in wireless sensor networks: that is, the positioning of a number of randomly deployed sensors with respect to a reference defined by a set of anchor nodes whose positions are known a priori. The new algorithm, while achieving better or similar performance, offers an impressive reduction in the information circulating in the network and in the required computation time.
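The bounded-error message idea described above can be sketched in one dimension. This is a hypothetical illustration of interval ("box") messages for anchor-based calibration, not the paper's full algorithm: the anchor positions, ranges, and error bound below are assumed values, and a belief update is simply the pairwise intersection of incoming box messages.

```python
# Minimal 1-D illustration of box (interval) message passing for sensor
# calibration. Boxes are (lo, hi) tuples; a hypothetical sketch of the
# bounded-error idea, not the paper's algorithm.

def range_message(anchor, dist, err):
    """Boxes consistent with |x - anchor| = dist, up to bounded error err."""
    return [(anchor - dist - err, anchor - dist + err),
            (anchor + dist - err, anchor + dist + err)]

def intersect(boxes_a, boxes_b):
    """Belief update: keep every non-empty pairwise box intersection."""
    out = []
    for lo1, hi1 in boxes_a:
        for lo2, hi2 in boxes_b:
            lo, hi = max(lo1, lo2), min(hi1, hi2)
            if lo <= hi:
                out.append((lo, hi))
    return out

# Unknown node at x = 4; anchors at 0 and 10, ranges known to +/- 0.5.
m1 = range_message(0.0, 4.0, 0.5)   # x in [-4.5,-3.5] or [3.5,4.5]
m2 = range_message(10.0, 6.0, 0.5)  # x in [3.5,4.5] or [15.5,16.5]
belief = intersect(m1, m2)          # only [3.5, 4.5] survives
```

Because each message is a short list of boxes rather than a cloud of weighted particles, the per-edge payload stays small, which is the source of the communication savings the abstract reports.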
Structured Sub-Nyquist Sampling with Applications in Compressive Toeplitz Covariance Estimation, Super-Resolution and Phase Retrieval
Sub-Nyquist sampling has received a huge amount of interest in the past decade. In classical compressed sensing theory, if the measurement procedure satisfies a particular condition known as the Restricted Isometry Property (RIP), we can achieve stable recovery of signals with low-dimensional intrinsic structures using an order-wise optimal sample size. Such low-dimensional structures include sparsity and low rank, in both the vector and matrix cases. The main drawback of conventional compressed sensing theory is that random measurements are required to ensure the RIP. However, in many applications such as imaging and array signal processing, applying independent random measurements may not be practical because the systems are deterministic. Moreover, compressed sensing based on random measurements relies on convex programs for signal recovery even in the noiseless case, and solving those programs is computationally intensive if the ambient dimension is large, especially in the matrix case. The main contribution of this dissertation is a deterministic sub-Nyquist sampling framework for compressing structured signals, together with computationally efficient algorithms. Besides the widely studied sparse and low-rank structures, we particularly focus on cases in which the signals of interest are stationary or the measurements are of Fourier type. The key difference between our work and classical compressed sensing theory is that we explicitly exploit the second-order statistics of the signals and study the equivalent quadratic measurement model in the correlation domain. The essential observation made in this dissertation is that a difference/sum coarray structure arises from the quadratic model if the measurements are of Fourier type. With these observations, we are able to achieve a better compression rate for covariance estimation, identify more sources in array signal processing, and recover signals of larger sparsity.
In this dissertation, we will first study the problem of Toeplitz covariance estimation. In particular, we will show how to achieve an order-wise optimal compression rate using the idea of sparse arrays, in both the general and low-rank cases. Then, an analysis framework for super-resolution with a positivity constraint is established. We will present fundamental robustness guarantees, efficient algorithms, and applications in practice. Next, we will study the problem of phase retrieval, for which we successfully apply sparse-array ideas by fully exploiting the quadratic measurement model. We achieve near-optimal sample complexity for both the sparse and general cases with practical Fourier measurements and provide efficient, deterministic recovery algorithms. In the end, we will further elaborate on the essential role of the non-negativity constraint in underdetermined inverse problems. In particular, we will analyze the nonlinear co-array interpolation problem and develop a universal upper bound on the interpolation error. The bilinear problem with a non-negativity constraint will be considered next, and an exact characterization of the ambiguous solutions will be established for the first time in the literature. Finally, we will show how to apply the nested-array idea to solve real problems such as Kriging. Using spatial correlation information, we are able to obtain a stable estimate of the field of interest with fewer sensors than classic methodologies require. Extensive numerical experiments are implemented to demonstrate our theoretical claims.
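The difference-coarray observation underlying the abstract above can be made concrete with a small sketch. Assuming the standard two-level nested-array geometry (inner sensors at unit spacing, outer sensors at multiples of N1+1), a few physical sensors generate a much longer run of consecutive correlation lags, which is what permits estimating a Toeplitz covariance, or identifying more sources than sensors, from compressed measurements. The specific sizes below are illustrative.

```python
# Sketch of the difference-coarray idea behind sparse (nested) arrays:
# N1 + N2 physical sensors yield on the order of N1*N2 consecutive lags.

def nested_positions(n1, n2):
    """Sensor positions of a two-level nested array (unit spacing)."""
    inner = list(range(1, n1 + 1))                   # 1, 2, ..., n1
    outer = [(n1 + 1) * k for k in range(1, n2 + 1)]  # (n1+1), 2(n1+1), ...
    return inner + outer

def difference_coarray(positions):
    """All pairwise lags p - q reachable by the physical array."""
    return sorted({p - q for p in positions for q in positions})

pos = nested_positions(3, 3)      # 6 physical sensors: [1, 2, 3, 4, 8, 12]
lags = difference_coarray(pos)    # 23 consecutive lags: -11, ..., 11
```

Here 6 sensors fill every lag from -11 to 11, so second-order statistics behave as if a 12-element uniform array had been deployed; the same counting argument drives the compression rates claimed for covariance estimation and phase retrieval.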