
    Optimal Design of Low Order Controllers Satisfying Sensitivity and Robustness Constraints

    The set of all stabilizing controllers of a given low-order structure that guarantee specifications on the gain margin, the phase margin and a bound on the sensitivity corresponds to a region in n-dimensional space defined by the coefficients of the controllers. For several practical criteria defined in the paper, it is shown that the optimal controller lies on the surface of that region. Moreover, it is shown how to reduce that region to avoid actuator saturation during operation.
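
The admissible region described above can be sketched by brute force: for an assumed example plant and a PI controller, grid the controller-coefficient plane and keep the points whose sensitivity peak stays below a bound. The plant G(s) = 1/(s+1)^3, the gain ranges and the bound Ms <= 2 are all illustrative choices, not the paper's, and closed-loop stability would still need a separate check:

```python
import numpy as np

w = np.logspace(-2, 2, 400)              # frequency grid (rad/s)
s = 1j * w

def sensitivity_peak(kp, ki):
    """Peak of |S(jw)| = |1/(1 + C(jw)G(jw))| over the frequency grid."""
    G = 1.0 / (s + 1.0) ** 3             # assumed example plant
    C = kp + ki / s                      # PI controller C(s) = kp + ki/s
    return np.max(np.abs(1.0 / (1.0 + C * G)))

# Grid the (kp, ki) coefficient plane; the admissible set is the region of
# gains whose sensitivity peak respects the bound Ms <= 2.
admissible = [(kp, ki)
              for kp in np.linspace(0.1, 3.0, 30)
              for ki in np.linspace(0.1, 3.0, 30)
              if sensitivity_peak(kp, ki) <= 2.0]
print(len(admissible), "of 900 grid points satisfy the bound")
```

On such a grid the best gains under criteria like those in the paper sit on the edge of the admissible set, which is the geometric picture behind optimizing over the region's surface.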

    Collaborative search on the plane without communication

    We generalize the classical cow-path problem [7, 14, 38, 39] into a question that is relevant for collective foraging in animal groups. Specifically, we consider a setting in which k identical (probabilistic) agents, initially placed at some central location, collectively search for a treasure in the two-dimensional plane. The treasure is placed at a target location by an adversary, and the goal is to find it as fast as possible as a function of both k and D, where D is the distance between the central location and the target. This is biologically motivated by cooperative, central-place foraging, such as that performed by ants around their nest. In this type of search there is a strong preference to locate nearby food sources before those that are further away. Our focus is on what can be achieved if communication is limited or altogether absent. Indeed, to avoid overlaps, agents must be highly dispersed, making communication difficult. Furthermore, if agents do not commence the search in synchrony, then even initial communication is problematic. This holds, in particular, with respect to the question of whether the agents can communicate and conclude their total number, k. It turns out that knowledge of k by the individual agents is crucial for performance. Indeed, it is a straightforward observation that the time required for finding the treasure is $\Omega(D + D^2/k)$, and we show in this paper that this bound can be matched if the agents know k up to some constant approximation. We present an almost tight bound for the competitive penalty that must be paid, in the running time, if agents have no information about k. Specifically, on the negative side, we show that in such a case, there is no algorithm whose competitiveness is $O(\log k)$. On the other hand, we show that for every constant $\epsilon > 0$, there exists a rather simple uniform search algorithm that is $O(\log^{1+\epsilon} k)$-competitive. In addition, we give a lower bound for the setting in which agents are given some estimate of k. As a special case, this lower bound implies that for any constant $\epsilon > 0$, if each agent is given a (one-sided) $k^\epsilon$-approximation to k, then the competitiveness is $\Omega(\log k)$. Informally, our results imply that the agents can potentially perform well without any knowledge of their total number k; however, to improve further, they must be given a relatively good approximation of k. Finally, we propose a uniform algorithm that is both efficient and extremely simple, suggesting its relevance for actual biological scenarios.
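
The $\Omega(D + D^2/k)$ bound has a simple back-of-the-envelope reading: with unit speed and unit sensing width, k agents can sweep at most k*t area in time t, while any point at distance D costs at least D in travel time. A minimal sketch of that arithmetic (the unit-speed, unit-width model is a simplification for illustration, not the paper's exact model):

```python
import math

def search_time_lower_bound(D, k):
    # The disk of radius D around the nest has area pi*D^2, and k agents
    # sweep at most k*t area in time t, so t >= pi*D^2/k; reaching a
    # target at distance D also forces t >= D.
    return max(D, math.pi * D * D / k)

for k in (1, 100, 10_000):
    print(k, round(search_time_lower_bound(100, k), 1))
```

For small k the D^2/k sweeping term dominates; once k grows past about pi*D, the D travel term takes over, which is why extra agents stop helping beyond that point.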

    Tangential Large Scale Structure as a Standard Ruler: Curvature Parameters from Quasars

    Several observational analyses suggest that matter is spatially structured at $\approx 130 h^{-1}$ Mpc at low redshifts. This peak in the power spectrum provides a standard ruler in comoving space which can be used to compare the local geometry at high and low redshifts, thereby constraining the curvature parameters. It is shown here that this power-spectrum peak is present in the observed quasar distribution at $z \sim 2$: qualitatively, via wedge diagrams which clearly show a void-like structure, and quantitatively, via one-dimensional Fourier analysis of the quasars' tangential distribution. The sample studied here contains 812 quasars. The method produces strong constraints (68% confidence limits) on the density parameter $\Omega_0$ and weaker constraints on the cosmological constant $\lambda_0$, which can be expressed by the relation $\Omega_0 = (0.24 \pm 0.15) + (0.10 \pm 0.08)\lambda_0$. Independently of $\lambda_0$ (in the range $\lambda_0 \in [0,1]$), the constraint is $0.1 < \Omega_0 < 0.45$. Combination of the present results with SN Type Ia results yields $\Omega_0 = (0.30 \pm 0.11) + (0.57 \pm 0.11)(\lambda_0 - 0.7)$, $0.55 < \lambda_0 < 0.95$ (68% confidence limits). This strongly supports the possibility that the observable universe satisfies a nearly flat, perturbed Friedmann-Lemaître-Robertson-Walker model, independently of any cosmic microwave background observations.
    Comment: 15 pages, 15 figures; v2 has several minor modifications but conclusions unchanged; accepted by Astronomy & Astrophysics
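
As a quick sanity check on the quoted relation, naive interval arithmetic on $\Omega_0 = (0.24 \pm 0.15) + (0.10 \pm 0.08)\lambda_0$ brackets $\Omega_0$ as follows. This treats the two uncertainties as independent worst cases, ignoring the correlations in the actual likelihood, so it only roughly reproduces the quoted $0.1 < \Omega_0 < 0.45$ band:

```python
def omega0_interval(lam):
    # Worst-case bounds from Omega_0 = (0.24 +/- 0.15) + (0.10 +/- 0.08)*lam,
    # evaluated at a given lambda_0 = lam in [0, 1].
    lo = (0.24 - 0.15) + (0.10 - 0.08) * lam
    hi = (0.24 + 0.15) + (0.10 + 0.08) * lam
    return lo, hi

for lam in (0.0, 0.5, 1.0):
    lo, hi = omega0_interval(lam)
    print(f"lambda_0 = {lam}: {lo:.3f} < Omega_0 < {hi:.3f}")
```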

    Direct Monocular Odometry Using Points and Lines

    Most visual odometry algorithms for monocular cameras focus on points, either by feature matching or by direct alignment of pixel intensities, while ignoring a common but important geometric entity: edges. In this paper, we propose an odometry algorithm that combines points and edges to benefit from the advantages of both direct and feature-based methods. It works better in texture-less environments and is also more robust to lighting changes and fast motion, by increasing the convergence basin. We maintain a depth map for the keyframe; in the tracking part, the camera pose is recovered by minimizing both the photometric error and the geometric error to the matched edges in a probabilistic framework. In the mapping part, edges are used to speed up stereo matching and increase its accuracy. On various public datasets, our algorithm achieves performance better than or comparable to state-of-the-art monocular odometry methods. In some challenging texture-less environments, our algorithm reduces the state-estimation error by over 50%.
    Comment: ICRA 2017
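
The joint minimization described above can be pictured as a single weighted least-squares objective that mixes the two residual types. The noise scales and residual values below are made-up stand-ins for illustration, not the paper's calibrated quantities:

```python
import numpy as np

def combined_cost(photo_res, edge_res, sigma_photo=10.0, sigma_edge=1.0):
    # Photometric residuals (intensity differences) and geometric residuals
    # (pixel distances to matched edges) live on different scales, so each
    # is normalized by its own noise level before the squared terms are
    # summed -- the usual probabilistic (maximum-likelihood) weighting.
    return (np.sum((photo_res / sigma_photo) ** 2)
            + np.sum((edge_res / sigma_edge) ** 2))

photo = np.array([4.0, -2.0, 1.0])   # grey-level differences (illustrative)
edges = np.array([0.5, -0.3])        # point-to-edge distances in px (illustrative)
print(combined_cost(photo, edges))
```

A pose optimizer would evaluate this cost for candidate camera poses and descend on it; weighting each term by its own noise level is what lets the direct and feature-based parts coexist in one framework.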

    Detection of compact objects by means of gravitational lensing in binary systems

    We consider the gravitational magnification of light for binary systems containing two compact objects: two white dwarfs; a white dwarf and a neutron star; or a white dwarf and a black hole. Light curves of the flares of the white dwarf caused by this effect were built in analytical approximations and by means of numerical calculations. We estimate the probability of detecting these events in our Galaxy for different types of binaries and show that gravitational lensing provides a tool for detecting such systems. We propose to use the facilities of the Sloan Digital Sky Survey (SDSS) to search for these flares. It is possible to detect several dozen compact-object pairs in such a programme over 5 years. This programme is apparently the best way to detect stellar-mass black holes with open event horizons.
    Comment: 15 pages, 11 figures; accepted for publication in Astronomy & Astrophysics
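
The flare shapes come from the standard point-lens magnification formula $A(u) = (u^2 + 2)/(u\sqrt{u^2 + 4})$, where u is the lens-source separation in Einstein radii. A minimal light-curve sketch (the impact parameter and straight-line trajectory are illustrative choices, not values from the paper):

```python
import math

def magnification(u):
    # Point-lens magnification; diverges as u -> 0 and tends to 1 for u >> 1.
    return (u * u + 2.0) / (u * math.sqrt(u * u + 4.0))

# Straight-line source trajectory with impact parameter u0, in units where
# the Einstein-radius crossing time is 1; peak magnification occurs at t = 0.
u0 = 0.3
curve = [magnification(math.hypot(u0, t)) for t in (-1.0, -0.5, 0.0, 0.5, 1.0)]
print([round(a, 3) for a in curve])
```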

    Detecting extreme mass ratio inspirals with LISA using time-frequency methods II: search characterization

    The inspirals of stellar-mass compact objects into supermassive black holes constitute some of the most important sources for LISA. Detection of these sources using fully coherent matched filtering is computationally intractable, so alternative approaches are required. In a previous paper (Wen and Gair 2005, gr-qc/0502100), we outlined a detection method based on looking for excess power in a time-frequency spectrogram of the LISA data. The performance of the algorithm was assessed using a single `typical' trial waveform and approximations to the noise statistics. In this paper we present results of Monte Carlo simulations of the search noise statistics and examine its performance in detecting a wider range of trial waveforms. We show that typical extreme mass ratio inspirals (EMRIs) can be detected at distances of up to 1--3 Gpc, depending on the source parameters. We also discuss some remaining issues with the technique and possible ways in which the algorithm can be improved.
    Comment: 15 pages, 9 figures; to appear in the proceedings of GWDAW 9, Annecy, France, December 2004
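
The excess-power idea is compact enough to sketch: tile the data into short segments, form a spectrogram, and flag time-frequency pixels whose power exceeds a noise-calibrated threshold. The toy chirp, the white-noise model and the median-based threshold below are simplifications for illustration; none of them is the LISA noise model or an EMRI waveform:

```python
import numpy as np

rng = np.random.default_rng(0)
n, seg = 4096, 256
t = np.arange(n) / n
data = rng.normal(size=n)                                # white noise
data += 5.0 * np.sin(2 * np.pi * (40.0 + 40.0 * t) * t)  # toy chirp, 40 -> 120 Hz

segments = data.reshape(-1, seg)                     # 16 time tiles
power = np.abs(np.fft.rfft(segments, axis=1)) ** 2   # spectrogram (16 x 129)
# The median is robust to the few loud signal pixels, so it estimates the
# noise floor; the factor of 20 is an arbitrary illustrative threshold.
threshold = 20.0 * np.median(power)
hits = np.argwhere(power > threshold)
print(len(hits), "time-frequency pixels above threshold")
```

A real search must additionally tune the threshold to a target false-alarm rate and cluster neighbouring pixels into candidate tracks, which is where most of the characterization effort in the paper goes.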

    PDE-Foam - a probability-density estimation method using self-adapting phase-space binning

    Probability Density Estimation (PDE) is a multivariate discrimination technique based on sampling signal and background densities defined by event samples from data or Monte Carlo (MC) simulations in a multi-dimensional phase space. In this paper, we present a modification of the PDE method that uses a self-adapting binning method to divide the multi-dimensional phase space into a finite number of hyper-rectangles (cells). The binning algorithm adjusts the size and position of a predefined number of cells inside the multi-dimensional phase space, minimising the variance of the signal and background densities inside the cells. The implementation of the binning algorithm, PDE-Foam, is based on the MC event-generation package Foam. We present performance results for representative examples (toy models) and discuss the dependence of the obtained results on the choice of parameters. The new PDE-Foam shows improved classification capability for small training samples and reduced classification time compared to the original PDE method based on range searching.
    Comment: 19 pages, 11 figures; replaced with revised version accepted for publication in NIM A and corrected typos in the description of Fig. 7
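
The self-adapting binning can be caricatured in one dimension: keep a list of cells and repeatedly split the cell whose target values vary the most, at the median of its sample positions. PDE-Foam itself works in a multi-dimensional phase space and minimizes density variances; the step-function "signal" below is just a toy:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, 2000)      # sample positions in the phase space
y = (x > 0.7).astype(float)          # toy target density: a step at 0.7

def cell_var(cell):
    # Within-cell variance of the target values; pure cells score zero.
    lo, hi = cell
    m = (x >= lo) & (x < hi)
    return float(np.var(y[m])) if m.sum() > 1 else 0.0

cells = [(0.0, 1.0)]
for _ in range(7):                   # grow from 1 cell to 8 cells
    worst = max(cells, key=cell_var) # the least-uniform cell gets split
    lo, hi = worst
    inside = x[(x >= lo) & (x < hi)]
    split = float(np.median(inside)) # split at the median sample position
    cells.remove(worst)
    cells += [(lo, split), (split, hi)]

cells.sort()
print([round(edge, 3) for edge, _ in cells])
```

The splits home in on the step at 0.7, so cells end up small where the density changes fast and large where it is flat, which is the behaviour that buys accuracy from a fixed cell budget.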