
    Network Uncertainty Informed Semantic Feature Selection for Visual SLAM

    In order to facilitate long-term localization using a visual simultaneous localization and mapping (SLAM) algorithm, careful feature selection can help ensure that reference points persist over long durations and that the runtime and storage complexity of the algorithm remain consistent. We present SIVO (Semantically Informed Visual Odometry and Mapping), a novel information-theoretic feature selection method for visual SLAM which incorporates semantic segmentation and neural network uncertainty into the feature selection pipeline. Our algorithm selects the points that yield the highest reduction in Shannon entropy: the difference between the entropy of the current state and the joint entropy of the state given the addition of the new feature, combined with the classification entropy of the feature from a Bayesian neural network. Each selected feature significantly reduces the uncertainty of the vehicle state and has been repeatedly detected, with high confidence, as a static object (building, traffic sign, etc.). This selection strategy generates a sparse map which can facilitate long-term localization. The KITTI odometry dataset is used to evaluate our method, and we also compare our results against ORB_SLAM2. Overall, SIVO performs comparably to the baseline method while reducing the map size by almost 70%.
    Comment: Published in: 2019 16th Conference on Computer and Robot Vision (CRV)
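
    As a rough illustration of this criterion, the sketch below scores each candidate by the state-entropy reduction it offers, discounted by its semantic classification entropy; the updated covariance and the Bayesian-network softmax samples are assumed to be supplied by the SLAM filter and segmentation network, and all names are illustrative rather than SIVO's actual interface.

```python
import numpy as np

def gaussian_entropy(cov):
    """Shannon entropy (nats) of a multivariate Gaussian with covariance cov."""
    k = cov.shape[0]
    sign, logdet = np.linalg.slogdet(cov)
    return 0.5 * (k * (1.0 + np.log(2.0 * np.pi)) + logdet)

def classification_entropy(softmax_samples):
    """Entropy of the mean class distribution over Bayesian-network samples
    (e.g., Monte Carlo dropout); shape: (n_samples, n_classes)."""
    p = softmax_samples.mean(axis=0)
    return -np.sum(p * np.log(p + 1e-12))

def select_features(state_cov, candidates, threshold):
    """Keep candidates whose entropy reduction, discounted by semantic
    uncertainty, exceeds the threshold. Each candidate is a pair
    (updated_cov, softmax_samples), where updated_cov is the state
    covariance after fusing the feature (assumed precomputed)."""
    h_now = gaussian_entropy(state_cov)
    selected = []
    for i, (updated_cov, softmax_samples) in enumerate(candidates):
        gain = h_now - (gaussian_entropy(updated_cov)
                        + classification_entropy(softmax_samples))
        if gain > threshold:
            selected.append(i)
    return selected
```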

    Encoderless Gimbal Calibration of Dynamic Multi-Camera Clusters

    Dynamic Camera Clusters (DCCs) are multi-camera systems where one or more cameras are mounted on actuated mechanisms such as a gimbal. Existing methods for DCC calibration rely on joint angle measurements to resolve the time-varying transformation between the dynamic and static cameras. This information is usually provided by motor encoders; however, joint angle measurements are not always readily available on off-the-shelf mechanisms. In this paper, we present an encoderless approach for DCC calibration which simultaneously estimates the kinematic parameters of the transformation chain as well as the unknown joint angles. We also demonstrate the integration of an encoderless gimbal mechanism with a state-of-the-art VIO algorithm, and show the extensions required in order to perform simultaneous online estimation of the joint angles and the vehicle localization state. The proposed calibration approach is validated both in simulation and on a physical DCC composed of a 2-DOF gimbal mounted on a UAV. Finally, we show the experimental results of the calibrated mechanism integrated into the OKVIS VIO package, and demonstrate successful online joint angle estimation while maintaining localization accuracy comparable to a standard static multi-camera configuration.
    Comment: ICRA 201
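
    A minimal sketch of what such an encoderless formulation can look like, treating the per-frame joint angles as extra unknowns in one batch nonlinear least-squares problem; the 8-parameter kinematic chain and the reproject callback are illustrative assumptions, not the paper's exact parameterization.

```python
import numpy as np
from scipy.optimize import least_squares

def calib_residuals(x, n_frames, observations, reproject):
    """Stacked reprojection errors for encoderless DCC calibration.

    x packs the static kinematic parameters of the gimbal's transformation
    chain (here 8, an arbitrary illustrative choice) followed by one unknown
    (pan, tilt) pair per frame; reproject is a hypothetical function mapping
    (kinematic params, joint angles, observation) to a vector of pixel
    errors for that frame's calibration-target detections.
    """
    kin = x[:8]
    angles = x[8:].reshape(n_frames, 2)   # 2-DOF gimbal
    return np.concatenate([reproject(kin, angles[i], obs)
                           for i, obs in enumerate(observations)])

# x0 = np.concatenate([kin_guess, np.zeros(2 * n_frames)])
# sol = least_squares(calib_residuals, x0,
#                     args=(n_frames, observations, reproject))
```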

    Database Learning: Toward a Database that Becomes Smarter Every Time

    In today's databases, previous query answers rarely benefit answering future queries. For the first time, to the best of our knowledge, we change this paradigm in an approximate query processing (AQP) context. We make the following observation: the answer to each query reveals some degree of knowledge about the answer to another query, because both answers stem from the same underlying distribution that produced the entire dataset. Exploiting and refining this knowledge should allow us to answer queries more analytically, rather than by reading enormous amounts of raw data. Also, processing more queries should continuously enhance our knowledge of the underlying distribution, and hence lead to increasingly faster response times for future queries. We call this novel idea, learning from past query answers, Database Learning. We exploit the principle of maximum entropy to produce answers which are, in expectation, guaranteed to be more accurate than existing sample-based approximations. Empowered by this idea, we build a query engine on top of Spark SQL, called Verdict. We conduct extensive experiments on real-world query traces from a large customer of a major database vendor. Our results demonstrate that Verdict supports 73.7% of these queries, speeding them up by up to 23.0x for the same accuracy level compared to existing AQP systems.
    Comment: This manuscript is an extended report of the work published in the ACM SIGMOD conference 201
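
    As a loose stand-in for the paper's maximum-entropy derivation, the sketch below combines a fresh sample-based answer with a prediction inferred from past answers using inverse-variance weighting, which is the form a maximum-entropy solution reduces to under Gaussian error assumptions; the numbers are made up for illustration.

```python
def combine_estimates(sample_mean, sample_var, prior_mean, prior_var):
    """Inverse-variance weighting of a fresh sample-based answer with a
    prediction inferred from past query answers. A simplified stand-in:
    Verdict derives its combined answer from the principle of maximum
    entropy, which for Gaussian errors reduces to a similar form."""
    w = prior_var / (prior_var + sample_var)
    mean = w * sample_mean + (1.0 - w) * prior_mean
    var = (sample_var * prior_var) / (sample_var + prior_var)
    return mean, var  # improved answer and its reduced error variance

# Example: a raw sample says AVG(revenue) = 102 with variance 25; past
# queries over the same distribution imply 98 with variance 9. The combined
# answer is tighter than either input.
print(combine_estimates(102.0, 25.0, 98.0, 9.0))
```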

    Small crater modification on Meridiani Planum and implications for erosion rates and climate change on Mars

    A morphometric and morphologic catalog of ~100 small craters imaged by the Opportunity rover over the 33.5 km traverse between Eagle and Endeavour craters on Meridiani Planum shows craters in six stages of degradation, ranging from fresh and blocky to eroded, shallow depressions ringed by planed-off rim blocks. The age of each morphologic class, from <50–200 ka to ~20 Ma, has been determined from the size-frequency distribution of craters in the catalog, the retention age of small craters on Meridiani Planum, and the age of the latest phase of ripple migration. The rate of degradation of the craters has been determined from crater depth, rim height, and ejecta removal over the class age. These rates show a rapid decrease from ~1 m/Myr for craters <1 Ma to less than ~0.1 m/Myr for craters 10–20 Ma, which can be explained by topographic diffusion with modeled diffusivities of ~10^(−6) m^2/yr. In contrast to these relatively fast, short-term erosion rates, previously estimated average erosion rates on Mars over ~100 Myr and 3 Gyr timescales from the Amazonian and Hesperian are of order <0.01 m/Myr, which is 3–4 orders of magnitude slower than typical terrestrial rates. Erosion rates during the Middle-Late Noachian, averaged over ~250 Myr and ~700 Myr intervals, are around 1 m/Myr, comparable to slow terrestrial erosion rates calculated over similar timescales. This argues for a wet climate before ~3 Ga in which liquid water was the erosional agent, followed by a dry environment dominated by slow eolian erosion.
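
    A toy numerical version of the topographic diffusion model invoked here, assuming an idealized cosine crater bowl; the profile, dimensions, and time step are illustrative, with only the diffusivity (~10^-6 m^2/yr) taken from the abstract.

```python
import numpy as np

def diffuse_profile(z, dx, D, t_total, dt):
    """Evolve a 1D topographic profile under linear diffusion
    dz/dt = D * d2z/dx2, via explicit finite differences (stable for
    dt <= dx**2 / (2 * D)). Units: metres and years."""
    z = z.copy()
    for _ in range(int(t_total / dt)):
        z[1:-1] += D * dt * (z[2:] - 2.0 * z[1:-1] + z[:-2]) / dx**2
    return z

# Toy 10 m wide, 2 m deep crater bowl, D ~ 1e-6 m^2/yr as modeled above:
x = np.linspace(-20.0, 20.0, 401)                       # dx = 0.1 m
z0 = np.where(np.abs(x) < 5.0, -2.0 * np.cos(np.pi * x / 10.0), 0.0)
z1 = diffuse_profile(z0, dx=0.1, D=1e-6, t_total=1e6, dt=2000.0)
print(f"depth after 1 Myr: {-z1.min():.2f} m (initially 2.00 m)")
```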

    Implications of Electronics Constraints for Solid-State Quantum Error Correction and Quantum Circuit Failure Probability

    In this paper we present the impact of classical electronics constraints on a solid-state quantum dot logical qubit architecture. Constraints due to routing density, bandwidth allocation, signal timing, and thermally aware placement of the classical supporting electronics significantly affect the quantum error correction circuit's error rate. We analyze one level of a quantum error correction circuit using nine data qubits in a Bacon-Shor code configured as a quantum memory. A hypothetical silicon double quantum dot quantum bit (qubit) is used as the fundamental element. A pessimistic estimate of the error probability of the quantum circuit is calculated from the total number of gates and the idle time, using a provably optimal schedule for the circuit operations obtained with an integer programming methodology. The micro-architecture analysis provides insight into the different ways the electronics impact the circuit performance (e.g., extra idle time in the schedule), which can significantly limit the ultimate performance of any quantum circuit and is therefore a critical foundation for any future larger-scale architecture analysis.
    Comment: 10 pages, 7 figures, 3 tables
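
    One plausible reading of such a pessimistic estimate, assuming independent errors: the circuit succeeds only if every gate and every idle step succeeds. The counts and error rates below are hypothetical placeholders, not figures from the paper.

```python
def circuit_failure_probability(n_gates, p_gate, idle_steps, p_idle):
    """Pessimistic estimate: the circuit fails if any gate or any idle
    step errs, assuming independent errors (a simplified reading of the
    paper's gate-count plus idle-time accounting)."""
    p_ok = (1.0 - p_gate) ** n_gates * (1.0 - p_idle) ** idle_steps
    return 1.0 - p_ok

# One error-correction round with hypothetical counts and error rates;
# note how extra idle time in the schedule directly inflates the result.
print(circuit_failure_probability(n_gates=50, p_gate=1e-4,
                                  idle_steps=200, p_idle=1e-5))
```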

    VerdictDB: Universalizing Approximate Query Processing

    Despite 25 years of research in academia, approximate query processing (AQP) has had little industrial adoption. One of the major causes of this slow adoption is the reluctance of traditional vendors to make radical changes to their legacy codebases, and the preoccupation of newer vendors (e.g., SQL-on-Hadoop products) with implementing standard features. Additionally, the few AQP engines that are available are each tied to a specific platform and require users to completely abandon their existing databases, an unrealistic expectation given the infancy of AQP technology. Therefore, we argue that a universal solution is needed: a database-agnostic approximation engine that will widen the reach of this emerging technology across various platforms. Our proposal, called VerdictDB, uses a middleware architecture that requires no changes to the backend database and thus can work with all off-the-shelf engines. Operating at the driver level, VerdictDB intercepts analytical queries issued to the database and rewrites each into another query that, if executed by any standard relational engine, will yield sufficient information for computing an approximate answer. VerdictDB uses the returned result set to compute an approximate answer and error estimates, which are then passed on to the user or application. However, the lack of access to the query execution layer introduces significant challenges in terms of generality, correctness, and efficiency. This paper shows how VerdictDB overcomes these challenges and delivers up to 171× speedup (18.45× on average) for a variety of existing engines, such as Impala, Spark SQL, and Amazon Redshift, while incurring less than 2.6% relative error. VerdictDB is open-sourced under the Apache License.
    Comment: Extended technical report of the paper that appeared in Proceedings of the 2018 International Conference on Management of Data, pp. 1461-1476. ACM, 2018
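
    A schematic of the driver-level rewriting idea, under the simplifying assumption of a single pre-built uniform-random sample table; the function and table names are illustrative, not VerdictDB's actual interface.

```python
def rewrite_avg(col, sample_table):
    """Illustrative driver-level rewrite: an AVG(col) query is redirected
    to a pre-built uniform-random sample table, and the rewritten query
    also returns the variance term needed for an error estimate.
    VAR_SAMP and COUNT are standard SQL aggregates, so any relational
    engine can execute the result unchanged."""
    return (f"SELECT AVG({col}) AS approx_avg, "
            f"VAR_SAMP({col}) / COUNT(*) AS avg_variance "
            f"FROM {sample_table}")

# The middleware reports approx_avg +/- 1.96 * sqrt(avg_variance) as a 95%
# confidence interval, without any change to the backend engine.
print(rewrite_avg("revenue", "sales_sample_1pct"))
```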

    Design for a Darwinian Brain: Part 1. Philosophy and Neuroscience

    Physical symbol systems are needed for open-ended cognition. A good way to understand physical symbol systems is by comparing thought to chemistry: both have systematicity, productivity, and compositionality. The state of the art in cognitive architectures for open-ended cognition is critically assessed. I conclude that a cognitive architecture that evolves symbol structures in the brain is a promising candidate to explain open-ended cognition. Part 2 of the paper presents such a cognitive architecture.
    Comment: Darwinian Neurodynamics. Submitted as a two-part paper to Living Machines 2013, Natural History Museum, London

    Evaluating the Plausible Range of N2O Biosignatures on Exo-Earths: An Integrated Biogeochemical, Photochemical, and Spectral Modeling Approach

    Nitrous oxide (N2O), a product of microbial nitrogen metabolism, is a compelling exoplanet biosignature gas with distinctive spectral features in the near- and mid-infrared, and only minor abiotic sources on Earth. Previous investigations of N2O as a biosignature have examined scenarios using Earthlike N2O mixing ratios or surface fluxes, or those inferred from Earth's geologic record. However, biological fluxes of N2O could be substantially higher, due to a lack of metal catalysts or if the last step of the denitrification metabolism, which yields N2 from N2O, had never evolved. Here, we use a global biogeochemical model coupled with photochemical and spectral models to systematically quantify the limits of plausible N2O abundances and spectral detectability for Earth analogs orbiting main-sequence (FGKM) stars. We examine N2O buildup over a range of oxygen conditions (1%-100% of the present atmospheric level) and N2O fluxes (0.01-100 teramole per year; 1 Tmol = 10^12 mole) that are compatible with Earth's history. We find that N2O fluxes of 10 [100] Tmol yr^{-1} would lead to maximum N2O abundances of ~5 [50] ppm for Earth-Sun analogs, 90 [1600] ppm for Earths around late K dwarfs, and 30 [300] ppm for an Earthlike TRAPPIST-1e. We simulate emission and transmission spectra for intermediate and maximum N2O concentrations that are relevant to current and future space-based telescopes. We calculate the detectability of N2O spectral features for high-flux scenarios for TRAPPIST-1e with JWST. We review potential false positives, including chemodenitrification and abiotic production via stellar activity, and identify key spectral and contextual discriminants to confirm or refute the biogenicity of the observed N2O.
    Comment: 22 pages, 17 figures; ApJ, 937, 10
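
    For intuition on how flux maps to abundance, a one-box steady-state estimate (not the paper's coupled photochemical model) already lands in the right range: the atmospheric burden is flux times photochemical lifetime, and the lifetime, here assumed Earthlike, is what the photochemical model actually computes, since it depends strongly on stellar UV.

```python
def steady_state_mixing_ratio(flux_mol_yr, lifetime_yr, atm_moles=1.8e20):
    """Toy one-box estimate: at steady state dN/dt = F - N/tau = 0, so the
    N2O burden is F * tau, and the mixing ratio is that burden divided by
    total atmospheric moles (1.8e20 mol is roughly modern Earth's
    atmosphere). Assumes a fixed, Earthlike photochemical lifetime."""
    return flux_mol_yr * lifetime_yr / atm_moles

# 10 Tmol/yr with an Earthlike ~120 yr photochemical lifetime:
ppm = steady_state_mixing_ratio(1e13, 120.0) * 1e6
print(f"{ppm:.1f} ppm")  # ~6.7 ppm, the same order as the ~5 ppm quoted
                         # above for the Earth-Sun case
```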