3,522 research outputs found

    A Method of Topology Optimization for Curvature Continuous Designs

    Recently, there have been many developments in the field of topology optimization. Specifically, the structural dynamics community has led the engineering disciplines in using these methods to improve the designs of various structures, ranging from bridges to motor vehicle frames, as well as aerospace structures such as the ribs and spars of an airplane. The representations of these designs, however, are usually stair-stepped or faceted throughout the optimization process and require post-process smoothing in the final design stages. Designs with these low-order representations are insufficient for use in higher-order computational fluid dynamics methods, which are becoming increasingly popular. With the push for the development of higher-order infrastructures, including higher-order grid generation methods, there is a need for techniques that handle curvature-continuous boundary representations throughout an optimization process. Herein, a method has been developed for topology optimization for high-Reynolds-number flows that represents smooth bodies, that is, bodies that have continuous curvature. The specific objective function used herein is to match specified x-rays, which are a surrogate for the wake profile of a body in cross-flow. The parameterized level-set method is combined with a boundary extraction technique that incorporates a modified adaptive 4th-order Runge-Kutta algorithm, together with a classical cubic spline curve-fitting method, to produce curvature-continuous boundaries throughout the optimization process. The level-set function is parameterized by the locations and coefficients of Wendland C2 radial basis functions. Topology optimization is achieved by implementing a conjugate gradient optimization algorithm that simultaneously changes the locations of the radial basis function centers and their respective coefficients. To demonstrate the method, several test cases are shown where the objective is to generate a smooth representation of a body or bodies that match specified x-rays. First, multiple examples of shape optimization are presented for different topologies. Then topology optimization is demonstrated with an example of two bodies merging and several examples of a single body splitting into separate bodies.
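    The level-set parameterization described above can be illustrated with a short sketch. The snippet below evaluates a level-set field built from Wendland C2 radial basis functions, whose zero iso-contour would be the body boundary; the uniform support radius, center layout, and function names are illustrative assumptions, not the authors' implementation (which additionally extracts the boundary with an adaptive Runge-Kutta march and fits it with cubic splines).

```python
import numpy as np

def wendland_c2(r):
    """Compactly supported Wendland C2 kernel: phi(r) = (1 - r)^4 (4r + 1) for r <= 1."""
    r = np.clip(r, 0.0, 1.0)
    return (1.0 - r) ** 4 * (4.0 * r + 1.0)

def level_set(points, centers, coeffs, support):
    """Evaluate an RBF-parameterized level-set field.

    points  : (N, 2) query locations
    centers : (M, 2) RBF center locations (design variables)
    coeffs  : (M,)   RBF coefficients      (design variables)
    support : scalar support radius, assumed uniform here
    The body boundary is the zero iso-contour of the returned field.
    """
    d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=-1)
    return wendland_c2(d / support) @ coeffs

# Example: evaluate the field on a small grid around randomly placed centers.
rng = np.random.default_rng(0)
centers = rng.uniform(-1.0, 1.0, size=(20, 2))
coeffs = rng.normal(size=20)
xs, ys = np.meshgrid(np.linspace(-1, 1, 50), np.linspace(-1, 1, 50))
grid = np.column_stack([xs.ravel(), ys.ravel()])
phi = level_set(grid, centers, coeffs, support=0.5).reshape(50, 50)
```

    In an optimization loop of the kind the abstract describes, both `centers` and `coeffs` would be treated as design variables and updated together by the conjugate gradient step.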

    Large scale evaluation of local image feature detectors on homography datasets

    We present a large-scale benchmark for the evaluation of local feature detectors. Our key innovation is the introduction of a new evaluation protocol which extends and improves the standard detection repeatability measure. The new protocol is better for assessment on a large number of images and reduces the dependency of the results on unwanted distractors such as the number of detected features and the feature magnification factor. Additionally, our protocol provides a comprehensive assessment of the expected performance of detectors under several practical scenarios. Using images from the recently introduced HPatches dataset, we evaluate a range of state-of-the-art local feature detectors on two main tasks: viewpoint and illumination invariant detection. Contrary to previous detector evaluations, our study contains an order of magnitude more image sequences, resulting in a quantitative evaluation significantly more robust to over-fitting. We also show that traditional detectors are still very competitive when compared to recent deep-learning alternatives.
    Comment: Accepted to BMVC 201
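    The protocol above extends the standard detection repeatability measure. As a rough illustration of that baseline quantity, the sketch below computes a simplified, center-distance-based repeatability between keypoints detected in two images related by a known homography; the full measure also uses region overlap and the normalization fixes the paper introduces, and all names here are assumptions.

```python
import numpy as np

def project(points, H):
    """Apply a 3x3 homography H to (N, 2) keypoint centers."""
    pts_h = np.hstack([points, np.ones((len(points), 1))])
    proj = pts_h @ H.T
    return proj[:, :2] / proj[:, 2:3]

def repeatability(kpts_a, kpts_b, H_ab, dist_thresh=3.0):
    """Simplified repeatability: fraction of detections that have a counterpart
    within dist_thresh pixels after warping image-A keypoints into image B.
    (The full protocol uses region overlap and handles border effects.)"""
    warped = project(kpts_a, H_ab)
    d = np.linalg.norm(warped[:, None, :] - kpts_b[None, :, :], axis=-1)
    matched = (d.min(axis=1) < dist_thresh).sum()
    return matched / max(min(len(kpts_a), len(kpts_b)), 1)
```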

    Quantum State Tomography of a Single Qubit: Comparison of Methods

    The tomographic reconstruction of the state of a quantum-mechanical system is an essential component in the development of quantum technologies. We present an overview of different tomographic methods for determining the quantum-mechanical density matrix of a single qubit: (scaled) direct inversion, maximum likelihood estimation (MLE), minimum Fisher information distance, and Bayesian mean estimation (BME). We discuss the different prior densities in the space of density matrices, on which both MLE and BME depend, as well as ways of including experimental errors and of estimating tomography errors. As a measure of the accuracy of these methods, we average the trace distance between a given density matrix and the tomographic density matrices it can give rise to through experimental measurements. We find that the BME provides the most accurate estimate of the density matrix, and suggest using either the pure-state prior, if the system is known to be in a rather pure state, or the Bures prior if any state is possible. The MLE is found to be slightly less accurate. We comment on the extrapolation of these results to larger systems.
    Comment: 15 pages, 4 figures, 2 tables; replaced previous figure 5 by new table I. In Journal of Modern Optics, 201
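    For a single qubit, the (scaled) direct-inversion estimate and the trace-distance accuracy measure have simple closed forms. The sketch below illustrates these standard formulas generically rather than reproducing the authors' code; rescaling an unphysical Bloch vector back onto the Bloch ball is one common convention for the "scaled" variant and is an assumption here.

```python
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def direct_inversion(ex, ey, ez):
    """Single-qubit direct inversion from estimated Pauli expectation values.
    If the estimated Bloch vector lies outside the Bloch ball (unphysical),
    rescale it to unit length -- the 'scaled' variant assumed here."""
    r = np.array([ex, ey, ez], dtype=float)
    norm = np.linalg.norm(r)
    if norm > 1.0:
        r = r / norm
    return 0.5 * (I2 + r[0] * SX + r[1] * SY + r[2] * SZ)

def trace_distance(rho, sigma):
    """D(rho, sigma) = 0.5 * ||rho - sigma||_1, via eigenvalues of the difference."""
    eigvals = np.linalg.eigvalsh(rho - sigma)
    return 0.5 * np.abs(eigvals).sum()

# Example: noisy expectation values estimated from a finite number of measurements.
rho_hat = direct_inversion(0.62, -0.35, 0.71)
print(trace_distance(rho_hat, 0.5 * (I2 + SZ)))  # distance to the |0><0| state
```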

    Benchmarking Particle Filter Algorithms for Efficient Velodyne-Based Vehicle Localization

    Keeping a vehicle well localized within a prebuilt map is at the core of any autonomous vehicle navigation system. In this work, we show that both standard SIR sampling and rejection-based optimal sampling are suitable for efficient (10 to 20 ms) real-time pose tracking without feature detection, that is, using raw point clouds from a 3D LiDAR. Motivated by the large amount of information captured by these sensors, we perform a systematic statistical analysis of how many points are actually required to reach an optimal ratio between efficiency and positioning accuracy. Furthermore, for initialization under adverse conditions, e.g., a poor GPS signal in urban canyons, we also identify the optimal particle filter settings required to ensure convergence. Our findings include that a decimation factor between 100 and 200 on incoming point clouds provides a large saving in computational cost with a negligible loss in localization accuracy for a VLP-16 scanner. Furthermore, an initial density of ∼2 particles/m² is required to achieve 100% convergence success for large-scale (∼100,000 m²) outdoor global localization without any additional hint from GPS or magnetic field sensors. All implementations have been released as open-source software.
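    The pose tracker evaluated above is a particle filter fed with decimated raw point clouds. The sketch below shows a generic SIR (sample-importance-resample) update including the decimation step; the 2D pose representation, the map-likelihood callable, and all names are illustrative assumptions rather than the released implementation.

```python
import numpy as np

def decimate(points, factor):
    """Keep every `factor`-th point of the incoming cloud (e.g. factor 100-200)."""
    return points[::factor]

def transform(points_xy, pose):
    """Transform 2D points from the sensor frame into the map frame."""
    x, y, yaw = pose
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s], [s, c]])
    return points_xy @ R.T + np.array([x, y])

def sir_update(particles, weights, cloud_xy, likelihood, decimation=100,
               rng=np.random.default_rng()):
    """One SIR step: weight each particle by the likelihood of the decimated
    cloud expressed in the map frame at that pose, then resample in proportion
    to the weights.

    particles  : (P, 3) array of (x, y, yaw) hypotheses
    likelihood : callable mapping map-frame points -> per-point log-likelihood,
                 e.g. a lookup into a prebuilt map (assumed interface).
    """
    pts = decimate(cloud_xy, decimation)
    log_w = np.array([likelihood(transform(pts, p)).sum() for p in particles])
    w = weights * np.exp(log_w - log_w.max())
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```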