
    Indirect test of M-S circuits using multiple specification band guarding

    Testing analog and mixed-signal circuits is costly because of stringent test-time targets and the high-end equipment and expertise required. Indirect testing methods partially address these issues by providing an efficient alternative: easy-to-measure information from the circuit under test (CUT) that correlates with its performances. In this work, a multiple specification band guarding technique is proposed to meet a target rate of misclassified circuits. The acceptance/rejection test regions are encoded using octrees in the measurement space, where the band guarding factors precisely tune the test decision boundary according to the required test yield targets. The generated octree data structure is then used to classify incoming circuits during the production testing phase, relying solely on indirect measurements. The combined use of octree-based encoding and multiple specification band guarding makes the testing procedure fast, efficient, and highly tunable. The proposed band guarding methodology has been applied to test a band-pass Butterworth filter under parametric variations. Promising simulation results are reported, showing remarkable improvements when the multiple specification band guarding criterion is used.
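    As an illustration of the octree encoding idea described above, the sketch below builds an octree over a three-dimensional indirect-measurement space from simulated training circuits, tightens the specification limits by a guard-band factor before labeling them, and then classifies new circuits by descending the tree. The toy performance model, limits, and guard value are assumptions for the demo, not the paper's setup.

```python
# Minimal sketch (not the paper's code): octree-encoded accept/reject regions
# in a 3D indirect-measurement space, with a guard-band factor that tightens
# the specification limits before the regions are built.  All names and the
# toy "performance" model are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def passes_spec(meas, guard=0.0):
    """Toy specification check: a circuit passes if a (hypothetical)
    performance derived from its measurements stays inside limits that are
    shrunk by the guard-band factor `guard`."""
    perf = meas[:, 0] + 0.5 * meas[:, 1] - 0.2 * meas[:, 2]   # stand-in performance
    lo, hi = 0.2 + guard, 1.1 - guard                          # guarded limits
    return (perf > lo) & (perf < hi)

def build_octree(points, labels, lo, hi, depth=0, max_depth=5):
    """Recursively split a cubic cell into 8 children until it contains
    circuits of a single class (accept/reject) or max depth is reached."""
    if len(labels) == 0:
        return {"leaf": True, "accept": False}          # empty cells rejected by default
    if labels.all() or (~labels).all() or depth == max_depth:
        return {"leaf": True, "accept": bool(labels.mean() > 0.5)}
    mid = (lo + hi) / 2
    children = {}
    for code in range(8):                                # one child per octant
        sel = np.ones(len(points), dtype=bool)
        c_lo, c_hi = lo.copy(), hi.copy()
        for axis in range(3):
            upper = bool((code >> axis) & 1)
            sel &= (points[:, axis] >= mid[axis]) if upper else (points[:, axis] < mid[axis])
            (c_lo if upper else c_hi)[axis] = mid[axis]
        children[code] = build_octree(points[sel], labels[sel], c_lo, c_hi,
                                      depth + 1, max_depth)
    return {"leaf": False, "mid": mid, "children": children}

def classify(node, m):
    """Descend the octree with an indirect measurement vector only."""
    while not node["leaf"]:
        code = sum(int(m[a] >= node["mid"][a]) << a for a in range(3))
        node = node["children"][code]
    return node["accept"]

# Training phase: simulated circuits with known pass/fail (guard band applied).
train = rng.uniform(0.0, 1.0, size=(5000, 3))
tree = build_octree(train, passes_spec(train, guard=0.05),
                    lo=np.zeros(3), hi=np.ones(3))

# Production phase: decide from indirect measurements alone.
new_meas = rng.uniform(0.0, 1.0, size=(5, 3))
print([classify(tree, m) for m in new_meas])
```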

    A sparse octree gravitational N-body code that runs entirely on the GPU

    We present parallel algorithms for constructing and traversing sparse octrees on graphics processing units (GPUs). The algorithms are based on parallel-scan and sort methods. To test their performance and feasibility, we implemented them in CUDA in the form of a gravitational tree-code which runs entirely on the GPU. (The code is publicly available at: http://castle.strw.leidenuniv.nl/software.html) The tree construction and traversal algorithms are portable to many-core devices which support the CUDA or OpenCL programming languages. The gravitational tree-code outperforms tuned CPU code during tree construction and shows an overall performance improvement of more than a factor of 20, resulting in a processing rate of more than 2.8 million particles per second. Comment: Accepted version. Published in Journal of Computational Physics. 35 pages, 12 figures, single column
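    The sketch below illustrates, in plain NumPy on the CPU, the sort-and-scan idea behind such octree construction: particles are assigned Morton keys, sorted, and tree nodes at each level emerge as runs of equal key prefixes. It is only a sequential stand-in for the massively parallel CUDA implementation described in the paper.

```python
# Minimal NumPy sketch of the sort-and-scan idea behind GPU octree
# construction: particles are assigned Morton keys, sorted, and octree
# nodes at each level correspond to runs of equal key prefixes.
# This is an illustrative CPU version, not the CUDA code from the paper.
import numpy as np

def morton_keys(pos, bits=10):
    """Interleave the bits of quantized x, y, z coordinates into one key."""
    q = np.clip((pos * (1 << bits)).astype(np.uint64), 0, (1 << bits) - 1)
    keys = np.zeros(len(pos), dtype=np.uint64)
    for b in range(bits):
        for axis in range(3):
            keys |= ((q[:, axis] >> np.uint64(b)) & np.uint64(1)) << np.uint64(3 * b + axis)
    return keys

rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 1.0, size=(100_000, 3))   # particle positions in the unit cube

bits = 10
keys = morton_keys(pos, bits)
order = np.argsort(keys)                         # "sort" step (radix sort on the GPU)
keys_sorted = keys[order]

# "Scan" step: nodes at tree level L are runs of particles sharing the
# top 3*L key bits; their boundaries follow from prefix changes.
for level in range(1, 4):
    prefix = keys_sorted >> np.uint64(3 * (bits - level))
    _, starts, counts = np.unique(prefix, return_index=True, return_counts=True)
    print(f"level {level}: {len(starts)} occupied nodes, "
          f"largest holds {counts.max()} particles")
```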

    Models for Metal Hydride Particle Shape, Packing, and Heat Transfer

    A multiphysics modeling approach for heat conduction in metal hydride powders is presented, including particle shape distribution, size distribution, granular packing structure, and effective thermal conductivity. A statistical geometric model is presented that replicates features of the particle size and shape distributions observed experimentally as a result of cyclic hydride decrepitation. The quasi-static dense packing of a sample set of these particles is simulated via energy-based structural optimization methods. These particles jam (i.e., solidify) at a density (solid volume fraction) of 0.665 ± 0.015, higher than prior experimental estimates. The effective thermal conductivity of the jammed system is simulated and found to follow the behavior predicted by granular effective medium theory. Finally, a theory is presented that links the properties of bi-porous cohesive powders to the present systems, based on recent experimental observations of jammed packings of fine powder. This theory produces quantitative agreement with experiments on metal hydride powders of various compositions. Comment: 12 pages, 12 figures, 2 tables
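    For context, the snippet below evaluates the classic symmetric Bruggeman effective-medium relation for a two-phase solid/gas mixture at the jamming fraction quoted above. It is a generic stand-in to show how an effective-medium closure is evaluated, not the specific granular effective medium theory used in the paper; the conductivity values are illustrative.

```python
# Illustrative stand-in only: the classic symmetric Bruggeman effective-medium
# relation for a two-phase (solid/gas) mixture, solved for the effective
# thermal conductivity.  The paper uses a granular effective medium theory;
# this sketch just shows how an effective-medium closure is evaluated.
from scipy.optimize import brentq

def bruggeman_k_eff(phi_solid, k_solid, k_gas):
    """Solve sum_i f_i (k_i - k_eff) / (k_i + 2 k_eff) = 0 for k_eff."""
    f = lambda k: (phi_solid * (k_solid - k) / (k_solid + 2 * k)
                   + (1 - phi_solid) * (k_gas - k) / (k_gas + 2 * k))
    return brentq(f, min(k_gas, k_solid) * 1e-6, max(k_gas, k_solid))

# Example: jammed packing fraction ~0.665 (as reported in the abstract);
# the solid and gas conductivities below are illustrative numbers in W/(m K).
print(bruggeman_k_eff(phi_solid=0.665, k_solid=10.0, k_gas=0.18))
```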

    Reuleaux: Robot Base Placement by Reachability Analysis

    Before beginning any robot task, users must position the robot's base, a choice that currently depends entirely on user intuition. While slight misplacement may be tolerable for robots with movable bases, it must be corrected for fixed-base robots when essential parts of the task are out of reach. For mobile manipulation robots, a specific base position must likewise be chosen before manipulation begins. This paper presents Reuleaux, an open-source library for robot reachability analysis and base placement. It reduces the amount of extra repositioning and removes the manual work of identifying potential base locations. Based on the reachability map, base placement locations for a whole robot or for the arm alone can be determined efficiently. This applies both to statically mounted robots, where the relative position of robot and workpiece should maximize the amount of work performed, and to mobile robots, where the reachable working area should be maximized. Solutions are not limited to vertically constrained placements, since complicated robotic tasks may require the base to be placed at arbitrary poses depending on task demands. All Reuleaux library methods were tested on robots of different specifications and evaluated on tasks in simulation and in real-world environments. Evaluation results indicate that Reuleaux significantly outperforms prior methods in terms of time efficiency and range of applicability. Comment: Submitted to International Conference of Robotic Computing 201
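    A heavily simplified two-dimensional stand-in for the base-placement-by-reachability idea is sketched below: a planar two-link arm reaches an annulus around its base, so candidate base positions on a grid are scored by how many task points they cover and the best one is kept. The link lengths, task points, and grid are assumptions; this is not Reuleaux's implementation.

```python
# Simplified 2D stand-in for base placement by reachability analysis
# (not Reuleaux's implementation): a planar 2-link arm reaches an annulus
# |l1 - l2| <= r <= l1 + l2 around its base, so we score candidate base
# positions by how many task points they make reachable and keep the best.
import numpy as np

l1, l2 = 0.6, 0.4                                    # link lengths (assumed)
r_min, r_max = abs(l1 - l2), l1 + l2                 # reachable annulus

rng = np.random.default_rng(2)
task_points = rng.uniform(-0.5, 1.5, size=(40, 2))   # poses the task must reach

# Candidate base positions on a regular grid (the "inverse reachability" grid).
xs, ys = np.meshgrid(np.linspace(-1, 2, 61), np.linspace(-1, 2, 61))
candidates = np.stack([xs.ravel(), ys.ravel()], axis=1)

# Distance from every candidate base to every task point.
d = np.linalg.norm(candidates[:, None, :] - task_points[None, :, :], axis=2)
reachable = (d >= r_min) & (d <= r_max)
scores = reachable.sum(axis=1)                       # task points covered per base

best = candidates[scores.argmax()]
print(f"best base at {best}, covering {scores.max()} of {len(task_points)} task points")
```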

    Kinetic Solvers with Adaptive Mesh in Phase Space

    An Adaptive Mesh in Phase Space (AMPS) methodology has been developed for solving multi-dimensional kinetic equations by the discrete velocity method. A Cartesian mesh for both configuration (r) and velocity (v) spaces is produced using a tree-of-trees data structure. The mesh in r-space is automatically generated around embedded boundaries and dynamically adapted to local solution properties. The mesh in v-space is created on the fly for each cell in r-space. Mappings between neighboring v-space trees are implemented for the advection operator in configuration space. We have developed new algorithms for solving the full Boltzmann and linear Boltzmann equations with AMPS. Several recent innovations were used to calculate the discrete Boltzmann collision integral with a dynamically adaptive mesh in velocity space: importance sampling, a multi-point projection method, and a variance reduction method. We have also developed an efficient algorithm for calculating the linear Boltzmann collision integral for elastic and inelastic collisions in a Lorentz gas. The new AMPS technique has been demonstrated in simulations of hypersonic rarefied gas flows, ion and electron kinetics in weakly ionized plasma, radiation and light-particle transport through thin films, and electron streaming in semiconductors. We show that AMPS minimizes the number of cells in phase space, reducing the computational cost and memory usage of challenging kinetic problems.
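    The sketch below illustrates the tree-of-trees idea: each r-space cell owns its own v-space tree that is refined on the fly where the local distribution function varies strongly. The refinement criterion and the drifting-Maxwellian test distribution are assumptions made for this demo, not the AMPS algorithms.

```python
# Illustrative "tree of trees" sketch (not the AMPS code): an adaptive tree over
# configuration space (r) whose leaf cells each own a velocity-space (v) tree
# refined on the fly.  The refinement criterion and the toy drifting-Maxwellian
# distribution are assumptions made for this demo.
import math
from dataclasses import dataclass, field

@dataclass
class VTree:
    """Velocity-space cell, split where the distribution varies strongly across it."""
    center: tuple
    half: float
    children: list = field(default_factory=list)

    def refine(self, f, threshold, max_depth):
        cx, cy, h = self.center[0], self.center[1], self.half
        probes = [f((cx, cy))] + [f((cx + sx * h, cy + sy * h))
                                  for sx in (-1, 1) for sy in (-1, 1)]
        if max_depth == 0 or max(probes) - min(probes) < threshold:
            return                                    # smooth enough: stay a leaf
        for sx in (-0.5, 0.5):
            for sy in (-0.5, 0.5):
                child = VTree((cx + sx * h, cy + sy * h), h / 2)
                child.refine(f, threshold, max_depth - 1)
                self.children.append(child)

@dataclass
class RCell:
    """Configuration-space leaf cell carrying its own velocity-space tree."""
    center: tuple
    half: float
    vtree: VTree = None

    def build_vtree(self, f, threshold=0.1, max_depth=3):
        self.vtree = VTree((0.0, 0.0), half=3.0)      # v-space root: |v| <= 3
        self.vtree.refine(f, threshold, max_depth)

def count_leaves(node):
    return 1 if not node.children else sum(count_leaves(c) for c in node.children)

# Each r-space cell sees a drifting Maxwellian whose drift depends on position,
# so every cell builds and refines its own v-space mesh.
def maxwellian(drift):
    return lambda v: math.exp(-((v[0] - drift) ** 2 + v[1] ** 2))

for x in (0.0, 0.5, 1.0):
    cell = RCell(center=(x, 0.0), half=0.5)
    cell.build_vtree(maxwellian(drift=x))
    print(f"r-cell at x={x}: {count_leaves(cell.vtree)} v-space leaf cells")
```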

    MAGRITTE, a modern software library for 3D radiative transfer - II. Adaptive ray-tracing, mesh construction, and reduction

    Radiative transfer is a notoriously difficult and computationally demanding problem. Yet, it is an indispensable ingredient in nearly all astrophysical and cosmological simulations. Choosing an appropriate discretization scheme is a crucial part of a simulation, since it not only determines the direct memory cost of the model but also largely determines the computational cost and the achievable accuracy. In this paper, we show how an appropriate choice of directional discretization scheme and spatial model mesh can help alleviate the computational cost while largely retaining the accuracy. First, we discuss the adaptive ray-tracing scheme implemented in our 3D radiative transfer library MAGRITTE, which adapts the rays to the spatial mesh and uses a hierarchical directional discretization based on HEALPix. Second, we demonstrate how the free and open-source software library GMSH can be used to generate high-quality meshes that can be easily tailored for MAGRITTE. In particular, we show how the local element size distribution of the mesh can be used to optimize the sampling of both analytically and numerically defined models. Furthermore, we show that when the output of hydrodynamics simulations is used as input for a radiative transfer simulation, the number of elements in the input model can often be reduced by an order of magnitude without significant loss of accuracy in the radiation field. We demonstrate this for two models based on a hierarchical octree mesh resulting from adaptive mesh refinement, as well as for two models based on smoothed particle hydrodynamics data.
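    The snippet below sketches one way to realize a hierarchical directional discretization with HEALPix using the healpy package: a coarse set of ray directions is generated and selected pixels are refined to their four NESTED children at the next resolution. The refinement criterion is an arbitrary assumption for the demo; this is not MAGRITTE's API.

```python
# Minimal sketch (not MAGRITTE's API) of a hierarchical directional
# discretization with HEALPix: start from a coarse set of ray directions and
# refine selected pixels to their four NESTED children at the next resolution.
# The refinement criterion (pixels near a chosen direction) is an assumption.
import numpy as np
import healpy as hp

nside_coarse = 4                                   # 192 base directions
npix = hp.nside2npix(nside_coarse)
pix = np.arange(npix)
dirs = np.stack(hp.pix2vec(nside_coarse, pix, nest=True), axis=1)

# Refine pixels whose direction is close to some "important" direction,
# e.g. towards a strong radiation source (illustrative criterion).
important = np.array([0.0, 0.0, 1.0])
refine = dirs @ important > 0.9                    # cos(angle) threshold

# NESTED scheme: children of pixel p at nside are pixels 4p..4p+3 at 2*nside.
children = (4 * pix[refine][:, None] + np.arange(4)).ravel()
fine_dirs = np.stack(hp.pix2vec(2 * nside_coarse, children, nest=True), axis=1)

rays = np.vstack([dirs[~refine], fine_dirs])       # mixed-resolution direction set
print(f"{refine.sum()} of {npix} coarse pixels refined -> {len(rays)} ray directions")
```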

    Scalable 3D Surface Reconstruction by Local Stochastic Fusion of Disparity Maps

    Digital three-dimensional (3D) models are of significant interest to many application fields, such as medicine, engineering, simulation, and entertainment. Manual creation of 3D models is extremely time-consuming, and data acquisition, e.g., through laser sensors, is expensive. Images captured by cameras, in contrast, are cheap to acquire and widely available. Significant progress in the field of computer vision already allows for automatic 3D reconstruction from images. Nevertheless, many problems still exist, particularly for large sets of high-resolution images. In addition to the complex formulation necessary to solve an ill-posed problem, one has to manage extremely large amounts of data. This thesis addresses 3D surface reconstruction from image sets, targeting large-scale as well as high-accuracy applications. To this end, a processing chain for dense, scalable 3D surface reconstruction from large image sets is defined, consisting of image registration, disparity estimation, disparity map fusion, and triangulation of point clouds. The main focus of this thesis lies on the fusion and filtering of disparity maps, obtained by Semi-Global Matching, to create accurate 3D point clouds. For unlimited scalability, a Divide and Conquer method is presented that allows for parallel processing of subspaces of the 3D reconstruction space. The method for fusing disparity maps employs local optimization of spatial data and thereby avoids complex fusion strategies when merging subspaces. Although the focus is on scalable reconstruction, a high surface quality is obtained by several extensions to state-of-the-art local optimization methods. To this end, the seminal local volumetric optimization method by Curless and Levoy (1996) is interpreted from a probabilistic perspective. From this perspective, the method is extended through Bayesian fusion of spatial measurements with Gaussian uncertainty. In addition to generating an optimal surface, this probabilistic perspective allows for the estimation of surface probabilities, which are used for filtering outliers in 3D space by means of geometric consistency checks. A further improvement of the quality is obtained based on the analysis of the disparity uncertainty. To this end, Total Variation (TV)-based feature classes are defined that are highly correlated with the disparity uncertainty. The correlation function is learned from ground-truth data by means of an Expectation Maximization (EM) approach. Because a statistically estimated disparity error is considered within a probabilistic framework for the fusion of spatial data, the approach can be regarded as a stochastic fusion of disparity maps. In addition, the influence of image registration and polygonization on volumetric fusion is analyzed and used to extend the method. Finally, a multi-resolution strategy is presented that allows for the generation of surfaces from spatial data of largely varying quality. This method extends state-of-the-art methods by considering the spatial uncertainty of 3D points from stereo data. Evaluation on several well-known and novel datasets demonstrates the potential of the scalable stochastic fusion method. The strengths and weaknesses of the method are discussed, and directions for future research are given.
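    The sketch below illustrates the probabilistic reading of Curless-and-Levoy-style volumetric fusion used as the starting point above: along a single voxel ray, each noisy depth measurement contributes a truncated signed-distance observation with Gaussian uncertainty, and the fused field is the precision-weighted mean with an associated variance. The 1D setup, noise levels, and truncation band are illustrative assumptions, not the thesis implementation.

```python
# Minimal sketch (not the thesis implementation) of volumetric fusion in the
# spirit of Curless & Levoy, read probabilistically: each depth measurement
# contributes a signed-distance observation with Gaussian uncertainty, and
# voxels keep the precision-weighted mean and its variance.  The 1D "voxel
# ray", noise levels, and truncation distance are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)

voxels = np.linspace(0.0, 2.0, 201)          # voxel centers along one ray (metres)
true_surface = 1.2                           # ground-truth surface depth
trunc = 0.15                                 # truncation band around the surface

# Running Bayesian (precision-weighted) fusion state per voxel.
mean = np.zeros_like(voxels)                 # fused signed distance
precision = np.zeros_like(voxels)            # sum of 1/sigma^2

for _ in range(20):                          # 20 noisy depth maps
    sigma = rng.uniform(0.01, 0.05)          # per-map depth noise (from disparity)
    depth = true_surface + rng.normal(0.0, sigma)
    sdf = np.clip(depth - voxels, -trunc, trunc)    # truncated signed distance
    w = 1.0 / sigma**2                       # Gaussian observation precision
    # Bayesian update: the posterior mean is the precision-weighted average.
    mean = (precision * mean + w * sdf) / (precision + w)
    precision = precision + w

variance = 1.0 / precision                   # fused uncertainty per voxel
# The surface is the zero crossing of the fused signed-distance field.
i = np.argmin(np.abs(mean))
print(f"estimated surface at {voxels[i]:.3f} m (true {true_surface} m), "
      f"fused std there {np.sqrt(variance[i]):.4f} m")
```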