    Calibration by correlation using metric embedding from non-metric similarities

    This paper presents a new intrinsic calibration method that allows us to calibrate a generic single-viewpoint camera just by waving it around. From the video sequence obtained while the camera undergoes random motion, we compute the pairwise time correlation of the luminance signal for a subset of the pixels. We show that, if the camera undergoes random uniform motion, the pairwise correlation of any pair of pixels is a function of the distance between the pixel directions on the visual sphere. This leads to formalizing calibration as a problem of metric embedding from non-metric measurements: we want to find the disposition of pixels on the visual sphere from similarities that are an unknown function of the distances. This problem is a generalization of multidimensional scaling (MDS) that has so far resisted a comprehensive observability analysis (can we reconstruct a metrically accurate embedding?) and a solid generic solution (how do we do so?). We show that observability depends on both the local geometric properties (curvature) and the global topological properties (connectedness) of the target manifold. We show that, in contrast to the Euclidean case, on the sphere we can recover the scale of the point distribution, thereby obtaining a metrically accurate solution from non-metric measurements. We describe an algorithm that is robust across manifolds and can recover a metrically accurate solution when the metric information is observable. We demonstrate the performance of the algorithm for several cameras (pin-hole, fish-eye, omnidirectional), and we obtain results comparable to calibration using classical methods. Additional synthetic benchmarks show that the algorithm performs as theoretically predicted for all corner cases of the observability analysis.
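
    As a loose illustration of the embedding step, the sketch below turns pairwise luminance correlations into dissimilarities and feeds them to an off-the-shelf non-metric MDS solver. It uses scikit-learn's Euclidean embedding rather than the paper's spherical algorithm, and the luminance array and function names are illustrative assumptions, not the authors' code.

```python
# Minimal sketch: non-metric MDS embedding of pixel directions from
# pairwise luminance correlations. The spherical embedding of the paper
# is replaced by scikit-learn's Euclidean non-metric MDS for illustration.
import numpy as np
from sklearn.manifold import MDS

def embed_pixels(luminance, dim=3, seed=0):
    # Pairwise time correlation of the per-pixel luminance signals.
    corr = np.corrcoef(luminance)
    # Correlation decreases with angular distance, so treat (1 - corr)
    # as a non-metric dissimilarity (an unknown monotone function of it).
    dissim = 1.0 - corr
    np.fill_diagonal(dissim, 0.0)
    mds = MDS(n_components=dim, metric=False,
              dissimilarity="precomputed", random_state=seed)
    return mds.fit_transform(dissim)

# Example: 200 pixels observed over 1000 frames (random stand-in data).
rng = np.random.default_rng(0)
luminance = rng.standard_normal((200, 1000))
points = embed_pixels(luminance)          # (200, 3) embedded points
# Project onto the unit sphere to interpret them as pixel directions.
directions = points / np.linalg.norm(points, axis=1, keepdims=True)
```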

    LiDAR-Based Object Tracking and Shape Estimation

    Environment perception is a fundamental prerequisite for the safe and comfortable operation of automated vehicles. Moving traffic participants in the immediate vicinity of the vehicle in particular have a major influence on the choice of an appropriate driving strategy. This necessitates an object perception system that provides robust and precise estimates of the motion and geometry of surrounding vehicles. In the context of automated driving, the box geometry model has established itself over time as a quasi-standard. However, given the ever-increasing demands on perception systems, the box now often represents an undesirably coarse approximation of the actual geometry of other traffic participants. This motivates a transition to more accurate shape representations. This work therefore presents a probabilistic method for the simultaneous estimation of rigid object shape and motion from the measurements of a LiDAR sensor. A comparison of three free-form geometry models of differing levels of detail (polyline, triangle mesh, and surfel map) against the simple box model shows that reducing modeling errors in the object geometry enables more robust and precise estimation of object states. Moreover, automated driving functions, such as parking or evasive-maneuver assistants, can benefit from more accurate knowledge of other objects' shapes. Two factors should decisively guide the selection of an appropriate shape representation: observability (what level of detail does the sensor specification theoretically permit?) and model adequacy (how well does the given model explain the actual observations?). Based on these factors, this work presents a model selection strategy that adaptively determines the most suitable shape model at runtime. While the majority of LiDAR-based object tracking algorithms rely exclusively on point measurements, this work proposes two additional types of measurements: information about the measured free space is used to reason about regions that cannot be occupied by object geometry, and LiDAR intensities are incorporated to detect and track distinctive features such as license plates and retroreflectors over time. An extensive evaluation on more than 1.5 hours of recorded vehicle trajectories in urban and highway settings shows that precise modeling of the object surface can improve motion estimation by up to 30-40%. Furthermore, it is shown that the presented methods can generate consistent and highly precise reconstructions of object geometries that avoid the often significant over-approximation of the simple box model.
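
    As a rough illustration of the model-adequacy question (how well does a given model explain the observations?), the sketch below scores a simple bounding box against a crude free-form contour by point-to-model residuals. The residual score and all names are illustrative assumptions; the thesis' probabilistic formulation and its mesh/surfel models are not reproduced here.

```python
# Minimal sketch of the model-adequacy idea: compare how well a bounding
# box and a finer free-form contour explain a 2D LiDAR point cloud.
import numpy as np

def box_residuals(points):
    # Distance of each 2D point to the nearest face of the
    # axis-aligned bounding box, measured along the axes.
    lo, hi = points.min(axis=0), points.max(axis=0)
    d = np.minimum(points - lo, hi - points)   # distance to each face
    return d.min(axis=1)                        # nearest face per point

def polyline_residuals(points, n_vertices=16):
    # Crude free-form contour: sort points by angle around the centroid,
    # subsample a closed polyline, and measure distance to its vertices.
    c = points.mean(axis=0)
    order = np.argsort(np.arctan2(*(points - c).T[::-1]))
    verts = points[order][:: max(1, len(points) // n_vertices)]
    dists = np.linalg.norm(points[:, None] - verts[None], axis=2)
    return dists.min(axis=1)

# Stand-in scan; a real system would use segmented LiDAR returns.
points = np.random.default_rng(1).normal(size=(300, 2))
for name, r in [("box", box_residuals(points)),
                ("polyline", polyline_residuals(points))]:
    print(name, "mean residual:", r.mean())   # lower = more adequate
```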

    Geophysical methods to detect tunnelling at a geological repository site: Applicability in safeguards

    Generating power with nuclear energy accumulates radioactive spent nuclear fuel, which must not be diverted to any undeclared purpose. Nuclear safeguards include bookkeeping of nuclear fuel inventories, frequent checking, and monitoring to confirm nuclear non-proliferation. Permanent isolation of radionuclides from the biosphere by geological disposal challenges established practices, as the opportunity to monitor individual fuel assemblies ceases. Different concepts for the treatment and geological disposal of spent nuclear fuel exist. A spent nuclear fuel disposal facility is under construction at Olkiluoto in southwest Finland. Posiva Oy has carried out multidisciplinary characterization of the crystalline bedrock for the siting and design of the facility. The site description involved the compilation of geological models from investigations at the surface level, from drillholes, and from the underground rock characterization facility ONKALO. Research has focused on the long-term safety case (performance) of the engineered and natural barriers, with the aim of minimizing the risk of radionuclide release. Nuclear safeguards comprise several concepts. Containment and surveillance (C/S) track the presence of nuclear fuel through manufacturing, energy generation, cooling, transfer, and encapsulation. Continuity of knowledge (CoK) ensures traceability and non-diversion. Design information provided by the operator to the state and the European Commission (Euratom), and further to the IAEA, describes how spent nuclear fuel is handled in the facility. Design information verification (DIV), using timely or unannounced inspections, provides credible assurance of the absence of any ongoing undeclared activities within the disposal facility. Safeguards by design provides information applicable to the planning of safeguards measures, e.g., surveillance during operation of the disposal facility. The probability of detecting an attempted undeclared intrusion into the repository containment needs to be high. Detecting such preparations after site closure would require long-term monitoring or repeated geophysical measurements within, or in the proximity of, the repository. Bedrock imaging (remote sensing, geophysical surveys) would serve to verify declarations where applicable, or to characterize the surrounding rock mass in order to detect undeclared activities. The ASTOR working group has considered ground penetrating radar (GPR) for DIV in underground constructed premises during operation. Seismic reflection surveys and electrical or electromagnetic imaging may also apply. This report summarizes the geophysical methods used at Olkiluoto, together with some recent developments whose findings could also be applied to nuclear safeguards. The geophysical source fields, the physical properties involved, the detection range, resolution, survey geometries, and the timing of measurements are reviewed for the different survey methods. Useful interpretation of geophysical data may rely on comparing results against the declared repository layout, since independent interpretation of the results may not be successful. Monitoring provided by an operator may enable alarm and localization of an undeclared activity in a cost-effective manner until closure of the site. Direct detection of constructed spaces, though possible, might require repeated effort, have difficulty providing spatial coverage, and produce false positive alarms that still require further inspection.

    Development and Flight of a Robust Optical-Inertial Navigation System Using Low-Cost Sensors

    This research develops and tests a precision navigation algorithm fusing optical and inertial measurements of unknown objects at unknown locations. It provides an alternative to the Global Positioning System (GPS) as a precision navigation source, enabling passive and low-cost navigation in situations where GPS is denied or unavailable. This paper describes two new contributions. First, a rigorous study of the fundamental nature of optical/inertial navigation is accomplished by examining the observability Gramian of the underlying measurement equations. This analysis yields a set of design principles guiding the development of optical/inertial navigation algorithms. The second contribution of this research is the development and flight test of an optical-inertial navigation system using low-cost and passive sensors (including an inexpensive commercial-grade inertial sensor, which is unsuitable for navigation by itself). This prototype system was built and flight tested at the U.S. Air Force Test Pilot School. The algorithm that was implemented leveraged the design principles described above and used images from a single camera. It was shown (and explained by the observability analysis) that the system gained significant performance by aiding it with a barometric altimeter and magnetic compass, and by using a digital terrain elevation database (DTED). The (still) low-cost and passive system demonstrated performance comparable to high-quality navigation-grade inertial navigation systems, which cost an order of magnitude more than this optical-inertial prototype. The resultant performance of the system tested provides a robust and practical navigation solution for Air Force aircraft.
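
    To make the Gramian-based analysis concrete, here is a minimal sketch of stacking linearized measurement Jacobians into an observability matrix and checking the rank and conditioning of the resulting Gramian. The constant-velocity model and position-only measurement are placeholder assumptions, not the flight-test system's actual models.

```python
# Minimal sketch of an observability-Gramian check: for a linearized
# system x_{k+1} = F x_k, z_k = H_k x_k, stack the rows H_k F^k and
# inspect the rank/conditioning of G = O^T O.
import numpy as np

def observability_gramian(F, H_list):
    rows, Fk = [], np.eye(F.shape[0])
    for H in H_list:                 # one measurement Jacobian per step
        rows.append(H @ Fk)
        Fk = Fk @ F
    O = np.vstack(rows)
    return O.T @ O

# Example: 2D constant-velocity state [x, y, vx, vy]; position-only
# measurements make velocity observable only through the dynamics.
dt = 0.1
F = np.eye(4); F[0, 2] = F[1, 3] = dt
H = np.hstack([np.eye(2), np.zeros((2, 2))])   # measure position only
G = observability_gramian(F, [H] * 10)
print("rank:", np.linalg.matrix_rank(G))        # 4 -> fully observable
print("condition number:", np.linalg.cond(G))   # large -> weak directions
```

    A rank-deficient Gramian flags state directions no sensor combination in the current suite can recover, which is the kind of design principle the analysis above is meant to expose (e.g., why the altimeter, compass, and terrain database aiding help).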

    Recovering Scale in Relative Pose and Target Model Estimation Using Monocular Vision

    A combined relative pose and target object model estimation framework using a monocular camera as the primary feedback sensor has been designed and validated in a simulated robotic environment. The monocular camera is mounted on the end-effector of a robot manipulator and measures the image plane coordinates of a set of point features on a target workpiece object. Using this information, the relative position and orientation, as well as the geometry, of the target object are recovered recursively by a Kalman filter process. The Kalman filter facilitates the fusion of supplemental measurements from range sensors with those gathered by the camera, allowing the estimated system state to remain accurate and to recover the proper environment scale. Current approaches in the research areas of visual servoing control and mobile robotics are studied for the case where the target object feature point geometry is well known prior to the beginning of the estimation. In this case, only the relative pose of the target object frame is estimated over a sequence of frames from a single monocular camera. An observability analysis was carried out to identify the physical configurations of camera and target object for which the relative pose cannot be recovered by measuring only the camera image plane coordinates of the object point features. A popular extension is to estimate the target object model concurrently with the relative pose of the camera frame, a process known as Simultaneous Localization and Mapping (SLAM). The recursive framework was augmented to facilitate this larger estimation problem. The scale of the recovered solution is ambiguous when using measurements from a single camera. A second observability analysis highlights further configurations for which the relative pose and target object model are unrecoverable from camera measurements alone. Instead, measurements that contain the global scale are required to obtain an accurate solution. A set of additional sensors is detailed, including range finders and additional cameras. Measurement models for each are given, which facilitate the fusion of this supplemental data with the original monocular camera image measurements. A complete framework is then derived that combines such sensor measurements to recover an accurate relative pose and target object model estimate. This proposed framework is tested in a simulation environment with a virtual robot manipulator tracking a target object workpiece through a relative trajectory. All of the detailed estimation schemes are executed: the single monocular camera cases where the target object geometry is known and unknown, respectively; a two-camera system in which the measurements are fused within the Kalman filter to recover the scale of the environment; a camera and point range sensor combination, which provides a single range measurement at each system time step; and a laser pointer and camera hybrid, which concurrently measures the feature point images and a single range metric. The performance of the individual test cases is compared to determine which set of sensors provides robust and reliable estimates for use in real-world robotic applications. Finally, conclusions on the performance of the estimators are drawn and directions for future work are suggested. The camera and range finder combination is shown to accurately recover the proper scale for the estimate and warrants further investigation.
Further, early results from the multiple monocular camera setup show performance superior to the other sensor combinations, and interesting possibilities exist for wide field-of-view super sensors with high frame rates, built from many inexpensive devices.
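
    The scale ambiguity, and its resolution by a single range measurement, can be shown in a few lines. The toy scene, the unit-focal pinhole model, and the variable names below are illustrative assumptions, not the thesis' Kalman filter formulation.

```python
# Minimal sketch of monocular scale ambiguity: pinhole projection is
# invariant to scaling the whole scene, while a range measurement is not.
import numpy as np

rng = np.random.default_rng(2)
points = rng.uniform(-1, 1, size=(20, 3)) + np.array([0.0, 0.0, 5.0])

def project(pts):
    # Unit-focal-length pinhole camera at the origin looking along +z.
    return pts[:, :2] / pts[:, 2:3]

# Scaling the scene leaves the image measurements unchanged: this is
# exactly the unobservable direction found in the second analysis.
assert np.allclose(project(points), project(3.0 * points))

# An up-to-scale reconstruction (as from monocular SLAM) ...
recon = points / np.linalg.norm(points[0])   # scale fixed arbitrarily
# ... plus one range measurement to a single feature recovers the scale.
r_meas = np.linalg.norm(points[0])           # range-finder reading
s_hat = r_meas / np.linalg.norm(recon[0])
print(np.allclose(recon * s_hat, points))    # True: metric scale recovered
```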

    Proceedings of the 2009 Joint Workshop of Fraunhofer IOSB and Institute for Anthropomatics, Vision and Fusion Laboratory

    The joint workshop of the Fraunhofer Institute of Optronics, System Technologies and Image Exploitation IOSB, Karlsruhe, and the Vision and Fusion Laboratory (Institute for Anthropomatics, Karlsruhe Institute of Technology (KIT)) has been organized annually since 2005 with the aim of reporting on the latest research and development findings of the doctoral students of both institutions. This book provides a collection of 16 technical reports on the research results presented at the 2009 workshop.