8 research outputs found

    Separating true range measurements from multi-path and scattering interference in commercial range cameras

    Time-of-flight range cameras acquire a three-dimensional image of a scene simultaneously for all pixels from a single viewing location. Attempts to use range cameras for metrology applications have been hampered by the multi-path problem, which causes range distortions when stray light interferes with the range measurement in a given pixel. Correcting multi-path distortions by post-processing the three-dimensional measurement data has been investigated, but has enjoyed only limited success because the interference is highly scene-dependent. An alternative approach, separating the strongest and weaker sources of light returned to each pixel prior to range decoding, is more successful, but has only been demonstrated on custom-built range cameras and has not been suitable for general metrology applications. In this paper we demonstrate an algorithm applied to two unmodified off-the-shelf range cameras, the Mesa Imaging SR-4000 and the Canesta Inc. XZ-422 Demonstrator. Additional raw images are acquired and processed using an optimization approach, rather than relying on the processing provided by the manufacturer, to determine the individual component returns in each pixel. Substantial improvements in accuracy are observed, especially in the darker regions of the scene.
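    The abstract describes recovering the individual component returns per pixel by optimization over additional raw images. As a rough illustration of the underlying idea (not the paper's algorithm), the sketch below models a single pixel's complex response at several modulation frequencies as the sum of a direct and an interfering return, and separates them by nonlinear least squares; the frequencies, values and names are illustrative assumptions.

```python
# Minimal sketch (not the authors' algorithm): model a pixel's raw
# multi-frequency ToF response as the sum of two returns and separate
# them by nonlinear least squares. All values are illustrative.
import numpy as np
from scipy.optimize import least_squares

C = 299_792_458.0  # speed of light in m/s

def two_return_response(params, freqs):
    """Complex pixel response: direct return plus one interfering return."""
    a1, d1, a2, d2 = params
    phase = lambda d: np.exp(1j * 4 * np.pi * freqs * d / C)  # round trip
    return a1 * phase(d1) + a2 * phase(d2)

def residuals(params, freqs, z_meas):
    z = two_return_response(params, freqs) - z_meas
    return np.concatenate([z.real, z.imag])

# Synthetic pixel: strong direct return at 2.5 m, weaker stray-light
# return at 4.1 m, measured at four modulation frequencies.
rng = np.random.default_rng(0)
freqs = np.array([10e6, 15e6, 20e6, 30e6])
z_meas = two_return_response((1.0, 2.5, 0.35, 4.1), freqs)
z_meas += 0.005 * (rng.standard_normal(4) + 1j * rng.standard_normal(4))

fit = least_squares(residuals, x0=(0.8, 2.0, 0.2, 5.0), args=(freqs, z_meas),
                    bounds=([0, 0, 0, 0], [np.inf, 50, np.inf, 50]))
a1, d1, a2, d2 = fit.x
print(f"direct return at {d1:.3f} m, interfering return at {d2:.3f} m")
```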

    A platform for the fast interpretation of movements and localization of users in 3D applications driven by a range camera

    Interactivity is one of the key challenges for immersive applications like gaming. Manufacturers have been working towards interfaces that are driven by a device (e.g. a Wiimote) or interfaces that are controlled by a camera with a subsequent computer vision module. Both approaches have unique advantages, but neither permits localizing users in the scene with appropriate accuracy. We therefore propose to use both a range camera and accurate range sensors to enable the interpretation of movements. This paper describes a platform that uses a range camera to acquire the silhouettes of users regardless of illumination, and to improve pose recovery with range information after some image-processing steps. In addition, to circumvent the difficult calibration process required to map range values to physical distances, we complete the system with several range laser sensors. These sensors are located in a horizontal plane and measure distances to within a few centimeters. We combine all these measurements to obtain a localization map, used to locate users in the scene at negligible computational cost. Our method fills a gap in 3D applications that require absolute positions.
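    The platform combines horizontal laser range readings into a localization map. A minimal sketch of one way such a map could be assembled is given below; the sensor layout, grid resolution and readings are entirely hypothetical, not the paper's configuration.

```python
# Hypothetical sketch: fuse readings from wall-mounted laser range sensors
# in a horizontal plane into a coarse 2D localization map. The sensor
# layout, grid resolution and readings are invented for illustration.
import numpy as np

# Assumed layout: (x, y) position in metres and viewing direction in radians.
SENSORS = [(0.0, 1.0, 0.0), (0.0, 3.0, 0.0), (2.0, 0.0, np.pi / 2)]

def localization_map(ranges, cell=0.25, size=(5.0, 5.0)):
    """Mark the cell hit by each sensor's range reading in a 2D grid."""
    grid = np.zeros((int(size[1] / cell), int(size[0] / cell)))
    for (sx, sy, heading), r in zip(SENSORS, ranges):
        hx, hy = sx + r * np.cos(heading), sy + r * np.sin(heading)
        col, row = int(hx / cell), int(hy / cell)
        if 0 <= row < grid.shape[0] and 0 <= col < grid.shape[1]:
            grid[row, col] += 1.0
    return grid

# Example readings: the first sensor sees a user about 1.5 m away,
# the others see the background of the scene.
grid = localization_map([1.5, 4.8, 3.2])
print("occupied cells (row, col):", np.argwhere(grid > 0).tolist())
```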

    SR-4000 and CamCube3.0 Time of Flight (ToF) Cameras: Tests and Comparison

    In this paper, experimental comparisons between two Time-of-Flight (ToF) cameras are reported in order to test their performance and to give some procedures for testing data delivered by this kind of technology. In particular, the SR-4000 camera by Mesa Imaging AG and the CamCube3.0 by PMD Technologies have been evaluated, since they perform well and are well known to researchers dealing with ToF cameras. After a brief overview of commercial ToF cameras available on the market and the main specifications of the tested devices, two topics are presented. First, the influence of camera warm-up on distance measurement is analyzed: a warm-up of 40 minutes is suggested to obtain measurement stability, especially in the case of the CamCube3.0 camera, which exhibits distance measurement variations of several centimeters. Second, the variation of distance measurement precision with integration time is presented: distance measurement precisions of a few millimeters are obtained in both cases. Finally, a comparison between the two cameras based on the experiments is given, along with some information about future work on evaluating the influence of sunlight on distance measurements.
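    Both tests lend themselves to simple per-pixel statistics over repeated frames of a static target. The sketch below mirrors that protocol on synthetic data: the mean scene distance per frame exposes warm-up drift, and the per-pixel standard deviation over a stack of frames gives a precision figure; all numbers are invented for illustration.

```python
# Illustrative sketch (not the authors' test code): warm-up drift and
# precision statistics from repeated frames of a static flat target.
import numpy as np

def mean_distance_per_frame(frames):
    """Mean scene distance of each frame; against time it exposes drift."""
    return frames.mean(axis=(1, 2))

def precision_mm(frames):
    """Per-pixel std over a stack of frames, averaged over the image."""
    return 1000.0 * frames.std(axis=0).mean()

# Synthetic stack: one 200 x 200 frame per minute for 60 minutes of a
# target at 2 m, with drift that settles after roughly 40 minutes.
rng = np.random.default_rng(0)
t = np.arange(60.0)
drift = 0.03 * np.exp(-t / 15.0)                      # metres
frames = 2.0 + drift[:, None, None] + 0.002 * rng.standard_normal((60, 200, 200))

mean_d = mean_distance_per_frame(frames)
print(f"drift over first 10 min: {1000 * (mean_d[0] - mean_d[10]):.1f} mm")
print(f"precision after warm-up: {precision_mm(frames[45:]):.2f} mm")
```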

    MODELAGEM DO ERRO SISTEMÁTICO DE DISTÂNCIA NAS MEDIÇÕES REALIZADAS COM A CÂMARA PMD CAMCUBE 3.0 (Modelling the systematic distance error in measurements made with the PMD CamCube 3.0 camera)

    Range cameras measure the distance between the sensor and the object surface for each pixel of the image. Compared with laser scanning equipment, they have the advantage of capturing the distance of many points at a single instant, without any scanning mechanism. The measurements obtained by the camera contain systematic errors that must be minimized. Factors such as the integration time, the distance to be measured and the illumination of the scene influence the measurement. This study analyzed the influence of varying the integration time and the camera-target distance on the accuracy of the computed distance, in order to model the systematic errors of measurements made with a PMD CamCube 3.0 camera. The modelling was performed by means of the Discrete Fourier Transform and reduced the root mean square error (RMSE) from 15.01 cm to 5.05 cm for observations made with an integration time of 4000 µs. It was also verified that the amplitude of the error is directly proportional to the integration time used.
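    As a hedged sketch of the reported approach, the snippet below fits a truncated discrete Fourier series to distance residuals sampled on a uniform grid of camera-target distances and subtracts it; the error shape, grid and harmonic count are assumptions, not the paper's values.

```python
# Hedged sketch of the general idea: model a periodic distance error with
# a truncated discrete Fourier series and subtract it.
import numpy as np

def fourier_error_model(err, n_harmonics=3):
    """Keep only the DC term and the strongest harmonics of err(d)
    sampled on a uniform distance grid."""
    spec = np.fft.rfft(err)
    keep = np.argsort(np.abs(spec[1:]))[::-1][:n_harmonics] + 1
    trunc = np.zeros_like(spec)
    trunc[0], trunc[keep] = spec[0], spec[keep]
    return np.fft.irfft(trunc, n=len(err))

# Synthetic calibration run over a uniform grid of camera-target distances.
rng = np.random.default_rng(0)
d = np.linspace(1.0, 5.0, 256)
err = 0.12 * np.sin(2 * np.pi * d / 1.8) + 0.01 * rng.standard_normal(d.size)

model = fourier_error_model(err)
rmse_cm = lambda e: 100.0 * np.sqrt(np.mean(e ** 2))
print(f"RMSE before: {rmse_cm(err):.2f} cm, after: {rmse_cm(err - model):.2f} cm")
```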

    Incident Light Related Distance Error Study and Calibration of the PMD-Range Imaging Camera

    For various applications, such as object recognition or tracking, and especially when the object is partly occluded or articulated, 3D information is crucial for the robustness of the application. A recently developed sensor to acquire distance information is based on the Photonic Mixer Device (PMD), for which distance errors with several different causes can be observed. This article presents an improved distance calibration approach for PMD-based distance sensing which handles objects with different Lambertian reflectance properties. Within this scope, the relations among the sources of distance error were investigated. Where applicable, they were isolated for studies of their relation to the actuating variables, i.e. integration time, amplitude and measured distance, as these are the only parameters available for the calibration. The calibration results of the proposed method surpass those of all other known methods. In particular, for objects with unknown reflectance properties, a significant reduction of the error is achieved.
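    Since integration time, amplitude and measured distance are, per the abstract, the only run-time observables, a calibration of this general family can be sketched as a regression of the distance error on those observables. The following is a hypothetical illustration, not the article's method; the feature set and error model are invented.

```python
# Hypothetical illustration: regress the distance error on the run-time
# observables named in the abstract (measured distance and amplitude),
# with an invented error model.
import numpy as np

def design(d, amp):
    """Low-order polynomial features in distance and log-amplitude."""
    la = np.log(amp)
    return np.column_stack([np.ones_like(d), d, d ** 2, la, la ** 2, d * la])

def fit_correction(d_meas, amp, d_true):
    coef, *_ = np.linalg.lstsq(design(d_meas, amp), d_true - d_meas, rcond=None)
    return coef

def correct(d_meas, amp, coef):
    return d_meas + design(d_meas, amp) @ coef

# Synthetic calibration set: darker (low-amplitude) targets read too long.
rng = np.random.default_rng(0)
d_true = rng.uniform(0.5, 6.0, 500)
amp = rng.uniform(50.0, 2000.0, 500)
d_meas = d_true + 0.05 - 0.01 * np.log(amp) + 0.003 * rng.standard_normal(500)

coef = fit_correction(d_meas, amp, d_true)
residual = correct(d_meas, amp, coef) - d_true
print(f"residual std after calibration: {1000.0 * residual.std():.1f} mm")
```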

    Análise de erros devidos a fatores externos nas medidas realizadas com câmera de distância PMD Camcube 3.0 (Analysis of errors due to external factors in measurements made with the PMD CamCube 3.0 range camera)

    Advisor: Prof. Dr. Jorge Antonio Silva Centeno. Doctoral thesis, Universidade Federal do Paraná, Setor de Ciências da Terra, Programa de Pós-Graduação em Ciências Geodésicas. Defended: Curitiba, 28/09/2015. Includes references: f. 97-103. Abstract: Among the recent instruments developed for remote sensing are range cameras, also known as TOF cameras. Like laser scanning devices, these cameras measure the distance between the sensor and the target, but they have no scanning mechanism and capture images frame by frame. The measurements obtained with such cameras are subject to several sources of instrumental and environmental error. Among these, this thesis focused on the errors caused by ambient lighting, by the reflectivity of the target and by the camera-target distance, and also determined the warm-up time required to achieve the highest precision/accuracy in the camera measurements. A PMD CamCube 3.0 camera was used, which delivers intensity, distance and amplitude images of 200 x 200 pixels from a CMOS (Complementary Metal Oxide Semiconductor) sensor. To measure the distance for each pixel in the image, this equipment uses the time-of-flight (TOF) principle based on the phase difference method. The experiments showed that the camera needs at least 40 minutes of warm-up to achieve stable measurements. The error due to the camera-target distance behaves as an attenuated wave, modeled by a wave equation with decaying amplitude, whose effect diminishes as the distance grows. The errors due to ambient lighting showed a linear variation proportional to the intensity of the ambient lighting. Finally, regarding target reflectivity, darker targets showed greater noise, and compared with a white target, changing the target reflectivity to gray or black significantly increased the errors in the distance values.
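    The reported distance-dependent behaviour, an oscillation whose amplitude attenuates with growing distance, can be captured by a damped sinusoid. The sketch below fits such a model to synthetic residuals; the functional form is the obvious candidate and the parameters are invented, so treat it as an illustration rather than the thesis's model.

```python
# Hedged sketch: fit a damped sinusoid to distance residuals, mirroring
# the reported attenuated oscillatory behaviour. Parameters are invented.
import numpy as np
from scipy.optimize import curve_fit

def damped_wave(d, A, k, w, phi):
    """Oscillatory error whose amplitude decays with distance d."""
    return A * np.exp(-k * d) * np.sin(w * d + phi)

# Synthetic residuals (cm) over a range of camera-target distances.
rng = np.random.default_rng(0)
d = np.linspace(0.5, 7.0, 200)
err = damped_wave(d, 4.0, 0.3, 3.5, 0.8) + 0.3 * rng.standard_normal(d.size)

popt, _ = curve_fit(damped_wave, d, err, p0=(3.0, 0.2, 3.4, 0.5))
print("fitted (A, k, w, phi):", np.round(popt, 2))
```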

    Geometrische und stochastische Modelle zur Verarbeitung von 3D-Kameradaten am Beispiel menschlicher Bewegungsanalysen (Geometric and stochastic models for processing 3D camera data, exemplified by human motion analyses)

    The three-dimensional documentation of the form and location of any type of object using flexible photogrammetric methods and procedures plays a key role in a wide range of technical-industrial and scientific areas of application. Potential applications include measurement tasks in the automotive, machine building and shipbuilding sectors, the compilation of complex 3D models in the fields of architecture, archaeology and monument preservation, and motion analyses in the fields of flow measurement technology, ballistics and medicine. In close-range photogrammetry a variety of optical 3D measurement systems are used. Besides area-sensor cameras in single or multi-image configurations, active triangulation procedures for surface measurement (e.g. using structured light or laser scanner systems) are employed. Through the use of modulation techniques, 3D cameras based on photonic mixer devices or comparable principles produce a range image simultaneously with a grey value image. Functioning as single-image sensors, they deliver spatially resolved surface data at video rate without the need for stereoscopic matching, which considerably reduces the complexity and computing effort of 3D motion analysis. 3D cameras combine the practicality of a digital camera with the 3D data acquisition potential of established surface measurement systems. Despite their still comparatively low spatial resolution, as monosensory systems for real-time depth image acquisition they represent an interesting alternative for 3D motion analysis tasks. The use of a 3D camera as a measuring instrument requires the modelling of deviations from the ideal projection model, and the processing of the 3D camera data generated requires the targeted adaptation, refinement and new development of computer vision and photogrammetric procedures. Taking the investigation of inter-human motion behaviour as its example, this dissertation therefore focuses on the development of methods for sensor calibration and 3D motion analysis. As a result of its inherent design and measurement principle, a 3D camera simultaneously provides amplitude and range information reconstructed from a measurement signal.
    The simultaneous integration of all 3D camera information into one integrated approach is a logical consequence and is the focus of the procedural developments. On the one hand, the complementary characteristics of the observations support each other through the functional connection established between the measurement channels, which can be expected to increase accuracy and reliability. On the other hand, the stochastic model, extended by a variance component estimation, ensures that the heterogeneous pool of information is fully exploited. The integrated bundle adjustment developed enables the determination of the exact 3D camera geometry as well as the estimation of the range-measurement-specific correction parameters that model the linear, cyclic and signal-propagation-related error components of a 3D camera distance measurement. The integrated calibration routine jointly adjusts the quantities measured in both information channels while automatically estimating optimum observation weights. The method is based on the flexible principle of self-calibration and requires no object space information, which in particular eliminates the laborious determination of reference distances of superior accuracy. The accuracy analyses carried out confirm the correctness of the proposed functional relationships, but also reveal weaknesses due to not-yet-parameterized range-measurement-specific errors. The adaptivity and modular implementation of the mathematical model developed nevertheless guarantee its future extension. After calibration, the accuracy of new 3D point coordinates can be stated as 5 mm. For a depth imaging technology influenced by a multitude of mostly simultaneously occurring noise sources, this level of accuracy is very promising, especially with regard to the development of evaluation algorithms that build on corrected 3D camera data. 2.5D Least Squares Tracking (LST) is an integrated spatial and temporal matching method developed within the framework of this dissertation for evaluating 3D camera image sequences. The algorithm is based on the least squares image matching method established in photogrammetry and maps small surface segments of consecutive 3D camera data sets onto one another. The mapping function, built on a 2D affine transformation, has been adapted to the data structure of a 3D camera. The closed-form parameterization combines grey values and range values in one integrated model. In addition to the affine parameters that capture translation and rotation effects, scale and inclination parameters model perspective-related changes in the size of the image patch caused by distance changes along the viewing direction. In a pre-processing step, the input data are corrected for their optical and range-measurement-specific errors using the calibration routine developed, and the measured slope distances are reduced to horizontal distances. As an integrated approach, 2.5D LST delivers complete 3D displacement vectors. Furthermore, the accuracy and reliability measures resulting from the error calculation can serve as decision criteria for integration into an application-specific processing chain. The validation of the method showed that introducing complementary information yields a more accurate and more reliable solution of the correspondence problem, above all under difficult contrast conditions in one channel.
    The accuracy of the scale and inclination parameters, which are directly linked to the distance correction terms, improved markedly. Moreover, the extension of the geometric model brought significant benefits, particularly when matching natural, not entirely planar surface segments. The area-based method developed for object matching and object tracking works on contact-free 3D camera data and is therefore especially suited to 3D motion analysis tasks that wish to avoid the extra effort of a multi-ocular experimental set-up and the need to signalize objects with target marks. The potential of the 3D camera matching approach was demonstrated in two application scenarios from research into human behaviour. 2.5D LST was used to determine interpersonal distance and body orientation in a pedagogical study of conflict regulation between pairs of child friends, and likewise to mark and subsequently classify movement units of speech-accompanying hand gestures. Implementing 2.5D LST in the proposed procedures enabled an automatic, effective, objective and temporally and spatially high-resolution acquisition and evaluation of behaviourally relevant data. This dissertation proposes the use of a novel 3D range imaging camera for gathering human behavioural data, and presents both a calibration tool developed for data preparation and a method for the contact-free determination of dense 3D motion vector fields. The work shows that photogrammetric methods can deliver valuable results for motion analysis tasks in the as-yet relatively untapped field of behavioural research, and thereby contributes to current efforts in the automated videographic documentation of body movements in dyadic interactions.
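    The core of 2.5D LST is a least-squares match that stacks grey-value and range residuals in one adjustment with estimated per-channel weights. The sketch below reduces the thesis's full model (affine plus scale and inclination parameters) to a translation-only Gauss-Newton match, purely to illustrate how the two channels enter a single system; the data and weights are synthetic assumptions.

```python
# Greatly reduced sketch of the 2.5D-LST idea: a translation-only
# Gauss-Newton least-squares match over stacked grey and range residuals,
# each channel weighted by an assumed noise level. Synthetic data only.
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def track_patch(ref, cur, top, left, size=15, weights=(1.0, 0.5), iters=25):
    """Estimate the (dy, dx) shift of a patch between two (grey, range) pairs."""
    ys, xs = np.mgrid[top:top + size, left:left + size].astype(float)
    grads = [np.gradient(c) for c in cur]            # per-channel (gy, gx)
    p = np.zeros(2)
    for _ in range(iters):
        rows_A, rows_r = [], []
        for (img, (gy, gx)), tmpl, w in zip(zip(cur, grads), ref, weights):
            coords = [ys + p[0], xs + p[1]]
            warped = map_coordinates(img, coords, order=1)
            jy = map_coordinates(gy, coords, order=1)  # Jacobian w.r.t. dy
            jx = map_coordinates(gx, coords, order=1)  # Jacobian w.r.t. dx
            rows_A.append(w * np.column_stack([jy.ravel(), jx.ravel()]))
            rows_r.append(w * (tmpl[top:top + size, left:left + size] - warped).ravel())
        dp, *_ = np.linalg.lstsq(np.vstack(rows_A), np.concatenate(rows_r), rcond=None)
        p += dp
        if np.linalg.norm(dp) < 1e-5:
            break
    return p

# Synthetic pair: grey and range channels, both shifted by (1, 2) pixels.
rng = np.random.default_rng(1)
ref = (gaussian_filter(rng.random((64, 64)), 3), gaussian_filter(rng.random((64, 64)), 4))
cur = tuple(np.roll(np.roll(c, 1, axis=0), 2, axis=1) for c in ref)
print("estimated shift:", np.round(track_patch(ref, cur, 20, 20), 2))
```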