It's all Relative: Monocular 3D Human Pose Estimation from Weakly Supervised Data
We address the problem of 3D human pose estimation from 2D input images using
only weakly supervised training data. Despite showing considerable success for
2D pose estimation, the application of supervised machine learning to 3D pose
estimation in real world images is currently hampered by the lack of varied
training images with corresponding 3D poses. Most existing 3D pose estimation
algorithms train on data that has either been collected in carefully controlled
studio settings or has been generated synthetically. Instead, we take a
different approach, and propose a 3D human pose estimation algorithm that only
requires relative estimates of depth at training time. Such a training signal,
although noisy, can be collected easily from crowd annotators and is of
sufficient quality to enable successful training and evaluation of 3D pose
algorithms. Our results are competitive with fully supervised regression-based
approaches on the Human3.6M dataset, despite using significantly weaker
training data. Our proposed algorithm opens the door to using existing
widespread 2D datasets for 3D pose estimation by allowing fine-tuning with
noisy relative constraints, resulting in more accurate 3D poses.Comment: BMVC 2018. Project page available at
http://www.vision.caltech.edu/~mronchi/projects/RelativePos
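A training signal of crowd-sourced relative depth annotations is naturally expressed as a pairwise ranking loss over predicted joint depths. The sketch below shows one plausible form of such a loss; the paper's exact formulation may differ, and all names are illustrative.

```python
import numpy as np

def relative_depth_loss(pred_z, pairs, labels, margin=1.0):
    """Hinge-style ranking loss on pairwise relative depth annotations.

    pred_z : (J,) predicted depths for J joints
    pairs  : (P, 2) annotated joint index pairs (i, j)
    labels : (P,) +1 if joint i is annotated closer than j, -1 if farther,
             0 if annotators judged the pair roughly equidistant
    """
    zi = pred_z[pairs[:, 0]]
    zj = pred_z[pairs[:, 1]]
    diff = zi - zj
    ordered = labels != 0
    # ordered pairs: penalise violations of the annotated depth ordering
    loss_ord = np.maximum(0.0, margin + labels[ordered] * diff[ordered])
    # "same depth" pairs: penalise any separation quadratically
    loss_eq = diff[~ordered] ** 2
    return loss_ord.sum() + loss_eq.sum()
```

Because only orderings are supervised, the loss constrains depths up to a global shift and scale, which matches the "relative" nature of the annotations.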
Calibration Methods for Head-Tracked 3D Displays
Head-tracked 3D displays can provide a compelling 3D effect, but even small inaccuracies in the calibration of the participant’s viewpoint to the display can disrupt the 3D illusion. We propose a novel interactive procedure for a participant to easily and accurately calibrate a head-tracked display by visually aligning patterns across a multi-screen display. Head-tracker measurements are then calibrated to these known viewpoints. We conducted a user study to evaluate the effectiveness of different visual patterns and different display shapes. We found that the spherical display was the easiest shape to align, and that the combination of circles and lines was the best calibration pattern. We performed a quantitative camera-based calibration of a cubic display and found that visual calibration outperformed manual tuning and generated viewpoint calibrations accurate to within a degree. Our work removes the usual, burdensome step of manual calibration when using head-tracked displays and paves the way for wider adoption of this inexpensive and effective 3D display technology.
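Calibrating head-tracker measurements to the known viewpoints amounts to a least-squares rigid alignment between two 3D point sets. The abstract does not specify the fitting method, so the Kabsch/Procrustes sketch below is an assumption; function and variable names are illustrative.

```python
import numpy as np

def fit_rigid_transform(tracker_pts, display_pts):
    """Least-squares rigid transform (Kabsch algorithm) mapping raw
    head-tracker positions onto the known calibrated viewpoints.

    Both inputs are (N, 3) arrays of corresponding points; returns
    (R, t) such that display ~ R @ tracker + t.
    """
    ct = tracker_pts.mean(axis=0)
    cd = display_pts.mean(axis=0)
    # cross-covariance of the centred point sets
    H = (tracker_pts - ct).T @ (display_pts - cd)
    U, _, Vt = np.linalg.svd(H)
    # reflection guard: force a proper rotation (det = +1)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ ct
    return R, t
```

At run time, each raw tracker reading would be mapped through `R @ p + t` before being used as the rendering viewpoint.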
Biomimetic Micro Air Vehicle Testing Development and Small Scale Flapping-Wing Analysis
The purpose of this research was to develop testing methods capable of analyzing the performance of a miniature flapping-wing mechanism that can later be adapted to a biomimetic micro air vehicle (MAV). Three small-scale flapping mechanisms capable of single-plane flapping, flapping with active pitch control, and flapping/pitch with out-of-plane movement were designed using SolidWorks. The flapping-only model was fabricated on an Objet Eden 500V 3-dimensional printer. The flapping mechanism was mounted on an aluminum plate supported by air bearings, and thrust was measured for a variety of conditions. The testing was conducted using wings composed of carbon fiber and Mylar in four different size configurations, with flapping speeds ranging from 3.5–15 Hz. The thrust was measured using an axially mounted 50 g load cell with an accuracy of ±0.1 g. Non-dimensional thrust and power numbers were computed. The flapping mechanism was then mounted on a 6-component force balance to measure dynamic loading, which demonstrated the ability to gather time-accurate data within a single flapping stroke at speeds as high as 15 Hz. High-speed cameras operated at 1500 Hz were also used for capturing images of the structure of the wing for various testing conditions. Overall, this research successfully demonstrated both qualitative and quantitative testing procedures that can be utilized in developing small-scale flapping-wing micro air vehicles.
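Non-dimensional thrust numbers for flapping wings are typically formed from a reference velocity built out of the flapping frequency, stroke amplitude, and wing length. The abstract does not state which normalisation was used, so the wing-tip-velocity convention below is an assumption; all names and parameter choices are illustrative.

```python
RHO_AIR = 1.225  # kg/m^3, sea-level air density

def thrust_coefficient(thrust_n, freq_hz, span_m, chord_m, stroke_amp_rad):
    """Non-dimensional thrust based on a wing-tip reference velocity,
    U_ref = 2 * stroke_amplitude * frequency * span (one common
    convention; the report may normalise differently).

    thrust_n is in newtons (load-cell readings in grams would first be
    converted via T = m * 9.81e-3).
    """
    u_ref = 2.0 * stroke_amp_rad * freq_hz * span_m
    area = span_m * chord_m  # rectangular planform assumed
    return thrust_n / (0.5 * RHO_AIR * u_ref**2 * area)
```

With this convention, holding thrust fixed while doubling the flapping frequency quarters the coefficient, since the dynamic pressure scales with frequency squared.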
Automated calibration of multi-sensor optical shape measurement system
A multi-sensor optical shape measurement system (SMS) based on the fringe
projection method and temporal phase unwrapping has recently been commercialised
as a result of its easy implementation, computer control using a spatial light
modulator, and fast full-field measurement. The main advantage of a multi-sensor
SMS is the ability to make measurements for 360° coverage without the requirement
for mounting the measured component on translation and/or rotation stages. However,
for greater acceptance in industry, issues relating to a user-friendly calibration of the
multi-sensor SMS in an industrial environment for presentation of the measured data
in a single coordinate system need to be addressed.
The calibration of multi-sensor SMSs typically requires a calibration artefact, which
consequently leads to significant user input for the processing of calibration data, in
order to obtain the respective sensor's optimal imaging geometry parameters. The
imaging geometry parameters provide a mapping from the acquired shape data to real
world Cartesian coordinates. However, the process of obtaining optimal sensor
imaging geometry parameters (which involves a nonlinear numerical optimization
process known as bundle adjustment), requires labelling regions within each point
cloud as belonging to known features of the calibration artefact. This thesis describes
an automated calibration procedure which ensures that calibration data is processed
through automated feature detection of the calibration artefact, artefact pose
estimation, automated control point selection, and finally bundle adjustment itself. [Continues.]
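Bundle adjustment here means nonlinear least-squares refinement of each sensor's imaging-geometry parameters against the detected artefact features. The sketch below assumes a minimal pinhole model and `scipy.optimize.least_squares`; the thesis's actual parameterisation (distortion terms, multi-sensor coupling) will differ, and all names are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

def project(params, pts3d):
    """Minimal pinhole projection.

    params = [fx, fy, cx, cy, rx, ry, rz, tx, ty, tz], with the
    rotation given as a Rodrigues axis-angle vector.
    """
    fx, fy, cx, cy = params[:4]
    rvec, t = params[4:7], params[7:10]
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        R = np.eye(3)
    else:
        k = rvec / theta
        K = np.array([[0, -k[2], k[1]],
                      [k[2], 0, -k[0]],
                      [-k[1], k[0], 0]])
        R = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)
    cam = pts3d @ R.T + t
    return np.column_stack((fx * cam[:, 0] / cam[:, 2] + cx,
                            fy * cam[:, 1] / cam[:, 2] + cy))

def residuals(params, pts3d, obs2d):
    """Reprojection error stacked into a flat residual vector."""
    return (project(params, pts3d) - obs2d).ravel()

# refinement of an initial estimate x0 against detected control points:
# result = least_squares(residuals, x0, args=(artefact_pts3d, obs2d))
```

The automated feature detection and control-point selection steps described above supply the `pts3d`/`obs2d` correspondences without manual labelling.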
Off-Line Camera-Based Calibration for Optical See-Through Head-Mounted Displays
In recent years, the entry into the market of self-contained optical see-through headsets with integrated multi-sensor capabilities has led the way to innovative and technology-driven augmented reality applications and has encouraged the adoption of these devices across highly challenging medical and industrial settings. Despite this, the display calibration process of consumer-level systems is still sub-optimal, particularly for those applications that require high accuracy in the spatial alignment between computer-generated elements and the real-world scene. State-of-the-art manual and automated calibration procedures designed to estimate all the projection parameters are too complex for real application cases outside laboratory environments. This paper describes an off-line fast calibration procedure that only requires a camera to observe a planar pattern displayed on the see-through display. The camera that replaces the user’s eye must be placed within the eye-motion-box of the see-through display. The method exploits standard camera calibration and computer vision techniques to estimate the projection parameters of the display model for a generic position of the camera. At execution time, the projection parameters can then be refined through a planar homography that encapsulates the shift and scaling effect associated with the estimated relative translation from the old camera position to the current user’s eye position. Compared to classical SPAAM techniques that still rely on the human element and to other camera-based calibration procedures, the proposed technique is flexible and easy to replicate in both laboratory environments and real-world settings.
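The planar pattern observed through the display induces a homography between known pattern points and their imaged positions, which standard techniques estimate by the Direct Linear Transform. A minimal DLT sketch (the paper would likely use a full calibration toolchain such as OpenCV's; names here are illustrative):

```python
import numpy as np

def homography_dlt(src, dst):
    """Direct Linear Transform estimate of the 3x3 homography H
    with dst ~ H @ src (points in inhomogeneous 2D coordinates).

    Requires at least 4 point correspondences in general position.
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # each correspondence contributes two linear constraints on H
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # H is the right null vector of A (smallest singular value)
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]
```

At execution time, the same machinery supports the refinement step: the shift-and-scale correction for a new eye position is itself a planar homography applied on top of the off-line estimate.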
Calibration and Sensitivity Analysis of a Stereo Vision-Based Driver Assistance System
Under the "Books" tab at http://intechweb.org/, search for the title "Stereo Vision", Chapter 1.
Large volume artefact for calibration of multi-sensor projected fringe systems
Fringe projection is a commonly used optical technique for measuring the shapes of objects with dimensions of up to about 1 m across. There are however many instances in the aerospace and automotive industries where it would be desirable to extend the benefits of the technique (e.g., high temporal and spatial sampling rates, non-contacting measurements) to much larger measurement volumes. This thesis describes a process that has been developed to allow the creation of a large global measurement volume from two or more independent shape measurement systems.
A new 3-D large volume calibration artefact, together with a hexapod positioning stage, have been designed and manufactured to allow calibration of volumes of up to 3 m × 1 m × 1 m. The artefact was built from carbon fibre composite tubes, chrome steel spheres, and mild steel end caps with rare earth rod magnets. The major advantage over other commonly used artefacts is the dimensionally stable relationship between features spanning multiple individual measurement volumes, thereby allowing calibration of several scanners within a global coordinate system, even when they have non-overlapping fields of view.
The calibration artefact is modular, providing the scalability needed to address still larger measurement volumes and volumes of different geometries. Both it and the translation stage are easy to transport and to assemble on site. The artefact also provides traceability for calibration through independent measurements on a mechanical CMM. The dimensions of the assembled artefact have been found to be consistent with those of the individual tube lengths, demonstrating that gravitational distortion corrections are not needed for the artefact size considered here. Deformations due to thermal and hygral effects have also been experimentally quantified.
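Locating the artefact's chrome steel spheres in each sensor's point cloud reduces to a least-squares sphere fit, which has a convenient linear formulation. The thesis does not state its fitting method, so the sketch below is an assumption; names are illustrative.

```python
import numpy as np

def fit_sphere(pts):
    """Linear least-squares sphere fit to an (N, 3) array of surface
    points, e.g. a point-cloud patch on one artefact sphere.

    Uses the identity |p|^2 = 2 c . p + (r^2 - |c|^2), which is linear
    in the centre c and the constant term; returns (centre, radius).
    """
    A = np.column_stack((2.0 * pts, np.ones(len(pts))))
    b = (pts ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    centre = sol[:3]
    radius = np.sqrt(sol[3] + centre @ centre)
    return centre, radius
```

The fitted sphere centres serve as the stable features whose CMM-measured separations give the calibration its traceability.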
The thesis describes the complete calibration procedure: large volume calibration artefact design, manufacture, and testing; initial estimation of the sensor geometry parameters; processing of the calibration data from manually selected regions-of-interest (ROI) of the artefact features; artefact pose estimation; automated control point selection; and finally bundle adjustment. An accuracy of one part in 17 000 of the global measurement volume diagonal was achieved and verified.