
    3D Capturing with Monoscopic Camera

    This article presents a new concept: using the auto-focus function of a monoscopic camera sensor to estimate depth-map information, which avoids both the use of auxiliary equipment or human interaction and the computational complexity introduced by structure-from-motion (SfM) or depth analysis. The system architecture, which supports capture, processing and display of both stereo image and video data, is discussed. A novel stereo image pair generation algorithm using Z-buffer-based 3D surface recovery is proposed. Based on the depth map, we are able to calculate the disparity map (the distance in pixels between corresponding image points in the two views) for the image. The presented algorithm uses a single image with depth information (e.g. a z-buffer) as input and produces two images, one for the left eye and one for the right.
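The depth-to-disparity step described above follows directly from pinhole stereo geometry: disparity d = f * b / Z, where f is the focal length in pixels, b the virtual camera baseline, and Z the depth. A minimal sketch of generating a stereo pair by shifting pixels according to that disparity is shown below; the function names and parameters are illustrative, not taken from the article, and real depth-image-based rendering would also handle disoccluded holes.

```python
import numpy as np

def depth_to_disparity(depth, focal_px, baseline):
    """Pixel disparity from depth via pinhole stereo geometry: d = f * b / Z."""
    return focal_px * baseline / np.maximum(depth, 1e-6)  # avoid divide-by-zero

def render_stereo_pair(image, depth, focal_px, baseline):
    """Synthesize left/right views by shifting each pixel +/- half its disparity.

    Hypothetical sketch: no hole filling or occlusion handling is performed.
    """
    h, w = depth.shape
    left = np.zeros_like(image)
    right = np.zeros_like(image)
    disp = depth_to_disparity(depth, focal_px, baseline)
    for y in range(h):
        for x in range(w):
            s = int(round(disp[y, x] / 2.0))
            if 0 <= x + s < w:
                left[y, x + s] = image[y, x]   # left view shifts content right
            if 0 <= x - s < w:
                right[y, x - s] = image[y, x]  # right view shifts content left
    return left, right
```

Nearer pixels (smaller Z) receive larger shifts, which is what produces the perceived depth when the two views are fused.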

    Phase-resolved Crab pulsar measurements from 25 to 400 GeV with the MAGIC telescopes

    We report on observations of the Crab pulsar with the MAGIC telescopes. Our data were taken in both monoscopic (> 25 GeV) and stereoscopic (> 50 GeV) observation modes. Two peaks were detected in both modes, and phase-resolved energy spectra were calculated. By comparing with Fermi-LAT measurements, we find that the energy spectrum of the Crab pulsar does not follow a power law with an exponential cutoff, but has an additional hard component extending up to at least 400 GeV. This suggests that the emission above 25 GeV is not dominated by curvature radiation, as predicted in the standard scenarios of the outer-gap (OG) and slot-gap (SG) models. Comment: 4 pages, 2 figures, Proc. TAUP 2011, submitted for publication in JCP
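The spectral shapes being contrasted in this abstract can be written down explicitly: a pure power law dN/dE = A E^(-gamma) versus a power law with an exponential cutoff, dN/dE = A E^(-gamma) exp(-E/E_cut). A short sketch, with purely illustrative parameter values (not the MAGIC or Fermi-LAT fit results), shows how strongly a cutoff suppresses the flux at 400 GeV:

```python
import math

def power_law(E, A, gamma):
    """Pure power-law spectrum dN/dE = A * E^-gamma (E in GeV)."""
    return A * E ** (-gamma)

def power_law_expcut(E, A, gamma, E_cut):
    """Power law with exponential cutoff: dN/dE = A * E^-gamma * exp(-E/E_cut)."""
    return A * E ** (-gamma) * math.exp(-E / E_cut)

# Illustrative: with a cutoff at 100 GeV, the flux at 400 GeV is suppressed
# by a factor exp(-400/100) = exp(-4) ~ 0.018 relative to the pure power law,
# which is why a detection out to 400 GeV disfavours such a cutoff.
suppression = power_law_expcut(400.0, 1.0, 2.0, 100.0) / power_law(400.0, 1.0, 2.0)
```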

    Autonomous control of a humanoid soccer robot : development of tools and strategies using colour vision : a thesis presented in partial fulfilment of the requirements for the degree of Master of Engineering in Mechatronics at Massey University

    Humanoid robot research has been an ongoing area of development due to the benefits that humanoid robots present, whether for entertainment or industrial purposes: their ability to move around in a human environment, mimic human movement, and be aesthetically pleasing. RoboCup is a competition designed to further the development of robotics, with the humanoid league at the forefront of the competition. A robot platform is developed to compete at an international level in the RoboCup competition. Along with the platform, tools are created to allow the robot to function autonomously, effectively and efficiently in this environment, primarily using colour vision as its main sensory input. By using a 'point and follow' approach to robot control, a simple A.I. was formed which enables the robot to perform the basic functions of a striker of the ball. Mathematical models are then presented for the comparison of stereoscopic versus monoscopic vision, with an explanation of why monoscopic vision was chosen, given that the competition environment is known. A monoscopic depth-perception mathematical model and algorithm is then developed, along with a ball-trajectory algorithm that allows the robot to calculate a moving ball's trajectory and react according to its motion path. Finally, through analysis of the implementation of the constructed tools on the chosen platform, their effectiveness and drawbacks are discussed.

    Visual enhancements in pick-and-place tasks: Human operators controlling a simulated cylindrical manipulator

    A teleoperation simulator was constructed with a vector display system, joysticks, and a simulated cylindrical manipulator in order to quantitatively evaluate various display conditions. The first of two experiments investigated the effects of perspective-parameter variations on human operators' pick-and-place performance using a monoscopic perspective display. The second experiment compared visual enhancements of the monoscopic perspective display, obtained by adding a grid and reference lines, with visual enhancements of a stereoscopic display. The results indicate that stereoscopy generally permits superior pick-and-place performance, but that monoscopy nevertheless allows equivalent performance when defined with appropriate perspective-parameter values and adequate visual enhancements.

    Object-based 2D-to-3D video conversion for effective stereoscopic content generation in 3D-TV applications

    Three-dimensional television (3D-TV) has gained increasing popularity in the broadcasting domain, as it enables enhanced viewing experiences in comparison to conventional two-dimensional (2D) TV. However, its application has been constrained by the lack of essential content, i.e., stereoscopic videos. To alleviate this content shortage, an economical and practical solution is to reuse the huge media resources available in monoscopic 2D and convert them to stereoscopic 3D. Although stereoscopic video can be generated from monoscopic sequences using depth measurements extracted from cues such as focus blur, motion and size, the quality of the resulting video may be poor, as such measurements are usually arbitrarily defined and appear inconsistent with the real scenes. To help solve this problem, a novel method for object-based stereoscopic video generation is proposed which features i) optical-flow-based occlusion reasoning to determine depth ordering, ii) object segmentation using improved region growing from masks of the determined depth layers, and iii) a hybrid depth estimation scheme using content-based matching (against a small library of true stereo image pairs) and depth-ordering-based regularization. Comprehensive experiments have validated the effectiveness of our proposed 2D-to-3D conversion method in generating stereoscopic videos of consistent depth measurements for 3D-TV applications.
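The region-growing segmentation step mentioned in item ii) can be sketched in its basic form: starting from a seed pixel, the region absorbs 4-connected neighbours whose values lie within a tolerance of the seed. This is a minimal textbook version, not the paper's improved variant, and the names and tolerance rule are illustrative.

```python
from collections import deque

def region_grow(img, seed, tol):
    """Breadth-first region growing on a 2D list of scalar values.

    Adds 4-connected neighbours whose value differs from the seed value
    by at most `tol`. Returns a boolean membership mask.
    """
    h, w = len(img), len(img[0])
    sy, sx = seed
    seed_val = img[sy][sx]
    mask = [[False] * w for _ in range(h)]
    mask[sy][sx] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w and not mask[ny][nx]
                    and abs(img[ny][nx] - seed_val) <= tol):
                mask[ny][nx] = True
                queue.append((ny, nx))
    return mask
```

Seeding such growth from masks of already-determined depth layers, as the abstract describes, constrains each region to a single depth plane and avoids the bleeding across object boundaries that naive intensity-based growing suffers from.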

    Predicting optical coherence tomography-derived diabetic macular edema grades from fundus photographs using deep learning

    Diabetic eye disease is one of the fastest growing causes of preventable blindness. With the advent of anti-VEGF (vascular endothelial growth factor) therapies, it has become increasingly important to detect center-involved diabetic macular edema (ci-DME). However, center-involved diabetic macular edema is diagnosed using optical coherence tomography (OCT), which is not generally available at screening sites because of cost and workflow constraints. Instead, screening programs rely on the detection of hard exudates in color fundus photographs as a proxy for DME, often resulting in high false positive or false negative calls. To improve the accuracy of DME screening, we trained a deep learning model to use color fundus photographs to predict ci-DME. Our model had an ROC-AUC of 0.89 (95% CI: 0.87-0.91), which corresponds to a sensitivity of 85% at a specificity of 80%. In comparison, three retinal specialists had similar sensitivities (82-85%), but only about half the specificity (45-50%, p<0.001 for each comparison with the model). The positive predictive value (PPV) of the model was 61% (95% CI: 56-66%), approximately double the 36-38% achieved by the retinal specialists. In addition to predicting ci-DME, our model was able to detect the presence of intraretinal fluid with an AUC of 0.81 (95% CI: 0.81-0.86) and subretinal fluid with an AUC of 0.88 (95% CI: 0.85-0.91). The ability of deep learning algorithms to make clinically relevant predictions that generally require sophisticated 3D-imaging equipment from simple 2D images has broad relevance to many other applications in medical imaging.
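The screening metrics quoted in this abstract (sensitivity, specificity, PPV) are simple functions of the confusion-matrix counts. A quick sketch, with example counts chosen only to match the quoted rates rather than taken from the paper's data:

```python
def sensitivity(tp, fn):
    """Fraction of true disease cases flagged positive: TP / (TP + FN)."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Fraction of healthy cases flagged negative: TN / (TN + FP)."""
    return tn / (tn + fp)

def ppv(tp, fp):
    """Positive predictive value: fraction of positive calls that are correct."""
    return tp / (tp + fp)

# Illustrative counts per 100 diseased / 100 healthy patients, chosen to
# reproduce the abstract's headline operating point (85% sens., 80% spec.).
model_sens = sensitivity(tp=85, fn=15)   # 0.85
model_spec = specificity(tn=80, fp=20)   # 0.80
```

Note that PPV depends on disease prevalence as well as sensitivity and specificity, which is why the model's higher specificity roughly doubles its PPV relative to the specialists at the same sensitivity.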

    Observations of the Crab Nebula with H.E.S.S. Phase II

    The High Energy Stereoscopic System (H.E.S.S.) phase I instrument was an array of four Imaging Atmospheric Cherenkov Telescopes (IACTs) with 100 m^2 mirror area each, which has very successfully mapped the sky at photon energies above ~100 GeV. Recently, a 600 m^2 telescope was added to the centre of the existing array; it can be operated either in standalone mode or jointly with the four smaller telescopes. The large telescope lowers the energy threshold for gamma-ray observations to several tens of GeV, making the array sensitive at energies where the Fermi-LAT instrument runs out of statistics. At the same time, the new telescope makes the H.E.S.S. phase II instrument the first hybrid IACT array, as it operates telescopes of different sizes (and hence different trigger rates) and different fields of view. In this contribution we present results of H.E.S.S. phase II observations of the Crab Nebula, compare them to earlier observations, and evaluate the performance of the new instrument with Monte Carlo simulations. Comment: In Proceedings of the 34th International Cosmic Ray Conference (ICRC2015), The Hague, The Netherlands