
    The effects of video compression on acceptability of images for monitoring life sciences experiments

    Future manned space operations for Space Station Freedom will call for a variety of carefully planned multimedia digital communications, including full-frame-rate color video, to support remote operations of scientific experiments. This paper presents the results of an investigation to determine whether video compression is a viable solution to transmission bandwidth constraints. It reports on the impact of different levels of compression, and of the associated compression parameters, on image acceptability to investigators in life-sciences research at ARC. Three nonhuman life-sciences disciplines (plant, rodent, and primate biology) were selected for this study. A total of 33 subjects viewed experimental scenes in their own scientific disciplines. Ten plant scientists viewed still images of wheat stalks at various stages of growth. Each image was compressed to four different compression levels using the Joint Photographic Experts Group (JPEG) standard algorithm, and the images were presented in random order. Twelve and eleven staff members viewed 30-sec videotaped segments showing small rodents and a small primate, respectively. Each segment was repeated at four different compression levels in random order using an inverse cosine transform (ICT) algorithm. Each viewer made a series of subjective image-quality ratings. There was a significant difference in image ratings according to the type of scene viewed within disciplines; thus, ratings were scene dependent. Image (still and motion) acceptability does, in fact, vary with compression level. The JPEG still-image compression levels, even over the large range of 5:1 to 120:1 in this study, yielded equally high levels of acceptability. In contrast, the ICT algorithm for motion compression yielded a sharp decline in acceptability below 768 kb/sec. Therefore, if video compression is to be used to overcome transmission bandwidth constraints, effective management of the compression ratio and parameters according to scientific discipline and experiment type is critical to the success of remote experiments.
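    The JPEG compression levels discussed above trade quality for bandwidth by quantizing block DCT coefficients more or less aggressively. A minimal numpy sketch of that core step follows; a flat quantization step `q` stands in for JPEG's quantization tables, which the real codec scales with the chosen compression level:

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis matrix (the transform used by JPEG on 8x8 blocks).
    k = np.arange(n)
    M = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    M[0] *= np.sqrt(1.0 / n)
    M[1:] *= np.sqrt(2.0 / n)
    return M

def quantized_block(block, q):
    # 2-D DCT of an 8x8 block, quantized with a flat step q.
    # A larger q (stronger compression) zeroes more coefficients,
    # which is what makes the entropy-coded block smaller.
    D = dct_matrix(8)
    coeffs = D @ block @ D.T
    return np.round(coeffs / q)
```

    With a smooth 8x8 ramp block, raising `q` strictly reduces the number of surviving nonzero coefficients while the DC term persists, which is the mechanism behind the 5:1 to 120:1 range studied in the paper.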

    Deep Depth From Focus

    Depth from focus (DFF) is one of the classical ill-posed inverse problems in computer vision. Most approaches recover the depth at each pixel from the focal setting that exhibits maximal sharpness. Yet it is not obvious how to reliably estimate the sharpness level, particularly in low-textured areas. In this paper, we propose 'Deep Depth From Focus (DDFF)' as the first end-to-end learning approach to this problem. One of the main challenges we face is the data hunger of deep neural networks. To obtain a significant amount of focal stacks with corresponding ground-truth depth, we propose to leverage a light-field camera with a co-calibrated RGB-D sensor. This allows us to digitally create focal stacks of varying sizes. Compared to existing benchmarks, our dataset is 25 times larger, enabling the use of machine learning for this inverse problem. We compare our results with state-of-the-art DFF methods and also analyze the effect of several key deep architectural components. These experiments show that our proposed method 'DDFFNet' achieves state-of-the-art performance in all scenes, reducing depth error by more than 75% compared to the classical DFF methods.
    Comment: accepted to Asian Conference on Computer Vision (ACCV) 201
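    The classical per-pixel "maximal sharpness over the focal stack" rule that DDFF learns to replace can be sketched with a modified-Laplacian focus measure; this is a common classical choice, not necessarily the exact baseline benchmarked in the paper:

```python
import numpy as np

def sharpness(img):
    # Modified Laplacian: absolute second differences along x and y.
    lx = np.abs(2 * img - np.roll(img, 1, axis=1) - np.roll(img, -1, axis=1))
    ly = np.abs(2 * img - np.roll(img, 1, axis=0) - np.roll(img, -1, axis=0))
    return lx + ly

def depth_from_focus(stack):
    # stack: (S, H, W) focal stack.
    # Returns, per pixel, the index of the focal slice with maximal sharpness.
    scores = np.stack([sharpness(s) for s in stack])
    return np.argmax(scores, axis=0)
```

    The failure mode the abstract points out is visible here: in a texture-free region every slice scores near zero, so the argmax is essentially arbitrary, which is what motivates a learned approach.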

    Tailored displays to compensate for visual aberrations

    We introduce tailored displays that enhance visual acuity by decomposing virtual objects and placing the resulting anisotropic pieces into the subject's focal range. The goal is to free the viewer from needing wearable optical corrections when looking at displays. Our tailoring process uses aberration and scattering maps to account for refractive errors and cataracts. It splits an object's light field into multiple instances that are each in focus for a given eye sub-aperture. Their integration onto the retina leads to a quality improvement of perceived images when the display is observed with the naked eye. The use of multiple depths to render each point of focus on the retina creates multi-focus, multi-depth displays. User evaluations and validation with modified camera optics are performed. We propose tailored displays for daily tasks where wearing eyeglasses is infeasible or inconvenient (e.g., on head-mounted displays, on e-readers, and in games); when a multi-focus function is required but unavailable (e.g., driving for farsighted individuals, or checking a portable device during physical activities); or for correcting the visual distortions produced by high-order aberrations that eyeglasses are not able to correct.
    Funding: Conselho Nacional de Pesquisas (Brazil) (CNPq-Brazil fellowships 142563/2008-0, 308936/2010-8, and 480485/2010-0); National Science Foundation (U.S.) (NSF CNS 0913875); Alfred P. Sloan Foundation (fellowship); United States Defense Advanced Research Projects Agency (DARPA Young Faculty Award); Massachusetts Institute of Technology, Media Laboratory (Consortium Members)

    High Quality 3D Shape Reconstruction via Digital Refocusing and Pupil Apodization in Multi-wavelength Holographic Interferometry.

    Multi-wavelength holographic interferometry (MWHI) has good potential to evolve into a high-quality 3D shape reconstruction technique. Several challenges remain, including 1) the depth-of-field limitation, which leads to axial inaccuracy for out-of-focus objects; and 2) smearing from shiny, smooth objects onto their dark, dull neighbors, which generates fake measurements within the dark areas. This research is motivated by the goal of developing an advanced optical metrology system that provides accurate 3D profiles for a target object or objects whose axial dimension exceeds the depth of field, and for objects with dramatically different surface conditions. The idea of employing digital refocusing in MWHI is proposed as a solution to the depth-of-field limitation. On the one hand, the traditional single-wavelength refocusing formula is revised to reduce sensitivity to wavelength error; investigation of a real example demonstrates promising accuracy and repeatability of the reconstructed 3D profiles. On the other hand, a phase-contrast-based focus detection criterion is developed specifically for MWHI, which overcomes the problem of phase unwrapping. The combination of these two innovations yields a systematic strategy for acquiring high-quality 3D profiles. Following the first, phase-contrast-based focus detection step, interferometric distance measurement by MWHI is implemented as a second step to conduct relative focus detection with high accuracy. This strategy yields ±100 mm 3D profiles with micron-level axial accuracy, which is not available from traditional extended focus image (EFI) solutions. Pupil apodization is implemented to address the second challenge, smearing. The process of reflective rough-surface inspection is mathematically modeled, which explains the origin of the stray light and the necessity of replacing the hard-edged pupil with one of gradually attenuating transmission (apodization). Metrics to optimize the pupil type and parameters are chosen specifically for MWHI. A Gaussian apodized pupil is installed and tested, and a reduction of smearing in the measurement results is experimentally demonstrated.
    Ph.D. dissertation, Mechanical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/91461/1/xulium_1.pd
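    Digital refocusing of the kind described here is commonly implemented with the angular spectrum method: the recorded complex field is propagated to a new focal plane by a frequency-domain transfer function. A simplified single-wavelength sketch follows (the thesis's revised, wavelength-error-tolerant formula is not reproduced here):

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    # field: (N, N) complex amplitude; dx: pixel pitch [m]; z: distance [m].
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)           # spatial frequencies
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    # Clamp negative arguments to zero; this sketch leaves evanescent
    # components unattenuated rather than modeling their decay.
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z)                # propagation transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

    Because the transfer function is a pure phase factor for propagating components, refocusing preserves the field's total power, and z = 0 returns the input field unchanged, both useful sanity checks when building a focus-search loop around it.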

    Improved 3D MR Image Acquisition and Processing in Congenital Heart Disease

    Congenital heart disease (CHD) is the most common type of birth defect, affecting about 1% of the population. MRI is an essential tool in the assessment of CHD, including diagnosis, intervention planning and follow-up. Three-dimensional MRI can provide particularly rich visualization and information. However, it is often complicated by long scan times, cardiorespiratory motion, injection of contrast agents, and complex and time-consuming postprocessing. This thesis comprises four pieces of work that attempt to respond to some of these challenges. The first piece of work aims to enable fast acquisition of 3D time-resolved cardiac imaging during free breathing. Rapid imaging was achieved using an efficient spiral sequence and a sparse parallel imaging reconstruction. The feasibility of this approach was demonstrated on a population of 10 patients with CHD, and areas of improvement were identified. The second piece of work is an integrated software tool designed to simplify and accelerate the development of machine learning (ML) applications in MRI research. It also exploits the strengths of recently developed ML libraries for efficient MR image reconstruction and processing. The third piece of work aims to reduce contrast dose in contrast-enhanced MR angiography (MRA). This would reduce risks and costs associated with contrast agents. A deep learning-based contrast enhancement technique was developed and shown to improve image quality in real low-dose MRA in a population of 40 children and adults with CHD. The fourth and final piece of work aims to simplify the creation of computational models for hemodynamic assessment of the great arteries. A deep learning technique for 3D segmentation of the aorta and the pulmonary arteries was developed and shown to enable accurate calculation of clinically relevant biomarkers in a population of 10 patients with CHD.
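    Accuracy of 3D segmentations like those in the fourth piece of work is conventionally reported with the Dice similarity coefficient; a minimal sketch follows (the thesis's exact evaluation protocol is not given in this abstract):

```python
import numpy as np

def dice(pred, truth):
    # Dice similarity coefficient between two binary masks:
    # 2|A ∩ B| / (|A| + |B|), in [0, 1], 1.0 for a perfect match.
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0
```

    The same expression applies unchanged to 3D volumes, since it only counts overlapping and total foreground voxels.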

    On Martian Surface Exploration: Development of Automated 3D Reconstruction and Super-Resolution Restoration Techniques for Mars Orbital Images

    Very high spatial resolution imaging and topographic (3D) data play an important role in modern Mars science research and engineering applications. This work describes a set of image processing and machine learning methods to produce the “best possible” high-resolution and high-quality 3D and imaging products from existing Mars orbital imaging datasets. The research work is described in nine chapters of which seven are based on separate published journal papers. These include a) a hybrid photogrammetric processing chain that combines the advantages of different stereo matching algorithms to compute stereo disparity with optimal completeness, fine-scale details, and minimised matching artefacts; b) image and 3D co-registration methods that correct a target image and/or 3D data to a reference image and/or 3D data to achieve robust cross-instrument multi-resolution 3D and image co-alignment; c) a deep learning network and processing chain to estimate pixel-scale surface topography from single-view imagery that outperforms traditional photogrammetric methods in terms of product quality and processing speed; d) a deep learning-based single-image super-resolution restoration (SRR) method to enhance the quality and effective resolution of Mars orbital imagery; e) a subpixel-scale 3D processing system using a combination of photogrammetric 3D reconstruction, SRR, and photoclinometric 3D refinement; and f) an optimised subpixel-scale 3D processing system using coupled deep learning based single-view SRR and deep learning based 3D estimation to derive the best possible (in terms of visual quality, effective resolution, and accuracy) 3D products out of present epoch Mars orbital images. 
The resultant 3D imaging products from the above-listed new developments are qualitatively and quantitatively evaluated, either in comparison with products from the official NASA Planetary Data System (PDS) and/or ESA Planetary Science Archive (PSA) releases, or in comparison with products generated by different open-source systems. Examples of the scientific application of these novel 3D imaging products are discussed.
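    A common building block for the image co-registration step (b) is phase correlation, which recovers a translational offset from the normalized cross-power spectrum of two images; a minimal integer-shift sketch follows (the thesis's actual cross-instrument, multi-resolution method is considerably more involved):

```python
import numpy as np

def estimate_shift(ref, tgt):
    """Estimate the integer (row, col) shift s such that tgt == np.roll(ref, s)."""
    # Normalized cross-power spectrum: its inverse FFT peaks at the shift.
    cps = np.fft.fft2(tgt) * np.conj(np.fft.fft2(ref))
    cps /= np.abs(cps) + 1e-12
    corr = np.abs(np.fft.ifft2(cps))
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape), dtype=int)
    # Wrap peaks in the upper half of each axis to negative shifts.
    for i, n in enumerate(corr.shape):
        if peak[i] > n // 2:
            peak[i] -= n
    return tuple(peak)
```

    Subpixel accuracy, as needed for subpixel-scale 3D products, is typically obtained by interpolating around the correlation peak rather than taking its integer argmax.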