2,296 research outputs found

    Review on the current trends in tongue diagnosis systems

    Get PDF
    Abstract: Tongue diagnosis is an essential process for noninvasively assessing the condition of a patient's internal organs in traditional medicine. To obtain quantitative and objective diagnostic results, image acquisition and analysis devices called tongue diagnosis systems (TDSs) are required. These systems consist of hardware, including cameras, light sources, and a ColorChecker, and software for color correction, segmentation of the tongue region, and tongue classification. To improve the performance of TDSs, various types of TDSs have been developed. Hyperspectral imaging TDSs have been suggested to acquire more information than a two-dimensional (2D) image taken with visible light, as they allow collection of data from multiple spectral bands. Three-dimensional (3D) imaging TDSs have been suggested to provide 3D geometry. In the near future, mobile devices such as smartphones will offer applications for assessing health condition using tongue images. The various TDS technologies have their own advantages and specificities according to the application and diagnostic environment, but this variation may cause inconsistent diagnoses in practical clinical applications. In this manuscript, we review the current trends in TDSs with a view to the standardization of such systems. In conclusion, the standardization of TDSs can supply the general public and oriental medical doctors with convenient, prompt, and accurate diagnostic information for assessing health condition.
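    The color-correction step mentioned in this abstract is commonly implemented as a least-squares fit of a 3x3 matrix mapping the camera's measured ColorChecker patch values to their known reference values. The sketch below is a hypothetical illustration of that idea, not the implementation reviewed in the paper; the patch values and distortion matrix are assumed for demonstration.

```python
import numpy as np

# Hypothetical sketch: fit a 3x3 color-correction matrix M such that
# measured @ M approximates the known reference RGB values of the
# ColorChecker patches, in the least-squares sense.
def fit_color_correction(measured, reference):
    """measured, reference: (N, 3) arrays of RGB rows for N patches."""
    M, *_ = np.linalg.lstsq(measured, reference, rcond=None)
    return M  # (3, 3) correction matrix

# Toy example: the camera applies a known linear distortion, so the fit
# should recover its inverse.
rng = np.random.default_rng(0)
reference = rng.uniform(0.0, 1.0, size=(24, 3))   # 24-patch chart
distortion = np.array([[0.9, 0.05, 0.0],
                       [0.1, 0.8,  0.1],
                       [0.0, 0.05, 0.95]])
measured = reference @ distortion
M = fit_color_correction(measured, reference)
corrected = measured @ M
```

    In practice the fit is done on nonlinear (e.g., gamma-corrected) data or with higher-order terms, but the linear version above captures the core of the calibration step.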

    A Bayesian Hyperprior Approach for Joint Image Denoising and Interpolation, with an Application to HDR Imaging

    Full text link
    Recently, impressive denoising results have been achieved by Bayesian approaches that assume Gaussian models for the image patches. This improvement in performance can be attributed to the use of per-patch models. Unfortunately, such an approach is particularly unstable for most inverse problems beyond denoising. In this work, we propose the use of a hyperprior to model image patches in order to stabilize the estimation procedure. The proposed restoration scheme has two main advantages: first, it is adapted to diagonal degradation matrices, and in particular to missing-data problems (e.g., inpainting of missing pixels or zooming); second, it can deal with signal-dependent noise models, which are particularly suited to digital cameras. As such, the scheme is especially adapted to computational photography. To illustrate this point, we provide an application to high dynamic range imaging from a single image taken with a modified sensor, which shows the effectiveness of the proposed scheme.
    Comment: Some figures are reduced to comply with arXiv's size constraints. Full-size images are available as HAL technical report hal-01107519v5. IEEE Transactions on Computational Imaging, 201
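    As a rough illustration of the setting (the fixed-prior baseline, not the authors' hyperprior scheme itself), the MAP estimate of a patch under a Gaussian prior and a diagonal degradation operator has a closed form. The toy patch size, prior parameters, and values below are assumptions for demonstration.

```python
import numpy as np

# Sketch: MAP restoration of a patch x under a Gaussian prior N(mu, Sigma),
# observed as y = A x + noise with noise ~ N(0, sigma2 * I). A diagonal A
# with zero entries models missing pixels (inpainting/zooming).
def map_restore(y, A_diag, mu, Sigma, sigma2):
    A = np.diag(A_diag)
    S = A @ Sigma @ A.T + sigma2 * np.eye(len(y))
    return mu + Sigma @ A.T @ np.linalg.solve(S, y - A @ mu)

# Toy 4-pixel patch with an identity prior covariance (assumed values).
mu = np.array([0.5, 0.5, 0.5, 0.5])
Sigma = np.eye(4)
x_true = np.array([0.2, 0.8, 0.6, 0.4])
A_diag = np.array([1.0, 1.0, 0.0, 1.0])   # third pixel is missing
y = A_diag * x_true                        # noiseless observation
x_hat = map_restore(y, A_diag, mu, Sigma, sigma2=1e-6)
```

    With an uncorrelated prior, the missing pixel falls back to the prior mean; a correlated Sigma would instead fill it in from its neighbors, which is what makes the choice of per-patch model, and hence its stable estimation, so important.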

    Phase-Retrieved Tomography enables imaging of a Tumor Spheroid in Mesoscopy Regime

    Get PDF
    Optical tomographic imaging of biological specimens relies on the combination of accurate experimental measurements and advanced computational techniques. In general, due to high scattering and absorption in most tissues, multi-view geometries are required to reduce the diffuse halo and blurring in the reconstructions. Scanning processes are used to acquire the data, but they inevitably introduce perturbations, negating the assumption of aligned measurements. Here we propose an innovative, registration-free imaging protocol implemented to image a human tumor spheroid in the mesoscopic regime. The technique relies on the calculation of the autocorrelation sinogram and the object autocorrelation, finalizing the tomographic reconstruction via a three-dimensional Gerchberg-Saxton algorithm that retrieves the missing phase information. Our method is conceptually simple and requires only single image acquisitions, regardless of the specimen position in the camera plane. We demonstrate increased resolution at depth, not achievable with current approaches, rendering the data-alignment process obsolete.
    Comment: 21 pages, 5 figures
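    The phase-retrieval engine named here can be illustrated with a one-dimensional Gerchberg-Saxton-style iteration: alternating projections between a measured Fourier magnitude and object-domain constraints. The signal, support, and nonnegativity constraint below are illustrative assumptions, not the paper's 3D implementation.

```python
import numpy as np

# Sketch of a Gerchberg-Saxton-style phase-retrieval loop in 1D: alternately
# enforce the measured Fourier magnitude and the object-domain constraints
# (support and nonnegativity).
def gerchberg_saxton(mag, support, n_iter=100, x0=None, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(len(mag)) * support if x0 is None else x0.copy()
    for _ in range(n_iter):
        X = np.fft.fft(x)
        X = mag * np.exp(1j * np.angle(X))  # impose measured magnitude
        x = np.fft.ifft(X).real             # back to object domain
        x = np.maximum(x * support, 0.0)    # impose support and nonnegativity
    return x

# Sanity check: a signal consistent with all constraints is a fixed point
# of the iteration.
support = np.zeros(32)
support[5:10] = 1.0
x_true = np.zeros(32)
x_true[5:10] = [1.0, 2.0, 3.0, 2.0, 1.0]
mag = np.abs(np.fft.fft(x_true))
x_fixed = gerchberg_saxton(mag, support, x0=x_true)
```

    Convergence from a random start is not guaranteed in general (phase retrieval is nonconvex), which is why constraints such as the autocorrelation-derived support used in the paper matter so much in practice.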

    Quantum-inspired computational imaging

    Get PDF
    Computational imaging combines measurement and computational methods with the aim of forming images even when the measurement conditions are weak, few in number, or highly indirect. The recent surge in quantum-inspired imaging sensors, together with a new wave of algorithms allowing on-chip, scalable, and robust data processing, has spurred activity with notable results in the domain of low-light-flux imaging and sensing. We provide an overview of the major challenges encountered in low-illumination (e.g., ultrafast) imaging and how these problems have recently been addressed for imaging applications in extreme conditions. These methods provide examples of the future imaging solutions to be developed, for which the best results are expected to arise from an efficient codesign of the sensors and data analysis tools. Y.A. acknowledges support from the UK Royal Academy of Engineering under the Research Fellowship Scheme (RF201617/16/31). S.McL. acknowledges financial support from the UK Engineering and Physical Sciences Research Council (grant EP/J015180/1). V.G. acknowledges support from the U.S. Defense Advanced Research Projects Agency (DARPA) InPho program through U.S. Army Research Office award W911NF-10-1-0404, the U.S. DARPA REVEAL program through contract HR0011-16-C-0030, and the U.S. National Science Foundation through grants 1161413 and 1422034. A.H. acknowledges support from U.S. Army Research Office award W911NF-15-1-0479, U.S. Department of the Air Force grant FA8650-15-D-1845, and U.S. Department of Energy National Nuclear Security Administration grant DE-NA0002534. D.F. acknowledges financial support from the UK Engineering and Physical Sciences Research Council (grants EP/M006514/1 and EP/M01326X/1).
    Accepted manuscript
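    As one concrete example of the low-light-flux regime discussed above: a binary single-photon detector (e.g., a SPAD) that reports only click/no-click per frame clicks with probability 1 - exp(-lam) under Poisson arrivals, so the maximum-likelihood flux estimate from k clicks in n frames is -ln(1 - k/n). The simulation below is an illustrative assumption, not an example drawn from the review.

```python
import numpy as np

# Sketch: estimating a mean photon flux lam from a binary single-photon
# detector. Each frame registers a "click" with probability 1 - exp(-lam)
# (Poisson arrivals, at most one recorded click per frame). The maximum-
# likelihood estimate of lam from the click fraction is -ln(1 - k/n).
rng = np.random.default_rng(42)
lam_true = 0.5                        # mean photons per frame (assumed)
n_frames = 100_000
clicks = rng.random(n_frames) < 1.0 - np.exp(-lam_true)
lam_hat = -np.log(1.0 - clicks.mean())
```

    Note that the naive estimate `clicks.mean()` would systematically underestimate the flux at high count rates; the log correction undoes the detector's saturation.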

    Convolutional Deblurring for Natural Imaging

    Full text link
    In this paper, we propose a novel design of image deblurring in the form of one-shot convolution filtering that can be directly convolved with naturally blurred images for restoration. Optical blurring is a common problem in many imaging applications that suffer from optical imperfections. Despite numerous deconvolution methods that blindly estimate blurring in either inclusive or exclusive forms, they are practically challenging due to high computational cost and low image-reconstruction quality. Both high accuracy and high speed are prerequisites for high-throughput imaging platforms in digital archiving, where deblurring is required after image acquisition and before images are stored, previewed, or processed for high-level interpretation. Therefore, on-the-fly correction of such images is important to avoid possible time delays, mitigate computational expenses, and increase image perception quality. We bridge this gap by synthesizing a deconvolution kernel as a linear combination of finite impulse response (FIR) even-derivative filters that can be directly convolved with blurry input images to boost the frequency fall-off of the point spread function (PSF) associated with the optical blur. We employ a Gaussian low-pass filter to decouple the image-denoising problem from image edge deblurring. Furthermore, we propose a blind approach to estimate the PSF statistics for the Gaussian and Laplacian models that are common in many imaging pipelines. Thorough experiments are designed to test and validate the efficiency of the proposed method using 2054 naturally blurred images across six imaging applications and seven state-of-the-art deconvolution methods.
    Comment: 15 pages, for publication in IEEE Transactions on Image Processing
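    The even-derivative idea can be sketched in 1D: the simplest such kernel is delta - alpha * (second derivative), with FIR taps [-alpha, 1 + 2*alpha, -alpha]. For a Gaussian blur of width sigma, choosing alpha = sigma**2 / 2 cancels the blur's frequency fall-off to first order. This toy is an assumption-laden illustration, not the paper's full kernel synthesis.

```python
import numpy as np

# Toy 1D version of deblurring by an even-derivative FIR kernel:
# sharpen = delta - alpha * D2, with D2 the discrete second derivative,
# giving taps [-alpha, 1 + 2*alpha, -alpha].
def circ_conv(x, taps, center):
    """Circular convolution of x with a short FIR filter."""
    y = np.zeros_like(x, dtype=float)
    for i, k in enumerate(taps):
        y += k * np.roll(x, i - center)
    return y

sigma = 1.0
m = np.arange(-3, 4)
gauss = np.exp(-m**2 / (2 * sigma**2))
gauss /= gauss.sum()                      # discrete Gaussian blur PSF

alpha = sigma**2 / 2                      # first-order inverse of the blur
sharpen = np.array([-alpha, 1 + 2 * alpha, -alpha])

n = np.arange(64)
x = np.sin(2 * np.pi * 4 * n / 64)        # test signal, single frequency
blurred = circ_conv(x, gauss, center=3)
deblurred = circ_conv(blurred, sharpen, center=1)
```

    The one-shot character the abstract emphasizes is visible here: deblurring is a single short convolution, with no iterative deconvolution loop.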

    Synergies between Exoplanet Surveys and Variable Star Research

    Get PDF
    With the discovery of the first transiting extrasolar planetary system in 1999, a great number of projects started to hunt for similar systems. Because the incidence rate of such systems was unknown and the length of the shallow transit events is only a few percent of the orbital period, the goal was to continuously monitor as many stars as possible for at least a few months. Small-aperture, large-field-of-view automated telescope systems have been installed, with a parallel development of new data reduction and analysis methods, leading to better than 1% per-data-point precision for thousands of stars. With the successful launch of the photometric satellites CoRoT and Kepler, the precision increased further by one to two orders of magnitude. Millions of stars have been analyzed and searched for transits. In the history of variable star astronomy this is the biggest undertaking so far, resulting in photometric time series inventories immensely valuable for the whole field. In this review we briefly discuss the methods of data analysis that were inspired by the main science driver of these surveys and highlight some of the most interesting variable star results that impact the field of variable star astronomy.
    Comment: This is a review presented at "Wide-field variability surveys: a 21st-century perspective" - 22nd Los Alamos Stellar Pulsation Conference Series Meeting, held in San Pedro de Atacama, Chile, Nov. 28-Dec. 2, 2016. To appear in the Web of Conferences Journal: 13 pages, 8 figures
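    The transit-search step these surveys rely on can be illustrated with a minimal phase-folding box search, a simplified cousin of the Box Least Squares (BLS) algorithm widely used in the field. The light curve, period grid, and transit parameters below are synthetic assumptions for illustration only.

```python
import numpy as np

# Minimal phase-folding box search for a periodic transit: for each trial
# period, fold the light curve and measure the depth of an assumed in-transit
# window (out-of-transit mean minus in-transit mean); the best period
# maximizes that depth. A toy stand-in for the full BLS algorithm.
def box_search(t, flux, periods, q=0.05):
    """q: assumed transit duration as a fraction of the period, starting
    at phase 0."""
    best_period, best_depth = periods[0], -np.inf
    for p in periods:
        phase = (t % p) / p
        in_tr = phase < q
        depth = flux[~in_tr].mean() - flux[in_tr].mean()
        if depth > best_depth:
            best_period, best_depth = p, depth
    return best_period, best_depth

# Synthetic noiseless light curve: period 2.0 d, depth 1%, duration 0.1 d.
t = np.arange(0.0, 20.0, 0.01)
flux = np.ones_like(t)
flux[((t % 2.0) / 2.0) < 0.05] -= 0.01
periods = np.arange(1.5, 2.5, 0.01)
p_best, depth_best = box_search(t, flux, periods)
```

    Real search codes additionally scan the transit phase and duration, handle noise and gaps, and weight points by their uncertainties, but the fold-and-average core is the same.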