
    Quantum-inspired computational imaging

    Computational imaging combines measurement and computational methods with the aim of forming images even when the measurement conditions are weak, few in number, or highly indirect. The recent surge in quantum-inspired imaging sensors, together with a new wave of algorithms allowing on-chip, scalable and robust data processing, has spurred activity with notable results in the domain of low-light-flux imaging and sensing. We provide an overview of the major challenges encountered in low-illumination (e.g., ultrafast) imaging and how these problems have recently been addressed for imaging applications in extreme conditions. These methods provide examples of the future imaging solutions to be developed, for which the best results are expected to arise from an efficient codesign of the sensors and data analysis tools.

    Y.A. acknowledges support from the UK Royal Academy of Engineering under the Research Fellowship Scheme (RF201617/16/31). S.McL. acknowledges financial support from the UK Engineering and Physical Sciences Research Council (grant EP/J015180/1). V.G. acknowledges support from the U.S. Defense Advanced Research Projects Agency (DARPA) InPho program through U.S. Army Research Office award W911NF-10-1-0404, the U.S. DARPA REVEAL program through contract HR0011-16-C-0030, and the U.S. National Science Foundation through grants 1161413 and 1422034. A.H. acknowledges support from U.S. Army Research Office award W911NF-15-1-0479, U.S. Department of the Air Force grant FA8650-15-D-1845, and U.S. Department of Energy National Nuclear Security Administration grant DE-NA0002534. D.F. acknowledges financial support from the UK Engineering and Physical Sciences Research Council (grants EP/M006514/1 and EP/M01326X/1).

    A Simple Algorithm for 3D Face Reconstruction Model

    [[abstract]]We propose a simpler and faster method to construct a 3D face model. First, we extract ASM face features and obtain a depth face image with Kinect. Then, we render the depth face image with OpenGL in a three-dimensional coordinate system. Owing to the depth information dropped at the face edge, we use the CANDIDE-3 face model as a basic face skeleton to remedy the distorted profile. Here, we match the CANDIDE-3 face model and the frontal face image using the coordinates of the ASM feature points. After we obtain an incomplete 3D face, we need to fill data into the empty part of the side face. According to the skeleton coordinates of the face model, we extend the edge information for each row of the profile face with the edge pixels of the side face until the end edge of the face model. Because the repaired 3D face still retains much of the texture information of the original frontal face image, we can easily achieve a satisfying result at any rotation angle from 0 to ±90 degrees.[[sponsorship]]Asia-Pacific Education & Research Association[[conferencetype]]International[[conferencedate]]2014-07-11 to 2014-07-13[[booktype]]Print[[iscallforpapers]]Y[[conferencelocation]]Phuket, Thailand
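The row-wise edge extension described in the abstract can be sketched as a simple hole-filling pass: for each row of the depth image, replicate the outermost valid depth pixels toward the borders. This is a toy illustration under our own assumptions (a hypothetical `extend_profile_rows` helper operating on a NumPy depth map with 0 marking missing depth), not the authors' actual implementation.

```python
import numpy as np

def extend_profile_rows(depth, invalid=0):
    """Fill missing depth values in each row by replicating the last
    valid edge pixel outward toward the image borders. A toy version of
    the row-wise edge extension described in the abstract."""
    out = depth.copy()
    for r in range(out.shape[0]):
        row = out[r]
        valid = np.flatnonzero(row != invalid)
        if valid.size == 0:
            continue  # nothing to extend in an empty row
        # replicate the outermost valid pixels toward both borders
        row[:valid[0]] = row[valid[0]]
        row[valid[-1] + 1:] = row[valid[-1]]
    return out
```

A real pipeline would stop the extension at the CANDIDE-3 skeleton boundary rather than the image border, as the abstract describes.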

    Feasibility of automated 3-dimensional magnetic resonance imaging pancreas segmentation.

    Purpose: With the advent of MR-guided radiotherapy, internal organ motion can be imaged simultaneously during treatment. In this study, we evaluate the feasibility of pancreas MRI segmentation using state-of-the-art segmentation methods. Methods and materials: T2-weighted HASTE and T1-weighted VIBE images were acquired on 3 patients and 2 healthy volunteers for a total of 12 imaging volumes. A novel dictionary learning (DL) method was used to segment the pancreas and compared to mean-shift merging (MSM), distance-regularized level set (DRLS) and graph cuts (GC); the segmentation results were compared to manual contours using Dice's index (DI), Hausdorff distance and shift of the center of the organ (SHIFT). Results: All VIBE images were successfully segmented by at least one of the auto-segmentation methods, with DI > 0.83 and SHIFT ≤ 2 mm using the best automated segmentation method. The automated segmentation error of HASTE images was significantly greater. DL is statistically superior to the other methods in Dice's overlap index. For the Hausdorff distance and SHIFT measurements, DRLS and DL performed slightly better than the GC method, and substantially better than MSM. DL required the least human supervision and was faster to compute. Conclusion: Our study demonstrated the potential feasibility of automated segmentation of the pancreas on MRI images with minimal human supervision at the beginning of image acquisition. The achieved accuracy is promising for organ localization.
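The two overlap metrics used above have standard definitions that are easy to compute directly. A minimal NumPy sketch (our own illustration, not the study's evaluation code): Dice's index is 2|A∩B| / (|A|+|B|) over binary masks, and the symmetric Hausdorff distance is the larger of the two directed maximum nearest-neighbor distances between contour point sets.

```python
import numpy as np

def dice_index(a, b):
    """Dice's index between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff_distance(pts_a, pts_b):
    """Symmetric Hausdorff distance between point sets of shape (N, D)
    and (M, D), via the full pairwise distance matrix."""
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

The brute-force pairwise matrix is fine for contour-sized point sets; for large surfaces a KD-tree nearest-neighbor query would be the usual choice.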

    Scaled, patient-specific 3D vertebral model reconstruction based on 2D lateral fluoroscopy

    Get PDF
    Background: Accurate three-dimensional (3D) models of lumbar vertebrae are required for image-based 3D kinematics analysis. MRI or CT datasets are frequently used to derive 3D models but have the disadvantages of being expensive, time-consuming or involving ionizing radiation (e.g., CT acquisition). An alternative method using 2D lateral fluoroscopy was developed. Materials and methods: A technique was developed to reconstruct a scaled 3D lumbar vertebral model from a single two-dimensional (2D) lateral fluoroscopic image and a statistical shape model of the lumbar vertebrae. Four cadaveric lumbar spine segments and two statistical shape models were used for testing. Reconstruction accuracy was determined by comparing the surface models reconstructed from the single lateral fluoroscopic images to ground-truth data from 3D CT segmentation. For each case, two different surface-based registration techniques were used to recover the unknown scale factor and the rigid transformation between the reconstructed surface model and the ground-truth model before the differences between the two discrete surface models were computed. Results: Successful reconstruction of scaled surface models was achieved for all test lumbar vertebrae based on single lateral fluoroscopic images. The mean reconstruction error was between 0.7 and 1.6 mm. Conclusions: A scaled, patient-specific surface model of a lumbar vertebra can be synthesized from a single lateral fluoroscopic image using the present approach. This new method for patient-specific 3D modeling has potential applications in spine kinematics analysis, surgical planning, and navigation.
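Recovering an unknown scale factor plus rigid transformation between two corresponding point sets, as done in the evaluation above, has a closed-form least-squares solution (the Umeyama/Procrustes method). The sketch below is a generic illustration of that standard technique, assuming point correspondences are already established; it is not the paper's registration code, which works on surfaces without given correspondences.

```python
import numpy as np

def similarity_align(src, dst):
    """Closed-form similarity alignment (Umeyama): find scale s,
    rotation R and translation t minimizing ||dst - (s R src + t)||^2
    over corresponding (N, 3) point sets."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    sc, dc = src - mu_s, dst - mu_d
    # SVD of the cross-covariance gives the optimal rotation
    U, S, Vt = np.linalg.svd(dc.T @ sc)
    D = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:
        D[-1, -1] = -1.0  # avoid reflections
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / (sc ** 2).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t
```

With correspondences unknown, this step is typically embedded in an ICP-style loop that alternates nearest-neighbor matching with the closed-form solve.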

    TIFu: Tri-directional Implicit Function for High-Fidelity 3D Character Reconstruction

    Recent advances in implicit-function-based approaches have shown promising results in 3D human reconstruction from a single RGB image. However, these methods do not extend well to more general cases, often generating dragged or disconnected body parts, particularly for animated characters. We argue that these limitations stem from the use of the existing point-level 3D shape representation, which lacks holistic 3D context understanding. Voxel-based reconstruction methods are better suited to capturing the entire 3D space at once; however, they are impractical for high-resolution reconstruction due to their excessive memory usage. To address these challenges, we introduce the Tri-directional Implicit Function (TIFu), a vector-level representation that increases global 3D consistency while significantly reducing memory usage compared to voxel representations. We also introduce a new algorithm for 3D reconstruction at arbitrary resolutions that aggregates vectors along three orthogonal axes, resolving inherent problems with regressing vectors of fixed dimension. Our approach achieves state-of-the-art performance on both our self-curated character dataset and the benchmark 3D human dataset. We provide both quantitative and qualitative analyses to support our findings.
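The core idea of aggregating axis-aligned vector predictions can be illustrated with a toy NumPy sketch: each direction predicts, for every pixel of one orthogonal plane, a vector of values along the remaining axis, and the three resulting volumes are brought into a common axis order and fused. The averaging fusion and the `aggregate_tri_directional` helper below are our own simplified stand-ins; TIFu's learned network and its actual fusion are not reproduced here.

```python
import numpy as np

def aggregate_tri_directional(pz, py, px):
    """Fuse three axis-aligned vector predictions into one (X, Y, Z)
    occupancy volume by simple averaging (a toy stand-in for TIFu's
    tri-directional aggregation).

    pz: (X, Y, Z) - a Z-vector predicted for every (x, y) location
    py: (X, Z, Y) - a Y-vector predicted for every (x, z) location
    px: (Y, Z, X) - an X-vector predicted for every (y, z) location
    """
    vz = pz                           # already in (X, Y, Z) order
    vy = np.transpose(py, (0, 2, 1))  # (X, Z, Y) -> (X, Y, Z)
    vx = np.transpose(px, (2, 0, 1))  # (Y, Z, X) -> (X, Y, Z)
    return (vz + vy + vx) / 3.0
```

The point of the vector-level layout is that each direction's prediction costs memory proportional to one 2D plane times one axis length, rather than a full dense voxel grid per network output.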