4,272 research outputs found

    End-to-end Projector Photometric Compensation

    Full text link
    Projector photometric compensation aims to modify a projector input image so that it compensates for the disturbance introduced by the appearance of the projection surface. In this paper, for the first time, we formulate the compensation problem as an end-to-end learning problem and propose a convolutional neural network, named CompenNet, to implicitly learn the complex compensation function. CompenNet consists of a UNet-like backbone network and an autoencoder subnet. This architecture encourages rich multi-level interactions between the camera-captured projection surface image and the input image, and thus captures both photometric and environmental information about the projection surface. In addition, visual details and interaction information are carried to deeper layers along the multi-level skip convolution layers. The architecture is of particular importance for the projector compensation task, for which only a small training dataset is allowed in practice. Another contribution is a novel evaluation benchmark that is independent of the system setup and thus quantitatively verifiable. To the best of our knowledge, no such benchmark was previously available, because conventional evaluation requires the hardware system to actually project the final results. Our key idea, motivated by our end-to-end problem formulation, is to use a reasonable surrogate that avoids the projection process and is therefore setup-independent. Our method is evaluated carefully on the benchmark, and the results show that our end-to-end learning solution outperforms state-of-the-art methods both qualitatively and quantitatively by a significant margin.
    Comment: To appear in the 2019 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Source code and dataset are available at https://github.com/BingyaoHuang/compenne
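
    A minimal sketch, in PyTorch, of the kind of architecture the abstract describes: two convolutional branches encode the camera-captured surface image and the desired input image, their feature maps are fused at multiple levels, and a decoder with a skip connection produces the compensated projector input. Layer counts, channel widths, and the fusion-by-addition choice are illustrative assumptions, not the authors' released CompenNet.

```python
import torch
import torch.nn as nn


def conv(c_in, c_out, stride=1):
    # 3x3 convolution + ReLU block used by both encoder branches
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, stride, 1), nn.ReLU(inplace=True))


class CompenNetSketch(nn.Module):
    def __init__(self):
        super().__init__()
        # branch encoding the camera-captured projection surface
        self.surf1, self.surf2 = conv(3, 32, 2), conv(32, 64, 2)
        # branch encoding the desired input image (matching resolutions so features fuse)
        self.img1, self.img2 = conv(3, 32, 2), conv(32, 64, 2)
        # decoder with a skip connection back to the shallow fused features
        self.up1 = nn.Sequential(nn.ConvTranspose2d(64, 32, 2, 2), nn.ReLU(inplace=True))
        self.up2 = nn.Sequential(nn.ConvTranspose2d(32, 3, 2, 2), nn.Sigmoid())

    def forward(self, x, surf):
        s1 = self.surf1(surf)
        s2 = self.surf2(s1)
        x1 = self.img1(x) + s1    # fuse surface features at the shallow level
        x2 = self.img2(x1) + s2   # and again at the deeper level
        y = self.up1(x2) + x1     # multi-level skip connection
        return self.up2(y)        # compensated projector input in [0, 1]


if __name__ == "__main__":
    net = CompenNetSketch()
    x = torch.rand(1, 3, 256, 256)     # desired viewing image
    surf = torch.rand(1, 3, 256, 256)  # camera-captured surface image
    print(net(x, surf).shape)          # torch.Size([1, 3, 256, 256])
```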

    3D Shape Modeling Using High Level Descriptors

    Get PDF

    Simultaneous fabrication of multiple tablets within seconds using tomographic volumetric 3D printing

    Get PDF
    3D printing is driving a shift in patient care away from a generalised model and towards personalised treatments. To complement fast-paced clinical environments, 3D printing technologies must provide sufficiently high throughputs for them to be feasibly implemented. Volumetric printing is an emerging 3D printing technology that affords such speeds, being capable of producing entire objects within seconds. In this study, for the first time, rotary volumetric printing was used to simultaneously produce two torus- or cylinder-shaped paracetamol-loaded Printlets (3D printed tablets). Six resin formulations comprising paracetamol as the model drug, poly(ethylene glycol) diacrylate (PEGDA) 575 or 700 as photoreactive monomers, water and PEG 300 as non-reactive diluents, and lithium phenyl-2,4,6-trimethylbenzoylphosphinate (LAP) as the photoinitiator were investigated. Two printlets were successfully printed in 12 to 32 s and exhibited sustained drug release profiles. These results support the use of rotary volumetric printing for efficient and effective simultaneous manufacturing of multiple personalised medicines. With the speed and precision it affords, rotary volumetric printing has the potential to become one of the most promising alternative manufacturing technologies in the pharmaceutical industry.

    Multi-Projector Content Preservation with Linear Filters

    Get PDF
    Using aligned, overlapping image projectors provides several advantages when compared to a single projector: increased brightness, additional redundancy, and increased pixel density within a region of the screen. Aligning content between projectors is achieved by applying space transformation operations to the desired output. These transformation operations often degrade the quality of the original image due to sampling and quantization. The transformation applied for a given projector is typically done in isolation from all other content-projector transformations. However, it is possible to warp the images with prior knowledge of each other such that they exploit the increase in effective pixel density, which increases the perceptual quality of the resulting stacked content. This paper presents a novel method of increasing the perceptual quality within multi-projector configurations. A machine learning approach is used to train a linear filtering based model that conditions the individual projected images on each other.
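
    A minimal toy sketch of the underlying idea (not the paper's learned linear-filter model): when two half-pixel-shifted projectors are solved for jointly rather than warped in isolation, their stacked output can reproduce a higher-resolution target more faithfully. The 1-D forward model, linear superposition, and plain least-squares solve are all simplifying assumptions.

```python
import numpy as np

N, n = 16, 8                  # target samples, pixels per projector
t = np.clip(np.sin(np.linspace(0, 3 * np.pi, N)) * 0.5 + 0.5, 0.0, 1.0)  # toy target

# Forward model A: each projector pixel spreads over two target samples,
# and projector 2 is offset by one target sample (half a projector pixel).
A = np.zeros((N, 2 * n))
for i in range(n):
    A[2 * i:2 * i + 2, i] = 1.0          # projector 1, pixel i
    A[2 * i + 1:2 * i + 3, n + i] = 1.0  # projector 2, pixel i (shifted)

# Naive: each projector independently shows a downsampled target,
# halved so the stacked brightness is comparable.
naive = np.concatenate([t[0::2], t[0::2]]) * 0.5

# Joint: solve for both projector images at once, conditioning them on each other.
joint, *_ = np.linalg.lstsq(A, t, rcond=None)
joint = np.clip(joint, 0.0, 1.0)

print("naive stacking error:", np.linalg.norm(A @ naive - t))
print("joint solve error:   ", np.linalg.norm(A @ joint - t))
```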

    Diagnostic accuracy and added value of dual-energy subtraction radiography compared to standard conventional radiography using computed tomography as standard of reference

    Full text link
    PURPOSE: To retrospectively evaluate the diagnostic performance of dual-energy subtraction radiography (DESR) for the interpretation of chest radiographs compared to conventional radiography (CR), using computed tomography (CT) as the standard of reference. MATERIAL AND METHODS: A total of 199 patients (75 female, median age 67) were included in this institutional review board (IRB)-approved clinical trial. All patients were scanned in posteroanterior and lateral projections with the dual-shot dual-energy technique. Chest CT was performed within ±72 hours. The system provides three types of images: a bone-weighted image and a soft-tissue-weighted image, herein termed DESR images, and a standard image, termed the CR image. Images were evaluated by two radiologists for the presence of inserted life support lines, pneumothorax, pleural effusion, infectious consolidation, interstitial lung changes, tumor, skeletal alterations, soft tissue alterations, aortic or tracheal calcification, and pleural thickening. Inter-observer agreement between readers and diagnostic performance were calculated. McNemar's test was used to test for significant differences. RESULTS: Mean inter-observer agreement across the investigated parameters was higher for DESR images than for CR images (κDESR = 0.935 vs. κCR = 0.858). DESR images provided significantly increased sensitivity compared to CR images for the detection of infectious consolidations (42% vs. 62%), tumor (46% vs. 57%), interstitial lung changes (69% vs. 87%), and aortic or tracheal calcification (25% vs. 73%) (p < 0.05). There were no significant differences in sensitivity for the detection of inserted life support lines, pneumothorax, pleural effusion, skeletal alterations, soft tissue alterations, or pleural thickening (p > 0.05). CONCLUSION: DESR significantly increases sensitivity without affecting specificity when evaluating chest radiographs, with emphasis on the detection of interstitial lung diseases.
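
    For readers who want to reproduce this kind of analysis, a minimal sketch with synthetic stand-in data (not the study's results): paired per-patient detections by CR and DESR are scored against a CT reference, sensitivities are compared with McNemar's test, and inter-observer agreement is estimated with Cohen's kappa. The detection rates below are arbitrary assumptions.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score
from statsmodels.stats.contingency_tables import mcnemar

rng = np.random.default_rng(0)
ct = rng.integers(0, 2, 199)                    # CT reference: finding present?
cr = np.where(rng.random(199) < 0.60, ct, 0)    # CR detects ~60% of positives (assumed)
desr = np.where(rng.random(199) < 0.80, ct, 0)  # DESR detects ~80% of positives (assumed)

pos = ct == 1
print("sensitivity CR:  ", round(float((cr[pos] == 1).mean()), 2))
print("sensitivity DESR:", round(float((desr[pos] == 1).mean()), 2))

# McNemar's test on the paired detections among CT-positive cases
a = int(((cr[pos] == 1) & (desr[pos] == 1)).sum())  # both detect
b = int(((cr[pos] == 1) & (desr[pos] == 0)).sum())  # CR only
c = int(((cr[pos] == 0) & (desr[pos] == 1)).sum())  # DESR only
d = int(((cr[pos] == 0) & (desr[pos] == 0)).sum())  # both miss
print(mcnemar([[a, b], [c, d]], exact=True))

# Inter-observer agreement (Cohen's kappa) for two simulated readers of DESR
reader2 = np.where(rng.random(199) < 0.95, desr, 1 - desr)
print("kappa:", round(cohen_kappa_score(desr, reader2), 3))
```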

    Intuitive and Accurate Material Appearance Design and Editing

    Get PDF
    Creating and editing high-quality materials for photorealistic rendering can be a difficult task due to the diversity and complexity of material appearance. Material design is the process by which artists specify the reflectance properties of a surface, such as its diffuse color and specular roughness. Even with the support of commercial software packages, material design can be a time-consuming trial-and-error task due to the counter-intuitive nature of complex reflectance models. Moreover, many material design tasks require the physical realization of virtually designed materials as the final step, which makes the process even more challenging due to rendering artifacts and the limitations of fabrication. In this dissertation, we propose a series of studies and novel techniques to improve the intuitiveness and accuracy of material design and editing. Our goal is to understand how humans visually perceive materials, simplify user interaction in the design process, and improve the accuracy of the physical fabrication of designs. Our first work focuses on understanding the perceptual dimensions of measured material data. We build a perceptual space based on a low-dimensional reflectance manifold that is computed from crowd-sourced data using a multi-dimensional scaling model. Our analysis shows that the proposed perceptual space is consistent with the physical interpretation of the measured data. We also put forward a new material editing interface that takes advantage of the proposed perceptual space; we visualize each dimension of the manifold to help users understand how it changes the material appearance. Our second work investigates the relationship between translucency and glossiness in material perception. We conduct two human subject studies to test whether subsurface scattering impacts gloss perception and examine how the shape of an object influences this perception. Based on our results, we discuss why it is necessary to include transparent and translucent media in future research on gloss perception and material design. Our third work addresses user interaction in the material design system. We present a novel Augmented Reality (AR) material design prototype, which allows users to visualize their designs against a real environment and lighting. We believe introducing AR technology can make the design process more intuitive and improve the authenticity of the results for both novice and experienced users. To test this assumption, we conduct a user study comparing our prototype with a traditional material design system that uses a gray-scale background and synthetic lighting. The results demonstrate that, with the help of AR techniques, users perform better in terms of objectively measured accuracy and time, and they are subjectively more satisfied with their results. Finally, our last work turns to the challenge presented by the physical realization of designed materials. We propose a learning-based solution to map a virtually designed appearance to a meso-scale geometry that can be easily fabricated. Essentially, this is a fitting problem, but compared with previous solutions, our method provides the fabrication recipe with higher reconstruction accuracy over a large fitting gamut. We demonstrate the efficacy of our solution by comparing our reconstructions with existing solutions and comparing fabrication results with the original design. We also provide an application of bi-scale material editing using the proposed method.
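
    A minimal sketch of the first step the abstract describes, building a low-dimensional embedding from pairwise dissimilarity judgments with multi-dimensional scaling; the dissimilarity matrix here is synthetic rather than crowd-sourced, and scikit-learn's MDS stands in for the dissertation's actual model.

```python
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(1)
n_materials = 20

# Stand-in for crowd-sourced judgments: a symmetric dissimilarity matrix,
# e.g. averaged "how different do these two materials look?" ratings.
d = rng.random((n_materials, n_materials))
d = (d + d.T) / 2.0
np.fill_diagonal(d, 0.0)

# Embed the materials in a low-dimensional "perceptual space".
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
embedding = mds.fit_transform(d)   # one 2-D coordinate per material
print(embedding.shape, "stress:", round(float(mds.stress_), 3))
```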

    The use of computer-aided design techniques in dynamic graphical simulation

    Get PDF
    Imperial Users only