
    Robust and practical measurement of volume transport parameters in solid photo-polymer materials for 3D printing

    Volumetric light transport is a pervasive physical phenomenon, and its accurate simulation is therefore important for a broad array of disciplines. While suitable mathematical models for computing the transport are now available, obtaining the material parameters needed to drive such simulations is a challenging task: direct measurements of these parameters from material samples are seldom possible. Building on the inverse scattering paradigm, we present a novel measurement approach that indirectly infers the transport parameters from extrinsic observations of multiply scattered radiance. The novelty of the proposed approach lies in replacing structured illumination with a structured reflector bonded to the sample, and in a robust fitting procedure that largely compensates for potential systematic errors in the calibration of the setup. We show the feasibility of our approach by validating simulations of complex 3D compositions of the measured materials against physical prints, using photo-polymer resins. As presented in this paper, our technique yields colorspace data suitable for accurate appearance reproduction in the area of 3D printing. Beyond that, and without fundamental changes to the basic measurement methodology, it could equally well be used to obtain spectral measurements that are useful for other application areas.
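    The inverse-scattering idea can be illustrated with a deliberately simplified sketch. The forward model below is a toy albedo-to-reflectance formula standing in for the full volumetric transport simulation, and a brute-force grid search stands in for the paper's robust fitting procedure; the function, parameter ranges, and values are all illustrative assumptions, not the authors' method.

```python
import numpy as np

def forward_reflectance(sigma_s, sigma_a):
    # Toy diffusion-style forward model: diffuse reflectance as a
    # function of the single-scattering albedo. This stands in for a
    # full simulation of volumetric light transport.
    albedo = sigma_s / (sigma_s + sigma_a)
    return albedo / (2.0 - albedo)

def fit_parameters(observed, grid):
    # Brute-force inverse fit: pick the (sigma_s, sigma_a) pair whose
    # simulated reflectance best matches the extrinsic observation.
    # Note: this toy observable depends only on the albedo, so the fit
    # recovers the scattering/absorption *ratio*, not both values --
    # one reason richer observations (e.g. a structured reflector)
    # are needed in practice.
    best, best_err = None, np.inf
    for sigma_s in grid:
        for sigma_a in grid:
            err = (forward_reflectance(sigma_s, sigma_a) - observed) ** 2
            if err < best_err:
                best, best_err = (sigma_s, sigma_a), err
    return best

true_params = (8.0, 2.0)            # ground-truth albedo = 0.8
observed = forward_reflectance(*true_params)
grid = np.linspace(0.5, 10.0, 20)
est = fit_parameters(observed, grid)
```

The recovered pair reproduces the observation exactly even though it need not equal the ground-truth pair, which is the ambiguity the comment above describes.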

    Print engine color management using customer image content

    The production of quality color prints requires that color accuracy and reproducibility be maintained to within very tight tolerances when transferred to different media. Variations in the printing process commonly produce color shifts that result in poor color reproduction. The primary function of a color management system is maintaining color quality and consistency. Currently these systems are tuned in the factory by printing a large set of test color patches, measuring them, and making the necessary adjustments. This time-consuming procedure must be repeated as needed once the printer leaves the factory. In this work, a color management system that compensates for print color shifts in real time using feedback from an in-line full-width sensor is proposed. Instead of printing test patches, this novel approach to color management utilizes the output pixels already rendered in production pages for continuous printer characterization. The printed pages are scanned in-line and the results are used to update the process by which colorimetric image content is translated into engine-specific color separations (e.g., CIELAB -> CMYK). The proposed system provides a means to perform automatic printer characterization by simply printing a set of images that cover the gamut of the printer. Moreover, all of the color conversion features currently utilized in production systems (such as Gray Component Replacement, Gamut Mapping, and Color Smoothing) can be achieved with the proposed system.
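    The feedback loop can be sketched in miniature. The snippet below assumes, purely for illustration, that press drift is well approximated by a per-channel affine model in CIELAB; the real system updates a full CIELAB-to-CMYK characterization rather than this toy correction, and all names and values here are hypothetical.

```python
import numpy as np

# Simulate "production pages": requested CIELAB values for rendered pixels.
rng = np.random.default_rng(0)
requested_lab = rng.uniform([20.0, -40.0, -40.0], [90.0, 40.0, 40.0],
                            size=(200, 3))

# Simulated press drift: a small gain and offset on each channel, as
# an in-line scanner would measure it on the printed pages.
drift_gain = np.array([0.97, 1.02, 0.99])
drift_offset = np.array([1.5, -0.8, 0.4])
measured_lab = requested_lab * drift_gain + drift_offset

# Least-squares fit of an affine drift model from the scan feedback:
# measured = requested @ A + b.
X = np.hstack([requested_lab, np.ones((len(requested_lab), 1))])
coeffs, *_ = np.linalg.lstsq(X, measured_lab, rcond=None)
A, b = coeffs[:3], coeffs[3]

def compensate(target_lab):
    # Invert the fitted drift so the printed color lands on target.
    return (np.asarray(target_lab) - b) @ np.linalg.inv(A)

target = np.array([50.0, 10.0, -20.0])
adjusted = compensate(target)
printed = adjusted * drift_gain + drift_offset  # what the press produces
```

Because the updating data are ordinary production pixels, the fit improves wherever customer content happens to cover the gamut, which is the core idea of the proposed system.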

    On the Use of Low-Cost RGB-D Sensors for Autonomous Pothole Detection with Spatial Fuzzy c-Means Segmentation

    The automated detection of pavement distress from remote sensing imagery is a promising but challenging task due to the complex structure of pavement surfaces, intensity non-uniformity, and the presence of artifacts and noise. Even though imaging and sensing systems such as high-resolution RGB cameras, stereovision imaging, LiDAR, and terrestrial laser scanning can now be combined to collect pavement condition data, the data obtained by these sensors are expensive to acquire and require specially equipped vehicles and processing, which hinders the wider utilization of such sensor systems. This chapter presents the potential of the Kinect v2.0 RGB-D sensor as a low-cost approach for efficient and accurate pothole detection on asphalt pavements. Using spatial fuzzy c-means (SFCM) clustering, which incorporates neighborhood spatial information into the membership function, the RGB data are segmented into pothole and non-pothole objects. The results demonstrate the advantage of complementary processing of low-cost multisensor data, channeling data streams and linking data processing according to the merits of the individual sensors, for autonomous, cost-effective assessment of road-surface conditions using remote sensing technology.
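    The spatial term is what distinguishes SFCM from plain fuzzy c-means: each pixel's membership is blended with the summed memberships of its neighborhood, so isolated noisy pixels are pulled toward the label of their surroundings. A minimal sketch on a grayscale image, with illustrative exponents and a 3x3 neighborhood (the chapter's exact formulation and parameters may differ):

```python
import numpy as np

def spatial_fcm(img, c=2, m=2.0, p=1.0, q=1.0, iters=20):
    # Sketch of spatial fuzzy c-means on a 2D intensity image.
    h, w = img.shape
    x = img.ravel().astype(float)
    centers = np.linspace(x.min(), x.max(), c)
    for _ in range(iters):
        # Standard FCM membership from distances to cluster centers.
        d = np.abs(x[:, None] - centers[None, :]) + 1e-9
        u = d ** (-2.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)
        # Spatial function: summed membership over each 3x3 neighborhood.
        u_img = u.reshape(h, w, c)
        padded = np.pad(u_img, ((1, 1), (1, 1), (0, 0)), mode='edge')
        hfun = np.zeros_like(u_img)
        for dy in range(3):
            for dx in range(3):
                hfun += padded[dy:dy + h, dx:dx + w]
        # Blend membership and spatial information, then renormalize.
        u = (u_img ** p * hfun ** q).reshape(-1, c)
        u /= u.sum(axis=1, keepdims=True)
        # Update centers with the spatially regularized memberships.
        um = u ** m
        centers = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
    return u.argmax(axis=1).reshape(h, w), centers

img = np.zeros((10, 10))
img[:, 5:] = 1.0   # bright half of the image
img[2, 2] = 0.6    # isolated noisy pixel inside the dark half
labels, centers = spatial_fcm(img)
```

In this toy example the noisy pixel at (2, 2) leans toward the bright cluster on intensity alone, but its eight dark neighbors outvote it through the spatial term, which is exactly the behavior wanted for speckle on pavement imagery.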

    Exploring Hyperspectral Imaging and 3D Convolutional Neural Network for Stress Classification in Plants

    Hyperspectral imaging (HSI) has emerged as a transformative technology in imaging, characterized by its ability to capture a wide spectrum of light, including wavelengths beyond the visible range. This approach differs significantly from traditional imaging methods such as RGB imaging, which uses three color channels, and multispectral imaging, which captures several discrete spectral bands. HSI thus offers detailed spectral signatures for each pixel, facilitating a more nuanced analysis of the imaged subjects. This capability is particularly beneficial in applications such as agriculture, where it can detect changes in the physiological and structural characteristics of crops. Moreover, the ability of HSI to monitor these changes over time is advantageous for observing how subjects respond to different environmental conditions or treatments. However, the high-dimensional nature of hyperspectral data presents challenges in data processing and feature extraction, and traditional machine learning algorithms often struggle to handle such complexity. This is where 3D convolutional neural networks (CNNs) become valuable. Unlike 1D-CNNs, which extract features from the spectral dimension, and 2D-CNNs, which focus on the spatial dimensions, 3D-CNNs process data across both spectral and spatial dimensions, making them adept at extracting complex features from hyperspectral data. In this thesis, we explored the potency of HSI combined with 3D-CNNs in the agricultural domain, where plant health and vitality are paramount. To evaluate this, we subjected lettuce plants to varying stress levels and assessed the performance of this method in classifying the stressed lettuce at early stages of growth into their respective stress-level groups. For this study, we created a dataset comprising 88 hyperspectral image samples of stressed lettuce. Utilizing Bayesian optimization, we developed 350 distinct 3D-CNN models to assess the method. The top-performing model achieved a 75.00% test accuracy. Additionally, we addressed the challenge of generating valid 3D-CNN models in the Keras Tuner library through meticulous hyperparameter configuration. Our investigation also extends to the role of individual channels and channel groups within the color and near-infrared spectrum in predicting results for each stress-level group; we observed that the red and green spectra have a higher influence on the prediction results. Furthermore, we conducted a comprehensive review of 3D-CNN-based classification techniques for diseased and defective crops using non-UAV-based hyperspectral images. MITACS. Master of Science in Applied Computer Science
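    The distinction between 1D, 2D, and 3D convolutions is easiest to see in a bare implementation. The naive sketch below (plain numpy, not the thesis's Keras models) applies a single 3D kernel to a hyperspectral cube: because the kernel spans 3 bands by 3x3 pixels, one operation mixes spectral and spatial context, which a spectral-only (1D) or spatial-only (2D) kernel cannot do.

```python
import numpy as np

def conv3d_valid(cube, kernel):
    # Naive 'valid' 3D convolution (cross-correlation, as in CNNs)
    # over a hyperspectral cube of shape (bands, height, width).
    kb, kh, kw = kernel.shape
    b, h, w = cube.shape
    out = np.zeros((b - kb + 1, h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                # Each output value summarizes a 3D window that spans
                # neighboring bands AND neighboring pixels at once.
                out[i, j, k] = np.sum(
                    cube[i:i + kb, j:j + kh, k:k + kw] * kernel)
    return out

cube = np.random.default_rng(1).random((8, 10, 10))  # 8-band toy cube
kernel = np.ones((3, 3, 3)) / 27.0                   # 3D averaging kernel
features = conv3d_valid(cube, kernel)
```

A trained 3D-CNN learns many such kernels per layer; frameworks implement the same operation far more efficiently, but the windowing logic is identical.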

    Fast and flexible analysis of direct dark matter search data with machine learning

    We present the results of combining machine learning with the profile likelihood fit procedure, using data from the Large Underground Xenon (LUX) dark matter experiment. This approach reduces computation time by a factor of 30 compared with the previous approach, without loss of performance on real data. We establish its flexibility to capture nonlinear correlations between variables (such as smearing in light and charge signals due to position variation) by achieving equal performance using pulse areas with and without position corrections applied. Its efficiency and scalability furthermore enable searching for dark matter using additional variables without significant computational burden. We demonstrate this by including a light-signal pulse shape variable alongside more traditional inputs, such as light and charge signal strengths. This technique can be exploited by future dark matter experiments to make use of additional information, reduce the computational resources needed for signal searches and simulations, and make the inclusion of physical nuisance parameters in fits tractable.
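    For readers unfamiliar with the statistical machinery, the test statistic behind a profile likelihood fit can be shown in its simplest form: a single-bin Poisson counting experiment with expected background b and a signal strength mu scaling a nominal signal s. The LUX analysis is unbinned and multi-dimensional; this toy (with invented counts) only illustrates the likelihood-ratio scan that the machine learning accelerates.

```python
import numpy as np

def nll(mu, n, s, b):
    # Negative log-likelihood of observing n counts, up to a constant.
    lam = mu * s + b
    return lam - n * np.log(lam)

def q_mu(mu, n, s, b):
    # Likelihood-ratio test statistic: 2 * [ -lnL(mu) + lnL(mu_hat) ].
    mu_hat = max((n - b) / s, 0.0)  # analytic MLE, clipped at zero
    return 2.0 * (nll(mu, n, s, b) - nll(mu_hat, n, s, b))

# Illustrative numbers: 12 observed events, 5 expected signal at mu=1,
# 10 expected background.
n_obs, s_exp, b_exp = 12, 5.0, 10.0
scan = np.linspace(0.0, 3.0, 301)
qs = np.array([q_mu(mu, n_obs, s_exp, b_exp) for mu in scan])
best_mu = scan[qs.argmin()]
```

In a real search each q_mu evaluation requires re-profiling many nuisance parameters over high-dimensional probability models, which is precisely the cost that the paper's learned surrogate cuts by a factor of 30.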

    Advancements and Breakthroughs in Ultrasound Imaging

    Ultrasonic imaging is a powerful diagnostic tool available to medical practitioners, engineers, and researchers today. Due to its relative safety and non-invasive nature, ultrasonic imaging has become one of the most rapidly advancing technologies. These rapid advances are directly related to parallel advancements in electronics, computing, and transducer technology, together with sophisticated signal processing techniques. This book focuses on state-of-the-art developments in ultrasonic imaging applications and the underlying technologies, presented by leading practitioners and researchers from many parts of the world.

    Applicability of UAV-based optical imagery and classification algorithms for detecting pine wilt disease at different infection stages

    As a quarantine disease with a rapid spread tendency in the context of climate change, accurate detection and location of pine wilt disease (PWD) at different infection stages is critical for maintaining forest health and productivity. In recent years, unmanned aerial vehicle (UAV)-based optical remote-sensing images have provided new instruments for timely and accurate PWD monitoring. Numerous analysis algorithms have been proposed for UAV-based image classification, but their applicability for detecting different PWD infection stages has not yet been evaluated under uniform conditions and criteria. This research aims to systematically assess the performance of multi-source images for detecting different PWD infection stages, analyze effective classification algorithms, and further examine the validity of thermal images for early detection of PWD. In this study, PWD infection was divided into four stages: healthy, chlorosis, red, and gray, and UAV-based hyperspectral (HSI), multispectral (MSI), and MSI with a thermal band (MSI&TIR) datasets were used as the data sources. Spectral analysis, support vector machine (SVM), random forest (RF), and two- and three-dimensional convolutional neural network (2D- and 3D-CNN) algorithms were applied to these datasets to compare their classification abilities. The results were as follows: (I) The classification accuracy of the healthy, red, and gray stages using the MSI dataset was close to that obtained using the MSI&TIR dataset with the same algorithms, whereas the HSI dataset displayed no obvious advantages. (II) The RF and 3D-CNN algorithms were the most accurate for all datasets (RF: overall accuracy = 94.26%; 3D-CNN: overall accuracy = 93.31%), while the spectral analysis method was also valid for the MSI&TIR dataset. (III) The thermal band displayed significant potential for detection of the chlorosis stage, and the MSI&TIR dataset displayed the best performance for detection of all infection stages. Considering this, we suggest that the MSI&TIR dataset can essentially satisfy PWD identification requirements at various stages, and the RF algorithm is the best choice, especially in actual forest investigations. In addition, the performance of thermal imaging in the early monitoring of PWD is worthy of further investigation. These findings are expected to provide insight into future research and actual surveys regarding the selection of both remote-sensing datasets and data analysis algorithms when detecting different PWD infection stages, so that the disease can be detected earlier and losses prevented.
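    As a flavor of the "spectral analysis" baseline, one can classify a pixel's infection stage from a vegetation index computed on multispectral bands. The sketch below is purely hypothetical: the NDVI index is standard, but the band choices and thresholds are invented for illustration and are not the study's calibrated values.

```python
# Hypothetical spectral-analysis baseline: map a pixel's NDVI value,
# computed from near-infrared and red reflectances, to one of the four
# PWD infection stages. Thresholds are illustrative only.
STAGES = ["healthy", "chlorosis", "red", "gray"]

def ndvi(nir, red):
    # Normalized Difference Vegetation Index; epsilon avoids div-by-zero.
    return (nir - red) / (nir + red + 1e-9)

def classify_stage(nir, red):
    v = ndvi(nir, red)
    if v > 0.6:
        return "healthy"    # vigorous canopy, strong NIR reflectance
    if v > 0.3:
        return "chlorosis"  # declining vigor
    if v > 0.0:
        return "red"        # needles discolored, little active vegetation
    return "gray"           # dead crown

samples = [(0.8, 0.1), (0.5, 0.25), (0.3, 0.25), (0.2, 0.3)]
predicted = [classify_stage(nir, red) for nir, red in samples]
```

The study's finding that such index-based rules work on the MSI&TIR dataset but that RF and 3D-CNN generalize better reflects the limits of fixed thresholds across illumination and stand conditions.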

    Efficient image-based rendering

    Recent advancements in real-time ray tracing and deep learning have significantly enhanced the realism of computer-generated images. However, conventional 3D computer graphics (CG) can still be time-consuming and resource-intensive, particularly when creating photo-realistic simulations of complex or animated scenes. Image-based rendering (IBR) has emerged as an alternative approach that utilizes pre-captured images of the real world to generate realistic images in real time, eliminating the need for extensive modeling. Although IBR has its advantages, it faces challenges in providing the same level of control over scene attributes as traditional CG pipelines and in accurately reproducing complex scenes and objects with different materials, such as transparent objects. This thesis endeavors to address these issues by harnessing the power of deep learning and incorporating the fundamental principles of graphics and physically based rendering. It offers an efficient solution that enables interactive manipulation of real-world dynamic scenes captured from sparse views, lighting positions, and times, as well as a physically based approach that facilitates accurate reproduction of the view-dependency effects resulting from the interaction between transparent objects and their surrounding environment. Additionally, this thesis develops a visibility metric that can identify artifacts in reconstructed IBR images without observing the reference image, thereby contributing to the design of an effective IBR acquisition pipeline. Lastly, a perception-driven rendering technique is developed to provide high-fidelity visual content in virtual reality displays while retaining computational efficiency.