3,365 research outputs found

    Principal variable selection to explain grain yield variation in winter wheat from features extracted from UAV imagery

    Background: Automated phenotyping technologies are continually advancing the breeding process. However, collecting various secondary traits throughout the growing season and processing massive amounts of data still take great effort and time. Selecting a minimum number of secondary traits that have the maximum predictive power has the potential to reduce phenotyping efforts. The objective of this study was to select the principal features extracted from UAV imagery, and the critical growth stages, that contributed most to explaining winter wheat grain yield. Five dates of multispectral images and seven dates of RGB images were collected by a UAV system during the spring growing season in 2018. Two classes of features (variables), totaling 172 variables, were extracted for each plot from the vegetation index and plant height maps, including pixel statistics and dynamic growth rates. A parametric algorithm, LASSO regression (the least absolute shrinkage and selection operator), and a non-parametric algorithm, random forest, were applied for variable selection. The regression coefficients estimated by LASSO and the permutation importance scores provided by random forest were used to determine the ten most important variables influencing grain yield from each algorithm. Results: Both selection algorithms assigned the highest importance scores to variables related to plant height around the grain filling stage. Some vegetation-index-related variables were also selected, mainly at early to mid growth stages and during senescence. Compared with yield prediction using all 172 variables derived from measured phenotypes, prediction using the selected variables performed comparably or even better. We also noticed that the prediction accuracy on the adapted NE lines (r = 0.58–0.81) was higher than on the other lines (r = 0.21–0.59) included in this study with different genetic backgrounds. 
Conclusions: With the ultra-high-resolution plot imagery obtained by UAS-based phenotyping, we are now able to derive more features that are potentially very useful for breeding purposes, such as the variation of plant height or vegetation indices within a plot rather than just an averaged number. However, a great many features or variables can be derived in this way. The promising results from this study suggest that the selected subset of variables can achieve grain yield prediction accuracies comparable to the full set, while possibly allowing a better allocation of effort and resources in phenotypic data collection and processing.
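    The two selection routes described above can be sketched as follows. This is a minimal illustration on synthetic stand-in data; the sizes, hyperparameters, and the toy yield model are assumptions for demonstration, not the study's settings:

```python
# Minimal sketch of the two variable-selection strategies on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 172))  # 172 image-derived variables per plot (toy)
# Hypothetical yield driven by two of the variables plus noise.
y = 2.0 * X[:, 0] + X[:, 5] + rng.normal(scale=0.5, size=200)

# Parametric route: LASSO shrinks coefficients of irrelevant variables to zero.
lasso = LassoCV(cv=5).fit(X, y)
lasso_top10 = np.argsort(np.abs(lasso.coef_))[::-1][:10]

# Non-parametric route: random forest ranked by permutation importance.
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
perm = permutation_importance(rf, X, y, n_repeats=10, random_state=0)
rf_top10 = np.argsort(perm.importances_mean)[::-1][:10]
```

    In both routes, the ten variables with the largest absolute coefficients or permutation importance scores would then be retained for yield prediction.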

    Multispectral processing based on groups of resolution elements

    Several nine-point rules are defined and compared with previously studied rules. One of the rules performed well in boundary areas, but with reduced efficiency in field interiors; another combined best performance on field interiors with good sensitivity to boundary detail. The basic threshold gradient and some modifications were investigated as a means of boundary point detection. The hypothesis testing methods of closed-boundary formation were also tested and evaluated. An analysis of the boundary detection problem was initiated, employing statistical signal detection and parameter estimation techniques to analyze various formulations of the problem. These formulations permit the atmospheric and sensor system effects on the data to be thoroughly analyzed. Various boundary features and necessary assumptions can also be investigated in this manner

    Translational Functional Imaging in Surgery Enabled by Deep Learning

    Many clinical applications currently rely on several imaging modalities such as Positron Emission Tomography (PET), Magnetic Resonance Imaging (MRI), Computed Tomography (CT), etc. All such modalities provide valuable patient data to the clinical staff to aid clinical decision-making and patient care. Despite the undeniable success of such modalities, most of them are limited to preoperative scans and focus on morphology analysis, e.g. tumor segmentation, radiation treatment planning, anomaly detection, etc. Even though the assessment of different functional properties such as perfusion is crucial in many surgical procedures, it remains highly challenging via simple visual inspection. Functional imaging techniques such as Spectral Imaging (SI) link the unique optical properties of different tissue types with metabolism changes, blood flow, chemical composition, etc. As such, SI is capable of providing much richer information that can improve patient treatment and care. In particular, perfusion assessment with functional imaging has become more relevant due to its involvement in the treatment and development of several diseases such as cardiovascular diseases. Current clinical practice relies on Indocyanine Green (ICG) injection to assess perfusion. Unfortunately, this method can only be used once per surgery and has been shown to trigger deadly complications in some patients (e.g. anaphylactic shock). This thesis addressed common roadblocks in the path to translating optical functional imaging modalities to clinical practice. The main challenges that were tackled are related to a) the slow recording and processing speed that SI devices suffer from, b) the errors introduced in functional parameter estimations under changing illumination conditions, c) the lack of medical data, and d) the high tissue inter-patient heterogeneity that is commonly overlooked. This framework follows a natural path to translation that starts with hardware optimization. 
To overcome the limitations imposed by the lack of labeled clinical data and by current slow SI devices, a domain- and task-specific band selection component was introduced. The implementation of this component reduced the amount of data needed to monitor perfusion. Moreover, this method leverages large amounts of synthetic data, which, paired with unlabeled in vivo data, is capable of generating highly accurate simulations of a wide range of domains. This approach was validated in vivo in a head and neck rat model and showed higher oxygenation contrast between normal and cancerous tissue in comparison to a baseline using all available bands. The need for translation to open surgical procedures was met by the implementation of an automatic light source estimation component. This method extracts specular reflections from low-exposure spectral images and processes them to obtain an estimate of the light source spectrum that generated those reflections. The benefits of light source estimation were demonstrated in silico, in ex vivo pig liver, and in vivo on human lips, where the oxygenation estimation error was reduced when utilizing the correct light source estimated with this method. These experiments also showed that the performance of the approach proposed in this thesis surpasses that of other baseline approaches. Video-rate functional property estimation was achieved by two main components: a regression and an Out-of-Distribution (OoD) component. At the core of both components is a compact SI camera that is paired with state-of-the-art deep learning models to achieve real-time functional estimations. The first of these components features a deep learning model based on a Convolutional Neural Network (CNN) architecture that was trained on highly accurate physics-based simulations of light-tissue interactions. By doing this, the challenge of the lack of in vivo labeled data was overcome. 
This approach was validated in the task of perfusion monitoring in pig brain and in a clinical study involving human skin. It was shown that this approach is capable of monitoring subtle perfusion changes in human skin in an arm clamping experiment. Moreover, this approach was capable of monitoring Spreading Depolarizations (SDs) (deoxygenation waves) on the surface of a pig brain. Even though this method is well suited for perfusion monitoring in domains that are well represented by the physics-based simulations on which it was trained, its performance cannot be guaranteed for outlier domains. To handle outlier domains, the task of ischemia monitoring was rephrased as an OoD detection task. This new functional estimation component comprises an ensemble of Invertible Neural Networks (INNs) that only requires perfused tissue data from individual patients to detect ischemic tissue as outliers. The first ever clinical study involving a video-rate capable SI camera in laparoscopic partial nephrectomy was designed to validate this approach. This study revealed particularly high inter-patient tissue heterogeneity in the presence of pathologies (cancer). Moreover, it demonstrated that this personalized approach is now capable of monitoring ischemia at video rate with SI during laparoscopic surgery. In conclusion, this thesis addressed challenges related to slow image recording and processing during surgery. It also proposed a method for light source estimation to facilitate translation to open surgical procedures. Moreover, the methodology proposed in this thesis was validated in a wide range of domains: in silico, rat head and neck, pig liver and brain, and human skin and kidney. In particular, the first clinical trial with spectral imaging in minimally invasive surgery demonstrated that video-rate ischemia monitoring is now possible with deep learning.

    Fast and Lightweight Rate Control for Onboard Predictive Coding of Hyperspectral Images

    Predictive coding is attractive for compression of hyperspectral images onboard spacecraft in light of the excellent rate-distortion performance and low complexity of recent schemes. In this letter we propose a rate control algorithm and integrate it into a lossy extension of the CCSDS-123 lossless compression recommendation. The proposed rate control algorithm overhauls our previous scheme by being orders of magnitude faster and simpler to implement, while still providing the same accuracy in terms of output rate and comparable or better image quality.

    Uncertainty Quantification in Biophotonic Imaging using Invertible Neural Networks

    Owing to high stakes in the field of healthcare, medical machine learning (ML) applications have to adhere to strict safety standards. In particular, their performance needs to be robust toward volatile clinical inputs. The aim of the work presented in this thesis was to develop a framework for uncertainty handling in medical ML applications as a way to increase their robustness and trustworthiness. In particular, it addresses three root causes for lack of robustness that can be deemed central to the successful clinical translation of ML methods: First, many tasks in medical imaging can be phrased in the language of inverse problems. Most common ML methods aimed at solving such inverse problems implicitly assume that they are well-posed, especially that the problem has a unique solution. However, the solution might be ambiguous. In this thesis, we introduce a data-driven method for analyzing the well-posedness of inverse problems. In addition, we propose a framework to validate the suggested method in a problem-aware manner. Second, simulation is an important tool for the development of medical ML systems due to small in vivo data sets and/or a lack of annotated references (e.g. spatially resolved blood oxygenation (sO2)). However, simulation introduces a new uncertainty to the ML pipeline as ML performance guarantees generally rely on the testing data being sufficiently similar to the training data. This thesis addresses the uncertainty by quantifying the domain gap between training and testing data via an out-of-distribution (OoD) detection approach. Third, we introduce a new paradigm for medical ML based on personalized models. In a data-scarce regime with high inter-patient variability, classical ML models cannot be assumed to generalize well to new patients. To overcome this problem, we propose to train ML models on a per-patient basis. This approach circumvents the inter-patient variability, but it requires training without a supervision signal. 
We address this issue via OoD detection, where the current status quo is encoded as in-distribution (ID) using a personalized ML model. Changes to the status quo are then detected as OoD. While these three facets might seem distinct, the suggested framework provides a unified view of them. The enabling technology is the so-called invertible neural network (INN), which can be used as a flexible and expressive (conditional) density estimator. In this way, they can encode solutions to inverse problems as a probability distribution as well as tackle OoD detection tasks via density-based scores, like the widely applicable information criterion (WAIC). The present work validates our framework on the example of biophotonic imaging. Biophotonic imaging promises the estimation of tissue parameters such as sO2 in a non-invasive way by evaluating the “fingerprint” of the tissue in the light spectrum. We apply our framework to analyze the well-posedness of the tissue parameter estimation problem at varying spectral and spatial resolutions. We find that with sufficient spectral and/or spatial context, the sO2 estimation problem is well-posed. Furthermore, we examine the realism of simulated biophotonic data using the proposed OoD approach to gauge the generalization capabilities of our ML models to in vivo data. Our analysis shows a considerable remaining domain gap between the in silico and in vivo spectra. Lastly, we validate the personalized ML approach on the example of non-invasive ischemia monitoring in minimally invasive kidney surgery, for which we developed the first-in-human laparoscopic multispectral imaging system. In our study, we find a strong OoD signal between perfused and ischemic kidney spectra. Furthermore, the proposed approach is video-rate capable. 
In conclusion, we successfully developed a framework for uncertainty handling in medical ML and validated it using a diverse set of medical ML tasks, highlighting the flexibility and potential impact of our approach. The framework opens the door to robust solutions to applications like (recording) device design, quality control for simulation pipelines, and personalized video-rate tissue parameter monitoring. In this way, this thesis facilitates the development of the next generation of trustworthy ML systems in medicine
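    The WAIC-style, ensemble-based OoD score described above can be illustrated with a toy sketch. Here simple Gaussian density estimators stand in for the invertible neural networks, and all data are synthetic; the score rewards high average log-likelihood and penalizes disagreement across ensemble members:

```python
# Toy sketch of an ensemble-based WAIC-style OoD score: several density
# estimators are fit to in-distribution data, and the score combines mean
# log-likelihood with a variance penalty across the ensemble.
import numpy as np

rng = np.random.default_rng(1)
# Five in-distribution training splits, one per ensemble member (synthetic).
train = rng.normal(loc=0.0, scale=1.0, size=(5, 500))

def log_gauss(x, mu, sigma):
    # Log-density of a univariate Gaussian.
    return -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu) ** 2 / (2 * sigma**2)

# Fit one Gaussian per ensemble member (stand-in for one INN per member).
params = [(s.mean(), s.std()) for s in train]

def waic_score(x):
    logp = np.array([log_gauss(x, mu, sd) for mu, sd in params])
    # Higher score = more in-distribution; low likelihood or high ensemble
    # disagreement both push the score down.
    return logp.mean(axis=0) - logp.var(axis=0)
```

    An in-distribution sample (e.g. near 0 here) receives a markedly higher score than an outlier far from the training distribution, which is the signal used to flag OoD inputs.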

    Multispectral image analysis in laparoscopy – A machine learning approach to live perfusion monitoring

    Modern visceral surgery is often performed through small incisions. Compared to open surgery, these minimally invasive interventions result in smaller scars, fewer complications and a quicker recovery. While beneficial to the patient, they have the drawback of limiting the physician’s perception largely to visual feedback through a camera mounted on a rod lens: the laparoscope. Conventional laparoscopes are limited by “imitating” the human eye. Multispectral cameras remove this arbitrary restriction of recording only red, green and blue colors. Instead, they capture many specific bands of light. Although these could help characterize important indications such as ischemia and early-stage adenoma, the lack of powerful digital image processing prevents realizing the technique’s full potential. The primary objective of this thesis was to pioneer fluent functional multispectral imaging (MSI) in laparoscopy. The main technical obstacles were: (1) the lack of image analysis concepts that provide both high accuracy and speed; (2) multispectral image recording is slow, typically ranging from seconds to minutes; (3) obtaining a quantitative ground truth for the measurements is hard or even impossible. To overcome these hurdles and enable functional laparoscopy, physical models are, for the first time in this field, combined with powerful machine learning techniques. The physical model is employed to create highly accurate simulations, which in turn teach the algorithm to rapidly relate multispectral pixels to underlying functional changes. To reduce the domain shift introduced by learning from simulations, a novel transfer learning approach automatically adapts generic simulations to match almost arbitrary recordings of visceral tissue. In combination with the only available video-rate capable multispectral sensor, the method pioneers fluent perfusion monitoring with MSI. 
This system was carefully tested in a multistage process, involving in silico quantitative evaluations, tissue phantoms and a porcine study. Clinical applicability was ensured through in-patient recordings in the context of partial nephrectomy; in these, the novel system characterized ischemia live during the intervention. Verified against a fluorescence reference, the results indicate that fluent, non-invasive ischemia detection and monitoring is now possible. In conclusion, this thesis presents the first multispectral laparoscope capable of video-rate functional analysis. The system was successfully evaluated in in-patient trials, and future work should be directed towards evaluation of the system in a larger study. Due to the broad applicability and the large potential clinical benefit of the presented functional estimation approach, I am confident the descendants of this system will be an integral part of the next-generation OR.

    The Application of an Unmanned Aerial System and Machine Learning Techniques for Red Clover-Grass Mixture Yield Estimation under Variety Performance Trials

    Interest has recently grown in estimating the aboveground biomass of vegetation in legume-supported systems in perennial or semi-natural grasslands, to meet the demands of sustainable and precision agriculture. Unmanned aerial systems (UAS) are a powerful tool for supporting farm-scale phenotyping trials. In this study, we explored the variation of red clover-grass mixture dry matter (DM) yields between temporal periods (one- and two-year cultivated) and farming operations [soil tillage methods (STM), cultivation methods (CM), manure application (MA)], using three machine learning (ML) techniques [random forest regression (RFR), support vector regression (SVR), and artificial neural network (ANN)] and six multispectral vegetation indices (VIs) to predict DM yields. The ML evaluation showed the best performance for ANN in the 11-day-before-harvest category (R2 = 0.90, NRMSE = 0.12), followed by RFR (R2 = 0.90, NRMSE = 0.15) and SVR (R2 = 0.86, NRMSE = 0.16), which was further supported by a leave-one-out cross-validation pre-analysis. In terms of VI performance, the green normalized difference vegetation index (GNDVI), green difference vegetation index (GDVI), and modified simple ratio (MSR) performed better as predictors in ANN and RFR. However, the prediction ability of the models was influenced by farming operations: stratified sampling based on STM yielded better model performance than sampling based on CM or MA. Drone data collection in this study was found to be optimal closer to the harvest date, but not later than the ageing stage.
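    For reference, two of the vegetation indices named above can be computed directly from band reflectances. The band values below are illustrative, not the study's data:

```python
# GNDVI and MSR computed from per-plot green, red, and near-infrared
# reflectance values (synthetic example values).
import numpy as np

green = np.array([0.08, 0.10, 0.12])
red = np.array([0.06, 0.05, 0.07])
nir = np.array([0.45, 0.52, 0.40])

# Green normalized difference vegetation index: NDVI with green in place of red.
gndvi = (nir - green) / (nir + green)

# Modified simple ratio: a rescaling of the NIR/red simple ratio.
msr = (nir / red - 1) / np.sqrt(nir / red + 1)
```

    Per-plot statistics of such index maps (means, but also within-plot variation) are the kind of features these ML models take as predictors.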

    Tissue classification for laparoscopic image understanding based on multispectral texture analysis.

    Intraoperative tissue classification is one of the prerequisites for providing context-aware visualization in computer-assisted minimally invasive surgeries. As many anatomical structures are difficult to differentiate in conventional RGB medical images, we propose a classification method based on multispectral image patches. In a comprehensive ex vivo study through statistical analysis, we show that (1) multispectral imaging data are superior to RGB data for organ tissue classification when used in conjunction with widely applied feature descriptors and (2) combining the tissue texture with the reflectance spectrum improves the classification performance. The classifier reaches an accuracy of 98.4% on our dataset. Multispectral tissue analysis could thus evolve as a key enabling technique in computer-assisted laparoscopy
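    The idea of combining the reflectance spectrum with texture for patch classification can be sketched as follows. The random forest classifier, the crude texture statistic, and the synthetic data below are illustrative assumptions, not the descriptors or classifier used in the study:

```python
# Patch-wise tissue classification from multispectral image patches,
# concatenating a spectral feature (per-band mean) with a simple texture
# feature (per-band spatial standard deviation). Data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_patches, h, w, bands = 300, 8, 8, 8
labels = rng.integers(0, 2, n_patches)  # two toy tissue classes
# Class 1 patches get a shifted reflectance spectrum.
patches = rng.normal(size=(n_patches, h, w, bands)) + labels[:, None, None, None]

spectrum = patches.mean(axis=(1, 2))  # mean reflectance spectrum per patch
texture = patches.std(axis=(1, 2))    # crude per-band texture proxy
features = np.concatenate([spectrum, texture], axis=1)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
acc = cross_val_score(clf, features, labels, cv=5).mean()
```

    On real data, texture would be captured by proper descriptors (e.g. local patterns or co-occurrence statistics) rather than a plain standard deviation, but the spectrum-plus-texture concatenation is the same.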

    Application of Multi-Sensor Fusion Technology in Target Detection and Recognition

    Application of multi-sensor fusion technology has drawn a lot of industrial and academic interest in recent years. Multi-sensor fusion methods are widely used in many applications, such as autonomous systems, remote sensing, video surveillance, and the military. These methods can capture the complementary properties of targets by considering multiple sensors, and they can achieve a detailed environment description and accurate detection of targets of interest based on the information from different sensors. This book collects novel developments in the field of multi-sensor, multi-source, and multi-process information fusion. Articles emphasize one or more of three facets: architectures, algorithms, and applications. The published papers deal with fundamental theoretical analyses as well as demonstrations of their application to real-world problems.