
    Assessing the role of EO in biodiversity monitoring: options for integrating in-situ observations with EO within the context of the EBONE concept

    The European Biodiversity Observation Network (EBONE) is a European contribution on terrestrial monitoring to GEO BON, the Group on Earth Observations Biodiversity Observation Network. EBONE aims to develop a system of biodiversity observation at regional, national and European levels by assessing existing approaches in terms of their validity and applicability, starting in Europe and then expanding to regions in Africa. The objective of EBONE is to deliver: 1. a sound scientific basis for the production of statistical estimates of stock and change of key indicators; 2. the development of a system for estimating past changes and for forecasting and testing policy options and management strategies for threatened ecosystems and species; 3. a proposal for a cost-effective biodiversity monitoring system. There is a consensus that Earth Observation (EO) has a role to play in monitoring biodiversity. With its capacity to observe detailed spatial patterns and variability across large areas at regular intervals, it is natural to expect that EO could deliver the type of spatial and temporal coverage that is beyond the reach of in-situ efforts alone. Furthermore, when considering the emerging networks of in-situ observations, the prospect of enhancing the quality of the information while reducing cost through integration is compelling. This report gives a realistic assessment of the role of EO in biodiversity monitoring and of the options for integrating in-situ observations with EO within the context of the EBONE concept (cf. EBONE-ID1.4). The assessment is mainly based on a set of targeted pilot studies. Building on this assessment, the report then presents a series of recommendations on the best options for using EO in an effective, consistent and sustainable biodiversity monitoring scheme.
The issues we faced were many: 1. Integration can be interpreted in different ways: one interpretation is the combined use of independent data sets to deliver a different but improved data set; another is the use of one data set to complement another. 2. The targeted improvement varies with stakeholder group: some seek greater efficiency, others more reliable estimates (accuracy and/or precision), others more detail in space and/or time, or more of everything. 3. Integration requires a link between the data sets (EO and in-situ). The strength of the link between reflected electromagnetic radiation and the habitats and biodiversity observed in-situ is a function of many variables, for example the spatial scale of the observations, the timing of the observations, the adopted classification nomenclature, the complexity of the landscape in terms of composition, spatial structure and physical environment, and the habitat and land cover types under consideration. 4. The type of EO data available varies (as a function of, e.g., budget, size and location of the region, cloudiness, and national and/or international investment in airborne campaigns or space technology), which determines its capability to deliver the required output. EO and in-situ data could be combined in different ways, depending on the type of integration we wanted to achieve and the targeted improvement. We aimed for an improvement in accuracy (i.e. a reduction in the error of our indicator estimate calculated for an environmental zone). Furthermore, EO would also provide the spatial patterns for correlated in-situ data.
EBONE, in its initial development, focused on three main indicators covering: (i) the extent and change of habitats of European interest in the context of a general habitat assessment; (ii) the abundance and distribution of selected species (birds, butterflies and plants); and (iii) the fragmentation of natural and semi-natural areas. For habitat extent, we decided that it did not matter how in-situ data were integrated with EO as long as we could demonstrate that acceptable accuracies could be achieved and that precision could consistently be improved. The nomenclature used to map habitats in-situ was the General Habitat Classification. We considered the following options, in which EO and in-situ data play different roles: using in-situ samples to re-calibrate a habitat map independently derived from EO; improving the accuracy of in-situ sampled habitat statistics by post-stratification with correlated EO data (a sketch of this option follows below); and using in-situ samples to train the classification of EO data into habitat types where the EO data deliver full coverage or a larger number of samples. For some of these cases we also considered the impact that the sampling strategy employed to deliver the samples would have on the accuracy and precision achieved. Restricted access to Europe-wide species data prevented work on the indicator ‘abundance and distribution of species’. With respect to the indicator ‘fragmentation’, we investigated ways of delivering EO-derived measures of habitat patterns that are meaningful to sampled in-situ observations.
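One of the integration options above, post-stratification of in-situ habitat statistics with correlated EO data, can be sketched in a few lines. The sketch below is purely illustrative: the habitat classes, stratum proportions and cover values are hypothetical and are not taken from the EBONE pilot studies.

```python
# Illustrative sketch of post-stratification with EO data.
# All class names, proportions and cover values are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# In-situ sample: a habitat indicator (e.g. fraction of plot covered by the
# habitat of interest) and the EO-derived stratum each plot falls in.
n = 200
eo_stratum = rng.choice(["forest", "grassland", "wetland"], size=n, p=[0.5, 0.4, 0.1])
habitat_cover = np.where(eo_stratum == "forest",
                         rng.normal(0.7, 0.1, n),
                         rng.normal(0.2, 0.1, n))

# Known stratum proportions from the wall-to-wall EO map (hypothetical).
eo_proportions = {"forest": 0.45, "grassland": 0.45, "wetland": 0.10}

# Simple (unstratified) sample mean.
naive_estimate = habitat_cover.mean()

# Post-stratified estimate: per-stratum sample means weighted by EO proportions.
post_stratified = sum(
    w * habitat_cover[eo_stratum == s].mean() for s, w in eo_proportions.items()
)

print(f"naive estimate:           {naive_estimate:.3f}")
print(f"post-stratified estimate: {post_stratified:.3f}")
```

The gain comes from the EO map supplying the true stratum proportions over the whole zone, so a sample that happens to over- or under-represent a stratum no longer biases the zone-level estimate as strongly.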

    An Archival Analysis of Stall Warning System Effectiveness During Airborne Icing Encounters

    An archival study was conducted to determine the influence of stall warning system performance on aircrew decision-making outcomes during airborne icing encounters. A Conservative Icing Response Bias (CIRB) model was developed to explain the historical variability in aircrew performance in the face of airframe icing. The model combined Bayes’ Theorem with Signal Detection Theory (SDT) concepts to yield testable predictions that were evaluated using a Binary Logistic Regression (BLR) multivariate technique applied to two archives: the NASA Aviation Safety Reporting System (ASRS) incident database and the National Transportation Safety Board (NTSB) accident database, both covering the period from January 1, 1988 to October 2, 2015. The CIRB model predicted that aircrew would experience more incorrect response outcomes in the face of missed stall warnings than with stall warning False Alarms. These predicted outcomes were observed at high significance levels in the final sample of 132 NASA/NTSB cases. The CIRB model had high sensitivity and specificity and explained 71.5% (Nagelkerke R²) of the variance in aircrew decision-making outcomes during the icing encounters. The reliability and validity metrics derived from this study indicate that the findings are generalizable to the population of U.S.-registered turbine-powered aircraft. These findings suggest that icing-related stall events could be reduced if the incidence of stall warning misses could be minimized. Observed stall warning misses stemmed from three principal causes: aerodynamic icing effects, which reduced the stall angle-of-attack (AoA) to below the stall warning calibration threshold; tail stalls, which are not monitored by contemporary protection systems; and icing-induced system issues (such as frozen pitot tubes), which compromised stall warning system effectiveness and airframe envelope protections. Each of these sources of missed stall warnings could be addressed by Aerodynamic Performance Monitoring (APM) systems that directly measure the boundary layer airflow adjacent to the affected aerodynamic surfaces, independent of other aircraft stall protection, air data, and AoA systems. In addition to the investigation of APM systems, measures should also be taken to include the CIRB phenomenon in aircrew training, to better prepare crews to cope with airborne icing encounters. The SDT/BLR technique would allow the forecast gains from these improved systems and training processes to be evaluated objectively and quantitatively. The SDT/BLR model developed for this study has broad application outside the realm of airborne icing. The SDT technique has been extensively validated by prior research, and BLR is a very robust multivariate technique. Combined, they could be applied to evaluate high-order constructs (such as stall awareness in this study) in complex and dynamic environments. The union of SDT and BLR reduces the modeling complexities for each variable into the four binary SDT categories of Hit, Miss, False Alarm, and Correct Rejection, which is the optimum format for BLR. Despite this reductionist approach to complex situations, the method demonstrated very high statistical and practical significance, as well as excellent predictive power, when applied to the airborne icing scenario.
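As a rough illustration of the SDT/BLR pairing described above, the sketch below encodes each encounter's stall warning outcome as one of the four SDT categories and fits a binary logistic regression to a binary decision-making outcome. The encounter records are synthetic placeholders, not NASA ASRS or NTSB data, and the generic scikit-learn logistic regression stands in for the study's actual analysis.

```python
# Minimal sketch of combining SDT categories with binary logistic regression.
# All data below are synthetic; only the structure mirrors the approach described.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Each encounter gets one SDT category for the stall warning system ...
categories = np.array(["hit", "miss", "false_alarm", "correct_rejection"])
warning_outcome = rng.choice(categories, size=300, p=[0.4, 0.2, 0.2, 0.2])

# ... and a binary decision-making outcome (1 = correct aircrew response).
# Synthetic rule: misses lead to incorrect responses far more often.
p_correct = np.where(warning_outcome == "miss", 0.3, 0.85)
correct_response = rng.binomial(1, p_correct)

# One-hot encode the SDT category and fit the logistic regression.
X = (warning_outcome[:, None] == categories).astype(float)
model = LogisticRegression().fit(X, correct_response)

for name, coef in zip(categories, model.coef_[0]):
    print(f"{name}: {coef:+.2f}")
```

In this toy setup the coefficient for the "miss" category comes out strongly negative, mirroring the reported finding that missed stall warnings are associated with incorrect aircrew responses.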

    Classification of Causes of Errors in the Human-Robot System

    The first classification of the causes of errors in the human-robot system is presented in this paper. This new model of error classification in the human-robot system has a global character: it includes the causes of errors of the individual components of the system as well as the errors that result from their interaction. The model also includes all the factors surrounding the system that can act as causes of errors in the human-robot system. The model distinguishes five main groups of causes of errors, which are described in the paper. The classification of errors in the human-robot system is of great importance. It can serve designers as a guide or reminder of the factors that should be taken into account during design in order to reduce errors in the human-robot system. In addition, the model can be used to assess efficiency and the possible causes of accidents in the human-robot system. Certain general solutions for reducing the causes of errors in the human-robot system are also presented.

    Detection and Generalization of Spatio-temporal Trajectories for Motion Imagery

    In today's world of vast information availability, users often confront large, unorganized amounts of data with limited tools for managing them. Motion imagery (MI) datasets have become an increasingly popular means of exposing and disseminating information. Commonly, moving objects are of primary interest in modeling such datasets. Users may require different levels of detail, mainly for visualization and further processing purposes, according to the application at hand. In this thesis we exploit the geometric attributes of objects for dataset summarization by using a series of image processing and neural network tools. In order to form data summaries, we select representative time instances through the segmentation of an object's spatio-temporal trajectory lines. Instances of high movement variation are selected through a new hybrid self-organizing map (SOM) technique to describe a single spatio-temporal trajectory. Multiple objects move in diverse yet classifiable patterns. In order to group corresponding trajectories, we utilize an abstraction mechanism that investigates a vague moving relevance between the data in space and time. Thus, we introduce the spatio-temporal neighborhood unit as a variable generalization surface; by altering the unit's dimensions, scaled generalization is accomplished. Common complications in tracking applications, including occlusion, noise, information gaps and unconnected segments of data sequences, are addressed through the hybrid-SOM analysis. Nevertheless, entangled data sequences, in which there is no information on which data entry belongs to which trajectory, are frequently encountered. A multidimensional classification technique that combines geometric and backpropagation neural network implementations is used to distinguish between trajectory data. Furthermore, modeling and summarization of two-dimensional phenomena evolving in time brings forward the novel concept of spatio-temporal helixes as compact event representations. The phenomenon models are composed of SOM movement nodes (spines) and cardinality shape-change descriptors (prongs). While we focus on the analysis of MI datasets, the framework can be generalized to function with other types of spatio-temporal datasets. Multiple-scale generalization is allowed on a dynamic, significance-based scale rather than a constant one. The constructed summaries are not just a visualization product; they support further processing for metadata creation, indexing, and querying. Experimentation, comparisons and error estimations for each technique support the analyses discussed.
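To make the SOM-based trajectory summarization concrete, the following sketch trains a tiny one-dimensional self-organizing map on a synthetic trajectory and flags large direction changes between consecutive nodes as candidate segmentation instances. It is a simplified stand-in for the hybrid-SOM technique, not the thesis implementation; the trajectory, node count and learning schedule are assumptions.

```python
# Simplified 1-D SOM over a synthetic (t, x, y) trajectory.
# The trained node chain acts as a compact summary of the path.
import numpy as np

rng = np.random.default_rng(2)

# Synthetic trajectory with a sharp turn halfway through.
t = np.linspace(0, 1, 200)
x = np.where(t < 0.5, t, 0.5 + 0.2 * (t - 0.5))
y = np.where(t < 0.5, 0.1 * t, 0.05 + 1.5 * (t - 0.5))
traj = np.column_stack([t, x, y])

# Initialize nodes along the trajectory, then train the SOM.
n_nodes = 8
nodes = traj[np.linspace(0, len(traj) - 1, n_nodes, dtype=int)].copy()
for epoch in range(50):
    lr = 0.5 * (1 - epoch / 50)                       # decaying learning rate
    for p in traj[rng.permutation(len(traj))]:
        winner = np.argmin(np.linalg.norm(nodes - p, axis=1))
        for j in range(n_nodes):                      # Gaussian neighborhood
            h = np.exp(-((j - winner) ** 2) / 2.0)
            nodes[j] += lr * h * (p - nodes[j])

# Direction changes between consecutive nodes flag candidate segmentation points.
seg = np.diff(nodes[:, 1:], axis=0)
angles = np.degrees(np.arctan2(seg[:, 1], seg[:, 0]))
print(np.round(angles, 1))
```

The node with the largest change in heading corresponds to the turn in the synthetic path, which is the kind of high movement-variation instance the summarization keeps.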

    Towards generalizable machine learning models for computer-aided diagnosis in medicine

    Hidden stratification is a phenomenon in which a training dataset contains unlabeled (hidden) subsets of cases that may affect machine learning model performance. Machine learning models that ignore hidden stratification, despite promising overall performance measured as accuracy and sensitivity, often fail at predicting the low-prevalence cases, yet those cases remain important. In the medical domain, patients with diseases are often less common than healthy patients, and a misdiagnosis of a patient with a disease can have significant clinical impacts. Therefore, to build a robust and trustworthy computer-aided diagnosis (CAD) system and a reliable treatment-effect prediction model, we cannot pursue only machine learning models with high overall accuracy; we also need to discover any hidden stratification in the data and evaluate the proposed machine learning models with respect to both overall performance and the performance on certain subsets (groups) of the data, such as the ‘worst group’. In this study, I investigated three approaches to data stratification: a novel algorithmic deep learning (DL) approach that learns similarities among cases, and two schema completion approaches that utilize domain expert knowledge. I further proposed an innovative way to integrate the discovered latent groups into the loss functions of DL models to allow for better model generalizability under the domain shift scenario caused by data heterogeneity. My results on lung nodule computed tomography (CT) images and breast cancer histopathology images demonstrate that learning homogeneous groups within heterogeneous data significantly improves the performance of the CAD system, particularly for low-prevalence or worst-performing cases. This study emphasizes the importance of discovering and learning the latent stratification within the data, as this is a critical step towards building ML models that are generalizable and reliable. Ultimately, this discovery can have a profound impact on clinical decision-making, particularly for low-prevalence cases.
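A minimal sketch of folding discovered groups into a loss function is shown below: it reweights per-group losses so that the worst-performing group dominates, in the spirit of group-robust training. The weighting scheme and the toy numbers are assumptions for illustration, not the dissertation's exact formulation.

```python
# Group-aware loss sketch: upweight groups with high average loss.
# The exponential weighting is one common group-robust heuristic, used here
# purely as an illustration of integrating latent groups into training.
import numpy as np

def group_weighted_loss(per_sample_loss, group_ids, temperature=1.0):
    """Return a scalar loss that emphasizes the worst-performing group."""
    groups = np.unique(group_ids)
    group_losses = np.array([per_sample_loss[group_ids == g].mean() for g in groups])
    # Softmax over group losses: the worst group receives the largest weight.
    w = np.exp(group_losses / temperature)
    w /= w.sum()
    return float((w * group_losses).sum()), dict(zip(groups.tolist(), w.round(3)))

# Toy example: group 2 (e.g. a rare, hidden subtype) is fit poorly.
losses = np.array([0.2, 0.3, 0.25, 1.4, 1.6])
groups = np.array([0, 0, 1, 2, 2])
print(group_weighted_loss(losses, groups))
```

Optimizing such a weighted objective pushes the model to reduce error on the rare, poorly fit group rather than only on the overall average, which is the behaviour the abstract argues a trustworthy CAD system needs.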

    Deep learning for clinical decision support in oncology

    Over the last decades, medical imaging methods, such as computed tomography (CT), have become an indispensable tool of modern medicine, allowing for fast, non-invasive inspection of organs and tissue. The amount of acquired healthcare data has thus grown rapidly, increasing 15-fold within the last years, and it now accounts for more than 30 % of the world's generated data volume. In contrast, the number of trained radiologists remains largely stable. Medical image analysis, settled between medicine and engineering, has therefore become a rapidly growing research field. Its successful application may result in remarkable time savings and lead to significantly improved diagnostic performance. Much of the work within medical image analysis focuses on radiomics, i.e. the extraction and analysis of hand-crafted imaging features. Radiomics, however, has been shown to be highly sensitive to external factors, such as the acquisition protocol, with major implications for reproducibility and clinical applicability. Lately, deep learning has become one of the most widely employed methods for solving computational problems.
With successful applications in diverse fields, such as robotics, physics, mathematics, and economics, deep learning has revolutionized the process of machine learning research. Having large amounts of training data is a key criterion for its successful application. These data, however, are rare within medicine, as medical imaging is subject to a variety of data security and data privacy regulations. Moreover, medical imaging data often suffer from heterogeneous quality, label imbalance, and label noise, rendering a considerable fraction of deep learning-based algorithms inapplicable. Settled in the field of CT oncology, this work addresses these issues, showing ways to successfully handle medical imaging data using deep learning. It proposes novel methods for clinically relevant tasks, such as lesion growth and patient survival prediction, confidence estimation, meta-learning and classifier ensembling, and deep decision explanation, yielding superior performance in comparison to state-of-the-art approaches and being applicable to a wide variety of applications. With this, the work contributes towards a clinical translation of deep learning-based algorithms, aiming for improved diagnosis and, ultimately, improved overall patient healthcare.
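As an illustration of the classifier ensembling and confidence estimation mentioned above, the sketch below averages the softmax outputs of several models and uses the predictive entropy of the ensemble mean as a simple uncertainty measure. It is a generic deep-ensemble-style recipe under assumed inputs, not the dissertation's specific models.

```python
# Generic ensemble averaging with a simple entropy-based confidence estimate.
# The member probabilities below are made-up placeholders.
import numpy as np

def ensemble_predict(member_probs):
    """member_probs: (n_models, n_cases, n_classes) softmax outputs."""
    mean_probs = member_probs.mean(axis=0)                    # ensemble prediction
    entropy = -(mean_probs * np.log(mean_probs + 1e-12)).sum(axis=1)
    prediction = mean_probs.argmax(axis=1)
    return prediction, mean_probs.max(axis=1), entropy        # class, confidence, uncertainty

# Toy example: three models, two cases, two classes (e.g. benign vs. malignant).
probs = np.array([
    [[0.90, 0.10], [0.60, 0.40]],
    [[0.80, 0.20], [0.40, 0.60]],
    [[0.85, 0.15], [0.50, 0.50]],
])
print(ensemble_predict(probs))
```

Cases on which the ensemble members disagree end up with high entropy, so such a score can be used to flag low-confidence predictions for clinical review.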