
    Explaining Aviation Safety Incidents Using Deep Temporal Multiple Instance Learning

    Although aviation accidents are rare, safety incidents occur more frequently and require careful analysis to detect and mitigate risks in a timely manner. Analyzing safety incidents using operational data and producing event-based explanations is invaluable to airline companies as well as to governing organizations such as the Federal Aviation Administration (FAA) in the United States. However, this task is challenging because of the complexity involved in mining multi-dimensional heterogeneous time series data, the lack of time-step-wise annotation of events in a flight, and the lack of scalable tools for performing analysis over a large number of events. In this work, we propose a precursor mining algorithm that identifies events in the multi-dimensional time series that are correlated with the safety incident. Precursors are valuable for systems health and safety monitoring and for explaining and forecasting safety incidents. Current methods suffer from poor scalability to high-dimensional time series data and are inefficient at capturing temporal behavior. We propose an approach that combines multiple-instance learning (MIL) and deep recurrent neural networks (DRNN) to take advantage of MIL's ability to learn from weakly supervised data and DRNN's ability to model temporal behavior. We describe the algorithm, the data, the intuition behind taking a MIL approach, and a comparative analysis of the proposed algorithm against baseline models. We also discuss the application to a real-world aviation safety problem using data from a commercial airline company, examine the model's abilities and shortcomings, and close with remarks on possible deployment directions.
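    The MIL intuition here — a flight is a labeled "bag" of unlabeled time steps, and an incident flight must contain at least one precursor instance — can be sketched as follows. This is a minimal stand-in, with a toy linear scorer in place of the paper's DRNN and with invented data and weights:

```python
import numpy as np

def instance_scores(bag, w, b):
    """Score each time step of a flight (bag) for precursor likelihood.

    In the paper's setting this scorer would be a deep recurrent network;
    a logistic unit stands in so the MIL aggregation is easy to follow.
    """
    z = bag @ w + b                   # (timesteps,) raw scores
    return 1.0 / (1.0 + np.exp(-z))   # per-instance precursor probabilities

def bag_prediction(bag, w, b):
    """MIL aggregation: the bag score is the max over instance scores,
    so a flight is flagged if any single time step looks like a precursor."""
    scores = instance_scores(bag, w, b)
    return scores.max(), scores.argmax()   # bag score and precursor location

# Toy flight: 10 time steps x 3 sensor channels; step 6 carries the anomaly.
rng = np.random.default_rng(0)
flight = rng.normal(0.0, 0.1, size=(10, 3))
flight[6] = [2.0, 2.0, 2.0]

w = np.array([1.0, 1.0, 1.0])   # hypothetical learned weights
score, where = bag_prediction(flight, w, b=-3.0)
print(round(float(score), 3), int(where))   # high bag score (~0.95), precursor at step 6
```

    Because only the bag label is needed for training, this formulation matches the weak supervision described above: flights are labeled as incident or non-incident, while the time step responsible is inferred.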

    Improvement in Land Cover and Crop Classification based on Temporal Features Learning from Sentinel-2 Data Using Recurrent-Convolutional Neural Network (R-CNN)

    Understanding current land cover, along with monitoring change over time, is vital for agronomists and agricultural agencies responsible for land management. The increasing spatial and temporal resolution of globally available satellite images, such as those provided by Sentinel-2, creates new possibilities for researchers to use freely available multi-spectral optical images, with decametric spatial resolution and frequent revisits, for remote sensing applications such as land cover and crop classification (LC&CC), agricultural monitoring and management, and environmental monitoring. Existing solutions dedicated to cropland mapping can be categorized as per-pixel or object-based approaches. However, the task remains challenging when many classes of agricultural crops are considered at a massive scale. In this paper, a novel deep learning model for pixel-based LC&CC is developed and implemented based on Recurrent Neural Networks (RNN) in combination with Convolutional Neural Networks (CNN), using multi-temporal Sentinel-2 imagery of the central-north part of Italy, which has a diverse agricultural system dominated by economic crop types. The proposed methodology is capable of automated feature extraction by learning the time correlation of multiple images, which reduces manual feature engineering and the modeling of crop phenological stages. Fifteen classes, including major agricultural crops, were considered in this study. We also tested other widely used traditional machine learning algorithms for comparison, such as support vector machine (SVM), random forest (RF), kernel SVM, and gradient boosting machine (XGBoost). The overall accuracy achieved by our proposed Pixel R-CNN was 96.5%, a considerable improvement over existing mainstream methods. This study showed that the Pixel R-CNN-based model offers a highly accurate way to assess and employ time-series data for multi-temporal classification tasks.
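    As a rough illustration of per-pixel multi-temporal classification (not the paper's Pixel R-CNN), each pixel can be represented as a vegetation-index time series and assigned to the crop class whose mean phenological trajectory it most resembles. The crop curves and nearest-centroid rule below are invented for illustration; in practice the index would be computed from Sentinel-2 bands (e.g. NDVI from B4 and B8) and the classifier would be the learned R-CNN:

```python
import numpy as np

def classify_pixels(series, centroids):
    """Assign each pixel's index time series to the nearest class centroid.

    series:    (pixels, timesteps) per-pixel NDVI-like trajectories
    centroids: (classes, timesteps) mean trajectory per crop class
    """
    d = np.linalg.norm(series[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

# Toy example: two crop classes with distinct phenological curves over 6 dates.
t = np.linspace(0.0, 1.0, 6)
maize = 0.2 + 0.6 * np.sin(np.pi * t)   # greens up and peaks mid-season
winter_wheat = 0.8 - 0.6 * t            # declines toward harvest
centroids = np.stack([maize, winter_wheat])

# Two noisy pixels, one per class.
pixels = np.stack([maize + 0.02, winter_wheat - 0.02])
labels = classify_pixels(pixels, centroids)
print(labels)   # [0 1]
```

    The point of the sketch is the data layout: temporal behavior, not any single acquisition date, is what separates crop classes, which is why the paper's recurrent component matters.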

    Development and validation of an automatic thermal imaging process for assessing plant water status

    [EN] Leaf temperature is a physiological trait that can be used for monitoring plant water status. Nowadays, by means of thermography, canopy temperature can be determined remotely, which makes it crucial to process the images automatically. In the present work, a methodology for the automatic analysis of frontal images taken of individual trees was developed. The procedure can be used when cameras capture thermal and visible scenes at the same time, so it is not necessary to reference the images; in this way, no operator participation was required during batch processing. The procedure was developed by means of an unsupervised classification of the visible image, from which the presence of sky and soil could be detected. Where present, a mask was applied to extract the intermediate pixels and calculate canopy temperature from the thermal image. At the same time, sunlit and shaded leaves could be detected and isolated. Thus, the procedure allowed canopy temperature to be determined separately for the more exposed part of the canopy and for the shaded portion. The methodology was validated using images taken in several regulated deficit irrigation trials in persimmon and two citrus cultivars (Clementina de Nules and Navel Lane-Late). Overall, results indicated that similar canopy temperatures were obtained with the automatic process and the manual procedure. The procedure developed drastically reduces the time needed for image analysis, particularly since no operator participation is required. This tool will facilitate ongoing investigations into the feasibility of thermography for detecting plant water status in woody perennial crops with discontinuous canopies. Preliminary results indicate that the type of crop evaluated has an important influence on the results obtained from thermographic imagery. Thus, in persimmon trees there were good correlations between canopy temperature and plant water status, while in the Clementina de Nules and Navel Lane-Late citrus cultivars, canopy temperature differences among trees could not be related to tree-to-tree variations in plant water status.

    This research was supported by funds from the Instituto Valenciano de Investigaciones Agrarias and the "Denominacion de origen Caqui Ribera del Xuquer" via "Proyecto Integral Caqui", and from projects Rideco-Consolider CSD2006-0067 and Interreg IV Sudoe Telerieg. Thanks are also due to J. Castel, E. Badal, I. Buesa and D. Guerra for assistance with field work, and to the Servicio de Tecnologia del Riego for providing the meteorological data.

    Jiménez Bello, M. Á.; Ballester, C.; Castel Sanchez, R.; Intrigliolo Molina, D. S. (2011). Development and validation of an automatic thermal imaging process for assessing plant water status. Agricultural Water Management, 98, 1497-1504. https://doi.org/10.1016/j.agwat.2011.05.002
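    The masking step can be sketched as follows, assuming co-registered thermal and visible images. Simple brightness thresholds stand in here for the unsupervised classification used in the study, and all values are toy data:

```python
import numpy as np

def canopy_mask(visible):
    """Crude stand-in for the study's unsupervised classification:
    treat very bright pixels as sky and very dark pixels as soil,
    keeping the intermediate pixels as canopy.

    visible: (H, W) grayscale reflectance in [0, 1].
    """
    return (visible > 0.2) & (visible < 0.8)

def canopy_temperature(thermal, visible):
    """Mean temperature (deg C) of canopy pixels, using the visible-image
    mask to select pixels in the co-registered thermal image."""
    mask = canopy_mask(visible)
    return thermal[mask].mean()

# Toy 4x4 scene: top row sky (bright), bottom row soil (dark, hot),
# middle rows canopy (intermediate brightness, cooler readings).
visible = np.array([[0.9] * 4, [0.5] * 4, [0.5] * 4, [0.1] * 4])
thermal = np.array([[35.0] * 4, [24.0] * 4, [26.0] * 4, [40.0] * 4])
print(canopy_temperature(thermal, visible))   # 25.0
```

    Splitting the canopy mask further by brightness would give the sunlit/shaded separation described above.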

    Intelligent control of mobile robot with redundant manipulator & stereovision: quantum / soft computing toolkit

    The task of designing an intelligent control system using soft and quantum computational intelligence technologies is discussed. An example control object, a mobile robot with a redundant robotic manipulator and stereo vision, is introduced. Robust knowledge bases are designed using a developed computational intelligence toolkit, the quantum/soft computing toolkit (QC/SCOptKB™). The self-organization of the knowledge bases of fuzzy homogeneous regulators through the application of end-to-end quantum computing IT is described, as is the soft-computing-based coordination control between the mobile robot and the redundant manipulator with stereo vision. The general design methodology of a generalizing control unit based on the physical laws of quantum computing (the quantum information-thermodynamic trade-off between control quality distribution and the knowledge base self-organization goal) is considered, and the modernization of the pattern recognition system based on stereo vision technology is presented. The effectiveness of the proposed methodology is demonstrated by comparison with control system structures based on soft computing in unforeseen control situations involving the sensor system.
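    As a minimal illustration of the kind of fuzzy regulator whose knowledge base such a toolkit would tune (not the QC/SCOptKB™ itself), consider a one-input controller with three triangular membership functions and weighted-average defuzzification; the rule base and ranges are invented:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_regulator(error):
    """One-input fuzzy regulator with a three-rule knowledge base:
    error negative -> push output up, error zero -> hold, error positive -> push down.
    The crisp action is the activation-weighted average of the rule actions.
    """
    rules = [
        (tri(error, -2.0, -1.0, 0.0), +1.0),  # IF error is negative THEN action +1
        (tri(error, -1.0, 0.0, 1.0), 0.0),    # IF error is zero     THEN action  0
        (tri(error, 0.0, 1.0, 2.0), -1.0),    # IF error is positive THEN action -1
    ]
    total = sum(mu for mu, _ in rules)
    if total == 0.0:
        return 0.0   # no rule fires outside [-2, 2]
    return sum(mu * action for mu, action in rules) / total

print(fuzzy_regulator(-0.5))   # 0.5: halfway between "push up" and "hold"
```

    In the paper's terms, the membership functions and rule actions form the knowledge base; self-organization would adjust them rather than leave them hand-fixed as here.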

    Detection of Road Conditions Using Image Processing and Machine Learning Techniques for Situation Awareness

    In this modern era, land transport is increasing dramatically, and self-driving cars and Advanced Driver Assistance Systems (ADAS) are now in public demand. For these types of vehicles, detecting road conditions is mandatory. On the other hand, increasing the number of roads to match the number of vehicles is not possible, so software is the only alternative solution, and a road-conditions detection system can help address these issues. To solve this problem, image processing and machine learning have been applied in a project titled Detection of Road Conditions Using Image Processing and Machine Learning Techniques for Situation Awareness. Many road conditions could be considered, but the main focus is on the detection of potholes, maintenance signs, and lanes. Image processing and machine learning are combined in the system for real-time detection: machine learning is applied to maintenance sign detection, while image processing is applied to detecting lanes and potholes. The system marks lanes with colored lines, marks potholes with a red rectangular box, and, when a road maintenance sign is detected, provides information about that sign. By observing all these cues, the driver can assess the road condition. Situation awareness, in turn, is the ability to perceive information from one's surroundings and to make decisions based on the perceived information and on prediction.
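    A toy sketch of the pothole-marking step (not the project's actual pipeline): potholes tend to appear as dark patches on the road surface, so thresholding a grayscale patch and taking the bounding box of the dark pixels yields the rectangle to draw. The threshold and image here are hypothetical:

```python
import numpy as np

def pothole_box(gray, dark_thresh=0.3):
    """Toy pothole detector: threshold the grayscale road image for dark
    pixels and return their bounding box (row0, col0, row1, col1), which a
    real system would draw as the red rectangle; None if nothing is found.
    """
    rows, cols = np.where(gray < dark_thresh)
    if rows.size == 0:
        return None
    return int(rows.min()), int(cols.min()), int(rows.max()), int(cols.max())

# Toy 6x6 road patch with a dark 2x2 pothole at rows 2-3, cols 3-4.
road = np.full((6, 6), 0.7)
road[2:4, 3:5] = 0.1
print(pothole_box(road))   # (2, 3, 3, 4)
```

    A production pipeline would add perspective correction, connected-component filtering, and per-region size checks before accepting a detection.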

    Real-time, noise and drift resilient formaldehyde sensing at room temperature with aerogel filaments

    Formaldehyde, a known human carcinogen, is a common indoor air pollutant. However, its real-time and selective recognition among interfering gases remains challenging, especially for low-power sensors suffering from noise and baseline drift. We report a fully 3D-printed quantum dot/graphene-based aerogel sensor for highly sensitive, real-time recognition of formaldehyde at room temperature. By optimising the morphology and doping of the printed structures, we achieve a record-high response of 15.23 percent to 1 part-per-million formaldehyde and an ultralow detection limit of 8.02 parts-per-billion while consuming only 130 µW of power. Based on measured dynamic response snapshots, we also develop an intelligent computational algorithm for robust and accurate detection in real time despite substantial simulated noise and baseline drift, hitherto unachievable for room-temperature sensors. Our framework, combining materials engineering, structural design, and a computational algorithm that captures dynamic response, offers unprecedented real-time identification capabilities for formaldehyde and other volatile organic compounds at room temperature.

    Comment: Main manuscript: 21 pages, 5 figures. Supplementary: 21 pages, 13 figures, 2 tables.
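    One simple way to make a response measurement resilient to baseline drift (a sketch of the general idea, not the paper's algorithm) is to fit the drift on a pre-exposure window, extrapolate it over the whole trace, and measure the largest relative deviation from that extrapolated baseline; the trace below is synthetic:

```python
import numpy as np

def drift_corrected_response(signal, baseline_len=20):
    """Sensor response (percent) that is robust to linear baseline drift:
    fit a line to the pre-exposure window, extrapolate it, and report the
    largest relative deviation of the trace from that baseline.
    """
    t = np.arange(signal.size)
    coef = np.polyfit(t[:baseline_len], signal[:baseline_len], 1)
    baseline = np.polyval(coef, t)
    return 100.0 * np.max(np.abs(signal - baseline) / baseline)

# Toy trace: steadily drifting baseline plus a gas-exposure dip at t = 40..59.
t = np.arange(100)
trace = 1000.0 + 0.5 * t
trace[40:60] -= 150.0
print(round(float(drift_corrected_response(trace)), 2))   # ~14.71 percent response
```

    A naive response computed against the initial reading alone would conflate the drift with the gas signal; subtracting the fitted baseline isolates the exposure event.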

    Virtual reality for 3D histology: multi-scale visualization of organs with interactive feature exploration

    Virtual reality (VR) enables data visualization in an immersive and engaging manner, and it can be used to create new ways to explore scientific data. Here, we use VR for the visualization of 3D histology data, creating a novel interface for digital pathology. Our contribution includes 3D modeling of a whole organ and embedded objects of interest, fusing the models with associated quantitative features and full-resolution serial section patches, and implementing the virtual reality application. Our VR application is multi-scale in nature, covering two object levels that represent different ranges of detail, namely the organ level and the sub-organ level. In addition, the application includes several data layers, including the measured histology image layer and multiple representations of quantitative features computed from the histology. In this interactive VR application, the user can set visualization properties, select different samples and features, and interact with various objects. In this work, we used whole mouse prostates (organ level) with prostate cancer tumors (sub-organ objects of interest) as example cases and included quantitative histological features relevant to tumor biology in the VR model. Because the histology data are processed automatically, our application can easily be adapted to visualize other organs and pathologies from various origins. Our application enables a novel way to explore high-resolution, multidimensional data for biomedical research purposes, and it can also be used in teaching and researcher training.