    Automated Activity Estimation of the Cold-Water Coral Lophelia pertusa by Multispectral Imaging and Computational Pixel Classification

    The cold-water coral Lophelia pertusa builds up bioherms that sustain high biodiversity in the deep ocean worldwide. Photographic monitoring of polyp activity is a helpful tool for characterizing the health status of the corals and for assessing anthropogenic impacts on the microhabitat. Discriminating active polyps from skeletons of white Lophelia pertusa is usually time-consuming and error-prone because the two are similar in color in common RGB camera footage. Acquiring more finely resolved spectral information can increase the contrast between polyp and skeleton segments and therefore support automated classification and accurate estimation of polyp activity. Underwater multispectral imaging systems can record the needed footage, but they are often expensive and bulky. Here we present results of a new, lightweight, compact, and low-cost deep-sea tunable LED-based underwater multispectral imaging system (TuLUMIS) with eight spectral channels. A branch of healthy white Lophelia pertusa was observed under controlled conditions in a laboratory tank, and spectral reflectance signatures were extracted from pixels of the polyps and skeleton of the observed coral. The results showed that polyps can be distinguished from the skeleton better by analyzing the eight-dimensional spectral reflectance signatures than by using three-channel RGB data. During a 72-hour laboratory monitoring of the coral at half-hour temporal resolution, polyp activity was estimated from the results of multispectral pixel classification using a support vector machine (SVM) approach. The computationally estimated polyp activity was consistent with the manual annotation, yielding a correlation coefficient of 0.957.
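
    As an illustration of the classification step this abstract describes, here is a minimal sketch: per-pixel eight-channel reflectance signatures fed to a scikit-learn SVM. The array shapes, the RBF kernel, and the activity proxy are our assumptions, not details taken from the paper.

```python
# Minimal sketch of per-pixel SVM classification of 8-channel multispectral
# data, in the spirit of the study above; shapes, kernel choice and the
# activity proxy are illustrative assumptions.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder training data: annotated pixels with 8-band reflectance
# signatures; 1 = active polyp, 0 = skeleton (hypothetical encoding).
rng = np.random.default_rng(0)
X_train = rng.random((1000, 8))
y_train = rng.integers(0, 2, 1000)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_train, y_train)

# Classify every pixel of a new 8-channel image (H x W x 8).
image = rng.random((120, 160, 8))
labels = clf.predict(image.reshape(-1, 8)).reshape(120, 160)

# A simple activity proxy: fraction of pixels classified as polyp.
activity = labels.mean()
print(f"estimated polyp-pixel fraction: {activity:.3f}")
```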

    Sustainable marine ecosystems: deep learning for water quality assessment and forecasting

    Appropriate management of the available resources within oceans and coastal regions is vital to guarantee their sustainable development and preservation, and water quality is a key element. Leveraging a combination of cross-disciplinary technologies, including Remote Sensing (RS), Internet of Things (IoT), Big Data, cloud computing, and Artificial Intelligence (AI), is essential to attain this aim. In this paper, we review methodologies and technologies for water quality assessment that contribute to the sustainable management of marine environments. Specifically, we focus on Deep Learning (DL) strategies for water quality estimation and forecasting. The analyzed literature is classified by type of task, scenario, and architecture. Moreover, several applications, including coastal management and aquaculture, are surveyed. Finally, we discuss open issues still to be addressed and potential research lines where transfer learning, knowledge fusion, reinforcement learning, edge computing, and decision-making policies are expected to be the main agents involved.
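
    One of the DL strategies such a review covers is sequence modeling for forecasting. Below is a hedged sketch of a small LSTM forecaster for a water-quality variable; the window length, feature count, and layer sizes are illustrative assumptions, not an architecture from the paper.

```python
# Hedged sketch: a small LSTM that forecasts the next value of a
# water-quality variable (e.g., chlorophyll-a) from a window of past
# multivariate sensor readings. All sizes and data are placeholders.
import numpy as np
import tensorflow as tf

WINDOW, N_FEATURES = 48, 5  # 48 past time steps of 5 sensor variables

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, N_FEATURES)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1),  # next-step estimate
])
model.compile(optimizer="adam", loss="mse")

# Placeholder data: (samples, window, features) -> (samples, 1).
X = np.random.rand(256, WINDOW, N_FEATURES).astype("float32")
y = np.random.rand(256, 1).astype("float32")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```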

    Feasibility Study for an Aquatic Ecosystem Earth Observing System Version 1.2.

    Many Earth observing sensors have been designed, built, and launched with primary objectives of either terrestrial or ocean remote sensing applications. The data from these sensors are often also used for freshwater, estuarine, and coastal water quality observations, bathymetry, and benthic mapping. However, such land- and ocean-specific sensors are not designed for these complex aquatic environments and consequently are unlikely to perform as well as a dedicated sensor would. As a CEOS action, CSIRO and DLR have taken the lead on a feasibility assessment to determine the benefits and technological difficulties of designing an Earth observing satellite mission focused on the biogeochemistry of inland, estuarine, deltaic, and near-coastal waters, as well as on mapping macrophytes, macro-algae, seagrasses, and coral reefs. These environments need higher spatial resolution than current and planned ocean colour sensors offer, and higher spectral resolution than current and planned land Earth observing sensors offer (with the exception of several R&D-type imaging spectrometry satellite missions). The results indicate that a dedicated sensor for (non-oceanic) aquatic ecosystems could be a multispectral sensor with ~26 bands in the 380-780 nm wavelength range for retrieving the aquatic ecosystem variables, plus another 15 spectral bands between 360-380 nm and 780-1400 nm for removing atmospheric and air-water interface effects. These requirements come very close to defining an imaging spectrometer with spectral bands between 360 and 1000 nm (suitable for Si-based detectors), possibly augmented by a SWIR imaging spectrometer. In that case the spectral bands would ideally have 5 nm spacing and Full Width Half Maximum (FWHM), although it may be necessary to widen the bands to 8 nm (between 380 and 780 nm, where the fine spectral features occur, mainly due to photosynthetic or accessory pigments) to obtain enough signal to noise. The spatial resolution of such a global mapping mission would be between ~17 and ~33 m, enabling imaging of the vast majority of water bodies (lakes, reservoirs, lagoons, estuaries, etc.) larger than 0.2 ha, and of ~25% of river reaches globally (at ~17 m resolution), whilst maintaining sufficient radiometric resolution.
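
    A quick back-of-the-envelope check of the spatial-resolution figures quoted above (the 0.2 ha threshold and the 17/33 m resolutions are from the text; the pixel-count reading of them is ours):

```python
# How many pixels cover a 0.2 ha (2000 m^2) water body at the two ends
# of the proposed ground sample distance (GSD) range.
for gsd_m in (17, 33):
    pixel_area_m2 = gsd_m ** 2
    pixels_per_body = 2000 / pixel_area_m2
    print(f"{gsd_m} m GSD: {pixel_area_m2} m^2/pixel, "
          f"~{pixels_per_body:.1f} pixels per 0.2 ha water body")
# 17 m GSD -> ~6.9 pixels, 33 m GSD -> ~1.8 pixels: the finer end of the
# range is what makes the smallest target water bodies resolvable.
```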

    Modeling and Simulation in Engineering

    This book provides an open platform to establish and share knowledge developed by scholars, scientists, and engineers from all over the world about various applications of modeling and simulation in the design process of products in various engineering fields. The book consists of 12 chapters arranged in two sections (3D Modeling and Virtual Prototyping), reflecting the multidimensionality of applications related to modeling and simulation. Some of the most recent modeling and simulation techniques, as well as some of the most accurate and sophisticated software for treating complex systems, are applied. All the original contributions in this book are joined by the basic principle of a successful modeling and simulation process: as complex as necessary, and as simple as possible. The idea is to manipulate the simplifying assumptions in a way that reduces the complexity of the model (in order to enable real-time simulation) without altering the precision of the results.

    Applications of Machine Learning in Chemical and Biological Oceanography

    Machine learning (ML) refers to computer algorithms that predict a meaningful output or categorize complex systems based on large amounts of data. ML is applied in various areas, including natural science, engineering, space exploration, and even game development. This review focuses on the use of machine learning in the field of chemical and biological oceanography. ML is a promising tool for predicting global fixed nitrogen levels, partial pressure of carbon dioxide, and other chemical properties. Machine learning is also utilized in biological oceanography to detect planktonic forms in various kinds of imagery (e.g., microscopy, FlowCAM, and video recorders), in spectrometer data, and through other signal processing techniques. Moreover, ML has successfully classified marine mammals from their acoustics, detecting endangered mammalian and fish species in specific environments. Most importantly, using environmental data, ML proved to be an effective method for predicting hypoxic conditions and harmful algal bloom events, essential measurements for environmental monitoring. Furthermore, machine learning has been used to construct a number of databases for various species that will be useful to other researchers, and the creation of new algorithms will help the marine research community better comprehend the chemistry and biology of the ocean.
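
    As one concrete instance of the environmental-data applications mentioned above, here is a hedged sketch of a hypoxia classifier; the feature set, the 2 mg/L dissolved-oxygen threshold, and the random-forest choice are illustrative assumptions, not methods taken from the review.

```python
# Hedged sketch: flagging hypoxic conditions from environmental data with
# a random forest. Data are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical features: temperature, salinity, depth, nutrient load.
X = rng.random((500, 4))
dissolved_o2 = rng.random(500) * 8.0      # mg/L, placeholder values
y = (dissolved_o2 < 2.0).astype(int)      # 1 = hypoxic (assumed threshold)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```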

    NASA's surface biology and geology designated observable: A perspective on surface imaging algorithms

    The 2017–2027 National Academies' Decadal Survey, Thriving on Our Changing Planet, recommended Surface Biology and Geology (SBG) as a “Designated Targeted Observable” (DO). The SBG DO is based on the need for capabilities to acquire global, high spatial resolution, visible to shortwave infrared (VSWIR; 380–2500 nm; ~30 m pixel resolution) hyperspectral (imaging spectroscopy) and multispectral midwave and thermal infrared (MWIR: 3–5 μm; TIR: 8–12 μm; ~60 m pixel resolution) measurements with sub-monthly temporal revisits over terrestrial, freshwater, and coastal marine habitats. To address the various mission design needs, an SBG Algorithms Working Group of multidisciplinary researchers has been formed to review and evaluate the algorithms applicable to the SBG DO across a wide range of Earth science disciplines, including terrestrial and aquatic ecology, atmospheric science, geology, and hydrology. Here, we summarize current state-of-the-practice VSWIR and TIR algorithms that use airborne or orbital spectral imaging observations to address the SBG DO priorities identified by the Decadal Survey: (i) terrestrial vegetation physiology, functional traits, and health; (ii) inland and coastal aquatic ecosystem physiology, functional traits, and health; (iii) snow and ice accumulation, melting, and albedo; (iv) active surface composition (eruptions, landslides, evolving landscapes, hazard risks); (v) effects of changing land use on surface energy, water, momentum, and carbon fluxes; and (vi) managing agriculture, natural habitats, water use/quality, and urban development. We review existing algorithms in the following categories: snow/ice, aquatic environments, geology, and terrestrial vegetation, and summarize the community state of practice in each category. This effort synthesizes the findings of more than 130 scientists.

    Remote Sensing of the Aquatic Environments

    The book highlights recent research efforts in the monitoring of aquatic districts with remote sensing observations and proximal sensing technology integrated with laboratory measurements. Optical satellite imagery gathered at spatial resolutions down to a few meters has been used for quantitative estimation of harmful algal bloom extent and for Chl-a mapping, while winds and currents have been retrieved from SAR acquisitions. The knowledge and understanding gained from this book can be used for the sustainable management of bodies of water across our planet.

    Luminescence lifetime imaging of three-dimensional biological objects

    A major focus of current biological studies is to fill the knowledge gaps between cell, tissue and organism scales. To this end, a wide array of contemporary optical analytical tools enable multiparameter quantitative imaging of live and fixed cells, three-dimensional (3D) systems, tissues, organs and organisms in the context of their complex spatiotemporal biological and molecular features. In particular, the modalities of luminescence lifetime imaging, comprising fluorescence lifetime imaging (FLI) and phosphorescence lifetime imaging microscopy (PLIM), in synergy with Förster resonance energy transfer (FRET) assays, provide a wealth of information. On the application side, the luminescence lifetime of endogenous molecules inside cells and tissues, overexpressed fluorescent protein fusion biosensor constructs or probes delivered externally provide molecular insights at multiple scales into protein–protein interaction networks, cellular metabolism, dynamics of molecular oxygen and hypoxia, physiologically important ions, and other physical and physiological parameters. Luminescence lifetime imaging offers a unique window into the physiological and structural environment of cells and tissues, enabling a new level of functional and molecular analysis in addition to providing 3D spatially resolved and longitudinal measurements that can range from microscopic to macroscopic scale. We provide an overview of luminescence lifetime imaging and summarize key biological applications from cells and tissues to organisms.
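
    At the heart of FLI/PLIM is estimating a lifetime per pixel from a photon decay curve. The sketch below fits a mono-exponential decay by log-linear least squares; real instruments add IRF deconvolution and multi-exponential models, and all numbers here are synthetic assumptions.

```python
# Minimal sketch of the core lifetime-imaging computation: estimate tau
# from I(t) = A * exp(-t / tau) for one pixel's decay histogram.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 10e-9, 64)                  # 64 time gates over 10 ns
true_tau = 2.5e-9
ideal = 1000.0 * np.exp(-t / true_tau)         # noiseless decay
counts = rng.poisson(ideal).astype(float)      # photon-counting noise

# Log-linear least squares: log I = log A - t / tau, so slope = -1 / tau.
mask = counts > 0
slope, _ = np.polyfit(t[mask], np.log(counts[mask]), 1)
tau_est = -1.0 / slope
print(f"estimated lifetime: {tau_est * 1e9:.2f} ns (true: 2.50 ns)")
```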

    Semantic Segmentation for Real-World Applications

    In computer vision, scene understanding aims at extracting useful information about a scene from raw sensor data. For instance, it can classify the whole image into a particular category (e.g., kitchen or living room) or identify important elements within it (e.g., bottles, cups on a table, or surfaces). In this general context, semantic segmentation provides a semantic label to every single element of the raw data, e.g., to all image pixels or to all points of a point cloud. This information is essential for many applications relying on computer vision, such as AR, driving, medical, or robotic applications. It provides computers with the understanding of the environment needed to make autonomous decisions, and gives detailed information to people interacting with intelligent systems. The current state of the art in semantic segmentation is led by supervised deep learning methods. However, real-world scenarios and conditions introduce several challenges and restrictions for the application of these models. This thesis tackles several of these challenges, namely: 1) the limited amount of labeled data available for training deep learning models; 2) the time and computation restrictions present in real-time applications and/or in systems with limited computational power, such as a mobile phone or an IoT node; and 3) the ability to perform semantic segmentation when dealing with sensors other than the standard RGB camera. The main contributions of this thesis are the following:

    1. A novel approach to the problem of limited annotated data: training semantic segmentation models from sparse annotations. Fully supervised deep learning models lead the state of the art, but we show how to train them using only a few sparsely labeled pixels per training image (a minimal sketch of this sparse-label setup follows this entry). Our approach obtains performance similar to that of models trained with fully labeled images. We demonstrate the relevance of this technique in environmental monitoring scenarios, where sparse image labels provided by human experts are very common, as well as in more general domains.

    2. Also dealing with limited training data, a novel method for semi-supervised semantic segmentation, i.e., when only a small number of fully labeled images and a large set of unlabeled data are available. We demonstrate how contrastive learning can be applied to the semantic segmentation task and show its advantages, especially when the availability of labeled data is limited. Our approach improves on state-of-the-art results, showing the potential of contrastive learning in this task; learning from unlabeled data is an economical solution that opens great opportunities for real-world scenarios.

    3. Novel efficient image semantic segmentation models. We develop semantic segmentation models that are efficient in execution time, memory, and computation. Some of our models are able to run on CPU at high speed with high accuracy. This matters for real set-ups and applications, since high-end GPUs are not always available, and models that consume fewer resources widen the range of applications that can benefit from them.

    4. Novel methods for semantic segmentation with non-RGB sensors. We propose a method for LiDAR point cloud segmentation that combines efficient learning operations in both 2D and 3D, surpassing state-of-the-art segmentation performance at really fast rates. We also show how to improve the robustness of these models by tackling the overfitting and domain adaptation problems. Besides, we present the first work on semantic segmentation with event-based cameras, coping with the lack of labeled data.

    To increase the impact of these contributions and to ease their application in real-world settings, we have made an open-source implementation of all the proposed solutions available to the scientific community.
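
    A minimal sketch of the sparse-annotation training setup named in the first contribution: cross-entropy computed only on the few labeled pixels by assigning every other pixel an ignore index. The 1x1-conv stand-in network, the 255 ignore value, and the 21-class setup are common conventions assumed here, not details from the thesis.

```python
# Hedged sketch: training a segmentation model from sparsely labeled pixels.
# Unlabeled pixels carry the ignore index and contribute no gradient.
import torch
import torch.nn as nn

IGNORE = 255
model = nn.Conv2d(3, 21, kernel_size=1)   # stand-in for a real segmentation net

images = torch.rand(4, 3, 64, 64)         # placeholder batch
labels = torch.full((4, 64, 64), IGNORE, dtype=torch.long)
# Suppose an expert labeled only ~50 scattered pixels per image:
ys, xs = torch.randint(0, 64, (2, 50))
labels[:, ys, xs] = torch.randint(0, 21, (50,))

criterion = nn.CrossEntropyLoss(ignore_index=IGNORE)
loss = criterion(model(images), labels)   # loss over labeled pixels only
loss.backward()
```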