
    Shallow Water Bathymetry Mapping from UAV Imagery based on Machine Learning

    The determination of accurate bathymetric information is a key element for nearshore activities and hydrological studies such as coastal engineering applications, sedimentary processes, hydrographic surveying, as well as archaeological mapping and biological research. UAV imagery processed with Structure from Motion (SfM) and Multi-View Stereo (MVS) techniques can provide a low-cost alternative to established shallow-seabed mapping techniques while also offering important visual information. Nevertheless, water refraction poses significant challenges for depth determination. Until now, this problem has been addressed through customized image-based refraction correction algorithms or by modifying the collinearity equation. In this paper, in order to overcome water refraction errors, we employ machine learning tools that are able to learn the systematic underestimation of the estimated depths. In the proposed approach, based on known depth observations from bathymetric LiDAR surveys, a support vector regression (SVR) model was developed that more accurately estimates the real depths of point clouds derived from SfM-MVS procedures. Experimental results over two test sites, along with the performed quantitative validation, indicated the high potential of the developed approach.
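    A minimal sketch of the regression idea described above, assuming scikit-learn's SVR and synthetic stand-in data in place of the paper's SfM-MVS depths and bathymetric LiDAR ground truth; the division by 1.34 merely mimics the systematic refraction-induced underestimation, and all values and hyperparameters are illustrative:

```python
# Hedged sketch: learn the systematic depth underestimation with SVR.
# The apparent/true depths are synthetic placeholders, not the paper's data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

rng = np.random.default_rng(0)
true_depth = rng.uniform(0.5, 10.0, 500)                       # stand-in LiDAR depths (m)
apparent_depth = true_depth / 1.34 + rng.normal(0, 0.05, 500)  # refraction bias + noise

X_train, X_test, y_train, y_test = train_test_split(
    apparent_depth.reshape(-1, 1), true_depth, test_size=0.25, random_state=0)

model = SVR(kernel="rbf", C=10.0, epsilon=0.05)
model.fit(X_train, y_train)                                    # learn apparent -> real depth

corrected = model.predict(X_test)
rmse = np.sqrt(np.mean((corrected - y_test) ** 2))
print(f"RMSE after correction: {rmse:.3f} m")
```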

    ACCURATE 3D SCANNING OF DAMAGED ANCIENT GREEK INSCRIPTIONS FOR REVEALING WEATHERED LETTERS

    In this paper, two non-invasive, non-destructive alternatives to the traditional and invasive technique of squeezes are presented, together with specially developed processing methods, aiming to help epigraphists reveal and analyse weathered letters in ancient Greek inscriptions carved in masonry or marble. The resulting 3D model serves as a detailed basis on which epigraphists can attempt to decipher the inscription. The data were collected using a structured light scanner. The creation of the final accurate three-dimensional model is a complicated procedure requiring substantial computational cost and human effort. It includes collecting geometric data under constraints of limited space and time, creating the surface, filtering noise, and merging individual surfaces. The use of structured light scanners is time consuming and requires costly hardware and software. Therefore, an alternative methodology for collecting 3D data of the inscriptions was also implemented for comparison: image sequences taken from varying distances were collected using a calibrated DSLR camera, aiming to reconstruct the 3D scene through SfM techniques and to evaluate the efficiency, precision, and level of detail of the resulting reconstructed inscriptions. Problems in the acquisition process, as well as difficulties in the alignment step and mesh optimization, are also discussed. A meta-processing framework is proposed and analysed. Finally, the results of processing and analysis and the different 3D models are critically inspected and then evaluated by a specialist in terms of accuracy, quality, and detail of the model, and its capability of revealing damaged and "hidden" letters.
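    The authors' exact toolchain is not named in the abstract; as a hedged illustration of the generic steps it mentions (noise filtering and surface creation from scanned points), the following sketch uses the open-source Open3D library, with a hypothetical input file and illustrative parameter values:

```python
# Hedged sketch: point-cloud noise filtering and surface reconstruction.
# "inscription_scan.ply" and all parameter values are hypothetical.
import open3d as o3d

pcd = o3d.io.read_point_cloud("inscription_scan.ply")

# Statistical outlier removal as a simple noise filter.
pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# Normals are needed for Poisson surface reconstruction.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.005, max_nn=30))

mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=10)
o3d.io.write_triangle_mesh("inscription_mesh.ply", mesh)
```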

    A Self-Organizing Algorithm for Modeling Protein Loops

    Protein loops, the flexible short segments connecting two stable secondary structural units in proteins, play a critical role in protein structure and function. Constructing chemically sensible conformations of protein loops that seamlessly bridge the gap between the anchor points without introducing any steric collisions remains an open challenge. A variety of algorithms have been developed to tackle the loop closure problem, ranging from inverse kinematics to knowledge-based approaches that utilize pre-existing fragments extracted from known protein structures. However, many of these approaches focus on the generation of conformations that mainly satisfy the fixed end point condition, leaving the steric constraints to be resolved in subsequent post-processing steps. In the present work, we describe a simple solution that simultaneously satisfies not only the end point and steric conditions, but also chirality and planarity constraints. Starting from random initial atomic coordinates, each individual conformation is generated independently by using a simple alternating scheme of pairwise distance adjustments of randomly chosen atoms, followed by fast geometric matching of the conformationally rigid components of the constituent amino acids. The method is conceptually simple, numerically stable and computationally efficient. Very importantly, additional constraints, such as those derived from NMR experiments, hydrogen bonds or salt bridges, can be incorporated into the algorithm in a straightforward and inexpensive way, making the method ideal for solving more complex multi-loop problems. The remarkable performance and robustness of the algorithm are demonstrated on a set of protein loops of length 4, 8, and 12 that have been used in previous studies
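    A bare-bones sketch of the core self-organizing step named above (repeatedly picking atom pairs and nudging them toward target distances), with synthetic coordinates and constraints; the published algorithm additionally enforces end-point, chirality, and planarity conditions and performs rigid-fragment matching:

```python
# Hedged sketch: self-organizing pairwise distance adjustment.
# Coordinates and distance targets are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(1)
n_atoms = 20
coords = rng.uniform(-5.0, 5.0, (n_atoms, 3))    # random initial coordinates

# Toy constraints: consecutive "atoms" should sit ~1.5 units apart.
bonds = [(i, i + 1, 1.5) for i in range(n_atoms - 1)]

for _ in range(5000):
    i, j, target = bonds[rng.integers(len(bonds))]
    delta = coords[j] - coords[i]
    dist = np.linalg.norm(delta)
    if dist < 1e-9:
        continue
    # Move each atom by half the error, along the connecting line.
    correction = 0.5 * (dist - target) / dist * delta
    coords[i] += correction
    coords[j] -= correction

errors = [abs(np.linalg.norm(coords[j] - coords[i]) - t) for i, j, t in bonds]
print(f"max residual distance error: {max(errors):.4f}")
```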

    Under and Through Water Datasets for Geospatial Studies: the 2023 ISPRS Scientific Initiative “NAUTILUS”

    Benchmark datasets have become increasingly widespread in the scientific community as a means of comparing, validating, and improving theories and techniques, thanks to more affordable means of sharing. While this especially holds for test sites and data collected above the water, publicly accessible benchmark activities for geospatial analyses in the underwater environment are not very common. Applying geomatic techniques underwater is challenging and expensive, especially when dealing with deep water and offshore operations. Moreover, benchmarking requires ground truth data for which, in water, several open issues exist concerning geometry and radiometry. Recognizing this scientific and technological challenge, the NAUTILUS (uNder And throUgh waTer datasets for geospatIaL stUdieS) project aims to create guidelines for new multi-sensor/cross-modality benchmark datasets. The project focuses on (i) surveying the actual needs and gaps in through-the-water and underwater geospatial applications through a questionnaire and interviews, (ii) launching a unique publicly available database collecting existing datasets scattered across the web and the literature, and (iii) designing and identifying suitable test site(s) and methodologies to deliver a brand-new multi-sensor/cross-modality benchmark dataset to the extended underwater community. The project outputs are available to researchers and practitioners in underwater-measurement-related domains, who can now access a comprehensive resource providing a synthesis of open questions and already available data. In doing so, past research efforts to collect and publish datasets receive additional credit and visibility.

    HiTSEE KNIME: a visualization tool for hit selection and analysis in high-throughput screening experiments for the KNIME platform

    We present HiTSEE (High-Throughput Screening Exploration Environment), a visualization tool for the analysis of large chemical screens used to examine biochemical processes. The tool supports the investigation of structure-activity relationships (SAR analysis) and, through a flexible interaction mechanism, the navigation of large chemical spaces. Our approach projects one or a few molecules of interest, expands around their neighborhood, and thereby allows large chemical libraries to be explored without creating an all-encompassing overview of the whole library. We describe the requirements we collected during our collaboration with biologists and chemists, the design rationale behind the tool, and two case studies on different datasets. The described integration (HiTSEE KNIME) into the KNIME platform provides additional flexibility in adapting our approach to a wide range of biochemical problems and enables other research groups to use HiTSEE.
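    As a hedged illustration of the neighborhood-expansion idea (fix a focus molecule, then pull in its most similar library members), the following sketch uses RDKit Morgan fingerprints with Tanimoto similarity; HiTSEE's actual similarity measure, projection, and cutoff are not specified here, and the SMILES strings are placeholders:

```python
# Hedged sketch: expand the neighborhood of a focus molecule by similarity.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

library_smiles = ["CCO", "CCN", "CCC", "c1ccccc1", "c1ccccc1O", "CC(=O)O"]
focus = Chem.MolFromSmiles("c1ccccc1")              # focus molecule of interest

def fingerprint(mol):
    return AllChem.GetMorganFingerprintAsBitVect(mol, radius=2, nBits=2048)

focus_fp = fingerprint(focus)
neighbors = []
for smi in library_smiles:
    sim = DataStructs.TanimotoSimilarity(
        focus_fp, fingerprint(Chem.MolFromSmiles(smi)))
    if sim >= 0.3:                                  # hypothetical cutoff
        neighbors.append((smi, sim))

for smi, sim in sorted(neighbors, key=lambda t: -t[1]):
    print(f"{smi}: Tanimoto {sim:.2f}")
```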

    CheS-Mapper - Chemical Space Mapping and Visualization in 3D

    Analyzing chemical datasets is a challenging task for scientific researchers in the field of chemoinformatics. It is important, yet difficult, to understand the relationship between the structure of chemical compounds, their physico-chemical properties, and biological or toxic effects. In this respect, visualization tools can help researchers better comprehend the underlying correlations. Our recently developed 3D molecular viewer CheS-Mapper (Chemical Space Mapper) divides large datasets into clusters of similar compounds and then arranges them in 3D space such that their spatial proximity reflects their similarity. The user can indirectly determine similarity by selecting which features to employ in the process. The tool can use and calculate different kinds of features, such as structural fragments as well as quantitative chemical descriptors. These features can be highlighted within CheS-Mapper, which helps the chemist better understand patterns and regularities and relate the observations to established scientific knowledge. Finally, the tool can also be used to select and export specific subsets of a given dataset for further analysis.
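    A minimal sketch of the cluster-then-embed idea, assuming numeric descriptor vectors, k-means clustering, and a PCA embedding into 3D; CheS-Mapper lets the user choose the features, clusterer, and embedding, so this is one illustrative configuration with a random placeholder descriptor matrix:

```python
# Hedged sketch: cluster compounds on descriptors, then embed the dataset
# in 3D so that spatial proximity reflects descriptor similarity.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
descriptors = rng.normal(size=(100, 12))   # placeholder: 100 compounds, 12 descriptors

X = StandardScaler().fit_transform(descriptors)
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)
coords_3d = PCA(n_components=3).fit_transform(X)

for cluster in range(5):
    members = labels == cluster
    centroid = coords_3d[members].mean(axis=0)
    print(f"cluster {cluster}: {members.sum()} compounds, centroid {np.round(centroid, 2)}")
```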

    Structure-based classification and ontology in chemistry

    Background: Recent years have seen an explosion in the availability of data in the chemistry domain. With this information explosion, however, retrieving relevant results from the available information, and organising those results, become even harder problems. Computational processing is essential to filter and organise the available resources so as to better facilitate the work of scientists. Ontologies encode expert domain knowledge in a hierarchically organised, machine-processable format. One such ontology for the chemical domain is ChEBI. ChEBI provides a classification of chemicals based on their structural features and a role- or activity-based classification. An example of a structure-based class is 'pentacyclic compound' (compounds containing five-ring structures), while an example of a role-based class is 'analgesic', since many different chemicals can act as analgesics without sharing structural features. Structure-based classification in chemistry exploits elegant regularities and symmetries in the underlying chemical domain. As yet, there has been neither a systematic analysis of the types of structural classification in use in chemistry nor a comparison to the capabilities of available technologies.
    Results: We analyze the different categories of structural classes in chemistry, presenting a list of patterns for features found in class definitions. We compare these patterns of class definition to tools which allow for automation of hierarchy construction within cheminformatics and within logic-based ontology technology, going into detail in the latter case with respect to the expressive capabilities of the Web Ontology Language and recent extensions for modelling structured objects. Finally, we discuss the relationships and interactions between cheminformatics approaches and logic-based approaches.
    Conclusion: Systems that perform intelligent reasoning tasks on chemistry data require a diverse set of underlying computational utilities, including algorithmic, statistical, and logic-based tools. For the task of automatic structure-based classification of chemical entities, essential to managing the vast swathes of chemical data being brought online, systems which are capable of hybrid reasoning combining several different approaches are crucial. We provide a thorough review of the available tools and methodologies, and identify areas of open research.
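    As a hedged illustration of the 'pentacyclic compound' example above, structure-based classification can be automated by testing a structural predicate directly; the sketch below uses RDKit's ring count, with placeholder molecules, and is not the ChEBI classification pipeline itself:

```python
# Hedged sketch: classify molecules as 'pentacyclic compound' (five rings)
# from structure alone. SMILES strings are illustrative placeholders.
from rdkit import Chem
from rdkit.Chem import rdMolDescriptors

candidates = {
    "naphthalene": "c1ccc2ccccc2c1",                          # 2 rings
    "five-ring chain (hypothetical)": "C1CC1C2CC2C3CC3C4CC4C5CC5",  # 5 rings
}

for name, smi in candidates.items():
    mol = Chem.MolFromSmiles(smi)
    n_rings = rdMolDescriptors.CalcNumRings(mol)
    label = "pentacyclic compound" if n_rings == 5 else f"{n_rings}-ring compound"
    print(f"{name}: {label}")
```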