
    Automated classification of three-dimensional reconstructions of coral reefs using convolutional neural networks

    © The Author(s), 2020. This article is distributed under the terms of the Creative Commons Attribution License. The definitive version was published in Hopkinson, B. M., King, A. C., Owen, D. P., Johnson-Roberson, M., Long, M. H., & Bhandarkar, S. M. Automated classification of three-dimensional reconstructions of coral reefs using convolutional neural networks. PLoS One, 15(3), (2020): e0230671, doi: 10.1371/journal.pone.0230671.

    Coral reefs are biologically diverse and structurally complex ecosystems that have been severely affected by human actions. Consequently, there is a need for rapid ecological assessment of coral reefs, but current approaches require time-consuming manual analysis, either during a dive survey or on images collected during a survey. Reef structural complexity is essential for ecological function but is challenging to measure and is often relegated to simple metrics such as rugosity. Recent advances in computer vision and machine learning offer the potential to alleviate some of these limitations. We developed an approach to automatically classify 3D reconstructions of reef sections and assessed its accuracy. 3D reconstructions of reef sections were generated using commercial Structure-from-Motion software with images extracted from video surveys. To generate a 3D classified map, locations on the 3D reconstruction were mapped back into the original images to extract multiple views of each location. Several approaches were tested to merge information from multiple views of a point into a single classification; all used convolutional neural networks to classify or extract features from the images but differed in the strategy employed for merging information. The merging strategies comprised voting, probability averaging, and a learned neural-network layer. All approaches performed similarly, achieving overall classification accuracies of ~96% and >90% accuracy on most classes. With this high classification accuracy, these approaches are suitable for many ecological applications.

    This study was funded by grants from the Alfred P. Sloan Foundation (BMH, BR2014-049; https://sloan.org) and the National Science Foundation (MHL, OCE-1657727; https://www.nsf.gov). The funders had no role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript.
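    The abstract names three strategies for fusing per-view CNN outputs into one label per 3D point: voting, probability averaging, and a learned merging layer. The minimal NumPy sketch below illustrates the first two generically; the array shapes, class count, and function names are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def merge_views_by_averaging(view_probs: np.ndarray) -> int:
    """Average per-view class probabilities and return the winning class index.

    view_probs: shape (n_views, n_classes); each row is a softmax output from
    a CNN applied to one image view of the same 3D reconstruction point.
    """
    return int(np.argmax(view_probs.mean(axis=0)))

def merge_views_by_voting(view_probs: np.ndarray) -> int:
    """Majority vote over the per-view argmax predictions."""
    votes = view_probs.argmax(axis=1)
    return int(np.bincount(votes).argmax())

# Hypothetical example: three views of one point, four benthic classes
probs = np.array([
    [0.70, 0.20, 0.05, 0.05],
    [0.55, 0.30, 0.10, 0.05],
    [0.25, 0.60, 0.10, 0.05],
])
print(merge_views_by_averaging(probs))  # -> 0
print(merge_views_by_voting(probs))     # -> 0
```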

    Object classification in semi-structured environments using forward-looking sonar

    Underwater exploration using robots has increased in recent years. Automating tasks such as monitoring, inspection, and underwater maintenance requires an understanding of the robot's environment, and object recognition in the scene is becoming a critical issue for these systems. In this work, an underwater object classification pipeline applied to acoustic images acquired by a Forward-Looking Sonar (FLS) is studied. Object segmentation combines thresholding, connected-pixel search, and intensity-peak analysis. The object descriptor extracts intensity and geometric features from the detected objects. A comparison between Support Vector Machine, K-Nearest Neighbors, and Random Trees classifiers is presented. An open-source tool was developed to annotate and classify the objects and evaluate their classification performance. The proposed method efficiently segments and classifies the structures in the scene using a real dataset acquired by an underwater vehicle in a harbor area. Experimental results demonstrate the robustness and accuracy of the method described in this paper.

    • National Institute of Science and Technology - Integrated Oceanography and Multiple Uses of the Continental Shelf and Adjacent Ocean - Integrated Oceanography Center INCT-Mar COI, funded by CNPq (grant 610012/2011-8) • BS-NAVLOC (CAPES no. 321/15, DGPU 7523/14-9, MEC project PHBP14/00083)
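    The pipeline described above (thresholding, connected-pixel search, feature extraction, and a classical classifier) maps naturally onto standard OpenCV and scikit-learn calls. The sketch below is a generic illustration under that assumption; the threshold value, the four blob features, and the variable names are not taken from the paper.

```python
import cv2
import numpy as np
from sklearn.svm import SVC

def segment_sonar_frame(frame: np.ndarray, thresh: int = 120) -> np.ndarray:
    """Threshold a grayscale FLS frame and return one feature vector per blob."""
    _, binary = cv2.threshold(frame, thresh, 255, cv2.THRESH_BINARY)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
    features = []
    for i in range(1, n):  # label 0 is the background
        x, y, w, h, area = stats[i]
        blob = frame[y:y + h, x:x + w][labels[y:y + h, x:x + w] == i]
        # Simple geometric and intensity features (illustrative choice)
        features.append([area, w / max(h, 1), blob.mean(), blob.max()])
    return np.array(features)

# Train on previously annotated blobs, then classify blobs in a new frame
# (X_train, y_train, and new_frame are placeholders):
# clf = SVC(kernel="rbf").fit(X_train, y_train)
# predictions = clf.predict(segment_sonar_frame(new_frame))
```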

    SwimmerNET: Underwater 2D Swimmer Pose Estimation Exploiting Fully Convolutional Neural Networks

    Professional swimming coaches make use of videos to evaluate their athletes' performances. Specifically, the videos are manually analyzed in order to observe the movements of all parts of the swimmer's body during the exercise and to give indications for improving swimming technique. This operation is time-consuming, laborious and error-prone. In recent years, alternative technologies have been introduced in the literature, but they still have severe limitations that prevent their correct and effective use. In fact, the currently available techniques based on image analysis only apply to certain swimming styles; moreover, they are strongly influenced by disturbing elements (e.g., the presence of bubbles, splashes and reflections), resulting in poor measurement accuracy. The use of wearable sensors (accelerometers or photoplethysmographic sensors) or optical markers, although it can guarantee high reliability and accuracy, disturbs the performance of the athletes, who tend to dislike these solutions. In this work we introduce swimmerNET, a new marker-less 2D swimmer pose estimation approach based on the combined use of computer vision algorithms and fully convolutional neural networks. By using a single 8 Mpixel wide-angle camera, the proposed system is able to estimate the pose of a swimmer during exercise while guaranteeing adequate measurement accuracy. The method has been successfully tested on several athletes (i.e., with different physical characteristics and swimming techniques), obtaining an average error and a standard deviation (worst-case scenario for the dataset analyzed) of approximately 1 mm and 10 mm, respectively.
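    The abstract does not detail how the fully convolutional network reports landmark positions; a common convention for such networks is to output one heatmap per body landmark and take each landmark as the heatmap maximum. The sketch below shows only that assumed, generic decoding step, not the swimmerNET architecture itself.

```python
import numpy as np

def keypoints_from_heatmaps(heatmaps: np.ndarray, image_size: tuple) -> np.ndarray:
    """Convert network heatmaps of shape (n_joints, H, W) into (x, y) pixel keypoints.

    Each joint is located at the argmax of its heatmap and rescaled from the
    heatmap resolution to the original image resolution.
    """
    n_joints, hm_h, hm_w = heatmaps.shape
    img_h, img_w = image_size
    keypoints = np.zeros((n_joints, 2))
    for j in range(n_joints):
        y, x = np.unravel_index(np.argmax(heatmaps[j]), (hm_h, hm_w))
        keypoints[j] = [x * img_w / hm_w, y * img_h / hm_h]
    return keypoints
```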

    A novel monitoring system for the training of elite swimmers

    Swimming performance is primarily judged on the overall time taken for a swimmer to complete a specified distance performing a stroke that complies with current regulations defined by the Fédération Internationale de Natation (FINA), the international governing body of swimming. There are three contributing factors to this overall time: the start, free swimming and turns. The contribution of each of these factors is event-dependent; for example, in a 50 m event there are no turns, whereas the start can be a significant contributor. To improve overall performance, each of these components should be optimised in terms of skill and execution. This thesis details the research undertaken towards improving performance-related feedback in swimming. The research included collaboration with British Swimming, the national governing body for swimming in the U.K., to drive the requirements and direction of research. An evaluation of current methods of swimming analysis identified a capability gap in real-time, quantitative feedback. A number of components were developed to produce an integrated system for comprehensive swim performance analysis in all phases of the swim, i.e. starts, free swimming and turns. These components were developed to satisfy two types of stakeholder requirements: first, the measurement requirements, i.e. what does the end user want to measure; second, the process requirements, i.e. how would these measurements be achieved. The components developed in this research worked towards new technologies that facilitate a wider range of measurement parameters using automated methods, as well as the application of technologies to automate current techniques. The development of the system is presented in detail and the application of these technologies is presented in case studies for starts, free swimming and turns. It was found that the developed components were able to provide useful data indicating levels of performance in all aspects of swimming, i.e. starts, free swimming and turns. For the starts, an integrated solution of vision, force plate technology and a wireless node enabled greater insight into overall performance and allowed quantitative measurements of performance to be captured. Force profiles could easily identify differences in swimmer ability or changes in technique. The analysis of free swimming was predominantly supported by the wireless sensor technology, whereby signal analysis was capable of automatically determining factors such as lap times and variations within strokes. The turning phase was also characterised in acceleration space, allowing the phases of the turn to be individually assessed and their contribution to total turn time established. Each of the component technologies was not used in isolation but was supported by other synchronous data capture. In all cases a vision component was used to increase understanding of data outputs and provide a medium that coaches and athletes were comfortable with interpreting. The integrated, component-based system has been developed and tested to prove its ability to produce useful, quantitative feedback information for swimmers. The individual components were found to be capable of providing greater insight into swimming performance than has previously been possible using current state-of-the-art techniques. Future work should look towards refining the prototype system into a usable solution for end users. This relies on the refinement of components and the development of an appropriate user interface to enable ease of data collection, analysis, presentation and interpretation.
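    The free-swimming and turn analyses above rely on extracting events such as wall push-offs from wearable accelerometer signals. As an illustration of that idea only (not the thesis implementation), the sketch below detects turns as large, well-separated peaks in the acceleration magnitude and derives lap times from the intervals between them; the peak threshold and minimum lap duration are assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

def lap_times_from_accel(accel: np.ndarray, fs: float, min_lap_s: float = 15.0) -> np.ndarray:
    """Estimate lap times (s) from a body-worn accelerometer trace.

    accel: array of shape (n_samples, 3) in m/s^2; fs: sampling rate in Hz.
    Turns are assumed to appear as large peaks in the acceleration magnitude.
    """
    magnitude = np.linalg.norm(accel, axis=1)
    peaks, _ = find_peaks(
        magnitude,
        height=magnitude.mean() + 2 * magnitude.std(),  # keep only large peaks
        distance=int(min_lap_s * fs),                   # at least one lap apart
    )
    return np.diff(peaks) / fs  # seconds between consecutive detected turns
```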

    Action Sport Cameras as an Instrument to Perform a 3D Underwater Motion Analysis

    Action sport cameras (ASC) are currently adopted mainly for entertainment purposes, but their continuous technical improvements, together with decreasing costs, are opening them up to quantitative three-dimensional (3D) motion analysis for sport gesture study and athletic performance evaluation. Extending this technology to sport analysis, however, still requires a methodological step forward to make ASC a metric system, encompassing ad hoc camera setup, image processing, feature tracking, calibration and 3D reconstruction. Unlike traditional laboratory analysis, such requirements become an issue when coping with both indoor and outdoor motion acquisitions of athletes. In swimming analysis, for example, the camera setup and the calibration protocol are particularly demanding since land and underwater cameras are mandatory. In particular, underwater camera calibration can be an issue affecting reconstruction accuracy. In this paper, the aim is to evaluate the feasibility of ASC for 3D underwater analysis by focusing on camera setup and data acquisition protocols. Two GoPro Hero3+ Black cameras (frame rate: 60 Hz; image resolutions: 1280x720 and 1920x1080 pixels) were placed underwater in a swimming pool, surveying a working volume of about 6 m³. A two-step custom calibration procedure was implemented, consisting of the acquisition of one static triad and one moving wand, carrying nine and one spherical passive markers, respectively. After assessing the camera parameters, a rigid bar carrying two markers at a known distance was acquired in several positions within the working volume. The average error on the reconstructed inter-marker distances was less than 2.5 mm (1280x720) and 1.5 mm (1920x1080). The results of this study demonstrate that the calibration of underwater ASC is feasible, enabling quantitative kinematic measurements with accuracy comparable to traditional motion capture systems.

    Funding: Fundação de Amparo à Pesquisa do Estado de São Paulo (São Paulo Research Foundation, FAPESP) [00/1293-1, 2006/02403-1, 2009/09359-6]; Conselho Nacional de Desenvolvimento Científico e Tecnológico (National Counsel of Technological and Scientific Development, CNPq) [473729/2008-3, 304975/2009-5, 478120/2011-7, 234088/2014-1, 481391/2013-4]; Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (Brazilian Federal Agency for Support and Evaluation of Graduate Education, CAPES) [2011/10-7, 08/2014]; Fundação de Amparo à Pesquisa de Minas Gerais (Minas Gerais Research Foundation) [PEE-00596-14].
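    The paper's contribution is the underwater calibration protocol itself; once projection matrices are available for the two cameras, the accuracy check reduces to triangulating the two bar markers and comparing the reconstructed distance with the known bar length. The OpenCV sketch below illustrates only that generic two-view step, under the assumption that calibrated 3x4 projection matrices are already in hand; names and shapes are illustrative.

```python
import cv2
import numpy as np

def reconstruct_bar_length(P1, P2, pts_cam1, pts_cam2) -> float:
    """Triangulate the two bar markers from two calibrated views and return
    the reconstructed inter-marker distance (in the calibration units).

    P1, P2: 3x4 projection matrices of the two underwater cameras.
    pts_cam1, pts_cam2: 2x2 arrays of image coordinates (one column per marker).
    """
    X_h = cv2.triangulatePoints(P1, P2, pts_cam1.astype(float), pts_cam2.astype(float))
    X = X_h[:3] / X_h[3]  # homogeneous -> Euclidean, shape (3, 2)
    return float(np.linalg.norm(X[:, 0] - X[:, 1]))

# Accuracy check against the known bar length (placeholder inputs):
# error_mm = abs(reconstruct_bar_length(P1, P2, pts1, pts2) - known_length_mm)
```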

    Drag force and jet propulsion investigation of a swimming squid

    In this study, a CAD model of a squid was obtained by taking computed tomography images of a real squid. The model was later placed into a computational domain to calculate the drag force and the performance of jet propulsion. The drag study was performed on the CAD model so that the drag force experienced by a real squid could be determined at different swimming speeds, and a comparison was made with other underwater creatures (e.g., a dolphin, sea lion and penguin). The drag coefficient of the squid (referenced to total wetted surface area) is 0.0042 at a Reynolds number of 1.6×10⁶, a 4.5% difference from the Gentoo penguin. In addition, the jet flow of the squid was simulated to observe the flow region generated in the 2D domain, utilizing a dynamic mesh method to mimic the movement of the squid's mantle cavity.
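    The quoted drag coefficient and Reynolds number plug directly into the standard drag relations. The short sketch below restates them numerically; the seawater properties, the characteristic length and the wetted area are illustrative assumptions, and only C_d = 0.0042 and Re = 1.6×10⁶ come from the abstract.

```python
# Standard drag relations behind the reported figures (illustration only).
RHO = 1025.0   # seawater density, kg/m^3 (assumed)
MU = 1.08e-3   # dynamic viscosity, Pa*s (assumed)
C_D = 0.0042   # drag coefficient referenced to wetted area (from the abstract)

def reynolds_number(speed: float, length: float) -> float:
    """Re = rho * v * L / mu for a body of characteristic length L (m)."""
    return RHO * speed * length / MU

def drag_force(speed: float, wetted_area: float) -> float:
    """F_D = 0.5 * rho * v^2 * C_d * A_wetted, in newtons."""
    return 0.5 * RHO * speed ** 2 * C_D * wetted_area

# Swimming speed that gives Re = 1.6e6 for an assumed 0.4 m mantle length:
speed = 1.6e6 * MU / (RHO * 0.4)            # about 4.2 m/s
print(drag_force(speed, wetted_area=0.12))  # assumed wetted area of 0.12 m^2
```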