
    ON-ICE DETECTION, CLASSIFICATION, LOCALIZATION AND TRACKING OF ANTHROPOGENIC ACOUSTIC SOURCES WITH MACHINE LEARNING

    Arctic acoustics have been of concern to the US Navy in recent years. First-year ice now dominates Arctic ice coverage, which changes the previously understood acoustic properties of the environment. Because more ice melts each year, anthropogenic sources in the Arctic region have become more common: military exercises, shipping, and tourism. The Navy is interested in detecting, classifying, localizing, and tracking these sources to maintain situational awareness of its surroundings. Because the sources are on-water or on-ice, their acoustic radiation propagates over long distances, so acoustics is the means by which the sources are detected, classified, localized, and tracked. These methods are all part of sound navigation and ranging (SONAR). This dissertation describes algorithms that improve SONAR results without modifying the sensors or the environment, and the process by which these algorithms were developed. The focus is on supervised machine learning algorithms to enable such enhancements. Specifically, neural networks analyze labeled experimental data from a first-year, shore-fast, shallow, narrow-water environment. The experiments were conducted over three years, from 2019 to 2022, mostly from January to March, when ice formed over the Keweenaw Waterway at Michigan Technological University. All experiments analyzed a passive acoustic source; that is, the source was non-cooperative and did not send any localizing pings for active SONAR. The experiments were recorded using an underwater pa-type acoustic vector sensor (AVS). Data collection and analysis were interleaved so that discrepancies found in the analysis could inform upcoming experiments and yield a more generalized algorithm. The work in this dissertation focuses on two topics for passive SONAR: localization and classification.
Because of the "black box" nature of machine learning, tracking the target source is an extension of localization and is treated as the same goal within machine learning. To introduce and verify the complexity of the testing environment, an underwater acoustic simulation using ray tracing and bathymetry data is compared with the experimental results used in machine learning. The goal of the algorithms is to produce the best results for the experiments and to compare those results with traditional methods, such as simulation or linear Gaussian localization with a Kalman filter. Experiments comparing neural network types have shown that the Vision Transformer (ViT) produces excellent results. The ViT can localize a moving target with high accuracy from acoustic intensity azimuthal spectrogram (azigram) data, and it can likewise classify multiple acoustic sources with high accuracy from the acoustic intensity magnitude spectrogram.
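As a hypothetical illustration of the azigram representation the abstract mentions (the estimator below and the synthetic plane wave are assumptions for illustration, not the dissertation's actual pipeline), the per-bin azimuth of an AVS recording can be taken from the active-intensity components Re{P*Vx} and Re{P*Vy}:

```python
import numpy as np

def azigram(p, vx, vy, nfft=256):
    """Estimate azimuth (degrees) per STFT time-frequency bin from
    AVS channels: pressure p and particle-velocity components vx, vy.
    Uses the active-intensity components Re{conj(P)*Vx}, Re{conj(P)*Vy}."""
    hop = nfft // 2
    win = np.hanning(nfft)
    frames = (len(p) - nfft) // hop + 1
    az = np.empty((frames, nfft // 2 + 1))
    for i in range(frames):
        s = slice(i * hop, i * hop + nfft)
        P = np.fft.rfft(win * p[s])
        Vx = np.fft.rfft(win * vx[s])
        Vy = np.fft.rfft(win * vy[s])
        Ix = np.real(np.conj(P) * Vx)   # active intensity, x component
        Iy = np.real(np.conj(P) * Vy)   # active intensity, y component
        az[i] = np.degrees(np.arctan2(Iy, Ix))
    return az

# Synthetic plane wave arriving from 30 degrees:
# vx ~ p*cos(theta), vy ~ p*sin(theta) for this toy case.
fs = 8000
t = np.arange(fs) / fs
p = np.sin(2 * np.pi * 440 * t)
theta = np.radians(30.0)
A = azigram(p, p * np.cos(theta), p * np.sin(theta))
```

For this idealized single-plane-wave input, every energetic bin recovers the 30-degree bearing; in the ice channel described above, multipath and noise smear the azigram, which is what motivates learned localizers.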

    Through-Ice Acoustic Source Tracking Using Vision Transformers with Ordinal Classification

    Ice environments pose challenges for conventional underwater acoustic localization techniques due to their multipath and non-linear nature. In this paper, we compare deep learning networks, such as Transformers, Convolutional Neural Networks (CNNs), Long Short-Term Memory (LSTM) networks, and Vision Transformers (ViTs), for passive localization and tracking of single moving, on-ice acoustic sources using two underwater acoustic vector sensors. We incorporate ordinal classification as a localization approach and compare the results with other standard methods. We conduct experiments that passively record the acoustic signature of an anthropogenic source on the ice and analyze these data. The results demonstrate that Vision Transformers are a strong contender for tracking moving acoustic sources on ice. Additionally, we show that classification as a localization technique can outperform regression for networks better suited to classification, such as CNNs and ViTs.
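Ordinal classification for localization, as mentioned in the abstract, can be sketched with the standard cumulative-target encoding; the scheme below is a common formulation and an assumption about the paper's setup, not its published implementation. Position is discretized into ordered bins, each bin k is encoded as binary targets "bin index > i", and a prediction is decoded by counting thresholds exceeded:

```python
import numpy as np

def ordinal_encode(bin_index, n_bins):
    """Encode ordered bin k of n_bins as n_bins-1 cumulative binary
    targets: target[i] = 1 iff bin_index > i."""
    return (np.arange(n_bins - 1) < bin_index).astype(int)

def ordinal_decode(probs, threshold=0.5):
    """Decode per-threshold sigmoid outputs back to a bin index by
    counting how many cumulative thresholds are exceeded."""
    return int(np.sum(np.asarray(probs) > threshold))
```

Unlike plain one-hot classification, this encoding penalizes a network less for predicting a neighboring position bin than a distant one, which is why it can behave more like regression while keeping a classification head.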

    Guided Deep Reinforcement Learning for Swarm Systems

    In this paper, we investigate how to learn to control a group of cooperative agents with limited sensing capabilities, such as robot swarms. The agents have only very basic sensing capabilities, yet as a group they can accomplish sophisticated tasks, such as distributed assembly or search and rescue. Learning a policy for a group of agents is difficult due to distributed partial observability of the state. Here, we follow a guided approach in which a critic has central access to the global state during learning, which simplifies the policy evaluation problem from a reinforcement learning point of view. For example, we can get the positions of all robots of the swarm from a camera image of the scene. This camera image is available only to the critic, not to the control policies of the robots. We follow an actor-critic approach, where the actors base their decisions only on locally sensed information, while the critic is learned from the true global state. Our algorithm uses deep reinforcement learning to approximate both the Q-function and the policy. The performance of the algorithm is evaluated on two tasks with simple simulated 2D agents: 1) finding and maintaining a certain distance to each other, and 2) locating a target.
    Comment: 15 pages, 8 figures, accepted at the AAMAS 2017 Autonomous Robots and Multirobot Systems (ARMS) Workshop
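The guided actor-critic split described above can be sketched minimally as follows; this is a toy linear/softmax version with illustrative names and dimensions (the paper uses deep networks), showing only the key asymmetry: actors consume local observations, while the critic is trained on the full global state.

```python
import numpy as np

# Toy "guided" actor-critic: each agent's actor sees only its local
# observation, while a single linear critic sees the global state.
rng = np.random.default_rng(0)
n_agents, obs_dim, state_dim, n_actions = 3, 2, 6, 2
actor_w = 0.1 * rng.normal(size=(n_agents, obs_dim, n_actions))
critic_w = np.zeros(state_dim)  # linear state-value critic V(s) = w . s

def policy(agent, local_obs):
    """Softmax policy over actions, using local information only."""
    logits = local_obs @ actor_w[agent]
    z = np.exp(logits - logits.max())
    return z / z.sum()

def critic_td_update(state, reward, next_state, gamma=0.9, lr=0.1):
    """One TD(0) step on the centralized critic; returns the TD error."""
    global critic_w
    td = reward + gamma * next_state @ critic_w - state @ critic_w
    critic_w = critic_w + lr * td * state
    return td

# Repeated updates on one fixed global transition shrink the TD error.
s = np.array([1.0, 0.0, 1.0, 0.0, 1.0, 0.0])    # global state (critic only)
s2 = np.array([0.0, 1.0, 0.0, 1.0, 0.0, 1.0])
errors = [abs(critic_td_update(s, 1.0, s2)) for _ in range(50)]
p0 = policy(0, np.array([0.5, -0.2]))            # agent 0 acts on local obs
```

The design point is that the privileged global state (e.g. the overhead camera image) touches only `critic_td_update`; at deployment time the critic is discarded and each robot runs `policy` on its own sensors.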