1,166 research outputs found

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Get PDF

    Autonomous and efficient exploration of unknown underground mining stopes with a tethered drone

    Get PDF
    Abstract: Underground mining stopes are often mapped using a sensor located at the end of a pole that the operator introduces into the stope from a secure area. The sensor emits laser beams that measure the distance to a detected wall, thus creating a 3D map. This produces shadow zones and a low point density on distant walls. To address these challenges, a research team at the Université de Sherbrooke is designing a tethered drone equipped with a rotating LiDAR for this mission, thus benefiting from several points of view. The wired link allows for unlimited flight time, shared computing, and real-time communication. To keep the drone mobile when the tether snags, the excess length is wound onto an onboard spool, which adds to the drone's payload. During manual piloting, the human factor causes problems in perceiving and comprehending a virtual 3D environment, as well as in executing an optimal mission. This thesis focuses on autonomous navigation in two respects: path planning and exploration. The system must compute a trajectory that maps the entire environment while minimizing mission time and respecting the maximum onboard tether length. Path planning with a Rapidly-exploring Random Tree (RRT) quickly finds a feasible path, but the optimization is computationally expensive and its performance is variable and unpredictable. Frontier-based exploration is representative of the space to be explored, and the path can be optimized by solving a Traveling Salesman Problem (TSP), but existing techniques for a tethered drone only consider the 2D case and do not optimize the global path. To meet these challenges, this thesis presents two new algorithms. The first, RRT-Rope, produces a path equal to or shorter than those of existing algorithms in significantly less computation time, up to 70% faster than the next best algorithm in a representative environment. A modified version of RRT-connect computes a feasible path, which is then shortened with a deterministic technique that takes advantage of previously added intermediate nodes. The second algorithm, TAPE, is the first 3D cavity exploration method that focuses on minimizing both mission time and unwound tether length. On average, the overall path is 4% longer than that of the method that solves the TSP alone, but the tether remains under the allowed length in 100% of the simulated cases, compared to 53% with the initial method. The approach uses a two-level hierarchical architecture: global planning solves a TSP after frontier extraction, and local planning minimizes path cost and tether length via a decision function. The integration of these two tools in the NetherDrone produces an intelligent system for autonomous exploration, with semi-autonomous features for operator interaction. This work opens the door to new navigation approaches in the field of inspection, mapping, and Search and Rescue missions.
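
    As a hedged illustration of the deterministic shortening step described above (a minimal sketch; the helper names, the 3D point representation, and the occupancy predicate are assumptions, not the thesis's actual code): starting from the feasible waypoint path returned by the modified RRT-connect, one can greedily skip ahead to the farthest intermediate node reachable in a straight, collision-free line.

```python
import math

def collision_free(a, b, occupied, step=0.1):
    """Assumed helper: densely sample the segment a-b and test an
    occupancy predicate; a real system would query a voxel/octree map."""
    n = max(int(math.dist(a, b) / step), 1)
    return all(
        not occupied(tuple(a[k] + (b[k] - a[k]) * i / n for k in range(3)))
        for i in range(n + 1)
    )

def shortcut_path(path, occupied):
    """Deterministic shortening over the intermediate nodes of a feasible
    path: from each kept waypoint, jump to the farthest later waypoint
    that is visible in a straight line."""
    shortened = [path[0]]
    i = 0
    while i < len(path) - 1:
        j = len(path) - 1
        while j > i + 1 and not collision_free(path[i], path[j], occupied):
            j -= 1
        shortened.append(path[j])
        i = j
    return shortened
```

    Because this pass only replaces sub-paths with already-verified straight segments, the result is never longer than the input path, consistent with the "equal to or shorter" claim above.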

    On the Utility of Representation Learning Algorithms for Myoelectric Interfacing

    Get PDF
    Electrical activity produced by muscles during voluntary movement is a reflection of the firing patterns of relevant motor neurons and, by extension, the latent motor intent driving the movement. Once transduced via electromyography (EMG) and converted into digital form, this activity can be processed to provide an estimate of the original motor intent and is as such a feasible basis for non-invasive efferent neural interfacing. EMG-based motor intent decoding has so far received the most attention in the field of upper-limb prosthetics, where alternative means of interfacing are scarce and the utility of better control apparent. Although myoelectric prostheses have been available since the 1960s, EMG control interfaces still lag behind the mechanical capabilities of the artificial limbs they are intended to steer, a gap at least partially due to limitations in current methods for translating EMG into appropriate motion commands. As the relationship between EMG signals and concurrent effector kinematics is highly non-linear and apparently stochastic, finding ways to accurately extract and combine relevant information from across electrode sites is still an active area of inquiry. This dissertation comprises an introduction and eight papers that explore issues afflicting the status quo of myoelectric decoding and possible solutions, all related through their use of learning algorithms and deep Artificial Neural Network (ANN) models. Paper I presents a Convolutional Neural Network (CNN) for multi-label movement decoding of high-density surface EMG (HD-sEMG) signals. Inspired by the successful use of CNNs in Paper I and the work of others, Paper II presents a method for the automatic design of CNN architectures for use in myocontrol. Paper III introduces an ANN architecture with an appertaining training framework from which simultaneous and proportional control emerges. Paper IV introduces a dataset of HD-sEMG signals for use with learning algorithms. Paper V applies a Recurrent Neural Network (RNN) model to decode finger forces from intramuscular EMG. Paper VI introduces a Transformer model for myoelectric interfacing that does not need additional training data to function with previously unseen users. Paper VII compares the performance of a Long Short-Term Memory (LSTM) network to that of classical pattern recognition algorithms. Lastly, Paper VIII describes a framework for synthesizing EMG from multi-articulate gestures, intended to reduce training burden.
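
    The paragraph above is a summary rather than a specification, but a minimal sketch may clarify what multi-label movement decoding from HD-sEMG windows looks like in practice (PyTorch; all layer sizes, channel counts, and window lengths are illustrative assumptions, not the architecture of Paper I):

```python
import torch
import torch.nn as nn

class EMGConvNet(nn.Module):
    """Toy CNN for multi-label movement decoding from HD-sEMG windows
    shaped (batch, electrodes, time samples)."""
    def __init__(self, n_channels=64, n_movements=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 128, kernel_size=5, padding=2),
            nn.BatchNorm1d(128),
            nn.ReLU(),
            nn.Conv1d(128, 128, kernel_size=5, padding=2),
            nn.BatchNorm1d(128),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # pool over the time axis
        )
        # One independent logit per movement: multi-label, not softmax.
        self.head = nn.Linear(128, n_movements)

    def forward(self, x):
        z = self.features(x).squeeze(-1)
        return self.head(z)  # train with nn.BCEWithLogitsLoss

model = EMGConvNet()
window = torch.randn(8, 64, 200)  # 8 windows, 64 electrodes, 200 samples
logits = model(window)            # (8, 10); sigmoid(logit) > 0.5 per label
```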

    Analog Photonics Computing for Information Processing, Inference and Optimisation

    Full text link
    This review presents an overview of the current state of the art in photonics computing, which leverages photons, photons coupled with matter, and optics-related technologies for effective and efficient computational purposes. It covers the history and development of photonics computing and modern analogue computing platforms and architectures, focusing on optimization tasks and neural network implementations. The authors examine special-purpose optimizers, mathematical descriptions of photonics optimizers, and their various interconnections. Disparate applications are discussed, including direct encoding, logistics, finance, phase retrieval, machine learning, neural networks, probabilistic graphical models, and image processing, among many others. The main directions of technological advancement and associated challenges in photonics computing are explored, along with an assessment of its efficiency. Finally, the paper discusses prospects and the field of optical quantum computing, providing insights into the potential applications of this technology.
    Comment: Invited submission to Advanced Quantum Technologies; accepted version 5/06/202
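
    As a hedged sketch of what an analogue photonics optimizer computes (a toy mean-field simulation in the spirit of coherent Ising machines; the update rule and all parameter values are illustrative assumptions, not equations from the review): continuous amplitudes evolve under gain, a saturable nonlinearity, and spin-spin coupling, and are binarized into Ising spins at readout.

```python
import numpy as np

def ising_machine(J, steps=2000, dt=0.01, pump=1.1, seed=0):
    """Toy mean-field dynamics: dx/dt = (p - 1) x - x^3 + J x,
    then spins s_i = sign(x_i). J must be symmetric, zero diagonal."""
    rng = np.random.default_rng(seed)
    x = 0.01 * rng.standard_normal(J.shape[0])  # analogue amplitudes
    for _ in range(steps):
        x += dt * ((pump - 1.0) * x - x**3 + J @ x)
    return np.sign(x)

# Example: antiferromagnetic couplings on a random graph (MAX-CUT-like).
rng = np.random.default_rng(1)
n = 16
A = np.triu((rng.random((n, n)) < 0.3).astype(float), 1)
J = -(A + A.T)
spins = ising_machine(J)
print(spins, -0.5 * spins @ J @ spins)  # Ising energy to be minimized
```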

    Cybersecurity: Past, Present and Future

    Full text link
    The digital transformation has created a new digital space known as cyberspace. This new cyberspace has improved the workings of businesses, organizations, governments, society as a whole, and the day-to-day life of individuals. With these improvements come new challenges, and one of the main challenges is security. The security of this new cyberspace is called cybersecurity. Cyberspace has created new technologies and environments such as cloud computing, smart devices, the IoT, and several others. To keep pace with these advancements in cyber technologies, there is a need to expand research and develop new cybersecurity methods and tools to secure these domains and environments. This book is an effort to introduce the reader to the field of cybersecurity, highlight current issues and challenges, and provide future directions to mitigate or resolve them. The main specializations of cybersecurity covered in this book are software security, hardware security, the evolution of malware, biometrics, cyber intelligence, and cyber forensics. We must learn from the past, evolve our present, and improve the future. Based on this objective, the book covers the past, present, and future of these main specializations of cybersecurity. The book also examines the upcoming areas of research in cyber intelligence, such as hybrid augmented and explainable artificial intelligence (AI). Human-AI collaboration can significantly increase the performance of a cybersecurity system. Interpreting and explaining machine learning models, i.e., explainable AI, is an emerging field of study and has a lot of potential to improve the role of AI in cybersecurity.
    Comment: Author's copy of the book published under ISBN: 978-620-4-74421-

    University of Windsor Graduate Calendar 2023 Spring

    Get PDF

    Efficient parameterized algorithms on structured graphs

    Get PDF
    In classical complexity theory, the worst-case running time of an algorithm is typically stated solely as a function of the input size. In parameterized complexity, the goal is to refine the analysis by additionally considering a parameter that measures some kind of structure in the input. A parameterized algorithm then exploits the structure described by the parameter and achieves a running time that is faster than the best general (unparameterized) algorithm on instances of low parameter value. In the first part of this thesis, we carry this line of research forward and investigate the influence of several parameters on the running times of well-known tractable problems. Several of the presented algorithms are adaptive algorithms, meaning that they match the running time of a best unparameterized algorithm at the worst-case parameter value. Thus, an adaptive parameterized algorithm is asymptotically never worse than the best unparameterized algorithm, while it outperforms the best general algorithm already for slightly non-trivial parameter values. As illustrated in the first part of this thesis, many problems admit efficient parameterized algorithms with respect to multiple parameters, each describing a different kind of structure. In the second part of this thesis, we explore how to combine such homogeneous structures into more general, heterogeneous structures. Using algebraic expressions, which can be used to define the structures described by parameters, we characterize heterogeneous structures in a clean and robust way, and we showcase this for the heterogeneous merge of the parameters tree-depth and modular-width by presenting parameterized algorithms on such heterogeneous graph classes, with running times that match the homogeneous cases throughout.
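
    As a generic illustration of the adaptive notion above (a folklore example, not an algorithm from the thesis): insertion sort runs in O(n + inv) time when parameterized by the number of inversions inv, since each inner-loop swap removes exactly one inversion. Fully adaptive sorting algorithms sharpen this to O(n log(inv/n + 1)), which matches the optimal O(n log n) at the worst-case parameter value inv = Θ(n²) while beating it on nearly sorted inputs.

```python
def insertion_sort(a):
    """O(n + inv) time, where inv is the number of inversions in a:
    every swap in the inner loop removes exactly one inversion."""
    a = list(a)
    for i in range(1, len(a)):
        j = i
        while j > 0 and a[j - 1] > a[j]:
            a[j - 1], a[j] = a[j], a[j - 1]
            j -= 1
    return a

assert insertion_sort([2, 1, 3, 5, 4]) == [1, 2, 3, 4, 5]  # inv = 2
```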

    University of Windsor Graduate Calendar 2023 Winter

    Get PDF

    Distributed MIS in O(log log n) Awake Complexity

    Get PDF
    Maximal Independent Set (MIS) is one of the fundamental and most well-studied problems in distributed graph algorithms. Even after four decades of intensive research, the best known (randomized) MIS algorithms have O(log n) round complexity on general graphs [Luby, STOC 1986] (where n is the number of nodes), while the best known lower bound is Ω(√(log n / log log n)) [Kuhn, Moscibroda, Wattenhofer, JACM 2016]. Breaking past the O(log n) round complexity upper bound or showing a stronger lower bound has been a longstanding open problem. Energy is a premium resource in various settings such as battery-powered wireless networks and sensor networks. The bulk of the energy is used by nodes when they are awake, i.e., when they are sending, receiving, or even just listening for messages. On the other hand, when a node is sleeping, it does not perform any communication and thus spends very little energy. Several recent works have addressed the problem of designing energy-efficient distributed algorithms for various fundamental problems. These algorithms operate by minimizing the number of rounds in which any node is awake, also called the (worst-case) awake complexity. An intriguing open question is whether one can design a distributed MIS algorithm whose awake complexity is significantly smaller than that of existing algorithms. In particular, the question of obtaining a distributed MIS algorithm with o(log n) awake complexity was left open in [Chatterjee, Gmyr, Pandurangan, PODC 2020]. Our main contribution is to show that MIS can be computed with awake complexity that is exponentially better than the best known round complexity of O(log n), also exponentially bypassing the fundamental Ω(√(log n / log log n)) round complexity lower bound. Specifically, we show that MIS can be computed by a randomized distributed (Monte Carlo) algorithm in O(log log n) awake complexity with high probability. However, this algorithm has a round complexity that is O(poly(n)). We then show how to drastically improve the round complexity at the cost of a slight increase in awake complexity by presenting a randomized distributed (Monte Carlo) algorithm for MIS that, with high probability, computes an MIS in O((log log n) · log* n) awake complexity and O((log³ n) · (log log n) · log* n) round complexity. Our algorithms work in the CONGEST model, where messages of size O(log n) bits can be sent per edge per round.
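
    For context, here is a minimal sequential simulation of the O(log n)-round baseline cited above, Luby's randomized MIS algorithm [Luby, STOC 1986] (the graph representation and driver code are illustrative assumptions; the real algorithm executes one phase per synchronous distributed round):

```python
import random

def luby_mis(adj, seed=0):
    """adj maps each node to its set of neighbours. Each phase, every
    active node draws a random value; nodes that are local minima join
    the MIS, and they and their neighbours become inactive."""
    random.seed(seed)
    mis, active = set(), set(adj)
    while active:
        r = {v: random.random() for v in active}
        joined = {v for v in active
                  if all(r[v] < r[u] for u in adj[v] if u in active)}
        mis |= joined
        active -= joined | {u for v in joined for u in adj[v]}
    return mis

# Example: a 5-cycle; any maximal independent set here has size 2.
adj = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
print(luby_mis(adj))
```

    Each phase takes O(1) distributed rounds and removes a constant fraction of the edges in expectation, giving the O(log n) round bound; the awake-complexity results above reduce how many phases any single node must stay awake for, not this round bound itself.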