
    Domain-specific implementation of high-order Discontinuous Galerkin methods in spherical geometry

    In recent years, domain-specific languages (DSLs) have achieved significant success in large-scale efforts to reimplement existing meteorological models in a performance-portable manner. The dynamical cores of these models are based on finite difference and finite volume schemes, and existing DSLs are generally limited to supporting only these numerical methods. Meanwhile, there have been numerous attempts to use high-order Discontinuous Galerkin (DG) methods for atmospheric dynamics, which are currently largely unsupported in mainstream DSLs. To link these developments, we present two domain-specific languages which extend the existing GridTools (GT) ecosystem to high-order DG discretization. The first is a C++-based DSL called G4GT, which, despite being no longer supported, gave us the impetus to implement extensions to the subsequent Python-based production DSL GT4Py to support the operations needed for DG solvers. As a proof of concept, the shallow water equations in spherical geometry are implemented in both DSLs, thus providing a blueprint for the application of domain-specific languages to the development of global atmospheric models. We believe this is the first GPU-capable DSL implementation of DG in spherical geometry. The results demonstrate that a DSL designed for finite difference/volume methods can be successfully extended to implement a DG solver, while preserving the performance portability of the DSL.
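To illustrate the kind of operations a DG solver asks of such a DSL (element-local mass and stiffness matrices plus numerical fluxes between neighbouring elements), here is a minimal nodal DG sketch for 1D linear advection with an upwind flux. This is plain NumPy, not G4GT or GT4Py code; the function name and all parameters are illustrative only.

```python
import numpy as np

def dg_advect(n_elem=32, a=1.0, t_end=1.0):
    """Nodal DG (p=1) for u_t + a u_x = 0 on [0, 1], periodic, upwind flux."""
    h = 1.0 / n_elem
    # reference-element mass and stiffness matrices for a linear basis on [-1, 1]
    M = np.array([[2/3, 1/3], [1/3, 2/3]])
    S = np.array([[-0.5, -0.5], [0.5, 0.5]])
    Minv = np.linalg.inv(M)
    # nodal values: u[k, 0] = left node, u[k, 1] = right node of element k
    x = np.array([[k * h, (k + 1) * h] for k in range(n_elem)])
    u = np.sin(2 * np.pi * x)

    def rhs(u):
        # upwind numerical flux (a > 0): take the upstream (left) value
        ustar_L = np.roll(u[:, 1], 1)      # left face: right node of left neighbour
        ustar_R = u[:, 1]                  # right face: own right node
        flux = np.zeros_like(u)
        flux[:, 0] = -a * ustar_L          # boundary term weighted by phi_0
        flux[:, 1] = a * ustar_R           # boundary term weighted by phi_1
        return (2 / h) * (a * u @ S.T - flux) @ Minv.T

    dt = 0.3 * h / a                       # CFL-limited time step
    steps = int(np.ceil(t_end / dt))
    dt = t_end / steps
    for _ in range(steps):                 # SSP-RK2 time stepping
        u1 = u + dt * rhs(u)
        u = 0.5 * (u + u1 + dt * rhs(u1))
    return x, u
```

After one full period the sine wave should return to its initial position, so the discretization error can be read off directly from the final state.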

    Multidisciplinary perspectives on Artificial Intelligence and the law

    This open access book presents an interdisciplinary, multi-authored, edited collection of chapters on Artificial Intelligence (‘AI’) and the Law. AI technology has come to play a central role in the modern data economy. Through a combination of increased computing power, the growing availability of data and the advancement of algorithms, AI has now become an umbrella term for some of the most transformational technological breakthroughs of this age. The importance of AI stems from both the opportunities that it offers and the challenges that it entails. While AI applications hold the promise of economic growth and efficiency gains, they also create significant risks and uncertainty. The potential and perils of AI have thus come to dominate modern discussions of technology and ethics – and although AI was initially allowed to largely develop without guidelines or rules, few would deny that the law is set to play a fundamental role in shaping the future of AI. As the debate over AI is far from over, the need for rigorous analysis has never been greater. This book thus brings together contributors from different fields and backgrounds to explore how the law might provide answers to some of the most pressing questions raised by AI. An outcome of the Católica Research Centre for the Future of Law and its interdisciplinary working group on Law and Artificial Intelligence, it includes contributions by leading scholars in the fields of technology, ethics and the law.

    EV-Tach: a handheld rotational speed estimation system with event camera

    Rotational speed is one of the important metrics to be measured for calibrating electric motors in manufacturing, monitoring engines during car repairs, detecting faults in electrical appliances, and more. However, existing measurement techniques either require prohibitive hardware (e.g., a high-speed camera) or are inconvenient to use in real-world application scenarios. In this paper, we propose EV-Tach, a novel handheld rotational speed estimation system that utilizes emerging imaging sensors known as event cameras or dynamic vision sensors (DVS). The pixels of a DVS work independently and trigger an event as soon as a per-pixel intensity change is detected, without the global synchronization of CCD/CMOS cameras. This unique design offers high temporal resolution and generates sparse events, which benefits high-speed rotation estimation. To achieve accurate and efficient rotational speed estimation, a series of signal processing algorithms are specifically designed for the event streams generated by event cameras on an embedded platform. First, a new cluster-centroid initialization module is proposed to initialize the centroids of the clusters, addressing the issue that common clustering approaches easily fall into a locally optimal solution without proper initial centroids. Second, an outlier removal module is designed to suppress the background noise caused by subtle hand movements and host-device vibrations. Third, a coarse-to-fine alignment strategy with Iterative Closest Point (ICP)-based event stream alignment is proposed to obtain the angle of rotation and achieve accurate estimation of rotational speed over a large range. With these bespoke components, EV-Tach is able to extract the rotational speed accurately from the event stream produced by an event camera recording rotary targets.
According to our extensive evaluations under controlled and practical experiment settings, the Relative Mean Absolute Error (RMAE) of EV-Tach is as low as 0.3‰, which is comparable to a state-of-the-art laser tachometer in fixed measurement mode. Moreover, EV-Tach is robust to subtle movements of the user's hand and to dazzling outdoor light, and can therefore be used as a handheld device under challenging lighting conditions where the laser tachometer fails to produce reasonable results. To speed up the processing of EV-Tach and reduce its resource consumption on embedded devices, VoxelGrid filtering is applied to significantly downsample the event streams by merging the events within the same 3D voxel grid while preserving their structure in the spatial-temporal domain. Finally, we implement EV-Tach on a Raspberry Pi, and the evaluation results show that the downsampling process preserves the high measurement accuracy while reducing computation time and energy consumption by approximately 8 and 30 times on average, respectively.
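The voxel-grid downsampling step described above can be sketched in a few lines: events are binned by their (x, y, t) voxel index and each occupied voxel is replaced by the centroid of its events. This is an illustrative NumPy sketch under assumed data layout and voxel sizes, not the EV-Tach implementation.

```python
import numpy as np

def voxel_downsample(events, grid=(4, 4, 1e-3)):
    """Merge events falling into the same (x, y, t) voxel into their centroid.

    events: (N, 3) array of (x_pixel, y_pixel, t_seconds) rows.
    grid:   voxel edge lengths along x, y and t (assumed values).
    """
    keys = np.floor(events / np.asarray(grid)).astype(np.int64)
    # group rows by voxel key, then average each group
    _, inverse, counts = np.unique(keys, axis=0,
                                   return_inverse=True, return_counts=True)
    inverse = inverse.ravel()               # guard against shape differences across NumPy versions
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inverse, events)        # unbuffered grouped sum per voxel
    return sums / counts[:, None]
```

Merging preserves the spatio-temporal footprint of the rotating target while shrinking the number of events the downstream clustering and ICP stages must process.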

    Autonomous and efficient exploration of unknown underground mining stopes with a tethered drone

    Abstract: Underground mining stopes are often mapped using a sensor located at the end of a pole that the operator introduces into the stope from a secure area. The sensor emits laser beams that provide the distance to a detected wall, thus creating a 3D map. This produces shadow zones and a low point density on the distant walls. To address these challenges, a research team from the Université de Sherbrooke is designing a tethered drone equipped with a rotating LiDAR for this mission, thus benefiting from several points of view. The wired transmission allows for unlimited flight time, shared computing, and real-time communication. So that the drone can keep moving after tether entanglements, the excess tether length is stored on an onboard spool, which contributes to the drone's payload. During manual piloting, the human factor causes problems in the perception and comprehension of a virtual 3D environment, as well as in the execution of an optimal mission. This thesis focuses on autonomous navigation in two aspects: path planning and exploration. The system must compute a trajectory that maps the entire environment, minimizing the mission time and respecting the maximum onboard tether length. Path planning using a Rapidly-exploring Random Tree (RRT) quickly finds a feasible path, but the optimization is computationally expensive and the performance is variable and unpredictable. Exploration by the frontier method is representative of the space to be explored, and the path can be optimized by solving a Traveling Salesman Problem (TSP), but existing techniques for a tethered drone only consider the 2D case and do not optimize the global path. To meet these challenges, this thesis presents two new algorithms. The first one, RRT-Rope, produces an equal or shorter path than existing algorithms in a significantly shorter computation time, up to 70% faster than the next best algorithm in a representative environment.
A modified version of RRT-connect computes a feasible path, which is then shortened with a deterministic technique that takes advantage of previously added intermediate nodes. The second algorithm, TAPE, is the first 3D cavity exploration method that focuses on minimizing mission time and unwound tether length. On average, the overall path is 4% longer than the method that solves the TSP, but the tether remains under the allowed length in 100% of the simulated cases, compared to 53% with the initial method. The approach uses a two-level hierarchical architecture: global planning solves a TSP after frontier extraction, and local planning minimizes the path cost and tether length via a decision function. The integration of these two tools in the NetherDrone produces an intelligent system for autonomous exploration, with semi-autonomous features for operator interaction. This work opens the door to new navigation approaches in the field of inspection, mapping, and Search and Rescue missions.
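The deterministic path-shortening idea, reusing intermediate nodes already present on the feasible path, can be sketched as a greedy shortcutting pass: from each node, try to connect directly to the farthest later node whose connecting segment is collision-free. This is a generic illustration, not the RRT-Rope algorithm itself; `segment_free` stands in for whatever collision checker the planner provides.

```python
def shorten_path(path, segment_free):
    """Greedy deterministic shortcutting over an ordered list of waypoints.

    path:         list of waypoints (any hashable/comparable point type).
    segment_free: callable (p, q) -> bool, True if the straight segment
                  between p and q is collision-free (assumed helper).
    """
    shortened = [path[0]]
    i = 0
    while i < len(path) - 1:
        # try the farthest reachable node first, fall back toward i + 1
        j = len(path) - 1
        while j > i + 1 and not segment_free(path[i], path[j]):
            j -= 1
        shortened.append(path[j])
        i = j
    return shortened
```

Because the candidate shortcuts are drawn only from nodes the planner already validated, the pass is deterministic and needs no extra sampling.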

    QAmplifyNet: Pushing the Boundaries of Supply Chain Backorder Prediction Using Interpretable Hybrid Quantum-Classical Neural Network

    Supply chain management relies on accurate backorder prediction for optimizing inventory control, reducing costs, and enhancing customer satisfaction. However, traditional machine-learning models struggle with large-scale datasets and complex relationships, hindering real-world data collection. This research introduces a novel methodological framework for supply chain backorder prediction, addressing the challenge of handling large datasets. Our proposed model, QAmplifyNet, employs quantum-inspired techniques within a quantum-classical neural network to predict backorders effectively on short and imbalanced datasets. Experimental evaluations on a benchmark dataset demonstrate QAmplifyNet's superiority over classical models, quantum ensembles, quantum neural networks, and deep reinforcement learning. Its proficiency in handling short, imbalanced datasets makes it an ideal solution for supply chain management. To enhance model interpretability, we use Explainable Artificial Intelligence techniques. Practical implications include improved inventory control, reduced backorders, and enhanced operational efficiency. QAmplifyNet seamlessly integrates into real-world supply chain management systems, enabling proactive decision-making and efficient resource allocation. Future work involves exploring additional quantum-inspired techniques, expanding the dataset, and investigating other supply chain applications. This research unlocks the potential of quantum computing in supply chain optimization and paves the way for further exploration of quantum-inspired machine learning models in supply chain management. Our framework and QAmplifyNet model offer a breakthrough approach to supply chain backorder prediction, providing superior performance and opening new avenues for leveraging quantum-inspired techniques in supply chain management.

    Boosting precision crop protection towards agriculture 5.0 via machine learning and emerging technologies: A contextual review

    Crop protection is a key activity for the sustainability and feasibility of agriculture in the current context of climate change, which is destabilizing agricultural practices and increasing the incidence of established and invasive pests, and of a growing world population that requires safeguarding the food supply chain and ensuring food security. In view of these events, this article provides a contextual review, in six sections, of the role of artificial intelligence (AI), machine learning (ML) and other emerging technologies in solving current and future challenges of crop protection. Over time, crop protection has progressed from a primitive agriculture 1.0 (Ag1.0) through various technological developments to reach a level of maturity closely in line with Ag5.0 (section 1), which is characterized by successfully leveraging ML capacity and modern agricultural devices and machines that perceive, analyze and actuate following the main stages of precision crop protection (section 2). Section 3 presents a taxonomy of ML algorithms that support the development and implementation of precision crop protection, while section 4 analyses the scientific impact of ML on the basis of an extensive bibliometric study of >120 algorithms, outlining the most widely used ML and deep learning (DL) techniques currently applied in relevant case studies on the detection and control of crop diseases, weeds and pests. Section 5 describes 39 emerging technologies in the fields of smart sensors and other advanced hardware devices, telecommunications, proximal and remote sensing, and AI-based robotics that will foreseeably lead the next generation of perception-based, decision-making and actuation systems for digitized, smart and real-time crop protection in a realistic Ag5.0. Finally, section 6 highlights the main conclusions and final remarks.

    20th SC@RUG 2023 proceedings 2022-2023


    Interaction of elastomechanics and fluid dynamics in the human heart: Opportunities and challenges of light coupling strategies

    The human heart is the highly complex centerpiece of the cardiovascular system, permanently, reliably and autonomously maintaining blood flow through the body. Computer models reproduce the functionality of the heart in order to carry out simulation studies that provide deeper insight into the underlying phenomena or offer the possibility of varying relevant parameters under fully controlled conditions. Given that cardiovascular diseases are the leading cause of death in the countries of the Western hemisphere, contributing to their early diagnosis is of great clinical importance. In this context, computational flow simulations can provide valuable insight into blood flow dynamics and thus offer the opportunity to study a central area of the physics of this multiphysics organ. Since the deformation of the endocardial surface drives the blood flow, the effects of elastomechanics must be taken into account as boundary conditions for such flow simulations. To be relevant in a clinical context, however, a middle ground must be found between computational cost and the required accuracy, and the models must be both robust and reliable. This thesis therefore evaluates the opportunities and challenges of light, and hence less complex, coupling strategies, focusing on three key aspects: First, a fluid dynamics solver based on the immersed boundary approach is implemented, since this method excels through a very robust treatment of moving meshes. Its basic functionality was verified for various simplified geometries and showed close agreement with the respective analytical solutions.
Comparing the 3D simulation of a realistic geometry of the left heart against a body-fitted mesh description, fundamental global quantities were reproduced correctly. However, variations of the boundary conditions showed a large influence on the simulation results. Applying the solver to simulate the influence of pathologies on blood flow patterns yielded results in good agreement with literature values. In simulations of mitral valve insufficiency, the regurgitant fraction was visualized using a particle tracking method. For hypertrophic cardiomyopathy, the flow patterns in the left ventricle were assessed using passive scalar transport to visualize the local concentration of the original blood volume. Since the aforementioned studies considered only a unidirectional flow of information from the elastomechanical model to the flow solver, the feedback of the spatially resolved pressure field from the flow simulations onto the elastomechanics is then quantified. A sequential coupling approach is introduced to account for fluid-dynamic influences in a beat-by-beat coupling structure. The small deviations of 2 mm in the mechanical solver vanished after just one iteration, suggesting that the feedback effect of fluid dynamics in the healthy heart is limited. In summary, boundary conditions must be chosen with care, particularly for flow dynamics simulations, since their large influence increases the vulnerability of the models. Nevertheless, simplified coupling strategies showed promising results in reproducing global fluid-dynamic quantities, while reducing the dependency between the solvers and saving computational effort.
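The beat-by-beat sequential coupling described in the abstract can be illustrated with a toy staggered iteration: each "beat", a mechanics solver runs with the pressure feedback of the previous beat, then a fluid solver updates the pressure from the new wall motion, until the wall displacement stops changing between beats. The two solver callables below are stand-ins, not the thesis's actual elastomechanics or immersed-boundary solvers.

```python
def coupled_beats(mech_step, fluid_step, d0=0.0, beats=10, tol=1e-3):
    """Staggered beat-by-beat coupling of two single-physics solvers.

    mech_step:  callable(pressure_or_None) -> displacement (stand-in mechanics solver;
                receives None on the first beat, when no pressure feedback exists yet).
    fluid_step: callable(displacement) -> pressure (stand-in fluid solver).
    Returns the converged displacement and the per-beat displacement change.
    """
    d, p = d0, None
    history = []
    for _ in range(beats):
        d_new = mech_step(p)           # mechanics driven by last beat's pressure
        p = fluid_step(d_new)          # fluid driven by the updated wall motion
        history.append(abs(d_new - d)) # inter-beat displacement change
        d = d_new
        if history[-1] < tol:
            break                      # feedback has converged
    return d, history
```

With weak feedback (small pressure-to-displacement gain), the iteration contracts rapidly, mirroring the observation that the fluid-dynamic back-reaction in the healthy heart is limited and one coupling iteration already suffices.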

    Neural Fields with Hard Constraints of Arbitrary Differential Order

    While deep learning techniques have become extremely popular for solving a broad range of optimization problems, methods to enforce hard constraints during optimization, particularly on deep neural networks, remain underdeveloped. Inspired by the rich literature on meshless interpolation and its extension to spectral collocation methods in scientific computing, we develop a series of approaches for enforcing hard constraints on neural fields, which we refer to as Constrained Neural Fields (CNF). The constraints can be specified as a linear operator applied to the neural field and its derivatives. We also design specific model representations and training strategies for problems where standard models may encounter difficulties, such as conditioning of the system, memory consumption, and capacity of the network when being constrained. Our approaches are demonstrated in a wide range of real-world applications. Additionally, we develop a framework that enables highly efficient model and constraint specification, which can be readily applied to any downstream task where hard constraints need to be explicitly satisfied during optimization. Comment: 37th Conference on Neural Information Processing Systems (NeurIPS 2023)
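The meshless-interpolation idea behind enforcing hard point constraints can be shown in miniature: wrap an arbitrary base field g with a radial-basis-function correction that interpolates the residual at the constraint points, so the wrapped field satisfies the constraints exactly. This toy handles only value constraints with a Gaussian kernel and is not the paper's CNF construction; the function name and kernel choice are illustrative.

```python
import numpy as np

def constrain(g, X, y, eps=1.0):
    """Return f with f(X[i]) == y[i] exactly, for any base field g.

    g:   callable mapping an (n, d) array of points to (n,) values.
    X:   (m, d) constraint locations; y: (m,) target values.
    eps: Gaussian kernel width (assumed hyperparameter).
    """
    X = np.atleast_2d(X)
    # Gaussian RBF kernel between two point sets
    kernel = lambda A, B: np.exp(-eps * ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1))
    # collocation system: interpolate the residual y - g(X) at the constraint points
    w = np.linalg.solve(kernel(X, X), y - g(X))
    return lambda Z: g(np.atleast_2d(Z)) + kernel(np.atleast_2d(Z), X) @ w
```

Because the correction is linear in the kernel weights, the same pattern extends to any linear operator applied to the field (derivatives included) by differentiating the kernel instead, which is the direction the abstract's spectral-collocation framing points to.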