8,193 research outputs found
A human motion tracking method for programming by demonstration
Programming by demonstration (PbD) is an intuitive approach to impart a task to a robot from one or several demonstrations by a human teacher. Acquiring the demonstrations involves solving the correspondence problem when the teacher and the learner differ in sensing and actuation. Kinesthetic guidance is widely used to perform demonstrations: the robot is manipulated by the teacher and the demonstrations are recorded by the robot's encoders. In this way, the correspondence problem is trivial, but the teacher's dexterity is impaired, which may affect the PbD process. Methods that are more practical for the teacher usually require the identification of some mappings to solve the correspondence problem. The demonstration acquisition method is therefore based on a compromise between the difficulty of identifying these mappings, the level of accuracy of the recorded elements, and the user-friendliness and convenience for the teacher. This thesis proposes an inertial human motion tracking method based on inertial measurement units (IMUs) for PbD of pick-and-place tasks. Compared to kinesthetic guidance, IMUs are convenient and easy to use but can present limited accuracy. Their potential for PbD applications is investigated.
To estimate the trajectory of the teacher's hand, three IMUs are placed on the teacher's arm segments (arm, forearm and hand) to estimate their orientations. A specific method is proposed to partially compensate the well-known drift of the sensor orientation estimation around the gravity direction by exploiting the particular configuration of the demonstration. This method, called heading reset, is based on the assumption that the sensor passes through its original heading with stationary phases several times during the demonstration. The heading reset is implemented in an integration and vector observation algorithm. Several experiments illustrate the advantages of this heading reset.
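The heading-reset idea described above can be sketched as follows. This is a minimal illustration, not the thesis's algorithm: the detection thresholds, the stationarity test on gyroscope magnitude, and the simple offset bookkeeping are all assumptions for the sake of the example.

```python
import numpy as np

def heading_reset(yaw, gyro_norm, yaw0=0.0, still_thresh=0.05, yaw_thresh=0.2):
    """Illustrative sketch of a heading reset: when the sensor is stationary
    (small gyroscope magnitude) and its estimated heading is close to the
    initial heading yaw0, snap the heading back to yaw0 to cancel the drift
    accumulated so far. Thresholds are hypothetical, not from the thesis."""
    yaw = np.asarray(yaw, dtype=float).copy()
    offset = 0.0  # accumulated drift correction
    for t in range(len(yaw)):
        yaw[t] -= offset  # apply the correction estimated so far
        stationary = gyro_norm[t] < still_thresh
        if stationary and abs(yaw[t] - yaw0) < yaw_thresh:
            offset += yaw[t] - yaw0  # absorb the residual drift
            yaw[t] = yaw0            # reset heading to its original value
    return yaw
```

In this toy model the drift is treated as a slowly varying yaw offset; the thesis embeds the reset inside a full integration and vector observation algorithm.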
A comprehensive inertial human hand motion tracking (IHMT) method for PbD is then developed. It includes an initialization procedure to estimate the orientation of each sensor with respect to the corresponding human arm segment and the initial orientation of each sensor with respect to the teacher-attached frame. The procedure involves a rotation and a static position of the extended arm, making the measurement system robust to the positioning of the sensors on the segments. A procedure for estimating the position of the human teacher relative to the robot and a calibration procedure for the parameters of the method are also proposed. Finally, the error of the human hand trajectory is measured experimentally and lies between 28.5 mm and 61.8 mm. The mappings to solve the correspondence problem are identified. Unfortunately, the observed level of accuracy of this IHMT method is not sufficient for a PbD process.
In order to reach the necessary level of accuracy, a method is proposed to correct the hand trajectory obtained by IHMT using vision data, since a vision system is complementary to inertial sensors. For the sake of simplicity and robustness, the vision system tracks only the objects, not the teacher. The correction is based on so-called Positions Of Interest (POIs) and involves three steps: the identification of the POIs in the inertial and vision data, the pairing of hand POIs to object POIs that correspond to the same action in the task, and finally the correction of the hand trajectory based on the pairs of POIs. The complete demonstration acquisition method is experimentally evaluated in a full PbD process. This experiment reveals the advantages of the proposed method over kinesthetic guidance in the context of this work.
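The final correction step of the POI-based method can be sketched roughly as follows. This is a hedged illustration only: it assumes POI detection and pairing have already been done, and the linear interpolation of per-POI offsets is a guess at a plausible correction scheme, not the thesis's actual one.

```python
import numpy as np

def correct_trajectory(traj, poi_idx, object_pois):
    """Illustrative POI-based correction (assumptions: POIs already detected
    and paired). Shift the hand trajectory so each hand POI coincides with
    its paired object POI, interpolating the offset linearly between
    consecutive POIs along the time axis."""
    traj = np.asarray(traj, dtype=float).copy()
    poi_idx = np.asarray(poi_idx)
    offsets = np.asarray(object_pois, dtype=float) - traj[poi_idx]  # per-POI shift
    t = np.arange(len(traj))
    for d in range(traj.shape[1]):
        # np.interp holds the first/last offset constant outside the POI range
        traj[:, d] += np.interp(t, poi_idx, offsets[:, d])
    return traj
```

Between two paired POIs the correction blends smoothly from one offset to the next, so the corrected trajectory passes exactly through the object POIs while preserving the inertial trajectory's shape in between.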
LIPIcs, Volume 251, ITCS 2023, Complete Volume
Ecology of methanotrophs in a landfill methane biofilter
Decomposing landfill waste is a significant anthropogenic source of the potent climate-active gas methane (CH₄). To mitigate fugitive methane emissions, Norfolk County Council are trialling a landfill biofilter designed to harness the methane-oxidizing potential of methanotrophic bacteria. These methanotrophs convert CH₄ to CO₂ or biomass and act as CH₄ sinks.
The most active CH₄-oxidising regions of the Strumpshaw biofilter were identified from in-situ temperature, CH₄, O₂ and CO₂ profiles, while soil CH₄ oxidation potential was estimated and used to confirm methanotroph activity and determine optimal soil moisture conditions for CH₄ oxidation. Most CH₄ oxidation was observed to occur in the top 60 cm of the biofilter (up to 50% of CH₄ input) at temperatures around 50°C, with optimal soil moisture of 10-27.5%. A decrease in in-situ temperature following CH₄ supply interruption suggested the high biofilter temperatures were driven by CH₄ oxidation.
The biofilter soil bacterial community was profiled by 16S rRNA gene analysis, with methanotrophs accounting for ~5-10% of bacteria. Active methanotrophs at a range of incubation temperatures were identified by ¹³CH₄ DNA stable-isotope probing coupled with 16S rRNA gene amplicon and metagenome analysis. These methods identified Methylocella, Methylobacter, Methylocystis and Crenothrix as potential CH₄ oxidisers at the lower temperatures (30°C/37°C) observed following system start-up or gas-feed interruption. At higher temperatures typical of established biofilter operation (45°C/50°C), Methylocaldum and an unassigned Methylococcaceae species were the dominant active methanotrophs.
Finally, novel methanotrophs Methylococcus capsulatus (Norfolk) and Methylocaldum szegediense (Norfolk) were isolated from biofilter soil enrichments. Based on genome-to-MAG similarity, Methylocaldum szegediense (Norfolk) may be very closely related to, or the same species as, one of the most abundant active methanotrophs in a metagenome from a 50°C biofilter soil incubation. This isolate was capable of growth over a broad temperature range (37-62°C), including the higher in-situ biofilter temperatures (>50°C).
Jacobian-Scaled K-means Clustering for Physics-Informed Segmentation of Reacting Flows
This work introduces Jacobian-scaled K-means (JSK-means) clustering, a
physics-informed clustering strategy centered on the K-means framework. The
method allows for the injection of underlying physical knowledge into the
clustering procedure through a distance function modification: instead of
leveraging conventional Euclidean distance vectors, the JSK-means procedure
operates on distance vectors scaled by matrices obtained from dynamical system
Jacobians evaluated at the cluster centroids. The goal of this work is to show
how the JSK-means algorithm -- without modifying the input dataset -- produces
clusters that capture regions of dynamical similarity, in that the clusters are
redistributed towards high-sensitivity regions in phase space and are described
by similarity in the source terms of samples instead of the samples themselves.
The algorithm is demonstrated on a complex reacting flow simulation dataset (a
channel detonation configuration), where the dynamics in the thermochemical
composition space are known through the highly nonlinear and stiff
Arrhenius-based chemical source terms. Interpretations of cluster partitions in
both physical space and composition space reveal how JSK-means shifts clusters
produced by standard K-means towards regions of high chemical sensitivity
(e.g., towards regions of peak heat release rate near the detonation reaction
zone). The findings presented here illustrate the benefits of utilizing
Jacobian-scaled distances in clustering techniques, and the JSK-means method in
particular displays promising potential for improving existing partition-based
modeling strategies in reacting flow (and other multi-physics) applications.
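The distance modification at the heart of JSK-means can be sketched compactly. This is a minimal illustration under stated assumptions, not the paper's implementation: `jacobian` stands in for the dynamical-system Jacobian evaluated at a centroid, and the initialization and update rules are the plain K-means ones.

```python
import numpy as np

def jsk_means(X, jacobian, k, n_iter=50):
    """Sketch of Jacobian-scaled K-means: the distance from sample x to
    centroid c is ||J(c) @ (x - c)||, where J(c) is the system Jacobian
    evaluated at the centroid. `jacobian` is a user-supplied callable."""
    centroids = X[:k].astype(float).copy()  # naive init: first k samples
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        J = np.stack([jacobian(c) for c in centroids])   # (k, d, d)
        resid = X[None, :, :] - centroids[:, None, :]    # (k, n, d)
        scaled = np.einsum('kij,knj->kni', J, resid)     # J(c) @ (x - c)
        labels = np.argmin(np.linalg.norm(scaled, axis=-1), axis=0)
        for j in range(k):
            if np.any(labels == j):                      # standard centroid update
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids
```

With the identity Jacobian this reduces to ordinary K-means; with a stiff chemical-source-term Jacobian, directions of high sensitivity dominate the norm, which is what redistributes clusters toward high-sensitivity regions of phase space.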
A Benchmark Comparison of Visual Place Recognition Techniques for Resource-Constrained Embedded Platforms
Autonomous navigation has become a widely researched area over the past few years, owing to its necessity in creating a fully autonomous robotic system. Autonomous navigation is an exceedingly difficult task to accomplish in and of itself. Successful navigation relies heavily on the ability to self-localise within a given environment: without this awareness of one's own location, it is impossible to navigate successfully in an autonomous manner. Since its inception, Simultaneous Localization and Mapping (SLAM) has become one of the most widely researched areas of autonomous navigation. SLAM focuses on self-localization within a mapped or un-mapped environment, and on constructing or updating the map of one's surroundings. Visual Place Recognition (VPR) is an essential part of any SLAM system; it relies on visual cues to determine one's location within a mapped environment.
This thesis presents two main topics within the field of VPR. First, this thesis presents a benchmark analysis of several popular embedded platforms when performing VPR. The presented benchmark analyses six different VPR techniques
across three different datasets, and investigates accuracy, CPU usage, memory usage, processing time and power consumption. The benchmark demonstrated a clear relationship between platform architecture and the metrics measured, with platforms of the same architecture achieving comparable accuracy and algorithm efficiency.
Additionally, the Raspberry Pi platform was noted as a standout in terms of algorithm efficiency and power consumption.
Secondly, this thesis proposes an evaluation framework intended to provide information about a VPR technique's usability within a real-time application. The approach
makes use of the incoming frame rate of an image stream and the VPR frame rate (the rate at which the technique can perform VPR) to determine how efficient VPR techniques would be in a real-time environment. This evaluation framework determined that CoHOG would be the most effective algorithm to deploy in a real-time environment, as it had the best ratio between computation time and accuracy.
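The frame-rate comparison underlying the evaluation framework can be illustrated with a toy helper. This is a hypothetical metric for illustration only, not the thesis's exact formula: it simply reports what fraction of incoming frames a technique could process in real time.

```python
def realtime_utilization(incoming_fps: float, vpr_fps: float) -> float:
    """Hypothetical helper (not the thesis's metric): fraction of incoming
    frames a VPR technique can process in real time. A value of 1.0 means
    the technique keeps up with the camera; below 1.0, frames must be
    dropped or queued."""
    if incoming_fps <= 0:
        raise ValueError("incoming_fps must be positive")
    return min(1.0, vpr_fps / incoming_fps)
```

For example, a technique achieving 15 VPR matches per second against a 30 fps camera stream could service at most half the incoming frames, which is the kind of trade-off the framework weighs against accuracy.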
One stone, two birds: A lightweight multidimensional learned index with cardinality support
Innovative learning-based structures have recently been proposed to tackle
index and cardinality estimation (CE) tasks, specifically learned indexes and
data-driven cardinality estimators. These structures exhibit excellent
performance in capturing data distribution, making them promising for
integration into AI-driven database kernels. However, accurate estimation for
corner-case queries requires a large number of network parameters, resulting
in higher computing resources on expensive GPUs and more storage overhead.
Additionally, implementing the CE and the learned index separately is
redundant, as the distribution of a single table is stored twice. These
issues present challenges for designing AI-driven database kernels: in real
database scenarios, a compact kernel is necessary to process queries within a
limited storage and time budget. Directly integrating these two AI approaches
would result in a heavy and complex kernel due to the large number of network
parameters and the repeated storage of data distribution parameters. Our
proposed CardIndex structure kills two birds with one stone. It is a fast
multidimensional learned index that also serves as a lightweight cardinality
estimator, with parameters scaled at the KB level. Due to its special
structure and small parameter size, it can obtain both CDF and PDF
information for tuples with a latency as low as 1 to 10 microseconds. For
low-selectivity estimation tasks, we do not increase the model's parameters
to obtain fine-grained point density. Instead, we fully utilize our
structure's characteristics and propose a hybrid estimation algorithm that
provides fast and exact results.
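The "one structure, two roles" idea can be illustrated with a one-dimensional toy. This sketch is not the paper's CardIndex (which is multidimensional and far more sophisticated); it only shows how a single learned CDF model can serve both as an index (predicted position plus bounded local search) and as a cardinality estimator (scaled CDF difference).

```python
import numpy as np

class TinyLearnedIndex:
    """Illustrative 1-D learned index whose CDF model doubles as a
    cardinality estimator. The linear CDF model is an assumption made
    for simplicity; real learned indexes use richer model hierarchies."""

    def __init__(self, keys):
        self.keys = np.sort(np.asarray(keys, dtype=float))
        self.n = len(self.keys)
        # Fit a simple linear CDF model F(x) ~= a*x + b to the empirical CDF
        emp_cdf = np.arange(1, self.n + 1) / self.n
        self.a, self.b = np.polyfit(self.keys, emp_cdf, 1)

    def cdf(self, x):
        return float(np.clip(self.a * x + self.b, 0.0, 1.0))

    def lookup(self, key, err=8):
        # Index role: predicted position, then a bounded local search
        pos = int(self.cdf(key) * (self.n - 1))
        lo, hi = max(0, pos - err), min(self.n, pos + err + 1)
        hits = np.where(self.keys[lo:hi] == key)[0]
        return lo + hits[0] if len(hits) else None

    def estimate_range(self, lo, hi):
        # CE role: estimated cardinality of lo < key < hi is n*(F(hi)-F(lo))
        return self.n * max(0.0, self.cdf(hi) - self.cdf(lo))
```

Because the same fitted distribution answers both questions, the table's distribution is stored once, which is the redundancy the abstract argues a combined structure avoids.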
Towards a centralized multicore automotive system
Today’s automotive systems are inundated with embedded electronics to host chassis, powertrain, infotainment, advanced driver assistance systems, and other modern vehicle functions. As many as 100 embedded microcontrollers execute hundreds of millions of lines of code in a single vehicle. To control the increasing complexity in vehicle electronics and services, automakers are planning to consolidate different on-board automotive functions as software tasks on centralized multicore hardware platforms. However, these vehicle software services have different and contrasting timing, safety, and security requirements. Existing vehicle operating systems are ill-equipped to provide all the required service guarantees on a single machine. A centralized automotive system aims to tackle this by assigning software tasks to multiple criticality domains or levels according to their consequences of failures, or international safety standards like ISO 26262. This research investigates several emerging challenges in time-critical systems for a centralized multicore automotive platform and proposes a novel vehicle operating system framework to address them.
This thesis first introduces an integrated vehicle management system (VMS), called DriveOS™, for a PC-class multicore hardware platform. Its separation-kernel design enables temporal and spatial isolation among critical and non-critical vehicle services in different domains on the same machine. Time- and safety-critical vehicle functions are implemented in a sandboxed real-time operating system (RTOS) domain, and non-critical software is developed in a sandboxed general-purpose OS (e.g., Linux, Android) domain. To leverage the advantages of model-driven vehicle function development, DriveOS provides a multi-domain application framework in Simulink. This thesis also presents a real-time task pipeline scheduling algorithm on multiprocessors for communication between connected vehicle services with end-to-end guarantees. The benefits and performance of the overall automotive system framework are demonstrated with hardware-in-the-loop testing using real-world applications, car datasets and simulated benchmarks, and with an early-stage deployment in a production-grade luxury electric vehicle.
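The kind of end-to-end guarantee mentioned above can be illustrated with a classic bound for periodic task pipelines. This is a textbook-style sketch, not DriveOS's scheduling algorithm: it assumes each stage runs periodically and a data sample may wait up to one full period at each stage before being picked up.

```python
def pipeline_latency_bound(stages):
    """Classic upper bound on end-to-end latency of a periodic task
    pipeline: a sample may wait up to one period T_i at each stage, then
    takes up to its worst-case execution time C_i to be processed, so
    latency <= sum(T_i + C_i). `stages` is a list of (period, wcet) pairs.
    This is an illustrative bound, not the thesis's algorithm."""
    return sum(period + wcet for period, wcet in stages)

def meets_deadline(stages, deadline):
    """Check whether the pessimistic latency bound fits the deadline."""
    return pipeline_latency_bound(stages) <= deadline
```

A scheduling algorithm like the one in the thesis aims to do much better than this pessimistic bound, for example by aligning stage release times, but the bound shows why uncoordinated periodic stages alone cannot provide tight end-to-end guarantees.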
A conceptual framework for developing dashboards for big mobility data
Dashboards are an increasingly popular form of data visualization. Large, complex, and dynamic mobility data present a number of challenges in dashboard design. The overall aim of dashboard design is to improve information communication and decision making, though big mobility data in particular require considering privacy alongside size and complexity. Taking these issues into account, a gap remains between wrangling mobility data and developing meaningful dashboard output. Therefore, there is a need for a framework that bridges this gap to support the mobility dashboard development and design process. In this paper we outline a conceptual framework for mobility data dashboards that provides guidance for the development process while considering mobility data structure, volume, complexity, varied application contexts, and privacy constraints. We illustrate the proposed framework's components and process using example mobility dashboards with varied inputs, end-users and objectives. Overall, the framework offers a basis for developers to understand how informational displays of big mobility data are determined by end-user needs as well as the types of data selection, transformation, and display available to particular mobility datasets.
Challenges for Monocular 6D Object Pose Estimation in Robotics
Object pose estimation is a core perception task that enables, for example,
object grasping and scene understanding. The widely available, inexpensive and
high-resolution RGB sensors and CNNs that allow for fast inference based on
this modality make monocular approaches especially well suited for robotics
applications. We observe that previous surveys on object pose estimation
establish the state of the art for varying modalities, single- and multi-view
settings, and datasets and metrics that consider a multitude of applications.
We argue, however, that those works' broad scope hinders the identification of
open challenges that are specific to monocular approaches and the derivation of
promising future challenges for their application in robotics. By providing a
unified view on recent publications from both robotics and computer vision, we
find that occlusion handling, novel pose representations, and formalizing and
improving category-level pose estimation are still fundamental challenges that
are highly relevant for robotics. Moreover, to further improve robotic
performance, large object sets, novel objects, refractive materials, and
uncertainty estimates are central, largely unsolved open challenges. In order
to address them, ontological reasoning, deformability handling, scene-level
reasoning, realistic datasets, and the ecological footprint of algorithms need
to be improved.
Comment: arXiv admin note: substantial text overlap with arXiv:2302.1182