Improving Cross-Lingual Transfer Learning for Event Detection
The widespread adoption of applications powered by Artificial Intelligence (AI) backbones has unquestionably changed the way we interact with the world around us. Applications such as automated personal assistants, automatic question answering, and machine translation systems have become mainstays of modern culture thanks to the recent considerable advances in Natural Language Processing (NLP) research. Nonetheless, with over 7,000 spoken languages in the world, a considerable number of marginalized communities remain unable to benefit from these technological advancements, largely due to the language they speak. Cross-Lingual Learning (CLL) addresses this issue by transferring knowledge acquired from a popular, high-resource source language (e.g., English, Chinese, or Spanish) to a less favored, lower-resourced target language (e.g., Urdu or Swahili). This dissertation leverages the Event Detection (ED) sub-task of Information Extraction (IE) as a testbed and presents three novel approaches that improve cross-lingual transfer learning from distinct perspectives: (1) direct knowledge transfer, (2) hybrid knowledge transfer, and (3) few-shot learning.
Self-supervised learning for transferable representations
Machine learning has undeniably achieved remarkable advances thanks to large labelled datasets and supervised learning. However, this progress is constrained by the labour-intensive annotation process. It is not feasible to generate extensive labelled datasets for every problem we aim to address. Consequently, there has been a notable shift in recent times toward approaches that solely leverage raw data. Among these, self-supervised learning has emerged as a particularly powerful approach, offering scalability to massive datasets and showcasing considerable potential for effective knowledge transfer. This thesis investigates self-supervised representation learning with a strong focus on computer vision applications. We provide a comprehensive survey of self-supervised methods across various modalities, introducing a taxonomy that categorises them into four distinct families while also highlighting practical considerations for real-world implementation. Our focus thereafter is on the computer vision modality, where we perform a comprehensive benchmark evaluation of state-of-the-art self-supervised models against many diverse downstream transfer tasks. Our findings reveal that self-supervised models often outperform supervised learning across a spectrum of tasks, albeit with correlations weakening as tasks transition beyond classification, particularly for datasets with distribution shifts. Digging deeper, we investigate the influence of data augmentation on the transferability of contrastive learners, uncovering a trade-off between spatial and appearance-based invariances that generalises to real-world transformations. This begins to explain the differing empirical performances achieved by self-supervised learners on different downstream tasks, and it showcases the advantages of specialised representations produced with tailored augmentation.
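The spatial- versus appearance-invariance trade-off described above can be made concrete with two toy augmentations. This is a hypothetical NumPy sketch, not the thesis's actual pipeline: random cropping promotes invariance to spatial transformations, while colour jitter promotes invariance to appearance changes, and the emphasis placed on each shapes what the learned representation becomes invariant to.

```python
import numpy as np

def random_crop(img, rng, size):
    """Spatial augmentation: promotes invariance to position and scale."""
    h, w = img.shape
    y = rng.integers(0, h - size + 1)
    x = rng.integers(0, w - size + 1)
    return img[y:y + size, x:x + size]

def color_jitter(img, rng, strength=0.2):
    """Appearance augmentation: promotes invariance to brightness/contrast."""
    scale = 1.0 + rng.uniform(-strength, strength)
    shift = rng.uniform(-strength, strength)
    return np.clip(img * scale + shift, 0.0, 1.0)

# Two "views" of the same image for a contrastive objective: emphasising
# cropping yields spatially invariant features, emphasising jitter yields
# appearance-invariant ones -- the trade-off discussed above.
rng = np.random.default_rng(0)
img = np.linspace(0.0, 1.0, 64).reshape(8, 8)
view_a = color_jitter(random_crop(img, rng, 4), rng)
view_b = color_jitter(random_crop(img, rng, 4), rng)
```

A contrastive learner would be trained to map `view_a` and `view_b` to nearby points in representation space.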
Finally, we introduce a novel self-supervised pre-training algorithm for object detection, aligning pre-training with downstream architecture and objectives, leading to reduced localisation errors and improved label efficiency. In conclusion, this thesis contributes a comprehensive understanding of self-supervised representation learning and its role in enabling effective transfer across computer vision tasks.
Multidisciplinary perspectives on Artificial Intelligence and the law
This open access book presents an interdisciplinary, multi-authored, edited collection of chapters on Artificial Intelligence (‘AI’) and the Law. AI technology has come to play a central role in the modern data economy. Through a combination of increased computing power, the growing availability of data and the advancement of algorithms, AI has now become an umbrella term for some of the most transformational technological breakthroughs of this age. The importance of AI stems from both the opportunities that it offers and the challenges that it entails. While AI applications hold the promise of economic growth and efficiency gains, they also create significant risks and uncertainty. The potential and perils of AI have thus come to dominate modern discussions of technology and ethics – and although AI was initially allowed to largely develop without guidelines or rules, few would deny that the law is set to play a fundamental role in shaping the future of AI. As the debate over AI is far from over, the need for rigorous analysis has never been greater. This book thus brings together contributors from different fields and backgrounds to explore how the law might provide answers to some of the most pressing questions raised by AI. An outcome of the Católica Research Centre for the Future of Law and its interdisciplinary working group on Law and Artificial Intelligence, it includes contributions by leading scholars in the fields of technology, ethics and the law.
Hybrid Cloud-Based Privacy Preserving Clustering as Service for Enterprise Big Data
Clustering as a service is offered by many cloud service providers. It helps enterprises discover hidden patterns and extract knowledge from the large volumes of big data they generate. Although it brings considerable value to enterprises, it also exposes their data to various security and privacy threats. Privacy-preserving clustering has been proposed as a solution to this problem. However, existing outsourced privacy-preserving clustering service models impose substantial overhead on the querying user, lack adaptivity to incremental data, and require frequent interaction between the service provider and the querying user; they also offer the querying user no way to personalize the clustering. This work, "Locality Sensitive Hashing for Transformed Dataset (LSHTD)", proposes a hybrid cloud-based clustering-as-a-service model for streaming data that addresses these problems in existing models such as privacy-preserving k-means clustering outsourcing under multiple keys (PPCOM) and secure nearest neighbor clustering (SNNC). The solution combines a hybrid cloud with the LSHTD clustering algorithm in an outsourced service model. In experiments, the proposed solution reduces computation cost by 23% and communication cost by 6% compared to existing works, while achieving better clustering accuracy, with an ARI improvement greater than 4.59%.
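The abstract does not give the LSHTD algorithm itself. As a hedged illustration of the general idea, a generic random-hyperplane LSH over a noise-perturbed ("transformed") dataset might look like the following; the function names and the perturbation scheme are assumptions for illustration, not the paper's actual construction.

```python
import numpy as np

def lsh_signature(x, hyperplanes):
    """Random-hyperplane LSH: one sign bit per hyperplane."""
    return tuple((hyperplanes @ x > 0).astype(int))

def transform_dataset(data, rng, noise_scale=0.01):
    """Illustrative privacy transform: small random perturbation of records."""
    return data + rng.normal(0.0, noise_scale, data.shape)

def bucket(data, hyperplanes):
    """Group record indices by LSH signature; clustering can then run per
    bucket on the transformed data, never exposing the raw records."""
    buckets = {}
    for i, x in enumerate(data):
        buckets.setdefault(lsh_signature(x, hyperplanes), []).append(i)
    return buckets

rng = np.random.default_rng(0)
data = np.array([[1.0, 1.0], [1.01, 1.01], [-1.0, -1.0]])
hyperplanes = rng.normal(size=(8, 2))
buckets = bucket(transform_dataset(data, rng), hyperplanes)
```

Nearby records tend to share a signature and land in the same bucket, which is what makes per-bucket clustering cheap for the querying user.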
A human motion measurement method for programming by demonstration
Programming by demonstration (PbD) is an intuitive approach to imparting a task to a robot from one or several demonstrations by a human teacher. Acquiring the demonstrations involves solving the correspondence problem when the teacher and the learner differ in sensing and actuation. Kinesthetic guidance is widely used to perform demonstrations: the robot is manipulated by the teacher and the demonstrations are recorded by the robot's encoders. In this way, the correspondence problem is trivial, but the teacher's dexterity is impaired, which may affect the PbD process. Methods that are more practical for the teacher usually require identifying some mappings to solve the correspondence problem. The choice of demonstration acquisition method is therefore a compromise between the difficulty of identifying these mappings, the accuracy of the recorded elements, and the user-friendliness and convenience for the teacher. This thesis proposes an inertial human motion tracking method based on inertial measurement units (IMUs) for PbD of pick-and-place tasks. Compared to kinesthetic guidance, IMUs are convenient and easy to use but can be of limited accuracy. Their potential for PbD applications is investigated.
To estimate the trajectory of the teacher's hand, 3 IMUs are placed on his or her arm segments (arm, forearm and hand) to estimate their orientations. A specific method is proposed to partially compensate for the well-known drift of the sensor orientation estimate around the gravity direction by exploiting the particular configuration of the demonstration. This method, called heading reset, is based on the assumption that the sensor passes through its original heading, with stationary phases, several times during the demonstration. The heading reset is implemented in an integration and vector observation algorithm. Several experiments illustrate its advantages.
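The heading-reset idea can be illustrated with a 1-D toy tracker; this is a deliberately simplified sketch, not the thesis's full integration and vector observation algorithm. The yaw angle is re-anchored to the known reference heading whenever a stationary phase is detected, cancelling the drift accumulated by dead reckoning.

```python
def track_yaw(gyro_z, dt, ref_heading=0.0, still_thresh=0.01):
    """Toy 1-D heading tracker with 'heading reset': integrate the yaw
    rate, and whenever a stationary phase is detected, re-anchor the
    heading to the known reference, cancelling accumulated drift."""
    yaw = ref_heading
    history = []
    for w in gyro_z:
        yaw += w * dt                # dead-reckoning integration
        if abs(w) < still_thresh:    # stationary phase detected
            yaw = ref_heading        # heading reset
        history.append(yaw)
    return history

# Ten biased samples accumulate drift; the stationary samples reset it.
trace = track_yaw([0.1] * 10 + [0.0] * 3, dt=1.0)
```

After the biased samples the estimate has drifted by 1 rad; the first stationary sample snaps it back to the reference heading.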
A comprehensive inertial human hand motion tracking (IHMT) method for PbD is then developed. It includes an initialization procedure to estimate the orientation of each sensor with respect to the corresponding arm segment and the initial orientation of the sensor with respect to the teacher-attached frame. The procedure involves a rotation and a static posture of the extended arm, making the measurement system robust to the positioning of the sensors on the segments. A procedure for estimating the position of the human teacher relative to the robot and a calibration procedure for the parameters of the method are also proposed. Finally, the error of the human hand trajectory is measured experimentally and found to lie between 28.5 mm and 61.8 mm. The mappings needed to solve the correspondence problem are identified. Unfortunately, this level of accuracy is not sufficient for a PbD process.
In order to reach the necessary level of accuracy, a method is proposed to correct the hand trajectory obtained by IHMT using vision data, which is complementary to inertial sensing. For the sake of simplicity and robustness, the vision system tracks only the objects, not the teacher. The correction is based on so-called Positions Of Interest (POIs) and involves 3 steps: identifying the POIs in the inertial and vision data; pairing hand POIs with object POIs that correspond to the same action in the task; and finally, correcting the hand trajectory based on the pairs of POIs. The complete demonstration acquisition method is experimentally evaluated in a full PbD process. This experiment reveals the advantages of the proposed method over kinesthetic guidance in the context of this work.
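The pairing and correction steps of this POI-based scheme can be sketched in a simplified 2-D form; the nearest-neighbour pairing rule and the single global offset used here are illustrative assumptions, not the thesis's actual correction model.

```python
import numpy as np

def pair_pois(hand_pois, object_pois):
    """Pairing step: match each hand POI with its nearest object POI."""
    pairs = []
    for h in hand_pois:
        nearest = min(object_pois, key=lambda o: float(np.linalg.norm(h - o)))
        pairs.append((h, nearest))
    return pairs

def correct_trajectory(trajectory, pairs):
    """Correction step: shift the trajectory by the mean hand-to-object offset."""
    offset = np.mean([o - h for h, o in pairs], axis=0)
    return [p + offset for p in trajectory]

hand_pois = [np.array([0.0, 0.0]), np.array([1.0, 1.0])]
object_pois = [np.array([0.1, 0.1]), np.array([1.1, 1.1])]
pairs = pair_pois(hand_pois, object_pois)
corrected = correct_trajectory([np.array([0.5, 0.5])], pairs)
```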
LIPIcs, Volume 251, ITCS 2023, Complete Volume
AI-based design methodologies for hot form quench (HFQ®)
This thesis aims to develop advanced design methodologies that fully exploit the capabilities of the Hot Form Quench (HFQ®) stamping process for forming complex geometric features in high-strength aluminium alloy structural components. While previous research has focused on material models for FE simulations, such simulations are unsuitable for early-phase design due to their high computational cost and expertise requirements. This project has two main objectives: first, to develop design guidelines for the early-stage design phase; and second, to create a machine-learning-based platform that can optimise 3D geometries under hot stamping constraints, for both early- and late-stage design. With these methodologies, the aim is to facilitate the incorporation of HFQ capabilities into component geometry design, enabling the full realisation of its benefits.
To achieve the objectives of this project, two main efforts were undertaken. Firstly, the analysis of aluminium alloys for stamping deep corners was simplified by identifying the effects of corner geometry and material characteristics on post-form thinning distribution. New equation sets were proposed to model trends and design maps were created to guide component design at early stages. Secondly, a platform was developed to optimise 3D geometries for stamping, using deep learning technologies to incorporate manufacturing capabilities. This platform combined two neural networks: a geometry generator based on Signed Distance Functions (SDFs), and an image-based manufacturability surrogate model. The platform used gradient-based techniques to update the inputs to the geometry generator based on the surrogate model's manufacturability information. The effectiveness of the platform was demonstrated on two geometry classes, Corners and Bulkheads, with five case studies conducted to optimise under post-stamped thinning constraints. Results showed that the platform allowed for free morphing of complex geometries, leading to significant improvements in component quality.
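The gradient-based update loop of such a platform can be caricatured in one dimension. Here a hypothetical analytic surrogate and a single corner-radius parameter stand in for the SDF geometry generator and the image-based manufacturability model; none of these functions correspond to the thesis's actual implementation.

```python
import numpy as np

def sdf_circle(p, r):
    """Signed distance from point p to a circle of radius r (the SDF idea:
    negative inside, zero on the surface, positive outside)."""
    return float(np.linalg.norm(p)) - r

def surrogate_thinning(r):
    """Hypothetical surrogate: post-form thinning grows as the radius shrinks."""
    return 0.2 / r

def optimise_radius(r0, max_thinning=0.25, lr=0.05, steps=200):
    """Gradient-based loop: follow the surrogate's gradient until the
    manufacturing (thinning) constraint is satisfied."""
    r = r0
    for _ in range(steps):
        if surrogate_thinning(r) <= max_thinning:
            break
        grad = -0.2 / r ** 2     # d(thinning)/dr, analytic here
        r -= lr * grad           # step against the gradient: enlarge r
    return r

r_opt = optimise_radius(0.5)
```

In the real platform the gradient flows through the surrogate network into the latent inputs of the SDF generator, morphing the full 3D geometry rather than a single radius.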
The research outcomes represent a significant contribution to the field of technologically advanced manufacturing methods and offer promising avenues for future research. The developed methodologies provide practical solutions for designers to identify optimal component geometries, ensuring manufacturing feasibility and reducing design development time and costs. The potential applications of these methodologies extend to real-world industrial settings and can significantly contribute to the continued advancement of the manufacturing sector.
Proceedings of SIRM 2023 - The 15th European Conference on Rotordynamics
It was our great honor and pleasure to host the SIRM Conference in Darmstadt for the third time, after 2003 and 2011. Rotordynamics covers a huge variety of applications and challenges, all of which are in the scope of this conference. The conference was opened with a keynote lecture by Rainer Nordmann, one of the three founders of SIRM "Schwingungen in rotierenden Maschinen". In total, 53 papers passed our strict review process and were presented, which impressively shows that rotordynamics is as relevant as ever. These contributions cover a very wide spectrum of session topics: fluid bearings and seals; air foil bearings; magnetic bearings; rotor-blade interaction; rotor-fluid interactions; unbalance and balancing; vibrations in turbomachines; vibration control; instability; electrical machines; monitoring, identification and diagnosis; advanced numerical tools and nonlinearities; as well as general rotordynamics. The international character of the conference has been significantly enhanced by the Scientific Board since the 14th SIRM, resulting on the one hand in an expanded Scientific Committee, which now consists of 31 members from 13 different European countries, and on the other hand in the new name "European Conference on Rotordynamics". This new international profile has also been emphasized by the participants of the 15th SIRM, who came from 17 countries on three continents. We experienced a lively discussion and dialogue between industry and academia at the conference, where roughly one third of the papers were presented by industry and two thirds by academia, an excellent basis for the bidirectional transfer we call xchange at Technical University of Darmstadt. At this point we also want to give our special thanks to the eleven industry sponsors for their great support of the conference. On behalf of the Darmstadt Local Committee, I welcome you to read the papers of the 15th SIRM, which give further insight into the topics and presentations.
Advances and Applications of DSmT for Information Fusion. Collected Works, Volume 5
This fifth volume on Advances and Applications of DSmT for Information Fusion collects theoretical and applied contributions of researchers working in different fields of application and in mathematics, and is available in open access. The contributions collected in this volume have either been published or presented in international conferences, seminars, workshops and journals since the dissemination of the fourth volume in 2015, or they are new. The contributions in each part of this volume are ordered chronologically.
The first part of this book presents some theoretical advances on DSmT, dealing mainly with modified Proportional Conflict Redistribution (PCR) combination rules with degree of intersection, coarsening techniques, interval calculus for PCR thanks to set inversion via interval analysis (SIVIA), rough set classifiers, canonical decomposition of dichotomous belief functions, fast PCR fusion, fast inter-criteria analysis with PCR, and improved PCR5 and PCR6 rules preserving the (quasi-)neutrality of (quasi-)vacuous belief assignments in the fusion of sources of evidence, with their Matlab codes.
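Of the rules listed, PCR5 is the most widely cited: for two sources it redistributes each partial conflict m1(X)·m2(Y), X ≠ Y, back to X and Y proportionally to the masses involved. A minimal sketch for the special case of exclusive singletons (in Python rather than the volume's Matlab code):

```python
def pcr5(m1, m2):
    """PCR5 fusion of two belief assignments over exclusive singletons:
    each partial conflict m1[X]*m2[Y] (X != Y) is redistributed back to
    X and Y proportionally to the masses m1[X] and m2[Y]."""
    elems = set(m1) | set(m2)
    fused = {e: m1.get(e, 0.0) * m2.get(e, 0.0) for e in elems}  # conjunctive part
    for x in elems:
        for y in elems:
            if x == y:
                continue
            a, b = m1.get(x, 0.0), m2.get(y, 0.0)
            if a + b > 0.0:
                conflict = a * b
                fused[x] += conflict * a / (a + b)
                fused[y] += conflict * b / (a + b)
    return fused

fused = pcr5({"A": 0.6, "B": 0.4}, {"A": 0.7, "B": 0.3})
```

The fused masses still sum to one, since all conflicting mass is redistributed rather than discarded or normalised away.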
Because more applications of DSmT have emerged since the appearance of the fourth DSmT book in 2015, the second part of this volume covers selected applications of DSmT, mainly in building change detection, object recognition, quality of data association in tracking, perception in robotics, risk assessment for torrent protection and multi-criteria decision-making, multi-modal image fusion, coarsening techniques, recommender systems, levee characterization and assessment, human heading perception, trust assessment, robotics, biometrics, failure detection, GPS systems, inter-criteria analysis, group decision, human activity recognition, storm prediction, data association for autonomous vehicles, identification of maritime vessels, fusion of support vector machines (SVM), the Silx-Furtif RUST code library for information fusion including PCR rules, and networks for ship classification.
Finally, the third part presents interesting contributions related to belief functions in general, published or presented over the years since 2015. These contributions concern decision-making under uncertainty, belief approximations, probability transformations, new distances between belief functions, non-classical multi-criteria decision-making problems with belief functions, generalization of the Bayes theorem, image processing, data association, entropy and cross-entropy measures, fuzzy evidence numbers, the negator of belief mass, human activity recognition, information fusion for breast cancer therapy, imbalanced data classification, and hybrid techniques mixing deep learning with belief functions.
Modular lifelong machine learning
Deep learning has drastically improved the state-of-the-art in many important fields, including computer vision and natural language processing (LeCun et al., 2015). However, it is expensive to train a deep neural network on a machine learning problem. The overall training cost further increases when one wants to solve additional problems. Lifelong machine learning (LML) develops algorithms that aim to efficiently learn to solve a sequence of problems, which become available one at a time. New problems are solved with fewer resources by transferring previously learned knowledge. At the same time, an LML algorithm needs to retain good performance on all encountered problems, thus avoiding catastrophic forgetting. Current approaches do not possess all the desired properties of an LML algorithm. First, they primarily focus on preventing catastrophic forgetting (Diaz-Rodriguez et al., 2018; Delange et al., 2021). As a result, they neglect some knowledge transfer properties. Furthermore, they assume that all problems in a sequence share the same input space. Finally, scaling these methods to a large sequence of problems remains a challenge.
Modular approaches to deep learning decompose a deep neural network into sub-networks, referred to as modules. Each module can then be trained to perform an atomic transformation, specialised in processing a distinct subset of inputs. This modular approach to storing knowledge makes it easy to only reuse the subset of modules which are useful for the task at hand.
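A minimal sketch of this idea, with modules as plain callables rather than trained neural sub-networks: knowledge is stored as named transformations, and a new problem reuses only the subset of modules it needs.

```python
class ModuleLibrary:
    """Toy module library: named transformations accumulated over a
    sequence of problems; a new problem composes only the subset of
    modules useful for it, leaving the rest untouched."""

    def __init__(self):
        self.modules = {}

    def add(self, name, fn):
        self.modules[name] = fn

    def compose(self, names):
        """Chain the selected modules into a single callable."""
        def pipeline(x):
            for name in names:
                x = self.modules[name](x)
            return x
        return pipeline

library = ModuleLibrary()
library.add("double", lambda x: 2 * x)   # module learned on problem 1
library.add("inc", lambda x: x + 1)      # module learned on problem 2
solve_new_problem = library.compose(["double", "inc"])  # reuse both
```

Because reused modules are never overwritten, earlier problems keep their solutions, which is how modularity sidesteps catastrophic forgetting.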
This thesis introduces a line of research which demonstrates the merits of a modular approach to lifelong machine learning, and its ability to address the aforementioned shortcomings of other methods. Compared to previous work, we show that a modular approach can be used to achieve more LML properties than previously demonstrated. Furthermore, we develop tools which allow modular LML algorithms to scale in order to retain said properties on longer sequences of problems.
First, we introduce HOUDINI, a neurosymbolic framework for modular LML. HOUDINI represents modular deep neural networks as functional programs and accumulates a library of pre-trained modules over a sequence of problems. Given a new problem, we use program synthesis to select a suitable neural architecture, as well as a high-performing combination of pre-trained and new modules. We show that our approach has most of the properties desired from an LML algorithm. Notably, it can perform forward transfer, avoid negative transfer and prevent catastrophic forgetting, even across problems with disparate input domains and problems which require different neural architectures.
Second, we produce a modular LML algorithm which retains the properties of HOUDINI but can also scale to longer sequences of problems. To this end, we fix the choice of a neural architecture and introduce a probabilistic search framework, PICLE, for searching through different module combinations. To apply PICLE, we introduce two probabilistic models over neural modules which allow us to efficiently identify promising module combinations.
Third, we phrase the search over module combinations in modular LML as black-box optimisation, which allows one to make use of methods from the setting of hyperparameter optimisation (HPO). We then develop a new HPO method which marries a multi-fidelity approach with model-based optimisation. We demonstrate that this leads to improvement in anytime performance in the HPO setting and discuss how this can in turn be used to augment modular LML methods.
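A standard multi-fidelity scheme such as successive halving illustrates the budget-laddering half of this combination; the model-based half is omitted, and this is a generic sketch rather than the thesis's method. Cheap low-budget evaluations prune the candidate pool before expensive high-budget ones run.

```python
def successive_halving(configs, evaluate, budgets=(1, 3, 9)):
    """Generic multi-fidelity search: evaluate all surviving configurations
    at an increasing budget, keeping the better half each round."""
    pool = list(configs)
    for b in budgets:
        scores = {c: evaluate(c, b) for c in pool}
        pool.sort(key=lambda c: scores[c], reverse=True)  # higher is better
        pool = pool[: max(1, len(pool) // 2)]
    return pool[0]

# Eight hyperparameter configurations; the score improves with config id
# here, so the highest id should survive every halving round.
best = successive_halving(range(1, 9), lambda c, b: c * b)
```

In the thesis's setting, `evaluate` would train a module combination for `b` epochs, and a surrogate model would propose which configurations enter the pool.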
Overall, this thesis identifies a number of important LML properties, which have not all been attained in past methods, and presents an LML algorithm which can achieve all of them, apart from backward transfer.