468 research outputs found

    Advances and Applications of DSmT for Information Fusion. Collected Works, Volume 5

    Get PDF
    This fifth volume on Advances and Applications of DSmT for Information Fusion collects theoretical and applied contributions of researchers working in different fields of application and in mathematics, and is available in open access. The contributions collected in this volume have either been published or presented in international conferences, seminars, workshops and journals since the dissemination of the fourth volume in 2015, or they are new. The contributions in each part of this volume are ordered chronologically. The first part of the book presents theoretical advances on DSmT, dealing mainly with modified Proportional Conflict Redistribution (PCR) rules of combination with degree of intersection, coarsening techniques, interval calculus for PCR thanks to set inversion via interval analysis (SIVIA), rough set classifiers, canonical decomposition of dichotomous belief functions, fast PCR fusion, fast inter-criteria analysis with PCR, and improved PCR5 and PCR6 rules preserving the (quasi-)neutrality of the (quasi-)vacuous belief assignment in the fusion of sources of evidence, with their Matlab codes. Because more applications of DSmT have emerged since the publication of the fourth book in 2015, the second part of this volume covers selected applications of DSmT, mainly in building change detection, object recognition, quality of data association in tracking, perception in robotics, risk assessment for torrent protection and multi-criteria decision-making, multi-modal image fusion, coarsening techniques, recommender systems, levee characterization and assessment, human heading perception, trust assessment, robotics, biometrics, failure detection, GPS systems, inter-criteria analysis, group decision, human activity recognition, storm prediction, data association for autonomous vehicles, identification of maritime vessels, fusion of support vector machines (SVM), the Silx-Furtif RUST code library for information fusion including PCR rules, and networks for ship classification. Finally, the third part presents contributions related to belief functions in general, published or presented over the years since 2015. These contributions concern decision-making under uncertainty, belief approximations, probability transformations, new distances between belief functions, non-classical multi-criteria decision-making problems with belief functions, generalization of Bayes' theorem, image processing, data association, entropy and cross-entropy measures, fuzzy evidence numbers, negators of belief mass, human activity recognition, information fusion for breast cancer therapy, imbalanced data classification, and hybrid techniques mixing deep learning with belief functions.
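    As a quick illustration of the PCR rules surveyed in the first part, the minimal sketch below applies the standard PCR5 combination rule to two basic belief assignments over a small frame of discernment. The frozenset-based encoding of focal elements and the example masses are assumptions for illustration, not code from the volume.

```python
# Hedged sketch: PCR5 combination of two basic belief assignments (BBAs),
# following the standard PCR5 formula from the DSmT literature; the frame
# encoding via Python frozensets and the example masses are illustrative only.
from itertools import product

def pcr5_combine(m1, m2):
    """Combine two BBAs given as {frozenset: mass} dicts with PCR5."""
    combined = {}
    for (x, mx), (y, my) in product(m1.items(), m2.items()):
        inter = x & y
        prod = mx * my
        if inter:                      # conjunctive (non-conflicting) part
            combined[inter] = combined.get(inter, 0.0) + prod
        elif mx + my > 0:              # conflicting mass: redistribute to x and y
            combined[x] = combined.get(x, 0.0) + mx * prod / (mx + my)
            combined[y] = combined.get(y, 0.0) + my * prod / (mx + my)
    return combined

# Example on the frame {A, B}:
A, B, AB = frozenset('A'), frozenset('B'), frozenset('AB')
m1 = {A: 0.6, AB: 0.4}
m2 = {B: 0.3, AB: 0.7}
print(pcr5_combine(m1, m2))   # masses still sum to 1 after redistribution
```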

    Pathway to Future Symbiotic Creativity

    Full text link
    This report presents a comprehensive view of our vision of the development path of human-machine symbiotic art creation. We propose a classification of creative systems into a hierarchy of five classes, showing the pathway of creativity evolving from a mimic-human artist (Turing Artist) to a Machine Artist in its own right. We begin with an overview of the limitations of Turing Artists and then focus on the top two levels, Machine Artists, emphasizing machine-human communication in art creation. In art creation, machines need to understand humans' mental states, including desires, appreciation, and emotions; humans, in turn, need to understand machines' creative capabilities and limitations. The rapid development of immersive environments and their further evolution into the new concept of the metaverse enable symbiotic art creation through unprecedented flexibility of bi-directional communication between artists and art manifestation environments. By examining the latest sensor and XR technologies, we illustrate a novel way of collecting art data that constitutes the basis of a new form of human-machine bidirectional communication and understanding in art creation. Based on such communication and understanding mechanisms, we propose a novel framework for building future Machine Artists, guided by the philosophy that a human-compatible AI system should be based on the "human-in-the-loop" principle rather than the traditional "end-to-end" dogma. By proposing a new form of inverse reinforcement learning model, we outline the platform design of machine artists, demonstrate its functions, and showcase some examples of technologies we have developed. We also provide a systematic exposition of the ecosystem for AI-based symbiotic art forms and communities, with an economic model built on NFT technology. Ethical issues in the development of machine artists are also discussed.

    Virtual Model Building for Multi-Axis Machine Tools Using Field Data

    Get PDF
    Accurate machine dynamic models are the foundation of many advanced machining technologies such as virtual process planning and machine condition monitoring. In recent designs of modern high-performance machine tools, to enhance versatility and productivity, the axis configuration is becoming more complex and diversified, and direct-drive motors are more commonly used. Due to these trends, coupled and nonlinear multibody dynamics in machine tools are gaining more attention. Vibration due to limited structural rigidity is also an important issue that must be considered simultaneously. Hence, this research aims at building high-fidelity machine dynamic models that are capable of predicting dynamic responses, such as the tracking error and motor current signals, while considering a wide range of dynamic effects such as structural flexibility, inter-axis coupling, and posture dependency. Building machine dynamic models via conventional bottom-up approaches may require extensive investigation of every single component. Such approaches are time-consuming or sometimes infeasible for machine end-users. Alternatively, in line with the recent trend of Industry 4.0, utilizing data from Computer Numerical Controls (CNCs) and/or non-intrusive sensors to build the machine model is more favorable for industrial implementation. Thus, the methods proposed in this thesis are top-down model-building approaches, utilizing available data from CNCs and/or other auxiliary sensors. The contributions and results of this thesis are summarized below. As the first contribution, a new modeling and identification technique targeting the closed-loop control system of coupled rigid multi-axis feed drives has been developed. A multi-axis closed-loop control system, including the controller and the electromechanical plant, is described by a multiple-input multiple-output (MIMO) linear time-invariant (LTI) system, coupled with a generalized disturbance input that represents all the nonlinear dynamics. Then, the parameters of the open-loop and closed-loop dynamic models are identified by a strategy that combines linear Least Squares (LS) and constrained global optimization, striking a balance between model accuracy and computational efficiency. The proposed method was validated on an industrial 5-axis laser drilling machine and an experimental feed drive, achieving 2.38% and 5.26% root mean square (RMS) prediction error, respectively. Inter-axis coupling effects, i.e., the motion of one axis causing dynamic responses in another axis, are correctly predicted, as is the tracking error induced by motor ripple and nonlinear friction. As the second contribution, the proposed methodology is extended to also consider structural flexibility, which is a crucial behavior of large industrial 5-axis machine tools. More importantly, structural vibration is nonlinear and posture-dependent due to the nature of a multibody system. In this thesis, prominent cases of flexibility-induced vibration in a linear feed drive are studied and modeled by lumped mass-spring-damper systems. Then, a flexible linear drive coupled with a rotary drive is systematically analyzed. It is found that the case with internal structural vibration between the linear and rotary drives requires an additional motion sensor for the proposed model identification method; this particular case is studied with an experimental setup. The thesis presents a method to reconstruct the missing internal structural vibration using data from the embedded encoders as well as a low-cost micro-electromechanical system (MEMS) inertial measurement unit (IMU) mounted on the machine table. This is achieved by first synchronizing the data, aligning the inertial frames, and calibrating mounting misalignments. Finally, the unknown internal vibration is reconstructed by comparing the rigid and flexible machine kinematic models. Due to the measurement limitations of MEMS IMUs and geometric assembly errors, the reconstructed angle is inaccurate. Nevertheless, the vibratory angular velocity and acceleration are consistently reconstructed, which is sufficient for the identification under reasonable model simplification. Finally, the reconstructed internal vibration along with the gathered servo data are used to identify the proposed machine dynamic model. Thanks to the separation of linear and nonlinear dynamics, the vibratory dynamics can simply be considered by adding complex pole pairs to the MIMO LTI system. Experimental validation shows that the identified model is able to predict the dynamic responses of the tracking error and motor force/torque to the input command trajectory and external disturbances, with 2% to 6% RMS error. In particular, the vibratory inter-axis coupling and posture-dependent effects are accurately captured. Overall, this thesis presents a dynamic model-building approach for multi-axis feed drive assemblies. The proposed model is general and can be configured according to the kinematic configuration. The model-building approach only requires data from the servo system or auxiliary motion sensors, e.g., an IMU, which is non-intrusive and favorable for industrial implementation. Future research includes further investigation of the IMU measurements, geometric error identification, validation on more complicated feed drive systems, and applications to the planning and monitoring of 5-axis machining processes.
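    As a simplified illustration of the data-driven identification idea described above, the sketch below fits a single-axis rigid drive model to servo data with ordinary linear least squares. It only shows the linear LS step; the full MIMO closed-loop model, the disturbance input, and the constrained global optimization from the thesis are not reproduced, and the signal names and parameter values are assumptions.

```python
# Hedged sketch: least-squares identification of a single-axis rigid drive
# model  J*q_dd + b*q_d + c*sign(q_d) = tau  from sampled servo data.
import numpy as np

def identify_axis(q, tau, dt):
    """Estimate inertia J, viscous friction b and Coulomb friction c."""
    q_d = np.gradient(q, dt)            # velocity from encoder positions
    q_dd = np.gradient(q_d, dt)         # acceleration
    # Regressor: each row [q_dd, q_d, sign(q_d)] maps the parameters to torque.
    Phi = np.column_stack([q_dd, q_d, np.sign(q_d)])
    theta, *_ = np.linalg.lstsq(Phi, tau, rcond=None)
    return theta                        # [J, b, c]

# Synthetic usage example with assumed "true" parameters J=0.02, b=0.3, c=0.05
dt = 0.001
t = np.arange(0, 2, dt)
q = 0.5 * np.sin(2 * np.pi * 1.5 * t)                  # commanded motion
q_d = np.gradient(q, dt)
q_dd = np.gradient(q_d, dt)
tau = 0.02 * q_dd + 0.3 * q_d + 0.05 * np.sign(q_d)    # simulated motor torque
print(identify_axis(q, tau, dt))
```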

    Tracking and Grasping of Moving Objects Using Aerial Robotic Manipulators: A Brief Survey

    Get PDF
    Unmanned Aerial Vehicles (UAVs) have evolved in recent years, and their capabilities have become more useful to society. Some years ago, drones were conceived to be teleoperated by humans and to take pictures from above, which is useful; nowadays, however, they can perform autonomous tasks such as tracking a dynamic target or even grasping different kinds of objects. Some tasks, such as transporting heavy loads or manipulating complex shapes, are challenging for a single UAV but may be easier for a fleet of them. This brief survey presents a compilation of relevant works related to tracking and grasping with aerial robotic manipulators, as well as cooperation among them. Moreover, challenges and limitations are presented in order to point to new areas of research. Finally, some trends in aerial manipulation are foreseen for different sectors, and relevant features of these kinds of systems are highlighted.

    Real-Time Hybrid Visual Servoing of a Redundant Manipulator via Deep Reinforcement Learning

    Get PDF
    Fixtureless assembly may be necessary in some manufacturing tasks and environments due to various constraints, but it poses challenges for automation because of non-deterministic characteristics not favoured by traditional approaches to industrial automation. Visual servoing methods of robotic control could be effective for sensitive manipulation tasks where the desired end-effector pose can be ascertained via visual cues. Visual data is complex and computationally expensive to process, but deep reinforcement learning has shown promise for robotic control in vision-based manipulation tasks. However, these methods are rarely used in industry due to the resources and expertise required to develop application-specific systems and prohibitive training costs. Training reinforcement learning models in simulated environments offers a number of benefits for the development of robust robotic control algorithms by reducing training time and costs and providing repeatable benchmarks on which algorithms can be tested, developed and eventually deployed in real robotic control environments. In this work, we present a new simulated reinforcement learning environment for developing accurate robotic manipulation control systems in fixtureless environments. Our environment incorporates a contemporary collaborative industrial robot, the KUKA LBR iiwa, with the goal of positioning its end effector in a generic fixtureless environment based on a visual cue. Observational inputs comprise the robotic joint positions and velocities, as well as two cameras whose positioning reflects hybrid visual servoing, with one camera attached to the robotic end-effector and another observing the workspace. We propose a state-of-the-art deep reinforcement learning approach to solving the task environment and make preliminary assessments of the efficacy of this approach to hybrid visual servoing methods for the defined problem environment. We also conduct a series of experiments exploring the hyperparameter space in the proposed reinforcement learning method. Although we could not prove the efficacy of a deep reinforcement learning approach to solving the task environment with our initial results, we remain confident that such an approach could be feasible for solving this industrial manufacturing challenge and that our contributions in this work, in terms of the novel software, provide a good basis for the exploration of reinforcement learning approaches to hybrid visual servoing in accurate manufacturing contexts.
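    To make the described observation and action layout concrete, the sketch below outlines a Gymnasium-style environment skeleton with joint positions/velocities plus an eye-in-hand and a workspace camera. The simulator, image sizes, joint count, and reward are assumptions; this is not the environment released with the work.

```python
# Hedged sketch: a Gymnasium-style environment skeleton mirroring the
# observation layout described above. The KUKA simulation itself is omitted;
# all sizes and the placeholder reward are assumptions.
import numpy as np
import gymnasium as gym
from gymnasium import spaces

class HybridVisualServoEnv(gym.Env):
    N_JOINTS, IMG = 7, (84, 84, 3)          # 7-DOF arm, small RGB frames

    def __init__(self):
        self.observation_space = spaces.Dict({
            "joint_pos": spaces.Box(-np.pi, np.pi, (self.N_JOINTS,), np.float32),
            "joint_vel": spaces.Box(-2.0, 2.0, (self.N_JOINTS,), np.float32),
            "eye_in_hand": spaces.Box(0, 255, self.IMG, np.uint8),
            "workspace_cam": spaces.Box(0, 255, self.IMG, np.uint8),
        })
        # Action: incremental joint position commands.
        self.action_space = spaces.Box(-0.05, 0.05, (self.N_JOINTS,), np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        return self.observation_space.sample(), {}

    def step(self, action):
        obs = self.observation_space.sample()        # placeholder for simulator output
        reward = -float(np.linalg.norm(action))      # placeholder shaping term
        return obs, reward, False, False, {}
```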

    Augmented reality (AR) for surgical robotic and autonomous systems: State of the art, challenges, and solutions

    Get PDF
    Despite the substantial progress achieved in the development and integration of augmented reality (AR) in surgical robotic and autonomous systems (RAS), the center of focus in most devices remains on improving end-effector dexterity and precision, as well as on improving access to minimally invasive surgeries. This paper aims to provide a systematic review of different types of state-of-the-art surgical robotic platforms while identifying areas for technological improvement. We associate specific control features, such as haptic feedback, sensory stimuli, and human-robot collaboration, with AR technology to perform complex surgical interventions with increased user perception of the augmented world. Researchers in the field have long faced numerous issues with low accuracy in tool placement along complex trajectories, pose estimation, and difficulty in depth perception during two-dimensional medical imaging. A number of robots described in this review, such as Novarad and SpineAssist, are analyzed in terms of their hardware features, computer vision systems (such as deep learning algorithms), and the clinical relevance of the literature. We attempt to outline the shortcomings in current optimization algorithms for surgical robots (such as YOLO and LSTM) while providing mitigating solutions for internal tool-to-organ collision detection and image reconstruction. The accuracy of results in robot end-effector collisions and reduced occlusion remains promising within the scope of our research, validating the propositions made for the surgical clearance of ever-expanding AR technology in the future.
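    As a minimal illustration of the tool-to-organ collision detection mentioned above, the sketch below checks the clearance between an instrument shaft, modeled as a line segment, and an organ surface given as a point cloud. The geometry, names, and safety margin are illustrative assumptions rather than a method from any of the reviewed systems.

```python
# Hedged sketch: minimal tool-to-organ proximity check; the instrument shaft
# is a line segment and the organ surface a point cloud. All values assumed.
import numpy as np

def min_clearance(tool_a, tool_b, organ_pts):
    """Smallest distance between segment [tool_a, tool_b] and organ points."""
    d = tool_b - tool_a
    t = np.clip((organ_pts - tool_a) @ d / (d @ d), 0.0, 1.0)
    closest = tool_a + t[:, None] * d      # closest point on the shaft per organ point
    return np.min(np.linalg.norm(organ_pts - closest, axis=1))

# Warn if the shaft comes within 5 mm of the (synthetic) organ surface.
organ = np.random.default_rng(0).normal([0.0, 0.0, 0.1], 0.02, size=(500, 3))
clearance = min_clearance(np.array([0.0, 0.0, 0.3]), np.array([0.0, 0.0, 0.12]), organ)
print("collision risk" if clearance < 0.005 else f"clearance {clearance:.3f} m")
```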

    Methods, Models, and Datasets for Visual Servoing and Vehicle Localisation

    Get PDF
    Machine autonomy has become a vibrant part of industrial and commercial aspirations. A growing demand exists for dexterous and intelligent machines that can work in unstructured environments without any human assistance. An autonomously operating machine should sense its surroundings, classify the different kinds of observed objects, and interpret sensory information to perform the necessary operations. This thesis summarizes original methods aimed at enhancing a machine's autonomous operation capability. These methods and the corresponding results are grouped into two main categories. The first category consists of research that focuses on improving visual servoing systems for robotic manipulators to accurately position workpieces. We start our investigation with the hand-eye calibration problem, which concerns calibrating visual sensors with a robotic manipulator. We thoroughly investigate the problem from various perspectives and provide alternative formulations of the problem and of the error objectives. The experimental results demonstrate that the proposed methods are robust and yield accurate solutions when tested on real and simulated data. The work is bundled as a toolkit and available online for public use. As an extension, we propose a constrained multiview pose estimation approach for robotic manipulators. The approach exploits the geometric constraints available on the robotic system and infuses them directly into the pose estimation method. The empirical results demonstrate higher accuracy and significantly higher precision compared to other studies. In the second part of this research, we tackle problems pertaining to the field of autonomous vehicles and related applications. First, we introduce a pose estimation and mapping scheme to extend the application of visual Simultaneous Localization and Mapping to unstructured dynamic environments. We identify, extract, and discard dynamic entities from the pose estimation step. Moreover, we track the dynamic entities and actively update the map based on changes in the environment. Having observed the limitations of existing datasets during our earlier work, we introduce FinnForest, a novel dataset for testing and validating the performance of visual odometry and Simultaneous Localization and Mapping methods in an unstructured environment. We explored an environment with a forest landscape and recorded data with multiple stereo cameras, an IMU, and a GNSS receiver. The dataset offers unique challenges owing to the nature of the environment, the variety of trajectories, and changes in season, weather, and daylight conditions. Building upon the future work proposed with the FinnForest dataset, we introduce a novel scheme that can localize an observer under extreme perspective changes. More specifically, we tailor the problem to autonomous vehicles so that they can recognize a previously visited place irrespective of the direction in which they previously traveled the route. To the best of our knowledge, this is the first study that accomplishes bi-directional loop closure on monocular images with a nominal field of view. To solve the localisation problem, we segregate place identification from pose regression by using deep learning in two steps. We demonstrate that bi-directional loop closure on monocular images is indeed possible when the problem is posed correctly and the training data is adequately leveraged.
    All methodological contributions of this thesis are accompanied by extensive empirical analysis and discussions demonstrating the need, novelty, and improvement in performance over existing methods for pose estimation, odometry, mapping, and place recognition.
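    For readers unfamiliar with the hand-eye calibration problem mentioned above, the sketch below shows the classical AX = XB formulation solved via OpenCV's calibrateHandEye. It is a generic illustration under assumed pose inputs, not the toolkit released with this thesis.

```python
# Hedged sketch: classical AX = XB hand-eye calibration delegated to OpenCV.
# The pose lists are placeholders for data gathered by moving the manipulator
# to several configurations while observing a calibration target.
import numpy as np
import cv2

def hand_eye(R_g2b, t_g2b, R_t2c, t_t2c):
    """Solve for the camera pose in the gripper frame from pose pairs."""
    R_c2g, t_c2g = cv2.calibrateHandEye(
        R_g2b, t_g2b,          # gripper -> base poses from robot kinematics
        R_t2c, t_t2c,          # target -> camera poses from image detections
        method=cv2.CALIB_HAND_EYE_TSAI,
    )
    X = np.eye(4)
    X[:3, :3], X[:3, 3] = R_c2g, t_c2g.ravel()
    return X                   # homogeneous camera-to-gripper transform
```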

    Localization in urban environments. A hybrid interval-probabilistic method

    Get PDF
    Ensuring safety has become a paramount concern with the increasing autonomy of vehicles and the advent of autonomous driving. One of the most fundamental tasks of increased autonomy is localization, which is essential for safe operation. To quantify safety requirements, the concept of integrity has been introduced in aviation, based on the ability of the system to provide timely and correct alerts when its safe operation can no longer be guaranteed. It is therefore necessary to assess the localization's uncertainty to determine the system's operability. In the literature, probability theory and set-membership theory are the two predominant approaches that provide mathematical tools for assessing uncertainty. Probabilistic approaches often provide accurate point-valued results but tend to underestimate the uncertainty. Set-membership approaches reliably estimate the uncertainty but can be overly pessimistic, producing inappropriately large uncertainties and no point-valued results. While underestimating the uncertainty can lead to misleading information and dangerous system failures without warning, overly pessimistic uncertainty estimates render the system inoperative for practical purposes because warnings are triggered too often. This doctoral thesis aims to study the symbiotic relationship between set-membership-based and probabilistic localization approaches and to combine them into a unified hybrid localization approach that enables safe operation without being overly pessimistic in its uncertainty estimation. In the scope of this work, a novel Hybrid Probabilistic- and Set-Membership-based Coarse and Refined (HyPaSCoRe) Localization method is introduced. This method localizes a robot in a building map in real time and considers two types of hybridization. On the one hand, set-membership approaches are used to robustify and control probabilistic approaches. On the other hand, probabilistic approaches are used to reduce the pessimism of set-membership approaches by augmenting them with further probabilistic constraints. The method consists of three modules: visual odometry, coarse localization, and refined localization. The HyPaSCoRe Localization uses a stereo camera system, a LiDAR sensor, and GNSS data, focusing on localization in urban canyons where GNSS data can be inaccurate. The visual odometry module computes the relative motion of the vehicle. The coarse localization module uses set-membership approaches to narrow down the feasible set of poses and provides the set of most likely poses inside the feasible set using a probabilistic approach. The refined localization module further refines the coarse localization result by reducing the pessimism of the uncertainty estimate through the incorporation of probabilistic constraints into the set-membership approach. The experimental evaluation of HyPaSCoRe shows that it maintains the integrity of the uncertainty estimation while providing accurate, most likely point-valued solutions in real time. Introducing this new hybrid localization approach contributes to the development of safe and reliable algorithms in the context of autonomous driving.
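    As a toy illustration of the set-membership side of such a hybrid scheme, the sketch below runs a SIVIA-style bisection that keeps the 2-D position boxes consistent with interval range measurements to known landmarks. Landmark positions, range intervals, and the paving resolution are assumptions; the actual HyPaSCoRe modules are far richer.

```python
# Hedged sketch: SIVIA-style paving of a 2-D position box against interval
# range measurements to known landmarks; all numeric values are assumed.
import math

def dist_interval(box, lm):
    """Interval [dmin, dmax] of distances from any point in box to landmark lm."""
    (x1, x2), (y1, y2) = box
    ax, ay = lm
    dx = 0.0 if x1 <= ax <= x2 else min(abs(x1 - ax), abs(x2 - ax))
    dy = 0.0 if y1 <= ay <= y2 else min(abs(y1 - ay), abs(y2 - ay))
    Dx, Dy = max(abs(x1 - ax), abs(x2 - ax)), max(abs(y1 - ay), abs(y2 - ay))
    return math.hypot(dx, dy), math.hypot(Dx, Dy)

def consistent(box, measurements):
    """A box may contain the true pose iff every distance interval overlaps
    the corresponding measured range interval."""
    for lm, (lo, hi) in measurements:
        dmin, dmax = dist_interval(box, lm)
        if dmax < lo or dmin > hi:
            return False
    return True

def sivia(box, measurements, eps=0.05):
    """Return small boxes that may be consistent with every range interval."""
    stack, feasible = [box], []
    while stack:
        b = stack.pop()
        (x1, x2), (y1, y2) = b
        if not consistent(b, measurements):
            continue                                   # box cannot contain the pose
        if max(x2 - x1, y2 - y1) < eps:
            feasible.append(b)                         # small enough: keep it
        elif x2 - x1 > y2 - y1:                        # bisect along the widest side
            m = 0.5 * (x1 + x2)
            stack += [((x1, m), (y1, y2)), ((m, x2), (y1, y2))]
        else:
            m = 0.5 * (y1 + y2)
            stack += [((x1, x2), (y1, m)), ((x1, x2), (m, y2))]
    return feasible

# Two landmarks with interval range measurements; the true pose is near (1, 1).
meas = [((0.0, 0.0), (1.3, 1.5)), ((3.0, 0.0), (2.1, 2.3))]
print(len(sivia(((0.0, 3.0), (0.0, 3.0)), meas)), "candidate boxes kept")
```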

    Multi-Character Motion Retargeting for Large Scale Changes

    Get PDF

    Investigating Ultrasound-Guided Autonomous Assistance during Robotic Minimally Invasive Surgery

    Get PDF
    Despite it being over twenty years since the first introduction of robotic surgical systems into common surgical practice, they are still far from widespread across all healthcare systems, surgical disciplines and procedures. At the same time, the systems that are used act as mere tele-manipulators with motion scaling and have yet to make use of the immense potential of their sensory data to provide autonomous assistance during surgery or to perform tasks themselves in a semi-autonomous fashion. Similarly, the potential of intracorporeal imaging, particularly ultrasound (US), for improved tumour localisation during surgery remains largely untapped. Aside from cost factors, this also has to do with the necessity of adequate training for scan interpretation and the difficulty of handling a US probe near the surgical site. Additionally, the potential for automation that is being explored in extracorporeal US using serial manipulators does not yet translate into ultrasound-enabled autonomous assistance in a surgical robotic setting. Motivated by this research gap, this work explores means to enable autonomous intracorporeal ultrasound in a surgical robotic setting. Based around the da Vinci Research Kit (dVRK), it first develops a surgical robotics platform that allows for precise evaluation of the robot's performance using Infrared (IR) tracking technology. Building on this initial work, it then explores the possibility of providing autonomous ultrasound guidance during surgery. To this end, it develops and assesses means to improve kinematic accuracy despite manipulator backlash, as well as to enable adequate probe positioning with respect to the tissue surface and anatomy. Founded on the acquired anatomical information, this thesis explores the integration of a second robotic arm and its usage for autonomous assistance. Starting with an autonomously acquired tumour scan, the setup is extended and methods are devised to enable the autonomous marking of margined tumour boundaries on the tissue surface, both in a phantom and in an ex-vivo experiment on porcine liver. Moving towards increased autonomy, a novel minimally invasive High Intensity Focused Ultrasound (HIFUS) transducer is integrated into the robotic setup, including a sensorised, water-filled membrane for sensing interaction forces with the tissue surface. For this purpose, an extensive material characterisation is carried out, exploring different surface material pairings. Finally, the proposed system, including trajectory planning and a hybrid force-position control scheme, is evaluated in a benchtop ultrasound phantom trial.
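    To illustrate the kind of hybrid force-position control scheme evaluated in the phantom trial, the sketch below computes one Cartesian velocity command in which a selection matrix assigns two axes to position tracking and one to contact-force regulation. Gains, rates, and the force target are assumptions, not values from the thesis.

```python
# Hedged sketch: one step of a generic hybrid force/position law. A selection
# matrix S chooses which Cartesian axes track a position reference while the
# complementary axis regulates contact force; all gains are assumed.
import numpy as np

S = np.diag([1, 1, 0])            # x, y position-controlled; z force-controlled
KP_POS, KP_F, KI_F, DT = 2.0, 0.002, 0.01, 1e-3
force_integral = 0.0

def hybrid_step(x, x_ref, f_meas, f_ref):
    """Return a Cartesian velocity command from position and force errors."""
    global force_integral
    v_pos = KP_POS * (x_ref - x)                      # position loop (x, y)
    f_err = f_ref - f_meas                            # contact force error (z)
    force_integral += f_err * DT
    v_force = np.array([0.0, 0.0, KP_F * f_err + KI_F * force_integral])
    return S @ v_pos + (np.eye(3) - S) @ v_force      # blend the two subspaces

# Example: hold 5 N against the tissue while scanning along x.
cmd = hybrid_step(x=np.array([0.0, 0.0, 0.1]),
                  x_ref=np.array([0.01, 0.0, 0.1]),
                  f_meas=3.2, f_ref=5.0)
print(cmd)
```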