
    Data Management in Industry 4.0: State of the Art and Open Challenges

    Information and communication technologies are permeating all aspects of industrial and manufacturing systems, expediting the generation of large volumes of industrial data. This article surveys the recent literature on data management as it applies to networked industrial environments and identifies several open research challenges for the future. As a first step, we extract important data properties (volume, variety, traffic, criticality) and identify the corresponding data-enabling technologies of diverse fundamental industrial use cases, based on practical applications. Secondly, we provide a detailed outline of recent industrial architectural designs with respect to their data management philosophy (data presence, data coordination, data computation) and the extent of their distributiveness. Then, we conduct a holistic survey of the recent literature, from which we derive a taxonomy of the latest advances in industrial data-enabling technologies and data-centric services, spanning all the way from the field level deep in the physical deployments up to the cloud and application level. Finally, motivated by the rich conclusions of this critical analysis, we identify interesting open challenges for future research. The concepts presented in this article thematically cover the largest part of the industrial automation pyramid layers. Our approach is multidisciplinary, as the selected publications were drawn from two fields: the communications, networking, and computation field, as well as the industrial, manufacturing, and automation field. The article can help readers understand in depth how data management is currently applied in networked industrial environments and select interesting open research opportunities to pursue.

    Novel Force Estimation-based Bilateral Teleoperation applying Type-2 Fuzzy logic and Moving Horizon Estimation

    This paper develops a novel force observer for bilateral teleoperation systems. Type-2 fuzzy logic is used to describe the overall dynamic system, and Moving Horizon Estimation (MHE) is employed to estimate clean states as well as the values of dynamic uncertainties while simultaneously filtering out measurement noise, which guarantees a high degree of accuracy for the observed forces. Compared with existing methods, the proposed force observer can run without knowing the exact mathematical dynamic functions and is robust to different kinds of noise. A force-reflecting four-channel teleoperation control law is also proposed that incorporates the observed environmental and human forces to provide highly accurate force tracking between the master and the slave in the presence of time delays. Finally, experiments based on two haptic devices demonstrate the superiority of the proposed method through comparisons with multiple state-of-the-art force observers. Comment: 12 pages, 13 figures
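    To make the estimation idea concrete, below is a minimal sketch of a moving-horizon force estimate on a toy one-dimensional mass: over a sliding window of noisy position measurements, a small least-squares problem is solved for the initial state and a constant external force. The dynamics, horizon length, and all names are illustrative assumptions, not the paper's type-2-fuzzy formulation.

        # Minimal moving-horizon estimate of an external force on a 1-D mass.
        # Unknowns over the window: initial state (x0, v0) and a constant force f.
        import numpy as np
        from scipy.optimize import least_squares

        DT, M, N = 0.01, 1.0, 20  # step [s], mass [kg], horizon length (assumed values)

        def rollout(x0, v0, f, u):
            """Simulate positions over the horizon for candidate unknowns."""
            xs, x, v = [], x0, v0
            for uk in u:
                xs.append(x)
                v += DT * (uk + f) / M
                x += DT * v
            return np.array(xs)

        def mhe_force(y, u):
            """Fit (x0, v0, f) to the last N noisy position measurements y."""
            res = least_squares(lambda z: rollout(z[0], z[1], z[2], u) - y,
                                x0=np.zeros(3))
            return res.x[2]  # estimated external force

        # Toy data: true force is 2 N, commanded input zero, noisy position sensor.
        rng = np.random.default_rng(0)
        u = np.zeros(N)
        y = rollout(0.0, 0.0, 2.0, u) + 1e-4 * rng.standard_normal(N)
        print(f"estimated force: {mhe_force(y, u):.2f} N")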

    Hierarchical Bayesian Data Fusion for Robotic Platform Navigation

    Data fusion has become an active research topic in recent years. Growing computational performance has allowed the use of redundant sensors to measure a single phenomenon. While Bayesian fusion approaches are common in general applications, the computer vision field has largely relegated this approach. Most object following algorithms have gone towards pure machine learning fusion techniques that tend to lack flexibility. Consequently, a more general data fusion scheme is needed. Within this work, a hierarchical Bayesian fusion approach is proposed, which outperforms individual trackers by using redundant measurements. The adaptive framework is achieved by relying on each measurement's local statistics and a global softened majority voting. The proposed approach was validated in a simulated application and two robotic platforms.Comment: 8 pages, 9 figure
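    As a rough illustration of the fusion principle, the sketch below combines redundant one-dimensional tracker outputs by inverse-variance (Bayesian) weighting and down-weights trackers that disagree with the majority. The exponential agreement weight is an assumed stand-in for the paper's softened majority voting, not the authors' exact scheme.

        # Sketch: fuse redundant 1-D tracker outputs under Gaussian assumptions.
        import numpy as np

        def fuse(means, variances, softness=1.0):
            means = np.asarray(means, float)
            variances = np.asarray(variances, float)
            # Agreement weight: trackers far from the median get exponentially less say.
            dev = np.abs(means - np.median(means))
            vote = np.exp(-softness * dev)
            w = vote / variances                    # Bayesian precision times vote
            mean = np.sum(w * means) / np.sum(w)
            var = 1.0 / np.sum(1.0 / variances)     # fused uncertainty (pure Bayes)
            return mean, var

        # Three trackers measure the same target position; the third is an outlier.
        print(fuse(means=[1.02, 0.98, 3.50], variances=[0.01, 0.02, 0.01]))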

    Warped Hypertime Representations for Long-term Autonomy of Mobile Robots

    This paper presents a novel method for introducing time into discrete and continuous spatial representations used in mobile robotics, by modelling long-term, pseudo-periodic variations caused by human activities. Unlike previous approaches, the proposed method does not treat time and space separately, and its continuous nature respects both the temporal and spatial continuity of the modelled phenomena. The method extends the given spatial model with a set of wrapped dimensions that represent the periodicities of observed changes. By performing clustering over this extended representation, we obtain a model that allows us to predict future states of both discrete and continuous spatial representations. We apply the proposed algorithm to several long-term datasets and show that the method enables a robot to predict future states of representations with different dimensions. The experiments further show that the method achieves more accurate predictions than the previous state of the art.
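    The wrapped-dimension idea can be sketched as follows: time is projected onto a circle for each known period, and the resulting (cos, sin) pair is appended to the spatial coordinates before clustering. The single fixed period, the Gaussian-mixture clusterer, and the toy data are illustrative assumptions, not the paper's full method.

        # Sketch: wrap time onto a circle for one known period, append the phase
        # coordinates to the spatial ones, and cluster the extended vectors.
        import numpy as np
        from sklearn.mixture import GaussianMixture

        def hypertime_features(xy, t, period):
            """Append (cos, sin) of the phase so t = 0 and t = period coincide."""
            phase = 2.0 * np.pi * t / period
            return np.column_stack([xy, np.cos(phase), np.sin(phase)])

        # Toy data: people appear near x = 5 around "noon" of each 24 h period,
        # and near x = 1 otherwise.
        rng = np.random.default_rng(1)
        t = rng.uniform(0, 24 * 7, 500)
        x = np.where(np.abs((t % 24) - 12) < 3, 5.0, 1.0) + 0.1 * rng.standard_normal(500)
        X = hypertime_features(x[:, None], t, period=24.0)

        gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
        # Query a future time (t = 36, i.e. noon of day two) at both locations.
        future = hypertime_features(np.array([[5.0], [1.0]]), np.array([36.0, 36.0]), 24.0)
        print(gmm.predict(future))  # cluster assignment for each future query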

    Compare Contact Model-based Control and Contact Model-free Learning: A Survey of Robotic Peg-in-hole Assembly Strategies

    In this paper, we present an overview of robotic peg-in-hole assembly and analyze two main strategies: contact model-based and contact model-free strategies. More specifically, we first introduce the contact model-based control approaches, which comprise two steps: contact state recognition and compliant control. Additionally, we focus on a comprehensive analysis of the whole robotic assembly system. Second, we decompose the contact model-free learning algorithms, which operate without the contact state recognition process, into two main subfields: learning from demonstrations and learning from environments (mainly based on reinforcement learning). For each subfield, we survey the landmark studies and ongoing research to compare the different categories. We hope to strengthen the relation between these two research communities by revealing the underlying links. Ultimately, the remaining challenges and open questions in the robotic peg-in-hole assembly community are discussed, and promising directions and potential future work are also considered.
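    For the compliant-control step named above, a minimal sketch of a discrete admittance law is shown below: a measured contact force is turned into a position correction added to the nominal insertion trajectory. The one-dimensional reduction and the gains are illustrative assumptions, not taken from any particular surveyed strategy.

        # Sketch of compliant (admittance) control for one lateral axis:
        # M*x'' + D*x' + K*x = f, integrated with explicit Euler.
        class Admittance1D:
            def __init__(self, m=1.0, d=50.0, k=200.0, dt=0.001):
                self.m, self.d, self.k, self.dt = m, d, k, dt
                self.x, self.v = 0.0, 0.0  # correction and its rate

            def step(self, f_measured, f_desired=0.0):
                f = f_measured - f_desired
                a = (f - self.d * self.v - self.k * self.x) / self.m
                self.v += a * self.dt
                self.x += self.v * self.dt
                return self.x  # offset added to the nominal insertion trajectory

        adm = Admittance1D()
        for _ in range(1000):          # 1 s of a constant 5 N lateral contact force
            offset = adm.step(5.0)
        print(f"compliance offset: {offset:.4f} m")  # approaches f/k = 0.025 m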

    Actuators and sensors for application in agricultural robots: A review

    In recent years, with the rapid development of science and technology, agricultural robots have gradually begun to replace humans in various agricultural operations, changing traditional agricultural production methods. This not only reduces labor input but also improves production efficiency, contributing to the development of smart agriculture. This paper reviews the core technologies used by agricultural robots in non-structured environments. In addition, we review the technological progress of drive systems, control strategies, end-effectors, robotic arms, environmental perception, and other related systems. This research shows that in a non-structured agricultural environment, by using cameras, light detection and ranging (LiDAR), and ultrasonic and satellite navigation equipment, and by integrating sensing, transmission, control, and operation, different types of actuators can be innovatively designed and developed to drive the advance of agricultural robots and to meet the delicate and complex requirements of agricultural products as operational objects, such that better productivity and standardization of agriculture can be achieved. In summary, agricultural production is developing toward a data-driven, standardized, and unmanned approach, with smart agriculture supported by actuator-driven agricultural robots. This paper concludes with a summary of the main existing technologies and challenges in the development of actuators for agricultural robots, and an outlook on the primary development directions of agricultural robots in the near future.

    Computational Modeling Approaches For Task Analysis In Robotic-Assisted Surgery

    Surgery is continuously subject to technological innovations, including the introduction of robotic surgical devices. The ultimate goal is to program the surgical robot to perform certain difficult or complex surgical tasks in an autonomous manner. The ability of current robotic surgery systems to record quantitative motion and video data motivates the development of descriptive mathematical models to recognize, classify, and analyze surgical tasks. Recent advances in machine learning research for uncovering concealed patterns in huge datasets, such as kinematic and video data, offer a possibility to better understand surgical procedures from a systems point of view. This dissertation focuses on bridging the gap between these two lines of research by developing computational models for task analysis in robotic-assisted surgery. The key step for advanced study in robotic-assisted surgery and autonomous skill assessment is to develop techniques that are capable of recognizing fundamental surgical tasks intelligently. Surgical tasks, and at a more granular level surgical gestures, need to be quantified to make them amenable to further study. To this end, we introduce a new framework, DTW-kNN, to recognize and classify three important surgical tasks (suturing, needle passing, and knot tying) based on kinematic data captured using the da Vinci robotic surgery system. Our proposed method needs minimal preprocessing, resulting in a simple, straightforward, and accurate framework that can be applied to any autonomous control system. We also propose an unsupervised gesture segmentation and recognition (UGSR) method, which has the ability to automatically segment and recognize temporal sequences of gestures in RMIS tasks. We then extend our model by applying soft boundary segmentation (Soft-UGSR) to address some of the challenges that exist in surgical motion segmentation; the proposed algorithm can effectively model gradual transitions between surgical activities. Additionally, surgical training is undergoing a paradigm shift with more emphasis on the development of technical skills earlier in training; thus metrics for these skills, especially objective metrics, become crucial. One field of surgery where such techniques can be developed is robotic surgery, since here all movements are already digitized and therefore readily amenable to analysis. Robotic surgery requires surgeons to complete a much longer and more difficult training process, which creates numerous new challenges for surgical training. Hence, a new method of surgical skill assessment is required to ensure that surgeons have an adequate skill level before being allowed to operate freely on patients. Among many possible approaches, those that provide noninvasive monitoring of the expert surgeon and can automatically evaluate the surgeon's skill are of increased interest. Therefore, in this dissertation we develop a predictive framework for surgical skill assessment to automatically evaluate the performance of surgeons in RMIS. Our classification framework is based on Global Movement Features (GMFs) extracted from kinematic movement data. The proposed method addresses some of the limitations of previous work and gives more insight into the underlying patterns of surgical skill levels.
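    The DTW-kNN idea lends itself to a compact sketch: a textbook dynamic-time-warping distance between variable-length kinematic sequences, followed by nearest-neighbour voting. The toy sequences and labels below are illustrative; the dissertation's actual pipeline and features are not reproduced here.

        # Sketch: DTW distance between 1-D sequences plus k-nearest-neighbour voting.
        import numpy as np

        def dtw(a, b):
            """Classic O(len(a) * len(b)) dynamic time warping."""
            n, m = len(a), len(b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = abs(a[i - 1] - b[j - 1])
                    D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            return D[n, m]

        def knn_classify(query, train, k=1):
            """train: list of (sequence, label); majority label of the k nearest."""
            dists = sorted((dtw(query, seq), label) for seq, label in train)
            top = [label for _, label in dists[:k]]
            return max(set(top), key=top.count)

        train = [(np.sin(np.linspace(0, 4, 80)), "suturing"),
                 (np.linspace(0, 1, 60), "needle_passing")]
        print(knn_classify(np.sin(np.linspace(0, 4, 100)), train))  # -> suturing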

    Virtualized Welding Based Learning of Human Welder Behaviors for Intelligent Robotic Welding

    Combining the human welder (with intelligence and sensing versatility) and automated welding robots (with precision and consistency) can lead to next-generation intelligent welding systems. In this dissertation, intelligent welding robots are developed through process modeling and control and by learning human welder behavior. Weld penetration and the 3D weld pool surface are first accurately controlled for an automated Gas Tungsten Arc Welding (GTAW) machine. A closed-form model predictive control (MPC) algorithm is derived for real-time welding applications. The skilled welder's response to the 3D weld pool surface, realized by adjusting the welding current, is then modeled using an Adaptive Neuro-Fuzzy Inference System (ANFIS) and compared to that of a novice welder. Automated welding experiments confirm the effectiveness of the proposed human response model. A virtualized welding system is then developed that enables transferring human knowledge into a welding robot. The learning of human welder movement (i.e., welding speed) is first realized with Virtual Reality (VR) enhancement using iterative K-means based local ANFIS modeling. As a separate effort, the learning is performed without VR enhancement, utilizing a fuzzy classifier to rank the data and preserve only the high-ranking “correct” responses. The trained supervised ANFIS model is transferred to the welding robot and the performance of the controller is examined. A fuzzy weighting based data fusion approach is proposed to combine multiple machine- and human-intelligence models; the data fusion model can outperform the individual machine-based control algorithm and the welder intelligence-based models (with and without VR enhancement). Finally, a data-driven approach is proposed to model human welder adjustments in 3D (including welding speed, arc length, and torch orientation). Teleoperated training experiments are conducted in which a human welder adjusts the torch movements in 3D based on observation of real-time weld pool image feedback. The data is rated off-line by the welder and a welder rating system is synthesized. An ANFIS model is then proposed to correlate the 3D weld pool characteristic parameters with the welder's torch movements. A foundation is thus established to rapidly extract human intelligence and transfer such intelligence into welding robots.
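    The fuzzy-weighting fusion mentioned above can be sketched roughly as follows: each candidate model's recent tracking error is mapped through a membership function to a weight, and the fused command is the normalized weighted sum. The triangular membership, the two toy models, and all numbers are illustrative assumptions, not the dissertation's design.

        # Sketch: fuzzy-weighted fusion of welding-current commands from several models.
        import numpy as np

        def tri_membership(err, good=0.0, bad=1.0):
            """Weight 1 at err <= good, falling linearly to 0 at err >= bad."""
            return float(np.clip((bad - err) / (bad - good), 0.0, 1.0))

        def fuse_commands(commands, recent_errors):
            w = np.array([tri_membership(e) for e in recent_errors])
            if w.sum() == 0.0:          # every model unreliable: fall back to the mean
                w = np.ones_like(w)
            return float(np.dot(w, commands) / w.sum())

        # A machine (MPC-like) model and a human-behavior (ANFIS-like) model
        # each propose a current; weights come from recent pool-width errors.
        commands = np.array([152.0, 148.0])       # amperes
        recent_errors = np.array([0.2, 0.6])      # normalized tracking errors
        print(f"fused welding current: {fuse_commands(commands, recent_errors):.1f} A")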

    Reinforcement Learning Algorithms in Humanoid Robotics
