
    Enhancing Automation with Label Defect Detection and Content Parsing Algorithms

    The stable operation of power transmission and distribution is closely related to the overall performance and construction quality of circuit breakers. Focusing on circuit breakers as the research subject, we propose a machine vision method for automated defect detection, which can be applied in intelligent robots to improve detection efficiency, reduce costs, and address issues related to performance and assembly quality. Based on the LeNet-5 convolutional neural network, a method for detecting character defects on labels is proposed. This method is then combined with squeeze-and-excitation (SE) networks to achieve more precise classification through a feature-map recalibration mechanism. The experimental results show that the accuracy of the LeNet-CB model can reach 99.75%, while the average time for single-character detection is 17.9 milliseconds. Although the LeNet-SE model demonstrates certain limitations in handling some easily confused characters, it maintains an average accuracy of 98.95%. Through further optimization, a label content detection method based on the LSTM framework is constructed, with an average accuracy of 99.57% and an average detection time of 84 milliseconds. Overall, the system meets the detection accuracy requirements and delivers a rapid response, making the results of this research a meaningful contribution to the practical foundation for ongoing improvements in robot intelligence and machine vision.
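The squeeze-and-excitation mechanism mentioned above can be illustrated with a minimal sketch. The pure-Python function below shows the general SE idea (squeeze by global average pooling, excite with a small bottleneck network, then rescale each channel); it is not the paper's actual LeNet-SE model, and the weights in the example are hypothetical.

```python
import math

def se_block(channels, w1, w2):
    """Squeeze-and-excitation recalibration over a list of 2D feature maps.

    channels: list of C feature maps (each a list of rows of floats)
    w1: bottleneck weights, shape (C // r) x C
    w2: expansion weights, shape C x (C // r)
    """
    # Squeeze: global average pooling, one scalar per channel
    squeezed = [sum(map(sum, ch)) / (len(ch) * len(ch[0])) for ch in channels]
    # Excitation: FC -> ReLU -> FC -> sigmoid yields one scale per channel
    hidden = [max(0.0, sum(w * s for w, s in zip(row, squeezed))) for row in w1]
    scales = [1.0 / (1.0 + math.exp(-sum(w * h for w, h in zip(row, hidden))))
              for row in w2]
    # Recalibrate: weight every channel's map by its learned importance
    return [[[v * s for v in row] for row in ch]
            for ch, s in zip(channels, scales)]
```

In a full model this block sits after a convolutional layer, letting the network emphasize channels that discriminate between easily confused characters.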

    Development of new intelligent autonomous robotic assistant for hospitals

    Continuous technological development in modern societies has increased the quality of life and average life-span of people. This imposes an extra burden on the current healthcare infrastructure, which also creates the opportunity for developing new, autonomous, assistive robots to help alleviate this extra workload. The research question explored the extent to which a prototypical robotic platform can be created and how it may be implemented in a hospital environment with the aim of assisting the hospital staff with daily tasks, such as guiding patients and visitors, following patients to ensure safety, and making deliveries to and from rooms and workstations. In terms of major contributions, this thesis outlines five domains of the development of an actual robotic assistant prototype. Firstly, a comprehensive schematic design is presented in which mechanical, electrical, motor control and kinematics solutions have been examined in detail. Next, a new method has been proposed for assessing the intrinsic properties of different flooring types using machine learning to classify mechanical vibrations. Thirdly, the technical challenge of enabling the robot to simultaneously map and localise itself in a dynamic environment has been addressed, whereby leg detection is introduced to ensure that, whilst mapping, the robot is able to distinguish between people and the background. The fourth contribution integrates geometric collision prediction into stabilised dynamic navigation methods, thus optimising the navigation ability to update real-time path planning in a dynamic environment. Lastly, the problem of detecting gaze at long distances has been addressed by means of a new eye-tracking hardware solution which combines infra-red eye tracking and depth sensing.
The research serves both to provide a template for the development of comprehensive mobile assistive-robot solutions, and to address some of the inherent challenges currently present in introducing autonomous assistive robots in hospital environments.
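Geometric collision prediction of the kind mentioned above is often built on a closest-approach computation under a constant-velocity assumption. The sketch below is an illustration of that general idea, not the thesis's actual navigation method; the function name and 2D formulation are assumptions.

```python
def closest_approach(rel_pos, rel_vel):
    """Time and distance of closest approach between a robot and a moving
    person, both assumed to travel at constant velocity. rel_pos and rel_vel
    are the (x, y) position and velocity of the person relative to the robot.
    Returns (t_star, min_distance); a small min_distance flags a likely
    collision that the planner should avoid.
    """
    vv = rel_vel[0] ** 2 + rel_vel[1] ** 2
    if vv == 0.0:
        t = 0.0                       # no relative motion: distance is constant
    else:
        # minimise |p + v t| over t >= 0 (past approaches are irrelevant)
        t = max(0.0, -(rel_pos[0] * rel_vel[0] + rel_pos[1] * rel_vel[1]) / vv)
    dx = rel_pos[0] + rel_vel[0] * t
    dy = rel_pos[1] + rel_vel[1] * t
    return t, (dx ** 2 + dy ** 2) ** 0.5
```

A planner would re-evaluate this prediction each cycle as the leg detector updates the person's estimated velocity.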

    Service robotics and machine learning for close-range remote sensing

    The abstract is in the attachment.

    Collaborative mobile industrial manipulator : a review of system architecture and applications

    This paper provides a comprehensive review of the development of the Collaborative Mobile Industrial Manipulator (CMIM), which is currently in high demand. Such a review is necessary to gain an overall understanding of the advanced technology behind CMIMs. This is the first review to combine system architecture and applications, which is necessary in order to gain a full understanding of the system. The classical framework of the CMIM is first discussed, including hardware and software. The subsystems typically involved in hardware, such as the mobile platform, manipulator, end-effector and sensors, are presented. With regard to software, the planner, controller, perception and interaction modules are also described. Following this, the common applications in industry (logistics, manufacturing and assembly) are surveyed. Finally, trends are predicted and open issues are indicated as references for CMIM researchers. Specifically, more research is needed in the areas of interaction, fully autonomous control, coordination and standards. In addition, more experiments in real environments are expected to be performed, and novel collaborative robotic systems are likely to be proposed in future, alongside advanced technology adopted from other areas. In all, the system is expected to become more intelligent, collaborative and autonomous.

    Signal and Information Processing Methods for Embedded Robotic Tactile Sensing Systems

    The human skin has several sensors with different properties and responses that are able to detect stimuli resulting from mechanical stimulation. Pressure sensors are the most important type of receptor for the exploration and manipulation of objects. In recent decades, smart tactile sensing based on different sensing techniques has been developed, as its application in robotics and prosthetics is considered of huge interest, mainly driven by the prospect of autonomous and intelligent robots that can interact with the environment. However, regarding the estimation of object properties on robots, hardness detection is still a major limitation due to the lack of techniques to estimate it. Furthermore, finding processing methods that can interpret the measured information from multiple sensors and extract relevant information is a challenging task. Moreover, embedding processing methods and machine learning algorithms in robotic applications to extract meaningful information such as object properties from tactile data is an ongoing challenge, governed by device constraints (power, memory, etc.), the computational complexity of the processing and machine learning algorithms, and the application requirements (real-time operation, high prediction performance). In this dissertation, we focus on the design and implementation of pre-processing methods and machine learning algorithms to handle the aforementioned challenges for a tactile sensing system in robotic applications. First, we propose a tactile sensing system for robotic applications. Then we present efficient pre-processing and feature extraction methods for our tactile sensors. Next, we propose a learning strategy to reduce the computational cost of our processing unit in object classification using a sensorized Baxter robot. Finally, we present a real-time robotic tactile sensing system for hardness classification on resource-constrained devices.
The first study represents a further assessment of the sensing system, which is based on the PVDF sensors and the interface electronics developed in our lab. In particular, it first presents the development of a skin patch (a multilayer structure) that allows us to use the sensors in several applications such as robotic hands and grippers. Second, it shows the characterization of the developed skin patch. Third, it validates the sensing system. Moreover, we designed a filter to remove noise and detect touch. The experimental assessment demonstrated that the developed skin patch and the interface electronics can indeed detect different touch patterns and stimulus waveforms. Moreover, the results of the experiments defined the frequency range of interest and the response of the system to realistic interactions, such as grasp and release events. In the next study, we presented an easy integration of our tactile sensing system into the Baxter gripper. Computationally efficient pre-processing techniques were designed to filter the signal and extract relevant information from multiple sensor signals, in addition to feature extraction methods. These processing methods in turn also aim to reduce the computational complexity of the machine learning algorithms utilized for object classification. The proposed system and processing strategy were evaluated on an object classification application by integrating our system into the gripper and collecting data from grasps of multiple objects. We further proposed a learning strategy to accomplish a trade-off between the generalization accuracy and the computational cost of the whole processing unit. The proposed pre-processing and feature extraction techniques, together with the learning strategy, led to models with extremely low complexity and very high generalization accuracy. Moreover, the support vector machine achieved the best trade-off between accuracy and computational cost on tactile data from our sensors.
Finally, we presented the development and edge implementation of a real-time tactile sensing system for hardness classification on the Baxter robot based on machine and deep learning algorithms. We developed and implemented in plain C a set of functions that provide the fundamental layer functionalities of the machine learning and deep learning (ML and DL) models, along with the pre-processing methods to extract the features and normalize the data. The models can be deployed to any device that supports C code, since they do not rely on any existing libraries. Shallow ML/DL algorithms were designed for deployment on resource-constrained devices. To evaluate our work, we collected data by grasping objects of different hardness and shape. Two classification problems were addressed: five levels of hardness classified on objects of the same shape, and five levels of hardness classified on objects of two different shapes. Furthermore, optimization techniques were employed. The models and pre-processing were implemented on a resource-constrained device, where we assessed the performance of the system in terms of accuracy, memory footprint, time latency, and energy consumption. For both classification problems we achieved real-time inference (< 0.08 ms), low energy consumption (3.35 μJ), extremely small models (1576 bytes), and high accuracy (above 98%).
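Low-footprint feature extraction of the kind the dissertation describes typically reduces each signal window to a handful of cheap time-domain statistics before classification. The sketch below is an illustration with a generic feature set (mean, RMS, peak), not necessarily the dissertation's exact features; the function name is an assumption.

```python
import math

def tactile_features(window):
    """Reduce one window of a tactile sensor signal to three time-domain
    features: mean, RMS energy, and peak amplitude. Each is O(n) with no
    buffering beyond the window, cheap enough for a microcontroller."""
    n = len(window)
    mean = sum(window) / n
    rms = math.sqrt(sum(x * x for x in window) / n)
    peak = max(abs(x) for x in window)
    return [mean, rms, peak]
```

Feeding such compact feature vectors (rather than raw samples) to a shallow classifier like an SVM is one common way to trade a little accuracy for a large reduction in memory and inference time.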

    A machine learning-based intrusion detection for detecting internet of things network attacks

    The Internet of Things (IoT) refers to the collection of all those devices that can connect to the Internet to collect and share data. The number and variety of connected devices continue to grow tremendously, and the proliferation of Internet connections and the advent of new technologies such as the IoT pose new privacy and security risks, with varied and sophisticated intrusions driving their way into computer networks. Companies are increasing their investment in research to improve the detection of these attacks, and by comparing the highest rates of accuracy, institutions are selecting intelligent procedures for testing and verification. The adoption of IoT in different sectors, including health, has also continued to increase in recent times, and IoT applications have become well known to technology researchers and developers. Unfortunately, the striking challenge of IoT is the privacy and security issues resulting from the energy limitations and scalability of IoT devices. Therefore, how to improve the security and privacy of IoT remains an important problem in the computer security field. This paper proposes a machine learning-based intrusion detection system (ML-IDS) for detecting IoT network attacks. The primary objective of this research is to apply a supervised ML algorithm-based IDS to IoT. In the first stage of this research methodology, feature scaling was done using minimum-maximum (min-max) normalization on the UNSW-NB15 dataset to limit information leakage onto the test data. This dataset is a mixture of contemporary attacks and normal network traffic activities grouped into nine different attack types. In the next stage, dimensionality reduction was performed with Principal Component Analysis (PCA). Lastly, six proposed machine learning models were used for the analysis.
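The leakage-avoidance point above comes down to fitting the min-max statistics on the training split only and reusing them on the test split. The sketch below illustrates that discipline in general terms; the helper names are hypothetical and this is not the paper's actual pipeline code.

```python
def fit_minmax(train_rows):
    """Learn per-feature min and max from the TRAINING split only, so no
    statistics from the test split leak into the scaling."""
    cols = list(zip(*train_rows))
    return [min(c) for c in cols], [max(c) for c in cols]

def apply_minmax(rows, mins, maxs):
    """Scale each feature with the training-split statistics. Constant
    features map to 0.0 to avoid division by zero."""
    return [[(x - lo) / (hi - lo) if hi > lo else 0.0
             for x, lo, hi in zip(row, mins, maxs)]
            for row in rows]
```

Note that test-split values can legitimately fall outside [0, 1]; clamping or refitting on the test data would reintroduce the leakage the paper is guarding against.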
The experimental results of our findings were evaluated in terms of the validation dataset, accuracy, area under the curve, recall, F1, precision, kappa, and Matthews correlation coefficient (MCC). The findings were also benchmarked against existing works, and our results were competitive, with an accuracy of 99.9% and an MCC of 99.97%.
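Of the metrics listed, MCC is the least standard; it is computed from the binary confusion counts as follows. This is the textbook definition, shown here as a minimal sketch rather than the paper's evaluation code.

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient from binary confusion counts:
    (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN)).
    Ranges from -1 (total disagreement) to +1 (perfect prediction);
    returns 0.0 when any marginal is empty, the usual convention."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0
```

Unlike accuracy, MCC stays informative on the imbalanced class mixes typical of intrusion datasets such as UNSW-NB15, which is why it is reported alongside accuracy here.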

    Positioning technology for stepwise underground robots

    Pipeline robots, borehole robots, and exploration robots that work in underground environments can be classified as underground robots. When an underground robot undertakes a task, tracing and mapping its track is very important. This project addresses the development of a positioning technique for stepwise underground robots, which have been developed at Durham University. This research is expected to provide a general benefit to stepwise robotic positioning systems rather than to a particular robot or situation. The initial period of this project was the most difficult: after a few months of literature searching, no suitable positioning technique had been found. Existing techniques are suitable for surface robots, undersea robots, or airborne robots, but fall far short of the application requirements for underground robots. Positioning technology depends on sensor techniques and measurement technologies, and the underground environment restricts the use of absolute measurement technologies. Consequently, underground robotic positioning systems rely heavily on relative measurements, which can cause unbounded accumulation of positioning errors. Moreover, underground environments restrict the use of many high-precision sensors because of restricted space and other factors. Hence, the feasibility of developing underground robotic positioning systems with high long-term accuracy was problematic. Since it was found that there was a lack of research on underground robotic positioning, a fundamental investigation became necessary. The fundamentals include the dominant error and the characteristics of the accumulation of positioning errors. After this investigation, the difficulty and feasibility of developing a positioning system with high long-term accuracy were understood more clearly, and the key factors for improving the accuracy of a positioning system were identified. Based on these, a novel approach based on a parallel linkage mechanism was proposed.
This approach is flexible in terms of geometrical structure and offers the possibility of improving the long-term accuracy of a positioning system. Although parallel linkage mechanisms have drawn a great deal of attention from researchers in past years, this is the first time a parallel linkage mechanism has been applied to a robotic positioning system. Consequently, this application of parallel linkage mechanisms generated new problems. In this project, a Principal Component Analysis (PCA) method is applied to solve the positioning problems, and a particular case is used to show how to solve them. Through this case, the advantages of the approach and the feasibility of improving positioning accuracy are presented. The methodology used to solve the problems for particular cases can also be applied to general situations, as is also illustrated. Many problems still need to be solved, and some of them are discussed at the end of this thesis. The author believes that the proposed approach can be applied to industrial projects in the near future.
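The unbounded error accumulation that motivates this thesis can be demonstrated with a toy simulation: if every step adds independent zero-mean noise to a relative measurement, the RMS position error grows like sigma * sqrt(n) and never settles. This sketch is a generic dead-reckoning illustration, not the thesis's error model; the function name and parameters are assumptions.

```python
import random

def dead_reckoning_rms_error(steps, sigma, trials=200, seed=0):
    """Monte-Carlo estimate of the RMS position error after `steps`
    relative measurements, each corrupted by independent Gaussian noise
    of standard deviation `sigma`. The estimate grows roughly like
    sigma * sqrt(steps), i.e. without bound."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        # Summing per-step noise models the accumulation in dead reckoning
        err = sum(rng.gauss(0.0, sigma) for _ in range(steps))
        total += err * err
    return (total / trials) ** 0.5
```

This is why absolute references (unavailable underground) are normally used to bound drift, and why reducing the dominant per-step error is the key lever identified in the thesis.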

    A Survey of Applications and Human Motion Recognition with Microsoft Kinect

    Microsoft Kinect, a low-cost motion sensing device, enables users to interact with computers or game consoles naturally through gestures and spoken commands, without any other peripheral equipment. As such, it has commanded intense interest in research and development on the Kinect technology. In this paper, we present a comprehensive survey of Kinect applications and the latest research and development on motion recognition using data captured by the Kinect sensor. On the applications front, we review the applications of the Kinect technology in a variety of areas, including healthcare, education and performing arts, robotics, sign language recognition, retail services, workplace safety training, and 3D reconstruction. On the technology front, we provide an overview of the main features of both versions of the Kinect sensor together with the depth sensing technologies used, and review the literature on human motion recognition techniques used in Kinect applications. We provide a classification of motion recognition techniques to highlight the different approaches used in human motion recognition. Furthermore, we compile a list of publicly available Kinect datasets. These datasets are valuable resources for researchers investigating better methods for human motion recognition and lower-level computer vision tasks such as segmentation, object detection, and human pose estimation.
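Many of the skeleton-based motion recognition techniques such a survey covers start from simple geometric features over the 3D joint positions the Kinect reports, such as the angle at a joint. The sketch below illustrates that common feature, not any specific method from the survey; the function name is an assumption.

```python
import math

def joint_angle(a, b, c):
    """Angle in degrees at joint b formed by 3D skeleton points a, b, c;
    e.g. the elbow angle from (shoulder, elbow, wrist) coordinates."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    # Clamp for numerical safety before acos
    cos_t = max(-1.0, min(1.0, dot / (n1 * n2)))
    return math.degrees(math.acos(cos_t))
```

A time series of such angles per frame forms a compact, view-invariant input for classifiers like HMMs or recurrent networks used in gesture recognition.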

    State of AI-based monitoring in smart manufacturing and introduction to focused section

    Over the past few decades, intelligentization, supported by artificial intelligence (AI) technologies, has become an important trend in industrial manufacturing, accelerating the development of smart manufacturing. In modern industries, standard AI has been endowed with additional attributes, yielding the so-called industrial artificial intelligence (IAI) that has become the technical core of smart manufacturing. AI-powered manufacturing brings remarkable improvements in many aspects of closed-loop production chains, from manufacturing processes to end-product logistics. In particular, IAI incorporating domain knowledge has benefited the area of production monitoring considerably. Advanced AI methods such as deep neural networks, adversarial training, and transfer learning have been widely used to support both diagnostics and predictive maintenance of the entire production process. It is generally believed that IAI is the critical technology needed to drive the future evolution of industrial manufacturing. This article offers a comprehensive overview of AI-powered manufacturing and its applications in monitoring. More specifically, it summarizes the key technologies of IAI and discusses their typical application scenarios with respect to three major aspects of production monitoring: fault diagnosis, remaining useful life prediction, and quality inspection. In addition, the existing problems and future research directions of IAI are discussed. This article further introduces the papers in this focused section on AI-based monitoring in smart manufacturing by weaving them into the overview, highlighting how they contribute to and extend the body of literature in this area.