
    Automated freeform assembly of threaded fasteners

    Over the past two decades, a major part of the manufacturing and assembly market has been driven by customer requirements. Increasing customer demand for personalised products creates demand for smaller batch sizes, shorter production times, lower costs, and the flexibility to produce families of products - or different parts - with the same sets of equipment. Consequently, manufacturing companies have deployed various automation systems and production strategies to improve their resource efficiency and move towards right-first-time production. However, many of these automated systems, which are involved with robot-based, repeatable assembly automation, require component-specific fixtures for accurate positioning and extensive robot programming to achieve flexibility in their production. Threaded fastening operations are widely used in assembly. In high-volume production, the fastening processes are commonly automated using jigs, fixtures, and semi-automated tools. This form of automation delivers reliable assembly results at the expense of flexibility and requires component variability to be adequately controlled. On the other hand, in low-volume, high-value manufacturing, fastening processes are typically carried out manually by skilled workers. This research aims to address the aforementioned issues by developing a freeform automated threaded-fastener assembly system that uses 3D visual guidance. The proof-of-concept system developed focuses on picking up fasteners from clutter, identifying a hole feature in an imprecisely positioned target component, and carrying out torque-controlled fastening. This approach achieves flexibility and adaptability without the use of dedicated fixtures and robot programming. This research also investigates and evaluates different 3D imaging technologies to identify the technology suitable for fastener assembly in a non-structured industrial environment. The proposed solution utilises commercially available technologies to enhance the precision and speed of identification of components for assembly processes, thereby improving and validating the possibility of reliably implementing this solution for industrial applications. As part of this research, a number of novel algorithms were developed to robustly identify assembly components located in a random environment by enhancing existing methods and technologies within the domain of fastening processes. A bolt identification algorithm was developed to identify bolts located in random clutter by enhancing an existing surface-based matching algorithm. A novel hole feature identification algorithm was developed to detect threaded holes and identify their size and location in 3D. The developed bolt and feature identification algorithms are robust and have the sub-millimetre accuracy required to perform successful fastener assembly in industrial conditions. In addition, the processing time required for these identification algorithms - to identify and localise bolts and hole features - is less than a second, thereby increasing the speed of fastener assembly.
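
    The thesis's own bolt- and hole-identification algorithms are not reproduced in this abstract, but the following minimal sketch illustrates one way a circular hole in a roughly planar point-cloud patch could be localised: rasterise the points into an occupancy grid and find the largest interior empty region. The function name, grid resolution, and the synthetic test plate are assumptions for illustration, not the thesis's method.

```python
# Minimal sketch (not the thesis's algorithm): locate a circular hole in a
# roughly planar point-cloud patch via an occupancy grid. Grid resolution and
# the synthetic test plate below are assumptions for illustration.
import numpy as np
from scipy import ndimage

def find_hole_2d(points_xy, cell_size=0.5):
    """points_xy: (N, 2) array in mm, already projected onto the fitted plane.
    Returns (centre_xy, radius_mm) of the largest interior empty region."""
    mins = points_xy.min(axis=0)
    shape = np.ceil((points_xy.max(axis=0) - mins) / cell_size).astype(int) + 1
    occupancy = np.zeros(shape, dtype=bool)
    idx = np.floor((points_xy - mins) / cell_size).astype(int)
    occupancy[idx[:, 0], idx[:, 1]] = True

    # Connected empty regions are hole candidates.
    empty_labels, n = ndimage.label(~occupancy)
    if n == 0:
        return None
    # Ignore empty regions touching the patch border (outside the component).
    border = np.unique(np.concatenate([empty_labels[0, :], empty_labels[-1, :],
                                       empty_labels[:, 0], empty_labels[:, -1]]))
    sizes = ndimage.sum(~occupancy, empty_labels, index=list(range(1, n + 1)))
    best, best_size = None, 0
    for label_id, size in zip(range(1, n + 1), sizes):
        if label_id not in border and size > best_size:
            best, best_size = label_id, size
    if best is None:
        return None
    cells = np.argwhere(empty_labels == best)
    centre = (cells.mean(axis=0) + 0.5) * cell_size + mins
    radius = np.sqrt(best_size * cell_size ** 2 / np.pi)  # area -> equivalent radius
    return centre, radius

# Synthetic 40 x 40 mm plate with a 6 mm diameter hole centred at (10, 10).
rng = np.random.default_rng(0)
plate = rng.uniform(-20.0, 20.0, size=(20000, 2))
plate = plate[np.linalg.norm(plate - np.array([10.0, 10.0]), axis=1) > 3.0]
print(find_hole_2d(plate))  # approximately ((10, 10), 3)
```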

    A New Vehicle Localization Scheme Based on Combined Optical Camera Communication and Photogrammetry

    The demand for autonomous vehicles is increasing gradually owing to their enormous potential benefits. However, several challenges, such as vehicle localization, are involved in the development of autonomous vehicles. A simple and secure algorithm for vehicle positioning is proposed herein without massively modifying the existing transportation infrastructure. For vehicle localization, vehicles on the road are classified into two categories: host vehicles (HVs), which estimate other vehicles' positions, and forwarding vehicles (FVs), which move in front of the HVs. The FV transmits modulated data from its tail (or back) light, and the camera of the HV receives that signal using optical camera communication (OCC). In addition, streetlight (SL) data are considered to ensure the position accuracy of the HV. Determining the HV position minimizes the relative position variation between the HV and FV. Using photogrammetry, the distance between the FV or SL and the camera of the HV is calculated by measuring the area the object occupies on the image sensor. By comparing the change in distance between the HV and SLs with the change in distance between the HV and the FV, the positions of FVs are determined. The performance of the proposed technique is analyzed, and the results indicate a significant improvement. The experimental distance measurement validated the feasibility of the proposed scheme.
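
    As a rough illustration of the photogrammetric distance estimation described above, the sketch below applies the standard pinhole-camera relation (distance = real width x focal length / width on sensor). The camera parameters and tail-light width are assumed example values, not figures from the paper.

```python
# Minimal pinhole-camera sketch of the distance-from-image-size idea described
# above. Camera parameters and tail-light width are assumed example values.

def distance_from_image_size(real_width_m, width_px, focal_length_mm, pixel_pitch_um):
    """Estimate object distance (m) from how wide the object appears on the sensor."""
    width_on_sensor_mm = width_px * pixel_pitch_um * 1e-3  # pixels -> mm on sensor
    # Similar triangles: real_width / distance = width_on_sensor / focal_length
    return real_width_m * focal_length_mm / width_on_sensor_mm

# A 0.15 m wide tail light spanning 90 pixels on a camera with a 6 mm lens and
# 3.75 um pixels is roughly 2.7 m away.
print(distance_from_image_size(0.15, 90, 6.0, 3.75))  # ~2.67
```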

    Recent Advances in mmWave-Radar-Based Sensing, Its Applications, and Machine Learning Techniques: A Review

    Human gesture detection, obstacle detection, collision avoidance, parking aids, automotive driving, medical, meteorological, industrial, agriculture, defense, space, and other relevant fields have all benefited from recent advancements in mmWave radar sensor technology. A mmWave radar has several advantages that set it apart from other types of sensors. A mmWave radar can operate in bright, dazzling, or no-light conditions. It offers better antenna miniaturization than other traditional radars, and it has better range resolution. Moreover, as more data sets have been made available, there has been a significant increase in the potential for incorporating radar data into different machine learning methods for various applications. This review focuses on key performance metrics in mmWave-radar-based sensing, detailed applications, and machine learning techniques used with mmWave radar for a variety of tasks. The article starts with a discussion of the working bands of mmWave radars, then moves on to the types of mmWave radars and their key specifications, mmWave radar data interpretation, and applications in various domains, and ends with a discussion of machine learning algorithms applied to radar data. Our review serves as a practical reference for beginners developing mmWave-radar-based applications by utilizing machine learning techniques.
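
    As a small numeric illustration of one performance metric such reviews typically discuss, the sketch below computes FMCW range resolution from the swept bandwidth (delta_R = c / (2 * B)). The 4 GHz bandwidth is an assumed example, not a value taken from the article.

```python
# Quick numeric illustration of one key mmWave metric: FMCW range resolution
# depends only on the swept bandwidth, delta_R = c / (2 * B). The 4 GHz
# bandwidth is an assumed example, not a value from the article.
C = 3e8  # speed of light, m/s

def range_resolution_m(bandwidth_hz):
    return C / (2 * bandwidth_hz)

# An automotive chirp sweeping 4 GHz resolves targets about 3.75 cm apart.
print(range_resolution_m(4e9))  # 0.0375
```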

    COMPUTER VISION BASED ON RASPBERRY PI SYSTEM

    This paper focuses on designing and developing a Raspberry Pi-based system employing a camera that is able to detect and count objects within a target area. Python was the programming language of choice for this work: it is a very powerful language, it is compatible with the Pi, it lends itself to rapid application development, and there are online communities that program the Raspberry Pi computer using Python. The results show that the implemented system was able to detect different kinds of objects in a given image. The number of objects was also generated and displayed by the system. The results also show that an average efficiency of 90.206% was achieved. The system is therefore seen to be highly reliable.
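
    The authors' exact pipeline is not given in this abstract, but a minimal OpenCV sketch of the detect-and-count idea might look like the following; the blur kernel, thresholding choices, minimum contour area, and input filename are assumptions.

```python
# Minimal OpenCV sketch of the detect-and-count idea (not the authors' exact
# pipeline); thresholding choices, minimum area, and filename are assumptions.
import cv2

def count_objects(image_path, min_area=500):
    """Count distinct objects in an image via thresholding and contour detection."""
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    # Otsu's method picks the binarisation threshold automatically.
    _, binary = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Discard tiny contours, which are usually noise.
    return sum(1 for c in contours if cv2.contourArea(c) > min_area)

if __name__ == "__main__":
    print(count_objects("test_scene.jpg"))  # hypothetical input image
```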

    Artificial Intelligence-based Cybersecurity for Connected and Automated Vehicles

    The damaging effects of cyberattacks on an industry like Cooperative Connected and Automated Mobility (CCAM) can be tremendous. From the least severe to the worst, these include damage to the reputation of vehicle manufacturers, increased reluctance of customers to adopt CCAM, the loss of working hours (with a direct impact on the European GDP), material damage, increased environmental pollution due, for example, to traffic jams or malicious modifications of sensors’ firmware, and ultimately the grave danger to human lives, whether drivers, passengers, or pedestrians. Connected vehicles will soon become a reality on our roads, bringing along new services and capabilities, but also technical challenges and security threats. To overcome these risks, the CARAMEL project has developed several anti-hacking solutions for the new generation of vehicles. CARAMEL (Artificial Intelligence-based Cybersecurity for Connected and Automated Vehicles), a research project co-funded by the European Union under the Horizon 2020 framework programme, brings together a consortium of 15 organizations from 8 European countries and 3 Korean partners. The project applies a proactive approach based on Artificial Intelligence and Machine Learning techniques to detect and prevent potential cybersecurity threats to autonomous and connected vehicles. The approach is organised around four fundamental pillars, namely Autonomous Mobility, Connected Mobility, Electromobility, and Remote Control Vehicle. This book presents theory and results from each of these technical directions.

    Synthetic Data for Object Classification in Industrial Applications

    One of the biggest challenges in machine learning is data collection. Training data is an important part since it determines how the model will behave. In object classification, capturing a large number of images per object and in different conditions is not always possible and can be very time-consuming and tedious. Accordingly, this work explores the creation of artificial images using a game engine to cope with limited data in the training dataset. We combine real and synthetic data to train the object classification engine, a strategy that has been shown to be beneficial in increasing confidence in the decisions made by the classifier, which is often critical in industrial setups. To combine real and synthetic data, we first train the classifier on a massive amount of synthetic data, and then we fine-tune it on real images. Another important result is that the number of real images needed for fine-tuning is not very high; top accuracy is reached with just 12 or 24 images per class. This substantially reduces the requirement of capturing a large amount of real data. (Accepted for publication at ICPRA)
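
    A minimal sketch of the pretrain-on-synthetic, then fine-tune-on-real strategy described above is shown below, assuming a torchvision ResNet-18 and ImageFolder-style datasets; the folder names, class count, and hyperparameters are illustrative assumptions, not the paper's actual setup.

```python
# Illustrative sketch of the pretrain-on-synthetic, fine-tune-on-real strategy
# described above. Dataset folders, class count, and hyperparameters are
# assumptions; the paper's actual architecture and settings may differ.
import torch
from torch import nn
from torchvision import datasets, models, transforms

tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

def make_loader(root, batch_size=32):
    ds = datasets.ImageFolder(root, transform=tfm)  # hypothetical folder layout
    return torch.utils.data.DataLoader(ds, batch_size=batch_size, shuffle=True)

def train(model, loader, epochs, lr):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            opt.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            opt.step()

model = models.resnet18(weights=None, num_classes=10)  # 10 classes assumed

# Stage 1: train on a large synthetic set rendered with a game engine.
train(model, make_loader("data/synthetic"), epochs=10, lr=1e-3)

# Stage 2: fine-tune on a handful of real photos (e.g. 12-24 per class) with a
# smaller learning rate so the synthetic pretraining is not overwritten.
train(model, make_loader("data/real_fewshot"), epochs=5, lr=1e-4)
```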