A STUDY OF MACHINE VISION IN THE AUTOMOTIVE INDUSTRY
With the growth of industrial automation, it has become increasingly important to validate the quality of every manufactured part during production. Until now, human visual inspection aided by hard tooling or machines has been the primary means to this end, but the speed of today's production lines, the complexity of production equipment and the high standards of quality to which parts must adhere frequently make the traditional methods of industrial inspection and control impractical, if not impossible.
Consequently, new solutions have been developed for the monitoring and control of industrial processes in real time. One such technology is machine vision. After many years of research and development, computerised vision systems are now leaving the laboratory and being used successfully in the factory environment. As a sensing technique they are both robust and competitively priced, which has opened up a whole new sector for automation.
Machine vision systems are becoming an important and integral part of the automotive manufacturing process, with applications ranging from inspection, classification, robot guidance and assembly verification through to process monitoring and control. Although the number of systems in current use is still relatively small, there can be no doubt, given the issues at stake, that the automotive industry will once again lead the way with the implementation of machine vision, just as it did with robotic technology.
The thesis considers machine vision and, in particular, its deployment within the automotive industry. The work is presented for the prospective end-user rather than the designer of such systems. It provides sufficient background on the subject to separate machine vision promises from reality, and to permit intelligent decisions regarding machine vision applications.
The initial part of the dissertation focuses on the strategic issues affecting the selection of machine vision at the planning stage, such as the factors that justify investment, the capability of the technology and the types of problem associated with this relatively new but complex science.
Though it is widely accepted that no two industrial machine vision systems are identical, the basic fundamentals that underpin the structure and application of the technology are presented.
This work gives a structured description of typical hardware components, such as camera technology and lighting systems, which form an integral part of an industrial system, together with a discussion of the criteria for their selection. To complement this, a further section is devoted to the bewildering array of vision software analysis techniques currently available. The various techniques that are applied to images in order to make use of, and understand, the data contained within them are described and explored in detail.
Applications for machine vision fall into two main categories, namely robotic guidance and inspection, and within each category there are many further sub-groups. In this context, the latter part of the thesis presents a structured description of several industrial case studies drawn from the automotive industry, which illustrate that machine vision is capable of providing real-time solutions to manufacturing problems.
In conclusion, beyond the limited availability of industrially based machine vision systems, successful implementation is not always guaranteed, as the technology imposes technical limitations and introduces new human engineering considerations.
By examining the application and the implications of the technical requirements on both the "staging" and the image-processing power required of a machine vision system, the thesis has shown that the most significant elements of a successful application are indeed the lighting, optics, component design and so on: the "staging". In the case studies investigated, optimised staging reduced the computing power needed in the machine vision system; demanding greater computing power not only takes more time but is generally more expensive.
The experience gained from this project has demonstrated that machine vision technology is a realistic alternative means of capturing data in real time, since, within its current limitations, the technology is well suited to the delivery of the quality function within the manufacturing process.
Design and Assessment of a Machine Vision System for Automatic Vehicle Wheel Alignment
Wheel alignment, consisting of properly checking the wheel characteristic angles against vehicle manufacturers' specifications, is a crucial task in the automotive field since it prevents irregular tyre wear and affects vehicle handling and safety. In recent years, systems based on machine vision have been widely studied in order to automatically detect wheels' characteristic angles. In order to overcome the limitations of existing methodologies, due to measurement equipment being mounted onto the wheels, the present work deals with the design and assessment of a 3D machine vision-based system for the contactless reconstruction of vehicle wheel geometry, with particular reference to characteristic planes. Such planes, properly referenced to a global coordinate system, are used for determining the wheel angles. The effectiveness of the proposed method was tested against a set of measurements carried out using a commercial 3D scanner; the absolute average error in measuring toe and camber angles with the machine vision system was fully compatible with the expected accuracy of wheel alignment systems.
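Once a wheel plane has been reconstructed and referenced to a global frame, the characteristic angles follow from simple geometry. A minimal sketch of that step, assuming a vehicle frame with x forward, y lateral (out of the wheel face) and z up; the function name and frame convention are illustrative, not taken from the paper:

```python
import math

def wheel_angles(normal):
    """Toe and camber (in degrees) from a wheel-plane normal vector.

    Assumes a vehicle frame with x forward, y lateral (pointing out of
    the wheel face) and z up; `normal` is the vector perpendicular to
    the reconstructed wheel plane. These conventions are illustrative.
    """
    nx, ny, nz = normal
    toe = math.degrees(math.atan2(nx, ny))     # rotation about the vertical axis
    camber = math.degrees(math.atan2(nz, ny))  # tilt about the longitudinal axis
    return toe, camber
```

A perfectly aligned wheel (normal along the lateral axis) yields zero toe and zero camber; tilting the normal forward or upward produces the corresponding angle directly.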
Knowledge and tasks representation for an industrial robotic application
The paper presents an implementation of knowledge representation and task representation, based on ontologies, for an industrial robotic application. The industrial application is to insert up to 56 small pins, e.g., sealants, in a harness box terminal for the automotive industry. The number of sealants and their insertion pattern vary significantly with the production requests. Based on the knowledge representation of the robot and on the tasks to be performed, plans are built and then sent to the robot controller according to the seal pattern production order. Moreover, the robotic system is capable of re-planning when an insertion error is reported by a machine vision system. The ontology-based approach was used to define the robot, the machine vision system, and the tasks to be performed by the robotic system. The robotic system was validated experimentally by showing its capability to correct seal insertion errors through re-planning.
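The plan/re-plan cycle described above can be sketched in a few lines: build insertion tasks from a seal pattern, then re-plan only the slots the vision system flags as faulty. The task tuples, slot numbering and function names here are hypothetical, not the paper's ontology-based representation:

```python
def plan(pattern):
    """Build an ordered list of insertion tasks from a seal pattern.

    `pattern` maps slot index -> seal type; the slot numbering and the
    ("insert", slot, seal) task layout are illustrative assumptions.
    """
    return [("insert", slot, seal) for slot, seal in sorted(pattern.items())]

def replan(pattern, failed_slots):
    """Re-plan only the slots the vision system reported as faulty."""
    retry = {slot: pattern[slot] for slot in failed_slots}
    return plan(retry)
```

For example, if the vision check reports that slot 2 of a three-seal pattern failed, `replan` emits a plan containing only the corrective insertion for slot 2, leaving the successful insertions untouched.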
End-to-End Multiview Gesture Recognition for Autonomous Car Parking System
The use of hand gestures can be the most intuitive human-machine interaction medium.
Early approaches to hand gesture recognition were device-based: they used mechanical
or optical sensors attached to a glove or markers, hindering natural human-machine
communication. Vision-based methods, on the other hand, are not restrictive and allow
more spontaneous communication without an intermediary between human and machine.
Vision-based gesture recognition has therefore been a popular area of research for
the past thirty years.
Hand gesture recognition finds its application in many areas, particularly the automotive
industry where advanced automotive human-machine interface (HMI) designers are
using gesture recognition to improve driver and vehicle safety. However, technology advances
go beyond active/passive safety and into convenience and comfort. In this context,
one of America’s big three automakers has partnered with the Centre of Pattern Analysis
and Machine Intelligence (CPAMI) at the University of Waterloo to investigate expanding
their product segment through machine learning to provide increased driver convenience
and comfort with the particular application of hand gesture recognition for autonomous
car parking.
In this thesis, we leverage state-of-the-art deep learning and optimization techniques
to develop a vision-based multiview dynamic hand gesture recognizer for a self-parking
system. We propose a 3DCNN gesture model architecture that we train on a publicly
available hand gesture database. We apply transfer learning to fine-tune the
pre-trained gesture model on a custom-made dataset, which significantly improves the
proposed system's performance in real-world environments. We adapt the architecture of
the end-to-end solution to expand the state-of-the-art video classifier from a
single-view input (fed by a monocular camera) to a multiview 360-degree feed provided
by a six-camera module. Finally, we optimize the proposed solution to run on a
resource-limited embedded platform (Nvidia Jetson TX2) used by automakers for
vehicle-based features, without sacrificing the accuracy, robustness or real-time
operation of the system.
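One generic way to combine scores from the six camera views is late fusion: average the per-view class scores and take the argmax. This is a sketch of that idea only, under the assumption of score-level fusion; it is not the exact end-to-end architecture described in the thesis:

```python
def fuse_multiview(view_scores):
    """Late fusion of per-view gesture class scores by averaging.

    `view_scores` is a list of per-camera score vectors, one score per
    gesture class. The fused prediction is the argmax of the mean
    score vector. This is a generic late-fusion sketch, not the
    thesis's architecture.
    """
    n_views = len(view_scores)
    n_classes = len(view_scores[0])
    mean = [sum(v[c] for v in view_scores) / n_views for c in range(n_classes)]
    return max(range(n_classes), key=mean.__getitem__)
```

Averaging before the argmax lets a confident majority of views outvote a single occluded or ambiguous camera, which is one reason score-level fusion is a common baseline for multi-camera recognizers.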
Deep Learning in the Automotive Industry: Applications and Tools
Deep learning refers to a set of machine learning techniques that utilize
neural networks with many hidden layers for tasks such as image
classification, speech recognition and language understanding. Deep learning
has proven very effective in these domains and is pervasively used by many
Internet services. In this paper, we describe different automotive use cases
for deep learning, in particular in the domain of computer vision. We survey
the current state of the art in libraries, tools and infrastructures
(e.g., GPUs and clouds) for implementing, training and deploying deep neural
networks. We focus particularly on convolutional neural networks and computer
vision use cases, such as the visual inspection process in manufacturing
plants and the analysis of social media data. To train neural networks,
curated and labeled datasets are essential, yet both the availability and
scope of such datasets are typically very limited. A main contribution of
this paper is the creation of an automotive dataset that allows us to learn
and automatically recognize different vehicle properties. We describe an
end-to-end deep learning application utilizing a mobile app for data
collection and process support, and an Amazon-based cloud backend for storage
and training. For training, we evaluate the use of cloud and on-premises
infrastructures (including multiple GPUs) in conjunction with different
neural network architectures and frameworks. We assess both the training
times and the accuracy of the classifier. Finally, we demonstrate the
effectiveness of the trained classifier in a real-world setting during
manufacturing.
A high speed Tri-Vision system for automotive applications
Purpose: Cameras are excellent ways of non-invasively monitoring the interior and exterior of vehicles. In particular, high speed stereovision and multivision systems are important for transport applications such as driver eye tracking or collision avoidance. This paper addresses the synchronisation problem which arises when multivision camera systems are used to capture the high speed motion common in such applications.
Methods: An experimental, high-speed tri-vision camera system intended for real-time driver eye-blink and saccade measurement was designed, developed, implemented and tested. It uses prototype, ultra-high dynamic range, automotive-grade image sensors specifically developed by E2V (formerly Atmel) Grenoble SA as part of the European FP6 project SENSATION (advanced sensor development for attention, stress, vigilance and sleep/wakefulness monitoring).
Results: The developed system can sustain frame rates of 59.8 Hz at the full stereovision resolution of 1280 × 480, but this can reach 750 Hz when a 10 kpixel Region of Interest (ROI) is used, with a maximum global shutter speed of 1/48000 s and a shutter efficiency of 99.7%. The data can be reliably transmitted uncompressed over 5 metres of standard copper Camera-Link® cable. The synchronisation error between the left and right stereo images is less than 100 ps, verified both electrically and optically. Synchronisation is automatically established at boot-up and maintained during resolution changes. A third camera in the set can be configured independently. The dynamic range of the 10-bit sensors exceeds 123 dB, with a spectral sensitivity extending well into the infra-red range.
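The quoted resolution, frame rate and bit depth make it easy to sanity-check the raw bandwidth the Camera-Link® cables must carry. A small sketch, assuming (as an interpretation, not a fact from the paper) that the 1280 × 480 figure is the combined stereo frame and that pixels are transmitted at the full 10-bit depth:

```python
def raw_data_rate_mbps(width, height, streams, fps, bits_per_px):
    """Uncompressed video data rate in Mbit/s for the given geometry.

    All parameters are plain numbers; `streams` counts how many
    independent image streams share the figure (assumption: 1 if the
    quoted resolution already covers the stereo pair).
    """
    return width * height * streams * fps * bits_per_px / 1e6
```

Under these assumptions, a 1280 × 480 stream at 59.8 Hz and 10 bits per pixel amounts to roughly 367 Mbit/s of raw data, a few hundred Mbit/s that uncompressed Camera-Link transmission is designed to handle.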
Conclusion: The system was subjected to a comprehensive testing protocol, which confirms that the salient requirements for the driver monitoring application are adequately met and, in some respects, exceeded. The synchronisation technique presented may also benefit several other automotive stereovision applications, including near- and far-field obstacle detection and collision avoidance, road condition monitoring and others. Partially funded by the EU FP6 through the IST-507231 SENSATION project.