A STUDY OF MACHINE VISION IN THE AUTOMOTIVE INDUSTRY
With the growth of industrial automation, it has become increasingly important to validate the quality of every manufactured part during production. Until now, human visual inspection aided by hard tooling or machines has been the primary means to this end, but the speed of today's production lines, the complexity of production equipment and the stringent quality standards to which parts must adhere frequently make the traditional methods of industrial inspection and control impractical, if not impossible.
Consequently, new solutions have been developed for the monitoring and control of industrial processes in real time. One such technology is machine vision. After many years of research and development, computerised vision systems are now leaving the laboratory and being used successfully in the factory environment. As a sensing technique they are both robust and competitively priced, and they have opened up a whole new sector for automation.
Machine vision systems are becoming an integral part of the automotive manufacturing process, with applications ranging from inspection, classification, robot guidance and assembly verification through to process monitoring and control. Although the number of systems in current use is still relatively small, there can be no doubt, given the issues at stake, that the automotive industry will once again lead the way with the implementation of machine vision, just as it has done with robotic technology.
The thesis considers machine vision and, in particular, its deployment within the automotive industry. The work is presented for the prospective end-user rather than the designer of such systems. It provides sufficient background on the subject to separate machine vision promises from reality and to permit intelligent decisions regarding machine vision applications.
The initial part of the dissertation focuses on the strategic issues affecting the selection of machine vision at the planning stage, such as the factors that justify investment, the capability of the technology, and the types of problem associated with this relatively new but complex science.
Although it is widely accepted that no two industrial machine vision systems are identical, the fundamentals that underpin the structure of the technology and its application are presented.
This work provides a structured description of typical hardware components, such as camera technology and lighting systems, which form an integral part of an industrial system, together with a discussion of the criteria for their selection. To complement this work, a further section is devoted to the bewildering array of vision software analysis techniques currently available. The various techniques applied to images in order to make use of and understand the data contained within them are described and explored in detail.
Applications for machine vision fall into two main categories, namely robotic guidance and inspection; within each category there are many further sub-groups. In this context, the latter part of the thesis presents a structured description of several industrial case studies drawn from the automotive industry, which illustrate that machine vision is capable of providing real-time solutions to manufacturing-based problems.
In conclusion, despite the growing availability of industrially based machine vision systems, the success of implementation is not always guaranteed, as the technology both imposes technical limitations and introduces new human-engineering considerations.
Success depends on understanding the application and the implications of the technical requirements for both the "staging" and the image-processing power required of the machine vision system. The thesis has shown that the most significant elements of a successful application are indeed the lighting, optics, component design and so on: the "staging". In the case studies investigated, optimised staging reduced the computing power needed in the machine vision system; greater computing power not only requires more processing time but is generally more expensive.
The experience gained from this project has demonstrated that machine vision technology is a realistic alternative means of capturing data in real time, and that the current limitations of the technology are well suited to the delivery of the quality function within the manufacturing process.
Fruit Detection and Tree Segmentation for Yield Mapping in Orchards
Accurate information gathering and processing is critical for precision horticulture, as growers aim to optimise their farm management practices. An accurate inventory of the crop that details its spatial distribution along with health and maturity can help farmers efficiently target processes such as chemical and fertiliser spraying, crop thinning, harvest management, labour planning and marketing. Growers have traditionally obtained this information by using manual sampling techniques, which tend to be labour intensive, spatially sparse, expensive, inaccurate and prone to subjective biases. Recent advances in sensing and automation for field robotics allow for key measurements to be made for individual plants throughout an orchard in a timely and accurate manner. Farmer-operated machines or unmanned robotic platforms can be equipped with a range of sensors to capture a detailed representation over large areas. Robust and accurate data processing techniques are therefore required to extract high-level information needed by the grower to support precision farming. This thesis focuses on yield mapping in orchards using image and light detection and ranging (LiDAR) data captured using an unmanned ground vehicle (UGV). The contribution is the framework and algorithmic components for orchard mapping and yield estimation that are applicable to different fruit types and orchard configurations. The framework includes detection of fruits in individual images and tracking them over subsequent frames. The fruit counts are then associated to individual trees, which are segmented from image and LiDAR data, resulting in a structured spatial representation of yield. The first contribution of this thesis is the development of a generic and robust fruit detection algorithm. Images captured in the outdoor environment are susceptible to highly variable external factors that lead to significant appearance variations.
Specifically in orchards, variability is caused by changes in illumination, target pose, tree types, etc. The proposed techniques address these issues by using state-of-the-art feature learning approaches for image classification, while investigating the utility of orchard domain knowledge for fruit detection. Detection is performed using both pixel-wise classification of images followed by instance segmentation, and bounding-box regression approaches. The experimental results illustrate the versatility of complex deep learning approaches over a multitude of fruit types. The second contribution of this thesis is a tree segmentation approach to detect the individual trees that serve as a standard unit for structured orchard information systems. The work focuses on trellised trees, which present unique challenges for segmentation algorithms due to their intertwined nature. LiDAR data are used to segment the trellis face, and to generate proposals for individual tree trunks. Additional trunk proposals are provided using pixel-wise classification of the image data. The multi-modal observations are fine-tuned by modelling trunk locations using a hidden semi-Markov model (HSMM), within which prior knowledge of tree spacing is incorporated. The final component of this thesis addresses the visual occlusion of fruit within geometrically complex canopies by using a multi-view detection and tracking approach. Single-image fruit detections are tracked over a sequence of images, and associated to individual trees or farm rows, with the spatial distribution of the fruit counts forming a yield map over the farm. The results show the advantage of using multi-view imagery (instead of single-view analysis) for fruit counting and yield mapping. This thesis includes extensive experimentation in almond, apple and mango orchards, with data captured by a UGV spanning a total of 5 hectares of farm area, over 30 km of vehicle traversal and more than 7,000 trees.
The validation of the different processes is performed using manual annotations, which include fruit and tree locations in image and LiDAR data respectively. Additional evaluation of yield mapping is performed by comparison against fruit counts on trees at the farm and counts made by the growers post-harvest. The framework developed in this thesis is demonstrated to be accurate compared to ground truth at all scales of the pipeline, including fruit detection and tree mapping, leading to accurate yield estimation, per tree and per row, for the different crops. Through the multitude of field experiments conducted over multiple seasons and years, the thesis presents key practical insights necessary for commercial development of an information gathering system in orchards.
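The association step above (fruit tracks assigned to segmented trees) can be illustrated with a toy nearest-trunk rule. This is a deliberate simplification of the pipeline described here, which refines trunk locations with an HSMM; the function name and the 1-D along-row coordinates are assumptions:

```python
import numpy as np

def counts_per_tree(fruit_positions, trunk_positions):
    """Toy yield-mapping association: assign each tracked fruit
    (along-row position, metres) to its nearest segmented trunk and
    return a per-tree fruit count."""
    trunks = np.asarray(trunk_positions, dtype=float)
    fruits = np.asarray(fruit_positions, dtype=float)
    # distance from every fruit to every trunk; pick the nearest trunk
    nearest = np.abs(fruits[:, None] - trunks[None, :]).argmin(axis=1)
    return np.bincount(nearest, minlength=len(trunks))
```

For example, with trunks at 0 m, 3 m and 6 m and fruit tracked at 0.1, 2.9, 3.2, 5.8 and 6.1 m, the per-tree counts are [1, 2, 2].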
Person re-Identification over distributed spaces and time
Replicating the human visual system and the cognitive abilities that the brain uses to process the
information it receives is an area of substantial scientific interest. With the prevalence of video
surveillance cameras a portion of this scientific drive has been into providing useful automated
counterparts to human operators. A prominent task in visual surveillance is that of matching
people between disjoint camera views, or re-identification. This allows operators to locate people
of interest, to track people across cameras and can be used as a precursory step to multi-camera
activity analysis. However, due to the contrasting conditions between camera views and their
effects on the appearance of people, re-identification is a non-trivial task. This thesis proposes
solutions for reducing the visual ambiguity in observations of people between camera views.
This thesis first looks at a method for mitigating the effects of differing lighting conditions
between camera views on the appearance of people. It builds on work modelling
inter-camera illumination based on known pairs of images. A Cumulative Brightness Transfer
Function (CBTF) is proposed to estimate the mapping of colour brightness values based on limited
training samples. Unlike previous methods that use a mean-based representation for a set of
training samples, the cumulative nature of the CBTF retains colour information from underrepresented
samples in the training set. Additionally, the bi-directionality of the mapping function
is explored to maximise re-identification accuracy by ensuring samples are accurately
mapped between cameras.
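The core cumulative-histogram idea can be sketched as follows. This is a minimal illustration of histogram-style brightness mapping, not the exact CBTF formulation of the thesis; the function name and interface are assumptions:

```python
import numpy as np

def cbtf(src, dst, levels=256):
    """Map brightness values from a source camera view to a target
    view by matching cumulative brightness histograms, so that
    under-represented values still contribute to the mapping."""
    h_src = np.bincount(np.asarray(src), minlength=levels).astype(float)
    h_dst = np.bincount(np.asarray(dst), minlength=levels).astype(float)
    c_src = np.cumsum(h_src) / h_src.sum()   # cumulative frequency per level
    c_dst = np.cumsum(h_dst) / h_dst.sum()
    # for each source level, first target level with equal or greater
    # cumulative frequency (histogram-matching lookup)
    return np.searchsorted(c_dst, c_src).clip(0, levels - 1)
```

Applying the returned lookup table to pixel values from the source camera brings their brightness distribution in line with the target camera before appearance matching.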
Secondly, an extension is proposed to the CBTF framework that addresses the issue of changing
lighting conditions within a single camera. As the CBTF requires manually labelled training
samples it is limited to static lighting conditions and is less effective if the lighting changes. This
Adaptive CBTF (A-CBTF) differs from previous approaches that either do not consider lighting
change over time, or rely on camera transition time information to update. By utilising contextual
information drawn from the background in each camera view, an estimation of the lighting
change within a single camera can be made. This background lighting model allows the mapping
of colour information back to the original training conditions and thus removes the need for
retraining.
Thirdly, a novel reformulation of re-identification as a ranking problem is proposed. Previous
methods use a score based on a direct distance measure of set features to form a correct/incorrect
match result. Rather than offering an operator a single outcome, the ranking paradigm is to give
the operator a ranked list of possible matches and allow them to make the final decision. By utilising
a Support Vector Machine (SVM) ranking method, a weighting on the appearance features
can be learned that capitalises on the fact that not all image features are equally important to
re-identification. Additionally, an Ensemble-RankSVM is proposed to address scalability issues
by separating the training samples into smaller subsets and boosting the trained models.
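A minimal sketch of the ranking reformulation, using scikit-learn's linear SVM on pairwise difference vectors (the classic RankSVM construction). The feature representation, the absolute-difference similarity and all names are illustrative assumptions, not the thesis's exact setup:

```python
import numpy as np
from sklearn.svm import LinearSVC

def rank_svm_weights(queries):
    """Learn per-feature weights via pairwise RankSVM: for each probe,
    the true cross-camera match must outrank every non-match, which is
    encoded as binary classification of difference vectors."""
    X, y = [], []
    for probe, pos, negs in queries:
        d_pos = -np.abs(probe - pos)          # similarity to the true match
        for neg in negs:
            d_neg = -np.abs(probe - neg)      # similarity to a non-match
            X.append(d_pos - d_neg); y.append(1)
            X.append(d_neg - d_pos); y.append(-1)
    clf = LinearSVC(fit_intercept=False).fit(np.array(X), np.array(y))
    return clf.coef_.ravel()                  # per-feature importance
```

Gallery candidates are then ranked for the operator by the weighted similarity of each candidate to the probe; training separate models on subsets of `queries` and combining them corresponds to the Ensemble-RankSVM idea.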
Finally, the thesis looks at a practical application of the ranking paradigm in a real-world setting.
The system encompasses both the re-identification stage and the precursory extraction
and tracking stages to form an aid for CCTV operators. Segmentation and detection are combined
to extract relevant information from the video, while several matching techniques are
combined with temporal priors to form a more comprehensive overall matching criterion.
The effectiveness of the proposed approaches is tested on datasets obtained from a variety
of challenging environments including offices, apartment buildings, airports and outdoor public
spaces.
A Study on Recent Developments and Issues with Obstacle Detection Systems for Automated Vehicles
This paper reviews current developments and discusses some critical issues with obstacle detection systems for automated vehicles. Autonomous driving is a key driver of future mobility, and obstacle detection systems play a crucial role in implementing and deploying it on our roads and city streets. The review looks at technology and existing systems for obstacle detection. Specifically, we look at the performance of LIDAR, RADAR, vision cameras, ultrasonic sensors and IR, and review their capabilities and behaviour in a number of different situations: during daytime, at night, in extreme weather conditions, in urban areas, in the presence of smooth surfaces, in situations where emergency service vehicles need to be detected and recognised, and in situations where potholes need to be observed and measured. It is suggested that combining different technologies for obstacle detection gives a more accurate representation of the driving environment. In particular, for obstacle detection in extreme weather conditions (rain, snow, fog) and in some specific situations in urban areas (shadows, reflections, potholes, insufficient illumination), the current developments, although already quite advanced, do not appear sophisticated enough to guarantee 100% precision and accuracy, and further effort is therefore needed.
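The benefit of combining sensors can be illustrated with a toy late-fusion rule that treats each sensor's output as an independent detection probability (a noisy-OR). Real systems fuse far richer data, so this is purely illustrative:

```python
def fuse_detections(sensor_confidences):
    """Noisy-OR late fusion: an obstacle is missed only if every
    sensor (e.g. LIDAR, RADAR, camera) misses it independently."""
    p_miss = 1.0
    for p in sensor_confidences:
        p_miss *= 1.0 - p        # probability this sensor misses it
    return 1.0 - p_miss          # probability at least one detects it
```

Two mediocre sensors with confidences 0.6 and 0.5 fuse to 0.8, higher than either alone, which is the intuition behind multi-sensor obstacle detection.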
Hardware and software integration and testing for the automation of bright-field microscopy for tuberculosis detection
Automated microscopy for the detection of tuberculosis (TB) in sputum smears would reduce the load on technicians, especially in countries with a high TB burden. This dissertation reports on the development and testing of an automated system built around a conventional microscope for the detection of TB in Ziehl-Neelsen (ZN) stained sputum smears. Microscope auto-focusing, image analysis and stage movement were integrated. Images were captured at 40x magnification.
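A common auto-focusing strategy for such systems scores each candidate stage position by image sharpness and moves to the maximum; a standard focus measure is the variance of the image Laplacian. The sketch below is a generic illustration, not the dissertation's specific implementation:

```python
import numpy as np

def focus_measure(img):
    """Sharpness score: variance of a discrete Laplacian. In-focus
    images have strong high-frequency content, so the score peaks at
    best focus."""
    img = np.asarray(img, dtype=float)
    lap = (-4.0 * img[1:-1, 1:-1] + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return lap.var()

def best_focus(stack):
    """Index of the sharpest image in a through-focus stack, i.e. the
    stage position an autofocus routine would return to."""
    return max(range(len(stack)), key=lambda i: focus_measure(stack[i]))
```

In practice the stage steps through the focal range, scores each captured frame, and returns to the position with the highest score before imaging the smear field.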
Pattern Recognition
Pattern recognition is a very wide research field. It involves factors as diverse as sensors, feature extraction, pattern classification, decision fusion, applications and others. The signals processed are commonly one, two or three dimensional; the processing is done in real-time or takes hours and days; some systems look for one narrow object class, while others search huge databases for entries with at least a small amount of similarity. No single person can claim expertise across the whole field, which develops rapidly, updates its paradigms and encompasses several philosophical approaches. This book reflects this diversity by presenting a selection of recent developments within the area of pattern recognition and related fields. It covers theoretical advances in classification and feature extraction as well as application-oriented work. The authors of these 25 works present and advocate recent achievements of their research related to the field of pattern recognition.
NON-INVASIVE IMAGE DENOISING AND CONTRAST ENHANCEMENT TECHNIQUES FOR RETINAL FUNDUS IMAGES
The analysis of retinal vasculature in digital fundus images is important for
diagnosing eye related diseases. However, digital colour fundus images suffer from
low and varied contrast, and are also affected by noise, requiring the use of fundus
angiogram modality. The Fundus Fluorescein Angiogram (FFA) modality gives 5 to
6 times higher contrast. However, FFA is an invasive method that requires contrast
agents to be injected, and this can lead to other physiological problems. A reported
digital image enhancement technique named RETICA that combines Retinex and ICA
(Independent Component Analysis) techniques, reduces varied contrast, and enhances
the low contrast blood vessels of model fundus images.
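As a rough illustration of the Retinex half of such a pipeline (the ICA stage is omitted), local illumination can be approximated by a blurred copy of the image and divided out in the log domain. The box-blur surround and all names here are simplifying assumptions, not the RETICA algorithm itself:

```python
import numpy as np

def retinex_enhance(img, k=15):
    """Single-scale Retinex-style enhancement: estimate illumination
    with a k-by-k box blur, subtract it in the log domain, and rescale
    the resulting reflectance estimate to the 0..1 range."""
    img = np.asarray(img, dtype=float) + 1.0          # avoid log(0)
    kernel = np.ones(k) / k
    # separable box blur as a crude surround (illumination) estimate
    blur = np.apply_along_axis(lambda r: np.convolve(r, kernel, 'same'), 1, img)
    blur = np.apply_along_axis(lambda c: np.convolve(c, kernel, 'same'), 0, blur)
    r = np.log(img) - np.log(blur + 1.0)              # reflectance estimate
    return (r - r.min()) / (np.ptp(r) + 1e-12)
```

Dividing out the slowly varying illumination in this way evens out the varied background of a fundus image so that low-contrast vessels occupy more of the output range.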
…