
    Visual and Camera Sensors

    This book includes 13 papers published in the Special Issue ("Visual and Camera Sensors") of the journal Sensors. The goal of this Special Issue was to invite high-quality, state-of-the-art research papers dealing with challenging issues in visual and camera sensors.

    Multisensor Data Fusion Strategies for Advanced Driver Assistance Systems

    Multisensor data fusion and integration is a rapidly evolving research area that requires interdisciplinary knowledge in control theory, signal processing, artificial intelligence, probability and statistics, etc. Multisensor data fusion refers to the synergistic combination of sensory data from multiple sensors and related information to provide more reliable and accurate information than could be achieved using a single, independent sensor (Luo et al., 2007). It is a multilevel, multifaceted process dealing with the automatic detection, association, correlation, estimation, and combination of data from single and multiple information sources, and its results help users make decisions in complicated scenarios. Integration of multiple sensor data was originally needed for military applications in ocean surveillance, air-to-air and surface-to-air defence, and battlefield intelligence. More recently, multisensor data fusion has also extended to the nonmilitary fields of remote environmental sensing, medical diagnosis, automated monitoring of equipment, robotics, and automotive systems (Macci et al., 2008).

    The potential advantages of multisensor fusion and integration are redundancy, complementarity, timeliness, and cost of the information. The integration or fusion of redundant information can reduce overall uncertainty and thus increase the accuracy with which features are perceived by the system. Multiple sensors providing redundant information can also increase reliability in the case of sensor error or failure. Complementary information from multiple sensors allows features in the environment to be perceived that would be impossible to perceive using the information from each individual sensor operating separately (Luo et al., 2007).

    Driving, as one of our daily activities, is a complex task involving a great amount of interaction between driver and vehicle. Drivers regularly share their attention among operating the vehicle, monitoring traffic and nearby obstacles, and performing secondary tasks such as conversing and adjusting comfort settings (e.g., temperature, radio). The complexity of the task and the uncertainty of the driving environment make driving very dangerous: according to a study in the European member states, there are more than 1,200,000 traffic accidents a year with over 40,000 fatalities. This underscores the growing demand for automotive safety systems, which aim to make a significant contribution to overall road safety (Tatschke et al., 2006). Consequently, a growing number of research activities focus on Driver Assistance System (DAS) development.
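
    The uncertainty-reduction argument can be made concrete with a minimal sketch: fusing two independent, redundant measurements of the same quantity by inverse-variance weighting yields an estimate whose variance is lower than that of either input. The function name, sensor values and noise levels below are illustrative placeholders, not figures from the cited studies.

        import numpy as np

        def fuse_redundant(estimates, variances):
            """Inverse-variance (minimum-variance) fusion of independent,
            redundant measurements of the same quantity."""
            w = 1.0 / np.asarray(variances, dtype=float)
            fused = np.sum(w * np.asarray(estimates, dtype=float)) / np.sum(w)
            fused_var = 1.0 / np.sum(w)  # always <= min(variances)
            return fused, fused_var

        # Hypothetical example: radar and camera both measure range to an obstacle.
        fused, var = fuse_redundant([25.3, 24.8], [0.9, 0.4])
        print(fused, var)  # fused variance (~0.28) is below both inputs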

    Teat detection for an automated milking system

    Application time when placing all four cups on the udder of a cow is the primary time constraint in high-capacity group milking. A human labourer can manually apply four cups per animal in less than ten seconds as it passes on a rotary carousel. Existing automated milking machines typically have an average attachment time in excess of one minute, because these systems apply the cups to each udder quadrant individually. To speed up the process it is proposed to attach all four cups simultaneously; to achieve this, the 3D position and orientation of each teat must be known in approximately real time. This thesis documents the analysis of a stereo vision system for teat location and presents further developments of the system for detection of teat orientation. Test results demonstrate the suitability of stereo vision for teat location but indicate that further refinement of the system is required to increase accuracy and precision. The additional functionality developed to determine teat orientation has also been tested: results show that while accurate determination of teat orientation is possible, issues remain with reliability and robustness.
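
    For a rectified stereo pair, the 3D position of a matched teat-tip point follows from standard triangulation: depth is inversely proportional to disparity. A minimal sketch, where the focal length, baseline and pixel coordinates are illustrative placeholders rather than values from the thesis:

        def triangulate(xl, xr, y, f, baseline):
            """Recover (X, Y, Z) in the left-camera frame from a rectified
            stereo correspondence. xl, xr, y are pixel coordinates relative to
            the principal point; f is focal length in pixels; baseline in metres."""
            d = xl - xr                 # disparity in pixels
            if d <= 0:
                raise ValueError("non-positive disparity: point at or beyond infinity")
            Z = f * baseline / d        # depth falls as disparity grows
            X = xl * Z / f
            Y = y * Z / f
            return X, Y, Z

        # Illustrative values: ~800 px focal length, 60 mm baseline.
        print(triangulate(xl=120.0, xr=30.0, y=-15.0, f=800.0, baseline=0.06))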

    Application of computer vision for roller operation management

    Compaction is the last and possibly the most important phase in the construction of asphalt concrete (AC) pavements. Compaction densifies the loose AC mat, producing a stable surface with low permeability, and the process strongly affects the AC performance properties. Too much compaction may cause aggregate degradation and low air void content, facilitating bleeding and rutting. Too little compaction, on the other hand, may result in high air void content, facilitating oxidation and water permeability issues, rutting due to further densification by traffic, and reduced fatigue life. Compaction is therefore a critical issue in AC pavement construction.

    The common practice for compacting a mat is to establish a roller pattern that determines the number of passes and coverages needed to achieve the desired density. Once the pattern is established, the roller's operator must maintain it uniformly over the entire mat. Despite the importance of uniform compaction to achieving the expected durability and performance of AC pavements, having the roller operator as the only means to manage the operation invites human error.

    With the advancement of technology in recent years, the concept of intelligent compaction (IC) was developed to assist roller operators and improve construction quality. Commercial IC packages for construction rollers are available from different manufacturers; they can provide precise mapping of a roller's location and give the operator feedback during the compaction process. Although the IC packages can track roller passes with impressive results, there are major hindrances: the high cost of acquisition and the potential negative impact on productivity have inhibited implementation of IC.

    This study applied computer vision technology to build a versatile and affordable system to count and map roller passes. An infrared camera is mounted on top of the roller to capture the operator's view. Then, in a near real-time process, image features are extracted and tracked to estimate the incremental rotation and translation of the roller. Image features are categorized into near and distant features based on a user-defined horizon. Optical flow is estimated for near features located in the region below the horizon, while the change in the roller's heading is continuously estimated from the distant features located in the sky region. Using the roller's rotation angle, the incremental translation between two frames is calculated from the optical flow, and the roller's incremental rotation and translation are combined to develop a tracking map.

    During system development, it was noted that in thermally uniform environments the background of the IR images exhibits fewer features than images captured with optical cameras, which are insensitive to temperature. This issue is more significant overnight, since natural elements can no longer reflect heat energy from the sun. Therefore, to improve the roller's heading estimation when few features are available in the sky region, a methodology was developed that detects heading from the edges of the asphalt mat. Heading measurements based on the slope of the hot asphalt edges are added to the pool of headings measured from the sky region, and the median of all heading measurements is used as the roller's incremental rotation for the tracking analysis.

    The record of tracking data is used for QC/QA purposes and for verifying proper implementation of the roller pattern throughout a job constructed under roller pass specifications. The system developed during this research was successful in mapping roller location for the few projects tested; however, the system should be independently validated.
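
    The dead-reckoning step described above can be sketched as follows, assuming the per-frame heading estimates (from sky features and mat-edge slopes) and a flow-derived forward displacement have already been extracted; the function and variable names are hypothetical, not from the thesis:

        import math
        from statistics import median

        def update_pose(pose, heading_changes_rad, forward_step_m):
            """Integrate one frame-to-frame increment into the tracking map.
            pose = (x, y, theta). heading_changes_rad pools estimates from
            sky-region features and hot mat-edge slopes; the median rejects
            outliers. forward_step_m comes from near-feature optical flow."""
            x, y, theta = pose
            theta += median(heading_changes_rad)
            x += forward_step_m * math.cos(theta)
            y += forward_step_m * math.sin(theta)
            return (x, y, theta)

        # Illustrative increments: a nearly straight pass with small heading noise.
        pose = (0.0, 0.0, 0.0)
        for dtheta_pool, step in [([0.01, -0.02, 0.0], 0.15),
                                  ([0.0, 0.005, 0.002], 0.15)]:
            pose = update_pose(pose, dtheta_pool, step)
        print(pose)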

    Detection of Motorcycles in Urban Traffic Using Video Analysis: A Review

    Motorcycles are Vulnerable Road Users (VRUs) and, as such, together with bicycles and pedestrians, they are the traffic actors most affected by accidents in urban areas. Automatic video processing for urban surveillance cameras has the potential to effectively detect and track these road users. The present review focuses on algorithms used for the detection and tracking of motorcycles using the surveillance infrastructure provided by CCTV cameras. Given the importance of the results achieved by Deep Learning in the field of computer vision, the use of such techniques for the detection and tracking of motorcycles is also reviewed. The paper ends by describing the performance measures generally used, listing publicly available datasets (introducing the Urban Motorbike Dataset (UMD) with quantitative evaluation results for different detectors), discussing the challenges ahead, and presenting a set of conclusions with proposed future work in this evolving area.
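
    The detection performance measures such reviews refer to typically rest on intersection-over-union (IoU) matching between detected and ground-truth boxes, from which precision and recall follow. A minimal sketch of the IoU computation, assuming an (x1, y1, x2, y2) box format:

        def iou(a, b):
            """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
            ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
            ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
            inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
            area_a = (a[2] - a[0]) * (a[3] - a[1])
            area_b = (b[2] - b[0]) * (b[3] - b[1])
            return inter / (area_a + area_b - inter)

        # A detection is usually counted as a true positive when IoU >= 0.5.
        print(iou((10, 10, 50, 80), (20, 15, 60, 85)))  # ~0.53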

    Proceedings of the 4th field robot event 2006, Stuttgart/Hohenheim, Germany, 23-24th June 2006

    A very comprehensive report of the 4th Field Robot Event, held on 23 and 24 June 2006 in Stuttgart/Hohenheim.

    Maize and sorghum plant detection at early growth stages using proximity laser and time-of-flight sensors

    Maize and sorghum are important cereal crops worldwide. To increase maize grain yield, two approaches are used: exploring hybrid maize in plant breeding and improving the crop management system. Plant population is a parameter for calculating the germination rate, which is an important phenotypic trait of seeds. An automated way to obtain the plant population at early growth stages can save breeders measuring time in the field and increase the efficiency of their breeding programs. As in production agriculture, plant scientists and plant breeders have been adopting precision technologies into their research programs; analyzing plant performance plot-by-plot and even plant-by-plant is becoming the norm and is vitally important to plant phenomics research and the seed industry. Accurate plant location information is needed for determining plant distribution and generating plant stand maps.

    Two automated plant population detection and location estimation systems using different sensors were developed in this research. First, a 2D machine vision technique was applied to develop a real-time automatic plant population estimation and plant stand map generation system for maize and sorghum at early growth stages. Laser sensors were chosen because they are not affected by outdoor lighting conditions, and plant detection algorithms were developed based on the unique structure of the plant stem. Since maize and sorghum look similar at early growth stages, the system was tested on both plants under greenhouse conditions. Detection rates of over 93.1% and 83.0% were achieved for maize and sorghum plants, respectively, from the V2 to V6 growth stages. The mean absolute error and root-mean-square error of plant location were 3.1 cm and 3.2 cm for maize and 2.8 cm and 2.9 cm for grain sorghum, respectively.

    Second, a 3D time-of-flight camera-based automatic system was developed for maize and sorghum plant detection at early growth stages. The images were captured with a Swift camera from a side view of the crop row, without any shade, during the daytime in a greenhouse. A series of image processing algorithms, including point cloud filtering, plant candidate extraction, invalid plant removal, and plant registration, were developed for this system. Compared with manual measurement, the average true positive detection rate was 89% for maize with a standard deviation of 0.06, and 85% for grain sorghum with a standard deviation of 0.08.
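
    The reported location errors correspond to the usual mean absolute error and root-mean-square error over matched plant positions. A minimal sketch, under the assumption that detected stems have already been matched one-to-one with ground-truth positions:

        import numpy as np

        def location_errors(detected_xy, truth_xy):
            """MAE and RMSE of Euclidean distances between matched detected
            and ground-truth plant positions (same units as the input)."""
            d = np.linalg.norm(np.asarray(detected_xy) - np.asarray(truth_xy), axis=1)
            return d.mean(), np.sqrt((d ** 2).mean())

        # Illustrative positions in centimetres, not data from the study.
        mae, rmse = location_errors([(10.2, 5.1), (55.8, 4.7)],
                                    [(10.0, 5.0), (52.0, 5.0)])
        print(mae, rmse)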

    Navigation for automatic guided vehicles using omnidirectional optical sensing

    Thesis (M. Tech. (Engineering: Electrical)) -- Central University of Technology, Free State, 2013.

    Automatic Guided Vehicles (AGVs) are being used more frequently in manufacturing environments. These AGVs are navigated in many different ways, utilising multiple types of sensors to perceive the environment: distance, obstacles, and a set route. Different algorithms or methods then apply this environmental information to navigate and control the AGV. One aim of the research was to develop a platform that could be easily reconfigured for alternative route applications utilising vision. In this research such environment sensors were replaced and/or minimised by a single omnidirectional webcam picture stream, using a custom-developed mirror and Perspex tube setup. The area of interest in each frame was extracted, saving computational resources and time. Using image processing, the vehicle was navigated along a predetermined route. Different edge detection and segmentation methods were investigated on this vision signal for route and sign navigation. Prewitt edge detection was eventually implemented, with Hough transforms used for border detection and Kalman filtering to minimise border-detection noise so the AGV stays on the navigated route. Reconfigurability was added to the route layout by incorporating coloured signs into the navigation process. The result was the operation of a number of AGVs, each on its own designated colour-signed route, which could be reconfigured by the operator with no programming alteration or intervention. The YCbCr colour space was used to detect specific control signs for alternative colour route navigation. The output was used to generate commands for controlling the AGV, sent as serial commands over a laptop's Universal Serial Bus (USB) port to a PIC microcontroller interface board driving the motors by means of pulse width modulation (PWM). A complete MATLAB® software development platform was utilised, implementing written M-files, Simulink® models, masked function blocks and .mat files for sourcing workspace variables and generating executable files. This development system lends itself to speedy evaluation and implementation of image processing options on the AGV. All the work done in the thesis was validated by simulations using actual data and by physical experimentation.
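
    Colour-sign detection in YCbCr space amounts to converting each RGB frame with the standard ITU-R BT.601 transform and thresholding the chroma channels. A minimal sketch; the threshold window for a "red" sign is an illustrative assumption, not the thesis's calibrated values:

        import numpy as np

        def rgb_to_ycbcr(rgb):
            """ITU-R BT.601 full-range RGB -> YCbCr for a float image in [0, 255]."""
            r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
            y  =  0.299 * r + 0.587 * g + 0.114 * b
            cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
            cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
            return np.stack([y, cb, cr], axis=-1)

        def red_sign_mask(rgb, cr_min=160.0, cb_max=120.0):
            """Binary mask of strongly red pixels; window values are illustrative."""
            ycc = rgb_to_ycbcr(rgb.astype(float))
            return (ycc[..., 2] > cr_min) & (ycc[..., 1] < cb_max)

        frame = np.zeros((4, 4, 3)); frame[1:3, 1:3, 0] = 220.0  # synthetic red patch
        print(red_sign_mask(frame).sum())  # 4 pixels flagged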

    Adaptive processing architecture of multisensor signals for low-impact treatments of plant diseases

    Intelligent sensing for production of high-value crops.

    Scientific and technical quality. This thesis was realized within the CROPS project. CROPS will develop scientific know-how for a highly configurable, modular and clever carrier platform that includes modular parallel manipulators and intelligent tools (sensors, algorithms, sprayers, grippers) that can be easily installed onto the carrier and are capable of adapting to new tasks and conditions. Several technological demonstrators will be developed for high-value crops such as greenhouse vegetables, fruits in orchards, and grapes for premium wines. The CROPS robotic platform will be capable of site-specific spraying (spraying targeted only at foliage and selected targets) and selective harvesting of fruit (detecting the fruit, determining its ripeness, moving towards the fruit, grasping it and softly detaching it). Another objective of CROPS is to develop techniques for reliable detection and classification of obstacles and other objects to enable successful autonomous navigation and operation in plantations and forests. The agricultural and forestry applications share many research areas, primarily regarding sensing and learning capabilities. The project started in October 2010 and will run for 48 months. The aim of this thesis is to lay the foundations of, and suggest the guidelines for, one task addressed by the CROPS project: studying the application of a VIS-NIR imaging approach (intelligent sensing), based on a relatively simple algorithm, to detect symptoms of powdery mildew and downy mildew at early stages of infection (sustainable production of high-value crops). Preliminary work on botrytis detection is also shown.

    Concept and objectives. Many site-specific agricultural and forestry tasks, such as cultivating, transplanting, spraying, trimming, selective harvesting, and transportation, could be performed more efficiently if carried out by robotic systems. To date, however, agriculture and forestry robots are still not available, partly due to the complex, and often contradictory, demands of developing such systems. On the one hand, agro-forestry robots must be of reasonable cost; on the other, they must be able to deal with complex, dynamic, and partly changing tasks. Addressing problems such as continuously changing conditions (e.g., rain and illumination), high variability in both the products (size and shape) and the environment (location and soil properties), the delicate nature of the products, and hostile environmental conditions (e.g., dust, dirt, extreme temperature and humidity) requires advanced sensing, manipulation, and control. Since it is impossible to model a priori all environments and task conditions, the robot must be able to learn new tasks and new working conditions. The solution to these demands lies in a modular and configurable design that keeps costs to a minimum by applying a basic configuration to a range of agricultural applications. At least a 95% yield rate is necessary for the economic feasibility of an agro-forestry robotic system.

    Objectives. An objective of the CROPS project is to develop "intelligent tools" (sensors, algorithms, sprayers) that can easily be installed onto a modular and clever carrier platform. The CROPS robotic platform will be capable of site-specific spraying (targeted spraying only on foliage and selected targets).

    Research efforts. To achieve the novel systems described above, we will focus on intelligent sensing for disease detection on the crop canopy (investigating different types and/or multiple sensors with decision-making models).

    Technology evaluation. Technology evaluation of the developed systems will include performance evaluation of the different components (e.g., capacities, success rates/misses).

    Progress beyond the state of the art. Despite the extensive research conducted to date in applying robots to a variety of agriculture and forestry tasks (e.g., transplanting, spraying, trimming, selective harvesting), limited operating efficiencies (speeds, success rates) and a lack of economic justification have severely limited commercialization. The few commercial autonomous agriculture and forestry robots available on the market include a cow milking robot, a robot for cutting roses (RomboMatic), and various remote-controlled forest harvesters. These robots either have a low level of autonomy or are able to perform only simple operations in structured and static environments (e.g., dairy farms and plant breeding facilities). Developing capabilities for robots operating in unstructured outdoor environments or dealing with the highly variable objects that exist in agriculture and forestry is still an open problem, and one of the CROPS aims is to address it.

    Current state of the art. Field trials have routinely shown that most crop damage due to diseases and pests can be efficiently controlled when treatments are applied in a timely and accurate manner by hand to susceptible targets (i.e., by intelligent spraying). Site-specific spraying targeted solely at trees and/or infected areas can reduce pesticide use by 20-40%. An issue of relevance to targeted agriculture is the detection of diseases in field crops. Since such events often have a visual manifestation, state-of-the-art methods for achieving this goal include fluorescence imaging and the analysis of spectral reflectance in carefully selected spectral bands. While these methods used separately have achieved 75-90% accuracy, attempts to combine them have boosted disease discrimination accuracy to 95%. It must be noted, however, that despite these promising results, very little research has been conducted on in-field disease detection.

    Expected progress. The disease detection approach for precision pesticide spraying will be developed by investigating image processing techniques (after laboratory spectral evaluation and greenhouse testing) for high-precision, close-range targeted spraying to selectively and precisely apply chemicals solely to targets susceptible to specific diseases/pests, with a mean 90% success rate. Local changes in the spectral reflectance of parts of the canopy will be used as an indication of disease. A "soft sensor" for detection of ripeness and diseases (a non-contact rapid sensing system) will be developed using a multispectral camera; these "soft sensors" can be used in a decision model for targeted spraying.
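
    The underlying detection principle, local changes in canopy spectral reflectance, can be sketched as a per-pixel band ratio compared against a healthy-canopy baseline. The band choice (NIR/red) and the thresholds below are illustrative assumptions, not the calibrated values from the thesis:

        import numpy as np

        def disease_mask(nir, red, healthy_ratio=4.0, drop=0.35):
            """Flag canopy pixels whose NIR/red reflectance ratio falls well
            below a healthy baseline; both thresholds are illustrative."""
            ratio = nir / np.maximum(red, 1e-6)   # guard against division by zero
            return ratio < healthy_ratio * (1.0 - drop)

        # Illustrative reflectances: one healthy pixel, one symptomatic pixel.
        nir = np.array([0.48, 0.30]); red = np.array([0.10, 0.16])
        print(disease_mask(nir, red))  # [False, True]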