7,765 research outputs found

    Utilization of Robust Video Processing Techniques to Aid Efficient Object Detection and Tracking

    Get PDF
    In this research, data acquired by Unmanned Aerial Vehicles (UAVs) are used to detect and track moving objects that pose a major security threat along the United States southern border. Factors such as camera motion, poor illumination, and noise make the detection and tracking of moving objects in surveillance videos a formidable task. The main objective of this research is to provide less ambiguous image data for object detection and tracking by means of noise reduction, image enhancement, video stabilization, and illumination restoration. The improved data are then used to detect and track moving objects in surveillance videos. An optimization-based image enhancement scheme was implemented to increase edge information and facilitate object detection. Noise present in the raw video captured by the UAV was removed using a search-and-match methodology, and undesired motion induced in the video frames was eliminated using a block matching technique. Moving objects were detected and tracked using contour information resulting from an adaptive background subtraction based detection process. Our simulation results show the efficiency of these algorithms in processing noisy, unstabilized raw video sequences, which were then used to detect and track moving objects.
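
    The abstract does not give implementation details; the following is a minimal sketch, assuming OpenCV, of contour-based detection on top of an adaptive background subtractor (MOG2 is used here as a generic stand-in for the adaptive model described). The video filename and the area threshold are illustrative, not from the paper.

```python
# Minimal sketch: adaptive background subtraction + contour-based detection.
# Assumes OpenCV (cv2); MOG2 stands in for the adaptive background model
# described in the abstract, not the authors' exact method.
import cv2

cap = cv2.VideoCapture("uav_footage.mp4")       # hypothetical input file
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)              # adaptive foreground mask
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                            cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 200:            # ignore small noise blobs
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```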

    Frequency-Aware Model Predictive Control

    Full text link
    Transferring solutions found by trajectory optimization to robotic hardware remains a challenging task. When the optimization fully exploits the provided model to perform dynamic tasks, the presence of unmodeled dynamics renders the motion infeasible on the real system. Model errors can be a result of model simplifications, but also naturally arise when deploying the robot in unstructured and nondeterministic environments. Predominantly, compliant contacts and actuator dynamics lead to bandwidth limitations. While classical control methods provide tools to synthesize controllers that are robust to a class of model errors, such a notion is missing in modern trajectory optimization, which is solved in the time domain. We propose frequency-shaped cost functions to achieve robust solutions in the context of optimal control for legged robots. Through simulation and hardware experiments we show that motion plans can be made compatible with bandwidth limits set by actuators and contact dynamics. The smoothness of the model predictive solutions can be continuously tuned without compromising the feasibility of the problem. Experiments with the quadrupedal robot ANYmal, which is driven by highly compliant series elastic actuators, showed significantly improved tracking performance of the planned motion, torque, and force trajectories and enabled the machine to walk robustly on terrain with unmodeled compliance.
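
    The abstract does not spell out the cost formulation; the snippet below is a toy numerical sketch of one simple reading of a frequency-shaped input cost, in which spectral components of a planned input trajectory are penalized with a weight that grows beyond an assumed actuator bandwidth. The function name, cutoff, and weighting law are assumptions for illustration, not the paper's formulation.

```python
# Toy sketch of a frequency-shaped input cost (an assumption, not the paper's
# exact formulation): high-frequency content of the planned input trajectory
# is penalized more heavily, discouraging plans beyond actuator bandwidth.
import numpy as np

def frequency_shaped_cost(u, dt, w_cutoff):
    """u: (N,) planned input samples; dt: step [s]; w_cutoff: bandwidth [rad/s]."""
    U = np.fft.rfft(u)                               # spectrum of the input plan
    w = 2.0 * np.pi * np.fft.rfftfreq(len(u), d=dt)  # angular frequencies [rad/s]
    weight = 1.0 + (w / w_cutoff) ** 2               # grows past the cutoff
    return np.sum(weight * np.abs(U) ** 2) / len(u)

# Example: a smooth plan is cheaper than a chattering one of equal magnitude.
t = np.arange(0.0, 1.0, 0.01)
smooth = np.sin(2.0 * np.pi * 1.0 * t)
chatter = np.sin(2.0 * np.pi * 30.0 * t)
print(frequency_shaped_cost(smooth, 0.01, w_cutoff=2 * np.pi * 5))
print(frequency_shaped_cost(chatter, 0.01, w_cutoff=2 * np.pi * 5))
```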

    Viewfinder: final activity report

    Get PDF
    The VIEW-FINDER project (2006-2009) is an 'Advanced Robotics' project that seeks to apply a semi-autonomous robotic system to inspect ground safety in the event of a fire. Its primary aim is to gather data (visual and chemical) in order to assist rescue personnel. A base station combines the gathered information with information retrieved from off-site sources. The project addresses key issues related to map building and reconstruction, interfacing local command information with external sources, human-robot interfaces, and semi-autonomous robot navigation. The VIEW-FINDER system is semi-autonomous; the individual robot-sensors operate autonomously within the limits of the task assigned to them, that is, they autonomously navigate through and inspect an area. Human operators monitor their operations and send high-level task requests as well as low-level commands through the interface to any node in the system. The human interface has to ensure that the human supervisor and human interveners are provided with a reduced but relevant overview of the ground and of the robots and human rescue workers therein.

    River flow monitoring: LS-PIV technique, an image-based method to assess discharge

    Get PDF
    The measurement of river discharge within a natural or artificial channel is still one of the most challenging tasks for hydrologists and the scientific community. Although discharge is a physical quantity that can theoretically be measured with very high accuracy, since the volume of water flows in a well-defined domain, there are numerous critical issues in obtaining a reliable value. Discharge cannot be measured directly, so its value is obtained by coupling a measurement of a quantity related to the volume of flowing water with the area of a channel cross-section. Direct measurements of current velocity are traditionally made with instruments such as current meters. Although measurements with current meters are sufficiently accurate, and universally recognized standards exist for their application, they are often unusable under specific flow conditions. In flood conditions, for example, because personnel must wade into the watercourse, it is impossible to ensure adequate safety for the operators carrying out flow measurements. The critical issues arising from the use of current meters have been partially addressed thanks to technological development and the adoption of acoustic sensors. In particular, with the advent of Acoustic Doppler Current Profilers (ADCPs), flow measurements can take place without personnel having direct contact with the flow, with measurements performed from a bridge or from the banks. This made it possible to extend the available range of discharge measurements. However, the flood conditions of a watercourse also limit the technology of ADCPs: introducing the instrument into a current with high velocities and turbulence would put it at serious risk, making it vulnerable and exposed to damage, and in the most critical case it could be torn away by the turbulent current. On the other hand, for smaller discharges, both current meters and ADCPs are technologically limited because water levels are not adequate for the use of the devices. The difficulty in obtaining information on the lowest and highest values of discharge has important implications for how the relationships linking flows to water levels are defined. The stage-discharge relationship is one of the tools through which it is possible to monitor the flow in a specific section of a watercourse. Through this curve, a discharge value can be obtained from knowledge of the water stage. Curves are site-specific and must be continuously updated to account for changes in geometry that the sections for which they are defined may experience over time. They are determined by making simultaneous discharge and stage measurements. Since instruments such as current meters and ADCPs are traditionally used, stage-discharge curves suffer from the same instrumental limitations; rating curves are therefore usually obtained by interpolating field-measured data and extrapolating them for the highest and lowest discharge values, with a consequent reduction in accuracy. This thesis aims to identify a valid alternative to traditional flow measurements and to show the advantages of using new monitoring methods to support traditional techniques, or to replace them. Optical techniques represent the best solution for overcoming the difficulties arising from the adoption of a traditional approach to flow measurement.
Among these, the most widely used techniques are Large-Scale Particle Image Velocimetry (LS-PIV) and Large-Scale Particle Tracking Velocimetry. They estimate surface velocity fields by processing images of a moving tracer suitably dispersed on the liquid surface. By coupling velocity data obtained from optical techniques with the geometry of a cross-section, a discharge value can easily be calculated. In this thesis, the LS-PIV technique was studied in depth, analysing its performance and the physical and environmental parameters and factors on which the optical results depend. As the LS-PIV technique is relatively new, there are no recognized standards for its proper application. A preliminary numerical analysis was conducted to identify the factors on which the technique significantly depends. The results of these analyses enabled the development of specific guidelines through which the LS-PIV technique could subsequently be applied in the field during flow measurement campaigns in Sicily. In this way it was possible to observe experimentally the critical issues involved in applying the technique to real cases. These measurement campaigns provided the opportunity to carry out analyses on field case studies and to structure an automatic procedure for optimising the LS-PIV technique. In all case studies, turbulence was observed to worsen the output of the LS-PIV technique. A final numerical analysis was therefore performed to understand the influence of turbulence on the performance of the technique. The results obtained represent an important step for future development of the topic.
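
    As a rough illustration of the core LS-PIV step described here, the sketch below, assuming two consecutive grayscale frames given as NumPy arrays, estimates the displacement of a single interrogation window via FFT-based cross-correlation and converts it to a surface velocity; discharge then follows from a mean velocity and the wetted cross-sectional area. Window size, time step, pixel scale, and the surface-to-mean velocity coefficient are illustrative values, not those used in the thesis.

```python
# Minimal LS-PIV sketch (illustrative, not the thesis implementation):
# displacement of one interrogation window between two frames via
# FFT cross-correlation, converted to a surface velocity.
import numpy as np

def window_velocity(frame_a, frame_b, y, x, win=64, dt=0.04, m_per_px=0.01):
    """Velocity (vy, vx) in m/s of the window with top-left corner (y, x)."""
    a = frame_a[y:y + win, x:x + win].astype(float)
    b = frame_b[y:y + win, x:x + win].astype(float)
    a -= a.mean()
    b -= b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    corr = np.fft.fftshift(corr)                 # zero lag at the centre
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    dy -= win // 2                               # lag in pixels
    dx -= win // 2
    return dy * m_per_px / dt, dx * m_per_px / dt

# Discharge estimate: mean surface velocity scaled by an assumed coefficient
# (e.g. 0.85, surface to depth-averaged velocity) times the wetted area.
def discharge(mean_surface_velocity, wetted_area_m2, alpha=0.85):
    return alpha * mean_surface_velocity * wetted_area_m2
```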

    A model-based design flow for embedded vision applications on heterogeneous architectures

    Get PDF
    The ability to gather information from images is straightforward for humans and is one of our principal inputs for understanding the external world. Computer vision (CV) is the process of extracting such knowledge from the visual domain in an algorithmic fashion. The computational power required to process this information is very high. Until recently, the only feasible way to meet non-functional requirements such as performance was to develop custom hardware, which is costly, time-consuming, and cannot be reused for general purposes. The recent introduction of low-power and low-cost heterogeneous embedded boards, in which CPUs are combined with accelerators such as GPUs, DSPs, and FPGAs, combines the hardware efficiency needed for non-functional requirements with the flexibility of software development. Embedded vision is the term used for the application of the aforementioned CV algorithms in the embedded field, which usually requires satisfying not only functional requirements but also non-functional requirements such as real-time performance, power, and energy efficiency. Rapid prototyping, early algorithm parametrization, testing, and validation of complex embedded video applications for such heterogeneous architectures is a very challenging task. This thesis presents a comprehensive framework that: 1) Is based on a model-based paradigm. Differently from the standard approaches at the state of the art, which require designers to manually model the algorithm in a programming language, the proposed approach allows for rapid prototyping, algorithm validation, and parametrization in a model-based design environment (i.e., Matlab/Simulink). The framework relies on a multi-level design and verification flow by which the high-level model is semi-automatically refined towards the final automatic synthesis onto the target hardware device. 2) Relies on a polyglot parallel programming model. The proposed model combines different programming languages and environments such as C/C++, OpenMP, PThreads, OpenVX, OpenCV, and CUDA to best exploit different levels of parallelism while guaranteeing semi-automatic customization. 3) Optimizes application performance and energy efficiency through a novel algorithm for mapping and scheduling the application tasks onto the heterogeneous computing elements of the device. This algorithm, called exclusive earliest finish time (XEFT), takes into consideration the possible multiple implementations of tasks for different computing elements (e.g., a task primitive for the CPU and an equivalent parallel implementation for the GPU), and it introduces and exploits the notion of exclusive overlap between primitives to improve load balancing. This thesis is the result of three years of research activity, during which all the incremental steps made to compose the framework have been tested on real case studies.
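
    The abstract names XEFT but does not describe it in full; the sketch below is a plain earliest-finish-time list scheduler over tasks that each have per-device implementations, intended only to show the kind of mapping decision XEFT refines. Task names and costs are invented, data dependencies are ignored, and the exclusive-overlap criterion is not modelled.

```python
# Simplified earliest-finish-time mapping of tasks onto heterogeneous devices.
# This illustrates the decision XEFT refines (its exclusive-overlap criterion
# is not modelled here); task names and costs are hypothetical.
tasks = [                       # (name, {device: execution_time})
    ("grayscale", {"CPU": 3.0, "GPU": 1.0}),
    ("blur",      {"CPU": 4.0, "GPU": 1.5}),
    ("fast9",     {"CPU": 2.0}),            # CPU-only primitive
    ("optflow",   {"CPU": 9.0, "GPU": 2.5}),
]

ready_at = {"CPU": 0.0, "GPU": 0.0}          # when each device becomes free
schedule = []

for name, costs in tasks:
    # Pick the device on which this task would finish earliest.
    device = min(costs, key=lambda d: ready_at[d] + costs[d])
    start = ready_at[device]
    finish = start + costs[device]
    ready_at[device] = finish
    schedule.append((name, device, start, finish))

for name, device, start, finish in schedule:
    print(f"{name:>9} -> {device}  [{start:4.1f}, {finish:4.1f}]")
```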