278 research outputs found
Object Detection in Omnidirectional Images
Nowadays, computer vision (CV) is widely used to solve real-world problems that pose increasingly difficult challenges. In this context, the use of omnidirectional video in a growing number of applications, along with the fast development of Deep Learning (DL) algorithms for object detection, drives the need for further research to improve existing methods originally developed for conventional 2D planar images. However, the geometric distortion introduced by common sphere-to-plane projections, most visible in objects near the poles, together with the lack of open-source labelled omnidirectional image datasets, has made accurate spherical image-based object detection a hard goal to achieve.
This work contributes to the development of datasets and machine learning models particularly suited to omnidirectional images represented in planar format through the well-known Equirectangular Projection (ERP). To this end, DL methods are explored to improve the detection of visual objects in omnidirectional images by accounting for the inherent distortions of ERP. An experimental study was first carried out to determine whether the error rate and the type of detection errors were related to the characteristics of ERP images. This study revealed that the error rate of object detection using existing DL models on ERP images does, in fact, depend on the object's spherical location in the image.
Based on these findings, a new object detection framework is then proposed to obtain a uniform error rate across all regions of the spherical image. The results show that the pre- and post-processing stages of the implemented framework effectively reduce the dependency of detection performance on the image region, as evaluated by the above-mentioned metric.
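As a minimal illustration of the ERP distortion discussed above (not code from the thesis itself; the function names are hypothetical), the sketch below maps an ERP pixel to spherical coordinates and computes the approximate horizontal stretch factor, 1/cos(latitude), which grows without bound toward the poles and explains why detection near the poles is harder:

```python
import math

def erp_pixel_to_sphere(u, v, width, height):
    """Map an equirectangular (ERP) pixel to spherical coordinates.

    Longitude spans [-pi, pi) across the image width; latitude spans
    [pi/2, -pi/2] from the top row to the bottom row.
    """
    lon = (u / width) * 2.0 * math.pi - math.pi
    lat = math.pi / 2.0 - (v / height) * math.pi
    return lat, lon

def horizontal_stretch(lat):
    """Approximate horizontal stretch of ERP at a given latitude.

    A small circle on the sphere is widened by ~1/cos(lat) in the
    projection, so objects near the poles appear strongly distorted.
    """
    return 1.0 / math.cos(lat)

# At the equator the stretch is 1; at 60 degrees latitude it doubles.
print(horizontal_stretch(0.0))               # 1.0
print(horizontal_stretch(math.radians(60)))  # ~2.0
```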
Vision Sensors and Edge Detection
This book reflects a selection of recent developments in the area of vision sensors and edge detection. It is organised in two sections. The first presents vision sensors, with applications to panoramic vision sensors, wireless vision sensors, and automated vision sensor inspection; the second covers image processing techniques such as image measurement, image transformations, filtering, and parallel computing.
Holoscopic 3D image depth estimation and segmentation techniques
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London.
Today's 3D imaging techniques offer significant benefits over conventional 2D imaging techniques. The presence of natural depth information in the scene affords the observer an overall improved sense of reality and naturalness. A variety of systems attempting to reach this goal have been designed by many independent research groups, such as stereoscopic and auto-stereoscopic systems. However, the images displayed by such systems tend to cause eye strain, fatigue, and headaches after prolonged viewing, because users are required to focus on the screen plane (accommodation) while converging their eyes to a point in space in a different plane (convergence). Holoscopy is a 3D technology that aims to overcome these limitations of current 3D technology and was recently developed at Brunel University. This work is part W4.1 of the 3D VIVANT project, which is funded by the EU under the ICT programme and coordinated by Dr. Aman Aggoun at Brunel University, West London, UK. The objective of the work described in this thesis is to develop estimation and segmentation techniques that are capable of estimating precise 3D depth and are applicable to the holoscopic 3D imaging system. Particular emphasis is given to automatic techniques, i.e. the work favours algorithms with broad generalisation abilities, since no constraints are placed on the setting: algorithms should be invariant to most appearance-based variation of objects in the scene (e.g. viewpoint changes, deformable objects, presence of noise, and changes in lighting). Moreover, they should be able to estimate depth information from both types of holoscopic 3D images, i.e. unidirectional and omnidirectional, which provide horizontal parallax and full parallax (vertical and horizontal), respectively. The main aim of this research is to develop 3D depth estimation and 3D image segmentation techniques with great precision.
In particular, emphasis is placed on automating thresholding techniques and identifying cues for the development of robust algorithms. A depth-through-disparity feature analysis method has been built on the correlation that exists between pixels one micro-lens pitch apart, which is exploited to extract the viewpoint images (VPIs). The corresponding displacement among the VPIs is then exploited to estimate the depth map by setting and extracting reliable sets of local features. Feature-based-point and feature-based-edge are two novel automatic thresholding techniques for detecting and extracting the features used in this approach. These techniques offer a solution to the problem of setting and extracting reliable features automatically, improving the generalisation, speed, and quality of the depth estimation. Due to the limited resolution of the extracted VPIs, obtaining an accurate 3D depth map is challenging. Therefore, sub-pixel shift and integration, a novel interpolation technique, is used in this approach to generate super-resolution VPIs. By shifting and integrating a set of up-sampled low-resolution VPIs, the new information contained in each viewpoint is exploited to obtain a super-resolution VPI. This produces a high-resolution perspective VPI with a wide Field Of View (FOV), meaning that the holoscopic 3D image system can be converted into a multi-view 3D image pixel format. Both depth accuracy and a fast execution time have been achieved, improving the 3D depth map. For a 3D object to be recognised, the related foreground regions and depth map need to be identified. Two novel unsupervised segmentation methods that generate interactive depth maps from single-viewpoint segmentation were developed.
Both techniques improve over existing methods in that they are simple to use and fully automatic, producing the 3D interactive depth map without human interaction. The final contribution is a performance evaluation providing an equitable measure of the success of the proposed techniques for foreground object segmentation, 3D interactive depth map creation, and the generation of 2D super-resolution viewpoints. No-reference image quality assessment metrics, and their correlation with human perception of quality, are used subjectively with the help of human participants.
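As background to the depth-through-disparity analysis described above (a simplification for illustration, not the thesis's feature-based pipeline), the classic relation between disparity and depth, Z = f * B / d, can be sketched as follows, treating the spacing between viewpoints (a multiple of the micro-lens pitch) as the baseline:

```python
def depth_from_disparity(disparity_px, baseline, focal_px):
    """Classic disparity-to-depth relation Z = f * B / d.

    disparity_px : pixel displacement of a feature between two
                   adjacent viewpoint images (VPIs)
    baseline     : spacing between the two viewpoints, in scene units
    focal_px     : focal length expressed in pixels

    Depth comes out in the same units as the baseline and is
    inversely proportional to the observed disparity.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive and non-zero")
    return focal_px * baseline / disparity_px

# A feature shifting by 4 px between VPIs 1 mm apart, with f = 800 px:
print(depth_from_disparity(4.0, 1.0, 800.0))  # 200.0 (mm)
```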
Navigation for automatic guided vehicles using omnidirectional optical sensing
Thesis (M. Tech. (Engineering: Electrical)) -- Central University of Technology, Free State, 2013.
Automatic Guided Vehicles (AGVs) are being used more frequently in manufacturing environments. These AGVs are navigated in many different ways, using multiple types of sensors to detect aspects of the environment such as distance, obstacles, and a set route. Different algorithms or methods are then used to turn this environmental information into navigation and control commands for the AGV. One aim of the research was to develop a vision-based platform that could be easily reconfigured for alternative route applications.
In this research, such environment-detecting sensors were replaced and/or minimised by the use of a single omnidirectional webcam picture stream, using a mirror and Perspex tube setup developed in-house. The area of interest in each frame was extracted, saving computational resources and time. Using image processing, the vehicle was navigated along a predetermined route.
Different edge detection and segmentation methods were investigated on this vision signal for route and sign navigation. Prewitt edge detection was eventually implemented, with Hough transforms used for border detection and Kalman filtering used to minimise noise in the detected borders, keeping the vehicle on the navigated route.
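The Prewitt operator mentioned above can be illustrated with a minimal sketch (the thesis's MATLAB implementation is not reproduced here; this pure-Python version only demonstrates the two 3x3 kernels and the gradient magnitude they yield):

```python
# Prewitt kernels: horizontal and vertical gradient estimates.
PREWITT_X = [[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]]
PREWITT_Y = [[-1, -1, -1], [0, 0, 0], [1, 1, 1]]

def convolve3x3(img, kernel, x, y):
    """Apply a 3x3 kernel centred on (x, y); img is a list of rows."""
    acc = 0
    for ky in range(3):
        for kx in range(3):
            acc += kernel[ky][kx] * img[y + ky - 1][x + kx - 1]
    return acc

def prewitt_magnitude(img, x, y):
    """Gradient magnitude |G| = sqrt(Gx^2 + Gy^2) at an interior pixel."""
    gx = convolve3x3(img, PREWITT_X, x, y)
    gy = convolve3x3(img, PREWITT_Y, x, y)
    return (gx * gx + gy * gy) ** 0.5

# A vertical step edge: left half dark (0), right half bright (9).
img = [[0, 0, 0, 9, 9, 9]] * 5
print(prewitt_magnitude(img, 1, 2))  # 0.0  -- flat region
print(prewitt_magnitude(img, 3, 2))  # 27.0 -- strong response on the edge
```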
Reconfigurability was added to the route layout through coloured signs incorporated into the navigation process. The result was the control of a number of AGVs, each on its own designated coloured-sign route. This route could be reconfigured by the operator with no programming alteration or intervention. The YCbCr colour space was used to detect the specific control signs for alternative colour-route navigation.
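For context on why YCbCr suits colour-sign detection (a generic illustration, not the thesis's code; the `is_red_sign` helper and its threshold are hypothetical), the standard ITU-R BT.601 conversion separates brightness (Y) from chroma (Cb, Cr), so thresholding on a chroma channel is largely insensitive to lighting changes:

```python
def rgb_to_ycbcr(r, g, b):
    """Full-range ITU-R BT.601 RGB -> YCbCr conversion (8-bit values).

    Cb/Cr carry colour information independently of brightness, which
    is what makes chroma thresholding robust for sign detection.
    """
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def is_red_sign(r, g, b, cr_threshold=170.0):
    """Hypothetical chroma test: strongly red pixels have high Cr."""
    _, _, cr = rgb_to_ycbcr(r, g, b)
    return cr > cr_threshold

print(is_red_sign(220, 30, 30))    # True  -- saturated red
print(is_red_sign(120, 120, 120))  # False -- grey, Cr is ~128
```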
The result was used to generate commands controlling the AGV, sent as serial commands over a laptop's Universal Serial Bus (USB) port to a PIC microcontroller interface board that drives the motors by means of pulse width modulation (PWM).
A complete MATLAB® software development platform was used, implementing written M-files, Simulink® models, masked function blocks, and .mat files for sourcing workspace variables and generating executable files. This continuous development environment lends itself to speedy evaluation and implementation of image processing options on the AGV.
All the work done in the thesis was validated by simulations using actual data and by physical experimentation
Texture and Colour in Image Analysis
Research in colour and texture has experienced major changes in the last few years. This book presents some recent advances in the field, specifically in the theory and applications of colour texture analysis. This volume also features benchmarks, comparative evaluations and reviews
Optimising programming to reduce side effects of subthalamic nucleus deep brain stimulation in Parkinson’s Disease
Subthalamic nucleus deep brain stimulation (STN DBS) is a widely used treatment for Parkinson’s disease patients with motor complications refractory to medical management. However, a significant proportion of treated patients suffer from stimulation induced side effects. Conventional options to address these by modulation of stimulation parameters and programming configurations have been limited. In recent years, technological advances have resulted in the emergence of novel programming features, including the use of short pulse width (PW) and directional steering, that represent further avenues to explore in this regard. In this thesis, I will present data on the utility of these programming techniques in alleviating stimulation induced side effects, and explore mechanisms that may mediate any observed effects. The data presented here is derived from four studies. Study 1 quantified the therapeutic window using short PW stimulation at 30μs relative to conventional 60μs settings. Study 2 represents a randomised controlled trial on short PW in chronic STN DBS patients with dysarthria. Study 3 evaluated the utility of directional steering, short PW, and the combination of these features in reversing stimulation induced dysarthria, dyskinesia, and pyramidal side effects. The findings of these studies suggest that short PW significantly increases the therapeutic window in terms of amplitude and charge, and that while it may not benefit chronic dysarthric patients collectively, directional steering and short PW can each significantly improve reversible stimulation induced side effects early in the course of STN DBS therapy. These novel techniques represent effective additional tools to conventional methods for optimising stimulation. 
In study 4, imaging and visualisation software are used to model and explore shifts in the volume of tissue activated, based on clinical data from study 3, and to quantitatively compare charge per pulse, in order to explore potential mechanisms underlying the changes seen with these techniques.
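For reference on the charge-per-pulse comparison mentioned above (a generic calculation, not the study's software), charge per pulse is simply amplitude multiplied by pulse width, which is why halving the pulse width from 60 µs to 30 µs requires roughly double the amplitude to deliver the same charge:

```python
def charge_per_pulse_nC(amplitude_mA, pulse_width_us):
    """Charge per pulse Q = I * PW (mA * microseconds = nanocoulombs)."""
    return amplitude_mA * pulse_width_us

# Equal charge at 60 us and at 30 us requires double the amplitude:
print(charge_per_pulse_nC(2.0, 60.0))  # 120.0 nC
print(charge_per_pulse_nC(4.0, 30.0))  # 120.0 nC
```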