Navigation for automatic guided vehicles using omnidirectional optical sensing
Thesis (M. Tech. (Engineering: Electrical)) -- Central University of Technology, Free State, 2013. Automatic Guided Vehicles (AGVs) are used increasingly often in manufacturing environments. These AGVs are navigated in many different ways, using multiple types of sensors to perceive the environment: distance, obstacles, and a set route. Different algorithms or methods then apply this environmental information to navigate and control the AGV. One aim of this research was to develop a vision-based platform that could be easily reconfigured for alternative route applications.
In this research, such environment-detecting sensors were replaced and/or minimised by a single omnidirectional webcam stream, using a custom-developed mirror and Perspex-tube assembly. The area of interest in each frame was extracted, saving computational resources and time. Using image processing, the vehicle was navigated along a predetermined route.
Different edge-detection and segmentation methods were investigated on this vision signal for route and sign navigation. Prewitt edge detection was eventually implemented, with Hough transforms used for border detection and Kalman filtering used to suppress noise in the detected borders so the vehicle stayed on its route.
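The route-following stage above starts with Prewitt gradients before Hough-based border detection. As a minimal illustration of that first step, here is a pure-NumPy Prewitt sketch (the thesis worked in MATLAB/Simulink; the kernel values are the standard Prewitt operators, everything else here is illustrative):

```python
import numpy as np

def prewitt_edges(img):
    """Prewitt gradient magnitude of a 2-D grayscale image (float array)."""
    kx = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], float)  # horizontal gradient
    ky = kx.T                                                   # vertical gradient
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    # correlate each 3x3 kernel with the image via shifted slices
    for i in range(3):
        for j in range(3):
            patch = img[i:i + h - 2, j:j + w - 2]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return np.hypot(gx, gy)

# A vertical step edge: left half dark, right half bright.
img = np.zeros((10, 10))
img[:, 5:] = 1.0
mag = prewitt_edges(img)
```

`np.hypot` combines the horizontal and vertical responses into a gradient magnitude; thresholding that map yields the binary edge image a Hough transform would consume.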
Reconfigurability was added to the route layout through coloured signs incorporated in the navigation process. The result was the control of a number of AGVs, each on its own designated colour-signed route. The route could be reconfigured by the operator with no programming alteration or intervention. The YCbCr colour space was used to detect specific control signs for alternative coloured-route navigation.
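The colour-sign step exploits the fact that chrominance in YCbCr is largely brightness-independent. A small sketch of the idea, using the standard ITU-R BT.601 conversion (the threshold and the red "sign" colour are made-up illustration values, not the thesis's calibration):

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Full-range ITU-R BT.601 RGB -> YCbCr conversion (values on a 0-255 scale)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

# Hypothetical red sign on a grey floor: Cr is high only for red pixels,
# so a simple Cr threshold isolates the sign regardless of brightness.
frame = np.full((4, 4, 3), 120.0)        # grey background
frame[1:3, 1:3] = [200.0, 30.0, 30.0]    # red sign patch
cr = rgb_to_ycbcr(frame)[..., 2]
mask = cr > 150                          # threshold chosen for illustration
```

The same thresholding pattern, with different Cb/Cr windows, distinguishes one coloured route from another.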
The resulting navigation decisions were converted into commands controlling the AGV: serial commands sent over a laptop's Universal Serial Bus (USB) port to a PIC microcontroller interface board, which drove the motors by means of pulse width modulation (PWM).
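The exact serial protocol between the laptop and the PIC board is not published in the abstract, so the command format below is purely hypothetical; it only illustrates packing two PWM duty cycles into an ASCII frame that a microcontroller could parse:

```python
def frame_motor_command(left, right):
    """Pack a hypothetical 'M<left>,<right>\\n' ASCII command for a motor board.

    left/right are PWM duty cycles in percent (0-100). The framing is an
    assumption for illustration, not the thesis's actual protocol.
    """
    if not (0 <= left <= 100 and 0 <= right <= 100):
        raise ValueError("duty cycle out of range")
    return f"M{left},{right}\n".encode("ascii")
```

On the laptop side such a frame would typically be written to the USB-serial port with pyserial, e.g. `serial.Serial('/dev/ttyUSB0', 9600).write(frame_motor_command(60, 40))`.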
The entire software platform was developed in MATLAB®, using written M-files, Simulink® models, masked function blocks, and .mat files for sourcing workspace variables and generating executable files. This continuous development environment lends itself to speedy evaluation and implementation of image-processing options on the AGV.
All the work done in the thesis was validated by simulations using actual data and by physical experimentation.
Calibration of a reconfigurable array of omnidirectional cameras using a moving person
Reconfigurable arrays of omnidirectional cameras are useful for applications where multiple cameras working together must be deployed at short notice. This paper addresses the important issue of calibrating such arrays in terms of the relative camera positions and orientations. The location of a one-dimensional object moving parallel to itself, such as a moving person, is used to establish correspondences between multiple cameras. In this case, the non-linear 3-D calibration problem can be approximated by a 2-D problem in plan view, which enables an initial solution using a factorization method. A non-linear optimization stage is then used to account for the approximations, as well as to minimize the geometric error between the observed and projected omni pixel coordinates. Experimental results with simulated and real data illustrate the effectiveness of the method.
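A toy building block for the plan-view formulation: once two plan-view camera poses are fixed, the person's ground-plane position is the intersection of the two bearing rays. (The paper solves the harder inverse problem of estimating the poses themselves; this sketch assumes the poses are known and is purely illustrative.)

```python
import numpy as np

def intersect_bearings(p1, a1, p2, a2):
    """Plan-view intersection of two bearing rays.

    p1, p2: 2-D camera positions; a1, a2: bearing angles (radians) to the
    person. Solves p1 + t1*d1 = p2 + t2*d2 for the crossing point.
    """
    d1 = np.array([np.cos(a1), np.sin(a1)])
    d2 = np.array([np.cos(a2), np.sin(a2)])
    A = np.column_stack([d1, -d2])           # t1*d1 - t2*d2 = p2 - p1
    t = np.linalg.solve(A, np.asarray(p2, float) - np.asarray(p1, float))
    return np.asarray(p1, float) + t[0] * d1

# Person at (2, 2) seen from cameras at (0, 0) and (4, 0):
import math
pt = intersect_bearings((0.0, 0.0), math.atan2(2, 2), (4.0, 0.0), math.atan2(2, -2))
```

Stacking many such observations over the person's path is what makes the pose estimation itself well-constrained.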
Real-Time High-Resolution Multiple-Camera Depth Map Estimation Hardware and Its Applications
Depth information is used in a variety of 3D signal-processing applications such as autonomous navigation of robots and driving systems, object detection and tracking, computer games, 3D television, and free-viewpoint synthesis. These applications require high accuracy and speed in depth estimation. Depth maps can be generated using disparity estimation methods, which operate by stereo matching between multiple images. The computational complexity of disparity estimation algorithms, and the need for large external and internal memory size and bandwidth, make real-time disparity estimation challenging, especially for high-resolution images. This thesis proposes high-resolution, high-quality multiple-camera depth map estimation hardware. The proposed hardware is verified in real time with a complete system, from initial image capture to display and applications, and the details of the complete system are presented. The proposed binocular and trinocular adaptive-window-size disparity estimation algorithms are carefully designed to suit real-time hardware implementation, allowing efficient parallel and local processing while providing high-quality results. The proposed binocular and trinocular disparity estimation hardware implementations can process 55 frames per second on a Virtex-7 FPGA at 1024 × 768 XGA video resolution for a 128-pixel disparity range. The proposed binocular disparity estimation hardware provides the best quality compared to existing real-time high-resolution disparity estimation hardware implementations. A novel compressed-look-up-table-based rectification algorithm and its real-time hardware implementation are presented. The low-complexity decompression process of the rectification hardware uses a negligible amount of the FPGA's LUT and DFF resources and does not require external memory.
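The disparity estimation at the heart of this pipeline is, in its simplest form, winner-take-all block matching with a sum-of-absolute-differences (SAD) cost. A naive NumPy toy (fixed window, no adaptivity, nothing like the pipelined FPGA design, but the same underlying cost function):

```python
import numpy as np

def disparity_sad(left, right, max_disp, win=3):
    """Winner-take-all SAD block matching on a rectified grayscale pair."""
    h, w = left.shape
    r = win // 2
    disp = np.zeros((h, w), dtype=int)
    cost = np.full((h, w), np.inf)
    for d in range(max_disp + 1):
        # per-pixel absolute difference at this candidate disparity
        diff = np.full((h, w), np.inf)
        diff[:, d:] = np.abs(left[:, d:] - right[:, :w - d])
        # aggregate the cost over a win x win window, then keep the best d
        sad = np.full((h, w), np.inf)
        for y in range(r, h - r):
            for x in range(d + r, w - r):
                sad[y, x] = diff[y - r:y + r + 1, x - r:x + r + 1].sum()
        better = sad < cost
        cost[better] = sad[better]
        disp[better] = d
    return disp

# Synthetic pair: a point at left column x appears at right column x - 2,
# so the true disparity is 2 wherever the matching window is valid.
left = np.tile(np.arange(16, dtype=float), (8, 1))
right = left + 2.0
disp = disparity_sad(left, right, max_disp=4)
```

The hardware versions replace these Python loops with parallel window aggregation, which is what makes 55 fps at XGA resolution feasible.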
The first real-time high-resolution free-viewpoint synthesis hardware utilizing three-camera disparity estimation is presented. The proposed hardware generates high-quality free-viewpoint video in real time for any horizontally aligned arbitrary camera position between the leftmost and rightmost physical cameras. The full embedded depth-estimation system is explained: it transfers disparity results together with synchronized RGB pixels to a PC for application development. Several real-time applications are developed on the PC using the obtained RGB+D results: depth-based image thresholding, speed and distance measurement, head-hands-shoulders tracking, a virtual mouse using hand tracking, and face tracking integrated with free-viewpoint synthesis. The proposed binocular disparity estimation hardware is also implemented in an ASIC. The ASIC implementation imposes additional constraints with respect to the FPGA implementation; these restrictions, their efficient solutions, and the ASIC implementation results are presented. In addition, a very-high-resolution (82.3 MP) 360° × 90° omnidirectional multiple-camera system is proposed. The hemispherical camera system can view target locations close to the horizontal plane with more than two cameras, so it can be used in future high-resolution 360° depth map estimation and its applications.
Pointing, Acquisition, and Tracking for Directional Wireless Communications Networks
Directional wireless communications networks (DWNs) are expected to become a workhorse of the military, as they provide great network capacity in hostile areas where omnidirectional RF systems can put their users in harm's way. These networks will also be able to adapt to new missions, change topologies, and use different communications technologies, yet still reliably serve all their terminal users. DWNs also have the potential to greatly expand the capacity of civilian and commercial wireless communication. The inherently narrow beams present in these types of systems require a means of steering them, acquiring the links, and tracking to maintain connectivity. This area of technological challenges encompasses all the issues of pointing, acquisition, and tracking (PAT).
The two main technologies for DWNs are Free-Space Optical (FSO) and millimeter-wave RF (mmW). FSO offers tremendous bandwidths and long ranges, and uses existing fiber-based technologies. However, it suffers from severe turbulence effects when passing through long (multi-kilometre) atmospheric paths, and can be severely affected by obscuration. MmW systems do not suffer from atmospheric effects nearly as much, use much more sensitive coherent receivers, and have wider beam divergences allowing for easier pointing. They do, however, suffer from a lack of available small-sized power amplifiers, from complicated RF infrastructure that must be steered with a platform, and from the requirement that all acquisition and tracking be done with the data beam, as opposed to FSO, which uses a beacon laser for acquisition and a fast steering mirror for tracking.
This thesis analyzes the many considerations required for designing and implementing an FSO PAT system, and extends this work to the rapidly expanding area of mmW DWN systems. Different types of beam acquisition methods are simulated and tested, and the tradeoffs between various design specifications are analyzed and simulated to give insight into how best to implement a transceiver platform.
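One acquisition method commonly simulated in the FSO PAT literature is a spiral scan of the pointing uncertainty cone; the function below (an illustrative sketch, not taken from the thesis) generates an Archimedean spiral whose turn spacing can be matched to the beam footprint:

```python
import numpy as np

def spiral_scan(fov_radius, step, points_per_turn=32):
    """Archimedean spiral scan pattern covering an uncertainty cone.

    Successive turns are spaced `step` apart so consecutive beam footprints
    overlap; returns (N, 2) azimuth/elevation offsets out to `fov_radius`.
    """
    n_turns = fov_radius / step
    theta = np.linspace(0, 2 * np.pi * n_turns, int(points_per_turn * n_turns))
    r = step * theta / (2 * np.pi)           # radius grows linearly with angle
    return np.column_stack([r * np.cos(theta), r * np.sin(theta)])

pts = spiral_scan(1.0, 0.1)
```

Sweeping outward from the most likely pointing direction minimizes expected acquisition time when the pointing error is roughly Gaussian.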
An experimental test-bed of six FSO platforms is also designed and constructed to test some of these concepts, along with the implementation of a three-node biconnected network. Finally, experiments have been conducted to assess the performance of fixed infrastructure routing hardware when operating with a physically reconfigurable RF network.
Holoscopic 3D imaging and display technology: Camera/ processing/ display
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London. Holoscopic 3D imaging, also known as integral imaging, was first proposed by Lippmann in 1908. It has become an attractive technique for creating full-colour 3D scenes that exist in space. It uses a single camera aperture to record the spatial information of a real scene, with a regularly spaced microlens array simulating the principle of the fly's eye to create physical duplicates of the light field: a true 3D imaging technique.
While stereoscopic and multiview 3D imaging systems, which simulate the human visual system, are widely available on the commercial market, holoscopic 3D imaging technology is still in the research phase. The aim of this research is to investigate the spatial resolution of holoscopic 3D imaging and display technology, covering the holoscopic 3D camera, processing, and display.
A smart microlens array architecture is proposed that doubles the horizontal spatial resolution of the holoscopic 3D camera by trading horizontal and vertical resolution. In particular, it overcomes the unbalanced pixel aspect ratio of unidirectional holoscopic 3D images. In addition, omnidirectional holoscopic 3D computer-graphics rendering techniques are proposed that reduce rendering complexity and facilitate holoscopic 3D content generation.
A holoscopic 3D image stitching algorithm is proposed that widens the overall viewing angle of the holoscopic 3D camera aperture, and pre-processing filters are proposed for spatial data alignment and 3D image data processing. In addition, a dynamic hyperlinker tool is developed that offers interactive holoscopic 3D video content searchability and browsability.
Novel pixel mapping techniques are proposed that improve spatial resolution and visual definition in space. For instance, 4D-DSPM increases horizontal 3D pixel density from 44 3D-PPI to 176 3D-PPI and achieves a spatial resolution of 1365 × 384 3D pixels, whereas the traditional spatial resolution is 341 × 1536 3D pixels. In addition, distributed pixel mapping is proposed that improves the quality of the holoscopic 3D scene in space by creating RGB colour-channel elemental images.
Design and Evaluation of a Scalable and Reconfigurable Multi-Platform System for Acoustic Imaging
This paper proposes a scalable and multi-platform framework for signal acquisition and processing which allows for the generation of acoustic images using planar arrays of MEMS (Micro-Electro-Mechanical Systems) microphones with low development and deployment costs. Acoustic characterization of the MEMS sensors was performed, and the beam pattern of a module, based on an 8 × 8 planar array, and of several clusters of modules was obtained. A flexible framework, formed by an FPGA, an embedded processor, a desktop computer, and a graphics processing unit, was defined. The processing times of the algorithms used to obtain the acoustic images, including signal processing and wideband beamforming via FFT, were evaluated in each subsystem of the framework. Based on this analysis, three frameworks are proposed, defined by the specific subsystems used and the algorithms shared. Finally, a set of acoustic images obtained from sound reflected from a person is presented as a case study in the field of biometric identification. These results reveal the feasibility of the proposed system. Funding: Spanish research project SAM: TEC 2015-68170-R (MINECO/FEDER, UE).
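Wideband beamforming via FFT, as timed in this paper, amounts to applying per-frequency-bin phase shifts instead of fractional-sample time delays. A one-dimensional delay-and-sum sketch (the paper uses 8 × 8 planar arrays; a linear array keeps the geometry simple here, and all parameter values are illustrative):

```python
import numpy as np

def fft_beamform(signals, mic_x, angle_deg, fs, c=343.0):
    """Frequency-domain delay-and-sum for a linear microphone array.

    signals: (n_mics, n_samples); mic_x: mic positions in metres along one
    axis. Each per-mic delay becomes a per-bin phase ramp, which is how FFT
    beamforming avoids fractional-sample interpolation.
    """
    n = signals.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    delays = mic_x * np.sin(np.radians(angle_deg)) / c   # plane-wave model
    spec = np.fft.rfft(signals, axis=1)
    phased = spec * np.exp(-2j * np.pi * freqs[None, :] * delays[:, None])
    return np.fft.irfft(phased.mean(axis=0), n)

# Broadside source: all mics hear the same signal, so steering to 0 degrees
# reconstructs it exactly.
fs = 8000
t = np.arange(256) / fs
sig = np.sin(2 * np.pi * 500 * t)
out = fft_beamform(np.tile(sig, (4, 1)), np.array([0.0, 0.05, 0.10, 0.15]), 0.0, fs)
```

Scanning `angle_deg` over a grid of directions and recording output power per direction is what builds up an acoustic image.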
Doctor of Philosophy dissertation
This dissertation explores the design and use of an electromagnetic manipulation system that has been optimized for the dipole-field model. This system can be used for noncontact manipulation of adjacent magnetic tools and combines the field-strength control of current electromagnetic systems with the analytical modeling of permanent-magnet systems. To design such a system, it is first necessary to characterize how the shape of the field source affects the shape of the magnetic field. The magnetic field generated by permanent magnets and electromagnets can be modeled, far from the source, using a multipole expansion. The error associated with the multipole expansion is quantified, and it is shown that, as long as the point of interest is at least 1.5 radii of the smallest sphere that can fully contain the magnetic source, the full expansion will have less than 1% error. If only the dipole term, the first term in the expansion, is used, then the error is minimized for cylindrical shapes with a diameter-to-length ratio of 4/3 and for rectangular bars with a cube. Applying the multipole expansion to electromagnets, an omnidirectional electromagnet, comprising three orthogonal solenoids and a spherical core, is designed that has minimal dipole-field error and equal strength in all directions. Although this magnet can be constructed with any size core, the optimal design contains a spherical core with a diameter that is 60% of the outer dimension of the magnet. The resulting magnet's ability to dexterously control the field at a point is demonstrated by rotating an endoscopic-pill mockup to drive it through a lumen and by rolling a permanent-magnet ball through several trajectories. Dipole fields also apply forces on adjacent magnetized objects. The ability to control these forces is demonstrated by performing position control on an orientation-constrained magnetic float and finally by steering a permanent magnet, which is aligned with the applied dipole field, around a rose curve.
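The dipole term this dissertation optimizes for is the standard point-dipole field B(r) = μ0/(4π|r|³)(3(m·r̂)r̂ − m). A small sketch (not the author's code) that also exhibits the familiar factor of two between axial and equatorial field strengths:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

def dipole_field(m, r):
    """Point-dipole field B(r) = mu0/(4*pi*|r|^3) * (3*(m.rhat)*rhat - m)."""
    m = np.asarray(m, float)
    r = np.asarray(r, float)
    rn = np.linalg.norm(r)
    rhat = r / rn
    return MU0 / (4 * np.pi * rn**3) * (3 * np.dot(m, rhat) * rhat - m)
```

On the dipole axis the field is parallel to m and twice as strong as at the same distance in the equatorial plane, where it is antiparallel to m; both components fall off as 1/r³, which is why the dipole approximation improves rapidly with distance from the source.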