    Space Image Processing and Orbit Estimation Using Small Aperture Optical Systems

    Angles-only initial orbit determination (AIOD) methods have been used to find the orbits of satellites since the beginning of the Space Race. Given the ever-increasing number of objects in orbit today, the need for accurate space situational awareness (SSA) data has never been greater. Small aperture (< 0.5 m) optical systems, increasingly popular in both amateur and professional circles, provide an inexpensive source of such data. However, utilizing these types of systems requires understanding their limits. This research uses a combination of image processing techniques and orbit estimation algorithms to evaluate those limits and to improve the orbit solution obtained with small aperture systems. Characterization of noise from physical, electronic, and digital sources leads to a better understanding of how to reduce noise in the images and so provide the best solution possible. Given multiple measurements, choosing the best images to use is a non-trivial process and often amounts to trying all combinations. To help automate the process, a novel “observability metric” using only information from the captured images was shown empirically to select the best observations. A method of identifying resident space objects (RSOs) in a single image using a gradient-based search algorithm was developed and tested on actual space imagery captured with a small aperture optical system. The algorithm was shown to correctly identify candidate RSOs in a variety of observational scenarios.
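
    The abstract does not spell out the gradient-based RSO search, so the following is only a minimal sketch of one plausible variant: thresholding the image gradient magnitude against robust background statistics to flag candidate pixels. The smoothing scale, the MAD-based threshold, and the connected-component step are illustrative assumptions, not the author's algorithm.

        import numpy as np
        from scipy import ndimage

        def candidate_rso_pixels(image, k_sigma=5.0):
            """Flag pixels whose gradient magnitude stands out from the background."""
            # Light smoothing suppresses single-pixel noise before differentiating.
            smoothed = ndimage.gaussian_filter(image.astype(float), sigma=1.0)

            # Gradient magnitude via central differences.
            gy, gx = np.gradient(smoothed)
            grad_mag = np.hypot(gx, gy)

            # Robust background statistics (median absolute deviation).
            med = np.median(grad_mag)
            mad = np.median(np.abs(grad_mag - med)) + 1e-12
            threshold = med + k_sigma * 1.4826 * mad

            # Connected regions above the threshold become candidate objects or streaks.
            labels, n_candidates = ndimage.label(grad_mag > threshold)
            return labels, n_candidates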

    Hardware acceleration of the trace transform for vision applications

    Computer Vision is a rapidly developing field in which machines process visual data to extract meaningful information. Digitised images, in their pixels and bits, serve no purpose of their own. It is only by interpreting the data and extracting higher-level information that a scene can be understood. The algorithms that enable this process are often complex and data-intensive, limiting the processing rate when implemented in software. Hardware-accelerated implementations provide a significant performance boost that can enable real-time processing. The Trace transform is a newly proposed algorithm that has been proven effective in image categorisation and recognition tasks. It is flexibly defined, allowing the mathematical details to be tailored to the target application. However, it is highly computationally intensive, which limits its applications. Modern heterogeneous FPGAs provide an ideal platform for accelerating the Trace transform to real-time performance, while also allowing an element of flexibility, which suits the generality of the Trace transform well. This thesis details the implementation of an extensible Trace transform architecture for vision applications, before extending this architecture to a fully flexible platform suited to the exploration of Trace transform applications. As part of the work presented, a general set of architectures for large-windowed median and weighted median filters is presented, as required for a number of Trace transform implementations. Finally, an acceleration of Pseudo 2-Dimensional Hidden Markov Model decoding, usable in a person detection system, is presented. Such a system can be used to extract frames of interest from a video sequence, to be subsequently processed by the Trace transform. All these architectures emphasise the need for considered, platform-driven design in achieving maximum performance through hardware acceleration.
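
    For orientation, the Trace transform generalises the Radon transform: a chosen functional is applied to the image values along every line crossing the image, indexed by angle and offset, and taking the functional to be a plain sum recovers the Radon transform. The rough software reference below illustrates that definition only; the rotation-based sampling and the functionals shown are assumptions for illustration, and the thesis targets a hardware architecture, not this code.

        import numpy as np
        from scipy.ndimage import rotate

        def trace_transform(image, functional=np.median, n_angles=180):
            """Apply `functional` along every traced line, for a set of angles.

            Returns an (n_angles, width) array: one row per rotation angle, one
            column per line offset. functional=np.sum reproduces the Radon
            transform; np.median or np.max give other Trace functionals.
            """
            image = image.astype(float)
            angles = np.linspace(0.0, 180.0, n_angles, endpoint=False)
            out = np.empty((n_angles, image.shape[1]))
            for i, angle in enumerate(angles):
                # After rotation, every traced line becomes a column of the array.
                rotated = rotate(image, angle, reshape=False, order=1)
                out[i] = functional(rotated, axis=0)
            return out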

    The multiresolution Fourier transform : a general purpose tool for image analysis

    The extraction of meaningful features from an image forms an important area of image analysis. It enables the task of understanding visual information to be implemented in a coherent and well defined manner. However, although many of the traditional approaches to feature extraction have proved to be successful in specific areas, recent work has suggested that they do not provide sufficient generality when dealing with complex analysis problems such as those presented by natural images. This thesis considers the problem of deriving an image description which could form the basis of a more general approach to feature extraction. It is argued that an essential property of such a description is that it should have locality in both the spatial domain and in some classification space over a range of scales. Using the 2-d Fourier domain as a classification space, a number of image transforms that might provide the required description are investigated. These include combined representations such as a 2-d version of the short-time Fourier transform (STFT), and multiscale or pyramid representations such as the wavelet transform. However, it is shown that these are limited in their ability to provide sufficient locality in both domains and as such do not fulfill the requirement for generality. To overcome this limitation, an alternative approach is proposed in the form of the multiresolution Fourier transform (MFT). This has a hierarchical structure in which the outermost levels are the image and its discrete Fourier transform (DFT), whilst the intermediate levels are combined representations in space and spatial frequency. These levels are defined to be optimal in terms of locality, and their resolution is such that within the transform as a whole there is a uniform variation in resolution between the spatial domain and the spatial frequency domain. This ensures that locality is provided in both domains over a range of scales. The MFT is also invertible and amenable to efficient computation via familiar signal processing techniques. Examples and experiments illustrating its properties are presented. The problem of extracting local image features such as lines and edges is then considered. A multiresolution image model based on these features is defined and it is shown that the MFT provides an effective tool for estimating its parameters. The model is also suitable for representing curves and a curve extraction algorithm is described. The results presented for synthetic and natural images compare favourably with existing methods. Furthermore, when coupled with the previous work in this area, they demonstrate that the MFT has the potential to provide a basis for the solution of general image analysis problems.
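
    Each intermediate MFT level is, in effect, a 2-D windowed Fourier transform whose window size fixes the balance between spatial and spatial-frequency locality, with the outermost levels (one-pixel window, whole-image window) reducing to the image itself and its DFT. The sketch below computes one such level over non-overlapping blocks with a Hann window; the optimally localised windows and the exact level spacing of the MFT are not reproduced here.

        import numpy as np

        def mft_level(image, block):
            """One space/spatial-frequency level: a windowed 2-D DFT per block.

            image : 2-D array whose sides are divisible by `block`
            block : window size (in pixels) for this level
            Returns an array indexed by (block_row, block_col, freq_v, freq_u).
            """
            h, w = image.shape
            assert h % block == 0 and w % block == 0
            # Split the image into non-overlapping block x block tiles.
            tiles = image.reshape(h // block, block, w // block, block)
            tiles = tiles.transpose(0, 2, 1, 3)
            # A separable Hann window stands in for the optimal MFT window.
            win = np.outer(np.hanning(block), np.hanning(block))
            # Local spectrum of every tile: locality in space and in frequency.
            return np.fft.fft2(tiles * win, axes=(-2, -1))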

    Machine learning methods for 3D object classification and segmentation

    Field of study: Computer science. Dr. Ye Duan, Thesis Supervisor. Includes vita. "July 2018." Object understanding is a fundamental problem in computer vision, and it has been extensively researched in recent years thanks to the availability of powerful GPUs and labelled data, especially in the context of images. However, 3D object understanding is still not on par with its 2D counterpart, and deep learning for 3D has not been fully explored yet. In this dissertation, I work on two approaches, both of which advance the state-of-the-art results in 3D classification and segmentation. The first approach, called MVRNN, is based on the multi-view paradigm. In contrast to MVCNN, which does not generate consistent results across different views, our MVRNN treats the multi-view images as a temporal sequence, correlating the features and generating coherent segmentation across different views. MVRNN demonstrated state-of-the-art performance on the Princeton Segmentation Benchmark dataset. The second approach, called PointGrid, is a hybrid method which combines points and a regular grid structure. 3D points retain fine details but are irregular, which is challenging for deep learning methods. A volumetric grid is simple and has a regular structure, but does not scale well with data resolution. Our PointGrid, which is simple, allows the fine details to be consumed by normal convolutions under a coarser resolution grid. PointGrid achieved state-of-the-art performance on the ModelNet40 and ShapeNet datasets in 3D classification and object part segmentation. Includes bibliographical references (pages 116-140).
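
    PointGrid's central idea, as summarised above, is to keep a small fixed number of raw points inside every cell of a coarse regular grid so that ordinary 3-D convolutions can still see fine geometric detail. The sketch below shows only that sampling step; the grid size, the per-cell point count k, and the zero-padding are illustrative assumptions rather than the published implementation.

        import numpy as np

        def point_grid(points, grid_size=16, k=4):
            """Scatter an (N, 3) point cloud into a grid_size^3 grid, k points per cell.

            Each occupied cell stores up to k cell-relative point coordinates;
            under-filled cells stay zero-padded, so the result is a dense tensor
            that a regular 3-D convolution can consume.
            """
            pts = np.asarray(points, dtype=float)
            # Normalise the cloud into the unit cube, then into grid coordinates.
            mins, maxs = pts.min(axis=0), pts.max(axis=0)
            scaled = (pts - mins) / (maxs - mins + 1e-9) * grid_size
            cells = np.minimum(scaled.astype(int), grid_size - 1)

            grid = np.zeros((grid_size, grid_size, grid_size, k, 3))
            counts = np.zeros((grid_size, grid_size, grid_size), dtype=int)
            for p, (i, j, l) in zip(scaled, cells):
                n = counts[i, j, l]
                if n < k:                              # keep the first k points per cell
                    grid[i, j, l, n] = p - (i, j, l)   # store cell-relative offsets
                    counts[i, j, l] += 1
            return grid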

    Project OASIS: The Design of a Signal Detector for the Search for Extraterrestrial Intelligence

    An 8 million channel spectrum analyzer (MCSA) was designed to meet the needs of a SETI program. The MCSA produces a very large volume of data at very high rates. The development of a device which follows the MCSA is presented.
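
    The MCSA hardware itself is outside the scope of this summary, but the core operation of any spectrum analyzer of this class is channelization of a sampled stream by a large discrete Fourier transform. The toy sketch below illustrates the idea numerically; the frame length (2**23, roughly 8 million bins), the Hann window, and the power-spectrum output are assumptions for illustration, not the MCSA design.

        import numpy as np

        def channelize(samples, n_channels=2**23):
            """Split a complex baseband stream into n_channels frequency bins.

            samples : 1-D complex array
            Returns one power spectrum per full frame, i.e. an array of shape
            (n_frames, n_channels). 2**23 = 8,388,608 bins ("8 million channels").
            """
            n_frames = len(samples) // n_channels
            frames = samples[: n_frames * n_channels].reshape(n_frames, n_channels)
            window = np.hanning(n_channels)            # reduce spectral leakage
            spectra = np.fft.fft(frames * window, axis=1)
            return np.abs(spectra) ** 2                # power per channel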

    On the distribution of central values of Hecke L-functions

    Questions regarding the behavior of the Riemann zeta function on the critical line 1/2 + it can be naturally interpreted as questions regarding the family of L-functions over Q associated to the archimedean characters ψ(k) = k^{-it} at the center point 1/2. There are many families of characters besides those strictly of archimedean type, especially as one expands their scope to proper finite extensions of Q. Consideration of these Hecke characters leads immediately to analogous questions concerning their associated L-functions. Using tools from p-adic analysis which are analogues of traditional archimedean techniques, we prove the q-aspect analogue of Heath-Brown’s result on the twelfth power moment of the Riemann zeta function for Dirichlet L-functions to odd prime power moduli. In particular, our results rely on the p-adic method of stationary phase for sums of products and complement Nunes’ bound for smooth square-free moduli. We additionally prove the frequency-aspect analogue of Soundararajan’s result on extreme values of the Riemann zeta function for Hecke L-functions to angular characters over imaginary quadratic number fields. This result relies on the resonance method, which is applied for the first time to this family of L-functions, where the classification and extraction of diagonal terms depends on the geometry of the associated field’s complex embedding.
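
    For context, the archimedean result whose q-aspect analogue is proved here is Heath-Brown’s twelfth power moment of the Riemann zeta function, quoted below in a weak (epsilon-loss) form as background only, not as a statement from the thesis:

        \int_0^T \bigl| \zeta\bigl(\tfrac{1}{2} + it\bigr) \bigr|^{12} \, dt \;\ll_{\varepsilon}\; T^{2+\varepsilon}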

    Proceedings of the 2011 Joint Workshop of Fraunhofer IOSB and Institute for Anthropomatics, Vision and Fusion Laboratory

    This book is a collection of 15 reviewed technical reports summarizing the presentations at the 2011 Joint Workshop of Fraunhofer IOSB and Institute for Anthropomatics, Vision and Fusion Laboratory. The covered topics include image processing, optical signal processing, visual inspection, pattern recognition and classification, human-machine interaction, world and situation modeling, autonomous system localization and mapping, information fusion, and trust propagation in sensor networks.