Space Image Processing and Orbit Estimation Using Small Aperture Optical Systems
Angles-only initial orbit determination (AIOD) methods have been used to find the orbit of satellites since the beginning of the Space Race. Given the ever-increasing number of objects in orbit today, the need for accurate space situational awareness (SSA) data has never been greater. Small aperture (< 0.5 m) optical systems, increasingly popular in both amateur and professional circles, provide an inexpensive source of such data. However, utilizing these types of systems requires understanding their limits. This research uses a combination of image processing techniques and orbit estimation algorithms to evaluate the limits of small aperture systems and to improve the resulting orbit solution. Characterization of noise from physical, electronic, and digital sources leads to a better understanding of how to reduce noise in the images and thereby provide the best solution possible. Given multiple measurements, choosing the best images to use is a non-trivial process and often amounts to trying all combinations. To help automate the process, a novel "observability metric" computed using only information from the captured images was shown empirically to select the best observations. A method of identifying resident space objects (RSOs) in a single image using a gradient-based search algorithm was developed and tested on actual space imagery captured with a small aperture optical system. The algorithm was shown to correctly identify candidate RSOs in a variety of observational scenarios.
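The gradient-based candidate search can be illustrated with a toy sketch; the function name, threshold, and synthetic frame below are hypothetical stand-ins, assuming only that candidates are pixels whose local intensity gradient is unusually strong:

```python
import numpy as np

def find_candidates(image, grad_thresh=5.0):
    """Flag pixels whose intensity-gradient magnitude exceeds a threshold,
    a toy stand-in for a gradient-based RSO candidate search."""
    gy, gx = np.gradient(image.astype(float))  # central differences per axis
    mag = np.hypot(gx, gy)
    ys, xs = np.nonzero(mag > grad_thresh)
    return list(zip(ys.tolist(), xs.tolist()))

# Synthetic frame: flat background with one bright point source at (8, 8).
frame = np.ones((16, 16))
frame[8, 8] = 100.0
hits = find_candidates(frame)  # the four pixels bordering the source
```

A real pipeline would also have to reject stars, hot pixels, and cosmic-ray strikes, which is where the noise characterization discussed above comes in.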
Hardware acceleration of the trace transform for vision applications
Computer Vision is a rapidly developing field in which machines process visual data to extract meaningful information. Digitised images in their pixels and bits serve no purpose of their own. It is only by interpreting the data, and extracting higher level information, that a scene can be understood. The algorithms that enable this process are often complex and data-intensive, limiting the processing rate when implemented in software. Hardware-accelerated implementations provide a significant performance boost that can enable real-time processing. The Trace transform is a newly proposed algorithm that has been proven effective in image categorisation and recognition tasks. It is flexibly defined, allowing the mathematical details to be tailored to the target application. However, it is highly computationally intensive, which limits its applications. Modern heterogeneous FPGAs provide an ideal platform for accelerating the Trace transform for real-time performance, while also allowing an element of flexibility, which highly suits the generality of the Trace transform. This thesis details the implementation of an extensible Trace transform architecture for vision applications, before extending this architecture to a full flexible platform suited to the exploration of Trace transform applications. As part of the work presented, a general set of architectures for large-windowed median and weighted median filters is presented, as required for a number of Trace transform implementations. Finally, an acceleration of Pseudo 2-Dimensional Hidden Markov Model decoding, usable in a person detection system, is presented. Such a system can be used to extract frames of interest from a video sequence, to be subsequently processed by the Trace transform. All these architectures emphasise the need for considered, platform-driven design in achieving maximum performance through hardware acceleration.
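The weighted median that these filter architectures compute can be stated compactly in software; this scalar sketch is only the defining operation, not the large-windowed FPGA architecture the thesis presents:

```python
import numpy as np

def weighted_median(values, weights):
    """Smallest value whose cumulative weight reaches half the total weight."""
    order = np.argsort(values)
    v = np.asarray(values, dtype=float)[order]
    w = np.asarray(weights, dtype=float)[order]
    cum = np.cumsum(w)
    return v[np.searchsorted(cum, 0.5 * cum[-1])]

# With unit weights this reduces to the ordinary median ...
assert weighted_median([1, 2, 3, 4, 5], [1, 1, 1, 1, 1]) == 3
# ... while a heavy weight drags the result toward that sample.
assert weighted_median([1, 2, 3, 4, 5], [5, 1, 1, 1, 1]) == 1
```

The hardware difficulty is that a sliding window of N samples needs a sort (or an equivalent counting network) per output pixel, which is what motivates dedicated architectures for large windows.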
The multiresolution Fourier transform : a general purpose tool for image analysis
The extraction of meaningful features from an image forms an important area of image
analysis. It enables the task of understanding visual information to be implemented in a
coherent and well defined manner. However, although many of the traditional approaches to
feature extraction have proved to be successful in specific areas, recent work has suggested
that they do not provide sufficient generality when dealing with complex analysis problems
such as those presented by natural images.
This thesis considers the problem of deriving an image description which could form the basis
of a more general approach to feature extraction. It is argued that an essential property of such
a description is that it should have locality in both the spatial domain and in some
classification space over a range of scales. Using the 2-d Fourier domain as a classification
space, a number of image transforms that might provide the required description are investigated.
These include combined representations such as a 2-d version of the short-time Fourier
transform (STFT), and multiscale or pyramid representations such as the wavelet transform.
However, it is shown that these are limited in their ability to provide sufficient locality in both
domains and as such do not fulfill the requirement for generality.
To overcome this limitation, an alternative approach is proposed in the form of the multiresolution
Fourier transform (MFT). This has a hierarchical structure in which the outermost levels
are the image and its discrete Fourier transform (DFT), whilst the intermediate levels are
combined representations in space and spatial frequency. These levels are defined to be
optimal in terms of locality and their resolution is such that within the transform as a whole
there is a uniform variation in resolution between the spatial domain and the spatial frequency
domain. This ensures that locality is provided in both domains over a range of scales. The
MFT is also invertible and amenable to efficient computation via familiar signal processing
techniques. Examples and experiments illustrating its properties are presented.
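The locality trade-off that motivates the MFT can be illustrated in one dimension with a block-wise DFT, the simplest combined space/frequency representation; this sketch is only an analogue of one intermediate level, not the MFT itself, whose windows are chosen for optimal locality across several levels:

```python
import numpy as np

def windowed_dft(signal, window_len):
    """Split a 1-D signal into non-overlapping windows and take the DFT of
    each: a crude analogue of one combined space/frequency level."""
    n = len(signal) // window_len
    blocks = np.reshape(signal[:n * window_len], (n, window_len))
    return np.fft.fft(blocks, axis=1)

# A tone confined to the second half of the signal ...
t = np.arange(64)
sig = np.where(t >= 32, np.sin(2 * np.pi * 8 * t / 64), 0.0)
coeffs = windowed_dft(sig, 32)
energy = np.abs(coeffs).sum(axis=1)
# ... produces energy only in the second window: the representation is
# localised in space (which window) and in frequency (which bin) at once.
```

Shrinking the window sharpens spatial locality at the cost of frequency resolution; the MFT's hierarchy of levels covers this trade-off over a range of scales rather than fixing one window size.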
The problem of extracting local image features such as lines and edges is then considered. A
multiresolution image model based on these features is defined and it is shown that the MFT
provides an effective tool for estimating its parameters. The model is also suitable for
representing curves and a curve extraction algorithm is described. The results presented for
synthetic and natural images compare favourably with existing methods. Furthermore, when
coupled with the previous work in this area, they demonstrate that the MFT has the potential
to provide a basis for the solution of general image analysis problems.
Machine learning methods for 3D object classification and segmentation
Field of study: Computer science. Dr. Ye Duan, Thesis Supervisor. Includes vita. July 2018. Object understanding is a fundamental problem in computer vision and it has been extensively researched in recent years thanks to the availability of powerful GPUs and labelled data, especially in the context of images. However, 3D object understanding is still not on par with its 2D counterpart, and deep learning for 3D has not been fully explored yet. In this dissertation, I work on two approaches, both of which advance the state-of-the-art results in 3D classification and segmentation. The first approach, called MVRNN, is based on the multi-view paradigm. In contrast to MVCNN, which does not generate consistent results across different views, our MVRNN treats the multi-view images as a temporal sequence, correlating the features and generating coherent segmentation across different views. MVRNN demonstrated state-of-the-art performance on the Princeton Segmentation Benchmark dataset. The second approach, called PointGrid, is a hybrid method which combines points with a regular grid structure. 3D points can retain fine details but are irregular, which is challenging for deep learning methods. A volumetric grid is simple and has a regular structure, but does not scale well with data resolution. Our PointGrid, which is simple, allows the fine details to be consumed by normal convolutions under a coarser-resolution grid. PointGrid achieved state-of-the-art performance on the ModelNet40 and ShapeNet datasets in 3D classification and object part segmentation. Includes bibliographical references (pages 116-140).
Project OASIS: The Design of a Signal Detector for the Search for Extraterrestrial Intelligence
An 8-million-channel spectrum analyzer (MCSA) was designed to meet the needs of a SETI program. The MCSA produces a very large volume of data at very high rates. The development of a signal detector that follows the MCSA is presented.
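The core operation of such a spectrum analyzer, splitting the input into a large number of narrow frequency channels, can be sketched with a block FFT; this is only an illustrative software analogue (with 32 channels rather than 8 million), not the MCSA hardware design:

```python
import numpy as np

def channelize(samples, n_channels):
    """Toy FFT spectrum analyzer: average the power spectrum over
    consecutive blocks of n_channels samples."""
    n_blocks = len(samples) // n_channels
    blocks = np.reshape(samples[:n_blocks * n_channels], (n_blocks, n_channels))
    spectra = np.abs(np.fft.fft(blocks, axis=1)) ** 2
    return spectra.mean(axis=0)  # averaging suppresses uncorrelated noise

# A pure complex tone lands in a single channel of a 32-channel analyzer.
n = 32
t = np.arange(n * 8)
tone = np.exp(2j * np.pi * 5 * t / n)
power = channelize(tone, n)
peak = int(np.argmax(power))  # channel 5
```

A narrowband extraterrestrial signal would similarly concentrate its power in very few channels, which is why SETI designs favour very fine channelization.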
Mathematical Imaging Tools in Cancer Research - From Mitosis Analysis to Sparse Regularisation
This dissertation deals with customised image analysis tools in cancer research. In the field of biomedical sciences, mathematical imaging has become crucial in order to account for advancements in technical equipment and data storage by sound mathematical methods that can process and analyse imaging data in an automated way. This thesis contributes to the development of such mathematically sound imaging models in four ways:
(i) automated cell segmentation and tracking. In cancer drug development, time-lapse light microscopy experiments are conducted for performance validation. The aim is to monitor behaviour of cells in cultures that have previously been treated with chemotherapy drugs, since atypical duration and outcome of mitosis, the process of cell division, can be an indicator of successfully working drugs. As an imaging modality we focus on phase contrast microscopy, hence avoiding phototoxicity and influence on cell behaviour. As a drawback, the common halo and shade-off effects impede image analysis. We present a novel workflow uniting automated mitotic cell detection via the Hough transform with subsequent cell tracking by a tailor-made level-set method, in order to obtain statistics on length of mitosis and cell fates. The proposed image analysis pipeline is deployed in a MATLAB software package called MitosisAnalyser.
For the detection of mitotic cells we use the circular Hough transform. This concept is investigated further in the framework of image regularisation, in the general context of imaging inverse problems in which circular objects should be enhanced: (ii) we exploit sparsity of first-order derivatives in combination with the linear circular Hough transform operation.
Furthermore, (iii) we present a new unified higher-order derivative-type regularisation functional enforcing sparsity of a vector field related to an image to be reconstructed using curl, divergence and shear operators. The model is able to interpolate between well-known regularisers such as total generalised variation and infimal convolution total variation.
Finally, (iv) we demonstrate how we can learn sparsity promoting parametrised regularisers via quotient minimisation, which can be motivated by generalised eigenproblems. Learning approaches have recently become very popular in the field of inverse problems. However, the majority aims at fitting models to favourable training data, whereas we incorporate knowledge about both fit and misfit data. We present results resembling behaviour of well-established derivative-based sparse regularisers, introduce novel families of non-derivative-based regularisers and extend this framework to classification problems. NIHR Cambridge Biomedical Research Centre PhD Fellowship.
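The fixed-radius circular Hough transform underlying the cell detection in (i) can be sketched as a voting procedure; the radius, grid size, and synthetic edge points below are illustrative, not the MitosisAnalyser implementation:

```python
import numpy as np

def circular_hough(edge_points, radius, shape):
    """Each edge point votes for every candidate centre lying `radius`
    away from it; true circle centres accumulate the most votes."""
    acc = np.zeros(shape, dtype=int)
    thetas = np.linspace(0.0, 2 * np.pi, 360, endpoint=False)
    for (y, x) in edge_points:
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < shape[0]) & (cx >= 0) & (cx < shape[1])
        np.add.at(acc, (cy[ok], cx[ok]), 1)  # unbuffered accumulation
    return acc

# Edge points sampled from a circle of radius 5 centred at (20, 20).
angles = np.linspace(0.0, 2 * np.pi, 40, endpoint=False)
pts = [(20 + 5 * np.sin(a), 20 + 5 * np.cos(a)) for a in angles]
acc = circular_hough(pts, 5, (40, 40))
centre = np.unravel_index(np.argmax(acc), acc.shape)  # recovers (20, 20)
```

The linearity of this voting operation in the edge map is what allows it to be combined with the sparse regularisation described in (ii).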
Complexity-reduced hardware-based track-trigger for CMS upgrade
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London. The Compact Muon Solenoid (CMS) detector at the Large Hadron Collider (LHC)
is designed to study the results of proton-proton collisions. The Tracker
sub-detector is designed to detect and reconstruct the trajectories of charged
particles produced by the collisions. During the lifetime of the CMS detector,
there have been several upgrades aimed at increasing the chance of discovering
new physics through increased luminosity levels and instrumentation of
advanced technology. The High-Luminosity upgrade optimises the LHC to
accelerate high-energy particles with an average of 200 proton-proton
interactions per bunch crossing. The Level-1 Trigger system promptly analyses
and filters collisions using hardware to reduce the data volume in real-time. For
the upgrade, the trigger mechanism will use a particle trajectory estimator that
discriminates between particles based on their transverse momentum (pT).
Particles with pT ≥ 2 GeV/c will be transmitted to the Level-1 Track-Trigger
system for trajectory reconstruction within a fixed 3 μs latency. This thesis
presents a novel Hardware-based Multivariate Linear Fitter (MVLF) system
focusing on robustness in tracking efficiency and reduction in logic resource
usage within the specified latency. The system components are implemented in
Field Programmable Gate Arrays (FPGA), targeting 16 nm FinFET UltraScale+
silicon technology. The development was performed using the High-Level
Synthesis (HLS) automation tools and the Hardware acceleration platform for
Application-Specific Integrated Circuits (ASIC). A firmware demonstrator has
been assembled to verify the feasibility and compatibility of the scaled system
with the CMS Level-1 Track-Trigger infrastructure. The system's performance is
compared to past and current system developments, and the results are
presented accordingly.
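The pT discrimination the trigger performs can be illustrated with the standard sagitta relation for a charged track in a solenoid, pT [GeV/c] ≈ 0.3·B [T]·R [m]; the function names and the simple chord/sagitta parametrisation are illustrative, not the MVLF's actual fit (3.8 T is the nominal CMS solenoid field):

```python
def pt_from_sagitta(chord_m, sagitta_m, b_field_t=3.8):
    """Transverse momentum from track bending: radius R ~ L^2 / (8 s),
    then pT [GeV/c] ~ 0.3 * B [T] * R [m]."""
    radius_m = chord_m ** 2 / (8.0 * sagitta_m)
    return 0.3 * b_field_t * radius_m

def passes_trigger(chord_m, sagitta_m, threshold=2.0):
    """Keep only tracks at or above the pT threshold."""
    return pt_from_sagitta(chord_m, sagitta_m) >= threshold

# A nearly straight (stiff) track over a 1 m chord: pT ~ 28.5 GeV/c, kept.
stiff = passes_trigger(1.0, 0.005)
# A strongly bent (soft) track: pT ~ 0.7 GeV/c, rejected.
soft = passes_trigger(1.0, 0.2)
```

The hardware challenge is performing an equivalent discrimination for thousands of track candidates per bunch crossing inside the fixed latency budget.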
On the distribution of central values of Hecke L-functions
Questions regarding the behavior of the Riemann zeta function on the critical line 1/2 + it can be naturally interpreted as questions regarding the family of L-functions over Q associated to the archimedean characters χ(k) = k^{-it} at the center point 1/2. There are many families of characters besides those strictly of archimedean type, especially as one expands their scope to proper finite extensions of Q. Consideration of these Hecke characters leads immediately to analogous questions concerning their associated L-functions.
Using tools from p-adic analysis which are analogues of traditional archimedean techniques, we prove the q-aspect analogue of Heath-Brown's result on the twelfth power moment of the Riemann zeta function for Dirichlet L-functions to odd prime power moduli. In particular, our results rely on the p-adic method of stationary phase for sums of products and complement Nunes' bound for smooth square-free moduli.
We additionally prove the frequency-aspect analogue of Soundararajan's result on extreme values of the Riemann zeta function for Hecke L-functions to angular characters over imaginary quadratic number fields. This result relies on the resonance method, which is applied for the first time to this family of L-functions, where the classification and extraction of diagonal terms depends on the geometry of the associated field's complex embedding.
Proceedings of the 2011 Joint Workshop of Fraunhofer IOSB and Institute for Anthropomatics, Vision and Fusion Laboratory
This book is a collection of 15 reviewed technical reports summarizing the presentations at the 2011 Joint Workshop of Fraunhofer IOSB and Institute for Anthropomatics, Vision and Fusion Laboratory. The covered topics include image processing, optical signal processing, visual inspection, pattern recognition and classification, human-machine interaction, world and situation modeling, autonomous system localization and mapping, information fusion, and trust propagation in sensor networks