
    A Generalized Phase Gradient Autofocus Algorithm

    The phase gradient autofocus (PGA) algorithm has seen widespread use and success within the synthetic aperture radar (SAR) imaging community. However, its use and success have largely been limited to collection geometries where either the polar format algorithm (PFA) or the range migration algorithm is suitable for SAR image formation. In this work, a generalized phase gradient autofocus (GPGA) algorithm is developed which is applicable with both the PFA and the backprojection algorithm (BPA), thereby directly supporting a wide range of collection geometries and SAR imaging modalities. The GPGA algorithm preserves the four crucial signal processing steps comprising the PGA algorithm, while removing the PGA algorithm's constraint of using a single scatterer per range cut for phase error estimation. Moreover, the GPGA algorithm, whether using the PFA or the BPA, yields an approximate maximum marginal likelihood estimate (MMLE) of the phase errors, having marginalized over the unknown complex-valued reflectivities of the selected scatterers. In addition, a new approximate MMLE, termed the max-semidefinite relaxation (Max-SDR) phase estimator, is proposed for use with the GPGA algorithm. The Max-SDR phase estimator provides a phase error estimate with a worst-case approximation bound relative to the solution set of MMLEs (i.e., the solution set of the non-deterministic polynomial-time hard (NP-hard) GPGA phase estimation problem). A specialized interior-point method is also presented for performing Max-SDR phase estimation more efficiently by exploiting the low-rank structure typically associated with the GPGA phase estimation problem. Lastly, simulation and experimental results produced by applying the GPGA algorithm with the PFA and BPA are presented.
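    For context, the sketch below shows one iteration of the classical PGA kernel that the GPGA generalizes: center-shift, windowing, phase-gradient estimation, and correction, implemented in NumPy. The window size, trend removal, and array layout are illustrative choices; this is the textbook single-scatterer-per-range-cut PGA, not the GPGA or Max-SDR estimator developed in the thesis.

```python
import numpy as np

def pga_iteration(img, window=32):
    """One iteration of classical PGA on a complex SAR image
    (rows: range bins, columns: azimuth samples)."""
    n_rng, n_az = img.shape

    # 1. Center-shift: move the brightest scatterer of each range cut to mid-azimuth.
    shifted = np.empty_like(img)
    for r in range(n_rng):
        peak = int(np.argmax(np.abs(img[r])))
        shifted[r] = np.roll(img[r], n_az // 2 - peak)

    # 2. Window around the center to isolate the dominant scatterer.
    w = np.zeros(n_az)
    w[n_az // 2 - window // 2 : n_az // 2 + window // 2] = 1.0
    g = np.fft.ifft(shifted * w, axis=1)          # back to the phase-history domain

    # 3. Phase-gradient (ML kernel) estimate, integrated and detrended.
    num = np.sum(np.imag(np.conj(g[:, :-1]) * np.diff(g, axis=1)), axis=0)
    den = np.sum(np.abs(g[:, :-1]) ** 2, axis=0) + 1e-12
    phi = np.concatenate(([0.0], np.cumsum(num / den)))
    m = np.arange(n_az)
    phi -= np.polyval(np.polyfit(m, phi, 1), m)   # remove the linear (shift) component

    # 4. Correction: apply the conjugate phase in the phase-history domain.
    return np.fft.fft(np.fft.ifft(img, axis=1) * np.exp(-1j * phi), axis=1)
```

    In practice the window is shrunk and the iteration repeated until the estimated phase error converges.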

    Adaptive Sensing Techniques for Dynamic Target Tracking and Detection with Applications to Synthetic Aperture Radars.

    This thesis studies adaptive allocation of a limited set of sensing or computational resources in order to maximize some criterion, such as detection probability, estimation accuracy, or throughput, with specific application to inference with synthetic aperture radars (SAR). Sparse scenarios are considered, where the elements of interest are embedded in a much larger signal space. Policies are examined that adaptively distribute the constrained resources by using observed measurements to inform the allocation at subsequent stages. This thesis studies adaptive allocation policies in three main directions. First, a framework for adaptive search for sparse targets is proposed to simultaneously detect and track moving targets. Previous work is extended to include a dynamic target model that incorporates target transitions, birth/death probabilities, and varying target amplitudes. Policies are proposed that are shown empirically to have excellent asymptotic performance in estimation error, detection probability, and robustness to model mismatch. Moreover, the proposed policies have low computational complexity compared to state-of-the-art dynamic programming solutions. Second, adaptive sensor management is studied for stable tracking of targets under different modalities. A sensor scheduling policy is proposed that guarantees that the target spatial uncertainty remains bounded. When stability conditions are met, fundamental performance limits are derived, such as the maximum number of targets that can be tracked stably and the maximum spatial uncertainty of those targets. The theory is extended to the case where the system may be engaged in tasks other than tracking, such as wide area search or target classification. Lastly, these developed tools are applied to tracking targets using SAR imagery. A hierarchical Bayesian model is proposed for efficient estimation of the posterior distribution of the target and clutter states given observed SAR imagery. This model provides a unifying framework that models the physical, kinematic, and statistical properties of SAR imagery. It is shown that this method generally outperforms common algorithms for change detection. Moreover, the proposed method has the additional benefits of (a) easily incorporating additional information such as target motion models and/or correlated measurements, (b) having few tuning parameters, and (c) providing a characterization of the uncertainty in the state estimation process. Ph.D., Electrical Engineering-Systems, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/97931/1/newstage_1.pd
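    As a rough illustration of the adaptive-allocation idea (spend part of the budget measuring everything coarsely, then concentrate the remainder on promising cells), here is a toy two-stage policy in NumPy. The thresholds, energy split, and Gaussian signal model are invented for the example and are not the dynamic policies developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def two_stage_search(n_cells=10_000, sparsity=1e-3, snr=3.0, split=0.5):
    """Toy two-stage adaptive search over a sparse scene with a total
    energy budget of one unit per cell on average."""
    total = float(n_cells)
    targets = rng.random(n_cells) < sparsity            # sparse ground truth
    amp = snr * targets.astype(float)

    # Stage 1: spread a fraction `split` of the budget uniformly over all cells.
    e1 = split * total / n_cells
    y1 = np.sqrt(e1) * amp + rng.standard_normal(n_cells)

    # Stage 2: spend the remaining energy only on cells that looked promising.
    keep = y1 > 2.0
    if not keep.any():
        return targets, np.zeros(n_cells, dtype=bool)
    e2 = (1.0 - split) * total / keep.sum()
    y2 = np.full(n_cells, -np.inf)
    y2[keep] = np.sqrt(e2) * amp[keep] + rng.standard_normal(keep.sum())
    return targets, y2 > 3.0

targets, detected = two_stage_search()
print(f"hits: {(detected & targets).sum()} of {targets.sum()}, "
      f"false alarms: {(detected & ~targets).sum()}")
```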

    Beyond Measurement: Extracting Vegetation Height from High Resolution Imagery with Deep Learning

    Measuring and monitoring the height of vegetation provides important insights into forest age and habitat quality. These are essential for the accuracy of applications that are highly reliant on up-to-date and accurate vegetation data. Current vegetation sensing practices involve ground survey, photogrammetry, synthetic aperture radar (SAR), and airborne light detection and ranging (LiDAR) sensors. While these methods provide high resolution and accuracy, their hardware and collection effort prohibit highly recurrent and widespread collection. In response to the limitations of current methods, we designed Y-NET, a novel deep learning model to generate high resolution models of vegetation from highly recurrent multispectral aerial imagery and elevation data. Y-NET’s architecture uses convolutional layers to learn correlations between different input features and vegetation height, generating an accurate vegetation surface model (VSM) at 1×1 m resolution. We evaluated Y-NET on 235 km² of the East San Francisco Bay Area and find that Y-NET achieves low error relative to LiDAR when tested on new locations. Y-NET also achieves an R² of 0.83, and side-by-side visual comparisons show that it can effectively model complex vegetation. Furthermore, we show that Y-NET is able to identify instances of vegetation growth and mitigation by comparing aerial imagery and LiDAR collected at different times.
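    To make the two-branch idea concrete, here is a minimal PyTorch sketch of a "Y"-shaped convolutional regressor that fuses a multispectral branch with an elevation branch and predicts a per-pixel height map. The layer counts, channel widths, and L1 loss are placeholders, not the published Y-NET architecture.

```python
import torch
import torch.nn as nn

class TwoBranchHeightNet(nn.Module):
    """Illustrative two-branch CNN: imagery and elevation are encoded
    separately, concatenated, and decoded into a height map."""

    def __init__(self, n_bands=4):
        super().__init__()
        self.img_branch = nn.Sequential(
            nn.Conv2d(n_bands, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        self.dem_branch = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        )
        self.fuse = nn.Sequential(
            nn.Conv2d(32 + 16, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1),            # single channel: vegetation height
        )

    def forward(self, imagery, elevation):
        x = torch.cat([self.img_branch(imagery), self.dem_branch(elevation)], dim=1)
        return self.fuse(x)

model = TwoBranchHeightNet()
imagery = torch.randn(2, 4, 128, 128)       # batch of 4-band aerial tiles
elevation = torch.randn(2, 1, 128, 128)     # co-registered elevation tiles
pred = model(imagery, elevation)            # (2, 1, 128, 128) height map
loss = nn.functional.l1_loss(pred, torch.randn(2, 1, 128, 128))  # vs. reference heights
```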

    Image Reconstruction for Multistatic Stepped Frequency-Modulated Continuous Wave (FMCW) Ultrasound Imaging Systems With Reconfigurable Arrays

    The standard architecture of a medical ultrasound transducer is a linear phased array of piezoelectric elements in a compact, hand-held form. Acoustic energy not directly reflected back towards the transducer elements during a transmit-receive cycle amounts to lost information for image reconstruction. To mitigate this loss, a large, flexible transducer array which conforms to the contours of the subject's body would provide a greater effective aperture and an increase in received image data. However, in this reconfigurable array design, element distributions are irregular and an organized arrangement can no longer be assumed. Phased array architecture also has limited scalability potential for large 2D arrays. This research investigates a multistatic, stepped-FMCW modality as an alternative to array phasing in order to accommodate the flexible and reconfigurable nature of such an array. A space-time reconstruction algorithm was developed for the imaging system. We include ultrasound imaging experiments and describe a simulation method for quickly predicting imaging performance for any given target and array configuration. Lastly, we demonstrate two reconstruction techniques for improving image resolution. The first takes advantage of the statistical significance of pixel contributions prior to the final summation, and the second corrects data errors originating from the stepped-FMCW quadrature receiver.
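    As an illustration of image formation without array phasing, the sketch below backprojects multistatic stepped-frequency data onto an arbitrary pixel grid by matching the round-trip phase of every transmitter/receiver pair. It is a generic matched-filter reconstruction under a constant sound-speed assumption, not the space-time algorithm developed in this work.

```python
import numpy as np

def multistatic_backprojection(data, freqs, tx_pos, rx_pos, grid, c=1540.0):
    """Frequency-domain backprojection for multistatic stepped-frequency data.

    data:   complex samples, shape (n_tx, n_rx, n_freq)
    freqs:  stepped frequencies in Hz, shape (n_freq,)
    tx_pos, rx_pos: element coordinates in metres, shapes (n_tx, 2), (n_rx, 2)
    grid:   pixel coordinates in metres, shape (n_pix, 2)
    c:      assumed speed of sound (m/s)
    Returns the complex image, shape (n_pix,).
    """
    image = np.zeros(grid.shape[0], dtype=complex)
    for ti, t in enumerate(tx_pos):
        d_tx = np.linalg.norm(grid - t, axis=1)        # transmitter -> pixel
        for ri, r in enumerate(rx_pos):
            d_rx = np.linalg.norm(grid - r, axis=1)    # pixel -> receiver
            delay = (d_tx + d_rx) / c                  # round-trip delay (s)
            # Coherently sum the frequency steps with the conjugate phase.
            phase = np.exp(2j * np.pi * np.outer(delay, freqs))
            image += phase @ data[ti, ri]
    return image
```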

    Innovative Techniques for the Retrieval of Earth’s Surface and Atmosphere Geophysical Parameters: Spaceborne Infrared/Microwave Combined Analyses

    With the advent of the first Earth observation satellites, Landsat-1 in July 1972 and ERS-1 in May 1991, the discipline of environmental remote sensing has become, over time, increasingly fundamental for the study of phenomena characterizing the planet Earth. The goal of environmental remote sensing is to perform detailed analyses and to monitor the temporal evolution of different physical phenomena, exploiting the mechanisms of interaction between the objects present in an observed scene and the electromagnetic radiation detected by sensors operating at different frequencies and placed at a distance from the scene. The analyzed physical phenomena are those related to climate change, weather forecasts, global ocean circulation, greenhouse gas profiling, earthquakes, volcanic eruptions, soil subsidence, and the effects of rapid urbanization processes. Generally, remote sensing sensors are of two primary types: active and passive. Active sensors use their own source of electromagnetic radiation to illuminate and analyze an area of interest: an active sensor emits radiation in the direction of the area to be investigated and then detects and measures the radiation that is backscattered from the objects contained in that area. Passive sensors, on the other hand, detect natural electromagnetic radiation (e.g., from the Sun in the visible band and the Earth in the infrared and microwave bands) emitted or reflected by the objects contained in the observed scene. The scientific community has dedicated many resources to developing techniques to estimate, study, and analyze Earth’s geophysical parameters. These techniques differ for active and passive sensors because they depend strictly on the type of the measured physical quantity. In my Ph.D. work, inversion techniques for estimating Earth’s surface and atmosphere geophysical parameters are addressed, emphasizing methods based on machine learning (ML). In particular, the study of cloud microphysics and the characterization of Earth’s surface change phenomena are the focal points of this work.

    Efficient Resource Allocation Schemes for Search.

    This thesis concerns the problem of efficient resource allocation under constraints. In many applications a finite budget is used, and allocating it efficiently can improve performance. In the context of medical imaging the constraint is exposure to ionizing radiation, e.g., in computed tomography (CT). In radar and target tracking, the time spent searching a particular region before pointing the radar to another location, or the transmitted energy level, may be limited. In airport security screening the constraint is screeners' time. This work addresses both static and dynamic resource allocation policies, where the question is: how should a budget be allocated to maximize a certain performance criterion? In addition, many of the above examples correspond to a needle-in-a-haystack scenario. The goal is to find a small number of details, namely 'targets', spread out in a far greater domain. The set of 'targets' is named a region of interest (ROI). For example, in airport security screening perhaps one in a hundred travelers carries a prohibited item, and maybe one in several million is a terrorist or a real threat. Nevertheless, in most of the aforementioned applications the common resource allocation policy is exhaustive: all possible locations are searched with equal effort allocation to spread sensitivity. A novel framework to deal with the problem of efficient resource allocation is introduced. The framework consists of a cost function trading off the proportion of effort allocated to the ROI against that allocated to its complement. Optimal resource allocation policies minimizing the cost are derived. These policies result in superior estimation and detection performance compared to an exhaustive resource allocation policy. Moreover, minimizing the cost has a strong connection to minimizing both the probability of error and the Cramér-Rao (CR) bound on estimation mean square error. Furthermore, it is shown that the allocation policies asymptotically converge to the omniscient allocation policy that knows the location of the ROI in advance. Finally, a multi-scale allocation policy suitable for scenarios where targets tend to cluster is introduced. For a sparse scenario exhibiting good contrast between targets and background, this method achieves significant performance gain while greatly reducing the number of samples required compared to an exhaustive search. Ph.D., Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/60698/1/bashan_1.pd
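    To illustrate the flavour of such a cost function, the toy below minimises sum_i p_i / y_i (a stand-in for expected post-measurement error when cell i has prior probability p_i of belonging to the ROI and receives search effort y_i) subject to a fixed total budget; a Lagrange-multiplier argument gives y_i proportional to sqrt(p_i). The cost, priors, and numbers are illustrative, not the thesis's formulation.

```python
import numpy as np

def optimal_effort(prior, budget):
    """Minimise sum_i prior_i / y_i subject to sum_i y_i = budget.
    Lagrange multipliers give y_i proportional to sqrt(prior_i)."""
    w = np.sqrt(prior)
    return budget * w / w.sum()

rng = np.random.default_rng(1)
prior = np.full(1000, 1e-3)                         # mostly empty scene ...
prior[rng.choice(1000, 10, replace=False)] = 0.5    # ... with a few likely cells

y_opt = optimal_effort(prior, budget=1000.0)
y_uni = np.full(1000, 1.0)                          # exhaustive, uniform allocation

cost = lambda y: np.sum(prior / y)
print(f"uniform cost {cost(y_uni):.3f}  vs  optimal cost {cost(y_opt):.3f}")
```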

    The Space and Earth Science Data Compression Workshop

    This document is the proceedings of the Space and Earth Science Data Compression Workshop, which was held on March 27, 1992, at the Snowbird Conference Center in Snowbird, Utah. The workshop was held in conjunction with the 1992 Data Compression Conference (DCC '92), which took place at the same location on March 24-26, 1992. The workshop explored opportunities for data compression to enhance the collection and analysis of space and Earth science data. It consisted of eleven papers presented in four sessions. These papers describe research that is integrated into, or has the potential of being integrated into, a particular space and/or Earth science data information system. Presenters were encouraged to take into account the scientists' data requirements and the constraints imposed by the data collection, transmission, distribution, and archival system.

    Automated Design of Neural Network Architecture for Classification

    This Ph.D. thesis deals with finding a good architecture for a neural network classifier. The focus is on methods to improve the performance of existing architectures (i.e. architectures that are initialised by a good academic guess) and on automatically building neural networks. An introduction to the multi-layer feed-forward neural network is given, and the most essential property of neural networks, their ability to learn from examples, is discussed. Topics such as training and generalisation are treated in more detail. On the basis of this discussion, methods for finding a good architecture of the network are described. These include early stopping, cross-validation, regularisation, pruning and various construction algorithms (methods that successively build a network). New ideas for combining units with different types of transfer functions, such as radial basis functions and sigmoid or threshold functions, led to the development of a new construction algorithm for classification. The algorithm, called "GLOCAL", is fully described. Results from experiments on real-life data from a synthetic aperture radar (SAR) are provided. The thesis was written so that people from industry and graduate students who are interested in neural networks will hopefully find it useful. Keywords: neural networks, architectures, training, generalisation, deductive and construction algorithms.
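    A minimal sketch of the constructive idea surveyed here (grow the network until a validation criterion stops improving) is given below using scikit-learn on synthetic data. It is a generic construction loop, not the GLOCAL algorithm, which additionally mixes radial basis and sigmoid/threshold units within one network.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Grow the hidden layer one unit at a time; stop when validation accuracy
# has not improved for three consecutive sizes.
X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

best_acc, best_units, stalled = 0.0, 1, 0
for units in range(1, 31):
    net = MLPClassifier(hidden_layer_sizes=(units,), max_iter=2000, random_state=0)
    acc = net.fit(X_tr, y_tr).score(X_val, y_val)
    if acc > best_acc + 1e-3:
        best_acc, best_units, stalled = acc, units, 0
    else:
        stalled += 1
        if stalled >= 3:
            break

print(f"selected {best_units} hidden units, validation accuracy {best_acc:.2f}")
```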

    Design Techniques for Energy-Quality Scalable Digital Systems

    Energy efficiency is one of the key design goals in modern computing. Increasingly complex tasks are being executed in mobile devices and Internet of Things end-nodes, which are expected to operate for long time intervals, on the order of months or years, with the limited energy budgets provided by small form-factor batteries. Fortunately, many of such tasks are error resilient, meaning that they can tolerate some relaxation in the accuracy, precision or reliability of internal operations, without a significant impact on the overall output quality. The error resilience of an application may derive from a number of factors. The processing of analog sensor inputs measuring quantities from the physical world may not always require maximum precision, as the amount of information that can be extracted is limited by the presence of external noise. Outputs destined for human consumption may also contain small or occasional errors, thanks to the limited capabilities of our vision and hearing systems. Finally, some computational patterns commonly found in domains such as statistics, machine learning and operational research naturally tend to reduce or eliminate errors. Energy-Quality (EQ) scalable digital systems systematically trade off the quality of computations with energy efficiency, by relaxing the precision, the accuracy, or the reliability of internal software and hardware components in exchange for energy reductions. This design paradigm is believed to offer one of the most promising solutions to the pressing need for low-energy computing. Despite these high expectations, the current state of the art in EQ scalable design suffers from important shortcomings. First, the great majority of techniques proposed in the literature focus only on processing hardware and software components. Nonetheless, for many real devices, processing contributes only a small portion of the total energy consumption, which is dominated by other components (e.g. I/O, memory or data transfers). Second, in order to fulfill its promises and become diffused in commercial devices, EQ scalable design needs to achieve industrial-level maturity. This involves moving from purely academic research based on high-level models and theoretical assumptions to engineered flows compatible with existing industry standards. Third, the time-varying nature of error tolerance, both among different applications and within a single task, should become more central in the proposed design methods. This involves designing “dynamic” systems in which the precision or reliability of operations (and consequently their energy consumption) can be dynamically tuned at runtime, rather than “static” solutions, in which the output quality is fixed at design time. This thesis introduces several new EQ scalable design techniques for digital systems that take the previous observations into account. Besides processing, the proposed methods apply the principles of EQ scalable design also to interconnects and peripherals, which are often relevant contributors to the total energy in sensor nodes and mobile systems respectively. Regardless of the target component, the presented techniques pay special attention to the accurate evaluation of benefits and overheads deriving from EQ scalability, using industrial-level models, and to the integration with existing standard tools and protocols. Moreover, all the works presented in this thesis allow the dynamic reconfiguration of output quality and energy consumption.
    More specifically, the contribution of this thesis is divided into three parts. In a first body of work, the design of EQ scalable modules for processing hardware data paths is considered. Three design flows are presented, targeting different technologies and exploiting different ways to achieve EQ scalability, i.e. timing-induced errors and precision reduction. These works are inspired by previous approaches from the literature, namely Reduced-Precision Redundancy and Dynamic Accuracy Scaling, which are re-thought to make them compatible with standard Electronic Design Automation (EDA) tools and flows, providing solutions to overcome their main limitations. The second part of the thesis investigates the application of EQ scalable design to serial interconnects, which are the de facto standard for data exchanges between processing hardware and sensors. In this context, two novel bus encodings are proposed, called Approximate Differential Encoding and Serial-T0, that exploit the statistical characteristics of data produced by sensors to reduce the energy consumption on the bus at the cost of controlled data approximations. The two techniques achieve different results for data of different origins, but share the common features of allowing runtime reconfiguration of the allowed error and being compatible with standard serial bus protocols. Finally, the last part of the manuscript is devoted to the application of EQ scalable design principles to displays, which are often among the most energy-hungry components in mobile systems. The two proposals in this context leverage the emissive nature of Organic Light-Emitting Diode (OLED) displays to save energy by altering the displayed image, thus inducing an output quality reduction that depends on the amount of such alteration. The first technique implements an image-adaptive form of brightness scaling, whose outputs are optimized in terms of the balance between power consumption and similarity with the input. The second approach achieves concurrent power reduction and image enhancement by means of an adaptive polynomial transformation. Both solutions focus on minimizing the overheads associated with a real-time implementation of the transformations in software or hardware, so that these do not offset the savings in the display. For each of these three topics, results show that the aforementioned goal of building EQ scalable systems compatible with existing best practices and mature enough for integration in commercial devices can be effectively achieved. Moreover, they also show that very simple and similar principles can be applied to design EQ scalable versions of different system components (processing, peripherals and I/O), and to equip these components with knobs for the runtime reconfiguration of the energy versus quality tradeoff.
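    As a small illustration of the energy-quality knob described for OLED displays, the sketch below sweeps a global brightness scale and picks the value minimising a weighted sum of a power proxy (the mean emitted level, since OLED panel power grows roughly with pixel luminance) and the mean squared deviation from the original image. The linear power proxy and the cost weighting are assumptions made for the example; this is not the image-adaptive technique proposed in the thesis.

```python
import numpy as np

def choose_brightness_scale(img, alpha=0.5, scales=np.linspace(0.5, 1.0, 26)):
    """Pick a global brightness scale for an emissive display by trading a
    power proxy (mean emitted level) against fidelity to the input image.

    img: float array with values in [0, 1]; alpha weights power vs. error.
    """
    best_scale, best_cost = 1.0, np.inf
    for s in scales:
        scaled = s * img
        power = scaled.mean()                  # proxy: emissive power ~ mean level
        error = np.mean((scaled - img) ** 2)   # dissimilarity to the original
        cost = alpha * power + (1.0 - alpha) * error
        if cost < best_cost:
            best_scale, best_cost = s, cost
    return best_scale

img = np.random.default_rng(0).random((64, 64))
s = choose_brightness_scale(img, alpha=0.3)
print(f"chosen scale {s:.2f}, power reduced by about {100 * (1 - s):.0f}%")
```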

    Speckle Noise Reduction via Homomorphic Elliptical Threshold Rotations in the Complex Wavelet Domain

    Many clinicians regard speckle noise as an undesirable artifact in ultrasound images, masking the underlying pathology within a patient. Speckle noise is a random interference pattern formed by coherent radiation in a medium containing many sub-resolution scatterers. Speckle has a negative impact on ultrasound images, as the texture does not reflect the local echogenicity of the underlying scatterers. Studies have shown that the presence of speckle noise can reduce a physician's ability to detect lesions by a factor of eight. Without speckle, small high-contrast targets, low-contrast objects, and image texture can be discerned quite readily. Speckle filtering of medical ultrasound images therefore represents a critical pre-processing step, providing clinicians with enhanced diagnostic ability. Efficient speckle noise removal algorithms may also find applications in real-time surgical guidance assemblies. However, it is vital that regions of interest are not compromised during speckle removal. This research pertains to the reduction of speckle noise in ultrasound images while attempting to retain clinical regions of interest. Recently, the advance of wavelet theory has led to many applications in noise reduction and compression. Upon investigation of these two divergent fields, it was found that speckle noise tends to rotate an image's homomorphic complex-wavelet coefficients. This work proposes a new speckle reduction filter involving a counter-rotation of these complex-wavelet coefficients to mitigate the presence of speckle noise. Simulations suggest the proposed denoising technique offers superior visual quality, though its signal-to-mean-square-error ratio (S/MSE) is numerically comparable to that of adaptive Frost and Kuan filtering. This research improves the quality of ultrasound medical images, leading to improved diagnosis for one of the most popular and cost-effective imaging modalities used in clinical medicine.
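    For reference, a minimal homomorphic wavelet-shrinkage baseline is sketched below: log-transform the image so multiplicative speckle becomes approximately additive, soft-threshold the wavelet detail coefficients, and exponentiate back. It uses the real-valued PyWavelets transform and a common universal-threshold rule; it is the standard pipeline this work departs from, not the proposed complex-wavelet counter-rotation filter.

```python
import numpy as np
import pywt

def homomorphic_wavelet_despeckle(img, wavelet="db4", level=3, k=1.0):
    """Baseline despeckling: log transform, wavelet soft-thresholding,
    exponentiation back.  `img` is a positive-valued intensity image."""
    log_img = np.log(img + 1e-8)                       # homomorphic step
    coeffs = pywt.wavedec2(log_img, wavelet, level=level)

    # Universal-style threshold estimated from the finest diagonal detail band.
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thr = k * sigma * np.sqrt(2.0 * np.log(log_img.size))

    new_coeffs = [coeffs[0]] + [
        tuple(pywt.threshold(d, thr, mode="soft") for d in detail)
        for detail in coeffs[1:]
    ]
    return np.exp(pywt.waverec2(new_coeffs, wavelet))
```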