
    Theoretical Aspects and Real Issues in an Integrated Multiradar System

    In the last few years, Homeland Security (HS) has gained considerable interest in the research community. From a scientific point of view, it is difficult to define this research area and to draw its boundaries exactly. When we talk about security and surveillance, several problems and aspects must be considered. In particular, the following factors play a crucial role and define the complexity of the application field: the number of potential threats can be high and uncertain; threat detection and identification can be complicated by the use of camouflaging techniques; the monitored area is typically wide and requires a large, heterogeneous sensor network; and the surveillance operation is strongly tied to the operational scenario, so no single approach can solve the problem [1]. Information Technology (IT) can provide important support to HS in preventing, detecting and providing early warning of threats. Even though the link between IT and HS is relatively recent, sensor integration and collaboration is a widely applied technique aimed at aggregating data from multiple sources, yielding timely information on potential threats and improving the accuracy of event monitoring [2]. A large number of sensors have already been developed to support surveillance operations. In parallel with this technological effort to develop new, powerful, dedicated sensors, interest in combining stand-alone sensors into an integrated multi-sensor system has been increasing. Rather than developing new sensors to achieve more accurate tracking and surveillance, it is often more useful to integrate existing stand-alone sensors into a single system in order to obtain performance improvements. In this dissertation, a notional integrated multi-sensor system acting in a maritime border-control scenario for HS is considered.
In general, a border surveillance system is composed of multiple land-based and moving platforms carrying different types of sensors [1]. In the typical scenario described in [1], the integrated system is composed of a land-based platform, located on the coast, and an airborne platform moving in front of the coastline. This dissertation addresses two fundamental aspects. In Part I, we focus on a single sensor in the system: the airborne radar. We analyze the tracking performance of such a sensor in the presence of two different atmospheric phenomena: turbulence (Chapter 1) and tropospheric refraction (Chapter 2). In particular, Chapter 1 quantifies the loss in tracking accuracy of a turbulence-ignorant tracking filter (i.e., a filter that does not account for atmospheric turbulence) operating in a turbulent scenario. Chapter 2 focuses on the tropospheric propagation effects on radar electromagnetic (EM) signals and their correction for airborne radar tracking. It is well known that the troposphere is characterized by a refractive index that varies with altitude and with the local weather; this variability causes errors in the radar measurements. First, a mathematical model to describe and calculate the EM radar-signal ray path in the troposphere is discussed. Using this model, the errors due to tropospheric propagation are evaluated, and corrupted radar measurements are then numerically generated. Second, a tracking algorithm based on the Kalman filter is proposed that is able to mitigate the tropospheric errors during tracking. In Part II, we consider the integrated system as a whole to investigate a fundamental prerequisite of any data fusion process: sensor registration.
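As a rough illustration of the tropospheric bias such a tracker must remove, the extra apparent range can be approximated by integrating the refractivity of a standard exponential atmosphere along the ray. The sketch below uses a flat-earth, straight-ray approximation with textbook parameter values (surface refractivity N0 and scale height H); it is not the ray-path model developed in the dissertation:

```python
import numpy as np

def refractivity(h_m, N0=315.0, H=7000.0):
    """Exponential reference atmosphere: N(h) = N0 * exp(-h / H).
    N0 (N-units) and H (m) are typical textbook values, not fitted parameters."""
    return N0 * np.exp(-h_m / H)

def apparent_range_error(h_target_m, elevation_deg, steps=10000):
    """Extra path length (m) accumulated because the refractive index n > 1.
    Flat-earth, straight-ray sketch: integrate (n - 1) over the geometric
    path, with path element ds = dh / sin(elevation)."""
    sin_el = np.sin(np.radians(elevation_deg))
    heights = np.linspace(0.0, h_target_m, steps)
    n_minus_1 = refractivity(heights) * 1e-6     # N-units -> (n - 1)
    dh = heights[1] - heights[0]
    return np.sum(n_minus_1) * dh / sin_el
```

Even this crude model reproduces the qualitative behaviour the chapter exploits: the bias is metre-level for a target at 10 km altitude and grows rapidly at low elevation angles, where the ray stays longer in the dense lower troposphere.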
The problem of sensor registration (also termed, for naval systems, the grid-locking problem) arises when data coming from two or more sensors must be combined. It involves a coordinate transformation and the mutual alignment of the various sensors: streams of data from different sensors must be converted into a common coordinate system (or frame) and aligned before they can be used in a tracking or surveillance system. If not corrected, registration errors can seriously degrade global system performance by increasing tracking errors and even introducing ghost tracks. A first basic distinction is usually made between relative grid-locking and absolute grid-locking. The relative grid-locking process aligns remote data to local data under the assumption that the local data are bias-free and that all biases reside with the remote sensor; in reality, however, the local sensor is also affected by bias. Chapter 3 of this dissertation is dedicated to the solution of the relative grid-locking problem. Two different estimation algorithms are proposed: a linear Least Squares (LS) algorithm and an Expectation-Maximization-based (EM) algorithm. The linear LS algorithm is simple and fast, but numerical results have shown that the LS estimator is not efficient for most of the registration bias errors; this inefficiency could be caused by the linearization the algorithm implies. To obtain a more efficient estimator, an Expectation-Maximization algorithm is therefore derived. In Chapter 4 we generalize our findings to the absolute grid-locking problem. Part III of this dissertation is devoted to a more theoretical aspect of fundamental importance in many practical applications: the estimation of the disturbance covariance matrix. Because of its relevance, a large body of work on this topic can be found in the literature.
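To make the relative grid-locking LS step above concrete, here is a minimal linear least-squares sketch in a 2D Cartesian frame: it estimates a small rotation bias and two translation offsets of the remote sensor from targets observed by both sensors. The small-angle linearization and the three-parameter bias model are illustrative assumptions; the dissertation's model includes further sensor bias terms:

```python
import numpy as np

def estimate_registration_bias(local_xy, remote_xy):
    """Linear LS estimate of remote-sensor registration biases.

    Small-angle model: a remote point (x, y) maps to local coordinates as
        x_loc ~ x - theta * y + bx,    y_loc ~ y + theta * x + by,
    so the residuals (local - remote) are linear in [theta, bx, by].
    """
    x, y = remote_xy[:, 0], remote_xy[:, 1]
    A = np.zeros((2 * len(x), 3))
    A[0::2, 0], A[0::2, 1] = -y, 1.0      # rows for the x-residuals
    A[1::2, 0], A[1::2, 2] = x, 1.0       # rows for the y-residuals
    b = (local_xy - remote_xy).reshape(-1)
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params                          # [theta, bx, by]
```

Because the design matrix comes from a linearization, the estimator degrades as the true rotation bias grows, which is the kind of inefficiency the text attributes to the linear LS approach.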
Recently, a new geometrical concept has been applied to this estimation problem: Riemannian (or intrinsic) geometry. In Chapter 5, we give an overview of the state of the art in applying Riemannian geometry to covariance matrix estimation in radar problems. Particular attention is given to the detection problem in additive clutter. Some covariance matrix estimators and a new decision rule based on Riemannian geometry are analyzed, and their performance is compared with that of the classical ones. [1] S. Giompapa, "Analysis, modeling, and simulation of an integrated multi-sensor system for maritime border control," PhD dissertation, University of Pisa, April 2008. [2] H. Chen, F. Y. Wang, and D. Zeng, "Intelligence and security informatics for Homeland Security: information, communication and transportation," IEEE Transactions on Intelligent Transportation Systems, vol. 5, no. 4, pp. 329-341, December 2004.
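A concrete instance of the intrinsic approach surveyed in Chapter 5 is the Karcher (Fréchet) mean of a set of SPD covariance estimates under the affine-invariant Riemannian metric, computed by the standard fixed-point iteration with matrix log/exp maps. This is the generic textbook iteration, not a specific estimator from the dissertation:

```python
import numpy as np

def _spd_fun(A, fun):
    # apply a scalar function to the eigenvalues of a symmetric PD matrix
    w, V = np.linalg.eigh(A)
    return V @ np.diag(fun(w)) @ V.T

def riemannian_mean(mats, iters=50, tol=1e-10):
    """Karcher mean of SPD matrices under the affine-invariant metric.

    Iterate: map all matrices to the tangent space at the current mean M
    (via M^{-1/2} C M^{-1/2} and the matrix log), average, and map back.
    """
    M = sum(mats) / len(mats)              # start at the arithmetic mean
    for _ in range(iters):
        S = _spd_fun(M, np.sqrt)
        S_inv = np.linalg.inv(S)
        T = sum(_spd_fun(S_inv @ C @ S_inv, np.log) for C in mats) / len(mats)
        M = S @ _spd_fun(T, np.exp) @ S    # exponential map back to the manifold
        if np.linalg.norm(T) < tol:        # tangent mean ~ 0 => converged
            break
    return M
```

For commuting matrices the Karcher mean reduces to the elementwise geometric mean, which is a handy sanity check; for non-commuting clutter covariance estimates it differs from the arithmetic sample mean, which is the point of the intrinsic estimators.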

    Binocular Ego-Motion Estimation for Driver Assistance Applications

    Driving can be dangerous. Humans become inattentive when performing a monotonous task like driving, and multi-tasking, such as using a cellular phone while driving, can break the driver's concentration and increase the risk of accidents. Other factors like exhaustion, nervousness and excitement affect the driver's performance and response time. Consequently, car manufacturers have in recent decades developed systems which assist the driver under various circumstances; these are called driver assistance systems. Driver assistance systems support the task of driving, and their field of action ranges from alerting the driver with acoustic or optical warnings to taking control of the car, for instance keeping the vehicle in the traffic lane until the driver resumes control. For such purposes, the vehicle is equipped with on-board sensors which allow the perception of the environment and/or the state of the vehicle. Cameras are sensors which extract useful information about the visual appearance of the environment; additionally, a binocular system allows the extraction of 3D information. One of the main requirements for most camera-based driver assistance systems is accurate knowledge of the motion of the vehicle. Some sources of this information, like velocimeters and GPS, are in common use in vehicles today. Nevertheless, the resolution and accuracy usually achieved with these systems are not sufficient for many real-time applications. The computation of ego-motion from sequences of stereo images for the implementation of intelligent driving systems, like autonomous navigation or collision avoidance, constitutes the core of this thesis. This dissertation proposes a framework for the simultaneous computation of the six degrees of freedom of ego-motion (rotation and translation in 3D Euclidean space), the estimation of the scene structure, and the detection and estimation of independently moving objects.
The input is exclusively provided by a binocular system and the framework does not call for any data acquisition strategy, i.e. the stereo images are just processed as they are provided. Stereo allows one to establish correspondences between left and right images, estimating 3D points of the environment via triangulation. Likewise, feature tracking establishes correspondences between the images acquired at different time instances. When both are used together for a large number of points, the result is a set of clouds of 3D points with point-to-point correspondences between clouds. The apparent motion of the 3D points between consecutive frames is caused by a variety of reasons. The most dominant motion for most of the points in the clouds is caused by the ego-motion of the vehicle; as the vehicle moves and images are acquired, the relative position of the world points with respect to the vehicle changes. Motion is also caused by objects moving in the environment. They move independently of the vehicle motion, so the observed motion for these points is the sum of the ego-vehicle motion and the independent motion of the object. A third reason, and of paramount importance in vision applications, is caused by correspondence problems, i.e. the incorrect spatial or temporal assignment of the point-to-point correspondence. Furthermore, all the points in the clouds are actually noisy measurements of the real unknown 3D points of the environment. Solving ego-motion and scene structure from the clouds of points requires some previous analysis of the noise involved in the imaging process, and how it propagates as the data is processed. Therefore, this dissertation analyzes the noise properties of the 3D points obtained through stereo triangulation. This leads to the detection of a bias in the estimation of 3D position, which is corrected with a reformulation of the projection equation. 
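The quadratic growth of triangulation noise with distance follows directly from the pinhole stereo model Z = fB/d: first-order propagation of disparity noise gives sigma_Z = Z^2/(fB) * sigma_d. The sketch below shows only this noise-propagation step (the numbers in the comments are illustrative; the bias correction via the reformulated projection equation is not reproduced here):

```python
def stereo_depth(disparity_px, focal_px, baseline_m):
    """Depth from stereo disparity under the pinhole model: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

def depth_sigma(depth_m, focal_px, baseline_m, sigma_d_px):
    """First-order propagation of disparity noise into depth:
    sigma_Z = Z**2 / (f * B) * sigma_d, i.e. depth uncertainty grows
    quadratically with distance -- the effect analyzed in the text."""
    return depth_m ** 2 / (focal_px * baseline_m) * sigma_d_px
```

For example, with a 1000 px focal length and a 0.3 m baseline, a point at 30 m with half-pixel disparity noise already carries about 1.5 m of depth uncertainty, which is why the noise analysis (and the bias it uncovers) matters for ego-motion estimation.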
Ego-motion is obtained by finding the rotation and translation between the two clouds of points. This problem is known as absolute orientation, and many solutions based on least squares have been proposed in the literature; this thesis reviews the available closed-form solutions to the problem. The proposed framework is divided into three main blocks: 1) stereo and feature-tracking computation, 2) ego-motion estimation, and 3) estimation of 3D point position and 3D velocity. The first block solves the correspondence problem, providing the clouds of points as output; no special implementation of this block is required in this thesis. The ego-motion block computes the motion of the cameras by finding the absolute orientation between the clouds of static points in the environment. Since the cloud of points might contain independently moving objects and outliers generated by false correspondences, direct computation of the least-squares solution might lead to an erroneous result. The first contribution of this thesis is an effective rejection rule that detects outliers based on the distance between predicted and measured quantities, and reduces the effect of noisy measurements by assigning appropriate weights to the data. This method is called the Smoothness Motion Constraint (SMC). The ego-motion of the camera between two frames is obtained by finding the absolute orientation between consecutive clouds of weighted 3D points. The complete ego-motion since initialization is obtained by concatenating the individual motion estimates. This leads to a super-linear propagation of the error, since noise is integrated. A second contribution of this dissertation is a predictor/corrector iterative method, which integrates the clouds of 3D points of multiple time instances into the computation of ego-motion. The presented method considerably reduces the accumulation of errors in the estimated ego-position of the camera.
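The closed-form absolute-orientation solutions the text reviews can be sketched with the classic SVD-based weighted least-squares alignment below; the per-point weights are the hook where an SMC-style scheme would down-weight suspect correspondences. This is a generic textbook (Kabsch-style) solution under the assumption of known point-to-point correspondences, not the thesis's exact implementation:

```python
import numpy as np

def absolute_orientation(P, Q, w=None):
    """Least-squares rotation R and translation t such that Q ~ R @ P + t.

    P, Q: (N, 3) arrays of corresponding 3D points; w: optional weights
    (e.g. from an outlier-rejection rule). Closed-form SVD solution.
    """
    if w is None:
        w = np.ones(len(P))
    w = w / w.sum()
    p_bar = (w[:, None] * P).sum(axis=0)            # weighted centroids
    q_bar = (w[:, None] * Q).sum(axis=0)
    H = (w[:, None] * (P - p_bar)).T @ (Q - q_bar)  # weighted cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_bar - R @ p_bar
    return R, t
```

Concatenating such frame-to-frame estimates integrates their noise, which is exactly why the predictor/corrector multi-frame refinement described above is needed.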
Another contribution of this dissertation is a method which recursively estimates the 3D world position of a point and its velocity by fusing stereo, feature tracking and the estimated ego-motion in a Kalman filter system. An improved estimate of the point position is obtained this way, which is used in the subsequent system cycle, resulting in an improved computation of ego-motion. The general contribution of this dissertation is a single framework for the real-time computation of scene structure, independently moving objects and ego-motion for automotive applications. Driving can be dangerous. Driving performance is influenced by the physical and psychological limits of the driver and by external factors such as the weather. Driver assistance systems increase driving comfort and support the driver in order to reduce the number of accidents. They support the driver with optical or acoustic warning signals, up to the point of the system taking over control of the car. One of the main prerequisites for most driver assistance systems is accurate knowledge of the motion of the ego-vehicle. Various sensors, such as GPS and speedometers, are available today to measure vehicle motion, but their resolution and accuracy are not sufficient for many real-time applications. The computation of ego-motion from stereo image sequences for driver assistance systems, e.g. for autonomous navigation or collision avoidance, forms the core of this work. This dissertation presents a system for the real-time evaluation of a scene, including the detection and assessment of independently moving objects as well as the accurate estimation of the six degrees of freedom of ego-motion. These fundamental components are required in order to develop many intelligent automotive applications that support the driver in different traffic situations.
The system works exclusively with a stereo camera platform as its sensor. Computing the ego-motion and the scene structure requires an analysis of the noise and of the error propagation in the image processing chain. This dissertation therefore analyzes the noise properties of the 3D points obtained by stereo triangulation. This leads to the discovery of a systematic error (bias) in the estimation of the 3D position, which can be corrected by a reformulation of the projection equation. Simulation results show that a significant reduction of the error in the estimated 3D point position is possible. The ego-motion estimate is obtained by estimating the rotation and translation between point clouds. This problem is known as "absolute orientation", and many solutions based on least squares have been proposed in the literature; this work reviews the available closed-form solutions to the problem. The presented system consists of three main building blocks: 1. registration of image features, 2. ego-motion estimation, and 3. iterative estimation of the 3D position and 3D velocity of world points. The first block receives a sequence of rectified images as input and produces from it a list of tracked image features with their corresponding 3D positions. The ego-motion estimation block consists of four main steps executed in a loop: 1. motion prediction, 2. application of the Smoothness Motion Constraint (SMC), 3. computation of the absolute orientation, and 4. motion integration. The SMC proposed in this dissertation is a powerful condition for rejecting outliers and for assigning weights to the measured 3D points. Simulations are carried out with Gaussian and slash noise; the results show the superiority of the SMC over standard weighting methods.
Die StabilitĂ€t der Ergebnisse hinsichtlich Ausreißern wurde analysiert mit dem Resultat, dass der „break down point” grĂ¶ĂŸer als 50% ist. Wenn die vier Schritte iterativ ausgefĂŒhrt, werden wird ein PrĂ€diktor-Korrektor-Verfahren gewonnen.Wir nennen diese SchĂ€tzung Multi-frameschĂ€tzung im Gegensatz zur ZweiframeschĂ€tzung, die nur die aktuellen und vorherigen Bildpaare fĂŒr die Berechnung der Eigenbewegung betrachtet. Die erste Iteration wird zwischen der aktuellen und vorherigen Wolke von Punkten durchgefĂŒhrt. Jede weitere Iteration integriert eine zusĂ€tzliche Punktwolke eines vorherigen Zeitpunkts. Diese Methode reduziert die Fehlerakkumulation bei der Integration von mehreren SchĂ€tzungen in einer einzigen globalen SchĂ€tzung. Simulationsergebnisse zeigen, dass obwohl der Fehler noch superlinear im Laufe der Zeit zunimmt, die GrĂ¶ĂŸe des Fehlers um mehrere GrĂ¶ĂŸenordnungen reduziert wird. Der dritte Block besteht aus der iterativen SchĂ€tzung von 3D-Position und 3D-Geschwindigkeit von Weltpunkten. Hier wird eine Methode basierend auf einem Kalman Filter verwendet, das Stereo, Featuretracking und Eigenbewegungsdaten fusioniert. Messungen der Position eines Weltpunkts werden durch das Stereokamerasystem gewonnen. Die Differenzierung der Position des geschĂ€tzten Punkts erlaubt die zusĂ€tzliche SchĂ€tzung seiner Geschwindigkeit. Die Messungen werden durch das Messmodell gewonnen, das Stereo- und Bewegungsdaten fusioniert. Simulationsergebnisse validieren das Modell. Die Verringerung der Positionsunsicherheit im Laufe der Zeit wird mit einer Monte-Carlo Simulation erzielt. Experimentelle Ergebnisse werden mit langen Sequenzen von Bildern erzielt. ZusĂ€tzliche Tests, einschließlich einer 3D-Rekonstruktion einer Waldszene und der Berechnung der freien Kamerabewegung in einem Indoor-Szenario, wurden durchgefĂŒhrt. Die Methode zeigt gute Ergebnisse in allen FĂ€llen. 
The algorithm also delivers acceptable results when estimating the pose of small objects, such as the heads and legs of real crash-test dummies.
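The recursive position/velocity estimation described above can be illustrated with a minimal 1D constant-velocity Kalman filter: position-only measurements go in, and a velocity estimate emerges from the recursion. The thesis fuses full 3D stereo, feature tracking and ego-motion in its measurement model; this sketch shows only the predict/update cycle, with illustrative noise parameters:

```python
import numpy as np

def kalman_cv(measurements, dt=0.1, q=1e-2, r=0.25):
    """1D constant-velocity Kalman filter: state [position, velocity],
    observing position only. Returns the state estimate at each step."""
    F = np.array([[1.0, dt], [0.0, 1.0]])          # constant-velocity transition
    H = np.array([[1.0, 0.0]])                     # position measurement
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],      # white-acceleration noise
                      [dt**2 / 2, dt]])
    R = np.array([[r]])
    x = np.array([measurements[0], 0.0])
    P = np.eye(2)
    out = []
    for z in measurements:
        x = F @ x                                  # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                        # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
        x = x + (K @ (np.array([z]) - H @ x)).ravel()
        P = (np.eye(2) - K @ H) @ P
        out.append(x.copy())
    return np.array(out)
```

The same structure, with the state extended to 3D position and velocity and the measurement model fusing stereo and ego-motion, underlies the third block of the framework.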

    Simultaneous Target and Multipath Positioning

    In this work, we present the Simultaneous Target and Multipath Positioning (STAMP) technique to jointly estimate the unknown target position and uncertain multipath channel parameters. We illustrate the applications of STAMP to target tracking/geolocation problems using a single-station hybrid TOA/AOA system, monostatic MIMO radar, and multistatic range-based/AOA-based localization systems. The STAMP algorithm is derived in a recursive Bayesian framework by including the target state and multipath channel parameters in a single random vector, and the unknown correspondence between observations and signal propagation channels is resolved using multi-scan multi-hypothesis data association. In the presence of an unknown, time-varying number of multipath propagation modes, the STAMP algorithm is modified based on single-cluster PHD filtering by modeling the multipath parameter state as a random finite set. In this case, the target state is defined as the parent process, updated using a particle filter or a multi-hypothesis Kalman filter, while the multipath channel parameters are defined as the daughter process, updated with an explicit Gaussian mixture PHD filter. Moreover, an identifiability analysis of the joint estimation problem is provided in terms of the Cramér-Rao lower bound (CRLB). The Fisher information contributed by each propagation mode is investigated, and the Fisher information loss caused by measurement-origin uncertainty is also studied. The proposed STAMP algorithms are evaluated on a set of illustrative numerical simulations and real-data experiments with an indoor multi-channel radar testbed. Substantial improvement in target localization accuracy is observed.

    Generic Multisensor Integration Strategy and Innovative Error Analysis for Integrated Navigation

    A modern multisensor integrated navigation system, as applied in most civilian applications, typically consists of GNSS (Global Navigation Satellite System) receivers, IMUs (Inertial Measurement Units), and/or other sensors, e.g., odometers and cameras. With the increasing availability of low-cost sensors, more research and development activities aim to build a cost-effective system without sacrificing navigational performance. The three principal contributions of this dissertation are as follows: i) A multisensor kinematic positioning and navigation system built on the Linux Operating System (OS) with the Real Time Application Interface (RTAI), the York University Multisensor Integrated System (YUMIS), was designed and realized to integrate GNSS receivers, IMUs, and cameras. YUMIS sets a good example of a low-cost yet high-performance multisensor inertial navigation system and lays the groundwork, in a practical and economical way, for personnel training in subsequent academic research. ii) A generic multisensor integration strategy (GMIS) was proposed, whose key features are: a) the core system model is developed upon the kinematics of a rigid body; and b) all sensor measurements are taken as raw measurements in the Kalman filter without differentiation.
The essential competitive advantages of GMIS over conventional error-state-based strategies are: 1) the influence of IMU measurement noise on the final navigation solutions is effectively mitigated because of the increased measurement redundancy upon the angular rate and acceleration of the rigid body; 2) the state and measurement vectors in the GMIS estimator can be easily expanded to fuse multiple inertial sensors and all other types of measurements, e.g., delta positions; 3) one can directly perform error analysis upon both raw sensor data (measurement-noise analysis) and virtual zero-mean process-noise measurements (process-noise analysis) through the corresponding residuals of the individual measurements and of the process-noise measurements. iii) A posteriori variance component estimation (VCE) was innovatively incorporated as an advanced analytical tool in the extended Kalman filter employed by the GMIS, which makes possible, for the first time, the error analysis of raw IMU measurements, together with the individual independent components of the process noise vector.

    Recent Advances in Wireless Communications and Networks

    This book covers current key issues, from the lowest to the upper layers of wireless communication networks, and provides up-to-date research progress on them. The authors have made every effort to organize the information systematically so that it is easily accessible to readers of any level, while maintaining a balance between current research results and their theoretical support. A variety of novel techniques in wireless communications and networks are investigated and presented in detail, with insightful and reader-friendly descriptions suitable for readers of any level, from practicing communication engineers to beginning or professional researchers. All interested readers can easily find noteworthy material in much greater detail than in previous publications and in the references cited in these chapters.

    Sensors and Systems for Indoor Positioning

    This reprint collects the articles that appeared in the Sensors (MDPI) Special Issue on "Sensors and Systems for Indoor Positioning". The published original contributions focus on systems and technologies to enable indoor applications.

    Multistatic radar optimization for radar sensor network applications

    The design of radar sensor networks (RSNs) has seen great advances in recent years. Such systems are characterized by a high degree of design flexibility due to the multiplicity of radar nodes and data fusion approaches. This thesis focuses on the development and analysis of RSN architectures that optimize target detection and positioning performance. A special focus is placed on distributed (statistical) multiple-input multiple-output (MIMO) RSN systems, where spatial diversity can be leveraged to enhance radar target detection capabilities. In the first part of this thesis, spatial diversity is leveraged in conjunction with cognitive waveform selection and design techniques to adapt quickly to target scene variations in real time. In the second part, we investigate the impact of RSN geometry, particularly the placement of multistatic radar receivers, on target positioning accuracy. We develop a framework based on cognitive waveform selection in conjunction with an adaptive receiver-placement strategy to cope with time-varying target scattering characteristics and clutter distribution parameters in a dynamic radar scene. The proposed approach yields better target detection performance and positioning accuracy than conventional methods based on static transmission or a stationary multistatic radar topology. The third part of this thesis examines the coexistence and operation of joint radar and communication systems via two possible architectures. In the first, several communication nodes in a network operate in separate frequency bands. Each node leverages the multi-look diversity of the distributed system by activating radar processing on multiple received bistatic streams, in addition to the pre-existing monostatic processing at each node.
This architecture builds on the fact that a communication signal, such as an Orthogonal Frequency Division Multiplexing (OFDM) waveform, can be well suited for radar tasks if the waveform parameters are chosen so as to perform communication and radar tasks simultaneously. The advantage of using a joint waveform for both applications is the permanent availability of radar and communication functions through better use of the occupied spectrum within the same joint hardware platform. We then examine the second main architecture, which is more complex and deals with separate radar and communication entities under a partial or total spectrum-sharing constraint. We investigate the optimal placement of radar receivers for better target positioning accuracy, reducing radar measurement errors by minimizing the interference caused by simultaneous operation of the communication system. With the proposed receiver-placement approach, better performance in handling and suppressing communication interference at the radar was obtained than with minimizing the geometric dilution of precision (GDOP) alone.
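The GDOP baseline against which the proposed placement strategy is compared can be computed from the Jacobian of the range measurements, i.e. the unit line-of-sight vectors from the receivers to the target. Below is a 2D range-only sketch assuming unit measurement noise; the thesis's placement criterion additionally accounts for communication-induced interference, which is not modeled here:

```python
import numpy as np

def gdop(receivers, target):
    """Geometric dilution of precision for range-based positioning.

    receivers: (M, 2) receiver positions; target: (2,) target position.
    The Jacobian rows are the unit vectors from each receiver to the
    target; GDOP is sqrt(trace((J^T J)^{-1})) for unit-variance ranges.
    """
    diff = target - receivers
    J = diff / np.linalg.norm(diff, axis=1, keepdims=True)
    cov = np.linalg.inv(J.T @ J)          # inverse Fisher information
    return np.sqrt(np.trace(cov))
```

A symmetric receiver ring around the target gives the smallest GDOP, while near-collinear receivers make J^T J close to singular and GDOP blow up, which is why geometry-aware receiver placement matters for positioning accuracy.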

    Sense and Respond

    Over the past century, the manufacturing industry has undergone a number of paradigm shifts: from the Ford assembly line (1900s) and its focus on efficiency to the Toyota production system (1960s) and its focus on effectiveness and JIDOKA; from flexible manufacturing (1980s) to reconfigurable manufacturing (1990s) (both following the trend of mass customization); and from agent-based manufacturing (2000s) to cloud manufacturing (2010s) (both deploying the value stream complexity into the material and information flow, respectively). The next natural evolutionary step is to provide value by creating industrial cyber-physical assets with human-like intelligence. This will only be possible by further integrating strategic smart sensor technology into the manufacturing cyber-physical value-creating processes in which industrial equipment is monitored and controlled for analyzing compression, temperature, moisture, vibrations, and performance. For instance, in the new wave of the 'Industrial Internet of Things' (IIoT), smart sensors will enable the development of new applications by interconnecting software, machines, and humans throughout the manufacturing process, thus enabling suppliers and manufacturers to rapidly respond to changing standards. This reprint of "Sense and Respond" aims to cover recent developments in the field of industrial applications, especially smart sensor technologies that increase the productivity, quality, reliability, and safety of industrial cyber-physical value-creating processes.

    Elevation and Deformation Extraction from TomoSAR

    3D SAR tomography (TomoSAR) and 4D SAR differential tomography (Diff-TomoSAR) exploit multi-baseline SAR data stacks to provide an essential innovation in SAR interferometry for many applications, sensing complex scenes with multiple scatterers mapped into the same SAR pixel cell. However, these techniques are still affected by DEM uncertainty, temporal decorrelation, orbital, tropospheric and ionospheric phase distortion, and height blurring. This thesis explores these techniques. As part of this exploration, systematic procedures for DEM generation, DEM quality assessment, DEM quality improvement and DEM applications are first studied. The thesis then focuses on the whole cycle of systematic methods for 3D and 4D TomoSAR imaging for height and deformation retrieval, from problem formulation, through the development of methods, to testing on real SAR data. After introducing DEM generation from spaceborne bistatic InSAR (TanDEM-X) and airborne photogrammetry (Bluesky), a new DEM co-registration method with line-feature validation (river network lines, ridgelines, valley lines, crater boundary features, and so on) is developed and demonstrated to assist the study of wide-area DEM data quality. This co-registration method aligns two DEMs irrespective of the linear distortion model, significantly improving the accuracy of vertical DEM comparison, and is suitable and helpful for DEM quality assessment. A systematic TomoSAR algorithm and method have been established, tested, analysed and demonstrated for various applications (urban buildings, bridges, dams) to achieve better 3D and 4D tomographic SAR imaging results. These include applying COSMO-SkyMed X-band single-polarisation data over the Zipingpu dam, Dujiangyan, Sichuan, China, to map topography, and using ALOS L-band data in the San Francisco Bay region to map urban buildings and bridges.
A new ionospheric correction method, based on the tile method employing IGS TEC data, a split-spectrum approach and an ionospheric model via least squares, is developed to correct ionospheric distortion and improve the accuracy of 3D and 4D tomographic SAR imaging. Meanwhile, a pixel-by-pixel orbit baseline estimation method is developed to address the research gap in baseline estimation for 3D and 4D spaceborne SAR tomography imaging. Moreover, a SAR tomography imaging algorithm and a differential tomography 4D SAR imaging algorithm based on compressive sensing, InSAR phase calibration referenced to a DEM with DEM error correction, and a new phase-error calibration and compensation algorithm based on PS, SVD, PGA, weighted least squares and minimum entropy are developed to obtain accurate 3D and 4D tomographic SAR imaging results. The new baseline estimation method and the consequent TomoSAR processing results showed that accurate baseline estimation is essential to building the TomoSAR model. After baseline estimation, phase calibration experiments (via the FFT and Capon methods) indicate that a phase calibration step is indispensable for TomoSAR imaging, as it ultimately influences the inversion results. A study of CS-based super-resolution reconstruction demonstrates that X-band data with the CS method are not suitable for forest reconstruction but do work for reconstructing large civil engineering structures such as dams and urban buildings, while L-band data with the FFT, Capon and CS methods are shown to work for the reconstruction of large man-made structures (such as bridges) and urban buildings.
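The compressive-sensing inversion underlying such TomoSAR reconstruction can be sketched with a generic iterative shrinkage-thresholding algorithm (ISTA) for the l1-regularized least-squares problem; in TomoSAR, A would be the steering matrix over the baselines and x the sparse scatterer reflectivity profile along elevation. This is a textbook solver under those assumptions, not the thesis's calibrated processing chain:

```python
import numpy as np

def ista(A, y, lam=0.1, iters=500):
    """Iterative shrinkage-thresholding for
        min_x  0.5 * ||A x - y||^2 + lam * ||x||_1,
    the sparse inversion at the heart of CS-based tomographic imaging."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - y)              # gradient of the quadratic term
        z = x - g / L                      # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x
```

The l1 penalty is what yields the super-resolution separation of multiple scatterers in one pixel from few baselines, at the cost of requiring the sparsity assumption to hold, which is why the method suits discrete structures such as dams, bridges and buildings better than volumetric forest scenes.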

    Model-based Filtering of Interfering Signals in Ultrasonic Time Delay Estimations

    This work presents model-based algorithmic approaches for interference-invariant time delay estimation, which are specifically suited for the estimation of small time delay differences with a necessary resolution well below the sampling time. Therefore, the methods can be applied particularly well for transit-time ultrasonic flow measurements, since the problem of interfering signals is especially prominent in this application