
    A Novel Systematic Error Compensation Algorithm Based on Least Squares Support Vector Regression for Star Sensor Image Centroid Estimation

    Star centroid estimation is the most important operation in a star sensor, as it directly affects the precision of attitude determination. This paper presents a theoretical study of the systematic error introduced by the star centroid estimation algorithm. The systematic error is analyzed through a frequency-domain approach and numerical simulations. It is shown that the systematic error consists of an approximation error and a truncation error, which result from the discretization approximation and the sampling window limitations, respectively. A criterion for choosing the size of the sampling window to reduce the truncation error is given. The systematic error can be evaluated as a function of the actual star centroid position under different Gaussian widths of the star intensity distribution. To eliminate the systematic error, a novel compensation algorithm based on least squares support vector regression (LSSVR) with a radial basis function (RBF) kernel is proposed. Simulation results show that when the compensation algorithm is applied to a 5-pixel star sampling window, the accuracy of star centroid estimation improves from 0.06 to 6 × 10⁻⁵ pixels.
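
    As a rough illustration of the two ingredients this abstract combines, the sketch below implements plain center-of-gravity centroiding and a small least squares SVR with an RBF kernel that could be trained to predict, and then subtract, the systematic centroid error. This is a minimal sketch under assumed hyperparameters (gamma, sigma), not the authors' implementation.

        import numpy as np

        def center_of_gravity(window):
            """Center-of-gravity centroid of a star sampling window, in pixels."""
            rows, cols = np.indices(window.shape)
            total = window.sum()
            return (rows * window).sum() / total, (cols * window).sum() / total

        def lssvr_fit(X, y, gamma=100.0, sigma=0.5):
            """Least squares SVR with an RBF kernel: solve the LS-SVM linear system."""
            K = np.exp(-np.sum((X[:, None, :] - X[None, :, :]) ** 2, -1) / (2 * sigma**2))
            n = len(y)
            A = np.zeros((n + 1, n + 1))
            A[0, 1:] = A[1:, 0] = 1.0
            A[1:, 1:] = K + np.eye(n) / gamma
            sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
            return X, sol[1:], sol[0], sigma  # support data, alpha, bias, kernel width

        def lssvr_predict(model, Xq):
            Xtr, alpha, b, sigma = model
            K = np.exp(-np.sum((Xq[:, None, :] - Xtr[None, :, :]) ** 2, -1) / (2 * sigma**2))
            return K @ alpha + b

    Trained on simulated spots with known true centroids, such a regressor would map an estimated centroid to its systematic error, which is then subtracted from the center-of-gravity estimate.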

    Location-Enabled IoT (LE-IoT): A Survey of Positioning Techniques, Error Sources, and Mitigation

    The Internet of Things (IoT) has started to empower the future of many industrial and mass-market applications. Localization techniques are becoming key to adding location context to IoT data without human perception or intervention. Meanwhile, the newly emerged Low-Power Wide-Area Network (LPWAN) technologies have advantages such as long range, low power consumption, low cost, massive connections, and the capability to communicate in both indoor and outdoor areas. These features make LPWAN signals strong candidates for mass-market localization applications. However, various error sources limit the localization performance achievable with such IoT signals. This paper reviews IoT localization systems in the following sequence: IoT localization system review -- localization data sources -- localization algorithms -- localization error sources and mitigation -- localization performance evaluation. Compared with related surveys, this paper provides a more comprehensive and state-of-the-art review of IoT localization methods, an original review of IoT localization error sources and mitigation, an original review of IoT localization performance evaluation, and a more comprehensive review of IoT localization applications, opportunities, and challenges. Thus, this survey provides comprehensive guidance for peers who are interested in enabling localization in existing IoT systems, using IoT systems for localization, or integrating IoT signals with existing localization sensors.
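
    To make one branch of the reviewed "localization algorithms" concrete, the sketch below pairs RSSI ranging under a log-distance path-loss model with linearized least-squares multilateration, a technique commonly applied to LPWAN signals. It is a minimal illustration: the transmit power, path-loss exponent, and anchor layout are assumed values, not parameters from the paper.

        import numpy as np

        def rssi_to_distance(rssi_dbm, tx_power_dbm=-40.0, path_loss_exp=2.7):
            """Invert the log-distance path-loss model RSSI = P0 - 10*n*log10(d)."""
            return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

        def multilaterate(anchors, distances):
            """Linearized least-squares 2D position fix from >= 3 anchor ranges."""
            (x0, y0), d0 = anchors[0], distances[0]
            A, b = [], []
            for (xi, yi), di in zip(anchors[1:], distances[1:]):
                A.append([2 * (xi - x0), 2 * (yi - y0)])
                b.append(d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
            sol, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
            return sol  # estimated (x, y)

    In practice the ranging step dominates the error budget, which is why surveys of this kind devote so much attention to error sources and their mitigation.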

    A Novel Approach Based on MEMS-Gyro's Data Deep Coupling for Determining the Centroid of Star Spot

    The traditional approach used by star trackers to determine the centroid of a star spot requires sufficient energy and a good spot shape, so a relatively long exposure time and a stable three-axis state become necessary to maintain high accuracy; these requirements limit the update rate and dynamic performance. In view of these issues, this paper presents an approach for determining the centroid of a star spot based on deep coupling of MEMS gyro data: it achieves deep fusion of star tracker and MEMS gyro data at the star map level through the introduction of an EKF. The trajectory predicted from the three-axis angular velocity is used to set the extraction window, which enhances dynamic performance by keeping extraction accurate while the satellite has angular speed. The optimal estimates of the centroid position and of the drift in the MEMS gyro output signal reduce the influence of detector noise on the accuracy of traditional centroiding and effectively correct the gyro output signal. Finally, the feasibility of this approach is verified by simulation.
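
    The window-setting idea is easy to state in code. The sketch below shows only the gyro-driven prediction that places the extraction window; the paper's approach additionally wraps this in an EKF that jointly estimates the centroid and the gyro drift. The sign conventions and the pinhole small-angle model are assumptions made for illustration.

        def predict_window(centroid, gyro_rates, dt, focal_px, half_size=5):
            """Predict the next star-spot pixel position from body angular rates
            (small-angle approximation) and place the extraction window around it.
            The drift-estimating EKF of the paper is omitted in this sketch."""
            wx, wy = gyro_rates[0], gyro_rates[1]  # rad/s, cross-boresight axes
            u = centroid[0] + focal_px * wy * dt   # rotation about y shifts u
            v = centroid[1] + focal_px * wx * dt   # rotation about x shifts v
            return (u, v), (int(u) - half_size, int(u) + half_size,
                            int(v) - half_size, int(v) + half_size)

    Because the window follows the predicted trajectory, the spot stays inside a small region of interest even under rotation, which is what preserves accuracy at high angular rates.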

    Image Simulation in Remote Sensing

    Remote sensing is actively researched in the environmental, military, and urban-planning fields through technologies such as monitoring of natural climate phenomena on Earth, land cover classification, and object detection. Recently, satellites equipped with observation cameras of various resolutions have been launched, and remote sensing images are acquired by various observation methods, including cluster satellites. However, the atmospheric and environmental conditions present in the observed scene degrade the quality of images or interrupt the capture of the Earth's surface information. One way to overcome this is to generate synthetic images through image simulation. Synthetic images can be generated using statistical or knowledge-based models, or using spectral and optics-based models, to stand in for an unobtainable image at a required time. The proposed methodologies provide economical utility in the generation of image learning materials and time series data through image simulation. The six published articles cover various topics and applications central to remote sensing image simulation. Although submission to this Special Issue is now closed, the need remains for further in-depth research and development on image simulation at high spatial and spectral resolution, sensor fusion, and colorization. I would like to take this opportunity to express my most profound appreciation to the MDPI Book staff, the editorial team of the Applied Sciences journal, especially Ms. Nimo Lang, the assistant editor of this Special Issue, the talented authors, and the professional reviewers.

    Elevation and Deformation Extraction from TomoSAR

    3D SAR tomography (TomoSAR) and 4D SAR differential tomography (Diff-TomoSAR) exploit multi-baseline SAR data stacks to provide an essential innovation of SAR interferometry for many applications, sensing complex scenes with multiple scatterers mapped into the same SAR pixel cell. However, these techniques are still affected by DEM uncertainty, temporal decorrelation, orbital, tropospheric and ionospheric phase distortion, and height blurring. This thesis explores these techniques. As part of this exploration, systematic procedures for DEM generation, DEM quality assessment, DEM quality improvement, and DEM applications are first studied. The thesis then focuses on the whole cycle of systematic methods for 3D and 4D TomoSAR imaging for height and deformation retrieval, from problem formulation, through the development of methods, to testing on real SAR data. After an introduction to DEM generation from spaceborne bistatic InSAR (TanDEM-X) and airborne photogrammetry (Bluesky), a new DEM co-registration method with line-feature validation (river network lines, ridgelines, valley lines, crater boundary features, and so on) is developed and demonstrated to assist the study of wide-area DEM data quality. This co-registration method aligns two DEMs irrespective of the linear distortion model, significantly improving the accuracy of vertical DEM comparison, and is suitable and helpful for DEM quality assessment. A systematic TomoSAR algorithm and method are established, tested, analysed, and demonstrated for various applications (urban buildings, bridges, dams) to achieve better 3D and 4D tomographic SAR imaging results. These include applying COSMO-SkyMed X-band single-polarisation data over the Zipingpu dam, Dujiangyan, Sichuan, China, to map topography, and using ALOS L-band data in the San Francisco Bay region to map urban buildings and bridges. A new ionospheric correction method, based on a tile method employing IGS TEC data, split-spectrum processing, and an ionospheric model fitted via least squares, is developed to correct ionospheric distortion and improve the accuracy of 3D and 4D tomographic SAR imaging. Meanwhile, a pixel-by-pixel orbit baseline estimation method is developed to address the research gap in baseline estimation for 3D and 4D spaceborne SAR tomography imaging. Moreover, to obtain accurate 3D and 4D tomographic SAR imaging results, the thesis develops a SAR tomography imaging algorithm and a differential tomography 4D SAR imaging algorithm based on compressive sensing; an InSAR phase calibration referenced to a DEM with DEM error correction; and a new phase error calibration and compensation algorithm based on PS, SVD, PGA, weighted least squares, and minimum entropy. The new baseline estimation method and the consequent TomoSAR processing results show that accurate baseline estimation is essential to building the TomoSAR model. After baseline estimation, phase calibration experiments (via FFT and Capon methods) indicate that a phase calibration step is indispensable for TomoSAR imaging and ultimately influences the inversion results. A super-resolution reconstruction study based on CS demonstrates that X-band data with the CS method are not suitable for forest reconstruction but do work for reconstructing large civil engineering structures such as dams and urban buildings. Meanwhile, L-band data with the FFT, Capon, and CS methods are shown to work for the reconstruction of large man-made structures (such as bridges) and urban buildings.
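
    For orientation, the per-pixel inversion at the heart of CS-based TomoSAR can be sketched as follows: a steering matrix maps candidate elevations to multi-baseline phases, and a sparse solver recovers the few scatterers sharing the pixel. This is a minimal sketch under a far-field phase model with an elementary iterative soft-thresholding solver; the thesis's calibrated processing chain (baseline estimation, phase calibration, DEM error correction) is not reproduced here, and all parameters are illustrative.

        import numpy as np

        def steering_matrix(baselines_m, heights_m, wavelength_m, range_m):
            """TomoSAR steering matrix: phase ~ 4*pi*b*s / (lambda*r) for each
            perpendicular baseline b and candidate elevation s (far field)."""
            xi = 4 * np.pi * np.asarray(baselines_m)[:, None] / (wavelength_m * range_m)
            return np.exp(1j * xi * np.asarray(heights_m)[None, :])

        def ista_l1(A, y, lam=0.1, n_iter=500):
            """Iterative soft-thresholding for the complex lasso
            min ||A g - y||^2 + lam * ||g||_1 (a basic CS solver)."""
            g = np.zeros(A.shape[1], dtype=complex)
            step = 1.0 / np.linalg.norm(A, 2) ** 2
            for _ in range(n_iter):
                r = g + step * (A.conj().T @ (y - A @ g))
                mag = np.abs(r)
                g = r / np.maximum(mag, 1e-12) * np.maximum(mag - lam * step, 0)
            return g  # sparse reflectivity profile along elevation

    Running the solver pixel by pixel over the stack yields the elevation (and, in the 4D differential case, deformation-rate) profile whose peaks mark the layered scatterers.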

    Exploring space situational awareness using neuromorphic event-based cameras

    The orbits around Earth are a limited natural resource, and one that hosts a vast range of vital space-based systems supporting international use by commercial industries, civil organisations, and national defence. The availability of this space resource is rapidly depleting due to the ever-growing presence of space debris and rampant overcrowding, especially in the limited and highly desirable slots in geosynchronous orbit. The field of Space Situational Awareness encompasses tasks aimed at mitigating these hazards to on-orbit systems through the monitoring of satellite traffic. Essential to this task is the collection of accurate and timely observation data. This thesis explores the use of a novel sensor paradigm to optically collect and process sensor data to enhance and improve space situational awareness tasks. Solving this issue is critical to ensuring that we can continue to utilise the space environment in a sustainable way. These tasks, however, pose significant engineering challenges involving the detection and characterisation of faint, highly distant, and high-speed targets. Recent advances in neuromorphic engineering have led to the availability of high-quality neuromorphic event-based cameras that provide a promising alternative to the conventional cameras used in space imaging. These cameras offer the potential to improve the capabilities of existing space tracking systems and have been shown to detect and track satellites, or ‘Resident Space Objects’, at low data rates, at high temporal resolution, and in conditions typically unsuitable for conventional optical cameras. This thesis presents a thorough exploration of neuromorphic event-based cameras for space situational awareness tasks and establishes a rigorous foundation for event-based space imaging. The work conducted in this project demonstrates how to build event-based space imaging systems that serve the goals of space situational awareness by providing accurate and timely information on the space domain. By developing and implementing event-based processing techniques, the asynchronous operation, high temporal resolution, and dynamic range of these novel sensors are leveraged to provide low-latency target acquisition and rapid reaction to challenging satellite tracking scenarios. The algorithms and experiments developed in this thesis study the properties and trade-offs of event-based space imaging and provide comparisons with traditional observing methods and conventional frame-based sensors. The outcomes demonstrate the viability of event-based cameras for tracking and space imaging tasks, and thereby contribute to the growing efforts of the international space situational awareness community and to the development of event-based technology in astronomy and space science applications.
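
    As a flavour of the event-based processing such systems rely on, the sketch below bins an asynchronous event stream into a short-exposure count image and flags statistically bright pixels as candidate objects. The structured-array event format and the detection threshold are illustrative assumptions, not the thesis's algorithms.

        import numpy as np

        def accumulate_events(events, shape, t_window):
            """Bin the asynchronous events from the most recent t_window seconds
            into a count image; streaks and points can then be thresholded."""
            t_now = events['t'].max()
            recent = events[events['t'] >= t_now - t_window]
            img = np.zeros(shape, dtype=np.int32)
            np.add.at(img, (recent['y'], recent['x']), 1)
            return img

        def detect_candidates(img, k=5.0):
            """Flag pixels whose event count is k sigma above the background."""
            mu, sigma = img.mean(), img.std()
            ys, xs = np.nonzero(img > mu + k * sigma)
            return list(zip(xs, ys))

    Because only changed pixels generate events, the accumulation window can be made very short without starving the detector of signal, which is the source of the low-latency acquisition described above.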

    Inverse problems in astronomical and general imaging

    The resolution and the quality of an imaged object are limited by four contributing factors. Firstly, the primary resolution limit of a system is imposed by the aperture of an instrument due to the effects of diffraction. Secondly, the finite sampling frequency, the finite measurement time and the mechanical limitations of the equipment also affect the resolution of the images captured. Thirdly, the images are corrupted by noise, a process inherent to all imaging systems. Finally, a turbulent imaging medium introduces random degradations to the signals before they are measured. In astronomical imaging, it is the atmosphere which distorts the wavefronts of the objects, severely limiting the resolution of the images captured by ground-based telescopes. These four factors affect all real imaging systems to varying degrees. All the limitations imposed on an imaging system result in the need to deduce or reconstruct the underlying object distribution from the distorted measured data. This class of problems is called inverse problems. The key to the success of solving an inverse problem is the correct modelling of the physical processes which give rise to the corresponding forward problem. However, the physical processes have an infinite amount of information, but only a finite number of parameters can be used in the model. Information loss is therefore inevitable. As a result, the solution to many inverse problems requires additional information or prior knowledge. The application of prior information to inverse problems is a recurrent theme throughout this thesis. An inverse problem that has been an active research area for many years is interpolation, and there exist numerous techniques for solving this problem. However, many of these techniques neither account for the sampling process of the instrument nor include prior information in the reconstruction. These factors are taken into account in the proposed optimal Bayesian interpolator. The process of interpolation is also examined from the point of view of superresolution, as these processes can be viewed as being complementary. Since the principal effect of atmospheric turbulence on an incoming wavefront is a phase distortion, most of the inverse problem techniques devised for this seek to either estimate or compensate for this phase component. These techniques are classified into computer post-processing methods, adaptive optics (AO) and hybrid techniques. Blind deconvolution is a post-processing technique which uses the speckle images to estimate both the object distribution and the point spread function (PSF), the latter of which is directly related to the phase. The most successful approaches are based on characterising the PSF as the aberrations over the aperture. Since the PSF is also dependent on the atmosphere, it is possible to constrain the solution using the statistics of the atmosphere. An investigation shows the feasibility of this approach. Bispectrum is also a post-processing method which reconstructs the spectrum of the object. The key component for phase preservation is the property of phase closure, and its application as prior information for blind deconvolution is examined. Blind deconvolution techniques utilise only information in the image channel to estimate the phase, which is difficult. An alternative method for phase estimation is from a Shack-Hartmann (SH) wavefront sensing channel. However, since phase information is present in both the wavefront sensing and the image channels simultaneously, both of these approaches suffer from the problem that phase information from only one channel is used. An improved estimate of the phase is achieved by a combination of these methods, ensuring that the phase estimation is made jointly from the data in both the image and the wavefront sensing measurements. This formulation, posed as a blind deconvolution framework, is investigated in this thesis. An additional advantage of this approach is that since speckle images are imaged in a narrowband, while wavefront sensing images are captured by a charge-coupled device (CCD) camera at all wavelengths, the splitting of the light does not compromise the light level for either channel. This provides a further incentive for using simultaneous data sets. The effectiveness of using Shack-Hartmann wavefront sensing data for phase estimation relies on the accuracy of locating the data spots. The commonly used method, which calculates the centre of gravity of the image, is in fact prone to noise and is suboptimal. An improved method for spot location based on blind deconvolution is demonstrated. Ground-based adaptive optics (AO) technologies aim to correct for atmospheric turbulence in real time. Although much success has been achieved, the space- and time-varying nature of the atmosphere renders the accurate measurement of atmospheric properties difficult. It is therefore usual to perform additional post-processing on the AO data. As a result, some of the techniques developed in this thesis are applicable to adaptive optics. One of the methods which utilise elements of both adaptive optics and post-processing is the hybrid technique of deconvolution from wavefront sensing (DWFS). Here, both the speckle images and the SH wavefront sensing data are used. The original proposal of DWFS is simple to implement but suffers from the problem that the magnitude of the object spectrum cannot be reconstructed accurately. The solution proposed for overcoming this is to use an additional set of reference star measurements. This, however, does not completely remove the original problem; in addition, it introduces other difficulties associated with reference star measurements, such as anisoplanatism and the reduction of valuable observing time. In this thesis a parameterised solution is examined which removes the need for a reference star, as well as offering the potential to overcome the problem of estimating the magnitude of the object spectrum.
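
    The forward model underlying all of these post-processing methods is i = o * psf + n, i.e. convolution with the point spread function plus noise. The sketch below shows the classical non-blind Wiener inverse of that model, useful as a baseline: blind deconvolution, as discussed above, must additionally estimate the PSF itself. The regularisation constant k stands in for an assumed noise-to-signal power ratio.

        import numpy as np

        def wiener_deconvolve(image, psf, k=0.01):
            """Classical non-blind Wiener deconvolution of i = o * psf + n."""
            pad = np.zeros_like(image, dtype=float)
            pad[:psf.shape[0], :psf.shape[1]] = psf
            # move the PSF centre to the (0, 0) corner so the deconvolution
            # does not translate the restored image
            pad = np.roll(pad, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)),
                          axis=(0, 1))
            H = np.fft.fft2(pad)
            G = np.conj(H) / (np.abs(H) ** 2 + k)
            return np.real(np.fft.ifft2(np.fft.fft2(image) * G))

    When the PSF is unknown, as for atmospheric speckle data, the same forward model is retained but the PSF becomes an additional unknown constrained by priors such as aperture aberrations, atmospheric statistics, or phase closure.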

    Object Manipulation and Grip Force Control Using Tactile Sensors

    This dissertation describes a new type of tactile sensor and an improved version of the dynamic tactile sensing approach that can provide a regularly updated and accurate estimate of the minimum applied force for use in controlling gripper manipulation. A pre-slip sensing algorithm is proposed and implemented in a two-finger robot gripper. An algorithm that can discriminate between types of contact surface and recognize object shapes at the contact stage is also proposed. A technique for recognizing objects using tactile sensor arrays and a method based on quadric surface parameters for classifying grasped objects are described. Tactile arrays can recognize surface types on contact, making it possible for a tactile system to recognize translation, rotation, and scaling of an object independently.
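
    One common way to realise pre-slip sensing, sketched below on assumed numbers, is to monitor band-limited vibration energy in the tangential-force signal (micro-vibrations precede gross slip) and to raise the grip force only when that energy crosses a threshold, keeping the applied force near the minimum. The frequency band, threshold, and update rule are illustrative assumptions, not the dissertation's algorithm.

        import numpy as np

        def detect_pre_slip(force_samples, fs, band=(50.0, 400.0), threshold=0.02):
            """Flag incipient slip from high-frequency energy in a windowed
            tangential-force signal sampled at fs Hz."""
            spectrum = np.fft.rfft(force_samples * np.hanning(len(force_samples)))
            freqs = np.fft.rfftfreq(len(force_samples), 1.0 / fs)
            mask = (freqs >= band[0]) & (freqs <= band[1])
            energy = np.sum(np.abs(spectrum[mask]) ** 2) / len(force_samples)
            return energy > threshold

        def grip_update(current_force, slipping, step=0.1, f_min=0.5):
            """Hold near the minimum grip force; increase only on pre-slip."""
            return max(f_min, current_force + step if slipping
                       else current_force - 0.01)

    Reacting at the pre-slip stage rather than after gross slip is what allows the controller to grasp with the minimum force needed to hold the object securely.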