7 research outputs found

    Improvements in space radiation-tolerant FPGA implementation of land surface temperature-split window algorithm

    The trend in satellite remote sensing missions has continuously been toward hardware devices with greater flexibility, smaller size, and higher computational power. Field programmable gate array (FPGA) technology is therefore often used by scientific and equipment developers to implement satellite remote sensing algorithms. This article describes an FPGA-based hardware implementation of the land surface temperature split-window (LST-SW) algorithm. To achieve high-speed processing and real-time operation, the VHSIC hardware description language (VHDL) was employed to design the LST-SW algorithm. The paper presents the benefits of the radiation-tolerant Virtex-4QV series FPGA used. The experimental results revealed that the proposed implementation of the algorithm on the Virtex-4QV achieves a throughput of 435.392 Mbps and a processing time of 2.95 ms. Furthermore, a comparison between the proposed implementation and existing work demonstrated better area utilization: a 1.17% reduction in the number of slices used and a 1.06% reduction in the number of LUTs. Moreover, a significant advantage in area utilization is that no block RAMs are used, compared with the three block RAMs required by the existing work. Finally, the comparison results show improvements of 2.28% in frequency, 3.66× in throughput, and 1.19% in processing time for the proposed implementation.
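    The abstract does not reproduce the split-window formulation itself; as a rough, software-level illustration of what such an FPGA pipeline computes, a generic split-window LST estimate combines the two thermal-band brightness temperatures as sketched below (the coefficient values are hypothetical placeholders, not the paper's).

```python
# Minimal sketch of a generic land surface temperature split-window (LST-SW)
# computation. The exact form and coefficients used in the paper are not given
# in the abstract, so c0, c1, c2 below are placeholder assumptions.

def lst_split_window(t11, t12, c0=1.0, c1=2.0, c2=0.5):
    """Estimate LST (Kelvin) from brightness temperatures in the ~11 um and ~12 um bands."""
    diff = t11 - t12
    return t11 + c0 + c1 * diff + c2 * diff * diff

if __name__ == "__main__":
    # Illustrative brightness temperatures in Kelvin.
    print(lst_split_window(295.0, 293.5))
```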

    Impact of linear dimensionality reduction methods on the performance of anomaly detection algorithms in hyperspectral images

    Anomaly Detection (AD) has recently become an important application of hyperspectral image analysis. The goal of these algorithms is to find objects in the image scene that are anomalous with respect to their surrounding background. One way to improve the performance and runtime of these algorithms is to use Dimensionality Reduction (DR) techniques. This paper evaluates the effect of three popular linear dimensionality reduction methods on the performance of three benchmark anomaly detection algorithms. Principal Component Analysis (PCA), the Fast Fourier Transform (FFT), and the Discrete Wavelet Transform (DWT) act as DR pre-processing steps for the AD algorithms. The assessed AD algorithms are Reed-Xiaoli (RX), the kernel-based version of RX (Kernel-RX), and the Dual Window-Based Eigen Separation Transform (DWEST). The AD methods have been applied to two hyperspectral datasets acquired by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and Hyperspectral Mapper (HyMap) sensors. The experiments were evaluated using Receiver Operating Characteristic (ROC) curves, visual inspection, and algorithm runtime. Experimental results show that the DR methods can significantly improve the detection performance of the RX method, while the detection performance of neither Kernel-RX nor DWEST changes when these DR methods are used. Moreover, these DR methods significantly reduce the runtime of RX and DWEST, making them suitable for real-time applications.
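    As a hedged illustration of the evaluated pipeline (not the authors' code), the sketch below applies PCA as the linear DR pre-processing step and then the global RX detector to the reduced cube; NumPy is used and the data are synthetic stand-ins for the AVIRIS/HyMap scenes.

```python
import numpy as np

def pca_reduce(cube, n_components):
    """cube: (rows, cols, bands) hyperspectral image -> (rows, cols, n_components)."""
    r, c, b = cube.shape
    X = cube.reshape(-1, b).astype(float)
    X -= X.mean(axis=0)
    cov = np.cov(X, rowvar=False)                 # band covariance matrix
    vals, vecs = np.linalg.eigh(cov)
    top = vecs[:, np.argsort(vals)[::-1][:n_components]]  # leading eigenvectors
    return (X @ top).reshape(r, c, n_components)

def rx_scores(cube):
    """Global RX: Mahalanobis distance of each pixel from the background statistics."""
    r, c, b = cube.shape
    X = cube.reshape(-1, b).astype(float)
    mu = X.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))
    D = X - mu
    scores = np.einsum("ij,jk,ik->i", D, cov_inv, D)
    return scores.reshape(r, c)

if __name__ == "__main__":
    cube = np.random.rand(50, 50, 100)            # synthetic stand-in for a real scene
    reduced = pca_reduce(cube, n_components=10)   # DR pre-processing step
    anomaly_map = rx_scores(reduced)              # threshold this map to flag anomalies
```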

    FPGA-Based On-Board Geometric Calibration for Linear CCD Array Sensors

    With increasing demand for real-time or near real-time remotely sensed imagery in applications such as military deployment, quick response to terrorist attacks, and disaster rescue, the on-board geometric calibration problem has attracted the attention of many scientists in recent years. This paper presents an on-board geometric calibration method for linear CCD sensor arrays using FPGA chips. The proposed method mainly consists of four modules—Input Data, Coefficient Calculation, Adjustment Computation and Comparison—in which the parallel computations for building the observation equations and the least squares adjustment are implemented on FPGA chips, for which a decomposed matrix inversion method is presented. A Xilinx Virtex-7 FPGA VC707 chip is selected, and the MOMS-2P inflight geometric calibration data from DLR (Köln, Germany) are employed for validation and analysis. The experimental results demonstrate that: (1) when floating-point data widths from 44-bit to 64-bit are adopted, the FPGA resources, including the utilizations of FF, LUT, memory LUT, I/O and DSP48, are consumed at a rapidly increasing rate; thus, a 50-bit data width is recommended for FPGA-based geometric calibration; (2) increasing the number of ground control points (GCPs) does not significantly increase FPGA resource consumption, so six GCPs are recommended for geometric calibration; (3) the FPGA-based geometric calibration runs approximately 24 times faster than the PC-based one; and (4) the accuracy of the proposed FPGA-based method is nearly identical to that of the inflight calibration when the calibration model and the number of GCPs are the same.
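    The abstract does not give the observation equations of the linear CCD calibration model, so the sketch below only illustrates, in software, the generic least-squares adjustment step that the Adjustment Computation module parallelizes on the FPGA; the design matrix and observation vector are placeholders.

```python
import numpy as np

def least_squares_adjustment(A, l):
    """Solve the normal equations (A^T A) x = A^T l for the calibration parameters.

    A : (n_obs, n_params) design matrix built from the GCP observation equations.
    l : (n_obs,) observation vector.
    """
    N = A.T @ A                      # normal matrix (inverted on the FPGA via a decomposed method)
    t = A.T @ l
    x = np.linalg.solve(N, t)        # estimated calibration parameters
    residuals = A @ x - l
    return x, residuals

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.normal(size=(12, 6))     # e.g. two equations per GCP for six GCPs, six unknowns
    l = rng.normal(size=12)
    x, v = least_squares_adjustment(A, l)
```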

    A Survey on FPGA-Based Sensor Systems: Towards Intelligent and Reconfigurable Low-Power Sensors for Computer Vision, Control and Signal Processing

    The current trend in the evolution of sensor systems seeks ways to provide more accuracy and resolution while decreasing size and power consumption. The use of Field Programmable Gate Arrays (FPGAs) provides specific reprogrammable hardware technology that can be exploited to obtain a reconfigurable sensor system. This adaptation capability enables the implementation of complex applications using partial reconfiguration at very low power consumption. For highly demanding tasks, FPGAs have been favored due to the high efficiency provided by their architectural flexibility (parallelism, on-chip memory, etc.), reconfigurability and superb performance in the development of algorithms. FPGAs have improved the performance of sensor systems and have triggered a clear increase in their use in new fields of application. A new generation of smarter, reconfigurable and lower-power sensors is being developed in Spain based on FPGAs. This paper reviews these developments, describing the FPGA technologies employed by the different research groups and providing an overview of future research within this field. The research leading to these results has received funding from the Spanish Government and European FEDER funds (DPI2012-32390), the Valencia Regional Government (PROMETEO/2013/085) and the University of Alicante (GRE12-17).

    Hyperspectral Unmixing Overview: Geometrical, Statistical, and Sparse Regression-Based Approaches

    Imaging spectrometers measure electromagnetic energy scattered in their instantaneous field of view in hundreds or thousands of spectral channels with higher spectral resolution than multispectral cameras; they are therefore often referred to as hyperspectral cameras (HSCs). The higher spectral resolution enables material identification via spectroscopic analysis, which facilitates countless applications that require identifying materials in scenarios unsuitable for classical spectroscopic analysis. Due to the low spatial resolution of HSCs, microscopic material mixing, and multiple scattering, the spectra measured by HSCs are mixtures of the spectra of the materials in a scene; accurate estimation therefore requires unmixing. Pixels are assumed to be mixtures of a few materials, called endmembers. Unmixing involves estimating all or some of: the number of endmembers, their spectral signatures, and their abundances at each pixel. Unmixing is a challenging, ill-posed inverse problem because of model inaccuracies, observation noise, environmental conditions, endmember variability, and data set size. Researchers have devised and investigated many models in search of robust, stable, tractable, and accurate unmixing algorithms. This paper presents an overview of unmixing methods from the time of Keshava and Mustard's unmixing tutorial [1] to the present. Mixing models are discussed first, followed by signal-subspace, geometrical, statistical, sparsity-based, and spatial-contextual unmixing algorithms. Mathematical problems and potential solutions are described, and algorithm characteristics are illustrated experimentally. Comment: This work has been accepted for publication in the IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing.
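    As a minimal illustration of the linear mixing model underlying most of the surveyed algorithms (not any specific method from the paper), the sketch below simulates a mixed pixel from a few endmember spectra and recovers approximate abundances with an unconstrained least-squares inversion; constrained (nonnegative, sum-to-one) solvers are used in practice.

```python
import numpy as np

def mix(endmembers, abundances, noise_std=0.0):
    """Linear mixing model: endmembers (bands, p) @ abundances (p,) plus noise."""
    x = endmembers @ abundances
    return x + noise_std * np.random.randn(endmembers.shape[0])

def unmix_ls(endmembers, pixel):
    """Unconstrained least-squares abundance estimate for a single pixel spectrum."""
    a, *_ = np.linalg.lstsq(endmembers, pixel, rcond=None)
    return a

if __name__ == "__main__":
    M = np.random.rand(200, 3)            # 200 bands, 3 hypothetical endmember signatures
    a_true = np.array([0.6, 0.3, 0.1])    # abundances sum to one
    x = mix(M, a_true, noise_std=0.01)
    print(unmix_ls(M, x))                 # approximately recovers a_true
```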

    On Board Georeferencing Using FPGA-Based Optimized Second Order Polynomial Equation

    For real-time monitoring of natural disasters, such as fire, volcano, flood, landslide, and coastal inundation, highly accurate georeferenced remotely sensed imagery is needed. Georeferenced imagery can be fused with geographic spatial data sets to provide geographic coordinates and positioning for regions of interest. This paper proposes an on-board georeferencing method for remotely sensed imagery comprising the following modules: input data, coordinate transformation, bilinear interpolation, and output data. The experimental results demonstrate multiple benefits of the proposed method: (1) the computation speed using the proposed algorithm is 8 times faster than that of a PC implementation; (2) the resources of the field programmable gate array (FPGA) can meet the design requirements: the coordinate transformation scheme uses 250,656 LUTs, 499,268 registers, and 388 DSP48s, and the bilinear interpolation module uses 27,218 LUTs, 45,823 registers, 456 RAM/FIFO, and 267 DSP48s; (3) the root mean square errors (RMSEs) are less than one pixel, and the other statistics, such as maximum error, minimum error, and mean error, are also less than one pixel; (4) the gray values of the georeferenced image implemented using the FPGA have the same accuracy as those implemented using MATLAB and Visual Studio (C++), and accuracy very close to that obtained using ENVI software; and (5) the on-chip power consumption is 0.659 W. Therefore, it can be concluded that the proposed georeferencing method, implemented on an FPGA with a second-order polynomial model and a bilinear interpolation algorithm, can achieve real-time geographic referencing of remotely sensed imagery.
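    A minimal software sketch of the two computational stages named in the abstract is given below: a second-order polynomial mapping from output to source image coordinates, followed by bilinear interpolation of the grey value; the coefficient ordering and values are assumptions for illustration only, not the paper's calibrated model.

```python
import numpy as np

def poly2_transform(x, y, a, b):
    """Second-order polynomial mapping from output (x, y) to source image coordinates.

    a, b: length-6 coefficient vectors [c0, c1, c2, c3, c4, c5] (hypothetical ordering).
    """
    terms = np.array([1.0, x, y, x * y, x * x, y * y])
    return float(a @ terms), float(b @ terms)

def bilinear(img, u, v):
    """Bilinearly interpolate the grey value of img at fractional coordinates (u, v)."""
    i, j = int(np.floor(u)), int(np.floor(v))
    du, dv = u - i, v - j
    return ((1 - du) * (1 - dv) * img[i, j] + du * (1 - dv) * img[i + 1, j]
            + (1 - du) * dv * img[i, j + 1] + du * dv * img[i + 1, j + 1])

if __name__ == "__main__":
    img = np.arange(25, dtype=float).reshape(5, 5)
    a = np.array([0.5, 1.0, 0.0, 0.0, 0.0, 0.0])   # near-identity mapping (illustrative)
    b = np.array([0.5, 0.0, 1.0, 0.0, 0.0, 0.0])
    u, v = poly2_transform(1.0, 2.0, a, b)
    print(bilinear(img, u, v))
```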

    Towards a metadata standard for field spectroscopy

    This thesis identifies the core components for a field spectroscopy metadata standard to facilitate discoverability, interoperability, reliability, quality assurance and extended life cycles for datasets being exchanged in a variety of data sharing platforms. The research is divided into five parts: 1) an overview of the importance of field spectroscopy, metadata paradigms and standards, metadata quality and geospatial data archiving systems; 2) definition of a core metadataset critical for all field spectroscopy applications; 3) definition of an extended metadataset for specific applications; 4) methods and metrics for assessing metadata quality and completeness in spectral data archives; 5) recommendations for implementing a field spectroscopy metadata standard in data warehouses and ‘big data’ environments. Part 1 of the thesis is a review of the importance of field spectroscopy in remote sensing; metadata paradigms and standards; field spectroscopy metadata practices, metadata quality; and geospatial data archiving systems. The unique metadata requirements for field spectroscopy are discussed. Conventional definitions and metrics for measuring metadata quality are presented. Geospatial data archiving systems for data warehousing and intelligent information exchange are explained. Part 2 of the thesis presents a core metadataset for all field spectroscopy applications, derived from the results of an international expert panel survey. The survey respondents helped to identify a metadataset critical to all field spectroscopy campaigns, and for specific applications. These results form the foundation of a field spectroscopy metadata standard that is practical, flexible enough to suit the purpose for which the data is being collected, and/or has sufficient legacy potential for long-term sharing and interoperability with other datasets. Part 3 presents an extended metadataset for specific application areas within field spectroscopy. The key metadata is presented for three applications: tree crown, soil, and underwater coral reflectance measurements. The performance of existing metadata standards in complying with the field spectroscopy metadataset was measured. Results show they consistently fail to accommodate the needs of both field spectroscopy scientists in general as well as the three application areas. Part 4 presents criteria for measuring the quality and completeness of field spectroscopy metadata in a spectral archive. Existing methods for measuring quality and completeness of metadata were scrutinized against the special requirements of field spectroscopy datasets. Novel field spectroscopy metadata quality parameters were defined. Two spectral libraries were examined as case studies of operationalized metadata. The case studies revealed that publicly available datasets are underperforming on the quality and completeness measures. Part 5 presents recommendations for adoption and implementation of a field spectroscopy standard, both within the field spectroscopy community and within the wider scope of IT infrastructure for storing and sharing field spectroscopy metadata within data warehouses and big data environments. The recommendations are divided into two main sections: community adoption of the standard, and integration of standardized metadatasets into data warehouses and big data platforms. This thesis has identified the core components of a metadata standard for field spectroscopy. 
Overall, the metadata standard serves to increase the discoverability, reliability, quality, and life cycle of field spectroscopy metadatasets for wide-scale data exchange.
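    Purely as an illustration of how a standardized core metadata record might be serialized for exchange between spectral archives, the sketch below encodes a hypothetical field spectroscopy record as JSON; the field names are examples only, not the core metadataset defined in the thesis.

```python
import json

# Hypothetical core metadata record for a single field spectroscopy measurement.
record = {
    "instrument": "example field spectroradiometer",
    "acquisition_datetime": "2015-06-01T10:30:00Z",
    "location": {"lat": -33.87, "lon": 151.21, "datum": "WGS84"},
    "illumination": "clear sky, solar",
    "reference_panel": "white reference panel (example)",
    "target": "tree crown",
    "operator": "example operator",
    "units": "reflectance (dimensionless)",
}

print(json.dumps(record, indent=2))  # e.g. for exchange with a spectral data archive
```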