87 research outputs found

    A General Destriping Framework for Remote Sensing Images Using Flatness Constraint

    Full text link
This paper proposes a general destriping framework using flatness constraints, where we can handle various regularization functions in a unified manner. Removing stripe noise, i.e., destriping, from remote sensing images is an essential task in terms of visual quality and subsequent processing. Most existing methods are designed by combining a particular image regularization with a stripe noise characterization that cooperates with that regularization, which precludes us from examining different regularizations to adapt to various target images. To resolve this, we formulate the destriping problem as a convex optimization problem involving a general form of image regularization and the flatness constraints, a newly introduced stripe noise characterization. This strong characterization enables us to consistently capture the nature of stripe noise, regardless of the choice of image regularization. For solving the optimization problem, we also develop an efficient algorithm based on a diagonally preconditioned primal-dual splitting algorithm (DP-PDS), which can automatically adjust the stepsizes. The effectiveness of our framework is demonstrated through destriping experiments, where we comprehensively compare combinations of image regularizations and stripe noise characterizations using hyperspectral images (HSI) and infrared (IR) videos. Comment: submitted to IEEE Transactions on Geoscience and Remote Sensing
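    A minimal sketch of the modeling idea behind this framework: stripe noise in pushbroom imagery is assumed constant along the stripe direction ("flat"), so it can be represented by one offset per column while a generic image regularizer handles the clean component. The alternating scheme below uses off-the-shelf TV denoising for illustration; it is not the paper's DP-PDS algorithm, and the weight and iteration count are assumptions.

```python
# Hedged sketch, not the paper's DP-PDS solver: alternate between a
# TV-regularized image update and a per-column stripe estimate. The
# flatness idea enters as "one offset per column": the stripe component
# is constant along the stripe (row) direction.
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def destripe(y, weight=0.08, n_iter=20):
    """y: 2-D image corrupted by vertical stripes (constant along columns)."""
    u = y.copy()                        # clean-image estimate
    c = np.zeros(y.shape[1])            # per-column stripe offsets
    for _ in range(n_iter):
        c = (y - u).mean(axis=0)        # flat stripe: one value per column
        c -= c.mean()                   # fix the global offset ambiguity
        u = denoise_tv_chambolle(y - c[None, :], weight=weight)
    return u, c

# Synthetic demo: smooth ramp image plus random column stripes.
rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0.0, 1.0, 128), (128, 1))
stripes = np.tile(rng.normal(0.0, 0.1, 128), (128, 1))
u_hat, c_hat = destripe(clean + stripes)
```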

    Fast Objective Coupled Planar Illumination Microscopy

    Get PDF
Among optical imaging techniques, light sheet fluorescence microscopy stands out as one of the most attractive for capturing high-speed biological dynamics unfolding in three dimensions. The technique is potentially millions of times faster than point-scanning techniques such as two-photon microscopy. This potential is especially relevant for neuroscience applications, because interactions between neurons transpire over mere milliseconds within tissue volumes spanning hundreds of cubic microns. However, current-generation light sheet microscopes are limited by volume scanning rate and/or camera frame rate. We begin by reviewing the optical principles underlying light sheet fluorescence microscopy and the origin of these rate bottlenecks. We present an analysis leading us to the conclusion that Objective Coupled Planar Illumination (OCPI) microscopy is a particularly promising technique for recording the activity of large populations of neurons at high sampling rate. We then present speed-optimized OCPI microscopy, the first fast light sheet technique to avoid compromising image quality or photon efficiency. We pursue two strategies to develop the fast OCPI microscope. First, we devise a set of optimizations that increase the rate of the volume scanning system to 40 Hz for volumes up to 700 microns thick. Second, we introduce Multi-Camera Image Sharing (MCIS), a technique to scale imaging rate by incorporating additional cameras. MCIS can be applied not only to OCPI but to any widefield imaging technique, circumventing the limitations imposed by the camera. Detailed design drawings are included to aid dissemination to other research groups. We also demonstrate fast calcium imaging of the larval zebrafish brain and find a heartbeat-induced motion artifact. We recommend a new preprocessing step that removes the artifact through filtering. This step requires a minimum sampling rate of 15 Hz, and we expect it to become a standard procedure in zebrafish imaging pipelines. In the last chapter, we describe essential computational considerations for controlling a fast OCPI microscope and processing the data it generates. We introduce a new image processing pipeline developed to maximize computational efficiency when analyzing these multi-terabyte datasets, including a novel calcium imaging deconvolution algorithm. Finally, we demonstrate how combined innovations in microscope hardware and software enable inference of predictive relationships between neurons, a promising complement to more conventional correlation-based analyses.
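    The recommended preprocessing step is temporal filtering of the heartbeat artifact, which a 15 Hz sampling rate makes resolvable. A hedged sketch of one such filter follows; the band-stop design and the 2-4 Hz heartbeat band are illustrative assumptions, not the thesis's exact specification.

```python
# Hedged sketch of the recommended filtering idea; band edges and filter
# order are assumptions, not the thesis's exact design.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 15.0                    # volumetric sampling rate (Hz), per the text
heart_band = (2.0, 4.0)      # assumed larval-zebrafish heartbeat band (Hz)
b, a = butter(2, heart_band, btype="bandstop", fs=fs)

def remove_heartbeat(trace):
    """Zero-phase band-stop filtering of one fluorescence time series."""
    return filtfilt(b, a, trace)

# Demo: slow calcium-like transient plus a 3 Hz heartbeat artifact.
t = np.arange(0.0, 60.0, 1.0 / fs)
raw = np.exp(-((t - 30.0) ** 2) / 20.0) + 0.3 * np.sin(2 * np.pi * 3.0 * t)
cleaned = remove_heartbeat(raw)
```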

    Vicarious Methodologies to Assess and Improve the Quality of the Optical Remote Sensing Images: A Critical Review

    Get PDF
Over the past decade, the number of optical Earth-observing satellites performing remote sensing has increased substantially, dramatically increasing the capability to monitor the Earth. This increase is primarily driven by improved technology, miniaturization of components, and reduced manufacturing and launch costs. These satellites often lack the on-board calibrators that large satellites utilize to ensure high-quality (e.g., radiometric, geometric, and spatial quality) scientific measurements. To address this issue, this work presents the “best” vicarious image quality assessment and improvement techniques for optical satellites that lack an on-board calibration system. In this article, image quality categories are explored, and essential quality parameters (e.g., absolute and relative calibration, aliasing, etc.) are identified. For each parameter, appropriate characterization methods are identified along with their specifications or requirements. Where multiple methods exist, recommendations are made based on the strengths and weaknesses of each method. Furthermore, processing steps are presented, including examples. Essentially, this paper provides a comprehensive study of the criteria that need to be assessed to evaluate remote sensing satellite data quality, and the best vicarious methodologies to evaluate the identified quality parameters, such as coherent noise and ground sample distance.
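    As one concrete illustration of a vicarious relative-calibration idea of the kind this review surveys: per-detector gains and offsets can be normalized from scene statistics accumulated over many lines, assuming each detector sees the same radiance distribution over a long acquisition. The sketch below is a generic example, not a procedure prescribed by the article.

```python
# Hedged sketch of a generic scene-statistics relative calibration; robust
# estimators and the article's recommended procedures are not reproduced.
import numpy as np

def relative_calibration(image):
    """image: (lines x detectors) raw digital numbers from a pushbroom sensor.

    Assumes every detector observes the same radiance distribution over
    many lines, so detector-to-detector differences in mean/std reflect
    gain and offset errors (the cause of striping).
    """
    col_mean = image.mean(axis=0)
    col_std = image.std(axis=0)
    gain = col_std.mean() / col_std             # equalize per-detector spread
    offset = col_mean.mean() - gain * col_mean  # equalize per-detector level
    return gain[None, :] * image + offset[None, :]
```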

    IRAS sky survey atlas: Explanatory supplement

    Get PDF
This Explanatory Supplement accompanies the IRAS Sky Survey Atlas (ISSA) and the ISSA Reject Set. The first ISSA release in 1991 completely covers the high-ecliptic-latitude sky, |β| > 50 deg, with some coverage down to |β| ≈ 40 deg. The second ISSA release in 1992 covers ecliptic latitudes 50 deg > |β| > 20 deg, with some coverage down to |β| ≈ 13 deg. The remaining fields, covering latitudes within 20 deg of the ecliptic plane, are of reduced quality compared to the rest of the ISSA fields and are therefore released as a separate IPAC product, the ISSA Reject Set. The reduced quality is due to contamination by zodiacal emission residuals. Special care should be taken when using the ISSA Reject images. In addition to information on the ISSA images, this Explanatory Supplement provides some information on the IRAS Zodiacal History File (ZOHF), Version 3.0, which was described in the December 1988 release memo. The data described in this Supplement are available at the National Space Science Data Center (NSSDC) at the Goddard Space Flight Center. The interested reader is referred to the NSSDC for access to the IRAS Sky Survey Atlas (ISSA).
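    For quick reference, the quoted latitude coverage can be encoded as a lookup. The sketch below ignores the partial coverage below each release's nominal limit, and the behavior exactly at the 20 and 50 deg boundaries is an assumption.

```python
# Hedged sketch: which product nominally contains a field at absolute
# ecliptic latitude |beta| (degrees). Partial coverage below each nominal
# limit and exact boundary handling are simplified here.
def issa_product(abs_beta_deg):
    if abs_beta_deg > 50.0:
        return "ISSA first release (1991)"
    if abs_beta_deg > 20.0:
        return "ISSA second release (1992)"
    return "ISSA Reject Set (zodiacal emission residuals)"
```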

    Structure-aware image denoising, super-resolution, and enhancement methods

    Get PDF
Denoising, super-resolution and structure enhancement are classical image processing applications. The motive behind their existence is to aid our visual analysis of raw digital images. Despite tremendous progress in these fields, certain difficult problems are still open to research. For example, denoising and super-resolution techniques that possess all of the following properties are very scarce: they must preserve critical structures such as corners, be robust to the type of noise distribution, avoid undesirable artefacts, and also be fast. The area of structure enhancement also has an unresolved issue: very little effort has been put into designing models that can tackle anisotropic deformations in the image acquisition process. In this thesis, we design novel methods in the form of partial differential equations, patch-based approaches and variational models to overcome the aforementioned obstacles. In most cases, our methods outperform the existing approaches in both quality and speed, while being applicable to a broader range of practical situations.
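    As a generic example of the PDE family discussed here, the classical Perona-Malik diffusion below suppresses noise while slowing diffusion across strong edges. It is a standard baseline, not one of the thesis's novel models, and kappa, dt and the periodic boundary handling are illustrative choices.

```python
# Hedged sketch: classical Perona-Malik diffusion, a standard example of
# structure-aware PDE denoising (not one of the thesis's novel models).
import numpy as np

def perona_malik(u, n_iter=50, kappa=0.1, dt=0.2):
    """Explicit scheme; periodic boundaries via np.roll, for brevity."""
    u = u.astype(float).copy()
    for _ in range(n_iter):
        # differences to the four neighbours
        diffs = [np.roll(u, s, axis=ax) - u for ax in (0, 1) for s in (-1, 1)]
        # edge-stopping diffusivity g = 1/(1+(|d|/kappa)^2): small at edges,
        # so corners and contours are diffused much less than flat regions
        u += dt * sum(d / (1.0 + (d / kappa) ** 2) for d in diffs)
    return u

# Demo: noisy square; edges and corners survive the smoothing.
rng = np.random.default_rng(0)
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0
denoised = perona_malik(img + 0.1 * rng.normal(size=img.shape))
```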

    Advances in Image Processing, Analysis and Recognition Technology

    Get PDF
For many decades, researchers have been trying to make computers’ analysis of images as effective as the human visual system. For this purpose, many algorithms and systems have been created. The whole process covers various stages, including image processing, representation and recognition. The results of this work can be applied to many computer-assisted areas of everyday life. They improve particular activities and provide handy tools that are sometimes only for entertainment but quite often significantly increase our safety. Indeed, the practical implementation of image processing algorithms is particularly wide. Moreover, the rapid growth of computing power has allowed for the development of more sophisticated and effective algorithms and tools. Although significant progress has been made so far, many issues remain, resulting in the need for the development of novel approaches.

    On the relationship between neuronal codes and mental models

    Get PDF
The superordinate aim of my work towards this thesis was a better understanding of the relationship between mental models and the underlying principles that lead to the self-organization of neuronal circuitry. The thesis consists of four individual publications, which approach this goal from differing perspectives. While the formation of sparse coding representations in neuronal substrate has been investigated extensively, many research questions on how sparse coding may be exploited for higher cognitive processing are still open. The first two studies, included as chapter 2 and chapter 3, asked to what extent representations obtained with sparse coding match mental models. We identified the following selectivities in sparse coding representations: with stereo images as input, the representation was selective for the disparity of image structures, which can be used to infer the distance of structures to the observer. Furthermore, it was selective for the predominant orientation in textures, which can be used to infer the orientation of surfaces. With optic flow from egomotion as input, the representation was selective for the direction of egomotion in six degrees of freedom. Due to the direct relation between selectivity and physical properties, these representations, obtained with sparse coding, can serve as early sensory models of the environment. The cognitive processes behind spatial knowledge rest on mental models that represent the environment. We presented a topological model for wayfinding in the third study, included as chapter 4. It describes a dual population code, where the first population code encodes places by means of place fields, and the second population code encodes motion instructions based on links between place fields. We did not focus on an implementation in biological substrate or on an exact fit to physiological findings. The model is a biologically plausible, parsimonious method for wayfinding, which may be close to an intermediate step of emergent navigational skills in an evolutionary navigational hierarchy. Our automated testing of visual performance in mice, included in chapter 5, is an example of behavioral testing in the perception-action cycle. The goal of this study was to quantify the optokinetic reflex. Due to the rich behavioral repertoire of mice, quantification required many elaborate steps of computational analyses. Animals and humans are embodied living systems, and are therefore composed of strongly enmeshed modules or entities, which are also enmeshed with the environment. In order to study living systems as a whole, it is necessary to test hypotheses, for example on the nature of mental models, in the perception-action cycle. In summary, the studies included in this thesis extend our view on the character of early sensory representations as mental models, as well as on high-level mental models for spatial navigation. Additionally, this thesis contains an example of the evaluation of hypotheses in the perception-action cycle.
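    A minimal sketch of the sparse coding inference underlying the representations studied in chapters 2 and 3: given a dictionary D, ISTA solves the usual l1-regularized reconstruction problem. The random dictionary and sparsity weight lam are placeholders; the publications' actual inputs (stereo patches, optic flow) and learning procedure are not reproduced.

```python
# Hedged sketch: ISTA inference for a sparse code a of input x under a
# fixed dictionary D (min_a 0.5*||x - D a||^2 + lam*||a||_1). The random
# D and the value of lam are placeholders, not the studies' setup.
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(D, x, lam=0.1, n_iter=200):
    L = np.linalg.norm(D, 2) ** 2              # Lipschitz constant of the
    a = np.zeros(D.shape[1])                   # quadratic term's gradient
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)
        a = soft_threshold(a - grad / L, lam / L)
    return a

rng = np.random.default_rng(0)
D = rng.normal(size=(64, 128))
D /= np.linalg.norm(D, axis=0)                 # unit-norm dictionary atoms
x = D[:, :3] @ np.array([1.0, -0.5, 0.8])      # 3-sparse ground truth
a_hat = ista(D, x)                             # recovers a sparse code
```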

    Elevation and Deformation Extraction from TomoSAR

    Get PDF
3D SAR tomography (TomoSAR) and 4D SAR differential tomography (Diff-TomoSAR) exploit multi-baseline SAR data stacks to provide an essential extension of SAR interferometry for many applications, sensing complex scenes with multiple scatterers mapped into the same SAR pixel cell. However, these techniques are still affected by DEM uncertainty, temporal decorrelation, orbital, tropospheric and ionospheric phase distortion, and height blurring. In this thesis, these techniques are explored. As part of this exploration, systematic procedures for DEM generation, DEM quality assessment, DEM quality improvement and DEM applications are first studied. The thesis then focuses on the whole cycle of systematic methods for 3D & 4D TomoSAR imaging for height and deformation retrieval, from problem formulation, through the development of methods, to testing on real SAR data. After introducing DEM generation from spaceborne bistatic InSAR (TanDEM-X) and airborne photogrammetry (Bluesky), a new DEM co-registration method with line-feature validation (river network lines, ridgelines, valley lines, crater boundary features and so on) is developed and demonstrated to assist the study of wide-area DEM data quality. This co-registration method aligns two DEMs irrespective of the linear distortion model, significantly improves the accuracy of vertical DEM comparison, and is suitable and helpful for DEM quality assessment. A systematic TomoSAR algorithm and method have been established, tested, analysed and demonstrated for various applications (urban buildings, bridges, dams) to achieve better 3D & 4D tomographic SAR imaging results. These include applying COSMO-SkyMed X-band single-polarisation data over the Zipingpu dam, Dujiangyan, Sichuan, China, to map topography, and using ALOS L-band data in the San Francisco Bay region to map urban buildings and bridges. A new ionospheric correction method, based on the tile method employing IGS TEC data, a split-spectrum approach and an ionospheric model fitted via least squares, is developed to correct ionospheric distortion and improve the accuracy of 3D & 4D tomographic SAR imaging. Meanwhile, a pixel-by-pixel orbit baseline estimation method is developed to address the research gap of baseline estimation for 3D & 4D spaceborne SAR tomography imaging. Moreover, to obtain accurate 3D & 4D tomographic SAR imaging results, this thesis develops a SAR tomography imaging algorithm and a differential tomography four-dimensional SAR imaging algorithm based on compressive sensing, InSAR phase calibration referenced to a DEM with DEM error correction, and a new phase error calibration and compensation algorithm based on PS, SVD, PGA, weighted least squares and minimum entropy. The new baseline estimation method and the consequent TomoSAR processing results showed that accurate baseline estimation is essential to build the TomoSAR model. After baseline estimation, phase calibration experiments (via the FFT and Capon methods) indicate that a phase calibration step is indispensable for TomoSAR imaging, as it eventually influences the inversion results. A super-resolution reconstruction study based on compressive sensing demonstrates that X-band data with the CS method are not suited to forest reconstruction but do work for the reconstruction of large civil engineering structures such as dams and urban buildings. Meanwhile, L-band data with the FFT, Capon and CS methods are shown to work for the reconstruction of large man-made structures (such as bridges) and urban buildings.
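    A minimal sketch of the simplest inversion mentioned above, Fourier beamforming for a single TomoSAR pixel: the multi-baseline samples are correlated with steering vectors over candidate elevations. The wavelength, slant range and baseline values are made-up illustration numbers; Capon or CS estimators would replace the matched-filter step.

```python
# Hedged sketch: single-pixel TomoSAR elevation inversion by Fourier
# beamforming. Wavelength, slant range and baselines are illustrative
# numbers, not values from the thesis's datasets.
import numpy as np

wavelength = 0.031                               # X-band-like (m), assumed
slant_range = 700e3                              # (m), assumed
b_perp = np.linspace(-150.0, 150.0, 15)          # perpendicular baselines (m)
xi = 2.0 * b_perp / (wavelength * slant_range)   # elevation frequencies

# Simulate one pixel containing two scatterers (elevations 5 m and 30 m),
# as happens with layover over buildings, bridges or dams.
s_true = np.array([5.0, 30.0])
gamma = np.array([1.0, 0.7])
rng = np.random.default_rng(0)
g = np.exp(2j * np.pi * np.outer(xi, s_true)) @ gamma
g = g + 0.05 * (rng.normal(size=(15, 2)) @ np.array([1.0, 1.0j]))

# Beamforming: correlate the data with steering vectors over candidate s.
s_grid = np.linspace(-20.0, 60.0, 400)
A = np.exp(2j * np.pi * np.outer(xi, s_grid))    # steering matrix
profile = np.abs(A.conj().T @ g) / len(b_perp)   # reflectivity magnitude
print(s_grid[profile.argmax()])                  # strongest scatterer height
```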