
    The state-of-the-art progress in cloud detection, identification, and tracking approaches: a systematic review

    A cloud is a mass of condensed water droplets or ice crystals suspended in the atmosphere. It is visible from the ground and can remain at a variable height for some time. Clouds are very important because their interaction with the rest of the atmosphere has a decisive influence on weather, for instance by occluding sunlight or by bringing rain. Weather denotes the behavior of the atmosphere and is a determining factor in several human activities, such as agriculture or energy capture. Cloud detection is therefore an important process, and several methods for it have been investigated and published in the literature. The aim of this paper is to review a selection of such proposals; the papers analyzed and discussed can, in general, be classified into three types. The first is devoted to the analysis and explanation of clouds and their types, and to existing imaging systems. The second part deals with cloud detection, where two groups of methods are analyzed: those based on the analysis of satellite images and those based on images from ground-based cameras. The last part is devoted to cloud forecasting and tracking. Cloud detection from both kinds of imagery relies mainly on thresholding techniques and a few machine-learning algorithms. To compute the cloud motion vectors used for cloud tracking, correlation-based methods are commonly used. A few machine-learning methods for cloud tracking are also available in the literature and are discussed in this paper as well.
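    The review notes that cloud detection in both satellite and ground-based imagery relies largely on thresholding, and that cloud motion vectors for tracking are usually obtained with correlation-based methods. The following minimal Python sketch illustrates both ideas; the threshold value, block size and search radius are illustrative assumptions, not values taken from the reviewed papers.

```python
import numpy as np

def threshold_cloud_mask(image, threshold=0.65):
    """Naive fixed-threshold cloud mask on a normalised sky or satellite image.

    `threshold` is an illustrative value, not one recommended by the review.
    Pixels brighter than the threshold are flagged as cloud.
    """
    return image > threshold

def cloud_motion_vector(prev, curr, block=32, search=8):
    """Estimate one motion vector for the central image block by maximising
    the normalised cross-correlation between two consecutive frames.

    `prev` and `curr` are 2-D arrays of equal shape; `block` and `search`
    (in pixels) are illustrative sizes.
    """
    cy, cx = np.array(prev.shape) // 2
    ref = prev[cy - block // 2:cy + block // 2, cx - block // 2:cx + block // 2]
    ref = (ref - ref.mean()) / (ref.std() + 1e-12)

    best_score, best_shift = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = curr[cy - block // 2 + dy:cy + block // 2 + dy,
                        cx - block // 2 + dx:cx + block // 2 + dx]
            cand = (cand - cand.mean()) / (cand.std() + 1e-12)
            score = np.mean(ref * cand)  # normalised cross-correlation
            if score > best_score:
                best_score, best_shift = score, (dy, dx)
    return best_shift  # (row, column) displacement in pixels between frames
```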

    A first assessment of the Sentinel-2 Level 1-C cloud mask product to support informed surface analyses

    Cloud detection in optical remote sensing images is a crucial problem because undetected clouds can produce misleading results in the analyses of surface and atmospheric parameters. Sentinel-2 provides high spatial resolution satellite data distributed with associated cloud masks. In this paper, we evaluate the ability of Sentinel-2 Level-1C cloud mask products to discriminate clouds over a variety of biogeographic scenarios and in different cloudiness conditions. Reference cloud masks for the identification of misdetection were generated by applying a local thresholding method that analyses Sentinel-2 Band 2 (0.490 μm) and Band 10 (1.375 μm) separately; histogram-based thresholds were locally tuned by checking the single bands and the natural color composite (B4B3B2); in doubtful cases, NDVI and DEM were also analyzed to refine the masks; the B2B11B12 composite was used to separate snow. The analysis of the cloud classification errors obtained for our test sites allowed us to draw important inferences of general value. The L1C cloud mask generally underestimated the presence of clouds (average omission error, OE, of 37.4%); this error increased (OE > 50%) for imagery containing opaque clouds with a large transitional zone (between the cloud core and clear areas) and cirrus clouds; fragmentation emerged as a major source of omission errors (R² = 0.73). Overestimation was found predominantly in the presence of holes inside the main cloud bodies. Two extreme environments were particularly critical for the L1C cloud mask product. Detection over Amazonian rainforests was highly inefficient (OE > 70%) due to the presence of complex cloudiness and high water vapor content. On the other hand, Alpine orography under a dry atmosphere produced false cirrus detections. Altogether, cirrus detection was the most inefficient. According to our results, Sentinel-2 L1C users should take some simple precautions while waiting for ESA's improved cloud detection products.
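    The omission and commission errors quoted above can be computed directly from a candidate cloud mask and a reference mask. The sketch below shows one way to do this in Python; the boolean-array representation and the function name are assumptions made for illustration, not part of the paper's code.

```python
import numpy as np

def mask_errors(candidate, reference):
    """Omission and commission error (in %) of a boolean cloud mask
    `candidate` with respect to a boolean reference mask of equal shape.

    Omission error: fraction of reference cloud pixels that were missed.
    Commission error: fraction of candidate cloud pixels that are clear
    in the reference.
    """
    candidate = np.asarray(candidate, dtype=bool)
    reference = np.asarray(reference, dtype=bool)

    missed = reference & ~candidate
    false_alarm = candidate & ~reference

    omission = 100.0 * missed.sum() / max(reference.sum(), 1)
    commission = 100.0 * false_alarm.sum() / max(candidate.sum(), 1)
    return omission, commission
```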

    Multi-View Polarimetric Scattering Cloud Tomography and Retrieval of Droplet Size

    Tomography aims to recover a three-dimensional (3D) density map of a medium or an object. In medical imaging, it is extensively used for diagnostics via X-ray computed tomography (CT). We define and derive a tomography of cloud droplet distributions via passive remote sensing. We use multi-view polarimetric images to fit a 3D polarized radiative transfer (RT) forward model. Our motivation is 3D volumetric probing of vertically developed, convectively driven clouds that are ill-served by current methods in operational passive remote sensing. Current techniques are based on strictly 1D RT modeling and applied to a single cloudy pixel, where cloud geometry defaults to that of a plane-parallel slab. Incident unpolarized sunlight, once scattered by cloud droplets, changes its polarization state according to droplet size. Therefore, polarimetric measurements in the rainbow and glory angular regions can be used to infer the droplet size distribution. This work defines and derives a framework for a full 3D tomography of cloud droplets for both their mass concentration in space and their distribution across a range of sizes. This 3D retrieval of key microphysical properties is made tractable by our novel approach, which involves a restructuring and differentiation of an open-source polarized 3D RT code to accommodate a special two-step optimization technique. Physically realistic synthetic clouds are used to demonstrate the methodology with rigorous uncertainty quantification.
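    The two-step optimization mentioned above (first retrieving the 3D extinction field, then refining the droplet-size parameters) can be sketched schematically as follows. This is a hedged illustration only: `forward_model` stands in for the paper's differentiated polarized 3D RT code, and the cost function, parameterisation and optimizers are placeholders chosen for brevity rather than the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

def fit_cloud_tomography(measured_stokes, forward_model, ext0, reff0, veff0):
    """Schematic two-step fit of a 3-D extinction field and droplet-size
    parameters to multi-view polarimetric measurements.

    `forward_model(ext, reff, veff)` is a hypothetical polarized RT solver
    returning synthetic Stokes images with the same shape as
    `measured_stokes`; `ext0`, `reff0`, `veff0` are initial guesses.
    """
    def cost(ext_flat, reff, veff):
        synth = forward_model(ext_flat.reshape(ext0.shape), reff, veff)
        return np.sum((synth - measured_stokes) ** 2)

    # Step 1: retrieve extinction with the droplet-size parameters held fixed
    # (gradients here come from finite differences; the paper uses a
    # differentiated RT code instead).
    res1 = minimize(lambda x: cost(x, reff0, veff0), ext0.ravel(),
                    method="L-BFGS-B", bounds=[(0.0, None)] * ext0.size)

    # Step 2: refine effective radius and effective variance with the
    # extinction field fixed.
    res2 = minimize(lambda p: cost(res1.x, p[0], p[1]),
                    np.array([reff0, veff0]), method="Nelder-Mead")

    return res1.x.reshape(ext0.shape), res2.x
```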

    A Year in Space for the CubeSat Multispectral Observing System: CUMULOS

    CUMULOS is a three-camera system flying as a secondary payload on the Integrated Solar Array and Reflectarray Antenna (ISARA) mission with the goals of researching the use of uncooled commercial infrared cameras for Earth remote sensing and demonstrating unique nighttime remote sensing capabilities. Three separate cameras comprise the CUMULOS payload: 1) a visible (VIS) Si CMOS camera, 2) a shortwave infrared (SWIR) InGaAs camera, and 3) a longwave infrared (LWIR) vanadium oxide microbolometer. This paper reviews on-orbit operations during the past year, in-space calibration observations and techniques, and Earth remote sensing highlights from the first year of space operations. CUMULOS operations commenced on 8 June 2018 following the successful completion of the primary ISARA mission. Some of the unique contributions from the CUMULOS payloads include: 1) demonstrating the use of bright stars for on-orbit radiometric calibration of CubeSat payloads, 2) acquiring science-quality nighttime lights data at 130-m resolution, and 3) operating the first simple Earth-observing infrared payloads successfully flown on a CubeSat. Sample remote sensing results include images of cities at night, ship lights (including fishing vessels), oil industry gas flares, serious wildfires, volcanic activity, and daytime and nighttime clouds. The CUMULOS VIS camera has measured calibrated nightlights imagery of major cities such as Los Angeles, Singapore, Shanghai, Tokyo, Kuwait City, Abu Dhabi, Jeddah, Istanbul, and London at more than 5x the resolution of VIIRS. The utility of these data for measuring light pollution and mapping urban growth and infrastructure development at higher resolution than VIIRS is being studied, with an emphasis on Los Angeles. The Carr, Camp, and Woolsey fires from the 2018 California fire season were imaged with all three cameras, and the results highlight the excellent wildfire imaging performance that can be achieved by small sensors. The SWIR camera has exhibited extreme sensitivity to flare and fire hotspots, and was even capable of detecting airglow-illuminated nighttime cloud structures by taking advantage of the strong OH emissions within its 0.9–1.7 micron bandpass. The LWIR microbolometer has proven successful at providing cloud context imagery for our nightlights mapping experiments, can detect very large fires and the brightest flare hotspots, and can also image terrain temperature variation and urban heat islands at 300-m resolution. CUMULOS capabilities show the potential of CubeSats and small sensors to address several VIIRS-like nighttime mission areas in which wide-area coverage can be traded for greater resolution over a smaller field of view. The sensor has been used in collaboration with VIIRS researchers to explore these mission areas, and side-by-side results will be presented illustrating the capabilities as well as the limitations of small-aperture LEO CubeSat systems.

    The Emergence of the Modern Universe: Tracing the Cosmic Web

    This is the report of the Ultraviolet-Optical Working Group (UVOWG) commissioned by NASA to study the scientific rationale for new missions in ultraviolet/optical space astronomy approximately ten years from now, when the Hubble Space Telescope (HST) is de-orbited. The UVOWG focused on a scientific theme, The Emergence of the Modern Universe, the period from redshifts z = 3 to 0, occupying over 80% of cosmic time and beginning after the first galaxies, quasars, and stars emerged into their present form. We considered high-throughput UV spectroscopy (10-50x throughput of HST/COS) and wide-field optical imaging (at least 10 arcmin square). The exciting science to be addressed in the post-HST era includes studies of dark matter and baryons, the origin and evolution of the elements, and the major construction phase of galaxies and quasars. Key unanswered questions include: Where is the rest of the unseen universe? What is the interplay of the dark and luminous universe? How did the IGM collapse to form the galaxies and clusters? When were galaxies, clusters, and stellar populations assembled into their current form? What is the history of star formation and chemical evolution? Are massive black holes a natural part of most galaxies? A large-aperture UV/O telescope in space (ST-2010) will provide a major facility in the 21st century for solving these scientific problems. The UVOWG recommends that the first mission be a 4m aperture, SIRTF-class mission that focuses on UV spectroscopy and wide-field imaging. In the coming decade, NASA should investigate the feasibility of an 8m telescope, by 2010, with deployable optics similar to NGST. No high-throughput UV/Optical mission will be possible without significant NASA investments in technology, including UV detectors, gratings, mirrors, and imagers. (Report of the UV/O Working Group to NASA, 72 pages, 13 figures; the full document with postscript figures is available at http://casa.colorado.edu/~uvconf/UVOWG.htm)

    The life cycle of anvil cirrus clouds from a combination of passive and active satellite remote sensing

    Anvil cirrus clouds form in the upper troposphere from the outflow of ice crystals from deep convective cumulonimbus clouds. By reflecting incoming solar radiation as well as absorbing terrestrial thermal radiation and re-emitting it at significantly lower temperatures, they play an important role in the Earth's radiation budget. Nevertheless, the processes that govern their life cycle are not well understood and, hence, they remain one of the largest uncertainties in atmospheric remote sensing and climate and weather modelling. In this thesis the temporal evolution of the anvil cirrus properties throughout their life cycle is investigated, as is their relationship with the meteorological conditions. For a comprehensive retrieval of the anvil cirrus properties, a new algorithm for the remote sensing of cirrus clouds called CiPS (Cirrus Properties from SEVIRI) is developed. Utilising a set of artificial neural networks, CiPS combines the large spatial coverage and high temporal resolution of the imaging radiometer SEVIRI aboard the geostationary Meteosat Second Generation satellites with the high vertical resolution and sensitivity to thin cirrus clouds of the lidar CALIOP aboard the polar-orbiting satellite CALIPSO. In comparison to CALIOP, CiPS detects 71 % and 95 % of all cirrus clouds with an ice optical thickness (IOT) of 0.1 and 1.0, respectively. Furthermore, CiPS retrieves the corresponding cloud top height, IOT, ice water path (IWP) and, by parameterisation, effective ice crystal radius. This way, macrophysical, microphysical and optical properties can be combined to interpret the temporal evolution of the anvil cirrus clouds. Together with a tool for identifying convective activity and a new cirrus tracking algorithm, CiPS is used to analyse the life cycle of 132 anvil cirrus clouds observed over southern Europe and northern Africa in July 2015. Although the anvil cirrus clouds grow optically thick during the convective phase, they become thinner at a rapid pace as convection ceases. Two hours after the last observed convective activity, 92±7 % of the anvil cirrus area has IOT_CiPS < 1 and IWP_CiPS < 30 g m−2 on average, with the highest probability density around 0.1–0.2 and 1.5–3 g m−2, respectively. During the same period, the cloud top height is observed to decrease. Since this is observed for both long-lived and short-lived anvil cirrus, it is deduced that in this life phase the amount of ice in the anvil is mainly controlled by sedimentation. This is in line with a corresponding decrease in the estimated effective radius. While the convective strength has no evident effect on the IOT and IWP, stronger vertical updraught is clearly correlated with higher cloud top height and larger effective radius. Larger ice crystals are, however, observed to be removed effectively within 2–3 h after convection has ceased, suggesting that the convective strength has no impact on the ice crystal sizes in ageing anvils. In this life stage, upper-tropospheric relative humidity, as derived from ERA5 reanalysis data, is shown to have a larger impact on the anvil cirrus life cycle, with higher relative humidity favouring larger and especially longer-lived anvil cirrus clouds.
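    The statistic that 92±7 % of the anvil area has IOT_CiPS < 1 and IWP_CiPS < 30 g m−2 two hours after convection ceases is, in essence, a per-anvil area fraction over thresholded retrievals. A minimal Python sketch of such a computation is given below; the array-based data layout and function name are assumptions for illustration, not the thesis code.

```python
import numpy as np

def thin_anvil_fraction(iot, iwp, iot_limit=1.0, iwp_limit=30.0):
    """Fraction (in %) of anvil pixels whose retrieved ice optical thickness
    and ice water path (g m-2) fall below the given limits.

    `iot` and `iwp` are 1-D arrays of retrieved values for all pixels of one
    tracked anvil at one time step; the default limits follow the IOT = 1 and
    30 g m-2 thresholds quoted in the abstract.
    """
    iot = np.asarray(iot, dtype=float)
    iwp = np.asarray(iwp, dtype=float)
    valid = np.isfinite(iot) & np.isfinite(iwp)
    thin = valid & (iot < iot_limit) & (iwp < iwp_limit)
    return 100.0 * thin.sum() / max(valid.sum(), 1)
```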

    Earth imaging with microsatellites: An investigation, design, implementation and in-orbit demonstration of electronic imaging systems for earth observation on-board low-cost microsatellites.

    This research programme has studied the possibilities and difficulties of using 50 kg microsatellites to perform remote imaging of the Earth. The design constraints of these missions are quite different to those encountered in larger, conventional spacecraft. While the main attractions of microsatellites are low cost and fast response times, they present the following key limitations: payload mass under 5 kg, continuous payload power under 5 Watts (peak power up to 15 Watts), narrow communications bandwidths (9.6 / 38.4 kbps), attitude control only to within 5°, and no moving mechanisms. The most significant factor is the limited attitude stability. Without sub-degree attitude control, conventional scanning imaging systems cannot preserve scene geometry and are therefore poorly suited to current microsatellite capabilities. The foremost conclusion of this thesis is that electronic cameras, which capture entire scenes in a single operation, must be used to overcome the effects of the satellite's motion. The potential applications of electronic cameras, including microsatellite remote sensing, have expanded rapidly with the recent availability of high-sensitivity field-array CCD (charge-coupled device) image sensors. The research programme has established suitable techniques and architectures necessary for CCD sensors, cameras and entire imaging systems to fulfil scientific/commercial remote sensing despite the difficult conditions on microsatellites. The author has refined these theories by designing, building and operating in orbit five generations of electronic cameras. The major objective of meteorological-scale imaging was conclusively demonstrated by the Earth imaging camera flown on the UoSAT-5 spacecraft in 1991. Improved cameras have since been carried by the KITSAT-1 (1992) and PoSAT-1 (1993) microsatellites. PoSAT-1 also flies a medium resolution camera (200 metres) which (despite complete success) has highlighted certain limitations of microsatellites for high resolution remote sensing. A reworked, and extensively modularised, design has been developed for the four camera systems deployed on the FASat-Alfa mission (1995). Based on the success of these missions, this thesis presents many recommendations for the design of microsatellite imaging systems. The novelty of this research programme has been the principle of designing practical camera systems to fit on an existing, highly restrictive satellite platform, rather than conceiving a fictitious small satellite to support a high-performance scanning imager. This pragmatic approach has resulted in the first incontestable demonstrations of the feasibility of remote sensing of the Earth from inexpensive microsatellites.
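    As a rough illustration of why the quoted 5° attitude stability is so limiting for scene geometry, the short calculation below estimates the ground displacement of the image footprint caused by a 5° pointing error; the ~800 km orbit altitude is an assumed value for a typical low-Earth microsatellite orbit and is not taken from the abstract.

```python
import math

# Assumed numbers: the abstract quotes 5 deg attitude control; the ~800 km
# altitude is an illustrative low-Earth-orbit value, not from the text.
altitude_km = 800.0
pointing_error_deg = 5.0

ground_shift_km = altitude_km * math.tan(math.radians(pointing_error_deg))
print(f"~{ground_shift_km:.0f} km footprint displacement")  # roughly 70 km
```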

    Cloud geometry for passive remote sensing

    An important cause of disagreement between current climate models is a lack of understanding of cloud processes. In order to test and improve the assumptions of such models, detailed and large-scale observations of clouds are necessary. Passive remote sensing methods are well established for obtaining cloud properties over a large observation area in a short period of time. In the case of the visible to near-infrared part of the electromagnetic spectrum, a fast measurement is achieved by using the sun as a high-intensity light source to illuminate a cloud scene and by taking simultaneous measurements on all pixels of an imaging sensor. As the sun cannot be controlled as a light source, it is not possible to measure the time light travels from source to cloud to sensor, which is how active remote sensing determines distance. Active light sources, however, do not provide enough radiant energy to illuminate a large scene, which would be required to observe it in an instant. Thus passive imaging remains an important remote sensing method. Distance information, and hence the location of the cloud surface, is nonetheless crucial: cloud fraction and cloud optical thickness largely determine the cloud radiative effect, and cloud height primarily determines a cloud's influence on the Earth's thermal radiation budget. With the ever-increasing spatial resolution of passive remote sensing methods, accurate cloud surface location information becomes more important, because the largest source of retrieval uncertainties at this spatial scale, the influence of 3D radiative transfer effects, can be reduced using this information. This work shows how the missing location information can be derived from passive remote sensing. Using all sensors of the improved hyperspectral and polarization-resolving imaging system specMACS, a unified dataset is created that includes classical hyperspectral measurements as well as cloud surface location information and derived properties. This thesis shows how RGB cameras are used to accurately derive cloud surface geometry using stereo techniques, complementing the passive remote sensing of cloud microphysics on board the German High-Altitude Long-Range research aircraft (HALO). Measured surface locations are processed into a connected surface representation, which in turn is used to assign height and location to other passive remote sensing observations. Furthermore, cloud surface orientation and a geometric shadow mask are derived, supplementing microphysical retrieval methods. The final system is able to accurately map visible cloud surfaces while flying above cloud fields. The impact of the new geometry information on microphysical retrieval uncertainty is studied using theoretical radiative transfer simulations and measurements. It is found that, in some cases, information about surface orientation makes it possible to improve classical cloud microphysical retrieval methods. Furthermore, surface information helps to identify measurement regions where good microphysical retrieval quality is expected. By excluding likely biased regions, the overall microphysical retrieval uncertainty can be reduced. Additionally, using the same instrument payload and based on knowledge of the 3D cloud surface, new approaches for the retrieval of cloud droplet radius become possible that exploit measurements of parts of the polarized angular scattering phase function.
The necessary setup and improvements of the hyperspectral and polarization-resolving measurement system specMACS, which were developed over the course of four airborne field campaigns with the HALO research aircraft, are also introduced in this thesis.
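    The stereo-based derivation of cloud surface geometry described above can be illustrated with a textbook two-view triangulation. The sketch below is a strongly simplified stand-in for the specMACS processing (nadir-looking geometry, known baseline, pinhole camera, no bundle adjustment); all names and numbers are assumptions for illustration only.

```python
def cloud_top_height(flight_altitude_m, baseline_m, focal_px, disparity_px):
    """Height of a cloud feature from two nadir views along the flight track.

    Simplified pinhole/stereo geometry: the distance from aircraft to cloud
    top follows Z = f * B / d, so height above ground = flight altitude - Z.
    The real processing uses full camera models and a connected surface
    representation; this is only the basic triangulation idea.
    """
    distance_m = focal_px * baseline_m / disparity_px
    return flight_altitude_m - distance_m

# Example with assumed numbers: 15 km flight altitude, 200 m baseline between
# exposures, 1500 px focal length, 50 px disparity -> cloud top near 9 km.
print(cloud_top_height(15_000.0, 200.0, 1500.0, 50.0))
```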