    Drinking Water Infrastructure Assessment with Teleconnection Signals, Satellite Data Fusion and Mining

    Adjusting the drinking water treatment process as a simultaneous response to climate variations and water quality impacts has been a grand challenge in water resource management in recent years. This capability depends on timely, quantitative knowledge of water quality and availability. The issue is of great importance for the largest reservoir in the United States, Lake Mead, which lies close to a large metropolitan region, Las Vegas, Nevada. Water quality in Lake Mead is impaired by forest fires, soil erosion, and land use changes in nearby watersheds, as well as by wastewater effluents from the Las Vegas Wash. In addition, more than a decade of drought has caused the elevation of Lake Mead to drop sharply, by about 100 feet. These hydrological processes during the drought led to increased concentrations of total organic carbon (TOC) and total suspended solids (TSS) in the lake. TOC in surface water is a known precursor of disinfection byproducts in drinking water, and high TSS concentrations in source water threaten to clog the water treatment process. Since Lake Mead is a principal source of drinking water for over 25 million people, high TOC and TSS concentrations may have a potential health impact. It is therefore crucial to develop an early warning system that supports rapid forecasting of water quality and availability. In this study, a nowcasting water quality model built on satellite remote sensing lays the foundation for monitoring TSS and TOC on a near real-time basis. The novelty of the study lies in the development of a forecasting model that predicts TOC and TSS values on a daily basis with the aid of remote sensing. The forecasting process relies on an iterative scheme that updates the daily satellite imagery while retrieving long-term memory from past states through a nonlinear autoregressive neural network with external input, applied on a rolling basis. To account for the potential impact of long-term hydrological droughts, teleconnection signals were included on a seasonal basis for the Upper Colorado River basin, which provides 97% of the inflow into Lake Mead. Identifying teleconnection patterns at a local scale is challenging, largely because non-stationary and non-linear signals coexist within the ocean-atmosphere system. Empirical mode decomposition and wavelet analysis are used to extract the intrinsic trend and the dominant oscillation of the sea surface temperature (SST) and precipitation time series. After possible associations between the dominant oscillation of seasonal precipitation and global SST are identified through lagged correlation analysis, the statistically significant index regions in the oceans are extracted. With these associations characterized, the individual contributions of the SST forcing regions linked to the precipitation responses are quantified with an extreme learning machine. Results indicate that non-leading SST regions contribute salient shares of terrestrial precipitation variability compared with some of the known leading SST regions, and confirm the capability of predicting hydrological drought events one season ahead. With such an integrated advancement, an early warning system can be constructed to bridge the current gap in source water monitoring for water supply.
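
    As one way to make the last step concrete, the sketch below fits an extreme learning machine on toy data and gauges each SST index region's contribution by refitting the model without it. The array shapes, the tanh hidden layer, the number of hidden nodes and the leave-one-region-out measure are illustrative assumptions, not the study's exact configuration.

```python
# Minimal extreme learning machine (ELM) sketch: random, untrained input weights
# and biases, closed-form least-squares output weights. Predictors stand in for
# seasonal SST indices of the significant ocean regions; the target stands in
# for the dominant oscillation of seasonal precipitation.
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=50):
    """Fit a single-hidden-layer ELM regressor."""
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (not trained)
    b = rng.normal(size=n_hidden)                 # random hidden biases
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta = np.linalg.pinv(H) @ y                  # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy data: 120 seasons, 6 SST index regions, one precipitation oscillation series.
X = rng.normal(size=(120, 6))
y = X @ rng.normal(size=6) + 0.1 * rng.normal(size=120)

W, b, beta = elm_fit(X, y)
baseline = np.corrcoef(elm_predict(X, W, b, beta), y)[0, 1] ** 2

# One possible contribution measure (an assumption): drop a region and refit.
for j in range(X.shape[1]):
    X_drop = np.delete(X, j, axis=1)
    Wj, bj, betaj = elm_fit(X_drop, y)
    r2 = np.corrcoef(elm_predict(X_drop, Wj, bj, betaj), y)[0, 1] ** 2
    print(f"region {j}: R^2 {baseline:.3f} -> {r2:.3f} without this region")
```
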

    Fusion of data from different satellite sensors for monitoring water quality in coastal zones: application to the coastline of the PACA region

    Monitoring coastal areas requires good spatial resolution, good spectral resolution associated with a good signal-to-noise ratio, and good temporal resolution to capture rapid changes in water colour. Sensors available today, and even those planned for the near future, do not provide good spatial, spectral AND temporal resolution at the same time. In this study, we are interested in fusing images from two future sensors that are both part of the Copernicus programme of the European Space Agency: MSI on Sentinel-2 and OLCI on Sentinel-3. Since MSI and OLCI do not yet provide images, they had to be simulated; to do so, we used hyperspectral images from the HICO sensor. We then proposed three methods: an adaptation of the ARSIS method to the fusion of multispectral images (ARSIS), a fusion method based on non-negative tensor factorisation (Tensor), and a fusion method based on matrix inversion (Inversion). These three methods were first evaluated using statistical parameters computed between the fused images and the "perfect" image, as well as on the estimates of biophysical parameters obtained by minimising the radiative transfer model in water.
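
    As a minimal illustration of the first evaluation step, the sketch below compares a fused image with a reference "perfect" image using two common statistical measures. The choice of RMSE and the mean spectral angle, as well as the (bands, height, width) array layout, are assumptions; the thesis does not list its exact statistical parameters here.

```python
# Simple fusion-quality comparison between a fused image and a reference image.
import numpy as np

def rmse(fused, reference):
    """Root-mean-square error over all bands and pixels."""
    return float(np.sqrt(np.mean((fused - reference) ** 2)))

def sam(fused, reference, eps=1e-12):
    """Mean spectral angle (radians) between fused and reference pixel spectra."""
    f = fused.reshape(fused.shape[0], -1)         # (bands, pixels)
    r = reference.reshape(reference.shape[0], -1)
    num = np.sum(f * r, axis=0)
    den = np.linalg.norm(f, axis=0) * np.linalg.norm(r, axis=0) + eps
    return float(np.mean(np.arccos(np.clip(num / den, -1.0, 1.0))))

# Toy usage with random data standing in for HICO-derived test images.
rng = np.random.default_rng(1)
reference = rng.random((20, 64, 64))              # "perfect" image
fused = reference + 0.01 * rng.normal(size=reference.shape)
print(rmse(fused, reference), sam(fused, reference))
```
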

    Multi-frame reconstruction using super-resolution, inpainting, segmentation and codecs

    In this thesis, different aspects of video and light field reconstruction are considered, such as super-resolution, inpainting, segmentation and codecs. Each of these strategies is analyzed for a specific goal and a specific database; accordingly, databases relevant to the film industry, sport videos, light fields and hyperspectral videos are used. The thesis is constructed around six related manuscripts in which several approaches to multi-frame reconstruction are proposed. First, a novel multi-frame reconstruction strategy is proposed for light field super-resolution, in which graph-based regularization is applied together with edge-preserving filtering to improve the spatio-angular quality of the light field. Second, a novel video reconstruction method is proposed, built on compressive sensing (CS), Gaussian mixture models (GMM) and sparse 3D transform-domain block matching; its motivation is to improve the visual quality of the video frames and to decrease the reconstruction error compared with earlier video reconstruction methods. In the next approach, Student-t mixture models and edge-preserving filtering are applied for video super-resolution; the heavy tail of the Student-t mixture model makes it a robust video frame patch prior that is rich in terms of log-likelihood for information retrieval. In another approach, a hyperspectral video database is considered and a Bayesian dictionary learning process is used for hyperspectral video super-resolution. To that end, a Beta process is used in the Bayesian dictionary learning and a sparse coding is generated for the hyperspectral video super-resolution. The spatial super-resolution is followed by a spectral video restoration strategy, and the whole process leverages two dictionaries, the first trained for spatial super-resolution and the second for spectral restoration. Furthermore, another approach proposes a novel framework for automatically replacing advertisement content in soccer videos using deep learning strategies. A U-Net architecture (an image segmentation convolutional neural network) is applied for content segmentation and detection; after the segmented content in the video frames is reconstructed (accounting for apparent losses in detection), the unwanted content is replaced by new content through a homography mapping procedure. In addition, another research work presents a novel video compression framework using autoencoder networks that encode and decode videos with less chroma information than luma information. For this purpose, instead of converting Y'CbCr 4:2:2/4:2:0 videos to and from RGB 4:4:4, the video is kept in Y'CbCr 4:2:2/4:2:0 and the luma and chroma channels are merged after the luma is downsampled to match the chroma size; an inverse operation is performed in the decoder. The performance of these models is evaluated with the CPSNR, MS-SSIM and VMAF metrics. The experiments reveal that, compared with video compression involving conversion to and from RGB 4:4:4, the proposed method increases video quality by about 5.5% for Y'CbCr 4:2:2 and 8.3% for Y'CbCr 4:2:0, while reducing the amount of computation by nearly 37% for Y'CbCr 4:2:2 and 40% for Y'CbCr 4:2:0.
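
    The channel-merging step of the codec can be pictured with the short sketch below, assuming Y'CbCr 4:2:0 planes in which the chroma planes are half the luma size in each dimension. The 2x2 average pooling for the luma downsample and the nearest-neighbor inverse on the decoder side are assumptions; the thesis does not specify these operators.

```python
# Merge Y'CbCr 4:2:0 planes into one 3-channel tensor for an autoencoder input,
# and the corresponding split on the decoder side.
import numpy as np

def merge_ycbcr420(y, cb, cr):
    """Downsample luma to the chroma size and stack the three planes."""
    h2, w2 = cb.shape
    y_ds = y.reshape(h2, 2, w2, 2).mean(axis=(1, 3))     # 2x2 average pooling
    return np.stack([y_ds, cb, cr], axis=0)              # shape (3, H//2, W//2)

def split_ycbcr420(merged):
    """Inverse step: separate the planes and upsample the luma."""
    y_ds, cb, cr = merged
    y = np.repeat(np.repeat(y_ds, 2, axis=0), 2, axis=1)  # nearest-neighbor upsample
    return y, cb, cr

# Toy usage with random planes standing in for a real 4:2:0 frame.
rng = np.random.default_rng(2)
y, cb, cr = rng.random((64, 64)), rng.random((32, 32)), rng.random((32, 32))
x = merge_ycbcr420(y, cb, cr)            # what the autoencoder would see
y_hat, cb_hat, cr_hat = split_ycbcr420(x)
```
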
The thread that ties these approaches together is the reconstruction of video and light field frames under different problem aspects, such as loss of information, blur in the frames, residual noise after reconstruction, unwanted content, excessive data size and high computational overhead. In three of the proposed approaches, we have used a Plug-and-Play ADMM model, for the first time in this context, for the reconstruction of videos and light fields, in order to address information retrieval in the frames and to tackle noise and blur at the same time. In two of the proposed models, we applied sparse dictionary learning to reduce the data dimension and to represent the frames as an efficient linear combination of basis frame patches. Two of the proposed approaches are developed in collaboration with industry, where deep learning frameworks are used to handle large feature sets and to learn high-level features from the data.
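
For the Plug-and-Play ADMM theme, the following generic sketch shows the three-step iteration on a toy deblurring problem, with a Gaussian blur standing in for the degradation operator and a small Gaussian smoother standing in for the plug-in denoiser. The operators, penalty parameter, step size and iteration counts are illustrative assumptions, not the configurations used in the thesis.

```python
# Generic Plug-and-Play ADMM sketch for a reconstruction problem y = A(x) + noise.
import numpy as np
from scipy.ndimage import gaussian_filter

def A(x):
    # Degradation operator: Gaussian blur (symmetric kernel, so its adjoint is
    # approximately itself up to boundary effects).
    return gaussian_filter(x, sigma=1.5)

def denoise(v, strength=0.8):
    # Placeholder for a stronger learned or BM3D-style denoiser (assumption).
    return gaussian_filter(v, sigma=strength)

def pnp_admm(y, rho=0.5, outer=30, inner=10, step=0.2):
    x = y.copy()
    z = y.copy()
    u = np.zeros_like(y)
    for _ in range(outer):
        # x-update: a few gradient steps on ||A(x)-y||^2 + (rho/2)||x-(z-u)||^2
        for _ in range(inner):
            grad = 2 * A(A(x) - y) + rho * (x - (z - u))
            x = x - step * grad
        z = denoise(x + u)          # z-update: plug-in denoiser acts as the prior
        u = u + x - z               # dual (scaled multiplier) update
    return z

# Toy usage: blur and perturb a random "frame", then reconstruct it.
rng = np.random.default_rng(3)
clean = gaussian_filter(rng.random((64, 64)), sigma=2.0)
observed = A(clean) + 0.01 * rng.normal(size=clean.shape)
restored = pnp_admm(observed)
```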