Influence of turbidity and clouds on satellite total ozone data over Madrid (Spain)
This article compares total ozone column data from three satellite instruments: the Total Ozone Mapping Spectrometer (TOMS) on board Earth Probe (EP), the Ozone Monitoring Instrument (OMI) on board AURA, and the Global Ozone Monitoring Experiment (GOME) on board ERS-2, with ground-based measurements recorded by a well-calibrated Brewer spectrophotometer located in Madrid during the period 1996–2008. A cluster classification based on solar radiation (global, direct and diffuse), cloudiness and aerosol index allows hazy, cloudy, very cloudy and clear days to be selected, and the differences between Brewer and satellite total ozone data have been analyzed for each cluster. The accuracy of EP-TOMS total ozone data is affected by moderate cloudiness, with a mean absolute bias error (MABE) of 2.0%. Turbidity also has a significant influence on EP-TOMS total ozone data, with a MABE of ~1.6%. These values contrast with clear days, for which the MABE is ~1.2%. The total ozone data derived from the OMI instrument show a clear bias on clear and hazy days, with small uncertainties (~0.8%). Finally, the total ozone observations obtained with the GOME instrument show very little dependence on clouds and turbidity, indicating a retrieval algorithm that is robust under these conditions.
Manuel Antón thanks the Ministerio de Ciencia e Innovación and the Fondo Social Europeo for the award of a postdoctoral grant (Juan de la Cierva). This work was partially supported by the Ministerio de Ciencia e Innovación under project CGL2008-05939-C03-02/CLI.
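The abstract quotes MABE values without defining them; a minimal sketch of the usual definition, assuming $\Omega_i^{\mathrm{sat}}$ and $\Omega_i^{\mathrm{Brewer}}$ denote the satellite and Brewer total ozone columns for the $i$-th of $N$ coincident observations (notation assumed, not from the paper):

$$\mathrm{MABE} = \frac{100\%}{N}\sum_{i=1}^{N}\left|\frac{\Omega_i^{\mathrm{sat}}-\Omega_i^{\mathrm{Brewer}}}{\Omega_i^{\mathrm{Brewer}}}\right|$$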
Standardization of an enzyme-linked immunosorbent assay (ELISA) for the determination of the avidity of avian IgY immunoglobulin.
Project/Action Plan: 11.11.11.111
Optimization of the design of a drinking water distribution network
This report presents the results obtained by the working group that studied the problem of optimally designing a drinking water distribution network. Essentially, two classes of strategies are discussed. The first comprises those intended to significantly reduce the computational resources required by the algorithms implemented by IMTA. These algorithms are heuristic in nature and generate a feasible solution that is not optimal. In certain cases, the solutions obtained by these algorithms are known to be relatively far from the optimum and are unacceptable from the designer's point of view. The second class of proposed strategies is aimed precisely at alleviating this problem. Techniques originating in continuous optimization and network flows are suggested.
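As an illustration of the continuous-optimization direction suggested above, the sketch below solves a deliberately tiny linear program: splitting a required demand between two candidate pipes at minimum cost. The network, costs and capacities are hypothetical placeholders, not data from the report.

```python
# Minimal sketch: meet a demand of 10 flow units using two candidate pipes
# at minimum cost; all numbers are illustrative, not from the report.
from scipy.optimize import linprog

cost = [4.0, 7.0]                   # hypothetical cost per unit of flow in each pipe
A_eq = [[1.0, 1.0]]                 # flow conservation: pipe1 + pipe2 must equal demand
b_eq = [10.0]                       # required demand at the node
bounds = [(0.0, 6.0), (0.0, 8.0)]   # hypothetical capacity of each pipe

res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x, res.fun)               # -> [6. 4.] 52.0: fill the cheaper pipe first
```

Real networks add nonlinear head-loss constraints and discrete pipe diameters, which is what pushes the problem beyond plain linear programming and motivates the heuristic and network-flow techniques mentioned above.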
Policy conflict analysis for DiffServ quality of service management
Policy-based management provides the ability to (re-)configure differentiated services networks so that desired Quality of Service (QoS) goals are achieved. This requires implementing network provisioning decisions, performing admission control, and adapting bandwidth allocation to emerging traffic demands. A policy-based approach facilitates flexibility and adaptability, as policies can be dynamically changed without modifying the underlying implementation. However, inconsistencies may arise in the policy specification. In this paper we provide a comprehensive set of QoS policies for managing Differentiated Services (DiffServ) networks, and classify the possible conflicts that can arise between them. We demonstrate the use of Event Calculus and formal reasoning for the analysis of both static and dynamic conflicts in a semi-automated fashion. In addition, we present a conflict analysis tool that provides network administrators with a user-friendly environment for determining and resolving potential inconsistencies. The tool has been extensively tested with large numbers of policies over a range of conflict types.
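To make the idea of a static conflict concrete, here is a minimal sketch (in Python rather than the paper's Event Calculus formalisation): two DiffServ policies conflict statically when they govern the same traffic class but prescribe contradictory actions. The Policy fields and action vocabulary are illustrative assumptions, not the paper's.

```python
# Minimal sketch of static conflict detection between QoS policies.
# Field names and action vocabulary are illustrative, not the paper's.
from dataclasses import dataclass

@dataclass(frozen=True)
class Policy:
    traffic_class: str       # e.g. "gold", "silver"
    action: str              # e.g. "allocate" or "deny" bandwidth
    bandwidth_mbps: float

CONTRADICTORY = {("allocate", "deny"), ("deny", "allocate")}

def static_conflict(p: Policy, q: Policy) -> bool:
    """Two policies conflict if they target the same traffic class
    with contradictory actions."""
    return (p.traffic_class == q.traffic_class
            and (p.action, q.action) in CONTRADICTORY)

p1 = Policy("gold", "allocate", 20.0)
p2 = Policy("gold", "deny", 0.0)
print(static_conflict(p1, p2))   # True: both govern gold traffic, opposite actions
```

Dynamic conflicts, by contrast, depend on run-time events and system state, which is why the paper turns to Event Calculus reasoning rather than a purely syntactic check like this one.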
Geophysical validation and long-term consistency between GOME-2/MetOp-A total ozone column and measurements from the sensors GOME/ERS-2, SCIAMACHY/ENVISAT and OMI/Aura
The main aim of the paper is to assess the consistency of five years of Global Ozone Monitoring Experiment-2/MetOp-A [GOME-2] total ozone columns with the long-term total ozone satellite monitoring database already in existence, through an extensive inter-comparison and validation exercise using Brewer and Dobson ground-based measurements as reference. The behaviour of the GOME-2 measurements is weighed against that of GOME (1995–2011), the Ozone Monitoring Instrument [OMI] (since 2004) and the Scanning Imaging Absorption spectroMeter for Atmospheric CartograpHY [SCIAMACHY] (since 2002) total ozone column products. Against the ground truth of the ground-based measurements, the total ozone columns are inter-evaluated using a suite of established validation techniques; the GOME-2 time series follow the same patterns as those observed by the other satellite sensors. In particular, on average, GOME-2 data underestimate GOME data by about 0.80% and underestimate SCIAMACHY data by 0.37%, with no seasonal dependence of the differences between GOME-2, GOME and SCIAMACHY. The latter is expected since the three datasets are based on similar DOAS algorithms. This underestimation of GOME-2 is within the uncertainty of the reference data used in the comparisons. Compared to the OMI sensor, on average GOME-2 data underestimate OMI_DOAS (collection 3) data by 1.28%, without any significant seasonal dependence of the differences between them. The lack of seasonality might be expected since both the GOME data processor [GDP] 4.4 and OMI_DOAS are DOAS-type algorithms, and both consider the variability of stratospheric temperatures in their retrievals. Compared to the OMI_TOMS (collection 3) data, no bias was found. We hence conclude that the GOME-2 total ozone columns are well suited to continue the long-term global total ozone record with the accuracy needed for climate monitoring studies.
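The percentage biases quoted above are presumably mean signed relative differences against the co-located reference measurements; a plausible form (notation assumed, not from the paper) is

$$\overline{\Delta} = \frac{100\%}{N}\sum_{i=1}^{N}\frac{\Omega_i^{\mathrm{sat}}-\Omega_i^{\mathrm{ref}}}{\Omega_i^{\mathrm{ref}}}$$

so that a negative $\overline{\Delta}$ corresponds to an underestimation by the satellite sensor.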
A framework for data regression of heat transfer data using machine learning
Data availability: The data that has been used is confidential.
Machine Learning (ML) algorithms are emerging in various industries as a powerful complement or alternative to traditional data regression methods, largely because, unlike deterministic models, they can be used even in the absence of detailed phenomenological knowledge. Not surprisingly, their use is also being explored in heat transfer applications, where they are of particular interest for systems with complex geometries and underlying phenomena (e.g. fluid phase change, multi-phase flow, heavy fouling build-up). However, heat transfer systems present specific challenges that need addressing, such as the scarcity of high-quality data, inconsistencies across published data sources, the complex (and often correlated) influence of inputs, the split of data between training and testing sets, and limited extrapolation capabilities to unseen conditions. To help overcome some of these challenges and, more importantly, to provide a systematic approach, this article reviews and analyses past efforts to apply ML algorithms to heat transfer, and proposes a regression framework for deploying them to estimate key quantities (e.g. the heat transfer coefficient), to be used for improved design and operation of heat exchangers. The framework consists of six steps: i) data pre-treatment, ii) feature selection, iii) data splitting philosophy, iv) training and testing, v) tuning of hyperparameters, and vi) performance assessment with specific indicators, to support the choice of accurate and robust models. A case study involving the estimation of the condensation heat transfer coefficient in microfin tubes illustrates the proposed framework. Two data-driven algorithms, Deep Neural Networks and Random Forest, are tested and compared in terms of their estimation and extrapolation capabilities. The results show that ML algorithms are generally more accurate in predicting the heat transfer coefficient than a well-known semi-empirical correlation from past studies: the mean absolute error of the most suitable ML model is 535 [W m-2 K-1], compared to 1061 [W m-2 K-1] using the correlation. In terms of extrapolation, the selected ML model has a mean absolute error of 1819 [W m-2 K-1] versus 1111 [W m-2 K-1] for the correlation, highlighting the limited extrapolation capability of the ML models, although the comparison is not entirely like-for-like, since the correlation was used as-is and required no training. In addition, feature selection enables simpler models that depend only on the features most related to the target variable. Special attention is nevertheless needed, as overfitting and limited extrapolation capabilities are common difficulties encountered when deploying these models.
Hexxcell Ltd
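A minimal sketch of steps iii) to vi) of the proposed framework, using a Random Forest on synthetic stand-in data (the study's condensation data are confidential, so the features, target relationship and hyperparameters below are illustrative assumptions, not the paper's):

```python
# Minimal sketch of data splitting, training/testing and MAE-based assessment.
# Features, target function and hyperparameters are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical features: mass flux, vapour quality, saturation temperature.
X = rng.uniform([50.0, 0.1, 20.0], [500.0, 0.9, 60.0], size=(400, 3))
# Hypothetical target: heat transfer coefficient [W m-2 K-1], with noise.
y = 30.0 * X[:, 0] * X[:, 1] + 50.0 * X[:, 2] + rng.normal(0.0, 200.0, 400)

# Step iii): split the data before any fitting.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Steps iv)-v): train with explicitly chosen hyperparameters.
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Step vi): performance assessment with a specific indicator (MAE, as in the paper).
print(mean_absolute_error(y_te, model.predict(X_te)))
```

Assessing extrapolation, as the paper does, would further require holding out a region of the feature space (rather than a random subset) as the test set.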