22 research outputs found

    Sensitivity Analysis for Predicting Sub-Micron Aerosol Concentrations Based on Meteorological Parameters

    Sub-micron aerosols are an important air pollutant to measure because of their health effects. These particles are quantified as particle number concentration (PN). However, PN measurements are not always available at air quality monitoring stations, leading to data scarcity. To compensate for this, PN models need to be developed. This paper presents a PN modeling framework based on sensitivity analysis, tested on a one-year aerosol measurement campaign conducted in Amman, Jordan. The method prepares a set of different combinations of all measured meteorological parameters as candidate descriptors of PN concentration. We use artificial neural networks, in the form of a feed-forward neural network (FFNN) and a time-delay neural network (TDNN), as modeling tools and search for the best descriptors by using all of these combinations as model inputs. The best modeling tools are the FFNN for daily averaged data (R² = 0.77) and the TDNN for hourly averaged data (R² = 0.66), where the best combination of meteorological parameters is temperature, relative humidity, pressure, and wind speed. As the models follow the patterns of the diurnal cycle well, the results are considered satisfactory. When PN measurements are not directly available, or when large portions of the PN concentration data are missing, PN models can be used to estimate PN concentration from the available measured meteorological parameters.
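
    The sensitivity analysis described above amounts to training one model per input combination and comparing held-out performance. A minimal sketch of that loop follows, assuming a daily-averaged CSV file (amman_daily.csv) and column names such as temperature and wind_speed; scikit-learn's MLPRegressor stands in for the FFNN, and all names are illustrative rather than taken from the paper.

```python
from itertools import combinations

import pandas as pd
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("amman_daily.csv")  # assumed daily-averaged campaign data
met_vars = ["temperature", "relative_humidity", "pressure", "wind_speed"]  # assumed column names
target = "PN"

results = {}
for k in range(1, len(met_vars) + 1):
    for combo in combinations(met_vars, k):
        X, y = df[list(combo)].values, df[target].values
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        # Small feed-forward network as a stand-in for the FFNN in the paper.
        model = make_pipeline(
            StandardScaler(),
            MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
        )
        model.fit(X_tr, y_tr)
        results[combo] = r2_score(y_te, model.predict(X_te))

# The combination with the highest held-out R^2 marks the most informative descriptors.
best = max(results, key=results.get)
print(best, round(results[best], 2))
```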

    Mutual Information Input Selector and Probabilistic Machine Learning Utilisation for Air Pollution Proxies

    An air pollutant proxy is a mathematical model that estimates an unobserved air pollutant from other measured variables. A proxy is useful for filling missing data in a research campaign or for substituting a real measurement to reduce cost and operator effort (i.e., a virtual sensor). In this paper, we present a generic concept for pollutant proxy development based on an optimised data-driven approach. We propose a mutual information concept to determine the interdependence of different variables and thus select the most correlated inputs. The most relevant variables are selected as the best proxy inputs, with several metrics and the amount of data loss used for guidance. The input selection method determines the data used to train pollutant proxies based on a probabilistic machine learning method. In particular, we use a Bayesian neural network, which naturally prevents overfitting and provides confidence intervals around its output predictions. In this way, the prediction uncertainty can be assessed and evaluated. To demonstrate the effectiveness of our approach, we test it on an extensive air pollution database to estimate ozone concentration.
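
    A minimal sketch of the two-stage proxy idea follows: candidate inputs are ranked by mutual information with the target, and a probabilistic regressor then provides a prediction with an uncertainty band. A Gaussian process is used here as a stand-in for the paper's Bayesian neural network; the file name air_quality.csv and the column names are assumptions.

```python
import pandas as pd
from sklearn.feature_selection import mutual_info_regression
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("air_quality.csv").dropna()  # assumed measurement database
candidates = ["NO2", "NOx", "temperature", "relative_humidity", "solar_radiation"]
target = "O3"

# Stage 1: rank candidate inputs by mutual information with ozone.
mi = mutual_info_regression(df[candidates], df[target], random_state=0)
ranked = sorted(zip(candidates, mi), key=lambda p: p[1], reverse=True)
selected = [name for name, _ in ranked[:3]]  # keep the three most informative inputs

# Stage 2: probabilistic proxy that reports uncertainty around each prediction
# (a stand-in for the Bayesian neural network described above).
X_tr, X_te, y_tr, y_te = train_test_split(
    df[selected], df[target], test_size=0.3, random_state=0
)
scaler = StandardScaler().fit(X_tr)
gp = GaussianProcessRegressor(normalize_y=True).fit(scaler.transform(X_tr), y_tr)
mean, std = gp.predict(scaler.transform(X_te), return_std=True)

print("selected inputs:", selected)
print("95% interval half-width of first prediction:", round(1.96 * std[0], 2))
```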

    COVID-19 Pandemic Development in Jordan-Short-Term and Long-Term Forecasting

    In this study, we proposed three simple approaches to forecast reported COVID-19 cases in a Middle Eastern society (Jordan). The first approach was a short-term forecast (STF) model based on a linear forecast model that uses the previous days as a learning database. The second approach was a long-term forecast (LTF) model based on a mathematical formula that best describes the current pandemic situation in Jordan. The two approaches are complementary: the STF can cope with sudden daily changes in the pandemic, whereas the LTF can be used to predict the occurrence and strength of upcoming waves. The third approach was therefore a hybrid forecast (HF) model merging the STF and LTF models. The HF was shown to be an efficient forecast model with excellent accuracy. It is evident that the decision to enforce a curfew at an early stage, followed by the planned lockdown, was effective in eliminating a serious wave in April 2020. Vaccination has been effective in combating COVID-19 by reducing infection rates. Based on the forecasting results, there is some possibility that Jordan may face a third wave of the pandemic during the summer of 2021.
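
    The three forecasting approaches can be illustrated with a short sketch, shown below: the STF extrapolates a straight line fitted to the most recent days, the LTF is represented by a generic logistic curve (the paper's exact formula is not given in the abstract), and the HF blends the two. The window length, logistic parameters, and blending weight are all assumptions.

```python
import numpy as np

def stf(cases, window=14, horizon=7):
    """Short-term forecast: linear extrapolation of the last `window` daily counts."""
    days = np.arange(window)
    slope, intercept = np.polyfit(days, cases[-window:], 1)
    future = np.arange(window, window + horizon)
    return slope * future + intercept

def ltf(t, K=200_000, r=0.05, t0=300):
    """Long-term forecast: a generic logistic curve as a stand-in for the paper's formula."""
    return K / (1.0 + np.exp(-r * (t - t0)))

def hf(cases, t_future, w=0.5, **ltf_params):
    """Hybrid forecast: weighted blend of the STF and LTF components."""
    short = stf(cases, horizon=len(t_future))
    long_term = ltf(t_future, **ltf_params)
    return w * short + (1.0 - w) * long_term

# Example with synthetic cumulative case counts over 320 days:
t = np.arange(320)
cases = ltf(t) + np.random.default_rng(0).normal(0.0, 500.0, t.size)
print(hf(cases, t_future=np.arange(320, 327)).round(0))
```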

    Delineation of dew formation zones in Iran using long-term model simulations and cluster analysis

    Dew is a non-conventional source of water that has been gaining interest over the last two decades, especially in arid and semi-arid regions. In this study, we performed a long-term (1979-2018) energy balance model simulation to estimate the dew formation potential in Iran, aiming to identify dew formation zones and to investigate the impacts of long-term variation in meteorological parameters on dew formation. The annual average number of dew occurrence days in Iran was ~102, with the lowest number of dewy days in summer (~7) and the highest in winter (~45). The average daily dew yield was in the range of 0.03-0.14 L m⁻² and the maximum was in the range of 0.29-0.52 L m⁻². Six dew formation zones were identified based on cluster analysis of the time series of the simulated dew yield. The distribution of dew formation zones in Iran was closely aligned with topography and sources of moisture. The coastal zones in the north and south of Iran (i.e., the Caspian Sea and the Oman Sea) showed the highest dew formation potential, with 53 and 34 L m⁻² yr⁻¹, whereas the dry interior regions (i.e., central Iran and the Lut Desert), with an average of 12-18 L m⁻² yr⁻¹, had the lowest potential for dew formation. Dew yield estimation is very sensitive to the choice of the heat transfer coefficient. An uncertainty analysis of the heat transfer coefficient using eight different parameterizations revealed that the parameterization used in this study, the Richards (2004) formulation, gives estimates that are close to the average of all methods and are neither much lower nor much higher than the majority of the other parameterizations; the largest differences occur for very low values of daily dew yield. Trend analysis revealed a significant (p < 0.05) negative trend in the yearly dew yield in most parts of Iran during the last four decades (1979-2018). Such a negative trend in dew formation is likely due to an increase in air temperature and a decrease in relative humidity and cloudiness over the 40 years.
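
    The zoning step can be illustrated with a brief sketch: grid cells are clustered by their simulated dew-yield time series so that cells with a similar seasonal cycle fall into the same zone. The file name, array layout, and per-cell normalisation below are assumptions; only the choice of six clusters follows the abstract.

```python
import numpy as np
from sklearn.cluster import KMeans

# Assumed input: rows are grid cells over Iran, columns are monthly mean dew yields
# (L m^-2) from the 1979-2018 energy-balance simulation.
dew = np.load("dew_yield_monthly.npy")  # shape (n_cells, n_months), assumed

# Normalise each cell's series so clustering reflects the shape of the seasonal
# cycle rather than the absolute yield.
X = (dew - dew.mean(axis=1, keepdims=True)) / dew.std(axis=1, keepdims=True)

# Six zones, matching the number reported in the study.
zones = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(X)
print(np.bincount(zones))  # number of grid cells assigned to each dew-formation zone
```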

    Short-Term and Long-Term COVID-19 Pandemic Forecasting Revisited with the Emergence of OMICRON Variant in Jordan

    Three simple approaches to forecast the COVID-19 epidemic in Jordan were previously proposed by Hussein et al.: a short-term forecast (STF) based on a linear forecast model with a learning database of the cases reported in the previous 5–40 days, a long-term forecast (LTF) based on a mathematical formula that describes the COVID-19 pandemic situation, and a hybrid forecast (HF) that merges the STF and the LTF models. With the emergence of the OMICRON variant, the LTF failed to forecast the pandemic for reasons related to the infection rate and the speed of the OMICRON variant, which spreads faster than the previous variants. However, the STF remained suitable for sudden changes in the epidemic curves because these simple models learn from previously reported cases. In this study, we revisited these models by introducing a simple modification to the LTF and HF models in order to better forecast the COVID-19 pandemic in the presence of the OMICRON variant. As another approach, we also tested a time-delay neural network (TDNN) to model the dataset. Interestingly, the modification consisted of reusing the same function previously used in the LTF model after changing some parameters related to shift and time lag. The mathematical function type remained valid, suggesting that it is well suited to pandemic situations involving the same virus family. The TDNN was data-driven, robust, and successful in capturing the sudden change in +qPCR cases before and after the emergence of the OMICRON variant.
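
    The time-delay idea can be sketched by feeding a feed-forward network a sliding window of the previous days' +qPCR counts, so the model can react to abrupt changes such as the OMICRON surge. scikit-learn's MLPRegressor stands in for the TDNN; the lag length, network size, and input file are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def make_lagged(series, n_lags=14):
    """Turn a daily series into (X, y) pairs of lagged windows and next-day values."""
    X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags)])
    y = series[n_lags:]
    return X, y

cases = np.loadtxt("jordan_daily_qpcr.txt")  # assumed daily +qPCR counts
X, y = make_lagged(cases)

split = int(0.8 * len(X))
tdnn = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=3000, random_state=0)
tdnn.fit(X[:split], y[:split])

# One-day-ahead predictions over the held-out tail of the series.
pred = tdnn.predict(X[split:])
print(np.round(pred[:5]))
```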

    Myoglobin-Based Classification of Minced Meat Using Hyperspectral Imaging

    Minced meat substitution is one of the most common food frauds; it not only affects consumer health but also impacts lifestyles and religious customs. A number of methods have been proposed to detect such fraud; however, these mostly rely on laboratory measurements and are often subject to human error. Therefore, this study proposes novel, non-destructive, hyperspectral imaging (400-1000 nm) based isosbestic myoglobin (Mb) spectral features for minced meat classification. A total of 60 minced meat spectral cubes were pre-processed using true-color image formulation to extract regions of interest, which were further normalized using the Savitzky–Golay filtering technique. The proposed pipeline outperformed several state-of-the-art methods, achieving an average accuracy of 88.88%.
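
    A minimal sketch of the spectral pipeline described above: each region of interest is reduced to a 400-1000 nm spectrum, smoothed with a Savitzky–Golay filter, and passed to a classifier. The file names, label encoding, filter settings, and the choice of an SVM classifier are assumptions rather than details from the study.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

spectra = np.load("roi_mean_spectra.npy")  # shape (n_samples, n_bands), assumed
labels = np.load("meat_labels.npy")        # assumed integer class codes per sample

# Savitzky-Golay smoothing along the wavelength axis (window and order are assumptions).
smoothed = savgol_filter(spectra, window_length=11, polyorder=2, axis=1)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, smoothed, labels, cv=5)
print("mean cross-validated accuracy:", round(scores.mean(), 3))
```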