    A bankable method of assessing the performance of a CPV plant

    Concentrating Photovoltaics (CPV) is an alternative to flat-plate module photovoltaic (PV) technology. The bankability of CPV projects is an important issue in paving the way toward swift and sustained growth of this technology. The bankability of a PV plant is generally addressed by modeling its energy yield under a baseline loss scenario, followed by an on-site measurement campaign aimed at verifying its energy performance. This paper proposes a procedure for assessing the performance of a CPV project, structured around four successive steps: Solar Resource Assessment, Yield Assessment, Certificate of Provisional Acceptance, and Certificate of Final Acceptance. This methodology allows the long-term energy production of a CPV project to be estimated with an associated uncertainty of ≈5%. To our knowledge, no such method has been proposed to the CPV industry yet, and this critical gap has hindered or prevented the completion of several important CPV projects around the world. The main motivation for the proposed method is to provide a practical solution to this urgent problem. The procedure can be operated under a wide range of climatic conditions and makes it possible to assess the bankability of a CPV plant whose design uses any of the technologies currently available on the market. The method also complies with both international standards and local regulations; consequently, its applicability is both general and international.
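
    As a rough illustration of how such a step-wise assessment can arrive at a combined uncertainty near 5%, the sketch below combines hypothetical per-step uncertainties in quadrature and derives a P90 exceedance estimate of annual yield. The step values, the independence assumption, and the P90 formula are illustrative assumptions, not figures from the paper.

        import math

        # Hypothetical per-step relative uncertainties (1-sigma); illustrative
        # values only -- the paper's actual error budget is not in the abstract.
        step_uncertainty = {
            "solar resource assessment": 0.035,
            "yield assessment": 0.025,
            "provisional acceptance": 0.015,
            "final acceptance": 0.010,
        }

        # Assuming independent Gaussian errors, combine in quadrature.
        combined = math.sqrt(sum(u ** 2 for u in step_uncertainty.values()))
        print(f"combined uncertainty: {combined:.1%}")  # ~4.7%, near the quoted ~5%

        # P90 exceedance estimate commonly used in bankability studies
        # (1.282 is the one-sided z-score of the 90th percentile).
        p50_yield_mwh = 10_000.0  # hypothetical median annual yield
        p90_yield_mwh = p50_yield_mwh * (1.0 - 1.282 * combined)
        print(f"P90 annual yield: {p90_yield_mwh:.0f} MWh")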

    Turbidity determination from broadband irradiance measurements: A detailed multicoefficient approach

    A physically modeled method is presented to obtain accurate turbidity determinations from broadband direct irradiance measurements. The method uses parameterizations of various extinction processes affecting the transfer of shortwave radiation in a cloudless atmosphere. The integration over the shortwave solar spectrum is performed with a more realistic weighting function than is conventionally used. The calculation and properties of the broadband aerosol optical depth are discussed in detail as a function of the aerosol optical characteristics. The method is general, as it can predict any one of the four turbidity coefficients currently used in climatological studies, as defined by Ångström, Linke, Unsworth-Monteith, and Schüepp. Formal interrelationships are proposed so that climatological data based on different coefficients can be consistently intercompared without recourse to empirical formulas. The new parameterizations are more detailed than those of the literature, particularly regarding the optical depth of the clean dry atmosphere, which now depends explicitly on the stratospheric ozone and nitrogen dioxide amounts. This inevitably induces changes in the prediction of the broadband turbidity coefficients (Linke and Unsworth-Monteith), particularly at small zenith angles, when compared to older calculations. These coefficients are also shown to depend on zenith angle and precipitable water, causing parasitic variations of turbidity over a day or a year even if the aerosol characteristics do not vary. The masking effect of tropospheric nitrogen dioxide is presented, as well as a method to correct the predicted turbidity for circumsolar radiation. A detailed error analysis shows that the instrumental error and the estimation error on precipitable water are the main limiting factors of the method. Although a smaller potential error is obtained at larger zenith angles, accurate estimates of precipitable water are necessary for valid turbidity predictions in clean dry atmospheres. A limited test of the method is presented, using spectral radiative data from five different sites as the reference. The method performs well, provided that accurate precipitable water data can be obtained. In contrast, the older method of Louche is shown to produce unrealistic negative values under clean dry conditions. Monthly average turbidity over 3-4 years was also obtained from hourly irradiance at two sites with widely different aerosol regimes. Compared to the present results, Louche's method is found to overpredict the Unsworth-Monteith coefficient at both sites, while simultaneously underpredicting the Ångström coefficient at the clearest site.
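
    For context, the four coefficients mentioned above are commonly defined as follows. These are the standard textbook definitions, not the paper's refined parameterizations; here E_bn is the direct normal irradiance, E_0n the extraterrestrial normal irradiance, m the optical air mass, and delta_cda the broadband optical depth of a clean dry atmosphere.

        \begin{align}
          \tau_{a\lambda} &= \beta\,\lambda^{-\alpha}
            && \text{Ångström: turbidity } \beta \text{ at } \lambda = 1\,\mu\text{m} \\
          E_{bn} &= E_{0n}\exp\!\left[-T_L\, m\, \delta_{cda}(m)\right]
            && \text{Linke turbidity factor } T_L \\
          \tau_a &= -\frac{1}{m}\,\ln\frac{E_{bn}}{E_{bn,\mathrm{clean}}}
            && \text{Unsworth-Monteith broadband } \tau_a \\
          \beta &= B\,(\ln 10)\,2^{-\alpha}
            && \text{relation to Schüepp's } B \text{ at } 0.5\,\mu\text{m}
        \end{align}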

    Critical evaluation of precipitable water and atmospheric turbidity in Canada using measured hourly solar irradiance

    Global and diffuse radiation and surface meteorological measurements at Edmonton, Montreal, Port Hardy, Toronto, and Winnipeg for the years 1977-1984 are analyzed to yield estimates of atmospheric precipitable water and turbidity. Three methods of estimating precipitable water and two methods of estimating turbidity are used and compared. Laboratory measurements of pyranometer response as a function of zenith angle are used to correct the global radiation measurements. Circumsolar radiation is removed from the direct radiation obtained as the difference between measured global and diffuse radiation. The magnitude of this circumsolar correction is discussed in light of recent measurements and calculations of the circumsolar ratio. Turbidity time series are presented, showing a clearly defined El Chichon eruption signature in 1983-1984. A comparison with earlier results is included.
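
    A minimal sketch of the direct-from-difference step described above, under assumptions not stated in the abstract: the shade-disc diffuse measurement excludes the circumsolar region, so the derived beam still contains circumsolar radiation, and a hypothetical circumsolar ratio is used to remove it.

        import math

        def true_direct_normal(ghi, dhi, zenith_deg, circumsolar_ratio=0.03):
            """Sketch: beam irradiance from global minus diffuse (W/m^2), then
            removal of an assumed circumsolar fraction. The 3% default for
            circumsolar_ratio is a hypothetical placeholder, not a value from
            the paper."""
            cos_z = math.cos(math.radians(zenith_deg))
            if cos_z <= 0.0 or ghi <= dhi:
                return 0.0
            bn_raw = (ghi - dhi) / cos_z  # still includes circumsolar radiation
            return (1.0 - circumsolar_ratio) * bn_raw

        print(true_direct_normal(ghi=800.0, dhi=120.0, zenith_deg=40.0))  # ~861 W/m^2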

    Improving the separation of direct and diffuse solar radiation components using machine learning by gradient boosting

    Based on a large, recently developed database of 1-min irradiance and ancillary observations at 54 stations worldwide, this study uses the gradient boosting Machine Learning (ML) technique to improve the component-separation process, through which the direct and diffuse solar radiation components are estimated from 1-min global horizontal irradiance data. Here, the XGBoost implementation of gradient boosting is used with ensembles of both linear and non-linear weak prediction models. The predictions of 140 separation models from the literature are combined using XGBoost to improve on the random errors of the individual separation models at every validation site. The minimum prediction error is essentially achieved by a combination of 26 of the original 140 models, with no meaningful reduction in error from combining more models. Most of these 26 models use at least three inputs in addition to the clearness index. In parallel, XGBoost is also used to separate the components directly from the inputs to the separation models. Of the 24 possible inputs used in the original 140 separation models, only 14 are found relevant. These 14 inputs could be used with an appropriate formalism to subsequently develop a better separation model. It is found that when the training and validation datasets are not collocated, the RMSD of the predictions increases by 2% on average with respect to the case of collocated datasets. Overall, the present results indicate that a data-driven ML approach combining a limited number of existing models can considerably decrease the currently large random errors associated with such models when they are used separately at high temporal frequency.

    The first two authors were funded by the Spanish Ministry of Science under project ENE2014-56126-C2-2-R (AOPRIN-SOL project). Jose Antonio Ruiz Arias was supported by the Spanish Ministry of Economy and Competitiveness under project ENE2014-56126-C2-1-R.
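
    A minimal sketch of the model-combination step, with synthetic data standing in for real observations: the predictions of the individual separation models serve as features and the measured diffuse fraction as the target of an XGBoost regressor. All sizes and hyperparameters here are illustrative assumptions, not the paper's configuration.

        import numpy as np
        import xgboost as xgb
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n_samples, n_models = 10_000, 140

        # Synthetic stand-ins: per-model diffuse-fraction predictions (features)
        # and the 1-min measured diffuse fraction (target).
        model_predictions = rng.uniform(0.0, 1.0, size=(n_samples, n_models))
        measured_df = (model_predictions[:, :26].mean(axis=1)
                       + rng.normal(0.0, 0.02, n_samples))

        X_train, X_test, y_train, y_test = train_test_split(
            model_predictions, measured_df, test_size=0.3, random_state=0)

        # Gradient-boosted combination of the individual model predictions.
        booster = xgb.XGBRegressor(n_estimators=300, max_depth=6, learning_rate=0.1)
        booster.fit(X_train, y_train)

        rmsd = float(np.sqrt(np.mean((booster.predict(X_test) - y_test) ** 2)))
        print(f"RMSD of the combined prediction: {rmsd:.4f}")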