There is a popular saying: one of the few things we know for certain is that nothing is totally certain. It may sound contradictory, but even when talking about science, one should always assume so.
Particularly in solar resource assessment, a reliable methodology for estimating the possible deviations is essential for the successful development of new solar power plants. Good knowledge of the measuring and modelling principles, and of the related accuracies of weather data, helps to select the optimum power plant design, evaluate risks and estimate the return on investment.
Therefore, it is important for providers of solar radiation data not only to develop better weather models and measuring techniques (i.e. to deliver more accurate data), but also to improve understanding of their performance (i.e. to better estimate the data uncertainties).
In addition to accurate solar radiation data and other meteorological parameters, having reliable estimates of the expected data confidence intervals may be a conditio sine qua non for gaining access to financial resources for the project. Information on data uncertainty opens the door to the estimation of financial risks. This is customarily done by investors and bankers on the basis of the P75, P90, P95 and P99 exceedance values.
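To make this concrete, below is a minimal sketch of how such exceedance values can be derived from a long-term estimate once the overall uncertainty is known, assuming normally distributed errors. The GHI value and the 4% standard uncertainty used here are hypothetical figures for illustration only.

```python
# Minimal sketch: deriving Pxx exceedance values from a P50 estimate,
# assuming model errors follow a normal distribution.
# The P50 value and the 4% standard uncertainty are hypothetical.
from scipy.stats import norm

p50_ghi = 1850.0           # hypothetical long-term annual GHI [kWh/m2]
std_uncertainty = 0.04     # hypothetical combined standard uncertainty (4%)

for p in (75, 90, 95, 99):
    # Pxx is the value exceeded with probability xx%, i.e. the
    # (100 - xx)th percentile of the assumed normal distribution.
    z = norm.ppf(1 - p / 100.0)
    print(f"P{p}: {p50_ghi * (1 + z * std_uncertainty):.0f} kWh/m2")
```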
From the technical perspective, optimizing the components of a power plant without knowing the data uncertainty is practically impossible for engineers, who need to make sure that the selected equipment will operate under the manufacturer’s recommended conditions.
The process of estimating solar radiation data accuracy can sometimes be unclear. Below I have summarized it in four steps:
Step 1 Conduct model-measurement comparisons. Most often known as ‘validation statistics’, this consists of a systematic comparison of model estimates with measurements from top-class instruments (for solar radiation, secondary standard pyranometers and first-class pyrheliometers) at the same locations and periods. Ideally, this is repeated for as many locations and over as long a period as possible. For a general evaluation of a model, it should be done for meteorological stations representing all geographical regions.
Still, ground measurements are never perfect: even the highest-quality, well-operated GHI data can have an uncertainty in the range of ±2% to ±3%. This should therefore always be considered when estimating the model data uncertainty. Before any comparison, the measured data need to go through a rigorous quality assessment to remove values affected by measurement errors. Measurement issues can only be seen in high-resolution data, optimally sub-hourly or hourly values. Data aggregated at daily or monthly level is of no use, because proper quality screening is impossible and errors in the measurements cannot be quantified.
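As an illustration of this step, the sketch below applies a basic physical-limits screening to hourly data and then computes simple validation statistics (bias and root mean square deviation). The column names and the screening rule are assumptions made for the example, not the actual Solargis validation procedure.

```python
# Minimal sketch of a model-measurement comparison on hourly GHI data.
# Column names and the quality test are illustrative assumptions.
import numpy as np
import pandas as pd

def validate_ghi(df: pd.DataFrame) -> dict:
    """df columns: 'ghi_meas', 'ghi_model', 'ghi_extraterrestrial' [W/m2]."""
    # Basic quality screening: drop physically implausible measurements
    # (negative values or values above the extraterrestrial irradiance).
    ok = (df["ghi_meas"] >= 0) & (df["ghi_meas"] <= df["ghi_extraterrestrial"])
    clean = df[ok]

    err = clean["ghi_model"] - clean["ghi_meas"]
    mean_meas = clean["ghi_meas"].mean()
    return {
        "bias_pct": 100 * err.mean() / mean_meas,                  # systematic deviation
        "rmsd_pct": 100 * np.sqrt((err ** 2).mean()) / mean_meas,  # random spread
        "n_hours": int(len(clean)),
    }
```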
Results for Global Horizontal Irradiation (GHI) from the Solargis model are presented in the images below.
Step 2 Build model error distributions. Once the comparison has been done at a sufficient number of validation sites, we have a first estimate of the model uncertainty. As a direct result of Step 1, we obtain the frequency distribution of model errors with respect to the ground measurements, which represents the number of error occurrences for each error size. A commonly used statistical model to represent this error distribution is the normal distribution, which is fully characterized by the mean and standard deviation of the errors. These two parameters are computed for each validation location and season, and for all sites combined.
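A minimal sketch of this step is shown below, assuming the yearly model-measurement deviations at each validation site are already available; the site names and values are made up for illustration.

```python
# Minimal sketch of Step 2: summarizing model errors per site and overall
# by the mean and standard deviation of a fitted normal distribution.
import numpy as np

def error_distribution(deviations_pct):
    """deviations_pct: yearly model-measurement deviations at one site [%]."""
    errors = np.asarray(deviations_pct, dtype=float)
    return errors.mean(), errors.std(ddof=1)   # (mean error, standard deviation)

# Hypothetical annual deviations of modelled GHI [%] at two sites
site_errors = {"site_A": [1.2, 0.8, 1.5], "site_B": [-2.1, -1.7, -2.4]}
all_errors = [e for values in site_errors.values() for e in values]

print({name: error_distribution(v) for name, v in site_errors.items()})
print("all sites combined:", error_distribution(all_errors))
```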
Step 3 Analyze and localize the model error distributions. The characteristics of the empirical error distributions found in Step 2 are analyzed and confronted with the characteristics of each validation site and validation period. The aim is to associate site and period properties, such as site elevation, cloud variability or atmospheric turbidity, with properties of the error distribution. As a result, we can identify factors affecting the performance of the solar model, for instance high mountains, snow conditions, highly reflective deserts, proximity to the sea coast or urbanized areas. The limited availability of public reference stations in certain regions also requires us to adopt more conservative estimates of uncertainty.
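One simple way to explore such relationships is a least-squares fit of the per-site error spread against site characteristics, as sketched below. The chosen features, the numbers and the linear form are illustrative assumptions, not the actual analysis.

```python
# Minimal sketch of Step 3: relating the spread of model errors at each
# validation site to site characteristics via a linear least-squares fit.
# All feature values and error figures are hypothetical.
import numpy as np

# Per-site features: [elevation_km, cloud_variability_index]
features = np.array([[0.2, 0.3], [1.8, 0.6], [0.1, 0.8], [2.5, 0.5]])
error_std_pct = np.array([3.0, 6.5, 5.0, 7.5])   # GHI error std per site [%]

# Fit: error_std ~ a0 + a1 * elevation + a2 * cloud_variability
X = np.column_stack([np.ones(len(features)), features])
coeffs, *_ = np.linalg.lstsq(X, error_std_pct, rcond=None)
print("fitted coefficients:", coeffs)
```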
Step 4 Generalize to a global error model. Assembling the findings from Step 3, the aim of this step is to build an error model that responds to the most important factors determining the model data uncertainty at global scale, and is therefore able to estimate the level of uncertainty for any requested site. This is not an easy task either: it requires deep expert knowledge of the model, its internal algorithms and its inputs.
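Conceptually, such a generalized error model can be as simple as applying the relationships found in Step 3 to the characteristics of a requested site, with a conservative lower bound; the sketch below illustrates the idea using hypothetical coefficients like those fitted in the previous example.

```python
# Minimal sketch of Step 4: estimating the expected GHI uncertainty for a
# new site from a generalized (here, linear) error model.
# The coefficients and the 3% floor are hypothetical assumptions.
import numpy as np

def estimate_uncertainty_pct(coeffs, elevation_km, cloud_variability,
                             floor_pct=3.0):
    """Return a conservative 1-sigma uncertainty estimate [%] for a site."""
    predicted = coeffs[0] + coeffs[1] * elevation_km + coeffs[2] * cloud_variability
    # Never report less than the uncertainty of the reference measurements.
    return max(predicted, floor_pct)

coeffs = np.array([1.5, 1.2, 5.0])   # hypothetical coefficients from Step 3
print(estimate_uncertainty_pct(coeffs, elevation_km=1.0, cloud_variability=0.4))
```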
In conclusion, the inference of solar radiation uncertainty involves constant iteration of the steps described above. As new solar projects are developed, new weather stations are installed and science progresses, we are all becoming more certain about solar radiation uncertainty.