
    Improving feedback through computer-based language proficiency assessment

    This paper reports on the proposed transfer of a paper-based English proficiency exam to an online platform. We discuss both the anticipated advantages, which were the impetus for the project, and some emergent benefits, which prompted an in-depth analysis and reconceptualisation of the exam’s role and which we hope will promote positive washback as well as washforward. This change will be afforded through more granular feedback on student performance, facilitated by the online platform.

    On the Reprocessing and Reanalysis of Observations for Climate. Climate Science for Serving Society: Research, Modelling and Prediction Priorities, edited by

    ABSTRACT The long observational record is critical to our understanding of the Earth's climate, but most observing systems were not developed with a climate objective in mind. As a result, tremendous efforts have gone into assessing and reprocessing the data records to improve their usefulness in climate studies. The purpose of this paper is both to review recent progress in reprocessing and reanalyzing observations, and to summarize the challenges that must be overcome in order to improve our understanding of climate and its variability. Reprocessing improves data quality through closer scrutiny and improved retrieval techniques for individual observing systems, while reanalysis merges many disparate observations with models through data assimilation; both aim to provide a climatology of Earth processes. Many challenges remain, such as tracking the improvement of processing algorithms and limited spatial coverage. Reanalyses have fostered significant research, yet reliable global trends in many physical fields are not yet attainable, despite significant advances in data assimilation and numerical modeling. Oceanic reanalyses have made significant advances in recent years, but are discussed here only in terms of progress toward integrated Earth system analyses. Climate data sets are generally adequate for process studies and large-scale climate variability. Communicating the strengths, limitations and uncertainties of reprocessed observations and reanalysis data, not only among the community of developers but also with the extended research community, including new generations of researchers and decision makers, is crucial for further advancement of the observational data records. It must be emphasized that careful investigation of the data and processing methods is required to use the observations appropriately.
Reprocessing Observations
A major difficulty in understanding past climate change is that, with very few exceptions, the systems used to make the observations that climate scientists now rely on were not designed with their needs in mind. Early measurements were often made out of simple scientific curiosity or for needs other than understanding or forecasting the climate; more recently, many systems have been driven by other needs such as operational weather forecasting, or by accelerating improvements in technology. This has two major consequences. The first is that although large numbers of observations are available in digital archives, many more still exist only as paper records or on obsolete electronic media, and are therefore not available for analysis. Measurements made by early satellites, whaling ships, missions of exploration, colonial administrators, and commercial concerns (to name only a few) are found in archives scattered around the world. Finding, photographing and digitizing observations from paper records, and locating machines capable of reading old data tapes, punch cards, strip charts or magnetic tapes, are each time-consuming and costly, but they are vital to improving our understanding of the climate. Furthermore, there is a growing need for longer, higher-quality databases of synoptic-timescale phenomena in order to address questions and concerns about changing climate and weather extremes, risks and impacts under both natural climatic variability and anthropogenic climate change. Such demands are leading to a greater emphasis on the recovery, imaging, digitization, quality control and archiving of, plus ready access to, daily to sub-daily historical weather observations. These new data will ultimately improve the quality of the various reanalyses that rely on them. There is also a sense of urgency, as many observations are recorded on perishable media such as paper and magnetic tapes which degrade over time.
Without intervention, our ability to understand and reconstruct the past is disintegrating in a disturbingly literal sense. The second major consequence is that current observation system requirements for climate monitoring and model validation, such as those specified by GCOS (http://www.wmo.int/pages/prog/gcos/index.php?name=ClimateMonitoringPrinciples), which typically emphasise continuity and stability over resolution and timeliness, are met by few historical observing systems. Changes in instrumentation, reporting times and station locations introduce non-climatic artifacts in the data, necessitating consistent reprocessing to recover homogeneous climate records. Nevertheless, reliable assessments of changes in the global climate have been made, such as the IPCC's statement that "warming of the climate system is unequivocal". This assessment relies on the many multi-decadal climate series which now exist. Reprocessing of observations aims to improve the quality of the data through better algorithms, and to understand and communicate the errors and consequent uncertainties in the raw and processed observations. Reanalyses differ from reprocessed observational data sets in that sophisticated data assimilation techniques are used in combination with global forecast models to produce global estimates of continuous data fields based on multiple observational sources (discussed in the following section).
Data recovery and archiving
A vital first step for the understanding of historical data, and hence past climate, is to digitize and make freely available the vast numbers of measurements, other observations and related metadata that currently exist only in hard-copy archives or on inaccessible (or obsolete) electronic media. Some estimates suggest that vast numbers of observations remain undigitised.
These projects and initiatives urgently need to be embedded in an overarching, sustainable, fully funded and staffed international infrastructure that oversees data rescue activities and complements the various implementation and strategy plans and documents on data through international coordinating bodies such as GCOS, GEO, WMO and WCRP. The consolidation of meteorological, hydrological and oceanographic reports and observations into large archives facilitates the creation of a range of 'summary' data sets which are widely used in climate science, and can also act as a focus for an international community of researchers. However, further consolidation could bring greater benefits. A land equivalent of ICOADS, for example, would bring together many of the elements needed to fully describe the meteorological situation, and potentially reduce the effort currently expended to maintain and grow a large number of different datasets. In fact, both the terrestrial and marine data efforts need to be integrated and better linked up under an international framework that supports their activities in a fully sustainable manner.
Data set creation and evaluation
Converting raw observations into data sets which are of use to climate science is difficult. Because of these difficulties, it is dangerous to consider observationally-based data sets as unproblematic data points which one can use to build and challenge theories and hypotheses regarding the climate. The reality is not so simple. The data sets are themselves based on assumptions and hypotheses concerning the means by which the observed quantity is physically related to the climatological variable of interest. In the first example given above, the MSUs are sensitive to microwave emissions from oxygen molecules in the atmosphere.
To convert the measured radiances to atmospheric temperature requires knowledge of atmospheric structure, the physical state of the satellite, quantum mechanics and orbital geometry. In the first two examples above, the earliest attempts to create homogeneous data series underestimated the uncertainties because they did not consider a wide enough range of systematic effects. The physical understanding of the system under study was incomplete. Such problems are not unique to the study of climate data; see, for example, Kirshner (2004) on the difficulties of estimating the Hubble constant. The uncertainty highlighted by the differences between independently processed data sets is often referred to as structural uncertainty. It arises from the many different choices made in the processing chain from raw observations to finished product. Part of this difference will arise from the different systematic effects considered, implicitly and explicitly, by the groups, but part will also arise from the different ways independent groups tackle the same problems. In most cases there are a wide variety of ways in which a particular problem can be approached, and no single method can be proved definitively to be correct. The uncertainty associated with small changes in method (for example, using a 99% significance cutoff as opposed to 95% for identifying station breaks) can be assessed using Monte Carlo techniques. This slow evolution of methods is what drives improvements in the understanding of the data. It also highlights the fact that no reprocessing is likely to be final and definitive. These considerations show the ongoing importance of making multiple, independent data sets of the same variable; many analyses that rely on climate data sets use several such data sets to show that their results are not sensitive to structural uncertainty.
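The sensitivity of a result to a methodological choice, such as the significance cutoff used to flag station breaks, can be probed with a simple Monte Carlo experiment along these lines. The sketch below is illustrative only: the break test is a crude two-sample statistic, not any published homogenisation algorithm, and the function names, thresholds and sample sizes are all assumptions.

```python
import numpy as np

def max_break_stat(series):
    """Largest two-sample mean-difference z statistic over candidate split
    points -- a crude stand-in for station-break (homogeneity) tests."""
    n = len(series)
    best = 0.0
    for k in range(10, n - 10):  # skip very short segments
        a, b = series[:k], series[k:]
        se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
        best = max(best, abs(a.mean() - b.mean()) / se)
    return best

def detection_rate(shift, cutoff, trials=200, n=120):
    """Monte Carlo estimate of how often a step change of `shift` (in units
    of the noise standard deviation) at the series midpoint is flagged as a
    break at the given z cutoff."""
    rng = np.random.default_rng(1)  # fixed seed: same samples for each cutoff
    hits = 0
    for _ in range(trials):
        x = rng.standard_normal(n)
        x[n // 2:] += shift          # synthetic station break at the midpoint
        if max_break_stat(x) > cutoff:
            hits += 1
    return hits / trials

# Compare a 95% cutoff (z ~ 1.96) with a 99% cutoff (z ~ 2.576) for a
# modest 0.5-sigma break: the stricter cutoff misses more real breaks,
# while the looser one flags more spurious ones.
rate_95 = detection_rate(0.5, 1.96)
rate_99 = detection_rate(0.5, 2.576)
```

Running many such synthetic series through both variants of the processing chain quantifies how much of the spread between finished products is attributable to that single methodological choice.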
Assessing the quality of anything is a difficult task. Climate research encompasses a large range of studies, from process studies, overlapping more traditional research, that focus on large space-time-scale interactions and coupling (i.e., feedbacks), to global long-term monitoring (change detection) and attribution (change explanation). Planning for the needs of all of these uses is difficult. Greater transparency and traceability of raw data characteristics, analysis methods and data product uncertainties would also help users judge whether a particular product is useful for a particular study. Given the large range of data products currently available (both raw and analysed), it is sometimes difficult for users to identify, locate and obtain what they need unless there is an organized set of information available. A number of approaches can help users find the data they need. First, users need information about the various data sets. Journal papers and technical reports describing data set construction are often of limited use as user guides, with technical details hidden behind journal paywalls or spread across a series of publications. Initiatives such as the Climate Data Guide project (http://climatedataguide.ucar.edu/) aim to provide expert and concise reviews of data and quality. By comparing data sets side by side in a common setting, it should be easier for users to understand the relative strengths and weaknesses of different data sets. Second, users need to be able to find the data. This is easiest if there exists a common method for data discovery. At the basic level of individual meteorological reports, there exist a large number of archives (as mentioned before). At a higher level, there is no single repository for gridded and otherwise processed observational data sets that is analogous to the CMIP archive of model data. Third, the information and data sets need to be integrated.
There is not as yet a systematic way to gather the value that has been added by the community that works with the data.
2. Terrestrial and marine data efforts need to be integrated and better linked up under an international framework that supports their activities in a fully sustainable manner.
3. An archive of observational data sets, analogous to the CMIP archive of model data, should be set up and integrated with user-oriented information such as the Climate Data Guide.
Reanalysis of Observations
Reanalyses differ from reprocessed observational data sets in that sophisticated data assimilation techniques are used in combination with global forecast models to produce global estimates of continuous data fields based on multiple observational sources. One advantage of this approach is that reanalysis data products are available at all points in space and time, and that many ancillary variables, not easily or routinely observed, are generated by the forecast model subject to the constraints provided by the observations. An important disadvantage of the reanalysis technique, however, is that the effect of model biases on the reanalyzed fields depends on the strength of the observational constraint, which varies both in space and time. This needs to be taken into account when reanalysis data are used for weather and climate research. The types of observations assimilated span the breadth of remotely sensed and in-situ instrumental observations. Dealing with the complexities and uncertainties in the observing system, including data selection, quality control and bias correction, can have a crucial effect on the quality of the resulting reanalysis data. Given the importance of reanalysis for weather and climate research and applications, successive generations of advanced reanalysis products can be anticipated.
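The trade-off between the observational constraint and model bias described above can be seen in the simplest possible assimilation step: a scalar optimal-interpolation (Kalman) update. This is a sketch of the principle only, not of any operational reanalysis system, and all numerical values are invented for illustration.

```python
def analysis_update(background, obs, b_var, r_var):
    """One scalar analysis step: blend the model background with an
    observation, weighted by their error variances via the Kalman gain k."""
    k = b_var / (b_var + r_var)       # weight given to the observation
    x_a = background + k * (obs - background)
    a_var = (1.0 - k) * b_var         # analysis error variance
    return x_a, a_var

# Well-observed case: an accurate observation (small error variance)
# pulls the analysis strongly away from the model background.
x_good, v_good = analysis_update(background=15.0, obs=14.0, b_var=4.0, r_var=1.0)

# Poorly-observed case: with a large observation error the analysis stays
# near the model background, so model biases leak into the reanalysed field.
x_poor, v_poor = analysis_update(background=15.0, obs=14.0, b_var=4.0, r_var=100.0)
```

Because the gain depends on both error variances, the same model bias is suppressed where observations are dense and accurate but passes largely unchecked where they are sparse or noisy, which is why the strength of the observational constraint varies in space and time.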
    In the near future, coupling the ocean, land and atmosphere will allow a more integrated reanalysis of historical observations, but may also increase the influence of model uncertainty. However, given the complexity of all the components of the Earth system, realizing the true potential of such advancements will require coordination, not only among developers of future reanalyses but also with the wider research community.