Exploring Past Climate Variability in the Greater Alpine Region
The presentation discusses the potential, the needs, and the state of the art of climate variability data quality and analysis in the instrumental period, using the Greater Alpine Region as an example. Problems and solutions concerning non-climatic noise in time series (the homogeneity and outlier problem) are discussed, and some first results based on the new HISTALP datasets are shown.
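The homogeneity and outlier screening mentioned above can be illustrated with a minimal sketch (not the HISTALP procedure itself; the robust z-score filter, the thresholds, and the cumulative-deviation break indicator are illustrative assumptions):

```python
import numpy as np

def flag_outliers(series, z_thresh=3.5):
    """Flag values whose robust z-score (based on the median absolute
    deviation) exceeds a threshold -- a simple outlier screen."""
    med = np.median(series)
    mad = np.median(np.abs(series - med)) or 1e-9
    z = 0.6745 * (series - med) / mad
    return np.abs(z) > z_thresh

def break_statistic(candidate, reference):
    """Cumulative deviation of the candidate/reference difference; a
    pronounced peak hints at an inhomogeneity (break) near that index."""
    diff = candidate - reference
    anomaly = diff - diff.mean()
    return np.abs(np.cumsum(anomaly))

temps = np.array([9.1, 9.3, 8.9, 9.0, 14.2, 9.2, 9.1, 10.1, 10.0, 10.2])
ref   = np.array([9.0, 9.2, 9.0, 9.1, 9.1, 9.3, 9.0, 9.1, 9.2, 9.1])
print(flag_outliers(temps))                   # only the 14.2 reading is flagged
print(break_statistic(temps, ref).argmax())   # index of largest cumulative deviation
```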
The City and Progress: A Thought
Scientific colloquium, 27 to 30 June 1996, at the Bauhaus-Universität in Weimar, on the topic 'Techno-Fiction: On the Critique of Technological Utopias'
Variety Choice and Pest Pressure in Organic Oilseed Rape Cultivation
In 2009 and 2010, variety trials with 7 line and 4 hybrid varieties were conducted at the experimental farm of the Institute of Organic Farming (Trenthorst). The weather conditions in 2009 led to severe physiological bud wilting; in 2010, the pre-winter development of the oilseed rape suffered from drought after sowing and from the snowy winter. In both years, yields were at a low to medium level. Under these conditions, the variety Visby achieved the highest yield, with 20.8 and 20.6 dt ha⁻¹, respectively. In 2009 the yield of the variety Robust, at 20.5 dt ha⁻¹, was at a comparable level; in 2010, however, Robust showed the lowest yield (10.3 dt ha⁻¹), while Lorenz, at 17.7 dt ha⁻¹, was the best line variety. The lower pollen beetle infestation in 2010 had no positive effect on the yield level, because the rape had already been weakened too much by the winter weather conditions.
Forecasting Evolving Time Series of Energy Demand and Supply
Real-time balancing of energy demand and supply requires accurate and efficient forecasting in order to take future consumption and production into account. The need for these balancing capabilities stems from emerging developments in the energy markets, which also pose new challenges to forecasting in the energy domain that have not been addressed so far: first, real-time balancing requires accurate forecasts at any point in time; second, the hierarchical market organization motivates forecasting in a distributed system environment. In this paper, we present an approach that adapts forecasting to the hierarchical organization of today's energy markets. Furthermore, we introduce a forecasting framework that allows efficient forecasting and forecast model maintenance for time series that evolve due to continuous streams of measurements. This framework includes model evaluation and adaptation techniques that enhance the model maintenance process by exploiting context knowledge from previous model adaptations. With this approach, (1) more accurate forecasts can be produced within the same time budget, or (2) forecasts of similar accuracy can be produced in less time.
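As a rough illustration of the evaluate-and-adapt loop the abstract describes, the following sketch maintains a simple exponential smoothing model over a stream of measurements and re-estimates its parameter when the forecast error grows too large, starting the search from the current parameter as a stand-in for the paper's "context knowledge from previous adaptations". The model choice, thresholds, and names are assumptions, not the authors' implementation.

```python
class MaintainedForecaster:
    """Exponential smoothing with threshold-triggered adaptation.
    Illustrative only: the paper's framework, model types, and
    adaptation heuristics are richer than this sketch."""

    def __init__(self, alpha=0.5, error_threshold=2.0):
        self.alpha = alpha
        self.error_threshold = error_threshold
        self.level = None
        self.history = []

    def forecast(self):
        return self.level

    def update(self, actual):
        self.history.append(actual)
        if self.level is None:
            self.level = actual
            return
        if abs(actual - self.level) > self.error_threshold:
            self.adapt()                 # model no longer fits: re-estimate
        self.level = self.alpha * actual + (1 - self.alpha) * self.level

    def adapt(self):
        # Re-estimate alpha on the observed history, searching around the
        # current value (context knowledge from earlier adaptations).
        candidates = [self.alpha * f for f in (0.8, 1.0, 1.2)]
        self.alpha = min((a for a in candidates if 0 < a <= 1),
                         key=self._sse)

    def _sse(self, alpha):
        level, sse = self.history[0], 0.0
        for x in self.history[1:]:
            sse += (x - level) ** 2
            level = alpha * x + (1 - alpha) * level
        return sse

stream = [10.0, 10.5, 10.2, 14.0, 14.5, 14.2]   # level shift at the 4th value
m = MaintainedForecaster()
for value in stream:
    m.update(value)                              # adaptation fires on the shift
    print(round(m.forecast(), 2))
```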
Indexing forecast models for matching and maintenance
Forecasts are important to decision-making and risk assessment in many domains. There has been recent interest in integrating forecast queries inside a DBMS. Answering a forecast query requires the creation of forecast models. Creating a forecast model is an expensive process that may require several scans over the base data as well as costly operations to estimate model parameters. However, when forecast queries are issued repeatedly, answer times can be reduced significantly by reusing forecast models. Due to the possibly high number of forecast queries, existing models need to be found quickly. We therefore propose a model index that efficiently stores forecast models and allows existing ones to be found and reused quickly. Our experiments illustrate that the model index incurs negligible overhead for update transactions while yielding significant improvements during query execution.
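The reuse idea can be sketched with a toy index that caches fitted models under a signature derived from the query; the paper's actual index structure and matching logic are more sophisticated, and all names below are illustrative.

```python
import hashlib

class ModelIndex:
    """Toy forecast-model index: caches fitted models under a signature
    derived from the query (relation, measure, granularity), so repeated
    forecast queries reuse an existing model instead of refitting."""

    def __init__(self):
        self._models = {}

    @staticmethod
    def signature(relation, measure, granularity):
        key = f"{relation}|{measure}|{granularity}"
        return hashlib.sha1(key.encode()).hexdigest()

    def lookup_or_build(self, relation, measure, granularity, build_fn):
        sig = self.signature(relation, measure, granularity)
        if sig not in self._models:
            self._models[sig] = build_fn()   # expensive: scans base data
        return self._models[sig]

index = ModelIndex()
fit_calls = []

def expensive_fit():
    fit_calls.append(1)                      # stands in for parameter estimation
    return {"type": "ar", "params": [0.7]}

m1 = index.lookup_or_build("sales", "revenue", "day", expensive_fit)
m2 = index.lookup_or_build("sales", "revenue", "day", expensive_fit)
print(len(fit_calls), m1 is m2)              # 1 True: the second query reused the model
```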
Managed Query Processing within the SAP HANA Database Platform
The SAP HANA database extends the scope of traditional database engines as it supports data models beyond regular tables, e.g. text, graphs, or hierarchies. Moreover, SAP HANA gives developers fine-grained control over their database application logic, e.g. by exposing specific operators that are difficult to express in SQL. Finally, the SAP HANA database implements efficient communication with dedicated client applications, using mechanisms more effective than standard interfaces such as JDBC or ODBC. These features are complemented by the extended scripting engine, an application server for server-side JavaScript applications that is tightly integrated into query processing and application lifecycle management. As a result, the HANA platform offers more concise models and code for working with the platform and provides superior runtime performance. This paper describes how these capabilities of the HANA platform can be consumed and gives a holistic overview of the platform, from query modeling to deployment and efficient execution. As a distinctive feature, the HANA platform integrates most steps of the application lifecycle and thus ensures that all relevant artifacts stay consistent whenever they are modified. The platform also provides transport facilities to deploy and undeploy applications in a complex system landscape.
DIPBench: An Independent Benchmark for Data-Intensive Integration Processes
The integration of heterogeneous data sources is one of the main challenges within the area of data engineering. Due to the absence of an independent and universal benchmark for data-intensive integration processes, we propose a scalable benchmark, called DIPBench (Data-Intensive integration Process Benchmark), for evaluating the performance of integration systems. This benchmark can be used for subscription systems, such as replication servers and distributed or federated DBMS, as well as for message-oriented middleware platforms like Enterprise Application Integration (EAI) servers and Extraction Transformation Loading (ETL) tools. To achieve this universal view of integration processes, the benchmark is designed in a conceptual, process-driven way. The benchmark comprises 15 integration process types. We specify the source and target data schemas and provide a toolsuite for the initialization of the external systems, the execution of the benchmark, and the monitoring of the integration system's performance. The core benchmark execution can be influenced by three scale factors. Finally, we discuss the metric used for evaluating the measured integration system's performance, and we illustrate our reference benchmark implementation for federated DBMS.
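A benchmark driver in this spirit might look like the following sketch: the 15 integration process types are executed under three scale factors, and a simple throughput metric is reported. The scale-factor semantics, the placeholder workload, and the metric are illustrative assumptions, not DIPBench's actual specification.

```python
import time

def run_process(process_type, data_volume):
    """Stand-in for executing one integration process on the system under test."""
    start = time.perf_counter()
    _ = sum(range(data_volume))                 # placeholder workload
    return time.perf_counter() - start

def run_benchmark(data_scale=1, instance_scale=1, frequency_scale=1):
    # Three illustrative scale factors influence the core execution:
    # data volume per process, instances per type, and execution frequency.
    total_time, executed = 0.0, 0
    for process_type in range(15):              # the 15 process types
        for _ in range(instance_scale * frequency_scale):
            total_time += run_process(process_type, 10_000 * data_scale)
            executed += 1
    return executed / max(total_time, 1e-9)     # metric: processes per second

print(f"{run_benchmark(data_scale=2, frequency_scale=3):.1f} processes/s")
```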
GCIP: Exploiting the Generation and Optimization of Integration Processes
As a result of the changing scope of data management towards the management of highly distributed systems and applications, integration processes have gained in importance. Such integration processes represent an abstraction of workflow-based integration tasks. In practice, integration processes are pervasive, and the performance of complete IT infrastructures strongly depends on the performance of the central integration platform that executes the specified integration processes. In this area, the three major problems are (1) significant development effort, (2) low portability, and (3) inefficient execution. To overcome these problems, we follow a model-driven generation approach for integration processes. In this demo proposal, we introduce the GCIP Framework (Generation of Complex Integration Processes), which allows the modeling of integration processes and the generation of different concrete integration tasks. The model-driven approach opens opportunities for rule-based and workload-based optimization techniques.
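The rule-based optimization such a model-driven approach enables can be sketched on a toy process representation: a rewrite rule moves a selection in front of an expensive join so that fewer tuples reach the join. The operator names and the single rule are illustrative, not GCIP's actual rule set.

```python
# Toy integration-process plan: an ordered list of (operator, argument) pairs.

def push_selection_before_join(plan):
    """One rewrite pass: move a Selection that directly follows a Join in
    front of it (valid when the predicate touches one join input only,
    which this sketch simply assumes)."""
    optimized = list(plan)
    for i in range(len(optimized) - 1):
        if optimized[i][0] == "Join" and optimized[i + 1][0] == "Selection":
            optimized[i], optimized[i + 1] = optimized[i + 1], optimized[i]
    return optimized

plan = [("Receive", "orders"), ("Join", "customers"),
        ("Selection", "region = 'EU'"), ("Send", "target")]
print(push_selection_before_join(plan))
# [('Receive', 'orders'), ('Selection', "region = 'EU'"),
#  ('Join', 'customers'), ('Send', 'target')]
```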
Optimizing Notifications of Subscription-Based Forecast Queries
Integrating sophisticated statistical methods into database management systems is gaining more and more attention in research and industry. One important statistical method is time series forecasting, which is crucial for decision management in many domains. In this context, previous work addressed the processing of ad-hoc and recurring forecast queries. In contrast, we focus on subscription-based forecast queries, which arise when an application (subscriber) continuously requires forecast values for further processing. Forecast queries exhibit the unique characteristic that the underlying forecast model is updated with each new actual value, so better forecast values might become available. However, (re-)sending new forecast values to the subscriber for every new value is infeasible, because this can cause significant overhead on the subscriber side. The subscriber therefore wishes to be notified only when forecast values have changed in a way that is relevant to the application. In this paper, we reduce the costs of the subscriber by optimizing the notifications sent to it, i.e., by balancing the number of notifications and the notification length. We introduce a generic cost model to capture arbitrary subscriber cost functions and discuss different optimization approaches that reduce the subscriber costs while keeping the deviation of forecast values within defined bounds. Our experimental evaluation on real datasets demonstrates the validity of our approach at low computational cost.
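The core notification decision can be sketched as follows: the publisher re-forecasts after every new measurement but notifies the subscriber only when the fresh forecasts deviate from the values the subscriber last received by more than a tolerated bound. The relative threshold and the data are assumptions for illustration; the paper's cost model is more general.

```python
def should_notify(last_sent, current, max_deviation=0.05):
    """Notify when any forecast horizon deviates from the last values sent
    by more than max_deviation (relative). Threshold is illustrative."""
    return any(abs(c - s) > max_deviation * max(abs(s), 1e-9)
               for s, c in zip(last_sent, current))

last_sent = [100.0, 102.0, 104.0]        # forecasts the subscriber currently holds
updates = [[100.5, 102.2, 104.1],        # small revision: notification suppressed
           [108.0, 110.0, 112.0]]        # relevant change: notification sent

notifications = 0
for current in updates:
    if should_notify(last_sent, current):
        notifications += 1
        last_sent = current              # subscriber now holds these values
print(notifications)                     # 1
```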