109 research outputs found

    Opportunities and Limits of Joint Implementation for Fossil-Fuel Power Plants: The Example of the PR China

    The Rio de Janeiro climate convention opened up Joint Implementation (JI) as a means of meeting greenhouse-gas reduction commitments. JI projects allow industrialized countries to fulfil the obligations they have entered into under international treaties by financing environmental protection measures in other countries. Large CO2 reduction potentials exist in China: by the year 2010, additional coal-fired power plants with a capacity of 13,000 MW are to be built there every year. The resulting rise in CO2 emissions will largely offset the effects of CO2 reduction measures in the industrialized countries. CO2 reductions achieved through JI projects in the construction and retrofitting of coal-fired power plants in China can ease the pressure created by the expansion of China's energy sector. At the same time, new market opportunities would open up for the coal power plant industry. Against this background, the ZEW, within an internal project on "Opportunities for environmental technologies under the influence of environmental policy frameworks", examined, among other things, the prospects of the power plant industry in connection with carrying out Joint Implementation measures. The aim of this sub-study was to investigate, using the Federal Republic of Germany and the PR China as examples, what possibilities the Joint Implementation instrument offers for carrying out projects in the field of fossil-fuel power plants. First, the future importance of the PR China and the Federal Republic of Germany for a global CO2 reduction strategy was worked out. Then the manner and extent to which various technical options can contribute to emission reduction were analyzed.
    This step formed the basis for discussing how advanced power plant technologies could be deployed to exploit reduction potentials in the Federal Republic of Germany and the PR China. Building on this, a concept was finally developed that enables an efficient use of reduction potentials in the PR China through JI power plant projects.

    A Sample Advisor for Approximate Query Processing

    The rapid growth of current data warehouse systems makes random sampling a crucial component of modern data management systems. Although there is a large body of work on database sampling, the problem of automatic sample selection has remained (almost) unaddressed. In this paper, we tackle the problem with a sample advisor. We propose a cost model to evaluate a sample for a given query. Based on this, our sample advisor determines the optimal set of samples for a given set of queries specified by an expert. We further propose an extension that utilizes recorded workload information. In this case, the sample advisor takes the set of queries and a given memory bound into account for the computation of a sample advice. Additionally, we consider the merging of samples in case of overlapping sample advice and present both an exact and a heuristic solution. Within our evaluation, we analyze the properties of the cost model and compare the proposed algorithms. We further demonstrate the effectiveness and the efficiency of the heuristic solutions with a variety of experiments.
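    The advisor's selection step can be pictured as a small optimization loop. The sketch below is a hypothetical illustration, not the paper's actual cost model: `sample_cost` is a toy error estimate, and the greedy loop picks, per query, the sample size with the lowest cost that still fits the memory bound.

```python
# Hypothetical sketch of a cost-model-driven sample advisor. The cost
# function and greedy strategy are illustrative assumptions.

def sample_cost(sample_size, query_selectivity):
    """Toy cost model: expected error shrinks with sample size,
    scaled by how selective the query is."""
    return query_selectivity / (sample_size ** 0.5)

def advise_samples(queries, candidate_sizes, memory_bound):
    """Greedily pick one sample size per query until memory runs out."""
    advice = {}
    used = 0
    for q_id, selectivity in queries:
        # choose the lowest-cost (most accurate) size that still fits
        best = None
        for size in sorted(candidate_sizes):
            if used + size <= memory_bound:
                if best is None or sample_cost(size, selectivity) < sample_cost(best, selectivity):
                    best = size
        if best is not None:
            advice[q_id] = best
            used += best
    return advice

queries = [("q1", 0.9), ("q2", 0.1)]
print(advise_samples(queries, [100, 400], 500))  # q1 gets the big sample
```

    A real advisor would trade error across queries jointly; the greedy form only shows the general shape of a cost-model-driven decision.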

    Optimizing Sample Design for Approximate Query Processing

    The rapid increase of data volumes makes sampling a crucial component of modern data management systems. Although there is a large body of work on database sampling, the problem of automatically determining the optimal sample for a given query has remained (almost) unaddressed. To tackle this problem, the authors propose a sample advisor based on a novel cost model. Primarily designed for advising samples for a few queries specified by an expert, the sample advisor is additionally extended in two ways. The first extension enhances its applicability by utilizing recorded workload information and taking memory bounds into account. The second extension increases its effectiveness by merging samples in case of overlapping pieces of sample advice. For both extensions, the authors present exact and heuristic solutions. Within their evaluation, the authors analyze the properties of the cost model and demonstrate the effectiveness and the efficiency of the heuristic solutions with a variety of experiments.
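    The merging of overlapping pieces of sample advice can be sketched with a simple greedy heuristic. The representation (one piece of advice per table with a column set and a size) and the merge rule below are simplifying assumptions for illustration, not the paper's actual formulation.

```python
# Illustrative sketch: merge overlapping pieces of sample advice so one
# sample serves several queries. Representation and rule are assumptions.

def merge_advice(advice):
    """Greedy heuristic: merge two pieces of advice on the same table
    whenever their column sets overlap, keeping the union of columns
    and the larger sample size."""
    merged = []
    for table, cols, size in advice:
        for item in merged:
            if item["table"] == table and item["cols"] & cols:
                item["cols"] |= cols  # one sample now serves both queries
                item["size"] = max(item["size"], size)
                break
        else:
            merged.append({"table": table, "cols": set(cols), "size": size})
    return merged

advice = [("sales", {"day", "region"}, 1000),
          ("sales", {"region", "product"}, 500),
          ("stores", {"city"}, 200)]
result = merge_advice(advice)
print(len(result))  # → 2
```

    An exact solution would search over all merge combinations; the greedy pass trades optimality for a single scan over the advice list.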

    Sample Footprints for Data Warehouse Databases

    With the amount of data in current data warehouse databases growing steadily, random sampling is continuously gaining in importance. In particular, interactive analyses of large datasets can greatly benefit from the significantly shorter response times of approximate query processing. In this scenario, Linked Bernoulli Synopses provide memory-efficient schema-level synopses, i.e., synopses that consist of random samples of each table in the schema with minimal overhead for retaining foreign-key integrity within the synopsis. This provides efficient support for the approximate answering of queries with arbitrary foreign-key joins. In this article, we focus on the application of Linked Bernoulli Synopses in data warehouse environments. On the one hand, we analyze the instantiation of memory-bounded synopses. Among others, we address the following questions: How can the given space be partitioned among the individual samples? What is the impact on the overhead? On the other hand, we consider further adaptations of Linked Bernoulli Synopses for usage in data warehouse databases. We show how synopses can incrementally be kept up to date when the underlying data changes. Further, we suggest additional outlier handling methods to reduce the estimation error of approximate answers to aggregation queries with foreign-key joins. With a variety of experiments, we show that Linked Bernoulli Synopses and the proposed techniques have great potential in the context of data warehouse databases.
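    The incremental-maintenance idea for a single table can be sketched in a few lines. This is a simplified stand-in for the article's maintenance strategy, not its actual algorithm: a Bernoulli sample admits each newly inserted row independently with probability q, so inserts and deletes can be reflected without rescanning the base data.

```python
import random

# Minimal sketch of incrementally maintaining a Bernoulli sample under
# inserts and deletes; a simplified stand-in for schema-level maintenance.

class BernoulliSample:
    def __init__(self, q, seed=42):
        self.q = q                      # inclusion probability
        self.rows = set()
        self.rng = random.Random(seed)  # seeded for reproducibility

    def insert(self, row_id):
        # each new row enters the sample independently with probability q
        if self.rng.random() < self.q:
            self.rows.add(row_id)

    def delete(self, row_id):
        # deletions only matter if the row happened to be sampled
        self.rows.discard(row_id)

sample = BernoulliSample(q=0.1)
for i in range(10000):
    sample.insert(i)
print(len(sample.rows))  # roughly 1000 rows, i.e. about q * n
```

    The schema-level case additionally has to re-include or drop referenced rows to keep foreign-key integrity, which is where the extra bookkeeping of the synopsis comes in.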

    Linked Bernoulli Synopses: Sampling along Foreign Keys

    Random sampling is a popular technique for providing fast approximate query answers, especially in data warehouse environments. Compared to other types of synopses, random sampling bears the advantage of retaining the dataset’s dimensionality; it also associates probabilistic error bounds with the query results. Most of the available sampling techniques focus on table-level sampling, that is, they produce a sample of only a single database table. Queries that contain joins over multiple tables cannot be answered with such samples because join results on random samples are often small and skewed. In contrast, schema-level sampling techniques support queries containing joins by design. In this paper, we introduce Linked Bernoulli Synopses, a schema-level sampling scheme based upon the well-known Join Synopses. Both schemes rely on the idea of maintaining foreign-key integrity in the synopses; they are therefore suited to process queries containing arbitrary foreign-key joins. In contrast to Join Synopses, however, Linked Bernoulli Synopses correlate the sampling processes of the different tables in the database so as to minimize the space overhead, without destroying the uniformity of the individual samples. We also discuss how to compute Linked Bernoulli Synopses that maximize the effective sampling fraction for a given memory budget. Computing the optimal solution is often prohibitively expensive, so approximate solutions are needed. We propose a simple heuristic approach that is fast and seems to produce close-to-optimum results in practice. We conclude the paper with an evaluation of our methods on both synthetic and real-world datasets.
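    The core idea of foreign-key-aware sampling can be sketched as follows: sample the referencing table, then pull in every referenced row so the synopsis stays join-complete. Linked Bernoulli Synopses go further by correlating the per-table coin flips to reduce the space overhead; that correlation is omitted in this hedged sketch, and all names are illustrative.

```python
import random

# Sketch of foreign-key-aware sampling: every foreign key in the fact
# sample must resolve inside the synopsis. Names are illustrative.

def fk_aware_sample(fact_rows, dim_rows, q, seed=7):
    rng = random.Random(seed)
    # Bernoulli-sample the referencing (fact) table
    fact_sample = [r for r in fact_rows if rng.random() < q]
    # retain referential integrity: include each referenced dimension row
    needed_keys = {r["dim_id"] for r in fact_sample}
    dim_sample = [d for d in dim_rows if d["id"] in needed_keys]
    return fact_sample, dim_sample

facts = [{"id": i, "dim_id": i % 3} for i in range(100)]
dims = [{"id": k} for k in range(3)]
fs, ds = fk_aware_sample(facts, dims, q=0.2)
# every foreign key in the fact sample resolves inside the synopsis
assert all(any(d["id"] == f["dim_id"] for d in ds) for f in fs)
```

    The naive version above may store dimension rows that a plain dimension sample would also have picked; correlating the sampling decisions, as Linked Bernoulli Synopses do, is what removes that redundancy.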

    Flow cytometric quantification of apoptotic and proliferating cells applying an improved method for dissociation of spheroids

    Spheroids are a promising tool for many cell culture applications, but their microscopic analysis is limited. Flow cytometry on a single-cell basis, which requires a gentle yet efficient dissociation of spheroids, could serve as an alternative analysis method. Mono-culture and coculture spheroids consisting of human fibroblasts and human endothelial cells were generated by the liquid overlay technique and were dissociated using AccuMax as a dissociation agent combined with gentle mechanical forces. This study aimed to quantify the number of apoptotic and proliferative cells. We were able to dissociate spheroids of differing size, age, and cellular composition in a single-step dissociation protocol within 10 min. The fraction of single cells was higher than 95%, and in most cases the viability of the cells after dissociation was higher than 85%. Coculture spheroids exhibited a higher sensitivity, as shown by lower viability, a higher amount of cellular debris, and a higher number of apoptotic cells. Considerable expression of the proliferation marker Ki67 could only be seen in 1-day-old spheroids and was already downregulated on Day 3. In summary, our dissociation protocol enabled a fast and gentle dissociation of spheroids for subsequent flow cytometric analysis. The chosen cell type had a strong influence on cell viability and apoptosis. Initially high rates of proliferative cells decreased rapidly and reached values of healthy tissue 3 days after generation of the spheroids. In conclusion, flow cytometry of dissociated spheroids could be a promising analytical tool, which could ideally be combined with microscopic techniques.

    Efficient Forecasting for Hierarchical Time Series

    Forecasting is used as the basis for business planning in many application areas such as energy, sales, and traffic management. Time series data used in these areas is often hierarchically organized and thus aggregated along the hierarchy levels based on its dimensional features. Calculating forecasts in these environments is very time consuming, due to the need to ensure forecasting consistency between hierarchy levels. To increase the forecasting efficiency for hierarchically organized time series, we introduce a novel forecasting approach that takes advantage of the hierarchical organization. We reuse the forecast models maintained on the lowest level of the hierarchy to almost instantly create already-estimated forecast models on higher hierarchical levels. In addition, we define a hierarchical communication framework that increases communication flexibility and efficiency. Our experiments show significant runtime improvements for creating a forecast model at higher hierarchical levels, while still providing very high accuracy.
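    The reuse of base-level models can be illustrated with additive linear trend models, which are an assumption made here for clarity (the paper's approach is not tied to this model class): for additive models, summing the base-level parameters yields a higher-level model instantly, and its forecasts are automatically consistent with the bottom-up sum.

```python
# Toy sketch of reusing base-level models at a higher hierarchy level.
# Linear trend models (intercept, slope) are an illustrative assumption.

def forecast(model, t):
    intercept, slope = model
    return intercept + slope * t

def aggregate_model(base_models):
    # for additive models, summing parameters yields the aggregate model
    # almost instantly, with no new estimation step
    return (sum(m[0] for m in base_models), sum(m[1] for m in base_models))

households = [(10.0, 0.5), (20.0, 0.3), (5.0, 0.1)]
city_model = aggregate_model(households)

t = 12
bottom_up = sum(forecast(m, t) for m in households)
top_level = forecast(city_model, t)
assert abs(bottom_up - top_level) < 1e-9  # forecasts stay consistent
print(top_level)
```

    This consistency-by-construction is what removes the reconciliation step that otherwise makes hierarchical forecasting so expensive.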

    pEDM: Online-Forecasting for Smart Energy Analytics

    Continuous balancing of energy demand and supply is a fundamental prerequisite for the stability of energy grids and requires accurate forecasts of electricity consumption and production at any point in time. Today's Energy Data Management (EDM) systems already provide accurate predictions, but typically employ a very time-consuming and inflexible forecasting process. Emerging trends such as intra-day trading and an increasing share of renewable energy sources, however, demand higher forecasting efficiency. Additionally, the wide variety of applications in the energy domain poses different requirements with respect to runtime and accuracy and thus requires flexible control of the forecasting process. To solve this issue, we introduce a novel online forecasting process as part of our EDM system pEDM. The online forecasting process rapidly provides forecasting results and iteratively refines them over time. Thus, we avoid long calculation times and allow applications to adapt the process to their needs. Our evaluation shows that our online forecasting process offers a very efficient and flexible way of providing forecasts to the requesting applications.
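    An online forecasting loop in this spirit can be sketched as a coarse-to-fine parameter search that delivers a usable forecast after every refinement step. Simple exponential smoothing and the grid-refinement scheme below are assumptions for illustration, not pEDM's actual algorithm.

```python
# Hedged sketch of an iteratively refining forecast: each pass narrows
# the search range for the smoothing factor and emits a usable forecast.

def smooth_error(series, alpha):
    """One-step-ahead squared error of simple exponential smoothing."""
    level, err = series[0], 0.0
    for x in series[1:]:
        err += (x - level) ** 2
        level = alpha * x + (1 - alpha) * level
    return err

def online_forecast(series, iterations=3):
    """Search the smoothing factor on successively finer grids;
    a forecast is available after every iteration."""
    lo, hi = 0.0, 1.0
    results = []
    for _ in range(iterations):
        grid = [lo + i * (hi - lo) / 4 for i in range(5)]
        best = min(grid, key=lambda a: smooth_error(series, a))
        step = (hi - lo) / 4
        lo, hi = max(0.0, best - step), min(1.0, best + step)
        # current forecast = last smoothed level under the best alpha
        level = series[0]
        for x in series[1:]:
            level = best * x + (1 - best) * level
        results.append(level)
    return results

results = online_forecast([1, 2, 3, 4, 5])
print(results)  # early entries are rough, later ones refined
```

    A requesting application can stop after the first iteration when runtime matters more than accuracy, which is the kind of flexible control the abstract describes.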

    Forecasting in Hierarchical Environments

    Forecasting is an important data analysis technique and serves as the basis for business planning in many application areas such as energy, sales, and traffic management. The currently employed statistical models already provide very accurate predictions, but the forecast calculation process is very time consuming. This is especially true since many application domains deal with hierarchically organized data. Forecasting in these environments is particularly challenging due to the need to ensure forecasting consistency between hierarchy levels, which leads to increased data processing and communication effort. For this purpose, we introduce our novel hierarchical forecasting approach, in which we propose to push forecast models to the entities on the lowest hierarchy level and to reuse these models to efficiently create forecast models on higher hierarchical levels. With that, we avoid the time-consuming parameter estimation process and allow an almost instant calculation of forecasts.