CLOUD COMPUTING ECOSYSTEM MODEL: REFINEMENT AND EVALUATION
A business ecosystem has evolved in the field of cloud computing whereby new types of market actors have emerged, breaking up the traditional value chain of IT service provision. In order to create a profound understanding of this ecosystem, several scholars have tried to capture it in a model. However, these models differ considerably from each other. The goal of this paper, therefore, is to develop a revised and comprehensive cloud computing ecosystem model according to the design science paradigm. For this purpose, the recently published Passau Cloud Computing Ecosystem Model (PaCE Model) is developed significantly further by integrating the insights of an analysis of the existing cloud ecosystem models regarding ten criteria and by considering findings from the general cloud and business ecosystem literature. To ensure the integrity of the enhanced PaCE Model, the Internet is manually searched for companies occupying the roles of the model. As a result, the model comprises 26 roles and includes the basic service flows. Since missing market transparency is regarded as one of the main reasons for low cloud adoption, the intended contribution is to foster a better understanding of the cloud ecosystem and to provide a conceptual framework for further research.
GignoMDA
Database systems are often used as the persistence layer for applications. This implies that database schemas are generated from transient programming class descriptions. The basic idea of the MDA approach generalizes this principle by providing a framework to generate applications (and database schemas) for different programming platforms. Within our GignoMDA project [3]--which is the subject of this demo proposal--we have extended classic concepts for code generation. That means our approach provides a single point of truth describing all aspects of database applications (e.g. database schema, project documentation, ...) with great potential for cross-layer optimization. These new cross-layer optimization hints are a novel way to tackle the challenging global optimization issue of multi-tier database applications. The demo at VLDB comprises an in-depth explanation of our concepts and of the prototypical implementation by directly demonstrating the modeling and the automatic generation of database applications.
Robust Real-time Query Processing with QStream
Processing data streams with Quality-of-Service (QoS) guarantees is an emerging area in streaming applications. Although it is possible to negotiate the result quality and to reserve the required processing resources in advance, it remains a challenge to adapt the DSMS to data stream characteristics that are not known in advance or are difficult to obtain. In this paper we present the second generation of our QStream DSMS, which addresses this challenge by using a real-time capable operating system environment for resource reservation and by applying an adaptation mechanism when the data stream characteristics change spontaneously.
Using Cloud Technologies to Optimize Data-Intensive Service Applications
Data analytics plays an increasingly important role in several application domains as a way to cope with the large amounts of captured data. Generally, data analytics involves data-intensive processes whose efficient execution is a challenging task. Each process consists of a collection of related, structured activities in which huge data sets have to be exchanged between several loosely coupled services. The implementation of such processes in a service-oriented environment offers some advantages, but the efficient realization of data flows is difficult. In this paper we therefore propose a novel SOA-aware approach with a special focus on the data flow. The tight interaction of new cloud technologies with SOA technologies enables us to optimize the execution of data-intensive service applications by reducing the data exchange tasks to a minimum. Fundamentally, our core concept for optimizing the data flows is found in data clouds. Moreover, we can exploit our approach to derive efficient process execution strategies with regard to different optimization objectives for the data flows.
Boundary Graph Neural Networks for 3D Simulations
The abundance of data has given machine learning considerable momentum in natural sciences and engineering. However, the modeling of simulated physical processes remains difficult. A key problem is the correct handling of geometric boundaries. While triangularized geometric boundaries are very common in engineering applications, they are notoriously difficult to model by machine learning approaches due to their heterogeneity with respect to size and orientation. In this work, we introduce Boundary Graph Neural Networks (BGNNs), which dynamically modify graph structures to address boundary conditions. Boundary graph structures are constructed via modifying edges, augmenting node features, and dynamically inserting virtual nodes. The new BGNNs are tested on complex 3D granular flow processes of hoppers and rotating drums which are standard components of industrial machinery. Using precise simulations that are obtained by an expensive and complex discrete element method, BGNNs are evaluated in terms of computational efficiency as well as prediction accuracy of particle flows and mixing entropies. Even if complex boundaries are present, BGNNs are able to accurately reproduce 3D granular flows within simulation uncertainties over hundreds of thousands of simulation timesteps, and most notably particles completely stay within the geometric objects without using handcrafted conditions or restrictions.
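The dynamic virtual-node construction can be illustrated with a minimal sketch: for particles within a cutoff distance of a boundary, a virtual node is projected onto the boundary surface and connected to the particle by an edge, and a boundary flag is added as a node feature. The function name and the restriction to a single planar boundary are assumptions of this sketch; the paper handles general triangularized boundaries.

```python
import numpy as np

def add_virtual_boundary_nodes(positions, plane_point, plane_normal, cutoff):
    """Insert virtual nodes on a planar boundary for particles within `cutoff`.

    Returns augmented node positions, a per-node boundary flag feature, and
    edges connecting each nearby particle to its virtual mirror node.
    (Hypothetical helper illustrating the dynamic virtual-node idea.)
    """
    n = np.asarray(plane_normal, dtype=float)
    n /= np.linalg.norm(n)
    d = (positions - plane_point) @ n                 # signed distance to plane
    close = np.abs(d) < cutoff                        # particles near boundary
    virtual = positions[close] - d[close, None] * n   # projections onto plane
    all_pos = np.vstack([positions, virtual])
    is_boundary = np.concatenate(
        [np.zeros(len(positions)), np.ones(len(virtual))])
    src = np.nonzero(close)[0]                        # particle indices
    dst = np.arange(len(positions), len(all_pos))     # new virtual-node indices
    edges = np.stack([src, dst], axis=1)              # particle -> virtual node
    return all_pos, is_boundary, edges
```

Because the virtual nodes depend on the current particle positions, this construction is repeated at every timestep, which is what "dynamically inserting virtual nodes" refers to.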
An Application-Specific Instruction Set for Accelerating Set-Oriented Database Primitives
The key task of database systems is to efficiently manage large amounts of data. A high query throughput and a low query latency are essential for the success of a database system. Lately, research has focused on exploiting hardware features like superscalar execution units, SIMD, or multiple cores to speed up processing. Apart from these software optimizations for given hardware, even tailor-made processing circuits running on FPGAs are built to run mostly stateless query plans with incredibly high throughput. A similar idea, which was already considered three decades ago, is to build tailor-made hardware like a database processor. Despite their superior performance, such application-specific processors were not considered to be beneficial because general-purpose processors eventually always caught up, so that the high development costs did not pay off. In this paper, we show that the development of a database processor is much more feasible nowadays through the availability of customizable processors. We illustrate exemplarily how to create an instruction set extension for set-oriented database primitives. The resulting application-specific processor provides not only high performance but also enables very energy-efficient processing. In various configurations, our processor requires more than 960x less energy than a high-end x86 processor while providing the same performance.
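As an illustration of the kind of set-oriented primitive such an instruction set extension targets, here is a plain software sketch of a merge-based intersection of two sorted key columns. The paper's contribution is hardware acceleration of such operations; this Python version only shows the semantics:

```python
def sorted_intersect(a, b):
    """Merge-based intersection of two sorted key lists -- the kind of
    set-oriented primitive a custom database instruction could accelerate."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] < b[j]:
            i += 1                       # advance the smaller side
        elif a[i] > b[j]:
            j += 1
        else:
            out.append(a[i])             # common key found
            i += 1
            j += 1
    return out
```

In software this loop is branch-heavy and data-dependent, which is exactly why a dedicated instruction operating on whole key blocks can outperform a general-purpose core.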
The assessment of left ventricular mechanical dyssynchrony from gated 99mTc-tetrofosmin SPECT and gated 18F-FDG PET by QGS: a comparative study
BACKGROUND Due to partly conflicting studies, further research with the QGS software package is warranted with regard to the performance of gated FDG PET phase analysis as compared to gated MPS, as well as the establishment of possible cut-off values for FDG PET to define dyssynchrony. METHODS Gated MPS and gated FDG PET datasets of 93 patients were analyzed with the QGS software. BW, Phase SD, and Entropy were calculated and compared between the methods. The performance of gated PET in identifying dyssynchrony was measured against SPECT as the reference standard. ROC analysis was performed to identify the best discriminator of dyssynchrony and to define cut-off values. RESULTS BW and Phase SD differed significantly between SPECT and PET. There was no significant difference in Entropy, with a high linear correlation between the methods. There was only moderate agreement between SPECT and PET in identifying dyssynchrony. Entropy was the best single PET parameter to predict dyssynchrony, with a cut-off point at 62%. CONCLUSION Gated MPS and gated FDG PET can assess LVMD. The methods cannot be used interchangeably. Establishing reference ranges and cut-off values is difficult due to the lack of an external gold standard. Further prospective research is necessary.
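The Entropy parameter used above is, in the phase-analysis literature, commonly defined as the normalized Shannon entropy of the contraction phase histogram, expressed in percent. A small sketch of that common formula (an illustration only, not the QGS implementation):

```python
import math

def phase_entropy_percent(histogram):
    """Normalized Shannon entropy of a phase histogram, in percent.

    100% means contraction phases spread uniformly over all bins (maximal
    dyssynchrony); 0% means all counts fall in a single bin.
    Sketch of the commonly cited definition, not the QGS implementation.
    """
    total = sum(histogram)
    probs = [c / total for c in histogram if c > 0]
    h = -sum(p * math.log2(p) for p in probs)      # Shannon entropy in bits
    return 100.0 * h / math.log2(len(histogram))   # normalize by max entropy
```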
HW/SW-database-codesign for compressed bitmap index processing
Compressed bitmap indices are heavily used in scientific and commercial database systems because they substantially improve query performance for various workloads. Early research focused on finding tailor-made index compression schemes that are amenable to modern processors. Improving performance further typically comes at the expense of a lower compression rate, which is not acceptable in many applications because of memory limitations. Alternatively, tailor-made hardware makes it possible to achieve performance that can hardly be reached with software running on general-purpose CPUs. In this paper, we show how to create a custom instruction set framework for compressed bitmap processing that is generic enough to implement most of the major compressed bitmap indices. For evaluation, we implemented WAH, PLWAH, and COMPAX operations using our framework and compared the resulting implementation to multiple state-of-the-art processors. We show that the custom-made bitmap processor achieves speedups of up to one order of magnitude while using two orders of magnitude less energy compared to a modern energy-efficient Intel processor. Finally, we discuss how to embed our processor with database-specific instruction sets into database system environments.
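For readers unfamiliar with the schemes mentioned, WAH (Word-Aligned Hybrid) packs the bitmap into 31-bit groups per 32-bit word and encodes homogeneous runs of groups as single fill words. A simplified software sketch of the encoder (zero-padding the final partial group is an assumption of this sketch; real implementations track the tail length):

```python
def wah_encode(bits):
    """Word-Aligned Hybrid (WAH) compression of a bit sequence.

    Bits are chunked into 31-bit groups. A run of identical homogeneous
    groups becomes one fill word (MSB set, next bit = fill value, low 30
    bits = run length in groups); any mixed group becomes a literal word
    (MSB clear, low 31 bits = the group). Simplified sketch: the tail is
    zero-padded to a full group.
    """
    words = []
    bits = list(bits) + [0] * (-len(bits) % 31)      # pad to 31-bit multiple
    groups = [bits[i:i + 31] for i in range(0, len(bits), 31)]
    i = 0
    while i < len(groups):
        g = groups[i]
        if all(b == g[0] for b in g):                # homogeneous: fill word
            run = 1
            while i + run < len(groups) and groups[i + run] == g:
                run += 1
            words.append((1 << 31) | (g[0] << 30) | run)
            i += run
        else:                                        # mixed: literal word
            lit = 0
            for b in g:
                lit = (lit << 1) | b
            words.append(lit)
            i += 1
    return words
```

The word-aligned layout is what makes bitwise AND/OR of two compressed bitmaps cheap in software, and the simple fill/literal case split is what the custom instructions can dispatch on directly in hardware.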
Relativistic MHD and black hole excision: Formulation and initial tests
A new algorithm for solving the general relativistic MHD equations is described in this paper. We design our scheme to incorporate black hole excision with smooth boundaries, and to simplify solving the combined Einstein and MHD equations with AMR. The fluid equations are solved using a finite difference Convex ENO method. Excision is implemented using overlapping grids. Elliptic and hyperbolic divergence cleaning techniques allow for maximum flexibility in choosing coordinate systems, and we compare both methods for a standard problem. Numerical results of standard test problems are presented in two-dimensional flat space using excision, overlapping grids, and elliptic and hyperbolic divergence cleaning.
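The two cleaning approaches compared above can be summarized in their standard forms. The equations below follow the usual formulations (projection for the elliptic case, the generalized Lagrange multiplier form for the hyperbolic case) and are illustrative; the paper's exact scheme may differ in details:

```latex
% Elliptic cleaning: project out the divergence error after each step
\nabla^2 \phi = \nabla\cdot\mathbf{B}, \qquad
\mathbf{B} \;\leftarrow\; \mathbf{B} - \nabla\phi

% Hyperbolic (GLM) cleaning: couple a scalar field \psi that advects
% and damps the divergence error at speed c_h with damping scale c_p
\partial_t \mathbf{B} + \nabla\times\mathbf{E} + \nabla\psi = 0, \qquad
\partial_t \psi + c_h^2\,\nabla\cdot\mathbf{B} = -\frac{c_h^2}{c_p^2}\,\psi
```

The elliptic variant requires a global Poisson solve, while the hyperbolic variant is purely local, which is why the latter composes more easily with overlapping grids and AMR.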
Energy-Efficient Databases Using Sweet Spot Frequencies
Database management systems (DBMS) are typically tuned for high performance and scalability. Nevertheless, carbon footprint and energy efficiency are becoming increasingly important concerns. Unfortunately, existing studies mainly present theoretical contributions and fall short of proposing practical techniques that could be used by administrators or query optimizers to increase the energy efficiency of a DBMS. This paper therefore explores the effect of so-called sweet spots, i.e., energy-efficient CPU frequencies, on the energy required to execute queries. From our findings, we derive the Sweet Spot Technique, which relies on identifying energy-efficient sweet spots and the optimal number of threads that minimize the energy consumption of a query or an entire database workload. The technique is simple and has a practical implementation, leading to energy savings of up to 50% compared to using the nominal frequency and the maximum number of threads.
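The selection step of such a technique can be sketched as picking, among measured configurations, the one that minimizes energy (average power times runtime). The data layout below is hypothetical; in practice the power figures would come from hardware energy counters such as Intel RAPL:

```python
def sweet_spot(measurements):
    """Pick the (frequency, threads) setting that minimizes query energy.

    `measurements` maps (freq_mhz, threads) -> (avg_power_watts, runtime_s)
    for one query or workload; energy = power * time. Illustrative sketch
    of the selection step, not the paper's implementation.
    """
    return min(measurements,
               key=lambda cfg: measurements[cfg][0] * measurements[cfg][1])
```

Note that the sweet spot is generally neither the lowest frequency (runtime grows faster than power shrinks) nor the nominal one, which is the effect the paper exploits.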