Mediated data integration and transformation for web service-based software architectures
Service-oriented architecture using XML-based Web services has been widely accepted by many organisations as the standard infrastructure to integrate heterogeneous and autonomous data sources. As a result, many Web service providers are built on top of the data sources to share the data by supporting provided and required interfaces and methods of data access in a unified manner. In the context of data integration, problems arise when Web services are assembled to deliver an integrated view of data, adaptable to the specific needs of individual clients and providers. Traditional approaches to data integration and transformation are not suitable to automate the construction of connectors dedicated to connecting selected Web services to render integrated and tailored views of data. We propose a declarative approach that addresses the often-neglected data integration and adaptivity aspects of service-oriented architecture.
Data integration through service-based mediation for web-enabled information systems
The Web and its underlying platform technologies have often been used to integrate existing software and information systems. Traditional techniques for data representation and transformations between documents are not sufficient to support a flexible and maintainable data integration solution that meets the requirements of modern complex Web-enabled software and information systems. The difficulty
arises from the high degree of complexity of data structures, for example in business and technology applications, and from the constant change of data and its
representation. In the Web context, where the Web platform is used to integrate different organisations or software systems, the problem of heterogeneity additionally arises. We introduce a specific data integration solution for Web applications such as Web-enabled information systems. Our contribution is an integration technology framework for Web-enabled information systems comprising, firstly, a data integration technique based on the declarative specification of transformation rules and the construction of connectors that handle the integration and, secondly, a mediator architecture based on information services and the constructed connectors to handle the integration process.
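The declarative rule-plus-connector idea described in these abstracts can be illustrated with a small sketch. This is not the papers' actual rule language; the rule format, field names, and converter are invented for illustration:

```python
# Sketch of a declarative data-integration connector: transformation
# rules are plain data (source field -> target field, plus an optional
# converter), and a generic connector applies them to records coming
# from heterogeneous services. All field names here are hypothetical.

def make_connector(rules):
    """Build a connector function from a declarative rule set.

    Each rule maps a source field to a target field and may name a
    conversion function, so the integration logic stays out of the
    code that invokes the services themselves.
    """
    def connect(record):
        out = {}
        for rule in rules:
            value = record[rule["source"]]
            convert = rule.get("convert")
            out[rule["target"]] = convert(value) if convert else value
        return out
    return connect

# Hypothetical rule set: unify two providers' price representations.
rules = [
    {"source": "item_name", "target": "name"},
    {"source": "price_cents", "target": "price_eur",
     "convert": lambda cents: cents / 100.0},
]

connector = make_connector(rules)
print(connector({"item_name": "widget", "price_cents": 250}))
# {'name': 'widget', 'price_eur': 2.5}
```

Because the rules are data rather than code, a new pairing of services needs only a new rule set, not a hand-written connector, which is the maintainability argument the abstracts make.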
SU(2) Lattice Gauge Theory Simulations on Fermi GPUs
In this work we explore the performance of CUDA in quenched lattice SU(2)
simulations. CUDA, NVIDIA Compute Unified Device Architecture, is a hardware
and software architecture developed by NVIDIA for computing on the GPU. We
present an analysis and performance comparison between the GPU and CPU in
single and double precision. Analyses with multiple GPUs and two different
architectures (G200 and Fermi) are also presented. In order to
obtain a high performance, the code must be optimized for the GPU architecture,
i.e., an implementation that exploits the memory hierarchy of the CUDA
programming model.
We produce codes for the Monte Carlo generation of SU(2) lattice gauge
configurations, for the mean plaquette, for the Polyakov Loop at finite T and
for the Wilson loop. We also present results for the potential using many
configurations without smearing and with APE smearing. With two Fermi GPUs
we have achieved an excellent performance relative to one CPU of around
110 Gflops/s in single precision. We also find that, using the Fermi
architecture, double-precision computations for the static quark-antiquark
potential are not much slower than single-precision computations.
Comment: 20 pages, 11 figures, 3 tables, accepted in Journal of Computational
Physics
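The mean-plaquette observable mentioned in this abstract can be sketched in plain NumPy (the paper's actual implementation is CUDA; the lattice size and array layout below are illustrative only):

```python
# Minimal sketch of the mean-plaquette measurement on an SU(2) lattice.
# Link variables U[mu][x] are 2x2 complex matrices; the plaquette is
# U_mu(x) U_nu(x+mu) U_mu(x+nu)^dag U_nu(x)^dag, averaged over all
# sites and direction pairs mu < nu.
import numpy as np

L, DIM = 4, 4                       # 4^4 lattice, 4 spacetime directions
shape = (DIM,) + (L,) * DIM         # one link per direction per site

def cold_start():
    """All links set to the identity (ordered start)."""
    U = np.zeros(shape + (2, 2), dtype=complex)
    U[..., 0, 0] = 1.0
    U[..., 1, 1] = 1.0
    return U

def mean_plaquette(U):
    """Average of (1/2) Re Tr P over all plaquettes P."""
    total, count = 0.0, 0
    for mu in range(DIM):
        for nu in range(mu + 1, DIM):
            a = U[mu]
            b = np.roll(U[nu], -1, axis=mu)    # U_nu(x + mu)
            c = np.roll(U[mu], -1, axis=nu)    # U_mu(x + nu)
            d = U[nu]
            P = (a @ b
                 @ c.conj().swapaxes(-1, -2)   # dagger
                 @ d.conj().swapaxes(-1, -2))
            total += 0.5 * np.real(
                np.trace(P, axis1=-2, axis2=-1)).mean()
            count += 1
    return total / count

print(mean_plaquette(cold_start()))  # 1.0 for the ordered start
```

The ordered start gives exactly 1, which is a standard sanity check before running the heat-bath updates that generate the actual gauge configurations.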
The RD53 Collaboration's SystemVerilog-UVM Simulation Framework and its General Applicability to Design of Advanced Pixel Readout Chips
The foreseen Phase 2 pixel upgrades at the LHC have very challenging
requirements for the design of hybrid pixel readout chips. A versatile pixel
simulation platform is an essential development tool for the design,
verification and optimization of both the system architecture and the pixel
chip building blocks (Intellectual Properties, IPs). This work is focused on
the implemented simulation and verification environment named VEPIX53, built
using the SystemVerilog language and the Universal Verification Methodology
(UVM) class library in the framework of the RD53 Collaboration. The environment
supports pixel chips at different levels of description: its reusable
components feature the generation of different classes of parameterized input
hits to the pixel matrix, monitoring of pixel chip inputs and outputs,
conformity checks between predicted and actual outputs and collection of
statistics on system performance. The environment has been tested performing a
study of shared architectures of the trigger latency buffering section of pixel
chips. A fully shared architecture and a distributed one have been described at
behavioral level and simulated; the resulting memory occupancy statistics and
hit loss rates have subsequently been compared.
Comment: 15 pages, 10 figures (11 figure files), submitted to Journal of
Instrumentation
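The structure of such a verification environment (stimulus generator, device model, monitor, scoreboard, statistics collection) can be mirrored in a toy Python sketch. The real VEPIX53 framework is SystemVerilog-UVM; the buffer depth, hit format, and class names below are invented for illustration:

```python
# Toy analogue of a UVM-style flow: a hit generator drives a finite
# trigger-latency buffer, a scoreboard compares actual outputs against
# predictions, and hit-loss statistics are collected. All parameters
# here are hypothetical, not taken from the RD53 design.
from collections import deque

class LatencyBuffer:
    """DUT stand-in: fixed-depth FIFO; hits arriving when full are lost."""
    def __init__(self, depth):
        self.fifo = deque()
        self.depth = depth
        self.lost = 0                # hit-loss statistic

    def write(self, hit):
        if len(self.fifo) < self.depth:
            self.fifo.append(hit)
        else:
            self.lost += 1           # overflow: hit is dropped

    def read(self):
        return self.fifo.popleft() if self.fifo else None

class Scoreboard:
    """Checks DUT outputs against predicted outputs, counts mismatches."""
    def __init__(self):
        self.mismatches = 0

    def check(self, predicted, actual):
        if predicted != actual:
            self.mismatches += 1

# Deterministic scenario: depth-2 buffer, four hits arrive before any
# trigger read, so the last two overflow and are lost.
dut = LatencyBuffer(depth=2)
sb = Scoreboard()
for hit in ["h0", "h1", "h2", "h3"]:
    dut.write(hit)
for predicted in ["h0", "h1"]:
    sb.check(predicted, dut.read())

print(dut.lost, sb.mismatches)       # 2 0
```

Sweeping the buffer depth against randomized hit rates in such a model is the kind of shared-versus-distributed buffering comparison the abstract describes, though at far lower fidelity than the actual UVM environment.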