The "MIND" Scalable PIM Architecture
MIND (Memory, Intelligence, and Network Device) is an advanced parallel computer architecture for high performance computing and scalable embedded processing. It is a
Processor-in-Memory (PIM) architecture integrating both DRAM bit cells and CMOS logic devices on the same silicon die. MIND is multicore with multiple memory/processor nodes on
each chip and supports global shared memory across systems of MIND components. MIND is distinguished from other PIM architectures in that it incorporates mechanisms for efficient support of a global parallel execution model based on the semantics of message-driven multithreaded split-transaction processing. MIND is designed to operate either in conjunction with conventional microprocessors or in standalone arrays of like devices. It also incorporates mechanisms for fault tolerance, real-time execution, and active power management. This paper describes the major elements and operational methods of the MIND architecture.
Towards an aspect weaving BPEL engine
This position paper proposes the use of dynamic aspects and
the visitor design pattern to obtain a highly configurable and
extensible BPEL engine. Using these two techniques, the
core of this infrastructural software can be customised to
meet new requirements and add features such as debugging,
execution monitoring, or changing to another Web Service
selection policy. Additionally, it can easily be extended to
cope with customer-specific BPEL extensions. We propose
the use of dynamic aspects not only on the engine itself
but also on the workflow in order to tackle the problems of
Web Service hot deployment and hot fixes to long-running
processes. In this way, composing a Web Service "on-the-fly"
means weaving its choreography interface into the workflow.
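The visitor-plus-dynamic-aspects combination described above can be illustrated with a minimal sketch. All names here (`Activity`, `Invoke`, `Sequence`, `Engine`, `weave`) are invented for illustration and are not taken from any real BPEL engine; the point is only that the engine core is a visitor over the activity tree, while monitoring or debugging behaviour is woven in at runtime without modifying that core.

```python
# Hypothetical sketch: a BPEL-like activity tree traversed by a visitor,
# with "dynamic aspects" (before/after advice registered at run time)
# hooking invocations. Names are illustrative, not from a real engine.

class Activity:
    def accept(self, visitor):
        raise NotImplementedError

class Invoke(Activity):
    """Leaf activity: a single Web Service call."""
    def __init__(self, service, operation):
        self.service, self.operation = service, operation
    def accept(self, visitor):
        return visitor.visit_invoke(self)

class Sequence(Activity):
    """Structured activity: run children in order."""
    def __init__(self, *children):
        self.children = children
    def accept(self, visitor):
        return visitor.visit_sequence(self)

class Engine:
    """Engine core: a visitor over the activity tree."""
    def __init__(self):
        self.advice = {"before": [], "after": []}  # dynamic aspects
    def weave(self, when, fn):
        self.advice[when].append(fn)               # hot-deployable advice
    def visit_sequence(self, seq):
        for child in seq.children:
            child.accept(self)
    def visit_invoke(self, inv):
        for fn in self.advice["before"]:
            fn(inv)
        result = f"called {inv.service}.{inv.operation}"  # stub call
        for fn in self.advice["after"]:
            fn(inv)
        return result

engine = Engine()
trace = []
# Weave a monitoring aspect without touching the engine core:
engine.weave("before", lambda inv: trace.append(inv.operation))
Sequence(Invoke("Warehouse", "reserve"), Invoke("Billing", "charge")).accept(engine)
print(trace)  # → ['reserve', 'charge']
```

Because the advice list is mutable at run time, swapping a Web Service selection policy or adding execution monitoring amounts to registering or removing a function, which matches the hot-deployment goal of the abstract.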
Framework Programmable Platform for the Advanced Software Development Workstation: Preliminary system design document
The Framework Programmable Platform (FPP) is a project aimed at combining effective tool and data integration mechanisms with a model of the software development process in an intelligent integrated software environment. Guided by the model, this system development framework will take advantage of an integrated operating environment to effectively automate the management of the software development process so that costly mistakes during the development phase can be eliminated. The focus here is on the design of components that make up the FPP. These components serve as supporting systems for the Integration Mechanism and the Framework Processor and provide the 'glue' that ties the FPP together. Also discussed are the components that allow the platform to operate in a distributed, heterogeneous environment and to manage the development and evolution of software system artifacts.
Theory and Practice of Transactional Method Caching
Nowadays, tiered architectures are widely accepted for constructing large
scale information systems. In this context application servers often form the
bottleneck for a system's efficiency. An application server exposes an
object-oriented interface consisting of a set of methods that are accessed by
potentially remote clients. The idea of method caching is to store results of
read-only method invocations with respect to the application server's interface
on the client side. If the client invokes the same method with the same
arguments again, the corresponding result can be taken from the cache without
contacting the server. It has been shown that this approach can considerably
improve a real-world system's efficiency.
This paper extends the concept of method caching by addressing the case where
clients wrap related method invocations in ACID transactions. Demarcating
sequences of method calls in this way is supported by many important
application server standards. In this context the paper presents an
architecture, a theory and an efficient protocol for maintaining full
transactional consistency and in particular serializability when using a method
cache on the client side. In order to create a protocol for scheduling cached
method results, the paper extends a classical transaction formalism. Based on
this extension, a recovery protocol and an optimistic serializability protocol
are derived. The latter differs from traditional transactional cache
protocols in many essential ways. An efficiency experiment validates the
approach: using the cache, a system's performance and scalability are
considerably improved.
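The basic idea of method caching, before any transactional machinery is added, can be sketched in a few lines. This is a simplified illustration only: the classes and the conservative clear-on-write invalidation are invented here, and the paper's actual contribution (the serializability and recovery protocols) is not reproduced.

```python
# Minimal sketch of client-side method caching. Read-only method
# results are cached by (method name, arguments); any write method
# conservatively invalidates the whole cache. Names are illustrative.

class CachingClient:
    def __init__(self, server):
        self.server = server
        self.cache = {}           # (method, args) -> cached result
        self.calls_to_server = 0  # counts actual round trips

    def call(self, method, *args, read_only=False):
        key = (method, args)
        if read_only and key in self.cache:
            return self.cache[key]        # served locally, no round trip
        self.calls_to_server += 1
        result = getattr(self.server, method)(*args)
        if read_only:
            self.cache[key] = result
        else:
            self.cache.clear()            # conservative invalidation on writes
        return result

class AccountServer:
    """Stand-in for the application server's object-oriented interface."""
    def __init__(self):
        self.balances = {"alice": 100}
    def get_balance(self, who):
        return self.balances[who]
    def deposit(self, who, amount):
        self.balances[who] += amount

client = CachingClient(AccountServer())
client.call("get_balance", "alice", read_only=True)  # miss: hits server
client.call("get_balance", "alice", read_only=True)  # hit: from cache
print(client.calls_to_server)  # → 1
```

The hard part the paper addresses is exactly what this sketch omits: keeping such a cache consistent, and in particular serializable, when the client groups calls into ACID transactions.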
The POOL Data Storage, Cache and Conversion Mechanism
The POOL data storage mechanism is intended to satisfy the needs of the LHC
experiments to store and analyze the data from the detector response of
particle collisions at the LHC proton-proton collider. Both the data rate and
the data volumes will differ largely from past experience. The POOL data
storage mechanism is intended to cope with the experiments'
requirements by applying a flexible multi-technology data persistency mechanism.
The developed technology-independent approach is flexible enough to adopt new
technologies, take advantage of existing schema evolution mechanisms, and allow
users to access data in a technology-independent way. The framework consists of
several components, which can be individually adopted and integrated into
existing experiment frameworks.
Comment: Talk from the 2003 Computing in High Energy and Nuclear Physics
conference (CHEP03), La Jolla, CA, USA, March 2003. 5 pages, PDF, 6 figures. PSN MOKT00
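The "technology-independent access with swappable backends" idea can be sketched generically. This is not POOL's actual API; the backend classes and the facade below are invented here purely to show the shape of such a persistency layer: client code talks to one interface, and the concrete storage technology can be replaced without touching that code.

```python
# Hedged sketch of a multi-technology persistency layer. All class
# names are invented; POOL's real interfaces are not reproduced here.

import json
import pickle

class StorageBackend:
    """Technology-specific storage; one subclass per technology."""
    def save(self, key, obj): raise NotImplementedError
    def load(self, key): raise NotImplementedError

class JsonBackend(StorageBackend):
    def __init__(self): self.store = {}
    def save(self, key, obj): self.store[key] = json.dumps(obj)
    def load(self, key): return json.loads(self.store[key])

class PickleBackend(StorageBackend):
    def __init__(self): self.store = {}
    def save(self, key, obj): self.store[key] = pickle.dumps(obj)
    def load(self, key): return pickle.loads(self.store[key])

class PersistencyService:
    """Technology-independent facade: users only ever see this."""
    def __init__(self, backend: StorageBackend):
        self.backend = backend
    def save(self, key, obj): self.backend.save(key, obj)
    def load(self, key): return self.backend.load(key)

event = {"run": 42, "hits": [1, 2, 3]}
for backend in (JsonBackend(), PickleBackend()):
    svc = PersistencyService(backend)   # swap technology freely
    svc.save("event-42", event)
    assert svc.load("event-42") == event
print("same client code, two storage technologies")
```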
Cache Serializability: Reducing Inconsistency in Edge Transactions
Read-only caches are widely used in cloud infrastructures to reduce access
latency and load on backend databases. Operators view coherent caches as
impractical at genuinely large scale, and many client-facing caches are updated
in an asynchronous manner with best-effort pipelines. Existing solutions that
support cache consistency are inapplicable to this scenario since they require
a round trip to the database on every cache transaction.
Existing incoherent cache technologies are oblivious to transactional data
access, even if the backend database supports transactions. We propose T-Cache,
a novel caching policy for read-only transactions in which inconsistency is
tolerable (won't cause safety violations) but undesirable (has a cost). T-Cache
improves cache consistency despite asynchronous and unreliable communication
between the cache and the database. We define cache-serializability, a variant
of serializability that is suitable for incoherent caches, and prove that with
unbounded resources T-Cache implements this new specification. With limited
resources, T-Cache allows the system manager to choose a trade-off between
performance and consistency.
Our evaluation shows that T-Cache detects many inconsistencies with only
nominal overhead. We use synthetic workloads to demonstrate the efficacy of
T-Cache when data accesses are clustered and its adaptive reaction to workload
changes. With workloads based on real-world topologies, T-Cache detects
43-70% of the inconsistencies and increases the rate of consistent transactions
by 33-58%.
Comment: Ittay Eyal, Ken Birman, Robbert van Renesse, "Cache Serializability:
Reducing Inconsistency in Edge Transactions," Distributed Computing Systems
(ICDCS), IEEE 35th International Conference on, June 29 - July 2, 2015
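The detection idea behind such a scheme can be illustrated in miniature. This sketch is not the T-Cache protocol itself: the version numbers and the "keys updated together" metadata are invented simplifications. It only shows how a read-only transaction over an incoherent cache can notice that asynchronous, best-effort updates have left it reading a mixed snapshot, rather than silently committing.

```python
# Illustrative sketch (not the actual T-Cache protocol): each cached
# entry carries a version, and a read-only transaction is flagged if
# keys the backend updates atomically are observed at mixed versions.

class VersionedCache:
    def __init__(self):
        self.entries = {}          # key -> (value, version)
    def put(self, key, value, version):
        self.entries[key] = (value, version)
    def get(self, key):
        return self.entries[key]

def read_only_txn(cache, keys, updated_together):
    """Read `keys`; report inconsistency if any group of keys that the
    database updates atomically is observed at mixed versions."""
    observed = {k: cache.get(k) for k in keys}
    for group in updated_together:
        versions = {observed[k][1] for k in group if k in observed}
        if len(versions) > 1:
            return observed, False   # mixed snapshot detected
    return observed, True

cache = VersionedCache()
cache.put("x", 10, version=1)
cache.put("y", 20, version=1)
_, ok = read_only_txn(cache, ["x", "y"], updated_together=[("x", "y")])
print(ok)  # → True: both reads from the same snapshot

cache.put("x", 11, version=2)        # async update reached x but not y
_, ok = read_only_txn(cache, ["x", "y"], updated_together=[("x", "y")])
print(ok)  # → False: inconsistency detected, not silently served
```

The trade-off the paper describes then corresponds to how much of this dependency metadata the system can afford to track: with unbounded metadata every inconsistency is caught, while bounded resources let the operator trade detection rate for overhead.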
Grid Databases for Shared Image Analysis in the MammoGrid Project
The MammoGrid project aims to prove that Grid infrastructures can be used for
collaborative clinical analysis of database-resident but geographically
distributed medical images. This requires: a) the provision of a
clinician-facing front-end workstation and b) the ability to service real-world
clinician queries across a distributed and federated database. The MammoGrid
project will prove the viability of the Grid by harnessing its power to enable
radiologists from geographically dispersed hospitals to share standardized
mammograms, to compare diagnoses (with and without computer aided detection of
tumours) and to perform sophisticated epidemiological studies across national
boundaries. This paper outlines the approach taken in MammoGrid to seamlessly
connect radiologist workstations across a Grid using an "information
infrastructure" and a DICOM-compliant object model residing in multiple
distributed data stores in Italy and the UK.
Comment: 10 pages, 5 figures