POOR MAN'S TRACE CACHE: A VARIABLE DELAY SLOT ARCHITECTURE
We introduce a novel fetch architecture called Poor Man's Trace Cache (PMTC). PMTC constructs taken-path instruction traces via instruction replication in static code and inserts them after unconditional direct and select conditional direct control transfer instructions. These traces extend to the end of the cache line. Since the space available for trace insertion varies with the position of the control transfer instruction within the line, we refer to these fetch slots as variable delay slots. This approach ensures that traces are fetched along with the control transfer instruction that initiated the trace. Branch, jump and return instruction semantics, as well as the fetch unit, are modified to utilize traces in delay slots. PMTC yields the following benefits: 1. Average fetch bandwidth increases, as the front end can fetch across taken control transfer instructions in a single cycle. 2. The dynamic number of instruction cache lines fetched by the processor is reduced, as multiple non-contiguous basic blocks along a given path are encountered in one fetch cycle. 3. Replication of a branch instruction along multiple paths provides path separability for branches, which positively impacts branch prediction accuracy. The PMTC mechanism requires minimal modifications to the processor's fetch unit, and the trace insertion algorithm can easily be implemented within the assembler without compiler support.
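The trace insertion step described above can be pictured as a small assembler pass: after each direct jump, replicate instructions from the taken target into whatever words remain in the current cache line. The sketch below is illustrative only; the line size, instruction encoding and function names are assumptions, not details from the paper.

```python
LINE_WORDS = 8  # assumed instruction-cache line size, in instructions

def insert_traces(code, taken_path):
    """Sketch of PMTC-style trace insertion.

    code: list of instruction strings in layout order.
    taken_path: maps the index of a direct control transfer to the
    instruction sequence at its taken target.
    """
    out = []
    for i, instr in enumerate(code):
        out.append(instr)
        if i in taken_path:
            pos = len(out) % LINE_WORDS               # word offset just past the jump
            slots = (LINE_WORDS - pos) % LINE_WORDS   # variable delay slots left in the line
            out.extend(taken_path[i][:slots])         # replicate taken-path instructions
    return out
```

Note how a jump landing on the last word of a line gets zero delay slots, while one early in the line leaves room for most of the target's basic block, which is exactly why the slots are "variable".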
An Efficient NoC-based Framework To Improve Dataflow Thread Management At Runtime
This doctoral thesis focuses on how the application threads that are based on dataflow
execution model can be managed at Network-on-Chip (NoC) level. The roots of the
dataflow execution model date back to the early 1970s. Applications adhering to this
execution model follow a simple producer-consumer communication scheme for
synchronising parallel thread-related activities. In a dataflow execution environment, a
thread can run if and only if all its required inputs are available. Applications running
on a large and complex computing environment can significantly benefit from the
adoption of dataflow model.
In the first part of the thesis, the work is focused on the thread distribution mechanism.
It is shown how a scalable hash-based thread distribution mechanism
can be implemented at the router level with low overhead. To enhance this support further,
a tool to monitor the dataflow threads' status and a simple functional model are
also incorporated into the design. Next, a software-defined NoC is proposed to
manage the distribution of dataflow threads by exploiting its reconfigurability.
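A hash-based distribution scheme of the kind described above can be made both scalable and router-local because every node computes the same mapping independently. The following is a minimal sketch under assumed parameters (a 4x4 mesh, SHA-256 as the hash); the thesis's actual hash function and topology may differ.

```python
import hashlib

MESH = [(x, y) for x in range(4) for y in range(4)]  # assumed 4x4 NoC mesh

def home_core(thread_id: int):
    """Map a dataflow thread id to a core by hashing it.

    Deterministic, so any router can compute the destination locally,
    with no central scheduler and no per-thread routing table.
    """
    digest = hashlib.sha256(thread_id.to_bytes(8, "little")).digest()
    return MESH[int.from_bytes(digest[:4], "little") % len(MESH)]
```

The appeal at the router level is that the mapping costs one hash per thread and O(1) state, regardless of how many threads are in flight.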
The second part of this work focuses on the NoC microarchitecture level. The traditional
2D-mesh topology is combined with a standard ring, to understand how such a
hybrid network topology can outperform the traditional 2D-mesh. Finally,
a mixed-integer linear programming (MILP) based analytical model is proposed
to verify whether the mapping of application threads onto the free cores is optimal. The
proposed mathematical model can be used as a yardstick to verify the solution quality
of the newly developed mapping policy. It is not trivial to provide a complete low-level
framework for dataflow thread execution with better resource and power management.
However, this work can be considered a primary framework upon which improvements
can be built.
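The "yardstick" role of the analytical model can be illustrated with a brute-force stand-in: exhaustively enumerate all placements of threads onto free cores and compare a heuristic mapping's communication cost against the true optimum. This is only a sketch of the verification idea, not the MILP formulation itself; the cost function and names are illustrative.

```python
from itertools import permutations

def comm_cost(mapping, traffic, hops):
    """Total cost = sum over communicating thread pairs of volume x hop distance."""
    return sum(v * hops[mapping[a]][mapping[b]] for (a, b), v in traffic.items())

def is_optimal(mapping, traffic, cores, hops):
    """Check a heuristic thread-to-core mapping against the exhaustive optimum.

    Plays the same yardstick role as the MILP model, but by enumeration,
    so it is only tractable for small instances.
    """
    threads = sorted(mapping)
    best = min(comm_cost(dict(zip(threads, p)), traffic, hops)
               for p in permutations(cores, len(threads)))
    return comm_cost(mapping, traffic, hops) == best
```

An MILP solver reaches the same optimum without enumeration, which is what makes the analytical model usable as instance sizes grow.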
Ad hoc cloud computing
Commercial and private cloud providers offer virtualized resources via a set of co-located
and dedicated hosts that are exclusively reserved for the purpose of offering
a cloud service. While both cloud models appeal to the mass market, there are many
cases where outsourcing to a remote platform or procuring an in-house infrastructure
may not be ideal or even possible.
To offer an attractive alternative, we introduce and develop an ad hoc cloud computing
platform to transform spare resource capacity from an infrastructure owner's
locally available, but non-exclusive and unreliable infrastructure, into an overlay cloud
platform. The foundation of the ad hoc cloud relies on transferring and instantiating
lightweight virtual machines on-demand upon near-optimal hosts while virtual machine
checkpoints are distributed in a P2P fashion to other members of the ad hoc
cloud. Virtual machines found to be non-operational are restored elsewhere ensuring
the continuity of cloud jobs.
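Distributing checkpoints P2P so that a failed VM can be restored elsewhere needs a placement rule that every member can compute. One plausible scheme is consistent hashing, sketched below; the thesis's actual distribution protocol is not specified here, so treat the ring construction and parameters as assumptions.

```python
import hashlib
from bisect import bisect_right

def _h(s: str) -> int:
    return int(hashlib.sha1(s.encode()).hexdigest(), 16)

def build_ring(hosts, vnodes=32):
    """Consistent-hash ring: each host occupies several virtual positions,
    which spreads checkpoint load roughly evenly across members."""
    return sorted((_h(f"{host}#{i}"), host) for host in hosts for i in range(vnodes))

def checkpoint_targets(ring, vm_id, k=2):
    """Pick k distinct hosts clockwise from the VM id's position on the ring."""
    keys = [pos for pos, _ in ring]
    idx = bisect_right(keys, _h(vm_id))
    targets = []
    for off in range(len(ring)):
        host = ring[(idx + off) % len(ring)][1]
        if host not in targets:
            targets.append(host)
        if len(targets) == k:
            break
    return targets
```

The useful property for an ad hoc cloud is that when an unreliable host joins or leaves, only the checkpoints adjacent to it on the ring move, rather than the whole placement being reshuffled.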
In this thesis we investigate the feasibility, reliability and performance of ad hoc
cloud computing infrastructures. We firstly show that the combination of both volunteer
computing and virtualization is the backbone of the ad hoc cloud. We outline the
process of virtualizing the volunteer system BOINC to create V-BOINC. V-BOINC
distributes virtual machines to volunteer hosts allowing volunteer applications to be
executed in the sandbox environment to solve many of the downfalls of BOINC; this
however also provides the basis for an ad hoc cloud computing platform to be developed.
We detail the challenges of transforming V-BOINC into an ad hoc cloud and outline
the transformational process and integrated extensions. These include a BOINC job
submission system, cloud job and virtual machine restoration schedulers and a periodic
P2P checkpoint distribution component. Furthermore, as current monitoring tools are
unable to cope with the dynamic nature of ad hoc clouds, a dynamic infrastructure
monitoring and management tool called the Cloudlet Control Monitoring System is
developed and presented.
We evaluate each of our individual contributions as well as the reliability, performance
and overheads associated with an ad hoc cloud deployed on a realistically
simulated unreliable infrastructure. We conclude that the ad hoc cloud is not only a
feasible concept but also a viable computational alternative that offers high levels of
reliability and can at least offer reasonable performance, which at times may exceed
the performance of a commercial cloud infrastructure.
Topical Workshop on Electronics for Particle Physics
The purpose of the workshop was to present results and original concepts for electronics research and development relevant to particle physics experiments as well as accelerator and beam instrumentation at future facilities; to review the status of electronics for the LHC experiments; to identify and encourage common efforts for the development of electronics; and to promote information exchange and collaboration in the relevant engineering and physics communities.
Abstracts on Radio Direction Finding (1899 - 1995)
The files on this record represent the various databases that originally composed the CD-ROM issue of "Abstracts on Radio Direction Finding" database, which is now part of the Dudley Knox Library's Abstracts and Selected Full Text Documents on Radio Direction Finding (1899 - 1995) Collection. (See Calhoun record https://calhoun.nps.edu/handle/10945/57364 for further information on this collection and the bibliography).
Due to issues of technological obsolescence preventing current and future audiences from accessing the bibliography, DKL exported and converted into the three files on this record the various databases contained in the CD-ROM.
The contents of these files are:
1) RDFA_CompleteBibliography_xls.zip [RDFA_CompleteBibliography.xls: Metadata for the complete bibliography, in Excel 97-2003 Workbook format; RDFA_Glossary.xls: Glossary of terms, in Excel 97-2003 Workbook format; RDFA_Biographies.xls: Biographies of leading figures, in Excel 97-2003 Workbook format];
2) RDFA_CompleteBibliography_csv.zip [RDFA_CompleteBibliography.TXT: Metadata for the complete bibliography, in CSV format; RDFA_Glossary.TXT: Glossary of terms, in CSV format; RDFA_Biographies.TXT: Biographies of leading figures, in CSV format];
3) RDFA_CompleteBibliography.pdf: A human readable display of the bibliographic data, as a means of double-checking any possible deviations due to conversion.
Combining SOA and BPM Technologies for Cross-System Process Automation
This paper summarizes the results of an industry case study that introduced a cross-system business process automation solution based on a combination of SOA and BPM standard technologies (i.e., BPMN, BPEL, WSDL). Besides discussing major weaknesses of the existing custom-built solution and comparing them against experiences with the developed prototype, the paper presents a course of action for transforming the current solution into the proposed solution. This includes a general approach, consisting of four distinct steps, as well as specific action items that are to be performed for every step. The discussion also covers language and tool support and challenges arising from the transformation.
Particle Physics Reference Library
This second open access volume of the handbook series deals with detectors, large experimental facilities and data handling, both for accelerator and non-accelerator based experiments. It also covers applications in medicine and life sciences. A joint CERN-Springer initiative, the "Particle Physics Reference Library" provides revised and updated contributions based on previously published material in the well-known Landolt-Boernstein series on particle physics, accelerators and detectors (volumes 21A, B1, B2, C), which took stock of the field approximately one decade ago. Central to this new initiative is publication under full open access.