Automatic generation of hardware/software interfaces
Enabling new applications for mobile devices often requires the use of specialized hardware to reduce power consumption. Because of time-to-market pressure, current design methodologies for embedded applications require an early partitioning of the design, allowing the hardware and software to be developed simultaneously, each adhering to a rigid interface contract. This approach is problematic for two reasons: (1) a detailed hardware-software interface is difficult to specify until one is deep into the design process, and (2) it prevents the later migration of functionality across the interface motivated by efficiency concerns or the addition of features. We address this problem using the Bluespec Codesign Language (BCL), which permits the designer to specify the hardware-software partition in the source code, allowing the compiler to synthesize efficient software and hardware along with transactors for communication between the partitions. The movement of functionality across the hardware-software boundary is accomplished by simply specifying a new partitioning, and since the compiler automatically generates the desired interface specifications, it eliminates yet another error-prone design task. In this paper we present BCL, an extension of a commercially available hardware design language (Bluespec SystemVerilog), a new software compilation scheme, and preliminary results generated using our compiler for various hardware-software decompositions of an Ogg Vorbis audio decoder and a ray-tracing application. National Science Foundation (U.S.) (NSF #CCF-0541164); National Research Foundation of Korea (Korean Government (MEST) grant #R33-10095).
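BCL extends Bluespec SystemVerilog, whose designs are expressed as guarded atomic rules: each rule has an enabling condition, and a scheduler fires enabled rules until none apply. As a language-neutral illustration of that rule-based style (a Python sketch of the classic GCD example; this is an analogue, not BCL or Bluespec syntax):

```python
# Minimal sketch of guarded atomic rules, the style BCL builds on.
# Each rule is a (guard, action) pair over the state; a scheduler fires
# any enabled rule until no guard holds. Not BCL/Bluespec syntax.

def gcd_rules(a, b):
    """Compute gcd(a, b) by repeatedly firing whichever rule is enabled."""
    rules = [
        # rule "swap": fire when a > b and b != 0
        (lambda s: s[0] > s[1] and s[1] != 0, lambda s: (s[1], s[0])),
        # rule "subtract": fire when a <= b and a != 0
        (lambda s: s[0] <= s[1] and s[0] != 0, lambda s: (s[0], s[1] - s[0])),
    ]
    state = (a, b)
    while True:
        for guard, action in rules:
            if guard(state):
                state = action(state)
                break
        else:
            return max(state)  # one register holds the gcd, the other 0

print(gcd_rules(12, 18))  # -> 6
```

In a codesign setting, the appeal of this model is that the same rules can be compiled to hardware or software, which is what makes repartitioning a matter of re-annotating rather than rewriting.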
4-D Tomographic Inference: Application to SPECT and MR-driven PET
Emission tomographic imaging is framed in a Bayesian and information-theoretic framework. The first part of the thesis is inspired by the new possibilities offered by PET-MR systems, formulating models and algorithms for 4-D tomography and for the integration of information from multiple imaging modalities. The second part of the thesis extends the models described in the first part, focusing on the imaging hardware. Three key aspects for the design of new imaging systems are investigated: criteria and efficient algorithms for the optimisation and real-time adaptation of the parameters of the imaging hardware; learning the characteristics of the imaging hardware; and exploiting the rich information provided by depth-of-interaction (DOI) and energy-resolving devices. The document concludes with a description of the NiftyRec software toolkit, developed to enable 4-D multi-modal tomographic inference.
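As generic background for the statistical framing above (this is the standard MLEM algorithm for emission data, not the thesis's specific 4-D method), a small NumPy sketch with a toy system matrix:

```python
import numpy as np

# Standard MLEM update for emission tomography (generic background, not the
# thesis's 4-D method): lambda_j <- (lambda_j / s_j) * sum_i a_ij * y_i / ybar_i
def mlem(A, y, n_iter=50):
    """A: (n_detectors, n_voxels) system matrix; y: measured counts."""
    lam = np.ones(A.shape[1])   # uniform initial activity estimate
    sens = A.sum(axis=0)        # sensitivity s_j = sum_i a_ij
    for _ in range(n_iter):
        ybar = A @ lam          # expected counts under current estimate
        lam *= (A.T @ (y / ybar)) / sens
    return lam

# Toy example: 3 detectors, 2 voxels (values are illustrative only)
A = np.array([[1.0, 0.2], [0.3, 1.0], [0.5, 0.5]])
true_lam = np.array([2.0, 4.0])
y = A @ true_lam                # noiseless data, so MLEM approaches true_lam
print(np.round(mlem(A, y, 200), 2))
```

The multiplicative update keeps the activity estimate non-negative, which is one reason MLEM is the conventional baseline that Bayesian and 4-D extensions build on.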
Computational modelling of aerodynamic disturbances on spacecraft within a concurrent engineering framework
This research was motivated by the need to perform an accurate aerodynamic analysis of the drag deorbit device concept under development within the Space Research Centre, Cranfield University. Its purpose is to deorbit satellites from low Earth orbit at the end of their useful lives, in order to help reduce the growing problem of space debris.
It has been found that existing spacecraft aerodynamic analysis tools do not adequately support concurrent engineering. Furthermore, use of concurrent engineering in the space industry is currently limited to Phase A (preliminary design studies). To remedy this, the Spacecraft Engineering, Design, and Analysis Tools (SEDAT) Concept has been proposed.
Inspired by the approach employed by enterprise applications, it proposes that all the computer tools used on a spacecraft project should be incorporated into one system as separate modules, presented via a single client, and connected to a centralised Relational Database Management System. To demonstrate the concept and assess its potential, a SEDAT System and an accompanying Free Molecular Flow (FMF) spacecraft aerodynamic analysis module have been developed.
The FMF Module is explicitly designed to facilitate concurrent engineering and to make use of the widest possible variety of Gas-Surface Interaction Models (GSIMs) and their associated data. It also incorporates a new Hybrid method of FMF analysis that combines the Ray-Tracing Panel (RTP) and Test-Particle Monte Carlo (TPMC) methods, enabling it to analyse complex geometries that are subject to surface shielding and multiple molecular reflections.
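The test-particle idea underlying such hybrid methods can be sketched minimally (a toy illustration, not the SEDAT/FMF implementation): fire parallel test particles through a reference window and count which ones strike the body, here estimating the projected (drag-reference) area of a sphere.

```python
import math
import random

# Toy test-particle illustration (not the SEDAT/FMF code): estimate the
# projected area of a sphere by firing parallel test particles through a
# bounding square and counting hits. For a sphere of radius r the exact
# answer is pi * r^2.
def projected_area_mc(radius=1.0, half_width=1.5, n_particles=100_000, seed=42):
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_particles):
        # Particle enters on a (2*half_width)^2 window travelling along -x;
        # it hits the sphere iff its (y, z) offset lies within the radius.
        y = rng.uniform(-half_width, half_width)
        z = rng.uniform(-half_width, half_width)
        if y * y + z * z <= radius * radius:
            hits += 1
    window_area = (2 * half_width) ** 2
    return hits / n_particles * window_area

print(projected_area_mc(), math.pi)  # estimate should be close to pi
```

Tracking each particle through reflections off the geometry, rather than stopping at the first hit, is what lets the full TPMC method capture the shielding and multiple-reflection effects mentioned above.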
Studies have been performed using a Hybrid version of the Schaaf and Chambre GSIM. One of these studies analysed a drag deorbit device design using a range of accommodation coefficients, including the latest empirically based, incidence-dependent coefficients. Based on this analysis, recommendations have been made regarding the material selection and structural design of the device.
Index to NASA Tech Briefs, January - June 1967
Technological innovations for January-June 1967, abstracts and subject index.
Practical and effective higher-order optimizations
Inlining is an optimization that replaces a call to a function with that function's body. This optimization not only reduces the overhead of a function call, but can expose additional optimization opportunities to the compiler, such as removing redundant operations or unused conditional branches. Another optimization, copy propagation, replaces a redundant copy of a still-live variable with the original. Copy propagation can reduce the total number of live variables, reducing register pressure and memory usage, and possibly eliminating redundant memory-to-memory copies. In practice, both of these optimizations are implemented in nearly every modern compiler. These two optimizations are practical to implement and effective in first-order languages, but in languages with lexically scoped first-class functions (a.k.a. closures), these optimizations are no
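The two optimizations described above can be shown as hand-applied source transformations (a Python sketch of what a compiler would do internally; the function names are illustrative):

```python
# Illustration of inlining and copy propagation, applied by hand at the
# source level. A compiler performs the same rewrites on its intermediate
# representation.

def square(x):
    return x * x

def before(a):
    t = square(a)   # function call: incurs per-call overhead
    b = t           # redundant copy of a still-live value
    return b + 1

# After inlining square() and propagating the copy b = t:
def after(a):
    t = a * a       # call replaced by the callee's body (inlining)
    return t + 1    # use of b replaced by the original t (copy propagation)

assert before(5) == after(5) == 26
```

In a first-order language these rewrites are straightforward because the callee is statically known; with first-class functions, the call target may be a closure computed at run time, which is the difficulty the paper addresses.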
Real-world Machine Learning Systems: A survey from a Data-Oriented Architecture Perspective
Machine Learning models are being deployed as parts of real-world systems with the upsurge of interest in artificial intelligence. The design, implementation, and maintenance of such systems are challenged by real-world environments that produce large amounts of heterogeneous data, and by users requiring increasingly fast responses with efficient resource consumption. These requirements push prevalent software architectures to the limit when deploying ML-based systems. Data-oriented Architecture (DOA) is an emerging concept that better equips systems to integrate ML models. DOA extends current architectures to create data-driven, loosely coupled, decentralised, open systems. Even though papers on deployed ML-based systems do not mention DOA, their authors made design decisions that implicitly follow it. The reasons why, how, and the extent to which DOA is adopted in these systems are unclear. Implicit design decisions limit practitioners' knowledge of DOA for designing ML-based systems in the real world. This paper answers these questions by surveying real-world deployments of ML-based systems. The survey shows the design decisions of the systems and the requirements these decisions satisfy. Based on the survey findings, we also formulate practical advice to facilitate the deployment of ML-based systems. Finally, we outline open challenges to deploying DOA-based systems that integrate ML models. Comment: Under review.
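A core DOA idea, components exchanging data through shared streams rather than calling each other directly, can be sketched minimally (the bus, stream names, and the stand-in "model" below are hypothetical, not taken from the surveyed systems):

```python
from collections import defaultdict

# Minimal data-oriented sketch: producers publish records to named streams,
# and consumers subscribe to streams. The ML "model" reacts to data and
# publishes its output instead of being invoked by a caller, keeping the
# components loosely coupled. All names here are hypothetical.
class DataBus:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, stream, handler):
        self.subscribers[stream].append(handler)

    def publish(self, stream, record):
        for handler in self.subscribers[stream]:
            handler(record)

bus = DataBus()
predictions = []

# Stand-in model: consumes one stream, publishes its result to another.
bus.subscribe("sensor.readings",
              lambda r: bus.publish("model.scores", r["value"] * 2))
bus.subscribe("model.scores", predictions.append)

bus.publish("sensor.readings", {"value": 21})
print(predictions)  # -> [42]
```

Because the producer never names the model and the model never names its consumers, either side can be replaced or scaled independently, which is the loose coupling the abstract attributes to DOA.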
A Component Architecture for High-Performance Scientific Computing
The Common Component Architecture (CCA) provides a means for software developers to manage the complexity of large-scale scientific simulations and to move toward a plug-and-play environment for high-performance computing. In the scientific computing context, component models also promote collaboration using independently developed software, thereby allowing particular individuals or groups to focus on the aspects of greatest interest to them. The CCA supports parallel and distributed computing as well as local high-performance connections between components in a language-independent manner. The design places minimal requirements on components and thus facilitates the integration of existing code into the CCA environment. The CCA model imposes minimal overhead to minimize the impact on application performance. The focus on high performance distinguishes the CCA from most other component models. The CCA is being applied within an increasing range of disciplines, including combustion research, global climate simulation, and computational chemistry.
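The provides/uses port pattern at the heart of component models like the CCA can be illustrated with a minimal sketch (a Python stand-in; real CCA components use the `gov.cca` interfaces and a framework's builder services, which this does not reproduce):

```python
# Minimal sketch of the provides/uses port pattern: each component declares
# the ports it provides, and looks up the ports it uses through a framework,
# so components compose without compile-time coupling.
# (Illustrative Python stand-in, not the actual gov.cca API.)
class Framework:
    def __init__(self):
        self.provided = {}

    def add_provides(self, port_name, implementation):
        self.provided[port_name] = implementation

    def get_port(self, port_name):
        return self.provided[port_name]

class IntegratorComponent:
    """Provides "IntegrationPort"; uses a "FunctionPort" from another component."""
    def __init__(self, framework):
        self.framework = framework
        framework.add_provides("IntegrationPort", self)

    def integrate(self, a, b, n=1000):
        f = self.framework.get_port("FunctionPort")  # uses-port lookup at run time
        h = (b - a) / n
        # midpoint rule over n panels
        return sum(f.evaluate(a + (i + 0.5) * h) for i in range(n)) * h

class ParabolaComponent:
    """Provides "FunctionPort"."""
    def __init__(self, framework):
        framework.add_provides("FunctionPort", self)

    def evaluate(self, x):
        return x * x

fw = Framework()
ParabolaComponent(fw)
integrator = IntegratorComponent(fw)
print(round(integrator.integrate(0.0, 1.0), 4))  # integral of x^2 on [0,1] ~ 1/3
```

Swapping `ParabolaComponent` for any other provider of `"FunctionPort"` changes the computation without touching the integrator, which is the plug-and-play property the abstract describes.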
The vanadium pentoxide-catalysed oxidation of pentenes