Automatic Specialization of Protocol Stacks in OS kernels
Awarded best paper. Fast, optimized protocol stacks play a major role in the performance of network services. This role is especially important in embedded-class systems, where performance metrics such as data throughput tend to be limited by the CPU. On such systems, it is common for protocol stacks to be optimized by hand for better performance and a smaller code footprint. In this paper, we propose a strategy to automate this process. Our approach uses program specialization and enables applications that use the network to request specialized code based on the current usage scenario. The specialized code is generated dynamically and loaded into the kernel for use by the application. We have successfully applied our approach to the TCP/IP implementation in the Linux kernel and used the optimized protocol stack in existing applications. These applications were minimally modified to request the specialization of code based on the current usage context, and to use the specialized code generated instead of its generic version. Specialization can be performed locally, or deferred to a remote specialization server using a novel mechanism [1]. Experiments conducted on three platforms show that the specialized code runs about 25% faster and is up to 20 times smaller. The throughput of the protocol stack improves by up to 21%.
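As a concrete illustration of the kind of transformation program specialization performs, here is a minimal C sketch (not taken from the paper): a generic Internet-style checksum whose loop bound is a runtime parameter, next to the residual code a specializer could emit once the usage context fixes the header size.

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Generic checksum: the number of 16-bit words is a runtime parameter,
 * so the compiler cannot unroll or simplify the loop. */
static uint32_t checksum_generic(const uint16_t *words, size_t nwords)
{
    uint32_t sum = 0;
    for (size_t i = 0; i < nwords; i++)
        sum += words[i];
    while (sum >> 16)                      /* fold carries back in */
        sum = (sum & 0xFFFF) + (sum >> 16);
    return ~sum & 0xFFFF;
}

/* Residual code after specializing for a 10-word (20-byte, option-free)
 * TCP header: the loop bound is gone and the loop is fully unrolled. */
static uint32_t checksum_tcp20(const uint16_t *w)
{
    uint32_t sum = w[0] + w[1] + w[2] + w[3] + w[4]
                 + w[5] + w[6] + w[7] + w[8] + w[9];
    while (sum >> 16)
        sum = (sum & 0xFFFF) + (sum >> 16);
    return ~sum & 0xFFFF;
}
```

Both functions compute the same result; the specialized version simply bakes the context (a fixed header size) into the code, which is the effect the paper automates for whole protocol-stack paths.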
Description and Optimization of Abstract Machines in a Dialect of Prolog
In order to achieve competitive performance, abstract machines for Prolog and
related languages end up being large and intricate, and incorporate
sophisticated optimizations, both at the design and at the implementation
levels. At the same time, efficiency considerations make it necessary to use
low-level languages in their implementation. This makes them laborious to code,
optimize, and, especially, maintain and extend. Writing the abstract machine
(and ancillary code) in a higher-level language can help tame this inherent
complexity. We show how the semantics of most basic components of an efficient
virtual machine for Prolog can be described using (a variant of) Prolog. These
descriptions are then compiled to C and assembled to build a complete bytecode
emulator. Thanks to the high level of the language used and its closeness to
Prolog, the abstract machine description can be manipulated using standard
Prolog compilation and optimization techniques with relative ease. We also show
how, by applying program transformations selectively, we obtain abstract
machine implementations whose performance can match and even exceed that of
state-of-the-art, highly tuned, hand-crafted emulators.
Comment: 56 pages, 46 figures, 5 tables. To appear in Theory and Practice of Logic Programming (TPLP).
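To make the setting concrete, the following minimal C sketch shows the switch-dispatch skeleton typical of bytecode emulators. The toy instruction set here is invented for illustration; in the paper's approach, each case body would instead be compiled from a high-level, Prolog-like description of the instruction's semantics rather than written by hand.

```c
#include <assert.h>

/* Opcodes for a toy stack machine; a real WAM-style Prolog emulator has
 * many more instructions, but the dispatch skeleton looks the same. */
enum op { PUSH, ADD, MUL, HALT };

/* Minimal switch-dispatch bytecode emulator. */
static int run(const int *code)
{
    int stack[64], sp = 0;
    for (int pc = 0;;) {
        switch (code[pc++]) {
        case PUSH: stack[sp++] = code[pc++]; break;       /* push literal */
        case ADD:  sp--; stack[sp - 1] += stack[sp]; break;
        case MUL:  sp--; stack[sp - 1] *= stack[sp]; break;
        case HALT: return stack[sp - 1];                  /* top of stack */
        }
    }
}
```

Writing the case bodies in a higher-level language and compiling them to C, as the paper does, keeps this dispatch structure while making the instruction semantics easy to transform and optimize.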
Verifying Programs via Intermediate Interpretation
We explore an approach to program verification via program transformation applied to an interpreter of a programming language. A specialization technique known as Turchin's supercompilation is used to specialize interpreters with respect to program models. We show that several safety properties of functional programs modeling a class of cache coherence protocols can be proved by a supercompiler, and we compare the results with our earlier work on direct verification via supercompilation, which did not use intermediate interpretation. Our approach was in part inspired by earlier work by E. De Angelis et al. (2014-2015), where verification via program transformation and intermediate interpretation was studied in the context of the specialization of constraint logic programs.
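The following toy C sketch (not from the paper) illustrates what specializing an interpreter with respect to a fixed program achieves: the residual function is the interpreter with its dispatch loop unfolded away, which is the kind of transformation a supercompiler performs when it removes the interpretive layer.

```c
#include <assert.h>

/* A toy interpreter for straight-line programs over one accumulator. */
enum instr { INC, DBL, END };

static int interpret(const enum instr *prog, int acc)
{
    for (int pc = 0; prog[pc] != END; pc++) {
        switch (prog[pc]) {
        case INC: acc += 1; break;   /* increment accumulator */
        case DBL: acc *= 2; break;   /* double accumulator */
        default: break;
        }
    }
    return acc;
}

/* Residual code for the fixed program {INC, DBL, INC, END}:
 * the dispatch loop has been unfolded away, leaving only the
 * program's own computation. */
static int residual(int acc)
{
    return (acc + 1) * 2 + 1;
}
```

Properties proved of the residual program then transfer to the interpreted one, which is why pushing a program model through an interpreter and a supercompiler can serve as a verification step.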
AgentAPI: an API for the development of managed agents
Managed agents, namely SNMP agents, cost too much to develop, test, and maintain. Although the SNMP model has aimed for simplicity since its origins, it has several intrinsic aspects that make the development of management applications a complex task. There are, however, tools that intend to simplify this process by automatically generating code from the management information definition. Unfortunately, these tools are usually complicated to use and require a strong background in programming and network management. This paper describes an API for managed-agent development that also provides multiprotocol capabilities. Without changing the code, the resulting agent can be managed via SNMP, web browsers, WAP browsers, CORBA, or any other access method, either simultaneously or individually.
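One way such multiprotocol capability can be structured is sketched below in C: managed objects are registered once against a protocol-neutral table, and each access method (SNMP, web, CORBA, ...) resolves requests through the same table. All names here are hypothetical illustrations, not the actual AgentAPI interface.

```c
#include <assert.h>
#include <string.h>

/* Hypothetical protocol-neutral managed object: a name plus an
 * instrumentation callback that reads the current value. */
struct managed_object {
    const char *name;
    int (*get)(void);
};

static int uptime_seconds(void) { return 4242; }  /* stand-in instrumentation */

/* The agent's object table, registered once regardless of protocol. */
static struct managed_object objects[] = {
    { "sysUpTime", uptime_seconds },
};

/* Every access method (an SNMP front end, a web front end, ...) would
 * call this same lookup, so adding a protocol leaves agent code unchanged. */
static int agent_get(const char *name)
{
    for (size_t i = 0; i < sizeof objects / sizeof objects[0]; i++)
        if (strcmp(objects[i].name, name) == 0)
            return objects[i].get();
    return -1;  /* object not found */
}
```

The design choice this illustrates is the decoupling the abstract describes: instrumentation is written once against the agent API, while protocol front ends are interchangeable access methods.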
Using Program Specialization to Speed SystemC Fixed-Point Simulation
Generic simulation components, such as fixed-precision arithmetic routines, make it easier to quickly assemble system simulations, but generic components tend to simulate more slowly than their manually written, specialized counterparts. A system modeler is thus normally forced to choose between building a simulation quickly and running it quickly. This paper explores the use of program specialization as a way to address this conundrum. Through hints provided by the author of a generic library, combined with aggressive compiler optimizations, program specialization can automatically rewrite a generic component into a specialized one with performance comparable to a careful manual implementation. As a result, the user of such a specializable library can quickly assemble a simulation from generic components whose performance can equal that of a more tedious implementation. Experimental results show that program specialization provides a three- to seven-fold speed-up on an important class of simulations: signal-processing kernels in SystemC that manipulate fixed-precision numbers.
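The trade-off the abstract describes can be seen in a minimal C sketch (illustrative only, not the paper's SystemC library): a generic fixed-point multiply parameterized by word length and fraction bits, versus the specialized version a tool could produce once the format is fixed.

```c
#include <assert.h>
#include <stdint.h>

/* Generic fixed-point multiply: fraction bits and word length are
 * runtime parameters, as in a generic simulation library, so every
 * call pays for computing the shift amount and wrap-around mask. */
static int32_t fx_mul_generic(int32_t a, int32_t b,
                              int frac_bits, int word_bits)
{
    int64_t p = (int64_t)a * b >> frac_bits;        /* rescale product */
    int64_t mask = ((int64_t)1 << word_bits) - 1;   /* wrap to word size */
    return (int32_t)(p & mask);
}

/* Specialized for a fixed Q4.12 format in a 16-bit word: shift and
 * mask are now compile-time constants the optimizer can fold. */
static int32_t fx_mul_q12(int32_t a, int32_t b)
{
    return (int32_t)((((int64_t)a * b) >> 12) & 0xFFFF);
}
```

A specializer guided by library-author hints can derive the second form from the first automatically, which is how the generic component recovers hand-written performance.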
Assessment of the Efficacy of Pulsed Biphasic Defibrillation Shocks for Treatment of Out-of-hospital Cardiac Arrest
This study evaluates the efficacy of a Pulsed Biphasic Waveform (PBW) for the treatment of out-of-hospital cardiac arrest (OHCA) patients in ventricular fibrillation (VF). A large database (2001-2006), collected with automated external defibrillators (AEDs) (FRED®, Schiller Medical SAS, France), is processed. In Study 1, we compared the defibrillation efficacy of two energy stacks, (90-130-180 J) vs. (130-130-180 J), in 248 OHCA VF patients. The analysis of first-shock PBW efficacy shows that energies as low as 90 J are able to terminate VF in a large proportion of OHCA patients (77% at 5 s and 69% at 30 s). Although the results show a trend towards the benefit of the higher-energy 130 J PBW (86% at 5 s, 73% at 30 s), the difference in shock efficacy does not reach statistical significance. Both PBW energy stacks, (90-130-180 J) and (130-130-180 J), achieve equal defibrillation success rates. An analysis of the post-shock rhythm after the first shock is also provided. For Study 2, covering 21 patients given PBW shocks (130-130-180 J), we assessed some attendant OHCA circumstances: call-to-shock delay (median 16 min, range 11-41 min) and telephone CPR advice (67%). About 50% of the patients were admitted alive to hospital, and 19% were discharged from hospital. After the first shock, patients admitted to hospital more often present an organized rhythm (OR) (27% to 55%) than patients not admitted (0% to 10%), with a significant difference at 15 s and 30 s. Post-shock VF up to 15 s is significantly rarer among patients admitted to hospital (0% to 9%) than among patients not admitted (40% to 50%). Return of an organized rhythm (ROOR) and first-shock efficacy in terminating VF at 5 s and 15 s are important markers for predicting patient admission to hospital.
A Pipeline for Volume Electron Microscopy of the Caenorhabditis elegans Nervous System.
The "connectome," a comprehensive wiring diagram of synaptic connectivity, is achieved through volume electron microscopy (vEM) analysis of an entire nervous system and all associated non-neuronal tissues. White et al. (1986) pioneered the fully manual reconstruction of a connectome using Caenorhabditis elegans. Recent advances in vEM allow the mapping of new C. elegans connectomes with increased throughput and reduced subjectivity. Current vEM studies aim not only to fill the remaining gaps in the original connectome, but also to address fundamental questions, including how the connectome changes during development, the nature of individuality, sexual dimorphism, and how genetic and environmental factors regulate connectivity. Here we describe our current vEM pipeline and projected improvements for the study of the C. elegans nervous system and beyond.
Recent Advanced Computing Methods Employed in Web Service Automation - A Survey
Web service automation has gained momentum over the past two decades, and various computational algorithms have been developed for different aspects of web service categorization and resource allocation. Research activity focuses largely on comparing these algorithms in terms of time and space complexity, while web designers and service providers contribute to enriching IT products in this area. In this paper, a detailed study of the above aspects of web service automation is presented, and we identify an area of web technology open to the implementation of newer algorithms.
Keywords - Web Service Allocation, Zero Knowledge Authentication, Logic Programming, Service Computing, Distributed Algorithms, Cloud Computing