Towards an unified experimentation framework for protocol engineering
The design and development of complex systems requires an adequate methodology and efficient instrumental support in order to detect and correct anomalies early in the functional and non-functional properties of the solution. In this article, a Unified Experimentation Framework (UEF) providing experimentation facilities at both the design and development stages is introduced. This UEF provides a means to carry out experiments in both simulation mode, with UML2 models of the designed protocol, and emulation mode, using a real protocol implementation. A practical use case of the experimentation framework is illustrated in the context of a satellite environment.
The Australian Research Quality Framework: A live experiment in capturing the social, economic, environmental, and cultural returns of publicly funded research
Copyright © 2008 Wiley Periodicals Inc. This is the accepted version of the following article: Donovan, C. (2008), The Australian Research Quality Framework: A live experiment in capturing the social, economic, environmental, and cultural returns of publicly funded research. New Directions for Evaluation, 2008: 47–60, which has been published in final form at http://onlinelibrary.wiley.com/doi/10.1002/ev.260/abstract. The author regards development of Australia's ill-fated Research Quality Framework (RQF) as a "live experiment" in determining the most appropriate approach to evaluating the extra-academic returns, or "impact," of a nation's publicly funded research. The RQF was at the forefront of an international movement toward richer qualitative, contextual approaches that aimed to gauge the wider economic, social, environmental, and cultural benefits of research. Its construction and implementation sent mixed messages and created confusion about what impact is, and how it is best measured, to the extent that this bold live experiment did not come to fruition.
The Joint COntrols Project Framework
The Framework is one of the subprojects of the Joint COntrols Project (JCOP),
which is a collaboration between the four LHC experiments and CERN. Sharing
development reduces the overall effort required to build and maintain the
experiment control systems. As such, the main aim of the Framework is to
deliver a common set of software components, tools and guidelines that can be
used by the four LHC experiments to build their control systems. Although
commercial components are used wherever possible, further added value is
obtained by customisation for HEP-specific applications. The supervisory layer
of the Framework is based on the SCADA tool PVSS, which was selected after a
detailed evaluation. This is integrated with the front-end layer via both OPC
(OLE for Process Control), an industrial standard, and the CERN-developed DIM
(Distributed Information Management System) protocol. Several components are
already in production and being used by running fixed-target experiments at
CERN as well as for the LHC experiment test beams. The paper will give an
overview of the key concepts behind the project as well as the state of the
current development and future plans.Comment: Paper from the 2003 Computing in High Energy and Nuclear Physics
(CHEP03), La Jolla, Ca, USA, March 2003, 4 pages, PDF. PSN THGT00
A Validation Framework for the Long Term Preservation of High Energy Physics Data
The study group on data preservation in high energy physics, DPHEP, is moving
to a new collaboration structure, which will focus on the implementation of
preservation projects, such as those described in the group's large scale
report published in 2012. One such project is the development of a validation
framework, which checks the compatibility of evolving computing environments
and technologies with the experiments' software for as long as possible, with
the aim of substantially extending the lifetime of the analysis software, and
hence of the usability of the data. The framework is designed to automatically
test and validate the software and data of an experiment against changes and
upgrades to the computing environment, as well as changes to the experiment
software itself. Technically, this is realised using a framework capable of
hosting a number of virtual machine images, built with different configurations
of operating systems and the relevant software, including any necessary
external dependencies.
Comment: Proceedings of a poster presented at CHEP 2013, Amsterdam, October
14-18 2013
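The automated cross-environment validation described above can be sketched in a few lines of Python. The image configurations, the `validate` helper, and the stand-in test suite below are invented for illustration and are not part of the actual DPHEP framework, which drives real virtual machine images rather than in-process checks:

```python
# Hypothetical matrix of VM image configurations (names invented for
# illustration; the real framework builds actual virtual machine images
# with different operating systems and software stacks).
VM_IMAGES = [
    {"os": "slc5", "compiler": "gcc4.4", "root": "5.34"},
    {"os": "slc6", "compiler": "gcc4.8", "root": "5.34"},
    {"os": "centos7", "compiler": "gcc6.2", "root": "6.08"},
]

def validate(images, run_suite):
    """Run the experiment's validation suite against each environment and
    report which configurations remain compatible with the software."""
    report = {}
    for image in images:
        name = f"{image['os']}/root-{image['root']}"
        report[name] = run_suite(image)
    return report

# Stand-in for a real test suite: pretend the analysis code only
# builds against ROOT 6, so older environments fail validation.
fake_suite = lambda image: image["root"].startswith("6")
report = validate(VM_IMAGES, fake_suite)
```

A real deployment would replace `fake_suite` with a command executed inside each booted image, but the reporting loop captures the idea: the same software is exercised against every candidate environment so incompatibilities surface as soon as the environment changes.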
Formally based semi-automatic implementation of an open security protocol
This paper presents an experiment in which an implementation of the client side of the SSH Transport Layer Protocol (SSH-TLP) was semi-automatically derived according to a model-driven development paradigm that leverages formal methods in order to obtain high correctness assurance. The approach used in the experiment starts with the formalization of the protocol at an abstract level. This model is then formally proved to fulfill the desired secrecy and authentication properties by using the ProVerif prover. Finally, a sound Java implementation is semi-automatically derived from the verified model using an enhanced version of the Spi2Java framework. The resulting implementation correctly interoperates with third-party servers, and its execution time is comparable with that of other manually developed Java SSH-TLP client implementations. This case study demonstrates that the adopted model-driven approach is viable even for a real security protocol, despite the complexity of the models needed in order to achieve an interoperable implementation.
Simulation in ALICE
ALICE, the experiment dedicated to the study of heavy ion collisions at the
LHC, uses an object-oriented framework for simulation, reconstruction and
analysis (AliRoot) based on ROOT. Here, we describe the general ALICE
simulation strategy and those components of the framework related to
simulation. Two main requirements have driven the development of the simulation
components. First, the possibility to run different transport codes with the
same user code for geometry and detector response has led to the development of
the Virtual Monte Carlo concept. Second, simulation has to provide tools to
efficiently study events ranging from low-multiplicity pp collisions to Pb-Pb
collisions with up to 80000 primary particles per event. This has led to the
development of a variety of collaborating generator classes and specific
classes for event merging.
Comment: Talk from the 2003 Computing in High Energy and Nuclear Physics
(CHEP03), La Jolla, CA, USA, March 2003, 6 pages, LaTeX, 5 eps figures. PSN
TUMT00
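The Virtual Monte Carlo idea, one set of user code for geometry and detector response driving interchangeable transport engines, can be illustrated with a minimal interface sketch. The class and method names below are invented for illustration and do not reproduce the actual VMC API (which is a C++ interface in ROOT):

```python
from abc import ABC, abstractmethod

class VirtualMC(ABC):
    """Abstract transport engine: user geometry and response code talks
    only to this interface, never to a concrete transport code."""
    @abstractmethod
    def transport(self, particle: str) -> str:
        ...

class Geant3Like(VirtualMC):
    """Stand-in for one concrete transport code behind the interface."""
    def transport(self, particle: str) -> str:
        return f"GEANT3 tracked {particle}"

class Geant4Like(VirtualMC):
    """Stand-in for a second concrete transport code."""
    def transport(self, particle: str) -> str:
        return f"GEANT4 tracked {particle}"

def user_simulation(engine: VirtualMC) -> str:
    # The same user code runs unchanged with either concrete engine,
    # which is the essence of the Virtual Monte Carlo concept.
    return engine.transport("pi+")
```

Swapping `Geant3Like()` for `Geant4Like()` changes the transport code without touching `user_simulation`, mirroring how AliRoot lets the same geometry and response code run on different transport codes.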
Optical Random Riemann Waves in Integrable Turbulence
We examine integrable turbulence (IT) in the framework of the defocusing
cubic one-dimensional nonlinear Schrödinger equation. This is done
theoretically and experimentally, by realizing an optical fiber experiment in
which the defocusing Kerr nonlinearity strongly dominates linear dispersive
effects. Using a dispersive-hydrodynamic approach, we show that the development
of IT can be divided into two distinct stages, the initial, pre-breaking stage
being described by a system of interacting random Riemann waves. We explain the
low-tailed statistics of the wave intensity in IT and show that the Riemann
invariants of the asymptotic nonlinear geometric optics system represent the
observable quantities that provide new insight into statistical features of the
initial stage of the IT development by exhibiting stationary probability
density functions.
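For reference, in a standard normalized form (sign and scaling conventions vary between papers, so this is indicative rather than the exact form used in the study), the defocusing cubic one-dimensional nonlinear Schrödinger equation reads:

```latex
i\,\partial_t \psi + \tfrac{1}{2}\,\partial_{xx}\psi - |\psi|^2\psi = 0,
```

where the minus sign in front of the cubic term marks the defocusing regime, in which the Kerr nonlinearity dominating dispersion admits the dispersive-hydrodynamic (Riemann wave) description used above.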