Analytical computation of the off-axis Effective Area of grazing incidence X-ray mirrors
Focusing mirrors for X-ray telescopes in grazing incidence, introduced in the
1970s, are characterized in terms of their performance by their imaging quality
and effective area, which in turn determine their sensitivity. Although the
on-axis effective area is generally taken to characterize the collecting
power of an X-ray optic, a telescope's capability of imaging extended X-ray
sources is also determined by the variation of its effective area with the
off-axis angle. [...] The complex task of designing optics for future X-ray
telescopes entails detailed computations of both imaging quality and effective
area on- and off-axis. Because of their apparent complexity, both aspects have
so far been treated using ray-tracing routines aimed at simulating the
interaction of X-ray photons with the reflecting surfaces of a given focusing
system. Although this approach has been widely exploited and proven to be
effective, it would also be attractive to regard the same problem from an
analytical viewpoint, to assess an optical design of an X-ray optical module
with a simpler calculation than a ray-tracing routine. [...] We have developed
useful analytical formulae for the off-axis effective area of a
double-reflection mirror in the double cone approximation, requiring only an
integration and the standard routines to calculate the X-ray coating
reflectivity for a given incidence angle. [...] Algebraic expressions are
provided for the mirror geometric area, as a function of the off-axis angle.
Finally, the results of the analytical computations presented here are
validated by comparison with the corresponding predictions of a ray-tracing
code.
Comment: 12 pages, 11 figures, accepted for publication in "Astronomy &
Astrophysics", section "Instruments, observational techniques, and data
processing". Updated version after grammatical revision and typo correction.
Characterization of multilayer stack parameters from X-ray reflectivity data using the PPM program: measurements and comparison with TEM results
Future hard (10 -100 keV) X-ray telescopes (SIMBOL-X, Con-X, HEXIT-SAT, XEUS)
will implement focusing optics with multilayer coatings: in view of the
production of these optics we are exploring several deposition techniques for
the reflective coatings. In order to evaluate the achievable optical
performance X-Ray Reflectivity (XRR) measurements are performed, which are
powerful tools for the in-depth characterization of multilayer properties
(roughness, thickness, and density distribution). Exact extraction of the
stack parameters is difficult, however, because the XRR scans depend on them in a
complex way. The PPM code, developed at ESRF in the past years, is able to
derive the layer-by-layer properties of multilayer structures from
semi-automatic XRR scan fittings by means of a global minimization procedure in
the parameter space. In this work we will present the PPM modeling of some
multilayer stacks (Pt/C and Ni/C) deposited by simple e-beam evaporation.
Moreover, in order to verify the predictions of PPM, the obtained results are
compared with TEM profiles taken on the same set of samples. As we will show,
PPM results are in good agreement with the TEM findings. In addition, we show
that the accurate fitting returns a physically correct evaluation of the
variation of layer thickness through the stack, whereas the thickness trend
derived from TEM profiles can be altered by the superposition of roughness
profiles in the sample image.
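Although PPM itself is not reproduced here, the core of any XRR fitting procedure is a forward model of specular reflectivity. A minimal sketch using Parratt's recursion follows, with illustrative round-number optical constants for a Pt/C stack on Si (not tabulated values, and no roughness model).

```python
import numpy as np

def parratt_reflectivity(theta, wavelength, deltas, betas, thicknesses):
    """Specular X-ray reflectivity of a layer stack via Parratt's recursion.

    deltas/betas are the optical constants (n = 1 - delta + i*beta) for
    [vacuum, layer_1, ..., layer_N, substrate]; thicknesses (in metres)
    are for the N layers only. theta is the grazing angle in radians."""
    k0 = 2.0 * np.pi / wavelength
    n = 1.0 - np.asarray(deltas) + 1j * np.asarray(betas)
    # vertical wavevector component in each medium
    kz = k0 * np.sqrt(n[:, None] ** 2 - np.cos(theta) ** 2 + 0j)
    r = np.zeros_like(kz[0])
    for j in range(len(n) - 2, -1, -1):
        rf = (kz[j] - kz[j + 1]) / (kz[j] + kz[j + 1])  # Fresnel coefficient
        if j == len(n) - 2:                  # bottom interface: substrate
            r = rf
        else:                                # propagate through layer j+1
            phase = np.exp(2j * kz[j + 1] * thicknesses[j])
            r = (rf + r * phase) / (1.0 + rf * r * phase)
    return np.abs(r) ** 2

# 10 Pt/C bilayers (2 nm / 4 nm) on Si at Cu K-alpha (1.54 Angstrom);
# optical constants are illustrative round numbers, not tabulated values
n_bi = 10
deltas = [0.0] + [4.7e-5, 7.0e-6] * n_bi + [7.6e-6]
betas  = [0.0] + [4.9e-6, 1.0e-8] * n_bi + [1.7e-7]
d      = [20e-10, 40e-10] * n_bi
theta = np.deg2rad(np.linspace(0.05, 2.5, 500))
refl = parratt_reflectivity(theta, 1.54e-10, deltas, betas, d)
```

A fit in the spirit of PPM would wrap this forward model in a global minimizer (e.g. an evolutionary search over the layer-by-layer thicknesses and roughnesses) against the measured XRR scan.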
Towards Runtime Verification via Event Stream Processing in Cloud Computing Infrastructures
Software bugs in cloud management systems often cause erratic behavior that hinders the detection of, and recovery from, failures. As a consequence, failures are not detected and notified in a timely manner, and can silently propagate through the system. To address these issues, we propose a lightweight approach to runtime verification for the monitoring and failure detection of cloud computing systems. We performed a preliminary evaluation of the proposed approach on the OpenStack cloud management platform, an “off-the-shelf” distributed system, showing that the approach can be applied with high failure detection coverage.
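As a minimal illustration of the idea (not OpenStack's actual event format or the paper's rule language), a stream monitor can flag requests whose completion event never arrives within a bounded window of subsequent events:

```python
from collections import OrderedDict

class RuntimeMonitor:
    """Minimal sketch of stream-based runtime verification: flag any
    request whose END event does not arrive within `window` subsequent
    events after its START (event names are hypothetical)."""

    def __init__(self, window=5):
        self.window = window
        self.pending = OrderedDict()   # request id -> events seen since START
        self.failures = []             # request ids flagged as failed

    def observe(self, event, req_id):
        # age every pending request; report the ones that timed out
        for rid in list(self.pending):
            self.pending[rid] += 1
            if self.pending[rid] > self.window:
                self.failures.append(rid)
                del self.pending[rid]
        if event == "START":
            self.pending[req_id] = 0
        elif event == "END":
            self.pending.pop(req_id, None)

m = RuntimeMonitor(window=3)
for ev, rid in [("START", 1), ("START", 2), ("END", 1),
                ("OTHER", 0), ("OTHER", 0), ("OTHER", 0), ("OTHER", 0)]:
    m.observe(ev, rid)
# request 2 never completed within the window and is flagged as a failure
```

A real deployment would feed the monitor from the system's event bus (e.g. message-queue notifications) and use wall-clock timeouts rather than event counts, but the detection logic is the same.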
Maximizing Compressor Efficiency While Maintaining Reliability.
The natural gas and chemical processing industries have historically and necessarily demanded high reliability from their centrifugal compressors, which has led to a significant emphasis on field experience of designs. This emphasis has sometimes resulted in new units that reflect design and manufacturing practices which can be improved upon. Users can, in many cases, significantly increase efficiencies by considering designs that use more recently developed technologies and that are only refinements of machines historically used in their applications. Accurately machined three-dimensional impellers are an example of an under-utilized available technology for multistage compressors. These designs, used in several stages, can provide significant efficiency gains, particularly at higher flows. At lower flows, impeller efficiencies can be improved by a process called abrasive flow machining. This process can provide similar benefits in process compressors by improving surface finish in areas that cannot be reached with conventional metal finishing techniques. Advancements in machine tool technology have also allowed changes in compressor casing designs. Numerical control (NC) machine tools can be used to machine inlets and variable-area discharge volutes in the same axial casing space, thereby improving efficiencies through generous volute sizing without requiring additional diameter and bearing span. Specific examples of uses of these design and manufacturing technologies, and comparisons to alternative designs, are detailed. The data presented show that these technologies can be used with confidence to provide high compressor efficiencies while maintaining reliability.
Simbol-X Hard X-ray Focusing Mirrors: Results Obtained During the Phase A Study
Simbol-X will push grazing incidence imaging up to 80 keV, providing a strong
improvement both in sensitivity and angular resolution compared to all
instruments that have operated so far above 10 keV. The superb hard X-ray
imaging capability will be guaranteed by a mirror module of 100 electroformed
Nickel shells with a multilayer reflecting coating. Here we will describe the
technological development and solutions adopted for the fabrication of the mirror
module, which must guarantee a Half Energy Width (HEW) better than 20 arcsec
from 0.5 up to 30 keV and a goal of 40 arcsec at 60 keV. During the phase A,
terminated at the end of 2008, we have developed three engineering models with
two, two and three shells, respectively. The most critical aspects in the
development of the Simbol-X mirrors are i) the production of the 100 mandrels
with very good surface quality within the timeline of the mission; ii) the
replication of shells that must be very thin (a factor of 2 thinner than those
of XMM-Newton) and still have very good image quality up to 80 keV; iii) the
development of an integration process that allows us to integrate these very
thin mirrors maintaining their intrinsic good image quality. The Phase A study
has shown that we can fabricate the mandrels with the needed quality and that
we have developed a valid integration process. The shells that we have produced
so far have quite good image quality, e.g. HEW <~30 arcsec at 30 keV, and
effective area. However, we still need to make some improvements to reach the
requirements. We will briefly present these results and discuss the possible
improvements that we will investigate during phase B.
Comment: 6 pages, 3 figures, invited talk at the conference "2nd International
Simbol-X Symposium", Paris, 2-5 December 2008.
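For reference, the HEW figure of merit used above can be computed from a radial PSF profile as the diameter enclosing half of the total energy. A short sketch with a toy Gaussian PSF (illustrative only, not Simbol-X data):

```python
import numpy as np

def half_energy_width(r, psf_radial):
    """Half Energy Width from a radial PSF profile: the diameter of the
    circle enclosing 50% of the total energy."""
    # encircled energy: cumulative integral of psf(r) * 2*pi*r dr
    ee = np.cumsum(psf_radial * 2.0 * np.pi * r)
    ee = ee / ee[-1]                     # normalize to total energy = 1
    r50 = np.interp(0.5, ee, r)          # radius enclosing half the energy
    return 2.0 * r50                     # HEW is a diameter

r = np.linspace(0.01, 100.0, 10000)      # radius in arcsec
sigma = 12.0                             # toy PSF width, arcsec
psf = np.exp(-r ** 2 / (2.0 * sigma ** 2))   # toy Gaussian PSF
hew = half_energy_width(r, psf)
```

For a 2D Gaussian the analytic value is HEW = 2*sigma*sqrt(2*ln 2), which the numerical estimate reproduces; real mirror PSFs have broader wings, so their HEW must be computed from measured profiles like this.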
EVIL: Exploiting Software via Natural Language
Writing exploits for security assessment is a challenging task. The writer needs to master programming and obfuscation techniques to develop a successful exploit. To make the task easier, we propose an approach (EVIL) to automatically generate exploits in assembly or Python from descriptions in natural language. The approach leverages Neural Machine Translation (NMT) techniques and a dataset that we developed for this work. We present an extensive experimental study to evaluate the feasibility of EVIL, using both automatic and manual analysis, at generating both individual statements and entire exploits. The generated code achieved high accuracy in terms of syntactic and semantic correctness.
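One automatic check that can be run on model-generated code is syntactic validity; for Python output this is a few lines with the standard `ast` module (a hypothetical helper for illustration, not part of the EVIL toolchain):

```python
import ast

def is_syntactically_valid(snippet: str) -> bool:
    """Return True if the snippet parses as Python. This only checks
    syntax: undefined names or runtime errors are not detected."""
    try:
        ast.parse(snippet)
        return True
    except SyntaxError:
        return False

# hypothetical model outputs: one well-formed, one truncated
generated = ["x = os.urandom(16)", "if x ="]
valid = [is_syntactically_valid(s) for s in generated]
```

Semantic correctness, as the abstract notes, additionally requires manual analysis or execution of the generated exploit, since a snippet can parse cleanly yet do the wrong thing.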
Systems-theoretic Safety Assessment of Robotic Telesurgical Systems
Robotic telesurgical systems are one of the most complex medical
cyber-physical systems on the market, and have been used in over 1.75 million
procedures during the last decade. Despite significant improvements in design
of robotic surgical systems through the years, there have been ongoing
occurrences of safety incidents during procedures that negatively impact
patients. This paper presents an approach for systems-theoretic safety
assessment of robotic telesurgical systems using software-implemented
fault-injection. We used a systems-theoretic hazard analysis technique (STPA) to
identify the potential safety hazard scenarios and their contributing causes in
the RAVEN II robot, an open-source robotic surgical platform. We integrated the
robot control software with a software-implemented fault-injection engine, which
measures the resilience of the system to the identified safety hazard scenarios
by automatically inserting faults into different parts of the robot control
software. Representative hazard scenarios from real robotic surgery incidents
reported to the U.S. Food and Drug Administration (FDA) MAUDE database were
used to demonstrate the feasibility of the proposed approach for safety-based
design of robotic telesurgical systems.
Comment: Revised based on reviewers' feedback. To appear in the
International Conference on Computer Safety, Reliability, and Security
(SAFECOMP) 2015.
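The fault-injection idea can be sketched in a few lines: wrap a control function so that its output is occasionally replaced by a corrupted value, then measure how the system responds (a minimal illustration with hypothetical names, not the paper's engine or the RAVEN II software):

```python
import random

def with_fault_injection(func, fault_rate, fault_value, rng):
    """Wrap a control function so that, with probability fault_rate,
    its result is replaced by a faulty value."""
    def wrapped(*args, **kwargs):
        result = func(*args, **kwargs)
        if rng.random() < fault_rate:
            return fault_value          # injected output corruption
        return result
    return wrapped

# toy motor-command function standing in for a real controller output
def motor_command(target_angle):
    return max(-90.0, min(90.0, target_angle))   # clamped command

rng = random.Random(0)                  # fixed seed for repeatability
faulty = with_fault_injection(motor_command, fault_rate=0.2,
                              fault_value=999.0, rng=rng)
outputs = [faulty(30.0) for _ in range(1000)]
n_faults = sum(1 for o in outputs if o == 999.0)
```

In a safety assessment the injected values would be chosen from the hazard scenarios identified by STPA (e.g. out-of-range commands, stale sensor readings), and the system's detection and recovery mechanisms would be evaluated against each injection.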