7,188 research outputs found
Potential of multisensor data and strategies for data acquisition and analysis
Registration and simultaneous analysis of multisensor images is useful because the multiple data sets can be compressed through image processing techniques to facilitate interpretation. This also allows integration of other spatial data sets. Techniques being developed to analyze multisensor images involve comparison of image data with a library of attributes based on physical properties measured by each sensor. This results in the ability to characterize geologic units based on their similarity to the library attributes, as well as to discriminate among them. Several studies can provide information on ways to optimize multisensor remote sensing. Continued analyses of the Death Valley and San Rafael Swell data sets can provide insight into tradeoffs in the spectral and spatial resolutions of the various sensors used to obtain the coregistered data sets. These include imagery from LANDSAT, SEASAT, HCMM, SIR-A, 11-channel VIS-NIR, thermal inertia images, and aircraft L- and X-band radar.
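The library-matching strategy described above amounts to a per-pixel nearest-neighbour search in attribute space. A minimal sketch follows; the unit names, attribute choices, and numerical values are hypothetical, and a real system would normalize each attribute before comparing.

```python
import math

# Hypothetical library of per-unit attributes, one value per sensor:
# (VIS-NIR reflectance, thermal inertia, L-band radar backscatter in dB)
LIBRARY = {
    "alluvium":  (0.35, 0.020, -18.0),
    "basalt":    (0.12, 0.055,  -6.5),
    "limestone": (0.48, 0.045, -12.0),
}

def classify(pixel):
    """Assign a pixel's multisensor attribute vector to the most
    similar library unit (Euclidean distance in attribute space)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(LIBRARY, key=lambda unit: dist(pixel, LIBRARY[unit]))
```

For example, `classify((0.34, 0.021, -17.5))` returns `"alluvium"`. Because the attributes have very different scales, in practice each dimension would be standardized so that no single sensor dominates the distance.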
Parallel Recursive State Compression for Free
This paper focuses on reducing memory usage in enumerative model checking,
while maintaining the multi-core scalability obtained in earlier work. We
present a tree-based multi-core compression method, which works by leveraging
sharing among sub-vectors of state vectors.
An algorithmic analysis of both worst-case and optimal compression ratios
shows the potential to compress even large states to a small constant on
average (8 bytes). Our experiments demonstrate that this holds up in practice:
the median compression ratio of 279 measured experiments is within 17% of the
optimum for tree compression, and five times better than the median compression
ratio of SPIN's COLLAPSE compression.
Our algorithms are implemented in the LTSmin tool, and our experiments show
that for model checking, multi-core tree compression pays its own way: it comes
virtually without overhead compared to the fastest hash table-based methods.
Comment: 19 pages
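The sharing idea behind tree compression can be illustrated with a small sequential sketch (the paper's algorithm is concurrent and far more engineered): each state vector is split recursively, and each unique (left, right) pair is interned in a table, so a whole state collapses to a single table index and common sub-vectors are stored only once.

```python
class TreeCompressor:
    """Minimal sketch of tree-based state compression: hash-cons the
    halves of state vectors so shared sub-vectors are stored once."""

    def __init__(self):
        self.table = {}   # (left, right) -> index
        self.nodes = []   # index -> (left, right)

    def compress(self, vec):
        if len(vec) == 1:
            return ("leaf", vec[0])
        mid = len(vec) // 2
        pair = (self.compress(vec[:mid]), self.compress(vec[mid:]))
        if pair not in self.table:          # intern new pairs only
            self.table[pair] = len(self.nodes)
            self.nodes.append(pair)
        return ("node", self.table[pair])

    def decompress(self, ref):
        kind, val = ref
        if kind == "leaf":
            return [val]
        left, right = self.nodes[val]
        return self.decompress(left) + self.decompress(right)
```

Two identical states compress to the same reference, and a state differing in one slot reuses every unchanged subtree, which is why the incremental cost per state can stay near a small constant.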
Ageing of the LHCb outer tracker
The modules of the LHCb outer tracker have been shown to suffer severe gain loss under moderate irradiation, a process called ageing. Ageing of the modules results from contamination of the gas system by the glue (Araldite AY 103-1) used in their construction. This thesis characterises the ageing process. The schemes known to reduce, reverse, or prevent ageing have been investigated to determine their effect on detector performance. The addition of O2 to the gas mixture lowers the detector response by an acceptable amount and does not significantly affect the gas transport properties. The ageing rate decreases after extensive flushing, and HV training can eventually repair the irradiation damage; the risks of HV training have been assessed. Furthermore, several gaseous and aqueous additives have been tested for their capability to prevent or moderate ageing, but none showed significant improvement.
Distributed Markovian Bisimulation Reduction aimed at CSL Model Checking
The verification of quantitative aspects like performance and dependability by means of model checking has become an important and active area of research over the past decade.

An important result of that research is the logic CSL (continuous stochastic logic) and its corresponding model checking algorithms. The evaluation of properties expressed in CSL makes it necessary to solve large systems of linear (differential) equations, usually by means of numerical analysis. Both the inherent time and space complexity of the numerical algorithms make it practically infeasible to model check systems with more than 100 million states, whereas realistic system models may have billions of states.

To overcome this severe restriction, it is important to be able to replace the original state space with a probabilistically equivalent, but smaller one. The most prominent equivalence relation is bisimulation, for which a stochastic variant also exists (Markovian bisimulation). In many cases, this bisimulation allows for a substantial reduction of the state space size. But these savings in space come at the cost of an increased time complexity. Therefore, in this paper a new distributed signature-based algorithm for the computation of the bisimulation quotient of a given state space is introduced.

To demonstrate the feasibility of our approach in both a sequential and, more importantly, a distributed setting, we have performed a number of case studies.
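The signature-based refinement idea can be sketched sequentially (the paper's contribution is the distributed version; the core loop is the same): repeatedly group states by the cumulative rate they send into each current block, until the partition stabilises.

```python
def markovian_bisim_quotient(states, rate):
    """Sequential sketch of signature-based partition refinement for
    Markovian bisimulation. `rate` maps state pairs (s, t) to CTMC
    transition rates; the result maps each state to its block id."""
    block = {s: 0 for s in states}          # start from a single block

    def signature(s):
        # Cumulative rate from s into each current block.
        sig = {}
        for t in states:
            r = rate.get((s, t), 0.0)
            if r:
                sig[block[t]] = sig.get(block[t], 0.0) + r
        return frozenset(sig.items())

    while True:
        new_ids, new_block = {}, {}
        for s in states:
            key = (block[s], signature(s))  # refine within old blocks
            new_ids.setdefault(key, len(new_ids))
            new_block[s] = new_ids[key]
        if len(new_ids) == len(set(block.values())):
            return new_block                # partition is stable
        block = new_block
```

On a small CTMC where two states have identical rates into the same targets, the two states end up in one block and the quotient is strictly smaller than the original state space.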
Exact solution of the Zeeman effect in single-electron systems
Contrary to popular belief, the Zeeman effect can be treated exactly in
single-electron systems, for arbitrary magnetic field strengths, as long as the
term quadratic in the magnetic field can be ignored. These formulas were
actually derived already around 1927 by Darwin, using the classical picture of
angular momentum, and presented in their proper quantum-mechanical form in 1933
by Bethe, although without any proof. The expressions have since been more or
less lost from the literature; instead, the conventional treatment nowadays is
to present only the approximations for weak and strong fields, respectively.
However, in fusion research and other plasma physics applications, the magnetic
fields applied to control the shape and position of the plasma span the entire
region from weak to strong fields, and there is a need for a unified treatment.
In this paper we present the detailed quantum-mechanical derivation of the
exact eigenenergies and eigenstates of hydrogen-like atoms and ions in a static
magnetic field. Notably, these formulas are not much more complicated than the
better-known approximations. Moreover, the derivation allows the value of the
electron spin gyromagnetic ratio to be different from 2. For
completeness, we then review the details of dipole transitions between two
hydrogenic levels, and calculate the corresponding Zeeman spectrum. The various
approximations made in the derivation are also discussed in detail.
Comment: 18 pages, 4 figures. Submitted to Physica Scripta
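For orientation, the kind of exact intermediate-field result the abstract alludes to can be written down for the fine-structure case with spin-orbit coupling $\zeta\,\mathbf{L}\cdot\mathbf{S}$ and $g_s = 2$: diagonalising the $2\times 2$ block at fixed $l$ and $m_j$ gives a standard textbook formula (reproduced here as an illustration; the paper's own notation, and its treatment of $g_s \neq 2$, may differ):

```latex
\[
E_{\pm}(m_j) \;=\; -\frac{\zeta}{4} \;+\; \mu_B B\, m_j
\;\pm\; \frac{1}{2}\sqrt{\;\zeta^2\left(l+\tfrac{1}{2}\right)^2
\;+\; 2\,\zeta\,\mu_B B\, m_j \;+\; (\mu_B B)^2\;}.
\]
```

Expanding the square root for small $B$ recovers the anomalous Zeeman splitting with Landé factors $g = 1 \pm 1/(2l+1)$ for $j = l \pm \tfrac{1}{2}$, while the large-$B$ limit reproduces the Paschen-Back regime, so a single closed form indeed covers the whole field range.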
On the Nature of MeV-blazars
Broad-band spectra of the FSRQ (flat-spectrum-radio quasars) detected in the
high energy gamma-ray band imply that there may be two types of such objects:
those with steep gamma-ray spectra, hereafter called MeV-blazars, and those
with flat gamma-ray spectra, GeV-blazars. We demonstrate that this difference
can be explained in the context of the ERC (external-radiation-Compton) model
using the same electron injection function. A satisfactory unification is
reachable, provided that: (a) spectra of GeV-blazars are produced by internal
shocks formed at the distances where cooling of relativistic electrons in a jet
is dominated by Comptonization of broad emission lines, whereas spectra of
MeV-blazars are produced at the distances where cooling of relativistic
electrons is dominated by Comptonization of near-IR radiation from hot dust;
(b) electrons are accelerated via a two step process and their injection
function takes the form of a double power-law, with the break corresponding to
the threshold energy for the diffusive shock acceleration. Direct predictions
of our model are that, on average, variability time scales of the MeV-blazars
should be longer than variability time scales of the GeV-blazars, and that both
types of the blazar phenomenon can appear in the same object.
Comment: Accepted for publication in the Astrophysical Journal
Expression of San Andreas Fault on Seasat Radar Image
On a Seasat image (23.5-cm wavelength) of the Durmid Hills in
southern California, the San Andreas fault is expressed as a prominent
southeast-trending tonal lineament that is bright on the southwest
side and dark on the northeast side. Field investigation established
that the bright signature corresponds to outcrops of the Borrego
Formation, which weathers to a rough surface. The dark signature
corresponds to sand and silt deposits of Lake Coahuila which are
smooth at the wavelength of the Seasat radar. These signatures and
field characteristics agree with calculations of the smooth and
rough radar criteria. On Landsat and Skylab images of the Durmid
Hills, the Borrego and Lake Coahuila surfaces have similar bright
tones and the San Andreas fault is not detectable. On a side-looking
airborne radar image (0.86-cm wavelength), both the Borrego and Lake
Coahuila surfaces appear rough, which results in bright signatures on
both sides of the San Andreas fault. Because of this lack of roughness
contrast, the fault cannot be distinguished. The wavelength of
the Seasat radar system is well suited for mapping geologic features
in the Durmid Hills that are obscure on other remote sensing images
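The "smooth and rough radar criteria" mentioned above can be illustrated numerically. The sketch below uses the Peake-Oliver criteria commonly quoted in radar geology (smooth if surface relief h < lambda/(25 sin(gamma)), rough if h > lambda/(4.4 sin(gamma)), with gamma the depression angle); the 70-degree depression angle is an illustrative assumption, not a value taken from the article.

```python
import math

def roughness_thresholds(wavelength_cm, depression_deg):
    """Return (smooth, rough) relief thresholds in cm under the
    Peake-Oliver criteria: surfaces below the first value appear
    radar-smooth (dark), above the second radar-rough (bright)."""
    s = math.sin(math.radians(depression_deg))
    return wavelength_cm / (25 * s), wavelength_cm / (4.4 * s)

smooth_L, rough_L = roughness_thresholds(23.5, 70)   # Seasat L-band
smooth_X, rough_X = roughness_thresholds(0.86, 70)   # airborne X-band
```

At 23.5 cm the smooth/rough thresholds come out near 1 cm and 5.7 cm, so centimetre-scale relief separates the two surfaces, whereas at 0.86 cm even a couple of millimetres of relief already counts as rough, which is consistent with both surfaces appearing bright on the X-band image.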
Perceiving emotions in visual stimuli: social verbal context facilitates emotion detection of words but not of faces
Building on the notion that processing of emotional stimuli is sensitive to context, in two experimental tasks we explored whether the detection of emotion in emotional words (task 1) and facial expressions (task 2) is facilitated by social verbal context. Three levels of contextual supporting information were compared: (1) no information, (2) the verbal expression of an emotionally matched word pronounced with a neutral intonation, and (3) the verbal expression of an emotionally matched word pronounced with emotionally matched intonation. We found that increasing levels of supporting contextual information enhanced emotion detection for words, but not for facial expressions. We also measured activity of the corrugator and zygomaticus muscles to assess facial simulation, which can itself facilitate the processing of emotional stimuli. While facial simulation emerged for facial expressions, the level of contextual supporting information did not qualify this effect. All in all, our findings suggest that adding emotionally relevant vocal elements positively influences emotion detection.
Program Correctness by Transformation
Deductive program verification can be used effectively to verify high-level programs, but can be challenging for low-level, high-performance code. In this paper, we argue that compilation and program transformations should be made annotation-aware: during compilation and program transformation, not only should the code be changed, but also the corresponding annotations. As a result, if the original high-level program could be verified, the resulting low-level program can be verified as well. We illustrate this approach on a concrete case, where loop annotations that capture possible loop parallelisations are translated into specifications of an OpenCL kernel that corresponds to the parallel loop. We also sketch how several commonly used OpenCL kernel transformations can be adapted to also transform the corresponding program annotations. Finally, we conclude the paper with a list of research challenges that need to be addressed to further develop this approach.
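The idea of co-transforming code and annotations can be caricatured in a few lines. The annotation representation and the `to_kernel` transformation below are invented purely for illustration and are not the paper's (or OpenCL's) actual syntax: the point is only that the transformation carries the per-iteration specification over to a per-thread specification, so the transformed program remains checkable.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class AnnotatedLoop:
    # A parallelisable loop: body(i, a) mutates a[i]; post(i, a) is the
    # per-iteration postcondition that must hold after iteration i.
    n: int
    body: Callable
    post: Callable

@dataclass
class AnnotatedKernel:
    # The "kernel" produced from the loop: one thread per iteration,
    # carrying the transformed per-thread specification.
    n: int
    thread: Callable
    thread_post: Callable

def to_kernel(loop: AnnotatedLoop) -> AnnotatedKernel:
    # The transformation rewrites the code (loop body -> thread function)
    # and the annotation alongside it (per-iteration -> per-thread post).
    return AnnotatedKernel(n=loop.n, thread=loop.body, thread_post=loop.post)

def run_and_check(kernel: AnnotatedKernel, a: list) -> list:
    for tid in range(kernel.n):   # sequential stand-in for a parallel launch
        kernel.thread(tid, a)
        assert kernel.thread_post(tid, a), f"spec violated by thread {tid}"
    return a

# Example: a[i] := 2*i, with postcondition a[i] == 2*i.
double_loop = AnnotatedLoop(
    n=4,
    body=lambda i, a: a.__setitem__(i, 2 * i),
    post=lambda i, a: a[i] == 2 * i,
)
```

Because `to_kernel` preserves the specification along with the code, checking the kernel against `thread_post` gives the same guarantee the loop annotation gave for the original program.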