Accessible user interface support for multi-device ubiquitous applications: architectural modifiability considerations
The market for personal computing devices is rapidly expanding from the PC to mobile, home entertainment systems, and even the automotive industry. When developing software targeting such ubiquitous devices, balancing development costs against market coverage has turned out to be a challenging issue. With the rise of Web technology and the Internet of Things, ubiquitous applications have become a reality. Nonetheless, the diversity of presentation and interaction modalities still drastically limits the number of targetable devices and the accessibility toward end users. This paper presents webinos, a multi-device application middleware platform founded on the Future Internet infrastructure. To this end, the platform's architectural modifiability considerations are described and evaluated as a generic enabler for applications executed in ubiquitous computing environments.
Shallow landsliding and catchment connectivity within the Houpoto Forest, New Zealand.
Active landslides and their contribution to catchment connectivity have been investigated within the Houpoto Forest, North Island, New Zealand. The aim was to quantify the proportion of buffered versus coupled landslides and to explore how specific physical conditions influenced differences in landslide connectivity. Landsliding and land use changes between 2007 and 2010 were identified and mapped from aerial photography, and the preliminary analyses and interpretations of these data are presented here. The data indicate that forest harvesting made some slopes more susceptible to failure, and consequently many landslides were triggered during subsequent heavy rainfall events. Failures were particularly widespread during two high-magnitude (>200 mm/day) rainfall events, as recorded in 2010 imagery. Connectivity was analysed by quantifying the relative areal extents of coupled and buffered landslides identified in the different images. Approximately 10% of the landslides were identified as being coupled to the local stream network, and thus directly contributing to the sediment budget. Following the liberation of sediment by landsliding during high-magnitude events, low-magnitude events are thought to be capable of transferring more of this sediment to the channel. Subsequent re-planting of the slopes appears to have aided recovery by increasing the thresholds for failure, thus reducing the number of landslides during subsequent high-magnitude rainfall events. Associated with this is a reduction in slope-channel connectivity. These preliminary results highlight how site-specific preconditioning, preparatory, and triggering factors contribute to landslide distribution and connectivity, and how efficient re-afforestation improves the rate of slope recovery.
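The connectivity analysis described above reduces to classifying each mapped landslide as coupled or buffered and comparing counts and areas; the following is a minimal sketch of that bookkeeping in Python. The data structure and field names are illustrative assumptions, not the study's actual workflow.

from dataclasses import dataclass

@dataclass
class Landslide:
    area_m2: float   # mapped landslide area
    coupled: bool    # True if the deposit reaches the stream network

def connectivity_fractions(slides):
    """Return (count fraction, areal fraction) of coupled landslides."""
    coupled = [s for s in slides if s.coupled]
    count_frac = len(coupled) / len(slides)
    area_frac = sum(s.area_m2 for s in coupled) / sum(s.area_m2 for s in slides)
    return count_frac, area_frac

# Toy example: 1 of 10 mapped slides is coupled (~10%, as reported above).
slides = [Landslide(500.0, i == 0) for i in range(10)]
print(connectivity_fractions(slides))   # (0.1, 0.1)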
Mesoscopic simulation of diffusive contaminant spreading in gas flows at low pressure
Many modern production and measurement facilities incorporate multiphase systems at low pressures. In this regime of flows at small but non-zero Knudsen numbers and low Mach numbers, the classical mesoscopic Monte Carlo methods become increasingly costly numerically. Hybrid models are a promising way to increase the numerical efficiency of simulations. In this contribution, we propose a novel, efficient approach for the simulation of two-phase flows with a large concentration imbalance in a low-pressure environment in the low to intermediate Knudsen regime. Our hybrid model comprises a lattice-Boltzmann method, corrected for the lower intermediate Kn regime as proposed by Zhang et al., for the simulation of the ambient flow field. A coupled event-driven Monte-Carlo-style Boltzmann solver is employed to describe particles of a second species at low concentration. To evaluate the model, standard diffusivity and diffusion-advection systems are considered.
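For orientation, the ambient-flow half of such a hybrid is a lattice-Boltzmann update of the usual stream-and-collide form. The sketch below is a generic, textbook D2Q9 BGK step in Python/NumPy; it deliberately omits the Knudsen-regime correction of Zhang et al. referenced above, which modifies the relaxation.

import numpy as np

# D2Q9 lattice: discrete velocities and weights
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, ux, uy):
    """Second-order Maxwell-Boltzmann equilibrium on the D2Q9 lattice."""
    cu = 3.0 * (c[:, 0, None, None] * ux + c[:, 1, None, None] * uy)
    usq = 1.5 * (ux**2 + uy**2)
    return rho * w[:, None, None] * (1.0 + cu + 0.5 * cu**2 - usq)

def bgk_step(f, tau):
    """One collide-and-stream step; f has shape (9, nx, ny), periodic domain."""
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f += (equilibrium(rho, ux, uy) - f) / tau      # BGK collision
    for i, (cx, cy) in enumerate(c):               # streaming (periodic)
        f[i] = np.roll(np.roll(f[i], cx, axis=0), cy, axis=1)
    return f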
A flexible scintillation light apparatus for rare event searches
Compelling experimental evidence of neutrino oscillations, and its implication that neutrinos are massive particles, has given neutrinoless double beta decay a central role in astroparticle physics. In fact, the discovery of this elusive decay would be a major breakthrough, revealing that neutrino and antineutrino are the same particle and that lepton number is not conserved. It would also advance our efforts to establish the absolute neutrino mass scale and, ultimately, to understand the unification of elementary particle interactions. All current experimental programs searching for neutrinoless double beta decay face the technical and financial challenge of increasing the experimental mass while maintaining extremely low levels of spurious background. The new concept described in this paper could be the answer, combining all the features of an ideal experiment: energy resolution, low-cost mass scalability, flexibility in the choice of isotope, and many powerful handles for making the background negligible. The proposed technology is based on arrays of silicon detectors cooled to 120 K to optimize the collection of the scintillation light emitted by ultra-pure crystals. It is shown that with a 54 kg array of natural CaMoO4 scintillation detectors of this type it is possible to reach a competitive sensitivity on the half-life of the neutrinoless double beta decay of 100Mo as high as ~10^24 years in only one year of data taking. The same array made of 40CaMoO4 scintillation detectors (to eliminate the continuous background from the two-neutrino double beta decay of 48Ca) would instead be capable of achieving the remarkable sensitivity of ~10^25 years on the half-life of 100Mo neutrinoless double beta decay in only one year of measurement.
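The quoted half-life sensitivity can be sanity-checked with the standard zero-background counting estimate T1/2 ~ ln2 * N * eps * t / n_lim, where N is the number of candidate nuclei. The back-of-envelope sketch below assumes 100% signal efficiency and a 90% C.L. upper limit of 2.44 counts (Feldman-Cousins, zero events observed); these assumptions are mine, not the paper's analysis.

import math

N_A   = 6.022e23   # Avogadro's number [1/mol]
mass  = 54e3       # detector mass [g]
W     = 200.0      # molar mass of CaMoO4 [g/mol], approximate
a     = 0.0963     # natural isotopic abundance of 100Mo
t     = 1.0        # live time [yr]
n_lim = 2.44       # 90% C.L. counts, background-free, zero observed

n_nuclei = mass / W * N_A * a
T_half = math.log(2) * n_nuclei * t / n_lim
print(f"T1/2 sensitivity ~ {T_half:.1e} yr")
# ~4e24 yr: the same order as the quoted ~10^24 yr once real detection
# efficiency and analysis cuts are folded in.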
Computing infrastructure issues in distributed communications systems: a survey of operating system transport system architectures
The performance of distributed applications (such as file transfer, remote login, tele-conferencing, full-motion video, and scientific visualization) is influenced by several factors that interact in complex ways. In particular, application performance is significantly affected both by communication infrastructure factors and by computing infrastructure factors. Communication infrastructure factors include channel speed, bit-error rate, and congestion at intermediate switching nodes. Computing infrastructure factors include (among other things) both protocol processing activities (such as connection management, flow control, error detection, and retransmission) and general operating system factors (such as memory latency, CPU speed, interrupt and context switching overhead, process architecture, and message buffering). Due to a several-orders-of-magnitude increase in network channel speed and an increase in application diversity, performance bottlenecks are shifting from the network factors to the transport system factors.

This paper defines an abstraction called an "Operating System Transport System Architecture" (OSTSA) that is used to classify the major components and services in the computing infrastructure. End-to-end network protocols such as TCP, TP4, VMTP, XTP, and Delta-t typically run on general-purpose computers, where they utilize various operating system resources such as processors, virtual memory, and network controllers. The OSTSA provides services that integrate these resources to support distributed applications running on local and wide area networks.

A taxonomy is presented to evaluate OSTSAs in terms of their support for protocol processing activities. We use this taxonomy to compare and contrast five general-purpose commercial and experimental operating systems: System V UNIX, BSD UNIX, the x-kernel, Choices, and Xinu.
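The protocol processing activities listed above (connection management, flow control, retransmission, buffering) are exactly what a transport system hides behind a small application-facing interface. As a point of reference only, here is a minimal Python sketch of that interface; the host, port, and payload are placeholders, and the OS internals the survey classifies all sit beneath these few calls.

import socket

def echo_once(host="127.0.0.1", port=7, payload=b"hello"):
    """Send one message over TCP and return the reply."""
    # create_connection performs connection management (3-way handshake);
    # the kernel transport system handles segmentation, flow control,
    # retransmission, and buffering behind sendall/recv.
    with socket.create_connection((host, port), timeout=5.0) as s:
        s.sendall(payload)
        return s.recv(4096)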
Thoughts on heavy-ion physics in the high luminosity era: the soft sector
This document summarizes thoughts on opportunities in the soft-QCD sector from high-energy nuclear collisions at high luminosities.
Time and position sensitive single photon detector for scintillator read-out
We have developed a photon counting detector system for combined neutron and
gamma radiography which can determine position, time and intensity of a
secondary photon flash created by a high-energy particle or photon within a
scintillator screen. The system is based on a micro-channel plate
photomultiplier concept utilizing image charge coupling to a position- and
time-sensitive read-out anode placed outside the vacuum tube in air, aided by a
standard photomultiplier and very fast pulse-height analyzing electronics. Due
to the low dead time of all system components it can cope with the high
throughput demands of a proposed combined fast neutron and dual discrete energy
gamma radiography method (FNDDER). We show tests with different types of
delay-line read-out anodes and present a novel pulse-height-to-time converter
circuit with its potential to discriminate gamma energies for the projected
FNDDER devices for an automated cargo container inspection system (ACCIS).Comment: Proceedings of FNDA 201