1,294 research outputs found
Investigating accessibility indicators for feedback from a travel model to a land use model
Activity locations such as work locations or leisure facilities are not uniformly distributed geographically. Also, the travel access to different locations is not uniform. It is plausible to assume that locations with easier access to other activity locations are more attractive than locations with less access. In consequence, urban simulation models such as UrbanSim use accessibility measures, such as ``number of jobs within 30 minutes by car'', for several of their submodels. A problem, however, is that accessibility variables are not easy to compute within UrbanSim, for two reasons: 1) UrbanSim does not contain a travel model, and in consequence is not able to compute by itself the congestion effects resulting from land use decisions; 2) the travel times are fed back from the travel model in the form of zone-to-zone travel time matrices. As is well known, such matrices grow quadratically in the number of zones. This limits the number of attributes that can be passed, for example different values for different times of day and/or for different activity purposes. These issues could be solved within UrbanSim, but only with considerable implementation effort. For that reason, it is important to consider how accessibility measures could be fed back from a travel model to UrbanSim. The present study will examine to what extent location-based accessibility measures that are computed in the travel model and then fed back to UrbanSim could be used for this purpose. Those accessibility measures no longer belong to pairs of locations, but to a single location; a typical representative is a logsum term. In consequence, the number of entries now grows linearly in the number of locations, allowing much more freedom both in the number of considered locations and in the number of attributes that can be attached to every location considered in this way.
This paper will address issues such as different spatial resolutions of such accessibility measures, comparisons between different accessibility measures, and computing times.
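The location-based logsum measure described above can be illustrated with a short sketch. The zone names, job counts, travel times, and the sensitivity parameter below are all invented for illustration; only the structure (a quadratic travel-time matrix reduced to one accessibility value per zone) reflects the abstract.

```python
import math

# Hypothetical zone-to-zone travel times in minutes (quadratic in #zones).
travel_time = {
    ("A", "A"): 0,  ("A", "B"): 20, ("A", "C"): 45,
    ("B", "A"): 20, ("B", "B"): 0,  ("B", "C"): 15,
    ("C", "A"): 45, ("C", "B"): 15, ("C", "C"): 0,
}
jobs = {"A": 1000, "B": 5000, "C": 2000}  # opportunities per zone (invented)
beta = 0.1  # illustrative travel-time sensitivity

def logsum_accessibility(origin, zones):
    """Location-based logsum: one value per zone (linear in #zones)."""
    return math.log(sum(
        jobs[d] * math.exp(-beta * travel_time[(origin, d)])
        for d in zones
    ))

# The quadratic matrix collapses to one attribute per location,
# which is what makes the feedback to a land use model cheap.
accessibility = {z: logsum_accessibility(z, jobs) for z in jobs}
```

Zone B, being close to the largest job pool, ends up with the highest accessibility; the output is one number per zone rather than a full matrix.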
Testing for monotonicity in the Hubble diagram
General relativistic kinematics and the cosmological principle alone imply a monotonicity constraint in the Hubble diagram, which we confront with present-day supernova data. We use the running gradient method of statistical inference by Hall & Heckman (2000). We find no significant departure from monotonicity. The method seems well adapted and we recommend its use with future data.
Comment: 5 pages, 3 figures
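The core idea of a running-gradient test can be sketched as follows. This is a simplified illustration, not the full Hall & Heckman (2000) procedure (which considers all sub-intervals and calibrates the statistic via a bootstrap); the data here are synthetic, loosely shaped like a distance-modulus vs. redshift relation.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic monotone "Hubble diagram"-like data plus noise (illustrative).
x = np.sort(rng.uniform(0.05, 2.0, 200))
y = 5 * np.log10(x) + rng.normal(0.0, 0.2, x.size)

def running_gradient_stat(x, y, window=20):
    """Max standardized *negative* OLS slope over sliding windows.

    A large value indicates a local decrease, i.e. evidence against
    monotonicity; significance would be assessed by resampling,
    which is omitted here."""
    stats = []
    for i in range(x.size - window + 1):
        xw, yw = x[i:i + window], y[i:i + window]
        xc = xw - xw.mean()
        slope = (xc @ (yw - yw.mean())) / (xc @ xc)
        resid = yw - yw.mean() - slope * xc
        se = np.sqrt(resid @ resid / (window - 2) / (xc @ xc))
        stats.append(-slope / se)
    return max(stats)

T = running_gradient_stat(x, y)
```

For monotone-increasing data the statistic stays small; flipping the sign of `y` produces a clearly decreasing series and a much larger statistic.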
Investigating the Impact of the Blogosphere: Using PageRank to Determine the Distribution of Attention
Much has been written in recent years about the blogosphere and its impact on political, educational and scientific debates. Lately the issue has received significant attention from the industry. As the blogosphere continues to grow, even doubling its size every six months, this paper investigates its apparent impact on the overall Web itself. We use the popular Google PageRank algorithm, which employs a model of Web use, to measure the distribution of user attention across sites in the blogosphere. The paper is based on an analysis of the PageRank distribution for 8.8 million blogs in 2005 and 2006. This paper addresses the following key questions: How is PageRank distributed across the blogosphere? Does it indicate the existence of measurable, visible effects of blogs on the overall mediasphere? Can we compare the distribution of attention to blogs as characterised by PageRank with the situation for other forms of Web content? Has there been a growth in the impact of the blogosphere on the Web over the two years analysed here? Finally, it will also be necessary to examine the limitations of a PageRank-centred approach.
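The PageRank computation underlying such an analysis can be sketched by power iteration on a toy link graph. The graph, page names, and parameters below are invented for illustration and have nothing to do with the 8.8 million blogs analysed in the paper.

```python
# Minimal PageRank by power iteration on a toy blog link graph.
links = {  # page -> pages it links to (invented)
    "blog_a": ["blog_b", "news"],
    "blog_b": ["blog_a"],
    "news":   ["blog_a", "blog_b", "blog_c"],
    "blog_c": [],  # dangling node: no outlinks
}

def pagerank(links, damping=0.85, iters=100):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            if outs:
                share = damping * rank[p] / len(outs)
                for q in outs:
                    new[q] += share
            else:  # spread dangling mass uniformly
                for q in pages:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank

pr = pagerank(links)
```

The resulting scores sum to one, so their distribution across a crawl can be read directly as a distribution of attention, which is the quantity the paper studies at scale.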
An Online Bootstrap for Time Series
Resampling methods such as the bootstrap have proven invaluable in the field
of machine learning. However, the applicability of traditional bootstrap
methods is limited when dealing with large streams of dependent data, such as
time series or spatially correlated observations. In this paper, we propose a
novel bootstrap method that is designed to account for data dependencies and
can be executed online, making it particularly suitable for real-time
applications. This method is based on an autoregressive sequence of
increasingly dependent resampling weights. We prove the theoretical validity of
the proposed bootstrap scheme under general conditions. We demonstrate the
effectiveness of our approach through extensive simulations and show that it
provides reliable uncertainty quantification even in the presence of complex
data dependencies. Our work bridges the gap between classical resampling
techniques and the demands of modern data analysis, providing a valuable tool
for researchers and practitioners in dynamic, data-rich environments.
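The flavour of such an online bootstrap can be sketched with a streaming estimator whose replicate weights follow an AR(1) recursion, so that nearby observations receive similar weights. The exact weight process, the class name, and all parameters below are assumptions for illustration, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(1)

class OnlineARBootstrap:
    """Streaming bootstrap of a running mean with AR(1)-correlated
    resampling weights (illustrative sketch only; the paper's weight
    dynamics and validity conditions differ in detail)."""

    def __init__(self, n_boot=200, rho=0.9):
        self.rho = rho
        self.w = np.ones(n_boot)      # current resampling weights
        self.wsum = np.zeros(n_boot)  # running sum of weights
        self.vsum = np.zeros(n_boot)  # running weighted sum of data

    def update(self, x):
        # AR(1) step keeps weights positive, centred near 1, and serially
        # dependent, so each replicate reweights whole stretches of the
        # stream together (mimicking a block bootstrap, but online).
        noise = rng.exponential(1.0, self.w.size)
        self.w = self.rho * self.w + (1 - self.rho) * noise
        self.wsum += self.w
        self.vsum += self.w * x

    def means(self):
        return self.vsum / self.wsum

boot = OnlineARBootstrap()
for x in rng.normal(0.5, 1.0, 2000):  # stand-in for a dependent stream
    boot.update(x)
ci = np.percentile(boot.means(), [2.5, 97.5])
```

Each new observation costs O(n_boot) work and no data are stored, which is what makes the scheme usable in real time; the spread of the replicate means yields the uncertainty interval.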
Quantum Cloning of Continuous Variable Entangled States
We consider the quantum cloning of continuous variable entangled states. This
is achieved by introducing two symmetric entanglement cloning machines (or
e-cloners): a local e-cloner and a global e-cloner; where we look at the
preservation of entanglement in the clones under the condition that the
fidelity of the clones is maximized. These cloning machines are implemented
using simple linear optical elements such as beam splitters and homodyne
detection along with squeeze gates. We show that the global e-cloner outperforms the local e-cloner both in terms of the fidelity of the cloned states and the strength of the entanglement of the clones. There is a minimum strength of entanglement (3 dB for the inseparability criterion and 5.7 dB for the EPR paradox criterion) of the input state of the global e-cloner that is required to preserve the entanglement in the clones.
Comment: 11 pages, 6 figures
The disappearing screen: scenarios for audible interfaces
The world of ubiquitous computing, which by definition includes mobile devices of every kind, leads us to an era of small computer devices, usable in everyday situations. Computers are becoming smaller and operate discreetly in the background. This paper deals with the disappearance of the screen as described and specified by Lev Manovich. Drawing on research on radio frequency identification, this paper shows one possible way to interact with ubiquitous computers, primarily exploring suitability and scenarios for audible interfaces. The paper describes a research project of the University of Arts Berlin and the University of St. Gallen and proposes future research questions.
Conditional quantum-state engineering using ancillary squeezed-vacuum states
We investigate an optical scheme to conditionally engineer quantum states using a beam splitter, homodyne detection and a squeezed vacuum as an ancillary state. This scheme is efficient in producing non-Gaussian quantum states such as squeezed single photons and superpositions of coherent states (SCSs). We show that an SCS with well-defined parity and high fidelity can be generated from a Fock state of , and conjecture that this can be generalized for an arbitrary Fock state. We describe our experimental demonstration of this scheme using coherent input states and measure experimental fidelities that are only achievable using quantum resources.
Comment: 10 pages, 14 figures; use pdf version, high quality figures available on request