Modelling network travel time reliability under stochastic demand
A technique is proposed for estimating the probability distribution of total network travel time, in the light of normal day-to-day variations in the travel demand matrix over a road traffic network. A solution method is proposed, based on a single run of a standard traffic assignment model, which operates in two stages. In stage one, moments of the total travel time distribution are computed by an analytic method, based on the multivariate moments of the link flow vector. In stage two, a flexible family of density functions is fitted to these moments. It is discussed how the resulting distribution may in practice be used to characterise unreliability. Illustrative numerical tests are reported on a simple network, where the method is seen to provide a means for identifying sensitive or vulnerable links, and for examining the impact on network reliability of changes to link capacities. Computational considerations for large networks, and directions for further research, are discussed.
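As a concrete illustration of the stage-two idea, the sketch below fits a flexible density to the first three moments of total travel time and reads off simple unreliability measures. The moment values and the choice of a Pearson type III family are assumptions made for illustration; the abstract does not specify which family of density functions is used.

```python
# A minimal sketch of the stage-two idea: fit a flexible density family to
# given moments of total network travel time and read off unreliability
# measures.  The moment values and the Pearson type III family are
# illustrative assumptions.
import numpy as np
from scipy import stats

# Hypothetical stage-one output: first three moments of total travel time
# (e.g. in vehicle-hours), as produced analytically from the multivariate
# moments of the link flow vector.
mean_T, var_T, skew_T = 1250.0, 90.0**2, 0.8

# Stage two: moment-match a Pearson type III density (scipy parameterises it
# directly by skewness, with loc and scale acting as mean and standard
# deviation of the fitted distribution).
fitted = stats.pearson3(skew_T, loc=mean_T, scale=np.sqrt(var_T))

# Characterise unreliability, e.g. the probability that total travel time
# exceeds 120% of its mean, and the 95th percentile ("planning time").
threshold = 1.2 * mean_T
print("P(T > 1.2 * mean):", 1.0 - fitted.cdf(threshold))
print("95th percentile of T:", fitted.ppf(0.95))
```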
Reliability of Mobile Agents for Reliable Service Discovery Protocol in MANET
Recently, mobile agents have been used to discover services in mobile ad-hoc networks (MANETs), where agents travel through the network, collecting and sometimes spreading the dynamically changing service information. It is important to investigate how reliable the agents are for this application, as the dependability issues (reliability and availability) of a MANET are strongly affected by its dynamic nature. The complexity of the underlying MANET makes it hard to obtain the route reliability of the mobile agent system (MAS); instead, we estimate it using Monte Carlo simulation. We therefore propose an algorithm for estimating the task route reliability of an MAS deployed for discovering services, which takes into account the effect of node mobility in the MANET. It is also shown, by considering different mobility models, that the mobility pattern of the nodes affects MAS performance. The multipath propagation effect of the radio signal is taken into account when deciding link existence, and transient link errors are also considered. Finally, we propose a metric to calculate the reliability of the service discovery protocol and examine how MAS performance affects the protocol reliability. Experimental results show the robustness of the proposed algorithm, and the optimum network bandwidth needed to support the agents is calculated for our application. However, the reliability of the MAS remains highly dependent on the link failure probability.
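The following is a minimal Monte Carlo sketch of the kind of task-route reliability estimate described above. In the paper, link existence is derived from node mobility and multipath propagation; here that is collapsed into a single per-hop link-availability probability, and the route length, error rates, and trial count are illustrative assumptions.

```python
# Monte Carlo estimate of the task-route reliability of a mobile agent that
# must visit a fixed sequence of nodes.  The route length, per-hop link
# availability, and transient-error probability are illustrative assumptions.
import random

def route_success(link_up_prob, transient_error_prob, n_hops):
    """One trial: the agent succeeds only if every hop on its route has a
    usable link and suffers no unrecovered transient error."""
    for _ in range(n_hops):
        if random.random() > link_up_prob:          # link absent (mobility / fading)
            return False
        if random.random() < transient_error_prob:  # transient link error on the hop
            return False
    return True

def estimate_route_reliability(link_up_prob, transient_error_prob,
                               n_hops=5, n_trials=100_000):
    """Monte Carlo estimate of P(agent completes its task route)."""
    successes = sum(
        route_success(link_up_prob, transient_error_prob, n_hops)
        for _ in range(n_trials)
    )
    return successes / n_trials

if __name__ == "__main__":
    for p_link in (0.99, 0.95, 0.90):
        r = estimate_route_reliability(p_link, transient_error_prob=0.01)
        print(f"link availability {p_link:.2f} -> route reliability {r:.3f}")
```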
Estimation of Dynamic Latent Variable Models Using Simulated Nonparametric Moments
Given a model that can be simulated, conditional moments at a trial parameter value can be calculated with high accuracy by applying kernel smoothing methods to a long simulation. With such conditional moments in hand, standard method of moments techniques can be used to estimate the parameter. Because conditional moments are calculated using kernel smoothing rather than simple averaging, it is not necessary that the model be simulable subject to the conditioning information that is used to define the moment conditions. For this reason, the proposed estimator is applicable to general dynamic latent variable models. It is shown that as the number of simulations diverges, the estimator is consistent and a higher-order expansion reveals the stochastic difference between the infeasible GMM estimator based on the same moment conditions and the simulated version. In particular, we show how to adjust standard errors to account for the simulations. Monte Carlo results show how the estimator may be applied to a range of dynamic latent variable (DLV) models, and that it performs well in comparison to several other estimators that have been proposed for DLV models.
Keywords: dynamic latent variable models; simulation-based estimation; simulated moments; kernel regression; nonparametric estimation
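A minimal sketch of the simulated nonparametric moments idea, using a toy latent AR(1)-plus-noise model: conditional moments at a trial parameter are obtained by kernel regression on a long simulation and matched to the same moments computed from the data. The model, bandwidth, grid, and the equally weighted distance used in place of a full GMM weighting matrix are all illustrative assumptions.

```python
# Simulated nonparametric moments, sketched on a toy latent AR(1)-plus-noise
# model: E[y_t | y_{t-1}] is estimated by kernel regression on a long
# simulation and matched to the data counterpart.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)

def simulate(rho, shocks_h, shocks_e, sigma_h=1.0, sigma_e=0.5):
    """Observable y_t = h_t + e_t with latent h_t = rho * h_{t-1} + u_t."""
    n = len(shocks_h)
    h = np.zeros(n)
    for t in range(1, n):
        h[t] = rho * h[t - 1] + sigma_h * shocks_h[t]
    return h + sigma_e * shocks_e

def kernel_cond_mean(x, y, grid, bandwidth=0.5):
    """Nadaraya-Watson estimate of E[y | x = g] at each grid point g."""
    w = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / bandwidth) ** 2)
    return (w * y[None, :]).sum(axis=1) / w.sum(axis=1)

# "Data" generated at a true parameter the estimator does not observe.
n_data = 2_000
y_data = simulate(0.7, rng.standard_normal(n_data), rng.standard_normal(n_data))
grid = np.linspace(-2.0, 2.0, 15)
m_data = kernel_cond_mean(y_data[:-1], y_data[1:], grid)

# Fixed shocks for the long simulation (common random numbers keep the
# objective smooth in the parameter).
n_sim = 50_000
u_sim, e_sim = rng.standard_normal(n_sim), rng.standard_normal(n_sim)

def objective(rho):
    """Distance between data and simulated conditional means E[y_t | y_{t-1}]."""
    y_sim = simulate(rho, u_sim, e_sim)
    m_sim = kernel_cond_mean(y_sim[:-1], y_sim[1:], grid)
    return np.sum((m_data - m_sim) ** 2)

res = minimize_scalar(objective, bounds=(0.0, 0.95), method="bounded")
print("estimated rho:", res.x)
```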
Training deep neural density estimators to identify mechanistic models of neural dynamics
Mechanistic modeling in neuroscience aims to explain observed phenomena in terms of underlying causes. However, determining which model parameters agree with complex and stochastic neural data presents a significant challenge. We address this challenge with a machine learning tool which uses deep neural density estimators, trained using model simulations, to carry out Bayesian inference and retrieve the full space of parameters compatible with raw data or selected data features. Our method is scalable in parameters and data features, and can rapidly analyze new data after initial training. We demonstrate the power and flexibility of our approach on receptive fields, ion channels, and Hodgkin-Huxley models. We also characterize the space of circuit configurations giving rise to rhythmic activity in the crustacean stomatogastric ganglion, and use these results to derive hypotheses for underlying compensation mechanisms. Our approach will help close the gap between data-driven and theory-driven models of neural dynamics.
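The sketch below illustrates the general recipe of training a neural density estimator on simulated (parameter, feature) pairs so that it approximates the posterior over parameters given data. The toy simulator, the single-Gaussian conditional density, and the network size are assumptions for illustration; the tool described in the abstract uses richer density estimators (e.g. normalizing flows) and mechanistic neuroscience simulators.

```python
# Train a conditional density estimator q(theta | x) on simulated pairs, then
# use it for amortized posterior inference on a new observation.
import torch
import torch.nn as nn

torch.manual_seed(0)

def simulate(theta):
    """Toy simulator: summary feature = sample std of 50 draws from N(0, theta)."""
    x = theta.unsqueeze(-1) * torch.randn(theta.shape[0], 50)
    return x.std(dim=-1, keepdim=True)

# Training set: parameters from the prior, features from the simulator.
theta_train = torch.rand(20_000, 1) * 1.9 + 0.1        # prior: Uniform(0.1, 2)
x_train = simulate(theta_train.squeeze(-1))

# Conditional Gaussian density estimator: an MLP outputs the mean and log-std
# of theta given the data feature.
net = nn.Sequential(nn.Linear(1, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(),
                    nn.Linear(64, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2_000):
    idx = torch.randint(0, len(x_train), (256,))
    out = net(x_train[idx])
    mean, log_std = out[:, :1], out[:, 1:]
    dist = torch.distributions.Normal(mean, log_std.exp())
    loss = -dist.log_prob(theta_train[idx]).mean()      # maximize log q(theta | x)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Amortized inference: evaluate the approximate posterior for a new observation.
x_obs = simulate(torch.tensor([1.2]))                    # "observed" feature
with torch.no_grad():
    mean, log_std = net(x_obs)[0]
print(f"posterior for theta: mean={mean.item():.2f}, std={log_std.exp().item():.2f}")
```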
Real-time Loss Estimation for Instrumented Buildings
Motivation. A growing number of buildings have been instrumented to measure and record
earthquake motions and to transmit these records to seismic-network data centers to be archived and
disseminated for research purposes. At the same time, sensors are growing smaller, less expensive to
install, and capable of sensing and transmitting other environmental parameters in addition to
acceleration. Finally, recently developed performance-based earthquake engineering methodologies
employ structural-response information to estimate probabilistic repair costs, repair durations, and
other metrics of seismic performance. The opportunity presents itself therefore to combine these
developments into the capability to estimate automatically in near-real-time the probabilistic seismic
performance of an instrumented building, shortly after the cessation of strong motion. We refer to
this opportunity as (near-) real-time loss estimation (RTLE).
Methodology. This report presents a methodology for RTLE for instrumented buildings. Seismic
performance is to be measured in terms of probabilistic repair cost, precise location of likely physical
damage, operability, and life-safety. The methodology uses the instrument recordings and a Bayesian
state-estimation algorithm called a particle filter to estimate the probabilistic structural response of
the system, in terms of member forces and deformations. The structural response estimate is then
used as input to component fragility functions to estimate the probabilistic damage state of structural
and nonstructural components. The probabilistic damage state can be used to direct structural
engineers to likely locations of physical damage, even if they are concealed behind architectural
finishes. The damage state is used with construction cost-estimation principles to estimate
probabilistic repair cost. It is also used as input to a quantified, fuzzy-set version of the FEMA-356
performance-level descriptions to estimate probabilistic safety and operability levels.
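A minimal sketch of this estimation chain, assuming a single-degree-of-freedom oscillator in place of a full structural model: a bootstrap particle filter tracks the response from noisy acceleration measurements, and the peak drift is passed through a lognormal component fragility function to obtain a damage probability and an expected repair cost. The oscillator properties, noise levels, fragility parameters, and unit repair cost are illustrative assumptions, not values from the report.

```python
# Particle-filter response estimation followed by fragility-based damage and
# repair-cost estimation, on a toy single-degree-of-freedom oscillator.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# --- Toy structure and synthetic ground motion --------------------------------
dt, n_steps = 0.01, 1_000
omega, zeta = 2 * np.pi * 1.0, 0.05             # 1 Hz oscillator, 5% damping
t = np.arange(n_steps) * dt
ag = 0.3 * 9.81 * np.sin(2 * np.pi * 1.2 * t) * np.exp(-0.5 * t)

def step(x, v, a_g):
    """One Euler step; returns (x_next, v_next, relative acceleration)."""
    a = -2 * zeta * omega * v - omega**2 * x - a_g
    return x + dt * v, v + dt * a, a

# Synthetic "true" response and noisy instrument record (relative acceleration).
x_t = v_t = 0.0
meas = np.zeros(n_steps)
for k in range(n_steps):
    x_t, v_t, a_t = step(x_t, v_t, ag[k])
    meas[k] = a_t + 0.2 * rng.standard_normal()

# --- Bootstrap particle filter -------------------------------------------------
n_p = 2_000
xp, vp = np.zeros(n_p), np.zeros(n_p)
peak_drift = np.zeros(n_p)
sigma_q, sigma_r = 1e-3, 0.2                     # process / measurement noise
for k in range(n_steps):
    xp, vp, ap = step(xp, vp, ag[k])
    vp += sigma_q * rng.standard_normal(n_p)     # process noise on velocity
    w = norm.pdf(meas[k], loc=ap, scale=sigma_r) # weight by measurement likelihood
    w /= w.sum()
    idx = rng.choice(n_p, size=n_p, p=w)         # resample
    xp, vp = xp[idx], vp[idx]
    peak_drift = np.maximum(peak_drift[idx], np.abs(xp))

# --- Fragility function and expected repair cost -------------------------------
median_drift, beta = 0.05, 0.4                   # lognormal fragility parameters (m)
p_damage = norm.cdf(np.log(peak_drift / median_drift) / beta)
unit_repair_cost = 40_000.0                      # repair cost if the component is damaged
print("P(component damaged):", p_damage.mean())
print("expected repair cost:", (p_damage * unit_repair_cost).mean())
```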
CUREE demonstration building. The procedure for estimating damage locations, repair costs, and
post-earthquake safety and operability is illustrated in parallel demonstrations by CUREE and
Kajima research teams. The CUREE demonstration is performed using a real 1960s-era, 7-story, nonductile
reinforced-concrete moment-frame building located in Van Nuys, California. The building is
instrumented with 16 channels at five levels: ground level, floors 2, 3, 6, and the roof. We used the
records obtained after the 1994 Northridge earthquake to hindcast performance in that earthquake.
The building is analyzed in its condition prior to the 1994 Northridge Earthquake. It is found that,
while hindcasting of the overall system performance level was excellent, prediction of detailed damage
locations was poor, implying that either actual conditions differed substantially from those shown on
the structural drawings, or inappropriate fragility functions were employed, or both. We also found
that Bayesian updating of the structural model using observed structural response above the base of
the building adds little information to the performance prediction. The reason is probably that
structural uncertainties have only secondary effect on performance uncertainty, compared with the
uncertainty in assembly damageability as quantified by their fragility functions. The implication is
that real-time loss estimation is not sensitive to structural uncertainties (saving costly multiple
simulations of structural response), and that real-time loss estimation does not benefit significantly
from installing measuring instruments other than those at the base of the building.
Kajima demonstration building. The Kajima demonstration is performed using a real 1960s-era
office building in Kobe, Japan. The building, a 7-story reinforced-concrete shearwall building, was not
instrumented in the 1995 Kobe earthquake, so instrument recordings are simulated. The building is
analyzed in its condition prior to the earthquake. It is found that, while hindcasting of the overall
repair cost was excellent, prediction of detailed damage locations was poor, again implying either that
as-built conditions differ substantially from those shown on structural drawings, or that
inappropriate fragility functions were used, or both. We find that the parameters of the detailed
particle filter needed significant tuning, which would be impractical in actual application. Work is
needed to prescribe values of these parameters in general.
Opportunities for implementation and further research. Because much of the cost of applying
this RTLE algorithm results from the cost of instrumentation and the effort of setting up a structural
model, the readiest application would be to instrumented buildings whose structural models are
already available, and to apply the methodology to important facilities. It would be useful to study
under what conditions RTLE would be economically justified. Two other interesting possibilities for
further study are (1) to update performance using readily observable damage; and (2) to quantify the
value of information for expensive inspections, e.g., if one inspects a connection with a modeled 50%
failure probability and finds that the connection is undamaged, is it necessary to examine one with 10%
failure probability?
Modeling Endogenous Mobility in Earnings Determination
We evaluate the bias from endogenous job mobility in fixed-effects estimates of worker- and firm-specific earnings heterogeneity using longitudinally linked employer-employee data from the LEHD infrastructure file system of the U.S. Census Bureau. First, we propose two new residual diagnostic tests of the assumption that mobility is exogenous to unmodeled determinants of earnings. Both tests reject exogenous mobility. We relax the exogenous mobility assumptions by modeling the evolution of the matched data as an evolving bipartite graph using a Bayesian latent class framework. Our results suggest that endogenous mobility biases estimated firm effects toward zero. To assess validity, we match our estimates of the wage components to out-of-sample estimates of revenue per worker. The corrected estimates attribute much more of the variation in revenue per worker to variation in match quality and worker quality than the uncorrected estimates.
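For context, the sketch below shows the standard two-way (worker and firm) fixed-effects earnings decomposition on a toy matched employer-employee panel; this is the baseline whose exogenous-mobility assumption the paper tests and relaxes, not the Bayesian latent class correction itself. The panel dimensions and effect variances are illustrative assumptions, and mobility is random (i.e. exogenous) by construction.

```python
# Two-way fixed-effects (worker and firm) decomposition of log wages on a toy
# matched employer-employee panel, estimated by sparse least squares.
import numpy as np
from scipy.sparse import coo_matrix, hstack
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(0)

# Toy matched panel: each worker is observed for several years, and the
# employing firm is drawn at random (exogenous mobility by construction).
n_workers, n_firms, n_years = 500, 50, 6
worker = np.repeat(np.arange(n_workers), n_years)
firm = rng.integers(0, n_firms, size=worker.size)
alpha = rng.normal(0.0, 0.3, n_workers)                 # true worker effects
psi = rng.normal(0.0, 0.2, n_firms)                     # true firm effects
log_wage = alpha[worker] + psi[firm] + 0.1 * rng.standard_normal(worker.size)

# Sparse design matrix of worker and firm dummies; firm 0 is dropped as the
# reference category for identification.
rows = np.arange(worker.size)
D = coo_matrix((np.ones(worker.size), (rows, worker)),
               shape=(worker.size, n_workers))
F = coo_matrix((np.ones(worker.size), (rows, firm)),
               shape=(worker.size, n_firms)).tocsc()[:, 1:]
X = hstack([D, F]).tocsr()

# Least-squares estimates of worker and firm effects.
beta = lsqr(X, log_wage)[0]
alpha_hat = beta[:n_workers]
psi_hat = np.concatenate([[0.0], beta[n_workers:]])

# With exogenous mobility, the fixed-effects estimates recover the true
# heterogeneity well (correlations are invariant to the normalization).
print("corr(true, estimated worker effects):",
      round(np.corrcoef(alpha, alpha_hat)[0, 1], 3))
print("corr(true, estimated firm effects):",
      round(np.corrcoef(psi, psi_hat)[0, 1], 3))
```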
Modeling Endogenous Mobility in Wage Determination
We evaluate the bias from endogenous job mobility in fixed-effects estimates of worker- and firm-specific earnings heterogeneity using longitudinally linked employer-employee data from the LEHD infrastructure file system of the U.S. Census Bureau. First, we propose two new residual diagnostic tests of the assumption that mobility is exogenous to unmodeled determinants of earnings. Both tests reject exogenous mobility. We relax the exogenous mobility assumptions by modeling the evolution of the matched data as an evolving bipartite graph using a Bayesian latent class framework. Our results suggest that endogenous mobility biases estimated firm effects toward zero. To assess validity, we match our estimates of the wage components to out-of-sample estimates of revenue per worker. The corrected estimates attribute much more of the variation in revenue per worker to variation in match quality and worker quality than the uncorrected estimates.