Verifiably-safe software-defined networks for CPS
Next generation cyber-physical systems (CPS) are expected to be deployed in domains which require scalability as well as performance under dynamic conditions. This scale and dynamicity will require that CPS communication networks be programmatic (i.e., not requiring manual intervention at any stage), but still maintain iron-clad safety guarantees. Software-defined networking standards like OpenFlow provide a means for scalably building tailor-made network architectures, but there is no guarantee that these systems are safe, correct, or secure. In this work we propose a methodology and accompanying tools for specifying and modeling distributed systems such that existing formal verification techniques can be transparently used to analyze critical requirements and properties prior to system implementation. We demonstrate this methodology by iteratively modeling and verifying an OpenFlow learning switch network with respect to network correctness, network convergence, and mobility-related properties. We posit that a design strategy based on the complementary pairing of software-defined networking and formal verification would enable the CPS community to build next-generation systems without sacrificing the safety and reliability that these systems must deliver.
On Coordinating Collaborative Objects
A collaborative object represents a data type (such as a text document)
designed to be shared by a group of dispersed users. The Operational
Transformation (OT) is a coordination approach used for supporting optimistic
replication for these objects. It allows the users to concurrently update the
shared data and exchange their updates in any order since the convergence of
all replicas, i.e. the fact that all users view the same data, is ensured in
all cases. However, designing algorithms for achieving convergence with the OT
approach is a critical and challenging issue. In this paper, we propose a
formal compositional method for specifying complex collaborative objects. The
most important feature of our method is that designing an OT algorithm for the
composed collaborative object can be done by reusing the OT algorithms of
component collaborative objects. By using our method, we can start from correct
small collaborative objects which are relatively easy to handle and
incrementally combine them to build more complex collaborative objects. Comment: In Proceedings FOCLASA 2010, arXiv:1007.499
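The transformation idea behind OT can be made concrete with a toy example. Below is a minimal sketch, assuming character-insert operations only; the names (`Insert`, `apply_op`, `transform`) are illustrative, not from the paper, and real OT algorithms must also handle deletions and more subtle concurrency cases.

```python
# Minimal Operational Transformation sketch for a shared text document,
# restricted to single-character inserts. Two users may apply concurrent
# operations in either order; transform() adjusts positions so replicas
# converge to the same state.

from dataclasses import dataclass

@dataclass
class Insert:
    pos: int    # index at which the character is inserted
    char: str   # single character to insert
    site: int   # id of the issuing site, used to break position ties

def apply_op(text: str, op: Insert) -> str:
    """Apply an insert operation to the current document state."""
    return text[:op.pos] + op.char + text[op.pos:]

def transform(op1: Insert, op2: Insert) -> Insert:
    """Transform op1 against a concurrent op2: if op2 inserted at an
    earlier (or tie-winning equal) position, shift op1 right by one."""
    if op1.pos < op2.pos or (op1.pos == op2.pos and op1.site < op2.site):
        return Insert(op1.pos, op1.char, op1.site)
    return Insert(op1.pos + 1, op1.char, op1.site)

# Two users concurrently edit "abc": site 1 inserts 'X' at 0,
# site 2 inserts 'Y' at 2. Both application orders converge.
o1, o2 = Insert(0, "X", 1), Insert(2, "Y", 2)
r1 = apply_op(apply_op("abc", o1), transform(o2, o1))
r2 = apply_op(apply_op("abc", o2), transform(o1, o2))
assert r1 == r2 == "XabYc"
```

The compositional point of the paper is that correctness arguments for such small objects (a character sequence here) can be established once and then reused when objects are combined into larger ones.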
Is Gravitational Lensing by Intercluster Filaments Always Negligible?
Intercluster filaments contribute negligibly to the weak lensing signal in
general relativity (GR). In the context of the relativistic formulation of
modified Newtonian dynamics (MOND) introduced by Bekenstein, however, a single
filament inclined from the line of sight can cause substantial distortion of
background sources pointing towards the filament's axis; this is rigorous
for infinitely long uniform filaments, but also qualitatively true for short
filaments (on Mpc scales), and even in regions where the projected matter
density of the filament is equal to zero. Since galaxies and galaxy clusters
are generally embedded in filaments or are projected on such structures, this
contribution complicates the interpretation of the weak lensing shear map in
the context of MOND. While our analysis is of mainly theoretical interest
providing order-of-magnitude estimates only, it seems safe to conclude that
when modeling systems with anomalous weak lensing signals, e.g. the "bullet
cluster" of Clowe et al., the "cosmic train wreck" of Abell 520 from Mahdavi et
al., and the "dark clusters" of Erben et al., filamentary structures might
contribute in a significant and likely complex fashion. On the other hand, our
predictions of a (conceptual) difference in the weak lensing signal could, in
principle, be used to falsify MOND/TeVeS and its variations. Comment: 11 pages, 6 figures, published version
Estimating average marginal effects in nonseparable structural systems
We provide nonparametric estimators of derivative-ratio-based average marginal effects of an endogenous cause, X, on a response of interest, Y, for a system of recursive structural equations. The system need not exhibit linearity, separability, or monotonicity. Our estimators are local indirect least squares estimators analogous to those of Heckman and Vytlacil (1999, 2001), who treat a latent index model involving a binary X. We treat the traditional case of an observed exogenous instrument (OXI) and the case where one observes error-laden proxies for an unobserved exogenous instrument (PXI). For PXI, we develop and apply new results for estimating densities and expectations conditional on mismeasured variables. For both OXI and PXI, we use infinite-order flat-top kernels to obtain uniformly convergent and asymptotically normal nonparametric estimators of instrument-conditioned effects, as well as root-n consistent and asymptotically normal estimators of average effects.
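The derivative-ratio idea in the OXI case can be sketched numerically. The code below is an illustrative simplification, not the paper's estimator: it replaces infinite-order flat-top kernels with a plain Gaussian kernel, uses crude finite differences for the derivatives, and runs on simulated data where the true marginal effect of X on Y is 2.

```python
# Sketch of the derivative-ratio effect with an observed instrument Z:
#   beta(z) = (d/dz) E[Y | Z = z]  /  (d/dz) E[X | Z = z].
# Both reduced-form regressions are estimated by Nadaraya-Watson smoothing.

import numpy as np

def nw_regression(z0, Z, W, h):
    """Nadaraya-Watson estimate of E[W | Z = z0] with Gaussian kernel."""
    k = np.exp(-0.5 * ((Z - z0) / h) ** 2)
    return np.sum(k * W) / np.sum(k)

def derivative_ratio_effect(z0, Z, X, Y, h=0.3, dz=0.25):
    """Finite-difference ratio of reduced-form derivatives at z0."""
    dY = nw_regression(z0 + dz, Z, Y, h) - nw_regression(z0 - dz, Z, Y, h)
    dX = nw_regression(z0 + dz, Z, X, h) - nw_regression(z0 - dz, Z, X, h)
    return dY / dX

rng = np.random.default_rng(0)
n = 50_000
Z = rng.normal(size=n)                # observed exogenous instrument
V = rng.normal(size=n)
U = 0.5 * V + rng.normal(size=n)      # U correlated with V: X is endogenous
X = Z + V                             # first structural equation
Y = 2.0 * X + U                       # second equation: true effect is 2

beta_hat = derivative_ratio_effect(0.0, Z, X, Y)
```

Because the same smoothing bias enters numerator and denominator, the ratio is fairly robust here; the paper's flat-top kernels are what deliver the uniform convergence and asymptotic normality claimed in the abstract.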
Distributed Constrained Recursive Nonlinear Least-Squares Estimation: Algorithms and Asymptotics
This paper focuses on the problem of recursive nonlinear least squares
parameter estimation in multi-agent networks, in which the individual agents
observe sequentially over time an independent and identically distributed
(i.i.d.) time-series consisting of a nonlinear function of the true but unknown
parameter corrupted by noise. A distributed recursive estimator of the
\emph{consensus} + \emph{innovations} type is
proposed, in which the agents update their parameter estimates at each
observation sampling epoch in a collaborative way by simultaneously processing
the latest locally sensed information~(\emph{innovations}) and the parameter
estimates from other agents~(\emph{consensus}) in the local neighborhood
conforming to a pre-specified inter-agent communication topology. Under rather
weak conditions on the connectivity of the inter-agent communication and a
\emph{global observability} criterion, it is shown that at every network agent,
the proposed algorithm leads to consistent parameter estimates. Furthermore,
under standard smoothness assumptions on the local observation functions, the
distributed estimator is shown to yield order-optimal convergence rates, i.e.,
as far as the order of pathwise convergence is concerned, the local parameter
estimates at each agent are as good as the optimal centralized nonlinear least
squares estimator which would require access to all the observations across all
the agents at all times. In order to benchmark the performance of the proposed
distributed estimator with that of the centralized nonlinear
least squares estimator, the asymptotic normality of the estimate sequence is
established and the asymptotic covariance of the distributed estimator is
evaluated. Finally, simulation results are presented which illustrate and
verify the analytical findings. Comment: 28 pages. Initial Submission: Feb. 2016, Revised: July 2016, Accepted: September 2016, To appear in IEEE Transactions on Signal and Information Processing over Networks: Special Issue on Inference and Learning over Networks
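A stripped-down instance of the consensus + innovations update can be sketched as follows. This is an illustrative toy, not the paper's algorithm: the local observation functions are linear (the paper treats general smooth nonlinear ones), the network is a fixed ring of four agents, and the gain sequences are chosen for simplicity. Agents 1 and 2 sense nothing (h_i = 0), so only the network as a whole is observable and the consensus term is essential.

```python
# Consensus + innovations recursive estimation of a scalar parameter theta.
# Each agent i observes y_i(t) = h_i * theta + noise and updates
#   x_i <- x_i - beta * sum_{j in N(i)} (x_i - x_j)          (consensus)
#              + alpha_t * h_i * (y_i - h_i * x_i)           (innovation)

import numpy as np

rng = np.random.default_rng(1)
theta = 3.0
h = np.array([1.0, 0.0, 0.0, 2.0])   # local sensing gains; two blind agents
# Ring topology: neighbors of agent i are i-1 and i+1 (mod 4).
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}

x = np.zeros(4)                       # parameter estimates at each agent
for t in range(5000):
    alpha = 1.0 / (t + 1)             # decaying innovation gain
    beta = 0.3                        # consensus gain
    y = h * theta + 0.5 * rng.normal(size=4)   # new local observations
    x_new = x.copy()
    for i in range(4):
        consensus = sum(x[i] - x[j] for j in neighbors[i])
        innovation = h[i] * (y[i] - h[i] * x[i])
        x_new[i] = x[i] - beta * consensus + alpha * innovation
    x = x_new
```

After the run, all four estimates cluster near theta, including those of the agents that never observe the parameter directly, which is the qualitative content of the consistency result in the abstract.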
A Theoretical Analysis of NDCG Type Ranking Measures
A central problem in ranking is to design a ranking measure for evaluation of
ranking functions. In this paper we study, from a theoretical perspective, the
widely used Normalized Discounted Cumulative Gain (NDCG)-type ranking measures.
Although there are extensive empirical studies of NDCG, little is known about
its theoretical properties. We first show that, whatever the ranking function
is, the standard NDCG which adopts a logarithmic discount, converges to 1 as
the number of items to rank goes to infinity. At first sight, this result
is very surprising. It seems to imply that NDCG cannot differentiate good and
bad ranking functions, contradicting the empirical success of NDCG in many
applications. In order to have a deeper understanding of ranking measures in
general, we propose a notion referred to as consistent distinguishability. This
notion captures the intuition that a ranking measure should have such a
property: For every pair of substantially different ranking functions, the
ranking measure can decide which one is better in a consistent manner on almost
all datasets. We show that NDCG with logarithmic discount has consistent
distinguishability although it converges to the same limit for all ranking
functions. We next characterize the set of all feasible discount functions for
NDCG according to the concept of consistent distinguishability. Specifically we
show that whether NDCG has consistent distinguishability depends on how fast
the discount decays, and 1/r is a critical point. We then turn to the cut-off
version of NDCG, i.e., NDCG@k. We analyze the distinguishability of NDCG@k for
various choices of k and the discount functions. Experimental results on real
Web search datasets agree well with the theory. Comment: COLT 2013
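The quantities discussed above are easy to make concrete. Below is a minimal sketch of DCG/NDCG with the standard logarithmic discount and the cut-off variant NDCG@k; the gain convention 2^g - 1 is one common choice, not the only one.

```python
# NDCG with logarithmic discount: item at 1-based rank r contributes
# (2^g - 1) / log2(r + 1), and the total is normalized by the DCG of
# the ideal (descending-relevance) ordering.

import math

def dcg(rels, k=None):
    """Discounted cumulative gain of a ranked list of relevance grades."""
    top = rels if k is None else rels[:k]
    # enumerate index i corresponds to rank r = i + 1, so discount log2(i + 2)
    return sum((2 ** g - 1) / math.log2(i + 2) for i, g in enumerate(top))

def ndcg(rels, k=None):
    """DCG normalized by the ideal ranking's DCG (in [0, 1])."""
    ideal = sorted(rels, reverse=True)
    return dcg(rels, k) / dcg(ideal, k)

rels = [3, 2, 0, 1, 2]   # graded relevance labels in ranked order
full = ndcg(rels)        # ~0.9686
at3 = ndcg(rels, k=3)    # ~0.8557
```

Note how close `full` is to 1 even for an imperfect ranking; the paper's convergence result says this crowding toward 1 becomes extreme as the number of items grows, which is exactly why distinguishability, rather than the limit value, is the right lens.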