Analyzing software data bindings in large-scale systems
One central feature of the structure of a software system is the coupling among its components (e.g., subsystems, modules) and the cohesion within them. The purpose of this study is to quantify ratios of coupling and cohesion and use them in the generation of hierarchical system descriptions. The ability of the hierarchical descriptions to localize errors by identifying error-prone system structure is evaluated using actual error data. Measures of data interaction, called data bindings, are used as the basis for calculating software coupling and cohesion. A 135,000 source line system from a production environment has been selected for empirical analysis. Software error data was collected from high-level system design through system test and from some field operation of the system. A set of five tools is applied to calculate the data bindings automatically, and cluster analysis is used to determine a hierarchical description of each of the system's 77 subsystems. An analysis of variance model is used to characterize subsystems and individual routines that had either many/few errors or high/low error correction effort.
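The binding computation the abstract describes can be illustrated with a toy sketch. One simplified reading of a data binding is a triple (p, x, q): routine p sets variable x and routine q uses it; counting such triples within and across subsystems yields cohesion and coupling ratios. The routine table and names below are hypothetical, not taken from the studied system:

```python
from itertools import product

# Hypothetical routine -> (variables set, variables used) table.
routines = {
    "init_buf": ({"buf", "len"}, set()),
    "read_pkt": ({"pkt"}, {"buf", "len"}),
    "log_err":  (set(), {"pkt"}),
}

def data_bindings(routines):
    """Yield (p, x, q): routine p sets variable x, routine q uses it."""
    for (p, (sets_p, _)), (q, (_, uses_q)) in product(routines.items(), repeat=2):
        if p == q:
            continue
        for x in sets_p & uses_q:
            yield (p, x, q)

bindings = list(data_bindings(routines))
# 3 bindings in this toy example: two from init_buf into read_pkt,
# one from read_pkt into log_err.
```

Clustering routines by binding counts, as in the paper, would then group strongly bound routines into the same subtree of the hierarchical description.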
An adaptive spherical view representation for navigation in changing environments
Real-world environments such as houses and offices change over time, meaning that a mobile robot’s map will become out of date. In previous work we introduced a method to update the reference views in a topological map so that a mobile robot could continue to localize itself in a changing environment using omni-directional vision. In this work we extend this long-term updating mechanism to incorporate a spherical metric representation of the observed visual features for each node in the topological map. Using multi-view geometry we are then able to estimate the heading of the robot, in order to enable navigation between the nodes of the map, and to simultaneously adapt the spherical view representation in response to environmental changes. The results demonstrate the persistent performance of the proposed system in a long-term experiment.
Long-term experiments with an adaptive spherical view representation for navigation in changing environments
Real-world environments such as houses and offices change over time, meaning that a mobile robot’s map will become out of date. In this work, we introduce a method to update the reference views in a hybrid metric-topological map so that a mobile robot can continue to localize itself in a changing environment. The updating mechanism, based on the multi-store model of human memory, incorporates a spherical metric representation of the observed visual features for each node in the map, which enables the robot to estimate its heading and navigate using multi-view geometry, as well as representing the local 3D geometry of the environment. A series of experiments demonstrate the persistent performance of the proposed system in real changing environments, including analysis of the long-term stability.
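As a rough illustration of the heading-estimation step, and not the papers' actual multi-view geometry, which works on the full spherical feature representation, one can assume a pure rotation between the stored view and the current view and take the circular mean of per-feature bearing differences:

```python
import math

def heading_change(ref_bearings, cur_bearings):
    """Estimate a rotation (radians) from matched feature bearings.

    Simplifying assumption: the robot rotated in place, so every matched
    feature's bearing shifts by the same angle plus noise. The circular
    mean of the differences is robust to angle wrap-around.
    """
    sin_sum = sum(math.sin(b - a) for a, b in zip(ref_bearings, cur_bearings))
    cos_sum = sum(math.cos(b - a) for a, b in zip(ref_bearings, cur_bearings))
    return math.atan2(sin_sum, cos_sum)

# Three matched features, each shifted by 0.5 rad:
theta = heading_change([0.0, 1.0, 2.0], [0.5, 1.5, 2.5])
```

In the actual system, translation between nodes also changes the bearings, which is why the full multi-view geometry is needed rather than this pure-rotation sketch.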
Fast and Accurate Camera Covariance Computation for Large 3D Reconstruction
Estimating the uncertainty of camera parameters computed in Structure from Motion (SfM) is an important tool for evaluating the quality of the reconstruction and for guiding the reconstruction process. Yet the quality of the estimated parameters of large reconstructions has rarely been evaluated, due to the computational challenges. We present a new algorithm which exploits the sparsity of the uncertainty propagation and speeds up the computation by about a factor of ten with respect to previous approaches. Our computation is accurate and does not use any approximations. We can compute uncertainties of thousands of cameras in tens of seconds on a standard PC. We also demonstrate that our approach can be effectively used for reconstructions of any size by applying it to smaller sub-reconstructions.
Comment: ECCV 201
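The structure such covariance computations exploit can be sketched on a toy Gauss-Newton problem: the parameter covariance is, up to the noise variance, the inverse of J^T J, and the camera block of that inverse can be obtained through a Schur complement that eliminates the much larger point block. This is a generic illustration of that structure, not the paper's algorithm; the Jacobian below is random and dense:

```python
import numpy as np

# Toy uncertainty propagation: Sigma ~ sigma^2 * (J^T J)^{-1}, with the
# Jacobian split into camera and point blocks as in bundle adjustment.
rng = np.random.default_rng(0)
J_c = rng.standard_normal((20, 4))   # residuals w.r.t. camera parameters
J_p = rng.standard_normal((20, 3))   # residuals w.r.t. point parameters

A = J_c.T @ J_c                      # camera block of J^T J
B = J_c.T @ J_p                      # camera-point cross block
D = J_p.T @ J_p                      # point block (block-diagonal in real SfM)
S = A - B @ np.linalg.solve(D, B.T)  # Schur complement onto the cameras

cov_cameras = np.linalg.inv(S)       # camera covariance (up to sigma^2)

# Sanity check: this equals the camera block of the full inverse.
full = np.linalg.inv(np.block([[A, B], [B.T, D]]))
assert np.allclose(cov_cameras, full[:4, :4])
```

In real reconstructions D is block-diagonal (one small block per 3D point), so eliminating it is cheap; inverting only the much smaller camera system is what keeps the computation tractable at scale.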
Systematic inference of the long-range dependence and heavy-tail distribution parameters of ARFIMA models
Long-Range Dependence (LRD) and heavy-tailed distributions are ubiquitous in natural and socio-economic data. Such data can be self-similar, whereby both LRD and heavy-tailed distributions contribute to the self-similarity as measured by the Hurst exponent. Some methods widely used in the physical sciences estimate these two parameters separately, which can lead to estimation bias. Those which do simultaneous estimation are based on frequentist methods such as Whittle’s approximate maximum likelihood estimator. Here we present a new and systematic Bayesian framework for the simultaneous inference of the LRD and heavy-tailed distribution parameters of a parametric ARFIMA model with non-Gaussian innovations. As innovations we use the α-stable and t-distributions, which have power law tails. Our algorithm also provides parameter uncertainty estimates. We test our algorithm using synthetic data, and also data from the Geostationary Operational Environmental Satellite system (GOES) solar X-ray time series. These tests show that our algorithm is able to accurately and robustly estimate the LRD and heavy-tailed distribution parameters.
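For intuition, the kind of series the inference targets, an ARFIMA(0, d, 0) process with heavy-tailed innovations, can be simulated from its truncated MA(∞) representation, whose coefficients follow the recursion psi_j = psi_{j-1} * (j - 1 + d) / j. The function below is an illustrative sketch, not the authors' code:

```python
import numpy as np

def arfima_0d0(n, d, df, rng):
    """Simulate n steps of ARFIMA(0, d, 0) with Student-t innovations.

    The t-distribution with df degrees of freedom has power-law tails,
    one of the two innovation families used in the abstract (the other
    being alpha-stable). The memory parameter d controls the LRD; for
    light tails the Hurst exponent would be H = d + 1/2, while heavy
    tails also contribute to the measured self-similarity.
    """
    eps = rng.standard_t(df, size=n)
    psi = np.empty(n)               # truncated MA(inf) coefficients of (1-B)^{-d}
    psi[0] = 1.0
    for j in range(1, n):
        psi[j] = psi[j - 1] * (j - 1 + d) / j
    return np.convolve(eps, psi)[:n]

rng = np.random.default_rng(1)
x = arfima_0d0(1000, d=0.3, df=3, rng=rng)
```

A Bayesian scheme like the one described would then place priors on d and the tail parameter (e.g. the degrees of freedom) and sample their joint posterior given such a series.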
Simple, compact and robust approximate string dictionary
This paper is concerned with practical implementations of approximate string dictionaries that allow edit errors. In this problem, we have as input a dictionary of strings of total length n over an alphabet of size σ. Given a bound k and a pattern P of length m, a query has to return all the strings of the dictionary which are at edit distance at most k from P, where the edit distance between two strings s and t is defined as the minimum-cost sequence of edit operations that transforms s into t. The cost of a sequence of operations is defined as the sum of the costs of the operations involved in the sequence. In this paper, we assume that each of these operations has unit cost and consider only three operations: deletion of one character, insertion of one character, and substitution of one character by another. We present a practical implementation of the data structure we recently proposed, which works only for one error, and extend the scheme to more than one error. Our implementation has many desirable properties: it has a very fast and space-efficient building algorithm, the dictionary data structure is compact, and it has fast and robust query time. Finally, our data structure is simple to implement as it only uses basic techniques from the literature, mainly hashing (linear probing and hash signatures) and succinct data structures (bitvectors supporting rank queries).
Comment: Accepted to a journal (19 pages, 2 figures)
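The query semantics defined above can be made concrete with a brute-force baseline; the paper's structure answers such queries far more efficiently, so this is only a reference implementation of the problem statement:

```python
def edit_distance(s, t):
    """Unit-cost Levenshtein distance: deletion, insertion, substitution."""
    prev = list(range(len(t) + 1))          # distances from "" to prefixes of t
    for i, cs in enumerate(s, 1):
        cur = [i]                           # distance from s[:i] to ""
        for j, ct in enumerate(t, 1):
            cur.append(min(prev[j] + 1,                  # delete cs
                           cur[j - 1] + 1,               # insert ct
                           prev[j - 1] + (cs != ct)))    # substitute / match
        prev = cur
    return prev[-1]

def query(dictionary, pattern, k):
    """Return all dictionary strings at edit distance at most k from pattern."""
    return [w for w in dictionary if edit_distance(w, pattern) <= k]

# query(["cat", "cart", "dog"], "car", 1) returns ["cat", "cart"]
```

This scans the whole dictionary per query; the point of the hashing and succinct-bitvector machinery in the paper is to avoid exactly that linear scan.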