Economic Small-World Behavior in Weighted Networks
The small-world phenomenon has already been the subject of a huge variety of
papers, showing its appearance in many different systems. However, significant
gaps remain to be filled, as the commonly adopted mathematical formulation
suffers from limitations that make it unsuitable as a general tool for
analyzing real networks, rather than just mathematical (topological)
abstractions. In this paper we show where the major problems arise, and why a
reformulation of the small-world concept is therefore needed. Together with an
analysis of the variables involved,
we then propose a new theory of small-world networks based on two leading
concepts: efficiency and cost. Efficiency measures how well information
propagates over the network, and cost measures how expensive it is to build a
network. The combination of these factors leads us to introduce the concept of
"economic small worlds", which formalizes the idea of networks that are
"cheap" to build and nevertheless efficient in propagating information, at
both the global and the local scale. This new concept is shown to overcome the
limitations of the commonly adopted formulation and to provide an adequate
tool for quantitatively analyzing the behaviour of complex networks in the
real world. Various complex systems are analyzed, ranging from neural networks
to social, communication, and transportation networks. In each case, economic
small worlds are found. Moreover, using the economic small-world framework,
the construction principles of these networks can be quantitatively analyzed
and compared, giving good insight into how efficiency and economy principles
combine to shape these systems.
Comment: 17 pages, 10 figures, 4 tables
Fully Automatic Expression-Invariant Face Correspondence
We consider the problem of computing accurate point-to-point correspondences
among a set of human face scans with varying expressions. Our fully automatic
approach does not require any manually placed markers on the scan. Instead, the
approach learns the locations of a set of landmarks present in a database and
uses this knowledge to automatically predict the locations of these landmarks
on a newly available scan. The predicted landmarks are then used to compute
point-to-point correspondences between a template model and the newly available
scan. To accurately fit the expression of the template to the expression of the
scan, we use as template a blendshape model. Our algorithm was tested on a
database of human faces of different ethnic groups with strongly varying
expressions. Experimental results show that the obtained point-to-point
correspondence is both highly accurate and consistent for most of the tested 3D
face models.
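As a hedged illustration of the landmark-driven fitting step, the sketch below rigidly aligns a template's landmarks to the corresponding predicted landmarks on a scan using the standard Kabsch (Procrustes) solution. This is only a plausible initial-alignment step under our assumptions; the `rigid_align` name is ours, and the blendshape-based expression fitting described in the abstract would refine such an alignment rather than stop here.

```python
import numpy as np

def rigid_align(src, dst):
    """Return rotation R and translation t minimizing
    sum_i ||R @ src[i] + t - dst[i]||^2 (Kabsch algorithm).

    src, dst: (k, 3) arrays of corresponding landmark positions.
    """
    src_c = src - src.mean(axis=0)          # center both landmark sets
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

With exact correspondences the recovered transform is exact; with noisy predicted landmarks it is the least-squares best rigid fit, a common starting point before non-rigid (e.g. blendshape) refinement.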
Vision-Based Localization Algorithm Based on Landmark Matching, Triangulation, Reconstruction, and Comparison
Many generic position-estimation algorithms are vulnerable to the ambiguity introduced by non-unique landmarks. In addition, the available high-dimensional image data is not fully used when these techniques are extended to vision-based localization. This paper presents the landmark matching, triangulation, reconstruction, and comparison (LTRC) global localization algorithm, which is reasonably immune to ambiguous landmark matches. It extracts natural landmarks for the (rough) matching stage before generating the list of possible position estimates through triangulation. Reconstruction and comparison then rank the possible estimates. The LTRC algorithm has been implemented in an interpreted language on a robot equipped with a panoramic vision system. Empirical data shows a remarkable improvement in accuracy compared with the established random sample consensus method. LTRC is also robust against inaccurate map data.
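The triangulation stage can be sketched for the simplest two-landmark case. Assuming the robot's heading is known (so bearings are absolute angles) and the landmarks' map positions are known, the robot lies at the intersection of the rays back-projected from each landmark. The function name `triangulate` and this particular formulation are illustrative assumptions, not the paper's implementation, which additionally handles ambiguous matches via reconstruction and comparison.

```python
import numpy as np

def triangulate(L1, a1, L2, a2):
    """Position from absolute bearings a1, a2 (radians) to two landmarks
    at known map positions L1, L2.

    The robot p satisfies p = L_i - s_i * d_i for each landmark, where
    d_i is the unit vector of bearing a_i; intersect the two rays.
    """
    L1 = np.asarray(L1, dtype=float)
    L2 = np.asarray(L2, dtype=float)
    d1 = np.array([np.cos(a1), np.sin(a1)])
    d2 = np.array([np.cos(a2), np.sin(a2)])
    # Solve L1 - s1*d1 == L2 - s2*d2, i.e. s1*d1 - s2*d2 == L1 - L2,
    # as a 2x2 linear system in (s1, s2).
    A = np.column_stack([d1, -d2])
    s = np.linalg.solve(A, L1 - L2)
    return L1 - s[0] * d1
```

The system is singular when the two bearings are parallel (landmarks collinear with the robot), which is one reason a practical pipeline generates many candidate estimates from landmark pairs and then ranks them, as LTRC's reconstruction-and-comparison stage does.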