Application of Statistical Physics to Politics
The concepts and techniques of the real-space renormalization group are
applied to study majority-rule voting in hierarchical structures. It is found
that democratic voting can lead to totalitarianism by keeping a small minority
in power. The conditions of this paradox are analyzed and singled out: majority
rule produces critical thresholds for absolute power, whose values range from
50% up to at least 77%. The underlying mechanism could explain both the former
apparent eternity of communist leaderships and their sudden collapse.
Comment: Proceedings of the NATO Advanced Research Workshop, Budapest (May
1999), Eds: A. Gadomski et al.
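The hierarchical mechanism described above can be sketched as a simple iterated map. This is an illustrative reconstruction, not the paper's code: the group size of 4 and the tie-breaking rule favoring the incumbent are assumptions chosen because they reproduce the roughly 77% threshold mentioned in the abstract.

```python
from math import comb

def renormalize(p, group_size=4):
    """One voting level: the probability that a group of `group_size` members,
    each supporting challenger A independently with probability p, elects an
    A-representative. A needs a strict majority; for even group sizes a tie
    is broken in favor of the incumbent B, which is what pushes the critical
    threshold above 50%."""
    needed = group_size // 2 + 1  # strict majority required for A
    return sum(comb(group_size, k) * p**k * (1 - p)**(group_size - k)
               for k in range(needed, group_size + 1))

def winner_after_levels(p0, levels=10, group_size=4):
    """Apply the voting map through `levels` hierarchical levels and report
    which side holds power at the top."""
    p = p0
    for _ in range(levels):
        p = renormalize(p, group_size)
    return "A" if p > 0.5 else "B"

# With size-4 groups and ties favoring the incumbent, the unstable fixed
# point of the map is (1 + sqrt(13)) / 6 ~ 0.7676: even 70% popular support
# for A is renormalized away level by level.
print(winner_after_levels(0.70))  # B stays in power
print(winner_after_levels(0.80))  # A finally wins
```

Iterating the map shows the paradox directly: below the threshold, support for the challenger shrinks at every level of the hierarchy, so a minority retains absolute power under perfectly democratic local votes.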
Local Stereo Matching Using Adaptive Local Segmentation
We propose a new dense local stereo matching framework for gray-level images based on adaptive local segmentation with a dynamic threshold. We define a new validity domain of the fronto-parallel assumption based on the local intensity variations in the 4-neighborhood of the matching pixel. The preprocessing step smoothes low-textured areas and sharpens texture edges, whereas the postprocessing step detects and recovers occluded and unreliable disparities. The algorithm achieves high stereo reconstruction quality in regions with uniform intensities as well as in textured regions. It is robust against local radiometric differences and successfully recovers disparities around object edges, disparities of thin objects, and disparities in occluded regions. Moreover, it intrinsically prevents errors caused by occlusion from propagating into non-occluded regions, and it has only a small number of parameters. The performance of our algorithm is evaluated on the Middlebury stereo test bed. It ranks highly on the evaluation list, outperforming many local and global stereo algorithms that use color images; among the local algorithms relying on the fronto-parallel assumption, ours is the best ranked. We also demonstrate that the algorithm works well on practical examples, such as disparity estimation for a tomato seedling and 3D reconstruction of a face.
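The fronto-parallel assumption at the core of this family of methods can be illustrated with a minimal baseline: a sum-of-absolute-differences (SAD) matcher over a fixed window, here reduced to a single scanline. This is a sketch of the baseline only; the adaptive segmentation, dynamic threshold, and pre/postprocessing steps of the paper are not reproduced, and all names and parameters are illustrative.

```python
def sad_disparity(left, right, window=1, max_disp=4):
    """For each pixel x in `left`, pick the disparity d minimizing the sum of
    absolute differences against `right` shifted by d, assuming the whole
    window lies at one constant disparity (the fronto-parallel assumption)."""
    w = len(left)
    disparities = []
    for x in range(w):
        best_d, best_cost = 0, float("inf")
        for d in range(max_disp + 1):
            cost = 0
            for dx in range(-window, window + 1):
                xl, xr = x + dx, x + dx - d
                if 0 <= xl < w and 0 <= xr < len(right):
                    cost += abs(left[xl] - right[xr])
                else:
                    cost += 255  # penalize samples falling outside the image
            if cost < best_cost:
                best_d, best_cost = d, cost
        disparities.append(best_d)
    return disparities

# A bright edge at x=3 in the left row appears at x=1 in the right row,
# i.e. a true disparity of 2 at the edge pixel.
left  = [0, 0, 0, 255, 0, 0, 0, 0]
right = [0, 255, 0, 0, 0, 0, 0, 0]
print(sad_disparity(left, right)[3])  # -> 2
```

In textureless regions every disparity gives a near-identical SAD cost, which is exactly the ambiguity the paper's adaptive local segmentation and validity domain are designed to resolve.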
Keep Ballots Secret: On the Futility of Social Learning in Decision Making by Voting
We show that social learning is not useful in a model of team binary decision
making by voting, where each vote carries equal weight. Specifically, we
consider Bayesian binary hypothesis testing where agents have any
conditionally-independent observation distribution and their local decisions
are fused by any L-out-of-N fusion rule. The agents make local decisions
sequentially, with each allowed to use its own private signal and all precedent
local decisions. Though social learning generally occurs in that precedent
local decisions affect an agent's belief, optimal team performance is obtained
when all precedent local decisions are ignored. Thus, social learning is
futile, and secret ballots are optimal. This contrasts with typical studies of
social learning because we include a fusion center rather than concentrating on
the performance of the latest-acting agents.
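The intuition behind the result can be sketched numerically. With equal-weight votes and a fixed L-out-of-N fusion rule over conditionally independent observations, team performance is determined entirely by each agent's private-signal error rates, so there is nothing for precedent votes to add. This is an illustrative calculation, not the paper's derivation; identical agents and the specific rates are assumptions.

```python
from math import comb

def team_error(p_md, p_fa, N, L, prior1=0.5):
    """Team error probability under an L-out-of-N fusion rule: declare H1
    iff at least L of N conditionally independent, identical agents vote 1.
    p_md and p_fa are each agent's local miss and false-alarm rates."""
    def at_least(L, N, p):
        # P(at least L of N independent votes are 1 when each is 1 w.p. p)
        return sum(comb(N, k) * p**k * (1 - p)**(N - k)
                   for k in range(L, N + 1))
    miss = 1 - at_least(L, N, 1 - p_md)   # team fails to detect H1
    false_alarm = at_least(L, N, p_fa)    # team fires under H0
    return prior1 * miss + (1 - prior1) * false_alarm

# Three identical agents with 20% local error, majority (2-out-of-3) fusion:
print(team_error(0.2, 0.2, N=3, L=2))  # -> 0.104, better than any one agent
```

The team error depends only on the local decision rules and the fusion threshold; an agent that conditioned on earlier votes could only distort its own operating point, which is why secret ballots lose nothing.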
Evolving Large-Scale Data Stream Analytics based on Scalable PANFIS
Many distributed machine learning frameworks have recently been built to
speed up large-scale data learning. However, most of the distributed machine
learning used in these frameworks still relies on offline algorithms, which
cannot cope with data streams. In fact, large-scale data are mostly generated
by non-stationary data streams whose patterns evolve over time. To address
this problem, we propose a novel Evolving Large-scale Data Stream Analytics
framework based on a Scalable Parsimonious Network based on Fuzzy Inference
System (Scalable PANFIS), in which the PANFIS evolving algorithm is
distributed over worker nodes in the cloud to learn large-scale data streams.
The Scalable PANFIS framework incorporates an active learning (AL) strategy
and two model fusion methods. AL accelerates the distributed learning process
that generates an initial evolving large-scale data stream model (initial
model), whereas the two model fusion methods aggregate the initial model to
generate the final model. The final model represents the up-to-date
large-scale data knowledge, which can be used to infer future data. Extensive
experiments validate the framework by measuring the accuracy and running time
of four combinations of Scalable PANFIS and other built-in Spark-based
algorithms. The results indicate that Scalable PANFIS with AL trains almost
twice as fast as Scalable PANFIS without AL. They also show that the
rule-merging and voting mechanisms yield similar accuracy among the Scalable
PANFIS variants and are generally more accurate than the Spark-based
algorithms. In terms of running time, Scalable PANFIS training outperforms all
Spark-based algorithms when classifying numerous benchmark datasets.
Comment: 20 pages, 5 figures
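Of the two fusion schemes mentioned above, the voting mechanism is the simpler one and can be sketched generically: each per-partition model classifies a sample and the majority label wins. This is a toy illustration only; the PANFIS rule-merging scheme and the actual Spark-distributed models are not reproduced, and the stand-in models below are invented for the example.

```python
from collections import Counter

def vote_fuse(models, x):
    """Voting-based model fusion: every worker-node model classifies x,
    and the label with the most votes is the ensemble's prediction."""
    votes = Counter(model(x) for model in models)
    return votes.most_common(1)[0][0]

# Toy stand-ins for models trained on different data partitions, each a
# slightly different threshold classifier:
models = [lambda x: int(x > 3), lambda x: int(x > 5), lambda x: int(x > 4)]
print(vote_fuse(models, 6))  # all three vote 1 -> 1
print(vote_fuse(models, 4))  # votes are 1, 0, 0 -> 0
```

Voting needs no access to the models' internals, which is why it composes easily with heterogeneous per-partition learners, whereas rule merging produces a single compact model at the cost of scheme-specific aggregation logic.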
Beam Loss Monitors at LHC
One of the main functions of the LHC beam loss measurement system is the
protection of equipment against damage caused by impacting particles, which
create secondary showers that dissipate their energy in matter. Reliability
requirements are scaled according to the acceptable consequences and the
frequency of particle impact events on equipment. Increasing reliability often
leads to more complex systems, and the downside of complexity is a reduction
in availability; therefore, an optimum has to be found between these
conflicting requirements. A detailed review of selected concepts and solutions
for the LHC system is given, showing approaches used in various parts of the
system, from the sensors, signal processing, and software implementations to
the requirements for operation and documentation.
Comment: 16 pages, contribution to the 2014 Joint International Accelerator
School: Beam Loss and Accelerator Protection, Newport Beach, CA, USA, 5-14
Nov 2014