386 research outputs found
Graphical model-based approaches to target tracking in sensor networks: an overview of some recent work and challenges
Sensor networks have provided a technology base for distributed target tracking applications, among others. Conventional centralized approaches to the problem lack scalability in such a scenario, where a large number of sensors provide measurements simultaneously in a possibly non-collaborating environment. Research efforts have therefore focused on scalable, robust, and distributed algorithms for the inference tasks related to target tracking, i.e., localization, data association, and track maintenance. Graphical models provide a rigorous tool for the development of such algorithms by modeling the information structure of a given task and providing distributed solutions through message passing algorithms. However, the limited communication capabilities and energy resources of sensor networks pose the additional difficulty of considering the tradeoff between the communication cost and the accuracy of the result. Moreover, the network structure and the information structure are different aspects of the problem, and a mapping between the physical entities and the information structure is needed. In this paper we discuss available formalisms based on graphical models for target tracking in sensor networks with a focus on the aforementioned issues. We point out additional constraints that must be imposed in order to achieve further insight and more effective solutions.
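The message passing machinery this survey refers to can be illustrated with a minimal sketch, independent of any particular tracking model: sum-product belief propagation on a three-node chain with invented potentials, checked against brute-force enumeration (BP is exact on trees, so the two must agree).

```python
import numpy as np

# Sum-product message passing on a 3-node chain x1 - x2 - x3 with binary
# variables; all potentials are invented for illustration.
phi = [np.array([0.6, 0.4]), np.array([0.5, 0.5]), np.array([0.3, 0.7])]
psi = np.array([[1.0, 0.5],
                [0.5, 1.0]])  # shared pairwise potential on both edges

m12 = psi.T @ phi[0]     # message x1 -> x2: sum_x1 phi1(x1) psi(x1, x2)
m32 = psi @ phi[2]       # message x3 -> x2: sum_x3 psi(x2, x3) phi3(x3)
b2 = phi[1] * m12 * m32  # belief at x2 = unary potential times incoming messages
b2 /= b2.sum()

# Brute-force check over all 8 joint configurations.
joint = np.zeros(2)
for x1 in range(2):
    for x2 in range(2):
        for x3 in range(2):
            joint[x2] += (phi[0][x1] * phi[1][x2] * phi[2][x3]
                          * psi[x1, x2] * psi[x2, x3])
joint /= joint.sum()

print(np.allclose(b2, joint))
```

In a sensor network setting, each message would be computed locally and transmitted to a neighbour, which is what makes the scheme distributed.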
Complexity of Discrete Energy Minimization Problems
Discrete energy minimization is widely used in computer vision and machine
learning for problems such as MAP inference in graphical models. The problem,
in general, is notoriously intractable, and finding the global optimal solution
is known to be NP-hard. However, is it possible to approximate this problem
with a reasonable ratio bound on the solution quality in polynomial time? We
show in this paper that the answer is no. Specifically, we show that general
energy minimization, even in the 2-label pairwise case, and planar energy
minimization with three or more labels are exp-APX-complete. This finding rules
out the existence of any approximation algorithm with a sub-exponential
approximation ratio in the input size for these two problems, including
constant factor approximations. Moreover, we collect and review the
computational complexity of several subclass problems and arrange them on a
complexity scale consisting of three major complexity classes -- PO, APX, and
exp-APX, corresponding to problems that are solvable, approximable, and
inapproximable in polynomial time. Problems in the first two complexity classes
can serve as alternative tractable formulations to the inapproximable ones.
This paper can help vision researchers select an appropriate model for an
application or guide them in designing new algorithms.
Comment: accepted to ECCV'16
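The pairwise energy form the paper studies can be made concrete with a toy sketch (all values invented; exhaustive search is used only because the instance is tiny, which is exactly what the hardness results rule out at scale):

```python
import itertools
import numpy as np

# Toy pairwise discrete energy E(x) = sum_i theta_i(x_i) + sum_(i,j) theta_ij(x_i, x_j)
# on 3 binary variables forming a chain; all values are invented.
unary = np.array([[0.0, 1.0],    # theta_1
                  [0.4, 0.6],    # theta_2
                  [1.0, 0.0]])   # theta_3
edges = [(0, 1), (1, 2)]
penalty = np.array([[0.0, 0.3],
                    [0.3, 0.0]])  # Potts-style disagreement cost

def energy(x):
    e = sum(unary[i, xi] for i, xi in enumerate(x))
    return e + sum(penalty[x[i], x[j]] for i, j in edges)

# Exhaustive MAP over the 2^3 labelings; feasible only because the instance
# is tiny -- the general 2-label pairwise problem is exp-APX-complete.
best = min(itertools.product(range(2), repeat=3), key=energy)
print(best, float(energy(best)))
```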
Exact Inference Techniques for the Analysis of Bayesian Attack Graphs
Attack graphs are a powerful tool for security risk assessment by analysing
network vulnerabilities and the paths attackers can use to compromise network
resources. The uncertainty about the attacker's behaviour makes Bayesian
networks suitable to model attack graphs to perform static and dynamic
analysis. Previous approaches have focused on the formalization of attack
graphs into a Bayesian model rather than proposing mechanisms for their
analysis. In this paper we propose to use efficient algorithms to perform
exact inference in Bayesian attack graphs, enabling static and dynamic
network risk assessment. To support the validity of our approach we have performed an
extensive experimental evaluation on synthetic Bayesian attack graphs with
different topologies, showing the computational advantages in terms of time and
memory use of the proposed techniques when compared to existing approaches.
Comment: 14 pages, 15 figures
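As a rough illustration of what exact inference on a Bayesian attack graph computes (this is a hypothetical three-node chain with invented probabilities, not the paper's models): node A is an initial foothold, B is lateral movement enabled by A, and C is asset compromise enabled by B; summing out the hidden variables yields the exact marginal risk.

```python
import itertools

# Hypothetical three-step Bayesian attack graph: A (initial foothold)
# enables B (lateral movement), which enables C (asset compromise).
# All probabilities are invented for illustration.
p_a = {1: 0.3, 0: 0.7}                                      # P(A)
p_b_given_a = {1: {1: 0.8, 0: 0.2}, 0: {1: 0.1, 0: 0.9}}    # P(B | A)
p_c_given_b = {1: {1: 0.9, 0: 0.1}, 0: {1: 0.05, 0: 0.95}}  # P(C | B)

# Exact marginal P(C = 1): sum out A and B. On a chain this is plain
# variable elimination; larger graphs need junction-tree machinery
# to keep the computation exact without full enumeration.
p_c1 = sum(p_a[a] * p_b_given_a[a][b] * p_c_given_b[b][1]
           for a, b in itertools.product((0, 1), repeat=2))
print(round(p_c1, 4))
```

Dynamic analysis in this setting amounts to recomputing such marginals after conditioning on observed evidence (e.g. A known to be compromised).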
Segmentation of skin lesions in 2D and 3D ultrasound images using a spatially coherent generalized Rayleigh mixture model
This paper addresses the problem of jointly estimating the statistical distribution and segmenting lesions in multiple-tissue high-frequency skin ultrasound images. The distribution of multiple-tissue images is modeled as a spatially coherent finite mixture of heavy-tailed Rayleigh distributions. The spatial coherence inherent to biological tissues is modeled by enforcing local dependence between the mixture components. An original Bayesian algorithm combined with a Markov chain Monte Carlo method is then proposed to jointly estimate the mixture parameters and a label vector associating each voxel with a tissue. More precisely, a hybrid Metropolis-within-Gibbs sampler is used to draw samples that are asymptotically distributed according to the posterior distribution of the Bayesian model. The Bayesian estimators of the model parameters are then computed from the generated samples. Simulations are conducted on synthetic data to illustrate the performance of the proposed estimation strategy. The method is then successfully applied to the segmentation of in vivo skin tumors in high-frequency 2D and 3D ultrasound images.
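A generic Metropolis-within-Gibbs update (not the paper's mixture model or priors) can be sketched as follows: each coordinate of the state is updated in turn with a random-walk Metropolis step while the other is held fixed, here on a toy two-dimensional target.

```python
import math
import random

random.seed(0)

# Generic Metropolis-within-Gibbs sketch on a toy 2-D unnormalized target.
def log_target(v):
    x, y = v
    return -0.5 * (x * x + y * y - x * y)  # correlated Gaussian, mean (0, 0)

v = [0.0, 0.0]
samples = []
for it in range(20000):
    for i in range(2):  # one Metropolis step per coordinate = one Gibbs sweep
        prop = v.copy()
        prop[i] += random.gauss(0.0, 1.0)
        # Accept with probability min(1, target(prop) / target(v)).
        if math.log(random.random()) < log_target(prop) - log_target(v):
            v = prop
    if it >= 2000:  # discard burn-in
        samples.append(v[0])

mean_x = sum(samples) / len(samples)
print(round(mean_x, 3))
```

The estimator of the first coordinate's mean should land near 0, the target's true mean; the paper's sampler follows the same accept/reject pattern but alternates over mixture parameters and the label vector.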
Cycle-based Cluster Variational Method for Direct and Inverse Inference
We elaborate on the idea that loop corrections to belief propagation could be
dealt with in a systematic way on pairwise Markov random fields, by using the
elements of a cycle basis to define regions in a generalized belief propagation
setting. The region graph is specified in such a way as to avoid dual loops as
much as possible, by discarding redundant Lagrange multipliers, in order to
facilitate convergence, while avoiding instabilities associated with minimal
factor graph construction. We end up with a two-level algorithm, where a belief
propagation algorithm is run alternately at the level of each cycle and at
the inter-region level. The inverse problem of finding the couplings of a
Markov random field from empirical covariances can be addressed region-wise. It
turns out that this can be done efficiently in particular in the Ising context,
where fixed point equations can be derived along with a one-parameter log
likelihood function to minimize. Numerical experiments confirm the
effectiveness of these considerations both for the direct and inverse MRF
inference.
Comment: 47 pages, 16 figures
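The first ingredient of the method, a cycle basis of the pairwise graph, is easy to sketch: build a spanning tree, and each non-tree edge closes exactly one fundamental basis cycle. The graph below (a square with one diagonal) is illustrative only.

```python
from collections import deque

# Fundamental cycle basis of a small pairwise-MRF graph: each non-tree
# edge, together with the tree path between its endpoints, is one cycle.
nodes = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]  # square with a diagonal

adj = {u: [] for u in nodes}
for u, v in edges:
    adj[u].append(v)
    adj[v].append(u)

# BFS spanning tree rooted at node 0, recording each node's parent.
parent = {0: None}
tree = set()
queue = deque([0])
while queue:
    u = queue.popleft()
    for v in adj[u]:
        if v not in parent:
            parent[v] = u
            tree.add(frozenset((u, v)))
            queue.append(v)

def path_to_root(u):
    path = [u]
    while parent[path[-1]] is not None:
        path.append(parent[path[-1]])
    return path

cycles = []
for u, v in edges:
    if frozenset((u, v)) not in tree:
        pu, pv = path_to_root(u), path_to_root(v)
        common = next(a for a in pu if a in pv)  # lowest common ancestor
        cycle = pu[:pu.index(common) + 1] + pv[:pv.index(common)][::-1]
        cycles.append(cycle)

print(cycles)  # |E| - |V| + 1 = 2 independent cycles
```

In the paper's scheme, each such basis cycle would define one region of the generalized belief propagation region graph.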
Extraction of arterial and venous trees from disconnected vessel segments in fundus images
The accurate automated extraction of arterial and venous (AV) trees in fundus images
subserves investigation into the correlation of global features of the retinal vasculature
with retinal abnormalities. The accurate extraction of AV trees also provides
the opportunity to analyse the physiology and hemodynamics of blood flow in retinal
vessel trees. A number of common diseases, including diabetic retinopathy and
cardiovascular and cerebrovascular diseases, directly affect the morphology of the retinal
vasculature. Early detection of these pathologies may prevent vision loss and reduce
the risk of other life-threatening diseases.
Automated extraction of AV trees requires complete segmentation and accurate
classification of retinal vessels. Unfortunately, the available segmentation techniques
are susceptible to a number of complications including vessel contrast, fuzzy edges,
variable image quality, media opacities, and vessel overlaps. Due to these sources of
errors, the available segmentation techniques produce partially segmented vascular
networks. Thus, extracting AV trees by accurately connecting and classifying the
disconnected segments is extremely complex.
This thesis provides a novel graph-based technique for accurate extraction of AV
trees from a network of disconnected and unclassified vessel segments in fundus
images. The proposed technique performs three major tasks: junction identification,
local configuration, and global configuration.
A probabilistic approach is adopted that rigorously identifies junctions by examining
the mutual associations of segment ends. These associations are determined by
dynamically specifying regions at both ends of all segments. A supervised Naïve
Bayes inference model is developed that estimates the probability of each possible
configuration at a junction. The system enumerates all possible configurations and
estimates the posterior probability of each configuration. The likelihood function estimates
the conditional probability of the configuration using the statistical parameters
of distribution of colour and geometrical features of joints. The parameters of feature
distributions and priors of configuration are obtained through supervised learning
phases. A second Naïve Bayes classifier estimates class probabilities of each vessel
segment utilizing colour and spatial properties of segments.
The global configuration works by translating the segment network into an ST-graph
(a specialized form of dependency graph) representing the segments and their
possible connective associations. The unary and pairwise potentials for the ST-graph
are estimated using the class and configuration probabilities obtained earlier. This
translates the classification and configuration problems into a general binary
graph-labelling problem. The ST-graph is interpreted as a flow network for energy
minimization; a minimum ST-graph cut is obtained using the Ford-Fulkerson algorithm,
from which the estimated AV trees are extracted.
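The min-cut step can be sketched generically (this is a toy network with invented capacities, not the thesis's actual ST-graph construction): Ford-Fulkerson with BFS augmenting paths, i.e. Edmonds-Karp, computes the maximum flow, and the nodes still reachable from the source in the residual graph give one side of the minimum cut.

```python
from collections import deque

# Minimal Ford-Fulkerson with BFS augmenting paths on a toy network.
# In the binary-labelling formulation, the two sides of the minimum
# ST-cut correspond to the two labels.
cap = {
    ('s', 'a'): 3, ('s', 'b'): 2,
    ('a', 'b'): 1, ('a', 't'): 2,
    ('b', 't'): 3,
}
nodes = {'s', 'a', 'b', 't'}
flow = {e: 0 for e in cap}

def residual(u, v):
    # Remaining forward capacity plus cancellable reverse flow.
    return cap.get((u, v), 0) - flow.get((u, v), 0) + flow.get((v, u), 0)

def bfs_path(src, dst):
    # Shortest augmenting path in the residual graph, or None.
    prev = {src: None}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in nodes:
            if v not in prev and residual(u, v) > 0:
                prev[v] = u
                if v == dst:
                    path = [v]
                    while prev[path[-1]] is not None:
                        path.append(prev[path[-1]])
                    return path[::-1]
                queue.append(v)
    return None

max_flow = 0
while (path := bfs_path('s', 't')) is not None:
    bottleneck = min(residual(u, v) for u, v in zip(path, path[1:]))
    for u, v in zip(path, path[1:]):
        cancel = min(bottleneck, flow.get((v, u), 0))  # cancel reverse flow first
        if cancel:
            flow[(v, u)] -= cancel
        if bottleneck - cancel:
            flow[(u, v)] = flow.get((u, v), 0) + bottleneck - cancel
    max_flow += bottleneck

# Minimum-cut side: nodes still reachable from the source in the residual graph.
reach, stack = {'s'}, ['s']
while stack:
    u = stack.pop()
    for v in nodes:
        if v not in reach and residual(u, v) > 0:
            reach.add(v)
            stack.append(v)

print(max_flow, sorted(reach))
```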
The performance is evaluated by implementing the system on test images of the
DRIVE dataset and comparing the obtained results with the ground truth data. The
ground truth data is obtained by establishing a new dataset for DRIVE images with
manually classified vessels. The system outperformed benchmark methods and
produced excellent results.
Discovering the core semantics of event from social media
© 2015 Elsevier B.V. As social media platforms such as Twitter and Sina Weibo open up, large volumes of short texts are flooding the Web. This ocean of short texts dilutes the limited core semantics of an event in cyberspace with redundancy, noise, and irrelevant content, making it difficult to discover the core semantics of the event. The major challenges include how to efficiently learn the semantic association distribution from small-scale association relations and how to maximize the coverage of the semantic association distribution with the minimum number of redundancy-free short texts. To address these issues, we explore a Markov random field based method for discovering the core semantics of an event. The method performs collaborative semantic computation to learn the association relation distribution and information-gradient computation to discover k redundancy-free texts as the core semantics of the event. We evaluate our method against two state-of-the-art methods on the TAC dataset and a microblog dataset. The results show that our method outperforms the others in extracting core semantics accurately and efficiently. The proposed method can be applied to short-text automatic generation, event discovery, and summarization for big data analysis.
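A crude stand-in for the redundancy-free selection step (a generic greedy maximum-coverage heuristic, not the paper's information-gradient method) uses word overlap as a proxy for semantic coverage: at each step, pick the text contributing the most not-yet-covered tokens, so redundant texts are skipped.

```python
# Greedy selection of k redundancy-free short texts by word coverage.
# The texts below are invented for illustration.
texts = [
    "earthquake hits city center overnight",
    "overnight earthquake damages city buildings",
    "rescue teams search collapsed buildings",
    "government pledges relief funds for victims",
]

def tokens(t):
    return set(t.split())

def select(texts, k):
    chosen, covered = [], set()
    for _ in range(k):
        # Gain = number of not-yet-covered tokens a candidate would add;
        # near-duplicates of already-chosen texts have gain close to 0.
        gains = [(len(tokens(t) - covered), t) for t in texts if t not in chosen]
        gain, best = max(gains)
        if gain == 0:
            break
        chosen.append(best)
        covered |= tokens(best)
    return chosen

core = select(texts, 2)
print(core)
```

A real system would replace raw word overlap with the learned semantic association distribution, but the greedy coverage structure is the same.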