Gaussian Processes with Context-Supported Priors for Active Object Localization
We devise an algorithm using a Bayesian optimization framework in conjunction
with contextual visual data for the efficient localization of objects in still
images. Recent research has demonstrated substantial progress in object
localization and related tasks in computer vision. However, many current
state-of-the-art object localization procedures remain inaccurate and
inefficient, and fail to provide a principled, interpretable system amenable
to high-level vision tasks. We address these issues in the present work.
Our method encompasses an active search procedure that uses contextual data
to generate initial bounding-box proposals for a target object. We train a
convolutional neural network to approximate an offset distance from the target
object. Next, we use a Gaussian Process to model this offset response signal
over the search space of the target. We then employ a Bayesian active search
for accurate localization of the target.
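The active search loop described above can be sketched as follows, with a synthetic offset signal standing in for the CNN output. The squared-exponential kernel, the lower-confidence-bound selection rule, and all hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=0.5):
    """Squared-exponential kernel between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale ** 2)

def gp_posterior(X_obs, y_obs, X_query, noise=1e-3):
    """Standard GP regression posterior mean and variance at X_query."""
    K = rbf_kernel(X_obs, X_obs) + noise * np.eye(len(X_obs))
    Ks = rbf_kernel(X_query, X_obs)
    sol = np.linalg.solve(K, Ks.T)                 # K^{-1} k(X_obs, X_query)
    mu = sol.T @ y_obs
    var = np.maximum(1.0 - np.sum(Ks * sol.T, axis=1), 0.0)
    return mu, var

def active_search(offset_fn, candidates, n_init=3, n_steps=10, kappa=2.0, seed=0):
    """Query the offset signal sequentially, minimizing a lower confidence bound."""
    rng = np.random.default_rng(seed)
    idx = list(rng.choice(len(candidates), n_init, replace=False))
    y = [offset_fn(candidates[i]) for i in idx]
    for _ in range(n_steps):
        mu, var = gp_posterior(candidates[idx], np.array(y), candidates)
        nxt = int(np.argmin(mu - kappa * np.sqrt(var)))  # small offset, high uncertainty
        if nxt not in idx:
            idx.append(nxt)
            y.append(offset_fn(candidates[nxt]))
    return candidates[idx[int(np.argmin(y))]]      # lowest-offset location seen

# Toy example: the "offset" is the distance to a hidden object center.
target = np.array([0.7, 0.3])
grid = np.stack(np.meshgrid(np.linspace(0, 1, 20),
                            np.linspace(0, 1, 20)), -1).reshape(-1, 2)
found = active_search(lambda b: float(np.linalg.norm(b - target)), grid)
```

The returned location is the candidate with the smallest observed offset, mirroring the idea of steering queries toward the target rather than scanning exhaustively.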
In experiments, we compare our approach to a state-of-the-art bounding-box
regression method on a challenging pedestrian localization task. Our method
exhibits a substantial improvement over this baseline regression method.
Non-Adaptive Policies for 20 Questions Target Localization
The problem of target localization with noise is addressed. The target is a
sample from a continuous random variable with known distribution and the goal
is to locate it with minimum mean squared error distortion. The localization
scheme, or policy, proceeds by queries, or questions, asking whether or not
the target belongs to some subset, as in the 20-questions framework. These
subsets are not constrained to be intervals, and the answers to the queries
are noisy. While this situation is well studied for adaptive querying, this
paper focuses on non-adaptive querying policies based on dyadic questions.
The asymptotic minimum achievable distortion under such policies is derived.
Furthermore, a policy named the Aurelian is exhibited which asymptotically
achieves this distortion.
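A non-adaptive dyadic policy can be illustrated concretely: every dyadic question (one per binary digit of the target) is fixed in advance, each answer passes through a binary symmetric channel, and the digits are decoded by majority vote. The repetition scheme and parameters below are illustrative assumptions, not the paper's Aurelian policy.

```python
import numpy as np

def dyadic_bit(x, k):
    """k-th binary digit of x in [0, 1): the answer to the dyadic question
    'does x lie in the union of intervals where digit k equals 1?'"""
    return int(np.floor(x * 2 ** k)) % 2

def non_adaptive_localize(x, n_bits=8, repeats=5, flip_prob=0.1, seed=0):
    """Ask every dyadic question up front (non-adaptively), each repeated
    `repeats` times through a binary symmetric channel, decode each digit
    by majority vote, and return the midpoint of the decoded dyadic cell."""
    rng = np.random.default_rng(seed)
    decoded = 0.0
    for k in range(1, n_bits + 1):
        answers = [dyadic_bit(x, k) ^ int(rng.random() < flip_prob)
                   for _ in range(repeats)]
        bit = int(2 * sum(answers) > repeats)    # majority vote
        decoded += bit / 2 ** k
    return decoded + 1.0 / 2 ** (n_bits + 1)     # midpoint of the dyadic cell

estimate = non_adaptive_localize(0.6180339887)
```

With noiseless answers the decoded cell contains the target, so the squared error is at most 2^(-2(n_bits+1)); the noisy case degrades this by the per-digit majority-vote error.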
Blocking Adult Images Based on Statistical Skin Detection
This work is aimed at the detection of adult images that appear on the Internet. Skin detection is of paramount importance in the detection of adult images. We build a maximum entropy model for this task. This model, called the First Order Model in this paper, is subject to constraints on the color gradients of neighboring pixels. Parameter estimation as well as optimization cannot be tackled without approximations. Under the Bethe tree approximation, parameter estimation becomes tractable, and the Belief Propagation algorithm yields an exact and fast solution for the skin probabilities at pixel locations. We show by Receiver Operating Characteristic (ROC) curves that our skin detection improves on previous work in terms of skin-pixel detection rate and false-positive rate. The output of skin detection is a grayscale skin map, with the gray level indicating the belief of skin. We then calculate 9 simple features from this map, which form a feature vector. We use fit ellipses to capture the characteristics of the skin distribution. Two fit ellipses are used for each skin map---the fit ellipse of all skin regions and the fit ellipse of the largest skin region, called respectively the Global Fit Ellipse and the Local Fit Ellipse in this paper. A multi-layer perceptron classifier is trained on these features. Extensive experimental results are presented, including photographs and an ROC curve calculated over a test set of 5,084 photographs, which show promising performance for such simple features
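The fit-ellipse features can be illustrated with a moment-based sketch: the ellipse center is the centroid of the skin-probability mass and its axes come from the eigendecomposition of the mass covariance. The moment formulation and the 2-sigma axis scaling are assumptions; the paper does not spell out its fitting procedure.

```python
import numpy as np

def fit_ellipse(skin_map):
    """Fit an ellipse to a grayscale skin map via image moments: the center
    is the centroid of the skin-probability mass, and the axes/orientation
    come from the eigenvalues/eigenvectors of its covariance."""
    ys, xs = np.mgrid[:skin_map.shape[0], :skin_map.shape[1]]
    m = skin_map.sum()
    cx = (xs * skin_map).sum() / m
    cy = (ys * skin_map).sum() / m
    cxy = ((xs - cx) * (ys - cy) * skin_map).sum() / m
    cov = np.array([[((xs - cx) ** 2 * skin_map).sum() / m, cxy],
                    [cxy, ((ys - cy) ** 2 * skin_map).sum() / m]])
    evals, evecs = np.linalg.eigh(cov)              # ascending eigenvalues
    axes = 2.0 * np.sqrt(np.maximum(evals, 0.0))    # 2-sigma semi-axes
    angle = float(np.arctan2(evecs[1, 1], evecs[0, 1]))  # major-axis angle
    return (float(cx), float(cy)), axes, angle

# Toy skin map: a solid rectangle of "skin" belief, wider than it is tall.
demo = np.zeros((40, 40))
demo[10:30, 5:35] = 1.0
center, axes, angle = fit_ellipse(demo)
```

Applying this once to the whole map and once to the largest connected skin region would give the Global and Local Fit Ellipses described above.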
A Finite-Horizon Approach to Active Level Set Estimation
We consider the problem of active learning in the context of spatial sampling
for level set estimation (LSE), where the goal is to localize all regions where
a function of interest lies above/below a given threshold as quickly as
possible. We present a finite-horizon search procedure to perform LSE in one
dimension while optimally balancing both the final estimation error and the
distance traveled for a fixed number of samples. A tuning parameter is used to
trade off between the estimation accuracy and distance traveled. We show that
the resulting optimization problem can be solved in closed form and that the
resulting policy generalizes existing approaches to this problem. We then show
how this approach can be used to perform level set estimation in higher
dimensions under the popular Gaussian process model. Empirical results on
synthetic data indicate that as the cost of travel increases, our method's
ability to treat distance nonmyopically allows it to significantly improve on
the state of the art. On real air quality data, our approach achieves roughly
one fifth the estimation error at less than half the cost of competing
algorithms
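A minimal myopic sketch of GP-based level set estimation in one dimension: each query goes where the posterior is most ambiguous about f(x) >= threshold (a straddle-style score), with a crude travel penalty standing in for the paper's finite-horizon treatment of distance. The kernel, score, and weights are illustrative assumptions, not the closed-form policy derived in the paper.

```python
import numpy as np

def rbf(a, b, ls=0.1):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def lse_search(f, xs, threshold, n_steps=15, travel_weight=0.05, noise=1e-4):
    """Greedy 1-D level-set estimation: sample where the GP is most ambiguous
    about f(x) >= threshold, minus a penalty on distance from the last query."""
    X = [float(xs[len(xs) // 2])]          # start in the middle of the domain
    y = [f(X[0])]
    def posterior():
        Xa, ya = np.array(X), np.array(y)
        K = rbf(Xa, Xa) + noise * np.eye(len(Xa))
        Ks = rbf(xs, Xa)
        sol = np.linalg.solve(K, Ks.T)
        mu = sol.T @ ya
        var = np.maximum(1.0 - np.sum(Ks * sol.T, axis=1), 0.0)
        return mu, var
    for _ in range(n_steps):
        mu, var = posterior()
        score = (1.96 * np.sqrt(var) - np.abs(mu - threshold)
                 - travel_weight * np.abs(xs - X[-1]))
        nxt = float(xs[int(np.argmax(score))])
        X.append(nxt)
        y.append(f(nxt))
    mu, _ = posterior()
    return xs[mu >= threshold], np.array(X)   # estimated superlevel set, queries

xs = np.linspace(0.0, 1.0, 101)
est, queries = lse_search(lambda x: x, xs, threshold=0.5)
```

Raising `travel_weight` makes the policy favor nearby queries, which is the trade-off the paper's tuning parameter controls nonmyopically.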
Active Testing for Face Detection and Localization
We provide a novel search technique which uses a hierarchical model and a mutual information gain heuristic to efficiently prune the search space when localizing faces in images. We show exponential gains in computation over traditional sliding window approaches, while keeping similar performance levels
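The information-gain heuristic can be made concrete with a toy version: given a posterior over discrete cells, the next region query is the one whose noisy yes/no answer is expected to reduce entropy the most. The flip-noise answer model and uniform prior below are assumptions for illustration.

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def info_gain(prior, region, eps=0.1):
    """Mutual information between the face location and a noisy yes/no test
    'is the face inside `region`?' (answer flipped with probability eps)."""
    q = prior[region].sum()                     # P(face in region)
    p_yes = (1 - eps) * q + eps * (1 - q)       # P(test answers yes)
    def posterior(ans_yes):
        in_region = np.isin(np.arange(len(prior)), region)
        like = np.where(in_region,
                        1 - eps if ans_yes else eps,
                        eps if ans_yes else 1 - eps)
        post = like * prior
        return post / post.sum()
    return entropy(prior) - (p_yes * entropy(posterior(True))
                             + (1 - p_yes) * entropy(posterior(False)))

# Uniform prior over 8 cells: testing half the cells is more informative
# than testing a single cell, which is why hierarchical splits prune fast.
prior = np.full(8, 1 / 8)
gain_half = info_gain(prior, np.arange(4))
gain_one = info_gain(prior, np.arange(1))
```

A hierarchical search repeatedly applies such balanced splits, which is where the exponential savings over sliding windows come from.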
Longitudinal changes in Alzheimer’s-related plasma biomarkers and brain amyloid
Introduction: Understanding longitudinal plasma biomarker trajectories relative to brain amyloid changes can help devise Alzheimer's progression assessment strategies.
Methods: We examined the temporal order of changes in plasma amyloid-β ratio (Aβ42/Aβ40), glial fibrillary acidic protein (GFAP), neurofilament light chain (NfL), and phosphorylated tau ratios (p-tau181/Aβ42, p-tau231/Aβ42) relative to 11C-Pittsburgh compound B (PiB) positron emission tomography (PET) cortical amyloid burden (PiB−/+). Participants (n = 199) were cognitively normal at index visit with a median 6.1-year follow-up.
Results: PiB groups exhibited different rates of longitudinal change in Aβ42/Aβ40 (β = 5.41 × 10⁻⁴, SE = 1.95 × 10⁻⁴, p = 0.0073). Change in brain amyloid correlated with change in GFAP (r = 0.5, 95% CI = [0.26, 0.68]). The greatest relative decline in Aβ42/Aβ40 (−1%/year) preceded brain amyloid positivity by 41 years (95% CI = [32, 53]).
Discussion: Plasma Aβ42/Aβ40 may begin declining decades prior to brain amyloid accumulation, whereas p-tau ratios, GFAP, and NfL increase closer in time.
Where Have All the Interactions Gone? Estimating the Coverage of Two-Hybrid Protein Interaction Maps
Yeast two-hybrid screens are an important method for mapping pairwise physical interactions between proteins. The fraction of interactions detected in independent screens can be very small, and an outstanding challenge is to determine the reason for the low overlap. Low overlap can arise from either a high false-discovery rate (interaction sets have low overlap because each set is contaminated by a large number of stochastic false-positive interactions) or a high false-negative rate (interaction sets have low overlap because each misses many true interactions). We extend capture–recapture theory to provide the first unified model for false-positive and false-negative rates for two-hybrid screens. Analysis of yeast, worm, and fly data indicates that 25% to 45% of the reported interactions are likely false positives. Membrane proteins have higher false-discovery rates on average, and signal transduction proteins have lower rates. The overall false-negative rate ranges from 75% for worm to 90% for fly, which arises from a roughly 50% false-negative rate due to statistical undersampling and a 55% to 85% false-negative rate due to proteins that appear to be systematically lost from the assays. Finally, statistical model selection conclusively rejects the Erdős–Rényi network model in favor of the power law model for yeast and the truncated power law for worm and fly degree distributions. Much as genome sequencing coverage estimates were essential for planning the human genome sequencing project, the coverage estimates developed here will be valuable for guiding future proteomic screens. All software and datasets are available in Datasets S1 and S2, Figures S1–S5, and Tables S1–S6, and are also available from our Web site, http://www.baderzone.org
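The capture–recapture idea behind these coverage estimates can be illustrated with the classical two-sample Lincoln–Petersen estimator. This is a deliberate simplification: the unified model described above additionally accounts for false positives and protein-specific effects.

```python
def capture_recapture_coverage(set_a, set_b):
    """Lincoln-Petersen estimate of the total number of true interactions
    from two independent screens: total ~ |A| * |B| / |A intersect B|.
    Ignores false positives for simplicity."""
    overlap = len(set_a & set_b)
    if overlap == 0:
        raise ValueError("no overlap: total population is not identifiable")
    n_total = len(set_a) * len(set_b) / overlap
    coverage_a = len(set_a) / n_total      # fraction of interactions screen A found
    return n_total, coverage_a

# Toy screens reporting 100 and 80 interaction pairs, sharing 20 of them.
screen1 = {("p%d" % i, "q%d" % i) for i in range(100)}
screen2 = {("p%d" % i, "q%d" % i) for i in range(80, 160)}
total, cov = capture_recapture_coverage(screen1, screen2)
```

Here the small overlap implies a large unseen population (400 interactions, 25% coverage for the first screen), which is exactly the "where have all the interactions gone" phenomenon.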
Learning High-Dimensional Nonparametric Differential Equations via Multivariate Occupation Kernel Functions
Learning a nonparametric system of ordinary differential equations (ODEs)
from trajectory snapshots in a d-dimensional state space requires
learning d functions of d variables. Explicit formulations scale
quadratically in d unless additional knowledge about system properties, such
as sparsity and symmetries, is available. In this work, we propose a linear
approach to learning using the implicit formulation provided by vector-valued
Reproducing Kernel Hilbert Spaces. By rewriting the ODEs in a weaker integral
form, which we subsequently minimize, we derive our learning algorithm. The
minimization problem's solution for the vector field relies on multivariate
occupation kernel functions associated with the solution trajectories. We
validate our approach through experiments on highly nonlinear simulated and
real data, where d may exceed 100. We further demonstrate the versatility of
the proposed method by learning a nonparametric first-order quasilinear partial
differential equation.
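A rough numerical sketch of the integral formulation: each trajectory segment contributes one linear constraint x(t1) − x(t0) = ∫ f(x(s)) ds, and the Gram matrix of occupation kernels is approximated by Riemann sums. A scalar Gaussian kernel is applied separably across output dimensions here, whereas the paper works with general vector-valued RKHSs; the discretization and all hyperparameters are assumptions.

```python
import numpy as np

def gauss_k(a, b, ls=1.0):
    return np.exp(-0.5 * np.sum((a - b) ** 2, axis=-1) / ls ** 2)

def fit_vector_field(traj, t, n_seg=10, ls=1.0, reg=1e-6):
    """Occupation-kernel regression sketch: solve (G + reg*I) alpha = Y where
    Y holds per-segment displacements and G holds double integrals of the
    kernel over segment pairs (Riemann-sum approximations)."""
    idx = np.linspace(0, len(t) - 1, n_seg + 1).astype(int)
    segs = [slice(idx[i], idx[i + 1] + 1) for i in range(n_seg)]
    dt = t[1] - t[0]
    # Y[i] = net displacement over segment i (one row per segment).
    Y = np.array([traj[s][-1] - traj[s][0] for s in segs])
    # G[i, j] ~ double integral of k(x(s), x(s')) over segments i and j.
    G = np.array([[gauss_k(traj[si][:, None, :], traj[sj][None, :, :], ls).sum()
                   * dt * dt for sj in segs] for si in segs])
    alpha = np.linalg.solve(G + reg * np.eye(n_seg), Y)
    def f_hat(z):
        # Occupation-kernel features of the query point z against each segment.
        feats = np.array([gauss_k(traj[s], z[None, :], ls).sum() * dt
                          for s in segs])
        return feats @ alpha
    return f_hat

# Toy system: x' = (-x2, x1), observed along one circular trajectory.
t = np.linspace(0, 2 * np.pi, 400)
traj = np.stack([np.cos(t), np.sin(t)], axis=1)
f_hat = fit_vector_field(traj, t)
v = f_hat(np.array([1.0, 0.0]))
```

Note that the number of unknowns grows with the number of segments, not with the state dimension, which is the source of the linear scaling claimed above.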
Learning treatment effect in neurodegenerative diseases with a Bayesian mixed-effect model