
    An Unsupervised Approach to Modelling Visual Data

    For very large visual datasets, producing expert ground-truth data for training supervised algorithms can represent a substantial human effort. In these situations there is scope for unsupervised approaches that can model collections of images and automatically summarise their content. The primary motivation for this thesis comes from the problem of labelling large visual datasets of the seafloor obtained by an Autonomous Underwater Vehicle (AUV) for ecological analysis. Labelling this data is expensive, as taxonomical experts for the specific region are required, whereas automatically generated summaries can be used to focus the efforts of experts and to inform decisions on additional sampling. The contributions in this thesis arise from modelling this visual data in entirely unsupervised ways to obtain comprehensive visual summaries. Firstly, popular unsupervised image feature learning approaches are adapted to work with large datasets and unsupervised clustering algorithms. Next, Bayesian models boost the performance of rudimentary scene clustering by sharing clusters between multiple related datasets, such as regular photo albums or AUV surveys. These Bayesian scene clustering models are extended to simultaneously cluster sub-image segments to form unsupervised notions of “objects” within scenes, and the frequency distribution of these objects within scenes serves as the scene descriptor for simultaneous scene clustering. Finally, this simultaneous clustering model is extended to describe scenes using both whole-image descriptors, which encode rudimentary spatial information, and object frequency distributions. This is achieved by unifying the previously presented Bayesian clustering models, which also rectifies some of their weaknesses and limitations. Hence, the final contribution of this thesis is a practical unsupervised algorithm for modelling images from the super-pixel to the album level, one that is applicable to large datasets.
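
    As a rough illustration of the style of pipeline described here (unsupervised feature extraction followed by clustering of whole images), the sketch below computes simple colour-histogram features as a stand-in for learned features and clusters them with a truncated Dirichlet-process mixture, which lets the data decide how many clusters are used. Everything in it, including the synthetic images, is an illustrative assumption rather than the thesis's actual method; the cross-dataset cluster sharing and object-level modelling are not reproduced.

```python
# Minimal sketch: unsupervised clustering of an image collection.
# Colour histograms stand in for learned features; BayesianGaussianMixture
# stands in for the thesis's Bayesian clustering models.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)

def colour_histogram(image, bins=8):
    """Concatenated per-channel histogram, normalised to sum to 1."""
    hist = [np.histogram(image[..., c], bins=bins, range=(0, 1))[0] for c in range(3)]
    hist = np.concatenate(hist).astype(float)
    return hist / hist.sum()

# Synthetic stand-in for an image collection: two visually distinct "habitats".
dark = rng.uniform(0.0, 0.4, size=(50, 32, 32, 3))
bright = rng.uniform(0.6, 1.0, size=(50, 32, 32, 3))
images = np.concatenate([dark, bright])

features = np.stack([colour_histogram(im) for im in images])

# A truncated Dirichlet-process mixture uses only as many of the candidate
# components as the data support, rather than fixing the number up front.
model = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
)
labels = model.fit_predict(features)
print("clusters used:", np.unique(labels))
```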

    Guide for incoming first-year students

    Advice compiled by Boston University School of Medicine students for incoming first-year students and for third- or fourth-year students preparing for clinical rotations.

    Earlier Intervention in the Management of Hypercholesterolemia What Are We Waiting For?

    The thesis advanced here is that we are initiating treatment of hypercholesterolemia (and other risk factors) too late in life. Initiating treatment at, for example, age 30 years instead of age 60 years might very well prevent not just 30% of events, as in the 5-year statin trials, but perhaps as many as 60%.

    The Distributional Effects of Early School Stratification - Non-Parametric Evidence from Germany

    The effects of early school stratification on scholastic performance have been subject to controversial debates in educational policy and science. We exploit a unique variation in Lower Saxony, Germany, where performance-based tracking was moved forward from grade 7 to grade 5 in 2004. We measure the long-run effects of early school stratification on individual PISA test scores along the entire skill distribution using the changes-in-changes estimator. Our results indicate that earlier school tracking increased test scores at the upper tail and lowered test scores at the lower tail of the skill distribution, with the two effects offsetting each other on average.
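
    The changes-in-changes estimator named here (Athey and Imbens, 2006) builds a counterfactual post-period distribution for the treated group by mapping treated pre-period outcomes through the control group's change over time, and reads off quantile treatment effects from it. The sketch below is a minimal illustration of that mapping on invented synthetic data; it is not the paper's code, and all names and numbers are assumptions.

```python
# Hedged sketch of the changes-in-changes (CiC) idea: counterfactual treated
# post-period outcomes are F01^{-1}(F00(y10)), i.e. treated pre-period outcomes
# re-ranked in the control pre-period distribution and mapped through the
# control post-period quantile function.
import numpy as np

rng = np.random.default_rng(1)

def empirical_cdf(sample, values):
    """F(values) under the empirical distribution of `sample`."""
    sample = np.sort(sample)
    return np.searchsorted(sample, values, side="right") / len(sample)

def changes_in_changes(y00, y01, y10, y11, quantiles):
    """Quantile treatment effects for the treated group in the post period.

    y00/y01: control pre/post outcomes; y10/y11: treated pre/post outcomes.
    """
    ranks = empirical_cdf(y00, y10)                          # rank of treated pre outcomes in control pre dist.
    counterfactual = np.quantile(y01, np.clip(ranks, 0, 1))  # map ranks through control post quantiles
    return np.quantile(y11, quantiles) - np.quantile(counterfactual, quantiles)

# Synthetic example: a reform that widens the treated group's outcome distribution.
y00 = rng.normal(500, 90, 2000)    # control, pre
y01 = rng.normal(510, 90, 2000)    # control, post (common time trend)
y10 = rng.normal(500, 90, 2000)    # treated, pre
y11 = rng.normal(510, 110, 2000)   # treated, post: gains at the top, losses at the bottom

qs = np.linspace(0.1, 0.9, 9)
print(np.round(changes_in_changes(y00, y01, y10, y11, qs), 1))
```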

    Differences-in-Differences with multiple Treatments under Control

    Numerous quasi-experimental identification strategies that make use of the difference-in-differences setup suffer from multiple treatments, which can be separated into sequential and simultaneous treatments. While causal inference under sequential treatments can rely on a staggered difference-in-differences approach, under simultaneous treatments the standard difference-in-differences approach is normally not applicable. Accordingly, we present an adjusted difference-in-differences identification strategy that can neutralize the effects of additional treatments implemented simultaneously, through the definition and specific composition of the control group and an amended common trend assumption. While the adjusted difference-in-differences strategy identifies the average treatment effect on the treated, we also show that it is capable of identifying the average treatment effect under stronger common trend assumptions and in the absence of interaction effects between the treatments.
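
    As context for the adjustment the paper proposes, the sketch below shows the standard 2x2 difference-in-differences comparison it builds on: the treated group's pre-to-post change minus the control group's change. The data are synthetic and the paper's adjusted control-group construction for simultaneous treatments is not reproduced here.

```python
# Hedged sketch of the standard 2x2 difference-in-differences estimator on
# synthetic data; not the paper's adjusted strategy.
import numpy as np

rng = np.random.default_rng(2)

n = 1000
treated = rng.integers(0, 2, n)   # group indicator
post = rng.integers(0, 2, n)      # period indicator
effect = 2.0                      # true treatment effect on the treated

# Outcome: group fixed effect + common time trend + treatment effect + noise.
y = 1.0 * treated + 0.5 * post + effect * treated * post + rng.normal(0, 1, n)

def did(y, treated, post):
    """(post - pre) change for the treated minus the same change for controls."""
    change_treated = y[(treated == 1) & (post == 1)].mean() - y[(treated == 1) & (post == 0)].mean()
    change_control = y[(treated == 0) & (post == 1)].mean() - y[(treated == 0) & (post == 0)].mean()
    return change_treated - change_control

print(round(did(y, treated, post), 2))  # should be close to 2.0
```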

    Deviations in Representations Induced by Adversarial Attacks

    Deep learning has been a popular topic and has achieved success in many areas. It has drawn the attention of researchers and machine learning practitioners alike, with developed models deployed to a variety of settings. Alongside these achievements, research has shown that deep learning models are vulnerable to adversarial attacks. This finding opened a new direction of research in which algorithms are developed to attack and defend vulnerable networks. Our interest is in understanding how these attacks alter the intermediate representations of deep learning models. We present a method for measuring and analyzing the deviations in representations induced by adversarial attacks, progressively across a selected set of layers. Experiments are conducted with an assortment of attack algorithms on the CIFAR-10 dataset, and plots visualize the impact of adversarial attacks across the different layers of a network.
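
    A minimal sketch of the general idea, comparing a network's layer-wise activations on clean versus adversarially perturbed inputs. The small CNN, the single-step FGSM attack, the relative-L2 deviation measure, and the random stand-in for CIFAR-10 images are all illustrative assumptions, not the paper's actual setup.

```python
# Hedged sketch: measure how much each layer's representation moves when the
# input is adversarially perturbed. Random tensors stand in for CIFAR-10.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 10),
)
model.eval()

# Capture the output of every layer with forward hooks.
activations = {}
def hook(name):
    def _hook(module, inputs, output):
        activations[name] = output.detach()
    return _hook

for i, layer in enumerate(model):
    layer.register_forward_hook(hook(f"layer{i}_{layer.__class__.__name__}"))

x = torch.rand(8, 3, 32, 32)        # stand-in for a CIFAR-10 batch
y = torch.randint(0, 10, (8,))

# FGSM: one signed-gradient step of size epsilon on the input.
x_adv = x.clone().requires_grad_(True)
loss = nn.functional.cross_entropy(model(x_adv), y)
loss.backward()
with torch.no_grad():
    x_adv = (x_adv + 8 / 255 * x_adv.grad.sign()).clamp(0, 1)

model(x)                            # fills `activations` with clean representations
clean = dict(activations)
model(x_adv)                        # overwrites with adversarial representations

for name in clean:
    deviation = ((activations[name] - clean[name]).norm() / clean[name].norm()).item()
    print(f"{name}: relative L2 deviation = {deviation:.3f}")
```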