
    The 10th Jubilee Conference of PhD Students in Computer Science


    Statistical Mechanics Approach to Inverse Problems on Networks

    Statistical Mechanics has gained a central role in modern Inference and Computer Science. Many optimization and inference problems can be cast in a Statistical Mechanics framework, and various concepts and methods developed in this area of Physics are helpful not only for theoretical analysis but also constitute valuable tools for solving single-instance cases of hard inference and computational tasks. In this work, I address various inverse problems on networks, from models of epidemic spreading to learning in neural networks, and apply a variety of methods developed in the context of Disordered Systems, namely the Replica and Cavity methods on the theoretical side and their algorithmic incarnation, Belief Propagation, to solve hard inverse problems that can be formulated in a Bayesian framework.
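The Belief Propagation mentioned in the abstract can be illustrated with a minimal sketch (a hypothetical toy example, not taken from the thesis): message passing on a small Ising chain, where BP is exact because the graph is a tree. The fields `h` and couplings `J` below are arbitrary illustrative values; the BP marginals are checked against brute-force enumeration.

```python
import itertools
import math

def bp_chain_marginals(h, J):
    """Single-site marginals P(s_i = +1) of an Ising chain with fields h
    and couplings J (len(J) == len(h) - 1), via forward/backward messages."""
    n = len(h)
    states = (-1, 1)
    # forward[i][s]: sum over left configurations with s_i fixed to s
    forward = [{s: 1.0 for s in states} for _ in range(n)]
    for i in range(1, n):
        for s in states:
            forward[i][s] = sum(
                forward[i - 1][t] * math.exp(h[i - 1] * t + J[i - 1] * t * s)
                for t in states)
    # backward[i][s]: sum over right configurations with s_i fixed to s
    backward = [{s: 1.0 for s in states} for _ in range(n)]
    for i in range(n - 2, -1, -1):
        for s in states:
            backward[i][s] = sum(
                backward[i + 1][t] * math.exp(h[i + 1] * t + J[i] * s * t)
                for t in states)
    marg = []
    for i in range(n):
        # belief = left message * local field factor * right message
        b = {s: forward[i][s] * math.exp(h[i] * s) * backward[i][s] for s in states}
        marg.append(b[1] / (b[-1] + b[1]))
    return marg

def brute_force_marginals(h, J):
    """Exact marginals by enumerating all 2^n spin configurations."""
    n = len(h)
    z, up = 0.0, [0.0] * n
    for cfg in itertools.product((-1, 1), repeat=n):
        e = sum(h[i] * cfg[i] for i in range(n))
        e += sum(J[i] * cfg[i] * cfg[i + 1] for i in range(n - 1))
        w = math.exp(e)
        z += w
        for i in range(n):
            if cfg[i] == 1:
                up[i] += w
    return [u / z for u in up]

h = [0.3, -0.2, 0.5, 0.1]   # illustrative local fields
J = [0.7, -0.4, 0.2]        # illustrative couplings
bp = bp_chain_marginals(h, J)
exact = brute_force_marginals(h, J)
assert all(abs(a - b) < 1e-9 for a, b in zip(bp, exact))
```

On loopy graphs the same message-passing equations are iterated to a fixed point and the marginals become approximate, which is the regime relevant for hard inverse problems.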

    Computer Vision-based Monitoring of Harvest Quality


    The Shallow and the Deep: A biased introduction to neural networks and old school machine learning

    The Shallow and the Deep is a collection of lecture notes that offers an accessible introduction to neural networks and machine learning in general. However, it was clear from the beginning that these notes would not be able to cover this rapidly changing and growing field in its entirety. The focus lies on classical machine learning techniques, with a bias towards classification and regression. Other learning paradigms and many recent developments in, for instance, Deep Learning are not addressed or only briefly touched upon. Biehl argues that having a solid knowledge of the foundations of the field is essential, especially for anyone who wants to explore the world of machine learning with an ambition that goes beyond the application of some software package to some data set. Therefore, The Shallow and the Deep places emphasis on fundamental concepts and theoretical background. This also involves delving into the history and pre-history of neural networks, where the foundations for most of the recent developments were laid. These notes aim to demystify machine learning and neural networks without losing the appreciation for their impressive power and versatility.

    Visual Representation Learning with Limited Supervision

    The quality of a Computer Vision system depends on the rigor of the data representation it is built upon. Learning expressive representations of images is therefore the centerpiece of almost every computer vision application, including image search, object detection and classification, human re-identification, object tracking, pose understanding, image-to-image translation, and embodied agent navigation, to name a few. Deep Neural Networks are most often seen among the modern methods of representation learning. The limitation, however, is that deep representation learning methods require extremely large amounts of manually labeled data for training. Clearly, annotating vast amounts of images for various environments is infeasible due to cost and time constraints. This requirement for labeled data is a prime restriction on the pace of development of visual recognition systems. In order to cope with the exponentially growing amounts of visual data generated daily, machine learning algorithms have to at least strive to scale at a similar rate. The second challenge is that the learned representations must generalize to novel objects, classes, environments, and tasks in order to accommodate the diversity of the visual world. Despite the ever-growing number of recent publications tangentially addressing the topic of learning generalizable representations, efficient generalization is yet to be achieved. This dissertation tackles the problem of learning visual representations that can generalize to novel settings while requiring few labeled examples. In this research, we study the limitations of existing supervised representation learning approaches and propose a framework that improves the generalization of learned features by exploiting visual similarities between images which are not captured by the provided manual annotations.
Furthermore, to mitigate the common requirement of large-scale manually annotated datasets, we propose several approaches that can learn expressive representations without human-attributed labels, in a self-supervised fashion, by grouping highly similar samples into surrogate classes based on progressively learned representations. The development of computer vision as a science is preconditioned on a machine's ability to record and disentangle attributes of pictures that were once thought conceivable only by humans. As such, particular interest is dedicated to analyzing the means of artistic expression and style, a more complex task than merely breaking an image down into colors and pixels. The ultimate test of this ability is style transfer, which involves altering the style of an image while keeping its content. An effective solution to style transfer requires learning an image representation that allows disentangling image style from image content. Moreover, particular artistic styles come with idiosyncrasies that affect which content details should be preserved and which discarded. A further difficulty is that pixel-wise annotations of style, and of how the style should be altered, are impossible to obtain. We address this problem by proposing an unsupervised approach that encodes the image content in the way required by a particular style. The proposed approach exchanges the style of an input image by first extracting the content representation in a style-aware way and then rendering it in a new style using a style-specific decoder network, achieving compelling results in image and video stylization. Finally, we combine supervised and self-supervised representation learning techniques for the task of human and animal pose understanding.
The proposed method enables transfer of a representation learned for recognizing human poses to proximal mammal species without using labeled animal images. This approach is not limited to dense pose estimation and could potentially enable autonomous agents, from robots to self-driving cars, to retrain themselves and adapt to novel environments by learning from previous experiences.
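The core self-supervised idea above, grouping highly similar samples into surrogate classes from current feature representations, can be sketched in miniature (an illustrative toy, not the dissertation's actual pipeline): cluster unlabeled feature vectors and use the cluster indices as pseudo-labels. The features and cluster count below are made up for illustration.

```python
import random

def kmeans_surrogate_labels(features, k, iters=20, seed=0):
    """Assign each feature vector a surrogate class label by plain k-means."""
    rng = random.Random(seed)
    centers = rng.sample(features, k)
    labels = [0] * len(features)
    for _ in range(iters):
        # assignment step: each sample joins its nearest center
        for i, x in enumerate(features):
            labels[i] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(x, centers[c])))
        # update step: recompute each center as the mean of its members
        for c in range(k):
            members = [features[i] for i in range(len(features)) if labels[i] == c]
            if members:
                centers[c] = [sum(col) / len(members) for col in zip(*members)]
    return labels

# toy "learned features": two well-separated groups of samples
feats = [(0.0, 0.1), (0.1, -0.1), (5.0, 5.1), (5.2, 4.9)]
surrogate = kmeans_surrogate_labels(feats, k=2)
assert surrogate[0] == surrogate[1] and surrogate[2] == surrogate[3]
assert surrogate[0] != surrogate[2]
```

In a full pipeline these surrogate labels would supervise the next round of feature training, and the cluster/train loop repeats as the representations improve.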

    Analytic and numerical analysis of the cosmic 21cm signal

    Cosmology in the 21st century has matured into a precision science. Measurements of the cosmic microwave background, galaxy surveys, weak lensing studies, and supernovae surveys all but confirm that we live in a geometrically flat Universe dominated by a dark energy component, where most of the matter is dark. Yet challenges to this model remain, as do periods in its evolution that are as yet unobserved. The next decade will see the construction of a new generation of telescopes poised to answer some of these remaining questions and peer into unseen depths. Thanks to the technological advances of the previous decades and the scale of the new generation of telescopes, cosmology will for the first time be constrained through observation of the cosmic 21cm signal emitted by hydrogen atoms across the Universe. As an element present throughout the different evolutionary stages of the Universe, neutral hydrogen holds great potential to answer many of the challenges facing cosmology today. In the context of 21cm radiation, we identify two approaches, one numerical and one analytic, which will increase the information gain from future observations. The numerical challenges of future analyses are a consequence of the data rates of next-generation telescopes, and we address them here by introducing machine learning techniques as a possible solution. Artificial neural networks have gained much attention in both the scientific and commercial worlds, and we apply one such network here to emulate the numerical simulations necessary for parameter inference from future data. Further, we identify the potential of the bispectrum, the Fourier transform of the three-point statistic, as a cosmological probe in the context of low-redshift 21cm intensity mapping experiments.
This higher-order statistical analysis can constrain cosmological parameters beyond the capabilities of CMB observations and power spectrum analyses of the 21cm signal. Lastly, we focus on a fully 3D expansion of the 21cm power spectrum in the natural spherical basis for large-angle observations, drawing on the success of this technique in weak lensing studies.
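Why the bispectrum carries information the power spectrum cannot is easy to see in one dimension. The sketch below (an assumed illustrative estimator, not the thesis's pipeline) evaluates the single-triangle bispectrum B(k1, k2) = F(k1) F(k2) F*(k1 + k2) of a periodic field: it is large only when the three modes are phase-coupled, i.e. when the field is non-Gaussian in that specific way.

```python
import numpy as np

def bispectrum(field, k1, k2):
    """Single-triangle bispectrum estimate for a real 1D periodic field."""
    F = np.fft.fft(field)
    return (F[k1] * F[k2] * np.conj(F[k1 + k2])).real

N = 256
x = 2 * np.pi * np.arange(N) / N
k1, k2 = 5, 9  # arbitrary example wavenumbers

# a field containing the full coupled triplet (k1, k2, k1 + k2) ...
coupled = np.cos(k1 * x) + np.cos(k2 * x) + np.cos((k1 + k2) * x)
# ... and one where the third mode is absent, so the triangle does not close
uncoupled = np.cos(k1 * x) + np.cos(k2 * x)

b_coupled = bispectrum(coupled, k1, k2)      # large and positive
b_uncoupled = bispectrum(uncoupled, k1, k2)  # consistent with zero
```

A power spectrum measurement sees only |F(k)|^2 per mode and would not distinguish these two fields at k1 and k2; the bispectrum's sensitivity to such mode coupling is what makes it a probe beyond power spectrum analyses.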