414 research outputs found

    First implications of LHCb data on models beyond the Standard Model

    We discuss the theoretical and experimental details of two of the main results obtained by LHCb with the 2011 data, namely the measurement of mixing-induced CP violation in the decay B_s -> J/psi phi and the upper limits on the decays B_(s) -> mu+ mu-. We then describe possible strategies for obtaining new constraints on two different New Physics models in the light of these results. (5 pages; proceedings of "QCD@Work 2012", June 18-21, 2012, Lecce, Italy)
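
    For orientation, a standard textbook form of the mixing-induced asymmetry measured in this channel (our addition, not taken from the proceedings): for a final state f that is a CP eigenstate with eigenvalue eta_f, and neglecting Delta Gamma_s effects, one has, up to sign conventions,

        \mathcal{A}_{CP}(t) = -\eta_f \, \sin\phi_s \, \sin(\Delta m_s\, t),

    so the fitted quantity is the weak phase phi_s, which is predicted to be small in the Standard Model (phi_s ~ -2 beta_s ~ -0.04 rad) and is therefore a sensitive probe of New Physics in B_s mixing.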

    Phenomenological tests of the Two-Higgs-Doublet Model with MFV and flavour-blind phases

    In the context of a Two-Higgs-Doublet Model in which Minimal Flavour Violation (MFV) is imposed, one can allow flavour-blind CP-violating phases without generating electric dipole moments that exceed the experimental bounds. This choice makes it possible to accommodate the hinted large phase in B_s mixing and, at the same time, to soften the observed anomaly in the relation between epsilon_K and S_{psi K_S}. (8 pages, 2 figures; proceedings of "DISCRETE 2010", December 6-11, 2010, Rome, Italy)
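
    A hedged reminder of the common parameterization behind the last sentence (textbook notation, not from the proceedings): in the SM the mixing-induced asymmetry in B_d -> J/psi K_S measures S_{psi K_S} = sin 2beta, while a flavour-blind new phase varphi_{B_d} entering B_d mixing shifts it,

        S_{\psi K_S} = \sin\left(2\beta + 2\varphi_{B_d}\right),

    so a small negative varphi_{B_d} lowers S_{psi K_S} relative to sin 2beta, leaving room for the larger "true" sin 2beta preferred by epsilon_K.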

    A deep representation for depth images from synthetic data

    Convolutional Neural Networks (CNNs) trained on large-scale RGB databases have become the secret sauce in the majority of recent approaches for object categorization from RGB-D data. Thanks to colorization techniques, these methods exploit the filters learned from 2D images to extract meaningful representations in 2.5D. Still, the perceptual signature of these two kinds of images is very different, with the first usually strongly characterized by textures and the second mostly by the silhouettes of objects. Ideally, one would like to have two CNNs, one for RGB and one for depth, each trained on a suitable data collection and able to capture the perceptual properties of its channel for the task at hand. This has not been possible so far, due to the lack of a suitable depth database. This paper addresses this issue, proposing to opt for synthetically generated images rather than collecting a large-scale 2.5D database by hand. While clearly a proxy for real data, synthetic images allow one to trade quality for quantity, making it possible to generate a virtually infinite amount of data. We show that training the very same architecture typically used on visual data on such a collection yields very different filters, resulting in depth features (a) able to better characterize the different facets of depth images, and (b) complementary to those derived from CNNs pre-trained on 2D datasets. Experiments on two publicly available databases show the power of our approach.
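
    As a concrete illustration of the colorization step mentioned above, here is a minimal sketch (a hypothetical helper of our own, not the paper's code) that maps a single-channel depth image to three channels so that RGB-pretrained CNN filters can be applied to it:

        # Hypothetical sketch: colorize a depth map so RGB-pretrained filters apply.
        import numpy as np
        from matplotlib import colormaps

        def colorize_depth(depth: np.ndarray) -> np.ndarray:
            """Map a raw depth image (H, W) to an RGB-like array (H, W, 3) in [0, 1]."""
            d = depth.astype(np.float32)
            d = (d - d.min()) / max(float(d.max() - d.min()), 1e-6)  # normalize to [0, 1]
            return colormaps["jet"](d)[..., :3]                       # drop the alpha channel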

    From source to target and back: symmetric bi-directional adaptive GAN

    The effectiveness of generative adversarial approaches in producing images according to a specific style or visual domain has recently opened new directions to solve the unsupervised domain adaptation problem. It has been shown that source labeled images can be modified to mimic target samples, making it possible to directly train a classifier in the target domain despite the original lack of annotated data. Inverse mappings from the target to the source domain have also been evaluated, but only passing through adapted feature spaces, thus without new image generation. In this paper we propose to better exploit the potential of generative adversarial networks for adaptation by introducing a novel symmetric mapping among domains. We jointly optimize bi-directional image transformations, combining them with target self-labeling. Moreover, we define a new class consistency loss that aligns the generators in the two directions, imposing that an image conserve its class identity when passing through both domain mappings. A detailed qualitative and quantitative analysis of the reconstructed images confirms the power of our approach. By integrating the two domain-specific classifiers obtained with our bi-directional network, we exceed previous state-of-the-art unsupervised adaptation results on four different benchmark datasets.
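
    A minimal sketch of what such a class consistency term could look like in PyTorch (our assumed form, not the authors' code; the generators G_st, G_ts and the source classifier C_s are assumed modules):

        import torch.nn.functional as F

        def class_consistency_loss(x_src, y_src, G_st, G_ts, C_s):
            x_fake_tgt = G_st(x_src)       # render source images in the target style
            x_back_src = G_ts(x_fake_tgt)  # map them back to the source domain
            logits = C_s(x_back_src)       # classify the round-trip images
            # the round trip must preserve the original class labels
            return F.cross_entropy(logits, y_src)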

    A virocentric perspective: The essential is invisible to the eye

    Over the years since the discovery of viruses, experts have repeatedly changed their minds about their identity. At first viruses were considered poisons, then particles with a peculiar form of life, and later biochemical substances. In biological thought, viruses today occupy a gray area between the living and the non-living: unable to self-replicate on their own, yet achieving it inside a living cell, and in doing so decisively shaping the behavior of that host. For much of the modern era of biology, placing viruses in the inert world had a negative consequence: they were left out of the study of evolution. Fortunately for us, science is now beginning to appreciate the decisive role of viruses in the history of life.

    Learning to see across domains and modalities

    Deep learning has recently raised hopes and expectations as a general solution for many applications (computer vision, natural language processing, speech recognition, etc.); indeed it has proven effective, but it has also shown a strong dependence on large quantities of data. Generally speaking, deep learning models are especially susceptible to overfitting, due to their large number of internal parameters. Luckily, it has also been shown that, even when data is scarce, a successful model can be trained by reusing prior knowledge. Thus, developing techniques for "transfer learning" (as this process is known), in its broadest definition, is a crucial element towards the deployment of effective and accurate intelligent systems in the real world.

    This thesis focuses on a family of transfer learning methods applied to the task of visual object recognition, specifically image classification. The visual recognition problem is central to computer vision research: many desired applications, from robotics to information retrieval, demand the ability to correctly identify categories, places, and objects. Transfer learning is a general term, and specific settings have been given specific names: when the learner has access only to unlabeled data from the target domain (where the model should perform) and labeled data from a different domain (the source), the problem is called unsupervised domain adaptation (DA).

    The first part of this thesis focuses on three fully distinct methods for this setting. The first proposes the use of Domain Alignment layers to structurally align the distributions of the source and target domains in feature space. While the general idea of aligning feature distributions is not novel, our method is one of the very few that do so without adding losses. The second method is based on GANs: we propose a bidirectional architecture that jointly learns how to map the source images into the target visual style and vice versa, thus alleviating the domain shift at the pixel level. The third method features an adversarial learning process that transforms both the images and the features of both domains in order to map them to a common, agnostic, space.

    While the first part of the thesis presents general-purpose DA methods, the second part focuses on the real-life issues of robotic perception, specifically RGB-D recognition. Robotic platforms are usually not limited to color perception; very often they also carry a depth camera. Unfortunately, the depth modality is rarely used for visual recognition, due to the lack of pretrained models from which to transfer and too little data to train one from scratch. We first explore the use of synthetic data as a proxy for real images by training a Convolutional Neural Network (CNN) on virtual depth maps, rendered from 3D CAD models, and then testing it on real robotic datasets. The second approach leverages the existence of RGB pretrained models by learning how to map the depth data into the most discriminative RGB representation and then using existing models for recognition. This second technique is in fact a fairly generic transfer learning method which can be applied to share knowledge across modalities.
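
    As an illustration of the first technique, here is a minimal sketch of a domain-alignment layer of the kind described (an assumed form, not the thesis code): each domain is normalized with its own batch statistics, aligning the two feature distributions without any extra loss term.

        import torch.nn as nn

        class DomainAlignLayer(nn.Module):
            """Per-domain batch normalization: one set of statistics per domain."""
            def __init__(self, num_features):
                super().__init__()
                self.bn = nn.ModuleDict({
                    "source": nn.BatchNorm2d(num_features),
                    "target": nn.BatchNorm2d(num_features),
                })

            def forward(self, x, domain):
                # x: (N, C, H, W) features; each mini-batch comes from one domain
                return self.bn[domain](x)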

    Bridging Between Computer and Robot Vision Through Data Augmentation: A Case Study on Object Recognition

    Despite the impressive progress brought by deep networks in visual object recognition, robot vision is still far from being a solved problem. The most successful convolutional architectures are developed starting from ImageNet, a large-scale collection of images of object categories downloaded from the Web. These images are very different from the situated and embodied visual experience of robots deployed in unconstrained settings. To reduce the gap between these two visual experiences, this paper proposes a simple yet effective data augmentation layer that zooms in on the object of interest and simulates the object detection outcome of a robot vision system. The layer, which can be used with any convolutional deep architecture, brings an increase in object recognition performance of up to 7%, in experiments performed over three different benchmark databases. An implementation of our robot data augmentation layer has been made publicly available.
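
    A minimal sketch of such a zoom step as an offline transform (our hypothetical reading of the idea, not the released implementation; the object bounding box is assumed to be known):

        import random
        from PIL import Image

        def zoom_on_object(img, box, out_size=224, max_margin=0.3):
            """Crop around the object with a random margin, simulating a detector crop."""
            x0, y0, x1, y1 = box                                # object box in pixels
            mw = int((x1 - x0) * random.uniform(0.0, max_margin))
            mh = int((y1 - y0) * random.uniform(0.0, max_margin))
            crop = img.crop((max(0, x0 - mw), max(0, y0 - mh),
                             min(img.width, x1 + mw), min(img.height, y1 + mh)))
            return crop.resize((out_size, out_size), Image.BILINEAR)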