
    Chameleon fields and solar physics

    In this article we discuss some aspects of solar physics from the standpoint of the so-called chameleon fields (i.e. quantum fields, typically scalar, whose mass is an increasing function of the matter density of the environment). Firstly, we analyze the effects of a chameleon-induced deviation from standard gravity just below the surface of the Sun. In particular, we develop solar models which take the presence of the chameleon into account, and we show that they are inconsistent with the helioseismic data. This inconsistency presents itself not only in the typical chameleon set-up discussed in the literature (where the mass scale of the potential is fine-tuned to the meV), but also if we remove the fine-tuning on the scale of the potential. Secondly, we point out that, in a model recently considered in the literature (which we call the "Modified Fujii's Model"), a conceivable interpretation of the solar oscillations is given by quantum vacuum fluctuations of a chameleon.
    Comment: 17 pages including figures
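The density dependence of the chameleon mass invoked above can be made explicit. In the standard chameleon setup (the generic construction from the literature, not a formula quoted from this article), the field sits in an effective potential combining a runaway self-interaction with a matter coupling:

```latex
V_{\rm eff}(\phi) \;=\; \frac{\Lambda^{4+n}}{\phi^{n}} \;+\; \rho\, e^{\beta \phi / M_{\rm Pl}},
\qquad
m_{\rm eff}^{2} \;=\; \left. \frac{d^{2} V_{\rm eff}}{d\phi^{2}} \right|_{\phi_{\min}(\rho)} .
```

Because the minimum $\phi_{\min}$ moves to smaller field values as the ambient density $\rho$ grows, $m_{\rm eff}$ increases with density. The "fine-tuned" case mentioned in the abstract corresponds to taking $\Lambda$ of order the dark-energy scale, roughly 2.4 meV.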

    Low In solubility and band offsets in the small-x β-Ga2O3/(Ga1−xInx)2O3 system

    Based on first-principles calculations, we show that the maximum reachable concentration x in the (Ga1−xInx)2O3 alloy in the low-x regime (i.e. the In solubility in β-Ga2O3) is around 10%. We then calculate the band alignment at the (100) interface between β-Ga2O3 and (Ga1−xInx)2O3 at 12%, the nearest computationally treatable concentration. The alignment is strongly strain-dependent: it is of type-B staggered when the alloy is epitaxial on Ga2O3, and type-A straddling in a free-standing superlattice. Our results suggest a limited range of applicability of low-In-content GaInO alloys.
    Comment: 3 pages, 3 figures

    The RGB-D Triathlon: Towards Agile Visual Toolboxes for Robots

    Deep networks have brought significant advances in robot perception, improving the capabilities of robots in several visual tasks, ranging from object detection and recognition to pose estimation, semantic scene segmentation and many others. Still, most approaches typically address visual tasks in isolation, resulting in overspecialized models which achieve strong performance in specific applications but work poorly in other (often related) tasks. This is clearly sub-optimal for a robot which is often required to perform multiple visual recognition tasks simultaneously in order to properly act and interact with the environment. This problem is exacerbated by the limited computational and memory resources typically available onboard a robotic platform. The problem of learning flexible models which can handle multiple tasks in a lightweight manner has recently gained attention in the computer vision community, and benchmarks supporting this research have been proposed. In this work we study this problem in the robot vision context, proposing a new benchmark, the RGB-D Triathlon, and evaluating state-of-the-art algorithms in this novel challenging scenario. We also define a new evaluation protocol, better suited to the robot vision setting. Results shed light on the strengths and weaknesses of existing approaches and on open issues, suggesting directions for future research.
    Comment: This work has been submitted to IROS/RAL 201

    Best Sources Forward: Domain Generalization through Source-Specific Nets

    A long-standing problem in visual object categorization is the ability of algorithms to generalize across different testing conditions. The problem has been formalized as a covariate shift between the probability distributions generating the training data (source) and the test data (target), and several domain adaptation methods have been proposed to address this issue. While these approaches have considered the single source-single target scenario, it is plausible to have multiple sources and to require adaptation to any possible target domain. This last scenario, named Domain Generalization (DG), is the focus of our work. Differently from previous DG methods, which learn domain-invariant representations from source data, we design a deep network with multiple domain-specific classifiers, each associated with a source domain. At test time we estimate the probabilities that a target sample belongs to each source domain and exploit them to optimally fuse the classifiers' predictions. To further improve the generalization ability of our model, we also introduce a domain-agnostic component supporting the final classifier. Experiments on two public benchmarks demonstrate the power of our approach.
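The fusion step described in the abstract can be sketched in a few lines: weight each source-specific head's prediction by the estimated probability that the sample belongs to that source. This is a minimal NumPy sketch under our own assumptions (function and variable names are illustrative, not the paper's implementation).

```python
import numpy as np

def fuse_predictions(domain_probs, source_logits):
    """Fuse per-source class predictions, weighted by the estimated
    probability that the sample belongs to each source domain.

    domain_probs : (n_domains,)            softmax output of a domain classifier
    source_logits: (n_domains, n_classes)  logits from each source-specific head
    """
    # convert each head's logits to class probabilities (stable softmax)
    exp = np.exp(source_logits - source_logits.max(axis=1, keepdims=True))
    class_probs = exp / exp.sum(axis=1, keepdims=True)
    # convex combination of the heads, weighted by domain membership
    return domain_probs @ class_probs

# hypothetical example: 3 source domains, 4 classes
domain_probs = np.array([0.6, 0.3, 0.1])
logits = np.array([[2.0, 0.1, 0.1, 0.1],
                   [0.1, 1.5, 0.1, 0.1],
                   [0.5, 0.5, 0.5, 0.5]])
fused = fuse_predictions(domain_probs, logits)
print(fused.argmax())  # class favoured once the dominant source head is weighted in
```

Because the weights and each head's output are both normalized, the fused vector is itself a proper probability distribution.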

    AdaGraph: Unifying Predictive and Continuous Domain Adaptation through Graphs

    The ability to categorize is a cornerstone of visual intelligence, and a key functionality for artificial, autonomous visual machines. This problem will never be solved without algorithms able to adapt and generalize across visual domains. Within the context of domain adaptation and generalization, this paper focuses on the predictive domain adaptation scenario, namely the case where no target data are available and the system has to learn to generalize from annotated source images plus unlabeled samples with associated metadata from auxiliary domains. Our contribution is the first deep architecture that tackles predictive domain adaptation, able to leverage the information brought by the auxiliary domains through a graph. Moreover, we present a simple yet effective strategy that allows us to take advantage of the incoming target data at test time, in a continuous domain adaptation scenario. Experiments on three benchmark databases support the value of our approach.
    Comment: CVPR 2019 (oral)
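One way to picture the graph idea above: each auxiliary domain is a node carrying its metadata and its domain-specific parameters (e.g. batch-norm statistics), and an unseen target domain's parameters are predicted by metadata-weighted propagation from its neighbours. The sketch below is our simplified reading, not the paper's architecture; the Gaussian kernel and all names are our own choices.

```python
import numpy as np

def predict_domain_params(meta_aux, params_aux, meta_target, sigma=1.0):
    """Predict domain-specific parameters for an unseen target domain
    from its metadata, as a kernel-weighted average over the auxiliary
    domains (a sketch of graph-based parameter propagation).

    meta_aux   : (n_domains, meta_dim)  metadata vector of each auxiliary domain
    params_aux : (n_domains, n_params)  per-domain parameters (e.g. BN means)
    meta_target: (meta_dim,)            metadata of the target domain
    """
    d2 = ((meta_aux - meta_target) ** 2).sum(axis=1)  # squared metadata distance
    w = np.exp(-d2 / (2 * sigma ** 2))                # edge weights to the target node
    w /= w.sum()
    return w @ params_aux                             # convex combination of neighbours

# hypothetical metadata (e.g. a viewpoint angle) for 3 auxiliary domains
meta = np.array([[0.0], [1.0], [2.0]])
params = np.array([[0.1, 0.2], [0.5, 0.6], [0.9, 1.0]])  # per-domain parameters
pred = predict_domain_params(meta, params, np.array([1.0]))
```

A target whose metadata sits midway between two auxiliary domains receives parameters interpolated between theirs.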

    Robust Place Categorization With Deep Domain Generalization

    Traditional place categorization approaches in robot vision assume that training and test images have similar visual appearance. Therefore, any seasonal, illumination, and environmental changes typically lead to severe degradation in performance. To cope with this problem, recent works have proposed adopting domain adaptation techniques. While effective, these methods assume that some prior information about the scenario where the robot will operate is available at training time. Unfortunately, in many cases, this assumption does not hold, as we often do not know where a robot will be deployed. To overcome this issue, in this paper, we present an approach that aims at learning classification models able to generalize to unseen scenarios. Specifically, we propose a novel deep learning framework for domain generalization. Our method develops from the intuition that, given a set of different classification models associated with known domains (e.g., corresponding to multiple environments, robots), the best model for a new sample in the novel domain can be computed directly at test time by optimally combining the known models. To implement our idea, we exploit recent advances in deep domain adaptation and design a convolutional neural network architecture with novel layers performing a weighted version of batch normalization. Our experiments, conducted on three common datasets for robot place categorization, confirm the validity of our contribution.
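The "weighted version of batch normalization" mentioned above can be sketched as: keep separate running statistics per known domain, and at test time normalize with a convex combination of those statistics, weighted by the estimated domain membership of the sample. A minimal sketch under our own assumptions (not the paper's layer implementation):

```python
import numpy as np

def weighted_bn(x, domain_weights, means, vars_, eps=1e-5):
    """Normalize a feature vector with a convex combination of per-domain
    batch-norm statistics, weighted by domain-membership probabilities.

    x             : (n_features,)            input features
    domain_weights: (n_domains,)             membership probabilities, sum to 1
    means         : (n_domains, n_features)  per-domain running means
    vars_         : (n_domains, n_features)  per-domain running variances
    """
    mu = domain_weights @ means    # blended mean
    var = domain_weights @ vars_   # blended variance
    return (x - mu) / np.sqrt(var + eps)
```

With a one-hot weight vector this reduces to ordinary batch normalization with that single domain's statistics; soft weights interpolate between domains.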

    Learning Deep NBNN Representations for Robust Place Categorization

    This paper presents an approach for semantic place categorization using data obtained from RGB cameras. Previous studies on visual place recognition and classification have shown that, by considering features derived from pre-trained Convolutional Neural Networks (CNNs) in combination with part-based classification models, high recognition accuracy can be achieved, even in the presence of occlusions and severe viewpoint changes. Inspired by these works, we propose to exploit local deep representations, representing images as sets of regions and applying a Naïve Bayes Nearest Neighbor (NBNN) model for image classification. As opposed to previous methods where CNNs are merely used as feature extractors, our approach seamlessly integrates the NBNN model into a fully-convolutional neural network. Experimental results show that the proposed algorithm outperforms previous methods based on pre-trained CNN models and that, when employed in challenging robot place recognition tasks, it is robust to occlusions, environmental and sensor changes.
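For readers unfamiliar with NBNN, the classification rule itself is short: score each class by summing, over the image's local descriptors, the squared distance to that class's nearest stored descriptor, and pick the class with minimal total cost. A brute-force sketch (the paper integrates this into a network; here it is stand-alone, with illustrative names):

```python
import numpy as np

def nbnn_classify(descriptors, class_descriptors):
    """Naive Bayes Nearest Neighbor (NBNN) classification.

    descriptors      : (n_local, dim)  local features of the test image
    class_descriptors: dict class_id -> (n_c, dim) stored features of that class
    """
    costs = {}
    for c, bank in class_descriptors.items():
        # pairwise squared distances between image descriptors and the class bank
        d2 = ((descriptors[:, None, :] - bank[None, :, :]) ** 2).sum(-1)
        # image-to-class distance: nearest neighbour per descriptor, then sum
        costs[c] = d2.min(axis=1).sum()
    return min(costs, key=costs.get)
```

The key property is that descriptors are matched to classes directly, with no descriptor quantization step.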

    Assessment Methods for Innovative Operational Measures and Technologies for Intermodal Freight Terminals

    Freight transport by rail is a complex topic and, in recent years, a central issue of European policy. The evolution of legislation and the 2011 White Paper demonstrate the European intention to re-launch this sector. The challenge is to promote the intermodal transport system at the expense of road freight transport. In this context, intermodal freight terminals play a primary role in the supply chain: they are the connection points between the various transport nodes, where freight is handled, stored and transferred between different modes on its way to the final customer. Achieving the goal proposed by the EC requires both improving the performance of existing intermodal freight terminals and developing innovative ones. Many terminal performance improvements have been proposed and sometimes experimented with. They are based both on operational measures (e.g. horizontal and parallel handling, faster and fully direct handling) and on innovative technologies (e.g. automatic systems for horizontal and parallel handling, automated gates for data exchange) inside the terminals, often with contradictory results. The research work described in this paper (developed within the EU project Capacity4Rail) focusses on assessing the effects that these innovations can have in intermodal freight terminals. The innovative operational measures and technologies have been combined into different scenarios, evaluated with a methodological approach that includes, among other tools, analytical methods and simulation models. The outputs of this assessment method are key performance indicators (KPIs), set up according to terminal typologies and related to different aspects (e.g. management, operation and organization). In the present work, suitable KPIs (e.g. total/partial transit times) have been applied.
    Finally, in addition to the methodological framework illustrated, a real case study is presented: the intermodal rail-road freight terminal of Munich-Riem (Germany)
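To make the transit-time KPIs mentioned above concrete, here is a toy sketch computing a total transit time and a few partial ones from timestamped handling events. The event names and the split into phases are our illustration, not the KPI definitions used in Capacity4Rail.

```python
from datetime import datetime

def transit_time_kpis(events):
    """Compute total and partial transit times (in hours) for a consignment
    moving through a terminal, from (event_name, ISO timestamp) pairs."""
    t = {name: datetime.fromisoformat(ts) for name, ts in events}
    hours = lambda a, b: (t[b] - t[a]).total_seconds() / 3600.0
    total = hours("arrival", "exit")
    partials = {
        "gate_to_handling": hours("arrival", "handling_start"),
        "handling": hours("handling_start", "handling_end"),
        "handling_to_exit": hours("handling_end", "exit"),
    }
    return total, partials

# hypothetical event log for one consignment
events = [("arrival", "2019-05-01T08:00"), ("handling_start", "2019-05-01T09:30"),
          ("handling_end", "2019-05-01T11:00"), ("exit", "2019-05-01T14:00")]
total, partials = transit_time_kpis(events)
print(total)  # 6.0 hours gate-to-gate
```

Aggregating such per-consignment times over a scenario gives the KPI values compared across terminal typologies.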

    Kitting in the Wild through Online Domain Adaptation

    Technological developments call for increasing perception and action capabilities of robots. Among other skills, vision systems that can adapt to any possible change in the working conditions are needed. Since these conditions are unpredictable, we need benchmarks which allow us to assess the generalization and robustness capabilities of our visual recognition algorithms. In this work we focus on robotic kitting in unconstrained scenarios. As a first contribution, we present a new visual dataset for the kitting task. Differently from standard object recognition datasets, we provide images of the same objects acquired under various conditions where camera, illumination and background are changed. This novel dataset allows for testing the robustness of robot visual recognition algorithms to a series of different domain shifts, both in isolation and combined. Our second contribution is a novel online adaptation algorithm for deep models, based on batch-normalization layers, which allows a model to be continuously adapted to the current working conditions. Differently from standard domain adaptation algorithms, it does not require any image from the target domain at training time. We benchmark the performance of the algorithm on the proposed dataset, showing its capability to fill the gap between the performance of a standard architecture and its counterpart adapted offline to the given target domain.
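The core of a batch-normalization-based online adaptation scheme like the one described above is small: keep updating the layer's running statistics with the batches arriving at test time, so normalization tracks the current working conditions. A minimal sketch under our own assumptions (momentum value and class shape are our choices, not the paper's):

```python
import numpy as np

class OnlineBN:
    """Batch-norm-style layer whose statistics are updated continuously at
    test time with an exponential moving average over incoming batches."""

    def __init__(self, mean, var, momentum=0.1, eps=1e-5):
        self.mean, self.var = mean.copy(), var.copy()
        self.momentum, self.eps = momentum, eps

    def __call__(self, batch):
        # update running statistics with the current target-domain batch
        m, v = batch.mean(axis=0), batch.var(axis=0)
        self.mean = (1 - self.momentum) * self.mean + self.momentum * m
        self.var = (1 - self.momentum) * self.var + self.momentum * v
        # normalize with the adapted statistics
        return (batch - self.mean) / np.sqrt(self.var + self.eps)

# start from source-domain statistics, then feed target batches
bn = OnlineBN(np.zeros(2), np.ones(2))
out = bn(np.ones((8, 2)) * 5.0)  # running mean drifts toward the target batch mean
```

No target images are needed at training time; the statistics drift toward the deployment domain as batches stream in.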

    Helioseismology and screening of nuclear reactions in the Sun

    We show that models for the screening of nuclear reactions in the Sun can be tested by means of helioseismology. As is well known, solar models using the weak-screening factors are in agreement with the data. We find that the solar model calculated with the anti-screening factors of Tsytovitch is not consistent with helioseismology, both for the sound-speed profile and for the depth of the convective envelope. Moreover, the difference between the no-screening and the weak-screening model is significant in comparison with the helioseismic uncertainty. In other words, the existence of screening can be proved by means of helioseismology.
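For reference, the weak-screening enhancement factor referred to above is the standard Salpeter form (standard notation assumed, not quoted from the paper):

```latex
f_{12} \;=\; \exp\!\left( \frac{Z_{1} Z_{2}\, e^{2}}{k_{B} T \, R_{D}} \right),
```

where $Z_{1}$ and $Z_{2}$ are the charges of the reacting nuclei, $T$ the plasma temperature, and $R_{D}$ the Debye radius of the solar plasma. Weak screening gives $f_{12} > 1$ (rate enhancement), whereas an anti-screening prescription would suppress the reaction rates, which is what the helioseismic comparison discriminates.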