58 research outputs found

    Discovery in Physics

    Volume 2 covers knowledge discovery in particle and astroparticle physics. Instruments gather petabytes of data, and machine learning is used to process these vast amounts of data and to detect relevant examples efficiently. The physical knowledge is encoded in the simulations used to train the machine learning models. The interpretation of the learned models serves to expand the physical knowledge, resulting in a cycle of theory enhancement.

    Automatic Foreground Initialization for Binary Image Segmentation

    Foreground segmentation is a fundamental problem in computer vision. A popular approach to foreground extraction is through graph cuts in an energy minimization framework. Most existing graph-cuts-based image segmentation algorithms rely on the user's initialization. In this work, we aim to find an automatic initialization for graph cuts. Unlike many previous methods, no additional training dataset is needed. Collecting a training set is not only expensive and time consuming, but it may also bias the algorithm towards the particular data distribution of the collected dataset. We assume that the foreground differs significantly from the background in some unknown feature space and try to find the rectangle that is most different from the rest of the image by measuring histogram dissimilarity. We extract multiple features, design a ranking function to select good features, and compute histograms based on integral images. Standard graph-cuts binary segmentation is then applied, based on the color models learned from the initial rectangular segmentation. The steps of refining the color models and re-segmenting the image are iterated in the GrabCut manner until convergence, which is guaranteed. The foreground detection algorithm performs well, and the segmentation is further improved by graph cuts. We evaluate our method on three datasets with manually labelled foreground regions and show that we reach a level of accuracy similar to previous work. Our approach, however, has the advantage over previous work that it does not require a training dataset.
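    For illustration, here is a minimal sketch (not the authors' code) of the core idea of scoring a candidate foreground rectangle by its histogram dissimilarity to the rest of the image, using an integral histogram so that any rectangle's histogram is obtained in constant time per bin. The chi-squared distance and the toy feature quantization are assumptions.

```python
# Sketch: integral-histogram rectangle scoring (illustrative, not the paper's code).
import numpy as np

def integral_histogram(feature, n_bins):
    """feature: HxW array of integer bin indices in [0, n_bins)."""
    h, w = feature.shape
    one_hot = np.zeros((h, w, n_bins), dtype=np.float64)
    one_hot[np.arange(h)[:, None], np.arange(w)[None, :], feature] = 1.0
    # Cumulative sums over rows and columns give the integral histogram.
    return one_hot.cumsum(axis=0).cumsum(axis=1)

def rect_histogram(int_hist, top, left, bottom, right):
    """Histogram of the rectangle [top:bottom, left:right) via four lookups."""
    h = int_hist[bottom - 1, right - 1].copy()
    if top > 0:
        h -= int_hist[top - 1, right - 1]
    if left > 0:
        h -= int_hist[bottom - 1, left - 1]
    if top > 0 and left > 0:
        h += int_hist[top - 1, left - 1]
    return h

def chi2(p, q, eps=1e-9):
    p = p / (p.sum() + eps)
    q = q / (q.sum() + eps)
    return 0.5 * np.sum((p - q) ** 2 / (p + q + eps))

def score_rectangle(int_hist, top, left, bottom, right):
    """Dissimilarity between the rectangle and the rest of the image."""
    total = int_hist[-1, -1]
    inside = rect_histogram(int_hist, top, left, bottom, right)
    outside = total - inside
    return chi2(inside, outside)

# Usage: quantize a feature (e.g. hue) into bins, scan candidate rectangles,
# and keep the highest-scoring one as the automatic initialization for graph cuts.
feature = np.random.randint(0, 16, size=(120, 160))   # toy stand-in for a real feature
ih = integral_histogram(feature, n_bins=16)
print(score_rectangle(ih, 30, 40, 90, 120))
```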

    Discovering Influential Nodes from Social Trust Network

    The goal of viral marketing is that, by virtue of word-of-mouth spread, a small set of influential customers can influence a much larger number of customers. The influence maximization (IM) task is to discover such influential nodes (or customers) in a social network. Existing algorithms adopt greedy-based approaches, which assume only positive influence among users. But in real-life networks, such as trust networks, one can also be negatively influenced. In this research we propose a model, called the T-GT model, that considers both positive and negative influence. To solve IM under this model, we consider a trust network in which relationships among users are either 'trust' or 'distrust'. We first compute positive and negative influence by mining frequent patterns of performed actions. Then, using local search, we propose a new algorithm called MineSeedLS. Experimental results on a real trust network show that our approach outperforms the greedy-based approach by almost 35%.
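    As a rough illustration only, the following sketch shows seed selection by local search on a signed (trust/distrust) network. It is not the paper's T-GT model or MineSeedLS: the independent-cascade-style process in which distrust edges flip the sign of the propagated influence, the edge probabilities, and the single-node swap search are all simplifying assumptions.

```python
# Sketch: local-search seed selection on a signed influence network (illustrative).
import random

def simulate_spread(edges, seeds, trials=200):
    """edges: dict node -> list of (neighbor, sign, prob). Returns avg. positive spread."""
    total = 0.0
    for _ in range(trials):
        state = {s: +1 for s in seeds}            # +1 positively, -1 negatively influenced
        frontier = list(seeds)
        while frontier:
            nxt = []
            for u in frontier:
                for v, sign, p in edges.get(u, []):
                    if v not in state and random.random() < p:
                        state[v] = state[u] * sign   # a distrust edge (sign = -1) flips influence
                        nxt.append(v)
            frontier = nxt
        total += sum(1 for s in state.values() if s > 0)
    return total / trials

def local_search_seeds(edges, nodes, k, trials=200):
    """Start from a random seed set and improve it by single-node swaps."""
    seeds = set(random.sample(nodes, k))
    best = simulate_spread(edges, seeds, trials)
    improved = True
    while improved:
        improved = False
        for out in list(seeds):
            for cand in nodes:
                if cand in seeds:
                    continue
                trial = (seeds - {out}) | {cand}
                val = simulate_spread(edges, trial, trials)
                if val > best:
                    seeds, best, improved = trial, val, True
                    break
            if improved:
                break
    return seeds, best

# Usage on a toy signed network (nodes 0-3, sign -1 marks a distrust edge):
edges = {0: [(1, +1, 0.5), (2, -1, 0.5)], 1: [(3, +1, 0.5)], 2: [(3, -1, 0.5)]}
print(local_search_seeds(edges, nodes=[0, 1, 2, 3], k=1, trials=100))
```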

    LARNet: Towards Lightweight, Accurate and Real-time Salient Object Detection

    Salient object detection (SOD) has rapidly developed in recent years, and detection performance has greatly improved. However, the price of these improvements is increasingly complex networks that require more computing resources and sacrifice real-time performance. This makes it difficult to deploy these approaches on devices with limited computing resources (such as mobile phones, embedded platforms, etc.). Recently developed lightweight SOD models, in turn, compromise detection accuracy and real-time performance in demanding practical application scenarios. To solve these problems, we propose a novel lightweight SOD method called LARNet and, depending on application requirements, its corresponding extremely lightweight method LARNet*. These methods balance the relationship between lightweight requirements, detection accuracy and real-time performance. First, we propose a saliency backbone network tailored for SOD, which removes the need for pre-training on ImageNet and effectively reduces feature redundancy. Subsequently, we propose a novel context gating module (CGM), which simulates the physiological mechanism of human brain neurons and visual information processing, and realizes the deep fusion of multi-level features at the global level. Finally, the saliency map is output after fusion of multi-level features. Extensive experiments on popular benchmark datasets demonstrate that the proposed LARNet (LARNet*) achieves 98 (113) FPS on a GPU and 3 (6) FPS on a CPU. With approximately 680K (90K) parameters, the model has significant performance advantages over (extremely) lightweight methods, even surpassing some heavyweight models.
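    As a hedged illustration of channel gating driven by global context for multi-level feature fusion, the sketch below is not the paper's CGM; the module name ContextGate, the layer sizes, the global-average-pooling gate and the additive fusion are all assumptions about what such a block might look like.

```python
# Sketch: global-context channel gating for fusing multi-level features (illustrative).
import torch
import torch.nn as nn

class ContextGate(nn.Module):
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)             # global context vector
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),                                # per-channel gate in [0, 1]
        )

    def forward(self, low_level, high_level):
        # Upsample the coarser high-level features, gate them with global context,
        # and fuse with the low-level features by addition.
        high = nn.functional.interpolate(
            high_level, size=low_level.shape[-2:], mode="bilinear", align_corners=False)
        gate = self.fc(self.pool(high))
        return low_level + gate * high

# Usage on toy feature maps:
cg = ContextGate(channels=64)
out = cg(torch.randn(1, 64, 56, 56), torch.randn(1, 64, 28, 28))
print(out.shape)  # torch.Size([1, 64, 56, 56])
```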

    Exploring Aspects of Image Segmentation: Diversity, Global Reasoning, and Panoptic Formulation

    Image segmentation is the task of partitioning an image into meaningful regions. It is a fundamental part of the visual scene understanding problem with many real-world applications, such as photo editing, robotics, navigation, autonomous driving and bio-imaging. It has been extensively studied for several decades and has transformed into a set of problems which define the meaningfulness of regions differently. The set includes two high-level tasks: semantic segmentation (each region assigned a semantic label) and instance segmentation (each region representing an object instance). Due to their practical importance, both tasks attract a lot of research attention. In this work we explore several aspects of these tasks and propose novel approaches and new paradigms. While most research efforts are directed at developing models that produce a single best segmentation, we consider the task of producing multiple diverse solutions given a single input image. This allows hedging against the intrinsic ambiguity of the segmentation task. We propose a new global model with multiple solutions for a trained segmentation model. This new model generalizes previously proposed approaches for the task. We present several approximate and exact inference techniques that suit a wide spectrum of possible applications and demonstrate superior performance compared to previous methods. Then, we present a new bottom-up paradigm for the instance segmentation task. The new scheme is substantially different from previous approaches that produce each instance independently. Our approach, named InstanceCut, reasons globally about the optimal partitioning of an image into instances based on local clues. We use two types of local pixel-level clues extracted by efficient fully convolutional networks: (i) an instance-agnostic semantic segmentation and (ii) instance boundaries. Despite the conceptual simplicity of our approach, it demonstrates promising performance. Finally, we put forward a novel panoptic segmentation task. It unifies the semantic and instance segmentation tasks. The proposed task requires generating a coherent scene segmentation that is rich and complete, an important step towards real-world vision systems. While early work in computer vision addressed related image/scene parsing tasks, these are not currently popular, possibly due to a lack of appropriate metrics or associated recognition challenges. To address this, we first offer a novel panoptic quality metric that captures performance for all classes (stuff and things) in an interpretable and unified manner. Using this metric, we perform a rigorous study of both human and machine performance for panoptic segmentation on three existing datasets, revealing interesting insights about the task. The aim of our work is to revive the interest of the community in a more unified view of image segmentation.
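    The panoptic quality (PQ) metric mentioned above can be illustrated with a short sketch. The version below follows the published definition, where predicted and ground-truth segments match when their IoU exceeds 0.5 and PQ = (sum of matched IoUs) / (TP + FP/2 + FN/2), but it omits void-region handling and per-class averaging, and the toy masks are assumptions for the usage example.

```python
# Sketch: simplified panoptic quality for one class (illustrative).
import numpy as np

def iou(a, b):
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def panoptic_quality(pred_masks, gt_masks):
    matched_ious, used_gt = [], set()
    for p in pred_masks:
        for j, g in enumerate(gt_masks):
            if j in used_gt:
                continue
            v = iou(p, g)
            if v > 0.5:                      # IoU > 0.5 guarantees a unique match
                matched_ious.append(v)
                used_gt.add(j)
                break
    tp = len(matched_ious)
    fp = len(pred_masks) - tp                # unmatched predictions
    fn = len(gt_masks) - tp                  # unmatched ground-truth segments
    denom = tp + 0.5 * fp + 0.5 * fn
    return sum(matched_ious) / denom if denom else 0.0

# Toy usage: one correct prediction, one spurious prediction, one missed segment.
gt = [np.eye(4, dtype=bool), ~np.eye(4, dtype=bool)]
pred = [np.eye(4, dtype=bool), np.zeros((4, 4), dtype=bool) | (np.arange(4) == 0)]
print(panoptic_quality(pred, gt))            # 0.5 = 1.0 / (1 + 0.5 + 0.5)
```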