
    Vacuum Alignment in SUSY A4 Models

    In this note we discuss the vacuum alignment in supersymmetric models with spontaneously broken flavour symmetries in the presence of soft supersymmetry (SUSY) breaking terms. We show that the inclusion of soft SUSY breaking terms can give rise to non-vanishing vacuum expectation values (VEVs) for the auxiliary components of the flavon fields. These non-zero VEVs can have an important impact on the phenomenology of this class of models, since they can induce an additional flavour-violating contribution to the sfermion soft mass matrix of right-left (RL) type. We carry out an explicit computation in a class of SUSY A4 models predicting tri-bimaximal mixing in the lepton sector. The flavour symmetry breaking sector is described in terms of flavon and driving supermultiplets. We find non-vanishing VEVs for the auxiliary components of the flavon fields and for the scalar components of the driving fields, which are of order m_{SUSY} x and m_{SUSY}, respectively. Here m_{SUSY} is the generic soft SUSY breaking scale, which is expected to be around 1 TeV, and x is the VEV of the scalar components of the flavon fields. Another effect of these VEVs can be the generation of a mu term. Comment: 23 pages; added a new section on the relation to Supergravity; version accepted for publication in JHEP

    Deep Regionlets for Object Detection

    In this paper, we propose a novel object detection framework named "Deep Regionlets" by establishing a bridge between deep neural networks and conventional detection schema for accurate generic object detection. Motivated by the abilities of regionlets for modeling object deformation and multiple aspect ratios, we incorporate regionlets into an end-to-end trainable deep learning framework. The deep regionlets framework consists of a region selection network and a deep regionlet learning module. Specifically, given a detection bounding box proposal, the region selection network provides guidance on where to select regions to learn the features from. The regionlet learning module focuses on local feature selection and transformation to alleviate local variations. To this end, we first realize non-rectangular region selection within the detection framework to accommodate variations in object appearance. Moreover, we design a "gating network" within the regionlet learning module to enable soft regionlet selection and pooling. The Deep Regionlets framework is trained end-to-end without additional effort. We perform ablation studies and conduct extensive experiments on the PASCAL VOC and Microsoft COCO datasets. The proposed framework outperforms state-of-the-art algorithms, such as RetinaNet and Mask R-CNN, even without additional segmentation labels. Comment: Accepted to ECCV 2018
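    The soft regionlet selection idea can be illustrated with a small sketch. This is not the authors' implementation: the gating network is reduced here to a single linear scoring layer whose softmax output weights each regionlet before pooling, and all names and shapes are illustrative assumptions.

```python
import numpy as np

def gate_regionlets(features, gate_weights):
    """Soft regionlet selection: a gating network scores each regionlet,
    a softmax turns the scores into weights, and the features are pooled
    as a weighted sum (a convex combination of the regionlet features).

    features: (R, D) array of R regionlet feature vectors.
    gate_weights: (D,) parameters of the (hypothetical) gating layer.
    """
    scores = features @ gate_weights               # one scalar score per regionlet
    scores = scores - scores.max()                 # subtract max for numerical stability
    gates = np.exp(scores) / np.exp(scores).sum()  # softmax over the R regionlets
    return gates @ features                        # soft-pooled feature vector, shape (D,)

rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 8))   # 4 regionlets with 8-dim features
w = rng.normal(size=8)
pooled = gate_regionlets(feats, w)
```

    Because the gates are a softmax, the pooled vector always stays inside the convex hull of the individual regionlet features, which is what makes the selection "soft" and differentiable.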

    Time series classification with ensembles of elastic distance measures

    Several alternative distance measures for comparing time series have recently been proposed and evaluated on time series classification (TSC) problems. These include variants of dynamic time warping (DTW), such as weighted and derivative DTW, and edit distance-based measures, including longest common subsequence, edit distance with real penalty, time warp with edit, and move–split–merge. These measures have the common characteristic that they operate in the time domain and compensate for potential localised misalignment through some elastic adjustment. Our aim is to experimentally test two hypotheses related to these distance measures. Firstly, we test whether there is any significant difference in accuracy for TSC problems between nearest neighbour classifiers using these distance measures. Secondly, we test whether combining these elastic distance measures through simple ensemble schemes gives significantly better accuracy. We test these hypotheses by carrying out one of the largest experimental studies ever conducted into time series classification. Our first key finding is that there is no significant difference between the elastic distance measures in terms of classification accuracy on our data sets. Our second finding, and the major contribution of this work, is to define an ensemble classifier that significantly outperforms the individual classifiers. We also demonstrate that the ensemble is more accurate than approaches not based on the time domain. Nearly all TSC papers in the data mining literature cite DTW (with warping window set through cross-validation) as the benchmark for comparison. We believe that our ensemble is the first ever classifier to significantly outperform DTW and as such raises the bar for future work in this area.
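    The ensemble idea is straightforward to sketch: train a 1-NN classifier per distance measure and combine their predictions by a vote. The snippet below is a minimal illustration, not the paper's experimental code; it uses only classic DTW and squared Euclidean distance as stand-ins for the full set of elastic measures, and a plain majority vote rather than any weighting scheme.

```python
import numpy as np

def dtw(a, b):
    """Classic dynamic time warping distance between two 1-D series."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            # extend the cheapest of the three admissible warping paths
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def euclid(a, b):
    """Squared Euclidean distance (the rigid, non-elastic baseline)."""
    return float(np.sum((np.asarray(a) - np.asarray(b)) ** 2))

def nn_predict(train, labels, query, dist):
    """1-nearest-neighbour prediction under a given distance measure."""
    d = [dist(t, query) for t in train]
    return labels[int(np.argmin(d))]

def ensemble_predict(train, labels, query, measures):
    """Majority vote over the per-measure 1-NN classifiers."""
    votes = [nn_predict(train, labels, query, m) for m in measures]
    return max(set(votes), key=votes.count)

# Toy data: one "bump" prototype and one "dip" prototype.
train = [np.array([0, 0, 1, 2, 1, 0]), np.array([5, 5, 4, 3, 4, 5])]
labels = ["bump", "dip"]
pred = ensemble_predict(train, labels, np.array([0, 1, 2, 2, 1, 0]),
                        [dtw, euclid])
```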

    A Review of Object Detection Models based on Convolutional Neural Network

    Convolutional Neural Networks (CNNs) have become the state of the art for object detection in image tasks. In this chapter, we explain different state-of-the-art CNN-based object detection models. We organize this review by categorizing the detection models according to two different approaches: the two-stage approach and the one-stage approach. The chapter traces advancements in object detection models from R-CNN to the latest RefineDet, and discusses the model description and training details of each model. We also draw a comparison among those models. Comment: 17 pages, 11 figures, 1 table

    'Part'ly first among equals: Semantic part-based benchmarking for state-of-the-art object recognition systems

    An examination of object recognition challenge leaderboards (ILSVRC, PASCAL-VOC) reveals that the top-performing classifiers typically exhibit small differences amongst themselves in terms of error rate/mAP. To better differentiate the top performers, additional criteria are required. Moreover, the (test) images, on which the performance scores are based, predominantly contain fully visible objects. Therefore, `harder' test images, mimicking the challenging conditions (e.g. occlusion) in which humans routinely recognize objects, need to be utilized for benchmarking. To address the concerns mentioned above, we make two contributions. First, we systematically vary the level of local object-part content, global detail and spatial context in images from PASCAL VOC 2010 to create a new benchmarking dataset dubbed PPSS-12. Second, we propose an object-part based benchmarking procedure which quantifies classifiers' robustness to a range of visibility and contextual settings. The benchmarking procedure relies on a semantic similarity measure that naturally addresses potential semantic granularity differences between the category labels in training and test datasets, thus eliminating manual mapping. We use our procedure on the PPSS-12 dataset to benchmark top-performing classifiers trained on the ILSVRC-2012 dataset. Our results show that the proposed benchmarking procedure enables additional differentiation among state-of-the-art object classifiers in terms of their ability to handle missing content and insufficient object detail. Given this capability for additional differentiation, our approach can potentially supplement existing benchmarking procedures used in object recognition challenge leaderboards. Comment: Extended version of our ACCV-2016 paper. Author formatting modified
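    A semantic similarity measure over category labels can be sketched with a Wu-Palmer-style score on a hypernym taxonomy: labels sharing a deep common ancestor score high, labels that only meet at the root score low. This toy example is a hedged stand-in for the measure actually used in the paper; the taxonomy and labels below are invented for illustration.

```python
def wup_similarity(taxonomy, a, b):
    """Wu-Palmer-style semantic similarity over a toy hypernym taxonomy.
    taxonomy maps each label to its parent; the root maps to None.
    Returns 2 * depth(LCS) / (depth(a) + depth(b)) in (0, 1]."""
    def ancestors(x):
        path = []
        while x is not None:
            path.append(x)
            x = taxonomy[x]
        return path
    pa, pb = ancestors(a), ancestors(b)
    lcs = next(x for x in pa if x in pb)   # lowest common subsumer
    depth = lambda x: len(ancestors(x))
    return 2 * depth(lcs) / (depth(a) + depth(b))

# Invented miniature taxonomy for illustration only.
tax = {"entity": None, "animal": "entity", "dog": "animal",
       "cat": "animal", "vehicle": "entity", "car": "vehicle"}
sim_near = wup_similarity(tax, "dog", "cat")   # share parent "animal": high
sim_far = wup_similarity(tax, "dog", "car")    # only share the root: low
```

    Such a graded score lets a benchmark give partial credit when the training and test label sets differ in granularity (e.g. "dog" predicted for "animal"), instead of requiring a manual label mapping.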

    HoughNet: Integrating Near and Long-Range Evidence for Bottom-Up Object Detection

    © 2020, Springer Nature Switzerland AG. This paper presents HoughNet, a one-stage, anchor-free, voting-based, bottom-up object detection method. Inspired by the Generalized Hough Transform, HoughNet determines the presence of an object at a certain location by the sum of the votes cast on that location. Votes are collected from both near and long-distance locations based on a log-polar vote field. Thanks to this voting mechanism, HoughNet is able to integrate both near and long-range, class-conditional evidence for visual recognition, thereby generalizing and enhancing current object detection methodology, which typically relies on only local evidence. On the COCO dataset, HoughNet’s best model achieves 46.4 AP (and 65.1 AP50), performing on par with the state-of-the-art in bottom-up object detection and outperforming most major one-stage and two-stage methods. We further validate the effectiveness of our proposal in another task, namely, “labels to photo” image generation, by integrating the voting module of HoughNet into two different GAN models and showing that the accuracy is significantly improved in both cases. Code is available at https://github.com/nerminsamet/houghnet
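    The log-polar vote field can be illustrated by mapping a voter-to-target offset into a bin index: angular bins are uniform, while radial bins grow logarithmically, so far-away evidence is pooled more coarsely than nearby evidence. A minimal sketch, with bin counts and ranges chosen arbitrarily rather than taken from HoughNet:

```python
import numpy as np

def log_polar_bin(dx, dy, n_angle=6, n_radius=3, r_max=64.0):
    """Map a relative offset (dx, dy) from a voter to a target location
    into a log-polar bin index (Generalized-Hough-style vote field).
    Angular bins are uniform; radial bin edges sit at r_max ** (k / n_radius),
    so each radial ring is coarser than the one inside it."""
    r = np.hypot(dx, dy)
    theta = np.arctan2(dy, dx) % (2 * np.pi)
    a = int(theta / (2 * np.pi) * n_angle) % n_angle   # uniform angular bin
    if r < 1.0:
        rad = 0                                        # innermost ring
    else:
        # logarithmic radial binning, clamped to the outermost ring
        rad = min(int(np.log(r) / np.log(r_max) * n_radius), n_radius - 1)
    return rad * n_angle + a
```

    At inference time, each location would add its class scores into the bins of every target it can vote for, and an object is declared where the accumulated votes peak; the snippet above only shows the geometric binning step.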

    The Golden Ratio Prediction for the Solar Angle from a Natural Model with A5 Flavour Symmetry

    We formulate a consistent model predicting, in the leading order approximation, maximal atmospheric mixing angle, vanishing reactor angle and tan \theta_{12} = 1/\phi, where \phi is the Golden Ratio. The model is based on the flavour symmetry A5 \times Z5 \times Z3, spontaneously broken by a set of flavon fields. By minimizing the scalar potential of the theory up to the next-to-leading order in the symmetry breaking parameter, we demonstrate that this mixing pattern is naturally achieved in a finite portion of the parameter space, through the vacuum alignment of the flavon fields. The leading order approximation is stable against higher-order corrections. We also compare our construction to other models based on discrete symmetry groups. Comment: 28 pages, 2 figures. Minor changes, references added. Corrected typos in Appendix A. Version appeared in JHEP
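    The headline prediction is easy to check numerically: with tan \theta_{12} = 1/\phi, the solar angle comes out near 31.7 degrees.

```python
import math

# Golden ratio prediction for the solar mixing angle: tan(theta_12) = 1 / phi.
phi = (1 + math.sqrt(5)) / 2                 # golden ratio, about 1.618
theta_12 = math.degrees(math.atan(1 / phi))  # about 31.7 degrees
```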

    Contextual Object Detection with a Few Relevant Neighbors

    A natural way to improve the detection of objects is to consider the contextual constraints imposed by the detection of additional objects in a given scene. In this work, we exploit the spatial relations between objects in order to improve detection capacity, as well as analyze various properties of the contextual object detection problem. To precisely calculate context-based probabilities of objects, we developed a model that examines the interactions between objects in an exact probabilistic setting, in contrast to previous methods that typically utilize approximations based on pairwise interactions. Such a scheme is facilitated by the realistic assumption that the existence of an object in any given location is influenced by only a few informative locations in space. Based on this assumption, we suggest a method for identifying these relevant locations and integrating them into a mostly exact calculation of probability based on their raw detector responses. This scheme is shown to improve detection results and provides unique insights about the process of contextual inference for object detection. We show that it is generally difficult to learn that a particular object reduces the probability of another, and that in cases when the context and detector strongly disagree this learning becomes virtually impossible for the purposes of improving the results of an object detector. Finally, we demonstrate improved detection results through use of our approach as applied to the PASCAL VOC and COCO datasets.
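    The "few relevant locations" assumption is what keeps exact inference tractable: with only k informative neighbours, one can enumerate all 2^k neighbour states rather than approximate. Below is a minimal sketch of such an exact posterior for a binary presence variable, with invented likelihoods and pairwise compatibilities, not the paper's learned model:

```python
import itertools

def contextual_posterior(det_lik, nbr_lik, pair_pot, prior=0.5):
    """Exact posterior that an object is present at a location, given its own
    raw detector response and a few relevant neighbour locations, computed by
    brute-force enumeration over the neighbours' binary states (feasible only
    because a handful of locations are assumed informative).

    det_lik[s]     : P(detector response | x = s) for s in {0, 1}
    nbr_lik[i][s]  : P(neighbour i's response | n_i = s)
    pair_pot[s][t] : compatibility of (x = s, n_i = t)
    """
    k = len(nbr_lik)
    joint = [0.0, 0.0]
    for s in (0, 1):
        p_x = (prior if s else 1 - prior) * det_lik[s]
        for states in itertools.product((0, 1), repeat=k):
            p = p_x
            for i, t in enumerate(states):
                p *= nbr_lik[i][t] * pair_pot[s][t]
            joint[s] += p          # marginalize the neighbour states out
    return joint[1] / (joint[0] + joint[1])

# A weak raw detection boosted by one strongly co-occurring neighbour:
post = contextual_posterior(
    det_lik=[0.6, 0.4],                 # detector alone slightly favours "absent"
    nbr_lik=[[0.1, 0.9]],               # neighbour detector says "present"
    pair_pot=[[0.9, 0.1], [0.2, 0.8]],  # the two objects tend to co-occur
)
```

    In this toy setup the context flips the decision: the detector alone favours "absent", but the co-occurring neighbour pushes the exact posterior above 0.5.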

    Position and Velocity Control for Telemanipulation with Interoperability Protocol

    In this paper we describe how a generic interoperability telerobotics protocol can be applied to master-slave robotic systems operating in position-position, position-speed and hybrid control modes. The interoperability protocol allows robust and efficient data exchange for teleoperation systems; however, it has not been shown how it can accommodate switching between position and rate control modes. Here we propose a general framework for hybrid position and rate control modes with the interoperability protocol. Furthermore, we demonstrate experimentally that the framework is suitable for robotic teleoperation systems in which a human operator can switch between position-position and position-speed mapping of the master and slave robots' workspaces.
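    A hybrid position/rate mapping can be sketched as follows; the class, gains, and clutching logic are illustrative assumptions, not part of the interoperability protocol itself. The key point is re-indexing the master offset at each mode switch so the slave command does not jump:

```python
class HybridMaster:
    """Minimal sketch of switching between position-position and
    position-rate workspace mapping for a telemanipulation master
    (names and gains are invented for illustration)."""

    def __init__(self, pos_scale=1.0, rate_gain=0.05):
        self.pos_scale = pos_scale
        self.rate_gain = rate_gain
        self.mode = "position"
        self.slave_ref = 0.0       # commanded slave position
        self.clutch_offset = 0.0   # master pose at the last mode switch

    def switch_mode(self, mode, master_pos):
        self.mode = mode
        self.clutch_offset = master_pos  # re-index so the command is continuous

    def update(self, master_pos, dt=0.01):
        delta = master_pos - self.clutch_offset
        if self.mode == "position":
            self.slave_ref += self.pos_scale * delta
            self.clutch_offset = master_pos      # incremental position mapping
        else:
            # rate mode: the master offset commands a slave velocity
            self.slave_ref += self.rate_gain * delta * dt
        return self.slave_ref

m = HybridMaster()
p1 = m.update(0.1)                   # position mode: slave follows the increment
m.switch_mode("rate", master_pos=0.1)
p2 = m.update(0.2)                   # rate mode: offset drives a small velocity
```

    Clutching at the switch instant is what lets the operator move between a bounded master workspace (position mapping) and an unbounded slave workspace (rate mapping) without a discontinuity in the slave command.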

    Decaying Dark Matter in the Supersymmetric Standard Model with Freeze-in and Seesaw mechanisms

    Inspired by decaying dark matter (DM), which can explain cosmic ray anomalies naturally, we consider the supersymmetric Standard Model with three right-handed neutrinos (RHNs) and R-parity, and introduce a TeV-scale DM sector with two fields \phi_{1,2} and a Z_3 discrete symmetry. The DM sector only interacts with the RHNs via a very heavy field exchange, which allows us to explain the cosmic ray anomalies. With the seesaw mechanism dominated by the second right-handed neutrino N_2 at a low scale around 10^4 GeV, we show that \phi_{1,2} can obtain vacuum expectation values around the TeV scale, and then the lightest state from \phi_{1,2} is the decaying DM with a lifetime around 10^{26} s. In particular, the very long DM lifetime is related to the tiny neutrino masses, and the dominant DM decay channels to \mu and \tau are related to the approximate \mu-\tau symmetry. Furthermore, the correct DM relic density can be obtained via the freeze-in mechanism, the small-scale problem for the power spectrum can be solved due to the decays of the R-parity-odd meta-stable states in the DM sector, and the baryon asymmetry can be generated via soft leptogenesis. Comment: 24 pages, 3 figures