
    Image classification by visual bag-of-words refinement and reduction

    This paper presents a new framework for visual bag-of-words (BOW) refinement and reduction to overcome the drawbacks associated with the visual BOW model, which has been widely used for image classification. Although very influential in the literature, the traditional visual BOW model has two distinct drawbacks. Firstly, for efficiency purposes, the visual vocabulary is commonly constructed by directly clustering the low-level visual feature vectors extracted from local keypoints, without considering the high-level semantics of images. That is, the visual BOW model still suffers from the semantic gap, and thus may lead to significant performance degradation in more challenging tasks (e.g. social image classification). Secondly, typically thousands of visual words are generated to obtain better performance on a relatively large image dataset. Due to such a large vocabulary size, the subsequent image classification may take a considerable amount of time. To overcome the first drawback, we develop a graph-based method for visual BOW refinement by exploiting the tags (easy to access, although noisy) of social images. More notably, for efficient image classification, we further reduce the refined visual BOW model to a much smaller size through semantic spectral clustering. Extensive experimental results show the promising performance of the proposed framework for visual BOW refinement and reduction.
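The vocabulary-construction step the abstract criticises (clustering low-level descriptors, then quantising each image into a visual-word histogram) can be sketched as follows. This is a minimal illustration with made-up random data standing in for SIFT-like descriptors, not the authors' code:

```python
import numpy as np

def build_vocabulary(descriptors, k, iters=20, seed=0):
    """Cluster local feature descriptors with plain k-means;
    the k centroids form the visual vocabulary."""
    rng = np.random.default_rng(seed)
    centroids = descriptors[rng.choice(len(descriptors), k, replace=False)]
    for _ in range(iters):
        # assign each descriptor to its nearest centroid
        d = np.linalg.norm(descriptors[:, None] - centroids[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            pts = descriptors[labels == j]
            if len(pts):
                centroids[j] = pts.mean(axis=0)
    return centroids

def bow_histogram(descriptors, vocabulary):
    """Quantise one image's descriptors against the vocabulary
    and return a normalised visual-word histogram."""
    d = np.linalg.norm(descriptors[:, None] - vocabulary[None], axis=2)
    words = d.argmin(axis=1)
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(1)
descs = rng.normal(size=(500, 8))   # stand-in for low-level keypoint descriptors
vocab = build_vocabulary(descs, k=16)
h = bow_histogram(rng.normal(size=(60, 8)), vocab)
print(h.shape, round(h.sum(), 6))   # (16,) 1.0
```

With thousands of visual words, each histogram comparison scales with the vocabulary size, which is the cost the paper's spectral-clustering reduction targets.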

    Liquid Intersection Types

    We present a new type system combining refinement types with the expressiveness of the intersection type discipline. The use of such features makes it possible to derive more precise types than in the original refinement system. We have been able to prove several interesting properties of our system (including subject reduction) and have developed an inference algorithm, which we proved to be sound. Comment: In Proceedings ITRS 2014, arXiv:1503.0437

    Interacting errors in large-eddy simulation: a review of recent developments

    The accuracy of large-eddy simulations is limited by, among other factors, the quality of the subgrid parameterisation and the numerical contamination of the smaller retained flow structures. We review the effects of discretisation and modelling errors from two different perspectives. We first show that spatial discretisation induces its own filter and compare the dynamic importance of this numerical filter to the basic large-eddy filter. The spatial discretisation modifies the large-eddy closure problem, as is expressed by the difference between the discrete 'numerical stress tensor' and the continuous 'turbulent stress tensor'. This difference consists of a high-pass contribution associated with the specific numerical filter. Several central differencing methods are analysed and the importance of the subgrid resolution is established. Second, we review a database approach to assess the total simulation error and its numerical and modelling contributions. The interaction between the different sources of error is shown to lead to their partial cancellation. From this analysis one may identify an 'optimal refinement strategy' for a given subgrid model, discretisation method and flow conditions, leading to minimal total simulation error at a given computational cost. We provide full detail for homogeneous decaying turbulence in a 'Smagorinsky fluid'. The optimal refinement strategy is compared with the error reduction that arises from grid refinement of the dynamic eddy-viscosity model. The main trends of the optimal refinement strategy as a function of resolution and Reynolds number are found to be adequately followed by the dynamic model. This yields significant error reduction upon grid refinement, although at coarse resolutions significant error levels remain. To address this deficiency, a new successive inverse polynomial interpolation procedure is proposed with which the optimal Smagorinsky constant may be efficiently approximated at a given resolution. The computational overhead of this optimisation procedure is shown to be well justified in view of the achieved reduction of the error level relative to the 'no-model' and dynamic model predictions.
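Successive inverse polynomial interpolation, as used above to tune the Smagorinsky constant, is in essence a derivative-free 1-D minimisation: fit a parabola through three samples of the error curve, jump to its vertex, and repeat with the three best points. A minimal sketch, where the quadratic `err` is a made-up stand-in for an actual LES total-error measure:

```python
def parabola_min(x1, x2, x3, f1, f2, f3):
    """Abscissa of the vertex of the parabola through three points."""
    num = (x2 - x1) ** 2 * (f2 - f3) - (x2 - x3) ** 2 * (f2 - f1)
    den = (x2 - x1) * (f2 - f3) - (x2 - x3) * (f2 - f1)
    return x2 - 0.5 * num / den

def minimise(err, a, b, tol=1e-8, max_iter=50):
    """Successive parabolic interpolation on [a, b]:
    refine the bracket using the vertex of the fitted parabola."""
    x1, x3 = a, b
    x2 = 0.5 * (a + b)
    for _ in range(max_iter):
        x_new = parabola_min(x1, x2, x3, err(x1), err(x2), err(x3))
        if abs(x_new - x2) < tol:
            return x_new
        # keep the three points with the smallest error values
        x1, x2, x3 = sorted(sorted([x1, x2, x3, x_new], key=err)[:3])
    return x2

# stand-in error curve with a minimum at C_s = 0.17
err = lambda c: (c - 0.17) ** 2 + 0.01
print(round(minimise(err, 0.05, 0.30), 4))   # 0.17
```

Each iteration costs a few (expensive) error evaluations, which is why the paper argues the overhead is justified only by the resulting error reduction.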

    MeGARA: Menu-based Game Abstraction and Abstraction Refinement of Markov Automata

    Markov automata combine continuous time, probabilistic transitions, and nondeterminism in a single model. They represent an important and powerful way to model a wide range of complex real-life systems. However, such models tend to be large and difficult to handle, making abstraction and abstraction refinement necessary. In this paper we present an abstraction and abstraction refinement technique for Markov automata, based on the game-based and menu-based abstraction of probabilistic automata. First experiments show that a significant reduction in size is possible using abstraction. Comment: In Proceedings QAPL 2014, arXiv:1406.156

    On p-Robust Saturation for hp-AFEM

    We consider the standard adaptive finite element loop SOLVE, ESTIMATE, MARK, REFINE, with ESTIMATE being implemented using the p-robust equilibrated flux estimator, and MARK being Dörfler marking. As a refinement strategy we employ p-refinement. We investigate the question by which amount the local polynomial degree on any marked patch has to be increased in order to achieve a p-independent error reduction. The resulting adaptive method can be turned into an instance optimal hp-adaptive method by the addition of a coarsening routine.
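Dörfler (bulk-chasing) marking, the MARK step above, selects a minimal set of elements whose combined squared error indicators exceed a fixed fraction θ of the total estimated error. A minimal sketch with illustrative indicator values:

```python
def doerfler_mark(indicators, theta=0.5):
    """Return indices of a minimal set M of elements with
    sum_{i in M} eta_i^2 >= theta * sum_i eta_i^2."""
    total = sum(e ** 2 for e in indicators)
    # greedily take elements in order of decreasing indicator
    order = sorted(range(len(indicators)),
                   key=lambda i: indicators[i], reverse=True)
    marked, acc = [], 0.0
    for i in order:
        marked.append(i)
        acc += indicators[i] ** 2
        if acc >= theta * total:
            break
    return marked

etas = [0.1, 0.8, 0.3, 0.05, 0.6]
print(doerfler_mark(etas, theta=0.7))   # [1, 4]
```

In the p-refinement strategy of the paper, the polynomial degree on the patches around the marked elements is then raised, rather than the mesh being subdivided.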