4,562 research outputs found

    Learning a Loopy Model For Semantic Segmentation Exactly

    Learning structured models using maximum margin techniques has become an indispensable tool for computer vision researchers, as many computer vision applications can be cast naturally as an image labeling problem. Pixel-based or superpixel-based conditional random fields are particularly popular examples. Typically, neighborhood graphs, which contain a large number of cycles, are used. As exact inference in loopy graphs is NP-hard in general, learning these models without approximations is usually deemed infeasible. In this work we show that, despite the theoretical hardness, it is possible to learn loopy models exactly in practical applications. To this end, we analyze the use of multiple approximate inference techniques together with cutting plane training of structural SVMs. We show that our proposed method yields exact solutions with optimality guarantees in a computer vision application, for little additional computational cost. We also propose a dynamic caching scheme to accelerate training further, yielding runtimes that are comparable with approximate methods. We hope that this insight can lead to a reconsideration of the tractability of loopy models in computer vision.
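    The cutting-plane training mentioned above can be sketched in miniature: repeatedly run loss-augmented inference to find the most violated constraint, add it to a working set, and re-solve the restricted problem. The toy two-class "structured" problem below makes inference trivial exact enumeration; all names, data, and the inner gradient-descent solver are illustrative, not the authors' implementation.

    ```python
    import numpy as np

    def psi(x, y):
        # joint feature map: class-conditional copy of x (multiclass encoding)
        f = np.zeros(2 * len(x))
        f[y * len(x):(y + 1) * len(x)] = x
        return f

    def loss_augmented_argmax(w, x, y):
        # exact "inference": enumerate the (tiny) label space {0, 1}
        scores = [int(yp != y) + w @ psi(x, yp) for yp in (0, 1)]
        return int(np.argmax(scores))

    def cutting_plane_train(X, Y, C=1.0, tol=1e-3, max_iter=50):
        w = np.zeros(2 * X.shape[1])
        working_set = []  # most-violated constraints found so far
        for _ in range(max_iter):
            violated = False
            for x, y in zip(X, Y):
                yp = loss_augmented_argmax(w, x, y)
                slack = int(yp != y) - w @ (psi(x, y) - psi(x, yp))
                if slack > tol:
                    working_set.append((x, y, yp))
                    violated = True
            if not violated:
                break  # no constraint violated beyond tol: done
            # re-solve the restricted problem by plain (sub)gradient descent
            for _ in range(200):
                g = w.copy()  # gradient of the L2 regularizer
                for x, y, yp in working_set:
                    margin = w @ (psi(x, y) - psi(x, yp))
                    if int(yp != y) - margin > 0:  # hinge is active
                        g -= C * (psi(x, y) - psi(x, yp))
                w -= 0.05 * g
        return w

    X = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
    Y = np.array([0, 0, 1, 1])
    w = cutting_plane_train(X, Y)
    preds = [int(np.argmax([w @ psi(x, yp) for yp in (0, 1)])) for x in X]
    ```

    The paper's point is that when the inference step is solved exactly (here by enumeration), the loop terminates with a certificate that no constraint is violated beyond the tolerance.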

    A survey on data and transaction management in mobile databases

    The popularity of mobile databases is increasing day by day as people need information even on the move in a fast-changing world. This database technology permits employees using mobile devices to connect to their corporate networks, hoard the needed data, work in disconnected mode, and reconnect to the network to synchronize with the corporate database. In this scenario, data is moved closer to the applications in order to improve performance and autonomy. This leads to many interesting problems, and mobile databases have become fertile ground for research. In this paper a survey is presented on data and transaction management in mobile databases from the year 2000 onwards. The survey covers the various types of architectures used in mobile databases and mobile transaction models. It also addresses the data management issues, namely replication and caching strategies, and the transaction management functionalities such as concurrency control and commit protocols, synchronization, query processing, recovery, and security. It also provides research directions in mobile databases. Comment: 20 pages; International Journal of Database Management Systems (IJDMS) Vol.4, No.5, October 2012. arXiv admin note: text overlap with arXiv:0908.0076, arXiv:1005.1747, arXiv:1108.6195 by other authors

    Efficient Ladder-style DenseNets for Semantic Segmentation of Large Images

    Recent progress of deep image classification models has provided great potential to improve state-of-the-art performance in related computer vision tasks. However, the transition to semantic segmentation is hampered by strict memory limitations of contemporary GPUs. The extent of feature map caching required by convolutional backprop poses significant challenges even for moderately sized Pascal images, while requiring careful architectural considerations when the source resolution is in the megapixel range. To address these concerns, we propose a novel DenseNet-based ladder-style architecture which features high modelling power and a very lean upsampling datapath. We also propose to substantially reduce the extent of feature map caching by exploiting the inherent spatial efficiency of the DenseNet feature extractor. The resulting models deliver high performance with fewer parameters than competitive approaches, and allow training at megapixel resolution on commodity hardware. The presented experimental results outperform the state-of-the-art in terms of prediction accuracy and execution speed on the Cityscapes, Pascal VOC 2012, CamVid and ROB 2018 datasets. Source code will be released upon publication. Comment: 12 pages, 6 figures, under review

    PRESTO: Probabilistic Cardinality Estimation for RDF Queries Based on Subgraph Overlapping

    In query optimisation, accurate cardinality estimation is essential for finding optimal query plans. It is especially challenging for RDF due to the lack of an explicit schema and the excessive occurrence of joins in RDF queries. Existing approaches typically collect statistics based on the counts of triples and estimate the cardinality of a query as the product of its join components, where errors can accumulate even when the estimation of each component is accurate. As opposed to existing methods, we propose PRESTO, a cardinality estimation method that is based on the counts of subgraphs instead of triples and uses a probabilistic method to estimate the cardinalities of RDF queries as a whole. PRESTO avoids some major issues of existing approaches and is able to accurately estimate arbitrary queries under a bounded memory constraint. We evaluate PRESTO with YAGO and show that PRESTO is more accurate for both simple and complex queries.
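    The triple-count-based estimation that the abstract criticises can be shown on a toy triple store: the classic independence assumption multiplies per-pattern counts and divides by the number of distinct join values, which drifts from the true join cardinality. This is a minimal sketch of the baseline being improved upon, not of PRESTO itself; the data and names are made up.

    ```python
    # toy RDF-like triple store
    triples = [
        ("a", "knows", "b"), ("a", "knows", "c"), ("d", "knows", "e"),
        ("b", "likes", "x"), ("b", "likes", "y"), ("c", "likes", "x"),
    ]

    def match(pattern):
        # a pattern term starting with '?' is a variable, anything else must match
        s, p, o = pattern
        return [t for t in triples
                if (s.startswith("?") or t[0] == s)
                and (p.startswith("?") or t[1] == p)
                and (o.startswith("?") or t[2] == o)]

    def true_cardinality(p1, p2):
        # exact join: p1's object must equal p2's subject
        return sum(1 for t1 in match(p1) for t2 in match(p2) if t1[2] == t2[0])

    def independence_estimate(p1, p2):
        # triple-count estimate: product of pattern counts divided by the
        # number of distinct join values (independence assumption)
        join_vals = {t[0] for t in match(p2)}
        return len(match(p1)) * len(match(p2)) / max(len(join_vals), 1)

    p1 = ("?x", "knows", "?y")
    p2 = ("?y", "likes", "?z")
    ```

    Here the true cardinality is 3 but the independence estimate is 4.5, because one `knows` edge leads to a node with no `likes` edges; counting whole subgraphs instead of single triples is exactly what avoids this kind of error.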

    ExpTime Tableaux for the Description Logic SHIQ Based on Global State Caching and Integer Linear Feasibility Checking

    We give the first ExpTime (complexity-optimal) tableau decision procedure for checking satisfiability of a knowledge base in the description logic SHIQ when numbers are coded in unary. Our procedure is based on global state caching and integer linear feasibility checking.

    Machine Intelligence Techniques for Next-Generation Context-Aware Wireless Networks

    The next generation of wireless networks (i.e. 5G and beyond), which will be extremely dynamic and complex due to the ultra-dense deployment of heterogeneous networks (HetNets), poses many critical challenges for network planning, operation, management and troubleshooting. At the same time, generation and consumption of wireless data are becoming increasingly distributed with the ongoing paradigm shift from people-centric to machine-oriented communications, making the operation of future wireless networks even more complex. In mitigating the complexity of future network operation, new approaches to intelligently utilizing distributed computational resources with improved context-awareness become extremely important. In this regard, the emerging fog (edge) computing architecture, which aims to distribute computing, storage, control, communication, and networking functions closer to end users, has great potential for enabling efficient operation of future wireless networks. These promising architectures make the adoption of artificial intelligence (AI) principles, which incorporate learning, reasoning and decision-making mechanisms, a natural choice for designing a tightly integrated network. Towards this end, this article provides a comprehensive survey on the utilization of AI, integrating machine learning, data analytics and natural language processing (NLP) techniques, for enhancing the efficiency of wireless network operation. In particular, we provide comprehensive discussion on the utilization of these techniques for efficient data acquisition, knowledge discovery, network planning, operation and management of next generation wireless networks. A brief case study utilizing the AI techniques for this network has also been provided. Comment: ITU Special Issue N.1 The impact of Artificial Intelligence (AI) on communication networks and services (to appear)

    ExpTime Tableaux with Global Caching for the Description Logic SHOQ

    We give the first ExpTime (complexity-optimal) tableau decision procedure for checking satisfiability of a knowledge base in the description logic SHOQ, which extends the basic description logic ALC with transitive roles, hierarchies of roles, nominals and quantified number restrictions. The complexity is measured using unary representation for numbers. Our procedure is based on global caching and integer linear feasibility checking. Comment: arXiv admin note: substantial text overlap with arXiv:1205.583

    Alternating Directions Dual Decomposition

    We propose AD3, a new algorithm for approximate maximum a posteriori (MAP) inference on factor graphs, based on the alternating directions method of multipliers. Like dual decomposition algorithms, AD3 uses worker nodes to iteratively solve local subproblems and a controller node to combine these local solutions into a global update. The key characteristic of AD3 is that each local subproblem has a quadratic regularizer, leading to faster consensus than subgradient-based dual decomposition, both theoretically and in practice. We provide closed-form solutions for these AD3 subproblems for binary pairwise factors and factors imposing first-order logic constraints. For arbitrary factors (large or combinatorial), we introduce an active set method which requires only an oracle for computing a local MAP configuration, making AD3 applicable to a wide range of problems. Experiments on synthetic and real-world problems show that AD3 compares favorably with the state-of-the-art.
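    The worker/controller structure with a quadratic regularizer can be illustrated with generic consensus ADMM on a trivial problem (minimizing two quadratics that share one variable). This is a sketch of the underlying alternating-directions scheme, not of AD3's actual factor subproblems; the problem and constants are illustrative.

    ```python
    # consensus ADMM on min_x f1(x) + f2(x), with f_i(x) = (x - a_i)^2
    a = [1.0, 3.0]        # local targets for the two "workers"
    rho = 1.0             # quadratic penalty strength (the regularizer)
    x = [0.0, 0.0]        # workers' local copies of the shared variable
    u = [0.0, 0.0]        # scaled dual variables
    z = 0.0               # controller's consensus variable

    for _ in range(100):
        # workers: closed-form minimizer of (x - a_i)^2 + (rho/2)(x - z + u_i)^2
        x = [(2 * a[i] + rho * (z - u[i])) / (2 + rho) for i in range(2)]
        # controller: combine local solutions (plus duals) into a global update
        z = sum(x[i] + u[i] for i in range(2)) / 2
        # dual update pushes local copies toward consensus
        u = [u[i] + x[i] - z for i in range(2)]
    ```

    The quadratic penalty gives each worker a strongly convex subproblem with a closed-form solution, and the iterates converge to the global minimizer x = 2; subgradient-based dual decomposition reaches the same point but typically with slower, oscillating steps.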

    Weakly-supervised Semantic Parsing with Abstract Examples

    Training semantic parsers from weak supervision (denotations) rather than strong supervision (programs) complicates training in two ways. First, a large search space of potential programs needs to be explored at training time to find a correct program. Second, spurious programs that accidentally lead to a correct denotation add noise to training. In this work we propose that in closed worlds with clear semantic types, one can substantially alleviate these problems by utilizing an abstract representation, where tokens in both the language utterance and the program are lifted to an abstract form. We show that these abstractions can be defined with a handful of lexical rules and that they result in sharing between different examples that alleviates the difficulties in training. To test our approach, we develop the first semantic parser for CNLVR, a challenging visual reasoning dataset where the search space is large and overcoming spuriousness is critical, because denotations are either TRUE or FALSE, and thus random programs are likely to lead to a correct denotation. Our method substantially improves performance, and reaches 82.5% accuracy, a 14.7% absolute accuracy improvement over the best previously reported accuracy. Comment: CNLVR, NLVR. Accepted to ACL 201
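    The lifting of utterance tokens to abstract types can be sketched with a handful of lexical rules: distinct utterances collapse to the same abstract form, so a program found for one can be shared with the other. The lexicon and type names below are invented for illustration, not the paper's actual rules.

    ```python
    # a few lexical rules mapping surface tokens to abstract types
    LEXICON = {
        "yellow": "C-COLOR", "blue": "C-COLOR", "black": "C-COLOR",
        "2": "C-NUM", "3": "C-NUM",
        "circle": "C-SHAPE", "square": "C-SHAPE",
    }

    def abstract(utterance):
        # lift every known token to its abstract type, keep the rest as-is
        return " ".join(LEXICON.get(tok, tok) for tok in utterance.split())

    u1 = "there are 2 yellow circle"
    u2 = "there are 3 blue square"
    ```

    Both utterances abstract to `there are C-NUM C-COLOR C-SHAPE`, so the parser needs to solve the search problem only once per abstract form, shrinking the effective search space.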

    Caching Policy for Cache-enabled D2D Communications by Learning User Preference

    Prior works on designing caching policies do not distinguish content popularity from user preference. In this paper, we illustrate the caching gain from exploiting individual user behavior in sending requests. After showing the connection between the two concepts, we provide a model for synthesizing user preference from content popularity. We then optimize the caching policy with knowledge of user preference and active level to maximize the offloading probability for cache-enabled device-to-device communications, and develop a low-complexity algorithm to find the solution. In order to learn user preference, we model user request behavior with probabilistic latent semantic analysis, and learn the model parameters by the expectation-maximization algorithm. By analyzing a MovieLens dataset, we find that preferences differ noticeably across users, and that the active level and topic preference of each user change slowly over time. Based on this observation, we introduce a prior-knowledge-based learning algorithm for user preference, which can shorten the learning time. Simulation results show a remarkable performance gain of the caching policy with user preference over the existing policy with content popularity, both with the realistic dataset and with synthetic data validated by the real dataset.
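    The objective being optimized (offloading probability under per-user preferences and active levels) can be shown with a toy greedy policy: fill a fixed-size cache by repeatedly adding the file with the largest marginal gain. This is a minimal sketch of the general idea, not the paper's low-complexity algorithm; the users, files, and probabilities are made up.

    ```python
    # toy setup: one cache of size 2, three users with individual preferences
    prefs = {                      # per-user request distribution over files
        "u1": {"A": 0.8, "B": 0.1, "C": 0.1},
        "u2": {"A": 0.1, "B": 0.8, "C": 0.1},
        "u3": {"A": 0.1, "B": 0.1, "C": 0.8},
    }
    active = {"u1": 0.6, "u2": 0.3, "u3": 0.1}  # how often each user requests

    def offloading_prob(cache):
        # probability that a random request is served from the cache
        return sum(active[u] * sum(p[f] for f in cache)
                   for u, p in prefs.items())

    # greedy policy: repeatedly add the file with the largest marginal gain
    cache, files = set(), {"A", "B", "C"}
    for _ in range(2):
        best = max(files - cache, key=lambda f: offloading_prob(cache | {f}))
        cache.add(best)
    ```

    Note that a popularity-only policy would weight all users equally, whereas here the heavy requester `u1` pulls file `A` into the cache first; that weighting by active level is the gap the paper exploits.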