
    Deciding the finiteness of the number of simple permutations contained in a wreath-closed class is polynomial

    We present an algorithm running in time O(n ln n) that decides whether a wreath-closed permutation class Av(B), given by its finite basis B, contains a finite number of simple permutations. The method is based on an article of Brignall, Ruskuc and Vatter, which presents a decision procedure (of high complexity) for solving this question without the assumption that Av(B) is wreath-closed. Using combinatorial, algorithmic and language-theoretic arguments, together with one of our previous results on pin-permutations, we are able to transform the problem into a co-finiteness problem on a complete deterministic automaton.
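    The reduction ends at a co-finiteness question for the language of a complete deterministic automaton. As a minimal illustrative sketch (not the authors' implementation), co-finiteness can be tested by flipping the accepting states and checking whether the complement language is finite, i.e. whether the trimmed automaton still contains a reachable, co-reachable cycle:

```python
from collections import deque

def is_cofinite(states, alphabet, delta, start, accepting):
    """Decide whether the language of a complete DFA is co-finite.

    delta: dict mapping (state, symbol) -> state (total, since the DFA is complete).
    The complement language is recognised by the same DFA with accepting states
    flipped; it is finite iff no cycle lies on a path from the start state to a
    flipped-accepting (i.e. originally rejecting) state.
    """
    rejecting = set(states) - set(accepting)  # accepting states of the complement

    # States reachable from the start state.
    reachable = {start}
    queue = deque([start])
    while queue:
        q = queue.popleft()
        for a in alphabet:
            r = delta[(q, a)]
            if r not in reachable:
                reachable.add(r)
                queue.append(r)

    # States from which some rejecting state is reachable (backward search).
    preds = {q: set() for q in states}
    for (q, a), r in delta.items():
        preds[r].add(q)
    coreachable = set(rejecting)
    queue = deque(rejecting)
    while queue:
        q = queue.popleft()
        for p in preds[q]:
            if p not in coreachable:
                coreachable.add(p)
                queue.append(p)

    useful = reachable & coreachable  # states on some accepted path of the complement

    # The complement is infinite iff the sub-automaton on useful states has a cycle.
    colour = {q: 0 for q in useful}  # 0 = unvisited, 1 = on stack, 2 = done

    def has_cycle(q):
        colour[q] = 1
        for a in alphabet:
            r = delta[(q, a)]
            if r in useful and (colour[r] == 1 or (colour[r] == 0 and has_cycle(r))):
                return True
        colour[q] = 2
        return False

    complement_infinite = any(colour[q] == 0 and has_cycle(q) for q in useful)
    return not complement_infinite
```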

    Efficient similarity computations on parallel machines using data shaping

    Similarity computation is a fundamental operation over all forms of data. Big Data is typically characterized by attributes such as volume, velocity, variety, and veracity, and its variety appears in structured, semi-structured, or unstructured forms. The volume of Big Data in general, and of semi-structured data in particular, is increasing at a phenomenal rate, posing a new set of challenges for similarity computation over semi-structured data. Technology and processor architecture trends strongly suggest that future processors will have tens of thousands of cores (hardware threads), while the ratio of on-chip and off-chip memory to core count is decreasing. State-of-the-art parallel computing platforms such as general-purpose graphics processing units (GPUs) and MICs are promising for high-performance as well as high-throughput computing. However, processing the semi-structured component of Big Data efficiently on parallel computing systems (e.g. GPUs) is challenging, because most of these emerging platforms are organized as highly structured Single Instruction Multiple Thread/Data machines, in which many cores (streaming processors) operate in lock-step, or they require a high degree of task-level parallelism. We argue that effective and efficient solutions to key similarity computation problems need to operate in synergy with the underlying computing hardware, and that semi-structured input data needs to be shaped, or reorganized, to exploit the enormous computing power of state-of-the-art highly threaded architectures such as GPUs. For example, shaping input data (via encoding) to minimize data dependence can facilitate flexible, concurrent computation on high-throughput accelerators and co-processors such as GPUs and MICs. We consider various instances of traditional and emerging problems at the intersection of semi-structured data and data analytics. Preprocessing is a common operation in the initial stages of data processing pipelines and typically involves operations such as data extraction and data selection; in the context of semi-structured data, twig filtering is used to identify and extract data of interest. Duplicate detection and record linkage are useful in preprocessing tasks such as data cleaning and data fusion, and also in data mining, for finding similar tree objects. Likewise, tree edit distance is a fundamental metric for tree problems, and similarity computation between trees is another key problem in the context of Big Data. This dissertation makes a case for platform-centric data shaping as a potent mechanism to tackle the data- and architecture-borne issues of semi-structured data processing on GPUs and GPU-like parallel architectures. We propose several data shaping techniques for tree matching problems occurring in semi-structured data and experiment with real-world datasets. The experimental results reveal that the proposed platform-centric data shaping approach is effective for computing similarities between tree objects on GPGPUs, with performance gains of up to three orders of magnitude, subject to problem and platform.
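    To make the "shaping" idea concrete, here is a hypothetical sketch (names and layout are assumptions, not the dissertation's code) of encoding a labelled tree into flat structure-of-arrays buffers. Such contiguous, dependence-free arrays can be copied to device memory and compared element-wise by many GPU threads without pointer chasing:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    label: str
    children: List["Node"] = field(default_factory=list)

def shape_tree(root):
    """Flatten a tree into parallel arrays: preorder labels, depths, parent indices."""
    labels, depths, parents = [], [], []
    stack = [(root, 0, -1)]  # (node, depth, parent index)
    while stack:
        node, depth, parent = stack.pop()
        idx = len(labels)
        labels.append(node.label)
        depths.append(depth)
        parents.append(parent)
        # Push children in reverse so they are emitted in left-to-right order.
        for child in reversed(node.children):
            stack.append((child, depth + 1, idx))
    return labels, depths, parents

# Example: a small XML-like twig  a(b, c(d))
tree = Node("a", [Node("b"), Node("c", [Node("d")])])
print(shape_tree(tree))
# (['a', 'b', 'c', 'd'], [0, 1, 1, 2], [-1, 0, 0, 2])
```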

    On the implementation and refinement of outerplanar graph algorithms.


    Activities of daily life recognition using process representation modelling to support intention analysis

    Purpose – This paper focuses on applying a range of traditional classification- and semantic reasoning-based techniques to recognise activities of daily life (ADLs). ADL recognition plays an important role in tracking functional decline among elderly people who suffer from Alzheimer’s disease. Accurate recognition enables smart environments to support and assist the elderly to lead an independent life for as long as possible. However, representing the complex structure of an ADL in a flexible manner remains a challenge. Design/methodology/approach – This paper presents an ADL recognition approach that uses a hierarchical structure for the representation and modelling of activities, their associated tasks, and the relationships between them. The study describes an approach to constructing ADLs based on a task-specific and intention-oriented plan representation language called Asbru. The proposed method is particularly flexible and adaptable, allowing caregivers to model daily schedules for Alzheimer’s patients. Findings – A proof-of-concept prototype evaluation has been conducted to validate the proposed ADL recognition engine, which achieves recognition results comparable to existing ADL recognition approaches. Originality/value – The work presented in this paper is novel in that the developed ADL recognition approach takes into account all relationships and dependencies within the modelled ADLs, which is very useful when conducting activity recognition with very limited features.
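    As an illustrative sketch only (the class names, the toy "make tea" activity and the naive matching rule are assumptions, not the paper's Asbru-based implementation), a hierarchical ADL built from sub-activities and leaf tasks might be modelled like this:

```python
from dataclasses import dataclass, field
from typing import List, Union

@dataclass
class Task:
    """A leaf-level action detectable from sensor events (e.g. 'fill kettle')."""
    name: str

@dataclass
class Activity:
    """A hierarchical ADL composed of sub-activities and tasks."""
    name: str
    subtasks: List[Union["Activity", Task]] = field(default_factory=list)

    def matches(self, observed: List[str]) -> bool:
        """Naive recognition, ignoring order: every required task appears in the stream."""
        return all(
            t.matches(observed) if isinstance(t, Activity) else t.name in observed
            for t in self.subtasks
        )

# Hypothetical 'make tea' ADL as a caregiver might model it.
make_tea = Activity("make tea", [
    Task("fill kettle"),
    Task("boil water"),
    Activity("prepare cup", [Task("add tea bag"), Task("pour water")]),
])

print(make_tea.matches(["fill kettle", "boil water", "add tea bag", "pour water"]))  # True
```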