
    Algorithms to Compute the Lyndon Array

    We first describe three algorithms for computing the Lyndon array that have been suggested in the literature, but for which no structured exposition has been given. Two of these algorithms execute in quadratic time in the worst case; the third achieves linear time, but at the expense of prior computation of both the suffix array and the inverse suffix array of x. We then go on to describe two variants of a new algorithm that avoids prior computation of global data structures and executes in worst-case O(n log n) time. Experimental evidence suggests that all but one of these five algorithms require only linear execution time in practice, with the two new algorithms faster by a small factor. We conjecture that there exists a fast and worst-case linear-time algorithm to compute the Lyndon array that is also elementary (making no use of global data structures such as the suffix array).
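
    For concreteness, the sketch below shows the simplest quadratic approach: the Lyndon array value at position i equals the length of the longest Lyndon prefix of the suffix x[i..], which a Duval-style scan finds position by position. This is an illustrative baseline in the spirit of the quadratic algorithms mentioned, not a reproduction of any of the five algorithms studied.

```python
def lyndon_array(x: str) -> list[int]:
    """Quadratic-time baseline: lam[i] is the length of the longest
    Lyndon word that is a prefix of x[i:]."""
    n = len(x)
    lam = [0] * n
    for i in range(n):
        # Duval-style scan: when it stops, the longest Lyndon prefix
        # of x[i:] (its first Lyndon factor) has length j - k.
        j, k = i + 1, i
        while j < n and x[k] <= x[j]:
            k = i if x[k] < x[j] else k + 1
            j += 1
        lam[i] = j - k
    return lam

print(lyndon_array("abaab"))  # [2, 1, 3, 2, 1]
```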

    A generic shape descriptor using Bezier curves

    Bezier curves are a robust tool for a wide array of applications, ranging from computer-aided design to calligraphic character outlining and object shape description. In terms of the control-point generation process, existing shape descriptor techniques that employ Bezier curves do not distinguish between regions where an object's shape changes rapidly and those where the change is more gradual or flat. This can lead to an erroneous shape description, particularly where there are significantly sharp changes in shape, such as at sharp corners. This paper presents a novel shape description algorithm called a generic shape descriptor using Bezier curves (SDBC), which defines a new strategy for Bezier control-point generation by integrating domain-specific information about the shape of an object in a particular region. The strategy also includes an improved dynamic fixed-length coding scheme for control points. The SDBC framework has been rigorously tested on a number of arbitrary shapes, and both quantitative and qualitative analyses have confirmed its superior performance in comparison with existing algorithms.
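
    For background, the sketch below evaluates a single Bezier segment from its control points with de Casteljau's algorithm. It illustrates only the curve machinery the descriptor builds on; the SDBC control-point generation strategy itself is not reproduced here, and the sample control points are arbitrary illustration values.

```python
def de_casteljau(control_points, t):
    """Evaluate a Bezier curve at parameter t in [0, 1] by repeated
    linear interpolation of the control polygon (de Casteljau)."""
    pts = [tuple(p) for p in control_points]
    while len(pts) > 1:
        pts = [
            ((1 - t) * p[0] + t * q[0], (1 - t) * p[1] + t * q[1])
            for p, q in zip(pts, pts[1:])
        ]
    return pts[0]

# Cubic segment: a descriptor would choose four control points from the
# object boundary; these are placeholder values for illustration.
segment = [(0, 0), (1, 2), (3, 2), (4, 0)]
curve = [de_casteljau(segment, i / 20) for i in range(21)]
```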

    New Dynamic Enhancements to the Vertex-Based Rate-Distortion Optimal Shape Coding Framework

    Existing vertex-based operational rate-distortion (ORD) optimal shape coding algorithms use a vertex band around the shape boundary as the source of candidate control points (CP), usually in combination with a tolerance band (TB) and sliding window (SW) arrangement as their distortion measuring technique. These algorithms, however, employ a fixed vertex-band width irrespective of the shape and admissible distortion (AD), so the full bit-rate reduction potential is not realised. Moreover, despite the causal impact of the SW length upon both the bit rate and the computational speed, there is no formal mechanism for determining the most suitable SW length. This paper introduces the concept of a variable-width admissible CP band and a new adaptive SW-length selection strategy to address these issues. The quantitative and qualitative results analysis presented endorses the superior performance achieved by integrating these enhancements into existing vertex-based ORD optimal algorithms.
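
    As a simplified illustration of the tolerance-band idea (not the paper's exact CP-band or sliding-window machinery), the sketch below accepts a candidate edge between two boundary vertices only if every intermediate boundary point lies within the admissible distortion of that edge; the helper names and the point-to-segment distance measure are illustrative choices.

```python
import math

def point_segment_distance(p, a, b):
    """Euclidean distance from point p to the segment a-b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    seg_len_sq = dx * dx + dy * dy
    if seg_len_sq == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len_sq))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def edge_is_admissible(boundary, i, j, admissible_distortion):
    """True if every boundary point between vertices i and j lies within
    the admissible distortion of the straight edge boundary[i]-boundary[j].
    A simplified stand-in for a tolerance-band distortion check."""
    return all(
        point_segment_distance(boundary[k], boundary[i], boundary[j])
        <= admissible_distortion
        for k in range(i + 1, j)
    )
```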

    Geometric distortion measurement for shape coding: a contemporary review

    Geometric distortion measurement and the associated metrics are integral to the rate-distortion (RD) shape coding framework, with the efficacy of the metrics being strongly influenced by the underlying measurement strategy. This has been the catalyst for many different techniques, and this paper presents a comprehensive review of geometric distortion measurement, the diverse metrics applied and their impact on shape coding. The respective performance of these measuring strategies is analysed from both an RD and a complexity perspective, with a recent distortion measurement technique based on arc-length parameterisation being comparatively evaluated. Some contemporary research challenges are also investigated, including schemes to effectively quantify shape deformation.
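
    The arc-length-parameterisation technique mentioned above can be illustrated by resampling a closed boundary at points equally spaced in arc length. The sketch below shows only this resampling step; the distortion metrics built on top of it, and the function name, are illustrative assumptions rather than the reviewed method itself.

```python
import math

def arc_length_parameterise(boundary, num_samples):
    """Resample a closed polygonal boundary (list of (x, y) vertices,
    assumed distinct) at points equally spaced in arc length."""
    n = len(boundary)
    # Cumulative arc length around the closed contour.
    lengths = [0.0]
    for k in range(n):
        (x0, y0), (x1, y1) = boundary[k], boundary[(k + 1) % n]
        lengths.append(lengths[-1] + math.hypot(x1 - x0, y1 - y0))
    total = lengths[-1]
    samples, seg = [], 0
    for s in (total * i / num_samples for i in range(num_samples)):
        # Advance to the edge containing arc-length position s.
        while lengths[seg + 1] < s:
            seg += 1
        (x0, y0), (x1, y1) = boundary[seg], boundary[(seg + 1) % n]
        t = (s - lengths[seg]) / (lengths[seg + 1] - lengths[seg])
        samples.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return samples
```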

    Deep learning based classification of sheep behaviour from accelerometer data with imbalance

    Classification of sheep behaviour from a sequence of tri-axial accelerometer data has the potential to enhance sheep management. Sheep behaviour is inherently imbalanced (e.g., more ruminating than walking), resulting in underperforming classification for the minority activities, which nonetheless hold importance. Existing works have not addressed class imbalance and use traditional machine learning techniques, e.g., Random Forest (RF). We investigated Deep Learning (DL) models appropriate for sequential data, namely Long Short-Term Memory (LSTM) and Bidirectional LSTM (BLSTM), trained on imbalanced data. Two data sets were collected in normal grazing conditions using jaw-mounted and ear-mounted sensors. Novel to this study, data samples were labelled not only with typical single classes, e.g., walking, but also, depending on the behaviours, with compound classes, e.g., walking_grazing. The number of steps a sheep performed in the observed 10 s time window was also recorded and incorporated in the models. We designed several multi-class classification studies, with imbalance being addressed using synthetic data. DL models achieved superior performance to traditional ML models, especially with augmented data (e.g., 4-Class + Steps: LSTM 88.0%, RF 82.5%). DL methods also showed superior generalisability on unseen sheep (F1-score: BLSTM 0.84, LSTM 0.83, RF 0.65). LSTM, BLSTM and RF all achieved sub-millisecond average inference time, making them suitable for real-time applications. The results demonstrate the effectiveness of DL models for sheep behaviour classification in grazing conditions and show that the DL techniques can generalise across different sheep. The study presents a strong foundation for the development of such models for real-time animal monitoring.
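
    A minimal sketch of the kind of sequence model described, assuming PyTorch: an LSTM (optionally bidirectional) over tri-axial accelerometer windows, with the per-window step count appended before the output layer. The layer sizes, window length, class count and the way the step count is fused are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class AccelLSTM(nn.Module):
    """LSTM/BLSTM classifier for tri-axial accelerometer windows with a
    per-window step count appended before the output layer (a sketch;
    layer sizes and feature fusion are assumptions)."""
    def __init__(self, num_classes: int, hidden_size: int = 64,
                 bidirectional: bool = False):
        super().__init__()
        self.lstm = nn.LSTM(input_size=3, hidden_size=hidden_size,
                            batch_first=True, bidirectional=bidirectional)
        out_dim = hidden_size * (2 if bidirectional else 1)
        self.head = nn.Linear(out_dim + 1, num_classes)  # +1 for step count

    def forward(self, x, steps):
        # x: (batch, time, 3) accelerometer window; steps: (batch, 1)
        _, (h, _) = self.lstm(x)
        h = torch.cat([h[-2], h[-1]], dim=1) if self.lstm.bidirectional else h[-1]
        return self.head(torch.cat([h, steps], dim=1))

# e.g. a "4-Class + Steps" style setting: batch of 8 windows, 320 samples
# each (an assumed sampling rate for the 10 s window).
model = AccelLSTM(num_classes=4, bidirectional=True)
logits = model(torch.randn(8, 320, 3), torch.randn(8, 1))
```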

    Automatic annotation of coral reefs using deep learning

    Healthy coral reefs play a vital role in maintaining biodiversity in tropical marine ecosystems. Deep sea exploration and imaging have provided us with a great opportunity to look into these vast and complex marine ecosystems, and data acquisition from coral reefs has facilitated their scientific investigation. Millions of digital images of the sea floor have been collected with the help of Remotely Operated Vehicles (ROVs) and Autonomous Underwater Vehicles (AUVs). Automated technology to monitor the health of the oceans allows for transformational ecological outcomes by standardizing methods for detecting and identifying species. Manual annotation is a tediously repetitive and time-consuming task for marine experts: it takes 10-30 minutes for a marine expert to meticulously annotate a single image. This paper aims to automate the analysis of the large volume of available AUV imagery by developing advanced deep learning tools for rapid and large-scale automatic annotation of marine coral species. Such automated technology would greatly benefit marine ecological studies in terms of cost, speed and accuracy, and thus in better quantifying the level of environmental change marine ecosystems can tolerate. We propose a deep learning based classification method for coral reefs and report its application to the automatic annotation of unlabelled mosaics of the coral reef in the Abrolhos Islands, Western Australia. Our proposed method automatically quantifies the coral coverage in this region and detects a decreasing trend in coral population, which is in line with conclusions drawn by marine ecologists.
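
    As an illustration of how an automatic patch classifier could be used to quantify coverage, the sketch below classifies patches of a mosaic on a regular grid with a CNN and reports the fraction of patches predicted as coral. The backbone, patch size, stride, class count and coral class ids are all placeholder assumptions, not the paper's pipeline.

```python
import torch
import torchvision.models as models
import torchvision.transforms.functional as TF

def coral_coverage(mosaic, classifier, coral_class_ids, patch=224, stride=224):
    """Estimate coral coverage of a mosaic image (tensor of shape (3, H, W))
    as the fraction of grid patches whose predicted class is a coral class."""
    _, h, w = mosaic.shape
    coral, total = 0, 0
    classifier.eval()
    with torch.no_grad():
        for top in range(0, h - patch + 1, stride):
            for left in range(0, w - patch + 1, stride):
                crop = TF.crop(mosaic, top, left, patch, patch).unsqueeze(0)
                pred = classifier(crop).argmax(dim=1).item()
                coral += int(pred in coral_class_ids)
                total += 1
    return coral / max(total, 1)

# Hypothetical usage: an ImageNet-style backbone with a new output layer,
# which in practice would first be trained on labelled coral patches.
backbone = models.resnet18(weights=None)
backbone.fc = torch.nn.Linear(backbone.fc.in_features, 10)  # 10 assumed classes
coverage = coral_coverage(torch.rand(3, 672, 672), backbone, coral_class_ids={0, 1})
```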

    Coral classification with hybrid feature representations

    Coral reefs exhibit significant within-class variations, complex between-class boundaries and inconsistent image clarity, which makes coral classification a challenging task. In this paper, we report the application of generic CNN representations combined with hand-crafted features to coral reef classification, taking advantage of the complementary strengths of these representation types. We extract CNN-based features from patches centred at labelled pixels at multiple scales, and complement them with texture- and colour-based hand-crafted features extracted from the same patches. Our proposed method achieves a classification accuracy that is higher than the state-of-the-art methods on the MLC benchmark dataset for corals.
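
    A minimal sketch of the hybrid-representation idea: generic CNN activations from a truncated backbone are concatenated with a simple hand-crafted colour histogram for the same patch. The backbone choice, histogram binning and single patch scale are illustrative assumptions; the paper's specific texture and colour descriptors and multi-scale patch extraction are not reproduced.

```python
import torch
import torch.nn as nn
import torchvision.models as models

def hybrid_features(patch, backbone):
    """Concatenate generic CNN features with a per-channel colour histogram
    for one image patch (tensor of shape (3, H, W) with values in [0, 1])."""
    with torch.no_grad():
        cnn_feat = backbone(patch.unsqueeze(0)).flatten()   # generic CNN representation
    hist = torch.cat([torch.histc(patch[c], bins=16, min=0.0, max=1.0)
                      for c in range(3)])                   # hand-crafted colour feature
    return torch.cat([cnn_feat, hist / hist.sum()])

# Backbone truncated before the classification layer to expose features
# (an assumed choice; any pretrained CNN feature extractor would do).
resnet = models.resnet18(weights=None)
backbone = nn.Sequential(*list(resnet.children())[:-1]).eval()
feat = hybrid_features(torch.rand(3, 224, 224), backbone)   # 512 + 48 dimensions
```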