375,535 research outputs found

    A new approach for detecting local features


    PRISM: Sparse Recovery of the Primordial Power Spectrum

    The primordial power spectrum describes the initial perturbations in the Universe which eventually grew into the large-scale structure we observe today, and thereby provides an indirect probe of inflation or other structure-formation mechanisms. Here, we introduce a new method to estimate this spectrum from the empirical power spectrum of cosmic microwave background (CMB) maps. A sparsity-based linear inversion method, coined PRISM, is presented. This technique leverages a sparsity prior on features in the primordial power spectrum in a wavelet basis to regularise the inverse problem. This non-parametric approach does not assume a strong prior on the shape of the primordial power spectrum, yet is able to correctly reconstruct its global shape as well as localised features. These advantages make the method robust for detecting deviations from the currently favoured scale-invariant spectrum. We investigate the strength of this method on a set of WMAP 9-year simulated data for three types of primordial power spectra: a nearly scale-invariant spectrum, a spectrum with a small running of the spectral index, and a spectrum with a localised feature. This technique proves able to easily detect deviations from a pure scale-invariant power spectrum and is suitable for distinguishing between simple models of inflation. We process the WMAP 9-year data and find no significant departure from a nearly scale-invariant power spectrum with spectral index n_s = 0.972. A high-resolution primordial power spectrum can be reconstructed with this technique, in which any strong local deviations or small global deviations from a pure scale-invariant spectrum can easily be detected.
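    The sketch below illustrates the general kind of sparsity-regularised linear inversion the abstract describes: recovering a signal from noisy linear measurements by iterative soft thresholding (ISTA) of its wavelet coefficients. The operator, wavelet, threshold and toy data are illustrative assumptions, not the PRISM implementation or the paper's CMB transfer operator.

```python
import numpy as np
import pywt

def wavelet_soft_threshold(x, wavelet="db4", thresh=0.1):
    """Soft-threshold the wavelet coefficients of x and reconstruct."""
    coeffs = pywt.wavedec(x, wavelet, mode="periodization")
    coeffs = [pywt.threshold(c, thresh, mode="soft") for c in coeffs]
    return pywt.waverec(coeffs, wavelet, mode="periodization")[: len(x)]

def ista_recover(A, d, n_iter=200, thresh=0.1):
    """ISTA: x_{k+1} = S_thresh( x_k + step * A^T (d - A x_k) )."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient step
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = wavelet_soft_threshold(x + step * A.T @ (d - A @ x), thresh=thresh)
    return x

# Toy usage: a flat "spectrum" with one localised bump, observed through a
# random linear operator plus noise.
rng = np.random.default_rng(0)
k = np.linspace(0, 1, 256)
x_true = 1.0 + 0.3 * np.exp(-((k - 0.5) ** 2) / 0.001)   # localised feature
A = rng.normal(size=(512, 256)) / np.sqrt(512)
d = A @ x_true + 0.01 * rng.normal(size=512)
x_hat = ista_recover(A, d)
```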

    A novel quality assessment for visual secret sharing schemes

    When evaluating visual quality in visual secret sharing schemes, most existing metrics fail to generate fair and uniform quality scores for the reconstructed images under test. We propose a new approach to measure the visual quality of the reconstructed image for visual secret sharing schemes. We develop an object detection method in the context of secret sharing that detects salient local features and the global object contour. The quality metric is constructed from the resulting object-detection weight map. The effectiveness of the proposed quality metric is demonstrated by a series of experiments, and the results show that our metric based on secret object detection outperforms existing metrics. Furthermore, it is straightforward to implement and can be applied to various applications, such as performing a security test of the visual secret sharing process.
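    As a rough illustration of a weight-map-based score of this kind, the sketch below weights per-pixel errors between the secret and the reconstruction by an object-detection map, so errors on the secret object count more. The weight map is assumed to be supplied by some detector, and the mapping from weighted error to a bounded score is an illustrative choice, not the paper's metric.

```python
import numpy as np

def weighted_quality(secret, reconstructed, weight_map, eps=1e-8):
    """Return a score in (0, 1]; higher means the secret object is better preserved.
    All inputs are float arrays of the same shape."""
    w = weight_map / (weight_map.sum() + eps)   # normalise the weight map
    err = (secret - reconstructed) ** 2         # per-pixel squared error
    wmse = float((w * err).sum())               # object-weighted MSE
    return 1.0 / (1.0 + wmse)                   # map to a bounded quality score
```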

    Video Object Counting in Unconstrained Environments Using Density-Based Clustering

    In this thesis, we present a video object counting approach based on matching multiple local features. We explain the development of a dataset with which to test our approach; the dataset is built using a new approach we designed for extracting object ground truth. We also provide a comparison of common single-object trackers. We develop a multi-object tracker named Learn-Select-Track and use it to track the colours of objects of interest in order to filter out false-positive object localisations. We discuss the implementation of the HDBSCAN algorithm, which we use in our novel approach for matching multiple local feature descriptors, and show that the detected clusters provide very good matches for the features, demonstrating our approach to cluster analysis and validation. We develop a simple yet efficient way of learning the features of the object of interest that is independent of the number of objects in the frame. We also develop a computationally simple way of detecting the other objects in the frame by combining the detected clusters, the features of the object of interest and vector algebra; the approach is capable of detecting partially visible and occluded objects as well. We present three ways of extracting object count estimates from videos and provide empirical evidence that our approach can be used in a wide variety of scenarios.
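    A minimal sketch of the clustering ingredient described above: extracting local feature descriptors from a frame and grouping them with HDBSCAN, which labels sparse descriptors as noise. ORB features, the hdbscan package and the min_cluster_size value are illustrative assumptions, not the thesis's actual configuration.

```python
import cv2
import hdbscan
import numpy as np

def cluster_descriptors(image_bgr, min_cluster_size=5):
    """Detect ORB keypoints in a frame and cluster their descriptors with HDBSCAN."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(nfeatures=2000)
    keypoints, descriptors = orb.detectAndCompute(gray, None)
    if descriptors is None:
        return [], np.array([])
    # HDBSCAN finds dense groups of similar descriptors; label -1 marks noise.
    clusterer = hdbscan.HDBSCAN(min_cluster_size=min_cluster_size)
    labels = clusterer.fit_predict(descriptors.astype(np.float64))
    return keypoints, labels

# Usage: keypoints sharing a cluster label are candidate matches for one object.
# kps, labels = cluster_descriptors(cv2.imread("frame.png"))
```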

    Yawn analysis with mouth occlusion detection

    One of the most common signs of tiredness or fatigue is yawning. Naturally, identification of fatigued individuals would be helped if yawning is detected. Existing techniques for yawn detection are centred on measuring the mouth opening. This approach, however, may fail if the mouth is occluded by the hand, as is frequently the case. The work presented in this paper focuses on a technique to detect yawning whilst also allowing for cases of occlusion. For measuring the mouth opening, a new technique which applies an adaptive colour region is introduced. For detecting yawning whilst the mouth is occluded, local binary pattern (LBP) features are used to also identify facial distortions during yawning. In this research, the Strathclyde Facial Fatigue (SFF) database, which contains genuine video footage of fatigued individuals, is used for training, testing and evaluation of the system.
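    The sketch below shows the LBP feature extraction step in isolation: computing a normalised histogram of uniform LBP codes over a grayscale face or mouth region, the kind of descriptor used to recognise facial distortions during yawning. The P, R and bin choices are illustrative, not those used with the SFF database.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray_patch, P=8, R=1):
    """Return a normalised histogram of uniform LBP codes for a grayscale patch."""
    codes = local_binary_pattern(gray_patch, P, R, method="uniform")
    n_bins = P + 2                                    # P+1 uniform patterns plus one "non-uniform" bin
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins))
    return hist / (hist.sum() + 1e-8)                 # normalise so patches of different sizes are comparable

# Usage: the histogram becomes the feature vector fed to a classifier.
# features = lbp_histogram(gray_face_region)
```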

    The automated detection of proliferative diabetic retinopathy using dual ensemble classification

    Objective: Diabetic retinopathy (DR) is a retinal vascular disease caused by complications of diabetes. Proliferative diabetic retinopathy (PDR) is the advanced stage of the disease, which carries a high risk of severe visual impairment and is characterized by the growth of abnormal new vessels. We aim to develop a method for the automated detection of new vessels from retinal images. Methods: This method is based on a dual classification approach. Two vessel segmentation approaches are applied to create two separate binary vessel maps, each of which holds vital information. Local morphology, gradient and intensity features are measured using each binary vessel map to produce two separate 21-D feature vectors. Independent classification is performed for each feature vector using an ensemble system of bagged decision trees, and the two independent outcomes are then combined to produce a final decision. Results: Sensitivity and specificity on a dataset of 60 images are 1.0000 and 0.9500, respectively, on a per-image basis. Conclusions: The described automated system is capable of detecting the presence of new vessels.
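    A minimal sketch of the dual-classifier idea: two 21-D feature vectors (one per binary vessel map) each feed an ensemble of bagged decision trees, and the two outputs are fused into one decision. Averaging the predicted probabilities is an illustrative fusion rule and the threshold is a placeholder; the paper's exact combination may differ.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier

def train_dual_ensemble(X1, X2, y, n_estimators=100):
    """X1, X2: (n_samples, 21) features from the two vessel maps; y: image labels."""
    clf1 = BaggingClassifier(n_estimators=n_estimators).fit(X1, y)  # decision trees are the default base learner
    clf2 = BaggingClassifier(n_estimators=n_estimators).fit(X2, y)
    return clf1, clf2

def predict_dual(clf1, clf2, X1, X2, threshold=0.5):
    """Average the two ensembles' probabilities and threshold the result."""
    p = 0.5 * (clf1.predict_proba(X1)[:, 1] + clf2.predict_proba(X2)[:, 1])
    return (p >= threshold).astype(int)               # 1 = new vessels present
```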

    Detecting linear trend changes in data sequences

    We propose TrendSegment, a methodology for detecting multiple change-points corresponding to linear trend changes in one-dimensional data. A core ingredient of TrendSegment is a new Tail-Greedy Unbalanced Wavelet transform: a conditionally orthonormal, bottom-up transformation of the data through an adaptively constructed unbalanced wavelet basis, which results in a sparse representation of the data. Due to its bottom-up nature, this multiscale decomposition focuses on local features in its early stages and on global features later on, which enables the detection of both long and short linear trend segments at once. To reduce the computational complexity, the proposed method merges multiple regions in a single pass over the data. We show the consistency of the estimated number and locations of the change-points. The practicality of our approach is demonstrated through simulations and two real data examples, involving Iceland temperature data and the sea ice extent of the Arctic and the Antarctic. Our methodology is implemented in the R package trendsegmentR, available on CRAN.
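    To make the problem concrete, the toy sketch below locates a single change-point in a piecewise-linear trend by comparing two-segment least-squares fits at every candidate break. This is only an illustration of the model TrendSegment addresses; it is not the tail-greedy unbalanced wavelet transform, which handles multiple change-points and is available in the R package trendsegmentR.

```python
import numpy as np

def single_trend_change(y, min_seg=5):
    """Return the index that minimises the two-segment linear-fit residual sum of squares."""
    t = np.arange(len(y))
    best_rss, best_cp = np.inf, None
    for cp in range(min_seg, len(y) - min_seg):
        rss = 0.0
        for tt, yy in ((t[:cp], y[:cp]), (t[cp:], y[cp:])):
            coeffs = np.polyfit(tt, yy, 1)                      # fit a straight line to each segment
            rss += float(np.sum((yy - np.polyval(coeffs, tt)) ** 2))
        if rss < best_rss:
            best_rss, best_cp = rss, cp
    return best_cp

# Toy usage: the slope changes from +0.5 to -0.2 at t = 60.
rng = np.random.default_rng(1)
y = np.concatenate([0.5 * np.arange(60), 30 - 0.2 * np.arange(40)]) + rng.normal(0, 1, 100)
print(single_trend_change(y))
```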

    Contrastive Learning for Lane Detection via Cross-Similarity

    Detecting road lanes is challenging due to intricate markings that are vulnerable to unfavorable conditions. Lane markings have strong shape priors, but their visibility is easily compromised: factors such as lighting, weather, vehicles, pedestrians, and aging colors challenge detection. Because numerous lane shapes and natural variations exist, a large amount of data is required to train a lane detection approach that can withstand the variations caused by low visibility. Our solution, Contrastive Learning for Lane Detection via cross-similarity (CLLD), is a self-supervised learning method that tackles this challenge by enhancing lane detection models' resilience to the real-world conditions that reduce lane visibility. CLLD is a novel multitask contrastive learning approach that trains lane detection models to detect lane markings even in low-visibility situations by integrating local feature contrastive learning (CL) with our newly proposed cross-similarity operation. Local feature CL focuses on extracting features for small image parts, which is necessary to localize lane segments, while cross-similarity captures global features to detect obscured lane segments from their surroundings. We enhance cross-similarity by randomly masking parts of the input images for augmentation. Evaluated on benchmark datasets, CLLD outperforms state-of-the-art contrastive learning methods, especially in visibility-impairing conditions such as shadows. Compared to supervised learning, CLLD excels in scenarios like shadows and crowded scenes.
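    The sketch below shows only the generic contrastive ingredients mentioned above: a random patch-masking augmentation and an InfoNCE-style loss that pulls embeddings of the original and masked views together. The encoder, patch size and temperature are placeholders; CLLD's specific cross-similarity operation and multitask setup are not reproduced here.

```python
import torch
import torch.nn.functional as F

def random_patch_mask(images, patch=32, drop_prob=0.3):
    """Zero out random patches of each image as a masking augmentation."""
    b, c, h, w = images.shape
    keep = (torch.rand(b, 1, h // patch, w // patch, device=images.device) > drop_prob).float()
    keep = F.interpolate(keep, size=(h, w), mode="nearest")
    return images * keep

def info_nce(z1, z2, temperature=0.1):
    """z1, z2: (n, d) embeddings of matching views; row i of z1 matches row i of z2."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature                 # cosine-similarity logits
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)            # positives on the diagonal

# Usage with some encoder producing one embedding per image (or per local region):
# z1 = encoder(images)
# z2 = encoder(random_patch_mask(images))
# loss = info_nce(z1, z2)
```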