
    A Local Density-Based Approach for Local Outlier Detection

    This paper presents a simple but effective density-based outlier detection approach using local kernel density estimation (KDE). A Relative Density-based Outlier Score (RDOS) is introduced to measure the local outlierness of objects, in which the density distribution at the location of an object is estimated with a local KDE method based on extended nearest neighbors of the object. Instead of using only the k nearest neighbors, we further consider the reverse nearest neighbors and shared nearest neighbors of an object for density distribution estimation. Some theoretical properties of the proposed RDOS, including its expected value and false alarm probability, are derived. A comprehensive experimental study on both synthetic and real-life data sets demonstrates that our approach is more effective than state-of-the-art outlier detection methods. Comment: 22 pages, 14 figures, submitted to Pattern Recognition Letters.
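
    A minimal sketch of the RDOS idea described above, assuming a Gaussian kernel and Euclidean distances; the function and parameter names are illustrative, not the authors' implementation:

```python
import numpy as np
from scipy.spatial.distance import cdist

def rdos(X, k=5, h=1.0):
    """Relative Density-based Outlier Score; scores well above 1 flag outliers."""
    n, d = X.shape
    D = cdist(X, X)
    np.fill_diagonal(D, np.inf)
    nn = np.argsort(D, axis=1)[:, :k]          # k nearest neighbors
    # Extended neighborhood: k-NN plus reverse NN (points that list i as a
    # neighbor) plus shared NN (neighbors of i's neighbors), per the abstract.
    ext = [set(nn[i]) for i in range(n)]
    for i in range(n):
        for j in nn[i]:
            ext[j].add(i)                      # reverse nearest neighbors
            ext[i].update(nn[j])               # shared nearest neighbors
    for i in range(n):
        ext[i].discard(i)
    # Local Gaussian KDE of the density at each point over its extended set.
    def kde_at(i, idx):
        sq = np.sum((X[idx] - X[i]) ** 2, axis=1)
        return np.mean(np.exp(-sq / (2 * h ** 2)) / ((2 * np.pi) ** (d / 2) * h ** d))
    dens = np.array([kde_at(i, list(ext[i])) for i in range(n)])
    # RDOS: average density of the extended neighbors over the point's own density.
    return np.array([dens[list(ext[i])].mean() / dens[i] for i in range(n)])

scores = rdos(np.random.default_rng(0).standard_normal((200, 2)))
```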

    The XMM Cluster Survey: X-ray analysis methodology

    The XMM Cluster Survey (XCS) is a serendipitous search for galaxy clusters using all publicly available data in the XMM-Newton Science Archive. Its main aims are to measure cosmological parameters and trace the evolution of X-ray scaling relations. In this paper we describe the data processing methodology applied to the 5,776 XMM observations used to construct the current XCS source catalogue. A total of 3,675 > 4-sigma cluster candidates with > 50 background-subtracted X-ray counts are extracted from a total non-overlapping area suitable for cluster searching of 410 deg^2. Of these, 993 candidates are detected with > 300 background-subtracted X-ray photon counts, and we demonstrate that robust temperature measurements can be obtained down to this count limit. We describe in detail the automated pipelines used to perform the spectral and surface brightness fitting for these candidates, as well as to estimate redshifts from the X-ray data alone. A total of 587 (122) X-ray temperatures to a typical accuracy of < 40 (< 10) per cent have been measured to date. We also present the methodology adopted for determining the selection function of the survey, and show that the extended source detection algorithm is robust to a range of cluster morphologies by inserting mock clusters derived from hydrodynamical simulations into real XMM images. These tests show that a simple isothermal beta-profile is sufficient to capture the essential details of the cluster population detected in the archival XMM observations. The redshift follow-up of the XCS cluster sample is presented in a companion paper, together with a first data release of 503 optically-confirmed clusters. Comment: MNRAS accepted, 45 pages, 38 figures. Our companion paper describing our optical analysis methodology and presenting a first set of confirmed clusters has now been submitted to MNRAS.
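
    For reference, the isothermal beta-model mentioned above describes the cluster surface brightness as S(r) = S0 * (1 + (r/r_c)^2)^(1/2 - 3*beta). A minimal sketch of fitting that profile with scipy, using purely illustrative mock data rather than anything from the XCS pipeline:

```python
import numpy as np
from scipy.optimize import curve_fit

def beta_profile(r, S0, r_c, beta):
    """Isothermal beta-model: S(r) = S0 * (1 + (r/r_c)**2) ** (0.5 - 3*beta)."""
    return S0 * (1.0 + (r / r_c) ** 2) ** (0.5 - 3.0 * beta)

# Mock radial profile: the fit recovers (S0, r_c, beta) from noisy samples.
rng = np.random.default_rng(0)
r = np.linspace(0.05, 10.0, 50)                        # radius (arcmin, mock)
S = beta_profile(r, 1.0, 1.5, 0.67) + 0.005 * rng.standard_normal(r.size)
popt, pcov = curve_fit(beta_profile, r, S, p0=[1.0, 1.0, 0.6])
```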

    The (un)resolved X-ray background in the Lockman Hole

    Most of the soft X-ray background, and a growing fraction of the harder X-ray background, has been resolved into emission from point sources, yet the resolved fraction above 7 keV has only been poorly constrained. We use ~700 ks of XMM-Newton observations of the Lockman Hole and a photometric approach to estimate the total flux attributable to resolved sources in a number of different energy bands. We find the resolved fraction of the X-ray background to be ~90 per cent below 2 keV, but it decreases rapidly at higher energies, with the resolved fraction above ~7 keV being only ~50 per cent. The integrated X-ray spectrum from detected sources has a slope of Gamma ~ 1.75, much softer than the Gamma = 1.4 of the total background spectrum. The unresolved background component has the spectral signature of highly obscured AGN. Comment: 6 pages, 6 figures, MNRAS Letters, in press; changed to reflect accepted version.
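
    A minimal sketch of the bookkeeping behind a "resolved fraction" and of why the spectral slopes above imply a hard unresolved remainder; all numbers below are illustrative, not the paper's measurements:

```python
import numpy as np

def resolved_fraction(source_fluxes, total_xrb_flux):
    """Fraction of the background flux accounted for by detected point sources."""
    return np.sum(source_fluxes) / total_xrb_flux

# Power-law photon spectra N(E) ~ E**(-Gamma): the total background is hard
# (Gamma = 1.4) while the summed detected sources are softer (Gamma ~ 1.75),
# so the unresolved remainder must harden toward high energies, consistent
# with highly obscured AGN. Normalizations here are mock values.
E = np.logspace(np.log10(0.5), 1.0, 100)               # 0.5-10 keV grid
total = E ** -1.4
sources = 0.9 * total[0] * (E / E[0]) ** -1.75         # ~90% resolved at 0.5 keV
resolved = sources / total                             # falls as E grows
```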

    Neural Networks for improved signal source enumeration and localization with unsteered antenna arrays

    Direction-of-arrival estimation using unsteered antenna arrays, unlike mechanically scanned or phased arrays, requires complex algorithms which perform poorly with small-aperture arrays or with a small number of observations, or snapshots. In general, these algorithms compute a sample covariance matrix to obtain the direction of arrival, and some require a prior estimate of the number of signal sources. Herein, artificial neural network architectures are proposed which demonstrate improved estimation of the number of signal sources, the true signal covariance matrix, and the direction of arrival. The proposed number-of-sources estimation network demonstrates robust performance in the case of coherent signals, where conventional methods fail. For covariance matrix estimation, four different network architectures are assessed, and the best-performing architecture achieves a 20-fold improvement in performance over the sample covariance matrix. Additionally, this network can achieve performance comparable to the sample covariance matrix with one-eighth the number of snapshots. For direction-of-arrival estimation, preliminary results are provided comparing six architectures, all of which demonstrate high levels of accuracy and demonstrate the benefits of progressively training artificial neural networks by training on a sequence of sub-problems and extending the network to encapsulate the entire process.
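
    A minimal sketch of the sample covariance matrix that the conventional algorithms above start from; the array size and snapshot data are illustrative. Its slow convergence at low snapshot counts is precisely what makes a network that matches it with one-eighth the snapshots useful:

```python
import numpy as np

def sample_covariance(X):
    """X: (num_antennas, num_snapshots) complex snapshot matrix.
    Returns the (num_antennas, num_antennas) sample covariance estimate."""
    return (X @ X.conj().T) / X.shape[1]

# Mock data: 8-element array, 64 snapshots of circular complex Gaussian noise.
rng = np.random.default_rng(0)
X = (rng.standard_normal((8, 64)) + 1j * rng.standard_normal((8, 64))) / np.sqrt(2)
R_hat = sample_covariance(X)
```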

    The Development of Hybrid Process Control Systems For Fluidized Bed Pellet Coating Processes

    The conventional basic control for pharmaceutical batch processes has several drawbacks. Basic control often uses constant process settings discovered by trial and error. This rigid process operation provides limited process understanding and forgoes opportunities for process optimization. Product quality attributes are measured by inefficient off-line tests, so they cannot be used to monitor the process and inform appropriate adjustments. Frequent reprocessing and batch failures are possible consequences if the process is not under effective control. These issues raise serious concerns about the process capability of a pharmaceutical manufacturing process. An alternative process control strategy is perceived as a logical way to improve process capability. To demonstrate the strategy, a hybrid control system is proposed in this work. A challenging aqueous drug-layering process, which had a batch failure rate of 30% when operated using basic control, was investigated as a model system to develop and demonstrate the hybrid control system. The hybrid control consisted of process manipulation, monitoring and optimization. First-principles control was developed to manipulate the process. It used a theory of environmental equivalency to regulate a consistent drying rate for the drug-layering process. The process manipulation method successfully eliminated the batch failures previously encountered in the basic control approach. Process monitoring was achieved by building an empirical analytical model using in-line near-infrared spectroscopy. The model allowed real-time quantitative analysis of the drug-layered content and was able to determine the endpoint of the process. It achieved quality assurance without relying on end-product tests. Process optimization was accomplished by discovering optimum process settings within an operation space. The operation space was constructed using edge-of-failure analysis on a design space. It provided setpoints with higher confidence of meeting the specifications. The integration of these control elements enabled a complete hybrid control system. The results showed that the process capability of the drug-layering process was significantly improved by using hybrid control. The effectiveness was substantiated by statistical evidence from the process capability indices.
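
    A minimal sketch of the standard process capability indices referred to above (Cp and Cpk); the specification limits and sample values are illustrative, not the thesis data:

```python
import numpy as np

def capability_indices(x, lsl, usl):
    """Cp ignores centering; Cpk penalizes a mean shifted toward either limit.
    Cp  = (USL - LSL) / (6 * sigma)
    Cpk = min(USL - mu, mu - LSL) / (3 * sigma)"""
    mu, sigma = np.mean(x), np.std(x, ddof=1)
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk

# Example: drug-layered content (% of target) against mock 95-105% specs.
x = np.random.default_rng(1).normal(100.2, 1.1, size=30)
cp, cpk = capability_indices(x, lsl=95.0, usl=105.0)
```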

    Detecting covariance symmetries for classification of polarimetric SAR images

    The availability of multiple images of the same scene, acquired with the same radar but with different polarizations in both transmission and reception, has the potential to enhance the classification, detection and/or recognition capabilities of a remote sensing system. One way to take advantage of fully polarimetric data is to extract, for each pixel of the considered scene, the polarimetric covariance matrix, coherence matrix, or Mueller matrix, and to exploit them in order to achieve a specific objective. A framework for detecting covariance symmetries within polarimetric SAR images is proposed here. The algorithm is based on the exploitation of the special structures assumed by the polarimetric coherence matrix under symmetry properties of the returns associated with the pixels under test. The performance of the technique is evaluated on both simulated and real L-band SAR data, showing good classification of the different areas within the image.
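
    A minimal sketch of forming the per-pixel polarimetric coherence matrix from the scattering coefficients via the Pauli basis; the multilook window and mock data are illustrative:

```python
import numpy as np

def pauli_vector(S_hh, S_hv, S_vv):
    """Pauli scattering vector k = [S_hh+S_vv, S_hh-S_vv, 2*S_hv] / sqrt(2)."""
    return np.stack([S_hh + S_vv, S_hh - S_vv, 2.0 * S_hv]) / np.sqrt(2.0)

def coherence_matrix(k):
    """k: (3, num_looks) Pauli vectors of the pixels averaged together.
    Returns the 3x3 multilook coherence matrix T = <k k^H>."""
    return (k @ k.conj().T) / k.shape[1]

# Mock: nine looks (a 3x3 window) of complex scattering coefficients. Under
# reflection symmetry, for example, certain off-diagonal entries of T tend to
# zero; such special structures are what the detection framework keys on.
rng = np.random.default_rng(0)
S_hh, S_hv, S_vv = (rng.standard_normal(9) + 1j * rng.standard_normal(9)
                    for _ in range(3))
T = coherence_matrix(pauli_vector(S_hh, S_hv, S_vv))
```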

    A multi-family GLRT for detection in polarimetric SAR images

    This paper deals with detection in multipolarization SAR images. The problem is cast as a composite hypothesis test aimed at discriminating between Polarimetric Covariance Matrix (PCM) equality (absence of a target in the tested region) and the situation where the region under test exhibits a PCM with at least one ordered eigenvalue smaller than that of a reference covariance. This latter setup reflects the physical condition where the backscattering associated with the target leads to a signal, in some eigen-directions, weaker than the one gathered from a reference area known a priori to be free of targets. A Multi-family Generalized Likelihood Ratio Test (MGLRT) approach is pursued to come up with an adaptive detector ensuring the Constant False Alarm Rate (CFAR) property. At the analysis stage, the behaviour of the new architecture is investigated in comparison with a benchmark (but non-implementable) detector and some other adaptive sub-optimum detectors available in the open literature. The study, conducted on both simulated and real data, confirms the practical effectiveness of the new approach.
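
    A schematic sketch of the eigenvalue comparison underlying the hypothesis test above: whiten the test-region sample covariance by the reference one and inspect the ordered eigenvalues. This illustrates only the decision geometry; it is not the paper's MGLRT statistic, and the threshold is illustrative:

```python
import numpy as np

def whitened_eigenvalues(R_test, R_ref):
    """Eigenvalues of R_ref^{-1} R_test, sorted ascending. Values below 1 mark
    eigen-directions where the tested region is weaker than the target-free
    reference, i.e. the H1 condition described in the abstract."""
    vals = np.linalg.eigvals(np.linalg.solve(R_ref, R_test))
    return np.sort(vals.real)

def detect(R_test, R_ref, threshold=0.5):
    """Declare a target if the smallest whitened eigenvalue drops below a
    threshold; in the actual MGLRT the threshold is set to guarantee CFAR."""
    return whitened_eigenvalues(R_test, R_ref)[0] < threshold
```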

    A new robust algorithm for isolated word endpoint detection

    Teager energy and energy-entropy features are two approaches that have recently been used to locate the endpoints of an utterance. However, each has drawbacks for speech in noisy environments. This paper proposes a novel method that combines the two approaches to locate candidate endpoint intervals, and then makes a final decision based on energy, which requires far less time than the feature-based methods. After the algorithm description, an experimental evaluation is presented, comparing the automatically determined endpoints with those determined by skilled personnel. It is shown that the accuracy of this algorithm is quite satisfactory and acceptable.
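
    A minimal sketch of the Teager energy operator named above, one of the two features the method combines; the frame-based thresholding is an illustrative endpoint picker, not the paper's exact decision rule:

```python
import numpy as np

def teager_energy(x):
    """Teager energy operator: Psi[x(n)] = x(n)**2 - x(n-1) * x(n+1)."""
    return x[1:-1] ** 2 - x[:-2] * x[2:]

def coarse_endpoints(x, frame=160, factor=3.0):
    """Flag frames whose mean Teager energy exceeds a multiple of a crude
    noise floor; the first and last flagged frames bracket the utterance."""
    te = teager_energy(x)
    n_frames = te.size // frame
    e = te[: n_frames * frame].reshape(n_frames, frame).mean(axis=1)
    noise_floor = np.median(e)                       # crude noise estimate
    active = np.flatnonzero(e > factor * noise_floor)
    if active.size == 0:
        return None                                  # no speech found
    return active[0] * frame, (active[-1] + 1) * frame
```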