
    EEG processing with TESPAR for depth of anesthesia detection

    Poster presentation. Introduction: Adequate anesthesia is crucial to the success of surgical interventions and subsequent recovery. Neuroscientists, surgeons, and engineers have sought to understand the impact of anesthetics on information processing in the brain and to properly assess the level of anesthesia in a non-invasive manner. Studies have indicated that depth of anesthesia (DOA) detection is more reliable when multiple parameters are employed; indeed, commercial DOA monitors (BIS, Narcotrend, M-Entropy and A-line ARX) use more than one feature extraction method. Here, we propose TESPAR (Time Encoded Signal Processing And Recognition), a time-domain signal processing technique novel to EEG DOA assessment that could enhance existing monitoring devices. …
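    TESPAR is commonly described as coding a waveform by its epochs between successive real zero crossings, each characterized by a duration (D) and a shape (S) symbol. The following is a minimal, illustrative Python sketch of that D/S coding idea, not the authors' coder; the EEG stand-in signal is synthetic.

```python
# Simplified TESPAR-style D/S coding (illustrative assumption, not the paper's code):
# split a signal at its real zero crossings and describe each epoch by its duration D
# (samples) and shape S (number of interior local minima of the rectified epoch).
import numpy as np

def tespar_ds_symbols(signal):
    """Return a list of (D, S) pairs, one per epoch between zero crossings."""
    signal = np.asarray(signal, dtype=float)
    # epoch boundaries: indices where the sign changes between neighboring samples
    crossings = np.where(np.signbit(signal[:-1]) != np.signbit(signal[1:]))[0] + 1
    symbols = []
    for epoch in np.split(signal, crossings):
        if len(epoch) < 2:
            continue
        d = len(epoch)                       # duration in samples
        mag = np.abs(epoch)                  # rectified epoch
        # interior local minima give the "shape" symbol S
        s = int(np.sum((mag[1:-1] < mag[:-2]) & (mag[1:-1] < mag[2:])))
        symbols.append((d, s))
    return symbols

# toy usage: a noisy sine as a stand-in for one EEG channel
t = np.linspace(0, 1, 1000)
x = np.sin(2 * np.pi * 7 * t) + 0.2 * np.random.randn(t.size)
print(tespar_ds_symbols(x)[:5])
```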

    The TREC-2002 video track report

    TREC-2002 saw the second running of the Video Track, the goal of which was to promote progress in content-based retrieval from digital video via open, metrics-based evaluation. The track used 73.3 hours of publicly available digital video (in MPEG-1/VCD format) downloaded by the participants directly from the Internet Archive (Prelinger Archives) (internetarchive, 2002) and some from the Open Video Project (Marchionini, 2001). The material comprised advertising, educational, industrial, and amateur films produced between the 1930s and the 1970s by corporations, nonprofit organizations, trade associations, community and interest groups, educational institutions, and individuals. 17 teams representing 5 companies and 12 universities - 4 from Asia, 9 from Europe, and 4 from the US - participated in one or more of the three tasks in the 2002 video track: shot boundary determination, feature extraction, and search (manual or interactive). Results were scored by NIST using manually created truth data for shot boundary determination and manual assessment of feature extraction and search results. This paper is an introduction to, and an overview of, the track framework - the tasks, data, and measures - the approaches taken by the participating groups, the results, and issues regarding the evaluation. For detailed information about the approaches and results, the reader should see the various site reports in the final workshop proceedings.

    Feature extraction in batik image geometric motif using Canny edge detection

    One of Indonesia's priceless cultural heritages is batik; UNESCO has even recognized batik as part of Indonesia's intangible cultural heritage (inscribed 2 October 2009). Unfortunately, many Indonesians do not have sufficient knowledge about the many existing batik motifs, even though each of these motifs carries cultural value that must be preserved. Therefore, it is necessary to develop a model that can recognize batik motifs automatically. Such a model can be built using various kinds of pattern recognition algorithms. One of the most important stages in the recognition of batik motifs is feature extraction, which is needed to determine the parameters that are able to characterize a batik motif. One feature extraction approach is edge detection, and this research focuses on feature extraction using Canny edge detection. The result of edge detection forms the pattern of a batik motif; the pattern contains pixel values 0 and 1, which can later be used as input at the classification stage.
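    As a rough illustration of the Canny step described above (not the authors' code; the file name, blur kernel and thresholds are assumptions), a minimal Python sketch that produces the 0/1 edge pattern:

```python
# Hedged sketch: Canny edge detection producing a binary (0/1) pattern from a batik
# image. File name and parameter values are illustrative assumptions.
import cv2
import numpy as np

def batik_edge_pattern(path, low=100, high=200):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(path)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)   # suppress fabric texture noise
    edges = cv2.Canny(gray, low, high)         # 0 or 255 per pixel
    return (edges > 0).astype(np.uint8)        # 0/1 pattern for the classification stage

pattern = batik_edge_pattern("batik_motif.jpg")  # hypothetical sample image
print(pattern.shape, int(pattern.sum()), "edge pixels")
```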

    Texture Characteristic of Local Binary Pattern on Face Recognition with Probabilistic Linear Discriminant Analysis

    Face recognition is an identification system that uses the characteristics of a person's face for processing. A face image contains features that allow one face to be distinguished from another, and one way to recognize face images is to analyze the texture of the face image. Texture analysis generally requires a feature extraction process; the resulting characteristics differ between images and become the basis for recognizing facial images. However, existing face recognition methods suffer from efficiency problems and rely heavily on extracting the right features. This study examines the texture characteristics extracted with the Local Binary Pattern (LBP) method, applied to recognition with Probabilistic Linear Discriminant Analysis (PLDA). The data used in this study are human face images from the AR Faces database, consisting of 136 subjects (76 men and 60 women), each with 7 types of images. The test results show that the LBP method produces the highest accuracy, 95.53%, when combined with PLDA recognition.
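    A minimal sketch of the LBP feature extraction step, assuming scikit-image; the image path and LBP parameters are illustrative, and the PLDA classification stage is not reproduced here.

```python
# Hedged sketch: uniform LBP histogram as a texture feature vector for one face image.
import numpy as np
from skimage import io, color
from skimage.feature import local_binary_pattern

def lbp_histogram(path, points=8, radius=1):
    image = io.imread(path)
    if image.ndim == 3:
        image = color.rgb2gray(image[..., :3])          # drop alpha if present
    lbp = local_binary_pattern(image, points, radius, method="uniform")
    n_bins = points + 2                                 # uniform codes + one catch-all bin
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist                                         # feature vector fed to the classifier

features = lbp_histogram("face.png")                    # hypothetical AR Faces sample
print(features)
```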

    Hybrid HC-PAA-G3K for novelty detection on industrial systems

    Piecewise aggregate approximation (PAA) provides a powerful yet computationally efficient tool for dimensionality reduction and feature extraction. A new distance-based hierarchical clustering (HC) is now proposed to adjust the PAA segment frame sizes. The proposed hybrid HC-PAA is validated by a generic clustering method ‘G3Kmeans’ (G3K). The efficacy of the hybrid HC-PAA-G3K methodology is demonstrated using an application case study based on novelty detection on industrial gas turbines. Results show the hybrid HC-PAA provides improved performance with regard to cluster separation compared to traditional PAA. The proposed method therefore provides a robust algorithm for feature extraction and novelty detection. There are two main contributions of the paper: 1) application of HC to modify the conventional PAA segment frame size; 2) introduction of ‘G3Kmeans’ to improve the performance of traditional K-means clustering methods.
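    For reference, plain PAA with equal frame sizes reduces each segment of a series to its mean; the HC adjustment of frame sizes described above is not reproduced here. A minimal, assumed sketch:

```python
# Hedged sketch of standard piecewise aggregate approximation (equal segment sizes).
import numpy as np

def paa(series, n_segments):
    """Reduce a 1-D series to n_segments values by averaging each frame."""
    series = np.asarray(series, dtype=float)
    frames = np.array_split(series, n_segments)      # near-equal frames
    return np.array([frame.mean() for frame in frames])

# toy usage: a 600-sample signal compressed to 12 PAA features
signal = np.sin(np.linspace(0, 6 * np.pi, 600)) + 0.1 * np.random.randn(600)
print(paa(signal, 12))
```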

    Intersection and Curvature Feature Extraction for Recognizing Printed Letters (Ekstraksi Fitur Perpotongan dan Lengkungan untuk Mengenali Huruf Cetak)

    Techniques for manipulating images are increasingly numerous, especially in character recognition, and each technique has its own advantages and disadvantages. One frequently used technique is the artificial neural network, while recognition based on feature extraction is still rarely used. Therefore, an application was built to recognize capital letters in an image through feature extraction. The technique finds intersection (edge) and curvature features in images containing capital letters so that the letters can be recognized from these features. Testing yielded examples of recognition in which the intersection and curvature features, stored in a matrix, are compared against the basic structure of each letter. The matrices generated in these tests give quite satisfactory results for certain fonts, but shortcomings remain in recognizing letters in fonts that are not symmetrical.
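    As a loose illustration of one kind of intersection feature (not the paper's exact coding), counting background-to-ink transitions along the middle row and column of a binarized letter:

```python
# Hedged sketch: crossing counts as a simple structural feature of a binary letter image.
import numpy as np

def crossing_counts(binary_letter):
    """binary_letter: 2-D array of 0/1 pixels, 1 = ink."""
    binary_letter = np.asarray(binary_letter)
    mid_row = binary_letter[binary_letter.shape[0] // 2, :]
    mid_col = binary_letter[:, binary_letter.shape[1] // 2]
    row_crossings = int(np.sum(np.diff(mid_row) == 1))   # 0 -> 1 transitions
    col_crossings = int(np.sum(np.diff(mid_col) == 1))
    return row_crossings, col_crossings

# toy 'H': two vertical strokes joined by a crossbar
h = np.zeros((7, 7), dtype=int)
h[:, 1] = h[:, 5] = 1
h[3, 1:6] = 1
print(crossing_counts(h))   # -> (1, 1): the crossbar makes the mid row one ink run
```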

    TREC video retrieval evaluation: a case study and status report

    The TREC Video Retrieval Evaluation is a multiyear, international effort, funded by the US Advanced Research and Development Activity (ARDA) and the National Institute of Standards and Technology (NIST), to promote progress in content-based retrieval from digital video via open, metrics-based evaluation. Now beginning its fourth year, it aims over time to develop both a better understanding of how systems can effectively accomplish such retrieval and of how one can reliably benchmark their performance. This paper can be seen as a case study in the development of video retrieval systems and their evaluation, as well as a report on their status to date. After an introduction to the evolution of the evaluation over the past three years, the paper reports on the most recent evaluation, TRECVID 2003: the evaluation framework (the 4 tasks of shot boundary determination, high-level feature extraction, story segmentation and typing, and search; 133 hours of US television news data; and the measures), the results, and the approaches taken by the 24 participating groups.

    Evaluating Feature Extraction Methods for Biomedical Word Sense Disambiguation

    Clint Cuffy, Sam Henry and Bridget McInnes, PhD. Virginia Commonwealth University, Richmond, Virginia, USA.

    Introduction. Biomedical text processing is currently a highly active research area, but ambiguity is still a barrier to the processing and understanding of these documents. Many word sense disambiguation (WSD) approaches represent instances of an ambiguous word as a distributional context vector. One problem with using these vectors is noise: information that is overly general and does not contribute to the word's representation. Feature extraction approaches attempt to compensate for sparsity and reduce noise by transforming the data from a high-dimensional space to a space of fewer dimensions. Word embeddings [1] have become an increasingly popular method to reduce the dimensionality of vector representations. In this work, we evaluate word embeddings in a knowledge-based word sense disambiguation method.

    Methods. Context requiring disambiguation consists of an instance of an ambiguous word and multiple denotative senses. In our method, each word is replaced with its respective word embedding, and the embeddings are either summed or averaged to form a single instance vector representation. The same is done for each sense of an ambiguous word, using the sense's definition obtained from the Unified Medical Language System (UMLS). We calculate the cosine similarity between each sense vector and the instance vector, and assign the instance the sense with the highest value.

    Evaluation. We evaluate our method on three biomedical WSD datasets: NLM-WSD, MSH-WSD and Abbrev. The word embeddings were trained on the titles and abstracts from the 2016 Medline baseline. We compare two word embedding models, Skip-gram and Continuous Bag of Words (CBOW), vary the word vector lengths from one hundred to one thousand, and compare differences in accuracy.

    Results. Overall, the method demonstrates fairly high accuracy at disambiguating biomedical instance context from groups of denotative senses. The Skip-gram model obtained a higher disambiguation accuracy than CBOW, but the increase was not significant for all of the datasets. Similarly, vector representations of differing lengths displayed minimal change in results, often differing by mere tenths of a percent. We also compared our results to current state-of-the-art knowledge-based WSD systems, including those that have used word embeddings, and found comparable or higher disambiguation accuracy.

    Conclusion. Although biomedical literature can be ambiguous, our knowledge-based feature extraction method using word embeddings demonstrates high accuracy in disambiguating biomedical text while eliminating much of the associated noise. In the future, we plan to explore additional dimensionality reduction methods and training data.

    [1] T. Mikolov, I. Sutskever, K. Chen, G. Corrado and J. Dean. Distributed representations of words and phrases and their compositionality. Advances in Neural Information Processing Systems, pp. 3111-3119, 2013.
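    A minimal sketch of the scoring step described in the Methods paragraph; the toy vectors and names below are assumptions, and real embeddings would come from the trained Skip-gram or CBOW models.

```python
# Hedged sketch: assign an ambiguous instance the sense whose summed definition
# embedding has the highest cosine similarity with the summed instance embedding.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def disambiguate(instance_vectors, sense_definition_vectors):
    """Both arguments hold per-token word-embedding arrays already looked up."""
    instance = np.sum(instance_vectors, axis=0)          # or np.mean(...), as in the poster
    scores = {sense: cosine(instance, np.sum(vectors, axis=0))
              for sense, vectors in sense_definition_vectors.items()}
    return max(scores, key=scores.get), scores

# toy 3-dimensional embeddings (real models use one hundred to one thousand dimensions)
rng = np.random.default_rng(0)
instance_tokens = rng.normal(size=(6, 3))
senses = {"sense_a": rng.normal(size=(4, 3)), "sense_b": rng.normal(size=(5, 3))}
print(disambiguate(instance_tokens, senses))
```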

    VIBRATION FEATURE EXTRACTION METHODS FOR GEAR FAULTS DIAGNOSIS - A REVIEW

    The key point of condition monitoring and fault diagnosis of gearboxes is fault feature extraction. The study of fault feature detection in rotating machinery through vibration analysis has attracted sustained attention during the past decades. In most cases, determining the condition of a gearbox requires studying more than one feature or combining several techniques. This paper attempts to survey and summarize recent research and development of feature extraction methods for gear fault diagnosis, providing references for researchers concerned with this topic and helping them identify further research directions. First, the feature extraction methods for gear fault diagnosis are briefly introduced, their usefulness is illustrated, and the associated problems and corresponding solutions are listed. Then, recent applications of feature extraction methods to gear fault diagnosis in industrial gearboxes are summarized. Finally, the open problems of feature extraction methods for gear fault diagnosis are discussed and potential future research directions are identified. It is expected that this review will serve as an introductory summary of vibration feature extraction methods for those new to their application in vibration-based gear fault diagnosis.
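    As a generic illustration (not taken from the paper), a few time-domain vibration features that such surveys typically cover, computed for one signal frame:

```python
# Hedged sketch: RMS, (non-excess) kurtosis and crest factor of a vibration frame.
import numpy as np

def time_domain_features(frame):
    frame = np.asarray(frame, dtype=float)
    rms = np.sqrt(np.mean(frame ** 2))
    centered = frame - frame.mean()
    kurtosis = np.mean(centered ** 4) / (np.mean(centered ** 2) ** 2)
    crest_factor = np.max(np.abs(frame)) / rms
    return {"rms": rms, "kurtosis": kurtosis, "crest_factor": crest_factor}

# toy signal: a gear-mesh-like tone plus an impulsive component mimicking a local fault
t = np.linspace(0, 1, 5000)
vibration = np.sin(2 * np.pi * 350 * t) + 0.8 * (np.sin(2 * np.pi * 25 * t) > 0.99)
print(time_domain_features(vibration))
```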

    Direction Selective Contour Detection for Salient Objects

    The active contour model is a widely used technique for automatic object contour extraction. Existing methods based on this model can perform with high accuracy even in the case of complex contours, but challenging issues remain, such as the need for precise contour initialization for high-curvature boundary segments or the handling of cluttered backgrounds. To deal with such issues, this paper presents a salient object extraction method, the first step of which is the introduction of an improved edge map that incorporates edge direction as a feature. The direction information in small neighborhoods of image feature points is extracted, and the image's prominent orientations are defined for direction-selective edge extraction. Using such improved edge information, we provide a highly accurate shape contour representation, which we also combine with texture features. The principle of the paper is to interpret an object as the fusion of its components: its extracted contour and its inner texture. Our goal in fusing textural and structural information is twofold: it is applied for automatic contour initialization, and it is also used to establish an improved external force field. This fusion then produces highly accurate salient object extractions. We performed extensive evaluations which confirm that the presented object extraction method outperforms parametric active contour models and achieves higher efficiency than the majority of the evaluated automatic saliency methods.
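    A rough sketch of a direction-aware edge analysis in the spirit described above (not the authors' method; the thresholds, bin count and file name are assumptions): Sobel gradients give per-pixel edge direction, and a magnitude-weighted orientation histogram exposes the image's prominent orientations.

```python
# Hedged sketch: per-pixel edge direction from Sobel gradients plus a coarse
# orientation histogram over the strongest edges.
import numpy as np
import cv2

def orientation_histogram(path, n_bins=8):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(path)
    gray = gray.astype(np.float32)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    magnitude = np.hypot(gx, gy)
    direction = np.arctan2(gy, gx)                      # -pi..pi per pixel
    strong = magnitude > np.percentile(magnitude, 90)   # keep only strong edges
    hist, bin_edges = np.histogram(direction[strong], bins=n_bins,
                                   range=(-np.pi, np.pi), weights=magnitude[strong])
    return hist / hist.sum(), bin_edges                 # peaks mark prominent orientations

hist, bin_edges = orientation_histogram("salient_object.png")   # hypothetical image
print(hist)
```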