6 research outputs found

    LCD ํŒจ๋„ ์ƒ์˜ ๋ถˆ๋Ÿ‰ ๋ถ„๋ฅ˜๋ฅผ ์œ„ํ•œ Laplacian ๊ทธ๋ž˜ํ”„๋ฅผ ์ด์šฉํ•œ ํŠน์„ฑ ์ถ”์ถœ ๋ฐฉ๋ฒ•

    ํ•™์œ„๋…ผ๋ฌธ (์„์‚ฌ)-- ์„œ์šธ๋Œ€ํ•™๊ต ๋Œ€ํ•™์› : ์ปดํ“จํ„ฐ๊ณตํ•™๊ณผ, 2012. 8. ์œ ์„์ธ.LCD ํŒจ๋„ ์œ„์— ์กด์žฌํ•˜๋Š” ๋ถˆ๋Ÿ‰์€ ํฌ๊ฒŒ 4๊ฐ€์ง€ ์ข…๋ฅ˜๋กœ ๋‚˜๋ˆŒ ์ˆ˜ ์žˆ๋‹ค. ๊ฐ ๋ถˆ๋Ÿ‰์€ ์„œ๋กœ ๋‹ค๋ฅธ ๋ฐฉ์‹์œผ๋กœ ๋‹ค๋ค„์•ผ ํ•˜๊ธฐ ๋•Œ๋ฌธ์— ์ •ํ™•ํ•œ ๋ถ„๋ฅ˜๋ฐฉ๋ฒ•์ด ํ•„์š”ํ•˜๋‹ค. ์ •ํ™•ํ•œ ๋ถ„๋ฅ˜๋ฅผ ์œ„ํ•ด์„œ๋Š” ๋ถˆ๋Ÿ‰์€ ์ž˜ ๋‚˜ํƒ€๋‚ด๋Š” ํŠน์„ฑ๋“ค๊ณผ ์ข‹์€ ๋ถ„๋ฅ˜๊ธฐ๋ฅผ ์‚ฌ์šฉํ•ด์•ผ ํ•œ๋‹ค. ์ด ๋…ผ๋ฌธ์—์„œ๋Š” ๋ถˆ๋Ÿ‰์„ ํ‘œํ˜„ํ•˜๊ธฐ ์œ„ํ•ด ๋ถˆ๋Ÿ‰ ์˜์—ญ์˜ ๋ฐ๊ธฐ, ๋ชจ์–‘, ํ†ต๊ณ„์ ์ธ ํŠน์„ฑ๋“ค์„ ์‚ฌ์šฉํ•˜๊ณ , ๊ฐ€์šฐ์‹œ์•ˆ ํ˜ผํ•ฉ ๋ชจ๋ธ์„ ์ด์šฉํ•œ ๋ฒ ์ด์ฆˆ ๋ถ„๋ฅ˜๋ฐฉ๋ฒ•์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ถˆ๋Ÿ‰์„ ๋ถ„๋ฅ˜ํ•˜๊ฒŒ ๋œ๋‹ค. ํ•˜์ง€๋งŒ ๋ถˆ๋Ÿ‰์„ ๋‚˜ํƒ€๋‚ด๋Š” ํŠน์„ฑ ์ค‘์— ๋…ธ์ด์ฆˆ๊ฐ€ ์กด์žฌํ•˜๊ฑฐ๋‚˜ ๋ถ„๋ฅ˜์— ๊ด€๋ จ์ด ์—†๋Š” ํŠน์„ฑ๋“ค์ด ๋งŽ์ด ์กด์žฌํ•˜๋Š” ๊ฒฝ์šฐ์—๋Š” ๋ถ„๋ฅ˜ ๊ฒฐ๊ณผ๊ฐ€ ์•ˆ ์ข‹๊ฒŒ ๋‚˜์˜ฌ ์ˆ˜ ์žˆ๋‹ค. ๋”ฐ๋ผ์„œ ์—ฌ๊ธฐ์„œ๋Š” ํŠน์„ฑ ์ถ”์ถœ๋ฐฉ๋ฒ•์„ ์‚ฌ์šฉํ•˜๊ฒŒ ๋œ๋‹ค. ํŠน์„ฑ ์ถ”์ถœ ๋ฐฉ๋ฒ•์€ ์—ฐ๊ด€์„ฑ์ด ์ ์€ ํŠน์„ฑ๋“ค์ด ๋ถ„๋ฅ˜์— ๋ฏธ์น˜๋Š” ์˜ํ–ฅ์„ ์ค„์—ฌ์ค„ ๋ฟ๋งŒ ์•„๋‹ˆ๋ผ ๋ฐ์ดํ„ฐ์˜ ์ฐจ์›์„ ์ค„์—ฌ์ฃผ์–ด ๋ถ„์„์„ ์šฉ์ดํ•˜๊ฒŒ ํ•ด์ฃผ๋ฉด์„œ ๊ณ„์‚ฐ ์†๋„๋ฅผ ํ–ฅ์ƒ์‹œํ‚ค๋Š” ํšจ๊ณผ๋ฅผ ๋‚ผ ์ˆ˜ ์žˆ๋‹ค. ์ฃผ์š” ์„ฑ๋ถ„ ๋ถ„์„ ๋ฐฉ๋ฒ•์€ ์ด๋Ÿฌํ•œ ํŠน์„ฑ ์ถ”์ถœ ๋ฐฉ๋ฒ•์˜ ์ค‘ ๊ฐ€์žฅ ์œ ๋ช…ํ•œ ๋ฐฉ๋ฒ•์ค‘์˜ ํ•˜๋‚˜๋กœ ์ข‹์€ ์„ฑ๋Šฅ์„ ๋‚ธ๋‹ค๊ณ  ์•Œ๋ ค์ ธ ์žˆ๋‹ค. ํ•˜์ง€๋งŒ ์ฃผ์š” ์„ฑ๋ถ„ ๋ถ„์„ ๋ฐฉ๋ฒ• ์—ญ์‹œ ๋งŽ์€ ์ˆ˜์˜ ์˜๋ฏธ ์—†๋Š” ํŠน์„ฑ๋“ค์ด ์กด์žฌํ•˜๋Š” ๊ฒฝ์šฐ์—๋Š” ๋‚˜์œ ์„ฑ๋Šฅ์„ ๋ณด์—ฌ์ค€๋‹ค. ์ด๋Ÿฌํ•œ ๋ฌธ์ œ๋ฅผ ํ•ด๊ฒฐํ•˜๊ธฐ ์œ„ํ•ด ์ด ๋…ผ๋ฌธ์—์„œ๋Š” ์ŠคํŽ™ํŠธ๋Ÿด ๊ทธ๋ž˜ํ”„ ์ด๋ก ์„ ์ด์šฉํ•œ ํŠน์„ฑ ์ถ”์ถœ ๋ฐฉ๋ฒ•์„ ์ œ์•ˆํ•˜์˜€๋‹ค. ์ฃผ์š” ์„ฑ๋ถ„ ๋ถ„์„ ๋ฐฉ๋ฒ•์ด ๋ฐ์ดํ„ฐ์˜ ๊ณต๋ถ„์‚ฐ ํ–‰๋ ฌ์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ๊ณผ๋Š” ๋‹ฌ๋ฆฌ, ์ œ์•ˆ๋œ ๋ฐฉ๋ฒ•์€ ์ƒ˜ํ”Œ ๊ฐ„์˜ ์œ ์‚ฌ๋„๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•œ ๊ทธ๋ž˜ํ”„ ๋ผํ”Œ๋ผ์‹œ์•ˆ ๋งคํŠธ๋ฆญ์Šค๋ฅผ ์ƒ์„ฑํ•˜์—ฌ ๊ทธ ๊ณ ์œ  ๊ฐ’๊ณผ ๊ณ ์œ  ๋ฒกํ„ฐ๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ•์ด๋‹ค. ์‹คํ—˜ ๊ฒฐ๊ณผ๋ฅผ ๋ณด๋ฉด ์•Œ ์ˆ˜ ์žˆ๋“ฏ์ด ๊ทธ๋ž˜ํ”„ ๋ผํ”Œ๋ผ์‹œ์•ˆ์„ ์ด์šฉํ•œ ํŠน์„ฑ ์ถ”์ถœ ๋ฐฉ๋ฒ•์€ ์ฃผ์š” ์„ฑ๋ถ„ ๋ถ„์„์„ ์‚ฌ์šฉํ•œ ๊ฒฝ์šฐ๋ณด๋‹ค ๋” ์ข‹์€ ๋ถ„๋ฅ˜ ์„ฑ๊ณต๋ฅ ์„ ๋ณด์—ฌ์ค€๋‹ค. ๋˜ํ•œ ์ œ์•ˆํ•œ ๋ฐฉ๋ฒ•์ด ์ž„์˜๋กœ ์˜๋ฏธ ์—†๋Š” ํŠน์„ฑ์ด ์ถ”๊ฐ€๋œ ๊ฒฝ์šฐ์˜ ์‹คํ—˜์— ๋Œ€ํ•ด์„œ๋„ ๋งค์šฐ ๊พธ์ค€ํ•œ ์„ฑ๋Šฅ์„ ๋ณด์—ฌ์คŒ์„ ์•Œ ์ˆ˜ ์žˆ๋‹ค.There are four types of defects on LCD panel. For exact classification for the defects, good feature selection and classifier are necessary. In this paper, various features such as brightness, shape and statistical features are stated and Bayes classifier using Gaussian mixture model is used as classifier. But noisy or irrelevant features can harass the classification result. Feature extraction method can reduce the influence of irrelevant features and dimensionality to analyze complicated data well. Principal Component Analysis was one of the most famous feature extraction method had appropriate performance. However PCA would produce poor result if many noisy features exist. To solve that problem of PCA, feature extraction method based on spectral graph theory is proposed. Unlike PCA, proposed method using graph Laplacian matrix based on similarity instead of covariance matrix for analyzing spectral system. Experimental result shows that feature extraction method using graph Laplacian produces better performance than the result using PCA. And also proposed method is very robust to randomly added noisy features.Chapter 1. Introduction 1 Chapter 2. Defects and features 4 2.1. Definition 4 2.2. Brightness features 6 2.3. Shape features 6 2.4. Statistical features 9 Chapter 3. Feature extraction 11 3.1. 
Principal component analysis 11 3.2. Graph Laplacian 12 Chapter 4. Gaussian mixture models 14 4.1. Training 14 4.2. Classification 17 Chapter 5. Experiment 18 Chapter 6. Conclusion 21 ReferenceMaste
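
    To make the similarity-based approach concrete, here is a minimal Python sketch of a Laplacian-eigenmaps-style embedding under assumed choices (a Gaussian similarity kernel and the unnormalized Laplacian L = D - W); the thesis's exact similarity measure, kernel width, and classifier pipeline are not reproduced, and sigma and k below are illustrative parameters.

        import numpy as np

        def laplacian_features(X, k=2, sigma=1.0):
            """Embed samples X (n x d) into k spectral features.

            Builds a Gaussian-similarity graph over samples, forms the
            unnormalized graph Laplacian L = D - W, and uses the
            eigenvectors of the smallest non-trivial eigenvalues as
            the new features (in place of PCA's covariance eigenvectors).
            """
            # Pairwise squared Euclidean distances between samples.
            sq = np.sum(X**2, axis=1)
            d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
            W = np.exp(-d2 / (2 * sigma**2))   # similarity (adjacency) matrix
            np.fill_diagonal(W, 0.0)
            D = np.diag(W.sum(axis=1))         # degree matrix
            L = D - W                          # graph Laplacian
            vals, vecs = np.linalg.eigh(L)     # eigenvalues in ascending order
            # Skip the trivial constant eigenvector (eigenvalue ~ 0).
            return vecs[:, 1:k + 1]

        # Toy usage: two well-separated noisy clusters in 5 dimensions.
        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(0, 0.3, (20, 5)),
                       rng.normal(3, 0.3, (20, 5))])
        Z = laplacian_features(X, k=2)
        print(Z.shape)  # (40, 2)

    Because the Laplacian is built from pairwise similarities rather than the global covariance, features that carry no neighborhood structure contribute little to the embedding, which is consistent with the robustness to added noisy features reported above.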

    Simultaneous Bayesian Clustering and Feature Selection Through Student's t Mixtures Model

    In this paper, we propose a generative model for feature selection in the unsupervised learning context. The model assumes that data are independently and identically sampled from a finite mixture of Student's t distributions, which reduces sensitivity to outliers. Latent random variables that represent the features' salience are included in the model to indicate the relevance of each feature. As a result, the model is expected to simultaneously realize clustering, feature selection, and outlier detection. Inference is carried out by a tree-structured variational Bayes algorithm. A full Bayesian treatment is adopted in the model to realize automatic model selection. Controlled experimental studies showed that the developed model is capable of accurately modeling data sets with outliers. Furthermore, experimental results showed that the developed algorithm compares favorably against existing unsupervised probabilistic model-based Bayesian feature selection algorithms on artificial and real data sets. Moreover, application of the developed algorithm to real leukemia gene expression data indicated that it can identify the discriminating genes successfully.
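
    The following Python sketch only evaluates the kind of generative density such a model is built on: a feature-saliency Student's t mixture in the style of Law et al.'s saliency formulation, where each feature d has a salience rho_d and irrelevant features are explained by a shared background distribution. It does not implement the paper's tree-structured variational Bayes inference, and all parameter names and shapes are assumptions for illustration.

        import numpy as np
        from scipy.stats import t as student_t

        def mixture_density(X, weights, mus, scales, dfs, rho,
                            bg_mu, bg_scale, bg_df):
            """Density of a feature-saliency Student's t mixture.

            p(x) = sum_j w_j * prod_d [ rho_d * t(x_d; mu_jd, s_jd, nu_j)
                                        + (1 - rho_d) * t(x_d; bg_mu_d, bg_s_d, bg_nu) ]
            rho_d in [0, 1] is the salience of feature d; features with low
            salience are mostly explained by the shared background t density.
            """
            n, d = X.shape
            dens = np.zeros(n)
            # Background (irrelevant-feature) densities, shared across components.
            bg = student_t.pdf(X, df=bg_df, loc=bg_mu, scale=bg_scale)   # (n, d)
            for j, w in enumerate(weights):
                comp = student_t.pdf(X, df=dfs[j], loc=mus[j], scale=scales[j])
                per_dim = rho * comp + (1 - rho) * bg                    # (n, d)
                dens += w * per_dim.prod(axis=1)
            return dens

        # Toy usage with assumed parameters: feature 0 salient, feature 2 noise.
        rng = np.random.default_rng(1)
        X = rng.standard_t(df=5, size=(100, 3))
        print(mixture_density(
            X,
            weights=np.array([0.5, 0.5]),
            mus=np.array([[-2.0, 0.0, 0.0], [2.0, 0.0, 0.0]]),
            scales=np.ones((2, 3)), dfs=[5.0, 5.0],
            rho=np.array([0.9, 0.5, 0.1]),
            bg_mu=np.zeros(3), bg_scale=np.ones(3), bg_df=5.0)[:3])

    The heavy t tails let outlying points receive non-negligible density without dragging component means, which is the stated reason for preferring t mixtures over Gaussian ones here.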

    ACO-based feature selection algorithm for classification

    A dataset with a small number of records but a large number of attributes exhibits a phenomenon called the "curse of dimensionality". Classifying this type of dataset requires feature selection (FS) methods to extract the useful information. The modified graph clustering ant colony optimisation (MGCACO) algorithm is an effective FS method developed by grouping highly correlated features. However, the MGCACO algorithm has three main drawbacks in producing a feature subset: its clustering method, its parameter sensitivity, and its final subset determination. An enhanced graph clustering ant colony optimisation (EGCACO) algorithm is proposed to solve these three problems. The proposed improvements include: (i) an ACO feature clustering method that obtains clusters of highly correlated features; (ii) an adaptive selection technique for constructing subsets from the clusters of features; and (iii) a genetic-based method for producing the final subset of features. The ACO feature clustering method uses mechanisms such as intensification and diversification for local and global optimisation to provide highly correlated features. The adaptive ant selection technique lets the parameter change adaptively based on feedback from the search space. The genetic method determines the final subset automatically, based on crossover and subset quality calculation. The performance of the proposed algorithm was evaluated on 18 benchmark datasets from the University of California Irvine (UCI) repository and nine deoxyribonucleic acid (DNA) microarray datasets against 15 benchmark metaheuristic algorithms. On the UCI datasets, the EGCACO algorithm is superior to the other benchmark optimisation algorithms in terms of the number of selected features for 16 of the 18 datasets (88.89%) and gives the best classification accuracy on eight (44.47%) of them. Further, experiments on the nine DNA microarray datasets showed that the EGCACO algorithm is superior to the benchmark algorithms in terms of classification accuracy (first rank) for seven datasets (77.78%) and selects the fewest features in six datasets (66.67%). The proposed EGCACO algorithm can be utilised for FS in DNA microarray classification tasks that involve large datasets in various application domains.
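
    For readers unfamiliar with the underlying technique, here is a minimal, generic ACO feature selection wrapper in Python. It is not the EGCACO algorithm: the feature clustering, adaptive ant selection, and genetic subset-determination stages are omitted, and the dataset, the KNN scorer, and all parameters below are illustrative assumptions.

        import numpy as np
        from sklearn.datasets import load_wine
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.model_selection import cross_val_score

        def aco_feature_selection(X, y, n_ants=10, n_iters=20, evap=0.1, seed=0):
            """Plain ACO wrapper for feature selection.

            Each feature holds a pheromone level; ants include a feature with
            probability tied to its pheromone, subsets are scored by
            cross-validated accuracy, and the iteration-best ant deposits
            pheromone back on its features after evaporation.
            """
            rng = np.random.default_rng(seed)
            n_feats = X.shape[1]
            pher = np.ones(n_feats)
            best_mask, best_score = np.ones(n_feats, bool), 0.0
            for _ in range(n_iters):
                it_mask, it_score = None, -1.0
                for _ in range(n_ants):
                    p = pher / pher.sum()
                    # Sample an inclusion mask; keep at least one feature.
                    mask = rng.random(n_feats) < np.clip(p * n_feats * 0.5,
                                                         0.05, 0.95)
                    if not mask.any():
                        mask[rng.integers(n_feats)] = True
                    score = cross_val_score(KNeighborsClassifier(5),
                                            X[:, mask], y, cv=3).mean()
                    if score > it_score:
                        it_score, it_mask = score, mask.copy()
                # Evaporation, then iteration-best deposit (intensification).
                pher *= (1 - evap)
                pher[it_mask] += evap * it_score
                if it_score > best_score:
                    best_score, best_mask = it_score, it_mask
            return best_mask, best_score

        X, y = load_wine(return_X_y=True)
        mask, acc = aco_feature_selection(X, y)
        print(mask.sum(), "features selected, CV accuracy %.3f" % acc)

    The evaporation/deposit cycle is the diversification-versus-intensification trade-off the abstract refers to: evaporation keeps the search exploring, while deposits concentrate ants on features that appeared in good subsets.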

    Effective detection of security compromises in enterprises using feature engineering

    We present a method to effectively detect malicious activity in enterprise log data. Our method involves feature engineering: generating new features by applying operators to the features of the raw data. We apply the Fourier expansion of Boolean functions to generate parity functions on feature subsets, or parity features. We also investigate a heuristic method of applying Boolean operators to raw data features, generating propositional features. We demonstrate on real data sets that the engineered features enhance the performance of classifiers and clustering algorithms. Compared to classification on raw data features, the engineered features achieve up to 50.6% improvement in malicious recall while sacrificing no more than 0.47% in accuracy. Clustering with respect to the engineered features finds up to 6 "pure" malicious clusters, compared to 0 "pure" clusters with raw data features. In one case, exactly one engineered feature achieved higher performance than 91 raw data features. In general, a small number (<10) of engineered features achieve higher performance than the raw data features.
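
    A parity feature over a subset S of Boolean features is the XOR of those features, which (over a +/-1 encoding) is exactly the Fourier character chi_S used in the expansion of Boolean functions. The Python sketch below generates such features from 0/1-encoded raw features; the helper name parity_features and the max_order cutoff are illustrative, and the paper's heuristic propositional (AND/OR) features are not shown.

        import numpy as np
        from itertools import combinations

        def parity_features(X_bool, max_order=2):
            """Generate parity (XOR) features over all feature subsets
            of size 2..max_order. Each subset S yields the new column
            XOR_{d in S} x_d, i.e. the Fourier character chi_S on 0/1 data.
            """
            n, d = X_bool.shape
            cols, names = [], []
            for r in range(2, max_order + 1):
                for S in combinations(range(d), r):
                    # XOR-reduce the selected raw Boolean columns.
                    cols.append(np.bitwise_xor.reduce(X_bool[:, S], axis=1))
                    names.append(S)
            return np.column_stack(cols), names

        # Toy usage: three 0/1 indicator features derived from logs.
        X = np.array([[0, 1, 1], [1, 1, 0], [0, 0, 1]], dtype=int)
        P, names = parity_features(X, max_order=2)
        print(names)   # [(0, 1), (0, 2), (1, 2)]
        print(P)

    A single parity column can encode an interaction (for example, "exactly one of two indicators fired") that no individual raw feature captures, which is one plausible reading of the result above where one engineered feature outperformed 91 raw ones.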