1,385 research outputs found

    Simple but Effective Unsupervised Classification for Specified Domain Images: A Case Study on Fungi Images

    Full text link
    High-quality labeled datasets are essential for deep learning. Traditional manual annotation methods are not only costly and inefficient but also pose challenges in specialized domains where expert knowledge is needed. Self-supervised methods, despite leveraging unlabeled data for feature extraction, still require hundreds or thousands of labeled instances to guide the model toward effective specialized image classification. Current unsupervised learning methods offer automatic classification without prior annotation but often compromise on accuracy. As a result, efficiently procuring high-quality labeled datasets remains a pressing challenge for specialized domain images devoid of annotated data. To address this, an unsupervised classification method with three key ideas is introduced: 1) dual-step feature dimensionality reduction using a pre-trained model and manifold learning, 2) a voting mechanism over multiple clustering algorithms, and 3) post-hoc instead of prior manual annotation. This approach outperforms supervised methods in classification accuracy, as demonstrated with fungal image data, achieving 94.1% and 96.7% on public and private datasets, respectively. The proposed unsupervised classification method reduces dependency on pre-annotated datasets, enabling a closed loop for data classification. The method's simplicity and ease of use will also make it easier for researchers in various fields to build datasets, promoting AI applications for images in specialized domains.
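    As a rough illustration of the workflow described above, the sketch below substitutes random vectors for pre-trained CNN features, uses Isomap as the manifold-learning step, and combines several clustering algorithms by Hungarian label alignment followed by a majority vote. Isomap, the particular clusterers, and the class count are illustrative assumptions, not the paper's exact recipe.

```python
# Sketch only: random vectors stand in for pre-trained CNN features; Isomap,
# the clusterers, and n_classes are illustrative choices, not the paper's recipe.
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.manifold import Isomap
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.mixture import GaussianMixture
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
features = rng.normal(size=(300, 512))   # stand-in for pre-trained CNN embeddings
n_classes = 5                            # assumed number of image categories

# Step 1: second reduction step via manifold learning (Isomap chosen for illustration)
embedding = Isomap(n_components=10).fit_transform(features)

# Step 2: run several clustering algorithms on the embedding
labelings = [
    KMeans(n_clusters=n_classes, n_init=10, random_state=0).fit_predict(embedding),
    AgglomerativeClustering(n_clusters=n_classes).fit_predict(embedding),
    GaussianMixture(n_components=n_classes, random_state=0).fit_predict(embedding),
]

def align(reference, labels, k):
    """Permute cluster ids in `labels` to best match `reference` (Hungarian matching)."""
    cost = -confusion_matrix(reference, labels, labels=list(range(k)))
    _, col = linear_sum_assignment(cost)
    mapping = {old: new for new, old in enumerate(col)}
    return np.array([mapping[lab] for lab in labels])

# Step 3: majority vote across the aligned clusterings
aligned = np.stack([labelings[0]] + [align(labelings[0], lab, n_classes) for lab in labelings[1:]])
votes = np.apply_along_axis(lambda c: np.bincount(c, minlength=n_classes).argmax(), 0, aligned)
print(votes[:20])   # consensus cluster ids, to be annotated post hoc
```

    Post-hoc annotation would then assign a semantic label to each consensus cluster, rather than labeling individual images up front.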

    ๋งค๊ฐœ๋ถ„ํฌ๊ทผ์‚ฌ๋ฅผ ํ†ตํ•œ ๊ณต์ •์‹œ์Šคํ…œ ๊ณตํ•™์—์„œ์˜ ํ™•๋ฅ ๊ธฐ๊ณ„ํ•™์Šต ์ ‘๊ทผ๋ฒ•

    Get PDF
    Doctoral dissertation -- Seoul National University Graduate School: College of Engineering, School of Chemical and Biological Engineering, August 2021. Advisor: ์ด์ข…๋ฏผ. With the rapid development of measurement technology, higher-quality and vast amounts of process data have become available. Nevertheless, process data are 'scarce' in many cases, as they are sampled only at certain operating conditions while the dimensionality of the system is large. Furthermore, process data are inherently stochastic due to the internal characteristics of the system or measurement noise. For this reason, uncertainty is inevitable in process systems, and estimating it becomes a crucial part of engineering tasks, since prediction errors can lead to misguided decisions and cause severe casualties or economic losses. A popular approach is to apply probabilistic inference techniques that can model the uncertainty in terms of probability. However, most existing probabilistic inference techniques are based on recursive sampling, which makes them difficult to use for industrial applications that require processing high-dimensional, massive amounts of data. To address this issue, this thesis proposes probabilistic machine learning approaches based on parametric distribution approximation, which can model the uncertainty of the system while circumventing the computational complexity. The proposed approach is applied to three major process engineering tasks: process monitoring, system modeling, and process design. First, a process monitoring framework is proposed that utilizes a probabilistic classifier for fault classification. To enhance the accuracy of the classifier and reduce the computational cost of its training, a feature extraction method called probabilistic manifold learning is developed and applied to the process data ahead of the fault classification. We demonstrate that this manifold approximation not only reduces the dimensionality of the data but also casts the data into a clustered structure, giving the classifier a low dependency on the type and dimension of the data. By exploiting this property, non-metric information (e.g., fault labels) of the data is effectively incorporated and the diagnosis performance is drastically improved. Second, a probabilistic modeling approach based on Bayesian neural networks is proposed. The parameters of deep neural networks are transformed into Gaussian distributions and trained using variational inference. The redundancy of the parameters is autonomously inferred during model training, and insignificant parameters are eliminated a posteriori. Through a verification study, we demonstrate that the proposed approach can not only produce high-fidelity models that describe the stochastic behaviors of the system but also yield the optimal model structure. Finally, a novel process design framework is proposed based on reinforcement learning. Unlike conventional optimization methods, which recursively evaluate the objective function to find an optimal value, the proposed method approximates the objective function surface by parametric probabilistic distributions. This allows a continuous action policy to be learned without any cumbersome discretization. Moreover, the probabilistic policy provides a means of effectively controlling the exploration and exploitation rates according to the certainty information.
We demonstrate that the proposed framework can learn process design heuristics during the solution process and use them to solve similar design problems.
Advances in measurement technology have made high-quality and vast quantities of process data available. In many cases, however, only data from certain operating conditions are obtained relative to the dimensionality of the system, so process data are 'scarce'. Moreover, process data exhibit inherently stochastic behavior arising from the system dynamics themselves as well as from measurement noise. Predictive models of such systems are therefore required to quantify the uncertainty of their predictions, which helps prevent misjudgments and avoid potential casualties and economic losses. A common approach is to quantify this uncertainty with probabilistic inference techniques, but existing techniques rely on recursive sampling and are therefore fundamentally difficult to apply to high-dimensional, massive process data. This thesis proposes probabilistic machine learning approaches based on parametric distribution approximation that model the uncertainty inherent in the system while remaining computationally efficient. First, for process monitoring, a probabilistic fault classification framework is proposed that uses a Gaussian mixture model as the classifier. To reduce the computational complexity of training the classifier, the data are projected onto a low-dimensional space by a proposed probabilistic manifold learning method, which approximates the manifold of the data with a projection that preserves the pairwise likelihood between data points. This yields diagnosis results with low dependency on the type and dimension of the data, and non-metric information such as data labels can be used efficiently to improve fault diagnosis performance. Second, a probabilistic process modeling methodology based on Bayesian deep neural networks is presented. Each parameter of the neural network is replaced by a Gaussian distribution, and training proceeds in a computationally efficient manner through variational inference. After training, a post-hoc model compression step measures the significance of each parameter and eliminates unnecessary ones. A case study on a semiconductor process shows that the proposed method not only models the complex behavior of the process effectively but also yields the optimal model structure. Finally, a probabilistic process design framework is proposed based on reinforcement learning with distributional deep neural networks. Unlike conventional optimization methods, which recursively evaluate the objective function to find the optimum, the proposed approach approximates the objective function surface with parameterized probability distributions. On this basis, a continuous action policy is learned without discretization, and the exploration and exploitation rates are controlled efficiently according to certainty.
The case study results show that design heuristics can be learned during the solution process and used to solve similar design problems.
Table of contents:
Chapter 1 Introduction: motivation; outline of the thesis.
Chapter 2 Backgrounds and preliminaries: Bayesian inference; Monte Carlo; Kullback-Leibler divergence; variational inference; Riemannian manifold; finite extended-pseudo-metric space; reinforcement learning; directed graph.
Chapter 3 Process monitoring and fault classification with probabilistic manifold learning: introduction; methods (uniform manifold approximation, clusterization, projection, mapping of unknown data queries, inference); verification study (dataset description, experimental setup, process monitoring, projection characteristics, fault diagnosis, computational aspects).
Chapter 4 Process system modeling with Bayesian neural networks: introduction; methods (Long Short-Term Memory (LSTM), Bayesian LSTM (BLSTM)); verification study (system description, estimation of the plasma variables, dataset description, experimental setup, weight regularization during training, modeling complex behaviors of the system, uncertainty quantification and model compression).
Chapter 5 Process design based on reinforcement learning with distributional actor-critic networks: introduction; methods (flowsheet hashing, behavioral cloning, neural Monte Carlo tree search (N-MCTS), distributional actor-critic networks (DACN), action masking); verification study (system description, experimental setup, results and discussion).
Chapter 6 Concluding remarks: summary of the contributions; future works.
Appendix: proof of Lemma 1; performance indices for dimension reduction; model equations for process units. Bibliography. Abstract in Korean.
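    The Bayesian neural network part of the abstract replaces each network weight with a Gaussian distribution trained by variational inference. Below is a minimal mean-field sketch of that general idea (Bayes-by-backprop style, with the reparameterization trick), assuming PyTorch and synthetic data rather than the thesis's actual LSTM models of the plasma process.

```python
# Minimal mean-field Gaussian (variational) linear layer; sizes, prior, data,
# and the KL weighting are illustrative assumptions, not the thesis's models.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BayesLinear(nn.Module):
    def __init__(self, n_in, n_out, prior_std=1.0):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(n_out, n_in))
        self.rho = nn.Parameter(torch.full((n_out, n_in), -3.0))  # sigma = softplus(rho)
        self.bias = nn.Parameter(torch.zeros(n_out))
        self.prior_std = prior_std

    def forward(self, x):
        sigma = F.softplus(self.rho)
        weight = self.mu + sigma * torch.randn_like(sigma)   # reparameterization trick
        return F.linear(x, weight, self.bias)

    def kl(self):
        # KL( N(mu, sigma^2) || N(0, prior_std^2) ), summed over all weights
        sigma = F.softplus(self.rho)
        var_ratio = (sigma / self.prior_std) ** 2
        return 0.5 * (var_ratio + (self.mu / self.prior_std) ** 2 - 1 - torch.log(var_ratio)).sum()

torch.manual_seed(0)
x = torch.randn(256, 8)                      # stand-in process measurements
y = torch.sin(x.sum(dim=1, keepdim=True))    # stand-in response
layer = BayesLinear(8, 1)
opt = torch.optim.Adam(layer.parameters(), lr=1e-2)
for step in range(200):
    opt.zero_grad()
    # ELBO-style objective: data-fit term plus a (down-weighted) KL penalty
    loss = F.mse_loss(layer(x), y, reduction="sum") + layer.kl() / 10.0
    loss.backward()
    opt.step()

# Predictive uncertainty from repeated stochastic forward passes
samples = torch.stack([layer(x) for _ in range(50)])
print(samples.mean(0)[:3], samples.std(0)[:3])
```

    One common significance measure for the kind of post-hoc pruning the abstract mentions is a weight's signal-to-noise ratio |mu|/sigma, though the thesis's exact criterion may differ.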

    Advances in Hyperspectral Image Classification Methods for Vegetation and Agricultural Cropland Studies

    Get PDF
    Hyperspectral data are becoming more widely available via sensors on airborne and unmanned aerial vehicle (UAV) platforms, as well as proximal platforms. While space-based hyperspectral data continue to be limited in availability, multiple spaceborne Earth-observing missions on traditional platforms are scheduled for launch, and companies are experimenting with small satellites for constellations to observe the Earth, as well as for planetary missions. Land cover mapping via classification is one of the most important applications of hyperspectral remote sensing and will increase in significance as time series of imagery are more readily available. However, while the narrow bands of hyperspectral data provide new opportunities for chemistry-based modeling and mapping, challenges remain. Hyperspectral data are high dimensional, and many bands are highly correlated or irrelevant for a given classification problem. For supervised classification methods, the quantity of training data is typically limited relative to the dimension of the input space. The resulting Hughes phenomenon, often referred to as the curse of dimensionality, increases the potential for unstable parameter estimates, overfitting, and poor generalization of classifiers. This is particularly problematic for parametric approaches such as Gaussian maximum likelihood-based classifiers that have been the backbone of pixel-based multispectral classification methods. This issue has motivated investigation of alternatives, including regularization of the class covariance matrices, ensembles of weak classifiers, development of feature selection and extraction methods, adoption of nonparametric classifiers, and exploration of methods to exploit unlabeled samples via semi-supervised and active learning. Data sets are also quite large, motivating computationally efficient algorithms and implementations. This chapter provides an overview of the recent advances in classification methods for mapping vegetation using hyperspectral data. Three data sets that are used in the hyperspectral classification literature (e.g., Botswana Hyperion satellite data and AVIRIS airborne data over both Kennedy Space Center and Indian Pines) are described in Section 3.2 and used to illustrate methods described in the chapter. An additional high-resolution hyperspectral data set acquired by a SpecTIR sensor on an airborne platform over the Indian Pines area is included to exemplify the use of new deep learning approaches, and a multiplatform example of airborne hyperspectral data is provided to demonstrate transfer learning in hyperspectral image classification. Classical approaches for supervised and unsupervised feature selection and extraction are reviewed in Section 3.3. In particular, nonlinearities exhibited in hyperspectral imagery have motivated development of nonlinear feature extraction methods in manifold learning, which are outlined in Section 3.3.1.4. Spatial context is also important in classification of both natural vegetation with complex textural patterns and large agricultural fields with significant local variability within fields. Approaches to exploit spatial features at both the pixel level (e.g., co-occurrence-based texture and extended morphological attribute profiles [EMAPs]) and integration of segmentation approaches (e.g., HSeg) are discussed in this context in Section 3.3.2. Recently, classification methods that leverage nonparametric methods originating in the machine learning community have grown in popularity.
An overview of both widely used and newly emerging approaches, including support vector machines (SVMs), Gaussian mixture models, and deep learning based on convolutional neural networks, is provided in Section 3.4. Strategies to exploit unlabeled samples, including active learning and metric learning, which combine feature extraction and augmentation of the pool of training samples in an active learning framework, are outlined in Section 3.5. Integration of image segmentation with classification to accommodate spatial coherence typically observed in vegetation is also explored, including as an integrated active learning system. Exploitation of multisensor strategies for augmenting the pool of training samples is investigated via a transfer learning framework in Section 3.5.1.2. Finally, we look to the future, considering opportunities soon to be provided by new paradigms, as hyperspectral sensing is becoming common at multiple scales from ground-based and airborne autonomous vehicles to manned aircraft and space-based platforms.
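    As a minimal illustration of the classical pipeline reviewed here (band reduction followed by a nonparametric classifier trained on a small labeled pixel set), the sketch below uses synthetic spectra, PCA, and an RBF SVM; the data, band count, and hyperparameters are placeholders rather than the chapter's benchmark datasets.

```python
# Sketch only: synthetic spectra stand in for a hyperspectral cube; band count,
# class count, PCA dimension, and SVM settings are arbitrary illustrations.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_pixels, n_bands, n_classes = 2000, 200, 6
labels = rng.integers(0, n_classes, size=n_pixels)
# class-dependent mean spectra plus noise, imitating correlated narrow bands
class_means = rng.normal(size=(n_classes, n_bands)).cumsum(axis=1)
spectra = class_means[labels] + rng.normal(scale=2.0, size=(n_pixels, n_bands))

# Only a small fraction is labeled, mimicking the limited-training-data regime
X_train, X_test, y_train, y_test = train_test_split(
    spectra, labels, train_size=0.05, stratify=labels, random_state=0)

# Feature extraction (PCA) mitigates the Hughes phenomenon before the SVM
model = make_pipeline(StandardScaler(), PCA(n_components=15), SVC(kernel="rbf", C=10.0))
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```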

    Multi-Source Data Fusion for Cyberattack Detection in Power Systems

    Full text link
    Cyberattacks can have a severe impact on power systems unless detected early. However, accurate and timely detection in critical infrastructure systems presents challenges, e.g., due to zero-day vulnerability exploitations and the cyber-physical nature of the system, coupled with the need for high reliability and resilience of the physical system. Conventional rule-based and anomaly-based intrusion detection system (IDS) tools are insufficient for detecting zero-day cyber intrusions in industrial control system (ICS) networks. Hence, in this work, we show that fusing information from multiple data sources can help identify cyber-induced incidents and reduce false positives. Specifically, we present how to recognize and address the barriers that can prevent the accurate use of multiple data sources for fusion-based detection. We perform multi-source data fusion for training an IDS in a cyber-physical power system testbed, where we collect cyber- and physical-side data from multiple sensors emulating real-world data sources that would be found in a utility and synthesize these into features for intrusion detection algorithms. Results are presented using the proposed data fusion application to detect false data injection and command injection-based Man-in-the-Middle (MITM) attacks. After collection, the data fusion application performs a time-synchronized merge, extracts features, and applies pre-processing such as imputation and encoding before training supervised, semi-supervised, and unsupervised learning models to evaluate the performance of the IDS. A major finding is the improvement of detection accuracy by fusing features from the cyber, security, and physical domains. Additionally, we observed that the co-training technique performs on par with supervised learning methods when fed with our features.
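    A minimal sketch of the fusion steps described above (time-synchronized merge of cyber- and physical-side records, imputation and encoding, then a supervised detector), using pandas and scikit-learn; the column names, timestamps, and random-forest choice are invented for illustration and do not reflect the testbed's actual feature set.

```python
# Sketch only: invented column names, timestamps, and labels; the classifier
# choice is illustrative, not the paper's IDS configuration.
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Stand-in measurements from two sources with unaligned timestamps
phys = pd.DataFrame({
    "time": pd.to_datetime(["2023-01-01 00:00:00", "2023-01-01 00:00:02",
                            "2023-01-01 00:00:04", "2023-01-01 00:00:06"]),
    "bus_voltage": [1.01, 1.02, 0.93, 0.95],
})
cyber = pd.DataFrame({
    "time": pd.to_datetime(["2023-01-01 00:00:01", "2023-01-01 00:00:03",
                            "2023-01-01 00:00:05", "2023-01-01 00:00:07"]),
    "protocol": ["dnp3", "dnp3", "modbus", np.nan],
    "pkt_rate": [120.0, 125.0, np.nan, 900.0],
    "label": [0, 0, 1, 1],   # 1 = attack window
})

# Time-synchronized merge: attach the nearest physical sample to each cyber record
fused = pd.merge_asof(cyber.sort_values("time"), phys.sort_values("time"),
                      on="time", direction="nearest")

features = fused[["bus_voltage", "pkt_rate", "protocol"]]
pre = ColumnTransformer([
    ("num", SimpleImputer(strategy="median"), ["bus_voltage", "pkt_rate"]),
    ("cat", Pipeline([("imp", SimpleImputer(strategy="most_frequent")),
                      ("ohe", OneHotEncoder(handle_unknown="ignore"))]), ["protocol"]),
])
clf = Pipeline([("pre", pre), ("rf", RandomForestClassifier(random_state=0))])
clf.fit(features, fused["label"])
print(clf.predict(features))
```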

    Exploiting geometry, topology, and optimization for knowledge discovery in big data

    Get PDF
    2013 Summer. Includes bibliographical references. In this dissertation, we consider several topics that are united by the theme of topological and geometric data analysis. First, we consider an application in landscape ecology, using a well-known vector quantization algorithm to characterize and segment the color content of natural imagery. Color information in an image may be viewed naturally as clusters of pixels with similar attributes. The inherent structure and distribution of these clusters serve to quantize the information in the image and provide a basis for classification. A friendly graphical user interface called the Biological Landscape Organizer and Semi-supervised Segmenting Machine (BLOSSM) was developed to aid in this classification. We consider four different choices of color space and five different metrics in which to analyze our data, and the results are compared. Second, we present a novel topologically driven clustering algorithm that blends Locally Linear Embedding (LLE) and vector quantization by mapping color information to a lower-dimensional space, identifying distinct color regions, and classifying pixels together based on both a proximity measure and color content. It is observed that these techniques permit a significant reduction in color resolution while maintaining the visually important features of images. Third, we develop a novel algorithm, which we call Sparse LLE, that leads to sparse representations in local reconstructions by using a data-weighted 1-norm regularization term in the objective function of an optimization problem. It is observed that this new formulation is effective at automatically determining an appropriate number of nearest neighbors for each data point. We explore various optimization techniques, namely primal-dual interior point algorithms, to solve this problem, comparing the computational complexity of each. Fourth, we present a novel algorithm that can be used to determine the boundary of a data set, or the vertices of a convex hull encasing a point cloud of data, in any dimension by solving a quadratic optimization problem. In this problem, each point is written as a linear combination of its nearest neighbors, and the coefficients of this linear combination are penalized if they do not form a convex combination, revealing those points that cannot be represented in this way: the vertices of the convex hull containing the data. Finally, we exploit a relatively new tool from topological data analysis, persistent homology, and consider the use of vector bundles to re-embed data in order to improve the topological signal of a data set, by embedding points sampled from a projective variety into successive Grassmannians.
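    A small sketch of the LLE-plus-vector-quantization idea from the second topic: pixel colors are embedded into a lower-dimensional space and then quantized with k-means. The synthetic colors, neighbor count, and cluster count are illustrative assumptions, not BLOSSM's settings.

```python
# Sketch only: a synthetic "image" of RGB pixels; neighbor and cluster counts
# are arbitrary illustrations of the LLE + vector quantization combination.
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
# 1600 RGB pixels drawn around three color centers
centers = np.array([[0.9, 0.2, 0.2], [0.2, 0.8, 0.3], [0.2, 0.3, 0.9]])
pixels = centers[rng.integers(0, 3, size=1600)] + rng.normal(scale=0.05, size=(1600, 3))

# Fit the embedding on a subsample (LLE neighbor search is the expensive part)
subset = pixels[rng.choice(1600, size=400, replace=False)]
lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2)
embedded = lle.fit_transform(subset)

# Vector quantization in the embedded space; remaining pixels are mapped in
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(embedded)
segments = kmeans.predict(lle.transform(pixels))
print(np.bincount(segments))   # pixel counts per color region
```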

    Hyperspectral Imaging from Ground Based Mobile Platforms and Applications in Precision Agriculture

    Get PDF
    This thesis focuses on the use of line-scanning hyperspectral sensors on mobile ground-based platforms and their application to agriculture. First, this work deals with the geometric and radiometric calibration and correction of acquired hyperspectral data. When operating at low altitudes, changing lighting conditions are common and inevitable, complicating the retrieval of a surface's reflectance, which is solely a function of its physical structure and chemical composition. Therefore, this thesis contributes an evaluation of an approach for compensating for changes in illumination and obtaining reflectance that is less labour-intensive than traditional empirical methods. Convenient field protocols are produced that only require a representative set of illumination and reflectance spectral samples. In addition, a method for determining a line-scanning camera's rigid six degree-of-freedom (DOF) offset and uncertainty with respect to a navigation system is developed, enabling accurate georegistration and sensor fusion. The thesis then applies the data captured from the platform to two different agricultural applications. The first is a self-supervised weed detection framework that allows training of a per-pixel classifier using hyperspectral data without manual labelling. The experiments support the effectiveness of the framework, rivalling classifiers trained on hand-labelled training data. The thesis then demonstrates the mapping of mango maturity using hyperspectral data on an orchard-wide scale using efficient image scanning techniques, a world-first result. A novel classification, regression, and mapping pipeline is proposed to generate per-tree mango maturity averages. The results confirm that maturity prediction in mango orchards is possible in natural daylight using a hyperspectral camera, despite complex micro-illumination climates under the canopy.
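    For context on the radiometric-correction step, the sketch below shows the conventional flat-field conversion from raw digital numbers to reflectance using dark-current and white-reference frames (the labour-intensive empirical baseline the thesis seeks to improve on), followed by an arbitrary two-band index; all arrays, band positions, and values are synthetic placeholders.

```python
# Sketch of the standard empirical (flat-field) correction, not the thesis's
# illumination-compensation method; all shapes and values are placeholders.
import numpy as np

rng = np.random.default_rng(3)
n_lines, n_samples, n_bands = 100, 320, 160

dark = rng.normal(loc=50.0, scale=2.0, size=(n_samples, n_bands))      # shutter-closed frame
white = rng.normal(loc=3000.0, scale=30.0, size=(n_samples, n_bands))  # reference-panel frame
raw = rng.uniform(200.0, 2500.0, size=(n_lines, n_samples, n_bands))   # scanned scene lines

# Per-pixel, per-band correction: R = (raw - dark) / (white - dark)
reflectance = (raw - dark) / np.clip(white - dark, 1e-6, None)
reflectance = np.clip(reflectance, 0.0, 1.5)   # guard against specular outliers

# A simple two-band normalized index from illustrative band positions
band_a, band_b = 40, 120
index = (reflectance[..., band_b] - reflectance[..., band_a]) / (
        reflectance[..., band_b] + reflectance[..., band_a] + 1e-9)
print(index.mean(), index.std())
```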

    A comprehensive review of 3D convolutional neural network-based classification techniques of diseased and defective crops using non-UAV-based hyperspectral images

    Full text link
    Hyperspectral imaging (HSI) is a non-destructive and contactless technology that provides valuable information about the structure and composition of an object. It can capture detailed information about the chemical and physical properties of agricultural crops. Due to its wide spectral range, compared with multispectral- or RGB-based imaging methods, HSI can be a more effective tool for monitoring crop health and productivity. With the advent of this imaging tool in agrotechnology, researchers can more accurately address issues related to the detection of diseased and defective crops in the agriculture industry. This makes it possible to implement the most suitable and accurate farming solutions, such as irrigation and fertilization, before crops enter a damaged and difficult-to-recover phase of growth in the field. While HSI provides valuable insights into the object under investigation, the limited number of HSI datasets for crop evaluation presently poses a bottleneck. Dealing with the curse of dimensionality presents another challenge due to the abundance of spectral and spatial information in each hyperspectral cube. State-of-the-art methods based on 1D- and 2D-CNNs struggle to efficiently extract spectral and spatial information. On the other hand, 3D-CNN-based models have shown significant promise in achieving better classification and detection results by leveraging spectral and spatial features simultaneously. Despite the apparent benefits of 3D-CNN-based models, their usage for classification purposes in this area of research has remained limited. This paper seeks to address this gap by reviewing 3D-CNN-based architectures and the typical deep learning pipeline, including preprocessing and visualization of results, for the classification of hyperspectral images of diseased and defective crops. Furthermore, we discuss open research areas and challenges when utilizing 3D-CNNs with HSI data.
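    As a rough illustration of the model family this review covers, the sketch below defines a tiny 3D-CNN that convolves jointly over the spectral and spatial axes of hyperspectral patches; the layer sizes, patch shape, and class count are illustrative assumptions, not an architecture from any cited work.

```python
# Sketch only: a tiny 3D-CNN for (bands, height, width) patches; every size
# here is an arbitrary illustration, not a reviewed architecture.
import torch
import torch.nn as nn

class Tiny3DCNN(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=(7, 3, 3), padding=(3, 1, 1)),  # spectral x spatial
            nn.ReLU(),
            nn.MaxPool3d((2, 1, 1)),                                    # pool along bands only
            nn.Conv3d(8, 16, kernel_size=(5, 3, 3), padding=(2, 1, 1)),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(16, n_classes)

    def forward(self, x):          # x: (batch, 1, bands, height, width)
        z = self.features(x).flatten(1)
        return self.classifier(z)

model = Tiny3DCNN()
patches = torch.randn(2, 1, 100, 9, 9)   # two 9x9 spatial patches with 100 bands
print(model(patches).shape)              # torch.Size([2, 4])
```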
    • โ€ฆ