13 research outputs found

    Variance analysis for Monte Carlo integration

    Full text link

    Low-discrepancy point sampling of 2D manifolds for visual computing

    Get PDF
    Point distributions are used to sample surfaces for a wide variety of applications within the fields of graphics and computational geometry, such as point-based graphics, remeshing and area/volume measurement. The quality of such point distributions is important, and quality criteria are often application dependent. Common quality criteria include visual appearance, an even distribution whilst avoiding aliasing and other artifacts, and minimisation of the number of points required to accurately sample a surface. Previous work suggests that discrepancy measures the uniformity of a point distribution, and hence a point distribution of minimal discrepancy is expected to be of high quality. We investigate discrepancy as a measure of sampling quality, and present a novel approach for generating low-discrepancy point distributions on parameterised surfaces.
    Our approach converts the 2D sampling problem into a 1D problem by adaptively mapping a space-filling curve onto the surface. A 1D sequence is then generated and used to sample the surface along the curve. The sampling process takes the parametric mapping into account, employing a corrective approach similar to histogram equalisation, to ensure that it gives a 2D low-discrepancy point distribution on the surface. The local sampling density can be controlled by a user-defined density function, e.g. to preserve local features, or to achieve desired data reduction rates. Experiments show that our approach efficiently generates low-discrepancy distributions on arbitrary parametric surfaces, with results nearly as good as those of popular low-discrepancy sampling methods designed for particular surfaces such as planes and spheres. We develop a generalised notion of the standard discrepancy measure, which considers a broader set of sample shapes used to compute the discrepancy. In this more thorough testing, our sampling approach produces results superior to popular distributions. We also demonstrate that the point distributions produced by our approach adhere closely to the blue noise criterion, whereas the popular low-discrepancy methods tested show high levels of structure, undesirable for visual representation.
    Furthermore, we present novel sampling algorithms to generate low-discrepancy distributions on triangle meshes. To sample the mesh, it is cut into a disc topology and a parameterisation is generated. Our sampling algorithm can then be used to sample the parameterised mesh, using robust methods for computing discrete differential properties of the surface. After these pre-processing steps, the sampling density can be adjusted in real time. Experiments also show that our sampling approach can accurately resample existing meshes with low discrepancy, with error rates when reducing mesh complexity as good as the best results in the literature.
    We present three applications of our mesh sampling algorithm. We first describe a point-based graphics sampling approach, which includes a global hole-filling algorithm. We investigate the coverage of sample discs for this approach, demonstrating results superior to random sampling and a popular low-discrepancy method. Moreover, we develop level-of-detail and view-dependent rendering approaches, providing very fine-grained density control with distance and angle, and silhouette enhancement. We further discuss a triangle-based remeshing technique, producing high-quality, topologically unaltered meshes.
    Finally, we describe a complete framework for sampling and painting engineering prototype models. This approach provides density control according to surface texture, and gives full dithering control of the point sample distribution. Results exhibit high-quality point distributions for painting that are invariant to surface orientation or complexity. The main contributions of this thesis are novel algorithms to generate high-quality, density-controlled point distributions on parametric surfaces and triangular meshes. Qualitative assessment, discrepancy measures and blue noise criteria show their high sampling quality in general. We introduce generalised discrepancy measures which indicate that the sampling quality of our approach is superior to other low-discrepancy sampling techniques. Moreover, we present novel approaches towards remeshing, point-based rendering and robotic painting of prototypes by adapting our sampling algorithms, and demonstrate the overall good quality of the results for these specific applications.
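    The key step above is the histogram-equalisation-like correction that cancels the distortion introduced by the parametric mapping. The following Python sketch illustrates that idea in its simplest setting (a minimal sketch, not the thesis's algorithm, which maps a space-filling curve over a 2D surface): a 1D van der Corput low-discrepancy sequence is pushed through the inverse of the cumulative arc-length function so that samples land uniformly by length on a non-uniformly parameterised curve. All function names and the example curve are illustrative.

        import numpy as np

        def van_der_corput(n, base=2):
            # First n points of the base-b van der Corput low-discrepancy sequence.
            seq = np.zeros(n)
            for i in range(n):
                f, x, k = 1.0, 0.0, i + 1
                while k > 0:
                    f /= base
                    x += f * (k % base)
                    k //= base
                seq[i] = x
            return seq

        def sample_curve(curve, n, m=2048):
            # Place n samples on curve(t), t in [0, 1], uniformly by arc length.
            # Inverting the cumulative arc length plays the role of the
            # histogram-equalisation correction: it cancels the distortion of
            # the parametric mapping.
            t = np.linspace(0.0, 1.0, m)
            pts = curve(t)                                   # (m, d) polyline
            seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
            cdf = np.concatenate([[0.0], np.cumsum(seg)])
            cdf /= cdf[-1]                                   # arc-length "CDF"
            u = van_der_corput(n)                            # 1D low-discrepancy points
            return curve(np.interp(u, cdf, t))               # invert the CDF

        # Demo: a spiral whose parameterisation is strongly non-uniform.
        spiral = lambda t: np.stack([t * np.cos(8 * t), t * np.sin(8 * t)], axis=-1)
        samples = sample_curve(spiral, 100)                  # (100, 2) points

    Replacing the arc-length measure with the integral of a user-defined density function would give the kind of density control described in the abstract.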

    Data-driven time-frequency analysis of multivariate data

    No full text
    Empirical Mode Decomposition (EMD) is a data-driven method for the decomposition and time-frequency analysis of real-world nonstationary signals. Its main advantages over other time-frequency methods are its locality, data-driven nature, multiresolution-based decomposition, higher time-frequency resolution and its ability to capture oscillations of any type (nonharmonic signals). These properties have made EMD a viable tool for real-world nonstationary data analysis. Recent advances in sensor and data acquisition technologies have brought to light new classes of signals typically containing several data channels. Currently, such signals are almost invariably processed channel-wise, which is suboptimal. It is, therefore, imperative to design multivariate extensions of the existing nonlinear and nonstationary analysis algorithms, as they are expected to give more insight into the dynamics and the interdependence between multiple channels of such signals. To this end, this thesis presents multivariate extensions of the empirical mode decomposition algorithm and illustrates their advantages with regard to multivariate nonstationary data analysis. Some important properties of such extensions are also explored, including their ability to exhibit wavelet-like dyadic filter bank structures for white Gaussian noise (WGN), and their capacity to align similar oscillatory modes from multiple data channels. Owing to the generality of the proposed methods, an improved multivariate EMD-based algorithm is introduced which solves some inherent problems in the original EMD algorithm. Finally, to demonstrate the potential of the proposed methods, simulations on the fusion of multiple real-world signals (wind, images and inertial body motion data) support the analysis.
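    For readers unfamiliar with the sifting procedure at the core of EMD, the sketch below is a heavily simplified single-channel version (no boundary handling, and a fixed number of sifting iterations in place of a proper stopping criterion): intrinsic mode functions are extracted by repeatedly removing the mean of cubic-spline envelopes fitted through the local extrema. The multivariate extensions discussed in the thesis instead estimate the local mean from envelopes taken along multiple projection directions.

        import numpy as np
        from scipy.interpolate import CubicSpline
        from scipy.signal import argrelextrema

        def sift(x, t, n_iter=10):
            # One sifting pass: estimate an intrinsic mode function (IMF) of x(t).
            h = x.copy()
            for _ in range(n_iter):
                mx = argrelextrema(h, np.greater)[0]
                mn = argrelextrema(h, np.less)[0]
                if len(mx) < 2 or len(mn) < 2:         # too few extrema: a residue
                    break
                upper = CubicSpline(t[mx], h[mx])(t)   # envelope through maxima
                lower = CubicSpline(t[mn], h[mn])(t)   # envelope through minima
                h = h - 0.5 * (upper + lower)          # remove the local mean
            return h

        def emd(x, t, max_imfs=5):
            # Decompose x into IMFs plus a residue by repeated sifting.
            imfs, r = [], x.copy()
            for _ in range(max_imfs):
                imf = sift(r, t)
                imfs.append(imf)
                r = r - imf
                if len(argrelextrema(r, np.greater)[0]) < 2:
                    break
            return imfs, r

        t = np.linspace(0.0, 1.0, 1000)
        x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
        imfs, residue = emd(x, t)   # the 40 Hz component appears in the first IMF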

    Data-driven multivariate and multiscale methods for brain computer interface

    Get PDF
    This thesis focuses on the development of data-driven multivariate and multiscale methods for brain computer interface (BCI) systems. The electroencephalogram (EEG), the most convenient means of measuring neurophysiological activity due to its noninvasive nature, is mainly considered. The nonlinearity and nonstationarity inherent in EEG, and its multichannel recording nature, require a new set of data-driven multivariate techniques to estimate features more accurately for enhanced BCI operation. A further long-term goal is to enable an alternative EEG recording strategy for long-term, portable monitoring. Empirical mode decomposition (EMD) and local mean decomposition (LMD), fully data-driven adaptive tools, are considered to decompose the nonlinear and nonstationary EEG signal into a set of components which are highly localised in time and frequency. It is shown that the complex and multivariate extensions of EMD, which can exploit common oscillatory modes within multivariate (multichannel) data, can be used to accurately estimate and compare the amplitude and phase information among multiple sources, a key step in feature extraction for BCI systems. A complex extension of local mean decomposition is also introduced and its operation is illustrated on two-channel neuronal spike streams. Common spatial patterns (CSP), a standard feature extraction technique for BCI applications, is also extended to the complex domain using augmented complex statistics. Depending on the circularity or noncircularity of a complex signal, the complex CSP algorithm that produces the best classification performance between two EEG classes can be chosen. Using these complex and multivariate algorithms, two cognitive brain studies are investigated for a more natural and intuitive design of advanced BCI systems. First, a Yarbus-style auditory selective attention experiment is introduced to measure the user's attention to one sound source among a mixture of sound stimuli, aimed at improving the usefulness of hearing instruments such as hearing aids. Second, emotion experiments elicited by taste and taste recall are examined to determine the pleasure or displeasure elicited by a food, for the implementation of affective computing. The separation between the two emotional responses is examined using real- and complex-valued common spatial pattern methods. Finally, we introduce a novel approach to brain monitoring based on EEG recordings from within the ear canal, embedded on a custom-made hearing aid earplug. The new platform promises the possibility of both short- and long-term continuous use for standard brain monitoring and interfacing applications.
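    As a point of reference for the CSP extension mentioned above, the standard real-valued CSP computation reduces to a generalised eigendecomposition of the two class-average covariance matrices. The sketch below is that textbook baseline only, with illustrative array shapes and random demo data; the complex-domain variants in the thesis additionally rely on augmented complex statistics.

        import numpy as np
        from scipy.linalg import eigh

        def csp_filters(trials_a, trials_b, n_filters=3):
            # trials_*: (n_trials, n_channels, n_samples) EEG epochs per class.
            def mean_cov(trials):
                covs = [x @ x.T / np.trace(x @ x.T) for x in trials]
                return np.mean(covs, axis=0)
            ca, cb = mean_cov(trials_a), mean_cov(trials_b)
            # Generalised eigenproblem ca w = lambda (ca + cb) w; eigenvectors at
            # both ends of the spectrum maximise variance for one class while
            # minimising it for the other.
            w, v = eigh(ca, ca + cb)
            return np.hstack([v[:, :n_filters], v[:, -n_filters:]])

        def log_var_features(trials, filters):
            # Standard CSP features: normalised log-variance of filtered trials.
            feats = []
            for x in trials:
                var = (filters.T @ x).var(axis=1)
                feats.append(np.log(var / var.sum()))
            return np.array(feats)

        rng = np.random.default_rng(0)
        class_a = rng.standard_normal((30, 8, 500))   # illustrative data only
        class_b = rng.standard_normal((30, 8, 500))
        W = csp_filters(class_a, class_b)
        features = log_var_features(class_a, W)       # (30, 6), ready for a classifier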

    ์‹œ๊ณ„์—ด ๋ฐ์ดํ„ฐ ํŒจํ„ด ๋ถ„์„์„ ์œ„ํ•œ ์ข…๋‹จ ์‹ฌ์ธต ํ•™์Šต๋ง ์„ค๊ณ„ ๋ฐฉ๋ฒ•๋ก 

    Get PDF
    Doctoral thesis (Ph.D.), Seoul National University Graduate School, Department of Computer Science and Engineering, February 2019. Advisor: Byoung-Tak Zhang (์žฅ๋ณ‘ํƒ). Pattern recognition within time series data became an important avenue of research in artificial intelligence following the paradigm shift of the fourth industrial revolution. A number of related studies have been conducted over the past few years, and research using deep learning techniques is becoming increasingly popular. Due to the nonstationary, nonlinear and noisy nature of time series data, it is essential to design an appropriate model to extract their significant features for pattern recognition. This dissertation not only discusses pattern recognition using various hand-crafted feature engineering techniques on physiological time series signals, but also proposes an end-to-end deep learning design methodology that requires no feature engineering. Time series signals can be classified as periodic or non-periodic in the time domain, and this thesis proposes an end-to-end deep learning design methodology for each case.
    The first proposed methodology is Deep ECGNet, a design scheme for an end-to-end deep learning model that exploits the periodic characteristics of electrocardiogram (ECG) signals. The ECG, recorded from the electrophysiologic patterns of the heart muscle during the heartbeat, is a promising candidate biomarker for estimating event-based stress levels. Conventionally, beat-to-beat alternations from the ECG, summarised as heart rate variability (HRV), have been utilized to monitor mental stress as well as the mortality of cardiac patients, but standard HRV parameters have the disadvantage of requiring a measurement period of five minutes or more. In this thesis, human stress states were estimated with a deep learning model from only 10-second data intervals, without special hand-crafted feature engineering. The design methodology incorporates the periodic characteristics of the ECG signal into the model: the main parameters of the 1D CNNs and RNNs reflect the ECG period and were updated according to the stress states. Experimental results showed that the proposed method outperformed existing HRV parameter extraction methods and spectrogram methods.
    The second proposed methodology is an automatic end-to-end deep learning design method using Bayesian optimization for non-periodic signals. The electroencephalogram (EEG), elicited from the central nervous system (CNS), reflects genuine emotional states, even at the unconscious level. Due to the low signal-to-noise ratio (SNR) of EEG signals, spectral analysis in the frequency domain has conventionally been applied in EEG studies: signals are filtered into several frequency bands using Fourier or wavelet analyses, and these band features are then fed into a classifier. This thesis proposes an automatic end-to-end deep learning design method, based on optimization techniques, that removes this basic feature engineering. Bayesian optimization is a popular technique for optimizing model hyperparameters in machine learning, well suited to problems whose objective is an expensive black-box function. Here, it is used to optimize both the hyperparameters and the structure of whole models built from 1D CNNs and RNNs as basic deep learning units. On this basis, the thesis proposes the Deep EEGNet model as a method to discriminate human emotional states from EEG signals. Experimental results showed better performance than the conventional band power feature method. In conclusion, this thesis has proposed several methodologies for time series pattern recognition, ranging from conventional feature engineering-based methods to end-to-end deep learning design methodologies using only raw time series signals, and experimental results showed that the proposed methodologies can be applied effectively to pattern recognition problems on time series data.
๋‘ ๋ฒˆ์งธ ์ œ์•ˆ๋œ ๋ฐฉ๋ฒ•๋ก ์€ ๋น„ ์ฃผ๊ธฐ์ ์ด๋ฉฐ ๋น„์ •์ƒ, ๋น„์„ ํ˜• ๊ทธ๋ฆฌ๊ณ  ์žก์Œ ํŠน์„ฑ์„ ์ง€๋‹Œ ์‹ ํ˜ธ์˜ ํŒจํ„ด์ธ์‹์„ ์œ„ํ•œ ์ตœ์  ์ข…๋‹จ ์‹ฌ์ธต ํ•™์Šต๋ง ์ž๋™ ์„ค๊ณ„ ๋ฐฉ๋ฒ•๋ก ์ด๋‹ค. ๋‡ŒํŒŒ ์‹ ํ˜ธ (Electroencephalogram, EEG)๋Š” ์ค‘์ถ” ์‹ ๊ฒฝ๊ณ„ (CNS)์—์„œ ๋ฐœ์ƒ๋˜์–ด ๋ฌด์˜์‹ ์ƒํƒœ์—์„œ๋„ ๋ณธ์—ฐ์˜ ๊ฐ์ • ์ƒํƒœ๋ฅผ ๋‚˜ํƒ€๋‚ด๋Š”๋ฐ, EEG ์‹ ํ˜ธ์˜ ๋‚ฎ์€ ์‹ ํ˜ธ ๋Œ€ ์žก์Œ๋น„ (SNR)๋กœ ์ธํ•ด ๋‡ŒํŒŒ๋ฅผ ์ด์šฉํ•œ ๊ฐ์ • ์ƒํƒœ ํŒ์ •์„ ์œ„ํ•ด์„œ ์ฃผ๋กœ ์ฃผํŒŒ์ˆ˜ ์˜์—ญ์˜ ์ŠคํŽ™ํŠธ๋Ÿผ ๋ถ„์„์ด ๋‡ŒํŒŒ ์—ฐ๊ตฌ์— ์ ์šฉ๋˜์–ด ์™”๋‹ค. ํ†ต์ƒ์ ์œผ๋กœ ๋‡ŒํŒŒ ์‹ ํ˜ธ๋Š” ํ‘ธ๋ฆฌ์— (Fourier) ๋˜๋Š” ์›จ์ด๋ธ”๋ › (wavelet) ๋ถ„์„์„ ์‚ฌ์šฉํ•˜์—ฌ ์—ฌ๋Ÿฌ ์ฃผํŒŒ์ˆ˜ ๋Œ€์—ญ์œผ๋กœ ํ•„ํ„ฐ๋ง ๋œ๋‹ค. ์ด๋ ‡๊ฒŒ ์ถ”์ถœ๋œ ์ฃผํŒŒ์ˆ˜ ํŠน์ง• ๋ฒกํ„ฐ๋Š” ๋ณดํ†ต ์–•์€ ํ•™์Šต ๋ถ„๋ฅ˜๊ธฐ (shallow machine learning classifier)์˜ ์ž…๋ ฅ์œผ๋กœ ์‚ฌ์šฉ๋˜์–ด ํŒจํ„ด ์ธ์‹์„ ์ˆ˜ํ–‰ํ•˜๊ฒŒ ๋œ๋‹ค. ๋ณธ ๋…ผ๋ฌธ์—์„œ๋Š” ์ด๋Ÿฌํ•œ ๊ธฐ๋ณธ์ ์ธ ํŠน์ง• ๋ฒกํ„ฐ ์ถ”์ถœ ๊ณผ์ •์ด ์—†๋Š” ๋ฒ ์ด์ง€์•ˆ ์ตœ์ ํ™” (Bayesian optimization) ๊ธฐ๋ฒ•์„ ์ด์šฉํ•œ ์ข…๋‹จ ์‹ฌ์ธต ํ•™์Šต๋ง ์ž๋™ ์„ค๊ณ„ ๊ธฐ๋ฒ•์„ ์ œ์•ˆํ•œ๋‹ค. ๋ฒ ์ด์ง€์•ˆ ์ตœ์ ํ™” ๊ธฐ๋ฒ•์€ ์ดˆ ๋งค๊ฐœ๋ณ€์ˆ˜ (hyperparamters)๋ฅผ ์ตœ์ ํ™”ํ•˜๊ธฐ ์œ„ํ•œ ๊ธฐ๊ณ„ ํ•™์Šต ๋ถ„์•ผ์˜ ๋Œ€ํ‘œ์ ์ธ ์ตœ์ ํ™” ๊ธฐ๋ฒ•์ธ๋ฐ, ์ตœ์ ํ™” ๊ณผ์ •์—์„œ ํ‰๊ฐ€ ์‹œ๊ฐ„์ด ๋งŽ์ด ์†Œ์š”๋˜๋Š” ๋ชฉ์  ํ•จ์ˆ˜ (expensive black box function)๋ฅผ ๊ฐ–๊ณ  ์žˆ๋Š” ์ตœ์ ํ™” ๋ฌธ์ œ์— ์ ํ•ฉํ•˜๋‹ค. ์ด๋Ÿฌํ•œ ๋ฒ ์ด์ง€์•ˆ ์ตœ์ ํ™”๋ฅผ ์ด์šฉํ•˜์—ฌ ๊ธฐ๋ณธ์ ์ธ ํ•™์Šต ๋ชจ๋ธ์ธ 1D CNNs ๋ฐ RNNs์˜ ์ „์ฒด ๋ชจ๋ธ์˜ ์ดˆ ๋งค๊ฐœ๋ณ€์ˆ˜ ๋ฐ ๊ตฌ์กฐ์  ์ตœ์ ํ™”๋ฅผ ์ˆ˜ํ–‰ํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์ œ์•ˆํ•˜์˜€์œผ๋ฉฐ, ์ œ์•ˆ๋œ ๋ฐฉ๋ฒ•๋ก ์„ ๋ฐ”ํƒ•์œผ๋กœ Deep EEGNet์ด๋ผ๋Š” ์ธ๊ฐ„์˜ ๊ฐ์ •์ƒํƒœ๋ฅผ ํŒ๋ณ„ํ•  ์ˆ˜ ์žˆ๋Š” ๋ชจ๋ธ์„ ์ œ์•ˆํ•˜์˜€๋‹ค. ์—ฌ๋Ÿฌ ์‹คํ—˜์„ ํ†ตํ•ด ์ œ์•ˆ๋œ ๋ชจ๋ธ์ด ๊ธฐ์กด์˜ ์ฃผํŒŒ์ˆ˜ ํŠน์ง• ๋ฒกํ„ฐ (band power feature) ์ถ”์ถœ ๊ธฐ๋ฒ• ๊ธฐ๋ฐ˜์˜ ์ „ํ†ต์ ์ธ ๊ฐ์ • ํŒจํ„ด ์ธ์‹ ๋ฐฉ๋ฒ•๋ณด๋‹ค ์ข‹์€ ์„ฑ๋Šฅ์„ ๋‚˜ํƒ€๋‚ด๊ณ  ์žˆ์Œ์„ ํ™•์ธํ•  ์ˆ˜ ์žˆ์—ˆ๋‹ค. ๊ฒฐ๋ก ์ ์œผ๋กœ ๋ณธ ๋…ผ๋ฌธ์€ ์‹œ๊ณ„์—ด ๋ฐ์ดํ„ฐ๋ฅผ ์ด์šฉํ•œ ํŒจํ„ด ์ธ์‹๋ฌธ์ œ๋ฅผ ์—ฌ๋Ÿฌ ํŠน์ง• ๋ฒกํ„ฐ ์ถ”์ถœ ๊ธฐ๋ฒ• ๊ธฐ๋ฐ˜์˜ ์ „ํ†ต์ ์ธ ๋ฐฉ๋ฒ•์„ ํ†ตํ•ด ์„ค๊ณ„ํ•˜๋Š” ๋ฐฉ๋ฒ•๋ถ€ํ„ฐ, ์ถ”๊ฐ€์ ์ธ ํŠน์ง• ๋ฒกํ„ฐ ์ถ”์ถœ ๊ณผ์ • ์—†์ด ์›๋ณธ ๋ฐ์ดํ„ฐ๋งŒ์„ ์ด์šฉํ•˜์—ฌ ์ข…๋‹จ ์‹ฌ์ธต ํ•™์Šต๋ง์„ ์„ค๊ณ„ํ•˜๋Š” ๋ฐฉ๋ฒ•๊นŒ์ง€ ์ œ์•ˆํ•˜์˜€๋‹ค. 
๋˜ํ•œ, ๋‹ค์–‘ํ•œ ์‹คํ—˜์„ ํ†ตํ•ด ์ œ์•ˆ๋œ ๋ฐฉ๋ฒ•๋ก ์ด ์‹œ๊ณ„์—ด ์‹ ํ˜ธ ๋ฐ์ดํ„ฐ๋ฅผ ์ด์šฉํ•œ ํŒจํ„ด ์ธ์‹ ๋ฌธ์ œ์— ํšจ๊ณผ์ ์œผ๋กœ ์ ์šฉ๋  ์ˆ˜ ์žˆ์Œ์„ ๋ณด์˜€๋‹ค.Chapter 1 Introduction 1 1.1 Pattern Recognition in Time Series 1 1.2 Major Problems in Conventional Approaches 7 1.3 The Proposed Approach and its Contribution 8 1.4 Thesis Organization 10 Chapter 2 Related Works 12 2.1 Pattern Recognition in Time Series using Conventional Methods 12 2.1.1 Time Domain Features 12 2.1.2 Frequency Domain Features 14 2.1.3 Signal Processing based on Multi-variate Empirical Mode Decomposition (MEMD) 15 2.1.4 Statistical Time Series Model (ARIMA) 18 2.2 Fundamental Deep Learning Algorithms 20 2.2.1 Convolutional Neural Networks (CNNs) 20 2.2.2 Recurrent Neural Networks (RNNs) 22 2.3 Hyper Parameters and Structural Optimization Techniques 24 2.3.1 Grid and Random Search Algorithms 24 2.3.2 Bayesian Optimization 25 2.3.3 Neural Architecture Search 28 2.4 Research Trends related to Time Series Data 29 2.4.1 Generative Model of Raw Audio Waveform 30 Chapter 3 Preliminary Researches: Patten Recognition in Time Series using Various Feature Extraction Methods 31 3.1 Conventional Methods using Time and Frequency Features: Motor Imagery Brain Response Classification 31 3.1.1 Introduction 31 3.1.2 Methods 32 3.1.3 Ensemble Classification Method (Stacking & AdaBoost) 32 3.1.4 Sensitivity Analysis 33 3.1.5 Classification Results 36 3.2 Statistical Feature Extraction Methods: ARIMA Model Based Feature Extraction Methodology 38 3.2.1 Introduction 38 3.2.2 ARIMA Model 38 3.2.3 Signal Processing 39 3.2.4 ARIMA Model Conformance Test 40 3.2.5 Experimental Results 40 3.2.6 Summary 43 3.3 Application on Specific Time Series Data: Human Stress States Recognition using Ultra-Short-Term ECG Spectral Feature 44 3.3.1 Introduction 44 3.3.2 Experiments 45 3.3.3 Classification Methods 49 3.3.4 Experimental Results 49 3.3.5 Summary 56 Chapter 4 Master Framework for Pattern Recognition in Time Series 57 4.1 The Concept of the Proposed Framework for Pattern Recognition in Time Series 57 4.1.1 Optimal Basic Deep Learning Models for the Proposed Framework 57 4.2 Two Categories for Pattern Recognition in Time Series Data 59 4.2.1 The Proposed Deep Learning Framework for Periodic Time Series Signals 59 4.2.2 The Proposed Deep Learning Framework for Non-periodic Time Series Signals 61 4.3 Expanded Models of the Proposed Master Framework for Pattern Recogntion in Time Series 63 Chapter 5 Deep Learning Model Design Methodology for Periodic Signals using Prior Knowledge: Deep ECGNet 65 5.1 Introduction 65 5.2 Materials and Methods 67 5.2.1 Subjects and Data Acquisition 67 5.2.2 Conventional ECG Analysis Methods 72 5.2.3 The Initial Setup of the Deep Learning Architecture 75 5.2.4 The Deep ECGNet 78 5.3 Experimental Results 83 5.4 Summary 98 Chapter 6 Deep Learning Model Design Methodology for Non-periodic Time Series Signals using Optimization Techniques: Deep EEGNet 100 6.1 Introduction 100 6.2 Materials and Methods 104 6.2.1 Subjects and Data Acquisition 104 6.2.2 Conventional EEG Analysis Methods 106 6.2.3 Basic Deep Learning Units and Optimization Technique 108 6.2.4 Optimization for Deep EEGNet 109 6.2.5 Deep EEGNet Architectures using the EEG Channel Grouping Scheme 111 6.3 Experimental Results 113 6.4 Summary 124 Chapter 7 Concluding Remarks 126 7.1 Summary of Thesis and Contributions 126 7.2 Limitations of the Proposed Methods 128 7.3 Suggestions for Future Works 129 Bibliography 131 ์ดˆ ๋ก 139Docto

    New approaches for assessing time-varying functional brain connectivity using fMRI data

    Get PDF
    It was long assumed that functional connectivity (FC) among brain regions did not vary substantially during a single resting-state functional magnetic resonance imaging (rs-fMRI) run. However, an increasing number of studies have reported time-varying functional connectivity (TVC) in rs-fMRI data taking place over considerably shorter time windows than previously thought (i.e., on the order of seconds and minutes). The study of TVC is nevertheless a relatively new research area, and a number of unaddressed problems hinder its ability to fulfill its promise of increasing our knowledge of human brain function. First, while it has previously been shown that autocorrelation can negatively impact estimates of static functional connectivity, its impact on TVC estimates has not been established. Understanding the influence of autocorrelation on TVC is of high importance, as we hypothesize that the autocorrelation within a time series can inflate the sampling variability of TVC estimated using sliding-window techniques, increasing the risk of misinterpreting noise as true TVC and negatively impacting subsequent estimation of whole-brain TVC. We thus study the impact of autocorrelation on TVC and how to mitigate it. Second, there is a need for new analytic approaches for estimating TVC. Most studies use a sliding-window approach, where the correlation between regions is computed locally within a specific time window that is moved across time. A shortcoming of this approach is the need to select an a priori window length for the analysis. To circumvent this issue, we focus on the use of instantaneous phase synchronization (IPS), which offers single time-point resolution of time-resolved fMRI connectivity. The use of IPS necessitates bandpass filtering the data to obtain valid results, and we show how bandpass filtering affects estimates of IPS metrics such as the phase locking value (PLV) and phase coherence. Further, as current metrics discard the temporal transitions from positive to negative associations common in IPS analysis, we introduce a new approach within the IPS framework to circumvent this issue. Third, the choice of cut-off frequencies when bandpass filtering in IPS analysis is to some extent arbitrary. We therefore compare standard phase synchronization using the Hilbert transform with empirical mode decomposition (EMD), which eliminates the need for bandpass filtering in a data-driven manner. While the use of EMD has a number of benefits compared to the Hilbert transform, it has two shortcomings: its susceptibility to the SNR of the signal, and difficulty untangling frequencies close to one another. To address these issues and improve the assessment of IPS, we propose an alternative decomposition approach, multivariate variational mode decomposition (MVMD), for phase synchronization analysis.
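    Both estimators discussed above are compact enough to state in code. The following Python sketch is illustrative only: the band edges, filter order, sampling rate and synthetic data are placeholder choices, not those used in the thesis. It computes a sliding-window correlation and an instantaneous phase synchronisation time course, with the bandpass filtering that IPS requires and a cosine of the phase difference that preserves the sign of the association.

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        def sliding_window_corr(x, y, win):
            # TVC estimate: Pearson correlation in a window slid one volume at a time.
            return np.array([np.corrcoef(x[i:i + win], y[i:i + win])[0, 1]
                             for i in range(len(x) - win + 1)])

        def ips(x, y, fs, band=(0.03, 0.07)):
            # Single time-point estimate: bandpass, Hilbert phase, then the cosine
            # of the phase difference, which keeps the sign (positive/negative)
            # of the association; PLV summarises locking over the whole run.
            b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype='band')
            px = np.angle(hilbert(filtfilt(b, a, x)))
            py = np.angle(hilbert(filtfilt(b, a, y)))
            dphi = px - py
            plv = np.abs(np.mean(np.exp(1j * dphi)))   # phase locking value
            return np.cos(dphi), plv

        fs = 0.5                                   # sampling rate for a 2 s TR
        t = np.arange(600) / fs                    # a 20-minute synthetic "run"
        rng = np.random.default_rng(0)
        x = np.sin(2 * np.pi * 0.05 * t) + 0.3 * rng.standard_normal(t.size)
        y = np.sin(2 * np.pi * 0.05 * t + 0.5) + 0.3 * rng.standard_normal(t.size)
        tvc = sliding_window_corr(x, y, win=30)    # 60 s windows
        sync, plv = ips(x, y, fs)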