99 research outputs found

    A Novel Unsupervised Graph Wavelet Autoencoder for Mechanical System Fault Detection

    Reliable fault detection is an essential requirement for the safe and efficient operation of complex mechanical systems in various industrial applications. Despite the abundance of existing approaches and the maturity of the fault detection research field, the interdependencies between condition monitoring data have often been overlooked. Recently, graph neural networks have been proposed as a way to learn these interdependencies, and the graph autoencoder (GAE) architecture, like standard autoencoders, has gained widespread use in fault detection. However, both the GAE and the graph variational autoencoder (GVAE) have fixed receptive fields, which limits their ability to extract multiscale features and hence their performance. To overcome these limitations, we propose two graph neural network models: the graph wavelet autoencoder (GWAE) and the graph wavelet variational autoencoder (GWVAE). The GWAE consists mainly of a spectral graph wavelet convolutional (SGWConv) encoder and a feature decoder, while the GWVAE is the variational form of the GWAE. The SGWConv layer is built upon the spectral graph wavelet transform, which realizes multiscale feature extraction by decomposing the graph signal into one set of scaling-function coefficients and several sets of spectral graph wavelet coefficients. To achieve unsupervised mechanical system fault detection, we transform the collected system signals into a PathGraph that encodes the neighboring relationships of the data samples. Faults are then detected by evaluating the reconstruction errors of normal and abnormal samples. We carried out experiments on two condition monitoring datasets collected from fuel control systems and one acoustic monitoring dataset from a valve. The results show that the proposed methods improve performance by around 3% to 4% compared to the comparison methods.
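The multiscale decomposition at the heart of SGWConv can be sketched for the path-graph case: the graph signal is filtered through one low-pass scaling kernel and several band-pass wavelet kernels in the graph spectral domain. The kernel choices below, h(lam) = exp(-lam) and g(s*lam) = s*lam*exp(-s*lam), and the toy signal are illustrative assumptions, not the paper's actual filter bank:

```python
import numpy as np

def path_graph_laplacian(n):
    """Unnormalized Laplacian of a path graph with n nodes
    (each sample linked to its immediate neighbors)."""
    A = np.zeros((n, n))
    idx = np.arange(n - 1)
    A[idx, idx + 1] = 1.0
    A[idx + 1, idx] = 1.0
    return np.diag(A.sum(axis=1)) - A

def sgw_transform(signal, scales=(1.0, 2.0, 4.0)):
    """Decompose a graph signal into one scaling-function component
    (low-pass kernel exp(-lam)) and several spectral graph wavelet
    components (band-pass kernels s*lam*exp(-s*lam))."""
    n = len(signal)
    lam, U = np.linalg.eigh(path_graph_laplacian(n))
    x_hat = U.T @ signal                         # graph Fourier transform
    scaling = U @ (np.exp(-lam) * x_hat)
    wavelets = [U @ ((s * lam) * np.exp(-s * lam) * x_hat) for s in scales]
    return scaling, wavelets

# Example: a smooth signal with one abrupt, fault-like jump.
t = np.linspace(0, 1, 64)
x = np.sin(2 * np.pi * 3 * t)
x[40:] += 3.0                                    # abrupt change at sample 40
scaling, wavelets = sgw_transform(x)
# The fine-scale wavelet coefficients localize the discontinuity.
idx = int(np.argmax(np.abs(wavelets[0])))
print(idx)
```

Fault detection in the paper then comes from reconstruction error, not from the coefficients directly; this sketch only shows why the wavelet coefficients carry multiscale, localized information.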

    ์ง€๋„ ๋ฐ ๋น„์ง€๋„ ํ•™์Šต์„ ์ด์šฉํ•œ ๋กœ๋ด‡ ๋จธ๋‹ˆํ“ฐ๋ ˆ์ดํ„ฐ ์ถฉ๋Œ ๊ฐ์ง€

    Thesis (Ph.D.) -- Seoul National University Graduate School: College of Engineering, Department of Mechanical and Aerospace Engineering, 2022. 2. Advisor: ๋ฐ•์ข…์šฐ.
Collaborative robot manipulators operating in dynamic and unstructured environments shared with humans require fast and accurate detection of collisions, which can range from sharp impacts (hard collisions) to pulling and pushing motions of longer duration (soft collisions). When dynamics model-based detection methods that estimate the external joint torque from motor current measurements are used, friction in the motors must be treated properly, for example through accurate modeling and identification of friction parameters. Although highly effective when done correctly, modeling and identifying the dynamics and friction parameters and manually setting multiple detection thresholds require considerable effort, making such methods difficult to replicate for mass-produced industrial robots. Unmodeled effects or uncertainties, e.g., backlash and elasticity, may also remain in the dynamics even after proper identification. This dissertation presents a total of four learning-based collision detection methods for robot manipulators as a means of sidestepping some of the implementation difficulties of pure model-based methods and compensating for uncertain dynamic effects. 
Two methods use supervised learning algorithms, support vector machine regression (SVMR) and a one-dimensional convolutional neural network (1-D CNN), which require both collision and collision-free motion data for training. The other two methods are based on unsupervised anomaly detection algorithms, a one-class support vector machine (OC-SVM) and an autoencoder, which require only collision-free motion data for training. Only the motor current measurements together with a robot dynamics model are required; no additional external sensors, friction modeling, or manual tuning of multiple detection thresholds are needed. We first describe the robot collision dataset collected with a six-dof collaborative robot manipulator, which is used for training and validating our supervised and unsupervised detection methods. The collision scenarios we consider are hard collisions, soft collisions, and collision-free motions, where hard and soft collisions are both treated simply as collisions. The test dataset for detection performance verification includes a total of 787 collisions and 62.4 minutes of collision-free motions, all collected while the robot executes random point-to-point six-joint motions. During data collection, three types of payloads are attached to the end-effector: no payload, a 3.3 kg payload, and a 5.0 kg payload. The detection performance of our supervised detection methods is then experimentally verified with the collected test dataset. Results demonstrate that our supervised detection methods can accurately detect a wide range of hard and soft collisions in real-time using a light network, compensating for uncertainties in the model parameters as well as unmodeled effects such as friction, measurement noise, backlash, and deformations. 
    Moreover, the SVMR-based method requires only one constant detection threshold to be tuned, while the 1-D CNN-based method requires only one output filter parameter to be tuned; both allow intuitive sensitivity tuning. Furthermore, the generalization capability of our supervised detection methods is experimentally verified with a set of simulation experiments. Finally, our unsupervised detection methods are validated on the same test dataset; their detection performance and generalization capability are verified as well. The experimental results show that our unsupervised detection methods are also able to robustly detect a variety of hard and soft collisions in real-time with very light computation and with only one constant detection threshold to be tuned, validating that uncertain dynamic effects, including unmodeled friction, can be successfully compensated with unsupervised learning as well. Although our supervised detection methods show better detection performance, our unsupervised detection methods are more practical for mass-produced industrial robots, since they require only collision-free motion data for training and do not require knowledge of every possible type of collision that can occur.
Contents:
1 Introduction
1.1 Model-Free Methods
1.2 Model-Based Methods
1.3 Learning-Based Methods
1.3.1 Using Supervised Learning Algorithms
1.3.2 Using Unsupervised Learning Algorithms
1.4 Contributions of This Dissertation
1.4.1 Supervised Learning-Based Model-Compensating Detection
1.4.2 Unsupervised Learning-Based Model-Compensating Detection
1.4.3 Comparison with Existing Detection Methods
1.5 Organization of This Dissertation
2 Preliminaries
2.1 Introduction
2.2 Robot Dynamics
2.3 Momentum Observer-Based Collision Detection
2.4 Supervised Learning Algorithms
2.4.1 Support Vector Machine Regression
2.4.2 One-Dimensional Convolutional Neural Network
2.5 Unsupervised Anomaly Detection
2.6 One-Class Support Vector Machine
2.7 Autoencoder-Based Anomaly Detection
2.7.1 Autoencoder Network Architecture and Training
2.7.2 Anomaly Detection Using Autoencoders
3 Robot Collision Data
3.1 Introduction
3.2 True Collision Index Labeling
3.3 Collision Scenarios
3.4 Monitoring Signal
3.5 Signal Normalization and Sampling
3.6 Test Data for Detection Performance Verification
4 Supervised Learning-Based Model-Compensating Detection
4.1 Introduction
4.2 SVMR-Based Collision Detection
4.2.1 Input Feature Vector Design
4.2.2 SVMR Training
4.2.3 Collision Detection Sensitivity Adjustment
4.3 1-D CNN-Based Collision Detection
4.3.1 Network Input Design
4.3.2 Network Architecture and Training
4.3.3 An Output Filtering Method to Reduce False Alarms
4.4 Collision Detection Performance Criteria
4.4.1 Area Under the Precision-Recall Curve (PRAUC)
4.4.2 Detection Delay and Number of Detection Failures
4.5 Collision Detection Performance Analysis
4.5.1 Global Performance with Varying Thresholds
4.5.2 Detection Delay and Number of Detection Failures
4.5.3 Real-Time Inference
4.6 Generalization Capability Analysis
4.6.1 Generalization to Small Perturbations
4.6.2 Generalization to an Unseen Payload
5 Unsupervised Learning-Based Model-Compensating Detection
5.1 Introduction
5.2 OC-SVM-Based Collision Detection
5.2.1 Input Feature Vector
5.2.2 OC-SVM Training
5.2.3 Collision Detection with the Trained OC-SVM
5.3 Autoencoder-Based Collision Detection
5.3.1 Network Input and Output
5.3.2 Network Architecture and Training
5.3.3 Collision Detection with the Trained Autoencoder
5.4 Collision Detection Performance Analysis
5.4.1 Global Performance with Varying Thresholds
5.4.2 Detection Delay and Number of Detection Failures
5.4.3 Comparison with Supervised Learning-Based Methods
5.4.4 Real-Time Inference
5.5 Generalization Capability Analysis
5.5.1 Generalization to Small Perturbations
5.5.2 Generalization to an Unseen Payload
6 Conclusion
6.1 Summary and Discussion
6.2 Future Work
A Appendix
A.1 SVM-Based Classification of Detected Collisions
A.2 Direct Estimation-Based Detection Methods
A.3 Model-Independent Supervised Detection Methods
A.4 Generalization to Large Changes in the Dynamics Model
Bibliography
Abstract
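The unsupervised detection idea in the dissertation, training only on collision-free data and flagging samples whose reconstruction error exceeds a single threshold, can be sketched with a linear stand-in for the autoencoder. The signal model, window length, and 1.1x threshold margin below are all hypothetical; the actual networks and momentum-observer inputs are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical monitoring signal: an external-torque-like residual that is
# near zero during collision-free motion and spikes during a collision.
train = rng.normal(0.0, 0.05, 2000)      # collision-free training data only
test = rng.normal(0.0, 0.05, 300)
test[150:155] += 1.5                     # injected "collision" transient

def windows(x, w=10):
    return np.lib.stride_tricks.sliding_window_view(x, w)

Xtr = windows(train)
mu = Xtr.mean(axis=0)
# Linear stand-in for the autoencoder: reconstruct each window from its
# projection onto the top principal components of collision-free windows.
_, _, Vt = np.linalg.svd(Xtr - mu, full_matrices=False)
P = Vt[:3]                               # 3-dimensional "latent space"

def recon_error(X):
    Z = (X - mu) @ P.T                   # encode
    R = Z @ P + mu                       # decode
    return np.linalg.norm(X - R, axis=1)

threshold = 1.1 * recon_error(Xtr).max() # the single tuned threshold
alarms = recon_error(windows(test)) > threshold
print(bool(alarms.any()))
```

Raising or lowering the single threshold trades detection delay against false alarms, which is the intuitive sensitivity tuning the abstract refers to.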

    IoT Anomaly Detection Methods and Applications: A Survey

    Ongoing research on anomaly detection for the Internet of Things (IoT) is a rapidly expanding field. This growth necessitates an examination of application trends and current gaps. The vast majority of these publications are in areas such as network and infrastructure security, sensor monitoring, smart home, and smart city applications, and the field is extending into even more sectors. Recent advancements have increased the need to study the many IoT anomaly detection applications. This paper begins with a summary of the detection methods and applications, accompanied by a discussion of the categorization of IoT anomaly detection algorithms. We then discuss recent publications to identify distinct application domains, examining papers chosen based on our search criteria. The survey considers 64 recent papers published between January 2019 and July 2021. Across these publications, we observed a shortage of IoT anomaly detection methodologies for challenges such as the integration of systems with various sensors, data and concept drift, and data augmentation where ground-truth data are scarce. Finally, we discuss these open challenges and offer perspectives on where further research is required. Comment: 22 pages

    Autoencoder Based Iterative Modeling and Multivariate Time-Series Subsequence Clustering Algorithm

    This paper introduces an algorithm for the detection of change-points and the identification of the corresponding subsequences in transient multivariate time-series data (MTSD). The analysis of such data has become more and more important due to its increasing availability in many industrial fields. Labeling, sorting, or filtering highly transient measurement data for training condition-based maintenance (CbM) models is cumbersome and error-prone. For some applications it can be sufficient to filter measurements by simple thresholds or to find change-points based on changes in mean value and variation. But for a robust diagnosis of, for example, a component within a component group with complex non-linear correlations between multiple sensor values, such a simple approach is not feasible: no meaningful and coherent measurement data that could be used for training a CbM model would emerge. We therefore introduce an algorithm that uses a recurrent neural network (RNN)-based autoencoder (AE) which is iteratively trained on incoming data. The scoring function uses the reconstruction error and latent space information. A model of each identified subsequence is saved and used for recognition of repeating subsequences as well as fast offline clustering. For evaluation, we propose a new similarity measure based on curvature for a more intuitive time-series subsequence clustering metric. A comparison with seven other state-of-the-art algorithms on eight datasets shows the capability and increased performance of our algorithm for clustering MTSD online and offline in conjunction with mechatronic systems. Comment: 26 pages, 11 figures; for associated Python code repositories see https://github.com/Jokonu/mt3scm and https://github.com/Jokonu/abimca; minor spelling and grammar corrections, fixed wrong BibTeX entry for SOStream, some improvements and corrections in formulas of section
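The curvature-based similarity idea admits a simple sketch: treat the multivariate series as a curve in R^d and compare discrete curvature profiles. The finite-difference formula and mean-curvature comparison below are one plausible reading for illustration; the paper's exact measure and aggregation may differ:

```python
import numpy as np

def curvature(X, dt=1.0):
    """Discrete curvature of a multivariate time series viewed as a curve
    in R^d: kappa = sqrt(|v|^2 |a|^2 - (v.a)^2) / |v|^3, with v and a the
    finite-difference velocity and acceleration along the time axis."""
    v = np.gradient(X, dt, axis=0)
    a = np.gradient(v, dt, axis=0)
    vv = (v * v).sum(axis=1)
    aa = (a * a).sum(axis=1)
    va = (v * a).sum(axis=1)
    num = np.sqrt(np.maximum(vv * aa - va ** 2, 0.0))
    return num / np.maximum(vv, 1e-12) ** 1.5

def curvature_similarity(X1, X2):
    """Smaller value = more similar mean-curvature profile."""
    return abs(curvature(X1).mean() - curvature(X2).mean())

# Sanity check: a unit circle has curvature ~1, a straight line has 0.
t = np.linspace(0, 2 * np.pi, 200)
circle = np.c_[np.cos(t), np.sin(t)]
line = np.c_[t, 2 * t]
print(round(curvature(circle).mean(), 2), round(curvature(line).mean(), 2))
```

A geometric criterion like this is insensitive to how fast a subsequence is traversed, which is what makes it attractive as a clustering metric for transient data.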

    Decentralized Vision-Based Byzantine Agent Detection in Multi-Robot Systems with IOTA Smart Contracts

    Multiple opportunities lie at the intersection of multi-robot systems and distributed ledger technologies (DLTs). In this work, we investigate the potential of new DLT solutions such as IOTA for detecting anomalies and byzantine agents in multi-robot systems in a decentralized manner. Traditional blockchain approaches are not applicable to real-world networked and decentralized robotic systems, where connectivity conditions are not ideal. To address this, we leverage recent advances in partition-tolerant and byzantine-tolerant collaborative decision-making with IOTA smart contracts. We show how our work on vision-based anomaly and change detection can be applied to detecting byzantine agents among multiple robots operating in the same environment. We show that IOTA smart contracts add low computational overhead while allowing trust to be built within the multi-robot system. The proposed approach effectively enables byzantine robot detection based on the comparison of images submitted by the different robots and the detection of anomalies and changes between them.
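The detection step, comparing what each robot submits and flagging the outlier, can be illustrated without the DLT layer. The feature vectors standing in for images, the element-wise median consensus, and the deviation score below are all assumptions for illustration; the IOTA smart-contract machinery is omitted entirely:

```python
import numpy as np

def byzantine_scores(submissions):
    """Deviation of each agent's submitted observation from the
    element-wise median of all submissions (a simple robust consensus).
    The agent with the largest score is the byzantine candidate."""
    S = np.asarray(submissions, dtype=float)
    consensus = np.median(S, axis=0)
    return np.linalg.norm(S - consensus, axis=1)

rng = np.random.default_rng(1)
# Four honest robots see (noisy versions of) the same scene feature
# vector; one byzantine robot submits a fabricated observation.
honest = rng.normal(0.0, 0.1, (4, 16)) + np.linspace(0, 1, 16)
byzantine = rng.normal(0.0, 1.0, (1, 16))
scores = byzantine_scores(np.vstack([honest, byzantine]))
print(int(np.argmax(scores)))
```

In the actual system, such comparisons would operate on image-derived features, and the submissions and scores would be recorded through the smart contracts to make the outcome auditable by all agents.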

    Adversarial Data Augmentation for HMM-based Anomaly Detection

    In this work, we concentrate on the detection of anomalous behaviors in systems operating in the physical world, for which it is usually not possible to have a complete set of all possible anomalies in advance. We present a data augmentation and retraining approach based on adversarial learning for improving anomaly detection. In particular, we first define a method for generating adversarial examples for anomaly detectors based on Hidden Markov Models (HMMs). Then, we present a data augmentation and retraining technique that uses these adversarial examples to improve anomaly detection performance. Finally, we evaluate our adversarial data augmentation and retraining approach on four datasets, showing that it achieves a statistically significant performance improvement and enhances robustness to adversarial attacks. Key differences from the state of the art on adversarial data augmentation are the focus on multivariate time series (as opposed to images), the context of one-class classification (in contrast to standard multi-class classification), and the use of HMMs (in contrast to neural networks).
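A minimal sketch of attacking an HMM-based detector: score sequences with the forward algorithm and greedily flip observation symbols to push the log-likelihood across the detection boundary. The toy two-state HMM, the greedy search, and the attack direction (lowering the likelihood of a normal sequence) are illustrative assumptions, not the paper's generation method:

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM
    (scaled forward algorithm): pi initial probs, A transitions,
    B emission probs with B[state, symbol]."""
    alpha = pi * B[:, obs[0]]
    ll = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        ll += np.log(alpha.sum())
        alpha /= alpha.sum()
    return ll

def adversarial_example(obs, pi, A, B, budget=2):
    """Greedily flip up to `budget` symbols to minimize the HMM
    log-likelihood, turning a normal sequence into one the detector
    would flag (candidate material for augmenting the training set)."""
    obs = list(obs)
    n_sym = B.shape[1]
    for _ in range(budget):
        best = (forward_loglik(obs, pi, A, B), None, None)
        for t in range(len(obs)):
            for s in range(n_sym):
                if s == obs[t]:
                    continue
                ll = forward_loglik(obs[:t] + [s] + obs[t + 1:], pi, A, B)
                if ll < best[0]:
                    best = (ll, t, s)
        if best[1] is None:
            break
        obs[best[1]] = best[2]
    return obs

# Toy 2-state, 2-symbol HMM that strongly prefers runs of repeated symbols.
pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1], [0.1, 0.9]])
B = np.array([[0.9, 0.1], [0.1, 0.9]])
normal = [0, 0, 0, 0, 1, 1, 1, 1]
adv = adversarial_example(normal, pi, A, B)
print(forward_loglik(normal, pi, A, B), forward_loglik(adv, pi, A, B))
```

Retraining on such perturbed sequences then tightens the one-class boundary around the normal behavior.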

    Spatiotemporal anomaly detection: streaming architecture and algorithms

    Includes bibliographical references. 2020 Summer. Anomaly detection is the science of identifying one or more rare or unexplainable samples or events in a dataset or data stream. The field of anomaly detection has been extensively studied by mathematicians, statisticians, economists, engineers, and computer scientists. One open research question remains: the design of distributed cloud-based architectures and algorithms that can accurately identify anomalies in previously unseen, unlabeled, streaming, multivariate spatiotemporal data. With streaming data, time is of the essence and insights are perishable. Real-world streaming spatiotemporal data originate from many sources, including mobile phones, supervisory control and data acquisition (SCADA) devices, the internet-of-things (IoT), distributed sensor networks, and social media. Baseline experiments are performed on four non-streaming, static multivariate anomaly detection datasets using unsupervised offline traditional machine learning (TML) and unsupervised neural network techniques. Multiple architectures, including autoencoders, generative adversarial networks, convolutional networks, and recurrent networks, are adapted for experimentation. Extensive experimentation demonstrates that neural networks produce superior detection accuracy over TML techniques. These same neural network architectures can be extended to process unlabeled spatiotemporal streaming data using online learning. Space and time relationships are further exploited to provide additional insights and increased anomaly detection accuracy. A novel domain-independent architecture and set of algorithms called the Spatiotemporal Anomaly Detection Environment (STADE) is formulated. STADE is based on a federated learning architecture. STADE's streaming algorithms are based on geographically unique, persistently executing neural networks trained using online stochastic gradient descent (SGD). 
STADE is designed to be pluggable, meaning that alternative algorithms may be substituted or combined to form an ensemble. STADE incorporates a Stream Anomaly Detector (SAD) and a Federated Anomaly Detector (FAD). The SAD executes at multiple locations on streaming data, while the FAD executes at a single server and identifies global patterns and relationships among the site anomalies. Each STADE site streams anomaly scores to the centralized FAD server for further spatiotemporal dependency analysis and logging. The FAD is based on recent advances in DNN-based federated learning. A STADE testbed is implemented to facilitate globally distributed experimentation using low-cost, commercial cloud infrastructure provided by Microsoftโ„ข. STADE testbed sites are situated in the cloud on each continent: Africa, Asia, Australia, Europe, North America, and South America. Communication occurs over the commercial internet. Three STADE case studies are investigated. The first case study processes commercial air traffic flows, the second processes global earthquake measurements, and the third processes social media (i.e., Twitterโ„ข) feeds. These case studies confirm that STADE is a viable architecture for the near real-time identification of anomalies in streaming data originating from (possibly) computationally disadvantaged, geographically dispersed sites. Moreover, the addition of the FAD provides enhanced anomaly detection capability. Since STADE is domain-independent, these findings can easily be extended to additional application domains and use cases.
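The site-level pattern, a persistently executing model trained by online SGD whose prediction error serves as the streaming anomaly score, combined with FedAvg-style aggregation across sites, can be sketched as follows. The linear autoregressive predictor and plain weight averaging are stand-ins for STADE's neural networks and its actual federated protocol:

```python
import numpy as np

class StreamAnomalyDetector:
    """Minimal stand-in for a site-level SAD: an online linear
    autoregressive predictor trained by SGD; the anomaly score of each
    new sample is its squared prediction error before the update."""
    def __init__(self, order=4, lr=0.01):
        self.w = np.zeros(order)
        self.lr = lr
        self.buf = np.zeros(order)      # most recent samples, newest first

    def update(self, x):
        pred = self.w @ self.buf
        err = x - pred
        self.w += self.lr * err * self.buf   # one SGD step
        self.buf = np.roll(self.buf, 1)
        self.buf[0] = x
        return err ** 2                      # streaming anomaly score

def federated_average(detectors):
    """FAD-style aggregation sketch: average the site weights (FedAvg)."""
    w = np.mean([d.w for d in detectors], axis=0)
    for d in detectors:
        d.w = w.copy()

# Two "sites" observe the same periodic process; scores shrink as the
# predictors learn, and an out-of-pattern sample scores high afterwards.
signal = np.sin(0.2 * np.arange(500))
sites = [StreamAnomalyDetector(), StreamAnomalyDetector()]
scores = [[d.update(x) for x in signal] for d in sites]
federated_average(sites)
spike_score = sites[0].update(5.0)           # anomalous sample
print(spike_score)
```

In STADE the per-sample scores, rather than raw data, are what each site streams to the FAD server, which keeps communication cheap for computationally disadvantaged sites.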

    Deep Learning Techniques for Radar-Based Continuous Human Activity Recognition

    Human capability to perform routine tasks declines with age and age-related problems. Remote human activity recognition (HAR) is beneficial for regular monitoring of the elderly population. This paper addresses the problem of the continuous detection of daily human activities using a mm-wave Doppler radar. In this study, two strategies have been employed: the first method uses un-equalized series of activities, whereas the second utilizes a gradient-based strategy for equalization of the series of activities. The dynamic time warping (DTW) algorithm and Long Short-Term Memory (LSTM) techniques have been implemented for the classification of un-equalized and equalized series of activities, respectively. The input for DTW was provided using three strategies. The first approach uses the pixel-level data of frames (UnSup-PLevel). In the other two strategies, a convolutional variational autoencoder (CVAE) is used to extract unsupervised encoded features (UnSup-EnLevel) and supervised encoded features (Sup-EnLevel) from the series of Doppler frames. The second approach, for equalized data series, involves the application of four distinct feature extraction methods: convolutional neural networks (CNN), supervised and unsupervised CVAE, and principal component analysis (PCA). The extracted features were used as input to the LSTM. This paper presents a comparative analysis of a novel supervised feature extraction pipeline, employing Sup-EnLevel-DTW and Sup-EnLevel-LSTM, against several state-of-the-art unsupervised methods, including UnSup-EnLevel-DTW, UnSup-EnLevel-LSTM, CNN-LSTM, and PCA-LSTM. The results demonstrate the superiority of the Sup-EnLevel-LSTM strategy. However, the UnSup-PLevel strategy worked surprisingly well without using annotations and frame equalization.
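The DTW algorithm used for the un-equalized series can be sketched in its textbook dynamic-programming form; the short activity sequences below are toy stand-ins for the Doppler-frame feature series:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences: the
    minimum cumulative |a_i - b_j| cost over all monotone alignments,
    which lets sequences of different lengths and speeds be compared."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return D[n, m]

# The same activity profile performed at two speeds aligns at zero cost,
# while a different profile does not.
walk_slow = [0, 1, 2, 2, 2, 1, 0]
walk_fast = [0, 1, 2, 1, 0]
sit = [0, 0, 1, 0, 0]
print(dtw_distance(walk_slow, walk_fast), dtw_distance(walk_slow, sit))
```

This tolerance to temporal stretching is exactly why DTW suits the un-equalized activity series, whereas the LSTM route first equalizes the series so fixed-length inputs can be used.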