13 research outputs found

    A New Residual Dense Network for Dance Action Recognition From Heterogeneous View Perception

    At present, many people are in a sub-healthy state and are paying more attention to physical exercise. Dance is a relatively simple and popular activity that has received wide attention. Traditional action recognition methods are easily affected by action speed, illumination, occlusion, and complex backgrounds, which leads to poor robustness of the recognition results. To address these problems, an improved residual dense neural network is used to study the automatic recognition of dance action images. First, based on the residual model, the features of dance actions are extracted using convolutional and pooling layers. Then, the exponential linear unit (ELU) activation function, batch normalization (BN), and Dropout are used to improve and optimize the model, mitigating gradient vanishing, preventing over-fitting, accelerating convergence, and enhancing the model's generalization ability. Finally, a densely connected network (DenseNet) is introduced to make the extracted dance action features richer and more effective. Comparison experiments are carried out on two public databases and one self-built database. The results show that the recognition rates of the proposed method on the three databases are 99.98%, 97.95%, and 97.96%, respectively, indicating that this new method can effectively improve the performance of dance action recognition.
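    As a rough illustration of the kind of building block this abstract describes (residual shortcut, dense feature concatenation, BN, ELU, and Dropout around convolutions), the PyTorch sketch below combines these pieces in one module. Channel widths, growth rate, and dropout probability are illustrative assumptions, not values taken from the paper.

        import torch
        import torch.nn as nn

        class ResidualDenseBlock(nn.Module):
            """Toy residual-dense block: BN + ELU + Dropout around 3x3 convolutions,
            dense concatenation of intermediate features, and a residual shortcut."""
            def __init__(self, channels=64, growth=32, p_drop=0.2):
                super().__init__()
                self.conv1 = nn.Conv2d(channels, growth, 3, padding=1)
                self.conv2 = nn.Conv2d(channels + growth, growth, 3, padding=1)
                self.fuse = nn.Conv2d(channels + 2 * growth, channels, 1)  # 1x1 fusion back to input width
                self.bn1, self.bn2 = nn.BatchNorm2d(growth), nn.BatchNorm2d(growth)
                self.act = nn.ELU()
                self.drop = nn.Dropout2d(p_drop)

            def forward(self, x):
                f1 = self.drop(self.act(self.bn1(self.conv1(x))))
                f2 = self.drop(self.act(self.bn2(self.conv2(torch.cat([x, f1], dim=1)))))
                fused = self.fuse(torch.cat([x, f1, f2], dim=1))
                return x + fused  # residual shortcut

        # Example: a batch of 8 feature maps with 64 channels
        block = ResidualDenseBlock()
        out = block(torch.randn(8, 64, 32, 32))
        print(out.shape)  # torch.Size([8, 64, 32, 32])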

    CAVIAR: Context-driven Active and Incremental Activity Recognition

    Activity recognition on mobile device sensor data has been an active research area in mobile and pervasive computing for several years. While the majority of the proposed techniques are based on supervised learning, semi-supervised approaches are being considered to reduce the size of the training set required to initialize the model. These approaches usually apply self-training or active learning to incrementally refine the model, but their effectiveness seems to be limited to a restricted set of physical activities. We claim that the context which surrounds the user (e.g., time, location, proximity to transportation routes), combined with common knowledge about the relationship between context and human activities, could significantly increase the set of recognized activities, including those that are difficult to discriminate using only inertial sensors and the highly context-dependent ones. In this paper, we propose CAVIAR, a novel hybrid semi-supervised and knowledge-based system for real-time activity recognition. Our method applies semantic reasoning on context data to refine the predictions of an incremental classifier. The recognition model is continuously updated using active learning. Results on a real dataset obtained from 26 subjects show the effectiveness of our approach in increasing the recognition rate, extending the number of recognizable activities and, most importantly, reducing the number of queries triggered by active learning. In order to evaluate the impact of context reasoning, we also compare CAVIAR with a purely statistical version, considering features computed on context data as part of the machine learning process.
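    A minimal sketch of the general idea behind this abstract: an incremental classifier outputs a probability distribution over activities, context-derived constraints suppress activities inconsistent with the current context, and an active-learning query is triggered only when the refined prediction remains uncertain. The rule table, activity names, and confidence threshold below are assumptions for illustration; CAVIAR itself uses semantic/ontological reasoning rather than this hard-coded filter.

        import numpy as np

        # Hypothetical context constraints: activities consistent with each context.
        # (Illustrative only; CAVIAR derives such constraints via semantic reasoning.)
        CONSISTENT = {
            "on_bus_route": {"riding_bus", "standing", "sitting"},
            "at_gym": {"running", "cycling", "standing"},
        }
        ACTIVITIES = ["riding_bus", "standing", "sitting", "running", "cycling"]

        def refine_and_maybe_query(probs, context, threshold=0.6):
            """Mask context-inconsistent activities, renormalize, and decide whether
            active learning should query the user for a label."""
            allowed = CONSISTENT.get(context, set(ACTIVITIES))
            mask = np.array([a in allowed for a in ACTIVITIES], dtype=float)
            refined = probs * mask
            if refined.sum() == 0:      # rules contradict the classifier entirely
                refined = probs         # fall back to the raw prediction
            refined = refined / refined.sum()
            best = int(np.argmax(refined))
            query_user = refined[best] < threshold  # low confidence -> ask for a label
            return ACTIVITIES[best], refined, query_user

        # Example: classifier is unsure between running and cycling, context says "at_gym"
        probs = np.array([0.05, 0.10, 0.05, 0.45, 0.35])
        print(refine_and_maybe_query(probs, "at_gym"))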

    A Light Weight Smartphone Based Human Activity Recognition System with High Accuracy

    With the pervasive use of smartphones, which contain numerous sensors, data for modeling human activity is readily available. Human activity recognition is an important area of research because it can be used in context-aware applications. It has significant influence in many other research areas and applications including healthcare, assisted living, personal fitness, and entertainment. There has been widespread use of machine learning techniques in wearable and smartphone based human activity recognition. Despite being an active area of research for more than a decade, most of the existing approaches require extensive computation to extract features, train models, and recognize activities. This study presents a computationally efficient smartphone based human activity recognizer, based on dynamical systems and chaos theory. A reconstructed phase space is formed from the accelerometer sensor data using time-delay embedding. A single accelerometer axis is used to reduce memory and computational complexity. A Gaussian mixture model is learned on the reconstructed phase space. A maximum likelihood classifier uses the Gaussian mixture model to classify ten different human activities and a baseline. One public and one collected dataset were used to validate the proposed approach. Data was collected from ten subjects. The public dataset contains data from 30 subjects. Out-of-sample experimental results show that the proposed approach is able to recognize human activities from smartphones' one-axis raw accelerometer sensor data. The proposed approach achieved 100% accuracy for individual models across all activities and datasets. The proposed approach requires 3 to 7 times less data than existing approaches to classify activities. It also requires 3 to 4 times less time to build the reconstructed phase space compared to time- and frequency-domain features. A comparative evaluation is also presented to compare the proposed approach with state-of-the-art works.
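    A small sketch of the pipeline this abstract describes: time-delay embedding of a single accelerometer axis into a reconstructed phase space, one Gaussian mixture model per activity, and maximum-likelihood classification of a window. The embedding dimension, lag, and number of mixture components are illustrative assumptions, not the parameters used in the study.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        def delay_embed(x, dim=3, lag=5):
            """Time-delay embedding: map a 1-D signal to points of a reconstructed phase space."""
            n = len(x) - (dim - 1) * lag
            return np.column_stack([x[i * lag: i * lag + n] for i in range(dim)])

        class PhaseSpaceGMMClassifier:
            """One GMM per activity; classify a window by maximum mean log-likelihood."""
            def __init__(self, n_components=4, dim=3, lag=5):
                self.n_components, self.dim, self.lag = n_components, dim, lag
                self.models = {}

            def fit(self, windows_by_activity):
                for activity, windows in windows_by_activity.items():
                    pts = np.vstack([delay_embed(w, self.dim, self.lag) for w in windows])
                    self.models[activity] = GaussianMixture(self.n_components).fit(pts)
                return self

            def predict(self, window):
                pts = delay_embed(window, self.dim, self.lag)
                scores = {a: m.score(pts) for a, m in self.models.items()}  # mean log-likelihood
                return max(scores, key=scores.get)

        # Example with synthetic one-axis accelerometer windows (256 samples each)
        rng = np.random.default_rng(0)
        train = {"walking": [np.sin(np.linspace(0, 20, 256)) + 0.1 * rng.standard_normal(256) for _ in range(5)],
                 "standing": [0.05 * rng.standard_normal(256) for _ in range(5)]}
        clf = PhaseSpaceGMMClassifier().fit(train)
        print(clf.predict(np.sin(np.linspace(0, 20, 256))))  # expected: "walking"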

    Enhancement of high-level context recognition performance based on smartphone data using user information

    ํ•™์œ„๋…ผ๋ฌธ (์„์‚ฌ)-- ์„œ์šธ๋Œ€ํ•™๊ต ๋Œ€ํ•™์› : ๊ณต๊ณผ๋Œ€ํ•™ ์‚ฐ์—…๊ณตํ•™๊ณผ, 2018. 8. ๋ฐ•์ข…ํ—Œ.๊ฐœ์ธํ™” ๊ธฐ๊ธฐ์ธ ์Šค๋งˆํŠธํฐ์˜ ์‚ฌ์šฉ์ด ๋ณดํŽธํ™”๋˜๊ณ  ์žˆ๊ณ  ์ด๋ฅผ ์ด์šฉํ•œ ๋‹ค์–‘ํ•œ ์ข…๋ฅ˜์˜ ์„œ๋น„์Šค๊ฐ€ ๋“ฑ์žฅํ•จ์— ๋”ฐ๋ผ, ์‚ฌ์šฉ์ž์˜ ์ƒํ™ฉ์— ๋”ฐ๋ฅธ ๋งž์ถคํ˜• ์„œ๋น„์Šค์— ๋Œ€ํ•œ ์š”๊ตฌ๊ฐ€ ์ฆ๊ฐ€ํ•˜๊ณ  ์žˆ๋‹ค. ์Šค๋งˆํŠธํฐ์€ ์Œ์„ฑ ํ†ตํ™”๋‚˜ ๋ฌธ์ž ๋ฉ”์‹œ์ง€ ๋“ฑ๊ณผ ๊ฐ™์€ ๊ธฐ์กด์˜ ํœด๋Œ€์ „ํ™”์˜ ๊ธฐ๋Šฅ ์™ธ์—๋„ ๋ฐ์ดํ„ฐ ํ†ต์‹ ์ด๋‚˜ ๋‹ค์ข… ์„ผ์„œ ๋ฐ์ดํ„ฐ ๋“ฑ์„ ํ™œ์šฉ ๊ฐ€๋Šฅํ•œ ์• ํ”Œ๋ฆฌ์ผ€์ด์…˜์˜ ํ™œ์šฉ์ด ๊ฐ€๋Šฅํ•œ ๊ฐœ์ธ์šฉ ์ปดํ“จํ„ฐ๋กœ์„œ์˜ ์—ญํ• ์„ ์ˆ˜ํ–‰ํ•˜๊ณ  ์žˆ๋‹ค. ๋งŽ์€ ์ˆ˜์˜ ์Šค๋งˆํŠธํฐ ์‚ฌ์šฉ์ž๋“ค์€ ์ผ์ƒ์ƒํ™œ ๋Œ€๋ถ€๋ถ„์˜ ์‹œ๊ฐ„์— ์Šค๋งˆํŠธํฐ์„ ํœด๋Œ€ํ•˜๊ณ  ์žˆ๊ธฐ ๋•Œ๋ฌธ์—, ์Šค๋งˆํŠธํฐ์œผ๋กœ๋ถ€ํ„ฐ ์ˆ˜์ง‘ํ•  ์ˆ˜ ์žˆ๋Š” ๋ฐ์ดํ„ฐ๋ฅผ ํ™œ์šฉํ•˜์—ฌ ์‚ฌ์šฉ์ž์˜ ์ƒํ™ฉ์„ ์ธ์ง€ํ•˜๋Š” ์—ฐ๊ตฌ๋“ค๊ณผ ์‚ฌ์šฉ์ž ํŠน์„ฑ ์ •๋ณด๋ฅผ ์ถ”๋ก ํ•˜๋Š” ์—ฐ๊ตฌ๋“ค์ด ๋‹ค์–‘ํ•˜๊ฒŒ ์ง„ํ–‰๋˜์–ด์™”๋‹ค. ์„ผ์„œ ๋ฐ์ดํ„ฐ๋ฅผ ํ†ตํ•ด ๋ฌผ๋ฆฌ์  ์šด๋™์— ๋”ฐ๋ผ ๊ตฌ๋ถ„๋˜๋Š” ์ €์ˆ˜์ค€ ์ปจํ…์ŠคํŠธ๋ฅผ ์ธ์ง€ํ•˜๋Š” ์—ฐ๊ตฌ์— ๋น„ํ•ด, ์‚ฌํšŒ๋‚˜ ๋ฌธํ™”์  ์ฐจ์ด์— ๋”ฐ๋ผ ์˜๋ฏธ๊ฐ€ ๋‹ฌ๋ผ์งˆ ์ˆ˜ ์žˆ๋Š” ๊ณ ์ˆ˜์ค€ ์ปจํ…์ŠคํŠธ ์ธ์ง€์— ๋Œ€ํ•œ ์—ฐ๊ตฌ๋Š” ์ƒ๋Œ€์ ์œผ๋กœ ๋ฌผ๋ฆฌ ์„ผ์„œ ๋ฐ์ดํ„ฐ์˜ ์˜์กด๋„๊ฐ€ ๋‚ฎ๊ธฐ ๋•Œ๋ฌธ์—, ์ธ์ง€ ๋‚œ์ด๋„๋„ ๋†’๊ณ  ์•„์ง๊นŒ์ง€ ์ƒ๋Œ€์ ์œผ๋กœ ๋ฏธ์ง„ํ•˜์˜€๋‹ค. ๊ณ ์ˆ˜์ค€ ์ปจํ…์ŠคํŠธ ์ธ์ง€ ์ •ํ™•๋„๊ฐ€ ์ข‹์„์ˆ˜๋ก ์ƒํ™ฉ๋ณ„ ๋งž์ถคํ˜• ์„œ๋น„์Šค์˜ ๋‹ค์–‘ํ™”๋‚˜ ์ •๊ตํ™”์— ์žˆ์–ด์„œ ํ™œ์šฉ ๋ฐฉ์•ˆ์ด ๋‹ค์–‘ํ•˜๋‹ค. ์ด์— ๋ณธ ์—ฐ๊ตฌ์—์„œ๋Š” ์‚ฌ์šฉ์ž ํŠน์„ฑ์— ๋”ฐ๋ผ ์„ผ์„œ ๋ฐ์ดํ„ฐ์˜ ๋ถ„ํฌ๊ฐ€ ๋‹ฌ๋ผ์ง„๋‹ค๋Š” ์ ์„ ํ†ตํ•ด, ์„ผ์„œ ๋ฐ์ดํ„ฐ ๊ธฐ๋ฐ˜์˜ ์‚ฌ์šฉ์ž ์ƒํ™ฉ ์ธ์ง€ ๋ชจํ˜•์— ์‚ฌ์šฉ์ž ํŠน์„ฑ ์ •๋ณด๋ฅผ ํ•จ๊ป˜ ์‚ฌ์šฉํ•˜์—ฌ ์‚ฌ์šฉ์ž์˜ ๊ณ ์ˆ˜์ค€ ์ปจํ…์ŠคํŠธ ์ƒํ™ฉ ์ธ์ง€ ๋ชจํ˜•์˜ ์ •ํ™•๋„๋ฅผ ํ–ฅ์ƒ์‹œํ‚ฌ ์ˆ˜ ์žˆ๋Š” ๊ธฐ๋ฒ•์„ ์ œ์•ˆํ•œ๋‹ค. ๋ณธ ์—ฐ๊ตฌ์—์„œ ์ œ์•ˆํ•˜๋Š” ๊ธฐ๋ฒ•์€ ๋‘ ๋‹จ๊ณ„๋กœ ๊ตฌ์„ฑ๋œ๋‹ค. ๊ทธ ์ฒซ๋ฒˆ์งธ ๋‹จ๊ณ„๋กœ, ์ˆœ๊ฐ„์ ์œผ๋กœ ํš๋“์ด ๊ฐ€๋Šฅํ•œ ์Šค๋ƒ…์ƒท ๋ฐ์ดํ„ฐ๋ฅผ ์ด์šฉํ•˜๊ธฐ ๋•Œ๋ฌธ์— ์ฆ‰์‹œ ์ถ”๋ก ์ด ๊ฐ€๋Šฅํ•œ ์‚ฌ์šฉ์ž ํŠน์„ฑ ์ •๋ณด ์ถ”๋ก ์„ ์ˆ˜ํ–‰ํ•œ๋‹ค. ๋‘๋ฒˆ์งธ ๋‹จ๊ณ„๋กœ๋Š”, ์ฒซ ๋‹จ๊ณ„์—์„œ ์–ป์€ ์‚ฌ์šฉ์ž ํŠน์„ฑ ์ •๋ณด๋ฅผ ์ธ์ง€ ๋Œ€์ƒ์ด ๋˜๋Š” ์„ผ์„œ ๋ฐ์ดํ„ฐ์™€ ๋ณ‘ํ•ฉํ•˜์—ฌ ์ƒํ™ฉ ์ธ์ง€ ๋ชจํ˜•์˜ ์ž…๋ ฅ๊ฐ’์œผ๋กœ ์‚ฌ์šฉํ•˜์—ฌ ์ธ์ง€ ๋Œ€์ƒ์˜ ์‚ฌ์šฉ์ž์˜ ์ƒํ™ฉ ์ธ์ง€๋ฅผ ์ˆ˜ํ–‰ํ•œ๋‹ค. ์‚ฌ์šฉ์ž ํŠน์„ฑ ์ถ”๋ก  ๋ชจํ˜•์€ ์Šค๋ƒ…์ƒท ๋ฐ์ดํ„ฐ์ธ ์‚ฌ์šฉ์ž์˜ ์Šค๋งˆํŠธํฐ ์• ํ”Œ๋ฆฌ์ผ€์ด์…˜ ๋ชฉ๋ก์œผ๋กœ๋ถ€ํ„ฐ ์ƒ์„ฑํ•œ ์š”์ธ๋ฒกํ„ฐ๋ฅผ ์ด์šฉํ•ด ํ•™์Šต์„ ์ˆ˜ํ–‰ํ•˜์—ฌ ์‚ฌ์šฉ์ž ํŠน์„ฑ ์ •๋ณด๋ฅผ ์ถ”๋ก ํ•œ๋‹ค. ์ƒํ™ฉ ์ธ์ง€ ๋ชจํ˜•์€ ๊ฐ€์†๋„ ์„ผ์„œ ๋ฐ์ดํ„ฐ์™€ ์˜ค๋””์˜ค ์„ผ์„œ ๋ฐ์ดํ„ฐ๋ฅผ ์ด์šฉํ•˜์—ฌ ์•™์ƒ๋ธ” ํ•™์Šต ๋ฐฉ๋ฒ•์˜ ์ผ์ข…์ธ ๋žœ๋ค ํฌ๋ ˆ์ŠคํŠธ ๋ถ„๋ฅ˜ ๋ชจํ˜•์„ ํ†ตํ•ด ์ˆ˜๋ฉด, ์‹์‚ฌ, ์ˆ˜์—…, ๊ณต๋ถ€, ์Œ์ฃผ, ์ด๋™์˜ ์ด ์—ฌ์„ฏ ๊ฐ€์ง€ ์ƒํ™ฉ์„ ์ธ์ง€ ํ•œ๋‹ค. ์ž์ฒด ์ œ์ž‘ํ•œ ์• ํ”Œ๋ฆฌ์ผ€์ด์…˜์„ ํ†ตํ•ด 100๋ช…์˜ ํ”ผ์‹คํ—˜์ž๋“ค๋กœ๋ถ€ํ„ฐ ์‹คํ—˜ ๋ฐ์ดํ„ฐ๋ฅผ ์ˆ˜์ง‘ํ•˜๊ณ  ์ œ์•ˆ ๊ธฐ๋ฒ•์˜ ์„ฑ๋Šฅ์„ ํ™•์ธํ•˜์˜€๋‹ค. ์—ฌ์„ฏ ๊ฐ€์ง€ ์ƒํ™ฉ๊ณผ ์—ด๋‘ ๊ฐ€์ง€ ์‚ฌ์šฉ์ž ํŠน์„ฑ ์ •๋ณด๋ฅผ ์กฐํ•ฉํ•˜์—ฌ ๊ฐ ํŠน์„ฑ ์ •๋ณด๊ฐ€ ํด๋ž˜์Šค ๋ณ„๋กœ ์ธ์ง€ ์„ฑ๋Šฅ์— ๋ฏธ์น˜๋Š” ์˜ํ–ฅ๋ ฅ์„ ์‚ดํŽด๋ณด์•˜๋‹ค. ๊ทธ ๊ฒฐ๊ณผ, ์ œ์•ˆ ๊ธฐ๋ฒ•์„ ํ†ตํ•ด ๊ธฐ์กด์˜ ์ƒํ™ฉ ์ธ์ง€ ๋ชจํ˜•์—์„œ 13%์˜ ์„ฑ๋Šฅ ํ–ฅ์ƒ์„ ํ™•์ธํ•  ์ˆ˜ ์žˆ์—ˆ๋‹ค.์ดˆ๋ก i ๋ชฉ์ฐจ v ํ‘œ ๋ชฉ์ฐจ vi ๊ทธ๋ฆผ ๋ชฉ์ฐจ vii ์ œ 1 ์žฅ ์„œ๋ก  1 1.1 ์—ฐ๊ตฌ ๋ฐฐ๊ฒฝ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1 1.2 ์—ฐ๊ตฌ ๋ชฉ์  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 1.3 ์—ฐ๊ตฌ ๋‚ด์šฉ . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . . . . . . 4 ์ œ 2 ์žฅ ๋ฐฐ๊ฒฝ ์ด๋ก  ๋ฐ ๊ด€๋ จ ์—ฐ๊ตฌ 5 2.1 ๋ฐฐ๊ฒฝ ์ด๋ก  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 2.1.1 ๋žœ๋ค ํฌ๋ ˆ์ŠคํŠธ .... . . . . . . . . . . . . . . . . . . . . . . . . . . 5 2.1.2 Doc2Vec . . . . .... . . . . . . . . . . . . . . . . . . . . . . . . . 8 2.1.3 TF-IDF ... . . . . . . .... . . . . . . . . . . . . . . . . . . . . . . 11 2.2 ๊ด€๋ จ ์—ฐ๊ตฌ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13 2.2.1 ์‚ฌ์šฉ์ž ์ƒํ™ฉ ์ธ์ง€ . . . . . . . . . . . . . . . . . . . . . . . . . 13 2.2.2 ์‚ฌ์šฉ์ž ํŠน์„ฑ ์ •๋ณด ์ถ”๋ก  . . . . . . . . . . . . . . . . . . . . . 15 ์ œ3 ์žฅ ์ œ์•ˆ ๊ธฐ๋ฒ• 16 3.1 ์ „์ฒด ๊ณผ์ • . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 3.2 ๋ฐ์ดํ„ฐ ์ „์ฒ˜๋ฆฌ. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18 3.2.1 ๋ฐ์ดํ„ฐ ์ˆ˜์ง‘ ๊ตฌ๊ฐ„ ๋™๊ธฐํ™” . . . . . . . . . . . . . . . . . . . . 18 3.2.2 ๊ฐ€์†๋„ ์„ผ์„œ ๋ฐ์ดํ„ฐ ๋ณด์ •. . . . . . . . . . . . . . . . . . . . . 20 3.2.3 ์˜ค๋””์˜ค ์„ผ์„œ ๋ฐ์ดํ„ฐ ์ „์ฒ˜๋ฆฌ . . . . . . . . . . . . . . . . . . . 22 3.3 ์ธ์ง€ ๋ชจํ˜• . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23 3.3.1 ์‚ฌ์šฉ์ž ์ƒํ™ฉ ์ธ์ง€ . . . . . . . . . . . . . . . . . . . . . . . . . . 23 3.3.2 ์‚ฌ์šฉ์ž ํŠน์„ฑ ์ •๋ณด ์ถ”๋ก  . . . . . . . . . . . . . . . . . . . . . . 23 ์ œ 4 ์žฅ ์‹คํ—˜ ๋ฐ ๊ฒฐ๊ณผ . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25 4.1 ์‹คํ—˜ ๋ฐ์ดํ„ฐ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25 4.1.1 ๋ฐ์ดํ„ฐ ์ˆ˜์ง‘์šฉ ์• ํ”Œ๋ฆฌ์ผ€์ด์…˜ . . . . . . . . . . . . . . . . . . . 25 4.1.2 ๋ฐ์ดํ„ฐ ์ˆ˜์ง‘ ์‹คํ—˜ . . . . . . . . . . . . . . . . . . . . . . . . . . . 27 4.2 ์‹คํ—˜ ์„ค๊ณ„ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29 4.3 ์‹คํ—˜ ํ™˜๊ฒฝ ๋ฐ ํ‰๊ฐ€ ์ง€ํ‘œ . . . . . . . . . . . . . . . . . . . . . . . . 30 4.3.1 ์‹คํ—˜ ํ™˜๊ฒฝ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30 4.3.2 ํ‰๊ฐ€ ์ง€ํ‘œ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31 4.4 ์‹คํ—˜ ๊ฒฐ๊ณผ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32 4.4.1 ์‚ฌ์šฉ์ž ์ƒํ™ฉ ์ธ์ง€ ๋ชจํ˜• ์ตœ์  ๋งค๊ฐœ๋ณ€์ˆ˜ . . . . . . . . . . . . . 32 4.4.2 ์‚ฌ์šฉ์ž ํŠน์„ฑ ์ •๋ณด ์ ์šฉ ์—ฌ๋ถ€ . . . . . . . . . . . . . . . . . . . . 34 4.4.3 ์‚ฌ์šฉ์ž ํŠน์„ฑ ์ •๋ณด ์ถ”๋ก  ์‹คํ—˜ . . . . . . . . . . . . . . . . . . . . 38 4.4.4 ์‹ค์ œ ์ถ”๋ก ํ•œ ์‚ฌ์šฉ์ž ํŠน์„ฑ ์ •๋ณด๋ฅผ ์ ์šฉํ•œ ์ƒํ™ฉ ์ธ์ง€ ๋ชจํ˜•์˜ ์„ฑ๋Šฅ 39 ์ œ 5 ์žฅ ๊ฒฐ๋ก  41 5.1 ์š”์•ฝ ๋ฐ ์—ฐ๊ตฌ ์˜์˜ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41 5.2 ํ–ฅํ›„ ์—ฐ๊ตฌ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43 ์ฐธ๊ณ ๋ฌธํ—Œ 44 Abstract 51Maste
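    A schematic sketch of the two-stage idea described above, using scikit-learn: a user trait is inferred from the installed-app list (here a simple TF-IDF plus logistic-regression stand-in for the thesis' Doc2Vec/TF-IDF app-list features), and the predicted trait is concatenated with sensor features before the random forest context classifier. App lists, trait labels, feature shapes, and context labels are assumptions for illustration.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression

        # ---- Stage 1: infer a user trait from the installed-app list (snapshot data) ----
        app_lists = ["bank subway maps calendar", "game camera chat music", "lecture notes pdf mail"]
        traits    = ["office_worker", "student", "student"]          # hypothetical labels
        vec = TfidfVectorizer()
        trait_model = LogisticRegression(max_iter=1000).fit(vec.fit_transform(app_lists), traits)

        def trait_feature(app_list_text):
            # One-hot encode the predicted trait so it can be appended to numeric sensor features
            pred = trait_model.predict(vec.transform([app_list_text]))[0]
            return np.array([pred == c for c in trait_model.classes_], dtype=float)

        # ---- Stage 2: concatenate the trait with sensor features, then classify the context ----
        rng = np.random.default_rng(0)
        sensor_X = rng.standard_normal((6, 8))                       # accelerometer/audio features
        contexts = ["sleep", "meal", "class", "study", "drink", "move"]
        user_apps = ["game camera chat music"] * 6                   # same user for all windows here

        X = np.hstack([sensor_X, np.vstack([trait_feature(a) for a in user_apps])])
        context_model = RandomForestClassifier(n_estimators=100).fit(X, contexts)
        print(context_model.predict(X[:1]))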

    Activity recognition with weighted frequent patterns mining in smart environments

    In the past decades, activity recognition has aroused great interest among research groups working on context-aware computing and human behaviour monitoring. However, the correlations between activities and their frequent patterns have never been directly addressed by traditional activity recognition techniques. As a result, activities that trigger the same set of sensors are difficult to differentiate, even though they exhibit different patterns, such as different frequencies of the sensor events. In this paper, we propose an efficient association rule mining technique to find the association rules between activities and their frequent patterns, and build an activity classifier based on these association rules. We also address the classification of overlapping activities by incorporating the global and local weights of the patterns. Experimental results on a publicly available dataset demonstrate that our method achieves better performance than traditional recognition methods such as Decision Tree, Naive Bayes, and HMM. Comparison studies show that the proposed association rule mining method is efficient, and that activity recognition accuracy can be further improved by considering the global and local weights of activities' frequent patterns.
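    A simplified sketch of the idea of mining frequent sensor-event patterns per activity and scoring a new event window with a global weight (how distinctive a pattern is across activities, IDF-like) combined with a local weight (how frequent it is within an activity). The training data, minimum support, and weighting formulas below are illustrative stand-ins, not the exact formulation in the paper.

        import math
        from collections import Counter

        # Toy training data: each activity maps to episodes of triggered sensor events.
        TRAIN = {
            "prepare_meal": [["stove", "cupboard", "fridge"], ["fridge", "stove", "sink"]],
            "wash_dishes":  [["sink", "cupboard"], ["sink", "stove", "cupboard"]],
        }

        def mine_weighted_patterns(train, min_support=0.5):
            """Return per-activity {sensor: local_weight} and a global IDF-like weight per sensor."""
            local, doc_freq = {}, Counter()
            for activity, episodes in train.items():
                counts = Counter(s for ep in episodes for s in set(ep))
                freq = {s: c / len(episodes) for s, c in counts.items()
                        if c / len(episodes) >= min_support}
                local[activity] = freq
                doc_freq.update(freq.keys())
            n_act = len(train)
            glob = {s: math.log((1 + n_act) / (1 + df)) + 1 for s, df in doc_freq.items()}
            return local, glob

        def classify(window, local, glob):
            """Score each activity by summing global * local weights of matched sensors."""
            scores = {a: sum(glob[s] * w for s, w in freq.items() if s in window)
                      for a, freq in local.items()}
            return max(scores, key=scores.get)

        local, glob = mine_weighted_patterns(TRAIN)
        print(classify({"sink", "cupboard"}, local, glob))   # expected: "wash_dishes"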