5 research outputs found

    Automatic Speech Recognition in Mobile Customer Care Service

    This project applies an automatic speech recognition system to mobile customer care services. In existing services, customers must wait four to five minutes to reach the option they want to inquire about. Based on this requirement, incoming calls are filtered: callers who need particular information are dynamically routed to a speech recognition system that identifies the type of enquiry chosen. Speaker recognition dynamically identifies the individual speaking by analyzing the speech waveform, which makes it possible to verify the voice of a recognized user. This supports services such as telephone banking, mobile shopping, and database access, and helps secure confidential information.
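The call-filtering step described above can be sketched as a keyword match between a transcribed utterance and a set of service intents. The intent names, keyword lists, and queue labels below are illustrative assumptions, not details of the described system.

```python
# Hypothetical intent routing after speech-to-text; keywords and queue
# names are illustrative, not taken from the described system.
INTENT_KEYWORDS = {
    "billing": ["bill", "payment", "charge", "invoice"],
    "banking": ["balance", "transfer", "account"],
    "shopping": ["order", "delivery", "purchase"],
}

def route_call(transcript: str, default_queue: str = "agent") -> str:
    """Pick the queue whose keywords best match the transcript."""
    words = transcript.lower().split()
    best_queue, best_hits = default_queue, 0
    for queue, keywords in INTENT_KEYWORDS.items():
        hits = sum(w in keywords for w in words)
        if hits > best_hits:
            best_queue, best_hits = queue, hits
    return best_queue

print(route_call("I want to check my account balance"))  # banking
```

A real deployment would replace the keyword table with the recognizer's enquiry-type output, but the routing decision has the same shape.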

    A Practical Fuzzy Controller with Q-learning Approach for the Path Tracking of a Walking-aid Robot

    This study tackles the path-tracking problem of a prototype walking-aid (WAid) robot featuring human-robot interactive navigation. A practical fuzzy controller with reinforcement-learning ability is proposed for path-tracking control. The inputs to the designed fuzzy controller are the error distance and the error angle between the current and desired position and orientation, respectively; the controller outputs are the voltages applied to the left- and right-wheel motors. A heuristic fuzzy control with Sugeno-type rules is designed using a model-free approach, and the consequent part of each fuzzy control rule is designed with the aid of Q-learning. The design of the controller is presented in detail, and its effectiveness is demonstrated through hardware implementation and experiments in a human-robot interaction environment. The results also show that the proposed path-tracking control methods can easily be applied to various wheeled mobile robots. (International conference, 14-17 September 2014, Nagoya, Japan; electronic proceedings.)
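The controller structure described above can be sketched as a zero-order Sugeno system: each rule's firing strength weights a singleton consequent (a pair of candidate wheel voltages), and the output is their weighted average. The membership shapes, rule table, and voltage values below are illustrative placeholders, not the paper's tuned parameters; in the paper these consequents would be adjusted online by Q-learning.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Fuzzy sets for error distance (m) and error angle (rad) -- assumed ranges.
DIST_SETS = {"near": (-0.5, 0.0, 0.5), "far": (0.0, 0.5, 1.0)}
ANG_SETS = {"left": (-1.0, -0.5, 0.0), "zero": (-0.5, 0.0, 0.5),
            "right": (0.0, 0.5, 1.0)}

# Consequents: (left voltage, right voltage) singletons per rule; these
# are the values Q-learning would tune in the described approach.
RULES = {
    ("near", "left"):  (2.0, 1.0),
    ("near", "zero"):  (1.5, 1.5),
    ("near", "right"): (1.0, 2.0),
    ("far", "left"):   (4.0, 2.0),
    ("far", "zero"):   (3.0, 3.0),
    ("far", "right"):  (2.0, 4.0),
}

def control(e_dist, e_ang):
    """Sugeno weighted-average defuzzification over all rules."""
    num_l = num_r = den = 0.0
    for (d_set, a_set), (v_l, v_r) in RULES.items():
        w = tri(e_dist, *DIST_SETS[d_set]) * tri(e_ang, *ANG_SETS[a_set])
        num_l += w * v_l
        num_r += w * v_r
        den += w
    return (num_l / den, num_r / den) if den else (0.0, 0.0)

print(control(0.5, 0.0))  # "far, zero" dominates -> (3.0, 3.0)
```

With zero angle error both wheels get equal voltage; a positive (rightward) angle error raises the right-wheel voltage to steer back toward the path.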

    Using Deep Learning Technology to Realize the Automatic Control Program of Robot Arm Based on Hand Gesture Recognition

    In this study, robot arm control, computer vision, and deep learning technologies are combined to realize an automatic control program. The program has three functional modules: a hand gesture recognition module, a robot arm control module, and a communication module. The hand gesture recognition module records images of the user's hand gestures and recognizes their features using the YOLOv4 algorithm. The recognition results are transmitted to the robot arm control module by the communication module. Finally, the robot arm control module analyzes and executes the received hand gesture commands. With the proposed program, engineers can interact with the robot arm through hand gestures, teach it a trajectory with simple hand movements, and call different scripts to satisfy robot motion requirements in the actual production environment.
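The three-module structure can be sketched with a queue standing in for the communication module. The gesture labels and the gesture-to-command table are hypothetical, and a pre-labelled frame list stands in for the YOLOv4 detector; this shows only the data flow between the modules, not the actual system.

```python
import queue

# Hypothetical mapping from recognized gestures to arm commands.
GESTURE_TO_COMMAND = {"fist": "grip", "open_palm": "release",
                      "point": "move_to_target"}

def recognition_module(frames, channel):
    """Stand-in for YOLOv4: forwards known gestures over the channel."""
    for gesture in frames:            # a real module would detect these
        if gesture in GESTURE_TO_COMMAND:
            channel.put(gesture)      # the queue plays the communication module

def arm_control_module(channel):
    """Drain the channel and translate gestures into arm commands."""
    executed = []
    while not channel.empty():
        executed.append(GESTURE_TO_COMMAND[channel.get()])
    return executed

channel = queue.Queue()
recognition_module(["fist", "wave", "open_palm"], channel)
print(arm_control_module(channel))  # ['grip', 'release']
```

Unrecognized gestures ("wave" above) are simply dropped before transmission, which is one plausible way to keep noise out of the command stream.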

    Localization in Low Luminance, Slippery Indoor Environment Using Afocal Optical Flow Sensor and Image Processing

    Doctoral dissertation, Department of Electrical and Computer Engineering, Seoul National University, August 2017. Advisor: Dongil Cho. Localization is a prerequisite for the autonomous navigation of indoor service robots. Its accuracy degrades especially when slippage occurs in low-luminance indoor environments where camera-based localization is difficult. Slippage mainly occurs when traversing carpets or door sills, and wheel-encoder-based odometry cannot measure the travelled distance accurately in such cases. This dissertation proposes a robust localization method for low-luminance, slippery environments in which camera-based simultaneous localization and mapping (SLAM) fails to operate, by fusing a low-cost motion sensor, an afocal optical flow sensor (AOFS), and a VGA-class front-facing monocular camera. The robot pose is estimated by accumulating incremental travel distance and incremental heading. For more accurate distance estimation even under slippage, displacement measurements from the wheel encoders and the AOFS are fused; for heading estimation, a gyroscope is combined with indoor spatial information extracted from the front-camera image. An optical flow sensor estimates displacement robustly against wheel slippage, but when mounted on a mobile robot traversing uneven surfaces such as carpet, the varying height between the sensor and the floor becomes the dominant source of distance estimation error. This dissertation mitigates that error source by applying the afocal-system principle to the optical flow sensor.
In experiments with a robotic gantry system, the sensor height was varied from 30 mm to 50 mm over carpet and three other floor materials while travelling 80 cm, repeated ten times per condition. The proposed AOFS module showed a systematic error of 0.1% per 1 mm of height change, whereas a conventional fixed-focus optical flow sensor showed a systematic error of 14.7%. Mounted on an indoor service robot travelling 1 m on carpet, the AOFS yielded a mean distance estimation error of 0.02% with a variance of 17.6%, while a fixed-focus optical flow sensor on the same robot yielded a mean error of 4.09% with a variance of 25.7%. For cases where the surroundings are too dark to use images for position correction, that is, where a low-luminance image can be brightened but robust feature points or feature lines for SLAM still cannot be extracted, a method is proposed that uses the low-luminance image to correct the robot's heading. Applying histogram equalization to a low-luminance image brightens it but also amplifies noise, so a rolling guidance filter (RGF), which removes image noise while sharpening edges, is applied to improve the image; the orthogonal straight-line segments that compose the indoor space are then extracted, a vanishing point (VP) is estimated, and the robot's heading relative to the vanishing point is obtained and used for angle correction.
When the proposed method was applied to a robot navigating a carpeted, low-luminance (0.06 to 0.21 lx) indoor space of 77 sqm, the homing position error was reduced from 401 cm to 21 cm. Contents: 1. Introduction; 2. Afocal optical flow sensor (AOFS) module; 3. Using low-luminance images for heading correction; 4. Localization experiments in low-luminance environments; 5. Conclusion.
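The displacement fusion and heading correction described in the abstract can be sketched as simple dead reckoning: the per-step encoder displacement is cross-checked against the AOFS reading, and the gyro heading is replaced by a vanishing-point estimate when one is available. The slip threshold, the slip rule, and the function names are illustrative assumptions, not the thesis's algorithm.

```python
import math

SLIP_THRESHOLD = 0.02  # metres; assumed disagreement tolerance

def fuse_displacement(d_encoder, d_aofs):
    """Trust the AOFS when the encoder disagrees strongly (wheel slip)."""
    if abs(d_encoder - d_aofs) > SLIP_THRESHOLD:
        return d_aofs
    return 0.5 * (d_encoder + d_aofs)

def update_pose(pose, d_encoder, d_aofs, gyro_heading, vp_heading=None):
    """Accumulate one step; prefer a vanishing-point heading if observed."""
    x, y = pose
    theta = vp_heading if vp_heading is not None else gyro_heading
    d = fuse_displacement(d_encoder, d_aofs)
    return (x + d * math.cos(theta), y + d * math.sin(theta))

pose = (0.0, 0.0)
pose = update_pose(pose, 0.10, 0.10, 0.0)  # sensors agree: average them
pose = update_pose(pose, 0.10, 0.05, 0.0)  # slip detected: trust the AOFS
print(round(pose[0], 3))  # 0.15
```

The point of the sketch is the asymmetry: under slippage the encoder over-reports distance, so disagreement is resolved in favour of the ground-referenced optical flow measurement.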

    An Adaptive Neural Mechanism for Acoustic Motion Perception with Varying Sparsity

    Biological motion-sensitive neural circuits are quite adept at perceiving the relative motion of a relevant stimulus. Motion perception is a fundamental ability in neural sensory processing and crucial in target tracking tasks. Tracking a stimulus entails the ability to perceive its motion, i.e., to extract information about its direction and velocity. Here we focus on auditory motion perception of sound stimuli, which is poorly understood compared to its visual counterpart. In earlier work we developed a bio-inspired neural learning mechanism for acoustic motion perception. The mechanism extracts directional information via a model of the peripheral auditory system of lizards, and uses only this directional information, obtained via specific motor behaviour, to learn the angular velocity of unoccluded sound stimuli in motion. In nature, however, the stimulus being tracked may be occluded by artefacts in the environment, such as an escaping prey momentarily disappearing behind a cover of trees. This article extends the earlier work with a comparative investigation of auditory motion perception for unoccluded and occluded tonal sound stimuli with a frequency of 2.2 kHz, in both simulation and practice. Three instances of each stimulus are employed, differing in their movement velocities: 0.5°/time step, 1.0°/time step and 1.5°/time step. To validate the approach in practice, we implement the proposed neural mechanism on a wheeled mobile robot and evaluate its performance in auditory tracking.
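Recovering angular velocity from a sequence of direction estimates, such as the directional output of the lizard-ear model, can be illustrated with a least-squares slope over time steps. This stands in for the paper's neural learning mechanism and is not its actual implementation.

```python
def angular_velocity(directions):
    """Least-squares slope of direction (degrees) versus time step."""
    n = len(directions)
    t_mean = (n - 1) / 2                      # mean of time steps 0..n-1
    d_mean = sum(directions) / n
    num = sum((t - t_mean) * (d - d_mean)
              for t, d in enumerate(directions))
    den = sum((t - t_mean) ** 2 for t in range(n))
    return num / den

# A stimulus moving at a constant 1.0 deg per time step:
print(angular_velocity([0.0, 1.0, 2.0, 3.0, 4.0]))  # 1.0
```

Under occlusion some direction estimates go missing, which is precisely where a fixed-window slope breaks down and an adaptive mechanism like the one in the article becomes necessary.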