790 research outputs found

    Sensing with Earables: A Systematic Literature Review and Taxonomy of Phenomena

    Get PDF
    Earables have emerged as a unique platform for ubiquitous computing by augmenting ear-worn devices with state-of-the-art sensing. This new platform has spurred a wealth of new research exploring what can be detected with a small, wearable form factor. As a sensing platform, the ears are less susceptible to motion artifacts and are located in close proximity to a number of important anatomical structures, including the brain, blood vessels, and facial muscles, which reveal a wealth of information. They can be easily reached by the hands, and the ear canal itself is affected by mouth, face, and head movements. We have conducted a systematic literature review of 271 earable publications from the ACM and IEEE libraries. These were synthesized into an open-ended taxonomy of 47 different phenomena that can be sensed in, on, or around the ear. Through analysis, we identify 13 fundamental phenomena from which all other phenomena can be derived, and discuss the different sensors and sensing principles used to detect them. We comprehensively review the phenomena in four main areas: (i) physiological monitoring and health, (ii) movement and activity, (iii) interaction, and (iv) authentication and identification. This breadth highlights the potential that earables have to offer as a ubiquitous, general-purpose platform.

    Multimedia sensors embedded in smartphones for ambient assisted living and e-health

    Full text link
    The final publication is available at link.springer.com. Nowadays, smartphones are widely used to make human life more comfortable, and there is particular interest in Ambient Assisted Living (AAL) and e-Health applications. Sensor technology is advancing, and the growing number of sensors embedded in smartphones can be very useful for AAL and e-Health. While some sensors, such as the accelerometer, gyroscope, or light sensor, are commonly used in applications such as motion detection or light metering, others, such as the microphone and camera, can be used as multimedia sensors. This paper reviews published proposals, designs, and deployments that make use of multimedia sensors for AAL and e-Health. We classify them according to their main use: sound gathered by the microphone and images recorded by the camera. We also include a comparative table and analyze the gathered information.
    Parra-Boronat, L.; Sendra, S.; Jimenez, JM.; Lloret, J. (2016). Multimedia sensors embedded in smartphones for ambient assisted living and e-health. Multimedia Tools and Applications. 75(21):13271-13297. doi:10.1007/s11042-015-2745-8

    Earables: Wearable Computing on the Ears

    Get PDF
    Headphones have become established among consumers because they offer private audio channels, for example for listening to music, watching the latest movies while commuting, or making hands-free phone calls. Thanks to this clear primary purpose, headphones have already achieved wider adoption than other wearables such as smart glasses. In recent years, a new class of wearables has emerged, referred to as "earables". These devices are designed to be worn in or around the ears and contain various sensors to extend the functionality of headphones. The spatial proximity of earables to important anatomical structures of the human body provides an excellent platform for sensing a variety of properties, processes, and activities. Although some progress has already been made in earable research, its potential is currently not fully exploited. The goal of this dissertation is therefore to provide new insights into the possibilities of earables by exploring advanced sensing approaches that enable the detection of previously inaccessible phenomena. By introducing novel hardware and algorithms, this dissertation aims to push the boundaries of what is achievable with earables and ultimately establish them as a versatile sensing platform for augmenting human capabilities. To establish a sound foundation for the dissertation, this work synthesizes the state of the art in ear-based sensing and presents a uniquely comprehensive taxonomy based on 271 relevant publications. By connecting low-level sensing principles with higher-level phenomena, the dissertation then summarizes work from different areas, including (i) physiological monitoring and health, (ii) movement and activity, (iii) interaction, and (iv) authentication and identification. Building on existing research on physiological monitoring and health with earables, this dissertation presents advanced algorithms, statistical evaluations, and empirical studies to demonstrate the feasibility of measuring respiration rate and detecting episodes of increased coughing frequency using in-ear accelerometers and gyroscopes. These novel sensing capabilities underline the potential of earables to promote a healthier lifestyle and enable proactive healthcare. Furthermore, this dissertation introduces an innovative eye-tracking approach called "earEOG", intended to facilitate activity recognition. By systematically evaluating electrode potentials measured around the ears with a modified headphone, this dissertation opens a new way of measuring gaze direction that is less intrusive and more comfortable than previous approaches. In addition, a regression model is introduced to predict absolute changes in gaze angle based on earEOG. This development opens new possibilities for research that integrates seamlessly into everyday life and enables deeper insights into human behavior.
Furthermore, this work shows how the unique form factor of earables can be combined with sensing to detect novel phenomena. To improve the interaction capabilities of earables, this dissertation presents a discreet input technique called "EarRumble", which relies on the voluntary control of the tensor tympani muscle in the middle ear. The dissertation provides insights into the prevalence, usability, and comfort of EarRumble, together with practical applications in two real-world scenarios. The EarRumble approach extends the ear from a purely receptive organ to one that can not only receive signals but also produce output. In essence, the ear is used as an additional interactive medium that enables hands-free and eyes-free communication between humans and machines. EarRumble introduces an interaction technique that users describe as "magical and almost telepathic" and reveals considerable untapped potential in the earable domain. Building on the preceding results from the various application areas, the dissertation culminates in an open hardware and software platform for earables called "OpenEarable". OpenEarable includes a range of advanced sensing capabilities suitable for various ear-based research applications while remaining easy to manufacture. This lowers the barrier to entry into ear-based sensing research, and OpenEarable thus helps to exploit the full potential of earables. In addition, the dissertation contributes fundamental design guidelines and reference architectures for earables. Through this research, the dissertation closes the gap between fundamental research on ear-based sensors and their practical use in real-world scenarios. In summary, the dissertation delivers new usage scenarios, algorithms, hardware prototypes, statistical evaluations, empirical studies, and design guidelines to advance the field of earable computing. Moreover, it expands the traditional scope of headphones by turning these audio-focused devices into a platform that offers a wide range of advanced sensing capabilities for capturing properties, processes, and activities. This reorientation allows earables to establish themselves as a significant wearable category, and the vision of earables as a versatile sensing platform for augmenting human capabilities thus becomes increasingly real.
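
    For illustration, the following is a minimal sketch of how a regression model might map around-the-ear electrode potentials (earEOG) to relative gaze-angle changes. The window length, feature choice, electrode count, and linear model are assumptions for this sketch, not the dissertation's actual pipeline; the data below is synthetic.

```python
# Hypothetical sketch: predicting gaze-angle change from earEOG potentials.
# Window length, features, and the linear model are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression

def eog_features(window: np.ndarray) -> np.ndarray:
    """Summarize one earEOG window (samples x electrodes) as simple features."""
    drift = window[-1] - window[0]   # net potential change per electrode
    mean = window.mean(axis=0)       # baseline level per electrode
    return np.concatenate([drift, mean])

# Training data: windows of around-the-ear potentials and the ground-truth
# gaze-angle change (degrees) from a reference eye tracker (toy data here).
rng = np.random.default_rng(0)
windows = [rng.normal(size=(250, 4)) for _ in range(100)]   # 4 electrodes assumed
gaze_delta = rng.uniform(-30, 30, size=100)                 # toy labels

X = np.stack([eog_features(w) for w in windows])
model = LinearRegression().fit(X, gaze_delta)

# Predict the gaze-angle change for a new window.
print(model.predict(eog_features(windows[0])[None, :]))
```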

    Evaluation of a Behind-the-Ear ECG Device for Smartphone based Integrated Multiple Smart Sensor System in Health Applications

    Get PDF
    In this paper, we present a wireless Multiple Smart Sensor System (MSSS) used in conjunction with a smartphone to enable unobtrusive monitoring of the electrocardiogram (ear-lead ECG), integrated with a multiple-sensor system that includes core body temperature and blood oxygen saturation (SpO2) for ambulatory patients. The proposed behind-the-ear device makes the system attractive for measuring ECG data: it is technically less complex, attaches to non-hairy skin regions (and is therefore more suitable for long-term use), and is user-friendly, since there is no need to remove the upper garment. The proposed smart sensor device resembles a hearing aid and is wirelessly connected to a smartphone for physiological data transmission and display. The device not only gives access to core temperature and ECG from the ear, but can also be removed and reapplied by the patient at any time, increasing the usability of personal healthcare applications. A number of ECG electrode combinations, varying in electrode area and in the dry or non-dry nature of the electrode surface, are tested at various locations behind the ear. The best ECG electrode is then chosen based on the Signal-to-Noise Ratio (SNR) of the measured ECG signals. These electrodes showed an acceptable SNR of ~20 dB, comparable with existing traditional ECG electrodes. The developed ECG electrode system is then integrated with a commercially available PPG sensor (Amperor pulse oximeter) and a core body temperature sensor (MLX90614) using a microcontroller board (Arduino UNO), and the results are monitored using a newly developed Android smartphone application.
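
    The electrode selection described above amounts to comparing SNR estimates across candidate electrodes. A minimal sketch follows; the band split between a QRS-dominant region and the residual spectrum, the sampling rate, and the function names are assumptions for illustration, not the paper's exact method.

```python
# Hypothetical sketch of SNR-based electrode selection.
# The signal/noise band split and sampling rate are illustrative assumptions.
import numpy as np
from scipy.signal import welch

FS = 500  # sampling rate in Hz (assumed)

def snr_db(ecg: np.ndarray, signal_band=(5, 40)) -> float:
    """Estimate SNR as spectral power inside a QRS-dominant band vs. outside it."""
    freqs, psd = welch(ecg, fs=FS, nperseg=min(len(ecg), 2 * FS))
    in_band = (freqs >= signal_band[0]) & (freqs <= signal_band[1])
    signal_power = psd[in_band].sum()
    noise_power = psd[~in_band].sum() + 1e-12  # avoid division by zero
    return 10 * np.log10(signal_power / noise_power)

def best_electrode(recordings: dict[str, np.ndarray]) -> str:
    """Pick the electrode configuration with the highest estimated SNR."""
    return max(recordings, key=lambda name: snr_db(recordings[name]))
```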

    EarRumble: Discreet Hands- and Eyes-Free Input by Voluntary Tensor Tympani Muscle Contraction

    Get PDF
    We explore how discreet input can be provided using the tensor tympani, a small muscle in the middle ear that some people can voluntarily contract to induce a dull rumbling sound. We investigate the prevalence and ability to control the muscle through an online questionnaire (N=192) in which 43.2% of respondents reported the ability to ear rumble. Data collected from participants (N=16) shows how in-ear barometry can be used to detect voluntary tensor tympani contraction in the sealed ear canal. This data was used to train a classifier based on three simple ear rumble gestures, which achieved 95% accuracy. Finally, we evaluate the use of ear rumbling for interaction, grounded in three manual, dual-task application scenarios (N=8). This highlights the applicability of EarRumble as a low-effort and discreet eyes- and hands-free interaction technique that users found magical and almost telepathic.
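
    As a rough illustration of the sensing pipeline, here is a minimal sketch of classifying three ear-rumble gestures from windows of in-ear barometric pressure. The features, classifier choice, and gesture labels are assumptions for this sketch, not the paper's pipeline; the training data below is synthetic.

```python
# Hypothetical sketch: classifying three ear-rumble gestures from in-ear
# barometric pressure windows. Features and classifier are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def pressure_features(window: np.ndarray) -> np.ndarray:
    """Simple descriptors of one pressure window (relative to baseline)."""
    return np.array([
        window.min(),                      # depth of the pressure dip
        window.max(),
        np.ptp(window),                    # peak-to-peak amplitude
        np.argmin(window) / len(window),   # where in the window the dip occurs
    ])

# X_train: pressure windows recorded in the sealed ear canal,
# y_train: gesture labels, e.g. 0 = short, 1 = long, 2 = double rumble.
rng = np.random.default_rng(1)
X_train = np.stack([pressure_features(rng.normal(size=200)) for _ in range(90)])
y_train = rng.integers(0, 3, size=90)  # toy labels for illustration only

clf = RandomForestClassifier(n_estimators=50).fit(X_train, y_train)
print(clf.predict(X_train[:3]))
```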

    Development of algorithms for smart hearing protection devices

    Get PDF
    In industrial environments, wearing hearing protection devices is required to protect the wearers from high noise levels and prevent hearing loss. In addition to protecting against excessive noise, hearing protectors also block other types of signals, even useful and convenient ones. Therefore, if people want to communicate and exchange information, they must remove their hearing protectors, which is inconvenient or even dangerous. To overcome the problems encountered with traditional passive hearing protection devices, this thesis outlines the steps and process followed for the development of signal processing algorithms for a hearing protector that allows both protection against external noise and oral communication between wearers. This hearing protector is called the "smart hearing protection device". The smart hearing protection device is a traditional hearing protector in which a miniature digital signal processor is embedded to process incoming signals, together with a miniature microphone to pick up external signals and a miniature internal loudspeaker to transmit the processed signals to the protected ear. To enable oral communication without removing the smart hearing protectors, signal processing algorithms must be developed. Therefore, the objective of this thesis is to develop a noise-robust voice activity detection algorithm and a noise reduction algorithm to improve the quality and intelligibility of the speech signal. The methodology followed for the development of the algorithms is divided into three steps: first, the speech detection and noise reduction algorithms are developed; second, these algorithms are evaluated and validated in software; and third, they are implemented in the digital signal processor to validate their feasibility for the intended application. During the development of the two algorithms, the following constraints must be taken into account: the hardware resources of the digital signal processor embedded in the hearing protector (memory, number of operations per second), and the real-time constraint, since the algorithm's processing time should not exceed a certain threshold, so as not to generate a perceptible delay between the active and passive paths of the hearing protector or a delay between lip movements and speech perception. From a scientific perspective, the thesis determines the thresholds that the digital signal processor should not exceed in order not to generate a perceptible delay between the active and passive paths of the hearing protector. These thresholds were obtained from a subjective study, which found that this delay depends on several parameters: (a) the degree of attenuation of the hearing protector, (b) the duration of the signal, (c) the level of the background noise, and (d) the type of the background noise. This study showed that when the fit of the hearing protector is shallow, 20 % of participants begin to perceive a delay after 8 ms for a bell sound (transient), 16 ms for a clean speech signal, and 22 ms for a speech signal corrupted by babble noise. On the other hand, with a deep hearing protection fit, the perceptible delay between the two paths is 18 ms for the bell signal, 26 ms for the speech signal without noise, and no delay is perceived when speech is corrupted by babble noise, showing that better attenuation allows more time for digital signal processing.
Second, this work presents a new voice activity detection algorithm based on a low-complexity speech feature. This feature is calculated as the ratio between the signal's energy in the frequency region that contains the first formant, which characterizes the speech signal, and its energy at low or high frequencies, which characterizes the noise signals. The evaluation of this algorithm and its comparison to a benchmark algorithm demonstrated its selectivity, with a false positive rate averaged over three signal-to-noise ratios (SNR) (10, 5 and 0 dB) of 4.2 % and a true positive rate of 91.4 %, compared to 29.9 % false positives and 79.0 % true positives for the benchmark algorithm. Third, this work shows that extracting the temporal envelope of the signal to generate a nonlinear, adaptive gain function reduces the background noise, improves the quality of the speech signal, and generates the least musical noise compared to three benchmark algorithms. The development of the speech detection and noise reduction algorithms, their objective and subjective evaluations in different noise environments, and their implementation on digital signal processors enabled the validation of their efficiency and low complexity for the smart hearing protection application.
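
    To make the voice activity detection feature concrete, here is a minimal sketch of the band-energy ratio described above. The exact band edges (first-formant region versus low/high-frequency noise bands), frame length, sampling rate, and decision threshold are assumptions for illustration, not the thesis's tuned values.

```python
# Hypothetical sketch of the band-energy-ratio VAD feature described above.
# Band edges and threshold are illustrative assumptions, not the thesis values.
import numpy as np

FS = 16000   # sampling rate in Hz (assumed)
FRAME = 512  # samples per analysis frame (assumed)

def band_energy(mag2: np.ndarray, freqs: np.ndarray, lo: float, hi: float) -> float:
    band = (freqs >= lo) & (freqs < hi)
    return float(mag2[band].sum()) + 1e-12  # avoid division by zero

def vad_feature(frame: np.ndarray) -> float:
    """Energy in a first-formant region divided by low/high-frequency energy."""
    mag2 = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1 / FS)
    formant = band_energy(mag2, freqs, 300, 1000)   # ~first-formant region
    noise = band_energy(mag2, freqs, 0, 300) + band_energy(mag2, freqs, 4000, FS / 2)
    return formant / noise

def is_speech(frame: np.ndarray, threshold: float = 2.0) -> bool:
    """Flag the frame as speech when the feature exceeds an assumed threshold."""
    return vad_feature(frame) > threshold
```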

    Determination and evaluation of clinically efficient stopping criteria for the multiple auditory steady-state response technique

    Get PDF
    Background: Although the auditory steady-state response (ASSR) technique utilizes objective statistical detection algorithms to estimate behavioural hearing thresholds, the audiologist still has to decide when to terminate ASSR recordings, introducing once more a certain degree of subjectivity. Aims: The present study aimed at establishing clinically efficient stopping criteria for a multiple 80-Hz ASSR system. Methods: In Experiment 1, data of 31 normal-hearing subjects were analyzed off-line to propose stopping rules. Consequently, ASSR recordings are stopped when (1) all 8 responses reach significance and significance can be maintained for 8 consecutive sweeps; (2) the mean noise levels are ≤ 4 nV (if, at this "≤ 4-nV" criterion, p-values are between 0.05 and 0.1, measurements are extended only once by 8 sweeps); and (3) a maximum of 48 sweeps is attained. In Experiment 2, these stopping criteria were applied to 10 normal-hearing and 10 hearing-impaired adults to assess their efficiency. Results: The application of these stopping rules resulted in ASSR threshold values comparable to other multiple-ASSR research with normal-hearing and hearing-impaired adults. Furthermore, in 80% of the cases, ASSR thresholds could be obtained within a time-frame of 1 hour. Investigating the significant response amplitudes of the hearing-impaired adults through cumulative curves indicated that a higher noise-stop criterion than "≤ 4 nV" can probably be used. Conclusions: The proposed stopping rules can be used in adults to determine accurate ASSR thresholds within an acceptable time-frame of about 1 hour. However, additional research with infants and adults with varying degrees and configurations of hearing loss is needed to optimize these criteria.
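
    As one concrete reading of the three stopping rules above, here is a minimal sketch of the decision logic, treating the rules as alternative stop conditions. The data structure, field names, and the handling of the one-time 8-sweep extension follow one plausible interpretation of rule (2) and are assumptions, not the study's implementation.

```python
# Hypothetical sketch of the ASSR stopping rules described above.
# Data structures and names are assumptions; thresholds follow the abstract.
from dataclasses import dataclass

@dataclass
class SweepState:
    sweep: int                        # number of completed sweeps
    consecutive_all_significant: int  # sweeps with all 8 responses significant
    mean_noise_nv: float              # mean residual noise level in nV
    p_values: list[float]             # current p-value of each of the 8 responses
    extended_once: bool = False       # whether the one-time 8-sweep extension was used

MAX_SWEEPS = 48

def should_stop(s: SweepState) -> bool:
    # Rule 1: all 8 responses significant, maintained for 8 consecutive sweeps.
    if s.consecutive_all_significant >= 8:
        return True
    # Rule 2: mean noise <= 4 nV; if any p-value lies between 0.05 and 0.1,
    # extend once by 8 sweeps before stopping.
    if s.mean_noise_nv <= 4.0:
        borderline = any(0.05 < p <= 0.1 for p in s.p_values)
        if borderline and not s.extended_once:
            s.extended_once = True
            return False
        return True
    # Rule 3: hard cap on recording length.
    return s.sweep >= MAX_SWEEPS
```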