
    Design and Implementation of Wireless Point-Of-Care Health Monitoring Systems: Diagnosis For Sleep Disorders and Cardiovascular Diseases

    Chronic sleep disorders affect 40 million people in the United States. More than 25 million remain undiagnosed and untreated, which accounts for over $22 billion in unnecessary healthcare costs. Another major chronic condition, heart disease, causes 23.8% of deaths in the United States. There is therefore a need for a low-cost, reliable, and ubiquitous patient monitoring system. A remote point-of-care (POC) system can satisfy this need by providing real-time monitoring of the patient's health condition from remote places. However, currently available POC systems have drawbacks: a fixed number of physiological channels and a lack of real-time monitoring. In this dissertation, several remote POC systems are reported that diagnose sleep disorders and cardiovascular diseases while overcoming the drawbacks of current systems. First, two types of remote POC systems were developed for sleep disorders. One was designed with ZigBee and Wi-Fi networking, allowing the number of physiological channels to be increased or decreased flexibly by using a ZigBee star network. It also supports remote real-time monitoring by extending the WPAN to a WLAN through the combination of two wireless communication topologies, ZigBee and Wi-Fi. The other system was designed with a GSM/WCDMA network, which removes the restriction on testing places and provides remote real-time monitoring in the true sense of the word. Second, a fully wearable, textile-integrated real-time ECG acquisition system for football players was developed to help prevent sudden cardiac death. To reduce power consumption, adaptive RF output power control was implemented based on RSSI, reducing power consumption by up to 20%. Third, as an application of measuring physiological signals, a wireless brain-machine interface using features extracted from EOG and EEG was implemented to control the movement of a robot. The acceleration and deceleration of the robot are controlled by the attention level derived from EEG, and left/right eyeball motion from EOG controls the robot's direction. The accuracy rate was about 95%. Such health monitoring systems can reduce exponentially increasing healthcare costs and cater to the most important healthcare needs of society.
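    The RSSI-based adaptive output power control mentioned above can be sketched as a simple feedback rule. The target RSSI, hysteresis margin, and power levels below are illustrative assumptions, not the dissertation's actual firmware parameters:

    ```python
    def adapt_tx_power(rssi_dbm, current_level, levels=(-12, -6, 0, 4)):
        """Step the radio's TX power (dBm) up or down one level based on the
        last received signal strength, with hysteresis to avoid oscillation.
        All thresholds and power levels here are illustrative."""
        TARGET_RSSI = -85   # minimum acceptable received signal strength (dBm)
        HYSTERESIS = 5      # extra margin required before stepping down (dB)
        idx = levels.index(current_level)
        if rssi_dbm < TARGET_RSSI and idx < len(levels) - 1:
            idx += 1        # link is weak: raise output power
        elif rssi_dbm > TARGET_RSSI + HYSTERESIS and idx > 0:
            idx -= 1        # link has margin: lower power to save energy
        return levels[idx]
    ```

    Stepping one level at a time and requiring a hysteresis margin keeps the transmitter from toggling between adjacent power levels when the RSSI hovers near the threshold.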

    Aerospace medicine and biology. A continuing bibliography with indexes, supplement 186

    This bibliography lists 159 reports, articles, and other documents introduced into the NASA Scientific and Technical Information System in October 1978

    Understand-Before-Talk (UBT): A Semantic Communication Approach to 6G Networks

    In Shannon's theory, the semantic aspects of communication were identified but considered irrelevant to the technical communication problem. Semantic communication (SC) techniques have recently attracted renewed research interest in sixth-generation (6G) wireless networks because they can support efficient interpretation of the significance and meaning intended by a sender (or accomplishment of the sender's goal) when dealing with multi-modal data such as video, images, audio, and text. This capability matters for applications such as intelligent transportation systems, where each autonomous vehicle must process real-time video and data from numerous sensors, including radar. A notable difficulty of existing SC frameworks lies in handling the discrete constraints imposed on the pursued semantic coding and its interaction with the independent knowledge base, which makes reliable semantic extraction extremely challenging. We therefore develop a new lightweight hashing-based semantic extraction approach for the SC framework, in which the learning objective is to generate one-time signatures (hash codes) using supervised learning for low-latency, secure, and efficient management of the SC dynamics. We first evaluate the proposed semantic extraction framework over large image datasets, extend it with domain-adaptive hashing, and then demonstrate the effectiveness of the "semantic signature" in bulk transmission and multi-modal data.
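    The idea of compressing extracted features into a compact hash-code "semantic signature" can be sketched minimally as follows, using a fixed random projection as a stand-in for the paper's trained supervised hashing network (the projection, dimensions, and function name are assumptions for illustration only):

    ```python
    import numpy as np

    def semantic_signature(features, projection):
        """Binarise projected features into a compact hash code, one bit per
        projection direction, then pack the bits into bytes. A fixed random
        projection stands in here for a learned supervised hashing model."""
        bits = (features @ projection > 0).astype(np.uint8)
        return np.packbits(bits)

    rng = np.random.default_rng(0)
    proj = rng.standard_normal((128, 64))   # 128-d features -> 64-bit signature
    x = rng.standard_normal(128)
    sig = semantic_signature(x, proj)       # 8 bytes instead of 128 floats
    ```

    A receiver holding the same projection (or trained model) can compare signatures by Hamming distance instead of transmitting or matching the full feature vectors.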

    5G: 2020 and Beyond

    The future society will be ushered into a new communication era with the emergence of 5G. 5G will differ significantly from the previous communication generations (4G, 3G, ...), especially in terms of architecture and operation. This book discusses various aspects of the architecture and operation, possible challenges, and mechanisms to overcome them. Further, it supports users' interaction through communication devices relying on Human Bond Communication and COmmunication-NAvigation-SENsing-SErvices (CONASENSE). Topics broadly covered in this book are:
    • Wireless Innovative System for Dynamically Operating Mega Communications (WISDOM)
    • Millimeter Waves and Spectrum Management
    • Cyber Security
    • Device-to-Device Communication

    Real-time FPGA implementation of a neuromorphic pitch detection system

    This thesis explores the real-time implementation of a biologically inspired pitch detection system in digital electronics. Pitch detection is well understood and has been shown to occur in the initial stages of the auditory brainstem. By building such a system in digital hardware, we can prove the feasibility of implementing neuromorphic systems using digital technology. This research aims not only to prove that such an implementation is possible, but also to investigate ways of achieving efficient and effective designs. We aim to achieve this complexity reduction while maintaining the fine granularity of the signal processing inherent in neural systems. By producing an efficient design, we open up the possibility of implementing the system within the available resources, thus producing a demonstrable system. This thesis presents a review of computational models of all the components within the pitch detection system. The review also identifies key issues relating to the efficient implementation and development of the pitch detection system. Four investigations are presented that address these issues for optimal design of neuromorphic systems. The first investigation aims to produce the first-ever digital hardware implementation of the inner hair cell. The second investigation develops simplified models of the auditory nerve and the coincidence cell. The third investigation aims to reduce the most complex stage of the system, the stellate chopper cell array. Finally, we investigate implementing a large portion of the pitch detection system in hardware. The results contained in this thesis enable us to understand the feasibility of implementing such systems in real-time digital hardware. This knowledge may help researchers to make design decisions within the field of digital neuromorphic systems.
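    As a rough functional analogue of the periodicity detection that the neuromorphic system performs with spike coincidences, a conventional autocorrelation pitch estimator can be sketched. This is not the thesis's hardware design, only a software reference for the quantity such a system computes:

    ```python
    import numpy as np

    def detect_pitch(signal, fs):
        """Estimate the fundamental frequency (Hz) of a periodic signal by
        finding the first dominant peak of its autocorrelation -- a compact
        software stand-in for coincidence-based periodicity detection."""
        ac = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
        d = np.diff(ac)
        start = np.argmax(d > 0)          # skip the falling zero-lag lobe
        lag = start + np.argmax(ac[start:])  # first dominant periodic peak
        return fs / lag

    # Example: a pure 200 Hz tone sampled at 8 kHz
    fs = 8000
    t = np.arange(fs) / fs
    tone = np.sin(2 * np.pi * 200 * t)
    ```

    A hardware implementation replaces the explicit correlation with delay lines and coincidence cells, but the underlying computation is the same search for the lag of maximal self-similarity.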

    Analysis and experimentation of visual performance degradation effects due to auditory stimuli in immersive virtual environments

    In recent years, both consumption of and interest in Virtual Reality (VR) have been increasing rapidly. This innovative technology provides a number of groundbreaking capabilities and has lately become more accessible due to continued hardware development. In VR, the user becomes an active element that can interact in many ways with the virtual environment, in contrast with the passive role users have in traditional media. This interaction occurs naturally once the user is immersed in the virtual world and their senses detect what is happening around them. As in reality, human perception can be deceived or altered under certain conditions where our senses gather contradictory or excessive information. In fact, an audiovisual suppression effect was reported by Malpica et al. (2020), in which it was shown that auditory stimuli can cause loss of visual information: the user's visual performance degrades when spatially incongruent but temporally consistent sounds are heard simultaneously. Our brain perceives both the visual and the auditory stimuli, yet some visual data is lost due to neural interactions. The main goal of this project is to analyze and gain better insight into this audiovisual suppression effect, and more concretely its auditory component. Using the publication mentioned above as a baseline, we create a virtual environment in which both auditory and visual stimuli are presented to the user. Regarding auditory stimuli, we investigate how sounds located at the limits of our hearing range can influence the appearance of this effect. Therefore, frequency values associated with hearing limits are obtained for each user and later used to generate the sounds presented throughout the experiment. The participant encounters not only unimodal stimuli (auditory or visual only) but bimodal stimuli (auditory and visual at the same moment) as well.
    Bimodal stimuli are dynamically generated in fixed locations while maintaining temporal consistency, creating the conditions under which the audiovisual suppression effect occurs. By recording stimulus onsets as well as the moments at which the user reported perceiving a stimulus, it is possible to check whether the user has experienced the suppression effect. The experiments and the frequency test were performed by a group of 20 participants. The results show that detection and recognition rates of visual stimuli are indeed decreased by almost inaudible sounds; the audiovisual suppression effect therefore still occurs with auditory stimuli located at the limits of our hearing range. Surveys completed by the participants showed that most of them experienced a strong feeling of immersion and presence in the virtual world. Moreover, no significant side effects or drawbacks that would disturb participants or degrade the virtual experience were observed. Lastly, analysis of the eye-tracking data recorded during the experiment is suggested as future work, in order to study how users behave when barely audible sounds are perceived in VR. With the aim of researching the impact that other factors, such as personal emotions and state of mind, may have on the suppression effect, a couple of appropriate devices are proposed as well.
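    The per-user estimation of hearing-limit frequencies described above could follow an adaptive staircase procedure. The sketch below shows a generic 1-up/1-down staircase; the actual procedure, step size, and stopping rule used in the study are not stated, so these are assumptions:

    ```python
    def staircase_threshold(responds, start=60.0, step=5.0, reversals_needed=6):
        """1-up/1-down adaptive staircase: lower the stimulus level after each
        detection, raise it after each miss, and estimate the threshold as the
        mean level at the reversal points. `responds(level)` is a callable that
        returns True when the participant reports perceiving the stimulus."""
        level, going_down = start, None
        reversal_levels = []
        while len(reversal_levels) < reversals_needed:
            heard = responds(level)
            if going_down is not None and heard != going_down:
                reversal_levels.append(level)   # direction changed: a reversal
            going_down = heard
            level += -step if heard else step
        return sum(reversal_levels) / len(reversal_levels)
    ```

    With an ideal listener whose threshold is 30 dB, the staircase oscillates between 25 and 30 dB and converges to the midpoint, 27.5 dB.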

    Crossmodal audio and tactile interaction with mobile touchscreens

    Touchscreen mobile devices often use cut-down versions of desktop user interfaces, placing high demands on the visual sense that may prove awkward in mobile settings. The research in this thesis addresses the problems encountered by situationally impaired mobile users by using crossmodal interaction to exploit the abundant similarities between the audio and tactile modalities. By making information available to both senses, users can receive the information in the most suitable way, without having to abandon their primary task to look at the device. This thesis begins with a literature review of related work, followed by a definition of crossmodal icons: two icons may be considered crossmodal if and only if they provide a common representation of data that is accessible interchangeably via different modalities. Two experiments investigated possible parameters for use in crossmodal icons, with results showing that rhythm, texture and spatial location are effective. A third experiment focused on learning multi-dimensional crossmodal icons and the extent to which this learning transfers between modalities. The results showed identification rates of 92% for three-dimensional audio crossmodal icons when trained on the tactile equivalents, and identification rates of 89% for tactile crossmodal icons when trained on the audio equivalents. Crossmodal icons were then incorporated into a mobile touchscreen QWERTY keyboard. Experiments showed that keyboards with audio or tactile feedback produce fewer errors and greater text-entry speeds compared to standard touchscreen keyboards. The next study examined how environmental variables affect user performance with the same keyboard. The data showed that each modality performs differently under varying levels of background noise or vibration, and the exact levels at which these performance decreases occur were established.
    The final study involved a longitudinal evaluation of a touchscreen application, CrossTrainer, focusing on longitudinal effects on performance with audio and tactile feedback, the impact of context on performance, and personal modality preference. The results show that crossmodal audio and tactile icons are a valid method of presenting information to situationally impaired mobile touchscreen users, with recognition rates of 100% over time. This thesis concludes with a set of guidelines on the design and application of crossmodal audio and tactile feedback to enable application and interface designers to employ such feedback in all systems.
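    The notion of a crossmodal icon, one message rendered interchangeably in audio or tactile form from shared rhythm, texture, and spatial-location parameters, can be sketched as a small data structure. The field names and rendering strings below are illustrative, not the thesis's actual encoding:

    ```python
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class CrossmodalIcon:
        """One abstract message whose rhythm, texture and spatial location
        parameters can be rendered in either modality (illustrative sketch)."""
        rhythm: tuple   # pulse durations in ms, shared across modalities
        texture: str    # e.g. "smooth" or "rough"
        location: str   # e.g. "left", "centre", "right"

        def render(self, modality):
            # The same three parameters map to modality-specific dimensions:
            # timbre/pan for audio, waveform/actuator position for tactile.
            if modality == "audio":
                return f"tone seq {self.rhythm} / timbre={self.texture} / pan={self.location}"
            if modality == "tactile":
                return f"pulse seq {self.rhythm} / waveform={self.texture} / actuator={self.location}"
            raise ValueError(f"unknown modality: {modality}")
    ```

    Because both renderings derive from one parameter set, training in one modality can transfer to the other, which is the property the identification experiments measured.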

    Multimodal Wearable Sensors for Human-Machine Interfaces

    Certain areas of the body, such as the hands, eyes and organs of speech production, provide high-bandwidth information channels from the conscious mind to the outside world. The objective of this research was to develop an innovative wearable sensor device that records signals from these areas more conveniently than has previously been possible, so that they can be harnessed for communication. A novel bioelectrical and biomechanical sensing device, the wearable endogenous biosignal sensor (WEBS), was developed and tested in various communication and clinical measurement applications. One ground-breaking feature of the WEBS system is that it digitises biopotentials almost at the point of measurement. Its electrode connects directly to a high-resolution analog-to-digital converter. A second major advance is that, unlike previous active biopotential electrodes, the WEBS electrode connects to a shared data bus, allowing a large or small number of them to work together with relatively few physical interconnections. Another unique feature is its ability to switch dynamically between recording and signal source modes. An accelerometer within the device captures real-time information about its physical movement, not only facilitating the measurement of biomechanical signals of interest, but also allowing motion artefacts in the bioelectrical signal to be detected. Each of these innovative features has potentially far-reaching implications in biopotential measurement, both in clinical recording and in other applications. Weighing under 0.45 g and being remarkably low-cost, the WEBS is ideally suited for integration into disposable electrodes. Several such devices can be combined to form an inexpensive digital body sensor network, with shorter set-up time than conventional equipment, more flexible topology, and fewer physical interconnections. One phase of this study evaluated areas of the body as communication channels. 
    The throat was selected for detailed study since it yields a range of voluntarily controllable signals, including laryngeal vibrations and gross movements associated with vocal tract articulation. A WEBS device recorded these signals, and several novel methods of human-to-machine communication were demonstrated. To evaluate the performance of the WEBS system, recordings were validated against a high-end biopotential recording system for a number of biopotential signal types. To demonstrate an application for use by a clinician, the WEBS system was used to record a 12-lead electrocardiogram with augmented mechanical movement information.
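    The use of the on-board accelerometer to detect motion artefacts in the bioelectrical signal can be sketched as a simple per-sample check; the threshold and pairing logic below are illustrative assumptions, not the WEBS firmware's actual algorithm:

    ```python
    def flag_motion_artifacts(biopotential, accel_magnitude, accel_threshold=1.2):
        """Pair each biopotential sample with a co-located accelerometer
        reading and flag it as a suspected motion artefact whenever the
        acceleration magnitude (in g) exceeds a threshold (illustrative)."""
        return [
            (sample, accel > accel_threshold)
            for sample, accel in zip(biopotential, accel_magnitude)
        ]
    ```

    Downstream processing can then exclude or de-weight the flagged samples instead of discarding whole recordings, which is what makes co-locating the accelerometer with the electrode useful.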