
    An inclusive survey of contactless wireless sensing: a technology used for remotely monitoring vital signs has the potential to combat COVID-19

    With the Coronavirus pandemic showing no signs of abating, companies and governments around the world are spending millions of dollars to develop contactless sensor technologies that minimize the need for physical interaction between patients and healthcare providers. As a result, healthcare research is rapidly progressing towards innovative contactless technologies, especially for infants and elderly people suffering from chronic diseases that require continuous, real-time control and monitoring. The fusion of sensing technology and wireless communication has emerged as a strong research direction because patients find wearable sensor devices undesirable, as they cause anxiety and discomfort. Furthermore, physical contact exacerbates the spread of contagious diseases, which may lead to catastrophic consequences. For this reason, research has moved towards sensor-less or contactless technology, in which wireless signals are transmitted and the reflected signals are analyzed and processed using techniques such as frequency modulated continuous wave (FMCW) radar or channel state information (CSI). In this way, a subject’s vital signs can be monitored and measured remotely, without physical contact and without asking them to wear sensor devices. In this paper, we overview and explore state-of-the-art research in the field of contactless sensor technology in medicine, where we explain, summarize, and classify a plethora of contactless sensor technologies and techniques with the highest impact on contactless healthcare. Moreover, we overview the enabling hardware technologies and discuss the main challenges faced by these systems. This work is funded by the Scientific and Technological Research Council of Turkey (TÜBITAK) under grant 119E39
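
As context for the approach sketched in this abstract, below is a minimal, illustrative Python example of how a vital sign might be recovered from a reflected-signal time series (e.g., the amplitude of one CSI subcarrier or the phase of one FMCW range bin). The sampling rate, band limits and function name are assumptions for illustration, not details taken from the survey.

```python
# Illustrative sketch only: estimating respiratory rate from a contactless
# sensing signal such as one CSI subcarrier amplitude or one FMCW range-bin
# phase, sampled at a known rate. All parameters here are assumed values.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def respiratory_rate_bpm(signal, fs):
    """Estimate breaths per minute from a 1-D reflected-signal time series."""
    # Keep only the typical respiration band (~0.1-0.5 Hz, i.e. 6-30 breaths/min).
    sos = butter(4, [0.1, 0.5], btype="band", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, signal - np.mean(signal))
    # Take the dominant spectral peak inside that band as the breathing frequency.
    freqs = np.fft.rfftfreq(len(filtered), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(filtered))
    band = (freqs >= 0.1) & (freqs <= 0.5)
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

# Synthetic check: a 0.25 Hz (15 breaths/min) chest movement plus noise.
fs = 20.0                      # assumed sampling rate of the CSI/FMCW stream
t = np.arange(0, 60, 1.0 / fs)
signal = np.sin(2 * np.pi * 0.25 * t) + 0.3 * np.random.randn(len(t))
print(f"Estimated rate: {respiratory_rate_bpm(signal, fs):.1f} breaths/min")
```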

    Extracting Cardiac Information From Medical Radar Using Locally Projective Adaptive Signal Separation

    Electrocardiography is the gold standard for measuring the electrical activity of the heart, but offers no direct measurement of mechanical activity. Mechanical cardiac activity can be assessed non-invasively using, e.g., ballistocardiography; recently, medical radar has emerged as a contactless alternative modality. However, all modalities for measuring mechanical cardiac activity are affected by respiratory movements, requiring a signal separation step before higher-level analysis can be performed. This paper adapts a non-linear filter for separating the respiratory and cardiac signal components of radar recordings. In addition, we present an adaptive algorithm for estimating the parameters of the non-linear filter. The novelty of our method lies in combining the non-linear signal separation method with a novel adaptive parameter estimation method designed specifically for it, eliminating the need for manual intervention and resulting in a fully adaptive algorithm. Using the two benchmark applications of (i) cardiac template extraction from radar and (ii) peak timing analysis, we demonstrate that the non-linear filter combined with adaptive parameter estimation delivers superior results compared to linear filtering. The results show that using locally projective adaptive signal separation (LoPASS), we are able to reduce the mean standard deviation of the cardiac template by at least a factor of 2 across all subjects. In addition, using LoPASS, 9 out of 10 subjects show significant (at a confidence level of 2.5%) correlation between the R-T interval and the R-radar interval, while with linear filters this ratio drops to 6 out of 10. Our analysis suggests that the improvement is due to better preservation of the cardiac signal morphology by the non-linear signal separation method. Hence, we expect that the non-linear signal separation method introduced in this paper will mostly benefit analysis methods investigating the cardiac radar signal morphology on a beat-to-beat basis.
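
For context, the sketch below shows the kind of linear band-pass separation that LoPASS is reported to outperform: splitting a radar chest-displacement signal into respiratory and cardiac components with fixed Butterworth filters. The cut-off frequencies and sampling rate are assumed typical values, not the paper's settings, and the non-linear LoPASS method itself is not reproduced here.

```python
# Linear baseline sketch (not LoPASS): separate respiratory and cardiac
# components of a radar displacement signal with fixed band edges. The band
# limits and sampling rate are assumed typical values, not the paper's.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def separate_linear(radar, fs):
    """Split a radar chest-displacement signal into respiratory and cardiac parts."""
    sos_resp = butter(4, 0.5, btype="low", fs=fs, output="sos")          # < 0.5 Hz
    sos_card = butter(4, [0.8, 3.0], btype="band", fs=fs, output="sos")  # 0.8-3 Hz
    respiration = sosfiltfilt(sos_resp, radar)
    cardiac = sosfiltfilt(sos_card, radar)
    return respiration, cardiac

# Synthetic example: large, slow respiration plus a small cardiac component.
fs = 100.0
t = np.arange(0, 30, 1.0 / fs)
radar = 5.0 * np.sin(2 * np.pi * 0.25 * t) + 0.2 * np.sin(2 * np.pi * 1.2 * t)
resp, card = separate_linear(radar, fs)
```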

    Novel Computerised Techniques for Recognition and Analysis of Diabetic Foot Ulcers

    Diabetic Foot Ulcers (DFU) that affect the lower extremities are a major complication of Diabetes Mellitus (DM). It has been estimated that patients with diabetes have a lifetime risk of 15% to 25% of developing DFU, which contribute to up to 85% of lower limb amputations due to failure to recognise and treat DFU properly. Current practice for DFU screening involves manual inspection of the foot by podiatrists, and further medical tests such as vascular and blood tests are used to determine the presence of ischemia and infection in DFU. A comprehensive review of computerized techniques for recognition of DFU was performed to identify the work done so far in this field. During this stage, it became clear that computerized analysis of DFU is a relatively emerging field, which is why related literature and research works are limited. There is also a lack of a standardised public database of DFU and other wound-related pathologies. We received approximately 1500 DFU images through ethical approval with Lancashire Teaching Hospitals. In this work, we standardised both the DFU dataset and expert annotations to perform different computer vision tasks such as classification, segmentation and localization on popular deep learning frameworks. The main focus of this thesis is to develop automatic computer vision methods that can recognise DFU of different stages and grades. Firstly, we used machine learning algorithms to classify DFU patches against normal skin patches of the foot region and to determine possible misclassified cases in both classes. Secondly, we used fully convolutional networks for the segmentation of DFU and surrounding skin in full foot images with high specificity and sensitivity. Finally, we used robust and lightweight deep localisation methods on mobile devices to detect DFU in foot images for remote monitoring. Despite achieving very good performance in the recognition of DFU, these algorithms were not able to detect pre-ulcer conditions and very subtle DFU. Although recognition of DFU by computer vision algorithms is a valuable study, we performed further analysis of DFU in foot images to determine factors that predict the risk of amputation, such as the presence of infection and ischemia in DFU. A complete DFU diagnosis system built on these computer vision algorithms has the potential to deliver a paradigm shift in diabetic foot care among diabetic patients, representing a cost-effective, remote and convenient healthcare solution with more data and expert annotations.
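
As a rough illustration of the patch-classification step described above, the sketch below fine-tunes a small pretrained CNN to separate DFU patches from normal-skin patches. The directory layout, backbone, and hyperparameters are assumptions for illustration and do not reflect the networks or settings used in the thesis.

```python
# Illustrative sketch only: binary classification of DFU vs. normal-skin patches
# with a fine-tuned CNN. The directory layout (patches/train/{dfu,normal}),
# backbone, and hyperparameters are assumptions, not the thesis's configuration.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("patches/train", transform=transform)  # dfu/ and normal/ subfolders
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: DFU, normal skin

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```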

    Implementing decision tree-based algorithms in medical diagnostic decision support systems

    As a branch of healthcare, medical diagnosis can be defined as identifying a disease based on the signs and symptoms of the patient. To this end, the required information is gathered from different sources such as the physical examination, medical history and general information of the patient. The development of smart classification models for medical diagnosis is of great interest among researchers, mainly because machine learning and data mining algorithms are capable of detecting hidden trends between the features of a database. Hence, classifying medical datasets using smart techniques paves the way to designing more efficient medical diagnostic decision support systems. Several databases have been provided in the literature to investigate different aspects of diseases. As an alternative to the available diagnosis tools/methods, this research applies the machine learning algorithms Classification and Regression Tree (CART), Random Forest (RF) and Extremely Randomized Trees, or Extra Trees (ET), to develop classification models that can be implemented in computer-aided diagnosis systems. As a decision tree (DT), CART is fast to create and applies to both quantitative and qualitative data. For classification problems, RF and ET combine a number of weak learners such as CART to develop models for classification tasks. We employed the Wisconsin Breast Cancer Database (WBCD), the Z-Alizadeh Sani dataset for coronary artery disease (CAD), and the databanks gathered in Ghaem Hospital’s dermatology clinic on the response of patients with common and/or plantar warts to cryotherapy and/or immunotherapy. To classify the breast cancer type based on the WBCD, the RF and ET methods were employed. It was found that the developed RF and ET models forecast the WBCD type with 100% accuracy in all cases. To choose the proper treatment approach for warts as well as for CAD diagnosis, the CART methodology was employed. The findings of the error analysis revealed that the proposed CART models attain the highest precision for the applications of interest, unrivalled by models in the literature. The outcome of this study supports the idea that methods like CART, RF and ET not only improve diagnosis precision, but also reduce the time and expense needed to reach a diagnosis. However, since these strategies are highly sensitive to the quality and quantity of the input data, more extensive databases with a greater number of independent parameters might be required for further practical application of the developed models.
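
A minimal scikit-learn sketch of the three tree-based methods named above is given below, using scikit-learn's bundled Wisconsin breast cancer data as a stand-in for the WBCD. Default hyperparameters are used; they are not the settings or results reported in this study.

```python
# Minimal sketch of the three tree-based classifiers discussed above, applied to
# scikit-learn's bundled Wisconsin breast cancer data as a stand-in for the WBCD.
# Hyperparameters are defaults/assumptions, not the settings from this study.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
models = {
    "CART": DecisionTreeClassifier(random_state=0),
    "Random Forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "Extra Trees": ExtraTreesClassifier(n_estimators=200, random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=10)
    print(f"{name}: mean 10-fold accuracy = {scores.mean():.3f}")
```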

    State of the art of audio- and video based solutions for AAL

    Working Group 3: Audio- and Video-based AAL Applications. It is a matter of fact that Europe is facing more and more crucial challenges regarding health and social care due to demographic change and the current economic context. The recent COVID-19 pandemic has stressed this situation even further, thus highlighting the need for taking action. Active and Assisted Living (AAL) technologies come as a viable approach to help face these challenges, thanks to their high potential in enabling remote care and support. Broadly speaking, AAL can be referred to as the use of innovative and advanced Information and Communication Technologies to create supportive, inclusive and empowering applications and environments that enable older, impaired or frail people to live independently and stay active longer in society. AAL capitalizes on the growing pervasiveness and effectiveness of sensing and computing facilities to supply the persons in need with smart assistance, by responding to their necessities of autonomy, independence, comfort, security and safety. The application scenarios addressed by AAL are complex, due to the inherent heterogeneity of the end-user population, their living arrangements, and their physical conditions or impairments. Despite aiming at diverse goals, AAL systems should share some common characteristics. They are designed to provide support in daily life in an invisible, unobtrusive and user-friendly manner. Moreover, they are conceived to be intelligent, to be able to learn and adapt to the requirements and requests of the assisted people, and to synchronise with their specific needs. Nevertheless, to ensure the uptake of AAL in society, potential users must be willing to use AAL applications and to integrate them in their daily environments and lives. In this respect, video- and audio-based AAL applications have several advantages in terms of unobtrusiveness and information richness. Indeed, cameras and microphones are far less obtrusive with respect to the hindrance other wearable sensors may cause to one’s activities. In addition, a single camera placed in a room can record most of the activities performed in the room, thus replacing many other non-visual sensors. Currently, video-based applications are effective in recognising and monitoring the activities, the movements, and the overall conditions of the assisted individuals, as well as in assessing their vital parameters (e.g., heart rate, respiratory rate). Similarly, audio sensors have the potential to become one of the most important modalities for interaction with AAL systems, as they have a large sensing range, do not require physical presence at a particular location and are physically intangible. Moreover, relevant information about individuals’ activities and health status can be derived from processing audio signals (e.g., speech recordings). Nevertheless, as the other side of the coin, cameras and microphones are often perceived as the most intrusive technologies from the viewpoint of the privacy of the monitored individuals. This is due to the richness of the information these technologies convey and the intimate settings where they may be deployed. Solutions able to ensure privacy preservation by context and by design, as well as to ensure high legal and ethical standards, are in high demand. After the review of the current state of play and the discussion in GoodBrother, we may claim that the first solutions in this direction are starting to appear in the literature.
A multidisciplinary debate among experts and stakeholders is paving the way towards AAL ensuring ergonomics, usability, acceptance and privacy preservation. The DIANA, PAAL, and VisuAAL projects are examples of this fresh approach. This report provides the reader with a review of the most recent advances in audio- and video-based monitoring technologies for AAL. It has been drafted as a collective effort of WG3 to supply an introduction to AAL, its evolution over time and its main functional and technological underpinnings. In this respect, the report contributes to the field with the outline of a new generation of ethical-aware AAL technologies and a proposal for a novel comprehensive taxonomy of AAL systems and applications. Moreover, the report allows non-technical readers to gather an overview of the main components of an AAL system and how these function and interact with the end-users. The report illustrates the state of the art of the most successful AAL applications and functions based on audio and video data, namely (i) lifelogging and self-monitoring, (ii) remote monitoring of vital signs, (iii) emotional state recognition, (iv) food intake monitoring, activity and behaviour recognition, (v) activity and personal assistance, (vi) gesture recognition, (vii) fall detection and prevention, (viii) mobility assessment and frailty recognition, and (ix) cognitive and motor rehabilitation. For these application scenarios, the report illustrates the state of play in terms of scientific advances, available products and research projects. The open challenges are also highlighted. The report ends with an overview of the challenges, the hindrances and the opportunities posed by the uptake of AAL technologies in real-world settings. In this respect, the report illustrates the current procedural and technological approaches to cope with acceptability, usability and trust in AAL technology, by surveying strategies and approaches to co-design, to privacy preservation in video and audio data, to transparency and explainability in data processing, and to data transmission and communication. User acceptance and ethical considerations are also debated. Finally, the potential coming from the silver economy is overviewed.
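
As a rough illustration of the camera-based vital-sign monitoring mentioned in the report, the sketch below outlines a very basic remote photoplethysmography (rPPG) pipeline: average the green channel over an assumed, fixed face region, band-pass to the cardiac band, and take the dominant frequency. It is not the method of any specific project cited here; real systems add face tracking and far more robust signal processing.

```python
# Rough sketch of camera-based heart-rate estimation (remote PPG). The fixed
# face region, band limits and file name are assumptions for illustration only.
import cv2
import numpy as np
from scipy.signal import butter, sosfiltfilt

def estimate_heart_rate(video_path, roi=(100, 100, 200, 200)):
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    x, y, w, h = roi  # assumed fixed face region
    trace = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        trace.append(frame[y:y + h, x:x + w, 1].mean())  # green-channel mean
    cap.release()
    trace = np.asarray(trace) - np.mean(trace)
    sos = butter(3, [0.7, 3.0], btype="band", fs=fps, output="sos")  # ~42-180 bpm
    filtered = sosfiltfilt(sos, trace)
    freqs = np.fft.rfftfreq(len(filtered), d=1.0 / fps)
    spectrum = np.abs(np.fft.rfft(filtered))
    band = (freqs >= 0.7) & (freqs <= 3.0)
    return 60.0 * freqs[band][np.argmax(spectrum[band])]  # beats per minute
```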

    Intelligent Sensors for Human Motion Analysis

    The book, "Intelligent Sensors for Human Motion Analysis," contains 17 articles published in the Special Issue of the Sensors journal. These articles deal with many aspects related to the analysis of human movement. New techniques and methods for pose estimation, gait recognition, and fall detection have been proposed and verified. Some of them will trigger further research, and some may become the backbone of commercial systems

    A Low-cost Depth Imaging Mobile Platform for Canola Phenotyping

    To meet the high demand for supporting and accelerating progress in the breeding of novel traits, plant scientists and breeders have to measure a large number of plants and their characteristics accurately. A variety of imaging methodologies are being deployed to acquire data for quantitative studies of complex traits. When applied to a large number of plants such as canola, however, building a complete three-dimensional (3D) model is time-consuming and expensive for high-throughput phenotyping and generates an enormous amount of data. In some contexts, a full rebuild of entire plants may not be necessary. In recent years, many 3D plant phenotyping techniques requiring high cost and large-scale facilities have been introduced to extract plant phenotypic traits, but such applications may be constrained by limited research budgets and varying environments. This thesis proposes a low-cost, depth-based, high-throughput phenotyping mobile platform to measure canola plant traits across environments. Methods included detecting and counting canola branches and seedpods, monitoring canola growth stages, and fusing color images to improve image resolution and achieve higher accuracy. Canola plant traits were examined in both controlled-environment and field scenarios. These methodologies were enhanced by different imaging techniques. Results revealed that this phenotyping mobile platform can be used to investigate canola plant traits across environments with high accuracy. The results also show that the algorithms for counting canola branches and seedpods enable crop researchers to analyze the relationship between canola genotypes and phenotypes and to estimate crop yields. In addition to the counting algorithms, the fusion techniques can help plant breeders access plant characteristics more conveniently by improving the definition and resolution of color images. These findings add value to automated, low-cost, depth-based and high-throughput phenotyping for canola plants. They also contribute a novel multi-focus image fusion method, based on visual saliency maps and a gradient-domain fast guided filter, that performs competitively with and outperforms some other state-of-the-art methods. The proposed platform and counting algorithms can be applied not only to canola plants but also to other closely related species. The proposed fusion technique can be extended to other fields, such as remote sensing and medical image fusion.
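
To illustrate the general idea of multi-focus image fusion discussed above, the sketch below fuses two images by picking, per pixel, the locally sharper source and smoothing the decision map. This is a simplified stand-in, not the thesis's visual-saliency and guided-filter method; window sizes and file names are arbitrary assumptions.

```python
# Simplified illustration of multi-focus image fusion: per pixel, take the source
# image that is locally sharper (higher Laplacian energy) and soften the decision
# map. This is a basic stand-in, not the thesis's saliency / guided-filter method.
import cv2
import numpy as np

def fuse_two_focus(img_a, img_b, win=9):
    """Fuse two color images of the same scene focused at different depths."""
    def focus_measure(img):
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).astype(np.float32)
        lap = cv2.Laplacian(gray, cv2.CV_32F)
        return cv2.boxFilter(lap * lap, -1, (win, win))  # local Laplacian energy
    mask = (focus_measure(img_a) > focus_measure(img_b)).astype(np.float32)
    mask = cv2.GaussianBlur(mask, (21, 21), 0)[..., None]  # soften decision map
    fused = mask * img_a.astype(np.float32) + (1.0 - mask) * img_b.astype(np.float32)
    return np.clip(fused, 0, 255).astype(np.uint8)

# Usage (file names are placeholders):
# a = cv2.imread("near_focus.png"); b = cv2.imread("far_focus.png")
# cv2.imwrite("fused.png", fuse_two_focus(a, b))
```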

    Deep Learning in Medical Image Analysis

    The accelerating power of deep learning in diagnosing diseases will empower physicians and speed up decision making in clinical environments. Applications of modern medical instruments and digitalization of medical care have generated enormous amounts of medical images in recent years. In this big data arena, new deep learning methods and computational models for efficient data processing, analysis, and modeling of the generated data are crucially important for clinical applications and understanding the underlying biological process. This book presents and highlights novel algorithms, architectures, techniques, and applications of deep learning for medical image analysis

    How can the diagnostic accuracy of lower limb cellulitis be improved?

    Introduction: Lower limb cellulitis (cellulitis) affects 1 in 40 people annually in the United Kingdom. However, misdiagnosis is common: approximately a third of those presenting with a red leg and initially managed as cellulitis turn out to have other diagnoses. Incorrect diagnoses lead to inappropriate hospital admissions and antibiotic prescribing. Improving the diagnostic accuracy of cellulitis is therefore imperative, and a key research priority from the James Lind Alliance cellulitis priority setting partnership. Aim: The main aim of this thesis was to explore how the diagnosis of cellulitis can be improved. Methods: A scoping review and interview studies with health care professionals and people with cellulitis were undertaken to help identify the key challenges in diagnosing cellulitis. A systematic review was performed to identify diagnostic tools developed for cellulitis. The interview study with health care professionals also identified key clinical features for future diagnostic tools. Results: The key challenges in diagnosing cellulitis centred on three themes: 1) clinical presentation (subthemes: vague early symptoms, overlapping core features, unclear typical features in certain groups); 2) clinical reasoning (subthemes: specific diagnostic tests, subjectivity, strategic decision making); and 3) learning and education. The systematic review identified six different diagnostic tools from eleven studies: a biochemical marker, a diagnostic criterion, a diagnostic decision support system, a diagnostic predictive model, thermal imaging and light imaging. All studies were considered to have a high risk of bias in at least one domain. Health care professionals identified key clinical features for a cellulitis diagnosis, which could be considered for inclusion in future diagnostic tools. Conclusion: Despite a third of suspected cellulitis presentations being misdiagnosed, the solutions available to improve the diagnostic accuracy of cellulitis remain limited. This thesis has highlighted the challenges in diagnosing cellulitis and has identified emerging diagnostic tools warranting further investigation.