18 research outputs found

    Machine learning strategies for diagnostic imaging support on histopathology and optical coherence tomography

    Full text link
    Thesis by compendium. This thesis presents cutting-edge solutions based on computer vision (CV) and machine learning (ML) algorithms to assist experts in clinical diagnosis. It focuses on two relevant areas at the forefront of medical imaging: digital pathology and ophthalmology. This work proposes different machine learning and deep learning paradigms to address various supervisory scenarios in the study of prostate cancer, bladder cancer and glaucoma. In particular, conventional supervised methods are considered for segmenting and classifying prostate-specific structures in digitised histological images. For bladder-specific pattern recognition, fully unsupervised approaches based on deep-clustering techniques are carried out.
    Regarding glaucoma detection, long short-term memory (LSTM) networks are applied to perform recurrent learning from spectral-domain optical coherence tomography (SD-OCT) volumes. Finally, the use of prototypical neural networks (PNNs) in a few-shot learning framework is proposed to determine the severity level of glaucoma from circumpapillary OCT images. The artificial intelligence (AI) methods detailed in this thesis provide a valuable tool to aid diagnostic imaging, whether for the histological diagnosis of prostate and bladder cancer or glaucoma assessment from OCT data. García Pardo, JG. (2022). Machine learning strategies for diagnostic imaging support on histopathology and optical coherence tomography [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/182400
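The few-shot severity grading above relies on prototypical networks, which classify a query by its distance to per-class mean embeddings. A minimal sketch of that nearest-prototype step, using generic feature vectors in place of the thesis's OCT embeddings (all names and data here are illustrative, not from the thesis):

```python
import numpy as np

def prototypes(embeddings, labels):
    """Compute one prototype per class: the mean of that class's support embeddings."""
    classes = np.unique(labels)
    return classes, np.stack([embeddings[labels == c].mean(axis=0) for c in classes])

def classify(queries, protos):
    """Assign each query to the index of its nearest prototype (squared Euclidean)."""
    d = ((queries[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return d.argmin(axis=1)
```

With only a handful of labelled volumes per severity level, the prototypes act as the entire "classifier", which is what makes the approach suitable for few-shot settings.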

    Information Extraction from Messy Data, Noisy Spectra, Incomplete Data, and Unlabeled Images

    Get PDF
    Data collected from real-world scenarios are rarely ideal: data errors are inevitable and occur in creative and unexpected ways, and there is always a gap between ideal theory and real-world application. Although data science has produced ever more elegant algorithms validated by rigorous proof, data scientists still spend 50% to 80% of their working time cleaning and organizing data, leaving little time for actual analysis. This dissertation addresses three scenarios of statistical modeling with common data issues: quantifying treatment effects on noisy functional data, multistage decision-making over incomplete data, and unsupervised segmentation of imperfect engineering images; three methodologies are proposed to solve them efficiently. In Chapter 2, a general two-step procedure is proposed to quantify the effects of a treatment on spectral signals subject to multiple uncertainties, for an engineering application involving materials treatment for aircraft maintenance. With this procedure, two types of uncertainty in the spectral signals, offset shift and multiplicative error, are carefully addressed. In the two-step procedure, an optimization problem is first formulated to estimate a representative template spectrum; a second optimization problem is then formulated to obtain the pattern of modification g, which reveals how the treatment affects the shape of the spectral signal, together with a vector δ describing the degree of change caused by different treatment magnitudes. The effectiveness of the proposed method is validated in a simulation study. Further, in a real case study, the proposed method is used to investigate the effect of plasma exposure on FTIR spectra.
    As a result, the proposed method effectively identifies the pattern of modification under uncertainties in the manufacturing environment, which matches knowledge of the chemical components affected by the plasma treatment, and the recovered magnitude of modification provides guidance for selecting the control parameter of the plasma treatment. In Chapter 3, an active-learning-based multistage sequential decision-making model is proposed to help doctors and patients make cost-effective treatment recommendations when some clinical data are more expensive or time-consuming to collect than other laboratory data. The main idea is to formulate the incomplete clinical data as a multistage decision-making model in which doctors make diagnostic decisions sequentially across stages and actively collect the necessary examination data only from certain patients rather than from all. There are two novelties in estimating the parameters of the proposed model. First, unlike existing ordinal logistic regression models, which describe only a single stage, a multistage model is built by maximizing the joint likelihood over all samples in all stages. Second, because the data in different stages are nested cumulatively, the coefficients of features shared across stages are assumed invariant. Compared with the baseline approach of modeling each stage individually and independently, the proposed multistage model with the common-coefficients assumption has significant advantages: it greatly reduces the number of variables to estimate, improves computational efficiency, and is intuitive for doctors, since newly added features do not change the weights of existing ones. In a simulation study, the relative efficiency of the proposed method with respect to the baseline approach ranges from 162% to 1,938%, demonstrating its efficiency and effectiveness.
    Then, in a real case study, the proposed method estimates all parameters efficiently and reasonably. In Chapter 4, a simple yet effective unsupervised image segmentation method, called the RG-filter, is proposed to segment engineering images with no significant contrast between foreground and background, for a material testing application. Facing limited data size, imperfect data quality, and the absence of binary ground-truth labels, we developed the RG-filter, which thresholds each pixel according to the relative magnitude of the R and G channels of the RGB image. To compare existing image segmentation methods and the proposed algorithm on our CFRP image data, we conducted a series of experiments on an example specimen. Comparing all pixel-labeling results, the proposed RG-filter outperforms the others and is the most recommended; in addition, it is intuitive and computationally efficient. The proposed RG-filter can help analyze the distribution and proportion of failure modes on the surface of composite material after destructive DCB testing. The results can help engineers better understand the weak links in the bonding of composite materials, which may provide guidance on improving the joining of structures during aircraft maintenance. The segmentation output can also be crucial data when modeled together with downstream data as a whole; and if it can be predicted from other variables, destructive DCB testing can be avoided, saving substantial time and money. In Chapter 5, we conclude the dissertation and summarize its original contributions.
    In addition, future research topics associated with the dissertation are discussed. In summary, the dissertation contributes to the area of System Informatics and Control (SIAC) by developing systematic methodologies based on messy real-world data in the fields of composite materials and healthcare. The fundamental methodologies developed in this thesis have the potential to be applied to other advanced manufacturing systems.
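The RG-filter described in Chapter 4 reduces to a per-pixel comparison of the red and green channels. A minimal sketch, assuming a strict R > G rule (the abstract does not specify the exact comparison or any margin, so that detail is an assumption):

```python
import numpy as np

def rg_filter(rgb):
    """Label a pixel as foreground when its R channel exceeds its G channel.

    rgb: H x W x 3 uint8 array; returns an H x W boolean mask.
    The strict '>' comparison (no margin) is an illustrative assumption.
    """
    r = rgb[..., 0].astype(np.int32)  # widen before comparing to avoid uint8 pitfalls
    g = rgb[..., 1].astype(np.int32)
    return r > g
```

Because the rule needs no training data, it is consistent with the "absence of binary ground-truth labels" constraint described above, and it runs in a single vectorized pass over the image.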

    The application of b-mode ultrasonography for analysis of human skeletal muscle

    Get PDF
    Skeletal muscles control the joints of the skeletal system and they allow human movement and interaction with the environment. They are vital for stability in balance, walking and running, and many other skilled motor tasks. To understand how muscles operate in general and specific situations there are a variety of tools at the disposal of research scientists and clinicians for analysing muscle function. Strain gauges, for example, allow the quantification of forces exerted during joint rotation. However, skeletal muscles are multilayer systems and often different muscles are responsible for the overall force generated during joint rotation. Therefore, strain gauges do not reveal the extent of the contribution of individual muscles during muscle function. The most widely used and accepted muscle analysis tool is electromyography (EMG), which can measure the activation level of individual muscles by measuring the electrical potential propagating through muscle resulting from local activations of motor units. However, EMG does not relate linearly to any real physical forces, meaning that without prior knowledge of the force exertion at the level of the muscle, force cannot be estimated. EMG can measure superficial layers of muscle non-invasively by attaching surface electrodes (surface EMG) to the skin over the belly of the muscle. To measure the activity of individual muscles beneath the superficial layer, a needle or thin-wire electrode must be inserted through the skin and into the muscle volume (intramuscular EMG), which is invasive and not practical in many situations. Furthermore, intramuscular EMG can only provide measurement of a very small volume (<1 mm³) which can contain varying numbers of active motor units. Ultrasonography is a powerful, cost-effective, non-invasive imaging technology which allows real-time observation of cross-sections of multiple layers of dynamic skeletal muscle.
    Recent advances in automated skeletal muscle ultrasound analysis and in image processing make ultrasound a valuable line of investigation for the analysis of dynamic skeletal muscle. The aim of this thesis is to study and develop advanced image analysis techniques applicable to the analysis of dynamic skeletal muscle. The broader aim is to understand the capacity and limits of ultrasound as a skeletal muscle analysis tool. The ideas presented within offer new approaches to modelling complex muscle architecture and function via ultrasound. Tools have also been developed here that will contribute to, and promote, ultrasound skeletal muscle analysis as a new and emerging technology which may be used by clinicians and research scientists to develop our understanding of skeletal muscle function. The main findings of this thesis are that automated segmentation of architecturally simple and complex skeletal muscle groups is possible and accurate, and that information about joint angles and muscle activity/force can be automatically extracted directly from ultrasound images without explicit knowledge of how to extract it. The techniques used offer new possibilities for non-invasive information extraction from complex muscle groups such as the muscles of the human posterior neck.

    Collaborative design and feasibility assessment of computational nutrient sensing for simulated food-intake tracking in a healthcare environment

    Get PDF
    One in four older adults (65 years and over) is living with some form of malnutrition. This increases their odds of hospitalization four-fold and is associated with decreased quality of life and increased mortality. In long-term care (LTC), residents have more complex care needs and the proportion affected is a staggering 54%, primarily due to low intake. Tracking intake is important for monitoring whether residents are meeting their nutritional needs; however, current methods are time-consuming, subjective, and prone to large margins of error. This reduces the utility of tracked data and makes it challenging to identify individuals at risk in a timely fashion. While technologies exist for tracking food-intake, they have not been designed for use within the LTC context and place a large time burden on the user. Especially in light of the machine learning boom, there is great opportunity to harness learnings from this domain and apply them to the field of nutrition for enhanced food-intake tracking. Additionally, current approaches to food-intake tracking are limited by the nutritional database to which they are linked, making generalizability a challenge. Drawing inspiration from current methods, the desires of end-users (primary users: personal support workers, registered staff, dietitians), and machine learning approaches suitable for this context in which there is limited data available, we investigated novel methods for assessing needs in this environment and imagine an alternative approach. We leveraged image processing and machine learning to remove subjectivity while increasing accuracy and precision to support higher-quality food-intake tracking. This thesis presents the ideation, collaborative design, development, evaluation, and feasibility assessment of computational nutrient sensing for simulated food-intake tracking in the LTC environment.
We sought to remove potential barriers to uptake through collaborative design and ongoing end user engagement for developing solution concepts for a novel Automated Food Imaging and Nutrient Intake Tracking (AFINI-T) system while implementing the technology in parallel. More specifically, we demonstrated the effectiveness of applying a modified participatory iterative design process modeled from the Google Sprint framework in the LTC context which identified priority areas and established functional criteria for usability and feasibility. Concurrently, we developed the novel AFINI-T system through the co-integration of image processing and machine learning and guided by the application of food-intake tracking in LTC to address three questions: (1) where is there food? (i.e., food segmentation), (2) how much food was consumed? (i.e., volume estimation) using a fully automatic imaging system for quantifying food-intake. We proposed a novel deep convolutional encoder-decoder food network with depth-refinement (EDFN-D) using an RGB-D camera for quantifying a plate’s remaining food volume relative to reference portions in whole and modified texture foods. To determine (3) what foods are present (i.e., feature extraction and classification), we developed a convolutional autoencoder to learn meaningful food-specific features and developed classifiers which leverage a priori information about when certain foods would be offered and the level of texture modification prescribed to apply real-world constraints of LTC. We sought to address real-world complexity by assessing a wide variety of food items through the construction of a simulated food-intake dataset emulating various degrees of food-intake and modified textures (regular, minced, puréed). To ensure feasibility-related barriers to uptake were mitigated, we employed a feasibility assessment using the collaboratively designed prototype. 
    Finally, this thesis explores the feasibility of applying biophotonic principles to food as a first step towards enhancing food-database estimates. Motivated by a theoretical optical dilution model, a novel deep neural network (DNN) was evaluated for estimating the relative nutrient density of commercially prepared purées. For deeper analysis, we describe the link between color and two optically active nutrients, vitamin A and anthocyanins, and suggest it may be feasible to use the optical properties of foods to enhance nutritional estimation. This research demonstrates a transdisciplinary approach to designing and implementing a novel food-intake tracking system which addresses several shortcomings of the current method. Upon translation, this system may provide additional insights for supporting more timely nutritional interventions through enhanced monitoring of nutritional intake status among LTC residents.
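The AFINI-T system described above quantifies intake by comparing a plate's remaining food volume against a reference portion. A minimal sketch of that downstream bookkeeping step, converting a measured remaining volume into an intake fraction and a nutrient estimate; the function names, millilitre units, and clamping behaviour are illustrative assumptions, not details from the thesis:

```python
def intake_fraction(remaining_ml, reference_ml):
    """Fraction of the reference portion consumed, clamped to [0, 1]."""
    if reference_ml <= 0:
        raise ValueError("reference portion must be positive")
    return max(0.0, min(1.0, 1.0 - remaining_ml / reference_ml))

def nutrient_intake(remaining_ml, reference_ml, nutrient_per_portion):
    """Scale a per-portion nutrient value by the consumed fraction."""
    return intake_fraction(remaining_ml, reference_ml) * nutrient_per_portion
```

The clamp guards against segmentation or depth noise producing a "remaining" volume slightly above the reference, which would otherwise yield a negative intake.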

    State of the art of audio- and video based solutions for AAL

    Get PDF
    Working Group 3: Audio- and Video-based AAL Applications
    It is a matter of fact that Europe is facing more and more crucial challenges regarding health and social care due to the demographic change and the current economic context. The recent COVID-19 pandemic has stressed this situation even further, thus highlighting the need for taking action. Active and Assisted Living (AAL) technologies come as a viable approach to help facing these challenges, thanks to the high potential they have in enabling remote care and support. Broadly speaking, AAL can be referred to as the use of innovative and advanced Information and Communication Technologies to create supportive, inclusive and empowering applications and environments that enable older, impaired or frail people to live independently and stay active longer in society. AAL capitalizes on the growing pervasiveness and effectiveness of sensing and computing facilities to supply the persons in need with smart assistance, by responding to their necessities of autonomy, independence, comfort, security and safety. The application scenarios addressed by AAL are complex, due to the inherent heterogeneity of the end-user population, their living arrangements, and their physical conditions or impairment. Despite aiming at diverse goals, AAL systems should share some common characteristics. They are designed to provide support in daily life in an invisible, unobtrusive and user-friendly manner. Moreover, they are conceived to be intelligent, to be able to learn and adapt to the requirements and requests of the assisted people, and to synchronise with their specific needs. Nevertheless, to ensure the uptake of AAL in society, potential users must be willing to use AAL applications and to integrate them in their daily environments and lives. In this respect, video- and audio-based AAL applications have several advantages, in terms of unobtrusiveness and information richness.
    Indeed, cameras and microphones are far less obtrusive with respect to the hindrance other wearable sensors may cause to one’s activities. In addition, a single camera placed in a room can record most of the activities performed in the room, thus replacing many other non-visual sensors. Currently, video-based applications are effective in recognising and monitoring the activities, the movements, and the overall conditions of the assisted individuals as well as to assess their vital parameters (e.g., heart rate, respiratory rate). Similarly, audio sensors have the potential to become one of the most important modalities for interaction with AAL systems, as they can have a large range of sensing, do not require physical presence at a particular location and are physically intangible. Moreover, relevant information about individuals’ activities and health status can derive from processing audio signals (e.g., speech recordings). Nevertheless, as the other side of the coin, cameras and microphones are often perceived as the most intrusive technologies from the viewpoint of the privacy of the monitored individuals. This is due to the richness of the information these technologies convey and the intimate setting where they may be deployed. Solutions able to ensure privacy preservation by context and by design, as well as to ensure high legal and ethical standards are in high demand. After the review of the current state of play and the discussion in GoodBrother, we may claim that the first solutions in this direction are starting to appear in the literature. A multidisciplinary debate among experts and stakeholders is paving the way towards AAL ensuring ergonomics, usability, acceptance and privacy preservation. The DIANA, PAAL, and VisuAAL projects are examples of this fresh approach. This report provides the reader with a review of the most recent advances in audio- and video-based monitoring technologies for AAL.
    It has been drafted as a collective effort of WG3 to supply an introduction to AAL, its evolution over time and its main functional and technological underpinnings. In this respect, the report contributes to the field with the outline of a new generation of ethical-aware AAL technologies and a proposal for a novel comprehensive taxonomy of AAL systems and applications. Moreover, the report allows non-technical readers to gather an overview of the main components of an AAL system and how these function and interact with the end-users. The report illustrates the state of the art of the most successful AAL applications and functions based on audio and video data, namely (i) lifelogging and self-monitoring, (ii) remote monitoring of vital signs, (iii) emotional state recognition, (iv) food intake monitoring, activity and behaviour recognition, (v) activity and personal assistance, (vi) gesture recognition, (vii) fall detection and prevention, (viii) mobility assessment and frailty recognition, and (ix) cognitive and motor rehabilitation. For these application scenarios, the report illustrates the state of play in terms of scientific advances, available products and research projects. The open challenges are also highlighted. The report ends with an overview of the challenges, the hindrances and the opportunities posed by the uptake in real world settings of AAL technologies. In this respect, the report illustrates the current procedural and technological approaches to cope with acceptability, usability and trust in the AAL technology, by surveying strategies and approaches to co-design, to privacy preservation in video and audio data, to transparency and explainability in data processing, and to data transmission and communication. User acceptance and ethical considerations are also debated. Finally, the potentials coming from the silver economy are overviewed.

    Visual Tracking of Instruments in Minimally Invasive Surgery

    Get PDF
    Reducing access trauma has been a focal point for modern surgery, and tackling the challenges that arise from new operating techniques and instruments is an exciting and open area of research. Lack of awareness and control arising from indirect manipulation and visualization has created a need to augment the surgeon's understanding and perception of how their instruments interact with the patient's anatomy, but current methods of achieving this are inaccurate and difficult to integrate into the surgical workflow. Visual methods have the potential to recover the position and orientation of the instruments directly in the reference frame of the observing camera, without the need to introduce additional hardware to the operating room or perform complex calibration steps. This thesis explores how this problem can be solved by fusing coarse region features with fine-scale point features to recover both the rigid and articulated degrees of freedom of laparoscopic and robotic instruments using only images provided by the surgical camera. Extensive experiments on different image features are used to determine suitable representations for reliable and robust pose estimation. Using this information, a novel framework is presented which estimates 3D pose with a region matching scheme while using frame-to-frame optical flow to account for challenges due to symmetry in the instrument design. The kinematic structure of articulated robotic instruments is also used to track the movement of the head and claspers. The robustness of this method was evaluated on calibrated ex-vivo images and in-vivo sequences, and comparative studies were performed against state-of-the-art kinematic-assisted tracking methods.

    Development of an image processing method for automated, non-invasive and scale-independent monitoring of adherent cell cultures

    Get PDF
    Adherent cell culture is a key experimental method for biological investigations in diverse areas such as developmental biology, drug discovery and biotechnology. Light microscopy-based methods, for example phase contrast microscopy (PCM), are routinely used for visual inspection of adherent cells cultured in transparent polymeric vessels. However, the outcome of such inspections is qualitative and highly subjective. Analytical methods that produce quantitative results can be used but often at the expense of culture integrity or viability. In this work, an imaging-based strategy to adherent cell cultures monitoring was investigated. Automated image processing and analysis of PCM images enabled quantitative measurements of key cell culture characteristics. Two types of segmentation algorithms for the detection of cellular objects on PCM images were evaluated. The first one, based on contrast filters and dynamic programming was quick (<1s per 1280×960 image) and performed well for different cell lines, over a wide range of imaging conditions. The second approach, termed ‘trainable segmentation’, was based on machine learning using a variety of image features such as local structures and symmetries. It accommodated complex segmentation tasks while maintaining low processing times (<5s per 1280×960 image). Based on the output from these segmentation algorithms, imaging-based monitoring of a large palette of cell responses was demonstrated, including proliferation, growth arrest, differentiation, and cell death. This approach is non-invasive and applicable to any transparent culture vessel, including microfabricated culture devices where a lack of suitable analytical methods often limits their applicability. This work was a significant contribution towards the establishment of robust, standardised, and affordable monitoring methods for adherent cell cultures. Finally, automated image processing was combined with computer-controlled cultures in small-scale devices. 
    This provided a first demonstration of how adaptive culture protocols could be established, i.e. culture protocols based on cellular response instead of arbitrary time points.
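The first segmentation approach above detects cellular objects via contrast filters on PCM images, where cells show up as locally high-contrast regions against a flat background. A minimal sketch of such a contrast filter (local standard deviation over a small window) followed by thresholding; the window size and threshold heuristic are illustrative assumptions, and the dynamic-programming stage of the original method is omitted:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def local_contrast(img, k=5):
    """Local standard deviation in a k x k window (k odd); output has img's shape."""
    pad = k // 2
    p = np.pad(np.asarray(img, dtype=np.float64), pad, mode="reflect")
    return sliding_window_view(p, (k, k)).std(axis=(-1, -2))

def segment_by_contrast(img, k=5, thresh=None):
    """Foreground mask: pixels whose local contrast exceeds a threshold."""
    c = local_contrast(img, k)
    if thresh is None:  # simple global heuristic, an assumption for illustration
        thresh = c.mean() + c.std()
    return c > thresh
```

Vectorized filtering like this is what keeps processing times low (the original reports under a second per 1280×960 image for the contrast-based method).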

    State of the Art of Audio- and Video-Based Solutions for AAL

    Get PDF
    It is a matter of fact that Europe is facing more and more crucial challenges regarding health and social care due to the demographic change and the current economic context. The recent COVID-19 pandemic has stressed this situation even further, thus highlighting the need for taking action. Active and Assisted Living technologies come as a viable approach to help facing these challenges, thanks to the high potential they have in enabling remote care and support. Broadly speaking, AAL can be referred to as the use of innovative and advanced Information and Communication Technologies to create supportive, inclusive and empowering applications and environments that enable older, impaired or frail people to live independently and stay active longer in society. AAL capitalizes on the growing pervasiveness and effectiveness of sensing and computing facilities to supply the persons in need with smart assistance, by responding to their necessities of autonomy, independence, comfort, security and safety. The application scenarios addressed by AAL are complex, due to the inherent heterogeneity of the end-user population, their living arrangements, and their physical conditions or impairment. Despite aiming at diverse goals, AAL systems should share some common characteristics. They are designed to provide support in daily life in an invisible, unobtrusive and user-friendly manner. Moreover, they are conceived to be intelligent, to be able to learn and adapt to the requirements and requests of the assisted people, and to synchronise with their specific needs. Nevertheless, to ensure the uptake of AAL in society, potential users must be willing to use AAL applications and to integrate them in their daily environments and lives. In this respect, video- and audio-based AAL applications have several advantages, in terms of unobtrusiveness and information richness. 
Indeed, cameras and microphones are far less obtrusive with respect to the hindrance other wearable sensors may cause to one’s activities. In addition, a single camera placed in a room can record most of the activities performed in the room, thus replacing many other non-visual sensors. Currently, video-based applications are effective in recognising and monitoring the activities, the movements, and the overall conditions of the assisted individuals as well as to assess their vital parameters. Similarly, audio sensors have the potential to become one of the most important modalities for interaction with AAL systems, as they can have a large range of sensing, do not require physical presence at a particular location and are physically intangible. Moreover, relevant information about individuals’ activities and health status can derive from processing audio signals. Nevertheless, as the other side of the coin, cameras and microphones are often perceived as the most intrusive technologies from the viewpoint of the privacy of the monitored individuals. This is due to the richness of the information these technologies convey and the intimate setting where they may be deployed. Solutions able to ensure privacy preservation by context and by design, as well as to ensure high legal and ethical standards are in high demand. After the review of the current state of play and the discussion in GoodBrother, we may claim that the first solutions in this direction are starting to appear in the literature. A multidisciplinary debate among experts and stakeholders is paving the way towards AAL ensuring ergonomics, usability, acceptance and privacy preservation. The DIANA, PAAL, and VisuAAL projects are examples of this fresh approach. This report provides the reader with a review of the most recent advances in audio- and video-based monitoring technologies for AAL. 
It has been drafted as a collective effort of WG3 to supply an introduction to AAL, its evolution over time, and its main functional and technological underpinnings. In this respect, the report contributes to the field with the outline of a new generation of ethics-aware AAL technologies and a proposal for a novel comprehensive taxonomy of AAL systems and applications. Moreover, the report allows non-technical readers to gather an overview of the main components of an AAL system and how these function and interact with the end-users. The report illustrates the state of the art of the most successful AAL applications and functions based on audio and video data, namely lifelogging and self-monitoring, remote monitoring of vital signs, emotional state recognition, food intake monitoring, activity and behaviour recognition, activity and personal assistance, gesture recognition, fall detection and prevention, mobility assessment and frailty recognition, and cognitive and motor rehabilitation. For these application scenarios, the report illustrates the state of play in terms of scientific advances, available products and research projects. The open challenges are also highlighted. The report ends with an overview of the challenges, the hindrances and the opportunities posed by the uptake of AAL technologies in real-world settings. In this respect, the report illustrates the current procedural and technological approaches to cope with acceptability, usability and trust in AAL technology, by surveying strategies and approaches to co-design, to privacy preservation in video and audio data, to transparency and explainability in data processing, and to data transmission and communication. User acceptance and ethical considerations are also debated. Finally, the potential arising from the silver economy is overviewed.
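To give a flavour of one of the video-based functions surveyed above (fall detection), the following is a minimal toy sketch, not taken from the report or any specific AAL product: it flags frames where a large fraction of pixels changes abruptly between consecutive grayscale frames. The function name, thresholds and synthetic data are all illustrative assumptions; real systems rely on far richer models (pose estimation, temporal networks) and on privacy-preserving processing.

```python
import numpy as np

def frame_difference_alarm(frames, change_thresh=0.25, area_thresh=0.30):
    """Toy abrupt-motion cue: raise an alarm when a large fraction of
    pixels changes sharply between consecutive frames.
    `frames` is an iterable of 2-D grayscale arrays with values in [0, 1]."""
    alarms = []
    prev = None
    for i, frame in enumerate(frames):
        if prev is not None:
            changed = np.abs(frame - prev) > change_thresh  # per-pixel change mask
            if changed.mean() > area_thresh:                # large moving area -> cue
                alarms.append(i)
        prev = frame
    return alarms

# Synthetic example: a static scene with one abrupt change at frame 5.
frames = [np.zeros((32, 32)) for _ in range(10)]
frames[5] = np.full((32, 32), 0.9)  # sudden large change
print(frame_difference_alarm(frames))  # → [5, 6]: the change in and out of frame 5
```

A real deployment would smooth such cues over time and combine them with other sensors to avoid false alarms from, e.g., lighting changes.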

    Sensor Signal and Information Processing II

    In the current age of information explosion, newly invented technological sensors and software are tightly integrated with our everyday lives. Many sensor processing algorithms incorporate some form of computational intelligence as part of their core problem-solving framework. These algorithms have the capacity to generalise, discover knowledge for themselves, and learn new information whenever unseen data are captured. The primary aim of sensor processing is to develop techniques to interpret, understand, and act on the information contained in the data. The interest of this book is in developing intelligent signal processing in order to pave the way for smart sensors. This involves the mathematical advancement of nonlinear signal processing theory and its applications, which extend far beyond traditional techniques. It bridges the boundary between theory and application, developing novel theoretically inspired methodologies targeting both longstanding and emergent signal processing applications. The topics range from phishing detection to the integration of terrestrial laser scanning, and from fault diagnosis to bio-inspired filtering. The book will appeal to established practitioners, along with researchers and students in the emerging field of smart sensor processing.
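A classic illustration of a signal-processing algorithm that "learns new information whenever unseen data are captured" is the least-mean-squares (LMS) adaptive filter, which updates its weights online from each incoming sample. The sketch below is a generic textbook example, not drawn from the book itself; the function name, step size and synthetic signals are illustrative assumptions.

```python
import numpy as np

def lms_filter(x, d, n_taps=8, mu=0.01):
    """LMS adaptive FIR filter: at each step, predict the desired signal d[n]
    from the most recent n_taps input samples, then nudge the weights along
    the negative gradient of the instantaneous squared error."""
    w = np.zeros(n_taps)   # filter weights, adapted sample by sample
    y = np.zeros(len(x))   # filter output
    e = np.zeros(len(x))   # error signal (desired minus output)
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1 : n + 1][::-1]  # current input window, newest first
        y[n] = w @ u                         # filter prediction
        e[n] = d[n] - y[n]                   # prediction error
        w += 2 * mu * e[n] * u               # stochastic-gradient weight update
    return y, e, w

# Example: denoise a sinusoid observed in additive white noise.
rng = np.random.default_rng(0)
t = np.arange(2000)
clean = np.sin(2 * np.pi * t / 50)
noisy = clean + 0.5 * rng.standard_normal(len(t))
y, e, w = lms_filter(noisy, clean)
print(float(np.mean(e[-500:] ** 2)))  # steady-state error power, after adaptation
```

The step size `mu` trades convergence speed against steady-state error; after adaptation the residual error power falls well below the raw noise power, which is the sense in which the filter has "learned" the signal structure from the data.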