1,387 research outputs found
Multi-sensor data fusion in mobile devices for the identification of Activities of Daily Living
Following the recent advances in technology and the growing use of mobile devices such as
smartphones, several solutions may be developed to improve the quality of life of users in the
context of Ambient Assisted Living (AAL). Mobile devices have different available sensors, e.g.,
accelerometer, gyroscope, magnetometer, microphone and Global Positioning System (GPS)
receiver, which allow the acquisition of physical and physiological parameters for the
recognition of different Activities of Daily Living (ADL) and the environments in which they are
performed. The definition of ADL covers a well-known set of basic self-care tasks, based on
the types of skills that people usually learn in early childhood, including
feeding, bathing, dressing, grooming, walking, running, jumping, climbing stairs, sleeping,
watching TV, working, listening to music, cooking, eating and others. In the context of AAL,
some individuals (henceforth called users) need particular assistance, either because
they have some sort of impairment, because they are elderly, or simply because they
need or want to monitor their lifestyle. The research and development of systems that provide
such assistance is growing in many areas of application. In particular, in the
future, the recognition of ADL will be an important element for the development of a personal
digital life coach, providing assistance to different types of users. To support the recognition
of ADL, the surrounding environments should also be recognized to increase the reliability of
these systems.
The main focus of this Thesis is the research on methods for the fusion and classification of the
data acquired by the sensors available in off-the-shelf mobile devices in order to recognize ADL
in almost real-time, taking into account the large diversity of the capabilities and
characteristics of the mobile devices available in the market. In order to achieve this objective,
this Thesis started with the review of the existing methods and technologies to define the
architecture and modules of the method for the identification of ADL. With this review and
based on the knowledge acquired about the sensors available in off-the-shelf mobile devices,
a set of tasks that may be reliably identified was defined as a basis for the remaining research
and development to be carried out in this Thesis. This review also identified the main stages
for the development of a new method for the identification of the ADL using the sensors
available in off-the-shelf mobile devices; these stages are data acquisition, data processing,
data cleaning, data imputation, feature extraction, data fusion and artificial intelligence. One
of the challenges is related to the different types of data acquired from the different sensors,
but other challenges were found, including the presence of environmental noise, the positioning
of the mobile device during the daily activities, the limited capabilities of the mobile devices
and others. Based on the acquired data, the processing was performed, implementing data
cleaning and feature extraction methods, in order to define a new framework for the recognition of ADL. The data imputation methods were not applied because, at this stage of
the research, they do not influence the results of the identification
of the ADL and environments: the features are extracted from a set of data acquired during
a defined time interval, and there are no missing values during this stage. The joint selection of
the set of usable sensors and the identifiable set of tasks will then allow the development of a
framework that, considering multi-sensor data fusion technologies and context awareness, in
coordination with other information available from the user context, such as his/her agenda
and the time of day, will make it possible to establish a profile of the tasks that the user performs on
a regular day. The classification method and the algorithm for the fusion of the features
for the recognition of ADL and their environments need to be deployed on a machine with
significant computational power, while the mobile device that uses the created framework can
perform the identification of ADL with much less computational power. Based on the
results reported in the literature, the method chosen for the recognition of ADL comprises
three variants of Artificial Neural Networks (ANN): simple Multilayer Perceptron
(MLP) networks, Feedforward Neural Networks (FNN) with Backpropagation, and Deep Neural
Networks (DNN).
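As a rough illustration of the simplest of these variants, the following numpy sketch trains a one-hidden-layer MLP with plain backpropagation on synthetic "activity" feature clusters. Everything here (feature dimension, class structure, network size, learning rate) is an illustrative assumption, not the thesis's actual data or configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for sensor-derived feature vectors: three
# well-separated "activity" clusters (think walking / running / standing).
n_per_class, n_features, n_classes = 100, 8, 3
X = np.vstack([rng.normal(loc=3.0 * c, scale=1.0, size=(n_per_class, n_features))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)
Y = np.eye(n_classes)[y]                      # one-hot targets

hidden, lr = 16, 0.05
W1 = rng.normal(scale=0.1, size=(n_features, hidden)); b1 = np.zeros(hidden)
W2 = rng.normal(scale=0.1, size=(hidden, n_classes)); b2 = np.zeros(n_classes)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

for _ in range(500):                          # plain full-batch backpropagation
    h = np.tanh(X @ W1 + b1)                  # hidden activations
    p = softmax(h @ W2 + b2)                  # class probabilities
    d2 = (p - Y) / len(X)                     # cross-entropy gradient at output
    dh = (d2 @ W2.T) * (1 - h ** 2)           # backprop through tanh
    W2 -= lr * (h.T @ d2); b2 -= lr * d2.sum(axis=0)
    W1 -= lr * (X.T @ dh); b1 -= lr * dh.sum(axis=0)

pred = softmax(np.tanh(X @ W1 + b1) @ W2 + b2).argmax(axis=1)
accuracy = (pred == y).mean()
```

A DNN variant would stack more hidden layers on the same scheme; an FNN with Backpropagation differs mainly in architecture details, not in the training loop shown here.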
Data acquisition can be performed with standard methods. After the acquisition, the data must
be processed at the data processing stage, which includes data cleaning and feature extraction
methods. The data cleaning method used for the motion and magnetic sensors is a low-pass filter,
applied to reduce the acquired noise; for the acoustic data, the Fast Fourier Transform
(FFT) is applied to extract the different frequencies. Once the data is clean, several features
are then extracted based on the types of sensors used, including the mean, standard deviation,
variance, maximum value, minimum value and median of raw data acquired from the motion
and magnetic sensors; the mean, standard deviation, variance and median of the maximum
peaks calculated with the raw data acquired from the motion and magnetic sensors; the five
greatest distances between the maximum peaks calculated with the raw data acquired from
the motion and magnetic sensors; the mean, standard deviation, variance, median and 26 Mel-
Frequency Cepstral Coefficients (MFCC) of the frequencies obtained with FFT based on the raw
data acquired from the microphone; and the distance travelled calculated with the data
acquired from the GPS receiver. After extraction, the features are grouped into
different datasets for the application of the ANN methods, in order to discover the method and
dataset that report the best results. The classification stage was incrementally developed,
starting with the identification of the most common ADL (i.e., walking, running, going upstairs,
going downstairs and standing activities) with motion and magnetic sensors. Next, the
environments were identified with acoustic data, i.e., bedroom, bar, classroom, gym, kitchen,
living room, hall, street and library. After the environments are recognized, and based on the
different sets of sensors commonly available in the mobile devices, the data acquired from the
motion and magnetic sensors were combined with the recognized environment in order to
differentiate some activities without motion, i.e., sleeping and watching TV. The number of recognized activities at this stage was then increased using the distance travelled,
extracted from the GPS receiver data, which also allows the recognition of the driving activity.
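The feature set described above can be sketched roughly as follows. The peak detector, the omission of the 26 MFCCs, and the helper names are my simplifications and assumptions, not the thesis's exact pipeline.

```python
import numpy as np

def motion_features(window):
    """Statistics over one fixed-length window of raw motion/magnetic samples."""
    feats = {
        "mean": window.mean(), "std": window.std(), "var": window.var(),
        "max": window.max(), "min": window.min(), "median": np.median(window),
    }
    # Local maxima ("maximum peaks") and the largest gaps between them.
    peaks = np.where((window[1:-1] > window[:-2])
                     & (window[1:-1] > window[2:]))[0] + 1
    if len(peaks) >= 2:
        gaps = np.sort(np.diff(peaks))[::-1]
        feats["top5_peak_gaps"] = gaps[:5]
    return feats

def acoustic_features(samples):
    """Statistics of the FFT magnitude spectrum of a microphone window.
    (The thesis additionally extracts 26 MFCCs, omitted here.)"""
    spectrum = np.abs(np.fft.rfft(samples))
    return {"mean": spectrum.mean(), "std": spectrum.std(),
            "var": spectrum.var(), "median": np.median(spectrum)}

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS fixes (distance-travelled feature)."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * np.arcsin(np.sqrt(a))
```

Summing `haversine_km` over consecutive GPS fixes within a window would yield the distance-travelled feature used to separate activities such as driving from stationary ones.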
After the implementation of the three classification methods with different numbers of
iterations, datasets and other configurations on a machine with high processing
capabilities, the reported results proved that the best method for the recognition of the most
common ADL and activities without motion is the DNN method, but the best method for the
recognition of environments is the FNN method with Backpropagation. Depending on the
number of sensors used, this implementation reports a mean accuracy between 85.89% and
89.51% for the recognition of the most common ADL, equal to 86.50% for the recognition of
environments, and equal to 100% for the recognition of activities without motion, reporting
an overall accuracy between 85.89% and 92.00%.
The last stage of this research work was the implementation of the structured framework on
the mobile devices. This verified that the FNN method requires high processing power for the
recognition of environments, and that the results reported with the mobile application are lower
than those obtained with the high-processing-capability machine. Thus, the
DNN method was also implemented for the recognition of the environments with the mobile
devices. Finally, the results reported with the mobile devices show an accuracy between 86.39%
and 89.15% for the recognition of the most common ADL, equal to 45.68% for the recognition
of environments, and equal to 100% for the recognition of activities without motion, reporting
an overall accuracy between 58.02% and 89.15%.
Compared with the literature, the results returned by the implemented framework show only
a residual improvement. However, the results reported in this research work encompass the
identification of more ADL than those described in other studies. The improvement in the
recognition of ADL based on the mean of the accuracies is equal to 2.93%, but the maximum
number of ADL and environments previously recognized was 13, while the number of ADL and
environments recognized with the framework resulting from this research is 16. In conclusion,
the framework developed has a mean improvement of 2.93% in the accuracy of the recognition
for a larger number of ADL and environments than previously reported.
In the future, the achievements reported by this PhD research may be considered a starting
point for the development of a personal digital life coach, but the number of ADL and
environments recognized by the framework should be increased, the experiments should be
performed with different types of devices (i.e., smartphones and smartwatches), and data
imputation and other machine learning methods should be explored in order to attempt to
increase the reliability of the framework for the recognition of ADL and their environments.
Improving the Accuracy of Mobile Touchscreen QWERTY Keyboards
In this thesis we explore alternative keyboard layouts in hopes of finding one that increases the accuracy of text input on mobile touchscreen devices. In particular, we investigate if a single swap of 2 keys can significantly improve accuracy on mobile touchscreen QWERTY keyboards. We do so by carefully considering the placement of keys, exploiting a specific vulnerability that occurs within a keyboard layout, namely, that the placement of particular keys next to others may be increasing errors when typing. We simulate the act of typing on a mobile touchscreen QWERTY keyboard, beginning with modeling the typographical errors that can occur when doing so. We then construct a simple autocorrector using Bayesian methods, describing how we can autocorrect user input and evaluate the ability of the keyboard to output the correct text. Then, using our models, we provide methods of testing and define a metric, the WAR rating, which gives us a way of comparing the accuracy of a keyboard layout. After running our tests on all 325 2-key swap layouts against the original QWERTY layout, we show that there exists more than one 2-key swap that increases the accuracy of the current QWERTY layout, and that the best 2-key swap is i ↔ t, increasing accuracy by nearly 0.18 percent.
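The Bayesian autocorrector and the 325-layout search space described above can be sketched generically: a noisy-channel corrector picks argmax_w P(w) * P(typed | w), and the 325 candidate layouts are simply the C(26, 2) ways to choose two keys to swap. The toy vocabulary, priors, and error likelihood below are illustrative assumptions, not the thesis's models (its error model is keyboard-aware and its metric is the WAR rating).

```python
import itertools

# Toy vocabulary with unigram priors P(w); purely illustrative.
PRIOR = {"the": 0.5, "tie": 0.2, "toe": 0.2, "thy": 0.1}

def edit_distance(a, b):
    # Classic Levenshtein dynamic program.
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1,
                          d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))
    return d[len(a)][len(b)]

def autocorrect(typed, err=0.1):
    """Noisy-channel correction: argmax_w P(w) * P(typed | w), with the
    likelihood crudely approximated as err ** edit_distance(typed, w)."""
    return max(PRIOR, key=lambda w: PRIOR[w] * err ** edit_distance(typed, w))

# The 325 candidate layouts are the C(26, 2) ways to pick two keys to swap.
swaps = list(itertools.combinations("abcdefghijklmnopqrstuvwxyz", 2))
```

Scoring each of the 325 swapped layouts would then amount to simulating typing under that layout's error model and measuring how often autocorrection recovers the intended word.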
Decentralized Control and Adaptation in Distributed Applications via Web and Semantic Web Technologies
The presented work provides an approach and an implementation for enabling decentralized control in distributed applications composed of heterogeneous components by benefiting from the interoperability provided by the Web stack and relying on semantic technologies for enabling data integration. In particular, the concept of Smart Components enables adaptability at runtime through an adaptation layer and is complemented by a reference architecture as well as a prototypical implementation.
Compressed Sensing in Resource-Constrained Environments: From Sensing Mechanism Design to Recovery Algorithms
Compressed Sensing (CS) is an emerging field based on the revelation that a small collection of linear projections of a sparse signal contains enough information for reconstruction. It is promising that CS can be utilized in environments where the signal acquisition process is extremely difficult or costly, e.g., a resource-constrained environment like the smartphone platform, or a band-limited environment like visual sensor network (VSNs). There are several challenges to perform sensing due to the characteristic of these platforms, including, for example, needing active user involvement, computational and storage limitations and lower transmission capabilities. This dissertation focuses on the study of CS in resource-constrained environments.
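The core CS premise stated above, that a small number of linear projections of a sparse signal suffices for reconstruction, can be illustrated with a generic sparse-recovery sketch. The recovery routine below is plain iterative soft-thresholding (ISTA), a textbook method used here as a stand-in; the dissertation's own frameworks (CPS, NLDR, rLSDR) are more elaborate, and all dimensions are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# A length-n signal with only k nonzeros, observed through m << n
# random linear projections y = A x.
n, m, k = 200, 80, 5
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(size=k) + 2.0  # well-scaled nonzeros
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x

# ISTA for the LASSO surrogate: min_z 0.5*||A z - y||^2 + lam*||z||_1
lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2
z = np.zeros(n)
for _ in range(2000):
    g = z - step * (A.T @ (A @ z - y))                      # gradient step
    z = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0)  # shrinkage

recovery_error = np.linalg.norm(z - x) / np.linalg.norm(x)
```

Even with only 80 measurements of a 200-sample signal, the 5-sparse signal is recovered almost exactly, which is the observation the whole field builds on.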
First, we try to solve the problem of how to design sensing mechanisms that could better adapt to the resource-limited smartphone platform. We propose the compressed phone sensing (CPS) framework, in which two challenging issues are studied: the energy drainage due to continuous sensing, which may impede the normal functionality of the smartphone, and the requirement of active user input for data collection, which may place a high burden on the user.
Second, we propose a CS reconstruction algorithm to be used in VSNs for recovery of frames/images. An efficient algorithm, NonLocal Douglas-Rachford (NLDR), is developed. NLDR takes advantage of self-similarity in images using nonlocal means (NL) filtering. We further formulate the nonlocal estimation as the low-rank matrix approximation problem and solve the constrained optimization problem using Douglas-Rachford splitting method.
Third, we extend the NLDR algorithm to surveillance video processing in VSNs and propose the recursive Low-rank and Sparse estimation through Douglas-Rachford splitting (rLSDR) method for recovery of the video frame into a low-rank background component and a sparse component that corresponds to the moving object. The spatial and temporal low-rank features of the video frame, e.g., the nonlocal similar patches within the single video frame and the low-rank background component residing in multiple frames, are successfully exploited.
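The Douglas-Rachford splitting step that both NLDR and rLSDR build on can be shown in its simplest setting: basis pursuit, min ||x||_1 subject to Ax = y. This is far simpler than the nonlocal low-rank formulations above; the problem sizes and the gamma parameter are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Basis pursuit test problem: a k-sparse signal observed through y = A x.
n, m, k = 120, 60, 4
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k) + 2.0
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x_true

pinv = A.T @ np.linalg.inv(A @ A.T)               # for the affine projection
project = lambda v: v - pinv @ (A @ v - y)        # prox of indicator {x : Ax = y}
shrink = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0)  # prox of t*||.||_1

gamma, z = 0.1, np.zeros(n)
for _ in range(1000):
    x = shrink(z, gamma)                          # prox of the l1 term
    z = z + project(2 * x - z) - x                # Douglas-Rachford update

x_hat = project(shrink(z, gamma))                 # feasible final estimate
err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

NLDR and rLSDR apply the same splitting idea, but with nuclear-norm (low-rank) and nonlocal-similarity terms in place of the plain l1 objective shown here.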
Internet of Underwater Things and Big Marine Data Analytics -- A Comprehensive Survey
The Internet of Underwater Things (IoUT) is an emerging communication
ecosystem developed for connecting underwater objects in maritime and
underwater environments. The IoUT technology is intricately linked with
intelligent boats and ships, smart shores and oceans, automatic marine
transportations, positioning and navigation, underwater exploration, disaster
prediction and prevention, as well as with intelligent monitoring and security.
The IoUT has an influence at various scales, ranging from a small scientific
observatory, to a midsized harbor, to global oceanic trade. The
network architecture of IoUT is intrinsically heterogeneous and should be
sufficiently resilient to operate in harsh environments. This creates major
challenges in terms of underwater communications, whilst relying on limited
energy resources. Additionally, the volume, velocity, and variety of data
produced by sensors, hydrophones, and cameras in IoUT is enormous, giving rise
to the concept of Big Marine Data (BMD), which has its own processing
challenges. Hence, conventional data processing techniques will falter, and
bespoke Machine Learning (ML) solutions have to be employed for automatically
learning the specific BMD behavior and features facilitating knowledge
extraction and decision support. The motivation of this paper is to
comprehensively survey the IoUT, BMD, and their synthesis. It also aims to
explore the nexus of BMD with ML. We set out from underwater data collection
and then discuss the family of IoUT data communication techniques with an
emphasis on the state-of-the-art research challenges. We then review the suite
of ML solutions suitable for BMD handling and analytics. We treat the subject
deductively from an educational perspective, critically appraising the material
surveyed.Comment: 54 pages, 11 figures, 19 tables, IEEE Communications Surveys &
Tutorials, peer-reviewed academic journa
Design and Effect of Continuous Wearable Tactile Displays
Our sense of touch is one of our core senses and, while not as information-rich as sight and hearing, it tethers us to reality.
Our skin is the largest sensory organ in our body and we rely on it so much that we don't think about it most of the time.
Tactile displays - with the exception of actuators for notifications on smartphones and smartwatches - are currently understudied and underused.
Currently tactile cues are mostly used in smartphones and smartwatches to notify the user of an incoming call or text message.
Specifically, continuous displays - displays that do not just send one notification but stay active for an extended period of time and continuously communicate information - are rarely studied.
This thesis aims at exploring the utilization of our vibration perception to create continuous tactile displays.
Transmitting a continuous stream of tactile information to a user in a wearable format can help elevate tactile displays from being mostly used for notifications to becoming more like additional senses enabling us to perceive our environment in new ways.
This work provides a serious step forward in the design, effect, and use of continuous tactile displays in human-computer interaction.
The main contributions include:
Exploration of Continuous Wearable Tactile Interfaces
This thesis explores continuous tactile displays in different contexts and with different types of tactile information systems. The use-cases were explored in various domains for tactile displays - Sports, Gaming and Business applications. The different types of continuous tactile displays feature one- or multidimensional tactile patterns, temporal patterns and discrete tactile patterns.
Automatic Generation of Personalized Vibration Patterns
In this thesis, a novel approach to designing vibrotactile patterns without expert knowledge - leveraging evolutionary algorithms to create personalized vibration patterns - is described. This thesis presents the design of an evolutionary algorithm with a human-centered design generating abstract vibration patterns. The evolutionary algorithm was tested in a user study, which offered evidence that interactive generation of abstract vibration patterns is possible and generates diverse sets of vibration patterns that can be recognized with high accuracy.
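The evolutionary loop described above can be sketched minimally as follows. In the thesis the fitness signal comes from interactive user ratings; here a synthetic target pattern stands in so the loop is self-contained, and the pattern encoding, population size, and operators are all illustrative assumptions.

```python
import random

random.seed(0)

# A vibration pattern = 8 pulse intensities in [0, 1].
TARGET = [0.0, 1.0, 0.0, 1.0, 0.5, 0.0, 1.0, 0.0]  # stand-in for user preference

def fitness(pattern):
    # Negative squared distance to the target; higher is better.
    return -sum((a - b) ** 2 for a, b in zip(pattern, TARGET))

def mutate(pattern, rate=0.2):
    # Gaussian perturbation of some pulses, clamped to [0, 1].
    return [min(1.0, max(0.0, a + random.gauss(0, 0.3)))
            if random.random() < rate else a
            for a in pattern]

def crossover(p, q):
    cut = random.randrange(1, len(p))             # one-point crossover
    return p[:cut] + q[cut:]

population = [[random.random() for _ in TARGET] for _ in range(30)]
for _ in range(60):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                     # truncation selection
    population = parents + [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(20)
    ]

best = max(population, key=fitness)
```

In the interactive setting, `fitness` would be replaced by the user's rating of each felt pattern, which is what makes the generated patterns personalized.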
Passive Haptic Learning for Vibration Patterns
Previous studies in passive haptic learning have shown surprisingly strong results for learning Morse Code. If these findings could be confirmed and generalized, it would mean that learning a new tactile alphabet could be made easier and learned in passing. Therefore this claim was investigated in this thesis and needed to be corrected and contextualized. A user study was conducted to study the effects of the interaction design and distraction tasks on the capability to learn stimulus-stimulus-associations with Passive Haptic Learning. This thesis presents evidence that Passive Haptic Learning of vibration patterns induces only a marginal learning effect and is not a feasible and efficient way to learn vibration patterns that include more than two vibrations.
Influence of Reference Frames for Spatial Tactile Stimuli
Designing wearable tactile stimuli that contain spatial information can be a challenge due to the natural body movement of the wearer. An important consideration therefore is what reference frame to use for spatial cues. This thesis investigated allocentric versus egocentric reference frames on the wrist and compared them for induced cognitive load, reaction time and accuracy in a user study. This thesis presents evidence that using an allocentric reference frame drastically lowers cognitive load and slightly lowers reaction time while keeping the same accuracy as an egocentric reference frame, making a strong case for the utilization of allocentric reference frames in tactile bracelets with several tactile actuators
Smart and Pervasive Healthcare
Smart and pervasive healthcare aims at facilitating better healthcare access, provision, and delivery by overcoming spatial and temporal barriers. It represents a shift toward understanding what patients and clinicians really need when placed within a specific context, where traditional face-to-face encounters may not be possible or sufficient. As such, technological innovation is a necessary facilitating conduit. This book is a collection of chapters written by prominent researchers and academics worldwide that provide insights into the design and adoption of new platforms in smart and pervasive healthcare. With the COVID-19 pandemic necessitating changes to the traditional model of healthcare access and its delivery around the world, this book is a timely contribution.
The Impact of Digital Technologies on Public Health in Developed and Developing Countries
This open access book constitutes the refereed proceedings of the 18th International Conference on Smart Homes and Health Telematics, ICOST 2020, held in Hammamet, Tunisia, in June 2020.* The 17 full papers and 23 short papers presented in this volume were carefully reviewed and selected from 49 submissions. They cover topics such as: IoT and AI solutions for e-health; biomedical and health informatics; behavior and activity monitoring; and wellbeing technology. *This conference was held virtually due to the COVID-19 pandemic.
Improving Access and Mental Health for Youth Through Virtual Models of Care
The overall objective of this research is to evaluate the use of a mobile health smartphone application (app) to improve the mental health of youth between the ages of 14–25 years, with symptoms of anxiety/depression. This project includes 115 youth who are accessing outpatient mental health services at one of three hospitals and two community agencies. The youth and care providers are using eHealth technology to enhance care. The technology uses mobile questionnaires to help promote self-assessment and track changes to support the plan of care. The technology also allows secure virtual treatment visits that youth can participate in through mobile devices. This longitudinal study uses participatory action research with mixed methods. The majority of participants identified themselves as Caucasian (66.9%). Expectedly, the demographics revealed that Anxiety Disorders and Mood Disorders were highly prevalent within the sample (71.9% and 67.5% respectively). Findings from the qualitative summary established that both staff and youth found the software and platform beneficial
- …