Health Figures: An Open Source JavaScript Library for Health Data Visualization
The way we look at data has a great impact on how we can understand it,
particularly when the data is related to health and wellness. Due to the
increased use of self-tracking devices and the ongoing shift towards preventive
medicine, a better understanding of our health data is an important part of
improving the general welfare of citizens. Electronic Health Records,
self-tracking devices and mobile applications provide a rich variety of data,
but that data is often difficult to understand. We implemented the hFigures
library, inspired by the hGraph visualization, with additional improvements. The
purpose of the library is to provide a visual representation of the evolution
of health measurements in a complete and useful manner. We researched the
usefulness and usability of the library by building an application for health
data visualization in a health coaching program. We performed a user evaluation
with Heuristic Evaluation, Controlled User Testing and Usability
Questionnaires. In the Heuristic Evaluation the average response was 6.3 out
of 7 points and the Cognitive Walkthrough done by usability experts indicated
no design or mismatch errors. In the CSUQ usability test the system obtained an
average score of 6.13 out of 7, and in the ASQ usability test the overall
satisfaction score was 6.64 out of 7. We developed hFigures, an open source
library for visualizing a complete, accurate and normalized graphical
representation of health data. The idea is based on the concept of the hGraph
but it provides additional key features, including a comparison of multiple
health measurements over time. We conducted a usability evaluation of the
library as a key component of an application for health and wellness
monitoring. The results indicate that the data visualization library was
helpful in assisting users in understanding health data and its evolution over
time.

Comment: BMC Medical Informatics and Decision Making 16.1 (2016)
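The normalization idea borrowed from hGraph can be sketched in a few lines. The scheme below is purely illustrative and is not the hFigures API: each measurement is scaled against its healthy reference range so that heterogeneous metrics (heart rate, cholesterol, and so on) share one visual scale. The function name, scoring formula and reference range are assumptions for demonstration.

```python
# Hypothetical sketch of hGraph-style normalization: map each raw value onto
# a common 0..1 scale relative to its healthy reference range. The formula
# and names below are illustrative, not the hFigures library's actual API.

def normalize_measurement(value, healthy_min, healthy_max):
    """Score is 1.0 at the center of the healthy range and decreases
    as the value drifts toward and beyond the range boundaries."""
    center = (healthy_min + healthy_max) / 2.0
    half_width = (healthy_max - healthy_min) / 2.0
    deviation = abs(value - center) / half_width  # 0 at center, 1 at range edge
    return max(0.0, 1.0 - 0.5 * deviation)        # clamp far outliers to 0

# Example: resting heart rate of 72 bpm against an assumed 60-100 bpm range
score = normalize_measurement(72, 60, 100)
```

Plotting these normalized scores radially, with one angular sector per measurement group, gives the kind of at-a-glance overview the abstract describes; comparing scores from different dates on the same scale yields the evolution-over-time view.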
Emotions in context: examining pervasive affective sensing systems, applications, and analyses
Pervasive sensing has opened up new opportunities for measuring our feelings and understanding our behavior by monitoring our affective states while mobile. This review paper surveys pervasive affect sensing by examining three major elements of affective pervasive systems, namely "sensing", "analysis", and "application". Sensing investigates the different sensing modalities used in existing real-time affective applications, Analysis explores different approaches to emotion recognition and visualization based on different types of collected data, and Application investigates the leading areas of affective applications. For each of the three aspects, the paper includes an extensive survey of the literature and outlines some of the challenges and future research opportunities of affective sensing in the context of pervasive computing.
Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks
Future wireless networks have a substantial potential in terms of supporting
a broad range of complex compelling applications both in military and civilian
fields, where the users are able to enjoy high-rate, low-latency, low-cost and
reliable information services. Achieving this ambitious goal requires new radio
techniques for adaptive learning and intelligent decision making because of the
complex heterogeneous nature of the network structures and wireless services.
Machine learning (ML) algorithms have achieved great success in supporting big data
analytics, efficient parameter estimation and interactive decision making.
Hence, in this article, we review the thirty-year history of ML by elaborating
on supervised learning, unsupervised learning, reinforcement learning and deep
learning. Furthermore, we investigate their employment in the compelling
applications of wireless networks, including heterogeneous networks (HetNets),
cognitive radios (CR), Internet of things (IoT), machine to machine networks
(M2M), and so on. This article aims to help readers understand the
motivation and methodology of the various ML algorithms, so that they can be invoked
for hitherto unexplored services as well as scenarios of future wireless
networks.

Comment: 46 pages, 22 figures
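Of the ML families the survey covers, reinforcement learning maps most directly onto the cognitive-radio setting: an agent repeatedly picks a channel and learns from whether the transmission succeeded. The toy sketch below is not from the survey; the channel availability probabilities, reward model and hyperparameters are invented for illustration, and the single-state problem reduces Q-learning to a multi-armed bandit update.

```python
# Illustrative sketch (not the survey's method): tabular Q-learning for a toy
# cognitive-radio channel-selection task. All numbers below are invented.
import random

random.seed(0)
N_CHANNELS = 3
FREE_PROB = [0.2, 0.9, 0.5]   # assumed probability each channel is free of primary users

Q = [0.0] * N_CHANNELS        # single state, so Q is just one value per action
alpha, epsilon = 0.1, 0.1     # learning rate and exploration rate

for step in range(5000):
    # epsilon-greedy selection: mostly exploit the best-known channel
    if random.random() < epsilon:
        a = random.randrange(N_CHANNELS)
    else:
        a = max(range(N_CHANNELS), key=lambda c: Q[c])
    # reward 1 if the chosen channel turned out to be free, else 0
    r = 1.0 if random.random() < FREE_PROB[a] else 0.0
    Q[a] += alpha * (r - Q[a])  # bandit-style update (no next-state bootstrap)

best = max(range(N_CHANNELS), key=lambda c: Q[c])
```

After training, the learned values approximate each channel's availability, and the agent settles on the channel that is most often free, which is the behavior the survey's CR discussion describes at a high level.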
The Emerging Trends of Multi-Label Learning
Exabytes of data are generated daily by humans, leading to the growing need
for new efforts in dealing with the grand challenges for multi-label learning
brought by big data. For example, extreme multi-label classification is an
active and rapidly growing research area that deals with classification tasks
with an extremely large number of classes or labels; utilizing massive data
with limited supervision to build a multi-label classification model becomes
valuable for practical applications. In addition, tremendous efforts have been
made to harness the strong learning capability of deep learning to better
capture label dependencies in multi-label learning, which is key for deep
learning to address real-world classification tasks. However, there has been a
lack of systematic studies that focus explicitly on
analyzing the emerging trends and new challenges of multi-label learning in the
era of big data. It is imperative to call for a comprehensive survey to fulfill
this mission and delineate future research directions and new applications.

Comment: Accepted to TPAMI 202
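The simplest baseline the multi-label literature builds on is binary relevance: decompose an L-label problem into L independent binary problems, one per label. The sketch below is an assumed illustration, not any surveyed method; the nearest-centroid base learner and the toy data are invented, and real systems would use far stronger base models (and, as the abstract notes, models that capture label dependencies, which binary relevance ignores).

```python
# Minimal binary-relevance sketch for multi-label learning: one independent
# binary classifier per label. Base learner and data are toy illustrations.

def train_centroids(X, y_col):
    """Per-label base learner: centroids of positive vs negative examples."""
    pos = [x for x, y in zip(X, y_col) if y == 1]
    neg = [x for x, y in zip(X, y_col) if y == 0]
    mean = lambda pts: [sum(c) / len(pts) for c in zip(*pts)]
    return mean(pos), mean(neg)

def predict_label(x, centroids):
    """Predict 1 if x is closer to the positive centroid."""
    pos_c, neg_c = centroids
    d = lambda c: sum((a - b) ** 2 for a, b in zip(x, c))
    return 1 if d(pos_c) < d(neg_c) else 0

# Toy data: 2-D points with 2 labels (label 0: x is high; label 1: y is high)
X = [[0.1, 0.9], [0.9, 0.1], [0.8, 0.8], [0.2, 0.2]]
Y = [[0, 1], [1, 0], [1, 1], [0, 0]]

# Train one model per label column, then predict all labels for a new point
models = [train_centroids(X, [row[k] for row in Y]) for k in range(2)]
pred = [predict_label([0.85, 0.9], m) for m in models]
```

Extreme multi-label classification, mentioned in the abstract, is what happens when L grows to hundreds of thousands: this per-label decomposition becomes intractable, motivating the tree-based and embedding-based methods that the survey's research area studies.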
Mobile Wound Assessment and 3D Modeling from a Single Image
The prevalence of camera-enabled mobile phones has made mobile wound assessment a viable treatment option for millions of previously difficult-to-reach patients. We have designed a complete mobile wound assessment platform to ameliorate the many challenges related to chronic wound care. Chronic wounds and infections are the most severe, costly and fatal types of wounds, placing them at the center of mobile wound assessment. Wound physicians assess thousands of single-view wound images from all over the world, and it may be difficult to determine the location of the wound on the body, for example, if the image is taken at close range. In our solution, end-users capture an image of the wound by taking a picture with their mobile camera. The wound image is segmented and classified using modern convolutional neural networks, and is stored securely in the cloud for remote tracking. We use an interactive semi-automated approach to allow users to specify the location of the wound on the body. To accomplish this we have created, to the best of our knowledge, the first 3D human surface anatomy labeling system, based on the current NYU and Anatomy Mapper labeling systems. To interactively view wounds in 3D, we have presented an efficient projective texture mapping algorithm for texturing wounds onto a 3D human anatomy model. In so doing, we have demonstrated an approach to 3D wound reconstruction that works even for a single wound image.
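One concrete payoff of segmenting the wound image is that physical measurements follow directly from the mask. The sketch below is an assumed illustration of that step only, not the paper's pipeline: the binary mask and the centimeters-per-pixel scale are invented, and a real system would obtain the scale from a calibration marker or depth data.

```python
# Illustrative sketch (not the paper's method): estimating wound area from a
# binary segmentation mask, given a known physical scale per pixel.

def wound_area_cm2(mask, cm_per_pixel):
    """Area of the segmented wound: wound-pixel count times per-pixel area."""
    wound_pixels = sum(sum(row) for row in mask)
    return wound_pixels * cm_per_pixel ** 2

# Toy 3x4 segmentation output, 1 = wound pixel, 0 = background
mask = [
    [0, 1, 1, 0],
    [1, 1, 1, 1],
    [0, 1, 1, 0],
]
area = wound_area_cm2(mask, cm_per_pixel=0.05)  # assumed scale: 0.05 cm/pixel
```

Tracking this area across visits, on images stored in the cloud, is what makes remote monitoring of wound healing quantitative rather than purely visual.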