Medical data processing and analysis for remote health and activities monitoring
Recent developments in sensor technology, wearable computing, the Internet of Things (IoT), and wireless communication have given rise to research in ubiquitous healthcare and remote monitoring of human health and activities. Health monitoring systems involve processing and analysis of data retrieved from smartphones, smart watches, smart bracelets, as well as various sensors and wearable devices. Such systems enable continuous monitoring of patients' physiological and health conditions by sensing and transmitting measurements such as heart rate, electrocardiogram, body temperature, respiratory rate, chest sounds, or blood pressure. Pervasive healthcare, as a relevant application domain in this context, aims at revolutionizing the delivery of medical services through a medical assistive environment and facilitates the independent living of patients. In this chapter, we discuss (1) data collection, fusion, ownership, and privacy issues; (2) models, technologies, and solutions for medical data processing and analysis; (3) big medical data analytics for remote health monitoring; (4) research challenges and opportunities in medical data analytics; and (5) examples of case studies and practical solutions.
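The continuous-monitoring step described above can be illustrated with a small sketch. This is not from the chapter itself: the windowed-mean check, the alert threshold, and the heart-rate values are all assumed for illustration.

```python
from collections import deque

def rolling_mean_alerts(readings, window=5, limit=100.0):
    """Return (index, mean) pairs whenever the windowed mean of a
    heart-rate stream exceeds the limit (values are illustrative)."""
    buf = deque(maxlen=window)   # sliding window over the stream
    alerts = []
    for i, bpm in enumerate(readings):
        buf.append(bpm)
        if len(buf) == window:   # only evaluate full windows
            mean = sum(buf) / window
            if mean > limit:
                alerts.append((i, mean))
    return alerts

# Synthetic stream: resting rates followed by a sustained elevation.
stream = [72, 75, 74, 78, 90, 110, 120, 118, 115, 80]
alerts = rolling_mean_alerts(stream, window=5, limit=100.0)
```

A real system would apply such a filter on-device before transmitting, so that only clinically interesting windows consume bandwidth.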
Statistics in the Big Data era
It is estimated that about 90% of the currently available data have been produced over the last two years. Of these, only 0.5% are effectively analysed and used. Yet these data can be a source of great wealth, the oil of the 21st century, when analysed with the right approach. In this article, we illustrate some specificities of these data and the great interest that they can represent in many fields. We then consider some challenges that their analysis poses to statistics and suggest some strategies for addressing them.
Evaluation of IoT-Based Computational Intelligence Tools for DNA Sequence Analysis in Bioinformatics
In the contemporary age, Computational Intelligence (CI) plays an essential role in the interpretation of big biological data, given that it can support all of the molecular biology and DNA sequencing computations. For this purpose, many researchers have attempted to implement different tools in this field and have competed aggressively. Determining the best among the enormous number of available tools is therefore not an easy task; selecting the one that processes big data in the shortest time and without error can significantly improve a scientist's contribution to the bioinformatics field. This study uses different analysis methods, such as Fuzzy logic, Dempster-Shafer, Murphy, and Shannon entropy, to provide the most significant and reliable evaluation of IoT-based computational intelligence tools for DNA sequence analysis. The outcomes of this study can be advantageous to the bioinformatics community, researchers, and experts in big biological data.
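Shannon-entropy weighting, one of the evaluation methods the abstract names, can be sketched as follows. The decision matrix, tool names, and criteria here are hypothetical placeholders, not the study's actual data: the idea is that criteria whose values vary more across tools receive larger weights.

```python
import math

# Hypothetical decision matrix: rows are DNA-analysis tools, columns are
# criteria (e.g., speed, accuracy, memory use), values normalised to (0, 1].
tools = ["tool_A", "tool_B", "tool_C"]
matrix = [
    [0.9, 0.8, 0.7],
    [0.6, 0.9, 0.8],
    [0.8, 0.7, 0.9],
]

def entropy_weights(matrix):
    """Shannon-entropy criterion weights: a criterion with more divergence
    across alternatives is treated as more informative."""
    m, n = len(matrix), len(matrix[0])
    col_sums = [sum(row[j] for row in matrix) for j in range(n)]
    # Turn each column into a probability distribution.
    p = [[matrix[i][j] / col_sums[j] for j in range(n)] for i in range(m)]
    k = 1.0 / math.log(m)
    e = [-k * sum(p[i][j] * math.log(p[i][j]) for i in range(m) if p[i][j] > 0)
         for j in range(n)]
    d = [1.0 - ej for ej in e]        # degree of divergence per criterion
    total = sum(d)
    return [dj / total for dj in d]

weights = entropy_weights(matrix)
scores = {t: sum(w * v for w, v in zip(weights, row))
          for t, row in zip(tools, matrix)}
best = max(scores, key=scores.get)
```

The study combines this with Fuzzy, Dempster-Shafer, and Murphy aggregation; the sketch shows only the entropy-weighting step.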
Big Data and Reliability Applications: The Complexity Dimension
Big data features not only large volumes of data but also data with complicated structures. Complexity imposes unique challenges in big data analytics. Meeker and Hong (2014, Quality Engineering, pp. 102-116) provided an extensive discussion of the opportunities and challenges in big data and reliability, and described engineering systems that can generate big data usable in reliability analysis. Meeker and Hong (2014) focused on large-scale system operating and environment data (i.e., high-frequency multivariate time series data), and provided examples of how to link such data as covariates to traditional reliability responses such as time to failure, time to recurrence of events, and degradation measurements. This paper extends that discussion by focusing on how to use data with complicated structures in reliability analysis. Such data types include high-dimensional sensor data, functional curve data, and image streams. We first review recent developments in those directions, and then discuss how analytical methods can be developed to tackle the challenges that arise from the complexity of big data in reliability applications. The use of modern statistical methods such as variable selection, functional data analysis, scalar-on-image regression, spatio-temporal data models, and machine learning techniques will also be discussed.
Comment: 28 pages, 7 figures
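One of the reliability responses mentioned above, degradation measurements, can be turned into a pseudo failure time by extrapolating a fitted degradation path to a failure threshold. This is a generic textbook-style sketch, not the paper's method; the readings, threshold, and units are synthetic.

```python
def fit_line(times, values):
    """Ordinary least-squares slope and intercept for one unit's
    degradation path."""
    n = len(times)
    mt = sum(times) / n
    mv = sum(values) / n
    slope = (sum((t - mt) * (v - mv) for t, v in zip(times, values))
             / sum((t - mt) ** 2 for t in times))
    return slope, mv - slope * mt

def pseudo_failure_time(times, values, threshold):
    """Time at which the fitted (linear) degradation path crosses the
    failure threshold."""
    slope, intercept = fit_line(times, values)
    return (threshold - intercept) / slope

# Synthetic degradation readings for one unit (e.g., crack length in mm),
# observed at hours 0..400; failure declared at 10 mm.
times = [0, 100, 200, 300, 400]
values = [0.0, 1.1, 1.9, 3.2, 4.0]
t_fail = pseudo_failure_time(times, values, threshold=10.0)
```

With covariate data of the kind the paper discusses, the slope itself would be modelled as a function of operating and environment variables rather than fitted per unit in isolation.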
Identifying smart design attributes for Industry 4.0 customization using a clustering Genetic Algorithm
Industry 4.0 aims at achieving mass customization at a mass production cost. A key component of realizing this is the accurate prediction of customer needs and wants, which is, however, a challenging issue due to the lack of smart analytics tools. This paper investigates this issue in depth and then develops a predictive analytics framework that integrates cloud computing, big data analysis, business informatics, communication technologies, and digital industrial production systems. Computational intelligence in the form of a k-means clustering approach is used to manage relevant big data for feeding potential customer needs and wants into smart designs for targeted productivity and customized mass production. The identification of patterns from big data is achieved with k-means clustering, and optimal attributes are selected using genetic algorithms. A car customization case study shows how the approach may be applied and where to assign new clusters as knowledge of customer needs and wants grows. This approach offers a number of features suitable to smart design in realizing Industry 4.0.
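The k-means step of such a pipeline can be sketched in a few lines. Everything here is an assumed illustration rather than the paper's implementation: the 2-D "preference scores" are synthetic, and the genetic-algorithm attribute selection the paper pairs with clustering is omitted.

```python
import random

# Synthetic customer-preference vectors: two well-separated groups of
# 2-D scores, standing in for real survey or configurator data.
random.seed(0)
points = ([(random.gauss(0.0, 0.3), random.gauss(0.0, 0.3)) for _ in range(20)]
          + [(random.gauss(3.0, 0.3), random.gauss(3.0, 0.3)) for _ in range(20)])

def kmeans(points, k, iters=20):
    """Lloyd's algorithm: alternate nearest-center assignment and
    center recomputation for a fixed number of iterations."""
    centers = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: (p[0] - centers[c][0]) ** 2
                                        + (p[1] - centers[c][1]) ** 2)
            clusters[nearest].append(p)
        new_centers = []
        for j, cluster in enumerate(clusters):
            if cluster:
                new_centers.append((sum(x for x, _ in cluster) / len(cluster),
                                    sum(y for _, y in cluster) / len(cluster)))
            else:
                new_centers.append(centers[j])  # keep old center if empty
        centers = new_centers
    return centers, clusters

centers, clusters = kmeans(points, k=2)
```

Each resulting cluster center summarises one customer segment; in the framework described above, a new customer's preference vector would be assigned to its nearest center to select a matching set of smart design attributes.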