Fog Computing in Medical Internet-of-Things: Architecture, Implementation, and Applications
In the era when the market segment of Internet of Things (IoT) tops the chart
in various business reports, it is apparently envisioned that the field of
medicine expects to gain a large benefit from the explosion of wearables and
internet-connected sensors that surround us to acquire and communicate
unprecedented data on symptoms, medication, food intake, and daily-life
activities impacting one's health and wellness. However, IoT-driven healthcare
would have to overcome many barriers, such as: 1) There is an increasing demand
for data storage on cloud servers where the analysis of the medical big data
becomes increasingly complex, 2) The data, when communicated, are vulnerable to
security and privacy issues, 3) The communication of the continuously collected
data is not only costly but also energy hungry, 4) Operating and maintaining
the sensors directly from the cloud servers are non-trivial tasks. This book
chapter defines Fog Computing in the context of medical IoT. Conceptually, Fog
Computing is a service-oriented intermediate layer in IoT, providing the
interfaces between the sensors and cloud servers for facilitating connectivity,
data transfer, and queryable local database. The centerpiece of Fog computing
is a low-power, intelligent, wireless, embedded computing node that carries out
signal conditioning and data analytics on raw data collected from wearables or
other medical sensors and offers efficient means to serve telehealth
interventions. We implemented and tested a fog computing system using the
Intel Edison and Raspberry Pi that allows acquisition, computing, storage and
communication of the various medical data such as pathological speech data of
individuals with speech disorders, Phonocardiogram (PCG) signal for heart rate
estimation, and Electrocardiogram (ECG)-based Q, R, S detection.
Comment: 29 pages, 30 figures, 5 tables. Keywords: Big Data, Body Area
Network, Body Sensor Network, Edge Computing, Fog Computing, Medical
Cyberphysical Systems, Medical Internet-of-Things, Telecare, Tele-treatment,
Wearable Devices. Chapter in Handbook of Large-Scale Distributed Computing in
Smart Healthcare (2017), Springer
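The on-node analytics described above (heart rate from PCG, ECG-based Q, R, S detection) can be illustrated with a minimal R-peak detector and heart-rate estimator. This is a generic amplitude-threshold sketch, not the chapter's actual implementation; the thresholds, refractory period, and synthetic signal are assumptions:

```python
# Minimal R-peak detector of the kind a fog node might run on raw ECG.
# Simple amplitude-threshold approach (a real system would use e.g. Pan-Tompkins).

def detect_r_peaks(signal, fs, threshold_ratio=0.6, refractory_s=0.25):
    """Return sample indices of R-peak candidates in an ECG-like signal."""
    threshold = threshold_ratio * max(signal)
    refractory = int(refractory_s * fs)  # minimum distance between beats
    peaks, last = [], -refractory
    for i in range(1, len(signal) - 1):
        if (signal[i] > threshold
                and signal[i] >= signal[i - 1]
                and signal[i] >= signal[i + 1]
                and i - last >= refractory):
            peaks.append(i)
            last = i
    return peaks

def heart_rate_bpm(peaks, fs):
    """Mean heart rate from consecutive R-R intervals."""
    if len(peaks) < 2:
        return 0.0
    rr = [(b - a) / fs for a, b in zip(peaks, peaks[1:])]
    return 60.0 / (sum(rr) / len(rr))

# Synthetic check: 1 Hz unit spikes on a flat baseline sampled at 100 Hz
fs = 100
sig = [1.0 if i % fs == 0 else 0.0 for i in range(10 * fs)]
peaks = detect_r_peaks(sig, fs)
print(len(peaks), round(heart_rate_bpm(peaks, fs)))  # 9 60
```

On a fog node, only the derived values (beat indices, BPM) would be forwarded to the cloud, rather than the raw waveform.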
NFV orchestration in edge and fog scenarios
International Mention in the doctoral degree.
Current network infrastructures handle a diverse
range of network services such as video
on demand services, video-conferences, social
networks, educational systems, or photo
storage services. These services have been
embraced by a significant amount of the
world population, and are used on a daily basis.
Cloud providers and Network operators’
infrastructures accommodate the traffic rates
that the aforementioned services generate, and
their management tasks do not only involve
the traffic steering, but also the processing of
the network services’ traffic. Traditionally,
the traffic processing has been performed by
applications deployed on servers that were
exclusively dedicated to a specific task, such
as packet inspection. However, in recent
years network services have started to be
virtualized, and this has led to the Network
Function Virtualization (NFV) paradigm, in which the
network functions of a service run on containers
or virtual machines that are decoupled
from the hardware infrastructure. As a result,
the traffic processing has become more flexible
because of the loose coupling between
software and hardware, and the possibility
of sharing common network functions, such as
firewalls, across multiple network services.
NFV eases the automation of network operations,
since scaling and migration tasks
are typically performed by a set of commands
predefined by the virtualization technology,
either containers or virtual machines. However,
it is still necessary to decide the traffic steering and processing of every network
service. In other words, which servers will
hold the traffic processing, and which are the
network links to be traversed so the users’ requests
reach the final servers, i.e., the network
embedding problem. Under the umbrella of
NFV, this problem is known as Virtual Network
Embedding (VNE), and this thesis uses the term
“NFV orchestration algorithms” for the algorithms
that solve it. The VNE problem is NP-hard,
meaning that no polynomial-time algorithm is
known that finds optimal solutions, regardless
of the network size. As a consequence, the
research and telecommunications communities
rely on heuristics that find solutions faster
than general-purpose optimization solvers.
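To make the embedding problem concrete, a toy greedy heuristic is sketched below. It is an illustration only, not one of the thesis's algorithms, and all node names, capacities, and demands are invented: each VNF of a service chain is placed on the first server with spare CPU, and traffic between consecutive placements is routed over a bandwidth-feasible shortest path.

```python
# Toy VNE heuristic: place VNFs greedily, route over BFS shortest paths
# that still have enough residual bandwidth. Purely illustrative.
from collections import deque

def bfs_path(adj, links, src, dst, bw):
    """Fewest-hop path using only links with residual bandwidth >= bw."""
    prev, q = {src: None}, deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj[u]:
            if v not in prev and links[frozenset((u, v))] >= bw:
                prev[v] = u
                q.append(v)
    return None

def embed_chain(chain, servers, links, bw):
    """chain: [(vnf, cpu)]; servers: {node: free_cpu}; links: {frozenset: free_bw}."""
    adj = {n: [] for n in servers}
    for e in links:
        u, v = tuple(e)
        adj[u].append(v)
        adj[v].append(u)
    placement, paths, prev_host = {}, [], None
    for vnf, cpu in chain:
        # greedy placement: first server (in name order) with enough residual CPU
        host = next((n for n in sorted(servers) if servers[n] >= cpu), None)
        if host is None:
            return None                      # no server can host this VNF
        servers[host] -= cpu
        placement[vnf] = host
        if prev_host is not None and prev_host != host:
            path = bfs_path(adj, links, prev_host, host, bw)
            if path is None:
                return None                  # no bandwidth-feasible route
            for a, b in zip(path, path[1:]):
                links[frozenset((a, b))] -= bw
            paths.append(path)
        prev_host = host
    return placement, paths

servers = {"edge1": 4, "edge2": 2}
links = {frozenset(("edge1", "edge2")): 10}
placement, paths = embed_chain([("fw", 3), ("dpi", 2)], servers, links, bw=5)
print(placement, paths)
```

Real orchestration algorithms optimize an objective (cost, energy) over all placements jointly; the greedy pass only shows why placement and routing must be decided together.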
Traditionally, NFV orchestration algorithms
have tried to minimize the deployment
costs derived from their solutions. For example,
they try not to exhaust the network
bandwidth, and they use short paths to consume fewer
network resources. Additionally, a recent
tendency led the research community towards
algorithms that minimize the energy consumption
of the deployed services, either
by selecting more energy efficient devices
or by turning off those network devices that
remained unused. VNE problem constraints
were typically summarized as a set of resource
and energy constraints, and the solutions
differed in the objective function they pursued.
But that was before the 5th generation of
mobile networks (5G) was considered
in the VNE problem. With the appearance
of 5G, new network services and use cases
started to emerge. The standards talked about
Ultra-Reliable and Low Latency Communications
(URLLC) with latencies below a few
milliseconds and 99.999% reliability, enhanced
Mobile Broadband (eMBB) with significant data
rate increases, and even the consideration
of massive Machine-Type Communications
(mMTC) among Internet of Things (IoT) devices.
Moreover, paradigms such as edge and
fog computing blended with the 5G technology
to introduce the idea of having computing
devices closer to the end users. As a result, the VNE problem had to incorporate the new
requirements as constraints to be taken into
account, and every solution had to satisfy
low latencies, high reliability, and higher
data rates.
This thesis studies the VNE problem, and
proposes several heuristics that tackle the constraints
related to 5G services in edge and
fog scenarios; that is, the proposed solutions
decide the assignment of Virtual Network
Functions to servers, and the traffic steering
across 5G infrastructures that include edge and
fog devices. To evaluate the performance
of the proposed solutions, the thesis studies
first the generation of graphs that represent
5G networks. The proposed mechanisms to
generate graphs serve to represent diverse 5G
scenarios, in particular federation scenarios
in which several domains share resources
among themselves. The generated graphs
also represent edge servers, as well as fog devices
with limited battery capacity. Additionally,
these graphs take into account the standard
requirements, and the expected demand for
5G networks. Moreover, the graphs differ depending
on the density of population, and the
area of study, i.e., whether it is an industrial
area, a highway, or an urban area.
After detailing the generation of graphs
representing the 5G networks, this thesis proposes
several NFV orchestration algorithms
to tackle the VNE problem. First, it focuses
on federation scenarios in which network services
should be assigned not only to a single
domain infrastructure, but also to the shared
resources of the federation of domains. Two
different problems are studied, one being the
VNE itself over a federated infrastructure, and
the other the delegation of network services.
That is, whether a network service should be
deployed in a local domain, or in the pool
of resources of the federation of domains; knowing
that the latter charges the local domain
for hosting the network service. Second, the
thesis proposes OKpi, an NFV orchestration
algorithm to meet the quality of service of
5G network slices. Conceptually, network slicing
consists of splitting the network so that network
services are treated differently based on the slice
they belong to. For example, an eHealth network
slice will allocate the network resources necessary to meet low latencies for network
services such as remote surgery. Each network
slice is devoted to specific services with
very concrete requirements, such as high reliability,
location constraints, or 1 ms latencies. OKpi is
an NFV orchestration algorithm that meets the
network service requirements among different
slices. It is based on a multi-constrained
shortest path heuristic, and its solutions satisfy
latency, reliability, and location constraints.
After presenting OKpi, the thesis tackles the
VNE problem in 5G networks with static and
mobile fog devices. The presented NFV orchestration
algorithm takes into account the limited
computing resources of fog devices, as well
as the out-of-coverage problems derived from
the devices’ mobility.
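Since OKpi is described as building on a multi-constrained shortest path heuristic, the underlying idea can be sketched as follows. This is a generic illustration, not OKpi itself; the graph, latencies, and link reliabilities are invented. The goal is the lowest-latency path whose end-to-end reliability (the product of link reliabilities) stays above a floor:

```python
# Multi-constrained shortest path sketch: minimize latency subject to an
# end-to-end reliability floor. Latency-ordered search with reliability pruning.
import heapq

def mcsp(adj, src, dst, min_reliability):
    """adj[u] = [(v, latency, link_reliability)]. Returns (latency, path) of the
       lowest-latency path whose reliability product meets the floor, or None."""
    heap = [(0.0, src, 1.0, [src])]
    best = {}  # node -> best reliability seen so far (dominance pruning)
    while heap:
        lat, u, rel, path = heapq.heappop(heap)
        if u == dst:
            return lat, path           # popped in latency order -> optimal
        if rel <= best.get(u, 0.0):
            continue                   # dominated: seen u cheaper AND more reliable
        best[u] = rel
        for v, l, r in adj.get(u, []):
            nrel = rel * r
            if nrel >= min_reliability and v not in path:
                heapq.heappush(heap, (lat + l, v, nrel, path + [v]))
    return None

adj = {
    "gw":   [("fog", 1.0, 0.99), ("edge", 5.0, 0.999)],
    "fog":  [("srv", 1.0, 0.90)],
    "edge": [("srv", 1.0, 0.999)],
}
# The fast gw-fog-srv path fails the 0.99 reliability floor (0.99*0.90 = 0.891),
# so the slower but reliable gw-edge-srv path is chosen.
print(mcsp(adj, "gw", "srv", min_reliability=0.99))
```

The pruning is safe because states are popped in latency order: an earlier-popped state at the same node with higher reliability can reach anything the dominated state can, at no greater latency.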
To conclude, this thesis studies the scaling
of Vehicle-to-Network (V2N) services, which
require low latencies for network services such as
collision avoidance, hazard warning, and remote
driving. For these services, traffic jams and
high vehicular congestion can lead to violations
of the latency requirements. Hence, it is
necessary to anticipate such circumstances by
using time-series techniques that forecast the
incoming vehicular traffic flow in the next
minutes or hours, so as to scale the V2N
service accordingly.
The 5G Exchange (5GEx) project (2015-2018) was an EU-funded project (H2020-ICT-2014-2 grant agreement 671636).
The 5G-TRANSFORMER project (2017-2019) is an EU-funded project (H2020-ICT-2016-2 grant agreement 761536).
The 5G-CORAL project (2017-2019) is an EU-Taiwan project (H2020-ICT-2016-2 grant agreement 761586).
Doctoral Programme in Telematic Engineering, Universidad Carlos III de Madrid. President: Ioannis Stavrakakis. Secretary: Pablo Serrano Yáñez-Mingot. Committee member: Paul Horatiu Patra
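The proactive V2N scaling loop described in the abstract can be sketched with a toy one-step forecast. The thesis relies on proper time-series models; the linear-trend window and per-instance capacity below are invented for illustration:

```python
# Toy proactive scaler: forecast next-interval vehicular flow with a
# least-squares linear trend, then provision enough V2N service instances.
import math

def linear_forecast(history, window=4):
    """One-step-ahead forecast: fit a line to the last `window` points."""
    y = history[-window:]
    n = len(y)
    mx = (n - 1) / 2                       # mean of x = 0..n-1
    my = sum(y) / n
    denom = sum((x - mx) ** 2 for x in range(n))
    slope = sum((x - mx) * (v - my) for x, v in enumerate(y)) / denom
    return my + slope * (n - mx)           # extrapolate to x = n

def instances_needed(flow, capacity_per_instance=100):
    """Instances to provision so the predicted flow stays within capacity."""
    return max(1, math.ceil(flow / capacity_per_instance))

flow = [120, 150, 180, 210]                # vehicles/min, traffic building up
pred = linear_forecast(flow)
print(pred, instances_needed(pred))        # 240.0 3
```

Scaling on the forecast rather than the current load is what lets the service absorb an approaching traffic jam before latency requirements are violated.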
Internet of Things and Sensors Networks in 5G Wireless Communications
This book is a printed edition of the Special Issue Internet of Things and Sensors Networks in 5G Wireless Communications that was published in Sensors
The Internet of Things (IoT) has attracted much attention from society, industry and academia as a promising technology that can enhance day-to-day activities, enable the creation of new business models, products and services, and serve as a broad source of research topics and ideas. A future digital society is envisioned, composed of numerous wireless connected sensors and devices. Driven by huge demand, massive IoT (mIoT), or massive machine-type communication (mMTC), has been identified as one of the three main communication scenarios for 5G. In addition to connectivity, computing, storage and data management are also long-standing issues for low-cost devices and sensors. The book is a collection of outstanding technical research and industrial papers covering new research results, with a wide range of features within the 5G-and-beyond framework. It provides a range of discussions of the major research challenges and achievements within this topic.
Design for energy-efficient and reliable fog-assisted healthcare IoT systems
Cardiovascular disease and diabetes are two of the most dangerous diseases, as they are the leading causes of death across all ages. Unfortunately, they cannot be completely cured with current knowledge and existing technologies. However, they can be effectively managed by applying methods of continuous health monitoring. Nonetheless, it is difficult to achieve a high quality of healthcare with current health monitoring systems, which often have several limitations such as a lack of mobility support, energy inefficiency, and an insufficiency of advanced services. Therefore, this thesis presents a Fog computing approach focusing on four main tracks, and proposes it as a solution to the existing limitations. In the first track, the main goal is to introduce Fog computing and Fog services into remote health monitoring systems in order to enhance the quality of healthcare.
In the second track, a Fog approach providing mobility support in a real-time health monitoring IoT system is proposed. The handover mechanism run by Fog-assisted smart gateways helps to maintain the connection between sensor nodes and the gateways with a minimized latency. Results show that the handover latency of the proposed Fog approach is 10%-50% less than other state-of-the-art mobility support approaches.
In the third track, the designs of four energy-efficient health monitoring IoT systems are discussed and developed. Each energy-efficient system and its sensor nodes are designed to serve a specific purpose such as glucose monitoring, ECG monitoring, or fall detection; with the exception of the fourth system which is an advanced and combined system for simultaneously monitoring many diseases such as diabetes and cardiovascular disease. Results show that these sensor nodes can continuously work, depending on the application, up to 70-155 hours when using a 1000 mAh lithium battery.
The fourth track mentioned above provides a Fog-assisted remote health monitoring IoT system for diabetic patients with cardiovascular disease. Via several proposed algorithms, such as QT interval extraction, activity status categorization, and fall detection algorithms, the system can process data and detect abnormalities in real time. Results show that the proposed system using Fog services is a promising approach for improving the treatment of diabetic patients with cardiovascular disease.
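As an illustration of the kind of per-node processing such systems perform, below is a minimal fall-detection sketch based on accelerometer magnitude (a free-fall window followed by an impact spike). It is not the thesis's algorithm; the thresholds, sampling rate, and sample data are assumptions:

```python
# Illustrative fall detector: flag a free-fall phase (magnitude near 0 g)
# followed shortly by an impact spike (magnitude well above 1 g).
import math

def acc_magnitude(sample):
    """Euclidean magnitude of a 3-axis accelerometer sample, in g."""
    return math.sqrt(sum(a * a for a in sample))

def detect_fall(samples, fs, free_fall_g=0.4, impact_g=2.5, window_s=1.0):
    """samples: list of (ax, ay, az) in g. True if free-fall precedes an impact."""
    window = int(window_s * fs)
    mags = [acc_magnitude(s) for s in samples]
    for i, m in enumerate(mags):
        if m < free_fall_g:                       # candidate free-fall sample
            if any(m2 > impact_g for m2 in mags[i:i + window]):
                return True                       # impact within the window
    return False

fs = 50  # Hz
still = [(0.0, 0.0, 1.0)] * fs                   # standing: ~1 g on one axis
fall = [(0.0, 0.0, 0.1)] * 10 + [(0.0, 0.0, 3.0)] + [(0.0, 0.0, 1.0)] * 10
print(detect_fall(still + fall, fs))             # True
print(detect_fall(still, fs))                    # False
```

Running such a check on the gateway rather than the cloud is what keeps the detection latency low enough to be useful.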
Clustering in the Big Data Era: methods for efficient approximation, distribution, and parallelization
Data clustering is an unsupervised machine learning task whose objective is to group together similar items. As a versatile data mining tool, data clustering has numerous applications, such as object detection and localization using data from 3D laser-based sensors, finding popular routes using geolocation data, and finding similar patterns of electricity consumption using smart meters. The datasets in modern IoT-based applications are getting more and more challenging for conventional clustering schemes. Big Data is a term used to loosely describe hard-to-manage datasets. Particularly, large numbers of data points, high rates of data production, large numbers of dimensions, high skewness, and distributed data sources are aspects that challenge the classical data processing schemes, including clustering methods. This thesis contributes to efficient big data clustering for distributed and parallel computing architectures, representative of the processing environments in the edge-cloud computing continuum. The thesis also proposes approximation techniques to cope with certain challenging aspects of big data. Regarding distributed clustering, the thesis proposes MAD-C, abbreviating Multi-stage Approximate Distributed Cluster-Combining. MAD-C leverages an approximation-based data synopsis that drastically lowers the required communication bandwidth among the distributed nodes and achieves multiplicative savings in computation time, compared to a baseline that centrally gathers and clusters the data. The thesis shows MAD-C can be used to detect and localize objects using data from distributed 3D laser-based sensors with high accuracy. Furthermore, the work in the thesis shows how to utilize MAD-C to efficiently detect the objects within a restricted area for geofencing purposes. Regarding parallel clustering, the thesis proposes a family of algorithms called PARMA-CC, abbreviating Parallel Multistage Approximate Cluster Combining.
Using approximation-based data synopses, PARMA-CC algorithms achieve scalability on multi-core systems by facilitating the parallel execution of threads with limited dependencies, which get resolved using fine-grained synchronization techniques. To further enhance efficiency, PARMA-CC algorithms can be configured with respect to different data properties. Analytical and empirical evaluations show PARMA-CC algorithms achieve significantly higher scalability than state-of-the-art methods while preserving high accuracy. On parallel high-dimensional clustering, the thesis proposes IP.LSH.DBSCAN, abbreviating Integrated Parallel Density-Based Clustering through Locality-Sensitive Hashing (LSH). IP.LSH.DBSCAN fuses the process of creating an LSH index into the process of data clustering, and it takes advantage of data parallelization and fine-grained synchronization. Analytical and empirical evaluations show IP.LSH.DBSCAN facilitates parallel density-based clustering of massive datasets using desired distance measures, resulting in latency several orders of magnitude lower than the state-of-the-art for high-dimensional data. In essence, the thesis proposes methods and algorithmic implementations targeting the problem of big data clustering and applications using distributed and parallel processing. The proposed methods (available as open-source software) are extensible and can be used in combination with other methods.
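The synopsis idea behind approaches like MAD-C can be illustrated with a much-simplified grid summary. This sketch is not MAD-C itself; the cell size and data points are invented. Each node ships per-cell counts and coordinate sums instead of raw points, and a combiner merges the synopses into global centroids:

```python
# Sketch of distributed cluster-combining via data synopses: nodes summarise
# points into grid cells (count + coordinate sums) and send only the synopsis.

def make_synopsis(points, cell=1.0):
    """Map each 2-D point to a grid cell; keep per-cell count and coordinate sums."""
    syn = {}
    for x, y in points:
        key = (int(x // cell), int(y // cell))
        c, sx, sy = syn.get(key, (0, 0.0, 0.0))
        syn[key] = (c + 1, sx + x, sy + y)
    return syn

def merge_synopses(synopses):
    """Combine per-node synopses and emit one centroid per occupied cell."""
    total = {}
    for syn in synopses:
        for key, (c, sx, sy) in syn.items():
            tc, tsx, tsy = total.get(key, (0, 0.0, 0.0))
            total[key] = (tc + c, tsx + sx, tsy + sy)
    return {k: (sx / c, sy / c) for k, (c, sx, sy) in total.items()}

node_a = [(0.1, 0.1), (0.3, 0.3)]       # both fall in cell (0, 0)
node_b = [(0.2, 0.2), (5.1, 5.1)]       # cell (0, 0) and cell (5, 5)
merged = merge_synopses([make_synopsis(node_a), make_synopsis(node_b)])
print(sorted(merged))
```

The communication saving is the point: each node sends one tuple per occupied cell, independent of how many raw points fell into it.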
Time series data mining: preprocessing, analysis, segmentation and prediction. Applications
Currently, the amount of data produced by any information system is increasing exponentially. This motivates the development of automatic techniques to process and mine these data correctly. Specifically, in this Thesis, we tackled these problems for time series data, that is, temporal data which is collected chronologically. This kind of data can be found in many fields of science, such as palaeoclimatology, hydrology, financial problems, etc. Time series data mining (TSDM) consists of several tasks which try to achieve different objectives, such as classification, segmentation, clustering, prediction, and analysis. However, in this Thesis, we focus on time series preprocessing, segmentation and prediction. Time series preprocessing is a prerequisite for other posterior tasks: for example, the reconstruction of missing values in incomplete parts of time series can be essential for clustering them. In this Thesis, we tackled the problem of massive missing data reconstruction in significant wave height (SWH) time series from the Gulf of Alaska. It is very common that buoys stop working for different periods, which is usually related to malfunctioning or bad weather conditions. The relation between the time series of the different buoys is analysed and exploited to reconstruct the missing time series. In this context, EANNs with PUs are trained, showing that the resulting models are simple and able to recover these values with high precision. In the case of time series segmentation, the procedure consists in dividing the time series into different subsequences to achieve different purposes. This segmentation can be done trying to find useful patterns in the time series. In this Thesis, we have developed novel bioinspired algorithms in this context. For instance, for paleoclimate data, an initial genetic algorithm was proposed to discover early warning signals of TPs, whose detection was supported by expert opinions.
However, given that the expert had to individually evaluate every solution given by the algorithm, the evaluation of the results was very tedious. This led to an improvement in the body of the GA to evaluate the procedure automatically. For significant wave height time series, the objective was the detection of groups which contain extreme waves, i.e. those which are relatively large with respect to other waves close in time. The main motivation is to design alert systems. This was done using an HA, where an LS process was included by using a likelihood-based segmentation, assuming that the points follow a beta distribution. Finally, the analysis of similarities in different periods of European stock markets was also tackled, with the aim of evaluating the influence of different markets in Europe. When segmenting time series with the aim of reducing the number of points, different techniques have been proposed. However, it remains an open challenge given the difficulty of operating with large amounts of data in different applications. In this work, we propose a novel statistically-driven CRO algorithm (SCRO), which automatically adapts its parameters during the evolution, taking into account the statistical distribution of the population fitness. This algorithm improves the state-of-the-art with respect to accuracy and robustness. Also, this problem has been tackled using an improvement of the BBPSO algorithm, which includes a dynamic update of the cognitive and social components in the evolution, combined with mathematical tricks to obtain the fitness of the solutions, which
significantly reduces the computational cost of previously proposed coral reef methods.
Also, the optimisation of both objectives (clustering quality and approximation quality),
which are in conflict, could be an interesting open challenge, which will be tackled
in this Thesis. For that, an MOEA for time series segmentation is developed, improving the clustering quality of the solutions and their approximation. Prediction in time series is the estimation of future values by observing and studying the previous ones. In this context, we solve this task by applying prediction over high-order representations of the elements of the time series, i.e. the segments obtained by time series segmentation. This is applied to two challenging problems: the prediction of extreme wave height and fog prediction. On the one hand, the number of extreme values in SWH time series is small with respect to the number of standard values. Thus, the prediction of these values cannot be done using standard algorithms without taking into account the imbalanced ratio of the dataset. For that, an algorithm that automatically finds the set of segments and then applies EANNs is developed, showing the high ability of the algorithm to detect and predict these special events. On the other hand, fog prediction is affected by the same problem, that is, the number of fog events is much lower than that of non-fog events, requiring special treatment too. A preprocessing of different data coming from sensors situated in different parts of the Valladolid airport is used to build a simple ANN model, which is physically corroborated and discussed. The last challenge, which opens new horizons, is the estimation of the statistical distribution of time series to guide different methodologies. For this, the estimation of a mixed distribution for SWH time series is used for fixing the threshold of POT approaches. Also, the determination of the fittest distribution for the time series is used for discretising it and making a prediction which treats the problem as ordinal classification.
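As a minimal illustration of representing a time series with fewer points, here is Piecewise Aggregate Approximation (PAA). It is only a simple stand-in: the thesis develops evolutionary segmentation algorithms, not PAA, and the example series is invented:

```python
# Piecewise Aggregate Approximation: split the series into equal-length
# chunks and keep each chunk's mean, reducing n points to n_segments points.

def paa(series, n_segments):
    """Reduce `series` to `n_segments` points (mean of each chunk)."""
    n = len(series)
    out = []
    for s in range(n_segments):
        lo = s * n // n_segments           # chunk boundaries cover the series
        hi = (s + 1) * n // n_segments
        chunk = series[lo:hi]
        out.append(sum(chunk) / len(chunk))
    return out

series = [1, 1, 1, 5, 5, 5, 9, 9, 9]
print(paa(series, 3))  # [1.0, 5.0, 9.0]
```

Evolutionary segmentation differs in that segment boundaries are searched for (to minimize approximation or maximize clustering quality) rather than fixed at equal lengths.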
The work developed in this Thesis is supported by twelve papers in international journals, seven papers in international conferences, and four papers in national conferences
Resource Management in Multi-Access Edge Computing (MEC)
This PhD thesis investigates the effective ways of managing the resources of a Multi-Access Edge Computing Platform (MEC) in 5th Generation Mobile Communication (5G) networks.
The main characteristics of MEC include distributed nature, proximity to users, and high availability. Based on these key features, solutions have been proposed for effective resource
management. In this research, two aspects of resource management in MEC have been addressed: the computational resource and the caching resource, which corresponds to the services provided by the MEC.
MEC is a new 5G enabling technology proposed to reduce latency by bringing cloud computing capability closer to end-user Internet of Things (IoT) and mobile devices. MEC would support latency-critical user applications such as driverless cars and e-health. These applications will depend on resources and services provided by the MEC. However, MEC has
limited computational and storage resources compared to the cloud. Therefore, it is important to ensure reliable MEC network communication during resource provisioning by eliminating the chances of deadlock. Deadlock may occur when a huge number of devices contend for a limited amount of resources and adequate measures are not put in place. It is crucial to avoid deadlock while scheduling and provisioning resources on MEC to achieve a highly reliable and readily available system that supports latency-critical applications. In this research, a deadlock-avoidance resource provisioning algorithm has been proposed for industrial IoT devices using MEC platforms to ensure higher reliability of network interactions. The proposed scheme incorporates the Banker's resource-request algorithm and uses Software Defined Networking (SDN) to reduce communication overhead. Simulation and experimental results have shown that system deadlock can be prevented by applying the proposed algorithm, which ultimately leads to more reliable network interaction between mobile stations and MEC platforms.
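The safety check at the core of a Banker's-style avoidance scheme can be sketched in a few lines: a resource request is granted only if the resulting state is "safe", i.e. some ordering lets every device run to completion. The resource vectors below are the classic textbook example, not data from the thesis.

```python
# Banker's safety check: deadlock avoidance for devices contending for
# a limited pool of MEC resources. A state is safe if some ordering lets
# every device acquire its remaining need and finish.

def is_safe(available, allocation, need):
    """Return True if all devices can finish in some order."""
    work = list(available)
    finished = [False] * len(allocation)
    progressed = True
    while progressed:
        progressed = False
        for i in range(len(allocation)):
            if not finished[i] and all(n <= w for n, w in zip(need[i], work)):
                # Device i can run to completion and release its allocation.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progressed = True
    return all(finished)

if __name__ == "__main__":
    # Classic five-process example: safe sequence P1, P3, P4, P2, P0.
    available = [3, 3, 2]
    allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
    need = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
    print(is_safe(available, allocation, need))  # True
```

A provisioning loop would tentatively apply each incoming request, run `is_safe`, and roll the request back (deferring the device) whenever the check fails.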
Additionally, this research explores the use of MEC as a caching platform, as it is proclaimed a key technology for reducing service processing delays in 5G networks. Caching on MEC decreases service latency and improves data content access by allowing direct content delivery through the edge without fetching data from the remote server. Caching on MEC is also deemed an effective approach that guarantees greater reachability due to proximity to end-users. In this regard, a novel hybrid content caching algorithm has been proposed for MEC platforms to increase their caching efficiency. The proposed algorithm is a unification of a modified Belady's algorithm and a distributed cooperative caching algorithm to improve data access while reducing latency. A polynomial fit algorithm with Lagrange interpolation is employed to predict future request references for Belady's algorithm. Experimental results show that the proposed algorithm obtains 4% more cache hits due to its selective caching approach when compared with case-study algorithms. Results also show that the use of a cooperative algorithm can improve the total cache hits by up to 80%.
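Predicting an item's future request reference with Lagrange interpolation, as the hybrid scheme does to feed Belady's algorithm, can be sketched as follows; the request history is invented for illustration.

```python
# Lagrange interpolation over an item's past request references,
# extrapolated one step ahead to estimate its next reference time, which
# a Belady-style policy would use to pick the victim referenced furthest
# in the future. The history below is illustrative, not thesis data.

def lagrange_predict(xs, ys, x_next):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x_next."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x_next - xj) / (xi - xj)
        total += term
    return total

if __name__ == "__main__":
    times = [0, 1, 2, 3]               # observation indices
    refs = [1.0, 2.0, 5.0, 10.0]       # follows t**2 + 1 exactly
    print(lagrange_predict(times, refs, 4))  # 17.0
```

Because four points determine the quadratic exactly, the extrapolation reproduces t**2 + 1 at t = 4; with noisy real traces a low-degree fit over a sliding window would be the safer choice.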
Furthermore, this thesis has also explored another predictive caching scheme to further improve caching efficiency. The motivation was to investigate another predictive caching approach as an improvement over the former. As a result, a Predictive Collaborative Replacement (PCR) caching framework has been proposed, consisting of three schemes, each addressing a particular problem. The proactive predictive scheme addresses the problem of continuous change in cache popularity trends. The collaborative scheme addresses the problem of cache redundancy in the collaborative space. Finally, the replacement scheme is a solution to evict cold cache blocks and increase the hit ratio. Simulation experiments have shown that the replacement scheme achieves 3% more cache hits than existing replacement algorithms such as Least Recently Used, Multi Queue, and Frequency-based replacement. The PCR algorithm has been tested on a real dataset (the MovieLens 20M dataset) and compared with an existing contemporary predictive algorithm. Results show that PCR performs better, with a 25% increase in hit ratio at a 10% CPU utilization overhead.
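For reference, the Least Recently Used baseline that the PCR replacement scheme is compared against can be implemented minimally with an ordered dictionary:

```python
from collections import OrderedDict

# Minimal LRU cache: one of the baseline replacement policies (Least
# Recently Used) against which the PCR replacement scheme is evaluated.

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()       # insertion order = recency order
        self.hits = self.misses = 0

    def access(self, key):
        if key in self.store:
            self.store.move_to_end(key)  # mark as most recently used
            self.hits += 1
        else:
            self.misses += 1
            if len(self.store) >= self.capacity:
                self.store.popitem(last=False)  # evict the LRU block
            self.store[key] = True

if __name__ == "__main__":
    cache = LRUCache(2)
    for block in ["a", "b", "a", "c", "b"]:
        cache.access(block)
    print(cache.hits, cache.misses)  # 1 4
```

A predictive policy like PCR aims to beat exactly this kind of recency-only heuristic by evicting blocks whose popularity trend indicates they have gone cold.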
Context-aware home monitoring system for Parkinson's disease patients: ambient and wearable sensing for freezing of gait detection
Thesis under a cotutelle agreement between the Universitat Politècnica de Catalunya and the Technische Universiteit Eindhoven. This PhD Thesis has been developed in the framework of, and according to, the rules of the Erasmus Mundus Joint Doctorate on Interactive and Cognitive Environments EMJD ICE [FPA no. 2010-0012]. Freezing of gait (FOG) is a common symptom of Parkinson's disease (PD). It is characterized by brief episodes of inability to step, or by extremely short steps that typically occur on gait initiation or on turning while walking. The consequences of FOG are reduced mobility and a higher propensity to falls, which have a direct effect on the quality of life of the individual. No completely effective pharmacological treatment exists for the FOG phenomenon. However, external stimuli, such as lines on the floor or rhythmic sounds, can focus the attention of a person experiencing a FOG episode and help them initiate gait. The optimal effectiveness of this approach, known as cueing, is achieved through timely activation of a cueing device upon the accurate detection of a FOG episode. Therefore, robust and accurate FOG detection is the main problem that needs to be solved when developing a suitable assistive technology solution for this specific user group. This thesis proposes the use of the activity and spatial context of a person as the means to improve the detection of FOG episodes during monitoring at home. The thesis describes the design, algorithm implementation, and evaluation of a distributed home system for FOG detection based on multiple cameras and a single inertial gait sensor worn at the waist of the patient. Through detailed observation of home data collected from 17 PD patients, we realized that a novel solution for FOG detection could be achieved by using contextual information about the patient's position, orientation, basic posture, and movement on a semantically annotated two-dimensional (2D) map of the indoor environment.
We envisioned the future context-aware system as a network of Microsoft Kinect cameras placed in the patient's home that interacts with a wearable inertial sensor on the patient (a smartphone). Since the hardware platform of the system consists of commercial off-the-shelf hardware, the majority of the development effort involved producing the software modules (for position tracking, orientation tracking, and activity recognition) that run on top of the middleware operating system in the home gateway server. The main component that had to be developed is the Kinect application for tracking the position and height of multiple people, based on 3D point cloud input. Besides position tracking, this software module also provides mapping and semantic annotation of FOG-specific zones on the scene in front of the Kinect. One instance of the vision tracking application is supposed to run for every Kinect sensor in the system, yielding a potentially high number of simultaneous tracks. At any moment, the system has to track one specific person: the patient. To enable tracking of the patient between different non-overlapping cameras in the distributed system, a new re-identification approach based on appearance model learning with a one-class Support Vector Machine (SVM) was developed. The re-identification method was evaluated on a 16-person dataset in a laboratory environment.
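As a rough illustration of appearance-based re-identification across cameras: the thesis trains a one-class SVM over appearance features, whereas in the toy sketch below a mean appearance vector with a cosine-similarity threshold stands in for that model. Feature vectors and the threshold are invented for illustration.

```python
import math

# Toy stand-in for one-class appearance re-identification: learn a mean
# appearance vector for the patient, then accept a new track only if its
# feature vector is similar enough. The thesis uses a one-class SVM here.

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def learn_model(samples):
    """Average the patient's appearance feature vectors (the 'model')."""
    n = len(samples)
    return [sum(col) / n for col in zip(*samples)]

def is_patient(model, track_feature, threshold=0.9):
    """Decide whether a track seen by another camera is the patient."""
    return cosine(model, track_feature) >= threshold
```

A real one-class SVM additionally learns a non-spherical decision boundary from the training samples, which matters when appearance varies with lighting and viewpoint between cameras.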
Since the patient's orientation in the indoor space was recognized as an important part of the context, the system needed the ability to estimate the orientation of the person, expressed in the frame of the 2D scene on which the patient is tracked by the camera. We devised a method to fuse position tracking information from the vision system with inertial data from the smartphone in order to obtain the patient's 2D pose estimate on the scene map. Additionally, a method for estimating the position of the smartphone on the waist of the patient was proposed. Position and orientation estimation accuracy were evaluated on a 12-person dataset. Finally, with positional, orientation, and height information available, a new seven-class activity classification was realized using a hierarchical classifier that combines a height-based posture classifier with translational and rotational SVM movement classifiers. Each of the SVM movement classifiers and the joint hierarchical classifier were evaluated in a laboratory experiment with 8 healthy persons.
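The hierarchical structure of the activity classifier can be sketched with hand-set rules; the thresholds, class names, and rule-based movement stage below are illustrative stand-ins for the trained SVM stages used in the thesis.

```python
# Toy hierarchical activity classifier: a height-based posture stage,
# followed by movement stages that refine the "standing" posture.
# Thresholds and labels are invented; the thesis trains SVMs for the
# translational and rotational movement stages.

def classify(height_m, speed_ms, turn_rate_dps):
    # Stage 1: posture from the tracked height of the person.
    if height_m < 0.8:
        posture = "lying"
    elif height_m < 1.3:
        posture = "sitting"
    else:
        posture = "standing"
    if posture != "standing":
        return posture
    # Stage 2: translational then rotational movement refine "standing".
    if speed_ms > 0.3:
        return "walking"
    if turn_rate_dps > 30.0:
        return "turning"
    return "standing"

if __name__ == "__main__":
    print(classify(1.7, 0.5, 0.0))   # walking
    print(classify(1.1, 0.0, 0.0))   # sitting
```

The hierarchy keeps the movement classifiers from ever being asked about implausible cases, e.g. a "walking" decision while the posture stage says the person is lying down.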
The final context-based FOG detection algorithm uses activity information and spatial context information in order to confirm or disprove FOG detected by the current state-of-the-art
FOG detection algorithm (which uses only wearable sensor data). A dataset with home data of 3 PD patients was produced using two Kinect cameras and a smartphone in synchronized recording. The new context-based FOG detection algorithm and the wearable-only FOG detection algorithm were both evaluated with the home dataset and their results were compared.
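The confirm-or-disprove step described above can be sketched as a simple gate on the wearable detector's output; the activity labels and zone names here are illustrative, not the thesis's actual feature set.

```python
# Context gate over wearable-only FOG candidates: a detection is kept
# only when the person's activity makes a FOG episode plausible, and the
# annotated map zone adds extra confidence. Labels/zones are invented.

FOG_PLAUSIBLE_ACTIVITIES = {"standing", "walking", "turning"}
FOG_PRONE_ZONES = {"doorway", "turn_area"}   # annotated on the 2D map

def confirm_fog(wearable_detection, activity, zone):
    """Return (confirmed, in_fog_prone_zone) for a wearable FOG candidate."""
    confirmed = wearable_detection and activity in FOG_PLAUSIBLE_ACTIVITIES
    return confirmed, confirmed and zone in FOG_PRONE_ZONES
```

Rejecting candidates raised while the patient is sitting or lying is exactly the mechanism that trades a small loss in sensitivity for the higher specificity reported below.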
The context-based algorithm very positively influences the reduction of false-positive detections, which is expressed through the higher specificity achieved. In some cases, the context-based algorithm also eliminates true-positive detections, reducing sensitivity to a lesser extent. The final comparison of the two algorithms on the basis of their sensitivity and specificity shows the improvement in overall FOG detection achieved with the new context-aware home system.