15 research outputs found
A sensitive data access model in support of learning health systems
Given the ever-growing body of knowledge, healthcare improvement hinges more than ever on efficient knowledge transfer to clinicians and patients. Promoted initially by the Institute of Medicine, the Learning Health System (LHS) framework emerged in the early 2000s. It focuses on learning cycles in which care delivery is tightly coupled with research activities, which are in turn closely tied to knowledge transfer, ultimately injecting solid improvements into medical practice. Sensitive health data access across multiple organisations is therefore paramount to supporting LHSs. While the LHS vision is well established, the security requirements to support it are not. Health data exchange approaches have been implemented (e.g., HL7 FHIR) or proposed (e.g., blockchain-based methods), but none cover the entire LHS requirement spectrum. To address this, the Sensitive Data Access Model (SDAM) is proposed. Using a representation of the agents and processes of data access systems, specific security requirements are presented and the SDAM layer architecture is described, with an emphasis on its mix-network dynamic topology approach. A clinical application benefiting from the model is subsequently presented, and an analysis evaluates the security properties and vulnerability mitigation strategies offered by a protocol suite following SDAM and, in parallel, by FHIR.
Implementing Efficient and Multi-Hop Image Acquisition in Remote Monitoring IoT Systems Using LoRa Technology
Remote sensing or monitoring through the deployment of wireless sensor networks (WSNs) is considered an economical and convenient way to collect information without cumbersome human intervention. Unfortunately, due to challenging deployment conditions, such as large geographic areas and a lack of electricity and network infrastructure, designing such wireless sensor networks for large-scale farms or forests is difficult and expensive. Many WSN-appropriate wireless technologies, such as Wi-Fi, Bluetooth, Zigbee and 6LoWPAN, have been widely adopted in remote sensing. The performance of these technologies, however, is not sufficient for use across large areas. Generally, as the geographical scope expands, more devices need to be employed to extend network coverage, so the number and cost of devices in a wireless sensor network increase dramatically. In addition, this type of deployment not only has a high probability of failure and high transmission costs, but also imposes additional overhead on system management and maintenance.
LoRa is an emerging physical-layer standard for long-range wireless communication. By utilizing chirp spread spectrum modulation, LoRa features a long communication range and broad signal coverage while maintaining low power consumption. LoRa thus outperforms similar technologies in terms of hardware cost, power consumption and radio coverage, and it is considered one of the promising solutions for the future of the Internet of Things (IoT). Because the research and development of LoRa are still in their early stages, it lacks sufficient support for multi-packet transport and complex deployment topologies. LoRa is therefore unable to further expand its network coverage or to efficiently support large data transfers the way conventional technologies can. Moreover, the small payload size and low data rate of the LoRa physical layer make these features more challenging to implement. These shortcomings limit the potential for LoRa to be used in a wider range of application scenarios.
This thesis addresses the problem of multi-packet and multi-hop transmission over LoRa by proposing two novel protocols: Multi-Packet LoRa (MPLR) and Multi-Hop LoRa (MHLR). LoRa's ability to transmit large messages is first evaluated, and the two protocols are then designed and implemented to extend LoRa to image transmission applications and multi-hop topologies. MPLR introduces a reliable transport mechanism for multi-packet sensory data, so that the network is no longer limited to transmitting small sensor readings. Combined with a data-channel reservation technique, MPLR greatly mitigates the data collisions caused by increased transmission times in laboratory experiments. MHLR realizes efficient routing in LoRa multi-hop transmission by utilizing machine learning. The results of both indoor and outdoor experiments show that the machine-learning-based routing is effective in wireless sensor networks.
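The multi-packet idea described above can be illustrated with a minimal sketch: splitting a payload larger than a single LoRa frame into sequence-numbered fragments and reassembling them at the receiver. The frame size, header layout and function names below are illustrative assumptions for a LoRa-like link, not the actual MPLR design.

```python
# Minimal sketch of multi-packet fragmentation/reassembly over a LoRa-like
# link. Sizes and header layout are illustrative assumptions only.
MAX_PAYLOAD = 51  # assumed usable bytes per frame at a typical LoRa data rate
HEADER = 3        # msg_id (1 B) + seq (1 B) + total (1 B)

def fragment(msg_id: int, data: bytes) -> list[bytes]:
    """Split `data` into numbered fragments that each fit one LoRa frame."""
    chunk = MAX_PAYLOAD - HEADER
    parts = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    return [bytes([msg_id, seq, len(parts)]) + p
            for seq, p in enumerate(parts)]

def reassemble(frames: list[bytes]) -> bytes:
    """Rebuild the original payload; raise if any fragment is missing."""
    frames = sorted(frames, key=lambda f: f[1])   # order by sequence number
    total = frames[0][2]
    if [f[1] for f in frames] != list(range(total)):
        raise ValueError("missing fragment")      # would trigger a retransmit request
    return b"".join(f[HEADER:] for f in frames)
```

A reliable transport such as MPLR would add acknowledgements and retransmission on top of this framing; the sketch only shows the fragmentation layer.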
Channel estimation techniques for filter bank multicarrier based transceivers for next generation of wireless networks
A dissertation submitted to the Faculty of Engineering and the Built Environment, University of the Witwatersrand, Johannesburg, in fulfillment of the requirements for the degree of Master of Science in Engineering (Electrical and Information Engineering), August 2017.
The fourth generation (4G) of wireless communication systems is designed on the principles of cyclic prefix orthogonal frequency division multiplexing (CP-OFDM), where the cyclic prefix (CP) is used to combat inter-symbol interference (ISI) and inter-carrier interference (ICI) in order to achieve higher data rates than the previous generations of wireless networks. Various filter bank multicarrier (FBMC) systems have been considered as potential waveforms for the fast-emerging next generation (xG) of wireless networks, especially the fifth generation (5G). Examples of the considered waveforms are orthogonal frequency division multiplexing with offset quadrature amplitude modulation (OFDM/OQAM) based filter banks, universal filtered multicarrier (UFMC), bi-orthogonal frequency division multiplexing (BFDM) and generalized frequency division multiplexing (GFDM). In perfect reconstruction (PR) or near perfect reconstruction (NPR) filter bank designs, these FBMC waveforms rely on well-designed prototype filters (used to construct the synthesis and analysis filter banks) to either replace or minimize the CP usage of 4G networks, providing higher spectral efficiency and an overall increase in data rates. Accurate design of the FIR low-pass prototype filter in NPR filter banks results in minimal signal distortion, making the analysis filter bank a time-reversed version of the corresponding synthesis filter bank.
However, in non-perfect reconstruction (Non-PR) systems the analysis filter bank is not directly a time-reversed version of the corresponding synthesis filter bank, as the prototype filter impulse response for this system is formulated (in this dissertation) by introducing randomly generated errors. Hence, aliasing and amplitude distortion are more prominent in Non-PR systems.
Channel estimation (CE) is used to predict the behaviour of the frequency selective channel and is usually adopted to ensure excellent reconstruction of the transmitted symbols. CE techniques can be broadly classified into pilot-based, semi-blind and blind schemes. In this dissertation, two linear pilot-based CE techniques, least squares (LS) and linear minimum mean square error (LMMSE), and three adaptive schemes, least mean squares (LMS), normalized least mean squares (NLMS) and recursive least squares (RLS), are presented, analyzed and documented. They are implemented while exploiting the near-orthogonality properties of offset quadrature amplitude modulation (OQAM) to mitigate the effects of interference for two filter bank waveforms (i.e. OFDM/OQAM and GFDM/OQAM) for the next generation of wireless networks, assuming both NPR and Non-PR conditions in slow and fast frequency selective Rayleigh fading channels. Computer simulations showed that the channel estimation schemes performed better in an NPR filter bank system than in Non-PR filter banks. The lower performance of the Non-PR system is due to the amplitude distortion and aliasing introduced by the random errors used to design its prototype filters. It can be concluded that the RLS, NLMS, LMS, LMMSE and LS channel estimation schemes offered the best normalized mean square error (NMSE) and bit error rate (BER) performances (in decreasing order) for both waveforms under both NPR and Non-PR filter banks.
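As a concrete illustration of the pilot-based and adaptive families compared above, the sketch below estimates a single complex subcarrier gain with least squares (LS) and with the LMS adaptive update. It is a simplified single-tap model under assumed noise and pilot settings, not the OQAM-specific implementation used in the dissertation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unknown complex channel gain on one subcarrier (assumed for illustration)
h = 0.8 * np.exp(1j * 0.6)

# Known QPSK pilot symbols and noisy observations y = h*x + n
x = rng.choice(np.array([1+1j, 1-1j, -1+1j, -1-1j]), size=200)
y = h * x + 0.05 * (rng.standard_normal(200) + 1j * rng.standard_normal(200))

# LS estimate: minimize sum |y - h*x|^2  =>  h_ls = (x^H y) / (x^H x)
h_ls = np.vdot(x, y) / np.vdot(x, x)

# LMS estimate: iterative update h <- h + mu * conj(x) * (y - h*x)
h_lms, mu = 0j, 0.05
for xi, yi in zip(x, y):
    err = yi - h_lms * xi
    h_lms += mu * np.conj(xi) * err
```

Both estimates converge to the true gain here; LMMSE, NLMS and RLS refine the same idea by using channel statistics, step-size normalization, or recursive least-squares updates respectively.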
Keywords: Channel estimation, Filter bank, OFDM/OQAM, GFDM/OQAM, NPR, Non-PR, 5G, Frequency selective channel.
Power Modeling and Resource Optimization in Virtualized Environments
The provisioning of on-demand cloud services has revolutionized the IT industry. This emerging paradigm has drastically increased the growth of data centers (DCs) worldwide, and the rising number of DCs now accounts for a large share of the world's total power consumption. This has directed the attention of researchers and service providers toward power-aware solutions for the deployment and management of these systems and networks. However, such solutions can be beneficial only if they are derived from power consumption that is precisely estimated at run-time. Accurate power estimation is a challenge in virtualized environments due to uncertainty about the actual resources consumed by virtualized entities and about their impact on applications' performance. The heterogeneous cloud, with its multi-tenancy architecture, has also raised several management challenges for both service providers and their clients. Task scheduling and resource allocation in such a system are NP-hard problems. Inappropriate allocation of resources causes under-utilization of servers, reducing throughput and energy efficiency. In this context, the cloud framework needs an effective management solution that maximizes the use of available resources and capacity while reducing power consumption and the resulting carbon footprint. This thesis addresses the issues of power measurement and resource utilization in virtualized environments as its two primary objectives. First, a survey of prior work on server power modeling and methods in virtualization architectures is carried out. This helps identify the key challenges that limit the precision of power estimation for virtualized entities. A systematic approach is then presented to improve prediction accuracy in these environments, considering resource abstraction at different architectural levels.
Monitoring resource usage at both the host and the guest helps identify the difference in performance between the two. Using virtual Performance Monitoring Counters (vPMCs) at the guest level provides detailed information that improves prediction accuracy and can further be used for resource optimization, consolidation and load balancing. The research then targets the critical issue of optimal resource utilization in cloud computing. This study seeks a generic, robust but simple approach to resource allocation in cloud computing and networking. Inappropriate scheduling in the cloud causes under- and over-utilization of resources, which in turn increases power consumption and degrades system performance. This work first addresses some of the major challenges related to task scheduling in heterogeneous systems. After a critical analysis of existing approaches, this thesis presents a deliberately simple scheduling scheme based on a combination of heuristic solutions. The proposed energy-efficient scheduling algorithm achieves improved resource utilization with reduced processing time.
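The counter-based estimation idea above can be sketched as a linear model that maps resource-usage features (e.g. CPU utilization or counters exposed via vPMCs) to measured server power. The feature set, the synthetic training data and the coefficient values below are assumptions for illustration, not the thesis's actual model.

```python
import numpy as np

# Synthetic training data: columns = [cpu_util, mem_util, ipc], sampled at run-time.
# The "ground truth" of 80 W idle plus per-resource contributions is assumed.
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(500, 3))
watts = 80 + 120 * X[:, 0] + 30 * X[:, 1] + 15 * X[:, 2] + rng.normal(0, 2, 500)

# Fit P = w0 + w . features by ordinary least squares
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, watts, rcond=None)

def predict_power(cpu: float, mem: float, ipc: float) -> float:
    """Estimate instantaneous server power (W) from utilization features."""
    return float(coef @ [1.0, cpu, mem, ipc])
```

In a real deployment the target `watts` would come from a power meter and the features from host- and guest-level counters; per-VM attribution then divides the dynamic part of the model among virtual machines.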
Efficient Variations of the Quality Threshold Clustering Algorithm
Clustering gene expression data such that the diameters of the clusters formed are no greater than a specified threshold prompted the development of the Quality Threshold Clustering (QTC) algorithm. It iteratively forms clusters of non-increasing size until all points are clustered; the largest cluster is always selected first. The QTC algorithm also applies in many other domains that require a similar quality guarantee based on cluster diameter. The worst-case complexity of the original QTC algorithm is O(n^5). Since practical applications often involve large datasets, researchers have called for more efficient versions of the QTC algorithm.
This dissertation aimed to develop and evaluate efficient variations of the QTC algorithm that guarantee a maximum cluster diameter while producing partitions that are similar to those produced by the original QTC algorithm. The QTC algorithm is expensive because it considers forming clusters around every item in the dataset. This dissertation addressed this issue by developing methods for selecting a small subset of promising items around which to form clusters. A second factor that adversely affects the efficiency of the QTC algorithm is the computational cost of updating cluster diameters as new items are added to clusters. This dissertation proposed and evaluated alternate methods to meet the cluster diameter constraint while not having to repeatedly update the cluster diameters.
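The diameter-constrained greedy scheme can be written compactly: grow a candidate cluster around every unclustered item, admitting only points that keep the diameter within the threshold, then commit the largest candidate and repeat. The sketch below uses a simplified admission rule (a point joins only if it is within the threshold of every current member, which enforces the diameter bound); the original QTC instead adds the point that minimizes the diameter increase, so this is an illustrative variant, not the exact algorithm or one of the optimized versions developed here.

```python
def qt_cluster(points, threshold, dist):
    """Greedy quality-threshold clustering: each cluster's diameter <= threshold."""
    remaining = list(points)
    clusters = []
    while remaining:
        best_idx = []
        for i in range(len(remaining)):            # try each item as a cluster seed
            cand = [i]
            for j in range(len(remaining)):
                if j == i:
                    continue
                # admit j only if it stays within `threshold` of every member,
                # which bounds the candidate cluster's diameter by `threshold`
                if all(dist(remaining[j], remaining[k]) <= threshold for k in cand):
                    cand.append(j)
            if len(cand) > len(best_idx):
                best_idx = cand
        clusters.append([remaining[k] for k in best_idx])   # commit largest cluster
        keep = set(range(len(remaining))) - set(best_idx)
        remaining = [remaining[k] for k in sorted(keep)]
    return clusters
```

The nested loops over seeds and candidate members are exactly the cost the dissertation's variations attack, by restricting the set of seeds and avoiding repeated diameter updates.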
The variations of the QTC algorithm developed in this dissertation were evaluated on benchmark datasets using two measures: execution time and quality of solutions produced. Execution times were compared to the time taken to execute the most efficient published implementation of the QTC algorithm. Since the partitions produced by the proposed variations are not guaranteed to be identical to those produced by the original algorithm, the Jaccard measure of partition similarity was used to measure the quality of the solutions.
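The Jaccard measure over partitions can be computed on pairs of items: count the pairs co-clustered in both partitions and divide by the pairs co-clustered in at least one. The sketch below assumes partitions are given as lists of item sets; it is a standard formulation, offered for illustration rather than as the dissertation's exact procedure.

```python
from itertools import combinations

def jaccard_partition_similarity(part_a, part_b):
    """Jaccard index over co-clustered pairs for two partitions of the same items."""
    def co_pairs(partition):
        # All unordered pairs of items that share a cluster in this partition
        pairs = set()
        for cluster in partition:
            pairs.update(frozenset(p) for p in combinations(sorted(cluster), 2))
        return pairs
    pa, pb = co_pairs(part_a), co_pairs(part_b)
    union = pa | pb
    return len(pa & pb) / len(union) if union else 1.0
```

Identical partitions score 1.0; the score falls toward 0 as fewer item pairs are co-clustered the same way in both partitions.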
The findings of this research were threefold. First, the Stochastic QTC alone was not computationally helpful, since to produce partitions acceptably similar to those found by the deterministic QTCs, the algorithm had to be seeded with a large number of centers (ntry ≈ n). Second, the preprocessed-data methods are desirable since they reduce the complexity of the search for candidate cluster points. Third, radius-based methods are promising since they produce partitions that are acceptably similar to those found by the deterministic QTCs in significantly less time.
New Secure IoT Architectures, Communication Protocols and User Interaction Technologies for Home Automation, Industrial and Smart Environments
Programa Oficial de Doutoramento en Tecnoloxías da Información e das Comunicacións en Redes Móbiles (5029V01). Thesis by compendium of publications.
[Abstract]
The Internet of Things (IoT) presents a communication network where heterogeneous physical devices such as vehicles, homes, urban infrastructures or industrial machinery are interconnected and share data. For these communications to be successful, it is necessary to integrate embedded electronic devices that can obtain environmental information (sensors), perform physical actuations (actuators) and send and receive data (network interfaces).
This integration of embedded systems poses several challenges. These devices need to have very low power consumption: in many cases IoT nodes are powered by batteries or constrained power supplies. Moreover, the great number of devices needed in an IoT network makes power efficiency one of the major concerns of these deployments, due to the cost and environmental impact of the energy consumption. This need for low energy consumption leads to resource-constrained devices, conflicting with the second major concern of IoT: security and data privacy. Critical urban and industrial systems, such as traffic management, water supply, maritime control, railway control and high-risk manufacturing facilities such as oil refineries, stand to obtain great benefits from IoT deployments, but non-authorized access to them can pose severe risks to public safety. At the same time, both these public systems and the ones deployed in private environments (homes, workplaces, malls) present a risk to the privacy and security of their users. These IoT deployments need advanced security mechanisms, both to prevent unauthorized access to the devices and to protect the data they exchange.
As a consequence, two main aspects need to be improved: the energy efficiency of IoT devices, and the use of lightweight security mechanisms that can be implemented by these resource-constrained devices while still guaranteeing a fair degree of security.
The huge amount of data transmitted by this type of network also presents another challenge. Big data systems capable of processing large amounts of data exist, but with IoT the granularity and dispersion of the generated information present a scenario very different from today's. Forecasts anticipate growth from 15 billion installed devices in 2015 to more than 75 billion devices in 2025. Moreover, many more services will exploit the data produced by these networks, so the resulting traffic will be even higher. The information must not only be processed in real time; data mining processes will also have to be performed on historical data.
The main goal of this Ph.D. thesis is to analyze each of the previously described challenges and to provide solutions that allow for an adequate adoption of IoT in industrial, domestic and, in general, any scenario that can benefit from the interconnection and flexibility that IoT brings.
Current Practices for Preventive Maintenance and Expectations for Predictive Maintenance in East-Canadian Mines
ABSTRACT: Preventive maintenance practices have been proven to reduce maintenance costs in many industries. In the mining industry, preventive maintenance is the main form of maintenance, especially for mobile equipment. With the increase of sensor data and the installation of wireless infrastructure within underground mines, predictive maintenance practices are beginning to be applied to the mining equipment maintenance process. However, for the transition from preventive to predictive maintenance to succeed, researchers must first understand the maintenance process implemented in mines. In this study, we conducted interviews with 15 maintenance experts from 7 mining sites (6 gold, 1 diamond) across Eastern Canada to investigate the maintenance planning process currently implemented in Canadian mines. We documented the experts' feedback on the process, their expectations regarding the introduction of predictive maintenance in mining, and the usability of existing computerized maintenance management software (CMMS). From our results, we compiled a summary of actual maintenance practices and showed how they differ from theoretical practices. Finally, we list the Key Performance Indicators (KPIs) relevant for maintenance planning and the user requirements to improve the usability of CMMS.