568 research outputs found
Feasibility of Post-Operative Mobile Health Monitoring Among Colorectal Surgery Patients
Post-operative readmission following colorectal surgery is a common and costly occurrence. Remote health monitoring via mobile applications has the potential to reduce post-operative readmissions through early identification of complications. This intervention depends on patient acceptance of and compliance with the available technology. The feasibility of home monitoring using automated daily surveys and wound photo uploads, delivered via a mobile health application, was tested in the immediate post-operative period after colorectal surgery. Patient compliance, the association between generated alerts and readmissions, and patient satisfaction were measured. Patient satisfaction was high; 80.5% of patients reported that they felt safer going home knowing that they were monitored, and 76.2% of patients reported that they would use the current app for post-operative monitoring again. However, only 37.0% of patients answered the survey at least 80% of the time in the first 2 weeks following discharge. Patient compliance significantly limited the feasibility of post-operative monitoring using our mobile health application.
A job response time prediction method for production Grid computing environments
A major obstacle to the widespread adoption of Grid computing in both the scientific
community and the industry sector is the difficulty of knowing, in advance of submission, the
running cost of a job, information needed to plan a correct allocation of resources.
Traditional distributed computing solutions take advantage of homogeneous and open
environments to propose prediction methods based on a detailed analysis of the hardware and
software components. However, production Grid computing environments, which are large and
use a complex and dynamic set of resources, present a different challenge. In Grid computing,
the source code of applications, program libraries, and third-party software are not always
available. In addition, Grid security policies may not permit running hardware or software analysis
tools to generate models of Grid components.
The objective of this research is the prediction of job response times in production Grid
computing environments. The solution is inspired by the idea of predicting future Grid
behaviour from previous experience learned from heterogeneous Grid workload trace
data. The research objective was selected with the aim of improving Grid resource usability
and the administration of Grid environments. The predicted data can be used to allocate
resources in advance and to report forecast finishing times and running costs before submission.
The proposed Grid Computing Response Time Prediction (GRTP) method implements
several internal stages in which workload traces are mined to produce a response time
prediction for a given job. In addition, the GRTP method assesses the predicted result against
the target job's actual response time to infer information that is used to tune the method's
parameters.
The GRTP method was implemented and tested using a cross-validation technique to assess
how the proposed solution generalises to independent data sets. The training set was taken from
the DAS (Distributed ASCI Supercomputer) Grid environment. The two testing sets were taken
from the AuverGrid and Grid5000 Grid environments.
Three consecutive tests were carried out, assuming stable jobs, unstable jobs, and using a
job-type method to select the most appropriate prediction function. The tests showed a significant
increase in prediction performance over data-mining-based methods applied in Grid computing
environments. For instance, in Grid5000 the GRTP method answered 77 percent of job
prediction requests with an error of less than 10 percent, while in the same environment the most effective and accurate existing method using workload traces was able to predict only 32 percent of
the cases within the same range of error.
The GRTP method was able to handle unexpected changes in resources and services that
affect job response time trends, and it adapted to new scenarios. The tests showed
that the proposed GRTP method is capable of answering job response time prediction requests
and that it improves prediction quality compared with other current solutions.
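The headline metric, the share of prediction requests answered within a given relative error, can be illustrated with a toy sketch. This is not the GRTP method itself: the mean-of-history baseline and all names below are assumptions for illustration.

```python
def within_error(predicted, actual, tolerance=0.10):
    """True if the relative prediction error is below the tolerance."""
    return abs(predicted - actual) / actual < tolerance

def mean_history_predictor(history):
    """Naive baseline: predict the mean response time of similar past jobs."""
    return sum(history) / len(history)

def prediction_quality(jobs, tolerance=0.10):
    """Fraction of jobs whose predicted response time falls within tolerance.

    `jobs` is a list of (history, actual_response_time) pairs, where
    `history` holds the response times of previously observed similar jobs.
    """
    hits = sum(
        within_error(mean_history_predictor(hist), actual, tolerance)
        for hist, actual in jobs
    )
    return hits / len(jobs)

# Toy workload: three jobs with their past traces and actual outcomes.
jobs = [
    ([100, 110, 105], 104),  # predicted 105: within 10% of 104
    ([200, 220], 400),       # predicted 210: workload shifted, misses
    ([50, 52, 48], 51),      # predicted 50: within 10%
]
quality = prediction_quality(jobs)
print(quality)  # 2 of 3 requests answered within 10% error
```

The second job shows why trace-based prediction degrades under unstable workloads, which is what motivates selecting a prediction function per job type.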
Service level agreement specification for IoT application workflow activity deployment, configuration and monitoring
PhD Thesis. Currently, we see the use of the Internet of Things (IoT) within various domains
such as healthcare, smart homes, smart cars, smart-x applications, and smart
cities. The number of applications based on IoT and cloud computing is projected
to increase rapidly over the next few years. IoT-based services must meet
guaranteed levels of quality of service (QoS) to match users' expectations.
Ensuring QoS by specifying the QoS constraints in service level agreements
(SLAs) is crucial. Also, because of the potentially highly complex nature
of multi-layered IoT applications, lifecycle management (deployment, dynamic
reconfiguration, and monitoring) needs to be automated. To achieve this, it is
essential to be able to specify SLAs in a machine-readable format.
Currently available SLA specification languages are unable to accommodate
the unique characteristics (interdependency of its multi-layers) of the IoT domain.
Therefore, in this research, we propose a grammar for a syntactical structure
of an SLA specification for IoT. The grammar is based on a proposed conceptual
model that considers the main concepts that can be used to express the requirements
for most common hardware and software components of an IoT application
on an end-to-end basis. We follow the Goal Question Metric (GQM) approach to
evaluate the generality and expressiveness of the proposed grammar by reviewing
its concepts and their predefined lists of vocabularies against two use-cases
with a number of participants whose research interests are mainly related to IoT.
The results of the analysis show that the proposed grammar achieved 91.70% of
its generality goal and 93.43% of its expressiveness goal.
To enhance the process of specifying SLA terms, we then developed a toolkit
for creating SLA specifications for IoT applications. The toolkit is used to simplify
the process of capturing the requirements of IoT applications. We demonstrate
the effectiveness of the toolkit using a remote health monitoring service (RHMS)
use-case, and we evaluate its user experience with a questionnaire-oriented
approach. We discuss the applicability of our tool by including it as a core
component of two different applications: 1) a context-aware
recommender system for IoT configuration across layers; and 2) a tool for
automatically translating an SLA from JSON to a smart contract, deploying it
on different peer nodes that represent the contractual parties. The smart contract
is able to monitor the created SLA using Blockchain technology. These two
applications are utilized within our proposed SLA management framework for IoT.
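As a rough illustration of what a machine-readable SLA for a multi-layered IoT application might look like, the sketch below shows a hypothetical JSON fragment and a minimal structural check. The field names are invented for illustration and do not reproduce the thesis grammar.

```python
import json

# Hypothetical machine-readable SLA fragment for a multi-layered IoT
# application; all field names are illustrative assumptions.
sla_json = """
{
  "service": "remote-health-monitoring",
  "layers": {
    "device": {"sampling_rate_hz": 1, "battery_min_percent": 20},
    "edge":   {"max_latency_ms": 50},
    "cloud":  {"availability_percent": 99.9}
  },
  "dependencies": [
    {"from": "edge", "to": "cloud", "constraint": "max_latency_ms"}
  ]
}
"""

def validate_sla(text):
    """Minimal structural check: every dependency must reference declared
    layers, mirroring the cross-layer interdependencies that an IoT SLA
    grammar needs to express."""
    sla = json.loads(text)
    layers = set(sla["layers"])
    for dep in sla.get("dependencies", []):
        if dep["from"] not in layers or dep["to"] not in layers:
            return False
    return True

print(validate_sla(sla_json))  # True
```

A machine-readable structure like this is what makes automated deployment, reconfiguration, and monitoring (and translation into a smart contract) possible in the first place.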
Furthermore, we propose a greedy heuristic algorithm to decentralize workflow
activities of an IoT application across Edge and Cloud resources to enhance
response time, cost, energy consumption, and network usage. We evaluated the
efficiency of our proposed approach using the iFogSim simulator. The performance
analysis shows that the proposed algorithm minimized cost, execution time, network
usage, and Cloud energy consumption compared with Cloud-only and edge-ward
placement approaches.
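A greedy Edge/Cloud placement heuristic of the general kind described above can be sketched as follows. The weighted score, node figures, and activity names are illustrative assumptions, not the algorithm evaluated in iFogSim.

```python
def greedy_placement(activities, nodes):
    """Greedy heuristic sketch: assign each workflow activity (largest
    demand first) to the node with the lowest weighted score of estimated
    latency, cost, and energy, provided the node has capacity left."""
    result = {}
    for act, demand in sorted(activities.items(),
                              key=lambda kv: kv[1], reverse=True):
        best = None
        for name, node in nodes.items():
            if node["capacity"] < demand:
                continue  # node cannot host this activity
            score = (0.5 * node["latency_ms"]
                     + 0.3 * node["cost_per_unit"] * demand
                     + 0.2 * node["energy_per_unit"] * demand)
            if best is None or score < best[1]:
                best = (name, score)
        if best is None:
            raise RuntimeError(f"no node can host {act}")
        result[act] = best[0]
        nodes[best[0]]["capacity"] -= demand
    return result

# Illustrative nodes: a small, fast edge node and a large, distant cloud.
nodes = {
    "edge":  {"capacity": 4,   "latency_ms": 5,  "cost_per_unit": 3, "energy_per_unit": 2},
    "cloud": {"capacity": 100, "latency_ms": 80, "cost_per_unit": 1, "energy_per_unit": 4},
}
activities = {"filter": 2, "aggregate": 3, "train_model": 40}
placement = greedy_placement(activities, nodes)
print(placement)  # heavy work lands on the cloud, light work on the edge
```

The capacity constraint is what forces the trade-off: small latency-sensitive activities fit on the edge, while large ones overflow to the cloud, which is the behaviour the edge-ward comparison probes.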
AI for IT Operations (AIOps) on Cloud Platforms: Reviews, Opportunities and Challenges
Artificial Intelligence for IT operations (AIOps) aims to combine the power
of AI with the big data generated by IT Operations processes, particularly in
cloud infrastructures, to provide actionable insights with the primary goal of
maximizing availability. There are a wide variety of problems to address, and
multiple use-cases, where AI capabilities can be leveraged to enhance
operational efficiency. Here we provide a review of the AIOps vision, trends,
challenges, and opportunities, focusing specifically on the underlying AI
techniques. We discuss in depth the key types of data emitted by IT Operations
activities, the scale of and challenges in analyzing them, and where they can be
helpful. We categorize the key AIOps tasks as incident detection, failure
prediction, root cause analysis, and automated actions. We discuss the problem
formulation for each task and then present a taxonomy of techniques to solve
these problems. We also identify relatively underexplored topics, especially
those that could significantly benefit from advances in the AI literature, and
we provide insights into the trends in this field and the key investment
opportunities.
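Of the tasks categorized above, incident detection lends itself to a minimal sketch: a trailing-window z-score rule over an operational metric stream. The window size, threshold, and data are assumptions, and this simple baseline stands in for the far richer techniques such a survey covers.

```python
from statistics import mean, stdev

def detect_incidents(series, window=5, threshold=3.0):
    """Flag indices where a metric deviates more than `threshold`
    standard deviations from its trailing window: a classic baseline
    for incident detection on operational time series."""
    incidents = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        mu, sigma = mean(hist), stdev(hist)
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            incidents.append(i)
    return incidents

# Latency samples (ms) with an injected spike at index 7.
latency = [20, 21, 19, 20, 22, 21, 20, 180, 21, 20]
print(detect_incidents(latency))  # [7]
```

Note the trailing window self-heals: once the spike enters the window, the inflated standard deviation suppresses alerts on the samples that follow it, one of the known weaknesses that motivates learned detectors.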
Remote Sensing of Biophysical Parameters
Vegetation plays an essential role in the study of the environment through plant respiration and photosynthesis. Therefore, the assessment of the current vegetation status is critical to modeling terrestrial ecosystems and energy cycles. Canopy structure (LAI, fCover, plant height, biomass, leaf angle distribution) and biochemical parameters (leaf pigmentation and water content) have been employed to assess vegetation status and its dynamics at scales ranging from kilometric to decametric spatial resolutions thanks to methods based on remote sensing (RS) data. Optical RS retrieval methods are based on the radiative transfer processes of sunlight in vegetation, determining the amount of radiation that is measured by passive sensors in the visible and infrared channels. The increased availability of active RS (radar and LiDAR) data has fostered their use in many applications for the analysis of land surface properties and processes, thanks to their insensitivity to weather conditions and the ability to exploit rich structural and texture information. Optical and radar data fusion and multi-sensor integration approaches are pressing topics, which could fully exploit the information conveyed by both the optical and microwave parts of the electromagnetic spectrum. This Special Issue reprint reviews the state of the art in biophysical parameters retrieval and its usage in a wide variety of applications (e.g., ecology, carbon cycle, agriculture, forestry and food security).
Framework for Security Transparency in Cloud Computing
The migration of sensitive data and applications from the on-premise data centre to a cloud environment increases cyber risks to users, mainly because the cloud environment is managed and maintained by a third party. In particular, the partial surrender of sensitive data and applications to a cloud environment creates numerous concerns that are related to a lack of security transparency. Security transparency involves the disclosure of information by cloud service providers about the security measures being put in place to protect assets and meet the expectations of customers. It establishes trust in the service relationship between cloud service providers and customers, and without evidence of continuous transparency, trust and confidence are affected and are likely to hinder extensive usage of cloud services. Also, insufficient security transparency is considered an added level of risk and increases the difficulty of demonstrating conformance to customer requirements and of ensuring that cloud service providers adequately implement security obligations.
The research community has acknowledged the pressing need to address security transparency concerns, and although technical aspects of ensuring security and privacy have been researched widely, the focus on security transparency is still scarce. The relatively few existing studies mostly approach the issue of security transparency from the cloud providers' perspective, while other works have contributed feasible techniques for comparison and selection of cloud service providers using metrics such as transparency and trustworthiness. However, there is still a shortage of research that focuses on improving security transparency from the cloud users' point of view. In particular, there is still a gap in the literature that (i) dissects security transparency from the lens of conceptual knowledge up to implementation from organizational and technical perspectives, and (ii) supports continuous transparency by enabling the vetting and probing of cloud service providers' conformity to specific customer requirements. The significant growth in moving business to the cloud, due to its scalability and perceived effectiveness, underlines the dire need for research in this area.
This thesis presents a framework that comprises the core conceptual elements that constitute security transparency in cloud computing. It contributes to the knowledge domain of security transparency in cloud computing by proposing the following. Firstly, the research analyses the basics of cloud security transparency by exploring the notion and foundational concepts that constitute security transparency. Secondly, it proposes a framework that integrates various concepts from the requirements engineering domain, together with an accompanying process that can be followed to implement the framework. The framework and its process provide an essential set of conceptual ideas, activities and steps that can be followed at an organizational level to attain security transparency, based on the principles of industry standards and best practices. Thirdly, to ensure continuous transparency, the thesis proposes an essential tool that supports the collection and assessment of evidence from cloud providers, including the establishment of remedial actions for redressing deficiencies in cloud provider practices. The tool serves as a supplementary component of the proposed framework that enables continuous inspection of how predefined customer requirements are being satisfied.
The thesis also validates the proposed security transparency framework and tool in terms of validity, applicability, adaptability, and acceptability using two different case studies. Feedback is collected from stakeholders and analysed using essential criteria such as ease of use, relevance, and usability. The result of the analysis illustrates the validity and acceptability of both the framework and the tool in enhancing security transparency in a real-world environment.
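The continuous-vetting idea behind the tool can be sketched as a simple comparison of customer requirements against provider-reported evidence; the control names and threshold semantics below are hypothetical, not the thesis's actual requirement model.

```python
def assess_conformance(requirements, evidence):
    """Compare customer security requirements against provider-reported
    evidence and return the controls needing remedial action. Each value
    is a minimum acceptable level; missing evidence counts as a deficiency."""
    deficiencies = []
    for control, required in requirements.items():
        reported = evidence.get(control)
        if reported is None or reported < required:
            deficiencies.append(control)
    return deficiencies

# Hypothetical controls with minimum acceptable levels.
requirements = {
    "encryption_at_rest": 1,          # 1 = control must be in place
    "audit_log_retention_days": 90,
    "backups_per_day": 1,
}
evidence = {"encryption_at_rest": 1, "audit_log_retention_days": 30}
gaps = assess_conformance(requirements, evidence)
print(gaps)  # retention too short, backup evidence absent
```

Run periodically against freshly collected evidence, a check of this shape is what turns one-off disclosure into the continuous transparency the framework aims for.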
Adaptive learning-based resource management strategy in fog-to-cloud
Technology in the twenty-first century is developing rapidly, driving us into a new smart computing world and
giving rise to many new computing architectures. Fog-to-Cloud (F2C) is one of them; it emerges to
bring greater computing capacity close to the edge of the network and to help large-scale computing systems
become more intelligent. As F2C is still in its infancy, one of the biggest challenges for this computing paradigm is to
manage computing resources efficiently. To address this challenge, in this work we focus on
designing an initial architectural framework for a proper, adaptive and efficient resource management mechanism in F2C.
F2C has been proposed as a combined, coordinated and hierarchical computing platform in which a vast number of
heterogeneous computing devices participate. Their versatility creates a massive challenge for handling
them effectively. In any large-scale smart computing system, various kinds of services are provided for
different purposes, and every service corresponds to various tasks with different resource requirements.
Knowing the characteristics of the participating devices and of the services offered by the system is therefore an
advantage when building an effective resource management mechanism in an F2C-enabled system. Considering these facts, we first
focus on identifying and defining a taxonomic model for all participating devices and for the services and tasks involved in the system.
Any F2C-enabled system consists of a large number of small Internet-of-Things (IoT) devices that generate a continuous and
colossal amount of sensing data by capturing various environmental events. This sensing data is one of the key
ingredients of the various smart services offered by an F2C-enabled system. Besides that, resource statistical
information also plays a crucial role in efficiently providing services to the system's consumers. Continuous
monitoring of the participating devices generates a massive amount of resource statistical information in the F2C-enabled system,
and with this information it becomes much easier to determine a device's availability and suitability for executing the tasks
behind a given service. Therefore, to ensure better facilities for any latency-sensitive service, it is essential to
distribute the sensing data and resource statistical information securely over the network. Considering these matters, we also propose
and design a secure and distributed database framework for distributing the data over the network effectively and securely.
To build an advanced and smarter system, an effective mechanism for utilizing system resources is necessary.
Typically, the utilization and resource handling process depends mainly on the resource selection and allocation mechanism.
Predicting resource usage (e.g., RAM, CPU, disk) and performance (in terms of task execution time) helps the
selection and allocation process. Adopting machine learning (ML) techniques is thus very useful for designing an
advanced and sophisticated resource allocation mechanism in the F2C-enabled system. However, adopting and performing ML
techniques in an F2C-enabled system is a challenging task: the overall diversity of the system, among many other issues, poses a
massive obstacle. We have therefore proposed and
designed two different possible architectural schemas for performing ML techniques in the F2C-enabled system, to achieve
an adaptive, advanced and sophisticated resource management mechanism. Our proposals are the
initial footmarks towards an overall architectural framework for resource management in F2C-enabled systems.
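The kind of execution-time prediction described above can be illustrated with a minimal least-squares fit of task execution time against CPU load; the data and the single-feature model are assumptions for illustration, not the architectural schemas proposed in the thesis.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b, in plain Python."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    return a, my - a * mx

# Synthetic observations: (cpu_load, task_execution_seconds) pairs.
loads = [0.1, 0.3, 0.5, 0.7, 0.9]
times = [1.2, 1.6, 2.0, 2.4, 2.8]
a, b = fit_line(loads, times)
print(round(a * 0.6 + b, 2))  # predicted execution time at 60% load: 2.2
```

A resource allocator can feed such per-device predictions into its selection step, preferring the device with the lowest predicted completion time for a latency-sensitive task.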