Towards A Well-Secured Electronic Health Record in the Health Cloud
Data security and privacy remain the major concerns for most cloud implementers, particularly in the health care industry. A prominent threat that deters practitioners in the health industry from exploiting and benefiting from the gains of cloud computing is the fear of theft of patients' health data in the cloud. Investigations and surveys have revealed that most practitioners in the health care industry are concerned about the risk of health data mix-ups among the various cloud providers, hacking to compromise the cloud platform, and theft of vital patients' health data. This technical report presents an overview of the diverse issues relating to health data privacy and overall security in the cloud. Based on identified secure-access requirements, an encryption-based security model for securing and enforcing authorised access to electronic health records (eHR) is also presented. The model highlights three core functionalities for managing issues relating to health data privacy and eHR security in the health care cloud.
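The report's model combines encryption of records with enforcement of authorised access before any decryption takes place. As a minimal illustration of that combination (not the report's actual design), the sketch below pairs an access-control check with authenticated encryption; the `ACL` table, user and patient names, and the SHA-256-based keystream are all hypothetical stand-ins, the last of these standing in for a real cipher such as AES-GCM:

```python
import hashlib
import hmac
import os


def _keystream(key, nonce, n):
    # Toy keystream derived from SHA-256; a stand-in for a real cipher.
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]


def encrypt_record(key, record):
    # Encrypt-then-MAC: XOR with the keystream, then tag nonce + ciphertext.
    nonce = os.urandom(16)
    ct = bytes(a ^ b for a, b in zip(record, _keystream(key, nonce, len(record))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag


def decrypt_record(key, blob):
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    if not hmac.compare_digest(tag, hmac.new(key, nonce + ct, hashlib.sha256).digest()):
        raise ValueError("record tampered with or wrong key")
    return bytes(a ^ b for a, b in zip(ct, _keystream(key, nonce, len(ct))))


# Hypothetical authorised-access list: user -> set of patient records.
ACL = {"dr_ada": {"patient42"}}


def read_record(user, patient, key, store):
    # Enforce authorised access before any decryption is attempted.
    if patient not in ACL.get(user, set()):
        raise PermissionError("access denied")
    return decrypt_record(key, store[patient])
```

The point of the ordering is that an unauthorised caller never reaches the ciphertext at all; even a caller who bypasses the check still needs the key.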
Integrating Scale Out and Fault Tolerance in Stream Processing using Operator State Management
As users of big data applications expect fresh results, we witness a new breed of stream processing systems (SPS) that are designed to scale to large numbers of cloud-hosted machines. Such systems face new challenges: (i) to benefit from the pay-as-you-go model of cloud computing, they must scale out on demand, acquiring additional virtual machines (VMs) and parallelising operators when the workload increases; (ii) failures are common with deployments on hundreds of VMs, so systems must be fault-tolerant with fast recovery times, yet low per-machine overheads. An open question is how to achieve these two goals when stream queries include stateful operators, which must be scaled out and recovered without affecting query results. Our key idea is to expose internal operator state explicitly to the SPS through a set of state management primitives. Based on them, we describe an integrated approach for dynamic scale out and recovery of stateful operators. Externalised operator state is checkpointed periodically by the SPS and backed up to upstream VMs. The SPS identifies individual operator bottlenecks and automatically scales them out by allocating new VMs and partitioning the checkpointed state. At any point, failed operators are recovered by restoring checkpointed state on a new VM and replaying unprocessed tuples. We evaluate this approach with the Linear Road Benchmark on the Amazon EC2 cloud platform and show that it can scale automatically to a load factor of L=350 with 50 VMs, while recovering quickly from failures. Copyright © 2013 ACM
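The recovery path described in the abstract has two ingredients: a periodic checkpoint of externalised operator state plus the input offset, and replay of tuples processed after that checkpoint. The sketch below illustrates only that mechanism on a toy counting operator; the class and method names are hypothetical, not the paper's API:

```python
import copy


class StatefulOperator:
    """Toy counting operator whose state is externalised to the runtime."""

    def __init__(self):
        self.state = {}      # externalised operator state: key -> count
        self.processed = 0   # offset of the last processed input tuple

    def process(self, key):
        self.state[key] = self.state.get(key, 0) + 1
        self.processed += 1

    def checkpoint(self):
        # Periodic checkpoint: snapshot of the state plus the input offset.
        return copy.deepcopy(self.state), self.processed

    @classmethod
    def restore(cls, snapshot, stream):
        # Recovery: install the checkpointed state on a fresh operator,
        # then replay the tuples received after the checkpoint.
        op = cls()
        op.state = copy.deepcopy(snapshot[0])
        op.processed = snapshot[1]
        for key in stream[op.processed:]:
            op.process(key)
        return op
```

In the paper's setting the snapshot would be backed up to an upstream VM and the unreplayed tuples buffered there; partitioning the same checkpointed state by key is what enables scale out of a bottleneck operator.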
The LHC Logging Service : Handling terabytes of on-line data
Based on previous experience with LEP, a long-term data logging service for the LHC was developed and put in place in 2003, several years before beam operation. The scope of the logging service covers the evolution over time of data acquisitions on accelerator equipment and beam-related parameters. The intention is to keep all this time-series data on-line for the lifetime of the LHC, allowing easy data comparisons with previous years. The LHC hardware commissioning used this service extensively prior to the first beams in 2008, and even more so in 2009. Current data writing rates exceed 15 TB/year and continue to increase. The high data volumes and throughput rates, for writing as well as for reading, require special arrangements for data organization and data access methods.
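The service's core access pattern is append-only time-series writes with reads over a time window (for example, comparing a parameter against previous years). A minimal in-memory sketch of that pattern, with hypothetical names and none of the storage arrangements the real service needs at 15 TB/year:

```python
import bisect


class TimeSeriesLog:
    """Append-only time-series store with binary-searched window reads."""

    def __init__(self):
        self.times = []   # kept sorted because appends arrive in time order
        self.values = []

    def append(self, t, v):
        # Acquisitions arrive in time order; out-of-order writes are rejected.
        if self.times and t < self.times[-1]:
            raise ValueError("out-of-order timestamp")
        self.times.append(t)
        self.values.append(v)

    def window(self, t0, t1):
        # Inclusive time-range query via binary search on the timestamps.
        lo = bisect.bisect_left(self.times, t0)
        hi = bisect.bisect_right(self.times, t1)
        return list(zip(self.times[lo:hi], self.values[lo:hi]))
```

Keeping timestamps sorted makes every range read O(log n + k), which is the property any on-line logging backend must preserve at scale, whatever its physical data organization.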
Parallel Deferred Update Replication
Deferred update replication (DUR) is an established approach to implementing highly efficient and available storage. While the throughput of read-only transactions scales linearly with the number of deployed replicas in DUR, the throughput of update transactions experiences limited improvements as replicas are added. This paper presents Parallel Deferred Update Replication (P-DUR), a variation of classical DUR that scales both read-only and update transactions with the number of cores available in a replica. In addition to introducing the new approach, we describe its full implementation and compare its performance to classical DUR and to Berkeley DB, a well-known standalone database
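In deferred update replication, an update transaction executes optimistically at one replica and is certified at commit time: it aborts if any item in its read set was overwritten by a transaction that committed after it started. This certification step is the serial bottleneck that limits update throughput in classical DUR. A minimal sketch of the certification check (hypothetical names; not P-DUR's parallelised design):

```python
class Certifier:
    """Commit-time certification for deferred update replication."""

    def __init__(self):
        self.versions = {}  # item -> logical time of its last committed write
        self.clock = 0      # logical commit clock

    def begin(self):
        # A transaction records the logical time at which it starts.
        return self.clock

    def certify(self, start, read_set, write_set):
        # Abort if any item read was overwritten after the transaction began.
        for item in read_set:
            if self.versions.get(item, 0) > start:
                return False
        # Commit: advance the clock and install the write set's versions.
        self.clock += 1
        for item in write_set:
            self.versions[item] = self.clock
        return True
```

Read-only transactions bypass this path entirely, executing against a consistent snapshot at one replica, which is why they scale with replicas while updates do not; P-DUR's contribution is to parallelise the update path across cores within a replica.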
- …