Building on Quicksand
Reliable systems have always been built out of unreliable components. Early
on, the reliable components were small, such as mirrored disks or ECC (Error
Correcting Codes) in core memory. These systems were designed such that
failures of these small components were transparent to the application. Later,
the size of the unreliable components grew larger and semantic challenges crept
into the application when failures occurred.
As the granularity of the unreliable component grows, the latency to
communicate with a backup becomes unpalatable. This leads to a more relaxed
model for fault tolerance. The primary system will acknowledge the work request
and its actions without waiting to ensure that the backup is notified of the
work. This improves the responsiveness of the system.
There are two implications of asynchronous state capture: 1) Everything
promised by the primary is probabilistic. There is always a chance that an
untimely failure shortly after the promise results in a backup proceeding
without knowledge of the commitment. Hence, nothing is guaranteed! 2)
Applications must ensure eventual consistency. Since work may be stuck in the
primary after a failure and reappear later, the processing order for work
cannot be guaranteed.
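The window described above can be made concrete with a toy sketch of asynchronous primary/backup replication. This is not code from the paper; the class and queue names are hypothetical, and a real system would replicate over a network rather than an in-process queue.

```python
import queue

class AsyncPrimary:
    """Toy primary that acknowledges work before its backup has seen it."""
    def __init__(self):
        self.log = []                       # state applied on the primary
        self.replication_q = queue.Queue()  # work not yet shipped to the backup

    def write(self, item):
        self.log.append(item)               # apply locally
        self.replication_q.put(item)        # replicate asynchronously, later
        return "ack"                        # acknowledged *before* the backup knows

class Backup:
    def __init__(self):
        self.log = []

    def drain(self, q):
        # In a real system this runs continuously; here we pull on demand.
        while not q.empty():
            self.log.append(q.get())

primary, backup = AsyncPrimary(), Backup()
primary.write("order-1")  # the client receives an ack immediately
# If the primary fails at this point, "order-1" exists nowhere but the
# primary's log: the acknowledgement was only a probabilistic promise.
backup.drain(primary.replication_q)
```

The gap between `write` returning and `drain` running is exactly the window in which a promise can be lost, which is why the application, not the platform, must handle eventual consistency.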
Platform designers are struggling to make this easier for their applications.
Emerging patterns of eventual consistency and probabilistic execution may soon
yield a way for applications to express requirements for a "looser" form of
consistency while providing availability in the face of ever larger failures.
This paper recounts portions of the evolution of these trends, attempts to
show the patterns that span these changes, and talks about future directions as
we continue to "build on quicksand".
Comment: CIDR 2009
Cloudy Forecast: How Predictable is Communication Latency in the Cloud?
Many systems and services rely on timing assumptions for performance and
availability to perform critical aspects of their operation, such as various
timeouts for failure detectors or optimizations to concurrency control
mechanisms. Many such assumptions rely on the ability of different components
to communicate on time -- a delay in communication may trigger the failure
detector or cause the system to enter a less-optimized execution mode.
Unfortunately, these timing assumptions are often set with little regard to
actual communication guarantees of the underlying infrastructure -- in
particular, the variability of communication delays between processes in
different nodes/servers. The higher communication variability holds especially
true for systems deployed in the public cloud since the cloud is a utility
shared by many users and organizations, making it prone to higher performance
variance due to noisy neighbor syndrome. In this work, we present Cloud Latency
Tester (CLT), a simple tool that can help measure the variability of
communication delays between nodes to help engineers set proper values for
their timing assumptions. We also provide our observational analysis of running
CLT in three major cloud providers and share the lessons we learned.
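The kind of measurement such a tool performs can be sketched in a few lines: repeatedly time a round trip between two nodes and summarize the variability with percentiles. This is an illustrative stand-in, not CLT's actual implementation; TCP connect time is used here only because it needs no cooperating server-side code.

```python
import socket
import statistics
import time

def measure_rtts(host, port, samples=20):
    """Time repeated TCP connects to a peer and summarize their variability.

    Returns p50 and p99 latency (ms) plus the standard deviation; it is the
    tail (p99) and spread, not the median, that should drive timeout settings.
    """
    rtts = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            pass  # handshake completed; close immediately
        rtts.append((time.perf_counter() - start) * 1000)  # milliseconds
    rtts.sort()
    return {
        "p50": rtts[len(rtts) // 2],
        "p99": rtts[min(len(rtts) - 1, int(len(rtts) * 0.99))],
        "stdev": statistics.stdev(rtts),
    }

# Usage: measure_rtts("peer.internal", 9000) between two cloud VMs; a p99 far
# above p50 suggests failure-detector timeouts need generous headroom.
```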
The Architecture of an Autonomic, Resource-Aware, Workstation-Based Distributed Database System
Distributed software systems that are designed to run over workstation
machines within organisations are termed workstation-based. Workstation-based
systems are characterised by dynamically changing sets of machines that are
used primarily for other, user-centric tasks. They must be able to adapt to and
utilize spare capacity when and where it is available, and ensure that the
non-availability of an individual machine does not affect the availability of
the system. This thesis focuses on the requirements and design of a
workstation-based database system, which is motivated by an analysis of
existing database architectures that are typically run over static, specially
provisioned sets of machines. A typical clustered database system -- one that
is run over a number of specially provisioned machines -- executes queries
interactively, returning a synchronous response to applications, with its data
made durable and resilient to the failure of machines. There are no existing
workstation-based databases. Furthermore, other workstation-based systems do
not attempt to achieve the requirements of interactivity and durability,
because they are typically used to execute asynchronous batch processing jobs
that tolerate data loss -- results can be re-computed. These systems use
external servers to store the final results of computations rather than
workstation machines. This thesis describes the design and implementation of a
workstation-based database system and investigates its viability by evaluating
its performance against existing clustered database systems and testing its
availability during machine failures.
Comment: Ph.D. Thesis
Canada's (Post) "New Age" Spiritual Centers and the Impact of the Internet in the Context of Digital Religion
As a phenomenon that has had overwhelming social, cultural and political influence, the internet has become so embedded in our lives that it is difficult to imagine how we communicated or accessed information before its invention. It is not surprising, then, that the web is also a very active religious environment with religious and spiritual groups using it extensively to proclaim their beliefs and to be in contact with their followers. In a macro sense, web-based religion is any online activity, from the simple dissemination of information about a religious group or church to full web-based religious practice. It can be understood as occurring along a spectrum from religion online at one end to online religion at the other. First developed by Christopher Helland and further refined by Lorne Dawson, religion online means the use of the internet as a means of providing essential information about, or by, religious groups, movements, and traditions. At the other end of the spectrum, online religion sees the internet as a space that permits the practice of religion or ritual, or worship. In other words, rather than use their web browsers to simply search for information, religious followers use the web as an integral part of their religious lives (Helland, 2000; Dawson, 2005).
However, a new term has entered the academic vocabulary and is being applied to online/offline religious praxis and that is Digital Religion. This latest definition brings a broader meaning to online/offline religion because it accepts the reality that current religious practice co-exists in an online and an offline world simultaneously and the rapid growth of digital technology has included religious or spiritual movements.
This dissertation focuses on three New Age spiritual groups in Canada (English Canada only): the Universal Oneness Spiritual Center in Toronto, Ontario, the Centre for Spiritual Living in Calgary, Alberta, and Unity Vancouver in Vancouver, B.C. It reviews how these three groups use the internet in their everyday activities such as ritual, prayer and meditation, and compares and contrasts the pros and cons of online and offline New Age spirituality, paying particular attention to issues of social, cultural and geographical differentiation in the light of Digital Religion.
Robust health stream processing
2014 Fall. Includes bibliographical references.
As the cost of personal health sensors decreases along with improvements in battery life and connectivity, it becomes more feasible to allow patients to leave full-time care environments sooner. Such devices could lead to greater independence for the elderly, as well as for others who would normally require full-time care. It would also allow surgery patients to spend less time in the hospital, both pre- and post-operation, as all data could be gathered via remote sensors in the patient's home. While sensor technology is rapidly approaching the point where this is a feasible option, we still lack processing frameworks that would make such a leap not only feasible but safe. This work focuses on developing a framework that is robust both to failures of processing elements and to interference from other computations processing health sensor data. We work with three disparate data streams and accompanying computations: electroencephalogram (EEG) data gathered for a brain-computer interface (BCI) application, electrocardiogram (ECG) data gathered for arrhythmia detection, and thorax data gathered for monitoring patient sleep status.
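Robustness to failed processing elements in such a framework typically rests on a failure detector. The sketch below is a minimal timeout-based detector, not the thesis's design; the class name, the element identifiers, and the injectable clock are all illustrative assumptions.

```python
import time

class HeartbeatDetector:
    """Minimal timeout-based failure detector for stream processing elements.

    Each processing element reports a heartbeat (e.g. on every message it
    emits); an element is suspected failed once no heartbeat has arrived
    within `timeout` seconds, at which point a controller can reschedule
    its computation on another machine.
    """
    def __init__(self, timeout=5.0, clock=time.monotonic):
        self.timeout = timeout
        self.clock = clock        # injectable for testing
        self.last_seen = {}       # element id -> time of last heartbeat

    def heartbeat(self, element_id):
        self.last_seen[element_id] = self.clock()

    def suspected(self):
        now = self.clock()
        return [e for e, t in self.last_seen.items() if now - t > self.timeout]

# Usage: a controller calls heartbeat("ecg") on each message from the ECG
# element and periodically reschedules anything returned by suspected().
```

The timeout value here is exactly the kind of timing assumption the "Cloudy Forecast" abstract above cautions about: it must be set with the deployment's actual latency variability in mind.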