The Algorithmic Origins of Life
Although it has been notoriously difficult to pin down precisely what it is
that makes life so distinctive and remarkable, there is general agreement that
its informational aspect is one key property, perhaps the key property. The
unique informational narrative of living systems suggests that life may be
characterized by context-dependent causal influences, and in particular, that
top-down (or downward) causation -- where higher-levels influence and constrain
the dynamics of lower-levels in organizational hierarchies -- may be a major
contributor to the hierarchical structure of living systems. Here we propose that
the origin of life may correspond to a physical transition associated with a
shift in causal structure, where information gains direct and
context-dependent causal efficacy over the matter in which it is instantiated. Such a
transition may be akin to more traditional physical transitions (e.g.
thermodynamic phase transitions), with the crucial distinction that determining
which phase (non-life or life) a given system is in requires dynamical
information and therefore can only be inferred by identifying causal
architecture. We discuss some potential novel research directions based on this
hypothesis, including potential measures of such a transition that may be
amenable to laboratory study, and how the proposed mechanism corresponds to the
onset of the unique mode of (algorithmic) information processing characteristic
of living systems.
Comment: 13 pages, 1 table
Efficient Task Replication for Fast Response Times in Parallel Computation
One typical use case of large-scale distributed computing in data centers is
to decompose a computation job into many independent tasks and run them in
parallel on different machines, sometimes known as the "embarrassingly
parallel" computation. For this type of computation, one challenge is that the
time to execute a task for each machine is inherently variable, and the overall
response time is constrained by the execution time of the slowest machine. To
address this issue, system designers introduce task replication, which sends
the same task to multiple machines, and obtains the result from the machine that
finishes first. While task replication reduces response time, it usually
increases resource usage. In this work, we propose a theoretical framework to
analyze the trade-off between response time and resource usage. We show that,
while there is in general a tension between response time and resource usage,
there exist scenarios where replicating tasks judiciously reduces completion
time and resource usage simultaneously. Given the execution time distribution
for machines, we investigate the conditions for a scheduling policy to achieve
optimal performance trade-off, and propose efficient algorithms to search for
optimal or near-optimal scheduling policies. Our analysis gives insights on
when and why replication helps, which can be used to guide scheduler design in
large-scale distributed computing systems.
Comment: Extended version of the 2-page paper accepted to ACM SIGMETRICS 201
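The trade-off the abstract describes can be illustrated with a small Monte Carlo sketch: sending each task to several machines and taking the earliest finisher cuts the wait for stragglers, but multiplies machine-time. This is only an illustration, not the paper's analytical framework; the execution-time distribution and all names here are assumptions.

```python
import random

def simulate(num_tasks, replicas, num_trials=2000, seed=0):
    """Estimate mean job response time and mean total machine-time when
    each task runs on `replicas` machines and the earliest copy wins
    (losing copies are cancelled the moment the winner finishes)."""
    rng = random.Random(seed)

    def exec_time():
        # Illustrative heavy-tailed mixture: ~90% of runs take about 1s,
        # ~10% straggle near 10s.
        if rng.random() < 0.9:
            return rng.uniform(0.8, 1.2)
        return rng.uniform(8.0, 12.0)

    total_resp = total_usage = 0.0
    for _ in range(num_trials):
        finish = [min(exec_time() for _ in range(replicas))
                  for _ in range(num_tasks)]
        total_resp += max(finish)              # the job waits for its slowest task
        total_usage += replicas * sum(finish)  # every copy runs until the winner finishes
    return total_resp / num_trials, total_usage / num_trials

t1, u1 = simulate(num_tasks=20, replicas=1)
t2, u2 = simulate(num_tasks=20, replicas=2)
```

Under this distribution, two replicas sharply reduce mean response time (a single straggler no longer delays the whole job) at the cost of extra machine-time, matching the tension the abstract analyzes; with other distributions the same simulation can show both quantities improving.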
Evaluating weaknesses of "perceptual-cognitive training" and "brain training" methods in sport: An ecological dynamics critique
The recent upsurge in "brain training and perceptual-cognitive training," proposing to improve isolated processes such as brain function, visual perception, and decision-making, has created significant interest among elite sports practitioners seeking to create an "edge" for athletes. The claims of these related "performance-enhancing industries" can be considered together as part of a process training approach that proposes enhanced cognitive and perceptual skills and brain capacity to support performance in everyday life activities, including sport. For example, the "process training industry" promotes the idea that playing such games not only makes you a better player but also makes you smarter, more alert, and a faster learner. In this position paper, we critically evaluate the effectiveness of both types of process training programmes in terms of transfer to sport performance. These issues are addressed in three stages. First, we evaluate the empirical evidence in support of perceptual-cognitive process training and its application to enhancing sport performance. Second, we critically review the putative modularized mechanisms underpinning this kind of training, addressing their limitations and subsequent problems. Specifically, we consider the merits of this highly specific form of training, which focuses on training isolated processes, such as cognitive processes (attention, memory, thinking) and visual perception processes, separately from performance behaviors and actions. We conclude that these approaches may, at best, provide some "general transfer" of underlying processes to specific sport environments, but lack "specificity of transfer" to contextualize actual performance behaviors. A major weakness of process training methods is their focus on enhancing performance in isolated body "modules" (e.g., eye, brain, memory, anticipatory sub-systems).
What is lacking is evidence on how these isolated components are modified and subsequently interact with the other process "modules" considered to underlie sport performance. Finally, we propose that an ecological dynamics approach, aligned with an embodied framework of cognition, undermines the rationale that training modularized processes can enhance performance in competitive sport. An ecological dynamics perspective proposes that the body is a complex adaptive system interacting with performance environments in a functionally integrated manner, emphasizing that the inter-relation between motor processes, cognitive and perceptual functions, and the constraints of a sport task is best understood at the performer-environment scale of analysis.
Building the Infrastructure: The Effects of Role Identification Behaviors on Team Cognition Development and Performance
The primary purpose of this study was to extend theory and research regarding the emergence of mental models and transactive memory in teams. Utilizing Kozlowski et al.'s (1999) model of team compilation, we examined the effect of role identification behaviors and argued that such behaviors represent the initial building blocks of team cognition during the role compilation phase of team development. We then hypothesized that team mental models and transactive memory would convey the effects of these behaviors onto team performance in the team compilation phase of development. Results from 60 teams working on a command-and-control simulation supported our hypotheses.
A Bag-of-Tasks Scheduler Tolerant to Temporal Failures in Clouds
Cloud platforms have emerged as a prominent environment to execute high
performance computing (HPC) applications providing on-demand resources as well
as scalability. They usually offer different classes of Virtual Machines (VMs)
which ensure different guarantees in terms of availability and volatility,
provisioning the same resource through multiple pricing models. For instance,
in Amazon EC2 cloud, the user pays per hour for on-demand VMs while spot VMs
are unused instances available for lower price. Despite the monetary
advantages, a spot VM can be terminated, stopped, or hibernated by EC2 at any
moment.
Using both hibernation-prone spot VMs (for cost sake) and on-demand VMs, we
propose in this paper a static scheduling strategy for HPC applications composed
of independent tasks (bag-of-tasks) with deadline constraints. However,
if a spot VM hibernates and does not resume in time to meet the
application's deadline, a temporal failure takes place. Our scheduling
thus aims at minimizing the monetary cost of bag-of-tasks applications in the EC2
cloud while respecting the application's deadline and avoiding temporal failures. To this end, our
algorithm statically creates two scheduling maps: (i) the first one contains,
for each task, its starting time and on which VM (i.e., an available spot or
on-demand VM with the current lowest price) the task should execute; (ii) the
second one contains, for each task allocated on a spot VM in the first map, its
starting time and on which on-demand VM it should be executed to meet the
application deadline in order to avoid temporal failures. The latter will be
used whenever the hibernation period of a spot VM exceeds a time limit.
Performance results from simulations with task execution traces, configurations
of Amazon EC2 VM classes, and VM market histories confirm the effectiveness of
our scheduling and show that it tolerates temporal failures.
Enhancing reliability with Latin Square redundancy on desktop grids
Computational grids are some of the largest computer systems in existence today. Unfortunately, they are also, in many cases, the least reliable. This research examines the use of redundancy with permutation as a method of improving reliability in computational grid applications. Three primary avenues are explored: development of a new redundancy model, the Replication and Permutation Paradigm (RPP), for computational grids; development of grid simulation software for testing RPP against other redundancy methods; and, finally, running a program on a live grid using RPP. An important part of RPP involves distributing data and tasks across the grid in Latin Square fashion. Two theorems and subsequent proofs regarding Latin Squares are developed. The theorems describe the changing position of symbols between the rows of a standard Latin Square. When a symbol is missing because a column has been removed, the theorems provide a basis for determining the next row and column where the missing symbol can be found. Interesting in their own right, the theorems have implications for redundancy. In terms of the redundancy model, the theorems allow one to state the maximum makespan in the face of missing computational hosts when using Latin Square redundancy. The simulator software was developed and used to compare different data and task distribution schemes on a simulated grid. The software clearly showed the advantage of running RPP, which resulted in faster completion times in the face of computational host failures. The Latin Square method also fails gracefully: jobs still complete under massive node failure, at the cost of increased makespan. Finally, an Inductive Logic Program (ILP) for pharmacophore search was executed, using a Latin Square redundancy methodology, on a Condor grid in the Dahlem Lab at the University of Louisville Speed School of Engineering. All jobs completed, even in the face of large numbers of randomly generated computational host failures.
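The Latin Square distribution idea can be sketched as follows: in an n x n Latin square, each task (symbol) appears exactly once in every row and column, so if a host (column) fails, every one of its tasks is still assigned to some other host in another row. This is only an illustration of the general principle using the standard cyclic construction, not the paper's theorems or the RPP implementation; all names are assumptions.

```python
def cyclic_latin_square(n):
    """Standard cyclic n x n Latin square: row r assigns task (r + c) % n
    to host c, so every task appears once per row and once per column."""
    return [[(r + c) % n for c in range(n)] for r in range(n)]

def surviving_tasks(square, failed_host):
    """Tasks still scheduled somewhere when one host (column) is removed."""
    covered = set()
    for row in square:
        for host, task in enumerate(row):
            if host != failed_host:
                covered.add(task)
    return covered

n = 5
sq = cyclic_latin_square(n)

# Every row is a permutation of the n tasks (the Latin Square property).
assert all(sorted(row) == list(range(n)) for row in sq)

# Even with host 2 down, all n tasks remain covered by the other hosts,
# which is what lets jobs complete (with increased makespan) after failures.
assert surviving_tasks(sq, failed_host=2) == set(range(n))
```

The paper's theorems go further, locating exactly which row and column recover a missing symbol and bounding the resulting makespan; this sketch only demonstrates the coverage property that makes such recovery possible.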