Transfer of Personality to Synthetic Human ("mind uploading") and the Social Construction of Identity
Humans have long wondered whether they can survive the death of their physical bodies. Some people now look to technology as a means by which this might occur, using terms such as 'whole brain emulation', 'mind uploading', and 'substrate independent minds' to describe a set of hypothetical procedures for transferring or emulating the functioning of a human mind on a synthetic substrate. There has been much debate about the philosophical implications of such procedures for personal survival. Most participants in that debate assume that the continuation of identity is an objective fact that can be revealed by scientific enquiry or rational debate. We bring into this debate a perspective that has so far been neglected: that personal identities are in large part social constructs. Consequently, to enable a particular identity to survive the transference process, it is not sufficient to settle age-old philosophical questions about the nature of identity. It is also necessary to maintain certain networks of interaction between the synthetic person and its social environment, and to sustain a collective belief in the persistence of identity. We defend this position by using the example of the Dalai Lama in the Tibetan Buddhist tradition and identify technological procedures that could increase the credibility of personal continuity between biological and artificial substrates.
Management of object-oriented action-based distributed programs
PhD Thesis

This thesis addresses the problem of managing the runtime behaviour of distributed programs. The thesis of this work is that management is fundamentally an information processing activity and that the object model, as applied to action-based distributed systems and database systems, is an appropriate representation of the management information. In this approach, the basic concepts of classes, objects, relationships, and atomic transition systems are used to form object models of distributed programs. Distributed programs are collections of objects whose methods are structured using atomic actions, i.e., atomic transactions. Object models are formed of two submodels, each representing a fundamental aspect of a distributed program. The structural submodel represents a static perspective of the distributed program, and the control submodel represents a dynamic perspective of it. Structural models represent the program's objects, classes and their relationships. Control models represent the program's object states, events, guards and actions, i.e., a transition system. Resolution of queries on the distributed program's object model enables the management system to control certain activities of distributed programs.
At a different level of abstraction, the distributed program can be seen as a reactive system in which two subprograms interact: an application program and a management program; they interact only through sensors and actuators. Sensors are methods used to probe an object's state, and actuators are methods used to change an object's state. The management program is capable of prodding the application program into action by activating sensors and actuators available at the interface of the application program. Actions are determined by management policies that are encoded in the management program. This way of structuring the management system encourages a clear modularization of application and management distributed programs, allowing better separation of concerns: management concerns can be dealt with by the management program, while functional concerns can be assigned to the application program.
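The sensor/actuator split described above can be sketched in a few lines. This is an illustrative example only, not code from the thesis; the class, method, and policy names are all hypothetical.

```python
# Hypothetical sketch of the sensor/actuator management interface:
# the management program touches the application object only through
# sensor methods (probe state) and actuator methods (change state).

class BoundedQueue:
    """Application object exposing a management interface."""
    def __init__(self, capacity):
        self._items = []
        self._capacity = capacity

    # --- functional interface (application concerns) ---
    def enqueue(self, item):
        if len(self._items) < self._capacity:
            self._items.append(item)

    # --- sensor: probe state without changing it ---
    def sensor_length(self):
        return len(self._items)

    # --- actuator: change state on behalf of management ---
    def actuator_resize(self, new_capacity):
        self._capacity = new_capacity


def management_program(obj, threshold=8):
    """Encodes one management policy: grow the queue when it is full."""
    if obj.sensor_length() >= threshold:
        obj.actuator_resize(threshold * 2)


q = BoundedQueue(capacity=8)
for i in range(8):
    q.enqueue(i)
management_program(q)       # policy fires: capacity doubled to 16
```

The point of the structure is that `management_program` encodes policy and knows nothing about queue internals; it sees only the sensor/actuator interface, mirroring the separation of concerns the abstract describes.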
The object-oriented action-based computational model adopted by the management system provides a natural framework for the implementation of fault-tolerant distributed programs. Object orientation provides modularity and extensibility
through object encapsulation. Atomic actions guarantee the consistency of
the objects of the distributed program despite concurrency and failures. Replication
of the distributed program provides increased fault-tolerance by guaranteeing
the consistent progress of the computation, even though some of the replicated
objects can fail.
A prototype management system based on the management theory proposed above has been implemented atop Arjuna, an object-oriented programming system which provides a set of tools for constructing fault-tolerant distributed programs. The management system is composed of two subsystems: Stabilis, a management system for structural information, and Vigil, a management system for control information. Example applications have been implemented to illustrate the use of the management system and to gather experimental evidence in support of the thesis.

Funding: CNPq (Conselho Nacional de Desenvolvimento Científico e Tecnológico, Brazil); BROADCAST (Basic Research On Advanced Distributed Computing: from Algorithms to SysTems)
AMaχoS: Abstract Machine for Xcerpt
Web query languages promise convenient and efficient access
to Web data such as XML, RDF, or Topic Maps. Xcerpt is one such Web
query language with strong emphasis on novel high-level constructs for
effective and convenient query authoring, particularly tailored to versatile
access to data in different Web formats such as XML or RDF.
However, so far it lacks an efficient implementation to complement the convenient language features. AMaχoS is an abstract machine implementation for Xcerpt that aims at efficiency and ease of deployment. It strictly separates compilation and execution of queries: queries are compiled once to abstract machine code that consists of (1) a code segment with instructions for evaluating each rule and (2) a hint segment that provides the abstract machine with optimization hints derived during query compilation. This article summarizes the motivation and principles behind AMaχoS and discusses how its current architecture realizes these principles.
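The compile-once / execute-many split the abstract describes can be illustrated with a toy interpreter. This sketch is not the AMaχoS instruction set; the instruction names, the hint format, and the query representation are all invented for illustration.

```python
# Toy illustration of an abstract machine that separates compilation
# from execution: a query is compiled once into a code segment
# (instructions) plus a hint segment (optimization metadata), and the
# machine then interprets that code against arbitrary input records.

def compile_query(predicates):
    """Compile (field, value) equality predicates into machine code."""
    code = [("MATCH", field, value) for field, value in predicates]
    # Placeholder hint: the order in which instructions should run.
    hints = {"eval_order": list(range(len(code)))}
    return code, hints

def execute(code, hints, record):
    """Interpret the compiled code segment against one record."""
    for i in hints["eval_order"]:
        op, field, value = code[i]
        assert op == "MATCH"
        if record.get(field) != value:
            return False
    return True

# Compile once, execute against many records.
code, hints = compile_query([("format", "XML"), ("lang", "Xcerpt")])
data = [{"format": "XML", "lang": "Xcerpt"},
        {"format": "RDF", "lang": "Xcerpt"}]
matches = [r for r in data if execute(code, hints, r)]
```

The design choice mirrored here is that the expensive analysis (ordering, in this toy; selectivity and join planning in a real system) happens once at compile time and is shipped to the execution engine as hints rather than recomputed per query run.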
50 years of isolation
The traditional means for isolating applications from each other is the "process" abstraction provided by the operating system. However, as applications now consist of multiple fine-grained components, the traditional process abstraction model is proving insufficient to ensure this isolation. Statistics indicate that a high percentage of software failures occur due to the propagation of component failures. These observations are further bolstered by the attempts of modern Internet browser developers, for example, to adopt multi-process architectures in order to increase robustness. Therefore, a fresh look at the available options for isolating program components is necessary, and this paper provides an overview of previous and current research in the area.
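The process-isolation idea in this abstract is easy to demonstrate: a component that would crash a single-process host can be run in a separate OS process so that only its own process dies. This is a generic sketch, not code from the paper.

```python
# Minimal demonstration of process-level fault isolation: a faulty
# component runs in a child process, so its failure is reported as a
# non-zero exit code instead of crashing the host program.

import subprocess
import sys

FAULTY = "raise RuntimeError('component failure')"

def run_isolated(component_code: str) -> int:
    """Run component code in its own OS process; return its exit code."""
    return subprocess.run([sys.executable, "-c", component_code]).returncode

# The host survives the component's crash and can inspect the outcome.
exit_code = run_isolated(FAULTY)
host_survived = exit_code != 0
```

Multi-process browsers apply the same principle at larger scale: one renderer process per site, so a crash in one tab's component tree cannot take down the others.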
Design, Implementation and Experiments for Moving Target Defense Framework
The traditional defensive security strategy for distributed systems employs well-established defensive techniques such as redundancy/replication, firewalls, and encryption to prevent attackers from taking control of the system. However, given sufficient time and resources, all these methods can be defeated, especially when dealing with sophisticated attacks from advanced adversaries that leverage zero-day exploits.
Self-stabilization Overhead: an Experimental Case Study on Coded Atomic Storage
Shared memory emulation can be used as a fault-tolerant and highly available
distributed storage solution or as a low-level synchronization primitive.
Attiya, Bar-Noy, and Dolev were the first to propose a single-writer,
multi-reader linearizable register emulation where the register is replicated
to all servers. Recently, Cadambe et al. proposed the Coded Atomic Storage
(CAS) algorithm, which uses erasure coding for achieving data redundancy with
much lower communication cost than previous algorithmic solutions.
Although CAS can tolerate server crashes, it was not designed to recover from
unexpected, transient faults without the need for external (human)
intervention. In this respect, Dolev, Petig, and Schiller have recently
developed a self-stabilizing version of CAS, which we call CASSS. As one would
expect, self-stabilization does not come as a free lunch; it mainly introduces
communication overhead for detecting inconsistencies and stale
information. So, one might wonder whether the overhead introduced by
self-stabilization nullifies the gain of erasure coding.
To answer this question, we have implemented and experimentally evaluated the
CASSS algorithm on PlanetLab, a planetary-scale distributed infrastructure. The
evaluation shows that our implementation of CASSS scales very well in terms of
the number of servers, the number of concurrent clients, and the size of
the replicated object. More importantly, it shows that (a) CASSS has only a
constant overhead compared to the traditional CAS algorithm (which we also
implemented), and (b) its recovery period (after the last occurrence of a
transient fault) is as short as a few client (read/write) operations. Our
results suggest that CASSS provides automatic recovery from transient faults,
with a bounded amount of resources, without significantly impacting efficiency.
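The storage saving from erasure coding that motivates CAS can be shown with the simplest possible code. The sketch below uses a toy (k=2, m=1) XOR parity scheme, not the actual code used by CAS: each of three servers stores roughly half the object instead of a full replica, and any one shard can be lost and reconstructed.

```python
# Toy XOR-based erasure code (k=2 data shards + 1 parity shard).
# Full replication on 3 servers stores 3x the object; this scheme
# stores ~1.5x, yet still tolerates the loss of any single shard.

def encode(data: bytes):
    """Split data into two halves and add an XOR parity shard."""
    half = (len(data) + 1) // 2
    a = data[:half]
    b = data[half:].ljust(half, b"\0")          # pad to equal length
    parity = bytes(x ^ y for x, y in zip(a, b))
    return [a, b, parity]

def decode(shards, lost_index, original_len):
    """Rebuild the object even though one shard is missing (None)."""
    a, b, parity = shards
    if lost_index == 0:
        a = bytes(x ^ y for x, y in zip(b, parity))   # a = b XOR parity
    elif lost_index == 1:
        b = bytes(x ^ y for x, y in zip(a, parity))   # b = a XOR parity
    return (a + b)[:original_len]

data = b"linearizable register"
a, b, parity = encode(data)
# Simulate one server crash: shard b is gone, yet the object survives.
recovered = decode([a, None, parity], lost_index=1, original_len=len(data))
```

Production erasure codes (e.g. Reed-Solomon, as commonly used with CAS-style algorithms) generalize this to k data shards and m parity shards, tolerating any m losses; the XOR code above is the m=1 special case.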
Digital practices: An aesthetic and neuroesthetic approach to virtuality and embodiment
Integration, management and communication of heterogeneous design resources with WWW technologies
Recently, advanced information technologies have opened new possibilities for collaborative design. In this paper, a Web-based collaborative design environment is proposed, in which heterogeneous design applications can be integrated behind a common interface, managed dynamically for publishing and searching, and made to communicate with each other for integrated multi-objective design. CORBA (Common Object Request Broker Architecture) is employed as the implementation tool enabling integration and communication of design application programs, and XML (eXtensible Markup Language) is used as a common data-descriptive language for data exchange between heterogeneous applications and for resource description and recording. This paper also introduces the implementation of the system and the issues involved in encapsulating existing legacy applications. Finally, an example of gear design based on the system is illustrated to demonstrate the methods and procedures developed by this research.
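The XML-as-common-format idea in this abstract can be sketched briefly: one tool serializes its native design parameters into a neutral XML document, and a different tool parses that document back into its own types. The element and attribute names below are illustrative, not from the paper.

```python
# Sketch of XML as a neutral exchange format between heterogeneous
# design tools: tool A exports a gear design as XML, tool B imports it
# without any knowledge of tool A's internal data structures.

import xml.etree.ElementTree as ET

def export_design(params: dict) -> str:
    """Tool A: serialize design parameters into a neutral XML document."""
    root = ET.Element("gear_design")
    for name, value in params.items():
        ET.SubElement(root, "param", name=name).text = str(value)
    return ET.tostring(root, encoding="unicode")

def import_design(xml_text: str) -> dict:
    """Tool B: parse the shared document back into its own types."""
    root = ET.fromstring(xml_text)
    return {p.get("name"): float(p.text) for p in root.findall("param")}

doc = export_design({"module": 2.5, "teeth": 40})
design = import_design(doc)
```

In the paper's architecture the transport between tools is CORBA rather than a local function call, but the role of XML is the same: a self-describing document that decouples each tool's internal representation from the wire format.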
Distributed Processes, Distributed Cognizers and Collaborative Cognition
Cognition is thinking; it feels like something to think, and only those who can feel can think. There are also things that thinkers can do. We know neither how thinkers can think nor how they are able to do what they can do. We are waiting for cognitive science to discover how. Cognitive science does this by testing hypotheses about what processes can generate what doing ("know-how"). This is called the Turing Test. It cannot test whether a process can generate feeling, hence thinking -- only whether it can generate doing. The processes that generate thinking and know-how are "distributed" within the heads of thinkers, but not across thinkers' heads. Hence there is no such thing as distributed cognition, only collaborative cognition. Email and the Web have spawned a new form of collaborative cognition that draws upon individual brains' real-time interactive potential in ways that were not possible in oral, written or print interactions.