A Theory Explains Deep Learning
This is our journal for developing Deduction Theory and studying Deep Learning and Artificial Intelligence. Deduction Theory is a theory of deducing the world's relativity through information coupling and asymmetry. We focus on information processing, and we see intelligence as an information structure that is relatively close to object-oriented, probability-oriented, unsupervised-learning, relativity-oriented, and massively automated information processing. We see deep learning and machine learning as attempts to make all types of information processing relatively close to probabilistic information processing. We discuss how to understand Deep Learning and Artificial Intelligence, and why Deep Learning shows better performance than other methods, in terms of metaphysical logic.
A Mobile Computing Architecture for Numerical Simulation
Parallelization of numerical code is commonplace in the domain of numerical simulation. A numerical context is defined by the configuration of resources such as memory, processor load and the communication graph, with one evolving feature: resource availability. One feature is often missing: adaptability. Resource availability is not predictable, and the ability to adapt is therefore essential. Without calling the existing implementations of these codes into question, we build an adaptive way of using them. Because execution has to be driven by the availability of the main resources, the components of a numerical computation have to react when their context changes. This paper proposes a new architecture, a mobile computing architecture, based on mobile agents and JavaSpace. At the end of the paper, we apply our architecture to several case studies and report our first results.
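As an editorial aside, the coordination pattern this abstract describes, components pulling work through a JavaSpace and reacting to resource availability, can be sketched as follows. This is a minimal illustration using the standard JavaSpaces API; the TaskEntry class, the capacity probe, and the omitted Jini space lookup are assumptions made for illustration, not details taken from the paper.

```java
import net.jini.core.entry.Entry;
import net.jini.space.JavaSpace;

// Hypothetical task descriptor: JavaSpaces entries are plain objects
// with public fields (used for template matching) and a no-arg constructor.
class TaskEntry implements Entry {
    public String kernelName;   // which numerical kernel to run
    public Integer partitionId; // which slice of the domain to process

    public TaskEntry() {}
    public TaskEntry(String kernelName, Integer partitionId) {
        this.kernelName = kernelName;
        this.partitionId = partitionId;
    }
}

public class AdaptiveWorker {
    private final JavaSpace space; // obtained via Jini lookup (omitted here)

    AdaptiveWorker(JavaSpace space) { this.space = space; }

    public void run() throws Exception {
        TaskEntry template = new TaskEntry(); // null fields match any task
        while (hasSpareCapacity()) {
            // take() removes one matching task, blocking up to 10 s;
            // an overloaded node simply stops taking work.
            TaskEntry task = (TaskEntry) space.take(template, null, 10_000);
            if (task != null) compute(task);
        }
        // On resource pressure the loop exits; unfinished tasks stay in
        // the space and can be picked up by other (or migrated) agents.
    }

    private boolean hasSpareCapacity() {
        // Crude load probe; a real agent would watch memory, CPU and network.
        return Runtime.getRuntime().freeMemory() > 64L * 1024 * 1024;
    }

    private void compute(TaskEntry task) {
        /* run the numerical kernel for task.partitionId */
    }
}
```

The point of such a design is that work is bound to the shared space rather than to a node, so an agent whose context degrades can simply stop consuming tasks without stalling the overall computation.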
Visualisation techniques, human perception and the built environment
Historically, architecture has a wealth of visualisation techniques that have evolved throughout the period of structural design, with Virtual Reality (VR) being a relatively recent addition to the toolbox. To date, the effectiveness of VR has been demonstrated from conceptualisation through to the final stages and maintenance; however, its full potential has yet to be realised (Bouchlaghem et al., 2005). According to Dewey (1934), perceptual integration was predicted to be transformational, as the observer would be able to 'engage' with the virtual environment. However, environmental representations are predominantly focused on vision, despite evidence that the experience is multi-sensory. In addition, there is a marked lack of research exploring the complex interaction between environmental design and the user, such as the role of attention or conceptual interpretation. This paper identifies the potential of VR models to aid communication for the Built Environment, with specific reference to human perception issues.
Gossip vs. Markov Chains, and Randomness-Efficient Rumor Spreading
We study gossip algorithms for the rumor spreading problem, which asks one node to deliver a rumor to all nodes in an unknown network. We present the first protocol for any expander graph G with n nodes such that the protocol informs every node in O(log n) rounds with high probability, and uses O(log n log log n) random bits in total. The runtime of our protocol is tight, and the randomness requirement of O(log n log log n) random bits almost matches the lower bound of Ω(log n) random bits for dense graphs. We further show that, for many graph families, a polylogarithmic number of random bits in total suffices to spread the rumor in O(polylog n) rounds. These results together give us an almost complete understanding of the randomness requirement of this fundamental gossip process.
Our analysis relies on unexpectedly tight connections among gossip processes, Markov chains, and branching programs. First, we establish a connection between rumor spreading processes and Markov chains, which is used to approximate the rumor spreading time by the mixing time of Markov chains. Second, we show a reduction from rumor spreading processes to branching programs, and this reduction provides a general framework to derandomize gossip processes. In addition to designing rumor spreading protocols, these novel techniques may have applications in studying parallel and multiple random walks, and the randomness complexity of distributed algorithms.
Comment: 41 pages, 1 figure. arXiv admin note: substantial text overlap with arXiv:1304.135
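For context, the baseline process this abstract refers to can be simulated in a few lines: in the classic fully random push protocol, every informed node calls one uniformly random neighbour per round. The sketch below is a generic illustration of that model, not the randomness-efficient protocol of the paper, which replaces the fresh per-round random choices with a small budget of shared random bits.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Classic PUSH gossip: in each round, every informed node forwards the
// rumor to one uniformly random neighbour. Each choice consumes fresh
// random bits, which is exactly what the paper's protocol economizes on.
public class PushGossip {

    static int roundsToInformAll(List<List<Integer>> adj, int source, Random rng) {
        int n = adj.size();
        boolean[] informed = new boolean[n];
        informed[source] = true;
        int informedCount = 1;
        int rounds = 0;
        while (informedCount < n) {
            boolean[] next = informed.clone();
            for (int v = 0; v < n; v++) {
                if (!informed[v]) continue;
                List<Integer> nbrs = adj.get(v);
                if (nbrs.isEmpty()) continue;
                int target = nbrs.get(rng.nextInt(nbrs.size()));
                if (!next[target]) {
                    next[target] = true;
                    informedCount++;
                }
            }
            informed = next;
            rounds++;
        }
        return rounds;
    }

    public static void main(String[] args) {
        int n = 1024; // complete graph: a simple stand-in for a dense expander
        List<List<Integer>> adj = new ArrayList<>();
        for (int v = 0; v < n; v++) {
            List<Integer> nbrs = new ArrayList<>();
            for (int u = 0; u < n; u++) if (u != v) nbrs.add(u);
            adj.add(nbrs);
        }
        System.out.println("rounds: " + roundsToInformAll(adj, 0, new Random(42)));
    }
}
```

On an expander-like graph such as the complete graph used here, the round count concentrates around Θ(log n), which is the regime the derandomization results above address.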
Split and Migrate: Resource-Driven Placement and Discovery of Microservices at the Edge
Microservices architectures combine the use of fine-grained and independently-scalable services with lightweight communication protocols, such as REST calls over HTTP. Microservices bring flexibility to the development and deployment of application back-ends in the cloud.
Applications such as collaborative editing tools require frequent interactions between the front-end running on users' machines and a back-end formed of multiple microservices. User-perceived latencies depend on their connection to microservices, but also on the interaction patterns between these services and their databases. Placing services at the edge of the network, closer to the users, is necessary to reduce user-perceived latencies. It is, however, difficult to decide on the placement of complete stateful microservices at one specific core or edge location without trading a latency reduction for some users against a latency increase for the others.
We present how to dynamically deploy microservices on a combination of core and edge resources to systematically reduce user-perceived latencies. Our approach enables the splitting of stateful microservices and the placement of the resulting splits on appropriate core and edge sites. Koala, a decentralized and resource-driven service discovery middleware, enables REST calls to reach and use the appropriate split with only minimal changes to a legacy microservices application. Locality awareness based on network coordinates further enables service splits to be migrated automatically so that they follow the location of the users. We confirm the effectiveness of our approach with a full prototype and an application to ShareLatex, a microservices-based collaborative editing application.
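To make the locality-awareness idea concrete, here is a minimal sketch of nearest-split selection with Vivaldi-style network coordinates, where predicted latency is approximated by Euclidean distance in coordinate space. The Split record, the coordinates, and the endpoints are hypothetical; this only illustrates the kind of decision a discovery layer like Koala makes, not Koala's actual interface.

```java
import java.util.List;

// Hypothetical service split with 2D network coordinates: nodes embed
// themselves so that coordinate distance approximates round-trip latency.
record Split(String endpoint, double x, double y) {}

public class NearestSplit {

    // The split at minimum Euclidean distance from the client is the
    // one with the lowest predicted latency.
    static Split nearest(double clientX, double clientY, List<Split> splits) {
        Split best = null;
        double bestDist = Double.POSITIVE_INFINITY;
        for (Split s : splits) {
            double d = Math.hypot(s.x() - clientX, s.y() - clientY);
            if (d < bestDist) {
                bestDist = d;
                best = s;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        List<Split> splits = List.of(
            new Split("https://core.example/api", 0.0, 0.0),    // core site
            new Split("https://edge-a.example/api", 4.0, 1.0),  // edge site A
            new Split("https://edge-b.example/api", 9.0, 8.0)); // edge site B
        // A client whose coordinates place it near edge A routes its
        // REST calls there rather than to the distant core.
        System.out.println(nearest(3.5, 1.2, splits).endpoint());
    }
}
```

The same distance test, evaluated periodically against the aggregate coordinates of a split's users, is what would trigger a split to migrate toward the edge site closest to its current users.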
The AliEn system, status and perspectives
AliEn is a production environment that implements several components of the
Grid paradigm needed to simulate, reconstruct and analyse HEP data in a
distributed way. The system is built around Open Source components, uses the
Web Services model and standard network protocols to implement the computing
platform that is currently being used to produce and analyse Monte Carlo data
at over 30 sites on four continents. The aim of this paper is to present the
current AliEn architecture and outline its future developments in the light of
emerging standards.
Comment: Talk from the 2003 Computing in High Energy and Nuclear Physics (CHEP03), La Jolla, CA, USA, March 2003, 10 pages, Word, 10 figures. PSN MOAT00
- …