
    White paper: A plan for cooperation between NASA and DARPA to establish a center for advanced architectures

    Large, complex computer systems require many years of development. It is recognized that large-scale systems are unlikely to be delivered in useful condition unless users are intimately involved throughout the design process. A mechanism is described that will involve users in the design of advanced computing systems and will accelerate the insertion of new systems into scientific research. This mechanism is embodied in a facility called the Center for Advanced Architectures (CAA). The CAA would be a division of RIACS (the Research Institute for Advanced Computer Science) and would receive its technical direction from a Scientific Advisory Board established by RIACS. The CAA described here is one possible implementation of a center envisaged in a proposed cooperation between NASA and DARPA.

    Reliable multi-hop routing with cooperative transmissions in energy-constrained networks

    We present a novel approach to characterizing optimal reliable multi-hop virtual multiple-input single-output (vMISO) routing in ad hoc networks. Under a high node density regime, we determine the optimal cardinality of the cooperation sets at each hop on a path minimizing the total energy cost per transmitted bit. Optimal cooperating-set cardinality curves are derived; they can be used to determine the optimal routing strategy based on the required reliability, transmission power, and path-loss coefficient. We design a new greedy geographical routing algorithm suitable for vMISO transmissions and demonstrate the applicability of our results to more general networks.
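    To make the trade-off concrete, here is a minimal sketch in Python of how a cooperation-set cardinality could be chosen to minimize energy per bit. The cost model (a fixed per-helper overhead plus long-haul transmit energy that falls with diversity gain) and all constants are illustrative assumptions, not the paper's actual formulation.

```python
def energy_per_bit(k, p_out=1e-3, alpha=3.0, d=100.0,
                   e_coop=1e-6, e_amp=1e-12):
    """Illustrative per-hop energy cost (J/bit) for a vMISO cooperation
    set of size k. Hypothetical model: distributing the bit to the k-1
    helpers costs a fixed overhead each, while the long-haul transmit
    energy shrinks with the diversity gain (outage ~ SNR^-k, so the
    required SNR scales like p_out ** (-1/k))."""
    coop_overhead = (k - 1) * e_coop        # local exchange within the set
    required_snr = p_out ** (-1.0 / k)      # diversity-k outage scaling
    long_haul = e_amp * (d ** alpha) * required_snr
    return coop_overhead + long_haul

# the optimal cardinality trades cooperation overhead against diversity gain
best_k = min(range(1, 9), key=energy_per_bit)
print(best_k, energy_per_bit(best_k))
```

    With these toy constants the minimum lands at an intermediate set size: larger sets buy diversity but the intra-set exchange overhead eventually dominates, which is the shape of trade-off the cardinality curves in the paper capture.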

    2 P2P or Not 2 P2P?

    In the hope of stimulating discussion, we present a heuristic decision tree that designers can use to judge the likely suitability of a P2P architecture for their applications. It is based on the characteristics of a wide range of P2P systems from the literature, both proposed and deployed.
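    As a rough illustration of what such a heuristic decision tree might look like in code, the sketch below walks a few yes/no questions about an application and suggests an architecture. The questions, their ordering, and the outcomes are hypothetical stand-ins, not the tree from the paper.

```python
def suggest_architecture(budget_limited, high_churn,
                         mutual_trust, needs_central_control):
    """Toy decision tree in the spirit of the paper's heuristic:
    answer a few yes/no questions about the application and get a
    suggestion on whether a P2P design is likely to fit."""
    if needs_central_control:
        return "client-server"                # central authority required
    if not budget_limited:
        return "client-server"                # central resources affordable
    if high_churn and not mutual_trust:
        return "P2P with reputation/incentive layer"
    return "P2P"

print(suggest_architecture(budget_limited=True, high_churn=True,
                           mutual_trust=False, needs_central_control=False))
```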

    Process Support for Cooperative Work on the World Wide Web

    The World Wide Web is becoming a dominant factor in information technology. Consequently, computer-supported cooperative work on the Web has recently drawn a lot of attention. “Process Support for Cooperative Work” (PSCW) is a Web-based system supporting both structured and unstructured forms of cooperation. It combines the “Basic Support for Cooperative Work” (BSCW) shared workspace system with the Merlin Process Support Environment. The current PSCW prototype offers a loose connection, in effect extending BSCW with a gateway to Merlin. With this prototype we have successfully addressed the technical issues involved; further integration of functionality should not pose any real problems. We focus on the technical side of the PSCW system, which gives good insight into the issues that generally have to be addressed in the construction of Web-based groupware.
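    The loose connection described above can be pictured as a thin event-forwarding layer: the shared workspace stays unchanged and a gateway relays its document events to the process engine. The following sketch is purely illustrative; the class and method names (WorkspaceGateway, on_document_event) are hypothetical, not PSCW's actual interfaces.

```python
class ProcessEngine:
    """Stand-in for a Merlin-style process support engine."""
    def handle(self, event):
        print(f"process engine received: {event}")

class WorkspaceGateway:
    """Hypothetical loose coupling: forward shared-workspace events
    to the process engine without modifying the workspace itself."""
    def __init__(self, engine):
        self.engine = engine

    def on_document_event(self, user, action, document):
        self.engine.handle({"user": user, "action": action, "doc": document})

gw = WorkspaceGateway(ProcessEngine())
gw.on_document_event("alice", "checkin", "design-draft.html")
```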

    Memory and information processing in neuromorphic systems

    A striking difference between brain-inspired neuromorphic processors and current von Neumann processor architectures is the way in which memory and processing are organized. As Information and Communication Technologies continue to address the need for increased computational power by increasing the number of cores within a digital processor, neuromorphic engineers and scientists can complement this effort by building processor architectures in which memory is distributed with the processing. In this paper we present a survey of brain-inspired processor architectures that support models of cortical networks and deep neural networks. These architectures range from serial clocked implementations of multi-neuron systems to massively parallel asynchronous ones, and from purely digital systems to mixed analog/digital systems that implement more biologically realistic models of neurons and synapses, together with a suite of adaptation and learning mechanisms analogous to those found in biological nervous systems. We describe the advantages of the different approaches being pursued and present the challenges that need to be addressed for building artificial neural processing systems that can display the richness of behaviors seen in biological systems.
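    As a toy illustration of co-locating memory with processing, the sketch below updates a layer of leaky integrate-and-fire neurons whose synaptic weights sit in the same array rows their updates read. It is a generic textbook-style model with made-up parameters, not a description of any specific chip surveyed here.

```python
import numpy as np

def lif_step(v, spikes_in, w, leak=0.9, threshold=1.0):
    """One synchronous update of a leaky integrate-and-fire layer.
    Each neuron's synaptic weights (its 'memory') live in the row of w
    that its own update reads, mimicking the memory/processing
    co-location the survey highlights."""
    v = leak * v + w @ spikes_in              # integrate weighted input spikes
    spikes_out = (v >= threshold).astype(float)
    v = np.where(spikes_out > 0, 0.0, v)      # reset neurons that fired
    return v, spikes_out

rng = np.random.default_rng(0)
w = rng.uniform(0, 0.5, size=(4, 8))          # local synaptic memory
v = np.zeros(4)
v, out = lif_step(v, rng.integers(0, 2, 8).astype(float), w)
print(out)
```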

    Capital as Artificial Intelligence

    This article examines science-fictional allegorizations of Soviet-style planned economies, financial markets, autonomous trading algorithms, and global capitalism writ large as nonhuman artificial intelligences, focusing primarily on American science fiction of the Cold War period. Key fictional texts discussed include Star Trek, Isaac Asimov's Machine stories, Terminator, Kurt Vonnegut's Player Piano (1952), Charles Stross's Accelerando (2005), and the short stories of Philip K. Dick. The final section of the article discusses Kim Stanley Robinson's novel 2312 (2012) within the contemporary political context of accelerationist anticapitalism, whose advocates propose working with “the machines” rather than against them.

    Bipartite electronic SLA as a business framework to support cross-organization load management of real-time online applications

    Online applications such as games and e-learning applications fall within the broader category of real-time online interactive applications (ROIA), a new class of ‘killer’ application for the Grid that is being investigated in the edutain@grid project. The two case studies in edutain@grid are an online game and an e-learning training application. We present a novel Grid-based business framework that uses bipartite service level agreements (SLAs) and dynamic invoice models to model complex business relationships in a massively scalable and flexible way. We support cross-organization load management at the business level through zone migration. For evaluation we look at existing and extended value chains, the quality of service (QoS) metrics measured, and the dynamic invoice models that support this work. We examine the causal links from customer quality of experience (QoE) and service provider quality of business (QoBiz) through to measured quality of service. Finally, we discuss a shared-reward business ecosystem and suggest how extended service level agreements and invoice models can support it.
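    A bipartite SLA with a dynamic invoice can be imagined as a small data structure plus a pricing rule. The sketch below is an assumption-laden toy (the field names and the proportional pricing rule are invented for illustration), not the edutain@grid framework's actual schema.

```python
from dataclasses import dataclass

@dataclass
class BipartiteSLA:
    """Toy two-party SLA: the invoice scales with measured QoS
    relative to the agreed target, so underdelivery is priced in."""
    provider: str
    consumer: str
    target_qos: float      # e.g. agreed fraction of updates on time
    base_price: float      # price per session at target QoS

    def invoice(self, measured_qos: float, sessions: int) -> float:
        # pay proportionally less when the provider underdelivers
        factor = min(measured_qos / self.target_qos, 1.0)
        return self.base_price * factor * sessions

sla = BipartiteSLA("hoster-A", "coordinator-B", target_qos=0.99, base_price=0.05)
print(sla.invoice(measured_qos=0.97, sessions=10_000))
```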

    Round Compression for Parallel Matching Algorithms

    For over a decade now we have been witnessing the success of massively parallel computation (MPC) frameworks such as MapReduce, Hadoop, Dryad, and Spark. One reason for their success is that these frameworks accurately capture the nature of large-scale computation. In particular, compared to classic distributed algorithms or the PRAM model, they allow much more local computation. The fundamental question that arises in this context is: can we leverage this additional power to obtain even faster parallel algorithms? A prominent example here is the maximum matching problem, one of the most classic graph problems. It is well known that in the PRAM model one can compute a 2-approximate maximum matching in O(log n) rounds. However, the exact complexity of this problem in the MPC framework is still far from understood. Lattanzi et al. showed that if each machine has n^{1+Ω(1)} memory, this problem can also be solved 2-approximately in a constant number of rounds. These techniques, as well as the approaches developed in follow-up work, seem to get stuck in a fundamental way at roughly O(log n) rounds once we enter the near-linear memory regime. It is thus entirely possible that in this regime, which in particular captures the case of sparse graph computations, the best MPC round complexity matches what one can already get in the PRAM model, without any need for the extra local computation power. In this paper, we refute that perplexing possibility: we break the O(log n) round complexity bound even in the case of slightly sublinear memory per machine. In fact, our improvement is almost exponential: we deliver a (2+ε)-approximation to maximum matching, for any fixed constant ε > 0, in O((log log n)²) rounds.
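    For contrast with the paper's result, the sketch below simulates the classic round-structured style of randomized matching that the paper improves on: in each round every free vertex proposes to one random free neighbor, and mutual proposals become matched pairs. Algorithms of this style classically finish in O(log n) rounds; this is a simulation sketch of that baseline, not the paper's round-compression algorithm.

```python
import random

def parallel_greedy_matching(adj, seed=0):
    """Simulate round-structured randomized greedy matching on a graph
    given as {vertex: [neighbors]}. Returns the matching (as a dict of
    partners) and the number of proposal rounds used."""
    matched = {}
    rounds = 0
    while True:
        rng = random.Random(seed + rounds)
        proposals = {}
        for u, nbrs in adj.items():
            if u in matched:
                continue
            free = [v for v in nbrs if v not in matched]
            if free:
                proposals[u] = rng.choice(free)
        if not proposals:                 # no free edge left: matching is maximal
            return matched, rounds
        rounds += 1
        for u, v in proposals.items():    # mutual proposals become matched pairs
            if proposals.get(v) == u and u < v:
                matched[u], matched[v] = v, u

adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
print(parallel_greedy_matching(adj))
```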