
    Mining Dynamic Document Spaces with Massively Parallel Embedded Processors

    Océ is currently investigating future document management services. One of these services is access to dynamic document spaces, i.e. improving access to document spaces that are frequently updated (such as newsgroups). This process is rather computationally intensive. This paper describes research on software development for massively parallel processors. A prototype has been built that processes streams of information from specified newsgroups and transforms them into personal information maps. This technology speeds up the training part compared with a general-purpose processor implementation, but its real benefits emerge with larger problem dimensions because of the scalable approach. It is recommended to improve the quality of the map as well as the visualisation, and to profile the performance of the other parts of the pipeline, i.e. feature extraction and visualisation, in more detail.
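    The abstract does not specify the map-training algorithm. As a purely illustrative assumption, the sketch below uses a self-organizing-map-style training loop over document feature vectors; all names (train_som, doc_vectors, grid sizes) are hypothetical. It shows the kind of per-document update loop whose training cost motivates the parallel processor array.

```python
import numpy as np

def train_som(doc_vectors, grid_h=10, grid_w=10, epochs=20, lr0=0.5, radius0=3.0):
    """Toy SOM-style training over document feature vectors (illustration only;
    the paper's 'personal information map' algorithm is not given in the abstract)."""
    dim = doc_vectors.shape[1]
    rng = np.random.default_rng(0)
    weights = rng.random((grid_h, grid_w, dim))        # map node prototypes
    ys, xs = np.mgrid[0:grid_h, 0:grid_w]              # node grid coordinates

    for epoch in range(epochs):
        lr = lr0 * (1.0 - epoch / epochs)              # decaying learning rate
        radius = max(radius0 * (1.0 - epoch / epochs), 0.5)
        for v in doc_vectors:
            # Best-matching unit: node whose prototype is closest to the document.
            dists = np.linalg.norm(weights - v, axis=2)
            by, bx = np.unravel_index(np.argmin(dists), dists.shape)
            # Neighbourhood update pulls nearby prototypes towards the document;
            # this inner loop is the part a parallel processor array can speed up.
            grid_d2 = (ys - by) ** 2 + (xs - bx) ** 2
            h = np.exp(-grid_d2 / (2.0 * radius ** 2))[..., None]
            weights += lr * h * (v - weights)
    return weights

# Hypothetical usage: som = train_som(np.random.rand(500, 64))
```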

    Memory and information processing in neuromorphic systems

    A striking difference between brain-inspired neuromorphic processors and current von Neumann processor architectures is the way in which memory and processing are organized. As Information and Communication Technologies continue to address the need for increased computational power by increasing the number of cores in a digital processor, neuromorphic engineers and scientists can complement this effort by building processor architectures in which memory is distributed with the processing. In this paper we present a survey of brain-inspired processor architectures that support models of cortical networks and deep neural networks. These architectures range from serial clocked implementations of multi-neuron systems to massively parallel asynchronous ones, and from purely digital systems to mixed analog/digital systems that implement more biologically realistic models of neurons and synapses, together with a suite of adaptation and learning mechanisms analogous to those found in biological nervous systems. We describe the advantages of the different approaches being pursued and present the challenges that need to be addressed for building artificial neural processing systems that can display the richness of behaviors seen in biological systems.
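    As an illustration only, and not code from the survey, a minimal leaky integrate-and-fire update shows the kind of biologically inspired neuron model, with its state kept next to the computation, that such architectures implement in hardware; all parameter values and names are placeholders.

```python
import numpy as np

def lif_step(v, i_in, dt=1e-3, tau=20e-3, v_rest=0.0, v_thresh=1.0, v_reset=0.0, r=1.0):
    """One Euler step of a leaky integrate-and-fire neuron population.

    v:    membrane potentials (the per-neuron state co-located with processing)
    i_in: input currents accumulated from synaptic events
    Returns updated potentials and a boolean spike vector.
    """
    dv = (-(v - v_rest) + r * i_in) * (dt / tau)   # leak towards rest plus input drive
    v = v + dv
    spikes = v >= v_thresh                         # threshold crossing emits an event
    v = np.where(spikes, v_reset, v)               # reset the neurons that fired
    return v, spikes

# Event-driven flavour: only neurons that receive input or spike need attention,
# which is what asynchronous neuromorphic fabrics exploit.
v = np.zeros(8)
for t in range(100):
    v, spikes = lif_step(v, i_in=np.random.rand(8) * 0.2)
```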

    Multidisciplinary computational aerosciences

    As the challenges of single-discipline computational physics, such as computational fluid dynamics, computational structural mechanics, computational propulsion, computational aeroacoustics, and computational electromagnetics, are met, scientists have begun investigating the combination of these single disciplines into what is being called multidisciplinary computational aerosciences (MCAS). Combining several disciplines not only offers simulation realism but also poses formidable computational challenges. The solution of such problems will require computers orders of magnitude more powerful than those currently available. Such computer power can only be supplied by massively parallel machines, because conventional serial systems are limited by the speed of light. Even with such machines, MCAS problems will require hundreds of hours for their solution. To use such a machine efficiently, research is required in three areas: parallel architectures, systems software, and applications software. The main emphasis of this paper is the applications software element. Examples that demonstrate applications software for multidisciplinary problems currently being solved at NASA Ames Research Center are presented. Pacing items for MCAS, such as solution methodology, physical modeling, computer power, and multidisciplinary validation experiments, are discussed.
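    The abstract does not describe a specific coupling scheme, so the following is only a generic sketch, not the NASA Ames codes: a partitioned fluid/structure iteration in which each discipline's solver is a hypothetical callable and the interface data exchanged (loads and displacements) are stand-ins. It illustrates why combining disciplines multiplies the cost: each coupled step runs both solvers repeatedly until the shared interface converges.

```python
import numpy as np

def coupled_step(flow_state, struct_state, fluid_solve, structure_solve,
                 max_iters=10, tol=1e-6):
    """Generic partitioned multidisciplinary coupling iteration (illustration only).

    fluid_solve and structure_solve are hypothetical single-discipline solvers;
    they exchange interface data until the coupled interface converges.
    """
    displacements = struct_state["surface_displacements"]
    for k in range(max_iters):
        # Fluid solver sees the deformed surface and returns aerodynamic loads.
        flow_state, loads = fluid_solve(flow_state, displacements)
        # Structural solver responds to those loads with new deflections.
        struct_state, new_disp = structure_solve(struct_state, loads)
        converged = np.linalg.norm(new_disp - displacements) < tol
        displacements = new_disp
        if converged:                      # interface quantities stopped changing
            break
    return flow_state, struct_state
```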

    A Modeling Approach based on UML/MARTE for GPU Architecture

    Nowadays, High Performance Computing is part of the embedded systems context. Graphics Processing Units (GPUs) are increasingly used to accelerate a large share of algorithms and applications. Over the past years, few efforts have been made to describe abstractions of applications in relation to their target architectures. Thus, when developers need to map applications onto GPUs, for example, they find this difficult and prefer to program directly against the APIs of these architectures. This paper presents a metamodel extension of the MARTE profile and a model for GPU architectures. The main goal is to specify the task and data allocation in the memory hierarchy of these architectures. The results show that this approach will help to generate code for GPUs based on model transformations using Model Driven Engineering (MDE).
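    The paper's generated code is not shown in the abstract. As an assumed illustration only, the sketch below uses Numba's CUDA dialect in Python to show the two kinds of decisions such a MARTE-based model would have to capture: task allocation (grid and block shape) and data allocation in the memory hierarchy (host, global, and on-chip shared memory). The kernel, constants, and names are hypothetical.

```python
from numba import cuda, float32
import numpy as np

TPB = 128  # threads per block: a task-allocation decision the model would capture

@cuda.jit
def scaled_sum(x, y, out, alpha):
    """Hypothetical generated kernel computing out = alpha * x + y.

    Staging x through shared memory illustrates the explicit "data allocation
    in the memory hierarchy" that the MARTE-based model is meant to describe.
    """
    tile = cuda.shared.array(shape=TPB, dtype=float32)  # on-chip shared memory
    i = cuda.grid(1)                                     # global thread index
    t = cuda.threadIdx.x
    if i < x.shape[0]:
        tile[t] = x[i]                                   # global -> shared placement
    cuda.syncthreads()
    if i < x.shape[0]:
        out[i] = alpha * tile[t] + y[i]                  # compute from shared data

# Hypothetical launch: explicit host <-> device placement plus the grid shape.
n = 1 << 16
x = np.arange(n, dtype=np.float32)
y = np.ones(n, dtype=np.float32)
d_x, d_y = cuda.to_device(x), cuda.to_device(y)
d_out = cuda.device_array_like(x)
scaled_sum[(n + TPB - 1) // TPB, TPB](d_x, d_y, d_out, np.float32(2.0))
out = d_out.copy_to_host()
```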

    The "MIND" Scalable PIM Architecture

    MIND (Memory, Intelligence, and Network Device) is an advanced parallel computer architecture for high performance computing and scalable embedded processing. It is a Processor-in-Memory (PIM) architecture integrating both DRAM bit cells and CMOS logic devices on the same silicon die. MIND is multicore, with multiple memory/processor nodes on each chip, and supports global shared memory across systems of MIND components. MIND is distinguished from other PIM architectures in that it incorporates mechanisms for the efficient support of a global parallel execution model based on the semantics of message-driven multithreaded split-transaction processing. MIND is designed to operate either in conjunction with other conventional microprocessors or in standalone arrays of like devices. It also incorporates mechanisms for fault tolerance, real-time execution, and active power management. This paper describes the major elements and operational methods of the MIND architecture.
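    The abstract names the execution model but not its interface, so the following is a hypothetical software analogue of message-driven, split-transaction processing rather than MIND's actual parcel format: a request message activates a handler at the node that owns the data, and the reply arrives later as another message instead of the requester blocking on a remote access. All class and field names are illustrative.

```python
import queue
import threading
import time

class Node:
    """Toy software analogue of a PIM node: node-local memory plus a parcel handler."""
    def __init__(self, node_id, nodes):
        self.memory = {}                 # node-local data (the memory half of PIM)
        self.inbox = queue.Queue()       # incoming parcels (messages)
        self.node_id, self.nodes = node_id, nodes

    def send(self, dest, parcel):
        self.nodes[dest].inbox.put(parcel)

    def run(self):
        while True:
            parcel = self.inbox.get()    # a message arrival activates local work
            if parcel is None:
                break
            if parcel["kind"] == "load_request":      # request half of the transaction
                value = self.memory.get(parcel["addr"])
                self.send(parcel["reply_to"],
                          {"kind": "load_reply", "tag": parcel["tag"], "value": value})
            elif parcel["kind"] == "load_reply":      # reply half resumes the waiting work
                print(f"node {self.node_id}: tag {parcel['tag']} -> {parcel['value']}")

nodes = {}
nodes.update({i: Node(i, nodes) for i in range(2)})
threads = [threading.Thread(target=n.run, daemon=True) for n in nodes.values()]
for t in threads:
    t.start()

nodes[1].memory["x"] = 42                # data lives at node 1
nodes[0].send(1, {"kind": "load_request", "addr": "x", "reply_to": 0, "tag": 7})

time.sleep(0.2)                          # let the two parcel hops complete
for n in nodes.values():
    n.inbox.put(None)                    # shut the handler loops down
```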