
    Online Change Point Detection in Molecular Dynamics With Optical Random Features

    Proteins are made of atoms that fluctuate constantly but can occasionally undergo large-scale changes. Such transitions are of biological interest, linking the structure of a protein to its function within a cell. Atomic-level simulations, such as Molecular Dynamics (MD), are used to study these events. However, MD simulations produce time series with many observables, while changes often affect only a few of them. Detecting conformational changes has therefore proven challenging for most change-point detection algorithms. In this work, we focus on identifying such events given many noisy observables. In particular, we show that the No-prior-Knowledge Exponential Weighted Moving Average (NEWMA) algorithm can be used along with optical hardware to identify these changes in real time. Our method does not need to distinguish between the protein and its background. For larger simulations, it is faster than traditional silicon hardware and has a lower memory footprint. This technique may enhance the sampling of the conformational space of molecules, and may also be used to detect change points in other sequential data with a large number of features.
    Comment: 15 pages, 12 figures
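
    As a minimal sketch of the NEWMA idea, assuming nothing beyond this abstract: two exponentially weighted moving averages of random features, updated with different forgetting factors, drift apart when the distribution of the stream changes. The random projection below merely simulates the optical hardware, and all sizes, constants, and the toy stream are illustrative, not the paper's settings.

    import numpy as np

    rng = np.random.default_rng(0)
    d, m = 50, 200                                 # observables per frame, random features
    W = rng.standard_normal((m, d)) / np.sqrt(d)   # fixed random projection (stands in for the optics)
    b = rng.uniform(0.0, 2.0 * np.pi, m)           # random phases

    def features(x):
        # random Fourier features of one frame of the stream
        return np.cos(W @ x + b)

    lam, Lam = 0.05, 0.10                          # slow and fast forgetting factors
    z_slow, z_fast = np.zeros(m), np.zeros(m)

    def step(x):
        """Feed one frame; return the NEWMA detection statistic."""
        global z_slow, z_fast
        phi = features(x)
        z_slow = (1.0 - lam) * z_slow + lam * phi
        z_fast = (1.0 - Lam) * z_fast + Lam * phi
        return np.linalg.norm(z_fast - z_slow)

    # Toy stream: a mean shift at t = 500 plays the role of a conformational change.
    stream = np.vstack([rng.normal(0, 1, (500, d)), rng.normal(2, 1, (500, d))])
    stats = [step(x) for x in stream]

    # After the EWMA burn-in, the statistic peaks shortly after the true change.
    print("statistic peaks at t =", 100 + int(np.argmax(stats[100:])))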

    Massively parallel reasoning in transitive relationship hierarchies

    This research focuses on building a parallel knowledge representation and reasoning system for the purpose of making progress toward realizing human-like intelligence. To achieve human-like intelligence, it is necessary to model human reasoning processes by programs. Knowledge in the real world is huge in size, complex in structure, and constantly changing, even in limited domains. Unfortunately, reasoning algorithms are very often intractable, which means they are too slow for practical applications. One technique to deal with this problem is to design special-purpose reasoners. Many past AI systems have worked rather nicely for limited problem sizes, but attempts to extend them to realistic subsets of world knowledge have led to difficulties. Even special-purpose reasoners are not immune to this impasse. In this work, to overcome this problem, we combine special-purpose reasoners with massive parallelism. We have developed and implemented a massively parallel transitive closure reasoner, called Hydra, that can dynamically assimilate any transitive, binary relation and efficiently answer queries using the transitive closure of all those relations. Within certain limitations, we achieve constant-time responses for transitive closure queries. Hydra can dynamically insert new concepts or new links into a knowledge base for realistic problem sizes. Getting near human-like reasoning capabilities requires the possibility of dynamic updates to the transitive relation hierarchies; our incremental, massively parallel update algorithms achieve almost constant-time updates of large knowledge bases. Hydra expands the boundaries of Knowledge Representation and Reasoning in a number of directions: (1) Hydra improves the representational power of current systems. We have developed a set-based representation for class hierarchies that makes it easy to represent class hierarchies on arrays of processors. Furthermore, we have developed and implemented two methods for mapping this set-based representation onto the processor space of a Connection Machine. These two representations, the Grid Representation and the Double Strand Representation, successively improve transitive closure reasoning in terms of speed and processor utilization. (2) Hydra allows fast retrieval and dynamic update of a large knowledge base. New fast update algorithms are formulated to dynamically insert new concepts or new relations into a knowledge base of thousands of nodes. (3) Hydra provides reasoning based on mixed hierarchical representations. We have designed representational tools and massively parallel reasoning algorithms to model reasoning in combined IS-A, Part-of, and Contained-in hierarchies. (4) Hydra's reasoning facilities have been successfully applied to the Medical Entities Dictionary, a large medical vocabulary of Columbia Presbyterian Medical Center. As a result of (1)-(3), Hydra is more general than many current special-purpose reasoners, faster than currently existing general-purpose reasoners, and its knowledge base can be updated dynamically.
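
    The following is a toy sketch of the set-based closure idea, not Hydra's actual Grid or Double Strand encodings: each concept carries the full set of nodes it reaches and the set of nodes that reach it, so a transitive-closure query is a single membership test and an edge insertion updates only the affected sets. On a Connection Machine those sets become bit-vectors spread over the processor array and the loops below run in parallel; plain Python sets stand in for them here, and all names are hypothetical.

    from collections import defaultdict

    reach = defaultdict(set)   # reach[x]: all nodes reachable from x
    rev = defaultdict(set)     # rev[x]:   all nodes that can reach x

    def insert(u, v):
        """Insert a transitive link u -> v and incrementally close it."""
        sources = rev[u] | {u}        # everything that already reaches u
        targets = reach[v] | {v}      # everything v already reaches
        for s in sources:             # data-parallel over processors on a CM
            reach[s] |= targets
        for t in targets:
            rev[t] |= sources

    def reaches(u, v):
        """Constant-time query against the materialized closure."""
        return v in reach[u]

    # A tiny IS-A hierarchy: dog -> mammal -> animal, plus a late update.
    insert("dog", "mammal")
    insert("mammal", "animal")
    print(reaches("dog", "animal"))    # True, via the closed sets
    insert("puppy", "dog")             # dynamic update propagates through the closure
    print(reaches("puppy", "animal"))  # True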

    Cogitator: a parallel, fuzzy, database-driven expert system

    The quest to build anthropomorphic machines has led researchers to focus on knowledge and the manipulation thereof. Recently, the expert system was proposed as a solution, working well in small, well-understood domains. However, these initial attempts highlighted the tedious process associated with building systems to display intelligence, the most notable difficulty being the Knowledge Acquisition Bottleneck. Attempts to circumvent this problem have led researchers to propose the use of machine learning and databases as a source of knowledge. Attempts to utilise databases as sources of knowledge have led to the development of Database-Driven Expert Systems. Furthermore, it has been ascertained that a requisite for intelligent systems is powerful computation. In response to these problems and proposals, a new type of database-driven expert system, Cogitator, is proposed. It is shown to circumvent the Knowledge Acquisition Bottleneck and to possess many other advantages over both traditional expert systems and connectionist systems, whilst having only minor disadvantages.
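
    The abstract gives no implementation detail, so the following only illustrates the fuzzy half of such a system in generic terms: graded memberships replace crisp rule preconditions, and a rule fires to the degree its conditions hold, with min acting as fuzzy AND. Every function name and threshold below is hypothetical.

    def trapezoid(x, a, b, c, d):
        """Trapezoidal membership: 0 outside [a, d], 1 on [b, c], linear ramps between."""
        if x <= a or x >= d:
            return 0.0
        if b <= x <= c:
            return 1.0
        return (x - a) / (b - a) if x < b else (d - x) / (d - c)

    def high_fever(temp_c):
        return trapezoid(temp_c, 37.5, 38.5, 41.0, 42.0)

    def high_heart_rate(bpm):
        return trapezoid(bpm, 90, 110, 180, 200)

    # Rule: IF fever is high AND heart rate is high THEN infection risk,
    # firing to the degree both conditions hold.
    def infection_risk(temp_c, bpm):
        return min(high_fever(temp_c), high_heart_rate(bpm))

    print(infection_risk(38.0, 115))  # 0.5: the rule fires to degree 0.5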

    CIRCUITS AND ARCHITECTURE FOR BIO-INSPIRED AI ACCELERATORS

    Technological advances in microelectronics envisioned through Moore’s law have led to powerful processors that can handle complex and computationally intensive tasks. Nonetheless, these advancements through technology scaling have come at the unfavorable cost of significantly larger power consumption, which has posed challenges for data processing centers and computers at scale. Moreover, with the emergence of mobile computing platforms constrained by power and bandwidth for distributed computing, the need for more energy-efficient, scalable local processing has become more pressing. Unconventional compute-in-memory architectures, such as the analog winner-takes-all associative memory and the Charge-Injection Device processor, have been proposed as alternatives. Unconventional charge-based computation has been employed for neural network accelerators in the past, where impressive energy efficiency per operation has been attained in 1-bit vector-vector multiplications and, in recent work, multi-bit vector-vector multiplications. In the latter, computation was carried out by counting quanta of charge at the thermal noise limit, using packets of about 1000 electrons. These systems are neither analog nor digital in the traditional sense but employ mixed-signal circuits to count the packets of charge, and hence we call them Quasi-Digital. By amortizing the energy costs of the mixed-signal encoding/decoding over compute vectors with many elements, high energy efficiencies can be achieved. In this dissertation, I present a design framework for AI accelerators using scalable compute-in-memory architectures. On the device level, two primitive elements are designed and characterized as target computational technologies: (i) a multilevel non-volatile cell and (ii) a pseudo Dynamic Random-Access Memory (pseudo-DRAM) bit-cell. At the circuit level, compute-in-memory crossbars and mixed-signal circuits were designed, allowing seamless connectivity to digital controllers. At the level of data representation, both binary and stochastic-unary coding are used to compute Vector-Vector Multiplications (VMMs) at the array level. Finally, on the architectural level, two AI accelerators, one for data-center processing and one for edge computing, are discussed. Both designs are scalable multi-core Systems-on-Chip (SoCs), where vector-processor arrays are tiled on a 2-layer Network-on-Chip (NoC), enabling neighbor communication and a flexible compute vs. memory trade-off. General-purpose Arm/RISC-V co-processors provide bootstrapping and system housekeeping, and a high-speed interface fabric facilitates input/output to main memory.
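
    As an illustration of the stochastic-unary coding mentioned above (a generic software model, not the dissertation's circuits): a value in [0, 1] becomes a Bernoulli bitstream, and AND-ing two independent streams while counting the ones estimates their product, which is the arithmetic that counting packets of charge performs in hardware. Stream length and the example values are illustrative.

    import numpy as np

    rng = np.random.default_rng(1)
    N = 4096  # bitstream length: longer streams trade energy/latency for accuracy

    def to_stream(v, n=N):
        """Encode v in [0, 1] as a Bernoulli bitstream with P(bit = 1) = v."""
        return rng.random(n) < v

    def dot_stochastic(x, w):
        """Estimate x . w by AND-ing independent streams and counting ones per element."""
        return sum((to_stream(xi) & to_stream(wi)).mean() for xi, wi in zip(x, w))

    x = np.array([0.2, 0.9, 0.5, 0.7])
    w = np.array([0.8, 0.1, 0.6, 0.3])
    print("stochastic:", round(dot_stochastic(x, w), 3), " exact:", float(x @ w))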

    Challenges of Massive Parallelism

    Artificial Intelligence has been the field of study for exploring the principles underlying thought and utilizing their discovery to develop useful computers. Traditional AI models have been, consciously or subconsciously, optimized for the available computing resources, which has led AI in certain directions. The emergence of massively parallel computers liberates the way intelligence may be modeled. Although the AI community has yet to make a quantum leap, there are attempts to exploit the opportunities offered by massively parallel computers, such as memory-based reasoning, genetic algorithms, and other novel models. Even within the traditional AI approach, researchers have begun to realize that the need for high-performance computing and very large knowledge bases to develop intelligent systems requires massively parallel AI techniques. In this Computers and Thought Award lecture, I will argue that massively parallel artificial intelligence will add new dimensions to the ways the goals of AI are pursued, and demonstrate that massively parallel artificial intelligence is where AI meets the real world.