
    Nanoscale Assembly of Functional Peptides with Divergent Programming Elements

    Self-assembling peptides are being applied both in the biomedical area and as building blocks in nanotechnology. Their applications are closely linked to their modes of self-assembly, which determine the functional nanostructures that they form. This work brings together two structural elements that direct nanoscale self-association in divergent directions: proline as a β-breaker and the β-structure-associated diphenylalanine motif, into a single tripeptide sequence. Amino acid chirality was found to resolve the tension inherent to these conflicting self-assembly instructions. Stereoconfiguration determined the ability of each of the eight possible Pro-Phe-Phe stereoisomers to self-associate into diverse nanostructures, including nanoparticles, nanotapes, and fibrils, the last of which yielded hydrogels with a gel-to-sol transition at a physiologically relevant temperature. Three single-crystal structures and all-atom molecular dynamics simulations elucidated the ability of each peptide to establish the key interactions needed to form long-range assemblies (i.e., stacks leading to gelling fibrils), medium-range assemblies (i.e., stacks yielding nanotapes), or short-range assemblies (i.e., dimers or trimers that further associated into nanoparticles). Importantly, diphenylalanine is known to serve as a binding site for pathological amyloids, potentially allowing these heterochiral systems to influence the fibrillization of other biologically relevant peptides. To probe this hypothesis, all eight Pro-Phe-Phe stereoisomers were tested in vitro on the Alzheimer's disease-associated Aβ(1-42) peptide. Indeed, one nonfibril-forming stereoisomer effectively inhibited Aβ fibrillization through multivalent binding between diphenylalanine motifs. This work thus established heterochirality as a useful feature for the strategic development of future therapeutics that interfere with pathological processes, with the added value of resistance to protease-mediated degradation and biocompatibility.

    Combinatory logic: from philosophy and mathematics to computer science

    In 1920, Moses Schönfinkel provided the first rough details of what later became known as combinatory logic. This endeavour was part of Hilbert’s program to formulate mathematics as a consistent logical system based on a finite set of axioms and inference rules. This program’s importance to the foundations and philosophical aspects of mathematics is still celebrated today. In the 1930s, Haskell Curry furthered Schönfinkel’s work on combinatory logic, attempting – and failing – to show that it could serve as a foundation for mathematics. In 1947, however, he described a high-level functional programming language based on combinatory logic. Research on functional programming languages continued, reaching a high point in the eighties, but by that time object-oriented programming languages had begun taking over and functional languages started to lose their appeal. Lately, however, a resurgence of functional languages has been noted: many commonly used programming languages now incorporate functional programming elements, while functional languages such as Haskell, OCaml and Erlang are gaining in popularity. In light of this revival, this paper breathes new life into combinatory logic by presenting its main ideas and techniques.
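    Schönfinkel's central insight was that all functions can be built from just two combinators, S and K, with application as the only operation. A minimal sketch in Python (the lambda encoding and the helper names `inc`/`dbl` are ours; only the combinators themselves come from the literature):

```python
# Curried one-argument functions, mirroring Schönfinkel's reduction of
# multi-argument functions to chains of one-argument ones ("currying").
S = lambda f: lambda g: lambda x: f(x)(g(x))  # S f g x = f x (g x)
K = lambda x: lambda y: x                     # K x y = x

# The identity combinator I need not be primitive: I = S K K.
I = S(K)(K)
assert I(42) == 42
assert K("a")("b") == "a"

# Function composition is also derivable: B = S (K S) K, so B f g x = f (g x).
B = S(K(S))(K)
inc = lambda n: n + 1
dbl = lambda n: n * 2
assert B(inc)(dbl)(10) == 21  # inc(dbl(10))
```

    Everything here reduces purely by application, which is exactly what makes combinatory logic attractive as a variable-free foundation for functional languages.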

    A definition of the ARCA notation

    ARCA is a programming notation intended for the interactive specification and manipulation of combinatorial graphs. The main body of this report is a technical description of ARCA sufficiently detailed to allow an interpreter to be developed. Some simple illustrative programs are included. ARCA incorporates variables for denoting primitive data elements (essentially vertices, edges and scalars) and diagrams (essentially embedded graphs). A novel feature is the use of two kinds of variable: one storing values (as in conventional procedural languages), the other storing functional definitions (as in nonprocedural languages). By means of such variables, algebraic expressions over the algebra of primitive data elements may represent either explicit values or formulae. The potential applications and limitations of ARCA, and of more general "algebraic notations" defined using similar principles, are briefly discussed.
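    The distinction between the two kinds of variable can be mimicked outside ARCA. A hypothetical Python sketch (the names and mechanism are ours, not ARCA syntax): a value variable snapshots the result of an expression at assignment time, while a formula variable stores the definition and is re-evaluated on demand.

```python
# Illustration only: value variables vs. formula (definition) variables.
env = {"x": 2, "y": 3}

value_var = env["x"] + env["y"]            # explicit value: frozen at 5
formula_var = lambda: env["x"] + env["y"]  # formula: re-evaluated per use

env["x"] = 10
assert value_var == 5        # unaffected by the later update
assert formula_var() == 13   # reflects the current environment
```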

    A generic imperative language for polynomial time

    The ramification method in Implicit Computational Complexity has been associated with functional programming, but adapting it to generic imperative programming is highly desirable, given the wider algorithmic applicability of imperative programming. We introduce a new approach to ramification which, among other benefits, adapts readily to fully general imperative programming. The novelty lies in ramifying finite second-order objects, namely finite structures, rather than ramifying elements of free algebras. In so doing we build a bridge between Implicit Complexity's type-theoretic characterizations of feasibility and the data-flow approach of Static Analysis. Comment: 18 pages, submitted to a conference.

    Ant Colony Based Hybrid Approach for Optimal Compromise Sum-Difference Patterns Synthesis

    Dealing with the synthesis of monopulse array antennas, many stochastic optimization algorithms have been used to solve the so-called optimal compromise problem between sum and difference patterns when sub-arrayed feed networks are considered. More recently, hybrid approaches, exploiting the convexity of the functional with respect to a subset of the unknowns (i.e., the sub-array excitation coefficients), have demonstrated their effectiveness. In this letter, a hybrid approach based on Ant Colony Optimization (ACO) is proposed. In the first step, the ACO is used to define the sub-array membership of the array elements; in the second step, the sub-array weights are computed by solving a convex programming problem. The definitive version is available at www3.interscience.wiley.co
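    The two-step structure of such hybrids can be sketched in miniature. In the toy Python below, random restarts stand in for the ACO's discrete search, and the convex inner problem is a deliberately simplified per-sub-array least-squares fit with a closed-form solution; the objective, data model, and all names are ours, not the letter's antenna formulation.

```python
import random

def inner_weights(membership, a, t, n_sub):
    """For a fixed membership c(i), minimize sum_i (w[c(i)]*a[i] - t[i])^2.
    Convex in w, with a per-sub-array closed form:
    w_k = sum(a_i*t_i) / sum(a_i^2) over the elements assigned to k."""
    w = []
    for k in range(n_sub):
        idx = [i for i, c in enumerate(membership) if c == k]
        num = sum(a[i] * t[i] for i in idx)
        den = sum(a[i] ** 2 for i in idx)
        w.append(num / den if den else 0.0)
    return w

def cost(membership, w, a, t):
    return sum((w[c] * a[i] - t[i]) ** 2 for i, c in enumerate(membership))

def hybrid(a, t, n_sub, trials=200, seed=0):
    """Outer stochastic search over discrete memberships (random restarts
    here, where the letter uses ACO), with an exact convex inner solve."""
    rng = random.Random(seed)
    best = None
    for _ in range(trials):
        m = [rng.randrange(n_sub) for _ in a]
        w = inner_weights(m, a, t, n_sub)
        c = cost(m, w, a, t)
        if best is None or c < best[0]:
            best = (c, m, w)
    return best
```

    The point of the decomposition is that only the discrete membership needs a stochastic search; once it is fixed, the continuous weights are recovered exactly from a convex problem.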

    Parallel Discrete Event Simulation with Erlang

    Discrete Event Simulation (DES) is a widely used technique in which the state of the simulator is updated by events happening at discrete points in time (hence the name). DES is used to model and analyze many kinds of systems, including computer architectures, communication networks, street traffic, and others. Parallel and Distributed Simulation (PADS) aims at improving the efficiency of DES by partitioning the simulation model across multiple processing elements, in order to enable larger and/or more detailed studies to be carried out. Interest in PADS has been increasing since the widespread availability of multicore processors and affordable high-performance computing clusters. However, designing parallel simulation models requires considerable expertise, with the result that PADS techniques are not as widespread as they could be. In this paper we describe ErlangTW, a parallel simulation middleware based on the Time Warp synchronization protocol. ErlangTW is entirely written in Erlang, a concurrent functional programming language specifically targeted at building distributed systems. We argue that writing parallel simulation models in Erlang is considerably easier than using conventional programming languages. Moreover, ErlangTW allows simulation models to be executed on single-core, multicore, and distributed computing architectures. We describe the design and prototype implementation of ErlangTW, and report some preliminary performance results on multicore and distributed architectures using the well-known PHOLD benchmark. Comment: Proceedings of the ACM SIGPLAN Workshop on Functional High-Performance Computing (FHPC 2012), in conjunction with ICFP 2012. ISBN: 978-1-4503-1577-
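    The sequential core that PADS parallelizes is small. A minimal Python sketch of a plain event loop (illustrative only: this shows DES itself, not the Time Warp protocol or ErlangTW's design; all names are ours):

```python
import heapq

class Simulator:
    """Events are (timestamp, sequence, handler) tuples in a priority queue;
    the simulation clock jumps from one event time to the next."""
    def __init__(self):
        self.now = 0.0
        self._queue = []
        self._seq = 0  # tie-breaker so equal-time events stay FIFO-ordered

    def schedule(self, delay, handler):
        heapq.heappush(self._queue, (self.now + delay, self._seq, handler))
        self._seq += 1

    def run(self, until=float("inf")):
        while self._queue and self._queue[0][0] <= until:
            self.now, _, handler = heapq.heappop(self._queue)
            handler(self)

# Example: a "ping" event that reschedules itself twice.
log = []
def ping(sim, count=[0]):
    log.append(sim.now)
    if count[0] < 2:
        count[0] += 1
        sim.schedule(1.5, ping)

sim = Simulator()
sim.schedule(0.5, ping)
sim.run()
assert log == [0.5, 2.0, 3.5]
```

    Partitioning this loop across processors is what creates the synchronization problem: each partition has its own clock and queue, and protocols such as Time Warp exist to keep causally related events consistent between them.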

    Extensible sparse functional arrays with circuit parallelism

    A longstanding open question in algorithms and data structures is the time and space complexity of pure functional arrays. Imperative arrays provide update and lookup operations that require constant time in the RAM theoretical model, but it is conjectured that no RAM algorithm achieves the same complexity for functional arrays unless restrictions are placed on the operations. The main result of this paper is an algorithm that does achieve optimal unit time and space complexity for update and lookup on functional arrays. This algorithm does not run on a RAM; instead, it exploits the massive parallelism inherent in digital circuits. The algorithm also provides unit-time operations that support storage management, as well as sparse and extensible arrays. The main idea behind the algorithm is to replace a RAM memory by a tree circuit that is more powerful than the RAM yet has the same asymptotic complexity in time (gate delays) and size (number of components). The algorithm uses an array representation that allows elements to be shared between many arrays with only a small constant-factor penalty in space and time. This system exemplifies circuit parallelism, which exploits the very large number of transistors per chip in order to speed up key algorithms. Extensible Sparse Functional Arrays (ESFA) can be used with both functional and imperative programming languages. The system comprises a set of algorithms and a circuit specification, and it has been implemented on a GPGPU with good performance.
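    On a conventional RAM, the standard compromise is a persistent tree with path copying, which gives O(log n) update and lookup rather than the paper's unit-time bound, but it does show the sharing of elements between array versions. A Python sketch (the representation and names are ours, not the paper's circuit):

```python
# Persistent ("functional") array as a binary trie keyed by index bits.
# update() copies only the root-to-leaf path; all other subtrees are shared
# between the old and new versions, so old versions remain readable.

class Node:
    __slots__ = ("left", "right", "value")
    def __init__(self, left=None, right=None, value=None):
        self.left, self.right, self.value = left, right, value

DEPTH = 8  # supports indices 0 .. 2**DEPTH - 1

def lookup(node, i, depth=DEPTH):
    if node is None:
        return None          # sparse: unset indices read as None
    if depth == 0:
        return node.value
    child = node.right if i & 1 else node.left
    return lookup(child, i >> 1, depth - 1)

def update(node, i, v, depth=DEPTH):
    if depth == 0:
        return Node(value=v)
    left = node.left if node else None
    right = node.right if node else None
    if i & 1:
        return Node(left, update(right, i >> 1, v, depth - 1))
    return Node(update(left, i >> 1, v, depth - 1), right)

v0 = None                # the empty array
v1 = update(v0, 3, "a")  # a new version; v0 is untouched
v2 = update(v1, 3, "b")
assert lookup(v1, 3) == "a"   # the old version is still readable
assert lookup(v2, 3) == "b"
assert lookup(v2, 200) is None
```

    The paper's contribution is to collapse the O(log n) path traversal into constant time by making the tree a physical circuit whose levels operate in parallel, rather than a pointer structure walked sequentially by a RAM.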