    CADRS: A System for the Design of Complex Data Structures in LISP

    This thesis describes a project to design and implement a data abstraction facility for LISP. The result of this project, the CADRS (Compiler for Access of Data Regardless of Structure) system, supports the definition, creation, and efficient manipulation of hybrid data/procedure structures that closely resemble class objects in languages like Simula [2] and Smalltalk
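    The thesis itself is not reproduced here, but the core idea, manipulating data through accessors so that calling code does not depend on the concrete representation, can be sketched in a few lines. The following Python fragment is an illustrative analogue only; the function and representation choices are hypothetical and are not taken from CADRS.

```python
# Illustrative only: accessing data "regardless of structure".
# The accessor hides whether a point is stored as a tuple or a dict,
# so calling code never depends on the concrete representation.

def x_of(point):
    """Return the x coordinate, whatever the underlying representation."""
    if isinstance(point, tuple):   # representation 1: (x, y)
        return point[0]
    if isinstance(point, dict):    # representation 2: {"x": ..., "y": ...}
        return point["x"]
    raise TypeError("unknown point representation")

print(x_of((3, 4)))            # 3
print(x_of({"x": 3, "y": 4}))  # 3
```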

    Slisp: A Flexible Software Toolkit for Hybrid, Embedded and Distributed Applications

    We describe Slisp (pronounced 'Ess-Lisp'), a hybrid Lisp–C programming toolkit for the development of scriptable and distributed applications. Computationally expensive operations implemented as separate C-coded modules are selectively compiled into a small Xlisp interpreter, then called as Lisp functions in a Lisp-coded program. The resulting hybrid program may run in several modes: as a stand-alone executable, embedded in a different C program, as a networked server accessed from another Slisp client, or as a networked server accessed from a C-coded client. Five years of experience with Slisp, as well as experience with other scripting languages such as Tcl and Perl, are summarized. These experiences suggest that Slisp will be most useful for mid-sized applications in which the kinds of scripting and embeddability features provided by Tcl and Perl can be extended in an efficient manner to larger applications, while maintaining a well-defined standard (Common Lisp) for these extensions. In addition, the generality of Lisp makes it a good candidate for an application-level communication language in distributed environments.
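    As a rough analogue (not Slisp code), the pattern of exposing a compiled C routine as an ordinary function in a scripting layer can be sketched with Python's ctypes and the standard C math library; the wrapper name fast_cos is hypothetical, and the library lookup may fail on platforms without a separately loadable libm.

```python
# Illustrative analogue only: a C-coded routine wrapped so it can be
# called like a normal function from the scripting layer, in the spirit
# of Slisp's C modules exposed as Lisp functions.
import ctypes
import ctypes.util

libm = ctypes.CDLL(ctypes.util.find_library("m"))  # may fail on Windows
libm.cos.argtypes = [ctypes.c_double]  # declare the C signature
libm.cos.restype = ctypes.c_double

def fast_cos(x):
    """Scripting-level wrapper around the C implementation."""
    return libm.cos(float(x))

print(fast_cos(0.0))  # 1.0
```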

    Pantry: A Macro Library for Python

    Python lacks a simple way to create custom syntax and constructs that go outside of its own syntax rules. A paradigm that makes these possibilities available within a language is macros. Macros allow a short piece of syntax to expand into a longer set of instructions at compile time, giving programmers the ability to evolve the language to fit their needs. Pantry implements a hygienic text-substitution macro system for Python. Pantry achieves this through the introduction of an additional preparsing step that lexes and parses the source code. Pantry provides a way to simply declare a pattern to be recognized, articulate the instructions that replace the pattern, and replace the pattern in the source code. This form of meta-programming allows users to write their Python code more concisely and present the language in a more natural and intuitive manner. We validate Pantry's utility through five use cases inspired by Python Enhancement Proposals (PEPs), which are requests from the Python community for features to be implemented in Python. Pantry fulfills these requests through the composition of macros that perform the new feature.
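    Pantry's actual implementation is not shown in this abstract; as a rough illustration of the preparsing idea, the sketch below performs a plain textual substitution (without Pantry's hygiene machinery), expanding a hypothetical "unless" pattern into standard Python before the code is compiled.

```python
# Illustrative sketch only, not Pantry's API: a preparsing pass that
# recognizes a declared pattern in the source text and expands it into
# longer Python code before the normal parser ever sees it.
import re

MACROS = [
    # "unless COND:"  ->  "if not (COND):"
    (re.compile(r"\bunless\s+(.+?):"), r"if not (\1):"),
]

def preparse(source: str) -> str:
    """Expand registered macros by textual substitution."""
    for pattern, replacement in MACROS:
        source = pattern.sub(replacement, source)
    return source

program = """
x = 5
unless x > 10:
    print("x is small")
"""

exec(compile(preparse(program), "<macro-expanded>", "exec"))  # prints: x is small
```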

    Supporting Massive Mobility with Stream Processing Software

    The goal of this project is to design a solution for massive mobility using the LISP protocol and scalable database systems like Apache Kafka. The project consists of three steps: first, understanding the requirements of the massive mobility scenario; second, designing a solution based on stream processing software that integrates with OOR (an open-source LISP implementation); and third, building a prototype with OOR and a stream processing software (or a similar technology) and evaluating its performance. Our objectives are: understand the requirements of an environment for massive mobility; learn and evaluate the architecture of Apache Kafka and similar message brokers to see if these tools could satisfy the requirements; propose an architecture for massive mobility using the LISP protocol and Kafka as the mapping system; and finally, evaluate the performance of Apache Kafka using such an architecture. In chapters 3 and 4 we provide a summary of the LISP protocol, Apache Kafka, and other message brokers. In these chapters we describe the components of these tools and how we can use them to achieve our objective. We evaluate the different mechanisms for 1) authenticating users, 2) access control lists, 3) protocols to assure the delivery of messages, 4) integrity, and 5) communication patterns. Because we are interested only in the last message of the queue, it is very important that the message broker provides a capability to obtain this message. Regarding the proposed architecture, we show how we adapted Kafka to store the information managed by the mapping system in LISP. An EID in LISP is represented by a topic in Apache Kafka, and the publish-subscribe pattern is used to spread notifications to all subscribers. xTRs or mobile devices can play the role of consumers and publishers of the message broker. Every topic uses only one partition, and every subscriber has its own consumer group to avoid competition to consume the messages. Finally, we evaluate the performance of Apache Kafka. As we will see, Kafka scales linearly in the following cases: the number of packets in the network with respect to the number of topics; the number of packets in the network with respect to the number of subscribers; the number of files opened by the server with respect to the number of topics; and the time elapsed between the moment the publisher sends a message and the subscriber receives it, with respect to the number of topics. In the conclusion we explain which objectives were achieved and why Kafka still faces challenges on two points: 1) we need only the last location (message) stored in the broker, and Kafka does not provide an out-of-the-box mechanism to obtain such a message, and 2) the number of open files that have to be managed simultaneously by the server. More study is required to compare the performance of Kafka against other tools.
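    The thesis code is not included in this abstract; the sketch below illustrates, under stated assumptions, how the proposed mapping could look with the kafka-python client: one single-partition topic per EID, xTRs publishing locator updates, and each subscriber using its own consumer group so that every subscriber receives every notification. The broker address, topic name, locator values, and group id are hypothetical.

```python
# Illustrative sketch of the proposed architecture, not code from the thesis.
# Assumes a local Kafka broker at localhost:9092 and the kafka-python package;
# topics are assumed to be pre-created with a single partition.
import json
from kafka import KafkaProducer, KafkaConsumer

EID_TOPIC = "eid-10.0.0.1"  # one topic per EID (hypothetical name)

# Publisher side: an xTR announcing its current locator (RLOC).
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send(EID_TOPIC, {"rloc": "192.0.2.7", "priority": 1})
producer.flush()

# Subscriber side: an xTR interested in that EID. A distinct group_id per
# subscriber means each one gets its own copy of every message; only the
# most recent message (the current location) actually matters.
consumer = KafkaConsumer(
    EID_TOPIC,
    bootstrap_servers="localhost:9092",
    group_id="xtr-site-A",
    auto_offset_reset="latest",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
for record in consumer:
    print("EID", EID_TOPIC, "is now at", record.value["rloc"])
```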
    • 

    corecore