    Information flow analysis for a dynamically typed language with staged metaprogramming

    Web applications written in JavaScript are regularly used for dealing with sensitive or personal data. Consequently, reasoning about their security properties has become an important problem, which is made very difficult by the highly dynamic nature of the language, particularly its support for runtime code generation via eval. In order to deal with this, we propose to investigate security analyses for languages with more principled forms of dynamic code generation. To this end, we present a static information flow analysis for a dynamically typed functional language with prototype-based inheritance and staged metaprogramming. We prove its soundness, implement it and test it on various examples designed to show its relevance to proving security properties, such as noninterference, in JavaScript. To demonstrate the applicability of the analysis, we also present a general method for transforming a program using eval into one using staged metaprogramming. To our knowledge, this is the first fully static information flow analysis for a language with staged metaprogramming, and the first formal soundness proof of a CFA-based information flow analysis for a functional programming language.
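
    The abstract's central move, replacing eval of arbitrary strings with staged code values whose structure is known before execution, can be sketched in a few lines. The Python fragment below is illustrative only: quote and splice are hypothetical stand-ins for real staging constructs, and the paper itself works in a dynamically typed functional language, not Python.

        # Unstaged: code arrives as a string and runs via eval, so nothing
        # about it can be analyzed before execution.
        def apply_rule_eval(price, rule_src):
            return eval(rule_src, {"price": price})   # e.g. "price * 0.9"

        # Staged: the fragment is built as an inspectable code value, so a
        # static analysis can follow how data flows through it.
        def quote(fn):            # hypothetical quotation form
            return fn

        def splice(code, *args):  # hypothetical "run at the next stage"
            return code(*args)

        rule = quote(lambda price: price * 0.9)
        print(apply_rule_eval(100, "price * 0.9"))  # 90.0
        print(splice(rule, 100))                    # 90.0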

    Almost as helpful as good theory: Some conceptual possibilities for the online classroom

    Interest and activity in the use of C&IT in higher education is growing, and while there is effort to understand the complexity of the transition to virtual space, aspects of development, particularly clarity about the nature of the learning community, may only be lightly theorized. Based on an ongoing action research study involving postgraduate students studying in the UK and USA, this paper will identify some theoretical roots and derive from these six conceptual areas that seem to the authors to have relevance and significance for behaviour online. An exploration of these forms the basis for a two‐dimensional model which can account for what happens when groups come together to learn in cyberspace. In depicting this model, there is acknowledgement of the existence of third and fourth dimensions at work. However, the explanatory power of taking these extra dimensions into account is beyond the scope of the analysis thus far.

    Time to experiment: A response

    It is with some pleasure that we were given the opportunity to offer this paper for commentary, and we are grateful for the efforts made by readers to help us to refine our thinking. Given the constraints of space, we will respond to the main comments in turn. We plan to submit a more considered and elegant paper to a future edition when we have worked more on our model.

    About time

    Time has historically been a measure of progress of recurrent physical processes. Coordination of future actions, prediction of future events, and assigning order to events are three practical reasons for implementing clocks and signalling mechanisms. In large networks of computers, these needs lead to the problem of synchronizing the clocks throughout the network. Recent methods allow this to be done in large networks with precision around 1 millisecond despite mean message exchange times near 5 milliseconds. These methods are discussed.
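
    The core calculation behind such methods can be shown with an NTP-style round-trip estimate (a sketch, not any specific protocol; query_server is a hypothetical helper returning the server's receive and send timestamps). Averaging and filtering many such samples is what pushes precision well below the mean message exchange time.

        import time

        def estimate_offset(query_server):
            """Estimate a server clock's offset from the local clock.

            t0: local send time    t1: server receive time
            t2: server send time   t3: local receive time
            """
            t0 = time.time()
            t1, t2 = query_server()                # server-side timestamps
            t3 = time.time()
            offset = ((t1 - t0) + (t2 - t3)) / 2   # exact if the two delays are equal
            round_trip = (t3 - t0) - (t2 - t1)     # bounds the remaining error
            return offset, round_trip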

    Massive parallelism in the future of science

    Massive parallelism appears in three domains of action of concern to scientists, where it produces collective action that is not possible from any individual agent's behavior. In the domain of data parallelism, computers comprising very large numbers of processing agents, one for each data item in the result, will be designed. These agents can collectively solve problems thousands of times faster than current supercomputers. In the domain of distributed parallelism, computations comprising large numbers of resources attached to the world network will be designed. The network will support computations far beyond the power of any one machine. In the domain of people parallelism, collaborations will be designed among large groups of scientists around the world who participate in projects that endure well past the sojourns of individuals within them. Computing and telecommunications technology will support the large, long projects that will characterize big science by the turn of the century. Scientists must become masters in these three domains during the coming decade.
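
    As a small illustration of the data-parallelism domain, one operation applied to every item at once, here is a Python sketch; the essay envisions hardware with one processing agent per data item, which a process pool only approximates.

        from multiprocessing import Pool

        def agent(item):
            return item * item          # each agent transforms its own item

        if __name__ == "__main__":
            data = range(100_000)
            # Conceptually one agent per item, multiplexed onto real cores.
            with Pool() as pool:
                squares = pool.map(agent, data)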

    Memory protection

    Accidental overwriting of files or of memory regions belonging to other programs, browsing of personal files by superusers, Trojan horses, and viruses are examples of breakdowns in workstations and personal computers that would be significantly reduced by memory protection. Memory protection is the capability of an operating system and supporting hardware to delimit segments of memory, to control whether segments can be read from or written into, and to confine accesses of a program to its segments alone. The absence of memory protection in many operating systems today is the result of a bias toward a narrow definition of performance as maximum instruction-execution rate. A broader definition, including the time to get the job done, makes clear that the cost of recovery from memory interference errors reduces expected performance. The mechanisms of memory protection are well understood, powerful, efficient, and elegant. They add to performance in the broad sense without reducing instruction execution rate.
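
    The mechanism described, delimited segments carrying read/write rights with accesses confined to a program's own segments, fits in a short sketch (the class and function names below are ours, not from any particular operating system):

        class Segment:
            def __init__(self, base, length, readable, writable):
                self.base, self.length = base, length
                self.readable, self.writable = readable, writable

        class ProtectionFault(Exception):
            pass

        def check_access(segments, addr, write=False):
            """Permit the access only if addr falls inside one of the
            program's segments and that segment grants the right."""
            for seg in segments:
                if seg.base <= addr < seg.base + seg.length:
                    if write and not seg.writable:
                        raise ProtectionFault("segment is read-only")
                    if not write and not seg.readable:
                        raise ProtectionFault("segment is not readable")
                    return True
            raise ProtectionFault("address outside the program's segments")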

    A view of Kanerva's sparse distributed memory

    Pentti Kanerva is working on a new class of computers, which are called pattern computers. Pattern computers may close the gap between the capabilities of biological organisms to recognize and act on patterns (visual, auditory, tactile, or olfactory) and the capabilities of modern computers. Combinations of numeric, symbolic, and pattern computers may one day be capable of sustaining robots. An overview of the requirements for a pattern computer, a summary of Kanerva's Sparse Distributed Memory (SDM), and examples of tasks this computer can be expected to perform well are given.
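
    A minimal sketch of the SDM idea, with toy-sized parameters (Kanerva's design uses far longer vectors and more hard locations): a word is written into every hard location whose fixed random address lies within a Hamming radius of the write address, and a read recovers it by a per-bit majority vote over the same locations.

        import numpy as np

        rng = np.random.default_rng(0)
        N, M, RADIUS = 256, 1000, 112                # word length, locations, radius

        hard_addresses = rng.integers(0, 2, size=(M, N))   # fixed random addresses
        counters = np.zeros((M, N), dtype=int)             # one counter per stored bit

        def activated(addr):
            dist = np.count_nonzero(hard_addresses != addr, axis=1)
            return dist <= RADIUS                    # locations within Hamming radius

        def write(addr, word):
            counters[activated(addr)] += 2 * word - 1    # +1 for 1-bits, -1 for 0-bits

        def read(addr):
            sums = counters[activated(addr)].sum(axis=0)
            return (sums > 0).astype(int)            # majority vote per bit

        w = rng.integers(0, 2, size=N)
        write(w, w)                                  # autoassociative store
        assert np.array_equal(read(w), w)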

    Virtual memory

    Virtual memory was conceived as a way to automate overlaying of program segments. Modern computers have very large main memories, but need automatic solutions to the relocation and protection problems. Virtual memory serves this need as well and is thus useful in computers of all sizes. The history of the idea is traced, showing how it has become a widespread, little noticed feature of computers today.
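
    The mechanism that provides this automatic relocation and protection, splitting a virtual address into page number and offset and mapping the page through a table, fits in a short sketch (constants and names are ours):

        PAGE_SIZE = 4096

        class PageFault(Exception):
            pass

        def translate(page_table, vaddr):
            """Map a virtual address to a physical one via the page table."""
            page, offset = divmod(vaddr, PAGE_SIZE)
            frame = page_table.get(page)     # page -> physical frame, or None
            if frame is None:
                raise PageFault(page)        # OS would fetch the page, then retry
            return frame * PAGE_SIZE + offset

        page_table = {0: 7, 1: 3}                 # pages 0 and 1 are resident
        print(hex(translate(page_table, 4100)))   # page 1, offset 4 -> 0x3004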

    Will machines ever think?

    Artificial Intelligence research has come under fire for failing to fulfill its promises. A growing number of AI researchers are reexamining the bases of AI research and are challenging the assumption that intelligent behavior can be fully explained as manipulation of symbols by algorithms. Three recent books -- Mind over Machine (H. Dreyfus and S. Dreyfus), Understanding Computers and Cognition (T. Winograd and F. Flores), and Brains, Behavior, and Robots (J. Albus) -- explore alternatives and open the door to new architectures that may be able to learn skills.

    Saving all the bits

    The scientific tradition of saving all the data from experiments for independent validation and for further investigation is under profound challenge by modern satellite data collectors and by supercomputers. The volume of data is beyond the capacity to store, transmit, and comprehend it. A promising line of study is discovery machines that study the data at the collection site and transmit statistical summaries of patterns observed. Examples of discovery machines are the Autoclass system and the genetic memory system of NASA-Ames, and the proposal for knowbots by Kahn and Cerf.
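
    The principle behind such discovery machines, reducing the data where it is collected and shipping only a summary, can be illustrated by a single streaming pass that keeps running statistics (this sketch uses Welford's method and stands in for no particular system):

        def summarize(stream):
            count, mean, m2 = 0, 0.0, 0.0
            lo = hi = None
            for x in stream:
                count += 1
                delta = x - mean
                mean += delta / count        # running mean
                m2 += delta * (x - mean)     # running sum of squared deviations
                lo = x if lo is None else min(lo, x)
                hi = x if hi is None else max(hi, x)
            variance = m2 / (count - 1) if count > 1 else 0.0
            return {"count": count, "mean": mean, "variance": variance,
                    "min": lo, "max": hi}

        print(summarize(iter([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])))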