
    Sql Injection Attacks and Countermeasures: a Survey of Website Development Practices

    This study involved the development and subsequent use of a bespoke SQL Injection vulnerability scanner to analyze a set of unique approaches to common tasks, identified by conducting interviews with developers of high-traffic Web sites. The vulnerability scanner was developed to address many recognized shortcomings in existing scanning software, principal among which were the requirements for a comprehensive yet lightweight solution with which to quickly test targeted aspects of online applications, and for a scriptable, Linux-based system. Emulations of each approach were built, using PHP and MySQL, and then analyzed with the aid of the bespoke scanner. All discovered vulnerabilities were resolved, and despite the variety of approaches to securing online applications adopted by those interviewed, a small number of root causes of SQL Injection vulnerabilities were identified. This allowed a SQL Injection security checklist to be compiled to help developers identify insecure practices prior to an online application's initial release and following any modifications or upgrades.
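
    The checklist itself is not reproduced in the abstract, but the most common root cause this line of work points to, building SQL strings by concatenating untrusted input, and the standard countermeasure of parameterized queries can be illustrated with a short sketch. The snippet below is illustrative only: it uses Python with the built-in sqlite3 driver rather than the PHP/MySQL emulations used in the study, and the table, column, and function names are invented for the example.

        import sqlite3  # any DB-API driver would do; the study's emulations used PHP/MySQL

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
        conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

        def login_vulnerable(name, password):
            # Root cause: untrusted input concatenated into the SQL string.
            # A password such as  ' OR '1'='1  bypasses the check entirely.
            query = ("SELECT COUNT(*) FROM users WHERE name = '" + name +
                     "' AND password = '" + password + "'")
            return conn.execute(query).fetchone()[0] > 0

        def login_parameterized(name, password):
            # Countermeasure: placeholders keep data out of the SQL parse,
            # so the injected text is compared as a literal value.
            query = "SELECT COUNT(*) FROM users WHERE name = ? AND password = ?"
            return conn.execute(query, (name, password)).fetchone()[0] > 0

        print(login_vulnerable("alice", "' OR '1'='1"))      # True  -- injection succeeds
        print(login_parameterized("alice", "' OR '1'='1"))   # False -- injection fails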

    Generalizing input-driven languages: theoretical and practical benefits

    Regular languages (RL) are the simplest family in Chomsky's hierarchy. Thanks to their simplicity, they enjoy various nice algebraic and logic properties that have been successfully exploited in many application fields. Practically all of their related problems are decidable, so that they support automatic verification algorithms. Also, they can be recognized in real-time. Context-free languages (CFL) are another major family well-suited to formalize programming, natural, and many other classes of languages; their increased generative power w.r.t. RL, however, causes the loss of several closure properties and of the decidability of important problems; furthermore, they need complex parsing algorithms. Thus, various subclasses thereof have been defined with different goals, spanning from efficient, deterministic parsing to closure properties, logic characterization, and automatic verification techniques. Among CFL subclasses, so-called structured ones, i.e., those where the typical tree structure is visible in the sentences, exhibit many of the algebraic and logic properties of RL, whereas deterministic CFL have been thoroughly exploited in compiler construction and other application fields. After surveying and comparing the main properties of those various language families, we go back to operator precedence languages (OPL), an old family through which R. Floyd pioneered deterministic parsing, and we show that they offer unexpected properties in two fields so far investigated in totally independent ways: they enable parsing parallelization in a more effective way than traditional sequential parsers, and they exhibit the same algebraic and logic properties so far obtained only for less expressive language families.
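
    As a concrete illustration of the deterministic parsing style that OPL support, the sketch below implements a tiny Floyd-style operator-precedence parser for a toy arithmetic grammar (E -> E+E | E*E | (E) | n). The grammar and precedence table are invented for this example and are far simpler than the OPL machinery surveyed in the paper; the point of the sketch is only that every shift/reduce decision is driven by a local precedence relation between two terminals, the same locality that makes parallel parsing of independent substrings feasible.

        # Floyd-style operator-precedence parsing for the toy grammar
        #   E -> E + E | E * E | ( E ) | n
        # '<' means the stack terminal yields precedence (shift), '>' means it
        # takes precedence (reduce), '=' means equal precedence (matching parens).
        PREC = {
            ('+', '+'): '>', ('+', '*'): '<', ('+', '('): '<', ('+', ')'): '>',
            ('+', 'n'): '<', ('+', '$'): '>',
            ('*', '+'): '>', ('*', '*'): '>', ('*', '('): '<', ('*', ')'): '>',
            ('*', 'n'): '<', ('*', '$'): '>',
            ('(', '+'): '<', ('(', '*'): '<', ('(', '('): '<', ('(', ')'): '=',
            ('(', 'n'): '<',
            (')', '+'): '>', (')', '*'): '>', (')', ')'): '>', (')', '$'): '>',
            ('n', '+'): '>', ('n', '*'): '>', ('n', ')'): '>', ('n', '$'): '>',
            ('$', '+'): '<', ('$', '*'): '<', ('$', '('): '<', ('$', 'n'): '<',
        }

        def parse(tokens):
            """Accepts or rejects a token sequence; every decision is purely local."""
            stack = ['$']                       # terminals plus the marker 'E'
            tokens = list(tokens) + ['$']
            i = 0
            while True:
                top = next(s for s in reversed(stack) if s != 'E')  # topmost terminal
                look = tokens[i]
                if top == '$' and look == '$':
                    return stack == ['$', 'E']  # accept iff exactly one E was built
                rel = PREC.get((top, look))
                if rel in ('<', '='):           # shift
                    stack.append(look)
                    i += 1
                elif rel == '>':                # reduce (simplified handle matching)
                    if stack[-1] == 'n':
                        stack[-1] = 'E'                         # E -> n
                    elif stack[-3:] in (['E', '+', 'E'], ['E', '*', 'E']):
                        del stack[-2:]                          # E -> E op E
                    elif stack[-3:] == ['(', 'E', ')']:
                        stack[-3:] = ['E']                      # E -> ( E )
                    else:
                        return False
                else:
                    return False                # no relation: syntax error

        print(parse('n+n*n'))     # True
        print(parse('(n+n)*n'))   # True
        print(parse('n+*n'))      # False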

    Syllable Segmentation, Normalization, and Dictionary Ordering of Myanmar-Language Text Using Formal Methods

    National University Corporation, Nagaoka University of Technology

    Call numbers and collating sequences

    Note on the uploaded version: the last line of text on p.94 of the published version has been moved to the top of p.95. Call numbers traditionally used to implement shelf order in libraries are often amenable to machine sequencing, since they involve collating sequences which are sufficiently coherent for users and library staff to learn. However, these collating sequences usually do not match those implemented in computer systems, so special programs to transform the call numbers into machine-fileable sequences are necessary if the machines are to display the shelf order of a particular library. Two common machine collating sequences and their relations to some of the properties of Dewey/Cutter call numbers are briefly examined, and a transformation procedure is outlined.
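
    The paper's own transformation procedure is not reproduced here, but the general idea, rewriting a call number so that an ordinary character-by-character machine sort reproduces shelf order, can be sketched as follows. The assumed layout (a Dewey decimal class followed by a Cutter number) and the padding widths are assumptions made for this example rather than the procedure the paper outlines.

        def collation_key(call_number):
            """Rewrite a Dewey/Cutter call number (e.g. '025.3 C68') so that a
            plain character-by-character sort matches shelf order.  The layout
            assumed here (Dewey class, optional decimal part, Cutter number) is
            illustrative only."""
            dewey, _, cutter = call_number.partition(' ')
            whole, _, decimal = dewey.partition('.')
            # Pad the integer part so '25' does not sort after '100', and pad the
            # decimal part on the right so '025.3' sorts before '025.31'.
            key = whole.zfill(3) + '.' + decimal.ljust(6, '0')
            # Cutter numbers sort decimally after the initial letter (C68 < C7),
            # so treat their digits as a decimal fraction, not an integer.
            if cutter:
                letters = cutter.rstrip('0123456789')
                digits = cutter[len(letters):]
                key += ' ' + letters.upper() + '.' + digits.ljust(6, '0')
            return key

        shelf = ['100 A1', '025.31 C7', '25.9 B2', '025.3 C68']
        print(sorted(shelf, key=collation_key))
        # ['025.3 C68', '025.31 C7', '25.9 B2', '100 A1']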

    DNA Chemical Reaction Network Design Synthesis and Compilation

    The advantages of biomolecular computing include 1) the ability to interface with, monitor, and intelligently protect and maintain the functionality of living systems, 2) the ability to create computational devices with minimal energy needs and hazardous waste production during manufacture and lifecycle, 3) the ability to store large amounts of information for extremely long time periods, and 4) the ability to create computation analogous to human brain function. In seeking to realize these advantages over electronics, biomolecular computing is at a watershed moment in its evolution. Computing with entire molecules presents different challenges and requirements than computing just with electric charge. These challenges have led to ad-hoc design and programming methods with high development costs and limited device performance. At present, building a device entails immersion in complete low-level detail. We address these shortcomings by creating a systems engineering process for building and programming DNA-based computing devices. Contributions of this thesis include numeric abstractions for nucleic acid sequence and secondary structure, and a set of algorithms which employ these abstractions. The abstractions and algorithms have been implemented in three artifacts: DNADL, a design description language; Pyxis, a molecular compiler and design toolset; and KCA, a simulation of DNA kinetics using a cellular automaton discretization. Our methods are applicable to other DNA nanotechnology constructions and may serve in the development of a full DNA computing model.
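
    DNADL, Pyxis, and KCA are not shown here; the fragment below only illustrates, in plain Python rather than the thesis's own tooling, what a numeric abstraction of a nucleic acid sequence can look like: bases mapped to small integers so that Watson-Crick complementarity becomes simple arithmetic that design and simulation algorithms can operate on. The particular encoding (A=0, C=1, G=2, T=3) is an assumption made for this sketch, not the thesis's representation.

        # Toy numeric abstraction of a DNA strand: bases become small integers so
        # complementarity and comparison reduce to arithmetic on lists of numbers.
        # The encoding (A=0, C=1, G=2, T=3) is chosen only for this sketch.
        ENCODE = {'A': 0, 'C': 1, 'G': 2, 'T': 3}
        DECODE = 'ACGT'

        def encode(seq):
            return [ENCODE[b] for b in seq.upper()]

        def reverse_complement(codes):
            # With this encoding, the Watson-Crick complement of base x is 3 - x
            # (A<->T, C<->G); reversing gives the antiparallel partner strand.
            return [3 - x for x in reversed(codes)]

        def binds(strand_a, strand_b):
            """True if strand_b is the exact reverse complement of strand_a."""
            return encode(strand_b) == reverse_complement(encode(strand_a))

        s = 'ACGGTT'
        rc = ''.join(DECODE[x] for x in reverse_complement(encode(s)))
        print(rc)               # AACCGT
        print(binds(s, rc))     # True
        print(binds(s, s))      # False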

    Extensible Markup Language (XML) 1.1

    Extensible Markup Language (XML) is a subset of SGML and is completely defined in this document. Its goal is to enable generic SGML to be served, received, and processed on the Web in the way that is now possible with HTML. XML has been designed for ease of implementation and for interoperability with both SGML and HTML. Second edition.
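
    As a minimal, hedged illustration of that "easy to process" goal, the snippet below parses a small well-formed XML document with a generic off-the-shelf parser (Python's standard xml.etree.ElementTree, which targets XML 1.0 and is unrelated to the W3C specification text itself); the document content and element names are invented for the example.

        import xml.etree.ElementTree as ET

        # A small well-formed document; element and attribute names are invented.
        doc = """<catalog>
          <book id="b1"><title>Structured Documents</title></book>
          <book id="b2"><title>Markup on the Web</title></book>
        </catalog>"""

        root = ET.fromstring(doc)          # parsing enforces well-formedness
        for book in root.findall('book'):
            print(book.get('id'), book.findtext('title'))
        # b1 Structured Documents
        # b2 Markup on the Web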

    Pattern Matching and Discourse Processing in Information Extraction from Japanese Text

    Information extraction is the task of automatically picking up information of interest from an unconstrained text. Information of interest is usually extracted in two steps. First, sentence-level processing locates relevant pieces of information scattered throughout the text; second, discourse processing merges coreferential information to generate the output. In the first step, pieces of information are locally identified without recognizing any relationships among them. A keyword search or simple pattern search can achieve this purpose. The second step requires deeper knowledge in order to understand relationships among separately identified pieces of information. Previous information extraction systems focused on the first step, partly because they were not required to link up each piece of information with other pieces. To link the extracted pieces of information and map them onto a structured output format, complex discourse processing is essential. This paper reports on a Japanese information extraction system that merges information using a pattern matcher and discourse processor. Evaluation results show a high level of system performance which approaches human performance. Comment: See http://www.jair.org/ for any accompanying file.
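
    The paper gives no code, but the two-step organization it describes, local pattern matching followed by a discourse step that merges coreferential mentions, can be caricatured with a deliberately simplified sketch. The example below uses English text, regular expressions, and a naive merge rule; the patterns, slot names, and merge strategy are all invented here and are far cruder than the Japanese pattern matcher and discourse processor the paper evaluates.

        import re

        SENTENCES = [
            "Acme Corp. announced a new plant in Osaka.",
            "The company said the plant will open in 1996.",
        ]

        # Step 1: sentence-level pattern matching finds isolated slot fillers.
        PATTERNS = {
            'company':  re.compile(r'([A-Z][a-z]+ Corp\.)'),
            'location': re.compile(r'plant in ([A-Z][a-z]+)'),
            'year':     re.compile(r'open in (\d{4})'),
        }

        def extract(sentence):
            record = {}
            for slot, pattern in PATTERNS.items():
                match = pattern.search(sentence)
                if match:
                    record[slot] = match.group(1)
            return record

        # Step 2: a (very crude) discourse step merges per-sentence records,
        # treating 'The company' / 'the plant' as coreferent with earlier mentions.
        def merge(records):
            template = {}
            for record in records:
                for slot, value in record.items():
                    template.setdefault(slot, value)
            return template

        print(merge(extract(s) for s in SENTENCES))
        # {'company': 'Acme Corp.', 'location': 'Osaka', 'year': '1996'}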