
    Eliminating intermediate lists in pH using local transformations

    Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1994. Includes bibliographical references (p. 68-69). By Jan-Willem Maessen.

    Analog VLSI circuits for inertial sensory systems

    Supervised by Rahul Sarpeshkar. Also issued as Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2001. Includes bibliographical references (leaves 67-68). By Maziar Tavakoli Dastjerdi.

    Random-access technique for modular bathymetry data storage in a continental shelf wave refraction program

    A study was conducted of an alternate method for storage and use of bathymetry data in the Langley Research Center and Virginia Institute of Marine Science mid-Atlantic continental-shelf wave-refraction computer program. The regional bathymetry array was divided into 105 indexed modules which can be read individually into memory in a nonsequential manner from a peripheral file using special random-access subroutines. In running a sample refraction case, a 75-percent decrease in program field length was achieved by using the random-access storage method in comparison with the conventional method of total regional array storage. This field-length decrease was accompanied by a comparative 5-percent increase in central processing time and a 477-percent increase in the number of operating-system calls. A comparative Langley Research Center computer system cost savings of 68 percent was achieved by using the random-access storage method.
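
    A minimal sketch of the indexed-module idea, assuming a Python reimplementation with an invented file name, tile size, and record layout; the original program used special random-access subroutines on the Langley computer system, not Python.

        # Sketch of modular bathymetry storage with random access.
        # Tile size, dtype and file name are illustrative assumptions.
        import numpy as np

        TILE = 64              # assumed module size (grid points per side)
        DTYPE = np.float32     # assumed depth-sample type

        def write_modules(depths, path="bathymetry.mod"):
            """Split a regional depth grid into fixed-size modules stored as
            equal-length records, so any module can be read without the rest."""
            ny, nx = depths.shape
            tiles_y, tiles_x = ny // TILE, nx // TILE
            with open(path, "wb") as f:
                for j in range(tiles_y):
                    for i in range(tiles_x):
                        tile = depths[j*TILE:(j+1)*TILE, i*TILE:(i+1)*TILE]
                        f.write(np.ascontiguousarray(tile, dtype=DTYPE).tobytes())
            return tiles_y, tiles_x

        def read_module(index, path="bathymetry.mod"):
            """Read a single module by its flat index, seeking straight to its record."""
            record_len = TILE * TILE * np.dtype(DTYPE).itemsize
            with open(path, "rb") as f:
                f.seek(index * record_len)
                data = f.read(record_len)
            return np.frombuffer(data, dtype=DTYPE).reshape(TILE, TILE)

    Only the few modules a refraction ray actually crosses need to be in memory at once, which is where the reported 75-percent reduction in field length comes from; the price is the extra seek/read traffic reflected in the higher operating-system call count.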

    Pixel-level data fusion techniques applied to the detection of gust fronts

    Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1994. Includes bibliographical references (leaves 68-69). By Samuel M. Kwon.

    A Visual Programming Paradigm for Abstract Deep Learning Model Development

    Deep learning is one of the fastest-growing technologies in computer science, with a plethora of applications. This unprecedented growth has so far been limited to consumption by deep learning experts: the primary challenges are the steep learning curve of the programming libraries and the lack of intuitive systems that let non-experts use deep learning. Towards this goal, we study the effectiveness of a no-code paradigm for designing deep learning models. In particular, a visual drag-and-drop interface is found to be more efficient than traditional programming and alternative visual programming paradigms. We conduct user studies with participants of different expertise levels to measure the entry barrier and the developer load across programming paradigms. We obtain a System Usability Scale (SUS) score of 90 and a NASA Task Load Index (TLX) score of 21 for the proposed visual programming approach, compared to 68 and 52, respectively, for traditional programming methods.

    The enhancement of the searchability of OCRed texts: HTR+, segmentation and meta-dating early modern ordinances

    From May to November 2019, the KB National Library of the Netherlands (The Hague) hosts the Researcher-in-Residence project 'Entangled Histories of Early Modern Ordinances'. This short-paper presentation reports on the ongoing project. The project's hypothesis is that, when problems arose, small 'states' had to act swiftly; thus, governments may have adopted (parts of) successful legislation from neighbouring areas. These 'entangled histories' may become visible when studying political-institutional activities through legislation. Hence, the project uses printed books of ordinances from the Low Countries (1500-1800s) to test this hypothesis. The sources are early modern norms (ordinances) printed in gothic type (25%) and roman type (75%). The techniques developed through 'Entangled Histories' will have a widespread positive effect and coincide with the project's three steps:
    1. Segmentation of the texts, moving from sentence recognition (Lonij, Harbers, 2016) to article segmentation of larger texts; this requires training the computer to recognise the beginning and end of texts, either as a chapter or as an individual text within a compilation. We apply the P2PaLA tool from the University of Valencia, which is integrated into Transkribus, to recognise the different sections in our corpus.
    2. Improving the OCR results of digitised sources. Using machine-learning techniques such as the Handwritten Text Recognition suite Transkribus, we will reprocess the files that currently have poor-quality OCR. By treating print made with hand-carved letters as very consistent handwriting, we hope to obtain a higher quality of recognition (CER < 5%).
    3. Creating automatic metadata by training a computer to recognise the conditions the categories were based upon, building on Annif, an already developed tool created by the Finnish National Library. After supervised training, the computer can suggest, apply and supplement categories for other texts based on topic modelling (Leydesdorff, 2017). This pilot will also prove the applicability of the tool to other languages. We will use NLP techniques such as NER to identify dates, titles, and persons (see the sketch after this entry). Given the significance of such a broadly studied range of sources, we hope to make the tagged entities available as RDF, with the intention of publishing the output in the Dutch national infrastructure.
    With the data generated in this project, visualisations of the development of laws across the Netherlands, and possibly Europe, could become possible. The normative texts (laws) the data is based on contain indications of how governments of burgeoning states dealt with unexpected threats to safety, security, and order through home-invented measures, borrowed rules, or adjustments of what was established elsewhere. As the thousands of texts become more easily accessible, it will become possible to look for entangled histories with neighbouring states through synchronic and diachronic comparisons: first within the Low Countries (from one province to another), for which we will be using 108 books of ordinances; second, with German-speaking lands, since the same metadata is applied as in several projects of the Max-Planck-Institut für europäische Rechtsgeschichte. The connection with the German dataset will be established through (numbered) URIs and Linked Data, as a step after the initial project.
    The ordinances within these books are frequently consulted by researchers of various disciplines (e.g. history, law, political science, linguistics) to unravel rules for controlling complex societies. A longitudinal search based on contents rather than index or title, as well as an overview across several states, has so far been impossible due to the impenetrable amount of scanned texts.
    • Leydesdorff, L., and A. Nerghes (2017). "Co-word maps and topic modelling: A comparison using small and medium-sized corpora (N < 1,000)." Journal of the Association for Information Science and Technology 68(4): 1024-1035.
    • Lonij, J., and F. Harbers (2016). Genre classifier. KB Lab: The Hague. http://lab.kb.nl/tool/genre-classifier
    • Finnish National Library, Annif 2.0. https://github.com/NatLibFi/Annif
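
    A minimal sketch of the NER step mentioned in step 3, assuming spaCy and its Dutch model nl_core_news_sm as the toolkit; the project does not name a specific library, so treat this as one possible implementation rather than the project's confirmed pipeline.

        # Illustrative NER pass over an ordinance fragment; spaCy and the Dutch
        # model are assumptions, not the project's confirmed toolchain.
        import spacy

        # requires: python -m spacy download nl_core_news_sm
        nlp = spacy.load("nl_core_news_sm")

        def tag_entities(text):
            """Return (surface form, label) pairs for named entities such as
            persons and places; these pairs could later be serialised as RDF
            triples. Depending on the model's label set, dates may need extra rules."""
            doc = nlp(text)
            return [(ent.text, ent.label_) for ent in doc.ents]

        sample = ("Ordonnantie van 12 maart 1579, uitgevaardigd door "
                  "Willem van Oranje te Antwerpen.")
        print(tag_entities(sample))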

    Runtime of unstructured search with a faulty Hamiltonian oracle

    We show that it is impossible to obtain a quantum speedup for a faulty Hamiltonian oracle. The effect of dephasing noise on this continuous-time oracle model was first investigated by Shenvi, Brown, and Whaley [Phys. Rev. A 68, 052313 (2003)]. The authors consider a faulty oracle described by a continuous-time master equation that acts as dephasing noise in the basis determined by the marked item. Their analysis focuses on the implementation with a particular driving Hamiltonian; a universal lower bound for this oracle model, which would rule out a better performance with a different driving Hamiltonian, has so far been lacking. Here, we derive an adversary-type lower bound which shows that the evolution time T has to be at least on the order of N, i.e., the size of the search space, when the error rate of the oracle is constant. This means that the quadratic quantum speedup vanishes and the runtime reverts to the classical scaling. For the standard quantum oracle model this result was first proven by Regev and Schiff [in Automata, Languages and Programming, Lecture Notes in Computer Science Vol. 5125 (Springer, Berlin, 2008), pp. 773–781]. Here, we extend this result to the continuous-time setting.
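
    In symbols, the scaling stated above can be summarised as follows (a reading of the abstract, with N the size of the search space and the oracle error rate held constant; constant factors omitted):

        % T is the evolution time of the continuous-time search
        T_{\mathrm{noiseless}} = \Theta\!\big(\sqrt{N}\big), \qquad
        T_{\mathrm{faulty}} = \Omega(N) \quad \text{for a constant dephasing rate,}

    so the quadratic speedup available with a noiseless Hamiltonian oracle is lost and the classical linear scaling is recovered.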

    Computability and Algorithmic Complexity in Economics

    This is an outline of the origins and development of the way computability theory and algorithmic complexity theory were incorporated into economic and finance theories. We try to place, in the context of the development of computable economics, some of the classics of the subject, as well as those that have, from time to time, been credited with having contributed to the advancement of the field. Speculative thoughts on where the frontiers of computable economics are, and how to move towards them, conclude the paper. In a precise sense - both historically and analytically - it would not be an exaggeration to claim that both the origins of computable economics and its frontiers are defined by two classics by Banach and Mazur: the one-page masterpiece ([5]), built on the foundations of Turing's own classic, and the unpublished Mazur conjecture of 1928 together with its unpublished proof by Banach ([38], ch. 6 & [68], ch. 1, #6). For the undisputed original classic of computable economics is Rabin's effectivization of the Gale-Stewart game ([42]; [16]); the frontiers, as I see them, are defined by recursive analysis and constructive mathematics, underpinning computability over the computable and constructive reals and providing computable foundations for the economist's Marshallian penchant for curve-sketching ([9]; [19]; and, in general, the contents of Theoretical Computer Science, Vol. 219, Issue 1-2). The former work has its roots in the Banach-Mazur game (cf. [38], especially p. 30), at least in one reading of it; the latter in ([5]), as well as other, earlier contributions, not least by Brouwer.

    Observations in using parallel and sequential evolutionary algorithms for automatic software testing

    Computers & Operations Research, 35(10), 2007, pp. 3161-3183. In this paper we analyze the application of parallel and sequential evolutionary algorithms (EAs) to the automatic test data generation problem. The problem consists of automatically creating a set of input data to test a program. This is a fundamental step in software development and a time-consuming task in existing software companies. Canonical sequential EAs have been used for this task in the past; here we explore the use of parallel EAs. Evidence of greater efficiency, larger diversity maintenance, additional availability of memory/CPU, and multi-solution capabilities of the parallel approach reinforces the importance of advancing research on these algorithms. We describe in this work how canonical genetic algorithms (GAs) and evolutionary strategies (ESs) can help in software testing, and what the advantages are (if any) of using decentralized populations in these techniques. In addition, we study the influence of some parameters of the proposed test data generator on the results. For the experiments we use a large benchmark composed of twelve programs that include fundamental algorithms in computer science. Funded by the Ministry of Education and Science and FEDER under Contract TIN2005-08818-C04-01 (the OPLINK Project). Francisco Chicano was supported by a grant (BOJA 68/2003) from the Junta de Andalucía (Spain).
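
    A minimal, generic sketch of evolutionary test-data generation, not the authors' generator: a small GA evolves integer input pairs to maximise the number of branches exercised in a toy program under test. The program, fitness, and GA parameters are all illustrative assumptions.

        # Illustrative GA for test-data generation; the program under test and
        # all parameters are toy assumptions, not the benchmark used in the paper.
        import random

        def program_under_test(a, b):
            """Toy program whose branches the evolved inputs should cover."""
            covered = set()
            covered.add("a>100" if a > 100 else "a<=100")
            if a == 2 * b:
                covered.add("a==2b")
            if b < 0:
                covered.add("b<0")
            return covered

        def fitness(individual):
            """Number of distinct branches a single input pair exercises."""
            return len(program_under_test(*individual))

        def evolve(pop_size=30, generations=50, mutation_rate=0.2):
            pop = [(random.randint(-500, 500), random.randint(-500, 500))
                   for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=fitness, reverse=True)
                parents = pop[: pop_size // 2]
                children = []
                while len(children) < pop_size - len(parents):
                    p1, p2 = random.sample(parents, 2)
                    child = (p1[0], p2[1])                    # one-point crossover
                    if random.random() < mutation_rate:       # integer mutation
                        child = (child[0] + random.randint(-10, 10), child[1])
                    children.append(child)
                pop = parents + children
            return max(pop, key=fitness)

        print(evolve())

    A full generator would typically target uncovered branches one at a time with a distance-based fitness and accumulate the winning inputs into a test suite; the sketch only shows the basic evolutionary loop that sequential and parallel EAs share.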