
    On the basis for ELF - An Extensible Language Facility

    Computer language for data processing and information retrieval

    Lattice Perturbation Theory by Computer Algebra: A Three-Loop Result for the Topological Susceptibility

    We present a scheme for the analytic computation of renormalization functions on the lattice, using a symbolic manipulation computer language. Our first nontrivial application is a new three-loop result for the topological susceptibility. Comment: 15 pages + 2 figures (PostScript), report no. IFUP-TH 31/9

    The Sizing and Optimization Language, (SOL): Computer language for design problems

    The Sizing and Optimization Language (SOL), a new high-level, special-purpose computer language, was developed to expedite the application of numerical optimization to design problems and to make the process less error prone. SOL utilizes the ADS optimization software and provides a clear, concise syntax for describing an optimization problem, the OPTIMIZE description, which closely parallels the mathematical description of the problem. SOL offers language statements which can be used to model a design mathematically, with subroutines or code logic, and with existing FORTRAN routines. In addition, SOL provides error checking and clear output of the optimization results. Because of these language features, SOL is best suited to modeling and optimizing a design concept when the model consists of mathematical expressions written in SOL. For such cases, SOL's unique syntax and error checking can be fully utilized. SOL is presently available for DEC VAX/VMS systems. A SOL package is available which includes the SOL compiler, runtime library routines, and a SOL reference manual.
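The abstract does not show SOL syntax, so as a rough, hypothetical analogue of stating a design-optimization problem close to its mathematical form, here is a minimal Python sketch: minimizing the surface area of a closed cylinder of fixed volume by gradient descent (the problem and all names are invented for illustration, not taken from SOL).

```python
import math

V = 1.0  # fixed volume, a hypothetical design constraint

def area(r):
    # total surface area of a closed cylinder of volume V and radius r
    return 2 * math.pi * r ** 2 + 2 * V / r

def d_area(r):
    # derivative of area with respect to r
    return 4 * math.pi * r - 2 * V / r ** 2

# simple gradient descent from an initial guess
r = 1.0
for _ in range(200):
    r -= 0.01 * d_area(r)

# analytic optimum r = (V / (2*pi))**(1/3), used here only as a check
r_opt = (V / (2 * math.pi)) ** (1 / 3)
```

The closed-form optimum makes this toy problem easy to verify; a language like SOL would instead hand the same objective to a general optimizer such as ADS.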

    Microcomputer-controlled polarographic instrumentation and its use in the determination of stability constants of crown ether complexes

    A computer-controlled polarographic system is described, based on a commercially available polarograph interfaced to a microcomputer. Experiments are controlled and monitored entirely from software, including automatic evaluation of the Tast polarograms and addition of solutions to the polarographic cell from a motor burette. The program was written in FORTH, a computer language especially apt for laboratory automation. The system is used in the determination of stability constants of crown ether complexes.

    A Recursive Method to Calculate UV-divergent Parts at One-Loop Level in Dimensional Regularization

    A method is introduced to calculate the UV-divergent parts at one-loop level in dimensional regularization. The method is based on recursion, and the basic integrals are just the scaleless integrals after the recursive reduction, which involve no momentum scales other than the loop momentum itself. The method can be easily implemented in any symbolic computer language, and an implementation in Mathematica is ready to use. Comment: 10 pages, 1 figure, typos fixed, to appear in Computer Physics Communications

    A trilingual creation of terms of the computer language in English, French and Igbo

    Like humans, computers have their own language. This language is specific to them and is the main medium of communication between computer systems. A computer language consists of all the instructions used to request that the system process a task, and it includes the various languages used to communicate with a computer. This language can only be made available in other languages by being created in those languages. Term creation, or terminology, cannot exist outside the cultural environment of its birth: language. In other words, terms cannot be created outside language. We collected lexical items of the computer language in English and found their French equivalents. We then created the Igbo terms of the computer language. Creating terms in the Igbo language makes it possible for the computer language to be assimilated into Igbo culture and to be adapted and used by Igbo speakers. Nevertheless, there are challenges in the creation of terms. The thrust of this paper is to create terms in the computer-language domain, explain how the terms were created, compare some of the terms across the three languages, and suggest ways of making the created terms available to translators and native speakers of the Igbo language. Keywords: computer language, cultural environment, term creation, terminology

    Higher-twist contributions to the Structure Functions coming from 4-fermion operators

    We evaluate the contribution of a class of higher-twist operators to the lowest moment of the Structure Functions, by computing appropriate matrix elements of six four-fermion operators in the quenched approximation. Their perturbative renormalization constants and mixing coefficients are calculated in the 't Hooft-Veltman scheme of dimensional regularization, using codes written in the algebraic manipulation computer language FORM. Comment: Talk presented at LATTICE99 (matrix elements), Pisa (Italy), June 29 - July 3; 3 pages; to be published in Nucl. Phys. B (Proc. Suppl.)

    Computer-language based data prefetching techniques

    Data prefetching has long been used as a technique to improve access times to persistent data. It is based on retrieving data records from persistent storage to main memory before the records are needed. Data prefetching has been applied to a wide variety of persistent storage systems, from file systems to Relational Database Management Systems and NoSQL databases, with the aim of reducing access times to the data maintained by the system and thus improving the execution times of the applications using this data. However, most existing solutions to data prefetching have been based on information that can be retrieved from the storage system itself, whether in the form of heuristics based on the data schema or data access patterns detected by monitoring access to the system. These approaches have multiple disadvantages in terms of the rigidity of the heuristics they use, the accuracy of the predictions they make, and/or the time they need to make these predictions, a process often performed while the applications are accessing the data, causing considerable overhead. In light of the above, this thesis proposes two novel approaches to data prefetching based on predictions made by analyzing the instructions and statements of the computer languages used to access persistent data. The proposed approaches take into consideration how the data is accessed by the higher-level applications, make accurate predictions and are performed without causing any additional overhead. The first of the proposed approaches aims at analyzing instructions of applications written in object-oriented languages in order to prefetch data from Persistent Object Stores. The approach is based on static code analysis that is done prior to the application execution and hence does not add any overhead. It also includes various strategies to deal with cases that require runtime information unavailable prior to the execution of the application. 
We integrate this analysis approach into an existing Persistent Object Store and run a series of extensive experiments to measure the improvement obtained by prefetching the objects predicted by the approach. The second approach analyzes statements and historic logs of the declarative query language SPARQL in order to prefetch data from RDF Triplestores. The approach measures two types of similarity between SPARQL queries in order to detect recurring query patterns in the historic logs. It then uses the detected patterns to predict subsequent queries and launches them before they are requested, prefetching the data they need. Our evaluation of the proposed approach shows that it makes high-accuracy predictions and can achieve a high cache hit rate when caching the results of the predicted queries.
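The abstract does not detail the static analysis used by the first approach, but the core idea of inspecting application code to decide what to prefetch can be sketched in a few lines of Python: walk a function's AST and collect which attributes of a parameter are read, giving a prefetcher a list of fields to load ahead of time. The function and attribute names below are invented for illustration and are not from the thesis.

```python
import ast

# hypothetical application code to be analyzed before execution
source = """
def report(order):
    total = order.price * order.quantity
    return f"{order.customer}: {total}"
"""

def accessed_attributes(src, param):
    """Collect the names of attributes read on `param` anywhere in `src`."""
    attrs = set()
    for node in ast.walk(ast.parse(src)):
        if (isinstance(node, ast.Attribute)
                and isinstance(node.value, ast.Name)
                and node.value.id == param):
            attrs.add(node.attr)
    return attrs

hints = accessed_attributes(source, "order")
# `hints` names the fields a prefetcher could fetch before `report` runs
```

Because the analysis runs on source text before execution, it adds no runtime overhead, which mirrors the property the thesis claims for its static-analysis approach.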