
    AOIPS 3 user's guide. Volume 2: Program descriptions

    The Atmospheric and Oceanographic Information Processing System (AOIPS) 3 is the version of the AOIPS software as of April 1989. The AOIPS software was developed jointly by the Goddard Space Flight Center and General Sciences Corporation. A detailed description of every AOIPS program is presented. It is intended to serve as a reference for such items as program functionality, program operational instructions, and input/output variable descriptions. Program descriptions are derived from the on-line help information. Each program description is divided into two sections. The functional description section describes the purpose of the program and contains any pertinent operational information. The program description section lists the program variables as they appear on-line and describes them in detail.

    Design, Implementation and Testing of a Response Time Measurement System

    A touchscreen is a commonly used medium for interaction between a user and a device. The response to a user's action is often indicated visually on the screen after a certain delay. This interface latency is inherent in any computer system. Studies indicate that latency has a major influence on how users perceive interaction with a device. While modern commercial touchscreen devices exhibit latencies ranging between 50 ms and 200 ms, research indicates that user performance on tapping tasks deteriorates at considerably lower levels, and users can discern latencies as low as 3 ms. In this thesis we present a novel solution for Android-operated mobile devices that exposes the factors behind the feedback latency of a tap event. We start by reviewing the main components of the Android operating system. Next, we describe the internal system elements that take part in the interaction between the user's touch input event and its corresponding visual presentation on the screen of the device. Building on this information, we implement an affordable, fully automated system capable of collecting both temporal and environmental data. The constructed measurement system yielded revealing results. We discovered that most of the feedback latency on a mobile device accumulates in the internal components involved in presenting the visual feedback to the user. We also identified two main user action patterns that strongly affect the system's responsiveness. First, the location of the touch is reflected in the amount of feedback latency. Second, the interval between two consecutive touch events can produce unexpected results. Our study demonstrated that latency can vary considerably between devices: the same usage pattern may have no effect on one device yet cause a five-fold difference on another.
The study concludes that, although the feedback latency is affected by multiple factors, it can be measured very precisely with a system that even an average hobbyist can build.
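    A measurement system of this kind ultimately reduces to comparing per-event timestamps. Below is a minimal post-processing sketch, assuming each logged record pairs a touch timestamp with the timestamp at which the visual feedback appeared; the record format and function names are illustrative, not the thesis's actual tooling:

```python
# Hypothetical post-processing of tap-to-feedback latency logs.
# Each record is assumed to hold (touch_ms, feedback_ms): the moment the
# tap was registered and the moment its visual feedback was displayed.
from statistics import mean, median

def latencies(records):
    """Per-event feedback latency in milliseconds."""
    return [feedback - touch for touch, feedback in records]

def summarise(records):
    """Basic descriptive statistics over the measured latencies."""
    lat = latencies(records)
    return {"mean": mean(lat), "median": median(lat),
            "min": min(lat), "max": max(lat)}

# Example: three synthetic tap events
events = [(0.0, 78.0), (500.0, 592.0), (1000.0, 1070.0)]
print(summarise(events))  # → {'mean': 80.0, 'median': 78.0, 'min': 70.0, 'max': 92.0}
```

Comparing such summaries across touch locations and inter-tap intervals is what exposes the two usage patterns the abstract describes.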

    Exploiting task-based programming models for resilience

    Hardware errors become more common as silicon technologies shrink and grow more vulnerable, especially in memory cells, which are the most exposed to errors. Permanent and intermittent faults are caused by manufacturing variability and circuit ageing. While these can be mitigated once identified, their continuous rate of appearance throughout the lifetime of memory devices will always cause unexpected errors. In addition, transient faults are caused by effects such as radiation or small voltage/frequency margins, and there is no efficient way to shield against these events. Other constraints related to the diminishing sizes of transistors, such as power consumption and memory latency, have caused the microprocessor industry to turn to increasingly complex processor architectures. To ease the difficulties of programming such architectures, programming models have emerged that rely on runtime systems. These systems form a new intermediate layer on the hardware-software abstraction stack that performs tasks such as distributing work across computing resources: processor cores, accelerators, etc. Runtime systems have access to a wealth of information, both from the hardware and from the applications, and thus offer many opportunities for optimisation. This thesis proposes solutions to the increasing fault rates in memory across multiple resilience disciplines, from algorithm-based fault tolerance through OS reliability strategies to hardware error-correcting codes. The efficiency of these solutions relies on the opportunities presented by runtime systems. The first contribution of this thesis is an algorithm-based resilience technique that tolerates detected errors in memory. It recovers lost data by performing computations based on simple redundancy relations identified in the program.
The recovery is demonstrated for a family of iterative solvers, the Krylov subspace methods, and evaluated for the conjugate gradient solver. The runtime can transparently overlap the recovery with the computations of the algorithm, masking the already low overheads of this technique. The second part of this thesis proposes a metric to characterise the impact of faults in memory, which outperforms state-of-the-art metrics in both precision and the assurances it provides on the error rate. This metric reveals a key insight: some data in memory is not relevant to the program, and we propose an OS-level strategy that ignores errors in such data by delaying the reporting of detected errors. This reduces the failure rates of running programs by ignoring errors that have no impact. The architectural-level contribution of this thesis is a dynamically adaptable Error Correcting Code (ECC) scheme that can increase the protection of memory regions where the impact of errors is highest. A runtime methodology is presented to estimate the fault rate at runtime using our metric, through the performance-monitoring tools of current commodity processors. Guiding the dynamic ECC scheme online with the methodology's vulnerability estimates decreases program error rates at a fraction of the redundancy cost required for a uniformly stronger ECC, providing a useful and wide range of trade-offs between redundancy and error rates. The work presented in this thesis demonstrates that runtime systems make it possible to exploit the redundancy stored in memory to help tackle increasing error rates in DRAM. This exploited redundancy can be an inherent part of an algorithm, allowing it to tolerate higher fault rates, or take the form of dead data stored in memory. Redundancy can also be added to a program in the form of ECC. In all cases, the runtime decreases failure rates efficiently by diminishing recovery costs, identifying redundant data, or targeting critical data.
The runtime system is thus a very valuable tool for future computing systems, as it can perform optimisations across different layers of abstraction.
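    The redundancy-relation idea for Krylov solvers can be illustrated with the conjugate gradient invariant r = b - Ax: if a block of the iterate x is lost, the surviving data determines it through a small linear solve. The sketch below is a simplified, standalone illustration of that relation, not the thesis's runtime-integrated recovery scheme:

```python
import numpy as np

# Sketch of checkpoint-free recovery via the CG invariant r = b - A @ x.
# If a block x[lost] is corrupted but A, b, r and the rest of x survive,
# then A[lost,lost] @ x[lost] = b[lost] - r[lost] - A[lost,ok] @ x[ok]
# recovers the lost entries with one small dense solve.

def recover_block(A, b, r, x, lost):
    ok = np.setdiff1d(np.arange(len(b)), lost)
    rhs = b[lost] - r[lost] - A[np.ix_(lost, ok)] @ x[ok]
    x[lost] = np.linalg.solve(A[np.ix_(lost, lost)], rhs)
    return x

# Demo: symmetric positive definite system; simulate losing two entries.
rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6))
A = M @ M.T + 6 * np.eye(6)           # SPD, as required by CG
x_true = rng.standard_normal(6)
b = A @ x_true
r = b - A @ x_true                    # residual kept by the solver
x = x_true.copy()
x[[1, 4]] = np.nan                    # "fault": two entries lost
x = recover_block(A, b, r, x, np.array([1, 4]))
print(np.allclose(x, x_true))         # → True
```

Because the solve involves only the lost block, its cost is small, which is what lets the runtime overlap recovery with the ongoing iteration.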

    A Handbook Supporting Model-Driven Software Development - a Case Study


    Comprehensive Mapping and Benchmarking of Esaki Diode Performance

    The tunneling FET (TFET) has been identified as a prospective MOSFET replacement technology with the potential to extend geometric and electrostatic scaling of digital integrated circuits. However, experimental demonstrations of the TFET have yet to reliably achieve the drive currents necessary to power large-scale integrated circuits. Consequently, much effort has gone into optimizing the band-to-band tunneling (BTBT) efficiency of the TFET. In this work, the Esaki tunnel diode (ETD) is used as a short-loop element to map and optimize BTBT performance over a large design space. The experimental results and tools developed for this work may be used to (1) map additional and more complicated ETD structures, (2) guide development of improved TFET structures and BTBT devices, (3) design ETDs with targeted BTBT characteristics, and (4) calibrate BTBT models. The first objective was to verify the quality of monolithically integrated III-V based ETDs on Si substrates (the industry standard). Five separate GaAs/InGaAs ETDs were fabricated on GaAs virtual substrates via aspect ratio trapping, along with two companion ETDs grown on Si and GaAs bulk substrates. The quality of the virtual substrates and of the BTBT was verified with (i) very large peak-to-valley current ratios (up to 56), (ii) temperature measurements, and (iii) deep sub-micron scaling. The second objective mapped the BTBT characteristics of the In1-xGaxAs ternary system by (1) standardizing the ETD structure, (2) limiting experimental work to unstrained (i) GaAs, (ii) In0.53Ga0.47As, and (iii) InAs homojunctions, and (3) systematically varying doping concentrations. Characteristic BTBT trendlines were determined for each material system, with peak current densities (JP) ranging from an ultra-low 11 μA/cm2 (GaAs) to an ultra-high 975 kA/cm2 (In0.53Ga0.47As).
Furthermore, the BTBT mapping results establish that BTBT current densities can only be improved to roughly 2-3 times the current record, by increasing the doping concentration and the In content up to ~75%. The E. O. Kane BTBT model has been shown to accurately predict the tunneling characteristics over the entire design space. Furthermore, it was used to help guide the development of a new universal BTBT model, a closed-form exponential using two fitting parameters, material constants, and doping concentrations. With it, JP can quickly be predicted over the entire design space of this work.
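    The abstract describes the universal model only as a closed-form exponential in two fitting parameters plus material constants and doping concentrations. A generic sketch of such a form is given below; the functional shape and the parameters A and B are purely illustrative placeholders, not the calibrated model from this work:

```python
import math

def reduced_doping(n_a, n_d):
    """Effective doping N* = Na*Nd/(Na+Nd), in cm^-3 — the usual way
    acceptor and donor concentrations enter tunnel-junction models."""
    return n_a * n_d / (n_a + n_d)

def peak_current_density(n_a, n_d, A, B):
    """Generic two-parameter exponential J_P model (illustrative only):
    J_P = A * N* * exp(-B / sqrt(N*)). A and B are fitting parameters
    folding in material constants; no calibrated values are implied."""
    n_star = reduced_doping(n_a, n_d)
    return A * n_star * math.exp(-B / math.sqrt(n_star))
```

A model of this shape captures the qualitative trend in the abstract: JP rises steeply with doping concentration, since heavier doping narrows the tunnel barrier.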

    MapCAST : Real-Time Collaboration With Concept Maps

    This thesis describes the development of the application mapCAST, a computer-based concept-mapping tool that allows synchronous collaboration via TCP/IP networks such as the Internet. The usability and feasibility of mapCAST as a computer-based tool were examined and analysed in a real-world situation. Results indicate that mapCAST is successful as a collaborative tool in situations involving knowledge organisation, but lacks certain functionality that many Macintosh users are accustomed to.
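    The synchronous-collaboration model behind a tool like mapCAST can be sketched as a session that applies every edit to each participant's copy of the map, so all clients converge on the same state. Class and operation names here are illustrative, not mapCAST's actual protocol:

```python
# Minimal sketch of synchronous concept-map collaboration: a session
# relays each edit operation to every participant's local map state.
# Names and message shapes are hypothetical, not mapCAST's real API.

class MapSession:
    def __init__(self):
        self.clients = []                     # one state dict per client

    def join(self):
        state = {"nodes": {}, "links": []}
        self.clients.append(state)
        return state

    def broadcast(self, op):
        """Apply an edit operation to every client's copy of the map."""
        for state in self.clients:
            if op["type"] == "add_node":
                state["nodes"][op["id"]] = op["label"]
            elif op["type"] == "add_link":
                state["links"].append((op["src"], op["dst"], op["label"]))

session = MapSession()
a, b = session.join(), session.join()
session.broadcast({"type": "add_node", "id": 1, "label": "Latency"})
session.broadcast({"type": "add_node", "id": 2, "label": "UX"})
session.broadcast({"type": "add_link", "src": 1, "dst": 2, "label": "affects"})
print(a == b)   # → True: both participants see the same concept map
```

Over a real TCP/IP network, the broadcast step would be a message relayed through a server socket to each connected client, but the convergence logic is the same.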