
    Statistiline lähenemine mälulekete tuvastamiseks Java rakendustes

    Modern managed runtime environments and programming languages greatly simplify the creation and maintenance of applications. One of the best examples of such a managed runtime environment and language is the Java Virtual Machine together with the Java programming language. Despite the built-in garbage collector, the memory leak problem is still relevant in Java: memory is wasted by preventing unused objects from being removed. The problem is especially critical for applications that are expected to work uninterrupted around the clock, as running out of memory is one of the few errors that can terminate the whole Java application. The best indicator of whether an object is in use is the time of its last access; the main disadvantage of this metric, however, is its performance overhead. This thesis studies the memory leak problem in Java and proposes a novel approach for memory leak detection and diagnosis. It describes an alternative way to estimate the 'unusedness' of objects. The main hypothesis is that leaked objects can be distinguished from non-leaked ones by applying statistical methods to object lifetimes, observing the ages of the population of objects grouped by their allocation points. The proposed solution is much cheaper performance-wise, since for each object information needs to be recorded only once, at the moment of its creation. The research conducted for the thesis has been applied in the memory leak detection tool Plumbr, which is successfully used in various production environments. After the introduction and an overview of the state of the art, the thesis reviews existing solutions and proposes a classification of memory leak detection approaches. Next, the statistical approach for memory leak detection is described, along with the main metric used to distinguish leaking objects from non-leaking ones, followed by an analysis of the shortcomings of this single metric. Based on this analysis, additional metrics are designed, and machine learning algorithms are applied to statistical data acquired by Plumbr from real applications and production environments. Finally, case studies of real applications and a comparison with one existing memory leak detection solution are presented in order to evaluate the performance overhead of the tool.
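    The grouping-by-allocation-point idea above can be illustrated with a minimal sketch (not Plumbr's actual algorithm): assuming each live object carries only its allocation site and the GC generation in which it was created, a site whose surviving objects span many distinct generations keeps accumulating objects that never die, which is the statistical leak signal.

```python
from collections import defaultdict

def leak_suspects(live_objects, min_spread=5):
    """Flag allocation sites whose live objects span many GC generations.

    live_objects: iterable of (alloc_site, birth_generation) pairs for
    objects still reachable at snapshot time. Only the allocation moment
    is recorded per object, mirroring the low-overhead idea of the thesis.
    min_spread is an illustrative threshold, not a value from the thesis.
    """
    gens_by_site = defaultdict(set)
    for site, birth_gen in live_objects:
        gens_by_site[site].add(birth_gen)
    # A site whose survivors were born in many distinct generations is
    # accumulating objects that never become garbage: a leak suspect.
    return sorted(site for site, gens in gens_by_site.items()
                  if len(gens) >= min_spread)
```

In practice a single threshold like this is too crude, which is exactly why the thesis derives additional metrics and applies machine learning on top of the base method.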

    Modeling of ground excavation with the particle finite element method

    The present work introduces a new application of the Particle Finite Element Method (PFEM) to the modeling of excavation problems. PFEM is presented as a very suitable tool for the treatment of excavation problems, providing solutions for the analysis of all the processes that derive from them. The method is highly versatile, has a reasonable computational cost, and the results obtained are promising.

    A Scalable Recoverable Skip List for Persistent Memory on NUMA Machines

    Interest in recoverable, persistent-memory-resident (PMEM-resident) data structures is growing as the availability of Intel Optane Data Center Persistent Memory increases. An interesting use case for in-memory, recoverable data structures is database indexes, which need high availability and reliability. Skip lists are particularly well-suited for use as a fully PMEM-resident index, due to the reduced number of writes resulting from their probabilistic balancing, in comparison to other index data structures such as B-trees. The Untitled Persistent Skip List (UPSkipList) is a PMEM-resident recoverable skip list derived from Herlihy et al.'s lock-free skip list algorithm. It is developed using a new conversion technique that extends the RECIPE algorithm by Lee et al. to work on lock-free algorithms with non-blocking writes and no inherent recovery mechanism. It does this by tracking the current time period between two failures, or failure-free epoch, and recording the current epoch in nodes while they are being modified. This way, an observing thread can determine whether an inconsistent node is being modified in the current epoch or was being modified in a previous epoch and is now in need of recovery. The algorithm is also extended to support concurrent data-node splitting to improve performance, which is easily made recoverable using the extension to RECIPE that allows detection of incomplete node splits. UPSkipList also supports cache-efficient NUMA awareness of dynamically allocated objects using an extension of the Region-ID in Value (RIV) method by Chen et al. By using additional bits, after the most significant bits of an RIV pointer, to indicate the object relative to which the remaining bits are interpreted, chunks of memory can be dynamically allocated to UPSkipList from multiple shared pools without the need for fat pointers, which reduce cache efficiency by halving the number of pointers that fit in a cache line.
    This combines the benefits of the RIV method and of the dynamic memory allocation built into the Persistent Memory Development Kit (PMDK), improving both performance and practicality. Additionally, memory manually managed within a chunk using the RIV method can have its post-crash recovery deferred to the next attempted allocation by a thread sharing the ID of the thread responsible for allocating the memory being recovered, reducing recovery time for large pools with many threads active at the time of a crash. Comparison was done against the BzTree of Arulraj et al., as implemented by Lersch et al., which has non-blocking, non-repairing writes implemented using the persistent multi-word CAS (PMwCAS) primitive by Wang et al., and against a transactional recoverable skip list implemented using the PMDK. Tested with the Yahoo Cloud Serving Benchmark (YCSB), UPSkipList achieves better performance than BzTree in write-heavy workloads at high levels of concurrency, and outperforms the PMDK-based skip list due to that list's higher average latency. Using the extended RIV pointers to dynamically allocate memory resulted in a 40% performance increase over using the PMDK's fat pointers. The impact of NUMA awareness using multiple pools of memory, compared with striping a single pool across multiple nodes, was found to be only a 5.6% decrease in performance. Finally, the recovery time of UPSkipList was found to be comparable to that of the PMDK-based skip list, and 9 times faster than BzTree with 500K descriptors in its PMwCAS pool. The correctness of UPSkipList and of its conversion and recovery techniques was tested using black-box recoverable linearizability analysis, which found UPSkipList to be free of strict linearizability errors across 30 trials.
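    The pointer-packing trick described above can be sketched as plain bit manipulation. The field widths below (16/8/40 bits) are illustrative assumptions, not UPSkipList's actual layout; the point is that a region ID, an object ID and a relative offset together still fit in one 64-bit word, so a cache line holds just as many pointers as with raw addresses.

```python
# Hypothetical split of a 64-bit RIV-style pointer into three fields.
REGION_BITS, OBJECT_BITS, OFFSET_BITS = 16, 8, 40

def pack(region, obj, offset):
    """Pack region ID, object ID and relative offset into one 64-bit word."""
    assert region < (1 << REGION_BITS)
    assert obj < (1 << OBJECT_BITS)
    assert offset < (1 << OFFSET_BITS)
    return (region << (OBJECT_BITS + OFFSET_BITS)) | (obj << OFFSET_BITS) | offset

def unpack(ptr):
    """Recover (region, obj, offset) from a packed 64-bit word."""
    return (ptr >> (OBJECT_BITS + OFFSET_BITS),
            (ptr >> OFFSET_BITS) & ((1 << OBJECT_BITS) - 1),
            ptr & ((1 << OFFSET_BITS) - 1))
```

A fat pointer would carry the base address and offset as two separate words; packing them keeps the reference at 8 bytes at the cost of a few shift-and-mask operations on every dereference.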

    Life Sciences Program Tasks and Bibliography

    This document includes information on all peer reviewed projects funded by the Office of Life and Microgravity Sciences and Applications, Life Sciences Division during fiscal year 1995. Additionally, this inaugural edition of the Task Book includes information for FY 1994 programs. This document will be published annually and made available to scientists in the space life sciences field both as a hard copy and as an interactive Internet web page.

    Efficient runtime systems for speculative parallelization

    Manual parallelization is time-consuming and error-prone; automatic parallelization, on the other hand, is often unable to extract substantial parallelism. Using speculation, however, most of the parallelism can be exploited even in complex programs. Speculatively parallelized programs always need a runtime system during execution in order to guard the speculative assumptions and to ensure correct semantics in the case of misspeculation. Such runtime systems should influence the execution time of the parallel program as little as possible. In this thesis, we investigate to what extent state-of-the-art systems that track memory accesses explicitly in software fulfill this requirement, and we describe and implement changes that improve their performance substantially. We also design two new systems that use virtual memory management to track memory accesses implicitly, thus causing far less overhead during execution. One of the new systems is integrated directly into the Linux kernel as a kernel module, providing the best possible efficiency. Furthermore, it provides stronger soundness guarantees than any state-of-the-art system by also capturing system calls, hence including, for example, file I/O in the speculative isolation. In a number of benchmarks we show the superiority of our speculation systems over the current state of the art. All our extensions and newly developed speculation systems are available as open source.
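    The role of such a runtime system can be illustrated with a toy software-speculation sketch, far simpler than the systems in the thesis: speculative writes are buffered, reads are logged, and commit validates that every value read is still current; otherwise the task must be squashed and re-executed.

```python
class SpeculativeTask:
    """Toy software speculation: buffer writes, log reads, validate at commit."""

    def __init__(self, memory):
        self.memory = memory       # shared state: address -> value
        self.read_log = {}         # address -> value observed at first read
        self.write_buffer = {}     # speculative writes, invisible until commit

    def read(self, addr):
        # Reads see the task's own speculative writes first.
        if addr in self.write_buffer:
            return self.write_buffer[addr]
        val = self.memory.get(addr)
        self.read_log.setdefault(addr, val)
        return val

    def write(self, addr, value):
        self.write_buffer[addr] = value

    def commit(self):
        # Misspeculation check: every value read must still be current.
        if any(self.memory.get(a) != v for a, v in self.read_log.items()):
            return False           # caller must squash and re-execute
        self.memory.update(self.write_buffer)
        return True
```

The explicit tracking in `read` and `write` is exactly the per-access overhead that the thesis's virtual-memory-based systems avoid by observing pages instead of individual accesses.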

    Performance measurement methodology for integrated services networks

    With the emergence of advanced integrated services networks, the need for effective performance analysis techniques has become pressing. Further advancement of these networks is only possible if the practical performance issues of the existing networks are clearly understood. This thesis is concerned with the design and development of a measurement system which has been implemented on a large experimental network. The measurement system is based on dedicated traffic generators which have been designed and implemented on the Project Unison network. The Unison project is a multisite networking experiment for conducting research into the interconnection and interworking of local area network based multi-media application systems. The traffic generators were first developed for the Cambridge Ring based Unison network. Once their usefulness and effectiveness had been proven, high performance traffic generators using transputer technology were built for the Cambridge Fast Ring based Unison network. The measurement system is capable of measuring conventional performance parameters such as throughput and packet delay, and is able to characterise the operational performance of network bridging components under various loading conditions. In particular, the measurement system has been used in a 'measure and tune' fashion in order to improve the performance of a complex bridging device. Accurate measurement of packet delay in wide area networks is a recognised problem, associated with synchronising the clocks of distant machines. A chronological timestamping technique has been introduced in which the clocks are synchronised using a broadcast method; Rugby time clock receivers have been interfaced to each generator for this purpose. In order to design network applications, accurate knowledge of the expected network performance under different loading conditions is essential. Using the measurement system, this has been achieved by examining the network characteristics at the network/user interface. Also, the generators are capable of emulating a variety of application traffic which can be injected into the network along with the traffic from real applications, thus enabling user-oriented performance parameters to be evaluated in a mixed traffic environment. A number of performance measurement experiments have been conducted using the measurement system. Experimental results obtained from the Unison network serve to emphasise the power and effectiveness of the measurement methodology.
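    With the clocks on both machines disciplined by the same broadcast time source, one-way delay is simply the receive timestamp minus the send timestamp. The helper below is an illustrative sketch (`delay_stats` and its sample format are assumptions, not the Unison tooling) showing how delay and jitter fall out of chronologically timestamped packets.

```python
from statistics import mean, stdev

def delay_stats(samples, recv_clock_offset=0.0):
    """Summarise one-way delays from chronologically timestamped packets.

    samples: list of (send_ts, recv_ts) pairs in seconds, taken from
    clocks synchronised by a common broadcast source. Any known residual
    offset of the receiver's clock can be subtracted via recv_clock_offset.
    """
    delays = [(recv - recv_clock_offset) - send for send, recv in samples]
    return {
        "mean": mean(delays),                               # average delay
        "jitter": stdev(delays) if len(delays) > 1 else 0.0,  # delay variation
        "max": max(delays),                                 # worst case seen
    }
```

Without the shared time source, each machine's clock drifts independently and the subtraction above would fold an unknown offset into every delay sample, which is precisely the problem the broadcast synchronisation addresses.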

    Behaviour of carbon/epoxy composite sandwich panels with sustainable core materials subjected to intermediate velocity impacts

    Sandwich composite structures are made of two strong and stiff face-sheets separated by a lightweight core material. They are used in lightweight structures for load-carrying applications in the aerospace, marine, railway and wind-energy industries as a way to increase bending stiffness and buckling resistance while maintaining a low weight. During their lifetime these structures are subjected to impact events such as the accidental drop of tools during assembly, bird strikes, hailstone impact, or Foreign Object (FO) impact from stones, debris, etc. Damage produced by impacts can compromise the integrity of a structure, reducing its residual strength and stiffness and causing premature failure of a component under service loads. This PhD thesis studies the mechanical response, impact process and damage mechanisms taking place in an Intermediate Velocity Impact (IVI) on a sandwich composite panel made of woven carbon/epoxy face-sheets with either an agglomerated cork core or a PET foam core. This is done by applying a numerical-experimental methodology based on the building-block approach used for aircraft certification, in which results obtained from numerical models are compared directly with results obtained in the experimental tests. In this context, the thesis is divided into three main parts. In the first part, the face-sheet and core are treated independently in order to understand their individual dynamic response and to select appropriate constitutive material models for the FEA implementation. Continuum damage models are used to model the inter-laminar and intra-laminar fracture behaviour of the face-sheets. The suitability of these models is assessed through independent FEA models of fracture tests (modes I and II) and ballistic impact, which are validated with experimental results from the literature. In the case of the core, the compressive and tensile response of the core materials (agglomerated cork and PET foam) is studied by performing static and dynamic characterization tests. The collected data are then used to validate the non-linear material models by implementing an FEA model of dynamic compression. The second part of the thesis studies the IVI event on the whole sandwich panel. A set of experimental impact tests is performed and a detailed explicit/non-linear FEA model is implemented and validated against the experimental results. The experimental tests are performed using a gas gun together with different state-of-the-art measuring techniques such as high-speed video recording, 3D Digital Image Correlation (DIC) and Computed X-ray Tomography (CT). The validated FEA model is used to study the phases and mechanisms of damage evolution during the impact process, something that cannot be observed experimentally and that provides a valuable tool for understanding the phenomenon. At the most general level, the impact process is dominated by different interacting physical mechanisms such as elastic deformation of the panel, inter-laminar and intra-laminar fracture of the face-sheets, non-linear core deformation, multiaxial core failure and core/face-sheet debonding. Different impact phases are observed and their physical mechanisms are explained in detail. The FEA model is also used to perform a comparative analysis of different impact parameters (e.g. impact velocity, core thickness, impact angle and axial preload), analysing their influence on the mechanical response of the sandwich panels under IVI. The third part of the thesis studies hailstone impact on the sandwich panels using the developed FEA model of the panel together with a particle model of the hailstone. The interaction between the dominant physical mechanisms in the sandwich panel (e.g. elastic response, face-sheet damage, core failure) and the fragmentation of the hailstone is explained in detail, together with the failure modes expected in this kind of event and the severity of the damage extension for two different hailstone sizes. Doctoral Programme in Mechanical Engineering and Industrial Organization, Universidad Carlos III de Madrid. Committee: President: Jacobo Díaz García; Secretary: Shirley Kalamis García Castillo; Member: Alberto Solís Fajard

    Life Sciences Program Tasks and Bibliography for FY 1996

    This document includes information on all peer reviewed projects funded by the Office of Life and Microgravity Sciences and Applications, Life Sciences Division during fiscal year 1996. This document will be published annually and made available to scientists in the space life sciences field both as a hard copy and as an interactive Internet web page.

    Numerical modelling of additive manufacturing process for stainless steel tension testing samples

    Nowadays additive manufacturing (AM) technologies, including 3D printing, are growing rapidly and are expected to replace conventional subtractive manufacturing technologies to some extent. During a selective laser melting (SLM) process, one of the most popular AM technologies for metals, a large amount of heat is required to melt the metal powders, and this leads to distortions and/or shrinkages of the additively manufactured parts. It is useful to predict these effects before 3D printing so that unwanted distortions and shrinkages can be controlled. This study develops a two-phase numerical modelling and simulation process for the AM of 17-4PH stainless steel, considering the importance of post-processing and the need for calibration to achieve a high-quality print. Using the proposed modelling and simulation process, optimal process parameters, material properties and topology can be obtained to ensure that a part is 3D printed successfully.