1,858 research outputs found

    Assessment of plastics in the National Trust: a case study at Mr Straw's House

    The National Trust is a charity that cares for over 300 publicly accessible historic buildings and their contents across England, Wales and Northern Ireland. There have been few previous studies on the preservation of plastics within National Trust collections, even though plastics form a significant part of the more modern collections of objects. This paper describes the design of an assessment system that was successfully trialled at Mr Straw's House, a National Trust property in Worksop, UK. This system can now be used for future plastics surveys at other National Trust properties. In addition, the survey gave valuable information about the state of the collection, demonstrating that the plastics that are deteriorating are those already known to be vulnerable, namely cellulose nitrate/acetate, PVC and rubber. Verifying this knowledge of the most vulnerable plastics enables us to recommend to properties across the National Trust that these types should be treated as a priority for correct storage and in-depth recording.

    Detecting Semantic Conflicts using Static Analysis

    Version control system tools empower developers to work independently on their development tasks. These tools also facilitate the integration of changes through merge operations, and report textual conflicts. However, when developers integrate their changes, they might encounter other types of conflicts that are not detected by current merge tools. In this paper, we focus on dynamic semantic conflicts, which occur when merging reports no textual conflicts but results in undesired interference, causing unexpected program behavior at runtime. To address this issue, we propose a technique that explores the use of static analysis to detect interference when merging contributions from two developers. We evaluate our technique using a dataset of 99 experimental units extracted from merge scenarios. The results provide evidence that our technique has significant interference detection capability. It outperforms, in terms of F1 score and recall, previous methods that rely on dynamic analysis to detect semantic conflicts, although those methods show better precision. Our technique's precision is comparable to that observed in other studies that leverage static analysis or theorem-proving techniques to detect semantic conflicts, albeit with significantly improved overall performance.
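The abstract above compares detectors by precision, recall, and F1 score. As a reminder of how those metrics relate, here is a minimal sketch; the scenario counts are illustrative, not the paper's actual results.

```python
# Hypothetical evaluation of an interference detector on labeled merge
# scenarios. tp/fp/fn counts below are invented for illustration only.

def precision_recall_f1(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Standard detection metrics from true/false positives and false negatives."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Suppose the detector flags 40 scenarios, 30 of which truly interfere,
# and misses 10 interfering scenarios (illustrative numbers).
p, r, f1 = precision_recall_f1(tp=30, fp=10, fn=10)
print(f"precision={p:.2f} recall={r:.2f} f1={f1:.2f}")  # precision=0.75 recall=0.75 f1=0.75
```

A static analysis that over-approximates interference tends to raise `fp` (lowering precision) while keeping `fn` low (raising recall), which matches the trade-off the abstract reports against dynamic-analysis baselines.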

    Aging induced changes on NEXAFS fingerprints in individual combustion particles

    Soot particles can significantly influence the Earth's climate by absorbing and scattering solar radiation as well as by acting as cloud condensation nuclei. However, despite their environmental (as well as economic and political) importance, the way these properties are affected by atmospheric processing of the combustion exhaust gases is still a subject of discussion. In this work, individual soot particles emitted from two different vehicles, a EURO 2 transporter and a EURO 3 passenger car, as well as from a wood stove, were investigated on a single-particle basis. The emitted exhaust, including the particulate and the gas phase, was processed in a smog chamber with artificial solar radiation. Single-particle specimens of both unprocessed and aged soot were characterized using near edge X-ray absorption fine structure spectroscopy (NEXAFS) and scanning electron microscopy. Comparison of NEXAFS spectra from the unprocessed particles and those resulting from exhaust photooxidation in the chamber revealed changes in the carbon functional group content. For the wood stove emissions, these changes were minor, consistent with the relatively mild oxidation conditions. For the EURO 2 transporter emissions, the most apparent change was that of carboxylic carbon from oxidized organic compounds condensing on the primary soot particles. For the EURO 3 car emissions, oxidation of primary soot particles upon photochemical aging has likely contributed as well. Overall, the changes in the NEXAFS fingerprints were in qualitative agreement with data from an aerosol mass spectrometer. Furthermore, by taking full advantage of our in situ microreactor concept, we show that the soot particles from all three combustion sources changed their ability to take up water under humid conditions upon photochemical aging of the exhaust. Due to the selectivity and sensitivity of the NEXAFS technique for water, even small amounts of water taken up into the internal voids of agglomerated particles could be detected. Because such small amounts of water uptake do not lead to measurable changes in particle diameter, this uptake may remain beyond the limits of volume growth measurements, especially for larger agglomerated particles.

    Assessment of toxicity of particulate matter in the sub-micrometric range by an Atmospheric Simulation Chamber

    Atmospheric aerosols (or Particulate Matter, PM) play an important role in human health and global climate change, and are a central topic in atmospheric physics and chemistry. PM consists of solid and liquid particles suspended in the atmosphere, with high variability in size, composition, concentration, shape, lifetime and sources. Among PM constituents, carbonaceous compounds make up a substantial fraction. My thesis focuses on soot particles, carbonaceous particles generated as by-products of incomplete combustion of hydrocarbon fuels. Soot particles have negative impacts on both climate and health. It is therefore necessary to investigate their properties and behaviour in the atmosphere in order to fully understand their adverse effects. Aerosol properties can be investigated by experiments performed in Atmospheric Simulation Chambers (ASCs), exploratory platforms that allow atmospheric processes to be studied under realistic but controlled conditions, for time periods long enough to reproduce realistic environments. My PhD took place in the Laboratory for Environmental Physics at the Physics Department of the University of Genoa, where the only Italian ASC, ChAMBRe, is installed. A soot generator is useful for performing experiments on soot particles: it is a stable source that generates particles with controlled and known properties, similar to real atmospheric ones. During my PhD, the Mini-Inverted Soot Generator (MISG) was used, fuelled with both ethylene and propane and with varying oxygen-fuel ratios. The main objective of this thesis was to develop an experimental setup and a procedure that allow systematic studies of soot particles exposed to and maintained in different conditions, thus investigating their properties, effects and interactions with other atmospheric pollutants. 
Combustion conditions and resulting flame shapes were classified, and a deep characterization of the MISG exhaust, in connection to ChAMBRe, was performed in terms of the concentration of emitted particles and gases, particle size distribution, composition and optical properties. The characterization of the MISG exhaust is an important input for the design of subsequent experiments. Well-characterized soot particles can be used to investigate the effects that atmospheric parameters have on soot particles, and to study the interactions between soot particles and other pollutants. During my PhD work, preliminary studies were performed on soot oxidative potential and toxicological effects, as well as on interactions between soot particles and bio-aerosols.

    AI Chain on Large Language Model for Unsupervised Control Flow Graph Generation for Statically-Typed Partial Code

    Control Flow Graphs (CFGs) are essential for visualizing, understanding and analyzing program behavior. For statically-typed programming languages like Java, developers obtain CFGs by using bytecode-based methods for compilable code and Abstract Syntax Tree (AST)-based methods for partially uncompilable code. However, explicit syntax errors during AST construction and implicit semantic errors caused by bad coding practices can lead to behavioral loss and deviation of CFGs. To address this issue, we propose a novel approach that leverages the error tolerance and understanding ability of pre-trained Large Language Models (LLMs) to generate CFGs. Our approach involves a Chain of Thought (CoT) with four steps: structure hierarchy extraction, nested code block extraction, CFG generation of nested code blocks, and fusion of all nested code blocks' CFGs. To address the limitations of the original CoT's single-prompt approach (i.e., completing all steps in a single generative pass), which can result in an ``epic'' prompt with hard-to-control behavior and error accumulation, we break down the CoT into an AI chain with explicit sub-steps. Each sub-step corresponds to a separate AI unit, with an effective prompt assigned to each unit for interacting with LLMs to accomplish a specific purpose. Our experiments confirmed that our method outperforms existing CFG tools in terms of node and edge coverage, especially for incomplete or erroneous code. We also conducted an ablation experiment and confirmed the effectiveness of our AI chain design principles: Hierarchical Task Breakdown, Unit Composition, and Mix of AI Units and Non-AI Units. Our work opens up new possibilities for building foundational software engineering tools based on LLMs, as opposed to traditional program analysis methods.
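The AI-chain decomposition described above (separate units, each with its own focused prompt, composed in sequence rather than one "epic" prompt) can be sketched as follows. This is a conceptual illustration, not the paper's implementation: `call_llm` is a hypothetical stand-in for a real LLM API, and the prompts are invented.

```python
# Sketch of an AI chain: each sub-step is a separate AI unit with its own
# prompt; units are composed in sequence, each consuming the previous output.
from typing import Callable

def call_llm(prompt: str) -> str:
    # Placeholder: a real implementation would call an LLM endpoint here.
    # We echo the first prompt line so the chaining is visible.
    return f"<output for: {prompt.splitlines()[0]}>"

def make_unit(instruction: str) -> Callable[[str], str]:
    """Wrap one sub-step's instruction into a reusable AI unit."""
    def unit(previous_output: str) -> str:
        return call_llm(f"{instruction}\n\nInput:\n{previous_output}")
    return unit

# The four sub-steps, mirroring the decomposition in the abstract.
chain = [
    make_unit("Extract the structure hierarchy of the partial Java code."),
    make_unit("Extract the nested code blocks from the hierarchy."),
    make_unit("Generate a CFG for each nested code block."),
    make_unit("Fuse the per-block CFGs into one control flow graph."),
]

result = "int f(int x) { if (x > 0) { return x; } return -x; }"
for unit in chain:
    result = unit(result)  # each unit sees only its own focused prompt
```

Keeping each prompt small is what the abstract's "Hierarchical Task Breakdown" principle refers to: errors in one unit can be inspected and corrected in isolation instead of accumulating inside a single generative pass.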

    OSCAR. A Noise Injection Framework for Testing Concurrent Software

    “Moore’s Law” is a well-known observable phenomenon in computer science that describes a yearly pattern in the increase of transistor density on a processor die. Even though it has held true for the last 57 years, thermal limits on how far a processor’s core frequencies can be increased have placed physical limits on performance scaling. The industry has since shifted towards multicore architectures, which offer much better and more scalable performance, while in turn forcing programmers to adopt the concurrent programming paradigm when designing new software, if they wish to make use of this added performance. The use of this paradigm comes with the unfortunate downside of the sudden appearance of a plethora of additional errors in programs, stemming directly from the (poor) use of concurrency techniques. Furthermore, concurrent programs are notoriously hard to design and to verify for correctness, with researchers continuously developing new, more effective and efficient methods of doing so. Noise injection, the theme of this dissertation, is one such method. It relies on the “probe effect” — the observable shift in the behaviour of concurrent programs upon the introduction of noise into their routines. The abandonment of ConTest, a popular proprietary and closed-source noise injection framework for testing concurrent software written in the Java programming language, has left a void in the availability of noise injection frameworks for this language. To mitigate this void, this dissertation proposes OSCAR — a novel open-source noise injection framework for the Java programming language, relying on static bytecode instrumentation to inject noise. OSCAR provides a free and well-documented noise injection tool for research, pedagogical and industry usage. 
Additionally, we propose a novel taxonomy for categorizing new and existing noise injection heuristics, together with a new method for generating and analysing concurrent software traces based on string comparison metrics. After noising programs from the IBM Concurrent Benchmark with different heuristics, we observed that OSCAR is highly effective in increasing the coverage of the interleaving space, and that the different heuristics provide diverse trade-offs in the cost and benefit (time/coverage) of the noise injection process.
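The "probe effect" that noise injection exploits can be illustrated with a minimal sketch. OSCAR itself works by rewriting Java bytecode; the Python below only demonstrates the underlying idea, with a hypothetical `noise()` hook inserted before a critical section so that different runs explore different thread interleavings.

```python
# Conceptual sketch of noise injection: randomly yield/sleep at shared-memory
# access points to perturb thread scheduling without changing program logic.
import random
import threading
import time

def noise(probability: float = 0.5, max_delay: float = 0.001) -> None:
    """Randomly sleep for a short time, shifting the thread interleaving."""
    if random.random() < probability:
        time.sleep(random.uniform(0, max_delay))

counter = 0
lock = threading.Lock()

def increment(n: int) -> None:
    global counter
    for _ in range(n):
        noise()            # injected noise: varies the schedule per run
        with lock:         # with proper locking the result stays correct,
            counter += 1   # but far more interleavings get exercised

threads = [threading.Thread(target=increment, args=(100,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400: noise changes scheduling, not correctness
```

Removing the `with lock:` line in this sketch would let the injected delays expose the lost-update race far more often than an unnoised run would — which is exactly the coverage-increasing effect the dissertation measures on the interleaving space.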
