
    Heap Reference Analysis Using Access Graphs

    Despite significant progress in the theory and practice of program analysis, analysing properties of heap data has not reached the same level of maturity as the analysis of static and stack data. The spatial and temporal structure of stack and static data is well understood, while that of heap data seems arbitrary and is unbounded. We devise bounded representations which summarize properties of the heap data. This summarization is based on the structure of the program which manipulates the heap. The resulting summary representations are certain kinds of graphs called access graphs. The boundedness of these representations and the monotonicity of the operations to manipulate them make it possible to compute them through data flow analysis. An important application which benefits from heap reference analysis is garbage collection, where currently liveness is conservatively approximated by reachability from program variables. As a consequence, current garbage collectors leave a lot of garbage uncollected, a fact which has been confirmed by several empirical studies. We propose the first ever end-to-end static analysis to distinguish live objects from reachable objects. We use this information to make dead objects unreachable by modifying the program. This application is interesting because it requires discovering data flow information representing complex semantics. In particular, we discover four properties of heap data: liveness, aliasing, availability, and anticipability. Together, they cover all combinations of directions of analysis (i.e. forward and backward) and confluence of information (i.e. union and intersection). Our analysis can also be used for plugging memory leaks in C/C++ programs.
    Comment: Accepted for publication by ACM TOPLAS. This version incorporates the referees' comments.
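    The four properties span both analysis directions and both confluences. The simplest member of that family, classical backward liveness with union confluence over scalar variables, can be sketched as a fixed-point computation; the tiny CFG and variable names below are illustrative, not taken from the paper:

```python
# A minimal sketch of backward liveness analysis with union confluence,
# the scalar analogue of the heap liveness that access graphs generalize.
# Each node maps to (uses, defs, successors).
cfg = {
    "n1": ({"x"}, {"y"}, ["n2"]),        # y = x
    "n2": ({"y"}, {"z"}, ["n3", "n4"]),  # z = y; branch
    "n3": ({"z"}, set(), ["n4"]),        # use z
    "n4": (set(), set(), []),            # exit
}

def liveness(cfg):
    live_in = {n: set() for n in cfg}
    live_out = {n: set() for n in cfg}
    changed = True
    while changed:  # iterate to a fixed point (monotone and bounded)
        changed = False
        for n, (use, defs, succs) in cfg.items():
            out = set().union(*(live_in[s] for s in succs)) if succs else set()
            inn = use | (out - defs)
            if out != live_out[n] or inn != live_in[n]:
                live_out[n], live_in[n] = out, inn
                changed = True
    return live_in, live_out
```

    Boundedness and monotonicity guarantee termination here exactly as they do for the access-graph domain in the paper.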

    An algebraic approach to analysis of recursive and concurrent programs


    Observing the epoch of reionization and dark ages with redshifted 21-cm hydrogen line

    The billion years subsequent to the Big Bang pose the next challenging frontier for precision cosmology. The concordance cosmological model, ΛCDM, propounds that during this period, the dark matter gravitationally shepherds the baryonic matter to form the primordial large-scale structures. This era is termed the Dark Ages (DA). The following era, the Epoch of Reionization (EoR), leads to the formation of the first stars and galaxies that reionize the permeating neutral hydrogen. The linear polarization of the cosmic background radiation and the Gunn-Peterson troughs in quasar absorption spectra provide indirect evidence for the EoR. Currently, there is no observational evidence for the DA. While state-of-the-art radio telescope arrays, the Low Frequency Array (LOFAR) and the Square Kilometre Array (SKA), propose various strategies to observe the early phases of the Universe, advanced simulations employing high-performance computing (HPC) methodologies continue to play a significant role in constraining various models based upon limited observational data. Despite a wide range of research, there is no end-to-end simulation solution available to quantifiably address the observational challenges due to statistical and systematic errors including foregrounds, ionosphere, polarization, RFI, instrument stability, and direction-dependent gains. This research consolidates the cutting-edge simulation solutions Cube-P3M, C2-Ray, and MeqTrees to build an HPC prototype pipeline entitled Simulating Interferometry Measurements (SIM). To establish and validate the efficacy of the SIM pipeline, the research builds a theoretical framework of two science drivers, viz., the presence of Lyman-limit absorbers and measuring non-Gaussianity from the 21-cm data. Thereafter, using the LOFAR and SKA telescope configurations, SIM generates data visibility cubes with direction-dependent and direction-independent propagation effects. Finally, SIM extracts the original signal through standard techniques exploring the parametric phase-space. Results are presented herein.
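    The observing bands involved follow directly from redshifting the hydrogen hyperfine line; a small sketch (the rest frequency is the standard value, the redshift ranges are rough conventional figures, not results from this thesis):

```python
# Rest frequency of the neutral-hydrogen 21-cm hyperfine line.
NU_21CM_MHZ = 1420.405751768  # MHz

def observed_frequency(z):
    """Frequency at which the 21-cm signal from redshift z is observed."""
    return NU_21CM_MHZ / (1.0 + z)

# EoR redshifts (roughly z ~ 6-10) redshift the line into the LOFAR
# high band, while Dark Ages signals (z >~ 30) fall below ~46 MHz.
for z in (6, 10, 30):
    print(f"z = {z:2d}: {observed_frequency(z):6.1f} MHz")
```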

    Timing model derivation : static analysis of hardware description languages

    Safety-critical hard real-time systems are subject to strict timing constraints. In order to derive guarantees on the timing behavior, the worst-case execution time (WCET) of each task comprising the system has to be known. The aiT tool has been developed for computing safe upper bounds on the WCET of a task. Its computation is mainly based on abstract interpretation of timing models of the processor and its periphery. These models are currently hand-crafted by human experts, which is a time-consuming and error-prone process. Modern processors are automatically synthesized from formal hardware specifications, which describe not only the processor's functional behavior but also its timing aspects. A methodology to derive sound timing models from such hardware specifications is described within this thesis. To ease the process of timing model derivation, the methodology is embedded into a sound framework. A key part of this framework are static analyses on hardware specifications. This thesis presents an analysis framework that is built on the theory of abstract interpretation, allowing the use of classical program analyses on hardware description languages. Its suitability to automate parts of the derivation methodology is shown by different analyses. Practical experiments demonstrate the applicability of the approach to derive timing models. The soundness of the analyses and of their results is also proved.
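    The soundness argument in such frameworks rests on abstract domains whose operators over-approximate the concrete semantics. The simplest standard example is the interval domain; the sketch below only illustrates that soundness idea, not the far richer pipeline and cache domains used by aiT-style analyses:

```python
# A minimal sketch of the interval abstract domain: values are (lo, hi)
# pairs, BOTTOM represents "no value yet", and each abstract operator
# covers every concrete outcome it abstracts.
BOTTOM = None

def join(a, b):
    """Least upper bound: the smallest interval covering both inputs."""
    if a is BOTTOM:
        return b
    if b is BOTTOM:
        return a
    return (min(a[0], b[0]), max(a[1], b[1]))

def add(a, b):
    """Abstract addition: sound for every concrete pair of values."""
    return (a[0] + b[0], a[1] + b[1])

x = (1, 3)                 # x in [1, 3]
y = join((0, 0), (5, 5))   # y is 0 on one path, 5 on another
print(add(x, y))           # (1, 8): contains every concrete x + y
```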

    Systematic Approaches to Advanced Information Flow Analysis – and Applications to Software Security

    In this thesis I report on applications of slicing and program dependence graphs (PDGs) to software security. I also propose an analysis framework that generalizes both data flow analysis on control flow graphs and slicing on program dependence graphs. Such a framework makes it possible to systematically derive new PDG-based analyses that go beyond slicing. The main theses of my work are as follows: (1) PDG-based information flow control is useful, practically applicable, and relevant. (2) Data flow analysis can be applied systematically to program dependence graphs. (3) Data flow analysis on program dependence graphs is practically feasible.
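    The slicing that these PDG-based analyses generalize is, at its core, reachability over dependence edges: a statement belongs to the backward slice of a criterion if the criterion is reachable from it along data or control dependences. A sketch with illustrative node names (not from the thesis):

```python
# A minimal sketch of a backward slice on a program dependence graph.
# Edges point from a node to the nodes it depends on.
pdg = {
    "print(z)": ["z = x + y", "if c"],
    "z = x + y": ["x = input()", "y = 2"],
    "if c": ["c = input()"],
    "x = input()": [], "y = 2": [], "c = input()": [],
    "unused = 7": [],
}

def backward_slice(pdg, criterion):
    """All nodes that may influence the slicing criterion."""
    seen, work = set(), [criterion]
    while work:
        n = work.pop()
        if n not in seen:
            seen.add(n)
            work.extend(pdg[n])
    return seen

print(sorted(backward_slice(pdg, "print(z)")))
# "unused = 7" is absent: it cannot influence the criterion.
```

    Information flow control builds on the same reachability question: if no node handling a secret reaches a public output in the PDG, no flow between them is possible.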

    Analysis of preemptively scheduled hard real-time systems

    As timing is a major property of hard real-time systems, proving timing correctness is of utmost importance. A static timing analysis derives upper bounds on the execution time of tasks; a scheduling analysis uses these bounds and checks whether each task meets its timing constraints. In preemptively scheduled systems with caches, this interface between timing analysis and scheduling analysis must be considered outdated. On a context switch, a preempting task may evict cached data of a preempted task that then need to be reloaded after the preemption. The additional execution time due to these reloads, called the cache-related preemption delay (CRPD), may substantially prolong a task's execution time and strongly influence the system's performance. In this thesis, we present a formal definition of the cache-related preemption delay and determine the applicability and the limitations of a separate CRPD computation. To bound the CRPD based on the analysis of the preempted task, we introduce the concept of definitely cached useful cache blocks. This new concept eliminates substantial pessimism with respect to former analyses by taking the over-approximation of a preceding timing analysis into account. We consider the impact of the preempting task to further refine the CRPD bounds. To this end, we present the notion of resilience: the resilience of a cache block is a measure for the amount of disturbance by a preempting task that a cache block of the preempted task may survive. Based on these CRPD bounds, we show how to correctly account for the CRPD in the schedulability analysis for fixed-priority preemptive systems and present new CRPD-aware response time analyses: the ECB-Union and Multiset approaches.
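    A common baseline that analyses of this kind refine is the set-intersection bound for direct-mapped caches: the CRPD is at most the block reload time multiplied by the number of useful cache blocks (UCBs) of the preempted task that the preempting task's evicting cache blocks (ECBs) can displace. A sketch with illustrative numbers, not values from the thesis:

```python
# A minimal sketch of the classic UCB/ECB intersection bound on the
# cache-related preemption delay for a direct-mapped cache.
BRT = 100  # block reload time in cycles (assumed, architecture-specific)

def crpd_bound(ucbs, ecbs, brt=BRT):
    """Upper bound on the CRPD in cycles: only a useful block that the
    preempting task may evict can force an extra reload."""
    return brt * len(ucbs & ecbs)

ucbs = {0, 1, 4, 5}  # cache sets holding useful blocks of the preempted task
ecbs = {1, 2, 3, 4}  # cache sets touched by the preempting task
print(crpd_bound(ucbs, ecbs))  # 200: only sets 1 and 4 can cause reloads
```

    The thesis's definitely-cached UCBs shrink the first set, and resilience discounts blocks that survive the observed amount of disturbance, so both refinements only tighten this bound.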

    Computational Aerothermodynamic Analysis of Satellite Trans-Atmospheric Skip Entry Survivability

    Computational aerothermodynamic analysis is presented for a spacecraft in low Earth orbit performing an atmospheric skip entry maneuver. Typically, atmospheric reentry is a terminal operation signaling mission end-of-life and, in some instances, executed for spacecraft disposal. A variation on reentry – skip entry – is an aeroassisted trans-atmospheric maneuver in which a spacecraft utilizes the effects of aerodynamic drag in order to reduce energy prior to a terminal entry, pinpoint a targeted entry, or change orbital elements such as inclination. Spacecraft performing a skip entry enable new modes of maneuver to enhance operations in nominal or possibly contested mission environments. The present research examines the aerothermodynamic effects of a skip entry trajectory for a small satellite to determine the survivability limits for potential future practical implementation by systems not intentionally designed to survive reentry. Due to the rarefied nature of the upper atmosphere, all fluid flow analysis is performed using SPARTA, a Direct Simulation Monte Carlo (DSMC) solver. Satellite skip entry maneuvers should be survivable with skip perigees near the sensible atmosphere limit in an approximate altitude range of h ∈ [90, 120] km.
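    Why the 90–120 km band is so sensitive can be seen from a first-order exponential-atmosphere estimate; this is only a rough continuum sketch with textbook-style reference values (assumed, not the thesis's DSMC inputs), and it ignores the rarefied-flow effects that motivate using a DSMC solver in the first place:

```python
import math

# A minimal exponential-atmosphere sketch for first-order drag and
# heating estimates. RHO0 and H_SCALE are rough assumed values.
RHO0 = 1.225     # sea-level density, kg/m^3
H_SCALE = 7.2e3  # scale height, m (rough mean value)

def density(h_m):
    """Approximate atmospheric density at altitude h_m metres."""
    return RHO0 * math.exp(-h_m / H_SCALE)

def dynamic_pressure(h_m, v):
    """q = 0.5 * rho * v^2, the driver of aerodynamic drag and heating."""
    return 0.5 * density(h_m) * v * v

# Lowering the skip perigee from 120 km to 90 km raises the density,
# and hence drag and heating, by roughly a factor of exp(30/7.2) ~ 65.
print(density(90e3) / density(120e3))
```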

    Fast widefield techniques for fluorescence and phase endomicroscopy

    Thesis (Ph.D.)--Boston University
    Endomicroscopy is a recent development in biomedical optics which gives researchers and physicians microscope-resolution views of intact tissue to complement macroscopic visualization during endoscopy screening. This thesis presents HiLo endomicroscopy and oblique back-illumination endomicroscopy, fast widefield imaging techniques with fluorescence and phase contrast, respectively. Fluorescence imaging in thick tissue is often hampered by strong out-of-focus background signal. Laser scanning confocal endomicroscopy has been developed for optically-sectioned imaging free from background, but reliance on mechanical scanning fundamentally limits the frame rate and represents significant complexity and expense. HiLo is a fast, simple, widefield fluorescence imaging technique which rejects out-of-focus background signal without the need for scanning. It works by acquiring two images of the sample under uniform and structured illumination and synthesizing an optically sectioned result with real-time image processing. Oblique back-illumination microscopy (OBM) is a label-free technique which allows, for the first time, phase gradient imaging of sub-surface morphology in thick scattering tissue with a reflection geometry. OBM works by back-illuminating the sample with the oblique diffuse reflectance from light delivered via off-axis optical fibers. The use of two diametrically opposed illumination fibers allows simultaneous and independent measurement of phase gradients and absorption contrast. Video-rate single-exposure operation using wavelength multiplexing is demonstrated.
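    The two-image synthesis can be caricatured in one dimension: in-focus content survives as the high-pass part of the uniform image plus a low-pass part gated by the local contrast of the structured image (out-of-focus light washes the structure out, so its gate is near zero). Real HiLo works on 2-D images with a calibrated merge parameter; everything below, including the filter and the parameter eta, is a simplified illustration:

```python
# A heavily simplified 1-D sketch of the HiLo fusion idea.
def lowpass(sig, w=3):
    """Simple moving-average low-pass filter (edges clamped)."""
    n = len(sig)
    return [sum(sig[max(0, i - w):min(n, i + w + 1)]) /
            len(sig[max(0, i - w):min(n, i + w + 1)]) for i in range(n)]

def hilo(uniform, structured, eta=1.0):
    lp_u = lowpass(uniform)
    hp = [u - l for u, l in zip(uniform, lp_u)]          # high-pass: in focus
    ratio = [s / u if u else 0.0 for s, u in zip(structured, uniform)]
    contrast = [abs(r - c) for r, c in zip(ratio, lowpass(ratio))]
    lo = [eta * c * l for c, l in zip(contrast, lp_u)]   # contrast-gated low-pass
    return [h + l for h, l in zip(hp, lo)]
```

    A flat region where the structured image matches the uniform one (no preserved modulation, i.e. out-of-focus background) is rejected to zero, while regions that retain the illumination structure contribute.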
