Incremental Analysis of Programs
Algorithms used to determine the control and data flow properties of computer programs are generally designed for one-time analysis of an entire new input. Applying such algorithms when the input is only slightly modified results in an inefficient system. In this thesis, a set of incremental update algorithms is presented for data flow analysis. These algorithms update the solution from a previous analysis to reflect changes in the program; thus, extensive reanalysis of programs after each program modification can be avoided. The incremental update algorithms presented for global flow analysis are based on the Hecht/Ullman iterative algorithms. Banning's interprocedural data flow analysis algorithms form the basis for the incremental interprocedural algorithms.
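The core idea — reusing a previous fixpoint and re-propagating only from modified nodes — can be sketched for a classic forward problem (reaching definitions). This is a minimal illustration in the spirit of the iterative algorithms mentioned, not the thesis's actual algorithms; the toy CFG, gen/kill sets, and function names are invented for the example.

```python
# Sketch of incremental forward dataflow (reaching definitions).
# The CFG and the gen/kill sets below are illustrative, not from the thesis.

def reaching_defs(preds, gen, kill, nodes, init=None, worklist=None):
    """Iterate OUT[n] = gen[n] | (IN[n] - kill[n]) to a fixpoint.

    `init` lets a caller start from a previous solution; `worklist`
    restricts the initial propagation to nodes whose equations changed.
    """
    out = {n: set() for n in nodes} if init is None else dict(init)
    work = list(nodes) if worklist is None else list(worklist)
    succs = {n: [] for n in nodes}
    for n in nodes:
        for p in preds[n]:
            succs[p].append(n)
    while work:
        n = work.pop()
        in_n = set().union(*(out[p] for p in preds[n])) if preds[n] else set()
        new = gen[n] | (in_n - kill[n])
        if new != out[n]:
            out[n] = new
            work.extend(succs[n])   # re-examine only affected successors
    return out

# Toy CFG: 1 -> 2 -> {3, 4}, with a back edge 3 -> 2.
nodes = [1, 2, 3, 4]
preds = {1: [], 2: [1, 3], 3: [2], 4: [2]}
gen = {1: {"d1"}, 2: {"d2"}, 3: {"d3"}, 4: set()}
kill = {1: set(), 2: {"d1"}, 3: set(), 4: set()}

full = reaching_defs(preds, gen, kill, nodes)

# Edit node 3 (it now also kills d2); update incrementally from `full`
# by re-seeding the worklist with just the modified node.
kill[3] = {"d2"}
incr = reaching_defs(preds, gen, kill, nodes, init=full, worklist=[3])
assert incr == reaching_defs(preds, gen, kill, nodes)  # matches full reanalysis
```

In this instance the incremental pass touches only nodes 3 and 2 before converging, while the full reanalysis revisits the whole graph.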
Visualization-driven Structural and Statistical Analysis of Turbulent Flows
Knowledge extraction from data volumes of ever increasing size requires ever more flexible tools to facilitate interactive query. Interactivity enables real-time hypothesis testing and scientific discovery, but can generally not be achieved without some level of data reduction. The approach described in this paper combines multi-resolution access, region-of-interest extraction, and structure identification in order to provide interactive spatial and statistical analysis of a terascale data volume. Unique aspects of our approach include the incorporation of both local and global statistics of the flow structures, and iterative refinement facilities, which combine geometry, topology, and statistics to allow the user to effectively tailor the analysis and visualization to the science. Working together, these facilities allow a user to focus the spatial scale and domain of the analysis and perform an appropriately tailored multivariate visualization of the corresponding data. All of these ideas and algorithms are instantiated in a deployed visualization and analysis tool called VAPOR, which is in routine use by scientists internationally. In data from a 1024^3 simulation of a forced turbulent flow, VAPOR allowed us to perform a visual data exploration of the flow properties at interactive speeds, leading to the discovery of novel scientific properties of the flow, in the form of two distinct vortical structure populations. These structures would have been very difficult (if not impossible) to find with statistical overviews or other existing visualization-driven analysis approaches. This kind of intelligent, focused analysis/refinement approach will become even more important as computational science moves towards petascale applications.
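The coarse-to-fine strategy described — cheap statistics at reduced resolution to flag regions of interest, full-resolution work only inside them — can be sketched as follows. This is a hedged illustration of the general technique, not VAPOR's implementation or API; the field, block size, and threshold are synthetic.

```python
import numpy as np

# Coarse-to-fine region-of-interest analysis sketch: flag blocks using a
# cheap per-block statistic, then gather full-resolution statistics only
# inside the flagged blocks.  All data here is a synthetic stand-in.
rng = np.random.default_rng(0)
field = rng.random((64, 64, 64))           # stand-in for |vorticity|
block = 8

# 1. Coarse pass: per-block maxima from a blocked view of the data.
coarse = field.reshape(64 // block, block,
                       64 // block, block,
                       64 // block, block).max(axis=(1, 3, 5))

# 2. Select blocks whose coarse statistic exceeds a global threshold.
thresh = np.quantile(field, 0.999)
hot = np.argwhere(coarse >= thresh)

# 3. Fine pass: only flagged blocks are touched at full resolution, and
#    local statistics are gathered per region of interest.
stats = []
for bi, bj, bk in hot:
    roi = field[bi*block:(bi+1)*block,
                bj*block:(bj+1)*block,
                bk*block:(bk+1)*block]
    stats.append((roi.max(), roi.mean()))

# Each flagged block indeed contains at least one above-threshold voxel.
assert len(hot) > 0 and all(mx >= thresh for mx, _ in stats)
```

The fine pass reads only the flagged fraction of the volume, which is what makes interactive rates plausible at terascale.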
Reconstruction of the Antenna Near-Field
The aim of this dissertation thesis is to design an efficient algorithm that can reconstruct the complex antenna near-field, and hence the far-field radiation pattern, from amplitude-only (phaseless) near-field measurements. Under these circumstances, the properties of the minimization algorithm were investigated: the minimization approach, the optimization technique, and the appropriate functional were analyzed and suitably chosen.
To locate the region of the global minimum faster, initial estimates for accelerating the minimization algorithm were also considered. Finally, the idea of representing the unknown electric field distribution by a few coefficients was incorporated into the minimization algorithm. The designed near-field phaseless approach for antenna far-field characterization combines a global optimization, an image compression method, and a local optimization in conjunction with conventional two-surface amplitude measurements. The global optimization method is used to minimize the functional, the image compression method is used to reduce the number of unknown variables, and the local optimization method is used to improve the estimate achieved by the previous method. The proposed algorithm is very robust and faster than comparable algorithms available. Further investigations focused on the possibility of using amplitudes from only a single scanning surface to reconstruct radiation patterns, and on applying the novel phase retrieval algorithm to cylindrical geometry.
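The problem setting can be made concrete with a classic two-plane alternating-projection baseline (Gerchberg-Saxton style): impose the measured amplitude on each surface in turn, propagating between surfaces by FFT. This is only a baseline that purely local iterations of this kind provide, not the thesis's hybrid global-optimization/compression/local-search method; all sizes and data are synthetic assumptions.

```python
import numpy as np

# Two-plane phase retrieval baseline: recover the phase of a field from
# its amplitude on two surfaces related by an FFT "propagation".
rng = np.random.default_rng(1)
true_field = rng.random((32, 32)) * np.exp(2j * np.pi * rng.random((32, 32)))
amp1 = np.abs(true_field)                  # measured |E| on surface 1
amp2 = np.abs(np.fft.fft2(true_field))     # measured |E| on surface 2

def residual(x):
    """The functional: mismatch of the propagated surface-2 amplitude."""
    return np.linalg.norm(np.abs(np.fft.fft2(x)) - amp2) / np.linalg.norm(amp2)

x0 = amp1 * np.exp(2j * np.pi * rng.random((32, 32)))  # random initial phase
x = x0
for _ in range(200):
    X = amp2 * np.exp(1j * np.angle(np.fft.fft2(x)))   # impose |.| on surface 2
    x = amp1 * np.exp(1j * np.angle(np.fft.ifft2(X)))  # impose |.| on surface 1

assert residual(x) < residual(x0)   # the functional decreases from the start
```

Such local iterations can stagnate far from the global minimum, which is precisely the motivation for combining a global optimizer with a compressed (few-coefficient) representation of the unknown field.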
Distributive Network Utility Maximization (NUM) over Time-Varying Fading Channels
Distributed network utility maximization (NUM) has received increasing
interest over the past few years. Distributed solutions (e.g., the
primal-dual gradient method) have been intensively investigated under fading
channels. As such distributed solutions involve iterative updating and explicit
message passing, it is unrealistic to assume that the wireless channel remains
unchanged during the iterations. Unfortunately, the behavior of those
distributed solutions under time-varying channels is in general unknown. In
this paper, we shall investigate the convergence behavior and tracking errors
of the iterative primal-dual scaled gradient algorithm (PDSGA) with dynamic
scaling matrices (DSC) for solving distributive NUM problems under time-varying
fading channels. We shall also study a specific application example, namely the
multi-commodity flow control and multi-carrier power allocation problem in
multi-hop ad hoc networks. Our analysis shows that the PDSGA converges to a
limit region rather than a single point under the finite state Markov chain
(FSMC) fading channels. We also show that the order of growth of the tracking
errors is given by O(T/N), where T and N are the update interval and the
average sojourn time of the FSMC, respectively. Based on this analysis, we
derive a low complexity distributive adaptation algorithm for determining the
adaptive scaling matrices, which can be implemented distributively at each
transmitter. The numerical results show the superior performance of the
proposed dynamic scaling matrix algorithm over several baseline schemes, such
as the regular primal-dual gradient algorithm.
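The kind of iterative primal-dual update this analysis starts from can be illustrated on a static toy NUM instance: a single link of capacity c shared by two log-utility flows. This is a generic primal-dual gradient sketch, not PDSGA with dynamic scaling matrices; the instance and step size are illustrative, and the paper's point is precisely that the channel (here, c) may change while such iterations are still running.

```python
import numpy as np

# Primal-dual gradient for:  maximize sum(log x_i)  s.t.  sum(x_i) <= c.
c = 2.0
x = np.array([0.1, 0.1])   # primal variables (source rates)
lam = 1.0                  # dual variable (link price)
step = 0.05                # illustrative step size

for _ in range(5000):
    x += step * (1.0 / x - lam)      # dL/dx_i = 1/x_i - lam (log utility)
    x = np.maximum(x, 1e-6)          # keep rates positive
    lam += step * (x.sum() - c)      # dual ascent on the capacity constraint
    lam = max(lam, 0.0)              # project price onto lam >= 0

# KKT point: x_i = 1/lam and sum(x_i) = c, i.e. x_i = c/2 = 1.0, lam = 1.0.
```

Under a time-varying c the same iteration only tracks the moving optimum, which is the tracking-error behavior the paper quantifies as O(T/N).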
Convergence Analysis of Mixed Timescale Cross-Layer Stochastic Optimization
This paper considers a cross-layer optimization problem driven by
multi-timescale stochastic exogenous processes in wireless communication
networks. Due to the hierarchical information structure in a wireless network,
a mixed timescale stochastic iterative algorithm is proposed to track the
time-varying optimal solution of the cross-layer optimization problem, where
the variables are partitioned into short-term controls updated in a faster
timescale, and long-term controls updated in a slower timescale. We focus on
establishing a convergence analysis framework for such multi-timescale
algorithms, which is difficult due to the timescale separation of the algorithm
and the time-varying nature of the exogenous processes. To cope with this
challenge, we model the algorithm dynamics using stochastic differential
equations (SDEs) and show that the study of the algorithm convergence is
equivalent to the study of the stochastic stability of a virtual stochastic
dynamic system (VSDS). Leveraging the techniques of Lyapunov stability, we
derive a sufficient condition for the algorithm stability and a tracking error
bound in terms of the parameters of the multi-timescale exogenous processes.
Based on these results, an adaptive compensation algorithm is proposed to
enhance the tracking performance. Finally, we illustrate the framework by an
application example in a wireless heterogeneous network.
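The mixed-timescale structure — short-term controls updated with a fast step size, long-term controls with a much slower one — can be illustrated with a minimal two-timescale stochastic approximation sketch. The exogenous process, target statistic, and step-size schedules below are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

# Two-timescale stochastic approximation: the fast iterate w tracks a
# statistic of the exogenous process z; the slow iterate theta is updated
# using the current fast estimate.  Step ratio b_n/a_n -> 0 gives the
# timescale separation.
rng = np.random.default_rng(2)
w, theta = 0.0, 10.0
for n in range(1, 20001):
    z = 3.0 + rng.normal()      # exogenous process with mean 3.0
    a = 1.0 / n ** 0.6          # fast (short-term) step size
    b = 1.0 / n                 # slow (long-term) step size
    w += a * (z - w)            # short-term control: track E[z]
    theta += b * (w - theta)    # long-term control: follow the fast estimate

# Both iterates settle near the mean of the exogenous process.
```

Convergence arguments for such schemes treat the fast iterate as essentially equilibrated from the slow iterate's point of view, which is the intuition the paper's SDE/VSDS framework makes rigorous for time-varying optima.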
An Algebraic Framework for Compositional Program Analysis
The purpose of a program analysis is to compute an abstract meaning for a
program which approximates its dynamic behaviour. A compositional program
analysis accomplishes this task with a divide-and-conquer strategy: the meaning
of a program is computed by dividing it into sub-programs, computing their
meaning, and then combining the results. Compositional program analyses are
desirable because they can yield scalable (and easily parallelizable) program
analyses.
This paper presents an algebraic framework for designing, implementing, and
proving the correctness of compositional program analyses. A program analysis
in our framework is defined by an algebraic structure equipped with sequencing,
choice, and iteration operations. From the analysis design perspective, a
particularly interesting consequence of this is that the meaning of a loop is
computed by applying the iteration operator to the loop body. This style of
compositional loop analysis can yield interesting ways of computing loop
invariants that cannot be defined iteratively. We identify a class of
algorithms, the so-called path-expression algorithms [Tarjan1981,Scholz2007],
which can be used to efficiently implement analyses in our framework. Lastly,
we develop a theory for proving the correctness of an analysis by establishing
an approximation relationship between an algebra defining a concrete semantics
and an algebra defining an analysis.
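A toy instance of such an algebra makes the sequencing/choice/iteration structure concrete. The domain below — a fragment's net effect on one counter, abstracted as an interval of possible changes — is invented for illustration and is not the paper's framework; note how the meaning of a loop is obtained by applying the iteration operator to the meaning of its body.

```python
import math

# Analysis values abstract a code fragment's net effect on one counter
# as an interval (lo, hi) of possible changes.

def seq(a, b):       # run a then b: effects add
    return (a[0] + b[0], a[1] + b[1])

def choice(a, b):    # either branch may run: interval hull
    return (min(a[0], b[0]), max(a[1], b[1]))

def iterate(a):      # zero or more runs of the loop body
    lo = -math.inf if a[0] < 0 else 0
    hi = math.inf if a[1] > 0 else 0
    return (lo, hi)

inc = (1, 1)         # abstracts: x += 1
skip = (0, 0)        # abstracts: no-op
dec = (-1, -1)       # abstracts: x -= 1

# Program  x -= 1; while (...) { if (...) x += 1 }  analyzed compositionally:
loop = iterate(choice(inc, skip))
prog = seq(dec, loop)
assert loop == (0, math.inf)
assert prog == (-1, math.inf)
```

The loop invariant here (the counter never drops more than 1 below its initial value) falls out of one application of the iteration operator, with no iterative fixpoint computation over the CFG.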