13 research outputs found

    Reliability Analysis of the Hypercube Architecture.

    This dissertation presents improved techniques for analyzing network-connected (NCF), 2-connected (2CF), task-based (TBF), and subcube (SF) functionality measures in a hypercube multiprocessor with faulty processing elements (PEs) and/or communication elements (CEs). These measures help study system-level fault tolerance issues and relate to various application modes in the hypercube. The solutions discussed in the text fall into probabilistic and deterministic models. The probabilistic measure assumes a stochastic graph of the hypercube in which PEs and/or CEs may fail with certain probabilities, while the deterministic model considers that some system components have already failed and aims to determine the system functionality. For the probabilistic model, MIL-HDBK-217F is used to predict PE and CE failure rates for an Intel iPSC system. First, a technique called CAREL is presented; a proof of its correctness is included in an appendix. Using the shelling ordering concept, CAREL is shown to solve the exact probabilistic NCF measure for a hypercube in time polynomial in the number of spanning trees. However, this number increases exponentially with the hypercube dimension. This dissertation therefore aims to obtain lower and upper bounds on the measures more efficiently. The algorithms presented in the text generate tighter bounds than had been obtained previously and run in time polynomial in the cube dimension. The proposed algorithms for the probabilistic 2CF measure consider PE and/or CE failures. For the deterministic measures, a hybrid method for fault-tolerant broadcasting in the hypercube is proposed that combines the favorable features of redundant and non-redundant techniques. A generalized result on the deterministic TBF measure for the hypercube is then described. Two distributed algorithms are proposed to identify the largest operational subcubes in a hypercube C_n with faulty PEs. The first, called LOS1, requires a list of faulty components and utilizes the CMB operator of CAREL to solve the problem. When the number of unavailable nodes (faulty or busy) increases, an alternative distributed approach, called LOS2, processes m available nodes in O(mn) time. The proposed techniques are simple and efficient.
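    To make the subcube (SF) setting concrete, the sketch below identifies the largest fault-free subcubes of a small hypercube by brute force. It is purely illustrative: it is neither LOS1 nor LOS2 (which are distributed and far more efficient), and the {0, 1, *} pattern notation and the function names are my own.

```python
# Illustrative brute-force sketch, not the LOS1/LOS2 algorithms of the thesis.
# A subcube of the n-cube is a pattern over {0, 1, '*'}; it is operational
# iff it contains no faulty node. Feasible for small n only.
from itertools import combinations, product

def subcube_nodes(pattern):
    free = [i for i, c in enumerate(pattern) if c == '*']
    for bits in product('01', repeat=len(free)):
        node = list(pattern)
        for i, b in zip(free, bits):
            node[i] = b
        yield ''.join(node)

def largest_operational_subcubes(n, faulty):
    for dim in range(n, -1, -1):                  # try the largest dimension first
        found = []
        for free in combinations(range(n), dim):  # choose the free coordinates
            fixed = [i for i in range(n) if i not in free]
            for bits in product('01', repeat=len(fixed)):
                pattern = ['*'] * n
                for i, b in zip(fixed, bits):
                    pattern[i] = b
                if not faulty & set(subcube_nodes(pattern)):
                    found.append(''.join(pattern))
        if found:
            return found
    return []

# In Q_3 with node 000 faulty, the largest operational subcubes are 2-dimensional:
print(largest_operational_subcubes(3, {'000'}))   # ['**1', '*1*', '1**']
```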

    Robust Scalable Sorting

    Sorting is one of the most fundamental algorithmic problems, so it is not surprising that sorting algorithms are needed in a vast number of applications. These applications run on a wide range of devices, from smartphones with power-efficient multi-core processors to supercomputers with thousands of machines connected by a high-performance network. Since single-core performance stopped increasing significantly, parallel applications have become ubiquitous, and efficient, scalable algorithms are essential to exploit this immense availability of (parallel) computing power. This thesis investigates how sequential and parallel sorting algorithms can achieve maximum performance in the most robust way possible, across a large parameter space of input sizes, input distributions, machines, and data types.

    In the first part of this thesis we study sequential sorting as well as parallel sorting on shared-memory machines. We present In-place Parallel Super Scalar Samplesort (IPS⁴o), a new comparison-based algorithm that uses only a bounded amount of additional memory (the so-called "in-place" property). A key insight is that our in-place technique improves the sorting speed of IPS⁴o compared to similar algorithms without the in-place property, whereas bounded additional memory had previously been associated with performance penalties. IPS⁴o is also cache-efficient and performs O((n/t) log n) work per thread to sort an array of size n with t threads. In addition, IPS⁴o exploits memory locality, uses a branchless decision tree, and uses special partitions for elements with equal keys. For the special case of sorting purely integer keys, we reused the algorithmic concept of IPS⁴o to implement In-place Parallel Super Scalar Radix Sort (IPS²Ra). We confirm the performance of our algorithms in an extensive experimental study with 21 state-of-the-art sorting algorithms, six data types, ten input distributions, four machines, four memory allocation strategies, and input sizes varying over seven orders of magnitude. On the one hand, the study demonstrates the robust performance of our algorithms; on the other hand, it reveals that many competing algorithms have performance problems. With IPS⁴o we obtain a robust comparison-based sorting algorithm that outperforms other parallel in-place comparison-based sorting algorithms by almost a factor of three. In the vast majority of cases, IPS⁴o is the fastest comparison-based algorithm, regardless of whether we compare it against algorithms that use bounded additional memory or additional memory on the order of the input size, executed in parallel or sequentially. In many cases IPS⁴o even outperforms competing implementations of integer sorting algorithms. The remaining cases mainly involve uniformly distributed inputs and inputs whose keys contain only a few bits, which are typically "easy" for integer sorting algorithms. Our integer sorter IPS²Ra outperforms other integer sorting algorithms on these inputs in the vast majority of cases; the exceptions are some very small inputs, for which most algorithms are very inefficient, though algorithms targeting those input sizes are usually significantly slower on all other inputs.

    In the second part of this thesis we study scalable sorting algorithms for distributed systems that are robust with respect to the input size, frequently occurring sort keys, the distribution of the keys across the processors, and the number of processors. The result of our work is essentially four robust scalable sorting algorithms that together cover the entire range of input sizes. Three of the four are new, fast algorithms implemented with little overhead that nevertheless scale robustly regardless of "difficult" inputs, e.g., inputs containing many duplicate elements or inputs whose elements are distributed unfavorably across the processors with respect to their sort keys. Previous algorithms for medium and large input sizes exhibit an unreasonably large communication volume or exchange a disproportionate number of messages. For these input sizes we describe a robust multi-level generalization of samplesort that strikes a practical compromise between communication volume and the number of exchanged messages; we reconcile these previously incompatible goals by means of a scalable approximate splitter selection and a new data redistribution algorithm. As an alternative, we present a generalization of mergesort that has the advantage of perfectly balanced output. For small inputs we design a variant of quicksort: with little overhead it avoids the problems of unfavorable element distributions and frequently occurring keys by quickly selecting high-quality splitters, assigning elements to processors at random, and applying duplicate handling. Previous practical approaches with polylogarithmic latency either incur a logarithmic factor more communication volume or consider only uniformly distributed inputs without duplicate keys. For very small inputs we propose a simple and fast, albeit work-inefficient, algorithm with logarithmic latency; for these inputs, previous efficient approaches are only theoretical algorithms, usually with disproportionately large constant factors. For the smallest inputs we recommend sorting the data while it is being routed to a single processor. An important contribution of this work to the practical side of algorithm engineering is the communication library RangeBasedComm (RBC), which enables efficient implementations of recursive algorithms with sublinear running time by providing scalable and efficient communication primitives on subsets of processors. Finally, we present an extensive experimental study on two supercomputers with up to 262144 processor cores, eleven algorithms, ten input distributions, and input sizes varying over nine orders of magnitude. With the exception of the largest input sizes, this is the only work that performs sorting experiments on machines of this scale at all. The RBC library speeds up the algorithms dramatically in some cases, one competing algorithm by more than two orders of magnitude. The study shows that our algorithms are robust while clearly outperforming competing implementations; the competitors one would normally have considered even crash on "difficult" inputs.
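    To illustrate the branchless decision tree mentioned above, here is a minimal sketch of the classification step of super scalar samplesort: splitters are stored as an implicit binary search tree, and each comparison updates an array index instead of taking an unpredictable branch. This shows the general technique only, not the IPS⁴o implementation; the function names are placeholders.

```python
# Illustrative sketch of branchless samplesort classification; not IPS4o itself.
def classify(elements, splitters):
    """Distribute elements into len(splitters)+1 buckets.

    splitters must be sorted and have length 2**levels - 1.
    """
    k = len(splitters) + 1                # number of buckets (a power of two)
    levels = k.bit_length() - 1
    tree = [None] * k                     # implicit BST in 1-based heap layout
    def fill(node, lo):                   # place splitters by in-order rank
        if node >= k:
            return lo
        lo = fill(2 * node, lo)
        tree[node] = splitters[lo]
        return fill(2 * node + 1, lo + 1)
    fill(1, 0)
    buckets = [[] for _ in range(k)]
    for x in elements:
        j = 1
        for _ in range(levels):
            j = 2 * j + (x > tree[j])     # comparison becomes index arithmetic
        buckets[j - k].append(x)
    return buckets

print(classify([5, 1, 9, 3, 7], [2, 4, 8]))   # [[1], [3], [5, 7], [9]]
```

    Because the inner loop contains no data-dependent branch, the processor never mispredicts on the comparison, which is the effect the abstract attributes to the decision tree.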

    Interconnection networks for parallel and distributed computing

    Parallel computers are generally either shared-memory machines or distributed-memory machines. There are currently technological limitations on shared-memory architectures, and so parallel computers utilizing a large number of processors tend to be distributed-memory machines. We are concerned solely with distributed-memory multiprocessors. In such machines, the dominant factor inhibiting faster global computations is inter-processor communication. Communication is dependent upon the topology of the interconnection network, the routing mechanism, the flow control policy, and the method of switching. We are concerned with issues relating to the topology of the interconnection network. The choice of how we connect processors in a distributed-memory multiprocessor is a fundamental design decision. There are numerous, often conflicting, considerations to bear in mind. However, there does not exist an interconnection network that is optimal on all counts, and trade-offs have to be made. A multitude of interconnection networks have been proposed, each having some good (topological) properties and some not so good. Existing noteworthy networks include trees, fat-trees, meshes, cube-connected cycles, butterflies, Möbius cubes, hypercubes, augmented cubes, k-ary n-cubes, twisted cubes, n-star graphs, (n, k)-star graphs, alternating group graphs, de Bruijn networks, and bubble-sort graphs, to name but a few. We will mainly focus on k-ary n-cubes and (n, k)-star graphs in this thesis. We also propose a new interconnection network called the augmented k-ary n-cube. The following results are given in the thesis.

    1. Let k ≥ 4 be even and let n ≥ 2. Consider a faulty k-ary n-cube Q_n^k in which the number of node faults f_n and the number of link faults f_e satisfy f_n + f_e ≤ 2n - 2. We prove that given any two healthy nodes s and e of Q_n^k, there is a path from s to e of length at least k^n - 2f_n - 1 (resp. k^n - 2f_n - 2) if the nodes s and e have different (resp. the same) parities (the parity of a node of Q_n^k is the sum modulo 2 of the elements in the n-tuple over {0, 1, ..., k - 1} representing the node). Our result is optimal in the sense that there are pairs of nodes and fault configurations for which these bounds cannot be improved, and it answers questions recently posed by Yang, Tan and Hsu, and by Fu. Furthermore, we extend known results, obtained by Kim and Park, for the case when n = 2.

    2. We give precise solutions to problems posed by Wang, An, Pan, Wang and Qu and by Hsieh, Lin and Huang. In particular, we show that Q_n^k is bi-panconnected and edge-bipancyclic when k ≥ 3 and n ≥ 2, and we also show that when k is odd, Q_n^k is m-panconnected, for m = (n(k - 1) + 2k - 6)/2, and (k - 1)-pancyclic (these bounds are optimal). We introduce a path-shortening technique, called progressive shortening, and strengthen existing results, showing that when paths are formed using progressive shortening they can be efficiently constructed and used to solve a problem relating to the distributed simulation of linear arrays and cycles in a parallel machine whose interconnection network is Q_n^k, even in the presence of a faulty processor.

    3. We define an interconnection network AQ_n^k, which we call the augmented k-ary n-cube, by extending a k-ary n-cube in a manner analogous to the existing extension of an n-dimensional hypercube to an n-dimensional augmented cube. We prove that the augmented k-ary n-cube AQ_n^k has a number of attractive properties (in the context of parallel computing). For example, we show that AQ_n^k is a Cayley graph (and so is vertex-symmetric); has connectivity 4n - 2 and is such that we can build a set of 4n - 2 mutually disjoint paths joining any two distinct vertices so that the path of maximal length has length at most max{(n - 1)k - (n - 2), k + 7}; has diameter ⌊k/3⌋ + ⌊(k - 1)/3⌋ when n = 2; and has diameter at most (k/4)(n + 1) for n ≥ 3 and k even, and at most (k/4)(n + 1) + n/4 for n ≥ 3 and k odd.

    4. We present an algorithm which, given a source node and a set of n - 1 target nodes in the (n, k)-star graph S_{n,k}, where all nodes are distinct, builds a collection of n - 1 node-disjoint paths, one from each target node to the source. The collection of paths output by the algorithm is such that each path has length at most 6k - 7, and the algorithm has time complexity O(k³n⁴).
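    For concreteness, here is a minimal sketch of the model behind these results: a node of the k-ary n-cube Q_n^k is an n-tuple over {0, ..., k - 1}, two nodes are adjacent iff they differ by ±1 (mod k) in exactly one coordinate, and the parity used in result 1 is the coordinate sum modulo 2. The function names are mine, and the adjacency generator as written assumes k ≥ 3 (for k = 2 the two directions coincide).

```python
# Illustrative sketch of the k-ary n-cube model; assumes k >= 3.
def neighbors(node, k):
    """Yield the 2n neighbors of a node, given as a tuple over {0,...,k-1}."""
    for i in range(len(node)):
        for d in (-1, 1):
            yield node[:i] + ((node[i] + d) % k,) + node[i + 1:]

def parity(node):
    """Parity of a node: the sum modulo 2 of its coordinates."""
    return sum(node) % 2

# Example in Q_2^4 (k = 4, n = 2):
print(sorted(neighbors((0, 3), 4)))   # [(0, 0), (0, 2), (1, 3), (3, 3)]
print(parity((0, 3)))                 # 1
```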

    The Quantum Adiabatic Algorithm applied to random optimization problems: the quantum spin glass perspective

    Among the various algorithms designed to exploit the specific properties of quantum computers with respect to classical ones, the quantum adiabatic algorithm is a versatile proposal for finding the minimal value of an arbitrary cost function (the ground state energy). Random optimization problems provide a natural testbed on which to compare its efficiency with that of classical algorithms. These problems correspond to mean field spin glasses that have been extensively studied in the classical case. This paper reviews recent analytical works that extended these studies to incorporate the effect of quantum fluctuations, and also presents some original results in this direction.
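    For orientation, the setup is the standard adiabatic interpolation (textbook material for the field, not notation taken from the review itself): the system is prepared in the ground state of an easy driver Hamiltonian H_0 and slowly steered toward the problem Hamiltonian H_P that encodes the cost function.

```latex
% Standard adiabatic interpolation (illustrative; notation assumed).
\[
  H(s) \;=\; (1-s)\,H_0 \;+\; s\,H_P, \qquad s = t/T \in [0,1].
\]
% The adiabatic theorem keeps the system near the instantaneous ground state
% provided the total run time T is large compared (up to matrix-element
% factors) to the inverse square of the minimal spectral gap:
\[
  T \;\gg\; \max_{s \in [0,1]} \frac{1}{\Delta(s)^{2}},
  \qquad \Delta(s) = E_1(s) - E_0(s).
\]
```

    How this gap closes for random optimization problems is what connects the algorithm's run time to the spin glass analysis discussed in the review.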

    Statistical Physics of Hard Optimization Problems

    Optimization is fundamental in many areas of science, from computer science and information theory to engineering and statistical physics, as well as biology and the social sciences. It typically involves a large number of variables and a cost function depending on these variables. Optimization problems in the NP-complete class are particularly difficult: it is believed that, in the most difficult cases, the number of operations required to minimize the cost function is exponential in the system size. However, even for an NP-complete problem, the instances arising in practice may in fact be easy to solve. The principal question we address in this thesis is: how can we recognize whether an NP-complete constraint satisfaction problem is typically hard, and what are the main reasons for this? We adopt approaches from the statistical physics of disordered systems, in particular the cavity method, developed originally to describe glassy systems. We describe new properties of the space of solutions in two of the most studied constraint satisfaction problems - random satisfiability and random graph coloring. We suggest a relation between the existence of so-called frozen variables and the algorithmic hardness of a problem. Based on these insights, we introduce a new class of problems, which we call "locked" constraint satisfaction problems, whose statistical description is easily solvable but which are, from the algorithmic point of view, even more challenging than canonical satisfiability.

    A counterexample to Thiagarajan's conjecture on regular event structures

    We provide a counterexample to a conjecture by Thiagarajan (1996 and 2002) that regular event structures correspond exactly to event structures obtained as unfoldings of finite 1-safe Petri nets. The same counterexample is used to disprove a closely related conjecture by Badouel, Darondeau, and Raoult (1999) that domains of regular event structures with bounded ♮-cliques are recognizable by finite trace automata. Event structures, trace automata, and Petri nets are fundamental models in concurrency theory. There exist nice interpretations of these structures as combinatorial and geometric objects. Namely, from a graph theoretical point of view, the domains of prime event structures correspond exactly to median graphs; from a geometric point of view, these domains are in bijection with CAT(0) cube complexes. A necessary condition for both conjectures to be true is that domains of regular event structures (with bounded ♮-cliques) admit a regular nice labeling. To disprove these conjectures, we describe a regular event domain (with bounded ♮-cliques) that does not admit a regular nice labeling. Our counterexample is derived from an example by Wise (1996 and 2007) of a nonpositively curved square complex whose universal cover is a CAT(0) square complex containing a particular plane with an aperiodic tiling. We prove that other counterexamples to Thiagarajan's conjecture arise from aperiodic 4-way deterministic tile sets of Kari and Papasoglu (1999) and Lukkarila (2009). On the positive side, using breakthrough results by Agol (2013) and Haglund and Wise (2008, 2012) from geometric group theory, we prove that Thiagarajan's conjecture is true for regular event structures whose domains occur as principal filters of hyperbolic CAT(0) cube complexes which are universal covers of finite nonpositively curved cube complexes.

    Adaptive Wavelet Methods for Inverse Problems: Acceleration Strategies, Adaptive Rothe Method and Generalized Tensor Wavelets

    In general, inverse problems can be described as the task of inferring conclusions about the cause u from given observations y of its effect. This can be described as the inversion of an operator equation K(u) = y, which is assumed to be ill-posed or ill-conditioned. To arrive at a meaningful solution in this setting, regularization schemes need to be applied. One of the most important regularization methods is so-called Tikhonov regularization: as an approximation to the unknown truth u, one considers the minimizer v of the sum of the data error K(v) - y (in a certain norm) and a weighted penalty term F(v). The development of efficient schemes for computing these minimizers is a field of ongoing research and a central task of this thesis. Most computation schemes for v are based on some generalized gradient descent approach. For problems with weighted ℓp-norm penalty terms this typically leads to iterated soft shrinkage methods. Without additional assumptions the convergence of these iterations is only guaranteed for subsequences, and even then only to stationary points; in general, stationary points of the minimization problem do not have any regularization properties. Moreover, the basic iterated soft shrinkage algorithm is known to converge very slowly in practice. This is critical, as each iteration step includes the application of the nonlinear operator K and the adjoint of its derivative, which may in itself already be numerically demanding. This thesis is concerned with the development of strategies for the fast computation of solutions of inverse problems with provable convergence rates, in particular the application and generalization of efficient numerical schemes for the treatment of the arising nonlinear operator equations. The first result of this thesis is a general acceleration strategy for the iterated soft thresholding iteration, based on a decreasing strategy for the weights of the penalty term. The new method converges with linear rate to a global minimizer. A very important class of inverse problems are parameter identification problems for partial differential equations. As a prototype for this class of problems, the identification of parameters in a specific parabolic partial differential equation is investigated: the arising operators are analyzed, the applicability of Tikhonov regularization is proven, and the parameters in a simplified test equation are reconstructed. The parabolic differential equations are solved by means of the so-called horizontal method of lines, also known as Rothe's method: the parabolic problem is interpreted as an abstract Cauchy problem, discretized in time by an implicit scheme, and combined with a discretization of the resulting system of spatial problems. This thesis investigates the application of adaptive discretization schemes to solve the spatial subproblems. Such methods realize highly nonuniform discretizations and therefore tend to require far fewer degrees of freedom than classical discretization schemes. To ensure the convergence of the resulting inexact Rothe method, a rigorous convergence proof is given. In particular, the application of implementable, asymptotically optimal adaptive methods based on wavelet bases is considered, and an upper bound is derived for the degrees of freedom of the overall scheme that are needed to adaptively approximate the solution up to a prescribed tolerance.
    As an important case study, the complexity of the approximate solution of the heat equation is investigated. To this end, a regularity result for the spatial equations that arise in the Rothe method is proven. The rate of convergence of asymptotically optimal adaptive methods deteriorates with the spatial dimension of the problem; this is often called the curse of dimensionality. One way to avoid this problem is to consider tensor wavelet discretizations, which lead to dimension-independent convergence rates. However, the classical tensor wavelet construction is limited to domains with simple product geometry. Therefore, in this thesis, a generalized tensor wavelet basis is constructed that spans a range of Sobolev spaces over a domain with fairly general geometry. The construction is based on the application of extension operators to appropriate local bases on subdomains that form a non-overlapping domain decomposition. The best m-term approximation of functions in the new generalized tensor product basis converges at a rate that is independent of the spatial dimension of the domain. For two- and three-dimensional polytopes it is shown that the solution of Poisson-type problems satisfies the required regularity condition. Numerical tests show that the dimension-independent rate is indeed realized in practice.
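    For reference, here is a minimal sketch of the basic iterated soft thresholding iteration in the simplest setting: a linear operator K and a weighted ℓ1 penalty. This is the unaccelerated baseline, not the thesis's decreasing-weights scheme, and the step-size choice is the standard one.

```python
# Illustrative sketch: basic ISTA for min_v 0.5*||K v - y||^2 + alpha*||v||_1
# with a linear operator K; the thesis's accelerated variant (decreasing
# penalty weights across iterations) is not reproduced here.
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(K, y, alpha, iterations=500):
    L = np.linalg.norm(K, 2) ** 2     # Lipschitz constant of the data-term gradient
    v = np.zeros(K.shape[1])
    for _ in range(iterations):
        grad = K.T @ (K @ v - y)      # gradient of 0.5*||K v - y||^2
        v = soft_threshold(v - grad / L, alpha / L)
    return v
```

    Each step applies K and its adjoint once, which is exactly why the slow convergence of the plain iteration is so costly when those applications are expensive, as noted above.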

    Understanding and Enhancing CDCL-based SAT Solvers

    Modern conflict-driven clause-learning (CDCL) Boolean satisfiability (SAT) solvers routinely solve formulas from industrial domains with millions of variables and clauses, despite the Boolean satisfiability problem being NP-complete and widely regarded as intractable in general. At the same time, very small crafted or randomly generated formulas are often infeasible for CDCL solvers. A commonly proposed explanation is that these solvers somehow exploit the underlying structure inherent in industrial instances. A better understanding of the structure of Boolean formulas not only enables improvements to modern SAT solvers, but also lends insight into why solvers perform well or poorly on certain types of instances. Further, examining solvers through the lens of these underlying structures can help to distinguish the behavior of different solving heuristics, both in theory and in practice. The first issue we address relates to the representation of SAT formulas. A given Boolean satisfiability problem can be represented in arbitrarily many ways, and the type of encoding can have significant effects on SAT solver performance; in some cases, a direct encoding to SAT may not be the best choice. We introduce a new system that integrates SAT solving with computer algebra systems (CAS) to address representation issues for several graph-theoretic problems. We use this system to improve the bounds on several finitely verified conjectures related to graph-theoretic problems, and we demonstrate how our approach is more appropriate for these problems than other off-the-shelf SAT-based tools. For more typical SAT formulas, a better understanding of their underlying structural properties, and of how they relate to SAT solving, can deepen our understanding of SAT. We perform a large-scale evaluation of many popular structural measures of formulas, such as community structure, treewidth, and backdoors. We investigate how these parameters correlate with CDCL solving time, and whether they can effectively be used to distinguish formulas from different domains. We demonstrate how these measures can be used to understand the behavior of solvers during search. A common theme is that the solver exhibits locality during search through the lens of these underlying structures, and that the choice of solving heuristic can greatly influence this locality. We posit that this local behavior of modern SAT solvers is crucial to their performance. The remaining contributions dive deeper into two new measures of SAT formulas. We first consider a simple measure, denoted "mergeability", which characterizes the proportion of input clause pairs that can resolve and merge. We develop a formula generator that takes as input a seed formula and creates a sequence of increasingly mergeable formulas while maintaining many of the properties of the original formula. Experiments over randomly generated industrial-like instances suggest that mergeability correlates strongly and negatively with CDCL solving time, i.e., as the mergeability of formulas increases, the solving time decreases, particularly for unsatisfiable instances. Our final contribution considers whether one of the aforementioned measures, namely backdoor size, is influenced by solver heuristics in theory. Starting from the notion of learning-sensitive (LS) backdoors, we consider various extensions of LS backdoors that incorporate different branching heuristics and restart policies.
    We introduce learning-sensitive-with-restarts (LSR) backdoors and show that, when backjumping is disallowed, LSR backdoors may be exponentially smaller than LS backdoors. We further demonstrate that the size of LSR backdoors depends on the learning scheme used during search. Finally, we present new algorithms to compute upper bounds on LSR backdoors that intrinsically rely upon restarts and can be computed with a single run of a SAT solver. We empirically demonstrate that this often produces smaller backdoors than previous approaches to computing LS backdoors.
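    As a concrete illustration, the sketch below computes one natural reading of mergeability: the fraction of resolvable clause pairs (exactly one complementary literal pair, so the resolvent is not a tautology) that also share at least one further literal and hence produce a merge. This is my interpretation for illustration only; the thesis's exact definition and normalization may differ.

```python
# Illustrative sketch of a mergeability-style measure; the exact definition in
# the thesis may differ. Clauses are tuples of nonzero ints (DIMACS-style
# literals, negative = negated).
from itertools import combinations

def mergeability(clauses):
    resolvable = mergeable = 0
    for c1, c2 in combinations(clauses, 2):
        s1, s2 = set(c1), set(c2)
        clashes = {lit for lit in s1 if -lit in s2}
        if len(clashes) == 1:           # resolves; resolvent is not a tautology
            resolvable += 1
            if (s1 & s2) - clashes:     # a shared literal besides the clashing pair
                mergeable += 1
    return mergeable / resolvable if resolvable else 0.0

# (x1 v x2) and (-x1 v x2) resolve on x1 and merge on x2:
print(mergeability([(1, 2), (-1, 2), (2, 3)]))   # 1.0
```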

    Research in progress and other activities of the Institute for Computer Applications in Science and Engineering

    This report summarizes research conducted at the Institute for Computer Applications in Science and Engineering (ICASE) in applied mathematics and computer science during the period April 1, 1993 through September 30, 1993. The major categories of the current ICASE research program are: (1) applied and numerical mathematics, including numerical analysis and algorithm development; (2) theoretical and computational research in fluid mechanics in selected areas of interest to LaRC, including acoustics and combustion; (3) experimental research in transition, turbulence, and aerodynamics involving LaRC facilities and scientists; and (4) computer science.