
    Distance-regular graphs

    This is a survey of distance-regular graphs. We present an introduction to distance-regular graphs for the reader who is unfamiliar with the subject, and then give an overview of some developments in the area since the monograph 'BCN' [Brouwer, A.E., Cohen, A.M., Neumaier, A., Distance-Regular Graphs, Springer-Verlag, Berlin, 1989] was written. Comment: 156 pages.
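    For the reader meeting the subject here for the first time, the defining property is easy to state; the following is the standard formulation (notation as in BCN):

        A connected graph $\Gamma$ of diameter $d$ is distance-regular if there exist
        integers $b_i$ ($0 \le i \le d-1$) and $c_i$ ($1 \le i \le d$) such that for any
        two vertices $u, v$ at distance $i$, the vertex $v$ has exactly $c_i$ neighbours
        at distance $i-1$ from $u$ and exactly $b_i$ neighbours at distance $i+1$ from $u$.
        The sequence $\{b_0, b_1, \ldots, b_{d-1};\, c_1, c_2, \ldots, c_d\}$ is the
        intersection array of $\Gamma$.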

    Euclidean representations and substructures of distance-regular graphs


    The history of degenerate (bipartite) extremal graph problems

    This paper is a survey on Extremal Graph Theory, primarily focusing on the case when one of the excluded graphs is bipartite. We give an introduction to this field and describe many important results, methods, problems, and constructions. Comment: 97 pages, 11 figures, many problems. This is the preliminary version of our survey presented at Erdős 100. In this version 2, only a citation was completed.
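    A representative result in the degenerate (bipartite) case is the Kővári-Sós-Turán bound; writing $\mathrm{ex}(n, L)$ for the maximum number of edges of an $n$-vertex graph containing no copy of $L$, it states

        $$ \mathrm{ex}(n, K_{s,t}) = O\!\left(n^{2 - 1/s}\right) \qquad (t \ge s \ge 2), $$

    and this order is known to be sharp, for example, for $K_{2,2} = C_4$, where $\mathrm{ex}(n, C_4) = \left(\tfrac{1}{2} + o(1)\right) n^{3/2}$.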

    LIPIcs, Volume 261, ICALP 2023, Complete Volume

    LIPIcs, Volume 261, ICALP 2023, Complete Volume

    Communication-Avoiding Algorithms for a High-Performance Hyperbolic PDE Engine

    The study of waves has always been an important subject of research. Earthquakes, for example, have a direct impact on the daily lives of millions of people, while gravitational waves reveal insight into the composition and history of the Universe. These physical phenomena, despite traditionally being tackled by different fields of physics, have in common that they are modelled the same way mathematically: as a system of hyperbolic partial differential equations (PDEs). The ExaHyPE project (“An Exascale Hyperbolic PDE Engine”) translates this similarity into a software engine that can be quickly adapted to simulate a wide range of hyperbolic PDEs. ExaHyPE’s key idea is that the user only specifies the physics, while the engine takes care of the parallelisation and the interplay of the underlying numerical methods. Consequently, a first simulation code for a new hyperbolic PDE can often be realised within a few hours, a task that traditionally takes weeks, months, or even years for researchers starting from scratch.

    My main contribution to ExaHyPE is the development of the core infrastructure. This comprises the development and implementation of ExaHyPE’s solvers and adaptive mesh refinement procedures, its MPI+X parallelisation, and high-level aspects of ExaHyPE’s application-tailored code generation, which allows ExaHyPE to be adapted to model many different hyperbolic PDE systems.

    Like any high-performance computing code, ExaHyPE has to tackle the challenges of the coming exascale computing era, notably network communication latencies and the growing memory wall. In this thesis, I propose memory-efficient realisations of ExaHyPE’s solvers that avoid data movement, together with a novel task-based MPI+X parallelisation concept that hides network communication behind computation in dynamically adaptive simulations.
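    The user-specifies-the-physics split can be illustrated with a minimal sketch in Python for a 1D model problem; the names below (BurgersPhysics, rusanov_step) are illustrative assumptions and do not reflect ExaHyPE's actual API:

        import numpy as np

        class BurgersPhysics:
            """User-side code: only the PDE definition (here 1D inviscid Burgers)."""
            def flux(self, q):
                return 0.5 * q * q
            def max_eigenvalue(self, q):
                return np.abs(q)

        def rusanov_step(physics, q, dx, dt):
            """Engine-side code: a generic first-order finite-volume update that
            works for any object exposing flux() and max_eigenvalue()."""
            qm, qp = q[:-1], q[1:]                    # states left/right of each interface
            smax = np.maximum(physics.max_eigenvalue(qm), physics.max_eigenvalue(qp))
            f = 0.5 * (physics.flux(qm) + physics.flux(qp)) - 0.5 * smax * (qp - qm)
            qnew = q.copy()
            qnew[1:-1] -= dt / dx * (f[1:] - f[:-1])  # conservative update, interior cells
            return qnew

        x = np.linspace(0.0, 1.0, 201)
        q = np.sin(2.0 * np.pi * x)                   # initial condition
        dx = x[1] - x[0]
        for _ in range(100):
            q = rusanov_step(BurgersPhysics(), q, dx, dt=0.4 * dx)

    The update loop never inspects the PDE beyond flux() and max_eigenvalue(), so swapping in a different hyperbolic system touches only the physics class.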

    The Application of Multi-Attribute Optimisation as a Systems Engineering Tool in an Automotive CAE Environment

    Multi-Attribute Optimisation (MAO) is proposed as a tool for delivering high-value products within the systems engineering approach taken in the automotive industry. This work focuses on MAO methods that use Computer Aided Engineering (CAE) analyses to build a metamodel of system behaviour. A review of the literature and of current Jaguar Land Rover optimisation methods showed that the number of samples required to build a metamodel could be estimated from the number of input variables. The application of these estimation methods to a concept airbox design showed that this guidance may not be sufficient to fully capture the complexity of system behaviour in the metamodelling method. The use of both the number of input variables and their ranges is proposed as a new approach to the scaling of sample sizes.

    As a corollary to the question of the sample size required for accurate metamodelling, the sample size required to estimate the error was also examined. This showed that estimating the global error with additional samples may be impractical in the industrial context.

    CAE is an important input to the MAO process and must balance the efficiency and accuracy of the model to be suitable for application in the optimisation process. Accurate prediction of automotive attributes may require new CAE techniques such as multi-physics methods. To this end, a fluid-structure interaction assessment of the durability of internal fuel-tank components under slosh loading was examined. However, both the StarCD-Abaqus direct coupling and the Abaqus Coupled Eulerian-Lagrangian method proved unsuitable for this fuel-slosh application. Further work would be required to assess the suitability of other multi-physics methods in an MAO architecture. Application of the MAO method to an automotive airbox shows the potential for improving both product design and lead time. EThOS - Electronic Theses Online Service, United Kingdom.
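    The idea of scaling the number of samples with the number of input variables can be sketched in Python; the 10 * d rule of thumb and the cae_response() stand-in below are assumptions for illustration, not the methods used in this work:

        import numpy as np
        from scipy.stats import qmc
        from sklearn.gaussian_process import GaussianProcessRegressor

        def cae_response(x):
            # stand-in for an expensive CAE run (e.g. an airbox pressure-drop analysis)
            return np.sin(3.0 * x[:, 0]) + x[:, 1] ** 2 + 0.5 * x[:, 2]

        d = 3                                  # number of input variables
        n = 10 * d                             # rule of thumb: samples scale with d
        sampler = qmc.LatinHypercube(d=d, seed=0)
        X = qmc.scale(sampler.random(n), [0.0, -1.0, 0.0], [1.0, 1.0, 2.0])  # ranges
        y = cae_response(X)

        metamodel = GaussianProcessRegressor(normalize_y=True).fit(X, y)
        print(metamodel.predict(X[:3]))        # query the cheap surrogate, not the CAE

    Scaling n with the variable ranges as well as with d, as proposed above, would replace the fixed 10 * d line.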

    Modeling Dispersion of Radionuclides in the Turbulent Atmosphere

    In an effort to understand the assumptions and approximations involved in the physics on which atmospheric transport modeling (ATM) relies, we derived from first principles the Lagrangian turbulent velocity drift-diffusion model used by codes such as FLEXPART and HYSPLIT. We showed that the drift-diffusion model is a Langevin model representing the equation of motion for Lagrangian fluid particles based on the turbulent Navier-Stokes equation. That is, the incompressible turbulent Navier-Stokes equation is cast into the form of a stochastic differential equation (SDE) called the Langevin equation, which describes the turbulent velocity component of the Lagrangian particle trajectory. The drift coefficient depends on the Lagrangian time scale modeled using the Lagrangian velocity autocorrelation function, while the diffusion coefficient depends additionally on the Reynolds stress or velocity variance. This makes clear that the turbulent Navier-Stokes equation is the physical basis of the drift-diffusion model used by FLEXPART and HYSPLIT and shows what assumptions and approximations are made.

    In contrast to the particle-based Lagrangian models, the advection-diffusion (AD) equation physically represents a mass-conservation equation in a turbulent fluid and directly models the mean Eulerian concentration field by employing an eddy diffusivity hypothesis. The AD model is the basis for Gaussian plume model codes such as MACCS2, which use the Pasquill-Gifford semi-empirical turbulence model. We parametrically compared the FLEXPART drift-diffusion model to the Gaussian puff model using synthetic meteorological data, which showed significant discrepancies between the vertical or horizontal dispersion parameters for unstable or stable atmospheres, respectively. However, by modifying the FLEXPART turbulence model to simulate the Gaussian puff model dispersion parameters, we demonstrated much better agreement between the two models. On the other hand, the FLEXPART concentration profile dispersion generally agreed well with the Lagrangian particle ensemble dispersion, validating to some extent the relationship between the Lagrangian and Eulerian turbulence parameters.

    In addition to the complexities associated with physically modeling turbulence, we have demonstrated uncertainties associated with dry deposition, particle size distributions, radioactive decay chains, different meteorological data sets, virtual particle numbers, and mesoscale velocity fluctuations. We have performed studies on: local (100 km radius) and global scales, large (Fukushima) and small (DPRK) radionuclide (RN) emission sources, and particulate (volcanic ash) and gaseous species (Xe). Volcanic ash particulate transport simulations showed that it is necessary to use large numbers of particles per emission source, that the dry deposition model significantly reduces predicted atmospheric concentrations, and that this effect is more pronounced for larger particle sizes. When we examined the radioxenon emissions from the Fukushima Daiichi nuclear accident, we found that the meteorological data set chosen has a significant impact on the simulated RN concentrations at detectors as close as Takasaki, with variations up to four orders of magnitude. Additionally, our studies on DPRK weapons tests showed that the measured RN data is often very sparse and difficult to explain and attribute to a particular source.
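    In the notation above, the Langevin form of the drift-diffusion model for one turbulent velocity component can be written compactly; this is the generic stationary, homogeneous form (inhomogeneous turbulence adds drift-correction terms, and the exact coefficients used by FLEXPART and HYSPLIT differ in detail):

        $$ dv_i = -\frac{v_i}{T_{L,i}}\, dt + \sqrt{\frac{2 \sigma_{v_i}^2}{T_{L,i}}}\; dW_t, $$

    where $T_{L,i}$ is the Lagrangian time scale entering the drift coefficient and $\sigma_{v_i}^2$ is the velocity variance (Reynolds stress) entering the diffusion coefficient. The Eulerian counterpart described above is the advection-diffusion equation for the mean concentration $\bar{c}$ with eddy diffusivity tensor $\mathbf{K}$:

        $$ \frac{\partial \bar{c}}{\partial t} + \bar{\mathbf{u}} \cdot \nabla \bar{c} = \nabla \cdot \left( \mathbf{K}\, \nabla \bar{c} \right). $$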
    These studies all demonstrated the many uncertainties and difficulties associated with ATM of RNs when comparing simulations to real data. Thus, we show that ATMs should rely as closely as possible on the underlying physics to model RN dispersion in the turbulent atmosphere accurately. In particular, one should use turbulence models based closely on the turbulent Navier-Stokes equation, accurate and high-resolution meteorological data, and physics-based deposition and transmutation models. PhD thesis, Nuclear Engineering & Radiological Sciences, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/168032/1/krupcale_1.pd