
    The effect of turbulence in the built environment on wind turbine aerodynamics

    Urban Wind Energy is a niche of Wind Energy whose share of the tumultuous DIY energy market continues to grow. It consists of positioning wind turbines within the built environment. The idea is to co-locate energy production and consumption so as to increase the efficiency of the system, since the energy losses and costs of transporting, converting and delivering energy are virtually zeroed. Many enthusiasts advocate the environmental advantages of the technology and argue that wider diffusion might overcome its flaws as a newborn technology. However, no urban wind application to date is known to have provided more than a derisory amount of ‘clean’ energy. The reason for this failure lies in the way research in urban wind energy is conducted: it is mostly concerned either with improving the efficiency of wind energy converters or with assessing the available wind resource. Very few works have considered the technical implications of placing a wind energy converter, one of the most complex aerodynamic devices, in a complex inflow such as that found in built environments, about whose turbulence very little is known. In fact, it has long been acknowledged that the power output, the fatigue limit state and the total service-life downtime of a wind turbine are well correlated with turbulence at the inflow.
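
    The correlation between inflow turbulence and turbine performance is commonly quantified through the turbulence intensity, TI = σ_u / Ū, the ratio of the standard deviation of the streamwise wind speed to its mean over an averaging window. The sketch below is not taken from the paper; the synthetic record and the 10-minute window are assumptions borrowed from common wind-energy practice.

        import numpy as np

        def turbulence_intensity(u, samples_per_window):
            """Turbulence intensity per averaging window: TI = std(u) / mean(u)."""
            n_windows = len(u) // samples_per_window
            u = u[: n_windows * samples_per_window].reshape(n_windows, samples_per_window)
            return u.std(axis=1, ddof=1) / u.mean(axis=1)

        # Illustrative synthetic record: 1 Hz samples, mean 6 m/s with gusty fluctuations.
        rng = np.random.default_rng(0)
        u = 6.0 + 1.5 * rng.standard_normal(3600)             # one hour of data
        ti = turbulence_intensity(u, samples_per_window=600)  # 10-minute windows
        print(ti)  # built-environment sites typically show far higher TI than open terrain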

    Continuous Electrode Inertial Electrostatic Confinement Fusion

    The NIAC Phase I project on Inertial Electrostatic Confinement was a continuation of early stage research that was funded by an NSTRF. The student on the project, Andrew Chap, was funded by the NSTRF from Fall 2013 through the Summer of 2017, and then was funded on the NIAC through the completion of his PhD. A significant amount of work targeting the plasma confinement physics was the focus of his NSTRF, and over the course of that effort he developed a number of analyses and computational tools that leveraged GPU parallelization. A detailed discussion of these models can be found in his dissertation, which has been included as Appendix D in this report. As a requirement for the NSTRF, Andrew's full dissertation was submitted at the end of the program. Having developed the computational tools, a substantial amount of simulation and analysis leveraging those tools was conducted during the Fall of 2017, under the auspices of the NIAC funded research. Much of this work targeted optimization of the confinement fields, investigating their structure and the possible advantages of having them be time-varying. The results of these simulations can also be found in Appendix D. One of the main results from this research is that the density of ions electrostatically confined within the system can indeed be increased by several orders of magnitude by optimizing the radial potential distribution, and by dynamically varying these fields to maintain compressed ion bunches. An electron population can also be confined within the core by a static radial cusped magnetic field, which helps to support a greater ion density within the core. The issue with the confinement mechanism is that as the ion densities are increased toward fusion-relevant levels, the electrostatic forces generated by the confined electron population become so great that the ions are no longer energetic enough to leave the device core. As their excursions into the outer channels are diminished, the mechanism that is used to maintain their non-thermal velocity distributions becomes ineffective, and eventually the ions become fully confined within the core, where they thermalize. A possible fix to the problem comes from discarding the active ion control (a main pillar of the concept) but retaining the structure of the permanent magnet confinement of the electron population. Such cusped field confinement has been the focus of other IEC approaches (e.g. Polywell), but the high transparency of the permanent magnet structure lends itself to better ion extraction and power conversion (a second pillar of the concept). The question then becomes whether any influence on the ion evolution within the core can be achieved to slow the thermalization of the ions. Such approaches have been studied in highly idealized analytic models, but face major criticisms within the literature. While this is a possible path forward, the uncertainty in the approach did not warrant committing NIAC Phase II resources to investigating the concept at this time.
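
    The electrostatic confinement discussed above can be illustrated with a toy model: a single ion oscillating in an assumed parabolic radial potential well, integrated with a leapfrog scheme. This is a minimal sketch only, not the project's GPU-parallelized code; the well depth, core radius and deuteron parameters are illustrative assumptions.

        import numpy as np

        # Toy model: single ion in an assumed parabolic radial well phi(r) = PHI0 * (r/R_CORE)^2.
        Q_ION  = 1.602e-19   # charge of a singly ionized species [C]
        M_ION  = 3.344e-27   # deuteron mass [kg]
        PHI0   = 1.0e3       # well depth [V] (assumed)
        R_CORE = 0.05        # core radius [m] (assumed)

        def radial_e_field(r):
            """E = -d(phi)/dr for the assumed parabolic well."""
            return -2.0 * PHI0 * r / R_CORE**2

        def leapfrog(r0, v0, dt, n_steps):
            """Kick-drift-kick leapfrog integration of the radial ion motion."""
            r, v = r0, v0
            history = []
            for _ in range(n_steps):
                v += 0.5 * dt * (Q_ION / M_ION) * radial_e_field(r)  # half kick
                r += dt * v                                          # drift
                v += 0.5 * dt * (Q_ION / M_ION) * radial_e_field(r)  # half kick
                history.append(r)
            return np.array(history)

        trajectory = leapfrog(r0=0.04, v0=0.0, dt=1e-9, n_steps=5000)
        print(trajectory.min(), trajectory.max())  # the ion oscillates inside the well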

    Intermittency and Self-Organisation in Turbulence and Statistical Mechanics

    There is overwhelming evidence, from laboratory experiments, observations, and computational studies, that coherent structures can cause intermittent transport, dramatically enhancing it. A proper description of this intermittent phenomenon, however, is extremely difficult, requiring a new non-perturbative theory, such as a statistical description. Furthermore, multi-scale interactions are responsible for inevitably complex dynamics in strongly non-equilibrium systems, a proper understanding of which remains a major challenge in classical physics. As a remarkable consequence of multi-scale interaction, however, a quasi-equilibrium state (the so-called self-organisation) can be maintained. This Special Issue aims to present different theories of statistical mechanics for understanding this challenging multiscale problem in turbulence. The 14 contributions to this Special Issue focus on various aspects of intermittency, coherent structures, self-organisation, bifurcation and nonlocality. Given the ubiquity of turbulence, the contributions cover a broad range of systems, including laboratory fluids (channel flow, the von Kármán flow), plasmas (magnetic fusion), laser cavities, wind turbines, air flow around a high-speed train, the solar wind and industrial applications.
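
    Intermittency of the kind discussed in this issue is often diagnosed through the flatness (kurtosis) of velocity increments δu_r = u(x + r) − u(x), which rises well above the Gaussian value of 3 at small separations. The sketch below is a generic illustration of that diagnostic, not code from any contribution; the Gaussian test signal is a placeholder, so its flatness stays near 3.

        import numpy as np

        def increment_flatness(u, lags):
            """Flatness F(r) = <du_r^4> / <du_r^2>^2 of increments du_r = u[i+r] - u[i].

            A Gaussian (non-intermittent) signal gives F ~ 3; intermittent turbulence
            shows F growing well above 3 as the lag r decreases.
            """
            flatness = []
            for r in lags:
                du = u[r:] - u[:-r]
                flatness.append(np.mean(du**4) / np.mean(du**2) ** 2)
            return np.array(flatness)

        # Placeholder signal: Gaussian white noise, so the flatness should stay near 3.
        rng = np.random.default_rng(1)
        u = rng.standard_normal(100_000)
        print(increment_flatness(u, lags=[1, 4, 16, 64]))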

    A systems approach to analyze the robustness of infrastructure networks to complex spatial hazards

    Ph.D. Thesis. Infrastructure networks such as water supply systems, power networks, railway networks, and road networks provide essential services that underpin modern society’s health, wealth, security, and wellbeing. However, infrastructures are susceptible to damage and disruption caused by extreme weather events such as floods and windstorms. For instance, in 2007, extensive disruption was caused by floods affecting a number of electricity substations in the United Kingdom, resulting in an estimated damage of GBP£3.18bn (US$4bn). In 2017, Hurricane Harvey hit the Southern United States, causing an approximated US$125bn (GBP£99.35bn) in damage due to the resulting floods and high winds. The magnitude of these impacts is at risk of being compounded by the effects of Climate Change, which is projected to increase the frequency of extreme weather events. As a result, it is anticipated that an estimated US$3.7tn (GBP£2.9tn) in investment will be required, per year, to meet the expected need between 2019 and 2035. A key reason for the susceptibility of infrastructure networks to extreme weather events is the wide area that needs to be covered to provide essential services. For example, in the United Kingdom alone there are over 800,000 km of overhead electricity cables, suggesting that the footprint of infrastructure networks can be as extended as that of an entire country. These networks possess different spatial structures and attributes, as a result of their evolution over long timeframes, and respond to damage and disruption in different and complex ways. Existing approaches to understanding the impact of hazards on infrastructure networks typically either (i) use computationally expensive models, which are unable to support the investigation of enough events and scenarios to draw general insights, or (ii) use low complexity representations of hazards, with little or no consideration of their spatial properties. Consequently, this has limited the understanding of the relationship between spatial hazards, the spatial form and connectivity of infrastructure networks, and infrastructure reliability. This thesis investigates these aspects through a systemic modelling approach, applied to a synthetic and a real case study, to evaluate the response of infrastructure networks to spatially complex hazards against a series of robustness metrics. In the first case study, non-deterministic spatial hazards are generated by a fractal method which allows their spatial variability to be controlled, resulting in spatial configurations that closely resemble natural phenomena such as floods or windstorms. These hazards are then superimposed on a range of synthetic network layouts, which have spatial structures consistent with real infrastructure networks reported in the literature. Failure of network components is initially determined as a function of hazard intensity, and cascading failure of further components is also investigated. The performance of different infrastructure configurations is captured by an array of metrics which cover different aspects of robustness, ranging from the proneness to partitioning to the ability to process flows in the face of disruptions.
Whereas analyses to date have largely adopted low complexity representations of hazards, this thesis shows that consideration of a high complexity representation which includes hazard spatial variability can reduce the robustness of the infrastructure network by nearly 40%. A “small-world” network, in which each node is within a limited number of steps from any other node, is shown to be the most robust of all the modelled networks to the different structures of spatial hazard. The second case study uses real data to assess the robustness of a power supply network operating in the Hull region in the United Kingdom, which is split into high and low voltage lines. The spatial hazard is represented by a high-resolution wind gust model and tested under current and future climate scenarios. The analysis reveals how the high and low voltage lines interact with each other in the event of faults, which lines would benefit the most from increased robustness, and which are most exposed to cascading failures. The second case study also reveals the importance of the spatial footprint of the hazard relative to the location of the infrastructure, and how particular hazard patterns can affect low voltage lines, which are more often located in exposed areas at the edge of the network. The impact of Climate Change on windstorms is highly uncertain, although it could further reduce network robustness through more severe events. Overall, the two case studies provide important insights for infrastructure designers, asset managers, the academic sector, and practitioners in general. In the first case study, this thesis defines important design principles, such as the adoption of a small-world network layout, that can complement the traditional design drivers of demand, efficiency, and cost. In the second case study, this thesis lays out a methodology that can help identify assets requiring increased robustness and protection against cascading failures, resulting in more effectively prioritized infrastructure investments and adaptation plans.
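
    The first case study's workflow can be sketched generically: expose a network layout to a spatially correlated hazard field, remove components where the local intensity exceeds a failure threshold, and read off robustness as the surviving giant-component fraction. In the sketch below, a Watts-Strogatz graph stands in for the thesis's small-world layout, smoothed white noise stands in for its fractal hazard generator, and the grid placement and threshold are assumptions; networkx and NumPy are assumed available.

        import numpy as np
        import networkx as nx

        def correlated_hazard(shape, smoothing, rng):
            """Spatially correlated hazard intensity on a grid, via smoothed white noise."""
            field = rng.standard_normal(shape)
            kernel = np.ones(smoothing) / smoothing
            for axis in (0, 1):
                field = np.apply_along_axis(
                    lambda m: np.convolve(m, kernel, mode="same"), axis, field)
            return (field - field.min()) / (field.max() - field.min())  # rescale to [0, 1]

        def robustness(n_nodes=400, hazard_threshold=0.7, seed=0):
            rng = np.random.default_rng(seed)
            g = nx.watts_strogatz_graph(n_nodes, k=4, p=0.1, seed=seed)  # small-world layout
            side = int(np.ceil(np.sqrt(n_nodes)))                        # nodes placed on a grid
            hazard = correlated_hazard((side, side), smoothing=5, rng=rng)
            for node in list(g.nodes):
                row, col = divmod(node, side)
                if hazard[row, col] > hazard_threshold:                  # component fails
                    g.remove_node(node)
            if g.number_of_nodes() == 0:
                return 0.0
            giant = max(nx.connected_components(g), key=len)
            return len(giant) / n_nodes                                  # surviving fraction

        print(robustness())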

    Computational methods in string and field theory

    Thesis submitted in fulfilment of the requirements for the degree of Doctor of Philosophy, Faculty of Science, School of Physics, University of the Witwatersrand, Johannesburg, 2018. Like any field or topic of research, significant advancements can be made with increasing computational power - string theory is no exception. In this thesis, an analysis is performed within three areas: Calabi–Yau manifolds, cosmological inflation and applications of conformal field theory. Critical superstring theory is a ten-dimensional theory. Four of the dimensions correspond to the spacetime dimensions we see in nature. To account for the remaining six, Calabi–Yau manifolds are used. Knowing how the space of Calabi–Yau manifolds is distributed gives valuable insight into the compactification process. Using computational modeling and statistical analysis, previously unseen patterns in the distribution of the Hodge numbers are found. In particular, the frequencies exhibit striking new patterns - pseudo-Voigt and Planckian distributions with high confidence and exact fits for many substructures. The patterns indicate typicality within the landscape of Calabi–Yau manifolds of various dimensions. Inflation describes the exponential expansion of the universe after the Big Bang. Finding a successful theory of inflation centres on building a potential for the inflationary field such that it satisfies the slow-roll conditions. The numerous ways this can be done, coupled with the fact that each model is highly sensitive to initial conditions, mean that an analytic approach is often not feasible. To bypass this, a statistical analysis of a landscape of thousands of random single- and multi-field polynomial potentials is performed. Investigation of the single-field case illustrates a window in which the potentials satisfy the slow-roll conditions. When there are two scalar fields, it is found that the probability depends on the choice of distribution for the coefficients: a uniform distribution yields a 0.05% probability of finding a suitable minimum in the random potential, whereas a maximum entropy distribution yields a 0.1% probability. The benefit of developing computational tools extends into the interdisciplinary study between conformal field theory and the theory of how wildfires propagate. Using the two-dimensional Ising model as a basis of inspiration, computational methods of analyzing how fires propagate provide a new tool set which aids both in modeling large-scale wildfires and in describing the emergent scale-invariant structure of these fires. By computing the two-point and three-point correlations of fire occurrences in particular regions within Botswana and Kazakhstan, it is shown that the proposed model gives excellent fits, with the model amplitude being directly proportional to the total burn area of a particular year.
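
    The slow-roll analysis mentioned above amounts to checking, for a candidate potential V(φ), that ε = ½ (V′/V)² and |η| = |V″/V| are both below unity (in reduced Planck units). The sketch below applies that check to a randomly drawn quartic potential as a stand-in for the thesis's ensembles; the coefficient range and field grid are assumptions.

        import numpy as np

        def slow_roll_window(coeffs, phi):
            """Return field values where eps = 0.5*(V'/V)^2 and |eta| = |V''/V| are both < 1."""
            V = np.polyval(coeffs, phi)
            dV = np.polyval(np.polyder(coeffs), phi)
            d2V = np.polyval(np.polyder(coeffs, 2), phi)
            ok = V > 0                                  # only consider positive potential
            eps = np.full_like(phi, np.inf)
            eta = np.full_like(phi, np.inf)
            eps[ok] = 0.5 * (dV[ok] / V[ok]) ** 2
            eta[ok] = d2V[ok] / V[ok]
            return phi[(eps < 1.0) & (np.abs(eta) < 1.0)]

        rng = np.random.default_rng(2)
        coeffs = rng.uniform(-1.0, 1.0, size=5)         # random quartic: c4*phi^4 + ... + c0
        phi = np.linspace(-20.0, 20.0, 4001)
        window = slow_roll_window(coeffs, phi)
        print(window.size, "grid points satisfy the slow-roll conditions")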

    Seabed biotope characterisation based on acoustic sensing

    The background to this thesis is Australia’s Oceans Policy, which aims to develop an integrated and ecosystem-based approach to planning and management. An important part of this approach is the identification of natural regions in regional marine planning, for example by establishing marine protected areas for biodiversity conservation. These natural regions will need to be identified on a range of scales for different planning and management actions. The scale of the investigation reported in this thesis is applicable to spatial management at the 1 km to 10 km scale and monitoring impacts at the 10s of m to 1 km biotope scale. Seabed biotopes represent a combination of seabed physical attributes and related organisms. To map seabed biotopes in deep water, remote sensing using a combination of acoustic, optical and physical sensors is investigated. The hypothesis tested in this thesis is that acoustic bathymetry and backscatter data from a Simrad EM1002 multi-beam sonar (MBS) can be used to infer (act as a surrogate of) seabed biotopes. To establish a link between the acoustic data and seabed biotopes, the acoustic metrics are compared to the physical attributes of the seabed in terms of its substrate and geomorphology at the 10s of m to 1 km scale using optical and physical sensors. At this scale the relationship between the dominant faunal functional groups and both the physical attributes of the seabed and the acoustic data is also tested. These tests use data collected from 14 regions and 2 biomes to the south of Australia during a voyage in 2000. Based on 62 reference sites of acoustic, video and physical samples, a significant relationship between ecological seabed terrain types and acoustic backscatter and bathymetry was observed. These ecological terrain types of soft-smooth, soft-rough, hard-smooth and hard-rough were chosen as they were the most relevant to the biota in their ability to attach on or burrow into the seabed. A seabed scattering model supported this empirical relationship and the overall shape of the backscatter to incidence angle relationship for soft and hard seabed types. The correlation between acoustic data (backscatter mean and standard deviation) and the visual and physical samples was most consistent between soft-smooth and hard-rough terrain types for a large range of incidence angles (16° to 70°). Using phenomenological backscatter features segmented into 10 common incidence angle bins from -70° to 70°, the length resolution of the data decreased to 0.55 times depth. The decreased resolution was offset by improved near normal incidence (0° to 30°) seabed type discrimination, with cross-validation error reducing from 32% to 4%. A significant relationship was also established between the acoustic data and the dominant functional groups of fauna. Faunal functional groups were based on ecological function, feeding mode and substrate preference, with 8 out of the 10 groups predicted with 70% correctness by the four acoustically derived ecological terrain types. Restricting the terrain classification to simple soft and hard using the acoustic backscatter data improved the prediction of three faunal functional groups to greater than 80%.
Combining the acoustic bathymetry and backscatter data, an example region, Everard Canyon, was interpreted at a range of spatial scales and the ability to predict the preferred habitat of a stalked crinoid was demonstrated. Seabed terrain of soft and hard was predicted from the acoustic backscatter data referenced to a common seabed incidence angle of 40°. This method of analysis was selected due to its combined properties of high spatial resolution, consistent discrimination between terrain types over the widest range of incidence angles, and consistent data quality checking at varying ranges. Based in part on the research reported in this thesis, a mid-depth Simrad EM300 multibeam sonar was purchased for use in Australian waters. A sampling strategy is outlined to map all offshore waters, with priority given to the 100 m to 1500 m depth range.
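
    The terrain classification step described above can be sketched generically as supervised classification of per-site backscatter statistics (mean and standard deviation per incidence-angle bin) scored by cross-validation. The example below uses synthetic placeholder features and a k-nearest-neighbour classifier from scikit-learn; it is illustrative only and is not the method or data used in the thesis.

        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.neighbors import KNeighborsClassifier

        rng = np.random.default_rng(3)
        N_SITES, N_ANGLE_BINS = 62, 10   # 62 reference sites, 10 incidence-angle bins

        # Placeholder features: backscatter mean and std per angle bin -> 20 features per site.
        labels = rng.integers(0, 4, size=N_SITES)  # 0..3: soft-smooth .. hard-rough
        means = -30.0 + 3.0 * labels[:, None] + rng.normal(0, 1.5, (N_SITES, N_ANGLE_BINS))
        stds = 1.0 + 0.5 * labels[:, None] + rng.normal(0, 0.3, (N_SITES, N_ANGLE_BINS))
        features = np.hstack([means, stds])

        clf = KNeighborsClassifier(n_neighbors=3)
        scores = cross_val_score(clf, features, labels, cv=5)
        print("cross-validation accuracy:", scores.mean())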

    Non-Linear Lattice

    The development of mathematical techniques, combined with new possibilities of computational simulation, has greatly broadened the study of non-linear lattices, one of the most refined and interdisciplinary themes in mathematical physics. This Special Issue focuses mainly on state-of-the-art advances concerning the many facets of non-linear lattices, from the theoretical to the more applied. Non-linear and discrete systems play a key role across the whole range of physical experience, from macroscopic phenomena to condensed matter, up to some discrete models of space-time.
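
    A canonical example of a non-linear lattice is the Fermi-Pasta-Ulam-Tsingou (FPUT) alpha-chain, in which nearest-neighbour springs carry a quadratic non-linearity. The sketch below is a generic illustration, not drawn from any paper in the Special Issue; the chain length, coupling and time step are arbitrary choices.

        import numpy as np

        def fput_forces(x, alpha):
            """Force on each mass from nearest-neighbour springs with quadratic non-linearity."""
            xp = np.concatenate(([0.0], x, [0.0]))             # fixed boundary masses
            left = xp[1:-1] - xp[:-2]
            right = xp[2:] - xp[1:-1]
            return (right - left) + alpha * (right**2 - left**2)

        def integrate(n=32, alpha=0.25, dt=0.05, steps=20000):
            x = np.sin(np.pi * np.arange(1, n + 1) / (n + 1))  # excite the lowest mode
            v = np.zeros(n)
            for _ in range(steps):
                v += 0.5 * dt * fput_forces(x, alpha)          # half kick
                x += dt * v                                    # drift
                v += 0.5 * dt * fput_forces(x, alpha)          # half kick
            return x, v

        x, v = integrate()
        print("kinetic energy after integration:", 0.5 * np.sum(v**2))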