198 research outputs found

    Development, Implementation, and Optimization of a Modern, Subsonic/Supersonic Panel Method

    In the early stages of aircraft design, engineers consider many different design concepts, examining the trade-offs between component arrangements and sizes, thrust and power requirements, etc. Because so many designs are considered, fast simulation tools are preferred at this stage; accuracy is secondary. A common simulation tool for early design and analysis is the panel method. Panel methods were first developed in the 1950s and 1960s with the advent of modern computers. Despite being reasonably accurate and very fast, their development was abandoned in the late 1980s in favor of more complex and accurate simulation methods. The panel methods developed in the 1980s are still in use by aircraft designers today because of their accuracy and speed; however, they are cumbersome to use and limited in applicability. The purpose of this work is to reexamine panel methods in a modern context. In particular, it focuses on the application of panel methods to supersonic aircraft (aircraft that fly faster than the speed of sound). Various aspects of the panel method are discussed, including the distributions of the unknown flow variables on the surface of the aircraft and how to solve for these unknowns efficiently. Trade-offs between alternative formulations are examined and recommendations given. This work also brings together, clarifies, and condenses much of the previously published panel-method literature so as to assist future developers of panel methods.
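
    To illustrate the basic idea behind panel methods (distributing elementary singularity solutions over a body's surface and solving a linear system for their strengths), the sketch below implements a low-order source panel method for a 2D circular cylinder in incompressible flow. The cylinder geometry, the point-source approximation of the off-diagonal influence terms, and all numerical values are assumptions for illustration; this is not the subsonic/supersonic formulation developed in the work.

```python
import numpy as np

N = 60                                          # number of panels
theta = np.linspace(0.0, 2.0 * np.pi, N + 1)
nodes = np.c_[np.cos(theta), np.sin(theta)]     # panel end points on a unit circle
mid = 0.5 * (nodes[:-1] + nodes[1:])            # control points at panel midpoints
tang = nodes[1:] - nodes[:-1]
length = np.linalg.norm(tang, axis=1)
normal = np.c_[tang[:, 1], -tang[:, 0]] / length[:, None]   # outward unit normals

V_inf = np.array([1.0, 0.0])                    # freestream velocity

# Influence matrix A[i, j]: normal velocity induced at control point i by a
# unit-strength source on panel j (the self-term of a flat source panel is 1/2).
A = np.empty((N, N))
for i in range(N):
    r = mid[i] - mid                            # vectors from panel midpoints to point i
    r2 = np.einsum('ij,ij->i', r, r)
    A[i] = length * (r @ normal[i]) / (2.0 * np.pi * np.where(r2 == 0.0, 1.0, r2))
    A[i, i] = 0.5

# Flow tangency: freestream plus induced normal velocity must vanish on the body.
sigma = np.linalg.solve(A, -normal @ V_inf)

# For a closed body the net source strength should be (numerically) zero.
print("net source strength:", np.sum(sigma * length))
```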

    Beam scanning by liquid-crystal biasing in a modified SIW structure

    A fixed-frequency beam-scanning 1D antenna based on Liquid Crystals (LCs) is designed for application in 2D scanning with lateral alignment. The 2D array environment imposes full decoupling of adjacent 1D antennas, which often conflicts with the LC requirement of DC biasing; the proposed design accommodates both. The LC medium is placed inside a Substrate Integrated Waveguide (SIW), modified to work as a Groove Gap Waveguide, with radiating slots etched in the upper broad wall so that the structure radiates as a Leaky-Wave Antenna (LWA). This allows effective application of the DC bias voltage needed for tuning the LCs. At the same time, the RF field remains laterally confined, making it possible to place several antennas in parallel and achieve 2D beam scanning. The design is validated by simulation employing the actual properties of a commercial LC medium.
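
    The scanning mechanism described above can be illustrated with a back-of-the-envelope calculation: tuning the LC permittivity changes the phase constant of the guided mode, which steers the leaky-wave beam according to sin(theta) ~ beta/k0. The sketch below assumes a simple TE10-like dispersion relation for the LC-filled guide; the frequency, effective waveguide width, and permittivity range are placeholder values, not the parameters of the antenna in the paper.

```python
import numpy as np

c0 = 299_792_458.0        # speed of light (m/s)
f = 28e9                  # operating frequency (Hz), assumed
a = 3.4e-3                # effective waveguide width (m), assumed
k0 = 2 * np.pi * f / c0

for eps_r in np.linspace(2.5, 3.3, 5):          # assumed LC tuning range
    beta_sq = eps_r * k0**2 - (np.pi / a) ** 2  # TE10-like dispersion
    if beta_sq <= 0:
        print(f"eps_r = {eps_r:.2f}: below cutoff")
    elif beta_sq >= k0**2:
        print(f"eps_r = {eps_r:.2f}: slow wave, no leaky radiation")
    else:
        theta = np.degrees(np.arcsin(np.sqrt(beta_sq) / k0))
        print(f"eps_r = {eps_r:.2f}: beam at {theta:5.1f} deg from broadside")
```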

    Advances in scalable learning and sampling of unnormalised models

    We study probabilistic models that are known incompletely, up to an intractable normalising constant. To reap the full benefit of such models, two tasks must be solved: learning and sampling. These two tasks have been subject to decades of research, and yet significant challenges still persist. Traditional approaches often suffer from poor scalability with respect to dimensionality and model complexity, generally rendering them inapplicable to models parameterised by deep neural networks. In this thesis, we contribute a new set of methods for addressing this scalability problem. We first explore the problem of learning unnormalised models. Our investigation begins with a well-known learning principle, Noise-contrastive Estimation, whose underlying mechanism is that of density-ratio estimation. By examining why existing density-ratio estimators scale poorly, we identify a new framework, telescoping density-ratio estimation (TRE), that can learn ratios between highly dissimilar densities in high-dimensional spaces. Our experiments demonstrate that TRE not only yields substantial improvements for the learning of deep unnormalised models, but can do the same for a broader set of tasks including mutual information estimation and representation learning. Subsequently, we explore the problem of sampling unnormalised models. A large literature on Markov chain Monte Carlo (MCMC) can be leveraged here, and in continuous domains, gradient-based samplers such as the Metropolis-adjusted Langevin algorithm (MALA) and Hamiltonian Monte Carlo are excellent options. However, there has been substantially less progress in MCMC for discrete domains. To advance this subfield, we introduce several discrete Metropolis-Hastings samplers that are conceptually inspired by MALA, and demonstrate their strong empirical performance across a range of challenging sampling tasks.
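
    A minimal one-dimensional sketch of the telescoping idea: waymark distributions are built between the noise and the data, a logistic classifier is fitted between each consecutive pair, and the per-bridge logits are summed to recover the overall log density-ratio. The Gaussian data, the linear-combination waymarks, and the quadratic-logit classifier are simplifying assumptions for illustration, not the deep-network setup used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000
x_noise = rng.normal(0.0, 1.0, n)          # samples from q = N(0, 1)
x_data = rng.normal(4.0, 1.0, n)           # samples from p = N(4, 1)

alphas = np.linspace(0.0, 1.0, 6)          # waymark coefficients, q -> p
waymarks = [np.sqrt(1 - a**2) * x_noise + a * x_data for a in alphas]

def fit_logistic_ratio(x0, x1, iters=25):
    """Fit a logit f(x) = w0 + w1*x + w2*x^2 separating x1 (label 1) from x0
    by Newton/IRLS; with balanced classes, f(x) approximates log p1(x)/p0(x)."""
    x = np.concatenate([x0, x1])
    y = np.concatenate([np.zeros_like(x0), np.ones_like(x1)])
    X = np.c_[np.ones_like(x), x, x**2]
    w = np.zeros(3)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        H = X.T @ (X * (p * (1 - p))[:, None]) + 1e-6 * np.eye(3)  # Hessian + ridge
        w += np.linalg.solve(H, X.T @ (y - p))                     # Newton step
    return lambda xq: np.c_[np.ones_like(xq), xq, xq**2] @ w

# One classifier ("bridge") per pair of consecutive waymarks.
bridges = [fit_logistic_ratio(waymarks[k], waymarks[k + 1])
           for k in range(len(waymarks) - 1)]

x_test = np.array([0.0, 2.0, 4.0])
est = sum(f(x_test) for f in bridges)                 # telescoped log-ratio
true = -0.5 * ((x_test - 4.0) ** 2 - x_test ** 2)     # exact log N(4,1)/N(0,1)
print("estimated:", np.round(est, 2), " true:", np.round(true, 2))
```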

    LIPIcs, Volume 261, ICALP 2023, Complete Volume

    LIPIcs, Volume 261, ICALP 2023, Complete Volume

    1-D broadside-radiating leaky-wave antenna based on a numerically synthesized impedance surface

    A newly developed deterministic numerical technique for the automated design of metasurface antennas is applied here for the first time to the design of a 1-D printed Leaky-Wave Antenna (LWA) for broadside radiation. The surface impedance synthesis process does not require any a priori knowledge of the impedance pattern, and starts from a mask constraint on the desired far field and practical bounds on the unit-cell impedance values. The designed reactance surface for broadside radiation exhibits a non-conventional patterning; this highlights the merit of using an automated design process for a problem well known to be challenging for analytical methods. The antenna is physically implemented as an array of metal strips with varying gap widths, and simulation results show very good agreement with the predicted performance.
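
    As a small illustration of the kind of far-field mask constraint such a synthesis starts from, the sketch below evaluates the pattern of an assumed 1-D leaky-wave aperture (attenuation constant alpha, phase constant beta near zero for broadside) with a discrete radiation integral and checks it against a simple sidelobe mask. It has no connection to the actual synthesis algorithm of the paper, and all numerical values are placeholders.

```python
import numpy as np

c0 = 299_792_458.0
f = 17e9                                   # operating frequency (Hz), assumed
k0 = 2 * np.pi * f / c0
L = 0.25                                   # aperture length (m), assumed
alpha, beta = 0.02 * k0, 0.0               # leaky-wave constants; beta ~ 0 gives broadside

x = np.linspace(0.0, L, 400)
aperture = np.exp(-alpha * x) * np.exp(-1j * beta * x)    # 1-D aperture field

theta = np.radians(np.linspace(-90.0, 90.0, 361))
# Discrete radiation integral: F(theta) = sum_x E(x) exp(j k0 x sin(theta))
F = (aperture[None, :] * np.exp(1j * k0 * np.outer(np.sin(theta), x))).sum(axis=1)
pattern_db = 20.0 * np.log10(np.abs(F) / np.abs(F).max())

# A simple far-field mask: beam at broadside, sidelobes below -13 dB beyond +/-10 deg.
outside = np.abs(np.degrees(theta)) > 10.0
print("peak direction: %.1f deg" % np.degrees(theta[np.argmax(pattern_db)]))
print("worst sidelobe outside the mask window: %.1f dB" % pattern_db[outside].max())
print("mask satisfied:", pattern_db[outside].max() <= -13.0)
```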

    Large Scale Kernel Methods for Fun and Profit

    Kernel methods are among the most flexible classes of machine learning models with strong theoretical guarantees. Wide classes of functions can be approximated arbitrarily well with kernels, while fast convergence and learning rates have been formally shown to hold. Exact kernel methods are known to scale poorly with increasing dataset size, and we believe that one of the factors limiting their usage in modern machine learning is the lack of scalable and easy-to-use algorithms and software. The main goal of this thesis is to study kernel methods from the point of view of efficient learning, with particular emphasis on large-scale data, but also on low-latency training and user efficiency. We improve the state of the art for scaling kernel solvers to datasets with billions of points using the Falkon algorithm, which combines random projections with fast optimization. Running it on GPUs, we show how to fully utilize available computing power for training kernel machines. To boost the ease of use of approximate kernel solvers, we propose an algorithm for automated hyperparameter tuning. By minimizing a penalized loss function, a model can be learned together with its hyperparameters, reducing the time needed for user-driven experimentation. In the setting of multi-class learning, we show that – under stringent but realistic assumptions on the separation between classes – a wide set of algorithms needs far fewer data points than in the more general setting (without assumptions on class separation) to reach the same accuracy. The first part of the thesis develops a framework for efficient and scalable kernel machines. This raises the question of whether our approaches can be used successfully in real-world applications, especially compared to alternatives based on deep learning, which are often deemed hard to beat. The second part investigates this question on two main applications, chosen because of the paramount importance of having an efficient algorithm. First, we consider the problem of instance segmentation of images taken from the iCub robot. Here Falkon is used as part of a larger pipeline, but the efficiency afforded by our solver is essential to ensure smooth human-robot interactions. In the second instance, we consider time-series forecasting of wind speed, analysing the relevance of different physical variables to the predictions themselves. We investigate different schemes to adapt i.i.d. learning to the time-series setting. Overall, this work aims to demonstrate, through novel algorithms and examples, that kernel methods are up to computationally demanding tasks, and that there are concrete applications in which their use is warranted and more efficient than that of other, more complex, and less theoretically grounded models.
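
    The sketch below shows the Nyström-type approximation at the heart of solvers such as Falkon: the kernel expansion is restricted to a small set of randomly chosen centers, reducing the linear system from n x n to m x m. It uses a Gaussian kernel, synthetic 1-D data, and a direct dense solve in place of the preconditioned conjugate-gradient iterations and GPU kernels that Falkon actually relies on; all values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, lam, sigma = 5000, 100, 1e-6, 0.1     # data size, Nystrom centers, ridge, bandwidth

X = rng.uniform(-1, 1, (n, 1))
y = np.sin(4 * np.pi * X[:, 0]) + 0.1 * rng.normal(size=n)
centers = X[rng.choice(n, m, replace=False)]          # random Nystrom centers

def gaussian_kernel(A, B, sigma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

K_nm = gaussian_kernel(X, centers, sigma)             # n x m cross-kernel
K_mm = gaussian_kernel(centers, centers, sigma)       # m x m center kernel

# Nystrom kernel ridge regression: minimize ||K_nm a - y||^2 + n*lam * a^T K_mm a
A = K_nm.T @ K_nm + n * lam * K_mm + 1e-10 * np.eye(m)
alpha = np.linalg.solve(A, K_nm.T @ y)

X_test = np.linspace(-1, 1, 5)[:, None]
pred = gaussian_kernel(X_test, centers, sigma) @ alpha
print(np.c_[X_test, pred, np.sin(4 * np.pi * X_test)])   # input, prediction, truth
```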

    LIPIcs, Volume 274, ESA 2023, Complete Volume

    LIPIcs, Volume 274, ESA 2023, Complete Volume

    A Distributed Data-Parallel PyTorch Implementation of the Distributed Shampoo Optimizer for Training Neural Networks At-Scale

    Shampoo is an online and stochastic optimization algorithm belonging to the AdaGrad family of methods for training neural networks. It constructs a block-diagonal preconditioner where each block consists of a coarse Kronecker product approximation to full-matrix AdaGrad for each parameter of the neural network. In this work, we provide a complete description of the algorithm as well as the performance optimizations that our implementation leverages to train deep networks at scale in PyTorch. Our implementation enables fast multi-GPU distributed data-parallel training by distributing the memory and computation associated with blocks of each parameter via PyTorch's DTensor data structure and performing an AllGather primitive on the computed search directions at each iteration. This major performance enhancement enables us to achieve at most a 10% performance reduction in per-step wall-clock time compared with standard diagonal-scaling-based adaptive gradient methods. We validate our implementation by performing an ablation study on training ImageNet ResNet50, demonstrating Shampoo's superiority over standard training recipes with minimal hyperparameter tuning. Comment: 38 pages, 8 figures, 5 tables.
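
    A minimal single-parameter sketch of the preconditioner described above: for a matrix-shaped gradient G, left and right Kronecker-factor statistics L and R are accumulated and the search direction is L^(-1/4) G R^(-1/4). The toy quadratic loss and all hyperparameters are assumptions for illustration, and the blocking, DTensor sharding, and AllGather communication of the distributed PyTorch implementation are omitted.

```python
import numpy as np

def matrix_power(S, p, floor=1e-12):
    """Power of a symmetric PSD matrix via eigendecomposition."""
    vals, vecs = np.linalg.eigh(S)
    return (vecs * np.maximum(vals, floor) ** p) @ vecs.T

rng = np.random.default_rng(0)
m, n, lr, eps = 4, 3, 0.1, 1e-4
W = rng.normal(size=(m, n))                 # a single matrix-shaped parameter
L = eps * np.eye(m)                         # left Kronecker-factor statistics
R = eps * np.eye(n)                         # right Kronecker-factor statistics

for step in range(100):
    G = 2 * W                               # gradient of a toy loss ||W||_F^2
    L += G @ G.T                            # accumulate left statistics
    R += G.T @ G                            # accumulate right statistics
    W -= lr * matrix_power(L, -0.25) @ G @ matrix_power(R, -0.25)

print("final ||W||_F:", np.linalg.norm(W))
```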

    Ocean Modelling in Support of Operational Ocean and Coastal Services

    Operational oceanography is maturing rapidly. Its capabilities are being noticeably enhanced in response to a growing demand for regularly updated ocean information. Today, several core forecasting and monitoring services, such as the Copernicus Marine ones focused on global and regional scales, are well-established. The sustained availability of oceanography products has favored the proliferation of specific downstream services devoted to coastal monitoring and forecasting. Ocean models are a key component of these operational oceanographic systems (especially in a context marked by the extensive application of dynamical downscaling approaches), and progress in ocean modeling is certainly a driver for the evolution of these services. The goal of this Special Issue is to publish research papers on ocean modeling that benefit the model applications supporting existing operational oceanographic services. This Special Issue is addressed to an audience with interests in physical oceanography, and especially in its operational applications. The focus is on the numerical modeling needed for better forecasts in marine environments and on seamless modeling approaches that simulate processes from global to coastal scales.

    The development of tools and guidelines for surfing resource management

    Surfing is a mainstream pastime and competitive sport in many countries and provides a full range of economic, social, physical, and mental health benefits. Maintaining the integrity of surf breaks has proven to be a challenge, with a litany of degraded or destroyed surfing locations worldwide. This is attributed to a deficiency in expertise and experience in implementing surf science and management within governing authorities, associated consultants, or stakeholder groups, combined with a lack of value recognition and identification. This work considers how surf breaks as coastal resources could be better managed. A literature review of technical reports, published articles, statutory instruments, evidence, and consents, along with interactive stakeholder workshops and surveys to identify key considerations, is combined with complex numerical modelling and machine learning methods to develop tools for effective surf break management. In Aotearoa New Zealand, a surf break is described in policy as having various geophysical components in the vicinity of locations where surfing takes place and the areas offshore. Given the wide range of benefits associated with surfing, and the complexities of managing a natural resource, albeit in some cases anthropogenically modified, the term ‘surfing resource’ was established and defined as a major outcome of this work and as a step in the process of developing a set of Management Guidelines for Surfing Resources (the Guidelines). The Guidelines, a world first, consider which aspects of the environment are most important to surfing resource management; provide direction, as implementable steps, to authorities and proponents of activities in the coastal environment that can impact surfing resources; and include identification and monitoring strategies as well as a novel risk assessment framework underpinned by a surf break’s sensitivity as a function of geomorphological composition. The Guidelines are supported by research streams that required field data collection and monitoring system development, numerical modelling, and machine learning to improve our understanding of surf break functionality and/or improve management strategies. This work emphasises the role of bathymetric features outside the surf zone that contribute to surfing wave quality, and the value of establishing swell corridors for management purposes. An automated system has been developed to monitor the key surfing wave quality indicator of peel angle through both space and time. Effective surfing resource management requires a holistic, inclusive, case-by-case approach that may involve cultural, social, and geophysical assessment, and it is best implemented proactively through the identification of surfing resources and the establishment of environmental baselines.
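
    As a hypothetical illustration of the peel-angle monitoring mentioned above (not the system developed in this work), the sketch below estimates a peel angle from a tracked breakpoint position over time, assuming the classical relation sin(theta) = c / Vp between the wave celerity c and the ground speed Vp of the breaking point, with c taken from the shallow-water approximation sqrt(g h). The depth and the example track are placeholder values.

```python
import numpy as np

g = 9.81
depth = 2.0                                     # water depth at breaking (m), assumed
c = np.sqrt(g * depth)                          # shallow-water wave celerity (m/s)

# Example breakpoint track: (t [s], x [m], y [m]) as produced by an imaging system.
track = np.array([[0.0, 0.0, 0.0],
                  [1.0, 4.5, 5.0],
                  [2.0, 9.2, 10.1]])

dt = np.diff(track[:, 0])
d = np.linalg.norm(np.diff(track[:, 1:], axis=0), axis=1)
Vp = d / dt                                     # peel (breakpoint) speed over ground
theta = np.degrees(np.arcsin(np.clip(c / Vp, -1.0, 1.0)))
print("peel angle per interval (deg):", np.round(theta, 1))
```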