200 research outputs found

    Investigations on Improving Broadband Boundary Conditions in Gyrotron Interaction Modelling

    Get PDF
    Gyrotrons are microwave tubes capable of providing megawatt power at millimetre wavelengths. The microwave power is produced by the conversion of the kinetic energy of an electron beam into electromagnetic wave energy. Simulations of the beam-wave interaction in the gyrotron cavity are essential for gyrotron design, as well as for theoretical and experimental studies. In usual gyrotron operation the spectrum of the generated radiation is concentrated around the nominal frequency. For this reason, typical simulations consider only a narrow-band output spectrum (e.g. a few GHz of bandwidth compared with working frequencies in the range of 100-200 GHz). As a result, typical existing codes use a single-frequency radiation boundary condition for the generated electromagnetic field in the cavity; this condition is matched at only one frequency. However, two important aspects motivate an advanced formulation and implementation of the cavity boundary condition. Firstly, the occurrence of broadband effects (which may span several tens of GHz) in some cases, such as dynamic after-cavity interaction or modulation side-bands, requires a broadband boundary condition. Secondly, there are reflections from inside and outside of the gyrotron, which can only be considered in the simulation through a boundary condition with user-defined, frequency-dependent reflections. This master's thesis proposes an improved formulation of the broadband boundary condition in the self-consistent beam-wave interaction code Euridice. In the new formulation, two physical variables, the wave impedance and the axial wavenumber, are expanded in polynomial series in the frequency domain. Because the beam-wave interaction is simulated transiently in the time domain, the boundary condition must also be expressed in the time domain. This involves a non-trivial inverse Fourier transform, for which two solutions are proposed, tested, and validated. It is shown that, with the newly developed formulation, the existing matched boundary condition (which should yield zero reflection in the ideal case) can be improved by 15 dB even with a first-order polynomial series. Moreover, a user-defined, frequency-dependent complex reflection coefficient can be introduced, which was not possible with the previously existing boundary condition in Euridice.
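
    As a schematic of the general approach (not the exact Euridice formulation, and with signs depending on the Fourier convention): expanding the wave impedance to first order around the nominal frequency and writing the field as a slowly varying envelope turns multiplication by the frequency offset into a time derivative, so that the frequency-dependent boundary condition becomes a relation that is local in time,

        Z(\omega) \approx Z_0 + Z_1(\omega - \omega_0), \qquad f(z,t) = a(z,t)\,e^{-i\omega_0 t}, \qquad (\omega - \omega_0)\,\hat{a} \;\longleftrightarrow\; i\,\partial_t a,

        \text{so that} \quad Z(\omega)\,\hat{a} \;\longleftrightarrow\; Z_0\,a(z,t) + i Z_1\,\partial_t a(z,t).

    Higher-order terms of the polynomial series contribute correspondingly higher time derivatives of the envelope at the boundary; the same construction applies to the axial wavenumber.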

    Tree-AMP: Compositional Inference with Tree Approximate Message Passing

    Full text link
    We introduce Tree-AMP, standing for Tree Approximate Message Passing, a Python package for compositional inference in high-dimensional tree-structured models. The package provides a unifying framework to study several approximate message passing algorithms previously derived for a variety of machine learning tasks, such as generalized linear models, inference in multi-layer networks, matrix factorization, and reconstruction using non-separable penalties. For some models, the asymptotic performance of the algorithm can be theoretically predicted by the state evolution, and the entropy of the measurements estimated by the free entropy formalism. The implementation is modular by design: each module, which implements a factor, can be composed at will with other modules to solve complex inference tasks. The user only needs to declare the factor graph of the model; the inference algorithm, state evolution, and entropy estimation are fully automated. Comment: Source code available at https://github.com/sphinxteam/tramp and documentation at https://sphinxteam.github.io/tramp.doc
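
    As a self-contained toy illustration of the kind of algorithm the package unifies (this sketch deliberately does not use the tramp API; the model, sizes, and undamped iteration below are illustrative assumptions): approximate message passing for a linear model y = A x + noise with a Gaussian prior, including the Onsager correction term that distinguishes AMP from plain iterative shrinkage.

        import numpy as np

        # Toy AMP for y = A x + w with prior x_i ~ N(0, sx2) and noise N(0, sw2).
        # Illustrative stand-alone sketch, not the tramp package API.
        rng = np.random.default_rng(0)
        n, p = 500, 250
        sx2, sw2 = 1.0, 0.01

        A = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, p))   # i.i.d. entries, variance 1/n
        x_true = rng.normal(0.0, np.sqrt(sx2), size=p)
        y = A @ x_true + rng.normal(0.0, np.sqrt(sw2), size=n)

        x = np.zeros(p)      # current estimate
        z = y.copy()         # current residual
        eta_prime = 0.0      # average derivative of the denoiser (Onsager term)

        for it in range(30):
            # Residual with the Onsager memory term.
            z = y - A @ x + (p / n) * eta_prime * z
            tau2 = np.mean(z ** 2)              # effective noise variance of the pseudo-data
            r = x + A.T @ z                     # pseudo-data to be denoised
            shrink = sx2 / (sx2 + tau2)         # posterior-mean denoiser for a Gaussian prior
            x = shrink * r
            eta_prime = shrink

        print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))

    In tramp itself the prior, the linear channel, and the likelihood would each be separate modules composed through the declared factor graph, with state evolution and entropy estimation generated automatically.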

    Feasibility study of a microprocessor based oculometer system

    Get PDF
    Eliminating redundancy in the data, to maximize processing speed and minimize storage requirements, was an objective of a feasibility study of a microprocessor-based oculometer system that would be portable in size and flexible in use. The appropriate architectural design of the signal processor, improved optics, and the reduction of the size, weight, and power of the system were investigated. A flow chart is presented showing the strategy of the design. The simulation for developing microroutines for the high-speed algorithmic processor subsystem is discussed, as well as the Karhunen-Loeve transform technique for data compression.
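
    A minimal sketch of the Karhunen-Loeve transform used for data compression, as mentioned above; the sample dimensions and the number of retained components are illustrative assumptions, not values from the study.

        import numpy as np

        # Karhunen-Loeve transform (principal components) for data compression.
        # `frames` stands in for digitized oculometer samples: one row per observation,
        # generated here with only a few underlying degrees of freedom to mimic redundancy.
        rng = np.random.default_rng(1)
        latent = rng.normal(size=(1000, 4))
        mixing = rng.normal(size=(4, 64))
        frames = latent @ mixing + 0.1 * rng.normal(size=(1000, 64))

        mean = frames.mean(axis=0)
        centered = frames - mean
        cov = np.cov(centered, rowvar=False)       # 64 x 64 covariance matrix

        # Eigenvectors of the covariance matrix form the KL basis; keep the k
        # directions carrying the most variance.
        eigvals, eigvecs = np.linalg.eigh(cov)
        order = np.argsort(eigvals)[::-1]
        k = 8                                      # illustrative 8:1 compression
        basis = eigvecs[:, order[:k]]

        coeffs = centered @ basis                  # compressed representation
        reconstructed = coeffs @ basis.T + mean    # approximate reconstruction

        print("reconstruction MSE:", np.mean((frames - reconstructed) ** 2))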

    Neighborhood Defined Feature Selection Strategy for Improved Face Recognition in Different Sensor Modalities

    Get PDF
    A novel feature selection strategy for improved face recognition in images with variations due to illumination conditions, facial expressions, and partial occlusions is presented in this dissertation. A hybrid face recognition system that uses feature maps of phase congruency and modular kernel spaces is developed. Phase congruency provides a measure that is independent of the overall magnitude of a signal, making it invariant to variations in image illumination and contrast. A novel modular kernel spaces approach is developed and implemented on the phase congruency feature maps. Smaller sub-regions from a predefined neighborhood within the phase congruency images of the training samples are merged to obtain a large set of features. These features are then projected into higher-dimensional spaces using kernel methods. The unique modularization procedure developed in this research takes into consideration that facial variations in a real-world scenario are confined to local regions. The additional pixel dependencies that are considered based on their importance provide additional information for classification. This procedure also helps in robust localization of the variations, further improving classification accuracy. The effectiveness of the new feature selection strategy has been demonstrated by employing it in two specific applications: face authentication with low-resolution cameras and face recognition using multiple sensors (visible and infrared). The face authentication system uses low-quality images captured by a web camera. The optical sensor of the web camera is very sensitive to environmental illumination variations. It is observed that the feature selection policy overcomes both the facial and the environmental variations. A methodology based on multiple training images and clustering is also incorporated to overcome the additional challenges of computational efficiency and the subject's non-involvement. A multi-sensor image fusion based face recognition methodology that uses the proposed feature selection technique is presented in this dissertation. Research studies have indicated that complementary information from different sensors helps in improving recognition accuracy compared to individual modalities. A decision-level fusion methodology is also developed, which provides better performance compared to individual as well as data-level fusion modalities. The new decision-level fusion technique is also robust to registration discrepancies, which is a very important factor in operational scenarios. Research work is progressing to use the new face recognition technique on multi-view images by employing independent systems for separate views and integrating the results with an appropriate voting procedure.
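
    A rough sketch of the modular kernel-space idea described above, with the phase congruency computation omitted and all block, neighborhood, and kernel parameters assumed for illustration: sub-regions from a local neighborhood of each feature map are merged into one feature vector and then projected with kernel PCA.

        import numpy as np
        from sklearn.decomposition import KernelPCA

        # Hypothetical sketch: split each (precomputed) phase congruency map into
        # blocks, merge blocks from a local neighborhood, project into a kernel space.
        def modular_features(pc_map, block=16, neighborhood=2):
            h, w = pc_map.shape
            blocks = np.array([pc_map[i:i + block, j:j + block].ravel()
                               for i in range(0, h - block + 1, block)
                               for j in range(0, w - block + 1, block)])
            # Merge `neighborhood` consecutive blocks into one local feature vector.
            merged = [np.concatenate(blocks[k:k + neighborhood])
                      for k in range(0, len(blocks) - neighborhood + 1, neighborhood)]
            return np.concatenate(merged)

        rng = np.random.default_rng(2)
        pc_maps = rng.random((40, 64, 64))          # stand-ins for phase congruency maps
        X = np.array([modular_features(m) for m in pc_maps])

        kpca = KernelPCA(n_components=20, kernel="rbf", gamma=1e-3)
        X_kernel = kpca.fit_transform(X)            # features in the kernel space
        print(X_kernel.shape)                       # (40, 20), ready for a classifier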

    An efficient and exact noncommutative quantum Gibbs sampler

    Full text link
    Preparing thermal and ground states is an essential quantum algorithmic task for quantum simulation. In this work, we construct the first efficiently implementable and exactly detailed-balanced Lindbladian for Gibbs states of arbitrary noncommutative Hamiltonians. Our construction can also be regarded as a continuous-time quantum analog of the Metropolis-Hastings algorithm. To prepare the quantum Gibbs state, our algorithm invokes Hamiltonian simulation for a time proportional to the mixing time and the inverse temperature β, up to polylogarithmic factors. Moreover, the gate complexity reduces significantly for lattice Hamiltonians as the corresponding Lindblad operators are (quasi-)local (with radius ∼β) and only depend on local Hamiltonian patches. Meanwhile, purifying our Lindbladians yields a temperature-dependent family of frustration-free "parent Hamiltonians", prescribing an adiabatic path for the canonical purified Gibbs state (i.e., the Thermal Field Double state). These favorable features suggest that our construction is the ideal quantum algorithmic counterpart of classical Markov chain Monte Carlo sampling. Comment: 39 pages, 4 figures
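
    For orientation, the classical algorithm whose continuous-time quantum analog is constructed here is Metropolis-Hastings sampling of a Gibbs distribution proportional to exp(-beta H(x)); a minimal classical sketch, with a toy energy function chosen purely for illustration:

        import numpy as np

        # Classical Metropolis-Hastings sampling of p(x) ~ exp(-beta * H(x)).
        rng = np.random.default_rng(3)

        def H(x):
            return 0.5 * x ** 2 + np.cos(3.0 * x)   # toy energy landscape

        beta = 2.0                                  # inverse temperature
        x, samples = 0.0, []
        for step in range(50_000):
            x_new = x + rng.normal(scale=0.5)       # symmetric random-walk proposal
            # Detailed balance: accept with probability min(1, exp(-beta * (H_new - H_old))).
            if rng.random() < np.exp(-beta * (H(x_new) - H(x))):
                x = x_new
            samples.append(x)

        samples = np.array(samples[5_000:])         # discard burn-in
        print("mean energy:", H(samples).mean())

    The quantum construction replaces this discrete accept/reject chain with an exactly detailed-balanced Lindbladian whose fixed point is the Gibbs state of a noncommutative Hamiltonian.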

    A Review On The Comparative Roles Of Mathematical Softwares In Fostering Scientific And Mathematical Research

    Get PDF
    Mathematical software tools used in science, research, and engineering follow a continuing developmental trend. Various subdivisions of mathematical software are available in these areas, but the research intent, or the problem under study, determines the choice of software required for mathematical analysis. Since these software applications have their limitations, the features present in one type are often augmented or complemented in revised versions of the originals in order to increase their ability to multi-task. For example, dynamic mathematics software was designed to integrate the advantages of different types of existing mathematics software, as an improved tool for handling numerical problems with advanced mathematical content (advanced simulation). In recent times, science institutions have adopted the use of computer codes in solving mathematics-related problems. The treatment of complex numerical analyses with the aid of mathematical software is currently used in all branches of the physical, biological, and social sciences. However, the programming language of mathematics-related software varies with its functionality. Much valuable research has been compromised within the confines of unacceptable but expedient standards because of insufficient understanding of the services that the available variety of mathematical software can offer. In developing countries, some mathematical software packages such as MATLAB and MathCAD are very common. A comparative review of selected mathematical software was undertaken in order to understand the advantages and limitations of the available packages.

    Variable Fractional Digital Delay Filter on Reconfigurable Hardware

    Get PDF
    This thesis describes a design for a variable fractional delay (VFD) finite impulse response (FIR) filter implemented on reconfigurable hardware. Fractionally delayed signals are required for several audio-based applications, including echo cancellation and musical signal analysis. Traditionally, VFD FIR filters have been implemented using a fixed structure in software based upon the order of the filter. This fixed structure restricts the range of valid fractional delay values permitted by the filter. The proposed design implements an order-scalable FIR filter, permitting fractionally delayed signals of widely varying integer sizes. Furthermore, the proposed design builds upon the traditional Lagrange interpolator FIR filter, using either a software-based or a hardware-based coefficient computational unit in reconfigurable hardware to update the FIR coefficients in real time. Traditional Lagrange interpolator FIR filters have permitted only a fixed fractional delay. However, by leveraging today's (2012) low-cost, high-performance reconfigurable hardware, an FIR-based fractional delay filter was created that permits varying fractional delay. A software/hardware hybrid VFD filter was prototyped using the Xilinx System Generator toolkit. The resulting real-time VFD FIR filter was tested using System Generator, as well as Xilinx ISE and ModelSim. M.S., Computer Engineering -- Drexel University, 2012
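
    A brief sketch of the Lagrange-interpolation fractional-delay coefficients on which the design builds; the filter order and delay value below are arbitrary examples, and the thesis performs this coefficient update in the software- or hardware-based computational unit on the FPGA rather than in Python.

        import numpy as np

        def lagrange_fd_coeffs(order, delay):
            """FIR coefficients of a Lagrange-interpolation fractional-delay filter.

            `delay` is the total delay in samples (integer plus fractional part);
            accuracy is best when it lies near the middle of the filter (order / 2).
            """
            n = np.arange(order + 1)
            h = np.ones(order + 1)
            for k in range(order + 1):
                mask = n != k
                h[mask] *= (delay - k) / (n[mask] - k)
            return h

        # Example: delay a 440 Hz tone by 3.37 samples with a 7th-order filter.
        fs = 8000.0
        t = np.arange(0, 0.02, 1.0 / fs)
        x = np.sin(2 * np.pi * 440.0 * t)

        h = lagrange_fd_coeffs(order=7, delay=3.37)
        y = np.convolve(x, h)[: len(x)]             # fractionally delayed output

        # A variable fractional delay only requires recomputing `h` at run time.
        print(h.round(4))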

    Boundary integral equation methods for superhydrophobic flow and integrated photonics

    Get PDF
    This dissertation presents fast integral equation methods (FIEMs) for solving two important problems encountered in practical engineering applications. The first problem involves the mixed boundary value problem in two-dimensional Stokes flow, which appears commonly in computational fluid mechanics. This problem is particularly relevant to the design of microfluidic devices, especially those involving superhydrophobic (SH) flows over surfaces made of composite solid materials with alternating solid portions, grooves, or air pockets, leading to enhanced slip. The second problem addresses waveguide devices in two dimensions, governed by the Helmholtz equation with Dirichlet conditions imposed on the boundary. This problem serves as a model for photonic devices, and the systematic investigation focuses on the scattering matrix formulation, in both analysis and numerical algorithms. This research represents an important step towards achieving efficient and accurate simulations of more complex photonic devices with straight waveguides as input and output channels, and Maxwell's equations in three dimensions as the governing equations. Numerically, both problems pose significant challenges for the following reasons. First, the problems are typically defined on infinite domains, necessitating the use of artificial boundary conditions when employing volumetric methods such as finite difference or finite element methods. Second, the solutions often exhibit singular behavior, characterized by corner singularities in the geometry or abrupt changes in boundary conditions, even when the underlying geometry is smooth. Analyzing the exact nature of these singularities at corners or transition points is extremely difficult. Existing methods often resort to adaptive refinement, resulting in large linear systems, numerical instability, low accuracy, and extensive computational costs. Under the hood, fast integral equation methods serve as the common engine for solving both problems. First, by utilizing the constant-coefficient nature of the governing partial differential equations (PDEs) in both problems and the availability of free-space Green's functions, the solutions are represented via proper combinations of layer potentials. By construction, the representation satisfies the governing PDEs within the volumetric domain and the appropriate conditions at infinity. The combination of boundary conditions and jump relations of the layer potentials then leads to boundary integral equations (BIEs) with unknowns defined only on the boundary. This reduces the dimensionality of the problem by one in the solve phase. Second, the kernels of the layer potentials often contain logarithmic, singular, and hypersingular terms. High-order kernel-split quadratures are employed to handle these weakly singular, singular, and hypersingular integrals for self-interactions, as well as nearly weakly singular, nearly singular, and nearly hypersingular integrals for near-interactions and close evaluations. Third, the recursively compressed inverse preconditioning (RCIP) method is applied to treat the unknown singularity in the density around corners and transition points. Finally, the celebrated fast multipole method (FMM) is applied to accelerate the scheme in both the solve and evaluation phases. In summary, high-order numerical schemes of linear complexity have been developed to solve both problems, often with ten digits of accuracy, as illustrated by extensive numerical examples.
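
    As a much-simplified illustration of the boundary-integral workflow (a Laplace Dirichlet problem on a smooth curve with a plain trapezoidal Nystrom discretization; the kernel-split quadrature, RCIP, and FMM components of the dissertation are all omitted, and the PDE is neither Stokes nor Helmholtz):

        import numpy as np

        # Interior Laplace Dirichlet problem on an ellipse via a double-layer
        # representation u = D[sigma] and the second-kind BIE (-1/2 I + D) sigma = f.
        N = 256
        t = 2 * np.pi * np.arange(N) / N
        a, b = 2.0, 1.0                                   # ellipse semi-axes
        x, y = a * np.cos(t), b * np.sin(t)               # boundary nodes
        dx, dy = -a * np.sin(t), b * np.cos(t)            # r'(t)
        ddx, ddy = -a * np.cos(t), -b * np.sin(t)         # r''(t)
        sp = np.hypot(dx, dy)                             # speed |r'(t)|
        nx, ny = dy / sp, -dx / sp                        # outward unit normal
        kappa = (dx * ddy - dy * ddx) / sp**3             # signed curvature

        f = np.exp(x) * np.cos(y)                         # Dirichlet data of a harmonic function

        # Double-layer kernel K_ij = n_j . (x_i - x_j) / (2 pi |x_i - x_j|^2),
        # smooth on a smooth curve with diagonal limit -kappa / (4 pi).
        rx, ry = x[:, None] - x[None, :], y[:, None] - y[None, :]
        r2 = rx**2 + ry**2
        np.fill_diagonal(r2, 1.0)                         # placeholder, overwritten below
        K = (rx * nx[None, :] + ry * ny[None, :]) / (2 * np.pi * r2)
        np.fill_diagonal(K, -kappa / (4 * np.pi))

        h = 2 * np.pi / N                                 # trapezoidal weight
        A_mat = -0.5 * np.eye(N) + K * (sp[None, :] * h)  # Nystrom matrix
        sigma = np.linalg.solve(A_mat, f)                 # density defined only on the boundary

        # Evaluate the layer potential at an interior point; error ~ machine precision.
        px, py = 0.3, -0.2
        Kp = ((px - x) * nx + (py - y) * ny) / (2 * np.pi * ((px - x) ** 2 + (py - y) ** 2))
        u = np.sum(Kp * sigma * sp) * h
        print(abs(u - np.exp(px) * np.cos(py)))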

    Investigating the evolution of microtextured region in Ti-6242 using FE-FFT multiscale modeling method

    Get PDF
    Titanium alloy Ti-6242 (Ti-6Al-2Sn-4Zr-2Mo) is frequently used in the high-pressure compressor of aero engines due to its excellent resistance to fatigue and creep failure at high temperature. While exhibiting high strength at elevated temperatures, it is susceptible to dwell fatigue at temperatures below 473 K, due in part to the presence of microtextured regions (MTRs), also known as macrozones. MTRs are clusters of similarly oriented alpha particles, which form during alpha/beta processing and remain stable even after large deformation. The major objective of this dissertation is to quantify the evolution of MTRs under different thermomechanical processing parameters and to predict the optimal processing parameters to eliminate the MTRs. Idealized MTRs with a pure initial orientation are first employed as the benchmark case to investigate the effect of loading direction on breakdown efficiency. Three high-temperature compression processes are simulated with different loading directions using the crystal plasticity finite element method, and the results are validated against high-temperature compression experiments and EBSD measurements. The evolution of equivalent plastic strain, accumulated shear strain, and misorientation distribution is analyzed in detail to reveal the relationship between loading direction and MTR breakdown efficiency. Lastly, the reorientation velocity divergence for an arbitrary loading direction is expressed in Rodrigues' space in order to predict the optimal processing parameters for MTR elimination. The MTR breakdown efficiency also depends on the morphology and the position of the MTR within the specimen. Two different length scales have to be analyzed in order to consider both factors, which presents a great challenge to numerical simulation. In this dissertation, a highly efficient FE-FFT multiscale modeling framework is derived and developed to overcome this challenge. The Fourier-Galerkin method is utilized to solve the microscale unit cell problem, while the total Lagrangian finite element method is used to solve the macroscopic boundary value problems. Several numerical improvements are derived and implemented to further improve numerical efficiency, including consistent linearization, a consistent homogenized tangent stiffness, and an inexact Newton method. A series of numerical studies is conducted to investigate the accuracy, efficiency, and robustness of this algorithm.
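
    A compact illustration of the FFT-based periodic unit-cell solvers this framework builds on (here the classical basic fixed-point scheme for a scalar conductivity cell with a circular inclusion; the contrast, grid size, and reference medium are illustrative, and this is not the Fourier-Galerkin crystal-plasticity solver developed in the dissertation):

        import numpy as np

        # FFT-based homogenization of a periodic unit cell, basic fixed-point scheme.
        N = 64
        ii, jj = np.meshgrid(np.arange(N) - N / 2, np.arange(N) - N / 2, indexing="ij")
        k = np.where(np.hypot(ii, jj) < N / 4, 10.0, 1.0)    # conductivity with a circular inclusion
        k0 = 0.5 * (k.min() + k.max())                       # reference medium
        E = np.array([1.0, 0.0])                             # prescribed mean gradient

        xi = np.stack(np.meshgrid(np.fft.fftfreq(N), np.fft.fftfreq(N), indexing="ij"))
        xi2 = (xi ** 2).sum(axis=0)
        xi2[0, 0] = 1.0                                      # avoid division by zero at the mean mode

        e = np.tile(E[:, None, None], (1, N, N)).astype(float)   # gradient field with mean E
        for it in range(200):
            q = k[None, :, :] * e                            # local flux q = k(x) e(x)
            q_hat = np.fft.fft2(q, axes=(-2, -1))
            # Green operator of the reference medium applied to the flux.
            proj = (xi * q_hat).sum(axis=0) / (k0 * xi2)
            e_hat = np.fft.fft2(e, axes=(-2, -1)) - xi * proj[None, :, :]
            e_hat[:, 0, 0] = E * N * N                       # enforce the prescribed mean gradient
            e_new = np.real(np.fft.ifft2(e_hat, axes=(-2, -1)))
            if np.max(np.abs(e_new - e)) < 1e-8:
                e = e_new
                break
            e = e_new

        k_eff = (k[None, :, :] * e).mean(axis=(-2, -1))[0] / E[0]
        print("iterations:", it + 1, "effective k_xx:", k_eff)

    The same structure carries over, in spirit, to the mechanical problem, with the gradient field replaced by the deformation gradient, the flux by the stress, and the pointwise constitutive update by the crystal-plasticity model, while the macroscale finite element problem supplies the mean loading for each unit cell.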