Shape Optimization of Supersonic Bodies to Reduce Sonic Boom Signature
In recent years there has been a resurgence of interest by the aerospace industry and NASA in supersonic transport aircraft. Recent studies have emphasized shape optimization of supersonic aircraft to reduce the acoustic signature of the sonic boom produced at high altitude in cruise flight. Because of the limitations of in-flight testing and the cost of laboratory-scale testing, CFD technology provides an attractive alternative to aid in the design and optimization of supersonic vehicles. In the last decade, the predictive capability of CFD has significantly improved because of a substantial increase in computational power, which allows for the treatment of more complex geometries with larger meshes, better numerical algorithms, and improved turbulence models for the Reynolds-averaged Navier-Stokes (RANS) equations to reduce the predictive error. As computational power continues to increase, numerical optimization techniques have been combined with CFD to further aid the design process.
In this thesis, two cases from the recent AIAA Sonic Boom Prediction Workshop have been simulated, and one of them is optimized to reduce the sonic boom signature. The AIAA Sonic Boom Prediction Workshop provides three models for the study of sonic boom signature prediction and propagation; in this thesis the Lockheed SEEB-ALR and 69-Degree Delta Wing-Body models are considered. Grid generation is conducted with ANSYS ICEM. Flow calculations are performed with ANSYS Fluent using the compressible Euler equations. Excellent agreement between the computed pressure distributions and experimental results is obtained at all positions on the models. Shape optimization of the SEEB-ALR axisymmetric body to minimize the sonic boom signature is then performed using a genetic algorithm (GA). The optimized shape shows a decrease in the strength of the sonic boom signature. The results presented in this thesis demonstrate that CFD can be accurately and effectively employed for shape optimization of a supersonic airplane to minimize the boom signature.
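The GA-based optimization loop described above can be sketched in outline. This is a minimal illustration, not the thesis's implementation: in the real workflow each objective evaluation would be an Euler CFD solve, whereas `boom_metric` below is a hypothetical analytic stand-in that penalizes abrupt radius changes along the body, and the parameter count, population size, and GA operators are all assumptions.

```python
import random

# Hypothetical surrogate objective: in the real loop each evaluation would run
# a CFD solve and extract a boom-strength measure; here a simple analytic
# stand-in penalizes abrupt changes in the body's radius distribution.
def boom_metric(radii):
    return sum((radii[i + 1] - radii[i]) ** 2 for i in range(len(radii) - 1))

def genetic_optimize(n_params=8, pop_size=20, generations=50, seed=0):
    rng = random.Random(seed)
    # Each individual is a vector of radii parameterizing an axisymmetric body.
    pop = [[rng.uniform(0.5, 1.5) for _ in range(n_params)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=boom_metric)
        survivors = pop[: pop_size // 2]        # selection: keep the best half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_params)    # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(n_params)         # small Gaussian mutation
            child[i] += rng.gauss(0.0, 0.05)
            children.append(child)
        pop = survivors + children
    return min(pop, key=boom_metric)

best = genetic_optimize()
```

In practice a convergence criterion and the CFD-in-the-loop evaluation would replace the fixed generation count and the analytic surrogate.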
Studies of hydroxybutyrate oligomer crystallization behaviour
Monodisperse long chain oligomers such as n-alkanes provide excellent model systems for fundamental studies of polymer crystallization and annealing. Previous studies revealed important results, including a preference for discrete crystal thicknesses corresponding to integer folded chain forms, minima in growth and nucleation rates as the system changes from one chain conformation to another, and clear unfolding transitions during melting. The work presented here extends previous studies to oligomers of hydroxybutyrate (OHB), which serve as models for the polymer poly(3-R-hydroxybutyrate) (PHB). A range of exact-length hydroxybutyrate oligomers have been synthesized and their crystallization behaviour and morphology studied using optical and electron microscopy, together with small- and wide-angle X-ray scattering, including dynamic measurements at the ESRF, Grenoble. The oligomers form crystals from dilute solution and from the melt, exhibiting similar overall morphologies and structure to PHB. Growth rate data for HB 24-mer and 32-mer spherulites grown from the melt, and crystallization rate data from solution, reveal discontinuities in the rate gradient which can be linked to changes in chain conformation. These features could arise from a 'self-poisoning' effect previously postulated for the growth minima in long n-alkanes. Crystals grown at the lower temperatures contain folded chains, which transform during heating, through a process of partial melting/dissolution and re-crystallization, to form extended chain crystals. These unfolding transitions were accompanied by changes in crystallinity and lattice parameter. Crystals grown at higher temperatures contain extended chains that do not rearrange further. Preferred crystal thicknesses are those which result in relatively high proportions of chain ends in the surface; for the 24-mer, they correspond to the extended chain length (E), and to E/2, 2/3E, 3/4E and 5/6E. This wide range of thicknesses is in contrast to results from long n-alkanes, possibly due to hydrogen bonding between chain ends, which effectively links chains together into longer units. The current work reveals a great deal about the way in which HB oligomer chains fold and how they rearrange themselves from one folded form to another which, combined with previous results on PHB, will contribute towards a more complete view of the whole polymer crystallization process.
Quantum resource studied from the perspective of quantum state superposition
Quantum resources, such as discord and entanglement, are crucial in quantum information processing. In this paper, quantum resources are studied from the aspect of quantum state superposition. We define the local superposition (LS) as the superposition between basis states of a single part, and the nonlocal superposition (NLS) as the superposition between product basis states of multiple parts. To prepare a quantum resource with nonzero LS, a quantum operation must be introduced, and to prepare a quantum resource with nonzero NLS, a nonlocal quantum operation must be introduced. We prove that LS vanishes if and only if the state is classical, and that NLS vanishes if and only if the state is separable. From this superposition aspect, quantum resources are categorized as superpositions existing in different parts. These results are helpful for studying quantum resources within a unified framework.
Comment: 9 pages, 4 figures
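The two vanishing conditions can be written compactly. The notation below (local bases, subsystems A and B of a bipartite state) is assumed for illustration, not taken from the paper:

```latex
% LS = 0  iff  \rho is diagonal in a product of local bases (a classical state):
\rho = \sum_{i,j} p_{ij}\, |i\rangle\langle i| \otimes |j\rangle\langle j|

% NLS = 0  iff  \rho admits a separable decomposition:
\rho = \sum_{k} p_k\, \rho_k^{A} \otimes \rho_k^{B},
\qquad p_k \ge 0, \quad \sum_k p_k = 1
```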
Attention Focusing for Neural Machine Translation by Bridging Source and Target Embeddings
In neural machine translation, a source sequence of words is encoded into a vector from which a target sequence is generated in the decoding phase. Unlike in statistical machine translation, the associations between source words and their possible target counterparts are not explicitly stored. Source and target words sit at the two ends of a long information-processing procedure, mediated by hidden states at both the source encoding and the target decoding phases. This makes it possible for a source word to be incorrectly translated into a target word that is not among its admissible equivalents in the target language.
In this paper, we seek to shorten the distance between source and target words in that procedure, and thus strengthen their association, by means of a method we term bridging source and target word embeddings. We experiment with three strategies: (1) a source-side bridging model, where source word embeddings are moved one step closer to the output target sequence; (2) a target-side bridging model, which explores the more relevant source word embeddings for the prediction of the target sequence; and (3) a direct bridging model, which directly connects source and target word embeddings, seeking to minimize errors in the translation of one by the other.
Experiments and analysis presented in this paper demonstrate that the proposed bridging models significantly improve the quality both of sentence translation in general and of the alignment and translation of individual source words in particular.
Comment: 9 pages, 6 figures. Accepted by ACL201
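To make the source-side bridging idea concrete, here is a minimal toy sketch in plain Python. All names and shapes are hypothetical, and the actual models use learned projections and recurrent states; the point illustrated is only that the decoder's attention weights can be reused to pool the raw source word embeddings directly into the context, so target predictions see source words through a shorter path.

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Toy source-side bridging: the decoder state attends to the encoder hidden
# states as usual, and the same attention weights also pool the raw source
# word embeddings, which are concatenated onto the context vector.
def bridged_context(dec_state, enc_states, src_embeds):
    weights = softmax([dot(dec_state, h) for h in enc_states])
    dim = len(dec_state)
    ctx_hidden = [sum(w * h[d] for w, h in zip(weights, enc_states)) for d in range(dim)]
    ctx_embed = [sum(w * e[d] for w, e in zip(weights, src_embeds)) for d in range(dim)]
    return ctx_hidden + ctx_embed  # downstream layers see both summaries
```

A real implementation would learn separate projection matrices for the two summaries rather than concatenating them directly.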
Negative exponential behavior of image mutual information for pseudo-thermal light ghost imaging: Observation, modeling, and verification
When the image mutual information is used to assess the quality of the reconstructed image in pseudo-thermal light ghost imaging, a negative exponential behavior with respect to the measurement number is observed. Based on information theory and a few simple and verifiable assumptions, a semi-quantitative model of the image mutual information under varying measurement numbers is established. It is the Gaussian characteristics of the bucket detector output probability distribution that lead to this negative exponential behavior. Designed experiments verify the model.
Comment: 13 pages, 6 figures
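The reported behavior suggests a saturating-exponential functional form. The sketch below uses assumed notation (`i_inf`, `n0` are illustrative parameters, not the paper's model) to show why the residual information is linear on a log scale, which is how such a negative exponential would typically be checked against data.

```python
import math

# Assumed form of the observation: image mutual information grows as a
# saturating negative exponential in the measurement number N,
#   I(N) = I_inf * (1 - exp(-N / N0)),
# where I_inf is the asymptotic information and N0 a characteristic scale.
def mutual_info(n, i_inf=4.0, n0=500.0):
    return i_inf * (1.0 - math.exp(-n / n0))

# The residual I_inf - I(N) decays exponentially, hence is linear in N
# on a log scale: log(I_inf - I(N)) = log(I_inf) - N / N0.
def log_residual(n, i_inf=4.0, n0=500.0):
    return math.log(i_inf - mutual_info(n, i_inf, n0))
```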
Binary sampling ghost imaging: add random noise to fight quantization caused image quality decline
When the sampling data of ghost imaging are recorded with fewer bits, i.e., after quantization, a decline in image quality is observed: the fewer bits used, the worse the image. Dithering, which adds suitable random noise to the raw data before quantization, is shown to compensate effectively for this quality decline, even in the extreme binary sampling case. A brief explanation and a parameter optimization of dithering are given.
Comment: 8 pages, 7 figures
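A toy numerical illustration of the dithering idea (not the paper's experimental setup: the signal statistics, threshold placement, and dither amplitude are all assumed): a badly placed 1-bit threshold loses correlation with the underlying signal, and adding uniform random noise before binarization recovers part of it.

```python
import random

# Pearson correlation between two equal-length sequences.
def correlate(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

rng = random.Random(1)
signal = [rng.gauss(0.0, 1.0) for _ in range(20000)]  # stand-in detector samples
threshold = 2.5  # a badly placed 1-bit quantization threshold

# Hard 1-bit quantization: almost all samples fall on the same side.
plain = [1.0 if s > threshold else 0.0 for s in signal]
# Dithering: uniform noise added before the same 1-bit quantizer.
dithered = [1.0 if s + rng.uniform(-1.5, 1.5) > threshold else 0.0 for s in signal]
```

The dithered binary sequence correlates better with the original signal because the noise smears the hard threshold into a smooth, signal-dependent transition probability.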
Panel Data Models with Interactive Fixed Effects and Multiple Structural Breaks
In this paper we consider estimation of common structural breaks in panel data models with unobservable interactive fixed effects. We introduce a penalized principal component (PPC) estimation procedure with an adaptive group fused LASSO to detect the multiple structural breaks in the models. Under some mild conditions, we show that, with probability approaching one, the proposed method correctly determines the unknown number of breaks and consistently estimates the common break dates. Furthermore, we estimate the regression coefficients through the post-LASSO method and establish the asymptotic distribution theory for the resulting estimators. The developed methodology and theory are applicable to dynamic panel data models. Monte Carlo simulation results demonstrate that the proposed method works well in finite samples, with a low false detection probability when there is no structural break and a high probability of correctly estimating the number of breaks when structural breaks exist. We finally apply our method to study the environmental Kuznets curve for 74 countries over 40 years and detect two breaks in the data.
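As a very rough illustration of what break-date detection in a coefficient path involves: the adaptive group fused LASSO penalizes differences between consecutive regime coefficients, so breaks appear where those differences are large. The sketch below is a crude moving-average proxy for that idea, not the paper's PPC estimator; the data, window, and threshold are invented.

```python
import random

# Crude proxy for fused-LASSO-style break detection: flag dates where the
# local mean of a noisy piecewise-constant coefficient path shifts by more
# than a threshold, keeping one date per run of flagged points.
def detect_breaks(series, window=5, threshold=1.0):
    breaks = []
    for t in range(window, len(series) - window):
        left = sum(series[t - window:t]) / window
        right = sum(series[t:t + window]) / window
        if abs(right - left) > threshold:
            if not breaks or t - breaks[-1] > 2 * window:
                breaks.append(t)
    return breaks

rng = random.Random(0)
path = [0.0] * 30 + [2.0] * 30 + [-1.0] * 30  # true breaks at t = 30 and t = 60
noisy = [x + rng.gauss(0.0, 0.2) for x in path]
```

The LASSO-based procedure instead estimates all regime coefficients jointly and lets the penalty shrink spurious differences to exactly zero, which is what delivers the consistency results stated above.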
DCHT: Deep Complex Hybrid Transformer for Speech Enhancement
Most current deep learning-based approaches for speech enhancement operate only in the spectrogram or waveform domain. Although a cross-domain transformer combining waveform- and spectrogram-domain inputs has been proposed, its performance can be further improved. In this paper, we present a novel deep complex hybrid transformer that integrates both spectrogram- and waveform-domain approaches to improve speech enhancement performance. The proposed model consists of two parts: a complex Swin-Unet in the spectrogram domain and a dual-path transformer network (DPTnet) in the waveform domain. We first construct a complex Swin-Unet network in the spectrogram domain and perform speech enhancement on the complex audio spectrum. We then introduce an improved DPT by adding memory-compressed attention. Our model is capable of learning multi-domain features that reduce noise in the different domains in a complementary way. Experimental results on the BirdSoundsDenoising dataset and the VCTK+DEMAND dataset indicate that our method achieves better performance than state-of-the-art methods.
Comment: IEEE DDP conference
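Memory-compressed attention, as the term is generally used, reduces attention cost by shortening the key/value sequence before the attention product. The toy sketch below assumes a pooling-based compression and plain-Python arithmetic; it is not necessarily the paper's exact layer, only an illustration of the idea.

```python
import math

# Average-pool a sequence of vectors with a given stride, shortening it.
def avg_pool(seq, stride):
    return [
        [sum(v[d] for v in seq[i:i + stride]) / len(seq[i:i + stride])
         for d in range(len(seq[0]))]
        for i in range(0, len(seq), stride)
    ]

# Standard scaled dot-product attention over lists of vectors.
def attention(queries, keys, values):
    dim = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(a * b for a, b in zip(q, k)) / math.sqrt(dim) for k in keys]
        m = max(scores)
        ws = [math.exp(s - m) for s in scores]
        z = sum(ws)
        ws = [w / z for w in ws]
        out.append([sum(w * v[d] for w, v in zip(ws, values)) for d in range(dim)])
    return out

# Memory-compressed variant: pool keys and values first, so the attention
# matrix shrinks from (n x n) to (n x n/stride).
def compressed_attention(queries, keys, values, stride=2):
    return attention(queries, avg_pool(keys, stride), avg_pool(values, stride))
```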