Seismic Analysis Capability in NASTRAN
Seismic analysis is a technique which pertains to loading described in terms of boundary accelerations. Earthquake shocks to buildings are the type of excitation which usually comes to mind when one hears the word seismic, but the technique also applies to a broad class of acceleration excitations applied at the base of a structure, such as vibration shaker testing or shocks to machinery foundations. Four different solution paths are available in NASTRAN for seismic analysis: Direct Seismic Frequency Response, Direct Seismic Transient Response, Modal Seismic Frequency Response, and Modal Seismic Transient Response. This capability, at present, is invoked not as separate rigid formats, but as pre-packaged ALTER packets to existing RIGID Formats 8, 9, 11, and 12. These ALTER packets are included with the delivery of the NASTRAN program and are stored on the computer as a library of callable utilities. The user calls one of these utilities and merges it into the Executive Control Section of the data deck; any of the four options are invoked by setting parameter values in the bulk data
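The base-acceleration loading described above can be sketched outside NASTRAN as well. The following is a minimal illustration, not the NASTRAN solution path: a hypothetical two-story shear building under a toy base acceleration, integrated with the Newmark average-acceleration method. All masses, stiffnesses, and the forcing pulse are assumed values chosen for the example.

```python
import numpy as np

# Hypothetical 2-DOF shear building (assumed values, not from the paper)
m, k = 1000.0, 5.0e5                # story mass [kg], story stiffness [N/m]
M = np.diag([m, m])
K = np.array([[2 * k, -k], [-k, k]])
C = 0.02 * K                        # light stiffness-proportional damping (assumed)
r = np.ones(2)                      # influence vector: base motion excites every DOF

def a_g(t):
    # Toy base acceleration: a 1 s, 3 Hz sine pulse (not a real earthquake record)
    return 2.0 * np.sin(2.0 * np.pi * 3.0 * t) if t < 1.0 else 0.0

# Newmark average-acceleration integration of  M u'' + C u' + K u = -M r a_g(t),
# where u is the displacement relative to the moving base.
dt, n_steps = 0.002, 2000
u = np.zeros(2); v = np.zeros(2); acc = np.zeros(2)
S = M + dt / 2.0 * C + dt**2 / 4.0 * K   # effective system matrix
u_max = 0.0
for i in range(1, n_steps + 1):
    p = -M @ r * a_g(i * dt)             # effective seismic load
    u_pred = u + dt * v + dt**2 / 4.0 * acc
    v_pred = v + dt / 2.0 * acc
    acc = np.linalg.solve(S, p - C @ v_pred - K @ u_pred)
    u = u_pred + dt**2 / 4.0 * acc
    v = v_pred + dt / 2.0 * acc
    u_max = max(u_max, float(np.max(np.abs(u))))
```

The key point mirrored from the abstract is that base acceleration never appears as an applied displacement: it enters only through the effective load vector -M r a_g(t) on the relative coordinates.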
Statistical correlation analysis for comparing vibration data from test and analysis
A theory was developed to compare vibration modes obtained by NASTRAN analysis with those obtained experimentally. Because many more analytical modes can be obtained than experimental modes, the analytical set was treated as expansion functions for putting both sources in comparative form. The dimensional symmetry was developed for three general cases: a nonsymmetric whole model compared with a nonsymmetric whole structural test, a symmetric analytical portion compared with a symmetric experimental portion, and a symmetric analytical portion compared with a whole experimental test. The theory was coded and a statistical correlation program was installed as a utility. The theory is demonstrated with small classical structures
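One widely used correlation measure for comparing test and analytical mode shapes is the Modal Assurance Criterion (MAC); the sketch below uses it purely as an illustration and is not necessarily the statistic developed in this paper. The toy mode shapes are invented for the example.

```python
import numpy as np

def mac(phi_a, phi_e):
    """Modal Assurance Criterion between analytical and experimental mode shapes.

    phi_a, phi_e: columns are mode shapes sampled at the same DOFs.
    Returns a matrix with entries in [0, 1]; values near 1 mean analytical
    mode i and experimental mode j describe the same shape (real modes assumed).
    """
    num = np.abs(phi_a.T @ phi_e) ** 2
    den = np.outer(np.sum(phi_a * phi_a, axis=0),
                   np.sum(phi_e * phi_e, axis=0))
    return num / den

# Toy data: 3 analytical modes, 2 "experimental" modes. The second test mode
# is a scaled copy of analytical mode 3 -- MAC is insensitive to scaling,
# since mode shapes are only defined up to an arbitrary factor.
phi_a = np.array([[1.0,  1.0,  1.0],
                  [2.0,  0.5, -1.0],
                  [3.0, -1.0,  1.0]])
phi_e = np.column_stack([phi_a[:, 0], 0.5 * phi_a[:, 2]])
mac_matrix = mac(phi_a, phi_e)
```

Perfect pairings show up as MAC values of 1.0 on the matched entries, while uncorrelated shape pairs give values near zero.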
Large-wavelength instabilities in free-surface Hartmann flow at low magnetic Prandtl numbers
We study the linear stability of the flow of a viscous electrically
conducting capillary fluid on a planar fixed plate in the presence of gravity
and a uniform magnetic field. We first confirm that the Squire transformation
for MHD is compatible with the stress and insulating boundary conditions at the
free surface, but argue that unless the flow is driven at fixed Galilei and
capillary numbers, the critical mode is not necessarily two-dimensional. We
then investigate numerically how a flow-normal magnetic field, and the
associated Hartmann steady state, affect the soft and hard instability modes of
free surface flow, working in the low magnetic Prandtl number regime of
laboratory fluids. Because it is a critical layer instability, the hard mode is
found to exhibit similar behaviour to the even unstable mode in channel
Hartmann flow, in terms of both the weak influence of Pm on its neutral
stability curve, and the dependence of its critical Reynolds number Re_c on the
Hartmann number Ha. In contrast, the structure of the soft mode's growth rate
contours in the (Re, alpha) plane, where alpha is the wavenumber, differs
markedly between problems with small, but nonzero, Pm, and their counterparts
in the inductionless limit. As derived from large wavelength approximations,
and confirmed numerically, the soft mode's critical Reynolds number grows
exponentially with Ha in inductionless problems. However, when Pm is nonzero
the Lorentz force originating from the steady state current leads to a
modification of Re_c(Ha) to either a sublinearly increasing, or decreasing
function of Ha, respectively for problems with insulating and conducting walls.
In the former, we also observe pairs of Alfven waves, the upstream propagating
wave undergoing an instability at large Alfven numbers.
Comment: 58 pages, 16 figures
Statistical correlation of structural mode shapes from test measurements and NASTRAN analytical values
The software and procedures of a system of programs used to generate a report of the statistical correlation between NASTRAN modal analysis results and physical tests results from modal surveys are described. Topics discussed include: a mathematical description of statistical correlation, a user's guide for generating a statistical correlation report, a programmer's guide describing the organization and functions of individual programs leading to a statistical correlation report, and a set of examples including complete listings of programs, and input and output data
A Parameterized Centrality Metric for Network Analysis
A variety of metrics have been proposed to measure the relative importance of
nodes in a network. One of these, alpha-centrality [Bonacich, 2001], measures
the number of attenuated paths that exist between nodes. We introduce a
normalized version of this metric and use it to study network structure,
specifically, to rank nodes and find community structure of the network.
Specifically, we extend the modularity-maximization method [Newman and Girvan,
2004] for community detection to use this metric as the measure of node
connectivity. Normalized alpha-centrality is a powerful tool for network
analysis, since it contains a tunable parameter that sets the length scale of
interactions. Studying how rankings and discovered communities change as
this parameter is varied allows us to identify locally and globally important
nodes and structures. We apply the proposed method to several benchmark
networks and show that it leads to better insight into network structure than
alternative methods.
Comment: 11 pages, submitted to Physical Review
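The attenuated-path idea can be sketched directly: alpha-centrality solves c = (I - alpha Aᵀ)⁻¹ e, which sums paths of every length k into each node, each discounted by alpha^k (convergent for alpha below the reciprocal of the spectral radius of A). A minimal sketch, with the sum-to-one normalization chosen here for illustration (the paper's normalization may differ):

```python
import numpy as np

def alpha_centrality(A, alpha, e=None):
    """Alpha-centrality c = (I - alpha * A.T)^{-1} e: the number of paths of
    every length k ending at each node, attenuated by alpha**k.

    The geometric series converges only for alpha < 1 / spectral_radius(A).
    """
    n = A.shape[0]
    if e is None:
        e = np.ones(n)  # uniform exogenous status, a common default choice
    c = np.linalg.solve(np.eye(n) - alpha * A.T, e)
    return c / c.sum()  # normalized so scores sum to 1 (an illustrative choice)

# Toy directed star: nodes 1..3 each point at node 0, so node 0 should
# dominate the ranking for any positive alpha.
A = np.zeros((4, 4))
A[1:, 0] = 1.0
c = alpha_centrality(A, alpha=0.3)
```

Varying `alpha` is the tunable length scale the abstract refers to: small values weight short paths (local structure), larger values let long paths contribute (global structure).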
Bounds on Quantum Correlations in Bell Inequality Experiments
Bell inequality violation is one of the most widely known manifestations of
entanglement in quantum mechanics; indicating that experiments on physically
separated quantum mechanical systems cannot be given a local realistic
description. However, despite the importance of Bell inequalities, it is not
known in general how to determine whether a given entangled state will violate
a Bell inequality. This is because one can choose to make many different
measurements on a quantum system to test any given Bell inequality and the
optimization over measurements is a high-dimensional variational problem. In
order to better understand this problem we present algorithms that provide, for
a given quantum state, both a lower bound and an upper bound on the maximal
expectation value of a Bell operator. Both bounds apply techniques from convex
optimization and the methodology for creating upper bounds allows them to be
systematically improved. In many cases these bounds determine measurements that
would demonstrate violation of the Bell inequality or provide a bound that
rules out the possibility of a violation. Examples are given to illustrate how
these algorithms can be used to conclude definitively if some quantum states
violate a given Bell inequality.
Comment: 13 pages, 1 table, 2 figures. Updated version as published in PR
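The lower-bound side of the problem is easy to illustrate: any fixed choice of measurements gives a lower bound on the maximal Bell-operator expectation, since the maximum is over all measurements. The sketch below (an illustration, not the authors' convex-optimization algorithms) evaluates the CHSH operator on the singlet state with the measurement settings known to reach Tsirelson's bound 2√2, exceeding the local-realistic limit of 2.

```python
import numpy as np

# Pauli matrices
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def chsh_value(rho, A0, A1, B0, B1):
    """Expectation of the CHSH Bell operator
    B = A0(x)B0 + A0(x)B1 + A1(x)B0 - A1(x)B1 in the state rho.
    For any fixed measurements this is a lower bound on the maximum."""
    B = (np.kron(A0, B0) + np.kron(A0, B1)
         + np.kron(A1, B0) - np.kron(A1, B1))
    return float(np.real(np.trace(rho @ B)))

# Singlet state |psi-> = (|01> - |10>) / sqrt(2)
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

# Settings known to saturate Tsirelson's bound 2*sqrt(2) for the singlet
A0, A1 = Z, X
B0 = -(Z + X) / np.sqrt(2)
B1 = (X - Z) / np.sqrt(2)
val = chsh_value(rho, A0, A1, B0, B1)  # exceeds the classical bound of 2
```

The hard part, which the paper addresses, is the converse: certifying an upper bound over *all* possible measurements, which is a high-dimensional variational problem rather than a single evaluation like this one.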
On the Matter of Time
Drawing on several disciplinary areas, this article considers diverse cultural concepts of time, space, and materiality. It explores historical shifts in ideas about time, observing that these have gone full circle, from visions in which time and space were conflated, through increasingly divergent linear understandings of the relationship between them, to their reunion in contemporary notions of space-time. Making use of long-term ethnographic research and explorations of the topic of Time at Durham University’s Institute of Advanced Study (2012–13), the article considers Aboriginal Australian ideas about relationality and the movement of matter through space and time. It asks why these earliest explanations of the cosmos, though couched in a wholly different idiom, seem to have more in common with the theories proposed by contemporary physicists than with the ideas that dominated the period between the Holocene and the Anthropocene. The analysis suggests that such unexpected resonance between these oldest and newest ideas about time and space may spring from the fact that they share an intense observational focus on material events. Comparing these vastly different but intriguingly compatible worldviews meets interdisciplinary aims in providing a fresh perspective on both of them
On the application of optimal wavelet filter banks for ECG signal classification
This paper discusses ECG signal classification after parametrizing the ECG waveforms in the wavelet domain. Signal decomposition using perfect reconstruction quadrature mirror filter banks can provide a very parsimonious representation of ECG signals. In the current work, the filter parameters are adjusted by a numerical optimization algorithm in order to minimize a cost function associated with the filter cut-off sharpness. The goal is to achieve a better compromise between frequency selectivity and time resolution at each decomposition level than standard orthogonal filter banks such as those of the Daubechies and Coiflet families. Our aim is to optimally decompose the signals in the wavelet domain so that they can subsequently be used as inputs for training a neural network classifier
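A perfect-reconstruction QMF decomposition can be sketched with the simplest such pair, the Haar filters; this is a minimal stand-in for the optimized filter banks of the paper, and the subband-energy features at the end are an assumed illustrative descriptor, not the paper's parametrization.

```python
import numpy as np

def analyze(x):
    """One level of a Haar QMF analysis bank: lowpass/highpass + downsample by 2.
    Input length must be even."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)   # lowpass subband
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)   # highpass subband (QMF pair)
    return approx, detail

def synthesize(approx, detail):
    """Inverse of analyze(): upsample and merge -- perfect reconstruction."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2)
    x[1::2] = (approx - detail) / np.sqrt(2)
    return x

def wavelet_features(x, levels=3):
    """Energy per subband over a multi-level decomposition: a simple
    parsimonious feature vector one might feed to a classifier."""
    feats = []
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        a, d = analyze(a)
        feats.append(np.sum(d * d))
    feats.append(np.sum(a * a))
    return np.array(feats)

# Toy 16-sample signal standing in for an ECG beat segment
x = np.array([0.0, 0.1, 0.2, 1.5, -2.0, 1.0, 0.3, 0.2,
              0.1, 0.0, 0.4, 0.6, 0.3, 0.1, 0.0, 0.0])
a, d = analyze(x)
x_rec = synthesize(a, d)
```

Because the Haar bank is orthogonal, the subband energies sum to the signal energy (a Parseval relation); the paper's contribution is replacing fixed filters like these with ones numerically optimized for cut-off sharpness.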
Symbolic-Numeric Algorithms for Computer Analysis of Spheroidal Quantum Dot Models
A computation scheme for solving elliptic boundary value problems with
axially symmetric confining potentials using different sets of one-parameter
basis functions is presented. The efficiency of the proposed symbolic-numerical
algorithms implemented in Maple is shown by examples of spheroidal quantum dot
models, for which energy spectra and eigenfunctions versus the spheroid aspect
ratio were calculated within the conventional effective mass approximation.
Critical values of the aspect ratio, at which the discrete spectrum of models
with finite-wall potentials is transformed into a continuous one in strong
dimensional quantization regime, were revealed using the exact and adiabatic
classifications.
Comment: 6 figures, Submitted to Proc. of The 12th International Workshop on Computer Algebra in Scientific Computing (CASC 2010), Tsakhkadzor, Armenia, September 5-12, 2010
A Finite Element Computation of the Gravitational Radiation emitted by a Point-like object orbiting a Non-rotating Black Hole
The description of extreme-mass-ratio binary systems in the inspiral phase is
a challenging problem in gravitational wave physics with significant relevance
for the space interferometer LISA. The main difficulty lies in the evaluation
of the effects of the small body's gravitational field on itself. To that end,
an accurate computation of the perturbations produced by the small body with
respect to the background geometry of the large object, a massive black hole, is
required. In this paper we present a new computational approach based on Finite
Element Methods to solve the master equations describing perturbations of
non-rotating black holes due to an orbiting point-like object. The numerical
computations are carried out in the time domain by using evolution algorithms
for wave-type equations. We show the accuracy of the method by comparing our
calculations with previous results in the literature. Finally, we discuss the
relevance of this method for achieving accurate descriptions of
extreme-mass-ratio binaries.
Comment: RevTeX 4. 18 pages, 8 figures
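Time-domain evolution of a wave-type equation, the core numerical task described above, can be sketched in miniature. The example below uses a 1+1D flat-space wave equation with finite differences rather than the paper's finite elements and black-hole master equations, so it is a schematic analogue only: a leapfrog scheme evolving a standing wave u(x, t) = sin(pi x) cos(pi t) and checking the result against the exact solution.

```python
import numpy as np

# Leapfrog evolution of u_tt = u_xx on [0, 1] with fixed ends: a toy
# stand-in for time-domain evolution of perturbation master equations.
nx = 201
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
dt = 0.5 * dx                       # CFL number 0.5 < 1: stable for leapfrog
n_steps = int(round(2.0 / dt))      # evolve to t = 2, one full period

# Initial data for the standing wave u(x, t) = sin(pi x) cos(pi t)
u_prev = np.sin(np.pi * x)          # u at t = 0
lap = np.zeros(nx)
lap[1:-1] = (u_prev[2:] - 2 * u_prev[1:-1] + u_prev[:-2]) / dx**2
u = u_prev + 0.5 * dt**2 * lap      # second-order first step (u_t(x,0) = 0)

for _ in range(n_steps - 1):
    u_next = np.empty(nx)
    u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                    + (dt / dx) ** 2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
    u_next[0] = u_next[-1] = 0.0    # Dirichlet boundaries
    u_prev, u = u, u_next

t_final = n_steps * dt
err = float(np.max(np.abs(u - np.sin(np.pi * x) * np.cos(np.pi * t_final))))
```

Comparing against a known solution, as done here, mirrors the paper's accuracy check against previous results in the literature; the real problem adds a curvature potential, a point-particle source, and radiative boundary conditions.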