On the Strength of Spin-Isospin Transitions in A=28 Nuclei
The relations between the strengths of spin-isospin transition operators
extracted from direct nuclear reactions, magnetic scattering of electrons and
processes of semi-leptonic weak interactions are discussed.
Comment: LaTeX, 8 pages, 1 Postscript file with figures
Performance Characteristics of an Extended Throat Flow Nozzle for the Measurement of High Void Fraction Multi-phase Flows
An extended throat flow nozzle has been examined as a device for the measurement of very high void fraction (α ≥ 0.95) multi-phase flows. Due to its greater density and partial contact with the wall, the equilibrium velocity of the liquid phase appreciably lags that of the lighter gas phase. The two phases are strongly coupled, resulting in pressure drops across the contraction and in the extended throat that are significantly different from those experienced in single-phase flow. Information about the mass flow rates of the two phases can be extracted from the measured pressure drops. The performance of an extended throat flow nozzle has been evaluated under multi-phase conditions using natural gas and hydrocarbon liquids at 400 and 500 psi. Two hydrocarbon solvents were used as the test liquids, Isopar M® (specific gravity 0.79) and Aromatic 100® (specific gravity 0.87). These data are compared to prior air-water data at nominally 15 psi. The high and low pressure data were found to be consistent, confirming that the temperature, pressure, and size scaling of the extended throat venturi are correctly represented. This consistency allows different sized devices to be applied under different fluid conditions (temperature, pressure, gas and liquid phase composition, etc.) with confidence.
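For reference, the single-phase baseline against which the multi-phase pressure drops are compared follows the standard venturi/flow-nozzle relation. A minimal sketch, assuming the usual ISO-style incompressible form; the multi-phase corrections the paper develops are not modeled, and the function name, discharge coefficient, and example numbers are illustrative:

```python
import math

def venturi_mass_flow(dp_pa, rho, d_throat, d_inlet, cd=0.98):
    """Single-phase mass flow through a venturi/flow nozzle (kg/s).

    Standard relation: mdot = Cd * A_t * sqrt(2 * rho * dp / (1 - beta^4)).
    This is only the single-phase baseline; the multi-phase coupling effects
    discussed in the abstract are not represented here.
    """
    beta = d_throat / d_inlet                      # diameter ratio
    a_throat = math.pi * d_throat**2 / 4.0         # throat area (m^2)
    return cd * a_throat * math.sqrt(2.0 * rho * dp_pa / (1.0 - beta**4))

# Example: 20 kPa drop, water (rho = 998 kg/m^3), 25 mm throat in a 50 mm pipe
print(venturi_mass_flow(20e3, 998.0, 0.025, 0.050))  # about 3.1 kg/s
```

In the multi-phase case of the abstract, two such pressure-drop measurements (contraction and extended throat) are combined to separate the gas and liquid mass flow rates.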
Geometry of Polynomials and Root-Finding via Path-Lifting
Using the interplay between topological, combinatorial, and geometric
properties of polynomials and analytic results (primarily the covering
structure and distortion estimates), we analyze a path-lifting method for
finding approximate zeros, similar to those studied by Smale, Shub, Kim, and
others. Given any polynomial, this simple algorithm always converges to a root,
except on a finite set of initial points lying on a circle of a given radius.
Specifically, the algorithm we analyze consists of iterating Newton steps toward the successive target values $t_k\,p(z_0)$, i.e. $z_{k+1} = z_k - \bigl(p(z_k) - t_{k+1}\,p(z_0)\bigr)/p'(z_k)$, where the $t_k$ form a decreasing sequence of
real numbers and $z_0$ is chosen on a circle containing all the roots. We show
that the number of iterates required to locate an approximate zero of a
polynomial $p$ depends only on $\log\bigl(|p(z_0)|/\rho_\zeta\bigr)$ (where $\rho_\zeta$ is
the radius of convergence of the branch of $p^{-1}$ taking $0$ to a root
$\zeta$) and the logarithm of the angle between $p(z_0)$ and certain critical
values. Previous complexity results for related algorithms depend linearly on
the reciprocals of these angles. Note that the complexity of the algorithm does
not depend directly on the degree of $p$, but only on the geometry of the
critical values.
Furthermore, for any polynomial $p$ with distinct roots, the average number
of steps required over all starting points taken on a circle containing all the
roots is bounded by a constant times the average of $\log(1/\rho_\zeta)$. The
average of $\log(1/\rho_\zeta)$ over all polynomials with $d$ roots in the
unit disk is $O(d)$. This algorithm readily generalizes to
finding all roots of a polynomial (without deflation); doing so increases the
complexity by a factor of at most $d$.
Comment: 44 pages, 12 figures
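The path-lifting idea can be sketched numerically: follow the ray of target values $t\,p(z_0)$ down to zero, correcting with a few Newton steps toward each intermediate target. This is a toy illustration of the general idea only; the linear step schedule and iteration counts here are arbitrary choices of this sketch, not the schedule analyzed in the paper:

```python
import cmath

def polyval(coeffs, z):
    """Evaluate a polynomial (coefficients in descending powers) by Horner's rule."""
    v = 0j
    for c in coeffs:
        v = v * z + c
    return v

def polyder(coeffs):
    """Coefficients of the derivative, descending powers."""
    n = len(coeffs) - 1
    return [c * (n - i) for i, c in enumerate(coeffs[:-1])]

def path_lift_root(coeffs, z0, steps=200, newton_iters=5):
    """Follow p^{-1}(t * p(z0)) as t decreases from 1 to 0,
    using Newton corrections toward each intermediate target value."""
    dcoeffs = polyder(coeffs)
    p0 = polyval(coeffs, z0)
    z = complex(z0)
    for k in range(1, steps + 1):
        target = (1 - k / steps) * p0   # t_k decreasing linearly to 0
        for _ in range(newton_iters):
            dp = polyval(dcoeffs, z)
            if dp == 0:
                break
            z = z - (polyval(coeffs, z) - target) / dp
    return z

# p(z) = z^3 - 1; start on a circle of radius 2 containing all the roots
root = path_lift_root([1, 0, 0, -1], 2.0 * cmath.exp(0.3j))
print(abs(polyval([1, 0, 0, -1], root)))  # tiny residual: a cube root of unity
```

Because the starting value $p(z_0)$ here is not a negative real, the ray of targets avoids the critical value $-1$ of $z^3 - 1$, so the lifted path stays well-defined all the way to a root.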
Array algorithms for H^2 and H^∞ estimation
Currently, the preferred method for implementing H^2 estimation algorithms is the so-called array form, which includes two main families: square-root array algorithms, which are typically more stable than conventional ones, and fast array algorithms, which, when the system is time-invariant, typically offer an order-of-magnitude reduction in computational effort. Using our recent observation that H^∞ filtering coincides with Kalman filtering in Krein space, in this chapter we develop array algorithms for H^∞ filtering. These can be regarded as natural generalizations of their H^2 counterparts, and involve propagating the indefinite square roots of the quantities of interest. The H^∞ square-root and fast array algorithms both have the interesting feature that one does not need to explicitly check for the positivity conditions required for the existence of H^∞ filters: these conditions are built into the algorithms themselves, so that an H^∞ estimator of the desired level exists if, and only if, the algorithms can be executed. However, since the H^∞ square-root algorithms predominantly use J-unitary transformations, rather than the unitary transformations required in the H^2 case, further investigation is needed to determine the numerical behavior of such algorithms.
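The square-root array idea in the H^2 (Kalman) case can be illustrated with a toy measurement update: stack the current Cholesky factors in a pre-array, triangularize it with one QR step (the orthogonal transformation of the array form), and read the updated factors and gain off the post-array. This is a sketch of the standard H^2 case only, with hypothetical function and variable names; the H^∞ version with J-unitary transformations is not attempted here:

```python
import numpy as np

def sqrt_kalman_measurement_update(S, H, R_sqrt):
    """One square-root (array-form) measurement update.

    S: Cholesky factor of the prior covariance, P = S @ S.T
    H: measurement matrix; R_sqrt: Cholesky factor of the noise covariance R.
    """
    m, n = H.shape[0], S.shape[0]
    pre = np.zeros((m + n, m + n))
    pre[:m, :m] = R_sqrt
    pre[:m, m:] = H @ S
    pre[m:, m:] = S
    # Find an orthogonal Theta with pre @ Theta lower triangular: QR of pre.T
    q, r = np.linalg.qr(pre.T)
    post = r.T
    Re_sqrt = post[:m, :m]               # square root of innovation covariance
    K_bar = post[m:, :m]                 # normalized gain
    S_new = post[m:, m:]                 # updated covariance factor
    K = K_bar @ np.linalg.inv(Re_sqrt)   # Kalman gain K = P H^T Re^{-1}
    return S_new, K

# Compare against the conventional covariance update on a small example
S = np.linalg.cholesky(np.array([[2.0, 0.5], [0.5, 1.0]]))
H = np.array([[1.0, 0.0]])
R_sqrt = np.array([[0.1]])
S_new, K = sqrt_kalman_measurement_update(S, H, R_sqrt)
P = S @ S.T
Re = H @ P @ H.T + R_sqrt @ R_sqrt.T
P_conv = P - P @ H.T @ np.linalg.inv(Re) @ H @ P
print(np.allclose(S_new @ S_new.T, P_conv))  # True
```

The numerical appeal is visible even here: the update only ever touches square-root factors through an orthogonal transformation, so the reconstructed covariance can never lose symmetry or positive semidefiniteness to round-off.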
svdPPCS: an effective singular value decomposition-based method for conserved and divergent co-expression gene module identification
Background: Comparative analysis of gene expression profiling of multiple biological categories, such as different species of organisms or different kinds of tissue, promises to enhance the fundamental understanding of the universality as well as the specialization of mechanisms and related biological themes. Grouping together genes with a similar expression pattern, or exhibiting co-expression, is a starting point in understanding and analyzing gene expression data. In recent literature, gene-module-level analysis is advocated in order to understand biological network design and system behaviors in disease and life processes; however, practical difficulties often lie in the implementation of existing methods.
Results: Using the singular value decomposition (SVD) technique, we developed a new computational tool, named svdPPCS (SVD-based Pattern Pairing and Chart Splitting), to identify conserved and divergent co-expression modules of two sets of microarray experiments. In the proposed method, gene modules are identified by splitting the two-way chart coordinated with a pair of left singular vectors factorized from the gene expression matrices of the two biological categories. Importantly, the cutoffs are determined by a data-driven algorithm using the well-defined statistic SVD-p. The implementation was illustrated on two time series microarray data sets generated from samples of accessory gland (ACG) and malpighian tubule (MT) tissues of the line W^118 of M. drosophila. Two conserved modules and six divergent modules, each of which has a unique characteristic profile across tissue kinds and aging processes, were identified. The number of genes contained in these modules ranged from five to a few hundred. Three to over a hundred GO terms were over-represented in individual modules with FDR < 0.1. One divergent module suggested a tissue-specific relationship between the expression of mitochondrion-related genes and the aging process. This finding, together with others, may be of biological significance. The validity of the proposed SVD-based method was further verified by a simulation study, as well as by comparisons with regression analysis and with cubic spline regression analysis plus PAM-based clustering.
Conclusions: svdPPCS is a novel computational tool for the comparative analysis of transcriptional profiling. It especially fits the comparison of time series data of related organisms, or of different tissues of the same organism, under equivalent or similar experimental conditions. The general scheme can be directly extended to comparisons of multiple data sets. It can also be applied to the integration of data sets from different platforms and of different sources.
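The core pattern-pairing step of svdPPCS can be sketched on toy data: take the leading left singular vector of each category's expression matrix and split the resulting two-way chart. This sketch uses synthetic data and a fixed illustrative cutoff rather than the paper's data-driven SVD-p statistic:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy gene-by-timepoint expression matrices for two categories (synthetic data)
genes = 100
shared = rng.normal(size=(genes, 1)) @ rng.normal(size=(1, 8))  # conserved signal
X1 = shared + 0.1 * rng.normal(size=(genes, 8))
X2 = shared + 0.1 * rng.normal(size=(genes, 8))

# The leading left singular vector of each matrix captures its dominant
# co-expression pattern; pairing them gives the two-way chart that is split
# into conserved/divergent modules.
u1 = np.linalg.svd(X1, full_matrices=False)[0][:, 0]
u2 = np.linalg.svd(X2, full_matrices=False)[0][:, 0]
if (u1 @ u2) < 0:          # singular vectors are defined up to sign
    u1 = -u1

# Naive chart split at a fixed cutoff (the paper uses data-driven SVD-p cutoffs)
cut = 1.0 / np.sqrt(genes)
conserved = np.flatnonzero(
    (np.abs(u1) > cut) & (np.abs(u2) > cut) & (np.sign(u1) == np.sign(u2))
)
print(len(conserved))  # genes loading strongly, with the same sign, in both
```

Genes that load strongly in one category's singular vector but not the other would, in the same chart, fall into the divergent regions.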
The Behavioral Roots of Information Systems Security: Exploring Key Factors Related to Unethical IT Use
Unethical information technology (IT) use, related to activities such as hacking, software piracy, phishing, and spoofing, has become a major security concern for individuals, organizations, and society in terms of the threat to information systems (IS) security. While there is a growing body of work on this phenomenon, we notice several gaps, limitations, and inconsistencies in the literature. To further understand this complex phenomenon and reconcile past findings, we conduct an exploratory study to uncover the nomological network of constructs salient to unethical IT use and the nature of their interrelationships. Using a scenario-based study of young adult participants, and both linear and nonlinear analyses, we find that unethical IT use is a complex phenomenon, often characterized by nonlinear and idiosyncratic relationships between the constructs that capture it. Overall, the ethical beliefs held by individuals, along with economic, social, and technological considerations, are found to be relevant to this phenomenon. In terms of practical implications, these results suggest that multiple interventions at various levels may be required to combat this growing threat to IS security.
Lipschitz Continuity of Solutions of Linear Inequalities, Programs and Complementarity Problems