Sensitivity of optimum solutions to problem parameters
Derivation of the sensitivity equations that yield the sensitivity derivatives directly, avoiding the costly and inaccurate perturb-and-reoptimize approach, is discussed, and the solvability of the equations is examined. The equations apply to optimum solutions obtained by direct search methods as well as those generated by procedures of the sequential unconstrained minimization technique class. Applications of the sensitivity derivatives are discussed for extrapolating the optimal objective function and design variable values under incremented parameters, for optimization with multiple objectives, and for the decomposition of large optimization problems.
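A minimal numerical sketch of the idea, not the paper's formulation: for an unconstrained optimum x*(p) defined by the stationarity condition f_x(x*, p) = 0, differentiating that condition in p gives the sensitivity equation f_xx · dx*/dp + f_xp = 0, so the sensitivity derivative comes directly from quantities at the current optimum, with no re-optimization at the perturbed parameter. The objective function below is invented purely for illustration.

```python
def f(x, p):
    return (x - p**2)**2 + 0.1 * x   # toy objective: optimum at x* = p^2 - 0.05

def f_x(x, p, h=1e-6):
    # central-difference first derivative in x
    return (f(x + h, p) - f(x - h, p)) / (2 * h)

def f_xx(x, p, h=1e-4):
    # second derivative in x
    return (f(x + h, p) - 2 * f(x, p) + f(x - h, p)) / h**2

def f_xp(x, p, h=1e-4):
    # mixed second derivative in x and p
    return (f(x + h, p + h) - f(x + h, p - h)
            - f(x - h, p + h) + f(x - h, p - h)) / (4 * h * h)

def optimize(p, x0=0.0):
    # Newton's method on the stationarity condition f_x = 0
    x = x0
    for _ in range(50):
        x -= f_x(x, p) / f_xx(x, p)
    return x

p0, dp = 1.0, 0.1
x_star = optimize(p0)                          # optimum at the nominal parameter
dxdp = -f_xp(x_star, p0) / f_xx(x_star, p0)    # sensitivity equation, solved directly
x_extrapolated = x_star + dxdp * dp            # cheap first-order extrapolation
x_reoptimized = optimize(p0 + dp)              # costly re-optimization, for comparison
print(x_extrapolated, x_reoptimized)
```

For this quadratic-like objective the extrapolated optimum agrees with the re-optimized one to second order in the parameter increment, which is the point of the approach.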
Automated parameters for troubled-cell indicators using outlier detection
In Vuik and Ryan (2014) we studied the use of troubled-cell indicators for discontinuity detection in nonlinear hyperbolic partial differential equations and introduced a new multiwavelet technique to detect troubled cells. We found that these methods perform well as long as a suitable, problem-dependent parameter is chosen. This parameter is used in a threshold that decides whether or not an element is detected as a troubled cell. Until now, these parameters could not be chosen automatically. The choice of the parameter has an impact on the approximation: it determines the strictness of the troubled-cell indicator. An inappropriate choice of the parameter will result in the detection (and limiting) of too few or too many elements. The optimal parameter is chosen such that the minimal number of troubled cells is detected and the resulting approximation is free of spurious oscillations. In this paper we will see that for each troubled-cell indicator the sudden increase or decrease of the indicator value with respect to the neighboring values is what matters for detection. Indication thereby reduces to detecting the outliers of a vector (one dimension) or matrix (two dimensions). This is done using Tukey's boxplot approach to detect which coefficients in a vector are straying far beyond the others (Tukey, 1977). We provide an algorithm that can be applied to various troubled-cell indication variables. Using this technique, the problem-dependent parameter that the original indicator requires is no longer necessary, as the parameter is chosen automatically.
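A hedged sketch of the outlier rule, not the authors' exact algorithm: Tukey's boxplot approach flags entries outside [Q1 - 1.5·IQR, Q3 + 1.5·IQR] as outliers. Applied to a vector of per-cell indicator values, the outliers mark the troubled cells, with no user-tuned threshold; the indicator values below are invented.

```python
import statistics

def troubled_cells(indicator, whisker=1.5):
    # Tukey's fences: quartiles, interquartile range, then flag values
    # straying beyond the whiskers as outliers (troubled cells)
    q1, _, q3 = statistics.quantiles(indicator, n=4)
    iqr = q3 - q1
    lo, hi = q1 - whisker * iqr, q3 + whisker * iqr
    return [i for i, v in enumerate(indicator) if v < lo or v > hi]

values = [0.01, 0.02, 0.015, 0.018, 0.9, 0.012, 0.017]  # cell 4 spikes near a shock
print(troubled_cells(values))  # → [4]
```

A smooth region, where all indicator values are comparable, produces no outliers and hence no limiting, which is the desired behaviour.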
IL-17 can be protective or deleterious in murine pneumococcal pneumonia
Streptococcus pneumoniae is the major bacterial cause of community-acquired pneumonia and the leading agent of childhood pneumonia deaths worldwide. Nasal colonization is an essential step prior to infection. The cytokine IL-17 protects against such colonization, and vaccines that enhance IL-17 responses to pneumococcal colonization are being developed. The role of IL-17 in host defence against pneumonia is not known. To address this issue, we have utilized a murine model of pneumococcal pneumonia in which the gene for the IL-17 cytokine family receptor, Il17ra, has been inactivated. Using this model, we show that IL-17, produced predominantly by γδ T cells, protects mice against death from the invasive TIGR4 strain (serotype 4), which expresses a relatively thin capsule. However, in pneumonia produced by two heavily encapsulated strains with low invasive potential (serotypes 3 and 6B), IL-17 significantly enhanced mortality. Neutrophil uptake and killing of the serotype 3 strain were significantly impaired compared to the serotype 4 strain, and depletion of neutrophils with antibody enhanced survival of mice infected with the highly encapsulated SRL1 strain. These data strongly suggest that IL-17-mediated neutrophil recruitment to the lungs clears infection from the invasive TIGR4 strain, but that lung neutrophils exacerbate disease caused by the highly encapsulated pneumococcal strains. Thus, whilst augmenting IL-17 immune responses against pneumococci may decrease nasal colonization, it may worsen outcome during pneumonia caused by some strains.
Using Google Analytics, Voyant and Other Tools to Better Understand Use of Manuscript Collections at L. Tom Perry Special Collections
[Excerpt] Developing strategies for making data-driven, objective decisions for digitization and value-added processing based on patron usage has been an important effort in the L. Tom Perry Special Collections (hereafter Perry Special Collections). In a previous study, the authors looked at how creating a matrix using both Web analytics and in-house use statistics could provide a solid basis for making decisions about which collections to digitize as well as which collections merited deeper description. Along with providing this basis for decision making, the study also revealed some intriguing insights into how our collections were being used and raised some important questions about the impact of description on both digital and physical usage. We have continued analyzing the data from our first study, and that data forms the basis of the current study. It is helpful to review the major outcomes of our previous study before looking at what we have learned in this deeper analysis. In the first study, we utilized three sources of statistical data to compare two distinct data points (in-house use and online finding aid use) and determine if there were any patterns or other information that would help curators in the department make better decisions about the items or collections selected for digitization or value-added processing. To obtain our data points, we combined two data sources related to the in-person use of manuscript collections in the Perry Special Collections reading room and one related to the use of finding aids for manuscript collections made available online through the department’s Finding Aid database (http://findingaid.lib.byu.edu/). We mapped the resulting data points onto a four-quadrant graph (see figure 1).
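A hedged illustration of the quadrant mapping (collection names and counts invented, and the split point is a guess; the study's figure 1 may use different cut-offs): each collection gets an in-house use count and an online finding-aid use count, and splitting at the medians places it in one of four quadrants, e.g. high-online/low-in-house collections might be digitization candidates.

```python
import statistics

def quadrant(in_house, online, med_in, med_on):
    # classify a collection relative to the median of each usage measure
    ih = "high" if in_house >= med_in else "low"
    on = "high" if online >= med_on else "low"
    return f"{ih} in-house / {on} online"

# hypothetical (in-house reads, online finding-aid views) per collection
collections = {
    "MSS 001": (12, 340),
    "MSS 002": (45, 15),
    "MSS 003": (3, 8),
    "MSS 004": (60, 500),
}
med_in = statistics.median(c[0] for c in collections.values())
med_on = statistics.median(c[1] for c in collections.values())
for name, (ih, on) in collections.items():
    print(name, quadrant(ih, on, med_in, med_on))
```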
Encoding of low-quality DNA profiles as genotype probability matrices for improved profile comparisons, relatedness evaluation and database searches
Many DNA profiles recovered from crime scene samples are of a quality that does not allow them to be searched against, nor entered into, databases. We propose a method for the comparison of profiles arising from two DNA samples, one or both of which can have multiple donors and be affected by low DNA template or degraded DNA. We compute likelihood ratios to evaluate the hypothesis that the two samples have a common DNA donor, and hypotheses specifying the relatedness of two donors. Our method uses a probability distribution for the genotype of the donor of interest in each sample. This distribution can be obtained from a statistical model, or we can exploit the ability of trained human experts to assess genotype probabilities, thus extracting much information that would be discarded by standard interpretation rules. Our method is compatible with established methods in simple settings, but is more widely applicable and can make better use of information than many current methods for the analysis of mixed-source, low-template DNA profiles. It can accommodate uncertainty arising from relatedness instead of, or in addition to, uncertainty arising from noisy genotyping. We describe a computer program, GPMDNA, available under an open source license, to calculate LRs using the method presented in this paper.
Comment: 28 pages. Accepted for publication 2-Sep-2016 in Forensic Science International: Genetics
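A hedged sketch, not the GPMDNA implementation: one simple form of a single-locus likelihood ratio, assuming pA[g] and pB[g] are posterior probabilities for the donor genotype at a locus given each sample and f[g] are population genotype frequencies, is sum over g of pA[g]·pB[g]/f[g] for the common-donor hypothesis, with independent loci multiplying. All genotype labels and numbers below are toy values.

```python
def locus_lr(pA, pB, freq):
    # single-locus LR for "same donor" vs "unrelated donors", under the
    # assumptions stated above
    return sum(pA[g] * pB[g] / freq[g] for g in freq)

def profile_lr(loci):
    # multiply per-locus LRs, assuming independence across loci
    lr = 1.0
    for pA, pB, freq in loci:
        lr *= locus_lr(pA, pB, freq)
    return lr

freq = {"11": 0.25, "12": 0.50, "22": 0.25}  # toy genotype frequencies
pA = {"11": 0.80, "12": 0.15, "22": 0.05}    # low-template sample: uncertain call
pB = {"11": 0.70, "12": 0.20, "22": 0.10}
print(locus_lr(pA, pB, freq))  # > 1 favours a common donor
```

A genotype probability matrix that concentrates all mass on one genotype recovers the familiar single-source match statistic 1/f[g], which is the "compatible with established methods in simple settings" behaviour the abstract describes.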
A unified model of the electrical power network
Traditionally, the different infrastructure layers, technologies and management activities associated with the design, control and protection operation of electrical power systems have been supported by numerous independent models of the real-world network. As a result of increasing competition in this sector, however, the integration of technologies in the network and the coordination of complex management processes have become of vital importance for all electrical power companies.
The aim of the research outlined in this paper is to develop a single network model which will unify the generation, transmission and distribution infrastructure layers and the various alternative implementation technologies. This 'unified model' approach can support, for example, network fault, reliability and performance analysis. This paper introduces the basic network structures, describes an object-oriented modelling approach and outlines possible applications of the unified model.
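A hedged sketch of the object-oriented idea (class names and the failure-rate attribute are invented, not the paper's schema): one class hierarchy spans the generation, transmission and distribution layers, so a single analysis routine can traverse components from any layer through a common interface.

```python
class NetworkComponent:
    """Common base class for all layers of the unified network model."""
    def __init__(self, name, failure_rate):
        self.name = name
        self.failure_rate = failure_rate   # illustrative: faults per year
        self.connections = []

    def connect(self, other):
        self.connections.append(other)

# each infrastructure layer specialises the same base class
class Generator(NetworkComponent): pass
class TransmissionLine(NetworkComponent): pass
class DistributionFeeder(NetworkComponent): pass

def expected_faults(components):
    # one reliability analysis works across generation, transmission
    # and distribution alike, because the model is unified
    return sum(c.failure_rate for c in components)

gen = Generator("G1", 0.1)
line = TransmissionLine("T1", 0.5)
feeder = DistributionFeeder("D1", 1.2)
gen.connect(line)
line.connect(feeder)
print(expected_faults([gen, line, feeder]))
```

The design choice this illustrates is that fault, reliability and performance analyses need only the shared interface, not layer-specific model formats.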
Non-linear growth of short-wave instabilities in a Batchelor vortex pair
Recent investigations have identified a variety of instability modes which may develop to enhance dispersion of co- and counter-rotating vortex pairs. This has application in the aviation industry, where an aircraft’s trailing vortices pose a significant hazard for other nearby aircraft. Batchelor vortices adopt the radial velocity field of Lamb-Oseen vortices, but with an axial velocity component through the core of the vortex, and are often used to represent vortices within an aircraft wake. Recently, the vortex swirl ratio of the Batchelor vortex pair has been identified as a key parameter which may be used to select the mode of instability which develops. Several modes have recently been identified via linear stability analysis. This study extends these prior investigations by considering the non-linear growth of the three-dimensional instabilities acting to disperse the vortex pair. Here, we validate prior linear instability investigations, and compare and contrast the relative ability of several instability modes to achieve improved vortex dispersion. The study has been conducted using a high-order, three-dimensional spectral element method to solve the time-dependent incompressible Navier-Stokes equations. The study is conducted at a circulation Reynolds number of 2800.
Power system fault prediction using artificial neural networks
The medium-term goal of the research reported in this paper was the development of a major in-house suite of strategic computer-aided network simulation and decision support tools to improve the management of power systems. This paper describes a preliminary research investigation to assess the feasibility of using an Artificial Intelligence (AI) method to predict and detect faults at an early stage in power systems. To achieve this goal, an AI-based detector has been developed to monitor and predict faults at an early stage on particular sections of power systems. The detector only requires external measurements taken from the input and output nodes of the power system. The AI detection system is capable of rapidly predicting a malfunction within the system. Simulation will normally take place using an equivalent circuit representation. Artificial Neural Networks (ANNs) are used to construct a hierarchical feed-forward structure, which is the most important component in the fault detector. Simulation of a transmission line (2-port circuit) has already been carried out, and preliminary results using this system are promising. This approach provided satisfactory results with an accuracy of 95% or higher.
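A hedged toy illustration, not the authors' network or data: a minimal feed-forward classifier reads external measurements at the input and output nodes of a 2-port circuit and learns to flag faults. A single sigmoid unit trained on invented synthetic data stands in for the paper's hierarchical ANN structure.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, labels, lr=0.5, epochs=2000):
    # stochastic gradient descent on the logistic log-loss
    random.seed(0)
    w = [random.uniform(-0.5, 0.5) for _ in range(len(samples[0]) + 1)]
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
            g = sigmoid(z) - y            # gradient of log-loss w.r.t. z
            w[0] -= lr * g
            for i, xi in enumerate(x):
                w[i + 1] -= lr * g * xi
    return w

def predict(w, x):
    # True means "fault predicted"
    return sigmoid(w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))) > 0.5

# synthetic features: (input-node voltage, output-node voltage); a healthy
# 2-port passes most of the signal through, a faulted one shows a large drop
healthy = [(1.0, 0.95), (0.9, 0.87), (1.1, 1.02)]
faulty = [(1.0, 0.20), (0.9, 0.15), (1.1, 0.30)]
w = train(healthy + faulty, [0, 0, 0, 1, 1, 1])
print(predict(w, (1.0, 0.25)))  # a large output drop is flagged as a likely fault
```

The point mirrored here is that only external node measurements are needed; no internal state of the monitored section is observed.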
Optical characterization of LDEF contaminant film
Dark brown molecular film deposits were found at numerous locations on the Long Duration Exposure Facility (LDEF) and have been documented in great detail by several investigators. The exact deposition mechanism for these deposits is as yet unknown, although direct and scattered atomic oxygen, and solar radiation interacting with materials outgassing products have all been implicated in the formation process. Specimens of the brown molecular film were taken from below the flange of the experimental tray located at position D10 on the LDEF. The tray was one of two, comprising the same experiment, the other being located on the wake facing side of the LDEF satellite at position B4. Having access to both trays, we were able to directly compare the effect that orientation with respect to the atomic oxygen flux vector had on the formation of the brown molecular film deposits. The film is thickest on surfaces facing toward the exterior, i.e. the tray corner, as can be seen by comparing the lee and wake aspects of the rivets. The patterns appear to be aligned not with the velocity vector but with the corner of the tray suggesting that flux to the surface is due to scattered atomic oxygen rather than direct ram impingement. The role of scattered flux is further supported by more faint plume patterns on the sides of the tray. The angle of these plumes is strongly aligned with the ram direction but the outline of the deposit implies that incident atoms are scattered by collisions with the edges of the opening resulting in a directed, but diffuse, flux of atomic oxygen to the surface. Spectral reflectance measurements in the 2 to 10 micron (4000 to 1000 wavenumbers) spectral range are presented for the film in the 'as deposited' condition and for the free standing film. The material was analyzed by FTIR (Fourier Transform Infrared) microspectroscopy using gold as the reference standard. 
The 'as deposited' specimen was on an aluminum rivet taken from beneath the tray flange, while the free film was obtained by chipping some of the material from the rivet. The transmission spectrum over the 2 to 10 micron range for the free film is presented. This spectrum appears to be essentially the same as that presented by Crutcher et al. for films formed at vent sites which faced into the ram direction and suggested to originate from urethanes and silicones used on the LDEF. Banks et al. state that silicones, when exposed to atomic oxygen, release polymeric scission fragments which deposit on surfaces and form a glassy, dark contaminant layer upon further atomic oxygen exposure and solar irradiation.
Heavy and Light Quarks with Lattice Chiral Fermions
The feasibility of using lattice chiral fermions which are free of errors for both the heavy and light quarks is examined. The fact that the effective quark propagators in these fermions have the same form as that in the continuum, with the quark mass being only an additive parameter to a chirally symmetric antihermitian Dirac operator, is highlighted. This implies that there is no distinction between the heavy and light quarks and no mass-dependent tuning of the action or operators as long as the discretization error is negligible. Using the overlap fermion, we find that the (and ) errors in the dispersion relations of the pseudoscalar and vector mesons and the renormalization of the axial-vector current and scalar density are small. This suggests that the applicable range of may be extended to with only 5% error, which is a factor of larger than that of the improved Wilson action. We show that the generalized Gell-Mann-Oakes-Renner relation with unequal masses can be utilized to determine the finite errors in the renormalization of the matrix elements for the heavy-light decay constants and semileptonic decay constants of the B/D meson.
Comment: final version to appear in Int. J. Mod. Phys.