Functional Analytic Continuation Techniques with Applications in Field Theory
Often one has data at points inside the holomorphy domain of a Green's function, or of an amplitude or form factor, and wants to obtain information about the spectral function, i.e. the discontinuity along the cuts. The data may be experimental or theoretical. In QCD, for example, the perturbation expansion is valid only for unphysical values of the energy: one would like to continue this information to the cuts to find the resonance parameters. However, analytic continuation off open contours is extremely unstable. Moreover, straightforward continuation of the truncated perturbation expansion will not do, since the truncated series is itself analytic and continuation will thus yield exactly the same result. This problem is solved by functional techniques: first by allowing small imprecisions in the data, which removes the uniqueness of the continuation, and then by introducing a stabilizing condition suited to the particular physical problem, which suppresses functions with incorrect behaviour. The stabilizing condition is expressed in terms of a norm measuring the smoothness of the discrepancy function, which is the amplitude with the resonances removed. The minimal norm computed from the data depends on the trial values of the resonance parameters and enables one to select the best values for these. The corresponding optimal amplitude is also constructed. An explicit solution is obtained for the case of a discrete data set; in the continuous case the problem is expressed in terms of a Fredholm integral equation.
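The stabilizing idea above, allowing small data imprecision while penalizing a smoothness norm, can be illustrated with a simple Tikhonov-regularized fit. This is a schematic sketch only: the polynomial basis, the penalty on the coefficient norm, and the test function are illustrative assumptions, not the paper's actual functional construction.

```python
import numpy as np

def stabilized_fit(x, y, degree=8, lam=1e-4):
    """Least-squares polynomial fit with a Tikhonov (norm) penalty lam.

    The penalty term lam * I suppresses wildly oscillating solutions that
    would otherwise be amplified when evaluating the fit off the data region.
    """
    V = np.vander(x, degree + 1, increasing=True)   # design matrix
    A = V.T @ V + lam * np.eye(degree + 1)          # normal equations + penalty
    c = np.linalg.solve(A, V.T @ y)
    return c

rng = np.random.default_rng(0)
x = np.linspace(0.1, 0.9, 40)
# Noisy "data" sampled inside the holomorphy domain of a smooth test function.
y = 1.0 / (1.0 + x) + 0.01 * rng.normal(size=x.size)

c = stabilized_fit(x, y)
# Evaluate the stabilized fit outside the data region (the "continuation").
value = np.polyval(c[::-1], 1.5)
```

Without the penalty term, small perturbations of `y` can change the extrapolated `value` dramatically, which is the instability the functional techniques are designed to tame.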
Sparsest factor analysis for clustering variables: a matrix decomposition approach
We propose a new procedure for sparse factor analysis (FA) such that each variable loads only one common factor. Thus, the loading matrix has a single nonzero element in each row and zeros elsewhere. Such a loading matrix is the sparsest possible for a given number of variables and common factors. For this reason, the proposed method is named sparsest FA (SSFA). It may also be called FA-based variable clustering, since the variables loading the same common factor can be classified into a cluster. In SSFA, all model parts of FA (common factors, their correlations, loadings, unique factors, and unique variances) are treated as fixed unknown parameter matrices, and their least squares function is minimized through a specific data matrix decomposition. A useful feature of the algorithm is that the matrix of common factor scores is re-parameterized using QR decomposition in order to efficiently estimate factor correlations. A simulation study shows that the proposed procedure can exactly identify the true sparsest models. Real data examples demonstrate the usefulness of the variable clustering performed by SSFA.
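The "single nonzero element per row" structure means the sparsest loading matrix is fully determined by a cluster assignment (which factor each variable loads) plus the nonzero loading values. The short sketch below only illustrates that structure; it is not the authors' estimation algorithm, and all names and values are illustrative.

```python
import numpy as np

def sparsest_loadings(assignment, values, n_factors):
    """Build a p x m loading matrix with exactly one nonzero element per row.

    assignment[i] gives the single common factor that variable i loads;
    values[i] gives its (only) nonzero loading.
    """
    p = len(assignment)
    L = np.zeros((p, n_factors))
    L[np.arange(p), assignment] = values
    return L

# 5 variables, 2 common factors: variables 0-2 form one cluster (factor 0),
# variables 3-4 the other (factor 1).
L = sparsest_loadings([0, 0, 0, 1, 1], [0.9, 0.8, 0.7, 0.85, 0.6], 2)
row_nonzeros = (L != 0).sum(axis=1)   # one nonzero per row, by construction
```

Reading the cluster membership back off such a matrix is just `L.argmax(axis=1)` on the absolute values, which is why SSFA doubles as a variable-clustering method.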
A multiscale approach to environment and its influence on the colour distribution of galaxies
We present a multiscale approach to measurements of galaxy density, applied
to a volume-limited sample constructed from SDSS DR5. We populate a rich
parameter space by obtaining independent measurements of density on different
scales for each galaxy, avoiding the implicit assumptions involved, e.g., in
the construction of group catalogues. As the first application of this method,
we study how the bimodality in galaxy colour distribution (u-r) depends on
multiscale density. The u-r galaxy colour distribution is described as the sum
of two gaussians (red and blue) with five parameters: the fraction of red
galaxies (f_r) and the position and width of the red and blue peaks (mu_r,
mu_b, sigma_r and sigma_b). Galaxies mostly react to their smallest scale (<
0.5 Mpc) environments: in denser environments red galaxies are more common
(larger f_r), redder (larger mu_r) and with a narrower distribution (smaller
sigma_r), while blue galaxies are redder (larger mu_b) but with a broader
distribution (larger sigma_b). There are residual correlations of f_r and mu_b
with 0.5 - 1 Mpc scale density, which imply that total or partial truncation of
star formation can relate to a galaxy's environment on these scales. Beyond 1
Mpc (0.5 Mpc for mu_r) there are no positive correlations with density. However
f_r (mu_r) anti-correlates with density on >2 (1) Mpc scales at fixed density
on smaller scales. We examine these trends qualitatively in the context of the
halo model, utilizing the properties of haloes within which the galaxies are
embedded, derived by Yang et al. (2007) and applied to a group catalogue. This
yields an excellent description of the trends with multiscale density,
including the anti-correlations on large scales, which map the region of
accretion onto massive haloes. Thus we conclude that galaxies become red only
once they have been accreted onto haloes of a certain mass.
Comment: 22 pages, 14 figures. Accepted for publication in MNRAS
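The five-parameter double-Gaussian colour model described above (red fraction f_r, peaks mu_r/mu_b, widths sigma_r/sigma_b) can be sketched as a fit to a colour histogram. The data here are synthetic and the parameter values illustrative; only the functional form follows the abstract.

```python
import numpy as np
from scipy.optimize import curve_fit

def double_gaussian(x, f_r, mu_r, sigma_r, mu_b, sigma_b):
    """Sum of a red and a blue Gaussian, weighted by the red fraction f_r."""
    g = lambda x, m, s: np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))
    return f_r * g(x, mu_r, sigma_r) + (1 - f_r) * g(x, mu_b, sigma_b)

rng = np.random.default_rng(1)
# Simulated u-r colours: a 60% red population at 2.5, a 40% blue one at 1.4.
colours = np.concatenate([
    rng.normal(2.5, 0.2, 6000),
    rng.normal(1.4, 0.35, 4000),
])
hist, edges = np.histogram(colours, bins=60, range=(0.5, 3.5), density=True)
centres = 0.5 * (edges[:-1] + edges[1:])

# Initial guesses anchor the "red" component on the redder peak.
popt, _ = curve_fit(double_gaussian, centres, hist,
                    p0=[0.5, 2.4, 0.3, 1.5, 0.3])
f_r, mu_r, sigma_r, mu_b, sigma_b = popt
```

Repeating such a fit in bins of environmental density is what yields the trends of f_r, mu_r, etc. with multiscale density described in the abstract.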
Hard X-ray Variability of AGN
Aims: Active Galactic Nuclei are known to be variable throughout the
electromagnetic spectrum. An energy domain poorly studied in this respect is
the hard X-ray range above 20 keV.
Methods: The first 9 months of the Swift/BAT all-sky survey are used to study
the 14 - 195 keV variability of the 44 brightest AGN. The sources have been
selected due to their detection significance of >10 sigma. We tested the
variability using a maximum likelihood estimator and by analysing the structure
function.
Results: Probing different time scales, it appears that the absorbed AGN are
more variable than the unabsorbed ones. The same applies to the comparison of
Seyfert 2 and Seyfert 1 objects. As expected, the blazars show stronger
variability. 15% of the non-blazar AGN show variability of >20% relative to the
average flux on time scales of 20 days, and 30% show at least 10% flux
variation. All the non-blazar AGN that show strong variability are
low-luminosity objects with L(14-195 keV) < 1E44 erg/s.
Conclusions: Concerning the variability pattern, there is a tendency for
unabsorbed or type 1 galaxies to be less variable than the absorbed or type 2
objects at the hardest X-rays. A more solid anti-correlation is found between
variability and luminosity, which has previously been observed in soft X-rays,
in the UV, and in the optical domain.
Comment: 9 pages, 7 figures, accepted for publication in A&A
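The structure-function analysis mentioned in the Methods can be sketched as follows: the first-order structure function SF(tau) averages the squared flux differences over all pairs separated by lag tau. The light curve below is synthetic, and the even sampling is a simplifying assumption (real BAT light curves are unevenly sampled and would need lag binning).

```python
import numpy as np

def structure_function(flux, max_lag):
    """SF(tau) = mean[(flux(t+tau) - flux(t))^2] for tau = 1 .. max_lag.

    Assumes an evenly sampled light curve, so lags are index offsets.
    """
    sf = np.empty(max_lag)
    for tau in range(1, max_lag + 1):
        diffs = flux[tau:] - flux[:-tau]
        sf[tau - 1] = np.mean(diffs ** 2)
    return sf

rng = np.random.default_rng(2)
t = np.arange(200.0)
# A genuinely variable source: a slow modulation plus measurement noise.
variable = np.sin(2 * np.pi * t / 50.0) + 0.1 * rng.normal(size=t.size)
sf = structure_function(variable, 25)
```

For a variable source SF rises with lag, while pure white noise gives a flat structure function at twice the noise variance; that contrast is what makes the SF a useful variability diagnostic.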
Numerical Weather Prediction (NWP) and hybrid ARMA/ANN model to predict global radiation
We propose in this paper an original technique to predict global radiation
using a hybrid ARMA/ANN model and data issued from a numerical weather
prediction model (ALADIN). We particularly look at the Multi-Layer Perceptron.
After optimizing our architecture with ALADIN and endogenous data previously
made stationary and using an innovative pre-input layer selection method, we
combined it to an ARMA model from a rule based on the analysis of hourly data
series. This model has been used to forecast the hourly global radiation for
five places in Mediterranean area. Our technique outperforms classical models
for all the places. The nRMSE for our hybrid model ANN/ARMA is 14.9% compared
to 26.2% for the na\"ive persistence predictor. Note that in the stand alone
ANN case the nRMSE is 18.4%. Finally, in order to discuss the reliability of
the forecaster outputs, a complementary study concerning the confidence
interval of each prediction is proposedComment: Energy (2012)
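The nRMSE scores quoted above can be sketched as below. The normalisation convention (RMSE divided by the mean of the observations) is a common choice but an assumption here; the paper may normalise differently, and the radiation values are illustrative.

```python
import numpy as np

def nrmse(observed, predicted):
    """Normalised RMSE: RMSE divided by the mean observed value (assumed convention)."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    rmse = np.sqrt(np.mean((observed - predicted) ** 2))
    return rmse / observed.mean()

# Naive persistence predictor: the next hour's radiation equals the current one.
radiation = np.array([450.0, 500.0, 480.0, 520.0, 510.0, 470.0])  # W/m^2, illustrative
persistence = radiation[:-1]           # forecasts for steps 1 .. n-1
score = nrmse(radiation[1:], persistence)
```

Persistence is the usual baseline in solar forecasting precisely because it is hard to beat at short horizons; the abstract's 26.2% vs 14.9% comparison is against this predictor.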
A factor model to analyze heterogeneity in gene expression
Background: Microarray technology allows the simultaneous analysis of thousands of genes within a single experiment. Significance analyses of transcriptomic data often ignore the gene dependence structure. This leads to correlation among test statistics, which undermines strong control of the false discovery proportion. A recent method called FAMT captures the gene dependence in factors in order to improve high-dimensional multiple testing procedures. In the subsequent analyses aiming at a functional characterization of the differentially expressed genes, our study shows how these factors can be used both to identify the components of expression heterogeneity and to give more insight into the underlying biological processes.
Results: The use of factors to characterize simple patterns of heterogeneity is first demonstrated on illustrative gene expression data sets. An expression data set primarily generated to map QTL for fatness in chickens is then analyzed. In contrast to the analysis based on the raw data, factor-adjustment of the gene expressions reveals relevant functional information about a QTL region. Additionally, interpreting the independent factors in light of known information about both the experimental design and the genes shows that some factors may have different and complex origins.
Conclusions: As biological information and technological biases are identified in what was previously considered mere statistical noise, analyzing heterogeneity in gene expression yields a new point of view on transcriptomic data.
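The factor-adjustment idea can be sketched with a simplified stand-in for FAMT's model: estimate latent factors from the expression matrix and remove their contribution, so downstream tests see reduced dependence. FAMT itself is an R package with its own factor model; the PCA/SVD-based adjustment below is an illustrative assumption, not its exact procedure.

```python
import numpy as np

def factor_adjust(X, n_factors):
    """Remove the top n_factors principal components from X (genes x samples)."""
    Xc = X - X.mean(axis=1, keepdims=True)        # center each gene
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    # Reconstruction from the leading factors = estimated heterogeneity.
    heterogeneity = (U[:, :n_factors] * s[:n_factors]) @ Vt[:n_factors]
    return Xc - heterogeneity

rng = np.random.default_rng(3)
factor = rng.normal(size=20)                      # one latent factor over 20 samples
loadings = rng.normal(size=(100, 1))              # 100 genes load this factor
X = loadings @ factor[None, :] + 0.1 * rng.normal(size=(100, 20))
adjusted = factor_adjust(X, n_factors=1)          # dependence largely removed
```

Because the shared factor drives correlation among the gene-level test statistics, removing its reconstruction is what restores the behaviour that multiple-testing procedures assume.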
Cilia Proteins are Biomarkers of Altered Flow in the Vasculature
Cilia, microtubule-based organelles that project from the apical luminal surface of endothelial cells (ECs), are widely regarded as low-flow sensors. Previous reports suggest that upon high shear stress, cilia on the EC surface are lost, and more recent evidence suggests that deciliation—the physical removal of cilia from the cell surface—is a predominant mechanism for cilia loss in mammalian cells. Thus, we hypothesized that EC deciliation facilitated by changes in shear stress would manifest as an increased abundance of cilia-related proteins in circulation. To test this hypothesis, we performed shear stress experiments that mimicked flow conditions from low to high shear stress in human primary cells and a zebrafish model system. In the primary cells, we showed that upon shear stress induction, ciliary fragments were indeed observed in the effluent in vitro, and the effluents contained ciliary proteins normally expressed in both endothelial and epithelial cells. In zebrafish, upon shear stress induction, fewer cilia-expressing ECs were observed. To test the translational relevance of these findings, we investigated our hypothesis using blood samples from patients with sickle cell disease and found that plasma levels of ciliary proteins were elevated compared with healthy controls. Further, sickled red blood cells demonstrated high levels of a ciliary protein (ARL13b) on their surface after adhesion to brain ECs. Brain ECs, post-interaction with sickle RBCs, showed high reactive oxygen species (ROS) levels. Attenuating ROS levels in brain ECs decreased cilia protein levels on RBCs and rescued ciliary protein levels in brain ECs. Collectively, these data suggest that cilia and ciliary proteins in circulation are detectable under various altered-flow conditions and could serve as a surrogate biomarker of the damaged endothelium.
Has education lost sight of children?
The reflections presented in this chapter are informed by clinical and personal experiences of school education in the UK. There are many challenges for children and young people in the modern education system and for the professionals who support them. In the UK, there are significant gaps between the highly selective education provided to those who pay privately for it and that provided to the majority educated in the state-funded system. Though literacy rates have improved around the world, many children, particularly boys, do not finish their education for reasons such as boredom, behavioural difficulties, or because education does not 'pay'. Violence, bullying, and sexual harassment are issues faced by many children in schools, and there are disturbing trends of excluding children who present with behavioural problems whose origins are not explored. Excluded children are then educated with other children who may also have multiple problems, which often just makes the situation worse. The experience of clinicians suggests that school-related mental health problems are increasing in severity. Are mental health services dealing with the consequences of an education system that is not meeting children's needs? An education system that is testing- and performance-based may not be serving many children well if it is driving important decisions about them at increasingly younger ages. Labelling children and setting them on educational career paths can occur well before they reach secondary school, limiting potential very early in their developmental trajectory. Furthermore, the emphasis at school on testing may come at the expense of creativity and other forms of intelligence, which are also valuable and important. Meanwhile, the employment marketplace requires people with widely different skills, with an emphasis on innovation, creativity, and problem solving. Is education losing sight of the children it is educating?
Game Plan: What AI can do for Football, and What Football can do for AI
The rapid progress in artificial intelligence (AI) and machine learning has opened unprecedented
analytics possibilities in various team and individual sports, including baseball, basketball, and
tennis. More recently, AI techniques have been applied to football, due to a huge increase in
data collection by professional teams, increased computational power, and advances in machine
learning, with the goal of better addressing new scientific challenges involved in the analysis of
both individual players’ and coordinated teams’ behaviors. The research challenges associated
with predictive and prescriptive football analytics require new developments and progress at the
intersection of statistical learning, game theory, and computer vision. In this paper, we provide
an overarching perspective highlighting how the combination of these fields, in particular, forms a
unique microcosm for AI research, while offering mutual benefits for professional teams, spectators,
and broadcasters in the years to come. We illustrate that this duality makes football analytics
a game changer of tremendous value, in terms of not only changing the game of football itself,
but also in terms of what this domain can mean for the field of AI. We review the
state of the art and exemplify the types of analysis enabled by combining the aforementioned fields, including
illustrative examples of counterfactual analysis using predictive models, and the combination of
game-theoretic analysis of penalty kicks with statistical learning of player attributes. We conclude
by highlighting envisioned downstream impacts, including possibilities for extensions to other sports
(real and virtual).
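The game-theoretic analysis of penalty kicks mentioned above can be sketched as a 2x2 zero-sum game between kicker and keeper, solved in closed form for its mixed-strategy equilibrium. The scoring probabilities below are illustrative, not data from the paper.

```python
import numpy as np

# Payoff to the kicker = probability of scoring.
#                 keeper dives Left   keeper dives Right
P = np.array([[0.60,               0.95],    # kicker shoots Left
              [0.90,               0.55]])   # kicker shoots Right

# Closed-form mixed equilibrium of a 2x2 zero-sum game: each player mixes so
# that the opponent is indifferent between their two actions.
a, b = P[0]
c, d = P[1]
denom = a - b - c + d
p_left = (d - c) / denom           # kicker's probability of shooting Left
q_left = (d - b) / denom           # keeper's probability of diving Left
value = (a * d - b * c) / denom    # equilibrium scoring probability
```

With these numbers the kicker shoots Left half the time, the keeper dives Left 4/7 of the time, and the equilibrium scoring probability is 0.75. Fitting the payoff entries from tracking data, rather than assuming them, is where the statistical learning of player attributes comes in.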
Measurement of χc1 and χc2 production with √s = 7 TeV pp collisions at ATLAS
The prompt and non-prompt production cross-sections for the χc1 and χc2 charmonium states are measured in pp collisions at √s = 7 TeV with the ATLAS detector at the LHC, using 4.5 fb⁻¹ of integrated luminosity. The χc states are reconstructed through the radiative decay χc → J/ψ γ (with J/ψ → μ⁺μ⁻), where photons are reconstructed from γ → e⁺e⁻ conversions. The production rate of the χc2 state relative to the χc1 state is measured for prompt and non-prompt χc as a function of the J/ψ transverse momentum. The prompt χc cross-sections are combined with existing measurements of prompt J/ψ production to derive the fraction of prompt J/ψ produced in feed-down from χc decays. The fractions of χc1 and χc2 produced in b-hadron decays are also measured.