A general wavelet-based profile decomposition in the critical embedding of function spaces
We characterize the lack of compactness in the critical embedding of
function spaces $X \hookrightarrow Y$ having similar scaling properties in the
following terms: a sequence bounded in $X$ has a subsequence
that can be expressed as a finite sum of translations and dilations of
functions, such that the remainder converges to zero in $Y$ as
the number of functions in the sum and $n$ tend to infinity. Such a
decomposition was established by G\'erard for the embedding of the homogeneous
Sobolev space $\dot H^s$ into $L^p$ in $d$ dimensions with
$p = 2d/(d-2s)$, and then generalized by Jaffard to the case where $X$ is a Riesz
potential space, using wavelet expansions. In this paper, we revisit the
wavelet-based profile decomposition in order to treat a larger range of
examples of critical embeddings in a hopefully simplified way. In particular, we
identify two generic properties of the spaces $X$ and $Y$ that are of key use
in building the profile decomposition. These properties may then easily be
checked for typical choices of $X$ and $Y$ satisfying critical embedding
properties. These include Sobolev, Besov, Triebel-Lizorkin, Lorentz, H\"older
and BMO spaces.
Comment: 24 pages
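As a sketch of the kind of statement involved (the notation here is supplied for illustration, not taken from the abstract), G\'erard's decomposition in the model Sobolev case $\dot H^s(\mathbb{R}^d) \hookrightarrow L^p(\mathbb{R}^d)$, $p = 2d/(d-2s)$, reads:

```latex
% Profile decomposition, model Sobolev case (sketch): the exponent
% d/2 - s makes each rescaled profile isometric in \dot H^s.
u_n(x) = \sum_{j=1}^{l} \bigl(\lambda_n^{(j)}\bigr)^{\frac{d}{2}-s}\,
         \phi^{(j)}\!\Bigl(\lambda_n^{(j)}\bigl(x - x_n^{(j)}\bigr)\Bigr)
       + r_n^{(l)}(x),
\qquad
\limsup_{n\to\infty}\bigl\|r_n^{(l)}\bigr\|_{L^p}
  \xrightarrow[\;l\to\infty\;]{} 0 .
```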
Shearlets and Optimally Sparse Approximations
Multivariate functions are typically governed by anisotropic features such as
edges in images or shock fronts in solutions of transport-dominated equations.
One major goal, both for the purpose of compression and for efficient
analysis, is the provision of optimally sparse approximations of such functions.
Recently, cartoon-like images were introduced in 2D and 3D as a suitable model
class, and approximation properties were measured by considering the decay rate
of the error of the best $N$-term approximation. Shearlet systems are to
date the only representation systems that provide optimally sparse
approximations of this model class in both 2D and 3D. Moreover, in contrast
to all other directional representation systems, a theory for compactly
supported shearlet frames was derived, and these frames also satisfy this
optimality benchmark. This chapter shall serve as an introduction to and a
survey of sparse approximations of cartoon-like images by band-limited and
also compactly supported shearlet frames, as well as a reference for the
state of the art of this research field.
Comment: in "Shearlets: Multiscale Analysis for Multivariate Data", Birkh\"auser-Springer
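For orientation, the rates commonly quoted in the shearlet literature for 2D cartoon-like images (stated here as background, with constants and log factors as usually reported) are:

```latex
% Benchmark for best N-term approximation of 2D cartoon-like images f,
% and the rate achieved by shearlet frames (f_N = best N-term approximant):
\|f - f_N\|_{L^2}^2 \;\gtrsim\; N^{-2}
  \quad\text{(optimality benchmark)},
\qquad
\|f - f_N\|_{L^2}^2 \;\le\; C\, N^{-2} (\log N)^3
  \quad\text{(shearlet frames)} .
```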
Comparison of Stochastic Methods for the Variability Assessment of Technology Parameters
This paper presents and compares two alternative solutions for the simulation of cables and interconnects with the inclusion of the effects of parameter uncertainties, namely the Polynomial Chaos (PC) method and Response Surface Modeling (RSM). The problem formulation applies to the telegrapher's equations with stochastic coefficients. In PC, the solution requires an expansion of the unknown parameters in terms of orthogonal polynomials of random variables. In contrast, RSM is based on a least-squares polynomial fitting of the system response. The proposed methods offer accuracy and improved efficiency in computing the effects of parameter variability on system responses with respect to the conventional Monte Carlo approach. These approaches are validated through application to the stochastic analysis of a commercial multiconductor flat cable. This analysis allows us to highlight the respective advantages and disadvantages of the presented methods.
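As a toy illustration of the PC idea (not the paper's transmission-line model; the scalar exponential response used below is a hypothetical stand-in for a stochastic cable parameter), one can project a response $y = f(\xi)$, $\xi \sim N(0,1)$, onto probabilists' Hermite polynomials and read the mean and variance directly off the coefficients, then cross-check against Monte Carlo:

```python
import math

import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

# Toy model response driven by one standard normal parameter xi.
# (Hypothetical stand-in; not the paper's cable model.)
f = np.exp
deg = 8  # PC expansion order

# Gauss quadrature for the weight exp(-x^2/2); rescale the weights so
# weighted sums approximate expectations under the N(0,1) density.
nodes, weights = hermegauss(40)
weights = weights / np.sqrt(2.0 * np.pi)

# PC coefficients c_n = E[f(xi) He_n(xi)] / n!  (the He_n are
# orthogonal with E[He_n^2] = n!).
coeffs = []
for n in range(deg + 1):
    basis = np.zeros(n + 1)
    basis[n] = 1.0
    c_n = np.sum(weights * f(nodes) * hermeval(nodes, basis)) / math.factorial(n)
    coeffs.append(c_n)

# Mean and variance follow directly from the coefficients.
pc_mean = coeffs[0]
pc_var = sum(c**2 * math.factorial(n) for n, c in enumerate(coeffs) if n > 0)

# Monte Carlo reference: the same statistics by brute-force sampling.
rng = np.random.default_rng(0)
samples = f(rng.standard_normal(200_000))
mc_mean, mc_var = samples.mean(), samples.var()
```

Here the PC estimate needs only the 40 deterministic evaluations of `f` at the quadrature nodes, versus $2\times10^5$ Monte Carlo samples, which is the efficiency argument the abstract makes.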
Association Between Rotating Night Shift Work and Risk of Coronary Heart Disease Among Women
IMPORTANCE: Prospective studies linking shift work to coronary heart disease (CHD) have been inconsistent and limited by short follow-up. OBJECTIVE: To determine whether rotating night shift work is associated with CHD risk. DESIGN, SETTING, AND PARTICIPANTS: Prospective cohort study of 189,158 initially healthy women followed up over 24 years in the Nurses' Health Studies (NHS [1988-2012]: N = 73,623 and NHS2 [1989-2013]: N = 115,535). EXPOSURES: Lifetime history of rotating night shift work (≥3 night shifts per month in addition to day and evening shifts) at baseline (updated every 2 to 4 years in the NHS2). MAIN OUTCOMES AND MEASURES: Incident CHD, ie, nonfatal myocardial infarction, CHD death, angiogram-confirmed angina pectoris, coronary artery bypass graft surgery, stents, and angioplasty. RESULTS: During follow-up, 7303 incident CHD cases occurred in the NHS (mean age at baseline, 54.5 years) and 3519 in the NHS2 (mean age, 34.8 years). In multivariable-adjusted Cox proportional hazards models, increasing years of baseline rotating night shift work was associated with significantly higher CHD risk in both cohorts. In the NHS, the association between duration of shift work and CHD was stronger in the first half of follow-up than in the second half (P=.02 for interaction), suggesting waning risk after cessation of shift work. Longer time since quitting shift work was associated with decreased CHD risk among ever shift workers in the NHS2 (P<.001 for trend). [table: see text] CONCLUSIONS AND RELEVANCE: Among women who worked as registered nurses, longer duration of rotating night shift work was associated with a statistically significant but small absolute increase in CHD risk. Further research is needed to explore whether the association is related to specific work hours and individual characteristics.
Concentration analysis and cocompactness
Loss of compactness that occurs in many significant PDE settings can be
expressed in a well-structured form of profile decomposition for sequences.
Profile decompositions are formulated in relation to a triplet $(X, Y, G)$, where
$X$ and $Y$ are Banach spaces, $X \hookrightarrow Y$, and $G$ is, typically, a
set of surjective isometries on both $X$ and $Y$. A profile decomposition is a
representation of a bounded sequence in $X$ as a sum of elementary
concentrations of the form $g_k w$, $g_k \in G$, $w \in X$, and a remainder that
vanishes in $Y$. A necessary requirement for such a decomposition is, therefore, that any
sequence in $X$ that develops no $G$-concentrations has a subsequence
convergent in the norm of $Y$. An imbedding $X \hookrightarrow Y$ with this
property is called $G$-cocompact, a property weaker than, but related to,
compactness. We survey known cocompact imbeddings and their role in profile
decompositions.
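In symbols (a sketch; here $X \hookrightarrow Y$ denotes the imbedding of Banach spaces and $G$ the associated set of isometries), cocompactness can be stated as:

```latex
% G-cocompactness of X \hookrightarrow Y: G-weak vanishing in X
% implies norm vanishing in Y.
g_n u_n \rightharpoonup 0 \ \text{in } X
\quad \text{for every sequence } (g_n) \subset G
\;\Longrightarrow\;
u_n \to 0 \ \text{in } Y .
```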
The K2-ESPRINT Project. I. Discovery of the Disintegrating Rocky Planet K2-22b with a Cometary Head and Leading Tail
We present the discovery of a transiting exoplanet candidate in the K2
Field-1 with an orbital period of 9.1457 hr: K2-22b. The highly variable
transit depths, ranging from 0\% to 1.3\%, are suggestive of a planet
that is disintegrating via the emission of dusty effluents. We characterize the
host star as an M-dwarf with K. We have obtained
ground-based transit measurements with several 1-m class telescopes and with
the GTC. These observations (1) improve the transit ephemeris; (2) confirm the
variable nature of the transit depths; (3) indicate variations in the transit
shapes; and (4) demonstrate clearly that at least on one occasion the transit
depths were significantly wavelength dependent. The latter three effects tend
to indicate extinction of starlight by dust rather than by any combination of
solid bodies. The K2 observations yield a folded light curve with lower time
resolution but with substantially better statistical precision compared with
the ground-based observations. We detect a significant "bump" just after the
transit egress, and a less significant bump just prior to transit ingress. We
interpret these bumps in the context of a planet that is not only likely
streaming a dust tail behind it, but also has a more prominent leading dust
trail that precedes it. This effect is modeled in terms of dust grains that can
escape to beyond the planet's Hill sphere and effectively undergo `Roche lobe
overflow,' even though the planet's surface is likely underfilling its Roche
lobe by a factor of 2.
Comment: 22 pages, 16 figures. Final version accepted to ApJ
Approximation of integral operators using product-convolution expansions
We consider a class of linear integral operators with impulse responses varying regularly in time or space. These operators appear in a large number of applications ranging from signal/image processing to biology. Evaluating their action on functions is a computationally intensive problem necessary for many practical applications. We analyze a technique called product-convolution expansion: the operator is locally approximated by a convolution, allowing the design of fast numerical algorithms based on the fast Fourier transform. We design various types of expansions and provide their explicit approximation rates and complexity, depending on the smoothness of the time-varying impulse response. This analysis suggests novel wavelet-based implementations of the method with numerous assets, such as optimal approximation rates, low complexity and storage requirements, as well as adaptivity to the kernel's regularity. The proposed methods are an alternative to more standard procedures such as panel clustering, cross approximations, wavelet expansions, or hierarchical matrices.
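A minimal numerical sketch of the product-convolution idea (a hypothetical 1D Gaussian-blur operator with spatially varying width, not one of the paper's operators): the exact operator is approximated by a few fixed convolutions recombined through a partition of unity of windows.

```python
import numpy as np

def gaussian(t, sigma):
    """Sampled Gaussian kernel with continuous normalization."""
    return np.exp(-t**2 / (2.0 * sigma**2)) / (sigma * np.sqrt(2.0 * np.pi))

N, K, R = 128, 8, 12              # signal length, number of windows, kernel radius
x = np.arange(N)
sigma = 1.0 + 2.0 * x / (N - 1)   # blur width varies regularly in space

# Exact operator: row i convolves with a Gaussian of width sigma[i].
A = np.stack([gaussian(i - x, sigma[i]) for i in range(N)])

# Product-convolution expansion: hat-function windows w_k (a partition
# of unity) times convolutions with the kernel frozen at center c_k.
centers = np.linspace(0, N - 1, K)
t = np.arange(-R, R + 1)

def apply_pc(u):
    out = np.zeros(N)
    for k, c in enumerate(centers):
        w_k = np.interp(x, centers, np.eye(K)[k])   # hat window at c_k
        h_k = gaussian(t, np.interp(c, x, sigma))   # frozen local kernel
        out += w_k * np.convolve(u, h_k, mode="same")
    return out

u = np.sin(2 * np.pi * x / 32) + 0.5     # smooth test signal
rel_err = np.linalg.norm(apply_pc(u) - A @ u) / np.linalg.norm(A @ u)
print(rel_err)  # a handful of convolutions closely mimic the dense operator
```

Each of the K convolutions can be evaluated with FFTs, which is where the method's speed comes from; the windows only decide which frozen kernel is active where.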
Effective selection of informative SNPs and classification on the HapMap genotype data
Background: Since single nucleotide polymorphisms (SNPs) are genetic variations which determine the difference between any two unrelated individuals, SNPs can be used to identify the correct source population of an individual. For efficient population identification with the HapMap genotype data, as few informative SNPs as possible are required from the original 4 million SNPs. Recently, Park et al. (2006) adopted the nearest shrunken centroid method to classify the three populations, i.e., Utah residents with ancestry from Northern and Western Europe (CEU), Yoruba in Ibadan, Nigeria in West Africa (YRI), and Han Chinese in Beijing together with Japanese in Tokyo (CHB+JPT), from which 100,736 SNPs were obtained and the top 82 SNPs could completely classify the three populations.
Results: In this paper, we propose to first rank each feature (SNP) using a ranking measure, i.e., a modified t-test or the F-statistic. Then, from the ranking list, we form different feature subsets by sequentially choosing different numbers of features (e.g., 1, 2, 3, ..., 100) with top ranking values, and train and test them with a classifier, e.g., the support vector machine (SVM), thereby finding the subset with the highest classification accuracy. Compared to the classification method of Park et al., we obtain a better result, i.e., good classification of the 3 populations using on average 64 SNPs.
Conclusion: Experimental results show that both the modified t-test and the F-statistic method are very effective in ranking SNPs by their classification capability. Combined with the SVM classifier, a desirable feature subset (with minimum size and maximum informativeness) can be quickly found in a greedy manner after ranking all SNPs. Our method is able to identify a very small number of important SNPs that can determine the populations of individuals.
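The rank-then-classify pipeline can be sketched on synthetic data (all data and names below are hypothetical stand-ins for the HapMap SNPs, and a nearest-centroid classifier stands in for the SVM): score each feature with a one-way ANOVA F-statistic, keep the top-ranked subset, and measure classification accuracy on it.

```python
import numpy as np

rng = np.random.default_rng(1)
n_per, n_feat, n_inf = 60, 50, 5   # samples/class, features, informative ones

# Synthetic stand-in for genotype data: 3 "populations", and only the
# first n_inf features carry population-specific signal.
X = rng.normal(size=(3 * n_per, n_feat))
y = np.repeat(np.arange(3), n_per)
for c in range(3):
    X[y == c, :n_inf] += 2.0 * c

def f_statistic(X, y):
    """One-way ANOVA F score per feature (between- vs within-class variance)."""
    classes = np.unique(y)
    grand = X.mean(axis=0)
    ssb = sum((y == c).sum() * (X[y == c].mean(axis=0) - grand) ** 2
              for c in classes)
    ssw = sum(((X[y == c] - X[y == c].mean(axis=0)) ** 2).sum(axis=0)
              for c in classes)
    return (ssb / (len(classes) - 1)) / (ssw / (len(y) - len(classes)))

# Rank features by F score and keep the top-scoring subset.
top = np.argsort(f_statistic(X, y))[::-1][:n_inf]

# Nearest-centroid classifier on the selected features (a simple
# stand-in for the SVM used in the paper).
centroids = np.stack([X[y == c][:, top].mean(axis=0) for c in range(3)])
dists = ((X[:, top][:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
accuracy = (dists.argmin(axis=1) == y).mean()
print(sorted(top.tolist()), accuracy)  # recovers the informative features
```

In the paper's greedy procedure this selection step would be repeated for subset sizes 1, 2, 3, ..., keeping the size with the best cross-validated accuracy.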