
    Study of the q^2-Dependence of B --> pi ell nu and B --> rho(omega)ell nu Decay and Extraction of |V_ub|

    We report on determinations of |Vub| resulting from studies of the branching fraction and q^2 distributions in exclusive semileptonic B decays that proceed via the b->u transition. Our data set consists of the 9.7x10^6 BBbar meson pairs collected at the Y(4S) resonance with the CLEO II detector. We measure B(B0 -> pi- l+ nu) = (1.33 +- 0.18 +- 0.11 +- 0.01 +- 0.07)x10^{-4} and B(B0 -> rho- l+ nu) = (2.17 +- 0.34 +0.47/-0.54 +- 0.41 +- 0.01)x10^{-4}, where the errors are statistical, experimental systematic, systematic due to residual form-factor uncertainties in the signal, and systematic due to residual form-factor uncertainties in the cross-feed modes, respectively. We also find B(B+ -> eta l+ nu) = (0.84 +- 0.31 +- 0.16 +- 0.09)x10^{-4}, consistent with what is expected from the B -> pi l nu mode and quark model symmetries. We extract |Vub| using Light-Cone Sum Rules (LCSR) for 0 <= q^2 < 16 GeV^2 and Lattice QCD (LQCD) for 16 GeV^2 <= q^2 < q^2_max. Combining both intervals yields |Vub| = (3.24 +- 0.22 +- 0.13 +0.55/-0.39 +- 0.09)x10^{-3} for pi l nu, and |Vub| = (3.00 +- 0.21 +0.29/-0.35 +0.49/-0.38 +- 0.28)x10^{-3} for rho l nu, where the errors are statistical, experimental systematic, theoretical, and signal form-factor shape, respectively. Our combined value from both decay modes is |Vub| = (3.17 +- 0.17 +0.16/-0.17 +0.53/-0.39 +- 0.03)x10^{-3}. Comment: 45 pages postscript, also available through http://w4.lns.cornell.edu/public/CLNS, submitted to PR

    Galaxy density profiles and shapes -- II. selection biases in strong lensing surveys

    [Abridged] Many current and future astronomical surveys will rely on samples of strong gravitational lens systems to draw conclusions about galaxy mass distributions. We use a new strong lensing pipeline (presented in Paper I of this series) to explore selection biases that may cause the population of strong lensing systems to differ from the general galaxy population. Our focus is on point-source lensing by early-type galaxies with two mass components (stellar and dark matter) that have a variety of density profiles and shapes motivated by observational and theoretical studies of galaxy properties. We seek not only to quantify but also to understand the physics behind selection biases related to: galaxy mass, orientation and shape; dark matter profile parameters such as inner slope and concentration; and adiabatic contraction. We study how all of these properties affect the lensing Einstein radius, total cross-section, quad/double ratio, and image separation distribution. We find significant (factors of several) selection biases with mass; orientation, for a given galaxy shape at fixed mass; cusped dark matter profile inner slope and concentration; concentration of the stellar and dark matter deprojected Sersic models. Interestingly, the intrinsic shape of a galaxy does not strongly influence its lensing cross-section when we average over viewing angles. Our results are an important first step towards understanding how strong lens systems relate to the general galaxy population. Comment: 26 pages, 15 figures; paper I at arXiv:0808.2493; accepted for publication in MNRAS (minor revisions); PDF file with full resolution figures at http://www.sns.ias.edu/~rmandelb/paper2.pd

    Turbulence for (and by) amateurs

    A series of lectures on statistical turbulence, written for amateurs rather than experts. Elementary aspects and problems of turbulence in the two- and three-dimensional Navier-Stokes equations are introduced. A few properties of scalar turbulence and transport phenomena in turbulent flows are described. Kraichnan's model of passive advection is discussed in somewhat more detail. {Part 1: Approaching turbulent flows.} Navier-Stokes equation. Cascades and Kolmogorov theory. Modeling statistical turbulence. Correlation functions and scaling. {Part 2: Deeper in turbulent flows.} Turbulence in two dimensions. Dissipation and dissipative anomalies. Fokker-Planck equations. Multifractal models. {Part 3: Scalar turbulence.} Transport and Lagrangian trajectories. Kraichnan's passive scalar model. Anomalous scalings and universality. {Part 4: Lagrangian trajectories.} Richardson's law. Lagrangian flows in Kraichnan's model. Slow modes. Breakdown of Lagrangian flows. Batchelor limit. Generalized Lagrangian flows and trajectory bundles. Comment: 37 pages, 6 figures, lecture note
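    The Kolmogorov theory mentioned in Part 1 predicts an inertial-range energy spectrum E(k) = C eps^{2/3} k^{-5/3}. A minimal numerical sketch of that scaling (the constant C and dissipation rate eps below are illustrative values, not taken from the lectures):

```python
import numpy as np

# Kolmogorov 1941 inertial-range prediction (illustrative values only):
# energy spectrum E(k) = C * eps**(2/3) * k**(-5/3), with C of order one.
C, eps = 1.5, 0.1           # assumed Kolmogorov constant and dissipation rate
k = np.logspace(0, 3, 200)  # wavenumbers spanning an idealized inertial range

E = C * eps**(2/3) * k**(-5/3)

# Recover the scaling exponent from a log-log fit; it should be -5/3.
slope = np.polyfit(np.log(k), np.log(E), 1)[0]
print(round(slope, 3))  # -1.667
```

The same exponent appears in the second-order structure function S2(r) ~ (eps*r)^{2/3}, which the correlation-function part of the lectures develops.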

    A search for p-modes and other variability in the binary system 85 Pegasi using MOST photometry

    Context: Asteroseismology has great potential for the study of metal-poor stars due to its sensitivity to determine stellar ages. Aims: Our goal was to detect p-mode oscillations in the metal-poor sub-dwarf 85 Peg A and to search for other variability on longer timescales. Methods: We have obtained continuous high-precision photometry of the binary system 85 Pegasi with the MOST space telescope in two seasons (2005 & 2007). Furthermore, we redetermined vsini for 85 Peg A using high resolution spectra obtained through the ESO archive, and used photometric spot modeling to interpret long periodic variations. Results: Our frequency analysis yields no convincing evidence for p-modes significantly above a noise level of 4 ppm. Using simulated p-mode patterns we provide upper RMS amplitude limits for 85 Peg A. The light curve shows evidence for variability with a period of about 11 d and this periodicity is also seen in the follow-up run in 2007; however, as different methods to remove instrumental trends in the 2005 run yield vastly different results, the exact shape and periodicity of the 2005 variability remain uncertain. Our re-determined vsini value for 85 Peg A is comparable to previous studies and we provide realistic uncertainties for this parameter. Using these values in combination with simple photometric spot models we are able to reconstruct the observed variations. Conclusions: The null-detection of p-modes in 85 Peg A is consistent with theoretical values for pulsation amplitudes in this star. The detected long-periodic variation must await confirmation by further observations with similar or better precision and long-term stability. If the 11 d periodicity is real, rotational modulation of surface features on one of the components is the most likely explanation. Comment: 11 pages, 9 figures, accepted for publication in A&

    Multiscale 3D Shape Analysis using Spherical Wavelets

    ©2005 Springer. The original publication is available at www.springerlink.com: http://dx.doi.org/10.1007/11566489_57. Shape priors attempt to represent biological variations within a population. When variations are global, Principal Component Analysis (PCA) can be used to learn major modes of variation, even from a limited training set. However, when significant local variations exist, PCA typically cannot represent such variations from a small training set. To address this issue, we present a novel algorithm that learns shape variations from data at multiple scales and locations using spherical wavelets and spectral graph partitioning. Our results show that when the training set is small, our algorithm significantly improves the approximation of shapes in a testing set over PCA, which tends to oversmooth data.
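    The PCA baseline the abstract compares against can be sketched on toy data. Everything below (sample counts, landmark dimension, noise level) is hypothetical and only illustrates how global shape modes are learned from a small training set via the SVD:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "shapes": 10 training samples, each a flattened vector of 50 landmark
# coordinates, generated from 2 global modes plus small noise (hypothetical data).
n_samples, n_points, n_modes = 10, 50, 2
basis = rng.standard_normal((n_modes, n_points))
coeffs = rng.standard_normal((n_samples, n_modes))
shapes = coeffs @ basis + 0.01 * rng.standard_normal((n_samples, n_points))

# PCA via SVD of the mean-centred data: rows of Vt are the shape modes.
mean = shapes.mean(axis=0)
U, s, Vt = np.linalg.svd(shapes - mean, full_matrices=False)
variance = s**2 / (n_samples - 1)
explained = variance / variance.sum()

# With 2 true global modes, the first two components capture nearly all variance.
print(round(explained[:2].sum(), 3))
```

This is exactly the regime where PCA works; the paper's point is that localized variation (not simulated here) would need many more samples for PCA, whereas the spherical-wavelet decomposition captures it scale by scale.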

    Assessing the Value of Travel Time Savings – A Feasibility Study on Humberside.

    It is expected that the opening of the Humber Bridge will cause major changes to travel patterns around Humberside; given the level of tolls as currently stated, many travellers will face decisions involving a trade-off between travel time, money outlay on tolls or fares and money outlay on private vehicle running costs; this either in the context of destination choice, mode choice or route choice. This report sets out the conclusions of a preliminary study of the feasibility of inferring values of travel time savings from observations made on the outcomes of these decisions. Methods based on aggregate data of destination choice are found to be inefficient; a disaggregate mode choice study is recommended, subject to caveats on sample size.
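    The disaggregate mode-choice approach the report recommends is commonly formalized as a logit model, in which the value of travel time savings falls out as the ratio of the time and cost coefficients. A minimal sketch with entirely assumed coefficients (nothing here is taken from the report):

```python
import numpy as np

# Binary logit between a tolled, faster option and a slower, cheaper one.
# Utility: U = b_time * time_minutes + b_cost * cost_pounds.
# Both coefficients are hypothetical illustration values.
b_time, b_cost = -0.04, -0.8

def p_car(time_car, cost_car, time_bus, cost_bus):
    """Probability of choosing the car under a binary logit model."""
    v_car = b_time * time_car + b_cost * cost_car
    v_bus = b_time * time_bus + b_cost * cost_bus
    return 1.0 / (1.0 + np.exp(v_bus - v_car))

# Implied value of travel time savings, in pounds per hour:
vot = b_time / b_cost * 60
print(round(vot, 2))  # 3.0
```

In practice the coefficients would be estimated from observed (or stated) choices over tolls, fares and journey times, which is the inference problem the feasibility study assesses.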

    Disaggregated Approaches to Freight Analysis: A Feasibility Study.

    Forecasting the demand for freight transport is notoriously difficult. Although ever more advanced modelling techniques are becoming available, there is little data available for calibration. Compared to passenger travel, there are many fewer decision makers in freight, especially for the main bulk commodities, so the decisions of a relatively small number of principal players greatly influence the outcome. Moreover, freight comes in various shapes, sizes and physical states, which require different handling methods and suit the various modes (and sub-modes) of transport differently. In the face of these difficulties, present DTp practice is to forecast Britain's freight traffic using a very simple aggregate approach which assumes that tonne kilometres will rise in proportion to GDP. Although this simple model fits historical data quite well, there is a clear danger that this relationship will not hold good in the future. The relationship between tonne kilometres and GDP depends on the mix of products produced, their value to weight ratios, number of times lifted and lengths of haul. In the past, a declining ratio of tonnes to GDP has been offset by increasing lengths of haul. This has come about through a complicated set of changes in product mix, industrial structure and distribution systems. A more disaggregate approach which studies changes in all these factors by industrial sector seems likely to provide a better understanding of the relationship between tonne kilometres and GDP. However, there are also problems with disaggregation. As we disaggregate we get more understanding of what might change in the future, but are less able to project trends forward. This can be seen if we consider the future amounts of coal movements. Theoretically there is clearly scope for better forecasting by allowing for past trends to be overturned by a movement towards gas powered electricity generation and more imports of coal direct to coastal power stations. 
However, making such a sectoral forecast is extremely difficult, and inaccuracy here may more than offset the theoretical gain referred to earlier. This is because it is usually easier to forecast an aggregate to a given percentage accuracy than to forecast its components. For example, the percentage error on sales forecasts of Hotpoint washing machines will be greater than that for the sales of all washing machines taken together. This occurs because different makes of washing machines are substitutes for each other, so forecasts for Hotpoint washing machines must take into account uncertainty over Hotpoint's market share as well as uncertainty over the future total sales of washing machines. Nevertheless, a disaggregate investigation of the market could spot trends which were `buried' in the aggregate figures. For example, rapidly declining sales for one manufacturer might indicate their leaving the market, which, by weakening competition, would then push prices up and so reduce total future sales. We have assumed above that the use of the term disaggregate in the brief refers to disaggregation by industrial sector. An alternative usage of the word disaggregate in this context refers to modelling at the level of the individual decision-making unit. Disaggregate freight modelling in this sense would involve analysing decisions in order to determine the utility weight attached to different attributes of available transport options. Because data on suitable decisions are not readily available in this country, due to commercial confidentiality, we have recently undertaken research in which we have presented decision makers with hypothetical choices, and obtained the necessary utility weights from their responses. Whilst initial scepticism is understandable, this method has produced results acceptable for use in major projects. 
ITS itself has provided algorithms (known as Leeds Adaptive Stated Preference) which have been used to derive utility weights for use by British Rail in forecasting cross-channel freight, by DTp in evaluating the reaction of commercial vehicles to toll roads, and by the Dutch Ministry of Transport in modelling freight in the Netherlands. In the light of the above, the following objectives were set for the feasibility study: (1) to determine whether a forecasting approach disaggregated by industrial sectors, as under the first definition above, can be used to explain recent trends in freight transport; (2) to test the feasibility of the disaggregated approach for improving the understanding of likely future developments in freight markets, this being informed by current best understanding of the disaggregate decision-making process as under the second definition above.
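    The washing-machine argument, that an aggregate is easier to forecast in percentage terms than its substitutable components, can be checked with a quick Monte Carlo simulation. All figures below are invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Invented scenario: total market sales vary a little, but the market share
# of one brand among substitutes varies a lot, so the single brand is harder
# to forecast in percentage terms than the total.
n = 100_000
total = rng.normal(1000, 50, n)              # total market: ~5% relative spread
share = rng.normal(0.4, 0.1, n).clip(0, 1)   # one brand's share: very uncertain
brand_a = total * share

def pct_error(x):
    """Relative error of forecasting the mean: std / mean."""
    return x.std() / x.mean()

print(pct_error(total) < pct_error(brand_a))  # True
```

The component inherits both the total-market uncertainty and the share uncertainty, so its relative spread is larger, which is the report's case for caution when disaggregating.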