
    Coupling of shells in a carbon nanotube quantum dot

    We systematically study the coupling of longitudinal modes (shells) in a carbon nanotube quantum dot. Inelastic cotunneling spectroscopy is used to probe the excitation spectrum in parallel, perpendicular, and rotating magnetic fields. The data are compared to a theoretical model including coupling between shells, induced by atomically sharp disorder in the nanotube. The calculated excitation spectra show good correspondence with the experimental data. Comment: 8 pages, 4 figures

    Healing the Relevance Vector Machine through Augmentation

    The Relevance Vector Machine (RVM) is a sparse approximate Bayesian kernel method. It provides full predictive distributions for test cases. However, the predictive uncertainties have the unintuitive property that they get smaller the further you move away from the training cases. We give a thorough analysis. Inspired by the analogy to non-degenerate Gaussian processes, we suggest augmentation to solve the problem. The purpose of the resulting model, RVM*, is primarily to corroborate the theoretical and experimental analysis. Although RVM* could be used in practical applications, it is no longer a truly sparse model. Experiments show that sparsity comes at the expense of worse predictive distributions.
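The pathology the abstract describes can be reproduced in a few lines: in a degenerate model with finitely many localized basis functions, the predictive variance $\sigma^2 + \phi(x)^\top \Sigma \phi(x)$ collapses to the noise level far from the basis centers (the centers, noise level, and covariance below are made-up illustrations, not the paper's setup):

```python
import numpy as np

# Sketch (not the paper's implementation): a degenerate linear model with
# Gaussian (RBF) basis functions centered on a few hypothetical "relevance
# vectors". Far from the centers every basis function vanishes, so the
# predictive variance collapses to the noise level -- the counterintuitive
# behavior the paper analyzes.
centers = np.array([-1.0, 0.0, 1.0])   # hypothetical relevance vectors
noise_var = 0.1                        # assumed noise variance
Sigma = np.eye(len(centers))           # assumed posterior weight covariance

def phi(x):
    return np.exp(-0.5 * (x - centers) ** 2)  # RBF features

def pred_var(x):
    f = phi(x)
    return noise_var + f @ Sigma @ f          # sigma^2 + phi^T Sigma phi

# Variance is larger near the training region than far from it.
print(pred_var(0.0), pred_var(10.0))
```

Far from the data, `phi(x)` is essentially zero, so `pred_var` returns only the noise variance; a non-degenerate Gaussian process would instead revert to its (larger) prior variance.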

    Parallel Batch-Dynamic Graph Connectivity

    In this paper, we study batch-parallel algorithms for the dynamic connectivity problem, a fundamental problem that has received considerable attention in the sequential setting. The best-known sequential algorithm for dynamic connectivity is the elegant level-set algorithm of Holm, de Lichtenberg and Thorup (HDT), which achieves $O(\log^2 n)$ amortized time per edge insertion or deletion, and $O(\log n / \log\log n)$ time per query. We design a parallel batch-dynamic connectivity algorithm that is work-efficient with respect to the HDT algorithm for small batch sizes, and is asymptotically faster when the average batch size is sufficiently large. Given a sequence of batched updates, where $\Delta$ is the average batch size of all deletions, our algorithm achieves $O(\log n \log(1 + n/\Delta))$ expected amortized work per edge insertion and deletion and $O(\log^3 n)$ depth w.h.p. Our algorithm answers a batch of $k$ connectivity queries in $O(k \log(1 + n/k))$ expected work and $O(\log n)$ depth w.h.p. To the best of our knowledge, our algorithm is the first parallel batch-dynamic algorithm for connectivity. Comment: This is the full version of the paper appearing in the ACM Symposium on Parallelism in Algorithms and Architectures (SPAA), 201
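The batched query interface can be illustrated with a toy sequential sketch (a plain union-find handling insert-only batches; this is emphatically not the HDT level structure or the paper's parallel algorithm, which also supports deletions):

```python
# Toy illustration of batched connectivity queries using a sequential
# union-find. It only shows the interface (batch of insertions, batch of
# queries); the paper's algorithm additionally handles edge deletions and
# runs the batches in parallel with low depth.
class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb

uf = UnionFind(6)
for a, b in [(0, 1), (1, 2), (3, 4)]:   # one batch of edge insertions
    uf.union(a, b)

queries = [(0, 2), (0, 3), (4, 3)]       # one batch of k = 3 queries
print([uf.find(a) == uf.find(b) for a, b in queries])  # [True, False, True]
```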

    Efficient Bayesian hierarchical functional data analysis with basis function approximations using Gaussian-Wishart processes

    Functional data are defined as realizations of random functions (mostly smooth functions) varying over a continuum, which are usually collected with measurement errors on discretized grids. In order to accurately smooth noisy functional observations and deal with the issue of high-dimensional observation grids, we propose a novel Bayesian method based on a Bayesian hierarchical model with a Gaussian-Wishart process prior and basis function representations. We first derive an induced model for the basis-function coefficients of the functional data, and then use this model to conduct posterior inference through Markov chain Monte Carlo. Compared to standard Bayesian inference, which suffers from a serious computational burden and instability when analyzing high-dimensional functional data, our method greatly improves computational scalability and stability, while inheriting the advantage of simultaneously smoothing raw observations and estimating the mean-covariance functions in a nonparametric way. In addition, our method can naturally handle functional data observed on random or uncommon grids. Simulation and real-data studies demonstrate that our method produces results similar to those of standard Bayesian inference with low-dimensional common grids, while efficiently smoothing and estimating functional data with random and high-dimensional observation grids where standard Bayesian inference fails. In conclusion, our method can efficiently smooth and estimate high-dimensional functional data, providing one way to resolve the curse of dimensionality for Bayesian functional data analysis with Gaussian-Wishart processes. Comment: Under review
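The basis-representation step can be sketched in a few lines (a hypothetical Fourier basis and ordinary least squares stand in for the paper's unspecified basis and its Gaussian-Wishart posterior; only the dimension-reduction idea is illustrated):

```python
import numpy as np

# Sketch of the dimension reduction assumed by such methods: represent a
# noisy curve observed on a dense grid by coefficients in a small basis,
# then smooth by reconstructing from those coefficients. The full method
# would place a Gaussian-Wishart process prior on the coefficients and
# sample them by MCMC; here we just project by least squares.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 500)                   # high-dimensional observation grid
truth = np.sin(2 * np.pi * t)                # underlying smooth function
y = truth + 0.3 * rng.standard_normal(t.size)  # noisy functional observation

K = 5                                        # small number of frequencies
B = np.column_stack(
    [np.ones_like(t)]
    + [f(2 * np.pi * (k + 1) * t) for k in range(K) for f in (np.sin, np.cos)]
)                                            # 500 x 11 basis matrix

coef, *_ = np.linalg.lstsq(B, y, rcond=None)  # 11 basis-function coefficients
smooth = B @ coef                             # reconstructed (smoothed) curve

print(B.shape, np.mean((smooth - truth) ** 2))
```

The 500-point curve is summarized by 11 coefficients, and the reconstruction is much closer to the underlying function than the raw noisy observations are.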

    A comprehensive numerical study of aerosol-cloud-precipitation interactions in marine stratocumulus

    Three-dimensional large-eddy simulations (LES) with detailed bin-resolved microphysics are performed to explore the diurnal variation of marine stratocumulus (MSc) clouds under clean and polluted conditions. The sensitivity of the aerosol-cloud-precipitation interactions to variations in sea surface temperature (SST), free-tropospheric humidity, large-scale divergence rate, and wind speed is assessed. The comprehensive set of simulations corroborates previous studies in that (1) with moderate/heavy drizzle, an increase in aerosol leads to an increase in cloud thickness; and (2) with non/light drizzle, an increase in aerosol results in a thinner cloud, due to the pronounced effect on entrainment. It is shown that for higher SST, stronger large-scale divergence, a drier free troposphere, or lower wind speed, the cloud thins and precipitation decreases. The sign and magnitude of the Twomey effect, droplet dispersion effect, cloud thickness effect, and cloud optical depth susceptibility to aerosol perturbations (i.e., the change in cloud optical depth with a change in aerosol number concentration) are evaluated by LES experiments and compared with analytical formulations. The Twomey effect emerges as dominant in the total cloud optical depth susceptibility to aerosol perturbations. The dispersion effect, that of aerosol perturbations on the cloud droplet size spectrum, is positive (i.e., an increase in aerosol leads to spectral narrowing) and accounts for 3% to 10% of the total cloud optical depth susceptibility at nighttime, with greater influence in heavier drizzling clouds. The cloud thickness effect is negative (i.e., an increase in aerosol leads to a thinner cloud) for non/light drizzling clouds and positive for moderate/heavy drizzling clouds; the cloud thickness effect contributes 5% to 22% of the nighttime total cloud susceptibility.
    Overall, the total cloud optical depth susceptibility ranges from ~0.28 to 0.53 at night; an increase in aerosol concentration enhances cloud optical depth, especially with heavier precipitation and in a more pristine environment. During the daytime, the range of magnitude for each effect is more variable owing to cloud thinning and decoupling. The good agreement between LES experiments and analytical formulations suggests that the latter may be useful in evaluations of the total cloud susceptibility. The ratio of the magnitude of the cloud thickness effect to that of the Twomey effect depends on cloud-base height and cloud thickness in unperturbed (clean) clouds.
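A quick consistency check of the reported susceptibility range: under the classical Twomey scaling, cloud optical depth at fixed liquid water path grows as $N^{1/3}$ with droplet number concentration $N$, giving a susceptibility $d\ln\tau / d\ln N = 1/3 \approx 0.33$, inside the ~0.28 to 0.53 nighttime range quoted above (the numbers below are illustrative assumptions, not the paper's data):

```python
import numpy as np

# Back-of-envelope check of the Twomey scaling: with liquid water path
# fixed, optical depth tau ~ N^(1/3), so the susceptibility
# d ln(tau) / d ln(N) is exactly 1/3 -- consistent with the Twomey
# effect dominating the total nighttime susceptibility (~0.28 to 0.53).
N = np.array([50.0, 100.0, 200.0, 400.0])  # droplet number (cm^-3), illustrative
tau = N ** (1.0 / 3.0)                      # fixed-LWP Twomey scaling (arb. units)

suscept = np.gradient(np.log(tau), np.log(N))
print(suscept)  # ~0.333 for every sample point
```

Values above 1/3 in the abstract's range are consistent with the positive dispersion and (for drizzling clouds) cloud thickness contributions stacking on top of the Twomey effect.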

    Meteorological application of Apollo photography Final report

    Development of meteorological information and parameters based on cloud photographs taken during the Apollo 9 flight.

    Study of thermal neutron capture gamma rays using a lithium-drifted germanium spectrometer / [by] Victor John Orphan [and] Norman C. Rasmussen

    "January 1967." "AFCRL-67-0104." Also issued as an Sc.D. thesis by the first author, advised by the second author, MIT, Dept. of Nuclear Engineering, 1967. Includes bibliographical references (pages 199-203). Scientific report, interim; January 1967. A gamma-ray spectrometer, using a 30 cc coaxial Ge(Li) detector, which can be operated as a pair spectrometer at high energies and in the Compton-suppression mode at low energies, provides an effective means of obtaining thermal neutron capture gamma spectra over nearly the entire capture-gamma energy range. The energy resolution (FWHM) of the spectrometer is approximately 0.5% at 1 MeV and 0.1% at 7 MeV. Capture gamma-ray energies can be determined to an accuracy of about 1 keV. The relatively high efficiency of this spectrometer allows the use of an external neutron-beam geometry, which simplifies sample changing. Using a 4096-channel pulse-height analyzer, the capture gamma spectrum of an element may be obtained in about one day. Low-cross-section (order of 0.1 b) elements with many weak-intensity gammas may be studied. Over 100 gamma rays have been identified in the spectrum of one such element, Zr. The spectra of Be, Sc, Fe, Ge, and Zr are presented. United States Air Force contract no. AF19(628)5551. Project no. 5620; Task no. 56200

    A note on Kerr/CFT and free fields

    The near-horizon geometry of the extremal four-dimensional Kerr black hole and certain generalizations thereof has an SL(2,R) x U(1) isometry group. Excitations around this geometry can be controlled by imposing appropriate boundary conditions. For certain boundary conditions, the U(1) isometry is enhanced to a Virasoro algebra. Here, we propose a free-field construction of this Virasoro algebra. Comment: 10 pages, v2: comments and references added
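For reference, the algebra in question is standard (this is textbook material, not a claim about the paper's specific construction): a Virasoro algebra with central charge $c$ consists of generators $L_n$, with the $U(1)$ isometry corresponding to the zero mode $L_0$, obeying

```latex
[L_m, L_n] = (m - n)\, L_{m+n} + \frac{c}{12}\, m\,(m^2 - 1)\, \delta_{m+n,0}\,.
```

A free-field construction typically realizes the $L_n$ as normal-ordered bilinears in the modes $a_m$ of a free boson, $L_n = \tfrac{1}{2}\sum_m {:}\,a_{n-m}\,a_m\,{:}$; whether this particular realization matches the one proposed in the paper is an assumption here.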