
    Correlated X-ray/Ultraviolet/Optical Variability in NGC 6814

    We present results of a 3-month combined X-ray/UV/optical monitoring campaign of the Seyfert 1 galaxy NGC 6814. The object was monitored by Swift from June through August 2012 in the X-ray and UV bands and by the Liverpool Telescope from May through July 2012 in B and V. The light curves are variable and significantly correlated between wavebands. Using cross-correlation analysis, we compute the time lags between the X-ray and lower energy bands. These lags are thought to be associated with the light travel time between the central X-ray emitting region and areas further out on the accretion disc. The computed lags support a thermal reprocessing scenario in which X-ray photons heat the disc and are reprocessed into lower energy photons. Additionally, we fit the light curves using CREAM, a Markov Chain Monte Carlo code for a standard disc. The best-fitting standard disc model yields unreasonably high, super-Eddington accretion rates; assuming more reasonable accretion rates would significantly under-predict the lags. If the majority of the reprocessing originates in the disc, this implies that the UV/optical emitting regions of the accretion disc lie farther out than predicted by the standard thin-disc model. Accounting for contributions from broad emission lines reduces the lags in B and V by approximately 25% (less than the uncertainty in the lag measurements), though additional contamination from the Balmer continuum may also contribute to the larger than expected lags. This discrepancy between predicted and measured interband delays is becoming common in AGN for which wavelength-dependent lags are measured. (11 pages, 8 figures; accepted for publication in MNRAS.)
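
    A minimal illustration of the cross-correlation idea used to measure such lags is sketched below: one band is interpolated onto lag-shifted times and a grid of trial lags is scanned for the peak correlation. The light curves here are synthetic and noiseless, and the sketch is not the campaign's actual analysis pipeline.

        # Sketch of an interpolated cross-correlation lag estimate (illustrative only).
        import numpy as np

        rng = np.random.default_rng(0)
        t = np.sort(rng.uniform(0.0, 90.0, 200))           # irregular observation times (days)
        driver = np.sin(2 * np.pi * t / 20.0)              # stand-in X-ray "driving" light curve
        true_lag = 2.0                                      # days
        echo = np.sin(2 * np.pi * (t - true_lag) / 20.0)    # stand-in reprocessed UV/optical band

        def corr_at_lag(lag):
            # Evaluate the driver at times shifted by the trial lag and correlate with the echo.
            shifted = t - lag
            ok = (shifted >= t.min()) & (shifted <= t.max())
            return np.corrcoef(np.interp(shifted[ok], t, driver), echo[ok])[0, 1]

        trial_lags = np.arange(-10.0, 10.0, 0.1)
        ccf = np.array([corr_at_lag(lag) for lag in trial_lags])
        print("peak lag (days):", round(trial_lags[np.argmax(ccf)], 1))   # close to true_lag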

    Deformations in deep continuous reinforced concrete transfer girders

    Peer reviewed. This paper presents a three-parameter kinematic model for the deformation patterns of deep continuous transfer girders. The three degrees of freedom of the model are the average strains along the top and bottom longitudinal reinforcement within each shear span, together with the transverse displacement in the critical loading zone. The model is validated against a large-scale test of a two-span continuous beam performed at the University of Toronto. It is shown that the apparently complex deformation patterns of the specimen are captured well by the kinematic model.

    Do we need more than one ThinPrep to obtain adequate cellularity in fine needle aspiration?

    No abstract. Peer reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/57411/1/20735_ftp.pd

    The entropy of words: learnability and expressivity across more than 1000 languages

    The choice associated with words is a fundamental property of natural languages. It lies at the heart of quantitative linguistics, computational linguistics and the language sciences more generally. Information theory gives us tools to measure precisely the average amount of choice associated with words: the word entropy. Here, we use three parallel corpora, encompassing ca. 450 million words in 1916 texts and 1259 languages, to tackle some of the major conceptual and practical problems of word entropy estimation: dependence on text size, register, style and estimation method, as well as non-independence of words in co-text. We present two main findings. Firstly, word entropies display relatively narrow, unimodal distributions; there is no language in our sample with a unigram entropy of less than six bits/word. We argue that this is in line with information-theoretic models of communication: languages are held in a narrow range by two fundamental pressures, word learnability and word expressivity, with a potential bias towards expressivity. Secondly, there is a strong linear relationship between unigram entropies and entropy rates, and the entropy difference between words with and without co-textual information is narrowly distributed around ca. three bits/word. In other words, knowing the preceding text reduces the uncertainty of words by roughly the same amount across the languages of the world. Peer reviewed. Postprint (published version).
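
    To make the quantities concrete, the sketch below computes a naive plug-in (maximum-likelihood) estimate of the unigram word entropy and of the entropy conditioned on the preceding word for a toy text. The paper compares several estimators and corrects for text size; none of that machinery is reproduced here.

        # Plug-in word entropy estimates on a toy text (illustrative only).
        from collections import Counter
        import math

        words = "the cat sat on the mat and the dog sat on the rug".split()

        # Unigram entropy: H(W) = -sum_w p(w) log2 p(w)
        counts = Counter(words)
        n = len(words)
        h_unigram = -sum((c / n) * math.log2(c / n) for c in counts.values())

        # Entropy conditioned on the preceding word (a crude stand-in for the entropy rate):
        # H(W_t | W_{t-1}) = -sum_{u,w} p(u, w) log2 p(w | u)
        bigrams = Counter(zip(words, words[1:]))
        prev = Counter(words[:-1])
        h_cond = -sum(
            (c / (n - 1)) * math.log2(c / prev[u]) for (u, _), c in bigrams.items()
        )

        print(f"unigram entropy:     {h_unigram:.2f} bits/word")
        print(f"conditional entropy: {h_cond:.2f} bits/word")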

    A Novel Approach to Parallel Coupled Cluster Calculations:  Combining Distributed and Shared Memory Techniques for Modern Cluster Based Systems

    A parallel coupled cluster algorithm that combines distributed and shared memory techniques for the CCSD(T) method (singles and doubles with perturbative triples) is described. The implementation of the massively parallel CCSD(T) algorithm uses a hybrid molecular and "direct" atomic integral driven approach. Shared memory is used to minimize redundant replicated storage per compute process. The algorithm is targeted at modern cluster-based architectures composed of multiprocessor nodes connected by a dedicated communication network. Parallelism is achieved on two levels: within a compute node via shared-memory techniques, and between nodes using distributed-memory techniques. The new parallel implementation is designed to allow routine evaluation of mid-size (500−750 basis function) to large-scale (750−1000 basis function) CCSD(T) energies. Sample calculations are performed on five low-lying isomers of the water hexamer using the aug-cc-pVTZ basis set.
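
    The shared-memory side of such a hybrid scheme can be illustrated with a small sketch: worker processes on a single node attach to one copy of a large read-only array rather than each holding a replica. The array, its contents, and the trivial "work" below are placeholders, and the inter-node (distributed-memory) level of the actual algorithm is not shown.

        # Sketch: share one read-only array among processes on a node (illustrative only).
        import numpy as np
        from multiprocessing import Pool, shared_memory

        SHAPE, DTYPE = (1000, 1000), np.float64

        def worker(args):
            shm_name, rows = args
            shm = shared_memory.SharedMemory(name=shm_name)     # attach, no extra copy
            data = np.ndarray(SHAPE, dtype=DTYPE, buffer=shm.buf)
            partial = data[rows].sum()                          # stand-in for real work
            shm.close()
            return partial

        if __name__ == "__main__":
            shm = shared_memory.SharedMemory(create=True, size=SHAPE[0] * SHAPE[1] * 8)
            data = np.ndarray(SHAPE, dtype=DTYPE, buffer=shm.buf)
            data[:] = 1.0                                       # pretend these are stored integrals
            blocks = np.array_split(np.arange(SHAPE[0]), 4)     # distribute row blocks to workers
            with Pool(4) as pool:
                total = sum(pool.map(worker, [(shm.name, b) for b in blocks]))
            print(total)                                        # 1000000.0
            shm.close()
            shm.unlink()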

    Lunar hand tools

    Tools useful for operations and maintenance tasks on the lunar surface were identified and designed. The primary constraints are the lunar environment, the astronaut's space suit, and the strength limits of the astronaut on the Moon. A multipurpose rotary motion tool and a collapsible tool carrier were designed. For the rotary tool, a brushless motor and controls were specified, a material for the housing was chosen, bearings and lubrication were recommended, and a planetary reduction gear attachment was designed. The tool carrier was designed primarily for ease of access to the tools and fasteners. A material was selected for the carrier and a structural analysis was performed on it. Recommendations were made regarding the limitations of human performance and possible attachments to the torque driver.

    Proactive Prevention and Mitigation of Saltwater Damage for Portable Devices

    This disclosure describes techniques to mitigate the effects of fluid submersion on electronic devices such as mobile phones. Per the techniques of this disclosure, with user permission and express consent, user context data is used to determine the likelihood that a device is near a body of saltwater. Mitigating actions, such as disabling the device's charging port, are then performed automatically. If fluid submersion is detected, the duration of submersion is measured and the time remaining before the device suffers damage is estimated. Additional information, such as information about repair shops, may be provided to the user based on the extent of damage to the device. While the charging port is disabled, the device may be placed in a low-power mode to extend its battery life.
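
    A rough sketch of the decision flow described above is given below. All names, thresholds, and the assumed tolerance before damage are hypothetical; a real implementation would rely on the device's own location, consent, and power-management interfaces.

        # Hypothetical sketch of the mitigation decision flow (not an actual device API).
        from dataclasses import dataclass

        @dataclass
        class DeviceContext:
            user_consented: bool
            near_saltwater_probability: float   # derived from user context signals, 0.0-1.0
            submerged_seconds: float            # 0.0 if no submersion detected

        SALTWATER_THRESHOLD = 0.7               # assumed likelihood cutoff
        SAFE_SUBMERSION_SECONDS = 30.0          # assumed time before damage is expected

        def mitigation_actions(ctx: DeviceContext) -> list:
            actions = []
            if not ctx.user_consented:
                return actions                  # do nothing without express consent
            if ctx.near_saltwater_probability >= SALTWATER_THRESHOLD:
                actions.append("disable_charging_port")
            if ctx.submerged_seconds > 0:
                actions.append("enter_low_power_mode")
                if ctx.submerged_seconds > SAFE_SUBMERSION_SECONDS:
                    actions.append("show_repair_information")
            return actions

        print(mitigation_actions(DeviceContext(True, 0.9, 45.0)))
        # ['disable_charging_port', 'enter_low_power_mode', 'show_repair_information']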

    Tank Pressure Control Experiment: Thermal Phenomena in Microgravity

    This report presents the results of the flight experiment Tank Pressure Control Experiment/Thermal Phenomena (TPCE/TP), performed in the microgravity environment of the space shuttle. TPCE/TP, flown on Space Transportation System mission STS-52, was the second flight of the Tank Pressure Control Experiment (TPCE). The experiment used Freon 113 at near-saturation conditions, with the test tank filled with liquid to about 83% by volume. The experiment consisted of 21 tests. Each test generally started with a heating phase to increase the tank pressure and develop temperature stratification in the fluid, followed by a fluid mixing phase for tank pressure reduction and fluid temperature equilibration. The heating phase provided pool boiling data from heating surfaces that are large relative to bubble sizes (0.1046 m by 0.0742 m) at low heat fluxes (0.23 to 1.16 kW/sq m). The system pressure and the bulk liquid subcooling varied from 39 to 78 kPa and 1 to 3 °C, respectively. The boiling process during the entire heating period, as well as the jet-induced mixing process for the first 2 min of the mixing period, was recorded on video. The unique features of the experimental results are the sustainability of high liquid superheats for long periods and the occurrence of explosive boiling at low heat fluxes (0.86 to 1.1 kW/sq m). For a heat flux of 0.97 kW/sq m, a wall superheat of 17.9 °C was attained after 10 min of heating. This superheat was followed by explosive boiling accompanied by a pressure spike of about 38% of the tank pressure at the inception of boiling. However, at this heat flux the vapor blanketing the heating surface could not be sustained, and steady nucleate boiling continued after the explosive boiling. The jet-induced fluid mixing results were obtained for jet Reynolds numbers of 1900 to 8000 and Weber numbers of 0.2 to 6.5. Analyses of data from the two flight experiments (TPCE and TPCE/TP), and their comparison with results obtained in drop tower experiments, suggest that as the Bond number approaches zero, the flow pattern produced by an axial jet and the mixing time can be predicted from the Weber number.
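
    For reference, the dimensionless groups quoted above have the standard definitions sketched below; the numerical inputs are placeholders, not the Freon 113 conditions of the experiment.

        # Standard definitions of the jet mixing groups (placeholder inputs, SI units).
        def reynolds(rho, v, d, mu):
            return rho * v * d / mu             # inertial / viscous forces

        def weber(rho, v, d, sigma):
            return rho * v ** 2 * d / sigma     # inertial / surface-tension forces

        def bond(rho, a, d, sigma):
            return rho * a * d ** 2 / sigma     # body / surface-tension forces (-> 0 in microgravity)

        print(round(reynolds(rho=1500.0, v=0.05, d=0.01, mu=7e-4)))     # ~1071
        print(round(weber(rho=1500.0, v=0.05, d=0.01, sigma=0.017), 1)) # ~2.2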