
    Microrheological Characterisation of Anisotropic Materials

    We describe the measurement of anisotropic viscoelastic moduli in complex soft materials, such as biopolymer gels, via video particle tracking microrheology of colloidal tracer particles. The use of a correlation tensor to find the axes of maximum anisotropy, and hence the mechanical director, is described. The moduli of an aligned DNA gel are reported as a test of the technique; this may have implications for the high DNA concentrations found in vivo. We also discuss the errors in microrheological measurement, and describe the use of frequency-space filtering to improve displacement resolution and hence probe these typically high-modulus materials. Comment: 5 pages, 5 figures. Replaced after refereeing/improvement; main results are the same. The final, published version of the paper is available at http://link.aps.org/abstract/PRE/v73/e03190
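
    As an informal illustration of the correlation-tensor step described above, the sketch below pools two-dimensional tracer displacements, forms the 2x2 displacement correlation tensor, and diagonalises it to obtain the axes of maximum and minimum mean-square displacement (the mechanical director). The function name, the lag-time handling and the synthetic data are assumptions for illustration only, not the authors' analysis code.

```python
import numpy as np

def anisotropy_axes(positions, lag=1):
    """Estimate the 2x2 displacement correlation tensor from particle tracks
    and return its eigenvalues and the axis of maximum variance.

    positions : array of shape (n_frames, n_particles, 2) -- hypothetical track format
    lag       : lag time, in frames, over which displacements are taken
    """
    disp = positions[lag:] - positions[:-lag]   # displacements at the chosen lag
    disp = disp.reshape(-1, 2)                  # pool displacements from all particles
    disp -= disp.mean(axis=0)                   # remove any common drift
    corr = disp.T @ disp / len(disp)            # correlation tensor <dx_i dx_j>
    eigvals, eigvecs = np.linalg.eigh(corr)     # eigenvalues in ascending order
    director = eigvecs[:, -1]                   # axis of maximum anisotropy
    return corr, eigvals, director

# Usage with synthetic anisotropic diffusion (larger steps along x than along y)
rng = np.random.default_rng(0)
steps = rng.normal(scale=[0.3, 0.1], size=(1000, 50, 2))
tracks = np.cumsum(steps, axis=0)
corr, lam, n_hat = anisotropy_axes(tracks)
print("eigenvalues:", lam, "director:", n_hat)
```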

    Triple and Quadruple Junction Thermophotovoltaic Devices Lattice Matched to InP

    Thermophotovoltaic (TPV) conversion of IR radiation emanating from a radioisotope heat source is under consideration for deep space exploration. Ideally, for the radiator temperatures of interest, the TPV cell must efficiently convert photons in the 0.4-0.7 eV spectral range. The best experimental data for single-junction cells have been obtained for lattice-mismatched 0.55 eV InGaAs-based devices. It has been suggested that a tandem InGaAs-based TPV cell, made by monolithically combining two or more lattice-mismatched InGaAs subcells on InP, would yield a sizeable efficiency improvement. From a practical standpoint, however, implementing more than two lattice-mismatched subcells would require extremely thick graded layers (defect-filtering structures) to accommodate the lattice mismatch between the subcells, and these could detrimentally affect the recycling of unused IR energy back to the emitter. A buffer structure, consisting of various InPAs layers, is incorporated to accommodate the lattice mismatch between the high and low bandgap subcells; there is evidence that this buffer structure may generate defects that extend down into the underlying InGaAs layer.

    The unusually large band-gap lowering observed in GaAs(1-x)N(x) with low nitrogen fraction [1] has sparked new interest in the development of dilute-nitride III-V semiconductors for long-wavelength optoelectronic devices (e.g. IR lasers, detectors, solar cells) [2-7]. Lattice-matched Ga(1-y)In(y)N(x)As(1-x) on InP has recently been investigated for potential use in mid-infrared devices [8], and it could be a strong candidate for TPV applications. This quaternary alloy allows the band gap to be tuned from 1.42 eV to below 1 eV on GaAs, and to as low as 0.6 eV when strained to InP, but it has its own limitations: to reach such a low band gap, Ga(1-y)In(y)N(x)As(1-x) must either be strained on InP, which creates complications through defect generation and reduced device lifetime, or incorporate a high indium content, which is problematic because nitrogen is difficult to dilute in the presence of high indium [9]. The availability of a material with the proper band gap that is also lattice matched to InP is therefore an important issue for improving TPV device performance.

    To address these issues, we have recently shown that, by adjusting the thickness of the individual sublayers and the nitrogen composition, a strain-balanced GaAs(1-x)N(x)/InAs(1-y)N(y) superlattice can be designed that is both lattice matched to InP and has an effective bandgap in the desirable 0.4-0.7 eV range [10,11]. Theoretically, the band gap of GaAs(1-x)N(x), already reduced by the nitrogen, can be lowered further by subjecting the material to biaxial tensile strain, for example by growing pseudomorphically strained layers on commonly available InP substrates. While such an approach could in principle give access to smaller band gaps (longer wavelengths), only a few atomic monolayers can be grown because of the large lattice mismatch between GaAs(1-x)N(x) and InP (approx. 3.8-4.8% for x<0.05 at 300 K). This limitation can be avoided using the principle of strain balancing [12], by alternating the GaAs(1-x)N(x) with layers of InAs(1-y)N(y), which carry the opposite strain (approx. 2.4-3.1% for y<0.05 at 300 K). In this way, an arbitrarily thick pseudomorphically strained superlattice can be realized from a sequence of GaAs(1-x)N(x) and InAs(1-y)N(y) layers, provided the thickness of each layer is kept below the threshold for lattice relaxation.
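
    As a rough illustration of the strain-balancing condition invoked above, the sketch below computes the sublayer-thickness ratio at which the average in-plane strain of one GaAs(1-x)N(x)/InAs(1-y)N(y) period vanishes. The simple thickness-weighted (zero-average-strain) criterion and the particular mismatch values are illustrative assumptions; an actual design would use the stiffness-weighted zero-stress condition and measured lattice constants, as in the strain-balancing literature cited in the abstract.

```python
# Illustrative sketch only, not the authors' design procedure.
# Assumption: strain balancing is approximated by zero average in-plane strain,
#   t_GaAsN * eps_GaAsN + t_InAsN * eps_InAsN = 0.

def strain_balance_ratio(eps_a, eps_b):
    """Return t_a / t_b such that t_a*eps_a + t_b*eps_b = 0."""
    return -eps_b / eps_a

# Example mid-range mismatch values from the abstract (300 K); sign convention:
# positive = tensile strain on InP, negative = compressive strain on InP.
eps_gaasn = +0.043   # GaAs(1-x)N(x) on InP, ~4.3% tensile
eps_inasn = -0.028   # InAs(1-y)N(y) on InP, ~2.8% compressive

ratio = strain_balance_ratio(eps_gaasn, eps_inasn)   # t_GaAsN / t_InAsN
t_inasn_nm = 3.0                                     # hypothetical sublayer thickness, below relaxation threshold
t_gaasn_nm = ratio * t_inasn_nm
print(f"t_GaAsN/t_InAsN = {ratio:.2f} -> {t_gaasn_nm:.2f} nm GaAsN per {t_inasn_nm:.1f} nm InAsN")
```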

    Get it from the Source: Identifying Library Resources and Software Used in Faculty Research

    Libraries and Information Technology departments aim to support the educational and research needs of students, researchers, and faculty members. Close matches between the resources those departments provide and the resources the institution’s community members actually use highlight the value of the departments, demonstrate fiscal responsibility, and show attentiveness to the community’s needs. Traditionally, libraries rely on usage statistics to guide collection development decisions, but usage statistics can only imply value; identifying a resource by name in a publication demonstrates its value more clearly. This pilot project examined the full text of articles published in 2016-2017 by faculty members at a mid-sized, special-focus institution to answer the questions “Do faculty members have university-provided access to the research tools they need to publish?” and “If not, where are they getting them?” Using a custom database, the presenters indexed every publication by author, publication, resources used, availability of the identified resources, and more. This pilot study can be adapted to projects at other institutions, allowing them to gain a better understanding of the strengths and weaknesses of their own institution’s offerings. In addition, they will be able to identify ways to use that data to negotiate for additional resources, inform strategic partnerships, and facilitate open discussions with the institution’s community.

    From cusps to cores: a stochastic model

    The cold dark matter model of structure formation faces apparent problems on galactic scales. Several threads point to excessive halo concentration, including central densities that rise too steeply with decreasing radius. Yet random fluctuations in the gaseous component can 'heat' the centres of haloes, decreasing their densities. We present a theoretical model deriving this effect from first principles: stochastic variations in the gas density are converted into potential fluctuations that act on the dark matter; the associated force correlation function is calculated and the corresponding stochastic equation solved. Assuming a power-law spectrum of fluctuations with maximal and minimal cutoff scales, we derive the velocity dispersion imparted to the halo particles and the relevant relaxation time. We further perform numerical simulations, with fluctuations realised as a Gaussian random field, which confirm the formation of a core within a timescale comparable to that derived analytically. Non-radial collective modes enhance the energy transport process that erases the cusp, though the parametrisations of the analytical model persist. In our model, the dominant contribution to the dynamical coupling driving the cusp-core transformation comes from the largest-scale fluctuations. Yet the efficiency of the transformation is independent of the value of the largest scale and depends weakly (linearly) on the power-law exponent; it effectively depends on two parameters: the gas mass fraction and the normalisation of the power spectrum. This suggests that cusp-core transformations observed in hydrodynamic simulations of galaxy formation may be understood and parametrised in simple terms, the physical and numerical complexities of the various implementations notwithstanding. Comment: Minor revisions to match the version to appear in MNRAS; Section 2.3 largely rewritten for clarity
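
    To give a concrete sense of the stochastic input described above, the sketch below generates a two-dimensional Gaussian random field whose power spectrum follows a power law between a minimum and a maximum cutoff scale. The grid size, exponent and cutoff values are placeholder assumptions for illustration, not the parameters adopted in the paper or its simulations.

```python
import numpy as np

def power_law_grf(n=128, box=10.0, alpha=-2.0, l_max=5.0, l_min=0.5, seed=1):
    """Illustrative 2D Gaussian random field with P(k) ~ k**alpha,
    truncated outside the cutoff scales l_min and l_max (same units as box)."""
    rng = np.random.default_rng(seed)
    k1d = 2.0 * np.pi * np.fft.fftfreq(n, d=box / n)
    kx, ky = np.meshgrid(k1d, k1d, indexing="ij")
    kk = np.hypot(kx, ky)
    k_min, k_max = 2.0 * np.pi / l_max, 2.0 * np.pi / l_min
    power = np.zeros_like(kk)
    band = (kk >= k_min) & (kk <= k_max)
    power[band] = kk[band] ** alpha             # power-law band between the cutoffs
    noise = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    field = np.fft.ifft2(noise * np.sqrt(power / 2.0)).real
    return field / field.std()                  # unit-variance realisation

delta = power_law_grf()
print(delta.shape, float(delta.std()))
```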

    Towards a resolved Kennicutt-Schmidt law at high redshift

    Massive galaxies in the distant Universe form stars at much higher rates than today. Although direct resolution of the star-forming regions of these galaxies is still a challenge, recent molecular gas observations at the IRAM Plateau de Bure interferometer enable us to study the star formation efficiency on subgalactic scales around redshift z = 1.2. We present a method for obtaining the gas and star formation rate (SFR) surface densities of ensembles of clumps composing galaxies at this redshift, even though the corresponding scales are not resolved. This method is based on identifying these structures in position-velocity diagrams corresponding to slices within the galaxies. We use unique IRAM observations of the CO(3-2) rotational line and DEEP2 spectra of four massive star-forming distant galaxies - EGS13003805, EGS13004291, EGS12007881, and EGS13019128 in the AEGIS terminology - to determine the gas and SFR surface densities of the identifiable ensembles of clumps that constitute them. The integrated CO line luminosity is assumed to be directly proportional to the total gas mass, and the SFR is deduced from the [OII] line. We identify the ensembles of clumps at the angular resolution available in both the CO and [OII] spectroscopy, i.e., 1-1.5". SFR and gas surface densities are averaged in areas of this size, which is also the thickness of the DEEP2 slits and of the extracted IRAM slices, and we derive a spatially resolved Kennicutt-Schmidt (KS) relation on a scale of ~8 kpc. The data generally indicate an average depletion time of 1.9 Gyr, but with significant variations from point to point within the galaxies. Comment: 6 pages, 4 figures, 2 tables, accepted by Astronomy and Astrophysics
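
    For readers unfamiliar with the quantity quoted above, the depletion time is simply the gas surface density divided by the SFR surface density. The sketch below evaluates it for hypothetical surface-density values in conventional units; the numbers are placeholders chosen to reproduce a ~1.9 Gyr timescale, not the measured values for these galaxies.

```python
# Illustrative only: depletion time from Kennicutt-Schmidt-style surface densities.
# t_dep = Sigma_gas / Sigma_SFR, with Sigma_gas in Msun pc^-2 and
# Sigma_SFR in Msun yr^-1 kpc^-2 (hypothetical example values below).

def depletion_time_gyr(sigma_gas_msun_pc2, sigma_sfr_msun_yr_kpc2):
    sigma_gas_msun_kpc2 = sigma_gas_msun_pc2 * 1.0e6         # convert pc^-2 to kpc^-2
    t_dep_yr = sigma_gas_msun_kpc2 / sigma_sfr_msun_yr_kpc2  # years
    return t_dep_yr / 1.0e9                                  # gigayears

print(depletion_time_gyr(380.0, 0.2))   # ~1.9 Gyr for these example numbers
```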

    Optimization and activation of renewable durian husk for biosorption of lead (II) from an aqueous medium

    Background: Biosorption of lead Pb(II) by durian husk activated carbon (DHAC) was investigated. The main aim of this work was to explore the effect of operating variables such as pH, biosorbent dose, temperature, initial metal ion concentration and contact time on the removal of Pb(II) from a synthesized aqueous medium using a response surface methodology (RSM) technique. The experiments were performed in two sets, namely set 1 and set 2. Results: For experimental set 1, the pH was fixed at 7.0. The optimum conditions for the remaining parameters were determined to be 0.39 g DHAC dose, 60 min contact time and 100 mg L−1 initial metal ion concentration, which yielded a maximum biosorption capacity of 14.6 mg g−1. For experimental set 2, 41.27 °C, 8.95 and 99.96 mg L−1 were the optimum values determined for temperature, pH and initial Pb(II) concentration, respectively, which gave a maximum adsorption capacity of 9.67 mg g−1. Characterization of the adsorbent revealed active functional groups such as hydroxyl, carboxylic, alcohol and hemicellulose. The equilibrium adsorption data obeyed the Langmuir isotherm and pseudo‐second‐order kinetic models, with a maximum Langmuir uptake of 36.1 mg g−1. Conclusions: The biosorbent could be reused, so the abundant durian husk can be utilized effectively for the removal of Pb(II) from polluted water.
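
    The Langmuir isotherm mentioned above relates the equilibrium uptake q_e to the equilibrium concentration C_e as q_e = q_max K_L C_e / (1 + K_L C_e). The sketch below fits that form with scipy to hypothetical equilibrium data; the data points, initial guesses and fitted values are placeholders, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(ce, q_max, k_l):
    """Langmuir isotherm: q_e = q_max * K_L * C_e / (1 + K_L * C_e)."""
    return q_max * k_l * ce / (1.0 + k_l * ce)

# Hypothetical equilibrium data (C_e in mg/L, q_e in mg/g), not the study's values
ce = np.array([5.0, 10.0, 25.0, 50.0, 100.0])
qe = np.array([8.1, 14.0, 23.5, 29.8, 33.6])

(q_max_fit, k_l_fit), _ = curve_fit(langmuir, ce, qe, p0=[30.0, 0.05])
print(f"q_max ~ {q_max_fit:.1f} mg/g, K_L ~ {k_l_fit:.3f} L/mg")
```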

    A Revised Design for Microarray Experiments to Account for Experimental Noise and Uncertainty of Probe Response

    Background: Although microarrays are widely used analysis tools in biomedical research, they are known to yield noisy output that usually requires experimental confirmation. To tackle this problem, many studies have developed rules for optimizing probe design and devised complex statistical tools to analyze the output. However, less emphasis has been placed on systematically identifying the noise component as part of the experimental procedure. One source of noise is the variance in probe binding, which can be assessed by replicating array probes. The second source is poor probe performance, which can be assessed by calibrating the array with a dilution series of target molecules. Using model experiments for copy number variation and gene expression measurements, we investigate here a revised design for microarray experiments that addresses both of these sources of variance. Results: Two custom arrays were used to evaluate the revised design: one based on 25-mer probes from an Affymetrix design and the other based on 60-mer probes from an Agilent design. To assess experimental variance in probe binding, all probes were replicated ten times. To assess probe performance, the probes were calibrated using a dilution series of target molecules, and the signal response was fitted to an adsorption model. We found that significant variance of the signal could be controlled by averaging across probes and removing probes that are nonresponsive or poorly responsive in the calibration experiment. Taking this into account, one can obtain a more reliable signal, with the added option of obtaining absolute rather than relative measurements. Conclusion: The assessment of technical variance within the experiments, combined with the calibration of the probes, makes it possible to remove poorly responding probes and yields more reliable signals for the remaining ones. Once an array is properly calibrated, absolute quantification of signals becomes straightforward, alleviating the need for normalization and reference hybridizations.
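
    A minimal sketch of the kind of calibration-based filtering described above: each probe's signal across the dilution series is fitted to a simple Langmuir-type adsorption response, and probes that fail to converge, respond too weakly, or fit poorly are discarded before replicate averaging. The model form, thresholds and synthetic data are assumptions for illustration, not the pipeline used in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def adsorption(c, s_max, k, bg):
    """Simple adsorption response: signal saturates with target concentration c."""
    return bg + s_max * c / (k + c)

def calibrate_probes(conc, signals, min_range=2.0, min_r2=0.9):
    """Fit each probe's dilution-series response and keep only responsive probes.

    conc    : 1D array of target concentrations in the dilution series
    signals : array (n_probes, n_dilutions) of measured intensities
    min_range, min_r2 : hypothetical cut-offs on dynamic range and fit quality."""
    kept = []
    for i, y in enumerate(signals):
        if y.var() == 0:
            continue                                     # completely flat probe
        try:
            p, _ = curve_fit(adsorption, conc, y,
                             p0=[y.max() - y.min(), np.median(conc), y.min()],
                             maxfev=5000)
        except RuntimeError:
            continue                                     # no convergence: treat as nonresponsive
        r2 = 1.0 - np.var(y - adsorption(conc, *p)) / y.var()
        if p[0] / max(p[2], 1.0) >= min_range and r2 >= min_r2:
            kept.append(i)                               # responsive and well fitted
    return np.array(kept)

# Usage with synthetic data: three responsive probes plus one flat probe
conc = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])
rng = np.random.default_rng(0)
good = 100.0 * conc / (3.0 + conc) + 20.0 + rng.normal(0.0, 2.0, (3, conc.size))
flat = 22.0 + rng.normal(0.0, 2.0, (1, conc.size))
print("kept probes:", calibrate_probes(conc, np.vstack([good, flat])))
```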