
    An I-V analysis of irradiated Gallium Arsenide solar cells

    A computer program was used to analyze the illuminated I-V characteristics of four sets of gallium arsenide (GaAs) solar cells irradiated with 1-MeV electrons and 10-MeV protons. It was concluded that the junction recombination current (J_r) dominates nearly all GaAs cells tested, except for irradiated Mitsubishi cells, which appear to have a different doping profile. Irradiation maintains or increases the dominance of J_r. Proton irradiation increases J_r more than electron irradiation does. The U.S. cells were optimized for beginning of life (BOL) and the Japanese cells for end of life (EOL). I-V analysis indicates ways of improving both the BOL and EOL performance of GaAs solar cells.
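
    A minimal sketch of the kind of fit such a program performs, assuming the standard two-diode model in which J_r appears as the n = 2 recombination term (all parameter values, the synthetic data and the function name are illustrative, not the program's actual code):

```python
# Two-diode fit to illuminated I-V data: the n = 1 term is the
# quasi-neutral diffusion current, the n = 2 term is the junction
# recombination current J_r. Synthetic values for illustration only.
import numpy as np
from scipy.optimize import curve_fit

q = 1.602176634e-19          # elementary charge (C)
k_B = 1.380649e-23           # Boltzmann constant (J/K)
V_t = k_B * 298.15 / q       # thermal voltage at 25 degC (~25.7 mV)

def illuminated_jv(V, J_L, J_01, J_02):
    """Current density (A/cm^2) of an idealized two-diode cell;
    series and shunt resistance are neglected for brevity."""
    diffusion = J_01 * (np.exp(V / V_t) - 1.0)              # n = 1
    recombination = J_02 * (np.exp(V / (2 * V_t)) - 1.0)    # n = 2 (J_r)
    return J_L - diffusion - recombination

# Synthetic "measured" curve standing in for a real irradiated cell.
V = np.linspace(0.0, 0.95, 50)
J = illuminated_jv(V, 0.030, 1e-19, 1e-11)

(J_L, J_01, J_02), _ = curve_fit(illuminated_jv, V, J, p0=(0.03, 1e-19, 1e-11))

# The cell is "dominated by J_r" below the voltage where the n = 2
# term falls under the n = 1 term:
V_cross = 2 * V_t * np.log(J_02 / J_01)
print(f"recombination dominates below ~{V_cross:.2f} V")
```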

    Gallium Arsenide solar cell radiation damage experiment

    Gallium arsenide (GaAs) solar cells for space applications from three different manufacturers were irradiated with 10-MeV protons or 1-MeV electrons. The electrical performance of the cells was measured at several fluence levels and compared. Silicon cells were included for reference and comparison. All the GaAs cell types performed similarly throughout the testing and showed a 36 to 56 percent power areal density advantage over the silicon cells. Thinner (8-mil versus 12-mil) GaAs cells provide a significant weight reduction. The use of germanium (Ge) substrates to improve mechanical integrity can be implemented with little impact on end-of-life performance in a radiation environment.
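
    For illustration, power areal density is simply maximum power output divided by cell area; the numbers below are hypothetical stand-ins chosen only to land inside the reported 36 to 56 percent range, not the experiment's measured values:

```python
# Hypothetical cell numbers for illustration; the experiment's actual
# measured values are not reproduced here.
def power_areal_density(p_max_mw: float, area_cm2: float) -> float:
    """Maximum power per unit cell area, in mW/cm^2."""
    return p_max_mw / area_cm2

gaas = power_areal_density(88.0, 4.0)   # assumed 2 cm x 2 cm GaAs cell
si = power_areal_density(60.0, 4.0)     # assumed 2 cm x 2 cm silicon cell
advantage = 100.0 * (gaas - si) / si
print(f"GaAs power areal density advantage: {advantage:.0f}%")  # ~47%
```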

    Automated quantitative analysis of single and double label autoradiographs

    A method for the analysis of silver grain content in both single and double label autoradiographs is presented. The total grain area is calculated by counting the number of pixels at which the recorded light intensity in transmission dark-field illumination exceeds a selected threshold. The calibration tests included autoradiographs with low (3H-thymidine) and high (3H-deoxyuridine) silver grain densities. The results are proportional to the customary visual grain count. For the range of visibly countable grain densities in single-labeled specimens, the correlation coefficient between the computed values and the visual grain counts is better than 0.96. In the first emulsion of the two-emulsion-layer autoradiographs of double-labeled specimens (3H- and 14C-thymidine), the correlation coefficients are 0.919 and 0.906. The method provides a statistical correction for background grains not due to the isotope. The possibility of recording 14C tracks by shifting the focus through the second emulsion of the double-labeled specimens is also demonstrated. The reported technique is essentially independent of the size, shape and density of the grains.
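
    A minimal sketch of the counting step described above, thresholding a dark-field intensity image and counting the pixels that exceed the threshold (the image here is synthetic; a real frame would come from the microscope camera):

```python
# Threshold-based grain-area measurement: in transmission dark-field
# illumination grains scatter light and appear bright, so the grain
# area is the count of pixels above a chosen intensity threshold.
import numpy as np

def grain_area(image: np.ndarray, threshold: int) -> int:
    """Total grain area in pixels: pixels whose recorded intensity
    exceeds the selected threshold."""
    return int(np.count_nonzero(image > threshold))

rng = np.random.default_rng(seed=0)
img = rng.integers(0, 50, size=(512, 512))   # dark background noise
img[100:110, 200:210] = 200                  # one bright 10x10 "grain" patch
print(grain_area(img, threshold=128))        # -> 100
```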

    Linking Classical and Quantum Key Agreement: Is There "Bound Information"?

    After carrying out a protocol for quantum key agreement over a noisy quantum channel, the parties Alice and Bob must process the raw key in order to end up with identical keys about which the adversary has virtually no information. In principle, both classical and quantum protocols can be used for this processing. It is a natural question which type of protocol is more powerful. We prove, for general states but under the assumption of incoherent eavesdropping, that Alice and Bob share some so-called intrinsic information in their classical random variables, resulting from optimal measurements, if and only if the parties' quantum systems are entangled. In addition, we provide evidence that the potentials of classical and of quantum protocols are equal in every situation. Consequently, many techniques and results from quantum information theory directly apply to problems in classical information theory, and vice versa. For instance, it was previously believed that two parties can carry out unconditionally secure key agreement as long as they share some intrinsic information in the adversary's view. The analysis of this purely classical problem from the quantum information-theoretic viewpoint shows that this is true in the binary case, but false in general. More explicitly, bound entanglement, i.e., entanglement that cannot be purified by any quantum protocol, has a classical counterpart. This "bound intrinsic information" cannot be distilled to a secret key by any classical protocol. As another application we propose a measure for entanglement based on classical information-theoretic quantities. (Comment: Accepted for Crypto 2000. 17 pages.)
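
    For reference, the intrinsic information invoked above is the conditional mutual information of Alice's and Bob's variables X and Y, minimized over all channels the adversary may apply to her variable Z (the standard Maurer-Wolf definition; notation assumed):

```latex
% Intrinsic information of X and Y given Z: minimize the conditional
% mutual information over all channels P_{\bar{Z}|Z} mapping the
% adversary's variable Z to \bar{Z}.
I(X;Y \downarrow Z) \;=\; \min_{P_{\bar{Z}|Z}} \, I(X;Y \mid \bar{Z})
```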

    Discrete tomography and joint inversion for loosely connected or unconnected physical properties: application to crosshole seismic and georadar data sets

    Tomographic inversions of geophysical data generally include an underdetermined component. To compensate for this shortcoming, assumptions or a priori knowledge need to be incorporated in the inversion process. A possible option for a broad class of problems is to restrict the range of values within which the unknown model parameters must lie. Typical examples of such problems include cavity detection or the delineation of isolated ore bodies in the subsurface. In cavity detection, the physical properties of the cavity can be narrowed down to those of air and/or water, and the physical properties of the host rock either are known to within a narrow band of values or can be established from simple experiments. Discrete tomography techniques allow such information to be included as constraints on the inversions. We have developed a discrete tomography method that is based on mixed-integer linear programming. An important feature of our method is the ability to invert jointly different types of data, for which the key physical properties are only loosely connected or unconnected. Joint inversions reduce the ambiguity in tomographic studies. The performance of our new algorithm is demonstrated on several synthetic data sets. In particular, we show how the complementary nature of seismic and georadar data can be exploited to locate air- or water-filled cavities.
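
    A toy version of such a mixed-integer formulation, assuming straight rays and a binary rock/air decision per cell (the geometry, path lengths and the use of the PuLP solver are illustrative assumptions, not the paper's implementation):

```python
# Toy discrete traveltime tomography as a mixed-integer linear program:
# each cell is either host rock or an air-filled cavity, and we minimize
# the L1 misfit to observed traveltimes. Requires: pip install pulp.
import pulp

L = [[1.0, 1.0, 0.0],   # path length (m) of each of 3 straight rays
     [0.0, 1.0, 1.0],   # through each of 3 cells
     [1.0, 0.0, 1.0]]
s_host, s_cav = 1 / 4000.0, 1 / 330.0    # slowness (s/m) of rock and air
x_true = [0, 1, 0]                        # middle cell is the cavity
t_obs = [sum(L[i][j] * (s_host + (s_cav - s_host) * x_true[j])
             for j in range(3)) for i in range(3)]   # synthetic data

prob = pulp.LpProblem("discrete_tomography", pulp.LpMinimize)
x = [pulp.LpVariable(f"x{j}", cat="Binary") for j in range(3)]  # 1 = cavity
r = [pulp.LpVariable(f"r{i}", lowBound=0) for i in range(3)]    # |residual|
prob += pulp.lpSum(r)                     # objective: L1 data misfit
for i in range(3):
    t_pred = pulp.lpSum(L[i][j] * (s_host + (s_cav - s_host) * x[j])
                        for j in range(3))
    prob += t_pred - t_obs[i] <= r[i]     # linearized absolute value
    prob += t_obs[i] - t_pred <= r[i]
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([int(v.value()) for v in x])        # -> [0, 1, 0]
```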

    Ice volume distribution and implications on runoff projections in a glacierized catchment

    A dense network of helicopter-based ground-penetrating radar (GPR) measurements was used to determine the ice-thickness distribution in the Mauvoisin region. The comprehensive set of ice-thickness measurements was combined with an ice-thickness estimation approach for an accurate determination of the bedrock. A total ice volume of 3.69 ± 0.31 km³ and a maximum ice thickness of 290 m were found. The ice-thickness values were then employed as input for a combined glacio-hydrological model forced by the most recent regional climate scenarios. This model provided glacier evolution and runoff projections for the period 2010–2100. Runoff projections based on the measured initial ice volume distribution show an increase in annual runoff of 4% in the next two decades, followed by a persistent runoff decrease until 2100. Finally, we examined the influence of the ice-thickness distribution on runoff projections. Our analyses revealed that reliable estimates of the ice volume are essential for modelling future glacier and runoff evolution. Erroneous estimates of the total ice volume may even alter the predicted general runoff trend.
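
    As a generic illustration of why total-volume estimates matter (and not the GPR-constrained approach used in the study), the classical volume-area scaling relation of Bahr et al. (1997) shows how a relative error in glacier area is amplified by the scaling exponent in the inferred volume:

```python
def ice_volume_km3(area_km2: float, c: float = 0.034,
                   gamma: float = 1.36) -> float:
    """Volume-area scaling V = c * A**gamma (Bahr et al., 1997).
    c and gamma are empirical values for valley glaciers; shown for
    illustration, NOT the GPR-constrained estimate of the study."""
    return c * area_km2 ** gamma

# A 5% error in area propagates to roughly gamma * 5% ~ 7% in volume,
# since dV/V = gamma * dA/A -- one way volume errors can bias the
# projected runoff trend.
print(ice_volume_km3(30.0))          # ~3.5 km^3 for a 30 km^2 glacier
print(ice_volume_km3(30.0 * 1.05))   # ~7% larger
```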

    Three-dimensional geoelectric modelling with optimal work/accuracy rate using an adaptive wavelet algorithm

    Despite the ever-increasing power of modern computers, realistic modelling of complex 3-D earth models is still a challenging task and requires substantial computing resources. The overwhelming majority of current geophysical modelling approaches include either finite-difference or non-adaptive finite-element algorithms and variants thereof. These numerical methods usually require the subsurface to be discretized with a fine mesh to accurately capture the behaviour of the physical fields. However, this may result in excessive memory consumption and computing times. A common feature of most of these algorithms is that the model discretizations are independent of the model complexity, which may be wasteful when there are only minor to moderate spatial variations in the subsurface parameters. Recent developments in the theory of adaptive numerical solvers have the potential to overcome this problem. Here, we consider an adaptive wavelet-based approach that is applicable to a large range of problems, including nonlinear problems. In comparison with earlier applications of adaptive solvers to geophysical problems, we employ here a new adaptive scheme whose core ingredients arose from a rigorous analysis of the overall asymptotically optimal computational complexity, including, in particular, an optimal work/accuracy rate. Our adaptive wavelet algorithm offers several attractive features: (i) for a given subsurface model, it allows the forward-modelling domain to be discretized with a quasi-minimal number of degrees of freedom; (ii) sparsity of the associated system matrices is guaranteed, which makes the algorithm memory efficient; and (iii) the modelling accuracy scales linearly with computing time. We have implemented the adaptive wavelet algorithm for solving 3-D geoelectric problems. To test its performance, numerical experiments were conducted with a series of conductivity models exhibiting varying degrees of structural complexity. Results were compared with those of a non-adaptive finite-element algorithm that employs an unstructured mesh fitted to subsurface boundaries; such algorithms represent the current state of the art in geoelectric modelling. An analysis of the numerical accuracy as a function of the number of degrees of freedom revealed that the adaptive wavelet algorithm outperforms the finite-element solver for simple and moderately complex models, whereas the results become comparable for models with high spatial variability of electrical conductivities. The linear dependence of the modelling error on the computing time proved to be model-independent. This feature will allow very efficient computations using large-scale models as soon as our experimental code is optimized in terms of its implementation.
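
    The sparsity property (ii) can be illustrated in one dimension with PyWavelets: a piecewise-smooth conductivity profile is captured by a small fraction of significant wavelet coefficients, so an adaptive scheme can concentrate degrees of freedom where the model actually varies (a toy stand-in, not the authors' 3-D geoelectric solver):

```python
# Wavelet sparsity of a piecewise-smooth model: keep only coefficients
# above a threshold and reconstruct with negligible error.
import numpy as np
import pywt

# Step in "conductivity" plus a smooth localized anomaly.
x = np.linspace(0.0, 1.0, 1024)
sigma = np.where(x < 0.4, 0.01, 0.1) + 0.02 * np.exp(-((x - 0.7) / 0.05) ** 2)

coeffs = pywt.wavedec(sigma, "db4", level=6)      # multilevel transform
flat, slices = pywt.coeffs_to_array(coeffs)
thresh = 1e-4 * np.max(np.abs(flat))
kept = np.abs(flat) > thresh                      # keep significant coeffs
print(f"kept {kept.sum()} of {flat.size} coefficients")

flat[~kept] = 0.0                                 # discard the rest
coeffs_t = pywt.array_to_coeffs(flat, slices, output_format="wavedec")
sigma_rec = pywt.waverec(coeffs_t, "db4")[: sigma.size]
print("max reconstruction error:", np.max(np.abs(sigma_rec - sigma)))
```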