
    The Validity of the Super-Particle Approximation during Planetesimal Formation

The formation mechanism of planetesimals in protoplanetary discs is hotly debated. Currently, the favoured model involves the accumulation of meter-sized objects within a turbulent disc, followed by a phase of gravitational instability. At best a few million particles can be simulated numerically, as opposed to the several trillion meter-sized particles expected in a real protoplanetary disc. Therefore, single particles are often used as super-particles to represent a distribution of many smaller particles. It is assumed that small-scale phenomena play no role, and particle collisions are not modelled. The super-particle approximation can only be valid in a collisionless or a strongly collisional system; in many recent numerical simulations, however, this is not the case. In this work we present new results from numerical simulations of planetesimal formation via gravitational instability. A scaled system is studied that does not require the use of super-particles. We find that the scaled particles can be used to model the initial phases of clumping if their properties are chosen such that all important timescales in the system are equivalent to those expected in a real protoplanetary disc. Constraints are given on the number of particles needed to achieve numerical convergence. We compare this new method to the standard super-particle approach and find that the super-particle approach produces unreliable results: both the onset of gravitational collapse and the resulting clump statistics depend on numerical artifacts such as the gravitational softening. Our results show that short-range interactions (collisions) have to be modelled properly. Comment: 10 pages, 7 figures, accepted for publication in Astronomy and Astrophysics
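The timescale-matching condition can be illustrated with a minimal sketch (all numerical values below are illustrative assumptions, not figures from the paper): scaled particles are valid stand-ins only if dimensionless ratios such as the collision timescale over the orbital period, t_coll / t_orb with t_coll ≈ 1/(n σ v_rel), are preserved when particle size and number density are rescaled.

```python
import numpy as np

G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
M_SUN = 1.989e30     # solar mass [kg]
AU = 1.496e11        # astronomical unit [m]

def orbital_period(a, m_star=M_SUN):
    """Keplerian orbital period at semi-major axis a [m]."""
    return 2.0 * np.pi * np.sqrt(a**3 / (G * m_star))

def collision_timescale(n, radius, v_rel):
    """t_coll ~ 1 / (n * sigma * v_rel) with geometric cross-section sigma = pi (2r)^2."""
    sigma = np.pi * (2.0 * radius) ** 2
    return 1.0 / (n * sigma * v_rel)

t_orb = orbital_period(5.0 * AU)

# Hypothetical "real" population of metre-sized boulders (illustrative numbers).
t_real = collision_timescale(n=1e-12, radius=1.0, v_rel=10.0)
ratio = t_real / t_orb                      # dimensionless ratio to preserve

# Scaled population: larger particles, so the number density must be chosen
# such that t_coll / t_orb stays the same as in the "real" system.
radius_scaled, v_scaled = 100.0, 10.0
n_scaled = 1.0 / (ratio * t_orb * np.pi * (2.0 * radius_scaled) ** 2 * v_scaled)

print(f"t_coll / t_orb (real): {ratio:.3e}")
print(f"required number density of scaled particles: {n_scaled:.3e} m^-3")
```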

    Rescaling the dynamics of evaporating drops

The dynamics of evaporation of wetting droplets has been investigated experimentally over an extended range of drop sizes, in order to provide trends relevant for a theoretical analysis. A model is proposed which generalises Tanner's law, allowing us to smooth out the singularities in both dissipation and evaporative flux at the moving contact line. A qualitative agreement is obtained, which represents a first step towards the solution of a very old, complex problem.

    Turning Optical Complex Media into Universal Reconfigurable Linear Operators by Wavefront Shaping

Performing linear operations using optical devices is a crucial building block in many fields, ranging from telecommunications to optical analogue computation and machine learning. For many of these applications, key requirements are robustness to fabrication inaccuracies and reconfigurability. Current designs of custom-tailored photonic devices or coherent photonic circuits only partially satisfy these needs. Here, we propose a way to perform linear operations by using complex optical media, such as multimode fibers or thin scattering layers, as a computational platform driven by wavefront shaping. Given a large random transmission matrix (TM) representing light propagation in such a medium, we can extract a desired smaller linear operator by finding suitable input and output projectors. We discuss fundamental upper bounds on the size of the linear transformations our approach can achieve and provide an experimental demonstration. For the latter, we first retrieve the complex medium's TM with a non-interferometric phase retrieval method. Then, we take advantage of the large number of degrees of freedom to find input wavefronts, using a Spatial Light Modulator (SLM), that cause the system composed of the SLM and the complex medium to act as a desired complex-valued linear operator on the optical field. We experimentally build several 16×16 complex-valued operators, and are able to switch from one to another at will. Our technique offers the prospect of reconfigurable, robust and easy-to-fabricate linear optical analogue computation units.
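The projector construction can be sketched with ordinary linear algebra. In the toy simulation below (not the authors' experimental code; matrix sizes and the choice of output modes are arbitrary assumptions), a random complex matrix stands in for the measured TM, and the minimum-norm input patterns that make a chosen block of output modes realise a target 16×16 operator are obtained with a pseudoinverse.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a measured transmission matrix: n_out output modes,
# n_in controllable SLM pixels, i.i.d. complex Gaussian entries.
n_out, n_in = 256, 4096
T = (rng.standard_normal((n_out, n_in))
     + 1j * rng.standard_normal((n_out, n_in))) / np.sqrt(2.0 * n_in)

# Target 16x16 complex-valued operator the composite system should realise.
k = 16
A = rng.standard_normal((k, k)) + 1j * rng.standard_normal((k, k))

# Output projector: keep the first k output modes (any choice of k rows works).
T_out = T[:k, :]

# Input projector: minimum-norm SLM patterns P such that T_out @ P == A.
# T_out is a wide, full-row-rank matrix, so the equality holds exactly.
P = np.linalg.pinv(T_out) @ A

x = rng.standard_normal(k) + 1j * rng.standard_normal(k)   # test input vector
y = T_out @ (P @ x)                                        # propagate through the "medium"
print(np.allclose(y, A @ x))                                # -> True
```

In a real experiment the SLM typically imposes extra constraints (e.g. phase-only modulation), so finding admissible input wavefronts is more involved than a single pseudoinverse; the sketch only captures the linear-algebra core of the projector idea.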

    Tunable high-index photonic glasses

Materials with extreme photonic properties, such as maximum diffuse reflectance, high albedo, or tunable band gaps, are essential in many current and future photonic devices and coatings. While photonic crystals, periodic anisotropic structures, are well established, their disordered counterparts, photonic glasses (PGs), are less understood despite their most interesting isotropic photonic properties. Here, we introduce a controlled high-index model PG system. It is made of monodisperse spherical TiO2 colloids to exploit strongly resonant Mie scattering for optimal turbidity. We report spectrally resolved, combined measurements of turbidity and light energy velocity from large monolithic crack-free samples. This material class reveals pronounced resonances enabled by the possibility to tune both the refractive index of the constituents, which have extremely low polydispersity, and their radius. All our results are rationalized by a model based on the energy coherent potential approximation, which is free of any fitting parameters. Surprisingly good quantitative agreement is found even at high index and elevated packing fraction. This class of PGs may be the key to optimized tunable photonic materials and is also central to understanding fundamental questions such as isotropic structural colors, random lasing, or strong light localization in 3D. Comment: Main text: 8 pages, 4 figures; Supporting Information: 5 pages, 5 figures

    Acceleration of Convergence in Dontchev’s Iterative Method for Solving Variational Inclusions

2000 Mathematics Subject Classification: 47H04, 65K10. In this paper we investigate the existence of a sequence $(x_k)$ satisfying $0 \in f(x_k) + \nabla f(x_k)(x_{k+1} - x_k) + \tfrac{1}{2}\nabla^2 f(x_k)(x_{k+1} - x_k)^2 + G(x_{k+1})$ and converging to a solution $x^*$ of the generalized equation $0 \in f(x) + G(x)$, where $f$ is a function and $G$ is a set-valued map acting in Banach spaces.
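For intuition only, the sketch below runs the same quadratic-model recursion in the simplest conceivable setting: a smooth scalar f with G ≡ {0}, so each step solves 0 = f(x_k) + f'(x_k)Δ + ½ f''(x_k)Δ² for the increment Δ. This is an illustrative special case, not the set-valued, Banach-space framework analysed in the paper.

```python
import numpy as np

def quadratic_model_step(f, df, d2f, x):
    """Solve 0 = f(x) + f'(x)*d + 0.5*f''(x)*d^2 for the step d
    (smallest-magnitude real root), i.e. the G = {0} special case."""
    a, b, c = 0.5 * d2f(x), df(x), f(x)
    if abs(a) < 1e-14:                      # quadratic term negligible: Newton step
        return -c / b
    disc = b * b - 4.0 * a * c
    if disc < 0.0:                          # no real root: fall back to Newton step
        return -c / b
    roots = ((-b + np.sqrt(disc)) / (2.0 * a), (-b - np.sqrt(disc)) / (2.0 * a))
    return min(roots, key=abs)

# Example: solve f(x) = exp(x) - 2 = 0, whose root is log 2.
f   = lambda x: np.exp(x) - 2.0
df  = lambda x: np.exp(x)
d2f = lambda x: np.exp(x)

x = 1.0
for k in range(6):
    x += quadratic_model_step(f, df, d2f, x)
    print(k, x)
print("target:", np.log(2.0))
```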

    The Distribution of High Redshift Galaxy Colors: Line of Sight Variations in Neutral Hydrogen Absorption

We model, via Monte Carlo simulations, the distribution of observed U-B, B-V, and V-I galaxy colors in the range 1.75 < z < 5 caused by variations in the line-of-sight opacity due to neutral hydrogen (HI). We also include HI internal to the source galaxies. Even without internal HI absorption, comparison of the distribution of simulated colors to the analytic approximations of Madau (1995) and Madau et al. (1996) reveals systematically different mean colors and scatter. Differences arise in part because we use more realistic distributions of column densities and Doppler parameters. However, there are also mathematical problems with applying mean and standard-deviation opacities, and such application yields unphysical results. These problems are corrected by our Monte Carlo approach. Including HI absorption internal to the galaxies generally diminishes the scatter in the observed colors at a given redshift, but for redshifts of interest this diminution only occurs in the colors using the bluest band-pass. Internal column densities below 10^17 cm^-2 do not affect the observed colors, while column densities above 10^18 cm^-2 yield a limiting distribution of high-redshift galaxy colors. As one application of our analysis, we consider the sample completeness as a function of redshift for a single spectral energy distribution (SED), given the multi-color selection boundaries for the Hubble Deep Field proposed by Madau et al. (1996). We argue that the only correct procedure for estimating the z > 3 galaxy luminosity function from color-selected samples is to measure the (observed) distribution of redshifts and intrinsic SED types, and then consider the variation in color for each SED and redshift. A similar argument applies to the estimation of the luminosity function of color-selected, high-redshift QSOs. Comment: accepted for publication in ApJ; 25 pages of text, 14 embedded figures
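The problem with applying mean opacities can be seen in a two-line numerical experiment: since the transmitted flux scales as e^(-τ) and τ fluctuates strongly from sightline to sightline, the sightline-averaged transmission ⟨e^(-τ)⟩ differs from e^(-⟨τ⟩). The sketch below uses an arbitrary, purely illustrative τ distribution (not one drawn from the paper) to show the size of that bias.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed, purely illustrative distribution of optical depths along random
# sightlines (lognormal: a few strong absorbers dominate the scatter).
tau = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)

mean_transmission = np.exp(-tau).mean()        # Monte Carlo average over sightlines
transmission_of_mean = np.exp(-tau.mean())     # "mean opacity" shortcut

print(f"<exp(-tau)> = {mean_transmission:.3f}")
print(f"exp(-<tau>) = {transmission_of_mean:.3f}")
# Jensen's inequality guarantees <exp(-tau)> >= exp(-<tau>): the shortcut
# underestimates the mean transmission and so biases synthetic colors.
```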

    Pre-logarithmic and logarithmic fields in a sandpile model

We consider the unoriented two-dimensional Abelian sandpile model on the half-plane with open and closed boundary conditions, and relate it to the boundary logarithmic conformal field theory with central charge c = -2. Building on previous results, we first perform a complementary lattice analysis of the operator effecting the change of boundary condition between open and closed, which confirms that this operator is a weight -1/8 boundary primary field whose fusion agrees with lattice calculations. We then consider the operators corresponding to the unit-height variable and to a mass insertion at an isolated site of the upper half-plane, and compute their one-point functions in the presence of a boundary containing the two kinds of boundary conditions. We show that the scaling limit of the mass insertion operator is a weight-zero logarithmic field. Comment: 18 pages, 9 figures. v2: minor corrections + added appendix

    High efficiency GaAs-Ge tandem solar cells grown by MOCVD

High conversion efficiency and low weight are obviously desirable for solar cells intended for space applications. One promising structure is GaAs on Ge. The advantages of using Ge wafers as substrates include the following: they offer high efficiency by forming a two-junction tandem cell; their low weight combined with superior strength allows the use of thin (3 mil) wafers; and they are a good substrate for GaAs, being lattice matched, thermal-expansion matched, and available as large-area wafers.

    Critical point network for drainage between rough surfaces

In this paper, we present a network method for computing two-phase flows between two rough surfaces with significant contact areas. Low-capillary-number drainage is investigated here, since one-phase flows have been investigated in previous contributions. An invasion percolation algorithm is presented for modeling the slow displacement of a wetting fluid by a non-wetting one between two rough surfaces. A short-correlated Gaussian process is used to model the random rough surfaces. The algorithm is based on a network description of the fracture aperture field. The network is constructed from the identification of critical points (saddles and maxima) of the aperture field. The invasion potential is determined by examining the drainage process in a flat mini-channel. A direct comparison between numerical predictions and experimental visualizations on an identical geometry has been performed for one realization of an artificial fracture with a moderate fractional contact area of about 0.3. Good agreement is found between predictions and observations.
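The invasion step of such an algorithm can be sketched generically with a priority queue: starting from the inlet, the non-wetting phase repeatedly invades the accessible node with the lowest entry threshold, taken here simply as the inverse of the local aperture. The code below is a minimal, generic illustration on a lattice built from a clipped Gaussian aperture field (the function and parameters are assumptions for the sketch), not the critical-point network construction of the paper.

```python
import heapq
import numpy as np

def invasion_percolation(aperture):
    """Site invasion percolation on a 2D aperture field, invading from the top
    row until breakthrough. Entry potential ~ 1/aperture; zero aperture means
    a contact zone and is never invaded."""
    nrows, ncols = aperture.shape
    invaded = np.zeros_like(aperture, dtype=bool)
    frontier = []

    def push(i, j):
        if 0 <= i < nrows and 0 <= j < ncols and not invaded[i, j] and aperture[i, j] > 0:
            heapq.heappush(frontier, (1.0 / aperture[i, j], i, j))

    for j in range(ncols):                      # inlet: top boundary
        push(0, j)

    while frontier:
        _, i, j = heapq.heappop(frontier)
        if invaded[i, j]:
            continue
        invaded[i, j] = True
        if i == nrows - 1:                      # breakthrough at the outlet
            break
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            push(i + di, j + dj)
    return invaded

rng = np.random.default_rng(2)
# Gaussian aperture field; negative values clipped to zero act as contact
# zones (roughly a 0.3 fractional contact area with these parameters).
field = rng.normal(loc=0.25, scale=0.5, size=(64, 64)).clip(min=0.0)
pattern = invasion_percolation(field)
print(f"invaded fraction at breakthrough: {pattern.mean():.2f}")
```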

    Accuracy of generalized gradient approximation functionals for density functional perturbation theory calculations

We assess the validity of various exchange-correlation functionals for computing the structural, vibrational, dielectric, and thermodynamical properties of materials in the framework of density-functional perturbation theory (DFPT). We consider five generalized-gradient approximation (GGA) functionals (PBE, PBEsol, WC, AM05, and HTBS) as well as the local-density approximation (LDA) functional. We investigate a wide variety of materials, including a semiconductor (silicon), a metal (copper), and various insulators (SiO2 α-quartz and stishovite, ZrSiO4 zircon, and MgO periclase). For the structural properties, we find that PBEsol and WC are the closest to experiment, and AM05 performs only slightly worse. All three functionals actually improve over LDA and PBE, in contrast with HTBS, which is shown to fail dramatically for α-quartz. For the vibrational and thermodynamical properties, LDA performs surprisingly well. In the majority of the test cases, it significantly outperforms PBE, and also outperforms WC, PBEsol, and AM05, though by a smaller margin (and to the detriment of the structural parameters). HTBS also performs poorly for vibrational quantities. For the dielectric properties, none of the functionals can be put forward: they all (i) fail to reproduce the electronic dielectric constant, due to the well-known band-gap problem, and (ii) tend to overestimate the oscillator strengths (and hence the static dielectric constant).