
    Super-Earth Atmospheres: Self-Consistent Gas Accretion and Retention

    Some recently discovered short-period, Earth- to Neptune-sized exoplanets (super-Earths) have low observed mean densities that can only be explained by voluminous gaseous atmospheres. Here, we study the conditions allowing the accretion and retention of such atmospheres. We self-consistently couple the nebular gas accretion onto rocky cores with the subsequent evolution of gas envelopes following the dispersal of the protoplanetary disk. Specifically, we address mass loss due to both photo-evaporation and cooling of the planet. We find that planets shed their outer layers (tens of percent of their mass) following the disk's dispersal (even without photo-evaporation), and their atmospheres shrink within a few Myr to a thickness comparable to the radius of the underlying rocky core. At this stage, atmospheres containing fewer particles than the core (equivalently, lighter than a few percent of the planet's mass) can be blown away by heat coming from the cooling core, while heavier atmospheres cool and contract on a timescale of a Gyr at most. By relating the mass-loss timescale to the accretion time, we analytically identify a Goldilocks region in the mass-temperature plane in which low-density super-Earths can be found: planets have to be massive and cold enough to accrete and retain their atmospheres, but not so massive or cold that they enter runaway accretion and become gas giants (Jupiters). We compare our results to the observed super-Earth population and find that low-density planets are indeed concentrated in the theoretically allowed region. Our analytical and intuitive model can be used to investigate possible super-Earth formation scenarios. Comment: Updated (refereed) version.
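
    As a toy illustration of the retention criterion quoted above (an envelope lighter than a few percent of the planet's mass is blown away by the cooling core, a heavier one is retained), the following sketch classifies example planets; the 5% threshold and the sample masses are illustrative assumptions, not values from the paper.

```python
# Toy illustration (not the paper's model): classify a post-disk-dispersal
# atmosphere as retained or lost using the abstract's stated criterion that
# envelopes lighter than a few percent of the planet's mass are blown away.
# The 5% threshold and the sample planets below are illustrative assumptions.

RETENTION_THRESHOLD = 0.05  # assumed "few percent" atmosphere mass fraction

def atmosphere_fate(core_mass_earth: float, atm_mass_earth: float) -> str:
    """Return 'retained' or 'lost' for a planet after disk dispersal."""
    total = core_mass_earth + atm_mass_earth
    fraction = atm_mass_earth / total
    return "retained" if fraction >= RETENTION_THRESHOLD else "lost"

if __name__ == "__main__":
    for core, atm in [(4.0, 0.05), (4.0, 0.4), (8.0, 1.0)]:
        print(f"core={core} M_E, atmosphere={atm} M_E -> {atmosphere_fate(core, atm)}")
```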

    Stability of Service under Time-of-Use Pricing

    We consider "time-of-use" pricing as a technique for matching supply and demand of temporal resources with the goal of maximizing social welfare. Relevant examples include energy, computing resources on a cloud computing platform, and charging stations for electric vehicles, among many others. A client (job) in this setting has a window of time during which it needs service and a particular value for obtaining it. We assume a stochastic model for demand, where each job materializes with some probability via an independent Bernoulli trial. Given a per-time-unit pricing of resources, any realized job will first try to get served by the cheapest available resource in its window and, failing that, will try to find service at the next cheapest available resource, and so on. Thus, the natural stochastic fluctuations in demand have the potential to lead to cascading overload events. Our main result shows that setting prices so as to optimally handle the expected demand works well: with high probability, when the actual demand is instantiated, the system is stable and the expected value of the jobs served is very close to that of the optimal offline algorithm. Comment: To appear in STOC'1
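
    A minimal sketch of the demand model and the greedy client behavior described above: each job materializes via an independent Bernoulli trial and, if realized, takes the cheapest still-available slot within its window. The prices, windows, values, and probabilities are made-up examples, and this shows only the client dynamics, not the paper's price-setting or stability analysis.

```python
import random

# Sketch of the stochastic demand model and greedy client behavior:
# realized jobs take the cheapest still-available slot in their window.
# All numbers below are illustrative assumptions.

random.seed(0)
prices = {0: 1.0, 1: 2.0, 2: 1.5, 3: 3.0}   # per-time-unit price of each slot
available = set(prices)                      # slots not yet taken

# Each job: (window of acceptable slots, value for service, realization probability)
jobs = [
    ({0, 1}, 5.0, 0.7),
    ({0, 1, 2}, 4.0, 0.5),
    ({2, 3}, 6.0, 0.9),
]

welfare = 0.0
for window, value, prob in jobs:
    if random.random() >= prob:
        continue                              # job did not materialize
    candidates = sorted(window & available, key=prices.get)
    if candidates:                            # serve at the cheapest available slot
        available.remove(candidates[0])
        welfare += value

print("realized social welfare:", welfare)
```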

    Efficient Out-of-Core Algorithms for Linear Relaxation Using Blocking Covers

    When a numerical computation fails to fit in the primary memory of a serial or parallel computer, a so-called "out-of-core" algorithm, which moves data between primary and secondary memories, must be used. In this paper, we study out-of-core algorithms for sparse linear relaxation problems in which each iteration of the algorithm updates the state of every vertex in a graph with a linear combination of the states of its neighbors. We give a general method that can save substantially on the I/O traffic for many problems. For example, our technique allows a computer with M words of primary memory to perform T = Ω(M^{1/5}) cycles of a multigrid algorithm for a two-dimensional elliptic solver over an n-point domain using only Θ(nT/M^{1/5}) I/O transfers, as compared with the naive algorithm, which requires Ω(nT) I/Os. Our method depends on the existence of a "blocking" cover of the graph that underlies the linear relaxation. A blocking cover has the property that the subgraphs forming the cover have large diameters once a small number of vertices have been removed. The key idea in our method is to introduce a variable for each removed vertex for each time step of the algorithm. We maintain linear dependences among the removed vertices, thereby allowing each subgraph to be iteratively relaxed without external communication. We give a general theorem relating blocking covers to I/O-efficient relaxation schemes. We also give an automatic method for finding blocking covers for certain classes of graphs, including planar graphs and d-dimensional simplicial graphs with constant aspect ratio (i.e., graphs that arise from dividing d-space into "well-shaped" polyhedra). As a result, we can perform T iterations of linear relaxation on any n-vertex planar graph using only Θ(n + nT lg n / M^{1/4}) I/Os, or on any n-node d-dimensional simplicial graph with constant aspect ratio using only Θ(n + nT lg n / M^{Ω(1/d)}) I/Os.
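
    For concreteness, here is a minimal in-core sketch of the linear relaxation primitive the paper makes I/O-efficient: each iteration replaces every vertex's state with a linear combination of its neighbors' states. The blocking-cover machinery itself is not shown, and the graph, weights, and initial states are illustrative assumptions.

```python
# Minimal in-core sketch of linear relaxation (Jacobi-style updates); the
# blocking-cover I/O optimization is not implemented here. The toy graph,
# weights, and initial states are illustrative assumptions.

def relax(neighbors, weights, state, iterations):
    """Each iteration replaces every vertex's state with a weighted
    combination of its neighbors' states."""
    for _ in range(iterations):
        new_state = {}
        for v, nbrs in neighbors.items():
            new_state[v] = sum(weights[(v, u)] * state[u] for u in nbrs)
        state = new_state
    return state

# Tiny 4-cycle with uniform averaging weights.
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
weights = {(v, u): 0.5 for v, nbrs in neighbors.items() for u in nbrs}
state = {0: 1.0, 1: 0.0, 2: 0.0, 3: 0.0}

print(relax(neighbors, weights, state, iterations=3))
```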

    Femtosecond-scale switching based on excited free-carriers

    We describe novel optical switching schemes operating at femtosecond time scales by employing free-carrier (FC) excitation. Such unprecedented switching times are made possible by spatially patterning the density of the excited FCs. In the first realization, we rely on diffusion, i.e., on the nonlocality of the FC nonlinear response of the semiconductor, to erase the initial FC pattern and thereby eliminate the reflectivity of the system. In the second realization, we erase the FC pattern by launching a second pump pulse at a controlled delay. We discuss the advantages and limitations of the proposed approaches and demonstrate their potential applicability for switching ultrashort pulses propagating in silicon waveguides. We show switching efficiencies of up to 50% for 100 fs pump pulses, an unusually high level of efficiency for such a short interaction time and a result of the strong FC nonlinearity. Because of saturation and pattern effects, these schemes are suited to switching applications that require femtosecond features but standard repetition rates. Such applications include switching of ultrashort pulses, femtosecond spectroscopy (gating), time reversal of short pulses for aberration compensation, and many more. This approach is also the starting point for ultrafast amplitude modulation and a new route toward the spatio-temporal shaping of short optical pulses.
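
    The first scheme relies on diffusion washing out a spatially patterned carrier density. The toy 1-D simulation below illustrates that mechanism: a sinusoidal free-carrier grating decays under diffusion, taking the induced reflectivity with it. The grid size, time step, and diffusion constant are arbitrary illustrative values, not silicon parameters.

```python
import math

# Toy 1-D diffusion of a sinusoidal free-carrier grating, illustrating how the
# nonlocal FC response (diffusion) erases the initial pattern. All numerical
# values are arbitrary illustrative assumptions.

N = 100                       # grid points (periodic boundaries)
D = 0.2                       # diffusion coefficient (arbitrary units)
dt, dx = 0.1, 1.0             # time step and spacing (stable: D*dt/dx**2 < 0.5)
density = [1.0 + math.cos(2 * math.pi * 5 * i / N) for i in range(N)]  # FC grating

def grating_depth(rho):
    return max(rho) - min(rho)  # modulation depth of the carrier pattern

for step in range(2001):
    if step % 500 == 0:
        print(f"step {step:4d}: grating depth = {grating_depth(density):.3f}")
    # explicit finite-difference diffusion update
    density = [
        density[i] + D * dt / dx**2 *
        (density[(i - 1) % N] - 2 * density[i] + density[(i + 1) % N])
        for i in range(N)
    ]
```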

    Central dislocation of the hip secondary to insufficiency fracture

    We present a case report of a 45-year-old man who sustained a central dislocation of the hip secondary to an insufficiency fracture of the acetabulum. At the time of presentation he was on alendronate therapy for osteoporosis, which had been investigated previously. CT scanning of the pelvis was useful for pre-operative planning; it confirmed collapse of the femoral head but no discontinuity of the pelvis. The femoral head was morcellized and used as bone graft for the acetabular defect, and an uncemented total hip replacement was performed.

    Cache-conscious scheduling of streaming applications

    This paper considers the problem of scheduling streaming applications on uniprocessors in order to minimize the number of cache misses. Streaming applications are represented as a directed graph (or multigraph), where nodes are computation modules and edges are channels. When a module fires, it consumes some data items from its input channels and produces some items on its output channels. In addition, each module may have some state (either code or data) which represents the memory locations that must be loaded into cache in order to execute the module. We consider synchronous dataflow graphs where the input and output rates of modules are known in advance and do not change during execution. We also assume that the state size of modules is known in advance. Our main contribution is to show that for a large and important class of streaming computations, cache-efficient scheduling is essentially equivalent to solving a constrained graph partitioning problem. A streaming computation from this class has a cache-efficient schedule if and only if its graph has a low-bandwidth partition of the modules into components (subgraphs) whose total state fits within the cache, where the bandwidth of the partition is the number of data items that cross intercomponent channels per data item that enters the graph. Given a good partition, we describe a runtime strategy for scheduling two classes of streaming graphs: pipelines, where the graph consists of a single directed chain, and a fairly general class of directed acyclic graphs (dags) with some additional restrictions. The runtime scheduling strategy consists of adding large external buffers at the input and output edges of each component, allowing each component to be executed many times. Partitioning enables a reduction in cache misses in two ways. First, any items that are generated on edges internal to subgraphs are never written out to memory, but remain in cache. Second, each subgraph is executed many times, allowing the state to be reused. We prove the optimality of this runtime scheduling for all pipelines and for dags that meet certain conditions on buffer-size requirements. Specifically, we show that with constant-factor memory augmentation, partitioning on these graphs guarantees the optimal number of cache misses to within a constant factor. For the pipeline case, we also prove that such a partition can be found in polynomial time. For the dags, we prove optimality if a good partition is provided; the partitioning problem itself is NP-complete.
    National Science Foundation (U.S.) (Grant CCF-1150036); National Science Foundation (U.S.) (Grant CNS-1017058); National Science Foundation (U.S.) (Grant CCF-0937860); United States-Israel Binational Science Foundation (Grant 2010231)
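
    To make the partition-bandwidth notion concrete, the sketch below computes the bandwidth of a candidate partition of a small synchronous dataflow pipeline: the number of data items crossing inter-component channels per item entering the graph. The pipeline, its per-firing rates, and the chosen partition are made-up examples, not from the paper.

```python
# Illustrative sketch (not the paper's algorithm): compute the "bandwidth" of a
# partition of a synchronous dataflow graph, i.e. the number of data items that
# cross inter-component channels per item entering the graph. The pipeline,
# its rates, and the partition below are made-up examples.

# Channels of a 4-stage pipeline: (producer, consumer, items sent per input item)
channels = [
    ("source", "filter", 1.0),
    ("filter", "decimate", 1.0),
    ("decimate", "encode", 0.5),   # decimation halves the data rate
    ("encode", "sink", 0.5),
]

# A candidate partition into cache-sized components.
partition = {
    "source": 0, "filter": 0,        # component 0
    "decimate": 1, "encode": 1,      # component 1
    "sink": 2,                       # component 2
}

def bandwidth(channels, partition):
    """Items crossing between components per item entering the graph."""
    return sum(rate for prod, cons, rate in channels
               if partition[prod] != partition[cons])

print("partition bandwidth:", bandwidth(channels, partition))  # -> 1.5
```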

    Optimal rotations of deformable bodies and orbits in magnetic fields

    Deformations can induce rotation with zero angular momentum, where dissipation is a natural "cost function". This gives rise to an optimization problem of finding the most effective rotation with zero angular momentum. For certain plastic and viscous media in two dimensions, the optimal path is the orbit of a charged particle on a surface of constant negative curvature in a magnetic field whose total flux is half a quantum unit. Comment: 4 pages RevTeX, 4 figures + animation in multiframe GIF format.

    Chaos Thresholds in finite Fermi systems

    The development of quantum chaos in finite interacting Fermi systems is considered. At sufficiently high excitation energy, the direct two-particle interaction may mix an exponentially large number of simple Slater-determinant states into an eigenstate. Nevertheless, the transition from Poisson to Wigner-Dyson statistics of energy levels is governed by the effective high-order interaction between states very distant in Fock space. The concrete form of the transition depends on the way one chooses to handle the factorial divergence of the number of Feynman diagrams. In the proposed scheme, the change of statistics takes the form of a narrow phase transition and may happen even below the direct-interaction threshold. Comment: 9 pages, REVTEX, 2 eps figures. Enlarged version.
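
    The two limiting level statistics mentioned above can be illustrated numerically. The sketch below (not the paper's model) compares the nearest-neighbor spacing-ratio statistic for uncorrelated (Poisson) levels with that of a Gaussian Orthogonal Ensemble matrix, whose spectrum obeys Wigner-Dyson statistics; it assumes NumPy is available.

```python
import numpy as np

# Numerical illustration of Poisson vs. Wigner-Dyson level statistics using
# the nearest-neighbor spacing-ratio statistic <min(r, 1/r)>, which is
# approximately 0.386 for Poisson levels and approximately 0.53 for the GOE.

rng = np.random.default_rng(0)
N = 2000

def mean_spacing_ratio(levels):
    s = np.diff(np.sort(levels))            # nearest-neighbor spacings
    r = s[1:] / s[:-1]
    return np.mean(np.minimum(r, 1.0 / r))  # dimensionless ratio statistic

# Poisson: independent, uniformly distributed levels.
poisson_levels = rng.uniform(0.0, 1.0, N)

# Wigner-Dyson: spectrum of a Gaussian Orthogonal Ensemble matrix.
A = rng.normal(size=(N, N))
goe_levels = np.linalg.eigvalsh((A + A.T) / 2.0)

print("Poisson      <r> =", round(mean_spacing_ratio(poisson_levels), 3))
print("Wigner-Dyson <r> =", round(mean_spacing_ratio(goe_levels), 3))
```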