30 research outputs found

    Landau damping in thin films irradiated by a strong laser field

    Full text link
    The rate of linear collisionless damping (Landau damping) in a classical electron gas confined to a heated ionized thin film is calculated. The general expression for the imaginary part of the dielectric tensor in terms of the parameters of the single-particle self-consistent electron potential is obtained. For the case of a deep rectangular well, it is explicitly calculated as a function of the electron temperature in the two limiting cases of specular and diffuse reflection of the electrons from the boundary of the self-consistent potential. For realistic experimental parameters, the contribution of Landau damping to the heating of the electron subsystem is estimated. It is shown that for films with a thickness below about 100 nm and for moderate laser intensities it may be comparable with or even dominate over electron-ion collisions and inner ionization. Comment: 15 pages, 2 figures.
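
    For orientation only (a hedged background formula, not the film-specific expression derived in the paper): for a Maxwellian bulk electron plasma in Gaussian units, the textbook Landau damping rate of long-wavelength Langmuir oscillations is

        \gamma_L \simeq \sqrt{\frac{\pi}{8}}\;\frac{\omega_p}{(k\lambda_D)^3}\,
                  \exp\!\left(-\frac{1}{2(k\lambda_D)^2}-\frac{3}{2}\right),
        \qquad
        \lambda_D=\sqrt{\frac{k_B T_e}{4\pi n e^2}},\quad
        \omega_p=\sqrt{\frac{4\pi n e^2}{m_e}},

    which illustrates, through \lambda_D(T_e), the kind of electron-temperature dependence that the thin-film calculation works out for confined electrons.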

    Changes in Floquet state structure at avoided crossings: delocalization and harmonic generation

    Get PDF
    Avoided crossings are common in the quasienergy spectra of strongly driven nonlinear quantum wells. In this paper we examine the sinusoidally driven particle in a square potential well to show that avoided crossings can alter the structure of Floquet states in this system. Two types of avoided crossings are identified: one type leads only to temporary changes (as a function of driving field strength) in Floquet state structure, while the second type can lead to permanent delocalization of the Floquet states. Radiation spectra from these latter states show a significant increase in high harmonic generation as the system passes through the avoided crossing. Comment: 8 pages with 10 figures, submitted to Physical Review.
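
    A minimal numerical sketch (Python, not the authors' code; units, basis truncation, and drive parameters below are illustrative assumptions) of how such quasienergies are obtained: build the one-period propagator of the sinusoidally driven square well in the unperturbed box basis and take the phases of its eigenvalues.

        import numpy as np
        from scipy.linalg import expm

        hbar = m = L = 1.0                    # natural units (assumption)
        N = 30                                # number of box eigenstates kept (truncation)
        n = np.arange(1, N + 1)

        # Unperturbed infinite-square-well energies E_n = n^2 pi^2 hbar^2 / (2 m L^2)
        H0 = np.diag(n**2 * np.pi**2 * hbar**2 / (2 * m * L**2))

        # Position (dipole) matrix elements <i|x|j> in the box basis
        X = np.zeros((N, N))
        for i in n:
            for j in n:
                if i == j:
                    X[i - 1, j - 1] = L / 2
                elif (i + j) % 2 == 1:
                    X[i - 1, j - 1] = -8 * L * i * j / (np.pi**2 * (i**2 - j**2) ** 2)

        def quasienergies(E0, omega, steps=400):
            """Quasienergies from the one-period propagator of H(t) = H0 + E0*sin(w t)*x."""
            T = 2 * np.pi / omega
            dt = T / steps
            U = np.eye(N, dtype=complex)
            for k in range(steps):            # time-ordered product of short-time propagators
                t = (k + 0.5) * dt
                H = H0 + E0 * np.sin(omega * t) * X
                U = expm(-1j * H * dt / hbar) @ U
            eps = -hbar * np.angle(np.linalg.eigvals(U)) / T   # defined modulo hbar*omega
            return np.sort(eps)

        # Sweeping the drive strength lets one watch quasienergy levels approach and repel.
        omega = 40.0                          # illustrative drive frequency
        for E0 in np.linspace(0.0, 200.0, 5):
            print(f"E0 = {E0:6.1f}:", np.round(quasienergies(E0, omega)[:4], 3))

    Tracking the eigenvectors of the propagator across such a sweep (not shown) is how changes in Floquet-state structure at an avoided crossing would be identified.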

    Reviews and syntheses: The promise of big diverse soil data, moving current practices towards future potential

    Full text link
    In the age of big data, soil data are more available and richer than ever, but – outside of a few large soil survey resources – they remain largely unusable for informing soil management and understanding Earth system processes beyond the original study. Data science has promised a fully reusable research pipeline where data from past studies are used to contextualize new findings and reanalyzed for new insight. Yet synthesis projects encounter challenges at all steps of the data reuse pipeline, including unavailable data, labor-intensive transcription of datasets, incomplete metadata, and a lack of communication between collaborators. Here, using insights from a diversity of soil, data, and climate scientists, we summarize current practices in soil data synthesis across all stages of database creation: availability, input, harmonization, curation, and publication. We then suggest new soil-focused semantic tools to improve existing data pipelines, such as ontologies, vocabulary lists, and community practices. Our goal is to provide the soil data community with an overview of current practices in soil data and where we need to go to fully leverage big data to solve soil problems in the next century.
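
    As a toy illustration of the harmonization step (a hypothetical sketch; the vocabulary, column names, and unit conversions below are invented for illustration and are not from the paper), mapping contributor columns onto a controlled vocabulary with canonical units might look like:

        import pandas as pd

        # Hypothetical controlled vocabulary: contributor column name -> (standard term, reported unit)
        VOCAB = {
            "SOC_pct": ("soil_organic_carbon", "percent"),
            "organic_C": ("soil_organic_carbon", "g/kg"),
            "bd": ("bulk_density", "g/cm3"),
        }

        # Conversions into canonical database units (assumed here: g/kg and g/cm3)
        TO_CANONICAL = {
            ("soil_organic_carbon", "percent"): lambda v: v * 10.0,   # % -> g/kg
            ("soil_organic_carbon", "g/kg"): lambda v: v,
            ("bulk_density", "g/cm3"): lambda v: v,
        }

        def harmonize(df: pd.DataFrame) -> pd.DataFrame:
            """Rename mapped columns to standard terms and convert values to canonical units."""
            out = {}
            for col in df.columns:
                if col not in VOCAB:
                    continue                  # unmapped columns are left for curator review
                term, unit = VOCAB[col]
                out[term] = TO_CANONICAL[(term, unit)](df[col])
            return pd.DataFrame(out)

        raw = pd.DataFrame({"SOC_pct": [1.2, 2.5], "bd": [1.31, 1.45]})
        print(harmonize(raw))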

    Multifaceted roles of GSK-3 and Wnt/β-catenin in hematopoiesis and leukemogenesis: opportunities for therapeutic intervention

    Get PDF
    Glycogen synthase kinase-3 (GSK-3) is well documented to participate in a complex array of critical cellular processes. It was initially identified in rat skeletal muscle as a serine/threonine kinase that phosphorylated and inactivated glycogen synthase. This versatile protein is involved in numerous signaling pathways that influence metabolism, embryogenesis, differentiation, migration, cell cycle progression and survival. Recently, GSK-3 has been implicated in leukemia stem cell pathophysiology and may be an appropriate target for its eradication. In this review, we will discuss the roles that GSK-3 plays in hematopoiesis and leukemogenesis, as well as how this pivotal kinase can interact with multiple signaling pathways such as Wnt/β-catenin, phosphoinositide 3-kinase (PI3K)/phosphatase and tensin homolog (PTEN)/Akt/mammalian target of rapamycin (mTOR), Ras/Raf/MEK/extracellular signal-regulated kinase (ERK), Notch and others. Moreover, we will discuss how targeting GSK-3 and these other pathways can improve leukemia therapy and may overcome therapeutic resistance. In summary, GSK-3 is a crucial regulatory kinase interacting with multiple pathways to control various physiological processes, as well as leukemia stem cells, leukemia progression and therapeutic resistance. GSK-3 and Wnt are clearly intriguing therapeutic targets.

    Data de- and re-dimensioning for optimized brokering access

    No full text
    Data brokering systems aim to facilitate the exchange of data and models between disciplines in an increasingly transparent manner, thereby accelerating scientific discovery. Researchers from many different yet complementary geoscience disciplines need to access datasets from other fields with significantly different data formats and, in most cases, time and space dimensionality differing from what their own field commonly uses. This causes problems with large datasets, as the difference in dimensionality often means that the entire dataset has to be read in order to provide the limited information the researcher is interested in. In this poster we present methods for removing the dimensionality from datasets, both physically on the data-serving side and, from a broker-based virtual perspective, by de- and re-dimensioning datasets so the data brokering system can quickly access the smaller subset of data in the correct dimensionality for any given scientific field.

    What we did

    We de- and re-dimensioned the large reanalysis dataset CFSR to test alternative data paradigms that enhance the performance of single-location extraction while maintaining the needed performance for spatial time-step extraction.

    Results

    We were able to increase single-location access performance by 10,000x, though spatial time-step access decreased by a factor of 10. Spatial requirements increased by a factor of 7.

    Discussion

    Cross-science data issues: Data brokering systems facilitate the exchange of data between disciplines, though the broker cannot be responsible for optimizing the data structures for all sciences. An optimum data paradigm for one science is likely not efficient for other sciences, owing to the native data dimensionalities of each field.

    Benchmark quality: While the computational systems hosting each dataset are very dissimilar for this benchmark, it is fair to say that the results are biased towards the CISL infrastructure.

    Hosting space requirements: Storage space must also be considered. Since the compression of shorter strings of values is less efficient than that of dense grids, and locational data has to be added to each point, the new paradigm for this study requires 4 times the storage space.

    Re-dimensioned usage: Over 22,000 single-location and multi-grid-point requests have been made to the TAMU and VTech servers, estimated at ~3000 day-equivalents of the RDA subset service, which would not be possible with the given computational resources.
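
    A minimal sketch of the kind of re-dimensioning described above (Python/xarray; the synthetic array, the variable name t2m, and the chunk sizes are illustrative assumptions, not the CFSR pipeline itself):

        import numpy as np
        import xarray as xr

        # Build a small synthetic (time, lat, lon) stand-in for a reanalysis variable.
        time = np.arange(1000)
        lat = np.linspace(-90, 90, 73)
        lon = np.linspace(0, 357.5, 144)
        data = np.random.rand(time.size, lat.size, lon.size).astype("float32")
        ds = xr.Dataset({"t2m": (("time", "lat", "lon"), data)},
                        coords={"time": time, "lat": lat, "lon": lon})

        # Native layout: one full spatial field per chunk -- fast for map/time-step
        # access, but a point time series must touch every chunk.
        ds.chunk({"time": 1, "lat": -1, "lon": -1}).to_zarr("native_layout.zarr", mode="w")

        # Re-dimensioned layout: the whole time axis in each chunk, small spatial tiles --
        # a single-location request now reads one small chunk, at the cost of slower
        # full-field reads and extra storage (the trade-off benchmarked above).
        ds.chunk({"time": -1, "lat": 8, "lon": 8}).to_zarr("timeseries_layout.zarr", mode="w")

        ts = xr.open_zarr("timeseries_layout.zarr")["t2m"].sel(
            lat=30.6, lon=263.6, method="nearest")
        print(ts.values[:5])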