
    Growth of Large Domain Epitaxial Graphene on the C-Face of SiC

    Growth of epitaxial graphene on the C-face of SiC has been investigated. Using a confinement controlled sublimation (CCS) method, we have achieved well-controlled growth and been able to observe the propagation of uniform monolayer graphene. Surface patterns uncover two important aspects of the growth: carbon diffusion and the stoichiometric requirement. Moreover, a new "stepdown" growth mode has been discovered. Via this mode, monolayer graphene domains can reach areas of hundreds of square micrometers while, most importantly, step bunching is avoided and the initial uniformly stepped SiC surface is preserved. Stepdown growth provides a possible route toward uniform epitaxial graphene at wafer scale without compromising the initially flat surface morphology of SiC. Comment: 18 pages, 8 figures

    Nonlinear Spring-Mass-Damper Modeling and Parameter Estimation of Train Frontal Crash Using CLGAN Model

    Due to the complexity of a train crash, it is challenging to describe and estimate mathematically. Although different mathematical models have been developed, it is still difficult to balance model complexity against estimation accuracy. This paper proposes a nonlinear spring-mass-damper model of a train frontal crash, which achieves high accuracy while maintaining low complexity. The Convolutional Long Short-Term Memory Generative Adversarial Network (CLGAN) model is applied to study the dynamic variation of the nonlinear parameters of the key components of a rail vehicle (e.g., the head car, the anticlimbing energy absorber, and the coupler buffer devices). Firstly, the nonlinear lumped model of the train frontal crash is built, and the physical parameters are deduced in twenty different cases using D'Alembert's principle. Secondly, the input/output relationship of the CLGAN model is determined, where the inputs are the nonlinear physical parameters under the twenty initial conditions and the output is the relationship among the crash parameters under other initial cases. Finally, the dynamic characteristics are accurately estimated throughout the crash process via training of the CLGAN model, so that crash processes under different given conditions can be described effectively. The estimation results exhibit good agreement with finite element (FE) simulations and experimental results. Furthermore, the CLGAN model shows great potential in nonlinear estimation, describing the variation of nonlinear spring damping better than the traditional model. The nonlinear spring-mass-damper modeling improves the speed and accuracy of train crash estimation and can offer guidance for structure optimization in the early design stage.
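To make the lumped-parameter idea concrete, the following is a minimal sketch of a nonlinear spring-mass-damper crash element: a single car of mass m striking a rigid interface through a cubic hardening spring and a linear damper, integrated with semi-implicit Euler. This is an illustration only; all parameter values and the force law are assumptions, not the paper's calibrated model or the CLGAN estimation procedure.

```python
def simulate_crash(m=40000.0, k1=2.0e6, k3=5.0e8, c=1.0e5,
                   v0=5.0, dt=1e-4, t_end=0.5):
    """Simulate a single-mass frontal impact against a nonlinear
    spring-damper interface. Returns (times, crush depths, velocities).
    All parameter values are illustrative, not from the paper."""
    x, v, t = 0.0, v0, 0.0          # crush depth, velocity, time
    ts, xs, vs = [t], [x], [v]
    while t < t_end:
        # Interface force: linear + cubic hardening spring, linear damper
        f = k1 * x + k3 * x**3 + c * v
        a = -f / m                  # deceleration of the car body
        v += a * dt                 # semi-implicit Euler: velocity first,
        x += v * dt                 # then position with the new velocity
        t += dt
        ts.append(t); xs.append(x); vs.append(v)
    return ts, xs, vs

ts, xs, vs = simulate_crash()
print(f"peak crush depth: {max(xs):.3f} m")
```

In a multi-car version of this kind of model, each vehicle and energy absorber becomes an additional mass and nonlinear element, and it is the parameters of those elements (here k1, k3, c) that the paper's CLGAN is trained to estimate across initial conditions.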

    Venice: Exploring Server Architectures for Effective Resource Sharing

    Consolidated server racks are quickly becoming the backbone of IT infrastructure for science, engineering, and business alike. These servers are still largely built and organized as if they were distributed, individual entities. Given that many fields increasingly rely on analytics over huge datasets, it makes sense to support flexible resource utilization across servers to improve cost-effectiveness and performance. We introduce Venice, a family of data-center server architectures that builds a strong communication substrate as a first-class resource for server chips. Venice provides a diverse set of resource-joining mechanisms that enable user programs to efficiently leverage non-local resources. To better understand the implications of design decisions about system support for resource sharing, we have constructed a hardware prototype that allows us to measure end-to-end performance of at-scale applications more accurately and to explore tradeoffs among performance, power, and resource-sharing transparency. We present results from our initial studies analyzing these tradeoffs when sharing memory, accelerators, or NICs. We find that it is particularly important to reduce or hide latency, that data-sharing access patterns should match the features of the communication channels employed, and that inter-channel collaboration can be exploited for better performance.