2,931 research outputs found

    Parametric Analysis of Particle Spreading with Discrete Element Method

    The spreading of metallic powder on the printing platform is vital in most additive manufacturing methods, including direct laser sintering. Several processing parameters such as particle size, inter-particle friction, blade speed, and blade gap size affect the spreading process and, therefore, the final product quality. The objective of this study is to parametrically analyze the particle flow behavior and the effect of the aforementioned parameters on the spreading process using the discrete element method (DEM). To effectively address the vast parameter space within computational constraints, novel parameter sweep algorithms based on low discrepancy sequences (LDS) are utilized in conjunction with parallel computing. Based on the parametric analysis, optimal material properties and machine setup are proposed for higher-quality spreading. Modeling suggests that lower friction, smaller particle size, lower blade speed, and a gap of two times the particle diameter result in a higher-quality spreading process. In addition, a two-parameter Weibull distribution is adopted to investigate the influence of particle size distribution. The result suggests that smaller particles with a narrower distribution produce a higher-quality flow, given a proper selection of the gap size. Finally, parallel computing, in conjunction with the LDS parameter sweep algorithm, effectively shrinks the parameter space and improves the overall computational efficiency.
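The low-discrepancy parameter sweep the abstract describes can be sketched with a Halton sequence, which fills a high-dimensional parameter space far more evenly than uniform random sampling. The parameter names and ranges below are illustrative assumptions for a DEM spreading study, not the study's actual values:

```python
# Sketch of a low-discrepancy (Halton) parameter sweep over DEM spreading
# parameters. Parameter names/ranges are hypothetical, chosen for illustration.

def halton(index, base):
    """Return the index-th element of the 1-D Halton sequence for a prime base."""
    result, f = 0.0, 1.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

def halton_sweep(n_samples, ranges):
    """Map an n-dimensional Halton sequence onto named parameter ranges."""
    primes = [2, 3, 5, 7, 11, 13]          # one coprime base per dimension
    assert len(ranges) <= len(primes)
    samples = []
    for i in range(1, n_samples + 1):
        point = {}
        for dim, (name, (lo, hi)) in enumerate(ranges.items()):
            u = halton(i, primes[dim])      # quasi-uniform in [0, 1)
            point[name] = lo + u * (hi - lo)  # rescale to the parameter range
        samples.append(point)
    return samples

# Hypothetical ranges: friction coefficient, particle diameter (m),
# blade speed (m/s), and gap expressed as a multiple of particle diameter.
ranges = {
    "friction":      (0.1, 0.6),
    "particle_diam": (20e-6, 80e-6),
    "blade_speed":   (0.01, 0.1),
    "gap_ratio":     (1.0, 4.0),
}
sweep = halton_sweep(256, ranges)
```

Each entry in `sweep` is one DEM run configuration; because Halton points are deterministic and well spaced, far fewer runs are needed to cover the space than with a full grid, which is what lets the sweep shrink the parameter space under parallel computing.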

    Survey and Analysis of Production Distributed Computing Infrastructures

    This report has two objectives. First, we describe a set of the production distributed infrastructures currently available, so that the reader has a basic understanding of them. This includes explaining why each infrastructure was created and made available and how it has succeeded and failed. The set is not complete, but we believe it is representative. Second, we describe the infrastructures in terms of their use, which is a combination of how they were designed to be used and how users have found ways to use them. Applications are often designed and created with specific infrastructures in mind, with both an appreciation of the existing capabilities provided by those infrastructures and an anticipation of their future capabilities. Here, the infrastructures we discuss were often designed and created with specific applications in mind, or at least specific types of applications. The reader should understand how the interplay between the infrastructure providers and the users leads to such usages, which we call usage modalities. These usage modalities are really abstractions that exist between the infrastructures and the applications; they influence the infrastructures by representing the applications, and they influence the applications by representing the infrastructures.

    Learning Scheduling Algorithms for Data Processing Clusters

    Efficiently scheduling data processing jobs on distributed compute clusters requires complex algorithms. Current systems, however, use simple generalized heuristics and ignore workload characteristics, since developing and tuning a scheduling policy for each workload is infeasible. In this paper, we show that modern machine learning techniques can generate highly efficient policies automatically. Our system, Decima, uses reinforcement learning (RL) and neural networks to learn workload-specific scheduling algorithms without any human instruction beyond a high-level objective such as minimizing average job completion time. Off-the-shelf RL techniques, however, cannot handle the complexity and scale of the scheduling problem. To build Decima, we had to develop new representations for jobs' dependency graphs, design scalable RL models, and invent RL training methods for dealing with continuous stochastic job arrivals. Our prototype integration with Spark on a 25-node cluster shows that Decima improves the average job completion time over hand-tuned scheduling heuristics by at least 21%, achieving up to 2x improvement during periods of high cluster load.
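The gap between a workload-oblivious heuristic and a workload-aware policy that the abstract motivates can be seen even in a toy single-server simulation: FIFO ignores job sizes, while shortest-job-first (SJF) exploits them and typically lowers average job completion time (JCT), the objective Decima optimizes. This is an illustrative sketch only, not Decima's RL approach or its Spark integration:

```python
# Toy non-preemptive scheduling simulation: compare average JCT under FIFO
# (pick earliest arrival) vs. SJF (pick smallest job). Illustrative only.
import random

def avg_jct(jobs, pick):
    """jobs: list of (arrival_time, size). pick chooses the next ready job."""
    jobs = sorted(jobs)                 # by arrival time
    t, i, ready, jcts = 0.0, 0, [], []
    while len(jcts) < len(jobs):
        # Admit every job that has arrived by the current time.
        while i < len(jobs) and jobs[i][0] <= t:
            ready.append(jobs[i]); i += 1
        if not ready:                   # idle until the next arrival
            t = jobs[i][0]
            continue
        job = pick(ready)
        ready.remove(job)
        t += job[1]                     # run the job to completion
        jcts.append(t - job[0])         # completion minus arrival
    return sum(jcts) / len(jcts)

random.seed(0)
# Overloaded workload: 200 jobs arriving over 100 time units, mean size 1.
jobs = [(random.uniform(0, 100), random.expovariate(1.0)) for _ in range(200)]
fifo = avg_jct(jobs, lambda q: min(q, key=lambda j: j[0]))  # earliest arrival
sjf  = avg_jct(jobs, lambda q: min(q, key=lambda j: j[1]))  # smallest size
```

Under high load, `sjf` comes out well below `fifo` here; Decima's point is that an RL policy can discover such workload-specific orderings (and much richer ones over dependency graphs) automatically rather than by hand-tuning.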

    Venice, California, Gentrification in a Neo-Bohemian Beach Town: Structural Violence, Power Structures and Ideological Justification for Injustice, Exclusion and Dispossession

    Postponed access: the file will be accessible after 2022-08-07. Master's thesis (SANT350, MASV-SAN).

    Pathways to poverty: Theoretical and empirical analyses

    The prevalence of poverty in advanced economies represents a challenge, both to economic theory and to society. We know that poverty is perpetuated by low levels of educational investment amongst disadvantaged children, but we have no credible theoretical explanation for the observed degree of that apparent underinvestment, and we have not yet developed sufficient policy tools to break the intergenerational cycle of deprivation. In response, this thesis undertakes theoretical and empirical analyses of the pathways that perpetuate poverty. I demonstrate that divergently low educational investment could arise as an equilibrium response to a grades-focussed educational system; I develop the existing state-of-the-art technique in econometric estimation of the educational production function; and I apply that technique to find strong empirical support for my theoretical model. In addition, my results show that the average child's propensity to think analytically has a substantial influence over their developmental pathway, which suggests that models of educational investment should adopt a generalisation of Expected Utility Theory that allows agents to maximise one of two possible objective functions.