6,833 research outputs found

    Simulation of metal powder packing behaviour in laser-based powder bed fusion

    Laser-based powder bed fusion (L-PBF) is an additive manufacturing method in which metal powder is fused into solid parts, layer by layer. L-PBF shows high promise for the manufacture of functional tungsten parts, but developing tungsten powder feedstock for L-PBF processing is demanding and expensive. Computer simulation is therefore being explored as a tool for tungsten powder feedstock development at EOS Finland Oy, where this thesis work was carried out. The aim of this thesis was to develop a simulation model of the recoating process of an EOS M 290 L-PBF system, together with a validation method for the simulation. The validated simulation model can be used to evaluate the applicability of the simulation software (FLOW-3D DEM) to powder material development, and potentially to serve as a platform for future work with tungsten powder. To reduce complexity and uncertainty, the irregular tungsten powder was not yet simulated; a well-characterized, spherical EOS IN718 powder feedstock was used instead. The validation experiment is based on building a low, enclosed wall with the M 290 L-PBF system. Recoated powder is trapped inside as the enclosure is built, making it possible to remove the sampled powder from a known volume and thus to measure the packing density (PD) of the powder bed. The experiment was repeated five times, and several sources of error were also quantified. The average PD was 52% with a standard deviation of 0.2%. The simulation was modelled after the IN718 powder and the corresponding process used in the M 290 system. Material-related input values were obtained by dynamic image analysis, pycnometry, rheometry, and from the literature. PD was measured with six different methods, and the method considered most analogous to the practical validation experiment yielded a PD of 52%. Various particle behaviour phenomena were also observed and analysed. Many of the powder bed characterization methods found in the literature were not applicable to L-PBF processing or were not representative of the simulated conditions. Many simulation studies were also found to use no validation, or to use a validation method not based on the investigated phenomena. The validation method developed in this thesis accurately represents the simulated conditions and was found to produce reliable and repeatable results. The simulation model, parametrized with values acquired from practical experiments or the literature, closely matched the validation experiment and can therefore be considered a faithful representation of the powder recoating process of an EOS M 290. The model can serve as a platform for future development of tungsten powder simulation.
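    As a rough illustration (not taken from the thesis), the packing-density calculation behind the trapped-powder experiment can be sketched as follows: PD is the apparent density of the sampled powder divided by the bulk density of the material. All masses, the enclosure volume, and the IN718 bulk density below are assumed example values.

```python
# Hypothetical sketch of the packing-density (PD) calculation implied by the
# validation experiment: powder trapped in an enclosed wall of known volume.
# All numbers are illustrative, not the thesis's measured values.

def packing_density(powder_mass_g: float,
                    enclosure_volume_cm3: float,
                    material_density_g_cm3: float) -> float:
    """PD = apparent powder density / bulk material density."""
    apparent_density = powder_mass_g / enclosure_volume_cm3
    return apparent_density / material_density_g_cm3

# IN718 bulk density is roughly 8.19 g/cm^3 (literature value).
samples_g = [42.6, 42.5, 42.7, 42.6, 42.6]   # five repeated experiments (illustrative)
volume_cm3 = 10.0                             # known enclosure volume (illustrative)

pds = [packing_density(m, volume_cm3, 8.19) for m in samples_g]
mean_pd = sum(pds) / len(pds)
std_pd = (sum((p - mean_pd) ** 2 for p in pds) / (len(pds) - 1)) ** 0.5
print(f"mean PD = {mean_pd:.1%}, std = {std_pd:.2%}")
```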

    Is attention all you need in medical image analysis? A review

    Medical imaging is a key component of clinical diagnosis, treatment planning, and clinical trial design, accounting for almost 90% of all healthcare data. Convolutional neural networks (CNNs) have achieved performance gains in medical image analysis (MIA) over recent years. CNNs can efficiently model local pixel interactions and can be trained on small-scale medical imaging data. The main disadvantage of typical CNN models is that they ignore global pixel relationships within images, which limits their ability to generalise to out-of-distribution data with different 'global' information. Recent progress in artificial intelligence has given rise to Transformers, which can learn global relationships from data; however, full Transformer models need to be trained on large-scale data and involve tremendous computational complexity. Attention and Transformer components (Transf/Attention), which preserve the ability to model global relationships, have been proposed as lighter alternatives to full Transformers. Recently, there has been an increasing trend to cross-pollinate the complementary local and global properties of CNN and Transf/Attention architectures, leading to a new era of hybrid models. Recent years have witnessed substantial growth in hybrid CNN-Transf/Attention models across diverse MIA problems. In this systematic review, we survey existing hybrid CNN-Transf/Attention models, review and unravel key architectural designs, analyse breakthroughs, and evaluate current and future opportunities as well as challenges. We also introduce a comprehensive analysis framework on generalisation opportunities of scientific and clinical impact, from which new data-driven domain generalisation and adaptation methods can be stimulated.
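    A minimal sketch of the hybrid local-global idea the review surveys: convolutions model local pixel interactions, and a single lightweight self-attention layer models global relationships. This is a generic illustration of the pattern, not any specific architecture from the review; all names and dimensions are assumptions.

```python
# Minimal sketch of a hybrid CNN-Transf/Attention block: convolutions capture
# local pixel interactions, one self-attention layer captures global ones.
# Generic illustration only; not a specific model from the review.
import torch
import torch.nn as nn

class HybridBlock(nn.Module):
    def __init__(self, channels: int = 64, heads: int = 4):
        super().__init__()
        self.local = nn.Sequential(              # local feature extractor
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.local(x)                        # (B, C, H, W)
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)    # (B, H*W, C) token sequence
        attn_out, _ = self.attn(tokens, tokens, tokens)
        tokens = self.norm(tokens + attn_out)    # residual + layer norm
        return tokens.transpose(1, 2).reshape(b, c, h, w)

feats = HybridBlock()(torch.randn(1, 64, 32, 32))  # e.g. features from a CNN stem
print(feats.shape)  # torch.Size([1, 64, 32, 32])
```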

    Modelling, Monitoring, Control and Optimization for Complex Industrial Processes

    This reprint includes 22 research papers and an editorial collected from the Special Issue "Modelling, Monitoring, Control and Optimization for Complex Industrial Processes", highlighting recent research advances and emerging research directions in complex industrial processes. It aims to promote the research field and to benefit readers from both academic communities and industrial sectors.

    Review of Methodologies to Assess Bridge Safety During and After Floods

    This report summarizes a review of technologies used to monitor bridge scour, with an emphasis on techniques appropriate for testing during and immediately after design flood conditions. The goal of this study is to identify potential technologies and strategies for the Illinois Department of Transportation that may be used to enhance the reliability of bridge safety monitoring during floods at local to state levels. The research team conducted a literature review of technologies that have been explored by state departments of transportation (DOTs) and national agencies, as well as state-of-the-art technologies that have not been extensively employed by DOTs. The review included informational interviews with representatives of DOTs and relevant industry organizations. Recommendations include considering (1) acquisition of tethered kneeboard- or surf-ski-mounted single-beam sonars for rapid deployment by local agencies, (2) acquisition of remote-controlled vessels mounted with single-beam and side-scan sonars for statewide deployment, (3) development of large-scale particle image velocimetry systems using remote-controlled drones for measuring stream velocity and direction during floods (see the sketch below), (4) physical modeling to develop Illinois-specific hydrodynamic loading coefficients for Illinois bridges during flood conditions, and (5) development of holistic risk-based bridge assessment tools that incorporate structural, geotechnical, hydraulic, and scour measurements to provide rapid feedback for bridge closure decisions. (IDOT-R27-SP50)
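    A minimal sketch of the cross-correlation step at the core of large-scale particle image velocimetry (recommendation 3): surface displacement between two drone video frames is estimated from the peak of an FFT-based cross-correlation, then scaled to a velocity. The window size, frame rate, and ground resolution below are hypothetical, and a real LSPIV system would add orthorectification and sub-pixel peak fitting.

```python
# Minimal sketch of the correlation step behind large-scale particle image
# velocimetry (LSPIV). Frame interval and ground resolution are hypothetical.
import numpy as np

def window_displacement(win_a: np.ndarray, win_b: np.ndarray) -> tuple[float, float]:
    """Integer-pixel displacement of win_b relative to win_a, taken from the
    peak of an FFT-based circular cross-correlation."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.real(np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))))
    corr = np.fft.fftshift(corr)                 # zero lag moves to the center
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    cy, cx = corr.shape[0] // 2, corr.shape[1] // 2
    return float(cy - dy), float(cx - dx)

# Synthetic demo: a surface texture shifted by (3, 5) pixels between frames.
rng = np.random.default_rng(0)
frame_a = rng.random((64, 64))
frame_b = np.roll(frame_a, shift=(3, 5), axis=(0, 1))
dy, dx = window_displacement(frame_a, frame_b)   # recovers (3.0, 5.0)

dt = 1 / 30          # frame interval at 30 fps (hypothetical)
m_per_px = 0.02      # ground resolution from drone altitude (hypothetical)
print(f"surface velocity ~ ({dx * m_per_px / dt:.2f}, {dy * m_per_px / dt:.2f}) m/s")
```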

    Modeling, Simulation and Data Processing for Additive Manufacturing

    Additive manufacturing (AM) or, more commonly, 3D printing is one of the fundamental elements of Industry 4.0 and the fourth industrial revolution. It has shown its potential in, for example, the medical, automotive, aerospace, and spare-part sectors. Personal manufacturing, complex and optimized parts, short-series manufacturing, and local on-demand manufacturing are some of its current benefits. Businesses based on AM have experienced double-digit growth in recent years. Accordingly, we have witnessed considerable efforts to develop processes and materials in terms of speed, cost, and availability, which continually open up new applications and business possibilities that did not previously exist. Most research has focused on material and AM process development, or on efforts to utilize existing materials and processes in industrial applications. However, improving the understanding and simulation of materials and AM processes, and understanding the effect of each step in the AM workflow, can increase performance even further. The best way to benefit from AM is to understand all of the steps involved, from design and simulation through additive manufacturing and post-processing to the actual application. The objective of this Special Issue was to provide a forum for researchers and practitioners to exchange their latest achievements and to identify critical issues and challenges for future investigations on "Modeling, Simulation and Data Processing for Additive Manufacturing". The Special Issue consists of 10 original full-length articles on the topic.

    Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries

    This two-volume set, LNCS 12962 and 12963, constitutes the thoroughly refereed proceedings of the 7th International MICCAI Brainlesion Workshop, BrainLes 2021, as well as the RSNA-ASNR-MICCAI Brain Tumor Segmentation (BraTS) Challenge, the Federated Tumor Segmentation (FeTS) Challenge, the Cross-Modality Domain Adaptation (CrossMoDA) Challenge, and the challenge on Quantification of Uncertainties in Biomedical Image Quantification (QUBIQ). These were held jointly at the 24th Medical Image Computing and Computer Assisted Intervention Conference, MICCAI 2021, in September 2021. The 91 revised papers presented in these volumes were selected from 151 submissions. Due to the COVID-19 pandemic, the conference was held virtually. This is an open access book.

    Migration Research in a Digitized World: Using Innovative Technology to Tackle Methodological Challenges

    This open access book explores the implications of the digital revolution for migration scholars' methodological toolkit. New information and communication technologies hold considerable potential to improve the quality of migration research by enabling previously non-viable solutions to a myriad of methodological challenges in this field of study. Combining cutting-edge migration scholarship with methodological expertise, the book addresses a range of crucial issues related to both researcher-designed data collections and the secondary use of "big data", highlighting opportunities as well as challenges and limitations. A valuable source for students and scholars engaged in migration research, the book will also be of keen interest to policymakers.

    Design and Code Optimization for Systems with Next-generation Racetrack Memories

    With the rise of computationally expensive application domains such as machine learning, genomics, and fluid simulation, the quest for high-performance, energy-efficient computing has gained unprecedented momentum. The significant increase in computing and memory devices in modern systems has resulted in an unsustainable surge in energy consumption, a substantial portion of which is attributed to the memory system. The scaling of conventional memory technologies, and their suitability for next-generation systems, is also questionable. This has led to the emergence and rise of non-volatile memory (NVM) technologies. Today, several NVM technologies at different stages of development are competing for rapid access to the market. Racetrack memory (RTM) is one such non-volatile memory technology, promising SRAM-comparable latency, reduced energy consumption, and unprecedented density compared to other technologies. However, RTM is sequential in nature: data in an RTM cell must be shifted to an access port before it can be accessed, and these shift operations incur performance and energy penalties. An ideal RTM, requiring at most one shift per access, can easily outperform SRAM; in the worst-case shifting scenario, however, RTM can be an order of magnitude slower than SRAM. This thesis presents an overview of RTM device physics, its evolution, strengths and challenges, and its application in the memory subsystem. We develop tools that enable the programmability and modeling of RTM-based systems. For shift minimization, we propose a set of techniques, including optimal, near-optimal, and evolutionary algorithms, for efficient scalar and instruction placement in RTMs. For array accesses, we explore schedule and layout transformations that eliminate the long overhead shifts in RTMs. We present an automatic compilation framework that analyzes static control flow programs and transforms the loop traversal order and memory layout to maximize accesses to consecutive RTM locations and minimize shifts. We develop a simulation framework called RTSim that models various RTM parameters and enables accurate architecture-level simulation. Finally, to demonstrate RTM's potential in non-von-Neumann in-memory computing paradigms, we exploit its device attributes to implement logic and arithmetic operations. As a concrete use case, we implement an entire hyperdimensional computing framework in RTM to accelerate a language recognition problem. Our evaluation shows considerable performance and energy improvements compared to conventional von Neumann models and state-of-the-art accelerators.
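    A minimal sketch of why data placement matters in RTM: with a single access port, every access shifts the track so the requested cell aligns with the port, so the total shift count for an access trace depends on where variables are placed. The greedy frequency-based placement below is only an illustration of the problem, not one of the thesis's optimal, near-optimal, or evolutionary algorithms; the trace and layouts are assumed.

```python
# Toy model of the RTM shift-cost problem on a single-port track.
# Illustrative only; not an algorithm from the thesis.
from collections import Counter

def shift_cost(trace: list[str], placement: dict[str, int]) -> int:
    """Total shifts for an access trace; the port starts at offset 0 and
    the track position persists between accesses."""
    port, cost = 0, 0
    for var in trace:
        cost += abs(placement[var] - port)   # shifts to align cell with port
        port = placement[var]
    return cost

trace = ["a", "b", "a", "c", "a", "b", "d", "a"]   # illustrative access trace

naive = {"a": 3, "b": 0, "c": 1, "d": 2}            # arbitrary placement
# Greedy: most frequently accessed variables closest to the initial port.
by_freq = [v for v, _ in Counter(trace).most_common()]
greedy = {v: i for i, v in enumerate(by_freq)}

print("naive :", shift_cost(trace, naive))   # 19 shifts
print("greedy:", shift_cost(trace, greedy))  # 12 shifts
```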

    Task-based Runtime Optimizations Towards High Performance Computing Applications

    The last decades have witnessed a rapid improvement in the computational capabilities of high-performance computing (HPC) platforms, thanks to hardware technology scaling. HPC architectures benefit from mainstream hardware advances, with many-core systems, deep hierarchical memory subsystems, non-uniform memory access, and an ever-increasing gap between computational power and memory bandwidth. This has necessitated continuous adaptation across the software stack to maintain high hardware utilization. In this HPC landscape of potentially million-way parallelism, task-based programming models associated with dynamic runtime systems are becoming more popular, fostering developer productivity at extreme scale by abstracting the underlying hardware complexity. In this context, this dissertation highlights how a software bundle powered by a task-based programming model can address the heterogeneous workloads engendered by HPC applications, here data redistribution, geostatistical modeling, and 3D unstructured mesh deformation. Data redistribution reshuffles data to optimize some objective for an algorithm; the objective can be multi-dimensional, such as improving computational load balance or decreasing communication volume or cost, with the ultimate goal of increasing efficiency and thereby reducing the time-to-solution of the algorithm. Geostatistical modeling, one of the prime motivating applications for exascale computing, is a technique for predicting desired quantities from geographically distributed data, based on statistical models and parameter optimization. Meshing the deformable contour of moving 3D bodies is an expensive operation that poses significant computational challenges in fluid-structure interaction (FSI) applications. This dissertation therefore proposes Redistribute-PaRSEC, ExaGeoStat-PaRSEC, and HiCMA-PaRSEC to tackle these HPC applications efficiently at extreme scale; they are evaluated on multiple HPC clusters, including AMD-based, Intel-based, and Arm-based CPU systems and an IBM-based multi-GPU system. This multidisciplinary work emphasizes the need for runtime systems to go beyond their primary responsibility of task scheduling on massively parallel hardware in order to service the next generation of scientific applications.
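    A toy sketch of the task-based execution model that runtimes like PaRSEC implement at scale: work is expressed as tasks with data dependencies, and the runtime launches whichever tasks are ready on available workers. Python threads stand in here for a real distributed runtime; the task names and DAG are assumptions for illustration.

```python
# Toy illustration of dependency-driven task execution. Real task-based
# runtimes (e.g., PaRSEC) add distributed scheduling, accelerators, and
# communication; this only shows the programming-model idea.
from concurrent.futures import ThreadPoolExecutor

def task(name: str, *inputs: float) -> float:
    print(f"run {name} on {inputs}")
    return sum(inputs) + 1.0

with ThreadPoolExecutor(max_workers=4) as pool:
    # A small task DAG: c depends on a and b; d depends on c.
    fa = pool.submit(task, "a")
    fb = pool.submit(task, "b")          # a and b are independent: run in parallel
    fc = pool.submit(lambda: task("c", fa.result(), fb.result()))
    fd = pool.submit(lambda: task("d", fc.result()))
    print("result:", fd.result())
```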