End-of-life vehicle (ELV) recycling management: improving performance using an ISM approach
With the boom of its automobile industry, China has seen rapidly increasing car ownership. However, its end-of-life vehicle (ELV) recycling industry is still in its infancy, there is little systematic review of ELV recycling management, and adoption remains low in the domestic automobile industry. This study presents a literature review and employs an interpretive structural modeling (ISM) approach to identify the drivers of the Chinese ELV recycling business from the perspectives of government, recycling organizations, and consumers, so as to improve the sustainability of the automobile supply chain by providing strategic insights. The results of the ISM analysis show that regulations on auto factories, disassembly techniques, and value mining of the recycling business are the essential ingredients: improving these attributes is the most effective and efficient way to promote the ELV recycling business. The accompanying driving- and dependence-power analysis also provides guidance for improving the performance of ELV recycling in the Chinese market.
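The driving- and dependence-power analysis at the heart of ISM can be sketched in a few lines: take the drivers' pairwise influence matrix, compute its transitive closure (the final reachability matrix), then read driving power off the row sums and dependence power off the column sums. This is an illustrative sketch, not the paper's implementation, and the 4-driver adjacency matrix is hypothetical.

```python
# Illustrative ISM driving/dependence-power computation (not the paper's code).
def warshall_closure(adj):
    """Transitive closure of a boolean influence matrix (Warshall's algorithm)."""
    n = len(adj)
    reach = [row[:] for row in adj]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                reach[i][j] = reach[i][j] or (reach[i][k] and reach[k][j])
    return reach

def ism_powers(adj):
    """Driving power = row sum, dependence power = column sum of the final
    reachability matrix (diagonal set to 1: each driver reaches itself)."""
    n = len(adj)
    adj = [row[:] for row in adj]
    for i in range(n):
        adj[i][i] = 1
    reach = warshall_closure(adj)
    driving = [sum(row) for row in reach]
    dependence = [sum(reach[i][j] for i in range(n)) for j in range(n)]
    return driving, dependence

# Hypothetical 4-driver example: driver 0 influences 1, 1 influences 2,
# and 3 influences 2; influence propagates through the closure (0 reaches 2).
driving, dependence = ism_powers([[0, 1, 0, 0],
                                  [0, 0, 1, 0],
                                  [0, 0, 0, 0],
                                  [0, 0, 1, 0]])
print(driving, dependence)
```

In standard ISM/MICMAC terms, attributes with high driving power and low dependence sit at the base of the hierarchy and are the most effective levers for performance improvement.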
A scalable parallel finite element framework for growing geometries. Application to metal additive manufacturing
This work introduces an innovative parallel, fully-distributed finite element
framework for growing geometries and its application to metal additive
manufacturing. It is well-known that virtual part design and qualification in
additive manufacturing requires highly-accurate multiscale and multiphysics
analyses. Only high performance computing tools are able to handle such
complexity in time frames compatible with time-to-market. However, efficiency,
without loss of accuracy, has rarely held the centre stage in the numerical
community. Here, in contrast, the framework is designed to adequately exploit
the resources of high-end distributed-memory machines. It is grounded on three
building blocks: (1) Hierarchical adaptive mesh refinement with octree-based
meshes; (2) a parallel strategy to model the growth of the geometry; (3)
state-of-the-art parallel iterative linear solvers. Computational experiments
consider the heat transfer analysis at the part scale of the printing process
by powder-bed technologies. After verification against a 3D benchmark, a
strong-scaling analysis assesses performance and identifies major sources of
parallel overhead. A third numerical example examines the efficiency and
robustness of (2) in a curved 3D shape. Unprecedented parallelism and
scalability were achieved in this work. Hence, this framework makes it possible to take on higher complexity and/or accuracy, not only in part-scale simulations of metal or polymer additive manufacturing, but also in welding, sedimentation, atherosclerosis, or any other physical problem where the physical domain of interest grows in time.
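A core ingredient of any growing-geometry solver, as in building block (2) above, is that the set of active elements, and with it the degree-of-freedom numbering, must be rebuilt as new material appears. A minimal sketch, assuming a hypothetical data layout of (birth_time, connectivity) pairs rather than the paper's actual framework:

```python
# Minimal sketch (assumed data layout, not the paper's framework): elements
# carry a birth time; only elements born by time t participate in assembly,
# and the DOF map over their nodes is rebuilt whenever the geometry grows.
def active_partition(elements, t):
    """elements: list of (birth_time, node_ids). Returns the currently
    active elements and a contiguous renumbering of the nodes they touch."""
    active = [e for e in elements if e[0] <= t]
    nodes = sorted({n for _, conn in active for n in conn})
    dof = {n: i for i, n in enumerate(nodes)}
    return active, dof

# Three 1D elements deposited one per time unit, as in a layer-by-layer print.
elements = [(0.0, (0, 1)), (1.0, (1, 2)), (2.0, (2, 3))]
active, dof = active_partition(elements, 1.5)
print(len(active), dof)  # only the first two elements are active at t = 1.5
```

In a distributed-memory setting, this rebuild is exactly the step that must be parallelised carefully, since activating elements changes the mesh partition and the communication pattern.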
The LifeV library: engineering mathematics beyond the proof of concept
LifeV is a library for the finite element (FE) solution of partial
differential equations in one, two, and three dimensions. It is written in C++
and designed to run on diverse parallel architectures, including cloud and high
performance computing facilities. In spite of its academic research nature, as a library for the development and testing of new methods, a distinguishing feature of LifeV is its use on real-world problems: it is intended as a tool for many engineering applications, and it has actually been used in computational hemodynamics (including cardiac mechanics and fluid-structure interaction problems), in porous media, and in ice sheet dynamics, for both forward and inverse problems. In this paper we give a short overview of
the features of LifeV and its coding paradigms on simple problems. The main
focus is on the parallel environment which is mainly driven by domain
decomposition methods and based on external libraries such as MPI, the Trilinos
project, HDF5 and ParMetis.
Dedicated to the memory of Fausto Saleri. Comment: Review of the LifeV Finite Element library.
DALiuGE: A Graph Execution Framework for Harnessing the Astronomical Data Deluge
The Data Activated Liu Graph Engine - DALiuGE - is an execution framework for
processing large astronomical datasets at a scale required by the Square
Kilometre Array Phase 1 (SKA1). It includes an interface for expressing complex
data reduction pipelines consisting of both data sets and algorithmic
components and an implementation run-time to execute such pipelines on
distributed resources. By mapping the logical view of a pipeline to its
physical realisation, DALiuGE separates the concerns of multiple stakeholders,
allowing them to collectively optimise large-scale data processing solutions in
a coherent manner. The execution in DALiuGE is data-activated, where each
individual data item autonomously triggers the processing on itself. Such
decentralisation also makes the execution framework very scalable and flexible,
supporting pipeline sizes ranging from less than ten tasks running on a laptop
to tens of millions of concurrent tasks on the second fastest supercomputer in
the world. DALiuGE has been used in production for reducing interferometry data
sets from the Karl E. Jansky Very Large Array and the Mingantu Ultrawide
Spectral Radioheliograph; and is being developed as the execution framework
prototype for the Science Data Processor (SDP) consortium of the Square
Kilometre Array (SKA) telescope. This paper presents a technical overview of
DALiuGE and discusses case studies from the CHILES and MUSER projects that use
DALiuGE to execute production pipelines. In a companion paper, we provide
in-depth analysis of DALiuGE's scalability to very large numbers of tasks on
two supercomputing facilities. Comment: 31 pages, 12 figures, currently under review by Astronomy and Computing.
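The data-activated idea described above, where each completed data item autonomously triggers the processing on itself with no central scheduler, can be illustrated with a toy sketch. This is not the DALiuGE API; the Drop class and the two-stage pipeline are hypothetical stand-ins.

```python
# Toy sketch of data-activated execution (not the DALiuGE API): a "drop"
# wraps a data item and, once written, autonomously fires its consumers,
# so no central scheduler drives the pipeline.
class Drop:
    def __init__(self, name):
        self.name = name
        self.data = None
        self.consumers = []  # callables triggered when this drop completes

    def add_consumer(self, fn):
        self.consumers.append(fn)

    def write(self, data):
        self.data = data
        for fn in self.consumers:  # completion activates downstream work
            fn(self)

# Hypothetical two-stage pipeline: raw -> calibrated -> image.
raw, calibrated = Drop("raw"), Drop("calibrated")

def calibrate(drop):
    calibrated.write([x * 2 for x in drop.data])  # stand-in for calibration

def image(drop):
    print("imaging", sum(drop.data))

raw.add_consumer(calibrate)
calibrated.add_consumer(image)
raw.write([1, 2, 3])  # arrival of the raw data activates the whole chain
```

Because each drop only knows its own consumers, the same decentralised trigger mechanism scales from a laptop to millions of concurrent tasks, as the abstract notes.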
πBUSS: a parallel BEAST/BEAGLE utility for sequence simulation under complex evolutionary scenarios
Background: Simulated nucleotide or amino acid sequences are frequently used
to assess the performance of phylogenetic reconstruction methods. BEAST, a
Bayesian statistical framework that focuses on reconstructing time-calibrated
molecular evolutionary processes, supports a wide array of evolutionary models,
but lacked matching machinery for simulation of character evolution along
phylogenies.
Results: We present a flexible Monte Carlo simulation tool, called piBUSS,
that employs the BEAGLE high performance library for phylogenetic computations
within BEAST to rapidly generate large sequence alignments under complex
evolutionary models. piBUSS sports a user-friendly graphical user interface
(GUI) that allows combining a rich array of models across an arbitrary number
of partitions. A command-line interface mirrors the options available through
the GUI and facilitates scripting in large-scale simulation studies. Analogous
to BEAST model and analysis setup, more advanced simulation options are
supported through an extensible markup language (XML) specification, which in
addition to generating sequence output, also allows users to combine simulation
and analysis in a single BEAST run.
Conclusions: piBUSS offers a unique combination of flexibility and
ease-of-use for sequence simulation under realistic evolutionary scenarios.
Through different interfaces, piBUSS supports simulation studies ranging from
modest endeavors for illustrative purposes to complex and large-scale
assessments of evolutionary inference procedures. The software aims at
implementing new models and data types that are continuously being developed as
part of BEAST/BEAGLE. Comment: 13 pages, 2 figures, 1 table.
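Sequence simulation along a phylogeny of the kind piBUSS performs can be sketched under the simplest substitution model, Jukes-Cantor: draw a root sequence, then mutate it site by site down each branch. piBUSS itself supports far richer models through BEAST/BEAGLE; the nested-tuple tree encoding (name, branch_length, children) below is a hypothetical simplification.

```python
# Minimal sketch of character evolution along a phylogeny under the
# Jukes-Cantor (JC69) model; illustrative only, not piBUSS code.
import math
import random

BASES = "ACGT"

def jc69_evolve(base, t, mu=1.0, rng=random):
    """Evolve one nucleotide along a branch of length t. Under JC69 the
    probability of no net change is 1/4 + 3/4 * exp(-4*mu*t/3)."""
    p_same = 0.25 + 0.75 * math.exp(-4.0 * mu * t / 3.0)
    if rng.random() < p_same:
        return base
    return rng.choice([b for b in BASES if b != base])

def simulate(node, seq, rng=random):
    """Recurse root-to-tips, mutating the sequence along each branch;
    node = (name, branch_length, children). Returns {leaf_name: sequence}."""
    name, t, children = node
    seq = "".join(jc69_evolve(b, t, rng=rng) for b in seq)
    if not children:
        return {name: seq}
    out = {}
    for child in children:
        out.update(simulate(child, seq, rng))
    return out

# Hypothetical two-taxon tree with branch length 0.1 to each tip.
tree = ("root", 0.0, [("A", 0.1, []), ("B", 0.1, [])])
root_seq = "".join(random.choice(BASES) for _ in range(20))
print(simulate(tree, root_seq))
```

Replacing jc69_evolve with transition probabilities from a richer model (HKY, GTR, codon models, partition-specific rates) recovers the kind of flexibility the piBUSS XML specification exposes.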
Coincidence Problem in CPS Simulations: the R-ROSACE Case Study
This paper presents ongoing work on the formalism of Cyber-Physical System (CPS) simulations. We focus on a distributed simulation architecture for CPS, in which the running simulators exist in concurrent and sequential domains. This architecture allows the expression of structural and behavioral constraints on the simulation. We call the temporal organization of the simulators' interconnection the scheduling of the simulation. In this paper we address the problem of the representativity of interconnected simulations. To do so, we highlight the similarities and differences between task scheduling and simulation scheduling, and then discuss the constraints expressible over simulation scheduling. Finally, we illustrate a constraint on simulation scheduling with an extension of the open-source case study ROSACE, implemented with CERTI, a compliant High-Level Architecture (HLA) Run-Time Infrastructure (RTI). HLA is an IEEE standard for distributed simulation.