The Metaverse: Survey, Trends, Novel Pipeline Ecosystem & Future Directions
The Metaverse offers a second world beyond reality, where boundaries are non-existent and possibilities are endless, through engagement and immersive experiences using virtual reality (VR) technology. Many disciplines can
benefit from the advancement of the Metaverse when accurately developed,
including the fields of technology, gaming, education, art, and culture.
Nevertheless, developing the Metaverse environment to its full potential is an
ambiguous task that needs proper guidance and directions. Existing surveys on
the Metaverse focus only on a specific aspect and discipline of the Metaverse
and lack a holistic view of the entire process. To this end, a more holistic, multi-disciplinary, in-depth review oriented toward both academia and industry is required to provide a thorough study of the Metaverse development pipeline. To
address these issues, we present in this survey a novel multi-layered pipeline
ecosystem composed of (1) the Metaverse computing, networking, communications
and hardware infrastructure, (2) environment digitization, and (3) user
interactions. For every layer, we discuss the components that detail the steps
of its development. Also, for each of these components, we examine the impact
of a set of enabling technologies and empowering domains (e.g., Artificial
Intelligence, Security & Privacy, Blockchain, Business, Ethics, and Social) on
its advancement. In addition, we explain the importance of these technologies
to support decentralization, interoperability, user experiences, interactions,
and monetization. Our presented study highlights the existing challenges for
each component, followed by research directions and potential solutions. To the best of our knowledge, this survey is the most comprehensive to date, allowing users, scholars, and entrepreneurs to gain an in-depth understanding of the Metaverse ecosystem and identify opportunities for contribution.
Evolutionary Multi-Objective Aerodynamic Design Optimization Using CFD Simulation Incorporating Deep Neural Network
An evolutionary multi-objective aerodynamic design optimization method using
the computational fluid dynamics (CFD) simulations incorporating deep neural
network (DNN) to reduce the required computational time is proposed. In this
approach, the DNN infers the flow field from the grid data of a design and the
CFD simulation starts from the inferred flow field to obtain the steady-state
flow field with a smaller number of time integration steps. To show the
effectiveness of the proposed method, a multi-objective aerodynamic airfoil
design optimization is demonstrated. The results indicate that the computational time for design optimization is reduced to 57.9% of the baseline under 96-core processor conditions.
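The core idea of the abstract, warm-starting an iterative steady-state solver from a surrogate-predicted field so that fewer time-integration steps are needed, can be illustrated with a toy 1-D relaxation problem. This is a minimal sketch, not the paper's CFD/DNN setup: the "surrogate guess" below simply mimics what a trained network might predict.

```python
import math

def jacobi_steady_state(init, left=0.0, right=1.0, tol=1e-8, max_iter=100000):
    """Relax a 1-D Laplace problem to steady state; return (field, sweeps used)."""
    u = list(init)
    u[0], u[-1] = left, right          # enforce boundary conditions
    n = len(u)
    for it in range(1, max_iter + 1):
        new = u[:]
        for i in range(1, n - 1):
            new[i] = 0.5 * (u[i - 1] + u[i + 1])
        diff = max(abs(a - b) for a, b in zip(new, u))
        u = new
        if diff < tol:
            return u, it
    return u, max_iter

n = 41
cold_start = [0.0] * n  # naive zero initial field
# A surrogate (the paper's DNN) would predict a field close to the converged
# solution; here we mimic its output with a slightly perturbed exact profile.
warm_start = [i / (n - 1) + 0.01 * math.sin(math.pi * i / (n - 1)) for i in range(n)]

_, iters_cold = jacobi_steady_state(cold_start)
_, iters_warm = jacobi_steady_state(warm_start)
print(iters_cold, iters_warm)  # the warm start needs far fewer sweeps
```

The same reasoning carries over to CFD: the closer the inferred flow field is to the steady state, the fewer integration steps remain.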
Intermodal Terminal Subsystem Technology Selection Using Integrated Fuzzy MCDM Model
Intermodal transportation is the use of multiple modes of transportation, which can lead to
greater sustainability by reducing environmental impact and traffic congestion and increasing the
efficiency of supply chains. One of the preconditions for efficient intermodal transport is the efficient
intermodal terminal (IT). ITs allow for the smooth and efficient handling of cargo, thus reducing the
time, cost, and environmental impact of transportation. Adequate selection of subsystem technologies
can significantly improve the efficiency and productivity of an IT, ultimately leading to cost savings
for businesses and a more efficient and sustainable transportation system. Accordingly, this paper
aims to establish a framework for the evaluation and selection of appropriate technologies for IT
subsystems. To solve the defined problem, an innovative hybrid multi-criteria decision making
(MCDM) model, which combines the fuzzy factor relationship (FFARE) and the fuzzy combinative
distance-based assessment (FCODAS) methods, is developed in this paper. The FFARE method
is used for obtaining criteria weights, while the FCODAS method is used for evaluation and a
final ranking of the alternatives. The established framework and the model are tested on a real-life
case study, evaluating and selecting the handling technology for a planned IT. The study defines
12 potential variants of handling equipment based on their techno-operational characteristics and
evaluates them using 16 criteria. The results indicate that the best handling technology variant is
the one that uses a rail-mounted gantry crane for trans-shipment and a reach stacker for horizontal
transport and storage. The results also point to the conclusion that instead of choosing equipment
for each process separately, it is important to think about the combination of different handling
technologies that can work together to complete a series of handling cycle processes. The main
contributions of this paper are the development of a new hybrid model and the establishment of
a framework for the selection of appropriate IT subsystem technologies, along with a set of unique criteria for their evaluation and selection.
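The ranking stage described above can be sketched in crisp (non-fuzzy) form. The paper uses FCODAS with FFARE-derived weights; the simplified CODAS-style pass below keeps the two-distance idea (Euclidean as the primary measure, Taxicab as a near-tie breaker) on a hypothetical decision matrix and hypothetical weights.

```python
import math

def codas_rank(matrix, weights, tau=0.02):
    """Rank alternatives (rows) on benefit criteria (columns), CODAS-style sketch."""
    cols = list(zip(*matrix))
    norm = [[x / max(col) for x, col in zip(row, cols)] for row in matrix]
    wmat = [[v * w for v, w in zip(row, weights)] for row in norm]
    neg_ideal = [min(col) for col in zip(*wmat)]       # negative-ideal solution
    dist = []
    for row in wmat:
        e = math.sqrt(sum((v - n) ** 2 for v, n in zip(row, neg_ideal)))  # Euclidean
        t = sum(abs(v - n) for v, n in zip(row, neg_ideal))               # Taxicab
        dist.append((e, t))
    def psi(x):
        # Taxicab distance only contributes when Euclidean distances nearly tie
        return 1.0 if abs(x) < tau else 0.0
    scores = [sum((ei - ej) + psi(ei - ej) * (ti - tj) for ej, tj in dist)
              for ei, ti in dist]
    ranking = sorted(range(len(scores)), key=lambda i: -scores[i])
    return ranking, scores

# hypothetical matrix: 3 handling-technology variants scored on 3 benefit criteria
matrix = [[9, 8, 7],
          [5, 6, 4],
          [3, 2, 6]]
weights = [0.5, 0.3, 0.2]   # hypothetical; the paper derives these with FFARE
ranking, scores = codas_rank(matrix, weights)
print(ranking)  # variant 0 dominates every criterion, so it ranks first
```

The fuzzy variant replaces crisp values with fuzzy numbers and defuzzifies the distances, but the ranking logic is the same.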
Neural Architecture Search: Insights from 1000 Papers
In the past decade, advances in deep learning have resulted in breakthroughs
in a variety of areas, including computer vision, natural language
understanding, speech recognition, and reinforcement learning. Specialized,
high-performing neural architectures are crucial to the success of deep
learning in these areas. Neural architecture search (NAS), the process of
automating the design of neural architectures for a given task, is an
inevitable next step in automating machine learning and has already produced architectures that outperform the best human-designed ones on many tasks. In the past few years,
research in NAS has been progressing rapidly, with over 1000 papers released
since 2020 (Deng and Lindauer, 2021). In this survey, we provide an organized
and comprehensive guide to neural architecture search. We give a taxonomy of
search spaces, algorithms, and speedup techniques, and we discuss resources
such as benchmarks, best practices, other surveys, and open-source libraries.
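The simplest NAS algorithm in the survey's taxonomy, random search over a discrete search space, fits in a few lines. This is an illustrative sketch with a made-up search space and a made-up proxy score standing in for training and validating each candidate.

```python
import random

# hypothetical toy search space (real spaces encode cells, ops, connectivity)
SEARCH_SPACE = {
    "depth": [2, 4, 8, 16],
    "width": [32, 64, 128],
    "activation": ["relu", "gelu", "tanh"],
}

def sample(rng):
    """Draw one architecture uniformly at random from the search space."""
    return {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}

def evaluate(arch):
    """Hypothetical cheap proxy score; real NAS would train the candidate
    or use a performance predictor / zero-cost proxy (speedup techniques
    the survey covers)."""
    score = arch["depth"] * 0.5 + arch["width"] / 64
    return score - 0.1 * (arch["activation"] == "tanh")

rng = random.Random(0)
history = [(evaluate(a), a) for a in (sample(rng) for _ in range(20))]
best_score, best_arch = max(history, key=lambda t: t[0])
print(best_arch, best_score)
```

Despite its simplicity, random search is a standard baseline in NAS benchmarks; the survey's taxonomy covers the evolutionary, RL-based, and differentiable methods that improve on it.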
Qluster: An easy-to-implement generic workflow for robust clustering of health data
The exploration of health data by clustering algorithms makes it possible to better describe populations of interest by identifying the sub-profiles that compose them. This reinforces medical knowledge, whether about a disease or a targeted real-life population. Nevertheless, contrary to so-called conventional biostatistical methods, for which numerous guidelines exist, the standardization of data science approaches in clinical research remains a little-discussed subject. This results in significant variability in the execution of data science projects, in terms of both the algorithms used and the reliability and credibility of the designed approach. Taking the path of a parsimonious and judicious choice of algorithms and implementations at each stage, this article proposes Qluster, a practical workflow for performing clustering tasks. This workflow strikes a compromise between (1) generality of application (e.g. usable on small or big data, on continuous, categorical or mixed variables, on high-dimensional databases or not), (2) ease of implementation (need for few packages, few algorithms, few parameters, ...), and (3) robustness (e.g. use of proven algorithms and robust packages, evaluation of cluster stability, management of noise and multicollinearity). The workflow can easily be automated and/or routinely applied to a wide range of clustering projects. It can be useful both for data scientists with little experience in the field, to make data clustering easier and more robust, and for more experienced data scientists looking for a straightforward and reliable solution to routinely perform preliminary data mining. A synthesis of the literature on data clustering, as well as the scientific rationale supporting the proposed workflow, is also provided. Finally, a detailed application of the workflow to a concrete use case is given, along with a practical discussion for data scientists.
An implementation on the Dataiku platform is available upon request to the authors.
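One robustness ingredient the abstract names, evaluating the stability of clusters, can be sketched by re-running a clustering algorithm under different random initializations and measuring pairwise partition agreement. This is a generic illustration (toy 1-D k-means, Rand index), not Qluster's actual implementation.

```python
import itertools
import random

def kmeans_1d(xs, k, rng, iters=50):
    """Plain Lloyd's k-means on 1-D data; returns a label per point."""
    centers = rng.sample(xs, k)
    labels = [0] * len(xs)
    for _ in range(iters):
        labels = [min(range(k), key=lambda c: abs(x - centers[c])) for x in xs]
        for c in range(k):
            pts = [x for x, l in zip(xs, labels) if l == c]
            if pts:
                centers[c] = sum(pts) / len(pts)
    return labels

def rand_index(a, b):
    """Fraction of point pairs on which two partitions agree (label-permutation invariant)."""
    agree = total = 0
    for i, j in itertools.combinations(range(len(a)), 2):
        agree += (a[i] == a[j]) == (b[i] == b[j])
        total += 1
    return agree / total

# synthetic data: two well-separated sub-profiles
rng = random.Random(42)
data = ([rng.gauss(0.0, 0.3) for _ in range(30)]
        + [rng.gauss(5.0, 0.3) for _ in range(30)])

runs = [kmeans_1d(data, 2, random.Random(seed)) for seed in range(5)]
stability = min(rand_index(runs[i], runs[j])
                for i, j in itertools.combinations(range(5), 2))
print(stability)  # close to 1.0: the partition is stable across initializations
```

Low stability scores would flag a partition that depends on initialization, one of the failure modes a standardized workflow is meant to catch.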
Intelligent Control Schemes for Maximum Power Extraction from Photovoltaic Arrays under Faults
Investigation of the power output of PV arrays under different fault conditions is essential to enhance the performance of a photovoltaic system under all operating conditions. Significant reduction in power output can occur during various PV faults such as module disconnection, bypass diode failure, bridge faults, and short-circuit faults under non-uniform shading conditions. These PV faults may cause several peaks in the characteristic curves of PV arrays, which can lead to failure of the MPPT control strategy. In fact, the impact of a fault can differ depending on the type of PV array, and it can make control of the system more complex. Therefore, selecting suitable PV arrays together with an effective control design is necessary for maximum power output from a PV system. For this purpose, this study presents a comparison of two intelligent control schemes, i.e., fuzzy logic (FL) and particle swarm optimization (PSO), with a conventional control scheme known as perturb and observe (P&O) for power extraction from a PV system. The comparative analysis is based on the performance of the control strategies under several faults and on the types of PV modules, i.e., monocrystalline and thin-film PV arrays. Numerical analyses of complex fault scenarios, such as multiple faults under partial shading, have also been performed. Unlike the previous literature, this study reveals the performance of FL-, PSO-, and P&O-based MPPT strategies in tracking maximum peak power during multiple severe fault conditions, while considering the accuracy and fast-tracking efficiency of the control techniques. A thorough analysis along with in-depth quantitative data is presented, confirming the superiority of intelligent control techniques under multiple faults and different PV types.
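The conventional baseline named above, perturb and observe, is a short hill-climbing loop: perturb the operating voltage, observe the power change, and reverse direction when power drops. The sketch below uses a hypothetical single-peak power curve; under partial shading the curve has several peaks, which is precisely where plain P&O can lock onto a local maximum, motivating the FL and PSO schemes.

```python
def pv_power(v):
    """Hypothetical single-peak P-V curve with its maximum at v = 17.0 V."""
    return max(0.0, 60.0 - (v - 17.0) ** 2)

def perturb_and_observe(v0=10.0, step=0.2, iters=100):
    """Classic P&O MPPT loop: keep perturbing in the direction that raised power."""
    v, direction = v0, +1
    p_prev = pv_power(v)
    for _ in range(iters):
        v += direction * step
        p = pv_power(v)
        if p < p_prev:            # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return v

v_mpp = perturb_and_observe()
print(v_mpp)  # settles into a small oscillation around the true MPP at 17.0 V
```

The steady-state oscillation around the peak (one step either side) and the risk of climbing the wrong peak under multi-peak curves are the two classic P&O drawbacks the intelligent schemes address.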
A Reinforcement Learning-assisted Genetic Programming Algorithm for Team Formation Problem Considering Person-Job Matching
An efficient team is essential for a company to successfully complete new
projects. To solve the team formation problem considering person-job matching
(TFP-PJM), a 0-1 integer programming model is constructed, which considers both
person-job matching and team members' willingness to communicate on team
efficiency, with the person-job matching score calculated using intuitionistic
fuzzy numbers. Then, a reinforcement learning-assisted genetic programming
algorithm (RL-GP) is proposed to enhance the quality of solutions. The RL-GP
adopts ensemble population strategies. Before the population evolution at
each generation, the agent selects one from four population search modes
according to the information obtained, thus realizing a sound balance of
exploration and exploitation. In addition, surrogate models are used in the
algorithm to evaluate the formation plans generated by individuals, which
speeds up the algorithm learning process. Afterward, a series of comparison
experiments are conducted to verify the overall performance of RL-GP and the
effectiveness of the improved strategies within the algorithm. The
hyper-heuristic rules obtained through efficient learning can be utilized as
decision-making aids when forming project teams. This study reveals the
advantages of reinforcement learning methods, ensemble strategies, and the
surrogate model applied to the GP framework. The diversity and intelligent
selection of search patterns, along with fast adaptation evaluation, are distinct features that enable RL-GP to be deployed in real-world enterprise environments.
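The mode-selection step, where an agent picks one of four population search modes before each generation, can be caricatured as a multi-armed bandit. The sketch below uses epsilon-greedy selection with hypothetical mode names and payoffs; it is not the paper's RL formulation, only the exploration/exploitation balance it describes.

```python
import random

rng = random.Random(1)
MODES = ["global_explore", "local_exploit", "hybrid", "restart"]
# hypothetical expected fitness gain per mode (unknown to the agent)
TRUE_PAYOFF = {"global_explore": 0.3, "local_exploit": 0.6,
               "hybrid": 0.8, "restart": 0.2}

counts = {m: 0 for m in MODES}
values = {m: 0.0 for m in MODES}   # running estimate of each mode's payoff

def pick(eps=0.1):
    """Epsilon-greedy: mostly exploit the best-looking mode, sometimes explore."""
    if rng.random() < eps:
        return rng.choice(MODES)
    return max(MODES, key=values.get)

for generation in range(500):
    mode = pick()
    reward = TRUE_PAYOFF[mode] + rng.gauss(0.0, 0.05)   # noisy observed gain
    counts[mode] += 1
    values[mode] += (reward - values[mode]) / counts[mode]  # incremental mean

best = max(values, key=values.get)
print(best, counts)  # the agent concentrates on the highest-payoff mode
```

In RL-GP the "reward" would come from the improvement of the evolving population, and the learned preferences form the hyper-heuristic rules mentioned above.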
Thermodynamic Assessment and Optimisation of Supercritical and Transcritical Power Cycles Operating on CO2 Mixtures by Means of Artificial Neural Networks
Feb 21, 2022 to Feb 24, 2022, San Antonio, TX, United States.
Closed supercritical and transcritical power cycles operating on carbon dioxide have proven to be a promising technology for power generation and, as such, are being researched by numerous international projects today. Despite the advantageous features of these cycles, which enable very high efficiencies in intermediate-temperature applications, the major shortcoming of the technology is a strong dependence on ambient temperature; in order to perform compression near the CO2 critical point (31 °C), low ambient temperatures are needed. This is particularly challenging in Concentrated Solar Power applications, typically found in hot, semi-arid locations.
To overcome this limitation, the SCARABEUS project explores the idea of blending raw carbon dioxide with small amounts of certain dopants in order to shift the critical temperature of the resulting working fluid to higher values, hence enabling gaseous compression near the critical point or even liquid compression regardless of a high ambient temperature. Different dopants have been studied within the project so far (i.e. C6F6, TiCl4 and SO2) but the final selection will have to account for trade-offs between thermodynamic performance, economic metrics and system reliability.
Bearing all this in mind, the present paper deals with the development of a non-physics-based model using Artificial Neural Networks (ANNs), built with Matlab's Deep Learning Toolbox, to enable SCARABEUS system optimisation without running the detailed, and extremely time-consuming, thermal models developed with Thermoflex and Matlab software.
In the first part of the paper, the candidate dopants and cycle layouts are presented and discussed, and a thorough description of the ANN training methodology is provided, along with the main assumptions and hypotheses made.
In the second part of the manuscript, the results confirm that the ANN is a reliable tool capable of successfully reproducing the detailed Thermoflex model, estimating the cycle thermal efficiency with a root mean square error lower than 0.2 percentage points. Furthermore, the great advantage of the proposed Artificial Neural Network is demonstrated by the huge reduction in the computational time needed, up to 99% lower than that consumed by the detailed model. Finally, the high flexibility and versatility of the ANN are shown by applying the tool in different scenarios and estimating the cycle thermal efficiency for a great variety of boundary conditions.
European Union H2020-81498
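The surrogate idea, fit a fast data-driven model to expensive detailed-model runs and validate its RMSE on unseen operating points, can be shown in miniature. Everything here is hypothetical: a made-up efficiency-vs-ambient-temperature map stands in for the Thermoflex model, and a quadratic interpolant stands in for the trained ANN.

```python
import math

def detailed_model(t_amb):
    """Stand-in for the expensive thermal model: hypothetical cycle thermal
    efficiency (percentage points) as a function of ambient temperature (°C)."""
    return 50.0 - 0.002 * (t_amb - 15.0) ** 2 - 0.05 * t_amb

def fit_quadratic(p0, p1, p2):
    """Build a surrogate by Lagrange interpolation through three sampled runs
    (a stand-in for training the ANN on detailed-model data)."""
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    def surrogate(t):
        return (y0 * (t - x1) * (t - x2) / ((x0 - x1) * (x0 - x2))
                + y1 * (t - x0) * (t - x2) / ((x1 - x0) * (x1 - x2))
                + y2 * (t - x0) * (t - x1) / ((x2 - x0) * (x2 - x1)))
    return surrogate

surrogate = fit_quadratic(*[(t, detailed_model(t)) for t in (5.0, 20.0, 35.0)])

# validate on unseen ambient temperatures, as the paper does for its ANN
test_ts = [7.5 + i for i in range(25)]
rmse = math.sqrt(sum((surrogate(t) - detailed_model(t)) ** 2 for t in test_ts)
                 / len(test_ts))
print(rmse)  # well below the paper's 0.2-percentage-point criterion here
```

Once validated, the cheap surrogate replaces the detailed model inside the optimisation loop, which is where the reported ~99% time saving comes from.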
Density-Based Topology Optimization in Method of Moments: Q-factor Minimization
Classical gradient-based density topology optimization is adapted for
method-of-moments numerical modeling to design a conductor-based system
attaining the minimal antenna Q-factor, evaluated via a stored-energy operator.
Standard topology optimization features are discussed, e.g., the interpolation
scheme and density and projection filtering. The performance of the proposed
technique is demonstrated in a few examples in terms of the realized Q-factor
values and necessary computational time to obtain a design. The optimized
designs are compared to the fundamental bound and well-known empirical
structures. The presented framework can provide a completely novel design, as
presented in the second example.
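Two of the standard features mentioned above, density filtering and projection filtering, are simple element-wise operators. The sketch below shows a linear (cone) density filter and a smoothed Heaviside (tanh) projection on a hypothetical 1-D density field; the method-of-moments Q-factor evaluation itself is far beyond a snippet.

```python
import math

def density_filter(rho, radius=2):
    """Linear (cone) filter: weighted average of densities within `radius` cells,
    used to impose a minimum length scale on the design."""
    n = len(rho)
    out = []
    for i in range(n):
        w_sum = v_sum = 0.0
        for j in range(max(0, i - radius), min(n, i + radius + 1)):
            w = radius + 1 - abs(i - j)      # cone weights, largest at the center
            w_sum += w
            v_sum += w * rho[j]
        out.append(v_sum / w_sum)
    return out

def projection(rho, beta=8.0, eta=0.5):
    """Smoothed Heaviside projection pushing intermediate densities toward 0/1."""
    th = math.tanh
    return [(th(beta * eta) + th(beta * (r - eta)))
            / (th(beta * eta) + th(beta * (1.0 - eta))) for r in rho]

raw = [0.0, 0.0, 1.0, 1.0, 0.2, 0.9, 0.1, 1.0]   # hypothetical raw densities
smoothed = density_filter(raw)
physical = projection(smoothed)
print([round(r, 2) for r in physical])
```

In a gradient-based loop these operators sit between the design variables and the physics solve, with beta typically ramped up over iterations so the final conductor layout is nearly binary.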