
    The Metaverse: Survey, Trends, Novel Pipeline Ecosystem & Future Directions

    The Metaverse offers a second world beyond reality, where boundaries are non-existent and possibilities are endless, through engagement and immersive experiences enabled by virtual reality (VR) technology. Many disciplines can benefit from the advancement of the Metaverse when it is properly developed, including technology, gaming, education, art, and culture. Nevertheless, developing the Metaverse environment to its full potential is an ill-defined task that needs proper guidance and direction. Existing surveys on the Metaverse focus only on a specific aspect or discipline of the Metaverse and lack a holistic view of the entire process. A more holistic, multi-disciplinary, in-depth, academic- and industry-oriented review is therefore required to provide a thorough study of the Metaverse development pipeline. To address these issues, we present in this survey a novel multi-layered pipeline ecosystem composed of (1) the Metaverse computing, networking, communications and hardware infrastructure, (2) environment digitization, and (3) user interactions. For every layer, we discuss the components that detail the steps of its development. For each of these components, we examine the impact of a set of enabling technologies and empowering domains (e.g., Artificial Intelligence, Security & Privacy, Blockchain, Business, Ethics, and Social) on its advancement. In addition, we explain the importance of these technologies in supporting decentralization, interoperability, user experiences, interactions, and monetization. Our study highlights the existing challenges for each component, followed by research directions and potential solutions. To the best of our knowledge, this survey is the most comprehensive to date, allowing users, scholars, and entrepreneurs to gain an in-depth understanding of the Metaverse ecosystem and find their opportunities for contribution.

    A two way process – Social capacity as a driver and outcome of equitable marine spatial planning

    Although stakeholder engagement is one of the founding principles of marine spatial planning (MSP), meaningful representation of people and their connections to marine resources within marine governance is still lacking. A broad understanding of how the concepts surrounding social capital and capacity are translated into MSP practice is missing. In this article, we describe detailed case studies in the United Kingdom, Brazil and South Africa to build a better understanding of the ways in which MSP and other ocean governance initiatives operationalise the concepts of social capital and capacity. Drawing on insights from the cases, we call for a rethinking of capacitation as a two-way process. In particular, trust-building, social learning and efforts to build social capacity should be elaborated without imposing a hierarchy between people ‘who know’ and people ‘who don’t’. The innovative approaches to relationship building, knowledge development, and collaboration identified in the case studies point to ways of building social capacity among both stakeholders and planners, as is necessary for more equitable and sustainable MSP development and implementation.

    Efficiency measurement based on novel performance measures in total productive maintenance (TPM) using a fuzzy integrated COPRAS and DEA method

    Total Productive Maintenance (TPM) has been widely recognized as a strategic tool and lean manufacturing practice for improving manufacturing performance and sustainability, and it has therefore been successfully implemented in many organizations. The evaluation of TPM efficiency can assist companies in improving their operations across a variety of dimensions. This paper proposes a comprehensive and systematic framework for the evaluation of TPM performance. The proposed total productive maintenance performance measurement system (TPM PMS) is divided into four phases (design, evaluation, implementation, and review): i) the design of new performance measures, ii) the evaluation of the new performance measures, iii) the implementation of the new performance measures to evaluate TPM performance, and iv) the review of the TPM PMS. In the design phase, different types of performance measures impacting TPM are defined and analyzed by decision-makers. In the evaluation phase, the novel performance measures are evaluated using the Fuzzy COmplex PRoportional ASsessment (FCOPRAS) method. In the implementation phase, a modified fuzzy data envelopment analysis (FDEA) is used to distinguish efficient from inefficient TPM performance under the novel performance measures. In the review phase, TPM performance is periodically monitored, and the proposed TPM PMS is reviewed to ensure successful TPM implementation. A real-world case study from an international manufacturing company in the automotive industry demonstrates the applicability of the proposed TPM PMS. The main findings show that the proposed TPM PMS makes it possible to measure TPM performance with different indicators, especially soft, human-related ones, and supports decision-makers by comparing the TPM performance of production lines, thereby prioritizing the most important preventive/predictive decisions and actions, particularly for lines that are ineffective in TPM program implementation. This system can therefore be considered a powerful monitoring tool and reliable evidence for making the implementation of TPM more efficient in a real-world production environment.
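The ranking step of the evaluation phase can be sketched in crisp (defuzzified) COPRAS form; the decision matrix, weights, and criterion types below are illustrative stand-ins, not values from the case study:

```python
import numpy as np

def copras(matrix, weights, benefit):
    """Rank alternatives with the (crisp) COPRAS method.
    matrix: alternatives x criteria; weights sum to 1;
    benefit[j] is True if criterion j is benefit-type, False if cost-type."""
    X = np.asarray(matrix, dtype=float)
    # Sum-normalise each criterion column, then weight it
    D = weights * X / X.sum(axis=0)
    benefit = np.asarray(benefit)
    s_plus = D[:, benefit].sum(axis=1)    # benefit contribution
    s_minus = D[:, ~benefit].sum(axis=1)  # cost contribution
    # Relative significance: higher is better
    q = s_plus + s_minus.sum() / (s_minus * (1.0 / s_minus).sum())
    return 100 * q / q.max()              # utility degree in [0, 100]

# Hypothetical performance measures: output rate (benefit),
# downtime (cost), availability (benefit)
scores = copras(
    [[80, 10, 0.6], [65, 7, 0.8], [90, 12, 0.5]],
    np.array([0.5, 0.3, 0.2]),
    [True, False, True],
)
print(scores.argsort()[::-1])  # ranking of alternatives, best first
```

In the fuzzy variant, each matrix entry would be a triangular or trapezoidal fuzzy number aggregated from decision-makers' linguistic ratings and defuzzified before (or after) these same normalisation and significance steps.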

    Reinforcement Learning-based User-centric Handover Decision-making in 5G Vehicular Networks

    The advancement of 5G technologies and vehicular networks opens a new paradigm for Intelligent Transportation Systems (ITS) in safety and infotainment services in urban and highway scenarios. Connected vehicles are vital for enabling massive data sharing and supporting such services. Consequently, a stable connection is compulsory to transmit data across the network successfully. The new 5G technology offers more bandwidth, stability, and reliability, but it suffers from a low communication range, resulting in more frequent handovers and connection drops. The shift from the base-station-centric view to the user-centric view helps to cope with the smaller communication range and ultra-density of 5G networks. In this thesis, we propose a series of strategies to improve connection stability through efficient handover decision-making. First, a modified probabilistic approach, M-FiVH, aims at reducing 5G handovers and enhancing network stability. Next, an adaptive learning approach employs Connectivity-oriented SARSA Reinforcement Learning (CO-SRL) for user-centric Virtual Cell (VC) management to enable efficient handover (HO) decisions. Finally, a user-centric Factor-distinct SARSA Reinforcement Learning (FD-SRL) approach combines time-series-oriented LSTM with adaptive SRL for VC and HO management, considering both historical and real-time data. The random direction of vehicular movement, high mobility, network load, uncertain road traffic situations, and signal strength from cellular transmission towers vary over time and cannot always be predicted. Our proposed approaches maintain stable connections by selecting the appropriate VC size and managing HOs so as to reduce their number. Realistic simulations demonstrated that M-FiVH, CO-SRL, and FD-SRL successfully reduced the number of HOs and the average cumulative HO time. We provide an analysis and comparison of several approaches and demonstrate that our proposed approaches perform better in terms of network connectivity.
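The on-policy SARSA update underlying the CO-SRL/FD-SRL family can be sketched in tabular form; the state bins, virtual-cell sizes, hyper-parameters, and reward below are illustrative assumptions, not details from the thesis:

```python
import random
from collections import defaultdict

# Hypothetical spaces: states are coarse signal-quality bins, actions are
# candidate virtual-cell sizes (number of cooperating base stations)
STATES = ["poor", "fair", "good"]
ACTIONS = [1, 3, 5]
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1  # assumed learning hyper-parameters

Q = defaultdict(float)  # state-action values, default 0.0

def choose(state):
    """Epsilon-greedy selection of a virtual-cell size."""
    if random.random() < EPS:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def sarsa_update(s, a, reward, s_next, a_next):
    """On-policy SARSA temporal-difference update; the reward would
    penalise handovers and connection drops, rewarding stable links."""
    Q[(s, a)] += ALPHA * (reward + GAMMA * Q[(s_next, a_next)] - Q[(s, a)])

# One toy transition: a connection drop (-1 reward) with the smallest
# cell size while signal quality stays poor
a_next = choose("poor")
sarsa_update("poor", 1, -1.0, "poor", a_next)
print(round(Q[("poor", 1)], 3))  # -0.1: one step toward the drop penalty
```

Repeated over simulated drives, updates like this push the policy toward cell sizes that minimise handovers; FD-SRL would additionally feed LSTM forecasts of signal strength into the state.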

    Intermodal Terminal Subsystem Technology Selection Using Integrated Fuzzy MCDM Model

    Intermodal transportation is the use of multiple modes of transportation, which can lead to greater sustainability by reducing environmental impact and traffic congestion and increasing the efficiency of supply chains. One of the preconditions for efficient intermodal transport is an efficient intermodal terminal (IT). ITs allow for the smooth and efficient handling of cargo, thus reducing the time, cost, and environmental impact of transportation. Adequate selection of subsystem technologies can significantly improve the efficiency and productivity of an IT, ultimately leading to cost savings for businesses and a more efficient and sustainable transportation system. Accordingly, this paper aims to establish a framework for the evaluation and selection of appropriate technologies for IT subsystems. To solve the defined problem, an innovative hybrid multi-criteria decision-making (MCDM) model, which combines the fuzzy factor relationship (FFARE) and the fuzzy combinative distance-based assessment (FCODAS) methods, is developed in this paper. The FFARE method is used to obtain criteria weights, while the FCODAS method is used for evaluation and final ranking of the alternatives. The established framework and model are tested on a real-life case study, evaluating and selecting the handling technology for a planned IT. The study defines 12 potential variants of handling equipment based on their techno-operational characteristics and evaluates them against 16 criteria. The results indicate that the best handling technology variant is the one that uses a rail-mounted gantry crane for trans-shipment and a reach stacker for horizontal transport and storage. The results also point to the conclusion that, instead of choosing equipment for each process separately, it is important to consider combinations of handling technologies that can work together to complete a series of handling-cycle processes. The main contributions of this paper are the development of a new hybrid model and the establishment of a framework for the selection of appropriate IT subsystem technologies, along with a set of unique criteria for their evaluation and selection.
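The ranking step can be sketched with the crisp CODAS method (fuzzy extensions defuzzify the ratings first): alternatives are scored by their Euclidean distance from the negative-ideal solution, with taxicab distance as a secondary measure. The matrix, weights, and criterion types below are illustrative, not the paper's 12 variants and 16 criteria:

```python
import numpy as np

def codas(matrix, weights, benefit, tau=0.02):
    """Rank alternatives with the (crisp) CODAS method."""
    X = np.asarray(matrix, float)
    # Linear normalisation: benefit -> x / col max, cost -> col min / x
    N = np.where(benefit, X / X.max(axis=0), X.min(axis=0) / X)
    R = weights * N
    ni = R.min(axis=0)                        # negative-ideal point
    E = np.sqrt(((R - ni) ** 2).sum(axis=1))  # Euclidean distances
    T = np.abs(R - ni).sum(axis=1)            # taxicab distances
    # Pairwise relative assessment; taxicab term enters only when the
    # Euclidean gap exceeds the threshold tau (psi function)
    dE = E[:, None] - E[None, :]
    dT = T[:, None] - T[None, :]
    H = dE + (np.abs(dE) >= tau) * dT
    return H.sum(axis=1)  # higher assessment score = better alternative

# Hypothetical variants rated on throughput (benefit), cost (cost),
# flexibility (benefit)
scores = codas(
    [[120, 40, 7], [100, 35, 9], [140, 50, 6]],
    np.array([0.40, 0.35, 0.25]),
    np.array([True, False, True]),
)
print(scores.argmax())  # index of the best-ranked variant
```

In the paper's model, the weights fed into this step would come from the FFARE comparisons rather than being fixed by hand.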

    Qluster: An easy-to-implement generic workflow for robust clustering of health data

    The exploration of health data by clustering algorithms makes it possible to better describe the populations of interest by seeking the sub-profiles that compose them. This reinforces medical knowledge, whether about a disease or a targeted real-life population. Nevertheless, contrary to so-called conventional biostatistical methods, for which numerous guidelines exist, the standardization of data science approaches in clinical research remains a little-discussed subject. This results in significant variability in the execution of data science projects, in terms of the algorithms used and the reliability and credibility of the designed approach. Taking the path of a parsimonious and judicious choice of both algorithms and implementations at each stage, this article proposes Qluster, a practical workflow for performing clustering tasks. This workflow strikes a compromise between (1) genericity of application (e.g., usable on small or big data; on continuous, categorical or mixed variables; on high-dimensional databases or not), (2) ease of implementation (few packages, few algorithms, few parameters needed), and (3) robustness (e.g., use of proven algorithms and robust packages, evaluation of cluster stability, management of noise and multicollinearity). The workflow can be easily automated and/or routinely applied to a wide range of clustering projects. It can be useful both for data scientists with little experience in the field, to make data clustering easier and more robust, and for more experienced data scientists looking for a straightforward and reliable solution for routine preliminary data mining. A synthesis of the literature on data clustering, as well as the scientific rationale supporting the proposed workflow, is also provided. Finally, a detailed application of the workflow to a concrete use case is given, along with a practical discussion for data scientists. An implementation on the Dataiku platform is available upon request to the authors.
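One of the workflow's robustness ideas, evaluating cluster stability, can be sketched with a bootstrap adjusted-Rand-index check; this is a generic illustration on synthetic data, not the authors' specific algorithm choices:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)
# Synthetic stand-in for a health dataset with two latent sub-profiles
X = np.vstack([rng.normal(0, 1, (100, 4)), rng.normal(4, 1, (100, 4))])

# Reference partition on the full dataset
base = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Stability check: re-cluster bootstrap resamples, then compare the
# induced labels on the original points via the adjusted Rand index
ari_scores = []
for seed in range(20):
    idx = rng.integers(0, len(X), len(X))       # bootstrap sample
    km = KMeans(n_clusters=2, n_init=10, random_state=seed).fit(X[idx])
    ari_scores.append(adjusted_rand_score(base.labels_, km.predict(X)))

mean_ari = float(np.mean(ari_scores))
print(mean_ari > 0.9)  # well-separated data -> stable clusters
```

A low mean ARI would flag the partition as unstable, suggesting a different number of clusters, algorithm, or preprocessing before any clinical interpretation.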

    A spatio-temporal framework for modelling wastewater concentration during the COVID-19 pandemic

    The potential utility of wastewater-based epidemiology as an early warning tool has been explored widely across the globe during the current COVID-19 pandemic. Methods to detect the presence of SARS-CoV-2 RNA in wastewater were developed early in the pandemic, and extensive work has been conducted to evaluate the relationship between viral concentration and COVID-19 case numbers at the catchment areas of sewage treatment works (STWs) over time. However, no attempt has been made to develop a model that predicts wastewater concentration at fine spatio-temporal resolutions covering an entire country, a necessary step towards using wastewater monitoring for the early detection of local outbreaks. We consider weekly averages of flow-normalised viral concentration, reported as the number of SARS-CoV-2 N1 gene copies per litre (gc/L) of wastewater, available at 303 STWs over the period from 1 June 2021 to 30 March 2022. We specify a spatially continuous statistical model that quantifies the relationship between weekly viral concentration and a collection of covariates covering socio-demographics, land cover and virus-associated genomic characteristics at STW catchment areas, while accounting for spatial and temporal correlation. We evaluate the model’s predictive performance at the catchment level through 10-fold cross-validation. We predict the weekly viral concentration at the population-weighted centroid of the 32,844 lower super output areas (LSOAs) in England, then aggregate these LSOA predictions to the Lower Tier Local Authority (LTLA) level, a geography that is more relevant to public health policy-making. We also use the model outputs to quantify the probability of local changes of direction (increases or decreases) in viral concentration over short periods (e.g. two consecutive weeks). The proposed statistical framework can predict SARS-CoV-2 viral concentration in wastewater at high spatio-temporal resolution across England. Additionally, the probabilistic quantification of local changes can be used as an early warning tool for public health surveillance.
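The probability of a local change of direction follows naturally from posterior draws of the modelled concentration: the share of draws in which week 2 exceeds week 1. A minimal sketch, with purely illustrative numbers standing in for one area's posterior samples:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical posterior draws of log viral concentration for one LTLA
# in two consecutive weeks (illustrative values, not model output)
week1 = rng.normal(loc=10.0, scale=0.3, size=5000)
week2 = rng.normal(loc=10.4, scale=0.3, size=5000)

# Posterior probability of a local increase between the two weeks
p_increase = float((week2 - week1 > 0).mean())
print(p_increase)  # close to P(N(0.4, sd=sqrt(0.18)) > 0) ~ 0.83
```

Thresholding this probability (e.g. flagging areas with p_increase above 0.9) is one way such output could drive an early-warning rule.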

    A Reinforcement Learning-assisted Genetic Programming Algorithm for Team Formation Problem Considering Person-Job Matching

    An efficient team is essential for a company to successfully complete new projects. To solve the team formation problem considering person-job matching (TFP-PJM), a 0-1 integer programming model is constructed that considers the effects of both person-job matching and team members' willingness to communicate on team efficiency, with the person-job matching score calculated using intuitionistic fuzzy numbers. A reinforcement learning-assisted genetic programming algorithm (RL-GP) is then proposed to enhance the quality of solutions. RL-GP adopts ensemble population strategies: before the population evolves at each generation, the agent selects one of four population search modes according to the information obtained, realizing a sound balance between exploration and exploitation. In addition, surrogate models are used to evaluate the formation plans generated by individuals, which speeds up the learning process. A series of comparison experiments verifies the overall performance of RL-GP and the effectiveness of its improved strategies. The hyper-heuristic rules obtained through efficient learning can be utilized as decision-making aids when forming project teams. This study reveals the advantages of reinforcement learning methods, ensemble strategies, and surrogate models applied to the GP framework. The diversity and intelligent selection of search patterns, along with fast adaptation evaluation, are distinct features that enable RL-GP to be deployed in real-world enterprise environments.
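The agent's choice among the four population search modes can be sketched as a simple value-based bandit; the mode names, reward, and epsilon-greedy policy below are illustrative assumptions, not the paper's exact scheme:

```python
import random

random.seed(0)
# Stand-in names for the four population search modes
MODES = ["explore", "exploit", "hybrid", "restart"]
q = {m: 0.0 for m in MODES}  # estimated value of each search mode
n = {m: 0 for m in MODES}    # times each mode has been selected
EPS = 0.2

def select_mode():
    """Epsilon-greedy choice among the population search modes."""
    if random.random() < EPS:
        return random.choice(MODES)
    return max(MODES, key=q.get)

def update(mode, reward):
    """Incremental average of the fitness improvement a mode yields;
    a surrogate model would supply this reward cheaply."""
    n[mode] += 1
    q[mode] += (reward - q[mode]) / n[mode]

# Toy generations: pretend "hybrid" tends to improve fitness the most
for _ in range(200):
    m = select_mode()
    update(m, 1.0 if m == "hybrid" else 0.2)

print(max(q, key=q.get))  # the agent converges on the best mode
```

In the full algorithm the reward would be the (surrogate-estimated) fitness gain of the GP population after a generation under the chosen mode, and the state could include population-diversity statistics.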

    Airportscape and its Effect on Airport Sense of Place and Destination Image Perception

    Purpose – This study aims to validate a conceptual definition of airportscape and develop a multidimensional scale that integrates servicescape and service quality dimensions in order to comprehensively investigate airport service management. In addition, the study examines the structural relationships amongst airportscape, sense of place, airport image and destination image. Design/methodology/approach – Covariance-based structural equation modelling was employed. The study collected responses from 1,189 Thai respondents who had experienced an international airport in the past 12 months. Findings – Key findings reveal a set of three airportscape attributes that positively influenced air travellers’ perceived sense of place. Four other dimensions were found to positively influence airport image. The results also suggest positive relationships amongst sense of place, airport image and destination image. Sense of place strongly predicted both destination image and airport image, and was found to be an important mediator of the relationship between the airportscape dimensions and the perceived image variables. Originality/value – The study validates the airportscape scale and introduces ‘sense of place’, a concept that has not previously been objectively investigated in the airport context or in relation to tourism. The findings provide insights for airport managers and tourism authorities by examining areas that highlight an airport’s sense of place and representation of the destination. The study also strengthens the theoretical link between airport and tourism knowledge by showing that an airport’s sense of place can strongly influence airport image and destination image. The results confirm that an airport can represent a destination through the creation of a sense of place.