11,458 research outputs found

    The Metaverse: Survey, Trends, Novel Pipeline Ecosystem & Future Directions

    Full text link
    The Metaverse offers a second world beyond reality, where boundaries are non-existent and possibilities are endless, through engagement and immersive experiences using virtual reality (VR) technology. Many disciplines can benefit from the advancement of the Metaverse when it is properly developed, including technology, gaming, education, art, and culture. Nevertheless, developing the Metaverse environment to its full potential is an ambiguous task that needs proper guidance and direction. Existing surveys on the Metaverse focus only on a specific aspect or discipline of the Metaverse and lack a holistic view of the entire process. To this end, a more holistic, multi-disciplinary, in-depth, academic- and industry-oriented review is required to provide a thorough study of the Metaverse development pipeline. To address these issues, we present in this survey a novel multi-layered pipeline ecosystem composed of (1) the Metaverse computing, networking, communications, and hardware infrastructure, (2) environment digitization, and (3) user interactions. For every layer, we discuss the components that detail the steps of its development. For each of these components, we also examine the impact of a set of enabling technologies and empowering domains (e.g., Artificial Intelligence, Security & Privacy, Blockchain, Business, Ethics, and Social) on its advancement. In addition, we explain the importance of these technologies in supporting decentralization, interoperability, user experiences, interactions, and monetization. Our study highlights the existing challenges for each component, followed by research directions and potential solutions. To the best of our knowledge, this survey is the most comprehensive to date and allows users, scholars, and entrepreneurs to gain an in-depth understanding of the Metaverse ecosystem and find their opportunities and potential for contribution.
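
    The layered pipeline described above can be sketched as a simple data structure. This is a minimal illustration only: the three layer names follow the abstract, while the components and enabling-technology tags listed under each layer are assumed placeholders rather than the survey's exact taxonomy.

        from dataclasses import dataclass, field

        @dataclass
        class Component:
            name: str
            enabling_technologies: list[str] = field(default_factory=list)

        @dataclass
        class Layer:
            name: str
            components: list[Component] = field(default_factory=list)

        # Layer names follow the survey; components and technology tags are placeholders.
        pipeline = [
            Layer("Infrastructure (computing, networking, communications, hardware)", [
                Component("Computing", ["Artificial Intelligence"]),
                Component("Networking & communications", ["Security & Privacy"]),
            ]),
            Layer("Environment digitization", [
                Component("Scene reconstruction", ["Artificial Intelligence"]),
                Component("Asset ownership", ["Blockchain", "Business"]),
            ]),
            Layer("User interactions", [
                Component("Avatars and social presence", ["Ethics", "Social"]),
                Component("Monetization", ["Blockchain", "Business"]),
            ]),
        ]

        for layer in pipeline:
            print(layer.name, "->", [c.name for c in layer.components])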

    A Design Science Research Approach to Smart and Collaborative Urban Supply Networks

    Get PDF
    Urban supply networks are facing increasing demands and challenges and thus constitute a relevant field for research and practical development. Supply chain management holds enormous potential and relevance for society and everyday life, as the flows of goods and information are important economic functions. Being a heterogeneous field, the literature base of supply chain management research is difficult to manage and navigate. Disruptive digital technologies and the implementation of cross-network information analysis and sharing drive the need for new organisational and technological approaches. Practical issues are manifold and include megatrends such as digital transformation, urbanisation, and environmental awareness. A promising approach to solving these problems is the realisation of smart and collaborative supply networks. The growth of artificial intelligence in recent years has led to a wide range of applications in a variety of domains. However, the potential of artificial intelligence utilisation in supply chain management has not yet been fully exploited. Similarly, value creation increasingly takes place in networked value creation cycles that have become continuously more collaborative, complex, and dynamic as interactions in business processes involving information technologies have become more intense. Following a design science research approach, this cumulative thesis comprises the development and discussion of four artefacts for the analysis and advancement of smart and collaborative urban supply networks. This thesis aims to highlight the potential of artificial intelligence-based supply networks, to advance data-driven inter-organisational collaboration, and to improve last-mile supply network sustainability. Based on thorough machine learning and systematic literature reviews, reference and system dynamics modelling, simulation, and qualitative empirical research, the artefacts provide a valuable contribution to research and practice.

    Central-provincial Politics and Industrial Policy-making in the Electric Power Sector in China

    Get PDF
    In addition to studies that provide meaningful insights into the complexity of technical and economic issues, a growing number of studies have focused on the political process of market transition in network industries such as the electric power sector. This dissertation studies central-provincial interactions in industrial policy-making and implementation, and attempts to evaluate the roles of Chinese provinces in the market reform process of the electric power sector. Market reforms of this sector are used as an illustrative case because the new round of market reforms has achieved some significant breakthroughs in areas such as pricing reform and wholesale market trading. Other policy measures, such as the liberalization of the distribution market and cross-regional market-building, are still at a nascent stage and have made only moderate progress. It is important to investigate why some policy areas make greater progress in market reforms than others. It is also interesting to examine the impacts of Chinese central-provincial politics in producing the different market reform outcomes. Guangdong and Xinjiang are the two provinces analyzed in this dissertation. The progress of market reforms in these two provinces showed similarities, although the provinces are very different in terms of local conditions such as their stages of economic development and energy structures. The actual reforms can be understood as the outcomes of certain modes of interaction between the central and provincial actors in the context of their particular capabilities and preferences in different policy areas. This dissertation argues that market reform is more successful in policy areas where the central and provincial authorities are able to engage mainly in integrative negotiations than in areas where they engage mainly in distributive negotiations.

    A Decision Support System for Economic Viability and Environmental Impact Assessment of Vertical Farms

    Get PDF
    Vertical farming (VF) is the practice of growing crops or animals using the vertical dimension via multi-tier racks or vertically inclined surfaces. In this thesis, I focus on the emerging industry of plant-specific VF. Vertical plant farming (VPF) is a promising and relatively novel practice that can be conducted in buildings with environmental control and artificial lighting. However, the nascent sector has experienced challenges in economic viability, standardisation, and environmental sustainability. Practitioners and academics call for a comprehensive financial analysis of VPF, but efforts are stifled by a lack of valid and available data. A review of economic estimation and horticultural software identifies a need for a decision support system (DSS) that facilitates risk-empowered business planning for vertical farmers. This thesis proposes an open-source DSS framework to evaluate business sustainability through financial risk and environmental impact assessments. Data from the literature, alongside lessons learned from industry practitioners, would be centralised in the proposed DSS using imprecise data techniques. These techniques have been applied in engineering but are seldom used in financial forecasting, and they could benefit complex sectors that have only scarce data with which to predict business viability. To begin the execution of the DSS framework, VPF practitioners were interviewed using a mixed-methods approach. Lessons learned from over 19 shuttered and operational VPF projects provide insights into the barriers inhibiting scalability and identify risks, which are organised into a risk taxonomy. Labour was the most commonly reported top challenge, so research was conducted to explore lean principles for improving productivity. A probabilistic model representing a spectrum of variables and their associated uncertainty was built according to the DSS framework to evaluate the financial risk of VF projects. This enabled flexible computation without precise production or financial data, improving the accuracy of economic estimation. The model assessed two VPF cases (one in the UK and another in Japan), demonstrating the first risk and uncertainty quantification of VPF business models in the literature. The results highlighted measures to improve economic viability and the viability of the UK and Japan cases. An environmental impact assessment model was also developed, allowing VPF operators to evaluate their carbon footprint compared to traditional agriculture using life-cycle assessment. I explore strategies for net-zero carbon production through sensitivity analysis. Renewable energies, especially solar, geothermal, and tidal power, show promise for reducing the carbon emissions of indoor VPF. Results show that renewably powered VPF can reduce carbon emissions compared to field-based agriculture when land-use change is considered. The drivers for DSS adoption have been researched, showing a pathway of compliance and design thinking to overcome the ‘problem of implementation’ and enable commercialisation. Further work is suggested to standardise VF equipment, collect benchmarking data, and characterise risks. This work will reduce risk and uncertainty and accelerate the sector’s emergence.
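
    The probabilistic financial-risk model described above lends itself to a Monte Carlo sketch. The following is a minimal illustration, not the thesis's DSS: the parameter names, distributions, and values (capex, yield, price, opex, discount rate) are assumptions chosen only to show how uncertain inputs propagate to a net-present-value risk estimate.

        import numpy as np

        rng = np.random.default_rng(42)
        n_samples, years, discount_rate = 10_000, 10, 0.08

        # Uncertain inputs drawn from assumed distributions (illustrative values only).
        capex = rng.normal(500_000, 50_000, n_samples)             # initial investment
        yield_kg = rng.normal(80_000, 10_000, (n_samples, years))  # annual produce, kg
        price = rng.normal(6.0, 0.8, (n_samples, years))           # revenue per kg
        opex = rng.normal(300_000, 40_000, (n_samples, years))     # labour, energy, rent

        cash_flow = yield_kg * price - opex
        discount = (1 + discount_rate) ** -np.arange(1, years + 1)
        npv = -capex + (cash_flow * discount).sum(axis=1)

        print(f"mean NPV: {npv.mean():,.0f}")
        print(f"P(NPV < 0), i.e. risk of non-viability: {(npv < 0).mean():.1%}")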

    Breast mass segmentation from mammograms with deep transfer learning

    Get PDF
    Abstract. Mammography is an X-ray imaging method used in breast cancer screening, which is a time-consuming process. Many different computer-assisted diagnosis systems have been created to speed up the image analysis. Deep learning is the use of multilayered neural networks for solving different tasks. Deep learning methods are becoming more advanced and popular for segmenting images. One deep transfer learning method is to use these neural networks with pretrained weights, which typically improves a network's performance. In this thesis, deep transfer learning was used to segment cancerous masses from mammography images. The convolutional neural networks used were pretrained and fine-tuned, and they had an encoder-decoder architecture. The ResNet22 encoder was pretrained with mammography images, while the ResNet34 encoder was pretrained with various color images. These encoders were paired with either a U-Net or a Feature Pyramid Network decoder. Additionally, a U-Net model with random initialization was tested. The five different models were trained and tested on the Oulu Dataset of Screening Mammography (9204 images) and on the Portuguese INbreast dataset (410 images) with two different loss functions: binary cross-entropy loss combined with soft Jaccard loss, and a loss function based on the focal Tversky index. The best models were trained on the Oulu Dataset of Screening Mammography with the focal Tversky loss. The best segmentation result achieved was a Dice similarity coefficient of 0.816 on correctly segmented masses and a classification accuracy of 88.7% on the INbreast dataset. On the Oulu Dataset of Screening Mammography, the best results were a Dice score of 0.763 and a classification accuracy of 83.3%. The results between the pretrained models were similar, and the pretrained models had better results than the non-pretrained models. In conclusion, deep transfer learning is very suitable for mammography mass segmentation, and the choice of loss function had a large impact on the results.

    Segmentation of breast masses from mammography images using deep and transfer learning. Abstract. Mammography is an X-ray imaging method used in breast cancer screening. Screening mammography images is time-consuming, and several computer-assisted solutions have been developed to aid their analysis. Deep learning refers to the use of multilayered neural networks for solving different tasks. Deep learning methods have advanced over time and become popular for image segmentation. One way to combine deep and transfer learning is to use neural networks with pretrained weights, which helps improve their performance. This thesis investigated the use of deep and transfer learning for segmenting cancerous masses from mammography images. The convolutional neural networks used were pretrained and fine-tuned, and they had an encoder-decoder architecture. The ResNet22 encoder was pretrained with mammography images, while the ResNet34 encoder was pretrained with a wide variety of color images. These encoders were paired with either a U-Net or a Feature Pyramid Network decoder. In addition, a U-Net model without pretraining was used. These five models were trained and tested on both the Oulu Dataset of Screening Mammography (9204 images) and the Portuguese INbreast dataset (410 images) using two different loss functions: binary cross-entropy combined with soft Jaccard loss, and a loss function based on the focal Tversky index. The best models were trained on the Oulu dataset with the focal Tversky loss. The best results were a Dice coefficient of 0.816 on correctly segmented masses and a classification accuracy of 88.7% on the INbreast dataset. The pretrained models gave better results than the non-pretrained models. On the Oulu dataset, the best results were a Dice coefficient of 0.763 and a classification accuracy of 83.3%. There was no large difference in results between the pretrained models. Based on the results, deep and transfer learning are well suited to segmenting masses from mammography images, and the choice of loss function had a large impact on the results.
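
    A minimal PyTorch sketch of the focal Tversky loss and the Dice similarity coefficient referred to above; the alpha, beta, and gamma defaults are illustrative assumptions, not the values used in the thesis.

        import torch

        def focal_tversky_loss(pred, target, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-6):
            """pred: sigmoid probabilities, target: binary masks, both shaped (N, 1, H, W)."""
            pred = pred.reshape(pred.size(0), -1)
            target = target.reshape(target.size(0), -1)
            tp = (pred * target).sum(dim=1)
            fn = ((1 - pred) * target).sum(dim=1)
            fp = (pred * (1 - target)).sum(dim=1)
            tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
            return ((1 - tversky) ** gamma).mean()

        def dice_coefficient(pred, target, eps=1e-6):
            """Dice similarity coefficient on binarised predictions."""
            pred = (pred > 0.5).float()
            inter = (pred * target).sum()
            return (2 * inter + eps) / (pred.sum() + target.sum() + eps)

        # Usage with any encoder-decoder segmentation network producing logits:
        # probs = torch.sigmoid(model(images))
        # loss = focal_tversky_loss(probs, masks)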

    In vitro investigation of the effect of disulfiram on hypoxia induced NFÎșB, epithelial to mesenchymal transition and cancer stem cells in glioblastoma cell lines

    Get PDF
    A thesis submitted in partial fulfilment of the requirements of the University of Wolverhampton for the degree of Doctor of Philosophy. Glioblastoma multiforme (GBM) is one of the most aggressive and lethal cancers, with a poor prognosis. Advances in the treatment of GBM are limited by several resistance mechanisms, by restricted drug delivery into the central nervous system (CNS) compartment across the blood-brain barrier (BBB), and by actions of the normal brain that counteract tumour-targeting medications. Hypoxia is common in malignant brain tumours such as GBM and plays a significant role in tumour pathobiology. It is widely accepted that hypoxia is a major driver of GBM malignancy. Although it has been confirmed that hypoxia induces GBM stem-like cells (GSCs), which are highly invasive and resistant to all chemotherapeutic agents, the detailed molecular pathways linking hypoxia, GSC traits, and chemoresistance remain obscure. Evidence shows that hypoxia induces cancer stem cell phenotypes via epithelial-to-mesenchymal transition (EMT), promoting therapeutic resistance in most cancers, including GBM. This study demonstrated that spheroid-cultured GBM cells consist of a large population of hypoxic cells with CSC and EMT characteristics. GSCs are chemoresistant and displayed increased levels of HIFs and NFÎșB activity. Similarly, hypoxia-cultured GBM cells manifested GSC traits, chemoresistance, and invasiveness. These results suggest that hypoxia is responsible for GBM stemness, chemoresistance, and invasiveness. GBM cells transfected with the nuclear factor kappa B p65 (NFÎșB-p65) subunit exhibited CSC and EMT markers, indicating the essential role of NFÎșB in maintaining GSC phenotypes. The study also highlighted the significance of NFÎșB in driving chemoresistance and invasiveness, and the potential role of NFÎșB as the central regulator of hypoxia-induced stemness in GBM cells. The GSC population has the ability to self-renew, initiate cancer, and develop secondary heterogeneous cancers. The very poor prognosis of GBM could largely be attributed to the existence of GSCs, which promote tumour propagation, maintenance, radio- and chemoresistance, and local infiltration. In this study, we used Disulfiram (DS), a drug used for more than 65 years in alcoholism clinics, in combination with copper (Cu) to target the NFÎșB pathway, reverse chemoresistance, and block invasion in GSCs. The results showed that DS/Cu is highly cytotoxic to GBM cells and completely eradicated the resistant CSC population at low dose levels in vitro. DS/Cu inhibited the migration and invasion of hypoxia-induced CSC- and EMT-like GBM cells at low nanomolar concentrations. DS is an FDA-approved drug with low toxicity to normal tissues and can pass through the BBB. Further research may lead to the rapid translation of DS into cancer clinics and provide new therapeutic options to improve treatment outcomes in GBM patients.

    QSAR based virtual screening derived identification of a novel hit as a SARS CoV-229E 3CLpro Inhibitor: GA-MLR QSAR modeling supported by molecular Docking, molecular dynamics simulation and MMGBSA calculation approaches

    Get PDF
    Suitable coronavirus drug targets and corresponding lead molecules must be identified as quickly as possible to produce antiviral therapeutics against human coronavirus (HCoV SARS 3CLpro) infections. In the present communication, we have identified a hit candidate for HCoV SARS 3CLpro inhibition. A four-parametric GA-MLR-based QSAR model (R2: 0.84, R2adj: 0.82, Q2loo: 0.78) was developed using a dataset of 37 structurally diverse molecules, together with QSAR-based virtual screening (QSAR-VS), molecular docking (MD), molecular dynamics simulation (MDS) analysis, and MM-GBSA calculations. The QSAR-based virtual screening was used to find novel lead molecules from an in-house database of 100 molecules. The QSAR-VS successfully yielded a hit molecule with an improved pEC50 value, from 5.88 to 6.08. The benzene ring, phenyl ring, amide oxygen and nitrogen, and other important pharmacophoric sites are revealed via the MD and MDS studies. Ile164, Pro188, Leu190, Thr25, His41, Asn46, Thr47, Ser49, Asn189, Gln191, and Asn141 are among the key amino acid residues in the S1 and S2 pockets. A stable complex of the lead molecule with HCoV SARS 3CLpro was observed in the MDS. The MM-GBSA binding energies calculated from the MD simulation results agreed well with the binding energies calculated from docking. The results of this study can be exploited to develop novel antiviral agents, such as HCoV SARS 3CLpro inhibitors.
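
    The reported model statistics (R2, adjusted R2, and leave-one-out Q2) for a four-descriptor MLR model can be reproduced with a short sketch. The descriptor matrix and pEC50 values below are random placeholders, not the paper's dataset; only the metric definitions are meant to match.

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import LeaveOneOut, cross_val_predict

        # Placeholder data: 37 molecules x 4 GA-selected descriptors, pEC50 responses.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(37, 4))
        y = 5.0 + X @ np.array([0.6, -0.4, 0.3, 0.2]) + rng.normal(scale=0.2, size=37)

        model = LinearRegression().fit(X, y)
        n, p = X.shape

        ss_res = np.sum((y - model.predict(X)) ** 2)
        ss_tot = np.sum((y - y.mean()) ** 2)
        r2 = 1 - ss_res / ss_tot
        r2_adj = 1 - (1 - r2) * (n - 1) / (n - p - 1)

        # Leave-one-out cross-validated predictions give Q2(LOO).
        y_loo = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
        q2_loo = 1 - np.sum((y - y_loo) ** 2) / ss_tot

        print(f"R2={r2:.2f}  R2adj={r2_adj:.2f}  Q2loo={q2_loo:.2f}")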

    Moisture Content and In-place Density of Cold-Recycling Treatments

    Get PDF
    Cold-recycling treatments are gaining popularity in the United States because of their economic and environmental benefits. Curing is the most critical phase for these treatments: it is the process in which the emulsion breaks and water evaporates, leaving residual binder in the treated material. In this process, the cold-recycled mix gains strength. Sufficient strength is required before opening the cold-treated layer to traffic or placing an overlay; otherwise, premature failure related to insufficient strength and trapped moisture would be expected. However, some challenges arise from the lack of relevant information and specifications to monitor treatment curing. This report presents the outcomes of a research project funded by the Illinois Department of Transportation to investigate the feasibility of using nondestructive ground-penetrating radar (GPR) for density and moisture content estimation of cold-recycled treatments. Moisture content is an indicator of the curing level; treated layers must meet a threshold of maximum allowable moisture content (2% in Illinois) to be considered sufficiently cured. The methodology followed in this report included GPR numerical simulations and GPR indoor and field tests as data sources. The data were used to correlate moisture content to dielectric properties calculated from GPR measurements. Two models were developed for moisture content estimation: the first is based on numerical simulations, and the second is based on electromagnetic mixing theory and is called the Al-Qadi-Cao-Abufares (ACA) model. The simulation model had an average error of 0.33% for moisture prediction across five different field projects. The ACA model had an average error of 2% for density prediction and an average root-mean-square error of less than 0.5% for moisture content prediction for both indoor and field tests. The ACA model is presented as part of a user-friendly tool that could be used in the future to continuously monitor the curing of cold-recycled treatments.
    IDOT-R27-227
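
    A minimal sketch of the kind of GPR-based estimation described above: the surface-reflection formula for the dielectric constant is standard GPR practice, while the linear moisture calibration is a hypothetical placeholder and is not the simulation or ACA model developed in the report.

        def dielectric_from_surface_reflection(a_surface, a_metal_plate):
            """Standard GPR surface-reflection estimate of the dielectric constant:
            eps = ((1 + A/Am) / (1 - A/Am)) ** 2, with Am measured over a metal plate."""
            ratio = a_surface / a_metal_plate
            return ((1 + ratio) / (1 - ratio)) ** 2

        def moisture_from_dielectric(eps, slope=0.6, intercept=-1.5):
            """Hypothetical linear calibration from dielectric constant to moisture (%);
            a real calibration (e.g. the ACA model) would be fitted to lab and field data."""
            return slope * eps + intercept

        eps = dielectric_from_surface_reflection(a_surface=0.35, a_metal_plate=1.0)
        moisture = moisture_from_dielectric(eps)
        print(f"dielectric ~ {eps:.2f}, moisture ~ {moisture:.2f}% (cured if below 2%)")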

    Increased lifetime of Organic Photovoltaics (OPVs) and the impact of degradation, efficiency and costs in the LCOE of Emerging PVs

    Get PDF
    Emerging photovoltaic (PV) technologies such as organic photovoltaics (OPVs) and perovskites (PVKs) have the potential to disrupt the PV market due to their ease of fabrication (compatible with cheap roll-to-roll processing) and installation, as well as their significant efficiency improvements in recent years. However, rapid degradation is still an issue in many emerging PVs, which must be addressed to enable their commercialisation. This thesis presents an OPV lifetime-enhancing technique, adding the insulating polymer PMMA to the active layer, and a novel model for quantifying the impact of degradation (alongside efficiency and cost) on the levelized cost of energy (LCOE) of real-world emerging PV installations. The effect of PMMA morphology on the success of a ternary strategy was investigated, leading to device design guidelines. It was found that increasing either the weight percent (wt%) or the molecular weight (MW) of PMMA resulted in an increase in the volume of PMMA-rich islands, which protected the OPV against water and oxygen ingress. It was also found that adding PMMA can be effective in enhancing the lifetime of different active-material combinations, although not to the same extent, and that processing additives can have a negative impact on device lifetime. A novel model was developed taking into account realistic degradation profiles sourced from a literature review of state-of-the-art OPV and PVK devices. It was found that the optimal strategies to improve LCOE depend on the present characteristics of a device, and that panels with a good balance of efficiency and degradation outperformed panels with higher efficiency but also higher degradation. Further, it was found that low-cost locations benefited more from reductions in degradation rate and module cost, whilst high-cost locations benefited more from improvements in initial efficiency, lower discount rates, and reductions in installation costs.
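
    A minimal sketch of an LCOE calculation that folds in an annual degradation rate, the quantity the thesis model studies alongside efficiency and cost; all input values are illustrative assumptions, not the thesis's data.

        def lcoe(install_cost, annual_opex, first_year_kwh,
                 degradation_rate, discount_rate, lifetime_years):
            """LCOE = discounted lifetime costs / discounted lifetime energy yield."""
            costs = install_cost
            energy = 0.0
            for t in range(1, lifetime_years + 1):
                disc = (1 + discount_rate) ** -t
                costs += annual_opex * disc
                energy += first_year_kwh * (1 - degradation_rate) ** (t - 1) * disc
            return costs / energy

        # Two illustrative panels: higher initial output but faster degradation vs.
        # slightly lower output with better stability (all values assumed).
        fast_degrading = lcoe(1200, 15, 1600, 0.05, 0.07, 20)
        stable = lcoe(1200, 15, 1450, 0.01, 0.07, 20)
        print(f"high-output, fast-degrading panel: {fast_degrading:.3f} per kWh")
        print(f"balanced, stable panel:            {stable:.3f} per kWh")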

    Defining Service Level Agreements in Serverless Computing

    Get PDF
    The emergence of serverless computing has brought significant advancements to the delivery of computing resources to cloud users. With the abstraction of infrastructure, ecosystem, and execution environments, users could focus on their code while relying on the cloud provider to manage the abstracted layers. In addition, desirable features such as autoscaling and high availability became a provider’s responsibility and can be adopted by the user’s application at no extra overhead. Despite such advancements, significant challenges must be overcome as applications transition from monolithic stand-alone deployments to the ephemeral and stateless microservice model of serverless computing. These challenges pertain to the uniqueness of the conceptual and implementation models of serverless computing. One of the notable challenges is the complexity of defining Service Level Agreements (SLA) for serverless functions. As the serverless model shifts the administration of resources, ecosystem, and execution layers to the provider, users become mere consumers of the provider’s abstracted platform with no insight into its performance. Suboptimal conditions of the abstracted layers are not visible to the end-user who has no means to assess their performance. Thus, SLA in serverless computing must take into consideration the unique abstraction of its model. This work investigates the Service Level Agreement (SLA) modeling of serverless functions’ and serverless chains’ executions. We highlight how serverless SLA fundamentally differs from earlier cloud delivery models. We then propose an approach to define SLA for serverless functions by utilizing resource utilization fingerprints for functions’ executions and a method to assess if executions adhere to that SLA. We evaluate the approach’s accuracy in detecting SLA violations for a broad range of serverless application categories. Our validation results illustrate a high accuracy in detecting SLA violations resulting from resource contentions and provider’s ecosystem degradations. We conclude by presenting the empirical validation of our proposed approach, which could detect Execution-SLA violations with accuracy up to 99%.
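
    A minimal sketch of the fingerprint idea described above, assuming a per-function baseline of resource-utilization statistics from reference executions against which new executions are compared; the metric names, sample values, and z-score threshold are illustrative assumptions, not the paper's definitions.

        import numpy as np

        METRICS = ["duration_ms", "cpu_pct", "mem_mb"]

        def build_fingerprint(reference_runs):
            """reference_runs: array-like of shape (n_runs, n_metrics) from healthy executions."""
            runs = np.asarray(reference_runs, dtype=float)
            return {"mean": runs.mean(axis=0), "std": runs.std(axis=0) + 1e-9}

        def violates_sla(execution, fingerprint, z_threshold=3.0):
            """Flag an execution whose metrics deviate strongly from the fingerprint."""
            z = (np.asarray(execution, dtype=float) - fingerprint["mean"]) / fingerprint["std"]
            return bool(np.any(np.abs(z) > z_threshold))

        # Illustrative usage for a single function's executions.
        baseline = [[120, 35, 128], [118, 33, 130], [125, 36, 127], [121, 34, 129]]
        fp = build_fingerprint(baseline)
        print(violates_sla([122, 35, 128], fp))   # nominal run -> False
        print(violates_sla([480, 90, 260], fp))   # degraded ecosystem -> True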
    • 

    corecore