
    The Metaverse: Survey, Trends, Novel Pipeline Ecosystem & Future Directions

    The Metaverse offers a second world beyond reality, where boundaries are non-existent and possibilities are endless through engagement and immersive experiences using virtual reality (VR) technology. Many disciplines can benefit from the Metaverse when it is properly developed, including technology, gaming, education, art, and culture. Nevertheless, developing the Metaverse environment to its full potential is an ambiguous task that needs proper guidance and direction. Existing surveys focus only on a specific aspect or discipline of the Metaverse and lack a holistic view of the entire process. To this end, a holistic, multi-disciplinary, in-depth review oriented to both academia and industry is required to provide a thorough study of the Metaverse development pipeline. To address these issues, we present in this survey a novel multi-layered pipeline ecosystem composed of (1) the Metaverse computing, networking, communications and hardware infrastructure, (2) environment digitization, and (3) user interactions. For every layer, we discuss the components that detail the steps of its development. For each of these components, we also examine the impact of a set of enabling technologies and empowering domains (e.g., Artificial Intelligence, Security & Privacy, Blockchain, Business, Ethics, and Social) on its advancement. In addition, we explain the importance of these technologies in supporting decentralization, interoperability, user experiences, interactions, and monetization. Our study highlights the existing challenges for each component, followed by research directions and potential solutions. To the best of our knowledge, this survey is the most comprehensive to date, allowing users, scholars, and entrepreneurs to gain an in-depth understanding of the Metaverse ecosystem and identify their opportunities for contribution.

    One Small Step for Generative AI, One Giant Leap for AGI: A Complete Survey on ChatGPT in AIGC Era

    OpenAI has recently released GPT-4 (a.k.a. ChatGPT Plus), which is demonstrated to be one small step for generative AI (GAI), but one giant leap for artificial general intelligence (AGI). Since its official release in November 2022, ChatGPT has quickly attracted numerous users, with extensive media coverage. Such unprecedented attention has also motivated numerous researchers to investigate ChatGPT from various perspectives. According to Google Scholar, there are more than 500 articles with ChatGPT in their titles or mentioning it in their abstracts. Considering this, a review is urgently needed, and our work fills this gap. Overall, this work is the first to survey ChatGPT with a comprehensive review of its underlying technology, applications, and challenges. Moreover, we present an outlook on how ChatGPT might evolve to realize general-purpose AIGC (a.k.a. AI-generated content), which will be a significant milestone for the development of AGI.
    Comment: A Survey on ChatGPT and GPT-4, 29 pages. Feedback is appreciated ([email protected]).

    A Decision Support System for Economic Viability and Environmental Impact Assessment of Vertical Farms

    Vertical farming (VF) is the practice of growing crops or animals using the vertical dimension via multi-tier racks or vertically inclined surfaces. In this thesis, I focus on the emerging industry of plant-specific VF. Vertical plant farming (VPF) is a promising and relatively novel practice that can be conducted in buildings with environmental control and artificial lighting. However, the nascent sector has experienced challenges in economic viability, standardisation, and environmental sustainability. Practitioners and academics call for a comprehensive financial analysis of VPF, but efforts are stifled by a lack of valid and available data. A review of economic estimation and horticultural software identifies a need for a decision support system (DSS) that facilitates risk-empowered business planning for vertical farmers. This thesis proposes an open-source DSS framework to evaluate business sustainability through financial risk and environmental impact assessments. Data from the literature, alongside lessons learned from industry practitioners, would be centralised in the proposed DSS using imprecise data techniques. These techniques have been applied in engineering but are seldom used in financial forecasting, and could benefit complex sectors which have only scarce data with which to predict business viability. To begin the execution of the DSS framework, VPF practitioners were interviewed using a mixed-methods approach. Lessons learned from over 19 shuttered and operational VPF projects provide insights into the barriers inhibiting scalability and identify risks, which are organised into a risk taxonomy. Labour was the most commonly reported top challenge; therefore, research was conducted to explore lean principles to improve productivity. A probabilistic model representing a spectrum of variables and their associated uncertainty was built according to the DSS framework to evaluate the financial risk for VF projects. This enabled flexible computation without precise production or financial data, improving economic estimation accuracy. The model assessed two VPF cases (one in the UK and another in Japan), demonstrating the first risk and uncertainty quantification of VPF business models in the literature. The results highlighted measures to improve economic viability and assessed the viability of the UK and Japan cases. An environmental impact assessment model was then developed, allowing VPF operators to evaluate their carbon footprint compared to traditional agriculture using life-cycle assessment, and I explore strategies for net-zero carbon production through sensitivity analysis. Renewable energies, especially solar, geothermal, and tidal power, show promise for reducing the carbon emissions of indoor VPF. Results show that renewably powered VPF can reduce carbon emissions compared to field-based agriculture when considering land-use change. The drivers for DSS adoption have been researched, showing a pathway of compliance and design thinking to overcome the ‘problem of implementation’ and enable commercialisation. Further work is suggested to standardise VF equipment, collect benchmarking data, and characterise risks. This work will reduce risk and uncertainty and accelerate the sector’s emergence.
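
    A minimal sketch of the kind of probabilistic financial-risk calculation such a DSS could perform is shown below: annual profit is simulated from uncertain yield, price, and cost inputs, and the downside risk is summarised. All distributions, figures, and function names are illustrative assumptions, not data or methods from the thesis.

```python
# Minimal Monte Carlo sketch of financial risk for a hypothetical vertical farm.
# All distributions and figures below are illustrative assumptions, not data
# from the thesis or any real VPF project.
import random
import statistics

def simulate_annual_profit() -> float:
    """Draw one scenario of annual profit (GBP) from uncertain inputs."""
    yield_kg = random.triangular(60_000, 120_000, 90_000)        # annual crop yield
    price_per_kg = random.triangular(4.0, 9.0, 6.0)              # selling price
    energy_cost = random.triangular(150_000, 350_000, 240_000)   # lighting/HVAC
    labour_cost = random.triangular(120_000, 260_000, 180_000)
    other_costs = 80_000                                         # fixed overheads
    revenue = yield_kg * price_per_kg
    return revenue - energy_cost - labour_cost - other_costs

def risk_summary(n_runs: int = 10_000) -> dict:
    """Summarise the profit distribution across many simulated scenarios."""
    profits = sorted(simulate_annual_profit() for _ in range(n_runs))
    return {
        "mean_profit": statistics.mean(profits),
        "p5": profits[int(0.05 * n_runs)],     # downside (5th percentile)
        "p95": profits[int(0.95 * n_runs)],    # upside (95th percentile)
        "prob_loss": sum(p < 0 for p in profits) / n_runs,
    }

if __name__ == "__main__":
    print(risk_summary())
```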

    Coloniality and the Courtroom: Understanding Pre-trial Judicial Decision Making in Brazil

    This thesis focuses on judicial decision making during custody hearings in Rio de Janeiro, Brazil. The impetus for the study is that while national and international protocols mandate the use of pre-trial detention only as a last resort, judges continue to detain people pre-trial in large numbers. Custody hearings were introduced in 2015, but the initiative has not produced the reduction in pre-trial detention that was hoped for. This study aims to understand what informs judicial decision making at this stage. The research is approached through a decolonial lens to foreground legacies of colonialism overlooked in mainstream criminological scholarship. This is an interview-based study in which key court actors (judges, prosecutors, and public defenders) and subject matter specialists were asked about influences on judicial decision making. Interview data is complemented by non-participatory observation of custody hearings. The research responds directly to Aliverti et al.'s (2021) call to ‘decolonize the criminal question’ by exposing and explaining how colonialism informs criminal justice practices. Answering the call in relation to judicial decision making, the findings provide evidence that colonial-era assumptions, dynamics, and hierarchies were evident in the practice of custody hearings and continue to inform judges’ decisions, thus demonstrating the coloniality of justice. This study is significant for the new empirical data presented; theoretical innovation is also offered via the introduction of the ‘anticitizen’. The concept builds on Souza’s (2007) ‘subcitizen’ to account for the active pursuit of dangerous Others by judges casting themselves as crime fighters in a modern moral crusade. The findings point to the limited utility of human rights discourse – the normative approach to influencing judicial decision making around pre-trial detention – as a plurality of conceptualisations compete for dominance. This study has important implications for all actors aiming to reduce pre-trial detention in Brazil because, unless underpinning colonial logics are addressed, every innovation risks becoming the next lei para inglês ver (law [just] for the English to see).

    Defining Service Level Agreements in Serverless Computing

    The emergence of serverless computing has brought significant advancements to the delivery of computing resources to cloud users. With the abstraction of infrastructure, ecosystem, and execution environments, users could focus on their code while relying on the cloud provider to manage the abstracted layers. In addition, desirable features such as autoscaling and high availability became a provider’s responsibility and can be adopted by the user’s application at no extra overhead. Despite such advancements, significant challenges must be overcome as applications transition from monolithic stand-alone deployments to the ephemeral and stateless microservice model of serverless computing. These challenges pertain to the uniqueness of the conceptual and implementation models of serverless computing. One of the notable challenges is the complexity of defining Service Level Agreements (SLA) for serverless functions. As the serverless model shifts the administration of resources, ecosystem, and execution layers to the provider, users become mere consumers of the provider’s abstracted platform with no insight into its performance. Suboptimal conditions of the abstracted layers are not visible to the end-user who has no means to assess their performance. Thus, SLA in serverless computing must take into consideration the unique abstraction of its model. This work investigates the Service Level Agreement (SLA) modeling of serverless functions’ and serverless chains’ executions. We highlight how serverless SLA fundamentally differs from earlier cloud delivery models. We then propose an approach to define SLA for serverless functions by utilizing resource utilization fingerprints for functions’ executions and a method to assess if executions adhere to that SLA. We evaluate the approach’s accuracy in detecting SLA violations for a broad range of serverless application categories. Our validation results illustrate a high accuracy in detecting SLA violations resulting from resource contentions and provider’s ecosystem degradations. We conclude by presenting the empirical validation of our proposed approach, which could detect Execution-SLA violations with accuracy up to 99%.
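
    As a rough illustration of the fingerprinting idea, the sketch below builds a per-metric baseline from past executions of a function and flags executions that drift too far from it. The metric names, threshold, and sample values are assumptions for illustration only, not the thesis's actual SLA model.

```python
# Minimal sketch of flagging SLA violations from a resource-utilisation
# "fingerprint": metric names, thresholds, and sample values are illustrative
# assumptions, not the thesis's actual fingerprinting method.
from statistics import mean, stdev

METRICS = ("duration_ms", "cpu_ms", "mem_mb")

def build_fingerprint(history: list[dict]) -> dict:
    """Summarise past executions as per-metric mean and standard deviation."""
    return {
        m: (mean(run[m] for run in history), stdev(run[m] for run in history))
        for m in METRICS
    }

def violates_sla(execution: dict, fingerprint: dict, k: float = 3.0) -> bool:
    """Flag an execution whose metrics drift more than k sigmas from baseline."""
    for m, (mu, sigma) in fingerprint.items():
        if sigma > 0 and abs(execution[m] - mu) > k * sigma:
            return True
    return False

# Example usage with made-up measurements of one serverless function.
history = [
    {"duration_ms": 120, "cpu_ms": 95, "mem_mb": 128},
    {"duration_ms": 130, "cpu_ms": 100, "mem_mb": 130},
    {"duration_ms": 125, "cpu_ms": 98, "mem_mb": 129},
    {"duration_ms": 118, "cpu_ms": 96, "mem_mb": 127},
]
fp = build_fingerprint(history)
print(violates_sla({"duration_ms": 480, "cpu_ms": 97, "mem_mb": 128}, fp))  # True
print(violates_sla({"duration_ms": 124, "cpu_ms": 99, "mem_mb": 129}, fp))  # False
```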

    The interpretation of Islam and nationalism by the elite through the English language media in Pakistan.

    The media is constructed and interpreted through what people 'know'. That knowledge is, for the most part, created through day to day experiences. In Pakistan, Islam and nationalism are two components of this social knowledge which are intrinsically tied to the experiences of the Pakistani people. Censorship and selection are means through which this knowledge is articulated and interpreted. General conceptions of partially shared large scale bodies of knowledge and ideas reinforce, and are reinforced by, the general medium of mass communication: the print and electronic media. Focusing on the government, media institutions and Pakistani elites, I describe and analyse the different, sometimes conflicting, interpretations of Islam and Pakistani nationalism manifest in and through media productions presented in Pakistan. The media means many things, not least of which is power. It is the media as a source of power that is so frequently controlled, directed and manipulated. The terminology may be slightly different according to the context within which one is talking - propaganda, selection, etc. - but ultimately it comes down to the same thing - censorship. Each of the three groups - government, media institutions and Pakistani elites - has the power to interpret and censor media content, and consideration must be taken of each of the other power holders, consequently restricting the power of each group in relation to the other two. The processes of this manipulation and their consequences form the major themes of this thesis.

    A productive response to legacy system petrification

    Requirements change. The requirements of a legacy information system change, often in unanticipated ways, and at a more rapid pace than the rate at which the information system itself can be evolved to support them. The capabilities of a legacy system progressively fall further and further behind their evolving requirements, in a degrading process termed petrification. As systems petrify, they deliver diminishing business value, hamper business effectiveness, and drain organisational resources. To address legacy systems, the first challenge is to understand how to shed their resistance to tracking requirements change. The second challenge is to ensure that a newly adaptable system never again petrifies into a change-resistant legacy system. This thesis addresses both challenges. The approach outlined herein is underpinned by an agile migration process - termed Productive Migration - that homes in upon the specific causes of petrification within each particular legacy system and provides guidance upon how to address them. That guidance comes in part from a personalised catalogue of petrifying patterns, which capture recurring themes underlying petrification. These steer us to the problems actually present in a given legacy system, and lead us to suitable antidote productive patterns via which we can deal with those problems one by one. To prevent newly adaptable systems from again degrading into legacy systems, we appeal to a follow-on process, termed Productive Evolution, which embraces and keeps pace with change rather than resisting and falling behind it. Productive Evolution teaches us to be vigilant against signs of system petrification and helps us to nip them in the bud. The aim is to nurture systems that remain supportive of the business, that are adaptable in step with ongoing requirements change, and that continue to retain their value as significant business assets.

    Industry 4.0: product digital twins for remanufacturing decision-making

    Currently, there is a desire to reduce natural resource consumption and expand circular business principles, whilst Industry 4.0 (I4.0) is regarded as the evolutionary and potentially disruptive movement of technology, automation, digitalisation, and data manipulation into the industrial sector. The remanufacturing industry is recognised as being vital to the circular economy (CE) as it extends the in-use life of products, but its synergy with I4.0 has had little attention thus far. This thesis documents the first investigation into I4.0 in remanufacturing for a CE, contributing the design and demonstration of a model that optimises remanufacturing planning using data from different instances in a product’s life cycle. The initial aim of this work was to identify the I4.0 technology that would enhance stability in remanufacturing with a view to reducing resource consumption. As the project progressed, it narrowed to focus on the development of a product digital twin (DT) model to support data-driven decision making for operations planning. The model’s architecture was derived using a bottom-up approach where requirements were extracted from the identified complications in production planning and control that differentiate remanufacturing from manufacturing. Simultaneously, the benefits of enabling visibility of an asset’s through-life health were obtained using a DT as the modus operandi. A product simulator and DT prototype were designed to use Internet of Things (IoT) components, a neural network for remaining-life estimation, and a search algorithm for operational planning optimisation. The DT was iteratively developed using case studies to validate and examine the real opportunities that exist in deploying a business model that harnesses, and commodifies, early-life product data for end-of-life processing optimisation. Findings suggest that using intelligent programming networks and algorithms, a DT can enhance decision-making if it has visibility of the product and access to reliable remanufacturing process information, whilst existing IoT components provide rudimentary “smart” capabilities, but their integration is complex, and the durability of the systems over extended product life cycles needs to be further explored.
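
    To illustrate the kind of data-driven decision support described, the sketch below uses through-life usage data to estimate each component's remaining life and searches exhaustively for the cheapest feasible end-of-life operation per component. The component names, cost figures, and the simple remaining-life rule (standing in for the neural network) are hypothetical, not the model developed in the thesis.

```python
# Minimal sketch of using product digital-twin data to choose remanufacturing
# operations. Component names, costs, and the remaining-life rule are
# illustrative assumptions, not the model developed in the thesis.
from dataclasses import dataclass
from itertools import product

@dataclass
class Component:
    name: str
    usage_hours: float         # through-life usage recorded by the digital twin
    rated_life_hours: float    # design life
    reuse_cost: float          # inspect/clean and reuse as-is
    remanufacture_cost: float  # restore to as-new condition
    replace_cost: float        # fit a new part

def remaining_life_fraction(c: Component) -> float:
    """Crude remaining-life estimate from usage data (stand-in for a neural net)."""
    return max(0.0, 1.0 - c.usage_hours / c.rated_life_hours)

def plan(components: list[Component], min_life: float = 0.3) -> dict:
    """Exhaustive search for the cheapest feasible end-of-life decision per part."""
    options = ("reuse", "remanufacture", "replace")
    best_cost, best_plan = float("inf"), {}
    for choice in product(options, repeat=len(components)):
        cost, feasible = 0.0, True
        for c, op in zip(components, choice):
            if op == "reuse":
                # Reuse is only allowed if enough estimated life remains.
                if remaining_life_fraction(c) < min_life:
                    feasible = False
                    break
                cost += c.reuse_cost
            elif op == "remanufacture":
                cost += c.remanufacture_cost
            else:
                cost += c.replace_cost
        if feasible and cost < best_cost:
            best_cost = cost
            best_plan = dict(zip((c.name for c in components), choice))
    return {"plan": best_plan, "cost": best_cost}

# Example usage with made-up component data for one returned product core.
core = [
    Component("motor", 4_000, 10_000, 20.0, 60.0, 150.0),
    Component("bearing", 9_500, 10_000, 5.0, 15.0, 25.0),
    Component("housing", 4_000, 40_000, 10.0, 40.0, 120.0),
]
print(plan(core))
```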

    Developing automated meta-research approaches in the preclinical Alzheimer's disease literature

    Alzheimer’s disease is a devastating neurodegenerative disorder for which there is no cure. A crucial part of the drug development pipeline involves testing therapeutic interventions in animal disease models. However, promising findings in preclinical experiments have not translated into clinical trial success. Reproducibility has often been cited as a major issue affecting biomedical research, where experimental results in one laboratory cannot be replicated in another. By using meta-research (research on research) approaches such as systematic reviews, researchers aim to identify and summarise all available evidence relating to a specific research question. By conducting a meta-analysis, researchers can also combine the results from different experiments statistically to understand the overall effect of an intervention and to explore reasons for variations seen across different publications. Systematic reviews of the preclinical Alzheimer’s disease literature could inform decision making, encourage research improvement, and identify gaps in the literature to guide future research. However, due to the vast amount of potentially useful evidence from animal models of Alzheimer’s disease, it remains difficult to make sense of and utilise this data effectively. Systematic reviews are common practice within evidence-based medicine, yet their application to preclinical research is often limited by the time and resources required. In this thesis, I develop, build upon, and implement automated meta-research approaches to collect, curate, and evaluate the preclinical Alzheimer’s literature. I searched several biomedical databases to obtain all research relevant to Alzheimer’s disease. I developed a novel deduplication tool to automatically identify and remove duplicate publications identified across different databases with minimal human effort. I trained a crowd of reviewers to annotate a subset of the publications identified and used this data to train a machine learning algorithm to screen the remaining publications for relevance. I developed text-mining tools to extract model, intervention, and treatment information from publications, and I improved existing automated tools to extract reported measures to reduce the risk of bias. Using these tools, I created a categorised database of research in transgenic Alzheimer’s disease animal models and created a visual summary of this dataset on an interactive, openly accessible online platform. Using the techniques described, I also identified relevant publications within the categorised dataset to perform systematic reviews of two key outcomes of interest in transgenic Alzheimer’s disease models: (1) synaptic plasticity and transmission in hippocampal slices and (2) motor activity in the open field test. Over 400,000 publications were identified across biomedical research databases, with 230,203 unique publications. In a performance evaluation across different preclinical datasets, the automated deduplication tool I developed could identify over 97% of duplicate citations and had an error rate similar to that of human performance. When evaluated on a test set of publications, the machine learning classifier trained to identify relevant research in transgenic models was highly sensitive (capturing 96.5% of relevant publications) and excluded 87.8% of irrelevant publications. Tools to identify the model(s) and outcome measure(s) within the full text of publications may reduce the burden on reviewers and were found to be more sensitive than searching only the title and abstract of citations. Automated tools to assess risk-of-bias reporting were highly sensitive and have the potential to monitor research improvement over time. The final dataset of categorised Alzheimer’s disease research contained 22,375 publications, which were then visualised in the interactive web application. Within the application, users can see how many publications report measures to reduce the risk of bias and how many have been classified as using each transgenic model, testing each intervention, and measuring each outcome. Users can also filter to obtain curated lists of relevant research, allowing them to perform systematic reviews at an accelerated pace with reduced effort required to search across databases and a reduced number of publications to screen for relevance. Both systematic reviews and meta-analyses highlighted failures to report key methodological information within publications. Poor transparency of reporting limited the statistical power I had to understand the sources of between-study variation. However, some variables were found to explain a significant proportion of the heterogeneity. Transgenic animal model had a significant impact on results in both reviews. For certain open field test outcomes, the wall colour of the open field arena and the reporting of measures to reduce the risk of bias were found to impact results. For in vitro electrophysiology experiments measuring synaptic plasticity, several electrophysiology parameters, including the magnesium concentration of the recording solution, were found to explain a significant proportion of the heterogeneity. Automated meta-research approaches and curated web platforms summarising preclinical research have the potential to accelerate the conduct of systematic reviews and maximise the potential of existing evidence to inform translation.
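
    As a simple illustration of the deduplication step described above, the sketch below treats two citation records as duplicates if their DOIs match or their normalised titles are nearly identical. The record fields, similarity threshold, and example records are assumptions for illustration and do not represent the tool developed in the thesis.

```python
# Minimal sketch of deduplicating citations retrieved from several databases:
# exact DOI match first, then fuzzy title similarity. Thresholds and record
# fields are illustrative assumptions, not the thesis's deduplication tool.
import re
from difflib import SequenceMatcher

def normalise(title: str) -> str:
    """Lower-case a title and strip punctuation before comparison."""
    return re.sub(r"[^a-z0-9 ]+", "", title.lower()).strip()

def is_duplicate(a: dict, b: dict, threshold: float = 0.95) -> bool:
    """Treat two records as duplicates on matching DOIs or near-identical titles."""
    if a.get("doi") and a.get("doi") == b.get("doi"):
        return True
    ratio = SequenceMatcher(None, normalise(a["title"]), normalise(b["title"])).ratio()
    return ratio >= threshold

def deduplicate(records: list[dict]) -> list[dict]:
    """Keep the first occurrence of each publication across database exports."""
    unique: list[dict] = []
    for rec in records:
        if not any(is_duplicate(rec, kept) for kept in unique):
            unique.append(rec)
    return unique

# Example usage with made-up database-export-style records.
records = [
    {"title": "Memory deficits in APP/PS1 mice", "doi": "10.1000/abc"},
    {"title": "Memory Deficits in APP/PS1 Mice.", "doi": None},
    {"title": "Open field activity in 5xFAD mice", "doi": "10.1000/xyz"},
]
print(len(deduplicate(records)))  # 2
```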