9,494 research outputs found

    The Metaverse: Survey, Trends, Novel Pipeline Ecosystem & Future Directions

    The Metaverse offers a second world beyond reality, where boundaries are non-existent and possibilities are endless, through engagement and immersive experiences using virtual reality (VR) technology. Many disciplines can benefit from the advancement of the Metaverse when it is properly developed, including technology, gaming, education, art, and culture. Nevertheless, developing the Metaverse environment to its full potential is an ill-defined task that needs proper guidance and direction. Existing surveys focus only on specific aspects and disciplines of the Metaverse and lack a holistic view of the entire process. To this end, a more holistic, multi-disciplinary, in-depth review, oriented to both academia and industry, is required to provide a thorough study of the Metaverse development pipeline. To address these issues, we present in this survey a novel multi-layered pipeline ecosystem composed of (1) the Metaverse computing, networking, communications, and hardware infrastructure, (2) environment digitization, and (3) user interactions. For every layer, we discuss the components that detail the steps of its development. For each of these components, we also examine the impact of a set of enabling technologies and empowering domains (e.g., Artificial Intelligence, Security & Privacy, Blockchain, Business, Ethics, and Social) on its advancement. In addition, we explain the importance of these technologies in supporting decentralization, interoperability, user experiences, interactions, and monetization. Our study highlights the existing challenges for each component, followed by research directions and potential solutions. To the best of our knowledge, this survey is the most comprehensive to date and allows users, scholars, and entrepreneurs to gain an in-depth understanding of the Metaverse ecosystem and to identify their opportunities for contribution.

    Interview with Wolfgang Knauss

    An oral history in four sessions (September 2019–January 2020) with Wolfgang Knauss, von Kármán Professor of Aeronautics and Applied Mechanics, Emeritus. Born in Germany in 1933, he speaks about his early life and experiences under the Nazi regime, his teenage years in Siegen and Heidelberg during the Allied occupation, and his move to Pasadena, California, in 1954 under the sponsorship of a local minister and his family. He enrolled at Caltech as an undergraduate in 1957, commencing a more than half-century affiliation with the Institute and GALCIT (today the Graduate Aerospace Laboratories of Caltech). He recalls the roots of his interest in aeronautics, his PhD solid mechanics studies with his advisor, M. Williams, and the GALCIT environment in the late 1950s and 1960s at the dawn of the Space Age, including the impact of Sputnik and classes with NASA astronauts. He discusses his experimental and theoretical work on materials deformation, dynamic fracture, and crack propagation, including his solid-propellant fuels research for NASA and the US Army, wide-ranging programs with the US Navy, and his pioneering micromechanics investigations and work on the time-dependent fracture of polymers in the 1990s. He offers his perspective on GALCIT’s academic culture, its solid mechanics and fluid mechanics programs, and its evolving administrative directions over the course of five decades, as well as its impact and reputation both within and beyond Caltech. He describes his work with Caltech’s undergraduate admissions committee and his scientific collaborations with numerous graduate students and postdocs and shares his recollections of GALCIT and other Caltech colleagues, including C. Babcock, D. Coles, R.P. Feynman, Y.C. Fung, G. Neugebauer, G. Housner, D. Hudson, H. Liepmann, A. Klein, G. Ravichandran, A. Rosakis, A. Roshko, and E. Sechler. Six appendices contributed by Dr. Knauss, offering further insight into his life and career, also form part of this oral history and are cross-referenced in the main text.

    Defining Service Level Agreements in Serverless Computing

    The emergence of serverless computing has brought significant advancements to the delivery of computing resources to cloud users. With the abstraction of infrastructure, ecosystem, and execution environments, users can focus on their code while relying on the cloud provider to manage the abstracted layers. In addition, desirable features such as autoscaling and high availability became a provider’s responsibility and can be adopted by the user’s application at no extra overhead. Despite such advancements, significant challenges must be overcome as applications transition from monolithic stand-alone deployments to the ephemeral and stateless microservice model of serverless computing. These challenges pertain to the uniqueness of the conceptual and implementation models of serverless computing. One of the notable challenges is the complexity of defining Service Level Agreements (SLA) for serverless functions. As the serverless model shifts the administration of the resource, ecosystem, and execution layers to the provider, users become mere consumers of the provider’s abstracted platform with no insight into its performance. Suboptimal conditions of the abstracted layers are not visible to the end-user, who has no means to assess their performance. Thus, SLA in serverless computing must take into consideration the unique abstraction of its model. This work investigates SLA modeling for the executions of serverless functions and serverless chains. We highlight how serverless SLA fundamentally differs from earlier cloud delivery models. We then propose an approach to define SLA for serverless functions by utilizing resource utilization fingerprints of function executions, together with a method to assess whether executions adhere to that SLA. We evaluate the approach’s accuracy in detecting SLA violations for a broad range of serverless application categories. Our validation results illustrate high accuracy in detecting SLA violations resulting from resource contention and degradations of the provider’s ecosystem. We conclude by presenting the empirical validation of our proposed approach, which could detect Execution-SLA violations with accuracy of up to 99%.
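
    The fingerprinting mechanism is not detailed in this abstract; purely as an illustration of the general idea, the following minimal Python sketch (all metric names, the tolerance, and the sample numbers are assumptions, not taken from the work) builds a baseline fingerprint from reference executions and flags executions that drift too far from it.

        # Hypothetical sketch: fingerprint-based SLA check for serverless executions.
        # Metric names, the 3-sigma tolerance, and the sample numbers are assumptions.
        import statistics

        METRICS = ["cpu_ms", "mem_mb", "duration_ms"]

        def build_fingerprint(baseline_runs):
            """Summarise reference executions as per-metric mean and standard deviation."""
            return {m: (statistics.mean(r[m] for r in baseline_runs),
                        statistics.stdev(r[m] for r in baseline_runs))
                    for m in METRICS}

        def violates_sla(execution, fingerprint, k=3.0):
            """Flag an execution whose metrics drift more than k standard deviations
            away from the fingerprint of normal behaviour."""
            for metric, (mean, std) in fingerprint.items():
                if abs(execution[metric] - mean) > k * max(std, 1e-9):
                    return True
            return False

        baseline = [{"cpu_ms": 110, "mem_mb": 128, "duration_ms": 230},
                    {"cpu_ms": 105, "mem_mb": 130, "duration_ms": 225},
                    {"cpu_ms": 112, "mem_mb": 127, "duration_ms": 240}]
        fingerprint = build_fingerprint(baseline)
        print(violates_sla({"cpu_ms": 240, "mem_mb": 300, "duration_ms": 900}, fingerprint))  # True

    The published approach is evaluated across application categories and reports detection accuracy of up to 99%; the sketch above only conveys the general shape of a fingerprint-and-compare check.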

    Educating Sub-Saharan Africa: Assessing Mobile Application Use in a Higher Learning Engineering Programme

    In the institution where I teach, insufficient laboratory equipment for engineering education pushed students to learn via mobile phones or devices. Using mobile technologies to learn and practise is not the issue; the more important question lies in finding out where and how students use mobile tools for learning. Through the lens of Kearney et al.’s (2012) pedagogical model, using authenticity, personalisation, and collaboration as constructs, this case study adopts a mixed-methods approach to investigate the mobile learning activities of students and to find out their experiences of what works and what does not. Four questions are derived from the over-arching research question, ‘How do students studying at a University in Nigeria perceive mobile learning in electrical and electronic engineering education?’ The first three questions are answered using qualitative interview data analysed through thematic analysis. The fourth question investigates students’ collaborations on two mobile social networks using social network and message analysis. The study found how students’ mobile learning relates to the real-world practice of engineering, explained ways of adapting to and overcoming the mobile tools’ limitations, and characterised the collaborations that students naturally adopted when learning in mobile social networks. It found that mobile engineering learning can plausibly be located in an offline mobile zone. It also demonstrates that the effectiveness of mobile learning in the mobile social environment can be investigated by examining users’ interactions. The study shows how mobile learning personalisation that leads to impactful engineering learning can be achieved, shows how to manage most interface and technical challenges associated with mobile engineering learning, and provides a new guide for educators on where and how mobile learning can be harnessed. Finally, it reveals how engineering education can be successfully implemented through mobile tools.

    Substrate-specificity of the DNA-protein crosslink repair protease SPRTN


    How to Be a God

    When it comes to questions concerning the nature of Reality, Philosophers and Theologians have the answers. Philosophers have the answers that can’t be proven right. Theologians have the answers that can’t be proven wrong. Today’s designers of Massively-Multiplayer Online Role-Playing Games create realities for a living. They can’t spend centuries mulling over the issues: they have to face them head-on. Their practical experiences can indicate which theoretical proposals actually work in practice. That’s today’s designers. Tomorrow’s will have a whole new set of questions to answer. The designers of virtual worlds are the literal gods of those realities. Suppose Artificial Intelligence comes through and allows us to create non-player characters as smart as us. What are our responsibilities as gods? How should we, as gods, conduct ourselves? How should we be gods?

    Linguistic- and Acoustic-based Automatic Dementia Detection using Deep Learning Methods

    Dementia can affect a person's speech and language abilities, even in the early stages. Dementia is incurable, but early detection can enable treatment that slows its progression and helps maintain mental function. Therefore, early diagnosis of dementia is of great importance. However, current dementia detection procedures in clinical practice are expensive, invasive, and sometimes inaccurate. In comparison, computational tools based on the automatic analysis of spoken language have the potential to be applied as a cheap, easy-to-use, and objective clinical assistance tool for dementia detection. In recent years, several studies have shown promise in this area. However, most studies focus heavily on the machine learning aspects and, as a consequence, often lack sufficient incorporation of clinical knowledge. Many studies also concentrate on clinically less relevant tasks, such as the distinction between healthy controls (HC) and people with Alzheimer's disease (AD), which is relatively easy and therefore less interesting both in machine learning terms and for the clinical application. The studies in this thesis concentrate on automatically identifying signs of neurodegenerative dementia in the early stages and distinguishing them from other clinical diagnostic categories related to memory problems: functional memory disorder (FMD), mild cognitive impairment (MCI), and healthy controls (HC). A key focus when designing the proposed systems has been to better consider (and incorporate) currently used clinical knowledge and to bear in mind how these machine-learning-based systems could be translated for use in real clinical settings. Firstly, a state-of-the-art end-to-end system is constructed for extracting linguistic information from automatically transcribed spontaneous speech. The system's architecture is based on hierarchical principles, mimicking those used in clinical practice, where information at word, sentence, and paragraph level is used when extracting information for diagnosis. Secondly, hand-crafted features are designed based on clinical knowledge of the importance of pausing and rhythm. These are successfully combined with features extracted from the end-to-end system. Thirdly, different classification tasks are explored, each set up to represent the types of diagnostic decision-making that are relevant in clinical practice. Finally, experiments are conducted to explore how to better deal with the known problem that the effects of age and of cognitive decline on speech and language confound and overlap. A multi-task system is constructed that takes age into account while predicting cognitive decline. The studies use the publicly available DementiaBank dataset as well as the IVA dataset, which has been collected by our collaborators at the Royal Hallamshire Hospital, UK. In conclusion, this thesis proposes multiple methods of using speech and language information for dementia detection with state-of-the-art deep learning technologies, confirming the automatic system's potential for dementia detection.
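
    The abstract does not specify the network architecture; as a hedged illustration of the multi-task idea only (layer sizes, feature dimensionality, and the loss weighting below are assumptions), a shared encoder can feed both a diagnostic classifier and an auxiliary age regressor so that age is modelled explicitly rather than left as a confound.

        # Hypothetical multi-task sketch: diagnosis classification plus age regression
        # over a shared encoder. All dimensions and the 0.1 loss weight are assumptions.
        import torch
        import torch.nn as nn

        class MultiTaskModel(nn.Module):
            def __init__(self, n_features=128, n_classes=3):
                super().__init__()
                self.encoder = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(),
                                             nn.Linear(64, 32), nn.ReLU())
                self.diagnosis_head = nn.Linear(32, n_classes)  # e.g. FMD / MCI / HC
                self.age_head = nn.Linear(32, 1)                # auxiliary age prediction

            def forward(self, x):
                h = self.encoder(x)
                return self.diagnosis_head(h), self.age_head(h).squeeze(-1)

        model = MultiTaskModel()
        features = torch.randn(8, 128)              # batch of speech/language features
        labels = torch.randint(0, 3, (8,))          # diagnostic categories
        ages = torch.rand(8) * 40 + 50              # ages in years
        logits, predicted_age = model(features)
        loss = nn.CrossEntropyLoss()(logits, labels) + 0.1 * nn.MSELoss()(predicted_age, ages)
        loss.backward()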

    Exploring Blockchain Adoption in Supply Chains: Opportunities and Challenges

    Acquisition Management / Grant technical report. Acquisition Research Program Sponsored Report Series; Sponsored Acquisition Research & Technical Reports. In modern supply chains, acquisition often occurs with the involvement of a network of organizations. The resilience, efficiency, and effectiveness of supply networks are crucial for the viability of acquisition. Disruptions in the supply chain require adequate communication infrastructure to ensure resilience. However, supply networks do not have a shared information technology infrastructure that ensures effective communication. Therefore, decision-makers seek new methodologies for resilient supply chain management. Blockchain technology offers new decentralization and service delegation methods that can transform supply chains and result in more flexible, efficient, and effective supply chains. This report presents a framework for the application of Blockchain technology in supply chain management to improve resilience. In the first part of this study, we discuss the limitations and challenges of the supply chain system that can be addressed by integrating Blockchain technology. In the second part, the report provides a comprehensive Blockchain-based supply chain network management framework. The application of the proposed framework is demonstrated using modeling and simulation. The differences between the simulation scenarios can provide guidance for decision-makers who consider using the developed framework during the acquisition process. Approved for public release; distribution is unlimited.
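
    The report's framework itself is not reproduced in this abstract; as a loose illustration of the underlying mechanism only, the hash-chained ledger below (event fields and hashing choices are invented for this sketch) shows the tamper-evidence property that makes Blockchain attractive for shared supply chain records.

        # Hypothetical sketch: a minimal hash-chained ledger of supply chain events.
        # It illustrates tamper evidence only; it is not the report's framework.
        import hashlib
        import json
        import time

        def block_hash(block):
            """Hash the block's payload deterministically."""
            payload = {k: block[k] for k in ("event", "prev_hash", "timestamp")}
            return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

        def make_block(event, prev_hash):
            """Bundle a supply chain event with the hash of the previous block."""
            block = {"event": event, "prev_hash": prev_hash, "timestamp": time.time()}
            block["hash"] = block_hash(block)
            return block

        def verify(chain):
            """Recompute every hash and check the links; any altered record breaks the chain."""
            for i, block in enumerate(chain):
                if block["hash"] != block_hash(block):
                    return False
                if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
                    return False
            return True

        chain = [make_block({"item": "part-42", "status": "shipped"}, prev_hash="0" * 64)]
        chain.append(make_block({"item": "part-42", "status": "received"}, chain[-1]["hash"]))
        print(verify(chain))  # True; silently editing chain[0]["event"] would make this False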

    Industry 4.0: product digital twins for remanufacturing decision-making

    Currently, there is a desire to reduce natural resource consumption and expand circular business principles, whilst Industry 4.0 (I4.0) is regarded as the evolutionary and potentially disruptive movement of technology, automation, digitalisation, and data manipulation into the industrial sector. The remanufacturing industry is recognised as being vital to the circular economy (CE) as it extends the in-use life of products, but its synergy with I4.0 has had little attention thus far. This thesis documents the first investigation into I4.0 in remanufacturing for a CE, contributing the design and demonstration of a model that optimises remanufacturing planning using data from different instances in a product’s life cycle. The initial aim of this work was to identify the I4.0 technology that would enhance stability in remanufacturing with a view to reducing resource consumption. As the project progressed, it narrowed to focus on the development of a product digital twin (DT) model to support data-driven decision making for operations planning. The model’s architecture was derived using a bottom-up approach in which requirements were extracted from the identified complications in production planning and control that differentiate remanufacturing from manufacturing. Simultaneously, the benefits of enabling visibility of an asset’s through-life health were obtained by using a DT as the modus operandi. A product simulator and a DT prototype were designed using Internet of Things (IoT) components, a neural network for remaining-life estimation, and a search algorithm for operational planning optimisation. The DT was iteratively developed using case studies to validate and examine the real opportunities that exist in deploying a business model that harnesses, and commodifies, early-life product data for end-of-life processing optimisation. Findings suggest that, using intelligent programming networks and algorithms, a DT can enhance decision-making if it has visibility of the product and access to reliable remanufacturing process information. Existing IoT components provide rudimentary “smart” capabilities, but their integration is complex, and the durability of such systems over extended product life cycles needs to be further explored.
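
    The thesis's own estimator and search algorithm are not given in this abstract; the toy sketch below (operation names, costs, life gains, and the brute-force search are all assumptions made for illustration) shows how a remaining-life estimate from a digital twin could feed a simple search over candidate remanufacturing plans.

        # Hypothetical sketch: choose the cheapest remanufacturing plan that restores a
        # product to a target life, given a remaining-life estimate from the DT.
        # All operations, costs, and life gains are invented for this illustration.
        from itertools import combinations

        OPERATIONS = {                     # operation: (cost, expected life restored in hours)
            "clean": (5, 200),
            "replace_bearings": (40, 2000),
            "rewind_motor": (120, 5000),
        }

        def best_plan(remaining_life_h, target_life_h, budget):
            """Brute-force search over every subset of operations, keeping the cheapest
            plan that lifts expected life to the target within budget."""
            best = None
            ops = list(OPERATIONS)
            for r in range(len(ops) + 1):
                for subset in combinations(ops, r):
                    cost = sum(OPERATIONS[o][0] for o in subset)
                    life = remaining_life_h + sum(OPERATIONS[o][1] for o in subset)
                    if life >= target_life_h and cost <= budget:
                        if best is None or cost < best[0]:
                            best = (cost, subset)
            return best

        # The remaining-life figure would come from the DT's neural network estimator;
        # here it is simply assumed.
        print(best_plan(remaining_life_h=800, target_life_h=3000, budget=150))
        # -> (45, ('clean', 'replace_bearings')) under these made-up numbers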
