8,424 research outputs found

    The Metaverse: Survey, Trends, Novel Pipeline Ecosystem & Future Directions

    Full text link
    The Metaverse offers a second world beyond reality, where boundaries are non-existent and possibilities are endless, through engagement and immersive experiences using virtual reality (VR) technology. Many disciplines can benefit from the advancement of the Metaverse when accurately developed, including technology, gaming, education, art, and culture. Nevertheless, developing the Metaverse environment to its full potential is an ambiguous task that needs proper guidance and direction. Existing surveys on the Metaverse focus only on a specific aspect or discipline of the Metaverse and lack a holistic view of the entire process. To this end, a more holistic, multi-disciplinary, in-depth, academic- and industry-oriented review is required to provide a thorough study of the Metaverse development pipeline. To address these issues, we present in this survey a novel multi-layered pipeline ecosystem composed of (1) the Metaverse computing, networking, communications, and hardware infrastructure, (2) environment digitization, and (3) user interactions. For every layer, we discuss the components that detail the steps of its development. For each of these components, we examine the impact of a set of enabling technologies and empowering domains (e.g., Artificial Intelligence, Security & Privacy, Blockchain, Business, Ethics, and Social) on its advancement. In addition, we explain the importance of these technologies in supporting decentralization, interoperability, user experiences, interactions, and monetization. Our study highlights the existing challenges for each component, followed by research directions and potential solutions. To the best of our knowledge, this survey is the most comprehensive to date and allows users, scholars, and entrepreneurs to gain an in-depth understanding of the Metaverse ecosystem and identify their opportunities for contribution

    A Proposed Meta-Reality Immersive Development Pipeline: Generative AI Models and Extended Reality (XR) Content for the Metaverse

    Get PDF
    The realization of an interoperable and scalable virtual platform, currently known as the “metaverse,” is inevitable, but many technological challenges need to be overcome first. With the metaverse still in a nascent phase, research currently indicates that building a new 3D social environment capable of interoperable avatars and digital transactions will represent most of the initial investment in time and capital. The return on investment, however, is worth the financial risk for firms like Meta, Google, and Apple. While the current virtual space of the metaverse is worth $6.30 billion, that is expected to grow to $84.09 billion by the end of 2028. But the creation of an entire alternate virtual universe of 3D avatars, objects, and otherworldly cityscapes calls for a new development pipeline and workflow. Existing 3D modeling and digital twin processes, already well-established in industry and gaming, will be ported to support the need to architect and furnish this new digital world. The current development pipeline, however, is cumbersome, expensive, and limited in output capacity. This paper proposes a new and innovative immersive development pipeline leveraging the recent advances in artificial intelligence (AI) for 3D model creation and optimization. The previous reliance on 3D modeling software to create assets and then import them into a game engine can be replaced with nearly instantaneous content creation with AI. While AI art generators like DALL-E 2 and DeepAI have been used for 2D asset creation, when combined with game engine technology, such as Unreal Engine 5 and virtualized geometry systems like Nanite, a new process for creating nearly unlimited content for immersive reality is possible. New processes and workflows, such as those proposed here, will revolutionize content creation and pave the way for Web 3.0, the metaverse and a truly 3D social environment

    Being Comes from Not-being: Open-vocabulary Text-to-Motion Generation with Wordless Training

    Full text link
    Text-to-motion generation is an emerging and challenging problem, which aims to synthesize motion with the same semantics as the input text. However, due to the lack of diverse labeled training data, most approaches either limit themselves to specific types of text annotations or require online optimization to cater to the texts during inference, at the cost of efficiency and stability. In this paper, we investigate offline open-vocabulary text-to-motion generation in a zero-shot learning manner that neither requires paired training data nor extra online optimization to adapt to unseen texts. Inspired by prompt learning in NLP, we pretrain a motion generator that learns to reconstruct the full motion from the masked motion. During inference, instead of changing the motion generator, our method reformulates the input text into a masked motion as the prompt for the motion generator to “reconstruct” the motion. In constructing the prompt, the unmasked poses of the prompt are synthesized by a text-to-pose generator. To supervise the optimization of the text-to-pose generator, we propose the first text-pose alignment model for measuring the alignment between texts and 3D poses. To prevent the pose generator from overfitting to limited training texts, we further propose a novel wordless training mechanism that optimizes the text-to-pose generator without any training texts. Comprehensive experimental results show that our method obtains a significant improvement over the baseline methods. The code is available at https://github.com/junfanlin/oohmg
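    The masked-motion prompting idea above can be sketched in a few lines. This is an illustrative toy, not the paper's actual pipeline: the unmasked keyframe poses stand in for the text-to-pose generator's output, and plain linear interpolation stands in for the pretrained motion generator that "reconstructs" the masked frames.

```python
# Toy sketch of masked-motion prompting (illustrative names, not the
# paper's API): keyframe poses occupy unmasked positions; a stand-in
# "motion generator" (linear interpolation) fills the masked frames.

def build_prompt(keyframes, length):
    """Place keyframe poses at their frame indices; mask everything else."""
    prompt = [None] * length
    for t, pose in keyframes.items():
        prompt[t] = pose
    return prompt

def reconstruct(prompt):
    """Stand-in for the pretrained motion generator: fill each masked
    frame by interpolating between the nearest unmasked poses."""
    known = [t for t, p in enumerate(prompt) if p is not None]
    motion = list(prompt)
    for t in range(len(prompt)):
        if motion[t] is None:
            left = max((k for k in known if k < t), default=known[0])
            right = min((k for k in known if k > t), default=known[-1])
            w = (t - left) / (right - left)
            motion[t] = [(1 - w) * a + w * b
                         for a, b in zip(prompt[left], prompt[right])]
    return motion

# Poses here are toy 2-D vectors; in practice they are full 3D body poses.
keyframes = {0: [0.0, 0.0], 4: [4.0, 8.0]}
motion = reconstruct(build_prompt(keyframes, 5))
```

    In the actual method the fill-in is learned from masked-motion pretraining, so reconstructed frames are plausible motion rather than straight-line blends.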

    Cost-effective non-destructive testing of biomedical components fabricated using additive manufacturing

    Get PDF
    Biocompatible titanium-alloys can be used to fabricate patient-specific medical components using additive manufacturing (AM). These novel components have the potential to improve clinical outcomes in various medical scenarios. However, AM introduces stability and repeatability concerns, which are potential roadblocks for its widespread use in the medical sector. Micro-CT imaging for non-destructive testing (NDT) is an effective solution for post-manufacturing quality control of these components. Unfortunately, current micro-CT NDT scanners require expensive infrastructure and hardware, which translates into prohibitively expensive routine NDT. Furthermore, the limited dynamic-range of these scanners can cause severe image artifacts that may compromise the diagnostic value of the non-destructive test. Finally, the cone-beam geometry of these scanners makes them susceptible to the adverse effects of scattered radiation, which is another source of artifacts in micro-CT imaging. In this work, we describe the design, fabrication, and implementation of a dedicated, cost-effective micro-CT scanner for NDT of AM-fabricated biomedical components. Our scanner reduces the limitations of costly image-based NDT by optimizing the scanner's geometry and the image acquisition hardware (i.e., X-ray source and detector). Additionally, we describe two novel techniques to reduce image artifacts caused by photon-starvation and scatter radiation in cone-beam micro-CT imaging. Our cost-effective scanner was designed to match the image requirements of medium-size titanium-alloy medical components. We optimized the image acquisition hardware by using an 80 kVp low-cost portable X-ray unit and developing a low-cost lens-coupled X-ray detector. Image artifacts caused by photon-starvation were reduced by implementing dual-exposure high-dynamic-range radiography.
For scatter mitigation, we describe the design, manufacturing, and testing of a large-area, highly-focused, two-dimensional, anti-scatter grid. Our results demonstrate that cost-effective NDT using low-cost equipment is feasible for medium-sized, titanium-alloy, AM-fabricated medical components. Our proposed high-dynamic-range strategy improved by 37% the penetration capabilities of an 80 kVp micro-CT imaging system for a total X-ray path length of 19.8 mm. Finally, our novel anti-scatter grid provided a 65% improvement in CT number accuracy and a 48% improvement in low-contrast visualization. Our proposed cost-effective scanner and artifact reduction strategies have the potential to improve patient care by accelerating the widespread use of patient-specific, bio-compatible, AM-manufactured, medical components
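    The dual-exposure high-dynamic-range idea can be illustrated with a generic HDR merge (a sketch under assumptions, not the authors' exact algorithm): wherever the long exposure clips at the detector's full-well value, substitute the short exposure rescaled by the exposure ratio, recovering signal in photon-starved, high-attenuation regions.

```python
# Illustrative dual-exposure HDR merge for radiography. The 16-bit
# saturation value and the per-pixel lists are assumptions for the demo.

SATURATION = 65535  # assumed 16-bit detector full well

def merge_hdr(short, long_, exposure_ratio):
    """Pixel-wise merge: trust the long exposure unless it clipped,
    otherwise use the short exposure scaled up by the exposure ratio."""
    merged = []
    for s, l in zip(short, long_):
        if l >= SATURATION:                    # long exposure clipped
            merged.append(s * exposure_ratio)  # rescaled short exposure
        else:
            merged.append(float(l))
    return merged

short = [100, 8000, 20000]
long_ = [400, 32000, 65535]   # last pixel saturated in the long exposure
hdr = merge_hdr(short, long_, exposure_ratio=4.0)
```

    The merged pixel values now extend past the detector's native range, which is what improves penetration through long X-ray path lengths.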

    Steering is initiated based on error accumulation.

    Get PDF
    Vehicle control by humans is possible because the central nervous system is capable of using visual information to produce complex sensorimotor actions. Drivers must monitor errors and initiate steering corrections of appropriate magnitude and timing to maintain a safe lane position. The perceptual mechanisms determining how a driver processes visual information and initiates steering corrections remain unclear. Previous research suggests 2 potential alternative mechanisms for responding to errors: (a) perceptual evidence (error) satisficing fixed constant thresholds (Threshold), or (b) the integration of perceptual evidence over time (Accumulator). To distinguish between these mechanisms, an experiment was conducted using a computer-generated steering correction paradigm. Drivers (N = 20) steered toward an intermittently appearing “road-line” that varied in position and orientation with respect to the driver’s position and trajectory. One key prediction from a Threshold framework is a fixed absolute error response across conditions regardless of the rate of error development, whereas the Accumulator framework predicts that drivers would respond to larger absolute errors when the error signal develops at a faster rate. Results were consistent with an Accumulator framework; thus we propose that models of steering should integrate perceived control error over time in order to accurately capture human perceptual performance
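    The contrast between the two candidate mechanisms can be simulated directly (toy parameters, not fitted to the study's data): with error growing linearly at rate r, a Threshold model responds at a fixed absolute error regardless of r, while an Accumulator model, which integrates error over time, responds at a larger absolute error when the error develops faster.

```python
# Toy simulation of Threshold vs Accumulator steering-response models.
# Error grows linearly at `rate`; the model responds when its evidence
# variable (raw error, or time-integrated error) crosses `threshold`.

DT = 0.01  # simulation time step, s (assumed)

def error_at_response(rate, mechanism, threshold=1.0):
    t, accumulated = 0.0, 0.0
    while True:
        t += DT
        error = rate * t                 # linearly developing error
        accumulated += error * DT        # time-integral of error
        evidence = error if mechanism == "threshold" else accumulated
        if evidence >= threshold:
            return error                 # absolute error at response

slow = error_at_response(0.5, "accumulator")
fast = error_at_response(2.0, "accumulator")
thr_slow = error_at_response(0.5, "threshold")
thr_fast = error_at_response(2.0, "threshold")
```

    Analytically, the accumulator crosses when r·t²/2 = threshold, so the error at response is √(2·r·threshold) and grows with rate r; the threshold model's response error is constant. This rate-dependence is the signature the experiment used to discriminate the two accounts.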

    Optimising acoustic cavitation for industrial application

    Get PDF
    The ultrasonic horn is one of the most commonly used acoustic devices in laboratories and industry. For its efficient application to cavitation-mediated processes, the cavitation generated at its tip as a function of tip-vibration amplitude needs to be studied in detail. High-speed imaging and acoustic detection are used to investigate the cavitation generated at the tip of an ultrasonic horn, operating at a fundamental frequency, f0, of 20 kHz. Tip-vibration amplitudes are sampled at fine increments across the range of input powers available. The primary bubble cluster under the tip is found to undergo subharmonic periodic collapse, with concurrent shock wave emission, at frequencies of f0/m, with m increasing through integer values with increasing tip-vibration amplitude. The contribution of periodic shock waves to the noise spectra of the acoustic emissions is confirmed. Transitional input powers, for which the value of m is indistinct and shock wave emission irregular and inconsistent, are identified through the Vrms of the acoustic detector output. For cavitation applications mediated by bubble collapse, sonication at transitional powers may lead to inefficient processing. The ultrasonic horn is also deployed to investigate the role of shock waves in the fragmentation of intermetallic crystals, nominally for ultrasonic treatment of aluminium melt, and in a novel two-horn configuration for potential cavitation enhancement effects. An experiment investigating nitrogen fixation via cavitation generated by focused ultrasound exposures is also described. Vrms from the acoustic detector is again used to quantify the acoustic emissions for comparison to the sonochemical nitrite yield and for optimisation of sonication protocols at constant input energy. The findings revealed that the acoustic cavitation could be enhanced at constant input energy through optimisation of the pulse duration and pulse interval.
Anomalous results may be due to inadequate assessment of the nitrate generated. The studies presented in this thesis have illustrated means of improving the cavitation efficiency of the acoustic devices used, which may be important to some selected industrial processes
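    The two quantities tracked above are straightforward to compute; this sketch (illustrative only, not the thesis code) lists the expected subharmonic collapse frequencies f0/m for a 20 kHz horn and the RMS of an acoustic-detector voltage trace.

```python
# Subharmonic series f0/m and Vrms of a detector trace (toy example).
import math

F0 = 20_000.0  # horn fundamental frequency, Hz

def subharmonics(m_max):
    """Expected periodic-collapse frequencies f0/m for m = 1..m_max."""
    return [F0 / m for m in range(1, m_max + 1)]

def vrms(samples):
    """Root-mean-square of a detector voltage trace."""
    return math.sqrt(sum(v * v for v in samples) / len(samples))

freqs = subharmonics(4)               # 20 kHz, 10 kHz, ~6.67 kHz, 5 kHz
level = vrms([1.0, -1.0, 1.0, -1.0])  # unit square wave
```

    In the study, a step of m (e.g. 10 kHz dropping to ~6.67 kHz) marks a new periodic-collapse regime, and irregular Vrms between such steps flags the transitional input powers.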

    DeviceD: Audience–dancer interaction via social media posts and wearable for haptic feedback

    Get PDF
    The performative installation DeviceD utilizes a network of systems to facilitate interaction between dancer, digital media, and audience. Central to the work is a wearable haptic feedback system able to wirelessly deliver vibrotactile stimuli, initiated by the audience through posts on the social media platform Twitter; the system searches for specific mentions, hashtags, and keywords, with positive results triggering patterns of haptic biofeedback across the wearable’s four actuator motors. The system acts as the intermediator between the audience’s online actions and the dancer receiving physical stimuli; the dancer interprets these biofeedback signals according to Laban’s Effort movement qualities, with the interpretation informing different states of habitual and conscious choreographic performance. In this article, the authors reflect on their collaborative process while developing DeviceD alongside a multidisciplinary team of technologists, detailing their experience of refining the technology and methodology behind the work while presenting it in three different settings. A literature review is used to situate the work among contemporary research on interaction over the internet and haptics in performance practice; haptic feedback devices have been widely used within artistic work for the past 25 years, with more recent practice and research outputs suggesting an increased interest in haptics in the field of dance research. The authors detail both the technological and performative elements making up the work, and provide a transparent evaluation of the system, as a means of providing a foundation for further research on wearable haptic devices
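    The trigger logic described can be sketched as a simple matching step (a hypothetical reconstruction: the actual system polled the Twitter API, and the tracked terms and four-motor pattern format here are invented for illustration).

```python
# Hypothetical DeviceD-style trigger: a tracked mention/hashtag/keyword
# in an incoming post selects a vibrotactile pattern for the wearable's
# four actuator motors. Terms and patterns below are illustrative.

TRACKED = {"#deviced", "@deviced_dance", "haptic"}

# Pattern = per-motor intensity (0-1) for the four actuators (assumed format).
PATTERNS = {
    "#deviced":       [1.0, 0.0, 1.0, 0.0],
    "@deviced_dance": [0.5, 0.5, 0.5, 0.5],
    "haptic":         [0.0, 1.0, 0.0, 1.0],
}

def pattern_for(post):
    """Return the haptic pattern for the first tracked term in a post,
    or None when the post contains no tracked term."""
    for token in post.lower().split():
        if token in TRACKED:
            return PATTERNS[token]
    return None

p = pattern_for("Loving the show! #DeviceD")
```

    In performance, a non-None pattern would be sent wirelessly to the wearable, where the dancer interprets the resulting stimuli through Laban's Effort qualities.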

    Response of saline reservoir to different phase CO₂-brine: experimental tests and image-based modelling

    Get PDF
    Geological CO₂ storage in saline rocks is a promising method for meeting the target of net zero emission and minimizing the anthropogenic CO₂ emitted into the earth’s atmosphere. Storage of CO₂ in saline rocks triggers CO₂-brine-rock interaction that alters the properties of the rock. These properties are crucial for the integrity and efficiency of the storage process. Changes in the properties of the reservoir rocks due to CO₂-brine-rock interaction must be well predicted, as some changes can reduce the storage integrity of the reservoir. Considering the thermodynamics, phase behavior, solubility of CO₂ in brine, and the variable pressure-temperature conditions of the reservoir, there will be undissolved CO₂ in a CO₂ storage reservoir alongside the brine for a long time, and there is a potential for phase evolution of the undissolved CO₂. The phase of CO₂ influences the CO₂-brine-rock interaction, and each phase CO₂-brine combination has a unique effect on the properties of the reservoir rocks. Therefore, this study evaluates the effect of four different phase CO₂-brine reservoir states on the properties of reservoir rocks using experimental and image-based approaches. Samples were saturated with the different phase CO₂-brine fluids, then subjected to reservoir conditions in a triaxial compression test. The representative element volume (REV)/representative element area (REA) for the rock samples was determined from processed digital images, and rock properties were evaluated using digital rock physics and rock image analysis techniques. This research has evaluated the effect of different phase CO₂-brine on deformation rate and deformation behavior, bulk modulus, compressibility, strength, and stiffness, as well as porosity and permeability of sample reservoir rocks. Changes in pore geometry properties, porosity, and permeability of the rocks in CO₂ storage conditions with different phase CO₂-brine have been evaluated using digital rock physics techniques.
Microscopic rock image analysis has been applied to provide evidence of changes in micro-fabric, the topology of minerals, and the elemental composition of minerals in saline rocks resulting from the different phase CO₂-brine states that can exist in a saline CO₂ storage reservoir. The properties of the reservoir most affected by the supercritical (scCO₂-brine) state include secondary fatigue rate, bulk modulus, shear strength, change in the topology of minerals after saturation, and change in shape and flatness of pore surfaces. The properties most affected by the gaseous (gCO₂-brine) state include primary fatigue rate, change in permeability due to stress, change in porosity due to stress, and change in topology of minerals due to stress. For all samples, the roundness and smoothness of grains as well as the smoothness of pores increased after compression, while the roundness of pores decreased. Change in elemental composition of rock minerals in CO₂-brine-rock interaction was seen to depend on the reactivity of the mineral with CO₂ and/or brine, and the presence of brine accelerates such change. Carbon, oxygen, and silicon can be used as index elements for elemental changes in a CO₂-brine-rock system. The result of this work can be applied to predicting the effect the different possible phases of CO₂ will have on the deformation, geomechanics indices, and storage integrity of giant CO₂ storage fields such as Sleipner, In Salah, etc
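    The most basic digital-rock-physics measurement mentioned above, porosity from a segmented micro-CT image, reduces to a voxel count. This is a generic sketch, not the thesis code, using a toy binary slice where 1 marks pore and 0 marks mineral grain.

```python
# Porosity of a segmented 2-D slice: fraction of pore voxels (toy data).

def porosity(binary_image):
    """Pore fraction of a segmented slice given as rows of 0/1 values."""
    pore = sum(sum(row) for row in binary_image)
    total = sum(len(row) for row in binary_image)
    return pore / total

slice_ = [
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 1],
]
phi = porosity(slice_)   # 4 pore voxels out of 16
```

    In an REV/REA analysis, this quantity would be computed over progressively larger sub-images until it stabilizes, which defines the representative element size used for the property evaluations.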

    When does SGD favor flat minima? A quantitative characterization via linear stability

    Full text link
    The observation that stochastic gradient descent (SGD) favors flat minima has played a fundamental role in understanding implicit regularization of SGD and guiding the tuning of hyperparameters. In this paper, we provide a quantitative explanation of this striking phenomenon by relating the particular noise structure of SGD to its linear stability (Wu et al., 2018). Specifically, we consider training over-parameterized models with square loss. We prove that if a global minimum θ* is linearly stable for SGD, then it must satisfy ‖H(θ*)‖_F ≤ O(√B/η), where ‖H(θ*)‖_F, B, and η denote the Frobenius norm of the Hessian at θ*, the batch size, and the learning rate, respectively. Otherwise, SGD will escape from that minimum exponentially fast. Hence, for minima accessible to SGD, the flatness -- as measured by the Frobenius norm of the Hessian -- is bounded independently of the model size and sample size. The key to obtaining these results is exploiting the particular geometry awareness of SGD noise: 1) the noise magnitude is proportional to the loss value; 2) the noise directions concentrate in the sharp directions of the local landscape. This property of SGD noise provably holds for linear networks and random feature models (RFMs) and is empirically verified for nonlinear networks. Moreover, the validity and practical relevance of our theoretical findings are justified by extensive numerical experiments
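    The escape mechanism behind such bounds has a familiar full-batch analogue that is easy to verify numerically (a simplification, not the paper's SGD analysis): for the 1-D quadratic loss L(x) = h·x²/2, the gradient-descent map x ← (1 - ηh)x is stable only when h ≤ 2/η, so minima sharper than a learning-rate-dependent bound are escaped.

```python
# Full-batch stability toy: GD on L(x) = h*x^2/2 converges to the
# minimum at 0 iff |1 - eta*h| <= 1, i.e. h <= 2/eta; sharper minima
# (larger Hessian h) are escaped. Values below are illustrative.

def escapes(h, eta, x0=1e-3, steps=100):
    """True if GD moves away from the minimum at 0 instead of converging."""
    x = x0
    for _ in range(steps):
        x = x - eta * h * x   # gradient step on L(x) = h*x^2/2
    return abs(x) > abs(x0)

eta = 0.1
flat_escapes = escapes(h=5.0, eta=eta)    # h = 5  <  2/eta = 20: stable
sharp_escapes = escapes(h=30.0, eta=eta)  # h = 30 >  2/eta: diverges
```

    The paper's contribution is the SGD version of this picture: with mini-batch noise whose magnitude tracks the loss and whose directions align with sharp directions, the stability condition tightens to a bound on the full Hessian Frobenius norm scaling as √B/η.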

    A comprehensive review on laser powder bed fusion of steels: processing, microstructure, defects and control methods, mechanical properties, current challenges and future trends

    Get PDF
    The Laser Powder Bed Fusion (LPBF) process is regarded as the most versatile metal additive manufacturing process, which has been proven to manufacture near-net-shape parts of up to 99.9% relative density, with geometric complexity and high performance, at reduced lead times. Steels and iron-based alloys are the most predominant engineering materials used for structural and sub-structural applications. The availability of steels in more than 3500 grades, with their wide range of properties including high strength, corrosion resistance, good ductility, low cost, and recyclability, has put them at the forefront of metallic materials. However, LPBF processing of steels and iron-based alloys has not been completely established in industrial applications due to: (i) the limited insight available regarding processing conditions, (ii) the lack of specific materials standards, and (iii) inadequate knowledge for correlating process parameters, along with other technical obstacles such as dimensional accuracy from design model to actual component, part variability, limited feedstock materials, and manual post-processing. Continued efforts have been made to address these issues. This review aims to provide an overview of steels and iron-based alloys used in the LPBF process by summarizing their key process parameters, describing the thermophysical phenomena that are strongly linked to phase transformation and microstructure evolution during solidification, and highlighting metallurgical defects and their potential control methods, along with the impact of various post-process treatments; all of these have a direct impact on mechanical performance. Finally, a summary of LPBF-processed steels and iron-based alloys with functional properties and their application perspectives is presented. This review can provide a foundation of knowledge on the LPBF process of steels by identifying missing information in the existing literature