
    The Metaverse: Survey, Trends, Novel Pipeline Ecosystem & Future Directions

    The Metaverse offers a second world beyond reality, where boundaries are non-existent and possibilities are endless, through engagement and immersive experiences enabled by virtual reality (VR) technology. Many disciplines can benefit from the advancement of the Metaverse when properly developed, including technology, gaming, education, art, and culture. Nevertheless, developing the Metaverse environment to its full potential is an ill-defined task that needs proper guidance and direction. Existing surveys on the Metaverse focus only on a specific aspect or discipline of the Metaverse and lack a holistic view of the entire process. To this end, a more holistic, multi-disciplinary, in-depth, academic and industry-oriented review is required to provide a thorough study of the Metaverse development pipeline. To address these issues, we present in this survey a novel multi-layered pipeline ecosystem composed of (1) the Metaverse computing, networking, communications and hardware infrastructure, (2) environment digitization, and (3) user interactions. For every layer, we discuss the components that detail the steps of its development. For each of these components, we examine the impact of a set of enabling technologies and empowering domains (e.g., Artificial Intelligence, Security & Privacy, Blockchain, Business, Ethics, and Social) on its advancement. In addition, we explain the importance of these technologies in supporting decentralization, interoperability, user experiences, interactions, and monetization. Our study highlights the existing challenges for each component, followed by research directions and potential solutions. To the best of our knowledge, this survey is the most comprehensive to date, allowing users, scholars, and entrepreneurs to gain an in-depth understanding of the Metaverse ecosystem and identify their opportunities for contribution.

    Computational approach to the Schottky problem

    We present a computational approach to the classical Schottky problem based on Fay's trisecant identity for genus $g \geq 4$. For a given Riemann matrix $\mathbb{B} \in \mathbb{H}^{g}$, the Fay identity establishes linear dependence of secants in the Kummer variety if and only if the Riemann matrix corresponds to a Jacobian variety, as shown by Krichever. The theta functions in terms of which these secants are expressed depend on the Abel maps of four arbitrary points on a Riemann surface. However, there is no concept of an Abel map for general $\mathbb{B} \in \mathbb{H}^{g}$. To establish linear dependence of the secants, four components of the vectors entering the theta functions can be chosen freely; the remaining components are determined by a Newton iteration that minimizes the residual of the Fay identity. Krichever's theorem assures that if this residual vanishes within the finite numerical precision for a generic choice of input data, then the Riemann matrix is, to this numerical precision, the period matrix of a Riemann surface. The algorithm is compared in genus 4 for some examples to the Schottky-Igusa modular form, known to characterize the Jacobi locus in this case; the same residuals are achieved by the Schottky-Igusa form and by the approach based on the Fay identity. In genera 5, 6 and 7, we discuss known examples of Riemann matrices, and perturbations thereof, for which the Fay identity is not satisfied.
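    The free-component search described above is, at its core, a Newton iteration that drives a residual vector to zero. Below is a minimal, illustrative sketch of such an iteration with a finite-difference Jacobian; the `residual` callable is a hypothetical stand-in for evaluating the theta functions and the Fay identity, which is not shown.

```python
import numpy as np

def newton_minimize(residual, x0, tol=1e-12, max_iter=50, h=1e-7):
    """Drive residual(x) -> 0 over the free components x by Newton steps.

    The Jacobian is approximated by forward differences; each step solves
    the linearized system in the least-squares sense (Gauss-Newton when
    the residual vector is longer than x).
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = np.atleast_1d(residual(x))
        if np.linalg.norm(r) < tol:        # identity satisfied to precision
            break
        J = np.empty((r.size, x.size))
        for j in range(x.size):            # forward-difference Jacobian
            dx = np.zeros_like(x)
            dx[j] = h
            J[:, j] = (np.atleast_1d(residual(x + dx)) - r) / h
        x = x - np.linalg.lstsq(J, r, rcond=None)[0]
    return x, np.linalg.norm(np.atleast_1d(residual(x)))
```

    The returned residual norm is the quantity that, via Krichever's theorem, is compared against the working precision to decide whether the given Riemann matrix is numerically a period matrix.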

    Bayesian Optimization with Conformal Prediction Sets

    Bayesian optimization is a coherent, ubiquitous approach to decision-making under uncertainty, with applications including multi-armed bandits, active learning, and black-box optimization. Bayesian optimization selects decisions (i.e. objective function queries) with maximal expected utility with respect to the posterior distribution of a Bayesian model, which quantifies reducible, epistemic uncertainty about query outcomes. In practice, subjectively implausible outcomes can occur regularly for two reasons: 1) model misspecification and 2) covariate shift. Conformal prediction is an uncertainty quantification method with coverage guarantees even for misspecified models, and with a simple mechanism to correct for covariate shift. We propose conformal Bayesian optimization, which directs queries towards regions of the search space where the model predictions have guaranteed validity, and investigate its behavior on a suite of black-box optimization tasks and tabular ranking tasks. In many cases we find that query coverage can be significantly improved without harming sample efficiency.
    Comment: For code, see https://www.github.com/samuelstanton/conformal-bayesopt.git
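    For reference, the coverage guarantee invoked above can be obtained with a split conformal construction, as in the minimal sketch below. It assumes exchangeable data and omits the reweighting for covariate shift that the paper relies on; `model` is any fitted point predictor, standing in for the Bayesian model's posterior mean.

```python
import numpy as np

def split_conformal_interval(model, X_cal, y_cal, X_test, alpha=0.1):
    """Prediction intervals with >= 1 - alpha marginal coverage,
    valid even when `model` is misspecified."""
    # Nonconformity scores on held-out calibration data.
    scores = np.abs(y_cal - model.predict(X_cal))
    n = len(scores)
    # Finite-sample-corrected quantile of the scores.
    q = np.quantile(scores, min(1.0, np.ceil((n + 1) * (1 - alpha)) / n))
    mu = model.predict(X_test)
    return mu - q, mu + q      # lower/upper bounds of the prediction sets
```

    Conformal Bayesian optimization then, roughly speaking, evaluates utilities over outcomes inside such sets, steering queries toward regions of the search space where the sets remain valid.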

    Patching Weak Convolutional Neural Network Models through Modularization and Composition

    Despite great success in many applications, deep neural networks are not always robust in practice. For instance, a convolutional neural network (CNN) model for classification tasks often performs unsatisfactorily in classifying some particular classes of objects. In this work, we are concerned with patching the weak part of a CNN model instead of improving it through costly retraining of the entire model. Inspired by the fundamental concepts of modularization and composition in software engineering, we propose a compressed modularization approach, CNNSplitter, which decomposes a strong CNN model for N-class classification into N smaller CNN modules. Each module is a sub-model containing a part of the convolution kernels of the strong model. To patch a weak CNN model that performs unsatisfactorily on a target class (TC), we compose the weak CNN model with the corresponding module obtained from a strong CNN model. The ability of the weak CNN model to recognize the TC can thus be improved through patching. Moreover, its ability to recognize non-TCs is also improved, as samples misclassified as the TC can be correctly classified as non-TCs. Experimental results with two representative CNNs on three widely used datasets show that the averaged improvements on the TC in terms of precision and recall are 12.54% and 2.14%, respectively. Moreover, patching improves the accuracy on non-TCs by 1.18%. The results demonstrate that CNNSplitter can patch a weak CNN model through modularization and composition, thus providing a new solution for developing robust CNN models.
    Comment: Accepted at ASE'22
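    To make the composition step concrete, here is a minimal PyTorch sketch. The interface is assumed for illustration: the module for the target class is taken to emit a single TC-membership logit that replaces the weak model's TC logit; CNNSplitter's actual composition mechanism may differ in detail.

```python
import torch

def patched_predict(weak_model, tc_module, x, tc_index):
    """Compose a weak N-class model with a TC module from a strong model.

    weak_model(x) -> (batch, N) class logits
    tc_module(x)  -> (batch, 1) TC-membership logit (assumed interface)
    """
    with torch.no_grad():
        logits = weak_model(x).clone()
        logits[:, tc_index] = tc_module(x).squeeze(-1)  # module overrides TC score
        return logits.argmax(dim=1)                     # patched predictions
```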

    Image classification over unknown and anomalous domains

    A longstanding goal in computer vision research is to develop methods that are simultaneously applicable to a broad range of prediction problems. In contrast to this, models often perform best when they are specialized to some task or data type. This thesis investigates the challenges of learning models that generalize well over multiple unknown or anomalous modes and domains in data, and presents new solutions for learning robustly in this setting. Initial investigations focus on normalization for distributions that contain multiple sources (e.g. images in different styles, such as cartoons or photos). Experiments demonstrate the extent to which existing modules, batch normalization in particular, struggle with such heterogeneous data, and a new solution is proposed that can better handle data from multiple visual modes, using differing sample statistics for each. While ideas to counter the overspecialization of models have been formulated in sub-disciplines of transfer learning, e.g. multi-domain and multi-task learning, these usually rely on the existence of meta-information, such as task or domain labels. Relaxing this assumption gives rise to a new transfer learning setting, called latent domain learning in this thesis, in which training and inference are carried out over data from multiple visual domains without domain-level annotations. Customized solutions are required for this, as the performance of standard models degrades: a new data augmentation technique that interpolates between latent domains in an unsupervised way is presented, alongside a dedicated module that sparsely accounts for hidden domains in data without requiring domain labels. In addition, the thesis studies the problem of classifying previously unseen or anomalous modes in data, a fundamental problem in one-class learning and anomaly detection in particular. While recent ideas have focused on developing self-supervised solutions for the one-class setting, this thesis formulates new methods based on transfer learning. Extensive experimental evidence demonstrates that a transfer-based perspective benefits new problems recently proposed in the anomaly detection literature, in particular challenging semantic detection tasks.
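    The per-mode normalization idea above (separate sample statistics for each visual mode) can be sketched minimally as below. The soft gating network that assigns inputs to modes is an illustrative assumption for this sketch, not the thesis's actual module.

```python
import torch
import torch.nn as nn

class ModeNorm(nn.Module):
    """Normalization with separate running statistics per latent visual mode.

    One BatchNorm branch per assumed mode; a soft gate computed from
    channel statistics mixes the branches, so no domain labels are needed.
    """
    def __init__(self, num_features, num_modes=2):
        super().__init__()
        self.norms = nn.ModuleList(
            [nn.BatchNorm2d(num_features) for _ in range(num_modes)])
        self.gate = nn.Linear(num_features, num_modes)

    def forward(self, x):                                        # x: (B, C, H, W)
        w = torch.softmax(self.gate(x.mean(dim=(2, 3))), dim=1)  # (B, K) mode weights
        branches = torch.stack([bn(x) for bn in self.norms], dim=1)  # (B, K, C, H, W)
        return (w[:, :, None, None, None] * branches).sum(dim=1)
```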

    Predictive Maintenance of Critical Equipment for Floating Liquefied Natural Gas Liquefaction Process

    Meeting global energy demand is a massive challenge, especially given the push towards sustainable and cleaner energy. Natural gas is viewed as a bridge fuel to renewable energy, and LNG, a processed form of natural gas, is the fastest-growing and cleanest fossil fuel. Recently, unprecedented increases in LNG demand have pushed its exploration and processing offshore as Floating LNG (FLNG). Offshore topside gas processing and liquefaction have been identified as one of the great challenges of FLNG. Maintaining topside liquefaction assets such as gas turbines is critical to the profitability, reliability, and availability of the process facilities. Given the shortcomings of the widely used reactive and time-based preventive maintenance approaches in meeting the reliability and availability requirements of oil and gas operators, this thesis presents a framework driven by AI-based learning approaches for predictive maintenance. The framework aims to leverage the value of condition-based maintenance to minimise failures and downtime of critical FLNG equipment (aeroderivative gas turbines). The study introduces gas turbine thermodynamics and the factors affecting gas turbine modelling, and investigates important modelling considerations such as modelling objectives, methods, and approaches. These provide the basis and mathematical background for developing a simulated gas turbine model. The behaviour of a simple-cycle heavy-duty gas turbine was simulated using thermodynamic laws and operational data based on Rowen's model; a Simulink model was created from experimental data to explore the transient behaviour of an industrial gas turbine. The results show the Simulink model's capability to capture the nonlinear dynamics of the gas turbine system, although its application to further condition monitoring studies is constrained by the lack of some relevant correlated features required by the model. AI-based models were found to perform well in predicting gas turbine failures; these capabilities were investigated in this thesis and validated using experimental data obtained from a gas turbine engine facility. Because the dynamic behaviour of gas turbines changes when they are exposed to different fuels, diagnostic AI models were developed to diagnose gas turbine engine failures associated with exposure to various fuel types. Principal Component Analysis (PCA) was harnessed to reduce the dimensionality of the dataset and extract informative features for diagnostic model development. Signal-processing techniques (time-domain, frequency-domain, and time-frequency-domain) were also used as feature extraction tools, adding significant correlations to the dataset and influencing the prediction results; signal processing played a vital role in extracting good features for the diagnostic models compared with PCA. The overall results obtained from both the PCA-based and signal-processing-based models demonstrated the capability of neural-network models to predict gas turbine failures. Further, a deep learning LSTM model was developed that extracts features directly from the time-series data and hence requires no separate feature extraction tool.
    The LSTM model achieved the highest performance and prediction accuracy, compared to both the PCA-based and the signal-processing-based models. In summary, this thesis concludes that although the gas turbine Simulink model could not be fully integrated into condition monitoring studies, data-driven models have demonstrated strong potential and excellent performance for gas turbine CBM diagnostics. The models developed in this thesis can be used for design and manufacturing purposes on gas turbines applied to FLNG, especially for condition monitoring and fault detection. The results provide valuable understanding and helpful guidance for researchers and practitioners implementing robust predictive maintenance models to enhance the reliability and availability of critical FLNG equipment.
    Funding: Petroleum Technology Development Fund (PTDF), Nigeria
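    As a minimal illustration of the LSTM approach described above (raw multivariate sensor windows in, failure class out, no hand-crafted features), a PyTorch sketch follows; the layer sizes and depth are illustrative assumptions, not the thesis's configuration.

```python
import torch
import torch.nn as nn

class TurbineLSTM(nn.Module):
    """Sequence classifier for fault diagnosis from raw sensor windows."""
    def __init__(self, n_sensors, n_classes, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_sensors, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):              # x: (batch, time, n_sensors)
        out, _ = self.lstm(x)          # features learned directly from the series
        return self.head(out[:, -1])   # classify from the final time step

# Usage on dummy data: 8 windows of 100 time steps from 12 sensors.
model = TurbineLSTM(n_sensors=12, n_classes=4)
logits = model(torch.randn(8, 100, 12))   # (8, 4) failure-class logits
```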

    Defining Service Level Agreements in Serverless Computing

    The emergence of serverless computing has brought significant advancements to the delivery of computing resources to cloud users. With the abstraction of infrastructure, ecosystem, and execution environments, users can focus on their code while relying on the cloud provider to manage the abstracted layers. In addition, desirable features such as autoscaling and high availability became a provider's responsibility and can be adopted by the user's application at no extra overhead. Despite such advancements, significant challenges must be overcome as applications transition from monolithic stand-alone deployments to the ephemeral and stateless microservice model of serverless computing. These challenges pertain to the uniqueness of the conceptual and implementation models of serverless computing. One notable challenge is the complexity of defining Service Level Agreements (SLAs) for serverless functions. As the serverless model shifts the administration of the resource, ecosystem, and execution layers to the provider, users become mere consumers of the provider's abstracted platform with no insight into its performance. Suboptimal conditions of the abstracted layers are not visible to the end user, who has no means to assess their performance. Thus, SLAs in serverless computing must take into consideration the unique abstraction of this model. This work investigates SLA modeling for the execution of serverless functions and serverless chains. We highlight how serverless SLAs fundamentally differ from those of earlier cloud delivery models. We then propose an approach to define SLAs for serverless functions by utilizing resource-utilization fingerprints of functions' executions, together with a method to assess whether executions adhere to that SLA. We evaluate the approach's accuracy in detecting SLA violations for a broad range of serverless application categories. Our validation results illustrate high accuracy in detecting SLA violations resulting from resource contention and degradations of the provider's ecosystem. We conclude by presenting the empirical validation of our proposed approach, which could detect Execution-SLA violations with accuracy up to 99%.
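    To illustrate the fingerprinting idea on assumed details: take a fingerprint to be the per-metric mean and standard deviation of resource utilization over executions observed under normal conditions, and flag an execution whose metrics deviate beyond a z-score threshold. The paper's actual fingerprint construction and violation test are richer than this sketch.

```python
import numpy as np

def violates_sla(execution, fingerprint, tolerance=3.0):
    """Flag an execution whose resource metrics stray from the fingerprint."""
    mu, sigma = fingerprint                       # per-metric baseline stats
    z = np.abs((np.asarray(execution, float) - mu) / sigma)
    return bool(np.any(z > tolerance))            # contention/degradation?

# Fingerprint from executions under normal conditions (e.g. duration ms, memory MB).
normal_runs = np.array([[80.0, 120.0], [85.0, 118.0], [78.0, 125.0]])
fingerprint = (normal_runs.mean(axis=0), normal_runs.std(axis=0) + 1e-9)

print(violates_sla([82.0, 121.0], fingerprint))   # False: within the SLA
print(violates_sla([160.0, 240.0], fingerprint))  # True: degraded execution
```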

    Desarrollo de una herramienta integral de gestión de gases de efecto invernadero para la toma de decisión contra el cambio climático a nivel regional y local en la Comunitat Valenciana

    Thesis by compendium. Currently, decision-makers against climate change lack tools for developing greenhouse gas (GHG) emission inventories with sufficient scientific and technical rigour and precision to prioritize and invest the available resources efficiently in the measures needed to fight climate change. This thesis therefore presents the development of a territorial and sectoral information system (SITE) for monitoring GHG emissions, intended to serve as a local and regional climate governance tool. SITE combines the advantages of the top-down and bottom-up methodological approaches to achieve an innovative hybrid approach for efficiently accounting for and managing GHG emissions. The thesis defines the methodological developments, both general and specific to key Intergovernmental Panel on Climate Change (IPCC) sectors (building, transport, forestry, etc.), a software development for the server-side part of SITE (hereafter the system's back-end), and seven implementations as representative case studies at different scales and applied to different sectors. These implementations demonstrate the system's potential as a decision-support tool against climate change at the regional and local level. The pilot implementations, both at the regional level in the Comunitat Valenciana and at the local level in a large municipality (València) and medium-sized ones (Quart de Poblet and Llíria), show the tool's capacity for territorial and sectoral adaptation. The methodologies developed for the specific sectors of road traffic, building, and forestry provide quantifications at a spatial resolution with great capacity to optimize local and regional policies. The tool therefore has great potential for scalability and for continuous improvement through the inclusion of new methodological approaches, adaptation of the methodologies to data availability, specific methodologies for key sectors, and updates to the best available methodologies arising from the research activities of the scientific community.
    Lorenzo Sáez, E. (2022). Desarrollo de una herramienta integral de gestión de gases de efecto invernadero para la toma de decisión contra el cambio climático a nivel regional y local en la Comunitat Valenciana [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/181662
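    To make the hybrid top-down/bottom-up idea concrete, here is a minimal sketch under an assumed simple proportional reconciliation: the sectoral detail of the bottom-up estimates is preserved while their sum is reconciled with a top-down regional total. SITE's actual methodology is considerably more elaborate.

```python
def hybrid_inventory(top_down_total, bottom_up):
    """Rescale bottom-up sectoral estimates so they sum to the top-down total.

    bottom_up: mapping from sector (or municipality) to estimated
    emissions, e.g. in kt CO2e. Numbers below are illustrative only.
    """
    scale = top_down_total / sum(bottom_up.values())
    return {sector: round(e * scale, 1) for sector, e in bottom_up.items()}

print(hybrid_inventory(1000.0, {"transport": 420.0, "building": 310.0,
                                "waste": 230.0}))
# {'transport': 437.5, 'building': 322.9, 'waste': 239.6}
```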