The Metaverse: Survey, Trends, Novel Pipeline Ecosystem & Future Directions
The Metaverse offers a second world beyond reality, where boundaries are
non-existent and possibilities are endless, through engagement and immersive
experiences using virtual reality (VR) technology. Many disciplines can
benefit from the advancement of the Metaverse when it is properly developed,
including technology, gaming, education, art, and culture. Nevertheless,
developing the Metaverse environment to its full potential is a complex task
that needs proper guidance and direction. Existing surveys on the Metaverse
focus only on a specific aspect or discipline of the Metaverse and lack a
holistic view of the entire process. A more holistic, multi-disciplinary,
in-depth review, oriented toward both academia and industry, is therefore
required to provide a thorough study of the Metaverse development pipeline. To
address these issues, we present in this survey a novel multi-layered pipeline
ecosystem composed of (1) the Metaverse computing, networking, communications
and hardware infrastructure, (2) environment digitization, and (3) user
interactions. For every layer, we discuss the components that detail the steps
of its development. Also, for each of these components, we examine the impact
of a set of enabling technologies and empowering domains (e.g., Artificial
Intelligence, Security & Privacy, Blockchain, Business, Ethics, and Social) on
its advancement. In addition, we explain the importance of these technologies
to support decentralization, interoperability, user experiences, interactions,
and monetization. Our presented study highlights the existing challenges for
each component, followed by research directions and potential solutions. To the
best of our knowledge, this survey is the most comprehensive to date, allowing
users, scholars, and entrepreneurs to gain an in-depth understanding of the
Metaverse ecosystem and to identify their opportunities for contribution.
Likelihood Asymptotics in Nonregular Settings: A Review with Emphasis on the Likelihood Ratio
This paper reviews the most common situations where one or more regularity
conditions which underlie classical likelihood-based parametric inference fail.
We identify three main classes of problems: boundary problems, indeterminate
parameter problems -- which include non-identifiable parameters and singular
information matrices -- and change-point problems. The review focuses on the
large-sample properties of the likelihood ratio statistic. We emphasize
analytical solutions and acknowledge software implementations where available.
We furthermore give summary insight into the possible tools for deriving the
key results. Other approaches to hypothesis testing and connections to
estimation are listed in the annotated bibliography of the Supplementary
Material.
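As a concrete instance of the boundary class reviewed here (a textbook example, not specific to this paper): for a scalar parameter constrained to $\theta \ge 0$, testing $H_0:\theta = 0$, the usual $\chi^2_1$ limit for the likelihood ratio statistic fails and instead

$$
W_n = 2\{\ell(\hat\theta) - \ell(0)\} \;\xrightarrow{d}\; \tfrac{1}{2}\chi^2_0 + \tfrac{1}{2}\chi^2_1,
$$

where $\chi^2_0$ denotes a point mass at zero: asymptotically the unconstrained maximizer falls below the boundary with probability $\tfrac{1}{2}$, in which case the constrained estimate is $\hat\theta = 0$ and $W_n = 0$ (Chernoff, 1954; Self and Liang, 1987).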
Offline and Online Models for Learning Pairwise Relations in Data
Pairwise relations between data points are essential for numerous machine learning algorithms. Many representation learning methods consider pairwise relations to identify the latent features and patterns in the data. This thesis investigates learning of pairwise relations from two different perspectives: offline learning and online learning.

The first part of the thesis focuses on offline learning, starting with an investigation of the performance modeling of a synchronization method in concurrent programming, using a Markov chain whose state transition matrix models pairwise relations between the cores involved in a computer process. The thesis then focuses on a particular pairwise distance measure, the minimax distance, and explores memory-efficient approaches to computing it. It proposes a hierarchical representation of the data with a linear memory requirement with respect to the number of data points, from which the exact pairwise minimax distances can be derived in a memory-efficient manner. Next, a memory-efficient sampling method is proposed that follows this hierarchical representation and samples the data points in a way that maximally preserves the minimax distances between all data points. Finally, the first part proposes a practical non-parametric clustering of vehicle motion trajectories to annotate traffic scenarios, based on transitive relations between trajectories in an embedded space.

The second part of the thesis takes an online learning perspective. It starts by presenting an online learning method for identifying bottlenecks in a road network by extracting the minimax path, where bottlenecks are road segments with the highest cost, e.g., in the sense of travel time. Inspired by real-world road networks, the thesis assumes a stochastic traffic environment in which the road-specific probability distribution of travel time is unknown. The parameters of this distribution therefore need to be learned from observations, and the bottleneck identification task is modeled as a combinatorial semi-bandit problem. The proposed approach takes prior knowledge into account and follows a Bayesian approach to update the parameters. It develops a combinatorial variant of Thompson Sampling and derives an upper bound on the corresponding Bayesian regret. The thesis further proposes an approximate algorithm to address the associated computational intractability. Finally, the thesis considers contextual information about road network segments by extending the proposed model to a contextual combinatorial semi-bandit framework, and investigates and develops various algorithms for this contextual setting.
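The combinatorial Thompson Sampling idea can be sketched on a toy network. Everything below — the three hypothetical paths, the Gaussian priors, and the known observation noise — is an illustrative assumption; the thesis's actual model, conjugate updates, and regret analysis are more general.

```python
import random

random.seed(0)

# Hypothetical toy road network: each candidate path is a list of edge ids.
paths = {"A": [0, 1], "B": [2, 3], "C": [0, 3]}
true_mean = [2.0, 5.0, 3.0, 1.0]      # unknown true mean travel time per edge
mu, tau = [1.0] * 4, [1.0] * 4        # Gaussian posterior mean / precision
obs_prec = 1.0                        # assumed known observation precision

for t in range(200):
    # Thompson step: sample a travel time for every edge from its posterior.
    theta = [random.gauss(mu[e], tau[e] ** -0.5) for e in range(4)]
    # Bottleneck objective: choose the path whose worst sampled edge is best.
    choice = min(paths, key=lambda p: max(theta[e] for e in paths[p]))
    # Semi-bandit feedback: observe a noisy travel time on each chosen edge.
    for e in paths[choice]:
        y = random.gauss(true_mean[e], obs_prec ** -0.5)
        # Conjugate Gaussian update of the edge posterior.
        mu[e] = (tau[e] * mu[e] + obs_prec * y) / (tau[e] + obs_prec)
        tau[e] += obs_prec

# Recommend the path with the smallest posterior-mean bottleneck.
best = min(paths, key=lambda p: max(mu[e] for e in paths[p]))
print(best)
```

Under the hypothetical means above, path C (edges with true means 2.0 and 1.0) has the smallest bottleneck, and the sampler concentrates on it after exploring the alternatives.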
Copy-paste data augmentation for domain transfer on traffic signs
City streets carry a lot of information that can be exploited to improve the quality of the services citizens receive. For example, autonomous vehicles need to act according to all the elements near the vehicle itself, such as pedestrians, traffic signs, and other vehicles. It is also possible to use such information for smart city applications, for example to predict and analyze traffic or pedestrian flows.
Among all the objects that can be found in a street, traffic signs are very important because of the information they carry. This information can be exploited both for autonomous driving and for smart city applications. Deep learning and, more generally, machine learning models, however, need huge quantities of data to learn. Even though modern models are very good at generalizing, the more samples a model has, the better it can generalize across different samples.
Creating these datasets organically, namely with real pictures, is a very tedious task because of the wide variety of signs available around the world, and especially because of all the possible lighting, orientation, and other conditions in which they can appear. In addition, it may not be easy to collect enough samples for all the possible traffic signs, because some of them may be very rare.
Instead of collecting pictures manually, it is possible to exploit data augmentation techniques to create synthetic datasets containing the signs that are needed. Creating this data synthetically makes it possible to control the distribution and the conditions of the signs in the datasets, improving the quality and quantity of the training data. This thesis work is about using copy-paste data augmentation to create synthetic data for the traffic sign recognition task.
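A minimal sketch of the copy-paste idea described above, using a tiny grayscale "image" represented as nested lists. The function name, sizes, and brightness jitter are illustrative assumptions, not the thesis's implementation, which would operate on real photographs with segmented sign templates.

```python
import random

random.seed(1)

def paste_sign(background, sign, top, left, brightness=1.0):
    """Copy-paste a sign patch onto a background image (grayscale rows),
    with a simple brightness jitter, and return the bounding box that
    serves as a free label for the recognition model."""
    out = [row[:] for row in background]  # leave the background untouched
    h, w = len(sign), len(sign[0])
    for i in range(h):
        for j in range(w):
            out[top + i][left + j] = min(255, int(sign[i][j] * brightness))
    # Label: (class_id, top, left, bottom, right) for the pasted instance.
    return out, (0, top, left, top + h, left + w)

# Hypothetical 8x8 street background and a 3x3 "sign" template.
background = [[50] * 8 for _ in range(8)]
sign = [[200, 220, 200], [220, 255, 220], [200, 220, 200]]

# Random placement and lighting stand in for the varied real-world
# conditions that are hard to collect organically.
top, left = random.randint(0, 5), random.randint(0, 5)
image, box = paste_sign(background, sign, top, left,
                        brightness=random.uniform(0.7, 1.3))
print(box)
```

Repeating this with many templates, placements, and jitters yields an arbitrarily large labeled dataset whose sign distribution is fully under the experimenter's control.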
GEE Training Manual on Use of Earth Observation data and Google Earth Engine for monitoring and early warning of floods and droughts in Zambia
This training manual supported participants in learning a pre-processing tool that provides enhanced time-series processing capabilities and access to various open-source satellite data, and in learning basic scripts in Google Earth Engine for flood- and drought-related activities, showcasing applications in water resource management. In particular, the experts give more focus to Google's Earth Engine platform to showcase large- and small-scale scientific analysis and visualization of geospatial datasets. The code and a step-by-step procedure are given in the manual.
Neural Architecture Search: Insights from 1000 Papers
In the past decade, advances in deep learning have resulted in breakthroughs
in a variety of areas, including computer vision, natural language
understanding, speech recognition, and reinforcement learning. Specialized,
high-performing neural architectures are crucial to the success of deep
learning in these areas. Neural architecture search (NAS), the process of
automating the design of neural architectures for a given task, is an
inevitable next step in automating machine learning and has already outpaced
the best human-designed architectures on many tasks. In the past few years,
research in NAS has been progressing rapidly, with over 1000 papers released
since 2020 (Deng and Lindauer, 2021). In this survey, we provide an organized
and comprehensive guide to neural architecture search. We give a taxonomy of
search spaces, algorithms, and speedup techniques, and we discuss resources
such as benchmarks, best practices, other surveys, and open-source libraries.
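The search loop that this taxonomy organizes can be illustrated with its simplest baseline, random search. The search space and proxy score below are made up for illustration; a real NAS run would train each candidate, or estimate its quality with a speedup technique such as weight sharing or a zero-cost proxy.

```python
import random

random.seed(3)

# Hypothetical search space: depth, width, and kernel-size choices.
space = {"depth": [2, 4, 8], "width": [16, 32, 64], "kernel": [3, 5, 7]}

def sample(space):
    """Draw one architecture uniformly from the search space."""
    return {k: random.choice(v) for k, v in space.items()}

def proxy_score(arch):
    """Stand-in for validation accuracy. This synthetic score simply
    prefers deeper, wider models with a mild kernel-size penalty; it is
    an assumption made so the sketch runs without any training."""
    return arch["depth"] * 0.1 + arch["width"] * 0.01 - arch["kernel"] * 0.02

# Random search: the baseline against which NAS algorithms are compared.
best = max((sample(space) for _ in range(50)), key=proxy_score)
print(best)
```

Replacing the sampler with an evolutionary, reinforcement-learning, or gradient-based strategy, and the proxy with a real (or accelerated) evaluation, recovers the families of methods the survey categorizes.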
A Visual Modeling Method for Spatiotemporal and Multidimensional Features in Epidemiological Analysis: Applied COVID-19 Aggregated Datasets
The visual modeling method enables flexible interactions with rich graphical
depictions of data and supports the exploration of the complexities of
epidemiological analysis. However, most epidemiology visualizations do not
support the combined analysis of objective factors that might influence the
transmission situation, resulting in a lack of quantitative and qualitative
evidence. To address this issue, we have developed a portrait-based visual
modeling method called +msRNAer. This method considers the spatiotemporal
features of virus transmission patterns and the multidimensional features of
objective risk factors in communities, enabling portrait-based exploration and
comparison in epidemiological analysis. We applied +msRNAer to aggregated
COVID-19-related datasets in New South Wales, Australia, combining COVID-19
case number trends, geo-information, intervention events, and expert-supervised
risk factors extracted from LGA-based censuses. We refined the +msRNAer
workflow with collaborative views and evaluated its feasibility, effectiveness,
and usefulness through one user study and three subject-driven case studies.
Positive feedback from experts indicates that +msRNAer supports a general
analytical understanding: it not only compares relationships between
time-varying cases and risk factors through portraits, but also supports
navigation across fundamental geographical, timeline, and other factor
comparisons. Through these interactions, experts discovered functional and
practical implications of potential patterns relating long-standing community
factors to the vulnerabilities exposed by the pandemic. Experts confirmed that
+msRNAer is expected to deliver the benefits of visual modeling with
spatiotemporal and multidimensional features in other epidemiological analysis
scenarios.
A Proposed Meta-Reality Immersive Development Pipeline: Generative AI Models and Extended Reality (XR) Content for the Metaverse
The realization of an interoperable and scalable virtual platform, currently known as the “metaverse,” is inevitable, but many technological challenges need to be overcome first. With the metaverse still in a nascent phase, research currently indicates that building a new 3D social environment capable of interoperable avatars and digital transactions will represent most of the initial investment in time and capital. The return on investment, however, is worth the financial risk for firms like Meta, Google, and Apple, with the virtual space of the metaverse projected to be worth 84.09 billion by the end of 2028. But the creation of an entire alternate virtual universe of 3D avatars, objects, and otherworldly cityscapes calls for a new development pipeline and workflow. Existing 3D modeling and digital twin processes, already well established in industry and gaming, will be ported to support the need to architect and furnish this new digital world. The current development pipeline, however, is cumbersome, expensive, and limited in output capacity. This paper proposes a new and innovative immersive development pipeline leveraging recent advances in artificial intelligence (AI) for 3D model creation and optimization. The previous reliance on 3D modeling software to create assets and then import them into a game engine can be replaced with nearly instantaneous content creation with AI. While AI art generators like DALL-E 2 and DeepAI have been used for 2D asset creation, when combined with game engine technology such as Unreal Engine 5 and virtualized geometry systems like Nanite, a new process for creating nearly unlimited content for immersive reality is possible. New processes and workflows, such as those proposed here, will revolutionize content creation and pave the way for Web 3.0, the metaverse, and a truly 3D social environment.
Inferring networks from time series: a neural approach
Network structures underlie the dynamics of many complex phenomena, from gene
regulation and foodwebs to power grids and social media. Yet, as they often
cannot be observed directly, their connectivities must be inferred from
observations of their emergent dynamics. In this work we present a powerful and
fast computational method to infer large network adjacency matrices from time
series data using a neural network. Using a neural network provides uncertainty
quantification on the prediction in a manner that reflects both the
non-convexity of the inference problem as well as the noise on the data. This
is useful since network inference problems are typically underdetermined, and a
feature that has hitherto been lacking from network inference methods. We
demonstrate our method's capabilities by inferring line failure locations in
the British power grid from observations of its response to a power cut. Since
the problem is underdetermined, many classical statistical tools (e.g.
regression) will not be straightforwardly applicable. Our method, in contrast,
provides probability densities on each edge, allowing the use of hypothesis
testing to make meaningful probabilistic statements about the location of the
power cut. We also demonstrate our method's ability to learn an entire cost
matrix for a non-linear model from a dataset of economic activity in Greater
London. Our method outperforms OLS regression on noisy data in terms of both
speed and prediction accuracy, and scales favorably, whereas OLS is cubic.
Since our technique is not specifically engineered for network inference, it
represents a general parameter estimation scheme that is applicable to any
parameter dimension.
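The inference task can be illustrated on a toy linear system. This sketch swaps the paper's neural network for plain gradient descent on the one-step prediction error, and crudely approximates uncertainty by refitting from random initializations; all sizes and parameters are illustrative assumptions.

```python
import random

random.seed(2)

n, T = 3, 200
# Hypothetical ground-truth adjacency (edge weights) to be recovered.
A_true = [[0.0, 0.5, 0.0],
          [0.0, 0.0, 0.4],
          [0.3, 0.0, 0.0]]

# Simulate linear dynamics x_{t+1} = A x_t + noise as the observed series.
xs = [[random.gauss(0, 1) for _ in range(n)]]
for _ in range(T - 1):
    prev = xs[-1]
    xs.append([sum(A_true[i][j] * prev[j] for j in range(n))
               + random.gauss(0, 1.0) for i in range(n)])

def fit_adjacency(xs, steps=400, lr=0.05):
    """Fit A by gradient descent on the mean one-step prediction error."""
    A = [[random.gauss(0, 0.1) for _ in range(n)] for _ in range(n)]
    for _ in range(steps):
        grad = [[0.0] * n for _ in range(n)]
        for t in range(len(xs) - 1):
            for i in range(n):
                err = (sum(A[i][j] * xs[t][j] for j in range(n))
                       - xs[t + 1][i])
                for j in range(n):
                    grad[i][j] += 2.0 * err * xs[t][j]
        steps_count = len(xs) - 1
        for i in range(n):
            for j in range(n):
                A[i][j] -= lr * grad[i][j] / steps_count
    return A

# Refitting from several initializations gives a rough per-edge spread;
# the paper instead obtains genuine probability densities per edge from
# the neural network, which is what enables hypothesis testing.
ensemble = [fit_adjacency(xs) for _ in range(3)]
mean_A = [[sum(m[i][j] for m in ensemble) / 3 for j in range(n)]
          for i in range(n)]
```

In this linear, well-excited toy setting the problem is convex and the ensemble members agree; the interesting regime in the paper is the underdetermined, non-convex one, where the spread of the inferred densities carries real information.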