The Metaverse: Survey, Trends, Novel Pipeline Ecosystem & Future Directions
The Metaverse offers a second world beyond reality, where boundaries are non-existent and possibilities are endless, through engagement and immersive experiences using virtual reality (VR) technology. Many disciplines can benefit from the advancement of the Metaverse when accurately developed, including the fields of technology, gaming, education, art, and culture. Nevertheless, developing the Metaverse environment to its full potential is a complex task that requires proper guidance and direction. Existing surveys on the Metaverse each focus on a specific aspect or discipline and lack a holistic view of the entire process. To this end, a more holistic, multi-disciplinary, in-depth review, oriented to both academia and industry, is required to provide a thorough study of the Metaverse development pipeline. To address these issues, we present in this survey a novel multi-layered pipeline ecosystem composed of (1) the Metaverse computing, networking, communications and hardware infrastructure, (2) environment digitization, and (3) user interactions. For every layer, we discuss the components that detail the steps of its development. For each of these components, we also examine the impact of a set of enabling technologies and empowering domains (e.g., Artificial Intelligence, Security & Privacy, Blockchain, Business, Ethics, and Social) on its advancement. In addition, we explain the importance of these technologies in supporting decentralization, interoperability, user experiences, interactions, and monetization. Our study highlights the existing challenges for each component, followed by research directions and potential solutions. To the best of our knowledge, this survey is the most comprehensive to date, allowing users, scholars, and entrepreneurs to gain an in-depth understanding of the Metaverse ecosystem and identify their opportunities and potential for contribution.
A Design Science Research Approach to Smart and Collaborative Urban Supply Networks
Urban supply networks are facing increasing demands and challenges and thus constitute a relevant field for research and practical development. Supply chain management holds enormous potential and relevance for society and everyday life, as the flows of goods and information are important economic functions. Being a heterogeneous field, supply chain management research has a literature base that is difficult to manage and navigate. Disruptive digital technologies and the implementation of cross-network information analysis and sharing drive the need for new organisational and technological approaches. Practical issues are manifold and include megatrends such as digital transformation, urbanisation, and environmental awareness.
A promising approach to solving these problems is the realisation of smart and collaborative supply networks. The growth of artificial intelligence in recent years has led to a wide range of applications in a variety of domains. However, the potential of artificial intelligence utilisation in supply chain management has not yet been fully exploited. Similarly, value creation increasingly takes place in networked value creation cycles that have become continuously more collaborative, complex, and dynamic as interactions in business processes involving information technologies have become more intense.
Following a design science research approach, this cumulative thesis comprises the development and discussion of four artefacts for the analysis and advancement of smart and collaborative urban supply networks. This thesis aims to highlight the potential of artificial intelligence-based supply networks, to advance data-driven inter-organisational collaboration, and to improve last-mile supply network sustainability. Based on thorough machine learning and systematic literature reviews, reference and system dynamics modelling, simulation, and qualitative empirical research, the artefacts provide a valuable contribution to research and practice.
Consolidation of Urban Freight Transport – Models and Algorithms
Urban freight transport is an indispensable component of economic and social life in cities. Compared to other types of transport, however, it contributes disproportionately to the negative impacts of traffic. As a result, urban freight transport is closely linked to social, environmental, and economic challenges. Managing urban freight transport and addressing these issues poses challenges not only for local city administrations but also for companies, such as logistics service providers (LSPs). Numerous policy measures and company-driven initiatives exist in the area of urban freight transport to overcome these challenges. One central approach is the consolidation of urban freight transport. This dissertation focuses on urban consolidation centers (UCCs), which are a widely studied and applied measure in urban freight transport. The fundamental idea of UCCs is to consolidate freight transport across companies in logistics facilities close to an urban area in order to increase the efficiency of vehicles delivering goods within the urban area. Although the concept has been researched and tested for several decades and has been shown to reduce the negative externalities of freight transport in cities, in practice many UCCs struggle with a lack of business participation and financial difficulties. This dissertation is primarily focused on the costs and savings associated with the use of UCCs from the perspective of LSPs. The cost-effectiveness of UCC use, also referred to as cost attractiveness, can be seen as a crucial condition for LSPs to be interested in using UCC systems. The overall objective of this dissertation is two-fold. First, it aims to develop models that provide decision support for evaluating the cost-effectiveness of using UCCs. Second, it aims to analyze the impacts of urban freight transport regulations and operational characteristics on the cost attractiveness of using UCCs from the perspective of LSPs. In this context, a distinction is made between UCCs that are jointly operated by a group of LSPs and UCCs that are operated by third parties who offer their urban transport service for a fee. The main body of this dissertation is based on three research papers. The first paper focuses on jointly operated UCCs run by a group of cooperating LSPs. It presents a simulation model to analyze the financial impacts on LSPs participating in such a scheme, with a particular focus on urban freight transport regulations. A case study is used to analyze the operation of a jointly operated UCC under scenarios involving three freight transport regulations. The second and third papers take a different perspective on UCCs by focusing on third-party operated UCCs. In contrast to the first paper, they present an evaluation approach in which the decision to use UCCs is integrated with the vehicle route planning of LSPs. In addition to addressing the basic version of this integrated routing problem, known as the vehicle routing problem with transshipment facilities (VRPTF), the second paper presents problem extensions that incorporate time windows, fleet size and mix decisions, and refined objective functions. To heuristically solve the basic problem and the new problem variants, an adaptive large neighborhood search (ALNS) heuristic with an embedded local search heuristic and a set partitioning problem (SPP) component is presented.
Furthermore, various factors influencing the cost attractiveness of UCCs, including time windows and usage fees, are analyzed using a real-world case study. The third paper extends the work of the second paper, incorporating daily and entrance-based city toll schemes and enabling multi-trip routing. A mixed-integer linear programming (MILP) formulation of the resulting problem is proposed, as well as an ALNS solution heuristic. Moreover, a real-world case study with three European cities is used to analyze the impact of the two city toll systems in different operational contexts.
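For readers unfamiliar with the metaheuristic, the sketch below shows the generic destroy-and-repair loop that ALNS is built on, with adaptive operator weights and a simulated-annealing acceptance rule; it is a minimal illustration under assumed interfaces (the operator lists, cost function, and parameters are placeholders), not the implementation used in the papers.

```python
import math
import random

def alns(initial, destroy_ops, repair_ops, cost,
         iters=10_000, start_temp=100.0, cooling=0.999):
    """Generic ALNS skeleton: repeatedly destroy part of the current
    solution, repair it, and adapt operator weights to their success."""
    best = current = initial
    w_destroy = [1.0] * len(destroy_ops)
    w_repair = [1.0] * len(repair_ops)
    temp = start_temp
    for _ in range(iters):
        d = random.choices(range(len(destroy_ops)), w_destroy)[0]
        r = random.choices(range(len(repair_ops)), w_repair)[0]
        candidate = repair_ops[r](destroy_ops[d](current))
        delta = cost(candidate) - cost(current)
        # Accept improvements always; accept worsening moves with
        # probability exp(-delta / temp), as in simulated annealing.
        if delta < 0 or random.random() < math.exp(-delta / temp):
            current = candidate
            reward = 5.0 if cost(candidate) < cost(best) else 1.0
            w_destroy[d] = 0.8 * w_destroy[d] + 0.2 * reward
            w_repair[r] = 0.8 * w_repair[r] + 0.2 * reward
            if cost(candidate) < cost(best):
                best = candidate
        temp *= cooling
    return best
```

The SPP component described in the second paper would sit on top of such a loop, collecting the routes generated during the search and re-selecting a cost-minimal subset of them at the end.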
Industry 4.0: product digital twins for remanufacturing decision-making
There is currently a desire to reduce natural resource consumption and expand circular business principles, whilst Industry 4.0 (I4.0) is regarded as the evolutionary and potentially disruptive movement of technology, automation, digitalisation, and data manipulation into the industrial sector. The remanufacturing industry is recognised as being vital to the circular economy (CE) as it extends the in-use life of products, but its synergy with I4.0 has had little attention thus far. This thesis documents the first investigation into I4.0 in remanufacturing for a CE, contributing the design and demonstration of a model that optimises remanufacturing planning using data from different instances in a product's life cycle.
The initial aim of this work was to identify the I4.0 technology that would enhance stability in remanufacturing with a view to reducing resource consumption. As the project progressed, it narrowed to focus on the development of a product digital twin (DT) model to support data-driven decision making for operations planning. The model's architecture was derived using a bottom-up approach in which requirements were extracted from the identified complications in production planning and control that differentiate remanufacturing from manufacturing. Simultaneously, the benefits of enabling visibility of an asset's through-life health were obtained using a DT as the modus operandi. A product simulator and DT prototype were designed to use Internet of Things (IoT) components, a neural network for remaining-life estimation, and a search algorithm for operational planning optimisation. The DT was iteratively developed using case studies to validate and examine the real opportunities that exist in deploying a business model that harnesses, and commodifies, early-life product data for end-of-life processing optimisation. Findings suggest that, using intelligent programming networks and algorithms, a DT can enhance decision-making if it has visibility of the product and access to reliable remanufacturing process information; existing IoT components provide rudimentary “smart” capabilities, but their integration is complex, and the durability of the systems over extended product life cycles needs to be further explored.
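As a rough, self-contained illustration of the decision-support idea (not the thesis's model: the sensor features, operations, costs, and life gains below are all hypothetical), a remaining-life regressor can be paired with a search over remanufacturing plans:

```python
from itertools import combinations
import numpy as np
from sklearn.neural_network import MLPRegressor

# Train a remaining-life estimator on synthetic through-life features
# (stand-ins for usage hours, temperature, vibration, and so on).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))
y = 100 - 8 * X[:, 0] + 3 * X[:, 1]          # synthetic remaining life
rul_model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000).fit(X, y)

# Hypothetical remanufacturing operations: (name, cost, added life).
OPS = [("clean", 10, 5), ("re-coat", 40, 25), ("replace bearing", 70, 50)]

def cheapest_plan(features, target_life):
    """Exhaustively search subsets of operations for the cheapest plan
    whose predicted post-remanufacturing life reaches the target."""
    base = rul_model.predict(features.reshape(1, -1))[0]
    best = None
    for k in range(len(OPS) + 1):
        for plan in combinations(OPS, k):
            life = base + sum(gain for _, _, gain in plan)
            price = sum(c for _, c, _ in plan)
            if life >= target_life and (best is None or price < best[0]):
                best = (price, [name for name, _, _ in plan])
    return best

print(cheapest_plan(rng.normal(size=6), target_life=90))
```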
Elasto-plastic deformations within a material point framework on modern GPU architectures
Plastic strain localization is an important process on Earth. It strongly influences the mechanical behaviour of natural processes, such as fault mechanics, earthquakes or orogeny. At a smaller scale, a landslide is a fantastic example of elasto-plastic deformations. Such behaviour spans from pre-failure mechanisms to post-failure propagation of the unstable material. To fully resolve the landslide mechanics, the selected numerical methods should be able to efficiently address a wide range of deformation magnitudes.
Accurate and performant numerical modelling requires important computational resources. Mesh-free numerical methods such as the material point method (MPM) or smoothed-particle hydrodynamics (SPH) are particularly computationally expensive when compared with mesh-based methods, such as the finite element method (FEM) or the finite difference method (FDM). Still, mesh-free methods are especially well suited to numerical problems involving large elasto-plastic deformations. However, the computational efficiency of these methods must first be improved in order to tackle complex three-dimensional problems, such as landslides.
As such, this research work attempts to alleviate the computational cost of the material point method by using the most recent graphics processing unit (GPU) architectures available. GPUs are many-core processors originally designed to refresh screen pixels (e.g., for computer games) independently. This allows GPUs to deliver massive parallelism when compared to central processing units (CPUs).
To do so, this research work first investigates code prototyping in a high-level language, e.g., MATLAB. This makes it possible to implement vectorized algorithms and to benchmark two-dimensional numerical results against analytical solutions and/or experimental results in an affordable amount of time. Afterwards, a low-level language, CUDA C, is used to efficiently implement a GPU-based solver, i.e., ep2-3De v1.0, which can resolve three-dimensional problems in a reasonable amount of time. This part takes advantage of the massive parallelism of modern GPU architectures. In addition, a first attempt at multi-GPU computing is made to further increase performance and to address the on-chip memory limitation. Finally, this GPU-based solver is used to investigate three-dimensional granular collapses and is compared with experimental evidence obtained in the laboratory.
This research work demonstrates that the material point method is well suited to resolve small to large elasto-plastic deformations. Moreover, the computational efficiency of the method can be dramatically increased using modern GPU architectures. These allow fast, performant and accurate three-dimensional modelling of landslides, provided that the on-chip memory limitation is alleviated with an appropriate parallel strategy.
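The vectorization strategy described above can be made concrete with a small example. The following numpy sketch (an illustration, not the ep2-3De code) shows a one-dimensional particle-to-grid mass scatter with linear shape functions, the kind of operation that is first vectorized during MATLAB prototyping and then mapped to one GPU thread per particle with atomic additions:

```python
import numpy as np

def p2g_mass_1d(x_p, m_p, dx, n_nodes):
    """Scatter particle masses to grid nodes: with linear shape
    functions, each particle contributes to its two nearest nodes."""
    base = np.floor(x_p / dx).astype(int)   # left node index per particle
    frac = x_p / dx - base                  # local coordinate in [0, 1)
    mass = np.zeros(n_nodes)
    # np.add.at performs an unbuffered scatter-add, needed because many
    # particles write to the same node; on a GPU this becomes atomicAdd.
    np.add.at(mass, base, m_p * (1.0 - frac))
    np.add.at(mass, base + 1, m_p * frac)
    return mass

x_p = np.random.default_rng(1).uniform(0.0, 1.0, size=10_000)
mass = p2g_mass_1d(x_p, m_p=np.full(10_000, 1e-3), dx=0.1, n_nodes=12)
print(mass.sum())   # total grid mass equals total particle mass: 10.0
```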
False textual information detection, a deep learning approach
Many approaches exist for fact checking for fake news identification, which is the focus of this thesis. Current approaches still perform poorly at scale due to a lack of authority, insufficient evidence, or, in certain cases, reliance on a single piece of evidence.
To address the lack of evidence and the inability of models to generalise across domains, we propose a style-aware model that detects false information and improves on existing performance. We found that our model was effective at detecting false information when we evaluated its generalisation ability using news articles and Twitter corpora.
We then propose to improve fact checking performance by incorporating warrants. We developed a highly efficient prediction model based on the results and demonstrated that incorporating warrants is beneficial for fact checking. Due to a lack of external warrant data, we develop a novel model for generating warrants that aid in determining the credibility of a claim. The results indicate that when a pre-trained language model is combined with a multi-agent model, high-quality, diverse warrants are generated that contribute to task performance improvement.
To resolve biased opinions and support rational judgments, we propose a model that can generate multiple perspectives on a claim. Experiments confirm that our Perspectives Generation model allows for the generation of diverse perspectives with a higher degree of quality and diversity than any baseline model.
Additionally, we propose to improve the model's detection capability by generating an explainable alternative factual claim that assists the reader in identifying subtle issues responsible for factual errors. Our examination demonstrates that this does indeed increase the veracity of the claim.
Finally, whereas current research has focused on stance detection and fact checking separately, we propose a unified model that integrates both tasks. Classification results demonstrate that our proposed model outperforms state-of-the-art methods.
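A minimal sketch of what such a unified model could look like is given below; the architecture (a shared encoder feeding a stance head and a verdict head, trained jointly) is an assumption for illustration, not the thesis's exact design:

```python
import torch
import torch.nn as nn

class UnifiedStanceFactModel(nn.Module):
    """Shared text encoder with two task heads: stance detection and
    fact-checking verdict classification (illustrative architecture)."""

    def __init__(self, vocab_size=30_000, dim=256, n_stances=4, n_verdicts=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True),
            num_layers=2,
        )
        self.stance_head = nn.Linear(dim, n_stances)    # e.g. agree/disagree/discuss/unrelated
        self.verdict_head = nn.Linear(dim, n_verdicts)  # e.g. supported/refuted/not enough info

    def forward(self, token_ids):
        h = self.encoder(self.embed(token_ids)).mean(dim=1)  # pool claim-evidence pair
        return self.stance_head(h), self.verdict_head(h)

model = UnifiedStanceFactModel()
stance_logits, verdict_logits = model(torch.randint(0, 30_000, (8, 64)))
# Joint training sums the two cross-entropy losses, so the shared
# encoder learns evidence representations useful for both tasks.
```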
An exploration of the person-related markers in finite synthetic verbs in C16 Basque
From an examination of the emergence of Batua, dialect classification, the relationship of sixteenth-century Basque to Batua, and two sets of sixteenth-century sources, the thesis contends that, over the last half-millennium, Basque has changed to a greater extent than generally acknowledged. Semantic, aspectual, syntactic, phonological and morphological change is illustrated, showing how different sources reflect different stages of key transitions. Investigation of the morphosyntax of sixteenth-century person-related markers contrasts patterns of distribution, positioning, pleonasm and omission with those of the modern language. Indexing between pre- and post-root features suggests a history of serial verbs, or possibly root suppletion; in particular, the shift from the predominantly pre-root positioning of dative flags (where they exist) in the sixteenth century to the overwhelmingly post-root positioning of the modern language lends weight to the contention that Basque might have transitioned from a previously more pre-inflective typology to the overwhelmingly post-inflective language of today. Sixteenth-century intermediate forms permit insights into an earlier history of reanalysis and repurposing and suggest foci for future research.
Deposition of diamond films for electronic devices
This PhD thesis presents details about the usage of diamond in electronics. It presents a review of the properties of diamond and the mechanisms of its growth using hot filament chemical vapour deposition (HFCVD). Presented in the thesis are the experimental details, and the discussions that follow from them, about the optimization of the deposition technique and the growth of diamond on various electronically relevant substrates. The discussions present an analysis of the parameters typically involved in HFCVD, particularly the pre-treatment that the substrates receive, namely the novel nucleation procedure (NNP), as well as the growth temperatures and plasma chemistry, and how they affect the characteristics of the thus-grown films. Extensive morphological and spectroscopic analysis has been carried out in order to characterise these films.
Scalable software and models for large-scale extracellular recordings
The brain represents information about the world through the electrical activity of populations of neurons. By placing an electrode near a neuron that is firing (spiking), it is possible to detect the resulting extracellular action potential (EAP) that is transmitted down an axon to other neurons. In this way, it is possible to monitor the communication of a group of neurons to uncover how they encode and transmit information. As the number of recorded neurons continues to increase, however, so do the data processing and analysis challenges. It is crucial that scalable software and analysis tools are developed and made available to the neuroscience community to keep up with the large amounts of data that are already being gathered.
This thesis is composed of three pieces of work which I develop in order to better process and analyze large-scale extracellular recordings. My work spans all stages of extracellular analysis, from the processing of raw electrical recordings to the development of statistical models to reveal underlying structure in neural population activity.
In the first work, I focus on developing software to improve the comparison and adoption of different computational approaches for spike sorting. When analyzing neural recordings, most researchers are interested in the spiking activity of individual neurons, which must be extracted from the raw electrical traces through a process called spike sorting. Much development has been directed towards improving the performance and automation of spike sorting. This continuous development, while essential, has contributed to an over-saturation of new, incompatible tools that hinders rigorous benchmarking and complicates reproducible analysis. To address these limitations, I develop SpikeInterface, an open-source Python framework designed to unify preexisting spike sorting technologies into a single toolkit and to facilitate straightforward benchmarking of different approaches. With this framework, I demonstrate that modern, automated spike sorters have low agreement when analyzing the same dataset, i.e. they find different numbers of neurons with different activity profiles; this result holds true for a variety of simulated and real datasets. I also demonstrate that utilizing a consensus-based approach to spike sorting, where the outputs of multiple spike sorters are combined, can dramatically reduce the number of falsely detected neurons.
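SpikeInterface itself ships comparison utilities for this; the sketch below merely illustrates the consensus idea with a simplified agreement score and greedy matching (both of which are assumptions, not the framework's API):

```python
import numpy as np

def agreement(train_a, train_b, delta=0.4):
    """Symmetrized fraction of matched spikes between two spike trains
    (times in ms), where spikes within +/- delta ms count as matched."""
    matched = sum(np.any(np.abs(train_b - t) <= delta) for t in train_a)
    return 2 * matched / (len(train_a) + len(train_b))

def consensus_units(sorter_a, sorter_b, min_agreement=0.5):
    """Keep only units found by both sorters: each unit of sorter A is
    accepted if its best-matching unit in sorter B clears the threshold.
    Units with no counterpart are treated as likely false detections."""
    kept = []
    for ua, train_a in sorter_a.items():
        ub, score = max(
            ((ub, agreement(train_a, tb)) for ub, tb in sorter_b.items()),
            key=lambda pair: pair[1],
        )
        if score >= min_agreement:
            kept.append((ua, ub, score))
    return kept

# Usage: sorter outputs as {unit_id: np.array of spike times in ms}.
a = {0: np.array([10.0, 50.2, 90.1]), 1: np.array([5.0, 33.3])}
b = {7: np.array([10.1, 50.0, 90.3])}
print(consensus_units(a, b))   # unit 0 matches unit 7; unit 1 is dropped
```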
In the second work, I focus on developing an unsupervised machine learning approach for determining the source location of individually detected spikes that are recorded by high-density microelectrode arrays. By localizing the source of individual spikes, my method is able to determine the approximate position of the recorded neurons in relation to the microelectrode array. To allow my model to work with large-scale datasets, I utilize deep neural networks, a family of machine learning algorithms that can be trained to approximate complicated functions in a scalable fashion. I evaluate my method on both simulated and real extracellular datasets, demonstrating that it is more accurate than other commonly used methods. I also show that location estimates for individual spikes can be utilized to improve the efficiency and accuracy of spike sorting. After training, my method allows for localization of one million spikes in approximately 37 seconds on a TITAN X GPU, enabling real-time analysis of massive extracellular datasets.
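Below is a deliberately simplified, supervised sketch of neural-network spike localization; the thesis method is unsupervised, so training on simulated spikes with known positions, as well as the feature choice and architecture here, are assumptions made to keep the example short:

```python
import torch
import torch.nn as nn

N_CHANNELS = 64  # hypothetical high-density array size

# Map per-channel spike amplitudes to a 2D source position.
locator = nn.Sequential(
    nn.Linear(N_CHANNELS, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 2),            # (x, y) relative to the array
)

opt = torch.optim.Adam(locator.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-ins for simulated spikes with ground-truth positions.
amplitudes = torch.randn(256, N_CHANNELS)
positions = torch.rand(256, 2) * 100        # micrometre coordinates

for _ in range(100):                        # tiny training loop
    opt.zero_grad()
    loss = loss_fn(locator(amplitudes), positions)
    loss.backward()
    opt.step()

# Once trained, localization is a single forward pass per spike, which
# is what makes million-spike batches fast on a GPU.
print(locator(amplitudes[:5]))
```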
In my third and final presented work, I focus on developing an unsupervised machine learning model that can uncover patterns of activity from neural populations associated with a behaviour being performed. Specifically, I introduce Targeted Neural Dynamical Modelling (TNDM), a statistical model that jointly models the neural activity and any external behavioural variables. TNDM decomposes neural dynamics (i.e. temporal activity patterns) into behaviourally relevant and behaviourally irrelevant dynamics; the behaviourally relevant dynamics constitute all activity patterns required to generate the behaviour of interest, while behaviourally irrelevant dynamics may be completely unrelated (e.g. other behavioural or brain states) or even related to behaviour execution (e.g. dynamics that are associated with behaviour generally but are not task specific). Again, I implement TNDM using a deep neural network to improve its scalability and expressivity. On synthetic data and on real recordings from the premotor (PMd) and primary motor cortex (M1) of a monkey performing a center-out reaching task, I show that TNDM is able to extract low-dimensional neural dynamics that are highly predictive of behaviour without sacrificing its fit to the neural data.
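The decomposition can be sketched as a latent-variable model in which behaviour is decoded only from a subset of the latents; the toy model below mirrors that idea, but the real TNDM is a sequential variational model with a Poisson likelihood, and all dimensions and losses here are placeholders:

```python
import torch
import torch.nn as nn

class TNDMSketch(nn.Module):
    """Illustrative TNDM-style split: an RNN encoder produces latents
    divided into behaviourally relevant and irrelevant parts; neural
    activity is reconstructed from both, behaviour from the relevant
    part only."""

    def __init__(self, n_neurons=100, n_behaviour=2, d_rel=4, d_irr=8):
        super().__init__()
        self.d_rel = d_rel
        self.encoder = nn.GRU(n_neurons, d_rel + d_irr, batch_first=True)
        self.neural_dec = nn.Linear(d_rel + d_irr, n_neurons)
        self.behaviour_dec = nn.Linear(d_rel, n_behaviour)

    def forward(self, spikes):                 # (batch, time, n_neurons)
        z, _ = self.encoder(spikes)            # (batch, time, d_rel + d_irr)
        z_rel = z[..., : self.d_rel]
        rates = self.neural_dec(z).exp()       # Poisson firing rates
        behaviour = self.behaviour_dec(z_rel)  # e.g. hand velocity
        return rates, behaviour

model = TNDMSketch()
rates, behaviour = model(torch.randn(16, 50, 100))
# Training would balance a Poisson likelihood on the rates against a
# behaviour reconstruction term, forcing the relevant latents to carry
# the behavioural signal without degrading the neural fit.
```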
Performance and energy efficiency optimization in massively parallel systems
Heterogeneous systems are becoming increasingly relevant due to their performance and energy-efficiency capabilities, and they are present in all types of computing platforms, from embedded devices and servers to HPC nodes in large data centers. Their complexity means that they are usually programmed under the task paradigm and the host-device programming model. This strongly penalizes accelerator utilization and system energy consumption, and it also makes applications difficult to adapt.
Co-execution allows all devices to simultaneously compute the same problem, cooperating to consume less time and energy. However, programmers must handle all device management, workload distribution and code portability between systems, significantly complicating their programming.
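The core scheduling idea can be sketched independently of any runtime; below is a minimal proportional-splitting illustration in which run_chunk is a hypothetical stand-in for an OpenCL/SYCL dispatch, and where real runtimes such as those in this thesis dispatch to devices concurrently and adapt at finer granularity rather than in this simplified sequential form:

```python
import time

def co_execute(work_items, devices, run_chunk):
    """Split one kernel's work across devices in proportion to their
    measured throughput (items/second on a small calibration probe)."""
    probe = max(1, len(work_items) // (20 * len(devices)))
    rates, offset = [], 0
    for dev in devices:                       # calibrate each device
        t0 = time.perf_counter()
        run_chunk(dev, work_items[offset:offset + probe])
        rates.append(probe / max(time.perf_counter() - t0, 1e-9))
        offset += probe
    remaining = work_items[offset:]
    total, start = sum(rates), 0
    for dev, rate in zip(devices, rates):     # proportional split
        share = round(len(remaining) * rate / total)
        run_chunk(dev, remaining[start:start + share])
        start += share
    if start < len(remaining):                # give leftovers to last device
        run_chunk(devices[-1], remaining[start:])
```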
This thesis offers contributions to improve performance and energy efficiency in these massively parallel systems. The proposals address the following generally conflicting objectives: usability and programmability are improved, while ensuring enhanced system abstraction and extensibility, and at the same time performance, scalability and energy efficiency are increased. To achieve this, two runtime systems with completely different approaches are proposed.
EngineCL, focused on OpenCL and with a high-level API, provides an extensible modular system and favors maximum compatibility between all types of devices. Its versatility allows it to be adapted to environments for which it was not originally designed, including applications with time-constrained executions or molecular dynamics HPC simulators, such as the one used in an international research center.
Considering industrial trends and emphasizing professional applicability, CoexecutorRuntime provides a flexible C++/SYCL-based system that adds co-execution support to oneAPI technology. This runtime brings programmers closer to the problem domain, enabling the exploitation of dynamic adaptive strategies that improve efficiency in all types of applications.
Funding: This PhD has been supported by the Spanish Ministry of Education (FPU16/03299 grant) and the Spanish Science and Technology Commission under contracts TIN2016-76635-C2-2-R and PID2019-105660RB-C22. This work has also been partially supported by the Mont-Blanc 3: European Scalable and Power Efficient HPC Platform based on Low-Power Embedded Technology project (G.A. No. 671697) from the European Union's Horizon 2020 Research and Innovation Programme (H2020 Programme). Some activities have also been funded by the Spanish Science and Technology Commission under contract TIN2016-81840-REDT (CAPAP-H6 network). The Integration II: Hybrid programming models work of Chapter 4 has been partially performed under the project HPC-EUROPA3 (INFRAIA-2016-1-730897), with the support of the EC Research Innovation Action under the H2020 Programme. In particular, the author gratefully acknowledges the support of the SPMT Department of the High Performance Computing Center Stuttgart (HLRS).