The Metaverse: Survey, Trends, Novel Pipeline Ecosystem & Future Directions
The Metaverse offers a second world beyond reality, where boundaries are
non-existent, and possibilities are endless through engagement and immersive
experiences using virtual reality (VR) technology. Many disciplines can
benefit from the advancement of the Metaverse when accurately developed,
including the fields of technology, gaming, education, art, and culture.
Nevertheless, developing the Metaverse environment to its full potential is an
ambiguous task that needs proper guidance and directions. Existing surveys on
the Metaverse focus only on a specific aspect and discipline of the Metaverse
and lack a holistic view of the entire process. To this end, a more holistic,
multi-disciplinary, in-depth review, oriented to both academia and industry, is
required to provide a thorough study of the Metaverse development pipeline. To
address these issues, we present in this survey a novel multi-layered pipeline
ecosystem composed of (1) the Metaverse computing, networking, communications
and hardware infrastructure, (2) environment digitization, and (3) user
interactions. For every layer, we discuss the components that detail the steps
of its development. Also, for each of these components, we examine the impact
of a set of enabling technologies and empowering domains (e.g., Artificial
Intelligence, Security & Privacy, Blockchain, Business, Ethics, and Social) on
its advancement. In addition, we explain the importance of these technologies
to support decentralization, interoperability, user experiences, interactions,
and monetization. Our presented study highlights the existing challenges for
each component, followed by research directions and potential solutions. To the
best of our knowledge, this survey is the most comprehensive and allows users,
scholars, and entrepreneurs to gain an in-depth understanding of the Metaverse
ecosystem and identify their opportunities and potential for contribution.
Reinforcement Learning-based User-centric Handover Decision-making in 5G Vehicular Networks
The advancement of 5G technologies and vehicular networks opens a new paradigm for Intelligent Transportation Systems (ITS) in safety and infotainment services in urban and highway scenarios. Connected vehicles are vital for enabling massive data sharing and supporting such services. Consequently, a stable connection is compulsory to transmit data across the network successfully. The new 5G technology introduces more bandwidth, stability, and reliability, but it suffers from a shorter communication range, leading to more frequent handovers and connection drops. The shift from a base station-centric view to a user-centric view helps to cope with the smaller communication range and ultra-density of 5G networks. In this thesis, we propose a series of strategies to improve connection stability through efficient handover decision-making. First, we propose a modified probabilistic approach, M-FiVH, aimed at reducing 5G handovers and enhancing network stability. Next, an adaptive learning approach employs Connectivity-oriented SARSA Reinforcement Learning (CO-SRL) for user-centric Virtual Cell (VC) management to enable efficient handover (HO) decisions. Finally, a user-centric Factor-distinct SARSA Reinforcement Learning (FD-SRL) approach combines a time-series-oriented LSTM with adaptive SRL for VC and HO management, considering both historical and real-time data. The random direction of vehicular movement, high mobility, network load, uncertain road traffic situations, and signal strength from cellular transmission towers vary over time and cannot always be predicted. Our proposed approaches maintain stable connections by selecting the appropriate VC size and managing HOs, thereby reducing the number of HOs. Realistic simulations demonstrated that M-FiVH, CO-SRL, and FD-SRL successfully reduce the number of HOs and the average cumulative HO time. We provide an analysis and comparison of several approaches and demonstrate that our proposed approaches perform better in terms of network connectivity.
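The abstract does not include implementation details, so the following is only a minimal sketch of a tabular SARSA update of the kind that could underpin a connectivity-oriented handover policy; the state and action encoding, the connectivity-based reward, and the hyperparameters are illustrative assumptions, not the thesis's actual CO-SRL or FD-SRL design.

import random
from collections import defaultdict

# Minimal tabular SARSA sketch for handover decisions (illustrative only).
# State: a discretized tuple (e.g., signal-strength bin, vehicle-speed bin).
# Action: 0 = stay on the current cell, 1 = hand over to the best neighbour.
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # assumed hyperparameters
ACTIONS = (0, 1)

Q = defaultdict(float)  # Q[(state, action)] -> value

def choose_action(state):
    """Epsilon-greedy action selection."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def sarsa_update(state, action, reward, next_state, next_action):
    """On-policy SARSA update: Q <- Q + alpha * (r + gamma * Q' - Q)."""
    td_target = reward + GAMMA * Q[(next_state, next_action)]
    Q[(state, action)] += ALPHA * (td_target - Q[(state, action)])

# Example step: the reward penalizes handovers and rewards sustained connectivity
# (an assumed, connectivity-oriented reward shape).
s, a = ("strong_signal", "low_speed"), 0
s_next = ("weak_signal", "low_speed")
r = 1.0 if a == 0 else -0.5
a_next = choose_action(s_next)
sarsa_update(s, a, r, s_next, a_next)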
Decoding spatial location of attended audio-visual stimulus with EEG and fNIRS
When analyzing complex scenes, humans often focus their attention on an object at a particular spatial location in the presence of background noise and irrelevant visual objects. The ability to decode the attended spatial location would facilitate brain-computer interfaces (BCI) for complex scene analysis. Here, we tested two different neuroimaging technologies and investigated their capability to decode audio-visual spatial attention in the presence of competing stimuli from multiple locations. For functional near-infrared spectroscopy (fNIRS), we targeted the dorsal frontoparietal network, including the frontal eye field (FEF) and intra-parietal sulcus (IPS), as well as the superior temporal gyrus/planum temporale (STG/PT). All of these regions were shown in previous functional magnetic resonance imaging (fMRI) studies to be activated by auditory, visual, or audio-visual spatial tasks. We found that fNIRS provides robust decoding of attended spatial locations for most participants and that decoding performance correlates with behavioral performance. Moreover, we found that FEF makes a large contribution to decoding performance. Surprisingly, the performance was significantly above chance level 1 s after cue onset, which is well before the peak of the fNIRS response.
For electroencephalography (EEG), while several successful EEG-based decoding algorithms exist, to date all of them have focused exclusively on the auditory modality, where eye-related artifacts are minimized or controlled. Successful integration into more ecologically typical usage requires careful consideration of eye-related artifacts, which are inevitable. We showed that fast and reliable decoding can be performed with or without an ocular-artifact removal algorithm. Our results show that EEG and fNIRS are promising platforms for compact, wearable technologies that could be applied to decode attended spatial location and reveal contributions of specific brain regions during complex scene analysis.
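As an illustration of the kind of decoding pipeline such studies rely on (the abstract does not specify the classifier), here is a minimal sketch that classifies attended spatial location from trial-wise feature vectors with linear discriminant analysis and cross-validation; the feature extraction, data shapes, and random data are assumptions.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Illustrative data: 120 trials x 64 features (e.g., per-channel band power
# for EEG, or per-channel mean HbO change for fNIRS); labels are the attended
# location (0 = left, 1 = right). Shapes and values are assumptions.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 64))
y = rng.integers(0, 2, size=120)

# Standardize features, then apply a linear classifier; chance level is 50%.
clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
scores = cross_val_score(clf, X, y, cv=5)
print(f"Decoding accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")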
Machine Learning for Gravitational-Wave Astronomy: Methods and Applications for High-Dimensional Laser Interferometry Data
Gravitational-wave astronomy is an emerging field in observational astrophysics concerned with the study of gravitational signals proposed nearly a century ago by Albert Einstein but only recently confirmed to exist. Such signals were theorized to result from astronomical events such as the collisions of black holes, but they were long thought to be too faint to measure on Earth. In recent years, the construction of extremely sensitive detectors—including the Laser Interferometer Gravitational-Wave Observatory (LIGO) project—has enabled the first direct detections of these gravitational waves, corroborating the theory of general relativity and heralding a new era of astrophysics research.
As a result of their extraordinary sensitivity, the instruments used to study gravitational waves are also subject to noise that can significantly limit their ability to detect the signals of interest with sufficient confidence. The detectors continuously record more than 200,000 time series of auxiliary data describing the state of a vast array of internal components and sensors, the environmental state in and around the detector, and so on. This data offers significant value for understanding the nearly innumerable potential sources of noise and ultimately reducing or eliminating them, but it is clearly impossible to monitor, let alone understand, so much information manually. The field of machine learning offers a variety of techniques well-suited to problems of this nature.
In this thesis, we develop and present several machine learning–based approaches to automate the process of extracting insights from the vast, complex collection of data recorded by LIGO detectors. We introduce a novel problem formulation for transient noise detection and show for the first time how an efficient and interpretable machine learning method can accurately identify detector noise using all of these auxiliary data channels but without observing the noise itself. We present further work employing more sophisticated neural network–based models, demonstrating how they can reduce error rates by over 60% while also providing LIGO scientists with interpretable insights into the detector’s behavior. We also illustrate the methods’ utility by demonstrating their application to a specific, recurring type of transient noise; we show how we can achieve a classification accuracy of over 97% while also independently corroborating the results of previous manual investigations into the origins of this type of noise.
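The abstract does not name the specific models, so the following is only a minimal sketch of the general idea: predicting the presence of transient noise from auxiliary-channel features with an interpretable model whose feature importances point back to individual channels. The feature construction, data shapes, and model choice are illustrative assumptions, not the thesis's actual method.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Illustrative data: each row summarizes many auxiliary channels around one
# time window (here 1000 features); the label marks whether a transient noise
# event occurred in the main strain channel. Random data for demonstration.
rng = np.random.default_rng(42)
X = rng.normal(size=(600, 1000))
y = rng.integers(0, 2, size=600)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=0)

# An interpretable ensemble: per-feature importances can be mapped back to the
# auxiliary channels most associated with the noise.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print("Held-out accuracy:", model.score(X_test, y_test))

top_features = np.argsort(model.feature_importances_)[-10:][::-1]
print("Most informative auxiliary features:", top_features)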
The methods and results presented in the following chapters are applicable not only to the specific gravitational-wave data considered but also to a broader family of machine learning problems involving prediction from similarly complex, high-dimensional data containing only a few relevant components in a sea of irrelevant information. We hope this work proves useful to astrophysicists and other machine learning practitioners seeking to better understand gravitational waves, extremely complex and precise engineered systems, or any of the innumerable extraordinary phenomena of our civilization and universe.
Underwater optical wireless communications in turbulent conditions: from simulation to experimentation
Underwater optical wireless communication (UOWC) is a technology that aims to apply high speed optical wireless communication (OWC) techniques to the underwater channel. UOWC has the potential to provide high speed links over relatively short distances as part of a hybrid underwater network, along with radio frequency (RF) and underwater acoustic communications (UAC) technologies. However, there are some difficulties involved in developing a reliable UOWC link, namely, the complexity of the channel. The main focus throughout this thesis is to develop a greater understanding of the effects of the UOWC channel, especially underwater turbulence. This understanding is developed from basic theory through to simulation and experimental studies in order to gain a holistic understanding of turbulence in the UOWC channel.
This thesis first presents a method of modelling optical underwater turbulence through simulation, which allows turbulence to be examined in conjunction with absorption and scattering. In a stationary channel, this turbulence-induced scattering is shown to cause an increase in both spatial and temporal spreading at the receiver plane. It is also demonstrated, using the presented technique, that the relative impact of turbulence on a received signal is lower in a highly scattering channel, showing an in-built resilience of these channels. Received intensity distributions are presented, confirming that fluctuations in received power from this method follow the commonly used log-normal fading model. The impact of turbulence, as measured using this new modelling framework, on link performance, in terms of maximum achievable data rate and bit error rate, is also investigated.
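As a hedged illustration of the fading model referred to above (the general principle, not the thesis's actual simulation framework), the sketch below draws log-normal intensity fluctuations with a given scintillation index and estimates the bit error rate of a simple OOK link; the scintillation index, SNR definition, and link parameters are assumed values.

import numpy as np
from math import erfc

# Minimal Monte Carlo sketch of log-normal turbulence fading for an
# intensity-modulated link (illustrative; parameters are assumptions).
rng = np.random.default_rng(1)

scint_index = 0.1                      # assumed scintillation index sigma_I^2
sigma2 = np.log(1.0 + scint_index)     # log-irradiance variance
mu = -sigma2 / 2.0                     # keeps the mean irradiance E[I] = 1
n_trials = 200_000

# Log-normal fading samples with unit mean irradiance.
I = rng.lognormal(mean=mu, sigma=np.sqrt(sigma2), size=n_trials)
print("Empirical scintillation index:", I.var() / I.mean() ** 2)

# Conditional OOK BER approximated as Q(I * sqrt(SNR)), averaged over the
# fading; snr_0 is an assumed still-water electrical SNR of 15 dB.
snr_0 = 10 ** (15 / 10)
ber = np.mean([0.5 * erfc(i * np.sqrt(snr_0) / np.sqrt(2)) for i in I[:20_000]])
print("Estimated average BER under turbulence:", ber)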
Following that, experimental studies are presented comparing the relative impact of turbulence-induced scattering on coherent and non-coherent light propagating through water, and the relative impact of turbulence in different water conditions. It is shown that the scintillation index increases with increasing temperature inhomogeneity in the underwater channel. These results indicate that a light beam from a non-coherent source has a greater resilience to temperature-inhomogeneity-induced turbulence effects in an underwater channel. These results will help researchers to simulate realistic channel conditions when modelling a light emitting diode (LED) based intensity modulation with direct detection (IM/DD) UOWC link.
Finally, a comparison of different modulation schemes in still and turbulent water conditions is presented. Using an underwater channel emulator, it is shown that pulse position modulation (PPM) and subcarrier intensity modulation (SIM) have an inherent resilience to turbulence-induced fading, with SIM achieving higher data rates under all conditions. The signal processing technique termed pair-wise coding (PWC) is applied to SIM in underwater optical wireless communications for the first time. The performance of PWC is compared with the state-of-the-art bit- and power-loading optimisation algorithm. Using PWC, a maximum data rate of 5.2 Gbps is achieved in still water conditions.
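For readers unfamiliar with the modulation formats compared above, here is a minimal, self-contained sketch of L-PPM symbol mapping (the standard textbook definition, not code from the thesis): each group of log2(L) bits is carried by a single pulse placed in one of L time slots.

import numpy as np

def ppm_encode(bits, L=4):
    """Map a bit stream to L-PPM frames: one pulse in one of L slots."""
    k = int(np.log2(L))                      # bits per symbol
    bits = np.asarray(bits, dtype=int)
    assert bits.size % k == 0, "bit stream length must be a multiple of log2(L)"
    symbols = bits.reshape(-1, k)
    # Interpret each k-bit group as the index of the pulsed slot.
    slot_idx = symbols.dot(1 << np.arange(k - 1, -1, -1))
    frames = np.zeros((symbols.shape[0], L), dtype=int)
    frames[np.arange(symbols.shape[0]), slot_idx] = 1
    return frames.ravel()                    # concatenated slot sequence

# Example: 8 bits -> four 4-PPM symbols -> 16 slots containing exactly 4 pulses.
print(ppm_encode([0, 1, 1, 0, 0, 0, 1, 1], L=4))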
Cost-effective non-destructive testing of biomedical components fabricated using additive manufacturing
Biocompatible titanium-alloys can be used to fabricate patient-specific medical components using additive manufacturing (AM). These novel components have the potential to improve clinical outcomes in various medical scenarios. However, AM introduces stability and repeatability concerns, which are potential roadblocks for its widespread use in the medical sector. Micro-CT imaging for non-destructive testing (NDT) is an effective solution for post-manufacturing quality control of these components. Unfortunately, current micro-CT NDT scanners require expensive infrastructure and hardware, which translates into prohibitively expensive routine NDT. Furthermore, the limited dynamic-range of these scanners can cause severe image artifacts that may compromise the diagnostic value of the non-destructive test. Finally, the cone-beam geometry of these scanners makes them susceptible to the adverse effects of scattered radiation, which is another source of artifacts in micro-CT imaging.
In this work, we describe the design, fabrication, and implementation of a dedicated, cost-effective micro-CT scanner for NDT of AM-fabricated biomedical components. Our scanner reduces the limitations of costly image-based NDT by optimizing the scanner's geometry and the image acquisition hardware (i.e., X-ray source and detector). Additionally, we describe two novel techniques to reduce image artifacts caused by photon-starvation and scattered radiation in cone-beam micro-CT imaging.
Our cost-effective scanner was designed to match the image requirements of medium-size titanium-alloy medical components. We optimized the image acquisition hardware by using an 80 kVp low-cost portable X-ray unit and developing a low-cost lens-coupled X-ray detector. Image artifacts caused by photon-starvation were reduced by implementing dual-exposure high-dynamic-range radiography. For scatter mitigation, we describe the design, manufacturing, and testing of a large-area, highly-focused, two-dimensional, anti-scatter grid.
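As a hedged illustration of dual-exposure high-dynamic-range radiography (the general principle, not the scanner's actual processing chain), the sketch below combines a short and a long exposure into one image, replacing saturated pixels in the long exposure with scaled values from the short one; the saturation threshold, exposure ratio, and synthetic images are assumptions.

import numpy as np

def combine_dual_exposure(short_img, long_img, exposure_ratio, sat_level=0.98):
    """Merge two radiographs into one high-dynamic-range image.

    Pixels saturated in the long exposure are replaced by the short-exposure
    values scaled by the exposure ratio (illustrative thresholding)."""
    short_img = np.asarray(short_img, dtype=float)
    long_img = np.asarray(long_img, dtype=float)
    saturated = long_img >= sat_level * long_img.max()
    hdr = long_img.copy()
    hdr[saturated] = short_img[saturated] * exposure_ratio
    return hdr

# Example with synthetic 2x2 images; one pixel of the long exposure clips at 1.0.
short = np.array([[0.10, 0.20], [0.30, 0.05]])
long_ = np.array([[0.40, 0.80], [1.00, 0.20]])
print(combine_dual_exposure(short, long_, exposure_ratio=4.0))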
Our results demonstrate that cost-effective NDT using low-cost equipment is feasible for medium-sized, titanium-alloy, AM-fabricated medical components. Our proposed high-dynamic-range strategy improved the penetration capabilities of an 80 kVp micro-CT imaging system by 37% for a total X-ray path length of 19.8 mm. Finally, our novel anti-scatter grid provided a 65% improvement in CT number accuracy and a 48% improvement in low-contrast visualization. Our proposed cost-effective scanner and artifact reduction strategies have the potential to improve patient care by accelerating the widespread use of patient-specific, bio-compatible, AM-manufactured medical components.
Interference mitigation in LiFi networks
Due to the increasing demand for wireless data, the radio frequency (RF) spectrum has
become a very limited resource. Alternative approaches are under investigation to support
the future growth in data traffic and next-generation high-speed wireless communication
systems. Techniques such as massive multiple-input multiple-output (MIMO), millimeter
wave (mmWave) communications and light-fidelity (LiFi) are being explored. Among
these technologies, LiFi is a novel bi-directional, high-speed and fully networked wireless
communication technology. However, inter-cell interference (ICI) can significantly restrict the
system performance of LiFi attocell networks. This thesis focuses on interference mitigation
in LiFi attocell networks.
The angle diversity receiver (ADR) is one solution to address the issue of ICI as well as
frequency reuse in LiFi attocell networks. With the property of high concentration gain and
narrow field of view (FOV), the ADR is very beneficial for interference mitigation. However,
the optimum structure of the ADR has not been investigated. This motivates us to propose the
optimum structures for the ADRs in order to fully exploit the performance gain. The impacts
of random device orientation and diffuse link signal propagation are taken into consideration.
The performance comparison between the select best combining (SBC) and maximum ratio
combining (MRC) is carried out under different noise levels. In addition, the double source
(DS) system, where each LiFi access point (AP) consists of two sources transmitting the same
information signals but with opposite polarity, is proven to outperform the single source (SS)
system under certain conditions.
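To make the combining schemes being compared concrete, here is a minimal numerical sketch (not taken from the thesis) of how select best combining and maximum ratio combining aggregate the SNR across the photodetectors of an angle diversity receiver; the per-branch SNR values are assumed, and the MRC expression assumes independent noise across branches.

import numpy as np

def sbc_snr(branch_snrs):
    """Select best combining: keep only the branch with the highest SNR."""
    return np.max(branch_snrs)

def mrc_snr(branch_snrs):
    """Maximum ratio combining: in the ideal case, branch SNRs add up."""
    return np.sum(branch_snrs)

# Assumed per-photodiode electrical SNRs (linear scale) for a 4-branch ADR.
branch_snrs = np.array([12.0, 3.5, 0.8, 0.2])

print("SBC output SNR:", sbc_snr(branch_snrs))   # 12.0
print("MRC output SNR:", mrc_snr(branch_snrs))   # 16.5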
Then, to overcome issues around ICI, random device orientation and link blockage, hybrid
LiFi/WiFi networks (HLWNs) are considered. In this thesis, dynamic load balancing (LB)
considering handover in HLWNs is studied. The orientation-based random waypoint (ORWP)
mobility model is considered to provide a more realistic framework to evaluate the performance
of HLWNs. Based on the low-pass filtering effect of the LiFi channel, we first propose
an orthogonal frequency division multiple access (OFDMA)-based resource allocation (RA)
method in LiFi systems. Also, an enhanced evolutionary game theory (EGT)-based LB scheme
with handover in HLWNs is proposed.
Finally, owing to its high directivity and narrow beams, a vertical-cavity
surface-emitting laser (VCSEL) array transmission system is proposed to mitigate
ICI. In order to support mobile users, two beam activation methods are proposed. The
beam activation based on the corner-cube retroreflector (CCR) can achieve low power
consumption and almost-zero delay, allowing real-time beam activation for high-speed users.
The mechanism based on the omnidirectional transmitter (ODTx) is suitable for low-speed
users and is very robust to random device orientation.
Development of in-vitro in-silico technologies for modelling and analysis of haematological malignancies
Worldwide, haematological malignancies are responsible for roughly 6% of all the cancer-related deaths. Leukaemias are one of the most severe types of cancer, as only about 40% of the patients have an overall survival of 10 years or more. Myelodysplastic Syndrome (MDS), a pre-leukaemic condition, is a blood disorder characterized by the presence of dysplastic, irregular, immature cells, or blasts, in the peripheral blood (PB) and in the bone marrow (BM), as well as multi-lineage cytopenias.
We have created a detailed, lineage-specific, high-fidelity in-silico erythroid model that incorporates known biological stimuli (cytokines and hormones) and a competing diseased haematopoietic population, correctly capturing crucial biological checkpoints (EPO-dependent CFU-E differentiation) and replicating the in-vivo erythroid differentiation dynamics. In parallel, we have also proposed a long-term, cytokine-free 3D cell culture system for primary MDS cells, which was first optimized using easily accessible healthy controls. This system enabled long-term (24-day) maintenance in culture with high (>75%) cell viability, promoting spontaneous expansion of erythroid phenotypes (CD71+/CD235a+) without the addition of any exogenous cytokines. Lastly, we have proposed a novel in-vitro in-silico framework using GC-MS metabolomics for the metabolic profiling of BM and PB plasma, aiming not only to discriminate between haematological conditions but also to sub-classify MDS patients, potentially based on candidate biomarkers. Unsupervised multivariate statistical analysis showed clear intra- and inter-disease separation of samples from 5 distinct haematological malignancies, demonstrating the potential of this approach for disease characterization.
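As a minimal illustration of the unsupervised multivariate analysis mentioned above (the abstract does not specify the exact method; principal component analysis is a common first step in GC-MS metabolomics), the sketch below projects samples onto their first two principal components to inspect disease-group separation; the data shapes, group labels, and random values are assumptions.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Illustrative data: 60 plasma samples x 150 GC-MS metabolite features,
# belonging to several disease groups (labels used only for inspection).
rng = np.random.default_rng(7)
X = rng.normal(size=(60, 150))
groups = rng.integers(0, 5, size=60)

# Standardize each metabolite, then project onto two principal components.
X_scaled = StandardScaler().fit_transform(X)
scores = PCA(n_components=2).fit_transform(X_scaled)

for g in np.unique(groups):
    centroid = scores[groups == g].mean(axis=0)
    print(f"Group {g} centroid in PC space: {centroid.round(2)}")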
The work presented herein paves the way for the development of in-vitro in-silico technologies to better characterize, diagnose, model, and target haematological malignancies such as MDS and AML.
Digital asset management via distributed ledgers
Distributed ledgers rose to prominence with the advent of Bitcoin, the first provably secure protocol to solve consensus in an open-participation setting. Subsequently, active research and engineering efforts have proposed a multitude of applications and alternative designs, the most prominent being Proof-of-Stake (PoS). This thesis expands the scope of secure and efficient asset management over a distributed ledger around three axes: i) cryptography; ii) distributed systems; iii) game theory and economics. First, we analyze the security of various wallets. We start with a formal model of hardware wallets, followed by an analytical framework of PoS wallets, each outlining the unique properties of Proof-of-Work (PoW) and PoS respectively. The latter also provides a rigorous design to form collaborative participating entities, called stake pools. We then propose Conclave, a stake pool design which enables a group of parties to participate in a PoS system in a collaborative manner, without a central operator. Second, we focus on efficiency. Decentralized systems are aimed at thousands of users across the globe, so a rigorous design for minimizing memory and storage consumption is a prerequisite for scalability. To that end, we frame ledger maintenance as an optimization problem and design a multi-tier framework for designing wallets that ensure that updates increase the ledger's global state only to a minimal extent, while preserving the security guarantees outlined in the security analysis. Third, we explore incentive-compatibility and analyze blockchain systems from a micro- and a macroeconomic perspective. We enrich our cryptographic and systems results by analyzing the incentives of collective pools and designing a state-efficient Bitcoin fee function. We then analyze the Nash dynamics of distributed ledgers, introducing a formal model that evaluates whether rational, utility-maximizing participants are disincentivized from exhibiting undesirable infractions, and highlighting the differences between PoW- and PoS-based ledgers, both in a standalone setting and under external parameters, like market price fluctuations. We conclude by introducing a macroeconomic principle, cryptocurrency egalitarianism, and then describing two mechanisms for enabling taxation in blockchain-based currency systems.
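To make the Proof-of-Stake setting concrete, here is a minimal, generic sketch (not any protocol or design from the thesis) of stake-weighted slot leader selection, where a party's probability of being elected is proportional to its stake; the party names, stake values, and randomness seed are assumptions.

import hashlib
import bisect

# Assumed stake distribution (fractions of total stake).
stakes = {"alice": 0.50, "bob": 0.30, "carol": 0.20}

def slot_leader(slot, stakes, seed=b"epoch-randomness"):
    """Pick a slot leader with probability proportional to stake.

    A hash of (seed, slot) acts as a deterministic pseudo-random draw in
    [0, 1), which is mapped onto the cumulative stake distribution."""
    digest = hashlib.sha256(seed + slot.to_bytes(8, "big")).digest()
    draw = int.from_bytes(digest, "big") / 2 ** 256
    parties, cumulative = [], []
    total = 0.0
    for party, stake in sorted(stakes.items()):
        total += stake
        parties.append(party)
        cumulative.append(total)
    return parties[bisect.bisect_left(cumulative, draw * total)]

# Example: leaders for the first few slots.
print([slot_leader(s, stakes) for s in range(5)])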
Full stack development toward a trapped ion logical qubit
Quantum error correction is a key step toward the construction of a large-scale quantum computer, by preventing small infidelities in quantum gates from accumulating over the course of an algorithm. Detecting and correcting errors is achieved by using multiple physical qubits to form a smaller number of robust logical
qubits. The physical implementation of a logical qubit requires multiple physical qubits, on which high-fidelity gates
can be performed.
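As a hedged, textbook-level illustration of the idea that several physical qubits encode one logical qubit (not the specific code targeted by this project), the sketch below simulates the classical bit-flip aspect of a 3-qubit repetition code and corrects single errors by majority vote; the error probability and trial count are assumptions.

import random

def encode(logical_bit):
    """Encode one logical bit into three physical bits (repetition code)."""
    return [logical_bit] * 3

def apply_bit_flips(physical, p):
    """Flip each physical bit independently with probability p."""
    return [b ^ (random.random() < p) for b in physical]

def decode(physical):
    """Majority vote corrects any single bit-flip error."""
    return int(sum(physical) >= 2)

# Monte Carlo estimate of the logical error rate vs the physical error rate.
random.seed(0)
p_physical = 0.05
trials = 100_000
errors = sum(decode(apply_bit_flips(encode(0), p_physical)) != 0
             for _ in range(trials))
print(f"physical error rate: {p_physical}, "
      f"logical error rate: {errors / trials:.4f}")   # roughly 3p^2 - 2p^3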
The project aims to realize a logical qubit based on ions confined on a microfabricated surface trap. Each
physical qubit will be a microwave dressed state qubit based on 171Yb+ ions. Gates are intended to be realized through RF and microwave radiation in combination with magnetic field gradients. The project vertically integrates software down to hardware compilation layers in order to deliver, in the near future, a fully functional small device demonstrator.
This thesis presents novel results on multiple layers of a full stack quantum computer model. On the hardware level a robust quantum gate is studied and ion displacement over the X-junction geometry is demonstrated.
The experimental organization is optimized through automation and compressed waveform data transmission. A new quantum assembly language purely dedicated to trapped-ion quantum computers is introduced. The demonstrator is aimed at testing the implementation of quantum error correction codes while preparing for larger-scale
iterations.