
    The Metaverse: Survey, Trends, Novel Pipeline Ecosystem & Future Directions

    The Metaverse offers a second world beyond reality, where boundaries are non-existent and possibilities are endless, through engagement and immersive experiences enabled by virtual reality (VR) technology. Many disciplines can benefit from the advancement of the Metaverse when it is properly developed, including technology, gaming, education, art, and culture. Nevertheless, developing the Metaverse environment to its full potential is an ill-defined task that needs proper guidance and direction. Existing surveys on the Metaverse focus only on a specific aspect or discipline of the Metaverse and lack a holistic view of the entire process. A more holistic, multi-disciplinary, in-depth, academic and industry-oriented review is therefore required to provide a thorough study of the Metaverse development pipeline. To address these issues, this survey presents a novel multi-layered pipeline ecosystem composed of (1) the Metaverse computing, networking, communications, and hardware infrastructure, (2) environment digitization, and (3) user interactions. For every layer, we discuss the components that detail the steps of its development, and for each component we examine the impact of a set of enabling technologies and empowering domains (e.g., Artificial Intelligence, Security & Privacy, Blockchain, Business, Ethics, and Social) on its advancement. In addition, we explain the importance of these technologies in supporting decentralization, interoperability, user experiences, interactions, and monetization. Our study highlights the existing challenges for each component, followed by research directions and potential solutions. To the best of our knowledge, this survey is the most comprehensive to date, allowing users, scholars, and entrepreneurs to gain an in-depth understanding of the Metaverse ecosystem and to identify opportunities for contribution.

    Technical Dimensions of Programming Systems

    Programming requires much more than just writing code in a programming language. It is usually done in the context of a stateful environment, by interacting with a system through a graphical user interface. Yet this wide space of possibilities lacks a common structure for navigation. Work on programming systems fails to form a coherent body of research, making it hard to improve on past work and advance the state of the art. In computer science, much has been said and done to allow comparison of programming languages, yet no similar theory exists for programming systems; we believe that programming systems deserve a theory too. We present a framework of technical dimensions which capture the underlying characteristics of programming systems and provide a means for conceptualizing and comparing them. We identify technical dimensions by examining past influential programming systems and reviewing their design principles, technical capabilities, and styles of user interaction. Technical dimensions capture characteristics that may be studied, compared, and advanced independently. This makes it possible to talk about programming systems in a way that can be shared and constructively debated rather than relying solely on personal impressions. Our framework is derived from a qualitative analysis of past programming systems. We outline two concrete ways of using our framework: first, we show how it can be used to analyze a recently developed novel programming system; then, we use it to identify an interesting unexplored point in the design space of programming systems. Much research effort focuses on building programming systems that are easier to use, accessible to non-experts, moldable, and/or powerful, but such efforts are disconnected. They are informal, guided by the personal vision of their authors, and thus are only evaluable and comparable on the basis of individual experience using them. By providing foundations for more systematic research, we can help programming systems researchers to stand, at last, on the shoulders of giants.

    TOI-969: a late-K dwarf with a hot mini-Neptune in the desert and an eccentric cold Jupiter

    Context. The current architecture of a given multi-planetary system is a key fingerprint of its past formation and dynamical evolution history, and long-term follow-up observations are key to completing this picture. Aims. In this paper, we focus on the confirmation and characterization of the components of the TOI-969 planetary system, where TESS detected a Neptune-size planet candidate in a very close-in orbit around a late K-dwarf star. Methods. We use a set of precise radial velocity observations from the HARPS, PFS, and CORALIE instruments covering more than two years, in combination with the TESS photometric light curve and other ground-based follow-up observations, to confirm and characterize the components of this planetary system. Results. We find that TOI-969 b is a transiting close-in (Pb ∼ 1.82 days) mini-Neptune planet (Formula Presented), placing it on the lower boundary of the hot-Neptune desert (Teq,b = 941 ± 31 K). The analysis of its internal structure shows that TOI-969 b is a volatile-rich planet, suggesting it underwent inward migration. The radial velocity model also favors the presence of a second massive body in the system, TOI-969 c, with a long period of (Formula Presented) days, a minimum mass of (Formula Presented), and a highly eccentric orbit of (Formula Presented). Conclusions. The TOI-969 planetary system is one of the few around K-dwarfs known to have this extended configuration, ranging from a very close-in planet to a wide-separation gaseous giant. TOI-969 b has a transmission spectroscopy metric of 93 and orbits a moderately bright (G = 11.3 mag) star, making it an excellent target for atmospheric studies. The architecture of this planetary system can also provide valuable information about the migration and formation of planetary systems.
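
    The transmission spectroscopy metric quoted above is, to our understanding, the one defined by Kempton et al. (2018). A minimal sketch of that computation follows; the input values are illustrative placeholders, not the measured TOI-969 b parameters (which are elided in this abstract).

    # Sketch of the transmission spectroscopy metric (TSM), Kempton et al. (2018):
    # TSM = scale * Rp^3 * Teq / (Mp * Rs^2) * 10^(-mJ/5)
    def tsm(r_p, m_p, t_eq, r_star, j_mag, scale=1.28):
        """r_p: planet radius [Earth radii]; m_p: planet mass [Earth masses];
        t_eq: equilibrium temperature [K]; r_star: stellar radius [Solar radii];
        j_mag: apparent J-band magnitude; scale: radius-bin normalization
        (1.28 for planets with 2.75 < Rp < 4 Earth radii)."""
        return scale * r_p**3 * t_eq / (m_p * r_star**2) * 10 ** (-j_mag / 5.0)

    # Hypothetical mini-Neptune parameters, for illustration only:
    print(round(tsm(r_p=2.8, m_p=9.0, t_eq=941, r_star=0.7, j_mag=9.8), 1))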

    Joint Video Multi-Frame Interpolation and Deblurring under Unknown Exposure Time

    Natural videos captured by consumer cameras often suffer from low frame rate and motion blur due to the combination of dynamic scene complexity, lens and sensor imperfections, and less-than-ideal exposure settings. As a result, computational methods that jointly perform video frame interpolation and deblurring have begun to emerge, but under the unrealistic assumption that the exposure time is known and fixed. In this work, we aim for a more realistic and challenging task: joint video multi-frame interpolation and deblurring under unknown exposure time. Toward this goal, we first adopt a variant of supervised contrastive learning to construct an exposure-aware representation from the input blurred frames. We then train two U-Nets for intra-motion and inter-motion analysis, respectively, adapting them to the learned exposure representation via gain tuning. We finally build our video reconstruction network upon the exposure and motion representations using progressive exposure-adaptive convolution and motion refinement. Extensive experiments on both simulated and real-world datasets show that our method achieves notable performance gains over the state of the art on the joint video x8 interpolation and deblurring task. Moreover, on the seemingly implausible x16 interpolation task, our method outperforms existing methods by more than 1.5 dB in terms of PSNR.
    Comment: Accepted by CVPR 2023; code available at https://github.com/shangwei5/VIDU
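
    The gain tuning mentioned above conditions intermediate network features on the learned exposure representation. A minimal sketch of one plausible form (FiLM-style feature scaling) is shown below; the module name and dimensions are illustrative assumptions, not the paper's exact design.

    import torch
    import torch.nn as nn

    class GainTuning(nn.Module):
        # Scale U-Net feature maps by gains predicted from an exposure code.
        def __init__(self, feat_ch: int, exp_dim: int):
            super().__init__()
            self.to_gain = nn.Linear(exp_dim, feat_ch)

        def forward(self, feat: torch.Tensor, exp_repr: torch.Tensor) -> torch.Tensor:
            # feat: (B, C, H, W) features; exp_repr: (B, D) exposure representation
            gain = self.to_gain(exp_repr).unsqueeze(-1).unsqueeze(-1)  # (B, C, 1, 1)
            return feat * (1.0 + gain)  # residual scaling: identity when gain is 0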

    3d mirror symmetry of braided tensor categories

    We study the braided tensor structure of line operators in the topological A and B twists of abelian 3d N=4 gauge theories, as accessed via boundary vertex operator algebras (VOAs). We focus exclusively on abelian theories. We first find a non-perturbative completion of boundary VOAs in the B twist, which start out as certain affine Lie superalgebras; and we construct free-field realizations of both A-twist and B-twist VOAs, finding an interesting interplay with the symmetry fractionalization group of bulk theories. We use the free-field realizations to establish an isomorphism between A and B VOAs related by 3d mirror symmetry. Turning to line operators, we extend previous physical classifications of line operators to include new monodromy defects and bound states. We also outline a mechanism by which continuous global symmetries in a physical theory are promoted to higher symmetries in a topological twist -- in our case, these are infinite one-form symmetries, related to boundary spectral flow, which structure the categories of lines and control abelian gauging. Finally, we establish the existence of braided tensor structure on categories of line operators, viewed as non-semisimple categories of modules for boundary VOAs. In the A twist, we obtain the categories by extending modules of symplectic boson VOAs, corresponding to gauging free hypermultiplets; in the B twist, we instead extend Kazhdan-Lusztig categories for affine Lie superalgebras. We prove braided tensor equivalences among the categories of 3d-mirror theories. All results on VOAs and their module categories are mathematically rigorous; they rely strongly on recently developed techniques for accessing non-semisimple extensions.
    Comment: 158 pages, comments welcome

    Learning disentangled speech representations

    A variety of informational factors are contained within the speech signal, and a single short recording of speech reveals much more than the spoken words. The best method to extract and represent informational factors from the speech signal ultimately depends on which informational factors are desired and how they will be used. In addition, some methods capture more than one informational factor at the same time, such as speaker identity, spoken content, and speaker prosody. The goal of this dissertation is to explore different ways to deconstruct the speech signal into abstract representations that can be learned and later reused in various speech technology tasks. This task of deconstructing, also known as disentanglement, is a form of distributed representation learning. As a general approach to disentanglement, there are some guiding principles that elaborate what a learned representation should contain as well as how it should function. In particular, learned representations should contain all of the requisite information in a more compact manner, be interpretable, remove nuisance factors of irrelevant information, be useful in downstream tasks, and be independent of the task at hand. The learned representations should also be able to answer counterfactual questions. In some cases, learned speech representations can be re-assembled in different ways according to the requirements of downstream applications, as in the sketch after this paragraph. For example, in a voice conversion task, the speech content is retained while the speaker identity is changed; and in a content-privacy task, some targeted content may be concealed without affecting how surrounding words sound. While there is no single best method to disentangle all types of factors, some end-to-end approaches demonstrate a promising degree of generalization to diverse speech tasks. This thesis explores a variety of use-cases for disentangled representations, including phone recognition, speaker diarization, linguistic code-switching, voice conversion, and content-based privacy masking. Speech representations can also be utilised for automatically assessing the quality and authenticity of speech, such as automatic MOS ratings or detecting deep fakes. The meaning of the term "disentanglement" is not well defined in previous work, and it has acquired several meanings depending on the domain (e.g. image vs. speech); sometimes it is used interchangeably with the term "factorization". This thesis proposes that disentanglement of speech is distinct, and offers a viewpoint of disentanglement that can be considered both theoretically and practically.
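
    As a concrete illustration of re-assembling disentangled representations, here is a minimal voice-conversion sketch: content comes from the source utterance and speaker identity from a reference utterance. The encoder and decoder architectures are placeholder assumptions, not the models used in the thesis.

    import torch
    import torch.nn as nn

    class VoiceConversionSketch(nn.Module):
        # Decode source content features combined with a reference speaker embedding.
        def __init__(self, n_mels=80, content_dim=64, speaker_dim=32):
            super().__init__()
            self.content_enc = nn.GRU(n_mels, content_dim, batch_first=True)
            self.speaker_enc = nn.GRU(n_mels, speaker_dim, batch_first=True)
            self.decoder = nn.GRU(content_dim + speaker_dim, n_mels, batch_first=True)

        def forward(self, src_mel, ref_mel):
            content, _ = self.content_enc(src_mel)   # (B, T, content_dim): what is said
            _, spk = self.speaker_enc(ref_mel)       # final hidden state: who says it
            spk = spk[-1].unsqueeze(1).expand(-1, content.size(1), -1)
            out, _ = self.decoder(torch.cat([content, spk], dim=-1))
            return out                               # converted mel-spectrogram frames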

    Application of advanced fluorescence microscopy and spectroscopy in live-cell imaging

    Since its inception, fluorescence microscopy has been a key source of discoveries in cell biology. Advancements in fluorophores, labeling techniques, and instrumentation have made fluorescence microscopy a versatile quantitative tool for studying dynamic processes and interactions both in vitro and in live cells. In this thesis, I apply quantitative fluorescence microscopy techniques in live-cell environments to investigate several biological processes. To study Gag processing in HIV-1 particles, fluorescence lifetime imaging microscopy and single-particle tracking are combined to follow nascent HIV-1 virus particles during assembly and release on the plasma membrane of living cells. Proteolytic release of eCFP embedded in the Gag lattice of immature HIV-1 virus particles results in a characteristic increase in its fluorescence lifetime, so Gag processing and rearrangement can be detected in individual virus particles using this approach. In another project, a robust method for quantifying Förster resonance energy transfer (FRET) in live cells is developed to allow direct comparison of live-cell FRET experiments between laboratories. Finally, I apply image fluctuation spectroscopy to study protein behavior in a variety of cellular environments. Image cross-correlation spectroscopy is used to study the oligomerization of CXCR4, a G-protein coupled receptor on the plasma membrane. With raster image correlation spectroscopy, I measure the diffusion of histones in the nucleoplasm and heterochromatin domains of the nuclei of early mouse embryos. The lower diffusion coefficient of histones in the heterochromatin domain supports the conclusion that heterochromatin forms a liquid phase-separated domain. The wide range of topics covered in this thesis demonstrates that fluorescence microscopy is more than just an imaging tool: it is also a powerful instrument for the quantification and elucidation of dynamic cellular processes.
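
    The lifetime-based readout described above rests on a standard relation: FRET shortens the donor lifetime, so efficiency can be estimated from donor lifetimes measured with and without the acceptor. A minimal sketch follows; the numbers are illustrative placeholders, not measurements from this thesis.

    def fret_efficiency(tau_da: float, tau_d: float) -> float:
        # Standard lifetime relation: E = 1 - tau_DA / tau_D, where
        # tau_DA is the donor lifetime with the acceptor present and
        # tau_D is the donor-only lifetime (both in ns).
        return 1.0 - tau_da / tau_d

    # Illustrative placeholder lifetimes:
    print(fret_efficiency(tau_da=1.8, tau_d=2.5))  # -> 0.28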

    Deciphering Regulation in Escherichia coli: From Genes to Genomes

    Advances in DNA sequencing have revolutionized our ability to read genomes. However, even in the most well-studied of organisms, the bacterium Escherichia coli, we remain ignorant of the regulation of ≈ 65% of promoters. Until we crack this regulatory Rosetta Stone, efforts to read and write genomes will remain haphazard. We introduce a new method, Reg-Seq, that links massively parallel reporter assays with mass spectrometry to produce a base-pair-resolution dissection of more than 100 E. coli promoters in 12 growth conditions. We demonstrate that the method recapitulates known regulatory information. Then, we examine regulatory architectures for more than 80 promoters that previously had no known regulatory information. In many cases, we also identify which transcription factors mediate their regulation. This method clears a path for highly multiplexed investigations of the regulatory genome of model organisms, with the potential of moving to an array of microbes of ecological and medical relevance.
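
    Base-pair-resolution dissections from massively parallel reporter assays are typically summarized as an information footprint: the mutual information between the base identity at each promoter position and the measured expression level. A simplified sketch of that computation is below; Reg-Seq's actual estimator and finite-sampling corrections may differ.

    import numpy as np

    def information_footprint(seqs, expr_bins, n_bases=4):
        # seqs: (N, L) integer array of bases (0-3) across N promoter variants;
        # expr_bins: (N,) integer array giving each variant's expression bin.
        # Returns per-position mutual information (bits) between base and expression.
        n, length = seqs.shape
        n_bins = int(expr_bins.max()) + 1
        mi = np.zeros(length)
        for pos in range(length):
            joint = np.zeros((n_bases, n_bins))
            for base, expr in zip(seqs[:, pos], expr_bins):
                joint[base, expr] += 1.0
            joint /= n                                   # joint distribution p(b, e)
            indep = joint.sum(1, keepdims=True) @ joint.sum(0, keepdims=True)
            nz = joint > 0
            mi[pos] = np.sum(joint[nz] * np.log2(joint[nz] / indep[nz]))
        return mi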