
    Harnessing the Potential of Optical Communications for the Metaverse

    The Metaverse is a digital world that offers an immersive virtual experience. However, Metaverse applications are bandwidth-hungry and delay-sensitive, requiring ultra-high data rates, ultra-low latency, and hyper-intensive computation. To meet these requirements, optical communication emerges as a key pillar in bringing this paradigm into reality. In this paper, we highlight the potential of optical communications in the Metaverse. First, we set forth Metaverse requirements in terms of capacity and latency and introduce the ultra-high data rate requirements of various Metaverse experiences. We then put forward the potential of optical communications to meet these data rate requirements in the backbone, backhaul, fronthaul, and access segments. Both optical fiber and optical wireless communication (OWC) technologies, along with their current and expected future data rates, are detailed. In addition, we propose a comprehensive set of configurations, connectivity, and equipment necessary for an immersive Metaverse experience. Finally, we identify a set of key enablers and research directions such as analog neuromorphic optical computing, optical intelligent reflective surfaces (IRS), hollow-core fiber (HCF), and terahertz (THz).
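As a rough illustration of why Metaverse-grade experiences demand such data rates, consider the raw throughput of a single uncompressed stereoscopic VR stream. All numbers below are illustrative assumptions of this sketch, not figures taken from the paper:

```python
# Back-of-envelope throughput for an uncompressed stereoscopic VR stream.
# Resolution, frame rate, and compression ratio are illustrative assumptions.
width, height = 3840, 2160   # per-eye resolution (4K)
eyes = 2
fps = 90                     # refresh rate typical of VR headsets
bits_per_pixel = 24          # 8-bit RGB

raw_bps = width * height * eyes * fps * bits_per_pixel
print(f"{raw_bps / 1e9:.1f} Gbit/s uncompressed")   # ~35.8 Gbit/s

# Even assuming ~100:1 video compression, a single stream stays in the
# hundreds of Mbit/s, so tens of concurrent users per cell quickly push
# aggregate demand into the multi-Gbit/s range that motivates optical links.
compressed_bps = raw_bps / 100
print(f"{compressed_bps / 1e6:.0f} Mbit/s compressed")
```

A single headset at these assumed settings already exceeds what most deployed wireless access technologies deliver per user, which is the capacity argument the paper develops segment by segment.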

    Terahertz dynamic aperture imaging at stand-off distances using a Compressed Sensing protocol

    In this text, results of a 0.35 terahertz (THz) dynamic-aperture imaging approach are presented. The experiments use an optical modulation scheme and a single-pixel detector at a stand-off imaging distance of approximately 1 m. The optical modulation creates dynamic apertures of 5 cm diameter with approximately 2000 individually controllable elements. An optical modulation approach is used here for the first time at a large far-field distance to investigate various test targets within a field of view of 8 x 8 cm. The results highlight the versatility of this modulation technique and show that this imaging paradigm is applicable even at large far-field distances, demonstrating the feasibility of the approach for potential applications such as stand-off security imaging and far-field THz microscopy.
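The single-pixel principle behind this setup can be sketched in a few lines: the modulator displays a sequence of aperture patterns, the detector records one scalar per pattern, and a sparse scene is reconstructed from fewer measurements than pixels. The toy scene, the ±1 patterns, and the generic ISTA solver below are stand-ins of this sketch, not the paper's actual modulation protocol or reconstruction algorithm:

```python
import numpy as np

# Toy single-pixel compressed-sensing reconstruction (illustrative only).
rng = np.random.default_rng(0)

n = 64   # scene pixels (tiny toy scene; the paper's apertures have ~2000 elements)
m = 32   # number of modulation patterns (measurements), m < n

# Sparse toy scene: a few bright pixels on a dark background
x_true = np.zeros(n)
x_true[[5, 20, 40]] = [1.0, 0.7, 0.5]

# Random +/-1 aperture patterns (realizable optically via differential
# measurements), scaled so columns have roughly unit norm
A = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)
y = A @ x_true   # one scalar detector reading per displayed pattern

# ISTA (iterative soft-thresholding) for min 0.5*||Ax - y||^2 + lam*||x||_1
L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the quadratic term
lam = 0.05
x = np.zeros(n)
for _ in range(500):
    z = x - A.T @ (A @ x - y) / L              # gradient step on the data term
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold

support = sorted(np.argsort(np.abs(x))[-3:])
print(support)   # indices of the brightest recovered pixels
```

With half as many measurements as pixels, the sparse scene is still recovered, which is the economy that makes single-pixel THz imaging attractive when detector arrays are costly.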

    Indexing Techniques for Image and Video Databases: An Approach Based on the Animate Vision Paradigm

    In this dissertation, novel indexing techniques for video and image databases based on the "Animate Vision" paradigm are presented and discussed. On one hand, it is shown how, by embedding active mechanisms of biological vision, such as saccadic eye movements and fixations, within image inspection algorithms, more effective and efficient query processing in image databases can be achieved. In particular, we discuss how to generate two fixation sequences from a query image I_q and a test image I_t of the data set, respectively, and how to compare the two sequences in order to compute a similarity (consistency) measure between the two images. We also show how this approach can be combined with classical clustering techniques to discover and represent hidden semantic associations among images, in terms of categories, which in turn allow an automatic pre-classification (indexing) of the images and can be used to drive and improve query processing. Preliminary results are presented, and the proposed approach is compared with the most recent image retrieval techniques described in the literature. On the other hand, it is shown how, by taking advantage of this foveated representation of an image, a video can be partitioned into shots. More precisely, the shot-change detection method is based on computing, at each time instant, the consistency measure between the fixation sequences generated by an ideal observer looking at the video. The proposed scheme detects both abrupt and gradual transitions between shots using a single technique rather than a set of dedicated methods. Results on videos of various content types are reported and validate the proposed approach.
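The shot-change idea above can be sketched concretely: extract fixation-like points from each frame, measure how consistent consecutive fixation sequences are, and flag a boundary when consistency collapses. The brightest-patch "saliency" surrogate, the matching-based consistency measure, and the threshold below are illustrative stand-ins of this sketch, not the dissertation's actual fixation generator or metric:

```python
import numpy as np

rng = np.random.default_rng(1)

def fixations(frame, k=5, patch=4):
    """Surrogate fixation points: top-k brightest patch cells on a 4-px grid."""
    h, w = frame.shape
    scores = []
    for i in range(0, h - patch, patch):
        for j in range(0, w - patch, patch):
            scores.append((frame[i:i + patch, j:j + patch].sum(), (i, j)))
    scores.sort(reverse=True)
    return np.array([pt for _, pt in scores[:k]], dtype=float)

def consistency(fix_a, fix_b, radius=2.0):
    """Fraction of fixations in A with a nearby fixation in B.
    With radius=2 and a 4-px grid, only same-cell fixations count."""
    d = np.linalg.norm(fix_a[:, None, :] - fix_b[None, :, :], axis=2)
    return np.mean(d.min(axis=1) <= radius)

# Synthetic video: 10 noisy frames of one random scene, then 10 of another
shot1, shot2 = rng.random((32, 32)), rng.random((32, 32))
video = [shot1 + 0.02 * rng.random((32, 32)) for _ in range(10)] + \
        [shot2 + 0.02 * rng.random((32, 32)) for _ in range(10)]

boundaries = []
prev = fixations(video[0])
for t in range(1, len(video)):
    cur = fixations(video[t])
    if consistency(prev, cur) < 0.5:   # consistency collapses at a cut
        boundaries.append(t)
    prev = cur
print(boundaries)
```

Within a shot, the same salient cells keep attracting fixations, so consistency stays near 1; at the cut, the fixation sequences of the two scenes share almost no locations and the measure drops, flagging the boundary with a single mechanism rather than separate cut and fade detectors.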

    Methods and Apparatus for Autonomous Robotic Control

    Sensory processing of visual, auditory, and other sensor information (e.g., visual imagery, LIDAR, RADAR) is conventionally based on "stovepiped," or isolated, processing, with little interaction between modules. Biological systems, on the other hand, fuse multi-sensory information to identify nearby objects of interest more quickly, more efficiently, and with higher signal-to-noise ratios. Similarly, examples of the OpenSense technology disclosed herein use neurally inspired processing to identify and locate objects in a robot's environment, enabling the robot to navigate its environment more quickly and with lower computational and power requirements.