6,899 research outputs found

    The Metaverse: Survey, Trends, Novel Pipeline Ecosystem & Future Directions

    Full text link
    The Metaverse offers a second world beyond reality, where boundaries are non-existent and possibilities are endless through engagement and immersive experiences using virtual reality (VR) technology. Many disciplines can benefit from the advancement of the Metaverse when it is properly developed, including technology, gaming, education, art, and culture. Nevertheless, developing the Metaverse environment to its full potential is a complex task that needs proper guidance and directions. Existing surveys on the Metaverse focus only on a specific aspect or discipline of the Metaverse and lack a holistic view of the entire process. To this end, a more holistic, multi-disciplinary, in-depth review oriented to both academia and industry is required to provide a thorough study of the Metaverse development pipeline. To address these issues, we present in this survey a novel multi-layered pipeline ecosystem composed of (1) the Metaverse computing, networking, communications and hardware infrastructure, (2) environment digitization, and (3) user interactions. For every layer, we discuss the components that detail the steps of its development. Also, for each of these components, we examine the impact of a set of enabling technologies and empowering domains (e.g., Artificial Intelligence, Security & Privacy, Blockchain, Business, Ethics, and Social considerations) on its advancement. In addition, we explain the importance of these technologies in supporting decentralization, interoperability, user experiences, interactions, and monetization. Our study highlights the existing challenges for each component, followed by research directions and potential solutions. To the best of our knowledge, this survey is the most comprehensive of its kind and allows users, scholars, and entrepreneurs to gain an in-depth understanding of the Metaverse ecosystem and identify their opportunities for contribution.

    A Design Science Research Approach to Smart and Collaborative Urban Supply Networks

    Get PDF
    Urban supply networks are facing increasing demands and challenges and thus constitute a relevant field for research and practical development. Supply chain management holds enormous potential and relevance for society and everyday life, as the flows of goods and information are important economic functions. Being a heterogeneous field, the literature base of supply chain management research is difficult to manage and navigate. Disruptive digital technologies and the implementation of cross-network information analysis and sharing drive the need for new organisational and technological approaches. Practical issues are manifold and include mega trends such as digital transformation, urbanisation, and environmental awareness. A promising approach to solving these problems is the realisation of smart and collaborative supply networks. The growth of artificial intelligence in recent years has led to a wide range of applications in a variety of domains. However, the potential of artificial intelligence utilisation in supply chain management has not yet been fully exploited. Similarly, value creation increasingly takes place in networked value creation cycles that have become ever more collaborative, complex, and dynamic as interactions in business processes involving information technologies have become more intense. Following a design science research approach, this cumulative thesis comprises the development and discussion of four artefacts for the analysis and advancement of smart and collaborative urban supply networks. This thesis aims to highlight the potential of artificial intelligence-based supply networks, to advance data-driven inter-organisational collaboration, and to improve last mile supply network sustainability. Based on thorough machine learning and systematic literature reviews, reference and system dynamics modelling, simulation, and qualitative empirical research, the artefacts provide a valuable contribution to research and practice.

    Modeling Uncertainty for Reliable Probabilistic Modeling in Deep Learning and Beyond

    Full text link
    This thesis is framed at the intersection between modern Machine Learning techniques, such as Deep Neural Networks, and reliable probabilistic modeling. In many machine learning applications, we do not only care about the prediction made by a model (e.g. this lung image presents cancer) but also about how confident the model is in making this prediction (e.g. this lung image presents cancer with 67% probability). In such applications, the model assists the decision-maker (in this case a doctor) in making the final decision. As a consequence, the probabilities provided by a model need to reflect the true proportions present in the set to which those probabilities have been assigned; otherwise, the model is useless in practice. When this holds, we say that a model is perfectly calibrated. In this thesis, three ways are explored to provide more calibrated models. First, it is shown how to implicitly calibrate models that are decalibrated by data augmentation techniques; a cost function is introduced that resolves this decalibration, taking as its starting point ideas derived from decision making with Bayes' rule. Second, it is shown how to calibrate models using a post-calibration stage implemented with a Bayesian neural network. Finally, based on the limitations observed in the Bayesian neural network, which we hypothesize stem from a misspecified prior, a new stochastic process is introduced that serves as the prior distribution in a Bayesian inference problem. Maroñas Molano, J. (2022). Modeling Uncertainty for Reliable Probabilistic Modeling in Deep Learning and Beyond [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/181582
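    As a concrete illustration of the calibration notion discussed in this abstract (not code from the thesis itself), the following Python sketch computes the commonly used expected calibration error (ECE), which bins predictions by confidence and compares each bin's average confidence with its empirical accuracy. The toy probabilities, labels, and bin count are illustrative assumptions.

        import numpy as np

        def expected_calibration_error(probs, labels, n_bins=15):
            """Binned ECE: average |accuracy - confidence| weighted by bin size.

            probs  : (N, C) array of predicted class probabilities
            labels : (N,) array of integer class labels
            """
            confidences = probs.max(axis=1)     # model confidence per sample
            predictions = probs.argmax(axis=1)  # predicted class per sample
            accuracies = (predictions == labels).astype(float)

            bins = np.linspace(0.0, 1.0, n_bins + 1)
            ece = 0.0
            for lo, hi in zip(bins[:-1], bins[1:]):
                in_bin = (confidences > lo) & (confidences <= hi)
                if in_bin.any():
                    gap = abs(accuracies[in_bin].mean() - confidences[in_bin].mean())
                    ece += in_bin.mean() * gap  # weight by fraction of samples in bin
            return ece

        # Toy usage: a confident but sometimes wrong model is poorly calibrated.
        probs = np.array([[0.9, 0.1], [0.8, 0.2], [0.6, 0.4], [0.7, 0.3]])
        labels = np.array([0, 1, 0, 0])
        print(expected_calibration_error(probs, labels))  # 0.4 for this toy set

    A perfectly calibrated model would yield an ECE close to zero; post-hoc calibration methods of the kind discussed in the thesis aim to reduce this gap.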

    Supernatural crossing in Republican Chinese fiction, 1920s–1940s

    Get PDF
    This dissertation studies supernatural narratives in Chinese fiction from the mid-1920s to the 1940s. The literary works present phenomena or elements that are or appear to be supernatural, many of which remain marginal or overlooked in Sinophone and Anglophone academia. These sources are situated in the May Fourth/New Culture ideological context, where supernatural narratives had to make way for the progressive intellectuals’ literary realism and their allegorical application of supernatural motifs. In the face of realism, supernatural narratives paled, dismissed as impractical fantasies that distract one from facing and tackling real life. Nevertheless, I argue that the supernatural narratives do not probe into another mystical dimension that might co-exist alongside the empirical world. Rather, they imagine various cases of the characters’ crossing to voice their discontent with contemporary society or to reflect on the notion of reality. “Crossing” relates to characters’ acts or processes of trespassing the boundary that separates the supernatural from the conventional natural world, thus entailing encounters and interaction between the natural and the supernatural. The dissertation examines how crossing, as a narrative device, disturbs accustomed and mundane situations, releases hidden tensions, and discloses repressed truths in Republican fiction. There are five types of crossing in the supernatural narratives. Type 1 is the crossing into “haunted” houses. This includes (intangible) human agency crossing into domestic spaces and revealing secrets and truths concealed by the scary, feigned ‘haunting’, thus exposing the hidden evil and the other house occupiers’ silenced, suffocated state. Type 2 is men crossing into female ghosts’ apparitional residences. The female ghosts allude to heart-breaking, traumatic experiences in socio-historical reality, evoking sympathetic concern for suffering individuals who are caught in social upheavals. Type 3 is the crossing from reality into the characters’ delusional/hallucinatory realities. While they physically remain in the empirical world, the characters’ abnormal perceptions lead them to exclusive, delirious, and quasi-supernatural experiences of reality. Their crossings blur the concrete boundaries between the real and the unreal on the mental level: their abnormal perceptions construct a significant, meaningful reality for them, which may be as real as the commonly regarded objective reality. Type 4 is the crossing into the netherworld modelled on the real world in the authors’ observation and bears a spectrum of satirised objects of the Republican society. The last type is immortal visitors crossing into the human world. This type satirises humanity’s vices and destructive potential. The primary sources demonstrate their writers’ witty passion to play with super--natural notions and imagery (such as ghosts, demons, and immortals) and stitch them into vivid, engaging scenes using techniques such as the gothic, the grotesque, and the satirical, in order to evoke sentiments such as terror, horror, disgust, dis--orientation, or awe, all in service of their insights into realist issues. The works also creatively tailor traditional Chinese modes and motifs, which exemplifies the revival of Republican interest in traditional cultural heritage. 
The supernatural narratives may amaze or disturb the reader at first, but what is more shocking, unpleasantly nudging, or thought-provoking is the problematic society and people’s lives that the supernatural (misunderstandings) eventually reveals. They present a more comprehensive treatment of reality than Republican literature with its revolutionary consciousness surrounding class struggle. The critical perspectives of the supernatural narratives include domestic space, unacknowledged history and marginal individuals, abnormal mentality, and pervasive weaknesses in humanity. The crossing and supernatural narratives function as a means of better understanding the lived reality. This study gathers diverse primary sources written by Republican writers from various educational and political backgrounds and interprets them from a rare perspective, thus filling a research gap. It promotes a fuller view of supernatural narratives in twentieth-century Chinese literature. In terms of reflecting the social and personal reality of the Republican era, the supernatural narratives supplement the realist fiction of the time.

    Sensors and Methods for Railway Signalling Equipment Monitoring

    Get PDF
    Signalling upgrade projects that have been installed in equipment rooms in the recent past have limited capability to monitor the performance of certain types of external circuits. Modifying the equipment rooms on the commissioned railway would prove very expensive to implement and would be unacceptable in terms of delays caused to passenger services by re-commissioning circuits after modification, to comply with rail signalling standards. The use of magnetoresistive sensors to provide performance data on point circuit operation and point operation is investigated. The sensors are bench tested on their ability to measure current in a circuit in a non-intrusive manner. The effect of shielding on the sensor performance is tested and found to be significant. With various levels of amplification, the sensors produce a linear response across a range of circuit gains. The output of the sensor circuit is demonstrated for various periods of interruption of conductor current. A three-axis accelerometer is mounted on a linear actuator to demonstrate the type of output expected from similar sensors mounted on a set of points. Measurements of current in point detection circuits and of acceleration forces resulting from vibration of out-of-tolerance mechanical assemblies can provide valuable information on performance and possible threats to safe operation of equipment. The sensors seem capable of measuring the current in a conductor with a comparatively high degree of sensitivity. Development work is required on shielding the sensor from magnetic fields other than those being measured. The accelerometer work is at a demonstration level and requires development. Future testing work with accelerometers should be at a facility where multiple point moves can be made, with the capability to introduce faults into the point mechanisms. Methods can then be developed for analysis of the vibration signatures produced by the various faults.
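    As a hypothetical illustration of the kind of analysis such non-intrusive current measurements enable (not code from the study), the Python sketch below locates periods of conductor-current interruption in a sampled sensor signal by simple thresholding. The sampling rate, threshold level, and toy signal are assumptions.

        import numpy as np

        def interruption_intervals(current, sample_rate_hz, threshold=0.05):
            """Return (start_s, end_s) intervals where |current| drops below threshold.

            current        : 1-D array of sampled conductor current (arbitrary units)
            sample_rate_hz : sampling rate of the acquisition (assumed)
            threshold      : level below which the circuit is treated as interrupted (assumed)
            """
            off = np.abs(current) < threshold
            edges = np.diff(off.astype(int))        # rising/falling edges of the "off" mask
            starts = np.where(edges == 1)[0] + 1
            ends = np.where(edges == -1)[0] + 1
            if off[0]:
                starts = np.insert(starts, 0, 0)
            if off[-1]:
                ends = np.append(ends, len(off))
            return [(s / sample_rate_hz, e / sample_rate_hz) for s, e in zip(starts, ends)]

        # Toy signal: 1 s of 2 A current with a 0.2 s interruption.
        fs = 1000
        t = np.arange(fs) / fs
        signal = np.where((t > 0.4) & (t < 0.6), 0.0, 2.0)
        print(interruption_intervals(signal, fs))   # approximately [(0.401, 0.6)]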

    Um modelo para suporte automatizado ao reconhecimento, extração, personalização e reconstrução de gráficos estáticos

    Get PDF
    Data charts are widely used in our daily lives, being present in regular media such as newspapers, magazines, web pages, books, and many others. A well-constructed data chart leads to an intuitive understanding of its underlying data; conversely, when a chart reflects poor design choices, a redesign of the representation may be needed. However, in most cases these charts are shown as a static image, which means that the original data are not usually available. Therefore, automatic methods could be applied to extract the underlying data from the chart images to allow these changes. The task of recognizing charts and extracting data from them is complex, largely due to the variety of chart types and their visual characteristics. Computer Vision techniques for image classification and object detection are widely used for the problem of recognizing charts, but mostly on images free of disturbances. Features of real-world images that make this task harder, such as photographic distortion, noise, and misalignment, are rarely addressed in the literature. Two computer vision techniques that can assist this task and have been little explored in this context are perspective detection and correction. These methods transform a distorted and noisy chart into a clean chart, with its type ready for data extraction or other uses. Reconstructing a visualization is straightforward as long as the data are available, but reconstructing it in the same context is complex. Using a Visualization Grammar for this scenario is a key component, as these grammars usually have extensions for interaction, chart layers, and multiple views without requiring extra development effort. This work presents a model for automated support for the customised recognition, extraction, and reconstruction of charts in static images. The model automatically performs the process steps, such as reverse engineering, turning a static chart back into its data table for later reconstruction, while allowing the user to make modifications in case of uncertainties. This work also features a model-based architecture along with prototypes for various use cases. Validation is performed step by step, with methods inspired by the literature. Three use cases provide proof of concept and validation of the model. The first use case applies chart recognition methods to real-world documents; the second focuses on the vocalization of charts, using a visualization grammar to reconstruct a chart in audio format; and the third presents an Augmented Reality application that recognizes and reconstructs charts in the same context (a piece of paper), overlaying the new chart and interaction widgets. The results showed that, with slight changes, chart recognition and reconstruction methods are ready for real-world charts when time, accuracy, and precision are taken into consideration. Programa Doutoral em Engenharia Informática
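    To illustrate the perspective-correction step described above, here is a minimal Python/OpenCV sketch that warps a photographed chart to a fronto-parallel view. It assumes the four chart corners have already been detected by an earlier stage; the file names, output size, and corner coordinates are placeholders, and this is not the thesis's implementation.

        import cv2
        import numpy as np

        def rectify_chart(image, corners, out_w=800, out_h=600):
            """Warp a photographed chart to a fronto-parallel view.

            image   : BGR image containing the chart (e.g. a photo of a printed page)
            corners : 4x2 array of detected chart corners, ordered
                      top-left, top-right, bottom-right, bottom-left (detection step assumed)
            """
            src = np.asarray(corners, dtype=np.float32)
            dst = np.array([[0, 0], [out_w - 1, 0],
                            [out_w - 1, out_h - 1], [0, out_h - 1]], dtype=np.float32)
            homography = cv2.getPerspectiveTransform(src, dst)   # 3x3 projective transform
            return cv2.warpPerspective(image, homography, (out_w, out_h))

        # Example: the corners would come from an earlier detection stage (assumed here).
        img = cv2.imread("chart_photo.jpg")
        corners = [(120, 80), (980, 110), (1010, 720), (90, 690)]
        if img is not None:
            flat = rectify_chart(img, corners)
            cv2.imwrite("chart_rectified.png", flat)

    The rectified image can then be passed to the chart-type classifier and data-extraction stages mentioned in the abstract.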

    AIUCD 2022 - Proceedings

    Get PDF
    The eleventh edition of the National Conference of the AIUCD (Associazione di Informatica Umanistica) is titled Culture digitali. Intersezioni: filosofia, arti, media (Digital Cultures. Intersections: Philosophy, Arts, Media). The title explicitly calls for methodological and theoretical reflection on the interrelations between digital technologies, information sciences, the philosophical disciplines, the world of the arts, and cultural studies.

    The dynamics and ISM properties of high-redshift dusty star-forming galaxies

    Get PDF
    In this thesis we present a range of observations of submillimetre galaxies (SMGs), a subclass of dust-obscured star-forming galaxies (DSFGs) at redshifts of z~1-5. SMGs are among the most actively star-forming sources ever observed, believed to contribute significantly to the star-formation rate density (SFRD) at its peak, so-called 'cosmic noon', at z~2. Given their extreme nature, SMGs provide a strong test of galaxy formation and evolution models. Advancements in instrumentation, in particular with the Submillimetre Common-User Bolometer Array 2 (SCUBA-2) and the Atacama Large Millimeter/submillimeter Array (ALMA), have driven significant progress in SMG studies over the last decade. We have now identified samples of hundreds of SMGs in survey fields with extensive photometric coverage, such as the Cosmic Evolution Survey (COSMOS), the UKIDSS Ultra Deep Survey (UDS) and the Extended Chandra Deep Field Survey (ECDFS). Indeed, the main motivation of this thesis is to exploit these samples of SMGs, with a particular focus on the molecular and ionised gas properties, using state-of-the-art instrumentation such as ALMA and the Northern Extended Millimeter Array (NOEMA) for the former, and the K-band Multi-Object Spectrograph (KMOS) mounted on the Very Large Telescope for the latter. Firstly, in Chapter 2 we present CO observations of 47 SMGs, providing one of the largest and highest quality samples of its kind. With this study we demonstrate the capability of ALMA and NOEMA to undertake blind redshift scans in the 3mm waveband, and in doing so add significantly to the number of SMGs with spectroscopic redshifts, which prior to the work presented in this thesis was small. We also exploit the multi-wavelength coverage of the samples, together with the robust new spectroscopic redshifts, to model their spectral energy distributions (SEDs) with the MAGPHYS code and subsequently estimate key physical properties such as stellar masses and star-formation rates. Perhaps more importantly, this survey has allowed us to characterise the molecular gas content in the SMG population, along with its excitation properties, the results of which we present in Chapter 3. We also show that the gas depletion timescale in SMGs remains constant, and given that SMGs are significant contributors to the star-formation rate density (SFRD) at z~2, the global evolution of star-formation in SMGs appears to coincide with the evolution of the molecular gas content, as opposed to any variation in star-formation efficiency. We provide a new test of the evolutionary connection between the SMG population and massive local early-type galaxies, using the derived CO linewidths and baryonic masses. In Chapter 4 we present our Large Programme with KMOS which, when completed, will have observed ~400 SMGs in the COSMOS, UDS and ECDFS fields. Expanding on the work of Chapters 2 and 3, this programme is designed to further add to the catalogue of SMGs with spectroscopic redshifts by detecting the H_alpha and/or [OIII] emission, which probes ionised gas and can also be used to estimate star-formation rates. We detail the target selection and observing strategy of this survey, before presenting early results for 43 emission line-detected sources, including the H_alpha-derived star-formation rates, the mass-metallicity relation and BPT diagram. We also compare the H_alpha, rest-frame optical/near-infrared and dust sizes where available, finding median radii of R_e = 3.6+/-0.3 kpc, R_Halpha = 4.2+/-0.4 kpc and R_dust = 1.2+/-0.3 kpc.
Additionally, the sample is consistent with a median Sersic index of n=1, i.e. with an exponential disc-like light profile. The integral field spectrograph (IFS) capabilities of KMOS allow us to spatially resolve the H_alpha/[OIII] emission when it is sufficiently bright and extended, and this provides valuable diagnostics of the galaxy kinematics. Therefore, in Chapter 5 we present resolved H_alpha/[OIII] velocity and velocity dispersion maps for 36 SMGs, from which we derive rotation curves and dispersion profiles. We compare the derived kinematics of our SMGs with less active galaxies at lower redshifts, and divide the sample into 28 'ordered' sources, with clear velocity gradients and rotation curves that can be modelled as Freeman disks, and eight 'disordered' sources with much messier velocity maps, from which little reliable kinematic information can be obtained. We measure a median rotational velocity of v_rot = 190+/-20 km/s and a median intrinsic velocity dispersion of sigma_0 = 87+/-5 km/s from the 'ordered' subset, both of which are significantly higher than in the less actively star-forming comparison galaxies. The median ratio of rotational velocity to intrinsic velocity dispersion in the 'ordered' sample is v_rot/sigma_0 = 2.2+/-0.5, indicating that our sources are somewhat rotationally supported, and we therefore suggest that our SMG sample likely represents 'scaled-up' versions of more 'normal' star-forming galaxies, rather than merger-dominated systems.
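    For reference, the 'ordered' rotation curves mentioned above are modelled as Freeman disks. The short Python sketch below evaluates the Freeman (1970) circular-velocity curve of a razor-thin exponential disc using the standard modified-Bessel-function expression; the surface density and scale length are illustrative values chosen only to give velocities of roughly the quoted magnitude, not fitted parameters from the thesis.

        import numpy as np
        from scipy.special import i0, i1, k0, k1

        G = 4.301e-6  # gravitational constant in kpc (km/s)^2 / Msun

        def freeman_vc(r_kpc, sigma0_msun_kpc2, r_d_kpc):
            """Circular velocity (km/s) of a razor-thin exponential disc (Freeman 1970).

            v_c^2(R) = 4 pi G Sigma_0 R_d y^2 [I0(y)K0(y) - I1(y)K1(y)],  y = R / (2 R_d)
            """
            y = r_kpc / (2.0 * r_d_kpc)
            vc2 = 4.0 * np.pi * G * sigma0_msun_kpc2 * r_d_kpc * y**2 * (
                i0(y) * k0(y) - i1(y) * k1(y))
            return np.sqrt(vc2)

        # Illustrative (not fitted) parameters: central surface density and disc scale length.
        r = np.linspace(0.1, 15.0, 50)                     # radii in kpc
        v = freeman_vc(r, sigma0_msun_kpc2=2.5e9, r_d_kpc=3.0)
        print(f"peak velocity ~ {v.max():.0f} km/s at R ~ {r[np.argmax(v)]:.1f} kpc")

    The curve peaks at roughly 2.2 disc scale lengths, which is why resolved velocity fields extending to a few scale radii are needed to constrain v_rot reliably.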

    Open Vocabulary Object Detection with Pseudo Bounding-Box Labels

    Full text link
    Despite great progress in object detection, most existing methods work only on a limited set of object categories, due to the tremendous human effort needed for bounding-box annotations of training data. To alleviate the problem, recent open vocabulary and zero-shot detection methods attempt to detect novel object categories beyond those seen during training. They achieve this goal by training on a pre-defined set of base categories to induce generalization to novel objects. However, their potential is still constrained by the small set of base categories available for training. To enlarge the set of base classes, we propose a method to automatically generate pseudo bounding-box annotations of diverse objects from large-scale image-caption pairs. Our method leverages the localization ability of pre-trained vision-language models to generate pseudo bounding-box labels and then directly uses them for training object detectors. Experimental results show that our method outperforms the state-of-the-art open vocabulary detector by 8% AP on COCO novel categories, by 6.3% AP on PASCAL VOC, by 2.3% AP on Objects365 and by 2.8% AP on LVIS. Code is available at https://github.com/salesforce/PB-OVD. Comment: ECCV 2022.
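    As a rough, hypothetical sketch of how a localization map from a vision-language model might be turned into a pseudo bounding box (the paper's actual pipeline may differ, e.g. it could use connected components or iterative refinement), the snippet below thresholds a 2-D activation map at a fraction of its maximum and takes the extent of the activated pixels; the threshold fraction and toy map are assumptions.

        import numpy as np

        def box_from_activation(act_map, frac=0.5):
            """Derive a pseudo bounding box (x0, y0, x1, y1) from a 2-D activation map.

            act_map : (H, W) array, e.g. an image-text attention or Grad-CAM map
                      upsampled to image resolution
            frac    : keep pixels above frac * max activation (threshold choice assumed)
            """
            mask = act_map >= frac * act_map.max()
            ys, xs = np.nonzero(mask)
            if len(xs) == 0:
                return None
            return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

        # Toy activation map with a hot region roughly covering an object named in the caption.
        act = np.zeros((100, 100))
        act[30:60, 40:80] = 1.0
        print(box_from_activation(act))  # (40, 30, 79, 59)

    Boxes produced this way, paired with the caption word that generated the map, can then serve as (noisy) supervision for a standard detector, which is the general idea the abstract describes.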

    Graphical scaffolding for the learning of data wrangling APIs

    Get PDF
    In order for students across the sciences to avail themselves of modern data streams, they must first know how to wrangle data: how to reshape ill-organised tabular data into another format, and how to do this programmatically, in languages such as Python and R. Despite the cross-departmental demand and the ubiquity of data wrangling in analytical workflows, research on how to optimise its instruction has been minimal. Although data wrangling as a programming domain presents distinctive challenges - characterised by on-the-fly syntax lookup and code example integration - it also presents opportunities. One such opportunity is that tabular data structures are easily visualised. To leverage the inherent visualisability of data wrangling, this dissertation evaluates three types of graphics that could be employed as scaffolding for novices: subgoal graphics, thumbnail graphics, and parameter graphics. Using a specially built e-learning platform, this dissertation documents a multi-institutional, randomised, and controlled experiment that investigates the pedagogical effects of these graphics. Our results indicate that the graphics are well received, that subgoal graphics boost the completion rate, and that thumbnail graphics improve navigability within a command menu. We also obtained several non-significant results, and indications that parameter graphics are counter-productive. We discuss these findings in the context of general scaffolding dilemmas, and how they fit into a wider research programme on data wrangling instruction.
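    For readers unfamiliar with the kind of reshaping task the dissertation refers to, here is a minimal pandas example (with made-up column names and data) converting a wide table to tidy long form and back; it is an illustration of data wrangling in general, not material from the dissertation's platform.

        import pandas as pd

        # Wide table: one column per year (made-up data for illustration).
        wide = pd.DataFrame({
            "country": ["Norway", "Chile"],
            "2021": [5.4, 19.5],
            "2022": [5.5, 19.6],
        })

        # Reshape to long/tidy form: one row per (country, year) observation.
        long = wide.melt(id_vars="country", var_name="year", value_name="population_m")
        print(long)

        # And back again, pivoting long to wide.
        wide_again = long.pivot(index="country", columns="year", values="population_m")
        print(wide_again)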