
    The Metaverse: Survey, Trends, Novel Pipeline Ecosystem & Future Directions

    The Metaverse offers a second world beyond reality, where boundaries are non-existent and possibilities are endless, through engagement and immersive experiences using virtual reality (VR) technology. Many disciplines can benefit from the advancement of the Metaverse when accurately developed, including technology, gaming, education, art, and culture. Nevertheless, developing the Metaverse environment to its full potential is an ambiguous task that needs proper guidance and direction. Existing surveys on the Metaverse focus only on a specific aspect or discipline of the Metaverse and lack a holistic view of the entire process. To this end, a more holistic, multi-disciplinary, in-depth, academic- and industry-oriented review is required to provide a thorough study of the Metaverse development pipeline. To address these issues, we present in this survey a novel multi-layered pipeline ecosystem composed of (1) the Metaverse computing, networking, communications, and hardware infrastructure, (2) environment digitization, and (3) user interactions. For every layer, we discuss the components that detail the steps of its development. For each of these components, we also examine the impact of a set of enabling technologies and empowering domains (e.g., Artificial Intelligence, Security & Privacy, Blockchain, Business, Ethics, and Social) on its advancement. In addition, we explain the importance of these technologies in supporting decentralization, interoperability, user experiences, interactions, and monetization. Our study highlights the existing challenges for each component, followed by research directions and potential solutions. To the best of our knowledge, this survey is the most comprehensive to date and allows users, scholars, and entrepreneurs to gain an in-depth understanding of the Metaverse ecosystem and identify their opportunities for contribution.

    Image classification over unknown and anomalous domains

    A longstanding goal in computer vision research is to develop methods that are simultaneously applicable to a broad range of prediction problems. In contrast to this, models often perform best when they are specialized to some task or data type. This thesis investigates the challenges of learning models that generalize well over multiple unknown or anomalous modes and domains in data, and presents new solutions for learning robustly in this setting. Initial investigations focus on normalization for distributions that contain multiple sources (e.g. images in different styles like cartoons or photos). Experiments demonstrate the extent to which existing modules, batch normalization in particular, struggle with such heterogeneous data, and a new solution is proposed that can better handle data from multiple visual modes, using differing sample statistics for each. While ideas to counter the overspecialization of models have been formulated in sub-disciplines of transfer learning, e.g. multi-domain and multi-task learning, these usually rely on the existence of meta information, such as task or domain labels. Relaxing this assumption gives rise to a new transfer learning setting, called latent domain learning in this thesis, in which training and inference are carried out over data from multiple visual domains, without domain-level annotations. Customized solutions are required for this, as the performance of standard models degrades: a new data augmentation technique that interpolates between latent domains in an unsupervised way is presented, alongside a dedicated module that sparsely accounts for hidden domains in data, without requiring domain labels to do so. In addition, the thesis studies the problem of classifying previously unseen or anomalous modes in data, a fundamental problem in one-class learning, and anomaly detection in particular. 
While recent work has focused on developing self-supervised solutions for the one-class setting, this thesis formulates new methods based on transfer learning. Extensive experimental evidence demonstrates that a transfer-based perspective benefits problems recently proposed in the anomaly detection literature, in particular challenging semantic detection tasks.
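The abstract above describes replacing a single set of shared batch statistics with differing sample statistics for each visual mode (e.g. cartoons vs. photos). A minimal illustrative sketch of that idea, assuming mode labels are available at normalization time; the function name and plain-Python setting are illustrative assumptions, not the thesis's actual module:

```python
# Per-mode normalization sketch: instead of normalizing all samples with one
# shared mean/variance (as standard batch normalization does), each sample is
# normalized with the mean and variance of its own mode.

def normalize_per_mode(values, modes, eps=1e-5):
    """Normalize each value using the statistics of its own mode."""
    # Group values by mode label.
    groups = {}
    for v, m in zip(values, modes):
        groups.setdefault(m, []).append(v)
    # Compute per-mode mean and (biased) variance.
    moments = {}
    for m, vs in groups.items():
        mean = sum(vs) / len(vs)
        var = sum((v - mean) ** 2 for v in vs) / len(vs)
        moments[m] = (mean, var)
    # Standardize each value against its mode's statistics.
    return [(v - moments[m][0]) / (moments[m][1] + eps) ** 0.5
            for v, m in zip(values, modes)]
```

With two modes on very different scales, each group is standardized independently, so neither mode's statistics contaminate the other; the latent-domain setting studied in the thesis is harder, since the mode labels themselves are unavailable and must be inferred.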

    Breaking Ub with Leishmania mexicana: a ubiquitin activating enzyme as a novel therapeutic target for leishmaniasis

    Leishmaniasis is a neglected tropical disease, which inflicts a variety of gruesome pathologies on humans. The number of individuals afflicted with leishmaniasis is thought to vary between 0.7 and 1.2 million annually, of whom an estimated 20 to 40 thousand die. This problem is exemplary of inequality in healthcare – current leishmaniasis treatments are inadequate due to toxicity, cost, and ineffectiveness, so there is an urgent need for improved chemotherapies. Ubiquitination is a biochemical pathway that has received attention in cancer research. It is the process of adding the ubiquitin protein as a post-translational modification to substrate proteins, using an enzymatic cascade composed of enzymes termed E1s, E2s, and E3s. Ubiquitination can lead to degradation of substrate proteins, or otherwise modulate their function. As the name suggests, this modification can be found across eukaryotic cell biology. As such, interfering with ubiquitination may interfere with essential biological processes, which means ubiquitination may present a new therapeutic target for leishmaniasis. Before ubiquitination inhibitors can be designed, components of the ubiquitination system must be identified. To this end, a bioinformatic screening campaign employed BLAST searches and hidden Markov models, using characterised orthologs from model organisms as bait, to screen publicly available Leishmania mexicana genome sequence databases for genes encoding putative E1s, E2s, and E3s. To confirm some of these identifications at the protein level, activity-based probes, protein pulldowns, and mass spectrometry were used. Using an activity-based probe that emulates the structure of adenylated ubiquitin, E1s were identified and their relative abundance quantified. A chemical crosslinker extended the reach of this probe, allowing the identification of an E2 (LmxM.33.0900). It is noted that L. mexicana has two E1s – unusual for a single-celled organism. 
Of these E1s, LmxM.34.3060 was considerably more abundant than LmxM.23.0550 in both major life cycle stages of the in vitro Leishmania cultures. It is important to describe the wider context of these enzymes – what is their interactome, and what are their substrates? To study this, CRISPR was used to fuse a proximity-based labelling system, BioID, to genes of interest – LmxM.34.3060 and LmxM.33.0900. The E2 (LmxM.33.0900) was shown to interact with the E1 (LmxM.34.3060), validating the results from the activity-based probe and crosslinker experiments. Due to sequence homology with characterised orthologs, the E2 was hypothesised to function in the endoplasmic reticulum degradation pathway. Immunoprecipitations of a ubiquitin motif, diglycine, were conducted with a view to gathering information on the substrates of ubiquitin. Peptides recovered by anti-diglycine immunoprecipitation included some of those identified by BioID. Experiments examining ubiquitin's role in the DNA damage response were also initiated, as were improvements to the proximity-based labelling system; however, these were not followed to completion due to a lack of time and resources. To examine the possibility of finding novel drug targets in the ubiquitination cascade, recombinant proteins were expressed. LmxM.34.3060 was expressed in a functional form, while a putative SUMO E2 (LmxM.02.0390) was functional after refolding. Expressed LmxM.33.0900 was not functional and could not be refolded into a functional form. Drug assays were conducted on LmxM.34.3060, which found an inhibitor of the human ortholog, TAK-243, to be 20-fold less effective against the Leishmania enzyme. Additional assays found an inhibitor, 5'-O-sulfamoyl adenosine, that was 50-fold more effective at inhibiting the Leishmania enzyme than its human equivalent. Furthermore, a new mechanism of action, inhibition of the E1, was identified for drugs previously characterised as inhibitors of protein synthesis. 
LmxM.34.3060 underwent biophysical characterisation, with structural information obtained using SAXS and protein crystallography. A crystal structure was solved to 3.1 Å, with the in-solution SAXS structure complementary to this. TAK-243 was modelled into the LmxM.34.3060 structure and clashes were predicted, concurring with TAK-243's reduced efficacy against the Leishmania enzyme in the drug assays. This project aimed to characterise the potential of an understudied biochemical system to provide novel therapeutic targets for a neglected tropical pathogen. To achieve this aim, it presents the identification of two E1s, an interactome, a structure, and a potent, selective inhibitor of a Leishmania ubiquitin activating enzyme.

    Applying cognitive electrophysiology to neural modelling of the attentional blink

    This thesis proposes a connection between computational modelling of cognition and cognitive electrophysiology. We extend a previously published neural network model of working memory and temporal attention (the Simultaneous Type Serial Token (ST2) model; Bowman & Wyble, 2007) that was designed to simulate human behaviour during the attentional blink, an experimental finding that seems to illustrate the temporal limits of conscious perception in humans. Due to its neural architecture, we can utilise the ST2 model's functionality to produce so-called virtual event-related potentials (virtual ERPs) by averaging over activation profiles of nodes in the network. Unlike predictions from textual models, the virtual ERPs from the ST2 model allow us to derive formal predictions concerning the EEG signal and associated cognitive processes in the human brain. The virtual ERPs are used to make predictions and propose explanations for the results of two experimental studies during which we recorded the EEG signal from the scalp of human participants. Using various analysis techniques, we investigate how target items are processed by the brain depending on whether they are presented individually or during the attentional blink. Particular emphasis is on the P3 component, which is commonly regarded as an EEG correlate of encoding items into working memory and thus seems to reflect conscious perception. Our findings are interpreted to validate the ST2 model and competing theories of the attentional blink. Virtual ERPs also allow us to make predictions for future experiments. Hence, we show how virtual ERPs from the ST2 model provide a powerful tool for both experimental design and the validation of cognitive models.

    The Evolution of Smart Buildings: An Industrial Perspective of the Development of Smart Buildings in the 2010s

    Over the course of the 2010s, specialist research bodies failed to provide a holistic view of how the prominent, industry-driven reasons for creating a smart building changed. Research during the decade tended to remain deeply involved in single issues or value drivers. Through an analysis of the author's peer-reviewed and published works (book chapters, articles, essays and podcasts), supplemented with additional contextual academic literature, a model of how the key drivers for creating a smart building evolved in industry during the 2010s is presented. The critical research commentary within this thesis tracks the incremental advances of technology and their application to the built environment via academic movements, industrial shifts, and the author's personal contributions. This thesis demonstrates, through the chronology and publication dates of the included research papers, that as the financial cost and complexity of sensors and cloud computing fell, smart buildings became increasingly prevalent. Initially, sustainability was the primary focus, with the use of HVAC analytics and advanced metering in the early 2010s. The middle of the decade saw an economic transformation of the commercial office sector, and the driver for creating a smart building became the delivery of flexible yet quantifiably used space. Driven by society's emphasis on health, wellbeing and productivity, smart buildings pivoted their focus towards the end of the 2010s: smart building technologies were required to demonstrate the impacts of architecture on the human. This research has evidenced that smart buildings use data to improve performance in sustainability, in space usage, or for human-centric outcomes.

    Between the Market and State: Middle Class Clientelism in Central and Eastern Europe

    In Central and Eastern Europe, wealth is on the rise, but democracy is in decline. Populist parties assail the foundations of constitutional rule of law and enhance their networks of patronage and clientelism to gain greater support with the electorate. Yet it is little understood why citizens vote for illiberal parties in the region. This paper addresses this ongoing phenomenon by exploring voter support for clientelistic behavior among the middle classes of Russia, Poland, and Estonia. I develop and test a theory of “middle class clientelism” which seeks to explain under what conditions wealthier voters become a cost-effective target for vote buying, patronage, and particularistic goods. The literature on clientelism has been fairly consistent in explaining that middle class voters are too cost-prohibitive for parties and elites to clientelize because they have better access to personal wealth and employment opportunities. However, I identify two critical variables that can account for this occurrence: the level of state management of the economy, and vulnerabilities within the middle class induced by years of financial crisis in Central and Eastern Europe. This type of clientelism is damaging for democratic outcomes because it allows parties to participate in state capture and fuse themselves into the state without responsive democratic pressure from the middle class.

    The voice of science: Ideology, Sherlock Holmes, and the Strand Magazine

    This thesis uses The Strand Magazine and Arthur Conan Doyle's Sherlock Holmes stories to examine the different ways in which science and ideology interacted in popular culture between 1891 and 1930. It is interested in the relationship between high and low cultures and the different experiences of the fin de siècle and modernity that they betray. It attempts to reconstruct an epistemology of scientific knowledge from 'the artefacts of low culture' and challenges prevailing critical attitudes in periodical criticism and Holmesian criticism. The methodology is derived from a mixture of Marxist literary criticism, ideology theory and the history of science, in the belief that attitudes from all three critical traditions are necessary to properly unpack the culturally embedded nature of periodicals. It plots the relationship between scientific and popular discourses and examines the different ways in which fiction was able to ideologically commodify scientific knowledge and incorporate it into everyday representations of the real world. The thesis is split into four main sections that analyse, respectively, class relations in the 1890s, scientific articles after the turn of the century, depictions of the male body in the aftermath of the Second Boer War, and the effect of the onset of a knowledge economy on traditional genre fiction between 1913 and 1930.