
    Tightest Admissible Shortest Path

    The shortest path problem in graphs is fundamental to AI. Nearly all variants of the problem, and the algorithms that solve them, ignore edge-weight computation time and its common relation to weight uncertainty; taking these factors into account can therefore yield a performance boost in relevant applications. Recently, a generalized framework for weighted directed graphs was suggested, in which an edge weight can be computed (estimated) multiple times, at increasing accuracy and run-time expense. We build on this framework to introduce the problem of finding the tightest admissible shortest path (TASP): a path with the tightest suboptimality bound on the optimal cost. This generalizes the shortest path problem to bounded uncertainty, where edge-weight uncertainty can be traded for computational cost. We present a complete algorithm for solving TASP, with guarantees on solution quality. Empirical evaluation supports the effectiveness of this approach. Comment: arXiv admin note: text overlap with arXiv:2208.1148
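
The uncertainty-for-computation trade-off can be illustrated with a minimal sketch (not the paper's algorithm; graph shapes, interval sequences, and the refinement policy below are assumptions): each edge carries a sequence of increasingly tight (lo, hi) weight intervals, Dijkstra runs on the lower bounds, and edges on the incumbent path are refined until the suboptimality bound (path upper-bound cost over the optimum's lower bound) meets a target.

```python
import heapq

class Edge:
    """Edge with a sequence of increasingly tight (lo, hi) weight
    intervals; refining advances to the next, more expensive estimate."""
    def __init__(self, intervals):
        self.intervals = intervals
        self.level = 0

    @property
    def lo(self):
        return self.intervals[self.level][0]

    @property
    def hi(self):
        return self.intervals[self.level][1]

    def refine(self):
        if self.level + 1 < len(self.intervals):
            self.level += 1
            return True
        return False

def dijkstra_lo(graph, src, dst):
    """Shortest path on the current lower-bound weights."""
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, e in graph.get(u, {}).items():
            nd = d + e.lo
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [], dst
    while node != src:
        path.append((prev[node], node))
        node = prev[node]
    return dist[dst], list(reversed(path))

def tightest_path(graph, src, dst, target_bound=1.0):
    """Refine edge estimates on the incumbent path until its
    suboptimality bound (hi-cost of path / lo-cost of optimum)
    is tight enough, or no tighter estimates remain."""
    while True:
        lower, path = dijkstra_lo(graph, src, dst)
        upper = sum(graph[u][v].hi for u, v in path)
        bound = upper / lower if lower > 0 else float("inf")
        if bound <= target_bound:
            return path, bound
        for u, v in path:
            if graph[u][v].refine():  # refine one edge per round
                break
        else:
            return path, bound  # all estimates on the path exhausted

graph = {
    "s": {"a": Edge([(1, 5), (2, 2)]), "b": Edge([(2, 3), (2.5, 2.5)])},
    "a": {"t": Edge([(1, 4), (3, 3)])},
    "b": {"t": Edge([(2, 6), (3, 3)])},
}
path, bound = tightest_path(graph, "s", "t")
```

Refining only edges on the incumbent path concentrates the estimation budget where it can actually tighten the bound, which is the spirit of trading weight uncertainty for computation.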

    Computer Analysis of Architecture Using Automatic Image Understanding

    In the past few years, computer vision and pattern recognition systems have become increasingly powerful, expanding the range of automatic tasks enabled by machine vision. Here we show that computer analysis of building images can quantify architecture and measure similarities between city architectural styles. Images of buildings from 18 cities and three countries were acquired using Google StreetView and used to train a machine vision system to identify the location of an imaged building from its visual content. Experimental results show that the system can identify the geographical location of a StreetView image. More importantly, the algorithm was able to group the cities and countries and provide a phylogeny of the similarities between architectural styles as captured by StreetView images. These results demonstrate that computer vision and pattern recognition algorithms can perform the complex cognitive task of analyzing images of buildings, and can be used to measure and quantify visual similarities and differences between architectural styles. This experiment provides a new paradigm for studying architecture, based on a quantitative approach that can complement traditional manual observation and analysis. The source code used for the analysis is open and publicly available.
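
The grouping step can be illustrated with a toy sketch (the city names, descriptor values, and clustering choice below are hypothetical placeholders, not the study's data or method): given per-city image descriptors, pairwise distances and a greedy single-linkage merge order yield a crude stand-in for a phylogeny.

```python
import math
from itertools import combinations

# Hypothetical per-city descriptors; in the study these would be numeric
# features extracted from StreetView building images.
city_features = {
    "Paris":    [0.90, 0.20, 0.10],
    "Lyon":     [0.85, 0.25, 0.15],
    "New York": [0.10, 0.90, 0.70],
    "Chicago":  [0.20, 0.80, 0.85],
}

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def single_linkage_merges(features):
    """Repeatedly merge the two closest clusters (single linkage) and
    record the merge order -- a crude tree of style similarity."""
    clusters = {name: [name] for name in features}
    merges = []
    while len(clusters) > 1:
        a, b = min(
            combinations(clusters, 2),
            key=lambda pair: min(
                euclidean(features[x], features[y])
                for x in clusters[pair[0]]
                for y in clusters[pair[1]]
            ),
        )
        merged = clusters.pop(a) + clusters.pop(b)
        clusters[a + " + " + b] = merged
        merges.append((a, b))
    return merges

merges = single_linkage_merges(city_features)
```

With these placeholder vectors, cities within the same country merge first, then the two country-level groups join, mirroring the kind of style phylogeny the abstract describes.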

    Performance Analysis of Augmented Reality Technology Based on Natural Features in Image Targets

    As times change, technology is being developed in increasingly innovative and creative ways. One example is Augmented Reality (AR), a breakthrough technology that can bring virtual objects into the real world. AR continues to be developed using markerless methods; it no longer relies on special black-on-white markers. AR can be applied in many fields, for example the film industry. This study focuses on analyzing the performance of AR using the Natural Feature Tracking method based on FAST Corner Detection, with Vuforia as the software engine, implemented on Android, so that users can scan a target in the form of a film poster image to obtain information in the form of a video trailer. To analyze the performance of the AR technology, tests were conducted with parameters of distance, camera angle, target size, and target condition, comparing the accuracy of the number of matching keypoints detected. The system was successfully implemented, with a system accuracy of 79% and a keypoint accuracy of 42% based on the target-condition tests.
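
The FAST corner criterion underlying the tracking method can be sketched in a few lines (a simplified pure-Python version for illustration, not Vuforia's implementation; the threshold and arc length below are assumed defaults): a pixel is a corner if at least n contiguous pixels on a radius-3 circle around it are all brighter than the center plus a threshold, or all darker than the center minus it.

```python
# Offsets of the 16-pixel Bresenham circle of radius 3 used by FAST.
CIRCLE = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2),
          (1, 3), (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1),
          (-2, -2), (-1, -3)]

def is_fast_corner(img, x, y, t=20, n=12):
    """FAST segment test: (x, y) is a corner if >= n contiguous circle
    pixels are all brighter than center + t or all darker than
    center - t. img is a list of rows of grayscale values."""
    center = img[y][x]
    ring = [img[y + dy][x + dx] for dx, dy in CIRCLE]
    for sign in (1, -1):  # 1: brighter arc, -1: darker arc
        flags = [sign * (p - center) > t for p in ring]
        run = 0
        for f in flags + flags:  # walk twice to catch wrap-around runs
            run = run + 1 if f else 0
            if run >= n:
                return True
    return False

# A 9x9 synthetic patch: 12 contiguous circle pixels much brighter than
# the center, which the segment test should flag as a corner.
img = [[100] * 9 for _ in range(9)]
for i, (dx, dy) in enumerate(CIRCLE):
    if i >= 4:
        img[4 + dy][4 + dx] = 200
corner = is_fast_corner(img, 4, 4)
```

Keypoints found this way on the poster image are then matched against the stored target, and the study's accuracy figures compare how many such matches survive changes in distance, angle, size, and target condition.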

    What Do You Meme? An Exploration of Internet Communication Through Memes

    The topic of memes and the ethnographies they create is discussed. Memes created and adopted by alt-right communities, specifically incels, illustrate their ideologies while simultaneously validating their views and recruiting insecure, vulnerable populations. Memes from times past come to be viewed as cringeworthy as they fail the test of time and cultural expectations for humor. Content that exists outside the confines of normality in an embarrassing way also becomes constituted as cringe, and can become a meme in this way. New social media platforms allow novel meme formats to emerge; the emergence of these formats is explained through TikTok case studies, highlighting the platform's unique and novel features. Finally, the ethnography surrounding mental health memes is explored through content analyses of memes discussing mental illnesses such as Bipolar Disorder, as well as memes depicting mindful practices. Trends arising from memes can romanticize negative attributes of mental illness, illustrating the potential for harmful consequences, such as triggering a relapse in self-harm. Memes and their real-world consequences must be discussed as social media pervades daily life.

    Clinical data wrangling using Ontological Realism and Referent Tracking

    Ontological realism aims at the development of high quality ontologies that faithfully represent what is general in reality, and at the use of these ontologies to render heterogeneous data collections comparable. To achieve this second goal for clinical research datasets presupposes not merely (1) that the requisite ontologies already exist, but also (2) that the datasets in question are faithful to reality in the dual sense that (a) they denote only particulars and relationships between particulars that do in fact exist and (b) they do this in terms of the types and type-level relationships described in these ontologies. While much attention has been devoted to (1), work on (2), which is the topic of this paper, is comparatively rare. Using Referent Tracking as a basis, we describe a technical data wrangling strategy which consists in creating for each dataset a template that, when applied to each particular record in the dataset, leads to the generation of a collection of Referent Tracking Tuples (RTTs) built out of unique identifiers for the entities described by means of the data items in the record. The proposed strategy is based on (i) the distinction between data and what data are about, and (ii) the explicit descriptions of portions of reality which RTTs provide and which range not only over the particulars described by data items in a dataset, but also over these data items themselves. This last feature allows us to describe particulars that are only implicitly referred to by the dataset; to provide information about correspondences between data items in a dataset; and to assert which data items are unjustifiably or redundantly present in or absent from the dataset. The approach has been tested on a dataset collected from patients seeking treatment for orofacial pain at two German universities and made available for the NIDCR-funded OPMQoL project.
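
The template idea can be sketched as follows (the field names, relation labels, and tuple shape below are illustrative assumptions, not the project's actual schema): each data item in a record yields a freshly minted unique identifier for the particular it is about, plus tuples that keep the data item itself in view alongside its referent.

```python
import uuid

def new_iui():
    """Mint an 'instance unique identifier' for a newly denoted particular."""
    return "IUI-" + uuid.uuid4().hex[:8]

def apply_template(record, template):
    """Apply a per-dataset template to one record, yielding
    referent-tracking-style tuples over unique identifiers.

    template maps a field name to an (assumed) ontology type label.
    """
    tuples = []
    patient = new_iui()  # the particular the whole record is about
    for field, type_label in template.items():
        entity = new_iui()  # the particular this data item is about
        tuples.append((entity, "instance_of", type_label))
        tuples.append((entity, "inheres_in", patient))
        # Keep the data item itself in view, not just its referent:
        tuples.append((record[field], "denotes", entity))
    return patient, tuples

template = {"pain_score": "obo:PainIntensity", "diagnosis": "obo:Diagnosis"}
record = {"pain_score": "7", "diagnosis": "chronic orofacial pain"}
patient, rtts = apply_template(record, template)
```

Because the tuples range over both the data items and the entities they denote, one can later assert that a given data item is redundant, unjustified, or missing without touching the description of reality itself.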

    Between Codes and Palimpsest: Stephanie Strickland's Dragon Logic

    This article studies the impact of programming languages on poetic language in Stephanie Strickland’s print poetry collection Dragon Logic (2013). I argue that Dragon Logic not only ponders the changes that occur in contemporary literature with the invasion of digital technologies, but also articulates, via the print form, certain concerns relating to the electronic, and finally helps readers reinvent the way they read a print book. The article follows the theoretical insights provided by N. Katherine Hayles about the connection between natural language and computer code, as well as the different reading practices brought forward by computation. Through a selection of close readings of poems in Dragon Logic, I discuss the layering of codes and how this layering affects the ways natural language is informed by programming language via feedback loops, a process that by extension influences not only human readers but also reading machines.

    Investigating the Effects of Engaging in Interactive Fiction on Players' Spatial Abilities

    Text-based interactive fiction games require a player to navigate through an environment without visual input. Research in spatial cognition has sought to improve spatial ability test scores; however, interactive fiction has not previously been examined as a means to improve spatial ability. This thesis investigates the effects of engaging in interactive fiction on players’ spatial abilities. Nine interactive fiction games were developed, based on 3 fairy tale stories with 3 levels of difficulty each. A between-subjects study was conducted over 3 days with 20 participants in the experimental group and 8 in the control group. Both groups took spatial ability measures at the beginning and end of the study, but only the experimental group participated in the interactive fiction game intervention. Qualitative and quantitative data were collected. The results deepen our understanding of whether interactive fiction may function as an effective intervention to affect spatial abilities, as well as of the spatial strategies that players use to engage in interactive fiction. The results have implications for the design and use of interactive fiction for purposes other than entertainment, such as education and training.

    Efficient Gradient-Based Inference through Transformations between Bayes Nets and Neural Nets

    Hierarchical Bayesian networks and neural networks with stochastic hidden units are commonly perceived as two separate types of models. We show that either type of model can often be transformed into an instance of the other, by switching between centered and differentiable non-centered parameterizations of the latent variables. The choice of parameterization greatly influences the efficiency of gradient-based posterior inference; we show that the two are often complementary to each other, clarify when each parameterization is preferred, and show how inference can be made robust. In the non-centered form, a simple Monte Carlo estimator of the marginal likelihood can be used for learning the parameters. Theoretical results are supported by experiments.
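
The switch between parameterizations can be illustrated for a single Gaussian latent variable (a minimal sketch; the paper's models are hierarchical): the centered form draws z ~ N(mu, sigma^2) directly, while the non-centered form draws parameter-free noise eps ~ N(0, 1) and sets z = mu + sigma * eps, making z a differentiable function of the parameters.

```python
import random

random.seed(0)
mu, sigma = 2.0, 0.5

def sample_centered():
    # Centered: the latent z is drawn directly from N(mu, sigma^2);
    # gradients w.r.t. (mu, sigma) must flow through the density of z.
    return random.gauss(mu, sigma)

def sample_noncentered():
    # Non-centered: parameter-free noise eps ~ N(0, 1), then
    # z = mu + sigma * eps is a differentiable function of (mu, sigma),
    # enabling simple Monte Carlo gradient and likelihood estimators.
    eps = random.gauss(0.0, 1.0)
    return mu + sigma * eps

# Both parameterizations define the same distribution over z,
# so large-sample means agree.
n = 100_000
mean_c = sum(sample_centered() for _ in range(n)) / n
mean_nc = sum(sample_noncentered() for _ in range(n)) / n
```

The two forms are statistically equivalent but expose different dependence structure to the sampler or optimizer, which is why their inference efficiency can differ so much.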