
    FinBook: literary content as digital commodity

    This short essay explains the significance of the FinBook intervention, and invites the reader to participate. We have associated each chapter within this book with a financial robot (FinBot), and created a market whereby book content will be traded with financial securities. As human labour increasingly consists of unstable and uncertain work practices, and as algorithms replace people on the virtual trading floors of the world's markets, we see members of society taking advantage of FinBots to invest and make extra funds. Bots of all kinds are making financial decisions for us, searching online on our behalf to help us invest and to consume products and services. Our contribution to this compilation is to turn the collection of chapters in this book into a dynamic investment portfolio, and thereby play out what might happen to the process of buying and consuming literature in the not-so-distant future. By attaching identities (through QR codes) to each chapter, we create a market in which the chapter can ‘perform’. Our FinBots will trade based on features extracted from the authors’ words in this book: the political, ethical and cultural values embedded in the work, and the extent to which the FinBots share authors’ concerns; and the performance of chapters amongst those human and non-human actors that make up the market and readership. In short, the FinBook model turns our work and the work of our co-authors into an investment portfolio, mediated by the market and the attention of readers. By creating a digital economy specifically around the content of online texts, our chapter and the FinBook platform aim to challenge the reader to consider how their personal values align them with individual articles, and how these become contested as they perform different value judgements about the financial performance of each chapter and the book as a whole.
At the same time, by introducing ‘autonomous’ trading bots, we also explore the differing ‘network’ affordances of paper-based books, whose scarcity derives from their analogue form, and digital books, whose uniqueness is achieved through encryption. We thereby speak to wider questions about the conditions of an aggressive market in which algorithms subject cultural and intellectual items – books – to economic parameters, and the increasing ubiquity of data bots as actors in our social, political, economic and cultural lives. We understand that our marketization of literature may be an uncomfortable juxtaposition against the conventionally-imagined way a book is created, enjoyed and shared: it is intended to be

    Approaches for Incorporating a Variety of Metadata in Transformer Operation

    A plain transformer model typically leverages only one piece of metadata – position encoding – directly in the transformer model. The use of transformers typically involves expensive and complex external scaffolding before or after output generation to avoid issues such as hallucination and irrelevance. This disclosure describes techniques to incorporate a variety of metadata types into the native architecture of transformer models. The additional signals can help avoid hallucinations, improve relevance, and minimize the need for expensive external scaffolding. Generalizing transformer operation to incorporate a diversity of metadata can be achieved in various ways, such as adding a metadata embedding layer, conditioning self-attention on the metadata, conditioning with gated self-attention, or employing a different encoder-decoder architecture. Different types of metadata can help in different ways to improve the quality of the output generated by the transformer and reduce hallucinations. The techniques described in this disclosure can also support multimodal data, such as images, audio, video, or text, with the metadata representing the specific mode
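    One of the techniques mentioned above – a metadata embedding layer – can be sketched in miniature: the input embedding becomes the elementwise sum of token, position, and metadata embeddings, so the metadata signal is available to every subsequent layer. This is an illustrative sketch only; the embedding dimension, table sizes, and metadata vocabulary (here, a source-type id) are assumptions, not details from the disclosure.

```python
import random

EMBED_DIM = 8

def make_table(vocab_size, dim, seed):
    """Build a small random lookup table standing in for a learned embedding."""
    rng = random.Random(seed)
    return [[rng.uniform(-0.1, 0.1) for _ in range(dim)] for _ in range(vocab_size)]

token_emb = make_table(100, EMBED_DIM, seed=0)  # token id -> vector
pos_emb = make_table(512, EMBED_DIM, seed=1)    # position  -> vector
meta_emb = make_table(4, EMBED_DIM, seed=2)     # hypothetical source type: 0=web, 1=book, 2=code, 3=chat

def embed(token_ids, metadata_id):
    """Input embedding = token + position + metadata, summed elementwise."""
    out = []
    for pos, tok in enumerate(token_ids):
        vec = [t + p + m for t, p, m in
               zip(token_emb[tok], pos_emb[pos], meta_emb[metadata_id])]
        out.append(vec)
    return out

seq = embed([5, 17, 42], metadata_id=1)
print(len(seq), len(seq[0]))  # -> 3 8
```

    The same tables could instead feed the gated self-attention variant the disclosure mentions, where a learned gate decides how strongly the metadata signal modulates attention.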

    Money & Trust in Digital Society, Bitcoin and Stablecoins in ML enabled Metaverse Telecollaboration

    Full text link
    We present a state-of-the-art and positioning book about digital society tools, namely Web3, Bitcoin, the Metaverse, AI/ML, accessibility, safeguarding and telecollaboration. A high-level overview of Web3 technologies leads to a description of blockchain, and the Bitcoin network is specifically selected for detailed examination. Suitable components of the extended Bitcoin ecosystem are described in more depth. Other mechanisms for native digital value transfer are described, with a focus on `money'. Metaverse technology is surveyed, primarily from the perspective of Bitcoin and extended reality. Bitcoin is selected as the best contender for value transfer in metaverses because of its free and open-source nature and its network effect. Challenges and risks of this approach are identified. A cloud-deployable, virtual machine based technology stack deployment guide with a focus on cybersecurity best practice can be downloaded from GitHub to experiment with the technologies. This deployable lab is designed to inform the development of secure value transactions for small and medium-sized companies

    Provenance in the archives. The challenge of the digital environment

    The Principle of Provenance is a pillar of Archival Science. In its earliest formulations it mostly meant that documents from different origins should not be intermingled. This view has been challenged in the past fifty years: archival provenance has moved from a simplistic one-to-one relationship to a multi-dimensional concept based on a network of relationships between objects, agents and functions. The digital environment has posed new and unpredictable challenges: digital objects are often aggregations of several different pieces, and it is extremely easy to mix and re-use them, which makes it difficult to trace their provenance. Cloud computing has complicated the picture further. However, new technologies help us to cope with such complexity. Resource Description Framework (RDF) and ontologies can be used to represent provenance in a granular and articulated way that was not even conceivable in the past, giving us the opportunity to review and refine established practices and concepts
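    The networked, multi-relational view of provenance described above maps naturally onto RDF triples. A minimal sketch, assuming hypothetical identifiers and borrowing two properties from the W3C PROV-O vocabulary (prov:wasDerivedFrom, prov:wasAttributedTo); triples are represented as plain tuples rather than with a full RDF library:

```python
# Each triple is (subject, predicate, object); identifiers are invented.
triples = [
    ("ex:report_v2",   "prov:wasDerivedFrom",  "ex:report_v1"),
    ("ex:report_v2",   "prov:wasAttributedTo", "ex:archivist_A"),
    ("ex:report_v1",   "prov:wasDerivedFrom",  "ex:field_notes"),
    ("ex:field_notes", "prov:wasAttributedTo", "ex:agency_B"),
]

def provenance_chain(subject, predicate="prov:wasDerivedFrom"):
    """Walk derivation links back to the original source object."""
    chain = [subject]
    current = subject
    while True:
        nexts = [o for s, p, o in triples if s == current and p == predicate]
        if not nexts:
            return chain
        current = nexts[0]
        chain.append(current)

print(provenance_chain("ex:report_v2"))
# -> ['ex:report_v2', 'ex:report_v1', 'ex:field_notes']
```

    Because each relationship is an explicit, typed edge, the same graph can simultaneously record custodial history, attribution, and aggregation – the multi-dimensional provenance the abstract describes.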

    Blockchain Real Estate and NFTs

    Non-fungible tokens (popularly known as NFTs) and blockchains are frequently promoted as the solution to a multitude of property ownership problems. The promise of an immutable blockchain is often touted as a mechanism to resolve disputes over intangible rights, notably intellectual property rights, and even to facilitate quicker and easier real estate transactions. In this Symposium Article, we question the use of distributed ledger technologies as a method of facilitating and verifying the transfer of physical assets. As our example of an existing transfer method, we use real property law, which is characterized by centuries-old common law rules regarding fractionalized ownership and local land records that still, in many jurisdictions, rely on paper. We explain the history of real property title protection and then identify the problems with the existing system. We then compare the extant system (and its problems) with what blockchain could offer, concluding that a blockchain system would provide few, if any, benefits. That said, we concede that tracking and transferring ownership of certain rights—specifically, purely intangible rights—is a longstanding legal problem that begs for resolution. We focus on ownership signals and contrast ownership of physical assets—which is broadcast in part by manual possession in addition to, in the real estate realm, recording—with ownership of intangible assets, which cannot be possessed in a way that easily gives a signal to the entire world that the possessor is the owner. Because of that difference, we conclude that the true use case for NFTs and distributed ledgers is in tracking and verifying ownership of intangibles

    Natural Language Interfaces to Data

    Recent advances in NLU and NLP have resulted in renewed interest in natural language interfaces to data, which provide an easy mechanism for non-technical users to access and query the data. While early systems evolved from keyword search and focused on simple factual queries, the complexity of both the input sentences and the generated SQL queries has evolved over time. More recently, there has also been a lot of focus on using conversational interfaces for data analytics, empowering a wide range of non-technical users with quick insights into the data. There are three main challenges in natural language querying (NLQ): (1) identifying the entities involved in the user utterance, (2) connecting the different entities in a meaningful way over the underlying data source to interpret user intents, and (3) generating a structured query in the form of SQL or SPARQL. There are two main approaches for interpreting a user's NLQ. Rule-based systems make use of semantic indices, ontologies, and knowledge graphs (KGs) to identify the entities in the query, understand the intended relationships between those entities, and utilize grammars to generate the target queries. With the advances in deep learning (DL)-based language models, there have been many text-to-SQL approaches that try to interpret the query holistically using DL models. Hybrid approaches that utilize both rule-based techniques and DL models are also emerging, combining the strengths of both. Conversational interfaces are the next natural step beyond one-shot NLQ, exploiting query context between multiple turns of conversation for disambiguation. In this article, we review the background technologies that are used in natural language interfaces, and survey the different approaches to NLQ. 
    We also describe conversational interfaces for data analytics and discuss several benchmarks used for NLQ research and evaluation. (The full version of this manuscript, as published by Foundations and Trends in Databases, is available at http://dx.doi.org/10.1561/190000007)
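    The rule-based approach described above can be illustrated in miniature: a semantic index maps user vocabulary onto schema elements, and a simple pattern rule emits the WHERE clause. Everything here is a hypothetical sketch – the table and column names, the index, and the single "X is Y" rule are invented for illustration, not taken from the survey:

```python
import re

# Hypothetical semantic index: user word -> (table, column or None).
SEMANTIC_INDEX = {
    "customers": ("customers", None),
    "orders":    ("orders", None),
    "country":   ("customers", "country"),
    "total":     ("orders", "total"),
}

def nlq_to_sql(utterance):
    """Map a natural-language query to SQL via index lookups and one naive rule."""
    tokens = re.findall(r"[a-z]+", utterance.lower())
    tables, filters = set(), []
    for i, tok in enumerate(tokens):
        if tok in SEMANTIC_INDEX:
            table, column = SEMANTIC_INDEX[tok]
            tables.add(table)
            # Naive grammar rule: "<column> is <value>" becomes a WHERE clause.
            if column and i + 2 < len(tokens) and tokens[i + 1] == "is":
                filters.append(f"{column} = '{tokens[i + 2]}'")
    sql = "SELECT * FROM " + ", ".join(sorted(tables))
    if filters:
        sql += " WHERE " + " AND ".join(filters)
    return sql

print(nlq_to_sql("show customers where country is france"))
# -> SELECT * FROM customers WHERE country = 'france'
```

    A DL-based text-to-SQL system replaces the hand-written index and grammar with a learned model over (question, schema) pairs; hybrid systems keep an index like this for entity linking and let the model handle query composition.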

    Versioning Cultural Objects: Digital Approaches

    This volume approaches an understanding of the term versioning in the broadest sense, discussing ideas about how versions differ across forms of media, including text, image, and sound. Versions of cultural objects are identified, defined, articulated, and analysed through diverse mechanisms in different fields of research. The study of versions allows for investigation of the creative processes behind the conception of works, closer inspection of their socio-political contexts, and examination of their provenance and circulation. Chapters in this volume include discussion of what a “version” means in different fields, case studies implementing digital versioning techniques, conceptual models for representing versions digitally, and computational and management issues for digital projects