
    Reconsidering online reputation systems

    Social and socioeconomic interactions and transactions often require trust. In digital spaces, the main approach to facilitating trust has effectively been to try to reduce or even remove the need for it through the implementation of reputation systems. These generate metrics based on digital data such as ratings and reviews submitted by users, interaction histories, and so on, that are intended to label individuals as more or less reliable or trustworthy in a particular interaction context. We suggest that conventional approaches to the design of such systems are rooted in a capitalist, competitive paradigm, relying on methodological individualism, and that the reputation technologies themselves thus embody and enact this paradigm in whatever space they operate in. We question whether the politics, ethics and philosophy that contribute to this paradigm align with those of some of the contexts in which reputation systems are now being used, and suggest that alternative approaches to the establishment of trust and reputation in digital spaces need to be considered for alternative contexts.
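    As a concrete illustration of the kind of metric such systems derive from ratings, a common (and deliberately simple) choice is a Bayesian-average score that shrinks sparse rating histories toward a global prior; the function name, prior values, and examples below are hypothetical, not drawn from any particular platform:

```python
def reputation_score(ratings, prior_mean=3.0, prior_weight=5):
    """Bayesian-average rating: shrink a user's mean rating toward a
    global prior so that users with few ratings are not over-trusted."""
    total = sum(ratings) + prior_mean * prior_weight
    return total / (len(ratings) + prior_weight)

print(reputation_score([5, 5]))      # two perfect ratings: ~3.57, not 5.0
print(reputation_score([5] * 100))   # many perfect ratings: ~4.90
```

    Even this toy metric embodies the individualistic framing the authors question: it reduces a person's trustworthiness to a single competitive number.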

    Distributed-based massive processing of activity logs for efficient user modeling in a Virtual Campus

    This paper reports on a multi-fold approach to building user models based on the identification of navigation patterns in a virtual campus, allowing the campus' usability to be adapted to actual learners' needs and thus greatly stimulating the learning experience. However, user modeling in this context implies constant processing and analysis of user interaction data during long-term learning activities, which produces huge amounts of valuable data typically stored in server log files. Given the large or very large size of the log files generated daily, massive processing is a foremost step in extracting useful information. To this end, this work first studies the viability of processing large log data files of a real Virtual Campus using different distributed infrastructures. More precisely, we study the time performance of massive processing of daily log files implemented following the master-slave paradigm and evaluated on Cluster Computing and PlanetLab platforms. The study reveals the complexity and challenges of massive processing in the big data era, such as the need to carefully tune the processing in terms of the chunk size of log data handled at slave nodes, as well as the bottleneck that arises in truly geographically distributed infrastructures due to the overhead of communication between the master and slave nodes. Then, an application of the massive processing approach, resulting in log data processed and stored in a well-structured format, is presented. We show how to extract knowledge from the log data by using the WEKA framework for data mining, demonstrating its usefulness in effectively building user models that identify interesting navigation patterns of online learners. The study is motivated and conducted in the context of the actual data logs of the Virtual Campus of the Open University of Catalonia.
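    The master-slave chunking scheme described above can be sketched in a few lines; the log format, chunk size, and use of threads on a single machine (standing in for a real cluster or PlanetLab nodes) are all simplifying assumptions:

```python
from concurrent.futures import ThreadPoolExecutor

def parse_chunk(lines):
    """Slave task: count log entries per user in one chunk of lines."""
    counts = {}
    for line in lines:
        parts = line.split()
        if parts:  # assumed toy format: "user action ..."
            counts[parts[0]] = counts.get(parts[0], 0) + 1
    return counts

def process_log(lines, chunk_size=2, workers=2):
    """Master: split the log into chunks, fan them out to slave
    workers, then merge the partial per-user counts."""
    chunks = [lines[i:i + chunk_size] for i in range(0, len(lines), chunk_size)]
    merged = {}
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for partial in pool.map(parse_chunk, chunks):
            for user, n in partial.items():
                merged[user] = merged.get(user, 0) + n
    return merged

log = ["alice login", "bob login", "alice view_course", "alice logout"]
print(process_log(log))  # {'alice': 3, 'bob': 1}
```

    The chunk_size parameter is the knob the paper identifies as critical: too small and scheduling/communication overhead dominates, too large and slaves become unbalanced.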

    Variation, jumps, market frictions and high frequency data in financial econometrics

    We will review the econometrics of non-parametric estimation of the components of the variation of asset prices. This very active literature has been stimulated by the recent advent of complete records of transaction prices, quote data and order books. In our view the interaction of the new data sources with new econometric methodology is leading to a paradigm shift in one of the most important areas in econometrics: volatility measurement, modelling and forecasting. We will describe this new paradigm which draws together econometrics with arbitrage free financial economics theory. Perhaps the two most influential papers in this area have been Andersen, Bollerslev, Diebold and Labys (2001) and Barndorff-Nielsen and Shephard (2002), but many other papers have made important contributions. This work is likely to have deep impacts on the econometrics of asset allocation and risk management. One of our observations will be that inferences based on these methods, computed from observed market prices and so under the physical measure, are also valid as inferences under all equivalent measures. This puts this subject also at the heart of the econometrics of derivative pricing. One of the most challenging problems in this context is dealing with various forms of market frictions, which obscure the efficient price from the econometrician. Here we will characterise four types of statistical models of frictions and discuss how econometricians have been attempting to overcome them.
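    A minimal sketch of two of the non-parametric variation measures this literature centres on: realized variance, which sums squared high-frequency returns, and the jump-robust bipower variation associated with Barndorff-Nielsen and Shephard. The return series is illustrative, not real market data:

```python
import math

def realized_variance(returns):
    """Realized variance: the sum of squared high-frequency returns."""
    return sum(r * r for r in returns)

def bipower_variation(returns):
    """Jump-robust bipower variation: (pi/2) * sum_i |r_i| * |r_{i-1}|."""
    return (math.pi / 2) * sum(
        abs(returns[i]) * abs(returns[i - 1]) for i in range(1, len(returns))
    )

# Illustrative intraday returns containing one large "jump" observation.
r = [0.001, -0.002, 0.015, -0.001, 0.002]
rv = realized_variance(r)  # picks up the jump
bv = bipower_variation(r)  # comparatively robust to the jump
# rv - bv then serves as a rough estimate of the jump contribution.
```

    Market frictions enter precisely here: observed prices contaminate the returns fed to these estimators, which is why the four classes of friction models the survey discusses matter.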

    A framework for assessing quality aspects of information systems

    This thesis is concerned with understanding how stakeholders in a particular cultural context construct the multiple meanings of 'Information Systems Quality' (IS quality). A review of the literature on approaches and frameworks for IS quality shows that IS quality is generally examined through a 'hard' approach. This study demonstrates that IS quality can be meaningfully understood through an interpretive paradigm, and that IS quality is socially constructed and influenced by the IS context. The study began with an exploratory survey of twenty Libyan organizations; more detailed data were then gathered through a case study of two public sector organizations in Libya. A Multiple Perspective Framework (MPF) that incorporates ideas from structuration theory, the multiple perspectives concept, and soft systems methodology (SSM) was used to analyze the empirical work. The findings revealed that: (a) IS quality is a broader conception than the traditional quality definition; (b) the multiple perspectives of IS quality are influenced by repeated interaction between stakeholders and institutional properties in the IS context; and (c) the mediation of different values in the culture system and in the external context influences the extent of stakeholder agency and interaction in the IS context. The study concluded that the social construction of multiple perspectives of IS quality is shaped by the structuration processes between stakeholders and properties in the IS context.

    Stable evolutionary signal in a Yeast protein interaction network

    BACKGROUND: The recently emerged protein interaction network paradigm can provide novel and important insights into the inner workings of a cell. Yet the heavy burden of both false positive and false negative protein-protein interaction data casts doubt on the broader usefulness of these interaction sets. Approaches focusing on one protein at a time have been powerfully employed to demonstrate the high degree of conservation of proteins participating in numerous interactions; here, we expand this 'node'-focused paradigm to investigate the relative persistence of 'link'-based evolutionary signals in a protein interaction network of S. cerevisiae and point out the value of this relatively untapped source of information. RESULTS: The trend for highly connected proteins to be preferentially conserved in evolution is stable, even in the context of tremendous noise in the underlying protein interactions as well as in the assignment of orthology among five higher eukaryotes. We find that local clustering around interactions correlates with preferred evolutionary conservation of the participating proteins; furthermore, the correlation between high local clustering and evolutionary conservation is accompanied by a stable elevated degree of coexpression of the interacting proteins. We use this conserved interaction data, combined with P. falciparum/yeast orthologs, as proof of principle that higher-order network topology can be used comparatively to deduce local network structure in non-model organisms. CONCLUSION: High local clustering is a criterion for the reliability of an interaction and coincides with preferred evolutionary conservation and significant coexpression. These strong and stable correlations indicate that evolutionary units go beyond a single protein to include the interactions among them.
    In particular, the stability of these signals in the face of extreme noise suggests that empirical protein interaction data can be integrated with orthologous clustering around these protein interactions to reliably infer local network structures in non-model organisms.
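    The local clustering the abstract relies on, i.e. the fraction of a protein's interaction partners that also interact with each other, can be computed directly from an adjacency structure; the helper and toy graph below are illustrative, not the paper's pipeline:

```python
def local_clustering(adj, node):
    """Local clustering coefficient of `node` in an undirected graph
    given as an adjacency dict {node: set_of_neighbors}."""
    neighbors = adj[node]
    k = len(neighbors)
    if k < 2:
        return 0.0  # fewer than two neighbors: no pairs to check
    links = sum(
        1 for u in neighbors for v in neighbors
        if u < v and v in adj[u]  # count each interacting pair once
    )
    return 2 * links / (k * (k - 1))

# Toy network: a triangle a-b-c plus a pendant protein d attached to a.
adj = {"a": {"b", "c", "d"}, "b": {"a", "c"}, "c": {"a", "b"}, "d": {"a"}}
print(local_clustering(adj, "a"))  # 1/3: one of a's three neighbor pairs interacts
```

    In the paper's terms, an interaction embedded in many such triangles scores high on local clustering, which the authors find coincides with conservation and coexpression.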

    Introduction to innovation analytics

    Innovation analytics (IA) is an emerging paradigm that integrates advances in the data engineering, innovation, and artificial intelligence fields to support and manage the entire life cycle of a product and processes. In this chapter, we have identified several possibilities where analytics can help in innovation. First, we aim to explain, using a few cases, how analytics can help in bringing innovative new products to the market, specifically through the collaborative engagement of designers and data. Second, we explain the use of artificial intelligence (AI) techniques in the manufacturing context, which progresses at different levels, i.e., from process, to function-to-function interaction, to factory-level innovations.

    In-context Autoencoder for Context Compression in a Large Language Model

    We propose the In-context Autoencoder (ICAE) for context compression in a large language model (LLM). The ICAE has two modules: a learnable encoder adapted with LoRA from an LLM for compressing a long context into a limited number of memory slots, and a fixed decoder, which is the target LLM that can condition on the memory slots for various purposes. We first pretrain the ICAE using both autoencoding and language modeling objectives on massive text data, enabling it to generate memory slots that accurately and comprehensively represent the original context. Then, we fine-tune the pretrained ICAE on a small amount of instruct data to enhance its interaction with various prompts for producing desirable responses. Our experimental results demonstrate that the ICAE learned with our proposed pretraining and fine-tuning paradigm can effectively produce memory slots with 4× context compression, which can be well conditioned on by the target LLM to respond to various prompts. These promising results demonstrate the significant implications of the ICAE for its novel approach to the long context problem and its potential to reduce computation and memory overheads for LLM inference in practice, suggesting further research effort in context management for an LLM. Our code and data will be released shortly.
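    The core idea of compressing a long context into a small, fixed number of memory slots can be illustrated very loosely (this is not the ICAE's actual LoRA-adapted architecture) by attention-pooling n context vectors into k slots with query vectors standing in for learnable slot embeddings; all shapes and values here are made up:

```python
import numpy as np

def compress(context, queries):
    """Attention-pool n context vectors into k memory slots: each slot
    is a softmax-weighted average of the context rows."""
    scores = queries @ context.T                 # (k, n) similarity scores
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ context                     # (k, d) memory slots

rng = np.random.default_rng(0)
n, k, d = 128, 4, 16                 # 128 context vectors -> 4 slots (32x here)
context = rng.normal(size=(n, d))
queries = rng.normal(size=(k, d))    # stand-in for learnable slot queries
slots = compress(context, queries)
print(slots.shape)  # (4, 16)
```

    The decoder-side analogue is that the target LLM conditions on these k slot vectors instead of the full n-token context, which is where the compute and memory savings come from.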

    The source of the list strength effect within the Retrieving Effectively from Memory framework: the level of competition

    To study episodic memory in the laboratory, we study interference within a list of stimuli. With the list strength paradigm, we study how such interference is affected by how well stimuli are encoded and by the encoding strength of the other items in the list. A stimulus can be weak or strong, and it can appear in a pure list, composed of all weak or all strong stimuli, or in a mixed list, composed of both weak and strong stimuli. A list strength effect (LSE) refers to the interaction between stimulus strength and list type. In free recall, where the only cue available at test is context, an LSE has consistently been observed. Yet in cued recall, where the cue includes an item, a null LSE has consistently been observed. The source of the LSE has therefore been attributed to the type of cue used at retrieval. Based on REM (Retrieving Effectively from Memory), this framing is potentially misleading: it is not the type of cue (context or item) that is critical, but the level of competition. Typical experiments have confounded these two factors because they have used item cues to create a low level of competition within a list and context cues to create a high level. In this study, we therefore manipulated the level of competition (between subjects) and the type of cue (within subjects) simultaneously in the list strength paradigm. The data show that the LSE was determined by the level of competition, not by the type of cue probing memory at test, and fitting REM to the data confirms this conclusion. Nevertheless, whether memory was cued with context or with an item affected recall performance differently.
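    The 2 (stimulus strength) x 2 (list type) interaction that defines the LSE can be written as a simple difference of differences; the recall proportions below are hypothetical, not the study's data:

```python
def list_strength_effect(pure_weak, pure_strong, mixed_weak, mixed_strong):
    """Difference-of-differences interaction from a 2 (strength) x 2
    (list type) recall design: a nonzero value means the strong-weak
    advantage differs between mixed and pure lists, i.e. an LSE."""
    return (mixed_strong - mixed_weak) - (pure_strong - pure_weak)

# Hypothetical recall proportions (illustration only):
lse = list_strength_effect(pure_weak=0.40, pure_strong=0.55,
                           mixed_weak=0.32, mixed_strong=0.60)
print(round(lse, 2))  # 0.13 -> positive interaction, i.e. an LSE
```

    A value near zero, as typically found in cued recall, corresponds to the null LSE the abstract describes.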