
    Rational Trust Modeling

    Trust models are widely used in various computer science disciplines. The main purpose of a trust model is to continuously measure the trustworthiness of a set of entities based on their behaviors. In this article, the novel notion of "rational trust modeling" is introduced by bridging trust management and game theory. Note that trust models/reputation systems have long been used in game theory (e.g., in repeated games); however, game theory has not been utilized in the process of trust model construction, which is where the novelty of our approach lies. In our proposed setting, the designer of a trust model assumes that the players who intend to utilize the model are rational/selfish, i.e., they decide to become trustworthy or untrustworthy based on the utility that they can gain. In other words, the players are incentivized (or penalized) by the model itself to act properly. The problem of trust management can then be approached by game-theoretical analyses and solution concepts such as Nash equilibrium. Although rationality might be built into some existing trust models, we intend to formalize the notion of rational trust modeling from the designer's perspective. This approach yields two notable outcomes. First, the designer of a trust model can incentivize trustworthiness in the first place by incorporating proper parameters into the trust function, which can later be utilized among selfish players in strategic trust-based interactions (e.g., e-commerce scenarios). Furthermore, using a rational trust model, we can prevent many well-known attacks on trust models. These two prominent properties also help us to predict the behavior of the players in subsequent steps by game-theoretical analyses.
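    As a toy illustration of the designer-side incentive described above, the following Python sketch compares a selfish player's utility for cooperating versus defecting under a trust function that rewards cooperation and penalizes defection more heavily. The update rule, payoff values, and function names are hypothetical assumptions for illustration, not the model proposed in the article.

        # Hypothetical trust update and payoffs, not the article's actual model.
        # A selfish player compares the utility of cooperating vs. defecting when
        # the trust function rewards cooperation and penalizes defection.

        def update_trust(trust, cooperated, reward=0.1, penalty=0.3):
            """Bounded trust score in [0, 1]; penalty > reward discourages defection."""
            trust = trust + reward if cooperated else trust - penalty
            return max(0.0, min(1.0, trust))

        def expected_utility(trust, cooperated, gain_now=1.0, defect_bonus=0.5,
                             future_value=3.0):
            """Immediate payoff plus future payoff weighted by the updated trust score."""
            immediate = gain_now + (defect_bonus if not cooperated else 0.0)
            return immediate + future_value * update_trust(trust, cooperated)

        trust = 0.6
        print("cooperate:", expected_utility(trust, cooperated=True))   # 3.1
        print("defect:   ", expected_utility(trust, cooperated=False))  # 2.4

    With the penalty larger than the reward and enough future value at stake, cooperating is the better response for a rational player, which is the kind of built-in incentive the abstract attributes to the parameters of the trust function.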

    Keyword-Based Delegable Proofs of Storage

    Cloud users (clients) with limited storage capacity at their end can outsource bulk data to the cloud storage server. A client can later access her data by downloading the required data files. However, a large fraction of the data files the client outsources to the server is often archival in nature, used for backup purposes and accessed less frequently. An untrusted server can thus delete some of these archival data files in order to save space (and allocate it to other clients) without being detected by the client (data owner). Proofs of storage enable the client to audit her data files uploaded to the server in order to ensure the integrity of those files. In this work, we introduce one type of (selective) proofs of storage that we call keyword-based delegable proofs of storage, where the client wants to audit all her data files containing a specific keyword (e.g., "important"). Moreover, our notion satisfies public verifiability: the client can delegate the auditing task to a third-party auditor who audits the set of files corresponding to the keyword on behalf of the client. We formally define the security of a keyword-based delegable proof-of-storage protocol. We construct such a protocol based on an existing proof-of-storage scheme and analyze the security of our protocol. We argue that the techniques we use can be applied atop any existing publicly verifiable proof-of-storage scheme for static data. Finally, we discuss the efficiency of our construction.

    Comment: A preliminary version of this work has been published in the International Conference on Information Security Practice and Experience (ISPEC 2018).
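    The Python sketch below illustrates only the high-level shape of keyword-based delegation: the client keeps a keyword-to-file index and hands a third-party auditor the file identifiers for one keyword, while the underlying publicly verifiable proof-of-storage challenge is abstracted into a placeholder. The class names and the MAC-based delegation token are illustrative assumptions, not the protocol constructed in the paper.

        # Illustrative sketch only: the proof-of-storage scheme is abstracted away as
        # audit_file(); the keyword index and delegation token are hypothetical.
        import hashlib
        import hmac

        class Client:
            def __init__(self, key: bytes):
                self.key = key
                self.keyword_index = {}          # keyword -> set of file ids

            def outsource(self, file_id: str, keywords: list):
                for kw in keywords:
                    self.keyword_index.setdefault(kw, set()).add(file_id)

            def delegate(self, keyword: str):
                """Give the auditor the file ids for one keyword plus a MAC binding
                the delegation to that keyword (placeholder for a real token)."""
                token = hmac.new(self.key, keyword.encode(), hashlib.sha256).hexdigest()
                return keyword, sorted(self.keyword_index.get(keyword, [])), token

        def audit_file(file_id: str) -> bool:
            """Stand-in for one publicly verifiable proof-of-storage challenge."""
            return True  # a real scheme would verify a proof returned by the server

        def third_party_audit(delegation) -> bool:
            _keyword, file_ids, _token = delegation
            return all(audit_file(fid) for fid in file_ids)

        client = Client(key=b"secret")
        client.outsource("report.pdf", ["important", "2018"])
        client.outsource("photos.zip", ["backup"])
        print(third_party_audit(client.delegate("important")))  # audits only tagged files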

    Mining Top-K Frequent Itemsets Through Progressive Sampling

    We study the use of sampling for efficiently mining the top-K frequent itemsets of cardinality at most w. To this purpose, we define an approximation to the top-K frequent itemsets to be a family of itemsets which includes (resp., excludes) all very frequent (resp., very infrequent) itemsets, together with an estimate of these itemsets' frequencies with a bounded error. Our first result is an upper bound on the sample size which guarantees that the top-K frequent itemsets mined from a random sample of that size approximate the actual top-K frequent itemsets, with probability larger than a specified value. We show that the upper bound is asymptotically tight when w is constant. Our main algorithmic contribution is a progressive sampling approach, combined with suitable stopping conditions, which on appropriate inputs is able to extract approximate top-K frequent itemsets from samples whose sizes are smaller than the general upper bound. In order to test the stopping conditions, this approach maintains the frequency of all itemsets encountered, which is practical only for small w. However, we show how this problem can be mitigated by using a variation of Bloom filters. A number of experiments conducted on both synthetic and real benchmark datasets show that using samples substantially smaller than the original dataset (i.e., of the size defined by the upper bound or reached through the progressive sampling approach) enables approximating the actual top-K frequent itemsets with accuracy much higher than what is analytically proved.

    Comment: 16 pages, 2 figures, accepted for presentation at ECML PKDD 2010 and publication in the ECML PKDD 2010 special issue of the Data Mining and Knowledge Discovery journal.
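    A much-simplified Python sketch of the progressive-sampling idea follows, restricted to single items (w = 1) for brevity. The stopping rule used here (top-K estimates stable between two successive samples) is an illustrative stand-in for the paper's analytical stopping conditions and sample-size bound.

        # Simplified progressive sampling for top-K frequent single items (w = 1).
        # The stability-based stopping rule is illustrative, not the paper's bound.
        import random
        from collections import Counter

        def top_k_progressive(transactions, k, start=1000, growth=2, tol=0.01, seed=0):
            rng = random.Random(seed)
            prev_top, prev_freqs, size = None, None, start
            while True:
                sample = rng.choices(transactions, k=min(size, len(transactions)))
                counts = Counter(item for t in sample for item in set(t))
                freqs = {item: c / len(sample) for item, c in counts.items()}
                top = sorted(freqs, key=freqs.get, reverse=True)[:k]
                stable = prev_top is not None and set(top) == set(prev_top) and all(
                    abs(freqs[i] - prev_freqs.get(i, 0.0)) <= tol for i in top)
                if stable or size >= len(transactions):
                    return [(i, freqs[i]) for i in top]
                prev_top, prev_freqs, size = top, freqs, size * growth

        data = [["a", "b"], ["a", "c"], ["a", "b", "d"], ["b", "c"]] * 500
        print(top_k_progressive(data, k=2))   # e.g. [('a', ~0.75), ('b', ~0.75)]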

    High performance organic transistor active-matrix driver developed on paper substrate


    Introducing a framework to assess newly created questions with Natural Language Processing

    Statistical models such as those derived from Item Response Theory (IRT) enable the assessment of students on a specific subject, which can be useful for several purposes (e.g., learning path customization, drop-out prediction). However, the questions have to be assessed as well and, although it is possible to estimate with IRT the characteristics of questions that have already been answered by several students, this technique cannot be used on newly generated questions. In this paper, we propose a framework to train and evaluate models for estimating the difficulty and discrimination of newly created Multiple Choice Questions by extracting meaningful features from the text of the question and of the possible choices. We implement one model using this framework and test it on a real-world dataset provided by CloudAcademy, showing that it outperforms previously proposed models, reducing the RMSE by 6.7% for difficulty estimation and by 10.8% for discrimination estimation. We also present the results of an ablation study performed to support our choice of features and to show the effects of different characteristics of the questions' text on difficulty and discrimination.

    Comment: Accepted at the International Conference on Artificial Intelligence in Education.
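    The sketch below shows the general shape of such a framework in Python: text features extracted from the question and its choices feed a regression model that predicts a latent trait for questions no student has answered yet. TF-IDF features, ridge regression, and the toy data are assumptions made for illustration, not the features or models used in the paper.

        # Illustrative pipeline: TF-IDF text features -> regression on an IRT trait.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import Ridge
        from sklearn.pipeline import make_pipeline

        # Hypothetical training data: question stem plus choices, paired with
        # IRT-calibrated difficulty obtained from previously answered questions.
        texts = ["What does TCP stand for? (a) ... (b) ...",
                 "Explain the CAP theorem trade-offs. (a) ... (b) ..."]
        difficulty = [-0.8, 1.2]

        model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), Ridge(alpha=1.0))
        model.fit(texts, difficulty)

        new_question = "Which OSI layer handles routing? (a) ... (b) ..."
        print(model.predict([new_question]))  # estimated difficulty of an unseen question

    An analogous regressor trained against calibrated discrimination values would give the second estimate considered in the paper.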

    An Electrocorticographic Brain Interface in an Individual with Tetraplegia

    Brain-computer interface (BCI) technology aims to help individuals with disabilities control assistive devices and reanimate paralyzed limbs. Our study investigated the feasibility of an electrocorticography (ECoG)-based BCI system in an individual with tetraplegia caused by a C4-level spinal cord injury. ECoG signals were recorded with a high-density 32-electrode grid over the hand and arm area of the left sensorimotor cortex. The participant was able to voluntarily activate his sensorimotor cortex using attempted movements, with distinct cortical activity patterns for different segments of the upper limb. Using only brain activity, the participant achieved robust control of 3D cursor movement. The ECoG grid was explanted 28 days post-implantation with no adverse effects. This study demonstrates that ECoG signals recorded from the sensorimotor cortex can be used for real-time device control in paralyzed individuals.
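    For context on the decoding step, the generic Python snippet below maps per-channel band-power features to a 3D cursor velocity with a linear decoder fitted by least squares. This is a common BCI decoding pattern shown purely for illustration on synthetic data; the study's actual feature extraction, decoder, and calibration procedure are not described here.

        # Generic linear decoder: band-power features -> 3D cursor velocity.
        # Synthetic data; not the decoder used in the study above.
        import numpy as np

        rng = np.random.default_rng(0)
        n_trials, n_channels = 200, 32            # 32-electrode grid, as in the abstract
        X = rng.standard_normal((n_trials, n_channels))            # stand-in features
        W_true = rng.standard_normal((n_channels, 3))
        Y = X @ W_true + 0.1 * rng.standard_normal((n_trials, 3))  # 3D velocities

        # Least-squares calibration, then online-style prediction for a new sample.
        W, *_ = np.linalg.lstsq(X, Y, rcond=None)
        new_features = rng.standard_normal((1, n_channels))
        print(new_features @ W)                   # decoded 3D velocity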

    Asymmetric interlimb transfer of concurrent adaptation to opposing dynamic forces

    Interlimb transfer of a novel dynamic force has been well documented. It has also been shown that unimanual adaptation to opposing novel environments is possible if they are associated with different workspaces. The main aim of this study was to test whether adaptation to opposing velocity-dependent viscous forces with one arm could improve the initial performance of the other arm. The study also examined whether this interlimb transfer occurred across an extrinsic, spatial coordinative system or an intrinsic, joint-based coordinative system. Subjects initially adapted to opposing viscous forces separated by target location. Our measure of performance was the correlation between the speed profile of each movement within a force condition and an ‘average’ trajectory within null-force conditions. Adaptation to the opposing forces was seen during initial acquisition, with a significantly improved coefficient in epoch eight compared to epoch one. We then tested interlimb transfer from the dominant to the non-dominant arm (D → ND) and vice versa (ND → D) across either an extrinsic or an intrinsic coordinative system. Interlimb transfer was only seen from the dominant to the non-dominant limb across an intrinsic coordinative system. These results support previous studies involving adaptation to a single dynamic force but also indicate that interlimb transfer of multiple opposing states is possible. This suggests that the information available at the level of representation allowing interlimb transfer can be more intricate than a general movement goal or a single perceived directional error.
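    The performance measure described above can be made concrete with a short Python sketch: resample a trial's speed profile and the average null-field profile to a common length, then take their Pearson correlation. The resampling scheme and the synthetic bell-shaped profiles are illustrative assumptions, not the study's analysis pipeline.

        # Correlation between a trial's speed profile and an average null-field profile.
        # Synthetic profiles; resampling by linear interpolation is an assumption.
        import numpy as np

        def speed_profile_correlation(trial_speed, null_avg_speed, n_points=100):
            """Resample both profiles to n_points and return their Pearson correlation."""
            t = np.linspace(0.0, 1.0, n_points)
            trial = np.interp(t, np.linspace(0, 1, len(trial_speed)), trial_speed)
            ref = np.interp(t, np.linspace(0, 1, len(null_avg_speed)), null_avg_speed)
            return float(np.corrcoef(trial, ref)[0, 1])

        ref = np.sin(np.linspace(0, np.pi, 120))                    # average null profile
        trial = np.sin(np.linspace(0, np.pi, 95)) + 0.05 * np.random.randn(95)
        print(speed_profile_correlation(trial, ref))                # close to 1.0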

    Transferrin-bound Yb2 uptake by U-87 MG cells and effect of Yb on proliferation of the cells
