77 research outputs found

    The World Wide Web


    Preventing Additive Attacks to Relational Database Watermarking

    False ownership claims are mounted through additive and invertibility attacks, and, as far as we know, current relational watermarking techniques are not always able to resolve the ownership doubts arising from these attacks. In this paper, we focus on additive attacks. We extend a conventional image-based relational data watermarking scheme by creating a non-colluded backup of the data owner's marks, the so-called secondary marks positions. The technique we propose is able to identify the data owner beyond any doubt.
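In schemes of this kind, the positions carrying the owner's marks are typically selected by a keyed hash of each tuple's primary key. The sketch below illustrates that idea together with a second, independently keyed selection standing in for the paper's "secondary marks positions" backup; the function names and the `fraction` parameter are hypothetical illustrations, not taken from the paper.

```python
import hashlib
import hmac

def mark_positions(primary_keys, secret_key, fraction=8):
    """Select tuples to mark: a tuple carries a mark when the first
    byte of a keyed hash of its primary key is divisible by
    `fraction` (hypothetical selection rule)."""
    positions = []
    for pk in primary_keys:
        digest = hmac.new(secret_key, str(pk).encode(), hashlib.sha256).digest()
        if digest[0] % fraction == 0:
            positions.append(pk)
    return positions

def secondary_positions(primary_keys, secret_key, backup_key, fraction=8):
    """An independently keyed second selection playing the role of the
    'secondary marks positions': an attacker who additively overwrites
    the primary marks is unlikely to also hit this backup set."""
    return mark_positions(primary_keys, secret_key + backup_key, fraction)

primary = mark_positions(range(1000), b"owner-key")
backup = secondary_positions(range(1000), b"owner-key", b"backup-salt")
```

Because both selections are deterministic given the secret keys, the owner can recompute them at verification time, while an attacker without the keys cannot predict either set.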

    Blockchain-Based Distributed Marketplace

    Developments in Blockchain technology have enabled the creation of smart contracts; i.e., self-executing code that is stored and executed on the Blockchain. This has led to the creation of distributed, decentralised applications, along with frameworks for developing and deploying them easily. This paper describes a proof-of-concept system that implements a distributed online marketplace using the Ethereum framework, where buyers and sellers can engage in e-commerce transactions without the need for a large central entity coordinating the process. The performance of the system was measured in terms of cost of use through the concept of ‘gas usage’. It was determined that such costs are significantly less than those of Amazon and eBay for high-volume users. The findings generally support the ability to use Ethereum to create a distributed on-chain market; however, there are still areas that require further research and development.
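The ‘gas usage’ metric translates into a monetary cost as gas used × gas price × ETH/USD rate. A minimal sketch of that arithmetic follows; the gas amount, gas price, and exchange rate are illustrative assumptions, not figures from the paper.

```python
def tx_cost_usd(gas_used, gas_price_gwei, eth_price_usd):
    """Dollar cost of one transaction: gas used times the gas price
    (in gwei; 1 gwei = 1e-9 ETH) times the ETH/USD rate."""
    return gas_used * gas_price_gwei * 1e-9 * eth_price_usd

# Hypothetical figures: a contract call consuming 50,000 gas,
# a 20 gwei gas price, and ETH trading at $2,000.
cost = tx_cost_usd(50_000, 20, 2_000)
```

On these assumed figures the call costs about $2 regardless of the transaction's value, which is the kind of flat, volume-independent fee one would compare against the percentage-based fees of centralised marketplaces.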

    Annotation-Based Static Analysis for Personal Data Protection

    This paper elaborates on the use of static source code analysis in the context of data protection. The topic is important for software engineering in order for software developers to improve the protection of personal data during software development. To this end, the paper proposes a design for annotating classes and functions that process personal data. The design serves two primary purposes: on one hand, it provides means for software developers to document their intent; on the other hand, it furnishes tools for automatic detection of potential violations. This dual rationale facilitates compliance with the General Data Protection Regulation (GDPR) and other emerging data protection and privacy regulations. In addition to a brief review of the state of the art of static analysis in the data protection context and the design of the proposed analysis method, a concrete tool is presented to demonstrate a practical implementation for the Java programming language.
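The paper's tool targets Java annotations; as a language-agnostic sketch of the same dual purpose (documenting intent and enabling automatic detection), the Python fragment below marks personal-data-processing functions with a decorator and flags calls into them from unmarked code. All names here (`personal_data`, `check_call`) are hypothetical illustrations, not the paper's API.

```python
# Registry of functions declared as processing personal data.
PERSONAL_DATA_FUNCS = set()

def personal_data(func):
    """Mark a function as processing personal data (analogous to a
    class/method annotation in the paper's Java design)."""
    PERSONAL_DATA_FUNCS.add(func.__name__)
    return func

@personal_data
def store_email(address):
    return address.lower()

def log_user(address):           # unannotated caller
    return store_email(address)  # potential undeclared flow

def check_call(caller, callee):
    """Flag a call from an unannotated function into one that processes
    personal data -- the kind of potential violation a static analyser
    could report automatically."""
    if callee in PERSONAL_DATA_FUNCS and caller not in PERSONAL_DATA_FUNCS:
        return f"{caller} -> {callee}: undeclared personal-data flow"
    return None

warning = check_call("log_user", "store_email")
```

A real analyser would of course derive the call graph from the source rather than take caller/callee names as strings; the point here is only the split between declaring intent and checking it mechanically.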

    Nitrogen Excretion and Ammonia Emissions from Pigs Fed Modified Diets

    Two swine feeding trials were conducted (initial body weight = 47 ± 2 and 41 ± 3 kg for Trials 1 and 2, respectively) to evaluate reduced crude protein (CP) and yucca (Yucca schidigera Roezl ex Ortgies) extract–supplemented diets on NH3 emissions. In Trial 1, nine pigs were offered a corn–soybean meal diet (C, 174 g kg−1 CP), a Lys-supplemented diet (L, 170 g kg−1 CP), or a 145 g kg−1 CP diet supplemented with Lys, Met, Thr, and Trp (LMTT). In Trial 2, nine pigs were fed diet L supplemented with 0, 62.5, or 125 mg of yucca extract per kg diet. Each feeding period consisted of a 4-d dietary adjustment followed by 72 h of continuous NH3 measurement. Urine and fecal samples were collected each period. Feeding the LMTT diet reduced (P < 0.05) average daily gain (ADG) and feed efficiency (G:F) compared to diet L. Fecal N concentration decreased with a reduction in dietary CP, but urinary ammonium increased from pigs fed diet LMTT (2.0 g kg−1, wet basis) compared to those fed diet C (1.1 g kg−1) or L (1.0 g kg−1). When pigs were fed reduced CP diets, NH3 emission rates decreased (2.46, 2.16, and 1.05 mg min−1 for diets C, L, and LMTT). Yucca had no effect on feed intake, ADG, or G:F. Ammonium and N concentrations of manure and NH3 emission rates did not differ with yucca content. Caution must be exercised to maintain animal performance when strategies are implemented to reduce NH3 emissions.

    The Case For Probabilistic Grammars

    The purpose of this paper is to briefly examine two proposed extensions of statistical/probabilistic methodology, long familiar to the sciences, to linguistics. On the one hand it will be argued that the invocation of probabilistic measures is indispensable to any sensible criteria of grammatical adequacy, and on the other hand it will be suggested that probabilistic automata can be relevant to studies of language behavior. 1. The fully adequate (categorial/generative) grammar is one to which there corresponds an algorithm by means of which we can (recognize/generate) all and only those syntactically correct sequences in the corresponding language. At this writing, there does not exist any such 'ideal' grammar for any natural language; and as long as this situation remains, it will be necessary for the linguist to 'rank' competing grammars both for reasons of suitability for corpora and for assessment in terms of potential adequacy. Because of the prima facie potential of the transformational grammars introduced since the mid-1950's, linguists have not made any rigorous attempt at providing a measure of descriptive adequacy of grammars. Lately, such intuitive criteria as simplicity, intuitiveness, economy, etc. have been applied to competing grammars in adjudicating adequacy. But these are certainly not the kinds of objective criteria necessary to any independently valuable method of resolving disputes over relative adequacy. This is not to say that these quasi-criteria are without import to the linguist. Surely, in a ceteris paribus situation it is reasonable to prefer the simpler model to the more complex. But up to now there is no method of 'ranking' available by which we can determine when a ceteris paribus situation obtains. In linguistics, just as in the sciences, only when the adequacies of competing models are established are issues of simplicity, economy, and the like germane.
Certainly, the application of statistical/probabilistic procedures to the field of linguistics is not new. Precedents have been established in taxonomic studies, analyses of distributions of word types in corpora (viz. Zipf's Law), etc. But the notion of using an interjacent probabilistic grammar in determining descriptive adequacy is quite innovative. Of the recent developments in this area, perhaps the most notable is that of Suppes (1970). Suppes' motivation for this paper was the disregard by conventional grammatical models of such fundamental and universal characteristics of natural languages as relatively short utterance length, predominance of grammatically simple utterances, etc. It seems irrational to Suppes to be tolerant of grammars which pay an inordinate amount of attention to those syntactic structures which are 'deviant,' or at least atypical of general usage, and whose relative frequency of occurrence in the corpus is low. To put the matter differently, if any putatively adequate grammar is to be of value, it must be able to account for a sizeable portion of the corpus, thereby identifying those grammatical types which demand further scrutiny. In order to establish the relative values for alternative grammars, Suppes suggests we consult a probabilistic grammar.
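A probabilistic grammar attaches a probability to each production, so that a derivation's probability is the product of the probabilities of the rules it uses; frequent, simple utterances then naturally receive more of the grammar's probability mass than deviant ones. A toy sketch of the idea (the grammar and its numbers are illustrative, not taken from Suppes 1970):

```python
# A toy probabilistic context-free grammar: for each nonterminal, the
# probabilities of its alternative productions sum to 1.
RULES = {
    "S":  [(("NP", "VP"), 1.0)],
    "NP": [(("the", "N"), 0.7), (("N",), 0.3)],
    "VP": [(("V",), 0.6), (("V", "NP"), 0.4)],
    "N":  [(("dog",), 0.5), (("cat",), 0.5)],
    "V":  [(("runs",), 1.0)],
}

def derivation_prob(rule_probs):
    """Probability of a derivation: the product of the probabilities
    of the rules applied along the way."""
    p = 1.0
    for prob in rule_probs:
        p *= prob
    return p

# "the dog runs": S -> NP VP (1.0), NP -> the N (0.7),
# N -> dog (0.5), VP -> V (0.6), V -> runs (1.0)
p = derivation_prob([1.0, 0.7, 0.5, 0.6, 1.0])
```

Two candidate grammars covering the same corpus can then be compared by how much probability each assigns to the observed utterances, giving the kind of objective ranking criterion the paper argues the intuitive criteria lack.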