
    A Novel Method to Calculate Click Through Rate for Sponsored Search

    Sponsored search adopts the generalized second price (GSP) auction mechanism, which works on a pay-per-click basis and is most commonly used for the allocation of slots on the results page. The two main aspects associated with GSP are the bidding amount and the click-through rate (CTR). The CTR learning algorithms currently in use work on the basic principle of (#clicks_i / #impressions_i) under a fixed window of clicks, impressions, or time. CTR is prone to fraudulent clicks, which produce a sudden increase in CTR. Current algorithms are unable to stop this, although machine learning methods can detect that fraudulent clicks are being generated. In our paper, we use the concept of relative ranking, which works on the basic principle of (#clicks_i / #clicks_t). In this algorithm, the numerator and the denominator are linked. Because #clicks_t is larger than the denominators used in previous algorithms and is linked to #clicks_i, the small changes in clicks that occur in the normal scenario cause only a very small change in the result; in the case of fraudulent clicks, however, the number of clicks rises rapidly and adds up with the normal clicks to inflate the denominator, thereby decreasing the CTR.

    Comment: 10 pages, 1 figure
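
    The contrast between the two ratios can be made concrete in a few lines. The sketch below is ours, with illustrative click counts, not the paper's code: the windowed ratio scales linearly with #clicks_i, while in the relative-ranking ratio a burst of fraudulent clicks also inflates the shared denominator, so the jump is damped.

        # Minimal sketch of the two CTR formulations; all numbers are illustrative.

        def windowed_ctr(clicks_i, impressions_i):
            """Classic CTR: #clicks_i / #impressions_i over a fixed window."""
            return clicks_i / impressions_i if impressions_i else 0.0

        def relative_ctr(clicks_i, all_clicks):
            """Relative ranking: #clicks_i / #clicks_t, with #clicks_t the total over all ads."""
            clicks_t = sum(all_clicks)
            return clicks_i / clicks_t if clicks_t else 0.0

        normal = [50, 400, 550]   # ad 0 receives 50 of 1,000 total clicks
        fraud = [550, 400, 550]   # +500 fraudulent clicks land on ad 0

        print(windowed_ctr(50, 1000), windowed_ctr(550, 1000))  # 0.05 -> 0.55, an 11x jump
        print(relative_ctr(normal[0], normal))                  # 0.05
        print(relative_ctr(fraud[0], fraud))                    # ~0.37: the fraud also grows the denominator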

    Is it possible to establish reference values for ankle muscle isokinetic strength? A meta-analytical study

    BACKGROUND: The importance of measuring ankle muscle strength (AMS) has been demonstrated in a variety of clinical areas. Much data has been accumulated using the Cybex Norm isokinetic dynamometer, but a uniform framework does not exist. OBJECTIVE: To identify pertinent studies that have used the Cybex Norm to measure AMS, in order to establish reference values. METHODS: A narrative review of the literature was used to identify papers that have used the Cybex Norm to measure isokinetic concentric and eccentric AMS. RESULTS: Fifty-five research papers were identified, but each study used a different isokinetic protocol. CONCLUSIONS: It is not possible to produce AMS reference values due to the wide variation in data collection methods. This is therefore an area of research that needs further exploration.

    Scraping the Social? Issues in live social research

    What makes scraping methodologically interesting for social and cultural research? This paper seeks to contribute to debates about digital social research by exploring how a ‘medium-specific’ technique for online data capture may be rendered analytically productive for social research. As a device that is currently being imported into social research, scraping has the capacity to re-structure social research in at least two ways. Firstly, as a technique that is not native to social research, scraping risks introducing ‘alien’ methodological assumptions into social research (such as a preoccupation with freshness). Secondly, to scrape is to risk importing into our inquiry categories that are prevalent in the social practices enabled by the media: scraping makes already formatted data available for social research. Scraped data, and online social data more generally, tend to come with ‘external’ analytics already built in. This circumstance is often approached as a ‘problem’ with online data capture, but we propose that it may be turned into a virtue, insofar as data formats that have currency in the areas under scrutiny may serve as a source of social data themselves. Scraping, we propose, makes it possible to render the traffic between the object and process of social research analytically productive. It enables a form of ‘real-time’ social research, in which the formats and life cycles of online data may lend structure to the analytic objects and findings of social research. By way of a conclusion, we demonstrate this point in an exercise of online issue profiling, and more particularly, by relying on Twitter to profile the issue of ‘austerity’. Here we distinguish between two forms of real-time research: those dedicated to monitoring live content (which terms are current?) and those concerned with analysing the liveliness of issues (which topics are happening?).
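
    The closing distinction can be put in concrete terms with a toy sketch (ours, not the authors' tooling; the posts are invented stand-ins for scraped tweets on ‘austerity’): monitoring live content asks which terms dominate the latest time slice, while analysing liveliness asks which terms keep recurring across slices.

        from collections import Counter

        # Invented (hour, text) records standing in for scraped posts on an issue.
        posts = [
            (9,  "austerity cuts hit local libraries"),
            (9,  "austerity cuts protest announced"),
            (10, "libraries closing under austerity"),
            (11, "new austerity budget debated"),
        ]

        latest = max(hour for hour, _ in posts)

        # Live content: term frequencies in the most recent slice only.
        current_terms = Counter(word for hour, text in posts if hour == latest
                                for word in text.split())

        # Liveliness: in how many distinct time slices does each term recur?
        vocabulary = {word for _, text in posts for word in text.split()}
        liveliness = Counter({word: len({hour for hour, text in posts
                                         if word in text.split()})
                              for word in vocabulary})

        print(current_terms.most_common(3))   # what is current right now
        print(liveliness.most_common(3))      # what keeps happening over time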

    Why Print and Electronic Resources Are Essential to the Academic Law Library

    Libraries have supported multiple formats for decades, from paper and microforms to audiovisual tapes and CDs. However, the newest medium, digital transmission, has presented a wider scope of challenges and caused library patrons to question the established and recognized multiformat library. Among the many questions posed, two distinct ones echo repeatedly. The first doubts the need to sustain print in an increasingly digital world, and the second warns of the dangers of relying on a still-developing technology. This article examines both of these positions and concludes that abandoning either format would translate into a failure of service to patrons, both present and future.

    Notes on the Margins of Metadata: Concerning the Undecidability of the Digital Image

    This paper considers the significance of metadata in relation to the image economy of the web. Social practices such as keywording, tagging, rating and viewing increasingly influence the modes of navigation and hence the utility of images in online environments. To a user faced with an avalanche of images, metadata promises to make photographs machine-readable in order to mobilize new knowledge, in a continuation of the archival paradigm. At the same time, metadata enables new topologies of the image, new temporalities and multiplicities which present a challenge to historical models of representation. As photography becomes an encoded discourse, we suggest that the turning away from the visual towards the mathematical and the algorithmic establishes undecidability as a key property of the networked image.

    Digital Image

    This paper considers the ontological significance of invisibility in relation to the question ‘what is a digital image?’ Its argument, in a nutshell, is that the emphasis on visibility comes at the expense of latency and is symptomatic of the style of thinking that has dominated Western philosophy since Plato. This privileging of visible content necessarily binds images to linguistic (semiotic and structuralist) paradigms of interpretation, which promote representation, subjectivity, identity and negation over multiplicity, indeterminacy and affect. Photography is the case in point because, until recently, critical approaches to photography had one thing in common: they all shared the implicit and incontrovertible understanding that photographs are a medium that must be approached visually; they took it as a given that photographs are there to be looked at, and they all agreed that it is only through the practices of spectatorship that the secrets of the image can be unlocked. Whatever subsequent interpretations followed, the priority of vision in relation to the image remained unperturbed. This undisputed belief in the visibility of the image has such a strong grasp on theory that it imperceptibly bonded together otherwise dissimilar and sometimes contradictory methodologies, preventing them from noticing that which is most unexplained about images: the precedence of looking itself. This self-evident truth of visibility casts a long shadow on image theory because it blocks the possibility of inquiring after everything that is invisible, latent and hidden.

    Global-Scale Resource Survey and Performance Monitoring of Public OGC Web Map Services

    One of the most widely implemented service standards provided by the Open Geospatial Consortium (OGC) to the user community is the Web Map Service (WMS). WMS is widely employed globally, but there is limited knowledge of the global distribution, adoption status or service quality of these online WMS resources. To fill this void, we investigated global WMS resources and performed distributed performance monitoring of these services. This paper explicates a crawling method used to discover these WMSs and a distributed monitoring framework that was used to monitor 46,296 WMSs continuously for over one year. We analyzed server locations, provider types, themes, the spatiotemporal coverage of map layers and the service versions for 41,703 valid WMSs. Furthermore, we appraised the stability and performance of the basic operations (i.e., GetCapabilities and GetMap) for 1,210 selected WMSs. We discuss the major reasons for request errors and performance issues, as well as the relationship between service response times and the spatiotemporal distribution of client monitoring sites. This paper will help service providers, end users and developers of standards grasp the status of global WMS resources and understand the adoption status of OGC standards. The conclusions drawn in this paper can benefit geospatial resource discovery and service performance evaluation, and guide service performance improvements.

    Comment: 24 pages; 15 figures
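
    As a rough illustration of the kind of probe such monitoring involves, the sketch below times a single GetCapabilities request against a placeholder endpoint (the URL is hypothetical, and the paper's distributed framework is of course far more elaborate); the query parameters follow the standard WMS interface.

        import time
        import requests

        WMS_URL = "https://example.org/wms"  # hypothetical endpoint, not one surveyed in the paper

        def time_get_capabilities(url, version="1.3.0", timeout=30):
            """Issue a standard WMS GetCapabilities request; return status and latency."""
            params = {"service": "WMS", "request": "GetCapabilities", "version": version}
            start = time.monotonic()
            try:
                response = requests.get(url, params=params, timeout=timeout)
                return response.status_code, time.monotonic() - start
            except requests.RequestException as exc:
                return type(exc).__name__, time.monotonic() - start

        status, elapsed = time_get_capabilities(WMS_URL)
        print(f"GetCapabilities -> {status} in {elapsed:.2f}s")

    A GetMap probe would be analogous, with layer, bounding-box and image-format parameters added to the request.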

    Query-Based Sampling using Only Snippets

    Query-based sampling is a popular approach to modeling the content of an uncooperative server. It works by sending queries to the server and downloading, in full, the documents returned in the search results. This sample of documents then represents the server’s content. We present an approach that uses the document snippets as samples instead of downloading entire documents. This yields more stable results for the same amount of bandwidth as the full-document approach. Additionally, we show that using snippets does not necessarily incur more latency, but can actually save time.
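
    A toy sketch of the snippet-based variant (the in-memory ‘server’, corpus and function names are ours, not the paper's system): each query returns short snippets, whose words both update a term-frequency model of the server and seed follow-up queries.

        from collections import Counter

        # Tiny in-memory "server" standing in for the uncooperative remote server.
        CORPUS = [
            "query-based sampling builds a content model from search results",
            "snippets are short result summaries shown on the results page",
            "downloading full documents costs more bandwidth than snippets",
        ]

        def search(query, k=3):
            """Return short snippets whose text contains the query term (toy stand-in)."""
            return [doc[:60] for doc in CORPUS if query in doc][:k]

        def sample_server(seed_terms, rounds=10):
            """Fold snippet words into a term-frequency model; reuse words as new queries."""
            model = Counter()
            queue = list(seed_terms)
            while queue and rounds > 0:
                rounds -= 1
                for snippet in search(queue.pop(0)):
                    words = snippet.lower().split()
                    model.update(words)       # snippets, not full documents, feed the model
                    queue.extend(words[:2])   # sampled words become follow-up queries
            return model

        print(sample_server(["sampling"]).most_common(5))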

    Translation into any natural language of the error messages generated by any computer program

    Since the introduction of the Fortran programming language some 60 years ago, there has been little progress in making error messages more user-friendly. Understanding the error messages generated by compilers is a major hurdle for students who are learning programming, particularly for non-native English speakers: not only may they never become "fluent" in programming, but many give up programming altogether. A first step towards more user-friendly messages is to translate them into the students' natural language. In this paper we propose a simple script for Linux systems which gives word-by-word translations of error messages; it works for most programming languages and for all natural languages. Whereas programming is a tool that can be useful in many human activities, e.g. history, genealogy, astronomy, entomology, in many countries the skill of programming remains confined to a narrow fringe of professional programmers. In all societies, besides professional violinists there are also amateurs; it should be the same for programming. It is our hope that, once translated and explained, the error messages will be seen by students as an aid rather than an obstacle, and that in this way more students will enjoy learning and practising programming and come to see it as an enjoyable game.

    Comment: 14 pages, 1 figure
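
    The authors' script is not reproduced here, but the idea of a word-by-word gloss is easy to sketch; in the hypothetical example below, the glossary entries and the French target language are our illustrative choices.

        import sys

        GLOSS = {  # toy English -> French dictionary; a real glossary would be far larger
            "error:": "erreur :",
            "expected": "attendu",
            "undeclared": "non déclaré",
            "before": "avant",
        }

        # Assumed usage: pipe a compiler's output through the script, e.g.
        #   gcc prog.c 2>&1 | python3 gloss.py
        for line in sys.stdin:
            print(" ".join(GLOSS.get(word, word) for word in line.split()))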