
    Information overload: CCTV, your networks, communities and crime

    Electronic surveillance continues to play a central but often unobserved role in contemporary Western societies and in attempts to police them. This paper focuses on closed-circuit television (CCTV) footage and its technological implications, particularly those relating to infrastructure, data storage, and data integrity. While CCTV might appear attractive as an augmentation of law enforcement systems, the authors argue that the debate on the use of CCTV in crime prevention remains incomplete without an effective understanding of its diverse costs. The discussion reveals startling ICT resource needs and associated costs, together with requirements for very specific technological capacity. These contribute significantly to the cost of such systems, reinforcing the authors’ argument that CCTV is no golden bullet for law enforcement.
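
    To give a sense of the scale involved, the following back-of-envelope sketch estimates raw storage demand for continuous recording. Every parameter value is an assumption chosen for illustration, not a figure from the paper.

```python
# Back-of-envelope CCTV storage estimate. All parameter values are
# illustrative assumptions, not figures from the paper.

CAMERAS = 200         # cameras in a hypothetical city-centre deployment
BITRATE_MBPS = 4      # compressed video stream per camera, in megabits/s
RETENTION_DAYS = 31   # assumed retention period for footage
REDUNDANCY = 2        # mirrored copies kept for data integrity

SECONDS_PER_DAY = 86_400

def storage_terabytes(cameras, mbps, days, redundancy=1):
    """Archive size in terabytes for continuous recording."""
    bits = cameras * mbps * 1_000_000 * SECONDS_PER_DAY * days
    return bits / 8 / 1e12 * redundancy

tb = storage_terabytes(CAMERAS, BITRATE_MBPS, RETENTION_DAYS, REDUNDANCY)
print(f"~{tb:,.0f} TB of storage per retention window")  # ~536 TB
```

    Even under these modest assumptions, a single mid-sized deployment runs to hundreds of terabytes per retention window, before counting network and integrity-checking overheads.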

    Business Intelligence (BI) Critical Success Factors

    Companies are increasingly focussing their information systems efforts around Business Intelligence (BI) solutions. The benefits realised from BI vary significantly from company to company. BI systems are now being used as extensions of Enterprise Resource Planning (ERP) systems, as they consolidate, transform, and analyse the vast amounts of data generated by the firm. Much attention has been given to the identification of critical success factors (CSFs) associated with the adoption of ERP systems. However, only limited research has focussed on the CSFs associated with BI implementations within an ERP system environment. Hence, this research documents BI-specific critical success factors that industry partners, vendors, or system users have identified in their presentations at conferences, education forums, or formal user group meetings.

    TechNews digests: Jan - Nov 2008

    TechNews is a technology news and analysis service aimed at anyone in the education sector keen to stay informed about technology developments, trends, and issues. TechNews focuses on emerging technologies and other technology news. The TechNews service ran from September 2004 until May 2010, publishing digests that combined analysis pieces and news every two to three months.

    User Variability and IR System Evaluation

    Test collection design eliminates sources of user variability to make statistical comparisons among information retrieval (IR) systems more affordable. Does this choice unnecessarily limit the generalizability of the outcomes to real usage scenarios? We explore two aspects of user variability with regard to evaluating the relative performance of IR systems, assessing effectiveness in the context of a subset of topics from three TREC collections, with the embodied information needs categorized against three levels of increasing task complexity. First, we explore the impact of the widely differing queries that searchers construct for the same information need description. By executing those queries, we demonstrate that query formulation is critical to query effectiveness. The results also show that the range of scores characterizing the effectiveness of a single system across these queries is comparable to, or greater than, the range of scores across systems when only a single query per topic is used. Second, our experiments reveal that searchers display substantial individual variation in the numbers of documents and queries they anticipate needing, and that these numbers differ significantly in line with increasing task complexity. We conclude that test collection design would be improved by the use of multiple query variations per topic, and could be further improved by the use of metrics that are sensitive to the expected numbers of useful documents.
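
    The first experiment’s core comparison can be sketched as follows; the effectiveness scores below are fabricated illustration data rather than TREC results, but they show the kind of range comparison the abstract describes.

```python
# Sketch of the range comparison described above: score spread across
# query variations for one system vs. across systems for one query.
# All scores are fabricated illustration data, not results from the paper.

# effectiveness[system] = scores (e.g., NDCG@10) for one topic, one score
# per user-constructed query variation of the same information need
effectiveness = {
    "system_A": [0.62, 0.31, 0.55, 0.18],
    "system_B": [0.58, 0.29, 0.60, 0.22],
    "system_C": [0.49, 0.35, 0.52, 0.15],
}

def score_range(scores):
    return max(scores) - min(scores)

# Spread within one system, across query formulations
within = {name: score_range(s) for name, s in effectiveness.items()}

# Spread across systems, holding the query fixed (first variation only)
across = score_range([s[0] for s in effectiveness.values()])

print("range across query variations, per system:", within)
print("range across systems, single query:", round(across, 2))
```

    In this toy data the per-system ranges (around 0.4) dwarf the cross-system range (0.13), mirroring the paper’s finding that query formulation can matter more than system choice.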

    Managing tail latency in large scale information retrieval systems

    As both the availability of internet access and the prominence of smart devices continue to increase, data is being generated at a rate faster than ever before. This massive increase in data production comes with many challenges, including efficiency concerns for the storage and retrieval of such large-scale data. However, users have grown to expect the sub-second response times that are common in most modern search engines, creating a problem - how can such large amounts of data continue to be served efficiently enough to satisfy end users? This dissertation investigates several issues regarding tail latency in large-scale information retrieval systems. Tail latency corresponds to the high-percentile latency that is observed from a system - in the case of search, this latency typically corresponds to how long it takes for a query to be processed. In particular, keeping tail latency as low as possible translates to a good experience for all users, as tail latency is directly related to the worst-case latency and hence, the worst possible user experience. The key idea in targeting tail latency is to move from questions such as "what is the median latency of our search engine?" to questions which more accurately capture user experience, such as "how many queries take more than 200ms to return answers?" or "what is the worst-case latency that a user may be subject to, and how often might it occur?" While various strategies exist for efficiently processing queries over large textual corpora, prior research has focused almost entirely on improvements to the average processing time or cost of search systems. As a first contribution, we examine some state-of-the-art retrieval algorithms for two popular index organizations, and discuss the trade-offs between them, paying special attention to the notion of tail latency. This research uncovers a number of observations that are subsequently leveraged for improved search efficiency and effectiveness. We then propose and solve a new problem, which involves processing a number of related queries together, known as multi-queries, to yield higher quality search results. We experiment with a number of algorithmic approaches to efficiently process these multi-queries, and report on the cost, efficiency, and effectiveness trade-offs present with each. Ultimately, we find that some solutions yield a low tail latency, and are hence suitable for use in real-time search environments. Finally, we examine how predictive models can be used to improve the tail latency and end-to-end cost of a commonly used multi-stage retrieval architecture without impacting result effectiveness. By combining ideas from numerous areas of information retrieval, we propose a prediction framework which can be used for training and evaluating several efficiency/effectiveness trade-off parameters, resulting in improved trade-offs between cost, result quality, and tail latency.
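
    The measurement shift the abstract describes, from median latency to tail percentiles and threshold counts, can be sketched as follows; the latencies are synthetic, and the 200 ms threshold follows the example in the text.

```python
import random
import statistics

# Synthetic per-query latencies in milliseconds; a real engine would log
# these during query processing. The distribution here is an assumption.
random.seed(1)
latencies = [random.lognormvariate(4.0, 0.8) for _ in range(10_000)]

def percentile(values, p):
    """p-th percentile by nearest rank over sorted data."""
    ordered = sorted(values)
    k = min(len(ordered) - 1, max(0, round(p / 100 * len(ordered)) - 1))
    return ordered[k]

print(f"median     : {statistics.median(latencies):7.1f} ms")
print(f"p95        : {percentile(latencies, 95):7.1f} ms")
print(f"p99        : {percentile(latencies, 99):7.1f} ms  <- tail latency")
print(f"over 200 ms: {sum(t > 200 for t in latencies)} of {len(latencies)} queries")
```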

    Enhancing reliability with Latin Square redundancy on desktop grids.

    Computational grids are some of the largest computer systems in existence today. Unfortunately they are also, in many cases, the least reliable. This research examines the use of redundancy with permutation as a method of improving reliability in computational grid applications. Three primary avenues are explored - development of a new redundancy model, the Replication and Permutation Paradigm (RPP), for computational grids; development of grid simulation software for testing RPP against other redundancy methods; and, finally, running a program on a live grid using RPP. An important part of RPP involves distributing data and tasks across the grid in Latin Square fashion. Two theorems, with proofs, regarding Latin Squares are developed. The theorems describe the changing position of symbols between the rows of a standard Latin Square: when a symbol is missing because a column has been removed, the theorems provide a basis for determining the next row and column where the missing symbol can be found. Interesting in their own right, the theorems also have implications for redundancy: they allow one to state the maximum makespan in the face of missing computational hosts when using Latin Square redundancy. The simulator software was developed and used to compare different data and task distribution schemes on a simulated grid. The software clearly showed the advantage of running RPP, which resulted in faster completion times in the face of computational host failures. The Latin Square method also fails gracefully, in that jobs still complete under massive node failure, albeit with increased makespan. Finally, an Inductive Logic Programming (ILP) pharmacophore search was executed, using the Latin Square redundancy methodology, on a Condor grid in the Dahlem Lab at the University of Louisville Speed School of Engineering. All jobs completed, even in the face of large numbers of randomly generated computational host failures.
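
    A minimal sketch of the placement idea, assuming the standard cyclic construction L[i][j] = (i + j) mod n (the abstract does not state the exact construction used): rows act as replicas, columns as hosts, and symbols as tasks. Because each symbol appears exactly once in every row and column, removing host columns still leaves every task recoverable elsewhere.

```python
# Latin Square task placement sketch. The cyclic construction below is a
# standard one assumed for illustration; the dissertation's exact scheme
# and theorems may differ in detail.

def cyclic_latin_square(n):
    """n x n square: rows are replicas, columns hosts, symbols task ids."""
    return [[(i + j) % n for j in range(n)] for i in range(n)]

def surviving_tasks(square, failed_hosts):
    """Task ids still present after losing the given host columns."""
    return {task
            for row in square
            for col, task in enumerate(row)
            if col not in failed_hosts}

n = 5
square = cyclic_latin_square(n)
# Losing two of five hosts still leaves every task on some live host,
# since each symbol occurs exactly once per column.
assert surviving_tasks(square, failed_hosts={1, 3}) == set(range(n))
# For the cyclic square, a symbol lost from column j of row i reappears
# at column (j - 1) % n of row i + 1 -- the flavour of result the
# theorems generalize.
```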

    Operator-based approaches to harm minimisation in gambling: summary, review and future directions

    In this report we give critical consideration to the nature and effectiveness of harm minimisation in gambling. We identify gambling-related harm as both personal (e.g., health, wellbeing, relationships) and economic (e.g., financial) harm that occurs from exceeding one’s disposable income or disposable leisure time. We have elected to use the term ‘harm minimisation’ as the most appropriate term for reducing the impact of problem gambling, given its breadth in regard to both the range of goals it seeks to achieve and the range of means by which they may be achieved. The extent to which an employee can proactively identify a problem gambler in a gambling venue is uncertain. Research suggests that indicators do exist, such as sessional information (e.g., duration or frequency of play) and negative emotional responses to gambling losses. However, the practical implications of requiring employees to identify and interact with customers suspected of experiencing harm are questionable, particularly as the employees may not possess the clinical intervention skills that may be necessary. Based on emerging evidence, behavioural indicators identifiable in industry-held data could be used to identify customers experiencing harm. A programme of research is underway in Great Britain and in other jurisdictions.

    Implementation of an information retrieval system within a central knowledge management system

    Numbered pages: I-XIII, 14-126. Internship carried out at Wipro Portugal SA, supervised by Eng. Hugo Neto. Integrated master’s dissertation in Informatics and Computing Engineering, Faculdade de Engenharia, Universidade do Porto. 201