
    Important Lessons Derived from X.500 Case Studies

    X.500 is a new and complex electronic directory technology whose base specification was first published as an international standard in 1988, with an enhanced revision in 1993. The technology is still unproven in many organisations. This paper presents case studies of 15 pioneering pilot and operational X.500-based directory services. The paper provides valuable insights into how organisations are coming to understand this new technology, are using X.500 for both traditional and novel directory-based services, and consequently are deriving benefits from it. Important lessons learnt by these X.500 pioneers are presented here, so that future organisations can benefit from their experiences. Factors critical to the success of implementing X.500 in an organisation are derived from the studies.

    An Analysis of Next Generation Access Networks Deployment in rural areas

    Next generation access networks (NGAN) will support a renewed electronic communications market whose main opportunities lie in the provision of ubiquitous broadband connectivity, applications and content. A wealth of innovations is expected from their deployment. Within this framework, the project reviews the variety of NGAN deployment options available for rural environments, derives a simple method for approximate cost calculations, and then discusses and compares the results obtained. Data for Spain are used for the practical calculations, but the model is applicable, with minor modifications, to most rural areas of European countries. The final part of the paper reviews the techno-economic implications of a network deployment in a rural environment, as well as the adequacy and possible developments of the regulatory framework involved.
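    As a rough illustration of what such a "simple method for approximate cost calculations" can look like, the sketch below compares cost per subscriber for a wired and a wireless rural deployment. The function and every figure in it are hypothetical placeholders, not values from the paper:

```python
# Hypothetical back-of-the-envelope cost model in the spirit of the paper's
# "simple method for approximate cost calculations". All parameters and
# figures below are illustrative assumptions, not data from the study.

def cost_per_subscriber(fixed_cost, cost_per_km, km_of_plant, premises, take_rate):
    """Total deployment cost divided by the premises actually subscribing."""
    total = fixed_cost + cost_per_km * km_of_plant
    return total / (premises * take_rate)

# A sparse rural cluster: 200 premises, 30% of which subscribe.
ftth = cost_per_subscriber(fixed_cost=50_000, cost_per_km=15_000,
                           km_of_plant=40, premises=200, take_rate=0.3)
wireless = cost_per_subscriber(fixed_cost=80_000, cost_per_km=2_000,
                               km_of_plant=5, premises=200, take_rate=0.3)
print(f"FTTH:           ~{ftth:,.0f} EUR per subscriber")
print(f"Fixed wireless: ~{wireless:,.0f} EUR per subscriber")
```

    The general pattern such a model captures is that in sparse areas the per-kilometre civil-works cost dominates fibre deployments, which is why wireless options often come out ahead in rural comparisons.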

    PoliSave: Efficient Power Management of Campus PCs

    In this paper we study the power consumption of networked devices in a large Campus network, focusing mainly on PC usage. We first define a methodology to monitor host power state, which we then apply to our Campus network. Results show that people typically refrain from turning off their PCs during non-working hours, so that more than 1500 PCs are always powered on, causing a large energy waste. We then design PoliSave, a simple web-based architecture which allows users to schedule the power state of their PCs, avoiding the frustration of the long power-down and bootstrap times of today's PCs. By exploiting already available technologies like Wake-on-LAN, hibernation and Web services, PoliSave reduces the average PC uptime from 15.9h to 9.7h during working days, generating an energy saving of 0.6kWh per PC per day, or a saving of more than 250,000 Euros per year considering our Campus University.
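    The remote power-up that makes such scheduling practical is Wake-on-LAN, whose "magic packet" is simply 6 bytes of 0xFF followed by the target's MAC address repeated 16 times, sent as a UDP broadcast. A minimal Python sketch follows; the MAC address is a placeholder, and the surrounding scheduling logic is assumed rather than taken from PoliSave:

```python
import socket

def send_magic_packet(mac: str, broadcast: str = "255.255.255.255",
                      port: int = 9) -> None:
    """Send a Wake-on-LAN magic packet: 6 bytes of 0xFF followed by the
    target MAC address repeated 16 times, as a UDP broadcast."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    payload = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(payload, (broadcast, port))

# Placeholder MAC; a scheduler like PoliSave would look up the PC a user
# registered and wake it shortly before the scheduled power-on time.
send_magic_packet("00:11:22:33:44:55")
```

    For scale, the reported figures imply an average draw of roughly 0.6kWh / (15.9h - 9.7h) ≈ 100W per PC, a plausible value for a desktop left idle.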

    Preventing False Discovery in Interactive Data Analysis is Hard

    We show that, under a standard hardness assumption, there is no computationally efficient algorithm that given n samples from an unknown distribution can give valid answers to n^{3+o(1)} adaptively chosen statistical queries. A statistical query asks for the expectation of a predicate over the underlying distribution, and an answer to a statistical query is valid if it is "close" to the correct expectation over the distribution. Our result stands in stark contrast to the well-known fact that exponentially many statistical queries can be answered validly and efficiently if the queries are chosen non-adaptively (no query may depend on the answers to previous queries). Moreover, a recent work by Dwork et al. shows how to accurately answer exponentially many adaptively chosen statistical queries via a computationally inefficient algorithm; and how to answer a quadratic number of adaptive queries via a computationally efficient algorithm. The latter result implies that our result is tight up to a linear factor in n. Conceptually, our result demonstrates that achieving statistical validity alone can be a source of computational intractability in adaptive settings. For example, in the modern large collaborative research environment, data analysts typically choose a particular approach based on previous findings. False discovery occurs if a research finding is supported by the data but not by the underlying distribution. While the study of preventing false discovery in Statistics is decades old, to the best of our knowledge our result is the first to demonstrate a computational barrier. In particular, our result suggests that the perceived difficulty of preventing false discovery in today's collaborative research environment may be inherent.
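    To make the role of adaptivity concrete, here is a toy sketch (not from the paper): the naive mechanism answers each statistical query with its empirical mean over the fixed sample, which is accurate for non-adaptively chosen queries, yet an analyst who chooses one final query based on the previous answers can already produce an empirical answer far from the true expectation:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 100, 1000  # n samples, d adaptively chosen base queries
# True distribution: each coordinate is a fair +/-1 coin, so every
# statistical query q_i(x) = x_i has true expectation 0.
sample = rng.choice([-1.0, 1.0], size=(n, d))

def answer(query_vals):
    """Empirical-mean mechanism: answer a statistical query with its
    average over the fixed sample (valid for non-adaptive queries)."""
    return query_vals.mean()

# Adaptive analyst: ask the d coordinate queries, then use the answers to
# build one final query aligned with the sample's random fluctuations.
answers = np.array([answer(sample[:, i]) for i in range(d)])
signs = np.sign(answers + 1e-12)
final_query = (sample * signs).mean(axis=1)  # q(x) = mean_i sign_i * x_i

print("empirical answer to final query:", answer(final_query))
print("true expectation of final query:", 0.0)
```

    With n = 100 and d = 1000 the final empirical answer comes out around 0.08, while the true expectation is exactly 0 and an estimate from a fresh sample would land within about 0.003 of it; the deviation is manufactured purely by adaptivity.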