
    Evaluation of the use of web technology by government of Sri Lanka to ensure food security for its citizens

    Web technology is one of the key areas of information and communication technology that can be used as a powerful tool in ensuring food security, which is one of the main issues in Sri Lanka. Web technology involves communicating and sharing resources across a network of computers all over the world. The main focus of food security is to ensure that all people have fair access to sufficient, quality food without endangering its future supply. In this context, websites play a vital role in achieving food security in Sri Lanka. In this case study, Sri Lankan government websites related to food security were analyzed to determine their impact on achieving the goals of food security through web technologies and how they are involved in ensuring food security in Sri Lanka. A further objective of this study is to make the Sri Lankan government aware of the present state of those websites in addressing food security issues and of how modern web technologies could be used effectively and efficiently to address them. The relevant websites were therefore checked against several criteria, and scores were used to assess their capability to address the concerns of food security. It was found that the emphasis given by these websites to the issues of food security is not satisfactory. Further, the study showed that if these websites were improved, they would have a powerful impact on ensuring food security in Sri Lanka.
    Comment: International Conference of Sabaragamuwa University of Sri Lanka 2015 (ICSUSL 2015)
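    The abstract describes the method only at a high level: websites checked against several criteria, with scores summarizing their capability. As a rough illustration of such a rubric, the Python sketch below uses entirely hypothetical criteria and weights; the study's actual criteria are not given in the abstract.

```python
# Hypothetical criteria and weights, for illustration only; these are not
# the criteria used in the study.
CRITERIA_WEIGHTS = {
    "food_security_content": 3,  # dedicated pages, data or advisories on food security
    "timeliness": 2,             # how recently the relevant content was updated
    "accessibility": 1,          # local-language versions, mobile friendliness
    "interactivity": 1,          # feedback channels, alerts, open-data services
}

def score_website(ratings):
    """Combine per-criterion ratings on a 0-5 scale into a weighted score out of 100."""
    max_score = 5 * sum(CRITERIA_WEIGHTS.values())
    achieved = sum(weight * ratings.get(criterion, 0)
                   for criterion, weight in CRITERIA_WEIGHTS.items())
    return 100 * achieved / max_score

# Example: a site strong on content but rarely updated scores about 54/100.
print(score_website({"food_security_content": 4, "timeliness": 1,
                     "accessibility": 3, "interactivity": 2}))
```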

    Networks without wires: Human networks in the Information Society

    It is the purpose of this paper to argue that the very significant skills we have brought as a profession to making the printed word uniformly and universally available have been overlooked. An electronic environment is being created which is inimical to scholarship and which is largely being designed by commercial and entertainment forces irrelevant to the scholarly process. Even if that environment is modified and the issues described are resolved, it will remain an essentially hostile commercial environment. The academy remains largely unaware of the dangers, particularly in the area of preservation of both primary and secondary research resources. Our electronic house is built on shifting sands. A much more active approach is required from the profession to demonstrate that we can, like Sisyphus, re-climb the hill of bibliographic control and access, and use that most basic skill of library school courses, the Organisation of Knowledge, to define scholarly requirements for the emerging information society. It is in fact by ensuring that our human networks are active and effective, and by managing the flow of paper-based information effectively, that we can best serve our readers, earn their professional respect, and position ourselves to act as guides to, rather than bystanders at, the information revolution.

    Adaptive Partitioning for Large-Scale Dynamic Graphs

    In recent years, large-scale graph processing has gained increasing attention, with most recent systems placing particular emphasis on latency. One possible technique to improve runtime performance in a distributed graph processing system is to reduce network communication. The most notable way to achieve this goal is to partition the graph by minimizing the number of edges that connect vertices assigned to different machines, while keeping the load balanced. However, real-world graphs are highly dynamic, with vertices and edges being constantly added and removed. Carefully updating the partitioning of the graph to reflect these changes is necessary to avoid introducing an excessive number of cut edges, which would gradually worsen computation performance. In this paper we show that performance degradation in dynamic graph processing systems can be avoided by continuously adapting the graph partitions as the graph changes. We present a novel, highly scalable adaptive partitioning strategy, and show a number of refinements that make it work under the constraints of a large-scale distributed system. The partitioning strategy is based on iterative vertex migrations, relying only on local information. We have implemented the technique in a graph processing system, and we show through three real-world scenarios how adapting the graph partitioning reduces execution time by over 50% compared to commonly used hash partitioning.
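    The contrast the abstract draws can be illustrated with a toy example. The Python sketch below (a simplification of the general idea, not the paper's actual algorithm, using hypothetical data structures) shows a hash-based baseline next to one round of greedy, purely local vertex migration, where each vertex moves toward the partition holding most of its neighbours, subject to a capacity cap.

```python
from collections import Counter

def hash_partition(vertices, num_parts):
    """Baseline: assign each vertex to a partition by hashing its id.
    This ignores the edge structure entirely."""
    return {v: hash(v) % num_parts for v in vertices}

def migrate_round(assignment, adjacency, capacity):
    """One round of greedy, local migration: each vertex looks only at the
    partitions of its immediate neighbours and moves to the one holding
    most of them, provided that partition still has spare capacity."""
    loads = Counter(assignment.values())
    updated = dict(assignment)
    for v, neighbours in adjacency.items():
        if not neighbours:
            continue
        target = Counter(assignment[u] for u in neighbours).most_common(1)[0][0]
        current = updated[v]
        if target != current and loads[target] < capacity:
            updated[v] = target
            loads[target] += 1
            loads[current] -= 1
    return updated

# Tiny usage example with a hypothetical 4-vertex graph on 2 partitions.
adjacency = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"], "d": []}
assignment = hash_partition(adjacency.keys(), num_parts=2)
assignment = migrate_round(assignment, adjacency, capacity=3)
```

    Repeating such rounds as vertices and edges are added or removed is what keeps the number of cut edges from drifting upward over time.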

    Get yourself connected: conceptualising the role of digital technologies in Norwegian career guidance

    This report outlines the role of digital technologies in the provision of career guidance. It was commissioned by the committee on career guidance which is advising the Norwegian Government following a review of the country's skills system by the OECD. In this report we argue that career guidance, and online career guidance in particular, can support the development of Norway's skills system to help meet the economic challenges that it faces.
    The expert committee advising Norway's Career Guidance Initiative

    Notes on Cloud computing principles

    This letter provides a review of fundamental distributed systems and economic Cloud computing principles. These principles are frequently deployed in their respective fields, but their inter-dependencies are often neglected. Given that Cloud computing is first and foremost a new business model, a new model for selling computational resources, an understanding of these concepts is facilitated by treating them in unison. Here, we review some of the most important concepts and how they relate to each other.

    Quality Assessment of Linked Datasets using Probabilistic Approximation

    With the increasing application of Linked Open Data, assessing the quality of datasets by computing quality metrics becomes an issue of crucial importance. For large and evolving datasets, an exact, deterministic computation of the quality metrics is too time-consuming or expensive. We employ probabilistic techniques such as Reservoir Sampling, Bloom Filters and Clustering Coefficient estimation to implement a broad set of data quality metrics in an approximate but sufficiently accurate way. Our implementation is integrated into the comprehensive data quality assessment framework Luzzu. We evaluated its performance and accuracy on Linked Open Datasets of broad relevance.
    Comment: 15 pages, 2 figures, to appear in ESWC 2015 proceedings
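    Reservoir Sampling is the simplest of the probabilistic techniques the abstract mentions, and a short sketch shows why it suits a single pass over a large dataset. The Python snippet below implements the classic Algorithm R; the triple-sampling usage at the end is a hypothetical illustration of how such a sample could feed an approximate quality metric, not code from Luzzu.

```python
import random

def reservoir_sample(stream, k, seed=None):
    """Algorithm R: keep a uniform random sample of k items from a stream
    of unknown length, using a single pass and O(k) memory."""
    rng = random.Random(seed)
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)
        else:
            j = rng.randint(0, i)  # inclusive on both ends
            if j < k:
                reservoir[j] = item
    return reservoir

# Hypothetical usage: estimate the fraction of triples whose object is a URI
# (a toy stand-in for a real quality metric) without scanning the whole dataset.
triples = (("ex:s", "ex:p", "http://example.org/o" if i % 3 else '"a literal"')
           for i in range(1_000_000))
sample = reservoir_sample(triples, k=10_000, seed=42)
estimate = sum(1 for (_, _, o) in sample if o.startswith("http")) / len(sample)
print(f"approx. share of URI objects: {estimate:.3f}")
```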