
    The State of Network Neutrality Regulation

    The Network Neutrality (NN) debate refers to the battle over the design of a regulatory framework for preserving the Internet as a public network and open innovation platform. Fueled by concerns that broadband access service providers might abuse network management to discriminate against third-party providers (e.g., content or application providers), policymakers have struggled to design rules that would protect the Internet from unreasonable network management practices. In this article, we provide an overview of the history of the debate in the U.S. and the EU and highlight the challenges that will confront network engineers designing and operating networks as the debate continues to evolve.
    Funding: BMBF, 16DII111, joint project: Weizenbaum-Institut für die vernetzte Gesellschaft - Das Deutsche Internet-Institut; subproject: Wissenschaftszentrum Berlin für Sozialforschung (WZB); EC/H2020/679158/EU/Resolving the Tussle in the Internet: Mapping, Architecture, and Policy Making/ResolutioNe

    Community next steps for making globally unique identifiers work for biocollections data

    Biodiversity data is being digitized and made available online at a rapidly increasing rate, but current practices typically do not preserve linkages between these data, which impedes interoperation, provenance tracking, and assembly of larger datasets. For data associated with biocollections, the biodiversity community has long recognized that an essential part of establishing and preserving linkages is to apply globally unique identifiers at the point when data are generated in the field and to persist these identifiers downstream, but this is seldom implemented in practice. There has been neither coalescence towards one single identifier solution (as in some other domains), nor even a set of recommended best practices and standards to support multiple identifier schemes sharing consistent responses. In order to further progress towards a broader community consensus, a group of biocollections and informatics experts assembled in Stockholm in October 2014 to discuss community next steps to overcome current roadblocks. The workshop participants divided into four groups focusing on: identifier practice in current field biocollections; identifier application for legacy biocollections; identifiers as applied to biodiversity data records as they are published and made available in semantically marked-up publications; and cross-cutting identifier solutions that bridge across these domains. The main outcome was consensus on key issues, including recognition of differences between legacy and new biocollections processes, the need for identifier metadata profiles that can report information on identifier persistence missions, and the unambiguous indication of the type of object associated with the identifier. Current identifier characteristics are also summarized, and an overview of available schemes and practices is provided.
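
    A minimal Python sketch of what the abstract describes: minting a globally unique identifier when a specimen record is created in the field, together with a small identifier metadata profile that states the persistence mission and the type of object the identifier denotes. The field names and the choice of UUID-based identifiers are illustrative assumptions, not a standard endorsed by the workshop.

        import uuid
        from datetime import date

        def mint_specimen_identifier(institution_code: str) -> dict:
            """Mint a GUID for a new field record and describe it in a small
            identifier metadata profile (field names are hypothetical)."""
            guid = f"urn:uuid:{uuid.uuid4()}"  # opaque, globally unique
            return {
                "identifier": guid,
                "objectType": "PhysicalSpecimen",   # what the GUID refers to
                "issuedBy": institution_code,
                "issuedOn": date.today().isoformat(),
                "persistencePolicy": "indefinite",  # stated persistence mission
                "resolvable": False,                # a bare UUID is not resolvable by itself
            }

        if __name__ == "__main__":
            profile = mint_specimen_identifier("EXAMPLE-HERBARIUM")
            print(profile["identifier"])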

    A bibliographic metadata infrastructure for the twenty-first century

    The current library bibliographic infrastructure was constructed in the early days of computers – before the Web, XML, and a variety of other technological advances that now offer new opportunities. General requirements of a modern metadata infrastructure for libraries are identified, including such qualities as versatility, extensibility, granularity, and openness. A new kind of metadata infrastructure is then proposed that exhibits at least some of those qualities. Some key challenges that must be overcome to implement a change of this magnitude are identified.
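
    As one hedged illustration of the qualities the article asks for, the sketch below builds a small bibliographic record with Python's standard xml.etree.ElementTree, mixing a Dublin Core title and creator with locally defined, granular holdings elements. The non-Dublin-Core element names and the sample values are assumptions for illustration, not the infrastructure the article proposes.

        import xml.etree.ElementTree as ET

        # Illustrative only: extensibility via namespaces, granularity via
        # structured holdings data instead of one opaque string.
        DC = "http://purl.org/dc/elements/1.1/"
        ET.register_namespace("dc", DC)

        record = ET.Element("record")
        ET.SubElement(record, f"{{{DC}}}title").text = "An Example Title"
        ET.SubElement(record, f"{{{DC}}}creator").text = "Doe, Jane"
        holding = ET.SubElement(record, "holding",
                                attrib={"library": "Main", "status": "available"})
        ET.SubElement(holding, "callNumber").text = "QA76.9 .E58 2004"

        print(ET.tostring(record, encoding="unicode"))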

    On the emergent Semantic Web and overlooked issues

    The emergent Semantic Web, despite being in its infancy, has already received a lot of attention from academia and industry. This has resulted in an abundance of prototype systems and discussion, most of which is centred on the underlying infrastructure. However, when we critically review the work done to date, we realise that there is little discussion with respect to the vision of the Semantic Web. In particular, there is an observed dearth of discussion on how to deliver knowledge sharing in an environment such as the Semantic Web in an effective and efficient manner. There are many overlooked issues, ranging from agents and trust to hidden assumptions made with respect to knowledge representation and robust reasoning in a distributed environment. These issues could potentially hinder further development if not considered at the early stages of designing Semantic Web systems. In this perspectives paper, we aim to help engineers and practitioners of the Semantic Web by raising awareness of these issues.
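
    To make the knowledge-sharing question concrete, here is a minimal RDF example using the Python rdflib library (the choice of toolkit is an assumption; the paper does not prescribe one). The triples themselves are easy to publish, but, as the paper notes, nothing in them records which agent asserted them or why a consuming agent should trust that assertion.

        from rdflib import Graph, Literal, Namespace
        from rdflib.namespace import FOAF, RDF

        EX = Namespace("http://example.org/")

        g = Graph()
        g.add((EX.alice, RDF.type, FOAF.Person))        # class membership
        g.add((EX.alice, FOAF.name, Literal("Alice")))  # literal property
        g.add((EX.alice, FOAF.knows, EX.bob))           # link to another resource

        # Serialize for sharing with other agents; provenance and trust are
        # not captured by the triples themselves.
        print(g.serialize(format="turtle"))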

    Creating a Foursquare Communications Platform: Easy Steps to Build the Communications Capacity of Your Grantees

    Spitfire Strategies specializes in building the capacity of foundations and their grantees to plan and implement highly successful communications strategies. Over the past seven years, we have learned a lot about the right way to approach capacity building -- and the wrong way. This document offers foundations a few of the lessons we have learned when it comes to offering capacity-building opportunities to grantees.

    Winding Down the Atlantic Philanthropies: 2009-2010: Beginning the End Game

    Reviews late-term program planning, including envisioning the end of the foundation and translating that vision into concrete plans. Examines challenges and opportunities for final grantmaking in the Population Health and Children and Youth programs.

    Obamacare to Come: Seven Bad Ideas for Health Care Reform

    President Obama has made it clear that reforming the American health care system will be one of his top priorities. In response, congressional leaders have promised to introduce legislation by this summer, and they hope for an initial vote in the Senate before the Labor Day recess. While the Obama administration has not put forward a specific reform plan, and does not seem likely to, it is possible to discern the key components of any plan likely to emerge from Congress:
    1. At a time of rising unemployment, the government would raise the cost of hiring workers by requiring employers to provide health insurance to their workers or pay a fee (tax) to subsidize government coverage.
    2. Every American would be required to buy an insurance policy that meets certain government requirements. Even individuals who are currently insured -- and happy with their insurance -- would have to switch to insurance that meets the government's definition of "acceptable insurance."
    3. A government-run plan similar to Medicare would be set up in competition with private insurance, with people able to choose either private insurance or the taxpayer-subsidized public plan. Subsidies and cost-shifting would encourage Americans to shift to the government plan.
    4. The government would undertake comparative-effectiveness and cost-effectiveness research, and use the results of that research to impose practice guidelines on providers -- initially in government programs such as Medicare and Medicaid, but possibly eventually extending such rationing to private insurance plans.
    5. Private insurance would face a host of new regulations, including a requirement to insure all applicants and a prohibition on pricing premiums on the basis of risk.
    6. Subsidies would be available to help middle-income people purchase insurance, while government programs such as Medicare and Medicaid would be expanded.
    7. Finally, the government would subsidize and manage the development of a national system of electronic medical records.
    Taken individually, each of these proposals would be a bad idea. Taken collectively, they would dramatically transform the American health care system in a way that would harm taxpayers, health care providers, and -- most importantly -- the quality and range of care given to patients.

    From SpaceStat to CyberGIS: Twenty Years of Spatial Data Analysis Software

    This essay assesses the evolution of the way in which spatial data analytical methods have been incorporated into software tools over the past two decades. It is part retrospective and part prospective, going beyond a historical review to outline some ideas about important factors that drove the software development, such as methodological advances, the open source movement, and the advent of the internet and cyberinfrastructure. The review highlights activities carried out by the author and his collaborators and uses SpaceStat, GeoDa, PySAL and recent spatial analytical web services developed at the ASU GeoDa Center as illustrative examples. It outlines a vision for a spatial econometrics workbench as an example of the incorporation of spatial analytical functionality in a cyberGIS.
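
    As a small illustration of the kind of exploratory spatial data analysis these packages expose, the sketch below computes a global Moran's I on a synthetic lattice using the current, refactored PySAL packages (libpysal and esda); older releases bundled the same functionality under a single pysal namespace, and the random data here are purely for illustration.

        import numpy as np
        from libpysal.weights import lat2W
        from esda.moran import Moran

        w = lat2W(10, 10)           # rook-contiguity weights on a 10x10 lattice
        y = np.random.random(100)   # one attribute value per lattice cell
        mi = Moran(y, w)            # global Moran's I with permutation inference

        print(f"Moran's I = {mi.I:.3f}, pseudo p-value = {mi.p_sim:.3f}")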