8,193 research outputs found

    A Peer-to-Peer Middleware Framework for Resilient Persistent Programming

    The persistent programming systems of the 1980s offered a programming model that integrated computation and long-term storage. In these systems, reliable applications could be engineered without requiring the programmer to write translation code to manage the transfer of data to and from non-volatile storage. More importantly, this model simplified the programmer's conceptual model of an application and avoided the many coherency problems that result from multiple cached copies of the same information. Although technically innovative, persistent languages were not widely adopted, perhaps due in part to their closed-world model: each persistent store was located on a single host, and there were no flexible mechanisms for communication or transfer of data between separate stores. Here we re-open the work on persistence and combine it with modern peer-to-peer techniques in order to support orthogonal persistence in resilient, potentially long-running distributed applications. Our vision is of an infrastructure within which an application can be developed and distributed with minimal modification, whereupon it becomes resilient to certain failure modes. If a node, or the connection to it, fails during execution of the application, the objects are re-instantiated from distributed replicas, without their reference holders being aware of the failure. Furthermore, we believe that this can be achieved within a spectrum of application programmer intervention, ranging from minimal to totally prescriptive, as desired. The same mechanisms encompass an orthogonally persistent programming model. We outline our approach to implementing this vision, and describe current progress.
    Comment: Submitted to EuroSys 200
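
    The failure-transparency idea described above can be sketched in miniature. The following Python fragment is illustrative only (all names are invented here, and the paper's actual middleware is not shown): a reference holder dereferences an object through an ordered list of replica stores and silently fails over when a node is unreachable, so the caller never observes the failure.

        import pickle

        class ReplicaStore:
            """Toy stand-in for one storage node holding pickled object state."""
            def __init__(self):
                self._data = {}
                self.failed = False            # set True to simulate a node failure

            def put(self, key, obj):
                self._data[key] = pickle.dumps(obj)

            def get(self, key):
                if self.failed:
                    raise ConnectionError("node unreachable")
                return pickle.loads(self._data[key])

        class ResilientRef:
            """Reference that re-instantiates its object from a replica when the
            node holding the current copy fails, hiding the failure from callers."""
            def __init__(self, key, stores):
                self.key = key
                self.stores = stores           # ordered list of replica nodes

            def deref(self):
                for store in self.stores:
                    try:
                        return store.get(self.key)   # first reachable replica wins
                    except ConnectionError:
                        continue                     # fail over silently
                raise RuntimeError("all replicas unreachable")

        # Usage: the caller never learns that the first node failed.
        primary, backup = ReplicaStore(), ReplicaStore()
        for s in (primary, backup):
            s.put("account", {"owner": "alice", "balance": 42})
        primary.failed = True                  # simulated node/connection failure
        ref = ResilientRef("account", [primary, backup])
        print(ref.deref())                     # {'owner': 'alice', 'balance': 42}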

    Related variety and regional growth in Italy

    Keywords: Research & Development, Multinational Firms, Location Strategies

    Open Data, Grey Data, and Stewardship: Universities at the Privacy Frontier

    As universities recognize the inherent value in the data they collect and hold, they encounter unforeseen challenges in stewarding those data in ways that balance accountability, transparency, and protection of privacy, academic freedom, and intellectual property. Two parallel developments in academic data collection are converging: (1) open access requirements, whereby researchers must provide access to their data as a condition of obtaining grant funding or publishing results in journals; and (2) the vast accumulation of 'grey data' about individuals in their daily activities of research, teaching, learning, services, and administration. The boundaries between research data and grey data are blurring, making it more difficult to assess the risks and responsibilities associated with any data collection. Many sets of data, both research and grey, fall outside privacy regulations such as HIPAA and FERPA, and outside the scope of rules governing personally identifiable information (PII). Universities are exploiting these data for research, learning analytics, faculty evaluation, strategic decisions, and other sensitive matters. Commercial entities are besieging universities with requests for access to data or for partnerships to mine them. The privacy frontier facing research universities spans open access practices, uses and misuses of data, public records requests, cyber risk, and curating data for privacy protection. This paper explores the competing values inherent in data stewardship and makes recommendations for practice, drawing on the pioneering work of the University of California in privacy and information security, data governance, and cyber risk.
    Comment: Final published version, Sept 30, 201

    Visual communication in urban planning and urban design

    This report documents the current status of visual communication in urban design and planning. Visual communication is examined through discussion of standalone and network media, specifically concentrating on visualisation on the World Wide Web (WWW).

    Firstly, we examine the use of solid and geometric modelling for visualising urban planning and urban design. This report documents and compares examples of the use of Virtual Reality Modelling Language (VRML) and proprietary WWW-based Virtual Reality modelling software. Examples include the modelling of Bath and Glasgow using both VRML 1.0 and 2.0. A review is carried out on the use of Virtual Worlds and their role in visualising urban form within multi-user environments. The use of Virtual Worlds is developed into a case study of the possibilities and limitations of Virtual Internet Design Arenas (ViDAs), an initiative undertaken at the Centre for Advanced Spatial Analysis, University College London. The use of Virtual Worlds and their development towards ViDAs is seen as one of the most important developments in visual communication for urban planning and urban design since the development plan.

    Secondly, photorealistic media in the process of communicating plans is examined. The process of creating photorealistic media is documented, and examples of the Virtual Streetscape and Wired Whitehall Virtual Urban Interface System are provided. The conclusion is drawn that although the use of photorealistic media on the WWW provides a way to visually communicate planning information, its use is limited. The merging of photorealistic media and solid geometric modelling is reviewed in the creation of Augmented Reality, which is seen to provide an important step forward in the ability to quickly and easily visualise urban planning and urban design information.

    Thirdly, the role of visual communication of planning data through GIS is examined in terms of desktop, three-dimensional and Internet-based GIS systems. The evolution to Internet GIS is seen as a critical component in the development of virtual cities, which will allow urban planners and urban designers to visualise and model the complexity of the built environment in networked virtual reality.

    Finally, a viewpoint is put forward of the Virtual City, linking Internet GIS with photorealistic multi-user Virtual Worlds. At present there are constraints on how far virtual cities can be developed, but a view is provided on how these networked virtual worlds are developing to aid visual communication in urban planning and urban design.
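
    As a concrete taste of the solid modelling discussed above, the short Python script below (purely illustrative; the footprint data, dimensions and file name are invented, and it is not taken from the report) extrudes rectangular building footprints into boxes and writes them out as a VRML 2.0 scene of the kind viewable in the WWW-based browsers the report surveys.

        # Emit a minimal VRML 2.0 scene: each building footprint
        # (x, z, width, depth, height, in metres) becomes an extruded box.
        buildings = [
            (0.0, 0.0, 10.0, 8.0, 15.0),
            (20.0, 5.0, 12.0, 12.0, 30.0),
        ]

        def building_node(x, z, w, d, h):
            # VRML boxes are centred on their origin, so lift each box by h/2.
            return (
                "Transform {\n"
                f"  translation {x} {h / 2} {z}\n"
                "  children Shape {\n"
                "    appearance Appearance { material Material { diffuseColor 0.8 0.8 0.8 } }\n"
                f"    geometry Box {{ size {w} {h} {d} }}\n"
                "  }\n"
                "}\n"
            )

        with open("city.wrl", "w") as f:
            f.write("#VRML V2.0 utf8\n")
            for b in buildings:
                f.write(building_node(*b))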

    Web Data Extraction, Applications and Techniques: A Survey

    Web Data Extraction is an important problem that has been studied by means of different scientific tools and in a broad range of applications. Many approaches to extracting data from the Web have been designed to solve specific problems and operate in ad-hoc domains. Other approaches, instead, heavily reuse techniques and algorithms developed in the field of Information Extraction. This survey aims at providing a structured and comprehensive overview of the literature in the field of Web Data Extraction. We provide a simple classification framework in which existing Web Data Extraction applications are grouped into two main classes, namely applications at the Enterprise level and at the Social Web level. At the Enterprise level, Web Data Extraction techniques emerge as a key tool for performing data analysis in Business and Competitive Intelligence systems as well as for business process re-engineering. At the Social Web level, Web Data Extraction techniques make it possible to gather large amounts of structured data continuously generated and disseminated by Web 2.0, Social Media and Online Social Network users, which offers unprecedented opportunities to analyze human behavior at a very large scale. We also discuss the potential for cross-fertilization, i.e., the possibility of re-using Web Data Extraction techniques originally designed to work in a given domain in other domains.
    Comment: Knowledge-based System
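
    To make the flavour of these techniques concrete, here is a minimal rule-based extractor built only on Python's standard library (the HTML markup and class names are invented for illustration; the systems covered by the survey are far more sophisticated and typically learn or induce such extraction rules):

        from html.parser import HTMLParser

        class ProductExtractor(HTMLParser):
            """Toy extractor: collect (name, price) records from HTML that tags
            the fields with class="name" / class="price"."""
            def __init__(self):
                super().__init__()
                self.records = []
                self._field = None          # field the parser is currently inside
                self._current = {}          # record being assembled

            def handle_starttag(self, tag, attrs):
                cls = dict(attrs).get("class")
                if cls in ("name", "price"):
                    self._field = cls

            def handle_data(self, data):
                if self._field:
                    self._current[self._field] = data.strip()
                    self._field = None
                    if {"name", "price"} <= self._current.keys():
                        self.records.append(self._current)
                        self._current = {}

        html = '<li><span class="name">Widget</span> <span class="price">9.99</span></li>'
        parser = ProductExtractor()
        parser.feed(html)
        print(parser.records)               # [{'name': 'Widget', 'price': '9.99'}]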

    Evolving database systems: a persistent view

    Submitted to POS7. This work was supported in St Andrews by EPSRC Grant GR/J67611 "Delivering the Benefits of Persistence".
    Orthogonal persistence ensures that information will exist for as long as it is useful, for which it must have the ability to evolve with the growing needs of the application systems that use it. This may involve evolution of the data, meta-data, programs and applications, as well as the users' perception of what the information models. The need for evolution has been well recognised in the traditional (data processing) database community, and the cost of failing to evolve can be gauged by the resources being invested in interfacing with legacy systems. Zdonik has identified new classes of application, such as scientific, financial and hypermedia, that require new approaches to evolution. These applications are characterised by their need to store large amounts of data whose structure must evolve as it is discovered by the applications that use it. This requires that the data be mapped dynamically to an evolving schema. Here, we discuss the problems of evolution in these new classes of application within an orthogonally persistent environment and outline some approaches to these problems.
    Postprint
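
    One common way to realise the dynamic mapping of data to an evolving schema mentioned above is lazy, version-tagged migration. The Python sketch below (names and fields invented for illustration; it does not reproduce the paper's approach) tags each record with the schema version it was written under and upgrades it on read through a chain of migration functions, so stored data never has to be rewritten eagerly.

        # Each record carries the schema version it was written under ('_v').
        # On read, it is upgraded step by step until no migration applies.
        MIGRATIONS = {
            # v1 -> v2: split a single 'name' field into 'first'/'last'
            1: lambda r: {**{k: v for k, v in r.items() if k != "name"},
                          "first": r["name"].split()[0],
                          "last": r["name"].split()[-1],
                          "_v": 2},
        }

        def read(record):
            """Bring a stored record up to the newest schema before use."""
            while record["_v"] in MIGRATIONS:
                record = MIGRATIONS[record["_v"]](record)
            return record

        old = {"_v": 1, "name": "Ada Lovelace", "dept": "maths"}
        print(read(old))
        # {'_v': 2, 'dept': 'maths', 'first': 'Ada', 'last': 'Lovelace'}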