Usability of the Access Control System for OpenLDAP
This thesis addresses the usability of the Access Control System of OpenLDAP. OpenLDAP is an open-source implementation of the Lightweight Directory Access Protocol (LDAP), a protocol for communicating with a directory service. A directory service is a database that stores information about network resources, such as files, printers and users. An access control system is the mechanism that mediates access (for example, read or write) to a resource by a user. The access control system makes these decisions based on an access control policy, which states who should have access to what. We hypothesize that the access control system of OpenLDAP has poor usability. By usability, in this context, we mean how easy it is for a systems administrator to encode a high-level, informally expressed enterprise security policy as an access control policy in the syntax that OpenLDAP expects. We discuss the design and execution of a human-subject study to validate this hypothesis. The study consists of presenting a high-level policy to the participants and asking them to translate it into an OpenLDAP policy. The study has been approved by the University of Waterloo's office of research ethics. We carried out the study with a total of 54 users. We present the results from analyzing the data we collected from the study. We observe that our hypothesis is validated in that only a few (20%) of the participants were able to express a high-level policy as a correct OpenLDAP policy. There is a low correlation between self-reported correctness and actual correctness, which suggests that people are not aware of mistakes in their submissions. The main sources of error are confusion about the OpenLDAP syntax and how its precedence rules work.
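As a concrete illustration of the syntax in question (this example is ours, not taken from the study materials), here is a small slapd.conf-style OpenLDAP access directive. Directives are evaluated in order: the first matching `access to` clause, and within it the first matching `by` clause, determines the outcome, which is the precedence behaviour the abstract reports as a main source of confusion.

```
# Users may change their own password; anyone may bind (authenticate)
# against it; nobody else may read it. Order matters: the first
# matching "by" clause wins.
access to attrs=userPassword
        by self write
        by anonymous auth
        by * none

# Everything else is world-readable.
access to *
        by * read
```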
Identity Management
Identity Management (IdM) has been a serious problem since the establishment of the Internet as a global network used for business and pleasure. Originally identified in Peter Steiner's 1993 New Yorker cartoon "On the Internet nobody knows you're a dog", the problem is over 15 years old. Yet little progress has been made towards an optimal solution. In its early stages, IdM was designed to address the problem of controlling access to resources and managing the matching of capabilities with people in well-defined situations (e.g., Access Control Lists). In today's computing environment, IdM involves a variety of user-centric, distinct, personal forms of digital identities. Starting with the basics of traditional access control, often assimilated to "directory entries" (i.e., ID, password and capability), IdM is generalized to the global networked society we now live in. With the advent of inter-organizational systems (IOS), social networks, e-commerce, m-commerce, service-oriented computing and automated agents (such as botnets), the characteristics of IdM have evolved to include people, devices, and services. In addition, as the complexity of IdM has increased, so have related social issues such as legitimacy, authoritativeness, privacy rights, and personal information protection, as well as broader problems of cyber predators and threats. The tutorial addresses the following IdM topics: history and background (access control), what IdM is, technical challenges, social issues, life cycle, standards, research projects, industry initiatives, paradigms, vendor solutions, implementation challenges, emerging trends, and research concepts.
A File System Abstraction for Sense and Respond Systems
The heterogeneity and resource constraints of sense-and-respond systems pose
significant challenges to system and application development. In this paper, we
present a flexible, intuitive file system abstraction for organizing and
managing sense-and-respond systems based on the Plan 9 design principles. A key
feature of this abstraction is the ability to support multiple views of the
system via filesystem namespaces. Constructed logical views present an
application-specific representation of the network, thus enabling high-level
programming of the network. Concurrently, structural views of the network
enable resource-efficient planning and execution of tasks. We present and
motivate the design using several examples, outline research challenges and our
research plan to address them, and describe the current state of
implementation.
Comment: 6 pages, 3 figures. Workshop on End-to-End, Sense-and-Respond Systems, Applications, and Services, in conjunction with MobiSys '0
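The two kinds of views can be sketched in a few lines of Python (an illustrative model of our own, not the paper's implementation): the same set of node "files" is exposed once under an application-specific logical namespace and once under a structural namespace that mirrors the network topology.

```python
# Hypothetical node metadata: each sensor node has an application-level
# region and a structural parent in the routing tree.
nodes = {
    "n1": {"region": "greenhouse", "parent": "gateway"},
    "n2": {"region": "greenhouse", "parent": "n1"},
    "n3": {"region": "field", "parent": "gateway"},
}

def logical_view(nodes):
    """Group node 'files' by application-level region."""
    view = {}
    for name, meta in nodes.items():
        view.setdefault("/region/" + meta["region"], []).append(name)
    return view

def structural_view(nodes):
    """Group node 'files' by network parent, mirroring the routing tree."""
    view = {}
    for name, meta in nodes.items():
        view.setdefault("/net/" + meta["parent"], []).append(name)
    return view

print(logical_view(nodes))
# {'/region/greenhouse': ['n1', 'n2'], '/region/field': ['n3']}
```

An application programs against the `/region/...` paths, while task planning can consult the `/net/...` paths; in the actual system both would be mounted as Plan 9-style filesystem namespaces rather than Python dicts.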
Deceit: A flexible distributed file system
Deceit, a distributed file system (DFS) being developed at Cornell, focuses on flexible file semantics in relation to efficiency, scalability, and reliability. Deceit servers are interchangeable and collectively provide the illusion of a single, large server machine to any clients of the Deceit service. Non-volatile replicas of each file are stored on a subset of the file servers. The user is able to set parameters on a file to achieve different levels of availability, performance, and one-copy serializability. Deceit also supports a file version control mechanism. In contrast with many recent DFS efforts, Deceit can behave like a plain Sun Network File System (NFS) server and can be used by any NFS client without modifying any client software. The current Deceit prototype uses the ISIS Distributed Programming Environment for all communication and process group management, an approach that reduces system complexity and increases system robustness.
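A minimal sketch of what such per-file parameters might look like (all names here are invented for illustration; the abstract does not describe Deceit's actual interface): each file carries settings that trade availability and performance against strict one-copy serializability.

```python
from dataclasses import dataclass

@dataclass
class FileParams:
    replicas: int      # number of non-volatile copies kept on servers
    sync_writes: bool  # True: updates reach all replicas before the ack
                       # (one-copy serializability); False: faster writes

# A log file favours write performance; a payroll record favours safety.
log_params = FileParams(replicas=2, sync_writes=False)
payroll_params = FileParams(replicas=4, sync_writes=True)
```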
The ODO project: a Case Study in Integration of Multimedia Services
Recent years have witnessed a steady growth in the availability of wide-area multi-service networks. These support a variety of traffic types including data, control messages, audio and video. Consequently they are often thought of as integrated media carriers. To date, however, use of these networks has been limited to isolated applications which exhibit little or no integration among themselves. This paper describes a project which investigated organisational, user-interfacing and programming techniques to exploit this integration of services at the application level.
Important Lessons Derived from X.500 Case Studies
X.500 is a new and complex electronic directory technology, whose basic specification was first published as an international standard in 1988, with an enhanced revision in 1993. The technology is still unproven in many organisations. This paper presents case studies of 15 pioneering pilot and operational X.500-based directory services. The paper provides valuable insights into how organisations are coming to understand this new technology, are using X.500 for both traditional and novel directory-based services, and consequently are deriving benefits from it. Important lessons that have been learnt by these X.500 pioneers are presented here, so that future organisations can benefit from their experiences. Factors critical to the success of implementing X.500 in an organisation are derived from the studies.
A Peer-to-Peer Middleware Framework for Resilient Persistent Programming
The persistent programming systems of the 1980s offered a programming model
that integrated computation and long-term storage. In these systems, reliable
applications could be engineered without requiring the programmer to write
translation code to manage the transfer of data to and from non-volatile
storage. More importantly, it simplified the programmer's conceptual model of
an application, and avoided the many coherency problems that result from
multiple cached copies of the same information. Although technically
innovative, persistent languages were not widely adopted, perhaps due in part
to their closed-world model. Each persistent store was located on a single
host, and there were no flexible mechanisms for communication or transfer of
data between separate stores. Here we re-open the work on persistence and
combine it with modern peer-to-peer techniques in order to provide support for
orthogonal persistence in resilient and potentially long-running distributed
applications. Our vision is of an infrastructure within which an application
can be developed and distributed with minimal modification, whereupon the
application becomes resilient to certain failure modes. If a node, or the
connection to it, fails during execution of the application, the objects are
re-instantiated from distributed replicas, without their reference holders
being aware of the failure. Furthermore, we believe that this can be achieved
within a spectrum of application programmer intervention, ranging from minimal
to totally prescriptive, as desired. The same mechanisms encompass an
orthogonally persistent programming model. We outline our approach to
implementing this vision, and describe current progress.
Comment: Submitted to EuroSys 200
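The failure-transparency idea can be sketched as follows (a minimal Python model of our own, not the middleware's API): a reference holder calls through a proxy that silently fails over to another replica when a node, or the connection to it, is down.

```python
class NodeDown(Exception):
    """Raised when a replica's host is unreachable."""

class Replica:
    def __init__(self, state, alive=True):
        self.state = state
        self.alive = alive
    def read(self):
        if not self.alive:
            raise NodeDown()
        return self.state

class ResilientRef:
    """Proxy standing between a reference holder and the object's replicas."""
    def __init__(self, replicas):
        self.replicas = replicas
    def read(self):
        for r in self.replicas:
            try:
                return r.read()
            except NodeDown:
                continue  # fail over to the next replica, invisibly to the caller
        raise NodeDown("all replicas unreachable")

ref = ResilientRef([Replica("v1", alive=False), Replica("v1")])
print(ref.read())  # prints "v1" despite the first replica's failure
```

The reference holder only ever sees `ref.read()`; the re-instantiation from a surviving replica happens entirely inside the proxy, which is the property the paper calls resilience without programmer awareness of the failure.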
Asynchronously Replicated Shared Workspaces for a Multi-Media Annotation Service over Internet
This paper describes a worldwide collaboration system based on multimedia Post-its (user-generated annotations). DIANE is a service for creating multimedia annotations of any application output on the computer, as well as of existing multimedia annotations. Users collaborate by registering multimedia documents and user-generated annotations in shared workspaces. However, DIANE only allows effective participation in a shared workspace over a high-performance network (ATM, fast Ethernet), since it deals with large multimedia objects. When only slow or unreliable connections are available between a DIANE terminal and server, useful work becomes impossible. To overcome these restrictions we need to replicate DIANE servers so that users do not suffer degradation in the quality of service. We use the asynchronous replication service ODIN to replicate the shared workspaces to every interested site in a way that is transparent to users. ODIN provides cost-effective object replication by building a dynamic virtual network over the Internet. The topology of this virtual network optimizes the use of network resources while satisfying the changing requirements of the users.