
    User Provisioning Processes in Identity Management addressing SAP Campus Management

    This document reports the work of an ISWA working team on a WUSKAR case study. The study addresses the synchronisation of a metadirectory with a proprietary SAP R/3 system in the context of an identity management system. Early tasks concerned identifying the exact requirements and scenarios, modelling the synchronisation process, identifying which data is relevant for processing, and proposing templates for the matching and transformation process. Intermediate tasks related to the technical aspects of the case study, as well as task division, progress management, and regular review of strategic and technical choices.
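    A matching-and-transformation template of the kind proposed above can be pictured as a mapping from source attributes to metadirectory attributes. The sketch below is purely illustrative: the field names (`VORNAME`, `givenName`, etc.) and conversion rules are invented for the example and are not taken from the case study.

```python
# Hypothetical attribute matching/transformation step for metadirectory
# synchronisation; all field names and conversions are illustrative.

def transform_record(sap_record, mapping):
    """Map a source record's attributes onto the target directory schema."""
    out = {}
    for src_field, (dst_field, convert) in mapping.items():
        if src_field in sap_record:
            out[dst_field] = convert(sap_record[src_field])
    return out

# One transformation template: source field -> (target field, converter).
mapping = {
    "VORNAME": ("givenName", str.strip),
    "NACHNAME": ("sn", str.strip),
    "MATRIKEL": ("employeeNumber", lambda v: v.zfill(8)),
}

record = {"VORNAME": " Ada ", "NACHNAME": "Lovelace", "MATRIKEL": "1234"}
print(transform_record(record, mapping))
# {'givenName': 'Ada', 'sn': 'Lovelace', 'employeeNumber': '00001234'}
```

    In a real pipeline such templates would be loaded from configuration rather than hard-coded, so that matching rules can be reviewed without code changes.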

    Document Maintenance With Multiple Access Strategy

    This thesis aims to produce a system capable of providing access to documents, where the documents and file-level access to them are of central importance. Access to these documents should not be limited to a single interface or access method. Databases and database management systems (DBMS) using the Structured Query Language (SQL) and following the ACID paradigm are commonly used to manage data. Such databases are robust and performant, but limit flexibility regarding the data structure: the schema has to be determined when the system is created and should not change over time. In addition, the data can only be accessed through the DBMS. If the DBMS is not running, the data cannot be accessed at all, because major systems such as MySQL, PostgreSQL, or DB2 store the data in a binary format to increase performance. Since this thesis aims to provide a system whose access paths are independent of single components, in particular of the database server, such traditional SQL servers cannot be used. Moreover, individual data attributes could not be inserted into a performance-oriented SQL server while the DBMS is not running, as this would require knowledge of the internal database structure and complex insertion and modification methods, whereas the focus should lie on the documents and their consistency rather than on the meta information. With these goals in mind, the following chapters discuss the implementation of a system with a multiple-access strategy and easy modification.
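    One way to make document access independent of a running database server, in the spirit of the goal above, is to keep each document as a plain file with a human-readable metadata sidecar. The sketch below is an assumed design, not the thesis implementation; the file layout and function names are invented for illustration.

```python
import json
import pathlib
import tempfile

# Minimal sketch (an assumed design, not the thesis implementation): each
# document is stored as a plain file next to a JSON metadata sidecar, so
# the files stay readable by any tool even when no database server runs.

def store(root, name, content, meta):
    """Write the document and its metadata as two ordinary files."""
    root = pathlib.Path(root)
    root.mkdir(parents=True, exist_ok=True)
    (root / name).write_text(content)
    (root / (name + ".meta.json")).write_text(json.dumps(meta))

def load(root, name):
    """Read the document and its metadata back; no DBMS process required."""
    root = pathlib.Path(root)
    content = (root / name).read_text()
    meta = json.loads((root / (name + ".meta.json")).read_text())
    return content, meta

docs = tempfile.mkdtemp()
store(docs, "report.txt", "quarterly figures", {"owner": "alice", "rev": 1})
print(load(docs, "report.txt"))
# ('quarterly figures', {'owner': 'alice', 'rev': 1})
```

    Because both files are plain text, a shell, an editor, or a second application can read or modify a document directly, which is exactly the multiple-access property a single binary DBMS store cannot offer.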

    REMOTE MOBILE SCREEN (RMS): AN APPROACH FOR SECURE BYOD ENVIRONMENTS

    Bring Your Own Device (BYOD) is a policy under which employees use their own personal mobile devices to perform work-related tasks. Enterprises reduce their costs, since they do not have to purchase and support the mobile devices. BYOD increases employee job satisfaction and productivity, as employees can choose which device to use and do not need to carry two or more devices. However, BYOD policies create an insecure environment: the corporate network is extended and becomes harder to protect from attacks. In this scenario, corporate information can be leaked, personal and corporate spaces are not separated, it becomes difficult to enforce security policies on the devices, and employees worry about their privacy. Consequently, a secure BYOD environment must achieve the following goals: space isolation, corporate data protection, security policy enforcement, true space isolation, non-intrusiveness, and low resource consumption. We found that none of the currently available solutions achieves all of these goals. We therefore developed Remote Mobile Screen (RMS), a framework that meets all the goals for a secure BYOD environment. To achieve this, the enterprise provides the employee with a Virtual Machine (VM) running a mobile operating system; the VM is located in the enterprise network, and the employee connects to it using the mobile device. We provide an implementation of RMS using commonly available software for the x86 architecture and address RMS challenges related to compatibility, scalability, and latency. For the first challenge, we show that at least 90.2% of the productivity applications from Google Play can be installed on an x86 architecture, while at least 80.4% run normally. For the second challenge, we deployed our implementation on a high-performance server and ran up to 596 VMs using 256 GB of RAM; further, we show that the number of VMs is proportional to the available RAM. For the third challenge, we used our implementation on GENI and conclude that an application latency of 150 milliseconds can be achieved. Adviser: Byrav Ramamurth
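    The reported proportionality between VM count and RAM gives a simple back-of-the-envelope capacity model: 596 VMs on 256 GB works out to roughly 2.33 VMs per GB. The linear extrapolation below is an illustration of that claim, not a sizing rule from the thesis.

```python
# Back-of-the-envelope capacity model implied by the reported figures:
# 596 VMs on 256 GB of RAM, scaled linearly (an illustrative assumption).

VMS_PER_GB = 596 / 256  # ≈ 2.33 VMs per GB of RAM

def max_vms(ram_gb):
    """Estimate how many RMS VMs a server with ram_gb of RAM could host."""
    return int(ram_gb * VMS_PER_GB)

print(max_vms(256))  # 596, the measured data point
print(max_vms(64))   # 149, a linear extrapolation
```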

    Algorithms for verifying the integrity of untrusted storage

    Thesis (M. Eng.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004. Includes bibliographical references (p. 71-72). This work addresses the problem of verifying that untrusted storage behaves like valid storage. The problem is important in systems such as network file systems or databases, where a client accesses data stored remotely on an untrusted server. Past systems have used a hash-tree-based checker to verify the integrity of data stored on untrusted storage; this online method has high overhead, as the tree must be traversed on each load or store operation. In the offline approach, developed by Clarke et al. in [6], multiset hashes are used to verify a sequence of load and store operations. The overhead of this scheme is very low if checks are infrequent, but can be quite high if checks are performed frequently. The hybrid scheme combines the advantages of the two schemes and is efficient in most real-world situations. The various schemes were implemented on top of Berkeley DB, an embedded database. Real-world performance measurements were taken using OpenLDAP, a lightweight directory service that relies heavily on Berkeley DB: all reads and writes to the database were replaced with secure read and secure write operations. Using the DirectoryMark LDAP test suite, the online scheme had an overhead of 113% when compared to an unmodified server, while the offline scheme with infrequent checks (T=50000) resulted in 39% fewer DOPS. The offline scheme, however, outperformed the online scheme by 31%, while the hybrid scheme outperformed the online scheme by only 19%. In the worst case, when checks were frequent (T=500), the hybrid scheme was 185% slower (65% fewer DOPS) than the online scheme, and the offline scheme was 101% slower (50% fewer DOPS) than the online scheme. by Ajay Sudan. M.Eng.
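    The hash-tree checker described above can be sketched in a few lines: the client keeps only the root hash of a Merkle tree over the storage blocks, and any answer from the untrusted store can be checked against it. This simplified sketch rebuilds the whole tree on each check; a real checker would verify only the path from the affected leaf to the root.

```python
import hashlib

# Sketch of the hash-tree (Merkle tree) integrity check: the client keeps
# only the trusted root hash; a tampered block changes the recomputed root.
# Simplified for illustration: the full tree is rebuilt on every check.

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(blocks):
    level = [h(b) for b in blocks]
    while len(level) > 1:
        if len(level) % 2:              # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

blocks = [b"block0", b"block1", b"block2", b"block3"]
root = merkle_root(blocks)              # trusted root kept by the client

# An untrusted store that silently corrupts a block is detected:
tampered = [b"block0", b"evil", b"block2", b"block3"]
print(merkle_root(tampered) == root)    # False
```

    The per-operation cost of walking this tree is exactly the overhead the offline multiset-hash approach amortizes by deferring verification to infrequent checks.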

    DSpace Manual: Software version 1.5

    DSpace is an open source software platform that enables organizations to:
    - Capture and describe digital material using a submission workflow module, or a variety of programmatic ingest options
    - Distribute an organization's digital assets over the web through a search and retrieval system
    - Preserve digital assets over the long term

    This system documentation includes a functional overview of the system, which is a good introduction to its capabilities and should be readable by non-technical personnel. Everyone should read this section first, because it introduces some terminology used throughout the rest of the documentation. For people actually running a DSpace service, there is an installation guide, and sections on configuration and the directory structure. Note that as of DSpace 1.2, the administration user interface guide is on-line help available from within the DSpace system. Finally, for those interested in the details of how DSpace works, and those potentially interested in modifying the code for their own purposes, there is a detailed architecture and design section.

    User modeling servers - requirements, design, and evaluation

    Software systems that adapt their services to the characteristics of individual users have already proven to be more effective and/or more user-friendly than static systems in several application domains. To provide such adaptation, user-adaptive systems rely on models of user characteristics; these models are built and managed by dedicated user modeling components. An important branch of user modeling research is concerned with the development of so-called "user modeling shells", i.e. generic user modeling systems that facilitate the development of application-specific user modeling components. Until now, the scope and services of these generic user modeling systems have mostly been determined intuitively and/or derived from descriptions of a few user-adaptive systems in the literature. More recently, the trend towards personalization on the World Wide Web has led to the development of several commercial user modeling servers. The properties considered important for these systems stand in stark contrast to those that were central to the development of the user modeling shells, and vice versa. Against this background, the goals of this dissertation are (i) to analyze the requirements for user modeling servers from a multi-disciplinary scientific and a deployment-oriented (commercial) perspective, (ii) to design and implement a server that meets these requirements, and (iii) to verify the performance and scalability of this server against these requirements under the workload of small and medium-sized deployment environments. To reach these goals, we follow a requirements-centered approach that builds on experience from several research areas.
    We develop a generic architecture for a user modeling server that consists of a server core for data management and modularly pluggable user modeling components, each of which implements an important user modeling technique. We show that integrating these user modeling components into a single server yields synergy effects between the learning techniques employed and compensates for known deficits of individual methods, for example with respect to performance, scalability, integration of domain knowledge, data sparsity, and cold start. Finally, we present the most important results of the experiments we carried out to demonstrate empirically that the user modeling server we developed meets central performance and scalability criteria. We show that our user modeling server fully satisfies these criteria in deployment environments with small and medium workloads. A test in a deployment environment with several million user profiles, and a workload that can be considered representative of larger web sites, confirmed that the user modeling performance of our server does not impose a significant additional burden on a personalized web site, while the hardware requirements can be considered moderate.
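    The core-plus-pluggable-components architecture described in the abstract can be sketched as a server core that stores user models and dispatches each observed event to every registered modeling component. All class and method names below are invented for illustration; the toy frequency counter merely stands in for a real user modeling technique.

```python
# Illustrative sketch (all names invented) of a user modeling server with
# a data-management core and modularly pluggable modeling components.

class UserModelCore:
    """Server core: central storage for user models, plus a plugin registry."""
    def __init__(self):
        self.models = {}           # user id -> attribute dictionary
        self.components = []       # pluggable user modeling components

    def register(self, component):
        self.components.append(component)

    def observe(self, user, event):
        """Record an event and let every component update the user's model."""
        profile = self.models.setdefault(user, {})
        for component in self.components:
            component.update(profile, event)
        return profile

class FrequencyComponent:
    """Toy modeling component: counts how often each item was visited."""
    def update(self, profile, event):
        counts = profile.setdefault("visits", {})
        counts[event] = counts.get(event, 0) + 1

core = UserModelCore()
core.register(FrequencyComponent())
core.observe("alice", "sports")
print(core.observe("alice", "sports"))  # {'visits': {'sports': 2}}
```

    Because all components share the core's data, one component's output (e.g. visit frequencies) is available to the others, which is the kind of synergy between techniques the dissertation exploits.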

    Elastic Highly Available Cloud Computing

    High availability and elasticity are two key technical features of cloud computing services. Elasticity is the feature by which the provisioning of resources is closely tied to runtime demand. High availability ensures that cloud applications are resilient to failures. Existing cloud solutions provide both features at the level of the virtual resource, by managing the restart, addition, and removal of virtual machines as needed. These solutions tie applications to a specific design, which is not suitable for many applications, especially virtualized telecommunication applications that are required to meet carrier-grade standards. Carrier-grade applications typically rely on the underlying platform to manage their availability by monitoring heartbeats, executing recoveries, and attempting repairs to bring the system back to normal. Migrating such applications to the cloud can be particularly challenging, especially if the elasticity policies target the application only, without considering the underlying platform contributing to its high availability (HA). In this thesis, a Network Function Virtualization (NFV) framework is introduced, and the challenges and requirements of its use in mobile networks are discussed. In particular, an architecture for NFV framework entities in the virtual environment is proposed. To reduce signaling traffic congestion and achieve better performance, a criterion is proposed for bundling multiple functions of the virtualized evolved packet core in a single physical device or a group of adjacent devices. The analysis shows that the proposed grouping can reduce the network control traffic by 70 percent. Moreover, a comprehensive framework for the elasticity of highly available applications is proposed that considers the elastic deployment of the platform and the HA-aware placement of the application's components.
    The approach is applied to an Internet Protocol Multimedia Subsystem (IMS) application and demonstrates how, within a matter of seconds, the IMS application can be scaled up while maintaining its HA status.
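    An elasticity policy that respects HA, as the framework above requires, must scale the number of component instances with load while never dropping below the redundancy the availability model demands. The rule below is a hedged sketch of that idea; the capacity figure and standby count are invented parameters, not values from the thesis.

```python
import math

# Sketch of an HA-aware elasticity rule (parameters are illustrative):
# size the active instance count to the load, then add the standby
# instances the availability model requires, never going below a floor.

def desired_instances(load, capacity_per_instance, standbys=1, minimum=1):
    """Active instances needed for the load, plus HA standby instances."""
    active = max(minimum, math.ceil(load / capacity_per_instance))
    return active + standbys

# 450 requests/s at 100 requests/s per instance -> 5 active + 1 standby:
print(desired_instances(load=450, capacity_per_instance=100))  # 6
```

    The point of the rule is that scaling decisions and HA placement are evaluated together: scaling in can remove only instances whose loss still leaves the required active+standby configuration intact.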

    DSpace 6.0 Manual
