6 research outputs found

    A fine-grained access control model based on diverse attributes

    As the web has become a place for sharing information and resources across varied domains, there is a need for authorization services in addition to the authentication services provided by a public key infrastructure (PKI). In distributed systems, the use of attribute certificates (ACs) has been explored as a way to implement authorization services, and their use is gaining popularity. An AC issued by an attribute authority (AA) facilitates identification of a service requester and can be used to enforce access control for resources. The AC of a service requester forms part of the credentials supplied with a service request for accessing a resource. Because there are potentially multiple domains issuing credentials, the target domain must grant access to resources by considering different credentials and must be able to decide which set of attributes can be considered valid for making access control decisions. In this paper, we present an authorization-based access control model that provides fine-grained access control to resources in an open domain by utilizing attributes issued by diverse attribute authorities.
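
    A minimal sketch of the kind of decision such a model must make, assuming a simple trusted-issuer filter: attributes from attribute certificates are accepted only if their issuing AA is trusted by the target domain, and the merged attributes are then checked against a resource policy. The class, policy, and issuer names below are hypothetical illustrations, not the paper's model.

```python
# Sketch only (not the paper's actual model): decide access from attribute
# certificates (ACs) issued by several attribute authorities (AAs).
from dataclasses import dataclass

@dataclass
class AttributeCertificate:
    issuer: str      # attribute authority (AA) that issued the certificate
    subject: str     # service requester the certificate identifies
    attributes: dict # attribute name -> value pairs asserted by the AA

def valid_attributes(credentials, trusted_issuers):
    """Keep only attributes asserted by AAs the target domain trusts."""
    merged = {}
    for ac in credentials:
        if ac.issuer in trusted_issuers:
            merged.update(ac.attributes)
    return merged

def access_granted(credentials, trusted_issuers, policy):
    """policy maps required attribute names to the values they must hold."""
    attrs = valid_attributes(credentials, trusted_issuers)
    return all(attrs.get(name) == value for name, value in policy.items())

# Example: two ACs from different domains; only 'hospital-AA' is trusted here.
acs = [
    AttributeCertificate("hospital-AA", "alice", {"role": "doctor"}),
    AttributeCertificate("unknown-AA", "alice", {"role": "admin"}),
]
print(access_granted(acs, {"hospital-AA"}, {"role": "doctor"}))  # True
```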

    Video-on-Demand over Internet: a survey of existing systems and solutions

    Video-on-Demand is a service in which movies are delivered to distributed users with low delay and free interactivity. The traditional client/server architecture suffers from scalability issues when providing video streaming services, so many systems have been proposed, mostly based on peer-to-peer or hybrid server/peer-to-peer solutions, to address this issue. This work presents a survey of currently existing or proposed systems and solutions, based upon a subset of representative systems, and defines selection criteria that allow these systems to be classified. These criteria are based on common questions such as: is it video-on-demand or live streaming; is the architecture based on a content delivery network, on peer-to-peer, or on both; is the delivery overlay tree-based or mesh-based; is the system push-based or pull-based, single-stream or multi-stream; does it use data coding; and how do the clients choose their peers? Representative systems are briefly described to give a summarized overview of the proposed solutions, and four of them are analyzed in detail. Finally, an attempt is made to evaluate the most promising solutions for future experiments.
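
    The classification criteria listed above can be pictured as a small data structure; the sketch below uses informal field names and values that are assumptions of this summary, not the survey's taxonomy, purely to illustrate how a surveyed system could be profiled.

```python
# Illustrative only: one way to record the survey's classification criteria.
from dataclasses import dataclass

@dataclass
class StreamingSystemProfile:
    name: str
    service: str        # "video-on-demand" or "live streaming"
    architecture: str   # "CDN", "peer-to-peer", or "hybrid"
    overlay: str        # "tree-based" or "mesh-based"
    scheduling: str     # "push-based" or "pull-based"
    streams: str        # "single-stream" or "multi-stream"
    data_coding: bool   # whether data coding (e.g. network/erasure coding) is used
    peer_selection: str # short description of how clients choose peers

example = StreamingSystemProfile(
    name="HypotheticalP2PVoD",            # made-up system name
    service="video-on-demand",
    architecture="hybrid",
    overlay="mesh-based",
    scheduling="pull-based",
    streams="multi-stream",
    data_coding=True,
    peer_selection="closest peers by measured latency",
)
```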

    On the performance of probabilistic flooding in wireless mobile ad hoc networks

    Broadcasting in MANETs has traditionally been based on flooding, but this can induce broadcast storms that severely degrade network performance due to redundant retransmissions, collisions and contention. Probabilistic flooding, where a node rebroadcasts a newly arrived one-to-all packet with some probability p, was an early suggestion to reduce the broadcast storm problem. The first part of this thesis investigates the effects of a number of important MANET parameters, including node speed, traffic load and node density, on the performance of probabilistic flooding. It transpires that these parameters have a critical impact both on reachability and on the number of so-called “saved rebroadcast packets” achieved. For instance, across a range of rebroadcast probability values, as network density increases from 25 to 100 nodes, the reachability achieved by probabilistic flooding increases from 85% to 100%. Moreover, as node speed increases from 2 to 20 m/sec, reachability increases from 90% to 100%. The second part of this thesis proposes two new probabilistic algorithms that dynamically adjust the rebroadcast probability according to node distribution using only one-hop neighbourhood information, without requiring any distance measurements or location-determination devices. The performance of the new algorithms is assessed and compared to blind flooding as well as the fixed probabilistic approach. It is demonstrated that the new algorithms have superior performance characteristics in terms of both reachability and saved rebroadcasts. For instance, the suggested algorithms can improve saved rebroadcasts by up to 70% and 47% compared to blind and fixed probabilistic flooding, respectively, even under conditions of high node mobility and high network density, without degrading reachability. The final part of the thesis assesses the impact of probabilistic flooding on the performance of routing protocols in MANETs. Our performance results indicate that using the new probabilistic flooding algorithms during route discovery enables AODV to achieve a higher delivery ratio of data packets while keeping a lower routing overhead compared to using blind and fixed probabilistic flooding. For instance, the packet delivery ratio using our algorithms is improved by up to 19% and 12% compared to using blind and fixed probabilistic flooding, respectively. This performance advantage is achieved with a routing overhead that is lower by up to 28% and 19% than in fixed probabilistic and blind flooding, respectively.
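
    A minimal sketch of the rebroadcast decision: the fixed variant follows the standard probabilistic flooding scheme (rebroadcast with probability p), while the adaptive rule shown, which scales p by the one-hop neighbour count, is an assumed illustration rather than the exact algorithms proposed in the thesis.

```python
# Sketch of the rebroadcast decision in probabilistic flooding. The adaptive rule
# (scaling p by neighbour count) is an illustrative assumption, not the thesis's
# exact algorithms.
import random

def fixed_probabilistic_rebroadcast(p: float) -> bool:
    """Rebroadcast a newly arrived one-to-all packet with fixed probability p."""
    return random.random() < p

def adaptive_rebroadcast(num_neighbours: int,
                         avg_neighbours: float,
                         p_base: float = 0.6,
                         p_min: float = 0.2,
                         p_max: float = 1.0) -> bool:
    """Lower p in dense neighbourhoods, raise it in sparse ones (assumed rule)."""
    if num_neighbours <= 0:
        return False  # no neighbours to reach
    p = p_base * (avg_neighbours / num_neighbours)
    p = max(p_min, min(p_max, p))  # clamp to [p_min, p_max]
    return random.random() < p

# A sparse node (3 neighbours vs. an assumed average of 8) gets p clamped to 1.0,
# while a dense node (20 neighbours) rebroadcasts with p of roughly 0.24.
print(adaptive_rebroadcast(3, 8.0), adaptive_rebroadcast(20, 8.0))
```

    Only the rebroadcast-probability decision is sketched; duplicate suppression and neighbourhood discovery, which a real implementation also needs, are omitted.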

    Modern techniques for constraint solving: the CASPER experience

    Dissertation presented to obtain the degree of Doctor in Informatics Engineering at Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia. Constraint programming is a well-known paradigm for addressing combinatorial problems which has enjoyed considerable success in solving many relevant industrial and academic problems. At the heart of constraint programming lies the constraint solver, a computer program which attempts to find a solution to the problem, i.e. an assignment of all the variables in the problem such that all the constraints are satisfied. This dissertation describes a set of techniques to be used in the implementation of a constraint solver. These techniques aim at making a constraint solver more extensible and efficient, two properties which are hard to integrate in general, and in particular within a constraint solver. Specifically, this dissertation addresses two major problems: generic incremental propagation and propagation of arbitrary decomposable constraints. For both problems we present a set of techniques which are novel, correct, and directly concerned with extensibility and efficiency. All the material in this dissertation emerged from our work in designing and implementing a generic constraint solver. The CASPER (Constraint Solving Platform for Engineering and Research) solver not only acts as a proof of concept for the presented techniques, but also served as the common test platform for the many theoretical models discussed. Besides the work related to the design and implementation of a constraint solver, this dissertation also presents the first successful application of the resulting platform to an open research problem, namely finding good heuristics for efficiently directing search towards a solution.
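
    As an illustration of the solver core the abstract refers to, the following is a textbook-style sketch of an incremental propagation fixpoint loop over a queue of propagators; it is not CASPER's actual architecture, and the propagator interface shown is an assumption made for this sketch.

```python
# Textbook-style sketch of an incremental propagation loop. Propagators prune
# variable domains and report which variables changed; only propagators
# subscribed to a changed variable are re-scheduled.
from collections import deque

def propagate(domains, propagators, subscriptions):
    """domains: var -> set of values; propagators: id -> fn(domains) -> changed vars;
    subscriptions: var -> propagator ids to wake when that variable's domain shrinks."""
    queue = deque(propagators)               # schedule every propagator once
    scheduled = set(queue)
    while queue:
        pid = queue.popleft()
        scheduled.discard(pid)
        changed = propagators[pid](domains)  # prune domains, report changed vars
        if any(not domains[v] for v in changed):
            return False                     # a domain was wiped out: failure
        for v in changed:                    # incrementally wake only subscribers
            for q in subscriptions.get(v, ()):
                if q not in scheduled:
                    queue.append(q)
                    scheduled.add(q)
    return True                              # fixpoint reached

def less_than(x, y):
    """Propagator enforcing x < y over finite integer domains."""
    def prop(domains):
        changed = []
        new_x = {v for v in domains[x] if v < max(domains[y], default=v)}
        new_y = {v for v in domains[y] if v > min(domains[x], default=v)}
        if new_x != domains[x]:
            domains[x] = new_x
            changed.append(x)
        if new_y != domains[y]:
            domains[y] = new_y
            changed.append(y)
        return changed
    return prop

doms = {"x": {1, 2, 3}, "y": {1, 2, 3}}
ok = propagate(doms, {"p": less_than("x", "y")}, {"x": ["p"], "y": ["p"]})
print(ok, doms)  # True {'x': {1, 2}, 'y': {2, 3}}
```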

    Nutzung kryptographischer Funktionen zur Verbesserung der Systemzuverlässigkeit

    Cryptographic techniques deal with securing information against unwanted usage, while coding techniques deal with keeping data error-free and retrieving it reliably. However, both techniques share many tools, bounds and limitations. In this thesis, several novel approaches towards improving system reliability by combining cryptographic and coding techniques in several constellations are presented. The first constellation deploys pure cryptographic functions to improve reliability in systems that previously had no reliability-supporting coding mechanisms; such systems may offer only authenticity, secrecy and/or integrity mechanisms as security services. The second constellation deploys a mixture of cryptographic functions and error-correcting codes to improve the overall system reliability. The first contribution in this thesis presents a new practical approach for the detection and correction of execution errors in the AES cipher. The source of such errors could be natural or the result of fault-injection attacks. The proposed approach makes use of the two linear mappings in the AES round structure for error control. The second contribution investigates the possibility and ability of deploying pure cryptographic hash functions to detect and correct a class of errors. Error correction is achieved by deploying a part of the hash bits to correct a selected class of unidirectional errors with high probability, while degrading the authentication level only insignificantly. In the third and fourth contributions, we propose algorithms that improve system correctability beyond classical limits by combining coding and cryptographic functions. The new algorithms are based mainly on the fundamentals investigated in the second contribution as mechanisms to detect and correct errors. The new algorithms are analyzed in terms of collision and attack complexity, as error correction via hash matching is similar to a successful authentication attack. The results show good achievable figures for error correctability, authenticity, and integrity.
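
    The second contribution's idea, using part of a hash value both to authenticate and to correct a limited error class, can be pictured with a toy sketch: on a tag mismatch, the receiver searches candidate corrections within an assumed error class (here, a single 1-to-0 bit flip) until the recomputed tag matches the received one. The key, tag length, and error class below are simplifications made for illustration, not the thesis's actual schemes.

```python
# Toy illustration (not the thesis's actual scheme): a keyed hash tag used both to
# authenticate a message and to correct a single unidirectional 1->0 bit flip by
# searching candidate corrections until the recomputed tag matches.
import hashlib, hmac

KEY = b"shared-secret"  # hypothetical shared key

def tag(msg: bytes) -> bytes:
    return hmac.new(KEY, msg, hashlib.sha256).digest()[:8]  # truncated 64-bit tag

def correct(received: bytes, received_tag: bytes):
    """Return a corrected message, or None if no single 1->0 flip explains the tag."""
    if hmac.compare_digest(tag(received), received_tag):
        return received                          # error-free (or undetected)
    data = bytearray(received)
    for i in range(len(data)):                   # try restoring each cleared bit
        for bit in range(8):
            if not data[i] & (1 << bit):
                data[i] |= (1 << bit)
                if hmac.compare_digest(tag(bytes(data)), received_tag):
                    return bytes(data)           # tag matches: accept correction
                data[i] &= ~(1 << bit)           # revert and keep searching
    return None                                  # uncorrectable within this class

msg = b"reliable data"
t = tag(msg)
corrupted = bytearray(msg)
corrupted[0] &= ~0x02                            # clear one set bit: 'r' becomes 'p'
print(correct(bytes(corrupted), t))              # b'reliable data'
```

    Note that correction by exhaustive tag matching is structurally the same as an authentication-forgery search, which is why the abstract evaluates such algorithms in terms of collision and attack complexity.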