10 research outputs found
Study on elliptic curve public key cryptosystems with application of pseudorandom number generator.
by Yuen Ching Wah. Thesis (M.Phil.)--Chinese University of Hong Kong, 1998. Includes bibliographical references (leaves 61-[63]). Abstract also in Chinese.
Chapter 1 --- Introduction
Chapter 1.1 --- Why use cryptography?
Chapter 1.2 --- Why is authentication important?
Chapter 1.3 --- What is the relationship between authentication and digital signature?
Chapter 1.4 --- Why is random number important?
Chapter 2 --- Background
Chapter 2.1 --- Cryptography
Chapter 2.1.1 --- Symmetric key cryptography
Chapter 2.1.2 --- Asymmetric key cryptography
Chapter 2.1.3 --- Authentication
Chapter 2.2 --- Elliptic curve cryptography
Chapter 2.2.1 --- Mathematical background for Elliptic curve cryptography
Chapter 2.3 --- Pseudorandom number generator
Chapter 2.3.1 --- Linear Congruential Generator
Chapter 2.3.2 --- Inversive Congruential Generator
Chapter 2.3.3 --- PN-sequence generator
Chapter 2.4 --- Digital Signature Scheme
Chapter 2.5 --- Babai's lattice vector algorithm
Chapter 2.5.1 --- First Algorithm: Rounding Off
Chapter 2.5.2 --- Second Algorithm: Nearest Plane
Chapter 3 --- Several Digital Signature Schemes
Chapter 3.1 --- DSA
Chapter 3.2 --- Nyberg-Rueppel Digital Signature
Chapter 3.3 --- EC-DSA
Chapter 3.4 --- EC-Nyberg-Rueppel Digital Signature Scheme
Chapter 4 --- Miscellaneous Digital Signature Schemes and their PRNG
Chapter 4.1 --- DSA with LCG
Chapter 4.2 --- DSA with PN-sequence
Chapter 4.2.1 --- Solution
Chapter 4.3 --- DSA with ICG
Chapter 4.3.1 --- Solution
Chapter 4.4 --- EC-DSA with PN-sequence
Chapter 4.4.1 --- Solution
Chapter 4.5 --- EC-DSA with LCG
Chapter 4.5.1 --- Solution
Chapter 4.6 --- EC-DSA with ICG
Chapter 4.6.1 --- Solution
Chapter 4.7 --- Nyberg-Rueppel Digital Signature with PN-sequence
Chapter 4.7.1 --- Solution
Chapter 4.8 --- Nyberg-Rueppel Digital Signature with LCG
Chapter 4.8.1 --- Solution
Chapter 4.9 --- Nyberg-Rueppel Digital Signature with ICG
Chapter 4.9.1 --- Solution
Chapter 4.10 --- EC-Nyberg-Rueppel Digital Signature with LCG
Chapter 4.10.1 --- Solution
Chapter 4.11 --- EC-Nyberg-Rueppel Digital Signature with PN-sequence
Chapter 4.11.1 --- Solution
Chapter 4.12 --- EC-Nyberg-Rueppel Digital Signature with ICG
Chapter 4.12.1 --- Solution
Chapter 5 --- Conclusion
Bibliography
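The Linear Congruential Generator covered in Chapter 2.3.1, and attacked in combination with DSA in Chapter 4.1, follows the standard recurrence x_{n+1} = (a * x_n + c) mod m. A minimal sketch (the parameters below are common textbook constants chosen only for illustration, not the thesis's choices):

```python
# Minimal linear congruential generator (LCG). Parameters are the
# widely used Numerical Recipes constants, for illustration only.
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

# Each output is an affine function of the previous one, so an
# observer who sees x_n can compute every later state -- the
# predictability that undermines DSA when nonces come from an LCG.
gen = lcg(seed=42)
x1, x2 = next(gen), next(gen)
assert x2 == (1664525 * x1 + 1013904223) % 2**32
```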
High-Performance Secure Database Access Technologies for HEP Grids
The Large Hadron Collider (LHC) at the CERN Laboratory will become the largest scientific instrument in the world when it starts operations in 2007. Large Scale Analysis Computer Systems (computational grids) are required to extract rare signals of new physics from petabytes of LHC detector data. In addition to file-based event data, LHC data processing applications require access to large amounts of data in relational databases: detector conditions, calibrations, etc. U.S. high energy physicists demand efficient performance of grid computing applications in LHC physics research where world-wide remote participation is vital to their success. To empower physicists with data-intensive analysis capabilities a whole hyperinfrastructure of distributed databases cross-cuts a multi-tier hierarchy of computational grids. The crosscutting allows separation of concerns across both the global environment of a federation of computational grids and the local environment of a physicist's computer used for analysis. Very few efforts are ongoing in the area of database and grid integration research. Most of these are outside of the U.S. and rely on traditional approaches to secure database access via an extraneous security layer separate from the database system core, preventing efficient data transfers. Our findings are shared by the Database Access and Integration Services Working Group of the Global Grid Forum, which states that "Research and development activities relating to the Grid have generally focused on applications where data is stored in files. However, in many scientific and commercial domains, database management systems have a central role in data storage, access, organization, authorization, etc., for numerous applications." There is a clear opportunity for a technological breakthrough, requiring innovative steps to provide high-performance secure database access technologies for grid computing.
We believe that an innovative database architecture where the secure authorization is pushed into the database engine will eliminate inefficient data transfer bottlenecks. Furthermore, traditionally separated database and security layers provide an extra vulnerability, leaving a weak clear-text password authorization as the only protection on the database core systems. Due to the legacy limitations of the systems' security models, the allowed passwords often cannot even comply with the DOE password guideline requirements. We see an opportunity for the tight integration of the secure authorization layer with the database server engine, resulting in both improved performance and improved security. Phase I has focused on the development of a proof-of-concept prototype using Argonne National Laboratory's (ANL) Argonne Tandem-Linac Accelerator System (ATLAS) project as a test scenario. By developing a grid-security enabled version of the ATLAS project's current relational database solution, MySQL, PIOCON Technologies aims to offer a more efficient solution to secure database access.
Security in a Distributed Processing Environment
Distribution plays a key role in telecommunication and computing systems today. It has become a necessity as a result of deregulation and anti-trust legislation, which has forced businesses to move from centralised, monolithic systems to distributed systems with the separation of applications and provisioning technologies, such as the service and transportation layers in the Internet. The need for reliability and recovery requires systems to use replication and secondary backup systems such as those used in e-commerce.
There are consequences to distribution. It results in systems being implemented in heterogeneous environments; it requires systems to be scalable; and it results in some loss of control, which contributes to the increased security issues that arise from distribution. Each of these issues has to be dealt with. A distributed processing environment (DPE) is middleware that allows heterogeneous environments to operate in a homogeneous manner. Scalability can be addressed by using object-oriented technology to distribute functionality. Security is more difficult to address because it requires the creation of a distributed trusted environment.
The problem with security in a DPE currently is that it is treated as an adjunct service, i.e. an afterthought that is the last thing added to the system. As a result, it is not pervasive and is therefore unable to fully support the other DPE services. DPE security needs to provide the five basic security services (authentication, access control, integrity, confidentiality and non-repudiation) in a distributed environment, while ensuring simple and usable administration.
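The five services listed above can be illustrated as a single pervasive interface that every DPE service depends on, rather than an adjunct layer bolted on afterwards. This is a sketch of the idea only; all names are hypothetical and not taken from the thesis:

```python
# Hypothetical sketch: the five basic DPE security services modelled
# as one interface, so other DPE services can be security-aware by
# depending on it uniformly instead of treating security as an add-on.
from abc import ABC, abstractmethod

class SecurityService(ABC):
    @abstractmethod
    def authenticate(self, principal, credentials) -> bool: ...

    @abstractmethod
    def authorize(self, principal, resource, action) -> bool:
        """Access control decision."""

    @abstractmethod
    def protect_integrity(self, message: bytes) -> bytes:
        """Return integrity evidence, e.g. a MAC over the message."""

    @abstractmethod
    def encrypt(self, message: bytes) -> bytes:
        """Confidentiality protection."""

    @abstractmethod
    def notarize(self, message: bytes) -> bytes:
        """Non-repudiation evidence, e.g. a signed receipt."""
```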
The research detailed in this thesis starts by highlighting the inadequacies of the existing DPE and its services. It introduces a new management structure that provides greater flexibility and configurability while promoting mechanism and service independence. A new secure interoperability framework is introduced which provides the ability to negotiate common mechanism and service-level configurations, and new facilities are added to the non-repudiation and audit services.
The research has shown that all services should be security-aware, and would therefore be able to interact with the Enhanced Security Service in order to provide a more secure environment within a DPE. As a proof of concept, the Trader service was selected. Its security limitations were examined, new security behaviour policies were proposed, and it was then implemented as a Security-aware Trader that could counteract the existing security limitations.
IONA TECHNOLOGIES PLC & ORANG
Security in Cloud Computing: Evaluation and Integration
Over the past decade, the Cloud Computing paradigm has revolutionized the way we envision IT services. It has provided an opportunity to respond to the ever increasing computing needs of the users by introducing the notion of service and data outsourcing. Cloud consumers usually
have online and on-demand access to a large and distributed IT infrastructure providing a plethora of services. They can dynamically configure and scale the Cloud resources according to the requirements of their applications without becoming part of the Cloud infrastructure, which allows them to reduce their IT investment cost and achieve optimal resource utilization. However, the migration of services to the Cloud increases the vulnerability to existing IT security threats and creates new ones that are intrinsic to the Cloud Computing architecture; hence the need for a thorough assessment of Cloud security risks during the process of service selection and deployment. Recently, the impact of effective management of service security satisfaction has been taken more seriously by Cloud Service Providers (CSPs) and stakeholders. Nevertheless, the successful integration of the security element into Cloud resource management operations does not only require methodical research, but also necessitates meticulous modeling of the Cloud security requirements.
To this end, we address throughout this thesis the challenges to security evaluation and integration in independent and interconnected Cloud Computing environments. We are interested in providing the Cloud consumers with a set of methods that allow them to optimize the security of their services and the CSPs with a set of strategies that enable them to provide security-aware Cloud-based service hosting. The originality of this thesis lies within two aspects: 1) the innovative description of the Cloud applications' security requirements, which paved the way for an effective quantification and evaluation of the security of Cloud infrastructures; and 2) the design of rigorous mathematical models that integrate the security factor into the traditional problems of application deployment, resource provisioning, and workload management within current Cloud Computing infrastructures. The work in this thesis is carried out in three phases.
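The security-aware deployment problem mentioned above can be sketched, in a highly simplified form, as a placement routine that only considers hosts meeting an application's security requirement. This is a toy illustration under assumed data shapes, not the thesis's actual mathematical models:

```python
# Toy security-aware placement (illustrative, not the thesis's model):
# an application may only be placed on a host whose security level
# meets its requirement; among feasible hosts, pick the least loaded.
def place(apps, hosts):
    """apps: {name: (cpu_demand, min_security)};
       hosts: {name: (cpu_capacity, security_level)}."""
    load = {h: 0 for h in hosts}
    placement = {}
    for app, (cpu, min_sec) in sorted(apps.items()):
        feasible = [h for h, (cap, sec) in hosts.items()
                    if sec >= min_sec and load[h] + cpu <= cap]
        if not feasible:
            raise ValueError(f"no sufficiently secure host for {app}")
        best = min(feasible, key=lambda h: load[h])
        load[best] += cpu
        placement[app] = best
    return placement

apps = {"web": (2, 1), "db": (4, 3)}
hosts = {"h1": (8, 1), "h2": (8, 3)}
# "db" requires security level 3, so only h2 qualifies.
assert place(apps, hosts)["db"] == "h2"
```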
The concept of self-defending objects and the development of security aware applications
The self-defending object (SDO) concept is an extension to the object-oriented programming paradigm, whereby those objects that encapsulate the protected resources of a security-aware application (SAA) are made aware of, and responsible for, the defence of those resources. That defence takes two forms: the enforcement of mandatory access control on protected resources, and the generation of the corresponding portion of the SAA's audit trail. The SDO concept acts as the philosophy that guides application-level mandatory access control within SAAs, ensuring that the provided access control is both complete and non-bypassable. Although SDOs accept responsibility for controlling access to the protected data and functionality that they encapsulate, an SDO delegates the responsibility for making authorisation decisions to an associated authorisation object. Thus, SDOs fulfil their access control obligations by initiating the authorisation check and then enforcing the decision made on their behalf. A simple yet effective mechanism for enforcing that access control at the object level involves controlling the ability to invoke those SDO methods that access protected resources. In the absence of previous research on this approach to the enforcement of application-level access control, the primary aim of this research was to demonstrate that the SDO concept is a viable paradigm for developing SAAs. That aim was achieved in two stages. The first stage provided a 'proof of concept' demonstrating that the SDO concept could be applied to the development of non-distributed SAAs. The second stage demonstrated its applicability to the development of distributed SAAs. In the second stage, two versions of a distributed prototype were developed, one based on a traditional (proprietary) distributed computing model (Java RMI), and the second using the currently popular Web services model, to demonstrate the general applicability of the SDO concept.
Having already demonstrated that the SDO concept could be applied to SAAs executing on a single machine, the major focus of that research was to devise a mechanism by which SDOs could be transferred between machines.
The research then concentrated on determining what impacts the adoption of the SDO concept would have on SAA development. Experimentation carried out using the distributed prototypes demonstrated that (1) the adoption of the SDO concept does not restrict the use of inheritance hierarchies that include SDOs, (2) the restriction of the lifetime of SDOs can be supported, (3) usage rights enforcement can be employed, and (4) the use of cryptographic techniques to provide additional security guarantees is not affected. A key feature of the SDO concept is that no major changes need to be made to current development tools or methodologies, so its adoption is not hampered by significant financial or training impediments. This research demonstrated that the SDO concept is practical and constitutes a valuable extension to the object-oriented paradigm that will help address the current lack of security in information systems. The SDO approach warrants additional research and adoption.
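The enforcement pattern described above, where an SDO initiates the authorisation check, enforces the decision made by its associated authorisation object, and records its own portion of the audit trail, can be sketched as follows. Class names and the decision rule are hypothetical, chosen only to illustrate the delegation:

```python
# Hypothetical sketch of the SDO pattern: the protected object itself
# guards its methods, delegating the authorisation *decision* to an
# associated authorisation object while *enforcing* it locally.
class AuthorisationObject:
    def __init__(self, allowed):
        self.allowed = allowed            # set of (principal, method) pairs

    def decide(self, principal, method):
        return (principal, method) in self.allowed

class SelfDefendingRecord:
    def __init__(self, secret, authoriser, audit_log):
        self._secret = secret
        self._authoriser = authoriser
        self._audit = audit_log

    def read(self, principal):
        ok = self._authoriser.decide(principal, "read")
        self._audit.append((principal, "read", ok))   # SDO's audit portion
        if not ok:
            raise PermissionError(principal)          # non-bypassable enforcement
        return self._secret

audit = []
rec = SelfDefendingRecord("classified",
                          AuthorisationObject({("alice", "read")}), audit)
assert rec.read("alice") == "classified"
try:
    rec.read("bob")
except PermissionError:
    pass
assert audit == [("alice", "read", True), ("bob", "read", False)]
```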
Security Implications of Typical Grid Computing Usage Scenarios
A Computational Grid is a collection of heterogeneous computers and resources spread across multiple administrative domains with the intent of providing users easy access to these resources. There are many ways to access the resources of a Computational Grid, each with unique security requirements and implications for both the resource user and the resource provider. A comprehensive set of Grid usage scenarios is presented and analyzed with regard to security requirements such as authentication, authorization, integrity, and confidentiality. The main value of these scenarios and the associated security discussions is to provide a library of situations against which an application designer can match, thereby facilitating security-aware application use and development from the initial stages of the application design and invocation. A broader goal of these scenarios is to increase the awareness of security issues in Grid Computing.
RL-MADP: Reinforcement Learning-based Misdirection Attack Prevention Technique for WSN
Wireless Sensor Networks (WSNs) provide noteworthy advantages over conventional methods for various real-time applications, e.g., healthcare, temperature sensing, smart homes, homeland security, and environmental monitoring. However, limited resources, short network lifetime, and security vulnerabilities are challenging issues for WSNs. Moreover, WSN performance is susceptible to network anomalies, particularly to misdirection attacks. These issues motivate the design of security-aware applications. In this work, therefore, we present a Reinforcement Learning (RL) algorithm for Misdirection Attack Detection and Prevention (RL-MADP) in WSNs. In the proposed approach, beyond the flat WSN architecture, the problem is modelled as a Markov Decision Process (MDP) from RL, in which each sensor node is fully aware of its environment. The method is online, incurs minimal computation cost, and performs load balancing with higher residual energy to prolong the network lifetime.
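The reinforcement-learning machinery such an approach builds on can be sketched with a generic tabular Q-learning update. This is standard RL, not the authors' exact RL-MADP formulation; the states, actions, and rewards below are illustrative:

```python
# Generic tabular Q-learning update (illustrative, not the paper's
# exact algorithm). A node in state s takes action a (e.g. forward a
# packet via a given neighbour), observes reward r (penalising
# misdirected packets) and next state s2, then updates Q(s, a).
def q_update(Q, s, a, r, s2, actions, alpha=0.1, gamma=0.9):
    best_next = max(Q.get((s2, a2), 0.0) for a2 in actions)
    Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (r + gamma * best_next - Q.get((s, a), 0.0))
    return Q[(s, a)]

Q = {}
actions = ["fwd_n1", "fwd_n2"]
# Repeatedly penalise forwarding via a misbehaving neighbour n1
# and reward the well-behaved neighbour n2...
for _ in range(50):
    q_update(Q, "relay", "fwd_n1", -1.0, "relay", actions)
    q_update(Q, "relay", "fwd_n2", +1.0, "relay", actions)
# ...so the learned values steer traffic away from the attacker.
assert Q[("relay", "fwd_n2")] > Q[("relay", "fwd_n1")]
```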
Physical isolation against logical cache-based side-channel attacks in many-core architectures
The technological evolution and the ever-increasing application performance demand have made many-core architectures the new trend in processor design. These architectures are composed of a large number of processing resources (hundreds or more) providing massive parallelism and high performance. Indeed, many-core architectures allow a large number of applications, coming from different sources and with different levels of sensitivity and trust, to be executed in parallel while sharing physical resources such as computation, memory and the communication infrastructure. However, this resource sharing introduces important security vulnerabilities. In particular, sensitive applications sharing cache memory with potentially malicious applications are vulnerable to logical cache-based side-channel attacks. These attacks allow an unprivileged application to access sensitive information manipulated by other applications despite partitioning methods such as memory protection and virtualization. While considerable effort has gone into countering these attacks on multi-core architectures, those solutions were not designed for the recently emerged many-core architectures and need to be evaluated and/or revisited in order to be practical for these new technologies. In this thesis work, we propose to enhance the operating system services with security-aware application deployment and resource allocation mechanisms in order to protect sensitive applications against cache-based attacks. Different application deployment strategies allowing spatial isolation are proposed and compared in terms of several performance indicators. Our proposal is evaluated through virtual prototyping based on SystemC and Open Virtual Platforms (OVP) technology.
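The spatial-isolation idea behind these deployment strategies can be sketched with a simplified cluster model: never let a security-sensitive application share a cluster, and hence a cache, with an untrusted one. The function and its allocation policy are illustrative stand-ins, not the thesis's actual strategies:

```python
# Toy spatial isolation (illustrative): allocate applications to
# many-core clusters so that a cluster only ever holds apps of one
# trust class, keeping sensitive apps off shared caches.
def deploy(apps, n_clusters, capacity):
    """apps: {name: 'sensitive' | 'untrusted'}; returns {name: cluster_id}."""
    clusters = [{"kind": None, "used": 0} for _ in range(n_clusters)]
    mapping = {}
    for name, kind in sorted(apps.items()):
        for cid, c in enumerate(clusters):
            # A cluster accepts only one trust class (spatial isolation).
            if c["used"] < capacity and c["kind"] in (None, kind):
                c["kind"] = kind
                c["used"] += 1
                mapping[name] = cid
                break
        else:
            raise ValueError(f"no isolated cluster left for {name}")
    return mapping

m = deploy({"crypto": "sensitive", "spy": "untrusted"}, n_clusters=2, capacity=4)
# The sensitive app and the untrusted app never share a cluster.
assert m["crypto"] != m["spy"]
```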