Do not trust me: Using malicious IdPs for analyzing and attacking Single Sign-On
Single Sign-On (SSO) systems simplify login procedures by using an
Identity Provider (IdP) to issue authentication tokens which can be consumed by
Service Providers (SPs). Traditionally, IdPs are modeled as trusted third
parties. This is reasonable for SSO systems like Kerberos, MS Passport and
SAML, where each SP explicitly specifies which IdP it trusts. However, in open
systems like OpenID and OpenID Connect, each user may set up their own IdP, and a
discovery phase is added to the protocol flow. Thus it is easy for an attacker
to set up its own IdP. In this paper we use a novel approach for analyzing SSO
authentication schemes by introducing a malicious IdP. With this approach we
evaluate one of the most popular and widely deployed SSO protocols - OpenID. We
found four novel attack classes on OpenID, which were not covered by previous
research, and show their applicability to real-life implementations. As a
result, we were able to compromise 11 out of 16 existing OpenID implementations
like Sourceforge, Drupal and ownCloud. We automated the discovery of these
attacks in an open-source tool, OpenID Attacker, which additionally allows
fine-grained testing of all parameters in OpenID implementations. Our research helps to
testing of all parameters in OpenID implementations. Our research helps to
better understand the message flow in the OpenID protocol, trust assumptions in
the different components of the system, and implementation issues in OpenID
components. It is applicable to other SSO systems like OpenID Connect and SAML.
All OpenID implementations have been informed about their vulnerabilities and
we supported them in fixing the issues.
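The trust difference the abstract describes can be sketched in a toy model (an illustrative assumption of mine, not the OpenID Attacker tool): in closed SSO the SP whitelists IdPs, while in open SSO the discovery phase derives the IdP from a user-supplied identifier, so an attacker can simply point discovery at an IdP they control.

```python
# Toy contrast between closed and open SSO trust models. All names here are
# illustrative assumptions, not the paper's tool or any real OpenID API.

TRUSTED_IDPS = {"https://idp.example.org"}  # closed SSO: explicit whitelist

def accept_token_closed(issuer: str) -> bool:
    """Closed SSO (Kerberos/SAML-style): only whitelisted IdPs are accepted."""
    return issuer in TRUSTED_IDPS

def discover_idp(user_identifier: str) -> str:
    """Open SSO: discovery derives the IdP endpoint from the user-supplied
    identifier, so the user (or an attacker) effectively chooses the IdP."""
    return user_identifier.rstrip("/") + "/idp"

def accept_token_open(user_identifier: str, issuer: str) -> bool:
    # The SP only checks that the token issuer matches the discovered IdP;
    # nothing prevents that IdP from being attacker-controlled.
    return issuer == discover_idp(user_identifier)

# An attacker hosting their own IdP is rejected by the closed model but
# accepted by the open one:
assert not accept_token_closed("https://evil.example/idp")
assert accept_token_open("https://evil.example", "https://evil.example/idp")
```

This is exactly why modeling the IdP as malicious, rather than as a trusted third party, is the natural analysis step for open systems.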
ZETA - Zero-Trust Authentication: Relying on Innate Human Ability, not Technology
Reliable authentication requires the devices and
channels involved in the process to be trustworthy; otherwise
authentication secrets can easily be compromised. Given the
unceasing efforts of attackers worldwide, such trustworthiness
is increasingly not a given. A variety of technical solutions,
such as utilising multiple devices/channels and verification
protocols, has the potential to mitigate the threat of untrusted
communications to a certain extent. Yet such technical solutions
make two assumptions: (1) users have access to multiple
devices and (2) attackers will not resort to hacking the human,
using social engineering techniques. In this paper, we propose
and explore the potential of using human-based computation
instead of solely technical solutions to mitigate the threat of
untrusted devices and channels. ZeTA (Zero Trust Authentication
on untrusted channels) has the potential to allow people to
authenticate despite compromised channels or communications
and easily observed usage. Our contributions are threefold:
(1) We propose the ZeTA protocol with a formal definition
and security analysis that utilises semantics and human-based
computation to ameliorate the problem of untrusted devices
and channels. (2) We outline a security analysis to assess
the envisaged performance of the proposed authentication
protocol. (3) We report on a usability study that explores the
viability of relying on human computation in this context.
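The core idea of answering semantic challenges about a shared secret, rather than transmitting the secret, can be sketched as follows. This is a heavily simplified assumption of mine, not the ZeTA protocol itself: the secret is a word, and each round asks a yes/no question about one of its attributes, so no single round reveals the word even on an observed channel.

```python
import secrets

# Toy semantic challenge-response (an assumed simplification, not ZeTA's
# actual protocol or security parameters).

SEMANTICS = {  # tiny hypothetical knowledge base shared by user and server
    "banana": {"fruit", "yellow"},
    "fire":   {"hot", "red"},
    "snow":   {"cold", "white"},
}
ATTRIBUTES = sorted({a for attrs in SEMANTICS.values() for a in attrs})

def respond(secret_word: str, attribute: str) -> bool:
    """Human-side computation: does the secret word have this attribute?"""
    return attribute in SEMANTICS[secret_word]

def authenticate(secret_word: str, answers_fn, rounds: int = 8) -> bool:
    """Server side: ask `rounds` random attribute questions and compare
    the user's yes/no answers against its own semantic knowledge."""
    for _ in range(rounds):
        attr = secrets.choice(ATTRIBUTES)
        if answers_fn(attr) != respond(secret_word, attr):
            return False
    return True

# A user who knows the secret always answers consistently:
assert authenticate("fire", lambda a: respond("fire", a))
```

An eavesdropper sees only yes/no answers to randomly chosen questions; an impostor guessing a wrong word is caught with high probability as the number of rounds grows.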
Term-based composition of security protocols
In the context of security protocol parallel composition, where messages
belonging to different protocols can intersect each other, we introduce a new
paradigm: term-based composition (i.e. the composition of message components
also known as terms). First, we create a protocol specification model by
extending the original strand spaces. Then, we provide a term composition
algorithm based on which new terms can be constructed. To ensure that security
properties are maintained, we introduce the concept of term connections to
express the existing connections between terms and encryption contexts. We
illustrate the proposed composition process by using two existing protocols.
Comment: 2008 IEEE International Conference on Automation, Quality and
Testing, Robotics, Cluj-Napoca, Romania, May 2008, pp. 233-238, ISBN
978-1-4244-2576-
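A very loose sketch of the two notions the abstract combines, terms and their connections to encryption contexts, might look like this. The representation below is my own assumption for illustration, not the paper's extended strand-space model or its composition algorithm.

```python
# Assumed toy representation: a term is an atom (str), an encryption
# ("enc", key, payload), or a pairing ("pair", a, b).
from typing import Union

Term = Union[str, tuple]

def compose(t1: Term, t2: Term) -> Term:
    """Build a new term from components taken from two protocols' messages."""
    return ("pair", t1, t2)

def connections(term: Term, context: tuple = ()) -> list:
    """List (atom, enclosing-encryption-keys) pairs for every atom, echoing
    the idea of tracking connections between terms and encryption contexts."""
    if isinstance(term, str):
        return [(term, context)]
    tag = term[0]
    if tag == "enc":
        _, key, payload = term
        return connections(payload, context + (key,))
    _, a, b = term  # "pair"
    return connections(a, context) + connections(b, context)

m1 = ("enc", "kA", "nonceA")                  # component from protocol 1
m2 = ("enc", "kB", ("pair", "nonceB", "id"))  # component from protocol 2
composed = compose(m1, m2)
assert connections(composed) == [
    ("nonceA", ("kA",)), ("nonceB", ("kB",)), ("id", ("kB",))]
```

The point of recording the encryption context of each sub-term is that a composed message can be checked for clashes (e.g. the same nonce appearing under two different keys) before concluding that security properties are preserved.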
Public Key Infrastructure based on Authentication of Media Attestments
Many users would prefer the privacy of end-to-end encryption in their online
communications if it can be done without significant inconvenience. However,
because existing key distribution methods are not trustworthy enough for
automatic use, key management has remained a user problem. We propose a
fundamentally new approach to the key distribution problem by empowering
end-users with the capacity to independently verify the authenticity of public
keys using an additional media attestment. This permits client software to
automatically look up public keys from a keyserver without trusting the
keyserver, because any attempted MITM attacks can be detected by end-users.
Thus, our protocol is designed to enable a new breed of messaging clients
with true end-to-end encryption built in: one that is verifiably secure
against MITM attacks, does not require users to manually manage public keys,
and does not require trusting any third parties.
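The verification step described above can be sketched minimally: the client fetches a key from an untrusted keyserver and checks it against a short fingerprint attested out-of-band (here via SHA-256, my assumption; the paper's exact fingerprint scheme and attestment format are not specified in the abstract).

```python
import hashlib

# Minimal sketch (assumed, not the paper's exact construction): verify a
# keyserver-supplied public key against an independently attested fingerprint.

def fingerprint(public_key: bytes) -> str:
    """Short, human-comparable digest of a public key."""
    return hashlib.sha256(public_key).hexdigest()[:16]

def verify_key(key_from_server: bytes, attested_fp: str) -> bool:
    # The keyserver need not be trusted: a MITM-substituted key will not
    # match the fingerprint obtained through the media attestment.
    return fingerprint(key_from_server) == attested_fp

alice_key = b"-----BEGIN PUBLIC KEY----- (Alice's real key) -----END-----"
attested = fingerprint(alice_key)  # published via the media attestment
mitm_key = b"-----BEGIN PUBLIC KEY----- (attacker's key) -----END-----"

assert verify_key(alice_key, attested)       # genuine key accepted
assert not verify_key(mitm_key, attested)    # substituted key detected
```

Because the fingerprint travels over an independent channel, the keyserver lookup can be fully automatic while remaining untrusted.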
Using decision problems in public key cryptography
There are several public key establishment protocols as well as complete
public key cryptosystems based on allegedly hard problems from combinatorial
(semi)group theory known by now. Most of these problems are search problems,
i.e., they are of the following nature: given a property P and the information
that there are objects with the property P, find at least one particular object
with the property P. So far, no cryptographic protocol based on a search
problem in a non-commutative (semi)group has been recognized as secure enough
to be a viable alternative to established protocols (such as RSA) based on
commutative (semi)groups, although most of these protocols are more efficient
than RSA is.
In this paper, we suggest using decision problems from combinatorial group
theory as the core of a public key establishment protocol or a public key
cryptosystem. By using a popular decision problem, the word problem, we design
a cryptosystem with the following features: (1) Bob transmits to Alice an
encrypted binary sequence which Alice decrypts correctly with probability "very
close" to 1; (2) the adversary, Eve, who is granted arbitrarily high (but
fixed) computational speed, cannot positively identify (at least, in theory),
by using a "brute force attack", the "1" or "0" bits in Bob's binary sequence.
In other words: no matter what computational speed we grant Eve at the outset,
there is no guarantee that her "brute force attack" program will give a
conclusive answer (or an answer which is correct with overwhelming probability)
about any bit in Bob's sequence.
Comment: 12 pages
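The mechanics of encrypting bits via the word problem can be illustrated with a toy: encode "1" as a group word equal to the identity and "0" as a word that is not, so the receiver decrypts by deciding the word problem. Crucially, the toy below uses a free group, where the word problem is easy (free reduction); this is my simplification to show the mechanics, whereas the paper's security rests on platform groups where the word problem is hard for the adversary.

```python
import random

# Toy bit encryption via the word problem in a free group on {a, b}.
# Uppercase letters denote inverses (A = a^-1, B = b^-1).

def reduce_word(w: str) -> str:
    """Freely reduce a word: cancel adjacent inverse pairs like 'aA' or 'Bb'.
    The reduced form of the identity element is the empty word."""
    out = []
    for c in w:
        if out and out[-1] == c.swapcase():
            out.pop()
        else:
            out.append(c)
    return "".join(out)

def inverse(w: str) -> str:
    return "".join(c.swapcase() for c in reversed(w))

def encrypt_bit(bit: int, rng: random.Random) -> str:
    g = "".join(rng.choice("abAB") for _ in range(6))
    if bit == 1:
        # A word trivial in the group: a conjugate of g * g^-1.
        c = "".join(rng.choice("abAB") for _ in range(3))
        return c + g + inverse(g) + inverse(c)
    return reduce_word(g) or "a"  # a non-trivial word

def decrypt_bit(word: str) -> int:
    """Alice decides the word problem: trivial word -> 1, else 0."""
    return 1 if reduce_word(word) == "" else 0

rng = random.Random(7)
bits = [1, 0, 1, 1, 0]
assert [decrypt_bit(encrypt_bit(b, rng)) for b in bits] == bits
```

In the paper's setting, Alice holds extra information making her instance of the word problem decidable, while Eve, even with arbitrarily high fixed computational speed, cannot conclusively classify a ciphertext word as trivial or not.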
Applications of tripled chaotic maps in cryptography
Security of information has become a major issue during the last decades. New
algorithms based on chaotic maps were suggested for protection of different
types of multimedia data, especially digital images and videos in this period.
However, many of them fundamentally were flawed by a lack of robustness and
security. For getting higher security and higher complexity, in the current
paper, we introduce a new kind of symmetric key block cipher algorithm that is
based on \emph{tripled chaotic maps}. In this algorithm, the use of two
coupling parameters, together with the increased complexity of the
cryptosystem, contributes to higher security. In
order to increase the security of the proposed algorithm, the size of key space
and the computational complexity of the coupling parameters should be increased
as well. Both theoretical and experimental results indicate that the proposed
algorithm offers acceptable speed and, owing to the two coupling parameters,
sufficient complexity and high security.
Note that the ciphertext has a flat distribution and has the same size as the
plaintext. Therefore, it is suitable for practical use in secure
communications.
Comment: 21 pages, 10 figures
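The general shape of a chaotic-map stream cipher, ciphertext the same size as the plaintext, with key material drawn from map parameters, can be sketched with a single logistic map. This is my illustrative toy, not the paper's tripled-map design, and (as the next abstract shows for a related scheme) such naive floating-point constructions have known weaknesses.

```python
# Toy logistic-map keystream cipher (illustrative only; not the paper's
# tripled chaotic map algorithm, and not secure for real use).

def logistic_keystream(x0: float, r: float, n: int) -> bytes:
    """Iterate x -> r*x*(1-x) and quantize each state to a keystream byte."""
    x, out = x0, bytearray()
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) & 0xFF)
    return bytes(out)

def crypt(data: bytes, x0: float, r: float) -> bytes:
    """XOR stream cipher: the ciphertext has the same size as the plaintext,
    and decryption is the same operation with the same (x0, r) key."""
    ks = logistic_keystream(x0, r, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

msg = b"secure communications"
ct = crypt(msg, x0=0.3141592, r=3.99)            # encrypt
assert crypt(ct, x0=0.3141592, r=3.99) == msg    # decrypt with the same key
assert len(ct) == len(msg)
```

The key here is the pair (x0, r); the paper's point is that coupling multiple maps enlarges the key space and the system's complexity beyond what a single map provides.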
On the security of a new image encryption scheme based on chaotic map lattices
This paper reports a detailed cryptanalysis of a recently proposed encryption
scheme based on the logistic map. Some problems are emphasized concerning the
key space definition and the implementation of the cryptosystem using
floating-point operations. It is also shown how the key space can be
considerably reduced through a ciphertext-only attack. Moreover, a timing
attack allows part of the key to be estimated, owing to the relationship
between this part of the key and the encryption/decryption time.
As a result, the main features of the cryptosystem do not satisfy the demands
of secure communications. Some hints are offered to improve the cryptosystem
under study according to those requirements.
Comment: 8 pages, 8 figures
Ensemble Machine Learning Approaches for Detection of SQL Injection Attack
In the current era, the SQL Injection Attack is a serious threat to the security of the ongoing cyber world, particularly for the many web applications that reside on the internet. Many webpages accept sensitive information (e.g. usernames, passwords, bank details) from users and store it in databases that also reside on the internet. Although these online databases are important for remote access for various business purposes, attackers can gain unrestricted access to them, or bypass authentication procedures, with the help of a SQL Injection Attack. This attack results in great damage to and alteration of the database, and has been ranked as the topmost security risk in the OWASP Top 10. Considering the difficulty of detecting unknown attacks with the current pattern-matching techniques, a strategy for SQL injection detection based on Machine Learning is proposed. Our motive is to detect this attack by splitting queries into their corresponding tokens with the help of tokenization and then applying our algorithms over the tokenized dataset. We used four Ensemble Machine Learning algorithms: Gradient Boosting Machine (GBM), Adaptive Boosting (AdaBoost), Extreme Gradient Boosting Machine (XGBM), and Light Gradient Boosting Machine (LGBM). The results yielded by our models are near to perfection, with the error rate being almost negligible. The best results are yielded by LGBM, with an accuracy of 0.993371, and precision, recall, and F1 of 0.993373, 0.993371, and 0.993370, respectively. LGBM also yielded a low error rate, with a False Positive Rate (FPR) and Root Mean Squared Error (RMSE) of 0.120761 and 0.007, respectively. The worst results are yielded by AdaBoost, with an accuracy of 0.991098, and precision, recall, and F1 of 0.990733, 0.989175, and 0.989942, respectively. AdaBoost also yielded a high False Positive Rate (FPR) of 0.009.
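The tokenization step described above can be sketched as follows. The token rules and normalization below are my assumptions for illustration; the paper's exact pipeline and feature set are not specified in the abstract.

```python
import re

# Assumed toy tokenizer: split a SQL query into normalized tokens, which
# would then be turned into features for the ensemble classifiers.

TOKEN_RE = re.compile(r"""
      (?P<str>'[^']*')                      # quoted string literal
    | (?P<num>\b\d+\b)                      # integer literal
    | (?P<op><>|<=|>=|=|<|>|--|;|\(|\)|,)   # operators / punctuation
    | (?P<word>[A-Za-z_][A-Za-z_0-9]*)      # keywords and identifiers
""", re.VERBOSE)

def tokenize(query: str) -> list:
    """Split a SQL query into normalized tokens (literals collapsed to
    STR/NUM so the classifier sees structure, not values)."""
    tokens = []
    for m in TOKEN_RE.finditer(query):
        if m.lastgroup == "str":
            tokens.append("STR")
        elif m.lastgroup == "num":
            tokens.append("NUM")
        else:
            tokens.append(m.group().upper())
    return tokens

benign = tokenize("SELECT name FROM users WHERE id = 42")
attack = tokenize("SELECT name FROM users WHERE id = 42 OR '1'='1' --")

assert benign == ["SELECT", "NAME", "FROM", "USERS", "WHERE", "ID", "=", "NUM"]
assert "OR" in attack and attack.count("STR") == 2   # tautology pattern visible
```

Normalizing literals makes the injected tautology (`OR 'x'='x'`) stand out as an extra `OR STR = STR` token pattern, which is the kind of structural signal the boosting models can learn from.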