Novel Framework for Hidden Data in the Image Page within Executable File Using Computation between Advanced Encryption Standard and Distortion Techniques
The rapid development of multimedia and the internet allows for wide
distribution of digital media data, making it much easier to edit, modify,
and duplicate digital information. Digital documents are also easy to copy
and distribute, and therefore face many threats. Appropriate protection is
necessary given the significance, accuracy, and sensitivity of the
information. Furthermore, there is no formal method for discovering hidden
data. In this paper, a new information hiding framework is presented. The
proposed framework combines the Advanced Encryption Standard (AES) with a
distortion technique (DT) to embed information in the image page within an
executable (EXE) file, providing a secure solution that does not change the
size of the cover file. The framework includes two main functions: the first
hides the information in the image page of the EXE file through four
processes (specify the cover file, specify the information file, encrypt the
information, and hide the information); the second extracts the hidden
information through three processes (specify the stego file, extract the
information, and decrypt the information).

Comment: 6 pages, IEEE format, International Journal of Computer Science and
Information Security, IJCSIS 2009, ISSN 1947-5500, Impact Factor 0.42
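The two functions above can be sketched in Python. This is a hedged illustration, not the authors' implementation: the fixed `offset` into the cover bytes stands in for locating the image page of the EXE file, and a SHA-256 counter-mode keystream stands in for AES (which would in practice come from a cryptographic library). The size-preservation property the abstract claims holds because the ciphertext overwrites cover bytes in place:

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # SHA-256 in counter mode stands in for AES here; the paper uses AES.
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def hide(cover: bytes, secret: bytes, key: bytes, offset: int) -> bytes:
    # Overwrite bytes inside the cover (standing in for the image page of
    # an EXE file) with the encrypted secret, so the file size is unchanged.
    if offset + len(secret) > len(cover):
        raise ValueError("secret does not fit in the cover region")
    cipher = bytes(a ^ b for a, b in zip(secret, keystream(key, len(secret))))
    return cover[:offset] + cipher + cover[offset + len(secret):]

def extract(stego: bytes, key: bytes, offset: int, length: int) -> bytes:
    # Reverse of hide(): slice out the ciphertext and decrypt it.
    cipher = stego[offset:offset + length]
    return bytes(a ^ b for a, b in zip(cipher, keystream(key, length)))
```

Note that `offset` and the secret length would, in the paper's setting, be agreed between hider and extractor; they are parameters here only to keep the sketch self-contained.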
Systematization of a 256-bit lightweight block cipher Marvin
In a world heavily loaded with information, there is a great need to keep
specific information secure from adversaries. The rapid growth of research
in lightweight cryptography can be seen from the number of lightweight
stream and block ciphers that have been proposed in recent years. This
paper focuses only on lightweight block ciphers. We propose a new 256-bit
lightweight block cipher named Marvin, which belongs to the family of
Extended LS designs.

Comment: 12 pages, 6 figures
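The abstract does not give Marvin's round function, but LS designs in general iterate a nonlinear S-box layer followed by a linear layer and key addition. A toy, hypothetical round in that shape is sketched below; the S-box is PRESENT's 4-bit S-box and the position permutation is made up for illustration, so neither is Marvin's actual specification:

```python
# Toy substitution-permutation round illustrating the S-box + linear-layer
# structure of LS-style designs. NOT Marvin; parameters are illustrative.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]  # PRESENT's 4-bit S-box
INV_SBOX = [SBOX.index(i) for i in range(16)]
PERM = [(5 * i + 1) % 16 for i in range(16)]      # invertible shuffle (gcd(5,16)=1)

def round_enc(state, rk):
    state = [SBOX[x] for x in state]              # nonlinear S-box layer
    state = [state[PERM[i]] for i in range(16)]   # linear (permutation) layer
    return [x ^ k for x, k in zip(state, rk)]     # round-key addition

def round_dec(state, rk):
    state = [x ^ k for x, k in zip(state, rk)]    # undo key addition
    inv = [0] * 16
    for i in range(16):
        inv[PERM[i]] = state[i]                   # undo the permutation
    return [INV_SBOX[x] for x in inv]             # undo the S-box layer
```

A full cipher would iterate this round many times with a key schedule and round constants; the sketch shows only the invertible single-round structure.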
Coping with Poorly Understood Domains: the Example of Internet Trust
The notion of trust, as required for secure operations over the Internet, is important for ascertaining the source of received messages. How can we measure the degree of trust in authenticating the source? Knowledge in the domain is not established, so knowledge engineering becomes knowledge generation rather than mere acquisition. Special techniques are required, and special features of KBS software become more important than in conventional domains. This paper generalizes from experience with Internet trust to discuss some techniques and software features that are important for poorly understood domains.
An intelligent system for risk classification of stock investment projects
This paper demonstrates that a hybrid fuzzy neural network can serve as a risk classifier for stock investment projects. The training algorithm for the regular part of the network is based on bidirectional incremental evolution, which proves more efficient than direct evolution. The approach is compared with other crisp and soft investment appraisal and trading techniques while building a multimodel domain representation for an intelligent decision support system, so that the advantages of each model are utilised while looking at the investment problem from different perspectives. The empirical results are based on UK companies traded on the London Stock Exchange.
Uncertainty Quantification Using Neural Networks for Molecular Property Prediction
Uncertainty quantification (UQ) is an important component of molecular
property prediction, particularly for drug discovery applications where model
predictions direct experimental design and where unanticipated imprecision
wastes valuable time and resources. The need for UQ is especially acute for
neural models, which are becoming increasingly standard yet are challenging to
interpret. While several approaches to UQ have been proposed in the literature,
there is no clear consensus on the comparative performance of these models. In
this paper, we study this question in the context of regression tasks. We
systematically evaluate several methods on five benchmark datasets using
multiple complementary performance metrics. Our experiments show that none of
the methods we tested is unequivocally superior to all others, and none
produces a particularly reliable ranking of errors across multiple datasets.
While we believe these results show that existing UQ methods are not sufficient
for all common use cases and motivate further research, we conclude with a
practical recommendation as to which existing techniques seem to perform well
relative to others.
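As a concrete example of the kind of method such a comparison covers (the abstract does not name the techniques evaluated), bootstrap ensembles are a standard UQ baseline for regression: the spread of predictions across ensemble members estimates the model's uncertainty. A minimal pure-Python sketch with toy linear models standing in for neural networks:

```python
import random

def fit_line(xs, ys):
    # Ordinary least squares for y = a*x + b (closed form).
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def ensemble_uq(xs, ys, x_new, n_models=20, seed=0):
    # Bootstrap ensemble: each member is fit on a resampled dataset; the
    # standard deviation of member predictions is the uncertainty estimate.
    rng = random.Random(seed)
    preds = []
    for _ in range(n_models):
        idx = [rng.randrange(len(xs)) for _ in xs]
        a, b = fit_line([xs[i] for i in idx], [ys[i] for i in idx])
        preds.append(a * x_new + b)
    mean = sum(preds) / len(preds)
    var = sum((p - mean) ** 2 for p in preds) / len(preds)
    return mean, var ** 0.5
```

On noiseless data the members agree and the estimated uncertainty collapses toward zero; on noisy or out-of-distribution inputs the members disagree and the spread grows, which is the behaviour such evaluations measure against observed errors.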