
    Transparent code authentication at the processor level

    The authors present a lightweight authentication mechanism that verifies the authenticity of code and thereby addresses the virus and malicious-code problems at the hardware level, eliminating the need for trusted extensions in the operating system. The proposed technique tightly integrates the authentication mechanism into the processor core. The authentication latency is hidden behind the memory access latency, allowing seamless on-the-fly authentication of instructions. In addition, the proposed authentication method supports seamless encryption of code (and static data). Consequently, while providing software users with assurance of the authenticity of the programs executing on their hardware, the proposed technique also protects the software manufacturers’ intellectual property through encryption. The performance analysis shows that, under mild assumptions, the presented technique introduces negligible overhead even for moderate cache sizes.
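    A minimal software sketch (Python) of the idea, not the authors' hardware design: each instruction block carries a MAC computed under a secret key, and the fetch path recomputes and compares that MAC before the block may be used. The block size, key, and HMAC construction below are assumptions for illustration.

```python
import hmac
import hashlib

# Hypothetical software model of per-block code authentication; the paper's
# scheme does this in hardware, overlapped with the memory access. The key,
# block size, and HMAC-SHA-256 truncated to 64 bits are illustrative choices.
BLOCK_SIZE = 64                      # bytes per instruction-cache line (assumed)
KEY = b"device-secret-key"           # provisioned secret, placeholder value

def sign_block(block: bytes) -> bytes:
    """MAC attached to each code block when the program is installed/signed."""
    return hmac.new(KEY, block, hashlib.sha256).digest()[:8]

def fetch_block(block: bytes, tag: bytes) -> bytes:
    """Re-verify the MAC before the block is admitted to the instruction cache."""
    if not hmac.compare_digest(sign_block(block), tag):
        raise RuntimeError("code authentication failed: block rejected")
    return block

code = b"\x90" * BLOCK_SIZE          # dummy instruction bytes
tag = sign_block(code)
fetch_block(code, tag)               # authentic block is accepted
# fetch_block(code[:-1] + b"\xcc", tag) would raise: tampered block is rejected
```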

    Simple Tabulation, Fast Expanders, Double Tabulation, and High Independence

    Simple tabulation dates back to Zobrist in 1970. Keys are viewed as c characters from some alphabet A. We initialize c tables h_0, ..., h_{c-1} mapping characters to random hash values. A key x=(x_0, ..., x_{c-1}) is hashed to h_0[x_0] xor ... xor h_{c-1}[x_{c-1}]. The scheme is extremely fast when the character hash tables h_i are in cache. Simple tabulation hashing is not 4-independent, but we show that if we apply it twice, then we get high independence. First we hash to intermediate keys that are 6 times longer than the original keys, and then we hash the intermediate keys to the final hash values. The intermediate keys have d=6c characters from A. We can view the hash function as a degree-d bipartite graph with keys on one side, each with edges to d output characters. We show that this graph has nice expansion properties, and from that we get that with another level of simple tabulation on the intermediate keys, the composition is a highly independent hash function. The independence we get is |A|^{Omega(1/c)}. Our space is O(c|A|) and the hash function is evaluated in O(c) time. Siegel [FOCS'89, SICOMP'04] proved that with this space, if the hash function is evaluated in o(c) time, then the independence can only be o(c), so our evaluation time is best possible for Omega(c) independence; our independence is much higher if c=|A|^{o(1)}. Siegel used O(c)^c evaluation time to get the same independence with similar space. Siegel's main focus was c=O(1), but we are exponentially faster when c=omega(1). Applying our scheme recursively, we can increase our independence to |A|^{Omega(1)} with o(c^{log c}) evaluation time. Compared with Siegel's scheme, this is both faster and gives higher independence. Our scheme is easy to implement, and it provides realistic implementations of 100-independent hashing for, say, 32- and 64-bit keys.
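    A minimal sketch of the first level, simple tabulation as described above: c character tables filled with random values, and a key hashed by XORing the c table lookups. The 8-bit characters and 32-bit hash values are assumptions for illustration; the paper's double tabulation applies the same construction twice through 6c-character intermediate keys.

```python
import random

# Simple tabulation: a key of c characters over alphabet A is hashed to
# h_0[x_0] xor ... xor h_{c-1}[x_{c-1}]. 8-bit characters and 32-bit hash
# values are illustrative choices, not prescribed by the paper.
C = 4                       # characters per key (e.g. a 32-bit key)
ALPHABET_SIZE = 256         # |A|, one table entry per character value

random.seed(1)
tables = [[random.getrandbits(32) for _ in range(ALPHABET_SIZE)]
          for _ in range(C)]

def simple_tabulation(key: int) -> int:
    h = 0
    for i in range(C):
        x_i = (key >> (8 * i)) & 0xFF    # extract character x_i
        h ^= tables[i][x_i]              # XOR the random table entry
    return h

print(hex(simple_tabulation(0xDEADBEEF)))
# Double tabulation (the paper's construction) would first map the key to an
# intermediate key of d = 6c characters and then apply simple tabulation again.
```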

    Insider-proof encryption with applications for quantum key distribution

    It has been pointed out that current protocols for device-independent quantum key distribution can leak key to the adversary when devices are used repeatedly, and that this issue has not been addressed. We introduce the notion of an insider-proof channel. This allows us to propose a means by which devices with memories could be reused from one run of a device-independent quantum key distribution protocol to the next while bounding the leakage to Eve, under the assumption that one run of the protocol could be completed securely using devices with memories. Comment: 20 pages; version 2: new presentation introducing the insider-proof channel as a cryptographic element.

    Computation using Noise-based Logic: Efficient String Verification over a Slow Communication Channel

    Utilizing the hyperspace of noise-based logic, we show two string verification methods with low communication complexity. One of them is based on continuum noise-based logic. The other one utilizes noise-based logic with random telegraph signals, for which a mathematical analysis of the error probability is also given. The latter operation can also be interpreted as computing universal hash functions with noise-based logic and using them for string comparison. To find out with 10^-25 error probability that two strings of arbitrary length are different (this value is similar to the error probability of an idealized gate in today's computers), Alice and Bob need to compare only 83 bits of the noise-based hyperspace. Comment: Accepted for publication in the European Journal of Physics B (November 10, 2010).
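    The 83-bit figure is consistent with comparing independent, uniformly random hash bits: if the two strings differ, each compared bit still agrees by chance with probability 1/2, so 83 comparisons leave an error probability of 2^-83, roughly 10^-25. A small sketch under that reading, with ordinary seeded universal hashing standing in for the noise-based hyperspace:

```python
import random

# If two strings differ, each independent random hash bit still agrees by
# chance with probability 1/2, so comparing k bits leaves error 2^-k.
# k = 83 gives 2**-83 ~ 1.03e-25, consistent with the 10^-25 figure above.
print(2.0 ** -83)

def hash_bits(s: bytes, k: int, seed: int) -> list:
    """k inner-product hash bits of s, using a seeded PRNG as an
    illustrative stand-in for the shared noise-based hyperspace."""
    rng = random.Random(seed)
    value = int.from_bytes(s, "big")
    return [bin(value & rng.getrandbits(8 * len(s))).count("1") % 2
            for _ in range(k)]

SEED = 1234                          # randomness shared by Alice and Bob
alice = hash_bits(b"string one", 83, SEED)
bob = hash_bits(b"string two", 83, SEED)
print(alice == bob)                  # almost certainly False: the strings differ
```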