Experimental cipher using multiple hash algorithms for keystream generation.
The cipher uses multiple trusted hash algorithms, each updated on a schedule set by the security factor (1 to 10). The hashes are fed initially with the secret key, and thereafter with random data from a PRNG (currently MTWIST-64, also seeded from the secret key). The XOR value used to encrypt plaintext is picked from the bytes of the hash outputs, which are appended together into a single pool P; the chosen byte is then used as a modulus value to 'hop' to the next XOR value within P (hence the name 'hopscotch').
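To make the pool/hop idea concrete, here is a minimal sketch in Go. It assumes the pool P is one SHA-512 digest concatenated with one BLAKE2b digest (128 bytes) and that the hop rule is "next index = consumed byte mod len(P)"; the function names and the exact hop rule are illustrative and may differ from what hopscotch.go actually does (re-keying is omitted here and shown further below).

```go
package main

import (
	"crypto/sha512"
	"fmt"

	"golang.org/x/crypto/blake2b"
)

// buildPool derives the XOR pool P by concatenating the two hash outputs.
func buildPool(material []byte) []byte {
	s := sha512.Sum512(material)  // 64 bytes
	b := blake2b.Sum512(material) // 64 bytes
	return append(s[:], b[:]...)  // pool P, 128 bytes
}

// xorWithPool encrypts (or decrypts) data by XORing with bytes picked from P,
// hopping to a new position within P after each byte.
func xorWithPool(pool, data []byte) []byte {
	out := make([]byte, len(data))
	idx := 0
	for i, pt := range data {
		k := pool[idx]
		out[i] = pt ^ k
		idx = int(k) % len(pool) // hypothetical hop rule: consumed byte picks the next index
	}
	return out
}

func main() {
	pool := buildPool([]byte("SuperSecret#@11ElevenTy"))
	ct := xorWithPool(pool, []byte("hello, world"))
	pt := xorWithPool(pool, ct) // the keystream walk is data-independent, so XORing again decrypts
	fmt.Printf("%x -> %q\n", ct, pt)
}
```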
The security of the algorithm is premised on the following axioms:

- The current implementation uses two hash algorithms, SHA-512 and BLAKE2b, each producing a fixed-length 64-byte output. Empirically, at security factors from 1 to 10 (the count of input bytes encrypted before re-keying, where re-keying feeds 32 bytes of PRNG data into the hashes to derive a new XOR pool P; see the re-keying sketch after this list), it is unlikely that 64 picks from the pool would re-use the same bytes often enough to compromise security. Tests with the 'circle' analysis tool and the 'Tux.ppm' image test indicate that ciphertext does not resemble plaintext in any obvious manner. (TODO: diehard tests or others?)
- Using only a PRNG plus two or more already-proven hash algorithms as keystream pool material gives a simply-verified security-primitive basis for confidence, and makes the cipher easy to extend by adding more hash algorithms to the pool P without the complexity of a full re-analysis. So long as each individual hash algorithm is considered safe, and hash output bytes are not re-used extensively, hopping between and within the outputs of each to derive keystream XOR material should also be safe.
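The re-keying schedule described above might look like the following sketch. The reading of the security factor as "input bytes per re-key", the key-to-seed derivation, and the hop rule are assumptions carried over from the sketch above, and math/rand/v2's ChaCha8 stands in for the MTWIST-64 PRNG used by the actual implementation.

```go
package main

import (
	"crypto/sha512"
	"encoding/binary"
	"fmt"
	"math/rand/v2"

	"golang.org/x/crypto/blake2b"
)

type keystream struct {
	prng       *rand.ChaCha8 // stand-in for MTWIST-64, seeded from the key
	pool       []byte        // P: the two hash outputs appended together
	idx        int           // current hop position within P
	used       int           // input bytes handled since the last re-key
	rekeyEvery int           // security factor, read here as "bytes per re-key" (assumption)
}

func newKeystream(key []byte, securityFactor int) *keystream {
	h := sha512.Sum512(key) // hypothetical key-to-seed step
	var seed [32]byte
	copy(seed[:], h[:32])
	ks := &keystream{prng: rand.NewChaCha8(seed), rekeyEvery: securityFactor}
	ks.rekey(key) // the initial pool is derived from the secret key itself
	return ks
}

// rekey rebuilds pool P from fresh material: the key at start-up, PRNG bytes later.
func (ks *keystream) rekey(material []byte) {
	s := sha512.Sum512(material)
	b := blake2b.Sum512(material)
	ks.pool = append(s[:], b[:]...)
	ks.used = 0
}

// next returns one keystream byte, re-keying on schedule.
func (ks *keystream) next() byte {
	if ks.used >= ks.rekeyEvery {
		fresh := make([]byte, 32) // 32 bytes of PRNG data drive the new pool
		for i := 0; i < len(fresh); i += 8 {
			binary.LittleEndian.PutUint64(fresh[i:], ks.prng.Uint64())
		}
		ks.rekey(fresh)
	}
	k := ks.pool[ks.idx]
	ks.idx = int(k) % len(ks.pool) // same hypothetical hop rule as in the first sketch
	ks.used++
	return k
}

func main() {
	ks := newKeystream([]byte("SuperSecret#@11ElevenTy"), 4) // security factor 4, for illustration
	buf := make([]byte, 8)
	for i := range buf {
		buf[i] = ks.next()
	}
	fmt.Printf("first keystream bytes: %x\n", buf)
}
```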
On a modest AMD test machine (Linux amd64), encryption rates of approximately 140 Mbit/s are achieved (-m 4). As this is a pure Go implementation and little effort has been put into optimization, it is reasonable to expect that higher rates could be achieved in the future.
```
$ time ./cmd -k "SuperSecret#@11ElevenTy" <blank700MB.bin >blank700MBenc.bin

real	0m40.096s
user	0m38.318s
sys	0m2.133s
```