Re: The French Hypothesis, Etc.
Hi Dennis,
What we're looking at is - to my eyes, of course - a clear example of the
mainstream ciphertext paradigm that thrived around 1400-1500. This has
certain easily recognisable features (a toy sketch follows the list):-
(1) no discernible punctuation;
(2) no numbers;
(3) an invented alphabet (frequently mysterious);
(4) no doubled letters;
(5) a sprinkling of null characters (sometimes fewer, sometimes more);
(6) designed to structurally resemble its plaintext - and to be easily
readable by encoders;
(7) common words (or potential cribs) replaced by subcodes; and
(8) a character mapping designed for obfuscation.
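To make the paradigm concrete, here's a toy Python sketch with features
(1)-(7) baked in. Everything in it (the stand-in alphabet, the subcode
table, the null glyphs) is invented purely for illustration - it's not a
reconstruction of any real fifteenth-century cipher, let alone of
Voynichese. Feature (8) is where tricks like the 4/4o pairing discussed
below would live; I've left this mapping deliberately plain.

import random

# (3) an "invented alphabet": arbitrary stand-in glyphs, one per letter
ALPHABET = dict(zip("abcdefghijklmnopqrstuvwxyz",
                    "QWERTYUIOPASDFGHJKLZXCVBNM"))
# (7) common words (potential cribs) replaced by subcodes
SUBCODES = {"the": "#1", "and": "#2", "king": "#3"}
# (5) a small pool of null characters to sprinkle in
NULLS = ["*", "+"]

def encipher(plaintext):
    out = []
    for word in plaintext.lower().split():
        # (1) + (2): silently drop punctuation and numbers
        word = "".join(c for c in word if c.isalpha())
        if not word:
            continue
        if word in SUBCODES:
            out.append(SUBCODES[word])
            continue
        glyphs, prev = [], None
        for c in word:
            if c == prev:          # (4): collapse doubled letters
                continue
            glyphs.append(ALPHABET[c])
            prev = c
        if random.random() < 0.2:  # (5): occasionally append a null
            glyphs.append(random.choice(NULLS))
        out.append("".join(glyphs))
    # (6): word boundaries survive, so the ciphertext mirrors the
    # plaintext's shape word for word
    return " ".join(out)

print(encipher("The king and the queen meet at noon."))

Run it a couple of times and you can see why (6) made such ciphers easy
for encoders to proofread: the word shapes line up one-for-one with the
plaintext.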
It's clear that (6) was an important reason behind the enduring popularity
of this kind of ciphertext - the "diplomatic nomenclator" continued to be
used even after people knew it was breakable and that better systems
existed, because it was a robust, well-understood and (above all else)
transparent code - more a sign of social inclusion than one of exclusion,
I'd guess.
And from looking at the Tranchedino collection of ciphers, (7) and (8)
start out as relatively minor concerns around 1450, but then become
progressively more heavily weighted closer to 1500, presumably as the
ability to crack the whole family of codes became more widespread.
However, the earliest obfuscating character-map trick I noted there was
having "4" and "4o" as two separate characters within a single cipher
alphabet... and that is simply too great a coincidence with the VMS for me
to dismiss as chance. This, too, points to the place/time of Northern
Italy / 1450-1500, and to a code constructed by someone working fair and
square within the mainstream culture of the time.
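Why is that pairing obfuscating? Because when one glyph is a prefix of
another, the glyph stream no longer tokenises uniquely: a codebreaker
can't even fix the unit boundaries before starting frequency counts. A
minimal Python sketch (the four-glyph set is hypothetical, chosen only to
mirror the 4/4o case):

# hypothetical glyph set mirroring the "4" / "4o" overlap
GLYPHS = {"4", "4o", "o", "k"}

def tokenisations(stream):
    """Enumerate every way of splitting `stream` into known glyphs."""
    if not stream:
        yield []
        return
    for g in sorted(GLYPHS):
        if stream.startswith(g):
            for rest in tokenisations(stream[len(g):]):
                yield [g] + rest

for t in tokenisations("4ok"):
    print(t)   # ['4', 'o', 'k'] and ['4o', 'k']: two readings of one stream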
As far as all the stats go: I think that a combination of two things...
(a) a verbose (and deliberately obfuscated) character mapping; and
(b) a coding system that switches between multiple shallow subcodes
...is what gives the Voynich its peculiar, highly-variable entropic profile.
In particular, the shallowness of (b) means that the underlying language
patterns are still roughly visible through the noise, close to the surface
- which is why the artificial language question keeps arising.
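To see how (a) alone reshapes the statistics, here's a rough Python
illustration. The verbose mapping is invented (vowel -> vowel+'o',
consonant -> 'q'+consonant - emphatically not a claimed VMS mapping), and
the measures are the usual first-order and conditional character
entropies. On ordinary prose both numbers should come out lower for the
mapped stream, because each inserted glyph is largely predictable from its
neighbours - yet the word-level patterns remain visible, which is the kind
of near-the-surface structure I mean.

from collections import Counter
from math import log2

def h1(text):
    # first-order entropy: bits per single character
    n = len(text)
    return -sum(c / n * log2(c / n) for c in Counter(text).values())

def h2(text):
    # conditional entropy: bits a character carries given the one before it
    pairs = Counter(zip(text, text[1:]))
    firsts = Counter(text[:-1])
    n = len(text) - 1
    return -sum(c / n * log2(c / firsts[a]) for (a, _), c in pairs.items())

# hypothetical verbose mapping -- purely for illustration
VERBOSE = {c: (c + "o" if c in "aeiou" else "q" + c)
           for c in "abcdefghijklmnopqrstuvwxyz"}

plain = ("it seems clear that the enduring appeal of the diplomatic cipher "
         "owed much to the fact that it was robust and well understood and "
         "that any clerk could be taught to use it in a day and yet that "
         "same familiarity is what made the whole family of codes fragile "
         "once frequency analysis became common knowledge")
mapped = "".join(VERBOSE.get(c, c) for c in plain)

print(h1(plain), h2(plain))    # entropy profile of the plaintext
print(h1(mapped), h2(mapped))  # both should drop for the verbosely mapped stream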
But when devising a new language, would it really be necessary to mimic
the forms and structure of the well-known (and well-understood) ciphertext
paradigm of the time so closely - to the point of carrying over nothing
useful from your existing language (numbers, punctuation, even doubled
letters), and of adopting an invented alphabet derived in part from trendy
cipher tricks?
Personally... I don't think so. :-/ But I've been wrong before, so... who
knows? :-)
Cheers, .....Nick Pelling.....