Re: VMs: Worry - information loss in transcription - pictures ...
Your e-mail is potentially interesting, but I can't
quite follow it.
> For example:-
> * Entropy of EVA = 221899 x 4.0 = 887596.00 bits
> * Entropy of simple glyphs (+ ee) = 198098 x 4.08 = 808239.84 bits
> * Entropy of pair transcription (+ ee) = 155349 x 4.36 = 677321.64 bits
What's the 4.0 mean? And what about the 4.08?
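If the 4.0 and 4.08 are per-character (zeroth-order) entropies in bits,
then each total is simply token count x bits per token. A minimal sketch
of that reading (the sample string below is a hypothetical stand-in, not
real transcription data):

```python
import math
from collections import Counter

def char_entropy(text):
    """Single-character (zeroth-order) entropy, in bits per symbol."""
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Hypothetical stand-in for an EVA transcription fragment.
text = "daiin.shol.chol.daiin.qokeey"
h = char_entropy(text)       # bits per token
total_bits = len(text) * h   # token count x bits per token
```

Under that reading, 221899 tokens at 4.0 bits each gives the quoted
887596 bits.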
> So, what's happening here?
> (1) All figures are being "diluted" by the large number of spaces -
> a space happens every 6.4 tokens (glyphs) or every 5 tokens
Not too sure what that means. People who have
reported low entropy in the past have based
this on comparisons between Voynichese and, say,
English. In those comparisons, spaces were either
used or omitted, but consistently so in both the
VMs and the plain texts. In both English/Latin and
in Voynichese, the space character is the most
frequent symbol.
> So, yes - even in agglomerated alphabets, entropy
> is low... but I think it's being kept low by the
> large number of spaces and a small handful of
> frequent symbols...
You're looking at single-character entropy, which
is a bit on the low side for the VMs, but it's
the pair entropy (or the conditional single-
character entropy) which is really anomalous.
This is true whether one considers spaces as
characters or not.
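The distinction can be made concrete: H1 is the single-character
entropy, and the conditional single-character entropy is H(digraphs)
minus H1, i.e. the remaining uncertainty about the next character once
the current one is known. A sketch under that definition (any sample
string you feed it is hypothetical here):

```python
import math
from collections import Counter

def entropy_from_counts(counts):
    """Shannon entropy in bits of a frequency distribution."""
    n = sum(counts.values())
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def conditional_entropy(text):
    """H(next char | current char) = H(digraphs) - H(first chars)."""
    pairs = Counter(zip(text, text[1:]))
    firsts = Counter(text[:-1])
    return entropy_from_counts(pairs) - entropy_from_counts(firsts)
```

Stripping spaces from the text before the call tests the
"with or without spaces" variants in exactly the same way.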
> And isn't it strange how <o> and <y> are so common, yet so very
> rarely occur beside each other? Glyph transcription + ee + oy + yo
> ==> (oy = 0.07% and yo = 0.05%).
This is precisely the origin of the low pair entropy.
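The claim is easy to check mechanically: count adjacent pairs and
report the share taken by <oy> and <yo>. A sketch (the sample string is
a hypothetical stand-in; the 0.07%/0.05% figures above come from the
full transcription):

```python
from collections import Counter

def digraph_percentages(text, digraphs):
    """Percentage of all adjacent character pairs matching each digraph."""
    pairs = Counter(zip(text, text[1:]))
    total = sum(pairs.values())
    return {d: 100.0 * pairs[tuple(d)] / total for d in digraphs}

sample = "qokeedy.okain.oty.daiin"  # hypothetical stand-in
shares = digraph_percentages(sample, ["oy", "yo"])
```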