
Re: VMs: VMS: entropy of my plaintext data for comparison purposes



	Hi, Jacques,

	I think that h1-h2 may be the important statistic, 
since it tends to factor out the size of the character
set:


Text        h1-h2

MARIO       1.15742
CESAR       1.88721
Source      1.38154
English     1.0000

	The entropy drop of CESAR is indeed quite remarkable; 
Voynich in Frogguy is the only thing I've seen that
large. 
Voynich in EVA is around 1.8. 

	The entropy drop of your source text is rather large 
to begin with, but I'm sure that's by design.  

	Of course, your sample isn't so large; I found I 
needed about 30K of text to get really stable results.

Dennis
______________________________________________________________________
To unsubscribe, send mail to majordomo@xxxxxxxxxxx with a body saying:
unsubscribe vms-list