
re: EVA Transcription



>2) Counting and exercising of theories
>Whatever transcription you use, you'll need an intermediate programmatic
>stage to convert it to your underlying alphabetic theory du jour (mine
>changes every few hours). But if you're going to perform your analysis
>stage programmatically *anyway*, adding a simple search-and-replace
>preprocessor isn't such a hardship, surely?

As I said, EVA doesn't "search and replace" as precisely as it was meant
to.  I took a slightly different tack, however, since there are only two
or three characters that I am routinely uncertain of.  (The first is the
cc combination: many times it's connected, other times it isn't.  The m
is almost always a connected letter, but there are some instances where
it is written with a special "flourish", and I'm not sure then whether
to count it as one character or two.)
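
For what it's worth, the kind of preprocessor being suggested would look
roughly like the sketch below (Python; the replacement table is invented
for illustration and is neither Currier nor my working theory), and the
cc and flourished-m cases are exactly where a blind search-and-replace
has to guess:

    # A search-and-replace pass from EVA strokes to "unit" glyphs.  The
    # table is invented for illustration only -- it is not Currier, not
    # EVA's own conventions, and not my working theory.
    EVA_TO_UNIT = {
        "cth": "T",   # gallows/picnic combination as one unit (assumed)
        "ch":  "C",   # bench treated as a single glyph (assumed)
        "ee":  "E",   # the ambiguous cc pair: one unit or two?  guess one
        "iin": "N",   # 'iin' collapsed to a single final glyph (assumed)
    }

    def to_units(eva_word):
        """Greedily replace the longest matching EVA sequence with a unit."""
        keys = sorted(EVA_TO_UNIT, key=len, reverse=True)  # longest first
        out, i = [], 0
        while i < len(eva_word):
            for k in keys:
                if eva_word.startswith(k, i):
                    out.append(EVA_TO_UNIT[k])
                    i += len(k)
                    break
            else:
                out.append(eva_word[i])  # pass unknown strokes through
                i += 1
        return "".join(out)

    print(to_units("qoteedy"))   # -> 'qotEdy' under this made-up table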

I had to fall back on my basic crypto and transcribe what I saw as a
unit of text (relying on previous transcriptions such as Currier), and
go from there.  Since I can't get any really valuable information from
characters like the o or the a, which are too common, I focused only on
the rare characters on each page.

Of the 5 separate characters that appear only twice on f4v, for
instance, my distance counts were 7, 14, 21, 20, and 3.  (I don't know
where the 3 came from, except that it might mean the author stayed in a
given alphabet for a word or so.)  I went back and checked, and the one
character in the 20 range could be read either way, giving me a set of 4
out of 5 that are multiples of 7 for this page.
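
A rough sketch of the sort of count I mean (Python; I'm taking
"distance" to be the difference in character position between the two
occurrences, which is one plausible reading, and page_text stands in
for a unit-level transcription of f4v):

    from collections import Counter

    def rare_char_distances(page_text):
        """Distance between the two occurrences of every character that
        appears exactly twice on the page."""
        counts = Counter(page_text)
        distances = {}
        for ch, n in counts.items():
            if n == 2:
                first = page_text.index(ch)
                second = page_text.index(ch, first + 1)
                distances[ch] = second - first
        return distances

    def multiples_of(distances, base):
        """Which of the rare characters fall on a multiple of the base."""
        return {ch: d for ch, d in distances.items() if d % base == 0}

    # page_text would be a unit-level transcription of f4v:
    # dists = rare_char_distances(page_text)
    # print(multiples_of(dists, 7))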

Different pages show limited variance on this number scheme (one longer
page, for example, has distances that are multiples of 9), but I learned
two things by charting it this way.  One is that number sets, even when
somewhat diluted, turn up far more often than in normal text (I've
tested English, Old English, Latin, Spanish, Italian, and German, all
from the 15th/16th century).  Number sets turning up at these levels
mean there is a numerical underpinning to the text that wouldn't be
found in language, even an artificial one.  This firmly sets the VMS in
the realm of cryptography (hence my vehement belief and current
objections).
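
Finding a page's base number can be sketched the same way; the candidate
range and the "count the divisors" score below are my assumptions, and
I've written the ambiguous 20 as 21, the reading that could go either
way:

    def best_base(distances, candidates=range(3, 13)):
        """Pick the candidate base that divides the most rare-character
        distances on a page."""
        scores = {b: sum(1 for d in distances if d % b == 0)
                  for b in candidates}
        best = max(scores, key=scores.get)
        return best, scores[best]

    # The f4v counts quoted above, with the ambiguous 20 read as 21:
    print(best_base([7, 14, 21, 21, 3]))   # -> (7, 4): 4 of 5 are multiples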

The second thing I learned is that, outside of less than 0.3% of
oddballs, there are only a limited number of characters, though some are
variant forms of the same character.  The gallows/picnic combination,
for instance, has more than one form, but when the forms are counted
together they fall within a specific number set in multiple instances,
which I interpret to mean these are the same character, just written
differently as the mood strikes.  Take all this into account and you
discover that the average page uses only a 24-character "alphabet".
(Why is that not surprising?)

The rare characters are the information bearers here.  Whatever the alphabet
combinations or progressions, these rare characters are often useful because
they exist once only in a given alphabet, and are only used when that
alphabet comes up.  The discovery of a base number for any given page is
critical to discovering and building upon the rest of the mechanism for that
page, and for the entire manuscript.

These are counts that cannot be performed, and corrections that cannot
be made, using EVA.  As long as EVA is the medium for discussion, work
such as this will not go forward, be duplicated, or discussed, simply
because it cannot be reached from the EVA transcription.

The discussion at this point, if the work is to go ahead, is one that
involves discerning a unit of text and a method for expressing it as a
unit.  To that end I consider EVA a very good intermediate step (a
meticulous examination of the text), but the strokes ultimately need to
be put back together to form the best possible representation of the
real text, and the bugs worked out from there.

The linguists will see no point in this, enjoying the fact that they have
pronounceable words like Qoteedy to keep them amused.  But those who have
viewed the evidence and understand fully that this is a late 15th or early
16th century WESTERN language manuscript should see the need to go beyond
EVA and work to establish a consensus transcription that will not baffle and
befuddle newer students.  As I said, if things are this muddled now, what
will Voynich research look like in another 3 or 4 years?

There was a time when this manuscript was discussed in terms of Currier and
modified Currier, and I hope this causes some to give thought to going back
to a "unit based" discussion.  Personally I'm tired of having to try to
teach inquisitive people a "new language" every time the subject comes up,
and right now it's simpler for me to ask them to insert their daiin in their
Qoteedy than it is to waste time trying to educate every EVAer that comes
along.

GC