
Re: VMs: Re: Moot points, getting long



Rene wrote:

> --- Nick Pelling wrote:
> > I fully applaud GC's efforts to transcribe more
> > of what is there to be seen than does EVA. My
> > caveat there is that this would need to be built
> > on a full understanding of what strokes were
> > made, when and why - the current thread on how
> > to write a <ch> is perhaps a case in point.
>
> What GC writes and what you write all makes sense
> from one viewpoint or another. But it's all
> theoretical. In this approach, one can only
> start transcribing after the MS has been
> deciphered.
>
> While that seems like an attempt at reductio ad
> absurdum, better to realise that this is really
> what normal transcription of handwritten documents
> is all about.
> You can actually read the language so you know how
> to decide which particular squiggle on the paper
> is which letter.

Actually, I think you're both wrong on this, each to one degree or
another.  Nick suggests that we need to understand how every stroke is
laid out on the vellum, and the order in which this was done.  I went
through that stage, which is how I know that the <ch> representation is
inaccurate, and that the <sh> representation as proposed by EVA is
virtually non-existent in the Voynich script.  I don't fully agree with
Nick here because the Voynich script is written with separations
between units in tens of thousands of instances; it is only in a
handful of instances that the nature and order of the strokes matters.
However, this order is a detail that *should* be reflected in any
stroke-based transcription, and it is lacking in EVA, which makes
arbitrary assignments for these elements.  An entire series of glyphs
in EVA is represented by the arbitrary combination of <c> and <h>, when
an analysis of the script demonstrates that this is most certainly not
the means of construction, and the list of errant assignments
continues.

I disagree with Rene on virtually every point in this instance - my
research was not based on any pre-supposed theory; instead, the theory
was built after due observation of the actual numbers of things present
in the Voynich.  One doesn't have to "start transcribing after the MS
has been deciphered" in any approach, unless you're using the EVA
transcription, which refuses to draw any intelligent conclusions
whatsoever as to the nature of the script and glyph identification.
This is not an unsolvable problem in any sense.  It does not require a
super-human ability to quantify the evidence and formulate an approach
that fits it.  It's a tedious process, but not one that involves much
brain-work.  EVA does a lot of brain-work while avoiding the tedium of
observable fact.  That, unfortunately, is not my way.

> While that seems like an attempt at reductio ad
> absurdum, better to realise that this is really
> what normal transcription of handwritten documents
> is all about.
> You can actually read the language so you know how
> to decide which particular squiggle on the paper
> is which letter.

I also don't believe this statement to be true, based on accounts by
others who have attempted or achieved breakthroughs in unknown
languages, as well as on some contacts currently working on such
projects.  Most of these accounts are pre-computer age, but the
approach is constant: transcription, whether on paper or by computer,
attempts to include *all* information and variation until we know
whether a "particular squiggle" is important or not.  These people hold
to my little edict for those who study the Voynich - "Stay close to the
text".  It's okay to use computers for the task of gaining significant
numbers, but a mechanism must be in place to bounce back and forth
between the images and the numbers for comparison.  I personally have
the microfilm images hyperlinked by folio, line, and word (f3v.5.3, as
an example), and my database indexed for speedy access to these links
for comparison.  This has allowed me to identify problems and make
comparisons in seconds, and has been invaluable to my understanding.
"Stay close to the text" is good advice, and one sees very quickly that
these are not mere "squiggles on paper".
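For anyone who wants to wire up the same kind of lookup, here is a toy
Python sketch of the idea.  The f3v.5.3 locator format is the one I
use; the directory layout and file names below are invented for
illustration, not a description of my actual setup.

    # Toy sketch: map a "folio.line.word" locator to an image crop
    # and key the transcription on the same locator.  The file
    # layout is invented for illustration.
    from pathlib import Path

    def parse_locator(locator):
        """Split 'f3v.5.3' into ('f3v', 5, 3)."""
        folio, line, word = locator.split(".")
        return folio, int(line), int(word)

    def image_link(locator, root="microfilm"):
        """e.g. 'f3v.5.3' -> microfilm/f3v/line05/word03.png"""
        folio, line, word = parse_locator(locator)
        return Path(root) / folio / ("line%02d" % line) \
               / ("word%02d.png" % word)

    # Keying the transcription on the locator lets any number in
    # the statistics bounce straight back to its image.
    transcription = {"f3v.5.3": "daiin"}    # toy entry
    print(image_link("f3v.5.3"))    # microfilm/f3v/line05/word03.png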

> Now imagine a Chinese or Arab, who is not
> too familiar with handwritten Latin script,
> transcribing a handwritten document. If he
> decides that all the different variations in
> which, say, the letter 'g' has been written
> should all be different, we may not be able to
> interpret his transcription. If he does that for
> a big enough number of letters, one will be
> completely lost with his output.

By the same token, if we hand a Chinese or Arab native the current EVA
transcription and ask them to pull out the significant features and
reconstruct the writing based on these representations, the information
given to them would be 'skewed', so the output would reflect the same
'skewing' as the data handed to them.  Takahashi's transcription does
not use the capitalization rule, and as I have stated, use of this rule
would bring EVA up to the level of 82-87% accuracy, but no higher.
Right now, without this rule in force, the transcription suffers an
extremely high error rate in the accurate reproduction of the Voynich
glyphs.  How high, I'm actually afraid to put into print.

IF the Voynich is natural language, this discussion is indeed *moot*,
but the 'natural language' enthusiasts are getting fewer and fewer
these days, simply because we know more about the Voynich than at any
other time in its history.  But if the Voynich is cipher, shorthand, or
any variant of the two, this discussion is not *moot* at all, but the
most important discussion that can be conducted.  IF the Voynich is
cipher, an error rate of 1% generally translates to about 2 glyphs per
page, and the impact is much higher in the herbal section than in the
later sections, given the short paragraphs.  A mis-transcribed glyph at
the beginning of a page would make the entire page inaccessible to all
but those who know the majority of the system employed; at the very
least it would render the page unavailable for accurate statistical
analysis and skew group analysis to an unacceptable level.  My goal has
been to render the Voynich to the level that the error rate is 0.2% or
less, an annoying but acceptable level of inaccuracy in transcription.
My motivation is clearly different from that of EVA, in that I demand a
specific level of accuracy, cannot accept less, and cannot accept "we
don't know" as a generic shrug-off to important questions such as this.

> In transcribing, decisions have to be made.
> It is a matter of taste how such decisions are
> made. Two people transcribing the Voynich MS
> will come up with two different sets of rules on
> how to do it.

I disagree again, and for a pleasant change I use Frogguy as my
example.  The first phase of transcription must be "highly analytical",
just as Jacques' transcription tends to be.  He made his mistakes, but
given the imagery at hand, he did a pretty good job in the analytical
process.  Jacques and I differed on the conclusions to be drawn from
his analysis, but his research was a very good attempt at discovering
the roots of the Voynich script.  Jacques stopped at this phase and
didn't move beyond a few vague conclusions.  Perhaps that reflects the
intelligence of the man, not going too far beyond his own
understanding, in which case it is also admirable.

You say that in transcribing decisions have to be made, but by what
process were these decisions made with respect to EVA?  Simply building
in the flexibility to represent the same glyph in various ways is not
making a decision based on the data present.  The representation <cth>,
as an example, does not reflect any of the basics found in the
construction of this glyph.  In the majority of cases it is constructed
first <t>, then <e>, then <h>.  You can tell this because the ink was
wet when the strokes were put in place, and there is a slight capillary
draw in the direction of the secondary stroke where one stroke crosses
the underlying stroke of wet ink.  Add up these instances and one gets
a fairly consistent picture of glyph construction.  Such a priori
observation and its reflection in transcription is not a "matter of
taste" as you suggest.  It is *essential* to the approach of
transcription.
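"Add up these instances" is literal bookkeeping.  A toy Python sketch
of the tally (the observation records below are invented for
illustration; the real entries come from the capillary-draw evidence,
folio by folio):

    # Tally observed stroke orders per glyph and report the
    # majority order.  The sample records are invented.
    from collections import Counter, defaultdict

    observations = [
        ("cth", ("t", "e", "h")),
        ("cth", ("t", "e", "h")),
        ("cth", ("e", "t", "h")),    # the occasional outlier
    ]

    orders = defaultdict(Counter)
    for glyph, order in observations:
        orders[glyph][order] += 1

    for glyph, counts in orders.items():
        majority, n = counts.most_common(1)[0]
        print("%s: %s in %d of %d cases"
              % (glyph, "-".join(majority), n, sum(counts.values())))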

> In reality, some letters which were meant to be
> the same will look different. Making more
> distinctions is therefore not necessarily
> better.

The reality of this is that very few glyphs have "variants" that could
be attributed to handwriting differences, and it is essential in
transcription to reflect these variants, even when it might be assumed,
correctly or incorrectly, that they represent the same form.

> And right now we don't just know where to draw
> the line. If GC says that he is certain that the
> different plumes were meant to be different,
> I won't say that he is wrong. Because I know
> just as little as GC what a 'sh' is meant to
> represent.
> Maybe it just indicates the mood of the author -
> sometimes he just feels like drawing a nice one,
> and sometimes just a plain one.
>
> Who knows?

I say there is a visual, observable difference in the plumes, and this
has been confirmed by others looking at the script.  I've at one time
or another delineated the differences, but no matter what I personally
interpret as their ultimate meaning, the solid fact is that EVA does
not recognize this a priori variance in any form.  EVA may also allow
for certain <e> constructs, but does not implement them in actual
transcription.  EVA has arbitrarily assigned meaning to the <c> and <h>
constructs, and simply has no grasp of the <s> construct.  It's a long
list of errors in transcription.

> As long as we can distinguish between observations
> and opinions, this discussion can lead somewhere.

My posts on this subject have NEVER been opinion; they are based on
observation and the numbers derived from those observations.  I accept
your premise and will in short order enumerate my observations.  I've
sent a personal request to Gabriel to allow me to use his copyrighted
EVA font in a presentation of the facts, and with each example you will
find a list of non-conforming articles as evidence.  The time is now,
if you're ever going to understand the Voynich as I do.  It's much
simpler in construction than you make it out to be.

One other comment on a previous post that I can't seem to find at
present, relating to the statement that there are only so many keys on
the keyboard, and that the EVA font tried to make proper use of these
keys.  I remember reading that the EVA cut-off for giving a glyph its
own representation was 10 occurrences.  My particular analysis
demonstrates that a mere 41 glyphs count 10 or higher, and by
comparison we might be able to reduce this number to around 28 or so,
somewhere in that range.  The standard keyboard offers slots for 94
representations before going into the {alt} keys, so I've never had the
assignment problem described by Gabriel.  Of course, I don't
arbitrarily hatchet things into little pieces and then try to put them
back together.  If this approach had a basis in observation, I'd get
out my hockey mask and hatchet and go to bloody town.  As it stands,
without the blood and gore (such fun I'd hate to miss), I can represent
the glyphs down to those that number 2 or higher using a standard
keyboard, and if anyone gives it some thought, the very appearance of
the Voynich should say that this *should* be the case, not the
extended, arbitrary and unnecessary complexities of the EVA alphabet.
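The keyboard arithmetic is easy to check.  A small Python sketch (the
frequency table below is a made-up stand-in; my actual census is what
gives the figure of 41 glyphs at or above the cut-off):

    # There are 94 printable ASCII characters (0x21-0x7E) before
    # any {alt} keys.  The glyph census is a made-up stand-in for
    # a real frequency table.
    slots = [chr(c) for c in range(33, 127)]
    assert len(slots) == 94

    glyph_counts = {"glyph_a": 5000, "glyph_b": 312,
                    "glyph_c": 41, "rare_1": 4, "rare_2": 2}

    cutoff = 10
    frequent = sorted(g for g, n in glyph_counts.items()
                      if n >= cutoff)
    print("%d glyphs at or above the cut-off, %d slots to spare"
          % (len(frequent), len(slots) - len(frequent)))

    # One key per frequent glyph, no digraphs needed.
    assignment = dict(zip(frequent, slots))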

GC
