
*To*: <voynich@xxxxxxxx>
*Subject*: RE: Naive question
*From*: "Karl Kluge" <kckluge@xxxxxxxxxxxx>
*Date*: Wed, 16 Jan 2002 21:07:32 -0700
*Importance*: Normal
*In-reply-to*: <3C4642A0.EB5B46C7@mail.msen.com>
*Reply-to*: <kckluge@xxxxxxxxxxxx>

The short answer is "sort of." There are minimum description length measures (how big is the smallest automaton of a given class that generates a given set of strings?), but in the general case, while you can *define* the measure, you can't necessarily effectively *compute* it. Caveat -- that's an off-the-cuff answer, but if you have access to something like INSPEC you can search for "minimum description length" and find more technical detail.

Karl

-----Original Message-----
From: Bruce Grant [mailto:bgrant@xxxxxxxxxxxxx]
Sent: Wednesday, January 16, 2002 8:19 PM
To: voynich@xxxxxxxx
Subject: Naive question

Are there other measures of information content than entropy?

As I understand it, if you interpret "in" as two characters instead of one, the entropy of the message containing it ought to appear lower because there are more possible combinations of characters (i.e. "ii" or "nn" is possible if "i" and "n" are separate characters, but not if "in" represents a single character).

Is there such a thing as a measure of information content which is not so dependent on knowing exactly what is or is not a character?

Bruce
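[Archive note: the dependence of entropy on tokenization that Bruce describes can be made concrete with a short sketch. This is an illustrative calculation only -- the sample string and the `tokenize_in` helper are made up for the example, not an actual Voynich transcription or an established tool. It computes the per-symbol Shannon entropy of the same text under two readings: one where "i" and "n" are separate characters, and one where "in" is a single character.]

```python
import math
from collections import Counter

def entropy(symbols):
    """Shannon entropy of a sequence, in bits per symbol."""
    counts = Counter(symbols)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Illustrative sample only (Voynich-flavored, not a real transcription).
text = "daiin daiin dain"

# Reading 1: every character is its own symbol.
chars = list(text)

# Reading 2: treat "in" as one symbol (hypothetical tokenizer for this sketch).
def tokenize_in(s):
    tokens, i = [], 0
    while i < len(s):
        if s[i:i + 2] == "in":
            tokens.append("in")
            i += 2
        else:
            tokens.append(s[i])
            i += 1
    return tokens

tokens = tokenize_in(text)

# Per-symbol entropy and total bits differ between the two readings,
# even though the underlying text is identical.
print(entropy(chars), entropy(chars) * len(chars))
print(entropy(tokens), entropy(tokens) * len(tokens))
```

The point of the sketch is simply that both the bits-per-symbol figure and the total bit count change when the symbol inventory changes, which is why a tokenization-free measure (Karl's MDL pointer) is the natural thing to ask about.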

**References**:
- **Naive question** *From:* Bruce Grant
