Randomness

Schmidhuber's "New AI"

Schmidhuber's "New AI" seeks to base AI in prediction. It benefits by being based purely on the theory of computation. It need thus only be generally computable and not necessarily regular:

http://www.idsia.ch/~juergen/newai/newai.html
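
To make "computable but not regular" concrete, here is a minimal sketch (the standard textbook example, not Schmidhuber's own code): the language a^n b^n cannot be recognized by any finite automaton, yet a few lines of Python decide it exactly.

    # { a^n b^n : n >= 0 } is computable but not regular: recognizing it
    # requires unbounded counting, which no finite automaton can do, yet
    # deciding it by program is trivial.
    def is_anbn(s: str) -> bool:
        n = len(s) // 2
        return len(s) == 2 * n and s[:n] == "a" * n and s[n:] == "b" * n

    assert is_anbn("") and is_anbn("aabb")
    assert not is_anbn("aab") and not is_anbn("abab")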

Grammar: formally incomplete or just random?

Natural language appears to be random (cf. this excerpt from the Hutter Prize rationale page):

"...in 1950, Claude Shannon estimated the entropy (compression limit)
of written English to be about 1 bit per character [3]. To date, no
compression program has achieved this level."
(http://cs.fit.edu/~mmahoney/compression/rationale.html)
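
For a rough sense of the gap, the sketch below (my own illustration, not the prize's methodology) measures the bits per character that Python's standard-library compressors achieve on any plain-text file; the prize itself uses enwik8, a 100 MB Wikipedia extract.

    import bz2, lzma, sys, zlib

    # Usage: python bpc.py somefile.txt
    # Point this at any large plain-text file, e.g. a Project Gutenberg
    # book or enwik8 (the Hutter Prize's 100 MB Wikipedia extract).
    data = open(sys.argv[1], "rb").read()

    for name, compress in [("zlib", zlib.compress),
                           ("bz2", bz2.compress),
                           ("lzma", lzma.compress)]:
        bpc = 8 * len(compress(data)) / len(data)
        print(f"{name:5s}: {bpc:.2f} bits/character")

    # Typical results on English prose are roughly 2-3 bits/character,
    # well above Shannon's ~1 bit/character estimate.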

The most successful contemporary natural language technologies are probabilistic.

The usual explanation is that something external selects between alternatives which are equally probable on linguistic grounds. Commonly this external factor is assumed to be "meaning".
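
A toy sketch of both points (a hypothetical word-bigram model, not any particular production system): the model predicts the next word from counts, and where two continuations come out exactly equally probable, nothing inside the model can choose between them.

    from collections import Counter, defaultdict

    # Train a word-bigram model on a tiny, made-up corpus.
    corpus = "the dog barked . the cat meowed . the dog slept . the cat slept .".split()
    bigrams = defaultdict(Counter)
    for w1, w2 in zip(corpus, corpus[1:]):
        bigrams[w1][w2] += 1

    def next_word_distribution(word):
        counts = bigrams[word]
        total = sum(counts.values())
        return {w: c / total for w, c in counts.items()}

    # 'dog' and 'cat' are exactly equally probable after 'the': the
    # statistics alone cannot decide, so something external (on the
    # usual account, "meaning") must select between them.
    print(next_word_distribution("the"))   # {'dog': 0.5, 'cat': 0.5}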

Relationship of Goedel's incompleteness theorem to uncertainty principles and randomness

C.S. Calude and M.A. Stay, "From Heisenberg to Goedel via Chaitin", arXiv
preprint (submitted 26 Feb 2004, last revised 11 Jul 2006, v6).

Determinate system with random statistics

From a 2007 thread on the Hutter-Prize discussion list (http://groups.google.com/group/Hutter-Prize/browse_thread/thread/bfea185...):

I think the insight we have been looking for to model AI/NLP is that
the information needed to code different ways of ordering a system
(knowledge) is always greater than the information to code the system
itself (for a random system).

In the context of AI/NLP it is important to note that random need not
mean indeterminate. I hope I demonstrated this in our earlier thread on...
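
The counting behind the first claim can be sketched numerically. Specifying one of the n! orderings of n distinct elements costs log2(n!) bits, while specifying a random length-n sequence over an alphabet of size k costs n*log2(k) bits; for large enough n the ordering information dominates, so "always" should be read as "for sufficiently large systems". A sketch, assuming a byte alphabet:

    import math

    ALPHABET = 256  # assume the system is a string of bytes

    def bits_for_sequence(n):
        """Bits to specify one random length-n string: n * log2(alphabet size)."""
        return n * math.log2(ALPHABET)

    def bits_for_ordering(n):
        """Bits to specify one of the n! orderings of n elements: log2(n!)."""
        return math.lgamma(n + 1) / math.log(2)

    for n in (100, 1_000, 10_000):
        print(f"n={n:6d}  sequence: {bits_for_sequence(n):9.0f} bits"
              f"  ordering: {bits_for_ordering(n):9.0f} bits")
    # For a byte alphabet the ordering cost overtakes the sequence cost
    # between n=100 and n=1000, and grows faster from then on.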

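The second point, that random need not mean indeterminate, is exactly what deterministic chaos supplies. A sketch using the logistic map x -> 4x(1-x): every value follows mechanically from the seed, yet the thresholded output behaves statistically like fair coin flips.

    # A fully determinate system with random statistics: the logistic map,
    # thresholded to bits. Re-running with the same seed reproduces the
    # sequence exactly, yet its statistics look like a fair coin.
    def logistic_bits(seed, n):
        x = seed
        bits = []
        for _ in range(n):
            x = 4.0 * x * (1.0 - x)
            bits.append(1 if x > 0.5 else 0)
        return bits

    bits = logistic_bits(seed=0.123456789, n=100_000)
    ones = sum(bits)
    pairs = sum(b1 == b2 for b1, b2 in zip(bits, bits[1:]))
    print(f"fraction of 1s:          {ones / len(bits):.4f}")         # close to 0.5
    print(f"fraction of equal pairs: {pairs / (len(bits) - 1):.4f}")  # close to 0.5
    print("reproducible:", bits == logistic_bits(0.123456789, 100_000))
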
Intelligence defined as prediction

From a thread on "The value of ideas in language" on the grammatical-incompleteness Google group (http://groups.google.com/group/grammatical-incompleteness/browse_thread/...):

You may want to look at the AI work of Schmidhuber and co. which
defines intelligence in terms of prediction. This enables them to
avoid the pitfalls of logic, at least. They are hoping to build a new,
random, AI on this basis:

http://www.idsia.ch/~juergen/newai/newai.html
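
A minimal sketch of what defining intelligence in terms of prediction looks like operationally (toy predictors of my own devising, not Schmidhuber's formalism, which ranks general programs): candidates are scored purely on next-symbol accuracy, with no appeal to logic or grammar.

    from collections import Counter, defaultdict

    sequence = "abababababcababababab"  # toy data: a mostly-regular pattern

    def predict_constant(history):
        return "a"                       # always guess the same symbol

    def predict_copy_last(history):
        return history[-1] if history else "a"

    def predict_markov(history):
        # Order-1 Markov: guess the symbol that most often followed the last one.
        counts = defaultdict(Counter)
        for x, y in zip(history, history[1:]):
            counts[x][y] += 1
        if history and counts[history[-1]]:
            return counts[history[-1]].most_common(1)[0][0]
        return "a"

    for name, predictor in [("constant", predict_constant),
                            ("copy-last", predict_copy_last),
                            ("markov-1", predict_markov)]:
        hits = sum(predictor(sequence[:i]) == sequence[i]
                   for i in range(1, len(sequence)))
        print(f"{name:9s}: {hits / (len(sequence) - 1):.2f} accuracy")
    # The ranking (here markov-1 > constant > copy-last) is purely
    # behavioural: no grammar or logic is consulted.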

Hutter Prize - Incompressibility of text

The Hutter Prize reflects the fact that we cannot compress natural language as much as we would expect:

"...in 1950, Claude Shannon estimated the entropy (compression limit)
of written English to be about 1 bit per character [3]. To date, no
compression program has achieved this level."
(http://cs.fit.edu/~mmahoney/compression/rationale.html)

It is inspired by the work of Marcus Hutter, which argues that compression can serve as a functional definition of intelligence.

The idea of the prize is the old one that what we can't predict we can't compress, and thus that compression performance directly measures predictive power.
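
That equivalence can be sketched directly: a sequential predictor's cumulative surprisal, -log2 p summed over the text, is the length in bits that an arithmetic coder driven by those predictions would produce (to within a few bits), so better prediction means better compression. Below, a hypothetical adaptive order-0 character model is scored this way alongside zlib.

    import math, zlib
    from collections import Counter

    text = ("the quick brown fox jumps over the lazy dog and the slow cat "
            "watches the quick brown fox from the top of the lazy dog house")
    data = text.encode("ascii")

    # Adaptive order-0 model with Laplace smoothing: before seeing each byte,
    # assign it probability (count + 1) / (seen + 256), then update counts.
    counts = Counter()
    total_bits = 0.0
    for seen, byte in enumerate(data):
        p = (counts[byte] + 1) / (seen + 256)
        total_bits += -math.log2(p)
        counts[byte] += 1

    print(f"adaptive order-0 predictor: {total_bits / len(data):.2f} bits/character")
    print(f"zlib:                       {8 * len(zlib.compress(data)) / len(data):.2f} bits/character")
    # Improving the model's probabilities is the only way to lower the
    # first number: prediction quality and code length are the same thing.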
