Rudy Rucker on cellular automata: "I was first hooked on modern cellular automata by [Wolfram84]. In this article, Wolfram suggested that many physical processes that seem random are in fact the deterministic outcome of computations that are simply so convoluted that they cannot be compressed into shorter form and predicted in advance. He spoke of these computations as 'incompressible,' and cited cellular automata as good examples."
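A minimal sketch of the kind of thing Rucker is describing (my own illustration, not from the article): Wolfram's Rule 30 is fully deterministic, yet its centre column looks statistically random, and no known shortcut predicts it faster than running the automaton itself.

```python
# Rule 30: new cell = left XOR (centre OR right).
# Deterministic, yet the centre column looks random.

def rule30_step(cells):
    """Advance one generation; cells is a tuple of 0/1 values with wraparound."""
    n = len(cells)
    return tuple(
        cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
        for i in range(n)
    )

def center_column(width=101, steps=32):
    """Start from a single live centre cell; collect the centre cell per step."""
    cells = tuple(1 if i == width // 2 else 0 for i in range(width))
    bits = []
    for _ in range(steps):
        bits.append(cells[width // 2])
        cells = rule30_step(cells)
    return bits

if __name__ == "__main__":
    print("".join(map(str, center_column())))
```

The bit string printed is reproducible from a one-line rule, yet passes casual randomness inspection — the sense in which the computation is "incompressible" is that, as far as anyone knows, there is no closed form shorter than simulating every step.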
From a thread on the comp.compression discussion list, 2007 (http://groups.google.com/group/Hutter-Prize/browse_thread/thread/bfea185...):
I think the insight we have been looking for to model AI/NLP is that
the information needed to code the different ways of ordering a system
(knowledge) is always greater than the information needed to code the
system itself (for a random system).
In the context of AI/NLP it is important to note that random need not
mean indeterminate. I hope I demonstrated this in our earlier thread on
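One way to read the claim above (my interpretation, not spelled out in the thread) is as a simple counting argument: writing down a random sequence of n symbols from a k-symbol alphabet takes about n·log2(k) bits, while singling out one ordering of n distinct items takes log2(n!) bits, which grows faster than n·log2(k) for any fixed alphabet.

```python
import math

def bits_for_sequence(n, k):
    """Bits to write down a sequence of n symbols from a k-symbol alphabet."""
    return n * math.log2(k)

def bits_for_ordering(n):
    """Bits to single out one ordering of n distinct items: log2(n!)."""
    return math.log2(math.factorial(n))

if __name__ == "__main__":
    n, k = 1000, 256  # e.g. 1000 bytes of random data
    print(f"coding the system:   {bits_for_sequence(n, k):.0f} bits")
    print(f"coding an ordering:  {bits_for_ordering(n):.0f} bits")
```

For n = 1000 and a 256-symbol alphabet the sequence costs 8000 bits but an ordering costs roughly 8500, and the gap widens as n grows (log2(n!) ≈ n·log2(n) by Stirling's approximation).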
From a thread on the "Corpora" email list (http://www.uib.no/mailman/public/corpora/2007-September/005000.html):
Personally I think we can clear up a lot of the mess, and get a very
predictive model, by abandoning just one assumption. I believe much of
machine learning to be quite sound for instance. We can use it (right from
the level of sound waves). We can even keep grammar, in a sense.
The assumption I believe we need to abandon is the one that there is only
one grammar to be found.
Why do we insist on the assumption of global generalizations?
The Hutter Prize reflects the fact that natural language cannot yet be compressed as much as we would expect:
"...in 1950, Claude Shannon estimated the entropy (compression limit)
of written English to be about 1 bit per character. To date, no
compression program has achieved this level."
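A rough way to see the gap (my own sketch; the prize itself is judged on a Wikipedia corpus with far stronger compressors than this): compress a passage of English with a general-purpose compressor and count output bits per input character.

```python
import zlib

# An ordinary English passage; any prose sample would do.
TEXT = (
    "In 1950, Claude Shannon estimated the entropy of written English "
    "by measuring how well human readers could guess the next letter "
    "of a passage. General-purpose compressors still fall well short "
    "of his estimate on ordinary prose, which is part of what makes "
    "the compression of natural language an interesting benchmark."
)

def bits_per_char(text):
    """Compress with zlib at maximum level; report output bits per input char."""
    compressed = zlib.compress(text.encode("utf-8"), 9)
    return 8 * len(compressed) / len(text)

if __name__ == "__main__":
    print(f"{bits_per_char(TEXT):.2f} bits/char")
```

On a short passage like this, zlib lands several times above Shannon's 1 bit/char estimate; even state-of-the-art compressors on large corpora remain above it.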
It is inspired by Marcus Hutter's work, which argues that compression can serve as a functional definition of intelligence.
The idea of the prize is the old one that we (can't predict, and thus