Easy intuitive experiment, surprisingly hard
[Jul. 9th, 2008 | 09:53 pm]
A nice experiment, involving a tank of glycerin with a convection current that suddenly changes direction, suggests what might drive plate tectonics. I think the article and the following video explain it well:
The periodic reversals of the flow in the experiment may be driven by the same mechanism that led plate tectonics to repeatedly gather and split up supercontinents.
Some of the theories in complex systems focus on how many vastly different systems share the same underlying dynamics. Generalising from this seemingly simple experiment to plate tectonics could be seen either as an example of such shared dynamics or as the more old-fashioned art of generalising your findings, depending on where you stand.
The article, however, offers a sobering lesson on just how hard this leap might be. Zhang, one of the authors, is quoted as saying:
People have been studying thermal convection most extensively for the last thirty years. But they studied it without the beads, and they saw no periodic changes
What took them so long when the experiment was so simple? Glycerin, beads and a heater? Professor Whitehead, another researcher trying to recreate plate tectonics, had among other things tried floating chunks of styrofoam. He chimed in to congratulate:
I tried to do this for the last twenty years, so my hat is off to Zhang for the beautiful work he did.
It seems it took decades to find the right combination of components for a three-piece experiment, and not for lack of trying! It's food for thought that many of us in the complexity sciences spend our time building system dynamics models, agent-based models, sandpile avalanche experiments and whatnot, sometimes with far more components, and try to generalise to far more complicated systems, like human society. How do we avoid getting stuck endlessly redesigning our models?
We could automate the design and testing of thousands of models. This is common, but then we need to specify what a "good" model is in a form a computer can understand. In practice this tends to boil down to history matching: emulating, say, the changes in oil prices or crime rates over the last decades. With no other preconception of what a good model is, this scheme risks falling prey to the "curse of dimensionality". In short, we are fooled by models which perform well by chance on the limited dataset.
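The fooled-by-chance effect can be sketched in a few lines of Python. This is a toy illustration, not any particular modelling framework: the "candidate models" below are deliberately dumb random trajectories standing in for automatically generated model variants, and every number is invented. Selecting the best history match out of thousands yields a flattering in-sample fit that the held-out continuation of the series does not honour.

```python
import random

random.seed(42)

# A short "historical" record the models must match (a noisy trend),
# plus a held-out continuation that the selection never sees.
truth = [0.5 * t + random.gauss(0, 1) for t in range(40)]
history, holdout = truth[:20], truth[20:]

def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

# Thousands of candidate "models": each is just an independent random
# trajectory, standing in for automatically generated model variants.
candidates = [[random.gauss(0, 8) for _ in range(40)] for _ in range(5000)]

# History matching: keep the candidate that best fits the record.
best = min(candidates, key=lambda c: mse(c[:20], history))

in_sample = mse(best[:20], history)
out_of_sample = mse(best[20:], holdout)
print(f"in-sample fit (mse): {in_sample:.1f}")
print(f"held-out fit (mse):  {out_of_sample:.1f}")
```

The selected model's in-sample error is far below the candidate average, purely because we searched for it; its error on the held-out data is much worse, which is the curse in miniature.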
The first common remedy is to inject "domain knowledge" into the models: we demand that they include things we know are important in the real-world system, such as convection in the Earth's mantle. But does this sacrifice the belief that the same simple rules underlie seemingly different systems? Or was that belief a theoretical statement, like how a given Turing machine can in principle simulate any given system but doesn't tell us how to achieve that?
Another remedy, implemented in many machine learning systems, is some form of "pruning", or a preference for the smallest models. An automated Occam's razor, if you like. Just like generalising from models, this can be justified both as core content of the scientific method and by the belief that simple rules underlie complex systems. Stephen Wolfram, in his book "A New Kind of Science", takes it to extremes with the simplest possible one-dimensional cellular automata as his complex systems. But complexity behaves like a bulge in a wall-to-wall carpet: smooth out the bulge in one place and it pops up elsewhere. Wolfram's Turing machines, as well as the self-reproducing automata in Conway's Game of Life, have very simple rules and very few types of components, but require an astronomically large number of them.
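To make the "simple rules, many components" point concrete, here is a minimal sketch of one of Wolfram's elementary cellular automata in Python. The rule-number encoding (eight output bits indexed by the three-cell neighbourhood) is the standard one; the width, step count, and choice of Rule 110 are arbitrary illustration choices on my part.

```python
def step(cells, rule):
    """One update of an elementary (1D, two-state, nearest-neighbour)
    cellular automaton, with wrap-around boundaries. `rule` is the
    Wolfram rule number, 0-255."""
    n = len(cells)
    out = []
    for i in range(n):
        # Read the neighbourhood (left, self, right) as a 3-bit index
        # into the rule's output table.
        idx = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        out.append((rule >> idx) & 1)
    return out

# Rule 110: eight bits of "physics", yet proven Turing-complete.
# The simplicity of the rule says nothing about how many cells and
# steps are needed before complex behaviour shows up.
width = 64
cells = [0] * width
cells[width // 2] = 1
for _ in range(20):
    print("".join(".#"[c] for c in cells))
    cells = step(cells, 110)
```

The whole rule fits in one byte; the interesting structure only emerges across a large grid over many steps, which is exactly the carpet-bulge trade-off.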
Has the idea that "many complex systems are governed by the same small set of simple rules" simply missed the point?