Friday, June 27, 2008
Laplace's hypothesis was based on principles of precision in nature. In keeping with classical mechanics, the overriding assumption was that it is possible to know simultaneously the position and momentum of a particle exactly. This precision was critical for the Demon to be able to compute with the data. What drove LD out of a job though was Heisenberg's uncertainty principle.
In quantum mechanics, the position and momentum of particles no longer have precise values. The best assertion one can make about a particle is that there's a probability it exists at a particular location, or has a particular momentum. Heck, at the particle level, this becomes obvious when we consider that an effort to observe a particle will involve shining light (a stream of photons) on it. These photons themselves are particles (yes, yes, they're waves too) that will disrupt the peace of the particle we're trying to observe.
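To put a number on that haziness: the uncertainty principle bounds the product of position and momentum uncertainties from below. A minimal sketch (the confinement width below is an arbitrary illustration, not from any particular experiment):

```python
# Heisenberg's uncertainty principle: dx * dp >= hbar / 2.
# Pin down a particle's position and see the minimum momentum
# uncertainty that quantum mechanics forces on it.

HBAR = 1.054571817e-34  # reduced Planck constant, J*s

def min_momentum_uncertainty(dx):
    """Smallest possible momentum uncertainty (kg*m/s) for position uncertainty dx (m)."""
    return HBAR / (2 * dx)

# An electron pinned down to within roughly an atom's width (~1e-10 m):
dp = min_momentum_uncertainty(1e-10)
# dp comes out on the order of 5e-25 kg*m/s - for an electron
# (~9.1e-31 kg), that's a velocity uncertainty of several hundred
# km/s. Hardly negligible.
```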
It's at this point in the Q.physics-101 lecture that even the jocks at the back of the class wake up and listen - Schrödinger's cat: A cat is placed in a sealed box along with a flask containing cyanide gas. In addition, a tiny amount of a radioactive substance is placed in the box, alongside a Geiger counter. The amount of radioactive substance is small enough that in an hour, there's as much chance of it decaying as not. If the Geiger counter detects radiation, it triggers a mechanism that releases the cyanide gas.
In the real world, if we were to look in the box after an hour, the cat would either be alive or dead. Binary. In the quantum world though, the cat is simultaneously alive and dead, in a quantum superposition of these two states. It's at this point that the Laplace Demon hypothesis falls off the rails. The demon, at the particle level, cannot know for sure positions and momenta. An error in approximation at the particle level will blow up when extrapolated to the level of objects at the human scale.
The short of it is that quantum physics throws a wrench into the Newtonian/Laplace Demon machine and messes up the impetus that the determinism juggernaut had built up.
The long of it though is where it gets most interesting: Heisenberg's postulate looks less airtight if the impossibility of precisely observing a particle is merely a human limitation. It is our inability to observe or measure a particle's characteristics without disturbing it that leads us to conclude that the particle's position is hazy within a probability cloud.
What do we know about the particle at a moment when we aren't observing it? Could it be, is it possible that the particle is at point x, y, z exactly? What if the demon knows this particle and its every last characteristic? The future won't be much of a shock to LD then.
This law - the second law of thermodynamics - definitely helped the 'demon' construct along, as a means of understanding how systems change with time.
*Note that James Clerk Maxwell showed that this law did not in fact apply across the board. Kick in the groin for L's demon. Maxwell did this by going back to the premise of Brownian motion, a basic model of how particles suspended in a liquid move randomly. It was already established that if this liquid was heated, the particles would move about faster, but still in a random manner.
In such a situation, it was conceivable that the hotter particles could at some point in time accumulate in one section of the liquid's container (since the motion of these particles is random, this is a possibility). In that case, there would be a kink in the system's otherwise steadily increasing entropy-over-time graph.
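The fluctuation argument can be put in numbers. A toy sketch, assuming each particle is independently and equally likely to sit in either half of the container:

```python
# Nothing forbids all the fast particles from wandering into one half
# of the container - it's just absurdly unlikely. If each of N
# particles is equally likely to be in either half, the chance that
# all N are in the left half at the same instant:

def prob_all_one_side(n_particles):
    return 0.5 ** n_particles

print(prob_all_one_side(10))    # 0.0009765625 - roughly 1 in 1000
print(prob_all_one_side(100))   # ~8e-31 - already hopeless
# For a mole of particles (~6e23), the probability is so small the
# second law is never violated in practice - but it's a statistical
# law, not an absolute one. That's the kink.
```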
Thursday, June 26, 2008
A final point on determinism - it says there's no such thing as free will. Everything we do is the result of something else. If we know those 'something-elses', i.e. those causal factors, we can predict what will happen, what the next person will do and what this week's winning numbers are at the 649.
There's a school of thought, called determinism, that disregards the notion of randomness in any event. Every event that happens is the result of some causal factors or forces that went before it. Effectively, there are no coincidences. Determinism says that if you've won the lottery on your birthday, there's a reason for it.
One way to rationalize this is by looking at the usual keystones of probability: rolling dice and flipping coins. Rolling a six is usually considered to have odds of 1 in 6. Obvious, right? But consider that the outcome (i.e. the six you've rolled) was influenced by a bunch of different factors - the weights of the different faces of the die, the force with which it was thrown, the angle of impact, the elasticity of the die and the contact surface, etc.
Hypothetically, if these factors had been measured beforehand, then just understanding the interaction between these factors would mean that the rolled six could've been predicted.
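One way to picture this is with a seeded random number generator standing in for the physics. The seed below is a stand-in for the measured throw conditions, not a real physical model:

```python
import random

# A deterministic 'die': once the initial conditions (here, a seed
# standing in for throw force, angle, elasticity, etc.) are fixed,
# the 'random' outcome is fully determined.

def roll(initial_conditions):
    rng = random.Random(initial_conditions)  # same conditions -> same roll
    return rng.randint(1, 6)

# A demon that knows the conditions predicts the roll every time:
assert roll(42) == roll(42)
# To us, ignorant of the conditions, successive throws look random:
print([roll(s) for s in range(10)])
```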
Monday, June 16, 2008
Economists have their intricate mathematical models, politicians - their gauging of the collective sentiment and gamblers, their gut. All of these decision-making devices, fuzzy, neural, artificial, intuitive, etc are used on a daily basis to predict the future. That said, the world's population would all be paupers or trillionaires if any of these could boast consistency.
Given that such is not the case - i.e. any decision-logic machinery can claim only so much accuracy - it's time to start looking at models that are organic: letting reality and recallable (recent) history be the major inputs to your guesstimating. As an example, if the roulette ball lands on the 00 six times in a row, and you're unsure where to bet next ("hmm....even or odd?"), just go with the 00. Chances are the house is playing dirty. The Mahabharata would've been a shade less vengeful had Yudi figured this out before betting the kitchen sink.
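For what it's worth, the 00-streak hunch can be quantified. A quick sketch, assuming a standard 38-pocket American wheel:

```python
# How suspicious are six 00s in a row? On a fair American roulette
# wheel (38 pockets, one of them 00), the odds of that exact streak:

p_single = 1 / 38
p_streak = p_single ** 6
print(p_streak)  # ~3.3e-10, about 1 in 3 billion
# Against odds like that, 'the house is playing dirty' (a biased or
# rigged wheel) is the far likelier hypothesis - which is exactly why
# recent history should feed your next guess.
```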
Of course, this isn't revolutionary. It's part of any adaptive system in use today and all connectionist models rely on history (wiki this stuff). Here's another element to help focus the predictions - Your best guess for the future. Toss this element into the mix of inputs (i.e. along with what's happened already). An instance of where this might come in handy is with work flow automation at a call-center. If an extraordinary event has happened (an earthquake, discovery of a faulty part in the computers you sell, etc), it's safe to surmise that the future volume of calls will need more than the past week's call-volume as an input to forecast what's coming.
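A toy sketch of the call-center idea - the function name, numbers, and the crude multiplier are all invented for illustration:

```python
# Forecasting call volume: history alone vs. history plus a
# best-guess about the future.

def forecast_calls(recent_daily_volumes, extraordinary_event=False, event_multiplier=3.0):
    baseline = sum(recent_daily_volumes) / len(recent_daily_volumes)
    # Fold in the forward-looking input: an earthquake or a product
    # recall means history alone will badly undershoot.
    return baseline * event_multiplier if extraordinary_event else baseline

history = [120, 130, 115, 125, 110]  # last week's calls per day
print(forecast_calls(history))                            # 120.0 - business as usual
print(forecast_calls(history, extraordinary_event=True))  # 360.0 - recall announced
```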
The last decision-making factor is the cornerstone of AI. Is there any information that can be derived from the interaction of your inputs? An instance of this is a loan officer assigning points to a prospective borrower based on standard criteria like their age (assume age directly proportional to points) and whether they rent or mortgage a house (assume renting gets higher points than mortgaging). This would mean that an elderly renter would be a better bet for the loan officer than a young house-owner. In the real world though, that conclusion is counter-intuitive - but you wouldn't get that from the discrete point-assigning system. And this is only two-dimensional; the logical progression would be to extend this to n dimensions whose interactions can paint a picture.
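The loan example can be sketched directly - all criteria, weights, and thresholds below are invented to mirror the paragraph, not a real scoring system:

```python
# The additive point system from the loan example: each input scores
# on its own, independent of the others.

def additive_score(age, renter):
    points = age                    # older -> more points
    points += 30 if renter else 0   # renting scores higher than mortgaging
    return points

# Elderly renter vs. young homeowner - the additive system prefers the renter:
print(additive_score(75, renter=True))    # 105
print(additive_score(30, renter=False))   # 30

# Now let the inputs interact: an elderly renter (decades without
# building equity) is a red flag neither input reveals on its own.
def interacting_score(age, renter):
    score = additive_score(age, renter)
    if renter and age > 65:
        score -= 80  # the interaction term flips the ranking
    return score

print(interacting_score(75, renter=True))    # 25
print(interacting_score(30, renter=False))   # 30
```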
My point though is to be able to spot patterns when theoretically none should exist - and to be an intelligent human, the macro-decider on when a decision needs to be made via an intelligent machine and when only in collaboration with one.