Discussion on 10/2/2011, led by Praneeth
The first computer programs were purely mathematical. ENIAC, for example, was used to calculate artillery firing tables for the US Army. These calculations were deterministic: the same input always produced the same answer. For a given angle y and velocity z, the shell would always land at position n.
As computing evolved, the nature of the tasks became less mathematical, but the underlying deterministic model remained. Consider loading a webpage in your browser. A layout engine reads in the HTML and other data it has downloaded from the web server. The World Wide Web Consortium (W3C) publishes hundreds of pages of specifications that define exactly how this data should be processed to produce what you see on the screen, down to the last pixel. According to these specifications, a given webpage should render identically no matter which browser you use. Rendering a web page is therefore deterministic; a single input (the HTML) gives a single output (the display on your screen).
Obviously, this is not the way the human mind operates. The mind runs mostly on intuition: accumulated experience that acts as a rough guide and predictor of future action – the schema. Intuition is nondeterministic in the sense that it is impossible to predict the result of the algorithm without knowing the schema. Since schemas evolve over a lifetime and are the aggregation of millions of events, they are unknowable parameters in the computation. If we ignore the schema parameter altogether, we are left with a function that takes n arguments and can return any number of different results for those same n arguments.
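The schema-as-hidden-parameter idea can be sketched in code. This is a toy illustration, not a model of real cognition: the names `decide`, `decide_observed`, and the `confidence` field are all invented for the example, and the hidden schema is simulated with a random number.

```python
import random

def decide(stimulus, schema):
    # Fully deterministic: the same (stimulus, schema) pair
    # always yields the same choice.
    return "approach" if schema["confidence"] * stimulus > 0.5 else "avoid"

def decide_observed(stimulus):
    # From the outside, the schema is unknown (simulated here at random),
    # so the same stimulus can yield a different result on every call.
    hidden_schema = {"confidence": random.random()}
    return decide(stimulus, hidden_schema)
```

With the schema in hand, `decide` is an ordinary deterministic function; drop the schema argument and `decide_observed` looks like a function of one argument that returns different answers for the same input.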
On Sunday, we will discuss the advantages and disadvantages of nondeterministic approaches to AI across three categories: adaptability, divergence, and predictability. (What we will not do is discuss the specific algorithms and strategies used in artificial intelligence programming.)
Adaptability
When IBM wrote the software for Deep Blue, it took a brute-force approach: Deep Blue would consider 200 million possible outcomes per second and choose the one that yielded the highest probability of winning. The only problem was that Deep Blue could not predict its opponent's moves. The most it could do was guess which move its opponent was most likely to make. Thus, the problem of winning a game of chess became nondeterministic. Garry Kasparov took advantage of this and defeated Deep Blue several times by making unexpected moves. IBM responded with increased computing power, allowing Deep Blue to investigate many more of the unlikely paths the game could take. But this solution does not scale. A chess game has a finite number of outcomes; the same cannot be said for most aspects of the real world. How can we create computer programs that can adapt to unexpected situations?
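The brute-force idea can be sketched with minimax search: exhaustively score every reachable outcome, assume the opponent picks the move worst for you, and choose the move with the best worst-case score. The toy game tree below is invented for illustration; Deep Blue's actual search and evaluation were vastly more sophisticated.

```python
def minimax(node, maximizing):
    """Score a position by exhaustively searching the game tree.

    A node is either a leaf (a numeric outcome score) or a list of
    child nodes (the moves available at that choice point).
    """
    if isinstance(node, (int, float)):
        return node  # leaf: the value of this final outcome
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Each inner list is the opponent's reply to one of our three moves;
# leaves are outcome scores from our point of view.
game_tree = [[3, 5], [2, 9], [0, 7]]

# Pick the move whose worst-case reply is best for us.
best_move = max(range(len(game_tree)),
                key=lambda i: minimax(game_tree[i], maximizing=False))
```

Here the first move guarantees at least 3 no matter what the opponent does, so `best_move` is 0, even though the second move contains the single highest outcome (9). The opponent's actual choice remains unpredictable; the search only bounds the damage.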
Divergence
Today’s microtrading programs conduct transactions in fractions of a second. Transactions are happening so fast that the latency of sending data over the wire is becoming a problem. What happens if Citigroup's program makes a massive trade at 10:00:53 and Goldman Sachs's program is making decisions based on data it downloaded at 10:00:58? Creators of microtrading programs have to account for the fact that they may not be dealing with the most accurate or up-to-date data.
Microtrading programmers and other AI programmers will have to consider divergence: how much error or ambiguity can the input contain while still allowing the program to compute the correct result? This is an extension of adaptability. An artificial intelligence program will not always have access to a complete data set, but it will still have to be able to produce an answer.
Predictability (or Reproducibility)
Many programs rely on the fact that deterministic algorithms always return the same result under the same conditions. Thus, one program can predict the output of another. This is a crucial part of what is known as unit testing, the process of verifying that each part of a program works as intended. A unit test is a program that runs the program of interest with hundreds of different sets of inputs and makes sure the output is always correct. But in a nondeterministic model, a program does not always give the same output. How can we test such programs?
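The contrast can be made concrete. A deterministic function supports exact assertions; a nondeterministic one can only be tested statistically, by running many trials and checking that the aggregate falls in a plausible band. The biased-coin example below is invented for illustration, and note that the statistical test can still fail occasionally by pure chance.

```python
import random

def deterministic_double(x):
    # Same input, same output, every time.
    return 2 * x

def biased_coin(p=0.7):
    # Nondeterministic: returns 1 with probability p, else 0.
    return 1 if random.random() < p else 0

# Exact unit test: either always passes or always fails.
assert deterministic_double(21) == 42

# Statistical test: no single run of biased_coin() has a "correct" answer,
# so we assert only that the long-run frequency is near the expected 0.7.
trials = 10_000
heads = sum(biased_coin() for _ in range(trials))
assert 0.65 < heads / trials < 0.75
```

The deterministic test gives certainty; the statistical test gives only confidence, which is exactly the predicament the question above describes.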
If this problem seems too abstract, consider this: Isaac Asimov proposed the Three Laws of Robotics, which robots would be hardcoded to follow to ensure that they never became violent toward humans. If we implement artificial intelligence with nondeterministic algorithms, how can we be sure that the programs will always follow the Three Laws? Even if they obey the Laws a thousand times, they could easily break them on the one thousand and first run.
The difficulty of testing is a problem even today. How can we ensure that microtrading programs trading on the NYSE don’t cause a stock market crash?