Saturday, November 12, 2011

The Implications of Infinite Resources

Discussion on 11/13/2011, led by Billy

Hey everyone,

I'll be leading the discussion this week. We will pick up the topic Craig brought up last time, but never got around to fully discussing: technology and the future of government.

Here is what Craig wrote two weeks ago. He phrased it nicely so please read this as a refresher:

If you look at history, a variety of goods once only available to the rich are now available to everyone — or at least a much greater portion of the population — due to improved technology.

However, the fundamental role of money has not changed as a result of this innovation. Money is still an economic necessity because there is still scarcity. In spite of all the amenities of the modern era in comparison to earlier times — take the variety of cheaply available food in grocery stores, fresh water and readily available shelter, and so on — there are still many things only available to the richest among us simply because we do not possess the resources to give everyone access. Not everyone can own a yacht, and fewer can own private jets.

The central question of this week's discussion will be whether or not this principle, which has been universal thus far through human history, will ever cease to apply. Is it possible that, with exponentially improving technology and innovations like molecular manufacturing and nanotech, scarcity will actually cease to exist?

Craig

I want to further the discussion by bringing up a few more points. Consider any governmental system: you'll find a central theme of resource distribution. America, for example, relies on capitalism to distribute resources. On the other end of the spectrum, Marxists rely on communism. Why do we need resource distribution at all? That's an easy question to answer: resources are limited. Because resources cannot be distributed equally to everyone (at least up to this point in history), resource distribution is a high priority for any government. This should be obvious: go to any news website and you'll find that half the articles discuss budgets or resource allocation. In fact, it wouldn't be far-fetched to define governments as systems that impose certain resource distribution schemes as "law."

Now, imagine a world where nanotechnology has progressed to a point where we can change materials on the molecular level. In such a world, resources effectively become unlimited. Please think about the following points for the discussion:

  1. In a world of unlimited resources, what would happen to resource distribution? Would it become obsolete, or be transformed into another form?
  2. If resource distribution ceases to be a problem, what would be a government's new top priority?
  3. In this case, would communism (or any other system that you can think of) become a system that could work? Which system do you think would be the most ideal?
  4. Assuming technologies that can radically redefine resource distribution (such as nanotech that can restructure matter at the molecular level) are ultimately attainable with enough effort, should we continue to research such technologies? What would be the societal advantages and disadvantages brought about by their advent?

Saturday, October 29, 2011

The Future of Money

Discussion on 10/30/2011, led by Craig

If you look at history, a variety of goods once only available to the rich are now available to everyone — or at least a much greater portion of the population — due to improved technology.

However, the fundamental role of money has not changed as a result of this innovation. Money is still an economic necessity because there is still scarcity. In spite of all the amenities of the modern era in comparison to earlier times — take the variety of cheaply available food in grocery stores, fresh water and readily available shelter, and so on — there are still many things only available to the richest among us simply because we do not possess the resources to give everyone access. Not everyone can own a yacht, and fewer can own private jets.

The central question of this week's discussion will be whether or not this principle, which has been universal thus far through human history, will ever cease to apply. Is it possible that, with exponentially improving technology and innovations like molecular manufacturing and nanotech, scarcity will actually cease to exist?

Saturday, October 1, 2011

Nondeterminism in Artificial Intelligence

Discussion on 10/2/2011, led by Praneeth

The first computer programs were purely mathematical. ENIAC, for example, was used to calculate artillery firing tables for the US Army. These calculations were deterministic. The same input always resulted in the same answer. A shell fired with distance x, angle y, and velocity z would always have the shell land at position n.
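To make "deterministic" concrete, here is a toy firing-table calculation in Python. This is only a sketch: ENIAC's real tables accounted for drag, wind, and other factors, and the function name and numbers below are mine, not the Army's.

    import math

    def landing_distance(velocity, angle_deg):
        """Range of a projectile in a vacuum: the same inputs give the same answer, every time."""
        g = 9.81  # gravitational acceleration in m/s^2
        angle = math.radians(angle_deg)
        return velocity ** 2 * math.sin(2 * angle) / g

    # Deterministic: this prints the identical value on every run, on any machine.
    print(landing_distance(250.0, 45.0))  # ~6371.05 m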

As computing evolved, the nature of the tasks became less mathematical, but the underlying deterministic model remained. Consider loading a webpage in your browser. A layout engine reads in the HTML and other data that it downloaded from the webserver. The World Wide Web Consortium has hundreds of pages of documents that define exactly how this data should be processed to produce what you see on the screen, down to the last pixel. According to these standards, a given webpage should always render exactly the same way, no matter what browser you use. Rendering a web page is therefore deterministic; a single input (the HTML) gives a single output (the display on your screen).

Obviously, this is not the way the human mind operates. The human mind operates mostly on intuition, accumulated experience that acts as a rough guide and predictor of future actions – the schema. Intuition is nondeterministic in the sense that it is impossible to predict the result of an algorithm without knowing the schema. Since schemas evolve over a lifetime and are the aggregation of millions of events, they are unknowns in computation. If we ignore the schema parameter altogether, then we have a function that takes n arguments and can give any number of different results for those n arguments.
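To picture the schema argument in code, here is a purely illustrative Python sketch (not a model anyone actually uses): the method below appears to take a single argument, but its answer also depends on hidden state accumulated from past calls, so the same visible input can produce different outputs over time.

    class IntuitiveClassifier:
        """Toy model of intuition: output depends on the visible input *and* hidden experience."""

        def __init__(self):
            self.schema = 0.0  # accumulated experience, invisible to the caller

        def judge(self, stimulus):
            # The decision threshold is the schema, which drifts toward recent stimuli.
            decision = "act" if stimulus > self.schema else "wait"
            self.schema = 0.5 * self.schema + 0.5 * stimulus  # experience updates
            return decision

    mind = IntuitiveClassifier()
    print(mind.judge(0.5))  # "act": the schema starts at zero
    mind.judge(2.0)         # a strong experience pulls the schema upward
    print(mind.judge(0.5))  # "wait": same visible input, different answer

To an observer who cannot see the schema, the second result is unpredictable, which is exactly the sense of nondeterminism described above.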

On Sunday, we will discuss the advantages and disadvantages of nondeterministic approaches to AI across three categories. (What we will not do is talk about specific algorithms and strategies that are used in artificial intelligence programming.)

Adaptability

When IBM wrote the software for Deep Blue, it took a brute force approach: Deep Blue would consider 200 million possible positions per second and choose the move which yielded the highest probability of winning. The only problem was that Deep Blue could not predict its opponent's moves. The most it could do was guess which move its opponent was most likely to make. Thus, the problem of winning a game of chess became nondeterministic. Garry Kasparov took advantage of this and defeated Deep Blue several times by making unexpected moves. IBM responded with increased computing power, allowing Deep Blue to investigate many more of the unlikely paths that the game could take. But this solution does not scale. There are a finite number of outcomes for a chess game. The same cannot be said for most aspects of the real world. How can we create computer programs that can adapt to unexpected situations?
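Before moving on, here is the brute-force idea in miniature: a complete minimax search over a toy take-away game, written in Python. This is only a sketch of the general technique; Deep Blue's actual evaluation function, pruning, and custom hardware were vastly more elaborate, and the game below is not chess.

    def minimax(stones, maximizing):
        """Brute force: score every possible continuation, then pick the best move.

        Toy game: players alternately take 1 or 2 stones; whoever takes the last
        stone wins. +1 means the maximizing player wins with perfect play.
        """
        if stones == 0:
            # The player who just moved took the last stone and won.
            return -1 if maximizing else 1
        scores = [minimax(stones - take, not maximizing)
                  for take in (1, 2) if take <= stones]
        return max(scores) if maximizing else min(scores)

    # From 3 stones the player to move loses against perfect play; from 4 they win.
    print(minimax(3, True), minimax(4, True))  # -1 1

The catch, as noted above, is that this only works because the game tree is finite; exhaustive enumeration does not scale to the open-ended situations real programs face.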

Dependability

Today’s microtrading programs conduct transactions in fractions of a second. Transactions are happening so fast that the latency of sending data over the wire is becoming a problem. What happens if Citigroup's program makes a massive trade at 10:00:53 while Goldman Sachs's program is still making decisions based on data it downloaded at 10:00:48? Creators of microtrading programs have to account for the fact that they may not be dealing with the most accurate or up-to-date data.

Microtrading programmers and other AI programmers will have to consider divergence: what is the maximum amount of error or ambiguity that can be in the input while still allowing the program to compute the correct result? This is an extension of adaptability. An artificial intelligence program will not always have access to a complete data set, but it will still have to be able to produce an answer.
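As a sketch of what accounting for divergence might look like in practice (the thresholds, field names, and function below are invented for illustration, not taken from any real trading system):

    import time

    MAX_DATA_AGE = 2.0   # seconds of staleness we are willing to tolerate
    MAX_SPREAD = 0.05    # maximum price ambiguity (bid/ask spread) we will act on

    def decide_trade(quote):
        """Trade only when the data is fresh and unambiguous enough; otherwise abstain."""
        age = time.time() - quote["timestamp"]
        spread = quote["ask"] - quote["bid"]
        if age > MAX_DATA_AGE or spread > MAX_SPREAD:
            return "hold"  # divergence too large: refuse to act on questionable data
        midpoint = (quote["ask"] + quote["bid"]) / 2
        return "buy" if midpoint < quote["target_price"] else "sell"

    stale_quote = {"timestamp": time.time() - 5.0, "bid": 99.99, "ask": 100.01,
                   "target_price": 100.50}
    print(decide_trade(stale_quote))  # "hold": the quote is five seconds old

The interesting design question is where to set those thresholds, and whether "refuse to act" is even an acceptable answer for a program that must always produce one.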

Predictability (or Reproducibility)

Many programs rely on the fact that deterministic algorithms always return the same result under the same conditions. Thus, one program can predict the output of another. This is a crucial part of what is known as unit testing, the process of ensuring that all parts of a program work. A unit test is a program which runs the program of interest with hundreds of different sets of inputs and makes sure that the output is always correct. But in a nondeterministic model, a program does not always give the same output. How can we test such programs?
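One partial answer that exists today is statistical or property-based testing: instead of asserting a single expected output, the test runs the nondeterministic program many times and checks that an invariant holds on every run and that the aggregate behaviour stays within a tolerance. A minimal Python sketch, where noisy_estimate stands in for any nondeterministic component:

    import random

    def noisy_estimate():
        """Stand-in for a nondeterministic program: the answer varies from run to run."""
        return 10.0 + random.gauss(0, 0.1)

    def test_noisy_estimate():
        # We cannot assert one exact output, so we assert properties over many runs.
        results = [noisy_estimate() for _ in range(1000)]
        assert all(r > 0 for r in results)   # an invariant that must never break
        mean = sum(results) / len(results)
        assert abs(mean - 10.0) < 0.05       # aggregate behaviour stays in bounds

    test_noisy_estimate()
    print("all checks passed")

The obvious limitation is that such a test only bounds the probability of a violation; it can never rule one out, which is exactly the worry raised below.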

If this problem seems too abstract, consider this: Isaac Asimov proposed the Three Laws of Robotics, which robots would be hardcoded to follow to ensure that they never became violent towards humans. If we implement artificial intelligence with nondeterministic algorithms, how can we be sure that the programs will always follow the Three Laws? Even if they follow the Laws a thousand times, they could easily break them on the one thousand and first run.

The difficulty of testing is a problem even today. How can we ensure that microtrading programs trading on the NYSE don’t cause a stock market crash?

Saturday, September 24, 2011

Religion and Transhumanism

Discussion on 9/25/2011, led by Mark

Like Brenda, I've condensed my main talking points into three short bullets for your consideration:

  1. The continued evolution of technology without a corresponding evolution of faith leads many of the faithful to reject religion. Valid questions include what goods and evils follow from a decreasingly religious society.
  2. Where does belief in science and technology become religion in and of itself? Example: Machine Cults in Warhammer 40,000
  3. When and how do religion and technology actually coincide, and why is such a unity more acceptable among Eastern religions than Western ones? Examples: Christian Transhumanism, Buddhism, Shintoism

Religion is a very touchy subject, and it carries the risk of devolving into name-calling. I'd like you to bring an open mind to this topic in particular, as we will address it as an issue rather than in the context of anyone's particular beliefs.

Saturday, September 17, 2011

Obstacles to the Singularity: Information Overload, Technology Diffusion, and Physical Limits

Discussion on 9/18/2011, led by Brenda

Hey guys! For the discussion, I thought we could briefly go over the concept of the singularity and some of the limitations people have proposed.

So, as most of you probably know, the singularity is the point at which global technological and scientific development is at its fastest. At this point, greater-than-human intelligence will emerge through technological means. Since human intellect would be inferior to this new intelligence, a brain-computer interface or something similar is proposed to emerge as well. The term "Singularity" was coined by Vernor Vinge to describe the point at which artificial intelligence more powerful than humans is created, causing changes that will be difficult for us to predict. While many intelligent people accept the idea that a technological singularity will take place at some point in the future, there are still some potential holes in this theory.

  1. Information Overload

    The amount of relevant information on any particular topic is growing at a staggering speed. As the amount of information grows, new scientists will have to learn all the previously discovered information before doing further research. The theoretical limit is that in the long run we would have to spend practically all of our productive lives learning and teaching instead of doing research or other substantive work, and that each discipline would become extremely specialized, with huge language and methodology barriers between disciplines. Eventually there will be so much to learn that it will be hard to absorb it all within a lifetime and still have time left for new research.
  2. Technology Diffusion

    Knowledge and skills are unevenly distributed. Right now, knowledge diffuses from people who know a lot to people who don't, but this diffusion is less than optimal. As the rate of technological innovation increases exponentially, a slower rate of technology diffusion will make the differences in knowledge across the population greater and greater, until the gap becomes a barrier to communication and diffusion stops. This disparity can be seen today: while a great deal of research takes place in developed nations, in third-world countries scientific journals are essentially nonexistent. Similarly, it is possible that in the future only a very small part of humanity will experience the singularity, creating a huge knowledge gap. Furthermore, cultural barriers may slow the transmission of knowledge between regions, leaving huge gaps in the spread of technology across different places and cultures.
  3. Physical Limitations

    The speed of light is the most obvious limitation. Although it is irrelevant on the human scale, it is very relevant for computers and electronics. The speed of light is already a limitation for the internet: even in a vacuum, a signal needs roughly 0.13 seconds to travel around the world (see the quick calculation below).
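For reference, here is that back-of-the-envelope latency calculation in Python (the fiber figure assumes light travels at roughly two thirds of c in optical fiber):

    EARTH_CIRCUMFERENCE_M = 40075000   # meters, along the equator
    SPEED_OF_LIGHT_M_S = 299792458     # meters per second, in a vacuum

    # Best case: a signal circling the globe at the speed of light in a vacuum.
    vacuum_latency = EARTH_CIRCUMFERENCE_M / SPEED_OF_LIGHT_M_S
    # In optical fiber light travels at roughly 2/3 of c, so real links are slower.
    fiber_latency = vacuum_latency * 3 / 2

    print(round(vacuum_latency, 3), round(fiber_latency, 3))  # 0.134 0.201

No routing or processing delay is included, so real round-the-world latency is higher still.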

How serious do you think these limitations are? Do you think we can overcome them?