How robust do we want/expect our program to be, and in what ways? To what extent do we want to guide/jump-start the evolution?
I decided to check what Wikipedia had to say about “robustness,” and discovered that the main article was not in computer science or economics, but in statistics. We haven’t really been using the word in its statistical sense, but in some ways all robustness is about “dealing with statistics,” if one allows a looser understanding of “statistic.” This got me thinking…
In terms of our desired program, we want it to refer to the memory of past games and the players—one way of looking at this is to say that this bank is a collection of statistics. Another way of looking at it, however, is to say that this bank is just a collection of information, and only becomes a statistic once it is analyzed as such. (In other words: If a collection of information falls in the middle of the forest and there’s no one around to analyze it, does it count as statistics?)
Ignoring the juicy philosophical implications of this question (i.e., the semantic vagueness), I think this applies most directly to how we’re approaching the problem. Consider a group of scientists trying to construct an artificial animal. They want their robot to be able to interact with its environment, so they plan to give it eyes and a brain. In constructing the brain and its program, it would be naïve of them to assume that the raw sensory data from the eye arrives as a series of images. I wouldn’t put it past them to assume this, however, since this aspect of sight is so fundamental that we normally have no idea that we do it.
One might have fun asking the mysterious question “if a TV flashes in the middle of the forest and no eye-brain system is around to experience and process it, does it make an image?” in this case too, but clearly the important thing is that we acknowledge there is more to processing an image than one step of “assessing the data.” There are at least two fundamental steps: an image must first be assembled from the raw data input, and only then can it be analyzed by the ultimate logical process. Recalling what I learned in Lee’s class this fall about the processing of sight, I would actually bet that there are many more steps, involving shape recognition and movement perception.
If you haven’t already figured it out, it seems to me that the transformation of raw data into an image is a process similar to the transformation of raw data into statistics. An example from a hypothetical run of a Push tournament of our market game might involve one program alternating in and out each round. Give a human a table of the round data and this pattern will jump right out, but when the other programs in our tournament look at the same data there is no telling what they’ll get from it.
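To make the example concrete, here is a minimal sketch in Python (the representation and function name are my own invention, not part of any existing system) of how the alternating pattern a human spots instantly could be checked mechanically:

```python
# Toy sketch (hypothetical): detect the "alternating in and out" pattern
# a human would notice immediately in a table of tournament rounds.

def is_alternating(history):
    """Return True if a player's round-by-round participation
    (True = in, False = out) strictly alternates."""
    return all(a != b for a, b in zip(history, history[1:]))

rounds = [True, False, True, False, True, False]
print(is_alternating(rounds))                      # True: the obvious pattern
print(is_alternating([True, True, False, True]))   # False: no strict alternation
```

Note that this checker is brittle by design: a single irregular round makes it report no pattern at all, which is exactly the kind of fragility at issue here.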
There are two things I want to say about this. First, the steps involved in how a human would play in this scenario are apparent: recognition of the pattern, prediction of how the pattern might continue, and assimilation of this information into decision-making. Second, we can see that robustness comes into play on more than one level. If a program cannot detect patterns that contain some errors and outliers, its pattern-seeking is not very robust; yet even an extremely robust pattern-detector could be a weak decision-maker.
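The two levels can be sketched separately. In the following hypothetical Python illustration (the threshold and the game logic are placeholders I made up, not a proposal for the real program), a noise-tolerant pattern detector feeds a prediction step, which in turn feeds a distinct decision step:

```python
# Sketch of the two-level split (all names hypothetical): a noise-tolerant
# pattern detector, a prediction step, and a separate decision-making step.

def alternation_score(history):
    """Fraction of adjacent rounds that flip; 1.0 is a perfect in/out
    pattern, and values near 1.0 tolerate a few errors or outliers."""
    if len(history) < 2:
        return 0.0
    flips = sum(a != b for a, b in zip(history, history[1:]))
    return flips / (len(history) - 1)

def predict_next(history, threshold=0.8):
    """Prediction step: if the pattern is strong enough (robust to some
    noise), guess that the opponent flips again; otherwise guess repetition."""
    if alternation_score(history) >= threshold:
        return not history[-1]
    return history[-1]

def decide(history):
    """Decision step, deliberately separate from detection: play 'in' only
    when we predict the opponent will sit out. (Placeholder game logic.)"""
    return not predict_next(history)

# One outlier round (two 'in's in a row) no longer hides the pattern:
noisy = [True, False, True, False, True, True, False, True]
print(round(alternation_score(noisy), 2))  # 0.86: still counts as a pattern
```

The point of the split is that each piece can be made more or less robust independently: `alternation_score` could be tightened or loosened without touching `decide`, and vice versa.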
I think this distinction between pattern-seeking and ultimate decision-making is fundamentally important to how we go about constructing a program. It seems improbable to me that a practical, manageable system could exist without an explicit pattern-seeking component. Perhaps that is just my human bias limiting my imagination; perhaps we would unlock a new realm of decision-making completely unlike human thought. Nonetheless, it would undoubtedly be much simpler to follow the available model, investigating how we could improve it. Since we have yet to really delve into the research, I think we ought to keep it simple and see what we can do. (If our general goal is to invent a better form of transportation, we should start with what we know instead of trying to reinvent the wheel).
The next question, on which I am less opinionated, is whether we should focus on evolving patterns or decisions. I think it would be most efficient not to develop the two aspects simultaneously, since that would square the improbability of succeeding. This means we would have to build by hand whichever aspect we are not evolving, which will probably be a challenge in its own right. Maybe we can get away without doing this, but right now I can’t see how.