This blog documents the “evolution of software agents for economic games” project in the Spring 2011 course on Research in Artificial Intelligence at Hampshire College.
March 31, 2011
This week I’ve been working on hard-coding most of the pattern-recognition tools for the GP. I haven’t had much success so far, but hopefully once the tools are simple enough for the GP to use, I’ll see a significant improvement in its performance against random and semi-random AIs.
Fred also sent out a paper that he thinks could be really useful in figuring out our next couple of steps — I’m reading it right now. The group is planning to meet this Saturday to share our progress.
March 24, 2011
Alternate in and out every other round
Stay out until a specified round, then go in
Go in on rounds whose number satisfies a function
Go in if the round number satisfies [function] (for example, if the round number is in the Fibonacci sequence)
Basic strategy, but change strategy after losing a certain amount of money
Stay out the first round; go in if the number of players who went in last round < market limit
Stay out until a certain number of consecutive rounds of unsaturated market
Predict the change in # of players in/out by averaging all preceding changes
Predict the change in # of players in/out by using the median of preceding changes
Determine the likelihood of each player going in (independent of round number)
Using Past Games:
Looking for patterns in how individuals play in different games (for example, player 3 always goes in past round 4)
Looking for group patterns (are there certain rounds that consistently have unsaturated markets?)
“Reverse Psychology” (doing the opposite of one’s hunch because others might think along the same lines)
Analysis only sways the probability of going in or out
Special Exceptions (add these to occasionally override other processes):
After a big loss, play more conservatively (and the opposite after a big win)
If someone else gets lucky, copy them
“Few people have been going in, I should hop in”
With different forms of analysis available, the agent decides based on all of them.
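Several of these ideas are concrete enough to sketch in code. The project itself is being written in Clojure/Push, but as an illustration here is a rough Python sketch of the alternating strategy, the market-limit strategy, and the mean/median change-prediction idea; the function names, the market limit, and the history format (a list of per-round entrant counts) are all hypothetical.

```python
from statistics import mean, median

MARKET_LIMIT = 5  # hypothetical market capacity

def alternate(round_num):
    """Alternate in/out every other round (in on even-numbered rounds)."""
    return round_num % 2 == 0

def under_limit_last_round(round_num, history):
    """Stay out the first round; go in if last round's entrants < market limit."""
    if round_num == 0:
        return False
    return history[-1] < MARKET_LIMIT

def predict_entrants(history, use_median=False):
    """Predict next round's entrant count from the mean (or median)
    of all preceding round-to-round changes."""
    if len(history) < 2:
        return history[-1] if history else 0
    changes = [b - a for a, b in zip(history, history[1:])]
    step = median(changes) if use_median else mean(changes)
    return history[-1] + step
```

For example, with a history of [3, 5, 4, 6] entrants, the round-to-round changes are [2, -1, 2], so the mean-based prediction is 7.0 and the median-based prediction is 8.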
March 10, 2011
How robust do we want/expect our program to be, and in what ways? To what extent do we want to guide/jump-start the evolution?
I decided to check out what Wikipedia had to say about “robustness,” and discovered that the main article was not in computer science or economics, but in statistics. We haven’t really been using the term to refer to statistics, but in some ways all robustness refers to “dealing with statistics,” if one allows for a looser understanding of “statistic.” This got me thinking…
In terms of our desired program, we want it to refer to the memory of past games and the players—one way of looking at this is to say that this bank is a collection of statistics. Another way of looking at it, however, is to say that this bank is just a collection of information, and only becomes a statistic once it is analyzed as such. (In other words: If a collection of information falls in the middle of the forest and there’s no one around to analyze it, does it count as statistics?)
Ignoring the juicy philosophical implications of this question (i.e., the semantic vagueness), I think this applies most in terms of how we’re approaching this project. Consider a group of scientists trying to construct an artificial animal. They want their robot to be able to interact with its environment, so they plan to give it eyes and a brain. In constructing the brain and its program, it would be naïve of them to assume that the raw sensory data from the eye would be interpreted as a series of images. I wouldn’t put it past them to assume this, however, since this aspect of sight is so fundamental that we don’t normally have any idea that we are doing it.
One might have fun asking the mysterious question “if a TV flashes in the middle of the forest and no eye-brain system is around to experience and process it, does it make an image?” in this case too, but clearly the important thing is to acknowledge that there is more to processing an image than one step of “assessing the data.” There are at least two fundamental steps: an image must be assembled from the raw data input, and only then can it be analyzed by the ultimate logical process. Recalling what I learned in Lee’s class this fall about the processing of sight, I would bet that there are actually many more steps, involving shape recognition and movement perception.
If you haven’t already figured it out, it seems to me that the transformation of raw data into an image is a process similar to the transformation of raw data into statistics. An example from a hypothetical run of a Push tournament of our market game might involve one program alternating in and out each round. Give a human a table that includes this data and the pattern will jump right out, but when the other programs in our tournament look at the same data, there is no telling what they’ll get from it.
There are two things I want to say about this. First, the steps involved in how a human would play in this scenario are apparent: recognition of the pattern, prediction of how the pattern might continue, assimilation of this information into decision-making. Second, we can see that robustness comes into play on more than one level here. If a program cannot detect patterns that involve some errors and outliers, it does not have very robust pattern-seeking; yet, even an extremely robust pattern-detector could be a weak decision maker.
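To make the split between pattern-seeking and decision-making concrete, here is a small Python sketch (hypothetical; the real system would live in Clojure/Push) of a robust pattern-seeker that scores how alternating a player’s in/out history is, tolerating some errors and outliers, with a separate decision step that only acts when the score is high enough.

```python
def alternation_score(history):
    """Pattern-seeking step: fraction of consecutive moves that flip
    (1.0 = perfectly alternating).

    `history` is a list of booleans: True = went in, False = stayed out.
    """
    if len(history) < 2:
        return 0.0
    flips = sum(a != b for a, b in zip(history, history[1:]))
    return flips / (len(history) - 1)

def predict_next(history, threshold=0.8):
    """Decision step: if the history looks mostly alternating (allowing
    some outliers), predict the opposite of the last move; otherwise
    report no pattern so other strategies can take over."""
    if alternation_score(history) >= threshold:
        return not history[-1]
    return None  # no pattern detected
```

Lowering `threshold` makes the pattern-seeker more robust to noisy histories, but also more likely to “detect” patterns that aren’t really there — the same trade-off the decision-maker then has to cope with.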
I think this distinction between pattern-seeking and ultimate decision-making is fundamentally important to how we go about constructing a program. It seems improbable to me that a practical, manageable system could exist without an explicit pattern-seeking component. Perhaps that is just my human bias limiting my imagination; perhaps we would unlock a new realm of decision-making completely unlike human thought. Nonetheless, it would undoubtedly be much simpler to follow the available model, investigating how we could improve it. Since we have yet to really delve into the research, I think we ought to keep it simple and see what we can do. (If our general goal is to invent a better form of transportation, we should start with what we know instead of trying to reinvent the wheel).
The next question, on which I am less opinionated, is whether we should focus on evolving pattern-seeking or decision-making. I think it would be most efficient not to develop the two aspects simultaneously, since that would square the improbability of succeeding. This means we would have to build by hand the aspect we are not evolving, which will probably be a challenge on its own. Maybe we can get away without doing this, but right now I can’t see how.
March 3, 2011
This week I’ve been working on programming a multi-player Market Entry Game. It is two bugs away from being finished. They should be fixed by Saturday, probably sooner…
I couldn’t figure out anything useful to program while Micah and Fred were working on the core game functionality in Clojure (presumably there will be stuff to help out with soon!), so I took this week to find and read articles on GP coevolution, on what we should look for in our initial tests, and on what we might expect in those tests versus possible tests against programmed or random strategies. I also looked into more complex economic games — namely, artificial stock markets that were created for similar agent tests.
Some interesting links:
February 24, 2011
Last week we brainstormed ideas for the actual initial game we were going to use and ended up deciding to go with the market game. Fred found some really interesting articles on agent implementation in social systems, and we started to think about how to implement agents/learning AIs for the GP to compete against. In class, we decided that the learning AIs might be too difficult to program and that we should figure out an alternative so we could focus our efforts on the GP itself — maybe still the market game with random or pre-determined strategies, or maybe a new game altogether, at least for learning Clojure.
February 17, 2011
We got my laptop all set up with the needed software: everything except lein (which I didn’t need this week, as I was just focusing on learning the language).
I’ve read a couple of chapters of the PDF of Programming Clojure and I’ve read through/watched a few of the tutorials that Lee emailed me. I have yet to really try writing code on my own, but I’m confident I’ll be able to do that this week.
February 10, 2011
I spent a lot of this week figuring out how GitHub does version control and getting it to play nice with NetBeans/lein. I have GitHub itself working, but I’m still not entirely sure that Leiningen is happy — I guess we’ll see. Then I forked clojush and looked at the code for a while. Finally, with the goal of just writing the simplest program that works, I took odd.clj and recreated it from scratch, which was helpful in understanding how Push works and how to use Push through Clojure. I still don’t entirely get it, though, and I wasn’t able to meet with the group as often as I’d like — I definitely have to work on that next week.
So far it’s been a slow start. I downloaded a slew of programs, plugins, and such, and I think I’ve gotten them all installed correctly on Windows 7, which was an experience in itself. I still need to figure out exactly what everything is for and how to use it.
Since I was gone all this weekend I haven’t been able to spend as much time on this, but I still started in on reading through the links Lee emailed me last week. I have yet to start writing code, but I plan on getting going with that this weekend.