### Idea for an improved GA

I just had a slick idea for an improved genetic algorithm. I have not read about this anywhere, but I'm probably not the first to think of it:

*Lay a lattice over the surface map you are working with and solve only for points on the lattice to get a good idea of which hills are worth climbing at finer precision. This should stomp the performance of a vanilla, randomly seeded GA.*
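A minimal sketch of the idea in Python; the toy fitness surface, the bounds, and the `top_k` seed count below are all invented for illustration:

```python
import itertools

def lattice_seeds(fitness, bounds, steps, top_k=5):
    """Evaluate `fitness` at every point of a regular lattice over
    `bounds` and return the `top_k` best points as hill-climbing seeds.

    bounds: one (low, high) pair per dimension
    steps:  number of lattice points per dimension
    """
    axes = [
        [lo + i * (hi - lo) / (steps - 1) for i in range(steps)]
        for lo, hi in bounds
    ]
    # Lower fitness is better in this toy setup.
    scored = sorted(itertools.product(*axes), key=fitness)
    return [list(p) for p in scored[:top_k]]

# Toy 2D surface with two basins; the lattice cheaply locates both.
surface = lambda p: min((p[0] - 1) ** 2, (p[0] + 2) ** 2 + 0.5) + p[1] ** 2
seeds = lattice_seeds(surface, bounds=[(-4, 4), (-4, 4)], steps=9)
```

Each returned seed is a lattice point worth refining, so the GA (or a plain hill climber) can start from the promising basins instead of from random noise.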

If you're curious about genetic algorithms or you just want to play with one, you can check out an old applet I made here. There is also good information in the genetic algorithm entry on Wikipedia. GA applications in trading system development are everywhere.

## 6 Comments:

Jon - that certainly depends on the variability of the surface you are working with. Think about an environment that has some seriously steep peaks that your evenly distributed lattice misses (in a valley somewhere). Also, you are assuming a 2D landscape, right? Or am I missing something?

Eric,

Missing those sharp peaks would actually help you avoid curve fitting if you are optimizing trading system parameters with the GA. You want to catch the broad peaks because that is where the robust parameters are.

The lattice can be extended into as many dimensions as you want. Think of it as taking equidistant samples of the entire state space. For some state spaces I imagine you could use standard deviation or some other statistic to optimize the precision of the lattice, i.e., use as low a precision as you can while still representing the significant features of the state space.
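A hedged sketch of what equidistant sampling of an n-dimensional state space might look like; the bounds and step count are arbitrary. Note that the sample count grows as `steps ** dims`, so the lattice precision has to drop quickly as dimensions are added:

```python
import itertools

def lattice(bounds, steps):
    """Equidistant samples of an n-dimensional state space.

    bounds: one (low, high) pair per dimension
    steps:  lattice points per dimension
    """
    axes = [
        [lo + i * (hi - lo) / (steps - 1) for i in range(steps)]
        for lo, hi in bounds
    ]
    return list(itertools.product(*axes))

# 3 points per axis in 4 dimensions -> 3**4 = 81 samples.
grid = lattice([(0.0, 1.0)] * 4, steps=3)
```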

-Jon

Jon - I see what you are saying (2D vs. n dimensions), but golly - that is the whole reason for starting with a random population: to NOT have to explore the whole space. When I start with a small population of 20 individuals, I am doing it for a reason - because it might take an hour or two to find the fitness of each individual. In a high-dimensional space, how many equidistant samples are you planning on taking? I think you will find that either way you do it, the overall performance will be about the same - BUT, that is an untested "feeling", not a comparative analysis of the two techniques.

Eric,

If it takes you that long to compute the fitness of each member, you may just want to solve for a low-precision lattice and not even bother trying to get a perfect precision result.

To illustrate the kinds of cases where this new technique might be useful, let's use the travelling salesman problem with 6 cities. The old GA in my applet would start with a few dozen random orderings of the cities:

1 - 2 - 3 - 4 - 5 - 6 = [fitness]

4 - 2 - 1 - 3 - 6 - 5 = [fitness]

6 - 3 - 2 - 5 - 4 - 1 = [fitness]

etc.

The fitness is the distance travelled when you visit each city in the given order. The old way would then start hill climbing from those random seeds. My new lattice idea would have you solve the problem in two steps, the first being randomly generated low-precision parameter combinations:

12 - 34 - 56 = [fitness]

34 - 12 - 56 = [fitness]

56 - 12 - 34 = [fitness]

etc.

There are WAY fewer combinations to solve for in this case, so we can rapidly find something good. The second step is to seed our old GA with the best combinations found in the first step rather than with completely random ones. We can run step 2 once for each hill found in the first step, or simply seed with the step-1 optimum and run once.
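A small sketch of the two steps on the 6-city example; the city coordinates here are made up, and pairing cities into fixed "super-cities" is just one way to build the low-precision combinations:

```python
import itertools
import math

# Hypothetical city coordinates; any 6 points would do.
CITIES = {1: (0, 0), 2: (1, 0), 3: (4, 0), 4: (5, 0), 5: (4, 3), 6: (5, 3)}

def tour_length(order):
    """Fitness: total distance travelled visiting cities in the given order."""
    return sum(
        math.dist(CITIES[a], CITIES[b]) for a, b in zip(order, order[1:])
    )

# Step 1: treat fixed pairs as low-precision "super-cities" and try
# every ordering of the pairs -- only 3! = 6 tours instead of 6! = 720.
pairs = [(1, 2), (3, 4), (5, 6)]
coarse_best = min(
    itertools.permutations(pairs),
    key=lambda ps: tour_length([c for p in ps for c in p]),
)

# Step 2 seed: expand the winning coarse tour into a full 6-city tour
# and hand it to the ordinary GA as a starting individual.
seed = [c for p in coarse_best for c in p]
```

Step 1 evaluates only the 6 coarse tours, and its winner expands directly into a step-2 seed for the fine-grained GA to mutate.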

If step one discovered that the best combination is this:

34 - 12 - 56 = [fitness]

It would load the old GA with a higher-precision version of that result:

3 - 4 - 1 - 2 - 5 - 6 = [fitness]

Step 2 will then try all sorts of slight variations that would have been skipped over by step 1.

When you scale this technique up to high-dimensional problems, I think it will yield a good performance increase. If I get time, I might write a new version of the TSP applet to compare the performance of the two GA techniques.

It may work well for the type of problem you are solving, but I think a lattice-type starting point is in general inferior to random seeds.

The worst degenerate case for the lattice-type technique is where most points on the lattice have very similar fitness. Then, you will be no closer to a solution than before you started.

Of course you aren't guaranteed to avoid this issue with random points, but with most problems you can assume the solution space has some sort of structure, and using random points avoids accidentally fitting that structure right off in a way that keeps you from solving the problem well or quickly.

I have always considered it part of the random seeding technique that one uses enough random points to get lattice-like coverage of the solution space, anyway.

I should add that I've seen a number of cases where, due to the amount of computation involved, we had to use fewer seeds than we'd have liked. In those cases, we used half our points for equidistant solution-space samples and allocated the other half randomly. This was a kind of compromise: it made sure we tried something in every area of the solution space while keeping the random quality we like.
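A sketch of that half-and-half compromise; the bounds, the seed count, and the choice to draw the structured half by sampling from the grid are all assumptions, not the commenter's actual setup:

```python
import itertools
import random

def hybrid_seeds(bounds, n_seeds, lattice_steps=3, rng=None):
    """Half equidistant lattice points, half uniform random points --
    a compromise between solution-space coverage and randomness."""
    rng = rng or random.Random(0)
    axes = [
        [lo + i * (hi - lo) / (lattice_steps - 1) for i in range(lattice_steps)]
        for lo, hi in bounds
    ]
    grid = [list(p) for p in itertools.product(*axes)]
    # Structured half: lattice points (sampled if the grid is larger
    # than the budget). Random half: uniform draws over the bounds.
    structured = rng.sample(grid, min(n_seeds // 2, len(grid)))
    randoms = [
        [rng.uniform(lo, hi) for lo, hi in bounds]
        for _ in range(n_seeds - len(structured))
    ]
    return structured + randoms

seeds = hybrid_seeds([(0.0, 1.0), (0.0, 1.0)], n_seeds=10)
```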

Another technique I've seen tried is to use equidistant points, but disturb them by some small random amount. This way, you have approximately a lattice, but it's not rigid.
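That jittered-lattice idea might look like the following; the jitter fraction and bounds are invented, and the perturbation is scaled to the lattice spacing so each point stays near its grid node:

```python
import itertools
import random

def jittered_lattice(bounds, steps, jitter=0.1, rng=None):
    """Equidistant lattice points, each disturbed by a small random
    amount -- approximately a lattice, but not a rigid one."""
    rng = rng or random.Random(0)
    axes = [
        [lo + i * (hi - lo) / (steps - 1) for i in range(steps)]
        for lo, hi in bounds
    ]
    spacing = [(hi - lo) / (steps - 1) for lo, hi in bounds]
    return [
        # Offset each coordinate by up to +/- jitter * lattice spacing.
        [x + rng.uniform(-jitter, jitter) * s for x, s in zip(p, spacing)]
        for p in itertools.product(*axes)
    ]

pts = jittered_lattice([(0.0, 1.0), (0.0, 1.0)], steps=3)
```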

Again, in all these cases, the goal is to avoid accidentally picking points that coincide with some structure of the solution space.
