evolving neural net

Yossi Mossel ymossel at hotmail.com
Wed Mar 19 09:52:10 EST 2003

Hello Yip,

I have been evolving neural networks with galib for quite a while now.
My networks control artificial agents in simulated environments. Hopefully 
you can learn from my experience.

I had some problems initially when using galib. Most of these were a result 
of the fact that my objective function was a noisy one - it does not return 
the same value for a given genome on different runs.  I am not sure from your 
post whether you expect the same or not.
Just in case, I'll mention a few pointers on using galib with a noisy 
objective function.

- You should implement a population evaluator that always re-evaluates the 
whole population, something like:

void MyPopEvaluator(GAPopulation & p)
{
  for(int i = 0; i < p.size(); i++)
    p.individual(i).evaluate(gaTrue);  // gaTrue forces re-evaluation
}

and install it on your population with p.evaluator(MyPopEvaluator) before 
handing the population to the GA.

- Elitism as implemented in GASimpleGA did not work for me.  The best 
agent is usually not really the best but just the "luckiest" in the current 
generation.  I had better success using the steady-state algorithm with a 0.9 
replacement rate.  I had some problems with the statistics because of this; 
if you search Google for my name (Yossi Mossel) you should find an old post 
to this list about it.
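Independent of galib's API, the replacement-rate idea can be sketched in a few lines (the function name and the "offspring" step are mine, just to show the mechanics - not galib's actual GASteadyStateGA):

```cpp
#include <algorithm>
#include <cstddef>
#include <random>
#include <vector>

// One individual: a fitness value plus an id. The fitness here is a
// stand-in for whatever the (noisy) objective function returns.
struct Individual {
    double fitness;
    int id;
};

// One steady-state step with replacement rate r (galib: pReplacement):
// the best (1-r) fraction survives unchanged, and the rest are replaced
// by "offspring" (here just copies of random survivors, standing in for
// selection + crossover + mutation).
std::vector<Individual> steadyStateStep(std::vector<Individual> pop,
                                        double r, std::mt19937& rng) {
    std::sort(pop.begin(), pop.end(),
              [](const Individual& a, const Individual& b) {
                  return a.fitness > b.fitness;   // maximize fitness
              });
    std::size_t keep = static_cast<std::size_t>(pop.size() * (1.0 - r));
    if (keep == 0) keep = 1;                      // always keep the best
    std::uniform_int_distribution<std::size_t> pick(0, keep - 1);
    for (std::size_t i = keep; i < pop.size(); ++i)
        pop[i] = pop[pick(rng)];                  // offspring of a survivor
    return pop;
}
```

With r = 0.9 almost the whole population turns over each generation, which is what blunts the "lucky genome" problem of elitism under a noisy objective.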

And other pointers:

- In neural networks improvements are typically very small.  I found the 
linear scaling scheme as implemented in galib to be quite helpful.
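For reference, linear scaling maps raw scores f to f' = a*f + b, with a and b chosen so the average score is preserved and the best score becomes a fixed multiple C of the average (galib's default multiplier is 1.2). A minimal self-contained sketch of that formula (function name mine; it omits the clamp galib applies when the worst score would scale negative):

```cpp
#include <algorithm>
#include <cstddef>
#include <numeric>
#include <vector>

// Linear fitness scaling: f' = a*f + b, with a and b chosen so that
// mean(scaled) == mean(raw) and max(scaled) == C * mean(raw).
// Simplified sketch: no clamp for scores that would go negative.
std::vector<double> linearScale(const std::vector<double>& raw, double C) {
    double avg = std::accumulate(raw.begin(), raw.end(), 0.0) / raw.size();
    double max = *std::max_element(raw.begin(), raw.end());
    if (max == avg)                      // flat population: nothing to spread
        return raw;
    double a = (C - 1.0) * avg / (max - avg);
    double b = avg * (max - C * avg) / (max - avg);
    std::vector<double> scaled(raw.size());
    for (std::size_t i = 0; i < raw.size(); ++i)
        scaled[i] = a * raw[i] + b;
    return scaled;
}
```

The effect is to stretch the tiny fitness differences between near-equal networks so selection pressure does not vanish late in the run.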

- I would play around with the mutation operator.  Try Gaussian and uniform 
mutation and see if it makes a difference.
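The two operators differ in how they perturb a weight: Gaussian mutation nudges it by normally distributed noise, while uniform mutation resets it to a fresh random value in range. A self-contained sketch of both (function names mine, not galib's mutator API):

```cpp
#include <random>
#include <vector>

// Gaussian mutation: each weight mutates independently with probability p
// by adding zero-mean normal noise with standard deviation sigma.
void gaussianMutate(std::vector<double>& w, double p, double sigma,
                    std::mt19937& rng) {
    std::uniform_real_distribution<double> coin(0.0, 1.0);
    std::normal_distribution<double> noise(0.0, sigma);
    for (double& x : w)
        if (coin(rng) < p) x += noise(rng);
}

// Uniform mutation: each weight mutates independently with probability p
// by being replaced with a fresh value drawn uniformly from [lo, hi].
void uniformMutate(std::vector<double>& w, double p, double lo, double hi,
                   std::mt19937& rng) {
    std::uniform_real_distribution<double> coin(0.0, 1.0);
    std::uniform_real_distribution<double> fresh(lo, hi);
    for (double& x : w)
        if (coin(rng) < p) x = fresh(rng);
}
```

Gaussian mutation tends to suit the small refinements neural-net weights need, while uniform mutation keeps the search from stagnating; trying both is cheap.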

- I evolved populations of size 100 for about 250 generations, which would 
take 4 hours.  If your objective function takes 6-7 minutes, your runs might 
be unreasonably long: 100 genomes over 250 generations is 25,000 evaluations, 
or roughly 2,500 hours at 6 minutes each.

Hope this helps,

Yossi Mossel
Computational Neuroscience
Tel-Aviv University

----Original Message Follows----
From: Anthony Yip <yipsystem at yahoo.com>
Reply-To: galib at mit.edu
To: galib at mit.edu
Subject: evolving neural net
Date: Tue, 18 Mar 2003 14:30:04 -0800 (PST)

Hello All,

I am a newbie to GA and am wondering if anyone can share their valuable 
experience with me with regard to evolving a neural net.

Thank you in advance for reading about my situation.

I am currently using a neural net as a non-linear function approximator to 
learn some target function. It is a binary target of 0 and 1, and 1 occurs 
in only about 15% of all training cases. As far as performance goes, the 
trained net produces good signals at times, but at other times the wrong 
signals can be costly.

Therefore I am now turning to GA in the hope of evolving a set of weights 
that produces better signals. I have heard arguments on both sides with 
regard to GA being able to produce better results, so I am willing to give 
it a try.

Now the details of my situation. The objective function I have to run for 
each net would probably take 5-7 minutes. I have a couple of nets that 
produce "semi-satisfactory" results. My net is currently 25x50x1, so there 
are quite a few weights to evolve/train.

I am planning on using my own genome class, composed solely of two 2D 
genomes of double/float (one for the input x hidden weights, another for 
the hidden x output weights); initializing the population to a small size 
(any suggestions? time could be an issue) and seeding some (about 10) of 
the genomes with my own trained net weights; and using a steady-state GA 
with tournament selection.
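As a sketch of that encoding (a plain struct standing in for the proposed composite of two 2D genomes; all names here are mine, not galib's), a 25x50x1 net carries 25*50 + 50*1 = 1,300 weights:

```cpp
#include <vector>

// Sketch of the proposed genome for a 25x50x1 net: two weight matrices,
// input->hidden (25x50) and hidden->output (50x1), stored row-major.
// A galib user would hold each matrix in a GA2DArrayGenome<float>;
// this plain struct just shows the layout and weight count.
struct NetGenome {
    static const int kIn = 25, kHidden = 50, kOut = 1;
    std::vector<float> inToHidden;   // kIn * kHidden weights
    std::vector<float> hiddenToOut;  // kHidden * kOut weights

    NetGenome()
        : inToHidden(kIn * kHidden, 0.0f),
          hiddenToOut(kHidden * kOut, 0.0f) {}

    // Row-major accessors into each matrix.
    float& ih(int i, int h) { return inToHidden[i * kHidden + h]; }
    float& ho(int h, int o) { return hiddenToOut[h * kOut + o]; }

    int totalWeights() const {
        return static_cast<int>(inToHidden.size() + hiddenToOut.size());
    }
};
```

Seeding a handful of individuals from the already-trained net means the GA starts from a known-good region of that 1,300-dimensional weight space rather than from random noise.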

My goal is to maximize GA's potential to help me. Given my current situation 
and my plan, can someone please give me some pointers or suggestions as to 
how or what else to do to approach this? Any advice from experienced GAers 
(especially those with NN experience) would be a treasure. Thanks greatly in 
advance; I would deeply appreciate any help.


