Ed Gilbert wrote:
I never tried that experiment, but it is not intuitive to me that it would only be a minor disadvantage. Without an eval term for man difference, I would expect the engine might toss a piece just to gain some minor positional advantage. Do you understand why this doesn't seem to be a problem?
My guess is that the patterns themselves can do the count. Maybe patterns containing more men get higher values, and the material balance is regulated that way, albeit with some loss of accuracy.
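To illustrate the idea, here is a minimal sketch (not my actual code; the 4x4 board, the 2x2 patterns and the weight values are purely illustrative): if every square is covered by some pattern, a per-man bonus is representable entirely inside the pattern weight tables, so an optimizer could in principle learn the material count without an explicit piececount term.

```python
import itertools

EMPTY, WHITE, BLACK = 0, 1, 2

# Hypothetical 4x4 board covered by four disjoint 2x2 patterns,
# 3 states per square -> 3^4 = 81 weight entries per pattern.
PATTERNS = [
    [(r, c), (r, c + 1), (r + 1, c), (r + 1, c + 1)]
    for r in (0, 2) for c in (0, 2)
]

def pattern_index(board, squares):
    """Base-3 index of a pattern's occupancy."""
    idx = 0
    for sq in squares:
        idx = idx * 3 + board.get(sq, EMPTY)
    return idx

def material_weights(bonus):
    """Weight tables encoding ONLY a per-man material bonus.

    A real trainer would learn arbitrary values per index; this just
    shows that 'white man = +bonus, black man = -bonus' fits purely
    into pattern weights, with no separate piece-count feature."""
    tables = []
    for squares in PATTERNS:
        table = {}
        for states in itertools.product((EMPTY, WHITE, BLACK),
                                        repeat=len(squares)):
            idx, val = 0, 0
            for s in states:
                idx = idx * 3 + s
                val += bonus if s == WHITE else (-bonus if s == BLACK else 0)
            table[idx] = val
        tables.append(table)
    return tables

def evaluate(board, tables):
    return sum(t[pattern_index(board, sqs)]
               for t, sqs in zip(tables, PATTERNS))

board = {(0, 0): WHITE, (0, 1): WHITE, (3, 3): BLACK}  # white up one man
tables = material_weights(bonus=100)
print(evaluate(board, tables))  # prints 100 (= 2*100 - 1*100)
```

With overlapping patterns (as real engines use) the bonus would be split across several tables per square, which is where the "some loss of accuracy" likely comes from.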
In one experiment, I trained my eval on 16M positions and got a score of 0.7 (out of 1.0) against some reference eval function. Removing the piececount lowered it to 0.67.
I also tried eval features for -1 piece, +1 piece, -2 pieces, and +2 pieces, somewhat similar to what Gérard plans. That added about 4-5 Elo points (i.e. barely measurable after 50k games), but it adds memory and time requirements for the optimizer.
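For clarity, a small sketch of the two encodings (the function name and the clamp at a two-man difference are my own illustration, not necessarily what Gérard has in mind): a single linear man-difference feature with one weight, versus separate indicator features for each specific difference, which lets the optimizer give e.g. "down one man" a different magnitude than minus "up one man".

```python
def piece_features(white_men, black_men):
    """Two hypothetical material encodings for a linear evaluation.

    linear:   one feature whose value is the man difference
              (a single weight scales it).
    bucketed: indicator features for -2, -1, +1, +2 men
              (differences beyond +/-2 clamped, equality = all zero),
              so each specific difference gets its own weight."""
    diff = white_men - black_men
    linear = [diff]
    buckets = {-2: 0, -1: 0, +1: 0, +2: 0}
    clamped = max(-2, min(2, diff))
    if clamped in buckets:
        buckets[clamped] = 1
    bucketed = [buckets[k] for k in (-2, -1, +1, +2)]
    return linear, bucketed

print(piece_features(10, 11))  # prints ([-1], [0, 1, 0, 0])
```

The bucketed form multiplies the feature count (and hence the optimizer's memory and time), which matches the trade-off described above.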
So as a very rough guideline (in breakthrough):
- Each doubling of the number of training positions adds about 40-60 Elo points.
- Fixed feature: piececount: about 30-50 Elo points.
- Fixed feature: specific piececount: about 4-5 Elo points (compared to linear piececount).
- Fixed feature: tempo difference: so small it is not measurable after 50k games.
- Using an optimised set of patterns instead of single squares: 20-40 Elo points.
For international draughts, just divide the Elo points by 10.