NNUE

Discussion about development of draughts in the time of computer and Internet.
BertTuyt
Posts: 1443
Joined: Wed Sep 01, 2004 19:42

NNUE

Post by BertTuyt » Wed Aug 19, 2020 22:15

Joost brought to my attention that a new breakthrough is happening in Computer Chess.
It is called NNUE, which stands for Efficiently Updatable Neural Networks (but spelled backwards :o ).

I would advise programmers to have a look at the talkchess.com website.
Maybe of interest for Computer Draughts.

Bert

Krzysztof Grzelak
Posts: 943
Joined: Thu Jun 20, 2013 17:16
Real name: Krzysztof Grzelak

Re: NNUE

Post by Krzysztof Grzelak » Thu Aug 20, 2020 13:14

A very interesting thing. I played a test match against Komodo version 14 and the match ended in a draw. With weaker engines, Stockfish usually wins. I think it would be a very good thing for draughts programs.

Madeleine Birchfield
Posts: 7
Joined: Mon Jun 22, 2020 12:36
Real name: Madeleine Birchfield

Re: NNUE

Post by Madeleine Birchfield » Mon Aug 24, 2020 12:19

A comment on the NNUE architecture: chess and shogi engine devs usually use the halfkp architecture, which encodes on the first layer the relation between the player's own king and each piece on the board. However, draughts doesn't have an equivalent of the 'king' piece in chess and shogi (a draughts king = a promoted piece in shogi/chess), so a different architecture must be used. The GitHub repo at https://www.github.com/tttak/Stockfish has additional possible architectures in the src/eval/nnue/architectures folder, like mobility and p (piece), which might be usable in draughts, but more likely new architectures will have to be designed, like piece-count and king-count, etc.
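For reference, a halfkp-style feature index can be sketched as follows. This is a simplification: the real Stockfish/shogi encodings differ in details such as an extra 'no piece' slot, so treat the constants as illustrative only.

```python
N_SQ = 64            # board squares
N_PIECE_TYPES = 10   # 5 non-king piece types x 2 colours

def halfkp_index(king_sq, piece_type, piece_sq):
    """One binary input per (own-king square, piece type, piece square) triple."""
    return king_sq * (N_SQ * N_PIECE_TYPES) + piece_type * N_SQ + piece_sq

N_FEATURES = N_SQ * N_SQ * N_PIECE_TYPES  # inputs per perspective
```

The point of the encoding is that every input feature is tied to the own king's square, which is why it transfers poorly to games without a unique king piece.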

BertTuyt
Posts: 1443
Joined: Wed Sep 01, 2004 19:42

Re: NNUE

Post by BertTuyt » Tue Nov 17, 2020 20:54

I would invite all Draughts programmers to have a close look at http://www.3dkingdoms.com/checkers.htm.
It is a checkers program, GuiNN Checkers 2.0, based upon an NN.
Source code is also included, along with NN training tools.

At least I have seen that Rein is already digging into the details.
Rein, maybe you can post/share some findings here, and your gut feel on whether this is the next thing for International Draughts.

Bert

Rein Halbersma
Posts: 1686
Joined: Wed Apr 14, 2004 16:04
Contact:

Re: NNUE

Post by Rein Halbersma » Wed Nov 18, 2020 11:42

BertTuyt wrote:
Tue Nov 17, 2020 20:54
I would invite all Draughts programmers to have a close look at http://www.3dkingdoms.com/checkers.htm.
A checkers program GuiNN Checkers 2.0 based upon a NN.
Also source code included, including NN training tools.

At least i have seen that Rein is already digging into the details.
Rein, maybe you can post/share some findings here, and your gut feel if this is the next thing for International Draughts.

Bert
Jonathan Kreuzer achieved pretty good results with it in 8x8 checkers, beating Cake by a small margin. It seems doable to also port this to 10x10 international draughts. I have no idea if it will beat Scan-like patterns. It does require quite some investment. The weight learning part is easy now, since Jonathan has open sourced a very clean Keras/Tensorflow script. (It should also be possible to train Scan-like patterns with it, but I haven't figured this out yet). The C++ part is quite involved, since a fast implementation does require some SIMD programming. The chess folks have optimized this using incremental updates (not yet in Jonathan's C++ code). The NNUE architecture came from the Computer Shogi community, so this is a great cross-game development.
Last edited by Rein Halbersma on Wed Nov 18, 2020 11:45, edited 1 time in total.

Fabien Letouzey
Posts: 299
Joined: Tue Jul 07, 2015 07:48
Real name: Fabien Letouzey

Re: NNUE

Post by Fabien Letouzey » Fri Nov 20, 2020 17:33

The forum appears to have crashed and we lost a day of messages. I copy mine just below.

---

Thanks to Rein for showing this, as I'm not following chess news.

What's amazing is the speed; the rest might not be so important in comparison. The screenshot reveals MNPS range, in line with classic/pattern evals, and seemingly orders of magnitude faster than anything A0-related. It uses sparsity like patterns (they call the input layer 'overdetermined'), and I don't see any convolutions in sight.

This feels much more practical than CNNs. Also, it (optionally) uses the GPU only during learning, making it usable by everyone rather than just owners of Nvidia cards. 10x10 should not be too large a step-up, computation-wise.

I'm more skeptical about input encoding. NNUEs originate from Shogi where basically only king attacks matter. So it makes perfect sense that they would put emphasis on the position of kings. How to translate that to non-chess games is probably an open question. The focus in chess is the relation between pieces (even at long distance, I'm guessing), while patterns bet on proximity being a key element, working more like a human eye (hence Scan's name).

---

Ed's answer also disappeared from the forum. He said that the screenshot might be from a non-NN version, and that the NN is actually 5 times slower compared to a previous version. Still, 5 times is nowhere near the slow down of A0-based programs (MCTS + CNN).

Also, the less tactical the game, the less speed should matter. Checkers doesn't have backward captures for men, so I wouldn't expect a lot of tactics. Frisian draughts and losing draughts on the other hand ...

Fabien.

Rein Halbersma
Posts: 1686
Joined: Wed Apr 14, 2004 16:04
Contact:

Re: NNUE

Post by Rein Halbersma » Fri Nov 20, 2020 22:02

In the spectrum of eval complexity, one could make roughly the following hierarchy:
  • Patterns: Pioneered by Fabien's Scan, strongest programs now for 8x8 checkers and 10x10 draughts. Input = K indices ranged 1..3^N for patterns of N squares, only K valid for every position. Fast index computations (PEXT) + direct lookup of K weights. No layers on top (sigmoid for training).
  • Raw board: Pioneered in backgammon in the 1990s, now by Jonathan's GuiNN_checkers. Slightly stronger than Cake, still weaker than Kingsrow for 8x8 checkers. Input = all Piece entries (both type and square). 3 fully connected layers on top. Requires Python float weights -> C++ int conversion + SIMD programming + incremental updates (not yet done by Jonathan) to be fast during game play.
  • NNUE: Pioneered in Shogi programs, now in Stockfish, currently strongest programs for chess, Shogi. Input = all King (square) * Piece (both type and square) entries. 3 fully connected layers on top. Same C++ machinery as for the above entry required (all implemented in Shogi and Stockfish).
  • CNN: Pioneered by AlphaZero, currently strongest for Go, formerly for chess and Shogi. No successful attempts for checkers/draughts AFAIK. Input = all Piece (both type and square) entries, but the expense comes from 3x3 convolutions in networks 40-80 layers deep.
All eval types can be computed with a similar pipeline of reading in positions and the respective input formats, building a neural network that ends in a sigmoid to map to 0/1 score (or a tanh to map to -1/0/+1). The weights can be trained with gradient descent. Jonathan Kreuzer's Python script is extremely clean and should be straightforward to generalize to all these type of evals.
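The shared pipeline described above (inputs -> dense layers -> sigmoid to a 0-1 score) fits in a few lines of NumPy; the layer sizes below are made up for illustration, and real training would of course happen in Keras rather than with hand-rolled weights:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# Hypothetical layer sizes: 191 raw-board inputs, two ReLU hidden layers, one output.
sizes = [191, 64, 32, 1]
weights = [rng.normal(scale=0.1, size=(m, n)) for m, n in zip(sizes, sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def evaluate(x):
    """Forward pass ending in a sigmoid that maps the score to (0, 1)."""
    for W, b in zip(weights[:-1], biases[:-1]):
        x = np.maximum(x @ W + b, 0.0)  # ReLU hidden layers
    return sigmoid(x @ weights[-1] + biases[-1])

score = evaluate(rng.integers(0, 2, size=191).astype(float))
```

Swapping the final sigmoid for a tanh gives the -1/0/+1 score mapping instead.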

In Shogi, they abbreviate the inputs by terms like KP for NNUE (king times piece). They even had KPP (king-piece-piece combinations) prior to NNUE. One could call Raw board networks "P". For NNUE they also add both K and P inputs to the KP tables (re-using the non-zero entries, this is called "factorization"), making it more of a KP + K + P network.

For draughts, there is no unique king piece, so one could either try the "P" type of networks, or "PP" networks: all Piece (both type and square) * Piece (both type and square) entries. For draughts, PP would be about as expensive as KP in chess. It would require serious resources, but still 4 orders of magnitude less than A0 type CNNs.

A slightly cheaper version might be called "PN" networks: all Piece (both type and square) * Neighbor (both type and square) entries. So only the 4 neighboring squares get computed. This is only slightly more expensive than the "P" type networks, yet might offer a flexible form of Scan-like patterns (speculative!).
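As a back-of-the-envelope check on the input sizes involved (my own counting conventions, not from the post; promotion-row restrictions are ignored, so these are order-of-magnitude figures only):

```python
# First-layer input sizes for the network types above,
# for 10x10 draughts: 50 playable squares, 4 piece types
# (white/black man, white/black king).
N_SQUARES, N_TYPES = 50, 4

P  = N_SQUARES * N_TYPES   # piece-on-square inputs
PP = P * P                 # all piece * piece pairs
PN = P * 4 * N_TYPES       # piece * (4 diagonal neighbours x type)
```

So PP sits in the same ballpark as chess KP (64 king squares * ~640 piece entries), while PN is only modestly larger than plain P.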

BertTuyt
Posts: 1443
Joined: Wed Sep 01, 2004 19:42

Re: NNUE

Post by BertTuyt » Mon Nov 30, 2020 11:52

Rein, thanks.
In the meantime I also started experimenting with NNUE and the checkers-code.

So far I have been able to implement the checkers NNUE source code in my engine (Damage 15.3).
I used an input vector with 191 elements (2 * 45 white/black men, 2 * 50 white/black kings, and side to move).
I also succeeded in generating a weights file with the Python script and TensorFlow, based upon my large set of games.

As usual I have introduced some bugs, but I expect to have everything working in 1-2 weeks.
I assume you are also working under the radar on NNUE and TensorFlow.

No clue yet about performance.
My gut feel is that the Scan patterns are still superior, and that the drop in nodes/second will be too big.

Bert

Rein Halbersma
Posts: 1686
Joined: Wed Apr 14, 2004 16:04
Contact:

Re: NNUE

Post by Rein Halbersma » Mon Nov 30, 2020 12:00

BertTuyt wrote:
Mon Nov 30, 2020 11:52
Rein, thanks.
In the meantime I also started experimenting with NNUE and the checkers-code.

So far I was able to implement the checkers NNUE source-code in my Engine (Damage 15.3).
I used an input vector with 191 elements (2 * 45 white/black man, 2 * 50 white/black king and side to move).
I also succeeded to generate a weights-file with the Python script and TensorFlow, based upon my large set of games.

As usual I have introduced some bugs, but expect in 1 - 2 weeks to have all working.
Great, porting Jonathan's code to 10x10 draughts seems relatively straightforward. Keep us posted of updates!
Assume you also work under the radar screen on NNUE and TensorFlow.

No clue yet about performance.
My gutfeel is that the Scan patterns are still superior, and that the drop in Nodes/seconds will be to big.

Bert
I've managed to train the Kingsrow evaluation function in Keras/TensorFlow. Ed is going to run an engine match to see how strong the Keras trained weights are compared to his own trained weights. I'll post a separate thread as soon as the results are in.

BertTuyt
Posts: 1443
Joined: Wed Sep 01, 2004 19:42

Re: NNUE

Post by BertTuyt » Mon Nov 30, 2020 12:07

Rein, interesting developments.

How many games/positions did you use, and what was the input vector in your case?
Did you use the same Python script, or were some modifications needed to cope with the 10x10 set?

Was your input vector also with 191 elements, or did you use the larger vector which you also proposed in your previous post?

Bert

BertTuyt
Posts: 1443
Joined: Wed Sep 01, 2004 19:42

Re: NNUE

Post by BertTuyt » Mon Nov 30, 2020 12:26

For those who have not had a look at the checkers Python script:
when you remove all the code which processes the input data and writes the output file, the real core is really small.
See below the optimization part, with only a few lines of code needed.
TensorFlow/Keras is really an impressive framework.

Code: Select all

# Create the neural net model
model = keras.Sequential([
    keras.layers.Dense(layerSizes[0], activation="relu"),
    keras.layers.Dense(layerSizes[1], activation="relu"),
    keras.layers.Dense(layerSizes[2], activation="relu"),
    keras.layers.Dense(layerSizes[3], activation="sigmoid"),  # use sigmoid for our 0-1 training labels
])

lr_schedule = keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.01, decay_steps=5000, decay_rate=0.96)
opt = keras.optimizers.Adam(learning_rate=lr_schedule)

# not sure what loss function should be or if should use sigmoid activation
model.compile(optimizer=opt, loss="mean_squared_error")

model.fit(positionData, positionLabels, batch_size=batchSizeParam, epochs=epochsParam)
Bert

Rein Halbersma
Posts: 1686
Joined: Wed Apr 14, 2004 16:04
Contact:

Re: NNUE

Post by Rein Halbersma » Mon Nov 30, 2020 12:41

BertTuyt wrote:
Mon Nov 30, 2020 12:07
Rein, interesting developments.

How many games/positions did you use, and was was the input vector in your case?
Did you use the same python script, or were some modifications needed to cope with the 10x10 set?

Was your input vector also with 191 elements, or did you use the larger vector which you also proposed in your previous post?

Bert
Just as a trailer for the full movie: I wrote Kingsrow's eval from scratch as a Keras model; it's not related at all to Jonathan's eval code. Ed provided me with 231 million positions, and the input was 5 material features, 8 pattern indices and 2 game phases, as well as the game result (W/L/D). Not counting the data loading and some low-level helper functions, the neural network takes less than 10 lines of Python code. More to come later this week.

Rein Halbersma
Posts: 1686
Joined: Wed Apr 14, 2004 16:04
Contact:

Re: NNUE

Post by Rein Halbersma » Mon Nov 30, 2020 13:14

BertTuyt wrote:
Mon Nov 30, 2020 11:52
In the meantime I also started experimenting with NNUE and the checkers-code.
So to be clear: Jonathan Kreuzer's eval is indeed an NNUE, in the sense that you can do incremental updates and use SIMD programming for high speed in C++. It is, however, rather different from the Stockfish NNUE, in the sense that there are no piece interactions in the input layer. For Stockfish (and Shogi programs), the raw inputs already contain all King-Piece combinations, not raw piece squares.
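The "efficiently updatable" part comes down to patching the first-layer accumulator when a piece moves, instead of recomputing the full matrix product. A NumPy illustration with made-up sizes (real engines do this with int16 weights and SIMD, but the arithmetic is the same):

```python
import numpy as np

rng = np.random.default_rng(1)
N_INPUTS, N_HIDDEN = 191, 64                 # made-up sizes
W1 = rng.normal(size=(N_INPUTS, N_HIDDEN))   # first-layer weights

x = np.zeros(N_INPUTS)
x[[3, 10, 42]] = 1.0                         # some occupied input features
acc = x @ W1                                 # full first-layer accumulator

# A piece moves from input feature 10 to input feature 14:
x2 = x.copy()
x2[10], x2[14] = 0.0, 1.0
# Incremental update: subtract the vacated feature's weights, add the new one's.
acc2 = acc - W1[10] + W1[14]
```

Since only one or two input features change per move, the update costs O(hidden size) per changed feature instead of a full matrix-vector product.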

Rein Halbersma
Posts: 1686
Joined: Wed Apr 14, 2004 16:04
Contact:

Re: NNUE

Post by Rein Halbersma » Mon Nov 30, 2020 13:21

BertTuyt wrote:
Mon Nov 30, 2020 12:26
Tensorflow/Keras is really an impressive framework.
It is indeed awesome. Similar things can be done in PyTorch, Facebook's competitor to Google's TensorFlow.

Just to appreciate how fast things are moving: Scan 1.0 predates TensorFlow by 4 months (July vs November 2015)! I previously tried to rewrite Ed's Kingsrow C++ eval in TensorFlow (back in 2017) but got bogged down in its Byzantine API. In the meantime, I learned Python properly, doing about a dozen small projects on GitHub. With solid Python knowledge combined with Keras, you can code at a much higher level. Jonathan Kreuzer really opened my eyes to this with his script.

Post Reply