Flits self-learning mode

Discussion about development of draughts in the time of computer and Internet.
BertTuyt
Posts: 1414
Joined: Wed Sep 01, 2004 19:42

Re: Flits self-learning mode

Post by BertTuyt » Sat May 23, 2020 15:56

Rein Halbersma wrote:
Sat May 23, 2020 15:43
Suppose someone came to a tournament using Ed's endgame db driver? Would that be "powered by Kingsrow"?

Rein, I'm not aware that anyone has used Ed's driver...

Bert

Rein Halbersma
Posts: 1661
Joined: Wed Apr 14, 2004 16:04
Contact:

Re: Flits self-learning mode

Post by Rein Halbersma » Sat May 23, 2020 15:57

jj wrote:
Sat May 23, 2020 15:54
Rein Halbersma wrote:
Sat May 23, 2020 15:43
Suppose someone came to a tournament using Ed's endgame db driver? Would that be "powered by Kingsrow"?
You know very well this driver is public, and it was only for practical reasons (no Elo difference).
I already stated earlier that programmer collaboration which refrains from copying code / weights into each other's programs should be allowed. Why would it matter that the ML code is private? Scan has private ML optimization code. Suppose Fabien shared it with a few French programmers. Who cares?

Rein Halbersma
Posts: 1661
Joined: Wed Apr 14, 2004 16:04
Contact:

Re: Flits self-learning mode

Post by Rein Halbersma » Sat May 23, 2020 15:58

BertTuyt wrote:
Sat May 23, 2020 15:56
Rein, I'm not aware that anyone has used Ed's driver...

Bert

Suppose someone came to a tournament using Ed's endgame db driver? Would that be "powered by Kingsrow"?
I think Wieger Wesselink uses Scan with the Kingsrow drivers for his own private analysis engine (in order to be able to go full Linux).

Rein Halbersma
Posts: 1661
Joined: Wed Apr 14, 2004 16:04
Contact:

Re: Flits self-learning mode

Post by Rein Halbersma » Sat May 23, 2020 15:59

I think Bert has it exactly right. The data generation process is much more important than the optimization routine. IIRC, AlphaGo spent >95% of its resources on data generation (half a billion games) and very little time on the actual learning. So tinkering with the search depth, features and other things seems much more important than knowing how to optimize the weights.

And I also think this is only a very crude algorithm, similar to supervised learning: generate a few million games once, optimize, and done. A more advanced approach would be to do this in a reinforcement learning loop: generate games, optimize weights, generate new games with the new program, etc. etc. Maybe the current style of patterns is too simple to learn much more than has currently been achieved, but setting up such an end-to-end pipeline seems much more rewarding than bickering over a tiny piece of code that is best not written yourself to begin with :)
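The loop described above can be sketched end to end in a few lines. Everything below (the function names, the linear eval, the random stand-in for self-play) is purely illustrative and not anyone's actual pipeline:

```python
import random

def generate_games(weights, n_games):
    # Stand-in for self-play with the current weights: each "game"
    # yields a (features, result) pair, result in {0, 0.5, 1}.
    rng = random.Random(42)
    return [([rng.random() for _ in weights], rng.choice([0.0, 0.5, 1.0]))
            for _ in range(n_games)]

def fit_weights(weights, games, lr=0.1):
    # The "optimize" step: one gradient-descent pass on the squared
    # error between a linear eval and the game result.
    new_w = list(weights)
    for feats, result in games:
        err = sum(w * f for w, f in zip(new_w, feats)) - result
        for i, f in enumerate(feats):
            new_w[i] -= lr * err * f
    return new_w

def reinforcement_loop(n_features=4, iterations=3, games_per_iter=200):
    # Generate games, optimize weights, generate new games with the
    # improved program, and so on.
    weights = [0.0] * n_features
    for _ in range(iterations):
        games = generate_games(weights, games_per_iter)
        weights = fit_weights(weights, games)
    return weights

weights = reinforcement_loop()
```

In a real pipeline, generate_games would be the expensive self-play step (the >95% of resources mentioned above) and fit_weights the comparatively cheap optimization.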

BertTuyt
Posts: 1414
Joined: Wed Sep 01, 2004 19:42

Re: Flits self-learning mode

Post by BertTuyt » Sat May 23, 2020 16:05

JJ, I'm still puzzled: where do you see the problem?

I use my own games, my own evaluation function, my own weights file.
And yes, I used the base of an optimization tool (source code by Ed, modified for my own purposes) to generate these weights.
And I admit that I also use the pattern mechanisms as developed by Fabien, but with PEXT.
But I did not use (so far) any complicated king weights, and this is reflected in some issues in specific endgames.
And next to that, I use breakthrough information (which is also not included in the eval), but I use this to extend some search() PV lines.

So again, do I have an advantage? Yes I do, as collaborating with each other is an advantage...

Bert

Rein Halbersma
Posts: 1661
Joined: Wed Apr 14, 2004 16:04
Contact:

Re: Flits self-learning mode

Post by Rein Halbersma » Sat May 23, 2020 16:09

jj wrote:
Thu May 21, 2020 19:21
In my opinion, giving one person access to such a powerful tool and not making it publicly available makes for an unfair competition. It is unfair to the people who don't have access to an optimization program and in a way also to the people who put in the work to make their own.
This is the point where I really disagree. TensorFlow, PyTorch, SciPy and R are all open source and freely downloadable, easily installable, and with lots and lots of documentation. It still takes time to generate games, extract features and make data structures that can read in weights, but you have to do that regardless. Feeding this into any of these free and high-quality optimization libraries should not be an obstacle. They all cost $0.
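As an indication of how little code the fitting step itself needs: below is a self-contained logistic-regression fit with plain NumPy gradient descent (the free libraries listed above wrap exactly this kind of loop, with better optimizers). The features and "game results" here are random stand-ins, not real draughts data:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))                 # 2000 positions, 5 features
true_w = np.array([1.0, -2.0, 0.5, 0.0, 3.0])  # weights to recover
# Synthetic game results: win (1) with the sigmoid "win probability"
y = (rng.random(2000) < 1 / (1 + np.exp(-X @ true_w))).astype(float)

w = np.zeros(5)
for _ in range(500):
    p = 1 / (1 + np.exp(-X @ w))        # predicted win probability
    w -= 0.5 * X.T @ (p - y) / len(y)   # gradient step on the log-loss
```

After the loop, `w` approximately recovers `true_w`. The hard parts, as noted elsewhere in this thread, are generating the games and extracting the features, not these ten lines.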

BertTuyt
Posts: 1414
Joined: Wed Sep 01, 2004 19:42

Re: Flits self-learning mode

Post by BertTuyt » Sat May 23, 2020 16:29

Evaluation tuning is not a new topic; it was already discussed in 2015 (see Eval tuning).

Michel has used it for ages already, as he also wrote:

Dragon uses a conjugate gradient algorithm for this. I found it in the 'Numerical Recipes in C' book. I made it run multi-threaded and minimized the memory footprint.

Game phase 3 (the endgame) is currently the biggest: it learns 17 million parameters from 900 million examples. This takes about a week to run on my computer and uses about 17 GB of memory. The main limits are lack of memory and the slowness of the random memory accesses.

Smaller problems run much faster; learning 10,000 weights from 1 million examples takes just a couple of seconds.
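For reference, the core scheme from Numerical Recipes mentioned above is short. Here is a minimal single-threaded sketch on a tiny synthetic least-squares fit (nothing like 17 million parameters), assuming the problem has been reduced to a symmetric positive-definite system:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    # Linear conjugate gradient for A x = b, with A symmetric positive
    # definite, e.g. the normal equations of a least-squares weight fit.
    x = np.zeros_like(b)
    r = b - A @ x                    # residual
    p = r.copy()                     # search direction
    rs = r @ r
    for _ in range(max_iter or len(b)):
        Ap = A @ p
        alpha = rs / (p @ Ap)        # optimal step along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p    # new A-conjugate direction
        rs = rs_new
    return x

# Least-squares weight fit: minimize ||Fw - y||^2 via F^T F w = F^T y.
rng = np.random.default_rng(1)
F = rng.normal(size=(200, 10))   # 200 training positions, 10 features
y = F @ np.arange(10.0)          # synthetic "results"
w = conjugate_gradient(F.T @ F, F.T @ y)
```

The memory-light, multi-threaded version described above is not public; this is just the textbook iteration, which converges in at most n steps for an n-dimensional system.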


So I tend to agree with Rein.

Bert

jj
Posts: 180
Joined: Sun Sep 13, 2009 23:33
Real name: Jan-Jaap van Horssen
Location: Zeist, Netherlands

Re: Flits self-learning mode

Post by jj » Sat May 23, 2020 16:44

Rein Halbersma wrote:
Sat May 23, 2020 15:57
I previously already stated that programmer collaboration that refrains from copying code / weights into each other's programs should be allowed. Why would it matter that the ML code is private? Scan has private ML optimization code. Suppose Fabien shared it with a few French programmers. Who cares?
Well, there are rules that state that competing programs should be original. These days that means: own your code, own your data, own your tools. (Using ideas from cited authors is allowed, of course.) Or, if you don't own something, it should preferably be public. If it isn't public, then firstly it should be used with permission, and secondly you should mention the author. I'm just saying we should discuss the rules of what is allowed and what is not.

And of course I'm looking out for the interests of my baby Maximus. Now it has one more stronger opponent (in the tournament) because of the new ML eval, and I am not happy with how that happened. If Bert had done it himself I would have congratulated him; now it feels a bit like Ed paid for his lunch. But the majority decides on the rules of course; so far it is 2 against 1.

BertTuyt
Posts: 1414
Joined: Wed Sep 01, 2004 19:42

Re: Flits self-learning mode

Post by BertTuyt » Sat May 23, 2020 17:14

JJ, do you believe that the strength of Damage is based only on the optimization program?

To name a few:
* Damage at the start was not the weakest program around...
* I generated the endgame DBs myself (I know you use Ed's DBs, but I don't blame you for that, as you shared that information, and I'm sure you are able to do it yourself).
* I programmed the search and search extensions/reductions myself.
* I generated all games myself, and wrote a specific input program to parse all relevant games into a .bin file.
* I shared all the games.
* I wrote an input program myself.
* I optimized the evaluation and tested several variants.
* I wrote and shared the Damage GUI (although it has nothing to do with Elo), and also included the Hub protocol in the GUI.
* With Ed I developed the DXP Truus/Flits server.
* I always shared the move generator (sources, via my perft experiments).
* In the past I shared all Damage sources (with the evaluation as the exception), including search.
* I shared all experiments with search.
* I shared the GUI and engine, so others could play with them.

And yes, in the whole chain I did not program the weight-optimization tool myself.
Although I still believe (in line with Rein's remarks) that I could do it, and it is not rocket science, using Ed's tools provided a head start (so Ed, thanks for that), and I was open about that.

I now focus on improving my program.
At a later stage I still want to write my own optimizer, with the cpn.exe front-end, so everyone can test and design evaluation functions.
I will also share it with the community, and if you want I can already share the cpn.exe tool.

So why do I do it?
As Rein pointed out some time ago, the reason the chess world progresses so fast is its huge amount of sharing and collaboration.
In draughts we have a more closed community, and it is thanks to Ed and Fabien that we are making so much progress.
I like to compete, but I also want to give something back to everyone.
And in this way we will make progress as a team, and I constantly need to improve my program to catch up.

So what is your next contribution to the community?

Bert

jj
Posts: 180
Joined: Sun Sep 13, 2009 23:33
Real name: Jan-Jaap van Horssen
Location: Zeist, Netherlands

Re: Flits self-learning mode

Post by jj » Sat May 23, 2020 18:18

BertTuyt wrote:
Sat May 23, 2020 17:14
JJ, do you believe that the strength of Damage is based only on the optimization program?
No, but see last year's tournament. Although Damage was running on 1 thread then, I believe, which makes a small difference on 4 cores. I think Damage with the old eval is at least 100 Elo weaker than Damage with the ML eval, but you can test that.
To name a few:
* Damage at the start was not the weakest program around...
* I generated the endgame DBs myself (I know you use Ed's DBs, but I don't blame you for that, as you shared that information, and I'm sure you are able to do it yourself).
I generated them too (6 pieces), as I mentioned on the forum in 201?. I just chose not to distribute them because every user of Maximus already has Kingsrow installed. (Save the planet!)
* I programmed the search and search extensions/reductions myself.
* I generated all games myself, and wrote a specific input program to parse all relevant games into a .bin file.
* I shared all the games.
* I wrote an input program myself.
* I optimized the evaluation and tested several variants.
* I wrote and shared the Damage GUI (although it has nothing to do with Elo), and also included the Hub protocol in the GUI.
* With Ed I developed the DXP Truus/Flits server.
Thank you for that, sincerely!
* I always shared the move generator (sources, via my perft experiments).
* In the past I shared all Damage sources (with the evaluation as the exception), including search.
* I shared all experiments with search.
* I shared the GUI and engine, so others could play with them.
Yes, I also did many things myself, and true, I have not shared much yet, partly because I work in Java (see earlier post). I did make the public DSBS software for playing with a DGT digital board via DXP. I never looked at Damage yet, sorry. No offense.
And yes, in the whole chain I did not program the weight-optimization tool myself.
Although I still believe (in line with Rein's remarks) that I could do it, and it is not rocket science, using Ed's tools provided a head start (so Ed, thanks for that), and I was open about that.

I now focus on improving my program.
At a later stage I still want to write my own optimizer, with the cpn.exe front-end, so everyone can test and design evaluation functions.
I will also share it with the community, and if you want I can already share the cpn.exe tool.

So why do I do it?
As Rein pointed out some time ago, the reason the chess world progresses so fast is its huge amount of sharing and collaboration.
In draughts we have a more closed community, and it is thanks to Ed and Fabien that we are making so much progress.
I like to compete, but I also want to give something back to everyone.
And in this way we will make progress as a team, and I constantly need to improve my program to catch up.

So what is your next contribution to the community?

Bert
My motivation is the love for games like chess and draughts, for algorithms, and curiosity. Bert, I know you have a long track record and have contributed much to the community. I have not contributed as much as I would like (yet), firstly because I am always more or less behind the front runners, and secondly because my health situation prevents me from working more or faster.

My next contribution will be a new version of the Maximus app with a completely new GUI and an engine update. My goal is to provide the ideal draughts app for (aspiring) club players. (I don't make money off this, it just covers some expenses.) And I would like to do more research and publish more articles in the future.

So yes, you and others did more for the community but that doesn't mean I can't ask questions.

BertTuyt
Posts: 1414
Joined: Wed Sep 01, 2004 19:42

Re: Flits self-learning mode

Post by BertTuyt » Sat May 23, 2020 18:37

Although I'm a little irritated, let's focus on the main point.

Where do we all agree, and where do we disagree?
* If we all use each other's open, freely available ideas and you code them in your own way (so no copy/paste), that is no problem.
* It is allowed to use source code as-is if permitted by the author, like the DB interface, but it would be appropriate to mention this.

Now the trickier parts, and here I'm interested in everyone's opinion:
* Is it allowed to use an idea which is not commonly shared, but the result of a collaboration between two people? I know there were many people who jointly worked on computer draughts but kept ideas among themselves. My opinion: this is allowed if both agree (and, as Rein mentioned, sharing ideas is more or less already an implicit approval). Even if the idea is brilliant and only two people know it, that is (in my opinion) legal. As an example, Stef Keetman shared (in an email exchange, long ago somewhere in the 1990s) that he divides the evaluation by 2 if both sides have a king, which is now commonly applied, even though there are now DBs with more pieces.

* You share (in collaboration) a tool (like the ML optimizer), but this tool is not shared with the community (yet, or perhaps ever). The author of the tool has no issue with you using it for your own purposes. In my opinion that is allowed.

* A side scenario: there is a development tool like TurboDambase with many games, which you can buy. You could use this set for opening-book preparation. As everyone can buy it, that is no problem. But now Klaas Bor gives it to you because you shared something else with him. Is it then allowed to use the free copy of TurboDambase for your own purposes, given that you have an advantage over others who need to pay? The same question applies if someone shares with you (for free) a very expensive compiler (like the Intel compiler) which generates much faster code.

So my position: I always had the impression that Ed shared the tool with me because he knew I was writing my own evaluation based on my own input data. But if this is not the case, then I will not use the weights file, and I will program my own optimizer myself.

My bottom line (and again, whether it applies to Ed is something he should speak out on himself): if two computer draughts programmers share mutually exclusive tools and ideas, and both (or one side) use them, that should be allowed by our rules.

I hope this helps, and I hope we can continue this discussion in a constructive way.

Bert

Sidiki
Posts: 170
Joined: Thu Jan 15, 2015 16:28
Real name: Coulibaly Sidiki

Re: Flits self-learning mode

Post by Sidiki » Sat May 23, 2020 18:46

Hi all

I sent the update to Ed, Bert, Jan and Luzimar (a Brazilian friend who shares many, many things with me, a very kind and open person like everybody in this community). It contains Kingsrow, Scan, Maximus and a bit of Damage. Damage doesn't manage the time correctly in DXP mode on my computer, I don't know why. So an update will be done later, more focused on Damage and of course the others.

Rein, I don't have your email; I will send it to you too, if you are interested.

I see that my post may seem argumentative, but what I intended to show is that a good program must be:
GOOD OPENING BOOK + GOOD EVALUATION + GOOD ENDGAME DATABASE

Friendly
Sidiki

BertTuyt
Posts: 1414
Joined: Wed Sep 01, 2004 19:42

Re: Flits self-learning mode

Post by BertTuyt » Sat May 23, 2020 18:54

JJ, thanks for your reaction. This will clear the air, appreciated!

Some points regarding the Damage strength.

The latest version of Damage before ML was equal to Kingsrow (before ML); you can find it documented on the forum, and I shared the PDN files.

What is the history of the Damage version in the tournament?
I was also quite competitive some years ago, and did not want to share Damage (the GUI was no problem), as I was too focused on becoming no. 1.
I'm now older ( :( ) and close to retirement, and my health is not perfect either, so I have other priorities, and sharing is one of them.

Although I never shared the previous Damage, I accidentally put a version in a shared Dropbox folder and someone (guess who :D ) found it.
There were several bugs in this version, especially because I had removed the code for wing locks (you can check the match games), as I wanted to start programming them again from scratch. I had also disabled some search extensions.
In the end I did not mind that it was used.

My own measurements indicate an evaluation improvement of 30-50 Elo from switching to ML, and of course I'm improving further; just read through the Stockfish code and you will see we have many options to explore.

OK, and to share an idea with you... :D
Scan has two occasions (pruning and singular extension) where a move is not made, but the same position is searched again with a different depth and alpha-beta window.

If skip_move is move_none, you redundantly perform all kinds of checks and function calls a second time; you can speed up the program by, for example, making a separate pruningsearch() routine...
I think Fabien is aware of this, but it makes the code less elegant, and maybe he is right...
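The split suggested above can be illustrated on a toy game tree. Everything here (the integer "positions", the moves, the eval) is made up purely to show the structure and is in no way Scan's actual code. The point is that the excluded move only matters at the root of the verification search, so the recursion can use a lean search() with no skip_move test at all:

```python
INF = 10**9
MOVE_NONE = None

def gen_moves(pos):              # toy game: a position is an integer,
    return [1, 2, 3]             # a move adds 1..3 to it

def make(pos, move):
    return pos + move

def evaluate(pos):
    return pos % 7 - 3           # arbitrary toy evaluation

def search(pos, depth, alpha, beta):
    # Lean general search: no skip_move parameter, so no redundant
    # per-move test in the vast majority of ordinary nodes.
    if depth == 0:
        return evaluate(pos)
    best = -INF
    for move in gen_moves(pos):
        score = -search(make(pos, move), depth - 1, -beta, -alpha)
        if score > best:
            best = score
            alpha = max(alpha, best)
            if alpha >= beta:
                break
    return best

def pruning_search(pos, depth, alpha, beta, skip_move):
    # Dedicated verification routine (for pruning / singular extension):
    # excludes skip_move at THIS node only, then recurses into the lean
    # search() above.
    if depth == 0:
        return evaluate(pos)
    best = -INF
    for move in gen_moves(pos):
        if move == skip_move:
            continue
        score = -search(make(pos, move), depth - 1, -beta, -alpha)
        if score > best:
            best = score
            alpha = max(alpha, best)
            if alpha >= beta:
                break
    return best
```

With skip_move equal to MOVE_NONE the two routines search the identical tree, which is exactly why carrying the parameter through the general search is pure overhead.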

Bert

Ed Gilbert
Posts: 790
Joined: Sat Apr 28, 2007 14:53
Real name: Ed Gilbert
Location: Morristown, NJ USA
Contact:

Re: Flits self-learning mode

Post by Ed Gilbert » Sat May 23, 2020 19:08

Well, I see I picked a good day to go kayaking, instead of reading the messages on the Forum :-)

Rein, to answer your question about TDam: yes, I think Klaas will soon be releasing a version that interfaces to kingsrow-hub.
The gradient descent code itself is probably best done using a professional library.
Rein, I'm sure you know much more about this than I do, but from my point of view it was easier to write my own. I looked at a few of the big packages, and I was intimidated by what looked like a non-trivial exercise to figure out how to use them. I was not familiar with some of the terminology. In the past some of my biggest headaches have come from trying to understand why some big black-box software doesn't do what I want it to, like the GDI interface in Windows and some other Windows APIs that can be hard to use and are not always well documented. And gradient descent seemed simple enough. In retrospect it was a little more difficult than I first estimated, but I still think I made the right decision. What difficulties I had were primarily getting the right cost function and finding the right way to create training games. My optimizer can converge on all the weights using 200M training positions in about 1 hour, using 8 threads. Would TensorFlow be able to do that? I don't know. But at least I know what my code is doing, and I don't have to fight with it.

I can also observe that Fabien wrote his own GD optimizer rather than use an existing library, and he seems to have a lot of experience with ML. He calls it a "rite of passage", and he may be right. If these ML libraries were the easiest way to create eval weights, why is it that no draughts programmer has used them yet?

That's all I'm going to say for the moment. I don't know if I will weigh in any further on this current controversy, I will re-read the messages and think about them for a while.

Joost Buijs
Posts: 320
Joined: Wed May 04, 2016 11:45
Real name: Joost Buijs

Re: Flits self-learning mode

Post by Joost Buijs » Sat May 23, 2020 19:30

Rein Halbersma wrote:
Sat May 23, 2020 15:32

If someone sends me a tool (binary or source) then I think permission is implied. So if that tool can parse a file of positions into a table of features and do gradient descent on it to get a bunch of feature weights, I would certainly use it to see what comes out of it. Using the source to reverse engineer the binary weight file in order to read in this weight file is not something I would ever do (this is what Joost did with Scan).
First of all, it was never a secret that (out of curiosity) I did some tests with Scan's evaluation function, as I told several people on the forum, and I never stated anywhere that I had generated my own weights, so I assumed everybody knew. This has nothing to do with reverse engineering anything; the only difference was that (because it is more cache-friendly) I stored the midgame and endgame values in a struct array instead of two separate tables, and if I remember well, Scan used big-endian.

It was my intention to replace these weights with something of my own and to use totally different patterns from the beginning. Logistic regression is not rocket science, and I was pretty sure that I would be able to replace these weights within one or two months, since I'm already retired and can work on it all day long.

If I had known beforehand that Fabien would make such a fuss about it, I would have done things differently; but even so, he should have contacted me personally before mobilizing all kinds of people against me, including Jaap Bus.

For me programming is fun. I would never have any objection if somebody used my source or my data; I assume it is just another way of thinking. I don't seek recognition for these kinds of things.

Anyway, at that time it was a personal challenge for me to get LR working for draughts. After a few months of trial and error I got good results with a set of 4 million positions extracted from 200k games. After this I decided to stop with it completely; the atmosphere in the community was not exactly inviting me to continue, and I more or less lost interest because of this.

Post Reply