Book learning

There is some confusion about book learning and the best settings for the opening book. In this article we try to clarify things a bit.

First of all, book learning is only supported since update 1 of ChessPartner 5.0. To verify the version, click Help -> About; in the list of version numbers, CP5.EXE and the file bookman.dll should both be at the update 1 version or higher.

The book learning discussed here only applies to the GUI books and NOT to the engine books. This is mainly of importance for owners of the Rebel Tiger II engine as this engine has its own book learning.

To enable book learning select Extra -> Options, then select the Books page:


The relevant settings are:

Use GUI books - Must be checked
Move distribution - By learned scores
Variation - This controls how varied the book is
Learning strength - Controls how much learning affects the move selection

Now a very technical explanation follows.

The book learning data is stored in standard ChessPartner format books; the books simply contain the win/loss statistics. These learning data books are created or updated at the end of a game. Two books are created for each engine, one for white and one for black. The books are stored in the installation directory and are named after the chess engine, e.g.:

"Lokasoft standard Chess Engine - LCHESS"
"Lokasoft standard Chess Engine - LCHESS"

Learning data is only updated when:
- It was a serious game, without take backs or engine swaps.
- It was played at a tournament level, or at least time constrained.
- The game was lost, OR
- The game was won after more than 50 moves, OR
- The rating difference was less than 200 points (if known).
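The conditions above can be sketched as a single predicate. This is a minimal sketch based on the description, assuming the first two conditions must both hold and any one of the remaining three suffices; the function and parameter names are my own, not ChessPartner's.

```python
def should_update_learning(serious_game, time_constrained,
                           game_lost, game_won, move_count,
                           rating_diff=None):
    """Return True when the learning books should be updated
    after a game, per the rules described above (assumed logic)."""
    # Both preconditions must hold: a serious, time-constrained game.
    if not (serious_game and time_constrained):
        return False
    # Any one of the remaining conditions is enough.
    if game_lost:
        return True
    if game_won and move_count > 50:
        return True
    if rating_diff is not None and rating_diff < 200:
        return True
    return False
```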

The use of the learning data is straightforward: at the beginning of a game the proper learning data book is selected, and ChessPartner then simply uses the win/loss statistics to update the scores of the selected books.

Clearing the learned data is also straightforward: simply delete the .bkl files.

The following formula is used to give a score to a learned move:

lscore = LearningStrength/100 * (Wins/(Wins+Losses)-Losses/(Wins+Losses)) * LearningConstant

The LearningStrength has a value between 0 and 100 and can be set by the user; its default is 50.

The LearningConstant is currently set to 200, which means the lscore can have a value between -200 and +200.
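The learned-score formula can be written as a small function. This is a sketch following the formula above; the function name is my own, and division by zero is guarded for moves with no recorded games (an assumption, since the text does not cover that case).

```python
LEARNING_CONSTANT = 200  # fixed in ChessPartner, per the text

def lscore(wins, losses, learning_strength=50):
    """Learned score for a move; learning_strength is 0-100 (default 50)."""
    total = wins + losses
    if total == 0:
        return 0.0  # no statistics yet: assume a neutral learned score
    return (learning_strength / 100
            * (wins / total - losses / total)
            * LEARNING_CONSTANT)
```

For example, a move with one loss and no wins at the default strength scores 0.5 * (0 - 1) * 200 = -100.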

The base score of a move is:

32 * Priority

The Priority of a move is 0 when it will never be played and 5 for the best move to play; the standard priority is 3.

The score used in selection is then:

mscore = 32 * Priority + lscore

After all scores are assigned, they are shifted so that the lowest score equals a positive constant that controls the variation; the default variation constant is 50, and the user can set it between 0 and 100. The final selection of a move is directly proportional to its mscore.

The probability of a move being played is: mscore / sum(mscore of moves)
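The scoring, adjustment, and selection steps above can be sketched in a few lines. This is a minimal sketch of the described pipeline; the function names are my own, not ChessPartner's.

```python
def move_scores(priorities, lscores, variation=50):
    """mscore = 32 * priority + lscore, then shift all scores so the
    lowest one equals the variation constant (0-100, default 50)."""
    raw = [32 * p + ls for p, ls in zip(priorities, lscores)]
    shift = variation - min(raw)
    return [s + shift for s in raw]

def probabilities(mscores):
    """Each move is played with probability mscore / sum(mscores)."""
    total = sum(mscores)
    return [m / total for m in mscores]
```

With four priority-3 moves where the first has a learned score of -100, this yields adjusted scores of 50, 150, 150, 150 and a 10% chance for the first move.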

An example:

Suppose there are 4 moves, all with a priority of 3.

The mscore for each move is now: 32 * 3 + 0 = 96

All moves are equal so they all have a 25% chance of being played.

Now suppose the first move lost one game, the mscore for the first move becomes:

( 32 * 3 + 0 ) + ( 50/100 * ( 0 - 1 ) * 200 ) = 96 - 100 = -4

move1 = -4
move2 = 96
move3 = 96
move4 = 96

After adjustment the scores are:

50, 150, 150, 150

The probability of the first move being played is then: 50/500 = 10 %
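The arithmetic of this example can be checked directly in plain Python, following the formulas from the text (the variable names are my own):

```python
priorities = [3, 3, 3, 3]
# First move lost its only game: 50/100 * (0 - 1) * 200 = -100.
lscores = [50 / 100 * (0 - 1) * 200, 0, 0, 0]

# Base score plus learned score per move.
raw = [32 * p + ls for p, ls in zip(priorities, lscores)]

# Shift so the lowest score equals the variation constant (50).
shift = 50 - min(raw)
adjusted = [s + shift for s in raw]

# Probability of the first move being played.
prob_first = adjusted[0] / sum(adjusted)
```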

There is one potential problem with this system: if a series of games is played against a weak opponent, a move accumulates many wins, and it will then take many lost games before the score of that move starts to drop.

In automated matches, such as with aut232 or ERT, it is probably best to reset the learning at the start of the match.

Another interesting option is to preset the learning by importing a database of played games, but this is something for the future.

What are the best settings for Variation and Learning strength?

This is difficult to say; it is best to experiment. If you set the Learning strength to 0%, no learning takes place and only the scores as defined in the books are used; setting it all the way to 100% means fewer games are needed to have an effect on the learned scores.

Variation has a different effect: setting it to 0% pushes the learned scores further apart, which means that moves with a higher learned score have a relatively higher probability of being played.

You can see the effect of the sliders in the book moves window; after changing a setting, the scores are updated automatically.


One final note: the learning data files are always updated, even if the Learning strength slider is at 0%. The sliders only control how the data is used.

Rebel Tiger II engine book learning

The Rebel Tiger engine has its own book learning, but as this engine is not produced by Lokasoft we do not have full information about it. We do know that it stores its learning information in a file called ht.ini.