refact(docs): clean docs

Henri Bourcereau 2026-04-08 16:18:08 +02:00
parent 3b9a1277d8
commit a0e3cf5f19
16 changed files with 2 additions and 897 deletions


@@ -1,51 +0,0 @@
# Game state notation
## History
- Jollyvet: none.
- 1698, Le jeu de trictrac...:
  Black: T 1 2 .. 11
  White: T 1 2 .. 11
- 1738, Le Grand Trictrac, Bernard Laurent Soumille:
  A B C D E F G H I K L M
  & Z Y X V T S R Q P O N
- 1816, Guiton:
  Black: T 1 2 .. 11
  White: T 1 2 .. 11
- 1818, Cours complet de Trictrac, Pierre Marie Michel Lepeintre:
  m n o p q r s t u v x y
  l k j i h g f e d c b a
- 1852, Le jeu de trictrac rendu facile:
  Black: T 1 2 .. 11
  White: T 1 2 .. 11
## Current references
- https://salondesjeux.fr/trictrac.htm (Guiton):
  Black: T 1 2 .. 11
  White: T 1 2 .. 11
- http://trictrac.org/content/index2.html:
  N1 N2 .. N12
  B1 B2 .. B12
- Backgammon:
  13 14 .. 23 24
  12 11 .. 2 1
=> Adopt backgammon notation: one uniform notation whatever the tables game.
Not without advantages:
- the special notation for the talon goes away
- no more confusion between the black and white sides
- although the orientation changes compared with most treatises, it follows Lepeintre's, and that of Philippe Lalanne's videos
Backgammon notation : https://nymann.dev/2023/05/16/Introducing-the-Backgammon-Position-Notation-BPN-A-Standard-for-Representing-Game-State/
GnuBg : https://www.gnu.org/software/gnubg/manual/html_node/A-technical-description-of-the-Position-ID.html
- Rust implementation: https://github.com/bungogood/bkgm/blob/main/src/position.rs
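With backgammon numbering, both players count the same 24 physical points, each from their own side, so the separate black/white rows above collapse into one mirror relation. A minimal sketch (the function name is hypothetical):

```rust
// Backgammon-style numbering: each player labels the points 1..24
// starting from their own bear-off side. The opponent's label for
// the same physical point is its mirror: p <-> 25 - p.
fn mirror_point(p: u8) -> u8 {
    assert!((1..=24).contains(&p), "point out of range");
    25 - p
}

fn main() {
    assert_eq!(mirror_point(1), 24); // my ace point is their 24-point
    assert_eq!(mirror_point(13), 12); // midpoint mirrors to midpoint
    println!("ok");
}
```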


@@ -1,9 +0,0 @@
# Why trictrac
I will show why trictrac deserves to be rediscovered, and why it could become your new favorite game.
I will show why it is interesting, and complex for humans and machines alike.
How it differs from backgammon in advantageous ways. Why it has almost disappeared.
Backgammon was already known in trictrac's heyday. It lacked only the doubling cube, which appeared in the early 20th century and added meta-level decisions, something trictrac already offered on several levels (via the handling of bredouilles at the end of a game, for example).
The randomness of the dice lets the weaker player keep hoping for victory even late in the game (cf. Le tour du monde en 80 jeux).


@@ -1,40 +0,0 @@
# Treatise
In 12 chapters (holes) of 12 sub-chapters (points / levels of understanding)?
Celebration -> take inspiration from the _petit traité invitant à la découverte de l'art subtil du go_
chess comparison -> backgammon comparison
The rules
- the equipment
- movement
- points
- schools
- combinations
Strategy
- probabilities
- decision trees
- training
The encyclopedia
- comparison with other games
- chess/go?
- history
- treatises
- vocabulary
- aesthetics
- etiquette
- web resources
- wikipedia
- l'encyclopédie des jeux + YouTube videos
- le dictionnaire du trictrac
- building a game box/board
- playing online
## Reveries
Trictrac: a large and complex domain, a universe one can become absorbed in. A perfect geek game. A game with the nobility of having been popular, and the nobility of rules that demand apprenticeship, almost compagnonnage.
Why invest in this kind of activity? To touch an absolute. To save a gem of the past from death. To enter the world of the dead like Orpheus or Ulysses?
And to master a vocabulary, gestures, and rules refined over the centuries.


@@ -1,130 +0,0 @@
# Inspirations
tools
- clippy config?
- bacon: test runner (or loom?)
## Rust libs
cf. <https://blessed.rs/crates>
seeded random numbers: <https://richard.dallaway.com/posts/2021-01-04-repeat-resume/>
- cli: <https://lib.rs/crates/pico-args> (or clap)
- async networking: tokio
- web server: axum (uses tokio)
- <https://fasterthanli.me/series/updating-fasterthanli-me-for-2022/part-2#the-opinions-of-axum-also-nice-error-handling>
- db : sqlx
- eyre, color-eyre (Results)
- tracing (logging)
- rayon (sync <-> parallel)
- front end: yew + tauri
- egui
- <https://docs.rs/board-game/latest/board_game/>
## network games
- <https://www.mattkeeter.com/projects/pont/>
- <https://github.com/jackadamson/onitama> (wasm, rooms)
- <https://github.com/UkoeHB/renet2>
- <https://github.com/UkoeHB/bevy_simplenet>
## Others
- plugins with <https://github.com/extism/extism>
## Backgammon existing projects
- Go: <https://bgammon.org/blog/20240101-hello-world/>
- communication protocol: <https://code.rocket9labs.com/tslocum/bgammon/src/branch/main/PROTOCOL.md>
- OCaml: <https://github.com/jacobhilton/backgammon?tab=readme-ov-file>
cli example: <https://www.jacobh.co.uk/backgammon/>
- Rust backgammon libraries
- <https://github.com/carlostrub/backgammon>
- <https://github.com/marktani/backgammon>
- network: webtarot
- front end?
## cli examples
### GnuBackgammon
```
(No game) new game
gnubg rolls 3, anthon rolls 1.
GNU Backgammon Positions ID: 4HPwATDgc/ABMA
Match ID : MIEFAAAAAAAA
+12-11-10--9--8--7-------6--5--4--3--2--1-+ O: gnubg
| X O | | O X | 0 points
| X O | | O X | Rolled 31
| X O | | O |
| X | | O |
| X | | O |
^| |BAR| | (Cube: 1)
| O | | X |
| O | | X |
| O X | | X |
| O X | | X O |
| O X | | X O | 0 points
+13-14-15-16-17-18------19-20-21-22-23-24-+ X: anthon
gnubg moves 8/5 6/5.
```
### jacobh
```
Move 11: player O rolls a 6-2.
Player O estimates that they have a 90.6111% chance of winning.
Os borne off: none
24 23 22 21 20 19 18 17 16 15 14 13
---
| v v v v v v | | v v v v v v |
| | | |
| X O O O | | O O O |
| X O O O | | O O |
| O | | |
| | X | |
| | | |
| | | |
| | | |
| | | |
|------------------------------| |------------------------------|
| | | |
| | | |
| | | |
| | | |
| X | | |
| X X | | X |
| X X X | | X O |
| X X X | | X O O |
| | | |
| ^ ^ ^ ^ ^ ^ | | ^ ^ ^ ^ ^ ^ |
---
1 2 3 4 5 6 7 8 9 10 11 12
Xs borne off: none
Move 12: player X rolls a 6-3.
Your move (? for help): bar/22
Illegal move: it is possible to move more.
Your move (? for help): ?
Enter the start and end positions, separated by a forward slash (or any non-numeric character), of each counter you want to move.
Each position should be number from 1 to 24, "bar" or "off".
Unlike in standard notation, you should enter each counter movement individually. For example:
24/18 18/13
bar/3 13/10 13/10 8/5
2/off 1/off
You can also enter these commands:
p - show the previous move
n - show the next move
<enter> - toggle between showing the current and last moves
help - show this help text
quit - abandon game
```


@@ -1,61 +0,0 @@
# Journal
```sh
devenv init
cargo init
cargo add pico-args
```
Store / server / client organisation following <https://herluf-ba.github.io/making-a-turn-based-multiplayer-game-in-rust-01-whats-a-turn-based-game-anyway>
_store_ is the library containing the _reducer_ that transforms the game state according to events. It is used by the _server_ and the _client_. Only events are exchanged between clients and the server.
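The reducer idea can be sketched as follows (a minimal sketch with hypothetical event names; the real store exposes game::GameState and richer events): both server and client apply the same events to the same state, so only events need to travel over the network.

```rust
// Hypothetical events; the real store's events are richer.
#[derive(Clone, Debug, PartialEq)]
enum GameEvent {
    RollDice { values: (u8, u8) },
    MoveChecker { from: u8, to: u8 },
}

#[derive(Default, Debug)]
struct GameState {
    dice: Option<(u8, u8)>,
    history: Vec<GameEvent>,
}

impl GameState {
    // The reducer: consume an event, produce the next state.
    // Server and client run the same function, so they stay in sync.
    fn reduce(mut self, event: GameEvent) -> GameState {
        match &event {
            GameEvent::RollDice { values } => self.dice = Some(*values),
            GameEvent::MoveChecker { .. } => self.dice = None,
        }
        self.history.push(event);
        self
    }
}

fn main() {
    let state = GameState::default()
        .reduce(GameEvent::RollDice { values: (3, 1) })
        .reduce(GameEvent::MoveChecker { from: 24, to: 21 });
    assert_eq!(state.history.len(), 2);
    println!("ok");
}
```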
## Configuring neovim debugger launchers
This lives in the neovim config (lua/plugins/overrides.lua)
## Store organisation
lib
- game::GameState
- error
- dice
- board
- user
## Move determination algorithm
- strategy::choose_move
- GameRules.get_possible_moves_sequences(with_excedents: bool)
- get_possible_moves_sequences_by_dices(dice_max, dice_min, with_excedents, false);
- get_possible_moves_sequences_by_dices(dice_min, dice_max, with_excedents, true);
- has_checkers_outside_last_quarter() ok
- board.get_possible_moves ok
- check_corner_rules(&(first_move, second_move)) ok
- handle_event
- state.validate (ok)
- rules.moves_follow_rules (ok)
- moves_possible ok
- moves_follows_dices ok
- moves_allowed (ok)
- check_corner_rules ok
- can_take_corner_by_effect ok
- get_possible_moves_sequences -> cf. l.15
- check_exit_rules
- get_possible_moves_sequences(without excedents) -> cf l.15
- get_quarter_filling_moves_sequences
- get_possible_moves_sequences -> cf l.15
- state.consume (RollResult) (ok)
- get_rollresult_jans -> points_rules.get_result_jans (ok)
- get_jans (ok)
- get_jans_by_ordered_dice (ok)
- get_jans_by_ordered_dice ( dices.poped )
- move_rules.get_scoring_quarter_filling_moves_sequences (ok)
- get_quarter_filling_moves_sequences cf l.8 (ok)
- board.get_quarter_filling_candidate -> is_quarter_fillable ok
- move_rules.get_possible_moves_sequence -> cf l.15
- get_jans_points -> jan.get_points ok
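The call tree above can be condensed into a skeleton (function names taken from the notes, bodies stubbed and hypothetical): choose_move asks the rules for every legal move sequence for the current dice, then picks one.

```rust
type Move = (u8, u8); // (from, to), hypothetical representation

// Stub: the real version walks the board for the (max, min) and
// (min, max) dice orders and applies corner/exit/filling rules.
fn get_possible_moves_sequences(dice: (u8, u8)) -> Vec<Vec<Move>> {
    let first = 24 - dice.0;
    vec![vec![(24, first), (first, first - dice.1)]]
}

// Stub of strategy::choose_move: pick the first legal sequence.
fn choose_move(dice: (u8, u8)) -> Option<Vec<Move>> {
    get_possible_moves_sequences(dice).into_iter().next()
}

fn main() {
    let seq = choose_move((3, 1)).expect("at least one sequence");
    assert_eq!(seq.len(), 2); // one move per die
    println!("ok");
}
```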


@@ -1,417 +0,0 @@
# Outputs
## 50 episodes - 1000 steps max - desktop
{"episode": 0, "reward": -1798.7162, "steps count": 1000, "duration": 11}
{"episode": 1, "reward": -1794.8162, "steps count": 1000, "duration": 32}
{"episode": 2, "reward": -1387.7109, "steps count": 1000, "duration": 58}
{"episode": 3, "reward": -42.5005, "steps count": 1000, "duration": 82}
{"episode": 4, "reward": -48.2005, "steps count": 1000, "duration": 109}
{"episode": 5, "reward": 1.2000, "steps count": 1000, "duration": 141}
{"episode": 6, "reward": 8.8000, "steps count": 1000, "duration": 184}
{"episode": 7, "reward": 6.9002, "steps count": 1000, "duration": 219}
{"episode": 8, "reward": 16.5001, "steps count": 1000, "duration": 248}
{"episode": 9, "reward": -2.6000, "steps count": 1000, "duration": 281}
{"episode": 10, "reward": 3.0999, "steps count": 1000, "duration": 324}
{"episode": 11, "reward": -34.7004, "steps count": 1000, "duration": 497}
{"episode": 12, "reward": -15.7998, "steps count": 1000, "duration": 466}
{"episode": 13, "reward": 6.9000, "steps count": 1000, "duration": 496}
{"episode": 14, "reward": 6.3000, "steps count": 1000, "duration": 540}
{"episode": 15, "reward": -2.6000, "steps count": 1000, "duration": 581}
{"episode": 16, "reward": -33.0003, "steps count": 1000, "duration": 641}
{"episode": 17, "reward": -36.8000, "steps count": 1000, "duration": 665}
{"episode": 18, "reward": -10.1997, "steps count": 1000, "duration": 753}
{"episode": 19, "reward": -88.1014, "steps count": 1000, "duration": 837}
{"episode": 20, "reward": -57.5002, "steps count": 1000, "duration": 881}
{"episode": 21, "reward": -17.7997, "steps count": 1000, "duration": 1159}
{"episode": 22, "reward": -25.4000, "steps count": 1000, "duration": 1235}
{"episode": 23, "reward": -104.4013, "steps count": 995, "duration": 1290}
{"episode": 24, "reward": -268.6004, "steps count": 1000, "duration": 1322}
{"episode": 25, "reward": -743.6052, "steps count": 1000, "duration": 1398}
{"episode": 26, "reward": -821.5029, "steps count": 1000, "duration": 1427}
{"episode": 27, "reward": -211.5993, "steps count": 1000, "duration": 1409}
{"episode": 28, "reward": -276.1974, "steps count": 1000, "duration": 1463}
{"episode": 29, "reward": -222.9980, "steps count": 1000, "duration": 1509}
{"episode": 30, "reward": -298.9973, "steps count": 1000, "duration": 1560}
{"episode": 31, "reward": -164.0011, "steps count": 1000, "duration": 1752}
{"episode": 32, "reward": -221.0990, "steps count": 1000, "duration": 1807}
{"episode": 33, "reward": -260.9996, "steps count": 1000, "duration": 1730}
{"episode": 34, "reward": -420.5959, "steps count": 1000, "duration": 1767}
{"episode": 35, "reward": -407.2964, "steps count": 1000, "duration": 1815}
{"episode": 36, "reward": -291.2966, "steps count": 1000, "duration": 1870}
thread 'main' has overflowed its stack
fatal runtime error: stack overflow, aborting
error: Recipe `trainbot` was terminated on line 24 by signal 6
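A possible workaround for the recurring "thread 'main' has overflowed its stack" abort is to run the training loop on a thread with an explicitly larger stack (standard library API; whether it suffices here is an assumption, since a runaway recursion would still overflow eventually):

```rust
use std::thread;

fn main() {
    // Spawn the training loop on a thread with a 64 MiB stack
    // instead of main's default-sized stack.
    let handle = thread::Builder::new()
        .name("trainbot".into())
        .stack_size(64 * 1024 * 1024)
        .spawn(|| {
            // train() would go here; return a placeholder result.
            42
        })
        .expect("failed to spawn training thread");
    assert_eq!(handle.join().unwrap(), 42);
    println!("ok");
}
```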
## 50 episodes - 700 steps max - desktop
const MEMORY_SIZE: usize = 4096;
const DENSE_SIZE: usize = 128;
const EPS_DECAY: f64 = 1000.0;
const EPS_START: f64 = 0.9;
const EPS_END: f64 = 0.05;
> Training
> {"episode": 0, "reward": -862.8993, "steps count": 700, "duration": 6}
> {"episode": 1, "reward": -418.8971, "steps count": 700, "duration": 13}
> {"episode": 2, "reward": -64.9999, "steps count": 453, "duration": 14}
> {"episode": 3, "reward": -142.8002, "steps count": 700, "duration": 31}
> {"episode": 4, "reward": -74.4004, "steps count": 700, "duration": 45}
> {"episode": 5, "reward": -40.2002, "steps count": 700, "duration": 58}
> {"episode": 6, "reward": -21.1998, "steps count": 700, "duration": 70}
> {"episode": 7, "reward": 99.7000, "steps count": 642, "duration": 79}
> {"episode": 8, "reward": -5.9999, "steps count": 700, "duration": 99}
> {"episode": 9, "reward": -7.8999, "steps count": 700, "duration": 118}
> {"episode": 10, "reward": 92.5000, "steps count": 624, "duration": 117}
> {"episode": 11, "reward": -17.1998, "steps count": 700, "duration": 144}
> {"episode": 12, "reward": 1.7000, "steps count": 700, "duration": 157}
> {"episode": 13, "reward": -7.9000, "steps count": 700, "duration": 172}
> {"episode": 14, "reward": -7.9000, "steps count": 700, "duration": 196}
> {"episode": 15, "reward": -2.8000, "steps count": 700, "duration": 214}
> {"episode": 16, "reward": 16.8002, "steps count": 700, "duration": 250}
> {"episode": 17, "reward": -47.7001, "steps count": 700, "duration": 272}
> {"episode": 18, "reward": -13.6000, "steps count": 700, "duration": 288}
> {"episode": 19, "reward": -79.9002, "steps count": 700, "duration": 304}
> {"episode": 20, "reward": -355.5985, "steps count": 700, "duration": 317}
> {"episode": 21, "reward": -205.5001, "steps count": 700, "duration": 333}
> {"episode": 22, "reward": -207.3974, "steps count": 700, "duration": 348}
> {"episode": 23, "reward": -161.7999, "steps count": 700, "duration": 367}
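The EPS_* constants in these runs typically drive exploration like this, assuming the standard exponential epsilon-greedy decay (the project's exact formula is not shown in these notes): epsilon starts at EPS_START, decays toward EPS_END, and EPS_DECAY sets how many steps the decay takes.

```rust
const EPS_START: f64 = 0.9; // initial exploration rate
const EPS_END: f64 = 0.05; // floor exploration rate
const EPS_DECAY: f64 = 1000.0; // decay time constant, in steps

// Assumed decay schedule: exponential interpolation between the two.
fn epsilon(step: u64) -> f64 {
    EPS_END + (EPS_START - EPS_END) * (-(step as f64) / EPS_DECAY).exp()
}

fn main() {
    assert!((epsilon(0) - 0.9).abs() < 1e-9); // starts at EPS_START
    assert!(epsilon(1_000) < epsilon(0)); // monotonically decreasing
    assert!((epsilon(100_000) - 0.05).abs() < 1e-6); // settles at EPS_END
    println!("ok");
}
```

With EPS_DECAY = 10000.0 (as in the next run), the agent explores for roughly ten times longer before exploiting.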
---
const MEMORY_SIZE: usize = 8192;
const DENSE_SIZE: usize = 128;
const EPS_DECAY: f64 = 10000.0;
const EPS_START: f64 = 0.9;
const EPS_END: f64 = 0.05;
> Training
> {"episode": 0, "reward": -1119.9921, "steps count": 700, "duration": 6}
> {"episode": 1, "reward": -928.6963, "steps count": 700, "duration": 13}
> {"episode": 2, "reward": -364.5009, "steps count": 380, "duration": 11}
> {"episode": 3, "reward": -797.5981, "steps count": 700, "duration": 28}
> {"episode": 4, "reward": -577.5994, "steps count": 599, "duration": 34}
> {"episode": 5, "reward": -725.2992, "steps count": 700, "duration": 49}
> {"episode": 6, "reward": -638.8995, "steps count": 700, "duration": 59}
> {"episode": 7, "reward": -1039.1932, "steps count": 700, "duration": 73}
> field invalid : White, 3, Board { positions: [13, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1, -1, -2, 0, -11] }
thread 'main' panicked at store/src/game.rs:556:65:
called `Result::unwrap()` on an `Err` value: FieldInvalid
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
error: Recipe `trainbot` failed on line 27 with exit code 101
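The FieldInvalid panic above comes from an unwrap() at store/src/game.rs:556; a minimal sketch, with hypothetical names, of propagating the error instead so a bad board state could end the episode rather than abort the process:

```rust
#[derive(Debug)]
enum Error {
    FieldInvalid, // mirrors the variant named in the panic message
}

// Hypothetical stand-in for the failing call: reject invalid fields
// via a Result instead of panicking deep inside the training loop.
fn apply_move(field: i8) -> Result<(), Error> {
    if field < 0 {
        return Err(Error::FieldInvalid);
    }
    Ok(())
}

fn main() {
    // Instead of apply_move(-3).unwrap(), handle the Err variant:
    match apply_move(-3) {
        Ok(()) => println!("moved"),
        Err(e) => println!("ending episode on invalid move: {:?}", e),
    }
}
```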
---
#[allow(unused)]
const MEMORY_SIZE: usize = 8192;
const DENSE_SIZE: usize = 256;
const EPS_DECAY: f64 = 10000.0;
const EPS_START: f64 = 0.9;
const EPS_END: f64 = 0.05;
> Training
> {"episode": 0, "reward": -1102.6925, "steps count": 700, "duration": 9}
> field invalid : White, 6, Board { positions: [14, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1, -1, 0, 0, -13] }
thread 'main' panicked at store/src/game.rs:556:65:
called `Result::unwrap()` on an `Err` value: FieldInvalid
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
error: Recipe `trainbot` failed on line 27 with exit code 101
---
const MEMORY_SIZE: usize = 8192;
const DENSE_SIZE: usize = 256;
const EPS_DECAY: f64 = 1000.0;
const EPS_START: f64 = 0.9;
const EPS_END: f64 = 0.05;
> Training
> {"episode": 0, "reward": -1116.2921, "steps count": 700, "duration": 9}
> {"episode": 1, "reward": -1116.2922, "steps count": 700, "duration": 18}
> {"episode": 2, "reward": -1119.9921, "steps count": 700, "duration": 29}
> {"episode": 3, "reward": -1089.1927, "steps count": 700, "duration": 41}
> {"episode": 4, "reward": -1116.2921, "steps count": 700, "duration": 53}
> {"episode": 5, "reward": -684.8043, "steps count": 700, "duration": 66}
> {"episode": 6, "reward": 0.3000, "steps count": 700, "duration": 80}
> {"episode": 7, "reward": 2.0000, "steps count": 700, "duration": 96}
> {"episode": 8, "reward": 30.9001, "steps count": 700, "duration": 112}
> {"episode": 9, "reward": 0.3000, "steps count": 700, "duration": 128}
> {"episode": 10, "reward": 0.3000, "steps count": 700, "duration": 141}
> {"episode": 11, "reward": 8.8000, "steps count": 700, "duration": 155}
> {"episode": 12, "reward": 7.1000, "steps count": 700, "duration": 169}
> {"episode": 13, "reward": 17.3001, "steps count": 700, "duration": 190}
> {"episode": 14, "reward": -107.9005, "steps count": 700, "duration": 210}
> {"episode": 15, "reward": 7.1001, "steps count": 700, "duration": 236}
> {"episode": 16, "reward": 17.3001, "steps count": 700, "duration": 268}
> {"episode": 17, "reward": 7.1000, "steps count": 700, "duration": 283}
> {"episode": 18, "reward": -5.9000, "steps count": 700, "duration": 300}
> {"episode": 19, "reward": -36.8009, "steps count": 700, "duration": 316}
> {"episode": 20, "reward": 19.0001, "steps count": 700, "duration": 332}
> {"episode": 21, "reward": 113.3000, "steps count": 461, "duration": 227}
> field invalid : White, 1, Board { positions: [0, 2, 2, 0, 2, 4, 5, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -3, -7, -2, -1, 0, -1, -1] }
thread 'main' panicked at store/src/game.rs:556:65:
called `Result::unwrap()` on an `Err` value: FieldInvalid
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
error: Recipe `trainbot` failed on line 27 with exit code 101
---
num_episodes: 50,
// memory_size: 8192, // must be set in dqn_model.rs with the MEMORY_SIZE constant
// max_steps: 700, // must be set in environment.rs with the MAX_STEPS constant
dense_size: 256, // neural network complexity
eps_start: 0.9, // epsilon initial value (0.9 => more exploration)
eps_end: 0.05,
eps_decay: 1000.0,
> Training
> {"episode": 0, "reward": -1118.8921, "steps count": 700, "duration": 9}
> {"episode": 1, "reward": -1119.9921, "steps count": 700, "duration": 17}
> {"episode": 2, "reward": -1118.8921, "steps count": 700, "duration": 28}
> {"episode": 3, "reward": -283.5977, "steps count": 700, "duration": 41}
> {"episode": 4, "reward": -23.4998, "steps count": 700, "duration": 54}
> {"episode": 5, "reward": -31.9999, "steps count": 700, "duration": 68}
> {"episode": 6, "reward": 2.0000, "steps count": 700, "duration": 82}
> {"episode": 7, "reward": 109.3000, "steps count": 192, "duration": 26}
> {"episode": 8, "reward": -4.8000, "steps count": 700, "duration": 102}
> {"episode": 9, "reward": 15.6001, "steps count": 700, "duration": 124}
> {"episode": 10, "reward": 15.6002, "steps count": 700, "duration": 144}
> {"episode": 11, "reward": -65.7008, "steps count": 700, "duration": 162}
> {"episode": 12, "reward": 19.0002, "steps count": 700, "duration": 182}
> {"episode": 13, "reward": 20.7001, "steps count": 700, "duration": 197}
> {"episode": 14, "reward": 12.2002, "steps count": 700, "duration": 229}
> {"episode": 15, "reward": -32.0007, "steps count": 700, "duration": 242}
> {"episode": 16, "reward": 10.5000, "steps count": 700, "duration": 287}
> {"episode": 17, "reward": 24.1001, "steps count": 700, "duration": 318}
> {"episode": 18, "reward": 25.8002, "steps count": 700, "duration": 335}
> {"episode": 19, "reward": 29.2001, "steps count": 700, "duration": 367}
> {"episode": 20, "reward": 9.1000, "steps count": 700, "duration": 366}
> {"episode": 21, "reward": 3.7001, "steps count": 700, "duration": 398}
> {"episode": 22, "reward": 10.5000, "steps count": 700, "duration": 417}
> {"episode": 23, "reward": 10.5000, "steps count": 700, "duration": 438}
> {"episode": 24, "reward": 13.9000, "steps count": 700, "duration": 444}
> {"episode": 25, "reward": 7.1000, "steps count": 700, "duration": 486}
> {"episode": 26, "reward": 12.2001, "steps count": 700, "duration": 499}
> {"episode": 27, "reward": 8.8001, "steps count": 700, "duration": 554}
> {"episode": 28, "reward": -6.5000, "steps count": 700, "duration": 608}
> {"episode": 29, "reward": -3.1000, "steps count": 700, "duration": 633}
> {"episode": 30, "reward": -32.0001, "steps count": 700, "duration": 696}
> {"episode": 31, "reward": 22.4002, "steps count": 700, "duration": 843}
> {"episode": 32, "reward": -77.9004, "steps count": 700, "duration": 817}
> {"episode": 33, "reward": -368.5993, "steps count": 700, "duration": 827}
> {"episode": 34, "reward": -254.6986, "steps count": 700, "duration": 852}
> {"episode": 35, "reward": -433.1992, "steps count": 700, "duration": 884}
> {"episode": 36, "reward": -521.6010, "steps count": 700, "duration": 905}
> {"episode": 37, "reward": -71.1004, "steps count": 700, "duration": 930}
> {"episode": 38, "reward": -251.0004, "steps count": 700, "duration": 956}
> {"episode": 39, "reward": -594.7045, "steps count": 700, "duration": 982}
> {"episode": 40, "reward": -154.4001, "steps count": 700, "duration": 1008}
> {"episode": 41, "reward": -171.3994, "steps count": 700, "duration": 1033}
> {"episode": 42, "reward": -118.7004, "steps count": 700, "duration": 1059}
> {"episode": 43, "reward": -137.4003, "steps count": 700, "duration": 1087}
thread 'main' has overflowed its stack
fatal runtime error: stack overflow, aborting
error: Recipe `trainbot` was terminated on line 27 by signal 6
---
num_episodes: 40,
// memory_size: 8192, // must be set in dqn_model.rs with the MEMORY_SIZE constant
// max_steps: 1500, // must be set in environment.rs with the MAX_STEPS constant
dense_size: 256, // neural network complexity
eps_start: 0.9, // epsilon initial value (0.9 => more exploration)
eps_end: 0.05,
eps_decay: 1000.0,
> Training
> {"episode": 0, "reward": -2399.9993, "steps count": 1500, "duration": 31}
> {"episode": 1, "reward": -2061.6736, "steps count": 1500, "duration": 81}
> {"episode": 2, "reward": -48.9010, "steps count": 1500, "duration": 145}
> {"episode": 3, "reward": 3.8000, "steps count": 1500, "duration": 215}
> {"episode": 4, "reward": -6.3999, "steps count": 1500, "duration": 302}
> {"episode": 5, "reward": 20.8004, "steps count": 1500, "duration": 374}
> {"episode": 6, "reward": 49.6992, "steps count": 1500, "duration": 469}
> {"episode": 7, "reward": 29.3002, "steps count": 1500, "duration": 597}
> {"episode": 8, "reward": 34.3999, "steps count": 1500, "duration": 710}
> {"episode": 9, "reward": 115.3003, "steps count": 966, "duration": 515}
> {"episode": 10, "reward": 25.9004, "steps count": 1500, "duration": 852}
> {"episode": 11, "reward": -122.0007, "steps count": 1500, "duration": 1017}
> {"episode": 12, "reward": -274.9966, "steps count": 1500, "duration": 1073}
> {"episode": 13, "reward": 54.8994, "steps count": 651, "duration": 518}
> {"episode": 14, "reward": -439.8978, "steps count": 1500, "duration": 1244}
> {"episode": 15, "reward": -506.1997, "steps count": 1500, "duration": 1676}
> {"episode": 16, "reward": -829.5031, "steps count": 1500, "duration": 1855}
> {"episode": 17, "reward": -545.2961, "steps count": 1500, "duration": 1892}
> {"episode": 18, "reward": -795.2026, "steps count": 1500, "duration": 2008}
> {"episode": 19, "reward": -637.1031, "steps count": 1500, "duration": 2124}
> {"episode": 20, "reward": -989.6997, "steps count": 1500, "duration": 2241}
thread 'main' has overflowed its stack
fatal runtime error: stack overflow, aborting
error: Recipe `trainbot` was terminated on line 27 by signal 6
---
num_episodes: 40,
// memory_size: 8192, // must be set in dqn_model.rs with the MEMORY_SIZE constant
// max_steps: 1000, // must be set in environment.rs with the MAX_STEPS constant
dense_size: 256, // neural network complexity
eps_start: 0.9, // epsilon initial value (0.9 => more exploration)
eps_end: 0.05,
eps_decay: 10000.0,
> Training
> {"episode": 0, "reward": -1598.8848, "steps count": 1000, "duration": 16}
> {"episode": 1, "reward": -1531.9866, "steps count": 1000, "duration": 34}
> {"episode": 2, "reward": -515.6000, "steps count": 530, "duration": 25}
> {"episode": 3, "reward": -396.1008, "steps count": 441, "duration": 27}
> {"episode": 4, "reward": -540.6996, "steps count": 605, "duration": 43}
> {"episode": 5, "reward": -976.0975, "steps count": 1000, "duration": 89}
> {"episode": 6, "reward": -1014.2944, "steps count": 1000, "duration": 117}
> {"episode": 7, "reward": -806.7012, "steps count": 1000, "duration": 140}
> {"episode": 8, "reward": -1276.6891, "steps count": 1000, "duration": 166}
> {"episode": 9, "reward": -1554.3855, "steps count": 1000, "duration": 197}
> {"episode": 10, "reward": -1178.3925, "steps count": 1000, "duration": 219}
> {"episode": 11, "reward": -1457.4869, "steps count": 1000, "duration": 258}
> {"episode": 12, "reward": -1475.8882, "steps count": 1000, "duration": 291}
---
num_episodes: 40,
// memory_size: 8192, // must be set in dqn_model.rs with the MEMORY_SIZE constant
// max_steps: 1000, // must be set in environment.rs with the MAX_STEPS constant
dense_size: 256, // neural network complexity
eps_start: 0.9, // epsilon initial value (0.9 => more exploration)
eps_end: 0.05,
eps_decay: 3000.0,
> Training
> {"episode": 0, "reward": -1598.8848, "steps count": 1000, "duration": 15}
> {"episode": 1, "reward": -1599.9847, "steps count": 1000, "duration": 33}
> {"episode": 2, "reward": -751.7018, "steps count": 1000, "duration": 57}
> {"episode": 3, "reward": -402.8979, "steps count": 1000, "duration": 81}
> {"episode": 4, "reward": -289.2985, "steps count": 1000, "duration": 108}
> {"episode": 5, "reward": -231.4988, "steps count": 1000, "duration": 140}
> {"episode": 6, "reward": -138.0006, "steps count": 1000, "duration": 165}
> {"episode": 7, "reward": -145.0998, "steps count": 1000, "duration": 200}
> {"episode": 8, "reward": -60.4005, "steps count": 1000, "duration": 236}
> {"episode": 9, "reward": -35.7999, "steps count": 1000, "duration": 276}
> {"episode": 10, "reward": -42.2002, "steps count": 1000, "duration": 313}
> {"episode": 11, "reward": 69.0002, "steps count": 874, "duration": 300}
> {"episode": 12, "reward": 93.2000, "steps count": 421, "duration": 153}
> {"episode": 13, "reward": -324.9010, "steps count": 866, "duration": 364}
> {"episode": 14, "reward": -1331.3883, "steps count": 1000, "duration": 478}
> {"episode": 15, "reward": -1544.5859, "steps count": 1000, "duration": 514}
> {"episode": 16, "reward": -1599.9847, "steps count": 1000, "duration": 552}
---
New points...
num_episodes: 40,
// memory_size: 8192, // must be set in dqn_model.rs with the MEMORY_SIZE constant
// max_steps: 1000, // must be set in environment.rs with the MAX_STEPS constant
dense_size: 256, // neural network complexity
eps_start: 0.9, // epsilon initial value (0.9 => more exploration)
eps_end: 0.05,
eps_decay: 3000.0,
> Training
> {"episode": 0, "reward": -1798.1161, "steps count": 1000, "duration": 15}
> {"episode": 1, "reward": -1800.0162, "steps count": 1000, "duration": 34}
> {"episode": 2, "reward": -1718.6151, "steps count": 1000, "duration": 57}
> {"episode": 3, "reward": -1369.5055, "steps count": 1000, "duration": 82}
> {"episode": 4, "reward": -321.5974, "steps count": 1000, "duration": 115}
> {"episode": 5, "reward": -213.2988, "steps count": 1000, "duration": 148}
> {"episode": 6, "reward": -175.4995, "steps count": 1000, "duration": 172}
> {"episode": 7, "reward": -126.1011, "steps count": 1000, "duration": 203}
> {"episode": 8, "reward": -105.1011, "steps count": 1000, "duration": 242}
> {"episode": 9, "reward": -46.3007, "steps count": 1000, "duration": 281}
> {"episode": 10, "reward": -57.7006, "steps count": 1000, "duration": 323}
> {"episode": 11, "reward": -15.7997, "steps count": 1000, "duration": 354}
> {"episode": 12, "reward": -38.6999, "steps count": 1000, "duration": 414}
> {"episode": 13, "reward": 10.7002, "steps count": 1000, "duration": 513}
> {"episode": 14, "reward": -10.1999, "steps count": 1000, "duration": 585}
> {"episode": 15, "reward": -8.3000, "steps count": 1000, "duration": 644}
> {"episode": 16, "reward": -463.4984, "steps count": 973, "duration": 588}
> {"episode": 17, "reward": -148.8951, "steps count": 1000, "duration": 646}
> {"episode": 18, "reward": 3.0999, "steps count": 1000, "duration": 676}
> {"episode": 19, "reward": -12.0999, "steps count": 1000, "duration": 753}
> {"episode": 20, "reward": 6.9000, "steps count": 1000, "duration": 801}
> {"episode": 21, "reward": 14.5001, "steps count": 1000, "duration": 850}
> {"episode": 22, "reward": -19.6999, "steps count": 1000, "duration": 937}
> {"episode": 23, "reward": 83.0000, "steps count": 456, "duration": 532}
> {"episode": 24, "reward": -13.9998, "steps count": 1000, "duration": 1236}
> {"episode": 25, "reward": 25.9003, "steps count": 1000, "duration": 1264}
> {"episode": 26, "reward": 1.2002, "steps count": 1000, "duration": 1349}
> {"episode": 27, "reward": 3.1000, "steps count": 1000, "duration": 1364}
> {"episode": 28, "reward": -6.4000, "steps count": 1000, "duration": 1392}
> {"episode": 29, "reward": -4.4998, "steps count": 1000, "duration": 1444}
> {"episode": 30, "reward": 3.1000, "steps count": 1000, "duration": 1611}
thread 'main' has overflowed its stack
fatal runtime error: stack overflow, aborting
---
num_episodes: 40,
// memory_size: 8192, // must be set in dqn_model.rs with the MEMORY_SIZE constant
// max_steps: 700, // must be set in environment.rs with the MAX_STEPS constant
dense_size: 256, // neural network complexity
eps_start: 0.9, // epsilon initial value (0.9 => more exploration)
eps_end: 0.05,
eps_decay: 3000.0,
{"episode": 0, "reward": -1256.1014, "steps count": 700, "duration": 9}
{"episode": 1, "reward": -1256.1013, "steps count": 700, "duration": 20}
{"episode": 2, "reward": -1256.1014, "steps count": 700, "duration": 31}
{"episode": 3, "reward": -1258.7015, "steps count": 700, "duration": 44}
{"episode": 4, "reward": -1206.8009, "steps count": 700, "duration": 56}
{"episode": 5, "reward": -473.2974, "steps count": 700, "duration": 68}
{"episode": 6, "reward": -285.2984, "steps count": 700, "duration": 82}
{"episode": 7, "reward": -332.6987, "steps count": 700, "duration": 103}
{"episode": 8, "reward": -359.2984, "steps count": 700, "duration": 114}
{"episode": 9, "reward": -118.7008, "steps count": 700, "duration": 125}
{"episode": 10, "reward": -83.9004, "steps count": 700, "duration": 144}
{"episode": 11, "reward": -68.7006, "steps count": 700, "duration": 165}
{"episode": 12, "reward": -49.7002, "steps count": 700, "duration": 180}
{"episode": 13, "reward": -68.7002, "steps count": 700, "duration": 204}
{"episode": 14, "reward": -38.3001, "steps count": 700, "duration": 223}
{"episode": 15, "reward": -19.2999, "steps count": 700, "duration": 240}
{"episode": 16, "reward": -19.1998, "steps count": 700, "duration": 254}
{"episode": 17, "reward": -21.1999, "steps count": 700, "duration": 250}
{"episode": 18, "reward": -26.8998, "steps count": 700, "duration": 280}
{"episode": 19, "reward": -11.6999, "steps count": 700, "duration": 301}
{"episode": 20, "reward": -13.5998, "steps count": 700, "duration": 317}
{"episode": 21, "reward": 5.4000, "steps count": 700, "duration": 334}
{"episode": 22, "reward": 3.5000, "steps count": 700, "duration": 353}
{"episode": 23, "reward": 13.0000, "steps count": 700, "duration": 374}
{"episode": 24, "reward": 7.3001, "steps count": 700, "duration": 391}
{"episode": 25, "reward": -4.1000, "steps count": 700, "duration": 408}
{"episode": 26, "reward": -17.3998, "steps count": 700, "duration": 437}
{"episode": 27, "reward": 11.1001, "steps count": 700, "duration": 480}
{"episode": 28, "reward": -4.1000, "steps count": 700, "duration": 505}
{"episode": 29, "reward": -13.5999, "steps count": 700, "duration": 522}
{"episode": 30, "reward": -0.3000, "steps count": 700, "duration": 540}
{"episode": 31, "reward": -15.4998, "steps count": 700, "duration": 572}
{"episode": 32, "reward": 14.9001, "steps count": 700, "duration": 630}
{"episode": 33, "reward": -4.1000, "steps count": 700, "duration": 729}
{"episode": 34, "reward": 5.4000, "steps count": 700, "duration": 777}
{"episode": 35, "reward": 7.3000, "steps count": 700, "duration": 748}
{"episode": 36, "reward": 9.2001, "steps count": 700, "duration": 767}
{"episode": 37, "reward": 13.0001, "steps count": 700, "duration": 791}
{"episode": 38, "reward": -13.5999, "steps count": 700, "duration": 813}
{"episode": 39, "reward": 26.3002, "steps count": 700, "duration": 838}
> Saving the validation model
> Validation model saved: models/burn_dqn_50_model.mpk
> Loading the model for testing
> Loading model from: models/burn_dqn_50_model.mpk
> Testing with the loaded model
> Episode finished. Total reward: 70.00, steps: 700
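The training log above is line-delimited JSON, one object per episode. A minimal stdlib-only sketch for extracting the `reward` field from such lines and averaging it (the field name comes from the log above; `extract_reward` is a hypothetical helper, not part of the project's tooling):

```rust
// Pull the "reward" value out of one JSONL log line without external crates.
// Returns None when the field is absent or unparsable.
fn extract_reward(line: &str) -> Option<f64> {
    let key = "\"reward\": ";
    let start = line.find(key)? + key.len();
    let rest = &line[start..];
    // The value ends at the next comma (or at end of object).
    let end = rest.find(',').unwrap_or(rest.len());
    rest[..end].trim().parse().ok()
}

fn main() {
    let log = [
        r#"{"episode": 38, "reward": -13.5999, "steps count": 700, "duration": 813}"#,
        r#"{"episode": 39, "reward": 26.3002, "steps count": 700, "duration": 838}"#,
    ];
    let rewards: Vec<f64> = log.iter().filter_map(|l| extract_reward(l)).collect();
    let mean = rewards.iter().sum::<f64>() / rewards.len() as f64;
    println!("mean reward over {} episodes: {:.4}", rewards.len(), mean);
}
```

For real use a proper JSON parser such as `serde_json` would be preferable; the string-splitting approach only holds for the flat, single-line objects shown here.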


@@ -1,182 +0,0 @@
use std::{
collections::HashMap,
net::{SocketAddr, UdpSocket},
sync::mpsc::{self, Receiver, TryRecvError},
thread,
time::{Duration, Instant, SystemTime},
};
use renet::{
transport::{
ClientAuthentication, NetcodeClientTransport, NetcodeServerTransport, ServerAuthentication, ServerConfig, NETCODE_USER_DATA_BYTES,
},
ClientId, ConnectionConfig, DefaultChannel, RenetClient, RenetServer, ServerEvent,
};
// Helper struct to pass a username in the user data
struct Username(String);
impl Username {
fn to_netcode_user_data(&self) -> [u8; NETCODE_USER_DATA_BYTES] {
let mut user_data = [0u8; NETCODE_USER_DATA_BYTES];
if self.0.len() > NETCODE_USER_DATA_BYTES - 8 {
panic!("Username is too big");
}
user_data[0..8].copy_from_slice(&(self.0.len() as u64).to_le_bytes());
user_data[8..self.0.len() + 8].copy_from_slice(self.0.as_bytes());
user_data
}
fn from_user_data(user_data: &[u8; NETCODE_USER_DATA_BYTES]) -> Self {
let mut buffer = [0u8; 8];
buffer.copy_from_slice(&user_data[0..8]);
let mut len = u64::from_le_bytes(buffer) as usize;
len = len.min(NETCODE_USER_DATA_BYTES - 8);
let data = user_data[8..len + 8].to_vec();
let username = String::from_utf8(data).unwrap();
Self(username)
}
}
fn main() {
env_logger::init();
println!("Usage: server [SERVER_PORT] or client [SERVER_ADDR] [USER_NAME]");
let args: Vec<String> = std::env::args().collect();
let exec_type = &args[1];
match exec_type.as_str() {
"client" => {
let server_addr: SocketAddr = args[2].parse().unwrap();
let username = Username(args[3].clone());
client(server_addr, username);
}
"server" => {
let server_addr: SocketAddr = format!("0.0.0.0:{}", args[2]).parse().unwrap();
server(server_addr);
}
_ => {
println!("Invalid argument, first one must be \"client\" or \"server\".");
}
}
}
const PROTOCOL_ID: u64 = 7;
fn server(public_addr: SocketAddr) {
let connection_config = ConnectionConfig::default();
let mut server: RenetServer = RenetServer::new(connection_config);
let current_time = SystemTime::now().duration_since(SystemTime::UNIX_EPOCH).unwrap();
let server_config = ServerConfig {
current_time,
max_clients: 64,
protocol_id: PROTOCOL_ID,
public_addresses: vec![public_addr],
authentication: ServerAuthentication::Unsecure,
};
let socket: UdpSocket = UdpSocket::bind(public_addr).unwrap();
let mut transport = NetcodeServerTransport::new(server_config, socket).unwrap();
let mut usernames: HashMap<ClientId, String> = HashMap::new();
let mut received_messages = vec![];
let mut last_updated = Instant::now();
loop {
let now = Instant::now();
let duration = now - last_updated;
last_updated = now;
server.update(duration);
transport.update(duration, &mut server).unwrap();
received_messages.clear();
while let Some(event) = server.get_event() {
match event {
ServerEvent::ClientConnected { client_id } => {
let user_data = transport.user_data(client_id).unwrap();
let username = Username::from_user_data(&user_data);
usernames.insert(client_id, username.0);
println!("Client {} connected.", client_id)
}
ServerEvent::ClientDisconnected { client_id, reason } => {
println!("Client {} disconnected: {}", client_id, reason);
usernames.remove_entry(&client_id);
}
}
}
for client_id in server.clients_id() {
while let Some(message) = server.receive_message(client_id, DefaultChannel::ReliableOrdered) {
let text = String::from_utf8(message.into()).unwrap();
let username = usernames.get(&client_id).unwrap();
println!("Client {} ({}) sent text: {}", username, client_id, text);
let text = format!("{}: {}", username, text);
received_messages.push(text);
}
}
for text in received_messages.iter() {
server.broadcast_message(DefaultChannel::ReliableOrdered, text.as_bytes().to_vec());
}
transport.send_packets(&mut server);
thread::sleep(Duration::from_millis(50));
}
}
fn client(server_addr: SocketAddr, username: Username) {
let connection_config = ConnectionConfig::default();
let mut client = RenetClient::new(connection_config);
let socket = UdpSocket::bind("127.0.0.1:0").unwrap();
let current_time = SystemTime::now().duration_since(SystemTime::UNIX_EPOCH).unwrap();
let client_id = current_time.as_millis() as u64;
let authentication = ClientAuthentication::Unsecure {
server_addr,
client_id,
user_data: Some(username.to_netcode_user_data()),
protocol_id: PROTOCOL_ID,
};
let mut transport = NetcodeClientTransport::new(current_time, authentication, socket).unwrap();
let stdin_channel: Receiver<String> = spawn_stdin_channel();
let mut last_updated = Instant::now();
loop {
let now = Instant::now();
let duration = now - last_updated;
last_updated = now;
client.update(duration);
transport.update(duration, &mut client).unwrap();
if transport.is_connected() {
match stdin_channel.try_recv() {
Ok(text) => client.send_message(DefaultChannel::ReliableOrdered, text.as_bytes().to_vec()),
Err(TryRecvError::Empty) => {}
Err(TryRecvError::Disconnected) => panic!("Channel disconnected"),
}
while let Some(text) = client.receive_message(DefaultChannel::ReliableOrdered) {
let text = String::from_utf8(text.into()).unwrap();
println!("{}", text);
}
}
transport.send_packets(&mut client).unwrap();
thread::sleep(Duration::from_millis(50));
}
}
fn spawn_stdin_channel() -> Receiver<String> {
let (tx, rx) = mpsc::channel::<String>();
thread::spawn(move || loop {
let mut buffer = String::new();
std::io::stdin().read_line(&mut buffer).unwrap();
tx.send(buffer.trim_end().to_string()).unwrap();
});
rx
}


@@ -1,5 +0,0 @@
# Vocabulary
Dames : checkers / men
Cases : points
Cadran : quarter


@@ -1,6 +1,6 @@
-# Trictrac — Research Notes
+# Trictrac — store crate overview
-## 1. Rust Engine: Module Map
+## 1. Module Map
| Module | Responsibility |
| ---------------------- | ------------------------------------------------------------------------- |