Trictrac
This is a Rust implementation of the game of Trictrac.
The project is in its early stages. The rules (without "schools") are implemented, as well as a rudimentary terminal interface that allows you to play against a bot that plays randomly.
Training AI bots is a work in progress.
Usage
cargo run --bin=client_cli -- --bot random
Roadmap
- rules
- command line interface
- basic bot (random play)
- AI bot
- network game
- web client
Code structure
- game rules and game state are implemented in the store/ folder.
- the command-line application is implemented in client_cli/; it allows you to play against a bot, or to have two bots play against each other
- the bots' algorithms and the training of their models are implemented in the bot/ folder
store package
The game state is defined by the GameState struct in store/src/game.rs. The to_string_id() method encodes this state compactly as a string (without the history of played moves). For a more readable textual representation, the fmt::Display trait is implemented.
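To illustrate the two representations described above, here is a minimal, hypothetical sketch: the struct, its fields, and the encoding format are stand-ins, not the real store::GameState API; only the pattern (a compact to_string_id() plus a readable Display implementation) mirrors the actual code.

```rust
use std::fmt;

// Hypothetical, simplified stand-in for store::GameState, used only to
// illustrate the pattern: compact string ID vs. readable Display output.
struct GameState {
    // Toy board: number of checkers on a few points (illustrative only).
    points: [u8; 4],
    white_to_play: bool,
}

impl GameState {
    // Compact encoding of the state, without move history (e.g. "W:3021").
    fn to_string_id(&self) -> String {
        let side = if self.white_to_play { 'W' } else { 'B' };
        let board: String = self.points.iter().map(|p| p.to_string()).collect();
        format!("{side}:{board}")
    }
}

// Human-readable representation, suitable for a terminal interface.
impl fmt::Display for GameState {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        writeln!(f, "to play: {}", if self.white_to_play { "white" } else { "black" })?;
        write!(f, "points : {:?}", self.points)
    }
}

fn main() {
    let state = GameState { points: [3, 0, 2, 1], white_to_play: true };
    println!("{}", state.to_string_id()); // prints "W:3021"
    println!("{state}");
}
```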
client_cli package
client_cli/src/game_runner.rs contains the logic to make two bots play against each other.
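The bot-vs-bot loop can be sketched as follows. This is an illustrative example, not the crate's real API: the Strategy trait and the toy game (first player to reach 10 wins) are assumptions standing in for the actual Trictrac types.

```rust
// Hypothetical Strategy trait: a bot picks a move given the current state.
trait Strategy {
    fn choose_move(&self, state: u32) -> u32;
}

// Two trivial bots for the toy game: always add 1, or always add 2.
struct AlwaysOne;
struct AlwaysTwo;

impl Strategy for AlwaysOne {
    fn choose_move(&self, _state: u32) -> u32 { 1 }
}
impl Strategy for AlwaysTwo {
    fn choose_move(&self, _state: u32) -> u32 { 2 }
}

// Alternate turns between the two bots until the toy game ends
// (state reaches TARGET); returns the index of the winning player.
fn run_game(bots: [&dyn Strategy; 2]) -> usize {
    const TARGET: u32 = 10;
    let mut state = 0;
    let mut player = 0;
    loop {
        state += bots[player].choose_move(state);
        if state >= TARGET {
            return player;
        }
        player = 1 - player;
    }
}

fn main() {
    let winner = run_game([&AlwaysOne, &AlwaysTwo]);
    println!("player {winner} wins"); // prints "player 0 wins"
}
```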
bot package
bot/src/strategy/default.rs contains the code for a basic bot strategy: it determines the list of valid moves (using the get_possible_moves_sequences method of store::MoveRules) and simply executes the first move in the list. bot/src/strategy/dqnburn.rs is another bot strategy that uses a model trained by reinforcement learning with the DQN algorithm via the burn library (https://burn.dev/). bot/scripts/trains.sh allows you to train agents using different algorithms (DQN, PPO, SAC).
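The idea behind the default strategy can be sketched in a few lines. The Move type and the input slice below are hypothetical stand-ins for what store::MoveRules::get_possible_moves_sequences would return; only the "take the first option, with no evaluation" logic reflects the described behavior.

```rust
// Illustrative move type: (from_point, die_value). Not the crate's real type.
type Move = (u8, u8);

// The basic bot performs no evaluation: it just plays the first
// available sequence, or passes when no legal move exists.
fn choose_first(sequences: &[Vec<Move>]) -> Option<&Vec<Move>> {
    sequences.first()
}

fn main() {
    let sequences = vec![vec![(1, 3), (4, 5)], vec![(1, 5), (6, 3)]];
    match choose_first(&sequences) {
        Some(seq) => println!("playing {:?}", seq),
        None => println!("no legal move: pass"),
    }
}
```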