>>33306
>fast BehaviorTrees to replace FSMs in a composable
I hadn't heard of this, and it looks useful for my stuff.
>Fast path-finding algos in a smol footprint
I think everything for finding "good" paths starts with an exhaustive search like Depth-First Search (DFS), then adds customizations and optimizations to avoid the need for full exploration. The ML lineage stacks up roughly like this:
- Monte-Carlo Tree Search (MCTS) is pretty standard; it gives you a way to accumulate the results of each branch.
- UCT (Upper Confidence bounds applied to Trees) tells you how to prioritize which branch to take.
- Dynamic programming adds a cache (memoization), so if you see a state twice you can recognize it and avoid duplicate processing.
- AlphaZero adds a neural-network heuristic, so you have some information about a branch before investigating it.
- MuZero, I think, goes one step further and learns a model of the environment itself, so the tree search can run on the learned model instead of the real simulator, which helps when the real environment is expensive to step.
A sketch of the MCTS/UCT core is below.
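To make that core concrete, here's a minimal sketch of MCTS with UCT selection. It assumes a generic simulator interface that isn't from any particular library: actions(state) returns the legal moves, and step(state, action) returns (next_state, reward, done); both names are placeholders.

import math
import random

class Node:
    def __init__(self, state):
        self.state = state
        self.children = {}   # action -> Node
        self.visits = 0
        self.value = 0.0     # running mean of returns seen through this node

def uct(parent, child, c=1.4):
    # UCT score: exploitation (mean value) + exploration (visit-count bonus)
    if child.visits == 0:
        return float("inf")
    return child.value + c * math.sqrt(math.log(parent.visits) / child.visits)

def search(root, actions, step, iters=1000, rollout_depth=20):
    for _ in range(iters):
        node, path = root, [root]
        # 1. Selection: descend while the current node is fully expanded
        while node.children and len(node.children) == len(actions(node.state)):
            parent = node
            node = max(parent.children.values(), key=lambda ch: uct(parent, ch))
            path.append(node)
        # 2. Expansion: add one untried action as a new child
        untried = [a for a in actions(node.state) if a not in node.children]
        if untried:
            a = random.choice(untried)
            next_state, _, _ = step(node.state, a)
            node.children[a] = Node(next_state)
            node = node.children[a]
            path.append(node)
        # 3. Simulation: cheap random rollout to estimate the branch's value
        state, ret = node.state, 0.0
        for _ in range(rollout_depth):
            acts = actions(state)
            if not acts:
                break
            state, reward, done = step(state, random.choice(acts))
            ret += reward
            if done:
                break
        # 4. Backpropagation: accumulate the result along the visited path
        for n in path:
            n.visits += 1
            n.value += (ret - n.value) / n.visits
    # the most-visited child of the root is the most promising move
    return max(root.children, key=lambda a: root.children[a].visits)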
There are other algorithms that look like path algorithms but are better thought of as structure-finding algorithms; topological sorts and spanning-tree algorithms are two examples.
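For contrast, here's a sketch of Kahn's algorithm for topological sorting: no path costs or heuristics, it just recovers the dependency structure. The example graph (a made-up asset pipeline) maps each node to the nodes that must come after it.

from collections import deque

def toposort(graph):
    # graph: node -> list of nodes that depend on it (must come later)
    indegree = {u: 0 for u in graph}
    for u in graph:
        for v in graph[u]:
            indegree[v] = indegree.get(v, 0) + 1
    queue = deque(u for u in indegree if indegree[u] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in graph.get(u, ()):
            indegree[v] -= 1
            if indegree[v] == 0:
                queue.append(v)
    if len(order) < len(indegree):
        raise ValueError("cycle detected; no topological order exists")
    return order

print(toposort({"mesh": ["rig"], "rig": ["anim"], "anim": []}))
# -> ['mesh', 'rig', 'anim']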
>exploring YAML as a data format
I recommend sticking to the subset of YAML where the data is compatible with JSON. That subset is battle-tested on very complex infrastructure tasks for exactly this purpose (a human-readable format for defining & configuring user-designed resources). For cases where the underlying "controllers" that handle the resource configs can change, the Kubernetes object format is great.
https://kubernetes.io/docs/concepts/overview/working-with-objects/
For other cases, just JSON-compatible YAML is great.
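One cheap way to hold yourself to that subset (a sketch assuming PyYAML and a hypothetical config.yaml): parse the file, then round-trip the result through json. If json.dumps rejects it (YAML timestamps parse to datetime objects, for example), the file has drifted out of the JSON-compatible subset.

import json
import yaml  # PyYAML

def load_json_compatible_yaml(path):
    with open(path) as f:
        data = yaml.safe_load(f)
    json.dumps(data)  # raises TypeError if anything non-JSON sneaked in
    return data

config = load_json_compatible_yaml("config.yaml")  # hypothetical file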
>stable & performant Pythonic API for all this with the latest tools
If you don't need to train on-device (though you probably do), I'd recommend separating the requirements for development from the requirements for execution. PyTorch is great for development, and you can export the models you create to be run by a more performant library: for example, create a model with PyTorch, export it to ONNX, and use a C++ runtime to execute the ONNX model. It looks like ONNX is going to add support for training:
https://onnx.ai/onnx/operators/onnx_aionnxpreviewtraining_Gradient.html
so you might be able to take this approach even for cases where you do need to train on-device. OpenVINO seems to be the main choice for running ONNX models on CPUs, and TensorRT for Nvidia GPUs.
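Here's a minimal sketch of that split on a toy model. torch.onnx.export, onnxruntime's InferenceSession, and sess.run are the real APIs; the model, tensor names, and file name are made up.

import torch
import torch.nn as nn
import onnxruntime as ort

# toy model standing in for whatever you developed in PyTorch
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
model.eval()

# export: trace the model with a dummy input and write an .onnx file
dummy = torch.randn(1, 8)
torch.onnx.export(model, dummy, "policy.onnx",
                  input_names=["obs"], output_names=["action_logits"])

# execute with a separate runtime; no PyTorch needed from here on
sess = ort.InferenceSession("policy.onnx")
(logits,) = sess.run(None, {"obs": dummy.numpy()})
print(logits.shape)  # (1, 4)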
>auto-rigging meta
Anything that looks like automatically generating a configuration is going to be solved with an optimization algorithm. The main questions to ask are: how easy is it to get new datapoints (i.e., get an example configuration & test it to see how it performs), how much compute can you afford to throw at the problem, how many dimensions does the search space have, and how complex is the search space.
- Bayesian optimization: very sample-efficient (needs few samples), but the good algorithms are compute-intensive, and it's limited to fairly simple, low-dimensional search spaces.
- Neural networks: great for dealing with complex search spaces. If you can get a lot of samples, you can train these normally. If not, you'll need to use some tricks to train it with fewer samples. The size of the network determines how compute-intensive it is.
- Monte Carlo reinforcement learning methods: require a lot of samples, have very low computation cost per sample, and can deal with medium-complexity search spaces.
Usually in ML, the solution is some mix of all of these things that fits your constraints.
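As one concrete instance of the Monte-Carlo flavor, here's a sketch of the cross-entropy method: sample candidate configs from a Gaussian, keep the elites, refit the Gaussian, repeat. evaluate() is a placeholder for "apply the config and measure how well it performs" (higher is better), and the toy objective at the bottom just checks that it converges.

import numpy as np

def cem(evaluate, dim, iters=50, pop=64, elite_frac=0.125, seed=0):
    rng = np.random.default_rng(seed)
    mean, std = np.zeros(dim), np.ones(dim)
    n_elite = max(1, int(pop * elite_frac))
    for _ in range(iters):
        # sample a population of candidate configurations
        samples = rng.normal(mean, std, size=(pop, dim))
        scores = np.array([evaluate(s) for s in samples])
        # keep the top scorers and refit the sampling distribution to them
        elites = samples[np.argsort(scores)[-n_elite:]]
        mean, std = elites.mean(axis=0), elites.std(axis=0) + 1e-6
    return mean

# toy objective: the best "config" is all 3s; CEM should recover it
best = cem(lambda x: -np.sum((x - 3.0) ** 2), dim=5)
print(best)  # approximately [3. 3. 3. 3. 3.]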