Description
The warmup shapes provided at startup place a hard cap on the number and length of sequences in a batch that will be scheduled. After the model is loaded onto the card, we should verify that there is enough memory available to run every possible batch shape. This check could also respect the `--gpu-memory-utilization` parameter, so that users can set a configurable buffer.
This would likely require implementing memory profiling so that we can calculate how much memory a batch of a given shape will take at inference time.
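
As a rough sketch of what that check could look like (assuming a PyTorch-based runtime; `model`, the dummy-input construction, and the `warmup_shapes` list are illustrative placeholders, not the actual startup plumbing):

```python
import torch


def profile_batch_memory(model, batch_size: int, seq_len: int) -> int:
    """Run a dummy forward pass at the given shape and return peak memory in bytes.

    Assumes `model` accepts a (batch, seq_len) tensor of token IDs; the real
    warmup shapes would come from the server's startup configuration.
    """
    torch.cuda.synchronize()
    torch.cuda.reset_peak_memory_stats()
    dummy_input = torch.zeros(batch_size, seq_len, dtype=torch.long, device="cuda")
    with torch.no_grad():
        model(dummy_input)
    torch.cuda.synchronize()
    return torch.cuda.max_memory_allocated()


def check_warmup_shapes(model, warmup_shapes, gpu_memory_utilization: float = 0.9):
    """Fail fast at startup if any warmup shape exceeds the memory budget."""
    total = torch.cuda.get_device_properties(0).total_memory
    budget = int(total * gpu_memory_utilization)
    for batch_size, seq_len in warmup_shapes:
        peak = profile_batch_memory(model, batch_size, seq_len)
        if peak > budget:
            raise RuntimeError(
                f"Warmup shape (batch={batch_size}, seq_len={seq_len}) needs "
                f"{peak / 1e9:.2f} GB, exceeding the {budget / 1e9:.2f} GB budget "
                f"({gpu_memory_utilization:.0%} of device memory)"
            )
```

Profiling each shape with a real forward pass is one option; an analytical estimate (activations plus KV cache as a function of batch size and sequence length) would avoid the extra startup cost but is easier to get wrong.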