So I'm trying to graph an 8k map (8000 units, not 8192; tl;dr is artist disparity) made up of a 4×4 grid of 2000-unit terrains. Ideally I need to cover the entire thing with a single recast graph at an accurate cell size, around 0.5 units. The graph will also be updated a lot at runtime, so ideally I need pretty small tile sizes too. I've done a lot of testing with different graph types and have landed back on recast as the most suitable, since I'd like slopes baked into the graph to take some work off the steering behaviours: agents could then align to slopes without per-agent slope raycasts.
From my testing up to this point, attempting to bake 8000×8000 at 0.5 m resolution with the default tile size of 128 is not even close to possible memory-wise in my case. I'd really appreciate any suggestions on reducing data sizes, as the bake is currently trying to allocate several GB, which overflows memory on the partition I'm on (I only have around 4.6 GB spare and need to clean up; doing so now, will check back for responses and retry). So I'm not sure whether this is mostly temporary bake data and the output will be significantly smaller, but I can be sure that a bake attempt at this size and resolution is overflowing 4 GB.
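For reference, here's the back-of-the-envelope tile math for the sizes I'm talking about (plain Python, just arithmetic from the numbers above, not anything engine-specific):

```python
import math

world_size = 8000.0  # 8000x8000 unit world (4x4 terrains of 2000 units each)
cell_size = 0.5      # desired recast cell size in units

# Voxels along one side of the world at this resolution
voxels_per_side = math.ceil(world_size / cell_size)  # 16000

for tile_size in (32, 128, 256):
    tiles_per_side = math.ceil(voxels_per_side / tile_size)
    total_tiles = tiles_per_side ** 2
    print(f"tile size {tile_size:>3}: {tiles_per_side} x {tiles_per_side} = {total_tiles} tiles")
# tile size  32: 500 x 500 = 250000 tiles
# tile size 128: 125 x 125 = 15625 tiles
# tile size 256: 63 x 63 = 3969 tiles
```

So the 250,000-tile figure I mention below comes straight from the tile size of 32; tile count grows quadratically as tile size shrinks, which is presumably a big part of the per-tile overhead.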
EDIT: Interestingly, a rescan (no clearing) worked fine at a tile size of 256, though graph updates took far too long. As a long shot I tried a tile size of 32, and this time it successfully baked 250,000 tiles without any memory issues. However, updates are still far too slow: I've literally been waiting on load for about 6 minutes now for it to process a batch of updates (about 20k new colliders to avoid, which are present for update collision testing and then dropped back into pools once the graphs have been updated).

At this point I'm thinking I might have to stagger updates hugely, i.e. process the whole thing one tile at a time in the background. That's far from ideal, as I'd have to work in a lot of staggering for multiplayer (basically this happens on the host/server so they can path NPCs in the background). Does this sound like an expected time for that many objects at a tile size of 32? It's still processing async in the background here. For comparison, a grid graph was very fast, about 10 seconds to update the entire graph, so while I knew recast would be a lot more expensive of course, I didn't realise it would be so much heavier.
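In case it helps clarify what I mean by staggering: the idea would be a time-budgeted queue, where each frame/tick only processes as many tile updates as fit in a few milliseconds and carries the rest over. This is just an illustrative sketch of that scheduling pattern (names are hypothetical, not the pathfinding library's API):

```python
import time
from collections import deque

class StaggeredUpdater:
    """Time-sliced update queue: each tick(), apply queued tile updates
    until a millisecond budget runs out, then defer the rest."""

    def __init__(self, budget_ms=2.0):
        self.queue = deque()
        self.budget = budget_ms / 1000.0  # seconds

    def enqueue(self, update):
        self.queue.append(update)

    def tick(self, apply_update):
        # apply_update is whatever actually rebuilds one tile.
        deadline = time.perf_counter() + self.budget
        processed = 0
        while self.queue and time.perf_counter() < deadline:
            apply_update(self.queue.popleft())
            processed += 1
        return processed

# Usage: enqueue the ~20k collider updates once, then call
# updater.tick(rebuild_tile) once per server frame until the queue drains.
```

The obvious downside, and the reason I'd rather avoid it, is that the graph is stale for however many frames the queue takes to drain, which is awkward when the host is pathing NPCs for other players in the meantime.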