I’m looking for ways to increase the size of my infinite procedural runtime grid graph. Currently I’m running 100x100 at a 2m node size with very little performance loss, but I would much rather run something like 250x250 at 1.5m — that’s 62,500 nodes instead of 10,000, so roughly six times the work per move.
Idea A) Is it possible to store runtime-generated grid graph data and load it back into the system later? The point is to reduce runtime CPU use by not regenerating nodes that have already been calculated once. If a player is in an area they frequent, the whole area would already have been scanned, and the cached data would be used as the grid moves around instead of re-checking the nodes.
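Roughly what I’m imagining for A, as a minimal sketch in plain Unity C# — nothing here is A* Pathfinding Project API, and `ExpensiveScan` is just a hypothetical stand-in for whatever the per-node check normally costs:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical cache: world cell coordinate -> walkability result from a previous scan.
// Re-entering a frequented area reuses the stored result instead of re-scanning the node.
public class NodeScanCache : MonoBehaviour
{
    public float nodeSize = 1.5f;
    readonly Dictionary<Vector2Int, bool> cache = new Dictionary<Vector2Int, bool>();

    // Returns cached walkability if this cell was scanned before, otherwise scans and stores it.
    public bool GetWalkable(Vector2Int cell)
    {
        if (cache.TryGetValue(cell, out bool walkable)) return walkable;

        walkable = ExpensiveScan(cell);
        cache[cell] = walkable;
        return walkable;
    }

    bool ExpensiveScan(Vector2Int cell)
    {
        // Placeholder for whatever the graph update normally does per node (collision checks, raycasts, ...).
        Vector3 worldPos = new Vector3(cell.x * nodeSize, 0f, cell.y * nodeSize);
        return !Physics.CheckSphere(worldPos, nodeSize * 0.4f);
    }
}
```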
Because my world is infinite I wouldn’t be able to keep increasing the grid boundaries forever. I would probably have to use multiple grids, maybe 512x512 at most — something Aron has suggested is bad practice. Maybe I could instead create holders for the data and keep them separate from the actual in-use grid graph, pulling in data from the surrounding holders when it’s available and calculating new nodes when it isn’t. Feasible?
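The holder idea, sketched as fixed-size chunks saved to disk and loaded back when the player returns — again these are just my own structures, not anything the library provides, and pushing a loaded chunk back into the live grid graph would still need a graph update on that side:

```csharp
using System.Collections.Generic;
using System.IO;
using UnityEngine;

// Hypothetical "holder": a fixed-size block of walkability data kept outside the live graph.
// Chunks the player has visited are written to disk and read back instead of being re-scanned.
public class WalkabilityChunkStore
{
    public const int ChunkSize = 64;   // 64x64 cells per holder
    readonly Dictionary<Vector2Int, bool[]> loaded = new Dictionary<Vector2Int, bool[]>();
    readonly string folder = Path.Combine(Application.persistentDataPath, "navchunks");

    public bool TryGetChunk(Vector2Int chunkCoord, out bool[] cells)
    {
        if (loaded.TryGetValue(chunkCoord, out cells)) return true;

        string file = Path.Combine(folder, $"{chunkCoord.x}_{chunkCoord.y}.bytes");
        if (!File.Exists(file)) return false;

        // One byte per cell; a 64x64 chunk is only 4 KB, so it can be read in one go.
        byte[] raw = File.ReadAllBytes(file);
        cells = new bool[ChunkSize * ChunkSize];
        for (int i = 0; i < cells.Length; i++) cells[i] = raw[i] != 0;
        loaded[chunkCoord] = cells;
        return true;
    }

    public void SaveChunk(Vector2Int chunkCoord, bool[] cells)
    {
        Directory.CreateDirectory(folder);
        byte[] raw = new byte[cells.Length];
        for (int i = 0; i < cells.Length; i++) raw[i] = cells[i] ? (byte)1 : (byte)0;
        File.WriteAllBytes(Path.Combine(folder, $"{chunkCoord.x}_{chunkCoord.y}.bytes"), raw);
        loaded[chunkCoord] = cells;
    }
}
```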
Idea B) Is it possible to limit the maximum number of flood fill checks per frame? That would let me stretch the grid graph larger without trying to update all the new nodes at once. Imagining a 250x250 grid moving north by one node, I would need to check 250 new nodes, but I could limit that to 50 per frame — say 5ms a frame across 5 frames instead of 25ms in a single frame. Feasible?
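Roughly what I picture for B, as a sketch: queue up the freshly exposed cells and only process a capped number of them each frame. `ScanAndWriteNode` is a hypothetical stand-in for the real per-node update, and a Stopwatch-based millisecond budget would work just as well as a node count here:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Time-sliced incorporation of new border nodes: instead of scanning every freshly exposed cell
// the moment the grid moves, queue the cells and process at most `nodesPerFrame` of them per frame.
public class SlicedGraphUpdater : MonoBehaviour
{
    public int nodesPerFrame = 50;
    readonly Queue<Vector2Int> pending = new Queue<Vector2Int>();

    // Called by the grid mover when it shifts, with the coordinates of the newly exposed cells.
    public void EnqueueNewCells(IEnumerable<Vector2Int> cells)
    {
        foreach (var c in cells) pending.Enqueue(c);
    }

    void Update()
    {
        int budget = nodesPerFrame;
        while (pending.Count > 0 && budget-- > 0)
        {
            ScanAndWriteNode(pending.Dequeue());
        }
    }

    void ScanAndWriteNode(Vector2Int cell)
    {
        // Placeholder: the per-node collision test goes here and the result gets written into the
        // graph. In the A* Pathfinding Project that write would normally happen inside a work item
        // so it doesn't race with path calculations in progress.
        Physics.CheckSphere(new Vector3(cell.x, 0f, cell.y), 0.7f);
    }
}
```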
Would this bring much overhead with it for incorporating the new nodes back into the graph? i.e. 5ms of overhead per update would mean 10ms (5+5) per frame instead of 30ms (25+5) in a single frame — though 50ms total instead of 30ms once it’s spread over 5 frames.
I would probably have to watch how many nodes are waiting to be scanned and react if the backlog gets too high: for example, if the player teleports 10m north and all of a sudden I need to scan 2,500 nodes, the 50-per-frame cap could switch to 200 per frame until the count gets back under 500.
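The backlog watcher could just pick the cap each frame — a drop-in change to the `Update` in the sketch above, with the 500/200/50 numbers being nothing more than the ones from my example:

```csharp
// Adaptive cap: a big backlog (e.g. ~2,500 cells after a teleport) raises the per-frame budget
// from 50 to 200 until the queue drains back under the threshold.
int NodesThisFrame()
{
    return pending.Count > 500 ? 200 : 50;
}

void Update()
{
    int budget = NodesThisFrame();
    while (pending.Count > 0 && budget-- > 0)
    {
        ScanAndWriteNode(pending.Dequeue());
    }
}
```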
Sound possible?