Grid Graph vs Recast for position evaluation?

I’m sorry if this has been discussed elsewhere, or if this is a stupid question, but I have spent a very long time in the search function, and this seems to be the solution I’m looking for.

I am trying to implement a navigation system that can be evaluated by the AI at run-time. I want to be able to take a node and evaluate it based on its position. I understand that custom nodes can be created and this is a plan for later, however I need to get my foundation in place first.

My first thought was to use a grid-graph, as I know I will be able to explore and evaluate nodes capably.

However, I’m curious whether I will run into issues with the node count once levels reach scale, even with well-targeted queries. Will large grid cells result in strange pathfinding around objects, specifically rounded edges or objects at an angle? And if I turn the granularity up to compensate, will performance be acceptable in areas as large as roughly 0.1 km², with humanoid-scale actors?

My second thought was to use a recast graph; however, I don’t believe I would be able to get node data the way I want to. Please correct me if I am wrong here, as I’ve been looking for the answer all night.

Assuming I cannot extract the node data from a navmesh, would I be able to place down empty game objects, or even a point graph, and use those nodes as my points of evaluation?

Sorry for how scattered this is. I don’t have a ton of experience with pathfinding, and I’ve been awake for far too long now.

Any feedback is greatly appreciated!


If you need to evaluate the nodes in a fine grained way you really should use a grid graph.
A 300x300 meter grid (approximately 0.1 square kilometer) is definitely feasible with a grid graph.
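To put numbers on that (just back-of-the-envelope arithmetic, not anything the library computes for you):

```python
# Rough node counts for a 300 x 300 m grid graph at a few node sizes.
# Illustrative arithmetic only; the real dimensions are whatever you
# set in the grid graph inspector.

def node_count(side_m: float, node_size_m: float) -> int:
    """Number of grid nodes covering a square area side_m meters wide."""
    per_side = round(side_m / node_size_m)
    return per_side * per_side

for size in (1.0, 0.5, 0.2):
    print(f"node size {size} m -> {node_count(300, size):,} nodes")
```

At a node size of 1 that is 90,000 nodes; shrinking the node size to 0.2 balloons it to 2.25 million, which is why very small nodes get expensive quickly.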

You may also want to look into the ITraversalProvider.

I greatly appreciate the response and reference, this was exactly what I was looking for.

I do have one more question if you find the time. Am I overthinking the node size? I feel as though I am making the nodes too small in trying to get a cleaner grid around the edges of the obstacles (specifically the angled ones, of course), but I suspect that a density of roughly 4 squares (9 nodes?) per square foot would be a bit tough on the CPU.

I notice that you’re referring to a node size of 1, while I have already taken it down to 0.2 in an attempt to support some of the thinner objects I have in the scene. Collider size has been adjusted down as well. It feels wrong having such a heavy implementation. Please advise here.

Sorry again for the beginner questions. I am doing as much reading as I can and will be caught up soon :slight_smile:

That seems like very small nodes, yes. Usually they are around the same size as the characters or a bit smaller. You usually don’t need super clean edges for the characters to move well.
You can post a screenshot if you want.

They are roughly 2/3 the size of the character. Maybe a little less than that.
My worry is that when I bring the node size up much larger than it is, the thinner test primitives I’ve dropped in tend to get paths cut through them in certain spots when they sit at an angle to the grid.
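If it helps to see the effect in isolation, here is a toy sketch (simplified to an axis-aligned wall, and not using the library at all) of how an obstacle thinner than the node spacing can be missed by the collision checks entirely:

```python
# Toy model: node centers on integer coordinates (spacing = 1 node width).
# A thin wall runs along the line y = 0.5, exactly between two rows of
# centers, with total thickness 0.4 node widths. A node counts as blocked
# if the wall comes within (thickness/2 + collision_radius) of its center.

WALL_Y, THICKNESS = 0.5, 0.4
centers = [(x, y) for x in range(10) for y in range(4)]

def blocked(collision_radius: float) -> int:
    reach = THICKNESS / 2 + collision_radius
    return sum(abs(y - WALL_Y) < reach for _, y in centers)

print(blocked(0.0))     # 0  -> pure point checks miss the wall entirely
print(blocked(0.7071))  # 20 -> a ~sqrt(2)/2 radius catches both adjacent rows
```

With a point check, every center is 0.5 node widths from the wall, so nothing is blocked and agents walk straight through; widening the check radius closes the gap.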

Here is an image of how it looks along one of the thin, angled primitives.


One more thing, as kind of a rule of thumb: should I be particularly concerned about the time to scan the entire graph, or only about the time to scan the regions I plan to evaluate at runtime?


That is controlled by the ‘Diameter’ setting in the grid graph’s collision settings. It is measured in node widths. The default of 1 is good for most cases, but the corners may in some cases clip through objects. You can set it to 1.5 (or, more precisely, the limit is sqrt(2) ≈ 1.4142) to guarantee that no colliders will be inside the nodes.
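To see where that sqrt(2) limit comes from (plain geometry, nothing library-specific):

```python
import math

# A grid node is a unit square (1 node width per side). Its corner lies
# hypot(0.5, 0.5) = sqrt(2)/2 node widths from the center, so a collision
# circle needs diameter 2 * (sqrt(2)/2) = sqrt(2) to reach the corners.
# Any collider overlapping the node square then also intersects the circle.
corner_distance = math.hypot(0.5, 0.5)
limit_diameter = 2 * corner_distance
print(round(limit_diameter, 4))  # 1.4142
```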

If you are only scanning the graph once at the start of the game then you don’t have to care much about it. For a graph that size it shouldn’t take more than at most a few seconds I’d estimate. You can also cache the graph if you want to improve the start-up time.

Excellent, thank you very much for all of your help, Aron. I was able to tweak the collider settings to achieve a preferable result while increasing the node size 2.5x.
Image here if you’d like to take a look (I believe it is looking good, however always happy to have more eyes).

My last question is regarding editor performance. I notice lag while navigating the Scene view once a large graph has been scanned. Is this a limitation of Unity, a bug, or simply my CPU, meaning it’s time for a new workstation?

Thanks again, I can’t tell you how much I appreciate your time.

Does it lag continuously or just once? Lagging once is reasonable and common. However, for larger graphs it may also lag a bit all the time. Every frame the system has to go through every node (if you have a 300x300 graph, that’s 90,000 nodes every frame) to check if they have changed (and if so, rebuild that part of the visualization mesh). That’s a fair amount of work every frame. It is slightly faster if you use the “Solid Color” graph coloring mode instead of e.g. the Penalty mode. You can also turn off “Show Graphs” at the bottom of the A* inspector to hide the graph and skip all that work.

It is a continuous lag when zoomed out with them all in view, rendering the surface, outlines, and connections. I have, however, noticed it is not as bad now that I’ve navigated the scene a few times.

I’m still running on my old reliable 4770k. An upgraded workstation is already on my horizon, so I’d be happy to hear this is a hardware limitation and not an editor one :slight_smile:
