[fixed] RichAIs getting stuck when pathing in certain circumstances, also funnel simplification bug

Well, this video is super long, sorry about that: https://youtu.be/J4mjXc4RGDk

I walk through the setup early on, and about 5 minutes in you start seeing examples of it not working. I then show a few different approaches and what their results are, hopefully staving off the first round of requests for “try this.” :wink: And then there’s a secondary bug with the RichAI funnel simplification that I show way on in – 17 minutes or so.

You can skim through there for sure, versus just sitting down and watching the whole thing, but since I figured you’d want pretty complete information in order to be able to answer this, it’s there. I have had the same issue on certain geometry in both the current asset store pro version of the asset, as well as the 4.1.5 beta pro version.

TLDW:

  1. When walking around at normal speed between waypoints, often my AIs get stuck and stand there forever. A super long frame (leading to a high deltaTime) seems to jolt them out of it. Slower AIs seem to have more trouble with it.

  2. AIs that are really fast and actively chasing my moving player transform never have any of these problems whatsoever. Often they are the same agent, but just switched into a new mode, it’s worth noting.

  3. If RVO is turned on, then they sometimes wind up doing a circular dance around one another at the end of the prior path, even though a new target destination has been set.

  4. This was not happening when I was using AIPath agents actually, but they were getting stuck instead when they would wander slightly off the graph, or possibly into a mysteriously-unwalkable node. I’m not sure.

  5. I have mysteriously unwalkable nodes, come to that. Different settings for the max border edge length and max error and min region size lead to different locations of said nodes, as one might expect. But frankly based on the actual painted area it has laid out, I wouldn’t expect any unwalkable nodes at all!

  6. There’s definitely a bug of some sort with the RichAI funnel simplification, getting to your code that says:

//Ok, this is wrong…
Debug.LogWarning("Linecast failing because point not inside node, and line does not hit any edges of it");
inside Linecast on NavmeshBase.cs

Overall I absolutely LOVE this project. I have had a few frustrations, but until I moved to this new part of the level with agents in there, honestly I was having no troubles whatsoever with anyone getting stuck or whatnot. So this is very peculiar.

Perhaps worth noting: I’m using collision hulls that are either mesh or box colliders created by technie hull painter, and I’m rasterizing based on that. If I should be rasterizing based on the meshes themselves instead, then no problem, but those are higher-poly.

pseudo-edit from after the video: Actually I just tried that, and it leads to many fewer mysterious unwalkable nodes, but people still get stuck and the linecast issue remains. They get stuck at different locations than before, however. Turning off the funnel simplification makes the linecast errors go away, but the “Grid graph special case!” message still appears, which is another oddity.

I then had the idea of turning off “slow when not facing target,” and that does cause a change – the ones that are stuck start rapidly changing direction while staying in place, basically blinking around. That then led to a TON of errors, incidentally:

  • Inside SimulateRotationTowards, I started getting “Look rotation viewing vector is zero” errors about 15,761 times in roughly three minutes.

  • And then about six thousand nullrefs inside Pathfinding.RichFunnel.ClampToNavmeshInternalFull (UnityEngine.Vector3& position) (at Assets/AstarPathfindingProject/Core/AI/RichPath.cs:602).

Is this perhaps some combination of the way the seeker is configured leading into some bugs here?

Sorry for the long post, but hopefully this helps to clear up a few bugs or helps with some documentation issue or something. I think I’ve got some bugs on my hands here, though, if the exceptions are anything to go by.

Cheers. :slight_smile:
Chris

Okay, so things were super unclear here, and I kept poking at this a bunch to see what I could come up with. The fact that this previously had worked but suddenly didn’t in the new geometry was what threw me. There were a series of things going on.

  1. I had a bug in my “if you’re stuck, look for a new target” code that was triggering that way too frequently. In simpler areas that did not matter, because the path would already be found anyhow. In this new area, the geometry was complex enough that this was not the case. I’d run this in fairly-complex geometry before with about 90 AIs running around (at 100+ fps), but this was just another level up I suppose.

  2. The documentation on some of this stuff was really unhelpful, and so I wound up having to guess and check a lot of things. I get that there isn’t a one-size-fits-all solution in terms of how to configure these things, but the defaults seemed to be assuming simpler geometry than you’d see in an average FPS game. I changed the Tile Size from 125 down to 25, and that got rid of ALL of my strange artifacts with the impassable nodes. I just found that by accident. I was then able to move back to rasterizing by collider, detail 10, and it works great.

  3. As part of this I just went ahead and made my stairs into ramps, collider-wise, since doing foot IK on all those AIs is too costly on the CPU anyhow. I think that also cleared up some of the impassable nodes, to be honest, although that wasn’t the only thing.

  4. In general I put in some more code surrounding pathPending, making it so that new targets don’t get set when that is true since that leads to really long wait times.

  5. I also made a change so that it sets the first destination prior to calling base.Start(), so that all of them start out with valid paths versus getting a path on the first Update call. This was surprising.

  6. Every time I set the destination and the target (I’m setting both now, and I stopped using the separate DestinationSetter or whatever it was called class), I’m also calling if ( !this.pathPending ) this.SearchPath(); after it. That was a really key discovery. Even with the repath rate set at 0.1 or 0.4 respectively, it just wasn’t doing the job without that being called.
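To make points 4–6 concrete, here’s a rough sketch of the pattern I ended up with. This is illustrative only – it assumes a component holding a RichAI reference, and uses the 4.x member names (destination, pathPending, SearchPath) as I understand them:

```csharp
// Hedged sketch of the retargeting pattern described above, not the
// library's own code. Assumes the A* Pathfinding Project 4.x RichAI API.
using UnityEngine;
using Pathfinding;

public class WanderController : MonoBehaviour {
    RichAI ai;

    void Awake () {
        ai = GetComponent<RichAI>();
    }

    // Only retarget when no path calculation is already in flight,
    // then kick off a search immediately instead of waiting for the
    // automatic repath interval to come around.
    public void SetNewDestination (Vector3 point) {
        if (ai.pathPending) return;
        ai.destination = point;
        ai.SearchPath();
    }
}
```

The key detail is the pathPending guard before both the destination change and the explicit SearchPath() call.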


All of THAT said, I am now getting the message: “Layer estimate too low, doubling size of buffer. This message is harmless.” Any way to increase the layer estimate via my configuration to keep that from happening?

And secondarily, the RVO code is definitely working fine, but the RichAI’s funnel simplification is still throwing the error:

Linecast failing because point not inside node, and line does not hit any edges of it
UnityEngine.Debug:LogWarning(Object)
Pathfinding.NavmeshBase:Linecast(NavmeshBase, Vector3, Vector3, GraphNode, GraphHitInfo&, List`1) (at Assets/AstarPathfindingProject/Generators/NavmeshBase.cs:1119)

In the same simulation I’m now only getting about 200 of those messages in the span of something like four minutes, which is much better than thousands in seconds, but it’s still an issue.

That was a debug message that I had forgot to remove. Thanks for bringing it to my attention.

Those are not actually unwalkable nodes. It seems you are using the ‘Heuristic Optimization’ setting.
That code will draw a box around the seed nodes it uses. You can safely ignore them; it’s mostly for debugging and I should probably remove them.

Yeah… That is unfortunately a long standing bug. Linecasting on a navmesh is really tricky and those errors are due to floating point precision problems (for example they might happen if you try to fire a linecast that lines up perfectly with the edge of a triangle node). I have been trying to rewrite it for some time now, but there are so many edge cases.
In any case I think I have managed to design it so that if the linecast for some reason fails, it will fall back to simply not simplifying the funnel as much as it could. So the path might be a bit longer than it would have to be, but it should still be a valid path.

Is that with 4.1.5 or with 4.0.6?

Are you talking about the same impassable nodes as I mentioned above were not actually impassable nodes? At which time in the video did you show this?

Did this help anything? I would think it wouldn’t change anything except possibly a 1 or 2 frame delay for when they start moving.

Huh? Do you mean they did not start to move towards the new destination that you set even after at most 0.4 seconds?

Are there any particular things you found to be very unclear? I’d love to improve the docs.

That is just a harmless message. But you are right that I should probably just remove it.
Essentially just says that it had to increase the size of an array. The recast graph initially guesses that the number of layers (geometry intersections when firing a ray from the sky down to below the ground) will be at most 8. It seems your level has a few more.
If you really want to you can increase it by changing the AvgSpanLayerCountEstimate constant in the VoxelClasses.cs file.

No problem. Just removed it on my copy, so all good to me.

[quote]Did this help anything? I would think it wouldn’t change anything except possibly a 1 or 2 frame delay for when they start moving.[/quote]

Yes, this helped in that they were not moving prior to that at all. BUT, please let me clarify below a bit more:

[quote=“aron_granberg, post:4, topic:4285, full:true”][quote=“Chris_Park, post:2, topic:4285”]
Every time I set the destination and the target (I’m setting both now, and I stopped using the separate DestinationSetter or whatever it was called class), I’m also calling if ( !this.pathPending ) this.SearchPath(); after it. That was a really key discovery. Even with the repath rate set at 0.1 or 0.4 respectively, it just wasn’t doing the job without that being called.
[/quote]

Huh? Do you mean they did not start to move towards the new destination that you set even after at most 0.4 seconds?[/quote]

Correct. They wouldn’t move at all, or in some cases would after 10-300 seconds or so. There were a couple of things about this that were biting me:

  1. I had my own separate code going “am I at the destination?” since yours wasn’t seeming to do what I expected at all times. I’ve since removed my code, but what was happening was that my code was kicking in faster than the path would be returned. By having if ( !this.pathPending ) prior to my own code, that made it work okay even when it took longer on the secondary threads to generate a valid path.

When the random wander point chosen was close enough, then they were able to get to it before my code stomped on yours. So that was my bad, but basically a big message saying DON’T BOTHER IT IF pathPending IS TRUE would be a good thing in the documentation.

  2. Even post-that, however, I’ve been confused about the purpose of destination versus target on RichAI. I’d been using AIPath prior to that, because it seemed simpler and a “rich AI” didn’t sound like the sort of thing I wanted. The fact that RichAI is actually the “works really well with navmeshes” AI is another thing the documentation could note in the FAQ, potentially.

At any rate, I’d just been setting target when using AIPath, and all was well in the prior tests I’d been doing. When I switched to RichAI and started using the AIDestinationSetter class as your tutorial documentation suggested, I noticed that was just setting the destination instead of also setting the target. So I started doing that also.

When doing that, THEN my AIs would just stand around infinitely if I didn’t call if ( !this.pathPending ) this.SearchPath(); directly. Even then, my AIs were not reacting to changes in direction on the intervals that I was expecting, so my chaser AIs were not keeping up with me anymore. I started setting both destination and target, and STILL calling if ( !this.pathPending ) this.SearchPath(); and all is now well. Probably overkill, but my framerate is still north of 110 in a fairly complex environment with 30ish AIs, so I’m happy for now.

I’ve noticed that basically what you have going on is a coroutine that gets started early and then loops endlessly. That seems… risky. I don’t know, I’ve just been averse to putting sim-dependent code in coroutines, since exception handling can be odd and the ability to override timing and such is low.

For that loop, why not just have it call in Update a method that counts down timeUntilNextRepath, and sets that to the new value just like you do with the coroutine, etc? The difference would be minimal, but I could set timeUntilNextRepath = 0 to force an immediate next-frame recalculation without having duplicate recalcs like I may be causing now. And you also lose the overhead of coroutines on this particular bit.

I use coroutines a fair bit, but it’s just a personal preference of mine to avoid them in “sim code,” so to speak.
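To illustrate what I mean by the Update-driven timer, here’s a rough sketch. The field and method names (timeUntilNextRepath, repathRate, SearchPath) are just illustrative, not the asset’s actual members:

```csharp
// Hedged sketch of an Update-based repath timer as an alternative to a
// looping coroutine. All names here are hypothetical stand-ins.
using UnityEngine;

public class RepathTimerExample : MonoBehaviour {
    public float repathRate = 0.5f;
    float timeUntilNextRepath;

    void Update () {
        timeUntilNextRepath -= Time.deltaTime;
        if (timeUntilNextRepath <= 0f) {
            // Reset the timer and issue a new path request, just like the
            // coroutine loop does today.
            timeUntilNextRepath = repathRate;
            SearchPath();
        }
    }

    // Zeroing the timer forces a recalculation on the next frame without
    // risking a duplicate request.
    public void ForceRepathNextFrame () {
        timeUntilNextRepath = 0f;
    }

    void SearchPath () {
        // Placeholder for the actual path request.
    }
}
```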


Let me go through and just make a rough list in a minute, in a second update to this since this post is already getting long. The documentation is pretty good as it is, but I’m a really experienced AI programmer, and know 2D pathfinding inside and out, and there were a lot of things I found unclear. I’m relatively new to 3D pathfinding, aside from using node trees and things which basically are dimensionless and so an extension of the 2D algorithms I’d already used.

[quote]That is just a harmless message. But you are right that I should probably just remove it.
Essentially just says that it had to increase the size of an array. The recast graph initially guesses that the number of layers (geometry intersections when firing a ray from the sky down to below the ground) will be at most 8. It seems your level has a few more.
If you really want to you can increase it by changing the AvgSpanLayerCountEstimate constant in the VoxelClasses.cs file.
[/quote]

Sweet, that’s what I was looking for – thanks. I appreciate the message being there, actually. Having it say how to change the default would also be a welcome addition, so that someone like me could go in there and adjust it. I’d rather spend a bit of extra RAM on what I’m guessing are null array entries if unused than go through array expansion immediately on level load. Kind of like initializing a List<> with some starting capacity other than the default, for instance.

You do great work, I just want to say, by the way.

Okay! Documentation requests. Generally speaking I’m not going to dive into API documentation to try to find out what something in a GUI does (MSDN and unity have just trained me out of that over the last two decades), so I’m just going to mention all the things that I found confusing. My background is heavily in AI, and I know 2D pathfinding very well, so some of this may be a bit specific to me coming at it with a different vernacular in some cases.

That said, I’ll also take some guesses at what things are that aren’t really explained in a manual-like fashion, but which I assume the function of based on prior experience with other products.

Big Requests:

  1. In general, my first request is a central index of tutorials somewhere easily accessible on the sidebar. Right now there are a variety of tutorials that are scattered about, but in some cases you have to click from one tutorial to the next in order to find them. I’m going off memory here, but this was my feeling. Overall, I expected to skip the basic tutorials, I didn’t see any advanced ones directly linked, and I wasn’t planning on coding yet so I ignored the API documentation. So I clicked on the FAQ, which had only a single unhelpful entry, and then went to the examples in the Unity package itself and just based my work around that. That ultimately led me to AIPath exclusively, instead of RichAI, which turned out to be a very bad path for my case.

  2. Overall there need to be more central explanations of what concepts are in your system, clearly and concisely. Based on its name, I had a really wrong assumption of what RichAI was. I assumed it was “AI with more features.” When I did find RichAI and it told me to add AIDestinationSetter, nothing had said anything about a Seeker class yet, so I tried to remove the Seeker and assumed that AIDestinationSetter was the new Seeker. It turns out that AIDestinationSetter is more or less just a for-testing class more than anything else (in my opinion), and that could be noted.

Your explanation of the different graph types was really good, incidentally, and an example of what I think is the better form of your documentation. I don’t think your documentation is actually all that bad at all, it’s mainly just not indexed well. And then some things are either lightly documented, or not documented, or documented in a video or API reference, which are not places I thought to look.

So, on to things that did or do confuse me:

On RecastGraph:

  • Why are width and depth (voxels) set to a fixed value? I clearly am able to scale the voxelized area as desired, so what is that based on? Why does it tell me this information if I can’t change it?

  • What exactly is Cell Size? I assume that if this was a grid, it would be the width of one side of each cube in the grid, in unity units. But since this is a navmesh and not a tiled grid… is this referring specifically to how obstacles are rasterized to voxels for purposes of then generating the colliders?

  • Why would I use tiles versus not use tiles? What is the tile size? Is that the 2D area or 3D volume or something else? Is this just for voxelization, or is this something that affects the resulting navmesh?

  • What is max border edge length? I’m assuming that this is either related to the tiles (whatever those are), or the individual final navmesh cells. From testing around it seems to be the latter, but the ideal values for various scenarios aren’t really spelled out. You mentioned that being too large could cause issues, as can having them too small, but I don’t have a concept of what large versus small is in – for instance – an indoor game versus an open world.

  • What is max edge error? Is that something that is basically related to how it’s generating the navmesh cells and how they fit the geometry? Why would a value of 1 be acceptable, if that’s the case? What’s the cost of it being lower? I’m using 0.1, and it generates quickly. Is that hurting generation speed, runtime speed, or nothing?

  • I can understand why rasterize terrain exists, but why would I rasterize meshes instead of using my existing colliders? It’s defaulting to rasterizing meshes, which makes me think it’s desirable for some reason, but I can’t fathom why. My collision meshes are either box colliders, or much simpler mesh colliders than the source meshes.

  • For collider detail, the docs seem to imply that it only affects sphere and capsule colliders. Is that correct? I’m assuming that mesh colliders are already considered “rasterized” from the get-go, unless “rasterization” actually means “voxelization” in this case. I’ve left this at the default of 10 and it seems fine… although I noticed an AI walk through a capsule collider a bit ago, so I need to recheck that. But all the mesh and box colliders are fine.

Graphs In General:

  • Why would I add more than one graph? By having a bounding box around my entire level, it’s already made lots of disconnected resulting navmeshes. Is there a benefit to having it set up as multiple graphs instead? Is there a disadvantage?

  • Regarding graphs in general, I see that they can be rotated. Which is awesome! My character is able to walk on walls and the ceiling, and having AIs that can do the same would be great. Being able to assign AIs to specific graphs so that they snap to a direction would be really useful, but there isn’t much said about that in the documentation, so at the moment I’d be experimenting blindly. Since RichAI is using a raycast for ground testing, presumably it’s also using custom gravity that is graph-oriented, so just switching which graph it is assigned to should let me have an AI jump from the floor to the wall if those were separate graphs… I assume. But right now it would be just a lot of fiddling on my part.

Settings:

  • The term heuristic really throws me off, because normally I think of these as being the actual algorithms. I understand that your tool actually does pathfinding as well as path-geometry-generation, but the term confuses me anyhow. Mainly because I don’t understand what “None” would even do! The simplest I can think of is Manhattan already, and I haven’t used anything but Dijkstra’s algorithm for doing multi-agent solving (for a single target) in a lot of years.

  • Euclidean intrigues me, but I don’t even know what that is in this case. A link to Wikipedia’s relevant reference, and a general note on the cost of that compared to Manhattan or None, would be useful. With Manhattan Diagonal, I also am not sure if you’re automatically preventing the typical cutting-corners issue there, so I wound up using just regular Manhattan for now since I was having trouble originally. I had been using Euclidean without knowing what it was.

  • What is heuristic scale? The only thing I can figure is that this is octree depth, or that potentially you’re generating a second topology on top of the existing topology when you’re using a heuristic? If so, what’s the ram cost of that?

  • Under that you also have “max nearest node distance,” which is super useful but seems to be caught up in the heuristic bits, so I’d move that up above heuristics just to avoid confusing people like me, honestly. I was looking for this a while ago, and it was right in front of me the whole time. The tooltip explains it well, but an FAQ entry wouldn’t go amiss on “how do I get my AIs to follow an object that is off the topology without leaving the topology?”

  • Speaking of, when it comes to “max nearest node distance,” is there a downside to increasing that? I have tall ceilings in the room I’m using right now, and the character can be on the ceiling, which potentially is 20 or 25 unity units away from the topology. Having the AIs able to follow along under him if they’re chasing him is great, but the default being 4 kind of scares me in terms of what the search cost might be per AI if I were to up that. And if that search cost is incurred in a general case, or only when something is actually far away? Put another way, if I change that to 25 and normally my AIs are searching for something that is 1m above the navmesh, does that incur any extra cost compared to if that value is 4? If they’re searching for my player that actually IS 25 units away on the ceiling, is that incredibly expensive and potentially going to cause frame spikes? (Presumably not that since it’s all secondary thread, but still). Right now it’s just guess and check for this sort of thing for me.

  • For heuristic optimization, what is that? Is that just for making generation faster, or is that for making results better? Optimization could mean either, and having AIs naturally spread out (“random”) could be desirable if they’re moving in a group.

  • Batch graph updates makes sense, but is that overridden when I manually make calls to update the graph? If I’m not using the graph updater helper thing from the example, would this have any function? How do graph updates happen on dynamic objects if I’m not using the helper? My natural assumption is “not at all,” but this implies otherwise.

  • Prioritize graphs confuses the heck out of me. I assume that this only applies when I have actually defined multiple graphs, but is it something that also applies if I have one graph defined but multiple navmeshes are generated from that? Is the term graph used to exclusively mean the “things that are defined in the upper area there,” or are the “individual closed sections generated by whatever was defined in the upper area there” actually called graphs? I would normally assume the latter, but the interface is ambiguous.

  • If I wanted to make an AI absolutely locked to a specific graph (either kind from the above post), is that even possible? It’s not normally relevant, but I don’t much see how AIs could hop graphs unless I defined multiple overlapping ones manually.

  • What is “cache?” Is that basically a pregenerated navmesh that I can save with my scene, for a recast graph? If so, that’s really useful, because recalculating the same thing every time I launch is a bit pointless, even if it is only a few hundred ms.

  • The terminology for cache startup and whatnot is confusing to me in general, too; I get the general idea, but the specifics, in a manual-like sense of “here’s how you use the GUI to do what you want as a level designer,” don’t seem to exist.

Other Thoughts:

  • I know what penalties are – or at least I assume I do, from a general AStar implementation sense – but it’s not defined anywhere and someone new to pathfinding would probably be confused. I also don’t see anywhere to actually define penalties. It’s probably in some class, but it seems like usually that would be an editor-based thing or a tag-based thing or a layer-based thing.

  • The relevant graph surface stuff seems great – I have lots of graphs being generated that are completely pointless right now. But how does that work? A brief tutorial on that would be great. Do I just add a specific component to individual objects that are a part of graphs, and it essentially flood-fills out from them? Or am I having to tag ALL of the objects in the surface that would be generated?

RVO:

  • If I had some AIs that were horizontal, and some that were vertical on walls, wouldn’t I need two RVO simulators? Presumably one simulator for each group of graphs? I’m still not sure if that even is how that works, but the RVO simulator seems to be a singleton unfortunately.

Actual AIs:

  • Your image for slowdown times was awesome, but it was kind of buried.

  • Wall force and wall dist are not clear to me. I am familiar with the concept, but how can I see what a “wall” is? Does that just mean “edge of navmesh?” That would treat walls and ledges the same. Or is it using surrounding geometry in some way? Is it possible for this to knock RichAIs off their navmesh, or trap them in areas that are too tight? The former isn’t supposed to be possible from what I understand, but for a while there I thought I was having issues with the latter when my AIs weren’t walking. Wasn’t the case, but documentation on that would have helped me realize that faster.

RVO Controller:

  • If one agent “collides with” certain layers, but those other agents don’t collide with it, does the first agent always go around and never bump the others out of the way?

  • What is an “obstacle” for purposes of this? Something that isn’t an agent that has a navmesh cutter on it that isn’t marked as static? Or is there some other flag? I ask because different definitions imply different performance and optimization strategies.

  • Being able to have an agent easily switch its RVO layer and what layers it collides with during runtime could be useful, but is that information already baked by the time agents are moving around and one might switch modes? And since the collides-with parameter involves bitwise operators to set multiple layers, you might consider adding a params[] method that allows people to set that without having to remember bitwise math. :wink:

  • What does “priority” mean on this? It says how much the others will avoid me, but does that mean that I move out of the way less and they move out of the way more? Or does it mean they move MORE out of the way? Put another way: is this deciding who moves out of the way, or is this deciding the degree to which others move out of the way if I have the priority (by some means or other).

  • What does “lock when not moving” mean, precisely? It says avoidance quality is not the best, but is that just a minor quibble or a major issue? If my agent starts moving again, it becomes immediately unlocked? It seems like this would pretty much always be desirable to have on, but I’m not sure. Wouldn’t an agent that isn’t moving always have a very-cheap avoidance calculation by having an infinite time horizon to all obstacles (ie my movement vector is zero, so I can’t possibly hit anything, so early-out)?
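On the params[] suggestion above, here’s one possible shape for such a convenience overload. I’m assuming the RVOController’s collidesWith property and the RVOLayer flags enum exist roughly as I remember them; this extension method itself is hypothetical:

```csharp
// Hypothetical convenience overload for setting RVO collision layers
// without callers doing the bitwise math themselves. Assumes the
// Pathfinding.RVO.RVOLayer flags enum and RVOController.collidesWith.
using Pathfinding.RVO;

public static class RVOControllerExtensions {
    public static void SetCollidesWith (this RVOController controller, params RVOLayer[] layers) {
        RVOLayer mask = 0;
        // OR together each requested layer into a single bitmask.
        foreach (var layer in layers) mask |= layer;
        controller.collidesWith = mask;
    }
}
```

Usage would then be something like controller.SetCollidesWith(RVOLayer.DefaultAgent, RVOLayer.Layer2); instead of hand-built masks.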

Whew, that’s all I can think of. None of this stuff is a crisis, but it’s what I’ve wondered as I went.

It’s not the Unity Physics linecast, it’s a linecast for the pathfinding graphs. It works the same way but it traverses the pathfinding graph to find obstacles there instead of using the physics engine. It shouldn’t allocate anything I think.

Hm… It’s really odd. The code that would have thrown that error is

Quaternion targetRotation = Quaternion.LookRotation(movementPlane.ToWorld(direction, 0), movementPlane.ToWorld(Vector2.zero, 1));

So the vector that would have to be null is movementPlane.ToWorld(Vector2.zero, 1) but that should not be possible to be null as the ToWorld method call is essentially just a rotation. If you can replicate the error I’d be interested in the steps.

The target property is deprecated since 4.1. However setting it should be identical to setting the destination property every frame (in fact what it does is to just add the AIDestinationSetter component). I think this is mentioned in the docs (see earlier link).
What the AIDestinationSetter script does is essentially just setting the destination property every frame.

Yeah, I’m going to change the code to do that. In earlier versions the coroutine did some other things and it made more sense that it was a coroutine, but since then it has changed and it no longer makes much sense.

Wow. That is a nice long list of improvements! Thank you!
I will try to go through it and improve stuff. I’ll just comment on a few things here:

Isn’t it already?

I am also working on a new documentation system which I think will look nicer and be easier to search. It’s not really finished yet, but you can see a part of it here: https://github.com/HalfVoxel/doxydoc
The screenshot is pretty old though.

Yeah that’s not possible out of the box right now unfortunately. It’s possible with some very minor modifications to the RVOController script that would allow you to set which simulator to use from an external script.

The default is not 4, it is 100. You must have changed it at some point.

public float maxNearestNodeDistance = 100;

Yes, that is exactly what it is. There is this page in the docs called “Saving & Loading Graphs”. Have you read it?

Got it, that makes sense.

[quote=“aron_granberg, post:7, topic:4285, full:true”]Hm… It’s really odd. The code that would have thrown that error is

Quaternion targetRotation = Quaternion.LookRotation(movementPlane.ToWorld(direction, 0), movementPlane.ToWorld(Vector2.zero, 1));

So the vector that would have to be null is movementPlane.ToWorld(Vector2.zero, 1) but that should not be possible to be null as the ToWorld method call is essentially just a rotation. If you can replicate the error I’d be interested in the steps.[/quote]

Vectors are a struct, so they can only be null if you’re having an initialization issue – and compile-time checks usually weed out init errors on structs anyway. Fortunately that’s not what that error is about. All it’s saying is that the result of movementPlane.ToWorld(direction, 0) happens to be Vector3.zero.

If you have something whose target is at the position it’s already on – a zero-length direction vector – or a few other edge cases, then you’ll wind up with that. In almost all cases the user is doing something wrong, but it’s a case that can happen. I’d change the code to be something like:

    Vector3 targetDir = movementPlane.ToWorld(direction, 0);
    Quaternion targetRotation;
    if ( targetDir.x != 0 || targetDir.y != 0 || targetDir.z != 0 )
        targetRotation = Quaternion.LookRotation( targetDir, movementPlane.ToWorld(Vector2.zero, 1) );
    else
        targetRotation = Quaternion.identity;

That would prevent the error while introducing no heap or stack allocs in the majority case, and doing only a minimum of inline float comparisons.

[quote=“aron_granberg, post:7, topic:4285, full:true”]The target property is deprecated since 4.1. However setting it should be identical to setting the destination property every frame (in fact what it does is to just add the AIDestinationSetter component). I think this is mentioned in the docs (see earlier link).
What the AIDestinationSetter script does is essentially just setting the destination property every frame.[/quote]

Hoo boy. A couple of notes. Firstly, the target property isn’t actually flagged as deprecated in the code. I’d just put this on the line above it:

[System.Obsolete( "target is deprecated, please use AIDestinationSetter component instead.  See documentation for details." )]

That will give you the message you want. Also, the fact that you're writing comments in this format:

   /** Target to move towards.
	 * The AI will try to follow/move towards this target.
	 * It can be a point on the ground where the player has clicked in an RTS for example, or it can be the player object in a zombie game.
	 *
	 * \deprecated In 4.0 this will automatically add a \link Pathfinding.AIDestinationSetter AIDestinationSetter\endlink component and set the target on that component.
	 * Try instead to use the #destination property which does not require a transform to be created as the target or use
	 * the AIDestinationSetter component directly.
	 */

means that I don't actually see them in the editor. If you format them like this, they will show up in IntelliSense:

    /// <summary>
    /// The AI will try to follow/move towards this target.
    /// It can be a point on the ground where the player has clicked in an RTS for example, or it can be the player object in a zombie game.
    ///
    /// \deprecated In 4.0 this will automatically add a \link Pathfinding.AIDestinationSetter AIDestinationSetter\endlink component and set the target on that component.
    ///
    /// Try instead to use the #destination property which does not require a transform to be created as the target or use
    /// the AIDestinationSetter component directly.
    /// </summary>

So proper usage should be:

    /// <summary>
    /// The AI will try to follow/move towards this target.
    /// It can be a point on the ground where the player has clicked in an RTS for example, or it can be the player object in a zombie game.
    ///
    /// \deprecated In 4.0 this will automatically add a \link Pathfinding.AIDestinationSetter AIDestinationSetter\endlink component and set the target on that component.
    ///
    /// Try instead to use the #destination property which does not require a transform to be created as the target or use
    /// the AIDestinationSetter component directly.
    /// </summary>
    [System.Obsolete( "target is deprecated, please use AIDestinationSetter component instead.  See documentation for details." )]
    public Transform target {

If you type a triple forward slash in Visual Studio above a method, property, or member variable, it will automatically add all of the IntelliSense stubs for you. You can put in documentation on the return value, individual parameters, and so on.
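For example, here's a hypothetical method documented with parameter and return tags (the method itself is made up, just to show the format):

    /// <summary>
    /// Finds the node closest to the given point.
    /// </summary>
    /// <param name="position">World-space position to search from.</param>
    /// <returns>The nearest node, or null if nothing was found within range.</returns>
    public GraphNode GetNearest ( Vector3 position ) {
        // implementation elided; only the comment format matters here
        return null;
    }

Hovering over a call to GetNearest in the editor then shows the summary, and each parameter shows its description as you type the argument list.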

You clearly have a great command of programming, but since I see you're using \copydoc it looks like you come from a C++ background, and/or are using Doxygen in general. I'm not sure whether that can work with the Visual-Studio-style documentation or not, but having IntelliSense is a godsend. When it's not there I tend to assume the code is just undocumented.

[quote=“aron_granberg, post:7, topic:4285, full:true”][quote=“Chris_Park, post:5, topic:4285”]
For that loop, why not just have it call in Update a method that counts down timeUntilNextRepath, and sets that to the new value just like you do with the coroutine, etc? The difference would be minimal, but I could set timeUntilNextRepath = 0 to force an immediate next-frame recalculation without having duplicate recalcs like I may be causing now. And you also lose the overhead of coroutines on this particular bit.
[/quote]

Yeah, I’m going to change the code to do that. In earlier versions the coroutine did some other things and it made more sense that it was a coroutine, but since then it has changed and it no longer makes much sense.
[/quote]

Story of my life with code evolution, too. :slight_smile:
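For reference, the Update-based countdown I was picturing is roughly this (just a sketch; the field and method names here are illustrative, not your actual API):

    float timeUntilNextRepath;

    void Update () {
        // Count down every frame; when it hits zero, recalculate the path.
        timeUntilNextRepath -= Time.deltaTime;
        if ( timeUntilNextRepath <= 0f ) {
            timeUntilNextRepath = repathRate; // reuse the existing repath interval
            SearchPath();
        }
    }

Then external code can just set timeUntilNextRepath = 0 to force a recalculation on the next frame, with no coroutine overhead.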

Regarding the AIDestinationSetter, I’m still not sure I’m sold on that. I imagine that you’re using that so that you can have a component that your users can easily override for different logic? I’m all fine and well with that, but the way you’re using delegates in there makes me nervous. If exceptions don’t get handled properly, there can be some cases with memory leaks, etc. I dunno, that’s just me being overly paranoid, probably.

[quote]
Isn’t it already?

I am also working on a new documentation system which I think will look nicer and be easier to search. It’s not really finished yet, but you can see a part of it here: https://github.com/HalfVoxel/doxydoc
The screenshot is pretty old though.[/quote]

Mostly it is, yes – I should have been clearer. The fact that it SEEMS to have a full list of tutorials on the sidebar was actually my downfall. There are a number of tutorials that are not listed directly there, such as stuff about RichAI. You have to go into Getting Started Part 2 and then click through something else to get to that. I already had the system up and working well, so going to a getting started tutorial wasn’t really on my mind.

There’s also not any real grouping of those tutorials on the sidebar. There’s a lot of stuff that is “case-specific things you want tutorials about if you’re doing specific things and already know the system,” and then there’s “help if you’re an absolute newbie,” and then there SHOULD (in my opinion) be a “here’s where to look for overarching next steps if you know what you’re doing at a basic level but are trying to figure this out.”

Aka:

  • Here’s how to get to my house = basic tutorials
  • Now that you’re in my house, here are all the rooms and why in general you’d visit any one of them (missing!)
  • By the way, here’s how to use the child safety locks in the kitchen (a bit too prominent)

Just my thoughts.

[quote=“aron_granberg, post:8, topic:4285, full:true”][quote=“Chris_Park, post:6, topic:4285”]
Speaking of, when it comes to “max nearest node distance,” is there a downside to increasing that? I have tall ceilings in the room I’m using right now, and the character can be on the ceiling, which potentially is 20 or 25 unity units away from the topology. Having the AIs able to follow along under him if they’re chasing him is great, but the default being 4 kind of scares me
[/quote]

The default is not 4, it is 100. You must have changed it at some point.[/quote]

Oof, this is where I get into trouble with copying from a demo. I copied the AStar component from the outdoor tutorial with all the yellow boxes and AIs walking around on them. I started with that tutorial since it was a working starting point, and then kept messing with it until it did what I wanted; that's how I first got my feet wet here. I think the value of 4 was set on there, although I wouldn't swear to it. It's also possible I changed it while experimenting. Good to know that there isn't a giant cost to this, though, thank you.

[quote]
Yes, that is exactly what it is. There is this page in the docs called “Saving & Loading Graphs”. Have you read it?[/quote]

Nope, I missed that piece. On the one hand that’s my fault for not exhaustively going through the documentation, but on the other hand it’s called cache in one place and saving and loading in another. I didn’t actually want to save/load graphs at the time I passed that piece of documentation, so I ignored it. When I later saw cache, I thought “huh, I wonder what that is, although I can take a guess at it.”

Anyhow, hopefully this was of some help!

Yeah, I meant zero, not null. Not sure why I wrote that.
Structs can actually never be null in any scenario.

Yeah, usually I do use the Obsolete attribute, however in this particular case I decided to leave it without the obsolete attribute at least during the beta. It should be functionally equivalent to setting the destination property.

That is something I in fact didn’t know about until a few months ago. I use Mac and thus use MonoDevelop and it picks up that documentation just fine. I’m not sure why VisualStudio hasn’t implemented support for it. I really dislike XML comments because they are so extremely verbose, but I am planning on writing a script to convert all my comments to XML comments as most users are using VisualStudio. I won’t be doing it by hand as I have over 2700 documentation comments in the package at the moment.
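The conversion itself should be fairly mechanical. Something along these lines, though this is just a sketch and the real script will also need to handle \param, \returns and the other Doxygen tags:

    // Rough sketch: turn /** ... */ blocks into /// lines wrapped in <summary> tags.
    static string ConvertComments (string source) {
        return System.Text.RegularExpressions.Regex.Replace(source, @"/\*\*(.*?)\*/", match => {
            var result = "/// <summary>\n";
            foreach (var line in match.Groups[1].Value.Split('\n')) {
                var text = line.TrimStart(' ', '\t', '*').TrimEnd();
                if (text.Length > 0) result += "/// " + text + "\n";
            }
            return result + "/// </summary>";
        }, System.Text.RegularExpressions.RegexOptions.Singleline);
    }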

I think it’s just you being paranoid there :p. I don’t think there is any risk of a memory leak there. And if something throws an exception then you have more serious problems anyway.

Yep. Currently I only have 1 other behavior called “Patrol” (which I think should be included in the beta).

I use copydoc because I really don’t want to have to keep the exact same comment up to date in 4 different places. That would be horrible. And no, it was with C# I started using Doxygen actually.

Also. You might like a new feature I have added to the beta. If you right-click on pretty much any property (excluding those on the AstarPath object, I haven’t converted that to the new system yet) you will get a link to the online documentation.

There are some cases when it is useful. For example one might need multiple graphs for different types of units that require significantly different navmeshes. Or you might need it because you need to cover several locations in the world that are separated by large distances. And a few other cases. Usually you only use a single graph though.

I use the term heuristic because that’s the standard term for it in pathfinding literature. Sometimes it is just called the “H” score, but I suppose that is even more cryptic.
The current beta documentation actually already includes some links to wikipedia: https://arongranberg.com/astar/documentation/dev_4_1_6_17dee0ac/namespace_pathfinding.php#a35d651e776fc105830877a30b2c7da6a

In my dev version I have improved the documentation even more.

Got it – I don’t have anything more to add! You’ve pretty much nailed everything I had a question about. :slight_smile:

Hello!

Looks like I’ve run into the same issue. I’m getting warning: “Look rotation viewing vector is zero”
It only reproduces when slowWhenNotFacingTarget is false.

Here are some videos, I hope they will help.
CloudApp

CloudApp

Also I've found that direction (a Vector2) becomes (NaN, NaN); I think this is related.

PS. Adding an if statement to prevent the calculation with a NaN direction does not help; it just gets rid of the warning message.
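The guard I tried was along these lines (a sketch; direction is the Vector2 from the movement code):

    // Skip the rotation update when the direction has gone NaN.
    // This hides the "Look rotation viewing vector is zero" warning,
    // but the agent still stops moving, so the NaN must be produced earlier.
    if ( !float.IsNaN(direction.x) && !float.IsNaN(direction.y) ) {
        targetRotation = Quaternion.LookRotation(movementPlane.ToWorld(direction, 0), movementPlane.ToWorld(Vector2.zero, 1));
    }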

PPS. I’m using 4.1.8 Beta (pro)