I’ve been contemplating methods for audio occlusion, which can be tricky to get right. My first approach involved raycasts, but raycasts aren’t always ideal: a thin sliver of collider, like a pillar, blocks the ray just as completely as a wide brick wall does. Spacing out multiple raycasts helps, but then you need a lot of them to get decent “resolution,” and the original issue can still cause wacky audio problems.
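For reference, the single-raycast version is basically this. It’s a minimal sketch, assuming the listener and source are plain Transforms and that anything on `occluderMask` should block sound; the class and field names are just placeholders:

```csharp
using UnityEngine;

// Minimal single-raycast occlusion check. A thin pillar and a thick wall both
// return "occluded" here, which is exactly the problem described above.
public class RaycastOcclusion : MonoBehaviour
{
    public Transform listener;     // placeholder reference to the AudioListener's transform
    public Transform source;       // placeholder reference to the AudioSource's transform
    public LayerMask occluderMask; // geometry that should block audio

    public bool IsOccluded()
    {
        Vector3 toListener = listener.position - source.position;
        // One hit anywhere along the line counts as full occlusion,
        // regardless of how much geometry is actually in the way.
        return Physics.Raycast(source.position, toListener.normalized,
                               toListener.magnitude, occluderMask);
    }
}
```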
Other audio systems implement actual physics-based acoustics, but that’s overkill for what I need. I think I’d like to try using A* pathfinding instead: find the shortest path along the ground between the two points and compare its length to the straight-line distance between them.
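Roughly, the comparison would look something like the sketch below. I’m using Unity’s built-in `NavMesh.CalculatePath` here as a stand-in for whatever pathfinding backend I actually end up with, and the mapping from the detour to a 0–1 occlusion factor is just a placeholder formula:

```csharp
using UnityEngine;
using UnityEngine.AI;

// Compare the shortest walkable path between source and listener
// to the straight-line distance between them.
public static class PathOcclusion
{
    public static float OcclusionFactor(Vector3 listenerPos, Vector3 sourcePos)
    {
        float linear = Vector3.Distance(listenerPos, sourcePos);

        var path = new NavMeshPath();
        if (!NavMesh.CalculatePath(sourcePos, listenerPos, NavMesh.AllAreas, path)
            || path.status != NavMeshPathStatus.PathComplete)
        {
            return 1f; // no walkable path at all: treat as fully occluded
        }

        // Sum the segment lengths along the path corners.
        float pathLength = 0f;
        for (int i = 1; i < path.corners.Length; i++)
            pathLength += Vector3.Distance(path.corners[i - 1], path.corners[i]);

        // 0 = path is basically the straight line, larger = sound has to
        // "bend" further around geometry to reach the listener.
        return Mathf.Clamp01((pathLength - linear) / Mathf.Max(linear, 0.01f));
    }
}
```

The exact formula doesn’t matter much yet; the interesting part is the path-length-versus-linear-distance comparison.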
I have some ideas on how to do this. One involves a pooling system of FollowerEntity navigators: when the audio listener comes within audible range of an audio source, the script pulls a FollowerEntity from the pool, sets its target to the audio listener, and, while the audio is playing, uses that path distance in the calculations that adjust an audio filter. A sketch of that idea follows below.
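Something like this, maybe. `AgentPool` is a hypothetical pool API I haven’t written yet (presumably it would hand out agents with movement disabled so they only supply path data), and I’m assuming FollowerEntity exposes `destination` and `remainingDistance` the way the other IAstarAI movement scripts do:

```csharp
using UnityEngine;
using Pathfinding; // A* Pathfinding Project (FollowerEntity)

// Rough sketch of the pooled-navigator idea: while the listener is in range,
// keep a pooled agent pathing from the source to the listener and drive a
// low-pass filter from how much longer that path is than the straight line.
public class PathOccludedAudio : MonoBehaviour
{
    public AudioSource source;
    public AudioLowPassFilter lowPass; // the filter being adjusted
    public Transform listener;         // the AudioListener's transform
    public float audibleRange = 30f;

    FollowerEntity agent;              // pulled from the pool while in range

    void Update()
    {
        float linear = Vector3.Distance(transform.position, listener.position);

        if (linear <= audibleRange)
        {
            if (agent == null)
                agent = AgentPool.Get(transform.position); // hypothetical pool API

            agent.destination = listener.position;

            // How much longer the walkable path is than the straight line.
            float detour = Mathf.Max(agent.remainingDistance - linear, 0f);
            float occlusion = Mathf.Clamp01(detour / audibleRange);

            // Placeholder mapping: more detour -> more muffled.
            lowPass.cutoffFrequency = Mathf.Lerp(22000f, 800f, occlusion);
        }
        else if (agent != null)
        {
            AgentPool.Release(agent); // hypothetical pool API
            agent = null;
        }
    }
}
```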
But maybe this won’t lead to the effect I think it will.