The ResourceLocator component incorrectly caches search results under the current search key, even when those results correspond to a previous search key. This occurs when the search query changes while a request is still in flight.
Specifically, when a response arrives, the component caches it under the original key (correct) but also under the currentRequestKey if it differs (incorrect). This means if a user types abc (triggering a request) and then changes it to xyz while the request is pending, the results for abc are cached and displayed for xyz.
This issue was introduced in PR #16773.
It is particularly egregious on the first load of the dropdown because there is no debounce delay, so a user is very likely to start typing while the initial "empty" search is still loading.
There are probably easier ways to reproduce this, but this is the one I stumbled upon.
Add an lmChatOpenAi node to a workflow and configure OpenAI credentials.

Search results should only be cached for the specific query parameters that generated them. If the user changes the search query while a request is pending, the results from the previous request should be discarded or cached only for the original key, not the new one. The UI should reflect the state of the current query (loading, or results specific to that query).
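The expected behavior above can be sketched as follows. This is a minimal illustrative model, not the actual ResourceLocator implementation; the names (`SearchCache`, `onResponse`, `setQuery`) are hypothetical. The key point is that a response is cached only under the key that initiated the request, and is surfaced only if that key still matches the active query.

```typescript
type Results = string[];

// Hypothetical sketch of the intended caching behavior.
class SearchCache {
  private cache = new Map<string, Results>();
  private currentKey = '';

  // Called whenever the user edits the search query.
  setQuery(key: string): void {
    this.currentKey = key;
  }

  // Called when a response arrives for the request started with `requestKey`.
  // Returns the results to display, or null if they are stale.
  onResponse(requestKey: string, results: Results): Results | null {
    // Correct: always cache under the key that generated the results.
    this.cache.set(requestKey, results);
    // The bug described above would additionally cache `results` under
    // `this.currentKey` when it differs from `requestKey`. Instead, only
    // surface the results if they still match the active query.
    return requestKey === this.currentKey ? results : null;
  }

  get(key: string): Results | undefined {
    return this.cache.get(key);
  }
}
```

With this model, typing `abc` (triggering a request) and then `xyz` while the request is pending leaves `xyz` uncached and still loading, while the `abc` results remain cached under `abc` only.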
Generated at: 2025-11-21T02:51:13.291Z
Operating system: WSL2 running on Dev Container
n8n version: 1.121.0
Node.js version: 22.21.0
Database: PostgreSQL
Execution mode: main (default)
Hosting: self hosted