r/searchandrescue 17d ago

Have you tried using ChatGPT to aid in SAR search planning?

I did some very cursory experiments with ChatGPT and search data (including CalTopo) for a search and recovery operation, and I was surprised by the results.

Note: I put as much detail as I could into the prompt but surely I was missing a lot of important information for this quick test.

I was immediately surprised by the LLM's knowledge of search theory. It asked some smart "high-value clarifying questions" about the subject, clues, forensics, terrain, and search history, and even offered some provisional theories and tactics.

I noticed that, like a human, it became fixated on certain important clues. This was an old search, so many avenues had already been pursued.

Before I put more effort into massaging the source data and improving the details in the prompt, I wanted to see if anyone has tried this already.

Disclaimer: I am not experienced in Search Theory but if the results become promising my plan is to take this to some more experienced folks on our team and neighboring teams for more experimentation.

0 Upvotes

7 comments

8

u/[deleted] 17d ago edited 5d ago

[deleted]

-3

u/Mauronic 17d ago

I would most definitely add trusted resources to the prompt as one of the next steps.

-7

u/BallsOutKrunked WEMT / WFR / RFR / CA MRA Team 17d ago

Not really arguing, but one can, and really should, train agents to prioritize particular chunks of knowledge over everything else. We have a bot that uses our regional and state medical protocols as its starting point, and anything not sourced from those is called out differently.

You can say "only use lost person behavior book, the nps rescue manual, and blah blah blah".

The more time I spend with LLMs, the more I see knowledge sources and prompt engineering/instructions as 80% of the problem, assuming you're starting with a good-enough model in the first place.

The default text box on chatgpt.com is useful but it's a Swiss army knife trying to do everything for everyone.
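A minimal sketch of the "anything not sourced from those is called out differently" idea, assuming a simple labeling step over retrieved snippets (the source names, data, and function are illustrative, not from any real bot):

```python
# Sketch: label retrieved snippets so material outside the approved
# references is explicitly flagged before it reaches the model or reader.
# Source names and snippets here are purely illustrative.

APPROVED_SOURCES = {
    "Lost Person Behavior",
    "NPS Search and Rescue Reference Manual",
}

def label_snippet(snippet: dict) -> str:
    """Prefix a snippet so unapproved material is called out differently."""
    if snippet["source"] in APPROVED_SOURCES:
        return f"[SOURCED: {snippet['source']}] {snippet['text']}"
    return f"[UNSOURCED - verify before use] {snippet['text']}"

snippets = [
    {"source": "Lost Person Behavior", "text": "Despondent subjects often stop at the first secluded spot."},
    {"source": "random blog post", "text": "Searchers should always split up."},
]

labeled = [label_snippet(s) for s in snippets]
for line in labeled:
    print(line)
```

The same pattern works whether the "retriever" is a document search, pasted excerpts, or uploaded files: the point is that provenance travels with the text instead of being lost in the prompt.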

9

u/TheophilusOmega 17d ago

What could it possibly know that experts and locals don't? 

It could regurgitate theory, or it could make up BS; either way, how is that more helpful than a team with years of experience and local knowledge?

0

u/Mauronic 17d ago edited 17d ago

Not sure. Perhaps it could point out blind spots. Someone told me that on long searches, best practice is to swap out leadership to get fresh perspectives. Perhaps it could assist with assignment planning or paperwork.

I am not sure.

But for whatever reason this post with a simple, curious question sure is getting downvoted! lol

1

u/[deleted] 16d ago

[deleted]

1

u/Mauronic 16d ago

I totally agree that there is a trap of using LLMs as a crutch.

I never suggested using this as a learning tool, but simply as an aid.

If you can’t envision any ways that AI could support an experienced person or search operations then that’s fine, I am not qualified enough to debate that.

But as an aid to an experienced person, the risks are limited. Power loss and connectivity are non-issues with a local model.

7

u/IraTheRouge 17d ago

I would not trust any AI to do any work within emergency response. It isn't intelligent; it regurgitates what it thinks is true. Even if it were trained to use only proper sources, I would never trust someone's life to it. Not to mention that any data given to it isn't secure and is just asking for potential leaks of personal information. That aside, anything it does would have to be thoroughly checked by a human, which defeats the point of it being a time saver. This would also lead to fewer people having the training and experience to do it themselves, because "just let the AI do it".

-2

u/Which_Amphibian4835 17d ago

I 100% have it help me write IAPs expeditiously. Haven't gotten around to testing it on search theory.