Google's problem (and to a lesser degree, that of other LLM vendors) is that their models are good at identifying language-use correlations (remembering that words are not the only form of language) but have no actual intelligence, and thus no understanding of what those correlations mean and no way to weigh the meaning behind them.
Microsoft is struggling with this issue in their "guardrails" but at least they are working the problem, albeit heavy-handedly. By some appearances they run a separate model that post-processes the answers from the base model.
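For the curious, the pattern being described looks roughly like the sketch below: a second "moderator" model screens the base model's draft before anything reaches the user. This is a minimal illustration of the general technique, not anyone's actual implementation; base_model and moderation_model are hypothetical stand-ins.

```python
# Sketch of a post-processing guardrail: a moderation model scores the
# base model's draft output and blocks it above a risk threshold.
# Both model functions are placeholders, not any vendor's real API.

def base_model(prompt: str) -> str:
    # Stand-in for the underlying LLM call.
    return f"draft answer to: {prompt}"

def moderation_model(text: str) -> float:
    # Stand-in for a classifier returning a policy-risk score in [0, 1].
    return 0.9 if "forbidden" in text else 0.1

def guarded_answer(prompt: str, threshold: float = 0.5) -> str:
    draft = base_model(prompt)
    if moderation_model(draft) >= threshold:
        # The guardrail fires on the *output*, independent of how the
        # base model was prompted -- which is what makes the approach
        # heavy-handed but hard to prompt around.
        return "I can't help with that."
    return draft

print(guarded_answer("a harmless question"))
```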
Google shows no signs of having a handle on the problem other than embedding specific overrides in their model, whatever it is called this week. Which only makes things worse, as demonstrated by their model's racialist imagery fiasco.
It is particularly noticeable that despite their collaborative history and anti-MS alignment with Google, Apple bit the bullet and licensed the OpenAI tech. Because as bad as Google's "AI" tech is, Apple's is worse.
As to the Recall teapot tempest, keeping the data local solves most of the problems but it leaves one major sore point: subpoenas. Just as with cellphone encryption, government authoritarians (left *and* right) will have serious heartburn if they can't search the computer's accumulated data. Both Recall and the upcoming AI file manager will have to protect that data to get any market traction.
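The obvious way to protect that data is encryption at rest under a key only the user holds, so a seized drive yields nothing without the passphrase. A minimal sketch of that idea, assuming a passphrase-derived key and using the Python cryptography library; it says nothing about how Recall actually stores its snapshots.

```python
import base64
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def key_from_passphrase(passphrase: bytes, salt: bytes) -> bytes:
    # Derive a 32-byte Fernet key from a user-held passphrase.
    kdf = PBKDF2HMAC(
        algorithm=hashes.SHA256(),
        length=32,
        salt=salt,
        iterations=600_000,
    )
    return base64.urlsafe_b64encode(kdf.derive(passphrase))

# The salt is stored alongside the data; the passphrase never is.
salt = os.urandom(16)
vault = Fernet(key_from_passphrase(b"user passphrase", salt))

token = vault.encrypt(b"screenshot index entry")
assert vault.decrypt(token) == b"screenshot index entry"
```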
As government, Hollywood, and media types are discovering, real-world uses of LLMs come with unexpected subtleties nobody is prepared to deal with: not the media, not the government, and not the tech companies.
Interesting times.