Current AI models are, and are designed to be, your own personal librarian, except the library is filled with books of dubious provenance and the whole thing sits under the umbrella of the builders' overly nice ideology.
They consistently get things wrong and are about as reliable as asking a random drunk in the pub.
I use it as a tool for creative writing, editing, and on occasion generating ideas when I hit a wall. Even then I constantly correct it, and I use other tools, including the Mark I human brain. It is also reasonably useful for research, but only because I have a broad background in STEM and can pick out the nonsense it comes up with.
I've read published works, on Kindle for example, put out by humans but written exclusively by AI, and they are not particularly good.
Thus, if you feel AI is making you 'dumb' then chances are you weren't the sharpest knife in the drawer anyway.
I haven't read the papers, so take this opinion with a pinch of salt, but I have doubts about this research. The only way you could say for sure is to measure users before and after use, which seems unlikely, and even then it would be difficult, since measuring intelligence is hard at the best of times. It is very likely that the people using AI more often are using it precisely because they were lacking in confidence, experience or, to be harsher, brain power.