Will an overreliance on Copilot and ChatGPT make you dumb? A new Microsoft study says AI 'atrophies' critical thinking: "I already feel like I hav...

If the purpose of the AI is to answer questions and find information, I think that's fine and helpful to the human mind -- as the Doctor said in Doctor Who, "The answers are easy. It's asking the right questions that's hard." But if you're using AI to create work and not using your human imagination to create, then that does seem destructive. Just my opinion.
 
I made an account just to correct this ignorance.
You feel dumb when using AI because you already are; it increases what's already there and makes up for a lack of ability.
90% of you are in made-up echo chambers that congratulate you and others for doing absolutely nothing at all.
 
Microsoft researchers say an overdependence on AI tools like Copilot may negatively impact a person's critical thinking, leading to the deterioration of cognitive faculties.

Current AI models are, and are designed to be, your own personal librarian, where the library is filled with books of dubious provenance, all under the umbrella of the builders' overly nice ideology.

They consistently get everything wrong and are about as reliable as asking a random drunk in the pub.

I use it as a tool to help with creative writing and editing, and on occasion for generating ideas when I hit a wall. Even then I constantly correct it, and I use other tools, including the Mark I human brain. It is also reasonably useful for research, but only because I have a broad background in STEM and can pick out the nonsense it comes up with.

I've read published works, on Kindle for example, put out by humans but written exclusively by AI, and they are not particularly good.

Thus, if you feel AI is making you 'dumb' then chances are you weren't the sharpest knife in the drawer anyway.

I haven't read the papers, so take this opinion with a pinch of salt, but I have doubts about this research. The only way you could say for sure is to measure users before and after use, which seems unlikely, and even then it would be difficult, since measuring intelligence is hard at the best of times. It is very likely that the people using AI more often are using it precisely because they were lacking in confidence, experience or, to be harsher, brain power.
 
I don't recall any librarians writing content, doing analytics, generating creative assets, or building presentations for me. It sounds more like you're referring to the earlier virtual assistants that simply consolidated and presented information from the web (like a librarian), not AI, which has the ability to create content and replace jobs, including writing in the tones of its users, creating full videos and presentations, and managing project timelines.

It can make people dumber if they rely on it to do everything and stop growing themselves. If you don't actively use your brain, you will get dumber. If you're using AI to do redundant things that don't require thinking and that eat up your time, that's where it can be helpful. The problem is these companies aren't building it that way; they're building it to eventually replace everything, which will make people dumber (watch WALL-E).
 
If you're too ignorant and uninspired to use it properly, just say so.
It only gets "things wrong" if you genuinely don't know what you're talking about (or don't like the truth).
 
I use it to a degree you can only imagine. It amplifies.
That's a fact.
 

I ask Copilot questions many times every day, so I'm a user and a bit of a fan, but it absolutely gets things wrong too. Where it's great and where it's weak is somewhat predictable: if the information is broad based, it's very, very good, especially at combining things across disciplines. That's something it can answer in seconds that might otherwise take hours of research across many different sites. But if the information only exists in a few opinion posts, the AI is going to reflect those opinions and, if those posts were wrong, the AI will provide the wrong information too.

Even if you ask the AI to create code, something one might expect a computer to do well, it's often wrong, because it's largely an amalgam of code fragments posted by various human users. If you ask AI to write text, it's often gibberish. If you ask AI to draw a picture of a person, that person may have three arms, or three (or eight) fingers on a hand. I'm sure it will get better as some of these things that human brains do automatically get hard-coded into the systems, but to say that AI "only gets things wrong if you genuinely don't know what you're talking about" is simply incorrect.
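To give a hypothetical illustration of the kind of subtle code mistake I mean (a made-up example, not output from any particular model): the snippet below looks plausible and runs without error, yet the shared default list is exactly the sort of trap a human reviewer still has to catch themselves.

# Hypothetical example: plausible-looking Python with a classic pitfall.
# The default list is created once, so every call shares and mutates the same list.
def collect(item, seen=[]):
    seen.append(item)
    return seen

# Corrected version: create a fresh list per call.
def collect_fixed(item, seen=None):
    if seen is None:
        seen = []
    seen.append(item)
    return seen

print(collect("a"), collect("b"))              # ['a', 'b'] ['a', 'b'] -- both calls share one list
print(collect_fixed("a"), collect_fixed("b"))  # ['a'] ['b'] -- each call gets its own list

Nothing exotic, but if you can't spot that kind of thing yourself, the AI certainly won't flag it for you.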
 
Having just come off a college campus (I graduated right when ChatGPT was doing the rounds), this all rings pretty true to me. An older professor of mine was very vocal in sharing their opinion that they still saw Google and Grammarly as plagiarism, because they shortcut research and can easily lead to misinformation. The biggest factor, though, was confidence. From the professors I talked with and the tutoring center I worked at, AI essays were creeping up, and the reasons were the usual suspects: laziness and whatnot. But what a lot of people were also saying is that students didn't trust themselves to write a better essay than ChatGPT. I think all of the AI essays were from first-years in beginning classes where they were supposed to be learning how to write at a college level. In reality that comes with accepting that not every first essay you write will be great, but some students took that anxiety and shoved the work off to ChatGPT.

It's the same real problem that exists with Google and Grammarly. I get it myself: there's an uncertainty about something, anything (maybe how to spell a word, or what something means or is), so you use one of the many tools a few fingertips away to give you the answer. And maybe, most likely, it's right, but not all the time. Grammarly, as an English tool, actually makes a lot of mistakes; it perpetuates and reinforces the same misconceptions that float around, and it kind of falls apart at the fluidity of English.

I think that's the real issue with all these tools and thinking. Well, there's probably more than one. It's a shot to our confidence as thinkers, and it affects young thinkers a LOT, who then grow up dependent on those tools. Question-and-answer is about the same level as Google: not perfectly fine, but nothing more dangerous than Google or another search engine. Reasoning and generative AI is where we start to get into the realm of replacing entire skills and trusting software tools to potentially be better than humans at those skills. A common pitch is the creation of art (poetry, drawings, etc.) and how ANYONE can do it. Except anyone can already choose to draw or write poetry; there isn't some magical barrier. And when we shove that skill off on AI because we think we aren't "expert" enough to do it, we're letting the AI create something with no meaning or humanity in it. It's called the arts and humanities for a reason. A child's stick figure is more art than anything an AI "generates".
 

I think this is a really interesting subject -- the progression of tech allowed/encouraged/required in education. My kids use calculators for parts of their math assignments in junior high and high school, the teachers' logic being that they want the kids doing the algebra and trigonometry, not wasting time on addition, subtraction, multiplication, and division of large numbers, or looking up sin, cos, and tan values in a table (like I had to do in high school math). Calculator use was rigidly prohibited in my schools until college, where we were required to use graphing calculators for my math classes (math, physics, and econ major). My parents were required to learn how to use slide rules!

I don't care much for the grammar checkers, but I sure rely on spell checkers. I'm a reasonable (certainly not great) speller, but typos produce a lot of spelling errors that are easy to miss without computer help.

I think for researching information, ANY source or method is good, PROVIDED (important) that the student learns how to check the validity of the source. Even in printed science publications, there are good journals with heavy peer review and others that will take just about anything without much vetting. A good article found via Google or Bing on a reputable site, say, a Johns Hopkins page summarizing some medical or astronomical principle, may well be better than something found in a library in a lesser journal.

All of that said, I'll admit that AI seems different if it's actually creating the paper for you, because then the student may know literally nothing about the subject matter or the writing process. I also think that writing on a subject CREATES AND SOLIDIFIES knowledge. We learn as we write. Explaining a subject requires your knowledge to connect all the points from start to finish, often revealing gaps in what we think we know. This is why teaching a subject also forces us to learn it better.

So if I were God Emperor of academics, I would permit using AI for research but still require a bibliography with original source references (so the AI is just for getting a high-level understanding, and sources must still be cited to defend any specific points the student raises), and I would prohibit using AI to create anything, unless it were a class on using AI for that purpose.
 
Let me steal your last paragraph: if I were the God Emperor of academics I'd hammer in the importance of thinking.

I'm really curious (and I'm sure these studies exist and there will be more in the coming years) how the knowledge and skills we essentially offload onto external tools (simple math like addition and subtraction) affect our ability to recall more advanced information at all. I get the idea that since we put basic math on calculators we can more easily do advanced math... but I'm also a bit skeptical. Because (and again, I haven't read any studies on the matter, so this is just my biased observation) I feel that we've become less able to access that basic-level math. Or at the very least, as I said earlier with English, we second-guess ourselves more and trust computers more than ourselves to do it. I fall back on this a lot myself: it'll be simple addition and subtraction, and I'll know the answer, and I'll still put it in my calculator because I trust the calculator more than myself. One reason I bring this up is that during the early ChatGPT days I gave it a math problem that involved the order of operations. It got the problem wrong, and I realized it was because it failed to apply the order of operations properly (and I double-checked that I had put the parentheses in right and everything).
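Just to illustrate (this is a made-up expression, not the exact problem I gave it): the order of operations is the kind of thing a couple of lines of Python settle instantly, which is what made the chatbot's confident wrong answer stand out.

# Made-up example: the parentheses and the multiplication are resolved before the subtraction,
# so 8 - 2 * (3 + 1) is 8 - 8 = 0; moving the parentheses gives a completely different answer.
print(8 - 2 * (3 + 1))    # 0
print((8 - 2) * (3 + 1))  # 24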

So I don't know. Are we freeing ourselves up for more advanced thought, or just preventing ourselves from internalizing certain foundational knowledge? And to be frank, given the anecdote you gave, I'm inclined to push back: when they enter the real world, junior high and high school kids would be better served by being able to do basic math without any tools and knowing how to use tools for the advanced math. In general, math should focus on teaching problem solving, not specialized knowledge that those kids will only really encounter again if they go into an advanced career in that field.

That's what I meant by wanting schools to focus more on teaching students how to think. I feel like a lot of middle and high school education is just teaching how to follow directions. Kids aren't really understanding the importance of, say, researching and writing their own work, so when they learn about tools that do it for them, they gladly make use of them. Teachers don't always hammer in the importance of thinking, and a lot of assignments at that level can just feel meaningless. I think so many first-year university students feel flabbergasted because, for the first time, they're being taught how to research and how to think, and why it's important that they form and articulate their own opinions and are able to problem-solve. When students don't have those skills, AI becomes another risk in that sense. I'm inclined to agree that any research tool can be useful, but researching itself is a skill that isn't taught as well as it should be.

It's all a really messy twine ball in a can of worms that's about to go "pop goes the weasel" to me. I think education on the whole is failing students. I think academics are failing to provide students with the internal tools to think and to feel confident in their own knowledge, thoughts, and opinions. Then we have all these tools pop up that promise to do our thinking for us, let us instantly look up new information, and also let us pawn off responsibility for the accuracy of that information.
 

Great insightful points. On the positive side, I think one thing U.S. schools have historically done well compared to other countries is teach thinking rather than memorization. This, historically, meant that U.S. students tested worse (as they still do today), but were actually better at solving a new problem that didn't resemble anything they'd ever seen before. That "better" is merely comparative and also a few decades old, so maybe it's no longer true, or even if it's still true, maybe U.S. schools are bad, just not as bad as other countries.

For my kids, they were not allowed to use calculators when they were learning basic math; it was only when they got to more advanced math that they were allowed to use them. That seems roughly the same as our being assigned to use calculators in college math classes. More interesting to me is that the homework my daughter gets requires her to enter her answers on a computer (a Chromebook, ugh), which tells her whether they're right in real time as she enters them. I'm not 100% sure how I feel about that yet, but I think I like it: instant feedback and the opportunity to self-correct before the teacher gets the work, especially for homework, strikes me as more educational and faster than the old process of turning in paper homework and then waiting a day or more to find out if it was right (if ever).

None of this really has anything to do with AI, but I think it shows there's a broad spectrum of how tech can be used in education, maybe some is good and some is bad. Some of it seems like a net positive to me, but I confess I don't have any expertise in education nor any real hard opinions on this. The only thing that really aggravates me with education is when a school doesn't require kids to learn the basics at all, especially math and science.
 
Yeah, I certainly also have no formal training in education, and certainly none in other countries. I wasn't aware of comparisons between the U.S. and other education systems showing that the U.S. approach can lead to worse testing but better problem solving (I am interested in how U.S. education responds to that if it's difficult to measure; growing up I certainly remember a back-and-forth on standardized testing and whether to prepare students for it or for life in general). I only really have my own experiences as a student, which... I'm young (just graduated uni), but that's certainly getting more outdated by the minute. What you mentioned about your daughter inputting homework answers on her Chromebook is similar to what I did in university for an accounting class (thankfully not on a Chromebook). The grade was minuscule, and you were encouraged to do it over and over again for a hundred, but I think the ability to get instant feedback and practice what was learned was useful. That said, it was also up to the student how useful it was. We could just type in random answers until we got it correct, or ask friends for the answers, or we could take the time to work the problem or go to the professor to have it explained to us.

Yeah, I feel like we've gone from talking about AI's effect on education specifically to technological tools' effect on education generally, but I also think they're more or less the same thing. At the end of the day, AI is a tool, and, like has been said, what will matter most is how education responds to it and, more importantly, how it's implemented. If the focus is on ensuring students understand the fundamentals first and know how to problem-solve and research properly and all the good jazz, then there are a lot of ways AI can assist them. But I definitely don't think all the gaps have been plugged there, and with new technology there's also the slow adoption and understanding from educators (I think one problem is that kids are exposed to these tools and often take to them faster than their educators, which leaves a gap where they haven't been taught how to use them properly). Education in the U.S. in general is... varied, depending on where people live: states have different guidelines, counties have different resources, and some teachers are better than others. It will be interesting to see how things progress.
 
