Brain not needed. AI like ChatGPT sneaking into our jobs and lives

Started by Tapio Dmitriyevich, April 24, 2025, 06:52:38 AM

Tapio Dmitriyevich

There's this trend: people seem to ask ChatGPT (and similar systems) more and more instead of using their own brains. I've noticed it's become a kind of trend; if you don't participate, you're considered conservative and old.

Where did I notice the trend?
- At work. A colleague from the staff recruiting team puts her text into AI, gets a perfect result (no grammar mistakes or typos), and then uses that.
- At work, again. Someone asked the AI to "write a report about xy" and used the result as a foundation. It is seen as being efficient.
- At work, a third time. People use some web service with templates, or have AI create design-related pictures, instead of doing it locally.
- In an Android programming subreddit, newbies with zero programming knowledge regularly say, "I asked my question to AI, it gave me a result, but the result does not work."
- In web forums. If I read comments in perfect German or English, summarizing pros and cons, they have likely been created by AI.

AI can only learn from what it has been fed. Some average? Some trends? Some mainstream ideas of things? Western geopolitics? Whatever it is, it's all in there...

I am currently opposed, though I can't say exactly why. I love to use my own brain, and I wish people used theirs.

Any thoughts?

In the past I worked in a highly regulated field, security-wise. Now I am in another area, and I am speechless at how careless they all are: uploading all their stuff, every thought, every file, to online services to receive some results in return.

^
Not perfect english up there. But it's my own!

Roasted Swan

Quote from: Tapio Dmitriyevich on April 24, 2025, 06:52:38 AMThere's this trend: people seem to ask ChatGPT (and similar systems) more and more instead of using their own brains. I've noticed it's become a kind of trend; if you don't participate, you're considered conservative and old.

Where did I notice the trend?
- At work. A colleague from the staff recruiting team puts her text into AI, gets a perfect result (no grammar mistakes or typos), and then uses that.
- At work, again. Someone asked the AI to "write a report about xy" and used the result as a foundation. It is seen as being efficient.
- At work, a third time. People use some web service with templates, or have AI create design-related pictures, instead of doing it locally.
- In an Android programming subreddit, newbies with zero programming knowledge regularly say, "I asked my question to AI, it gave me a result, but the result does not work."
- In web forums. If I read comments in perfect German, summarizing pros and cons, they have likely been created by AI.

AI can only learn from what it has been fed. Some average? Some trends? Some mainstream ideas of things? Western geopolitics? Whatever it is, it's all in there...

I am currently opposed, though I can't say exactly why. I love to use my own brain, and I wish people used theirs.

Any thoughts?

In the past I worked in a highly regulated field, security-wise. Now I am in another area, and I am speechless at how careless they all are: uploading all their stuff, every thought, every file, to online services to receive some results in return.

^
Not perfect english up there. But it's my own!

I agree with all your concerns. So far I have avoided any (conscious!) engagement with AI, and I am certainly not interested in using it to write or research any information on my behalf. I enjoy doing both of those processes myself. But then I am not 'competing' in a workplace to appear more productive or well-informed. Clearly AI is going to insert itself more into our lives whether we actively seek to use it or not.

Christo

Our brain has been our most overrated organ for centuries. Now we know why: knowledge is something else.  :)
... music is not only an 'entertainment', nor a mere luxury, but a necessity of the spiritual if not of the physical life, an opening of those magic casements through which we can catch a glimpse of that country where ultimate reality will be found.    RVW, 1948

krummholz

ChatGPT is the bane of STEM professors here, and probably throughout academia now. Using it in an unauthorized way is also the toughest kind of academic dishonesty to prove, since you will get at least slightly different responses if you pose the same question to it twice.

Iota

Quote from: Roasted Swan on April 24, 2025, 07:45:02 AMI agree with all your concerns. So far I have avoided any (conscious!) engagement with AI, and I am certainly not interested in using it to write or research any information on my behalf. I enjoy doing both of those processes myself. But then I am not 'competing' in a workplace to appear more productive or well-informed. Clearly AI is going to insert itself more into our lives whether we actively seek to use it or not.

I relate very much to what you write, though I do sometimes find the AI overview offered with some Google searches quite useful, for example.
Personally, the older I get, the more irrelevant I feel, and AI is just the latest step in that journey; the fact that I don't use it just alienates me further from younger generations. But I don't mind; indeed, I embrace my irrelevance. It actually feels like something of a relief, rather liberating. And as the internet has brought great wonders as well as great horrors, the indications are that AI is the next step in that evolution. Though whether it will wipe us out like the dinosaurs, take us to unimaginable new heights, or something else, we seem unavoidably on course to discover...

Roasted Swan

Quote from: Iota on April 24, 2025, 09:14:19 AMI relate very much to what you write, though I do sometimes find the AI overview offered with some Google searches quite useful, for example.
Personally, the older I get, the more irrelevant I feel, and AI is just the latest step in that journey; the fact that I don't use it just alienates me further from younger generations. But I don't mind; indeed, I embrace my irrelevance. It actually feels like something of a relief, rather liberating. And as the internet has brought great wonders as well as great horrors, the indications are that AI is the next step in that evolution. Though whether it will wipe us out like the dinosaurs, take us to unimaginable new heights, or something else, we seem unavoidably on course to discover...

my feelings exactly! (but you put it much better than I could - even after I'd asked AI........ ;) )

Henk

I think AI is largely hype. It also costs a lot of resources, and rare earth metals are becoming scarce. Maybe the end of the smartphone era is near. Allocation choices need to be made, or the market decides.

Technology ethicists also warn against too much dependence on technology, for instance Nolen Gertz in 'Nihilism and Technology', who became bewildered and worried observing his son's behaviour with regard to technology.

AI can have some good applications in scientific areas. There are good and bad sides to it, and those need to be figured out carefully. It's also a societal and health question.

Some time ago I asked a bot developed by Mistral a question. The answer didn't make much sense to me; I just got some basic info. I feel no urge to do that more often. Also, chatbots can't replace human connection and contact, for instance in critical situations when someone needs help. The risk that things go wrong is so high that, once you limit the risks, the usefulness becomes less clear.

relm1

"To what corner of the world do they not fly, these swarms of new books? It may be that one here and there contributes something worth knowing, but the very multitude of them is hurtful to scholarship, because it creates a glut, and even in good things satiety is most harmful.  Filling the world with books, not just trifling things, but stupid, ignorant, slanderous, raving, irreligious and seditious books, and the number of them is such that even the valuable publications lose their value." - Desiderius Erasmus against the printing press (1466–1536)

AI is revolutionary tech that will be everywhere in just a few years. I used to hate it, as I lost work to it, but now I am learning it so that at least I can use it and not lose more work.

Worth a read: https://engelsbergideas.com/essays/the-war-against-printing/

Holden

Quote from: krummholz on April 24, 2025, 08:26:42 AMChatGPT is the bane of STEM professors here, and probably throughout academia now. Using it in an unauthorized way is also the toughest kind of academic dishonesty to prove, since you will get at least slightly different responses if you pose the same question to it twice.

Two points to think about here.

1. Where is the AI getting its information from? What databases has it trawled, digested, and added to its own overall database? To my knowledge, the likes of ChatGPT and Copilot are 'trained' on a number of sources, most of which come from internet media outlets. With the abundance of fake news, this means that some of the information an AI generator gives you is actually wrong. This is especially likely if the prompts you give the likes of ChatGPT are not as rigorous as they should be. It needs to be understood that ChatGPT, Copilot, etc. are not search engines. If you ask them a question, what you get comes directly from their own databases.

2. Maybe it's time for STEM (and other) professors (let's say educators in general) to up their game and consider how outdated their current assessment tasks and procedures actually are. I see some of my senior secondary school colleagues struggling with this as they read a student's work that they know has been AI-generated, but they can't accuse the student without definitive proof. Should we now look at making the assessment task directly relevant to the curriculum instead of some sort of essay/report/thesis? Whoa! That sounds like work, and might mean that the course notes and assessment tasks that 'Professors Wackford and Squeers' have been using for the last two decades might have to be totally rewritten.
Cheers

Holden

krummholz

Quote from: Holden on April 24, 2025, 11:54:06 PMTwo points to think about here.

1. Where is the AI getting its information from? What databases has it trawled, digested, and added to its own overall database? To my knowledge, the likes of ChatGPT and Copilot are 'trained' on a number of sources, most of which come from internet media outlets. With the abundance of fake news, this means that some of the information an AI generator gives you is actually wrong. This is especially likely if the prompts you give the likes of ChatGPT are not as rigorous as they should be. It needs to be understood that ChatGPT, Copilot, etc. are not search engines. If you ask them a question, what you get comes directly from their own databases.

Of course, "hallucinations" can be a dead giveaway that a student has used ChatGPT or a similar AI chatbot. Most often, though, all a professor can go on is a vague suspicion that the submitted work doesn't sound like student work, but more like an encyclopedia entry. The problem is as old as the internet - it is NOT a new thing with AI - and students have always tried to cheat on assignments by plagiarizing from published, and then internet, sources. Unless the professor can find the actual source, there is little hope of nailing the student - and AI chatbots only make the problem more daunting as there is no "source" out there to find. I sit on a committee that hears academic integrity cases and have seen professors report students with nothing more than a vague suspicion - and of course that makes it impossible even to meet a "preponderance of evidence" standard.

Quote from: Holden on April 24, 2025, 11:54:06 PM2. Maybe it's time for STEM (and other) professors (let's say educators in general) to up their game and consider how outdated their current assessment tasks and procedures actually are. I see some of my senior secondary school colleagues struggling with this as they read a student's work that they know has been AI-generated, but they can't accuse the student without definitive proof. Should we now look at making the assessment task directly relevant to the curriculum instead of some sort of essay/report/thesis? Whoa! That sounds like work, and might mean that the course notes and assessment tasks that 'Professors Wackford and Squeers' have been using for the last two decades might have to be totally rewritten.


How practical that is probably depends on the subject area. I teach physics and astronomy. It was one thing when students only needed to search the web for answers to well-known physics problems... or get their answers from web sources like Chegg. With AI, a student can pose ANY physics problem, including those thought up by the professor, to a chatbot and get at least a working strategy if not a complete solution. My answer has been to make homework optional and grade only work done in the classroom. But that takes away some of the student's incentive to do the most important part of their course work, namely, working homework problems.

DavidW

Quote from: Iota on April 24, 2025, 09:14:19 AMPersonally, the older I get, the more irrelevant I feel, and AI is just the latest step in that journey; the fact that I don't use it just alienates me further from younger generations.

Quote"Anything that is in the world when you're born is normal and ordinary and is just a natural part of the way the world works. Anything that's invented between when you're fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it. Anything invented after you're thirty-five is against the natural order of things."
- Douglas Adams

Meant as kind of funny, but it is pretty true.

DavidW

Quote from: krummholz on April 25, 2025, 04:33:41 AMMy answer has been to make homework optional and grade only work done in the classroom. But that takes away some of the student's incentive to do the most important part of their course work, namely, working homework problems.

I understand this POV, but I think it throws the baby out with the bathwater. There are plenty of students who won't cheat, but need assignments to be worth credit to do the right thing, which is repeated practice. I would rather reach those that I can than worry about those that I can't. It also lets me model what I want them to see by example: physics mastery comes from the grind, not some innate mastery present in the soul from birth! :laugh: (that sounds stupid, but so many students think that way)

Daverz

I've found it useful for research, particularly if you can dig into the references. But it should be remembered that Google completely destroyed the usefulness of their regular search engine to maximize short-term profit, and this "enshittification" of their products is a consistent MO for them.

Kalevala

@krummholz and @DavidW So, and this is going back a ways, don't students still have to list sources and references (bottom of page, etc.)?

K

relm1

Quote from: Kalevala on April 25, 2025, 03:33:09 PM@krummholz and @DavidW So, and this is going back a ways, don't students still have to list sources and references (bottom of page, etc.)?

K

Yes, this is a great approach. I just took a class where they made a big point about AI being disallowed, and said they have ways to detect it. But they also said it's not a bad tool to use for research, as long as you cite it and don't have it write out the response. I thought this was a good middle path that kept the homework relevant, allowed use in certain ways, but required that we cite our sources.

DavidW

Quote from: Kalevala on April 25, 2025, 03:33:09 PM@krummholz and @DavidW So, and this is going back a ways, don't students still have to list sources and references (bottom of page, etc.)?

K

On routine homework, no. Some instructors will require it if the work is completed by hand, but it has been commonplace for the past 25 years for instructors to employ online homework services. Physics homework is about solving problems yourself, not doing research. Using Chegg to find a solution and posting a reference to it would be audacious cheating, not honest sourcing.

On papers, yes... but AI can generate references as well. Plus, the common student cheat pre-AI was to patch-plagiarize Wikipedia and then use the Wiki sources as the paper's sources. I am not a humanities instructor, but I do require writing. It used to be formal reports, but I've switched to more informal writing called lab memos this year. Anyway, the most common thing students will cheat on in a formal report is the introduction. And it is a stupid cheat: it doesn't take much more work to selectively quote your sources and then interpret the quotes yourself.

And if a student were to plagiarize or use AI, what stops them from also lying about their references? A good teacher would personally check those references. But even I, with a small number of students compared to normal high school teachers and college and university professors, would still need to check the sources of 40-60 papers. That is why casual cheating is so effective: instructors can't afford to scrutinize every single scrap for potential cheating. Just take me, with my light load: I assign homework three times a week, five problems per set, for forty students. You do the math. And I am assigning it on paper mostly to check that students show their work, not to try to detect cheating.

In modern times, how do teachers do it? First of all, AI-written anything is so formal and stiff, and uses such uncommon language, that almost anyone can tell when someone has used AI to write a paper; you don't even need hallucinations. But what is done in general to catch cheating? Well, online resources are used.

On the front of papers, there is a service called Turnitin that checks for duplication against any other paper in their database. That catches students copying from themselves, copying from easy-to-find resources online, and even paying for a paper that someone used at a different school. The difficulty in technical writing is the frequent, long use of known, repeated phrases. For instance, it is not copying for multiple papers to use the phrase "the law of conservation of mechanical energy." One can use filters on the length of the phrase, but it is best if the instructor spends time going through the matches. Turnitin also provides an AI checker, which surprisingly rarely returns false positives.

On the front of online homework submissions, I find that the key thing to look for is the time taken to complete the assignment. WebAssign has become pretty smart about tracking only active use, not just when the tab is open. A typical A or B student will take 1-2 hours to complete one of my homework assignments. If a C student completes it in 5-10 minutes, they are most likely cheating in some way, especially if that level of speed and accuracy is not reflected at all in exams. This unnatural pacing is also one of the ways the College Board has proctors watch for potential cheating on standardized exams like the PSAT. But most homework cheating is invisible.
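For the curious, the pacing heuristic described above can be sketched in a few lines of Python. This is purely my own illustration: the record fields and thresholds are hypothetical, not anything WebAssign actually exposes.

```python
# Illustrative sketch of the "unnatural pacing" heuristic: flag a
# submission as suspicious when the active working time is implausibly
# short AND the student's exam average doesn't show matching skill.

def flag_suspicious(submissions, min_minutes=30, max_exam_avg=0.75):
    """Return the names of students whose homework pace looks suspect.

    submissions: list of dicts with keys 'student', 'active_minutes'
    (time actively working, not just tab-open time), and 'exam_avg'
    (exam average on a 0.0-1.0 scale). A fast finish alone is not
    damning; it only stands out when exams don't back it up.
    """
    flagged = []
    for s in submissions:
        too_fast = s["active_minutes"] < min_minutes
        exams_dont_match = s["exam_avg"] < max_exam_avg
        if too_fast and exams_dont_match:
            flagged.append(s["student"])
    return flagged

submissions = [
    {"student": "A-student", "active_minutes": 95, "exam_avg": 0.92},
    {"student": "C-student", "active_minutes": 8,  "exam_avg": 0.68},
]
print(flag_suspicious(submissions))  # → ['C-student']
```

In practice the thresholds would have to be tuned per assignment, and, as the post notes, this only catches the clumsiest cases; most homework cheating leaves no such trace.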

VonStupp

The composition teacher where I work was just discussing this with me. It appears AI can be used in certain assigned papers in her class, but it must be cited like anything else. Since AI draws on established content, I assume its use leaves a trail of some sort. Of course, a works cited page or bibliography can now be completed for you online too, so students rarely know how to do this on their own without the help of the internet, unless it is done with paper and pencil in class, without technology.

My concern over the last year or so has been the use of ChatGPT, or other such conveniences, for scholarship letters and application materials. It seems disingenuous to have a computer paste together your life story to receive money or the like. I know when my students use words outside of their lexicon, but generous donors and benefactors often do not.

VS

All the good music has already been written by people with wigs and stuff. - Frank Zappa

My Musical Musings

Kalevala

What about page citations?  Or is that all listed in AI?  Or is that mostly no longer relevant due to online articles?

K

krummholz

Quote from: DavidW on April 25, 2025, 06:38:03 AMI understand this POV, but I think it throws the baby out with the bathwater. There are plenty of students who won't cheat, but need assignments to be worth credit to do the right thing, which is repeated practice. I would rather reach those that I can than worry about those that I can't. It also lets me model what I want them to see by example: physics mastery comes from the grind, not some innate mastery present in the soul from birth! :laugh: (that sounds stupid, but so many students think that way)

No, you're absolutely correct, and I agree that it seems like an extreme answer, but I cannot think of a way to give credit for work done outside of class that is fair to the students who don't cheat, given that there is currently no way to tell who has. The only solution I can imagine is to devise problems that ChatGPT gets wrong. That would take more time than I have even as it stands, and you can rest assured that ChatGPT is still learning: what it gets wrong today, it may well get right in the future.

That's not to say that I don't assign homework - I do. In fact, I have been experimenting with a colleague's idea of giving challenging problems along with the additional information that the weekly quiz - or next week's exam - will have a very similar problem on it. To get a good grade, they have to at least learn how to solve problems very much like THAT one. The problem with the method was that many students just didn't care enough to do the work, even when their grade was on the line.

I don't have a good answer, and none of my colleagues do either.

krummholz

Quote from: DavidW on April 26, 2025, 07:19:51 AMIn modern times, how do teachers do it? First of all, AI-written anything is so formal and stiff, and uses such uncommon language, that almost anyone can tell when someone has used AI to write a paper; you don't even need hallucinations. But what is done in general to catch cheating? Well, online resources are used.

Believe it or not, there are students who really do write like that. One of our majors learned English as a second language, is quite eloquent in it, and clearly revels in her mastery of the language. She was recently accused of using AI to write an assignment for a class in another department. She submitted counter-evidence of her brilliance in the form of multiple research papers and fellowships, all to show that she had no need to cheat in any way. She was acquitted, btw.

I agree with the rest of your post - cheating is very hard to catch. In the case of cheating by using AI for homework problems, impossible I would say - unless the student submits a solution with an error that, say, ChatGPT is known to make.