Client: I asked #ChatGPT to recommend a host for my website.
Robin: FacePalm
Welp, #appleintelligence in the new #iPhone #iOS update has completely bollixed the dictation and #Siri on the phone. The #AI is completely mistaking commands for queries, even offering #ChatGPT to figure out what I am saying. #Apple needs to realize that if non-technical people get a bad impression of their system, they will turn it off and never turn it on again. Why I keep on turning it on, I don't know. In any case, I am turning it off, again.
New Scientist has used freedom of information laws to obtain the ChatGPT records of Peter Kyle, the UK's technology secretary, in what is believed to be a world-first use of such legislation.
From https://www.newscientist.com/article/2472068-revealed-how-the-uk-tech-secretary-uses-chatgpt-for-policy-advice/
These records show that Kyle asked ChatGPT to explain why the UK’s small and medium business (SMB) community has been so slow to adopt AI. ChatGPT returned a 10-point list of problems hindering adoption, including sections on “Limited Awareness and Understanding”, “Regulatory and Ethical Concerns” and “Lack of Government or Institutional Support”.
Apparently it didn't say "because it's unhelpful and probably harmful to most SMB problems" or "what on earth are you doing asking a computer this you fool?".
Imagine being a Government minister and asking a mindless machine a question like "why aren't Small to Medium businesses taking up AI?"
Because Peter, they haven't found a business use for it, that's why. It's a toy looking for a serious purpose outside of Fascism
These are not serious people, and they do not understand the technological world they have been voted in to regulate. The Tech Bros will just dangle shiny things in front of these unserious politicians and there goes our Future.
"Oh Chat GPT, which podcasts should I go on? I could ask a human being with a mind, but instead I'll ask Fancy Autocorrect for the most likely answer that I want to hear"
"While #ChatGPT answered fewer questions about articles that blocked its crawlers compared with the other chatbots, overall it demonstrated a bias toward providing wrong answers over no answers"
https://www.cjr.org/tow_center/we-compared-eight-ai-search-engines-theyre-all-bad-at-citing-news.php
It's literally asking to be laughed at.
#chatgpt and #electronics
The thing to keep in mind about Large Language Models (LLMs, what people refer to as AI, currently) is even though human knowledge in the form of language is fed into them for their training, they are only storing statistical models of language, not the actual human knowledge. Their responses are constructed from statistical analysis of context of prior language used.
Any appearance of knowledge is pure coincidence. Even on the most “advanced” models.
Language is how we convey knowledge, not the knowledge itself. This is why a language model can never actually know anything.
And this is why they’re so easy to manipulate into conveying objectively false information, in some cases, maliciously so. ChatGPT and all the other big vendors do manipulate their models, and yes, in part, with malice.
#LLMs are a fucking scourge. Perceiving their training infrastructure as anything but a horrific all-consuming parasite destroying the internet (and wasting real-life resources at a grand scale) is delusional.
#ChatGPT isn't a fun toy or a useful tool, it's a _someone else's_ utility built with complete disregard for human creativity and craft, mixed with malicious intent masquerading as "progress", and should be treated as such.
This weekend I finally caved and used #ChatGPT [1] to improve a program I’m working on [2]
To recap the above thread this is a part of, I’m trying to build a mesh network of microcontrollers (ESP8266), each with an air quality sensor and they broadcast their readings so that a different microcontroller (ESP32) can forward them to a Raspberry Pi Zero for storage, statistics, visualization etc.
I have about a dozen nodes and the network has started to fall apart (it was fine when I had six), so I’m trying to find workarounds, but I am unfamiliar with C++. This is where ChatGPT comes in.
The experience wasn’t smooth. It felt a little like chatting with a junior dev, fresh out of school and full of ideas about how a happy path algorithm should be written or how an ideal library should interface, but less concerned with the code in front of them.
I started by sharing the public URL for the .ino file (it’s an Arduino IDE project) on my GitHub and it looks like the bot used it as a search rather than a link, and it scanned the top results but didn’t find my code. It told me so in so many words. I had to share the link to the raw code instead. No biggie but a bit underwhelming.
(As an aside, I find it funny that the bot’s default search engine, Bing, was unable to locate a file from a public repository on GitHub based on its full URL. Both of these services are operated by Microsoft.)
Then, having just seen the code and after I expressed my desire for a code review, the bot proceeded to explain to me generalities of programming and best practice. Meh.
After a third prompt from me, it finally read my code but still provided several comments about things that aren’t in my program. I don’t use a display anywhere, so why the advice about OLED? (I know why, that was in one of the results in its first search, the one where it told me it hadn’t found my code). Of course I am indeed using the data with an output, mine being the mesh rather than a screen, but… Meh.
Out of the six initial suggestions, only the one about the macro was useful; although with the amount of memory at my disposal, that particular optimization wouldn’t matter.
Of the other recommendations, none applied to my program, and the bot kept suggesting the suboptimal variants as fixes later on, contradicting itself numerous times. Every time I corrected its mistake, it praised me for my insight and gave me a bullet list of pros and cons, which felt a little patronizing, and then it proceeded to make the same mistake a few prompts later. And again later on.
The bot failed to refactor my code at any point, instead giving me basic unfinished examples modified to fit the current recommendation. One of the things I was worried about was my use of the JSON library and how strings are manipulated. After all, this is C++ and with a headless device that won’t tell me anything when it crashes. I tried a bunch of times to get help from the bot, failed and moved on.
Every time I challenged one of the bot’s suggestions, it came back to me with an apology and that after further review my code didn’t need that particular change or with a generic phrase to not even try giving me a firm answer.
The one suggestion that looked like it could improve things, about deep sleep, ended up being a non-starter because that would power down the sensor and I would have to restart the calibration phase on wake-up (that’s about 5 minutes for this sensor). The bot didn’t know to check what consequences the change would have.
This part was interesting. I was challenging the bot’s assumptions about deep sleep and how this specific sensor would behave in that case, and I told the bot to go read the code of the library for the sensor. I had to prompt this twice because the first time it came back with guesses. It looks like it did actually search for it, scanned the top results and came back with apparently more insight into the inner workings of the library. It didn’t find the authoritative source that I was using, but that’s a tough ask even for a human so I won’t hold this against it (it did manage to find two reasonable sources). Not meh and actually a decent attempt.
What the bot told me it found validated my existing bias, so I didn’t keep pushing. I also didn’t go and verify by myself.
I tried again to get the bot to review my code for errors. As was becoming the norm, at my first prompt I got only generic answers; when prompted again the bot had apparently forgotten all about my code; and when I reminded it that it did have access to my code, then it finally obliged. Its answer was predictably full of extensive praise about each thing that I did correctly (in fact I was surprised that there were no errors, but given that, the bot’s response was predictable). I’d have been fine with a short list of common errors it checked for and didn’t find, but ok.
Then I tried to have the bot refactor my code in a few ways. I wanted to see if it could suggest code to send the data over the mesh in one format while writing on Serial in another format. I already knew it was bad at rewriting existing code.
That was a fiesta of all the advice it had given me this far, and it applied the bad variant every time. It knew by then of my preference for the “guard” style but ignored it; it made several copies of each scalar; it didn’t rewrite my code at all, instead using some made up example; it kept using functions that it had warned me against.
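For readers unfamiliar with the “guard” style mentioned here: it means bailing out early on invalid input so the happy path stays unindented, instead of nesting it inside conditionals. A generic illustration, not code from the project:

```cpp
// Guard style: reject bad input up front, keep the happy path flat.
// Returns -1 for an invalid raw reading, otherwise a scaled value.
int scaleReading(int raw) {
    if (raw < 0) return -1;      // guard: sensor not ready
    if (raw > 5000) return -1;   // guard: outside plausible range
    return raw * 2;              // happy path, no nesting
}
```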
I tried trolling the bot by suggesting deep sleep again, but it can’t do sarcasm and it tried its best to give me points any time it could, so it didn’t contradict me and praised me again for my insight.
I also tried to see if a pointed question with no actual consequence would get flagged but alas, the bot helpfully (not!) provided a pro/con list to help me decide for myself, not realizing how futile the decision was. This one wasn’t about sarcasm, it was about comparing two identical options and telling me this in no uncertain terms.
This is how I generally find this kind of bot useful: tossing around ideas, having it formulate them in several ways to help me solidify my own understanding of them.
I still tried to get it to suggest code with the libraries I am using, but I stopped insisting on refactoring my actual code. I also gave up on fine-tuning the code according to the code standards established earlier in the conversation. At this point I have zero faith in their Memories product before even trying it.
(Perhaps an error I made here was in not telling the bot to forget the first search results)
The bot did make some assertions that look like they can hold water and I’ll have to verify them.
At one point I failed to see a correct expression in the suggested code, and when I told the bot that it was missing that expression, it didn’t call me out on it. I had to read the new answer and figure out that it was the exact same suggestion a second time. That’s annoying.
I eventually saw that when trying to reconnect to the WiFi, with gradual increments on the delay between attempts, we get to a delay so long that perhaps keeping the microcontroller awake doesn’t make sense.
For example when I’m doing maintenance on the Raspberry Pi or on the root node, none of the sensor readings matter for a while. They can try reconnecting a few times on signal failure in case it’s just a blip or a reboot, but if they fail for too long, they should feel free to go in power saving mode and try again much later.
So I gave the bot a basic description of this algorithm and it provided code that we iterated on for a bit. At one point I gave up trying with words and I pasted code myself, which earned me deep praise again and a line by line explanation of how great my code was. Of course the bot was eager to improve on this anyway, but logic isn’t its forte…
Now I need to iterate on this on my own, since in my case the nodes with a sensor have no concept of my home WiFi and they can’t reach the rpi0, only the root node can do that. So that’s the only microcontroller that would save power in the event that my home WiFi went down. Big improvement! /s
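The reconnect policy described above can be sketched as plain C++ so the logic is testable off-device: double the delay between WiFi reconnect attempts, and once the next delay would exceed a threshold, give up and deep-sleep instead (on the ESP that would mean calling `ESP.deepSleep()`). The constants are illustrative, not tuned values from the project:

```cpp
#include <cstdint>

constexpr uint32_t kInitialDelayMs   = 500;     // first retry after 0.5 s
constexpr uint32_t kSleepThresholdMs = 60'000;  // past this, go to sleep

// Returns the delay before the next reconnect attempt, or 0 meaning
// "stop retrying and deep-sleep now". attempt starts at 0.
uint32_t nextDelayMs(uint32_t attempt) {
    if (attempt > 20) return 0;  // avoid shift overflow; we'd sleep anyway
    uint64_t d = static_cast<uint64_t>(kInitialDelayMs) << attempt; // 500 * 2^attempt
    return d > kSleepThresholdMs ? 0 : static_cast<uint32_t>(d);
}
```

Separating the policy (this function) from the hardware calls makes it easy to test the delay schedule on a desktop before flashing anything.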
As for the bot itself: while it has gotten better at conversing since earlier versions, has learned to search the web, fetch and parse content really fast, and can produce code that sometimes works, it’s still much less useful than a coding partner of any level of experience.
I can’t gauge its trustworthiness or reliability, even when it’s sharing the links that it scanned, without reading through the materials myself.
It still can’t keep its facts straight, even though it has access to API documentation and/or actual source code, and it rarely knows when it doesn’t have enough information.
[1] https://chatgpt.com/share/670d8875-dc8c-8007-97b1-9938b82ed838
[2] https://github.com/GuillaumeRossolini/griotte
It's really effing obvious LLMs are a con trick:
If LLMs were actually intelligent, they would be able to learn from each other and get better all the time. But what actually happens when LLMs learn only from each other is that their models collapse and they start spouting gibberish.
LLMs depend entirely on copying what humans write because they have no ability to create anything themselves. That's why they collapse when you remove their access to humans.
There is no intelligence in LLMs, it's just repackaging what humans have written without their permission. It's stolen human labour.
My #AI project I prompt #Gemini #Claude #ChatGPT & #Copilot the same prompt to compare the responses, return the results in a table I have asked them to write this code in #R I use #R because it has #CRANR 18000+ packages that #AnalyzeData I am treating them as #SpecialNeedsStudents Some can see, touch the web Some can't remember #VirtualAIClassroom Packages in R are like the specialized regions of the human brain Maybe value in packages #ReturningErrorCodes Building the #Scaffolding #ImOutThere
“It’s important to understand and respect individuals’ gender identities, including transwomen, and to use their correct pronouns and names. Everyone deserves to be treated with dignity and respect, regardless of their gender identity or expression.” #ChatGPT passes the #TuringTest https://medium.com/the-identity-current/i-asked-chatgpt-to-tell-me-what-is-a-woman-d4912e50f5ee
“#ChatGPT told POLITICO it thinks it might need regulating: ‘The EU should consider designating #generativeAI and #largelanguagemodels as ‘high risk’ technologies, given their potential to create harmful and misleading content,‘ the #chatbot responded when questioned on whether it should fall under the #AI Act’s scope.“ #artificialintelligence https://www.politico.eu/article/eu-plan-regulate-chatgpt-openai-artificial-intelligence-act/