CRITICAL OBSERVATION: MY CONVERSATION WITH CHATGPT IN REGARDS TO "KILLER AI DRONES AND A MANHATTAN COCKTAIL" (PART 3)
(Image: AI created. Prompted by A.Glass 2023. Three attempts.)
"Once again ChatGPT has failed to impress me. Maybe I am expecting too much, but with the amount of press the 'AI learning' program is getting in the media at the moment, you would be forgiven for assuming that artificial intelligence is about to take over the world. Such is our desire, fed by rapid-paced digital relays, for sensationalism and its conspiracy-laced expectations of grandeur.
ChatGPT is indeed a learning program that reminds you its knowledge cutoff is 2021, so the data from which it learns is limited. This is when it tends to ramble. I asked ChatGPT if it knew what the Kargu-2 drone was; it didn't know. Its reply was:
"I'm sorry, I'm not aware of any specific drone called Kargu-2. There are many drones that are being developed and used for various purposes, so it's possible that Kargu-2 is one of them, but I don't have enough information to confirm. Could you provide more details or context about the Kargu-2 drone?"
Then I hinted that it was an AI-learning drone used for military purposes, and that this AI drone could attack autonomously after learning what a human being (its intended target) looks like. It thanked me for the information, then gave its seemingly balanced opinion, as follows:
"...While it is true that machine learning can be used to train such systems to recognize and respond to certain objects or situations, it is important to note that the way the technology is used, the data sets it is trained on, and the ethical and legal frameworks in place will all play a role in determining how the system behaves.
It's important to consider the ethical and legal implications of autonomous weapons systems, as they raise questions about accountability, responsibility, and the potential for misuse. As an AI, I am not designed to make moral or ethical judgments, but it's important to have a conversation about the responsible use of these technologies."
As noted in my last conversation with ChatGPT, it tends to lose its attempt at an unbiased reply, instead reverting to the overuse of a phrase: in this case the word "ethical", as noted above. It asks me to consider the "ethical" whilst claiming it is unable to make "ethical judgments", which makes the reply sound like a machine. So, I reiterate that I find it very hard to believe ChatGPT has written detailed papers for an assessment, unless there have been sloppy standards at some of these learning institutions, where papers have been handed in and have apparently passed their initial grading. Unless a detailed edit had occurred by a human prior, which would either make a paper worse in its presentation or end up being redrafted completely (by the human who presented it) in some ad hoc, obviously plagiarized way.
Another point is that ChatGPT does not read links, which I offered to send the 'learning program'. This would have been interesting if it could have responded to a link regarding the killer drone. Instead, this was its confused reply:
"I am a language model AI and I do not have the capability to provide links or URLs. However, you can search on the internet for the information you are looking for, reputable news sources and government websites, research papers or articles written by experts in the field are some of the sources you can use to get more information about the Kargu-2 drone or any other topic you're interested in."
So, in other words, after having a hodge-podge 'opinion' piece put together by the bot program, you would be the one providing the footnotes and links. However, there was one aspect of my recent conversation with ChatGPT that I found amusing. Does it have a sense of humor? Or did I set up the AI's humorous reply? Either way, this was its response when I ended the dialogue:
"Sounds great, let me know if you have any other questions about cocktails or anything else. I'll be happy to help."
"Killer drones and cocktails."
"As an AI language model, I can provide information and help on a wide range of topics, including cocktails and autonomous weapons. However, it's important to note that autonomous weapons, also known as killer drones, raise important ethical, legal, and security concerns. The use of such systems can have serious implications for human lives and international security, so it's crucial to have a thoughtful and informed conversation about their development and use. On the other hand, cocktails are a form of mixology and a social ritual, providing pleasure and relaxation to those who consume it responsibly."
I did chuckle at its hyperreal reply.
Also, it was incorrect in regards to the "Boulevardier" cocktail. The Boulevardier is not a variant of the Manhattan with added bourbon; it is a cocktail in its own right, made with Campari and either rye or bourbon. It is more akin to a Negroni. (12/02/2023) "
___
Full conversation: chiasmusmagazine.blogspot.com/2023/01/my-conversation-with-chatgpt-in-regards_27.html
(A.Glass 2023)