Critical observation: "MY CONVERSATION WITH CHATGPT IN REGARDS TO THE 'LIKELIHOOD OF NUCLEAR WAR'" (PART 2)

"There were quite a lot of errors. Yet the ChatGPT (AI) stayed as non biased as it could, which ended up repeating itself. It also admitted "knowledge cut off" points, that the last reading of the Doomsday clock was in 2022 not 2001, as stated by the AI. In relation at attempting to create a non biased discussion, it ended up contradicting the term "likelihood". As noted, "It is difficult to predict the likelihood of a nuclear war. The potential for a nuclear conflict exists as long as nuclear weapons exist. " then it goes on and says in a returned answer, "...as for the Doomsday Clock, it's important to note that it is a symbolic representation of the likelihood of a global catastrophe and is adjusted by the Bulletin of the Atomic Scientists based on their assessment of current events...". This could be indicative of how the ChatGPT overstates its range of knowledge via how it learns and interprets responses. Thus it ends up repeating terminology and begins to make less sense. I can't imagine how ChatGPT, at this point in time, could write or understand complex literature, without duplicating it."


(A.Glass 2023)


___


Full conversation: chiasmusmagazine.blogspot.com/2023/01/my-conversation-with-chatgpt-in-regards_18.html
