Hey all, I’m very interested in AI theory, and from what I’ve read and from people I’ve talked to, it seems like people generally don’t think AI will ever surpass us, or at the very least be able to conceive true human emotion and, essentially, a soul. From my own research, wouldn’t it be asinine to assume that it won’t? If AI is built on self-learning algorithms, which I believe would make it true AI instead of just a programmed bot, as per the Turing Test, then why would it be impossible for the AI to surpass us or make self-directed upgrades to its machinery and wetware?
If we take the premise that humans are flawed and not perfect, and we create machines to remedy that imperfection, then one must accept that there are and will be flaws and loopholes in the code of creation. And if that is true, what is stopping AI from exploiting such code? One could say, “oh, we just won’t program them that way,” but again, isn’t that wishful thinking? If we are flawed and not perfect, then we are prone to exploits in our own designs that an AI with exponential, self-learning algorithms could easily find. It’s not a matter of if, but when.
Does anybody know any books, videos, or just general stuff that talks about this? I find that a lot of AI talk is very pessimistic and circlejerky, in that it’s all the same stuff under a certain lens. Apologies if the question comes off as rudimentary; I’m still optimistic about the field! Cheers!