The question of Artificial Intelligence and Machine Learning as an existential threat to humanity.
Welcome, and thank you very much for stopping by to read part 2 of this series on a very important and defining moment in the history of mankind. Here at IA, we've argued that it is surprising to find that the very individuals seemingly at the cutting edge of Artificial Intelligence (AI) and Machine Learning (ML) are the same ones promoting regulation.
I find that very interesting. First, as we already know from the empirical evidence of our past, regulation stifles creativity and invention, and thus progress. Nowhere is this truer than with AI and ML, which will revolutionize the world. And as we also know, two heads are better than one: the more people involved, the more progress will be realized. This is not open for debate; this is fact.
When someone speaks of technology and how it may or may not harm anyone, be it on purpose or by accident, yes, there needs to be vigilance, but not over how the code is written or what any code can or cannot do. The simple fact is that the only thing that really needs to be scrutinized is how any entity, or group of entities, uses a technology. If any law or set of laws, be they moral or of legal standing, is broken, the simple solution is to address the culprit. Just as any invention made while under the employment of a company, on company time, belongs to the company, so too must any legal responsibility and punishment belong to the company that is undoubtedly responsible. It ends there.
So with regard to the legal responsibilities of any company, it is simple: follow the laws. See? The regulations already in place, like the ones that prevent monopolies, insider trading, and the like, are sufficient. If you don't agree, please leave a comment and we can set up a video interview where perhaps it can be debated.
As to whether AI and machine learning are safe for humanity or a danger to it? Well, in my humble opinion, this also has a simple answer. It basically goes back to the individual. Suppose, all things being equal, an AI is developed that can in fact direct and control the daily activities of humanity on a grand, global scale.
Let's take a closer look at this proposition. So, an AI that controls what? Government? Okay: a global, one-world government that is monitored and directed by only one Universal AI. An AI that is interested only in the advancement of mankind, whose ultimate purpose is to protect us, to guide us, and to show us what we ask of it.
Sounds perfect. And it is perfect. It really is. The problem is who decides in which direction. That will be the problem. But the reality is that no one will decide; all mankind will decide. The AI will decide by asking us. Each individual will be connected to this One Global AI Government. We have to think of it as a giant library of knowledge for any one of us. It will be our post office, it will grant patents, and it will grant any assistance needed for the advancement of all mankind, all-inclusive. The key is "all-inclusive," isn't it? Yes it is.
This AI will know to include every one of us humans. It will give us the correct solutions; it will not make mistakes. And I honestly believe that it is this reality, this fact, that has some leaders of tech promoting regulation. (Scoffs, laughing.) Ha, ha, ha. Thank you, but no thank you.
Moving on.
AI could never harm us unless we explicitly told it to. It ends there.
So you have your answer. No, Artificial Intelligence (AI) and/or Machine Learning (ML) is not a threat to humanity, nor could it ever be. One fact about computers that one may always reference to know this answer is correct is this:
Machines do not make mistakes; people do. This simply means that an error in a program was not caused by the computer; it was caused by human error, purposeful or otherwise (still responsible either way).
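The point above can be illustrated with a minimal sketch (the function name and values are hypothetical, chosen only for illustration): the computer faithfully executes exactly what it was told, and the wrong answer comes from a human mistake in the instructions, not from the machine.

```python
# A minimal sketch: the machine executes the instructions exactly as written.
# The bug below is a human error (off-by-one), not a machine error.

def sum_first_n(values, n):
    """Intended to sum the first n values, but the human-written
    loop bound stops one element early."""
    total = 0
    for i in range(n - 1):  # human error: should be range(n)
        total += values[i]
    return total

print(sum_first_n([10, 20, 30], 3))  # prints 30, not the intended 60
```

The computer did nothing wrong here; it summed exactly the elements it was told to sum. Fixing the result means fixing the human-written bound, which is the responsibility of whoever wrote, or employed the writer of, the code.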
When will AI capable of fooling humans arrive?
The fact is, this is really, really close. Now here, for example, is a perfect case for regulation. Should we regulate how a company can use AI to converse with a human? No, not unless the human thinks it's another human. In other words, so long as we know we are talking to an AI, and it identifies itself as an AI, there shouldn't be any problems. You see? We don't regulate the code; we regulate the company (the entity responsible). It's that simple. It ends there.
IT ENDS THERE.
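The "identifies itself as an AI" rule above can be sketched in a few lines. This is only an illustration under my own assumptions; the names (`reply`, `AI_DISCLOSURE`, the `echo` stand-in for a real model) are hypothetical, not any real chatbot API. The idea is that disclosure is a policy wrapper applied by the company, regardless of what the underlying code does.

```python
# A minimal sketch of company-side disclosure: every reply is prefixed
# so the human always knows they are talking to an AI.
# All names here are hypothetical, for illustration only.

AI_DISCLOSURE = "[Automated response: you are talking to an AI] "

def reply(generate, user_message):
    """Wrap any response-generating function so its output
    always carries the disclosure prefix."""
    return AI_DISCLOSURE + generate(user_message)

# Stand-in for an actual conversational model:
echo = lambda msg: f"You said: {msg}"

print(reply(echo, "hello"))
# prints: [Automated response: you are talking to an AI] You said: hello
```

Note that nothing inside `echo` (the "code") was regulated; the obligation sits entirely on the wrapper the company deploys, which matches the argument that responsibility attaches to the entity, not the code.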
In short, the laws already in place, be they just or unjust, are all that is needed to regulate the use of AI by any company. If there is malice, it will be sorted out; if there is good, it will be sorted out. Okay? If you disagree, please leave a comment; we would love to hear from you. Thank you and God bless.