
CAN WE TRUST AI WITH OUR FUTURE?



AI (artificial intelligence) is still a newborn baby in our family, the Earth. It was in 1936, after multiple trials, that Alan Turing's seed of an idea finally hit the egg and fertilised it. Unlike a human birth, though, this one took far longer, because it is unnatural: humans had to create the right atmosphere for it to be born and to grow quickly. Today we have better machines, more data and a faster Internet, although it is all still a work in progress.


Even today, the human brain is itself a partial black box. After implanting electrodes and other sophisticated gadgets during research, all we have really learnt is that our brain, with its roughly 86 billion neurons, fires electrical signals in one region, sends them back and forth to other regions, and that this traffic leads to a feeling, an experience and an eventual action. In other words, when signals XYZ travel and meet signal ABC, PQR happens. Bingo! It's an algorithm. We still don't know how these billions of travelling signals create a feeling, or why a feeling arises in the first place.



What we do know is that as a human child grows, it learns from its surroundings, but a great deal is taught by parents, teachers and everyone else involved and concerned. What the child learns is not just information but also logic, values, principles, ethics and morals, and all of it reflects the perceptions of the human beings doing the teaching. It is perception, not fact. Everything you think is right or wrong is your perception. If you think killing someone is wrong, that is your perception, and perhaps not the killer's.

We have all read about Freud's theory of the id, ego and superego in our school textbooks, where we learnt that the superego is the moral compass of the personality, upholding the sense of right and wrong. But again, it is built from what a child learns during the phallic stage, roughly 3 to 5 years of age.


Let's come back to our newborn baby, AI. Fun fact: from childhood you learnt all your values from human beings, and so is AI doing now. Human beings who, over hundreds of thousands of years, and especially over the last ~11,000 years (from foraging to the agricultural revolution), created multiple stories and religions of their own, yet never agreed on one. Thankfully, the majority of people affiliated themselves with one of the few prevalent stories and adopted that religion and its values for themselves and their families; and these same human beings are now teaching AI its values, 'what is right and what is wrong'. Wow.


Have you ever played chess against your phone, laptop or any other device? Were you amused, or did you feel a little inferior? That is just the tip of the iceberg for AI, and it was using your mediocre Internet connection and a minuscule amount of processing power from your device. Have you heard of Google's AlphaGo, which beat the world's best Go player in 2017, or of IBM's Watson winning Jeopardy! in 2011? Now imagine such powerful AI combined with something like Google's 54-qubit Sycamore processor (read about it if you haven't) and far more training data. Of course, the first level of training starts with the data that you directly or indirectly feed it.


You would not be wrong to think that we are creating something far superior to our human brains, but that is a small challenge compared with the values, morals and ethics it will be taught through pre-defined algorithms and whatever it will pick up automatically through its neural networks.

Consider a situation a decade from now: a new virus has spread across one of the continents; 40% of that continent's population is affected (about 9% of the world's population); it is contagious and spreads through air, water and human contact. The southern part of the continent has not yet been hit but is at immediate risk, and so is the neighbouring continent. Based on the information provided to it, the AI works out that no medication will be available for several months, and that the spread could lead to an economic collapse and starvation for 17% of the world's population. Will it suggest dropping a nuclear weapon on this cluster to stop the spread, or will it risk 17% of the world's population starving and perhaps dying as a result? Again, what you think is your perception, which may not be the same as that of the person sitting next to you, your family or your colleague, and here we are talking about a suggestion made by a machine. Also, if we will soon trust AI with autonomous driving to take us safely from A to B, can we authorise it to do more than just suggest?
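To make the point concrete, here is a deliberately naive sketch, purely hypothetical and not how any real system decides, of what such a "suggestion" boils down to: a comparison of harm scores whose outcome depends entirely on the weights a human chose to encode. Every population figure, fatality rate and weight below is an assumption for illustration only.

```python
# A naive, hypothetical "harm comparison" for the outbreak scenario above.
# All numbers and weights are assumptions; change the weights and the
# machine's "recommendation" flips.

WORLD_POPULATION = 8_000_000_000

def expected_harm(option: str, value_of_life: float, value_of_starvation: float) -> float:
    """Return a made-up harm score for an option, given the chosen value weights."""
    if option == "contain_by_strike":
        # Sacrifice the infected cluster (9% of the world) to stop the spread.
        lives_lost = 0.09 * WORLD_POPULATION
        return lives_lost * value_of_life
    if option == "do_not_strike":
        # Spread continues: 17% of the world faces starvation, of whom an
        # assumed fraction actually die.
        starving = 0.17 * WORLD_POPULATION
        assumed_death_rate = 0.3  # pure assumption for illustration
        return (starving * value_of_starvation
                + starving * assumed_death_rate * value_of_life)
    raise ValueError(f"unknown option: {option}")

# Two different "value systems" encoded by two different humans.
for value_of_life, value_of_starvation in [(1.0, 0.1), (1.0, 0.9)]:
    scores = {opt: expected_harm(opt, value_of_life, value_of_starvation)
              for opt in ("contain_by_strike", "do_not_strike")}
    recommendation = min(scores, key=scores.get)
    print(f"weights=({value_of_life}, {value_of_starvation}) -> {recommendation}")
```

Run it and the first set of weights recommends not striking while the second recommends the strike. The machine is not discovering what is right; it is handing back the values someone fed it.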

The rosy picture many AI enthusiasts paint talks only about the good things, and I have no doubts about the intent behind those use cases, but don't you think the common man should know both sides? As of now, the perils of AI are not that scary. However, before it is too late, it is time we brushed up our archaic regulations, not just for information and data security but for the use of AI itself, on top of everything else.


 

Fact: you won't remember this article with 100% precision even a few seconds from now, but some machine will remember it in its entirety even 50 years later.

Let’s create a safer and better future together.

