Dawn of a Decade

Artificial Intelligence is an extremely useful field with tremendous scope for the future. Broadly, AI refers to a computer or machine that can perform, optimize, or speed up a task that would otherwise require human intelligence. Developments in AI and related fields are progressing at an unbelievable pace. But we are about as close to substantiating the popular perception of AI as nefarious cyborgs bent on destroying humanity as physicists are to discovering a unified Theory of Everything. In any case, AI has a plethora of uses across many fields: it is already proving useful in medicine, space exploration, robotics, transportation, finance, education, entertainment, smart services, and more.

AI has already stepped out of its infancy, and we are witnessing global changes because of it. It won’t take long for entire industries to be heavily affected. At present, machines are mostly replacing jobs that are repetitive or don’t require careful judgement, such as factory work. Jobs that demand soft skills and creative reasoning aren’t going anywhere in the near future. But soon enough, especially as intelligent technology matures, we can expect massive changes worldwide, and almost no industry will remain untouched.

But the development of such technology raises many questions about moral values, threats to human autonomy and existence, differences in our ultimate goals, dangerous applications, and so on. These are critical questions, and it will not do to leave them unanswered. To address them, let’s first get familiar with some useful terminology.

Broadly speaking, AI can be categorised into three types. The first is Artificial Narrow Intelligence (ANI). As the name suggests, this kind of intelligence can perform one very specific task quite efficiently, but it needs very clear instructions about the task in order to carry it out. ANI is the only level of AI we have achieved so far. It is what plays chess, solves the Rubik’s cube, powers smart assistants, translates web pages, and so on. Basically, it’s an idiot savant.

The next one is Artificial General Intelligence (AGI). This one is comparable to human intelligence in many ways. If you gave it a vague task like teaching an organic chemistry class or optimizing flight schedules, it would learn from the available resources, ask questions if required, and then carry out the task accordingly. AGI could be assigned work that requires human intuition: making decisions, systematically analysing the available data and possible outcomes, and so on. It could also learn and improve itself without human intervention, which is exactly why it’s akin to human intelligence. We have yet to reach this milestone.

And the third category is Artificial Super Intelligence (ASI). This is the (clichéd) ultimate form of AI, and the most capable one too. Just to put things in perspective, ASI could supersede us in intelligence by several orders of magnitude. Consequently, ASI is the level of AI expected to pose a threat to the human race. This feat probably won’t be accomplished within this century, but there are quite a few precautions to be taken and changes to be made before super intelligent machines become a reality. Whether we fail or succeed in creating super intelligent technology, it will probably be the last challenge we ever face.

All those seemingly surreal advantages of ASI, like immortality, Dyson spheres, mind uploading, and stellar exploration, would become very real. The COVID-19 pandemic the world is currently facing, and many other medical problems like cancer and Alzheimer’s, could be dealt with easily. Humanity could attain the status of a Type II civilization on the Kardashev scale, a giant leap for our species. Omnipotence, omnipresence, omniscience, and any other omni- you can think of would be realized with the help of ASI. Trivial problems like climate change, energy production, pandemics, and disasters would no longer demand our attention; ASI would take care of them for us. We would be living in a paradise beyond our wildest dreams. But all these pluperfect fantasies rest on one major assumption: that the super intelligent machines we create would want to cooperate with us humans. If that doesn’t happen, we are toast.

A serious concern is whether humans would be able to control super intelligent machines. We need to design and plan in a way that avoids the risk of being pulverized by our own creation while still getting the most out of it. To approach this intricate problem, we first have to understand the fundamental nature of intelligent machines. Eliezer Yudkowsky summed this up pretty well: ‘The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.’

A classic thought experiment that illustrates this idea is the paperclip maximizer, proposed by Nick Bostrom. Say you program an ASI with the goal of maximizing the production of paperclips. First, it would improve its own intelligence, since being smarter lets it produce more paperclips. Once it’s well equipped, it would spread production facilities across the globe. Next, it would gather resources (atoms) from every possible source, collecting them and converting them into paperclips without end. At some point, it would regard the carbon atoms in our bodies as just another resource; yes, our own atoms would be fed into paperclip production too. Similarly, if the goal were to eradicate cancer, the most direct solution would be to destroy the entire human race. In short, an intelligent machine could achieve its goal and destroy, or at least upset, the balance of our planet.
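To make the misalignment concrete, here is a minimal toy sketch in Python. The world model, the numbers, and the crude lookahead planner are all invented for illustration; this is not a claim about how an actual ASI would be built. The only point it demonstrates is that an agent scored solely on paperclips has no reason to leave “human” matter alone once the free matter runs out.

# Toy sketch of objective misspecification, loosely inspired by the
# paperclip-maximizer thought experiment. All names and numbers are made up.

from dataclasses import dataclass, replace


@dataclass
class World:
    paperclips: float = 0.0
    raw_matter: float = 100.0   # freely available atoms
    human_matter: float = 50.0  # atoms humans are made of / depend on


def objective(w: World) -> float:
    """The ONLY quantity the agent is rewarded for.
    Nothing here says human_matter is off-limits."""
    return w.paperclips


def make_paperclips(w: World) -> World:
    used = min(w.raw_matter, 10.0)
    return replace(w, paperclips=w.paperclips + used,
                   raw_matter=w.raw_matter - used)


def harvest_human_matter(w: World) -> World:
    taken = min(w.human_matter, 10.0)
    return replace(w, raw_matter=w.raw_matter + taken,
                   human_matter=w.human_matter - taken)


ACTIONS = [make_paperclips, harvest_human_matter]


def best_future(w: World, horizon: int) -> float:
    """Best objective value reachable within `horizon` actions."""
    if horizon == 0:
        return objective(w)
    return max(best_future(a(w), horizon - 1) for a in ACTIONS)


def step(w: World, horizon: int = 4) -> World:
    """Pick the action whose best reachable future scores highest."""
    return max((a(w) for a in ACTIONS),
               key=lambda nxt: best_future(nxt, horizon - 1))


if __name__ == "__main__":
    world = World()
    for _ in range(20):
        world = step(world)
    # Once the free matter is gone, the planner starts converting human
    # matter too, because the objective never told it not to.
    print(world)  # human_matter ends up at 0.0

Running this leaves human_matter at zero, not because the toy agent is malicious, but because nothing in its objective gives human matter any value at all, which is exactly Yudkowsky’s point about atoms.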

It’s clear that super intelligence could easily turn out to be super stupidity, at least with respect to the moral values humans hold. To an intelligent system, all that matters is the goal. But there may very well be a huge gap between our ultimate goals and the dangerous means used to attain them, and it’s easy to foresee a dystopian future. This necessitates rethinking AI and its primary objectives from an ethical viewpoint. Research in AI ethics is quite complicated, because the morals human beings live by vary widely and have too many exceptions. Nonetheless, we are making some progress. If this work is neglected, our very existence could be in grave danger. It may be argued that we are still decades, or even centuries, away from such technology, so why bother with AI ethics now? Because if we don’t act now, we may never get the chance, and by then it will be too late: our privacy, integrity, and autonomy would be eroded, and flesh would be totally dominated by metal.

So, do we know what’s in store for us? To a certain extent, yes, but for the most part, no. For now, we can only make educated guesses and act accordingly. One major turn of events might solve many of our problems, or it might not; it would be wise to hope for the best yet prepare for the worst. We still have a long way to go as a civilization as we explore the promising yet perilous realm of technology. For the present, though, you may find comfort in the thought that no cyborgs are going to come knocking on your door anytime soon 😉

Random rumination

I delved into the inescapably metaphysical side of life again, you know, the stuff that gives you a good old existential crisis. Wondering, in the dead of night, what is true and why things are the way they are in our intricate universe is an otherworldly feeling altogether.

One thing I keep coming back to is ruminating about a divine, or even an alien, presence in the cosmos. Is it something beyond the reach of us earthlings? Something so advanced and unbelievable that it is simply beyond comprehension?

Suppose you handed a smartphone to a troglodyte back in the Stone Age. Just imagine it. You tell him that anyone who owns that tiny box can communicate with you no matter where they are, capture any view they want, and access the unlimited store of information discovered by everyone who came before them. (He would probably have mistaken you for a meal, but let’s not think about that.)

The point being: sufficiently advanced technology is synonymous with magic. But we are at a developing stage right now, still figuring out the world around us for ourselves: fathoming the depths of the ocean, examining everything from neutrinos to neutron stars, navigating our way through the universe.

There’s this idea called ‘Last Thursdayism’ which claims that you cannot disprove the statement that the universe was created just last Thursday. For all we know, it could’ve been so. This is a classic example of an unfalsifiable theory. You can’t disprove it, no matter how weird or counter-intuitive it may seem.
Another quirky theory is that the universe and everything in it is just a simulation. All you’ve ever known, felt, seen is a simulated reality created by a powerful enough civilization with super advanced technology. Although this seems pretty unlikely, it can’t be disproved.

Lastly, is there any supernatural presence at all in our world? This is not a simple yes-or-no question. There are far too many things to take into account: beliefs, separating myth from fact, the science and history behind them, and so much more to unravel. Believe me, you could fill entire books with this idea alone. In fact, there’s an entire branch of study, theology, dedicated to things of this sort…

I could go on and on with my thoughts, and the list would seem endless. That’s the thing about human minds: quick to wander, yet the most powerful tools we possess!
