Some thoughts on Artificial Intelligence hypotheticals. Most of these answers boil down to "we are decades, if not centuries, away from building Artificial General Intelligence", but I hope you find some interesting points along the way.

Q. How would machine learning answer the question, “Would Jack realistically have died aboard the Titanic”?

Originally answered here: https://qr.ae/pNz7lw

You cannot really make machine learning predictions (or any other model-based or intuition-based predictions) for events during a black swan event like the sinking of the strongest ship of its time (or any ship, in fact) by an iceberg.

You can use some crude approximations, such as the death rate of passengers from the lower decks in ship accidents, but that likelihood would be a useless statistic in the case of the Titanic. During a shock/rare event/black swan, luck [randomness] becomes much more powerful than in regular day-to-day life. Two-Face is a psychopath, but his coin-flip philosophy works in extreme circumstances.
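For what it is worth, here is a minimal sketch of that crude approach: fit a simple classifier on the classic Titanic passenger manifest and read off a survival probability for a hypothetical "Jack". The feature choice and the hypothetical passenger row are my own assumptions for illustration, and as argued above, the resulting number says little about any individual's fate in a black swan event.

```python
import pandas as pd
import seaborn as sns
from sklearn.linear_model import LogisticRegression

# Classic Titanic passenger manifest (shipped with seaborn's example datasets).
df = sns.load_dataset("titanic").dropna(subset=["age"])
X = pd.DataFrame({
    "pclass": df["pclass"],
    "age": df["age"],
    "fare": df["fare"],
    "male": (df["sex"] == "male").astype(int),
})
y = df["survived"]

model = LogisticRegression(max_iter=1000).fit(X, y)

# Hypothetical passenger roughly matching "Jack": 3rd class, 20 years old,
# cheap ticket, male. The exact values are assumptions for illustration.
jack = pd.DataFrame({"pclass": [3], "age": [20], "fare": [7.25], "male": [1]})
print(f"Estimated survival probability: {model.predict_proba(jack)[0, 1]:.2f}")
```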

Q. Understanding that a model is only as good as its assumptions, can geneticists' knowledge of mapping the human genome and high-performance supercomputers be used to quickly assess long-term effects of vaccines?

Originally answered here: https://qr.ae/pNzjGg

They cannot. Human DNA is so complex that it cannot be modeled this way. The physical universe is large but relatively weakly interconnected (and thus runs on simple fundamental rules), so there are equations that govern its behavior and you can predict what will happen in it: which asteroid will be where in 5 years, which stars will go supernova, when our Sun will die. You still cannot predict some rare black swan events, though.

The human brain and DNA, ecology, the economy, and society are complex systems: they don't follow fixed rules and are highly interconnected and interdependent, so you can identify trends, but it is hard to predict the trajectory of the system.

So while the trajectory of a particle can be written as an equation with 3–4 parameters and achieve 99%+ accuracy in the real world, an equation to guess who will default on their loan will have thousands of parameters and will still be, what, maybe 80% accurate.
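To make the contrast concrete, here is a toy sketch with entirely synthetic numbers (the "80%" figure above is illustrative, and so is everything in this snippet):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Simple system: a projectile trajectory, fully described by 3 parameters.
v0, angle, g = 20.0, np.radians(45), 9.81          # initial speed, launch angle, gravity
t = np.linspace(0, 2 * v0 * np.sin(angle) / g, 50)
x = v0 * np.cos(angle) * t
y = v0 * np.sin(angle) * t - 0.5 * g * t**2        # exact: essentially 100% "accurate"

# Complex system: synthetic "loan default" data made of many weak, noisy signals.
rng = np.random.default_rng(0)
n, p = 5_000, 1_000                                 # thousands of parameters
X = rng.normal(size=(n, p))
logits = X @ rng.normal(scale=0.05, size=p) + rng.normal(scale=1.0, size=n)
defaults = (logits > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, defaults, random_state=0)
clf = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)
print(f"Default-model accuracy: {clf.score(X_te, y_te):.2f}")  # plateaus well below 1.0
```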

To predict a complex system like DNA, you need humongous amounts of data to generalize rules from, maybe more than is available in the Universe. When we have much more data and computational ability than we have right now, we might be able to do it more accurately than we do now, though never with 100% accuracy. As of today, we can only guess, hope for the best, and make theories (which sometimes generalize), but no one can predict.

Q. What’s your opinion on Elon Musk believes AI poses a threat to humanity and is most worried by Google-owned DeepMind project? Do you agree with this?

Originally answered here: https://qr.ae/pNcniz

I don’t agree with Elon Musk.

There is a difference between Artificial General Intelligence (Artificial general intelligence - Wikipedia), which we might possibly fear (and which is probably what Elon Musk fears), and the currently trendy technique of Deep Learning. Deep Learning can at best be called Artificial Special Intelligence, and there is no indication that the AI technology we have right now can be built up into sentient AI or AI that can challenge humanity.

The other possible problem one might see is job loss due to AI. I don't think it is going to look like a human doing a job until 20 November 2020 and an AI taking over on the 21st. The transition will be gradual and slow enough that humans can adapt, learn a new skill, and switch to a new job. In my view it will be more like AI first taking over 10% of the workload, then 30%, then 80%, and eventually needing humans only as supervisors.

Possibly one thing that Elon Musk and governments might be worrying about is the monopoly Google (and other big tech firms) have over data and AI infrastructure. I think that with the latest antitrust lawsuits, legislators are addressing such concerns.

Google faces its 3rd major antitrust lawsuit as Texas and other states take the company to court over its ad practices

Q. Would artificial intelligence be possible if computer programming was not invented?

Originally answered here: https://qr.ae/pNZrHN

Of course not. Not in the way humanity formalized it after inventing computers, at least.

The entire premise of AI is computers being programmed to do tasks that are hard for them. That programming can be done by humans (traditional AI, intelligent systems) or by computers themselves (Machine Learning).
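As a toy illustration of that split (my own example, not from the original answer), here is the same task, "flag a suspicious transaction", solved both ways:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Route 1: a human writes the program (traditional AI / expert system).
def flag_by_rule(amount: float, hour: int) -> bool:
    return amount > 1000 or hour < 5          # rule chosen by a human expert

# Route 2: the computer "writes" the program by fitting it to data (Machine Learning).
rng = np.random.default_rng(0)
amounts = rng.uniform(0, 2000, 500)
hours = rng.integers(0, 24, 500)
labels = [(a > 1000 or h < 5) for a, h in zip(amounts, hours)]   # synthetic labels
model = DecisionTreeClassifier().fit(np.c_[amounts, hours], labels)

print(flag_by_rule(1500, 13), model.predict([[1500, 13]])[0])
```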

Q. Should doctors be replaced by artificial intelligence and surgical robots?

Originally answered here: https://qr.ae/pND8g7

I have written about this in many answers previously:

Muktabh Mayank’s answer to What’s your opinion on Elon Musk believes AI poses a threat to humanity and is most worried by Google-owned DeepMind project? Do you agree with this?

Muktabh Mayank’s answer to As a result of the Fourth Industrial Revolution, will any job be lost?

Transitions are not sudden and there will be no "replacements". Unlike in a Hollywood movie, a doctor won't wake up one day, reach the office, and find out that an AI will now do what they were doing just a day earlier. What will happen is that doctors' jobs will evolve to use AI.

Will doctors be able to take care of more patients per doctor using AI? Yes.

Will doctors have to use many AI gadgets in their practice in the future? Yes. Many AI tools will be incorporated into their workflow, just as the stethoscope and X-rays are today.

Will salaries change? Yes. It's hard to predict exactly what will happen, but I think the effect will be some very rich doctors and others slightly worse off.

“Should doctors be replaced?” is a question with a false premise: that we have AI systems that can replace doctors. We don't, and won't for the foreseeable future.

Q. How much would I have to manage to hijack a facial recognition system?

Originally answered here: https://qr.ae/pNWORI

The foolproof way of hijacking one is an armed invasion of the data center where the face recognition system runs.

Or some kind of social-engineering-based hack to delete your records and photos from the system forever.

But I guess you are more interested in what one can do as a citizen to fool these systems, since we cannot really stop face ID systems with a large-scale assault or hack. Here are some ways to achieve that:

The simplest way is for everyone to just wear masks: masks which hide the wearer's face or have someone else's face printed on them. However, that is a bit odd (outside the pandemic timeline, of course!).

Another (less suspicious) method is to use adversarial attacks or image poisoning to guard against facial recognition. Fawkes is an open-source, free tool you can use to poison your images before uploading them to social media, so that facial recognition systems find it hard to identify them.

https://youtu.be/AWrI0EuYW6A
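For intuition, here is a minimal sketch of the adversarial-perturbation idea itself (a generic FGSM-style attack against a placeholder image classifier; this is not Fawkes' actual algorithm, and the model and image are stand-ins):

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Generic FGSM-style perturbation: nudge the pixels in the direction that
# increases the classifier's loss on its own current prediction, so the
# prediction flips while the image looks essentially unchanged to a human.
model = models.resnet18(weights="IMAGENET1K_V1").eval()
image = torch.rand(1, 3, 224, 224, requires_grad=True)   # placeholder for a real photo

original = model(image).argmax(dim=1)                    # label the model currently assigns
loss = F.cross_entropy(model(image), original)
loss.backward()

epsilon = 0.03                                           # small perturbation budget
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()
print(original.item(), model(adversarial).argmax(dim=1).item())  # labels often disagree
```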

However, this is for fooling face recognition in the virtual world; how do you fool the algorithms working in the real world on CCTV feeds?

There are hardware-based adversarial attacks (adversarial patches, themselves created using other Deep Learning techniques) that can render Deep Learning face recognition systems ineffective.

https://youtu.be/MIbFvK2S9g8

Paper Here: https://arxiv.org/pdf/1904.08653.pdf

So yeah, maybe invest in one of these patches if you don't want to be seen. This is not foolproof, however, and can fail often.

Q. Why is artificial intelligence so power intensive?

Originally answered here: https://qr.ae/pNzjGf

Anything ambitious is power hungry! If you want to learn some 100 billion numbers to memorize the whole of the human knowledge ontology (GPT-3 is basically a clever way to memorize Wikipedia), you need power to crunch them. Similarly, looking at billions of images to automatically figure out every possible object, from what a cat is, to what a bird's wing is, to what the shape of an aircraft is, needs number-crunching ability (BYOL). I don't think this should even be a question.

Humanity's energy budget has not changed by a big margin for some time, growing only steadily, yet Deep-Learning-based advances are recent because until about 10 years ago we lacked hardware that could consume energy vigorously and crunch numbers fast enough, learning in days what would otherwise have taken years (GPGPUs / TPUs / Graphcore etc.). These devices will keep getting more powerful and more efficient per compute cycle, increasing our capacity to crunch numbers even further. The energy requirement will only go up.
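As a rough back-of-envelope (my own numbers, using the common "training FLOPs ≈ 6 × parameters × tokens" approximation plus assumed hardware figures, not an exact accounting of any real training run):

```python
# Why training a ~100-billion-parameter model is power hungry.
params = 175e9            # GPT-3 scale
tokens = 300e9            # rough number of training tokens
flops = 6 * params * tokens                     # ~3.15e23 FLOPs

gpu_flops_per_s = 100e12  # assumed ~100 TFLOP/s sustained per accelerator
gpu_power_watts = 400     # assumed per-accelerator power draw

gpu_seconds = flops / gpu_flops_per_s
energy_kwh = gpu_seconds * gpu_power_watts / 3.6e6
print(f"~{gpu_seconds / 3.154e7:,.0f} accelerator-years, ~{energy_kwh:,.0f} kWh")
```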

The question should rather be how to make energy cheap and eco-friendly so that we can do ambitious stuff. There is only so much energy one can produce without turning the Earth into a gas chamber by burning its limited trees and dinosaur juice.

If we want cheaper AI algorithms that improve human life, particle accelerators that can solve the mysteries of physics, or frequent air travel, we will need to adopt energy sources that make energy essentially free for all humans. The Kardashev scale (Kardashev scale - Wikipedia) expects a Type 1 civilization to capture most of the solar energy reaching Earth (for example, using microwave-emitting solar satellites) and to master fusion (not fission) reactors and antimatter-based energy. That is the point at which so much energy will be available at our (human) discretion.
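For a rough sense of scale (my own back-of-envelope using the standard solar constant; the "today" figure is an assumed ballpark):

```python
import math

# Order of magnitude for a Kardashev Type 1 energy budget:
# all solar power intercepted by Earth's cross-section.
solar_constant = 1361            # W/m^2 at the top of the atmosphere
earth_radius = 6.371e6           # m
intercepted_power = solar_constant * math.pi * earth_radius**2   # ~1.7e17 W

world_power_today = 2e13         # W, rough current average primary power use (assumed)
print(f"Type 1 budget ~ {intercepted_power:.2e} W, "
      f"about {intercepted_power / world_power_today:,.0f}x today's usage")
```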

Q. Can artificial intelligences become fully aware of themselves if they cannot be sentient (overcome their programming)?

Originally answered here: https://qr.ae/pNcn9b

Unfortunately (actually, fortunately), we are so far away from sentient AI that we have no clue what it would look like or how it would behave. Anyone who tells you that they can predict anything about sentient AI (or that we are anywhere close to it) is simply misinformed.

For example, one of the answers to this very question brings up the story of a Facebook AI that learned to communicate and had to be shut down. That is plain fake news, fake news which mainstream media reported and which spawned all kinds of conspiracy theories. FACT CHECK: Did Facebook Shut Down an AI Experiment Because Chatbots Developed Their Own Language?

Think about it: well-known news outlets, some of which you actually swear by and present as proof in your online debates, put out fake news. This has started a trend where charlatans can shine as "AI leaders" by talking about sentient AI as if they were experts in the field. The truth is that no one knows (especially these charlatans) what sentient AI will look like in the future. It's all woozle effect and fiction.

The only intelligence we know of (and in fact we don't know it very well) is human intelligence, and we know that humans were sentient long before they understood their own programming (we still haven't fully understood our DNA). I don't think this can be extrapolated to machines, as humans are the ones building AI and they might want to address the defects they themselves have. But yeah, as I said, we are decades if not a century away from such decisions and thoughts, so it is very hard to speculate about anything.

Q. If a machine reaches HLMI (Human Level Machine Intelligence), will it already outperform humans, since the average speed of signal transfer inside a microprocessor is about 67% of the speed of light, whereas that of our brain cells is 120 m/s?

Originally answered here: https://qr.ae/pNWgkS

Not necessarily.

The human brain is actually much, much more efficient than the hardware we have today, despite the low signal speed. You cannot put a jet engine on a 1970s car and expect it to beat a 2020 Tesla Model S. In an information processing system (in fact, any system), the system is only as fast as its slowest component, the weakest link. Your computer's CPU is much faster than its memory and way faster than its hard disk, and for most tasks you are limited by the speed of the hard disk despite having a very fast CPU. Basically, as of today, the least optimized component of the human brain is still better than the least optimized component of our computing systems.
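A toy sketch of that weakest-link point, with made-up numbers: even if one stage of a pipeline is millions of times faster, end-to-end time is dominated by the slowest stage.

```python
# Toy pipeline: total time per item is dominated by the slowest stage,
# no matter how fast the other stages get. All numbers are invented.
stages_ns = {
    "cpu_compute": 1,           # very fast
    "memory_access": 100,       # slower
    "disk_read": 10_000_000,    # slowest stage: the bottleneck
}

total = sum(stages_ns.values())
bottleneck = max(stages_ns, key=stages_ns.get)
print(f"total per item: {total} ns; {bottleneck} alone accounts for "
      f"{stages_ns[bottleneck] / total:.1%} of it")
```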

We don't yet know whether we will have hardware as efficient as the human brain in every aspect, so we cannot predict whether an HLMI would already be better than humans. As of today, the answer is: "we don't know".

Q. Could we theoretically build an AI that could build another AI more intelligent than itself, and thus continue the cycle? Are there any limitations to this?

Originally answered here: https://qr.ae/pNzjtQ

Theoretically, yes. That is the point of the "intelligence singularity": Technological singularity - Wikipedia. An intelligence which can recursively and efficiently build an intelligence better than itself would trigger the creation of intelligence far superior to humans in a very short time. We are very, very far from having any hardware or software remotely capable of doing so, but I think it is going to look somewhat like Ray Kurzweil predicts: The Singularity Is Near - Wikipedia. The most efficient machines on the planet (human bodies and brains) are what will possibly achieve this feat, by implementing new technologies like ASI to enhance themselves, so we will have transhumans, smarter than before. "Running TensorFlow models on the human brain" is the way to go, you know :).

Remember, all this is very theoretical and may not even happen in the 21st century (or ever), so it is not possible for us to imagine what things would look like then.

Here is what early 20th century patents imagined the future to look like: Imagining the future: early 20th century US patents - in pictures. A lot of it was never even invented, and air travel and submarines turned out far from what they imagined.

You can expect the future to have some trends in line with what we think today, but the details might be very different from what we can imagine.

Q. How is deep learning not the future of artificial intelligence?

https://www.quora.com/How-is-deep-learning-not-the-future-of-artificial-intelligence?top_ans=280928643

Humans attach emotional responses to arbitrary ontologies. How can we assume that "Deep Learning" has a fixed definition, or that "the future of AI" means something quantitative?

How do we know Deep Learning will not evolve and will stay the way it is? Both in terms of hardware efficiency and learning methodology, Deep Learning in 5-10 years might be very different and scale very differently from what it is today. No one in 2012, when I started working on it, would have imagined GPT-3 or CLIP or the new awesome deepfake results. How do we know we won't invent better hardware to run the current learning algorithms much more efficiently, or invent new learning algorithms that make Deep Learning work at super scale? We may invent something, we may not. Deep Learning methods might give us an AGI all on their own, or they might become like SVMs, once a hyped technology but hardly used today. There is no way to know.

Will the final AGI (https://en.wikipedia.org/wiki/Artificial_general_intelligence) have deep neural networks as a component? We don't know; we cannot even guess. It may, it may not. It is just so far ahead in the future that humans with their current knowledge cannot imagine it.

Will Deep Learning find commercial applications in the near future? Yes. You can be sure about that.

In my opinion, the right approach is to start using Deep Learning, in its current form, to the advantage of the human race, and to stop dwelling on abstract concepts like "the future of artificial intelligence" which we cannot imagine given our current knowledge. Some questions just don't have an epistemic answer.

Q. Is AI an existential threat to humanity?

https://qr.ae/pGvtz5

No! We are just too influenced by Hollywood when we look at AI and the fears around it.

Humanity is sturdier and smarter than most of us think it to be. Local shocks like a small nuclear accident or an AI drone going berserk and killing people are painful possible scenarios, but they will not end humanity. Humanity has very powerful tools [the adaptability of human beings, their will to survive, and very strong communication mechanisms] which can essentially beat every threat apart from the following types:

  1. Risks which humanity continues to take [and doesn't fear] despite knowing they are detrimental, until it is too late. Climate change falls in this category: we all know it affects us and is happening, but we are still not doing enough. An example from the past is the 2008 financial crisis, where financial companies in America kept issuing rotten mortgages despite knowing many of them were bad.

  2. Risks which can start and spread faster than humans can spread information. COVID-19 was a risk like this [or think of another virus like the coronavirus but with long-term effects on humans, a Zika virus that spreads like COVID if you will]. Such a virus is a systemic risk and needs to be handled at a very small scale, possibly before it starts, otherwise it would essentially cripple humanity. Thanos's deadly finger snap is another example of such a risk: before humans could understand anything, half the population was gone! A nuclear war between the US and Russia would set off a similar chain of dangers which humanity could not possibly stop.

Contrast AI, nuclear energy, or CRISPR technology with both types of doomsday scenario. They are surely not type 1: common sense already alerts humans to their negatives. We have evolved for billions of years to quickly sense emerging danger. We would never, for example, run a CRISPR experiment on the entire human race without a lot of trials [the same goes for mRNA vaccines]. We will never install so many nuclear stations that their failures become interdependent, and we will never put an AI which can potentially harm us in charge without an emergency STOP button:

https://www.quora.com/How-do-we-fight-with-machines-if-ever-war-occurs/answer/Muktabh-Mayank

We are too aware of AI risks for it to become an existential threat, unlike climate change.

There might be a few incidents of self-driving cars killing passengers, an army drone shooting innocent people, or an AI robot rebelling against its master, and some of these might have painful consequences, but they will not harm humanity as a whole. News of isolated incidents spreading like wildfire and putting the rest of humanity on alert is one of our biggest superpowers.

Earthquakes, tsunamis, and volcanoes fall in the same category of risk as one rogue AI: local incidents with painful consequences, but humanity emerges stronger on the other side.

As of today, our AI research is far from producing an AGI (https://en.wikipedia.org/wiki/Artificial_general_intelligence) [decades or maybe centuries away], and AI is far from being in charge. Most "dangers" of AI today are like the dangers from other disruptions: a new elite emerging, some people suddenly becoming very important/powerful. The closest historical analogues are the Mongols' cavalry warfare, Spain's advantage over the Incas, or gunpowder.

https://en.wikipedia.org/wiki/Gunpowder_empires

Calling AI a risk in today's scenario is just too far from the truth. There are some people who like the current status quo a lot and are afraid of losing it; that is not risk, it is aversion to change.

Q. If an extremist group gets hold of an artificial intelligence application, what harm can they inflict on the world?

https://qr.ae/pG73vE

Not a lot! Deploying AI [the type of AI systems we have right now, and will have for a few decades IMO] requires very strong reach and infrastructure. A simple AI system on its own is nothing.

For example, if I am a terrorist and have an AI system which can recognize every person in the world [that system still doesn't exist, by the way], I cannot do anything substantial with it until I get access to all the world's CCTV cameras, on whose feeds the AI system has to run to yield any insight at all. A terrorist having an advanced AI system AND perpetual access to all CCTV cameras is a zero-probability event.

As a terrorist I could have an AI system that generates fake news articles, but how would I surface those articles among terabytes of information and make them viral enough to cause any disruption, without platforms blocking me and counter-operations making people aware of the fake claims? It's very hard.

As of now, we are far away from any AI doomsday scenario. It will require a much more interconnected and automated world than today for AI to be a possible danger.

Q. Movies like Ex Machina show scientists working in an isolated location to create a human-like artificial intelligence. Do these kinds of scenarios take place in the real world? What do you think?

https://qr.ae/pGXEun

That is not really possible. One scientist working in isolation and creating Artificial General Intelligence makes a good movie story, but it is not how the research world works. Unlike what we humans think [and want], a single human can only achieve so much!

Research is a collaborative, [sometimes boring], slow-yielding compounding of mini-innovations introduced over time. Small mini-innovations slowly accumulate until they become something noticeable. People outside the field often see it as magic because they don't see the mini-innovations, only a compounded application of them that is suddenly good enough for real-world use.

Here I try to describe how the current high accuracy of Deep Learning architectures developed slowly:

https://www.quora.com/Are-neural-net-architectures-accidental-discoveries/answer/Muktabh-Mayank

Deep Learning, the most promising technique as of now, has so far been used to make specialized AI and not an AGI [ https://en.wikipedia.org/wiki/Artificial_general_intelligence ], and IMO the day when we will have something remotely like Ex Machina is still far away.