Some thoughts on prevalent AI Luddism [From Quora answers]

I have tried to address some common fears people have about AI eating their jobs in my Quora answers.

My answer to “Artificial intelligence will or may lead to an unanticipated crisis if it starts to code itself? How true is it?”

Originally answered here: https://qr.ae/TSC6dR

Chill!! People asking such questions need to, first of all, stop worrying and stop reading media articles that try to scare them.

We are decades (maybe centuries) away from totally automating tasks which require such a high level of mental capability. Present Artificial Intelligence can at best approximate tasks which require 2–3 seconds of human decision making, that is all. Also, most AI we have developed right now is not Artificial general intelligence - Wikipedia (the kind you watch in movies) but task-specific algorithms. We have not even created Terminator/Skynet 0.0.1 yet.

To answer your question directly, it might be a problem (with some low probability) if AI starts implementing a better AI by itself, thus increasing its capability at a superhuman rate and making humans obsolete. But this is like hundreds of other low-probability doomsday scenarios, like an alien civilization attacking and colonizing us or the Earth's capability to support life coming to an end. So there is no point worrying about it in real life.

My answer to “What is Deepfake technology?”

Originally answered here: https://qr.ae/pNnxiL

Deepfake is essentially creating fake images/videos/audio/text using Deep Learning techniques.

For example, if you remember the recent viral app which created images of how a person would look when old or with makeup applied, the images it was creating were fake images generated by AI.

However, the term “Deepfake” is generally used to describe negative uses of such technologies.

So, using generative Deep Learning algorithms, you can have someone’s face placed in an objectionable photograph, a video in which someone is made to say things they never said, fake audio in someone’s voice, or just a fake news article.

For example, there are websites where objectionable videos of celebrities are created by morphing their faces into existing pornography using Deep Learning. There is also the danger of such technology being used to create a video of a politician saying something they never said, in order to sway public opinion.

It isn’t that such fake photographs, videos or audio cannot be created by humans. They can be, and have been created in the past and even now. The difference is that Deep Learning algorithms bring “automation” to it, that is, fakes can now be created in large quantities without human involvement.
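To give a feel for how little effort that “automation” takes, here is a minimal sketch using the Hugging Face transformers library and the public GPT-2 model (the prompt and settings are my own illustrative choices, not from any real misuse case):

```python
# A small illustration of the "automation" point above: generating plausible
# fake text takes only a few lines with a pretrained model. Assumes the
# Hugging Face `transformers` library and GPT-2 weights are available.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
samples = generator(
    "Breaking news: scientists announced today that",
    max_length=60,           # cap the length of each generated sample
    num_return_sequences=3,  # produce three different fakes from one prompt
    do_sample=True,          # sample so the three outputs differ
)
for sample in samples:
    print(sample["generated_text"], "\n---")
```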

Here are some examples which you can look at to understand how AI can generate video/image/text samples:

Generating Videos Using Deep Learning

First order model video: https://raw.githubusercontent.com/AliaksandrSiarohin/first-order-model/master/sup-mat/vox-teaser.gif

Generating Images Using Deep Learning

Semi-supervised StyleGAN: Disentanglement is through mutual information loss. Propose new metrics for measuring disentanglement in generator. Take-away: small amount of supervision is enough for disentanglement and high-res generation @NVIDIAAI pic.twitter.com/FDh26WbiQQ

— Anima Anandkumar (hiring) (@AnimaAnandkumar) March 12, 2020

Animesh Garg on Twitter

Generating Text Using Deep Learning

Better Language Models and Their Implications

My answer to “Should we be worried about deep fakes and the misuse of facial recognition?”

Originally answered here: https://qr.ae/Tx6CUl

I think the only aspect of Artificial Intelligence that can disrupt our world for a very long time to come is DeepFakes. The reason is not that the technology is itself very harmful; the reason is that humans are emotional and don’t really put in a thought before believing and sharing videos or voice messages. You could basically generate photos, videos and voice clips of a person doing/saying whatever you want using these techniques. I will not be surprised if fake media created by these technologies is used to instigate violence or bad faith. In a world where fake news is widespread and fake-news busters can spread fake news too, it is going to be a big challenge to tackle Deepfakes.

The fear of misuse of facial recognition is actually somewhat fearmongering and Luddism. It probably makes the world somewhat less private, but every technology since the invention of the camera has done so, and it is a trend that has been continuing for years. I think the probability of Face Recognition being used for bad purposes is not that high. That said, I think there are already regulations in place in various countries to make sure Facial Recognition cannot be used to harm people.

My answer to “Why Deep mind AI of Google failed to answer 1+1+1+1+1+1+1?”

Originally answered here: https://qr.ae/pNnedF

Many assumptions in the question appear wrong:

  1. DeepMind is a company; it’s not an AI.

  2. The way you refer to AI, it feels like you are talking about Artificial general intelligence – Wikipedia.

    Most AI we have in the world (and most of what DeepMind builds) is Weak AI - Wikipedia.

  • AGI (Artificial General Intelligence), a theoretical future in which computers can learn any new task presented to them like humans do, is decades if not centuries away. Weak AI means a computer needs to be programmed to perform one task, like detecting cavities in teeth or playing Atari games.

  • A Weak AI variant for problems like “1 + 1 + 1 …” already exists: simple mathematical expression parsing and evaluation (see the sketch below). What DeepMind was trying to do was use a Weak AI model generally used to extract features from language (LSTMs are the algorithm) to solve mathematical problems. All the result tells us is that Weak AI algorithms built to extract features from natural language cannot learn to solve mathematical problems as of now.
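To make that concrete, here is a minimal sketch (illustrative Python, not DeepMind’s code) of how trivially an expression like “1+1+1+1+1+1+1” is handled by classical parsing and evaluation, with no learning involved:

```python
# A minimal evaluator for "+"/"-" expressions such as "1+1+1+1+1+1+1".
# Illustrative only: the point is that this task needs classical parsing,
# not machine learning.
def evaluate(expression: str) -> int:
    total, sign, digits = 0, 1, ""
    for ch in expression.replace(" ", "") + "+":  # trailing "+" flushes the last number
        if ch.isdigit():
            digits += ch
        elif ch in "+-":
            total += sign * int(digits or "0")
            sign, digits = (1 if ch == "+" else -1), ""
        else:
            raise ValueError(f"unexpected character: {ch!r}")
    return total

print(evaluate("1+1+1+1+1+1+1"))  # 7
```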

My answer to “What do you think about the ‘Data Scientists Automated and Unemployed by 2025’ article?”

Originally answered here: https://www.quora.com/What-you-think-about-Data-Scientists-Automated-and-Unemployed-by-2025-article/answers/16871059

The so-called Data Scientists who use off-the-shelf Machine Learning algorithms on simple data WILL be automated pretty soon; in fact, Amazon/Azure/Google ML services already do so. To be frank, calling sklearn fit/predict with different parameters is not really hard and is kind of repetitive.

The real world is slightly different. A lot of real-world Data Scientists are actually Data Mungers, who clean/transform the data (which is really hard to automate), and then hyperparameter adjusters, who apply black-box algorithms on top of it (which is easy to automate).

Other easy, repetitive tasks, which involve simple repetitive work and involve humans for the heck of it (say Data Entry Operators or Customer Support chat people), are being automated at a steep rate too. Surprisingly, some aspects of art (some forms of painting, music, lyrics writing) are being automated too, which I thought was impossible.

The day-to-day tasks of actual Data Scientists, say those working at Facebook AI Research or Google Brain (writing different Artificial Intelligence agents), will be among the last to be automated, as they are generally very hard. So I think they will be among the last in the line of losing the “survival of the fittest” battle to Machines. A time when a Machine can deduce algorithms is still far off in the future.
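As an illustration of how mechanical the “hyperparameter adjuster” work is, here is a minimal sketch (plain scikit-learn on a toy dataset; the models and parameter grids are arbitrary choices of mine, not any particular AutoML product) of the loop such services automate:

```python
# A sketch of the repetitive "hyperparameter adjuster" loop: try a few
# off-the-shelf sklearn models with small parameter grids and keep the best.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    *load_iris(return_X_y=True), random_state=0
)

candidates = [
    (LogisticRegression(max_iter=1000), {"C": [0.1, 1.0, 10.0]}),
    (RandomForestClassifier(random_state=0), {"n_estimators": [50, 100]}),
]

best_score, best_model = 0.0, None
for model, grid in candidates:
    search = GridSearchCV(model, grid, cv=3).fit(X_train, y_train)
    if search.best_score_ > best_score:
        best_score, best_model = search.best_score_, search.best_estimator_

print(best_model)
print("held-out accuracy:", best_model.score(X_test, y_test))
```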

My answer to “What will be the need for humans if we automate everything with machine learning and robotics? What will humans do in the future?”

Link to original answer here: https://www.quora.com/What-will-be-the-need-for-humans-if-we-automate-everything-with-machine-learning-and-robotics-What-will-humans-do-in-the-future/answer/Muktabh-Mayank

Why is it necessary for humans to work? Do any other species of plants and animals work jobs? Don’t other species live? Is the only purpose of human existence to make Excel and PowerPoint sheets or write Python code? These are deep metaphysical questions which are more philosophical and have no real (only theoretical) answers. So this is not my answer.

The more practical answer is that automation has always been coming; jobs have always been automated. Shoemakers were automated away by factories, and chariot drivers and horse tamers were automated away by cars. What has always happened is that new jobs were created every time (factory workers, car drivers) requiring a smaller number of humans, so that humans could engage in more intellectually demanding fields. The same is happening now: with incoming AI technology, jobs are changing. The difference is that while earlier it took generations for a change to show, people will now see drastic changes within their lifetime (that’s probably the only challenge I see: people will not really be able to “settle” on a career for all their lives pretty soon and will need to learn and unlearn many times in their lifetime). The (wrongful) fear exists because we see only one aspect of it, not the other side. AI does automate stuff, but the sum of human achievements is not a zero-sum game; it will grow larger. While you might no longer be required to enter data into Excel by searching the web (one random example) due to AI, the same person can then take on a bigger intellectual challenge than Excel data entry (like, say, helping solve cancer).

My answer to “Will data science and machine learning get automated, leading to lesser opportunities for data scientists?”

Originally answered here: https://www.quora.com/Will-data-science-and-machine-learning-get-automated-leading-to-lesser-opportunities-for-data-scientists-as-per-https-www-datasciencecentral-com-profiles-blogs-data-scientists-automated-and-unemployed-by-2025-update/answer/Muktabh-Mayank

Yes (2025 is not the date I think it’s going to happen, but it’s inevitable and will happen in the near future). They will be, to a good extent. So will Software Developers, designers, manual workers, teachers, linguists, musicians, game developers, etc., etc. There are already rudimentary projects such as:

  • Turning Design Mockups Into Code With Deep Learning, which can turn a design mockup into HTML/CSS code
  • carpedm20/ENAS-pytorch, which can design neural networks without a Data Scientist
  • Why AutoML Is Set To Become The Future Of Artificial Intelligence
  • a system which can generate new characters for games
  • Microsoft AI can translate Chinese to English just as accurately as humans
  • [Baidu’s Deep Voice can clone speech with less than four seconds of training](https://www.computing.co.uk/ctg/news/3028065/baidus-deep-voice-can-clone-speech-with-less-than-four-seconds-of-training)

and multiple such projects.

OpenAI’s new project can generate programs given specs: https://www.youtube.com/watch?v=utuz7wBGjKM

Look at the faces drawn artificially by the latest GANs; designers will find it harder to compete with this new level of quality. https://youtu.be/XOxxPcy5Gr4

AI will impact every job profile which exists as of now, Data Scientists being no exception, automating some or a lot of the work people spend their time on. For a long time, before stuff is totally automated, these will be <Man + Machine> systems rather than just a human working. So it’s not like everyone becomes redundant on day 1, but they will eventually.

Full automation of any field is going to take way longer than 2025 IMHO. That said, yes, a lot fewer people will be needed for the same task as of today. Then what will people do, you ask? Newer, more complex tasks.

Automation is not a new phenomenon. Think about railway brakes a long time back: https://youtu.be/EEUkmP2nyxo

So much work was once needed just to stop a train. A lot less work is needed today to run/stop a train, and not just that: slowly, trains are moving towards full automation, but right now they are in a <man + machine> stage. A lot of jobs will stay in this phase for some time before full automation kicks in. But unlike railways, which took generations to move from one stage of automation to another, AI is causing changes at a very high rate.

What is the effect on a typical Data Scientist (or any white/blue collar worker, for that matter)?

  1. Adapt to AI. Automation has started, but AI-aided jobs will stay for a few more years than non-AI jobs. So while there may be no plain X jobs by 2025, X + AI jobs might be around till 2030.

  2. Things won’t be like earlier generations, where one learnt skill got you a job for your entire life. One needs to be open to learning new skills and starting afresh in the middle of life.

  3. The average level of education needed will be higher. Think of it: 50 years back, “High School” was all the education needed for most jobs. Now it’s somewhere between high school and graduation. Masters and research degrees might look like the next frontier, but these degrees are too slow and broad. Coursera-like courses will become more important in catching up with new skillsets. You can already see people doing that a lot.

  4. With more uncertainty in jobs, millennials will probably want to be less “spend-y” and more frugal. You can see this happening already (6 reasons why more millennials aren’t buying homes), and it will only increase. AI is just a trend in a longer cycle of automation, and millennials are at a point in history where education and society were built according to old norms, but automation has reached a point where jobs have become uncertain. Younger people will be smarter.

My answer to “Should facial recognition technology be banned considering it can do more harm than good?”

“It can do more harm than good.” How is this proven? This is just a subjective opinion. If we were to make regulations based on subjective opinions, we would ban many things just because people thought they were bad. Unfortunately, humans do react emotionally and based on opinions. We have regulated away (or are trying to regulate away) innovation in fields which could potentially solve the biggest problems the world is facing right now, because we were made to emotionally believe that those fields are bad, just like how the OP thinks about Facial Recognition today.

Let me give some examples:

  1. Research in Nuclear Engineering and reactor building has been sidelined and demonized so much that far less innovation than expected has happened in these fields. We could potentially have removed all polluting coal power by now if we had innovated well, and global warming might not have been an issue. But no: because people are made to believe that Nuclear power itself is bad, our democratic institutions have neglected this field. There is no doubt about the fact that nuclear reactors have malfunctioned in the past and that there were nuclear weapons. The correct way would have been to promote nuclear technology that is less risky and cannot be weaponized. But that would not have struck a chord with the armchair smart people that run the world, as it’s a boring pitch.

  2. That’s not the only thing. We have to feed everyone in the world, and we are more willing to promote insects as a potential protein source than to promote work on GMOs. The most sensible solution would have been to estimate the Earth’s total capacity to sustain the human population and check the rise accordingly; don’t push something whose response you don’t know. We don’t do so, because that disregards human freedom and would also lower the return on our financial investments. When we don’t understand how to feed these new people, we either build a class-exploitation theory (which is not entirely wrong, but that’s not the only reason) or just ask them to eat insects for protein. A better way to solve this would be to make more resistant crops using GMOs. Genetic modification is something humans have been doing anyway by selectively breeding plants and animals, just in a slow, long-term way. All the technology enables us to do is make the process faster. Of course, no one is denying the potential risks of GMOs, and we should regulate GMOs for these, but building a culture and opinion around “organic food” and demonizing GMOs completely is not sustainable.

Anti-nuclear movement - Wikipedia

Genetically modified food controversies - Wikipedia

Things still move in the background, but very slowly, for these two and many such fields, because the common public feels the fields are bad and that their security is threatened. There is no doubt that regulation is needed to protect public security, but banning fields of science and demonizing them is the modern equivalent of Book burning - Wikipedia. Stupid and Luddite.

Now, let’s come to the current question: “Should Face Recognition be banned altogether?”

  1. Can you guarantee that banning it would stop governments and people in power from snooping on innocent people?

  2. Can you guarantee that all the lives Face Recognition surveillance saves from terrorists and wrongdoers will still be saved? If not, are the potential lives saved by this technology of lesser importance than your own? Why should they go to waste?

  3. All AI phrenology is snake oil (Phrenology - Wikipedia): AI to determine the right candidate for a job, AI to determine gender/sexual orientation, AI to determine criminal action. All these applications are snake oil from what I understand. So if you are making this a case against Facial Recognition, you are reading one or both of hyped-up media and bad research.

I have no doubt that there are potential wrongdoings that Facial Recognition can be used for. Stopping those through regulation is why we elect people into parliaments and legislatures. I really like how cigarette companies are required by regulation to put photos of the potential bad effects of their products on packets: a fantastic trick of regulation. Then there is gatekeeping regulation, with thick red tape that only big firms with lawyers can deal with. That is another way negative hype can be used to restrict innovation.

There should be restrictions on bad practices. But if people force legislators to take stupid book-burning decisions, we are destined not to cross the great filter. Great Filter - Wikipedia

If this answer feels wrong and boring, that is how most real-world issues are: complex and not solvable with one-line blanket statements like “Ban X” and “F**k Y”.

My answer to “Is it true that artificial intelligence will eat 70% jobs in India by 2025? I feel so much scared. My brother is in IT & I am in the back office for livelihood.”

Originally answered here: https://qr.ae/pN2YLH

Opinion 1: I don’t think 70% of the jobs that exist right now will be automated by 2025. Maybe by 2035, I guess, or even later. It’s too soon to worry.

Opinion 2: AI eating away 70% of the jobs (whenever that happens, say 2030) doesn’t mean that 70% of people will be unemployed. What a lot of scaremongers do not tell you is that “old jobs going away” doesn’t mean “no new jobs will be created”, and that omission causes the panic I can sense in your question. Also, the change will not be quick like the coronavirus pandemic; it will be gradual and will give you time to adapt. If you are worried that a Terminator army will one day come swooping in and take away our jobs, that day is maybe impossible or far, far away in the future, and shouldn’t happen in your lifetime, I guess. What at best will happen by 2030, according to me, is AI being used as a tool in most jobs around the world, just like computers are right now. So you will have to take courses to adapt, maybe to a new type of work profile where your work is aided by AI.

Opinion 3: The only constant in this world is “change”. Things will not stay the same. Your job changes from year to year. Maybe you started working on a Windows system, then the company moved to Linux, and you did just fine. You evolved as a human, the pinnacle of life forms on Earth, out of millions of years of life, precisely so that you are sturdy enough to adapt to change. See, you have already lived through high pollution (guessing you live in a metropolitan city in India) and 7 months of the coronavirus; did you fear them and prepare for them when you were young? No, and still you did fine. You would probably do fine and make a way for yourself even in a sudden event like an alien invasion, let alone in a gradual change like the adoption of AI, so don’t stress too much.

Opinion 4: Do we work the same jobs as our grandfathers did? One of my grandparents was given a scholarship and a government job guarantee to study veterinary sciences; there was so much demand and not enough supply. Is there the same requirement for veterinary doctors today? No. There is a requirement for something today, and that is going to shift. Because humanity is growing at a super-exponential speed, jobs that used to become redundant over a few generations for our ancestors will become redundant a few times within a human lifetime for millennials and future generations. That is the cost we pay for super-fast internet speed, healthcare getting better by the day, and cars that we don’t need to drive. However, none of these changes we see happening have brought massive unemployment, and I don’t think AI adoption will do that either.

A big company stopping the use of one BI solution (say X BI) and deciding to switch to a competitor can create a decent job risk for someone working as an “X BI expert” (that’s the type of title you see) in Indian IT firms. Think of AI as a similar risk: you might at worst face a small break at work, but it won’t make the number of jobs in the market zero (or even scarce). How do you avoid these risks? A. Save; don’t spend like crazy. B. Insure and build a safety net. C. Be ready to upskill. D. Build redundancy in income.

My answer to “Is it possible that the level of artificial intelligence we have today, be capable to manipulate humans and their behaviour?”

Originally answered here : https://qr.ae/pN2a7U

AI algorithms (as of today) are not themselves capable of manipulating humans. So basically, no AI algorithm can, of its own mind, creep into social media and try influencing human behaviour. No Skynet yet, and maybe not for a very long time. Skynet (Terminator) - Wikipedia

However, what current AI algorithms can do is automate tasks that humans take a few seconds to do: basically, taking a rough look at your web history and understanding your general interests, looking at a video and keeping track of which people appeared in it, or searching for information related to a term in a search engine. These are trivial tasks, but an AI algorithm performing them can vastly reduce the effort for people who want to influence someone on an issue with a personal touch. That is because a personal “nudge” can be given to a person on an issue (what baby diaper to buy, what car to buy or which party to vote for) only after processing a lot of information about them. What AI algorithms enable is massive information processing about different people. Surveillance capitalism - Wikipedia
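As a toy sketch of such a “few-second” task, here is how crudely one could infer rough interests from a browsing history by keyword matching (real systems are far more sophisticated; the topics, keywords and history below are all made up for illustration):

```python
# Toy interest profiling: count topic keywords appearing in search queries.
from collections import Counter

TOPIC_KEYWORDS = {
    "cars": {"car", "sedan", "suv", "mileage"},
    "parenting": {"baby", "diaper", "stroller"},
    "politics": {"election", "party", "vote"},
}

history = [
    "best baby stroller 2020",
    "suv vs sedan mileage",
    "baby diaper sizes",
    "how to vote early",
]

interest_scores = Counter()
for query in history:
    words = set(query.lower().split())
    for topic, keywords in TOPIC_KEYWORDS.items():
        interest_scores[topic] += len(words & keywords)

# e.g. [('parenting', 4), ('cars', 3), ('politics', 1)] -> nudge with baby ads
print(interest_scores.most_common())
```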

That said, if you really think that humans were not led by a few before AI’s invention, and that this is some new dystopian phenomenon, you probably need to understand the entire business of mainstream media (MSM; for the purpose of this answer, MSM = TV + newspapers + movies + other non-scientific publications, and even some scientific ones) and advertisement better. The behavior of humans in a society is controlled through the creation of a “window of discourse” by mainstream media using emotive cognition and limited handouts (that basically means cleverly creating good and bad images of things and focusing the majority of discussion on less important things). Most people don’t really know what they want, and so they are made to want things which satisfy them and also help the people who are putting resources into influencing them. Humans are not actually as rational and smart as economics classes teach us. This phenomenon has been alive since the Indus Valley, Mesopotamia and Ancient Egypt. Initially the power was held by the clergy, then by marketing executives and media professionals, and they still continue to hold some of the power, but the internet is the new gangsta.

What AI enables, in combination with the internet, is that instead of influencing groupthink by controlling the narrative, personal information about people can be analyzed to shape their personal world view. This affects humans more deeply. A lot of us now think very differently from what the MSM wants us to think due to these modern ways of influencing opinion (so, for example, the MSM continues selling us soft drinks as feel-good happy drinks which popular sports people drink, but many, many people are not drinking these now). This personal touch can enable the sale of products and maybe even affect your views on nuclear energy, solar energy, global warming, free college, or even any new faith or MSM outlet, things over which MSM had absolute control of influence earlier. As you can see, this is much more powerful than the old groupthink-control mechanism employed by Mainstream Media like movies, television and newspapers, and that is why they create a lot of noise criticizing it. It’s affecting the revenues and influence of erstwhile influencers, and they don’t like that a lot. No surprises about the dystopian hit pieces around “AI is coming for you” and “social media needs to be controlled”.

If you are even 1% tense or worried about this, you first need to ask yourself: What is society? Isn’t it animals whose behavior is controlled by some narrative? It’s just a debate about who runs the narrative, who sits on top of the pyramid, that is all; no big deal for most of us followers. Chill.

Note to self: Don’t rewatch Black Mirror for the third time :-D

My answer to “Will the 5G internet (or ASI) make this world a chaotic place?”

Originally Answered here: https://qr.ae/pNCZvk

I assume the question is asked on the premise that, with such humongous data speeds available to each person, society could get out of control as hierarchy/order becomes hard to maintain. I don’t foresee any other way 5G can harm us, as it’s not going to be rolled out without tests.

The correct theoretical answer to this is: we don’t know. But the probability of chaos prevailing due to 5G is very low, given that humanity is quite tenacious. Why so?

You can think of the human world as a very large self-organizing system. Any system which is isolated tends towards higher entropy over time. I am not talking about the more popular thermodynamic entropy (Entropy (classical thermodynamics) - Wikipedia) here, but the probabilistic concept of entropy (Entropy (information theory) - Wikipedia), which simply means that the more new variables you introduce into the system, the more the shock or surprise value of the system increases. Just by existing, humanity is creating new issues which can surprise and damage it. We are also adding more variables with the resources of Earth we consume and the increase in human population. Also, with every new technology we add into the system (AI / blockchain / 5G / Genetic Engineering), while we add a few ways for the system to grow, we add many more possible ways for the system to fail. A self-organizing system will either adapt or evolve around a new surprise to incorporate it. However, if the change is something that overwhelms its capacity to adapt or evolve, it will get destroyed. Such irreversible changes are called systemic risks.
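To make the information-theoretic point concrete, here is a toy sketch (illustrative Python; fair, independent binary variables are my simplifying assumption) of how the surprise value, measured as Shannon entropy, grows as you add variables to a system:

```python
# Shannon entropy H = -sum(p * log2(p)), in bits: each extra fair binary
# variable doubles the number of equally likely states and adds one bit
# of "surprise" to the system.
import math

def shannon_entropy(probabilities):
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

for n_variables in range(1, 5):
    n_states = 2 ** n_variables
    uniform = [1 / n_states] * n_states
    print(f"{n_variables} variable(s) -> {shannon_entropy(uniform):.1f} bits of surprise")
```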

However, a technology like 5G (or ASI) is not a systemic risk; 5G can be rolled out gradually, and the problems it causes can be addressed along the way. Humanity is a self-organizing system, so it will roll out a new technology incrementally and use active stability (tactical measures) to avoid pitfalls and not crash and burn while the technology is being introduced. In simpler terms, humans are adaptable and will slowly adapt to incremental technologies and circumstances. You can say the same for new technologies like genetics or AI. The only real problems are systemic risks (like pandemics, say COVID-19, or an asteroid hitting the Earth), which cannot be paused and which overwhelm the speed at which humans adapt. These can damage or destabilize the system as a whole.

Another path technology Luddites suggest is to take no risks at all, that is, to make the surprise factor from technology zero: invent no technology. That would essentially make the system implode, as you are adding new issues and people all the time, and society (and nature) can only bear so many people in its natural state. You need technology to adapt society for the better.

Other people who propose restrictions on new technology actually believe in passive stability (plan everything beforehand and create regulations/controls to avoid failures). They are mostly wrong, as they cannot imagine all possible scenarios and end up creating regulatory capture.

Paraphrasing Jordan B. Peterson from his book, “the path to progress lies in between stability and chaos”. Too much stability (not inventing new technology, too much restriction on thoughts and movements) will create chaos as the system crashes under its own weight. 5G is a risk, but not a systemic risk.

5G internet is an incremental change. There is no doubt that it will introduce changes into the system, but you will always have the option to turn it off. With new information available, humans will realign, and humanity as a system can adapt or evolve; it’s rare that humans will not be able to take it.

More practically, what changes did the transition from 3G to 4G bring? You can kind of extend the arrow and predict what changes 5G technology will bring: more apps and more centralized data over the internet, plus many things we cannot even imagine right now. That said, I personally find it hard to believe that faster internet can damage humanity. In many places, the data-speed change between 3G and 4G is not even high enough to notice, and maybe even after 5G is rolled out, many people will get 4G-level data speeds for a long time. So it might actually turn out to be a marketing gimmick in some circumstances.

My answer to “What is the name of Google AI?”

Originally Answered here: https://qr.ae/pN5TAp

There are many AI algorithms Google has launched as products. Remember, they are all narrow, task-specific AI algorithms that can do one task only.

For example,

Google Translate is an AI algorithm that translates text from one language to another.

Google Assistant is an AI algorithm that translates sound to text and then performs a Google Search.

Google Photos has an AI algorithm that automatically categorizes your photos.

GMail has an AI algorithm that suggests text so that you need to type less.

YouTube has an AI algorithm that recommends videos to you.

And a few more…

I don’t want you to think either of the following two things:

A. That Google has an Artificial general intelligence - Wikipedia type AI (like in the movies) and thus can have a name like “Vision” or “Jarvis”.

B. That Google Assistant - Wikipedia is all the AI Google products have.