How AI could save or end humanity 🤖

Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.

Looking back in time, several experts have predicted revolutionary changes in technology, only some of which have become reality today. Herbert Simon, an American polymath with considerable influence in the field of computer science, predicted in 1965 that "within 20 years, machines will be capable of doing any work a man can do". Five years later, Marvin Minsky, a computer scientist and AI researcher, made a similar prediction: that in three to eight years there would be machines with the intelligence of an average human being. These were renowned people with deep experience in the technological realm, yet their predictions were not even close to reality, not even decades after the dates they named. It's now been over half a century.

Predicting is a hard thing to do, especially about the distant future. It has been shown that there's almost no difference between predictions made by experts and those made by non-experts, or even a random pick of one of the possible outcomes. All sorts of absurd things have been said about the future of machines, computing, and AI over the last century and this one.

Human-level intelligence in computers

Raymond Kurzweil, Google's Director of Engineering, who has made over 140 predictions and claims to have been right around 85% of the time, predicted in 2014 that computers will have all of the intellectual and emotional capabilities of humans by 2029. 2029 is still some years away, so he may yet be right. He even stressed this in 2017, saying that "I've been consistent that by 2029, computers will have human-level intelligence". He later went further, stating that he believes "the technological singularity" will happen by 2045.

The technological singularity, or more concisely, the singularity, is a hypothetical time in the future when advancements in technology lead to the creation of machines that are smarter than humans. These machines would learn more and more and create other machines even smarter than themselves. The newly created machines would go on to create even smarter ones, and the cycle would continue, multiplying machine intelligence to several orders of magnitude beyond that of humans.
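
That feedback loop can be caricatured in a few lines of Python (purely illustrative; the 50% improvement per generation is an arbitrary assumption, not a claim about real systems):

```python
# Toy model of recursive self-improvement: each machine generation
# designs a successor somewhat smarter than itself.
intelligence = 1.0  # 1.0 = human level (arbitrary baseline)
for generation in range(1, 11):
    intelligence *= 1.5  # assumed 50% gain per generation
    print(f"Generation {generation:2d}: {intelligence:5.1f}x human level")
# After 10 generations: ~57.7x human level; after ~35: over a million times.
```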

Although there have been lots of advancements in technology in recent years, we're still far from having human-level intelligence in computers. Getting there would require extraordinary advancements in both computer software and hardware.

Simulating the human brain

On the quest to give machines human-level intelligence, AI researchers have come up with the idea of simulating the activities of a real human brain on a computer. This requires substantial knowledge of the inner workings of the brain, yet comparatively little is known about the brain so far.

Nevertheless, the brain is estimated to have about 86-100 billion neurons (nerve cells). Each neuron is connected to up to 15,000 other neurons, across an estimated 100-150 trillion synapses in total. Simulating the activities of a real human brain would therefore mean representing each neuron and synapse in computer hardware, chiefly the CPU and the RAM.

Researchers have given this approach several shots, but the most successful attempt took place in 2013, when a research group in Japan and another from Germany jointly used the Japanese K supercomputer to carry out the simulation. At the time, K was the fourth most powerful supercomputer in the world. It comprised over 700,000 processor cores and 1.4 million GB of RAM, spread across over 88,000 nodes, and performed around 10 petaflops (10 quadrillion floating point operations per second). This is no small computing power. For reference, the K supercomputer was over 160,000 times faster than a single Intel Core i7 chip, and working through just one gigaflop's worth of operations (1 billion floating point operations) would take an average human being over 30 years at one operation per second.

The K Supercomputer

A supercomputer is a network/cluster of computers. A single computer in the cluster is referred to as a node.
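
To sanity-check those numbers, here's a quick back-of-envelope sketch in Python (the ~60 gigaflops figure for a Core i7 of that era is my assumption, not from the text above):

```python
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

# One gigaflop = 1e9 floating point operations.
# At one operation per second, a human would need:
print(f"{1e9 / SECONDS_PER_YEAR:.1f} years")  # ~31.7 years

# K computer (~10 petaflops) vs. one Core i7 (~60 gigaflops, assumed):
print(f"{10e15 / 60e9:,.0f}x faster")  # ~166,667x, i.e. "over 160,000x"
```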

As humble as the brain may seem, it took the K computer, even with all that computing power, approximately 40 minutes to simulate one second of the activity of 1% of the human brain. Read that again: "one second ... of one percent of the human brain". This shows how hard such a simulation is.
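
A rough extrapolation from those numbers shows why (a naive sketch that assumes cost scales linearly with the fraction of the brain simulated, which real simulations don't quite do):

```python
# K took ~40 minutes (2,400 seconds) to simulate 1 second of 1% of the brain.
slowdown = 40 * 60 / 1   # 2,400x slower than real time
fraction = 0.01          # 1% of the brain

# Compute needed to simulate the whole brain in real time, relative to K:
print(f"{slowdown / fraction:,.0f}x the K computer")  # ~240,000x

# Memory scales with the number of neurons and synapses: if K's 1.4 PB of RAM
# covered 1% of the brain, a whole-brain run would need on the order of:
print(f"{1.4 / fraction:.0f} PB of RAM")  # ~140 PB
```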

The K computer was taken out of service in 2019 and has been succeeded by Fugaku, which is much more powerful. Both the K computer and Fugaku were manufactured by Fujitsu and installed at the Riken Advanced Institute for Computational Science in Japan.

The list of the most powerful computer systems in the world is maintained by the TOP500 project, and it is updated twice a year. The K computer was once the #1 supercomputer in the world (in 2011), but that title has passed through several other systems over time, and it now belongs to Frontier, a supercomputer in the US that is approximately 110 times more powerful than the K computer, possibly enough to simulate the activity of 100% of the human brain at a similar slowdown (though no one has tried yet).

Even with the level of computing power we have now, simulating a complete human brain in real time is still not doable. Another approach to giving machines human-level intelligence would be through software alone, but we're not close to a breakthrough there either. This is in no way to say it's impossible; it just would take time.

As mentioned earlier, brain simulation requires a deep understanding of how the brain works, which we don't yet have. Even if we did, huge computing power would be needed to run the full simulation in real time, meaning that huge improvements in computer hardware are also needed.

Gordon Moore, a cofounder of Intel, predicted in 1965 that the number of transistors (the major component in modern computers) that fit on a single chip would double each year for at least a decade, meaning transistors would keep shrinking, making it possible to fit more of them on chips of the same size. Moore made this prediction because he had noticed this had been the case since 1959, and it held for roughly a decade after he made it. In 1975, he revised the prediction so that the doubling would happen roughly every two years (as things were beginning to slow down). This is now known as Moore's law.
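
The law is easy to state as a formula. Here's a toy illustration in Python using the revised two-year doubling period, seeded with the Intel 4004's 2,300 transistors from 1971 (the starting point is my choice, just for scale):

```python
def transistors(year, base_year=1971, base_count=2_300, doubling_years=2):
    """Moore's law: the transistor count doubles every `doubling_years`."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

for year in (1971, 1981, 1991, 2001, 2011, 2021):
    print(year, f"~{transistors(year):,.0f} transistors")
# 2021 comes out around 77 billion, roughly the scale of today's largest chips.
```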

There are a couple of physical threats to Moore's law that limit how long it can keep holding, and we're almost at that point, if not already past it. This means that other methods for hardware improvement have to be researched and discovered if we are to make a breakthrough in brain simulation.

AI vs AGI

You may have watched movies featuring some kind of robot that is covered with human-like skin and acts fully like a human. I certainly have. The truth is that nothing like that exists yet. In fact, we're still far from it.

What we have at the moment, which we refer to as AI, is nothing close to what those robots exhibit. What they exhibit is known as Artificial General Intelligence, or AGI for short. According to Wikipedia, "Artificial general intelligence is the ability of an intelligent agent to understand or learn any intellectual task that a human being can".

This simply means that an AGI agent would be able to learn, reason, understand, plan, and communicate in natural language as humans do. This might seem like what we already have, but it's not.

AI is programmed to do specific tasks

Consider an AI program that reads handwritten text (from an image) and outputs the same text in computer text format. Pass it an image of an "a" character written by hand on paper, and it outputs the same "a" a keyboard would send to a computer when that key is pressed. If the steps the computer should take (an algorithm) to recognize every handwritten version of every character were to be written out by hand (as in "a written computer program"), an insane amount of effort would have to be exerted, if it's practically doable at all, as different people have different handwriting.

Therefore, the program is written such that it learns from many known and confirmed images of handwritten characters and their corresponding output in computer text. This requires that the program be trained, and the training is done by supplying it with a huge number of such known and confirmed images.

Thousands of these may be supplied to the program, each containing a different set of characters. An image could contain a handwritten "A boy" in Tony's handwriting, another could contain "I hate food" in Bola's handwriting, and so on. As the program is trained more and more, it becomes better at coming up with relevant steps for deriving appropriate outputs. The study of how to write programs that can learn like this is known as machine learning.
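
Here's what such a program can look like in practice. This is a minimal sketch using scikit-learn's small built-in dataset of handwritten digits (digits rather than full text, and not the exact system described above, but the same learn-from-labeled-examples idea):

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Labeled 8x8 images of handwritten digits, from many different writers.
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

# "Training" = showing the model known, confirmed examples so it can work
# out its own rules for mapping image -> character.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)

# The trained model now reads handwriting it has never seen before.
print(f"Accuracy on unseen digits: {model.score(X_test, y_test):.0%}")
```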

The above program is purpose-specific, as it can only learn the different ways characters are written and nothing else. It can't recognize human voices or cat faces. A machine built to run it (or that runs it) can be said to be intelligent, since it'll be able to recognize characters even when they were written by different people, but it's not generally intelligent.

General intelligence

Let's look at the case of a newborn. At birth, the child knows nothing. He can't understand what people say, nor does he understand what different actions mean. So, there's no way to teach him. But over time, he learns the language of the people around him. Who taught him the language? No one. Without being given any explicit instructions, his brain learns by itself what different statements mean depending on when and how the people around him use them.

The sounds people make, which make up the language, are a kind of input taken in by the ears and sent to the child's brain. At first, the brain doesn't understand the input, but thanks to its built-in general intelligence, it learns over time to parse it appropriately, store it, understand it, and eventually give out appropriate output of the same form.

Assuming the child was born in the UK, and of course that the people around him speak English, if someone asks him at age 5 "What's your name?", his ears act as input devices: they pick up the sound, which is caused by the vibration of air (due to the person's speech), and send it to the brain in the form of nerve impulses (signals transmitted through nerve cells). Since the child has now grown and the brain has learned the meanings of the sounds (in the form of nerve impulses), it knows that this is a request for the name of the person it belongs to. It then processes that against other details it has learned. For example, should the person be told the name? Should the name be said loudly or not? It then sends instructions (also in the form of nerve impulses) to the vocal cords (the part of the human body whose vibrations are responsible for voice production) so they make the sounds that correspond to the child's name and whatever other words are needed.

Based on this, an AGI must be able to take in any arbitrary input, parse it, learn from it if needed, and give corresponding output in appropriate forms. While current AI like voice recognition, text-to-speech output, face recognition, etc. may give the impression that they could be combined to form AGI, that's far from true. As we saw in the case of the child, a human-level AGI agent must be able to learn things on its own without being explicitly programmed to do so. The child learned the language of the people around him without anyone teaching him. At first, he couldn't understand people, he couldn't read books, he couldn't understand different actions; there was no way to pass a message across to him that his brain could comprehend. But because of general intelligence, he could reason and learn from raw inputs (sound, in the case of natural language), and after some time he was able to produce output in the same sound form.

If the child weren't born in the UK, say he was born in China, he would learn the language the same way as a result of his general intelligence. The case would be the same if he were born in any other country in the world. He would learn the language of the people around him and be able to communicate with it. Even if he were born among deaf people who communicate with sign language, he would also learn the language, understand it, and be able to communicate with it.

This is just one of the features of general intelligence. The intelligent machine we mentioned above, which can output text from images, can do this only because its learning process was explicitly programmed. It takes a specific input type (images) and follows instructions (written by the programmer) to learn from them.

In the case of general intelligence, the agent is able to take any type of input, reason about it, and learn from it. Implementing this in a computer would mean either writing software that can make the computer learn from any kind of input, or simulating the human brain, and neither has been done successfully yet.

Even purpose-specific AI software like the text recognition program we just described still isn't at its best yet. So it may not be the best time to start talking about writing general intelligence AI software. A machine running such software would have to be able to learn any language or profession, go to university, learn with other students and pass the exams, cook dishes, wash clothes, and do every single intellectual thing man can do.

Apple's face recognition system on the iPhone 11 still mistakes my face for that of a friend, and I'm able to unlock his phone. If purpose-specific AI software like that isn't yet at its best, I'll argue that implementing general AI in software would be much more difficult at our current level of technology than it may seem.

It's worth noting that learning is just one feature of an AGI agent. The agent also has to be able to reason, imagine, and plan. There have even been debates as to whether emotions are needed in AGI agents, but we're not very concerned with that here. To date, it hasn't even been proven whether AGI is actually possible or not.

The control problem

There are lots of big names who do not believe that human-level machine intelligence is possible. Others, on the other hand, believe it'll be here someday. Prominent names in the second category are Bill Gates and Elon Musk. These two have even shown their concern about the emergence of human-level AGI, or even superintelligence.

The reason humans still dominate the world is that our intelligence surpasses that of other animals. We are able to do things they cannot do. But if one day the singularity happens and we end up creating machines with superintelligence, there's not much assurance that we'll continue to dominate. The superintelligent robots, and the even more intelligent ones they create, could end up overthrowing us and controlling the world instead. They could even end the human race.

We therefore need to figure out a way to always have control over the robots, and over any others they create, even if they're more intelligent than us; otherwise, we could be seeking our own doom. This problem is known as the control problem, or AI alignment.

While some have said the control problem is nothing to worry about because the singularity won't be here anytime soon, many experts have countered that the problem should be dealt with before AGI research goes much further, as it may be too late to start researching the problem once AGI already exists. Bill Gates has even said that he doesn't know why other people are not concerned.

Anyway, as said earlier, prediction is hard, especially about the distant future. Bill Gates once said in 1989, "we will never make a 32-bit operating system", but Microsoft wouldn't be what it is today if it hadn't launched 32-bit and then 64-bit operating systems since then. Steve Jobs, cofounder and former CEO of Apple, also said in 2003 that "the subscription model of buying music is bankrupt", but Apple later launched Apple Music in 2015 (although not under Steve), which uses that exact model, and the service generated approximately $5 billion for the company last year alone (2021).

Based on this, we should never rule out the possibility of AGI coming into existence. The future is very unpredictable.

Are we safe?

Whether we like it or not, AI is already taking people's jobs, and honestly, that will continue to be the case, but some experts have argued that AI creates more jobs than it takes. In a conversation about AI and its trends, James Manyika, the chairman of McKinsey Global Institute, said the following when asked about AI and its impact on jobs:

The pattern when you look at that and you factor in the economics, the pattern that emerges is the following. There will be jobs that'll grow, actually, and other ones that'll be created. So that's a good thing. Think of that as jobs gained. There will be jobs that'll be lost, partly because technology will be able to do the various activities involved in that job. Third, there will be jobs that will be changed.

The third occurrence James mentioned is a very interesting one, and it is absolutely true. In fact, technology has more often changed the way people do their jobs than it has taken those jobs from them. This applies to almost every field.

But the sad truth about advanced AI as a technology is that it is created to replace humans. This is something everyone who knows what AI is should know. The reason this may not be so obvious right now is that the technology is still very much in its infancy. But it's worth knowing that lots of companies and institutions are investing hundreds of millions, even billions, of dollars in AI research and robotics. Many of them have also started replacing human workers with robots.

In 2016, DHL started to use robots that work alongside human workers in its warehouses. Car plants like those of Nissan, Ford, etc. now also use robots for much of their work. In fact, in Nissan's factory in Sunderland, 95% of the work is said to be automated, with robots doing most of it. This is work that would normally be done by humans.

Robots working at Nissan's factory

Companies want to automate their businesses as much as possible, primarily because with automation, costs would hopefully be reduced and production would be much faster. It has worked for many. In 2018 though, Elon Musk, who had decided to automate most of the production process of the Tesla Model 3, confirmed that the automation had been slowing the process down. He later went on to replace some of the robots with human workers. He also admitted on Twitter that "excessive automation at Tesla was his mistake" and that "humans are underrated".

But of course, this doesn't mean that robots won't be used anymore. They might even replace all of the workers at a company if that serves its purpose. In 2015, a worker was killed by a robot at a Volkswagen plant in Germany. The robot was usually separated from human workers by a cage, but at the time, the worker, a 22-year-old, was inside the cage with it, and the robot picked him up and crushed him against a metal plate. The robot was simply doing what it was programmed to do: it was meant to pick up car parts, and the worker was standing at the spot where the robot was supposed to pick them up. So it wasn't an error on the part of the robot; it was rather a human error. The incident did not in any way reduce the use of robots at Volkswagen.

Preparing for the transformation

With the increasing tendency of AI to replace human workers, how do we everyday people prepare? We've got to pay our bills somehow.

Reskilling

Since AI, like any other technology, is said to create jobs even as it takes them, there are a couple of things we can do in preparation for the not-so-faraway future. Although AI creating jobs isn't something we can depend on too heavily, it makes sense to prepare in some reasonable way for an unforeseeable future. James mentioned four things concerning this in the conversation quoted earlier. The first is about reskilling.

The first one is, we do have to solve the skills question because as jobs change, we're going to need to make sure that workers can actually adapt, learn skills, be able to work alongside machines, or move into occupations that are actually growing. So the skills question is, in fact, a real thing for us all to work on.

Investing in human capital

The second is more about investment in human capital and policy making. James said this:

The second question is how do we help workers transition either from declining occupations to the occupations that are growing? This is where policy and other mechanisms are really, really important to make sure we support the workers, we have the safety nets and the benefit models, and transition supports to actually help workers transition. That's the second thing.

As AI becomes better and better, and job changes happen, we have to adapt by upskilling and reskilling, and of course, get more unskilled people skilled.

The wage question

Next, James talked about jobs that don't seem easily taken by AI, but which are not well paid. The point here seems aimed at employers rather than workers.

(James was in the conversation with two other people: Fei-Fei Li, a professor of computer science and co-director of the Human-Centered Artificial Intelligence institute at Stanford University, and Mary Kay Henry, the International President of the Service Employees International Union.)

The third thing that I'd highlight is, in fact, what Mary Kay raised, which is the wage question, because one of the challenges that we've got here is that some of the hardest occupations to automate, and the ones that are going to grow, tend to be in sectors like care work, as Mary Kay described. We need real people to do that work. They tend to be teachers, they tend to be all these occupations that are really, really important and fundamentally human. The challenge with our labor market systems is that those tend not to be some of the best paid jobs in the economy. So even if there is work, we have to think about how do we support living wages for people doing that work to be able to live. So the wage question is actually a fundamentally important one.

Redesigning work

Technology improves from time to time, and we always want to bring new technology into our workflows. Therefore, as AI gets better and new breakthroughs are made, employers need to think about ways to redesign work so as to keep everything and everyone in shape.

The fourth and final thing that we need to solve for is how do we actually redesign work, because what happens is, the workplace actually changes as we bring in technology to the workforce. In fact, it's one of the things that Mary Kay and I, and others, we've been talking about, which is how do we think about data in the workplace? How do we think about redesigning the work itself? By the way, if we didn't think these questions about redesigning work were urgent, we only have to pay attention to what's happened with COVID right now.

How AI affects the social well-being of society

In 2020, two researchers conducted a study on the effect of AI on people's social well-being. One was Christos Makridis, an assistant professor at Arizona State University; the other was Saurabh Mishra of the Stanford Institute for Human-Centered Artificial Intelligence (HAI). HAI was formed by Stanford, and its mission "is to advance AI research, education, policy and practice to improve the human condition".

Using HAI's AI Index, an open source project that tracks and visualizes AI-related data, the two researchers discovered that between 2014 and 2018, cities with greater increases in AI-related job postings exhibited greater economic growth. They also found that this economic growth led to improvements in residents' well-being. But the growth was the result of the cities' initial investments in human capital and their ability to create AI-based employment opportunities, so only cities with the right AI-related infrastructure and more educated workers benefited. This is not what many expected, given that people have long feared AI would simply take away their jobs.

Based on this, it turns out that as things change gradually, we have to make sure more people get educated so they can benefit from the growth of AI. Note, though, that although the study shows the increases happened over the same time period, it doesn't necessarily show that AI caused the improvement in well-being. What, then, does this mean for us?

As we've seen already, robots are already being used in place of humans, and that is very unlikely to change. Programmers, whose jobs may well be taken by AI in the future, are afraid that AI systems that can program will take over their jobs soon, but Saurabh Mishra, one of the researchers, said this:

Given that cities have an educated population set, a good internet connection, and residents with programming skills, they can drive economic growth. Supporting the AI-based industry can improve the economic growth of any city, and thus the well-being of its residents.

It looks like, for the time being, developers can still feel safe. Maybe. But we need to understand that there's a need to hone our skills. (I'm a dev too.) There's been an increasing number of developers who take CS degrees very lightly, and I don't think that's a good thing. Although it's possible to get a dev job without a CS degree, the younger generation of developers (those who can still go for the degree) should go for it. Again, we don't know what the future holds, and, apart from that, there's more to programming than being able to write code in the latest JavaScript framework 😀.

More things to do

As research on AI continues, we need to make sure that we meet the skilling, reskilling, and upskilling needs that come with advances in the technology. Governments should also make policies that promote technological innovation, AI-related research, and increased investment in human capital, so that we benefit from the growth of AI.

There's no reason to be against AI. Computing and automation are the reason the world is what it is today. We just have to keep studying what's going on in the world and be willing to adapt. As such, maybe AI researchers should not restrict the research and innovation to just themselves and specific kinds of workers. Everyday workers might as well join in the design in some way, diversifying the field. For example, when talking about making the AI field diverse, James said (in the conversation mentioned above):

It also hasn't always looked at work and workers who are considered low wage, to the extent that often, AI research has done research with workers, it's been working with radiologists, doctors, not as much with people on the front lines.

The one problem with everyday workers lending a hand in the development of AI is that they could end up losing their jobs for nothing. It's a big problem, but it's not unsolvable. Again, policy making has to play a role. During the conversation, Mary Kay talked about how truck drivers in Sweden are working together with engineers to create driverless vehicles.

The truck drivers in Sweden are working with engineers to design and tune up the driverless vehicles. Truck drivers understand that their jobs are going to be replaced at the same wages and benefits in whatever the economy creates as new jobs as Sweden goes to carbon free emissions and all the other needs in the country. So truck drivers are training the autonomous AI, work together with engineers, and then going into middle schools and helping children understand that there are going to be other opportunities for them besides truck driving, rewiring the next generation. I think that's an incredible example of how there's a global commitment.

Before she said this, she mentioned that there are mechanisms in place that will pave the way for the workers and their employers, even as jobs change (as a result of technology), and that they've made a decision to "protect workers, not jobs".

But in Sweden, there is an ethos between government, employers, and working people because they have a bargaining system that gives everybody a seat at the table. They've made a decision to protect workers, not jobs. So workers can understand that there will be lots of change, but that the government and employers have a commitment to the retraining.

If such a mechanism could be put in place in every country of the world (which is definitely possible), there might be just a little reason, or even none, to worry about the growth of AI and its effect on jobs.

Funnily enough, in 2020, a trucking startup in Sweden started hiring remote truck operators. These operators don't have to be new people; they could be former truck drivers, if they manage to reskill. This shows, to some extent, how AI could produce new jobs as it takes some away, and how reskilling is needed to live through the transformation.

If we could create and control AGI robots

Let's play a little in our imaginations. Think of a world where robots do every single piece of work we do today. Farmers don't need to farm anymore, workers at restaurants and hotels can stay home, and no one is mandated to get to a workplace before 9:00am. How crazy would that be?

We humans are not very different from social insects, e.g. ants and bees. In a colony of bees, there's a queen bee, which reproduces; worker bees, which gather food, maintain the nest, defend the hive, care for the queen, etc.; and drones, which mate with the queen for reproduction and the continuation of the species. The structure is similar with ants.

Some of us work to produce food, some work to provide shelter, some work to maintain well-being, and others work in several other areas. We depend on one another to survive as a society.

But if we managed to create human-level AGI robots, or maybe superintelligent ones, which can do all of the things we currently have to do as work, humanity would be relieved of work, provided, of course, that we managed to control the robots. (Remember the control problem.)

When we look at it deeply, there are just three things we live for as humans: food, fun, and responsibility. Think of anything you want to do now; it'll be for one of those three. People work for money, but money in itself has no meaning. You want it either to get food, to have fun (maybe travel around the world), or to take care of a responsibility (maybe catering for your children). Religion, too, would count as a responsibility. All that would be needed at that point is people to keep control of the robots while they work, which could be done from home. Or maybe there would be no need to monitor them at all.

Since there would be no work for the masses, and there would still be a need for money, a Universal Basic Income, or UBI for short, may be the solution. The government of each country would pay each citizen a fixed amount that should be enough to live on normally. And maybe people who still love to work for fun could join in, work alongside robots, and earn some extra cash. Or, as many people on Quora have suggested, some people might acquire robots and lease them out to make money. 😂

If all of this could happen, we may finally be able to solve the long-standing problem of social injustice and inequality, just maybe, since everyone would receive the basic income and everyone would live on an equal footing. But of course, lots of policies would be needed to put this in place, and I think it's a possible future. (Maybe I'm being weird.) Economists, too, would have to make lots of contributions, by the way.

While these things might sound funny, I think they would just make sense. Large corporations that make money with robots might have to be taxed heavily by the government, or maybe the government would take control of everything and let everyone live equally (off the basic income). 😀

Conclusion

We're still in 2022, and we're still far from developing a fully functioning AGI system. For now, we should hold on to our jobs and keep preparing for whatever changes AI is likely to bring.

Thanks for reading. If you liked this article, please like and share it. If you'd like to reach out to me, send a DM on Twitter @abdulramonjemil.

You might also want to join my newsletter which I've just created, for more of my content.

Once again, thanks for reading!!! 😊
