Rise of the Machines
Life gets more like Black Mirror every day. The most recent thinkpiece-inspiring digital nightmare, in case you haven't heard, is "deepfakes" — digitally manipulated videos of people that are nearly indistinguishable from reality. Deepfakes are one of the most frightening applications of machine learning I have ever heard of. The fact that they are already being used to sexually degrade women is horrifying. Other potential uses, including hyper-realistic propaganda and fake news, could lead to a future where we can't trust anything we don't see with our own eyes.
Technologies like deepfakes do not inspire much support for the ongoing Artificial Intelligence revolution. Perhaps this is a good thing: at a moment when technology is advancing at a frightful pace, it is wise to approach new developments with caution. Imagine if humanity had sat down and researched the long-term effects of fossil fuel usage before screwing up the Earth so badly.
It is tempting to shun a new technology before it creates problems for our grandchildren — or, in the worst-case scenario, hastens the demise of our species.
However, I believe that at the heart of much of this fear is ignorance of what Artificial Intelligence and Machine Learning actually are. Contemporary popular culture abounds with cautionary tales like The Terminator and The Lawnmower Man, in which a malevolent AI turns against its human creators and tries to supplant and destroy them. I have enough faith in the readers of this blog to trust that they can tell the difference between fantasy and reality. At the same time, however, it is difficult to overcome the existential anxiety evoked by the idea of an intelligent machine.
In this article I aim to dispel some of that fear by demystifying AI, and more specifically Machine Learning. Once people understand that Machine Learning is just a set of routines and processes for computers — albeit highly complex ones — I hope they will abandon the idea that some sort of digital sorcery lies behind Artificial Intelligence. In fact, I hope that readers will walk away from this post with the knowledge that Machine Learning is capable of revolutionizing the way we interact with the world around us — and in many ways, it already is.
For starters, Machine Learning is not a new idea thought up by Silicon Valley whippersnappers. It has a relatively long history, dating back at least to the dawn of modern computing in the 1950s. The term Artificial Intelligence was coined by John McCarthy in 1956 at a Dartmouth conference devoted to the subject — the first of its kind. In the decades that followed, computers were quickly taught how to solve complex algebra and calculus problems, and the first programming languages were written during this period.
By the 1960s, the intelligence of machines, an idea still foreign to the lay population, had become a sudden and startling reality.
It may help, at this point, to provide some definitions. The terms Artificial Intelligence (AI) and Machine Learning (ML) are often used interchangeably, and they do describe roughly the same idea: machines that are able to act in a way analogous to the human brain. The important difference is grammatical. AI describes an entity: an intelligence made by man rather than by nature. ML, on the other hand, refers to a process: specifically, the way in which an AI actually learns. For this reason, I find ML to be the more useful term. It points to a specific process that can, with relative ease, be understood by just about anyone.
Because computers are not able to make the intuitive connections that human neurons are so good at, they have to be rigorously taught how to think. This is accomplished through the use of training data. It may have been a few years since you took a high school math class, but there's a good chance you were tested on concepts relating to inputs, outputs, and graphing the relationship between them. This, in essence, is what Machine Learning is. Connections between individual data points — say, the correlation between the time of year and the shopping habits of consumers — are taught to computers through trial and error. Over time, computers that learn enough of these connections become pretty good at extrapolating from them and making new connections themselves.
The conclusions these now-intelligent machines come to are not perfect, but they are constantly improving, and they are immune to human fatigue and miscalculation, though they can still inherit the biases baked into their training data.
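To make this concrete, here is a minimal sketch in Python of that trial-and-error process. Everything in it (the monthly sales figures, the learning rate, the names) is invented for illustration: a toy model nudges two numbers, a weight and a bias, until its straight-line predictions fit a small dataset of months and toy sales as closely as they can.

```python
# Hypothetical training data, invented for this sketch:
# (month of the year, toy sales in thousands of units)
training_data = [(1, 12), (3, 10), (6, 8), (9, 14), (11, 30), (12, 42)]

def predict(month, w, b):
    """A deliberately simple model: sales = w * month + b."""
    return w * month + b

def mean_squared_error(w, b):
    """How wrong the model currently is, averaged over the training data."""
    return sum((predict(m, w, b) - sales) ** 2
               for m, sales in training_data) / len(training_data)

# Trial and error: repeatedly nudge w and b in whichever direction
# shrinks the error (a bare-bones form of gradient descent).
w, b = 0.0, 0.0
learning_rate, eps = 0.005, 1e-6
for step in range(20_000):
    grad_w = (mean_squared_error(w + eps, b) - mean_squared_error(w, b)) / eps
    grad_b = (mean_squared_error(w, b + eps) - mean_squared_error(w, b)) / eps
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print(f"Learned model: sales = {w:.2f} * month + {b:.2f}")
print(f"Predicted December sales: {predict(12, w, b):.1f} thousand units")
```

The machine is never told the rule; it discovers one by measuring its own mistakes thousands of times. Real systems do the same thing with millions of data points and millions of adjustable numbers instead of two.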
In the simplest form of ML, machines are tested on their ability to correctly predict the connection between different factors. If Machine A says that shoppers will buy more toys at Christmas time, and Machine B says they will buy more swimsuits, Machine A will be kept and Machine B discarded. Over a large number of trials, the machines that are the best at making these connections will propagate and improve, leading to machines that exhibit intelligence above and beyond what an individual person could do. (Shoutout to content guru CGP Grey for explaining this far more eloquently and entertainingly than I ever could.)
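Here is an equally toy-sized sketch, with invented numbers, of that keep-the-winner approach. Each "machine" is just a single guessed value for how strongly the holiday season boosts sales; the population is repeatedly tested, the worse half is discarded, and the survivors spawn mutated copies.

```python
import random

TRUE_BOOST = 3.2  # the hidden pattern in the data; unknown to the machines

def fitness(machine):
    """Score a machine: the closer its guess, the higher the fitness."""
    return -abs(machine - TRUE_BOOST)

# Start with twenty machines making random guesses between 0 and 10.
population = [random.uniform(0, 10) for _ in range(20)]

for generation in range(50):
    # Keep the better half (Machine A stays, Machine B is discarded)...
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    # ...and refill the population with slightly mutated copies of them.
    children = [m + random.gauss(0, 0.5) for m in survivors]
    population = survivors + children

best = max(population, key=fitness)
print(f"Best guess after 50 generations: {best:.2f} (true value: {TRUE_BOOST})")
```

Real evolutionary and learning systems select among far richer candidates than a single number, but the test-cull-mutate loop is the same.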
There is certainly a lot more nuance to the current iterations of ML than I have laid out, and any readers who are interested (and willing to slog through a lot of complicated math) are encouraged to do further reading. Of particular interest is the history of neural networks, which imitate the complex and manifold connections between neurons in the human brain. For the average reader, however, I hope that a simpler explanation will suffice. The key to alleviating the fears some people have of AI is to understand that, beneath all the jargon, the processes really are quite simple.
There is no magic going on: it’s just math and mechanics.
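To underline the point, here is everything a single artificial neuron, the building block of those neural networks, actually does. The inputs and weights below are made up for illustration; in a real network they would be learned from training data.

```python
import math

def neuron(inputs, weights, bias):
    # Multiply each input by the strength of its connection, sum them up,
    # add the bias...
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # ...then squash the result into the range (0, 1) with a sigmoid.
    return 1 / (1 + math.exp(-total))

# Invented example: three input signals and their connection strengths.
inputs = [0.9, 0.2, 0.5]
weights = [2.0, -1.5, 0.5]
print(f"Neuron output: {neuron(inputs, weights, bias=-0.5):.3f}")
```

A modern network chains millions of these together, but each one is still just multiplication, addition, and a squashing function.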
Once you have a base-level understanding of ML, there is a whole world of industry applications to learn more about. Forbes did a rundown two years ago of "Use Cases Everyone Should Know About" in fields ranging from health care to malware security. The applications you're probably most familiar with are used by sites like Amazon and YouTube to recommend products and content based on what you have already enjoyed. But ML is also being used to teach cars to drive themselves, to help doctors more effectively recognize breast cancer, and to enable computers to hold conversations with human beings.
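As a flavor of how that first kind of application can work, here is a deliberately crude recommender sketch. The users, items, and ratings are all invented; the idea is simply to find the user whose tastes look most like yours and suggest their favorite item.

```python
# Made-up ratings on a 1-5 scale.
ratings = {
    "alice": {"robot_kit": 5, "swimsuit": 1, "novel": 4},
    "bob":   {"robot_kit": 4, "swimsuit": 2, "novel": 5},
    "carol": {"robot_kit": 1, "swimsuit": 5, "novel": 2},
}

def similarity(a, b):
    """Crude taste-match score: smaller rating gaps = higher similarity."""
    shared = set(ratings[a]) & set(ratings[b])
    return -sum(abs(ratings[a][item] - ratings[b][item]) for item in shared)

def recommend(user):
    """Suggest the favorite item of the most similar other user."""
    others = [u for u in ratings if u != user]
    nearest = max(others, key=lambda u: similarity(user, u))
    return max(ratings[nearest], key=ratings[nearest].get)

print(recommend("alice"))  # bob's tastes are closest to alice's: "novel"
```

A production recommender would exclude items you have already rated and draw on vastly richer signals, but the "people like you enjoyed this" intuition is the same.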
Unless neo-Luddites and smartphone skeptics join forces to smash the world's computers, AI and ML are not going anywhere. Along with blockchain, AI is predicted to be a major game-changer in almost every industry in 2018. The major players in the tech field — the indomitable giants Google, Amazon, and Apple — all have their eyes on the AI prize. Look in the next few years for smart assistants like Alexa and Google Home to become increasingly helpful, and increasingly ubiquitous. For any company, no matter its field, the time to jump aboard the AI train is now.
Once again, I want to sound a note of caution before you embrace our new artificially intelligent friends. I do not personally believe that the development of a fully sentient, destructive AI is possible within our lifetime, but there are other risks to consider. In the right hands, AI can be a wonderful tool that increases convenience, safety, and overall quality of life. In the wrong hands, it can be used for sinister applications like deepfakes — or by hackers and criminals to commit massive cyber-fraud. As with any tool, the way we choose to use AI will dictate how it shapes our future. Let's make the decision now to use it for the benefit of all people, and to be careful that it doesn't grow so powerful we can't control it.