“Algorithm” is a word that one hears used much more frequently than in the past. One of the reasons is that scientists have learned that computers can learn on their own if given a few simple instructions. That’s really all that algorithms are: mathematical instructions. Wikipedia states that an algorithm “is a step-by-step procedure for calculations.
Algorithms are used for calculation, data processing, and automated reasoning.” Whether you are aware of it or not, algorithms are becoming a ubiquitous part of our lives. Some pundits see danger in this trend. For example, Leo Hickman writes, “The NSA revelations highlight the role sophisticated algorithms play in sifting through masses of data. But more surprising is their widespread use in our everyday lives.
So should we be more wary of their power?” [“How algorithms rule the world,” The Guardian, 1 July 2013] It’s a bit hyperbolic to declare that algorithms rule the world, but I agree that their use is becoming more widespread. That’s because computers are playing increasingly important roles in so many aspects of our lives. I like the HowStuffWorks explanation:
“To make a computer do anything, you have to write a computer program. To write a computer program, you have to tell the computer, step by step, exactly what you want it to do. The computer then ‘executes’ the program, following each step mechanically, to accomplish the end goal. When you are telling the computer what to do, you also get to choose how it’s going to do it. That’s where computer algorithms come in. The algorithm is the basic technique used to get the job done.”
The only point that explanation gets wrong is the claim that you have to tell a computer “exactly what you want it to do” step by step. Rather than follow only explicitly programmed instructions, some computer algorithms are designed to allow computers to learn on their own (i.e., facilitate machine learning). Uses for machine learning include data mining and pattern recognition. Klint Finley reports, “Today’s internet is ruled by algorithms. These mathematical creations determine what you see in your Facebook feed, what movies Netflix recommends to you, and what ads you see in your Gmail.” [“Wanna Build Your Own Google? Visit the App Store for Algorithms,” Wired, 11 August 2014].
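The distinction drawn above can be sketched with a toy program. Instead of hand-coding a step-by-step rule, the “learner” below derives one from labeled examples (nearest-centroid classification); all data, labels, and function names here are invented for illustration.

```python
# Instead of an explicit if/else rule, learn one "centroid" per label
# from (features, label) pairs and classify new inputs by proximity.

def train(examples):
    """Learn the average feature vector (centroid) for each label."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [s / counts[label] for s in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Assign the label whose centroid is closest (squared distance)."""
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(features, centroids[label]))
    return min(centroids, key=dist)

# Toy pattern recognition: "square" vs. "tall" rectangles. No rule for
# the boundary is ever written down -- it is inferred from the data.
data = [([1.0, 0.9], "square"), ([1.1, 1.0], "square"),
        ([1.0, 3.0], "tall"), ([0.9, 3.2], "tall")]
model = train(data)
print(predict(model, [1.0, 1.1]))   # prints "square"
print(predict(model, [1.0, 2.8]))   # prints "tall"
```

The point is only the shape of the workflow: the programmer writes the learning procedure once, and the decision rule itself comes from the data.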
As mathematical instructions, algorithms are neither good nor evil. Clearly, however, people with both good and bad intentions have used algorithms. Dr. Panos Parpas, a lecturer in the department of computing at Imperial College London, told Hickman, “[Algorithms] are now integrated into our lives. On the one hand, they are good because they free up our time and do mundane processes on our behalf. The questions being raised about algorithms at the moment are not about algorithms per se, but about the way society is structured with regard to data use and data privacy. It’s also about how models are being used to predict the future. There is currently an awkward marriage between data and algorithms. As technology evolves, there will be mistakes, but it is important to remember they are just a tool. We shouldn’t blame our tools.”
Algorithms are nothing new. As noted above, they are simply mathematical instructions. Their use in computers can be traced back to one of the giants of computational theory, Alan Turing. Back in 1952, Turing “published a set of equations that tried to explain the patterns we see in nature, from the dappled stripes adorning the back of a zebra to the whorled leaves on a plant stem, or even the complex tucking and folding that turns a ball of cells into an organism.” [“The Powerful Equations That Explain The Patterns We See In Nature,” by Kat Arney, Gizmodo, 13 August 2014] Turing is best remembered for his work during the Second World War, when he helped break the Enigma code. Sadly, Turing took his own life two years after publishing that work on patterns in nature. Fortunately, Turing’s impact on the world didn’t end with his death. Arney reports that scientists are still using his algorithms to discover patterns in nature. Arney concludes:
“In the last years of Alan Turing’s life he saw his mathematical dream — a programmable electronic computer — sputter into existence from a temperamental collection of wires and tubes. Back then it was capable of crunching a few numbers at a snail’s pace. Today, the smartphone in your pocket is packed with computing technology that would have blown his mind. It’s taken almost another lifetime to bring his biological vision into scientific reality, but it’s turning out to be more than a neat explanation and some fancy equations.”
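Turing’s 1952 equations are what is now called a reaction-diffusion system. A standard two-morphogen sketch of that kind of model (using the conventional textbook symbols, which do not appear in the article itself) looks like this:

```latex
% Two interacting chemicals ("morphogens") u and v react locally
% and diffuse through space at different rates:
\frac{\partial u}{\partial t} = f(u, v) + D_u \nabla^2 u, \qquad
\frac{\partial v}{\partial t} = g(u, v) + D_v \nabla^2 v
```

Here $f$ and $g$ describe how the chemicals react with each other, and $D_u$, $D_v$ are their diffusion rates. Turing’s insight was that when those rates differ enough, a perfectly uniform mixture can become unstable and spontaneously organize into spots and stripes, the kinds of natural patterns Arney describes.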
Although Turing’s algorithms have been useful in identifying how patterns emerge in nature, other correlations generated by algorithms have been more suspect. Deborah Gage reminds us, “Correlation … is different than causality.” [“Big Data Uncovers Some Weird Correlations,” The Wall Street Journal, 23 March 2014] She adds, “Finding surprising correlations has never been easier, thanks to the flood of data that’s now available.” Gage reports that one “company found that deals closed during a new moon are, on average, 43% bigger than when the moon is full.”
Other weird correlations that have been discovered include, “People answer the phone more often when it’s snowy, cold or very humid; when it’s sunny or less humid they respond more to email. A preliminary analysis shows that they also buy more when it’s sunny, although certain people buy more when it’s overcast. …The online lender ZestFinance Inc. found that people who fill out their loan applications using all capital letters default more often than people who use all lowercase letters, and more often still than people who use uppercase and lowercase letters correctly.” Gage continues:
“Are sales deals affected by the cycles of the moon? Is it possible to determine credit risk by the way a person types? Fast new data-crunching software combined with a flood of public and private data is allowing companies to test these and other seemingly far-fetched theories, asking questions that few people would have thought to ask before. By combining human and artificial intelligence, they seek to uncover clever insights and make predictions that could give businesses an advantage in an increasingly competitive marketplace.”
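One reason such “surprising correlations” are so easy to find is a purely statistical one: scan enough unrelated variables against an outcome and some will correlate strongly by chance. The sketch below simulates that effect; all the data is random noise, and the variable names are invented (no real sales or lunar figures are involved).

```python
# With many candidate variables and few observations, the *best*
# correlation found is almost always impressively large -- even when
# every series is pure noise.
import random

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

random.seed(42)
outcome = [random.gauss(0, 1) for _ in range(20)]          # e.g., deal size
candidates = {f"signal_{i}": [random.gauss(0, 1) for _ in range(20)]
              for i in range(200)}                         # 200 unrelated series

best = max(candidates, key=lambda k: abs(pearson(candidates[k], outcome)))
print(best, round(pearson(candidates[best], outcome), 2))
# The winning |r| is typically well above 0.5 -- a "surprising
# correlation" with no causal story whatsoever behind it.
```

This is exactly why Gage’s reminder that correlation is different from causality matters: the flood of data doesn’t just make real patterns easier to find, it makes meaningless ones easier to find too.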
ZestFinance Chief Executive Douglas Merrill told Gage, “Data scientists need to verify whether their findings make sense. Machine learning isn’t replacing people.” Part of the problem is that most machine learning systems don’t combine reasoning with calculations. They simply spit out correlations whether they make sense or not.
Gage reports, “ZestFinance discarded another finding from its software that taller people are better at repaying loans, a hypothesis that Mr. Merrill calls silly.” By adding reasoning to machine learning systems, correlations and insights become much more useful. “Part of the problem,” writes Catherine Havasi (CEO and co-founder of Luminoso), “is that when we humans communicate, we rely on a vast background of unspoken assumptions. … We assume everyone we meet shares this knowledge. It forms the basis of how we interact and allows us to communicate quickly, efficiently, and with deep meaning.” [“Who’s Doing Common-Sense Reasoning And Why It Matters,” TechCrunch, 9 August 2014] She adds, “As advanced as technology is today, its main shortcoming as it becomes a large part of daily life in society is that it does not share these assumptions.”
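The vetting step Merrill describes can be sketched as a simple filter: machine-discovered correlations are kept only if a human-maintained list of plausible drivers endorses them. The feature names, correlation values, and the “plausible” list below are all invented for illustration.

```python
# Findings "discovered" by a model, as (feature -> correlation with
# loan default). The numbers are made up.
discovered = {
    "all_caps_application": 0.31,   # the typing-style signal Gage describes
    "applicant_height_cm": 0.12,    # the "silly" kind of finding
    "debt_to_income": 0.45,
}

# Domain experts decide what could plausibly matter -- the human
# reasoning that, per the article, pure machine learning lacks.
plausible = {"all_caps_application", "debt_to_income"}

kept = {f: r for f, r in discovered.items() if f in plausible}
print(sorted(kept))   # applicant_height_cm is discarded
```

Trivial as the filter is, it captures the division of labor Merrill describes: the software proposes, and the data scientists decide whether a finding makes sense.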
“Common-sense reasoning is a field of artificial intelligence that aims to help computers understand and interact with people more naturally by finding ways to collect these assumptions and teach them to computers. Common-sense reasoning has been most successful in the field of natural language processing (NLP), though notable work has been done in other areas. This area of machine learning, with its strange name, is starting to quietly infiltrate different applications ranging from text understanding to processing and comprehending what’s in a photo.
Without common sense, it will be difficult to build adaptable and unsupervised NLP systems in an increasingly digital and mobile world. …NLP is where common-sense reasoning excels, and the technology is starting to find its way into commercial products. Though there is still a long way to go, common-sense reasoning will continue to evolve rapidly in the coming years and the technology is stable enough to be in business use today. It holds significant advantages over existing ontology and rule-based systems, or systems based simply on machine learning.”
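A toy example makes Havasi’s point concrete: a small store of background assumptions lets a program resolve language that a pure pattern-matcher finds ambiguous. The facts, the sentence, and the function below are all invented for illustration, not drawn from any real common-sense system.

```python
# A tiny "knowledge base" of unspoken assumptions, of the kind
# common-sense reasoning tries to collect and teach to computers.
common_sense = {
    ("ice cream", "melts when"): "warm",
    ("ice cream", "kept in"): "freezer",
    ("suitcase", "is"): "container",
    ("trophy", "is"): "rigid",
}

def resolve_pronoun(candidates, relation, value):
    """Pick the candidate noun whose stored properties fit the context."""
    for c in candidates:
        if common_sense.get((c, relation)) == value:
            return c
    return None

# "The ice cream was left next to the suitcase and it melted."
# Which noun does "it" refer to? Grammar alone can't say; background
# knowledge that ice cream melts when warm settles it.
referent = resolve_pronoun(["suitcase", "ice cream"], "melts when", "warm")
print(referent)   # prints "ice cream"
```

Real systems collect such assumptions at vast scale rather than hand-coding four of them, but the mechanism illustrated is the same: the disambiguating information is nowhere in the sentence itself.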
Algorithms can make systems smarter, but without adding a little common sense into the equation, they can still produce some pretty bizarre results.