Machine learning technologies increasingly shape our sense of reality and the choices we make in our daily lives. They power Amazon’s product recommendations. They classify documents relevant for a lawsuit. They enable computers to play chess like the masters.
As machine learning applications expand to influence our civic, professional and private lives, it’s important that we all have a basic understanding of how they work and what potential they offer. Pedro Domingos undertook the challenge of providing the first comprehensive, nontechnical overview of machine learning in his new book, The Master Algorithm.
In The Master Algorithm, Domingos divides the field into five different “tribes” – symbolism, connectionism (neural networks), evolutionary algorithms, Bayesian networks, and analogical reasoning – which he challenges his readers to unify in one future “master algorithm” capable of learning nearly anything. This push toward universality informs his book’s bold central hypothesis: “all knowledge - past, present, and future - can be derived from data by a single, universal learning algorithm.” It’s up to us to find it.
We had the pleasure of interviewing Domingos last week. Keep reading to see the highlights.
Let’s start with your central hypothesis that all knowledge can be derived from the master algorithm. That’s a pretty bold claim. What justifies it?
The reason this claim comes off as bold is that the idea of a general-purpose learner is still very new to people. People generally accept that, given enough data, an algorithm can learn to represent it well; but they lose sight of the fact that the different styles of learning algorithms are largely interchangeable – given enough data, each can learn much of what the others can. Think about the first computer, the universal Turing machine. No one imagined that a single machine could do so many totally different things, but the computer can: it is a universal deductive machine. The master algorithm is its counterpart, a universal inductive machine, deriving all knowledge from data.
If the master algorithm becomes the basis for representing all knowledge, does computer science become the new philosophy?
One exciting thing about working in machine learning is that we can ask key questions about knowledge and cognition that philosophers have been asking for centuries, and get rapid feedback about whether we’re on the right track. We make a hypothesis, implement an algorithm to test it, and it either works or it doesn’t. We tweak our model, and our results become more or less accurate; if less, we’re doing something wrong. And yes, machine learning could become the glue between different domains of knowledge. Researchers in physics may not talk directly to researchers in the social sciences, but they can talk to each other indirectly through the language of machine learning and computer science. It’s very common to see models developed for one problem reach their potential in a completely different context. Markov models were initially used for speech analysis but ended up being an incredibly powerful tool for computational biology.
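The Markov-model point can be made concrete: the learner below never looks at what its symbols mean, so the same code fits a stream of words or a strand of DNA. A minimal sketch in Python (both training sequences are invented for illustration, not real corpora):

```python
from collections import defaultdict

def train_markov(sequence):
    """Estimate first-order transition probabilities from a sequence of symbols."""
    counts = defaultdict(lambda: defaultdict(int))
    for prev, cur in zip(sequence, sequence[1:]):
        counts[prev][cur] += 1
    # Normalize counts into probabilities per preceding symbol.
    return {
        prev: {cur: n / sum(nexts.values()) for cur, n in nexts.items()}
        for prev, nexts in counts.items()
    }

# The same learner models word streams (a toy stand-in for speech)...
speech_model = train_markov("the cat sat on the mat the cat ran".split())

# ...and DNA bases, with no change to the learner itself.
dna_model = train_markov(list("ACGTACGTAACCGGTT"))
```

The domain knowledge lives entirely in how the data is tokenized; the learning machinery transfers untouched.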
How do you reconcile the generality of your vision with the fact that some of the most exciting advances in machine learning result from domain-specific problems?
Like any good algorithm, the master algorithm should have two inputs: data and knowledge. This knowledge could be weak (we have a lot left to learn) or strong (we have only a little left to learn). In both cases, the job of learning is to use data to advance our knowledge. What’s exciting is to build a model that represents and improves our knowledge in one domain and can then be transferred to another.
But isn’t this ability to transfer ideas or procedures from one domain to another beyond the capacity of artificial intelligence?
I think we have an overly mystical view of human creativity. Creativity comes from taking things you know and combining them in new ways. Machines are starting to demonstrate astonishing creative capabilities by creating new outputs from existing inputs. Take David Cope’s Experiments in Musical Intelligence: his programs create pieces of music in the style of a particular composer that people struggle to distinguish from the real thing.
We’ve seen strong interest lately in deep learning, given progress using neural networks to do things like classify objects in images. Should neural networks be the foundation of the master algorithm?
Yes, deep learning has made huge headway on speech and image recognition. Some say backpropagation, the technique used to train neural networks, is all we need to solve artificial intelligence: we’ve solved vision and speech with deep learning, so once we solve language, we’re done! Neural networks, however, had previous heydays in the 50s and 60s and then again in the 80s, and history shows that other schools of thought made progress on other problems. It’s interesting to see people like Geoffrey Hinton – a deep learning pioneer – incorporating elements of symbolic machine learning into neural networks. That said, I don’t think the grand unifying theory will be cobbled together from previous schools; it will emerge from ideas people haven’t had yet.
What about the general public? What should everyone know about these new technologies?
Most people still think of computers as fast but dumb machines. That’s no longer the case. Consumers need to understand that the machines of the future will change based on how we interact with them. When we interact with each other, we constantly learn and modify our actions accordingly. We need to become similarly receptive to machines, because what they present to us depends on what we teach them about ourselves. Eventually, such understanding could empower consumers to deliberately shape, through their interactions, the experience they want from companies.
But such empowerment seems pretty far off, considering that current algorithms still do things like show women advertisements for lower-paying jobs than those shown to men.
Ethical edge cases like this make it all the more important that everyone have some understanding of how these tools work. The debate about the ethical implications of machine learning is only going to grow. Learning algorithms by themselves are wonderful because they have no subjective biases; they don’t know that race and gender exist. In theory, therefore, they should be more objective than humans, but there are always places for existing social biases to creep in. An algorithm that generates a credit score, for example, could be skewed by misreported input data. The good news is that we have the power to tune algorithms to promote the social outcomes we desire. We can develop a job advertisement tool optimized purely for accuracy, or we can tweak its parameters to correct for existing social biases that negatively impact minority groups. There’s a ton of work to do to define which parameters should be legal, which means that we as computer scientists are responsible for making sure the underlying technology is accurately understood by those setting policy.
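One concrete form that kind of parameter-tweaking can take is reweighting training examples so each demographic group carries equal total weight, a common pre-processing correction. A minimal sketch (the function name and the data are hypothetical, not any specific product’s approach):

```python
from collections import Counter

def group_balanced_weights(groups):
    """Weight each example inversely to its group's frequency, so every
    group contributes the same total weight during training."""
    counts = Counter(groups)
    n_groups = len(counts)
    total = len(groups)
    return [total / (n_groups * counts[g]) for g in groups]

# Hypothetical training set: 6 examples from group "a", 2 from group "b".
groups = ["a"] * 6 + ["b"] * 2
weights = group_balanced_weights(groups)
```

A learner trained with these weights can no longer improve its accuracy by simply ignoring the smaller group, which is one small, auditable way to encode a policy choice directly in the algorithm.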
What advice would you give to someone considering a career in machine learning?
Students need to consider whether they want to have impact through quick wins or whether they have the patience to try to create a new paradigm. Industry is the place for incremental research and quick wins. Academics should set their sights on a longer horizon, accepting that 99 out of 100 efforts will fail, but that failure is required to push the field to the next level. Finally, young researchers should learn as much as they can, but mistrust everything they learn. Part of our job is to make the old knowledge base obsolete. Creative destruction is the precursor to innovation. History has shown us that even logic is provisional, but we shouldn’t fear that. We should embrace it as the opportunity to create something awesome.