Previously in our newsletter, we had framed automated machine learning around four notions:
- Citizen Data Science / ML: Automated machine learning will allow everyone to do data science and ML. It requires no special training or skills.
- Efficient Data Science / ML: Automated machine learning will supercharge your data scientists and ML engineers by making them more efficient.
- Learning to Learn: Automated machine learning will automate the design of architectures and optimization algorithms (architecture search).
- Transfer Learning: Automated machine learning will allow algorithms to learn new tasks faster by utilizing what they learned from mastering other tasks in the past.
Since then, the term automated machine learning has become strongly linked to Google's definition of AutoML as a way for neural nets to design neural nets, or, expressed technically, as a way to perform neural architecture search. Google's messaging asserts that AutoML will make AI work for everyone: Google Cloud's AutoML beta products now allow one to build custom vision, language, and translation models with minimal machine learning skill. The product page states that, under the hood, this capability is powered by Google's AutoML and transfer learning.

But, as fast.ai points out, transfer learning and neural architecture search are two opposite approaches. Transfer learning assumes that neural net architectures generalize to similar problems (for example, features like corners and lines show up in many different images); neural architecture search assumes that each dataset needs its own unique, specialized architecture. In transfer learning, you start from a trained model with an existing architecture and further tune its weights on your data; neural architecture search requires training multiple new architectures, each with new weights. In practice, one does not need to use both techniques (yet?). Transfer learning is the predominant approach today because neural architecture search remains computationally expensive.

We very much agree with fast.ai's assessment that not everyone needs to perform neural architecture search, and that the ability to perform such a search does not replace machine learning expertise. In fact, blindly using computational power to search for the best architecture seems to lead us further into the abyss of uninterpretable models.
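To make the contrast concrete, here is a minimal transfer learning sketch in PyTorch. The model choice, learning rate, and `num_classes` are our own illustrative assumptions (not anything Google's products prescribe); the point is that we reuse an existing architecture and its pretrained weights, and train only a small new piece on our data.

```python
import torch
import torch.nn as nn
from torchvision import models

# Transfer learning: start from an architecture whose weights were
# already trained on ImageNet. Its early layers detect corners,
# lines, and textures that generalize to many image problems.
model = models.resnet18(pretrained=True)

# Freeze the pretrained weights; the architecture stays as-is.
for param in model.parameters():
    param.requires_grad = False

# Replace only the final classification layer to fit our task.
# num_classes is a placeholder for your own dataset.
num_classes = 10
model.fc = nn.Linear(model.fc.in_features, num_classes)

# Train just the new layer. No new architectures are generated or
# trained from scratch, which is what keeps this approach cheap.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
```

A neural architecture search, by contrast, would generate and fully train many candidate networks (architectures and weights) just to select one winner; that gap in cost is why transfer learning dominates in practice.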
On the flip side, if we go back to the pre-GPU era, one could argue that neural architecture search is in the same place today that deep learning was back then. Sprinkle in the notion of Software 2.0, and suddenly the idea of everyone designing neural nets for their particular needs looks like a reasonable trajectory!