TWG Engineering Education Tackles Deep Learning

In this blog post, TWG Senior Software Engineer Ben Wendt explains how the engineering team pursued personal growth while building its deep learning competence.

Several members of the engineering team have been working on expanding their deep learning skills. Recently, a group of four TWG engineers worked through the fast.ai Deep Learning for Coders course. Each participant was given half a day of work time per week to learn, study, and work on the course material. The course is fairly intensive, so students generally did at least as much studying on their own time to keep up with the pace.

The course is an excellent resource for coders with a range of experience levels. Rather than relying heavily on mathematical notation or theoretical concepts, it focuses on developing experience and getting useful results. The lecturer's analogy is that baseball should be taught by throwing a ball and swinging a bat, rather than by studying the math and physics underlying the game. We found that the course fostered both personal growth and the deep learning competence of our team.

Students in the course immediately start using the Keras framework to get useful results on the Dogs vs. Cats Kaggle image classification competition. Through the coursework, our engineers learned the basics of neural network architecture, how to tune the learning process, and, most importantly, how to use transfer learning effectively.
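
To give a flavour of the kind of model the coursework starts from, here is a minimal sketch of a small Keras convolutional network for binary cat-vs.-dog classification. It is illustrative only, not the course's actual notebook; the input size and layer sizes are assumptions.

```python
# A minimal sketch of a small Keras CNN for cat-vs.-dog classification.
# Input and layer sizes are illustrative assumptions, not the course's exact setup.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(224, 224, 3)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # single probability: dog vs. cat
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```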

Transfer learning is a technique where a generalized model (in our case, an image classifier) is retrained for a specific task. ImageNet is a classification challenge in which millions of images are each assigned to one of 1,000 categories. Teams from top AI firms and academic labs compete every year to beat the state-of-the-art performance on ImageNet. The great thing for deep learning practitioners is that many ImageNet-winning models are subsequently made publicly available for reuse.

Through training on ImageNet, these deep neural networks learn building-block concepts of images, such as eyes, headlights, roofs, and elbows. Transfer learning leverages this knowledge to create a model for a similar task. If you start with a model trained on ImageNet that can already find eyes, paws, and tails, your cats-vs.-dogs classifier will be much easier to train.
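
In Keras, the pattern looks roughly like the sketch below: load a pretrained ImageNet base, freeze its weights, and train only a small new classification head. The choice of VGG16 and the head layer sizes here are assumptions for illustration.

```python
# A sketch of transfer learning in Keras: reuse an ImageNet-trained base
# and train only a small new head. VGG16 and the head sizes are assumptions.
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the learned "building block" features

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # new head for the specific task
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```
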
After completing the course, our team of Aaron, Ashun, Dex, and Phil decided to apply their learnings by using the Kaggle dog breed identification data to build a dog breed classifier. Dex used the Inception model, developed by Google; Aaron used the ResNet model, developed by Microsoft, which won the 2015 ImageNet contest; Ashun used the Xception model, a variant of Inception; and Phil used the VGG model, developed by the University of Oxford, a top performer in the 2014 ImageNet contest.
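
All four architectures are available as pretrained bases through keras.applications, so a comparison like the team's can be set up in a few lines. The specific variants below (e.g. ResNet50 rather than a deeper ResNet) are assumptions, since the post does not name them.

```python
# The four pretrained bases the team tried, as exposed by keras.applications.
# Exact variants (ResNet50, InceptionV3, etc.) are assumptions for illustration.
from tensorflow.keras.applications import InceptionV3, ResNet50, Xception, VGG16

bases = {
    "Dex / Inception": InceptionV3(weights="imagenet", include_top=False),
    "Aaron / ResNet": ResNet50(weights="imagenet", include_top=False),
    "Ashun / Xception": Xception(weights="imagenet", include_top=False),
    "Phil / VGG": VGG16(weights="imagenet", include_top=False),
}
```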

After trying transfer learning on a variety of models, the team found that they got the best results with Xception. While most models exceeded 80% accuracy, the team reached 87% accuracy with Xception at correctly identifying an image as one of 120 breeds.

The team proceeded to create a dog breed identification application: take a photo of a dog, and it tells you the breed of the dog in the photo. This was a crowd-pleasing result during our Friday demos, where we take an hour to show off interesting projects that members of our team are working on. Gaining practical experience like this while learning a new skill is pretty common at TWG, and our developers are looking forward to putting their new skills to use on future projects.
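
Inference in such an application is straightforward. Here is a hypothetical sketch, assuming an Xception-based model and a breed_names list saved from training; both names are stand-ins, not the team's actual code.

```python
# Hypothetical inference sketch for the breed identifier: load a photo,
# preprocess it for Xception, and report the most likely of the 120 breeds.
# `model` and `breed_names` stand in for the team's trained artifacts.
import numpy as np
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications.xception import preprocess_input

def identify_breed(model, breed_names, photo_path):
    img = image.load_img(photo_path, target_size=(299, 299))  # Xception input size
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
    probs = model.predict(x)[0]  # one probability per breed
    return breed_names[int(np.argmax(probs))]
```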
