DeepLearningKit – Deep Learning for iOS (tested on iPhone 6S), tvOS and OS X developed in Metal and Swift

December 28, 2015

In early October we purchased the new iPhone 6S and had high expectations of its GPU performance. One of the reasons for our expectations was a blog post by Simon Gladman, in which he wrote that the iPhone 6S had 3 times the GPU performance of the iPhone 6; this was also reported by TheNextWeb.

In our GPU programming case (developing Deep Learning algorithms with Metal), going from the iPhone 5S to the iPhone 6S gave an order of magnitude improvement in performance! Calculation time to run through a 20-layer deep convolutional neural network model for image recognition went from approximately 2 seconds to less than 100 milliseconds. Note that 100 milliseconds, or in other words 0.1 seconds, is what Jakob Nielsen stated as one of the three important response-time limits: the threshold below which a user feels a system reacts instantaneously.

This blog post gives a brief overview of DeepLearningKit – a Deep Learning Kit for iOS, OS X and tvOS. It is developed in Metal in order to make efficient use of the GPU, and in Swift for setting up Metal, loading data and integrating with apps.

1. DeepLearningKit – GPU Accelerated Deep Learning for Apple’s iOS, tvOS and OS X with Metal and Swift

DeepLearningKit currently implements Convolutional Neural Networks in Metal (parallelized for the GPU); the deep learning layer operators include convolution, pooling and ReLU.
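To give a feel for how such a layer operator is run on the GPU, here is a minimal Swift sketch of dispatching a hypothetical “relu” Metal compute kernel over a buffer of activations. It uses the standard Metal API rather than DeepLearningKit’s actual code, and the kernel name, buffer size and threadgroup size are assumptions for illustration:

import Metal

// Sketch only: assumes the app bundles a Metal shader function named "relu"
let device = MTLCreateSystemDefaultDevice()!
let queue = device.makeCommandQueue()!
let library = device.makeDefaultLibrary()!
let pipeline = try! device.makeComputePipelineState(function: library.makeFunction(name: "relu")!)

// Example layer activations to rectify in place on the GPU
let activations = [Float](repeating: -1.0, count: 1024)
let buffer = device.makeBuffer(bytes: activations,
                               length: activations.count * MemoryLayout<Float>.stride,
                               options: [])!

let commandBuffer = queue.makeCommandBuffer()!
let encoder = commandBuffer.makeComputeCommandEncoder()!
encoder.setComputePipelineState(pipeline)
encoder.setBuffer(buffer, offset: 0, index: 0)
encoder.dispatchThreadgroups(MTLSize(width: activations.count / 32, height: 1, depth: 1),
                             threadsPerThreadgroup: MTLSize(width: 32, height: 1, depth: 1))
encoder.endEncoding()
commandBuffer.commit()
commandBuffer.waitUntilCompleted()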

On OS X, DeepLearningKit can easily be adapted to utilize several GPUs if present, e.g. to run the same deep learning model on several GPUs to increase throughput, or to run different models in order to increase the number of classes to predict over.


import Metal

let GPUs = MTLCopyAllDevices() // lists every Metal-capable GPU (available on OS X)
print(GPUs)

gave the following on a (2012) Retina MacBook Pro:

[Screenshot: the list of Metal devices returned by MTLCopyAllDevices()]
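One straightforward way to make use of those devices, sketched below as an illustration rather than as part of DeepLearningKit’s API, is to create one command queue per GPU and hand separate model instances (or separate batches of images) to each of them:

import Metal

// Hypothetical sketch: one command queue per GPU, so that separate model
// instances (or separate batches of images) can be processed in parallel
let devices = MTLCopyAllDevices()
let queues = devices.map { ($0, $0.makeCommandQueue()!) }
for (device, queue) in queues {
    print("Would run a model instance on \(device.name) via \(queue)")
}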

An interesting feature on iOS (and most likely on tvOS, but not yet tested in our case) is that one can share memory between the GPU and the CPU, which means less copying of data.
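As a rough illustration of that shared-memory model (a sketch assuming the shared storage mode that iOS buffers use), the CPU can read the GPU’s output straight out of a Metal buffer without an explicit copy:

import Metal

// On iOS, CPU and GPU can see the same buffer memory (shared storage mode)
let device = MTLCreateSystemDefaultDevice()!
let results = device.makeBuffer(length: 1024 * MemoryLayout<Float>.stride,
                                options: .storageModeShared)!
// ... after the GPU has written layer outputs into `results` ...
let outputs = results.contents().bindMemory(to: Float.self, capacity: 1024)
print(outputs[0]) // read directly, no copy back from the GPU needed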

2. App Store for Deep Learning Models

Given the immense asymmetry between the time taken to train a Deep Learning model and the time needed to use it (e.g. to do image recognition), it makes perfect sense to build a large repository of pre-trained models that can be (re)used many times. Since there are several popular tools used to train Deep Learning models (e.g. Caffe, Torch, Theano, DeepLearning4J, PyLearn and Nervana), we’re working on supporting the import of pre-trained models from those tools into an “app store” for deep learning models (currently we’ve primarily been working with Caffe CNN models).

[Figure: energy required to train a Deep Network]

The figures illustrate how much energy is required to train a Deep Network (per night): some Deep Learning models can take weeks of training on GPUs like the Nvidia Titan X, or in other words, piles of firewood worth of energy. Using a trained model is quite different, since it requires less energy than lighting a match.

[Figures: energy needed to train a convolutional neural network versus energy needed to use one]

Deep Learning Models also typically have a (low) limit on the number of classes they can predict per model (e.g. in the ImageNet competition there are 1000 classes, in CIFAR-100 there are 100 classes and in CIFAR-10 there are 10 classes). This means that in order to create real-life applications one needs to intelligently switch between several Deep Learning Models (and load them very rapidly from SSD into GPU-accessible RAM), or, if there is enough capacity, run several models in parallel on the same GPU. Selecting an appropriate Deep Learning model (i.e. the one most likely to work well in a given context) is to our knowledge not a well-studied field of research, and in some ways it resembles the meta or universal search problem found in web search (e.g. cross-model ranking), but latency plays an even bigger part in the mobile on-device case (there isn’t time to run many models).
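A hypothetical sketch of such model switching, with made-up names and no connection to DeepLearningKit’s actual API, might keep compressed models on disk keyed by context and load only the one that is needed:

import Foundation

// Hypothetical example: map contexts (e.g. "food", "faces") to model files on
// disk and load only the weights the current context calls for
struct ModelStore {
    let modelURLsByContext: [String: URL]

    func loadModelWeights(for context: String) -> Data? {
        guard let url = modelURLsByContext[context] else { return nil }
        return try? Data(contentsOf: url) // the weights would then be copied into GPU buffers
    }
}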

With state-of-the-art compression techniques for Convolutional Neural Networks, the (groundbreaking) AlexNet model from 2012 can be compressed from 240MB to 6.9MB. This means that one could theoretically fit more than eighteen thousand AlexNet models on a 128 GB mobile device like the iPhone 6!
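As a quick back-of-the-envelope check of that number (assuming 1 GB ≈ 1000 MB and ignoring the space taken by the OS and apps):

// 128 GB of storage divided by a 6.9 MB compressed AlexNet
let compressedModelMB = 6.9
let deviceStorageMB = 128.0 * 1000.0
print(Int(deviceStorageMB / compressedModelMB)) // ≈ 18550 models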

Conclusion

Deep Learning on iOS, tvOS and OS X devices is still in its infancy, and the open source DeepLearningKit hopes to play a part in it. Check out our DeepLearningKit tutorial at https://deeplearningkit.org/tutorials-for-ios-os-x-and-tvos/tutorial-using-deeplearningkit-with-ios-for-iphone-and-ipad/

 

 

2 thoughts on “DeepLearningKit – Deep Learning for iOS (tested on iPhone 6S), tvOS and OS X developed in Metal and Swift”

  1. Samir Kumar

    Hi, can you share any details on the specifics of the 20-layer network used in the iPhone 6S benchmark? What kind of network is this? AlexNet-style, Inception net, VGG?

    Thanks
    Samir

