DeepLearningKit - an Open Source Deep Learning
Framework for Apple’s iOS, OS X and tvOS
developed in Metal and Swift
Amund Tveit
Memkite
Torbjørn Morland*
Memkite
Thomas Brox Røst*
Atbrox
Abstract
In this paper we present DeepLearningKit - an open source framework that sup-
ports using pre-trained deep learning models (convolutional neural networks) for
iOS, OS X and tvOS. DeepLearningKit is developed in Metal in order to utilize the
GPU efficiently and Swift for integration with applications, e.g. iOS-based mobile
apps on iPhone/iPad, tvOS-based apps for the big screen, or OS X desktop appli-
cations. The goal is to support using deep learning models trained with popular
frameworks such as Caffe, Torch, TensorFlow, Theano, Pylearn, Deeplearning4J
and Mocha. Given the massive GPU resources and time required to train Deep
Learning models, we suggest an App Store-like model to distribute and download
pre-trained and reusable Deep Learning models.
1 Introduction
The Metal programming language is the recommended and most efficient way of utilizing the
GPU on Apple's iOS since 2014 [1, 2, 3, 4], and on tvOS and OS X since 2015 [5, 6, 7]. This paper
gives an overview of a Metal and Swift based Deep Learning library named DeepLearningKit,
in particular the Metal shader functions that implement convolutional neural network operators
on the GPU. DeepLearningKit supports on-device Deep Learning on Apple's iOS, OS X and tvOS.
DeepLearningKit currently has shader functions for convolutional neural networks implemented in
Metal and parallelized for the GPU - operators include: convolution, pooling, rectifier and softmax.
In terms of deep learning models supported, it can run Min Lin's Caffe-trained Network In
Network [8] (NIN - trained on the CIFAR-10, CIFAR-100 and ImageNet data sets). We also have
preliminary support for running a Theano [9]-trained LeNet (trained on the MNIST digit classification dataset).
The reason we have chosen NIN is that the network is small compared to other deep convolutional
neural networks, while at the same time providing very high classification accuracy on images, e.g.
better than AlexNet. GoogLeNet (winner of ImageNet 2014) uses a similar approach as NIN [?].
NIN can perhaps also be used in non-image domains, e.g. speech recognition [10] or natural
language processing [11]. In particular, one could attempt to adapt Zhang and LeCun's encoding and 1D
convolutional operators from "Text Understanding from Scratch" [12] and use them with NIN.
2 DeepLearningKit
This section describes how DeepLearningKit was built and presents early empirical results from
using it.
https://DeepLearningKit.org
Figure 1: DeepLearningKit
2.1 Metal GPU Compute API in a Nutshell
All configuration of Metal happens in Swift (or Objective-C), while the computational functions
(shaders) themselves are written in the Metal shading language.
Metal has a threading model similar to Vulkan (SPIR-V) as shown in figure 2 (source: [13]), with one
or several command queues (MTLCommandQueue) that each store and run a sequence of command
buffers (MTLCommandBuffer) that process data on the GPU (MTLBuffer). Figure 3 shows a
(partial) comparison between Metal/Swift API and C++/OpenCL (OpenCL can be used to generate
Vulkan/SPIR-V code).
Each MTLCommandBuffer has a compute command encoder (MTLComputeCommandEncoder)
that specifies how the command buffer should be executed on the GPU, e.g. number of threadgroups
and threads per threadgroup (specified with MTLSize and dispatchThreadgroups()) as well as the
Metal shader function to perform (MTLFunction) that is loaded from the Metal (source or binary)
Library (MTLLibrary). The Metal shader function, e.g. convolution() or rectifier() is wrapped
inside a compute pipeline descriptor (MTLComputePipelineDescriptor) before it is inserted into
the MTLCommandBuffer.
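To make these pieces concrete, here is a minimal Swift sketch of a single compute dispatch. It uses the current Metal API method names (which have shifted slightly across Metal/Swift versions), and the shader name "rectifier" is an illustrative placeholder rather than DeepLearningKit's exact function:

```swift
import Metal

// Device, queue and library (the library contains the compiled .metal shaders).
guard let device = MTLCreateSystemDefaultDevice(),
      let queue = device.makeCommandQueue(),
      let library = device.makeDefaultLibrary(),
      let function = library.makeFunction(name: "rectifier") // placeholder shader name
else { fatalError("Metal not available") }

// Wrap the shader function in a compute pipeline state.
let pipeline = try! device.makeComputePipelineState(function: function)

// Pre-allocate a GPU-visible buffer for the layer's activations.
let count = 4096
let buffer = device.makeBuffer(length: count * MemoryLayout<Float>.stride,
                               options: .storageModeShared)!

// One command buffer with one compute command encoder.
let commandBuffer = queue.makeCommandBuffer()!
let encoder = commandBuffer.makeComputeCommandEncoder()!
encoder.setComputePipelineState(pipeline)
encoder.setBuffer(buffer, offset: 0, index: 0)

// Threadgroup counts and sizes are expressed with MTLSize and dispatched per encoder.
let threadsPerGroup = MTLSize(width: 64, height: 1, depth: 1)
let threadgroups = MTLSize(width: (count + 63) / 64, height: 1, depth: 1)
encoder.dispatchThreadgroups(threadgroups, threadsPerThreadgroup: threadsPerGroup)
encoder.endEncoding()

commandBuffer.commit()
commandBuffer.waitUntilCompleted()
```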
Since a convolutional neural network is a sequence of layers (typically with matrix or vector
calculations per layer), we represented each layer as a MTLCommandBuffer (with the appropriate
Metal shader) and inserted all of them into a MTLCommandQueue. Data (on both the Swift and
Metal sides) was pre-allocated (e.g. as MTLBuffer) before the calculation was started on the GPU.
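A hedged Swift sketch of that per-layer scheme follows; the types and property names here are illustrative placeholders, not DeepLearningKit's actual API:

```swift
import Metal

// Illustrative description of one pre-built layer dispatch;
// the real framework's types are structured differently.
struct LayerDispatch {
    let pipeline: MTLComputePipelineState   // e.g. convolution, pooling, rectifier or softmax
    let buffers: [MTLBuffer]                // pre-allocated input/output/weight buffers
    let threadgroups: MTLSize
    let threadsPerGroup: MTLSize
}

// Enqueue one command buffer per layer on the same command queue;
// command buffers on a queue execute in the order they are committed.
func enqueueForwardPass(_ layers: [LayerDispatch], on queue: MTLCommandQueue) {
    for layer in layers {
        let commandBuffer = queue.makeCommandBuffer()!
        let encoder = commandBuffer.makeComputeCommandEncoder()!
        encoder.setComputePipelineState(layer.pipeline)
        for (index, buffer) in layer.buffers.enumerated() {
            encoder.setBuffer(buffer, offset: 0, index: index)
        }
        encoder.dispatchThreadgroups(layer.threadgroups,
                                     threadsPerThreadgroup: layer.threadsPerGroup)
        encoder.endEncoding()
        commandBuffer.commit()
    }
}
```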
2.2 GPU algorithms used in DeepLearningKit
Calculating the convolution layers is the most computationally expensive part of deep convolutional
neural networks, since it involves matrix multiplication - typically done with a GEneral Matrix to
Matrix Multiplication (GEMM) function parallelized for the GPU.
The approach for GPU-based GEMM used in DeepLearningKit is similar to that of Nvidia's cuDNN
CUDA-based Deep Learning library, i.e. using an im2col() transformation followed by a convolution()
function; see Shaders.metal at github.com/deeplearningkit/deeplearningkit for the implementation.
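As an illustration of the im2col idea (not the actual Metal kernel), the following CPU-side Swift sketch rearranges a single-channel image into columns so that convolution becomes a single matrix multiplication; the real shader performs the equivalent rearrangement in parallel on the GPU:

```swift
// im2col for one channel: each output position becomes a column containing
// the kernel-sized patch under it, so convolution reduces to
// (filters reshaped to rows) x (these columns), i.e. a GEMM call.
func im2col(_ image: [Float], height: Int, width: Int,
            kernelSize: Int, stride: Int = 1) -> [[Float]] {
    let outHeight = (height - kernelSize) / stride + 1
    let outWidth = (width - kernelSize) / stride + 1
    var columns: [[Float]] = []
    for y in 0..<outHeight {
        for x in 0..<outWidth {
            var patch: [Float] = []
            for ky in 0..<kernelSize {
                for kx in 0..<kernelSize {
                    patch.append(image[(y * stride + ky) * width + (x * stride + kx)])
                }
            }
            columns.append(patch)
        }
    }
    return columns
}

// Example: a 4x4 image with a 3x3 kernel yields 4 columns of 9 values each.
let image: [Float] = (0..<16).map(Float.init)
let cols = im2col(image, height: 4, width: 4, kernelSize: 3)
print(cols.count, cols[0].count)   // 4 9
```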
Figure 2: Metal - similar setup as Vulkan (SPIR-V)
Figure 3: Metal/Swift compared to C++/OpenCL
2.3 Experiences with PowerVR G6430/GT7600 on iPhone 5S/6S
Going from the iPhone 5S (with a PowerVR G6430 GPU, according to AnandTech's iPhone 5S
review) to the iPhone 6S (with a PowerVR GT7600 GPU, according to the Apple iPhone 6S Plus
vs. Samsung Galaxy S6 Edge+ comparison), we got roughly one order of magnitude improvement
in DeepLearningKit's performance. Calculation time to run through a 20 layer deep convolutional
neural network model for image recognition went from approximately 2 seconds to less than 100
milliseconds. The network we used was the NIN network trained on CIFAR-10. Based on Xcode
profiling we suspect that the Metal compute drivers for the GPU are not fully tuned, so with
lower-level tools (e.g. for OpenCL/Vulkan SPIR-V) for tuning the GPU we could probably improve
performance quite a bit.
(Note that 100 milliseconds, or in other words 0.1 seconds, is what Jakob Nielsen stated as one of
the 3 important response-time limits: the threshold below which a user feels that a system reacts
instantaneously.)
Based on Xcode profiling with Instruments (figure 4), we suspect that our Metal/Swift code and
perhaps the underlying Metal compute drivers still have potential to be improved: the time required
to run through the network in a forward pass is about 93 milliseconds (lower right in figure 4),
but Xcode Instruments shows that the total duration of the GPU compute steps is only about 0.5
milliseconds (537.62 microseconds; the blue line at the bottom of figure 4), i.e. only approximately
1/200th of the total forward-pass time. (Note that the entire forward pass does more than computation
on the GPU; the most likely time costs are memory allocation/copying and synchronization between
GPU and CPU.)
Figure 4: DeepLearningKit run - Xcode Instruments Output - GPU Hardware
2.4 Effort needed to port from Metal/Swift to OpenCL/Vulkan Compute SPIR-V
The code needed to set up and run deep learning on the GPU, load/save data, and set up the deep
learning pipeline (the convolutional neural network) is done in Swift (for easy app integration on
iOS, OS X and tvOS), but can be moved to a language of choice (e.g. Java on Android or C/C++ on
other devices). The Swift API for setting up Metal resembles the corresponding OpenCL C API, as
shown in Figure 3.
The Deep Learning GPU code (e.g. the shader functions that calculate convolution etc.) is written
in Metal, a language that is a subset of C++11 with a few additions of its own. Porting the Metal
GPU code to OpenCL should be relatively straightforward since OpenCL kernels are written in a
closely related C-based language; as an example, see figures 5 and 6 for a rectifier function written
in both Metal and OpenCL. Going from OpenCL to Vulkan SPIR-V can be done with a compiler,
for further profiling and optimization.
Figure 5: Rectifier Function in Metal
Figure 6: Rectifier Function in OpenCL
The threading model in Vulkan is close to a 1-to-1 match with Metal's (figure 2), so that should not
be an issue for porting.
2.5 Roadmap for DeepLearningKit
Here follows a brief overview of things we are working on or that are on our roadmap.
1. use FFT-based convolution - with precalculated convolution filters [14, 15]
2. use lower-precision floating point - in order to increase performance and support larger
models (for now it uses 32-bit floats, or complex numbers - i.e. 2x32 bits per complex number
- to prepare for FFT-based convolution) [16, 17]
3. avoid copying memory between CPU and GPU more than needed [18]
4. add support for other types of pre-trained networks than deep convolutional neural net-
works, e.g. recurrent neural networks [19, 20]
5. look into more in-place calculations to save memory, i.e. supporting larger models
6. try to exploit larger parts of the Metal API w.r.t. memory layout and threadgroups to increase
performance (this relates to 1.) [21, 22, 23, 24, 25]
7. Look into teacher-student deep networks or other compressed models for even smaller but
still high-quality models (recent research has shown AlexNet models being compressed
from 240MB to 6.9MB), see the paper [A Deep Neural Network Compression Pipeline]
8. Look into algorithms for approximate matrix multiplication (i.e. convolution step speedup)
to further increase speed (and reduce energy usage); interesting techniques include [Approximating
matrix multiplication and low-rank approximation], [Fast Approximate Matrix Multiplication by
Solving Linear Systems] and [Fast Monte-Carlo Algorithms for Approximate Matrix Multiplications].
9. Look into a broad set of Deep Learning applications, e.g. the categories in figure 7 from the
research bibliography at [http://Deeplearning.University]. There might be application-specific
optimizations that can be done, e.g. in the case of natural language processing with convolutional
neural networks one uses 1D convolution instead of 2D (as in image classification).
3 App Store for Deep Learning Models
Given the immense asymmetry in the time taken to train a Deep Learning model versus the time
needed to use it (e.g. to do image recognition), it makes perfect sense to build a large repository of
pre-trained models that can be (re)used several times. Since there are several popular tools used to
train Deep Learning models (e.g. Caffe, Torch, Theano, DeepLearning4J, PyLearn and Nervana),
we are working on supporting the import of pre-trained models from those tools into an app store
for deep learning models (currently we have primarily been working with Caffe CNN models).
The tweet in Figure 8 illustrates how much energy is required to train a Deep Network (per night);
some Deep Learning models can take weeks of training on GPUs like the Nvidia Titan X, or in other
words piles of wood worth of energy. Using a model is quite different, since it requires less energy
than lighting a match. See figures 9 and 10 for an illustration of this.
Deep Learning models also typically have a (low) limit on the number of classes they can predict per
model (e.g. in the ImageNet competition there are 1000 classes, in CIFAR-100 100 classes and in
CIFAR-10 10 classes). This means that in order to create real-life applications one needs to switch
intelligently (and very rapidly, loading them from SSD into GPU-accessible RAM) between several
Deep Learning models, or, if there is enough capacity, run several models in parallel on the same GPU.
Selecting an appropriate Deep Learning model (i.e. the one most likely to work well in a given
context) is to our knowledge not a well-studied field of research, and in some ways it resembles the
meta or universal search problem found in web search (e.g. cross-model ranking), but latency plays
an even bigger part in the mobile on-device case (there is no time to run many models). We have
some ideas for a meta model for selecting a model to use, which can use input like location, time of
day, and camera history to predict which models might be most relevant.
With state-of-the-art compression techniques for Convolutional Neural Networks, the (groundbreak-
ing) AlexNet model from 2012 can be compressed from 240MB to 6.9MB. This means that one
could theoretically fit more than eighteen thousand AlexNet models on a 128 GB mobile device like
the iPhone 6!