DeepLearningKit – Open Source Deep Learning Framework for Apple’s iOS, OS X and tvOS

By Amund Tveit | December 28, 2015

Happy to announce that an (early) version of DeepLearningKit is available on:


0. What does DeepLearningKit do?

It currently allows running deep convolutional neural network models trained in Caffe on Apple’s iOS, OS X and tvOS (transformed from protobuf into JSON with the tool at – a tutorial about this will come later).
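To give a feel for what the converted format looks like, here is a rough sketch of a single convolutional layer expressed as JSON. The field names below mirror Caffe’s prototxt conventions; the exact schema produced by the conversion tool may differ, so treat this as illustrative only:

```json
{
  "name": "conv1",
  "type": "Convolution",
  "bottom": ["data"],
  "top": ["conv1"],
  "convolution_param": {
    "num_output": 32,
    "kernel_size": 5,
    "stride": 1,
    "pad": 2
  }
}
```

The trained weights would accompany each layer as flat arrays of floats, which is what makes JSON a convenient (if verbose) interchange format between Caffe on the training side and Swift/Metal on the inference side.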

1. Open Source License?

Apache 2.0

2. How to get started?

Have a look at Tutorial – Using DeepLearningKit with iOS for iPhone and iPad – it covers using a pre-trained CIFAR-10 Network in Network model.

3. What is DeepLearningKit developed in?

It is developed in Metal (for GPU acceleration) and Swift (for app integration). I believe DeepLearningKit is the first (public) Deep Learning tool that uses the Metal compute API for GPUs (Metal is Apple’s recommended way to program GPUs).
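To illustrate what programming the GPU with Metal’s compute API means in practice, here is a minimal compute kernel for a ReLU activation, written in the Metal Shading Language. The kernel name and buffer layout are illustrative, not DeepLearningKit’s actual shaders:

```metal
#include <metal_stdlib>
using namespace metal;

// Each GPU thread applies ReLU (max(0, x)) to one element of the input buffer.
kernel void relu(const device float *input  [[ buffer(0) ]],
                 device float       *output [[ buffer(1) ]],
                 uint id [[ thread_position_in_grid ]])
{
    output[id] = max(0.0f, input[id]);
}
```

From the Swift side, such a kernel is dispatched by encoding it into a command buffer with one thread per element, which is the basic pattern used for all the layer computations.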

4. More documentation

More tutorials and a paper describing DeepLearningKit will be made available on (+ for the paper)

5. I love developing for Apple’s [iOS, OS X or tvOS] and would like to contribute to this project, how?

Here are a few thoughts:

  1. Fork repo(s), play with it/them and provide feedback or fixes.
  2. Create apps that use DeepLearningKit (disclaimer: still very early version) and tell us about them.
  3. Try (and perhaps adapt) different types of deep neural networks to DeepLearningKit, e.g.
    1. Microsoft Research’s ImageNet 2015 winning approach described in the paper Deep Residual Learning for Image Recognition, or
    2. DeepMind’s (Google) AI for Atari games described in the papers Human-level control through deep reinforcement learning, Deep Reinforcement Learning with Double Q-Learning and Playing Atari with Deep Reinforcement Learning
    3. Other types of Deep Learning, check out http://DeepLearning.University for inspiration
  4. Performance Optimization wrt Metal (GPU): Metal is a very new API (in particular for GPGPU non-graphical processing), and there are probably ways to improve our usage of it.
  5. Performance Optimization wrt algorithms (e.g. shader functions for convolution): see our paper for roadmap
  6. Importers: develop model importers (in Swift) for convolutional neural networks from other tools than Caffe, e.g. Torch, TensorFlow, Theano, Nervana Systems, DeepLearning4J or Pylearn. HDF5 is an interesting format.
  7. Training Support: our primary goal was to support already-trained Deep Learning models (since in the long run people will probably not train their own DL models but rather pick them from a Deep Learning model store or similar, see our paper for why), but it would still be great to be able to train convolutional neural networks in DeepLearningKit itself.
  8. Image Handling Support: DeepLearningKit is missing basic conversion from e.g. UIImage to RGB (the example network supports 32x32x3 CIFAR RGB Image Format, but has no conversion from UIImage to it). Check out e.g. Drawing Images From Pixel Data – In Swift and Image Processing in iOS Part 1: Raw Bitmap Modification for inspiration.
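As a starting point for the image handling item above, here is a hedged sketch (not part of DeepLearningKit, and the function name is invented) of converting a UIImage into normalized, interleaved RGB floats by drawing it into a known bitmap layout with Core Graphics:

```swift
import UIKit

// Illustrative sketch: draw a UIImage into a 32x32 RGBA bitmap with a known
// byte layout, then extract normalized RGB floats in interleaved
// [r, g, b, r, g, b, ...] order for a CIFAR-style network input.
func rgbFloats(from image: UIImage, width: Int = 32, height: Int = 32) -> [Float]? {
    let bytesPerPixel = 4
    var raw = [UInt8](repeating: 0, count: width * height * bytesPerPixel)
    guard let cgImage = image.cgImage,
          let context = CGContext(data: &raw,
                                  width: width, height: height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: width * bytesPerPixel,
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
    else { return nil }

    // Drawing into the context also resizes the image to width x height.
    context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))

    var pixels = [Float]()
    pixels.reserveCapacity(width * height * 3)
    for i in stride(from: 0, to: raw.count, by: bytesPerPixel) {
        pixels.append(Float(raw[i])     / 255.0)  // R
        pixels.append(Float(raw[i + 1]) / 255.0)  // G
        pixels.append(Float(raw[i + 2]) / 255.0)  // B (alpha byte is skipped)
    }
    return pixels
}
```

Note that the CIFAR-10 example may expect a different channel ordering or mean-subtracted values; adapting this sketch to the network’s actual expected input format is exactly the kind of contribution described above.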

6. Is DeepLearningKit production ready for my mission critical app?

Most likely not, but that shouldn’t stop you from testing it out!

7. DeepLearningKit reminds me more about CUDA/GPU Libraries such as Nvidia’s cuDNN or Facebook’s fbcunn rather than larger tools such as Torch, TensorFlow and Caffe, is that right?

You’re right, DeepLearningKit can be roughly seen as an early “metalDNN” with Swift packaging for loading and running models. (It currently doesn’t support Fast Fourier based convolution such as Facebook’s fbcunn)

8. Who Developed and Open Sourced DeepLearningKit?

DeepLearningKit was developed and open sourced by the company Memkite – check out the About page for details.

Best regards,

Amund Tveit 
