
Deep Learning for Text Summarization

A few years back I was involved in a project trying to do realtime/low-latency text summarization (without deep learning) using an Nvidia Tesla C1060 GPU (roughly state-of-the-art back then). The motivation for realtime summarization was the idea of improving search results in general. The issue with search is that it typically returns disjoint results (with a somewhat arbitrary relationship between each search result) instead of providing a coherent, cross-result and summarized answer to the query (it does have query-driven summaries – snippets – at the individual result level, but not across results). The project unfortunately never materialized in the form of great results, so it is great to see that Deep Learning based summarization is thriving (with much more powerful GPUs this time). See below for some recent research papers on Deep Learning based summarization:

  1. AttSum: Joint Learning of Focusing and Summarization with Neural Attention – authors: Z Cao, W Li, S Li, F Wei
  2. A Convolutional Attention Network for Extreme Summarization of Source Code – authors: M Allamanis, H Peng, C Sutton
  3. Sequence-to-Sequence RNNs for Text Summarization – authors: R Nallapati, B Xiang, B Zhou
  4. Learning Summary Statistic for Approximate Bayesian Computation via Deep Neural Network – authors: B Jiang, T Wu, C Zheng, WH Wong
  5. LCSTS: A Large Scale Chinese Short Text Summarization Dataset – authors: B Hu, Q Chen, F Zhu
  6. Deep Dependency Substructure-Based Learning for Multidocument Summarization – authors: S Yan, X Wan
  7. Ranking with Recursive Neural Networks and Its Application to Multi-document Summarization – authors: Z Cao, F Wei, L Dong, S Li, M Zhou
  8. Query-oriented Unsupervised Multi-document Summarization via Deep Learning – authors: S Zhong, Y Liu, B Li
  9. Abstractive Multi-Document Summarization via Phrase Selection  – authors: L Bing, P Li, Y Liao, W Lam, W Guo, RJ Passonneau
  10. Modelling, Visualising and Summarising Documents with a Single Convolutional Neural Network – authors: N de Freitas
  11. SRRank: Leveraging Semantic Roles for Extractive Multi-Document Summarization – authors: S Yan, X Wan

Best regards,

Amund Tveit

btw: if you want to work (with me) as a Data Scientist on Deep Learning, check out this position

Deep Learning for Named Entity Recognition

About a year ago I wrote a blog post about recent research in Deep Learning for Natural Language Processing covering several subareas. One of the areas I didn’t cover was Deep Learning for Named Entity Recognition – so here are some interesting recent (2015-2016) papers related to that:

  1. Capturing Semantic Similarity for Entity Linking with Convolutional Neural Networks – authors: M Francis

  2. Entity Attribute Extraction from Unstructured Text with Deep Belief Network – authors: B Zhong, L Kong, J Liu

  3. Learning Word Segmentation Representations to Improve Named Entity Recognition for Chinese Social Media – authors: N Peng, M Dredze

  4. Biomedical Named Entity Recognition based on Deep Neutral Network – authors: L Yao, H Liu, Y Liu, X Li, MW Anwar

  5. Shared tasks of the 2015 workshop on noisy user-generated text: Twitter lexical normalization and named entity recognition – authors: T Baldwin, MC de Marneffe, B Han, YB Kim, A Ritter…

  6. Semi-Supervised Approach to Named Entity Recognition in Spanish Applied to a Real-World Conversational System – authors: SS Bojórquez, VM González

  7. Boosting Named Entity Recognition with Neural Character Embeddings – authors: C dos Santos, V Guimaraes, RJ Niterói, R de Janeiro

  8. Exploring Recurrent Neural Networks to Detect Named Entities from Biomedical Text – authors: L Li, L Jin, D Huang

  9. Entity-centric search: querying by entities and for entities – authors: M Zhou

  10. Automatic Entity Recognition and Typing from Massive Text Corpora: A Phrase and Network Mining Approach – authors: X Ren, A El

  11. Boosting Named Entity Recognition with Neural Character Embeddings – authors: CN Santos, V Guimarães

  12. Named Entity Recognition in Chinese Clinical Text Using Deep Neural Network. – authors: Y Wu, M Jiang, J Lei, H Xu

  13. Context-aware Entity Morph Decoding – authors: B Zhang, H Huang, X Pan, S Li, CY Lin, H Ji, K Knight…

  14. Training word embeddings for deep learning in biomedical text mining tasks – authors: Z Jiang, L Li, D Huang, L Jin

  15. Entity Attribute Extraction from Unstructured Text with Deep Belief Network – authors: B Zhong, L Kong, J Liu

  16. Building Text-mining Framework for Gene-Phenotype Relation Extraction using Deep Leaning – authors: D Jang, J Lee, K Kim, D Lee

  17. Text Mining in Social Media for Security Threats – authors: D Inkpen

  18. Text Understanding from Scratch – authors: X Zhang, Y LeCun

  19. Syntax-based Deep Matching of Short Texts – authors: M Wang, Z Lu, H Li, Q Liu

  20. PTE: Predictive Text Embedding through Large-scale Heterogeneous Text Networks – authors: J Tang, M Qu, Q Mei

  21. Automatic Entity Recognition and Typing from Massive Text Corpora: A Phrase and Network Mining Approach – authors: X Ren, A El

  22. Domain-Specific Semantic Relatedness from Wikipedia Structure: A Case Study in Biomedical Text – authors: A Sajadi, EE Milios, V Kešelj, JCM Janssen

  23. Deep Unordered Composition Rivals Syntactic Methods for Text Classification – authors: M Iyyer, V Manjunatha, J Boyd

  24. Representing Text for Joint Embedding of Text and Knowledge Bases – authors: K Toutanova, D Chen, P Pantel, H Poon, P Choudhury…

  25. In Defense of Word Embedding for Generic Text Representation – authors: G Lev, B Klein, L Wolf

Best regards,

Amund Tveit

btw: if you want to work (with me) as a Data Scientist on Deep Learning, check out this position

New Tutorial – Image Handling in DeepLearningKit

There have been a few requests (on GitHub issues and Stack Overflow) about image handling in DeepLearningKit, i.e. how to transform back and forth between the bitmap format used in the Deep Learning (convnet) calculation and the UIImage used in the tvOS and iOS DeepLearningKit demo apps (for OS X the issue is still unsolved, since NSImage is slightly different). What is provided is an API for setting and getting pixels on a UIImage; see the tutorial for details.
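To make the idea concrete, here is a minimal sketch of reading the raw RGBA pixels of a UIImage with Core Graphics – the function name and approach are illustrative assumptions, not the actual DeepLearningKit API (see the tutorial for that):

```swift
import UIKit

// Illustrative sketch: draw the UIImage into a bitmap context so its
// RGBA bytes land in a plain Swift array, ready to be mapped to the
// [Float] layout a convolutional network expects.
func rgbaPixels(of image: UIImage) -> [UInt8]? {
    guard let cgImage = image.cgImage else { return nil }
    let width = cgImage.width
    let height = cgImage.height
    var pixels = [UInt8](repeating: 0, count: width * height * 4)
    guard let context = CGContext(
        data: &pixels,
        width: width,
        height: height,
        bitsPerComponent: 8,
        bytesPerRow: width * 4,
        space: CGColorSpaceCreateDeviceRGB(),
        bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
    else { return nil }
    // Drawing fills `pixels` with one RGBA quadruple per pixel.
    context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
    return pixels
}
```

Going the other way (Float array back to UIImage) is the mirror image: fill a byte array and create a CGImage from the same kind of bitmap context.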


[LINK TO TUTORIAL – Image Handling in DeepLearningKit]

Best regards,
Amund Tveit

Video Tutorial – Using the DeepLearningKit example OS X app

This video shows how to use the standalone (very simple) example OS X app that comes with DeepLearningKit. The code can be found (as part of the main repository) at github.com/DeepLearningKit/DeepLearningKit/tree/master/OSXDeepLearningKitApp/OSXDeepLearningKitApp

ViewController.swift


import Cocoa

class ViewController: NSViewController {
    var deepNetwork: DeepNetwork!

    override func viewDidLoad() {
        super.viewDidLoad()
    }

    override func viewDidAppear() {
        deepNetwork = DeepNetwork()

        // conv1.json contains a CIFAR-10 image of a cat
        let conv1Layer = deepNetwork.loadJSONFile("conv1")!
        let image: [Float] = conv1Layer["input"] as! [Float]

        // The original listing was truncated below; a plausible completion
        // copies the loaded image into the freshly allocated buffer:
        var randomimage = createFloatNumbersArray(image.count)
        for i in 0..<image.count {
            randomimage[i] = image[i]
        }
    }
}

Video Tutorial – Using the DeepLearningKit example iOS app

This video shows how to use the standalone (very simple) example iOS app that comes with DeepLearningKit. The code can be found (as part of the main repository) at github.com/DeepLearningKit/DeepLearningKit/tree/master/iOSDeepLearningKitApp/iOSDeepLearningKitApp

ViewController.swift


import UIKit

class ViewController: UIViewController {
    var deepNetwork: DeepNetwork!

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view, typically from a nib.
    }

    override func viewDidAppear(animated: Bool) {
        deepNetwork = DeepNetwork()

        // conv1.json contains a CIFAR-10 image of a cat
        let conv1Layer = deepNetwork.loadJSONFile("conv1")!
        let image: [Float] = conv1Layer["input"] as! [Float]

        // The original listing was truncated below; a plausible completion
        // copies the loaded image into the freshly allocated buffer:
        var randomimage = createFloatNumbersArray(image.count)
        for i in 0..<image.count {
            randomimage[i] = image[i]
        }
    }
}

Tutorial – Using DeepLearningKit with iOS for iPhone and iPad

  1. Clone DeepLearningKit:  git clone https://github.com/DeepLearningKit/DeepLearningKit.git

Screen Shot 2015-12-28 at 14.48.05

2. Clone demo app: git clone https://github.com/DeepLearningKit/DeepLearningKitForiOSDemoApp.git

Screen Shot 2015-12-28 at 14.48.37

3. Open DeepLearningKitForiOSDemoApp.xcodeproj in Xcode (e.g. from Finder)

Screen Shot 2015-12-28 at 14.49.20

4. Have a look at ViewController.swift – notice that import DeepLearningKitForiOS gives an error (in red)

Screen Shot 2015-12-28 at 14.50.17

Screen Shot 2015-12-28 at 14.50.44

5. Open Finder and Drag DeepLearningForiOS.xcodeproj over to the demo app in xcode

Screen Shot 2015-12-28 at 14.52.15

6. Highlighted line below shows how the framework DeepLearningForiOS.xcodeproj can be included

Screen Shot 2015-12-28 at 14.52.55

7. Click on app settings (highlighted line in the left part of Xcode) and go to the General tab on the right

Screen Shot 2015-12-28 at 15.48.44

8. Scroll down to embedded binaries in General tab and add DeepLearningKitforiOS.frameworkiOS

Screen Shot 2015-12-28 at 15.21.28

9. Afterwards the result should look something like this – embedded binaries at the bottom right

Screen Shot 2015-12-28 at 15.21.41

10. Drag the Shaders.metal file from DeepLearningKitForiOS into top project

(not quite sure why this needs to be done, but anyway)

Screen Shot 2015-12-28 at 15.56.09

Screen Shot 2015-12-28 at 15.56.18

11. Connect an iPhone (e.g. iPhone 6S) to your Mac, compile and run; you should get something like this

Screen Shot 2015-12-28 at 15.32.35

DeepLearningKit – Deep Learning for iOS (tested on iPhone 6S), tvOS and OS X developed in Metal and Swift

In early October we purchased the new iPhone 6S and had high expectations of its GPU performance. One of the reasons for our expectations was a blog post by Simon Gladman where he wrote that the iPhone 6S had 3 times the GPU performance of the iPhone 6; this was also reported by TheNextWeb.

In our GPU programming case (developing Deep Learning algorithms with Metal), going from the iPhone 5S to the iPhone 6S gave an order of magnitude improvement in performance! Calculation time to run through a 20-layer deep convolutional neural network model for image recognition went from approximately 2 seconds to less than 100 milliseconds. Note that 100 milliseconds, or in other words 0.1 seconds, is what Jakob Nielsen stated is one of the 3 important response time limits – the one at which a user feels a system reacts instantaneously.

This blog post gives a brief overview of DeepLearningKit – a Deep Learning Kit for iOS, OS X and tvOS. It is developed in Metal in order to make efficient use of the GPU, and in Swift for setting up Metal as well as loading data and integrating with apps.

1. DeepLearningKit – GPU Accelerated Deep Learning for Apple’s iOS, tvOS and OS X with Metal and Swift

DeepLearningKit currently implements Convolutional Neural Networks in Metal (parallelized for the GPU); deep learning layer operators include convolution, pooling and ReLU layers.
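To give a flavor of what such a layer operator looks like, here is a minimal ReLU kernel sketched in the Metal Shading Language (illustrative only, not the actual Shaders.metal source):

```metal
#include <metal_stdlib>
using namespace metal;

// Each GPU thread applies max(0, x) to one element of the input buffer.
kernel void relu(const device float *input  [[ buffer(0) ]],
                 device float *output       [[ buffer(1) ]],
                 uint id                    [[ thread_position_in_grid ]])
{
    output[id] = max(0.0f, input[id]);
}
```

Convolution and pooling kernels follow the same pattern, with each thread computing one output element from a window of inputs.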

On OS X, DeepLearningKit can easily be adapted to utilize several GPUs if present, e.g. to run the same deep learning model on several GPUs to increase throughput, or to run different models in order to increase the number of classes to predict over.


let GPUs = MTLCopyAllDevices()
print(GPUs)

This gave the following on a (2012) Retina MacBook Pro:

Screen Shot 2015-10-14 at 13.16.49

An interesting feature on iOS (and most likely on tvOS, but not yet tested in our case) is that one can share memory between the GPU and CPU (less copying of data).
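Here is a sketch of how that shared memory can be used from Swift – the `.storageModeShared` buffer option is real Metal API, but the surrounding usage is an illustrative assumption, not DeepLearningKit code:

```swift
import Metal

// A buffer created with .storageModeShared is visible to both CPU and GPU
// on iOS, so network input can be handed to a kernel without an extra copy.
if let device = MTLCreateSystemDefaultDevice() {
    var input: [Float] = [0.1, 0.2, 0.3, 0.4]
    let sharedBuffer = device.makeBuffer(
        bytes: &input,
        length: input.count * MemoryLayout<Float>.stride,
        options: .storageModeShared)
    // A compute kernel can read sharedBuffer directly; after the command
    // buffer completes, the CPU can inspect results via contents().
    _ = sharedBuffer
}
```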

2. App Store for Deep Learning Models

Given the immense asymmetry between the time taken to train a Deep Learning model and the time needed to use it (e.g. to do image recognition), it makes perfect sense to build a large repository of pre-trained models that can be (re)used many times. Since there are several popular tools used to train Deep Learning models (e.g. Caffe, Torch, Theano, DeepLearning4J, PyLearn and Nervana), we’re working on supporting import of pre-trained models from those tools into an “app store” for deep learning models (so far we have primarily been working with Caffe CNN models).

Screen Shot 2015-10-14 at 10.05.24

The figure above illustrates how much energy is required to train a Deep Network (per night); some Deep Learning models can take weeks of training on GPUs like the Nvidia Titan X – in other words, piles of wood worth of energy. Using a trained model is quite different, since it requires less energy than lighting a match.

Screen Shot 2015-10-14 at 10.51.52

[Figure: energy needed to use a CNN]

Deep Learning models also typically have a (low) limit on the number of classes they can predict per model (e.g. the ImageNet competition has 1000 classes, CIFAR-100 has 100 classes and CIFAR-10 has 10 classes). This means that in order to create real-life applications one needs to intelligently switch between several Deep Learning models (and load them very rapidly from SSD into GPU-accessible RAM), or, if there is enough capacity, run several models in parallel on the same GPU. Selecting an appropriate Deep Learning model (i.e. the one most likely to work well in a given context) is to our knowledge not a well-studied field of research; in some ways it resembles the meta- or universal-search problem found in web search (e.g. cross-model ranking), but latency plays an even bigger part in the mobile on-device case (there isn’t time to run many models).
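As a sketch of the model-switching idea (a hypothetical helper, not part of DeepLearningKit – only the DeepNetwork type and loadJSONFile appear in the actual code):

```swift
// Keep already-loaded networks cached so switching between several
// pre-trained models only pays the SSD-to-RAM loading cost once per model.
var loadedModels = [String: DeepNetwork]()

func network(forModel name: String) -> DeepNetwork? {
    if let cached = loadedModels[name] {
        return cached                       // already resident in memory
    }
    let net = DeepNetwork()
    guard net.loadJSONFile(name) != nil else {
        return nil                          // model file missing or invalid
    }
    loadedModels[name] = net
    return net
}
```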

With state-of-the-art compression techniques for convolutional neural networks, the (groundbreaking) AlexNet model from 2012 can be compressed from 240 MB to 6.9 MB. This means that one could theoretically fit more than eighteen thousand AlexNet models on a 128 GB mobile device like the iPhone 6!
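As a quick back-of-the-envelope check of that figure (plain arithmetic, sizes in MB):

```swift
let deviceStorageMB = 128.0 * 1024.0            // 128 GB device
let compressedAlexNetMB = 6.9                   // compressed AlexNet size
let modelsThatFit = Int(deviceStorageMB / compressedAlexNetMB)
print(modelsThatFit)   // roughly 19 thousand, i.e. more than eighteen thousand
```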

Conclusion

Deep Learning on iOS, tvOS and OS X devices is still in its infancy, and the open source DeepLearningKit hopes to play a part in it. Check out our DeepLearningKit tutorial at https://deeplearningkit.org/tutorials-for-ios-os-x-and-tvos/tutorial-using-deeplearningkit-with-ios-for-iphone-and-ipad/

DeepLearningKit – Open Source Deep Learning Framework for Apple’s iOS, OS X and tvOS

Happy to announce that an (early) version of DeepLearningKit is available at github.com/DeepLearningKit/DeepLearningKit.

FAQ

0. What does DeepLearningKit do?

It currently allows using deep convolutional neural network models trained in Caffe on Apple’s iOS, OS X and tvOS (transformed from protobuffers into JSON with the tool at https://github.com/DeepLearningKit/caffemodel2json – a tutorial about this will come later).

1. Open Source Licence?

Apache 2.0

2. How to get started?

Have a look at Tutorial – Using DeepLearningKit with iOS for iPhone and iPad – it is about using a pre-trained CIFAR-10 Network in Network example.

3. What is DeepLearningKit developed in?

It is developed in Metal (for GPU acceleration) and Swift (for app integration). I believe DeepLearningKit is the first (public) Deep Learning tool that uses the Metal compute API for GPUs (Metal is Apple’s recommended way to program GPUs).

4. More documentation

More tutorials and a paper describing DeepLearningKit will be made available on https://deeplearningkit.org (+ http://arxiv.org for the paper)

5. I love developing for Apple’s [iOS,OS X or tvOS] and would like to contribute to this project, how?

Here are a few thoughts:

  1. Fork github.com/deeplearningkit repo(s), play with it/them and provide feedback or fixes.
  2. Create apps that use DeepLearningKit (disclaimer: still very early version) and tell us about them.
  3. Try (and perhaps adapt) different types of deep neural networks to DeepLearningKit, e.g.
    1. Microsoft Research’s ImageNet 2015 winning approach described in the paper Deep Residual Learning for Image Recognition, or
    2. DeepMind’s (Google) AI for Atari games described in the papers Human-level control through deep reinforcement learning, Deep Reinforcement Learning with Double Q-Learning and Playing Atari with Deep Reinforcement Learning
    3. Other types of Deep Learning, check out http://DeepLearning.University for inspiration
  4. Performance Optimization wrt Metal (GPU): Metal is a very new API (in particular for non-graphics GPGPU processing), and there are probably ways to improve usage of it.
  5. Performance Optimization wrt algorithms (e.g. shader functions for convolution): see our paper for the roadmap
  6. Importers: develop model importers (in Swift) for convolutional neural networks from tools other than Caffe, e.g. Torch, TensorFlow, Theano, Nervana Systems, DeepLearning4J or PyLearn. HDF5 is an interesting format.
  7. Training Support: our goal was to primarily support using already trained Deep Learning models (since in the long run people will probably not train their own DL models but rather pick them from a Deep Learning Model store or similar, see our paper for why), but it would still be great to train convolutional neural networks in DeepLearningKit itself.
  8. Image Handling Support: DeepLearningKit is missing basic conversion from e.g. UIImage to RGB (the example network supports 32x32x3 CIFAR RGB Image Format, but has no conversion from UIImage to it). Check out e.g. Drawing Images From Pixel Data – In Swift and Image Processing in iOS Part 1: Raw Bitmap Modification for inspiration.

6. Is DeepLearningKit production ready for my mission critical app?

Most likely not, but that doesn’t stop you from testing it out.

7. DeepLearningKit reminds me more about CUDA/GPU Libraries such as Nvidia’s cuDNN or  Facebook’s fbCunn rather than larger tools such as Torch, TensorFlow and Caffe, is that right?

You’re right – DeepLearningKit can roughly be seen as an early “metalDNN” with Swift packaging for loading and running models. (It currently doesn’t support Fast Fourier Transform based convolution such as Facebook’s fbcunn.)

8. Who Developed and Open Sourced DeepLearningKit?

DeepLearningKit was developed and open sourced by the company Memkite – check out the About page for details.

Best regards,

Amund Tveit