Image Classification Method Based on Deep Compressed Convolutional Neural Networks


Image Classification Method Based on Deep Compressed Convolutional Neural Networks (thesis, 12,000 words, with translated foreign-language reference)
Abstract: This thesis proposes a deep compressed convolutional network method based on quantized convolutional neural networks and develops an Android application that performs image classification on a mobile phone. By quantizing the parameters of both the fully-connected layers and the convolutional layers, the method compresses a deep convolutional neural network model while accelerating its computation: quantizing the fully-connected layers compresses the model, quantizing the convolutional layers speeds up inference, and introducing error correction narrows the accuracy gap between the compressed model and the original one. Specifically, the weight matrices of the fully-connected and convolutional layers are split into sets of subspaces, and a codebook and an indicator matrix are learned for each subspace; the optimal codebook and indicator matrix are obtained by K-means clustering. Error correction is introduced into the parameter quantization: the estimation error of each layer's response is minimized directly to reduce the error accumulated when multiple layers are quantized during training, and the codebook and indicator matrix are updated by coordinate descent to limit the drop in accuracy. By adopting a training strategy that takes the estimation error of previous layers into account, the accumulated error from quantizing multiple layers is further reduced, so the method accelerates computation at test time while effectively compressing the storage the model requires. Finally, the AlexNet network is compressed and single-image classification is implemented on an Android phone, where classifying one image takes about 1.9 s at best. Compared with the original model on the ILSVRC-12 dataset, the compressed model loses only 1.1% in accuracy. In addition, the compressed model is only 17.6 MB, versus 264.74 MB for the original model, a compression ratio of about 15×.
Keywords: convolutional neural network; quantization; compression; image classification
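
To make the quantization step concrete, the following is a minimal sketch of how a weight matrix can be split into subspaces and quantized with K-means, yielding a small codebook and an index (indicator) vector per subspace, in the spirit of the method described above. It is an illustration only, not the thesis code: the function names quantize_weights and dequantize_weights, the matrix shapes, and the use of scikit-learn's KMeans are assumptions made for demonstration.

import numpy as np
from sklearn.cluster import KMeans

def quantize_weights(W, num_subspaces=4, codebook_size=16):
    """Illustrative product-style quantization of a weight matrix W (out_dim x in_dim).

    The input dimension is split into `num_subspaces` groups; within each group,
    every output unit's sub-vector is clustered with K-means. Each subspace then
    keeps a small codebook (cluster centers) and an index vector (nearest center
    per output unit), so W is stored as codebooks + indices instead of full floats.
    """
    out_dim, in_dim = W.shape
    assert in_dim % num_subspaces == 0
    sub_dim = in_dim // num_subspaces

    codebooks, indices = [], []
    for s in range(num_subspaces):
        # Sub-matrix of shape (out_dim, sub_dim): one slice of every row of W.
        W_s = W[:, s * sub_dim:(s + 1) * sub_dim]
        km = KMeans(n_clusters=codebook_size, n_init=10).fit(W_s)
        codebooks.append(km.cluster_centers_)        # (codebook_size, sub_dim)
        indices.append(km.labels_.astype(np.uint8))  # (out_dim,)
    return codebooks, indices

def dequantize_weights(codebooks, indices):
    """Reassemble an approximation of W from the codebooks and index vectors."""
    return np.hstack([cb[idx] for cb, idx in zip(codebooks, indices)])

# Toy usage: quantize a random 256x512 weight matrix and check the reconstruction error.
W = np.random.randn(256, 512).astype(np.float32)
codebooks, indices = quantize_weights(W, num_subspaces=4, codebook_size=16)
W_hat = dequantize_weights(codebooks, indices)
print("relative reconstruction error:", np.linalg.norm(W - W_hat) / np.linalg.norm(W))

Storing the codebooks and the byte-sized index vectors in place of the full floating-point matrix is what produces the compression; the error-correction and coordinate-descent refinement described in the abstract are omitted from this sketch.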
 
Image Classification Method Based on Deep Compressed Convolutional Neural Networks
Abstract: We propose a deep compressed convolutional network method based on quantized convolutional neural networks and develop an Android application that performs image classification. The method compresses deep convolutional neural networks and speeds up their computation at the same time by quantizing the parameters of both the fully-connected layers and the convolutional layers. Quantizing the fully-connected layers compresses the model, quantizing the convolutional layers accelerates inference, and introducing error correction into the quantization reduces the loss of accuracy. More concretely, the weight matrices of the fully-connected and convolutional layers are split into several subspaces; a codebook and an indicator matrix are learned for each subspace, and the optimization is solved by K-means clustering within each subspace to obtain the optimal codebook and indicator matrix. For error correction, we directly minimize the estimation error of each layer's response by coordinate descent, which reduces the error accumulated when multiple layers are quantized during training. We also take the estimation error of previous layers into account in the training phase to further reduce the accumulated error, so the method substantially improves both the model's computation time and its size. Finally, we apply the method to AlexNet and develop an Android application that classifies a single image in about 1.9 s, and a comparison with the original AlexNet model on the ILSVRC-12 benchmark shows only a 1.1% drop in accuracy. In addition, the compressed model is only 17.6 MB, versus 264.74 MB for the original model, a compression ratio of about 15×.
Keywords: CNN; quantization; compression; image classification
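
The speed-up described above comes from computing each layer's response from the codebooks rather than the full weights: for each subspace, the inner products between the input sub-vector and all codewords are computed once, and every output unit then only looks up and sums the value of its assigned codeword. The sketch below illustrates this approximate fully-connected forward pass; it reuses the hypothetical quantize_weights output from the previous sketch and is not the thesis implementation, and the error-correction step (minimizing the response error with coordinate descent) is omitted.

import numpy as np

def quantized_fc_forward(x, codebooks, indices):
    """Approximate y = W @ x using per-subspace lookup tables.

    x         : input vector of length in_dim
    codebooks : list of (codebook_size, sub_dim) arrays, one per subspace
    indices   : list of (out_dim,) index vectors, one per subspace
    """
    out_dim = indices[0].shape[0]
    sub_dim = codebooks[0].shape[1]
    y = np.zeros(out_dim, dtype=np.float32)
    for s, (cb, idx) in enumerate(zip(codebooks, indices)):
        x_s = x[s * sub_dim:(s + 1) * sub_dim]
        # Lookup table: inner product of this input sub-vector with every codeword,
        # computed once per subspace instead of once per output unit.
        table = cb @ x_s          # shape (codebook_size,)
        y += table[idx]           # gather the precomputed products for all outputs
    return y

# Toy check against the exact product, using W, codebooks, indices from the earlier sketch:
# x = np.random.randn(512).astype(np.float32)
# y_exact = W @ x
# y_approx = quantized_fc_forward(x, codebooks, indices)

Because the lookup tables are shared by all output units, the per-layer cost drops from one multiply-add per weight to one table lookup per output unit and subspace, which is the source of the test-time acceleration claimed in the abstract.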
 

Image Classification Method Based on Deep Compressed Convolutional Neural Networks


Contents
1. Introduction
1.1 Background and Significance
1.2 Overview of Convolutional Neural Networks
1.3 Purpose and Current State of Research on Compressing Deep Convolutional Networks
1.4 Main Research Work
1.5 Thesis Organization
2. Preliminaries
2.1 Neural Network Models
2.1.1 The Neuron Model
2.1.2 Neural Networks
2.2 Convolutional Neural Networks
2.3 Coordinate Descent
2.4 The K-means Algorithm
3. Compressing Deep Convolutional Neural Networks
3.1 Quantizing the Fully-Connected Layers
3.2 Quantizing the Convolutional Layers
3.3 Quantization with Error Correction
3.3.1 Error Correction for the Fully-Connected Layers
3.3.2 Error Correction for the Convolutional Layers
3.3.3 Multi-Layer Error Correction
3.3.4 Computational Complexity
4. Implementing the Deep Compressed Convolutional Network on a Mobile Phone
4.1 Software Environment Setup
4.2 Compiling and Running the Project Code
4.2.1 JNI and Code Compilation
4.2.2 Loading the Model File and Saving Images
4.2.3 Execution
4.3 Experimental Results
4.3.1 AlexNet
4.3.2 The ILSVRC-12 Dataset
4.3.3 Classification of a Single Image
5. Conclusion and Future Work
References
Acknowledgements