Running a Complete Convolutional Neural Network on the Altera SoC DE1 Board

This post walks through a demo that uses a convolutional neural network (CNN) to classify images automatically.

Advantages of neural networks: they learn knowledge directly from data, without requiring hand-crafted models or complicated algorithms.

Disadvantages: they are supervised learners and need large amounts of labeled data; with too little training data they easily overfit and generalize poorly, and with very large parameter counts training converges slowly (possibly taking months or even years).

To overcome these drawbacks, people have tapped all kinds of computing resources, including multi-core CPUs, GPUs, DSPs, ASICs, FPGAs, and even analog circuits.

Implementing a CNN on a CPU is convenient for debugging, but the performance is poor, so most people choose faster GPU implementations instead. Most open-source frameworks today support GPUs, such as Caffe from UC Berkeley and Google's convnet.

In February 2015 Microsoft announced a CNN accelerator built on a Stratix V, which processes more than 2,300 CIFAR-10 images per second.

Here we also use the CIFAR-10 image data and run a CNN demo on a Cyclone V board. Because the chip has very limited compute resources (only some 80 DSP blocks), implementing the full network in the FPGA is unrealistic; instead, only the basic compute units are implemented in the FPGA, and the HPS schedules them. Performance is therefore not expected to be high; the results are given later.

So what do CIFAR-10 images look like? Here is a picture first!

[Figure: CIFAR-10 classes and sample images]

Interested readers can download the dataset from the official CIFAR-10 website. As mentioned above, a CNN is a supervised learning system and needs a large amount of labeled data. CIFAR-10 is exactly such an open dataset: it provides 60,000 images divided into 10 classes (shown on the left of the figure above), with 6,000 images per class. The dataset is not particularly large, which makes it suitable for an embedded platform. A much bigger dataset is ImageNet-1000 (see the ImageNet website), with more than 1.2 million high-resolution images; after downloading it to my hard drive it took up nearly 200 GB (I had to reluctantly delete my rmvb and avi files to make room)!

Some readers may ask: do we have to use these datasets? Can the photos on our smartphones be used to train a CNN?

Yes, they can. The problem is that such a dataset is very "uneven" and the sampling is not "complete", so the trained model will only be a biased estimate of the true model. The two datasets above, on the other hand, have stood up to all kinds of scrutiny and are recognized by the research community as high-quality datasets; the annual ILSVRC competition uses them.

Having covered the data, let's look at the model. Here is a classic CNN architecture:

[Figure: a classic CNN architecture (LeNet-5)]

This was the world's first practical application of a CNN: automatic recognition of handwritten digits. In this model, the input is a 32 x 32 two-dimensional image that passes through convolution layers, subsampling layers (also called pooling), and full-connection layers (also called inner product) to produce a probability distribution; the element with the highest probability is taken as the model's classification of the input image. So to implement a CNN we only need three basic algorithms: convolution, subsampling, and matrix multiplication. In addition, each layer's output can optionally pass through a nonlinear transform; the common choices are ReLU and Sigmoid, and ReLU is simpler to compute and more widely used.
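To make the cost difference concrete, here is a minimal CPU-side sketch of the two nonlinearities (the function names are illustrative and not part of the demo code): ReLU needs only a comparison per element, while Sigmoid needs an exponential and a division.

#include <math.h>
#include <stddef.h>

/* ReLU in place: f(x) = max(0, x). Only a comparison per element. */
static void relu_inplace(float *x, size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (x[i] < 0.0f)
            x[i] = 0.0f;
}

/* Sigmoid in place: f(x) = 1 / (1 + exp(-x)). Needs an exponential and a
 * division per element, which is why ReLU is cheaper to implement. */
static void sigmoid_inplace(float *x, size_t n)
{
    for (size_t i = 0; i < n; i++)
        x[i] = 1.0f / (1.0f + expf(-x[i]));
}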

The Caffe framework ships a model tailored to the cifar10 data, written in prototxt format, and our demo is based on this model. Its contents are as follows:

name: "CIFAR10_quick_test"
input: "data"
input_dim: 1
input_dim: 3
input_dim: 32
input_dim: 32
layers {
  name: "conv1"
  type: CONVOLUTION
  bottom: "data"
  top: "conv1"
  blobs_lr: 1
  blobs_lr: 2
  convolution_param {
    num_output: 32
    pad: 2
    kernel_size: 5
    stride: 1
  }
}
layers {
  name: "pool1"
  type: POOLING
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 3
    stride: 2
  }
}
layers {
  name: "relu1"
  type: RELU
  bottom: "pool1"
  top: "pool1"
}
layers {
  name: "conv2"
  type: CONVOLUTION
  bottom: "pool1"
  top: "conv2"
  blobs_lr: 1
  blobs_lr: 2
  convolution_param {
    num_output: 32
    pad: 2
    kernel_size: 5
    stride: 1
  }
}
layers {
  name: "relu2"
  type: RELU
  bottom: "conv2"
  top: "conv2"
}
layers {
  name: "pool2"
  type: POOLING
  bottom: "conv2"
  top: "pool2"
  pooling_param {
    pool: AVE
    kernel_size: 3
    stride: 2
  }
}
layers {
  name: "conv3"
  type: CONVOLUTION
  bottom: "pool2"
  top: "conv3"
  blobs_lr: 1
  blobs_lr: 2
  convolution_param {
    num_output: 64
    pad: 2
    kernel_size: 5
    stride: 1
  }
}
layers {
  name: "relu3"
  type: RELU
  bottom: "conv3"
  top: "conv3"
}
layers {
  name: "pool3"
  type: POOLING
  bottom: "conv3"
  top: "pool3"
  pooling_param {
    pool: AVE
    kernel_size: 3
    stride: 2
  }
}
layers {
  name: "ip1"
  type: INNER_PRODUCT
  bottom: "pool3"
  top: "ip1"
  blobs_lr: 1
  blobs_lr: 2
  inner_product_param {
    num_output: 64
  }
}
layers {
  name: "ip2"
  type: INNER_PRODUCT
  bottom: "ip1"
  top: "ip2"
  blobs_lr: 1
  blobs_lr: 2
  inner_product_param {
    num_output: 10
  }
}
layers {
  name: "prob"
  type: SOFTMAX
  bottom: "ip2"
  top: "prob"
}


As you can see, the model has three convolution layers (conv1, conv2, conv3), each followed by a pooling layer (pool1, pool2, pool3), and then two fully connected layers (ip1, ip2). The final layer, prob, is a SOFTMAX classification layer that computes the probability distribution; we do not need to worry about it here.

The three charts below show, for each layer of the CNN model, the number of parameters, the amount of data, and the amount of computation.

[Figures: per-layer parameter count, data volume, and computation of the CNN model]

As the charts show, the convolution layers have few parameters but handle large amounts of data, while the fully connected layers are the opposite: many parameters but little data.

The computation statistics show that conv2 requires the most computation, followed by conv3 and conv1. The fully connected layers require less computation than the convolution layers, but not negligibly little. The remaining layers (pool1, pool2, and the various ReLU stages) involve so little computation that this design does not implement them as OpenCL kernels; they run directly on the CPU.
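The ranking can be checked by hand from the prototxt above. The short sketch below is illustrative only (not part of the demo) and assumes Caffe's output-size formulas, floor-mode for convolution and ceil-mode for pooling; it counts multiply-accumulate operations per layer.

#include <stdio.h>

/* Assumed Caffe output-size formulas: floor for convolution, ceil for pooling. */
static int conv_out(int in, int pad, int k, int stride) { return (in + 2 * pad - k) / stride + 1; }
static int pool_out(int in, int k, int stride)          { return (in - k + stride - 1) / stride + 1; }

int main(void)
{
    int s = 32;                               /* spatial size of the 32x32x3 input   */
    long conv1, conv2, conv3;

    s = conv_out(s, 2, 5, 1);                 /* conv1: 3 -> 32 channels, 5x5, pad 2 */
    conv1 = (long)s * s * 32 * (3 * 5 * 5);
    s = pool_out(s, 3, 2);                    /* pool1: 32 -> 16                     */

    s = conv_out(s, 2, 5, 1);                 /* conv2: 32 -> 32 channels            */
    conv2 = (long)s * s * 32 * (32 * 5 * 5);
    s = pool_out(s, 3, 2);                    /* pool2: 16 -> 8                      */

    s = conv_out(s, 2, 5, 1);                 /* conv3: 32 -> 64 channels            */
    conv3 = (long)s * s * 64 * (32 * 5 * 5);
    s = pool_out(s, 3, 2);                    /* pool3: 8 -> 4, so 4*4*64 = 1024     */

    long ip1 = (long)s * s * 64 * 64;         /* 1024 x 64 fully connected           */
    long ip2 = 64L * 10;                      /* 64 x 10 fully connected             */

    printf("MACs  conv1=%ld  conv2=%ld  conv3=%ld  ip1=%ld  ip2=%ld\n",
           conv1, conv2, conv3, ip1, ip2);
    /* Prints roughly 2.46M, 6.55M, 3.28M, 65.5K, 640: conv2 dominates. */
    return 0;
}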

To summarize, we focus on implementing two algorithms, convolution and matrix multiplication, which correspond to the convolution layers and the fully connected layers respectively.

On the DE1-SoC I used the OpenCL BSP provided by Terasic, which lets you develop for the FPGA in C.

The convolution-layer kernel function is as follows:

__attribute__((num_compute_units(4)))
__kernel
void conv(__global float * a, __global float * b, __global float * c,
          const int M, const int N, const int K)
{
    int gx = get_global_id(0);
    int gy = get_global_id(1);
    float tmp = 0.0f;
    for (int x = 0; x < K; x++)
    {
        for (int y = 0; y < K; y++)
        {
            /* a: input image with row stride M, b: K x K filter */
            tmp += a[(gx + x) * M + (gy + y)] * b[x * K + y];
        }
    }
    /* write the accumulated result; output row stride assumed to be N */
    c[gx * N + gy] = tmp;
}
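For checking results against the FPGA, an equivalent plain-C golden model is handy. The sketch below is illustrative and makes the same layout assumptions as the kernel above (input row stride M, output row stride N, K x K filter), iterating over an H x W output:

/* Plain-C reference of the same single-channel K x K correlation the kernel
 * above computes for one work-item, here looped over every output position. */
static void conv_ref(const float *a, const float *b, float *c,
                     int H, int W, int M, int N, int K)
{
    for (int gx = 0; gx < H; gx++) {
        for (int gy = 0; gy < W; gy++) {
            float tmp = 0.0f;
            for (int x = 0; x < K; x++)
                for (int y = 0; y < K; y++)
                    tmp += a[(gx + x) * M + (gy + y)] * b[x * K + y];
            c[gx * N + gy] = tmp;
        }
    }
}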


The fully connected layers are computed as a matrix multiplication; the kernel function is as follows:

__attribute__((num_compute_units(4)))
__kernel
void gemm(__global float * a, __global float * b, __global float * c,
          const int M, const int N, const int K)
{
    int gx = get_global_id(0);
    int gy = get_global_id(1);
    int sy = get_global_size(1);
    int sx = get_global_size(0);
    int s = sx * sy;
    for (int x = gx; x < M; x += sx)
    {
        for (int y = gy; y < N; y += sy)
        {
            float tmp = 0.0f;
            for (int z = 0; z < K; z++)
            {
                tmp += a[z * M + x] * b[y * K + z];
            }
            c[y * M + x] = tmp;
        }
    }
}
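For reference, a kernel like gemm would be launched from the HPS side with the standard OpenCL host API, roughly as sketched below. The demo's actual host code is not shown in the post, so the queue, kernel, and buffer objects here are assumed to have been created during the usual setup (platform, device, context, clCreateProgramWithBinary on the .aocx file, clCreateKernel):

#include <CL/opencl.h>

/* Minimal sketch: set the gemm arguments and enqueue a 2D NDRange. */
static cl_int run_gemm(cl_command_queue queue, cl_kernel kernel,
                       cl_mem d_a, cl_mem d_b, cl_mem d_c,
                       cl_int M, cl_int N, cl_int K)
{
    cl_int err = CL_SUCCESS;
    err |= clSetKernelArg(kernel, 0, sizeof(cl_mem), &d_a);
    err |= clSetKernelArg(kernel, 1, sizeof(cl_mem), &d_b);
    err |= clSetKernelArg(kernel, 2, sizeof(cl_mem), &d_c);
    err |= clSetKernelArg(kernel, 3, sizeof(cl_int), &M);
    err |= clSetKernelArg(kernel, 4, sizeof(cl_int), &N);
    err |= clSetKernelArg(kernel, 5, sizeof(cl_int), &K);

    /* 2D global range; since the kernel strides by get_global_size(), the
     * global size does not have to equal M x N, it only has to cover it. */
    size_t global[2] = { (size_t)M, (size_t)N };
    err |= clEnqueueNDRangeKernel(queue, kernel, 2, NULL, global, NULL,
                                  0, NULL, NULL);
    err |= clFinish(queue);
    return err;
}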


Compiling the kernel functions requires the Altera SDK for OpenCL; I used version 14.0.0.200 with a two-month license. Compilation is done from the command line with aoc and produces a *.aocx file.

The OpenCL compilation report gives the estimated resource usage:

+--------------------------------------------------------------------+
; Estimated Resource Usage Summary                                   ;
+----------------------------------------+---------------------------+
; Resource                               + Usage                     ;
+----------------------------------------+---------------------------+
; Logic utilization                      ;   83%                     ;
; Dedicated logic registers              ;   46%                     ;
; Memory blocks                          ;   57%                     ;
; DSP blocks                             ;   25%                     ;
+----------------------------------------+---------------------------;


As the report shows, logic and memory resources are heavily used while the DSP blocks are far from exhausted, which means there is still room for optimization.

Compiling the host program requires SoC EDS; I used version 14.0.2.274, also from the command line: running make in the project directory produces the executable cnn.

Copy these two files to the SD card, set up the board as described in the earlier posts, and copy the CNN model and the CIFAR-10 data to the SD card as well. Power up the board, mount the SD card at /mnt, and run cnn; the output is as follows:

Please input the number of images(1~100):100
Loading data...OK!
Constructing CNN...OK!
Begin calculation...Elapsed Time = 141.861 s.
Real Label = 3(cat), Calc Label = 3(cat), error count = 0
Real Label = 8(ship), Calc Label = 8(ship), error count = 0
Real Label = 8(ship), Calc Label = 8(ship), error count = 0
Real Label = 0(airplane), Calc Label = 0(airplane), error count = 0
Real Label = 6(frog), Calc Label = 6(frog), error count = 0
Real Label = 6(frog), Calc Label = 6(frog), error count = 0
Real Label = 1(automobile), Calc Label = 1(automobile), error count = 0
Real Label = 6(frog), Calc Label = 6(frog), error count = 0
Real Label = 3(cat), Calc Label = 3(cat), error count = 0
Real Label = 1(automobile), Calc Label = 1(automobile), error count = 0
Real Label = 0(airplane), Calc Label = 0(airplane), error count = 0
Real Label = 9(truck), Calc Label = 9(truck), error count = 0
Real Label = 5(dog), Calc Label = 5(dog), error count = 0
Real Label = 7(horse), Calc Label = 7(horse), error count = 0
Real Label = 9(truck), Calc Label = 9(truck), error count = 0
Real Label = 8(ship), Calc Label = 8(ship), error count = 0
Real Label = 5(dog), Calc Label = 5(dog), error count = 0
Real Label = 7(horse), Calc Label = 7(horse), error count = 0
Real Label = 8(ship), Calc Label = 8(ship), error count = 0
Real Label = 6(frog), Calc Label = 6(frog), error count = 0
Real Label = 7(horse), Calc Label = 7(horse), error count = 0
Real Label = 0(airplane), Calc Label = 2(bird), error count = 1
Real Label = 4(deer), Calc Label = 4(deer), error count = 1
Real Label = 9(truck), Calc Label = 9(truck), error count = 1
Real Label = 5(dog), Calc Label = 4(deer), error count = 2
Real Label = 2(bird), Calc Label = 3(cat), error count = 3
Real Label = 4(deer), Calc Label = 4(deer), error count = 3
Real Label = 0(airplane), Calc Label = 0(airplane), error count = 3
Real Label = 9(truck), Calc Label = 9(truck), error count = 3
Real Label = 6(frog), Calc Label = 6(frog), error count = 3
Real Label = 6(frog), Calc Label = 6(frog), error count = 3
Real Label = 5(dog), Calc Label = 5(dog), error count = 3
Real Label = 4(deer), Calc Label = 4(deer), error count = 3
Real Label = 5(dog), Calc Label = 5(dog), error count = 3
Real Label = 9(truck), Calc Label = 9(truck), error count = 3
Real Label = 2(bird), Calc Label = 3(cat), error count = 4
Real Label = 4(deer), Calc Label = 7(horse), error count = 5
Real Label = 1(automobile), Calc Label = 9(truck), error count = 6
Real Label = 9(truck), Calc Label = 9(truck), error count = 6
Real Label = 5(dog), Calc Label = 5(dog), error count = 6
Real Label = 4(deer), Calc Label = 4(deer), error count = 6
Real Label = 6(frog), Calc Label = 6(frog), error count = 6
Real Label = 5(dog), Calc Label = 5(dog), error count = 6
Real Label = 6(frog), Calc Label = 6(frog), error count = 6
Real Label = 0(airplane), Calc Label = 0(airplane), error count = 6
Real Label = 9(truck), Calc Label = 9(truck), error count = 6
Real Label = 3(cat), Calc Label = 5(dog), error count = 7
Real Label = 9(truck), Calc Label = 9(truck), error count = 7
Real Label = 7(horse), Calc Label = 7(horse), error count = 7
Real Label = 6(frog), Calc Label = 6(frog), error count = 7
Real Label = 9(truck), Calc Label = 9(truck), error count = 7
Real Label = 8(ship), Calc Label = 8(ship), error count = 7
Real Label = 0(airplane), Calc Label = 2(bird), error count = 8
Real Label = 3(cat), Calc Label = 3(cat), error count = 8
Real Label = 8(ship), Calc Label = 8(ship), error count = 8
Real Label = 8(ship), Calc Label = 8(ship), error count = 8
Real Label = 7(horse), Calc Label = 7(horse), error count = 8
Real Label = 7(horse), Calc Label = 7(horse), error count = 8
Real Label = 4(deer), Calc Label = 3(cat), error count = 9
Real Label = 6(frog), Calc Label = 3(cat), error count = 10
Real Label = 7(horse), Calc Label = 7(horse), error count = 10
Real Label = 3(cat), Calc Label = 5(dog), error count = 11
Real Label = 6(frog), Calc Label = 6(frog), error count = 11
Real Label = 3(cat), Calc Label = 3(cat), error count = 11
Real Label = 6(frog), Calc Label = 6(frog), error count = 11
Real Label = 2(bird), Calc Label = 2(bird), error count = 11
Real Label = 1(automobile), Calc Label = 1(automobile), error count = 11
Real Label = 2(bird), Calc Label = 2(bird), error count = 11
Real Label = 3(cat), Calc Label = 3(cat), error count = 11
Real Label = 7(horse), Calc Label = 9(truck), error count = 12
Real Label = 2(bird), Calc Label = 2(bird), error count = 12
Real Label = 6(frog), Calc Label = 6(frog), error count = 12
Real Label = 8(ship), Calc Label = 8(ship), error count = 12
Real Label = 8(ship), Calc Label = 8(ship), error count = 12
Real Label = 0(airplane), Calc Label = 0(airplane), error count = 12
Real Label = 2(bird), Calc Label = 2(bird), error count = 12
Real Label = 9(truck), Calc Label = 0(airplane), error count = 13
Real Label = 3(cat), Calc Label = 3(cat), error count = 13
Real Label = 3(cat), Calc Label = 2(bird), error count = 14
Real Label = 8(ship), Calc Label = 8(ship), error count = 14
Real Label = 8(ship), Calc Label = 8(ship), error count = 14
Real Label = 1(automobile), Calc Label = 1(automobile), error count = 14
Real Label = 1(automobile), Calc Label = 1(automobile), error count = 14
Real Label = 7(horse), Calc Label = 7(horse), error count = 14
Real Label = 2(bird), Calc Label = 2(bird), error count = 14
Real Label = 5(dog), Calc Label = 7(horse), error count = 15
Real Label = 2(bird), Calc Label = 2(bird), error count = 15
Real Label = 7(horse), Calc Label = 7(horse), error count = 15
Real Label = 8(ship), Calc Label = 8(ship), error count = 15
Real Label = 9(truck), Calc Label = 9(truck), error count = 15
Real Label = 0(airplane), Calc Label = 0(airplane), error count = 15
Real Label = 3(cat), Calc Label = 4(deer), error count = 16
Real Label = 8(ship), Calc Label = 8(ship), error count = 16
Real Label = 6(frog), Calc Label = 6(frog), error count = 16
Real Label = 4(deer), Calc Label = 4(deer), error count = 16
Real Label = 6(frog), Calc Label = 6(frog), error count = 16
Real Label = 6(frog), Calc Label = 6(frog), error count = 16
Real Label = 0(airplane), Calc Label = 2(bird), error count = 17
Real Label = 0(airplane), Calc Label = 0(airplane), error count = 17
Real Label = 7(horse), Calc Label = 7(horse), error count = 17
Classify Score = 83 %.
The execution flow is as follows. First you enter the number of test samples (1 to 100); because the SDRAM on the FPGA side of the DE1 board is small and cannot hold the full test set (10,000 images), at most 100 images are loaded at a time. The data is then loaded into HPS memory and the CNN model is constructed, which also performs the OpenCL initialization. Once construction is done, the input images are pushed through the CNN one by one; each classification result is compared with its label, the number of misclassifications is accumulated, and the classification accuracy is computed.
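A minimal self-contained sketch of the last two steps, picking the most probable class and computing the score, is shown below. The probability values are made up for illustration, while the 100-image / 17-error score matches the run above; none of this is the demo's actual host code.

#include <stdio.h>

/* Pick the index of the largest of n probabilities (the predicted class). */
static int argmax(const float *p, int n)
{
    int best = 0;
    for (int i = 1; i < n; i++)
        if (p[i] > p[best])
            best = i;
    return best;
}

int main(void)
{
    /* Toy stand-in for one output of the "prob" layer: class 3 (cat) wins. */
    const float prob[10] = { 0.01f, 0.02f, 0.05f, 0.60f, 0.05f,
                             0.10f, 0.05f, 0.04f, 0.04f, 0.04f };
    printf("Calc Label = %d\n", argmax(prob, 10));

    /* Score computation as in the run above: 100 images, 17 errors -> 83 %. */
    int num_images = 100, errors = 17;
    printf("Classify Score = %d %%.\n", (num_images - errors) * 100 / num_images);
    return 0;
}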

In this test the classification accuracy reached 83%, which matches the result obtained with Caffe.

From these tests we can draw the following conclusions:

(1) OpenCL makes it easy to port algorithms written in a high-level language;

(2) when porting a CNN, the actual hardware must be taken into account and the model and data tailored to it;

(3) the Cyclone V has limited logic resources (85K logic elements, of which the OpenCL kernels used 83%). To push the computation speed further, one option is to move to higher-performance devices (such as Stratix V or Arria 10), and another is to build the compute system yourself in RTL.


The text and images above were originally written by EEWORLD forum member zhaoyongke; many thanks to him.


