論文CTPN:使用Connectionist文本提議網絡檢測自然圖像中的文本

原文:Detecting Text in Natural Image with Connectionist Text Proposal Network(arXiv:1609.03605)

Abstract. We propose a novel Connectionist Text Proposal Network (CTPN) that accurately localizes text lines in natural image. The CTPN detects a text line in a sequence of fine-scale text proposals directly in convolutional feature maps. We develop a vertical anchor mechanism that jointly predicts location and text/non-text score of each fixed-width proposal, considerably improving localization accuracy. The sequential proposals are naturally connected by a recurrent neural network, which is seamlessly incorporated into the convolutional network, resulting in an end-to-end trainable model. This allows the CTPN to explore rich context information of image, making it powerful to detect extremely ambiguous text. The CTPN works reliably on multi-scale and multi-language text without further post-processing, departing from previous bottom-up methods requiring multi-step post filtering. It achieves 0.88 and 0.61 F-measure on the ICDAR 2013 and 2015 benchmarks, surpassing recent results [8,35] by a large margin. The CTPN is computationally efficient with 0.14s/image, by using the very deep VGG16 model [27]. Online demo is available at: http://textdet.com/.

我們提出了一種新穎的連接主義文本提案網絡(CTPN),該網絡可以準確定位自然圖像中的文本行。CTPN直接在卷積特徵圖中以一系列精細尺度文本提案的形式檢測文本行。我們開發了一種垂直錨點機制,可以聯合預測每個固定寬度提案的位置和文本/非文本得分,從而大大提高定位精度。順序提案通過遞歸神經網絡自然連接,該遞歸神經網絡被無縫地集成到卷積網絡中,從而形成了端到端的可訓練模型。這使CTPN能夠探索圖像的豐富上下文信息,從而能夠檢測出非常模糊的文本。CTPN可以可靠地處理多尺度和多語言文本,而無需進一步的後處理,這與以前需要多步後過濾的自下而上方法不同。在ICDAR 2013和2015基準上,它分別達到0.88和0.61的F值,大大超過了最近的結果[8,35]。通過使用非常深的VGG16模型[27],CTPN的計算效率為0.14s/圖像。在線演示可從以下網站獲得:http://textdet.com/。

Keywords: Scene text detection, convolutional network, recurrent neural network, anchor mechanism

關鍵字:場景文本檢測,卷積網絡,遞歸神經網絡,錨定機制


1 Introduction

Reading text in natural image has recently attracted increasing attention in computer vision [8,14,15,10,35,11,9,1,28,32]. This is due to its numerous practical applications such as image OCR, multi-language translation, image retrieval, etc. It includes two sub-tasks: text detection and recognition. This work focuses on the detection task [14,1,28,32], which is more challenging than the recognition task carried out on a well-cropped word image [15,9]. The large variance of text patterns and highly cluttered background pose the main challenges for accurate text localization.

在自然圖像中閱讀文本最近在計算機視覺中引起了越來越多的關注[8,14,15,10,35,11,9,1,28,32]。這是由於其大量的實際應用,例如圖像OCR、多語言翻譯、圖像檢索等。它包括兩個子任務:文本檢測和識別。這項工作的重點是檢測任務[14,1,28,32],這比在裁剪好的單詞圖像上進行的識別任務[15,9]更具挑戰性。文本模式的巨大差異和高度雜亂的背景是精確文本定位的主要挑戰。



Fig. 1: (a) Architecture of the Connectionist Text Proposal Network (CTPN). We densely slide a 3×3 spatial window through the last convolutional maps (conv5 ) of the VGG16 model [27]. The sequential windows in each row are recurrently connected by a Bi-directional LSTM (BLSTM) [7], where the convolutional feature (3×3×C) of each window is used as input of the 256D BLSTM (including two 128D LSTMs). The RNN layer is connected to a 512D fully-connected layer, followed by the output layer, which jointly predicts text/non-text scores, y-axis coordinates and side-refinement offsets of k anchors. (b) The CTPN outputs sequential fixed-width fine-scale text proposals. Color of each box indicates the text/non-text score. Only the boxes with positive scores are presented.

圖1:(a)連接主義者文本建議網(CTPN)的體系結構。 我們通過VGG16模型的最後卷積圖(conv5)密集地滑動3×3空間窗口[27]。 每行中的順序窗口通過雙向LSTM(BLSTM)[7]循環連接,其中每個窗口的卷積特徵(3×3×C)用作256D BLSTM(包括兩個128D LSTM)的輸入 。 RNN層連接到512D全連接層,然後是輸出層,該輸出層共同預測文本/非文本得分,y軸座標和k個錨點的側面細化偏移量。 (b)CTPN輸出順序固定寬度的小規模文本提案。 每個框的顏色表示文本/非文本分數。 僅顯示分數為正的框。

Current approaches for text detection mostly employ a bottom-up pipeline [28,1,14,32,33]. They commonly start from low-level character or stroke detection, which is typically followed by a number of subsequent steps: non-text component filtering, text line construction and text line verification. These multi-step bottom-up approaches are generally complicated, with less robustness and reliability. Their performance heavily relies on the results of character detection, for which connected-components methods or sliding-window methods have been proposed. These methods commonly explore low-level features (e.g., based on SWT [3,13], MSER [14,33,23], or HoG [28]) to distinguish text candidates from background. However, they are not robust, because they identify individual strokes or characters separately, without context information. For example, people are more confident identifying a sequence of characters than an individual one, especially when a character is extremely ambiguous. These limitations often result in a large number of non-text components in character detection, causing major difficulties for handling them in the following steps. Furthermore, these false detections are easily accumulated sequentially in a bottom-up pipeline, as pointed out in [28]. To address these problems, we exploit strong deep features for detecting text information directly in convolutional maps. We develop a text anchor mechanism that accurately predicts text locations at a fine scale. Then, an in-network recurrent architecture is proposed to connect these fine-scale text proposals in sequences, allowing them to encode rich context information.

當前的文本檢測方法主要採用自下而上的管道[28,1,14,32,33]。它們通常從低級字符或筆畫檢測開始,然後執行許多後續步驟:非文本組件過濾、文本行構造和文本行驗證。這些多步自下而上的方法通常比較複雜,健壯性和可靠性較低。它們的性能在很大程度上取決於字符檢測的結果,為此已經提出了連接組件方法和滑動窗口方法。這些方法通常探索低級特徵(例如,基於SWT [3,13]、MSER [14,33,23]或HoG [28])來區分文本候選者和背景。但是,它們在沒有上下文信息的情況下分別識別單個筆畫或字符,因而不夠魯棒。例如,人們識別一個字符序列要比識別單個字符更有信心,尤其是當一個字符非常模稜兩可時。這些限制通常會在字符檢測中導致大量非文本組件,從而導致在後續步驟中處理它們的主要困難。此外,如[28]所指出的,這些錯誤檢測很容易在自下而上的流水線中順序累積。為了解決這些問題,我們利用強大的深度特徵直接在卷積圖中檢測文本信息。我們開發了文本錨點機制,可以在精細尺度上精確預測文本位置。然後,提出了一種網絡內循環架構,以按順序連接這些精細尺度文本提案,從而使它們能夠編碼豐富的上下文信息。

Deep Convolutional Neural Networks (CNN) have recently advanced general object detection substantially [25,5,6]. The state-of-the-art method is Faster Region-CNN (R-CNN) system [25] where a Region Proposal Network (RPN) is proposed to generate high-quality class-agnostic object proposals directly from convolutional feature maps. Then the RPN proposals are fed into a Fast R-CNN [5] model for further classification and refinement, leading to the state-of-the-art performance on generic object detection. However, it is difficult to apply these general object detection systems directly to scene text detection, which generally requires a higher localization accuracy. In generic object detection, each object has a well-defined closed boundary [2], while such a well-defined boundary may not exist in text, since a text line or word is composed of a number of separate characters or strokes. For object detection, a typical correct detection is defined loosely, e.g., by an overlap of > 0.5 between the detected bounding box and its ground truth (e.g., the PASCAL standard [4]), since people can recognize an object easily from major part of it. By contrast, reading text comprehensively is a fine-grained recognition task which requires a correct detection that covers a full region of a text line or word. Therefore, text detection generally requires a more accurate localization, leading to a different evaluation standard, e.g., the Wolf's standard [30] which is commonly employed by text benchmarks [19,21].

深度卷積神經網絡(CNN)最近已大大推進了通用目標檢測[25,5,6]。最先進的方法是Faster Region-CNN(R-CNN)系統[25],其中提出了區域提議網絡(RPN)以直接從卷積特徵圖生成高質量的類無關對象提議。然後將RPN提議輸入到Fast R-CNN [5]模型中,以進行進一步的分類和細化,從而獲得通用對象檢測的最新性能。然而,難以將這些通用對象檢測系統直接應用於場景文本檢測,後者通常需要更高的定位精度。在通用對象檢測中,每個對象都有一個明確定義的封閉邊界[2],而這樣的明確邊界在文本中可能不存在,因為文本行或單詞是由多個單獨的字符或筆劃組成的。對於物體檢測,典型的正確檢測是寬鬆定義的,例如,檢測到的邊界框與其真實標註之間的重疊> 0.5(例如PASCAL標準[4]),因為人們可以從物體的主要部分輕鬆識別它。相反,全面閱讀文本是一種細粒度的識別任務,需要正確的檢測覆蓋文本行或單詞的整個區域。因此,文本檢測通常需要更準確的定位,從而導致不同的評估標準,例如文本基準[19,21]普遍採用的Wolf標準[30]。

In this work, we fill this gap by extending the RPN architecture [25] to accurate text line localization. We present several technical developments that tailor generic object detection model elegantly towards our problem. We strive for a further step by proposing an in-network recurrent mechanism that allows our model to detect text sequence directly in the convolutional maps, avoiding further post-processing by an additional costly CNN detection model.

在這項工作中,我們通過將RPN體系結構[25]擴展到準確的文本行本地化來填補這一空白。 我們提出了一些技術發展,可以針對我們的問題優雅地定製通用對象檢測模型。 我們通過提出一種網絡內遞歸機制來爭取進一步的步驟,該機制允許我們的模型直接在卷積圖中檢測文本序列,從而避免了額外的昂貴CNN檢測模型進行進一步的後處理。

1.1 Contributions

We propose a novel Connectionist Text Proposal Network (CTPN) that directly localizes text sequences in convolutional layers. This overcomes a number of main limitations raised by previous bottom-up approaches building on character detection. We leverage the advantages of strong deep convolutional features and sharing computation mechanism, and propose the CTPN architecture which is described in Fig. 1. It makes the following major contributions:

First, we cast the problem of text detection into localizing a sequence of fine-scale text proposals. We develop an anchor regression mechanism that jointly predicts vertical location and text/non-text score of each text proposal, resulting in an excellent localization accuracy. This departs from the RPN prediction of a whole object, which is difficult to provide a satisfactory localization accuracy.

Second, we propose an in-network recurrence mechanism that elegantly connects sequential text proposals in the convolutional feature maps. This connection allows our detector to explore meaningful context information of text line, making it powerful to detect extremely challenging text reliably.

Third, both methods are integrated seamlessly to meet the nature of text sequence, resulting in a unified end-to-end trainable model. Our method is able to handle multi-scale and multi-lingual text in a single process, avoiding further post filtering or refinement.

Fourth, our method achieves new state-of-the-art results on a number of benchmarks, significantly improving recent results (e.g., 0.88 F-measure over 0.83 in [8] on the ICDAR 2013, and 0.61 F-measure over 0.54 in [35] on the ICDAR 2015). Furthermore, it is computationally efficient, resulting in a 0.14s/image running time (on the ICDAR 2013) by using the very deep VGG16 model [27].

我們提出了一種新穎的連接主義者文本提案網絡(CTPN),該網絡可以直接在卷積層中定位文本序列。 這克服了以前基於字符檢測的自下而上方法帶來的許多主要限制。 我們利用強大的深度卷積特性和共享計算機制的優勢,提出瞭如圖1所示的CTPN體系結構。它做出了以下主要貢獻:

首先,我們將文本檢測問題轉化為本地化一系列精細文本建議。我們開發了一種錨定迴歸機制,可以共同預測每個文本提案的垂直位置和文本/非文本得分,從而獲得出色的定位精度。這偏離了整個物體的RPN預測,這很難提供令人滿意的定位精度。

其次,我們提出了一種網絡內遞歸機制,該機制可以優雅地連接卷積特徵圖中的順序文本提議。這種連接使我們的檢測器能夠探索有意義的文本行上下文信息,從而使其能夠可靠地檢測出極具挑戰性的文本。

第三,兩種方法無縫地集成在一起以滿足文本序列的本質,從而形成了統一的端到端可訓練模型。我們的方法能夠在單個過程中處理多比例和多語言文本,避免了進一步的後期過濾或優化。

第四,我們的方法在多個基準上獲得了新的最優結果,大大改善了最近的結果(例如,在ICDAR 2013上,F值0.88超過了[8]的0.83;在ICDAR 2015上,F值0.61超過了[35]的0.54)。此外,它的計算效率很高,通過使用非常深的VGG16模型[27],運行時間為0.14s/圖像(在ICDAR 2013上)。

2 Related Work

Text detection. Past works in scene text detection have been dominated by bottom-up approaches which are generally built on stroke or character detection. They can be roughly grouped into two categories, connected-components (CCs) based approaches and sliding-window based methods. The CCs based approaches discriminate text and non-text pixels by using a fast filter, and then text pixels are greedily grouped into stroke or character candidates, by using low-level properties, e.g., intensity, color, gradient, etc. [33,14,32,13,3]. The sliding-window based methods detect character candidates by densely moving a multi-scale window through an image. The character or non-character window is discriminated by a pre-trained classifier, by using manually-designed features [28,29], or recent CNN features [16]. However, both groups of methods commonly suffer from poor performance of character detection, causing accumulated errors in following component filtering and text line construction steps. Furthermore, robustly filtering out non-character components or confidently verifying detected text lines are even difficult themselves [1,33,14]. Another limitation is that the sliding-window methods are computationally expensive, by running a classifier on a huge number of the sliding windows.

文字檢測。場景文本檢測中的過去工作主要由自下而上的方法主導,這些方法通常基於筆畫或字符檢測。它們可以大致分為兩類,基於連接組件(CC)的方法和基於滑動窗口的方法。基於CC的方法通過使用快速過濾器來區分文本和非文本像素,然後通過使用低級屬性(例如強度,顏色,漸變等)將文本像素貪婪地分組為筆畫或字符候選者。[33, 14,32,13,3]。基於滑動窗口的方法通過在圖像中密集移動多尺度窗口來檢測候選字符。通過使用人工設計的特徵[28,29]或最新的CNN特徵[16],由預先訓練的分類器來區分字符或非字符窗口。但是,這兩種方法通常都具有較差的字符檢測性能,從而在後續的組件過濾和文本行構造步驟中導致累積的錯誤。此外,魯棒地濾除非字符組成部分或自信地驗證檢測到的文本行本身甚至很困難[1,33,14]。另一個限制是,通過在大量的滑動窗口上運行分類器,滑動窗口方法的計算量很大。

Object detection. Convolutional Neural Networks (CNN) have recently advanced general object detection substantially [25,5,6]. A common strategy is to generate a number of object proposals by employing inexpensive low-level features, and then a strong CNN classifier is applied to further classify and refine the generated proposals. Selective Search (SS) [4] which generates class-agnostic object proposals, is one of the most popular methods applied in recent leading object detection systems, such as Region CNN (R-CNN) [6] and its extensions [5]. Recently, Ren et al. [25] proposed a Faster R-CNN system for object detection. They proposed a Region Proposal Network (RPN) that generates high-quality class-agnostic object proposals directly from the convolutional feature maps. The RPN is fast by sharing convolutional computation. However, the RPN proposals are not discriminative, and require a further refinement and classification by an additional costly CNN model, e.g., the Fast R-CNN model [5]. More importantly, text is different significantly from general objects, making it difficult to directly apply general object detection system to this highly domain-specific task.

對象檢測。卷積神經網絡(CNN)最近已大大推進了通用目標檢測[25,5,6]。一種常見的策略是使用廉價的低級特徵來生成許多對象提議,然後應用強大的CNN分類器對生成的提議進行進一步分類和細化。生成類無關對象提議的選擇性搜索(SS)[4]是最近領先的對象檢測系統(例如區域CNN(R-CNN)[6]及其擴展[5])中應用最廣泛的方法之一。最近,Ren等人[25]提出了一種用於對象檢測的Faster R-CNN系統。他們提出了一個區域提議網絡(RPN),該網絡直接從卷積特徵圖中生成高質量的類無關對象提議。RPN通過共享卷積計算而非常快。但是,RPN提議不具判別性,需要通過另一個昂貴的CNN模型(例如Fast R-CNN模型[5])進一步細化和分類。更重要的是,文本與一般對象有很大的不同,因此很難將通用對象檢測系統直接應用於這一高度領域特定的任務。

3 Connectionist Text Proposal Network

This section presents details of the Connectionist Text Proposal Network (CTPN). It includes three key contributions that make it reliable and accurate for text localization: detecting text in fine-scale proposals, recurrent connectionist text proposals, and side-refinement.

本節介紹連接主義文本提案網絡(CTPN)的細節。它包括三個使其在文本定位上可靠且準確的關鍵貢獻:在精細尺度提案中檢測文本、循環連接主義文本提案,以及側面細化。


Fig. 2: Left: RPN proposals. Right: Fine-scale text proposals.

圖2:左:RPN提案。 右:精細文本提案。

3.1 Detecting Text in Fine-scale Proposals

Similar to Region Proposal Network (RPN) [25], the CTPN is essentially a fully convolutional network that allows an input image of arbitrary size. It detects a text line by densely sliding a small window in the convolutional feature maps, and outputs a sequence of fine-scale (e.g., fixed 16-pixel width) text proposals, as shown in Fig. 1 (b).

類似於區域提議網絡(RPN)[25],CTPN本質上是一個完全卷積的網絡,允許輸入任意大小的圖像。 它通過在卷積特徵圖中密集滑動一個小窗口來檢測文本行,並輸出一系列精細比例(例如,固定的16像素寬度)的文本建議,如圖1(b)所示。

We take the very deep 16-layer vggNet (VGG16) [27] as an example to describe our approach, which is readily applicable to other deep models. Architecture of the CTPN is presented in Fig. 1 (a). We use a small spatial window, 3×3, to slide the feature maps of last convolutional layer (e.g., the conv5 of the VGG16). The size of conv5 feature maps is determined by the size of input image, while the total stride and receptive field are fixed as 16 and 228 pixels, respectively. Both the total stride and receptive field are fixed by the network architecture. Using a sliding window in the convolutional layer allows it to share convolutional computation, which is the key to reduce computation of the costly sliding-window based methods.

我們以非常深的16層vggNet(VGG16)[27]為例來描述我們的方法,該方法很容易適用於其他深度模型。CTPN的架構如圖1(a)所示。我們使用一個3×3的小空間窗口來滑動最後一個卷積層的特徵圖(例如VGG16的conv5)。conv5特徵圖的大小由輸入圖像的大小確定,而總步幅和感受野分別固定為16和228像素。總步幅和感受野都由網絡架構固定。在卷積層中使用滑動窗口使其可以共享卷積計算,這是減少代價高昂的滑動窗口方法的計算量的關鍵。
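To make the stride-16 geometry above concrete, here is a minimal Python sketch (our illustration, not the authors' code); the helper name conv5_grid and the assumption that conv5 is exactly 1/16 of the input resolution are ours.

    def conv5_grid(img_h, img_w, stride=16):
        """Spatial size of conv5 and the image-space centers of its columns/rows,
        assuming the fixed VGG16 total stride of 16 described above."""
        h, w = img_h // stride, img_w // stride
        x_centers = [stride // 2 + x * stride for x in range(w)]  # one 16-px-wide slot per column
        y_centers = [stride // 2 + y * stride for y in range(h)]
        return h, w, x_centers, y_centers

    # Example: a 600x900 input gives a 37x56 conv5 map, i.e. 56 fine-scale slots per row.
    print(conv5_grid(600, 900)[:2])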

Generally, sliding-window methods adopt multi-scale windows to detect objects of different sizes, where one window scale is fixed to objects of similar size. In [25], Ren et al. proposed an efficient anchor regression mechanism that allows the RPN to detect multi-scale objects with a single-scale window. The key insight is that a single window is able to predict objects in a wide range of scales and aspect ratios, by using a number of flexible anchors. We wish to extend this efficient anchor mechanism to our text task. However, text differs from generic objects substantially, which generally have a well-defined enclosed boundary and center, allowing inferring whole object from even a part of it [2]. Text is a sequence which does not have an obvious closed boundary. It may include multi-level components, such as stroke, character, word, text line and text region, which are not distinguished clearly between each other. Text detection is defined in word or text line level, so that it may be easy to make an incorrect detection by defining it as a single object, e.g., detecting part of a word. Therefore, directly predicting the location of a text line or word may be difficult or unreliable, making it hard to get a satisfied accuracy. An example is shown in Fig. 2, where the RPN is directly trained for localizing text lines in an image.

通常,滑動窗口方法採用多尺度窗口來檢測不同大小的對象,其中一個窗口尺度固定到相似大小的對象上。在[25]中,Ren等人。提出了一種有效的錨迴歸機制,該機制使RPN能夠使用單尺度窗口檢測多尺度對象。關鍵的見解是,通過使用多個靈活的錨點,單個窗口能夠以各種比例和寬高比預測對象。我們希望將這種有效的錨定機制擴展到我們的文本任務。但是,文本與通用對象有很大的不同,通用對象通常具有定義明確的封閉邊界和中心,甚至可以從其中的一部分推斷出整個對象[2]。文本是沒有明顯封閉邊界的序列。它可能包括多級組件,例如筆畫,字符,單詞,文本行和文本區域,它們之間沒有明確區分。文本檢測是在單詞或文本行級別定義的,因此通過將其定義為單個對象(例如,檢測單詞的一部分),很容易進行錯誤的檢測。因此,直接預測文本行或單詞的位置可能很困難或不可靠,從而難以獲得令人滿意的準確性。圖2中顯示了一個示例,其中直接訓練RPN以定位圖像中的文本行。

We look for a unique property of text that is able to generalize well to text components in all levels. We observed that word detection by the RPN is difficult to accurately predict the horizontal sides of words, since each character within a word is isolated or separated, making it confused to find the start and end locations of a word. Obviously, a text line is a sequence which is the main difference between text and generic objects. It is natural to consider a text line as a sequence of fine-scale text proposals, where each proposal generally represents a small part of a text line, e.g., a text piece with 16-pixel width. Each proposal may include a single or multiple strokes, a part of a character, a single or multiple characters, etc. We believe that it would be more accurate to just predict the vertical location of each proposal, by fixing its horizontal location which may be more difficult to predict. This reduces the search space, compared to the RPN which predicts 4 coordinates of an object. We develop a vertical anchor mechanism that simultaneously predicts a text/non-text score and y-axis location of each fine-scale proposal. It is also more reliable to detect a general fixed-width text proposal than identifying an isolate character, which is easily confused with part of a character or multiple characters. Furthermore, detecting a text line in a sequence of fixed-width text proposals also works reliably on text of multiple scales and multiple aspect ratios.

我們尋找一種獨特的文本屬性,它能夠很好地推廣到所有層級的文本組件。我們觀察到,由於單詞中的每個字符都是孤立或分開的,RPN的單詞檢測很難準確預測單詞的水平邊,這使得查找單詞的開始和結束位置變得困難。顯然,文本行是一個序列,這是文本和通用對象之間的主要區別。將文本行視為一系列精細尺度文本提案是很自然的,其中每個提案通常代表文本行的一小部分,例如16像素寬的文本片段。每個提案可能包括一個或多個筆畫、一個字符的一部分、一個或多個字符等。我們相信,固定較難預測的水平位置,只預測每個提案的垂直位置,會更加準確。與預測對象4個座標的RPN相比,這減少了搜索空間。我們開發了一種垂直錨點機制,可同時預測每個精細尺度提案的文本/非文本得分和y軸位置。檢測一般的固定寬度文本提案,也比識別容易與字符的一部分或多個字符混淆的孤立字符更可靠。此外,以一系列固定寬度文本提案來檢測文本行,也可以可靠地處理多種尺度和多種長寬比的文本。

To this end, we design the fine-scale text proposal as follow. Our detector investigates each spatial location in the conv5 densely. A text proposal is defined to have a fixed width of 16 pixels (in the input image). This is equal to move the detector densely through the conv5 maps, where the total stride is exactly 16 pixels. Then we design k vertical anchors to predict y-coordinates for each proposal. The k anchors have a same horizontal location with a fixed width of 16 pixels, but their vertical locations are varied in k different heights. In our experiments, we use ten anchors for each proposal, k = 10, whose heights are varied from 11 to 273 pixels (by ÷0.7 each time) in the input image. The explicit vertical coordinates are measured by the height and y-axis center of a proposal bounding box. We compute relative predicted vertical coordinates (v) with respect to the bounding box location of an anchor as,

v_c = (c_y − c_y^a) / h^a,    v_h = log(h / h^a)                  (1)
v_c^* = (c_y^* − c_y^a) / h^a,    v_h^* = log(h^* / h^a)            (2)

where v = {v_c, v_h} and v^* = {v_c^*, v_h^*} are the relative predicted coordinates and ground truth coordinates, respectively. c_y^a and h^a are the center (y-axis) and height of the anchor box, which can be pre-computed from an input image. c_y and h are the predicted y-axis coordinates in the input image, while c_y^* and h^* are the ground truth coordinates. Therefore, each predicted text proposal has a bounding box with size of h × 16 (in the input image), as shown in Fig. 1 (b) and Fig. 2 (right). Generally, a text proposal is largely smaller than its effective receptive field, which is 228×228.

為此,我們如下設計精細尺度文本提案。我們的檢測器密集地考察conv5中的每個空間位置。文本提案被定義為具有16像素的固定寬度(在輸入圖像中)。這等同於讓檢測器密集地滑過conv5特徵圖,其總步幅恰好是16個像素。然後,我們設計k個垂直錨點來預測每個提案的y座標。k個錨點具有相同的水平位置和固定的16像素寬度,但其垂直位置對應k個不同的高度。在我們的實驗中,我們為每個提案使用10個錨點,k = 10,其高度在輸入圖像中從11到273像素變化(每次÷0.7)。顯式的垂直座標由提案邊界框的高度和y軸中心來度量。我們計算相對於錨點邊界框位置的相對預測垂直座標(v)為:


其中 v = {v_c, v_h} 和 v^* = {v_c^*, v_h^*} 分別是相對預測座標和真實標註座標。c_y^a 和 h^a 是錨框的中心(y軸)和高度,可以根據輸入圖像預先計算。c_y 和 h 是輸入圖像中預測的y軸座標,而 c_y^* 和 h^* 是真實標註座標。因此,每個預測的文本提案都有一個大小為 h×16 的邊界框(在輸入圖像中),如圖1(b)和圖2(右)所示。通常,文本提案遠小於其 228×228 的有效感受野。
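The following sketch (ours, not from the paper's code release) spells out the k = 10 anchor heights and the encode/decode of Eqs. (1)-(2); the function and variable names are illustrative.

    import numpy as np

    # 10 anchor heights from 11 to 273 px, growing by a factor of 1/0.7 as described above.
    ANCHOR_HEIGHTS = np.array([round(11 / 0.7 ** i) for i in range(10)])  # 11, 16, ..., 273

    def encode_v(gt_cy, gt_h, anchor_cy, anchor_h):
        """Relative targets (v_c^*, v_h^*) of a ground-truth box w.r.t. an anchor, Eq. (2)."""
        return (gt_cy - anchor_cy) / anchor_h, np.log(gt_h / anchor_h)

    def decode_v(v_c, v_h, anchor_cy, anchor_h):
        """Invert Eq. (1): recover the predicted center/height in image space."""
        cy = v_c * anchor_h + anchor_cy
        h = np.exp(v_h) * anchor_h
        return cy, h  # the proposal is then a box of size h x 16 centered at (x_center, cy)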

The detection processing is summarised as follows. Given an input image, we have W × H × C conv5 feature maps (by using the VGG16 model), where C is the number of feature maps or channels, and W × H is the spatial arrangement. When our detector is sliding a 3×3 window densely through the conv5, each sliding-window takes a convolutional feature of 3 × 3 × C for producing the prediction. For each prediction, the horizontal location (x-coordinates) and the k anchor locations are fixed, which can be pre-computed by mapping the spatial window location in the conv5 onto the input image. Our detector outputs the text/non-text scores and the predicted y-coordinates (v) for k anchors at each window location. The detected text proposals are generated from the anchors having a text/non-text score of > 0.7 (with non-maximum suppression). By the designed vertical anchor and fine-scale detection strategy, our detector is able to handle text lines in a wide range of scales and aspect ratios by using a single-scale image. This further reduces its computation, and at the same time, predicts accurate localizations of the text lines. Compared to the RPN or Faster R-CNN system [25], our fine-scale detection provides more detailed supervised information that naturally leads to a more accurate detection.

檢測處理總結如下。給定輸入圖像,我們有W×H×C conv5個特徵圖(通過使用VGG16模型),其中C是特徵圖或通道的數量,W×H是空間排列。當我們的探測器通過conv5密集地滑動3×3窗口時,每個滑動窗口都具有3×3×C的卷積特徵以產生預測。對於每個預測,水平位置(x座標)和錨點位置是固定的,可以通過將conv5中的空間窗口位置映射到輸入圖像上進行預先計算。我們的檢測器輸出每個窗口位置的k個錨點的文本/非文本分數和預測的y座標(v)。從具有大於0.7的文本/非文本得分的錨點中生成檢測到的文本建議(非最大抑制)。通過設計的垂直錨點和精細比例檢測策略,我們的檢測器能夠通過使用單比例圖像來處理各種比例和寬高比的文本行。這樣可以進一步減少計算量,同時可以預測文本行的準確定位。與RPN或Faster R-CNN系統[25]相比,我們的精細檢測提供了更詳細的監督信息,自然可以帶來更準確的檢測。
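A rough sketch of this decoding step (helper names are assumed, and the NMS IoU threshold below is our assumption since the text does not specify it): keep anchors scoring above 0.7, turn them into h×16 boxes, and suppress duplicates with standard non-maximum suppression.

    import numpy as np

    def nms(boxes, scores, iou_thr=0.5):
        """Greedy non-maximum suppression; boxes are (N, 4) arrays of [x1, y1, x2, y2]."""
        order = scores.argsort()[::-1]
        keep = []
        while order.size > 0:
            i = order[0]
            keep.append(i)
            xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
            yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
            xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
            yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
            inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
            area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
            area_r = (boxes[order[1:], 2] - boxes[order[1:], 0]) * (boxes[order[1:], 3] - boxes[order[1:], 1])
            iou = inter / (area_i + area_r - inter + 1e-9)
            order = order[1:][iou <= iou_thr]
        return keep

    def decode_proposals(scores, cy, h, x_center, score_thr=0.7):
        """scores, cy, h, x_center: flat arrays with one entry per (location, anchor),
        where cy/h have already been decoded to image space via Eq. (1)."""
        m = scores > score_thr
        boxes = np.stack([x_center[m] - 8.0, cy[m] - h[m] / 2.0,
                          x_center[m] + 8.0, cy[m] + h[m] / 2.0], axis=1)  # fixed 16-px width
        keep = nms(boxes, scores[m])
        return boxes[keep], scores[m][keep]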


Fig. 3: Top: CTPN without RNN. Bottom: CTPN with RNN connection.

圖3:頂部:不帶RNN的CTPN。 底部:帶有RNN連接的CTPN。

3.2 Recurrent Connectionist Text Proposals

To improve localization accuracy, we split a text line into a sequence of fine-scale text proposals, and predict each of them separately. Obviously, it is not robust to regard each isolated proposal independently. This may lead to a number of false detections on non-text objects which have a similar structure as text patterns, such as windows, bricks, leaves, etc. (referred to as text-like outliers in [13]). It is also possible to discard some ambiguous patterns which contain weak text information. Several examples are presented in Fig. 3 (top). Text has strong sequential characteristics where the sequential context information is crucial to make a reliable decision. This has been verified by recent work [9] where a recurrent neural network (RNN) is applied to encode this context information for text recognition. Their results have shown that the sequential context information greatly facilitates the recognition task on cropped word images.

為了提高本地化的準確性,我們將文本行分成一系列精細的文本建議,並分別預測每個文本建議。 顯然,獨立考慮每個孤立的提案並不可靠。 這可能導致對非文本對象的許多錯誤檢測,這些對象的結構與文本模式類似,例如窗口,磚塊,樹葉等(在[13]中稱為類似文本的異常值)。 也可以丟棄一些包含弱文本信息的模稜兩可的模式。 圖3(上)顯示了幾個示例。 文本具有很強的順序特性,其中順序上下文信息對於做出可靠的決定至關重要。 最近的工作[9]證實了這一點,其中使用遞歸神經網絡(RNN)編碼此上下文信息以進行文本識別。 他們的結果表明,順序上下文信息極大地促進了對裁剪單詞圖像的識別任務。

Motivated from this work, we believe that this context information may also be of importance for our detection task. Our detector should be able to explore this important context information to make a more reliable decision, when it works on each individual proposal. Furthermore, we aim to encode this information directly in the convolutional layer, resulting in an elegant and seamless in-network connection of the fine-scale text proposals. RNN provides a natural choice for encoding this information recurrently using its hidden layers. To this end, we propose to design a RNN layer upon the conv5, which takes the convolutional feature of each window as sequential inputs, and updates its internal state recurrently in the hidden layer, Ht,

H_t = ϕ(H_{t−1}, X_t),    t = 1, 2, ..., W                  (3)

從這項工作的動機出發,我們認為此上下文信息對於我們的檢測任務也可能很重要。 當檢測器適用於每個提案時,我們的檢測器應該能夠探索這一重要的上下文信息,從而做出更可靠的決策。 此外,我們的目標是直接在卷積層中編碼此信息,以實現精細文本建議的優雅且無縫的網絡內連接。 RNN為使用其隱藏層循環編碼此信息提供了自然的選擇。 為此,我們建議在conv5上設計一個RNN層,該層將每個窗口的卷積特徵作為順序輸入,並在隱藏層Ht中循環更新其內部狀態,


where X_t ∈ R^{3×3×C} is the input conv5 feature from the t-th sliding-window (3×3). The sliding-window moves densely from left to right, resulting in t = 1, 2, ..., W sequential features for each row. W is the width of the conv5. H_t is a recurrent internal state that is computed jointly from both the current input (X_t) and previous states encoded in H_{t−1}. The recurrence is computed by using a non-linear function ϕ, which defines the exact form of the recurrent model. We exploit the long short-term memory (LSTM) architecture [12] for our RNN layer. The LSTM was proposed specially to address the vanishing gradient problem, by introducing three additional multiplicative gates: the input gate, forget gate and output gate. Details can be found in [12]. Hence the internal state in the RNN hidden layer accesses the sequential context information scanned by all previous windows through the recurrent connection. We further extend the RNN layer by using a bi-directional LSTM, which allows it to encode the recurrent context in both directions, so that the connectionist receptive field is able to cover the whole image width, e.g., 228 × width. We use a 128D hidden layer for each LSTM, resulting in a 256D RNN hidden layer, H_t ∈ R^256.

其中 X_t ∈ R^{3×3×C} 是第t個滑動窗口(3×3)的輸入conv5特徵。滑動窗口從左到右密集移動,從而為每一行產生 t = 1, 2, ..., W 個順序特徵。W是conv5的寬度。H_t 是循環內部狀態,它由當前輸入(X_t)和編碼在 H_{t−1} 中的先前狀態共同計算。遞歸通過非線性函數 ϕ 計算,該函數定義了循環模型的確切形式。我們在RNN層中採用長短期記憶(LSTM)架構[12]。LSTM是專門為解決梯度消失問題而提出的,它引入了三個附加的乘法門:輸入門、遺忘門和輸出門。細節可以在[12]中找到。因此,RNN隱藏層中的內部狀態通過循環連接訪問所有先前窗口掃描到的順序上下文信息。我們通過使用雙向LSTM進一步擴展RNN層,使其可以在兩個方向上對循環上下文進行編碼,從而使連接主義的感受野能夠覆蓋整個圖像寬度,例如228×寬度。對於每個LSTM,我們使用128D隱藏層,從而得到256D的RNN隱藏層,H_t ∈ R^256。
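A minimal PyTorch sketch of this in-network recurrence (the original model was implemented in Caffe, so the module below is only an illustration; the class name and the use of nn.Unfold to gather the 3×3×C window features are our choices):

    import torch
    import torch.nn as nn

    class RecurrentTextLayer(nn.Module):
        """Each row of conv5 becomes a sequence of 3x3xC window features, encoded by a
        256D BLSTM (two 128D LSTMs) and mapped to the 512D FC layer of Fig. 1 (a)."""
        def __init__(self, c=512, hidden=128, fc_dim=512):
            super().__init__()
            self.unfold = nn.Unfold(kernel_size=3, padding=1)  # 3x3xC feature per location
            self.blstm = nn.LSTM(9 * c, hidden, bidirectional=True, batch_first=True)
            self.fc = nn.Linear(2 * hidden, fc_dim)

        def forward(self, conv5):                               # conv5: (N, C, H, W)
            n, c, h, w = conv5.shape
            x = self.unfold(conv5).view(n, 9 * c, h, w)
            x = x.permute(0, 2, 3, 1).reshape(n * h, w, 9 * c)  # one sequence per row, length W
            states, _ = self.blstm(x)                           # (N*H, W, 256) hidden states H_t
            out = self.fc(states).view(n, h, w, -1)
            # The prediction heads (2k scores, 2k vertical coords, k side offsets) sit on top of this.
            return out.permute(0, 3, 1, 2)                      # (N, 512, H, W)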

The internal state in H_t is mapped to the following FC layer, and the output layer for computing the predictions of the t-th proposal. Therefore, our integration with the RNN layer is elegant, resulting in an efficient model that is end-to-end trainable without additional cost. The efficiency of the RNN connection is demonstrated in Fig. 3. Obviously, it reduces false detections considerably, and at the same time, recovers many missed text proposals which contain very weak text information.

Ht中的內部狀態映射到隨後的FC層,並輸出到輸出層,以計算第t個提議的預測。 因此,我們與RNN層的集成非常優雅,從而產生了一種高效的模型,可以進行端到端的培訓而無需額外的成本。 RNN連接的效率如圖3所示。顯然,它可以大大減少錯誤檢測,並同時恢復許多包含非常弱的文本信息的遺漏文本建議。

3.3 Side-refinement

The fine-scale text proposals are detected accurately and reliably by our CTPN. Text line construction is straightforward by connecting continuous text proposals whose text/non-text score is > 0.7. Text lines are constructed as follow. First, we define a paired neighbour (Bj) for a proposal Bi as Bj−> Bi , when (i) Bj is the nearest horizontal distance to Bi , and (ii) this distance is less than 50 pixels, and (iii) their vertical overlap is > 0.7. Second, two proposals are grouped into a pair, if Bj−> Bi and Bi−> Bj . Then a text line is constructed by sequentially connecting the pairs having a same proposal.

我們的CTPN可以準確、可靠地檢測出精細尺度的文本提案。通過連接文本/非文本得分 > 0.7 的連續文本提案,可以直接構造文本行。文本行的構造如下。首先,當(i)Bj是與Bi水平距離最近的提案,(ii)該距離小於50像素,且(iii)它們的垂直重疊 > 0.7 時,我們為提案Bi定義一個配對鄰居Bj,記為 Bj−>Bi。其次,如果 Bj−>Bi 且 Bi−>Bj,則將兩個提案組成一對。然後,通過順序連接具有相同提案的配對來構造文本行。
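A simplified sketch of this grouping rule (not the authors' exact algorithm: here the horizontal distance is measured between left edges and lines are chained greedily, which is one reasonable reading of the rule above):

    import numpy as np

    def v_overlap(a, b):
        """Vertical overlap ratio of two boxes given as [x1, y1, x2, y2]."""
        inter = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
        return inter / (max(a[3], b[3]) - min(a[1], b[1]) + 1e-9)

    def nearest(i, boxes, direction, max_dist=50, min_ov=0.7):
        """Nearest neighbour of box i to the right (direction=+1) or left (-1)
        within 50 px and with vertical overlap > 0.7; returns -1 if none."""
        best, best_d = -1, max_dist
        for j, b in enumerate(boxes):
            d = (b[0] - boxes[i][0]) * direction
            if j != i and 0 < d < best_d and v_overlap(boxes[i], b) > min_ov:
                best, best_d = j, d
        return best

    def build_lines(boxes):
        """Chain mutually paired proposals into text lines; returns lists of indices."""
        succ = [nearest(i, boxes, +1) for i in range(len(boxes))]
        pred = [nearest(i, boxes, -1) for i in range(len(boxes))]
        lines = []
        for i in range(len(boxes)):
            if pred[i] == -1 or succ[pred[i]] != i:   # i is not the mutual successor of anyone
                line, j = [i], succ[i]
                while j != -1 and pred[j] == line[-1]:
                    line.append(j)
                    j = succ[j]
                lines.append(line)
        return lines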

The fine-scale detection and RNN connection are able to predict accurate localizations in vertical direction. In horizontal direction, the image is divided into a sequence of equal 16-pixel width proposals. This may lead to an inaccurate localization when the text proposals in both horizontal sides are not exactly covered by a ground truth text line area, or some side proposals are discarded (e.g., having a low text score), as shown in Fig. 4. This inaccuracy may be not crucial in generic object detection, but should not be ignored in text detection, particularly for those small-scale text lines or words. To address this problem, we propose a side-refinement approach that accurately estimates the offset for each anchor/proposal in both left and right horizontal sides (referred as side-anchor or side-proposal). Similar to the y-coordinate prediction, we compute relative offset as,

o = (x_side − c_x^a) / w^a,    o^* = (x_side^* − c_x^a) / w^a                  (4)

where x_side is the predicted x-coordinate of the nearest horizontal side (e.g., left or right side) to the current anchor. x_side^* is the ground truth (GT) side coordinate in x-axis, which is pre-computed from the GT bounding box and anchor location. c_x^a is the center of the anchor in x-axis. w^a is the width of the anchor, which is fixed, w^a = 16. The side-proposals are defined as the start and end proposals when we connect a sequence of detected fine-scale text proposals into a text line. We only use the offsets of the side-proposals to refine the final text line bounding box. Several detection examples improved by side-refinement are presented in Fig. 4. The side-refinement further improves the localization accuracy, leading to about 2% performance improvements on the SWT and Multi-Lingual datasets. Notice that the offset for side-refinement is predicted simultaneously by our model, as shown in Fig. 1. It is not computed from an additional post-processing step.

精細尺度檢測和RNN連接能夠在垂直方向上預測精確的定位。在水平方向上,圖像被劃分為一系列等寬的16像素提案。如圖4所示,當兩側的文本提案沒有被真實文本行區域完全覆蓋,或者某些側面提案被丟棄(例如文本得分較低)時,這可能導致定位不準確。這種不準確在通用對象檢測中可能並不關鍵,但在文本檢測中不應被忽略,尤其是對於那些小尺度的文本行或單詞。為了解決這個問題,我們提出了一種側面細化方法,可以精確估計左右水平兩側每個錨點/提案(稱為側錨點或側提案)的偏移量。與y座標預測類似,我們計算相對偏移量為:


其中 x_side 是距當前錨點最近的水平邊(例如左側或右側)的預測x座標。x_side^* 是x軸上的真實(GT)邊座標,由GT邊界框和錨點位置預先計算得到。c_x^a 是錨點在x軸上的中心。w^a 是錨點的寬度,固定為 w^a = 16。當我們將一系列檢測到的精細尺度文本提案連接成文本行時,側提案定義為起始和結束提案。我們僅使用側提案的偏移量來細化最終的文本行邊界框。圖4給出了一些通過側面細化得到改進的檢測示例。側面細化進一步提高了定位精度,使SWT和Multi-Lingual數據集的性能提高了約2%。請注意,側面細化的偏移量是由我們的模型同時預測的,如圖1所示,而不是通過額外的後處理步驟計算得到。
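A small sketch of how the predicted offset of Eq. (4) would be mapped back to image coordinates and used to refine a constructed text line (function and argument names are hypothetical):

    def decode_side_offset(o, anchor_cx, anchor_w=16.0):
        """Invert Eq. (4): x_side = o * w_a + c_x_a, the refined horizontal boundary."""
        return o * anchor_w + anchor_cx

    def refine_text_line(line_box, o_left, left_anchor_cx, o_right, right_anchor_cx):
        """Replace the line's left/right edges with the side-refined x-coordinates."""
        x1 = decode_side_offset(o_left, left_anchor_cx)
        x2 = decode_side_offset(o_right, right_anchor_cx)
        return [x1, line_box[1], x2, line_box[3]]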

3.4 Model Outputs and Loss Functions

The proposed CTPN has three outputs which are jointly connected to the last FC layer, as shown in Fig. 1 (a). The three outputs simultaneously predict text/non-text scores (s), vertical coordinates (v = {vc, vh} in E.q. (2)) and side-refinement offset (o). We explore k anchors to predict them on each spatial location in the conv5, resulting in 2k, 2k and k parameters in the output layer, respectively.

提議的CTPN具有三個輸出,這些輸出共同連接到最後一個FC層,如圖1(a)所示。 這三個輸出同時預測文本/非文本分數,垂直座標(等式(2)中的v = {vc,vh})和側面細化偏移(o)。 我們探索k個錨以在conv5中的每個空間位置上預測它們,分別在輸出層中產生2k,2k和k個參數。

We employ multi-task learning to jointly optimize model parameters. We introduce three loss functions, L_s^cl, L_v^re and L_o^re, which compute errors of text/non-text score, coordinate and side-refinement, respectively. With these considerations, we follow the multi-task loss applied in [5,25], and minimize an overall objective function (L) for an image as:

L(s_i, v_j, o_k) = (1/N_s) Σ_i L_s^cl(s_i, s_i^*) + (λ_1/N_v) Σ_j L_v^re(v_j, v_j^*) + (λ_2/N_o) Σ_k L_o^re(o_k, o_k^*)                  (5)

我們採用多任務學習來共同優化模型參數。我們引入三個損失函數 L_s^cl、L_v^re 和 L_o^re,分別計算文本/非文本得分、座標和側面細化的誤差。考慮到這些因素,我們遵循[5,25]中應用的多任務損失,並將一幅圖像的總體目標函數(L)最小化為式(5)。

where each anchor is a training sample, and i is the index of an anchor in a mini-batch. s_i is the predicted probability of anchor i being a true text. s_i^* = {0, 1} is the ground truth. j is the index of an anchor in the set of valid anchors for y-coordinate regression, which are defined as follows. A valid anchor is a defined positive anchor (s_j^* = 1, described below), or has an Intersection-over-Union (IoU) overlap > 0.5 with a ground truth text proposal. v_j and v_j^* are the prediction and ground truth y-coordinates associated with the j-th anchor. k is the index of a side-anchor, which is defined as a set of anchors within a horizontal distance (e.g., 32 pixels) to the left or right side of a ground truth text line bounding box. o_k and o_k^* are the predicted and ground truth offsets in x-axis associated with the k-th anchor. L_s^cl is the classification loss, for which we use Softmax loss to distinguish text and non-text. L_v^re and L_o^re are the regression losses, computed with the smooth L1 function following previous work [5,25]. λ_1 and λ_2 are loss weights to balance the different tasks, which are empirically set to 1.0 and 2.0. N_s, N_v and N_o are normalization parameters, denoting the total number of anchors used by L_s^cl, L_v^re and L_o^re, respectively.

其中每個錨點都是一個訓練樣本,i 是小批量中錨點的索引。s_i 是錨點 i 為真實文本的預測概率。s_i^* = {0,1} 是真實標籤。j 是用於y座標迴歸的有效錨點集合中錨點的索引,其定義如下:有效錨點是定義的正錨點(s_j^* = 1,見下文),或者與真實文本提案的交並比(IoU)> 0.5。v_j 和 v_j^* 是與第 j 個錨點關聯的預測y座標和真實y座標。k 是側錨點的索引,側錨點定義為距真實文本行邊界框左側或右側一定水平距離(例如32像素)內的一組錨點。o_k 和 o_k^* 是與第 k 個錨點關聯的x軸上的預測偏移量和真實偏移量。L_s^cl 是分類損失,我們使用Softmax損失來區分文本和非文本。L_v^re 和 L_o^re 是迴歸損失,遵循先前的工作[5,25],使用平滑L1函數計算。λ_1 和 λ_2 是權衡不同任務的損失權重,根據經驗分別設置為1.0和2.0。N_s、N_v 和 N_o 是歸一化參數,分別表示 L_s^cl、L_v^re 和 L_o^re 所使用的錨點總數。
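A hedged PyTorch sketch of the overall objective in Eq. (5) (the paper's model was trained in Caffe; the tensor layout below, with classification logits of shape (N_s, 2) and pre-gathered valid/side anchors, is an assumption):

    import torch
    import torch.nn.functional as F

    def ctpn_loss(scores, labels, v_pred, v_gt, o_pred, o_gt, lam1=1.0, lam2=2.0):
        """scores: (Ns, 2) text/non-text logits; labels: (Ns,); v_*: valid anchors only;
        o_*: side-anchors only. Mean reduction plays the role of the 1/N normalizers."""
        l_cls = F.cross_entropy(scores, labels)                    # Softmax classification loss
        l_v = F.smooth_l1_loss(v_pred, v_gt) if v_pred.numel() else scores.new_zeros(())
        l_o = F.smooth_l1_loss(o_pred, o_gt) if o_pred.numel() else scores.new_zeros(())
        return l_cls + lam1 * l_v + lam2 * l_o                     # λ1 = 1.0, λ2 = 2.0 as in the text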

3.5 Training and Implementation Details

The CTPN can be trained end-to-end by using standard back-propagation and stochastic gradient descent (SGD). Similar to RPN [25], training samples are the anchors, whose locations can be pre-computed in the input image, so that the training labels of each anchor can be computed from the corresponding GT box.

通過使用標準的反向傳播和隨機梯度下降(SGD),可以端到端地訓練CTPN。 類似於RPN [25],訓練樣本是錨點,可以在輸入圖像中預先計算其位置,以便可以從相應的GT框計算每個錨點的訓練標籤。

Training labels. For text/non-text classification, a binary label is assigned to each positive (text) or negative (non-text) anchor. It is defined by computing the IoU overlap with the GT bounding box (divided by anchor location). A positive anchor is defined as : (i) an anchor that has an > 0.7 IoU overlap with any GT box; or (ii) the anchor with the highest IoU overlap with a GT box. By the condition (ii), even a very small text pattern can assign a positive anchor. This is crucial to detect small-scale text patterns, which is one of key advantages of the CTPN. This is different from generic object detection where the impact of condition (ii) may be not significant. The negative anchors are defined as < 0.5 IoU overlap with all GT boxes. The training labels for the y-coordinate regression (v ∗ ) and offset regression (o ∗ ) are computed as E.q. (2) and (4) respectively.

訓練標籤。 對於文本/非文本分類,將二進制標籤分配給每個正(文本)或負(非文本)錨點。 它是通過計算IoU與GT邊界框(按錨點位置劃分)的重疊來定義的。 正錨定義為:(i)與任何GT盒重疊> 0.7 IoU的錨; 或(ii)IoU最高的錨與GT盒重疊。 根據條件(ii),即使很小的文本樣式也可以分配正錨。 這對於檢測小規模文本模式至關重要,這是CTPN的主要優勢之一。 這與條件(ii)的影響可能不大的通用對象檢測不同。 負錨定義為與所有GT盒的<0.5 IoU重疊。 y座標迴歸(v ∗)和偏移回歸(o ∗)的訓練標籤計算為E.q. (2)和(4)。
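A sketch of this labelling rule, assuming an IoU matrix between all anchors and GT boxes has been computed (the helper name is ours; anchors matching neither rule stay ignored, which is the usual RPN-style convention):

    import numpy as np

    def assign_labels(iou, pos_thr=0.7, neg_thr=0.5):
        """iou: (num_anchors, num_gt) matrix. Returns labels of 1 (text), 0 (non-text), -1 (ignored)."""
        labels = np.full(iou.shape[0], -1, dtype=np.int64)
        if iou.size == 0:                 # no GT boxes: everything is negative
            labels[:] = 0
            return labels
        max_iou = iou.max(axis=1)
        labels[max_iou < neg_thr] = 0     # negative: < 0.5 IoU with all GT boxes
        labels[max_iou > pos_thr] = 1     # positive (i): > 0.7 IoU with some GT box
        labels[iou.argmax(axis=0)] = 1    # positive (ii): the highest-IoU anchor for each GT box
        return labels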

Training data. In the training process, the samples of each mini-batch are collected randomly from a single image. The number of anchors for each mini-batch is fixed to Ns = 128, with a 1:1 ratio for positive and negative samples. A mini-batch is padded with negative samples if the number of positive ones is fewer than 64. Our model was trained on 3,000 natural images, including 229 images from the ICDAR 2013 training set. We collected the other images ourselves and manually labelled them with text line bounding boxes. None of the self-collected training images overlap with any test image in all benchmarks. The input image is resized by setting its short side to 600 for training, while keeping its original aspect ratio.

訓練數據。在訓練過程中,每個小批量的樣本都從單幅圖像中隨機收集。每個小批量的錨點數量固定為Ns = 128,正樣本和負樣本的比例為1:1。如果正樣本數量少於64個,則用負樣本填充小批量。我們的模型在3,000張自然圖像上訓練,其中包括來自ICDAR 2013訓練集的229張圖像。其餘圖像由我們自己收集,並手動標註了文本行邊界框。所有自行收集的訓練圖像都不與任何基準測試中的測試圖像重疊。訓練時將輸入圖像的短邊調整為600,同時保持其原始縱橫比。
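A sketch of the sampling and resizing just described (again with illustrative names; rng is assumed to be a numpy Generator such as np.random.default_rng()):

    import numpy as np

    def sample_minibatch(labels, rng, batch_size=128, pos_fraction=0.5):
        """Pick Ns = 128 anchors per image, at most half positive, padded with negatives."""
        pos = np.flatnonzero(labels == 1)
        neg = np.flatnonzero(labels == 0)
        n_pos = min(len(pos), int(batch_size * pos_fraction))
        n_neg = batch_size - n_pos                      # pad with negatives when positives < 64
        keep_pos = rng.choice(pos, n_pos, replace=False)
        keep_neg = rng.choice(neg, min(n_neg, len(neg)), replace=False)
        return keep_pos, keep_neg

    def training_scale(h, w, short_side=600):
        """Resize so the short side is 600 px while keeping the aspect ratio."""
        s = short_side / min(h, w)
        return int(round(h * s)), int(round(w * s))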

Implementation Details. We follow the standard practice, and explore the very deep VGG16 model [27] pre-trained on the ImageNet data [26]. We initialize the new layers (e.g., the RNN and output layers) by using random weights with Gaussian distribution of 0 mean and 0.01 standard deviation. The model was trained end-to-end by fixing the parameters in the first two convolutional layers. We used 0.9 momentum and 0.0005 weight decay. The learning rate was set to 0.001 in the first 16K iterations, followed by another 4K iterations with 0.0001 learning rate. Our model was implemented in Caffe framework [17].

實現細節。我們遵循標準做法,採用在ImageNet數據[26]上預先訓練的非常深的VGG16模型[27]。我們使用均值為0、標準差為0.01的高斯分佈隨機權重來初始化新層(例如RNN層和輸出層)。訓練時固定前兩個卷積層的參數,對模型進行端到端訓練。我們使用0.9的動量和0.0005的權重衰減。在前16K次迭代中將學習率設置為0.001,隨後再以0.0001的學習率迭代4K次。我們的模型是在Caffe框架[17]中實現的。
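For reference, the optimisation settings quoted above can be summarised as a configuration sketch (the original training used Caffe solver files; the key names below are illustrative, not the actual solver fields):

    solver = {
        "type": "SGD",
        "momentum": 0.9,
        "weight_decay": 0.0005,
        "base_lr": 0.001,             # first 16K iterations
        "reduced_lr": 0.0001,         # further 4K iterations
        "frozen_layers": ["conv1_1", "conv1_2"],          # first two convolutional layers kept fixed
        "new_layer_init": "Gaussian(mean=0, std=0.01)",   # RNN and output layers
        "pretrained": "VGG16 trained on ImageNet [26,27]",
    }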

4 Experimental Results and Discussions

We evaluate the CTPN on five text detection benchmarks, namely the ICDAR 2011 [21], ICDAR 2013 [19], ICDAR 2015 [18], SWT [3], and Multilingual dataset [24]. In our experiments, we first verify the efficiency of each proposed component individually, e.g., the fine-scale text proposal detection or in-network recurrent connection. The ICDAR 2013 is used for this component evaluation.

我們根據五個文本檢測基準評估CTPN,即ICDAR 2011 [21],ICDAR 2013 [19],ICDAR 2015 [18],SWT [3]和多語言數據集[24]。 在我們的實驗中,我們首先分別驗證每個建議組件的效率,例如,小規模文本建議檢測或網絡內經常性連接。 ICDAR 2013用於此組件評估。

4.1 Benchmarks and Evaluation Metric

The ICDAR 2011 dataset [21] consists of 229 training images and 255 testing ones, where the images are labelled in word level. The ICDAR 2013 [19] is similar as the ICDAR 2011, and has in total 462 images, including 229 images and 233 images for training and testing, respectively. The ICDAR 2015 (Incidental Scene Text - Challenge 4) [18] includes 1,500 images which were collected by using the Google Glass. The training set has 1,000 images, and the remained 500 images are used for test. This dataset is more challenging than previous ones by including arbitrary orientation, very small-scale and low resolution text. The Multilingual scene text dataset is collected by [24]. It contains 248 images for training and 239 for testing. The images include multi-languages text, and the ground truth is labelled in text line level. Epshtein et al. [3] introduced the SWT dataset containing 307 images which include many extremely small-scale text.

ICDAR 2011數據集[21]包含229個訓練圖像和255個測試圖像,其中圖像以單詞級別標記。 ICDAR 2013 [19]與ICDAR 2011類似,共有462張圖像,其中分別有229張圖像和233張圖像用於訓練和測試。 ICDAR 2015(偶然場景文本-挑戰4)[18]包含1,500張圖像,這些圖像是使用Google Glass收集的。 訓練集有1,000張圖像,其餘500張圖像用於測試。 通過包含任意方向,非常小規模和低分辨率的文本,此數據集比以前的數據集更具挑戰性。 多語言場景文本數據集由[24]收集。 它包含248張用於訓練的圖像和239張用於測試的圖像。 圖像包括多語言文本,地面真相在文本行級別標記。 Epshtein等。 [3]介紹了包含307個圖像的SWT數據集,其中包括許多極端小規模的文本。

We follow previous work by using standard evaluation protocols which are provided by the dataset creators or competition organizers. For the ICDAR 2011 we use the standard protocol proposed by [30], the evaluation on the ICDAR 2013 follows the standard in [19]. For the ICDAR 2015, we used the online evaluation system provided by the organizers as in [18]. The evaluations on the SWT and Multilingual datasets follow the protocols defined in [3] and [24] respectively.

我們遵循以前的工作,使用數據集創建者或競賽組織者提供的標準評估協議。 對於ICDAR 2011,我們使用[30]提出的標準協議,對ICDAR 2013的評估遵循[19]中的標準。 對於ICDAR 2015,我們使用了組織者提供的在線評估系統,如[18]。 SWT和多語言數據集的評估分別遵循[3]和[24]中定義的協議。

4.2 Fine-Scale Text Proposal Network with Faster R-CNN

We first discuss our fine-scale detection strategy against the RPN and Faster R-CNN system [25]. As can be found in Table 1 (left), the individual RPN is difficult to perform accurate text localization, by generating a large amount of false detections (low precision). By refining the RPN proposals with a Fast R-CNN detection model [5], the Faster R-CNN system improves localization accuracy considerably, with a F-measure of 0.75. One observation is that the Faster R-CNN also increases the recall of original RPN. This may benefit from joint bounding box regression mechanism of the Fast R-CNN, which improves the accuracy of a predicted bounding box. The RPN proposals may roughly localize a major part of a text line or word, but they are not accurate enough by the ICDAR 2013 standard. Obviously, the proposed fine-scale text proposal network (FTPN) improves the Faster R-CNN remarkably in both precision and recall, suggesting that the FTPN is more accurate and reliable, by predicting a sequence of fine-scale text proposals rather than a whole text line.

我們首先將我們的精細尺度檢測策略與RPN和Faster R-CNN系統[25]進行比較。如表1(左)所示,單獨的RPN會產生大量錯誤檢測(精度低),難以實現準確的文本定位。通過使用Fast R-CNN檢測模型[5]細化RPN提議,Faster R-CNN系統顯著提高了定位精度,F值為0.75。一個觀察是,Faster R-CNN也提高了原始RPN的召回率。這可能得益於Fast R-CNN的聯合邊界框迴歸機制,該機制提高了預測邊界框的準確性。RPN提議可能會粗略定位文本行或單詞的主要部分,但按照ICDAR 2013標準,它們不夠準確。顯然,所提出的精細尺度文本提案網絡(FTPN)在精度和召回率上都顯著優於Faster R-CNN,這表明通過預測一系列精細尺度文本提案而非整個文本行,FTPN更加準確和可靠。

4.3 Recurrent Connectionist Text Proposals

We discuss impact of recurrent connection on our CTPN. As shown in Fig. 3, the context information is greatly helpful to reduce false detections, such as textlike outliers. It is of great importance for recovering highly ambiguous text (e.g., extremely small-scale ones), which is one of main advantages of our CTPN, as demonstrated in Fig. 6. These appealing properties result in a significant performance boost. As shown in Table 1 (left), with our recurrent connection, the CTPN improves the FTPN substantially from a F-measure of 0.80 to 0.88.

Running time. The implementation time of our CTPN (for whole detection processing) is about 0.14s per image with a fixed short side of 600, by using a single GPU. The CTPN without the RNN connection takes about 0.13s/image GPU time. Therefore, the proposed in-network recurrent mechanism increases model computation only marginally, while obtaining considerable performance gain.

我們討論了循環連接對CTPN的影響。 如圖3所示,上下文信息極大地有助於減少錯誤檢測,例如類似文本的異常值。 如圖6所示,這對於恢復高度含糊的文本(例如,極小比例的文本)非常重要,這是我們CTPN的主要優勢之一。這些吸引人的屬性極大地提高了性能。 如表1(左)所示,通過我們的循環連接,CTPN將FTPN的F值從0.80大大提高到0.88。

運行時間。 通過使用單個GPU,CTPN(用於整個檢測處理)的實現時間約為每個圖像0.14s,固定短邊為600。 不帶RNN連接的CTPN大約需要每圖像GPU時間0.13s。 因此,所提出的網絡內遞歸機制略微增加了模型計算量,並獲得了可觀的性能提升。


4.4 Comparisons with state-of-the-art results

Our detection results on several challenging images are presented in Fig. 5. As can be found, the CTPN works perfectly on these challenging cases, some of which are difficult for many previous methods. It is able to handle multi-scale and multi-language efficiently (e.g., Chinese and Korean).

我們在幾個具有挑戰性的圖像上的檢測結果如圖5所示。可以發現,CTPN可以完美地應對這些具有挑戰性的情況,其中某些情況對於許多以前的方法來說都是困難的。 它能夠有效地處理多種語言和多種語言(例如中文和韓文)。


Fig. 5: CTPN detection results on several challenging images, including multi-scale and multi-language text lines. Yellow boxes are the ground truth.

圖5:CTPN在幾幅具有挑戰性的圖像上的檢測結果,包括多尺度和多語言文本行。黃色框為真實標註。


Fig. 6: CTPN detection results on extremely small-scale cases (in red boxes), where some ground truth boxes are missed. Yellow boxes are the ground truth.

圖6:CTPN在極小尺度文本(紅色框)上的檢測結果,其中一些真實標註框被遺漏。黃色框為真實標註。

The full evaluation was conducted on five benchmarks. Image resolution is varied significantly in different datasets. We set short side of images to 2000 for the SWT and ICDAR 2015, and 600 for the other three. We compare our performance against recently published results in [1,28,34]. As shown in Table 1 and 2, our CTPN achieves the best performance on all five datasets. On the SWT, our improvements are significant on both recall and F-measure, with marginal gain on precision. Our detector performs favourably against the TextFlow on the Multilingual, suggesting that our method generalize well to various languages. On the ICDAR 2013, it outperforms recent TextFlow [28] and FASText [1] remarkably by improving the F-measure from 0.80 to 0.88. The gains are considerable in both precision and recall, with more than +5% and +7% improvements, respectively. In addition, we further compare our method against [8,11,35], which were published after our initial submission. It consistently obtains substantial improvements on F-measure and recall. This may due to strong capability of CTPN for detecting extremely challenging text, e.g., very small-scale ones, some of which are even difficult for human. As shown in Fig. 6, those challenging ones are detected correctly by our detector, but some of them are even missed by the GT labelling, which may reduce our precision in evaluation.

全面評估在五個基準上進行。不同數據集的圖像分辨率差異很大。對於SWT和ICDAR 2015,我們將圖像的短邊設置為2000,其他三個設置為600。我們將我們的性能與最近發表的結果[1,28,34]進行比較。如表1和表2所示,我們的CTPN在所有五個數據集上均實現了最佳性能。在SWT上,我們在召回率和F值上的改進都很顯著,在精度上也有少量提高。在Multilingual數據集上,我們的檢測器優於TextFlow,這表明我們的方法可以很好地推廣到各種語言。在ICDAR 2013上,它將F值從0.80提高到0.88,明顯優於最近的TextFlow [28]和FASText [1]。精度和召回率均有可觀的提升,分別提高了5%和7%以上。此外,我們還將我們的方法與在我們首次提交後發表的[8,11,35]進行了比較。它在F值和召回率上始終獲得實質性改進。這可能是由於CTPN具有檢測極具挑戰性文本(例如非常小尺度的文本,其中一些甚至對人類來說也很困難)的強大能力。如圖6所示,那些具有挑戰性的文本被我們的檢測器正確檢測到,但其中一些甚至被GT標註遺漏,這可能會降低我們評估中的精度。

We further investigate running time of various methods, as compared in Table 2. FASText [1] achieves 0.15s/image CPU time. Our method is slightly faster than it by obtaining 0.14s/image, but in GPU time. Though it is not fair to compare them directly, the GPU computation has become mainstream with recent great success of deep learning approaches on object detection [25,5,6]. Regardless of running time, our method outperforms the FASText substantially with 11% improvement on F-measure. Our time can be reduced by using a smaller image scale. By using the scale of 450, it is reduced to 0.09s/image, while obtaining P/R/F of 0.92/0.77/0.84 on the ICDAR 2013, which are compared competitively against Gupta et al.'s approach [8] using 0.07s/image with GPU.

我們進一步比較了各種方法的運行時間,如表2所示。FASText [1]達到0.15s/圖像的CPU時間。我們的方法以0.14s/圖像略快於它,但使用的是GPU時間。儘管直接比較它們並不公平,但隨著深度學習方法在對象檢測方面最近的巨大成功[25,5,6],GPU計算已成為主流。無論運行時間如何,我們的方法都大幅優於FASText,F值提高了11%。通過使用較小的圖像尺度可以進一步縮短時間。使用450的尺度時,運行時間減少到0.09s/圖像,同時在ICDAR 2013上獲得0.92/0.77/0.84的P/R/F,與Gupta等人使用GPU、0.07s/圖像的方法[8]相比具有競爭力。

5 Conclusions

We have presented a Connectionist Text Proposal Network (CTPN) - an efficient text detector that is end-to-end trainable. The CTPN detects a text line in a sequence of fine-scale text proposals directly in convolutional maps. We develop vertical anchor mechanism that jointly predicts precise location and text/nontext score for each proposal, which is the key to realize accurate localization of text. We propose an in-network RNN layer that connects sequential text proposals elegantly, allowing it to explore meaningful context information. These key technical developments result in a powerful ability to detect highly challenging text, with less false detections. The CTPN is efficient by achieving new state-ofthe-art performance on five benchmarks, with 0.14s/ image running time.

我們介紹了連接主義文本提案網絡(CTPN)——一種可端到端訓練的高效文本檢測器。CTPN直接在卷積圖中以一系列精細尺度文本提案的形式檢測文本行。我們開發了垂直錨點機制,可以聯合預測每個提案的精確位置和文本/非文本得分,這是實現文本精確定位的關鍵。我們提出了一個網絡內RNN層,可以優雅地連接順序文本提案,使其能夠探索有意義的上下文信息。這些關鍵的技術發展帶來了檢測極具挑戰性文本的強大能力,同時減少了錯誤檢測。CTPN在五個基準上實現了新的最優性能,運行時間為0.14s/圖像,非常高效。

