03.06 Lane Detection with LaneNet

  • LaneNet: the Segmentation branch performs semantic segmentation, i.e. decides whether each pixel belongs to a lane line or the background; the Embedding branch produces a vector representation of each pixel, used for the subsequent clustering that completes instance segmentation
  • H-Net
  • Segmentation branch

    Handling imbalanced sample distribution

    Lane-line pixels are far fewer than background pixels, so the loss function assigns different weights to different pixels, lowering the weight of the background.

    This branch outputs a tensor of shape (w, h, 2).
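    As a sketch (not the repo's actual loss code), the bounded inverse class weighting w = 1/ln(c + p) that the LaneNet paper borrows from ENet can be written in TF 1.x like this; the function name and the constant c are illustrative:

    <code>import tensorflow as tf

    def weighted_binary_seg_loss(logits, labels, c=1.02):
        # labels: (N, H, W) int tensor, 0 = background, 1 = lane
        # logits: (N, H, W, 2) raw scores from the segmentation branch
        onehot = tf.one_hot(labels, depth=2)
        # per-class pixel frequency over the batch
        freq = tf.reduce_mean(tf.reshape(onehot, [-1, 2]), axis=0)
        # bounded inverse class weighting: the rare lane class gets a larger weight
        class_weights = 1.0 / tf.log(c + freq)
        pixel_weights = tf.reduce_sum(onehot * class_weights, axis=-1)
        ce = tf.nn.softmax_cross_entropy_with_logits_v2(labels=onehot, logits=logits)
        return tf.reduce_mean(ce * pixel_weights)
    </code>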

    Embedding branch

    The loss is designed so that pixels belonging to the same lane line end up as close together as possible while pixels belonging to different lane lines end up as far apart as possible, i.e. the discriminative loss.

    This branch outputs (w, h, n), where n is the dimensionality of each pixel's embedding vector.
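    A NumPy sketch of the pull/push structure of the discriminative loss (simplified: the paper also includes a regularization term, and the margin values and hinge form below are illustrative):

    <code>import numpy as np

    def discriminative_loss_sketch(embeddings, labels, delta_v=0.5, delta_d=3.0):
        # embeddings: (P, n) pixel vectors; labels: (P,) lane instance ids
        lane_ids = np.unique(labels)
        centers = np.stack([embeddings[labels == i].mean(axis=0) for i in lane_ids])

        # variance (pull) term: pixels are pulled toward their own lane's center
        l_var = np.mean([
            np.mean(np.maximum(0.0, np.linalg.norm(
                embeddings[labels == i] - centers[k], axis=1) - delta_v) ** 2)
            for k, i in enumerate(lane_ids)])

        # distance (push) term: centers of different lanes are pushed apart
        dists = [np.maximum(0.0, delta_d - np.linalg.norm(centers[a] - centers[b])) ** 2
                 for a in range(len(lane_ids)) for b in range(a + 1, len(lane_ids))]
        l_dist = np.mean(dists) if dists else 0.0

        return l_var + l_dist
    </code>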

    Instance segmentation

    After the Segmentation branch has done the semantic segmentation and the Embedding branch has produced the per-pixel vectors, clustering completes the instance segmentation.

    H-Net

    Perspective transform

    to do

    Lane line fitting

    LaneNet outputs a set of pixels for each lane line, and a curve still has to be regressed from those pixels. The traditional approach is to project the image into a bird's-eye view and then fit a second- or third-order polynomial. There, the transformation matrix H is computed only once and the same matrix is applied to every image, which introduces errors when the road slope changes.

    To solve this, the paper trains a neural network, H-Net, that predicts the transformation matrix H: the network's input is an image and its output is the transformation matrix H. I previously ported OpenCV's inverse perspective transform source code, where the transformation matrix takes 8 parameters, yet here only 6 degrees of freedom are given. This puzzled me at first; on a careful read of the paper, the authors explain that the constraint is placed on the horizontal component of the transform (so that horizontal lines remain horizontal under it).
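    For contrast, a minimal sketch of the classical fixed-H pipeline described above, using OpenCV; the four point correspondences are made-up placeholders, not values from the paper:

    <code>import cv2
    import numpy as np

    # compute H once from hand-picked correspondences (placeholder values)
    src = np.float32([[200, 720], [1100, 720], [595, 450], [685, 450]])
    dst = np.float32([[300, 720], [980, 720], [300, 0], [980, 0]])
    H = cv2.getPerspectiveTransform(src, dst)  # 3x3 homography

    # project lane pixels (x, y) into the bird's-eye view
    pts = np.float32([[[640, 500]], [[660, 560]], [[680, 620]]])
    bev = cv2.perspectiveTransform(pts, H).reshape(-1, 2)

    # fit x = f(y) with a 2nd-order polynomial in the bird's-eye view
    coeffs = np.polyfit(bev[:, 1], bev[:, 0], deg=2)
    </code>

    H-Net replaces the fixed src/dst choice above with a per-image prediction of H.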

    Code analysis

    <code>binary_seg_image, instance_seg_image = sess.run(
        [binary_seg_ret, instance_seg_ret],
        feed_dict={input_tensor: [image]}
    )
    </code>

    Input: (1, 256, 512, 3). Output: binary_seg_image (1, 256, 512), instance_seg_image (1, 256, 512, 4).

    This completes the pixel-level classification and the per-pixel vector representation.

    class LaneNet's inference proceeds in two steps.

    The first step extracts the segmentation features, including both the features used for semantic segmentation and those used for instance segmentation.

    <code>class LaneNet(cnn_basenet.CNNBaseModel):
        def inference(self, input_tensor, name):
            """

            :param input_tensor:
            :param name:
            :return:
            """
            with tf.variable_scope(name_or_scope=name, reuse=self._reuse):
                # first extract image features
                extract_feats_result = self._frontend.build_model(
                    input_tensor=input_tensor,
                    name='{:s}_frontend'.format(self._net_flag),
                    reuse=self._reuse
                )
                # Returns a dict containing the feature map for semantic segmentation
                # and the feature map for instance segmentation:
                # binary_segment_logits (1, 256, 512, 2); 2 is the number of classes,
                # i.e. lane / background.
                # instance_segment_logits (1, 256, 512, 64); convolved later to
                # produce a vector representation for each pixel.
                print('features:', extract_feats_result)

                # second apply backend process
                binary_seg_prediction, instance_seg_prediction = self._backend.inference(
                    binary_seg_logits=extract_feats_result['binary_segment_logits']['data'],
                    instance_seg_logits=extract_feats_result['instance_segment_logits']['data'],
                    name='{:s}_backend'.format(self._net_flag),
                    reuse=self._reuse
                )

                if not self._reuse:
                    self._reuse = True

                return binary_seg_prediction, instance_seg_prediction
    </code>

    The features obtained in the first step look like this:

    <code>features: OrderedDict([
        ('encode_stage_1_share',    {'data': <tf.Tensor>, 'shape': [1, 256, 512, 64]}),
        ('encode_stage_2_share',    {'data': <tf.Tensor>, 'shape': [1, 128, 256, 128]}),
        ('encode_stage_3_share',    {'data': <tf.Tensor>, 'shape': [1, 64, 128, 256]}),
        ('encode_stage_4_share',    {'data': <tf.Tensor>, 'shape': [1, 32, 64, 512]}),
        ('encode_stage_5_binary',   {'data': <tf.Tensor>, 'shape': [1, 16, 32, 512]}),
        ('encode_stage_5_instance', {'data': <tf.Tensor>, 'shape': [1, 16, 32, 512]}),
        ('binary_segment_logits',   {'data': <tf.Tensor>, 'shape': [1, 256, 512, 2]}),
        ('instance_segment_logits', {'data': <tf.Tensor>, 'shape': [1, 256, 512, 64]})
    ])
    </code>

    With feature extraction done, post-processing follows:

    <code>class LaneNetBackEnd(cnn_basenet.CNNBaseModel):
        def inference(self, binary_seg_logits, instance_seg_logits, name, reuse):
            """

            :param binary_seg_logits:
            :param instance_seg_logits:
            :param name:
            :param reuse:
            :return:
            """
            with tf.variable_scope(name_or_scope=name, reuse=reuse):

                with tf.variable_scope(name_or_scope='binary_seg'):
                    binary_seg_score = tf.nn.softmax(logits=binary_seg_logits)
                    binary_seg_prediction = tf.argmax(binary_seg_score, axis=-1)

                with tf.variable_scope(name_or_scope='instance_seg'):

                    pix_bn = self.layerbn(
                        inputdata=instance_seg_logits, is_training=self._is_training, name='pix_bn')
                    pix_relu = self.relu(inputdata=pix_bn, name='pix_relu')
                    instance_seg_prediction = self.conv2d(
                        inputdata=pix_relu,
                        out_channel=CFG.TRAIN.EMBEDDING_FEATS_DIMS,
                        kernel_size=1,
                        use_bias=False,
                        name='pix_embedding_conv'
                    )

                return binary_seg_prediction, instance_seg_prediction
    </code>

    For the per-pixel classification, a softmax converts the logits into probabilities and an argmax takes the index of the larger probability. For the per-pixel vector representation, a 1x1 convolution reduces the channel dimension to CFG.TRAIN.EMBEDDING_FEATS_DIMS (configured as 4): convolving the (1, 256, 512, 64) tensor yields a (1, 256, 512, 4) tensor, i.e. each pixel is represented by a 4-dimensional vector.
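    A quick NumPy check of the shape bookkeeping, on dummy data:

    <code>import numpy as np

    logits = np.random.randn(1, 256, 512, 2)           # binary branch logits
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs = e / e.sum(axis=-1, keepdims=True)          # softmax over channels
    pred = probs.argmax(axis=-1)                       # per-pixel class index
    print(pred.shape)                                  # (1, 256, 512)
    </code>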

    So, the whole LaneNet inference returns two tensors: one of shape (1, 256, 512) and one of shape (1, 256, 512, 4).

    Post-processing

    <code>class LaneNetPostProcessor(object):
        def postprocess(self, binary_seg_result, instance_seg_result=None,
                        min_area_threshold=100, source_image=None,
                        data_source='tusimple'):
    </code>

    For binary_seg_result, morphological operations first remove small holes; see https://www.cnblogs.com/sdu20112013/p/11672634.html
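    A sketch of that hole-filling step with OpenCV's morphological closing (the 5x5 kernel size is illustrative, not necessarily the repo's choice):

    <code>import cv2
    import numpy as np

    mask = np.zeros((256, 512), dtype=np.uint8)
    mask[100:110, 50:200] = 255        # a lane-like blob
    mask[105, 120] = 0                 # a small hole inside it

    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    closed = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    print(closed[105, 120])            # 255: the hole has been filled
    </code>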

    Then clustering is performed; a sketch of that step appears after the np.where note below.

    <code>def _get_lane_embedding_feats(binary_seg_ret, instance_seg_ret):
        """
        get lane embedding features according to the binary seg result
        :param binary_seg_ret:
        :param instance_seg_ret:
        :return:
        """
        idx = np.where(binary_seg_ret == 255)  # tuple of index arrays, one per dimension
        lane_embedding_feats = instance_seg_ret[idx]

        # idx_scale = np.vstack((idx[0] / 256.0, idx[1] / 512.0)).transpose()
        # lane_embedding_feats = np.hstack((lane_embedding_feats, idx_scale))
        lane_coordinate = np.vstack((idx[1], idx[0])).transpose()

        assert lane_embedding_feats.shape[0] == lane_coordinate.shape[0]

        ret = {
            'lane_embedding_feats': lane_embedding_feats,
            'lane_coordinates': lane_coordinate
        }

        return ret
    </code>

    This yields the coordinates and the embedding vector of the pixel at each coordinate.

    np.where(condition)

    With only a condition (and no x or y), it outputs the coordinates of the elements that satisfy the condition (i.e. are non-zero), equivalent to numpy.nonzero. The coordinates are given as a tuple containing one array per dimension of the original array, each holding the matching elements' indices along that dimension; for a 2-D array, for example, np.where(a > 0) returns (row_indices, col_indices).
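    With the embeddings and coordinates in hand, the clustering step can be sketched with scikit-learn's DBSCAN (hyper-parameters are illustrative; the repo's own clusterer and settings may differ):

    <code>import numpy as np
    from sklearn.cluster import DBSCAN

    lane_embedding_feats = np.random.rand(1000, 4)   # stand-in for the real features
    db = DBSCAN(eps=0.35, min_samples=50).fit(lane_embedding_feats)
    labels = db.labels_                              # one instance id per pixel, -1 = noise
    num_lanes = len(set(labels)) - (1 if -1 in labels else 0)
    print(num_lanes)
    </code>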

    Test results

    tensorflow-gpu 1.15.2

    4x Titan Xp

    (4, 256, 512) (4, 256, 512, 4)

    I0302 17:04:31.276140 29376 test_lanenet.py:222] imgae inference cost time: 2.58794s

    (32, 256, 512) (32, 256, 512, 4)

    I0302 17:05:50.322593 29632 test_lanenet.py:222] imgae inference cost time: 4.31036s

    This is akin to high throughput with high latency: a single frame takes 1-2 s, but processing a batch of images together brings the average down to roughly 0.1 s per frame (4.31 s / 32 ≈ 0.13 s).

    The paper's backbone is ENet, with an inference speed of 52 fps on an NVIDIA 1080 Ti.

    The repo author's explanation of this gap is:

    2.Origin paper use Enet as backbone net but I use vgg16 as backbone net so speed will not get as fast as that. 3.Gpu need a short time to warm up and you can adjust your batch size to test the speed again:)

    In short: the feature extraction network differs from the paper's, and the GPU needs a short warm-up time.

    My own measurement is that "extract image features" takes most of the time; switching to a different backbone may improve it.

    <code>def inference(self, input_tensor, name):
        """

        :param input_tensor:
        :param name:
        :return:
        """
        print("***************,input_tensor shape:", input_tensor.shape)
        with tf.variable_scope(name_or_scope=name, reuse=self._reuse):
            t_start = time.time()
            # first extract image features
            extract_feats_result = self._frontend.build_model(
                input_tensor=input_tensor,
                name='{:s}_frontend'.format(self._net_flag),
                reuse=self._reuse
            )
            t_cost = time.time() - t_start
            glog.info('extract image features cost time: {:.5f}s'.format(t_cost))

            # second apply backend process
            t_start = time.time()
            binary_seg_prediction, instance_seg_prediction = self._backend.inference(
                binary_seg_logits=extract_feats_result['binary_segment_logits']['data'],
                instance_seg_logits=extract_feats_result['instance_segment_logits']['data'],
                name='{:s}_backend'.format(self._net_flag),
                reuse=self._reuse
            )
            t_cost = time.time() - t_start
            glog.info('backend process cost time: {:.5f}s'.format(t_cost))

            if not self._reuse:
                self._reuse = True

            return binary_seg_prediction, instance_seg_prediction
    </code>


