TensorFlow 2.0 Tutorial: 4

In an earlier tutorial we learned the basic workflow of building a neural network with TF 2.0,

then we learned methods for image recognition and visualization,

and then we compared the code implementations of several common RNN models.

Here we will learn a few basic techniques:

  1. Feature standardization
  2. Plotting learning curves
  3. callbacks
TensorFlow 2.0 Tutorial: 4 - Several Common Techniques


%matplotlib inline
%load_ext tensorboard # "tensorboard.notebook" was renamed; "tensorboard" is the current extension name
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import os
import pandas as pd
import sklearn
import sys
import tensorflow as tf
from tensorflow import keras # tf.keras
import time
assert sys.version_info >= (3, 5) # Python ≥3.5 required
assert tf.__version__ >= "2.0" # TensorFlow ≥2.0 required
fashion_mnist = keras.datasets.fashion_mnist
(X_train_full, y_train_full), (X_test, y_test) = fashion_mnist.load_data()
X_valid, X_train = X_train_full[:5000], X_train_full[5000:]
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]
plt.imshow(X_train[0], cmap="binary")
plt.show()
class_names = ["T-shirt/top", "Trouser", "Pullover", "Dress", "Coat",
               "Sandal", "Shirt", "Sneaker", "Bag", "Ankle boot"]

1. Building a Neural Network

The process below is the most basic workflow from building a model, to evaluating it, to making predictions;

almost every model follows this same sequence:

  • first build a basic network model:
  • the input layer flattens each 28x28 image into a 1x784 vector,
  • the hidden layers define the number of neurons and the activation function,
  • the output layer defines the number of classes and uses softmax to produce probabilities.
model = keras.models.Sequential([
    keras.layers.Flatten(input_shape=[28, 28]),
    keras.layers.Dense(300, activation="relu"),
    keras.layers.Dense(100, activation="relu"),
    keras.layers.Dense(10, activation="softmax")
])
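The Flatten layer's effect can be checked with plain NumPy, independent of Keras; a minimal sketch with a fake image:

```python
import numpy as np

# A fake 28x28 "image" standing in for one Fashion-MNIST sample
image = np.arange(28 * 28).reshape(28, 28)

# Flatten(input_shape=[28, 28]) performs the equivalent of this per-sample reshape
flat = image.reshape(-1)
print(flat.shape)  # (784,)
```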

Then inspect the model:

  • compile the model, defining the loss, optimizer, and metrics,
  • train the model with a simple call to fit,
  • evaluate the model with model.evaluate,
  • and finally predict on new data.
model.summary()
model.compile(loss="sparse_categorical_crossentropy",
              optimizer="sgd",
              metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
                    validation_data=(X_valid, y_valid))
model.evaluate(X_test, y_test)
n_new = 10
X_new = X_test[:n_new]
y_proba = model.predict(X_new)

2. Common Techniques

1. We can standardize the features as a preprocessing step:

from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
# reshape(-1, 1) makes StandardScaler compute a single global mean/std over all pixels
X_train_scaled = scaler.fit_transform(X_train.astype(np.float32).reshape(-1, 1)).reshape(-1, 28, 28)
X_valid_scaled = scaler.transform(X_valid.astype(np.float32).reshape(-1, 1)).reshape(-1, 28, 28)
X_test_scaled = scaler.transform(X_test.astype(np.float32).reshape(-1, 1)).reshape(-1, 28, 28)
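The reshape(-1, 1) trick above makes StandardScaler treat every pixel as a sample of one feature, so one global mean and standard deviation are used. A minimal sketch of the same computation on fabricated data, checking it against a manual NumPy version:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Fabricated batch of 4 fake 28x28 "images"
X = np.random.RandomState(0).rand(4, 28, 28).astype(np.float32)

scaler = StandardScaler()
scaled = scaler.fit_transform(X.reshape(-1, 1)).reshape(-1, 28, 28)

# Equivalent manual computation with one global mean/std
manual = (X - X.mean()) / X.std()
print(np.allclose(scaled, manual, atol=1e-4))  # True
```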

Then use the standardized data during training and evaluation, and compare the results of the two runs:

history = model.fit(X_train_scaled, y_train, epochs=20,
                    validation_data=(X_valid_scaled, y_valid))
model.evaluate(X_test_scaled, y_test)

2. We can also plot the learning curves with pd.DataFrame(history.history).plot:

def plot_learning_curves(history):
    pd.DataFrame(history.history).plot(figsize=(8, 5))
    plt.grid(True)
    plt.gca().set_ylim(0, 1)
    plt.show()

plot_learning_curves(history)
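This works because history.history is just a dict mapping metric names to per-epoch lists, which pd.DataFrame accepts directly. A minimal sketch with a fabricated history dict (all values made up):

```python
import pandas as pd

# Fabricated history.history contents for a 3-epoch run
fake_history = {
    "loss": [0.9, 0.6, 0.4],
    "accuracy": [0.65, 0.78, 0.85],
    "val_loss": [0.8, 0.55, 0.45],
    "val_accuracy": [0.70, 0.80, 0.83],
}

df = pd.DataFrame(fake_history)  # one row per epoch, one column per metric
print(list(df.columns))
print(len(df))  # 3 epochs
```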

3. To get the class IDs of the predictions, either of these two approaches works:

y_pred = y_proba.argmax(axis=1)
y_pred = model.predict_classes(X_new)  # note: deprecated and removed in later TF versions; prefer argmax

To see the top k classes:

k = 3
top_k = np.argsort(-y_proba, axis=1)[:, :k]
top_k
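The np.argsort(-y_proba) trick sorts each row in descending probability order, and slicing [:, :k] keeps the k most likely class IDs. A small self-contained check on fabricated probabilities:

```python
import numpy as np

# Fake probability rows for 2 samples over 5 classes
y_proba = np.array([[0.1, 0.5, 0.2, 0.15, 0.05],
                    [0.3, 0.1, 0.4, 0.05, 0.15]])

k = 3
top_k = np.argsort(-y_proba, axis=1)[:, :k]
print(top_k.tolist())  # [[1, 2, 3], [2, 0, 4]]
```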

4. fit() can also take callbacks:

Callbacks are a collection of functions invoked at various stages of training; examples include TensorBoard, EarlyStopping, and ModelCheckpoint.

The model and compile steps are unchanged; just pass the callbacks to fit:

model = keras.models.Sequential([
    keras.layers.Flatten(input_shape=[28, 28]),
    keras.layers.Dense(300, activation="relu"),
    keras.layers.Dense(100, activation="relu"),
    keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
              optimizer="sgd", metrics=["accuracy"])
root_logdir = os.path.join(os.curdir, "my_logs")  # root_logdir must be defined before use
logdir = os.path.join(root_logdir, "run_{}".format(time.time()))
callbacks = [
    keras.callbacks.TensorBoard(logdir),
    keras.callbacks.EarlyStopping(patience=5),
    keras.callbacks.ModelCheckpoint("my_mnist_model.h5", save_best_only=True),
]
history = model.fit(X_train_scaled, y_train, epochs=50,
                    validation_data=(X_valid_scaled, y_valid),
                    callbacks=callbacks)
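EarlyStopping(patience=5) stops training once the monitored validation loss has not improved for 5 consecutive epochs. A pure-Python sketch of that patience logic, using a fabricated loss sequence (the real callback also supports min_delta, restore_best_weights, and other options):

```python
# Fabricated per-epoch validation losses: improves, then plateaus
val_losses = [0.9, 0.7, 0.6, 0.65, 0.64, 0.66, 0.61, 0.62]

patience = 3
best = float("inf")
wait = 0
stopped_epoch = None

for epoch, loss in enumerate(val_losses):
    if loss < best:   # improvement: remember it and reset the counter
        best = loss
        wait = 0
    else:             # no improvement: count toward patience
        wait += 1
        if wait >= patience:
            stopped_epoch = epoch
            break

print(stopped_epoch, best)  # 5 0.6
```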

Learning resources:

https://github.com/ageron/tf2_course/blob/master/01_neural_nets_with_keras.ipynb


Hello everyone!

I am Alice, the snail that never stops.

I love artificial intelligence and write a little practical machine learning material every day.

