Deep Learning with TensorFlow - 5. The tf.keras High-Level API

1147-柳同学


I. metrics

  • Create a new metric
acc_meter = metrics.Accuracy()
loss_meter = metrics.Mean()
  • update_state() - feed in data
loss_meter.update_state(loss)
acc_meter.update_state(y, pred)
  • result().numpy() - read the current result
print(step, 'loss:', loss_meter.result().numpy())
...
print(step, 'Evaluate Acc:', total_correct/total, acc_meter.result().numpy())
  • reset_states() - clear the accumulated state
if step % 100 == 0:
	print(step, 'loss:', loss_meter.result().numpy())
	# clear the data accumulated since the last report
	loss_meter.reset_states()

if step % 500 == 0:
	print(step, 'Evaluate Acc:', total_correct/total, acc_meter.result().numpy())
	acc_meter.reset_states()
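These meters are just running accumulators: update_state() adds observations, result() reports the aggregate so far, and reset_states() clears it. A minimal plain-Python sketch of those semantics (MeanMeter and AccuracyMeter are our stand-in names, not Keras classes):

```python
class MeanMeter:
    """Running average, analogous to metrics.Mean."""
    def __init__(self):
        self.total, self.count = 0.0, 0

    def update_state(self, value):
        self.total += value
        self.count += 1

    def result(self):
        return self.total / self.count

    def reset_states(self):
        self.total, self.count = 0.0, 0


class AccuracyMeter:
    """Running accuracy, analogous to metrics.Accuracy."""
    def __init__(self):
        self.correct, self.seen = 0, 0

    def update_state(self, y_true, y_pred):
        self.correct += sum(int(t == p) for t, p in zip(y_true, y_pred))
        self.seen += len(y_true)

    def result(self):
        return self.correct / self.seen

    def reset_states(self):
        self.correct, self.seen = 0, 0


loss_meter = MeanMeter()
for loss in [2.0, 1.0, 0.5, 0.5]:
    loss_meter.update_state(loss)
print(loss_meter.result())  # 1.0

acc_meter = AccuracyMeter()
acc_meter.update_state([1, 2, 3, 4], [1, 2, 0, 4])
print(acc_meter.result())  # 0.75
```

The real Keras metrics additionally handle tensors and batched updates, but the accumulate/read/reset lifecycle is the same.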

1. Hands-on example

import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

import tensorflow as tf
from tensorflow.keras import datasets, layers, optimizers, Sequential, metrics

# preprocessing function
def preprocess(x, y):
    x = tf.cast(x, dtype=tf.float32) / 255.
    y = tf.cast(y, dtype=tf.int32)

    return x, y


batchsz = 128
(x, y), (x_val, y_val) = datasets.mnist.load_data()
print('datasets:', x.shape, y.shape, x.min(), x.max())

db = tf.data.Dataset.from_tensor_slices((x, y))
db = db.map(preprocess).shuffle(60000).batch(batchsz).repeat(10)

ds_val = tf.data.Dataset.from_tensor_slices((x_val, y_val))
ds_val = ds_val.map(preprocess).batch(batchsz)

# build a multi-layer network
network = Sequential([layers.Dense(256, activation='relu'),
                      layers.Dense(128, activation='relu'),
                      layers.Dense(64, activation='relu'),
                      layers.Dense(32, activation='relu'),
                      layers.Dense(10)])
network.build(input_shape=(None, 28 * 28))
network.summary()

# optimizer (the `lr` argument name is deprecated in favor of `learning_rate`)
optimizer = optimizers.Adam(learning_rate=0.01)

# metrics: accuracy and loss
acc_meter = metrics.Accuracy()
loss_meter = metrics.Mean()


for step, (x, y) in enumerate(db):

    with tf.GradientTape() as tape:
        # [b, 28, 28] => [b, 784]
        x = tf.reshape(x, (-1, 28 * 28))
        # [b, 784] => [b, 10]
        out = network(x)
        # [b] => [b, 10]
        y_onehot = tf.one_hot(y, depth=10)
        # [b]
        loss = tf.reduce_mean(tf.losses.categorical_crossentropy(y_onehot, out, from_logits=True))

        loss_meter.update_state(loss)

    grads = tape.gradient(loss, network.trainable_variables)
    optimizer.apply_gradients(zip(grads, network.trainable_variables))

    if step % 100 == 0:
        print(step, 'loss:', loss_meter.result().numpy())
        loss_meter.reset_states()

    # evaluate
    if step % 500 == 0:
        total, total_correct = 0., 0
        acc_meter.reset_states()

        # iterate directly over ds_val so the outer `step` counter is not shadowed
        for x, y in ds_val:
            # [b, 28, 28] => [b, 784]
            x = tf.reshape(x, (-1, 28 * 28))
            # [b, 784] => [b, 10]
            out = network(x)

            # [b, 10] => [b]
            pred = tf.argmax(out, axis=1)
            pred = tf.cast(pred, dtype=tf.int32)
            # bool type
            correct = tf.equal(pred, y)
            # bool tensor => int tensor => numpy
            total_correct += tf.reduce_sum(tf.cast(correct, dtype=tf.int32)).numpy()
            total += x.shape[0]

            acc_meter.update_state(y, pred)

        print(step, 'Evaluate Acc:', total_correct / total, acc_meter.result().numpy())
datasets: (60000, 28, 28) (60000,) 0 255
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense (Dense)                multiple                  200960    
_________________________________________________________________
dense_1 (Dense)              multiple                  32896     
_________________________________________________________________
dense_2 (Dense)              multiple                  8256      
_________________________________________________________________
dense_3 (Dense)              multiple                  2080      
_________________________________________________________________
dense_4 (Dense)              multiple                  330       
=================================================================
Total params: 244,522
Trainable params: 244,522
Non-trainable params: 0
_________________________________________________________________
0 loss: 2.3095727
78 Evaluate Acc: 0.1032 0.1032
100 loss: 0.49836162
200 loss: 0.24281283
300 loss: 0.20814449
400 loss: 0.19040857
500 loss: 0.1471103
78 Evaluate Acc: 0.956 0.956
600 loss: 0.15806517
700 loss: 0.13501912
800 loss: 0.13778095
900 loss: 0.13771541
1000 loss: 0.11204889
78 Evaluate Acc: 0.9666 0.9666
1100 loss: 0.10818114
1200 loss: 0.10698662
1300 loss: 0.10993517
1400 loss: 0.10309881
1500 loss: 0.092004016
78 Evaluate Acc: 0.9658 0.9658
1600 loss: 0.09988546
1700 loss: 0.09517718
1800 loss: 0.102653
1900 loss: 0.10128655
2000 loss: 0.084593534
78 Evaluate Acc: 0.9696 0.9696
2100 loss: 0.089395694
2200 loss: 0.084114745
2300 loss: 0.08294669
2400 loss: 0.0765419
2500 loss: 0.07786285
78 Evaluate Acc: 0.9716 0.9716
2600 loss: 0.08739958
2700 loss: 0.08950595
2800 loss: 0.08106578
2900 loss: 0.06466477
3000 loss: 0.077431396
78 Evaluate Acc: 0.9707 0.9707
3100 loss: 0.08382876
3200 loss: 0.076059125
3300 loss: 0.07230227
3400 loss: 0.05853687
3500 loss: 0.07312769
78 Evaluate Acc: 0.9703 0.9703
3600 loss: 0.07384481
3700 loss: 0.08926408
3800 loss: 0.066682965
3900 loss: 0.05534654
4000 loss: 0.073996484
78 Evaluate Acc: 0.9741 0.9741
4100 loss: 0.066883035
4200 loss: 0.070191
4300 loss: 0.08581101
4400 loss: 0.07324687
4500 loss: 0.056211904
78 Evaluate Acc: 0.9751 0.9751
4600 loss: 0.05384313
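The loop above is the canonical custom-training recipe: compute the loss under a GradientTape, take gradients, and apply them with the optimizer. The same descend-the-gradient idea on a toy scalar problem, in plain Python (no TensorFlow; the quadratic loss is made up for illustration):

```python
def train(w, lr=0.1, steps=50):
    # minimize loss(w) = (w - 3)^2 by gradient descent; d(loss)/dw = 2*(w - 3)
    for _ in range(steps):
        grad = 2 * (w - 3.0)   # what tape.gradient computes for us in TF
        w = w - lr * grad      # what optimizer.apply_gradients does
    return w

w = train(0.0)
print(w)  # converges toward the minimizer 3.0
```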

II. compile & fit & evaluate & predict

1. compile - compiling the model

compile() configures training: it specifies the loss, the optimizer, and the metrics to track.

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import datasets,optimizers,losses,metrics,layers,Sequential

# data preprocessing
def preprocess(x, y):
    x = tf.cast(x, dtype=tf.float32) / 255.
    y = tf.cast(y, dtype=tf.int32)

    return x, y

batchsz = 128
(x, y), (x_val, y_val) = datasets.mnist.load_data()
print('datasets:', x.shape, y.shape, x.min(), x.max())
x = x.reshape((-1,28*28))
x_val = x_val.reshape((-1,28*28))
# load and preprocess the dataset
db = tf.data.Dataset.from_tensor_slices((x, y))
db = db.map(preprocess).shuffle(60000).batch(batchsz).repeat(10)

ds_val = tf.data.Dataset.from_tensor_slices((x_val, y_val))
ds_val = ds_val.map(preprocess).batch(batchsz, drop_remainder=True)

# build a multi-layer network
network = Sequential([layers.Dense(256, activation='relu'),
                      layers.Dense(128, activation='relu'),
                      layers.Dense(64, activation='relu'),
                      layers.Dense(32, activation='relu'),
                      layers.Dense(10)])  # 10 output logits, one per class
network.build(input_shape=(None, 28 * 28))

# compile the model
# the labels here are integer class ids (not one-hot), so use the sparse loss
network.compile(optimizer=optimizers.Adam(learning_rate=0.01),
                loss=losses.SparseCategoricalCrossentropy(from_logits=True),
                metrics=['accuracy'])
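Because the last Dense layer emits raw logits with no softmax, the loss is constructed with from_logits=True and applies the softmax internally. As a plain-Python sanity check (illustrative only, not the Keras implementation), categorical cross-entropy can be computed from logits by hand:

```python
import math

def softmax(logits):
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def categorical_crossentropy(y_onehot, logits):
    probs = softmax(logits)
    # cross-entropy: -sum(y * log(p)); only the true class contributes
    return -sum(y * math.log(p) for y, p in zip(y_onehot, probs))

logits = [2.0, 1.0, 0.1]    # raw network output (no softmax applied)
y_onehot = [1.0, 0.0, 0.0]  # true class is index 0
loss = categorical_crossentropy(y_onehot, logits)
print(loss)
```

Passing already-softmaxed probabilities to a loss built with from_logits=True (or vice versa) is a common silent bug, which is why keeping the network's head linear and letting the loss do the softmax is the recommended pattern.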

2. fit - training the model

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import datasets,optimizers,losses,metrics,layers,Sequential

# data preprocessing
def preprocess(x, y):
    x = tf.cast(x, dtype=tf.float32) / 255.
    y = tf.cast(y, dtype=tf.int32)

    return x, y

batchsz = 128
(x, y), (x_val, y_val) = datasets.mnist.load_data()
print('datasets:', x.shape, y.shape, x.min(), x.max())
x = x.reshape((-1,28*28))
x_val = x_val.reshape((-1,28*28))
# load and preprocess the dataset
db = tf.data.Dataset.from_tensor_slices((x, y))
db = db.map(preprocess).shuffle(60000).batch(batchsz).repeat(10)

ds_val = tf.data.Dataset.from_tensor_slices((x_val, y_val))
ds_val = ds_val.map(preprocess).batch(batchsz, drop_remainder=True)

# build a multi-layer network
network = Sequential([layers.Dense(256, activation='relu'),
                      layers.Dense(128, activation='relu'),
                      layers.Dense(64, activation='relu'),
                      layers.Dense(32, activation='relu'),
                      layers.Dense(10)])  # 10 output logits, one per class
network.build(input_shape=(None, 28 * 28))

# compile the model (integer labels, so use the sparse loss)
network.compile(optimizer=optimizers.Adam(learning_rate=0.01),
                loss=losses.SparseCategoricalCrossentropy(from_logits=True),
                metrics=['accuracy'])

# train the model (note: db already repeats 10x, so each "epoch" here is 10 passes over the data)
network.fit(db, epochs=100)

3. evaluate - evaluating the model

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import datasets,optimizers,losses,metrics,layers,Sequential
print(tf.__version__)
# data preprocessing
def preprocess(x, y):
    x = tf.cast(x, dtype=tf.float32) / 255.
    y = tf.cast(y, dtype=tf.int32)

    return x, y

batchsz = 128
(x, y), (x_val, y_val) = datasets.mnist.load_data()
print('datasets:', x.shape, y.shape, x.min(), x.max())
x = x.reshape((-1,28*28))
x_val = x_val.reshape((-1,28*28))
# load and preprocess the dataset
db = tf.data.Dataset.from_tensor_slices((x, y))
db = db.map(preprocess).shuffle(60000).batch(batchsz).repeat(10)

ds_val = tf.data.Dataset.from_tensor_slices((x_val, y_val))
ds_val = ds_val.map(preprocess).batch(batchsz, drop_remainder=True)

# build a multi-layer network
network = Sequential([layers.Dense(256, activation='relu'),
                      layers.Dense(128, activation='relu'),
                      layers.Dense(64, activation='relu'),
                      layers.Dense(32, activation='relu'),
                      layers.Dense(10)])  # 10 output logits, one per class
network.build(input_shape=(None, 28 * 28))

# compile the model (integer labels, so use the sparse loss)
network.compile(optimizer=optimizers.Adam(learning_rate=0.01),
                loss=losses.SparseCategoricalCrossentropy(from_logits=True),
                metrics=['accuracy'])

# train the model, running validation after every epoch
network.fit(db, epochs=10, validation_data=ds_val)
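Conceptually, evaluation just iterates over the validation batches and averages the per-batch loss (weighted by batch size) and accuracy. A plain-Python sketch of that accumulation (the `evaluate` helper and its tuple format are our illustration, not the Keras API):

```python
def evaluate(batches):
    """batches: iterable of (batch_loss, n_correct, batch_size) tuples."""
    total_loss, total_correct, total_seen = 0.0, 0, 0
    for batch_loss, n_correct, batch_size in batches:
        total_loss += batch_loss * batch_size  # weight the mean loss by batch size
        total_correct += n_correct
        total_seen += batch_size
    return total_loss / total_seen, total_correct / total_seen

# three made-up batches: two full batches of 128 and a final partial batch of 44
loss, acc = evaluate([(0.20, 120, 128), (0.10, 126, 128), (0.30, 40, 44)])
print(loss, acc)
```

Weighting by batch size matters when the last batch is smaller than the rest; a plain average of per-batch metrics would over-weight it slightly.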

4. predict - making predictions

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import datasets,optimizers,losses,metrics,layers,Sequential
from sklearn.metrics import accuracy_score
import numpy as np
print(tf.__version__)
# data preprocessing
def preprocess(x, y):
    x = tf.cast(x, dtype=tf.float32) / 255.
    y = tf.cast(y, dtype=tf.int32)

    return x, y

batchsz = 128
(x, y), (x_val, y_val) = datasets.mnist.load_data()
print('datasets:', x.shape, y.shape, x.min(), x.max())

x = x.reshape((-1,28*28))
y = tf.one_hot(y,depth=10)
x_val = x_val.reshape((-1,28*28))
y_val = tf.one_hot(y_val,depth=10)

# load and preprocess the dataset
db = tf.data.Dataset.from_tensor_slices((x, y))
db = db.map(preprocess).shuffle(60000).batch(batchsz).repeat(10)

ds_val = tf.data.Dataset.from_tensor_slices((x_val, y_val))
ds_val = ds_val.map(preprocess).batch(batchsz, drop_remainder=True)

# build a multi-layer network
network = Sequential([layers.Dense(256, activation='relu'),
                      layers.Dense(128, activation='relu'),
                      layers.Dense(64, activation='relu'),
                      layers.Dense(32, activation='relu'),
                      layers.Dense(10)])
network.build(input_shape=(None, 28 * 28))

# compile the model (labels are one-hot here, so the non-sparse loss is correct)
network.compile(optimizer=optimizers.Adam(learning_rate=0.01),
                loss=losses.CategoricalCrossentropy(from_logits=True),
                metrics=['accuracy'])

# train the model; validation_freq=2 runs validation every 2 epochs
network.fit(db, epochs=5, validation_data=ds_val, validation_freq=2)
network.summary()

# evaluate on the validation set
network.evaluate(ds_val)

# predict
# normalize manually here, since x_val has not gone through the preprocess() pipeline
pred = network.predict(x_val.astype('float32') / 255.)

y_true = tf.argmax(y_val,axis=1)
y_pred = tf.argmax(pred,axis=1)
correct = tf.equal(y_true,y_pred)
total_correct = tf.reduce_sum(tf.cast(correct,dtype=np.int32)).numpy()
print(total_correct/x_val.shape[0])
Epoch 1/5
4690/4690 [==============================] - 16s 3ms/step - loss: 0.1098 - accuracy: 0.9695
Epoch 2/5
4690/4690 [==============================] - 18s 4ms/step - loss: 0.0531 - accuracy: 0.9873 - val_loss: 0.1227 - val_accuracy: 0.9776
Epoch 3/5
4690/4690 [==============================] - 19s 4ms/step - loss: 0.0448 - accuracy: 0.9902
Epoch 4/5
4690/4690 [==============================] - 18s 4ms/step - loss: 0.0376 - accuracy: 0.9923 - val_loss: 0.1778 - val_accuracy: 0.9763
Epoch 5/5
4690/4690 [==============================] - 19s 4ms/step - loss: 0.0368 - accuracy: 0.9921
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense (Dense)                (None, 256)               200960    
_________________________________________________________________
dense_1 (Dense)              (None, 128)               32896     
_________________________________________________________________
dense_2 (Dense)              (None, 64)                8256      
_________________________________________________________________
dense_3 (Dense)              (None, 32)                2080      
_________________________________________________________________
dense_4 (Dense)              (None, 10)                330       
=================================================================
Total params: 244,522
Trainable params: 244,522
Non-trainable params: 0
_________________________________________________________________
78/78 [==============================] - 0s 3ms/step - loss: 0.1899 - accuracy: 0.9758
tf.Tensor([7 2 1 ... 4 5 6], shape=(10000,), dtype=int64)
tf.Tensor([7 2 1 ... 4 5 6], shape=(10000,), dtype=int64)
0.9737
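The post-processing after predict is a row-wise argmax over the logits, compared against the argmax of the one-hot labels. The same arithmetic in plain Python (illustrative made-up numbers, no TensorFlow):

```python
def argmax(row):
    # index of the largest entry, like tf.argmax along axis=1 for one row
    return max(range(len(row)), key=row.__getitem__)

pred_logits = [[0.1, 2.5, 0.3],   # -> class 1
               [3.0, 0.2, 0.1],   # -> class 0
               [0.0, 0.1, 4.2]]   # -> class 2
y_onehot = [[0, 1, 0],
            [0, 0, 1],
            [0, 0, 1]]

y_pred = [argmax(r) for r in pred_logits]
y_true = [argmax(r) for r in y_onehot]
accuracy = sum(p == t for p, t in zip(y_pred, y_true)) / len(y_true)
print(y_pred, y_true, accuracy)
```

Note that argmax of raw logits and argmax of softmaxed probabilities give the same class, since softmax is monotonic; that is why the softmax can be skipped entirely at prediction time.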

III. Custom layers and networks

1. keras.Sequential

# build a multi-layer network
network = Sequential([layers.Dense(256, activation='relu'),
                      layers.Dense(128, activation='relu'),
                      layers.Dense(64, activation='relu'),
                      layers.Dense(32, activation='relu'),
                      layers.Dense(10)])
# create the network's weight variables
network.build(input_shape=(None, 28 * 28))

2. keras.Model / keras.layers.Layer

Subclass keras.layers.Layer to implement a custom layer; your own logic goes in the call() method

  • __init__
    
  • call()
    

Subclass keras.Model to implement a custom network; it is typically composed of smaller building blocks that themselves subclass keras.layers.Layer

  • __init__
    
  • call()
    
  • Model: compile / fit / evaluate
    

3. Custom layers

# custom Dense layer
class MyDense(layers.Layer):
	# initializer
	def __init__(self, inp_dim, outp_dim):
		# call the parent class initializer
		super(MyDense, self).__init__()
		# add_weight creates the two Variables and registers them with the layer,
		# so when layers are combined into a container the container tracks and
		# manages them automatically; no manual parameter bookkeeping is needed.
		# It is implemented in the parent class, so we can call it directly.
		# (older code uses the deprecated name add_variable)
		self.kernel = self.add_weight('w', [inp_dim, outp_dim])
		self.bias = self.add_weight('b', [outp_dim])

	def call(self, inputs, training=None):
		out = inputs @ self.kernel + self.bias
		return out
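What MyDense computes in call() is just the affine map out = inputs @ kernel + bias. Worked through on a tiny made-up example in plain Python (no TensorFlow; the weights are chosen by hand for illustration):

```python
def dense_forward(x, kernel, bias):
    """x: [batch, in_dim], kernel: [in_dim, out_dim], bias: [out_dim]."""
    out = []
    for row in x:
        out.append([
            sum(row[i] * kernel[i][j] for i in range(len(row))) + bias[j]
            for j in range(len(bias))
        ])
    return out

x = [[1.0, 2.0]]                   # one sample, in_dim = 2
kernel = [[1.0, 0.0], [0.0, 1.0]]  # identity weights, out_dim = 2
bias = [0.5, -0.5]
print(dense_forward(x, kernel, bias))  # [[1.5, 1.5]]
```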

4. Custom networks

# custom Dense layer (same as above)
class MyDense(layers.Layer):
	def __init__(self, inp_dim, outp_dim):
		# call the parent class initializer
		super(MyDense, self).__init__()
		# add_weight registers these Variables with the layer so a parent
		# container can track and manage them automatically
		# (add_variable is the deprecated older name)
		self.kernel = self.add_weight('w', [inp_dim, outp_dim])
		self.bias = self.add_weight('b', [outp_dim])

	def call(self, inputs, training=None):
		out = inputs @ self.kernel + self.bias
		return out

# use the custom layer to build a custom 5-layer network
class MyModel(keras.Model):
	def __init__(self):
		super(MyModel, self).__init__()
		self.fc1 = MyDense(28*28, 256)
		self.fc2 = MyDense(256, 128)
		self.fc3 = MyDense(128, 64)
		self.fc4 = MyDense(64, 32)
		self.fc5 = MyDense(32, 10)

	# define the forward pass
	def call(self, inputs, training=None):
		x = self.fc1(inputs)
		x = tf.nn.relu(x)
		x = self.fc2(x)
		x = tf.nn.relu(x)
		x = self.fc3(x)
		x = tf.nn.relu(x)
		x = self.fc4(x)
		x = tf.nn.relu(x)
		x = self.fc5(x)
		return x
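Each hidden layer's output is passed through tf.nn.relu; the activation itself is just an elementwise max(0, x). In plain Python:

```python
def relu(values):
    # elementwise rectified linear unit: negative entries become 0
    return [max(0.0, v) for v in values]

print(relu([-2.0, -0.5, 0.0, 1.5]))  # [0.0, 0.0, 0.0, 1.5]
```

Note the final fc5 layer is left without an activation, so the network outputs raw logits, matching the from_logits=True losses used throughout this article.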

5. Hands-on custom network: handwritten digit recognition (MNIST)

import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

import tensorflow as tf
from tensorflow.keras import datasets, layers, optimizers, Sequential, metrics
from tensorflow import keras


# data preprocessing
def preprocess(x, y):
    """
    x is a single image, not a batch
    """
    x = tf.cast(x, dtype=tf.float32) / 255.
    x = tf.reshape(x, [28 * 28])
    y = tf.cast(y, dtype=tf.int32)
    y = tf.one_hot(y, depth=10)
    return x, y


batchsz = 128
# load the dataset
(x, y), (x_val, y_val) = datasets.mnist.load_data()
print('datasets:', x.shape, y.shape, x.min(), x.max())

db = tf.data.Dataset.from_tensor_slices((x, y))
db = db.map(preprocess).shuffle(60000).batch(batchsz)

ds_val = tf.data.Dataset.from_tensor_slices((x_val, y_val))
ds_val = ds_val.map(preprocess).batch(batchsz)

sample = next(iter(db))
print(sample[0].shape, sample[1].shape)

# first build the network with Sequential, for comparison
network = Sequential([layers.Dense(256, activation='relu'),
                      layers.Dense(128, activation='relu'),
                      layers.Dense(64, activation='relu'),
                      layers.Dense(32, activation='relu'),
                      layers.Dense(10)])
network.build(input_shape=(None, 28 * 28))
network.summary()

# now build the same multi-layer network by hand
# custom layer
class MyDense(layers.Layer):

    def __init__(self, inp_dim, outp_dim):
        super(MyDense, self).__init__()

        # add_weight registers the variables with the layer (add_variable is deprecated)
        self.kernel = self.add_weight('w', [inp_dim, outp_dim])
        self.bias = self.add_weight('b', [outp_dim])

    def call(self, inputs, training=None):
        out = inputs @ self.kernel + self.bias

        return out

# custom network
class MyModel(keras.Model):

    def __init__(self):
        super(MyModel, self).__init__()

        self.fc1 = MyDense(28 * 28, 256)
        self.fc2 = MyDense(256, 128)
        self.fc3 = MyDense(128, 64)
        self.fc4 = MyDense(64, 32)
        self.fc5 = MyDense(32, 10)

    def call(self, inputs, training=None):
        x = self.fc1(inputs)
        x = tf.nn.relu(x)
        x = self.fc2(x)
        x = tf.nn.relu(x)
        x = self.fc3(x)
        x = tf.nn.relu(x)
        x = self.fc4(x)
        x = tf.nn.relu(x)
        x = self.fc5(x)

        return x


network = MyModel()

network.compile(optimizer=optimizers.Adam(learning_rate=0.01),
                loss=tf.losses.CategoricalCrossentropy(from_logits=True),
                metrics=['accuracy']
                )

network.fit(db, epochs=5, validation_data=ds_val,
            validation_freq=2)

network.evaluate(ds_val)

sample = next(iter(ds_val))
x = sample[0]
y = sample[1]  # one-hot
pred = network.predict(x)  # [b, 10]
# convert back to number 
y = tf.argmax(y, axis=1)
pred = tf.argmax(pred, axis=1)

print(pred)
print(y)

6. Hands-on custom network: CIFAR-10

import os

os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import datasets, layers, optimizers, Sequential, metrics

# data preprocessing
def preprocess(x, y):
    # scale pixel values to [-1, 1]
    x = 2 * tf.cast(x, dtype=tf.float32) / 255. - 1
    y = tf.cast(y, dtype=tf.int32)
    return x, y

batchsz = 128
# load the dataset
# x: [b,32,32,3]  y: [b,1]
(x,y),(x_val,y_val) = datasets.cifar10.load_data()

# squeeze out the extra dimension: [b,1] -> [b]
y = tf.squeeze(y)
y_val = tf.squeeze(y_val)

y = tf.one_hot(y,depth=10)
y_val = tf.one_hot(y_val,depth=10)
print('datasets:',x.shape,y.shape,x.min(),x.max())
# datasets: (50000, 32, 32, 3) (50000, 10) 0 255

# build the train and test datasets
train_db = tf.data.Dataset.from_tensor_slices((x,y))
train_db = train_db.map(preprocess).shuffle(10000).batch(batchsz)
test_db = tf.data.Dataset.from_tensor_slices((x_val,y_val))
test_db = test_db.map(preprocess).batch(batchsz)

sample = next(iter(train_db))
print('batch:',sample[0].shape,sample[1].shape)

# define a custom layer
# replaces the standard layers.Dense
class MyDense(layers.Layer):
    def __init__(self, inp_dim, outp_dim):
        super(MyDense, self).__init__()

        self.kernel = self.add_weight('w', [inp_dim, outp_dim])
        # this variant deliberately omits the bias term
        # self.bias = self.add_weight('b', [outp_dim])

    # forward pass
    def call(self, input, training=None):
        x = input @ self.kernel
        return x

# build a custom 5-layer network
class MyNetwork(keras.Model):
    def __init__(self):
        super(MyNetwork,self).__init__()

        # wider layers increase capacity, but also make overfitting more likely
        self.fc1 = MyDense(32*32*3,256)
        self.fc2 = MyDense(256,128)
        self.fc3 = MyDense(128,64)
        self.fc4 = MyDense(64,32)
        self.fc5 = MyDense(32,10)

    def call(self,inputs,training=None):
        """

        :param inputs: [b,32,32,3]
        :param training:
        :return:
        """
        # flatten
        x = tf.reshape(inputs,[-1,32*32*3])
        x = self.fc1(x)
        x = tf.nn.relu(x)
        x = self.fc2(x)
        x = tf.nn.relu(x)
        x = self.fc3(x)
        x = tf.nn.relu(x)
        x = self.fc4(x)
        x = tf.nn.relu(x)
        # x[b,32]->[b,10]
        x = self.fc5(x)
        return x

network = MyNetwork()
network.compile(optimizer=optimizers.Adam(learning_rate=1e-3),
                loss=tf.losses.CategoricalCrossentropy(from_logits=True),
                metrics=['accuracy'])

network.fit(train_db, epochs=15, validation_data=test_db, validation_freq=1)

# evaluate, then save only the model weights
network.evaluate(test_db)
network.save_weights('ckpt/weights.ckpt')
del network
print('saved to ckpt/weights.ckpt')

# rebuild the network and recompile before restoring the weights
network = MyNetwork()
network.compile(optimizer=optimizers.Adam(learning_rate=1e-3),
                loss=tf.losses.CategoricalCrossentropy(from_logits=True),
                metrics=['accuracy'])

# load the saved weights
network.load_weights('ckpt/weights.ckpt')
print('load weights from file')
network.evaluate(test_db)
Epoch 14/15
391/391 [==============================] - 4s 10ms/step - loss: 0.5697 - accuracy: 0.7956 - val_loss: 1.9200 - val_accuracy: 0.5195
Epoch 15/15
391/391 [==============================] - 4s 10ms/step - loss: 0.5200 - accuracy: 0.8126 - val_loss: 2.0124 - val_accuracy: 0.5189
79/79 [==============================] - 0s 6ms/step - loss: 2.0124 - accuracy: 0.5189
saved to ckpt/weights.ckpt
load weights from file
79/79 [==============================] - 1s 7ms/step - loss: 2.0124 - accuracy: 0.5189
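A quick check of the preprocess step used above: 2 * x / 255 - 1 maps uint8 pixel values from [0, 255] onto [-1, 1]. The endpoint arithmetic in plain Python:

```python
def scale(pixel):
    # map a pixel value in [0, 255] onto [-1.0, 1.0]
    return 2 * pixel / 255.0 - 1

print(scale(0), scale(255), scale(127.5))  # -1.0 1.0 0.0
```

Centering the inputs around zero this way (rather than the [0, 1] scaling used in the MNIST examples) is a common choice that tends to play well with zero-centered weight initializations.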

IV. Saving and loading models

There are three ways to save and load a model:

  • save / load weights
    the cleanest, most lightweight option: only the network parameters are saved, so the source code that builds the architecture must still be available
  • save / load entire model
    the simplest, most heavyweight option: the complete model state is saved and can be restored as-is
  • saved_model
    a generic serialization format, comparable to ONNX export in the PyTorch world, suited to production deployment; a model written in Python can then be parsed and served from C++
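Why weights-only saving needs the source code can be seen with a toy analogy in plain Python (TinyModel and its JSON format are made up for illustration, not a Keras API): the file holds only numbers, so the class that defines the architecture must be reconstructed in code before the parameters can be restored.

```python
import json
import os
import tempfile

class TinyModel:
    """Stand-in for a network: architecture lives in code, weights in data."""
    def __init__(self):
        self.weights = {"w": [0.0, 0.0], "b": 0.0}

    def save_weights(self, path):
        with open(path, "w") as f:
            json.dump(self.weights, f)  # only the parameters are written

    def load_weights(self, path):
        with open(path) as f:
            self.weights = json.load(f)

model = TinyModel()
model.weights = {"w": [1.5, -2.0], "b": 0.25}  # pretend these were trained
path = os.path.join(tempfile.mkdtemp(), "weights.json")
model.save_weights(path)
del model

model = TinyModel()       # the architecture must be rebuilt from source code...
model.load_weights(path)  # ...and only then can the parameters be restored
print(model.weights)
```

This mirrors the Keras examples in this section: after save_weights / del, the Sequential or MyNetwork model is reconstructed and recompiled before load_weights is called.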

1.save / load weights

import os

os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

import tensorflow as tf
from tensorflow.keras import datasets, layers, optimizers, Sequential, metrics


def preprocess(x, y):
    """
    x is a simple image, not a batch
    """
    x = tf.cast(x, dtype=tf.float32) / 255.
    x = tf.reshape(x, [28 * 28])
    y = tf.cast(y, dtype=tf.int32)
    y = tf.one_hot(y, depth=10)
    return x, y


batchsz = 128
(x, y), (x_val, y_val) = datasets.mnist.load_data()
print('datasets:', x.shape, y.shape, x.min(), x.max())

db = tf.data.Dataset.from_tensor_slices((x, y))
db = db.map(preprocess).shuffle(60000).batch(batchsz)

ds_val = tf.data.Dataset.from_tensor_slices((x_val, y_val))
ds_val = ds_val.map(preprocess).batch(batchsz)

sample = next(iter(db))
print(sample[0].shape, sample[1].shape)

network = Sequential([layers.Dense(256, activation='relu'),
                      layers.Dense(128, activation='relu'),
                      layers.Dense(64, activation='relu'),
                      layers.Dense(32, activation='relu'),
                      layers.Dense(10)])
network.build(input_shape=(None, 28 * 28))
network.summary()

network.compile(optimizer=optimizers.Adam(learning_rate=0.01),
                loss=tf.losses.CategoricalCrossentropy(from_logits=True),
                metrics=['accuracy']
                )

network.fit(db, epochs=3, validation_data=ds_val, validation_freq=2)

network.evaluate(ds_val)

# save only the network weights
network.save_weights('weights.ckpt')
print('saved weights.')
del network

# rebuild the same multi-layer network
network = Sequential([layers.Dense(256, activation='relu'),
                      layers.Dense(128, activation='relu'),
                      layers.Dense(64, activation='relu'),
                      layers.Dense(32, activation='relu'),
                      layers.Dense(10)])
network.compile(optimizer=optimizers.Adam(learning_rate=0.01),
                loss=tf.losses.CategoricalCrossentropy(from_logits=True),
                metrics=['accuracy']
                )
# load the saved weights (restoration is deferred until the network is built)
network.load_weights('weights.ckpt')
print('loaded weights!')
network.evaluate(ds_val)
Epoch 2/3
469/469 [==============================] - 3s 7ms/step - loss: 0.1344 - accuracy: 0.9629 - val_loss: 0.1209 - val_accuracy: 0.9648
Epoch 3/3
469/469 [==============================] - 3s 6ms/step - loss: 0.1082 - accuracy: 0.9701
79/79 [==============================] - 0s 5ms/step - loss: 0.1372 - accuracy: 0.9664
saved weights.
loaded weights!
79/79 [==============================] - 1s 6ms/step - loss: 0.1372 - accuracy: 0.9664
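Because save_weights stores parameters only, the network that loads them must be built with exactly the same layer structure, as in the script above. A minimal sketch (layer sizes and the file name are arbitrary):

```python
import tensorflow as tf
from tensorflow.keras import layers, Sequential

# original network
net = Sequential([layers.Dense(3, activation='relu'), layers.Dense(2)])
net.build(input_shape=(None, 4))
net.save_weights('demo.weights.h5')

# a fresh network with the same architecture absorbs the checkpoint;
# a mismatched layer structure would fail to load
clone = Sequential([layers.Dense(3, activation='relu'), layers.Dense(2)])
clone.build(input_shape=(None, 4))
clone.load_weights('demo.weights.h5')
```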

2.save / load entire model

import os

os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
import tensorflow as tf
from tensorflow.keras import datasets, layers, optimizers, Sequential, metrics

# preprocess: normalize the image and one-hot encode the label
def preprocess(x, y):
    """
    x is a simple image, not a batch
    """
    x = tf.cast(x, dtype=tf.float32) / 255.
    x = tf.reshape(x, [28 * 28])
    y = tf.cast(y, dtype=tf.int32)
    y = tf.one_hot(y, depth=10)
    return x, y


batchsz = 128
# load the dataset
(x, y), (x_val, y_val) = datasets.mnist.load_data()
print('datasets:', x.shape, y.shape, x.min(), x.max())

db = tf.data.Dataset.from_tensor_slices((x, y))
db = db.map(preprocess).shuffle(60000).batch(batchsz)

ds_val = tf.data.Dataset.from_tensor_slices((x_val, y_val))
ds_val = ds_val.map(preprocess).batch(batchsz)

sample = next(iter(db))
print(sample[0].shape, sample[1].shape)

network = Sequential([layers.Dense(256, activation='relu'),
                      layers.Dense(128, activation='relu'),
                      layers.Dense(64, activation='relu'),
                      layers.Dense(32, activation='relu'),
                      layers.Dense(10)])
network.build(input_shape=(None, 28 * 28))
network.summary()

network.compile(optimizer=optimizers.Adam(learning_rate=0.01),
                loss=tf.losses.CategoricalCrossentropy(from_logits=True),
                metrics=['accuracy']
                )

network.fit(db, epochs=3, validation_data=ds_val, validation_freq=2)

network.evaluate(ds_val)

# save the entire model (architecture, weights, and optimizer state)
network.save('model.h5')
print('saved total model.')
del network

# reload the entire model; the default (compile=True) also restores the
# training configuration, so evaluate() can be called directly
network = tf.keras.models.load_model('model.h5')
print('loaded model from file.')

x_val = tf.cast(x_val, dtype=tf.float32) / 255.
x_val = tf.reshape(x_val, [-1, 28 * 28])
y_val = tf.cast(y_val, dtype=tf.int32)
y_val = tf.one_hot(y_val, depth=10)

ds_val = tf.data.Dataset.from_tensor_slices((x_val, y_val)).batch(128)
network.evaluate(ds_val)
datasets: (60000, 28, 28) (60000,) 0 255
(128, 784) (128, 10)
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense (Dense)                (None, 256)               200960    
_________________________________________________________________
dense_1 (Dense)              (None, 128)               32896     
_________________________________________________________________
dense_2 (Dense)              (None, 64)                8256      
_________________________________________________________________
dense_3 (Dense)              (None, 32)                2080      
_________________________________________________________________
dense_4 (Dense)              (None, 10)                330       
=================================================================
Total params: 244,522
Trainable params: 244,522
Non-trainable params: 0
_________________________________________________________________
Epoch 1/3
469/469 [==============================] - 1s 2ms/step - loss: 0.2723 - accuracy: 0.9182
Epoch 2/3
469/469 [==============================] - 1s 3ms/step - loss: 0.1363 - accuracy: 0.9628 - val_loss: 0.1280 - val_accuracy: 0.9637
Epoch 3/3
469/469 [==============================] - 1s 2ms/step - loss: 0.1101 - accuracy: 0.9692
79/79 [==============================] - 0s 3ms/step - loss: 0.1372 - accuracy: 0.9673
saved total model.
loaded model from file.
79/79 [==============================] - 0s 1ms/step - loss: 0.1372 - accuracy: 0.9673
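If the entire model is instead reloaded with compile=False (useful when the saved training configuration refers to custom objects), evaluate() and fit() only work after an explicit compile() call. A small sketch with hypothetical names and arbitrary shapes:

```python
import tensorflow as tf
from tensorflow.keras import layers, Sequential

model = Sequential([layers.Dense(2)])
model.build(input_shape=(None, 3))
model.compile(optimizer='adam', loss='mse')
model.save('demo.h5')

# compile=False skips restoring the training configuration,
# so the model must be compiled again before evaluate()/fit()
reloaded = tf.keras.models.load_model('demo.h5', compile=False)
reloaded.compile(optimizer='adam', loss='mse')
```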

3.saved_model

# saved_model serializes to a directory, not a single file
tf.saved_model.save(m, '/tmp/saved_model/')

imported = tf.saved_model.load('/tmp/saved_model/')
f = imported.signatures['serving_default']
print(f(x=tf.ones([1, 28, 28, 3])))
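A self-contained round trip of the fragment above might look like the following sketch; the directory name is arbitrary, and passing an explicit signature guarantees that a 'serving_default' entry exists:

```python
import tensorflow as tf
from tensorflow.keras import layers, Sequential

path = 'saved_model_demo'  # any writable directory

m = Sequential([layers.Dense(10)])
m.build(input_shape=(None, 28 * 28))

# tracing the call with a fixed input signature makes it exportable
@tf.function(input_signature=[tf.TensorSpec([None, 28 * 28], tf.float32)])
def serve(x):
    return m(x)

tf.saved_model.save(m, path, signatures=serve)

imported = tf.saved_model.load(path)
f = imported.signatures['serving_default']
out = f(x=tf.ones([1, 28 * 28]))  # a dict keyed by the output tensor name
```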

Author: 1147-柳同学. Original post "深度学习TF—5.tf.kears高层API", published 2021-02-24 on 拜师资源博客. Reproduction requires permission and a hyperlinked credit to the source.
