Official API: https://tensorflow.google.cn/versions/r1.15/api_docs/python/tf/saved_model/
Official guides:
https://github.com/tensorflow/docs/blob/master/site/en/r1/guide/saved_model.md#build-and-load-a-savedmodel
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/saved_model/README.md (strongly recommended reading)
1. saved_model
The previous section covered TensorFlow's model-saving mechanisms: we can already convert all variables to constants and serialize the graph_def to a single pb file for online deployment and inference. But that is not enough. A production-grade model ideally has standardized, well-known input and output node names; ideally keeps its variables trainable (i.e. is saved as a MetaGraphDef carrying a saver_def); ideally bundles external resources (assets); and ideally can hold several MetaGraphDefs in one artifact, e.g. one graph for training and one for inference. Hence the official SavedModel format:
SavedModel, the universal serialization format for TensorFlow models.
SavedModel provides a language-neutral format to save machine-learned models that is recoverable and hermetic. It enables higher-level systems and tools to produce, consume and transform TensorFlow models.
How SavedModel works:
SavedModel manages and builds upon existing TensorFlow primitives such as TensorFlow Saver and MetaGraphDef. Specifically, SavedModel wraps a TensorFlow Saver. The Saver is primarily used to generate the variable checkpoints. SavedModel will replace the existing TensorFlow Inference Model Format as the canonical way to export TensorFlow graphs for serving.
In other words, SavedModel wraps the storage mechanisms from the previous section (Saver and MetaGraphDef) into a package suited to production serving.
1.1 Standardized constants for production models
- A SavedModel can hold multiple graphs, each marked with tags via tf.saved_model.tag_constants.xxx. Three tags are predefined: SERVING, TRAINING and GPU, and tags can be combined.
- Signatures mark a model's input and output operations, and tf.saved_model.signature_constants.xxx predefines a set of labels so that input/output node names can be standardized across models.
1.2 On-disk layout after export
Note that the pb file stores multiple meta-graph definitions, i.e. several MetaGraphDefs, corresponding to the .meta file from the previous section (except that a .meta file holds only one meta graph). The variables/ directory holds the concrete variable values written by the Saver, including the .data and .index files.
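For reference, an exported SavedModel directory typically looks like this (the assets/ folder only appears when asset files are attached):

```
saved_model_dir/
├── saved_model.pb        # SavedModel proto: one or more MetaGraphDefs
├── variables/
│   ├── variables.data-00000-of-00001   # variable values written by the Saver
│   └── variables.index
└── assets/               # optional external resources, e.g. vocabulary files
```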
1.3 Source walkthrough
Let's look at the official tf.saved_model.simple_save():
from tensorflow.python.framework import ops
from tensorflow.python.saved_model import builder, signature_constants, signature_def_utils, tag_constants

def simple_save(session, export_dir, inputs, outputs, legacy_init_op=None):
    # Each MetaGraphDef contains one signature dict: map<string, SignatureDef> signature_def = 5;
    # SignatureDef definition: https://github.com/tensorflow/tensorflow/blob/bab74a15d9ad6bb9066b3e31d601d6a45b1cb221/tensorflow/core/protobuf/meta_graph.proto#L313
    # In short, a SignatureDef names a set of inputs and outputs via aliases
    # (map<string, TensorInfo> inputs), plus the signature's own method_name.
    signature_def_map = {
        signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:  # "serving_default"
            signature_def_utils.predict_signature_def(inputs, outputs)  # returns a SignatureDef carrying a method_name
    }
    b = builder.SavedModelBuilder(export_dir)  # instantiate the builder (builder design pattern); also instantiates self._saved_model
    # The SavedModel proto, much like a saver_def, is a description of what goes into the pb file.
    # It can contain several meta graphs; it is a further wrapper around MetaGraphDef.
    # (A MetaGraphDef is the protocol buffer representation of a MetaGraph.)
    # Adds the current meta graph to the SavedModel and saves variables
    b.add_meta_graph_and_variables(
        session,
        tags=[tag_constants.SERVING],  # every meta graph must be marked with a unique set of tags
        signature_def_map=signature_def_map,  # this meta graph's signature_def_map
        assets_collection=ops.get_collection(ops.GraphKeys.ASSET_FILEPATHS),
        main_op=legacy_init_op,
        clear_devices=True)
    b.save()
To summarize the steps:
- First build the signature_def_map, a dict of SignatureDefs that will later live inside the meta_graph_def (map<string, SignatureDef> signature_def); it establishes the standardized input and output labels.
- Instantiate a SavedModelBuilder (builder design pattern). Construction also instantiates self._saved_model, which is itself a pb message that can wrap multiple meta graphs. Its definition: https://github.com/tensorflow/tensorflow/blob/r1.15/tensorflow/core/protobuf/saved_model.proto
- Call builder.add_meta_graph_and_variables to add a meta graph to the SavedModel proto. This also builds a Saver to store the variables (which is why the variables/ folder appears on disk even before builder.save() is called) and exports the assets. The tags are appended to the meta graph's repeated string tags field to uniquely identify it, and signature_def_map receives the map built earlier.
- Note: add_meta_graph_and_variables must be called first; any further meta graphs can only be added via add_meta_graph, and those graphs share the variables and assets saved the first time. Finally, builder.save(as_text=False) persists the pb model to disk.
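For reference, the two proto messages involved, abridged from saved_model.proto and meta_graph.proto:

```proto
// saved_model.proto
message SavedModel {
  int64 saved_model_schema_version = 1;   // 1 in TF 1.x
  repeated MetaGraphDef meta_graphs = 2;  // one entry per add_meta_graph(_and_variables) call
}

// meta_graph.proto (excerpt)
message SignatureDef {
  map<string, TensorInfo> inputs = 1;   // alias -> tensor metadata
  map<string, TensorInfo> outputs = 2;
  string method_name = 3;               // e.g. "tensorflow/serving/predict"
}
```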
1.4 Inspecting the persisted result
Run the code below with as_text=True passed to the save() method to inspect the contents of the persisted .pbtxt file:
2. saved_model in practice
Official mnist model: https://github.com/tensorflow/serving/blob/r1.15/tensorflow_serving/example/mnist_saved_model.py
2.1 Building the signature_def_map and saved_model by hand:
# https://www.cnblogs.com/mbcbyq-2137/p/10044837.html
import tensorflow as tf
from tensorflow import saved_model as sm

tf.reset_default_graph()

# First define a trivially simple graph
X = tf.placeholder(tf.float32, shape=(3, ))
scale = tf.Variable([10, 11, 12], dtype=tf.float32)
y = tf.multiply(X, scale)

# Run it in a session
with tf.Session() as sess:
    sess.run(tf.initializers.global_variables())
    value = sess.run(y, feed_dict={X: [1., 2., 3.]})
    print(value)

    # Prepare to export the model
    path = 'saved_model_test/'
    builder = sm.builder.SavedModelBuilder(path)

    # Build the TensorInfo protobufs for the tensors we will need to recover in a new session
    X_TensorInfo = sm.utils.build_tensor_info(X)
    scale_TensorInfo = sm.utils.build_tensor_info(scale)
    y_TensorInfo = sm.utils.build_tensor_info(y)

    # Build the SignatureDef protobuf
    SignatureDef = sm.signature_def_utils.build_signature_def(
        inputs={'input_1': X_TensorInfo, 'input_2': scale_TensorInfo},  # could use sm.signature_constants.PREDICT_INPUTS
        outputs={'output_1': y_TensorInfo},  # could use sm.signature_constants.PREDICT_OUTPUTS
        method_name=sm.signature_constants.PREDICT_METHOD_NAME  # "tensorflow/serving/predict"
    )

    # Write the graph, variables and related info into a MetaGraphDef protobuf.
    # The tags and the keys of signature_def_map may be arbitrary strings, but using
    # TF's predefined constants keeps things standardized.
    builder.add_meta_graph_and_variables(
        sess, tags=[sm.tag_constants.TRAINING],
        signature_def_map={sm.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: SignatureDef})

    # Write the MetaGraphDef to disk
    builder.save(as_text=True)
Build flow:
- Build the TensorInfo protos, which carry tensor metadata (a Variable also maintains an internal tensor).
- Use the TensorInfos to build a SignatureDef. Since the underlying field is map<string, TensorInfo> inputs, the inputs/outputs attributes are dicts whose keys for each TensorInfo you choose yourself. The keys may be signature_constants.PREDICT_INPUTS etc., and method_name may be signature_constants.PREDICT_METHOD_NAME etc. Note that different SignatureDefs may share the same method_name: the method is an extension marker, not a unique identifier for the SignatureDef.
- With the SignatureDef built, assemble a dict of SignatureDefs to pass into the meta graph. One meta graph may contain several SignatureDefs, each identified by its key; the default key is signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY.
- The saved_model also needs a unique set of tags to identify this meta graph, stored on the meta graph itself; tag_constants.TRAINING etc. can be used.
- Finally, persist the saved_model with builder.save().
2.2 Loading the saved_model
import tensorflow as tf
from tensorflow import saved_model as sm

tf.reset_default_graph()

# We need a session object to restore the model into
with tf.Session() as sess:
    path = 'saved_model_test/'
    MetaGraphDef = sm.loader.load(sess, tags=[sm.tag_constants.TRAINING], export_dir=path)

    # Extract the SignatureDef protobuf
    SignatureDef_map = MetaGraphDef.signature_def
    SignatureDef = SignatureDef_map[sm.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY]

    # Extract the TensorInfo protobufs for the three tensors
    X_TensorInfo = SignatureDef.inputs['input_1']
    scale_TensorInfo = SignatureDef.inputs['input_2']
    y_TensorInfo = SignatureDef.outputs['output_1']

    # Resolve the concrete Tensors.
    # The graph argument of .get_tensor_from_tensor_info() may be omitted,
    # in which case TensorFlow uses the default graph.
    X = sm.utils.get_tensor_from_tensor_info(X_TensorInfo, sess.graph)
    scale = sm.utils.get_tensor_from_tensor_info(scale_TensorInfo, sess.graph)
    y = sm.utils.get_tensor_from_tensor_info(y_TensorInfo, sess.graph)

    # Or, equivalently:
    # X = sess.graph.get_tensor_by_name(X_TensorInfo.name)
    # scale = sess.graph.get_tensor_by_name(scale_TensorInfo.name)
    # y = sess.graph.get_tensor_by_name(y_TensorInfo.name)

    print('scale=', sess.run(scale))
    print('y=', sess.run(y, feed_dict={X: [3., 2., 1.]}))
- First, in sm.loader.load, specify the model location and the tags of the meta graph to extract.
- Take the signature_def map (a dict) from the meta graph.
- Fetch the desired SignatureDef from the dict by its key.
- Fetch the input and output TensorInfos from the SignatureDef by their keys.
- Resolve the final tensors from the TensorInfos (via the tensor names).
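The last step relies only on the tensor name stored in each TensorInfo, e.g. "Mul:0", which means output 0 of the op named Mul. A tiny hypothetical helper (not part of the TF API) illustrates the naming convention that get_tensor_by_name relies on:

```python
def split_tensor_name(tensor_name):
    """Split a tensor name like 'Mul:0' into (op_name, output_index)."""
    # rpartition splits on the LAST colon, so op names containing '/' are safe
    op_name, _, index = tensor_name.rpartition(':')
    return op_name, int(index)

print(split_tensor_name('Mul:0'))  # ('Mul', 0)
print(split_tensor_name('global_average_pooling2d_1/Mean:0'))
```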
Final output:
INFO:tensorflow:Restoring parameters from b'saved_model_test/variables\variables'
scale= [10. 11. 12.]
y= [30. 22. 12.]
The signature information stored in the .pbtxt:
signature_def {
  key: "serving_default"
  value {
    inputs {
      key: "input_1"
      value {
        name: "Placeholder:0"
        dtype: DT_FLOAT
        tensor_shape {
          dim {
            size: 3
          }
        }
      }
    }
    inputs {
      key: "input_2"
      value {
        name: "Variable:0"
        dtype: DT_FLOAT_REF
        tensor_shape {
          dim {
            size: 3
          }
        }
      }
    }
    outputs {
      key: "output_1"
      value {
        name: "Mul:0"
        dtype: DT_FLOAT
        tensor_shape {
          dim {
            size: 3
          }
        }
      }
    }
    method_name: "tensorflow/serving/predict"
  }
}
3. saved_model_cli
The saved_model_cli tool can be used to inspect an exported pb model:
venv\tf8\Lib\site-packages\tensorflow\python\tools>python saved_model_cli.py -h
show, which shows a computation on a MetaGraphDef in a SavedModel.
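With the demo model from section 2 exported to saved_model_test/ (a placeholder path), typical show invocations look like the following; note that the TRAINING tag constant serializes to the string "train":

```shell
# list all tag-sets, signature defs and their input/output tensors
python saved_model_cli.py show --dir saved_model_test/ --all

# inspect a single signature within one tag-set
python saved_model_cli.py show --dir saved_model_test/ --tag_set train --signature_def serving_default
```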
4. Combining saved_model with a frozen constant graph
We have covered both topics separately, so: can we first convert the graph's variables to constants, so the whole model fits into a single pb file for inference, and then export that graph through the saved_model API? A single pb file could then be deployed to TF Serving directly. The answer is yes.
An h5-to-pb conversion project: https://github.com/amir-abdi/keras_to_tensorflow
Beware, though: after K.set_learning_phase(0), applying convert_variables_to_constants() to Keras BN layers produces outputs that differ from the original model (a known bug):
https://github.com/amir-abdi/keras_to_tensorflow/issues/109
So we do not call K.set_learning_phase() manually. For details on the learning_phase mechanism, see my other article.
Full project: https://github.com/youyuge34/convert_keras_to_pb_saved_model
The key excerpts of the code:
import os
import shutil
import logging

import numpy as np
import tensorflow as tf
from tensorflow.python.framework.graph_util import convert_variables_to_constants
from tensorflow.python.saved_model import signature_constants
# Standalone-Keras imports as assumed by the project:
from keras import backend as K
from keras.applications.resnet50 import ResNet50

# LOG, CHECK_VALUE, OUT_PUTDIR and x_input are module-level settings defined elsewhere in the project.

def save():
    model_ = ResNet50(include_top=False, pooling='avg', weights='imagenet')
    logging.info('K.learning_phase() = {}'.format(K.learning_phase()))
    if CHECK_VALUE:
        results = model_.predict(x_input)
        logging.info('keras.model.predict: {}'.format(results.squeeze()[:10]))
    save_with_single_pb(model=model_, output_dir=OUT_PUTDIR, use_saved_model=True)


def save_with_single_pb(model, output_dir, use_saved_model=True):
    """
    convert Keras model to the single .pb file in output_dir
    :param model: Keras model
    :param output_dir: the output dir to save .pb file
    :return:
    """
    if os.path.exists(output_dir):
        shutil.rmtree(output_dir)  # os.removedirs() only removes empty dirs; SavedModelBuilder requires a fresh path
    sess = K.get_session()
    if LOG:
        logging.info("input tensor name is: {}".format(model.get_input_at(0).name))  # input_1:0
        logging.info(
            'output tensor name is: {}'.format(model.get_output_at(0).name))  # global_average_pooling2d_1/Mean:0
        logging.info('original graph.node num is {}'.format(len(sess.graph_def.node)))
    output_node_name = model.get_output_at(0).op.name  # 'global_average_pooling2d_1/Mean'
    constant_graph_def = convert_variables_to_constants(sess, sess.graph_def, [output_node_name])
    if use_saved_model:
        x_name = model.get_input_at(0).name  # input_3:0
        y_name = model.get_output_at(0).name  # global_average_pooling2d_1/Mean:0
        write_saved_model(output_dir, constant_graph_def, x_name, {'output': y_name})
    else:
        # write out constant graph directly
        with tf.gfile.GFile(os.path.join(output_dir, 'save_model_wo_signature.pb'), 'wb') as f:
            f.write(constant_graph_def.SerializeToString())


def write_saved_model(output_dir, constant_graph_def, x_name, outputs_map):
    """
    write the frozen constant graph_def into output_dir as a pb file using the tf.saved_model API
    :param output_dir:
    :param constant_graph_def: graph_def returned by graph_util.convert_variables_to_constants()
    :param x_name: the name of input tensor
    :param outputs_map: <the signature name, name of the output tensor> dict
    :return:
    """
    with tf.Graph().as_default() as graph:
        with tf.Session(graph=graph) as sess:
            # must have name='', otherwise imported nodes will be auto added prefix 'import/'
            tf.import_graph_def(constant_graph_def, name='')
            builder = tf.saved_model.builder.SavedModelBuilder(output_dir)
            if LOG:
                logging.info('new constant graph.node num is {}'.format(len(sess.graph_def.node)))
            if CHECK_VALUE:
                print('-------------New graph-------------\n')
                preds = sess.run(sess.graph.get_tensor_by_name(outputs_map['output']),
                                 feed_dict={sess.graph.get_tensor_by_name(x_name): x_input})
                logging.info('after constant predict output: {}'.format(np.array(preds).squeeze()[:10]))
            tensor_info_inputs = {
                'inputs': tf.saved_model.utils.build_tensor_info(sess.graph.get_tensor_by_name(x_name)),
                # 'learning_phase': tf.saved_model.utils.build_tensor_info(K.learning_phase())
            }
            tensor_info_outputs = {}
            for k, v in outputs_map.items():
                tensor_info_outputs[k] = tf.saved_model.utils.build_tensor_info(sess.graph.get_tensor_by_name(v))
            signature_def = (
                tf.saved_model.signature_def_utils.build_signature_def(
                    inputs=tensor_info_inputs,
                    outputs=tensor_info_outputs,
                    method_name=signature_constants.PREDICT_METHOD_NAME))  # "tensorflow/serving/predict"
            builder.add_meta_graph_and_variables(
                sess, [tf.saved_model.tag_constants.SERVING],
                signature_def_map={
                    signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:  # "serving_default"
                        signature_def,
                },
            )
            builder.save(as_text=False)
Loading it back:
def test():
    sess, images, scores = load_saved_model(OUT_PUTDIR)
    pred_attributes = sess.run([scores],
                               feed_dict={images: x_input})
    logging.info("predict value: {}".format(np.array(pred_attributes).squeeze()[:10]))
    sess.close()


def load_saved_model(saved_model_dir):
    """
    Load the exported tf-serving .pb model, whose parameters were frozen into constants attached to the graph.
    :param saved_model_dir:
    :return:
    """
    print("\nStart to load tensorflow saved model...")
    sess = tf.Session()
    print("\tStart to construct tensorflow model from buckets...")
    print('\tReading params from {}'.format(saved_model_dir))
    signature_key = signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY
    input_key = 'inputs'
    output_key0 = 'output'
    meta_graph_def = tf.saved_model.loader.load(sess, [tf.saved_model.tag_constants.SERVING], saved_model_dir)
    signature_map = meta_graph_def.signature_def
    x_tensor_name = signature_map[signature_key].inputs[input_key].name
    score_tensor_name = signature_map[signature_key].outputs[output_key0].name
    print("\tx_tensor_name: ", x_tensor_name)
    print("\toutput_tensor_name: ", score_tensor_name)
    images = sess.graph.get_tensor_by_name(x_tensor_name)
    scores = sess.graph.get_tensor_by_name(score_tensor_name)
    print("\nFinished constructing tensorflow model.")
    return sess, images, scores