Model Optimizer in AI and its types

This article explains the Model Optimizer and the types of optimization it performs.

DT 31 Dec, 2019 | 1 min read

The Model Optimizer converts models from multiple frameworks into an Intermediate Representation (IR) for the Inference Engine. It helps shrink and speed up the model.

The Model Optimizer falls short when we have to lower precision, because this operation causes some loss of accuracy.
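As a rough illustration of this conversion step, the sketch below invokes an OpenVINO-style Model Optimizer from Python. The script name mo.py, the frozen model path, and the output directory are assumptions for illustration; the --data_type flag is where the precision lowering mentioned above is requested.

```python
import subprocess

# Hypothetical paths; mo.py ships with the OpenVINO toolkit's Model Optimizer.
subprocess.run(
    [
        "python", "mo.py",
        "--input_model", "frozen_model.pb",  # frozen TensorFlow graph (assumed)
        "--output_dir", "./ir",              # where the IR (.xml/.bin) files are written
        "--data_type", "FP16",               # request lower-precision weights in the IR
    ],
    check=True,
)
```

The resulting .xml (network topology) and .bin (weights) files are what the Inference Engine loads.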

There are three optimization techniques:

1. Quantization: This technique reduces the precision of the model's values, for example from FP32 to FP16 or INT8.

This can cause some loss in accuracy, but it leads to smaller and much faster models (a quantization sketch follows this list).

2. Freezing: This is mainly used with TensorFlow models. It is primarily applied to individual layers: a frozen layer can no longer be trained, and its variables are turned into constants so the model can be exported for inference (a freezing sketch follows this list).

3. Fusion: As the name suggests, this technique combines multiple layer operations into a single operation. It is particularly useful for parallel computation, since fewer separate operations mean less overhead.

For example, layers such as batch normalization, activation, and convolution can be combined into one operation that produces the same combined output (a fusion sketch follows this list).
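A minimal sketch of quantization (technique 1 above): symmetric, per-tensor quantization of FP32 weights to INT8 using NumPy. The helper names and the single-scale scheme are illustrative assumptions, not the exact algorithm a real optimizer uses.

```python
import numpy as np

def quantize_to_int8(weights_fp32):
    # One scale for the whole tensor (symmetric, per-tensor quantization).
    scale = np.abs(weights_fp32).max() / 127.0
    q = np.clip(np.round(weights_fp32 / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q_int8, scale):
    # Recover approximate FP32 values; the gap is the accuracy cost of quantization.
    return q_int8.astype(np.float32) * scale

weights = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_to_int8(weights)
print("max quantization error:", np.abs(weights - dequantize(q, scale)).max())
```

Storing INT8 values instead of FP32 cuts the weight storage to a quarter, which is where the smaller, faster model comes from.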
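For freezing (technique 2 above), here is a minimal TensorFlow 1.x-style sketch, since that was the current API when this was written. The toy graph and the output node name are assumptions; the frozen .pb file is what a converter such as the Model Optimizer would then consume.

```python
import tensorflow as tf  # TensorFlow 1.x-style API

with tf.Graph().as_default() as graph:
    x = tf.placeholder(tf.float32, [None, 4], name="input")
    y = tf.layers.dense(x, 2, name="output")  # toy single-layer model

    with tf.Session(graph=graph) as sess:
        sess.run(tf.global_variables_initializer())
        # Freezing: replace trainable variables with constants so the graph
        # is self-contained and no longer trainable.
        frozen_graph_def = tf.graph_util.convert_variables_to_constants(
            sess, graph.as_graph_def(), ["output/BiasAdd"])
        tf.io.write_graph(frozen_graph_def, "./", "frozen_model.pb", as_text=False)
```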
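For fusion (technique 3 above), a common concrete case is folding a batch-normalization layer into the convolution that precedes it, so the pair runs as a single operation. The NumPy sketch below shows that folding for the weights and bias; the shapes and eps value are assumptions for illustration.

```python
import numpy as np

def fuse_conv_bn(W, b, gamma, beta, mean, var, eps=1e-5):
    # W: (out_ch, in_ch, kh, kw) conv weights, b: (out_ch,) conv bias.
    # gamma, beta, mean, var: (out_ch,) batch-norm parameters.
    scale = gamma / np.sqrt(var + eps)        # per-output-channel BN scale
    W_fused = W * scale[:, None, None, None]  # scale each output filter
    b_fused = (b - mean) * scale + beta       # fold the mean and shift into the bias
    return W_fused, b_fused

# Toy shapes: 8 output channels, 3 input channels, 3x3 kernels.
W = np.random.randn(8, 3, 3, 3).astype(np.float32)
b = np.zeros(8, dtype=np.float32)
gamma, beta = np.ones(8, np.float32), np.zeros(8, np.float32)
mean, var = np.zeros(8, np.float32), np.ones(8, np.float32)
W_fused, b_fused = fuse_conv_bn(W, b, gamma, beta, mean, var)
```

The fused convolution produces the same output as the original conv + batch norm pair, but the inference engine has one fewer operation to execute.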

Some of the frameworks supported by the Model Optimizer, and the organizations that developed them, are as follows:

1. Caffe - by UC Berkeley

2. MXNet - by the Apache Software Foundation

3. TensorFlow - by Google

4. ONNX - by Facebook and Microsoft

