
Graph Representation in Glacier - Op Events

Motivation of Representation

The main purpose of developing a flat, topological, “relay-esque” representation of models in glacier.py is to be able to work with layers on both an individual and a holistic level. This more individualized node representation lets the user attain a more granular understanding of their model’s performance. Potential applications of this representation include structural ngrams over op names, runtimes for individual ops and for sequences of ops, accuracy of operations, and other performance metrics that benefit from increased granularity.

Representation

nodeStruct

At the heart of this representation is the nodeStruct, a class whose instances represent the different types of nodes that are sent as events from the representation. It holds information such as the node ID, op name, inputs into the node, datatype, and shape.

PY
  class nodeStruct:
      def __init__(self, nId, op_name, dtype="", shape=""):
          self.nId = nId          # unique node id, assigned in traversal order
          self.inputs = list()    # nIds of Call args, names of Var/Constant args
          self.op_name = op_name  # op name, Var name, or "constant"
          self.dtype = dtype      # datatype; only set for Var/Constant nodes
          self.shape = shape      # shape; only set for Var/Constant nodes
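
As a minimal illustration (not code from glacier.py), the first two nodes of the example output shown later could be represented like this:

PY
# A Call node whose only argument is the model input (a Var):
pad = nodeStruct(0, "nn.pad")
pad.inputs.append("input_1")  # Var args are recorded by name

# A Call node fed by the previous Call node and a weight Var:
conv = nodeStruct(1, "nn.conv2d")
conv.inputs.append(pad.nId)   # Call args are recorded by nId
conv.inputs.append("model/Conv1/Conv2D/ReadVariableOp/resource")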

nId

The nId (node id) is a unique identifier assigned to each node in the graph. It is essentially what connects one node to its inputs, and is what allows us to represent the graph.

inputs

The ‘inputs’ variable starts off as an empty list. For each arg of a node, either the id (if the arg is a Call node) or the name (if it is a Var or Constant node) is appended to the list. More on how this is done later.

op_name

This is the name of each operation/arg. It will either be the operation (nn.conv2d, nn.relu, transpose, etc.) for a Call node, the name of the variable for a Var node, or "constant" for a Constant node.

dtype

This is only populated for Var and Constant nodes, as it is part of what we send in the events for those node types.

shape

This is likewise only populated for Var and Constant nodes, for the same reason as dtype.

Building Input Set

For each node, the main goal is to build its input list so that it contains the inputs for that specific node. To do this we utilize two distinct dicts, self.op_mem and self.varconst_mem, which have the same type of key but different values. Both can use the nodeid as the key, because we know it will be distinct for each entry due to the nature of the relay representation.

  • self.op_mem[nodeid/arg] = self.num_nodes, which doubles as the node id. Every time we add an entry to this dict we increment self.num_nodes, so no two entries share an id (ids are indexed from 0).

  • self.varconst_mem[nodeid/arg] = node["name"] if it is a Var node, or "constant" if it is a Constant node.

We can index the dicts with nodeid/arg because an arg is equivalent to the nodeid of the input. The actual appending to the input list is done after the recursive call in the _invoke_uncached() function, when the type of the node is “Call”. This means we first build the input list for the first op in the graph/our model, i.e. the innermost op in the relay model (DFS post-order traversal).
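
Below is a simplified sketch of that lookup logic. The dict names match the description above, but the arg accessors (node["args"], membership checks directly on the arg) are hypothetical stand-ins for the actual relay structures handled inside _invoke_uncached().

PY
# Hypothetical sketch: fill a nodeStruct's input list for a Call node.
# This runs after the recursive call, so all children have already been
# numbered (DFS post-order); accessors are illustrative, not the real API.
def _build_inputs(self, node, node_struct):
    for arg in node["args"]:
        if arg in self.op_mem:
            # arg is a Call node we have already numbered: record its nId
            node_struct.inputs.append(self.op_mem[arg])
        elif arg in self.varconst_mem:
            # arg is a Var or Constant: record its name (or "constant")
            node_struct.inputs.append(self.varconst_mem[arg])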

Event Format

These events are logged using the .track attribute of the EventLogger object. The data we add to each event contains the nodeID, duration, inputs, size, and name. Every node of a model, carrying this same info, is linked to the same session. Hence, for implementing the structural ngrams mentioned earlier, we can perform a depth-first-search (DFS) post-order traversal by making use of the nodeID and input list of each node: first reconstruct the graph representation of the model, then slide a window of size n over it to extract the ngrams, potentially presenting them in a histogram.
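
As a concrete (hedged) sketch of that idea, assuming the logged events have already been fetched into a list of (nodeID, op_name, inputs) records (the fetching itself is outside the scope of this page):

PY
from collections import Counter

def op_ngrams(events, n=2):
    """Count op-name ngrams along chains of Call inputs.

    `events` is assumed to be a list of (node_id, op_name, inputs)
    tuples reconstructed from the logged events. Only integer inputs
    (Call nodes) are followed; string inputs are Var/Constant leaves.
    At a branch (e.g. add) only the first Call input is followed,
    which is a simplification of a full graph traversal.
    """
    by_id = {nid: (op, inputs) for nid, op, inputs in events}
    counts = Counter()
    for nid, (op, _) in by_id.items():
        window, cur = [op], nid
        while len(window) < n:
            call_inputs = [i for i in by_id[cur][1] if isinstance(i, int)]
            if not call_inputs:
                break
            cur = call_inputs[0]
            window.append(by_id[cur][0])
        if len(window) == n:
            counts[tuple(reversed(window))] += 1  # producer-to-consumer order
    return counts

Applied to the mobilenetv2 output below, the bigram ('nn.conv2d', 'nn.bias_add') would dominate the counts, since every nn.conv2d there feeds a nn.bias_add.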

Use in Dev.

In development, we can specify which types of nodes we want to see in the input list and send as events. We can choose from three node types: “Call”, “Var”, and “Constant” (or none).

To specify which node types should appear in the input list, call the function set_event_node_types(data), where data is the type of node you would like to see in the cache. If no types are specified, nothing is added to any node’s input list. To select which nodes go in the inputs list, you would do something like

CODE
gvm, _, _ = GlacierVM.from_model_dir(model_dir)  # model_dir is some model directory
gvm.set_input("input", in_data)
gvm.set_event_node_types("Call")
gvm.set_event_node_types("Var")
# or, for a different selection:
gvm.set_event_node_types("Var")
gvm.set_event_node_types("Constant")

Essentially, for each node type that you want to see in the input list, you make a separate call to the function.

With the node types set to "Call", "Var", and "Constant", the output for model_dir = Path("../models/mobilenetv2/keras-open-images-10-classes/") should look like

representation for mobilenetv2/keras-open-images-10-classes
TEXT
0 nn.pad ['input_1']
1 nn.conv2d [0, 'model/Conv1/Conv2D/ReadVariableOp/resource']
2 nn.bias_add [1, 'leip_inserted_bias_by_thu_65']
3 clip [2]
4 nn.conv2d [3, 'model/expanded_conv_depthwise/depthwise/ReadVariableOp/resource']
5 nn.bias_add [4, 'leip_inserted_bias_by_thu_62']
6 clip [5]
7 nn.conv2d [6, 'model/expanded_conv_project/Conv2D/ReadVariableOp/resource']
8 nn.bias_add [7, 'leip_inserted_bias_by_thu_59']
9 nn.conv2d [8, 'model/block_1_expand/Conv2D/ReadVariableOp/resource']
10 nn.bias_add [9, 'leip_inserted_bias_by_thu_56']
11 clip [10]
12 nn.pad [11]
13 nn.conv2d [12, 'model/block_1_depthwise/depthwise/ReadVariableOp/resource']
14 nn.bias_add [13, 'leip_inserted_bias_by_thu_53']
15 clip [14]
16 nn.conv2d [15, 'model/block_1_project/Conv2D/ReadVariableOp/resource']
17 nn.bias_add [16, 'leip_inserted_bias_by_thu_50']
18 nn.conv2d [17, 'model/block_2_expand/Conv2D/ReadVariableOp/resource']
19 nn.bias_add [18, 'leip_inserted_bias_by_thu_74']
20 clip [19]
21 nn.conv2d [20, 'model/block_2_depthwise/depthwise/ReadVariableOp/resource']
22 nn.bias_add [21, 'leip_inserted_bias_by_thu_71']
23 clip [22]
24 nn.conv2d [23, 'model/block_2_project/Conv2D/ReadVariableOp/resource']
25 nn.bias_add [24, 'leip_inserted_bias_by_thu_68']
26 add [17, 25]
27 nn.conv2d [26, 'model/block_3_expand/Conv2D/ReadVariableOp/resource']
28 nn.bias_add [27, 'leip_inserted_bias_by_thu_47']
29 clip [28]
30 nn.pad [29]
31 nn.conv2d [30, 'model/block_3_depthwise/depthwise/ReadVariableOp/resource']
32 nn.bias_add [31, 'leip_inserted_bias_by_thu_44']
33 clip [32]
34 nn.conv2d [33, 'model/block_3_project/Conv2D/ReadVariableOp/resource']
35 nn.bias_add [34, 'leip_inserted_bias_by_thu_41']
36 nn.conv2d [35, 'model/block_4_expand/Conv2D/ReadVariableOp/resource']
37 nn.bias_add [36, 'leip_inserted_bias_by_thu_83']
38 clip [37]
39 nn.conv2d [38, 'model/block_4_depthwise/depthwise/ReadVariableOp/resource']
40 nn.bias_add [39, 'leip_inserted_bias_by_thu_80']
41 clip [40]
42 nn.conv2d [41, 'model/block_4_project/Conv2D/ReadVariableOp/resource']
43 nn.bias_add [42, 'leip_inserted_bias_by_thu_77']
44 add [35, 43]
45 nn.conv2d [44, 'model/block_5_expand/Conv2D/ReadVariableOp/resource']
46 nn.bias_add [45, 'leip_inserted_bias_by_thu_92']
47 clip [46]
48 nn.conv2d [47, 'model/block_5_depthwise/depthwise/ReadVariableOp/resource']
49 nn.bias_add [48, 'leip_inserted_bias_by_thu_89']
50 clip [49]
51 nn.conv2d [50, 'model/block_5_project/Conv2D/ReadVariableOp/resource']
52 nn.bias_add [51, 'leip_inserted_bias_by_thu_86']
53 add [44, 52]
54 nn.conv2d [53, 'model/block_6_expand/Conv2D/ReadVariableOp/resource']
55 nn.bias_add [54, 'leip_inserted_bias_by_thu_38']
56 clip [55]
57 nn.pad [56]
58 nn.conv2d [57, 'model/block_6_depthwise/depthwise/ReadVariableOp/resource']
59 nn.bias_add [58, 'leip_inserted_bias_by_thu_35']
60 clip [59]
61 nn.conv2d [60, 'model/block_6_project/Conv2D/ReadVariableOp/resource']
62 nn.bias_add [61, 'leip_inserted_bias_by_thu_32']
63 nn.conv2d [62, 'model/block_7_expand/Conv2D/ReadVariableOp/resource']
64 nn.bias_add [63, 'leip_inserted_bias_by_thu_101']
65 clip [64]
66 nn.conv2d [65, 'model/block_7_depthwise/depthwise/ReadVariableOp/resource']
67 nn.bias_add [66, 'leip_inserted_bias_by_thu_98']
68 clip [67]
69 nn.conv2d [68, 'model/block_7_project/Conv2D/ReadVariableOp/resource']
70 nn.bias_add [69, 'leip_inserted_bias_by_thu_95']
71 add [62, 70]
72 nn.conv2d [71, 'model/block_8_expand/Conv2D/ReadVariableOp/resource']
73 nn.bias_add [72, 'leip_inserted_bias_by_thu_110']
74 clip [73]
75 nn.conv2d [74, 'model/block_8_depthwise/depthwise/ReadVariableOp/resource']
76 nn.bias_add [75, 'leip_inserted_bias_by_thu_107']
77 clip [76]
78 nn.conv2d [77, 'model/block_8_project/Conv2D/ReadVariableOp/resource']
79 nn.bias_add [78, 'leip_inserted_bias_by_thu_104']
80 add [71, 79]
81 nn.conv2d [80, 'model/block_9_expand/Conv2D/ReadVariableOp/resource']
82 nn.bias_add [81, 'leip_inserted_bias_by_thu_119']
83 clip [82]
84 nn.conv2d [83, 'model/block_9_depthwise/depthwise/ReadVariableOp/resource']
85 nn.bias_add [84, 'leip_inserted_bias_by_thu_116']
86 clip [85]
87 nn.conv2d [86, 'model/block_9_project/Conv2D/ReadVariableOp/resource']
88 nn.bias_add [87, 'leip_inserted_bias_by_thu_113']
89 add [80, 88]
90 nn.conv2d [89, 'model/block_10_expand/Conv2D/ReadVariableOp/resource']
91 nn.bias_add [90, 'leip_inserted_bias_by_thu_29']
92 clip [91]
93 nn.conv2d [92, 'model/block_10_depthwise/depthwise/ReadVariableOp/resource']
94 nn.bias_add [93, 'leip_inserted_bias_by_thu_26']
95 clip [94]
96 nn.conv2d [95, 'model/block_10_project/Conv2D/ReadVariableOp/resource']
97 nn.bias_add [96, 'leip_inserted_bias_by_thu_23']
98 nn.conv2d [97, 'model/block_11_expand/Conv2D/ReadVariableOp/resource']
99 nn.bias_add [98, 'leip_inserted_bias_by_thu_128']
100 clip [99]
101 nn.conv2d [100, 'model/block_11_depthwise/depthwise/ReadVariableOp/resource']
102 nn.bias_add [101, 'leip_inserted_bias_by_thu_125']
103 clip [102]
104 nn.conv2d [103, 'model/block_11_project/Conv2D/ReadVariableOp/resource']
105 nn.bias_add [104, 'leip_inserted_bias_by_thu_122']
106 add [97, 105]
107 nn.conv2d [106, 'model/block_12_expand/Conv2D/ReadVariableOp/resource']
108 nn.bias_add [107, 'leip_inserted_bias_by_thu_137']
109 clip [108]
110 nn.conv2d [109, 'model/block_12_depthwise/depthwise/ReadVariableOp/resource']
111 nn.bias_add [110, 'leip_inserted_bias_by_thu_134']
112 clip [111]
113 nn.conv2d [112, 'model/block_12_project/Conv2D/ReadVariableOp/resource']
114 nn.bias_add [113, 'leip_inserted_bias_by_thu_131']
115 add [106, 114]
116 nn.conv2d [115, 'model/block_13_expand/Conv2D/ReadVariableOp/resource']
117 nn.bias_add [116, 'leip_inserted_bias_by_thu_20']
118 clip [117]
119 nn.pad [118]
120 nn.conv2d [119, 'model/block_13_depthwise/depthwise/ReadVariableOp/resource']
121 nn.bias_add [120, 'leip_inserted_bias_by_thu_17']
122 clip [121]
123 nn.conv2d [122, 'model/block_13_project/Conv2D/ReadVariableOp/resource']
124 nn.bias_add [123, 'leip_inserted_bias_by_thu_14']
125 nn.conv2d [124, 'model/block_14_expand/Conv2D/ReadVariableOp/resource']
126 nn.bias_add [125, 'leip_inserted_bias_by_thu_146']
127 clip [126]
128 nn.conv2d [127, 'model/block_14_depthwise/depthwise/ReadVariableOp/resource']
129 nn.bias_add [128, 'leip_inserted_bias_by_thu_143']
130 clip [129]
131 nn.conv2d [130, 'model/block_14_project/Conv2D/ReadVariableOp/resource']
132 nn.bias_add [131, 'leip_inserted_bias_by_thu_140']
133 add [124, 132]
134 nn.conv2d [133, 'model/block_15_expand/Conv2D/ReadVariableOp/resource']
135 nn.bias_add [134, 'leip_inserted_bias_by_thu_155']
136 clip [135]
137 nn.conv2d [136, 'model/block_15_depthwise/depthwise/ReadVariableOp/resource']
138 nn.bias_add [137, 'leip_inserted_bias_by_thu_152']
139 clip [138]
140 nn.conv2d [139, 'model/block_15_project/Conv2D/ReadVariableOp/resource']
141 nn.bias_add [140, 'leip_inserted_bias_by_thu_149']
142 add [133, 141]
143 nn.conv2d [142, 'model/block_16_expand/Conv2D/ReadVariableOp/resource']
144 nn.bias_add [143, 'leip_inserted_bias_by_thu_11']
145 clip [144]
146 nn.conv2d [145, 'model/block_16_depthwise/depthwise/ReadVariableOp/resource']
147 nn.bias_add [146, 'leip_inserted_bias_by_thu_8']
148 clip [147]
149 nn.conv2d [148, 'model/block_16_project/Conv2D/ReadVariableOp/resource']
150 nn.bias_add [149, 'leip_inserted_bias_by_thu_5']
151 nn.conv2d [150, 'model/Conv_1/Conv2D/ReadVariableOp/resource']
152 nn.bias_add [151, 'leip_inserted_bias_by_thu_2']
153 clip [152]
154 mean [153]
155 transpose ['model/dense/MatMul/ReadVariableOp/resource']
156 nn.dense [154, 155]
157 add [156, 'model/dense/BiasAdd/ReadVariableOp/resource']
158 nn.relu [157]
159 transpose ['model/dense_1/MatMul/ReadVariableOp/resource']
160 nn.dense [158, 159]
161 add [160, 'model/dense_1/BiasAdd/ReadVariableOp/resource']
162 nn.relu [161]
163 transpose ['model/dense_2/MatMul/ReadVariableOp/resource']
164 nn.dense [162, 163]
165 add [164, 'model/dense_2/BiasAdd/ReadVariableOp/resource']
166 nn.relu [165]
167 transpose ['model/dense_3/MatMul/ReadVariableOp/resource']
168 nn.dense [166, 167]
169 add [168, 'model/dense_3/BiasAdd/ReadVariableOp/resource']
170 nn.softmax [169]
