Structure of the Myrddin Compiler
Aug 2012
Ori Bernstein
TABLE OF CONTENTS:
1. OVERVIEW
2. PARSING
2.1. Lexing
2.2. AST Creation
2.3. Type checking
2.4. Generic Specialization
2.5. Serialization
2.6. Usefiles
3. FLATTENING
3.1. Control Flow
3.2. Complex Expressions
4. OPTIMIZATION
4.1. Constant Folding
5. CODE GENERATION
5.1. Instruction Selection
5.2. Register Allocation
6. TUTORIAL: ADDING A STATEMENT
6.1. Stubbing in the node types
6.2. Parsing
6.3. Flattening
6.4. Optimization
6.5. Instruction Selection
1. OVERVIEW:
The Myrddin compiler suite consists of a set of binaries, written in C,
which translate Myrddin source code to the assembly most appropriate for
the target platform, and subsequently invoke the native assembler on it.
The linker is not invoked by the compiler, and the final output is an
object file for the target platform.
The compilers are named with a single character for the target platform,
followed by a single character for the language being compiled. A table
of the compilers and their names is below:
Compiler        Platform
-------------------------
6m              x86-64
The compilation is divided into a small number of phases. The first phase
is parsing, where the source code is tokenized, parsed, and semantically
checked. The second phase is the machine dependent tree flattening. In
this phase, the tree is decomposed function by function into simple
operations that are relatively close to the machine. Sizes are fixed,
and all loops, if statements, etc. are replaced with gotos. The next
phase is a machine independent optimizer, which currently does nothing
other than simple constant folding of trees. In the final phase, the
instructions are selected and the registers are allocated.
So, to recap, the phases are as follows:
parse           Tokenize, parse, and analyze the source.
flatten         Rewrite the complex nodes into simple ones.
opt             Optimize the flattened source trees.
gen             Generate the assembly code.
2. PARSING:
This phase takes in a source file, and outputs a tree that is guaranteed
to be valid.
2.1. Lexing:
Lexing occurs in parse/tok.c. Because we desire to use this lexer from
within yacc, the entry point to this code is in 'yylex()'. As required
by yacc, 'yylex()' returns an integer defining the token type, and
sets the 'tok' member of yylval to the token that was taken from the
input stream. In addition, to allow for better error messages, the
global variable 'curtok' is set to the value of 'yylval.tok'. This
allows yyerror to print the last token that was seen.
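A hedged sketch of this contract (the exact field and type names here
are assumptions; the real code is in parse/tok.c):

    int yylex(void)
    {
            /* scan the next token from the input; stashing it in
               'curtok' lets yyerror() report it later */
            curtok = toknext();
            /* hand the token itself to the parser */
            yylval.tok = curtok;
            /* return the '%token' constant generated by yacc */
            return curtok->type;
    }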
The tokens that are allowable are generated by Yacc from the '%token'
definitions in parse/gram.y, and are placed into the file
'parse/gram.h'. The lexer and parser code is the only code that
depends on these token constants.
The lexer is initialized through 'tokinit(char *file)'. This function
will open the file passed in, read all the data from it in one go
and set up the internal data for the tokenizer. The tokenizing is then
done while the whole file is in memory, which means that this code
will work poorly on files that are larger than the address space
available to the compiler. If this is a problem, you deserve all the
pain that is caused.
The file data is stored in the three global variables 'fidx', 'fbuf',
and 'fbufsz'. The actual tokenization happens through 'toknext()' and
its callees, which operate on these data structures character by
character, matching the values read, and shoving them into the 'Tok'
data structure.
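A minimal sketch of this setup, with all error handling elided and the
details assumed (the real implementation is in parse/tok.c):

    #include <fcntl.h>
    #include <stdlib.h>
    #include <unistd.h>

    char *fbuf;     /* the entire contents of the file */
    size_t fbufsz;  /* the number of bytes in fbuf */
    size_t fidx;    /* the current read offset into fbuf */

    void tokinit(char *file)
    {
            int fd;
            ssize_t n;

            fd = open(file, O_RDONLY);
            fbuf = NULL;
            fbufsz = 0;
            /* read the whole file into memory in one go */
            do {
                    fbuf = realloc(fbuf, fbufsz + 4096);
                    n = read(fd, fbuf + fbufsz, 4096);
                    if (n > 0)
                            fbufsz += n;
            } while (n > 0);
            close(fd);
            fidx = 0;
    }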
2.2. AST Creation:
The parser used is a traditional Yacc based parser. It is generated
from the source in parse/gram.y. The starting production is 'file',
which fills in a global 'file' tree node. This 'file' tree node must
be initialized before yyparse() is called.
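A hedged sketch of that sequence (the helper names are assumptions, and
error handling is omitted):

    /* the global root node that the 'file' production fills in */
    extern Node *file;

    void parsefile(char *path)
    {
            tokinit(path);          /* set up the lexer state */
            file = mkfile(path);    /* initialize the root node */
            yyparse();              /* build the AST under 'file' */
    }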
2.3. Type Checking:
Type checking is done through unification of types. It's implemented
in parse/infer.c. It proceeds through a simple unification algorithm,
which is documented in lang.txt. As a result, only the internal
details of this algorithm will be discussed here.
The first step done is loading and resolving use files. This is
deferred to the type checking phase for two reasons. First, we do not
want to force tools that use this parser to have all dependencies
compiled, even though full type checking is impossible until all
usefiles are loaded. And second, this is when the information is
actually needed.
Next, the types declared in the package section are merged with the
exported types, allowing us to start off with our type information as
complete as possible, and making sure that the types of globals
actually match up with the exported types.
The next step is the actual type inference. We do a bottom up walk of
the tree, unifying types as we go. There are subtleties with the
member operator, however. Because the '.' operator is used for both
member lookups and namespace lookups, before we descend into a node
that has operator Omemb, we need to check if it's a namespaced name
or an actual member reference. If it is a namespaced name, we replace
the expression with an Ovar expression. This check happens in the
'checkns()' function. Second, because we need to know the LHS of a
member expression before we can check if the RHS is valid, and we
are not guaranteed to know this the first time we see it, the
expression is assumed to be valid, and this assumption is verified in
a post-processing pass. Casts are similarly validated in a deferred
manner.
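A conceptual sketch of the namespace check (the helpers 'isnamespace()'
and 'nsname()' are hypothetical; the real logic is in parse/infer.c):

    /* rewrite a `ns.name` member node into a namespaced Ovar
       node when the LHS names a package rather than a value */
    void checkns(Node *n, Node **ret)
    {
            Node *base, *name;

            if (exprop(n) != Omemb)
                    return;
            base = n->expr.args[0];
            name = n->expr.args[1];
            /* is the base a bare name that matches a namespace? */
            if (exprop(base) == Ovar && isnamespace(base))
                    *ret = mkexpr(n->loc, Ovar, nsname(base, name), NULL);
    }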
Generic variables are added to a list of generic callsites to
specialize when they are seen as a leaf of an Ovar node.
The type inference, to this point, has only built up a mapping
of types. So, for example, if we were to infer the types for the
following set of statements:
    var a
    var b
    var c

    a = b
    c = b + 1
We would have the mappings:
    $t0 -> $t1
    $t1 -> $t2
    $t2 -> int
So, in the 'typesub()' function, we iterate over the entire tree,
replacing every instance of a non-concrete type with the final
mapped type. If a type does not map to a fully concrete type,
this is where we error.
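As a hedged illustration (the representation here is an assumption; the
real walk lives in parse/infer.c), substituting a type amounts to
chasing its mapping to the end of the chain:

    /* follow a chain like $t0 -> $t1 -> $t2 -> int until we hit
       either a concrete type or an unbound type variable */
    Type *tysearch(Type *t)
    {
            while (t->type == Tyvar && tytab[t->tid] != NULL)
                    t = tytab[t->tid];
            return t;
    }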
FIXME: DESCRIBE HOW YOU FIXED GENERICS ONCE YOU FIX GENERICS.
2.4. Generic Specialization:
After type inference (well, technically, as the final step of it),
we walk through the list of callsites that need instantiations
of generics, and create a specialized generic instance for each of
them. This specialization is done, unsurprisingly, in specialize.c,
by the simple algorithm of cloning the entire tree that needs to
be specialized, and walking over all nodes substituting the types
that are replacing the type parameters.
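A conceptual sketch of that algorithm, assuming for illustration a
uniform child array and hypothetical helpers (the real code in
specialize.c is shaped somewhat differently):

    /* clone a generic's tree, rewriting each type parameter to
       its concrete binding from 'tsmap' as we go */
    Node *specializenode(Node *g, Htab *tsmap)
    {
            Node *n;
            size_t i;

            n = dupnode(g);         /* copy this node */
            settype(n, tysubst(type(g), tsmap));
            for (i = 0; i < g->nchild; i++)
                    n->child[i] = specializenode(g->child[i], tsmap);
            return n;
    }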
2.5. Serialization:
Trees of all sorts can be serialized to and deserialized from files,
as long as they are fully typed. Trees containing type variables (ie,
uninferred types) cannot be serialized, as type variables cannot be
deserialized meaningfully.
The format for this is only documented in the source, and is a
straightforward dump of the trees to memory. It is constantly shifting,
and will not reliably work between compiler versions.
2.6. Usefiles:
Usefiles are more or less files that consist of a single character tag
that tells us what type of tree to deserialize. Because serialized
trees are compiler version dependent, so are usefiles.
3. FLATTENING:
This phase is invoked once for each top level declaration that we
want to generate code for.
3.1. Control Flow:
All control flow is simplified to labels and gotos: as noted in the
overview, loops, if statements, and the other structured control flow
constructs are replaced with conditional and unconditional jumps.
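As a rough illustration (written here in C, since the flattened form
itself is an internal representation), a while loop is rewritten
along these lines:

    /* before flattening: structured control flow */
    while (x < 10)
            x++;

    /* after flattening: only labels and jumps remain */
    loop:
            if (!(x < 10))
                    goto end;
            x = x + 1;
            goto loop;
    end:
            ;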
3.2. Complex Expressions:
4. OPTIMIZATION:
4.1. Constant Folding:
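As a hedged sketch of the tree folding mentioned in the overview (the
node accessors 'isintlit()' and 'intval()' are hypothetical here):

    /* fold an addition of two integer literals into a single
       literal node; anything else is left untouched */
    Node *fold(Node *n)
    {
            Node *a, *b;
            size_t i;

            /* fold the subtrees first */
            for (i = 0; i < n->expr.nargs; i++)
                    n->expr.args[i] = fold(n->expr.args[i]);
            if (exprop(n) != Oadd)
                    return n;
            a = n->expr.args[0];
            b = n->expr.args[1];
            if (isintlit(a) && isintlit(b))
                    return mkintlit(n->loc, intval(a) + intval(b));
            return n;
    }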
5. CODE GENERATION:
5.1. Instruction Selection:
5.2. Register Allocation:
6. TUTORIAL: ADDING A STATEMENT:
6.1. Stubbing in the node types:
6.2. Parsing:
6.3. Flattening:
6.4. Optimization:
6.5. Instruction Selection: