flcc: a highly optimized C compiler


updated 2024-10-11

'flcc' (the front-line C compiler) is a long-term project to build the most efficient C compiler possible. I aim to support C23 with a limited set of vendor extensions (depending on popularity and ease of implementation). Common C codebases (e.g. popular Linux packages) should be well-supported. While the optimizer may not be as sophisticated as that of GCC or LLVM, generated code outside of hot loops should be of decent quality.

Note: this project is in its infancy. I am collecting ideas and techniques in order to get a relatively complete picture of the compiler architecture. While I am quite confident in the designs I set out, unforeseen issues can always crop up. This project is a few years from being usable. Follow the development on SourceHut.

Compilation is divided into the following stages. These stages must occur in order, but they may occur in a pipelined fashion, and each stage may use parallelism. Some stages may also support incremental compilation, so that unchanged parts of the code are not processed again.

  1. Tokenizing: Each input file is processed into a sequence of tokens (symbols, identifiers, numbers, string and character literals, and comments).

    'flcc' overhauls this stage completely. Traditional implementations parse tokens on demand, examining the input character by character and selecting the appropriate kind of token. 'flcc' transposes this logic: for each kind of token, it scans the entire input and identifies every occurrence of that kind. Scanning for one kind at a time is highly amenable to vectorization, as sketched below.
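
    A minimal sketch of the idea in plain C: classify one token kind across a whole buffer at once, producing a bitmap with one bit per input byte. The branch-free inner loop is the sort of code an auto-vectorizer handles well. Names and types here are illustrative assumptions, not flcc's actual interfaces.

      #include <stddef.h>
      #include <stdint.h>

      /* 1 if the byte may appear inside an identifier. */
      static int is_ident_char(uint8_t c) {
          return c == '_' || (c >= '0' && c <= '9')
              || (c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z');
      }

      /* Set one bit per input byte marking identifier characters.
       * Runs of set bits in the bitmap are identifier tokens. */
      void scan_ident_chars(const uint8_t *src, size_t n, uint64_t *bitmap) {
          for (size_t i = 0; i < n; i += 64) {
              uint64_t bits = 0;
              for (size_t j = 0; j < 64 && i + j < n; j++)
                  bits |= (uint64_t)is_ident_char(src[i + j]) << j;
              bitmap[i / 64] = bits;
          }
      }

    In this scheme each kind of token would get its own pass; combining the per-kind bitmaps into the final token sequence is left out of the sketch.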

  2. Pre-processing: Preprocessor directives in the input file are evaluated and macros are substituted in the input token stream.

    Source files tend to include many header files (directly and indirectly), and within a single project many of those header files are shared between source files. Traditional implementations run once per source file, so they pre-process the same header files repeatedly. 'flcc' runs once over an entire project and caches the results of pre-processing each header file so that they can be reused, as sketched below.
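
    One plausible shape for such a cache, assuming entries are keyed by the header's path together with a fingerprint of the macro definitions the header actually observed (the same header can expand differently under different definitions). All names and types here are hypothetical:

      #include <stddef.h>
      #include <stdint.h>

      struct token; /* pre-processed tokens, defined elsewhere */

      typedef struct {
          uint64_t path_hash;  /* hash of the resolved header path   */
          uint64_t macro_hash; /* hash of macro definitions consumed */
      } cache_key;

      typedef struct cache_entry {
          cache_key key;
          const struct token *tokens;
          size_t ntokens;
          struct cache_entry *next; /* hash-bucket chain */
      } cache_entry;

      /* Return the cached token sequence for this (header, macro
       * environment) pair, or NULL if it must be pre-processed. */
      const struct token *cache_lookup(cache_entry *const *buckets,
                                       size_t nbuckets, cache_key k,
                                       size_t *out_ntokens) {
          size_t b = (k.path_hash ^ k.macro_hash) % nbuckets;
          for (const cache_entry *e = buckets[b]; e; e = e->next) {
              if (e->key.path_hash == k.path_hash &&
                  e->key.macro_hash == k.macro_hash) {
                  *out_ntokens = e->ntokens;
                  return e->tokens;
              }
          }
          return NULL;
      }

    Keying on only the macros a header actually reads, rather than the whole macro table, would keep hit rates high when unrelated definitions differ between source files.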

  3. Parsing: Input file tokens are organized into syntactic constructs like functions and expressions.

    Traditional implementations consume the token stream using recursive descent, again processing it token by token. 'flcc' first structures the input based on matching pairs of parentheses, curly braces, and square brackets. The tokens within each matching pair are then scanned for syntactic constructs. Scanning begins with the most deeply nested matching pairs and works outwards, so that each scan can see the parsed results of the matching pairs within its content. A sketch of the bracket-structuring pass follows.
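
    A minimal sketch of bracket structuring, assuming tokens have already been classified. A stack pairs each closing bracket with its opener; since a closer always appears after everything nested inside it, emitting pairs in the order the closers appear yields the innermost-first processing order for free. For brevity, mismatched bracket kinds (e.g. '(' closed by ']') are not diagnosed.

      #include <stddef.h>

      typedef struct { size_t open, close; } bracket_pair;

      /* Write matched pairs to 'out' in innermost-first order.
       * Returns the number of pairs, or -1 on unbalanced input. */
      long match_pairs(const char *toks, size_t n, bracket_pair *out) {
          size_t stack[256], sp = 0, np = 0;
          for (size_t i = 0; i < n; i++) {
              char c = toks[i];
              if (c == '(' || c == '[' || c == '{') {
                  if (sp == 256) return -1; /* nested too deeply */
                  stack[sp++] = i;
              } else if (c == ')' || c == ']' || c == '}') {
                  if (sp == 0) return -1;   /* closer without opener */
                  out[np].open = stack[--sp];
                  out[np].close = i;
                  np++;
              }
          }
          return sp == 0 ? (long)np : -1;  /* opener without closer */
      }

    For the input "f(a[i]{})", the pairs come out as (3,5), (6,7), (1,8): both depth-two pairs appear before the enclosing parentheses, so each outer scan sees its inner pairs already resolved.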

    TODO: The memory layout of the AST (see Zig), and how the parser rearranges input tokens to fit it.