Communication with the client (mainly processing the client's input)
* understand and implement the Postgres wire protocol
* set up concurrent connections
* when the client sends a query, we hand it to the parser

Parsing
Can be started now; it is basically a function String -> Result.

Validation of the operation
Parsing only gives you the raw syntax of an operation; the operation need not make sense (e.g. the table doesn't exist), so we need to validate it. The output of validation is
* a proper operation, validated against the database schema
* a rejection when e.g. the table doesn't exist, a condition refers to a non-existent column, etc.
This phase doesn't need access to runtime data (e.g. rows or indices), just the table schemas.
==========Locking of tables should happen here==========

Interpretation
Change the state of the table according to the validated operation.

Responding to the client
* with error messages
* with a success message (after insert/delete etc.)
* with rows
==========The lock on the table should be dropped here==========

Serialization/Deserialization to disk
There are two approaches:
* one is incremental (and very hard): a storage engine where you have a huge database on disk encoding something like a B-tree, and a small in-memory view (cache) of what's on disk, and you constantly try to keep the two in sync.
* the other: when the server is shut down, just serialize the in-memory database state to disk as e.g. JSON. Trivial to implement.
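The `String -> Result` shape of the parsing step can be sketched as below. This is a minimal illustration, not a real SQL parser: the `Statement` type, the `ParseError` type, and the tiny SELECT-only grammar are all assumptions made up for the example.

```rust
// Minimal sketch of the parser's String -> Result shape.
// `Statement`, `ParseError`, and the grammar are illustrative only.
#[derive(Debug, PartialEq)]
enum Statement {
    Select { columns: Vec<String>, table: String },
}

#[derive(Debug, PartialEq)]
struct ParseError(String);

fn parse(input: &str) -> Result<Statement, ParseError> {
    let tokens: Vec<&str> = input.split_whitespace().collect();
    match tokens.as_slice() {
        // Only handles the shape: SELECT <col,...> FROM <table>
        ["SELECT", cols, "FROM", table] => Ok(Statement::Select {
            columns: cols.split(',').map(str::to_string).collect(),
            table: (*table).to_string(),
        }),
        _ => Err(ParseError(format!("unrecognized query: {input}"))),
    }
}
```

Note that the parser succeeds or fails on syntax alone; whether `users` is an actual table is deliberately not its concern.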
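The validation phase described above (schema-only, no runtime data) might look like the following sketch. The `Select` struct and the `Schema` representation (table name mapped to column names) are assumptions for illustration.

```rust
// Sketch of schema-level validation: needs only table schemas, never row data.
use std::collections::HashMap;

// Assumed parsed form coming out of the parser (illustrative).
struct Select {
    columns: Vec<String>,
    table: String,
}

// The schema maps table names to their column names.
type Schema = HashMap<String, Vec<String>>;

fn validate(stmt: &Select, schema: &Schema) -> Result<(), String> {
    // Reject if the table doesn't exist.
    let cols = schema
        .get(&stmt.table)
        .ok_or_else(|| format!("table '{}' does not exist", stmt.table))?;
    // Reject if any referenced column doesn't exist in that table.
    for c in &stmt.columns {
        if !cols.contains(c) {
            return Err(format!("column '{}' not in table '{}'", c, stmt.table));
        }
    }
    Ok(())
}
```

Because `validate` takes only the schema, it can run before any table lock is taken and before any rows are touched.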
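The two `====` markers (take the lock before interpretation, drop it after responding) map naturally onto a lock guard's scope. A rough sketch, assuming a hypothetical `Table` of string rows and a per-table `RwLock`:

```rust
// Sketch of the lock scope: the write lock is taken before interpretation
// and released only once the response has been built. Types are illustrative.
use std::sync::{Arc, RwLock};

// A table is just rows of strings here; real storage would be richer.
type Table = Vec<Vec<String>>;

fn handle_insert(table: &Arc<RwLock<Table>>, row: Vec<String>) -> String {
    // ========== lock is taken here ==========
    let mut guard = table.write().unwrap();
    // Interpretation: change the state of the table.
    guard.push(row);
    // Responding: build the success message while still holding the lock.
    let response = format!("INSERT 0 1 ({} rows total)", guard.len());
    response
    // ========== guard dropped here: lock released after responding ==========
}
```

Tying the lock's lifetime to the guard's scope makes it hard to forget the release step, which is the main appeal of this structure.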
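The second, trivial serialization approach (dump everything on shutdown) can be sketched as below. The hand-rolled JSON encoding and the `Db` shape (table name mapped to rows of strings) are assumptions; a real server would likely reach for a serialization library and handle string escaping properly.

```rust
// Sketch of the "serialize everything on shutdown" approach, with a
// hand-rolled JSON encoding. Does NOT escape quotes inside cell values.
use std::collections::BTreeMap;

// Table name -> rows; BTreeMap keeps the output deterministic.
type Db = BTreeMap<String, Vec<Vec<String>>>;

fn to_json(db: &Db) -> String {
    let tables: Vec<String> = db
        .iter()
        .map(|(name, rows)| {
            let rows_json: Vec<String> = rows
                .iter()
                .map(|r| {
                    let cells: Vec<String> =
                        r.iter().map(|c| format!("\"{}\"", c)).collect();
                    format!("[{}]", cells.join(","))
                })
                .collect();
            format!("\"{}\":[{}]", name, rows_json.join(","))
        })
        .collect();
    format!("{{{}}}", tables.join(","))
}
```

On startup the server would read this file back and parse it into the in-memory state; the cost of the approach is that a crash before shutdown loses everything since the last dump.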