Clear up division of labour

This commit is contained in:
Yuriy Dupyn 2024-01-28 22:51:27 +01:00
parent 53c5d3f3f7
commit c4de02c1e6


Communication with client (mainly processing client's input)
* understand and implement the postgres protocol
* set up and manage concurrent connections
* when client sends a query, we send it to the parser
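After startup, the Postgres wire protocol frames most messages as a 1-byte type tag followed by a 4-byte big-endian length (the length counts itself but not the tag). A minimal sketch of encoding the simple-query (`'Q'`) message, just to illustrate the framing:

```rust
/// Encode a Postgres frontend simple-query ('Q') message:
/// 1-byte type tag, 4-byte big-endian length (including itself,
/// excluding the tag), then the NUL-terminated query string.
fn encode_query(sql: &str) -> Vec<u8> {
    let body_len = sql.len() + 1; // query bytes + trailing NUL
    let mut msg = Vec::with_capacity(1 + 4 + body_len);
    msg.push(b'Q');
    msg.extend_from_slice(&((4 + body_len) as u32).to_be_bytes());
    msg.extend_from_slice(sql.as_bytes());
    msg.push(0);
    msg
}
```

Decoding on the server is the mirror image: read the tag byte, read the length, then read `length - 4` body bytes.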
Parsing can be started now
basically a function
String -> Result<RawOperationSyntax, ParsingError>
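Sketched in Rust, the signature above could look like this. `RawOperationSyntax` and `ParsingError` come from the notes; the grammar here is a made-up placeholder that recognizes only `SELECT * FROM <table>`:

```rust
// Placeholder syntax tree: a real parser would cover the full grammar.
#[derive(Debug, PartialEq)]
enum RawOperationSyntax {
    Select { table: String },
}

#[derive(Debug, PartialEq)]
struct ParsingError(String);

fn parse(input: &str) -> Result<RawOperationSyntax, ParsingError> {
    let mut words = input.split_whitespace();
    match (words.next(), words.next(), words.next(), words.next()) {
        (Some("SELECT"), Some("*"), Some("FROM"), Some(table)) => {
            Ok(RawOperationSyntax::Select {
                table: table.trim_end_matches(';').to_string(),
            })
        }
        _ => Err(ParsingError(format!("unsupported statement: {input}"))),
    }
}
```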
Validation of operation
When parsing gives you a raw syntax of an operation, it need not make sense (e.g. table doesn't exist)
So we need to validate it.
The output of the validation is
* relevant table
* a proper operation validated against the database schema
* reject when e.g. the table doesn't exist, a condition refers to a non-existent column, etc.
* This phase doesn't need access to runtime data (e.g. rows or indices), just the table schemas.
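A rough sketch of that phase, assuming the schema is just a map from table name to column names (all names here are hypothetical placeholders):

```rust
use std::collections::HashMap;

#[derive(Debug, PartialEq)]
enum ValidationError {
    UnknownTable(String),
    UnknownColumn(String),
}

// A "proper" operation: validated against the schema, carries the
// resolved table and columns.
#[derive(Debug, PartialEq)]
struct ValidatedSelect {
    table: String,
    columns: Vec<String>,
}

fn validate(
    schema: &HashMap<String, Vec<String>>,
    table: &str,
    requested: &[&str],
) -> Result<ValidatedSelect, ValidationError> {
    // Reject unknown tables.
    let columns = schema
        .get(table)
        .ok_or_else(|| ValidationError::UnknownTable(table.to_string()))?;
    // Reject references to non-existent columns.
    for col in requested {
        if !columns.iter().any(|c| c.as_str() == *col) {
            return Err(ValidationError::UnknownColumn(col.to_string()));
        }
    }
    Ok(ValidatedSelect {
        table: table.to_string(),
        columns: requested.iter().map(|c| c.to_string()).collect(),
    })
}
```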
==========Locking of tables should happen here========
[...]
Responding to the client
==========Lock on the table should be dropped here===========
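One way to realize that lock window, sketched with std's `RwLock` (the table representation is a placeholder): the guard is acquired at the start of execution and dropped automatically when it goes out of scope, after the response is produced.

```rust
use std::sync::{Arc, RwLock};

// Sketch of the lock window: take the table lock, hold it through
// execution, drop it after responding.
fn execute_locked(table: &Arc<RwLock<Vec<Vec<String>>>>) -> usize {
    let rows = table.read().expect("lock poisoned"); // lock taken here
    rows.len() // ...execute the operation, build the response...
} // lock dropped here when `rows` goes out of scope
```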
Serialization/Deserialization to disk
There are two approaches
* one is incremental (and very hard): a storage engine where you have a huge database on disk,
encoded as something like a B-tree, plus a small in-memory view (cache) of what's on disk,
which you constantly try to keep in sync.
* the other is: when the server is shut down, just serialize the in-memory database state to disk
as e.g. JSON. Trivial to implement.
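A toy version of the shutdown approach, assuming the whole database fits in a map of table name to string rows. The JSON is hand-rolled here to keep the sketch dependency-free; a real implementation would reach for serde_json:

```rust
use std::collections::BTreeMap;

// Hypothetical in-memory database: table name -> rows of string cells.
type Database = BTreeMap<String, Vec<Vec<String>>>;

// Escape backslashes and quotes so cells survive the round trip.
fn escape(s: &str) -> String {
    s.replace('\\', "\\\\").replace('"', "\\\"")
}

// Serialize the whole database as one JSON object, to be written to
// disk on shutdown.
fn to_json(db: &Database) -> String {
    let tables: Vec<String> = db
        .iter()
        .map(|(name, rows)| {
            let rows_json: Vec<String> = rows
                .iter()
                .map(|row| {
                    let cells: Vec<String> =
                        row.iter().map(|c| format!("\"{}\"", escape(c))).collect();
                    format!("[{}]", cells.join(","))
                })
                .collect();
            format!("\"{}\":[{}]", escape(name), rows_json.join(","))
        })
        .collect();
    format!("{{{}}}", tables.join(","))
}
```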