update docs and add more questions
Signed-off-by: Alex Chi <iskyzh@gmail.com>
@@ -166,6 +166,7 @@ You may print something, for example, the compaction task information, when the

* Actively choosing some old files/levels to compact even if they do not violate the level amplifier can be a good choice. Is that true? (Look at the [Lethe](https://disc-projects.bu.edu/lethe/) paper!)

* If the storage device can achieve a sustainable 1GB/s write throughput and the write amplification of the LSM tree is 10x, how much throughput can the user get from the LSM key-value interfaces?

* Can you merge L1 and L3 directly if there are SST files in L2? Does it still produce a correct result?

* So far, we have assumed that our SST files use a monotonically increasing id as the file name. Is it okay to use `<level>_<begin_key>_<end_key>.sst` as the SST file name? What might be the potential problems with that? (You can ask yourself the same question in week 3...)

* What is your favorite boba shop in your city? (If you answered yes in week 1 day 3...)

We do not provide reference answers to these questions. Feel free to discuss them in the Discord community.

@@ -80,6 +80,7 @@ get 1500

* When do you need to call `fsync`? Why do you need to `fsync` the directory?

* What are the places you will need to write to the manifest?

* Consider an alternative implementation of an LSM engine that does not use a manifest file. Instead, it records the level/tier information in the header of each file, scans the storage directory every time it restarts, and recovers the LSM state solely from the files present in the directory. Is it possible to correctly maintain the LSM state in this implementation, and what might be the problems/challenges with that?

## Bonus Tasks

@@ -19,3 +19,8 @@ keep all versions, split file, run merge iterator tests

return the latest version

pass all tests except week 2 day 6

## Test Your Understanding

* What is the difference between `get` in the MVCC engine and `get` in the engine you built in week 2?

* In week 2, when you `get` a key, you stop at the first memtable/level where it is found. Can you do the same in the MVCC version?

@@ -13,3 +13,7 @@ For now, inner = `Fused<LsmIterator>`, do not use `TxnLocalIterator`

explain why store txn inside iterator

do not implement put and delete

## Test Your Understanding

* So far, we have assumed that our SST files use a monotonically increasing id as the file name. Is it okay to use `<level>_<begin_key>_<end_key>_<max_ts>.sst` as the SST file name? What might be the potential problems with that?

@@ -5,3 +5,7 @@

## Task 2: Serializable: Record Read Set and Write Set

## Task 3: Serializable Verification

## Test Your Understanding

* If you have some experience with building a relational database, you may think about the following question: assume that we build a database based on Mini-LSM, storing each row of a relational table as a key-value pair, and enable serializable verification. Does the database system directly gain the ANSI serializable isolation level? Why or why not?

@@ -4,6 +4,8 @@ In this part, you will implement MVCC over the LSM engine that you have built in

The key of MVCC is to store and access multiple versions of a key in the storage engine. Therefore, we will need to change the key format to `user_key + timestamp (u64)`. On the user interface side, we will need new APIs that let users access historical versions. In summary, we will add a monotonically increasing timestamp to the key.

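The ordering implied by this key format can be sketched as follows. This is an illustrative type, not the tutorial's actual `KeyBytes` implementation: the point is that user keys compare ascending, while for the same user key the larger (newer) timestamp sorts first.

```rust
use std::cmp::Ordering;

// Hypothetical sketch: `MvccKey` pairs a user key with a u64 timestamp.
#[derive(Debug, PartialEq, Eq)]
struct MvccKey {
    user_key: Vec<u8>,
    ts: u64,
}

impl Ord for MvccKey {
    fn cmp(&self, other: &Self) -> Ordering {
        self.user_key
            .cmp(&other.user_key)
            // reversed timestamp comparison: newer versions come first
            .then(other.ts.cmp(&self.ts))
    }
}

impl PartialOrd for MvccKey {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}

fn main() {
    let newer = MvccKey { user_key: b"a".to_vec(), ts: 10 };
    let older = MvccKey { user_key: b"a".to_vec(), ts: 5 };
    assert!(newer < older); // the newer version of the same key sorts first
    println!("ok");
}
```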
In previous parts, we assumed that newer keys are in the upper levels of the LSM tree and older keys are in the lower levels. During compaction, we keep only the latest version of a key if multiple versions are found across levels, and the compaction process ensures that newer keys stay in the upper levels by only merging adjacent levels/tiers. In the MVCC implementation, the key with the largest timestamp is the newest, and during compaction we can remove an old version of a key only if no user is still accessing it. Although not keeping the latest version of a key in the upper level may still yield a correct MVCC LSM implementation, in this tutorial we choose to keep the invariant: if there are multiple versions of a key, the later version always appears in an upper level.

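The "no user is accessing an older version" condition can be sketched with a hypothetical predicate (the function name and shape are illustrative, not the tutorial's API). Assume a *watermark*, the lowest read timestamp among active readers: a version may be dropped only when a newer version of the same key is already visible at the watermark, so no reader can ever observe the old one.

```rust
// Hedged sketch of a garbage-collection rule during compaction.
// `newer_version_ts` is the timestamp of the next-newer version of the same
// user key, if one exists in the compaction input; names are hypothetical.
fn can_drop(version_ts: u64, newer_version_ts: Option<u64>, watermark: u64) -> bool {
    match newer_version_ts {
        // a newer version at or below the watermark shadows this one
        // for every current and future reader
        Some(newer) => newer <= watermark && version_ts < newer,
        // the latest version of a key is always kept
        None => false,
    }
}

fn main() {
    // readers are all at ts >= 7, and they see version 5, so version 3 is dead
    assert!(can_drop(3, Some(5), 7));
    // version 5 is the latest version of its key: keep it
    assert!(!can_drop(5, None, 7));
    // a newer version above the watermark does not shadow older ones
    // for a reader still pinned at ts 7
    assert!(!can_drop(3, Some(9), 7));
    println!("ok");
}
```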
Generally, there are two ways of using a storage engine with MVCC support. If the user uses the engine as a standalone component and does not want to manually assign timestamps to keys, they will use the transaction APIs to store and retrieve data, and the timestamps stay transparent to them. The other way is to integrate the storage engine into a larger system, where the user manages the timestamps by themselves. To compare these two approaches, we can look at the APIs they provide. We use BadgerDB's terminology to describe these two usages: the one that hides the timestamp is *un-managed mode*, and the one that gives the user full control is *managed mode*.

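As a rough illustration of the managed-mode shape, here is a toy store whose callers supply timestamps explicitly. All names here are hypothetical, not BadgerDB's or the tutorial's real interfaces; a `BTreeMap` keyed by `(user_key, ts)` stands in for the LSM tree.

```rust
use std::collections::BTreeMap;

// Toy sketch of "managed mode": the caller picks read/commit timestamps.
struct ManagedStore {
    // (user_key, timestamp) -> value; a real engine would store this in the LSM tree
    data: BTreeMap<(Vec<u8>, u64), Vec<u8>>,
}

impl ManagedStore {
    fn new() -> Self {
        Self { data: BTreeMap::new() }
    }

    fn put_with_ts(&mut self, key: &[u8], value: &[u8], commit_ts: u64) {
        self.data.insert((key.to_vec(), commit_ts), value.to_vec());
    }

    // return the newest version of `key` visible at `read_ts`
    fn get_with_ts(&self, key: &[u8], read_ts: u64) -> Option<Vec<u8>> {
        self.data
            .range((key.to_vec(), 0)..=(key.to_vec(), read_ts))
            .next_back()
            .map(|(_, v)| v.clone())
    }
}

fn main() {
    let mut store = ManagedStore::new();
    store.put_with_ts(b"k", b"v1", 1);
    store.put_with_ts(b"k", b"v2", 5);
    // a reader at ts 3 sees the version committed at ts 1
    assert_eq!(store.get_with_ts(b"k", 3), Some(b"v1".to_vec()));
    // a reader at ts 5 sees the version committed at ts 5
    assert_eq!(store.get_with_ts(b"k", 5), Some(b"v2".to_vec()));
    println!("ok");
}
```

In un-managed mode the same `get`/`put` calls would instead go through a transaction handle, with the engine choosing the read and commit timestamps internally.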
**Managed Mode APIs**