* What are the definitions of read/write/space amplifications? (This is covered in the overview chapter)
* What are the ways to accurately compute the read/write/space amplifications, and what are the ways to estimate them? (See the counter-based sketch after this list for one rough way to express them.)
* Is it correct that a key will take some storage space even if a user requests to delete it?
* Given that compaction takes a lot of read and write bandwidth and may interfere with foreground operations, it is a good idea to postpone compaction when there is a large write flow, and it may even be beneficial to stop/pause existing compaction tasks in this situation. What do you think of this idea? (Read the [SILK: Preventing Latency Spikes in Log-Structured Merge Key-Value Stores](https://www.usenix.org/conference/atc19/presentation/balmau) paper!)
* Is it a good idea to use/fill the block cache for compactions? Or is it better to fully bypass the block cache during compaction?
* Does it make sense to have a `struct ConcatIterator<I: StorageIterator>` in the system? (See the sketch after this list.)
* Some researchers/engineers propose to offload compaction to a remote server or a serverless lambda function. What are the benefits, and what might be the potential challenges and performance impacts of doing remote compaction? (Think of the point when a compaction completes and what happens to the block cache on the next read request...)
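
As a reference point for the amplification questions above, here is a minimal sketch of how these metrics are commonly expressed as ratios of simple counters. The `EngineStats` struct and its fields are hypothetical and exist only for illustration; a real engine may obtain these numbers differently (e.g., from flush/compaction statistics or by measuring the SST directory on disk).

```rust
/// Hypothetical counters, for illustration only.
struct EngineStats {
    user_bytes_written: u64, // bytes of key-value data written by the user
    disk_bytes_written: u64, // bytes written to disk (flushes + compaction rewrites)
    user_bytes_read: u64,    // bytes of key-value data returned to the user
    disk_bytes_read: u64,    // bytes read from disk to serve those reads
    live_data_bytes: u64,    // logical size of the latest version of all live keys
    total_sst_bytes: u64,    // total size of all SST files on disk
}

impl EngineStats {
    /// Write amplification: disk bytes written per byte of user write.
    fn write_amplification(&self) -> f64 {
        self.disk_bytes_written as f64 / self.user_bytes_written as f64
    }

    /// Read amplification: disk bytes read (or I/O requests) per user read.
    fn read_amplification(&self) -> f64 {
        self.disk_bytes_read as f64 / self.user_bytes_read as f64
    }

    /// Space amplification: on-disk size relative to the logical size of the
    /// live data. The logical size can be computed exactly by scanning the
    /// whole database, or estimated from the output size of a full compaction.
    fn space_amplification(&self) -> f64 {
        self.total_sst_bytes as f64 / self.live_data_bytes as f64
    }
}
```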
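
For the `ConcatIterator` question, here is a minimal sketch of what such an iterator could look like, assuming a simplified `StorageIterator`-like trait and the `anyhow` crate for errors (both are stand-ins, not the system's actual definitions). The key assumption is that the inner iterators cover sorted, non-overlapping key ranges, so they can be chained one after another without any merge logic.

```rust
use anyhow::Result;

/// A simplified stand-in for the storage iterator interface.
trait StorageIterator {
    fn key(&self) -> &[u8];
    fn value(&self) -> &[u8];
    fn is_valid(&self) -> bool;
    fn next(&mut self) -> Result<()>;
}

/// Chains iterators whose key ranges are sorted and non-overlapping
/// (e.g., SSTs within one sorted run). Unlike a merge iterator, no heap is
/// needed: when the current iterator is exhausted, move on to the next one.
struct ConcatIterator<I: StorageIterator> {
    iters: Vec<I>,
    current: usize,
}

impl<I: StorageIterator> ConcatIterator<I> {
    fn new(iters: Vec<I>) -> Self {
        let mut iter = Self { iters, current: 0 };
        iter.skip_exhausted();
        iter
    }

    /// Advance `current` past any exhausted (or empty) inner iterators.
    fn skip_exhausted(&mut self) {
        while self.current < self.iters.len() && !self.iters[self.current].is_valid() {
            self.current += 1;
        }
    }
}

impl<I: StorageIterator> StorageIterator for ConcatIterator<I> {
    // `key`/`value` may only be called while `is_valid()` returns true.
    fn key(&self) -> &[u8] {
        self.iters[self.current].key()
    }

    fn value(&self) -> &[u8] {
        self.iters[self.current].value()
    }

    fn is_valid(&self) -> bool {
        self.current < self.iters.len()
    }

    fn next(&mut self) -> Result<()> {
        self.iters[self.current].next()?;
        self.skip_exhausted();
        Ok(())
    }
}
```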