Reference:
Memory Hierarchy Models: https://www.youtube.com/watch?v=V3omVLzI0WE
TLPI: http://man7.org/tlpi/
When designing a data structure, target the relevant level of the memory hierarchy.
External memory model (consider only the last two levels: cache -> disk):
(cell probes, i.e., words read from memory)/B <= memory transfers <= RAM running time
1. Sequential scanning.
O(⌈N/B⌉) memory transfers, i.e., ceiling of N over B.
B is the block size (number of words moved per memory transfer).
N is the number of data items.
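A minimal Python sketch of the counting argument (the simulator and its names are mine, not from the lecture; it assumes the array starts at a block boundary): a scan over N contiguous words touches exactly ⌈N/B⌉ distinct blocks, no matter what it computes per word.

import math

def scan_transfers(N: int, B: int) -> int:
    # Count the distinct blocks a sequential scan touches when the N words
    # are laid out contiguously starting at a block boundary.
    blocks_touched = set()
    for i in range(N):             # word i lives in block i // B
        blocks_touched.add(i // B)
    return len(blocks_touched)

if __name__ == "__main__":
    N, B = 1000, 64
    print(scan_transfers(N, B))    # 16
    print(math.ceil(N / B))        # 16, matches ceil(N/B)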
2. Search trees: B-trees (branching factor B+1, i.e., B keys per node, so each node fits in one block).
O(log_{B+1} N) memory transfers per search, which is optimal; compare a binary search tree, which needs O(log_2 N) transfers.
Ω(log_{B+1} N) lower bound for search in the comparison model.
Proof sketch:
Each block read compares the query against at most B items, revealing at most log_2(B+1) bits; locating the query among N items requires log_2(N+1) bits, so take the ratio: Ω(log_{B+1} N) transfers.
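A small Python sketch of the two heights being compared (function names are mine; it assumes each node holds B keys, hence B+1 children, and fills one block):

import math

def btree_search_transfers(N: int, B: int) -> int:
    # Height of a tree with branching factor B+1 over N keys:
    # one memory transfer per node on a root-to-leaf search path.
    return max(1, math.ceil(math.log(N + 1, B + 1)))

if __name__ == "__main__":
    N, B = 10**9, 1024
    print(btree_search_transfers(N, B))    # about 3 transfers
    print(math.ceil(math.log2(N + 1)))     # about 30 for a binary search tree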
3. Sorting.
O((N/B) log_{M/B}(N/B)) memory transfers, where M is the cache size, achieved by an (M/B)-way mergesort that reads and writes data one block of B words at a time.
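A Python sketch of where the bound comes from (the parameter values and function name are illustrative assumptions): an (M/B)-way mergesort starts from ⌈N/B⌉ one-block runs, cuts the number of runs by a factor of M/B per pass, and every pass scans all ⌈N/B⌉ blocks.

import math

def external_mergesort_transfers(N: int, M: int, B: int) -> int:
    # Each pass reads and writes all ceil(N/B) blocks and merges runs
    # (M/B)-way, so the number of runs drops by a factor of M/B per pass.
    n_blocks = math.ceil(N / B)
    fan_in = M // B
    runs, passes = n_blocks, 0
    while runs > 1:
        runs = math.ceil(runs / fan_in)
        passes += 1
    return n_blocks * passes

if __name__ == "__main__":
    N, M, B = 10**9, 10**6, 10**3   # assumed sizes, in words
    # 10^6 initial one-block runs, 1000-way merging -> 2 passes -> 2 * 10^6 transfers
    print(external_mergesort_transfers(N, M, B))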
4. Permuting.
O(min{N, (N/B) log_{M/B}(N/B)}) memory transfers: either move each item individually (N transfers) or sort the items by their target positions.
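A quick Python comparison of the two terms in the min (the sizes are made up): with realistic block sizes the sorting route wins, while for tiny B moving items one at a time is already as cheap.

import math

def permute_transfers(N: int, M: int, B: int) -> int:
    # min{ move each item with its own transfer, sort items by target position }
    n_blocks = math.ceil(N / B)
    fan_in = max(2, M // B)
    runs, passes = n_blocks, 0
    while runs > 1:
        runs = math.ceil(runs / fan_in)
        passes += 1
    return min(N, n_blocks * passes)

if __name__ == "__main__":
    print(permute_transfers(10**9, 10**6, 10**3))   # sorting route: 2 * 10^6
    print(permute_transfers(100, 8, 2))             # one-at-a-time route: 100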
5. Buffer tree.
O((1/B) log_{M/B}(N/B)) amortized memory transfers per operation.
Supports delayed queries and batched updates.
tbd
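In the meantime, a toy Python sketch of the buffering idea only, not a real buffer tree (the class name and parameters are mine): collecting B pending updates in cache and flushing them as one block write charges each update an amortized 1/B of a transfer; the actual buffer tree cascades such buffers down the tree to reach the (1/B) log_{M/B}(N/B) amortized bound.

import math

class BufferedInserter:
    # Toy model of the buffering trick: hold up to B pending updates in cache
    # and flush them to disk as a single block write, so each update pays an
    # amortized 1/B of a memory transfer instead of a full transfer.

    def __init__(self, B: int):
        self.B = B
        self.pending = []        # updates buffered in cache
        self.transfers = 0       # block writes performed so far

    def insert(self, item):
        self.pending.append(item)
        if len(self.pending) == self.B:
            self.flush()

    def flush(self):
        if self.pending:
            self.transfers += 1  # one block write covers up to B updates
            self.pending.clear()

if __name__ == "__main__":
    B = 64
    buf = BufferedInserter(B)
    for x in range(10_000):
        buf.insert(x)
    buf.flush()
    print(buf.transfers, math.ceil(10_000 / B))   # 157 157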