src/data_structures/sqrt-tree.md (12 additions, 12 deletions)
@@ -10,9 +10,9 @@ Also we have some queries $q(l, r)$. For each query, we need to compute $a_l \ci
Sqrt Tree can process such queries in $O(1)$ time with $O(n \cdot \log \log n)$ preprocessing time and $O(n \cdot \log \log n)$ memory.
-# Description
+## Description
-## Building sqrt decomposition
+### Building sqrt decomposition
Let's make a [sqrt decomposition](/data_structures/sqrt_decomposition.html). We divide our array into $\sqrt{n}$ blocks, each of size $\sqrt{n}$. For each block, we compute:
@@ -55,7 +55,7 @@ We already can answer some queries using these arrays. If the query doesn't fit
But if we have queries that entirely fit into one block, we cannot process them using these three arrays. So, we need to do something.
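The arrays above already handle any query that crosses a block boundary: take a suffix of the left block, a prefix of the right block, and a precomputed answer for the whole blocks in between. A minimal single-layer sketch, with sum as the operation; the names `prefixOp`, `suffixOp` and `between` follow the text, everything else is an illustrative assumption:

```cpp
#include <bits/stdc++.h>
using namespace std;

// One layer of the structure described above, with op = +.
struct SqrtLayer {
    int n, b, blocks;                  // array size, block size, block count
    vector<long long> prefixOp;        // prefix sums inside each block
    vector<long long> suffixOp;        // suffix sums inside each block
    vector<vector<long long>> between; // between[i][j] = sum of whole blocks i..j

    SqrtLayer(const vector<long long>& a) : n((int)a.size()) {
        b = max(1, (int)sqrt((double)n));
        blocks = (n + b - 1) / b;
        prefixOp.resize(n);
        suffixOp.resize(n);
        for (int i = 0; i < blocks; i++) {
            int l = i * b, r = min(n, l + b);
            prefixOp[l] = a[l];
            for (int j = l + 1; j < r; j++) prefixOp[j] = prefixOp[j - 1] + a[j];
            suffixOp[r - 1] = a[r - 1];
            for (int j = r - 2; j >= l; j--) suffixOp[j] = suffixOp[j + 1] + a[j];
        }
        between.assign(blocks, vector<long long>(blocks));
        for (int i = 0; i < blocks; i++) {
            long long cur = 0;
            for (int j = i; j < blocks; j++) {
                cur += suffixOp[j * b];   // suffix from block start = whole block j
                between[i][j] = cur;
            }
        }
    }

    // O(1) query, valid only when l and r lie in different blocks.
    long long query(int l, int r) {
        int bl = l / b, br = r / b;
        assert(bl < br);
        long long res = suffixOp[l] + prefixOp[r];
        if (bl + 1 <= br - 1) res += between[bl + 1][br - 1];
        return res;
    }
};
```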
-## Making a tree
+### Making a tree
We cannot answer only the queries that entirely fit in one block. But what **if we build the same structure as described above for each block?** Yes, we can do it. And we do it recursively, until we reach the block size of $1$ or $2$. Answers for such blocks can be calculated easily in $O(1)$.
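The recursion depth is easy to check numerically: the block size goes $n \to \sqrt{n} \to \sqrt{\sqrt{n}} \to \dots$, so the bit length of the size roughly halves on every layer, giving $O(\log \log n)$ layers. A tiny sketch, assuming the child block size is the ceiling of the square root:

```cpp
#include <bits/stdc++.h>

// Count recursion layers until the block size drops to 1 or 2.
int layers(long long n) {
    int cnt = 0;
    while (n > 2) {
        n = (long long)ceil(sqrt((double)n)); // child block size
        cnt++;
    }
    return cnt;
}
```

For example, an array of size $256$ goes $256 \to 16 \to 4 \to 2$, i.e. three layers below the root.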
@@ -67,7 +67,7 @@ Now we can answer the queries in $O(\log \log n)$. We can go down on the tree un
OK, now we can do $O(\log \log n)$ per query. Can it be done faster?
-## Optimizing the query complexity
+### Optimizing the query complexity
One of the most obvious optimizations is to binary search for the tree node we need. Using binary search, we can reach $O(\log \log \log n)$ complexity per query. Can we do it even faster?
@@ -100,23 +100,23 @@ For more details, see the code below.
So, using this, we can answer the queries in $O(1)$ each. Hooray! :)
-# Updating elements
+## Updating elements
We can also update elements in Sqrt Tree. Both single element updates and updates on a segment are supported.
-## Updating a single element
+### Updating a single element
Consider a query $\text{update}(x, val)$ that does the assignment $a_x = val$. We need to perform this query fast enough.
-### Naive approach
+#### Naive approach
First, let's take a look at what changes in the tree when a single element changes. Consider a tree node with length $l$ and its arrays: $\text{prefixOp}$, $\text{suffixOp}$ and $\text{between}$. It is easy to see that only $O(\sqrt{l})$ elements of $\text{prefixOp}$ and $\text{suffixOp}$ change (only inside the block with the changed element), while $O(l)$ elements of $\text{between}$ change. Therefore, $O(l)$ elements in the tree node are updated.
We remember that any element $x$ is present in exactly one tree node at each layer. Root node (layer $0$) has length $O(n)$, nodes on layer $1$ have length $O(\sqrt{n})$, nodes on layer $2$ have length $O(\sqrt{\sqrt{n}})$, etc. So the time complexity per update is $O(n + \sqrt{n} + \sqrt{\sqrt{n}} + \dots) = O(n)$.
But it's too slow. Can it be done faster?
-### An sqrt-tree inside the sqrt-tree
+#### An sqrt-tree inside the sqrt-tree
Note that the bottleneck of updating is rebuilding $\text{between}$ of the root node. To optimize the tree, let's get rid of this array! Instead of the $\text{between}$ array, we store another sqrt-tree for the root node. Let's call it $\text{index}$. It plays the same role as $\text{between}$: answering queries on segments of blocks. Note that the rest of the tree nodes don't have $\text{index}$; they keep their $\text{between}$ arrays.
@@ -134,15 +134,15 @@ Note that the query complexity is still $O(1)$: we need to use $\text{index}$ in
So, total time complexity for updating a single element is $O(\sqrt{n})$. Hooray! :)
-## Updating a segment
+### Updating a segment
Sqrt-tree can also assign a value on a segment: $\text{massUpdate}(x, l, r)$ means $a_i = x$ for all $l \le i \le r$.
There are two approaches to do this: one of them does $\text{massUpdate}$ in $O(\sqrt{n}\cdot \log \log n)$, keeping $O(1)$ per query. The second one does $\text{massUpdate}$ in $O(\sqrt{n})$, but the query complexity becomes $O(\log \log n)$.
We will do lazy propagation in the same way as it is done in segment trees: we mark some nodes as _lazy_, meaning that we'll push them when necessary. But one thing differs from segment trees: pushing a node is expensive, so it cannot be done during queries. On layer $0$, pushing a node takes $O(\sqrt{n})$ time. So, we don't push nodes inside queries; we only check whether the current node or its parent is _lazy_ and take that into account while performing queries.
-### First approach
+#### First approach
In the first approach, we say that only nodes on layer $1$ (with length $O(\sqrt{n})$) can be _lazy_. When pushing such a node, it updates its entire subtree, including itself, in $O(\sqrt{n}\cdot \log \log n)$. The $\text{massUpdate}$ process is done as follows:
@@ -164,7 +164,7 @@ So we can do $\text{massUpdate}$ fast. But how lazy propagation affects queries?
The query complexity still remains $O(1)$.
-### Second approach
+#### Second approach
In this approach, each node except the root can be _lazy_. Even nodes in $\text{index}$ can be _lazy_. So, while processing a query, we have to look for _lazy_ tags in all the parent nodes, i.e. the query complexity becomes $O(\log \log n)$.
@@ -182,7 +182,7 @@ But $\text{massUpdate}$ becomes faster. It looks in the following way:
Note that when we do the recursive call, we do prefix or suffix $\text{massUpdate}$. But for prefix and suffix updates we can have no more than one partially covered child. So, we visit one node on layer $1$, two nodes on layer $2$ and two nodes on any deeper level. So, the time complexity is $O(\sqrt{n} + \sqrt{\sqrt{n}} + \dots) = O(\sqrt{n})$. The approach here is similar to the segment tree mass update.
-# Implementation
+## Implementation
The following implementation of Sqrt Tree can perform the following operations: build in $O(n \cdot \log \log n)$, answer queries in $O(1)$ and update an element in $O(\sqrt{n})$.
src/graph/mpm.md (2 additions, 2 deletions)
@@ -3,7 +3,7 @@
MPM (Malhotra, Pramodh-Kumar and Maheshwari) algorithm solves the maximum flow problem in $O(V^3)$. This algorithm is similar to [Dinic's algorithm](./graph/dinic.html).
-# Algorithm
+## Algorithm
Like Dinic's algorithm, MPM runs in phases; during each phase we find a blocking flow in the layered network of the residual network of $G$.
The main difference from Dinic's is how we find the blocking flow.
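Concretely, MPM builds the blocking flow around vertex *potentials*: for each vertex $v$ of the layered network, $p(v) = \min(p_{in}(v), p_{out}(v))$, where $p_{in}$ and $p_{out}$ are the total residual capacities of the incoming and outgoing edges, and the vertex of minimum potential is saturated and removed. A sketch of just the potential computation; the `Edge` representation here is an assumption for illustration:

```cpp
#include <bits/stdc++.h>
using namespace std;

struct Edge { int from, to; long long cap, flow; };

// Potential of vertex v in the layered network:
// min(total incoming residual capacity, total outgoing residual capacity).
// By convention the source has unbounded p_in and the sink unbounded p_out.
long long potential(int v, int s, int t, const vector<Edge>& edges) {
    const long long INF = LLONG_MAX;
    long long pin  = (v == s) ? INF : 0;
    long long pout = (v == t) ? INF : 0;
    for (const Edge& e : edges) {
        long long res = e.cap - e.flow;       // residual capacity of the edge
        if (e.to == v && v != s) pin += res;
        if (e.from == v && v != t) pout += res;
    }
    return min(pin, pout);
}
```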
@@ -32,7 +32,7 @@ Each phase works in $O(V^2)$ because there are at most $V$ iterations (because a
Summing, we get $O(V^2 + E) = O(V^2)$.
Since there are fewer than $V$ phases (see the proof [here](./graph/dinic.html)), MPM works in $O(V^3)$ total.
src/sequences/k-th.md (1 addition, 1 deletion)
@@ -6,7 +6,7 @@ Given an array __A__ of size __N__ and a number __K__. The challenge is to find
The basic idea is to reuse the partitioning idea of the quicksort algorithm. The algorithm itself is simple; the harder part is proving that, in contrast to quicksort, it runs in $O(N)$ on average.
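A sketch of that partition-based selection (quickselect); the random pivot is an assumption here, used to keep the average case $O(N)$ even on adversarial inputs, and `std::nth_element` provides the same operation in the standard library:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Quickselect: returns the K-th smallest element (0-indexed), average O(N).
int kth_smallest(vector<int> a, int k) {
    int l = 0, r = (int)a.size() - 1;
    mt19937 rng(12345);
    while (l < r) {
        // Random pivot avoids the O(N^2) worst case on sorted input.
        int pivot = a[uniform_int_distribution<int>(l, r)(rng)];
        int i = l, j = r;
        while (i <= j) {                 // Hoare-style partition
            while (a[i] < pivot) i++;
            while (a[j] > pivot) j--;
            if (i <= j) swap(a[i++], a[j--]);
        }
        // Now a[l..j] <= pivot, a[i..r] >= pivot, the gap equals the pivot.
        if (k <= j) r = j;
        else if (k >= i) l = i;
        else return a[k];
    }
    return a[k];
}
```

Unlike quicksort, only the side containing index K is recursed into, which is why the expected work is $N + N/2 + N/4 + \dots = O(N)$.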