\fi
\def\TODO{{\bf TODO}}
\def\NOTE{{\bf NOTE}}
\def\LCA{\op{LCA}}
\section[hld]{Heavy-light decomposition}
Now we are ready to build a data structure for static trees using the \em{heavy-light
decomposition}. We assume our tree~$F$ is rooted and we orient all edges
up, towards the root. \NOTE\foot{maybe unnecessary now}
\defn{
Let~$F$ be a rooted tree. For any vertex~$v$ we define $s(v)$ to be the size of the
subtree rooted at~$v$. An edge between a vertex~$v$ and its parent~$p$ is called
\em{heavy} if $s(v) > s(p)/2$, otherwise it is called \em{light}.
}
Heavy edges form a system of vertex-disjoint paths, the \em{heavy paths};
each vertex~$v$ lies on exactly one heavy path (the path can consist of only~$v$).
\obs{
Any root-to-leaf path in~$F$ contains at most $\log n$ light edges.
}
This gives us the decomposition of the tree into heavy paths that are connected via light
edges. The decomposition can easily be found in linear time using a depth-first search.
\TODO decomposition example
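One possible implementation is sketched below in Python; it is a hypothetical
illustration, not code from these notes. The tree is given as a parent array and we
compute, for every vertex, the identifier of its heavy path and the position on it. The
sketch replaces the recursive depth-first search by a breadth-first order, which yields
the same subtree sizes.

\begtt
# A sketch, not the notes' code: build the heavy-light decomposition of a
# rooted tree given as a parent array (parent[root] is None), in O(n) time.
def heavy_light_decomposition(parent):
    n = len(parent)
    children = [[] for _ in range(n)]
    root = 0
    for v in range(n):
        if parent[v] is None:
            root = v
        else:
            children[parent[v]].append(v)

    order = [root]                 # vertices in BFS order (parents first)
    for v in order:
        order.extend(children[v])  # the list grows as we scan it

    size = [1] * n                 # subtree sizes, computed bottom-up
    for v in reversed(order):
        for c in children[v]:
            size[v] += size[c]

    # The edge (v, parent[v]) is heavy iff s(v) > s(parent[v]) / 2; every
    # vertex therefore has at most one heavy edge leading to a child.
    def is_heavy(v):
        return parent[v] is not None and 2 * size[v] > size[parent[v]]

    heavy_child = [None] * n
    for v in range(n):
        if is_heavy(v):
            heavy_child[parent[v]] = v

    # Walk every heavy path from its top vertex; record for each vertex the
    # identifier of its path and its position on it (0 = top of the path).
    paths, path_id, pos = [], [None] * n, [None] * n
    for v in range(n):
        if is_heavy(v):
            continue               # v is not the top of its heavy path
        path, u = [], v
        while u is not None:
            path_id[u], pos[u] = len(paths), len(path)
            path.append(u)
            u = heavy_child[u]
        paths.append(path)
    return paths, path_id, pos
\endtt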
\subsection{Lowest common ancestor}
A simple application of heavy-light decomposition is a data structure to answer lowest
common ancestor (LCA) queries. We will also need to calculate LCA in order to evaluate
path queries and updates.
For each vertex~$v$ we store an identifier of the heavy path it lies on and we also store
the position of~$v$ on that path. For each heavy path~$H$ we store the light edge that
leads from the top of the path and connects~$H$ to the rest of the tree. This information
can be precalculated in~$\O(n)$ time.
To answer $\LCA(x,y)$ we start at both~$x$ and~$y$ and we jump along heavy paths up,
towards the root. Once we discover the lowest common heavy path, we compare the positions
of the ``entry points'' to decide which one of them is the LCA. We have to traverse
$\O(\log n)$ light edges and we can jump over a heavy path in constant time, thus we
spend $\O(\log n)$ time in total. \NOTE perhaps make it theorem/lemma?
\TODO
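The jumps can be illustrated by the following hypothetical Python sketch, which builds on
the arrays from the previous one; {\tt depth} contains ordinary vertex depths and
{\tt top[p]} is the topmost vertex of heavy path~{\tt p} (that is, {\tt paths[p][0]}),
both precomputable in $\O(n)$ time.

\begtt
# A sketch, not the notes' code: LCA by jumping over whole heavy paths.
def lca(x, y, parent, path_id, pos, top, depth):
    while path_id[x] != path_id[y]:
        # Jump across the light edge above the deeper path's top vertex;
        # each jump is O(1) and there are O(log n) light edges to cross.
        if depth[top[path_id[x]]] >= depth[top[path_id[y]]]:
            x = parent[top[path_id[x]]]
        else:
            y = parent[top[path_id[y]]]
    # x and y now lie on their lowest common heavy path; the entry point
    # closer to the top of the path (smaller position) is the LCA.
    return x if pos[x] < pos[y] else y
\endtt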
\subsection{Path queries and updates}
Let us return to the original problem of path queries and updates. The idea is
straightforward: Heavy-light decomposition turns the tree into a system of paths and we
already know a data structure for a static path.
The following lemma gives a recipe for evaluating path queries and updates:
\lemma{
Every path $x\to y$ in~$F$ can be partitioned into $\O(\log n)$ light edges and
subpaths of heavy paths.
}
\proof
If the path is top-down, we have $\O(\log n)$ light edges by the observation above, and
these edges split the path into $\O(\log n)$ heavy subpaths. Otherwise, the path $x\to y$
can be divided into two top-down paths at the lowest common ancestor of $x$ and $y$.
\qed
We represent each heavy path using the range tree structure for a static path from the
previous chapter. The root of each range tree will also store the light edge that leads up
from the top of the path and connects it to the rest of the tree. We also need to store
the extra information used in the LCA algorithm.
Both queries and updates are then evaluated piece by piece.
We partition the query (update) into $\O(\log n)$ queries on heavy paths plus
$\O(\log n)$ light edges. To do so, we need to calculate the LCA of the query path's
endpoints, which takes $\O(\log n)$ time. Each subquery can be evaluated in $\O(\log n)$
time and so we get $\O(\log^2 n)$ in total.
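For concreteness, the following hypothetical Python sketch evaluates a path-minimum
query; none of the identifiers come from these notes. It assumes weights stored in the
vertices and that the range tree of each heavy path offers an operation
{\tt range\_min(i, j)} over positions $i, \dots, j$ running in $\O(\log n)$ time. Note
that the LCA is found implicitly while jumping.

\begtt
# A sketch, not the notes' code: minimum weight on the path x -> y, assuming
# vertex weights and one range tree per heavy path with range_min(i, j)
# answering queries over positions i..j in O(log n) time.
def path_min(x, y, parent, path_id, pos, top, depth, trees):
    best = float('inf')
    while path_id[x] != path_id[y]:
        if depth[top[path_id[x]]] < depth[top[path_id[y]]]:
            x, y = y, x            # let x be the one on the deeper path
        t = top[path_id[x]]
        # Prefix of x's heavy path: from its top (position 0) down to x.
        best = min(best, trees[path_id[x]].range_min(0, pos[x]))
        x = parent[t]              # cross the light edge above the top
    # The one remaining piece: an arbitrary interval of the common path.
    lo, hi = min(pos[x], pos[y]), max(pos[x], pos[y])
    return min(best, trees[path_id[x]].range_min(lo, hi))
\endtt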
\cor{
A data structure based on heavy-light decomposition can perform \em{path queries},
\em{point updates} and \em{path updates} in time $\O(\log^2 n)$. It can be built in
$\O(n)$ time and requires $\O(n)$ space.
}
\subsection{Static weights}
Let us analyze the partitioning of a path in a bit more detail:
\obs{
When we partition a path into $\O(\log n)$ heavy subpaths, all of the subpaths, with one
exception, are a prefix or a suffix of a heavy path.
}
We can use this observation to make path queries faster, at the cost of keeping the
weights static and forgoing path updates. For each heavy path we calculate and store
prefix and suffix minima. This allows us to answer almost all subqueries in constant
time, and the one remaining subquery can be answered in $\O(\log n)$ time.
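A hypothetical sketch of the faster query, under the same assumptions as the previous
one: {\tt prefix\_min[p][i]} stands for the minimum weight among positions
$0, \dots, i$ of heavy path~{\tt p}; all these minima can be precalculated in $\O(n)$
total time. Since positions are counted from the top, every subpath met while jumping is
a prefix of its heavy path.

\begtt
# A sketch, not the notes' code: with static weights, every subpath that is
# a prefix of its heavy path costs O(1); only the interval on the lowest
# common heavy path still uses the O(log n) range tree.
def static_path_min(x, y, parent, path_id, pos, top, depth,
                    prefix_min, trees):
    best = float('inf')
    while path_id[x] != path_id[y]:
        if depth[top[path_id[x]]] < depth[top[path_id[y]]]:
            x, y = y, x
        best = min(best, prefix_min[path_id[x]][pos[x]])   # prefix: O(1)
        x = parent[top[path_id[x]]]
    lo, hi = min(pos[x], pos[y]), max(pos[x], pos[y])
    return min(best, trees[path_id[x]].range_min(lo, hi))  # one O(log n) piece
\endtt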
\cor{
On a static tree with static weights, path queries can be answered in $\O(\log n)$
time.
}
\TODO