\section{Examples of data structures}
Generally, a~data structure is a~``black box'', which contains some data and
allows \df{operations} on the data. Some operations are \em{queries} on the current
state of the data, others are \em{updates} which modify the data. The data are encapsulated
within the structure, so that they can be accessed only through the operations.
Often, the total time of a~sequence of $n$~such operations is much less than $n$~times the worst-case time
of a~single operation. This leads to the concept of \em{amortized complexity,} but before
we define it rigorously, let us see several examples of such phenomena.
\subsection{Flexible arrays --- the aggregation method}
It often happens that we want to store data in an~array (so that it can be accessed in arbitrary
order), but we cannot predict how much data will arrive. Most programming languages offer some kind
of a~flexible array for this purpose: besides the items, it keeps its current \em{capacity}~$C$ and the
number~$n$ of stored items. Whenever an~appended item does not fit, the array is \em{stretched:} it is
reallocated to twice its capacity and all items are copied to the new array, which takes $\Theta(C)$ time.
Summed over a~sequence of $n$~appends to an~initially empty array, all stretches together take only
$\Theta(n)$ time, so the amortized time of a~single append is constant.
This type of analysis is sometimes called the \df{aggregation method} --- instead of considering
each operation separately, we aggregated them and found an upper bound on the total time.
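For concreteness, suppose that the initial capacity is~1 (any constant would do) and that every stretch
doubles the capacity. The stretches performed during $n$~appends then copy $1, 2, 4, \ldots$ items,
so all of them together copy at most
$$
\sum_{i=0}^{\lceil\log_2 n\rceil} 2^i \le 2\cdot 2^{\lceil\log_2 n\rceil} \le 4n
$$
items. Together with the $n$~appends themselves, this gives $\Theta(n)$ total time, as claimed.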
\subsection{Shrinkable arrays --- the accounting method}
What if we also want to remove elements from the flexible array? For example, we might want to
use it to implement a~stack. When an element is removed, we need not change the capacity, but then
a~long sequence of removals could leave us with a~huge array holding only a~few items, which wastes
memory. A~natural rule is to shrink the array to a~half of its capacity whenever it becomes less than
half full. However, this rule can be fooled: consider a~completely full array of capacity~$C_0$.
Appending an~element stretches it, removing two elements shrinks it back, and one more append brings us
back to the initial state. Therefore, we have a~sequence of 4~operations which make the array
stretch and shrink, spending time $\Theta(C_0)$ for an~arbitrarily high~$C_0$. All hopes for constant
amortized time per operation are therefore lost.
The problem is that stretching a~``full'' array leads to an~\em{almost empty} array; similarly,
shrinking an~``empty'' array gives us an~``almost full'' array. We need to design better rules
such that an array after a~stretch or shrink will be far from being empty or full.
We will stretch when $n>C$ (to capacity~$2C$) and shrink when $n<C/4$ (to capacity~$C/2$). Intuitively,
this should work: both stretching and shrinking leave the array roughly half full, so before the next
reallocation of an~array of capacity~$C$, at least about $C/2$ items must be inserted or about $C/4$ items
deleted. Charging the $\Theta(C)$ cost of that reallocation to these preceding operations makes every
operation pay only a~constant amount of extra time. (Here we use that we started with an~empty array,
so every reallocation is indeed preceded by enough operations since the previous one.)
This is a~common technique, which is usually called the \em{accounting method.} It redistributes time
between operations so that the total time remains the same, but the worst-case time decreases.
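To see the rules in one place, here is a~minimal sketch of such a~stack-like flexible array in~C.
The type and function names, the minimum capacity and the use of the C~library function \|realloc| are
our own illustrative choices; only the thresholds $n>C$ and $n<C/4$ and the reallocation to a~roughly
half-full array matter for the analysis.
\verbatim
#include <stdlib.h>

/* One possible implementation of a stack-like flexible array:
 * stretch to capacity 2*C when an appended item does not fit (n > C),
 * shrink to capacity C/2 when n < C/4.
 * Error handling of malloc/realloc is omitted to keep the sketch short. */
struct flexarray {
    int *items;
    size_t n;           /* number of stored items */
    size_t cap;         /* current capacity C */
};

void fa_init(struct flexarray *fa)
{
    fa->cap = 4;        /* arbitrary small initial capacity */
    fa->n = 0;
    fa->items = malloc(fa->cap * sizeof(int));
}

static void fa_resize(struct flexarray *fa, size_t new_cap)
{
    /* realloc copies the items to the new array if it cannot grow in place */
    fa->items = realloc(fa->items, new_cap * sizeof(int));
    fa->cap = new_cap;
}

void fa_push(struct flexarray *fa, int x)
{
    if (fa->n + 1 > fa->cap)            /* the new item would not fit: stretch */
        fa_resize(fa, 2 * fa->cap);
    fa->items[fa->n++] = x;
}

int fa_pop(struct flexarray *fa)        /* must not be called on an empty array */
{
    int x = fa->items[--fa->n];
    if (fa->cap > 4 && fa->n < fa->cap / 4)     /* too sparse: shrink */
        fa_resize(fa, fa->cap / 2);
    return x;
}
\endverbatim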
\subsection{Binary counters --- the coin method}
Now, we turn our attention to an~$\ell$-bit binary counter. Initially, all bits are zero. Then we keep
incrementing it by~1. For $\ell=4$, we get \|0000|, \|0001|, \|0010|, \|0011|, \|0100|, and~so on.
Performing the increment is simple: we scan the number from the right, turning \|1|s to \|0|s,
until we hit a~\|0|, which we change to a~\|1| and stop.
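For illustration, the increment could look as follows in~C (a~sketch only; storing the counter with one
bit per array entry, least significant bit first, is just the most direct representation):
\verbatim
/* One possible implementation: bit[0] is the least significant bit.
 * We turn trailing 1s into 0s until we find a 0, which becomes a 1.
 * If all l bits were 1, the counter wraps around to all zeroes. */
void increment(unsigned char bit[], int l)
{
    for (int i = 0; i < l; i++) {
        if (bit[i] == 0) {
            bit[i] = 1;         /* found a 0: change it to 1 and stop */
            return;
        }
        bit[i] = 0;             /* change a 1 to 0 and carry to the left */
    }
}
\endverbatim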
A~single increment can take $\Theta(\ell)$ time when we go from \|0111|\dots\|1| to \|1000|\dots\|0|.
Again, we will show that in the amortized sense it is much faster: performing $n$~increments takes only
$\Theta(n)$ time.
We can prove it by simple aggregation: the rightmost bit changes on every increment, the bit to its left
on every other increment, and in general the $i$-th bit from the right changes once per $2^{i-1}$ increments.
The total number of bit changes is therefore
$$
\sum_{i=0}^{\ell-1} \left\lfloor {n\over 2^i} \right\rfloor \le
\sum_{i=0}^{\ell-1} {n\over 2^i} \le
n\cdot\sum_{i=0}^\ell {1\over 2^i} \le
n\cdot\sum_{i=0}^\infty {1\over 2^i} = 2n.
$$
However, there is an~easier and ``more economical'' approach to analysis.
Imagine that we have some \em{coins} and each coin buys us a~constant amount of computation time,
enough to test and change one bit. We will maintain an invariant that for every~\|1| bit, we have
one coin (we can imagine that the coin is ``placed on the bit'').
When we are asked to increment, we get paid 2~coins. This is enough to cover the
single change of a~\|0| to a~\|1|: one coin will pay for the change itself, the other will be
placed on the newly created~\|1|. Whenever we need to change a~\|1| to a~\|0|, we will
simply spend the coin placed on that~\|1|. After all operations, some coins will remain
placed on the bits, but there is no harm in that.
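For example, with $\ell=4$, incrementing \|0111| to \|1000| is paid for as follows: the three \|1|s
at the right are flipped to \|0|s using the coins stored on them, one of the two newly received coins
pays for changing the leading \|0| to a~\|1|, and the other coin is placed on this new \|1|, so the
invariant still holds.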
Therefore, 2~coins per operation are enough to pay for all bit changes. This means
that the total time spent on $n$~operations is $\Theta(n)$.
This technique is often called the \em{coin method.} Generally, we obtain a~certain number
of ``time coins'' per operation. Some of them will be spent immediately, some saved for the
future and spent later. Usually, the saved coins are associated with certain features of
the data --- in our example, with \|1|~bits. (This can be considered a~special case of
the accounting method, because we redistribute time from operations which save coins to
those which spend the saved coins later.)
\subsection{Potentials}
\endchapter