    \ifx\chapter\undefined
    \input adsmac.tex
    \singlechapter{20}
    \fi
    
    \chapter[streaming]{Streaming Algorithms}
    
    For this chapter, we will consider the streaming model. In this
    setting, the input is presented as a ``stream'' which we can read
    \em{in order}. In particular, at each step, we can do some processing,
    and then move forward one unit in the stream to read the next piece of data.
    We can choose to read the input again after completing a ``pass'' over it.
    
    There are two measures for the performance of algorithms in this setting.
    The first is the number of passes we make over the input, and the second is
    the amount of memory that we consume. Some interesting special cases are:
    \tightlist{o}
    \: 1 pass, and $O(1)$ memory: This is equivalent to computing with a DFA, and
    hence we can recognise only regular languages.
    \: 1 pass, and unbounded memory: We can store the entire stream, and hence this
    is just the traditional computing model.
    \endlist
    
    \section{Frequent Elements}
    
    For this problem, the input is a stream $\alpha[1 \ldots m]$ where each
    $\alpha[i] \in [n]$.
    We define for each $j \in [n]$ the \em{frequency} $f_j$ which counts
the occurrences of $j$ in $\alpha[1 \ldots m]$. Then the majority problem
    is to find (if it exists) a $j$ such that $f_j > m / 2$.
    
We consider the more general frequent elements problem, where we want to find
$F_k = \{ j \mid f_j > m / k \}$; note that $\vert F_k \vert < k$, since the
frequencies sum to $m$. Suppose that we knew some small set $C$ which contains
$F_k$. Then, with a pass over the input, we can count the occurrences of each
element of $C$, and hence find $F_k$ in $\O(\vert C \vert (\log n + \log m))$
space.
    
    \subsection{The Misra/Gries Algorithm}
    
    We will now see a deterministic one-pass algorithm that estimates the frequency
    of each element in a stream of integers. We shall see that it also provides
    us with a small set $C$ containing $F_k$, and hence lets us solve the frequent
    elements problem efficiently.
    
    \algo{FrequencyEstimate} \algalias{Misra/Gries Algorithm}
    \algin the data stream $\alpha$, the target for the estimator $k$.
    \:\em{Init}: $A \= \emptyset$. \cmt{an empty map}
    \:\em{Process}($x$): 
    \::If $x \in$ keys($A$), $A[x] \= A[x] + 1$.
    \::Else If $\vert$keys($A$)$\vert < k - 1$, $A[x] \= 1$.
\::Else,
      for all $a \in $~keys($A$): $A[a] \= A[a] - 1$,
      delete $a$ from $A$ if $A[a] = 0$.
\algout $\hat{f}_a = A[a]$ if $a \in $~keys($A$), and $\hat{f}_a = 0$ otherwise.
    \endalgo
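
For example, run the algorithm with $k = 2$ (so $A$ holds at most one key) on
the stream $\alpha = (1, 2, 1, 3, 1)$. The map evolves as
$$\{1 \mapsto 1\}, \quad \emptyset, \quad \{1 \mapsto 1\}, \quad \emptyset,
 \quad \{1 \mapsto 1\},$$
and the output $\hat{f}_1 = 1$ indeed satisfies
$f_1 - m/k = 3 - 5/2 \leq \hat{f}_1 \leq f_1 = 3$.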
    
    Let us show that $\hat{f}_a$ is a good estimate for the frequency $f_a$.
    
    \lemma{
    $f_a - m / k \leq \hat{f}_a \leq f_a$
    }
    
    \proof
    We see immediately that $\hat{f}_a \leq f_a$, since it is only incremented when
    we see $a$ in the stream.
    
    To see the other inequality, suppose that we have a counter for each
    $a \in [n]$ (instead of just $k - 1$ keys at a time). Whenever we have at least
    $k$ non-zero counters, we will decrease all of them by $1$; this gives exactly
    the same estimate as the algorithm above. 
    
Now consider the potential function $\Phi = \sum_{a \in [n]} A[a]$. Note that
$\Phi$ increases exactly $m$ times in total (once for each element of $\alpha$),
and each decrement step decreases $\Phi$ by exactly $k$. Since $\Phi = 0$
initially and $\Phi \geq 0$ always, there are at most $m / k$ decrement steps.
In particular $A[a]$ is decremented at most $m / k$ times, while each of the
$f_a$ occurrences of $a$ increments it, so $\hat{f}_a \geq f_a - m / k$.
    \qed
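
The bound is tight: on the stream $(1, 2, 1, 3, 1, 4)$ with $k = 2$, every
occurrence of $1$ is cancelled by the element that follows it, so the algorithm
ends with an empty map and $\hat{f}_1 = 0 = f_1 - m/k$.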
    
    \theorem{
        There exists a deterministic 2-pass algorithm that finds $F_k$ in
        $\O(k(\log n + \log m))$ space.
    }
    \proof
    In the first pass, we obtain the frequency estimate $\hat{f}$ by the
    Misra/Gries algorithm.
    We set $C = \{ a \mid \hat{f}_a > 0 \}$. For $a \in F_k$, we have
    $f_a > m / k$, and hence $\hat{f}_a > 0$ by the previous Lemma.
    In the second pass, we count $f_c$ exactly for each $c \in C$, and hence know
    $F_k$ at the end.
    
    To see the bound on space used, note that
    $\vert C \vert = \vert$keys($A$)$\vert \leq k - 1$, and a key-value pair can
    be stored in $\O(\log n + \log m)$ bits.
    \qed
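
In particular, for $k = 2$ the set $F_2$ contains at most the majority element,
and the Misra/Gries structure degenerates to a single key with a counter; this
is exactly the classical Boyer--Moore majority vote algorithm, followed by a
second pass to verify the candidate.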
    
    \subsection{The Count-Min Sketch}
    
    We will now look at a randomized streaming algorithm that solves the 
    frequency estimation problem. While this algorithm can fail with some
    probability, it has the advantage that the output on two different streams
    can be easily combined.
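Specifically, the sketch is linear in the frequency vector: if two streams are
summarized using the same hash functions $h_1, \ldots, h_t$, then the table for
their concatenation is the entry-wise sum of the two tables, so sketches
computed independently (e.g. on different parts of the data) can be merged.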
    
    \algo{FrequencyEstimate} \algalias{Count-Min Sketch}
    \algin the data stream $\alpha$, the accuracy $\varepsilon$,
     the error parameter $\delta$.
    \:\em{Init}:
     $C[1\ldots t][1\ldots k] \= 0$, where $k \= \lceil 2 / \varepsilon \rceil$
     and $t \= \lceil \log(1 / \delta) \rceil$.
\:: Choose $t$ independent hash functions $h_1, \ldots, h_t : [n] \to [k]$, each
     from a 2-independent family.
    \:\em{Process}($x$): 
    \::For $i \in [t]$: $C[i][h_i(x)] \= C[i][h_i(x)] + 1$.
\algout Report $\hat{f}_a = \min_{i \in [t]} C[i][h_i(a)]$.
    \endalgo
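
To get a feeling for the parameters: estimating every frequency within
$\varepsilon m = m/200$ with failure probability $\delta = 2^{-10}$ gives
$k = \lceil 2/\varepsilon \rceil = 400$ and $t = \lceil \log(1/\delta) \rceil
 = 10$, i.e. a table of $4000$ counters, independent of $n$ and $m$. Also note
that an element $j \neq a$ colliding with $a$ under $h_i$ can only increase
$C[i][h_i(a)]$; this is why each row overestimates $f_a$ and we report the
minimum over the rows.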
    
    Note that the algorithm needs $\O(tk \log m)$ bits to store the table $C$, and
$\O(t \log n)$ bits to store the hash functions $h_1, \ldots, h_t$, and hence
    uses $\O(1/\varepsilon \cdot \log (1 / \delta) \cdot \log m
     + \log (1 / \delta)\cdot  \log n)$ bits. It remains to show that it computes
    a good estimate.
    
\lemma{
    $f_a \leq \hat{f}_a \leq f_a + \varepsilon m$ with probability at least
    $1 - \delta$.
}
    
    \proof
Clearly $\hat{f}_a \geq f_a$ for all $a \in [n]$; we will show that
$\hat{f}_a \leq f_a + \varepsilon m$ with probability at least $1 - \delta$.
    For a fixed element $a$, define the random variable
    $$X_i := C[i][h_i(a)] - f_a$$
    For $j \in [n] \setminus \{ a \}$, define the
    indicator variable $Y_{i, j} := [ h_i(j) = h_i(a) ]$. Then we can see that
    $$X_i = \sum_{j \neq a} f_j\cdot Y_{i, j}$$
    
    Note that $\E[Y_{i, j}] = 1/k$ since each $h_i$ is from a 2-independent family,
    and hence by linearity of expectation:
    $$\E[X_i] = {\vert\vert f \vert\vert_1 - f_a \over k} =
     {\vert\vert f_{-a} \vert\vert_1 \over k}$$
    
By applying Markov's inequality (together with
$\vert\vert f_{-a} \vert\vert_1 \leq m$) we obtain a bound on the error of a
single counter:
$$ \Pr[X_i > \varepsilon \cdot m ] \leq
 \Pr[ X_i > \varepsilon \cdot \vert\vert f_{-a} \vert\vert_1 ] \leq
 {1 \over k\varepsilon} \leq 1/2$$
    
Finally, since the $t$ counters use independently chosen hash functions, the
probability that all of them are wrong is:

$$ \Pr\left[\bigcap_{i \in [t]} \left( X_i > \varepsilon \cdot m \right)
 \right] \leq 1/2^t \leq \delta $$
    \qed
    
    \section{Counting Distinct Elements}
    We continue working with a stream $\alpha[1 \ldots m]$ of integers from $[n]$,
    and define $f_a$ (the frequency of $a$) as before. Let
    $d = \vert \{ j : f_j > 0 \} \vert$. Then the distinct elements problem is
    to estimate $d$.
    
    \subsection{The AMS Algorithm}
    
Define ${\tt tz}(x) := \max\{ i \mid 2^i $~divides~$ x \}$
 (i.e. the number of trailing zeroes in the base-2 representation of $x$);
 for example, ${\tt tz}(12) = 2$, since $12 = (1100)_2$.
    
    \algo{DistinctElements} \algalias{AMS}
    \algin the data stream $\alpha$, the accuracy $\varepsilon$,
     the error parameter $\delta$.
    \:\em{Init}: Choose a random hash function $h : [n] \to [n]$ from a 2-independent
     family.
    \:: $z \= 0$.
    \:\em{Process}($x$): 
    \::If ${\tt tz}(h(x)) > z$: $z \= {\tt tz}(h(x))$.
\algout $\hat{d} = 2^{z + 1/2}$.
    \endalgo
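
For instance (with made-up hash values): if the four distinct elements of a
stream hash to values whose numbers of trailing zeroes are $0$, $0$, $1$ and
$3$, the algorithm finishes with $z = 3$ and outputs
$\hat{d} = 2^{3.5} \approx 11.3$ against the true $d = 4$. A single run can
thus be off by a substantial constant factor, which the following analysis
quantifies.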
    
    \lemma{
        The AMS algorithm is a $(3, \delta)$-estimator for some constant
        $\delta$.
    }
    \proof
    For $j \in [n]$, $r \geq 0$, let $X_{r, j} := [ {\tt tz}(h(j)) \geq r ]$, the
    indicator that is true if $h(j)$ has at least $r$ trailing $0$s.
    Now define $$ Y_r = \sum_{j : f_j > 0} X_{r, j} $$
How is our estimate related to $Y_r$? The algorithm outputs
$\hat{d} \geq 2^{a + 1/2}$ exactly when $z \geq a$, i.e. when some element of
the stream has at least $a$ trailing zeroes in its hash, which is the event
$Y_a > 0$. Conversely, the output is smaller than $2^{a + 1/2}$ exactly when
$Y_a = 0$. We will now bound the probabilities of these events.
    
    For any $j \in [n]$, $h(j)$ is uniformly distributed over $[n]$ (since $h$
    is $2$-independent). Hence $\E[X_{r, j}] = 1 / 2^r$. By linearity of
    expectation, $\E[Y_{r}] = d / 2^r$.
    
    We will also use the variance of these variables -- note that
    $$\Var[X_{r, j}] \leq \E[X_{r, j}^2] = \E[X_{r, j}] = 1/2^r$$
    
    And because $h$ is $2$-independent, the variables $X_{r, j}$ and $X_{r, j'}$
    are independent for $j \neq j'$, and hence:
    $$\Var[Y_{r}] = \sum_{j : f_j > 0} \Var[X_{r, j}] \leq d / 2^r $$
    
    Now, let $a$ be the smallest integer such that $2^{a + 1/2} \geq 3d$. Then we
    have:
    $$ \Pr[\hat{d} \geq 3d] = \Pr[Y_a > 0] = \Pr[Y_a \geq 1] $$
    
    Using Markov's inequality we get:
    $$ \Pr[\hat{d} \geq 3d] \leq \E[Y_a] = {d \over 2^a} \leq {\sqrt{2} \over 3} $$
    
For the other side, let $b$ be the largest integer so that
$2^{b + 1/2} \leq d/3$. Then we have:
    $$ \Pr[\hat{d} \leq d / 3] = \Pr[ Y_{b + 1} = 0] \leq
     \Pr[ \vert Y_{b + 1} - \E[Y_{b + 1}] \vert \geq d / 2^{b + 1} ]$$
    
Using Chebyshev's inequality, we get:
$$ \Pr[\hat{d} \leq d / 3] \leq {\Var[Y_{b + 1}] \over (d / 2^{b + 1})^2} \leq
 {2^{b + 1} \over d} \leq {\sqrt{2} \over 3}$$
    
    \qed
    
The previous algorithm is not particularly satisfying -- by our analysis it
may err with probability around $94\%$ (taking the union of the two bad
events). However, we can improve the success probability easily: we run $t$
independent estimators simultaneously, and output the median of their results.
By a standard use of Chernoff bounds one can show that the probability that
the median is more than $3d$ is at most $2^{-\Theta(t)}$ (and similarly for
the probability that it is less than $d / 3$).
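
To sketch the argument: each estimator exceeds $3d$ with probability at most
$\sqrt{2}/3 < 1/2$, so among $t$ independent estimators the expected number of
such overestimates is at most $(\sqrt{2}/3)\,t$. The median exceeds $3d$ only
if at least $t/2$ estimators do, a constant-factor deviation above this mean,
which by a Chernoff bound happens with probability $2^{-\Theta(t)}$.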
    
    Hence it is enough to run $\O(\log (1/ \delta))$ copies of the AMS estimator
    to get a $(3, \delta)$ estimator for any $\delta > 0$. Finally, we note that
    the space used by a single estimator is $\O(\log n)$ since we can store $h$ in
    $\O(\log n)$ bits, and $z$ in $\O(\log \log n)$ bits, and hence a $(3, \delta)$
    estimator uses $\O(\log (1/\delta) \cdot \log n)$ bits.
    
    \endchapter