\figure[mixer]{mixer.pdf}{}{General structure of a mixer}
At a high level, a mixer can be thought of as a mapping $f: [X]\times[Y]\rightarrow[2^M]\times[S]$
with the property that when $(m,s)= f(x,y)$, $s$ depends only on $x$. This is the key property
that allows local decoding and modification, because the carry does not cascade.
Internally, a mixer is
always implemented as a composition of two mappings: $f_1$ that transforms $x \rightarrow(c,s)$ and $f_2$
that transforms $(y,c)\rightarrow m$. See fig. \figref{mixer}. Both $f_1$ and $f_2$ must be injective
so that the encoding is reversible.
The mappings $f_1$ and $f_2$ themselves are trivial alphabet translations similar to what we
used in the SOLE encoding. You can for example use $f_1(x)=(\lfloor x/S \rfloor, x \bmod S)$
and $f_2(y,c)=c\cdot Y + y$.
Thus implementing the mixer is simple as long as the parameters allow its existence. A mixer
with parameters $X$, $Y$, $S$, $M$ can exist if and only if there exists $C$ such that
$S\cdot C\ge X$ and $C\cdot Y \le 2^M$ (once again, the alphabet translations need their
range to be as large as their domain in order to work).
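
To make the construction concrete, here is a small Python sketch of a mixer built from these
alphabet translations. The function names and the concrete example parameters are ours,
chosen only for illustration:

\begin{verbatim}
# Parameters X, Y, S, M are as in the text; C is the carry alphabet size.
def make_mixer(X, Y, S, M):
    """Build f: [X] x [Y] -> [2^M] x [S] from the alphabet translations
    f1(x) = (x // S, x % S) and f2(y, c) = c*Y + y."""
    C = -(-X // S)                     # C = ceil(X / S)
    assert S * C >= X and C * Y <= 2 ** M, "no mixer with these parameters"

    def f1(x):                         # x -> (c, s): carry and share
        return x // S, x % S

    def f2(y, c):                      # (y, c) -> m: pack the carry next to y
        return c * Y + y

    def f(x, y):                       # the mixer itself: (x, y) -> (m, s)
        c, s = f1(x)
        return f2(y, c), s

    return f

# Example: X = Y = 1000, S = 32 (roughly sqrt(X)), M = 15.
f = make_mixer(1000, 1000, 32, 15)
m, s = f(123, 456)                     # m = 3456, s = 27; s depends only on x
\end{verbatim}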
\lemma{
A mixer $f$ has the following properties (as long as all inputs and outputs fit into a constant
number of words):
\tightlist{o}
\:$f$ can be computed on a RAM in constant time
\:$s$ depends only on $x$, not $y$
\:$x$ can be decoded given $m$, $s$ in constant time
\:$y$ can be decoded given $m$ in constant time
\endlist
}
All these properties should be evident from the construction.
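For completeness, here is a matching sketch of the two decoding directions from the lemma,
assuming the same $f_1$ and $f_2$ as in the illustration above (again our own code, not part
of the original construction):

\begin{verbatim}
# Decoding, given the parameters Y and S of the mixer above:
def decode_y(m, Y):
    """Recover y from m alone in O(1): y is the low-order part of m."""
    return m % Y

def decode_x(m, s, Y, S):
    """Recover x from (m, s) in O(1): the carry c is the high-order part
    of m, and x = c*S + s inverts f1."""
    c = m // Y
    return c * S + s

# Continuing the example above (X = Y = 1000, S = 32, M = 15):
assert decode_y(3456, 1000) == 456
assert decode_x(3456, 27, 1000, 32) == 123
\end{verbatim}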
\defn{The redundancy of a mixer is $$r(f) := \underbrace{M + \log S}_{\hbox{output entropy}} - \underbrace{(\log X + \log Y)}_{\hbox{input entropy}}.$$}
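For instance (with illustrative numbers of our own choosing): a mixer with $X = Y = 1000$,
$S = 32$, and $M = 15$ (so $C = 32$) has $r(f) = 15 + \log 32 - 2\log 1000 \approx 0.07$,
so the pair $(m,s)$ wastes only a small fraction of a bit relative to the entropy of the inputs.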
\subsection{On the existence of certain kinds of mixers}
Now we would like to show that mixers with certain parameters do exist.
\lemma{For every $X$ and $Y$ there exists a mixer $f: [X]\times[Y]\rightarrow[2^M]\times[S]$
such that:
\tightlist{o}
\:$S =\O(\sqrt{X})$, $2^M =\O(Y\cdot\sqrt{X})$
\:$r(f)=\O(1/\sqrt{X})$
\endlist
}
\proof{
First, let's assume we have chosen an $M$ (which we shall do later). Then we
want to set $C$ so that it satisfies the inequality $C \cdot Y \le 2^M$. Essentially,
we are asking how much information we can fit in $m$ in addition to
the whole of $y$. Clearly we want $C$ to be as large as possible, so we set