Commit 8acde4da in datovky / ds2-notes, authored 5 years ago by Martin Mareš
English: concrete -> specific
Parent: 119a4676
Showing 3 changed files with 5 additions and 5 deletions:

01-prelim/prelim.tex  (1 addition, 1 deletion)
05-cache/cache.tex    (2 additions, 2 deletions)
06-hash/hash.tex      (2 additions, 2 deletions)
01-prelim/prelim.tex (+1, -1):

@@ -400,7 +400,7 @@ efficient in the worst case --- imagine that your program controls a~space rocke
 We should also pay attention to the difference between amortized and average-case
 complexity. Averages can be computed over all possible inputs or over all possible
 random bits generated in an~randomized algorithm, but they do not promise anything
-about a~concrete computation on a~concrete input. On the other hand, amortized complexity guarantees an~upper
+about a~specific computation on a~specific input. On the other hand, amortized complexity guarantees an~upper
 bound on the total execution time, but it does not reveal anything about distribution
 of this time betwen individual operations.
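The hunk above contrasts amortized and average-case guarantees. As an editorial illustration (not part of the commit or the repository), the classic doubling-array sketch below shows the distinction: every append is O(1) amortized, yet a specific append that triggers a resize copies every element already stored. The function name and the number of appends are made up for the example.

# Illustration only: amortized vs. per-operation cost for a doubling dynamic array.
def appends_with_cost(n):
    capacity, size = 1, 0
    costs = []
    for _ in range(n):
        if size == capacity:          # resize: copy all current elements
            capacity *= 2
            cost = size + 1
        else:
            cost = 1
        size += 1
        costs.append(cost)
    return costs

costs = appends_with_cost(1 << 10)
print(max(costs))                     # a specific operation can cost about `size`
print(sum(costs) / len(costs))        # yet the amortized (average total) cost stays below 3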
05-cache/cache.tex (+2, -2):

@@ -98,7 +98,7 @@ to read $\lceil N/B\rceil \le N/B+1$ consecutive blocks to scan all items. All b
 be stored at the same place in the internal memory. This is obviously optimal.
 A~cache-aware algorithm can use the same sequence of reads. Generally, we do not know
-the sequence of reads used by the optimal caching strategy, but any concrete sequence
+the sequence of reads used by the optimal caching strategy, but any specific sequence
 can serve as an upper bound. For example the sequence we used in the I/O model.
 A~cache-oblivious algorithm cannot guarantee that the array will be aligned on

@@ -280,7 +280,7 @@ to tiles from the previous algorithm. Specifically, we will find the smallest~$i
 the sub-problem size $d=N/2^i$ is at most~$B$. Unless the whole input is small and $i=0$,
 this implies $2d=N/2^{i-1} > B$. Therefore $B/2 < d \le B$.
-To establish an upper bound on the optimal number of block transfers, we show a~concrete
+To establish an upper bound on the optimal number of block transfers, we show a~specific
 caching strategy. Above level~$i$, we cache nothing --- this is correct, since we touch no
 items. (Well, we need a~little cache for auxiliary variables like the recursion stack, but
 this is asymptotically insignificant.) When we enter a~node at level~$i$, we load the whole
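The second hunk above argues that choosing the smallest level~$i$ with $d = N/2^i \le B$ yields $B/2 < d \le B$ whenever $i > 0$. A small Python sanity check of that inequality follows; it is an editorial illustration, not part of the repository, and the function name and the sample values of N and B are invented for the example.

# Illustration only: check B/2 < d <= B, where d = N/2^i and i is the smallest
# level with N/2^i <= B (assuming N > B, so that i > 0).
def subproblem_size(N, B):
    i, d = 0, N
    while d > B:                      # find the smallest i with N/2^i <= B
        i += 1
        d = N / 2 ** i
    return i, d

for N, B in [(1000, 100), (2 ** 20, 64), (12345, 8)]:
    i, d = subproblem_size(N, B)
    assert B / 2 < d <= B, (N, B, i, d)
    print(N, B, i, d)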
06-hash/hash.tex (+2, -2):

@@ -35,7 +35,7 @@ In other words, if we pick a~hash function~$h$ uniformly at random from~$\cal H$,
 the probability that $x$ and~$y$ collide is at most $c$-times more than for
 a~completely random function~$h$.
-Occasionally, we are not interested in the concrete value of~$c$,
+Occasionally, we are not interested in the specific value of~$c$,
 so we simply say that the family is \em{universal.}
 }

@@ -844,7 +844,7 @@ all~$x_i$'s, we can get a~false positive answer if $x$~falls to the same bucket
 as one of the $x_i$'s.
 Let us calculate the probability of a~false positive answer.
-For a~concrete~$i$, we have $\Pr_h[h(y)=h(x_i)] \le 1/m$ by 1-universality.
+For a~specific~$i$, we have $\Pr_h[h(y)=h(x_i)] \le 1/m$ by 1-universality.
 By union bound, the probability that $h(y)=h(x_i)$ for least one~$i$
 is at most $n/m$.
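The second hunk above bounds the false-positive probability by $n/m$ via the union bound. The short simulation below, with a fully random hash function (which is 1-universal, so each collision event has probability exactly $1/m$), is consistent with that bound. This is an editorial sketch, not part of the repository; the function name and the values n = 50, m = 1000 are arbitrary.

# Illustration only: empirical false-positive rate vs. the union bound n/m,
# using a fully random function h over m buckets.
import random

def false_positive_rate(n, m, trials=10000):
    hits = 0
    for _ in range(trials):
        buckets = {random.randrange(m) for _ in range(n)}    # h(x_1), ..., h(x_n)
        if random.randrange(m) in buckets:                   # does h(y) land in an occupied bucket?
            hits += 1
    return hits / trials

n, m = 50, 1000
print(false_positive_rate(n, m), "vs. union bound", n / m)   # typically about 0.049 vs. 0.05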