diff --git a/01-prelim/prelim.tex b/01-prelim/prelim.tex
index 58599fea2f6e57d4c5711e180b565d2f7d049e46..655cf1ce0ecdc269df073fa905979a475c3e81b6 100644
--- a/01-prelim/prelim.tex
+++ b/01-prelim/prelim.tex
@@ -400,7 +400,16 @@ efficient in the worst case --- imagine that your program controls a~space rocke
 We should also pay attention to the difference between amortized and average-case
 complexity. Averages can be computed over all possible inputs or over all possible
-random bits generated in an~randomized algorithm, but they do not promise anything
-about a~concrete computation on a~concrete input. On the other hand, amortized complexity guarantees an~upper
-bound on the total execution time, but it does not reveal anything about distribution
-of this time betwen individual operations.
+random bits generated in a~randomized algorithm, but they do not promise anything
+about a~specific computation on a~specific input. On the other hand, amortized complexity guarantees an~upper
+bound on the total execution time, but it does not reveal anything about the distribution
+of this time between individual operations.
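+
+% A~small illustrative sketch; it assumes the usual flexible (dynamic) array
+% with capacity doubling, which is not introduced in this paragraph.
+For illustration, consider appending $n$~items to a~flexible array which doubles
+its capacity whenever it becomes full. All $n$~appends together take $\Theta(n)$
+time, so the amortized cost of one append is constant, yet a~single append can
+take $\Theta(n)$ time when the array is reallocated. An~average-case bound of
+constant time per operation would not promise this: it speaks about an~average
+over inputs or over random bits, not about one particular computation.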
 
diff --git a/05-cache/cache.tex b/05-cache/cache.tex
index 6e55458e7c707b43d374716c976b336df458b5dc..dcc09c1aff8431536c4e3e3f90633c89bd42ed33 100644
--- a/05-cache/cache.tex
+++ b/05-cache/cache.tex
@@ -98,7 +98,13 @@ to read $\lceil N/B\rceil \le N/B+1$ consecutive blocks to scan all items. All b
 be stored at the same place in the internal memory. This is obviously optimal.
 
 A~cache-aware algorithm can use the same sequence of reads. Generally, we do not know
-the sequence of reads used by the optimal caching strategy, but any concrete sequence
+the sequence of reads used by the optimal caching strategy, but any specific sequence
-can serve as an upper bound. For example the sequence we used in the I/O model.
+can serve as an upper bound; for example, we can reuse the sequence from the I/O model.
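+
+% A~hedged numeric instance; the values of $N$ and~$B$ are illustrative only
+% and do not come from the surrounding text.
+With $N = 10^6$ items and $B = 1000$, say, the I/O-model schedule reads
+$\lceil 10^6 / 1000 \rceil = 1000$ blocks, so the optimal caching strategy needs
+at most $1000$ block transfers, even though we do not know what it actually does.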
 
 A~cache-oblivious algorithm cannot guarantee that the array will be aligned on
@@ -280,7 +280,13 @@ to tiles from the previous algorithm. Specifically, we will find the smallest~$i
 the sub-problem size $d = N/2^i$ is at most~$B$. Unless the whole input is small and $i=0$,
 this implies $2d = N/2^{i-1} > B$. Therefore $B/2 < d \le B$.
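+
+% A~hedged numeric check; the values of $N$ and~$B$ are illustrative only and
+% do not come from the surrounding text.
+For example, for $N = 2^{20}$ and $B = 1000$ the smallest suitable level is $i = 11$:
+we get $d = 2^{20}/2^{11} = 512 \le B$, while $2d = 1024 > B$, so indeed
+$B/2 < d \le B$.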
 
-To establish an upper bound on the optimal number of block transfers, we show a~concrete
+To establish an upper bound on the optimal number of block transfers, we show a~specific
 caching strategy. Above level~$i$, we cache nothing --- this is correct, since we touch no
 items. (Well, we need a~little cache for auxiliary variables like the recursion stack, but
 this is asymptotically insignificant.) When we enter a~node at level~$i$, we load the whole
diff --git a/06-hash/hash.tex b/06-hash/hash.tex
index 3c85b2bd1713257e0a7822adabfe6d6225d5fc54..3cbe3e6e42f70d310b1d2f3b966eb70a0057208b 100644
--- a/06-hash/hash.tex
+++ b/06-hash/hash.tex
@@ -35,7 +35,14 @@ In other words, if we pick a~hash function~$h$ uniformly at random from~$\cal H$,
-the probability that $x$ and~$y$ collide is at most $c$-times more than for
-a~completely random function~$h$.
+the probability that $x$ and~$y$ collide is at most $c$-times the collision
+probability of a~completely random function~$h$.
 
-Occasionally, we are not interested in the concrete value of~$c$,
+Occasionally, we are not interested in the specific value of~$c$,
 so we simply say that the family is \em{universal.}
 }
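+
+% A~hedged example; it assumes the notation of the definition above, with hash
+% functions mapping the universe into $m$~buckets.
+For example, the family of all functions from the universe to a~set of
+$m$~buckets is 1-universal: for a~uniformly random~$h$ and $x \ne y$, the values
+$h(x)$ and~$h(y)$ are independent and uniform, so they collide with probability
+exactly $1/m$.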
 
@@ -844,7 +844,11 @@ all~$x_i$'s, we can get a~false positive answer if $x$~falls to the same bucket
 as one of the $x_i$'s.
 
 Let us calculate the probability of a~false positive answer.
-For a~concrete~$i$, we have $\Pr_h[h(y) = h(x_i)] \le 1/m$ by 1-universality.
+For a~specific~$i$, we have $\Pr_h[h(y) = h(x_i)] \le 1/m$ by 1-universality.
-By union bound, the probability that $h(y) = h(x_i)$ for least one~$i$
+By the union bound, the probability that $h(y) = h(x_i)$ holds for at least one~$i$
 is at most $n/m$.
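+% The union-bound step written out explicitly; an~editorial sketch using only
+% the quantities already introduced above.
+In symbols,
+$$\Pr_h[\exists i: h(y) = h(x_i)] \le \sum_{i=1}^n \Pr_h[h(y) = h(x_i)] \le n \cdot {1\over m} = {n\over m}.$$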