Note: to make a columnwise (column-major) reshape in Python, use np.reshape(X, shape, order='F'), where 'F' stands for Fortran ordering.
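A quick illustration with a $2\times 2$ matrix:

import numpy as np

X = np.array([[1, 2],
              [3, 4]])
vec_X = np.reshape(X, -1, order='F')            # columnwise: [1, 3, 2, 4]
X_back = np.reshape(vec_X, (2, 2), order='F')   # recovers the original X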
(2 pts) What is the complexity of a naive computation of $(A \otimes B) x$? Show how it can be reduced.
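As an illustration of the standard reduction (a sketch, not a complete answer: it relies on the identity $(A\otimes B)\,\mathrm{vec}(X) = \mathrm{vec}(B X A^\top)$ for the column-major $\mathrm{vec}$, and all names below are illustrative):

import numpy as np

n = 4
A = np.random.randn(n, n)
B = np.random.randn(n, n)
x = np.random.randn(n * n)

# naive way: form the n^2 x n^2 Kronecker product explicitly -- O(n^4) memory and work
y_naive = np.kron(A, B) @ x

# vec-trick: reshape, then two small matrix products -- O(n^3) work, no large matrix
X = np.reshape(x, (n, n), order='F')
y_fast = np.reshape(B @ X @ A.T, -1, order='F')

assert np.allclose(y_naive, y_fast)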
(3 pts) Let matrices $A$ and $B$ have eigendecompositions $A = S_A\Lambda_A S_A^{-1}$ and $B = S_B\Lambda_B S^{-1}_B$. Find eigenvectors and eigenvalues of the matrix $A\otimes I + I \otimes B$, where dimension of $I$ coincides with the dimension of $A$ and $B$.
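A hint in the form of a worked identity (a sketch, not a complete solution): by the mixed-product property of the Kronecker product,

$$(A\otimes I + I\otimes B)(S_A\otimes S_B) = (AS_A)\otimes S_B + S_A\otimes (BS_B) = (S_A\Lambda_A)\otimes S_B + S_A\otimes(S_B\Lambda_B) = (S_A\otimes S_B)\left(\Lambda_A\otimes I + I\otimes \Lambda_B\right),$$

and the matrix $\Lambda_A\otimes I + I\otimes \Lambda_B$ is diagonal.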
(10 pts) Let $A = \mathrm{diag}\left(\frac{1}{1000},\frac{2}{1000},\dots, \frac{999}{1000}, 1, 1000 \right)$. Estimate analytically the number of iterations required to solve a linear system with $A$ to relative accuracy $10^{-4}$ using
(5 pts) Provide numerical confirmation of your theoretical estimate.
# Your solution is here
You are given a 2D image (a QR code) and a convolution operator $T$. Applying $T$ smooths the image. In exact arithmetic the proposed $T$ is non-singular; in finite precision arithmetic, however, it becomes rather ill-conditioned as $n$ grows. In this task you will study how the conjugate gradient method performs in this case.
The original passcode is given below:
import matplotlib.pyplot as plt
x = plt.imread('qrcode.gif')[:,:,0]
n = x.shape[0]
plt.imshow(x, cmap="gray")
plt.axis("off")
Blurring can be performed by convolving the $n\times n$ QR code with the following filter:

$$T_{i_1 j_1, i_2 j_2} = T_{i_1-j_1,\, i_2-j_2} = \frac{\alpha}{\pi} e^{-\alpha\left[(i_1-j_1)^2 + (i_2-j_2)^2\right]}, \quad i_1, j_1, i_2, j_2 = 1, \ldots, n, \quad 0 < \alpha < 1.$$

You know from lectures that this convolution can be viewed as a matrix-vector multiplication by some BTTB matrix $T$ of size $n^2 \times n^2$.
Implement a function T_matvec() that performs multiplication of $T$ by a given vector $x$ efficiently. Remember about the FFT. Use scipy.sparse.linalg.LinearOperator to create an object that has the attribute .dot() (this object will be used further in the iterative process). Note that the input and output of .dot() must be 1D vectors, so do not forget to use reshape.

def T_matvec(x, alpha):
    pass

# T = spla.LinearOperator((n**2, n**2), matvec=lambda x: T_matvec(x, alpha))
# your code is here
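A minimal sketch of one possible implementation, computing the BTTB matvec as a 2D circular convolution through a $(2n-1)\times(2n-1)$ block-circulant (BCCB) embedding; passing n explicitly is a choice of this sketch, not part of the template above:

import numpy as np
import scipy.sparse.linalg as spla

def T_matvec(x, alpha, n):
    # interpret the 1D input as an n x n image (column-major, as in the note above)
    X = np.reshape(x, (n, n), order='F')
    # Gaussian kernel values on all offsets 0, 1, ..., n-1, -(n-1), ..., -1
    offsets = np.concatenate([np.arange(n), np.arange(-n + 1, 0)])
    g = np.exp(-alpha * offsets**2)
    kernel = (alpha / np.pi) * np.outer(g, g)   # the filter is separable
    # zero-pad the image to the size of the circulant embedding
    Xpad = np.zeros((2 * n - 1, 2 * n - 1))
    Xpad[:n, :n] = X
    # circular convolution = elementwise multiplication in the Fourier domain
    Y = np.fft.ifft2(np.fft.fft2(kernel) * np.fft.fft2(Xpad)).real
    return np.reshape(Y[:n, :n], -1, order='F')

# T = spla.LinearOperator((n**2, n**2), matvec=lambda x: T_matvec(x, alpha, n))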
Remark. The obtained matrix $T$ is positive definite (at least in exact arithmetic), hence the conjugate gradient method can be applied to solve systems with $T$.

Bonus question: prove the remark above.
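One possible route for the proof (a sketch, assuming the grid points $p_i = (i_1, i_2)$ are pairwise distinct): the Fourier transform $\hat g$ of the Gaussian $g(u) = e^{-\alpha\|u\|^2}$ is again a strictly positive Gaussian, hence for any $x \neq 0$

$$x^\top T x = \frac{\alpha}{\pi}\sum_{i,j} x_i x_j\, e^{-\alpha\|p_i - p_j\|^2} = \frac{\alpha}{\pi}\int_{\mathbb{R}^2} \hat g(\omega)\, \Big|\sum_i x_i\, e^{\mathrm{i}\,\omega^\top p_i}\Big|^2 \,\mathrm{d}\omega > 0,$$

since $\hat g > 0$ and the functions $\omega \mapsto e^{\mathrm{i}\,\omega^\top p_i}$ are linearly independent for distinct $p_i$.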
Use scipy.sparse.linalg.cg with $tol \in \{10^{-3}, 10^{-4}, 10^{-5}, 10^{-6}, 10^{-7}\}$. For each pair $(\alpha, tol)$ write out num_iters and the relative error $e = \frac{\|x - x^*\|_2}{\|x\|_2}$. Comment on the results:

1) Why does the relative error not converge to zero?
2) Why does the relative error converge to different values for different $\alpha$?
# your code is here
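A minimal sketch of the experiment, assuming the T_matvec sketch above and the image x already loaded; the set of $\alpha$ values is illustrative, and the keyword rtol of cg is called tol in SciPy older than 1.12:

import numpy as np
import scipy.sparse.linalg as spla

x_true = np.asarray(x, dtype=float).reshape(-1, order='F')

for alpha in [0.1, 0.01]:   # illustrative values
    T = spla.LinearOperator((n**2, n**2), matvec=lambda v: T_matvec(v, alpha, n))
    y = T.dot(x_true)
    for tol in [1e-3, 1e-4, 1e-5, 1e-6, 1e-7]:
        iters = [0]
        def count(xk):          # called by cg once per iteration
            iters[0] += 1
        x_star, info = spla.cg(T, y, rtol=tol, callback=count)
        e = np.linalg.norm(x_true - x_star) / np.linalg.norm(x_true)
        print(f"alpha={alpha}, tol={tol:.0e}: num_iters={iters[0]}, e={e:.2e}")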
In all further tasks fix $\alpha = 0.01$. Compute $y = Tx$, add a vector with Gaussian noise from $\mathcal{N}(0, 1)$ to $y$, and obtain the final right-hand side $\hat{y}$.
# your code is here
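A minimal sketch, reusing the T_matvec sketch from above; the fixed seed is illustrative:

import numpy as np

alpha = 0.01
x_true = np.asarray(x, dtype=float).reshape(-1, order='F')
y = T_matvec(x_true, alpha, n)            # y = T x
rng = np.random.default_rng(0)            # illustrative seed for reproducibility
y_hat = y + rng.standard_normal(n**2)     # right-hand side with N(0, 1) noise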
Modify T_matvec() according to the template below. Write out num_iters and the relative error $e = \frac{\|x - x^*\|_2}{\|x\|_2}$. Comment on the results:

def T_lmbda_matvec(x, alpha, lmbda):
    pass
# your code is here
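A minimal sketch, under the assumption (suggested by the lmbda argument) that the modified operator is the shifted matrix $T + \lambda I$; as before, n is passed explicitly:

def T_lmbda_matvec(x, alpha, lmbda, n):
    # (T + lambda * I) x = T x + lambda * x
    return T_matvec(x, alpha, n) + lmbda * x

# T_lmbda = spla.LinearOperator((n**2, n**2),
#                               matvec=lambda x: T_lmbda_matvec(x, alpha, lmbda, n))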
def C_inv_matvec(x, alpha, lmbda):
pass
# your code is here
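A minimal sketch of one common choice: a Strang-type BCCB (circulant) approximation $C$ of $T$, whose eigenvalues are the 2D FFT of its first column, so that $(C + \lambda I)^{-1}$ costs two FFTs per application; the periodic wrapping rule below is an assumption of this sketch:

import numpy as np

def C_inv_matvec(x, alpha, lmbda, n):
    # wrap-around distances on an n-periodic grid: 0, 1, ..., then back down
    k = np.arange(n)
    d = np.minimum(k, n - k)
    g = np.exp(-alpha * d**2)
    # eigenvalues of the BCCB approximation (real, since the kernel is symmetric)
    eigs = np.fft.fft2((alpha / np.pi) * np.outer(g, g)).real
    X = np.reshape(x, (n, n), order='F')
    # apply (C + lambda I)^{-1} diagonally in the Fourier domain
    Y = np.fft.ifft2(np.fft.fft2(X) / (eigs + lmbda)).real
    return np.reshape(Y, -1, order='F')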