Next: Generalized eigenproblem Up: The basic scaling method Previous: Use of a scaling

Solving for the scaling eigenfunctions

In a scaling eigenfunction basis, the tension matrix takes a very nearly diagonal form for wavenumbers close to $k$ (Section 6.1.2). We consider this matrix and its derivative with respect to $k$,

$$\tilde{F}_{\mu \nu}(k) \;=\; 2\, \delta_\mu \delta_\nu \, M_{\mu \nu} \;+\; O(\delta^3) , \qquad\qquad (6.19)$$

$$\frac{d\tilde{F}_{\mu \nu}}{dk} \;=\; 2\, (\delta_\mu + \delta_\nu) \, M_{\mu \nu} \;+\; O(\delta^2) , \qquad\qquad (6.20)$$

where the tilde indicates the scaling eigenfunction representation, and use has been made of $d \delta_\mu/dk = 1$. (I will not write the $k$-dependence of the $\delta$'s explicitly.) The quasi-diagonality of both these matrices is inherited from that of $M$, whereas the $k$-dependence enters through the $\delta$'s.
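The $k$-derivative in (6.20) is just the product rule applied to the leading term of (6.19), using $d\delta_\mu/dk = 1$. A minimal symbolic sketch of this step (illustrative only; `M_mn`, `k_mu`, `k_nu` are stand-in symbols, and any $k$-dependence of $M$ is neglected, as only leading order in $\delta$ is kept):

```python
import sympy as sp

# Symbolic check of (6.20) from (6.19): a sketch, not thesis code.
# M_mn stands in for M_{mu nu}; delta_mu = k - k_mu obeys
# d(delta_mu)/dk = 1, which is the fact used in the text.
k, k_mu, k_nu, M_mn = sp.symbols('k k_mu k_nu M_mn')
delta_mu, delta_nu = k - k_mu, k - k_nu

F_tilde = 2 * delta_mu * delta_nu * M_mn       # leading term of (6.19)
dF_tilde = sp.diff(F_tilde, k)                 # product rule in k

# agrees with the leading term of (6.20)
assert sp.simplify(dF_tilde - 2 * (delta_mu + delta_nu) * M_mn) == 0
```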

The transformation between matrix representations is

$$\tilde{F}(k) \;=\; X^{{\mbox{\tiny T}}} F(k) \, X , \qquad\qquad (6.21)$$

$$\frac{d\tilde{F}}{dk} \;=\; X^{{\mbox{\tiny T}}} \frac{dF}{dk} \, X , \qquad\qquad (6.22)$$
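Numerically, (6.21) and (6.22) are plain congruence transformations. A sketch with random stand-ins (none of these arrays come from the thesis; $F$, $dF/dk$, and $X$ would in practice come from the boundary integrals and diagonalization discussed below):

```python
import numpy as np

# Sketch of (6.21)-(6.22) as congruence transformations, on invented
# data: F and dF play the roles of F(k) and dF/dk in the original
# basis, and the columns of the rectangular X hold the coefficient
# vectors x^(mu) of the scaling eigenfunctions.
rng = np.random.default_rng(0)
N, m = 8, 3                                  # m columns kept, m < N
F = rng.normal(size=(N, N)); F = F + F.T     # symmetric, like F(k)
dF = rng.normal(size=(N, N)); dF = dF + dF.T
X = rng.normal(size=(N, m))

F_tilde = X.T @ F @ X        # (6.21): m-by-m matrix in the new basis
dF_tilde = X.T @ dF @ X      # (6.22): symmetry is preserved
```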

where $X_{i\mu} \equiv x_i^{(\mu)}$ are the desired coefficient vectors of the scaling eigenfunctions (the error functions in (6.17) will be dropped). $X$ is rectangular, with fewer columns than $N$; the small-$\delta$ approximation breaks down before this number of columns is reached anyway. The scaling basis representations
$$F_{ij}(k) \;=\; \oint_\Gamma \!\! d{\mathbf s} \, \frac{1}{r_n} \, \phi_i(k,{\mathbf r}) \, \phi_j(k,{\mathbf r}) , \qquad\qquad (6.23)$$

$$\frac{dF_{ij}}{dk} \;=\; \frac{1}{k} \oint_\Gamma \!\! d{\mathbf s} \, \frac{1}{r_n} \, \phi_i(k,{\mathbf r}) \, {\mathbf r}\cdot\nabla\phi_j(k,{\mathbf r}) \;+\; \mbox{transpose} , \qquad\qquad (6.24)$$

can easily be evaluated using the method of Appendix G.
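As an illustration only (the actual quadrature is the subject of Appendix G and is not reproduced here), once the boundary $\Gamma$ is discretised into nodes with quadrature weights, (6.23) reduces to a weighted sum over nodes that can be formed with a single dense product. The names `tension_matrix`, `phi`, `rn`, and `w` are hypothetical:

```python
import numpy as np

# Hedged sketch of evaluating (6.23) by boundary quadrature: tabulate
# the basis functions phi_i(k, r) at boundary nodes r_q with weights
# w_q, and weight each node by 1/r_n there. Here phi is an
# (N_basis, N_nodes) array and rn holds r_n at each node.
def tension_matrix(phi, rn, w):
    """F_ij = sum_q w_q (1/r_n(q)) phi_i(r_q) phi_j(r_q)."""
    weighted = phi * (w / rn)          # broadcast node weights
    return weighted @ phi.T            # (N_basis, N_basis), symmetric

# toy data: 5 basis functions sampled at 40 boundary nodes
rng = np.random.default_rng(1)
phi = rng.normal(size=(5, 40))
rn = 1.0 + 0.2 * rng.random(40)        # r_n > 0 on a star-shaped boundary
w = np.full(40, 2 * np.pi / 40)        # equal-weight quadrature (toy)

F = tension_matrix(phi, rn, w)
print(np.allclose(F, F.T))             # True: symmetric by construction
```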

Diagonalization of the matrix $F(k)$ can then give eigenvectors which are very close to the desired columns of $X$. In particular, an eigenvector very close to the scaling eigenfunction ${\mathbf x}_\mu$ may be returned (if $\delta_\mu \ll 1/{\mathsf{L}}$). However, the eigenvalue will equal the tension of the state resulting from a unit norm $\vert{\mathbf x}\vert = 1$ in coefficient space, which has no physical significance in the basis sets used (RPWs + EPWs). Also, the null-space vectors in ${\mathbf x}$ produce small-eigenvalue solutions (spread exponentially down to machine precision, as in Fig. 5.3) which interfere (mix) with the desired vectors. However, the parabolic tension minima are still visible. The same is true if the matrix $dF/dk$ is diagonalized, only now small eigenvalues correspond to both Dirichlet (upwards-travelling) and Neumann (downwards-travelling) eigenstates. Therefore direct diagonalization is not a good way to extract the desired states. Fig. 6.4 shows the $k$-dependence of these two diagonalizations.

Alex Barnett 2001-10-03