GradientDescentOptimizer: Natural gradient descent optimizer

class qat.plugins.GradientDescentOptimizer(maxiter: int = 200, lambda_step: float = 0.2, natural_gradient: bool = True, stop_crit: str = 'grad_norm', tol: float = 0.001, x0: Optional[List[float]] = None, user_custom_gates=None, collective: bool = False)

Gradient-based optimization plugin, with the option to use natural gradients as described in this publication.

The variational parameters are updated according to the rule

\[\vec{\theta}_{k+1} = \vec{\theta}_{k} - \lambda g^{+} \vec{\nabla} E\]

with \(E(\vec{\theta}) = \langle \psi(\vec{\theta}) | H | \psi (\vec{\theta}) \rangle\), \(\left [ \vec{\nabla} E \right ]_i = \frac{\partial E}{\partial \theta_i}\) and

the metric tensor

\[g_{ij} = \mathrm{Re} \left[ \left \langle \frac{\partial \psi}{\partial \theta_i} \Bigg | \frac{\partial \psi}{\partial \theta_j} \right \rangle - \left \langle \frac{\partial \psi}{\partial \theta_i} \Bigg | \psi \right \rangle \left \langle \psi \Bigg | \frac{\partial \psi}{\partial \theta_j} \right \rangle \right ]\]

Here \(g^{+}\) denotes the pseudo-inverse of the metric tensor. For regular gradient descent, we choose \(g = I\).
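The update rule and metric tensor above can be sketched in plain NumPy for a toy two-parameter single-qubit ansatz, \(|\psi(\vec{\theta})\rangle = RZ(\theta_2)\,RY(\theta_1)|0\rangle\) with observable \(H = Z\) (an illustrative example, not the plugin's implementation; derivatives are taken by finite differences):

```python
import numpy as np

def ry(t):
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]], dtype=complex)

def rz(t):
    return np.array([[np.exp(-1j * t / 2), 0],
                     [0, np.exp(1j * t / 2)]], dtype=complex)

def psi(theta):
    # Toy ansatz |psi(theta)> = RZ(theta_2) RY(theta_1) |0>  (illustrative choice)
    return rz(theta[1]) @ ry(theta[0]) @ np.array([1, 0], dtype=complex)

H = np.diag([1.0, -1.0]).astype(complex)  # observable H = Pauli Z

def energy(theta):
    # E(theta) = <psi|H|psi>
    v = psi(theta)
    return float(np.real(np.vdot(v, H @ v)))

def dpsi(theta, i, eps=1e-6):
    # Central finite difference for |d psi / d theta_i>
    tp, tm = theta.copy(), theta.copy()
    tp[i] += eps
    tm[i] -= eps
    return (psi(tp) - psi(tm)) / (2 * eps)

def natural_gradient_step(theta, lam=0.2):
    v = psi(theta)
    d = [dpsi(theta, i) for i in range(len(theta))]
    # Metric tensor g_ij = Re[<d_i psi|d_j psi> - <d_i psi|psi><psi|d_j psi>]
    g = np.array([[np.real(np.vdot(di, dj) - np.vdot(di, v) * np.vdot(v, dj))
                   for dj in d] for di in d])
    # Energy gradient [grad E]_i = 2 Re <d_i psi|H|psi>
    grad = np.array([2 * np.real(np.vdot(di, H @ v)) for di in d])
    # Update: theta <- theta - lam * g^+ grad (pseudo-inverse handles singular g)
    return theta - lam * np.linalg.pinv(g) @ grad

theta = np.array([0.5, 0.3])
for _ in range(60):
    theta = natural_gradient_step(theta)
print(round(energy(theta), 6))  # approaches -1, the ground-state energy of Z
```

For this ansatz the energy is \(E = \cos\theta_1\), so the iteration drives \(\theta_1\) toward \(\pi\) and the energy toward \(-1\).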

  • maxiter (int, optional) – Maximum number of iterations. Defaults to 200.

  • natural_gradient (bool, optional) – Whether to perform natural gradient descent or “classical” gradient descent. Defaults to True.

  • lambda_step (float, optional) – Gradient descent step size \(\lambda\). Defaults to 0.2.

  • stop_crit (string, optional) – Stopping criterion, either “grad_norm” or “energy_dist”. Defaults to “grad_norm”.

  • tol (float, optional) – Tolerance for the stopping criterion. Defaults to 0.001.

  • x0 (list, optional) – Initial value of the parameters. The indexing must be the same as for the variables obtained via the `.get_variables()` method. If None, the initial parameters will be randomly chosen. Defaults to None.
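The two stopping criteria can be illustrated with a minimal pure-Python descent loop on a toy quadratic energy (a sketch, assuming “energy_dist” means the absolute energy change between successive iterations; this interpretation is not confirmed by the source):

```python
def descend(E, grad_E, x0, lam=0.2, tol=1e-3, stop_crit="grad_norm", maxiter=200):
    """Minimal 1-D gradient descent mimicking the plugin's stopping options."""
    theta = x0
    prev_E = E(theta)
    for k in range(maxiter):
        g = grad_E(theta)
        # "grad_norm": stop once the gradient norm falls below tol
        if stop_crit == "grad_norm" and abs(g) < tol:
            return theta, k
        theta = theta - lam * g
        cur_E = E(theta)
        # "energy_dist": stop once the energy change falls below tol (assumed meaning)
        if stop_crit == "energy_dist" and abs(cur_E - prev_E) < tol:
            return theta, k + 1
        prev_E = cur_E
    return theta, maxiter

# Toy energy E(theta) = theta^2, minimum at theta = 0, gradient 2*theta
theta, niter = descend(lambda t: t * t, lambda t: 2 * t, x0=1.0, tol=1e-6)
print(round(theta, 8), niter)
```

With lambda_step = 0.2 each iteration multiplies theta by 0.6, so both criteria terminate well before maxiter.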

This plugin is used in the following notebooks: