pygsti.optimize.arraysinterface

Implements the ArraysInterface object and supporting functionality.

Module Contents

Classes

ArraysInterface

An interface between pyGSTi's optimization methods and data storage arrays.

UndistributedArraysInterface

An arrays interface for the case when the arrays are not actually distributed.

DistributedArraysInterface

An arrays interface where the arrays are distributed according to a distributed layout.

class pygsti.optimize.arraysinterface.ArraysInterface

Bases: object

An interface between pyGSTi’s optimization methods and data storage arrays.

This class provides an abstract interface to algorithms (particularly the Levenberg-Marquardt nonlinear least-squares algorithm) for creating and manipulating potentially distributed data arrays with types such as “jtj” (Jacobian^T * Jacobian), “jtf” (Jacobian^T * objectivefn_vector), and “x” (model parameter vector). The class encapsulates all the operations on these arrays so that the algorithm doesn’t need to worry about how the arrays are actually stored in memory, e.g. whether shared memory is used or not.
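
As a rough illustration (not pyGSTi's actual optimizer code), an algorithm written against this interface calls only ArraysInterface methods, so the same routine works whether the underlying arrays are plain local numpy arrays or distributed ones. The hypothetical helper below is a sketch that uses only methods documented on this page:

    # Hypothetical sketch: how an LM-style routine might use an ArraysInterface
    # `ari` without knowing how the underlying arrays are stored.
    def lm_quantities_sketch(ari, j, f):
        jtj = ari.allocate_jtj()    # approximate Hessian, 'jtj'-type
        jtf = ari.allocate_jtf()    # J^T f vector, 'jtf'-type
        ari.fill_jtj(j, jtj)        # jtj <- dot(j.T, j)
        ari.fill_jtf(j, f, jtf)     # jtf <- dot(j.T, f)
        norm2_f = ari.norm2_f(f)    # scalar reductions are abstracted too
        # ... the LM update using jtj and jtf would go here ...
        ari.deallocate_jtj(jtj)
        ari.deallocate_jtf(jtf)
        return norm2_f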

class pygsti.optimize.arraysinterface.UndistributedArraysInterface(num_global_elements, num_global_params)

Bases: ArraysInterface

An arrays interface for the case when the arrays are not actually distributed.

Parameters
  • num_global_elements (int) – The total number of objective function “elements”, i.e. the size of the objective function array f.

  • num_global_params (int) – The total number of (model) parameters, i.e. the size of the x array.
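
For example, a small undistributed interface might be created and its arrays allocated as below; the array shapes in the comments are assumptions based on the sizes above, not documented guarantees.

    from pygsti.optimize.arraysinterface import UndistributedArraysInterface

    # Hypothetical sizes: 100 objective-function elements, 20 model parameters.
    ari = UndistributedArraysInterface(100, 20)
    jac = ari.allocate_jac()   # 'ep'-type; expected shape (100, 20)
    jtj = ari.allocate_jtj()   # 'jtj'-type; expected shape (20, 20)
    jtf = ari.allocate_jtf()   # 'jtf'-type; expected shape (20,)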

allocate_jtf(self)

Allocate an array for holding a ‘jtf’-type value.

Returns

numpy.ndarray or LocalNumpyArray

allocate_jtj(self)

Allocate an array for holding an approximated Hessian (type ‘jtj’).

Returns

numpy.ndarray or LocalNumpyArray

allocate_jac(self)

Allocate an array for holding a Jacobian matrix (type ‘ep’).

Returns

numpy.ndarray or LocalNumpyArray

deallocate_jtf(self, jtf)

Free an array for holding a ‘jtf’-type value.

Returns

None

deallocate_jtj(self, jtj)

Free an array for holding an approximated Hessian (type ‘jtj’).

Returns

None

deallocate_jac(self, jac)

Free an array for holding a Jacobian matrix (type ‘ep’).

Returns

None

global_num_elements(self)

The total number of objective function “elements”.

This is the size/length of the objective function f vector.

Returns

int

jac_param_slice(self, only_if_leader=False)

The slice into a Jacobian’s columns that belong to this processor.

Parameters

only_if_leader (bool, optional) – If True, the current processor’s parameter slice is only returned if the processor is the “leader” (i.e. the first) of the processors that calculate the same parameter slice. All non-leader processors return the zero-slice slice(0,0).

Returns

slice

jtf_param_slice(self)

The slice into a ‘jtf’ vector giving the rows owned by this processor.

Returns

slice
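
A brief, hedged sketch of how these slices might be used; ari is an assumed ArraysInterface instance and global_jac an assumed global Jacobian. In the undistributed case both slices simply cover all parameters.

    # Hedged sketch: relating local arrays to global indices via the two slices above.
    pslice = ari.jac_param_slice()           # Jacobian columns this processor owns
    wslice = ari.jtf_param_slice()           # 'jtf' rows this processor owns
    local_jac_cols = global_jac[:, pslice]   # `global_jac` is an assumed global Jacobian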

param_fine_info(self)

Returns information regarding how model parameters are distributed among hosts and processors.

This information relates to the “fine” distribution used in distributed layouts, and is needed by some algorithms which utilize shared-memory communication between processors on the same host.

Returns

  • param_fine_slices_by_host (list) – A list with one entry per host. Each entry is itself a list of (rank, (global_param_slice, host_param_slice)) elements where rank is the top-level overall rank of a processor, global_param_slice is the parameter slice that processor owns and host_param_slice is the same slice relative to the parameters owned by the host.

  • owner_host_and_rank_of_global_fine_param_index (dict) – A mapping between parameter indices (keys) and the owning processor rank and host index. Values are (host_index, processor_rank) tuples.
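
Purely for illustration (all numbers below are made up, not produced by this module), the return values might look like the following for two hosts with two processors each and eight parameters split evenly:

    param_fine_slices_by_host = [
        # host 0: ranks 0 and 1
        [(0, (slice(0, 2), slice(0, 2))), (1, (slice(2, 4), slice(2, 4)))],
        # host 1: ranks 2 and 3
        [(2, (slice(4, 6), slice(0, 2))), (3, (slice(6, 8), slice(2, 4)))],
    ]
    owner_host_and_rank_of_global_fine_param_index = {
        0: (0, 0), 1: (0, 0), 2: (0, 1), 3: (0, 1),
        4: (1, 2), 5: (1, 2), 6: (1, 3), 7: (1, 3),
    }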

allgather_x(self, x, global_x)

Gather a parameter (x) vector onto all the processors.

Parameters
  • x (numpy.array or LocalNumpyArray) – The local input vector.

  • global_x (numpy.array or LocalNumpyArray) – The global output vector, filled on all the processors.

Returns

None

allscatter_x(self, global_x, x)

Pare down an already-scattered global parameter (x) vector to be just a local x vector.

Parameters
  • global_x (numpy.array or LocalNumpyArray) – The input vector. This global vector is already present on all the processors, so there’s no need to do any MPI communication.

  • x (numpy.array or LocalNumpyArray) – The output vector, typically a slice of global_x.

Returns

None

scatter_x(self, global_x, x)

Scatter a global parameter (x) vector onto all the processors.

Parameters
  • global_x (numpy.array or LocalNumpyArray) – The global input vector.

  • x (numpy.array or LocalNumpyArray) – The local destination vector.

Returns

None

allgather_f(self, f, global_f)

Gather an objective function (f) vector onto all the processors.

Parameters
  • f (numpy.array or LocalNumpyArray) – The local input vector.

  • global_f (numpy.array or LocalNumpyArray) – The global output vector, filled on all the processors.

Returns

None

gather_jtj(self, jtj, return_shared=False)

Gather a Hessian (jtj) matrix onto the root processor.

Parameters
  • jtj (numpy.array or LocalNumpyArray) – The (local) input matrix to gather.

  • return_shared (bool, optional) – Whether the returned array is allowed to be a shared-memory array, which results in a small performance gain because the array used internally to gather the results can be returned directly. When True a shared memory handle is also returned, and the caller assumes responsibility for freeing the memory via :function:`pygsti.tools.sharedmemtools.cleanup_shared_ndarray`.

Returns

  • gathered_array (numpy.ndarray or None) – The full (global) output array on the root (rank=0) processor and None on all other processors.

  • shared_memory_handle (multiprocessing.shared_memory.SharedMemory or None) – Returned only when return_shared == True. The shared memory handle associated with gathered_array, which is needed to free the memory.
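
A hedged usage sketch for return_shared=True; ari is an assumed ArraysInterface instance and use(...) is a placeholder for whatever the caller does with the gathered matrix.

    from pygsti.tools import sharedmemtools as _smt

    gathered, shm = ari.gather_jtj(jtj, return_shared=True)
    if gathered is not None:              # only the root (rank-0) processor gets the array
        use(gathered)                     # placeholder for the caller's work
    if shm is not None:
        _smt.cleanup_shared_ndarray(shm)  # the caller must free the shared memory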

scatter_jtj(self, global_jtj, jtj)

Scatter a Hessian (jtj) matrix onto all the processors.

Parameters
  • global_jtj (numpy.ndarray) – The global Hessian matrix to scatter.

  • jtj (numpy.ndarray or LocalNumpyArray) – The local destination array.

Returns

None

gather_jtf(self, jtf, return_shared=False)

Gather a jtf vector onto the root processor.

Parameters
  • jtf (numpy.array or LocalNumpyArray) – The local input vector to gather.

  • return_shared (bool, optional) – Whether the returned array is allowed to be a shared-memory array, which results in a small performance gain because the array used internally to gather the results can be returned directly. When True a shared memory handle is also returned, and the caller assumes responsibility for freeing the memory via :function:`pygsti.tools.sharedmemtools.cleanup_shared_ndarray`.

Returns

  • gathered_array (numpy.ndarray or None) – The full (global) output array on the root (rank=0) processor and None on all other processors.

  • shared_memory_handle (multiprocessing.shared_memory.SharedMemory or None) – Returned only when return_shared == True. The shared memory handle associated with gathered_array, which is needed to free the memory.

scatter_jtf(self, global_jtf, jtf)

Scatter a jtf vector onto all the processors.

Parameters
  • global_jtf (numpy.ndarray) – The global vector to scatter.

  • jtf (numpy.ndarray or LocalNumpyArray) – The local destination array.

Returns

None

global_svd_dot(self, jac_v, minus_jtf)

Gathers the dot product between a jtj-type matrix and a jtf-type vector into a global result array.

This is typically used within SVD-defined basis calculations, where jac_v is the “V” matrix of the SVD of a Jacobian, and minus_jtf is the negative dot product between the Jacobian matrix and objective function vector.

Parameters
  • jac_v (numpy.ndarray or LocalNumpyArray) – An array of jtj-type; the “V” matrix of the SVD of a Jacobian.

  • minus_jtf (numpy.ndarray or LocalNumpyArray) – An array of jtf-type; the negative dot product between the Jacobian matrix and objective function vector.

Returns

numpy.ndarray – The global (gathered) parameter vector dot(jac_v.T, minus_jtf).

fill_dx_svd(self, jac_v, global_vec, dx)

Computes the dot product of a jtj-type array with a global parameter array.

The result (dx) is a jtf-type array. This is typically used for computing the x-update vector in the LM method when using an SVD-defined basis.

Parameters
  • jac_v (numpy.ndarray or LocalNumpyArray) – An array of jtj-type.

  • global_vec (numpy.ndarray) – A global parameter vector.

  • dx (numpy.ndarray or LocalNumpyArray) – An array of jtf-type. Filled with dot(jac_v, global_vec) values.

Returns

None
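
Taken together with global_svd_dot above, a hedged sketch of the SVD-basis update step might look like this; ari, jac_v, and minus_jtf are assumed, and the scaling step is omitted.

    global_vec = ari.global_svd_dot(jac_v, minus_jtf)   # gathered dot(jac_v.T, minus_jtf)
    # ... scale global_vec (e.g. by damped inverse singular values) here ...
    dx = ari.allocate_jtf()                             # 'jtf'-type output
    ari.fill_dx_svd(jac_v, global_vec, dx)              # dx <- dot(jac_v, global_vec)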

dot_x(self, x1, x2)

Take the dot product of two x-type vectors.

Parameters
  • x1 (numpy.ndarray or LocalNumpyArray) – The first vector.

  • x2 (numpy.ndarray or LocalNumpyArray) – The second vector.

Returns

float

norm2_x(self, x)

Compute the Frobenius norm squared of an x-type vector.

Parameters

x (numpy.ndarray or LocalNumpyArray) – The vector to operate on.

Returns

float

infnorm_x(self, x)

Compute the infinity-norm of an x-type vector.

Parameters

x (numpy.ndarray or LocalNumpyArray) – The vector to operate on.

Returns

float

max_x(self, x)

Compute the maximum of an x-type vector.

Parameters

x (numpy.ndarray or LocalNumpyArray) – The vector to operate on.

Returns

float

norm2_f(self, f)

Compute the Frobenius norm squared of an f-type vector.

Parameters

f (numpy.ndarray or LocalNumpyArray) – The vector to operate on.

Returns

float

norm2_jtj(self, jtj)

Compute the Frobenius norm squared of a jtj-type matrix.

Parameters

jtj (numpy.ndarray or LocalNumpyArray) – The array to operate on.

Returns

float

norm2_jac(self, j)

Compute the Frobenius norm squared of a Jacobian matrix (ep-type).

Parameters

j (numpy.ndarray or LocalNumpyArray) – The Jacobian to operate on.

Returns

float

fill_jtf(self, j, f, jtf)

Compute dot(Jacobian.T, f) in supplied memory.

Parameters
  • j (numpy.ndarray or LocalNumpyArray) – Jacobian matrix (type ep).

  • f (numpy.ndarray or LocalNumpyArray) – Objective function vector (type e).

  • jtf (numpy.ndarray or LocalNumpyArray) – Output array, type jtf. Filled with dot(j.T, f) values.

Returns

None
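
In the undistributed case this reduces to an ordinary matrix-vector product; the check below is a conceptual sketch with made-up sizes, and the stated equivalence is an assumption rather than a documented guarantee.

    import numpy as np
    from pygsti.optimize.arraysinterface import UndistributedArraysInterface

    ari = UndistributedArraysInterface(100, 20)   # hypothetical sizes
    rng = np.random.default_rng(0)
    j = rng.normal(size=(100, 20))                # 'ep'-type Jacobian
    f = rng.normal(size=100)                      # 'e'-type objective vector
    jtf = ari.allocate_jtf()
    ari.fill_jtf(j, f, jtf)
    assert np.allclose(jtf, j.T @ f)              # assumed equivalence when undistributed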

fill_jtj(self, j, jtj, shared_mem_buf=None)

Compute dot(Jacobian.T, Jacobian) in supplied memory.

Parameters
  • j (numpy.ndarray or LocalNumpyArray) – Jacobian matrix (type ep).

  • jtj (numpy.ndarray or LocalNumpyArray) – Output array, type jtj. Filled with dot(j.T, j) values.

  • shared_mem_buf (tuple or None) – Scratch space of shared memory used to speed up repeated calls to fill_jtj. If not None, the value returned from :method:`allocate_jtj_shared_mem_buf`.

Returns

None

allocate_jtj_shared_mem_buf(self)

Allocate scratch space to be used for repeated calls to :method:`fill_jtj`.

Returns

  • scratch (numpy.ndarray or None) – The scratch array.

  • shared_memory_handle (multiprocessing.shared_memory.SharedMemory or None) – The shared memory handle associated with scratch, which is needed to free the memory.

deallocate_jtj_shared_mem_buf(self, jtj_buf)

Frees the scratch memory allocated by :method:`allocate_jtj_shared_mem_buf`.

Parameters

jtj_buf (tuple or None) – The value returned from :method:`allocate_jtj_shared_mem_buf`
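
A hedged sketch of the intended buffer lifecycle across the three methods above; ari is an assumed ArraysInterface instance, and the jacobians iterable and the use of jtj afterward are placeholders.

    buf = ari.allocate_jtj_shared_mem_buf()       # scratch (array, shared-memory handle)
    for j in jacobians:                           # placeholder iterable of 'ep'-type arrays
        ari.fill_jtj(j, jtj, shared_mem_buf=buf)  # reuse the scratch space on each call
        # ... use the freshly filled jtj here ...
    ari.deallocate_jtj_shared_mem_buf(buf)        # free the shared-memory scratch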

jtj_diag_indices(self, jtj)

The indices into a jtj-type array that correspond to diagonal elements of the global matrix.

If jtj were a global quantity, this would just be numpy.diag_indices_from(jtj); in practice, however, it may be more complicated when different processors hold different sections of the global matrix.

Parameters

jtj (numpy.ndarray or None) – The jtj-type array to get the indices with respect to.

Returns

tuple – A tuple of 1D arrays that can be used to index the elements of jtj that correspond to diagonal elements of the global jtj matrix.
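
For example, a Levenberg-Marquardt style damping of the diagonal could be written as below; ari is an assumed ArraysInterface instance and mu an assumed damping parameter.

    idiag = ari.jtj_diag_indices(jtj)   # indices of the global diagonal within the local jtj
    jtj[idiag] += mu                    # add damping only to true diagonal elements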

class pygsti.optimize.arraysinterface.DistributedArraysInterface(dist_layout, extra_elements=0)

Bases: ArraysInterface

An arrays interface where the arrays are distributed according to a distributed layout.

Parameters
  • dist_layout (DistributableCOPALayout) – The layout giving the distribution of the arrays.

  • extra_elements (int, optional) – The number of additional objective function “elements” beyond those specified by dist_layout. These are often used for penalty terms.
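
A hedged construction sketch: the layout object comes from elsewhere in pyGSTi (e.g. a circuit-outcome-probability-array layout built by a forward simulator) and is assumed here.

    from pygsti.optimize.arraysinterface import DistributedArraysInterface

    # `layout` is an assumed DistributableCOPALayout built elsewhere.
    ari = DistributedArraysInterface(layout, extra_elements=2)  # e.g. 2 penalty-term elements
    jtj = ari.allocate_jtj()   # this processor's section of the (distributed) J^T J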

allocate_jtf(self)

Allocate an array for holding a ‘jtf’-type value.

Returns

numpy.ndarray or LocalNumpyArray

allocate_jtj(self)

Allocate an array for holding an approximated Hessian (type ‘jtj’).

Returns

numpy.ndarray or LocalNumpyArray

allocate_jac(self)

Allocate an array for holding a Jacobian matrix (type ‘ep’).

Returns

numpy.ndarray or LocalNumpyArray

deallocate_jtf(self, jtf)

Free an array for holding a ‘jtf’-type value.

Returns

None

deallocate_jtj(self, jtj)

Free an array for holding an approximated Hessian (type ‘jtj’).

Returns

None

deallocate_jac(self, jac)

Free an array for holding a Jacobian matrix (type ‘ep’).

Returns

None

global_num_elements(self)

The total number of objective function “elements”.

This is the size/length of the objective function f vector.

Returns

int

jac_param_slice(self, only_if_leader=False)

The slice into a Jacobian’s columns that belong to this processor.

Parameters

only_if_leader (bool, optional) – If True, the current processor’s parameter slice is only returned if the processor is the “leader” (i.e. the first) of the processors that calculate the same parameter slice. All non-leader processors return the zero-slice slice(0,0).

Returns

slice

jtf_param_slice(self)

The slice into a ‘jtf’ vector giving the rows owned by this processor.

Returns

slice

param_fine_info(self)

Returns information regarding how model parameters are distributed among hosts and processors.

This information relates to the “fine” distribution used in distributed layouts, and is needed by some algorithms which utilize shared-memory communication between processors on the same host.

Returns

  • param_fine_slices_by_host (list) – A list with one entry per host. Each entry is itself a list of (rank, (global_param_slice, host_param_slice)) elements where rank is the top-level overall rank of a processor, global_param_slice is the parameter slice that processor owns and host_param_slice is the same slice relative to the parameters owned by the host.

  • owner_host_and_rank_of_global_fine_param_index (dict) – A mapping between parameter indices (keys) and the owning processor rank and host index. Values are (host_index, processor_rank) tuples.

allgather_x(self, x, global_x)

Gather a parameter (x) vector onto all the processors.

Parameters
  • x (numpy.array or LocalNumpyArray) – The local input vector.

  • global_x (numpy.array or LocalNumpyArray) – The global output vector, filled on all the processors.

Returns

None

allscatter_x(self, global_x, x)

Pare down an already-scattered global parameter (x) vector to be just a local x vector.

Parameters
  • global_x (numpy.array or LocalNumpyArray) – The input vector. This global vector is already present on all the processors, so there’s no need to do any MPI communication.

  • x (numpy.array or LocalNumpyArray) – The output vector, typically a slice of global_x.

Returns

None

scatter_x(self, global_x, x)

Scatter a global parameter (x) vector onto all the processors.

Parameters
Returns

None

allgather_f(self, f, global_f)

Gather an objective function (f) vector onto all the processors.

Parameters
  • f (numpy.array or LocalNumpyArray) – The local input vector.

  • global_f (numpy.array or LocalNumpyArray) – The global output vector, filled on all the processors.

Returns

None

gather_jtj(self, jtj, return_shared=False)

Gather a Hessian (jtj) matrix onto the root processor.

Parameters
  • jtj (numpy.array or LocalNumpyArray) – The (local) input matrix to gather.

  • return_shared (bool, optional) – Whether the returned array is allowed to be a shared-memory array, which results in a small performance gain because the array used internally to gather the results can be returned directly. When True a shared memory handle is also returned, and the caller assumes responsibility for freeing the memory via :function:`pygsti.tools.sharedmemtools.cleanup_shared_ndarray`.

Returns

  • gathered_array (numpy.ndarray or None) – The full (global) output array on the root (rank=0) processor and None on all other processors.

  • shared_memory_handle (multiprocessing.shared_memory.SharedMemory or None) – Returned only when return_shared == True. The shared memory handle associated with gathered_array, which is needed to free the memory.

scatter_jtj(self, global_jtj, jtj)

Scatter a Hessian (jtj) matrix onto all the processors.

Parameters
  • global_jtj (numpy.ndarray) – The global Hessian matrix to scatter.

  • jtj (numpy.ndarray or LocalNumpyArray) – The local destination array.

Returns

None

gather_jtf(self, jtf, return_shared=False)

Gather a jtf vector onto the root processor.

Parameters
  • jtf (numpy.array or LocalNumpyArray) – The local input vector to gather.

  • return_shared (bool, optional) – Whether the returned array is allowed to be a shared-memory array, which results in a small performance gain because the array used internally to gather the results can be returned directly. When True a shared memory handle is also returned, and the caller assumes responsibility for freeing the memory via :function:`pygsti.tools.sharedmemtools.cleanup_shared_ndarray`.

Returns

  • gathered_array (numpy.ndarray or None) – The full (global) output array on the root (rank=0) processor and None on all other processors.

  • shared_memory_handle (multiprocessing.shared_memory.SharedMemory or None) – Returned only when return_shared == True. The shared memory handle associated with gathered_array, which is needed to free the memory.

scatter_jtf(self, global_jtf, jtf)

Scatter a jtf vector onto all the processors.

Parameters
  • global_jtf (numpy.ndarray) – The global vector to scatter.

  • jtf (numpy.ndarray or LocalNumpyArray) – The local destination array.

Returns

None

global_svd_dot(self, jac_v, minus_jtf)

Gathers the dot product between a jtj-type matrix and a jtf-type vector into a global result array.

This is typically used within SVD-defined basis calculations, where jac_v is the “V” matrix of the SVD of a Jacobian, and minus_jtf is the negative dot product between the Jacobian matrix and objective function vector.

Parameters
  • jac_v (numpy.ndarray or LocalNumpyArray) – An array of jtj-type; the “V” matrix of the SVD of a Jacobian.

  • minus_jtf (numpy.ndarray or LocalNumpyArray) – An array of jtf-type; the negative dot product between the Jacobian matrix and objective function vector.

Returns

numpy.ndarray – The global (gathered) parameter vector dot(jac_v.T, minus_jtf).

fill_dx_svd(self, jac_v, global_vec, dx)

Computes the dot product of a jtj-type array with a global parameter array.

The result (dx) is a jtf-type array. This is typically used for computing the x-update vector in the LM method when using an SVD-defined basis.

Parameters
  • jac_v (numpy.ndarray or LocalNumpyArray) – An array of jtj-type.

  • global_vec (numpy.ndarray) – A global parameter vector.

  • dx (numpy.ndarray or LocalNumpyArray) – An array of jtf-type. Filled with dot(jac_v, global_vec) values.

Returns

None

dot_x(self, x1, x2)

Take the dot product of two x-type vectors.

Parameters
  • x1 (numpy.ndarray or LocalNumpyArray) – The first vector.

  • x2 (numpy.ndarray or LocalNumpyArray) – The second vector.

Returns

float

norm2_x(self, x)

Compute the Frobenius norm squared of an x-type vector.

Parameters

x (numpy.ndarray or LocalNumpyArray) – The vector to operate on.

Returns

float

infnorm_x(self, x)

Compute the infinity-norm of an x-type vector.

Parameters

x (numpy.ndarray or LocalNumpyArray) – The vector to operate on.

Returns

float

min_x(self, x)

Compute the minimum of an x-type vector.

Parameters

x (numpy.ndarray or LocalNumpyArray) – The vector to operate on.

Returns

float

max_x(self, x)

Compute the maximum of an x-type vector.

Parameters

x (numpy.ndarray or LocalNumpyArray) – The vector to operate on.

Returns

float

norm2_f(self, f)

Compute the Frobenius norm squared of an f-type vector.

Parameters

f (numpy.ndarray or LocalNumpyArray) – The vector to operate on.

Returns

float

norm2_jac(self, j)

Compute the Frobenius norm squared of a Jacobian matrix (ep-type).

Parameters

j (numpy.ndarray or LocalNumpyArray) – The Jacobian to operate on.

Returns

float

norm2_jtj(self, jtj)

Compute the Frobenius norm squared of a jtj-type matrix.

Parameters

jtj (numpy.ndarray or LocalNumpyArray) – The array to operate on.

Returns

float

fill_jtf(self, j, f, jtf)

Compute dot(Jacobian.T, f) in supplied memory.

Parameters
  • j (numpy.ndarray or LocalNumpyArray) – Jacobian matrix (type ep).

  • f (numpy.ndarray or LocalNumpyArray) – Objective function vector (type e).

  • jtf (numpy.ndarray or LocalNumpyArray) – Output array, type jtf. Filled with dot(j.T, f) values.

Returns

None

fill_jtj(self, j, jtj, shared_mem_buf=None)

Compute dot(Jacobian.T, Jacobian) in supplied memory.

Parameters
  • j (numpy.ndarray or LocalNumpyArray) – Jacobian matrix (type ep).

  • jtj (numpy.ndarray or LocalNumpyArray) – Output array, type jtj. Filled with dot(j.T, j) values.

  • shared_mem_buf (tuple or None) – Scratch space of shared memory used to speed up repeated calls to fill_jtj. If not None, the value returned from :method:`allocate_jtj_shared_mem_buf`.

Returns

None

allocate_jtj_shared_mem_buf(self)

Allocate scratch space to be used for repeated calls to :method:`fill_jtj`.

Returns

  • scratch (numpy.ndarray or None) – The scratch array.

  • shared_memory_handle (multiprocessing.shared_memory.SharedMemory or None) – The shared memory handle associated with scratch, which is needed to free the memory.

deallocate_jtj_shared_mem_buf(self, jtj_buf)

Frees the scratch memory allocated by :method:`allocate_jtj_shared_mem_buf`.

Parameters

jtj_buf (tuple or None) – The value returned from :method:`allocate_jtj_shared_mem_buf`

jtj_diag_indices(self, jtj)

The indices into a jtj-type array that correspond to diagonal elements of the global matrix.

If jtj were a global quantity, this would just be numpy.diag_indices_from(jtj); in practice, however, it may be more complicated when different processors hold different sections of the global matrix.

Parameters

jtj (numpy.ndarray or None) – The jtj-type array to get the indices with respect to.

Returns

tuple – A tuple of 1D arrays that can be used to index the elements of jtj that correspond to diagonal elements of the global jtj matrix.