pygsti.optimize.arraysinterface
Implements the ArraysInterface object and supporting functionality.
Module Contents
Classes
- ArraysInterface : An interface between pyGSTi's optimization methods and data storage arrays.
- UndistributedArraysInterface : An arrays interface for the case when the arrays are not actually distributed.
- DistributedArraysInterface : An arrays interface where the arrays are distributed according to a distributed layout.
- class pygsti.optimize.arraysinterface.ArraysInterface
Bases:
object
An interface between pyGSTi’s optimization methods and data storage arrays.
This class provides an abstract interface to algorithms (particularly the Levenberg-Marquardt nonlinear least-squares algorithm) for creating and manipulating potentially distributed data arrays with types such as “jtj” (Jacobian^T * Jacobian), “jtf” (Jacobian^T * objectivefn_vector), and “x” (model parameter vector). The class encapsulates all the operations on these arrays so that the algorithm doesn’t need to worry about how the arrays are actually stored in memory, e.g. whether shared memory is used or not.
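For orientation, these array types correspond to standard nonlinear least-squares quantities. A minimal numpy sketch (illustrative only, not pyGSTi code) of the quantities an ArraysInterface manages:

```python
import numpy as np

# Illustrative shapes: 6 objective-function "elements", 3 model parameters.
num_elements, num_params = 6, 3
rng = np.random.default_rng(0)

jac = rng.standard_normal((num_elements, num_params))  # "ep"-type Jacobian
f = rng.standard_normal(num_elements)                  # "e"-type objective vector
x = np.zeros(num_params)                               # "x"-type parameter vector

jtj = jac.T @ jac  # "jtj"-type: approximate Hessian, shape (num_params, num_params)
jtf = jac.T @ f    # "jtf"-type: gradient-like vector, shape (num_params,)
```

The interface's methods perform these operations (and the associated allocation, gathering, and scattering) without the caller needing to know how the arrays are laid out across processors.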
- class pygsti.optimize.arraysinterface.UndistributedArraysInterface(num_global_elements, num_global_params)
Bases:
ArraysInterface
An arrays interface for the case when the arrays are not actually distributed.
Parameters
- num_global_elements : int
The total number of objective function “elements”, i.e. the size of the objective function array f.
- num_global_params : int
The total number of (model) parameters, i.e. the size of the x array.
- num_global_elements
- num_global_params
- allocate_jtf()
Allocate an array for holding a ‘jtf’-type value.
Returns
numpy.ndarray or LocalNumpyArray
- allocate_jtj()
Allocate an array for holding an approximated Hessian (type ‘jtj’).
Returns
numpy.ndarray or LocalNumpyArray
- allocate_jac()
Allocate an array for holding a Jacobian matrix (type ‘ep’).
Returns
numpy.ndarray or LocalNumpyArray
- deallocate_jtf(jtf)
Free an array for holding an objective function value (type ‘jtf’).
Returns
None
- global_num_elements()
The total number of objective function “elements”.
This is the size/length of the objective function f vector.
Returns
int
- jac_param_slice(only_if_leader=False)
The slice into a Jacobian’s columns that belong to this processor.
Parameters
- only_if_leader : bool, optional
If True, the current processor’s parameter slice is only returned if the processor is the “leader” (i.e. the first) of the processors that calculate the same parameter slice. All non-leader processors return the zero-slice slice(0, 0).
Returns
slice
- jtf_param_slice()
The slice into a ‘jtf’ vector giving the rows owned by this processor.
Returns
slice
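In the undistributed case these slices simply cover the whole parameter range; in a distributed setting each rank owns a contiguous block. A hypothetical sketch of such a partitioning (this is not pyGSTi's actual layout logic, which lives in the layout objects):

```python
# Hypothetical even partitioning of num_params parameters over nprocs ranks;
# pyGSTi's distributed layouts compute the real slices internally.
def param_slice(rank, nprocs, num_params):
    counts = [num_params // nprocs + (1 if r < num_params % nprocs else 0)
              for r in range(nprocs)]
    start = sum(counts[:rank])
    return slice(start, start + counts[rank])

slices = [param_slice(r, 3, 10) for r in range(3)]
# slices -> [slice(0, 4), slice(4, 7), slice(7, 10)]
```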
- param_fine_info()
Returns information regarding how model parameters are distributed among hosts and processors.
This information relates to the “fine” distribution used in distributed layouts, and is needed by some algorithms which utilize shared-memory communication between processors on the same host.
Returns
- param_fine_slices_by_host : list
A list with one entry per host. Each entry is itself a list of (rank, (global_param_slice, host_param_slice)) elements where rank is the top-level overall rank of a processor, global_param_slice is the parameter slice that processor owns and host_param_slice is the same slice relative to the parameters owned by the host.
- owner_host_and_rank_of_global_fine_param_index : dict
A mapping between parameter indices (keys) and the owning processor rank and host index. Values are (host_index, processor_rank) tuples.
- allgather_x(x, global_x)
Gather a parameter (x) vector onto all the processors.
Parameters
- x : numpy.array or LocalNumpyArray
The input vector.
- global_x : numpy.array or LocalNumpyArray
The output (gathered) vector.
Returns
None
- allscatter_x(global_x, x)
Pare down an already-scattered global parameter (x) vector to be just a local x vector.
Parameters
- global_x : numpy.array or LocalNumpyArray
The input vector. This global vector is already present on all the processors, so there’s no need to do any MPI communication.
- x : numpy.array or LocalNumpyArray
The output vector, typically a slice of global_x.
Returns
None
- scatter_x(global_x, x)
Scatter a global parameter (x) vector onto all the processors.
Parameters
- global_x : numpy.array or LocalNumpyArray
The input vector.
- x : numpy.array or LocalNumpyArray
The output (scattered) vector.
Returns
None
- allgather_f(f, global_f)
Gather an objective function (f) vector onto all the processors.
Parameters
- f : numpy.array or LocalNumpyArray
The input vector.
- global_f : numpy.array or LocalNumpyArray
The output (gathered) vector.
Returns
None
- gather_jtj(jtj, return_shared=False)
Gather a Hessian (jtj) matrix onto the root processor.
Parameters
- jtj : numpy.array or LocalNumpyArray
The (local) input matrix to gather.
- return_shared : bool, optional
Whether the returned array is allowed to be a shared-memory array, which results in a small performance gain because the array used internally to gather the results can be returned directly. When True a shared memory handle is also returned, and the caller assumes responsibility for freeing the memory via pygsti.tools.sharedmemtools.cleanup_shared_ndarray().
Returns
- gathered_array : numpy.ndarray or None
The full (global) output array on the root (rank=0) processor and None on all other processors.
- shared_memory_handle : multiprocessing.shared_memory.SharedMemory or None
Returned only when return_shared == True. The shared memory handle associated with gathered_array, which is needed to free the memory.
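The effect of the gather, ignoring MPI mechanics, is to reassemble the blocks held by each processor into one global matrix on the root rank. An illustrative numpy sketch (the block-row decomposition here is hypothetical; actual layouts may partition differently):

```python
import numpy as np

# Suppose each of two "processors" owns a block of rows of the global jtj matrix.
global_jtj = np.arange(16, dtype=float).reshape(4, 4)
local_blocks = [global_jtj[0:2, :], global_jtj[2:4, :]]  # what ranks 0 and 1 hold

# gather_jtj reassembles the full matrix on the root processor:
gathered = np.concatenate(local_blocks, axis=0)
```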
- scatter_jtj(global_jtj, jtj)
Scatter a Hessian (jtj) matrix onto all the processors.
Parameters
- global_jtj : numpy.ndarray
The global Hessian matrix to scatter.
- jtj : numpy.ndarray or LocalNumpyArray
The local destination array.
Returns
None
- gather_jtf(jtf, return_shared=False)
Gather a jtf vector onto the root processor.
Parameters
- jtf : numpy.array or LocalNumpyArray
The local input vector to gather.
- return_shared : bool, optional
Whether the returned array is allowed to be a shared-memory array, which results in a small performance gain because the array used internally to gather the results can be returned directly. When True a shared memory handle is also returned, and the caller assumes responsibility for freeing the memory via pygsti.tools.sharedmemtools.cleanup_shared_ndarray().
Returns
- gathered_array : numpy.ndarray or None
The full (global) output array on the root (rank=0) processor and None on all other processors.
- shared_memory_handle : multiprocessing.shared_memory.SharedMemory or None
Returned only when return_shared == True. The shared memory handle associated with gathered_array, which is needed to free the memory.
- scatter_jtf(global_jtf, jtf)
Scatter a jtf vector onto all the processors.
Parameters
- global_jtf : numpy.ndarray
The global vector to scatter.
- jtf : numpy.ndarray or LocalNumpyArray
The local destination array.
Returns
None
- global_svd_dot(jac_v, minus_jtf)
Gathers the dot product between a jtj-type matrix and a jtf-type vector into a global result array.
This is typically used within SVD-defined basis calculations, where jac_v is the “V” matrix of the SVD of a Jacobian, and minus_jtf is the negative dot product between the Jacobian matrix and objective function vector.
Parameters
- jac_v : numpy.ndarray or LocalNumpyArray
An array of jtj-type.
- minus_jtf : numpy.ndarray or LocalNumpyArray
An array of jtf-type.
Returns
- numpy.ndarray
The global (gathered) parameter vector dot(jac_v.T, minus_jtf).
- fill_dx_svd(jac_v, global_vec, dx)
Computes the dot product of a jtj-type array with a global parameter array.
The result (dx) is a jtf-type array. This is typically used for computing the x-update vector in the LM method when using a SVD-defined basis.
Parameters
- jac_v : numpy.ndarray or LocalNumpyArray
An array of jtj-type.
- global_vec : numpy.ndarray
A global parameter vector.
- dx : numpy.ndarray or LocalNumpyArray
An array of jtf-type. Filled with dot(jac_v, global_vec) values.
Returns
None
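Together, global_svd_dot and fill_dx_svd project into and back out of an SVD-defined basis. An illustrative numpy sketch of the underlying linear algebra (not pyGSTi code; in practice the LM solve happens between the two steps):

```python
import numpy as np

rng = np.random.default_rng(1)
jac = rng.standard_normal((6, 3))   # "ep"-type Jacobian
f = rng.standard_normal(6)          # "e"-type objective vector

# SVD-defined basis: jac = U @ diag(s) @ Vh, so the "V" matrix is Vh.T.
U, s, Vh = np.linalg.svd(jac, full_matrices=False)
jac_v = Vh.T
minus_jtf = -jac.T @ f

# global_svd_dot semantics: project -J^T f onto the SVD basis.
global_vec = jac_v.T @ minus_jtf    # dot(jac_v.T, minus_jtf)
# fill_dx_svd semantics: map a basis-coefficient vector back to parameter space.
dx = jac_v @ global_vec             # dot(jac_v, global_vec)
```

Because jac_v has orthonormal columns here, projecting and mapping back recovers the original vector, which makes the semantics easy to check.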
- dot_x(x1, x2)
Take the dot product of two x-type vectors.
Parameters
- x1, x2 : numpy.ndarray or LocalNumpyArray
The vectors to operate on.
Returns
float
- norm2_x(x)
Compute the Frobenius norm squared of an x-type vector.
Parameters
- x : numpy.ndarray or LocalNumpyArray
The vector to operate on.
Returns
float
- infnorm_x(x)
Compute the infinity-norm of an x-type vector.
Parameters
- x : numpy.ndarray or LocalNumpyArray
The vector to operate on.
Returns
float
- max_x(x)
Compute the maximum of an x-type vector.
Parameters
- x : numpy.ndarray or LocalNumpyArray
The vector to operate on.
Returns
float
- norm2_f(f)
Compute the Frobenius norm squared of an f-type vector.
Parameters
- f : numpy.ndarray or LocalNumpyArray
The vector to operate on.
Returns
float
- norm2_jtj(jtj)
Compute the Frobenius norm squared of a jtj-type matrix.
Parameters
- jtj : numpy.ndarray or LocalNumpyArray
The array to operate on.
Returns
float
- norm2_jac(j)
Compute the Frobenius norm squared of a Jacobian matrix (ep-type).
Parameters
- j : numpy.ndarray or LocalNumpyArray
The Jacobian to operate on.
Returns
float
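Illustrative numpy equivalents of these reductions (in the distributed case each additionally involves an MPI reduction across the processors holding pieces of the vector):

```python
import numpy as np

x1 = np.array([1.0, 2.0, 3.0])
x2 = np.array([4.0, 5.0, 6.0])

dot = float(x1 @ x2)                 # dot_x semantics: 1*4 + 2*5 + 3*6 = 32.0
norm2 = float(x1 @ x1)               # norm2_x semantics: squared 2-norm = 14.0
infnorm = float(np.max(np.abs(x1)))  # infnorm_x semantics: 3.0
```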
- fill_jtf(j, f, jtf)
Compute dot(Jacobian.T, f) in supplied memory.
Parameters
- j : numpy.ndarray or LocalNumpyArray
Jacobian matrix (type ep).
- f : numpy.ndarray or LocalNumpyArray
Objective function vector (type e).
- jtf : numpy.ndarray or LocalNumpyArray
Output array, type jtf. Filled with dot(j.T, f) values.
Returns
None
- fill_jtj(j, jtj, shared_mem_buf=None)
Compute dot(Jacobian.T, Jacobian) in supplied memory.
Parameters
- j : numpy.ndarray or LocalNumpyArray
Jacobian matrix (type ep).
- jtj : numpy.ndarray or LocalNumpyArray
Output array, type jtj. Filled with dot(j.T, j) values.
- shared_mem_buf : tuple or None
Scratch space of shared memory used to speed up repeated calls to fill_jtj. If not None, the value returned from allocate_jtj_shared_mem_buf().
Returns
None
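The distinguishing feature of fill_jtf and fill_jtj is that they compute into pre-allocated memory rather than allocating new arrays, which matters inside a tight LM loop. An illustrative numpy sketch of the same pattern:

```python
import numpy as np

rng = np.random.default_rng(2)
j = rng.standard_normal((5, 3))  # "ep"-type Jacobian
f = rng.standard_normal(5)       # "e"-type objective vector

# Pre-allocated outputs, as allocate_jtf / allocate_jtj would provide:
jtf = np.empty(3)
jtj = np.empty((3, 3))

# fill_jtf / fill_jtj semantics: write the products into the supplied memory.
np.dot(j.T, f, out=jtf)
np.dot(j.T, j, out=jtj)
```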
- allocate_jtj_shared_mem_buf()
Allocate scratch space to be used for repeated calls to fill_jtj().
Returns
- scratch : numpy.ndarray or None
The scratch array.
- shared_memory_handle : multiprocessing.shared_memory.SharedMemory or None
The shared memory handle associated with scratch, which is needed to free the memory.
- deallocate_jtj_shared_mem_buf(jtj_buf)
Frees the scratch memory allocated by allocate_jtj_shared_mem_buf().
Parameters
- jtj_buf : tuple or None
The value returned from allocate_jtj_shared_mem_buf()
- jtj_diag_indices(jtj)
The indices into a jtj-type array that correspond to diagonal elements of the global matrix.
If jtj were a global quantity, then this would just be numpy.diag_indices_from(jtj), however, it may be more complicated in actuality when different processors hold different sections of the global matrix.
Parameters
- jtj : numpy.ndarray or None
The jtj-type array to get the indices with respect to.
Returns
- tuple
A tuple of 1D arrays that can be used to index the elements of jtj that correspond to diagonal elements of the global jtj matrix.
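A typical use of these indices is Levenberg-Marquardt damping, which modifies only the diagonal of the (possibly distributed) jtj matrix. An illustrative sketch for the non-distributed case, where the indices reduce to numpy.diag_indices_from:

```python
import numpy as np

jtj = np.array([[4.0, 1.0],
                [1.0, 9.0]])
mu = 0.5  # hypothetical LM damping factor

# For a non-distributed jtj, jtj_diag_indices reduces to diag_indices_from:
idx = np.diag_indices_from(jtj)
jtj[idx] += mu * jtj[idx]  # multiplicative LM damping: scale the diagonal
```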
- class pygsti.optimize.arraysinterface.DistributedArraysInterface(dist_layout, lsvec_mode, extra_elements=0)
Bases:
ArraysInterface
An arrays interface where the arrays are distributed according to a distributed layout.
Parameters
- dist_layout : DistributableCOPALayout
The layout giving the distribution of the arrays.
- extra_elements : int, optional
The number of additional objective function “elements” beyond those specified by dist_layout. These are often used for penalty terms.
- layout
- resource_alloc
- extra_elements
- lsvec_mode
- allocate_jtf()
Allocate an array for holding a ‘jtf’-type value.
Returns
numpy.ndarray or LocalNumpyArray
- allocate_jtj()
Allocate an array for holding an approximated Hessian (type ‘jtj’).
Returns
numpy.ndarray or LocalNumpyArray
- allocate_jac()
Allocate an array for holding a Jacobian matrix (type ‘ep’).
Returns
numpy.ndarray or LocalNumpyArray
- deallocate_jtf(jtf)
Free an array for holding an objective function value (type ‘jtf’).
Returns
None
- global_num_elements()
The total number of objective function “elements”.
This is the size/length of the objective function f vector.
Returns
int
- jac_param_slice(only_if_leader=False)
The slice into a Jacobian’s columns that belong to this processor.
Parameters
- only_if_leader : bool, optional
If True, the current processor’s parameter slice is only returned if the processor is the “leader” (i.e. the first) of the processors that calculate the same parameter slice. All non-leader processors return the zero-slice slice(0, 0).
Returns
slice
- jtf_param_slice()
The slice into a ‘jtf’ vector giving the rows owned by this processor.
Returns
slice
- param_fine_info()
Returns information regarding how model parameters are distributed among hosts and processors.
This information relates to the “fine” distribution used in distributed layouts, and is needed by some algorithms which utilize shared-memory communication between processors on the same host.
Returns
- param_fine_slices_by_host : list
A list with one entry per host. Each entry is itself a list of (rank, (global_param_slice, host_param_slice)) elements where rank is the top-level overall rank of a processor, global_param_slice is the parameter slice that processor owns and host_param_slice is the same slice relative to the parameters owned by the host.
- owner_host_and_rank_of_global_fine_param_index : dict
A mapping between parameter indices (keys) and the owning processor rank and host index. Values are (host_index, processor_rank) tuples.
- allgather_x(x, global_x)
Gather a parameter (x) vector onto all the processors.
Parameters
- x : numpy.array or LocalNumpyArray
The input vector.
- global_x : numpy.array or LocalNumpyArray
The output (gathered) vector.
Returns
None
- allscatter_x(global_x, x)
Pare down an already-scattered global parameter (x) vector to be just a local x vector.
Parameters
- global_x : numpy.array or LocalNumpyArray
The input vector. This global vector is already present on all the processors, so there’s no need to do any MPI communication.
- x : numpy.array or LocalNumpyArray
The output vector, typically a slice of global_x.
Returns
None
- scatter_x(global_x, x)
Scatter a global parameter (x) vector onto all the processors.
Parameters
- global_x : numpy.array or LocalNumpyArray
The input vector.
- x : numpy.array or LocalNumpyArray
The output (scattered) vector.
Returns
None
- allgather_f(f, global_f)
Gather an objective function (f) vector onto all the processors.
Parameters
- f : numpy.array or LocalNumpyArray
The input vector.
- global_f : numpy.array or LocalNumpyArray
The output (gathered) vector.
Returns
None
- gather_jtj(jtj, return_shared=False)
Gather a Hessian (jtj) matrix onto the root processor.
Parameters
- jtj : numpy.array or LocalNumpyArray
The (local) input matrix to gather.
- return_shared : bool, optional
Whether the returned array is allowed to be a shared-memory array, which results in a small performance gain because the array used internally to gather the results can be returned directly. When True a shared memory handle is also returned, and the caller assumes responsibility for freeing the memory via pygsti.tools.sharedmemtools.cleanup_shared_ndarray().
Returns
- gathered_array : numpy.ndarray or None
The full (global) output array on the root (rank=0) processor and None on all other processors.
- shared_memory_handle : multiprocessing.shared_memory.SharedMemory or None
Returned only when return_shared == True. The shared memory handle associated with gathered_array, which is needed to free the memory.
- scatter_jtj(global_jtj, jtj)
Scatter a Hessian (jtj) matrix onto all the processors.
Parameters
- global_jtj : numpy.ndarray
The global Hessian matrix to scatter.
- jtj : numpy.ndarray or LocalNumpyArray
The local destination array.
Returns
None
- gather_jtf(jtf, return_shared=False)
Gather a jtf vector onto the root processor.
Parameters
- jtf : numpy.array or LocalNumpyArray
The local input vector to gather.
- return_shared : bool, optional
Whether the returned array is allowed to be a shared-memory array, which results in a small performance gain because the array used internally to gather the results can be returned directly. When True a shared memory handle is also returned, and the caller assumes responsibility for freeing the memory via pygsti.tools.sharedmemtools.cleanup_shared_ndarray().
Returns
- gathered_array : numpy.ndarray or None
The full (global) output array on the root (rank=0) processor and None on all other processors.
- shared_memory_handle : multiprocessing.shared_memory.SharedMemory or None
Returned only when return_shared == True. The shared memory handle associated with gathered_array, which is needed to free the memory.
- scatter_jtf(global_jtf, jtf)
Scatter a jtf vector onto all the processors.
Parameters
- global_jtf : numpy.ndarray
The global vector to scatter.
- jtf : numpy.ndarray or LocalNumpyArray
The local destination array.
Returns
None
- global_svd_dot(jac_v, minus_jtf)
Gathers the dot product between a jtj-type matrix and a jtf-type vector into a global result array.
This is typically used within SVD-defined basis calculations, where jac_v is the “V” matrix of the SVD of a Jacobian, and minus_jtf is the negative dot product between the Jacobian matrix and objective function vector.
Parameters
- jac_v : numpy.ndarray or LocalNumpyArray
An array of jtj-type.
- minus_jtf : numpy.ndarray or LocalNumpyArray
An array of jtf-type.
Returns
- numpy.ndarray
The global (gathered) parameter vector dot(jac_v.T, minus_jtf).
- fill_dx_svd(jac_v, global_vec, dx)
Computes the dot product of a jtj-type array with a global parameter array.
The result (dx) is a jtf-type array. This is typically used for computing the x-update vector in the LM method when using a SVD-defined basis.
Parameters
- jac_v : numpy.ndarray or LocalNumpyArray
An array of jtj-type.
- global_vec : numpy.ndarray
A global parameter vector.
- dx : numpy.ndarray or LocalNumpyArray
An array of jtf-type. Filled with dot(jac_v, global_vec) values.
Returns
None
- dot_x(x1, x2)
Take the dot product of two x-type vectors.
Parameters
- x1, x2 : numpy.ndarray or LocalNumpyArray
The vectors to operate on.
Returns
float
- norm2_x(x)
Compute the Frobenius norm squared of an x-type vector.
Parameters
- x : numpy.ndarray or LocalNumpyArray
The vector to operate on.
Returns
float
- infnorm_x(x)
Compute the infinity-norm of an x-type vector.
Parameters
- x : numpy.ndarray or LocalNumpyArray
The vector to operate on.
Returns
float
- min_x(x)
Compute the minimum of an x-type vector.
Parameters
- x : numpy.ndarray or LocalNumpyArray
The vector to operate on.
Returns
float
- max_x(x)
Compute the maximum of an x-type vector.
Parameters
- x : numpy.ndarray or LocalNumpyArray
The vector to operate on.
Returns
float
- norm2_f(f)
Compute the Frobenius norm squared of an f-type vector.
Parameters
- f : numpy.ndarray or LocalNumpyArray
The vector to operate on.
Returns
float
- norm2_jac(j)
Compute the Frobenius norm squared of a Jacobian matrix (ep-type).
Parameters
- j : numpy.ndarray or LocalNumpyArray
The Jacobian to operate on.
Returns
float
- norm2_jtj(jtj)
Compute the Frobenius norm squared of a jtj-type matrix.
Parameters
- jtj : numpy.ndarray or LocalNumpyArray
The array to operate on.
Returns
float
- fill_jtf(j, f, jtf)
Compute dot(Jacobian.T, f) in supplied memory.
Parameters
- j : numpy.ndarray or LocalNumpyArray
Jacobian matrix (type ep).
- f : numpy.ndarray or LocalNumpyArray
Objective function vector (type e).
- jtf : numpy.ndarray or LocalNumpyArray
Output array, type jtf. Filled with dot(j.T, f) values.
Returns
None
- fill_jtj(j, jtj, shared_mem_buf=None)
Compute dot(Jacobian.T, Jacobian) in supplied memory.
Parameters
- j : numpy.ndarray or LocalNumpyArray
Jacobian matrix (type ep).
- jtj : numpy.ndarray or LocalNumpyArray
Output array, type jtj. Filled with dot(j.T, j) values.
- shared_mem_buf : tuple or None
Scratch space of shared memory used to speed up repeated calls to fill_jtj. If not None, the value returned from allocate_jtj_shared_mem_buf().
Returns
None
- allocate_jtj_shared_mem_buf()
Allocate scratch space to be used for repeated calls to fill_jtj().
Returns
- scratch : numpy.ndarray or None
The scratch array.
- shared_memory_handle : multiprocessing.shared_memory.SharedMemory or None
The shared memory handle associated with scratch, which is needed to free the memory.
- deallocate_jtj_shared_mem_buf(jtj_buf)
Frees the scratch memory allocated by allocate_jtj_shared_mem_buf().
Parameters
- jtj_buf : tuple or None
The value returned from allocate_jtj_shared_mem_buf()
- jtj_diag_indices(jtj)
The indices into a jtj-type array that correspond to diagonal elements of the global matrix.
If jtj were a global quantity, then this would just be numpy.diag_indices_from(jtj), however, it may be more complicated in actuality when different processors hold different sections of the global matrix.
Parameters
- jtj : numpy.ndarray or None
The jtj-type array to get the indices with respect to.
Returns
- tuple
A tuple of 1D arrays that can be used to index the elements of jtj that correspond to diagonal elements of the global jtj matrix.