pygsti
A Python implementation of Gate Set Tomography
Subpackages
pygsti.algorithms
pygsti.baseobjs
pygsti.circuits
pygsti.data
pygsti.drivers
pygsti.evotypes
pygsti.extras
pygsti.forwardsims
pygsti.io
pygsti.layouts
pygsti.modelmembers
pygsti.modelpacks
pygsti.models
pygsti.objectivefns
pygsti.optimize
pygsti.processors
pygsti.protocols
pygsti.report
pygsti.serialization
pygsti.tools
Package Contents
Classes
A basis that is included within and integrated into pyGSTi. 

Class responsible for logging things to stdout or a file. 

A basis that is the direct sum of one or more "component" bases. 

A calculator of circuit outcome probabilities and their derivatives w.r.t. model parameters. 

A dictionary that also holds category names and types. 

A dictionary that holds per-key type information. 
Functions

Contract a Model to a specified space. 

Performs Linear-inversion Gate Set Tomography on the dataset. 

Returns the rank and singular values of the Gram matrix for a dataset. 

Performs core Gate Set Tomography function of model optimization. 

Performs core Gate Set Tomography function of model optimization. 

Performs Iterative Gate Set Tomography on the dataset. 

Performs Iterative Gate Set Tomography on the dataset. 

Find the closest (in fidelity) unitary superoperator to operation_mx. 

Optimize the gauge degrees of freedom of a model to that of a target. 

Optimize the gauge of a model using a custom objective function. 

Compute a maximal set of basis circuits for a Gram matrix. 

Compute the rank and singular values of a maximal Gram matrix. 
Get the linear operator on (vectorized) density matrices corresponding to an n-qubit unitary operator on states. 


Construct the single-qubit operation matrix. 

Construct the single-qubit operation matrix. 

Creates a DataSet used for generating bootstrapped error bars. 

Creates a series of "bootstrapped" Models. 

Optimizes the "spam weight" parameter used when gauge optimizing a set of models. 



Compares a 

Perform Linear Gate Set Tomography (LGST). 

Perform long-sequence GST (LSGST). 

A more fundamental interface for performing end-to-end GST. 

Perform end-to-end GST analysis using standard practices. 

Apply a function, f, to every element of a list, l, in parallel, using MPI. 
Get a comm object 




Get the elements of the specified basis type which span the density-matrix space given by dim. 

Get the "long name" for a particular basis, which is typically used in reports, etc. 

Get a list of short labels corresponding to the elements of the described basis. 

Whether a basis contains sparse matrices. 

Convert an operation matrix from one basis of a density matrix space to another. 

Constructs bases for transforming mx between two basis names. 

Construct a Basis object with type given by basis and dimension appropriate for transforming mx. 

Change the basis of mx to a potentially larger or smaller 'std'-type basis given by std_basis_2. 

Change mx from start_basis to end_basis allowing embedding expansion and contraction if needed. 

Wrapper for 

Convert a state vector into a density matrix. 

Convert a single qubit state vector into a Liouville vector in the Pauli basis. 

Convert a vector in this basis to a matrix in the standard basis. 

Convert a matrix in the standard basis to a vector in the Pauli basis. 

Computes the total (aggregate) chi^2 for a set of circuits. 

Computes the per-circuit chi^2 contributions for a set of circuits. 

Compute the gradient of the chi^2 function computed by 

Compute the Hessian matrix of the 

Compute an approximate Hessian matrix of the 

Compute the chi-alpha objective function. 

Compute the per-circuit chi-alpha objective function. 

Computes chi^2 for a 2-outcome measurement. 

Computes chi^2 for a 2-outcome measurement using frequency-weighting. 

Computes the chi^2 term corresponding to a single outcome. 

Computes the frequency-weighted chi^2 term corresponding to a single outcome. 

Estimate the runtime for an ExperimentDesign from gate times and batch sizes. 

Helper function to calculate all Fisher information terms for each circuit. 

Calculate the Fisher information matrix for a set of circuits and a model. 

Calculate a set of Fisher information matrices for a set of circuits grouped by iteration. 





Utility to explicitly pad out ExperimentDesigns with idle lines. 

Calculates the standard Bonferroni correction. 

Sidak correction. 

Generalized Bonferroni correction. 

Given an operation matrix, return the corresponding Choi matrix that is normalized to have trace == 1. 

Given a Choi matrix, return the corresponding operation matrix. 

The corresponding Choi matrix in the standard basis that is normalized to have trace == 1. 

Given a Choi matrix in the standard basis, return the corresponding operation matrix. 

Compute the amount of non-CP-ness of a model. 
Compute the amount of non-CP-ness of a model. 

Compute the magnitudes of the negative eigenvalues of the Choi matrices for each gate in model. 


Formats and prints a deprecation warning message. 

Decorator for deprecating a function. 

Utility to deprecate imports from a module. 

The log-likelihood function. 

Computes the per-circuit log-likelihood contribution for a set of circuits. 

The Jacobian of the log-likelihood function. 

The Hessian of the log-likelihood function. 

An approximate Hessian of the log-likelihood function. 

The maximum log-likelihood possible for a DataSet. 

The vector of maximum log-likelihood contributions for each circuit, aggregated over outcomes. 

See docstring for 

Twice the difference between the maximum and actual log-likelihood. 

Twice the per-circuit difference between the maximum and actual log-likelihood. 

Term of the 2*[log(L)-upper-bound - log(L)] sum corresponding to a single circuit and spam label. 

Construct a "dual" elementary error generator matrix in the "standard" (matrix-unit) basis. 

Construct an elementary error generator as a matrix in the "standard" (matrix-unit) basis. 

Construct the superoperator for a term in the common Lindbladian expansion of an error generator. 

Remove duplicates from the list passed as an argument. 

Remove duplicates from a list and return the result. 
A 0-based list of integers specifying which occurrence, i.e. enumerated duplicate, each list item is. 


Replace elements of t according to rules in alias_dict. 

Applies 

Applies alias_dict to the circuits in list_of_circuits. 
Iterate over all sorted (decreasing) partitions of integer n. 


Iterate over all partitions of integer n. 

Iterate over all partitions of integer n into nbins bins. 

Like itertools.product but returns the first modified (incremented) index along with the product tuple itself. 

Recursively replaces lists with tuples. 

Returns the product over the integers modulo 2 of two matrices. 

Returns the product over the integers modulo 2 of a list of matrices. 

Returns the determinant of a matrix over the integers modulo 2 (GL(n,2)). 

Returns the direct sum of two square matrices of integers. 

Finds the inverse of a matrix over GL(n,2) 

Solves Ax = b over GF(2) 
Gaussian elimination mod2 of a. 

Returns a 1D array containing the diagonal of the input square 2D array m. 

Returns a matrix containing the strictly upper triangle of m and zeros elsewhere. 

Returns a diagonal matrix containing the diagonal of m. 


Returns a matrix M such that d = M M.T for symmetric d, where d and M are matrices over [0,1] mod 2. 

Constructs a random bitstring of length n with parity p 

Finds a random invertible matrix M over GL(n,2) 

Creates a random, symmetric, invertible matrix from GL(n,2) 

Returns M such that M a M.T has ones along the main diagonal 

Permutes the first row & col with the i'th row & col 

Computes the permutation matrix P such that the [1:t,1:t] submatrix of P a P is invertible. 
Computes the permutation matrix P such that all [n:t,n:t] submatrices of P a P are invertible. 


Test whether mx is a Hermitian matrix. 

Test whether mx is a positive-definite matrix. 

Test whether mx is a valid density matrix (Hermitian, positive-definite, and unit trace). 

Compute the nullspace of a matrix. 

Compute the nullspace of a matrix using the QR decomposition. 

Computes the nullspace of a matrix, and tries to return a "nice" basis for it. 

Normalizes the columns of a matrix. 

Compute the norms of the columns of a matrix. 

Scale each column of a matrix by a given value. 

Checks whether a matrix contains orthogonal columns. 

Checks whether a matrix contains orthogonal columns. 

Computes the indices of the linearly-independent columns in a matrix. 
TODO: docstring 


The "sign" matrix of m 

Print matrix in pretty format. 

Generate a "prettyformat" string for a matrix. 

Generate a "prettyformat" string for a complexvalued matrix. 

Construct the logarithm of superoperator matrix m. 

Construct the logarithm of superoperator matrix m that is near the identity. 

Construct an approximate logarithm of superoperator matrix m that is real and near the target_logm. 

Construct a real logarithm of real matrix m. 

Returns the ith standard basis vector in dimension dim. 

Stacks the columns of a matrix to return a vector 

Slices a vector into the columns of a matrix. 

Returns the 1 norm of a matrix 

Generates a random Hermitian matrix 

The Hermitian 1to1 norm of a superoperator represented in the standard basis. 

Comparison function for complex numbers that compares real part, then imaginary part. 
GCD algorithm to produce prime factors of n 


Matches the elements of two vectors, a and b by minimizing the weight between them. 

Matches the elements of a and b, whose elements are assumed to be either real or one-half of a conjugate pair. 

Performs dot(a,b) correctly when neither, either, or both arguments are sparse matrices. 

Get the real part of a, where a can be either a dense array or a sparse matrix. 

Get the imaginary part of a, where a can be either a dense array or a sparse matrix. 

Get the Frobenius norm of a matrix or vector a, when it is either a dense array or a sparse matrix. 

Computes the 1norm of the dense or sparse matrix a. 

Precomputes the indices needed to sum a set of CSR sparse matrices. 

Accelerated summation of several CSR-format sparse matrices. 

Precomputes quantities allowing fast computation of linear combinations of CSR sparse matrices. 

Computation of the summation of several CSR-format sparse matrices. 

Computes "prepared" metainfo about matrix a, to be used in expm_multiply_fast. 

Multiplies v by an exponentiated matrix. 

Returns "prepared" metainfo about operation op, which is assumed to be traceless (so no shift is needed). 

Checks whether two Scipy sparse matrices are (almost) equal. 
Computes the 1norm of the scipy sparse matrix a. 


Get the base memory object for numpy array a. 

Compute the scaling factor required to turn a scalar multiple of a unitary matrix to a unitary matrix. 

Similar to numpy.eig, but returns sorted output. 

Computes the "kite" corresponding to a list of eigenvalues. 

Find a matrix R such that u_inv R u0 is diagonal AND log(R) has no projection onto the commutant of G0. 

Project mx onto kite, so mx is zero everywhere except on the kite. 

Project mx onto the complement of kite, so mx is zero everywhere on the kite. 

Removes the linearly dependent columns of a matrix. 

TODO: docstring 

TODO: docstring 

TODO: docstring 

Construct the dense operator or superoperator representation of a computational basis state. 

Compute the parity of x. 

Fills a dense array with the superket representation of a computational basis state. 

Change the signs of the columns of Q and rows of R to follow a convention. 

Returns the quantum state fidelity between density matrices. 

Returns the Frobenius distance between arrays: ||a - b||_Fro. 

Returns the square of the Frobenius distance between arrays: (||a - b||_Fro)^2. 

Calculate residuals between the elements of two matrices 

Compute the trace norm of matrix a, given by Tr(sqrt(a^dagger a)). 

Compute the trace distance between matrices. 

Returns the approximate diamond norm describing the difference between gate matrices. 

Compute the Jamiolkowski trace distance between operation matrices. 

Returns the "entanglement" process fidelity between gate matrices. 

Computes the average gate fidelity (AGF) between two gates. 

Computes the average gate infidelity (AGI) between two gates. 

Returns the entanglement infidelity (EI) between gate matrices. 

Computes the averageovergates of the infidelity between gates in model and the gates in target_model. 

Returns the "unitarity" of a channel. 

Get an upper bound on the fidelity of the given operation matrix with any unitary operation matrix. 

Constructs a gatelike quantity for the POVM within model. 

Computes the process (entanglement) fidelity between POVM maps. 

Computes the Jamiolkowski trace distance between POVM maps using 

Computes the diamond distance between POVM maps using 

Decompose a gate matrix into fixed points, axes of rotation, angles of rotation, and decay rates. 

Compute the vectorized density matrix which acts as the state psi. 

Compute the pure state describing the action of density matrix vector dmvec. 

TODO: docstring 
Compute the superoperator corresponding to unitary matrix u. 


TODO: docstring 

TODO: docstring 



Compute the unitary corresponding to the (unitaryaction!) superoperator superop. 

Construct an error generator from a SPAM vector and its target. 

Construct the error generator from a gate and its target. 

Construct a gate from an error generator and a target gate. 

Compute the elementary error generators of a certain type. 

Compute the set of dual-to-elementary error generators of a given type. 

TODO: docstring 

Compute the projections of a gate error generator onto a set of elementary error generators. 

TODO: docstring - labels can be, e.g. ('H', 'XX') and basis should be a 1-qubit basis w/ single-char labels 

TODO: docstring - labels can be, e.g. ('H', 'XX') and basis should be a 1-qubit basis w/ single-char labels 

Construct a rotation operation matrix. 

Construct a new model(s) by projecting the error generator of model onto some subspace then reconstructing. 

Returns a gauge transformation that maps gate_mx into a matrix that is co-diagonal with target_gate_mx. 

Project each gate of model onto the eigenspace of the corresponding gate within target_model. 
Get the linear operator on (vectorized) density matrices corresponding to an n-qubit unitary operator on states. 

Whether typ is a recognized Lindblad-gate parameterization type. 


Extract the outcome label from a "simplified" effect label. 

Extract the POVM label from a "simplified" effect label. 

Construct the single-qubit operation matrix. 

Construct the single-qubit operation matrix. 

Decorator for caching a function's values. 

Context manager that times a block of code. 
Get a string version of the current time. 


Calculates the total variation distance between two probability distributions. 

Calculates the (classical) fidelity between two probability distributions. 

Predicts the RB error rate from a model. 

Computes the second largest eigenvalue of the 'L matrix' (see the L_matrix function). 

Computes the gauge transformation required so that the RB number matches the average model infidelity. 

Transforms a Model into the "RB gauge" (see the RB_gauge function). 

Constructs a generalization of the 'L-matrix' linear operator on superoperators. 

Returns the second largest eigenvalue of a generalization of the 'R-matrix' [see the R_matrix function]. 

Constructs a generalization of the 'R-matrix' of Proctor et al., Phys. Rev. Lett. 119, 130502 (2017). 

Computes the 'left-multiplied' error maps associated with a noisy gate set, along with the average error map. 

Computes the "gatedependence of errors maps" parameter defined by 

Returns the length (the number of indices) contained in a slice. 

Returns a new slice whose start and stop points are shifted by offset. 

Returns the intersection of two slices (which must have the same step). 

Returns the intersection of two slices (which must have the same step). 

Returns a list of the indices specified by slice s. 

Returns a slice corresponding to a given list of (integer) indices, if this is possible. 

Returns slc_or_list_like as an index array (an integer numpy.ndarray). 

Divides a slice into subslices based on a maximum length (for each subslice). 

A slice that is the composition of base_slc and slc. 



Decorator for applying a smart cache to a single function or method. 

Creates the symplectic form for the number of qubits specified. 

Maps the input symplectic matrix between the 'standard' and 'direct-sum' symplectic form conventions. 

Checks whether a matrix is symplectic. 
Returns the inverse of a symplectic matrix over the integers mod 2. 


Returns the inverse of a Clifford gate in the symplectic representation. 

Checks if a symplectic matrix - phase vector pair (s,p) is the symplectic representation of a Clifford. 

Constructs a phase vector that, when paired with the provided symplectic matrix, defines a Clifford gate. 

Finds the Pauli layer that should be appended to a circuit to implement a given Clifford. 

Finds the Pauli layer that should be prepended to a circuit to implement a given Clifford. 

TODO: docstring 

TODO: docstring 

Multiplies two Cliffords in the symplectic representation. 

Takes a Kronecker product of symplectic representations. 

Construct the (s,p) stabilizer representation for a computational basis state given by zvals. 

Applies a Clifford in the symplectic representation to a stabilizer state in the standard stabilizer representation. 

Computes the probabilities of 0/1 (+/-) outcomes from measuring a Pauli operator on a stabilizer state. 

A helper routine used for manipulating stabilizer state representations. 

A helper routine used for manipulating stabilizer state representations. 

Compute the probability of a given outcome when measuring some or all of the qubits in a stabilizer state. 

Embeds the (s,p) Clifford symplectic representation into a larger symplectic representation. 
Creates a dictionary of the symplectic representations of 'standard' Clifford gates. 


Returns the symplectic representation of the composite Clifford implemented by the specified Clifford circuit. 

Constructs the symplectic representation of the n-qubit Clifford implemented by a single quantum circuit layer. 
Gives the group relationship between the 'I', 'H', 'P', 'HP', 'PH', and 'HPH' up-to-Paulis operators. 


Returns True if the unitary is a Clifford gate (w.r.t the standard basis), and False otherwise. 

Returns the symplectic representation of a one-qubit or two-qubit Clifford unitary. 

Returns a symplectic matrix of dimensions 2n x 2n sampled uniformly at random from the symplectic group S(n). 

Returns a Clifford, in the symplectic representation, sampled uniformly at random from the n-qubit Clifford group. 

Generates a uniformly random phase vector for an n-qubit Clifford. 
Get the bitstring corresponding to a Pauli. 


Applies a Clifford gate to the n-qubit Clifford gate specified by the 2n x 2n symplectic matrix. 
The number of Clifford gates in the n-qubit Clifford group. 

The number of elements in the symplectic group S(n) over the 2-element finite field. 

Returns the number of different cosets for the symplectic group S(n) over the 2-element finite field. 


Returns the symplectic inner product of two vectors in F_2^(2n). 

Applies the transvection Z_k to v. 

Converts integer i to a length-n array of bits. 

Converts an n-bit string b to an integer between 0 and 2^`n` - 1. 
A utility function for selecting a random Clifford element. 

Returns the 2n x 2n symplectic matrix, over the finite field containing 0 and 1, with the "canonical" index i. 


Returns the "canonical" index of 2n x 2n symplectic matrix gn over the finite field containing 0 and 1. 

The index of a uniformly random 2n x 2n symplectic matrix over the finite field containing 0 and 1. 
Attributes
 pygsti.contract(model, to_what, dataset=None, maxiter=1000000, tol=0.01, use_direct_cp=True, method='NelderMead', verbosity=0)
Contract a Model to a specified space.
All contraction operations except ‘vSPAM’ operate entirely on the gate matrices and leave state preparations and measurements alone, while ‘vSPAM’ operates only on the SPAM operations.
Parameters
 modelModel
The model to contract
 to_whatstring
Specifies the space to which the model is contracted. Allowed values are:
‘TP’ – All gates are manifestly tracepreserving maps.
‘CP’ – All gates are manifestly completelypositive maps.
‘CPTP’ – All gates are manifestly completelypositive and tracepreserving maps.
‘XP’ – All gates are manifestly “experimentallypositive” maps.
‘XPTP’ – All gates are manifestly “experimentallypositive” and tracepreserving maps.
‘vSPAM’ – state preparation and measurement operations are valid.
‘nothing’ – no contraction is performed.
 datasetDataSet, optional
Dataset to use to determine whether a model is in the “experimentallypositive” (XP) space. Required only when contracting to XP or XPTP.
 maxiterint, optional
Maximum number of iterations for iterative contraction routines.
 tolfloat, optional
Tolerance for iterative contraction routines.
 use_direct_cpbool, optional
Whether to use a faster direct-contraction method for CP contraction. This method essentially transforms to the Choi matrix, truncates any negative eigenvalues to zero, then transforms back to an operation matrix.
 methodstring, optional
The method used when contracting to XP and non-directly to CP (i.e. use_direct_cp == False).
 verbosityint, optional
How much detail to send to stdout.
Returns
 Model
The contracted model
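The use_direct_cp method described above (truncate the negative eigenvalues of the Choi matrix to zero) can be sketched in a few lines of numpy. This is an illustrative sketch only, not pyGSTi's implementation: it acts directly on a Hermitian Choi matrix, the helper name truncate_to_cp is hypothetical, and the Jamiolkowski conversion between an operation matrix and its Choi matrix (which pyGSTi performs internally) is omitted.

```python
import numpy as np

def truncate_to_cp(choi, renormalize=True):
    """Project a Hermitian Choi matrix onto the completely-positive cone
    by zeroing its negative eigenvalues.  (Hypothetical helper; the
    operation-matrix <-> Choi-matrix conversion is omitted here.)"""
    evals, evecs = np.linalg.eigh(choi)            # Hermitian eigendecomposition
    clipped = np.clip(evals, 0.0, None)            # truncate negative eigenvalues to zero
    cp_choi = evecs @ np.diag(clipped) @ evecs.conj().T
    if renormalize:
        cp_choi = cp_choi / np.trace(cp_choi)      # restore trace == 1
    return cp_choi

# A Hermitian, Choi-like matrix with one negative eigenvalue:
bad = np.diag([0.7, 0.4, -0.1, 0.0])
good = truncate_to_cp(bad)
assert np.all(np.linalg.eigvalsh(good) >= -1e-12)  # now positive semidefinite
assert abs(np.trace(good) - 1.0) < 1e-12           # trace normalized back to 1
```

Truncating and renormalizing is fast but not the closest CP approximation in every norm, which is why the slower iterative method (method, maxiter, tol above) is also offered.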
 class pygsti.BuiltinBasis(name, dim_or_statespace, sparse=False)
Bases:
LazyBasis
A basis that is included within and integrated into pyGSTi.
Such a basis may, in most cases, be represented merely by its name. (In actuality, a dimension is also required, but this can often be inferred from context.)
Parameters
 name{“pp”, “gm”, “std”, “qt”, “id”, “cl”, “sv”}
Name of the basis to be created.
 dim_or_statespaceint or StateSpace
The dimension of the basis to be created or the state space for which a basis should be created. Note that when this is an integer it is the dimension of the vectors, which correspond to flattened elements in simple cases. Thus, a 1-qubit basis would have dimension 2 in the state-vector (name=”sv”) case and dimension 4 when constructing a density-matrix basis (e.g. name=”pp”).
 sparsebool, optional
Whether basis elements should be stored as SciPy CSR sparse matrices or dense numpy arrays (the default).
Creates a new LazyBasis. Parameters are the same as those to
Basis.__init__().
 property dim
The dimension of the vector space this basis fully or partially spans. Equivalently, the length of the vector_elements of the basis.
 property size
The number of elements (or vectorelements) in the basis.
 property elshape
The shape of each element. Typically either a length1 or length2 tuple, corresponding to vector or matrix elements, respectively. Note that vector elements always have shape (dim,) (or (dim,1) in the sparse case).
 property first_element_is_identity
True if the first element of this basis is proportional to the identity matrix, False otherwise.
 is_equivalent(other, sparseness_must_match=True)
Tests whether this basis is equal to another basis, optionally ignoring sparseness.
Parameters
 otherBasis or str
The basis to compare with.
 sparseness_must_matchbool, optional
If False then comparison ignores differing sparseness, and this function returns True when the two bases are equal except for their .sparse values.
Returns
bool
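The dimension convention above (dimension 4 for a 1-qubit density-matrix basis) can be illustrated by building the normalized Pauli matrices by hand. This is a sketch of what the elements of a "pp"-style basis look like, not pyGSTi's own construction:

```python
import numpy as np

# Normalized 1-qubit Pauli matrices: dividing each Pauli by sqrt(2) gives
# elements orthonormal under the Hilbert-Schmidt inner product
# <A, B> = Tr(A^dagger B).
paulis = [
    np.eye(2, dtype=complex),                      # I
    np.array([[0, 1], [1, 0]], dtype=complex),     # X
    np.array([[0, -1j], [1j, 0]], dtype=complex),  # Y
    np.array([[1, 0], [0, -1]], dtype=complex),    # Z
]
pp_elements = [p / np.sqrt(2) for p in paulis]

# The Gram matrix is the 4x4 identity: four orthonormal elements,
# matching dim = 4 for a 1-qubit density-matrix basis.
gram = np.array([[np.trace(a.conj().T @ b) for b in pp_elements]
                 for a in pp_elements])
assert np.allclose(gram, np.eye(4))
```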
 class pygsti.VerbosityPrinter(verbosity=1, filename=None, comm=None, warnings=True, split=False, clear_file=True)
Bases:
object
Class responsible for logging things to stdout or a file.
Controls verbosity and can print progress bars. For example:
>>> VerbosityPrinter(1)
would construct a printer that printed out messages of level one or higher to the screen.
>>> VerbosityPrinter(3, 'output.txt')
would construct a printer that sends verbose output to a text file.
The static function
create_printer()
will construct a printer from either an integer or an already existing printer. It is a static method of the VerbosityPrinter class, so it is called like so:
>>> VerbosityPrinter.create_printer(2)
or
>>> VerbosityPrinter.create_printer(VerbosityPrinter(3, 'output.txt'))
printer.log('status')
would log ‘status’ if the printer’s verbosity was one or higher.
printer.log('status2', 2)
would log ‘status2’ if the printer’s verbosity was two or higher.
printer.error('something terrible happened')
would ALWAYS log ‘something terrible happened’.
printer.warning('something worrisome happened')
would log if verbosity was one or higher – the same as a normal status.
Both printer.error and printer.warning will prepend ‘ERROR: ‘ or ‘WARNING: ‘ to the message they are given. Optionally, printer.log() can also prepend ‘Status_n’ to the message, where n is the message level.
Logging of progress bars/iterations:
>>> with printer_instance.progress_logging(verbosity):
>>>     for i, item in enumerate(data):
>>>         printer.show_progress(i, len(data))
>>>         printer.log(...)
will output either a progress bar or iteration statuses depending on the printer’s verbosity
Parameters
 verbosityint
How verbose the printer should be.
 filenamestr, optional
Where to put output (If none, output goes to screen)
 commmpi4py.MPI.Comm or ResourceAllocation, optional
Restricts output if the program is running in parallel. (By default, if the rank is 0, output is sent to the screen, and otherwise sent to comm files 1, 2, ….)
 warningsbool, optional
Whether or not to print warnings
 splitbool, optional
Whether to split output between stdout and stderr as appropriate, or to combine the streams so everything is sent to stdout.
 clear_filebool, optional
Whether or not filename should be cleared (overwritten) or simply appended to.
Attributes
 _comm_pathstr
relative path where comm files (outputs of nonroot ranks) are stored.
 _comm_file_namestr
root filename for comm files (outputs of nonroot ranks).
 _comm_file_extstr
filename extension for comm files (outputs of nonroot ranks).
Customize a verbosity printer object
Parameters
 verbosityint, optional
How verbose the printer should be.
 filenamestr, optional
Where to put output (If none, output goes to screen)
 commmpi4py.MPI.Comm or ResourceAllocation, optional
Restricts output if the program is running in parallel. (By default, if the rank is 0, output is sent to the screen, and otherwise sent to comm files 1, 2, ….)
 warningsbool, optional
Whether or not to print warnings
 clone()
Instead of deepcopy, initialize a new printer object and feed it some select deep-copied members.
Returns
VerbosityPrinter
 static create_printer(verbosity, comm=None)
Function for converting between interfaces
Parameters
 verbosityint or VerbosityPrinter object, required:
object to build a printer from
 commmpi4py.MPI.Comm object, optional
Comm object to build printers with. !Will override!
Returns
 VerbosityPrinter :
The printer object, constructed from either an integer or another printer
 error(message)
Log an error to the screen/file
Parameters
 messagestr
the error message
Returns
None
 warning(message)
Log a warning to the screen/file if verbosity > 1
Parameters
 messagestr
the warning message
Returns
None
 log(message, message_level=None, indent_char=' ', show_statustype=False, do_indent=True, indent_offset=0, end='\n', flush=True)
Log a status message to screen/file.
Determines whether the message should be printed based on current verbosity setting, then sends the message to the appropriate output
Parameters
 messagestr
the message to print (or log)
 message_levelint, optional
the minimum verbosity level at which this message is printed.
 indent_charstr, optional
what constitutes an “indent” (messages at higher levels are indented more when do_indent=True).
 show_statustypebool, optional
if True, prepend lines with “Status Level X” indicating the message_level.
 do_indentbool, optional
whether messages at higher message levels should be indented. Note that if this is False it may be helpful to set show_statustype=True.
 indent_offsetint, optional
an additional number of indentations to add, on top of any due to the message level.
 endstr, optional
the character (or string) to end message lines with.
 flushbool, optional
whether stdout should be flushed right after this message is printed (this avoids delays in onscreen output due to buffering).
Returns
None
 verbosity_env(level)
Create a temporary environment with a different verbosity level.
This is context manager, controlled using Python’s with statement:
>>> with printer.verbosity_env(2):
...     printer.log('Message1')  # printed at verbosity level 2
...     printer.log('Message2')  # printed at verbosity level 2
Parameters
 levelint
the verbosity level of the environment.
 progress_logging(message_level=1)
Context manager for logging progress bars/iterations.
(The printer will return to its normal, unrestricted state when the progress logging has finished)
Parameters
 message_levelint, optional
progress messages will not be shown until the verbosity level reaches message_level.
 show_progress(iteration, total, bar_length=50, num_decimals=2, fill_char='#', empty_char='-', prefix='Progress:', suffix='', verbose_messages=None, indent_char=' ', end='\n')
Displays a progress message (to be used within a progress_logging block).
Parameters
 iterationint
the 0-based current iteration – the iteration number this message is for.
 totalint
the total number of iterations expected.
 bar_lengthint, optional
the length, in characters, of a text-format progress bar (only used when the verbosity level is exactly equal to the progress_logging message level).
 num_decimalsint, optional
number of places after the decimal point that are displayed in progress bar’s percentage complete.
 fill_charstr, optional
replaces ‘#’ as the bar-filling character
 empty_charstr, optional
replaces ‘-’ as the empty-bar character
 prefixstr, optional
message in front of the bar
 suffixstr, optional
message after the bar
 verbose_messageslist, optional
A list of strings to display after an initial “Iter X of Y” line when the verbosity level is higher than the progress_logging message level and so more verbose messages are shown (and a progress bar is not). The elements of verbose_messages will occur, one per line, after the initial “Iter X of Y” line.
 indent_charstr, optional
what constitutes an “indentation”.
 endstr, optional
the character (or string) to end message lines with.
Returns
None
 start_recording()
Begins recording the output (to memory).
Begins recording (in memory) a list of (type, verbosityLevel, message) tuples that is returned by the next call to
stop_recording().
Returns
None
 stop_recording()
Stops recording and returns recorded output.
Stops a “recording” started by
start_recording()
and returns the list of (type, verbosityLevel, message) tuples that have been recorded since then.
Returns
list
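The verbosity-threshold behavior described above (log emits only when the message level does not exceed the printer's verbosity, while error always emits) can be condensed into a toy class. This is NOT pyGSTi's VerbosityPrinter; all names are hypothetical and output is collected in a list instead of going to stdout or a file:

```python
class MiniPrinter:
    """Toy illustration of the verbosity-threshold logic (hypothetical)."""

    def __init__(self, verbosity=1):
        self.verbosity = verbosity
        self.lines = []

    def log(self, message, message_level=1):
        # A message is emitted only when the printer's verbosity
        # meets or exceeds the message's level.
        if self.verbosity >= message_level:
            self.lines.append(message)

    def warning(self, message):
        # Warnings behave like level-1 status messages, with a prefix.
        self.log("WARNING: " + message, 1)

    def error(self, message):
        # Errors are ALWAYS emitted, regardless of verbosity.
        self.lines.append("ERROR: " + message)

p = MiniPrinter(1)
p.log("status")      # kept: level 1 <= verbosity 1
p.log("status2", 2)  # dropped: level 2 > verbosity 1
p.error("something terrible happened")  # always kept
```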
 class pygsti.DirectSumBasis(component_bases, name=None, longname=None)
Bases:
LazyBasis
A basis that is the direct sum of one or more “component” bases.
Elements of this basis are the union of the basis elements on each component, each embedded into a common blockdiagonal structure where each component occupies its own block. Thus, when there is more than one component, a DirectSumBasis is not a simple basis because the size of its elements is larger than the size of its vector space (which corresponds to just the diagonal blocks of its elements).
Parameters
 component_basesiterable
A list of the component bases. Each list element may be either a Basis object or a tuple of arguments to
Basis.cast()
, e.g. (‘pp’,4).
 namestr, optional
The name of this basis. If None, the names of the component bases joined with “+” is used.
 longnamestr, optional
A longer description of this basis. If None, then a long name is automatically generated.
Attributes
vector_elements : list
The “vectors” of this basis, always 1D (sparse or dense) arrays.
Create a new DirectSumBasis, a basis for a space that is the direct sum of the spaces spanned by other “component” bases.
Parameters
component_bases : iterable
A list of the component bases. Each list element may be either a Basis object or a tuple of arguments to Basis.cast(), e.g. (‘pp’,4).
name : str, optional
The name of this basis. If None, the names of the component bases joined with “+” is used.
longname : str, optional
A longer description of this basis. If None, then a long name is automatically generated.
 property dim
The dimension of the vector space this basis fully or partially spans. Equivalently, the length of the vector_elements of the basis.
 property size
The number of elements (or vector-elements) in the basis.
 property elshape
The shape of each element. Typically either a length-1 or length-2 tuple, corresponding to vector or matrix elements, respectively. Note that vector elements always have shape (dim,) (or (dim,1) in the sparse case).
 property vector_elements
The “vectors” of this basis, always 1D (sparse or dense) arrays.
Returns
list
 property to_std_transform_matrix
Retrieve the matrix that transforms a vector from this basis to the standard basis of this basis’s dimension.
Returns
 numpy array or scipy.sparse.lil_matrix
An array of shape (dim, size) where dim is the dimension of this basis (the length of its vectors) and size is the size of this basis (its number of vectors).
 property to_elementstd_transform_matrix
Get transformation matrix from this basis to the “element space”.
Get the matrix that transforms vectors in this basis (with length equal to the dim of this basis) to vectors in the “element space”, that is, vectors in the same standard basis that the elements of this basis are expressed in.
Returns
 numpy array
An array of shape (element_dim, size) where element_dim is the dimension, i.e. size, of the elements of this basis (e.g. 16 if the elements are 4x4 matrices) and size is the size of this basis (its number of vectors).
 is_equivalent(other, sparseness_must_match=True)
Tests whether this basis is equal to another basis, optionally ignoring sparseness.
Parameters
other : Basis or str
The basis to compare with.
sparseness_must_match : bool, optional
If False then comparison ignores differing sparseness, and this function returns True when the two bases are equal except for their .sparse values.
Returns
bool
 create_equivalent(builtin_basis_name)
Create an equivalent basis with components of type builtin_basis_name.
Create a Basis that is equivalent in structure & dimension to this basis but whose simple components (perhaps just this basis itself) is of the builtin basis type given by builtin_basis_name.
Parameters
builtin_basis_name : str
The name of a builtin basis, e.g. “pp”, “gm”, or “std”. Used to construct the simple components of the returned basis.
Returns
DirectSumBasis
 create_simple_equivalent(builtin_basis_name=None)
Create a basis of type builtin_basis_name whose elements are compatible with this basis.
Create a basis that is both simple and component-free (a TensorProdBasis, for example, is a simple basis but one with components) of the builtin type specified, whose dimension is compatible with the elements of this basis. This function might also be named “element_equivalent”, as it returns the builtin_basis_name analogue of the standard basis that this basis’s elements are expressed in.
Parameters
builtin_basis_name : str, optional
The name of the builtin basis to use. If None, then a copy of this basis is returned (if it’s simple) or this basis’s name is used to try to construct a simple and component-free version of the same builtin-basis type.
Returns
Basis
 pygsti.CUSTOMLM = 'True'
 pygsti.FLOATSIZE = '8'
 pygsti.run_lgst(dataset, prep_fiducials, effect_fiducials, target_model, op_labels=None, op_label_aliases=None, guess_model_for_gauge=None, svd_truncate_to=None, verbosity=0, check=True)
Performs Linear-inversion Gate Set Tomography on the dataset.
Parameters
dataset : DataSet
The data used to generate the LGST estimates
prep_fiducials : list of Circuits
Fiducial Circuits used to construct an informationally complete effective preparation.
effect_fiducials : list of Circuits
Fiducial Circuits used to construct an informationally complete effective measurement.
target_model : Model
A model used to specify which operation labels should be estimated, and a guess for which gauge these estimates should be returned in.
op_labels : list, optional
A list of which operation labels (or aliases) should be estimated. Overrides the operation labels in target_model. e.g. [‘Gi’,’Gx’,’Gy’,’Gx2’]
op_label_aliases : dictionary, optional
Dictionary whose keys are operation label “aliases” and whose values are circuits corresponding to what that operation label should be expanded into before querying the dataset. Defaults to the empty dictionary (no aliases defined). e.g. op_label_aliases[‘Gx^3’] = pygsti.baseobjs.Circuit([‘Gx’,’Gx’,’Gx’])
guess_model_for_gauge : Model, optional
A model used to compute a gauge transformation that is applied to the LGST estimates before they are returned. This gauge transformation is computed such that if the estimated gates matched the model given, then the operation matrices would match, i.e. the gauge would be the same as the model supplied. Defaults to target_model.
svd_truncate_to : int, optional
The Hilbert space dimension to truncate the operation matrices to, using an SVD to keep only the largest svd_truncate_to singular values of the I_tilde LGST matrix. Zero means no truncation. Defaults to the dimension of target_model.
verbosity : int, optional
How much detail to send to stdout.
check : bool, optional
Specifies whether we perform computationally expensive assertion checks. Computationally cheap assertions will always be checked.
Returns
 Model
A model containing all of the estimated labels (or aliases)
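The linear-inversion idea behind LGST can be illustrated with a small numpy sketch using hypothetical 4-dimensional superoperators (this is not pyGSTi’s actual LGST routine): inverting the Gram matrix recovers each gate up to a similarity (gauge) transformation, which is why a gauge guess is needed afterwards.

```python
import numpy as np

rng = np.random.default_rng(3)
G = rng.normal(size=(4, 4))   # the "true" gate superoperator (unknown to us)
A = rng.normal(size=(4, 4))   # rows built from effect fiducials
B = rng.normal(size=(4, 4))   # columns built from preparation fiducials

P_G = A @ G @ B               # measurable "probability" matrix for the gate
P_I = A @ B                   # the Gram matrix (no gate in between)

# Linear inversion: P_I^{-1} P_G = B^{-1} G B, i.e. the gate is recovered
# only up to a similarity transform by B -- the gauge freedom of LGST.
G_tilde = np.linalg.inv(P_I) @ P_G
print(np.allclose(G_tilde, np.linalg.inv(B) @ G @ B))  # True
```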
 pygsti.gram_rank_and_eigenvalues(dataset, prep_fiducials, effect_fiducials, target_model)
Returns the rank and singular values of the Gram matrix for a dataset.
Parameters
dataset : DataSet
The data used to populate the Gram matrix
prep_fiducials : list of Circuits
Fiducial Circuits used to construct an informationally complete effective preparation.
effect_fiducials : list of Circuits
Fiducial Circuits used to construct an informationally complete effective measurement.
target_model : Model
A model used to make sense of circuit elements, and to compute the theoretical Gram matrix eigenvalues (returned as svalues_target).
Returns
rank : int
the rank of the Gram matrix
svalues : numpy array
the singular values of the Gram matrix
svalues_target : numpy array
the corresponding singular values of the Gram matrix generated by target_model.
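The rank and singular-value computation reported here can be sketched with numpy on a toy Gram matrix (illustrative only; a real Gram matrix is built from measured circuit probabilities):

```python
import numpy as np

# A toy 4x4 Gram matrix of known rank 3, built as A A^T with a rank-3 factor.
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 3))
gram = A @ A.T

# Singular values in descending order; count those above a relative tolerance.
svals = np.linalg.svd(gram, compute_uv=False)
rank = int(np.sum(svals > 1e-10 * svals[0]))
print(rank)  # 3
```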
 pygsti.run_gst_fit_simple(dataset, start_model, circuits, optimizer, objective_function_builder, resource_alloc, verbosity=0)
Performs core Gate Set Tomography function of model optimization.
Optimizes the parameters of start_model by minimizing the objective function built by objective_function_builder. Probabilities are computed by the model, and outcome counts are supplied by dataset.
Parameters
dataset : DataSet
The dataset to obtain counts from.
start_model : Model
The Model used as a starting point for the least-squares optimization.
circuits : list of (tuples or Circuits)
Each tuple contains operation labels and specifies a circuit whose probabilities are considered when trying to least-squares-fit the probabilities given in the dataset. e.g. [ (), (‘Gx’,), (‘Gx’,’Gy’) ]
optimizer : Optimizer or dict
The optimizer to use, or a dictionary of optimizer parameters from which a default optimizer can be built.
objective_function_builder : ObjectiveFunctionBuilder
Defines the objective function that is optimized. Can also be anything readily converted to an objective function builder, e.g. “logl”.
resource_alloc : ResourceAllocation
A resource allocation object containing information about how to divide computation amongst multiple processors and any memory limits that should be imposed.
verbosity : int, optional
How much detail to send to stdout.
Returns
result : OptimizerResult
the result of the optimization
model : Model
the best-fit model.
 pygsti.run_gst_fit(mdc_store, optimizer, objective_function_builder, verbosity=0)
Performs core Gate Set Tomography function of model optimization.
Optimizes the model to the data within mdc_store by minimizing the objective function built by objective_function_builder.
Parameters
mdc_store : ModelDatasetCircuitsStore
An object holding a model, data set, and set of circuits. This defines the model to be optimized, the data to fit to, and the circuits where predicted vs. observed comparisons should be made. This object also contains additional information specific to the given model, data set, and circuit list, doubling as a cache for increased performance. This information is also specific to a particular resource allocation, which affects how cached values are stored.
optimizer : Optimizer or dict
The optimizer to use, or a dictionary of optimizer parameters from which a default optimizer can be built.
objective_function_builder : ObjectiveFunctionBuilder
Defines the objective function that is optimized. Can also be anything readily converted to an objective function builder, e.g. “logl”. If None, then mdc_store must itself be an already-built objective function.
verbosity : int, optional
How much detail to send to stdout.
Returns
result : OptimizerResult
the result of the optimization
objfn_store : MDCObjectiveFunction
the objective function and store containing the best-fit model, evaluated at the best-fit point.
 pygsti.run_iterative_gst(dataset, start_model, circuit_lists, optimizer, iteration_objfn_builders, final_objfn_builders, resource_alloc, verbosity=0)
Performs Iterative Gate Set Tomography on the dataset.
Parameters
dataset : DataSet
The data used to generate MLGST gate estimates
start_model : Model
The Model used as a starting point for the least-squares optimization.
circuit_lists : list of lists of (tuples or Circuits)
The i-th element is a list of the circuits to be used in the i-th iteration of the optimization. Each element of these lists is a circuit, specified as either a Circuit object or as a tuple of operation labels (but all must be specified using the same type). e.g. [ [ (), (‘Gx’,) ], [ (), (‘Gx’,), (‘Gy’,) ], [ (), (‘Gx’,), (‘Gy’,), (‘Gx’,’Gy’) ] ]
optimizer : Optimizer or dict
The optimizer to use, or a dictionary of optimizer parameters from which a default optimizer can be built.
iteration_objfn_builders : list
List of ObjectiveFunctionBuilder objects defining which objective functions should be optimized (successively) on each iteration.
final_objfn_builders : list
List of ObjectiveFunctionBuilder objects defining which objective functions should be optimized (successively) on the final iteration.
resource_alloc : ResourceAllocation
A resource allocation object containing information about how to divide computation amongst multiple processors and any memory limits that should be imposed.
verbosity : int, optional
How much detail to send to stdout.
Returns
models : list of Models
list whose i-th element is the model corresponding to the results of the i-th iteration.
optimums : list of OptimizerResults
list whose i-th element is the final optimizer result from that iteration.
final_objfn : MDSObjectiveFunction
The final iteration’s objective function / store, which encapsulates the final objective function evaluated at the best-fit point (an “evaluated” model-dataset-circuits store).
 pygsti.iterative_gst_generator(dataset, start_model, circuit_lists, optimizer, iteration_objfn_builders, final_objfn_builders, resource_alloc, starting_index=0, verbosity=0)
Performs Iterative Gate Set Tomography on the dataset. Same as run_iterative_gst, except this function returns a generator that yields the output of each iteration instead of returning the lists of outputs all at once.
Parameters
dataset : DataSet
The data used to generate MLGST gate estimates
start_model : Model
The Model used as a starting point for the least-squares optimization.
circuit_lists : list of lists of (tuples or Circuits)
The i-th element is a list of the circuits to be used in the i-th iteration of the optimization. Each element of these lists is a circuit, specified as either a Circuit object or as a tuple of operation labels (but all must be specified using the same type). e.g. [ [ (), (‘Gx’,) ], [ (), (‘Gx’,), (‘Gy’,) ], [ (), (‘Gx’,), (‘Gy’,), (‘Gx’,’Gy’) ] ]
optimizer : Optimizer or dict
The optimizer to use, or a dictionary of optimizer parameters from which a default optimizer can be built.
iteration_objfn_builders : list
List of ObjectiveFunctionBuilder objects defining which objective functions should be optimized (successively) on each iteration.
final_objfn_builders : list
List of ObjectiveFunctionBuilder objects defining which objective functions should be optimized (successively) on the final iteration.
resource_alloc : ResourceAllocation
A resource allocation object containing information about how to divide computation amongst multiple processors and any memory limits that should be imposed.
starting_index : int, optional (default 0)
Index of the iteration to start the optimization at. Primarily used when warm-starting the iterative optimization from a checkpoint.
verbosity : int, optional
How much detail to send to stdout.
Returns
generator
Returns a generator which, when queried the i-th time, returns a tuple containing:
model : the model corresponding to the results of the i-th iteration.
optimums : the final OptimizerResult from the i-th iteration.
final_objfn : on the final iteration only, the MDSObjectiveFunction / store, which encapsulates the final objective function evaluated at the best-fit point (an “evaluated” model-dataset-circuits store).
 pygsti.find_closest_unitary_opmx(operation_mx)
Find the closest (in fidelity) unitary superoperator to operation_mx.
Finds the closest operation matrix (by maximizing fidelity) to operation_mx that describes a unitary quantum gate.
Parameters
operation_mx : numpy array
The operation matrix to act on.
Returns
 numpy array
The resulting closest unitary operation matrix.
 pygsti.gaugeopt_to_target(model, target_model, item_weights=None, cptp_penalty_factor=0, spam_penalty_factor=0, gates_metric='frobenius', spam_metric='frobenius', gauge_group=None, method='auto', maxiter=100000, maxfev=None, tol=1e-08, oob_check_interval=0, convert_model_to=None, return_all=False, comm=None, verbosity=0, check_jac=False)
Optimize the gauge degrees of freedom of a model to that of a target.
Parameters
model : Model
The model to gauge-optimize
target_model : Model
The model to optimize to. The metric used for comparing models is given by gates_metric and spam_metric.
item_weights : dict, optional
Dictionary of weighting factors for gates and spam operators. Keys can be gate, state preparation, or POVM effect, as well as the special values “spam” or “gates” which apply the given weighting to all spam operators or gates respectively. Values are floating point numbers. Values given for specific gates or spam operators take precedence over “gates” and “spam” values. The precise use of these weights depends on the model metric(s) being used.
cptp_penalty_factor : float, optional
If greater than zero, the objective function also contains CPTP penalty terms which penalize non-CPTP-ness of the gates being optimized. This factor multiplies these CPTP penalty terms.
spam_penalty_factor : float, optional
If greater than zero, the objective function also contains SPAM penalty terms which penalize non-positiveness of the state preps being optimized. This factor multiplies these SPAM penalty terms.
gates_metric : {“frobenius”, “fidelity”, “tracedist”}, optional
The metric used to compare gates within models. “frobenius” computes the normalized sqrt(sum-of-squared-differences), with weights multiplying the squared differences (see Model.frobeniusdist()). “fidelity” and “tracedist” sum the individual infidelities or trace distances of each gate, weighted by the weights.
spam_metric : {“frobenius”, “fidelity”, “tracedist”}, optional
The metric used to compare spam vectors within models. “frobenius” computes the normalized sqrt(sum-of-squared-differences), with weights multiplying the squared differences (see Model.frobeniusdist()). “fidelity” and “tracedist” sum the individual infidelities or trace distances of each “SPAM gate”, weighted by the weights.
gauge_group : GaugeGroup, optional
The gauge group which defines which gauge transformations are optimized over. If None, then the model’s default gauge group is used.
method : string, optional
The method used to optimize the objective function. Can be any method known by scipy.optimize.minimize such as ‘BFGS’, ‘Nelder-Mead’, ‘CG’, ‘L-BFGS-B’, or additionally:
‘auto’ – ‘ls’ when allowed, otherwise ‘L-BFGS-B’
‘ls’ – custom least-squares optimizer.
‘custom’ – custom CG that often works better than ‘CG’
‘supersimplex’ – repeated application of ‘Nelder-Mead’ to converge it
‘basinhopping’ – scipy.optimize.basinhopping using L-BFGS-B as a local optimizer
‘swarm’ – particle swarm global optimization algorithm
‘evolve’ – evolutionary global optimization algorithm using DEAP
‘brute’ – Experimental: scipy.optimize.brute using 4 points along each dimension
maxiter : int, optional
Maximum number of iterations for the gauge optimization.
maxfev : int, optional
Maximum number of function evaluations for the gauge optimization. Defaults to maxiter.
tol : float, optional
The tolerance for the gauge optimization.
oob_check_interval : int, optional
If greater than zero, gauge transformations are allowed to fail (by raising any exception) to indicate an out-of-bounds condition that the gauge optimizer will avoid. If zero, then any gauge-transform failures just terminate the optimization.
convert_model_to : str, dict, list, optional
For use when model is an ExplicitOpModel. When not None, calls model.convert_members_inplace(convert_model_to, set_default_gauge_group=False) if convert_model_to is a string, model.convert_members_inplace(**convert_model_to) if it is a dict, and makes repeated calls to either of the above when convert_model_to is a list or tuple, prior to performing the gauge optimization. This allows the gauge optimization to be performed using a differently constrained model.
return_all : bool, optional
When True, return the best “goodness” value and gauge matrix in addition to the gauge-optimized model.
comm : mpi4py.MPI.Comm, optional
When not None, an MPI communicator for distributing the computation across multiple processors.
verbosity : int, optional
How much detail to send to stdout.
check_jac : bool
When True, check the least-squares analytic jacobian against finite differences
Returns
model : if return_all == False
(goodnessMin, gaugeMx, model) : if return_all == True
Where goodnessMin is the minimum value of the goodness function (the best ‘goodness’) found, gaugeMx is the gauge matrix used to transform the model, and model is the final gauge-transformed model.
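The gauge freedom being optimized here can be illustrated with a plain numpy sketch using hypothetical 4-dimensional superoperators (not pyGSTi code): conjugating a gate and correspondingly transforming the SPAM vectors by any invertible matrix leaves every predicted probability unchanged, so a metric against a target model is needed to pin the gauge down.

```python
import numpy as np

rng = np.random.default_rng(1)
G = rng.normal(size=(4, 4))     # a gate superoperator in some basis
rho = rng.normal(size=(4, 1))   # vectorized state preparation
E = rng.normal(size=(1, 4))     # vectorized POVM effect

# An invertible "gauge matrix" M close to the identity.
M = np.eye(4) + 0.1 * rng.normal(size=(4, 4))
Minv = np.linalg.inv(M)

# Gauge-transformed model: G -> M G M^{-1}, rho -> M rho, E -> E M^{-1}.
G2, rho2, E2 = M @ G @ Minv, M @ rho, E @ Minv

# Predicted probabilities are identical -- the gauge is unobservable.
p1 = (E @ G @ rho).item()
p2 = (E2 @ G2 @ rho2).item()
print(abs(p1 - p2) < 1e-9)  # True
```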
 pygsti.gaugeopt_custom(model, objective_fn, gauge_group=None, method='L-BFGS-B', maxiter=100000, maxfev=None, tol=1e-08, oob_check_interval=0, return_all=False, jacobian_fn=None, comm=None, verbosity=0)
Optimize the gauge of a model using a custom objective function.
Parameters
model : Model
The model to gauge-optimize
objective_fn : function
The function to be minimized. The function must take a single Model argument and return a float.
gauge_group : GaugeGroup, optional
The gauge group which defines which gauge transformations are optimized over. If None, then the model’s default gauge group is used.
method : string, optional
The method used to optimize the objective function. Can be any method known by scipy.optimize.minimize such as ‘BFGS’, ‘Nelder-Mead’, ‘CG’, ‘L-BFGS-B’, or additionally:
‘custom’ – custom CG that often works better than ‘CG’
‘supersimplex’ – repeated application of ‘Nelder-Mead’ to converge it
‘basinhopping’ – scipy.optimize.basinhopping using L-BFGS-B as a local optimizer
‘swarm’ – particle swarm global optimization algorithm
‘evolve’ – evolutionary global optimization algorithm using DEAP
‘brute’ – Experimental: scipy.optimize.brute using 4 points along each dimension
maxiter : int, optional
Maximum number of iterations for the gauge optimization.
maxfev : int, optional
Maximum number of function evaluations for the gauge optimization. Defaults to maxiter.
tol : float, optional
The tolerance for the gauge optimization.
oob_check_interval : int, optional
If greater than zero, gauge transformations are allowed to fail (by raising any exception) to indicate an out-of-bounds condition that the gauge optimizer will avoid. If zero, then any gauge-transform failures just terminate the optimization.
return_all : bool, optional
When True, return the best “goodness” value and gauge matrix in addition to the gauge-optimized model.
jacobian_fn : function, optional
The jacobian of objective_fn. The function must take three parameters, 1) the untransformed Model, 2) the transformed Model, and 3) the GaugeGroupElement representing the transformation that brings the first argument into the second.
comm : mpi4py.MPI.Comm, optional
When not None, an MPI communicator for distributing the computation across multiple processors.
verbosity : int, optional
How much detail to send to stdout.
Returns
model
if return_all == False
(goodnessMin, gaugeMx, model)
if return_all == True, where goodnessMin is the minimum value of the goodness function (the best ‘goodness’) found, gaugeMx is the gauge matrix used to transform the model, and model is the final gauge-transformed model.
 pygsti.max_gram_basis(op_labels, dataset, max_length=0)
Compute a maximal set of basis circuits for a Gram matrix.
That is, a maximal set of strings {S_i} such that the gate strings { S_i S_j } are all present in dataset. If max_length > 0, then restrict len(S_i) <= max_length.
Parameters
op_labels : list or tuple
the operation labels to use in Gram matrix basis strings
dataset : DataSet
the dataset to use when constructing the Gram matrix
max_length : int, optional
the maximum string length considered for Gram matrix basis elements. Defaults to 0 (no limit).
Returns
 list of tuples
where each tuple contains operation labels and specifies a single circuit.
 pygsti.max_gram_rank_and_eigenvalues(dataset, target_model, max_basis_string_length=10, fixed_lists=None)
Compute the rank and singular values of a maximal Gram matrix.
That is, compute the rank and singular values of the Gram matrix computed using the basis: max_gram_basis(dataset.gate_labels(), dataset, max_basis_string_length).
Parameters
dataset : DataSet
the dataset to use when constructing the Gram matrix
target_model : Model
A model used to make sense of circuits and for the construction of a theoretical Gram matrix and spectrum.
max_basis_string_length : int, optional
the maximum string length considered for Gram matrix basis elements. Defaults to 10.
fixed_lists : (prep_fiducials, effect_fiducials), optional
2-tuple of Circuit lists, specifying the preparation and measurement fiducials to use when constructing the Gram matrix, and thereby bypassing the search for such lists.
Returns
rank : integer
singularvalues : numpy array
targetsingularvalues : numpy array
 pygsti.id2x2
 pygsti.sigmax
 pygsti.sigmay
 pygsti.sigmaz
 pygsti.unitary_to_pauligate(u)
Get the linear operator on (vectorized) density matrices corresponding to an n-qubit unitary operator on states.
Parameters
u : numpy array
A d x d array giving the action of the unitary on a state in the sigma-z basis, where d = 2 ** n_qubits
Returns
 numpy array
The operator on density matrices that have been vectorized as d**2 vectors in the Pauli basis.
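The transformation this function performs can be sketched for a single qubit with plain numpy. This is an illustrative Pauli-transfer-matrix construction, not pyGSTi’s implementation, and normalization conventions may differ:

```python
import numpy as np

# Single-qubit Pauli matrices.
I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0])
paulis = [I, X, Y, Z]

def unitary_to_ptm(u):
    """Pauli transfer matrix R_ij = Tr[P_i U P_j U^dag] / d for one qubit."""
    d = u.shape[0]
    return np.array([[np.trace(Pi @ u @ Pj @ u.conj().T).real / d
                      for Pj in paulis] for Pi in paulis])

# An X gate maps I -> I, X -> X, Y -> -Y, Z -> -Z, so the PTM is
# diag(1, 1, -1, -1) in the Pauli basis.
R = unitary_to_ptm(X)
print(np.allclose(np.diag(R), [1, 1, -1, -1]))  # True
```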
 pygsti.sigmaii
 pygsti.sigmaix
 pygsti.sigmaiy
 pygsti.sigmaiz
 pygsti.sigmaxi
 pygsti.sigmaxx
 pygsti.sigmaxy
 pygsti.sigmaxz
 pygsti.sigmayi
 pygsti.sigmayx
 pygsti.sigmayy
 pygsti.sigmayz
 pygsti.sigmazi
 pygsti.sigmazx
 pygsti.sigmazy
 pygsti.sigmazz
 pygsti.single_qubit_gate(hx, hy, hz, noise=0)
Construct the single-qubit operation matrix.
Build the operation matrix given by exponentiating -i * (hx*X + hy*Y + hz*Z), where X, Y, and Z are the sigma matrices. Thus, hx, hy, and hz correspond to rotation angles divided by 2. Additionally, a uniform depolarization noise can be applied to the gate.
Parameters
hx : float
Coefficient of sigma-X matrix in exponent.
hy : float
Coefficient of sigma-Y matrix in exponent.
hz : float
Coefficient of sigma-Z matrix in exponent.
noise : float, optional
The amount of uniform depolarizing noise.
Returns
 numpy array
4x4 operation matrix which operates on a 1-qubit density matrix expressed as a vector in the Pauli basis ( {I,X,Y,Z}/sqrt(2) ).
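The exponentiation described above can be sketched with numpy/scipy. This shows the unitary part only, without the depolarization step or the conversion to a 4x4 superoperator, and is not pyGSTi’s implementation:

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)

def single_qubit_unitary(hx, hy, hz):
    """U = exp(-i (hx X + hy Y + hz Z)); hx, hy, hz are half rotation angles."""
    return expm(-1j * (hx * X + hy * Y + hz * Z))

# hx = pi/2 gives a pi rotation about X: exp(-i (pi/2) X) = -i X,
# i.e. an X gate up to an unobservable global phase.
U = single_qubit_unitary(np.pi / 2, 0.0, 0.0)
print(np.allclose(U, -1j * X))  # True
```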
 pygsti.two_qubit_gate(ix=0, iy=0, iz=0, xi=0, xx=0, xy=0, xz=0, yi=0, yx=0, yy=0, yz=0, zi=0, zx=0, zy=0, zz=0, ii=0)
Construct the two-qubit operation matrix.
Build the operation matrix given by exponentiating -i * (xx*XX + xy*XY + …) where the terms in the exponent are tensor products of two Pauli matrices.
Parameters
ix : float, optional
Coefficient of IX matrix in exponent.
iy : float, optional
Coefficient of IY matrix in exponent.
iz : float, optional
Coefficient of IZ matrix in exponent.
xi : float, optional
Coefficient of XI matrix in exponent.
xx : float, optional
Coefficient of XX matrix in exponent.
xy : float, optional
Coefficient of XY matrix in exponent.
xz : float, optional
Coefficient of XZ matrix in exponent.
yi : float, optional
Coefficient of YI matrix in exponent.
yx : float, optional
Coefficient of YX matrix in exponent.
yy : float, optional
Coefficient of YY matrix in exponent.
yz : float, optional
Coefficient of YZ matrix in exponent.
zi : float, optional
Coefficient of ZI matrix in exponent.
zx : float, optional
Coefficient of ZX matrix in exponent.
zy : float, optional
Coefficient of ZY matrix in exponent.
zz : float, optional
Coefficient of ZZ matrix in exponent.
ii : float, optional
Coefficient of II matrix in exponent.
Returns
 numpy array
16x16 operation matrix which operates on a 2-qubit density matrix expressed as a vector in the Pauli-product basis.
 pygsti.create_bootstrap_dataset(input_data_set, generation_method, input_model=None, seed=None, outcome_labels=None, verbosity=1)
Creates a DataSet used for generating bootstrapped error bars.
Parameters
input_data_set : DataSet
The data set to use for generating the “bootstrapped” data set.
generation_method : { ‘nonparametric’, ‘parametric’ }
The type of dataset to generate. ‘parametric’ generates a DataSet with the same circuits and sample counts as input_data_set but using the probabilities in input_model (which must be provided). ‘nonparametric’ generates a DataSet with the same circuits and sample counts as input_data_set using the count frequencies of input_data_set as probabilities.
input_model : Model, optional
The model used to compute the probabilities for circuits when generation_method is set to ‘parametric’. If ‘nonparametric’ is selected, this argument must be set to None (the default).
seed : int, optional
A seed value for numpy’s random number generator.
outcome_labels : list, optional
The list of outcome labels to include in the output dataset. If None are specified, defaults to the spam labels of input_data_set.
verbosity : int, optional
How verbose the function output is. If 0, then printing is suppressed. If 1 (or greater), then printing is not suppressed.
Returns
DataSet
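The ‘nonparametric’ resampling described above can be sketched for a single circuit with numpy (illustrative only; the real function operates on whole DataSet objects and repeats this per circuit):

```python
import numpy as np

# Observed counts for one circuit's two outcomes ('0' and '1'): 70 and 30.
counts = np.array([70, 30])
n = int(counts.sum())
freqs = counts / n

# Nonparametric bootstrap: redraw the same number of samples, using the
# observed outcome frequencies as the sampling probabilities.
rng = np.random.default_rng(42)
boot_counts = rng.multinomial(n, freqs)
print(int(boot_counts.sum()))  # 100
```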
 pygsti.create_bootstrap_models(num_models, input_data_set, generation_method, fiducial_prep, fiducial_measure, germs, max_lengths, input_model=None, target_model=None, start_seed=0, outcome_labels=None, lsgst_lists=None, return_data=False, verbosity=2)
Creates a series of “bootstrapped” Models.
Models are created from a single DataSet (and possibly a Model) and are typically used for generating bootstrapped error bars. The resulting Models are obtained by performing MLGST on data generated by repeatedly calling create_bootstrap_dataset() with consecutive integer seed values.
Parameters
num_models : int
The number of models to create.
input_data_set : DataSet
The data set to use for generating the “bootstrapped” data set.
generation_method : { ‘nonparametric’, ‘parametric’ }
The type of data to generate. ‘parametric’ generates DataSets with the same circuits and sample counts as input_data_set but using the probabilities in input_model (which must be provided). ‘nonparametric’ generates DataSets with the same circuits and sample counts as input_data_set using the count frequencies of input_data_set as probabilities.
fiducial_prep : list of Circuits
The state preparation fiducial circuits used by MLGST.
fiducial_measure : list of Circuits
The measurement fiducial circuits used by MLGST.
germs : list of Circuits
The germ circuits used by MLGST.
max_lengths : list of ints
List of integers, one per MLGST iteration, which set truncation lengths for repeated germ strings. The list of circuits for the i-th LSGST iteration includes the repeated germs truncated to the L-values up to and including the i-th one.
input_model : Model, optional
The model used to compute the probabilities for circuits when generation_method is set to ‘parametric’. If ‘nonparametric’ is selected, this argument must be set to None (the default).
target_model : Model, optional
Mandatory model to use as the target model for MLGST when generation_method is set to ‘nonparametric’. When ‘parametric’ is selected, this should be the ideal version of input_model.
start_seed : int, optional
The initial seed value for numpy’s random number generator when generating data sets. For each successive dataset (and model) generated, the seed is incremented by one.
outcome_labels : list, optional
The list of outcome labels to include in the output dataset. If None are specified, defaults to the effect labels of input_data_set.
lsgst_lists : list of circuit lists, optional
Provides an explicit list of circuit lists to be used in analysis; to be given if the dataset uses “incomplete” or “reduced” sets of circuits. Default is None.
return_data : bool
Whether generated data sets should be returned in addition to models.
verbosity : int
Level of detail printed to stdout.
Returns
models : list
The list of generated Model objects.
data : list
The list of generated DataSet objects, only returned when return_data == True.
 pygsti.gauge_optimize_models(gs_list, target_model, gate_metric='frobenius', spam_metric='frobenius', plot=True)
Optimizes the “spam weight” parameter used when gauge optimizing a set of models.
This function gauge optimizes multiple times using a range of spam weights and takes the one that minimizes the average spam error multiplied by the average gate error (with respect to a target model).
Parameters
gs_list : list
The list of Model objects to gauge optimize (simultaneously).
target_model : Model
The model to compare the gauge-optimized gates with, and also to gauge-optimize them to.
gate_metric : { “frobenius”, “fidelity”, “tracedist” }, optional
The metric used within the gauge optimization to determine error in the gates.
spam_metric : { “frobenius”, “fidelity”, “tracedist” }, optional
The metric used within the gauge optimization to determine error in the state preparation and measurement.
plot : bool, optional
Whether to create a plot of the model-target discrepancy as a function of spam weight (figure displayed interactively).
Returns
 list
The list of Models gauge-optimized using the best spamWeight.
 pygsti.create_explicit_model(processor_spec, custom_gates=None, depolarization_strengths=None, stochastic_error_probs=None, lindblad_error_coeffs=None, depolarization_parameterization='depolarize', stochastic_parameterization='stochastic', lindblad_parameterization='auto', evotype='default', simulator='auto', ideal_gate_type='auto', ideal_spam_type='computational', embed_gates=False, basis='pp')
 class pygsti.ForwardSimulator(model=None)
Bases:
pygsti.baseobjs.nicelyserializable.NicelySerializable
A calculator of circuit outcome probability calculations and their derivatives w.r.t. model parameters.
Some forward simulators may also be used to perform operation-product calculations.
This functionality exists in a class separate from Model to allow additional model classes (e.g. ones which use entirely different, non-gate-local, parameterizations of operation matrices and SPAM vectors) access to these fundamental operations. It also allows for the easier addition of new forward simulators.
Note: a model holds or “contains” a forward simulator instance to perform its computations, and a forward simulator holds a reference to its parent model, so we need to make sure the forward simulator doesn’t serialize the model or we have a circular reference.
Parameters
 model : Model, optional
The model this forward simulator will use to compute circuit outcome probabilities.
 property model
 Castable
 classmethod cast(obj: ForwardSimulator, num_qubits=None)
num_qubits is only used if obj == ‘auto’.
 probs(circuit, outcomes=None, time=None, resource_alloc=None)
Construct a dictionary containing the outcome probabilities for a single circuit.
Parameters
 circuit : Circuit or tuple of operation labels
The sequence of operation labels specifying the circuit.
 outcomes : list or tuple
A sequence of outcomes, which can themselves be either tuples (to include intermediate measurements) or simple strings, e.g. ‘010’. If None, only nonzero outcome probabilities will be reported.
 time : float, optional
The start time at which circuit is evaluated.
 resource_alloc : ResourceAllocation, optional
The resources available for computing circuit outcome probabilities.
Returns
 probs : OutcomeLabelDict
A dictionary with keys equal to outcome labels and values equal to probabilities. If no target outcomes are provided, only nonzero probabilities will be reported.
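A sketch of how one might consume an outcome-probability dictionary of this shape; the outcome labels and values below are made up for illustration, not produced by pyGSTi.

```python
# Hypothetical outcome-probability dictionary shaped like the
# OutcomeLabelDict returned by probs(): keys are outcome-label tuples,
# values are probabilities.
probs = {('0',): 0.9, ('1',): 0.1}

# Probabilities over the reported outcomes should sum to (at most) 1.
total = sum(probs.values())

# The most likely outcome label is the key with the largest probability.
most_likely = max(probs, key=probs.get)
```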
 dprobs(circuit, resource_alloc=None)
Construct a dictionary containing outcome probability derivatives for a single circuit.
Parameters
 circuit : Circuit or tuple of operation labels
The sequence of operation labels specifying the circuit.
 resource_alloc : ResourceAllocation, optional
The resources available for computing circuit outcome probability derivatives.
Returns
 dprobs : OutcomeLabelDict
A dictionary with keys equal to outcome labels and values equal to an array containing the (partial) derivatives of the outcome probability with respect to all model parameters.
 hprobs(circuit, resource_alloc=None)
Construct a dictionary containing outcome probability Hessians for a single circuit.
Parameters
 circuit : Circuit or tuple of operation labels
The sequence of operation labels specifying the circuit.
 resource_alloc : ResourceAllocation, optional
The resources available for computing circuit outcome probability Hessians.
Returns
 hprobs : OutcomeLabelDict
A dictionary with keys equal to outcome labels and values equal to a 2D array that is the Hessian matrix for the corresponding outcome probability (with respect to all model parameters).
 create_layout(circuits, dataset=None, resource_alloc=None, array_types=(), derivative_dimensions=None, verbosity=0)
Constructs a circuit-outcome-probability-array (COPA) layout for circuits and dataset.
Parameters
 circuits : list
The circuits whose outcome probabilities should be computed.
 dataset : DataSet
The source of data counts that will be compared to the circuit outcome probabilities. The computed outcome probabilities are limited to those with counts present in dataset.
 resource_alloc : ResourceAllocation
Available resources and allocation information. These factors influence how the layout (evaluation strategy) is constructed.
 array_types : tuple, optional
A tuple of string-valued array types, as given by
CircuitOutcomeProbabilityArrayLayout.allocate_local_array()
. These types determine what types of arrays we anticipate computing using this layout (and forward simulator). They are used to check available memory against the limit (if one exists) within resource_alloc. The array types also determine the number of derivatives that this layout is able to compute: for example, if you ever want to compute derivatives or Hessians of element arrays then array_types must contain at least one ‘ep’ or ‘epp’ type, respectively, or the layout will not allocate the intermediate storage needed for derivative-containing types. If you don’t care about accurate memory limits, use (‘e’,) when you only ever compute probabilities and never their derivatives, and (‘e’,’ep’) or (‘e’,’ep’,’epp’) if you need to compute Jacobians or Hessians too.
 derivative_dimensions : tuple, optional
A tuple containing, optionally, the parameter-space dimension used when taking first and second derivatives with respect to the circuit outcome probabilities. This should have at least 1 or 2 elements when array_types contains ‘ep’ or ‘epp’ types, respectively. If array_types contains either of these strings and derivative_dimensions is None on input, then derivative_dimensions is set automatically based on self.model.
 verbosity : int or VerbosityPrinter
Determines how much output to send to stdout. 0 means no output; higher integers mean more output.
Returns
CircuitOutcomeProbabilityArrayLayout
 bulk_probs(circuits, clip_to=None, resource_alloc=None, smartc=None)
Construct a dictionary containing the probabilities for an entire list of circuits.
Parameters
 circuits : list of Circuits
The list of circuits. May also be a
CircuitOutcomeProbabilityArrayLayout
object containing pre-computed quantities that make this function run faster.
 clip_to : 2-tuple, optional
(min, max) to clip the return value to, if not None.
 resource_alloc : ResourceAllocation, optional
A resource allocation object describing the available resources and a strategy for partitioning them.
 smartc : SmartCache, optional
A cache object to cache & use previously cached values inside this function.
Returns
 probs : dictionary
A dictionary such that probs[circuit] is an ordered dictionary of outcome probabilities whose keys are outcome labels.
 bulk_dprobs(circuits, resource_alloc=None, smartc=None)
Construct a dictionary containing the probability derivatives for an entire list of circuits.
Parameters
 circuits : list of Circuits
The list of circuits. May also be a
CircuitOutcomeProbabilityArrayLayout
object containing pre-computed quantities that make this function run faster.
 resource_alloc : ResourceAllocation, optional
A resource allocation object describing the available resources and a strategy for partitioning them.
 smartc : SmartCache, optional
A cache object to cache & use previously cached values inside this function.
Returns
 dprobs : dictionary
A dictionary such that dprobs[circuit] is an ordered dictionary of derivative arrays (one element per differentiated parameter) whose keys are outcome labels.
 bulk_hprobs(circuits, resource_alloc=None, smartc=None)
Construct a dictionary containing the probability Hessians for an entire list of circuits.
Parameters
 circuits : list of Circuits
The list of circuits. May also be a
CircuitOutcomeProbabilityArrayLayout
object containing pre-computed quantities that make this function run faster.
 resource_alloc : ResourceAllocation, optional
A resource allocation object describing the available resources and a strategy for partitioning them.
 smartc : SmartCache, optional
A cache object to cache & use previously cached values inside this function.
Returns
 hprobs : dictionary
A dictionary such that hprobs[circuit] is an ordered dictionary of Hessian arrays (a square matrix with one row/column per differentiated parameter) whose keys are outcome labels.
 bulk_fill_probs(array_to_fill, layout)
Compute the outcome probabilities for a list of circuits.
This routine fills a 1D array, array_to_fill, with circuit outcome probabilities as dictated by a
CircuitOutcomeProbabilityArrayLayout
(“COPA layout”) object, which is usually specifically tailored for efficiency. The array_to_fill array must have length equal to the number of elements in layout, and the meanings of each element are given by layout.
Parameters
 array_to_fill : numpy ndarray
an already-allocated 1D numpy array of length equal to the total number of computed elements (i.e. len(layout)).
 layout : CircuitOutcomeProbabilityArrayLayout
A layout for array_to_fill, describing what circuit outcome each element corresponds to. Usually given by a prior call to
create_layout()
.
Returns
None
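The fill-style API above expects a pre-allocated array sized to the layout and mutates it in place rather than returning a new array. A generic numpy sketch of that contract, with a stand-in integer for len(layout) and a dummy fill routine (neither is pyGSTi code):

```python
import numpy as np

# Stand-in for the number of elements in a COPA layout (len(layout)).
num_elements = 6

# bulk_fill_probs-style routines expect an already-allocated 1D array
# of this length, which they fill in place.
array_to_fill = np.empty(num_elements, dtype=float)

def fake_fill(arr):
    # Dummy fill routine: writes uniform probabilities into the
    # caller-owned buffer, mirroring the mutate-in-place contract.
    arr[:] = 1.0 / len(arr)

fake_fill(array_to_fill)
```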
 bulk_fill_dprobs(array_to_fill, layout, pr_array_to_fill=None)
Compute the outcome probability derivatives for an entire tree of circuits.
This routine fills a 2D array, array_to_fill, with circuit outcome probability derivatives as dictated by a
CircuitOutcomeProbabilityArrayLayout
(“COPA layout”) object, which is usually specifically tailored for efficiency. The array_to_fill array must have length equal to the number of elements in layout, and the meanings of each element are given by layout.
Parameters
 array_to_fill : numpy ndarray
an already-allocated 2D numpy array of shape (len(layout), Np), where Np is the number of model parameters being differentiated with respect to.
 layout : CircuitOutcomeProbabilityArrayLayout
A layout for array_to_fill, describing what circuit outcome each element corresponds to. Usually given by a prior call to
create_layout()
.
 pr_array_to_fill : numpy array, optional
when not None, an already-allocated length-len(layout) numpy array that is filled with probabilities, just as in
bulk_fill_probs()
.
Returns
None
 bulk_fill_hprobs(array_to_fill, layout, pr_array_to_fill=None, deriv1_array_to_fill=None, deriv2_array_to_fill=None)
Compute the outcome probabilityHessians for an entire list of circuits.
Similar to bulk_fill_probs(…), but fills a 3D array with the Hessians for each circuit outcome probability.
Parameters
 array_to_fill : numpy ndarray
an already-allocated numpy array of shape (len(layout), M1, M2) where M1 and M2 are the number of selected model parameters (by wrt_filter1 and wrt_filter2).
 layout : CircuitOutcomeProbabilityArrayLayout
A layout for array_to_fill, describing what circuit outcome each element corresponds to. Usually given by a prior call to
create_layout()
.
 pr_array_to_fill : numpy array, optional
when not None, an already-allocated length-len(layout) numpy array that is filled with probabilities, just as in
bulk_fill_probs()
.
 deriv1_array_to_fill : numpy array, optional
when not None, an already-allocated numpy array of shape (len(layout), M1) that is filled with probability derivatives, similar to
bulk_fill_dprobs()
(see array_to_fill for a definition of M1).
 deriv2_array_to_fill : numpy array, optional
when not None, an already-allocated numpy array of shape (len(layout), M2) that is filled with probability derivatives, similar to
bulk_fill_dprobs()
(see array_to_fill for a definition of M2).
Returns
None
 iter_hprobs_by_rectangle(layout, wrt_slices_list, return_dprobs_12=False)
Iterates over the 2nd derivatives of a layout’s circuit probabilities one rectangle at a time.
This routine can be useful when memory constraints make constructing the entire Hessian at once impractical, as it only computes a subset of the Hessian’s rows and columns (a “rectangle”) at a time. For example, the Hessian of a function of many circuit probabilities can often be computed rectangle-by-rectangle, without the need to ever store the entire Hessian at once.
Parameters
 layout : CircuitOutcomeProbabilityArrayLayout
A layout for generated arrays, describing what circuit outcome each element corresponds to. Usually given by a prior call to
create_layout()
.
 wrt_slices_list : list
A list of (rowSlice, colSlice) 2-tuples, each of which specifies a “rectangle” of the Hessian to compute. Iterating over the output of this function iterates over these computed rectangles, in the order given by wrt_slices_list. rowSlice and colSlice must be Python slice objects.
 return_dprobs_12 : boolean, optional
If true, the generator computes a 2-tuple: (hessian_col, d12_col), where d12_col is a column of the matrix d12 defined by: d12[iSpamLabel,iOpStr,p1,p2] = dP/d(p1)*dP/d(p2) where P is the probability generated by the sequence and spam label indexed by iOpStr and iSpamLabel. d12 has the same dimensions as the Hessian, and turns out to be useful when computing the Hessian of functions of the probabilities.
Returns
 rectangle_generator
A generator which, when iterated, yields the 3-tuple (rowSlice, colSlice, hprobs) or the 4-tuple (rowSlice, colSlice, hprobs, dprobs12) (the latter if return_dprobs_12 == True). rowSlice and colSlice are slices directly from wrt_slices_list. hprobs and dprobs12 are arrays of shape E x B x B’, where:
E is the length of layout elements
B is the number of parameter rows (the length of rowSlice)
B’ is the number of parameter columns (the length of colSlice)
If mx, dp1, and dp2 are the outputs of
bulk_fill_hprobs()
(i.e. args mx_to_fill, deriv1_mx_to_fill, and deriv2_mx_to_fill), then:
hprobs == mx[:,rowSlice,colSlice]
dprobs12 == dp1[:,rowSlice,None] * dp2[:,None,colSlice]
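The dprobs12 relation above can be checked with a small numpy sketch; the array shapes and the random Jacobian stand-in are arbitrary choices for illustration, not pyGSTi output.

```python
import numpy as np

# E circuit-outcome elements, Np model parameters (arbitrary sizes).
E, Np = 3, 5
rng = np.random.default_rng(0)
dp = rng.standard_normal((E, Np))  # stand-in for a full Jacobian array

# A "rectangle" of the Hessian-shaped d12 array: rows and columns
# are selected by Python slice objects, as in wrt_slices_list.
rowSlice, colSlice = slice(0, 2), slice(1, 4)
dp1 = dp  # deriv1 array stand-in
dp2 = dp  # deriv2 array stand-in

# dprobs12 == dp1[:, rowSlice, None] * dp2[:, None, colSlice],
# an outer product over the selected parameter rows/columns,
# with shape E x B x B' as the docstring describes.
dprobs12 = dp1[:, rowSlice, None] * dp2[:, None, colSlice]
```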
 pygsti.ROBUST_SUFFIX_LIST = "['.robust', '.Robust', '.robust+', '.Robust+']"
 pygsti.DEFAULT_BAD_FIT_THRESHOLD = '2.0'
 pygsti.run_model_test(model_filename_or_object, data_filename_or_set, processorspec_filename_or_object, prep_fiducial_list_or_filename, meas_fiducial_list_or_filename, germs_list_or_filename, max_lengths, gauge_opt_params=None, advanced_options=None, comm=None, mem_limit=None, output_pkl=None, verbosity=2, checkpoint=None, checkpoint_path=None, disable_checkpointing=False, simulator: pygsti.forwardsims.ForwardSimulator.Castable | None = None)
Compares a
Model
’s predictions to a DataSet using GST-like circuits.
This routine tests a Model against a DataSet using a specific set of structured, GST-like circuits (given by fiducials, max_lengths and germs). In particular, circuits are constructed by repeating germ strings an integer number of times such that the length of the repeated germ is less than or equal to the maximum length set in max_lengths. Each string thus constructed is sandwiched between all pairs of (preparation, measurement) fiducial sequences.
model_filename_or_object is used directly (without any optimization) as the model estimate at each maximum-length “iteration”. The model is given a trivial default_gauge_group so that it is not altered during any gauge optimization step.
A
ModelEstimateResults
object is returned, which encapsulates the model estimate and related parameters, and can be used with report-generation routines.
Parameters
 model_filename_or_object : Model or string
The model to test, specified either directly or by the filename of a model file (text format).
 data_filename_or_set : DataSet or string
The data set object to use for the analysis, specified either directly or by the filename of a dataset file (assumed to be a pickled DataSet if the extension is ‘pkl’, otherwise assumed to be in pyGSTi’s text format).
 processorspec_filename_or_object : ProcessorSpec or string
A specification of the processor this model test is to be run on, given either directly or by the filename of a processor-spec file (text format). The processor specification contains basic interface-level information about the processor being tested, e.g., its state space and available gates.
 prep_fiducial_list_or_filename : (list of Circuits) or string
The state preparation fiducial circuits, specified either directly or by the filename of a circuit list file (text format).
 meas_fiducial_list_or_filename : (list of Circuits) or string or None
The measurement fiducial circuits, specified either directly or by the filename of a circuit list file (text format). If
None
, then use the same strings as specified by prep_fiducial_list_or_filename.
 germs_list_or_filename : (list of Circuits) or string
The germ circuits, specified either directly or by the filename of a circuit list file (text format).
 max_lengths : list of ints
List of integers, one per LSGST iteration, which set truncation lengths for repeated germ strings. The list of circuits for the i-th LSGST iteration includes the repeated germs truncated to the L-values up to and including the i-th one.
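The repeated-germ construction that max_lengths controls reduces to simple arithmetic under the “whole germ powers” truncation scheme mentioned in the advanced options: a germ is repeated as many whole times as fits within the current L. The helpers below are an illustrative sketch, with germs represented as plain strings rather than pyGSTi Circuit objects.

```python
def germ_power(germ, max_length):
    """Number of whole germ repetitions whose total length stays <= max_length
    (the "whole germ powers" truncation scheme)."""
    return max_length // len(germ)

def repeated_germ(germ, max_length):
    # The repeated-germ core that gets sandwiched between each
    # (preparation, measurement) fiducial pair.
    return germ * germ_power(germ, max_length)

# A length-2 germ truncated at L = 8 is repeated 4 times.
core = repeated_germ("xy", 8)
```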
 gauge_opt_params : dict, optional
A dictionary of arguments to
gaugeopt_to_target()
, specifying how the final gauge optimization should be performed. The keys and values of this dictionary may correspond to any of the arguments of
gaugeopt_to_target()
except for the first model argument, which is specified internally. The target_model argument can be set, but is specified internally when it isn’t. If None, then the dictionary {‘item_weights’: {‘gates’:1.0, ‘spam’:0.001}} is used. If False, then no gauge optimization is performed.
 advanced_options : dict, optional
Specifies advanced options, most of which deal with numerical details of the objective function or expert-level functionality.
 comm : mpi4py.MPI.Comm, optional
When not
None
, an MPI communicator for distributing the computation across multiple processors.
 mem_limit : int or None, optional
A rough memory limit in bytes which restricts the amount of memory used (per core when run on multi-CPUs).
 output_pkl : str or file, optional
If not None, a file(name) to pickle.dump the returned Results object to (only the rank 0 process performs the dump when comm is not None).
 verbosity : int, optional
The ‘verbosity’ option is an integer specifying the level of detail printed to stdout during the calculation.
 checkpoint : ModelTestCheckpoint, optional (default None)
If specified, use a previously generated checkpoint object to restart or warm start this run part way through.
 checkpoint_path : str, optional (default None)
A string for the path/name to use for writing intermediate checkpoint files to disk. Format is {path}/{name}, without inclusion of the json file extension. This {path}/{name} combination will have the latest completed iteration number appended to it before it is written to disk. If None, the value of {name} will be set to the name of the protocol being run.
 disable_checkpointing : bool, optional (default False)
When set to True, checkpoint objects will not be constructed and written to disk during the course of this protocol. It is strongly recommended that this be kept set to False unless there is good reason to disable the checkpoints.
 simulator : ForwardSimulator.Castable or None
Ignored if None. If not None, then we call
fwdsim = ForwardSimulator.cast(simulator),
and we set the .sim attribute of every Model we encounter to fwdsim.
Returns
Results
 pygsti.run_linear_gst(data_filename_or_set, target_model_filename_or_object, prep_fiducial_list_or_filename, meas_fiducial_list_or_filename, gauge_opt_params=None, advanced_options=None, comm=None, mem_limit=None, output_pkl=None, verbosity=2)
Perform Linear Gate Set Tomography (LGST).
This function differs from the lower-level
run_lgst()
function in that it may perform a post-LGST gauge optimization, and this routine returns a
Results
object containing the LGST estimate.
Overall, this is a high-level driver routine which can be used similarly to
run_long_sequence_gst()
, whereas run_lgst is a low-level routine used when building your own algorithms.
Parameters
 data_filename_or_set : DataSet or string
The data set object to use for the analysis, specified either directly or by the filename of a dataset file (assumed to be a pickled DataSet if the extension is ‘pkl’, otherwise assumed to be in pyGSTi’s text format).
 target_model_filename_or_object : Model or string
The target model specifying the gates and SPAM elements that LGST is to be run on, given either directly or by the filename of a model file (text format).
 prep_fiducial_list_or_filename : (list of Circuits) or string
The state preparation fiducial circuits, specified either directly or by the filename of a circuit list file (text format).
 meas_fiducial_list_or_filename : (list of Circuits) or string or None
The measurement fiducial circuits, specified either directly or by the filename of a circuit list file (text format). If
None
, then use the same strings as specified by prep_fiducial_list_or_filename.
 gauge_opt_params : dict, optional
A dictionary of arguments to
gaugeopt_to_target()
, specifying how the final gauge optimization should be performed. The keys and values of this dictionary may correspond to any of the arguments of
gaugeopt_to_target()
except for the first model argument, which is specified internally. The target_model argument can be set, but is specified internally when it isn’t. If None, then the dictionary {‘item_weights’: {‘gates’:1.0, ‘spam’:0.001}} is used. If False, then no gauge optimization is performed.
 advanced_options : dict, optional
Specifies advanced options, most of which deal with numerical details of the objective function or expert-level functionality. See
run_long_sequence_gst()
.
 comm : mpi4py.MPI.Comm, optional
When not
None
, an MPI communicator for distributing the computation across multiple processors. In this LGST case, this is just the gauge optimization.
 mem_limit : int or None, optional
A rough memory limit in bytes which restricts the amount of memory used (per core when run on multi-CPUs).
 output_pkl : str or file, optional
If not None, a file(name) to pickle.dump the returned Results object to (only the rank 0 process performs the dump when comm is not None).
 verbosity : int, optional
The ‘verbosity’ option is an integer specifying the level of detail printed to stdout during the calculation.
Returns
Results
 pygsti.run_long_sequence_gst(data_filename_or_set, target_model_filename_or_object, prep_fiducial_list_or_filename, meas_fiducial_list_or_filename, germs_list_or_filename, max_lengths, gauge_opt_params=None, advanced_options=None, comm=None, mem_limit=None, output_pkl=None, verbosity=2, checkpoint=None, checkpoint_path=None, disable_checkpointing=False, simulator: pygsti.forwardsims.ForwardSimulator.Castable | None = None)
Perform longsequence GST (LSGST).
This analysis fits a model (target_model_filename_or_object) to data (data_filename_or_set) using the outcomes from periodic GST circuits constructed by repeating germ strings an integer number of times such that the length of the repeated germ is less than or equal to the maximum length set in max_lengths. When LGST is applicable (i.e. for explicit models with full or TP parameterizations), the LGST estimate of the gates is computed, gauge optimized, and used as a starting seed for the remaining optimizations.
LSGST iterates
len(max_lengths)
times, optimizing the chi2 using successively larger sets of circuits. On the i-th iteration, the repeated germ sequences limited by
max_lengths[i]
are included in the growing set of circuits used by LSGST. The final iteration maximizes the log-likelihood.
Once computed, the model estimates are optionally gauge optimized as directed by gauge_opt_params. A
ModelEstimateResults
object is returned, which encapsulates the inputs and outputs of this GST analysis, and can generate final end-user output such as reports and presentations.
Parameters
 data_filename_or_set : DataSet or string
The data set object to use for the analysis, specified either directly or by the filename of a dataset file (assumed to be a pickled DataSet if the extension is ‘pkl’, otherwise assumed to be in pyGSTi’s text format).
 target_model_filename_or_object : Model or string
The target model, specified either directly or by the filename of a model file (text format).
 prep_fiducial_list_or_filename : (list of Circuits) or string
The state preparation fiducial circuits, specified either directly or by the filename of a circuit list file (text format).
 meas_fiducial_list_or_filename : (list of Circuits) or string or None
The measurement fiducial circuits, specified either directly or by the filename of a circuit list file (text format). If
None
, then use the same strings as specified by prep_fiducial_list_or_filename.
 germs_list_or_filename : (list of Circuits) or string
The germ circuits, specified either directly or by the filename of a circuit list file (text format).
 max_lengths : list of ints
List of integers, one per LSGST iteration, which set truncation lengths for repeated germ strings. The list of circuits for the i-th LSGST iteration includes the repeated germs truncated to the L-values up to and including the i-th one.
 gauge_opt_params : dict, optional
A dictionary of arguments to
gaugeopt_to_target()
, specifying how the final gauge optimization should be performed. The keys and values of this dictionary may correspond to any of the arguments of
gaugeopt_to_target()
except for the first model argument, which is specified internally. The target_model argument can be set, but is specified internally when it isn’t. If None, then the dictionary {‘item_weights’: {‘gates’:1.0, ‘spam’:0.001}} is used. If False, then no gauge optimization is performed.
 advanced_options : dict, optional
Specifies advanced options, most of which deal with numerical details of the objective function or expert-level functionality. The allowed keys and values include:
objective = {‘chi2’, ‘logl’}
op_labels = list of strings
circuit_weights = dict or None
starting_point = “LGST-if-possible” (default), “LGST”, or “target”
depolarize_start = float (default == 0)
randomize_start = float (default == 0)
contract_start_to_cptp = True / False (default)
cptpPenaltyFactor = float (default = 0)
tolerance = float or dict w/ ‘relx’, ‘relf’, ‘f’, ‘jac’, ‘maxdx’ keys
max_iterations = int
finitediff_iterations = int
min_prob_clip = float
min_prob_clip_for_weighting = float (default == 1e-4)
prob_clip_interval = tuple (default == (-1e6, 1e6))
radius = float (default == 1e-4)
use_freq_weighted_chi2 = True / False (default)
XX nested_circuit_lists = True (default) / False
XX include_lgst = True / False (default is True)
distribute_method = “default”, “circuits” or “deriv”
profile = int (default == 1)
check = True / False (default)
XX op_label_aliases = dict (default = None)
always_perform_mle = bool (default = False)
only_perform_mle = bool (default = False)
XX truncScheme = “whole germ powers” (default) or “truncated germ powers” or “length as exponent”
appendTo = Results (default = None)
estimateLabel = str (default = “default”)
XX missingDataAction = {‘drop’,’raise’} (default = ‘drop’)
XX string_manipulation_rules = list of (find, replace) tuples
germ_length_limits = dict of form {germ: maxlength}
record_output = bool (default = True)
timeDependent = bool (default = False)
 comm : mpi4py.MPI.Comm, optional
When not
None
, an MPI communicator for distributing the computation across multiple processors.
 mem_limit : int or None, optional
A rough memory limit in bytes which restricts the amount of memory used (per core when run on multi-CPUs).
 output_pkl : str or file, optional
If not None, a file(name) to pickle.dump the returned Results object to (only the rank 0 process performs the dump when comm is not None).
 verbosity : int, optional
The ‘verbosity’ option is an integer specifying the level of detail printed to stdout during the calculation:
0 – prints nothing
1 – shows progress bar for entire iterative GST
2 – show summary details about each individual iteration
3 – also shows outer iterations of LM algorithm
4 – also shows inner iterations of LM algorithm
5 – also shows detailed info from within jacobian and objective function calls
 checkpoint : GateSetTomographyCheckpoint, optional (default None)
If specified, use a previously generated checkpoint object to restart or warm start this run part way through.
 checkpoint_path : str, optional (default None)
A string for the path/name to use for writing intermediate checkpoint files to disk. Format is {path}/{name}, without inclusion of the json file extension. This {path}/{name} combination will have the latest completed iteration number appended to it before it is written to disk. If None, the value of {name} will be set to the name of the protocol being run.
 disable_checkpointing : bool, optional (default False)
When set to True, checkpoint objects will not be constructed and written to disk during the course of this protocol. It is strongly recommended that this be kept set to False unless there is good reason to disable the checkpoints.
 simulator : ForwardSimulator.Castable or None
Ignored if None. If not None, then we call
fwdsim = ForwardSimulator.cast(simulator),
and we set the .sim attribute of every Model we encounter to fwdsim.
Returns
Results
 pygsti.run_long_sequence_gst_base(data_filename_or_set, target_model_filename_or_object, lsgst_lists, gauge_opt_params=None, advanced_options=None, comm=None, mem_limit=None, output_pkl=None, verbosity=2, checkpoint=None, checkpoint_path=None, disable_checkpointing=False, simulator: pygsti.forwardsims.ForwardSimulator.Castable | None = None)
A more fundamental interface for performing endtoend GST.
Similar to
run_long_sequence_gst()
except this function takes lsgst_lists, a list of either raw circuit lists or of
PlaquetteGridCircuitStructure
objects, to define which circuits are used on each GST iteration.
Parameters
 data_filename_or_set : DataSet or string
The data set object to use for the analysis, specified either directly or by the filename of a dataset file (assumed to be a pickled DataSet if the extension is ‘pkl’, otherwise assumed to be in pyGSTi’s text format).
 target_model_filename_or_object : Model or string
The target model, specified either directly or by the filename of a model file (text format).
 lsgst_lists : list of lists or PlaquetteGridCircuitStructure(s)
An explicit list of either the raw circuit lists to be used in the analysis or of
PlaquetteGridCircuitStructure
objects, which additionally contain the structure of a set of circuits. A single PlaquetteGridCircuitStructure object can also be given, which is equivalent to passing a list of successive L-value truncations of this object (e.g. if the object has Ls = [1,2,4] then this is like passing a list of three PlaquetteGridCircuitStructure objects w/ truncations [1], [1,2], and [1,2,4]).
 gauge_opt_params : dict, optional
A dictionary of arguments to
gaugeopt_to_target()
, specifying how the final gauge optimization should be performed. The keys and values of this dictionary may correspond to any of the arguments of
gaugeopt_to_target()
except for the first model argument, which is specified internally. The target_model argument can be set, but is specified internally when it isn’t. If None, then the dictionary {‘item_weights’: {‘gates’:1.0, ‘spam’:0.001}} is used. If False, then no gauge optimization is performed.
 advanced_options : dict, optional
Specifies advanced options, most of which deal with numerical details of the objective function or expert-level functionality. See
run_long_sequence_gst()
for a list of the allowed keys, with the exception of “nested_circuit_lists”, “op_label_aliases”, “include_lgst”, and “truncScheme”.
 comm : mpi4py.MPI.Comm, optional
When not
None
, an MPI communicator for distributing the computation across multiple processors.
 mem_limit : int or None, optional
A rough memory limit in bytes which restricts the amount of memory used (per core when run on multi-CPUs).
 output_pkl : str or file, optional
If not None, a file(name) to pickle.dump the returned Results object to (only the rank 0 process performs the dump when comm is not None).
 verbosity : int, optional
The ‘verbosity’ option is an integer specifying the level of detail printed to stdout during the calculation:
0 – prints nothing
1 – shows progress bar for entire iterative GST
2 – show summary details about each individual iteration
3 – also shows outer iterations of LM algorithm
4 – also shows inner iterations of LM algorithm
5 – also shows detailed info from within jacobian and objective function calls
 checkpoint : GateSetTomographyCheckpoint, optional (default None)
If specified, use a previously generated checkpoint object to restart or warm start this run part way through.
 checkpoint_path : str, optional (default None)
A string for the path/name to use for writing intermediate checkpoint files to disk. Format is {path}/{name}, without inclusion of the json file extension. This {path}/{name} combination will have the latest completed iteration number appended to it before it is written to disk. If None, the value of {name} will be set to the name of the protocol being run.
 disable_checkpointing : bool, optional (default False)
When set to True, checkpoint objects will not be constructed and written to disk during the course of this protocol. It is strongly recommended that this be kept set to False unless there is good reason to disable the checkpoints.
 simulator : ForwardSimulator.Castable or None
Ignored if None. If not None, then we call
fwdsim = ForwardSimulator.cast(simulator),
and we set the .sim attribute of every Model we encounter to fwdsim.
Returns
Results
 pygsti.run_stdpractice_gst(data_filename_or_set, target_model_filename_or_object, prep_fiducial_list_or_filename, meas_fiducial_list_or_filename, germs_list_or_filename, max_lengths, modes=('full TP', 'CPTPLND', 'Target'), gaugeopt_suite='stdgaugeopt', gaugeopt_target=None, models_to_test=None, comm=None, mem_limit=None, advanced_options=None, output_pkl=None, verbosity=2, checkpoint=None, checkpoint_path=None, disable_checkpointing=False, simulator: pygsti.forwardsims.ForwardSimulator.Castable | None = None)
Perform end-to-end GST analysis using standard practices.
This routine is an even higher-level driver than
run_long_sequence_gst()
. It performs bottled, typically-useful runs of long-sequence GST on a dataset. This essentially boils down to running run_long_sequence_gst()
one or more times using different model parameterizations, and performing commonly-useful gauge optimizations, based only on the high-level modes argument.
Parameters
 data_filename_or_setDataSet or string
The data set object to use for the analysis, specified either directly or by the filename of a dataset file (assumed to be a pickled DataSet if the extension is ‘pkl’, otherwise assumed to be in pyGSTi’s text format).
 target_model_filename_or_objectModel or string
A specification of the target model that GST is to be run on, given either directly or by the filename of a model (text format).
 prep_fiducial_list_or_filename(list of Circuits) or string
The state preparation fiducial circuits, specified either directly or by the filename of a circuit list file (text format).
 meas_fiducial_list_or_filename(list of Circuits) or string or None
The measurement fiducial circuits, specified either directly or by the filename of a circuit list file (text format). If
None
, then use the same circuits as specified by prep_fiducial_list_or_filename.
 germs_list_or_filename(list of Circuits) or string
The germ circuits, specified either directly or by the filename of a circuit list file (text format).
 max_lengthslist of ints
List of integers, one per LSGST iteration, which set truncation lengths for repeated germ strings. The list of circuits for the i-th LSGST iteration includes the repeated germs truncated to the L-values up to and including the i-th one.
 modesiterable of strs, optional (default (‘full TP’, ’CPTPLND’, ’Target’))
An iterable of strings corresponding to modes which dictate what types of analyses are performed. Currently, these correspond to different types of parameterizations/constraints to apply to the estimated model. The default value is usually fine. Allowed values are:
“full” : full (completely unconstrained)
“TP” : TP-constrained
“CPTP” : Lindbladian CPTP-constrained
“H+S” : Only Hamiltonian + Stochastic errors allowed (CPTP)
“S” : Only Stochastic errors allowed (CPTP)
“Target” : use the target (ideal) gates as the estimate
<model> : any key in the models_to_test argument
 gaugeopt_suitestr or list or dict, optional
Specifies which gauge optimizations to perform on each estimate. A string or list of strings (see below) specifies built-in sets of gauge optimizations; otherwise gaugeopt_suite should be a dictionary of gauge-optimization parameter dictionaries, as specified by the gauge_opt_params argument of
run_long_sequence_gst()
. The key names of gaugeopt_suite then label the gauge optimizations within the resulting Estimate objects. The built-in suites are:
“single” : performs only a single “best guess” gauge optimization.
“varySpam” : varies spam weight and toggles SPAM penalty (0 or 1).
“varySpamWt” : varies spam weight but no SPAM penalty.
“varyValidSpamWt” : varies spam weight with SPAM penalty == 1.
“toggleValidSpam” : toggles SPAM penalty (0 or 1); fixed SPAM weight.
“unreliable2Q” : adds a branch to a spam suite that weights 2Q gates less
“none” : no gauge optimizations are performed.
 gaugeopt_targetModel, optional
If not None, a model to be used as the “target” for gauge optimization (only). This argument is useful when you want to gauge optimize toward something other than the ideal target gates given by target_model_filename_or_object, which are used as the default when gaugeopt_target is None.
 models_to_testdict, optional
A dictionary of Model objects representing (gateset) models to test against the data. These Models are essentially hypotheses for which (if any) model generated the data. The keys of this dictionary can (and must, to actually test the models) be used within the comma-separated list given by the modes argument.
 commmpi4py.MPI.Comm, optional
When not
None
, an MPI communicator for distributing the computation across multiple processors.
 mem_limitint or None, optional
A rough memory limit in bytes which restricts the amount of memory used (per core when run on multiple CPUs).
 advanced_optionsdict, optional
Specifies advanced options, most of which deal with numerical details of the objective function or expert-level functionality. See
run_long_sequence_gst()
for a list of the allowed keys for each such dictionary.
 output_pklstr or file, optional
If not None, a file(name) to pickle.dump the returned Results object to (only the rank 0 process performs the dump when comm is not None).
 verbosityint, optional
The ‘verbosity’ option is an integer specifying the level of detail printed to stdout during the calculation.
 checkpointStandardGSTCheckpoint, optional (default None)
If specified use a previously generated checkpoint object to restart or warm start this run part way through.
 checkpoint_pathstr, optional (default None)
A string for the path/name to use for writing intermediate checkpoint files to disk. Format is {path}/{name}, without inclusion of the json file extension. This {path}/{name} combination will have the latest completed iteration number appended to it before writing it to disk. If None, the value of {name} will be set to the name of the protocol being run.
 disable_checkpointingbool, optional (default False)
When set to True, checkpoint objects will not be constructed and written to disk during the course of this protocol. It is strongly recommended that this be kept set to False unless there is a good reason to disable checkpointing.
 simulatorForwardSimulator.Castable or None
 Ignored if None. If not None, then we call
fwdsim = ForwardSimulator.cast(simulator),
and we set the .sim attribute of every Model we encounter to fwdsim.
Returns
Results
 pygsti.parallel_apply(f, l, comm)
Apply a function f to every element of a list l in parallel, using MPI.
Parameters
 ffunction
function of an item in the list l
 llist
list of items as arguments to f
 commMPI Comm
MPI communicator object for organizing parallel programs
Returns
 resultslist
list of items after f has been applied
 pygsti.mpi4py_comm()
Get a comm object
Returns
 MPI.Comm
Comm object to be passed down to parallel pygsti routines
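The scatter/apply/gather pattern behind parallel_apply can be sketched in plain Python. The serial_apply function below is a hypothetical illustration, not pyGSTi's implementation: it falls back to an ordinary map when no communicator is supplied, and shows the general shape an mpi4py-based branch might take.

```python
def serial_apply(f, l, comm=None):
    """Hypothetical sketch of parallel_apply's semantics (not pyGSTi's
    implementation).  With comm=None every element is mapped locally;
    with an mpi4py communicator the list would be scattered across
    ranks, mapped, and gathered back in order."""
    if comm is None or getattr(comm, "size", 1) == 1:
        return [f(x) for x in l]          # serial fallback
    n = comm.size                          # hypothetical MPI branch
    chunks = [l[i * len(l) // n:(i + 1) * len(l) // n] for i in range(n)]
    local = [f(x) for x in comm.scatter(chunks, root=0)]
    gathered = comm.gather(local, root=0)  # list of per-rank result lists
    return [x for chunk in gathered for x in chunk] if gathered else None

print(serial_apply(lambda x: x * x, [1, 2, 3, 4]))  # [1, 4, 9, 16]
```

Contiguous chunking (rather than round-robin) is used so the gathered results come back in the original list order.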
 pygsti.starmap_with_kwargs(fn, num_runs, num_processors, args_list, kwargs_list)
 class pygsti.NamedDict(keyname=None, keytype=None, valname=None, valtype=None, items=())
Bases:
dict
,pygsti.baseobjs.nicelyserializable.NicelySerializable
A dictionary that also holds category names and types.
This dict-derived class holds a category name applicable to its keys, and key and value type names indicating the types of its keys and values.
The main purpose of this class is to utilize its
to_dataframe()
method.
Parameters
 keynamestr, optional
A category name for the keys of this dict. For example, if the dict contained the keys “dog” and “cat”, this might be “animals”. This becomes a column header if this dict is converted to a data frame.
 keytype{“float”, “int”, “category”, None}, optional
The keytype, in correspondence with different pandas series types.
 valnamestr, optional
A category name for the values of this dict. This becomes a column header if this dict is converted to a data frame.
 valtype{“float”, “int”, “category”, None}, optional
The valuetype, in correspondence with different pandas series types.
 itemslist or dict, optional
Initial items, used in serialization.
Initialize self. See help(type(self)) for accurate signature.
 classmethod create_nested(key_val_type_list, inner)
Creates a nested NamedDict.
Parameters
 key_val_type_listlist
A list of (key, value, type) tuples, one per nesting layer.
 innervarious
The value that will be set to the innermost nested dictionary’s value, supplying any additional layers of nesting (if inner is a NamedDict) or the value contained in all of the nested layers.
 class pygsti.TypedDict(types=None, items=())
Bases:
dict
A dictionary that holds per-key type information.
This type of dict is used for the “leaves” in a tree of nested
NamedDict
objects, specifying a collection of data of different types pertaining to some set of category labels (the index-path of the named dictionaries). When converted to a data frame, each key specifies a different column and values contribute the values of a single data frame row. Columns will be series of the held data types.
Parameters
 typesdict, optional
Keys are the keys that can appear in this dictionary, and values are valid data frame type strings, e.g. “int”, “float”, or “category”, that specify the type of each value.
 itemsdict or list
Initial data, used for serialization.
Initialize self. See help(type(self)) for accurate signature.
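The idea behind TypedDict (a dict that also records a data-frame type string per key, so each key can become a typed column) can be sketched in a few lines. The class and method names below are hypothetical illustrations, not pyGSTi's implementation.

```python
class SimpleTypedDict(dict):
    """Toy sketch of a dict carrying per-key type strings (hypothetical,
    not pyGSTi's TypedDict).  Each key maps to a data-frame-style type
    name such as "int", "float", or "category"."""

    def __init__(self, types=None, items=()):
        super().__init__(items)
        self.key_types = dict(types) if types else {}

    def as_row(self):
        # One data-frame row: (column name, value, declared type) triples.
        return [(k, v, self.key_types.get(k)) for k, v in self.items()]

d = SimpleTypedDict(types={"depth": "int", "fidelity": "float"},
                    items=[("depth", 8), ("fidelity", 0.97)])
print(d.as_row())  # [('depth', 8, 'int'), ('fidelity', 0.97, 'float')]
```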
 pygsti.basis_matrices(name_or_basis, dim, sparse=False)
Get the elements of the specified basis type which span the density-matrix space given by dim.
Parameters
 name_or_basis{‘std’, ‘gm’, ‘pp’, ‘qt’} or Basis
The basis type. Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt). If a Basis object, then the basis matrices are contained therein, and its dimension is checked to match dim.
 dimint
The dimension of the density-matrix space.
 sparsebool, optional
Whether any built matrices should be SciPy CSR sparse matrices or dense numpy arrays (the default).
Returns
 list
A list of N numpy arrays each of shape (dmDim, dmDim), where dmDim is the matrix dimension of the overall “embedding” density matrix (the sum of dim_or_block_dims) and N is the dimension of the density-matrix space, equal to sum( block_dim_i^2 ).
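For intuition about what such a basis looks like, the single-qubit Pauli-product case can be built directly with numpy. This sketch assumes the convention (used here for illustration) that the 'pp' elements are the Pauli matrices scaled by 1/sqrt(2), making them orthonormal under the trace inner product; it does not call pyGSTi.

```python
import numpy as np

# Four single-qubit Pauli-product ('pp') elements, normalized by
# 1/sqrt(2) so that Tr(B_i^dag B_j) = delta_ij (assumed convention).
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
pp = [B / np.sqrt(2) for B in (I, X, Y, Z)]

# Gram matrix of Hilbert-Schmidt inner products: the identity for an
# orthonormal basis.
gram = np.array([[np.trace(Bi.conj().T @ Bj) for Bj in pp] for Bi in pp])
print(np.allclose(gram, np.eye(4)))  # True
```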
 pygsti.basis_longname(basis)
Get the “long name” for a particular basis, which is typically used in reports, etc.
Parameters
 basisBasis or str
The basis or standard basis name.
Returns
string
 pygsti.basis_element_labels(basis, dim)
Get a list of short labels corresponding to the elements of the described basis.
These labels are typically used to label the rows/columns of a boxplot of a matrix in the basis.
Parameters
 basis{‘std’, ‘gm’, ‘pp’, ‘qt’}
Which basis the model is represented in. Allowed options are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp) and Qutrit (qt). If the basis is not known, then an empty list is returned.
 dimint or list
Dimension of basis matrices. If a list of integers, then gives the dimensions of the terms in a directsum decomposition of the density matrix space acted on by the basis.
Returns
 list of strings
A list of length dim, whose elements label the basis elements.
 pygsti.is_sparse_basis(name_or_basis)
Whether a basis contains sparse matrices.
Parameters
 name_or_basisBasis or str
The basis or standard basis name.
Returns
bool
 pygsti.change_basis(mx, from_basis, to_basis)
Convert an operation matrix from one basis of a density matrix space to another.
Parameters
 mxnumpy array
The operation matrix (a 2D square array) in the from_basis basis.
 from_basis: {‘std’, ‘gm’, ‘pp’, ‘qt’} or Basis object
The source basis. Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt) (or a custom basis object).
 to_basis{‘std’, ‘gm’, ‘pp’, ‘qt’} or Basis object
The destination basis. Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt) (or a custom basis object).
Returns
 numpy array
The given operation matrix converted to the to_basis basis. Array size is the same as mx.
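The linear algebra behind such a basis change can be sketched with numpy for the std-to-pp case: with an orthonormal target basis, the change-of-basis matrix is unitary and the superoperator is simply conjugated by it. This is a schematic illustration (assuming row-major vectorization and the normalized Pauli basis), not pyGSTi's code.

```python
import numpy as np

# Orthonormal single-qubit Pauli basis (each Pauli / sqrt(2)).
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
pp = [B / np.sqrt(2) for B in (I, X, Y, Z)]

# Unitary change-of-basis matrix: row i is conj(vec(B_i)), so that
# c = U @ vec(rho) gives rho's coefficients in the pp basis.
U = np.array([B.conj().ravel() for B in pp])

# Superoperator of the X gate in the 'std' basis (row-major vec:
# vec(X rho X^dag) = (X kron conj(X)) vec(rho)).
S_std = np.kron(X, X.conj())

# The basis change: conjugate the superoperator by U.
S_pp = U @ S_std @ U.conj().T
print(np.real_if_close(np.round(S_pp, 12)))
# Expect diag(1, 1, -1, -1): conjugation by X fixes I and X, flips Y and Z.
```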
 pygsti.create_basis_pair(mx, from_basis, to_basis)
Construct a pair of Basis objects for transforming mx between two named bases.
Construct a pair of Basis objects with types from_basis and to_basis, and dimension appropriate for transforming mx (if they’re not already given by from_basis or to_basis being a Basis rather than a str).
Parameters
 mxnumpy.ndarray
A matrix, assumed to be square and have a dimension that is a perfect square.
 from_basis: {‘std’, ‘gm’, ‘pp’, ‘qt’} or Basis object
The source basis (named because it’s usually the source basis for a basis change). Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt) (or a custom basis object). If a custom basis object is provided, its dimension should be equal to sqrt(mx.shape[0]) == sqrt(mx.shape[1]).
 to_basis: {‘std’, ‘gm’, ‘pp’, ‘qt’} or Basis object
The destination basis (named because it’s usually the destination basis for a basis change). Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt) (or a custom basis object). If a custom basis object is provided, its dimension should be equal to sqrt(mx.shape[0]) == sqrt(mx.shape[1]).
Returns
from_basis, to_basis : Basis
 pygsti.create_basis_for_matrix(mx, basis)
Construct a Basis object with type given by basis and dimension appropriate for transforming mx.
The dimension is taken from mx (if it’s not given by basis) as sqrt(mx.shape[0]).
Parameters
 mxnumpy.ndarray
A matrix, assumed to be square and have a dimension that is a perfect square.
 basis{‘std’, ‘gm’, ‘pp’, ‘qt’} or Basis object
A basis name or Basis object. Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt) (or a custom basis object). If a custom basis object is provided, its dimension must equal sqrt(mx.shape[0]), as this will be checked.
Returns
Basis
 pygsti.resize_std_mx(mx, resize, std_basis_1, std_basis_2)
Change the basis of mx to a potentially larger or smaller ‘std’-type basis given by std_basis_2.
(mx is assumed to be in the ‘std’-type basis given by std_basis_1.)
This is possible when the two ‘std’-type bases have the same “embedding dimension”, equal to the sum of their block dimensions. If, for example, std_basis_1 has block dimensions (kite structure) of (4,2,1) then mx, expressed as a sum of 4^2 + 2^2 + 1^2 = 21 basis elements, can be “embedded” within a larger ‘std’ basis having a single block with dimension 7 (7^2 = 49 elements).
When std_basis_2 is smaller than std_basis_1 the reverse happens and mx is irreversibly truncated, or “contracted”, to a basis having a particular kite structure.
Parameters
 mxnumpy array
A square matrix in the std_basis_1 basis.
 resize{‘expand’,’contract’}
Whether mx can be expanded or contracted.
 std_basis_1Basis
The ‘std’-type basis that mx is currently in.
 std_basis_2Basis
The ‘std’-type basis that mx should be converted to.
Returns
numpy.ndarray
 pygsti.flexible_change_basis(mx, start_basis, end_basis)
Change mx from start_basis to end_basis allowing embedding expansion and contraction if needed.
(see
resize_std_mx()
for more details).
Parameters
 mxnumpy array
The operation matrix (a 2D square array) in the start_basis basis.
 start_basisBasis
The source basis.
 end_basisBasis
The destination basis.
Returns
numpy.ndarray
 pygsti.resize_mx(mx, dim_or_block_dims=None, resize=None)
Wrapper for
resize_std_mx()
, that manipulates mx to be in another basis. This function first constructs two ‘std’-type bases using dim_or_block_dims and sum(dim_or_block_dims). The matrix mx is converted from the former to the latter when resize == “expand”, and from the latter to the former when resize == “contract”.
Parameters
 mxnumpy array
Matrix of size N x N, where N is the dimension of the density matrix space, i.e. sum( dimOrBlockDims_i^2 )
 dim_or_block_dimsint or list of ints
Structure of the density-matrix space. Gives the matrix dimensions of each block.
 resize{‘expand’,’contract’}
Whether mx should be expanded or contracted.
Returns
numpy.ndarray
 pygsti.state_to_stdmx(state_vec)
Convert a state vector into a density matrix.
Parameters
 state_veclist or tuple
State vector in the standard (sigma-z) basis.
Returns
 numpy.ndarray
A density matrix of shape (d,d), corresponding to the pure state given by the length-d array state_vec.
 pygsti.state_to_pauli_density_vec(state_vec)
Convert a single qubit state vector into a Liouville vector in the Pauli basis.
Parameters
 state_veclist or tuple
State vector in the sigma-z basis, len(state_vec) == 2
Returns
 numpy array
The 2x2 density matrix of the pure state given by state_vec, given as a 4x1 column vector in the Pauli basis.
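Both conversions described above (state vector to density matrix, then density matrix to a Pauli-basis Liouville vector) reduce to short numpy expressions. The sketch below assumes the normalized-Pauli convention (components Tr(B_i† ρ) with B_i = Pauli_i / sqrt(2)); the function names are illustrative, not pyGSTi's.

```python
import numpy as np

def pure_state_to_stdmx(state_vec):
    """Density matrix of a pure state: rho = |psi><psi|."""
    psi = np.asarray(state_vec, dtype=complex).reshape(-1, 1)
    return psi @ psi.conj().T

def stdmx_to_pauli_vec(rho):
    """Liouville vector of rho in the normalized Pauli basis
    (components Tr(B_i^dag rho) with B_i = Pauli_i / sqrt(2)) --
    the convention assumed here for the 'pp' basis."""
    paulis = [np.eye(2), np.array([[0, 1], [1, 0]]),
              np.array([[0, -1j], [1j, 0]]), np.array([[1, 0], [0, -1]])]
    return np.array([np.trace(P.conj().T @ rho) / np.sqrt(2) for P in paulis])

rho0 = pure_state_to_stdmx([1, 0])        # |0><0|
print(np.real_if_close(stdmx_to_pauli_vec(rho0)))
# Components (I, X, Y, Z): (1/sqrt(2), 0, 0, 1/sqrt(2))
```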
 pygsti.vec_to_stdmx(v, basis, keep_complex=False)
Convert a vector in this basis to a matrix in the standard basis.
Parameters
 vnumpy array
The vector, of length 4 (single qubit) or 16 (two qubits).
 basis{‘std’, ‘gm’, ‘pp’, ‘qt’} or Basis
The basis type. Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt). If a Basis object, then the basis matrices are contained therein, and its dimension is checked to match v.
 keep_complexbool, optional
If True, leave the final (output) array elements as complex numbers when v is complex. Usually, the final elements are real (even though v is complex) and so when keep_complex=False the elements are forced to be real and the returned array is float (not complex) valued.
Returns
 numpy array
The matrix, 2x2 or 4x4 depending on the number of qubits.
 pygsti.gmvec_to_stdmx
 pygsti.ppvec_to_stdmx
 pygsti.qtvec_to_stdmx
 pygsti.stdvec_to_stdmx
 pygsti.stdmx_to_vec(m, basis)
Convert a matrix in the standard basis to a vector in the Pauli basis.
Parameters
 mnumpy array
The matrix, shape 2x2 (1Q) or 4x4 (2Q)
 basis{‘std’, ‘gm’, ‘pp’, ‘qt’} or Basis
The basis type. Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt). If a Basis object, then the basis matrices are contained therein, and its dimension is checked to match m.
Returns
 numpy array
The vector, length 4 or 16 respectively.
 pygsti.stdmx_to_ppvec
 pygsti.stdmx_to_gmvec
 pygsti.stdmx_to_stdvec
 pygsti.chi2(model, dataset, circuits=None, min_prob_clip_for_weighting=0.0001, prob_clip_interval=(-10000, 10000), op_label_aliases=None, mdc_store=None, comm=None, mem_limit=None)
Computes the total (aggregate) chi^2 for a set of circuits.
The chi^2 test statistic obtained by summing up the contributions of a given set of circuits or all the circuits available in a dataset. For the gradient or Hessian, see the
chi2_jacobian()
and
chi2_hessian()
functions.
Parameters
 modelModel
The model used to specify the probabilities and SPAM labels
 datasetDataSet
The data used to specify frequencies and counts
 circuitslist of Circuits or tuples, optional
List of circuits whose terms will be included in chi^2 sum. Default value (None) means “all strings in dataset”.
 min_prob_clip_for_weightingfloat, optional
Defines the clipping interval for the statistical weight.
 prob_clip_intervaltuple, optional
A (min, max) tuple that specifies the minimum (possibly negative) and maximum values allowed for probabilities generated by the model. If the model gives probabilities outside this range they are clipped to min or max. These values can be quite generous, as the optimizers are quite tolerant of badly behaved probabilities.
 op_label_aliasesdictionary, optional
Dictionary whose keys are operation label “aliases” and whose values are tuples corresponding to what that operation label should be expanded into before querying the dataset. Defaults to the empty dictionary (no aliases defined) e.g. op_label_aliases[‘Gx^3’] = (‘Gx’,’Gx’,’Gx’)
 mdc_storeModelDatasetCircuitsStore, optional
An object that bundles cached quantities along with a given model, dataset, and circuit list. If given, model and dataset and circuits should be set to None.
 commmpi4py.MPI.Comm, optional
When not None, an MPI communicator for distributing the computation across multiple processors.
 mem_limitint, optional
A rough memory limit in bytes which restricts the amount of intermediate values that are computed and stored.
Returns
 chi2float
chi^2 value, equal to the sum of chi^2 terms from all specified circuits
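The sum this function computes can be sketched schematically with numpy: per-outcome terms N(p − f)²/cp, with p clipped away from 0 and 1 by min_prob_clip_for_weighting, summed over outcomes and circuits. This is an illustrative sketch of the weighting described above, not pyGSTi's implementation.

```python
import numpy as np

def chi2_terms(counts, probs, min_prob_clip_for_weighting=1e-4):
    """Schematic chi^2 contributions N*(p - f)^2 / cp for one circuit,
    where cp clips p to [min_clip, 1 - min_clip].  A sketch of the
    statistical weighting, not pyGSTi's implementation."""
    counts = np.asarray(counts, dtype=float)
    probs = np.asarray(probs, dtype=float)
    N = counts.sum()                      # total shots for this circuit
    freqs = counts / N                    # observed frequencies
    cp = np.clip(probs, min_prob_clip_for_weighting,
                 1 - min_prob_clip_for_weighting)
    return N * (probs - freqs) ** 2 / cp

# Aggregate chi^2: sum over circuits and outcomes.
per_circuit = [chi2_terms([55, 45], [0.5, 0.5]).sum(),
               chi2_terms([90, 10], [0.9, 0.1]).sum()]
print(sum(per_circuit))  # 1.0: only the first circuit deviates from its model
```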
 pygsti.chi2_per_circuit(model, dataset, circuits=None, min_prob_clip_for_weighting=0.0001, prob_clip_interval=(-10000, 10000), op_label_aliases=None, mdc_store=None, comm=None, mem_limit=None)
Computes the per-circuit chi^2 contributions for a set of circuits.
This function returns the same value as
chi2()
except the contributions from different circuits are not summed but returned as an array (the contributions of all the outcomes of a given circuit are summed together).
Parameters
 modelModel
The model used to specify the probabilities and SPAM labels
 datasetDataSet
The data used to specify frequencies and counts
 circuitslist of Circuits or tuples, optional
List of circuits whose terms will be included in chi^2 sum. Default value (None) means “all strings in dataset”.
 min_prob_clip_for_weightingfloat, optional
Defines the clipping interval for the statistical weight.
 prob_clip_intervaltuple, optional
A (min, max) tuple that specifies the minimum (possibly negative) and maximum values allowed for probabilities generated by the model. If the model gives probabilities outside this range they are clipped to min or max. These values can be quite generous, as the optimizers are quite tolerant of badly behaved probabilities.
 op_label_aliasesdictionary, optional
Dictionary whose keys are operation label “aliases” and whose values are tuples corresponding to what that operation label should be expanded into before querying the dataset. Defaults to the empty dictionary (no aliases defined) e.g. op_label_aliases[‘Gx^3’] = (‘Gx’,’Gx’,’Gx’)
 mdc_storeModelDatasetCircuitsStore, optional
An object that bundles cached quantities along with a given model, dataset, and circuit list. If given, model and dataset and circuits should be set to None.
 commmpi4py.MPI.Comm, optional
When not None, an MPI communicator for distributing the computation across multiple processors.
 mem_limitint, optional
A rough memory limit in bytes which restricts the amount of intermediate values that are computed and stored.
Returns
 chi2numpy.ndarray
Array of length either len(circuits) or len(dataset.keys()). Values are the chi2 contributions of the corresponding circuit aggregated over outcomes.
 pygsti.chi2_jacobian(model, dataset, circuits=None, min_prob_clip_for_weighting=0.0001, prob_clip_interval=(-10000, 10000), op_label_aliases=None, mdc_store=None, comm=None, mem_limit=None)
Compute the gradient of the chi^2 function computed by
chi2()
. The returned value holds the derivatives of the chi^2 function with respect to the model’s parameters.
Parameters
 modelModel
The model used to specify the probabilities and SPAM labels
 datasetDataSet
The data used to specify frequencies and counts
 circuitslist of Circuits or tuples, optional
List of circuits whose terms will be included in chi^2 sum. Default value (None) means “all strings in dataset”.
 min_prob_clip_for_weightingfloat, optional
Defines the clipping interval for the statistical weight.
 prob_clip_intervaltuple, optional
A (min, max) tuple that specifies the minimum (possibly negative) and maximum values allowed for probabilities generated by the model. If the model gives probabilities outside this range they are clipped to min or max. These values can be quite generous, as the optimizers are quite tolerant of badly behaved probabilities.
 op_label_aliasesdictionary, optional
Dictionary whose keys are operation label “aliases” and whose values are tuples corresponding to what that operation label should be expanded into before querying the dataset. Defaults to the empty dictionary (no aliases defined) e.g. op_label_aliases[‘Gx^3’] = (‘Gx’,’Gx’,’Gx’)
 mdc_storeModelDatasetCircuitsStore, optional
An object that bundles cached quantities along with a given model, dataset, and circuit list. If given, model and dataset and circuits should be set to None.
 commmpi4py.MPI.Comm, optional
When not None, an MPI communicator for distributing the computation across multiple processors.
 mem_limitint, optional
A rough memory limit in bytes which restricts the amount of intermediate values that are computed and stored.
Returns
 numpy array
The gradient vector of length model.num_params, the number of model parameters.
 pygsti.chi2_hessian(model, dataset, circuits=None, min_prob_clip_for_weighting=0.0001, prob_clip_interval=(-10000, 10000), op_label_aliases=None, mdc_store=None, comm=None, mem_limit=None)
Compute the Hessian matrix of the
chi2()
function.
Parameters
 modelModel
The model used to specify the probabilities and SPAM labels
 datasetDataSet
The data used to specify frequencies and counts
 circuitslist of Circuits or tuples, optional
List of circuits whose terms will be included in chi^2 sum. Default value (None) means “all strings in dataset”.
 min_prob_clip_for_weightingfloat, optional
Defines the clipping interval for the statistical weight.
 prob_clip_intervaltuple, optional
A (min, max) tuple that specifies the minimum (possibly negative) and maximum values allowed for probabilities generated by the model. If the model gives probabilities outside this range they are clipped to min or max. These values can be quite generous, as the optimizers are quite tolerant of badly behaved probabilities.
 op_label_aliasesdictionary, optional
Dictionary whose keys are operation label “aliases” and whose values are tuples corresponding to what that operation label should be expanded into before querying the dataset. Defaults to the empty dictionary (no aliases defined) e.g. op_label_aliases[‘Gx^3’] = (‘Gx’,’Gx’,’Gx’)
 mdc_storeModelDatasetCircuitsStore, optional
An object that bundles cached quantities along with a given model, dataset, and circuit list. If given, model and dataset and circuits should be set to None.
 commmpi4py.MPI.Comm, optional
When not None, an MPI communicator for distributing the computation across multiple processors.
 mem_limitint, optional
A rough memory limit in bytes which restricts the amount of intermediate values that are computed and stored.
Returns
 numpy array or None
On the root processor, the Hessian matrix of shape (nModelParams, nModelParams), where nModelParams = model.num_params. None on non-root processors.
 pygsti.chi2_approximate_hessian(model, dataset, circuits=None, min_prob_clip_for_weighting=0.0001, prob_clip_interval=(-10000, 10000), op_label_aliases=None, mdc_store=None, comm=None, mem_limit=None)
Compute an approximate Hessian matrix of the
chi2()
function. This approximation neglects terms proportional to the Hessian of the probabilities w.r.t. the model parameters (which can take a long time to compute). See logl_approximate_hessian for details on the analogous approximation for the log-likelihood Hessian.
Parameters
 modelModel
The model used to specify the probabilities and SPAM labels
 datasetDataSet
The data used to specify frequencies and counts
 circuitslist of Circuits or tuples, optional
List of circuits whose terms will be included in chi^2 sum. Default value (None) means “all strings in dataset”.
 min_prob_clip_for_weightingfloat, optional
Defines the clipping interval for the statistical weight.
 prob_clip_intervaltuple, optional
A (min, max) tuple that specifies the minimum (possibly negative) and maximum values allowed for probabilities generated by the model. If the model gives probabilities outside this range they are clipped to min or max. These values can be quite generous, as the optimizers are quite tolerant of badly behaved probabilities.
 op_label_aliasesdictionary, optional
Dictionary whose keys are operation label “aliases” and whose values are tuples corresponding to what that operation label should be expanded into before querying the dataset. Defaults to the empty dictionary (no aliases defined) e.g. op_label_aliases[‘Gx^3’] = (‘Gx’,’Gx’,’Gx’)
 mdc_storeModelDatasetCircuitsStore, optional
An object that bundles cached quantities along with a given model, dataset, and circuit list. If given, model and dataset and circuits should be set to None.
 commmpi4py.MPI.Comm, optional
When not None, an MPI communicator for distributing the computation across multiple processors.
 mem_limitint, optional
A rough memory limit in bytes which restricts the amount of intermediate values that are computed and stored.
Returns
 numpy array or None
On the root processor, the approximate Hessian matrix of shape (nModelParams, nModelParams), where nModelParams = model.num_params. None on non-root processors.
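Dropping the terms that involve second derivatives of the probabilities is the Gauss-Newton approximation familiar from least-squares fitting: H ≈ 2 JᵀWJ, where J = dp/dθ. The numpy sketch below illustrates this on a toy weighted least-squares objective where p depends linearly on θ (so the approximation happens to be exact, which a finite-difference check confirms); it is not pyGSTi's implementation.

```python
import numpy as np

# Toy objective chi2(theta) = sum_i w_i (p_i(theta) - f_i)^2 with linear
# probabilities p = A @ theta.  The Gauss-Newton "approximate Hessian"
# drops the d2p/dtheta2 term:  H_approx = 2 * J.T @ W @ J,  J = A.
rng = np.random.default_rng(0)
A = rng.normal(size=(5, 3))        # Jacobian dp/dtheta (constant here)
f = rng.normal(size=5)             # "frequencies"
w = np.abs(rng.normal(size=5))     # fixed statistical weights

def chi2(theta):
    r = A @ theta - f
    return np.sum(w * r ** 2)

H_approx = 2 * A.T @ (w[:, None] * A)

# Finite-difference Hessian for comparison (exact for a quadratic,
# up to floating-point error).
eps = 1e-5
theta0 = np.zeros(3)
H_fd = np.empty((3, 3))
for i in range(3):
    for j in range(3):
        e_i, e_j = np.eye(3)[i] * eps, np.eye(3)[j] * eps
        H_fd[i, j] = (chi2(theta0 + e_i + e_j) - chi2(theta0 + e_i)
                      - chi2(theta0 + e_j) + chi2(theta0)) / eps ** 2
print(np.allclose(H_approx, H_fd, atol=1e-4))  # True
```

For nonlinear p(θ) the neglected term is proportional to the residuals, so the approximation is best near a good fit.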
 pygsti.chialpha(alpha, model, dataset, circuits=None, pfratio_stitchpt=0.01, pfratio_derivpt=0.01, prob_clip_interval=(-10000, 10000), radius=None, op_label_aliases=None, mdc_store=None, comm=None, mem_limit=None)
Compute the chi-alpha objective function.
Parameters
 alphafloat
The alpha parameter, which lies in the interval (0,1].
 modelModel
The model used to specify the probabilities and SPAM labels
 datasetDataSet
The data used to specify frequencies and counts
 circuitslist of Circuits or tuples, optional
List of circuits whose terms will be included in the chi-alpha sum. Default value (None) means “all strings in dataset”.
 pfratio_stitchptfloat, optional
The x-value (x = probability/frequency ratio) below which the chi-alpha function is replaced with its second-order Taylor expansion.
 pfratio_derivptfloat, optional
The x-value at which the Taylor expansion derivatives are evaluated.
 prob_clip_intervaltuple, optional
A (min, max) tuple that specifies the minimum (possibly negative) and maximum values allowed for probabilities generated by model. If the model gives probabilities outside this range they are clipped to min or max. These values can be quite generous, as the optimizers are quite tolerant of badly behaved probabilities.
 radiusfloat, optional
If radius is not None then a “harsh” method of regularizing the zerofrequency terms (where the local function = N*p) is used. If radius is None, then fmin is used to handle the zerofrequency terms.
 op_label_aliasesdictionary, optional
Dictionary whose keys are operation label “aliases” and whose values are tuples corresponding to what that operation label should be expanded into before querying the dataset. Defaults to the empty dictionary (no aliases defined) e.g. op_label_aliases[‘Gx^3’] = (‘Gx’,’Gx’,’Gx’)
 mdc_storeModelDatasetCircuitsStore, optional
An object that bundles cached quantities along with a given model, dataset, and circuit list. If given, model and dataset and circuits should be set to None.
 commmpi4py.MPI.Comm, optional
When not None, an MPI communicator for distributing the computation across multiple processors.
 mem_limitint, optional
A rough memory limit in bytes which restricts the amount of intermediate values that are computed and stored.
Returns
float
 pygsti.chialpha_per_circuit(alpha, model, dataset, circuits=None, pfratio_stitchpt=0.01, pfratio_derivpt=0.01, prob_clip_interval=(-10000, 10000), radius=None, op_label_aliases=None, mdc_store=None, comm=None, mem_limit=None)
Compute the per-circuit chi-alpha objective function.
Parameters
 alphafloat
The alpha parameter, which lies in the interval (0,1].
 modelModel
The model used to specify the probabilities and SPAM labels
 datasetDataSet
The data used to specify frequencies and counts
 circuitslist of Circuits or tuples, optional
List of circuits whose terms will be included in the chi-alpha sum. Default value (None) means “all strings in dataset”.
 pfratio_stitchptfloat, optional
The x-value (x = probability/frequency ratio) below which the chi-alpha function is replaced with its second-order Taylor expansion.
 pfratio_derivptfloat, optional
The x-value at which the Taylor expansion derivatives are evaluated.
 prob_clip_intervaltuple, optional
A (min, max) tuple that specifies the minimum (possibly negative) and maximum values allowed for probabilities generated by model. If the model gives probabilities outside this range they are clipped to min or max. These values can be quite generous, as the optimizers are quite tolerant of badly behaved probabilities.
 radiusfloat, optional
If radius is not None then a “harsh” method of regularizing the zerofrequency terms (where the local function = N*p) is used. If radius is None, then fmin is used to handle the zerofrequency terms.
 op_label_aliasesdictionary, optional
Dictionary whose keys are operation label “aliases” and whose values are tuples corresponding to what that operation label should be expanded into before querying the dataset. Defaults to the empty dictionary (no aliases defined) e.g. op_label_aliases[‘Gx^3’] = (‘Gx’,’Gx’,’Gx’)
 mdc_storeModelDatasetCircuitsStore, optional
An object that bundles cached quantities along with a given model, dataset, and circuit list. If given, model and dataset and circuits should be set to None.
 commmpi4py.MPI.Comm, optional
When not None, an MPI communicator for distributing the computation across multiple processors.
 mem_limitint, optional
A rough memory limit in bytes which restricts the amount of intermediate values that are computed and stored.
Returns
 numpy.ndarray
Array of length either len(circuits) or len(dataset.keys()). Values are the chialpha contributions of the corresponding circuit aggregated over outcomes.
 pygsti.chi2fn_2outcome(n, p, f, min_prob_clip_for_weighting=0.0001)
Computes chi^2 for a 2-outcome measurement.
The chi-squared function for a 2-outcome measurement using a clipped probability for the statistical weighting.
Parameters
 n : float or numpy array
Number of samples.
 p : float or numpy array
Probability of 1st outcome (typically computed).
 f : float or numpy array
Frequency of 1st outcome (typically observed).
 min_prob_clip_for_weighting : float, optional
Defines clipping interval (see return value).
Returns
 float or numpy array
n(p-f)^2 / (cp(1-cp)), where cp is the value of p clipped to the interval (min_prob_clip_for_weighting, 1 - min_prob_clip_for_weighting)
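As a quick illustration of the returned quantity, here is a standalone NumPy sketch of this weighting (an illustrative reimplementation of the documented formula, not pyGSTi's own code):

```python
import numpy as np

def chi2fn_2outcome(n, p, f, min_prob_clip_for_weighting=1e-4):
    """Illustrative 2-outcome chi^2 term: n*(p-f)^2 / (cp*(1-cp)),
    where cp is p clipped away from 0 and 1 for the weighting."""
    cp = np.clip(p, min_prob_clip_for_weighting,
                 1 - min_prob_clip_for_weighting)
    return n * (p - f) ** 2 / (cp * (1 - cp))
```

For example, with n=100 samples, computed probability p=0.5, and observed frequency f=0.6, this gives 100 * 0.01 / 0.25 = 4.0.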
 pygsti.chi2fn_2outcome_wfreqs(n, p, f)
Computes chi^2 for a 2-outcome measurement using frequency-weighting.
The chi-squared function for a 2-outcome measurement using the observed frequency in the statistical weight.
Parameters
 n : float or numpy array
Number of samples.
 p : float or numpy array
Probability of 1st outcome (typically computed).
 f : float or numpy array
Frequency of 1st outcome (typically observed).
Returns
 float or numpy array
n(p-f)^2 / (f*(1-f*)), where f* = (f*n + 1)/(n + 2) is the frequency value used in the statistical weighting (prevents divide-by-zero errors)
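The regularized frequency f* above amounts to adding one pseudo-count to each of the two outcomes. A standalone sketch of the documented formula (illustrative, not pyGSTi's code):

```python
def chi2fn_2outcome_wfreqs(n, p, f):
    """Illustrative frequency-weighted 2-outcome chi^2 term:
    n*(p-f)^2 / (fs*(1-fs)), with fs = (f*n + 1)/(n + 2) so the
    weight never divides by zero."""
    fs = (f * n + 1) / (n + 2)
    return n * (p - f) ** 2 / (fs * (1 - fs))
```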
 pygsti.chi2fn(n, p, f, min_prob_clip_for_weighting=0.0001)
Computes the chi^2 term corresponding to a single outcome.
The chi-squared term for a single outcome of a multi-outcome measurement using a clipped probability for the statistical weighting.
Parameters
 n : float or numpy array
Number of samples.
 p : float or numpy array
Probability of 1st outcome (typically computed).
 f : float or numpy array
Frequency of 1st outcome (typically observed).
 min_prob_clip_for_weighting : float, optional
Defines clipping interval (see return value).
Returns
 float or numpy array
n(p-f)^2 / cp, where cp is the value of p clipped to the interval (min_prob_clip_for_weighting, 1 - min_prob_clip_for_weighting)
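The single-outcome term differs from the 2-outcome case only in its denominator (cp rather than cp*(1-cp)), since each outcome of a multi-outcome measurement contributes its own term. A standalone sketch of the documented formula:

```python
import numpy as np

def chi2fn(n, p, f, min_prob_clip_for_weighting=1e-4):
    """Illustrative single-outcome chi^2 term: n*(p-f)^2 / cp,
    where cp is p clipped away from 0 and 1 for the weighting."""
    cp = np.clip(p, min_prob_clip_for_weighting,
                 1 - min_prob_clip_for_weighting)
    return n * (p - f) ** 2 / cp
```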
 pygsti.chi2fn_wfreqs(n, p, f, min_freq_clip_for_weighting=0.0001)
Computes the frequency-weighted chi^2 term corresponding to a single outcome.
The chi-squared term for a single outcome of a multi-outcome measurement using the observed frequency in the statistical weight.
Parameters
 n : float or numpy array
Number of samples.
 p : float or numpy array
Probability of 1st outcome (typically computed).
 f : float or numpy array
Frequency of 1st outcome (typically observed).
 min_freq_clip_for_weighting : float, optional
The minimum frequency used in the weighting, i.e. the largest weighting factor is 1 / min_freq_clip_for_weighting.
Returns
float or numpy array
 pygsti.calculate_edesign_estimated_runtime(edesign, gate_time_dict=None, gate_time_1Q=None, gate_time_2Q=None, measure_reset_time=0.0, interbatch_latency=0.0, total_shots_per_circuit=1000, shots_per_circuit_per_batch=None, circuits_per_batch=None)
Estimate the runtime for an ExperimentDesign from gate times and batch sizes.
The rough model is that the required circuit shots are split into batches, where each batch runs a subset of the circuits for some fraction of the needed shots. One round consists of running all batches once, i.e. collecting some shots for all circuits, and rounds are repeated until the required number of shots is met for all circuits.
In addition to gate times, the user can also provide the time at the end of each circuit for measurement and/or reset, as well as the latency between batches for classical upload/communication of the next set of circuits. Since times are user-provided, this function makes no assumption on the units of time, only that a consistent unit is used for all times.
Parameters
 edesign: ExperimentDesign
An experiment design containing all required circuits.
 gate_time_dict: dict
Dictionary with keys as either gate names or gate labels (for qubit-specific overrides) and values as gate time in user-specified units. All operations in the circuits of edesign must be specified. Either gate_time_dict or both gate_time_1Q and gate_time_2Q must be specified.
 gate_time_1Q: float
Gate time in user-specified units for all operations acting on one qubit. Either gate_time_dict or both gate_time_1Q and gate_time_2Q must be specified.
 gate_time_2Q: float
Gate time in user-specified units for all operations acting on more than one qubit. Either gate_time_dict or both gate_time_1Q and gate_time_2Q must be specified.
 measure_reset_time: float
Measurement and/or reset time in user-specified units. This is applied once for every circuit.
 interbatch_latency: float
Time between batches in user-specified units.
 total_shots_per_circuit: int
Total number of shots per circuit. Together with shots_per_circuit_per_batch, this will determine the total number of rounds needed.
 shots_per_circuit_per_batch: int
Number of shots to do for each circuit within a batch. Together with total_shots_per_circuit, this will determine the total number of rounds needed. If None, this is set to the total shots, meaning that only one round is done.
 circuits_per_batch: int
Number of circuits to include in each batch. Together with the number of circuits in edesign, this will determine the number of batches in each round. If None, this is set to the total number of circuits such that only one batch is done.
Returns
 float
The estimated time to run the experiment design.
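The rounds-and-batches arithmetic described above can be sketched as follows. This is a simplified hypothetical helper, not pyGSTi's implementation: it assumes every circuit takes the same time and folds measurement/reset time into that per-circuit time.

```python
import math

def estimated_runtime(num_circuits, time_per_circuit,
                      total_shots_per_circuit=1000,
                      shots_per_circuit_per_batch=None,
                      circuits_per_batch=None,
                      interbatch_latency=0.0):
    """Rough runtime model: shots are split into batches; one round runs
    every batch once, and rounds repeat until all shots are collected."""
    if shots_per_circuit_per_batch is None:
        shots_per_circuit_per_batch = total_shots_per_circuit  # one round
    if circuits_per_batch is None:
        circuits_per_batch = num_circuits  # one batch per round
    num_rounds = math.ceil(total_shots_per_circuit / shots_per_circuit_per_batch)
    num_batches = math.ceil(num_circuits / circuits_per_batch)
    # Each batch runs its circuits for the per-batch shot count, then
    # pays the classical upload/communication latency.
    time_per_batch = circuits_per_batch * shots_per_circuit_per_batch * time_per_circuit
    round_time = num_batches * (time_per_batch + interbatch_latency)
    return num_rounds * round_time
```

For instance, 10 circuits at 1 ms each, 1000 total shots split into batches of 5 circuits taking 500 shots at a time with 1 s of inter-batch latency, gives 2 rounds of 2 batches each.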
 pygsti.calculate_fisher_information_per_circuit(regularized_model, circuits, approx=False, verbosity=1, comm=None, mem_limit=None)
Helper function to calculate all Fisher information terms for each circuit.
This function can be used to pregenerate a cache for the calculate_fisher_information_matrix() function, and this should be done for computational efficiency when computing many Fisher information matrices.
Parameters
 regularized_model: OpModel
The model used to calculate the terms of the Fisher information matrix. This model must already be “regularized” such that there are no small probabilities, usually by adding a small amount of SPAM error.
 circuits: list
List of circuits to compute Fisher information for.
 approx: bool, optional (default False)
When set to True, use the approximate Fisher information, which drops the Hessian term. Significantly faster to compute than when including the Hessian.
 verbosity: int, optional (default 1)
Used to control the level of output printed by a VerbosityPrinter object.
 comm: mpi4py.MPI.Comm, optional
When not None, an MPI communicator for distributing the computation across multiple processors.
 mem_limit: int, optional
A rough memory limit in bytes which is used to determine job allocation when there are multiple processors.
Returns
 fisher_info_terms: dict
Dictionary where keys are circuits and values are (num_params, num_params) Fisher information matrices for a single circuit.
 pygsti.calculate_fisher_information_matrix(model, circuits, num_shots=1, term_cache=None, regularize_spam=True, approx=False, mem_efficient_mode=False, circuit_chunk_size=100, verbosity=1, comm=None, mem_limit=None)
Calculate the Fisher information matrix for a set of circuits and a model.
Note that the model should be regularized so that no probability should be very small for numerical stability. This is done by default for models with a dense SPAM parameterization, but must be done manually if this is not the case (e.g. CPTP parameterization).
Parameters
 model: OpModel
The model used to calculate the terms of the Fisher information matrix.
 circuits: list
List of circuits in the experiment design.
 num_shots: int or dict
If int, specifies how many shots each circuit gets. If dict, keys must be circuits and values are percircuit counts.
 term_cache: dict or None
If provided, should have circuits as keys and percircuit Fisher information matrices as values, i.e. the output of calculate_fisher_information_per_circuit(). This cache will be updated with any additional circuits that need to be calculated in the given circuit list.
 regularize_spam: bool
If True, depolarizing SPAM noise is added to prevent 0 probabilities for numerical stability. Note that this may fail if the model does not have a dense SPAM parameterization. In that case, pass an already "regularized" model and set this to False.
 approx: bool, optional (default False)
When set to True, use the approximate Fisher information, which drops the Hessian term. Significantly faster to compute than when including the Hessian.
 mem_efficient_mode: bool, optional (default False)
If True, avoid constructing the intermediate term cache to save on memory.
 circuit_chunk_size: int, optional (default 100)
Used in conjunction with mem_efficient_mode. This sets the maximum number of circuits for which to simultaneously construct the per-circuit contributions to the Fisher information at any one time.
 verbosity: int, optional (default 1)
Used to control the level of output printed by a VerbosityPrinter object.
 comm: mpi4py.MPI.Comm, optional
When not None, an MPI communicator for distributing the computation across multiple processors.
 mem_limit: int, optional
A rough memory limit in bytes which is used to determine job allocation when there are multiple processors.
Returns
 fisher_information: numpy.ndarray
Fisher information matrix of size (num_params, num_params)
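For the approx=True case described above, the contribution of a single circuit reduces (under the standard multinomial/Poisson likelihood) to a probability-weighted outer product of the Jacobian of outcome probabilities. A hedged NumPy sketch of that per-circuit term, using made-up probabilities and Jacobian; this illustrates the formula, not pyGSTi's internal code:

```python
import numpy as np

def approx_fisher_term(probs, jac, num_shots):
    """Approximate per-circuit Fisher information, dropping the Hessian
    term: F = N * sum_k outer(J_k, J_k) / p_k, where J_k is the gradient
    of outcome probability p_k w.r.t. the model parameters."""
    num_params = jac.shape[1]
    F = np.zeros((num_params, num_params))
    for p_k, j_k in zip(probs, jac):
        F += np.outer(j_k, j_k) / p_k
    return num_shots * F
```

The model must be "regularized" (no tiny p_k) precisely because each term divides by p_k.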
 pygsti.calculate_fisher_information_matrices_by_L(model, circuit_lists, Ls, num_shots=1, term_cache=None, regularize_spam=True, cumulative=True, approx=False, mem_efficient_mode=False, circuit_chunk_size=100, verbosity=1, comm=None, mem_limit=None)
Calculate a set of Fisher information matrices for a set of circuits grouped by iteration.
Parameters
 model: OpModel
The model used to calculate the terms of the Fisher information matrix.
 circuit_lists: list of lists of circuits or CircuitLists
Circuit lists for the experiment design for each L. Most likely from the value of the circuit_lists attribute of most experiment design objects.
 Ls: list of ints
A list of integer values corresponding to the circuit lengths associated with each circuit list as passed in with circuit_lists.
 num_shots: int or dict
If int, specifies how many shots each circuit gets. If dict, keys must be circuits and values are percircuit counts.
 term_cache: dict or None
If provided, should have circuits as keys and percircuit Fisher information matrices as values, i.e. the output of calculate_fisher_information_per_circuit(). This cache will be updated with any additional circuits that need to be calculated in the given circuit list.
 regularize_spam: bool
If True, depolarizing SPAM noise is added to prevent 0 probabilities for numerical stability. Note that this may fail if the model does not have a dense SPAM parameterization. In that case, pass an already "regularized" model and set this to False.
 cumulative: bool
Whether to include Fisher information matrices for lower L (True) or not.
 approx: bool, optional (default False)
When set to True, use the approximate Fisher information, which drops the Hessian term. Significantly faster to compute than when including the Hessian.
 mem_efficient_mode: bool, optional (default False)
If True, avoid constructing the intermediate term cache to save on memory.
 circuit_chunk_size: int, optional (default 100)
Used in conjunction with mem_efficient_mode. This sets the maximum number of circuits for which to simultaneously construct the per-circuit contributions to the Fisher information at any one time.
 verbosity: int, optional (default 1)
Used to control the level of output printed by a VerbosityPrinter object.
 comm: mpi4py.MPI.Comm, optional
When not None, an MPI communicator for distributing the computation across multiple processors.
 mem_limit: int, optional
A rough memory limit in bytes which is used to determine job allocation when there are multiple processors.
Returns
 fisher_information_by_L: dict
Dictionary with keys as circuit length L and values as Fisher information matrices
 pygsti.accumulate_fim_matrix(subcircuits, num_params, num_shots, outcomes, ps, js, printer, hs=None, approx=False)
 pygsti.accumulate_fim_matrix_per_circuit(subcircuits, num_params, outcomes, ps, js, printer, hs=None, approx=False)
 pygsti.pad_edesign_with_idle_lines(edesign, line_labels)
Utility to explicitly pad out ExperimentDesigns with idle lines.
Parameters
 edesign: ExperimentDesign
The edesign to be padded.
 line_labels: tuple of int or str
Full line labels for the padded edesign.
Returns
 ExperimentDesign
An edesign where all circuits have been padded out with missing idle lines
 pygsti.bonferroni_correction(significance, numtests)
Calculates the standard Bonferroni correction.
This is used for reducing the "local" significance for > 1 statistical hypothesis test to guarantee maintaining a "global" significance (i.e., a family-wise error rate) of significance.
Parameters
 significance : float
Significance of each individual test.
 numtests : int
The number of hypothesis tests performed.
Returns
The Bonferroni-corrected local significance, given by significance / numtests.
 pygsti.sidak_correction(significance, numtests)
Sidak correction.
The Šidák correction reduces the "local" significance of each of numtests individual hypothesis tests so that the family-wise error rate is maintained at significance; it is slightly less conservative than the Bonferroni correction.
Parameters
 significance : float
Significance of each individual test.
 numtests : int
The number of hypothesis tests performed.
Returns
float
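Since the docstring above omits the formula, note that the standard Šidák-corrected local significance (stated here as an assumption about this function's behavior, based on the standard definition) is:

```python
def sidak_correction(significance, numtests):
    """Standard Sidak correction: the local significance a such that
    1 - (1 - a)**numtests equals the family-wise rate `significance`,
    assuming independent tests. Slightly larger (less conservative)
    than the Bonferroni value significance / numtests."""
    return 1 - (1 - significance) ** (1.0 / numtests)
```

For significance=0.05 and numtests=5 this gives roughly 0.0102, just above the Bonferroni value 0.01.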
 pygsti.generalized_bonferroni_correction(significance, weights, numtests=None, nested_method='bonferroni', tol=1e-10)
Generalized Bonferroni correction.
Parameters
 significance : float
Significance of each individual test.
 weights : array-like
An array of non-negative floating-point weights, one per individual test, that sum to 1.0.
 numtests : int
The number of hypothesis tests performed.
 nested_method : {'bonferroni', 'sidak'}
Which method is used to find the significance of the composite test.
 tol : float, optional
Tolerance when checking that the weights add to 1.0.
Returns
float
 pygsti.jamiolkowski_iso(operation_mx, op_mx_basis='pp', choi_mx_basis='pp')
Given an operation matrix, return the corresponding Choi matrix that is normalized to have trace == 1.
Parameters
 operation_mx : numpy array
the operation matrix to compute the Choi matrix of.
 op_mx_basis : Basis object
The source and destination basis, respectively. Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt) (or a custom basis object).
 choi_mx_basis : Basis object
The source and destination basis, respectively. Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt) (or a custom basis object).
Returns
 numpy array
the Choi matrix, normalized to have trace == 1, in the desired basis.
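To make the isomorphism concrete, here is a minimal NumPy sketch of the trace-1 Choi matrix in the std (matrix-unit) basis. This is an illustrative reimplementation, not pyGSTi's code: it assumes a column-stacking vectorization convention and omits all basis handling, so it will not reproduce pyGSTi's output in other bases.

```python
import numpy as np

def choi_matrix_std(operation_mx):
    """J(G) = (1/d) * sum_ij E_ij (kron) G(E_ij), where E_ij are matrix
    units and G acts on column-stacked (vec'd) density matrices.
    Normalized so the identity channel yields a trace-1 Choi matrix."""
    d2 = operation_mx.shape[0]
    d = int(np.sqrt(d2))
    choi = np.zeros((d2, d2), dtype=complex)
    for i in range(d):
        for j in range(d):
            E_ij = np.zeros((d, d))
            E_ij[i, j] = 1.0
            # Apply the superoperator to the vectorized matrix unit.
            G_Eij = (operation_mx @ E_ij.flatten(order='F')).reshape((d, d), order='F')
            choi += np.kron(E_ij, G_Eij)
    return choi / d
```

For the identity channel on one qubit the result is the maximally entangled state: trace 1, a single unit eigenvalue.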
 pygsti.jamiolkowski_iso_inv(choi_mx, choi_mx_basis='pp', op_mx_basis='pp')
Given a Choi matrix, return the corresponding operation matrix.
This function performs the inverse of jamiolkowski_iso().
Parameters
 choi_mx : numpy array
the Choi matrix, normalized to have trace == 1, to compute the operation matrix for.
 choi_mx_basis : Basis object
The source and destination basis, respectively. Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt) (or a custom basis object).
 op_mx_basis : Basis object
The source and destination basis, respectively. Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt) (or a custom basis object).
Returns
 numpy array
operation matrix in the desired basis.
 pygsti.fast_jamiolkowski_iso_std(operation_mx, op_mx_basis)
The corresponding Choi matrix in the standard basis that is normalized to have trace == 1.
This routine only computes the case of the Choi matrix being in the standard (matrix unit) basis, but does so more quickly than jamiolkowski_iso() and so is particularly useful when only the eigenvalues of the Choi matrix are needed.
Parameters
 operation_mx : numpy array
the operation matrix to compute the Choi matrix of.
 op_mx_basis : Basis object
The source and destination basis, respectively. Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt) (or a custom basis object).
Returns
 numpy array
the Choi matrix, normalized to have trace == 1, in the std basis.
 pygsti.fast_jamiolkowski_iso_std_inv(choi_mx, op_mx_basis)
Given a Choi matrix in the standard basis, return the corresponding operation matrix.
This function performs the inverse of fast_jamiolkowski_iso_std().
Parameters
 choi_mx : numpy array
the Choi matrix in the standard (matrix units) basis, normalized to have trace == 1, to compute the operation matrix for.
 op_mx_basis : Basis object
The source and destination basis, respectively. Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt) (or a custom basis object).
Returns
 numpy array
operation matrix in the desired basis.
 pygsti.sum_of_negative_choi_eigenvalues(model, weights=None)
Compute the amount of non-CP-ness of a model.
This is defined (somewhat arbitrarily) by summing the negative eigenvalues of the Choi matrix for each gate in model.
Parameters
 model : Model
The model to act on.
 weights : dict
A dictionary of weights used to multiply the negative eigenvalues of different gates. Keys are operation labels, values are floating point numbers.
Returns
 float
the sum of negative eigenvalues of the Choi matrix for each gate.
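The per-gate quantity is just the (optionally weighted) sum of the negative eigenvalues of a Hermitian Choi matrix. A minimal NumPy sketch of that step, shown on a toy diagonal matrix rather than a real gate's Choi matrix:

```python
import numpy as np

def sum_negative_eigenvalues(choi_mx, weight=1.0):
    """Sum the negative eigenvalues of a Hermitian (Choi) matrix,
    scaled by an optional per-gate weight. A CP map gives 0."""
    evals = np.linalg.eigvalsh(choi_mx)  # real eigenvalues, Hermitian input
    return weight * evals[evals < 0].sum()
```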
 pygsti.sums_of_negative_choi_eigenvalues(model)
Compute the amount of non-CP-ness of a model.
This is defined (somewhat arbitrarily) by summing the negative eigenvalues of the Choi matrix for each gate in model separately. This function differs from sum_of_negative_choi_eigenvalues() in that it returns sums separately for each operation of model.
Parameters
 model : Model
The model to act on.
Returns
 list of floats
each element == sum of the negative eigenvalues of the Choi matrix for the corresponding gate (as ordered by model.operations.iteritems()).
 pygsti.magnitudes_of_negative_choi_eigenvalues(model)
Compute the magnitudes of the negative eigenvalues of the Choi matrices for each gate in model.
Parameters
 model : Model
The model to act on.
Returns
 list of floats
list of the magnitudes of all negative Choi eigenvalues. The length of this list will vary based on how many negative eigenvalues are found, as positive eigenvalues contribute nothing to this list.
 pygsti.warn_deprecated(name, replacement=None)
Formats and prints a deprecation warning message.
Parameters
 name : str
The name of the function that is now deprecated.
 replacement : str, optional
the name of the function that should replace it.
Returns
None
 pygsti.deprecate(replacement=None)
Decorator for deprecating a function.
Parameters
 replacement : str, optional
the name of the function that should replace it.
Returns
function
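A deprecation decorator along these lines might look like the following. This is a hypothetical sketch of the common pattern, not pyGSTi's implementation:

```python
import functools
import warnings

def deprecate(replacement=None):
    """Decorator factory that marks a function as deprecated,
    optionally naming its replacement in the warning message."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            msg = f"{fn.__name__} is deprecated"
            if replacement is not None:
                msg += f"; use {replacement} instead"
            warnings.warn(msg, DeprecationWarning, stacklevel=2)
            return fn(*args, **kwargs)
        return wrapper
    return decorator
```

Applying `@deprecate("new_fn")` to a function then emits a DeprecationWarning on each call while leaving the return value unchanged.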
 pygsti.deprecate_imports(module_name, replacement_map, warning_msg)
Utility to deprecate imports from a module.
This works by swapping the underlying module in the import mechanisms with a ModuleType object that overrides attribute lookup to check against the replacement map.
Note that this will slow down module attribute lookup substantially. If you need to deprecate multiple names, DO NOT call this method more than once on a given module! Instead, use the replacement map to batch multiple deprecations into one call. When using this method, plan to remove the deprecated paths altogether sooner rather than later.
Parameters
 module_name : str
The fully-qualified name of the module whose names have been deprecated.
 replacement_map : {name: function}
A map of each deprecated name to a factory which will be called with no arguments when importing the name.
 warning_msg : str
A message to be displayed as a warning when importing a deprecated name. Optionally, this may include the format-string field {name}, which will be formatted with the deprecated name.
Returns
None
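The module-swapping mechanism described above can be sketched with a ModuleType subclass that intercepts attribute lookup. This is an illustrative toy, not pyGSTi's code, and it shows why lookup slows down: every miss on the proxy goes through __getattr__.

```python
import sys
import types
import warnings

def deprecate_imports(module_name, replacement_map, warning_msg):
    """Replace sys.modules[module_name] with a proxy module whose
    attribute lookup warns and calls a factory for deprecated names,
    delegating everything else to the original module."""
    wrapped = sys.modules[module_name]

    class DeprecatedModule(types.ModuleType):
        def __getattr__(self, name):
            if name in replacement_map:
                warnings.warn(warning_msg.format(name=name),
                              DeprecationWarning, stacklevel=2)
                return replacement_map[name]()  # factory, called per lookup
            return getattr(wrapped, name)  # normal names pass through

    sys.modules[module_name] = DeprecatedModule(module_name)
```

Because one call installs one proxy, batching all deprecations into a single replacement_map (as the note above advises) avoids stacking proxies on top of each other.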
 pygsti.TOL = 1e-20
 pygsti.logl(model, dataset, circuits=None, min_prob_clip=1e-06, prob_clip_interval=(-1000000.0, 1000000.0), radius=0.0001, poisson_picture=True, op_label_aliases=None, wildcard=None, mdc_store=None, comm=None, mem_limit=None)
The log-likelihood function.
Parameters
 model : Model
Model of parameterized gates
 dataset : DataSet
Probability data
 circuits : list of (tuples or Circuits), optional
Each element specifies a circuit to include in the log-likelihood sum. Default value of None implies all the circuits in dataset should be used.
 min_prob_clip : float, optional
The minimum probability treated normally in the evaluation of the log-likelihood. A penalty function replaces the true log-likelihood for probabilities that lie below this threshold so that the log-likelihood never becomes undefined (which improves optimizer performance).
 prob_clip_interval : 2-tuple or None, optional
(min, max) values used to clip the probabilities predicted by models during MLEGST's search for an optimal model (if not None). If None, no clipping is performed.
 radius : float, optional
Specifies the severity of rounding used to "patch" the zero-frequency terms of the log-likelihood.
 poisson_picture : boolean, optional
Whether the log-likelihood-in-the-Poisson-picture terms should be included in the returned logl value.
 op_label_aliases : dictionary, optional
Dictionary whose keys are operation label "aliases" and whose values are tuples corresponding to what that operation label should be expanded into before querying the dataset. Defaults to the empty dictionary (no aliases defined), e.g. op_label_aliases['Gx^3'] = ('Gx', 'Gx', 'Gx')
 wildcard : WildcardBudget
A wildcard budget to apply to this log-likelihood computation. This increases the returned log-likelihood value by adjusting (by a maximal amount measured in TVD, given by the budget) the probabilities produced by model to optimally match the data (within the budgetary constraints) when evaluating the log-likelihood.
 mdc_store : ModelDatasetCircuitsStore, optional
An object that bundles cached quantities along with a given model, dataset, and circuit list. If given, model, dataset, and circuits should be set to None.
 comm : mpi4py.MPI.Comm, optional
When not None, an MPI communicator for distributing the computation across multiple processors.
 mem_limit : int, optional
A rough memory limit in bytes which restricts the amount of intermediate values that are computed and stored.
Returns
 float
The log likelihood
 pygsti.logl_per_circuit(model, dataset, circuits=None, min_prob_clip=1e-06, prob_clip_interval=(-1000000.0, 1000000.0), radius=0.0001, poisson_picture=True, op_label_aliases=None, wildcard=None, mdc_store=None, comm=None, mem_limit=None)
Computes the per-circuit log-likelihood contribution for a set of circuits.
Parameters
 model : Model
Model of parameterized gates
 dataset : DataSet
Probability data
 circuits : list of (tuples or Circuits), optional
Each element specifies a circuit to include in the log-likelihood sum. Default value of None implies all the circuits in dataset should be used.
 min_prob_clip : float, optional
The minimum probability treated normally in the evaluation of the log-likelihood. A penalty function replaces the true log-likelihood for probabilities that lie below this threshold so that the log-likelihood never becomes undefined (which improves optimizer performance).
 prob_clip_interval : 2-tuple or None, optional
(min, max) values used to clip the probabilities predicted by models during MLEGST's search for an optimal model (if not None). If None, no clipping is performed.
 radius : float, optional
Specifies the severity of rounding used to "patch" the zero-frequency terms of the log-likelihood.
 poisson_picture : boolean, optional
Whether the log-likelihood-in-the-Poisson-picture terms should be included in the returned logl value.
 op_label_aliases : dictionary, optional
Dictionary whose keys are operation label "aliases" and whose values are tuples corresponding to what that operation label should be expanded into before querying the dataset. Defaults to the empty dictionary (no aliases defined), e.g. op_label_aliases['Gx^3'] = ('Gx', 'Gx', 'Gx')
 wildcard : WildcardBudget
A wildcard budget to apply to this log-likelihood computation. This increases the returned log-likelihood value by adjusting (by a maximal amount measured in TVD, given by the budget) the probabilities produced by model to optimally match the data (within the budgetary constraints) when evaluating the log-likelihood.
 mdc_store : ModelDatasetCircuitsStore, optional
An object that bundles cached quantities along with a given model, dataset, and circuit list. If given, model, dataset, and circuits should be set to None.
 comm : mpi4py.MPI.Comm, optional
When not None, an MPI communicator for distributing the computation across multiple processors.
 mem_limit : int, optional
A rough memory limit in bytes which restricts the amount of intermediate values that are computed and stored.
Returns
 numpy.ndarray
Array of length either len(circuits) or len(dataset.keys()). Values are the log-likelihood contributions of the corresponding circuit aggregated over outcomes.
 pygsti.logl_jacobian(model, dataset, circuits=None, min_prob_clip=1e-06, prob_clip_interval=(-1000000.0, 1000000.0), radius=0.0001, poisson_picture=True, op_label_aliases=None, mdc_store=None, comm=None, mem_limit=None, verbosity=0)
The Jacobian of the log-likelihood function.
Parameters
 model : Model
Model of parameterized gates (including SPAM)
 dataset : DataSet
Probability data
 circuits : list of (tuples or Circuits), optional
Each element specifies a circuit to include in the log-likelihood sum. Default value of None implies all the circuits in dataset should be used.
 min_prob_clip : float, optional
The minimum probability treated normally in the evaluation of the log-likelihood. A penalty function replaces the true log-likelihood for probabilities that lie below this threshold so that the log-likelihood never becomes undefined (which improves optimizer performance).
 prob_clip_interval : 2-tuple or None, optional
(min, max) values used to clip the probabilities predicted by models during MLEGST's search for an optimal model (if not None). If None, no clipping is performed.
 radius : float, optional
Specifies the severity of rounding used to "patch" the zero-frequency terms of the log-likelihood.
 poisson_picture : boolean, optional
Whether the Poisson-picture log-likelihood should be differentiated.
 op_label_aliases : dictionary, optional
Dictionary whose keys are operation label "aliases" and whose values are tuples corresponding to what that operation label should be expanded into before querying the dataset. Defaults to the empty dictionary (no aliases defined), e.g. op_label_aliases['Gx^3'] = ('Gx', 'Gx', 'Gx')
 mdc_store : ModelDatasetCircuitsStore, optional
An object that bundles cached quantities along with a given model, dataset, and circuit list. If given, model, dataset, and circuits should be set to None.
 comm : mpi4py.MPI.Comm, optional
When not None, an MPI communicator for distributing the computation across multiple processors.
 mem_limit : int, optional
A rough memory limit in bytes which restricts the amount of intermediate values that are computed and stored.
 verbosity : int, optional
How much detail to print to stdout.
Returns
 numpy array
array of shape (M,), where M is the length of the vectorized model.
 pygsti.logl_hessian(model, dataset, circuits=None, min_prob_clip=1e-06, prob_clip_interval=(-1000000.0, 1000000.0), radius=0.0001, poisson_picture=True, op_label_aliases=None, mdc_store=None, comm=None, mem_limit=None, verbosity=0)
The Hessian of the log-likelihood function.
Parameters
 model : Model
Model of parameterized gates (including SPAM)
 dataset : DataSet
Probability data
 circuits : list of (tuples or Circuits), optional
Each element specifies a circuit to include in the log-likelihood sum. Default value of None implies all the circuits in dataset should be used.
 min_prob_clip : float, optional
The minimum probability treated normally in the evaluation of the log-likelihood. A penalty function replaces the true log-likelihood for probabilities that lie below this threshold so that the log-likelihood never becomes undefined (which improves optimizer performance).
 prob_clip_interval : 2-tuple or None, optional
(min, max) values used to clip the probabilities predicted by models during MLEGST's search for an optimal model (if not None). If None, no clipping is performed.
 radius : float, optional
Specifies the severity of rounding used to "patch" the zero-frequency terms of the log-likelihood.
 poisson_picture : boolean, optional
Whether the Poisson-picture log-likelihood should be differentiated.
 op_label_aliases : dictionary, optional
Dictionary whose keys are operation label "aliases" and whose values are tuples corresponding to what that operation label should be expanded into before querying the dataset. Defaults to the empty dictionary (no aliases defined), e.g. op_label_aliases['Gx^3'] = ('Gx', 'Gx', 'Gx')
 mdc_store : ModelDatasetCircuitsStore, optional
An object that bundles cached quantities along with a given model, dataset, and circuit list. If given, model, dataset, and circuits should be set to None.
 comm : mpi4py.MPI.Comm, optional
When not None, an MPI communicator for distributing the computation across multiple processors.
 mem_limit : int, optional
A rough memory limit in bytes which restricts the amount of intermediate values that are computed and stored.
 verbosity : int, optional
How much detail to print to stdout.
Returns
 numpy array or None
On the root processor, the Hessian matrix of shape (nModelParams, nModelParams), where nModelParams = model.num_params. None on non-root processors.
 pygsti.logl_approximate_hessian(model, dataset, circuits=None, min_prob_clip=1e-06, prob_clip_interval=(-1000000.0, 1000000.0), radius=0.0001, poisson_picture=True, op_label_aliases=None, mdc_store=None, comm=None, mem_limit=None, verbosity=0)
An approximate Hessian of the log-likelihood function.
An approximation to the true Hessian is computed using just the Jacobian (and not the Hessian) of the probabilities w.r.t. the model parameters. Let J = d(probs)/d(params) and denote the Hessian of the log-likelihood w.r.t. the probabilities as d2(logl)/dprobs2 (a diagonal matrix indexed by the term, i.e. probability, of the log-likelihood). Then this function computes:
H = J * d2(logl)/dprobs2 * J.T
which simply neglects the d2(probs)/d(params)2 terms of the true Hessian. Since this curvature is expected to be small at the MLE point, this approximation can be useful for computing approximate error bars.
Parameters
 model : Model
Model of parameterized gates (including SPAM)
 dataset : DataSet
Probability data
 circuits : list of (tuples or Circuits), optional
Each element specifies a circuit to include in the log-likelihood sum. Default value of None implies all the circuits in dataset should be used.
 min_prob_clip : float, optional
The minimum probability treated normally in the evaluation of the log-likelihood. A penalty function replaces the true log-likelihood for probabilities that lie below this threshold so that the log-likelihood never becomes undefined (which improves optimizer performance).
 prob_clip_interval : 2-tuple or None, optional
(min, max) values used to clip the probabilities predicted by models during MLEGST’s search for an optimal model (if not None). If None, no clipping is performed.
 radius : float, optional
Specifies the severity of rounding used to “patch” the zero-frequency terms of the log-likelihood.
 poisson_picture : boolean, optional
Whether the Poisson-picture log-likelihood should be differentiated.
 op_label_aliases : dictionary, optional
Dictionary whose keys are operation label “aliases” and whose values are tuples corresponding to what that operation label should be expanded into before querying the dataset. Defaults to the empty dictionary (no aliases defined), e.g. op_label_aliases[‘Gx^3’] = (‘Gx’,’Gx’,’Gx’).
 mdc_store : ModelDatasetCircuitsStore, optional
An object that bundles cached quantities along with a given model, dataset, and circuit list. If given, model, dataset, and circuits should be set to None.
 comm : mpi4py.MPI.Comm, optional
When not None, an MPI communicator for distributing the computation across multiple processors.
 mem_limit : int, optional
A rough memory limit in bytes which restricts the amount of intermediate values that are computed and stored.
 verbosity : int, optional
How much detail to print to stdout.
Returns
 numpy array or None
On the root processor, the approximate Hessian matrix of shape (nModelParams, nModelParams), where nModelParams = model.num_params. None on non-root processors.
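The Jacobian-only approximation above can be sketched in a few lines of numpy. This is an illustrative sketch, not pyGSTi's implementation; here `jac` holds d(probs)/d(params) with shape (nProbs, nParams), so the product is written J.T @ D @ J (the docstring's H = J * D * J.T under the opposite layout convention).

```python
import numpy as np

def approx_hessian(jac, d2logl_dprobs2):
    """Approximate Hessian of logl w.r.t. params.

    jac : (nProbs, nParams) Jacobian d(probs)/d(params)
    d2logl_dprobs2 : length-nProbs diagonal of the Hessian of logl
        w.r.t. the probabilities.
    """
    # Scale each row of J by the corresponding diagonal element, then contract.
    return jac.T @ (d2logl_dprobs2[:, None] * jac)

J = np.array([[1.0, 0.0], [0.5, 2.0], [0.0, 1.0]])
d2 = np.array([-2.0, -1.0, -4.0])   # logl is concave in the probabilities
H = approx_hessian(J, d2)           # (nParams, nParams), symmetric
```

The result is symmetric by construction, which the exact Hessian also satisfies.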
 pygsti.logl_max(model, dataset, circuits=None, poisson_picture=True, op_label_aliases=None, mdc_store=None)
The maximum log-likelihood possible for a DataSet.
That is, the log-likelihood obtained by a maximal model that can fit perfectly the probability of each circuit.
Parameters
 model : Model
the model, used only for circuit compilation
 dataset : DataSet
the data set to use.
 circuits : list of (tuples or Circuits), optional
Each element specifies a circuit to include in the max-log-likelihood sum. Default value of None implies all the circuits in dataset should be used.
 poisson_picture : boolean, optional
Whether the Poisson-picture maximum log-likelihood should be returned.
 op_label_aliases : dictionary, optional
Dictionary whose keys are operation label “aliases” and whose values are tuples corresponding to what that operation label should be expanded into before querying the dataset. Defaults to the empty dictionary (no aliases defined), e.g. op_label_aliases[‘Gx^3’] = (‘Gx’,’Gx’,’Gx’).
 mdc_store : ModelDatasetCircuitsStore, optional
An object that bundles cached quantities along with a given model, dataset, and circuit list. If given, model, dataset, and circuits should be set to None.
Returns
float
 pygsti.logl_max_per_circuit(model, dataset, circuits=None, poisson_picture=True, op_label_aliases=None, mdc_store=None)
The vector of maximum log-likelihood contributions for each circuit, aggregated over outcomes.
Parameters
 model : Model
the model, used only for circuit compilation
 dataset : DataSet
the data set to use.
 circuits : list of (tuples or Circuits), optional
Each element specifies a circuit to include in the max-log-likelihood sum. Default value of None implies all the circuits in dataset should be used.
 poisson_picture : boolean, optional
Whether the Poisson-picture maximum log-likelihood should be returned.
 op_label_aliases : dictionary, optional
Dictionary whose keys are operation label “aliases” and whose values are tuples corresponding to what that operation label should be expanded into before querying the dataset. Defaults to the empty dictionary (no aliases defined), e.g. op_label_aliases[‘Gx^3’] = (‘Gx’,’Gx’,’Gx’).
 mdc_store : ModelDatasetCircuitsStore, optional
An object that bundles cached quantities along with a given model, dataset, and circuit list. If given, model, dataset, and circuits should be set to None.
Returns
 numpy.ndarray
Array of length either len(circuits) or len(dataset.keys()). Values are the maximum log-likelihood contributions of the corresponding circuit aggregated over outcomes.
 pygsti.two_delta_logl_nsigma(model, dataset, circuits=None, min_prob_clip=1e-06, prob_clip_interval=(-1000000.0, 1000000.0), radius=0.0001, poisson_picture=True, op_label_aliases=None, dof_calc_method='modeltest', wildcard=None)
See docstring for
pygsti.tools.two_delta_logl()
Parameters
 model : Model
Model of parameterized gates
 dataset : DataSet
Probability data
 circuits : list of (tuples or Circuits), optional
Each element specifies a circuit to include in the log-likelihood sum. Default value of None implies all the circuits in dataset should be used.
 min_prob_clip : float, optional
The minimum probability treated normally in the evaluation of the log-likelihood. A penalty function replaces the true log-likelihood for probabilities that lie below this threshold so that the log-likelihood never becomes undefined (which improves optimizer performance).
 prob_clip_interval : 2-tuple or None, optional
(min, max) values used to clip the probabilities predicted by models during MLEGST’s search for an optimal model (if not None). If None, no clipping is performed.
 radius : float, optional
Specifies the severity of rounding used to “patch” the zero-frequency terms of the log-likelihood.
 poisson_picture : boolean, optional
Whether the log-likelihood-in-the-Poisson-picture terms should be included in the returned logl value.
 op_label_aliases : dictionary, optional
Dictionary whose keys are operation label “aliases” and whose values are tuples corresponding to what that operation label should be expanded into before querying the dataset. Defaults to the empty dictionary (no aliases defined), e.g. op_label_aliases[‘Gx^3’] = (‘Gx’,’Gx’,’Gx’).
 dof_calc_method : {“all”, “modeltest”}
How model’s number of degrees of freedom (parameters) are obtained when computing the number of standard deviations and p-value relative to a chi2_k distribution, where k is the additional degrees of freedom possessed by the maximal model. “all” uses model.num_params whereas “modeltest” uses model.num_modeltest_params (the number of non-gauge parameters by default).
 wildcard : WildcardBudget
A wildcard budget to apply to this log-likelihood computation. This increases the returned log-likelihood value by adjusting (by a maximal amount measured in TVD, given by the budget) the probabilities produced by model to optimally match the data (within the budgetary constraints) when evaluating the log-likelihood.
Returns
float
 pygsti.two_delta_logl(model, dataset, circuits=None, min_prob_clip=1e-06, prob_clip_interval=(-1000000.0, 1000000.0), radius=0.0001, poisson_picture=True, op_label_aliases=None, dof_calc_method=None, wildcard=None, mdc_store=None, comm=None)
Twice the difference between the maximum and actual log-likelihood.
Optionally also can return the Nsigma (# std deviations from mean) and p-value relative to the expected chi^2 distribution (when dof_calc_method is not None).
This function’s arguments are supersets of
logl()
and
logl_max()
. This is a convenience function, equivalent to 2*(logl_max(…) - logl(…)), whose value is what is often called the log-likelihood ratio between the “maximal model” (that which trivially fits the data exactly) and the model given by model.
Parameters
 model : Model
Model of parameterized gates
 dataset : DataSet
Probability data
 circuits : list of (tuples or Circuits), optional
Each element specifies a circuit to include in the log-likelihood sum. Default value of None implies all the circuits in dataset should be used.
 min_prob_clip : float, optional
The minimum probability treated normally in the evaluation of the log-likelihood. A penalty function replaces the true log-likelihood for probabilities that lie below this threshold so that the log-likelihood never becomes undefined (which improves optimizer performance).
 prob_clip_interval : 2-tuple or None, optional
(min, max) values used to clip the probabilities predicted by models during MLEGST’s search for an optimal model (if not None). If None, no clipping is performed.
 radius : float, optional
Specifies the severity of rounding used to “patch” the zero-frequency terms of the log-likelihood.
 poisson_picture : boolean, optional
Whether the log-likelihood-in-the-Poisson-picture terms should be included in the computed log-likelihood values.
 op_label_aliases : dictionary, optional
Dictionary whose keys are operation label “aliases” and whose values are tuples corresponding to what that operation label should be expanded into before querying the dataset. Defaults to the empty dictionary (no aliases defined), e.g. op_label_aliases[‘Gx^3’] = (‘Gx’,’Gx’,’Gx’).
 dof_calc_method : {None, “all”, “modeltest”}
How model’s number of degrees of freedom (parameters) are obtained when computing the number of standard deviations and p-value relative to a chi2_k distribution, where k is the additional degrees of freedom possessed by the maximal model. If None, then Nsigma and p-value are not returned (see below).
 wildcard : WildcardBudget
A wildcard budget to apply to this log-likelihood computation. This increases the returned log-likelihood value by adjusting (by a maximal amount measured in TVD, given by the budget) the probabilities produced by model to optimally match the data (within the budgetary constraints) when evaluating the log-likelihood.
 mdc_store : ModelDatasetCircuitsStore, optional
An object that bundles cached quantities along with a given model, dataset, and circuit list. If given, model, dataset, and circuits should be set to None.
 comm : mpi4py.MPI.Comm, optional
When not None, an MPI communicator for distributing the computation across multiple processors.
Returns
 twoDeltaLogL : float
2*(log-likelihood(maximal_model, data) - log-likelihood(model, data))
 Nsigma, pvalue : float
Only returned when dof_calc_method is not None.
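For a single circuit with strictly positive frequencies and probabilities, the Poisson-picture quantity 2*(logl_max - logl) reduces to a simple closed form. The sketch below is illustrative (it omits the min_prob_clip/radius patching that the real function applies near zero):

```python
import numpy as np

def two_delta_logl_terms(counts, probs):
    """Per-outcome Poisson-picture 2*(logl_max - logl) contributions
    for one circuit.  counts[k] = observed counts for outcome k,
    probs[k] = model-predicted probabilities.

    Assumes all counts and probs are strictly positive; the real pyGSTi
    function patches small/zero probabilities (min_prob_clip, radius).
    """
    n = counts.sum()
    freqs = counts / n
    # 2 * [ N*f*log(f/p) + N*(p - f) ]  per outcome
    return 2.0 * (counts * np.log(freqs / probs) + n * (probs - freqs))

counts = np.array([60.0, 40.0])
terms = two_delta_logl_terms(counts, np.array([0.6, 0.4]))  # perfect fit -> zeros
```

When the model probabilities equal the observed frequencies the contributions vanish, and any mismatch makes the total strictly positive, as expected for a log-likelihood-ratio statistic.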
 pygsti.two_delta_logl_per_circuit(model, dataset, circuits=None, min_prob_clip=1e-06, prob_clip_interval=(-1000000.0, 1000000.0), radius=0.0001, poisson_picture=True, op_label_aliases=None, dof_calc_method=None, wildcard=None, mdc_store=None, comm=None)
Twice the per-circuit difference between the maximum and actual log-likelihood.
Contributions are aggregated over each circuit’s outcomes, but no further.
Optionally (when dof_calc_method is not None) returns parallel vectors containing the Nsigma (# std deviations from mean) and the p-value relative to the expected chi^2 distribution for each sequence.
Parameters
 model : Model
Model of parameterized gates
 dataset : DataSet
Probability data
 circuits : list of (tuples or Circuits), optional
Each element specifies a circuit to include in the log-likelihood sum. Default value of None implies all the circuits in dataset should be used.
 min_prob_clip : float, optional
The minimum probability treated normally in the evaluation of the log-likelihood. A penalty function replaces the true log-likelihood for probabilities that lie below this threshold so that the log-likelihood never becomes undefined (which improves optimizer performance).
 prob_clip_interval : 2-tuple or None, optional
(min, max) values used to clip the probabilities predicted by models during MLEGST’s search for an optimal model (if not None). If None, no clipping is performed.
 radius : float, optional
Specifies the severity of rounding used to “patch” the zero-frequency terms of the log-likelihood.
 poisson_picture : boolean, optional
Whether the log-likelihood-in-the-Poisson-picture terms should be included in the returned logl value.
 op_label_aliases : dictionary, optional
Dictionary whose keys are operation label “aliases” and whose values are tuples corresponding to what that operation label should be expanded into before querying the dataset. Defaults to the empty dictionary (no aliases defined), e.g. op_label_aliases[‘Gx^3’] = (‘Gx’,’Gx’,’Gx’).
 dof_calc_method : {“all”, “modeltest”}
How model’s number of degrees of freedom (parameters) are obtained when computing the number of standard deviations and p-value relative to a chi2_k distribution, where k is the additional degrees of freedom possessed by the maximal model.
 wildcard : WildcardBudget
A wildcard budget to apply to this log-likelihood computation. This increases the returned log-likelihood value by adjusting (by a maximal amount measured in TVD, given by the budget) the probabilities produced by model to optimally match the data (within the budgetary constraints) when evaluating the log-likelihood.
 mdc_store : ModelDatasetCircuitsStore, optional
An object that bundles cached quantities along with a given model, dataset, and circuit list. If given, model, dataset, and circuits should be set to None.
 comm : mpi4py.MPI.Comm, optional
When not None, an MPI communicator for distributing the computation across multiple processors.
Returns
twoDeltaLogL_terms : numpy.ndarray
 Nsigma, pvalue : numpy.ndarray
Only returned when dof_calc_method is not None.
 pygsti.two_delta_logl_term(n, p, f, min_prob_clip=1e-06, poisson_picture=True)
Term of the 2*[log(L)-upper-bound - log(L)] sum corresponding to a single circuit and spam label.
Parameters
 n : float or numpy array
Number of samples.
 p : float or numpy array
Probability of 1st outcome (typically computed).
 f : float or numpy array
Frequency of 1st outcome (typically observed).
 min_prob_clip : float, optional
Minimum probability clip point to avoid evaluating log(number <= zero)
 poisson_picture : boolean, optional
Whether the log-likelihood-in-the-Poisson-picture terms should be included in the returned logl value.
Returns
float or numpy array
 pygsti.create_elementary_errorgen_dual(typ, p, q=None, sparse=False, normalization_factor='auto')
Construct a “dual” elementary error generator matrix in the “standard” (matrix-unit) basis.
The elementary error generator that is dual to the one computed by calling
create_elementary_errorgen()
with the same arguments. This dual element can be used to find the coefficient of the original, or “primal”, elementary generator. For example, if A = sum(c_i * E_i), where the E_i are the elementary error generators given by
create_elementary_errorgen()
, then c_i = dot(D_i.conj(), A), where D_i is the dual to E_i.
There are four different types of dual elementary error generators: ‘H’ (Hamiltonian), ‘S’ (stochastic), ‘C’ (correlation), and ‘A’ (active). See arxiv:2103.01928. Each type transforms an input density matrix differently. The action of an elementary error generator L on an input density matrix rho is given by:
Hamiltonian: L(rho) = -1j/(2d^2) * [ p, rho ]
Stochastic: L(rho) = 1/(d^2) * p * rho * p
Correlation: L(rho) = 1/(2d^2) * ( p * rho * q + q * rho * p )
Active: L(rho) = 1j/(2d^2) * ( p * rho * q - q * rho * p )
where d is the dimension of the Hilbert space, e.g. 2 for a single qubit. Square brackets denote the commutator and curly brackets the anticommutator. L is returned as a superoperator matrix that acts on vectorized density matrices.
Parameters
 typ : {‘H’,’S’,’C’,’A’}
The type of dual error generator to construct.
 p : numpy.ndarray
d-dimensional basis matrix.
 q : numpy.ndarray, optional
d-dimensional basis matrix; must be non-None if and only if typ is ‘C’ or ‘A’.
 sparse : bool, optional
Whether to construct a sparse or dense (the default) matrix.
Returns
ndarray or Scipy CSR matrix
 pygsti.create_elementary_errorgen(typ, p, q=None, sparse=False)
Construct an elementary error generator as a matrix in the “standard” (matrix-unit) basis.
There are four different types of elementary error generators: ‘H’ (Hamiltonian), ‘S’ (stochastic), ‘C’ (correlation), and ‘A’ (active). See arxiv:2103.01928. Each type transforms an input density matrix differently. The action of an elementary error generator L on an input density matrix rho is given by:
Hamiltonian: L(rho) = -1j * [ p, rho ]
Stochastic: L(rho) = p * rho * p - rho
Correlation: L(rho) = p * rho * q + q * rho * p - 0.5 * {{p,q}, rho}
Active: L(rho) = 1j * ( p * rho * q - q * rho * p + 0.5 * {[p,q], rho} )
Square brackets denote the commutator and curly brackets the anticommutator. L is returned as a superoperator matrix that acts on vectorized density matrices.
Parameters
 typ : {‘H’,’S’,’C’,’A’}
The type of error generator to construct.
 p : numpy.ndarray
d-dimensional basis matrix.
 q : numpy.ndarray, optional
d-dimensional basis matrix; must be non-None if and only if typ is ‘C’ or ‘A’.
 sparse : bool, optional
Whether to construct a sparse or dense (the default) matrix.
Returns
ndarray or Scipy CSR matrix
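The "superoperator acting on vectorized density matrices" construction can be illustrated with Kronecker products. This is a sketch, not pyGSTi's source: it assumes row-major vectorization (`rho.flatten()`) and the convention L(rho) = -1j*[p, rho] for the ‘H’ type.

```python
import numpy as np

def hamiltonian_errorgen(p):
    """Superoperator for the 'H'-type action L(rho) = -1j*[p, rho],
    acting on row-major-vectorized density matrices (sketch only)."""
    d = p.shape[0]
    iden = np.eye(d)
    # With row-major vec:  vec(A @ rho) = kron(A, I) @ vec(rho)
    #                      vec(rho @ B) = kron(I, B.T) @ vec(rho)
    return -1j * (np.kron(p, iden) - np.kron(iden, p.T))

Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
L = hamiltonian_errorgen(Z)
# Applying L to vec(X) reproduces -1j*[Z, X], vectorized:
out = (L @ X.flatten()).reshape(2, 2)
```

The same kron identities generate the ‘S’, ‘C’, and ‘A’ types from their density-matrix actions.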
 pygsti.create_lindbladian_term_errorgen(typ, Lm, Ln=None, sparse=False)
Construct the superoperator for a term in the common Lindbladian expansion of an error generator.
Mathematically, for d-dimensional matrices Lm and Ln, this routine constructs the d^2-dimensional Lindbladian matrix L whose action is given by:
L(rho) = -i [ Lm, rho ] (when typ == ‘H’)
or
L(rho) = Ln * rho * Lm^dag - 1/2 * ( rho * Lm^dag * Ln + Lm^dag * Ln * rho ) (when typ == ‘O’)
where rho is a density matrix. L is returned as a superoperator matrix that acts on vectorized density matrices.
Parameters
 typ : {‘H’, ‘O’}
The type of error generator to construct.
 Lm : numpy.ndarray
d-dimensional basis matrix.
 Ln : numpy.ndarray, optional
d-dimensional basis matrix.
 sparse : bool, optional
Whether to construct a sparse or dense (the default) matrix.
Returns
ndarray or Scipy CSR matrix
 pygsti.remove_duplicates_in_place(l, index_to_test=None)
Remove duplicates from the list passed as an argument.
Parameters
 l : list
The list to remove duplicates from.
 index_to_test : int, optional
If not None, the index within the elements of l to test. For example, if all the elements of l are 2-tuples (x,y), then set index_to_test == 1 to remove tuples with duplicate y-values.
Returns
None
 pygsti.remove_duplicates(l, index_to_test=None)
Remove duplicates from a list and return the result.
Parameters
 l : iterable
The list/set to remove duplicates from.
 index_to_test : int, optional
If not None, the index within the elements of l to test. For example, if all the elements of l are 2-tuples (x,y), then set index_to_test == 1 to remove tuples with duplicate y-values.
Returns
 list
the list after duplicates have been removed.
 pygsti.compute_occurrence_indices(lst)
A 0based list of integers specifying which occurrence, i.e. enumerated duplicate, each list item is.
For example, if lst = [ ‘A’,’B’,’C’,’C’,’A’] then the returned list will be [ 0 , 0 , 0 , 1 , 1 ]. This may be useful when working with DataSet objects that have collisionAction set to “keepseparate”.
Parameters
 lst : list
The list to process.
Returns
list
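The occurrence-index computation described above fits in a few lines; this is an illustrative sketch, not the pyGSTi source:

```python
from collections import defaultdict

def occurrence_indices(lst):
    """For each item, return how many identical items precede it."""
    seen = defaultdict(int)
    out = []
    for item in lst:
        out.append(seen[item])   # 0 for the first occurrence, 1 for the second, ...
        seen[item] += 1
    return out

indices = occurrence_indices(['A', 'B', 'C', 'C', 'A'])  # [0, 0, 0, 1, 1]
```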
 pygsti.find_replace_tuple(t, alias_dict)
Replace elements of t according to rules in alias_dict.
Parameters
 t : tuple or list
The object to perform replacements upon.
 alias_dict : dictionary
Dictionary whose keys are potential elements of t and whose values are tuples corresponding to a subsequence that the given element should be replaced with. If None, no replacement is performed.
Returns
tuple
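The alias-expansion behavior documented above can be sketched as follows (illustrative only, not the pyGSTi source): each element found in `alias_dict` is replaced by its expansion subsequence, others pass through unchanged.

```python
def find_replace_tuple_sketch(t, alias_dict):
    """Expand elements of t according to alias_dict; None means no replacement."""
    if alias_dict is None:
        return tuple(t)
    out = []
    for el in t:
        out.extend(alias_dict.get(el, (el,)))   # expand aliases, keep others
    return tuple(out)

expanded = find_replace_tuple_sketch(('Gx^3', 'Gy'), {'Gx^3': ('Gx', 'Gx', 'Gx')})
# -> ('Gx', 'Gx', 'Gx', 'Gy')
```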
 pygsti.find_replace_tuple_list(list_of_tuples, alias_dict)
Applies
find_replace_tuple()
on each element of list_of_tuples.
Parameters
 list_of_tuples : list
A list of tuple objects to perform replacements upon.
 alias_dict : dictionary
Dictionary whose keys are potential elements of t and whose values are tuples corresponding to a subsequence that the given element should be replaced with. If None, no replacement is performed.
Returns
list
 pygsti.apply_aliases_to_circuits(list_of_circuits, alias_dict)
Applies alias_dict to the circuits in list_of_circuits.
Parameters
 list_of_circuits : list
A list of circuits to make replacements in.
 alias_dict : dict
A dictionary whose keys are layer Labels (or equivalent tuples or strings), and whose values are Circuits or tuples of labels.
Returns
list
 pygsti.sorted_partitions(n)
Iterate over all sorted (decreasing) partitions of integer n.
A partition of n here is defined as a list of one or more nonzero integers which sum to n. Sorted partitions (those iterated over here) have their integers in decreasing order.
Parameters
 n : int
The number to partition.
 pygsti.partitions(n)
Iterate over all partitions of integer n.
A partition of n here is defined as a list of one or more nonzero integers which sum to n. Every partition is iterated over exactly once - there are no duplicates/repetitions.
Parameters
 n : int
The number to partition.
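The sorted (decreasing) partitions that sorted_partitions iterates over can be generated recursively. This sketch is illustrative, not the pyGSTi implementation:

```python
def sorted_partitions_sketch(n, max_part=None):
    """Yield partitions of n as tuples in decreasing order of parts."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    # Choose the largest part first, then partition the remainder with
    # parts no larger than it, keeping each tuple sorted decreasingly.
    for first in range(min(n, max_part), 0, -1):
        for rest in sorted_partitions_sketch(n - first, first):
            yield (first,) + rest

parts = list(sorted_partitions_sketch(4))
# [(4,), (3, 1), (2, 2), (2, 1, 1), (1, 1, 1, 1)]
```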
 pygsti.partition_into(n, nbins)
Iterate over all partitions of integer n into nbins bins.
Here, unlike in
partition()
, a “partition” is allowed to contain zeros. For example, (4,1,0) is a valid partition of 5 using 3 bins. This function fixes the number of bins and iterates over all possible length-nbins partitions while allowing zeros. This is equivalent to iterating over all usual partitions of length at most nbins and inserting zeros into all possible places for partitions of length less than nbins.
Parameters
 n : int
The number to partition.
 nbins : int
The fixed number of bins, equal to the length of all the partitions that are iterated over.
 pygsti.incd_product(*args)
Like itertools.product but returns the first modified (incremented) index along with the product tuple itself.
Parameters
 *args : iterables
Any number of iterable things that we’re taking the product of.
 pygsti.lists_to_tuples(obj)
Recursively replaces lists with tuples.
Can be useful for fixing tuples that were serialized to json or mongodb. Recurses on lists, tuples, and dicts within obj.
Parameters
 obj : object
Object to convert.
Returns
object
 pygsti.dot_mod2(m1, m2)
Returns the product over the integers modulo 2 of two matrices.
Parameters
 m1 : numpy.ndarray
First matrix
 m2 : numpy.ndarray
Second matrix
Returns
numpy.ndarray
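Matrix arithmetic over GF(2) reduces to ordinary integer arithmetic followed by a mod-2 reduction. A minimal sketch of this idea (illustrative; not the pyGSTi source):

```python
import numpy as np

def dot_mod2_sketch(m1, m2):
    """Matrix product over the integers modulo 2."""
    return (m1 @ m2) % 2

a = np.array([[1, 1], [0, 1]])
b = np.array([[1, 0], [1, 1]])
c = dot_mod2_sketch(a, b)   # [[0, 1], [1, 1]]
```

A product of many matrices (cf. multidot_mod2) can reduce after each multiplication to keep intermediate entries small.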
 pygsti.multidot_mod2(mlist)
Returns the product over the integers modulo 2 of a list of matrices.
Parameters
 mlist : list
A list of matrices.
Returns
numpy.ndarray
 pygsti.det_mod2(m)
Returns the determinant of a matrix over the integers modulo 2 (GL(n,2)).
Parameters
 m : numpy.ndarray
Matrix to take determinant of.
Returns
numpy.ndarray
 pygsti.matrix_directsum(m1, m2)
Returns the direct sum of two square matrices of integers.
Parameters
 m1 : numpy.ndarray
First matrix
 m2 : numpy.ndarray
Second matrix
Returns
numpy.ndarray
 pygsti.inv_mod2(m)
Finds the inverse of a matrix over GL(n,2).
Parameters
 m : numpy.ndarray
Matrix to take inverse of.
Returns
numpy.ndarray
 pygsti.Axb_mod2(A, b)
Solves Ax = b over GF(2).
Parameters
 A : numpy.ndarray
Matrix to operate on.
 b : numpy.ndarray
Vector to operate on.
Returns
numpy.ndarray
 pygsti.gaussian_elimination_mod2(a)
Gaussian elimination mod 2 of a.
Parameters
 a : numpy.ndarray
Matrix to operate on.
Returns
numpy.ndarray
 pygsti.diagonal_as_vec(m)
Returns a 1D array containing the diagonal of the input square 2D array m.
Parameters
 m : numpy.ndarray
Matrix to operate on.
Returns
numpy.ndarray
 pygsti.strictly_upper_triangle(m)
Returns a matrix containing the strictly upper triangle of m and zeros elsewhere.
Parameters
 m : numpy.ndarray
Matrix to operate on.
Returns
numpy.ndarray
 pygsti.diagonal_as_matrix(m)
Returns a diagonal matrix containing the diagonal of m.
Parameters
 m : numpy.ndarray
Matrix to operate on.
Returns
numpy.ndarray
 pygsti.albert_factor(d, failcount=0, rand_state=None)
Returns a matrix M such that d = M M.T for symmetric d, where d and M are matrices over [0,1] mod 2.
The algorithm mostly follows the proof in “Orthogonal Matrices Over Finite Fields” by Jessie MacWilliams in The American Mathematical Monthly, Vol. 76, No. 2 (Feb., 1969), pp. 152-164.
There is generally not a unique Albert factorization, and this algorithm is randomized. It will generally return a different factorization on each call.
Parameters
 d : array-like
Symmetric matrix mod 2.
 failcount : int, optional
UNUSED.
 rand_state : np.random.RandomState, optional
Random number generator to allow for determinism.
Returns
numpy.ndarray
 pygsti.random_bitstring(n, p, failcount=0, rand_state=None)
Constructs a random bitstring of length n with parity p.
Parameters
 n : int
Number of bits.
 p : int
Parity.
 failcount : int, optional
Internal use only.
 rand_state : np.random.RandomState, optional
Random number generator to allow for determinism.
Returns
numpy.ndarray
 pygsti.random_invertable_matrix(n, failcount=0, rand_state=None)
Finds a random invertible matrix M over GL(n,2).
Parameters
 n : int
matrix dimension
 failcount : int, optional
Internal use only.
 rand_state : np.random.RandomState, optional
Random number generator to allow for determinism.
Returns
numpy.ndarray
 pygsti.random_symmetric_invertable_matrix(n, failcount=0, rand_state=None)
Creates a random, symmetric, invertible matrix from GL(n,2)
Parameters
 n : int
Matrix dimension.
 failcount : int, optional
Internal use only.
 rand_state : np.random.RandomState, optional
Random number generator to allow for determinism.
Returns
numpy.ndarray
 pygsti.onesify(a, failcount=0, maxfailcount=100, rand_state=None)
Returns M such that M a M.T has ones along the main diagonal.
Parameters
 a : numpy.ndarray
The matrix.
 failcount : int, optional
Internal use only.
 maxfailcount : int, optional
Maximum number of tries before giving up.
 rand_state : np.random.RandomState, optional
Random number generator to allow for determinism.
Returns
numpy.ndarray
 pygsti.permute_top(a, i)
Permutes the first row & col with the i’th row & col.
Parameters
 a : numpy.ndarray
The matrix to act on.
 i : int
index to permute with first row/col.
Returns
numpy.ndarray
 pygsti.fix_top(a)
Computes the permutation matrix P such that the [1:t,1:t] submatrix of P a P is invertible.
Parameters
 a : numpy.ndarray
A symmetric binary matrix with ones along the diagonal.
Returns
numpy.ndarray
 pygsti.proper_permutation(a)
Computes the permutation matrix P such that all [n:t,n:t] submatrices of P a P are invertible.
Parameters
 a : numpy.ndarray
A symmetric binary matrix with ones along the diagonal.
Returns
numpy.ndarray
 pygsti.EXPM_DEFAULT_TOL
 pygsti.is_hermitian(mx, tol=1e-09)
Test whether mx is a hermitian matrix.
Parameters
 mx : numpy array
Matrix to test.
 tol : float, optional
Tolerance on absolute magnitude of elements.
Returns
 bool
True if mx is hermitian, otherwise False.
 pygsti.is_pos_def(mx, tol=1e-09)
Test whether mx is a positive-definite matrix.
Parameters
 mx : numpy array
Matrix to test.
 tol : float, optional
Tolerance on absolute magnitude of elements.
Returns
 bool
True if mx is positive-semidefinite, otherwise False.
 pygsti.is_valid_density_mx(mx, tol=1e-09)
Test whether mx is a valid density matrix (hermitian, positive-definite, and unit trace).
Parameters
 mx : numpy array
Matrix to test.
 tol : float, optional
Tolerance on absolute magnitude of elements.
Returns
 bool
True if mx is a valid density matrix, otherwise False.
 pygsti.nullspace(m, tol=1e-07)
Compute the nullspace of a matrix.
Parameters
 m : numpy array
A matrix of shape (M,N) whose nullspace to compute.
 tol : float, optional
Nullspace tolerance, used when comparing singular values with zero.
Returns
A matrix of shape (N,K) whose columns contain nullspace basis vectors.
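The standard SVD-based nullspace computation mentioned above can be sketched in numpy (illustrative; not the pyGSTi source): the right singular vectors whose singular values fall below tol span the nullspace.

```python
import numpy as np

def nullspace_sketch(m, tol=1e-07):
    """Columns of the returned array span the nullspace of m."""
    _, s, vh = np.linalg.svd(m)
    rank = int(np.sum(s > tol))
    # Rows of vh beyond the numerical rank are orthogonal to the row space.
    return vh[rank:].T.conj()

m = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])   # rank 1, so the nullspace is 2-dimensional
ns = nullspace_sketch(m)
```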
 pygsti.nullspace_qr(m, tol=1e-07)
Compute the nullspace of a matrix using the QR decomposition.
The QR decomposition is faster but less accurate than the SVD used by
nullspace()
.
Parameters
 m : numpy array
A matrix of shape (M,N) whose nullspace to compute.
 tol : float, optional
Nullspace tolerance, used when comparing diagonal values of R with zero.
Returns
A matrix of shape (N,K) whose columns contain nullspace basis vectors.
 pygsti.nice_nullspace(m, tol=1e-07, orthogonalize=False)
Computes the nullspace of a matrix, and tries to return a “nice” basis for it.
Columns of the returned value (a basis for the nullspace) each have a maximum absolute value of 1.0 and are chosen so as to align with the original matrix’s basis as much as possible (the basis is found by projecting each original basis vector onto an arbitrarily-found nullspace and keeping only a set of linearly independent projections).
Parameters
 m : numpy array
A matrix of shape (M,N) whose nullspace to compute.
 tol : float, optional
Nullspace tolerance, used when comparing diagonal values of R with zero.
 orthogonalize : bool, optional
If True, the nullspace vectors are additionally orthogonalized.
Returns
A matrix of shape (N,K) whose columns contain nullspace basis vectors.
 pygsti.normalize_columns(m, return_norms=False, ord=None)
Normalizes the columns of a matrix.
Parameters
 m : numpy.ndarray or scipy sparse matrix
The matrix.
 return_norms : bool, optional
If True, also return a 1D array containing the norms of the columns (before they were normalized).
 ord : int or list of ints, optional
The order of the norm. See
numpy.linalg.norm()
. An array of orders can be given to specify the norm on a per-column basis.
Returns
 normalized_m : numpy.ndarray
The matrix after columns are normalized
 column_norms : numpy.ndarray
Only returned when return_norms=True, a 1-dimensional array of the pre-normalization norm of each column.
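A dense-only numpy sketch of this behavior (illustrative; pyGSTi's version also handles sparse matrices and per-column ord values):

```python
import numpy as np

def normalize_columns_sketch(m, return_norms=False, ord=None):
    """Divide each column of m by its norm (dense arrays only)."""
    norms = np.linalg.norm(m, ord=ord, axis=0)
    normalized = m / norms            # broadcasts across columns
    return (normalized, norms) if return_norms else normalized

m = np.array([[3.0, 0.0],
              [4.0, 2.0]])
normalized, norms = normalize_columns_sketch(m, return_norms=True)  # norms: [5., 2.]
```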
 pygsti.column_norms(m, ord=None)
Compute the norms of the columns of a matrix.
Parameters
 m : numpy.ndarray or scipy sparse matrix
The matrix.
 ord : int or list of ints, optional
The order of the norm. See
numpy.linalg.norm()
. An array of orders can be given to specify the norm on a per-column basis.
Returns
 numpy.ndarray
A 1-dimensional array of the column norms (length is number of columns of m).
 pygsti.scale_columns(m, scale_values)
Scale each column of a matrix by a given value.
Usually used for normalization purposes, when the matrix columns represent vectors.
Parameters
 m : numpy.ndarray or scipy sparse matrix
The matrix.
 scale_values : numpy.ndarray
A 1dimensional array of scale values, one per column of m.
Returns
 numpy.ndarray or scipy sparse matrix
A copy of m with scaled columns, possibly with different sparsity structure.
 pygsti.columns_are_orthogonal(m, tol=1e-07)
Checks whether a matrix contains orthogonal columns.
The columns do not need to be normalized. In the complex case, two vectors v and w are considered orthogonal if dot(v.conj(), w) == 0.
Parameters
 m : numpy.ndarray
The matrix to check.
 tol : float, optional
Tolerance for checking whether dot products are zero.
Returns
bool
 pygsti.columns_are_orthonormal(m, tol=1e-07)
Checks whether a matrix contains orthonormal columns.
In the complex case, two vectors v and w are considered orthonormal if dot(v.conj(), w) == 0 and dot(v.conj(), v) == dot(w.conj(), w) == 1.
Parameters
 m : numpy.ndarray
The matrix to check.
 tol : float, optional
Tolerance for checking whether dot products are zero.
Returns
bool
 pygsti.independent_columns(m, initial_independent_cols=None, tol=1e07)
Computes the indices of the linearlyindependent columns in a matrix.
Optionally starts with a “base” matrix of independent columns, so that the returned indices indicate the columns of m that are independent of all the base columns and the other independent columns of m.
Parameters
 m : numpy.ndarray or scipy sparse matrix
The matrix.
 initial_independent_cols : numpy.ndarray or scipy sparse matrix, optional
If not None, a matrix of known-to-be-independent columns that the columns of m are tested against (in addition to the already-chosen independent columns of m).
 tol : float, optional
Tolerance threshold used to decide whether a singular value is nonzero (it is if it is greater than tol).
Returns
 list
A list of the independentcolumn indices of m.
 pygsti.pinv_of_matrix_with_orthogonal_columns(m)
TODO: docstring
 pygsti.matrix_sign(m)
The “sign” matrix of m.
Parameters
 m : numpy.ndarray
the matrix.
Returns
numpy.ndarray
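One common way to realize a matrix sign function is via an eigendecomposition, taking the sign of each eigenvalue. The sketch below assumes a diagonalizable input with real eigenvalues (pyGSTi's matrix_sign may handle more general cases differently):

```python
import numpy as np

def matrix_sign_sketch(m):
    """sign(m) = V * sign(eigenvalues) * V^-1 for diagonalizable m
    with real eigenvalues (illustrative sketch only)."""
    evals, vecs = np.linalg.eig(m)
    return (vecs @ np.diag(np.sign(evals.real)) @ np.linalg.inv(vecs)).real

m = np.array([[2.0, 0.0],
              [1.0, -3.0]])   # eigenvalues 2 and -3
s = matrix_sign_sketch(m)
```

By construction the result is an involution (s @ s is the identity) and commutes with m.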
 pygsti.print_mx(mx, width=9, prec=4, withbrackets=False)
Print matrix in pretty format.
Will print real or complex matrices with a desired precision and “cell” width.
Parameters
 mx : numpy array
the matrix (2D array) to print.
 width : int, optional
the width (in characters) of each printed element
 prec : int, optional
the precision (in characters) of each printed element
 withbrackets : bool, optional
whether to print brackets and commas to make the result something that Python can read back in.
Returns
None
 pygsti.mx_to_string(m, width=9, prec=4, withbrackets=False)
Generate a “prettyformat” string for a matrix.
Will generate strings for real or complex matrices with a desired precision and “cell” width.
Parameters
 m : numpy.ndarray
array to print.
 width : int, optional
the width (in characters) of each converted element
 prec : int, optional
the precision (in characters) of each converted element
 withbrackets : bool, optional
whether to print brackets and commas to make the result something that Python can read back in.
Returns
 string
matrix m as a pretty formatted string.
 pygsti.mx_to_string_complex(m, real_width=9, im_width=9, prec=4)
Generate a “pretty-format” string for a complex-valued matrix.
Parameters
 m : numpy array
Array to format.
 real_width : int, optional
The width (in characters) of the real part of each element.
 im_width : int, optional
The width (in characters) of the imaginary part of each element.
 prec : int, optional
The precision (in characters) of each element’s real and imaginary parts.
Returns
 string
Matrix m as a pretty-formatted string.
 pygsti.unitary_superoperator_matrix_log(m, mx_basis)
Construct the logarithm of superoperator matrix m.
This function assumes that m acts as a unitary on density-matrix space (m: rho -> U rho U^dagger), so that log(m) can be written as the action by a Hamiltonian H:
log(m): rho -> -i[H, rho].
Parameters
 m : numpy array
The superoperator matrix whose logarithm is taken.
 mx_basis : {‘std’, ‘gm’, ‘pp’, ‘qt’} or Basis object
The source and destination basis, respectively. Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt) (or a custom basis object).
Returns
 numpy array
A matrix logM, of the same shape as m, such that m = exp(logM) and logM can be written as the action rho -> -i[H, rho].
 pygsti.near_identity_matrix_log(m, tol=1e-08)
Construct the logarithm of superoperator matrix m that is near the identity.
If m is real, the resulting logarithm will be real.
Parameters
 m : numpy array
The superoperator matrix whose logarithm is taken.
 tol : float, optional
The tolerance used when testing for zero imaginary parts.
Returns
 numpy array
A matrix logM, of the same shape as m, such that m = exp(logM) and logM is real when m is real.
 pygsti.approximate_matrix_log(m, target_logm, target_weight=10.0, tol=1e-06)
Construct an approximate logarithm of superoperator matrix m that is real and near the target_logm.
The equation m = exp(logM) is allowed to become inexact in order to make logM close to target_logm. In particular, the objective function that is minimized is (where |.| indicates the 2-norm):
|exp(logM) - m|_1 + target_weight * |logM - target_logm|^2
Parameters
 m : numpy array
The superoperator matrix whose logarithm is taken.
 target_logm : numpy array
The target logarithm.
 target_weight : float
A weighting factor used to balance the exactness-of-log term with the closeness-to-target term in the optimized objective function. This value multiplies the latter term.
 tol : float, optional
Optimizer tolerance.
Returns
 logM : numpy array
A matrix of the same shape as m.
 pygsti.real_matrix_log(m, action_if_imaginary='raise', tol=1e-08)
Construct a real logarithm of real matrix m.
This is possible when the negative eigenvalues of m come in pairs, so that they can be viewed as complex conjugate pairs.
Parameters
 m : numpy array
The matrix to take the logarithm of.
 action_if_imaginary : {“raise”, ”warn”, ”ignore”}, optional
What action should be taken if a real-valued logarithm cannot be found. “raise” raises a ValueError, “warn” issues a warning, and “ignore” ignores the condition and simply returns the complex-valued result.
 tol : float, optional
An internal tolerance used when testing for equivalence and zero imaginary parts (realness).
Returns
 logM : numpy array
A matrix logM, of the same shape as m, such that m = exp(logM).
 pygsti.column_basis_vector(i, dim)
Returns the ith standard basis vector in dimension dim.
Parameters
 i : int
Basis vector index.
 dim : int
Vector dimension.
Returns
 numpy.ndarray
An array of shape (dim, 1) that is all zeros except for its ith element, which equals 1.
 pygsti.vec(matrix_in)
Stacks the columns of a matrix to return a vector.
Parameters
matrix_in : numpy.ndarray
Returns
numpy.ndarray
 pygsti.unvec(vector_in)
Slices a vector into the columns of a matrix.
Parameters
vector_in : numpy.ndarray
Returns
numpy.ndarray
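The vec/unvec pair above is a column-major (Fortran-order) flatten and its inverse. A minimal sketch, assuming a square matrix for unvec (pyGSTi's own implementations may differ in detail):

```python
import numpy as np

def vec(matrix_in):
    # Stack the columns of matrix_in into a 1D vector (column-major flatten).
    return np.asarray(matrix_in).flatten(order='F')

def unvec(vector_in):
    # Inverse of vec for a square matrix: slice the vector back into columns.
    v = np.asarray(vector_in)
    n = int(round(np.sqrt(v.size)))
    return v.reshape((n, n), order='F')
```

Round-tripping a matrix through vec and unvec returns the original matrix.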
 pygsti.norm1(m)
Returns the 1-norm of a matrix.
Parameters
 m : numpy.ndarray
The matrix.
Returns
numpy.ndarray
 pygsti.random_hermitian(dim)
Generates a random Hermitian matrix.
Parameters
 dim : int
The matrix dimension.
Returns
numpy.ndarray
 pygsti.norm1to1(operator, num_samples=10000, mx_basis='gm', return_list=False)
The Hermitian 1-to-1 norm of a superoperator represented in the standard basis.
This is calculated via Monte Carlo sampling. The definition of the Hermitian 1-to-1 norm can be found in arxiv:1109.6887.
Parameters
 operator : numpy.ndarray
The operator matrix to take the norm of.
 num_samples : int, optional
Number of Monte Carlo samples.
 mx_basis : {‘std’, ‘gm’, ‘pp’, ‘qt’} or Basis
The basis of operator.
 return_list : bool, optional
Whether the entire list of sampled values is returned or just the maximum.
Returns
 float or list
Depends on the value of return_list.
 pygsti.complex_compare(a, b)
Comparison function for complex numbers that compares real part, then imaginary part.
Parameters
a : complex
b : complex
Returns
-1 if a < b
0 if a == b
+1 if a > b
 pygsti.prime_factors(n)
Produces the prime factors of n.
Parameters
 n : int
The number to factorize.
Returns
 list
The prime factors of n.
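Factorization into primes with multiplicity can be sketched by simple trial division (an illustrative stand-in, not necessarily pyGSTi's algorithm):

```python
def prime_factors(n):
    """Return the prime factors of n (with multiplicity) via trial division."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:   # divide out each prime as many times as it occurs
            factors.append(d)
            n //= d
        d += 1
    if n > 1:               # whatever remains is itself prime
        factors.append(n)
    return factors
```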
 pygsti.minweight_match(a, b, metricfn=None, return_pairs=True, pass_indices_to_metricfn=False)
Matches the elements of two vectors, a and b, by minimizing the weight between them.
The weight is defined as the sum of metricfn(x, y) over all (x, y) pairs (x in a and y in b).
Parameters
 a : list or numpy.ndarray
First 1D array to match elements between.
 b : list or numpy.ndarray
Second 1D array to match elements between.
 metricfn : function, optional
A function of two float parameters, x and y, which defines the cost associated with matching x with y. If None, abs(x - y) is used.
 return_pairs : bool, optional
If True, the matching is also returned.
 pass_indices_to_metricfn : bool, optional
If True, the metric function is passed two indices into the a and b arrays, respectively, instead of the values.
Returns
 weight_array : numpy.ndarray
The array of weights corresponding to the min-weight matching. The sum of this array’s elements is the minimized total weight.
 pairs : list
Only returned when return_pairs == True: a list of 2-tuple pairs of indices (ix, iy) giving the indices into a and b respectively of each matched pair. The first (ix) indices will be in continuous ascending order starting at zero.
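Min-weight matching of this kind is an assignment problem, which can be sketched with SciPy's Hungarian-algorithm solver. This simplified version omits the return_pairs and pass_indices_to_metricfn options and is not pyGSTi's implementation:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def minweight_match(a, b, metricfn=None):
    """Match elements of a and b minimizing total metricfn cost.

    Returns (weights, pairs): the per-pair costs of the optimal matching
    and the matched (ix, iy) index pairs.
    """
    if metricfn is None:
        metricfn = lambda x, y: abs(x - y)
    cost = np.array([[metricfn(x, y) for y in b] for x in a])
    rows, cols = linear_sum_assignment(cost)   # Hungarian algorithm
    weights = cost[rows, cols]
    pairs = list(zip(rows.tolist(), cols.tolist()))
    return weights, pairs
```

For a = [0, 10] and b = [11, 1], the optimal matching pairs 0 with 1 and 10 with 11, for a total weight of 2.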
 pygsti.minweight_match_realmxeigs(a, b, metricfn=None, pass_indices_to_metricfn=False, eps=1e-09)
Matches the elements of a and b, whose elements are assumed to be either real or one half of a conjugate pair.
Matching is performed by minimizing the weight between elements, defined as the sum of metricfn(x, y) over all (x, y) pairs (x in a and y in b). If straightforward matching fails to preserve eigenvalue conjugacy relations, then real and conjugate-pair eigenvalues are matched separately to ensure relations are preserved (but this can result in a suboptimal matching). A ValueError is raised when the elements of a and b have incompatible conjugacy structures (numbers of conjugate vs. real pairs).
Parameters
 a : numpy.ndarray
First 1D array to match.
 b : numpy.ndarray
Second 1D array to match.
 metricfn : function, optional
A function of two float parameters, x and y, which defines the cost associated with matching x with y. If None, abs(x - y) is used.
 pass_indices_to_metricfn : bool, optional
If True, the metric function is passed two indices into the a and b arrays, respectively, instead of the values.
 eps : float, optional
Tolerance when checking if eigenvalues are equal to each other.
Returns
 pairs : list
A list of 2-tuple pairs of indices (ix, iy) giving the indices into a and b respectively of each matched pair.
 pygsti.safe_dot(a, b)
Performs dot(a,b) correctly when neither, either, or both arguments are sparse matrices.
Parameters
 a : numpy.ndarray or scipy.sparse matrix
First matrix.
 b : numpy.ndarray or scipy.sparse matrix
Second matrix.
Returns
numpy.ndarray or scipy.sparse matrix
 pygsti.safe_real(a, inplace=False, check=False)
Get the real part of a, where a can be either a dense array or a sparse matrix.
Parameters
 a : numpy.ndarray or scipy.sparse matrix
Array to take the real part of.
 inplace : bool, optional
Whether this operation should be done in place.
 check : bool, optional
If True, raise a ValueError if a has a nonzero imaginary part.
Returns
numpy.ndarray or scipy.sparse matrix
 pygsti.safe_imag(a, inplace=False, check=False)
Get the imaginary part of a, where a can be either a dense array or a sparse matrix.
Parameters
 a : numpy.ndarray or scipy.sparse matrix
Array to take the imaginary part of.
 inplace : bool, optional
Whether this operation should be done in place.
 check : bool, optional
If True, raise a ValueError if a has a nonzero real part.
Returns
numpy.ndarray or scipy.sparse matrix
 pygsti.safe_norm(a, part=None)
Get the Frobenius norm of a matrix or vector a, when it is either a dense array or a sparse matrix.
Parameters
 a : ndarray or scipy.sparse matrix
The matrix or vector to take the norm of.
 part : {None, ’real’, ’imag’}
If not None, return the norm of the real or imaginary part of a.
Returns
float
 pygsti.safe_onenorm(a)
Computes the 1norm of the dense or sparse matrix a.
Parameters
 a : ndarray or sparse matrix
The matrix or vector to take the norm of.
Returns
float
 pygsti.csr_sum_indices(csr_matrices)
Precomputes the indices needed to sum a set of CSR sparse matrices.
Computes the index arrays needed for use in csr_sum(), along with the index-pointer and column-indices arrays for constructing a “template” CSR matrix to be the destination of csr_sum.
Parameters
 csr_matrices : list
The SciPy CSR matrices to be summed.
Returns
 ind_arrays : list
A list of numpy arrays giving the destination data-array indices of each element of csr_matrices.
 indptr, indices : numpy.ndarray
The row-pointer and column-indices arrays specifying the sparsity structure of the destination CSR matrix.
 N : int
The dimension of the destination matrix (and of each member of csr_matrices).
 pygsti.csr_sum(data, coeffs, csr_mxs, csr_sum_indices)
Accelerated summation of several CSR-format sparse matrices.
csr_sum_indices() precomputes the necessary indices for summing directly into the data array of a destination CSR sparse matrix. If data is the data array of matrix D (for “destination”), then this method performs:
D += sum_i( coeffs[i] * csr_mxs[i] )
Note that D is not returned; the sum is done internally into D’s data array.
Parameters
 data : numpy.ndarray
The data array of the destination CSR matrix.
 coeffs : iterable
The weight coefficients which multiply each summed matrix.
 csr_mxs : iterable
A list of CSR matrix objects whose data array is given by obj.data (e.g. a SciPy CSR sparse matrix).
 csr_sum_indices : list
A list of precomputed index arrays as returned by csr_sum_indices().
Returns
None
 pygsti.csr_sum_flat_indices(csr_matrices)
Precomputes quantities allowing fast computation of linear combinations of CSR sparse matrices.
The returned quantities can later be used to quickly compute a linear combination of the CSR sparse matrices csr_matrices.
Computes the index and data arrays needed for use in csr_sum_flat(), along with the index-pointer and column-indices arrays for constructing a “template” CSR matrix to be the destination of csr_sum_flat.
Parameters
 csr_matrices : list
The SciPy CSR matrices to be summed.
Returns
 flat_dest_index_array : numpy array
A 1D array of one element per nonzero element in any of csr_matrices, giving the destination index of that element.
 flat_csr_mx_data : numpy array
A 1D array of the same length as flat_dest_index_array, which simply concatenates the data arrays of csr_matrices.
 mx_nnz_indptr : numpy array
A 1D array of length len(csr_matrices)+1 such that the data for the i-th element of csr_matrices lie in the index range mx_nnz_indptr[i] to mx_nnz_indptr[i+1]-1 of the flat arrays.
 indptr, indices : numpy.ndarray
The row-pointer and column-indices arrays specifying the sparsity structure of the destination CSR matrix.
 N : int
The dimension of the destination matrix (and of each member of csr_matrices).
 pygsti.csr_sum_flat(data, coeffs, flat_dest_index_array, flat_csr_mx_data, mx_nnz_indptr)
Computes the summation of several CSR-format sparse matrices.
csr_sum_flat_indices() precomputes the necessary indices for summing directly into the data array of a destination CSR sparse matrix. If data is the data array of matrix D (for “destination”), then this method performs:
D += sum_i( coeffs[i] * csr_mxs[i] )
Note that D is not returned; the sum is done internally into D’s data array.
Parameters
 data : numpy.ndarray
The data array of the destination CSR matrix.
 coeffs : ndarray
The weight coefficients which multiply each summed matrix.
 flat_dest_index_array : ndarray
The index array generated by csr_sum_flat_indices().
 flat_csr_mx_data : ndarray
The data array generated by csr_sum_flat_indices().
 mx_nnz_indptr : ndarray
The number-of-nonzero-elements pointer array generated by csr_sum_flat_indices().
Returns
None
 pygsti.expm_multiply_prep(a, tol=EXPM_DEFAULT_TOL)
Computes “prepared” meta-info about matrix a, to be used in expm_multiply_fast.
This includes a shifted version of a.
Parameters
 a : numpy.ndarray
The matrix that will later be exponentiated.
 tol : float, optional
Tolerance used within matrix exponentiation routines.
Returns
 tuple
A tuple of values to pass to expm_multiply_fast.
 pygsti.expm_multiply_fast(prep_a, v, tol=EXPM_DEFAULT_TOL)
Multiplies v by an exponentiated matrix.
Parameters
 prep_a : tuple
A tuple of values from expm_multiply_prep() that defines the matrix to be exponentiated and holds other precomputed quantities.
 v : numpy.ndarray
Vector to multiply (take dot product with).
 tol : float, optional
Tolerance used within matrix exponentiation routines.
Returns
numpy.ndarray
 pygsti.expop_multiply_prep(op, a_1_norm=None, tol=EXPM_DEFAULT_TOL)
Returns “prepared” meta-info about operation op, which is assumed to be traceless (so no shift is needed).
Used as input for use with _custom_expm_multiply_simple_core or fast C-reps.
Parameters
 op : scipy.sparse.linalg.LinearOperator
The operator to exponentiate.
 a_1_norm : float, optional
The 1-norm (if computed separately) of op.
 tol : float, optional
Tolerance used within matrix exponentiation routines.
Returns
 tuple
A tuple of values to pass to expm_multiply_fast.
 pygsti.sparse_equal(a, b, atol=1e-08)
Checks whether two SciPy sparse matrices are (almost) equal.
Parameters
 a : scipy.sparse matrix
First matrix.
 b : scipy.sparse matrix
Second matrix.
 atol : float, optional
The tolerance to use, passed to numpy.allclose, when comparing the elements of a and b.
Returns
bool
 pygsti.sparse_onenorm(a)
Computes the 1-norm of the SciPy sparse matrix a.
Parameters
 a : scipy sparse matrix
The matrix or vector to take the norm of.
Returns
float
 pygsti.ndarray_base(a, verbosity=0)
Get the base memory object for numpy array a.
This is found by following .base until it comes up None.
Parameters
 a : numpy.ndarray
Array to get the base of.
 verbosity : int, optional
Print additional debugging information if this is > 0.
Returns
numpy.ndarray
 pygsti.to_unitary(scaled_unitary)
Compute the scaling factor required to turn a scalar multiple of a unitary matrix into a unitary matrix.
Parameters
 scaled_unitary : ndarray
A scaled unitary matrix.
Returns
 scale : float
 unitary : ndarray
Such that scale * unitary == scaled_unitary.
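Because M = scale * U with U unitary implies M^dagger M = scale^2 * I, the scale is recoverable from any diagonal element of M^dagger M. A minimal sketch of that idea (not pyGSTi's implementation):

```python
import numpy as np

def to_unitary(scaled_unitary):
    """Split a scaled unitary M = scale * U into (scale, U).

    Uses M^dag M = scale^2 * I, so scale is the square root of any
    diagonal element of M^dag M.
    """
    m = np.asarray(scaled_unitary)
    scale = np.sqrt(abs((m.conj().T @ m)[0, 0]))
    return scale, m / scale
```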
 pygsti.sorted_eig(mx)
Similar to numpy.linalg.eig, but returns sorted output.
In particular, the eigenvalues and eigenvectors are sorted by eigenvalue, where sorting is done according to the (real_part, imaginary_part) tuple.
Parameters
 mx : numpy.ndarray
Matrix to act on.
Returns
eigenvalues : numpy.ndarray
eigenvectors : numpy.ndarray
 pygsti.compute_kite(eigenvalues)
Computes the “kite” corresponding to a list of eigenvalues.
The kite is defined as a list of integers, each indicating that there is a degenerate block of that many eigenvalues within eigenvalues. Thus the sum of the list values equals len(eigenvalues).
Parameters
 eigenvalues : numpy.ndarray
A sorted array of eigenvalues.
Returns
 list
A list giving the multiplicity structure of eigenvalues.
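Since the input is sorted, the kite can be sketched as a single pass that counts runs of (nearly) equal eigenvalues. An illustrative version with an assumed degeneracy tolerance, not pyGSTi's implementation:

```python
import numpy as np

def compute_kite(eigenvalues, tol=1e-9):
    """Group a *sorted* eigenvalue array into degenerate-block sizes.

    The returned block sizes sum to len(eigenvalues).
    """
    kite = []
    blk = 1
    for i in range(1, len(eigenvalues)):
        if abs(eigenvalues[i] - eigenvalues[i - 1]) < tol:
            blk += 1            # still inside the current degenerate block
        else:
            kite.append(blk)    # close the block and start a new one
            blk = 1
    if len(eigenvalues) > 0:
        kite.append(blk)
    return kite
```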
 pygsti.find_zero_communtant_connection(u, u_inv, u0, u0_inv, kite)
Find a matrix R such that u_inv R u0 is diagonal AND log(R) has no projection onto the commutant of G0.
More specifically, find a matrix R such that u_inv R u0 is diagonal (so G = R G0 R^-1 if G and G0 share the same eigenvalues and have eigenvectors u and u0, respectively) AND log(R) has no (zero) projection onto the commutant of G0 = u0 diag(evals) u0_inv.
Parameters
 u : numpy.ndarray
Usually the eigenvector matrix of a gate (G).
 u_inv : numpy.ndarray
Inverse of u.
 u0 : numpy.ndarray
Usually the eigenvector matrix of the corresponding target gate (G0).
 u0_inv : numpy.ndarray
Inverse of u0.
 kite : list
The kite structure of u0.
Returns
numpy.ndarray
 pygsti.project_onto_kite(mx, kite)
Project mx onto kite, so mx is zero everywhere except on the kite.
Parameters
 mx : numpy.ndarray
Matrix to project.
 kite : list
A kite structure.
Returns
numpy.ndarray
 pygsti.project_onto_antikite(mx, kite)
Project mx onto the complement of kite, so mx is zero everywhere on the kite.
Parameters
 mx : numpy.ndarray
Matrix to project.
 kite : list
A kite structure.
Returns
numpy.ndarray
 pygsti.remove_dependent_cols(mx, tol=1e-07)
Removes the linearly dependent columns of a matrix.
Parameters
 mx : numpy.ndarray
The input matrix.
 tol : float, optional
Tolerance threshold used to decide whether a singular value is nonzero.
Returns
A linearly independent subset of the columns of mx.
 pygsti.intersection_space(space1, space2, tol=1e-07, use_nice_nullspace=False)
TODO: docstring
 pygsti.union_space(space1, space2, tol=1e-07)
TODO: docstring
 pygsti.jamiolkowski_angle(hamiltonian_mx)
TODO: docstring
 pygsti.zvals_to_dense(self, zvals, superket=True)
Construct the dense operator or superoperator representation of a computational basis state.
Parameters
 zvals : list or numpy.ndarray
The z-values, each 0 or 1, defining the computational basis state.
 superket : bool, optional
If True, the superket representation of the state is returned. If False, the complex ket representation is returned.
Returns
numpy.ndarray
 pygsti.int64_parity(x)
Compute the parity of x.
Recursively divide a (64-bit) integer x into two equal halves and take their XOR until only 1 bit is left.
Parameters
x : int64
Returns
int64
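The halve-and-XOR folding described above can be sketched directly (an illustrative version for nonnegative inputs, not pyGSTi's implementation):

```python
def int64_parity(x):
    """Bit parity of a 64-bit word: 1 if the popcount is odd, else 0.

    Fold the word in half with XOR repeatedly until one bit remains.
    """
    x ^= x >> 32
    x ^= x >> 16
    x ^= x >> 8
    x ^= x >> 4
    x ^= x >> 2
    x ^= x >> 1
    return x & 1
```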
 pygsti.zvals_int64_to_dense(zvals_int, nqubits, outvec=None, trust_outvec_sparsity=False, abs_elval=None)
Fills a dense array with the superket representation of a computational basis state.
Parameters
 zvals_int : int64
The array of (up to 64) z-values, encoded as the 0s and 1s in the binary representation of this integer.
 nqubits : int
The number of z-values (up to 64).
 outvec : numpy.ndarray, optional
The output array, which must be a 1D array of length 4**nqubits, or None, in which case a new array is allocated.
 trust_outvec_sparsity : bool, optional
When True, it is assumed that the provided outvec starts as all zeros, so only the nonzero elements of outvec need to be set.
 abs_elval : float
The value 1 / (sqrt(2)**nqubits), which can be passed here so that it doesn’t need to be recomputed on every call to this function. If None, the value is computed.
Returns
numpy.ndarray
 pygsti.sign_fix_qr(q, r, tol=1e-06)
Change the signs of the columns of Q and rows of R to follow a convention.
Flips the signs of Q-columns and R-rows from a QR decomposition so that the largest absolute element in each Q-column is positive. This is an arbitrary but consistent convention that resolves the sign ambiguity in the output of a QR decomposition.
Parameters
 q, r : numpy.ndarray
Input Q and R matrices from a QR decomposition.
 tol : float, optional
Tolerance for computing the maximum element, i.e., anything within tol of the true max is counted as a maximal element, the first of which is made positive by the convention.
Returns
 qq, rrnumpy.ndarray
Updated versions of q and r.
 pygsti.IMAG_TOL = 1e-07
 pygsti.fidelity(a, b)
Returns the quantum state fidelity between density matrices.
This is given by:
F = Tr( sqrt{ sqrt(a) * b * sqrt(a) } )^2
To compute the process fidelity, pass this function the Choi matrices of the two processes, or just call entanglement_fidelity() with the operation matrices.
Parameters
 a : numpy array
First density matrix.
 b : numpy array
Second density matrix.
Returns
 float
The resulting fidelity.
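The fidelity formula above can be sketched with scipy.linalg.sqrtm (a minimal illustration for well-conditioned density matrices; pyGSTi's implementation handles edge cases this sketch does not):

```python
import numpy as np
from scipy.linalg import sqrtm

def fidelity(a, b):
    """F = Tr( sqrt( sqrt(a) @ b @ sqrt(a) ) )^2 for density matrices a, b."""
    sqrt_a = sqrtm(a)
    inner = sqrtm(sqrt_a @ b @ sqrt_a)
    # Discard numerical imaginary residue before squaring the trace.
    return float(np.real(np.trace(inner)) ** 2)
```

For commuting (e.g. diagonal) states this reduces to the classical Bhattacharyya overlap (sum_i sqrt(p_i q_i))^2.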
 pygsti.frobeniusdist(a, b)
Returns the Frobenius distance between arrays: ||a - b||_Fro.
This could be inlined, but we’re keeping it for API consistency with other distance functions.
Parameters
 a : numpy array
First matrix.
 b : numpy array
Second matrix.
Returns
 float
The resulting Frobenius distance.
 pygsti.frobeniusdist_squared(a, b)
Returns the square of the Frobenius distance between arrays: ||a - b||_Fro^2.
This could be inlined, but we’re keeping it for API consistency with other distance functions.
Parameters
 a : numpy array
First matrix.
 b : numpy array
Second matrix.
Returns
 float
The resulting squared Frobenius distance.
 pygsti.residuals(a, b)
Calculate the residuals between the elements of two matrices.
Parameters
 a : numpy array
First matrix.
 b : numpy array
Second matrix.
Returns
 np.array
residuals
 pygsti.tracenorm(a)
Compute the trace norm of matrix a, given by:
Tr( sqrt{ a^dagger * a } )
Parameters
 a : numpy array
The matrix to compute the trace norm of.
Returns
float
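Since Tr( sqrt(a^dagger a) ) equals the sum of the singular values of a, the trace norm can be sketched in one line (an illustration, not pyGSTi's implementation):

```python
import numpy as np

def tracenorm(a):
    """Trace norm (nuclear norm) of a: the sum of its singular values."""
    return float(np.sum(np.linalg.svd(a, compute_uv=False)))
```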
 pygsti.tracedist(a, b)
Compute the trace distance between matrices.
This is given by:
D = 0.5 * Tr( sqrt{ (a-b)^dagger * (a-b) } )
Parameters
 a : numpy array
First matrix.
 b : numpy array
Second matrix.
Returns
float
 pygsti.diamonddist(a, b, mx_basis='pp', return_x=False)
Returns the approximate diamond norm describing the difference between gate matrices.
This is given by:
D = ||a - b||_diamond = sup_rho || AxI(rho) - BxI(rho) ||_1
Parameters
 a : numpy array
First matrix.
 b : numpy array
Second matrix.
 mx_basis : Basis object
The source and destination basis, respectively. Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt) (or a custom basis object).
 return_x : bool, optional
Whether to return a numpy array encoding the state (rho) at which the maximal trace distance occurs.
Returns
 dm : float
Diamond norm.
 W : numpy array
Only returned if return_x = True. Encodes the state rho, such that dm = trace( (J(a)-J(b)).T * W ).
 pygsti.jtracedist(a, b, mx_basis='pp')
Compute the Jamiolkowski trace distance between operation matrices.
This is given by:
D = 0.5 * Tr( sqrt{ (J(a)-J(b))^2 } )
where J(.) is the Jamiolkowski isomorphism map that maps an operation matrix to its corresponding Choi matrix.
Parameters
 a : numpy array
First matrix.
 b : numpy array
Second matrix.
 mx_basis : {‘std’, ‘gm’, ‘pp’, ‘qt’} or Basis object
The source and destination basis, respectively. Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt) (or a custom basis object).
Returns
float
 pygsti.entanglement_fidelity(a, b, mx_basis='pp', is_tp=None, is_unitary=None)
Returns the “entanglement” process fidelity between gate matrices.
This is given by:
F = Tr( sqrt{ sqrt(J(a)) * J(b) * sqrt(J(a)) } )^2
where J(.) is the Jamiolkowski isomorphism map that maps an operation matrix to its corresponding Choi matrix.
When both of the input matrices a and b are TP and the target matrix b is unitary, then we can use the more efficient formula:
F = Tr(a @ b.conjugate().T) / d^2
Parameters
 a : array or gate
The gate to compute the entanglement fidelity to b of. E.g., an imperfect implementation of b.
 b : array or gate
The gate to compute the entanglement fidelity to a of. E.g., the target gate corresponding to a.
 mx_basis : {‘std’, ‘gm’, ‘pp’, ‘qt’} or Basis object
The basis of the matrices. Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt) (or a custom basis object).
 is_tp : bool, optional (default None)
Flag indicating both matrices are TP. If None (the default), an explicit check is performed. If True/False, the check is skipped and the provided value is used (faster, but should only be used when the user is certain this is true a priori).
 is_unitary : bool, optional (default None)
Flag indicating that the second matrix, b, is unitary. If None (the default), an explicit check is performed. If True/False, the check is skipped and the provided value is used (faster, but should only be used when the user is certain this is true a priori).
Returns
float
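The TP/unitary fast path above can be sketched directly. The helper name is hypothetical, and the sketch assumes a and b are d^2 x d^2 superoperator matrices in a suitable basis:

```python
import numpy as np

def entanglement_fidelity_tp_unitary(a, b):
    """Fast-path entanglement fidelity F = Tr(a @ b^dag) / d^2,
    valid when a is TP and the target b is unitary."""
    d2 = a.shape[0]  # the superoperator dimension d^2
    return float(np.real(np.trace(a @ b.conj().T)) / d2)
```

For example, the identity channel compared against itself gives fidelity 1.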
 pygsti.average_gate_fidelity(a, b, mx_basis='pp', is_tp=None, is_unitary=None)
Computes the average gate fidelity (AGF) between two gates.
Average gate fidelity (F_g) is related to entanglement fidelity (F_p), via:
F_g = (d * F_p + 1)/(1 + d),
where d is the Hilbert space dimension. This formula, and the definition of AGF, can be found in Phys. Lett. A 303 249-252 (2002).
Parameters
 a : array or gate
The gate to compute the AGF to b of. E.g., an imperfect implementation of b.
 b : array or gate
The gate to compute the AGF to a of. E.g., the target gate corresponding to a.
 mx_basis : {“std”, ”gm”, ”pp”} or Basis object, optional
The basis of the matrices.
 is_tp : bool, optional (default None)
Flag indicating both matrices are TP. If None (the default), an explicit check is performed. If True/False, the check is skipped and the provided value is used (faster, but should only be used when the user is certain this is true a priori).
 is_unitary : bool, optional (default None)
Flag indicating that the second matrix, b, is unitary. If None (the default), an explicit check is performed. If True/False, the check is skipped and the provided value is used (faster, but should only be used when the user is certain this is true a priori).
Returns
 AGF : float
The AGF of a to b.
 pygsti.average_gate_infidelity(a, b, mx_basis='pp', is_tp=None, is_unitary=None)
Computes the average gate infidelity (AGI) between two gates.
Average gate infidelity is related to entanglement infidelity (EI) via:
1 - AGI = (d * (1 - EI) + 1)/(1 + d), i.e. AGI = d * EI/(1 + d),
where d is the Hilbert space dimension. This relation, and the definition of AGI, can be found in Phys. Lett. A 303 249-252 (2002).
Parameters
 a : array or gate
The gate to compute the AGI to b of. E.g., an imperfect implementation of b.
 b : array or gate
The gate to compute the AGI to a of. E.g., the target gate corresponding to a.
 mx_basis : {“std”, ”gm”, ”pp”} or Basis object, optional
The basis of the matrices.
 is_tp : bool, optional (default None)
Flag indicating both matrices are TP. If None (the default), an explicit check is performed. If True/False, the check is skipped and the provided value is used (faster, but should only be used when the user is certain this is true a priori).
 is_unitary : bool, optional (default None)
Flag indicating that the second matrix, b, is unitary. If None (the default), an explicit check is performed. If True/False, the check is skipped and the provided value is used (faster, but should only be used when the user is certain this is true a priori).
Returns
float
 pygsti.entanglement_infidelity(a, b, mx_basis='pp', is_tp=None, is_unitary=None)
Returns the entanglement infidelity (EI) between gate matrices.
This is given by:
EI = 1 - Tr( sqrt{ sqrt(J(a)) * J(b) * sqrt(J(a)) } )^2
where J(.) is the Jamiolkowski isomorphism map that maps an operation matrix to its corresponding Choi matrix.
Parameters
 a : numpy array
First matrix.
 b : numpy array
Second matrix.
 mx_basis : {‘std’, ‘gm’, ‘pp’, ‘qt’} or Basis object
The basis of the matrices. Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt) (or a custom basis object).
 is_tp : bool, optional (default None)
Flag indicating both matrices are TP. If None (the default), an explicit check is performed. If True/False, the check is skipped and the provided value is used (faster, but should only be used when the user is certain this is true a priori).
 is_unitary : bool, optional (default None)
Flag indicating that the second matrix, b, is unitary. If None (the default), an explicit check is performed. If True/False, the check is skipped and the provided value is used (faster, but should only be used when the user is certain this is true a priori).
Returns
 EIfloat
The EI of a to b.
 pygsti.gateset_infidelity(model, target_model, itype='EI', weights=None, mx_basis=None, is_tp=None, is_unitary=None)
Computes the average-over-gates of the infidelity between gates in model and the gates in target_model.
If itype is ‘EI’ then the “infidelity” is the entanglement infidelity; if itype is ‘AGI’ then the “infidelity” is the average gate infidelity (AGI and EI are related by a dimension-dependent constant).
This is the quantity that RB error rates are sometimes claimed to be directly related to.
Parameters
 model : Model
The model to calculate the average infidelity, to target_model, of.
 target_model : Model
The model to calculate the average infidelity, to model, of.
 itype : str, optional
The infidelity type. Either ‘EI’, corresponding to entanglement infidelity, or ‘AGI’, corresponding to average gate infidelity.
 weights : dict, optional
If not None, a dictionary of floats, whereby the keys are the gates in model and the values are, possibly unnormalized, probabilities. These probabilities correspond to the weighting in the average, so if the model contains gates A and B and weights[A] = 2 and weights[B] = 1, then the output is Inf(A)*2/3 + Inf(B)/3, where Inf(X) is the infidelity (to the corresponding element in the other model) of X. If None, a uniform average is taken, equivalent to setting all the weights to 1.
 mx_basis : {“std”, ”gm”, ”pp”} or Basis object, optional
The basis of the models. If None, the basis is obtained from the model.
 is_tp : bool, optional (default None)
Flag indicating both matrices are TP. If None (the default), an explicit check is performed. If True/False, the check is skipped and the provided value is used (faster, but should only be used when the user is certain this is true a priori).
 is_unitary : bool, optional (default None)
Flag indicating that the second matrix, b, is unitary. If None (the default), an explicit check is performed. If True/False, the check is skipped and the provided value is used (faster, but should only be used when the user is certain this is true a priori).
Returns
 float
The weighted averageovergates infidelity between the two models.
 pygsti.unitarity(a, mx_basis='gm')
Returns the “unitarity” of a channel.
Unitarity is defined as in Wallman et al., “Estimating the coherence of noise”, NJP 17 113020 (2015). The unitarity is given by (Prop. 1 in Wallman et al.):
u(a) = Tr( A_u^{dagger} A_u ) / (d^2 - 1),
where A_u is the unital submatrix of a and d is the dimension of the Hilbert space. When a is written in any basis for which the first element is the normalized identity (e.g., the pp or gm bases), the unital submatrix of a is the matrix obtained when the top row and left-hand column are removed from a.
Parameters
 a : array or gate
The gate for which the unitarity is to be computed.
 mx_basis : {“std”, ”gm”, ”pp”} or a Basis object, optional
The basis of the matrix.
Returns
float
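The formula above translates directly into code. A minimal sketch, assuming a is already a d^2 x d^2 superoperator in a basis whose first element is the normalized identity (not pyGSTi's implementation, which also handles basis conversion):

```python
import numpy as np

def unitarity(a):
    """u(a) = Tr(A_u^dag A_u) / (d^2 - 1), with A_u the unital submatrix
    (a with its first row and first column removed)."""
    a_u = np.asarray(a)[1:, 1:]
    # a_u is (d^2 - 1) x (d^2 - 1), so its dimension is the denominator.
    return float(np.real(np.trace(a_u.conj().T @ a_u)) / a_u.shape[0])
```

For the identity superoperator the unital submatrix is the identity, giving unitarity 1.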
 pygsti.fidelity_upper_bound(operation_mx)
Get an upper bound on the fidelity of the given operation matrix with any unitary operation matrix.
The closeness of the result to one tells how “unitary” the action of operation_mx is.
Parameters
 operation_mx : numpy array
The operation matrix to act on.
Returns
 float
The resulting upper bound on fidelity(operation_mx, anyUnitaryGateMx)
 pygsti.compute_povm_map(model, povmlbl)
Constructs a gate-like quantity for the POVM within model.
This is done by embedding the k-outcome classical output space of the POVM in the Hilbert-Schmidt space of k-by-k density matrices by placing the classical probability distribution along the diagonal of the density matrix. Currently, this is only implemented for the case when k equals d, the dimension of the POVM’s Hilbert space.
Parameters
 model : Model
The model supplying the POVM effect vectors and the basis those vectors are in.
 povmlbl : str
The POVM label.
Returns
 numpy.ndarray
The matrix of the “POVM map” in the model.basis basis.
 pygsti.povm_fidelity(model, target_model, povmlbl)
Computes the process (entanglement) fidelity between POVM maps.
Parameters
 model : Model
The model the POVM belongs to.
 target_model : Model
The target model (which also has povmlbl).
 povmlbl : Label
Label of the POVM to get the fidelity of.
Returns
float
 pygsti.povm_jtracedist(model, target_model, povmlbl)
Computes the Jamiolkowski trace distance between POVM maps using jtracedist().
Parameters
 model : Model
The model the POVM belongs to.
 target_model : Model
The target model (which also has povmlbl).
 povmlbl : Label
Label of the POVM to get the trace distance of.
Returns
float
 pygsti.povm_diamonddist(model, target_model, povmlbl)
Computes the diamond distance between POVM maps using
diamonddist().
Parameters
 model : Model
The model the POVM belongs to.
 target_model : Model
The target model (which also has povmlbl).
 povmlbl : Label
Label of the POVM to get the diamond distance of.
Returns
float
 pygsti.decompose_gate_matrix(operation_mx)
Decompose a gate matrix into fixed points, axes of rotation, angles of rotation, and decay rates.
This function computes how the action of an operation matrix can be decomposed into fixed points, axes of rotation, angles of rotation, and decays. It also determines whether a gate appears to be valid and/or unitary.
Parameters
 operation_mx : numpy array
The operation matrix to act on.
Returns
 dict
A dictionary describing the decomposed action. Keys are:
 ‘isValid’ : bool
whether decomposition succeeded
 ‘isUnitary’ : bool
whether operation_mx describes unitary action
 ‘fixed point’ : numpy array
the fixed point of the action
 ‘axis of rotation’ : numpy array or nan
the axis of rotation
 ‘decay of diagonal rotation terms’ : float
decay of diagonal terms
 ‘rotating axis 1’ : numpy array or nan
1st axis orthogonal to axis of rotation
 ‘rotating axis 2’ : numpy array or nan
2nd axis orthogonal to axis of rotation
 ‘decay of off diagonal rotation terms’ : float
decay of off-diagonal terms
 ‘pi rotations’ : float
angle of rotation in units of pi radians
 pygsti.state_to_dmvec(psi)
Compute the vectorized density matrix which acts as the state psi.
This is just the outer product map |psi> => |psi><psi| with the output flattened, i.e. dot(psi, conjugate(psi).T).
Parameters
 psi : numpy array
The state vector.
Returns
 numpy array
The vectorized density matrix.
 pygsti.dmvec_to_state(dmvec, tol=1e-06)
Compute the pure state describing the action of density matrix vector dmvec.
If dmvec represents a mixed state, ValueError is raised.
Parameters
 dmvec : numpy array
The vectorized density matrix, assumed to be in the standard (matrix unit) basis.
 tol : float, optional
tolerance for determining whether an eigenvalue is zero.
Returns
 numpy array
The pure state, as a column vector of shape = (N,1)
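The state_to_dmvec / dmvec_to_state pair form a round trip on pure states (up to global phase). A minimal NumPy sketch of the math described above, not pyGSTi's implementation:

```python
import numpy as np

def state_to_dmvec(psi):
    # |psi><psi| flattened row-major into a length-d^2 vector (std basis)
    psi = np.asarray(psi).reshape(-1, 1)
    return (psi @ psi.conj().T).flatten()

def dmvec_to_state(dmvec, tol=1e-6):
    # Recover the pure state from a vectorized density matrix; raise if mixed.
    d = int(round(np.sqrt(len(dmvec))))
    dm = np.asarray(dmvec).reshape(d, d)
    evals, evecs = np.linalg.eigh(dm)
    if np.count_nonzero(np.abs(evals) > tol) != 1:
        raise ValueError("dmvec does not correspond to a pure state")
    return evecs[:, [int(np.argmax(evals))]]      # column vector, shape (d, 1)

psi = np.array([1.0, 1.0j]) / np.sqrt(2)          # the |+i> state
psi_back = dmvec_to_state(state_to_dmvec(psi))
# psi_back equals psi up to an (unobservable) global phase
```

A maximally mixed input, e.g. the flattened matrix diag(0.5, 0.5), raises ValueError as the docstring describes.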
 pygsti.unitary_to_superop(u, superop_mx_basis='pp')
TODO: docstring
 pygsti.unitary_to_process_mx(u)
 pygsti.unitary_to_std_process_mx(u)
Compute the superoperator corresponding to unitary matrix u.
Computes a superoperator (that acts on (row)vectorized density matrices) from a unitary operator (matrix) u which acts on state vectors. This superoperator is given by the tensor product of u and conjugate(u), i.e. kron(u,u.conj).
Parameters
 u : numpy array
The unitary matrix which acts on state vectors.
Returns
 numpy array
The superoperator process matrix.
 pygsti.superop_is_unitary(superop_mx, mx_basis='pp', rank_tol=1e-06)
TODO: docstring
 pygsti.superop_to_unitary(superop_mx, mx_basis='pp', check_superop_is_unitary=True)
TODO: docstring
 pygsti.process_mx_to_unitary(superop)
 pygsti.std_process_mx_to_unitary(superop_mx)
Compute the unitary corresponding to the (unitary-action!) superoperator superop.
This function assumes superop acts on (row)vectorized density matrices, and that the superoperator is of the form kron(U,U.conj).
Parameters
 superop : numpy array
The superoperator matrix which acts on vectorized density matrices (in the ‘std’ matrix-unit basis).
Returns
 numpy array
The unitary matrix which acts on state vectors.
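The kron(u, u.conj) relation above is easy to verify directly. The sketch below builds the superoperator, checks its action on a row-vectorized density matrix, and recovers U (up to global phase) assuming U[0,0] != 0; pyGSTi's std_process_mx_to_unitary handles the general case more carefully:

```python
import numpy as np

def unitary_to_std_process_mx(u):
    # Acts on row-vectorized rho: vec(U rho U^dag) = kron(U, U.conj()) vec(rho)
    return np.kron(u, u.conj())

def std_process_mx_to_unitary(superop_mx):
    # Invert kron(U, U*) up to global phase, assuming U[0,0] != 0.
    d = int(round(np.sqrt(superop_mx.shape[0])))
    m = superop_mx.reshape(d, d, d, d)   # m[i,j,k,l] = U[i,k] * conj(U[j,l])
    return m[:, 0, :, 0] / np.sqrt(m[0, 0, 0, 0].real)

U = np.array([[1, 1], [1, -1]]) / np.sqrt(2)        # Hadamard
S = unitary_to_std_process_mx(U)
rho = np.array([[1, 0], [0, 0]], dtype=complex)     # |0><0|
assert np.allclose((U @ rho @ U.conj().T).flatten(), S @ rho.flatten())
assert np.allclose(std_process_mx_to_unitary(S), U)  # exact here: U[0,0] is real > 0
```

The reshape trick works because row-major flattening makes S[(i,j),(k,l)] = U[i,k]·conj(U[j,l]), so the j = l = 0 slice is U scaled by conj(U[0,0]).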
 pygsti.spam_error_generator(spamvec, target_spamvec, mx_basis, typ='logGTi')
Construct an error generator from a SPAM vector and its target.
Computes the value of the error generator given by errgen = log( diag(spamvec / target_spamvec) ), where division is elementwise. This results in a (nonunique) error generator matrix E such that spamvec = exp(E) * target_spamvec.
Note: This is currently of very limited use, as the above algorithm fails whenever target_spamvec has zero elements where spamvec doesn’t.
Parameters
 spamvec : ndarray
The SPAM vector.
 target_spamvec : ndarray
The target SPAM vector.
 mx_basis : {‘std’, ‘gm’, ‘pp’, ‘qt’} or Basis object
The source and destination basis, respectively. Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt) (or a custom basis object).
 typ : {“logGTi”}
The type of error generator to compute. Allowed values are:
“logGTi” : errgen = log( diag(spamvec / target_spamvec) )
Returns
 errgen : ndarray
The error generator.
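The elementwise recipe above can be illustrated directly. The vectors below are hypothetical and chosen so target_spamvec has no zero elements (where, as noted, the algorithm would fail):

```python
import numpy as np

target_spamvec = np.array([0.70, 0.10, 0.10, 0.70])   # hypothetical, no zeros
spamvec        = np.array([0.72, 0.09, 0.11, 0.68])

# errgen = log( diag(spamvec / target_spamvec) ), elementwise division
errgen = np.diag(np.log(spamvec / target_spamvec))

# Because errgen is diagonal, exp(errgen) is the elementwise exponential of
# its diagonal, so spamvec = exp(errgen) @ target_spamvec holds by construction:
reconstructed = np.exp(np.diag(errgen)) * target_spamvec
assert np.allclose(reconstructed, spamvec)
```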
 pygsti.error_generator(gate, target_op, mx_basis, typ='logG-logT', logG_weight=None)
Construct the error generator from a gate and its target.
Computes the value of the error generator given by errgen = log( inv(target_op) * gate ), so that gate = target_op * exp(errgen).
Parameters
 gate : ndarray
The operation matrix
 target_op : ndarray
The target operation matrix
 mx_basis : {‘std’, ‘gm’, ‘pp’, ‘qt’} or Basis object
The source and destination basis, respectively. Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt) (or a custom basis object).
 typ : {“logG-logT”, “logTiG”, “logGTi”}
The type of error generator to compute. Allowed values are:
“logG-logT” : errgen = log(gate) - log(target_op)
“logTiG” : errgen = log( dot(inv(target_op), gate) )
“logGTi” : errgen = log( dot(gate, inv(target_op)) )
 logG_weight : float or None (default)
Regularization weight for logG-logT penalty of approximate logG. If None, the default weight in
approximate_matrix_log()
is used. Note that this will result in a logG close to logT, but G may not exactly equal exp(logG). If self-consistency with operation_from_error_generator() is desired, consider testing lower (or zero) regularization weight.
Returns
 errgen : ndarray
The error generator.
 pygsti.operation_from_error_generator(error_gen, target_op, mx_basis, typ='logG-logT')
Construct a gate from an error generator and a target gate.
Inverts the computation done in
error_generator()
and returns the value of the gate given by gate = target_op * exp(error_gen).
Parameters
 error_gen : ndarray
The error generator matrix
 target_op : ndarray
The target operation matrix
 mx_basis : {‘std’, ‘gm’, ‘pp’, ‘qt’} or Basis object
The source and destination basis, respectively. Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt) (or a custom basis object).
 typ : {“logG-logT”, “logG-logT-quick”, “logTiG”, “logGTi”}
The type of error generator to invert. Allowed values are:
“logG-logT” : gate = exp( errgen + log(target_op) ) using internal logm
“logG-logT-quick” : gate = exp( errgen + log(target_op) ) using SciPy logm
“logTiG” : gate = dot( target_op, exp(errgen) )
“logGTi” : gate = dot( exp(errgen), target_op )
Returns
 ndarray
The operation matrix.
 pygsti.elementary_errorgens(dim, typ, basis)
Compute the elementary error generators of a certain type.
Parameters
 dim : int
The dimension of the error generators to be returned. This is also the associated gate dimension, and must be a perfect square, as sqrt(dim) is the dimension of density matrices. For a single qubit, dim == 4.
 typ : {‘H’, ‘S’, ‘C’, ‘A’}
The type of error generators to construct.
 basis : Basis or str
Which basis is used to construct the error generators. Note that this is not the basis of the returned error generators (which is always the ‘std’ matrix-unit basis) but that used to define the different elementary generator operations themselves.
Returns
 generators : numpy.ndarray
An array of shape (#basis-elements,dim,dim). generators[i] is the generator corresponding to the i-th basis matrix in the std (matrix unit) basis. (Note that in most cases #basis-elements == dim, so the size of generators is (dim,dim,dim).) Each generator is normalized so that as a vector it has unit Frobenius norm.
 pygsti.elementary_errorgens_dual(dim, typ, basis)
Compute the set of dualtoelementary error generators of a given type.
These error generators are dual to the elementary error generators constructed by
elementary_errorgens().
Parameters
 dim : int
The dimension of the error generators to be returned. This is also the associated gate dimension, and must be a perfect square, as sqrt(dim) is the dimension of density matrices. For a single qubit, dim == 4.
 typ : {‘H’, ‘S’, ‘C’, ‘A’}
The type of error generators to construct.
 basis : Basis or str
Which basis is used to construct the error generators. Note that this is not the basis of the returned error generators (which is always the ‘std’ matrix-unit basis) but that used to define the different elementary generator operations themselves.
Returns
 generators : numpy.ndarray
An array of shape (#basis-elements,dim,dim). generators[i] is the generator corresponding to the i-th basis matrix in the std (matrix unit) basis. (Note that in most cases #basis-elements == dim, so the size of generators is (dim,dim,dim).) Each generator is normalized so that as a vector it has unit Frobenius norm.
 pygsti.extract_elementary_errorgen_coefficients(errorgen, elementary_errorgen_labels, elementary_errorgen_basis='pp', errorgen_basis='pp', return_projected_errorgen=False)
TODO: docstring
 pygsti.project_errorgen(errorgen, elementary_errorgen_type, elementary_errorgen_basis, errorgen_basis='pp', return_dual_elementary_errorgens=False, return_projected_errorgen=False)
Compute the projections of a gate error generator onto a set of elementary error generators. TODO: docstring update
This standard set of errors is given by projection_type, and is constructed from the elements of the projection_basis basis.
Parameters
 errorgen: ndarray
The error generator matrix to project.
 projection_type : {“hamiltonian”, “stochastic”, “affine”}
The type of error generators to project the gate error generator onto. If “hamiltonian”, then use the Hamiltonian generators which take a density matrix rho -> -i*[ H, rho ] for Pauli-product matrix H. If “stochastic”, then use the Stochastic error generators which take rho -> P*rho*P for Pauli-product matrix P (recall P is self-adjoint). If “affine”, then use the affine error generators which take rho -> P (superop is |P>><<1|).
 projection_basis : {‘std’, ‘gm’, ‘pp’, ‘qt’} or Basis object
The source and destination basis, respectively. Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt) (or a custom basis object).
 mx_basis : {‘std’, ‘gm’, ‘pp’, ‘qt’} or Basis object
The source and destination basis, respectively. Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt) (or a custom basis object).
 return_generators : bool, optional
If True, return the error generators projected against along with the projection values themselves.
 return_scale_fctr : bool, optional
If True, also return the scaling factor that was used to multiply the projections onto normalized error generators to get the returned values.
Returns
 projections : numpy.ndarray
An array of length equal to the number of elements in the basis used to construct the projectors. Typically this is also the dimension of the gate (e.g. 4 for a single qubit).
 generators : numpy.ndarray
Only returned when return_generators == True. An array of shape (#basisels,op_dim,op_dim) such that generators[i] is the generator corresponding to the i-th basis element. Note that these matrices are in the std (matrix unit) basis.
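As a concrete illustration of the “hamiltonian” type, the superoperator for rho -> -i[H, rho] can be built with Kronecker products (row-major vectorization assumed; the helper name is illustrative, not pyGSTi's API):

```python
import numpy as np

def hamiltonian_errorgen_superop(h):
    # rho -> -i*(h @ rho - rho @ h), as a matrix on row-vectorized rho:
    # vec(h rho) = kron(h, I) vec(rho)  and  vec(rho h) = kron(I, h.T) vec(rho)
    d = h.shape[0]
    i_d = np.eye(d)
    return -1j * (np.kron(h, i_d) - np.kron(i_d, h.T))

Z = np.diag([1.0, -1.0]).astype(complex)
rho_plus = 0.5 * np.ones((2, 2), dtype=complex)      # |+><+|
out = hamiltonian_errorgen_superop(Z) @ rho_plus.flatten()
expected = -1j * (Z @ rho_plus - rho_plus @ Z)
assert np.allclose(out, expected.flatten())
```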
 pygsti.create_elementary_errorgen_nqudit(typ, basis_element_labels, basis_1q, normalize=False, sparse=False, tensorprod_basis=False)
TODO: docstring - labels can be, e.g. (‘H’, ‘XX’), and basis should be a 1-qubit basis w/ single-char labels
 pygsti.create_elementary_errorgen_nqudit_dual(typ, basis_element_labels, basis_1q, normalize=False, sparse=False, tensorprod_basis=False)
TODO: docstring - labels can be, e.g. (‘H’, ‘XX’), and basis should be a 1-qubit basis w/ single-char labels
 pygsti.rotation_gate_mx(r, mx_basis='gm')
Construct a rotation operation matrix.
Build the operation matrix corresponding to the unitary
exp(-i * (r[0]/2*PP[0]*sqrt(d) + r[1]/2*PP[1]*sqrt(d) + …) )
where PP is the array of Pauli-product matrices obtained via pp_matrices(d), where d = sqrt(len(r)+1). The division by 2 is for convention, and the sqrt(d) is to essentially unnormalize the matrices returned by
pp_matrices()
so they are equal to products of the standard Pauli matrices.
Parameters
 r : tuple
A tuple of coefficients, one per non-identity Pauli-product basis element
 mx_basis : {‘std’, ‘gm’, ‘pp’, ‘qt’} or Basis object
The source and destination basis, respectively. Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt) (or a custom basis object).
Returns
 numpy array
a d^2 x d^2 operation matrix in the specified basis.
 pygsti.project_model(model, target_model, projectiontypes=('H', 'S', 'H+S', 'LND'), gen_type='logG-logT', logG_weight=None)
Construct a new model(s) by projecting the error generator of model onto some subspace then reconstructing.
Parameters
 model : Model
The model whose error generator should be projected.
 target_model : Model
The set of target (ideal) gates.
 projectiontypes : tuple of {‘H’,’S’,’H+S’,’LND’,’LNDF’}
Which projections to use. The length of this tuple gives the number of Model objects returned. Allowed values are:
‘H’ = Hamiltonian errors
‘S’ = Stochastic Pauli-channel errors
‘H+S’ = both of the above error types
‘LND’ = errgen projected to a normal (CPTP) Lindbladian
‘LNDF’ = errgen projected to an unrestricted (full) Lindbladian
 gen_type : {“logG-logT”, “logTiG”, “logGTi”}
The type of error generator to compute. For more details, see error_generator().
 logG_weight : float or None (default)
Regularization weight for approximate logG in logG-logT generator. For more details, see error_generator().
Returns
 projected_models : list of Models
Elements are projected versions of model corresponding to the elements of projectiontypes.
 Nps : list of parameter counts
Integer parameter counts for each model in projected_models. Useful for computing the expected log-likelihood or chi2.
 pygsti.compute_best_case_gauge_transform(gate_mx, target_gate_mx, return_all=False)
Returns a gauge transformation that maps gate_mx into a matrix that is codiagonal with target_gate_mx.
(Codiagonal means that they share a common set of eigenvectors.)
Gauge transformations effectively change the basis of all the gates in a model. From the perspective of a single gate, a gauge transformation leaves its eigenvalues the same and changes its eigenvectors. This function finds a real transformation that transforms the eigenspaces of gate_mx so that there exists a set of eigenvectors which diagonalize both gate_mx and target_gate_mx.
Parameters
 gate_mx : numpy.ndarray
Gate matrix to transform.
 target_gate_mx : numpy.ndarray
Target gate matrix.
 return_all : bool, optional
If true, also return the matrices of eigenvectors Ugate for gate_mx and Utgt for target_gate_mx such that U = dot(Utgt, inv(Ugate)) is real.
Returns
 U : numpy.ndarray
A gauge transformation such that if epgate = U * gate_mx * U_inv, then epgate (which has the same eigenvalues as gate_mx) can be diagonalized with a set of eigenvectors that also diagonalize target_gate_mx. Furthermore, U is real.
 Ugate, Utgt : numpy.ndarray
only if return_all == True. See above.
 pygsti.project_to_target_eigenspace(model, target_model, eps=1e-06)
Project each gate of model onto the eigenspace of the corresponding gate within target_model.
Returns the resulting Model.
Parameters
 model : Model
Model to act on.
 target_model : Model
The target model, whose gates define the target eigenspaces being projected onto.
 eps : float, optional
Small magnitude specifying how much to “nudge” the target gates before eigendecomposing them, so that their spectra will have the same conjugacy structure as the gates of model.
Returns
Model
 pygsti.unitary_to_pauligate(u)
Get the linear operator on (vectorized) density matrices corresponding to an n-qubit unitary operator on states.
Parameters
 u : numpy array
A d x d array giving the action of the unitary on a state in the sigma-z basis, where d = 2**nqubits.
Returns
 numpy array
The operator on density matrices that have been vectorized as d**2 vectors in the Pauli basis.
 pygsti.is_valid_lindblad_paramtype(typ)
Whether typ is a recognized Lindblad-gate parameterization type.
A Lindblad type is composed of a parameter specification followed optionally by an evolution-type suffix. The parameter spec can be “GLND” (general unconstrained Lindbladian), “CPTP” (CPTP-constrained), or any/all of the letters “H” (Hamiltonian), “S” (Stochastic, CPTP), “s” (Stochastic), “A” (Affine), “D” (Depolarization, CPTP), “d” (Depolarization) joined with plus (+) signs. Note that “A” cannot appear without one of {“S”,”s”,”D”,”d”}. The suffix can be nonexistent (density-matrix), “terms” (state-vector terms) or “clifford terms” (stabilizer-state terms). For example, valid Lindblad types are “H+S”, “H+d+A”, “CPTP clifford terms”, or “S+A terms”.
Parameters
 typ : str
A parameterization type.
Returns
bool
 pygsti.effect_label_to_outcome(povm_and_effect_lbl)
Extract the outcome label from a “simplified” effect label.
Simplified effect labels are not themselves so simple. They combine POVM and effect labels so that accessing any given effect vector is simpler.
If povm_and_effect_lbl is None then “NONE” is returned.
Parameters
 povm_and_effect_lbl : Label
Simplified effect vector.
Returns
str
 pygsti.effect_label_to_povm(povm_and_effect_lbl)
Extract the POVM label from a “simplified” effect label.
Simplified effect labels are not themselves so simple. They combine POVM and effect labels so that accessing any given effect vector is simpler.
If povm_and_effect_lbl is None then “NONE” is returned.
Parameters
 povm_and_effect_lbl : Label
Simplified effect vector.
Returns
str
 pygsti.id2x2
 pygsti.sigmax
 pygsti.sigmay
 pygsti.sigmaz
 pygsti.sigmaii
 pygsti.sigmaix
 pygsti.sigmaiy
 pygsti.sigmaiz
 pygsti.sigmaxi
 pygsti.sigmaxx
 pygsti.sigmaxy
 pygsti.sigmaxz
 pygsti.sigmayi
 pygsti.sigmayx
 pygsti.sigmayy
 pygsti.sigmayz
 pygsti.sigmazi
 pygsti.sigmazx
 pygsti.sigmazy
 pygsti.sigmazz
 pygsti.single_qubit_gate(hx, hy, hz, noise=0)
Construct the single-qubit operation matrix.
Build the operation matrix given by exponentiating -i * (hx*X + hy*Y + hz*Z), where X, Y, and Z are the sigma matrices. Thus, hx, hy, and hz correspond to rotation angles divided by 2. Additionally, a uniform depolarization noise can be applied to the gate.
Parameters
 hx : float
Coefficient of sigma-X matrix in exponent.
 hy : float
Coefficient of sigma-Y matrix in exponent.
 hz : float
Coefficient of sigma-Z matrix in exponent.
 noise : float, optional
The amount of uniform depolarizing noise.
Returns
 numpy array
4x4 operation matrix which operates on a 1-qubit density matrix expressed as a vector in the Pauli basis ( {I,X,Y,Z}/sqrt(2) ).
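The noiseless case of this construction can be sketched by exponentiating the Hamiltonian and converting to the normalized Pauli basis. This is a from-scratch NumPy illustration of the math described, not pyGSTi's implementation (which also supports the noise parameter):

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)

def single_qubit_gate(hx, hy, hz):
    # U = exp(-i (hx X + hy Y + hz Z)), via eigendecomposition of Hermitian H
    H = hx * X + hy * Y + hz * Z
    w, v = np.linalg.eigh(H)
    U = v @ np.diag(np.exp(-1j * w)) @ v.conj().T
    # Superoperator entries G[i,j] = Tr(B_i^dag U B_j U^dag) in the
    # normalized Pauli basis {I,X,Y,Z}/sqrt(2):
    B = [P / np.sqrt(2) for P in (I2, X, Y, Z)]
    return np.array([[np.trace(Bi.conj().T @ U @ Bj @ U.conj().T).real
                      for Bj in B] for Bi in B])

# hx = pi/2 gives a pi rotation about X, i.e. the X gate's superoperator:
G = single_qubit_gate(np.pi / 2, 0.0, 0.0)
assert np.allclose(G, np.diag([1.0, 1.0, -1.0, -1.0]), atol=1e-12)
```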
 pygsti.two_qubit_gate(ix=0, iy=0, iz=0, xi=0, xx=0, xy=0, xz=0, yi=0, yx=0, yy=0, yz=0, zi=0, zx=0, zy=0, zz=0, ii=0)
Construct the two-qubit operation matrix.
Build the operation matrix given by exponentiating -i * (xx*XX + xy*XY + …), where terms in the exponent are tensor products of two Pauli matrices.
Parameters
 ix : float, optional
Coefficient of IX matrix in exponent.
 iy : float, optional
Coefficient of IY matrix in exponent.
 iz : float, optional
Coefficient of IZ matrix in exponent.
 xi : float, optional
Coefficient of XI matrix in exponent.
 xx : float, optional
Coefficient of XX matrix in exponent.
 xy : float, optional
Coefficient of XY matrix in exponent.
 xz : float, optional
Coefficient of XZ matrix in exponent.
 yi : float, optional
Coefficient of YI matrix in exponent.
 yx : float, optional
Coefficient of YX matrix in exponent.
 yy : float, optional
Coefficient of YY matrix in exponent.
 yz : float, optional
Coefficient of YZ matrix in exponent.
 zi : float, optional
Coefficient of ZI matrix in exponent.
 zx : float, optional
Coefficient of ZX matrix in exponent.
 zy : float, optional
Coefficient of ZY matrix in exponent.
 zz : float, optional
Coefficient of ZZ matrix in exponent.
 ii : float, optional
Coefficient of II matrix in exponent.
Returns
 numpy array
16x16 operation matrix which operates on a 2-qubit density matrix expressed as a vector in the Pauli-product basis.
 pygsti.cache_by_hashed_args(obj)
Decorator for caching a function's values
Deprecated since version v0.9.8.3:
cache_by_hashed_args()
will be removed in pyGSTi v0.9.9. Use functools.lru_cache() instead.
Parameters
 obj : function
function to decorate
Returns
function
 pygsti.timed_block(label, time_dict=None, printer=None, verbosity=2, round_places=6, pre_message=None, format_str=None)
Context manager that times a block of code
Parameters
 label : str
An identifying label for this timed block.
 time_dict : dict, optional
A dictionary to store the final time in, under the key label.
 printer : VerbosityPrinter, optional
A printer object to log the timer’s message. If None, this message will be printed directly.
 verbosity : int, optional
The verbosity level at which to print the time message (if printer is given).
 round_places : int, optional
How many decimal places of precision to print time with (in seconds).
 pre_message : str, optional
A format string to print out before the timer’s message, which formats the label argument, e.g. “My label is {}”.
 format_str : str, optional
A format string used to format the label before the resulting “rendered label” is used as the first argument in the final formatting string “{} took {} seconds”.
 pygsti.tvd(p, q)
Calculates the total variation distance (TVD) between two probability distributions.
The distributions must be dictionaries, where keys are events (e.g., bit strings) and values are the probabilities. If an event in the keys of one dictionary isn’t in the keys of the other, then that probability is assumed to be zero. There are no checks that the input probability distributions are valid (i.e., that the probabilities sum to one and are positive).
Parameters
 p, q : dicts
The distributions to calculate the TVD between.
Returns
float
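A direct sketch of the described convention (missing keys treated as probability zero; no validity checks), with an illustrative pair of distributions:

```python
def tvd(p, q):
    # total variation distance: half the L1 distance over the union of events
    events = set(p) | set(q)
    return 0.5 * sum(abs(p.get(e, 0.0) - q.get(e, 0.0)) for e in events)

p = {'00': 0.5, '01': 0.5}
q = {'00': 0.5, '11': 0.5}
print(tvd(p, q))   # -> 0.5
```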
 pygsti.classical_fidelity(p, q)
Calculates the (classical) fidelity between two probability distributions.
The distributions must be dictionaries, where keys are events (e.g., bit strings) and values are the probabilities. If an event in the keys of one dictionary isn’t in the keys of the other, then that probability is assumed to be zero. There are no checks that the input probability distributions are valid (i.e., that the probabilities sum to one and are positive).
Parameters
 p, q : dicts
The distributions to calculate the fidelity between.
Returns
float
 pygsti.predicted_rb_number(model, target_model, weights=None, d=None, rtype='EI')
Predicts the RB error rate from a model.
Uses the “L-matrix” theory from Proctor et al Phys. Rev. Lett. 119, 130502 (2017). Note that this gives the same predictions as the theory in Wallman Quantum 2, 47 (2018).
This theory is valid for various types of RB, including standard Clifford RB – i.e., it will accurately predict the per-Clifford error rate reported by standard Clifford RB. It is also valid for “direct RB” under broad circumstances.
For this function to be valid the model should be trace preserving and completely positive in some representation, but the particular representation of the model used is irrelevant, as the predicted RB error rate is a gaugeinvariant quantity. The function is likely reliable when complete positivity is slightly violated, although the theory on which it is based assumes complete positivity.
Parameters
 model : Model
The model to calculate the RB number of. This model is the model randomly sampled over, so this is not necessarily the set of physical primitives. In Clifford RB this is a set of Clifford gates; in “direct RB” this normally would be the physical primitives.
 target_model : Model
The target model, corresponding to model. This function is not invariant under swapping model and target_model: this Model must be the target model, and should consist of perfect gates.
 weights : dict, optional
If not None, a dictionary of floats, whereby the keys are the gates in model and the values are the unnormalized probabilities to apply each gate at each stage of the RB protocol. If not None, the values in weights must all be non-negative, and they must not all be zero (because, when divided by their sum, they must form a valid probability distribution). If None, the weighting defaults to an equal weighting on all gates, as this is used in many RB protocols (e.g., Clifford RB). But this weighting is flexible in the “direct RB” protocol.
 d : int, optional
The Hilbert space dimension. If None, then sqrt(model.dim) is used.
 rtype : str, optional
The type of RB error rate, either “EI” or “AGI”, corresponding to different dimension-dependent rescalings of the RB decay constant p obtained from fitting to Pm = A + Bp^m. “EI” corresponds to an RB error rate that is associated with entanglement infidelity, which is the probability of error for a gate with stochastic errors. This is the RB error rate defined in the “direct RB” protocol, and is given by:
r = (d^2 - 1)(1 - p)/d^2,
The AGItype r is given by
r = (d - 1)(1 - p)/d,
which is the conventional r definition in Clifford RB. This r is associated with (gateaveraged) average gate infidelity.
Returns
 r : float
The predicted RB number.
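The two rescalings above are simple functions of the decay constant p and dimension d; a small illustrative helper (the name is hypothetical, not pyGSTi's API):

```python
def rb_decay_to_error_rate(p, d=2, rtype='EI'):
    # 'EI':  r = (d^2 - 1)(1 - p) / d^2   (entanglement-infidelity convention)
    # 'AGI': r = (d - 1)(1 - p) / d       (average-gate-infidelity convention)
    if rtype == 'EI':
        return (d**2 - 1) * (1 - p) / d**2
    elif rtype == 'AGI':
        return (d - 1) * (1 - p) / d
    raise ValueError("rtype must be 'EI' or 'AGI'")

# For one qubit (d = 2) and decay constant p = 0.99:
print(rb_decay_to_error_rate(0.99, rtype='EI'))    # approx 0.0075
print(rb_decay_to_error_rate(0.99, rtype='AGI'))   # approx 0.0050
```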
 pygsti.predicted_rb_decay_parameter(model, target_model, weights=None)
Computes the second largest eigenvalue of the ‘L matrix’ (see the L_matrix function).
For standard Clifford RB and direct RB, this corresponds to the RB decay parameter p in Pm = A + Bp^m for “reasonably low error” trace preserving and completely positive gates. See also the predicted_rb_number function.
Parameters
 model : Model
The model to calculate the RB decay parameter of. This model is the model randomly sampled over, so this is not necessarily the set of physical primitives. In Clifford RB this is a set of Clifford gates; in “direct RB” this normally would be the physical primitives.
 target_model : Model
The target model corresponding to model. This function is not invariant under swapping model and target_model: this Model must be the target model, and should consist of perfect gates.
 weights : dict, optional
If not None, a dictionary of floats, whereby the keys are the gates in model and the values are the unnormalized probabilities to apply each gate at each stage of the RB protocol. If not None, the values in weights must all be non-negative, and they must not all be zero (because, when divided by their sum, they must form a valid probability distribution). If None, the weighting defaults to an equal weighting on all gates, as this is used in many RB protocols (e.g., Clifford RB). But this weighting is flexible in the “direct RB” protocol.
Returns
 p : float
The second largest eigenvalue of L. This is the RB decay parameter for various types of RB.
 pygsti.rb_gauge(model, target_model, weights=None, mx_basis=None, eigenvector_weighting=1.0)
Computes the gauge transformation required so that the RB number matches the average model infidelity.
This function computes the gauge transformation required so that, when the model is transformed via this gauge transformation, the RB number – as predicted by the function predicted_rb_number – is the average model infidelity between the transformed model and the target model target_model. This transformation is defined in Proctor et al Phys. Rev. Lett. 119, 130502 (2017); see also Wallman Quantum 2, 47 (2018).
Parameters
 model : Model
The RB model. This is not necessarily the set of physical primitives – it is the model randomly sampled over in the RB protocol (e.g., the Cliffords).
 target_model : Model
The target model corresponding to model. This function is not invariant under swapping model and target_model: this Model must be the target model, and should consist of perfect gates.
 weights : dict, optional
If not None, a dictionary of floats, whereby the keys are the gates in model and the values are the unnormalized probabilities to apply each gate at each stage of the RB protocol. If not None, the values in weights must all be non-negative, and they must not all be zero (because, when divided by their sum, they must form a valid probability distribution). If None, the weighting defaults to an equal weighting on all gates, as this is used in many RB protocols (e.g., Clifford RB). But this weighting is flexible in the “direct RB” protocol.
 mx_basis : {“std”,”gm”,”pp”}, optional
The basis of the models. If None, the basis is obtained from the model.
 eigenvector_weighting : float, optional
Must be non-zero. A weighting on the eigenvector with eigenvalue that is the RB decay parameter, in the sum of this eigenvector and the eigenvector with eigenvalue of 1 that defines the returned matrix l_operator. The value of this factor does not change whether this l_operator transforms into a gauge in which r = AGsI, but it may impact other properties of the gates in that gauge. It is irrelevant if the gates are unital.
Returns
 l_operator : array
The matrix defining the gauge transformation.
 pygsti.transform_to_rb_gauge(model, target_model, weights=None, mx_basis=None, eigenvector_weighting=1.0)
Transforms a Model into the “RB gauge” (see the rb_gauge function).
This notion was introduced in Proctor et al Phys. Rev. Lett. 119, 130502 (2017). This gauge is a function of both the model and its target. These may be input in any gauge, for the purposes of obtaining “r = average model infidelity” between the output
Model
and target_model.
Parameters
 model : Model
The RB model. This is not necessarily the set of physical primitives – it is the model randomly sampled over in the RB protocol (e.g., the Cliffords).
 target_model : Model
The target model corresponding to model. This function is not invariant under swapping model and target_model: this Model must be the target model, and should consist of perfect gates.
 weights : dict, optional
If not None, a dictionary of floats, whereby the keys are the gates in model and the values are the unnormalized probabilities to apply each gate at each stage of the RB protocol. If not None, the values in weights must all be non-negative, and they must not all be zero (because, when divided by their sum, they must form a valid probability distribution). If None, the weighting defaults to an equal weighting on all gates, as this is used in many RB protocols (e.g., Clifford RB). But this weighting is flexible in the “direct RB” protocol.
 mx_basis : {“std”,”gm”,”pp”}, optional
The basis of the models. If None, the basis is obtained from the model.
 eigenvector_weighting : float, optional
Must be non-zero. A weighting on the eigenvector with eigenvalue that is the RB decay parameter, in the sum of this eigenvector and the eigenvector with eigenvalue of 1 that defines the returned matrix l_operator. The value of this factor does not change whether this l_operator transforms into a gauge in which r = AGsI, but it may impact other properties of the gates in that gauge. It is irrelevant if the gates are unital.
Returns
 model_in_RB_gauge : Model
The model model transformed into the “RB gauge”.
 pygsti.L_matrix(model, target_model, weights=None)
Constructs a generalization of the ‘L-matrix’ linear operator on superoperators.
From Proctor et al Phys. Rev. Lett. 119, 130502 (2017), the ‘L-matrix’ is represented as a matrix via the “stack” operation. The eigenvalues of this matrix describe the decay constant (or constants) in an RB decay curve for an RB protocol whereby random elements of the provided model are sampled according to the weights probability distribution over the model. So, this facilitates predictions of Clifford RB and direct RB decay curves.
Parameters
 model : Model
The RB model. This is not necessarily the set of physical primitives – it is the model randomly sampled over in the RB protocol (e.g., the Cliffords).
 target_model : Model
The target model corresponding to model. This function is not invariant under swapping model and target_model: this Model must be the target model, and should consist of perfect gates.
 weights : dict, optional
If not None, a dictionary of floats, whereby the keys are the gates in model and the values are the unnormalized probabilities to apply each gate at each stage of the RB protocol. If not None, the values in weights must all be non-negative, and they must not all be zero (because, when divided by their sum, they must form a valid probability distribution). If None, the weighting defaults to an equal weighting on all gates, as this is used in many RB protocols (e.g., Clifford RB). But this weighting is flexible in the “direct RB” protocol.
Returns
 Lfloat
A weighted version of the L operator from Proctor et al Phys. Rev. Lett. 119, 130502 (2017), represented as a matrix using the ‘stacking’ convention.
 pygsti.R_matrix_predicted_rb_decay_parameter(model, group, group_to_model=None, weights=None)
Returns the second largest eigenvalue of a generalization of the ‘R-matrix’ [see the R_matrix function].
Introduced in Proctor et al Phys. Rev. Lett. 119, 130502 (2017). This number is a prediction of the RB decay parameter for trace-preserving gates and a variety of forms of RB, including Clifford and direct RB. This function creates a matrix which scales super-exponentially in the number of qubits.
Parameters
 modelModel
The model to predict the RB decay parameter for. If group_to_model is None, the labels of the gates in model should be the same as the labels of the group elements in group. For Clifford RB this would be the Clifford model; for direct RB it would be the primitive gates.
 groupMatrixGroup
The group that the model model contains gates from (model does not need to be the full group, and could be a subset of group). For Clifford RB and direct RB, this would be the Clifford group.
 group_to_modeldict, optional
If not None, a dictionary that maps labels of group elements to labels of model. If model and group elements have the same labels, this dictionary is not required. Otherwise it is necessary.
 weightsdict, optional
If not None, a dictionary of floats, whereby the keys are the gates in model and the values are the unnormalized probabilities to apply each gate at each stage of the RB protocol. If not None, the values in weights must all be positive or zero, and they must not all be zero (because, when divided by their sum, they must be a valid probability distribution). If None, the weighting defaults to an equal weighting on all gates, as used in most RB protocols.
Returns
 pfloat
The predicted RB decay parameter. Valid for standard Clifford RB or direct RB with trace-preserving gates, and in a range of other circumstances.
 pygsti.R_matrix(model, group, group_to_model=None, weights=None)
Constructs a generalization of the ‘R-matrix’ of Proctor et al Phys. Rev. Lett. 119, 130502 (2017).
This matrix describes the exact behaviour of the average success probabilities of RB sequences. This matrix is super-exponentially large in the number of qubits, but can be constructed for 1-qubit models.
Parameters
 modelModel
The noisy model (e.g., the Cliffords) to calculate the R matrix of. The corresponding target model (not required in this function) must be equal to, or a subset of, (a faithful rep of) the group group. If group_to_model is None, the labels of the gates in model should be the same as the labels of the corresponding group elements in group. For Clifford RB, model should be the Clifford model; for direct RB this should be the native model.
 groupMatrixGroup
The group that the model model contains gates from. For Clifford RB or direct RB, this would be the Clifford group.
 group_to_modeldict, optional
If not None, a dictionary that maps labels of group elements to labels of model. This is required if the labels of the gates in model are different from the labels of the corresponding group elements in group.
 weightsdict, optional
If not None, a dictionary of floats, whereby the keys are the gates in model and the values are the unnormalized probabilities to apply each gate in each layer of the RB protocol. If None, the weighting defaults to an equal weighting on all gates, as used in most RB protocols (e.g., Clifford RB).
Returns
 Rfloat
A weighted, subset-sampling generalization of the ‘R-matrix’ from Proctor et al Phys. Rev. Lett. 119, 130502 (2017).
 pygsti.errormaps(model, target_model)
Computes the ‘left-multiplied’ error maps associated with a noisy gate set, along with the average error map.
This is the model [E_1,…] such that
G_i = E_i T_i,
where T_i is the gate which G_i is a noisy implementation of. There is an additional gate in the set with the key ‘Gavg’: the average of the error maps.
Parameters
 modelModel
The imperfect model.
 target_modelModel
The target model.
Returns
 errormapsModel
The left-multiplied error gates, along with the average error map, with the key ‘Gavg’.
 pygsti.gate_dependence_of_errormaps(model, target_model, norm='diamond', mx_basis=None)
Computes the “gate-dependence of error maps” parameter defined by
delta_avg = avg_i ‖ E_i − avg_i(E_i) ‖,
where E_i are the error maps, and the norm is either the diamond norm or the 1-to-1 norm. This quantity is defined in Magesan et al PRA 85 042311 (2012).
Parameters
 modelModel
The actual model.
 target_modelModel
The target model.
 normstr, optional
The norm used in the calculation. Can be either ‘diamond’ for the diamond norm, or ‘1to1’ for the Hermitian 1-to-1 norm.
 mx_basis{“std”,”gm”,”pp”}, optional
The basis of the models. If None, the basis is obtained from the model.
Returns
 delta_avgfloat
The value of the parameter defined above.
 pygsti.length(s)
Returns the length (the number of indices) contained in a slice.
Parameters
 sslice
The slice to operate upon.
Returns
int
 pygsti.shift(s, offset)
Returns a new slice whose start and stop points are shifted by offset.
Parameters
 sslice
The slice to operate upon.
 offsetint
The amount to shift the start and stop members of s.
Returns
slice
 pygsti.intersect(s1, s2)
Returns the intersection of two slices (which must have the same step).
Parameters
 s1slice
First slice.
 s2slice
Second slice.
Returns
slice
 pygsti.intersect_within(s1, s2)
Returns the intersection of two slices (which must have the same step), along with the sub-slices of s1 and s2 that specify the intersection.
Furthermore, s2 may be an array of indices, in which case the returned slices become arrays as well.
Parameters
 s1slice
First slice. Must have definite boundaries (start & stop cannot be None).
 s2slice or numpy.ndarray
Second slice or index array.
Returns
 intersectionslice or numpy.ndarray
The intersection of s1 and s2.
 subslice1slice or numpy.ndarray
The portion of s1 that yields intersection.
 subslice2slice or numpy.ndarray
The portion of s2 that yields intersection.
 pygsti.indices(s, n=None)
Returns a list of the indices specified by slice s.
Parameters
 sslice
The slice to operate upon.
 nint, optional
The number of elements in the array being indexed, used for computing negative start/stop points.
Returns
list of ints
 pygsti.list_to_slice(lst, array_ok=False, require_contiguous=True)
Returns a slice corresponding to a given list of (integer) indices, if this is possible.
If not, array_ok determines the behavior.
Parameters
 lstlist
The list of integers to convert to a slice (must be contiguous if require_contiguous == True).
 array_okbool, optional
If True, an integer array (of type numpy.ndarray) is returned when lst does not correspond to a single slice. Otherwise, an AssertionError is raised.
 require_contiguousbool, optional
If True, then lst will only be converted to a contiguous (step=1) slice, otherwise either a ValueError is raised (if array_ok is False) or an array is returned.
Returns
numpy.ndarray or slice
 pygsti.to_array(slc_or_list_like)
Returns slc_or_list_like as an index array (an integer numpy.ndarray).
Parameters
 slc_or_list_likeslice or list
A slice, list, or array.
Returns
numpy.ndarray
 pygsti.divide(slc, max_len)
Divides a slice into subslices based on a maximum length (for each subslice).
For example: divide(slice(0,10,2), 2) == [slice(0,4,2), slice(4,8,2), slice(8,10,2)]
Parameters
 slcslice
The slice to divide
 max_lenint
The maximum length (i.e. number of indices) allowed in a subslice.
Returns
list of slices
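The behaviour described above, including the docstring's own example, can be reproduced with a short plain-Python sketch (an illustration of the documented semantics, not pyGSTi's actual implementation):

```python
def divide_slice(slc, max_len):
    """Split `slc` into sub-slices, each covering at most `max_len` indices.
    Assumes an explicit start/stop and a positive step."""
    step = slc.step if slc.step is not None else 1
    subslices = []
    start = slc.start
    while start < slc.stop:
        # Each sub-slice covers up to max_len indices of the original slice.
        stop = min(start + max_len * step, slc.stop)
        subslices.append(slice(start, stop, step))
        start = stop
    return subslices

divide_slice(slice(0, 10, 2), 2)  # -> [slice(0, 4, 2), slice(4, 8, 2), slice(8, 10, 2)]
```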
 pygsti.slice_of_slice(slc, base_slc)
A slice that is the composition of base_slc and slc, so that when indexing an array a, a[slice_of_slice(slc, base_slc)] == a[base_slc][slc].
Parameters
 slcslice
the slice to take out of base_slc.
 base_slcslice
the original “base” slice to act upon.
Returns
slice
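The composition property can be sketched with simple index arithmetic (a hypothetical helper under simplifying assumptions, not pyGSTi's internal routine):

```python
def compose_slices(slc, base_slc):
    """Compose slices so that a[compose_slices(slc, base_slc)] == a[base_slc][slc].
    Assumes explicit, non-negative starts/stops and positive steps."""
    b_step = base_slc.step if base_slc.step is not None else 1
    s_step = slc.step if slc.step is not None else 1
    start = base_slc.start + slc.start * b_step
    # Cap at base_slc.stop so an overshooting slc.stop cannot escape the base range.
    stop = min(base_slc.start + slc.stop * b_step, base_slc.stop)
    return slice(start, stop, b_step * s_step)
```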
 pygsti.slice_hash(slc)
 pygsti.smart_cached(obj)
Decorator for applying a smart cache to a single function or method.
Parameters
 objfunction
function to decorate.
Returns
function
 pygsti.symplectic_form(n, convention='standard')
Creates the symplectic form for the number of qubits specified.
There are two variants of the symplectic form over the finite field of the integers modulo 2 used in pyGSTi, corresponding to the ‘standard’ and ‘directsum’ conventions. In the case of ‘standard’, the symplectic form is the 2n x 2n matrix ((0,1),(1,0)), where ‘1’ and ‘0’ are the identity and all-zeros matrices of size n x n. The ‘standard’ symplectic form is probably the most commonly used, and it is the definition used throughout most of the code, including the Clifford compilers. In the case of ‘directsum’, the symplectic form is the direct sum of n 2x2 bit-flip matrices. This is only used in pyGSTi for sampling from the symplectic group.
Parameters
 nint
The number of qubits the symplectic form should be constructed for. That is, the function creates a 2n x 2n matrix that is a symplectic form.
 conventionstr, optional
Can be either ‘standard’ or ‘directsum’, which correspond to two different definitions for the symplectic form.
Returns
 numpy array
The specified symplectic form.
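The two conventions described above are simple to write down directly. The sketch below (an illustration of the documented definitions, not pyGSTi's implementation) builds both:

```python
import numpy as np

def build_symplectic_form(n, convention='standard'):
    """Construct the 2n x 2n symplectic form over GF(2) in either convention."""
    if convention == 'standard':
        zero = np.zeros((n, n), dtype=int)
        eye = np.eye(n, dtype=int)
        # Block matrix ((0, 1), (1, 0)) with n x n identity/zero blocks.
        return np.block([[zero, eye], [eye, zero]])
    if convention == 'directsum':
        # Direct sum of n 2x2 bit-flip matrices.
        blk = np.array([[0, 1], [1, 0]], dtype=int)
        out = np.zeros((2 * n, 2 * n), dtype=int)
        for i in range(n):
            out[2 * i:2 * i + 2, 2 * i:2 * i + 2] = blk
        return out
    raise ValueError("convention must be 'standard' or 'directsum'")
```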
 pygsti.change_symplectic_form_convention(s, outconvention='standard')
Maps the input symplectic matrix between the ‘standard’ and ‘directsum’ symplectic form conventions.
That is, if the input is a symplectic matrix with respect to the ‘directsum’ convention and outconvention = ‘standard’, the output of this function is the equivalent symplectic matrix in the ‘standard’ symplectic form convention. Similarly, if the input is a symplectic matrix with respect to the ‘standard’ convention and outconvention = ‘directsum’, the output of this function is the equivalent symplectic matrix in the ‘directsum’ symplectic form convention.
Parameters
 snumpy.ndarray
The input symplectic matrix.
 outconventionstr, optional
Can be either ‘standard’ or ‘directsum’, which correspond to two different definitions for the symplectic form. This is the convention the input is being converted to (and so the input should be a symplectic matrix in the other convention).
Returns
 numpy array
The matrix s converted to outconvention.
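One way to picture the conversion (a sketch, not necessarily pyGSTi's internal routine) is as a reordering of coordinates: the ‘standard’ ordering (x_1..x_n, z_1..z_n) becomes the interleaved ‘directsum’ ordering (x_1, z_1, x_2, z_2, ...):

```python
import numpy as np

def std_to_directsum(s):
    """Permute a 'standard'-convention symplectic matrix into the interleaved
    'directsum' ordering by reindexing its rows and columns."""
    n = s.shape[0] // 2
    perm = np.empty(2 * n, dtype=int)
    perm[0::2] = np.arange(n)         # x_i -> position 2i
    perm[1::2] = np.arange(n, 2 * n)  # z_i -> position 2i + 1
    return s[np.ix_(perm, perm)]
```

As a sanity check, this permutation maps the ‘standard’ symplectic form itself to the ‘directsum’ form.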
 pygsti.check_symplectic(m, convention='standard')
Checks whether a matrix is symplectic.
Parameters
 mnumpy array
The matrix to check.
 conventionstr, optional
Can be either ‘standard’ or ‘directsum’. Specifies the convention of the symplectic form with respect to which the matrix should be symplectic.
Returns
 bool
A bool specifying whether the matrix is symplectic.
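The defining condition is that the matrix preserves the symplectic form over GF(2). A minimal sketch of the check (assuming the form Omega is supplied explicitly, unlike the pyGSTi signature):

```python
import numpy as np

def is_symplectic(m, omega):
    """True iff m @ omega @ m.T equals omega over the integers mod 2."""
    return np.array_equal((m @ omega @ m.T) % 2, np.asarray(omega) % 2)

omega_std = np.array([[0, 1], [1, 0]], dtype=int)  # standard form for n = 1
s_phase = np.array([[1, 0], [1, 1]], dtype=int)    # one-qubit phase gate in one common (x, z) ordering
```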
 pygsti.inverse_symplectic(s)
Returns the inverse of a symplectic matrix over the integers mod 2.
Parameters
 snumpy array
The matrix to invert
Returns
 numpy array
The inverse of s, over the field of the integers mod 2.
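Because a symplectic matrix satisfies S Omega S^T = Omega, and the standard form Omega is its own inverse over GF(2), the inverse has a closed form requiring no matrix inversion. A sketch (not pyGSTi's implementation):

```python
import numpy as np

def inverse_symplectic_sketch(s, omega):
    """S^{-1} = Omega @ S.T @ Omega (mod 2): follows from S Omega S.T = Omega,
    using that the standard form Omega is its own inverse over GF(2)."""
    return (omega @ s.T @ omega) % 2

omega_std = np.array([[0, 1], [1, 0]], dtype=int)
s_phase = np.array([[1, 0], [1, 1]], dtype=int)  # one-qubit phase gate in one common convention
s_inv = inverse_symplectic_sketch(s_phase, omega_std)
```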
 pygsti.inverse_clifford(s, p)
Returns the inverse of a Clifford gate in the symplectic representation.
This uses the formulas derived in Hostens and De Moor PRA 71, 042315 (2005).
Parameters
 snumpy array
The symplectic matrix over the integers mod 2 representing the Clifford
 pnumpy array
The ‘phase vector’ over the integers mod 4 representing the Clifford
Returns
 sinversenumpy array
The symplectic matrix representing the inverse of the input Clifford.
 pinversenumpy array
The ‘phase vector’ representing the inverse of the input Clifford.
 pygsti.check_valid_clifford(s, p)
Checks if a symplectic matrix / phase vector pair (s,p) is the symplectic representation of a Clifford.
This uses the formulas derived in Hostens and De Moor PRA 71, 042315 (2005).
Parameters
 snumpy array
The symplectic matrix over the integers mod 2 representing the Clifford
 pnumpy array
The ‘phase vector’ over the integers mod 4 representing the Clifford
Returns
 bool
True if (s,p) is the symplectic representation of some Clifford.
 pygsti.construct_valid_phase_vector(s, pseed)
Constructs a phase vector that, when paired with the provided symplectic matrix, defines a Clifford gate.
If the seed phase vector, when paired with s, represents some Clifford, this seed is returned. Otherwise 1 mod 4 is added to the required elements of pseed in order to make it a valid phase vector (which is one of many possible phase vectors that, together with s, define a valid Clifford).
Parameters
 snumpy array
The symplectic matrix over the integers mod 2 representing the Clifford
 pseednumpy array
The seed ‘phase vector’ over the integers mod 4.
Returns
 numpy array
Some p such that (s,p) is the symplectic representation of some Clifford.
 pygsti.find_postmultipled_pauli(s, p_implemented, p_target, qubit_labels=None)
Finds the Pauli layer that should be appended to a circuit to implement a given Clifford.
If some circuit implements the Clifford described by the symplectic matrix s and the vector p_implemented, this function returns the Pauli layer that should be appended to this circuit to implement the Clifford described by s and the vector p_target.
Parameters
 snumpy array
The symplectic matrix over the integers mod 2 representing the Clifford implemented by the circuit
 p_implementednumpy array
The ‘phase vector’ over the integers mod 4 representing the Clifford implemented by the circuit
 p_targetnumpy array
The ‘phase vector’ over the integers mod 4 that, together with s represents the Clifford that you want to implement. Together with s, this vector must define a valid Clifford.
 qubit_labelslist, optional
A list of qubit labels, that are strings or ints. The length of this list should be equal to the number of qubits the Clifford acts on. The ith element of the list is the label corresponding to the qubit at the ith index of s and the two phase vectors. If None, defaults to the integers from 0 to number of qubits  1.
Returns
 list
A list that defines a Pauli layer, with the ith element containing one of the 4 tuples (P, qubit_labels[i]) with P = ‘I’, ‘X’, ‘Y’ or ‘Z’.
 pygsti.find_premultipled_pauli(s, p_implemented, p_target, qubit_labels=None)
Finds the Pauli layer that should be prepended to a circuit to implement a given Clifford.
If some circuit implements the Clifford described by the symplectic matrix s and the vector p_implemented, this function returns the Pauli layer that should be prefixed to this circuit to implement the Clifford described by s and the vector p_target.
Parameters
 snumpy array
The symplectic matrix over the integers mod 2 representing the Clifford implemented by the circuit
 p_implementednumpy array
The ‘phase vector’ over the integers mod 4 representing the Clifford implemented by the circuit
 p_targetnumpy array
The ‘phase vector’ over the integers mod 4 that, together with s represents the Clifford that you want to implement. Together with s, this vector must define a valid Clifford.
 qubit_labelslist, optional
A list of qubit labels, that are strings or ints. The length of this list should be equal to the number of qubits the Clifford acts on. The ith element of the list is the label corresponding to the qubit at the ith index of s and the two phase vectors. If None, defaults to the integers from 0 to number of qubits  1.
Returns
 list
A list that defines a Pauli layer, with the ith element containing one of the 4 tuples (‘I’,i), (‘X’,i), (‘Y’,i), (‘Z’,i).
 pygsti.find_pauli_layer(pvec, qubit_labels, pauli_labels=None)
TODO: docstring pauli_labels defaults to [‘I’, ‘X’, ‘Y’, ‘Z’].
 pygsti.find_pauli_number(pvec)
TODO: docstring
 pygsti.compose_cliffords(s1, p1, s2, p2, do_checks=True)
Multiplies two Cliffords in the symplectic representation.
The output corresponds to the symplectic representation of C2 times C1 (i.e., C1 acts first), where s1 (s2) and p1 (p2) are the symplectic matrix and phase vector, respectively, for Clifford C1 (C2). This uses the formulas derived in Hostens and De Moor PRA 71, 042315 (2005).
Parameters
 s1numpy array
The symplectic matrix over the integers mod 2 representing the first Clifford
 p1numpy array
The ‘phase vector’ over the integers mod 4 representing the first Clifford
 s2numpy array
The symplectic matrix over the integers mod 2 representing the second Clifford
 p2numpy array
The ‘phase vector’ over the integers mod 4 representing the second Clifford
 do_checksbool
If True (default), check that the inputs and output are valid Cliffords. If False, these checks are skipped (for speed).
Returns
 snumpy array
The symplectic matrix over the integers mod 2 representing the composite Clifford
 pnumpy array
The ‘phase vector’ over the integers mod 4 representing the composite Clifford
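The symplectic part of the composition is plain matrix multiplication over GF(2); only the phase-vector update needs the full Hostens-De Moor bookkeeping. A minimal sketch of the symplectic part (with an assumed one-qubit (x, z) convention; not pyGSTi's code):

```python
import numpy as np

# Symplectic part of composing two Cliffords (C1 acts first): s = s2 @ s1 (mod 2).
# The phase-vector update requires the Hostens-De Moor formula and is omitted here.
s_h = np.array([[0, 1], [1, 0]], dtype=int)  # Hadamard: exchanges X and Z
s_hh = (s_h @ s_h) % 2                       # H followed by H
```

Since H is its own inverse, the composed symplectic matrix is the identity.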
 pygsti.symplectic_kronecker(sp_factors)
Takes a Kronecker product of symplectic representations.
Construct a single (s,p) symplectic (or stabilizer) representation that corresponds to the tensor (Kronecker) product of the objects represented by each (s,p) element of sp_factors.
This is performed by inserting each factor’s s and p elements into the appropriate places of the final (large) s and p arrays. This operation works for combining Clifford operations AND also stabilizer states.
Parameters
 sp_factorsiterable
A list of (s,p) symplectic (or stabilizer) representation factors.
Returns
 snumpy.ndarray
An array of shape (2n,2n) where n is the total number of qubits (the sum of the number of qubits in each sp_factors element).
 pnumpy.ndarray
A 1D array of length 2n.
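The "inserting each factor's elements into the appropriate places" step can be sketched as follows, assuming the standard (x_1..x_n, z_1..z_n) ordering (an illustration, not pyGSTi's code):

```python
import numpy as np

def symplectic_kron_sketch(sp_factors):
    """Combine (s, p) factors into one standard-convention representation by
    scattering each factor's four k x k quadrants into the big arrays."""
    n = sum(s.shape[0] // 2 for s, _ in sp_factors)  # total number of qubits
    S = np.zeros((2 * n, 2 * n), dtype=int)
    P = np.zeros(2 * n, dtype=int)
    off = 0
    for s, p in sp_factors:
        k = s.shape[0] // 2
        S[off:off + k, off:off + k] = s[:k, :k]                          # xx block
        S[off:off + k, n + off:n + off + k] = s[:k, k:]                  # xz block
        S[n + off:n + off + k, off:off + k] = s[k:, :k]                  # zx block
        S[n + off:n + off + k, n + off:n + off + k] = s[k:, k:]          # zz block
        P[off:off + k] = p[:k]
        P[n + off:n + off + k] = p[k:]
        off += k
    return S, P

# Example: two one-qubit Hadamards (each swaps X and Z).
s_h = np.array([[0, 1], [1, 0]], dtype=int)
S, P = symplectic_kron_sketch([(s_h, np.zeros(2, dtype=int))] * 2)
```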
 pygsti.prep_stabilizer_state(nqubits, zvals=None)
Construct the (s,p) stabilizer representation for a computational basis state given by zvals.
Parameters
 nqubitsint
Number of qubits
 zvalsiterable, optional
An iterable over anything that can be cast as True/False to indicate the 0/1 value of each qubit in the Z basis. If None, the all-zeros state is assumed.
Returns
 s,pnumpy.ndarray
The stabilizer “matrix” and phase vector corresponding to the desired state. s has shape (2n,2n) (it includes anti-stabilizers) and p has shape 2n, where n equals nqubits.
 pygsti.apply_clifford_to_stabilizer_state(s, p, state_s, state_p)
Applies a Clifford in the symplectic representation to a stabilizer state in the standard stabilizer representation.
The output corresponds to the stabilizer representation of the output state.
Parameters
 snumpy array
The symplectic matrix over the integers mod 2 representing the Clifford
 pnumpy array
The ‘phase vector’ over the integers mod 4 representing the Clifford
 state_snumpy array
The matrix over the integers mod 2 representing the stabilizer state
 state_pnumpy array
The ‘phase vector’ over the integers mod 4 representing the stabilizer state
Returns
 out_snumpy array
The symplectic matrix over the integers mod 2 representing the output state
 out_pnumpy array
The ‘phase vector’ over the integers mod 4 representing the output state
 pygsti.pauli_z_measurement(state_s, state_p, qubit_index)
Computes the probabilities of the 0/1 (+/-) outcomes from measuring a Pauli Z operator on a stabilizer state.
Parameters
 state_snumpy array
The matrix over the integers mod 2 representing the stabilizer state
 state_pnumpy array
The ‘phase vector’ over the integers mod 4 representing the stabilizer state
 qubit_indexint
The index of the qubit being measured
Returns
 p0, p1float
Probabilities of the 0 (+1 eigenvalue) and 1 (-1 eigenvalue) outcomes.
 state_s_0, state_s_1numpy array
Matrix over the integers mod 2 representing the output stabilizer states.
 state_p_0, state_p_1numpy array
Phase vectors over the integers mod 4 representing the output stabilizer states.
 pygsti.colsum(i, j, s, p, n)
A helper routine used for manipulating stabilizer state representations.
Updates the ith stabilizer generator (column of s and element of p) with the group-action product of the jth and ith generators, i.e.
generator[i] -> generator[j] + generator[i]
Parameters
 iint
Destination generator index.
 jint
Source generator index.
 snumpy array
The matrix over the integers mod 2 representing the stabilizer state
 pnumpy array
The ‘phase vector’ over the integers mod 4 representing the stabilizer state
 nint
The number of qubits. s must be shape (2n,2n) and p must be length 2n.
Returns
None
 pygsti.colsum_acc(acc_s, acc_p, j, s, p, n)
A helper routine used for manipulating stabilizer state representations.
Similar to
colsum()
except a separate “accumulator” column is used instead of the ith column of s and element of p. I.e., this performs:
acc[0] -> generator[j] + acc[0]
Parameters
 acc_snumpy array
The matrix over the integers mod 2 representing the “accumulator” stabilizer state
 acc_pnumpy array
The ‘phase vector’ over the integers mod 4 representing the “accumulator” stabilizer state
 jint
Index of the stabilizer generator being accumulated (see above).
 snumpy array
The matrix over the integers mod 2 representing the stabilizer state
 pnumpy array
The ‘phase vector’ over the integers mod 4 representing the stabilizer state
 nint
The number of qubits. s must be shape (2n,2n) and p must be length 2n.
Returns
None
 pygsti.stabilizer_measurement_prob(state_sp_tuple, moutcomes, qubit_filter=None, return_state=False)
Compute the probability of a given outcome when measuring some or all of the qubits in a stabilizer state.
Returns this probability, optionally along with the updated (post-measurement) stabilizer state.
Parameters
 state_sp_tupletuple
A (s,p) tuple giving the stabilizer state to measure.
 moutcomesarraylike
The z-values identifying which measurement outcome (a computational basis state) to compute the probability for.
 qubit_filteriterable, optional
If not None, a list of qubit indices which are measured. len(qubit_filter) should always equal len(moutcomes). If None, then assume all qubits are measured (len(moutcomes) == num_qubits).
 return_statebool, optional
Whether the postmeasurement (w/outcome moutcomes) state is also returned.
Returns
 pfloat
The probability of the given measurement outcome.
 state_s,state_pnumpy.ndarray
Only returned when return_state=True. The post-measurement stabilizer state representation (an updated version of state_sp_tuple).
 pygsti.embed_clifford(s, p, qubit_inds, n)
Embeds the (s,p) Clifford symplectic representation into a larger symplectic representation.
The action of (s,p) takes place on the qubit indices specified by qubit_inds.
Parameters
 snumpy array
The symplectic matrix over the integers mod 2 representing the Clifford
 pnumpy array
The ‘phase vector’ over the integers mod 4 representing the Clifford
 qubit_indslist
A list or array of integers specifying which qubits s and p act on.
 nint
The total number of qubits
Returns
 snumpy array
The symplectic matrix over the integers mod 2 representing the embedded Clifford
 pnumpy array
The ‘phase vector’ over the integers mod 4 representing the embedded Clifford
 pygsti.compute_internal_gate_symplectic_representations(gllist=None)
Creates a dictionary of the symplectic representations of ‘standard’ Clifford gates.
Returns a dictionary containing the symplectic matrices and phase vectors that represent the specified ‘standard’ Clifford gates, or the representations of all the standard gates if no list of operation labels is supplied. These ‘standard’ Clifford gates are those gates that are already known to the code (e.g., the label ‘CNOT’ has a specific meaning in the code), and are recorded as unitaries in “internalgates.py”.
Parameters
 gllistlist, optional
If not None, a list of strings corresponding to operation labels for any of the standard gates that have fixed meaning for the code (e.g., ‘CNOT’ corresponds to the CNOT gate with the first qubit the target). For example, this list could be gllist = [‘CNOT’,’H’,’P’,’I’,’X’].
Returns
 srep_dictdict
dictionary of (smatrix,svector) tuples, where smatrix and svector are numpy arrays containing the symplectic matrix and phase vector representing the operation label given by the key.
 pygsti.symplectic_rep_of_clifford_circuit(circuit, srep_dict=None, pspec=None)
Returns the symplectic representation of the composite Clifford implemented by the specified Clifford circuit.
This uses the formulas derived in Hostens and De Moor PRA 71, 042315 (2005).
Parameters
 circuitCircuit
The Clifford circuit to calculate the global action of, input as a Circuit object.
 srep_dictdict, optional
If not None, a dictionary providing the (symplectic matrix, phase vector) tuples associated with each operation label. If the circuit layer contains only ‘standard’ gates which have a hard-coded symplectic representation this may be None. Alternatively, if pspec is specified and it contains the gates in circuit in a Clifford model, it also does not need to be specified (and it is ignored if it is specified). Otherwise it must be specified.
 pspecQubitProcessorSpec, optional
A QubitProcessorSpec that contains a Clifford model that defines the symplectic action of all of the gates in circuit. If this is not None it overrides srep_dict. Both pspec and srep_dict can only be None if the circuit contains only gates with names that are hardcoded into pyGSTi.
Returns
 snumpy array
The symplectic matrix representing the Clifford implemented by the input circuit
 pdictionary of numpy arrays
The phase vector representing the Clifford implemented by the input circuit
 pygsti.symplectic_rep_of_clifford_layer(layer, n=None, q_labels=None, srep_dict=None, add_internal_sreps=True)
Constructs the symplectic representation of the n-qubit Clifford implemented by a single quantum circuit layer.
(Gates in a “single layer” must act on disjoint sets of qubits, but not all qubits need to be acted upon in the layer.)
Parameters
 layerLabel
A layer label, often a compound label with components. Specifies the Clifford gate(s) to calculate the global action of.
 nint, optional
The total number of qubits. Must be specified if q_labels is None.
 q_labelslist, optional
A list of all the qubit labels. If the layer is over qubits that are not labelled by integers 0 to n1 then it is necessary to specify this list. Note that this should contain all the qubit labels for the circuit that this is a layer from, and they should be ordered as in that circuit, otherwise the symplectic rep returned might not be of the correct dimension or of the correct order.
 srep_dictdict, optional
If not None, a dictionary providing the (symplectic matrix, phase vector) tuples associated with each operation label. If the circuit layer contains only ‘standard’ gates which have a hard-coded symplectic representation this may be None. Otherwise it must be specified. If the layer contains some standard gates, it is not necessary to specify the symplectic representation for those gates.
 add_internal_srepsbool, optional
If True, the symplectic reps for internal gates are calculated and added to srep_dict. For speed, calculate these reps once, store them in srep_dict, and set this to False.
Returns
 snumpy array
The symplectic matrix representing the Clifford implemented by the specified circuit layer
 pnumpy array
The phase vector representing the Clifford implemented by the specified circuit layer
 pygsti.one_q_clifford_symplectic_group_relations()
Gives the group relationship between the ‘I’, ‘H’, ‘P’, ‘HP’, ‘PH’, and ‘HPH’ up-to-Paulis operators.
The returned dictionary contains keys (A,B) for all A and B in the above list. The value for key (A,B) is C if BA = C x some Pauli operator. E.g., (‘P’,’P’) = ‘I’.
This dictionary is important for compiling multi-qubit Clifford gates without unnecessary 1-qubit gate overheads. But note that this dictionary should not be used for compressing circuits containing these gates when the exact action of the circuit is of importance (not only the up-to-Paulis action of the circuit).
Returns
dict
 pygsti.unitary_is_clifford(unitary)
Returns True if the unitary is a Clifford gate (w.r.t the standard basis), and False otherwise.
Parameters
 unitarynumpy.ndarray
A unitary matrix to test.
Returns
bool
 pygsti.unitary_to_symplectic(u, flagnonclifford=True)
Returns the symplectic representation of a one-qubit or two-qubit Clifford unitary.
The Clifford is input as a complex matrix in the standard computational basis.
Parameters
 unumpy array
The unitary matrix to construct the symplectic representation for. This must be a one-qubit or two-qubit gate (so, it is a 2 x 2 or 4 x 4 matrix), and it must be provided in the standard computational basis. It must also be a Clifford gate in the standard sense.
 flagnoncliffordbool, opt
If True, a ValueError is raised when the input unitary is not a Clifford gate. If False, when the unitary is not a Clifford the returned s and p are None.
Returns
 snumpy array or None
The symplectic matrix representing the unitary, or None if the input unitary is not a Clifford and flagnonclifford is False
 pnumpy array or None
The phase vector representing the unitary, or None if the input unitary is not a Clifford and flagnonclifford is False
 pygsti.random_symplectic_matrix(n, convention='standard', rand_state=None)
Returns a symplectic matrix of dimensions 2n x 2n sampled uniformly at random from the symplectic group S(n).
This uses the method of Robert Koenig and John A. Smolin, presented in “How to efficiently select an arbitrary Clifford group element”.
Parameters
 nint
The size of the symplectic group to sample from.
 conventionstr, optional
Can be either ‘standard’ or ‘directsum’, which correspond to two different definitions for the symplectic form. In the case of ‘standard’, the symplectic form is the 2n x 2n matrix of ((0,1),(1,0)), where ‘1’ and ‘0’ are the identity and all-zeros matrices of size n x n. The ‘standard’ symplectic form is the convention used throughout most of the code. In the case of ‘directsum’, the symplectic form is the direct sum of n 2x2 bit-flip matrices.
 rand_state: RandomState, optional
A np.random.RandomState object for seeding RNG
Returns
 snumpy array
A uniformly sampled random symplectic matrix.
 pygsti.random_clifford(n, rand_state=None)
Returns a Clifford, in the symplectic representation, sampled uniformly at random from the n-qubit Clifford group.
The core of this function uses the method of Robert Koenig and John A. Smolin, presented in “How to efficiently select an arbitrary Clifford group element”, for sampling a uniformly random symplectic matrix.
Parameters
 nint
The number of qubits the Clifford group is over.
 rand_state: RandomState, optional
A np.random.RandomState object for seeding RNG
Returns
 snumpy array
The symplectic matrix representing the uniformly sampled random Clifford.
 pnumpy array
The phase vector representing the uniformly sampled random Clifford.
 pygsti.random_phase_vector(s, n, rand_state=None)
Generates a uniformly random phase vector for an n-qubit Clifford.
(This vector, together with the provided symplectic matrix, defines a valid Clifford operation.) In combination with a uniformly random s, the returned p defines a uniformly random Clifford gate.
Parameters
 snumpy array
The symplectic matrix for which to construct a random phase vector.
 nint
The number of qubits the Clifford group is over.
 rand_state: RandomState, optional
A np.random.RandomState object for seeding RNG
Returns
 pnumpy array
A phase vector sampled uniformly at random from all those phase vectors that, as a pair with s, define a valid nqubit Clifford.
 pygsti.bitstring_for_pauli(p)
Get the bitstring corresponding to a Pauli.
The state, represented by a bitstring, that the Pauli operator represented by the phase vector p creates when acting on the standard input state.
Parameters
 pnumpy.ndarray
Phase vector of a symplectic representation, encoding a Pauli operation.
Returns
 list
A list of 0 or 1 elements.
 pygsti.apply_internal_gate_to_symplectic(s, gate_name, qindex_list, optype='row')
Applies a Clifford gate to the n-qubit Clifford gate specified by the 2n x 2n symplectic matrix.
The Clifford gate is specified by the internally hard-coded name gate_name. This gate is applied to the qubits with indices in qindex_list, where these indices are w.r.t. the indices of s. This gate is applied from the left (right) of s if optype is ‘row’ (‘column’), and has a row-action (column-action) on s. E.g., the Hadamard (‘H’) on the qubit with index i swaps the ith row (or column) with the (i+n)th row (or column) of s; CNOT adds rows, etc.
Note that this function updates s, and returns None.
Parameters
 s : np.array
An even-dimension square array over [0, 1] that is the symplectic representation of some (normally multi-qubit) Clifford gate.
 gate_name : str
The gate name. Should be one of the gate names of the hardcoded gates used internally in pyGSTi that is also a Clifford gate. Currently not all of those gates are supported, and gate_name must be one of: ‘H’, ‘P’, ‘CNOT’, ‘SWAP’.
 qindex_list : list or tuple
The qubit indices that gate_name acts on (can be either length 1 or 2 depending on whether the gate acts on 1 or 2 qubits).
 optype : {‘row’, ‘column’}, optional
Whether the gate acts on s via row operations (‘row’) or column operations (‘column’).
Returns
None
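As an illustration of the row action described above, here is a minimal sketch (not pyGSTi's implementation) of the Hadamard case, which swaps the ith row of s with the (i+n)th row in place:

```python
import numpy as np

def apply_hadamard_row(s, i):
    # s is a 2n x 2n binary symplectic matrix; 'H' on qubit i
    # swaps the i-th row with the (i+n)-th row (row action).
    n = s.shape[0] // 2
    s[[i, i + n], :] = s[[i + n, i], :]
    return None  # s is updated in place

# Example: for n=1 the identity matrix maps to the symplectic
# representation of the Hadamard gate.
s = np.eye(2, dtype=int)
apply_hadamard_row(s, 0)
print(s)  # [[0 1], [1 0]]
```

Like the documented function, the sketch mutates s and returns None rather than producing a new matrix.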
 pygsti.compute_num_cliffords(n)
The number of Clifford gates in the n-qubit Clifford group.
Code from “How to efficiently select an arbitrary Clifford group element” by Robert Koenig and John A. Smolin.
Parameters
 n : int
The number of qubits the Clifford group is over.
Returns
 long integer
The cardinality of the n-qubit Clifford group.
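This cardinality (for the Clifford group modulo global phases) has the standard closed form 2^(n^2 + 2n) · ∏_{j=1..n} (4^j − 1); a minimal sketch of that formula, independent of pyGSTi's implementation:

```python
def num_cliffords(n):
    """Cardinality of the n-qubit Clifford group (mod phases):
    2^(n^2 + 2n) * prod_{j=1..n} (4^j - 1)."""
    count = 2 ** (n * n + 2 * n)
    for j in range(1, n + 1):
        count *= 4 ** j - 1
    return count

print(num_cliffords(1))  # 24
print(num_cliffords(2))  # 11520
```

Python integers are arbitrary precision, so the count stays exact even for large n, where it grows super-exponentially.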
 pygsti.compute_num_symplectics(n)
The number of elements in the symplectic group S(n) over the 2-element finite field.
Code from “How to efficiently select an arbitrary Clifford group element” by Robert Koenig and John A. Smolin.
Parameters
 n : int
S(n) group parameter.
Returns
int
 pygsti.compute_num_cosets(n)
Returns the number of different cosets for the symplectic group S(n) over the 2-element finite field.
Code from “How to efficiently select an arbitrary Clifford group element” by Robert Koenig and John A. Smolin.
Parameters
 n : int
S(n) group parameter.
Returns
int
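The two counts above are related: |S(n)| = 2^(n^2) · ∏_{j=1..n} (4^j − 1), and the number of cosets, (4^n − 1) · 2^(2n−1), is the index of S(n−1) inside S(n). A hedged sketch of both formulas (assumed from the Koenig–Smolin construction, not pyGSTi's code):

```python
def num_symplectics(n):
    # |Sp(2n, F_2)| = 2^(n^2) * prod_{j=1..n} (4^j - 1)
    count = 2 ** (n * n)
    for j in range(1, n + 1):
        count *= 4 ** j - 1
    return count

def num_cosets(n):
    # (4^n - 1) * 2^(2n - 1): the index of S(n-1) inside S(n)
    return (4 ** n - 1) * 2 ** (2 * n - 1)

print(num_symplectics(1))  # 6
print(num_cosets(2))       # 120
```

The coset count is what lets the Koenig–Smolin sampler build a random symplectic matrix recursively, one qubit at a time.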
 pygsti.symplectic_innerproduct(v, w)
Returns the symplectic inner product of two vectors in F_2^(2n).
Here F_2 is the finite field containing 0 and 1, and 2n is the length of the vectors. Code from “How to efficiently select an arbitrary Clifford group element” by Robert Koenig and John A. Smolin.
Parameters
 v : numpy.ndarray
A length-2n vector.
 w : numpy.ndarray
A length-2n vector.
Returns
int
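Concretely, the symplectic inner product pairs position i with position i+n; a minimal reimplementation under that convention (an illustrative sketch, not pyGSTi's code):

```python
import numpy as np

def symplectic_ip(v, w):
    # <v, w> = sum_i v[i] * w[i+n] + v[i+n] * w[i]  (mod 2)
    n = len(v) // 2
    v, w = np.asarray(v), np.asarray(w)
    return int(np.dot(v[:n], w[n:]) + np.dot(v[n:], w[:n])) % 2

print(symplectic_ip([1, 0], [0, 1]))  # 1 (X and Z anticommute)
print(symplectic_ip([1, 0], [1, 0]))  # 0
```

In the symplectic representation this inner product is exactly the commutation indicator for the corresponding Pauli operators: 1 if they anticommute, 0 if they commute.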
 pygsti.symplectic_transvection(k, v)
Applies the transvection Z_k to v.
Code from “How to efficiently select an arbitrary Clifford group element” by Robert Koenig and John A. Smolin.
Parameters
 k : numpy.ndarray
A length-2n vector.
 v : numpy.ndarray
A length-2n vector.
Returns
numpy.ndarray
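A transvection Z_k maps v to v + ⟨v, k⟩ k (mod 2), where ⟨·,·⟩ is the symplectic inner product above; a minimal sketch under that convention (not pyGSTi's implementation):

```python
import numpy as np

def transvection(k, v):
    # Z_k(v) = v + <v, k> * k  (mod 2)
    n = len(v) // 2
    k, v = np.asarray(k), np.asarray(v)
    ip = int(np.dot(v[:n], k[n:]) + np.dot(v[n:], k[:n])) % 2
    return (v + ip * k) % 2

print(transvection([1, 0], [0, 1]))  # [1 1]
print(transvection([1, 0], [1, 0]))  # [1 0] (fixed: <v, k> = 0)
```

Note that Z_k fixes every vector symplectically orthogonal to k, which is why a product of at most two transvections can map any nonzero vector to any other.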
 pygsti.int_to_bitstring(i, n)
Converts integer i to a length-n array of bits.
Code from “How to efficiently select an arbitrary Clifford group element” by Robert Koenig and John A. Smolin.
Parameters
 i : int
Any integer.
 n : int
Number of bits.
Returns
 numpy.ndarray
Integer array of 0s and 1s.
 pygsti.bitstring_to_int(b, n)
Converts an n-bit string b to an integer between 0 and 2^n - 1.
Code from “How to efficiently select an arbitrary Clifford group element” by Robert Koenig and John A. Smolin.
Parameters
 b : list, tuple, or array
Sequence of bits (a bitstring).
 n : int
Number of bits.
Returns
int
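These two conversions are inverses of each other; a sketch assuming the least-significant-bit-first ordering used in the Koenig–Smolin reference code (the bit order is an assumption, not taken from this documentation):

```python
import numpy as np

def int_to_bits(i, n):
    # Least-significant bit first: bit j of i goes to position j.
    return np.array([(i >> j) & 1 for j in range(n)], dtype=int)

def bits_to_int(b, n):
    # Inverse of int_to_bits under the same bit ordering.
    return sum(int(b[j]) << j for j in range(n))

print(int_to_bits(6, 4))             # [0 1 1 0]
print(bits_to_int([0, 1, 1, 0], 4))  # 6
```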
 pygsti.find_symplectic_transvection(x, y)
A utility function for selecting a random Clifford element.
Code from “How to efficiently select an arbitrary Clifford group element” by Robert Koenig and John A. Smolin.
Parameters
 x : numpy.ndarray
A length-2n vector.
 y : numpy.ndarray
A length2n vector.
Returns
numpy.ndarray
 pygsti.compute_symplectic_matrix(i, n)
Returns the 2n x 2n symplectic matrix, over the finite field containing 0 and 1, with the “canonical” index i.
Code from “How to efficiently select an arbitrary Clifford group element” by Robert Koenig and John A. Smolin.
Parameters
 i : int
Canonical index.
 n : int
Number of qubits.
Returns
numpy.ndarray
 pygsti.compute_symplectic_label(gn, n=None)
Returns the “canonical” index of 2n x 2n symplectic matrix gn over the finite field containing 0 and 1.
Code from “How to efficiently select an arbitrary Clifford group element” by Robert Koenig and John A. Smolin.
Parameters
 gn : numpy.ndarray
The symplectic matrix.
 n : int, optional
Number of qubits (if None, use gn.shape[0] // 2).
Returns
 int
The canonical index of gn.
 pygsti.random_symplectic_index(n, rand_state=None)
The index of a uniformly random 2n x 2n symplectic matrix over the finite field containing 0 and 1.
Code from “How to efficiently select an arbitrary Clifford group element” by Robert Koenig and John A. Smolin.
Parameters
 n : int
Number of qubits (half the dimension of the symplectic matrix).
 rand_state : RandomState, optional
An np.random.RandomState object for seeding the RNG.
Returns
numpy.ndarray