pygsti
¶
A Python implementation of Gate Set Tomography
Subpackages¶
pygsti.algorithms
pygsti.algorithms.compilers
pygsti.algorithms.contract
pygsti.algorithms.core
pygsti.algorithms.directx
pygsti.algorithms.fiducialpairreduction
pygsti.algorithms.fiducialselection
pygsti.algorithms.gaugeopt
pygsti.algorithms.germselection
pygsti.algorithms.grammatrix
pygsti.algorithms.grasp
pygsti.algorithms.mirroring
pygsti.algorithms.randomcircuit
pygsti.algorithms.rbfit
pygsti.algorithms.robust_phase_estimation
pygsti.algorithms.scoring
pygsti.baseobjs
pygsti.baseobjs.opcalc
pygsti.baseobjs._compatibility
pygsti.baseobjs.advancedoptions
pygsti.baseobjs.basis
pygsti.baseobjs.basisconstructors
pygsti.baseobjs.errorgenbasis
pygsti.baseobjs.errorgenlabel
pygsti.baseobjs.errorgenspace
pygsti.baseobjs.exceptions
pygsti.baseobjs.label
pygsti.baseobjs.nicelyserializable
pygsti.baseobjs.outcomelabeldict
pygsti.baseobjs.polynomial
pygsti.baseobjs.profiler
pygsti.baseobjs.protectedarray
pygsti.baseobjs.qubitgraph
pygsti.baseobjs.resourceallocation
pygsti.baseobjs.smartcache
pygsti.baseobjs.statespace
pygsti.baseobjs.verbosityprinter
pygsti.circuits
pygsti.data
pygsti.drivers
pygsti.evotypes
pygsti.extras
pygsti.extras.crosstalk
pygsti.extras.devices
pygsti.extras.devices.devcore
pygsti.extras.devices.ibmq_athens
pygsti.extras.devices.ibmq_belem
pygsti.extras.devices.ibmq_bogota
pygsti.extras.devices.ibmq_burlington
pygsti.extras.devices.ibmq_cambridge
pygsti.extras.devices.ibmq_casablanca
pygsti.extras.devices.ibmq_essex
pygsti.extras.devices.ibmq_guadalupe
pygsti.extras.devices.ibmq_lima
pygsti.extras.devices.ibmq_london
pygsti.extras.devices.ibmq_manhattan
pygsti.extras.devices.ibmq_manila
pygsti.extras.devices.ibmq_melbourne
pygsti.extras.devices.ibmq_montreal
pygsti.extras.devices.ibmq_ourense
pygsti.extras.devices.ibmq_quito
pygsti.extras.devices.ibmq_rome
pygsti.extras.devices.ibmq_rueschlikon
pygsti.extras.devices.ibmq_santiago
pygsti.extras.devices.ibmq_sydney
pygsti.extras.devices.ibmq_tenerife
pygsti.extras.devices.ibmq_toronto
pygsti.extras.devices.ibmq_vigo
pygsti.extras.devices.ibmq_yorktown
pygsti.extras.devices.rigetti_agave
pygsti.extras.devices.rigetti_aspen4
pygsti.extras.devices.rigetti_aspen6
pygsti.extras.devices.rigetti_aspen7
pygsti.extras.drift
pygsti.extras.ibmq
pygsti.extras.idletomography
pygsti.extras.interpygate
pygsti.extras.rb
pygsti.extras.rpe
pygsti.forwardsims
pygsti.forwardsims.chpforwardsim
pygsti.forwardsims.distforwardsim
pygsti.forwardsims.forwardsim
pygsti.forwardsims.mapforwardsim
pygsti.forwardsims.mapforwardsim_calc_generic
pygsti.forwardsims.matrixforwardsim
pygsti.forwardsims.successfailfwdsim
pygsti.forwardsims.termforwardsim
pygsti.forwardsims.termforwardsim_calc_generic
pygsti.forwardsims.weakforwardsim
pygsti.io
pygsti.layouts
pygsti.modelmembers
pygsti.modelmembers.instruments
pygsti.modelmembers.operations
pygsti.modelmembers.operations.composederrorgen
pygsti.modelmembers.operations.composedop
pygsti.modelmembers.operations.denseop
pygsti.modelmembers.operations.depolarizeop
pygsti.modelmembers.operations.eigpdenseop
pygsti.modelmembers.operations.embeddederrorgen
pygsti.modelmembers.operations.embeddedop
pygsti.modelmembers.operations.experrorgenop
pygsti.modelmembers.operations.fullarbitraryop
pygsti.modelmembers.operations.fulltpop
pygsti.modelmembers.operations.fullunitaryop
pygsti.modelmembers.operations.lindbladerrorgen
pygsti.modelmembers.operations.linearop
pygsti.modelmembers.operations.lpdenseop
pygsti.modelmembers.operations.opfactory
pygsti.modelmembers.operations.repeatedop
pygsti.modelmembers.operations.staticarbitraryop
pygsti.modelmembers.operations.staticcliffordop
pygsti.modelmembers.operations.staticstdop
pygsti.modelmembers.operations.staticunitaryop
pygsti.modelmembers.operations.stochasticop
pygsti.modelmembers.povms
pygsti.modelmembers.povms.basepovm
pygsti.modelmembers.povms.complementeffect
pygsti.modelmembers.povms.composedeffect
pygsti.modelmembers.povms.composedpovm
pygsti.modelmembers.povms.computationaleffect
pygsti.modelmembers.povms.computationalpovm
pygsti.modelmembers.povms.conjugatedeffect
pygsti.modelmembers.povms.denseeffect
pygsti.modelmembers.povms.effect
pygsti.modelmembers.povms.fulleffect
pygsti.modelmembers.povms.fullpureeffect
pygsti.modelmembers.povms.marginalizedpovm
pygsti.modelmembers.povms.povm
pygsti.modelmembers.povms.staticeffect
pygsti.modelmembers.povms.staticpureeffect
pygsti.modelmembers.povms.tensorprodeffect
pygsti.modelmembers.povms.tensorprodpovm
pygsti.modelmembers.povms.tppovm
pygsti.modelmembers.povms.unconstrainedpovm
pygsti.modelmembers.states
pygsti.modelmembers.states.composedstate
pygsti.modelmembers.states.computationalstate
pygsti.modelmembers.states.cptpstate
pygsti.modelmembers.states.densestate
pygsti.modelmembers.states.fullpurestate
pygsti.modelmembers.states.fullstate
pygsti.modelmembers.states.purestate
pygsti.modelmembers.states.state
pygsti.modelmembers.states.staticpurestate
pygsti.modelmembers.states.staticstate
pygsti.modelmembers.states.tensorprodstate
pygsti.modelmembers.states.tpstate
pygsti.modelmembers.errorgencontainer
pygsti.modelmembers.modelmember
pygsti.modelmembers.modelmembergraph
pygsti.modelmembers.term
pygsti.modelpacks
pygsti.modelpacks.legacy
pygsti.modelpacks.legacy.std1Q_Cliffords
pygsti.modelpacks.legacy.std1Q_XY
pygsti.modelpacks.legacy.std1Q_XYI
pygsti.modelpacks.legacy.std1Q_XYZI
pygsti.modelpacks.legacy.std1Q_XZ
pygsti.modelpacks.legacy.std1Q_ZN
pygsti.modelpacks.legacy.std1Q_pi4_pi2_XZ
pygsti.modelpacks.legacy.std2Q_XXII
pygsti.modelpacks.legacy.std2Q_XXYYII
pygsti.modelpacks.legacy.std2Q_XY
pygsti.modelpacks.legacy.std2Q_XYCNOT
pygsti.modelpacks.legacy.std2Q_XYCPHASE
pygsti.modelpacks.legacy.std2Q_XYI
pygsti.modelpacks.legacy.std2Q_XYI1
pygsti.modelpacks.legacy.std2Q_XYI2
pygsti.modelpacks.legacy.std2Q_XYICNOT
pygsti.modelpacks.legacy.std2Q_XYICPHASE
pygsti.modelpacks.legacy.std2Q_XYZICNOT
pygsti.modelpacks.legacy.stdQT_XYIMS
pygsti.modelpacks._modelpack
pygsti.modelpacks.smq1Q_XY
pygsti.modelpacks.smq1Q_XYI
pygsti.modelpacks.smq1Q_XYZI
pygsti.modelpacks.smq1Q_XZ
pygsti.modelpacks.smq1Q_Xpi2_rpe
pygsti.modelpacks.smq1Q_Ypi2_rpe
pygsti.modelpacks.smq1Q_ZN
pygsti.modelpacks.smq1Q_pi4_pi2_XZ
pygsti.modelpacks.smq2Q_XXII
pygsti.modelpacks.smq2Q_XXII_condensed
pygsti.modelpacks.smq2Q_XXYYII
pygsti.modelpacks.smq2Q_XXYYII_condensed
pygsti.modelpacks.smq2Q_XY
pygsti.modelpacks.smq2Q_XYCNOT
pygsti.modelpacks.smq2Q_XYCPHASE
pygsti.modelpacks.smq2Q_XYI
pygsti.modelpacks.smq2Q_XYI1
pygsti.modelpacks.smq2Q_XYI2
pygsti.modelpacks.smq2Q_XYICNOT
pygsti.modelpacks.smq2Q_XYICPHASE
pygsti.modelpacks.smq2Q_XYZICNOT
pygsti.modelpacks.stdtarget
pygsti.models
pygsti.models.cloudnoisemodel
pygsti.models.explicitcalc
pygsti.models.explicitmodel
pygsti.models.fogistore
pygsti.models.gaugegroup
pygsti.models.implicitmodel
pygsti.models.layerrules
pygsti.models.localnoisemodel
pygsti.models.memberdict
pygsti.models.model
pygsti.models.modelconstruction
pygsti.models.modelnoise
pygsti.models.modelparaminterposer
pygsti.models.oplessmodel
pygsti.models.qutrit
pygsti.models.rpemodel
pygsti.models.stencillabel
pygsti.objectivefns
pygsti.optimize
pygsti.processors
pygsti.protocols
pygsti.protocols.confidenceregionfactory
pygsti.protocols.estimate
pygsti.protocols.freeformsim
pygsti.protocols.gst
pygsti.protocols.modeltest
pygsti.protocols.protocol
pygsti.protocols.rb
pygsti.protocols.rpe
pygsti.protocols.stability
pygsti.protocols.treenode
pygsti.protocols.vb
pygsti.protocols.vbdataframe
pygsti.report
pygsti.report.section
pygsti.report.autotitle
pygsti.report.cell
pygsti.report.colormaps
pygsti.report.convert
pygsti.report.factory
pygsti.report.figure
pygsti.report.fogidiagram
pygsti.report.formatter
pygsti.report.formatters
pygsti.report.html
pygsti.report.latex
pygsti.report.merge_helpers
pygsti.report.modelfunction
pygsti.report.mpl_colormaps
pygsti.report.notebook
pygsti.report.notebookcell
pygsti.report.parse_notebook_text
pygsti.report.plothelpers
pygsti.report.plotly_plot_ex
pygsti.report.python
pygsti.report.report
pygsti.report.reportableqty
pygsti.report.reportables
pygsti.report.row
pygsti.report.table
pygsti.report.textblock
pygsti.report.vbplot
pygsti.report.workspace
pygsti.report.workspaceplots
pygsti.report.workspacetables
pygsti.report.workspacetexts
pygsti.serialization
pygsti.tools
pygsti.tools.basistools
pygsti.tools.chi2fns
pygsti.tools.compilationtools
pygsti.tools.dataframetools
pygsti.tools.exceptions
pygsti.tools.fogitools
pygsti.tools.gatetools
pygsti.tools.group
pygsti.tools.hypothesis
pygsti.tools.internalgates
pygsti.tools.jamiolkowski
pygsti.tools.legacytools
pygsti.tools.likelihoodfns
pygsti.tools.lindbladtools
pygsti.tools.listtools
pygsti.tools.matrixmod2
pygsti.tools.matrixtools
pygsti.tools.mpitools
pygsti.tools.mptools
pygsti.tools.nameddict
pygsti.tools.optools
pygsti.tools.opttools
pygsti.tools.pdftools
pygsti.tools.profile
pygsti.tools.rbtheory
pygsti.tools.rbtools
pygsti.tools.sharedmemtools
pygsti.tools.slicetools
pygsti.tools.symplectic
pygsti.tools.typeddict
Submodules¶
Package Contents¶
Classes¶
A dummy profiler that doesn't do anything. 

A basis that is included within and integrated into pyGSTi. 

Class responsible for logging things to stdout or a file. 

A basis that is the direct sum of one or more "component" bases. 

An immutable list (a tuple) of 

Describes available resources and how they should be allocated. 

A Levenberg-Marquardt optimizer customized for GST-like problems. 

An optimizer. Optimizes an objective function. 

Element of 

An association between Circuits and outcome counts, serving as the input data for many QCVV protocols. 

Advanced options for GST driver functions. 

The device specification for a quantum computer with one or more qubits. 

A predictive model for a Quantum Information Processor (QIP). 

A dictionary that also holds category names and types. 

A dictionary that holds per-key type information. 

An ordered set of labeled matrices/vectors. 

A Basis whose elements are specified directly. 

A basis that is the direct sum of one or more "component" bases. 

A label used to identify a gate, circuit layer, or (sub)circuit. 

Labels an elementary error generator by simply a type and one or two 
Functions¶

Contract a Model to a specified space. 









Contract the state preparation and measurement operations of 

Performs Linear-inversion Gate Set Tomography on the dataset. 













Returns the rank and singular values of the Gram matrix for a dataset. 

Performs core Gate Set Tomography function of model optimization. 

Performs core Gate Set Tomography function of model optimization. 

Performs Iterative Gate Set Tomography on the dataset. 

Runs the core model-optimization step within a GST routine by optimizing 

Runs the core model-optimization step for models using the 

Find the closest (in fidelity) unitary superoperator to operation_mx. 

Optimize the gauge degrees of freedom of a model to that of a target. 

Optimize the gauge of a model using a custom objective function. 

Creates the objective function and jacobian (if available) 

Helper function - same as that in core.py. 

Helper function - same as that in core.py. 

Helper function - CPTP penalty: (sum of trace-norms of gates), 

Helper function - CPTP penalty: (sum of trace-norms of gates), 

Helper function - jacobian of CPTP penalty (sum of trace-norms of gates) 

Helper function - jacobian of CPTP penalty (sum of trace-norms of gates) 

Returns the rank and singular values of the Gram matrix for a dataset. 

Compute a maximal set of basis circuits for a Gram matrix. 

Compute the rank and singular values of a maximal Gram matrix. 
Get the linear operator on (vectorized) density matrices corresponding to an n-qubit unitary operator on states. 


Construct the single-qubit operation matrix. 

Construct the single-qubit operation matrix. 

Creates a DataSet used for generating bootstrapped error bars. 

Creates a series of "bootstrapped" Models. 

Optimizes the "spam weight" parameter used when gauge optimizing a set of models. 

Standard deviation of gs_func over an ensemble of models. 

Mean of gs_func over an ensemble of models. 

Take the per-gate-element mean of a set of models. 

Take the per-gate-element standard deviation of a list of models. 

Take the per-gate-element RMS of a set of models. 



Compares a 

Perform Linear Gate Set Tomography (LGST). 

Perform long-sequence GST (LSGST). 

A more fundamental interface for performing end-to-end GST. 

Perform end-to-end GST analysis using standard practices. 



Loads a DataSet from the data_filename_or_set argument of functions in this module. 













Apply a function f to every element of a list l in parallel, using MPI. 
Get a comm object 




Get the elements of the specified basis type which spans the density-matrix space given by dim. 

Get the "long name" for a particular basis, which is typically used in reports, etc. 

Get a list of short labels corresponding to the elements of the described basis. 

Whether a basis contains sparse matrices. 

Convert an operation matrix from one basis of a density matrix space to another. 

Constructs bases from transforming mx between two basis names. 

Construct a Basis object with type given by basis and dimension appropriate for transforming mx. 

Change the basis of mx to a potentially larger or smaller 'std'-type basis given by std_basis_2. 

Change mx from start_basis to end_basis allowing embedding expansion and contraction if needed. 

Wrapper for 

Convert a state vector into a density matrix. 

Convert a single qubit state vector into a Liouville vector in the Pauli basis. 

Convert a vector in this basis to a matrix in the standard basis. 

Convert a matrix in the standard basis to a vector in the Pauli basis. 

Decorator for deprecating a function. 

Computes the total (aggregate) chi^2 for a set of circuits. 

Computes the per-circuit chi^2 contributions for a set of circuits. 

Compute the gradient of the chi^2 function computed by :func:`chi2`. 

Compute the Hessian matrix of the 

Compute an approximate Hessian matrix of the 

Compute the chi-alpha objective function. 

Compute the per-circuit chi-alpha objective function. 

Computes chi^2 for a 2-outcome measurement. 

Computes chi^2 for a 2-outcome measurement using frequency-weighting. 

Computes the chi^2 term corresponding to a single outcome. 

Computes the frequency-weighted chi^2 term corresponding to a single outcome. 

Calculates the standard Bonferroni correction. 

Sidak correction. 

Generalized Bonferroni correction. 

Given an operation matrix, return the corresponding Choi matrix that is normalized to have trace == 1. 

Given a choi matrix, return the corresponding operation matrix. 

The corresponding Choi matrix in the standard basis that is normalized to have trace == 1. 

Given a choi matrix in the standard basis, return the corresponding operation matrix. 

Compute the amount of non-CP-ness of a model. 
Compute the amount of non-CP-ness of a model. 

Compute the magnitudes of the negative eigenvalues of the Choi matrices for each gate in model. 


Formats and prints a deprecation warning message. 

Decorator for deprecating a function. 

Utility to deprecate imports from a module. 

The log-likelihood function. 

Computes the per-circuit log-likelihood contribution for a set of circuits. 

The jacobian of the log-likelihood function. 

The hessian of the log-likelihood function. 

An approximate Hessian of the log-likelihood function. 

The maximum log-likelihood possible for a DataSet. 

The vector of maximum log-likelihood contributions for each circuit, aggregated over outcomes. 

See docstring for :func:`pygsti.tools.two_delta_logl` 

Twice the difference between the maximum and actual log-likelihood. 

Twice the per-circuit difference between the maximum and actual log-likelihood. 

Term of the 2*[log(L) upper bound - log(L)] sum corresponding to a single circuit and spam label. 

Construct the Lindbladian corresponding to a given Hamiltonian. 

Construct the Lindbladian corresponding to stochastic q-errors. 

Construct the Lindbladian corresponding to affine q-errors. 

Construct the Lindbladian corresponding to generalized non-Hamiltonian (stochastic) errors. 

Remove duplicates from the list passed as an argument. 

Remove duplicates from a list and return the result. 
A 0-based list of integers specifying which occurrence, i.e. enumerated duplicate, each list item is. 


Replace elements of t according to rules in alias_dict. 

Applies 

Applies alias_dict to the circuits in list_of_circuits. 
Iterate over all sorted (decreasing) partitions of integer n. 


Iterate over all partitions of integer n. 

Iterate over all partitions of integer n into nbins bins. 

Helper function for partition_into that performs the same task for 

Like itertools.product but returns the first modified (incremented) index along with the product tuple itself. 

Returns the product over the integers modulo 2 of two matrices. 

Returns the product over the integers modulo 2 of a list of matrices. 

Returns the determinant of a matrix over the integers modulo 2 (GL(n,2)). 

Returns the direct sum of two square matrices of integers. 

Finds the inverse of a matrix over GL(n,2) 

Solves Ax = b over GF(2) 
Gaussian elimination mod2 of a. 

Returns a 1D array containing the diagonal of the input square 2D array m. 

Returns a matrix containing the strictly upper triangle of m and zeros elsewhere. 

Returns a diagonal matrix containing the diagonal of m. 


Returns a matrix M such that d = M M.T for symmetric d, where d and M are matrices over [0,1] mod 2. 

Constructs a random bitstring of length n with parity p 

Finds a random invertible matrix M over GL(n,2) 
Creates a random, symmetric, invertible matrix from GL(n,2) 


Returns M such that M a M.T has ones along the main diagonal 

Permutes the first row & col with the i'th row & col 

Computes the permutation matrix P such that the [1:t,1:t] submatrix of P a P is invertible. 
Computes the permutation matrix P such that all [n:t,n:t] submatrices of P a P are invertible. 

Check to see if the matrix has been properly permuted. 


The trace of a matrix, sum_i m[i,i]. 

Test whether mx is a hermitian matrix. 

Test whether mx is a positive-definite matrix. 

Test whether mx is a valid density matrix (hermitian, positive-definite, and unit trace). 

Compute the frobenius norm of an array (or matrix), 
Compute the squared frobenius norm of an array (or matrix), 


Compute the nullspace of a matrix. 

Compute the nullspace of a matrix using the QR decomposition. 

Computes the nullspace of a matrix, and tries to return a "nice" basis for it. 

Normalizes the columns of a matrix. 

Compute the norms of the columns of a matrix. 

Scale each column of a matrix by a given value. 

Checks whether a matrix contains orthogonal columns. 

Checks whether a matrix contains orthogonal columns. 

Computes the indices of the linearly independent columns in a matrix. 
TODO: docstring 


The "sign" matrix of m 

Print matrix in pretty format. 

Generate a "pretty-format" string for a matrix. 

Generate a "pretty-format" string for a complex-valued matrix. 

Construct the logarithm of superoperator matrix m. 

Construct the logarithm of superoperator matrix m that is near the identity. 

Construct an approximate logarithm of superoperator matrix m that is real and near the target_logm. 

Construct a real logarithm of real matrix m. 

Returns the ith standard basis vector in dimension dim. 

Stacks the columns of a matrix to return a vector 

Slices a vector into the columns of a matrix. 

Returns the 1 norm of a matrix 

Generates a random Hermitian matrix 

The Hermitian 1-to-1 norm of a superoperator represented in the standard basis. 

Comparison function for complex numbers that compares real part, then imaginary part. 
GCD algorithm to produce prime factors of n 


Matches the elements of two vectors, a and b by minimizing the weight between them. 

Matches the elements of a and b, whose elements are assumed to be either real or one-half of a conjugate pair. 

Fancy Assignment, equivalent to a[*inds] = rhs but with 

Returns the shape of a fancyindexed array (a[*inds].shape) 

Fancy Indexing, equivalent to a[*inds].copy() but with 

Performs dot(a,b) correctly when neither, either, or both arguments are sparse matrices. 

Get the real part of a, where a can be either a dense array or a sparse matrix. 

Get the imaginary part of a, where a can be either a dense array or a sparse matrix. 

Get the frobenius norm of a matrix or vector, a, when it is either a dense array or a sparse matrix. 

Computes the 1norm of the dense or sparse matrix a. 

Precomputes the indices needed to sum a set of CSR sparse matrices. 

Accelerated summation of several CSR-format sparse matrices. 

Precomputes quantities allowing fast computation of linear combinations of CSR sparse matrices. 

Computation of the summation of several CSR-format sparse matrices. 

Computes "prepared" meta-info about matrix a, to be used in expm_multiply_fast. 

Multiplies v by an exponentiated matrix. 

A helper function. Note that this (python) version works when a is a LinearOperator 

Returns "prepared" meta-info about operation op, which is assumed to be traceless (so no shift is needed). 

Checks whether two Scipy sparse matrices are (almost) equal. 
Computes the 1norm of the scipy sparse matrix a. 


Get the base memory object for numpy array a. 

Compute the scaling factor required to turn a scalar multiple of a unitary matrix to a unitary matrix. 

Similar to numpy.eig, but returns sorted output. 

Computes the "kite" corresponding to a list of eigenvalues. 

Find a matrix R such that u_inv R u0 is diagonal AND log(R) has no projection onto the commutant of G0. 

Project mx onto kite, so mx is zero everywhere except on the kite. 

Project mx onto the complement of kite, so mx is zero everywhere on the kite. 

Removes the linearly dependent columns of a matrix. 

TODO: docstring 

TODO: docstring 

TODO: docstring 

Construct the dense operator or superoperator representation of a computational basis state. 

Compute the parity of x. 

Fills a dense array with the superket representation of a computational basis state. 





Returns the quantum state fidelity between density matrices. 

Returns the frobenius distance between gate or density matrices. 

Returns the square of the frobenius distance between gate or density matrices. 

Calculate residuals between the elements of two matrices 

Compute the trace norm of matrix a given by: 

Compute the trace distance between matrices. 

Returns the approximate diamond norm describing the difference between gate matrices. 

Compute the Jamiolkowski trace distance between operation matrices. 

Returns the "entanglement" process fidelity between gate matrices. 

Computes the average gate fidelity (AGF) between two gates. 

Computes the average gate infidelity (AGI) between two gates. 

Returns the entanglement infidelity (EI) between gate matrices. 

Computes the average-over-gates of the infidelity between gates in model and the gates in target_model. 

Returns the "unitarity" of a channel. 

Get an upper bound on the fidelity of the given operation matrix with any unitary operation matrix. 

Constructs a gate-like quantity for the POVM within model. 

Computes the process (entanglement) fidelity between POVM maps. 

Computes the Jamiolkowski trace distance between POVM maps using 

Computes the diamond distance between POVM maps using 

Decompose a gate matrix into fixed points, axes of rotation, angles of rotation, and decay rates. 

Compute the vectorized density matrix which acts as the state psi. 

Compute the pure state describing the action of density matrix vector dmvec. 
Compute the superoperator corresponding to unitary matrix u. 


Compute the unitary corresponding to the (unitary-action!) superoperator superop. 

Construct an error generator from a SPAM vector and its target. 

Construct the error generator from a gate and its target. 

Construct a gate from an error generator and a target gate. 

Gets the scaling factors required to turn 

Compute the gate error generators for a standard set of errors. 

Compute the projections of a gate error generator onto generators for a standard set of errors. 

Asserts ar.shape == shape; works with sparse matrices too. 

TODO: docstring - labels can be, e.g. ('H', 'XX'), and basis should be a 1-qubit basis w/ single-char labels 

Compute the superoperator generators corresponding to Lindblad terms. 

Compute the projections of an error generator onto generators for the Lindblad-term errors. 

Converts error-generator projections into a dictionary of error coefficients. 

Convert a set of Lindblad terms into a dense matrix/grid of projections. 

Generate human-readable labels for the Lindblad parameters. 

Compute Lindblad-gate parameter values from error generator projections. 

Constructs a dictionary mapping Lindblad term labels to projection coefficients. 

Construct Lindblad-term projections from Lindblad-operator parameter values. 

Construct derivative of Lindblad-term projections with respect to the parameter values. 

Construct a rotation operation matrix. 

Construct a new model(s) by projecting the error generator of model onto some subspace then reconstructing. 

Returns a gauge transformation that maps gate_mx into a matrix that is co-diagonal with target_gate_mx. 

Project each gate of model onto the eigenspace of the corresponding gate within target_model. 
Get the linear operator on (vectorized) density matrices corresponding to an n-qubit unitary operator on states. 

Whether typ is a recognized Lindblad-gate parameterization type. 


Extract the outcome label from a "simplified" effect label. 

Extract the POVM label from a "simplified" effect label. 

Construct the single-qubit operation matrix. 

Construct the single-qubit operation matrix. 

Decorator for caching a function's values 

Context manager that times a block of code 
Get string version of the current time 


Calculates the total variational distance between two probability distributions. 

Calculates the (classical) fidelity between two probability distributions. 

Predicts the RB error rate from a model. 

Computes the second largest eigenvalue of the 'L matrix' (see the L_matrix function). 

Computes the gauge transformation required so that the RB number matches the average model infidelity. 

Transforms a Model into the "RB gauge" (see the RB_gauge function). 

Constructs a generalization of the 'Lmatrix' linear operator on superoperators. 

Returns the second largest eigenvalue of a generalization of the 'Rmatrix' [see the R_matrix function]. 

Constructs a generalization of the 'Rmatrix' of Proctor et al Phys. Rev. Lett. 119, 130502 (2017). 

Computes the 'left-multiplied' error maps associated with a noisy gate set, along with the average error map. 

Computes the "gatedependence of errors maps" parameter defined by 

Returns the length (the number of indices) contained in a slice. 

Returns a new slice whose start and stop points are shifted by offset. 

Returns the intersection of two slices (which must have the same step). 

Returns the intersection of two slices (which must have the same step). 

Returns a list of the indices specified by slice s. 

Returns a slice corresponding to a given list of (integer) indices, if this is possible. 

Returns slc_or_list_like as an index array (an integer numpy.ndarray). 

Divides a slice into subslices based on a maximum length (for each subslice). 

A slice that is the composition of base_slc and slc. 



Decorator for applying a smart cache to a single function or method. 

Creates the symplectic form for the number of qubits specified. 

Maps the input symplectic matrix between the 'standard' and 'direct-sum' symplectic form conventions. 

Checks whether a matrix is symplectic. 
Returns the inverse of a symplectic matrix over the integers mod 2. 


Returns the inverse of a Clifford gate in the symplectic representation. 

Checks if a symplectic matrix / phase vector pair (s,p) is the symplectic representation of a Clifford. 

Constructs a phase vector that, when paired with the provided symplectic matrix, defines a Clifford gate. 

Finds the Pauli layer that should be appended to a circuit to implement a given Clifford. 

Finds the Pauli layer that should be prepended to a circuit to implement a given Clifford. 

TODO: docstring 

TODO: docstring 

Multiplies two cliffords in the symplectic representation. 

Takes a kronecker product of symplectic representations. 

Construct the (s,p) stabilizer representation for a computational basis state given by zvals. 

Applies a clifford in the symplectic representation to a stabilizer state in the standard stabilizer representation. 

Computes the probabilities of 0/1 (+/-) outcomes from measuring a Pauli operator on a stabilizer state. 

A helper routine used for manipulating stabilizer state representations. 

A helper routine used for manipulating stabilizer state representations. 

Compute the probability of a given outcome when measuring some or all of the qubits in a stabilizer state. 

Embeds the (s,p) Clifford symplectic representation into a larger symplectic representation. 

Creates a dictionary of the symplectic representations of 'standard' Clifford gates. 

Returns the symplectic representation of the composite Clifford implemented by the specified Clifford circuit. 

Constructs the symplectic representation of the nqubit Clifford implemented by a single quantum circuit layer. 
Gives the group relationship between the 'I', 'H', 'P', 'HP', 'PH', and 'HPH' up-to-Paulis operators. 


Returns True if the unitary is a Clifford gate (w.r.t the standard basis), and False otherwise. 

Returns the symplectic representation of a single qubit Clifford unitary, 

Returns the symplectic representation of a twoqubit Clifford unitary, 

Returns the symplectic representation of a onequbit or twoqubit Clifford unitary. 

Returns a symplectic matrix of dimensions 2n x 2n sampled uniformly at random from the symplectic group S(n). 

Returns a Clifford, in the symplectic representation, sampled uniformly at random from the n-qubit Clifford group. 

Generates a uniformly random phase vector for an n-qubit Clifford. 
Get the bitstring corresponding to a Pauli. 


Applies a Clifford gate to the nqubit Clifford gate specified by the 2n x 2n symplectic matrix. 
The number of Clifford gates in the n-qubit Clifford group. 

The number of elements in the symplectic group S(n) over the 2element finite field. 

Returns the number of different cosets for the symplectic group S(n) over the 2element finite field. 


Returns the symplectic inner product of two vectors in F_2^(2n). 

Applies transvection Z k to v. 

Converts integer i to a length-n array of bits. 

Converts an nbit string b to an integer between 0 and 2^`n`  1. 
A utility function for selecting a random Clifford element. 

Returns the 2n x 2n symplectic matrix, over the finite field containing 0 and 1, with the "canonical" index i. 


Returns the "canonical" index of 2n x 2n symplectic matrix gn over the finite field containing 0 and 1. 

The index of a uniformly random 2n x 2n symplectic matrix over the finite field containing 0 and 1. 
str(.) on the first arg of einsum skirts a bug in NumPy 1.14.0 
Attributes¶
 pygsti.__version__ = 0.9.10.post4¶
 pygsti.contract(model, to_what, dataset=None, maxiter=1000000, tol=0.01, use_direct_cp=True, method='Nelder-Mead', verbosity=0)¶
Contract a Model to a specified space.
All contraction operations except ‘vSPAM’ operate entirely on the gate matrices and leave the state preparations and measurements alone, while ‘vSPAM’ operates only on the SPAM operations.
 Parameters
model (Model) – The model to contract
to_what (string) –
Specifies the space the model is contracted to. Allowed values are:
’TP’ – All gates are manifestly trace-preserving maps.
’CP’ – All gates are manifestly completely-positive maps.
’CPTP’ – All gates are manifestly completely-positive and trace-preserving maps.
’XP’ – All gates are manifestly “experimentally-positive” maps.
’XPTP’ – All gates are manifestly “experimentally-positive” and trace-preserving maps.
’vSPAM’ – state preparation and measurement operations are valid.
’nothing’ – no contraction is performed.
dataset (DataSet, optional) – Dataset to use to determine whether a model is in the “experimentally-positive” (XP) space. Required only when contracting to XP or XPTP.
maxiter (int, optional) – Maximum number of iterations for iterative contraction routines.
tol (float, optional) – Tolerance for iterative contraction routines.
use_direct_cp (bool, optional) – Whether to use a faster direct-contraction method for CP contraction. This method essentially transforms to the Choi matrix, truncates any negative eigenvalues to zero, then transforms back to an operation matrix.
method (string, optional) – The method used when contracting to XP and non-directly to CP (i.e. when use_direct_cp == False).
verbosity (int, optional) – How much detail to send to stdout.
 Returns
Model – The contracted model
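The direct CP contraction described for use_direct_cp hinges on clipping negative eigenvalues of the (Hermitian) Choi matrix. A minimal sketch of just that eigenvalue-truncation step (not pygsti's implementation, which also transforms to and from the Choi representation):

```python
import numpy as np

def clip_negative_eigenvalues(choi):
    # Truncate the negative eigenvalues of a Hermitian (Choi) matrix to
    # zero, then reassemble it -- the core of the direct-CP contraction.
    evals, evecs = np.linalg.eigh(choi)
    return evecs @ np.diag(np.clip(evals, 0.0, None)) @ evecs.conj().T
```

The result is the closest positive-semidefinite matrix in the eigenbasis sense; trace preservation, if needed, must be restored separately (as the ‘CPTP’ option does).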
 pygsti._contract_to_xp(model, dataset, verbosity, method='Nelder-Mead', maxiter=100000, tol=1e-10)¶
 pygsti._contract_to_cp(model, verbosity, method='Nelder-Mead', maxiter=100000, tol=0.01)¶
 pygsti._contract_to_cp_direct(model, verbosity, tp_also=False, maxiter=100000, tol=1e-08)¶
 pygsti._contract_to_tp(model, verbosity)¶
 pygsti._contract_to_valid_spam(model, verbosity=0)¶
Contract the state preparation and measurement operations of a Model to the space of valid quantum operations.
 Parameters
model (Model) – The model to contract
verbosity (int) – How much detail to send to stdout.
 Returns
Model – The contracted model
 class pygsti._DummyProfiler¶
Bases:
object
A dummy profiler that doesn’t do anything.
A class which implements the same interface as Profiler but which doesn’t actually do any profiling (consists of stub functions).
 add_time(self, name, start_time, prefix=0)¶
Stub function that does nothing
 Parameters
name (string) – The name of the timer to add elapsed time into (if the name doesn’t exist, one is created and initialized to the elapsed time).
start_time (float) – The starting time used to compute the elapsed time, i.e. the value time.time() - start_time, which is added to the named timer.
prefix (int, optional) – Prefix the timer name with the current stack depth and this number of function names, starting with the current function and moving up the call stack. When zero, no prefix is added. For example, with prefix == 1, “Total” might map to “3: myFunc: Total”.
 Returns
None
 add_count(self, name, inc=1, prefix=0)¶
Stub function that does nothing
 Parameters
name (string) – The name of the counter to add val into (if the name doesn’t exist, one is created and initialized to val).
inc (int, optional) – The increment (the value to add to the counter).
prefix (int, optional) – Prefix the timer name with the current stack depth and this number of function names, starting with the current function and moving up the call stack. When zero, no prefix is added. For example, with prefix == 1, “Total” might map to “3: myFunc: Total”.
 Returns
None
 memory_check(self, name, printme=None, prefix=0)¶
Stub function that does nothing
 Parameters
name (string) – The name of the memory checkpoint. (Later, memory information can be organized by checkpoint name.)
printme (bool, optional) – Whether or not to print the memory usage during this function call (if None, the default, then the value of default_print_memcheck specified during Profiler construction is used).
prefix (int, optional) – Prefix the timer name with the current stack depth and this number of function names, starting with the current function and moving up the call stack. When zero, no prefix is added. For example, with prefix == 1, “Total” might map to “3: myFunc: Total”.
 Returns
None
 class pygsti.BuiltinBasis(name, dim_or_statespace, sparse=False)¶
Bases:
LazyBasis
A basis that is included within and integrated into pyGSTi.
Such a basis can, in most cases, be represented merely by its name. (In actuality, a dimension is also required, but this can often be inferred from context.)
 Parameters
name ({"pp", "gm", "std", "qt", "id", "cl", "sv"}) – Name of the basis to be created.
dim_or_statespace (int or StateSpace) – The dimension of the basis to be created or the state space for which a basis should be created. Note that when this is an integer it is the dimension of the vectors, which correspond to flattened elements in simple cases. Thus, a 1qubit basis would have dimension 2 in the statevector (name=”sv”) case and dimension 4 when constructing a densitymatrix basis (e.g. name=”pp”).
sparse (bool, optional) – Whether basis elements should be stored as SciPy CSR sparse matrices or dense numpy arrays (the default).
 _to_nice_serialization(self)¶
 classmethod _from_nice_serialization(cls, state)¶
 property dim(self)¶
The dimension of the vector space this basis fully or partially spans. Equivalently, the length of the vector_elements of the basis.
 property size(self)¶
The number of elements (or vectorelements) in the basis.
 property elshape(self)¶
The shape of each element. Typically either a length-1 or length-2 tuple, corresponding to vector or matrix elements, respectively. Note that vector elements always have shape (dim,) (or (dim,1) in the sparse case).
 __hash__(self)¶
Return hash(self).
 _lazy_build_elements(self)¶
 _lazy_build_labels(self)¶
 _copy_with_toggled_sparsity(self)¶
 __eq__(self, other)¶
Return self==value.
 class pygsti.VerbosityPrinter(verbosity=1, filename=None, comm=None, warnings=True, split=False, clear_file=True)¶
Bases:
object
Class responsible for logging things to stdout or a file.
Controls verbosity and can print progress bars. ex:
>>> VerbosityPrinter(1)
would construct a printer that printed out messages of level one or higher to the screen.
>>> VerbosityPrinter(3, 'output.txt')
would construct a printer that sends verbose output to a text file
The static function
create_printer()
will construct a printer from either an integer or an already existing printer. It is a static method of the VerbosityPrinter class, so it is called like so:
>>> VerbosityPrinter.create_printer(2)
or
>>> VerbosityPrinter.create_printer(VerbosityPrinter(3, 'output.txt'))
>>> printer.log('status')
would log ‘status’ if the printer’s verbosity was one or higher.
>>> printer.log('status2', 2)
would log ‘status2’ if the printer’s verbosity was two or higher.
>>> printer.error('something terrible happened')
would ALWAYS log ‘something terrible happened’.
>>> printer.warning('something worrisome happened')
would log if verbosity was one or higher – the same as a normal status.
Both printer.error and printer.warning will prepend ‘ERROR: ‘ or ‘WARNING: ‘ to the message they are given. Optionally, printer.log() can also prepend ‘Status_n’ to the message, where n is the message level.
Logging of progress bars/iterations:
>>> with printer_instance.progress_logging(verbosity):
>>>     for i, item in enumerate(data):
>>>         printer.show_progress(i, len(data))
>>>         printer.log(...)
will output either a progress bar or iteration statuses depending on the printer’s verbosity
 Parameters
verbosity (int) – How verbose the printer should be.
filename (str, optional) – Where to put output (If none, output goes to screen)
comm (mpi4py.MPI.Comm or ResourceAllocation, optional) – Restricts output if the program is running in parallel (by default, if the rank is 0, output is sent to screen, and otherwise sent to comm files 1, 2, …).
warnings (bool, optional) – Whether or not to print warnings
split (bool, optional) – Whether to split output between stdout and stderr as appropriate, or to combine the streams so everything is sent to stdout.
clear_file (bool, optional) – Whether or not filename should be cleared (overwritten) or simply appended to.
 _comm_path¶
relative path where comm files (outputs of non-root ranks) are stored.
 Type
str
 _comm_file_name¶
root filename for comm files (outputs of non-root ranks).
 Type
str
 _comm_file_ext¶
filename extension for comm files (outputs of non-root ranks).
 Type
str
 _comm_path =¶
 _comm_file_name =¶
 _comm_file_ext = .txt¶
 _create_file(self, filename)¶
 _get_comm_file(self, comm_id)¶
 clone(self)¶
Instead of deepcopy, initialize a new printer object and feed it some select deep-copied members
 Returns
VerbosityPrinter
 static create_printer(verbosity, comm=None)¶
Function for converting between interfaces
 Parameters
verbosity (int or VerbosityPrinter object, required) – the object to build a printer from
comm (mpi4py.MPI.Comm object, optional) – Comm object to build printers with. !Will override!
 Returns
VerbosityPrinter – The printer object, constructed from either an integer or another printer
 __add__(self, other)¶
Increase the verbosity of a VerbosityPrinter
 __sub__(self, other)¶
Decrease the verbosity of a VerbosityPrinter
 __getstate__(self)¶
 __setstate__(self, state_dict)¶
 _append_to(self, filename, message)¶
 _put(self, message, flush=True, stderr=False)¶
 _record(self, typ, level, message)¶
 error(self, message)¶
Log an error to the screen/file
 Parameters
message (str) – the error message
 Returns
None
 warning(self, message)¶
Log a warning to the screen/file if verbosity > 1
 Parameters
message (str) – the warning message
 Returns
None
 log(self, message, message_level=None, indent_char=' ', show_statustype=False, do_indent=True, indent_offset=0, end='\n', flush=True)¶
Log a status message to screen/file.
Determines whether the message should be printed based on current verbosity setting, then sends the message to the appropriate output
 Parameters
message (str) – the message to print (or log)
message_level (int, optional) – the minimum verbosity level at which this message is printed.
indent_char (str, optional) – what constitutes an “indent” (messages at higher levels are indented more when do_indent=True).
show_statustype (bool, optional) – if True, prepend lines with “Status Level X” indicating the message_level.
do_indent (bool, optional) – whether messages at higher message levels should be indented. Note that if this is False it may be helpful to set show_statustype=True.
indent_offset (int, optional) – an additional number of indentations to add, on top of any due to the message level.
end (str, optional) – the character (or string) to end message lines with.
flush (bool, optional) – whether stdout should be flushed right after this message is printed (this avoids delays in onscreen output due to buffering).
 Returns
None
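The level-gating and indentation behavior documented above can be sketched as follows (a toy stand-in, not pygsti's VerbosityPrinter):

```python
class MiniPrinter:
    # Toy sketch of the level-gating behavior: messages appear only when
    # the printer's verbosity meets the message level, higher-level
    # messages are indented more, and errors always appear.
    def __init__(self, verbosity=1):
        self.verbosity = verbosity
        self.lines = []

    def log(self, message, message_level=1, indent_char='  '):
        if self.verbosity >= message_level:
            self.lines.append(indent_char * (message_level - 1) + message)

    def warning(self, message):
        self.log('WARNING: ' + message, 1)

    def error(self, message):
        self.lines.append('ERROR: ' + message)  # always shown
```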
 _progress_bar(self, iteration, total, bar_length, num_decimals, fill_char, empty_char, prefix, suffix, indent)¶
 _verbose_iteration(self, iteration, total, prefix, suffix, verbose_messages, indent, end)¶
 __str__(self)¶
Return str(self).
 verbosity_env(self, level)¶
Create a temporary environment with a different verbosity level.
This is context manager, controlled using Python’s with statement:
>>> with printer.verbosity_env(2):
>>>     printer.log('Message1')  # printed at verbosity level 2
>>>     printer.log('Message2')  # printed at verbosity level 2
 Parameters
level (int) – the verbosity level of the environment.
 progress_logging(self, message_level=1)¶
Context manager for logging progress bars/iterations.
(The printer will return to its normal, unrestricted state when the progress logging has finished)
 Parameters
message_level (int, optional) – progress messages will not be shown until the verbosity level reaches message_level.
 show_progress(self, iteration, total, bar_length=50, num_decimals=2, fill_char='#', empty_char='-', prefix='Progress:', suffix='', verbose_messages=[], indent_char=' ', end='\n')¶
Displays a progress message (to be used within a progress_logging block).
 Parameters
iteration (int) – the 0-based current iteration – the iteration number this message is for.
total (int) – the total number of iterations expected.
bar_length (int, optional) – the length, in characters, of a text-format progress bar (only used when the verbosity level is exactly equal to the progress_logging message level).
num_decimals (int, optional) – number of places after the decimal point that are displayed in progress bar’s percentage complete.
fill_char (str, optional) – replaces ‘#’ as the bar-filling character
empty_char (str, optional) – replaces ‘-’ as the empty-bar character
prefix (str, optional) – message in front of the bar
suffix (str, optional) – message after the bar
verbose_messages (list, optional) – A list of strings to display after an initial “Iter X of Y” line when the verbosity level is higher than the progress_logging message level and so more verbose messages are shown (and a progress bar is not). The elements of verbose_messages will occur, one per line, after the initial “Iter X of Y” line.
indent_char (str, optional) – what constitutes an “indentation”.
end (str, optional) – the character (or string) to end message lines with.
 Returns
None
 _end_progress(self)¶
 start_recording(self)¶
Begins recording the output (to memory).
Begins recording (in memory) a list of (type, verbosityLevel, message) tuples that is returned by the next call to :method:`stop_recording`.
 Returns
None
 is_recording(self)¶
Returns whether this VerbosityPrinter is currently recording.
 Returns
bool
 stop_recording(self)¶
Stops recording and returns recorded output.
Stops a “recording” started by :method:`start_recording` and returns the list of (type, verbosityLevel, message) tuples that have been recorded since then.
 Returns
list
 class pygsti.DirectSumBasis(component_bases, name=None, longname=None)¶
Bases:
LazyBasis
A basis that is the direct sum of one or more “component” bases.
Elements of this basis are the union of the basis elements on each component, each embedded into a common block-diagonal structure where each component occupies its own block. Thus, when there is more than one component, a DirectSumBasis is not a simple basis because the size of its elements is larger than the size of its vector space (which corresponds to just the diagonal blocks of its elements).
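The block-diagonal embedding described above can be sketched directly: each component's elements land in their own diagonal block of a larger matrix, so the element shape grows with every component. The function name and layout details are assumptions, not pygsti's implementation:

```python
import numpy as np

def direct_sum_elements(component_elements):
    # component_elements: a list of lists of square matrices, one inner
    # list per component basis.  Each element is embedded into its
    # component's diagonal block of a common, larger matrix -- with more
    # than one component, the elements become bigger than the spanned
    # vector space, which is why such a basis is not "simple".
    dims = [comps[0].shape[0] for comps in component_elements]
    total = sum(dims)
    elements, offset = [], 0
    for comps, d in zip(component_elements, dims):
        for el in comps:
            big = np.zeros((total, total), dtype=complex)
            big[offset:offset + d, offset:offset + d] = el
            elements.append(big)
        offset += d
    return elements
```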
 Parameters
component_bases (iterable) – A list of the component bases. Each list element may be either a Basis object or a tuple of arguments to :function:`Basis.cast`, e.g. (‘pp’, 4).
name (str, optional) – The name of this basis. If None, the names of the component bases joined with “+” is used.
longname (str, optional) – A longer description of this basis. If None, then a long name is automatically generated.
 vector_elements¶
The “vectors” of this basis, always 1D (sparse or dense) arrays.
 Type
list
 _to_nice_serialization(self)¶
 classmethod _from_nice_serialization(cls, state)¶
 property dim(self)¶
The dimension of the vector space this basis fully or partially spans. Equivalently, the length of the vector_elements of the basis.
 property size(self)¶
The number of elements (or vectorelements) in the basis.
 property elshape(self)¶
The shape of each element. Typically either a length-1 or length-2 tuple, corresponding to vector or matrix elements, respectively. Note that vector elements always have shape (dim,) (or (dim,1) in the sparse case).
 __hash__(self)¶
Return hash(self).
 _lazy_build_vector_elements(self)¶
 _lazy_build_elements(self)¶
 _lazy_build_labels(self)¶
 _copy_with_toggled_sparsity(self)¶
 __eq__(self, other)¶
Return self==value.
 property vector_elements(self)¶
The “vectors” of this basis, always 1D (sparse or dense) arrays.
 Returns
list
 property to_std_transform_matrix(self)¶
Retrieve the matrix that transforms a vector from this basis to the standard basis of this basis’s dimension.
 Returns
numpy array or scipy.sparse.lil_matrix – An array of shape (dim, size) where dim is the dimension of this basis (the length of its vectors) and size is the size of this basis (its number of vectors).
 property to_elementstd_transform_matrix(self)¶
Get transformation matrix from this basis to the “element space”.
Get the matrix that transforms vectors in this basis (with length equal to the dim of this basis) to vectors in the “element space” – that is, vectors in the same standard basis in which the elements of this basis are expressed.
 Returns
numpy array – An array of shape (element_dim, size) where element_dim is the dimension, i.e. size, of the elements of this basis (e.g. 16 if the elements are 4x4 matrices) and size is the size of this basis (its number of vectors).
 create_equivalent(self, builtin_basis_name)¶
Create an equivalent basis with components of type builtin_basis_name.
Create a Basis that is equivalent in structure & dimension to this basis but whose simple components (perhaps just this basis itself) are of the builtin basis type given by builtin_basis_name.
 Parameters
builtin_basis_name (str) – The name of a builtin basis, e.g. “pp”, “gm”, or “std”. Used to construct the simple components of the returned basis.
 Returns
DirectSumBasis
 create_simple_equivalent(self, builtin_basis_name=None)¶
Create a basis of type builtin_basis_name whose elements are compatible with this basis.
Create a simple basis, and one without components (e.g. a
TensorProdBasis
is a simple basis with components), of the builtin type specified, whose dimension is compatible with the elements of this basis. This function might also be named “element_equivalent”, as it returns the builtin_basis_name-analogue of the standard basis in which this basis’s elements are expressed.
 Parameters
builtin_basis_name (str, optional) – The name of the builtin basis to use. If None, then a copy of this basis is returned (if it’s simple) or this basis’s name is used to try to construct a simple and componentfree version of the same builtinbasis type.
 Returns
Basis
 class pygsti._CircuitList(circuits, op_label_aliases=None, circuit_weights=None, name=None)¶
Bases:
pygsti.baseobjs.nicelyserializable.NicelySerializable
An immutable list (a tuple) of
Circuit
objects and associated metadata.
 Parameters
circuits (list) – The list of circuits that constitutes the primary data held by this object.
op_label_aliases (dict, optional) – Dictionary of circuit metadata whose keys are operation label “aliases” and whose values are circuits corresponding to what that operation label should be expanded into before querying the dataset. Defaults to the empty dictionary (no aliases defined). e.g. op_label_aliases[‘Gx^3’] = pygsti.obj.Circuit([‘Gx’,’Gx’,’Gx’])
circuit_weights (numpy.ndarray, optional) – If not None, an array of percircuit weights (of length equal to the number of circuits) that are typically used to multiply the counts extracted for each circuit.
name (str, optional) – An optional name for this list, used for status messages.
 classmethod cast(cls, circuits)¶
Convert (if needed) an object into a
CircuitList
.
 Parameters
circuits (list or CircuitList) – The object to convert.
 Returns
CircuitList
 _to_nice_serialization(self)¶
 classmethod _from_nice_serialization(cls, state)¶
 __len__(self)¶
 __getitem__(self, index)¶
 __iter__(self)¶
 apply_aliases(self)¶
Applies any operationlabel aliases to this circuit list.
 Returns
list – A list of :class:`Circuit`s.
 truncate(self, circuits_to_keep)¶
Builds a new circuit list containing only a given subset.
This can be safer than just creating a new
CircuitList
because it preserves the aliases, etc., of this list.
 Parameters
circuits_to_keep (list or set) – The circuits to retain in the returned circuit list.
 Returns
CircuitList
 truncate_to_dataset(self, dataset)¶
Builds a new circuit list containing only those elements in dataset.
 Parameters
dataset (DataSet) – The dataset to check. Aliases are applied to the circuits in this circuit list before they are tested.
 Returns
CircuitList
 __hash__(self)¶
Return hash(self).
 __eq__(self, other)¶
Return self==value.
 __setstate__(self, state_dict)¶
 class pygsti._ResourceAllocation(comm=None, mem_limit=None, profiler=None, distribute_method='default', allocated_memory=0)¶
Bases:
object
Describes available resources and how they should be allocated.
This includes the number of processors and amount of memory, as well as a strategy for how computations should be distributed among them.
 Parameters
comm (mpi4py.MPI.Comm, optional) – MPI communicator holding the number of available processors.
mem_limit (int, optional) – A rough perprocessor memory limit in bytes.
profiler (Profiler, optional) – A lightweight profiler object for tracking resource usage.
distribute_method (str, optional) – The name of a distribution strategy.
 classmethod cast(cls, arg)¶
Cast arg to a
ResourceAllocation
object.
If arg already is a
ResourceAllocation
instance, it is simply returned. Otherwise this function attempts to create a new instance from arg.
 Parameters
arg (ResourceAllocation or dict) – An object that can be cast to a
ResourceAllocation
.
 Returns
ResourceAllocation
 build_hostcomms(self)¶
 property comm_rank(self)¶
A safe way to get self.comm.rank (0 if self.comm is None)
 property comm_size(self)¶
A safe way to get self.comm.size (1 if self.comm is None)
 property is_host_leader(self)¶
True if this processor is the rank-0 “leader” of its host (node). False otherwise.
 host_comm_barrier(self)¶
Calls self.host_comm.barrier() when self.host_comm is not None.
This convenience function provides an often-used barrier that follows code where a single “leader” processor modifies a memory block shared between all members of self.host_comm, and the other processors must wait until this modification is performed before proceeding with their own computations.
 Returns
None
 copy(self)¶
Copy this object.
 Returns
ResourceAllocation
 reset(self, allocated_memory=0)¶
Resets internal allocation counters to given values (defaults to zero).
 Parameters
allocated_memory (int64) – The value to set the memory allocation counter to.
 Returns
None
 add_tracked_memory(self, num_elements, dtype='d')¶
Adds num_elements * itemsize bytes to the total amount of allocated memory being tracked.
If the total (tracked) memory exceeds self.mem_limit a
MemoryError
exception is raised.
 Parameters
num_elements (int) – The number of elements to track allocation of.
dtype (numpy.dtype, optional) – The type of elements, needed to compute the number of bytes per element.
 Returns
None
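The bookkeeping described for add_tracked_memory amounts to accumulating num_elements * itemsize bytes and raising MemoryError once the running total exceeds the limit. A toy sketch, with a hypothetical class name (not pygsti's ResourceAllocation):

```python
import numpy as np

class MiniTracker:
    # Toy sketch of tracked-memory bookkeeping: accumulate
    # num_elements * itemsize bytes and raise MemoryError once the
    # running total exceeds the configured limit.
    def __init__(self, mem_limit=None):
        self.mem_limit = mem_limit
        self.allocated_memory = 0

    def add_tracked_memory(self, num_elements, dtype='d'):
        self.allocated_memory += num_elements * np.dtype(dtype).itemsize
        if self.mem_limit is not None and self.allocated_memory > self.mem_limit:
            raise MemoryError("tracked memory exceeds mem_limit")
```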
 check_can_allocate_memory(self, num_elements, dtype='d')¶
Checks that allocating num_elements doesn’t cause the memory limit to be exceeded.
This memory isn’t tracked – it’s just added to the current tracked memory, and a
MemoryError
exception is raised if the result exceeds self.mem_limit.
 Parameters
num_elements (int) – The number of elements to track allocation of.
dtype (numpy.dtype, optional) – The type of elements, needed to compute the number of bytes per element.
 Returns
None
 temporarily_track_memory(self, num_elements, dtype='d')¶
Temporarily adds num_elements to tracked memory (a context manager).
A
MemoryError
exception is raised if the tracked memory exceeds self.mem_limit.
 Parameters
num_elements (int) – The number of elements to track allocation of.
dtype (numpy.dtype, optional) – The type of elements, needed to compute the number of bytes per element.
 Returns
contextmanager
 gather_base(self, result, local, slice_of_global, unit_ralloc=None, all_gather=False)¶
Gather or allgather operation using local arrays and a unit resource allocation.
Similar to a normal MPI gather call, but more easily integrates with a hierarchy of processor divisions, or nested comms, by taking a unit_ralloc argument. This is essentially another comm that specifies the groups of processors that have all computed the same local array, i.e., slice of the final to-be-gathered array. So, when gathering the result, only processors with unit_ralloc.rank == 0 need to contribute to the gather operation.
 Parameters
result (numpy.ndarray, possibly shared) – The destination “global” array. When shared memory is being used, i.e. when this
ResourceAllocation
object has a nontrivial inter-host comm, this array must be allocated as a shared array using this ralloc or a larger one, so that result is shared between all the processors for this resource allocation’s intra-host communicator. This allows a speedup when shared memory is used by having multiple smaller gather operations in parallel instead of one large gather.
local (numpy.ndarray) – The locally computed quantity. This can be a shared-memory array, but need not be.
slice_of_global (slice or numpy.ndarray) – The slice of result that local constitutes, i.e., in the end result[slice_of_global] = local. This may be a Python slice or a NumPy array of indices.
unit_ralloc (ResourceAllocation, optional) – A resource allocation (essentially a comm) for the group of processors that all compute the same local result, so that only the unit_ralloc.rank == 0 processors will contribute to the gather operation. If None, then it is assumed that all processors compute different local results.
all_gather (bool, optional) – Whether the final result should be gathered on all the processors of this
ResourceAllocation
or just the root (rank 0) processor.
 Returns
None
 gather(self, result, local, slice_of_global, unit_ralloc=None)¶
Gather local arrays into a global result array potentially with a unit resource allocation.
Similar to a normal MPI gather call, but more easily integrates with a hierarchy of processor divisions, or nested comms, by taking a unit_ralloc argument. This is essentially another comm that specifies the groups of processors that have all computed the same local array, i.e., slice of the final to-be-gathered array. So, when gathering the result, only processors with unit_ralloc.rank == 0 need to contribute to the gather operation.
The global array is only gathered on the root (rank 0) processor of this resource allocation.
 Parameters
result (numpy.ndarray, possibly shared) – The destination “global” array, only needed on the root (rank 0) processor. When shared memory is being used, i.e. when this
ResourceAllocation
object has a nontrivial inter-host comm, this array must be allocated as a shared array using this ralloc or a larger one, so that result is shared between all the processors for this resource allocation’s intra-host communicator. This allows a speedup when shared memory is used by having multiple smaller gather operations in parallel instead of one large gather.
local (numpy.ndarray) – The locally computed quantity. This can be a shared-memory array, but need not be.
slice_of_global (slice or numpy.ndarray) – The slice of result that local constitutes, i.e., in the end result[slice_of_global] = local. This may be a Python slice or a NumPy array of indices.
unit_ralloc (ResourceAllocation, optional) – A resource allocation (essentially a comm) for the group of processors that all compute the same local result, so that only the unit_ralloc.rank == 0 processors will contribute to the gather operation. If None, then it is assumed that all processors compute different local results.
 Returns
None
 allgather(self, result, local, slice_of_global, unit_ralloc=None)¶
Allgather local arrays into global arrays on each processor, potentially using a unit resource allocation.
Similar to a normal MPI gather call, but more easily integrates with a hierarchy of processor divisions, or nested comms, by taking a unit_ralloc argument. This is essentially another comm that specifies the groups of processors that have all computed the same local array, i.e., slice of the final to-be-gathered array. So, when gathering the result, only processors with unit_ralloc.rank == 0 need to contribute to the gather operation.
 Parameters
result (numpy.ndarray, possibly shared) – The destination “global” array. When shared memory is being used, i.e. when this
ResourceAllocation
object has a nontrivial inter-host comm, this array must be allocated as a shared array using this ralloc or a larger one, so that result is shared between all the processors for this resource allocation’s intra-host communicator. This allows a speedup when shared memory is used by having multiple smaller gather operations in parallel instead of one large gather.
local (numpy.ndarray) – The locally computed quantity. This can be a shared-memory array, but need not be.
slice_of_global (slice or numpy.ndarray) – The slice of result that local constitutes, i.e., in the end result[slice_of_global] = local. This may be a Python slice or a NumPy array of indices.
unit_ralloc (ResourceAllocation, optional) – A resource allocation (essentially a comm) for the group of processors that all compute the same local result, so that only the unit_ralloc.rank == 0 processors will contribute to the gather operation. If None, then it is assumed that all processors compute different local results.
 Returns
None
 allreduce_sum(self, result, local, unit_ralloc=None)¶
Sum local arrays on different processors, potentially using a unit resource allocation.
Similar to a normal MPI reduce call (with MPI.SUM type), but more easily integrates with a hierarchy of processor divisions, or nested comms, by taking a unit_ralloc argument. This is essentially another comm that specifies the groups of processors that have all computed the same local array. So, when performing the sum, only processors with unit_ralloc.rank == 0 contribute to the sum. This handles the case where simply summing the local contributions from all processors would result in overcounting because multiple processors hold the same logical result (summand).
 Parameters
result (numpy.ndarray, possibly shared) – The destination “global” array, with the same shape as all the local arrays being summed. This can be any shape (including any number of dimensions). When shared memory is being used, i.e. when this
ResourceAllocation
object has a nontrivial inter-host comm, this array must be allocated as a shared array using this ralloc or a larger one, so that result is shared between all the processors for this resource allocation’s intra-host communicator. This allows a speedup when shared memory is used by distributing computation of result over each host’s processors and performing these sums in parallel.
local (numpy.ndarray) – The locally computed quantity. This can be a shared-memory array, but need not be.
unit_ralloc (ResourceAllocation, optional) – A resource allocation (essentially a comm) for the group of processors that all compute the same local result, so that only the unit_ralloc.rank == 0 processors will contribute to the sum operation. If None, then it is assumed that all processors compute different local results.
 Returns
None
 allreduce_sum_simple(self, local, unit_ralloc=None)¶
A simplified sum over quantities on different processors that doesn’t use shared memory.
The shared memory usage of :method:`allreduce_sum` can be overkill when just summing a single scalar quantity. This method provides a way to easily sum a quantity across all the processors in this
ResourceAllocation
object using a unit resource allocation. Parameters
local (int or float) – The local (perprocessor) value to sum.
unit_ralloc (ResourceAllocation, optional) – A resource allocation (essentially a comm) for the group of processors that all compute the same local value, so that only the unit_ralloc.rank == 0 processors will contribute to the sum. If None, then it is assumed that each processor computes a logically different local value.
 Returns
float or int – The sum of all local quantities, returned on all the processors.
 allreduce_min(self, result, local, unit_ralloc=None)¶
Take elementwise min of local arrays on different processors, potentially using a unit resource allocation.
Similar to a normal MPI reduce call (with MPI.MIN type), but more easily integrates with a hierarchy of processor divisions, or nested comms, by taking a unit_ralloc argument. This is essentially another comm that specifies the groups of processors that have all computed the same local array. So, when performing the min operation, only processors with unit_ralloc.rank == 0 contribute.
 Parameters
result (numpy.ndarray, possibly shared) – The destination “global” array, with the same shape as all the local arrays being operated on. This can be any shape (including any number of dimensions). When shared memory is being used, i.e. when this
ResourceAllocation
object has a nontrivial inter-host comm, this array must be allocated as a shared array using this ralloc or a larger one, so that result is shared between all the processors for this resource allocation’s intra-host communicator. This allows a speedup when shared memory is used, by distributing the computation of result over each host’s processors and performing these operations in parallel.
local (numpy.ndarray) – The locally computed quantity. This can be a shared-memory array, but need not be.
unit_ralloc (ResourceAllocation, optional) – A resource allocation (essentially a comm) for the group of processors that all compute the same local result, so that only the unit_ralloc.rank == 0 processors will contribute to the min operation. If None, then it is assumed that all processors compute different local results.
 Returns
None
 allreduce_max(self, result, local, unit_ralloc=None)¶
Take elementwise max of local arrays on different processors, potentially using a unit resource allocation.
Similar to a normal MPI reduce call (with MPI.MAX type), but more easily integrates with a hierarchy of processor divisions, or nested comms, by taking a unit_ralloc argument. This is essentially another comm that specifies the groups of processors that have all computed the same local array. So, when performing the max operation, only processors with unit_ralloc.rank == 0 contribute.
 Parameters
result (numpy.ndarray, possibly shared) – The destination “global” array, with the same shape as all the local arrays being operated on. This can be any shape (including any number of dimensions). When shared memory is being used, i.e. when this
ResourceAllocation
object has a nontrivial inter-host comm, this array must be allocated as a shared array using this ralloc or a larger one, so that result is shared between all the processors for this resource allocation’s intra-host communicator. This allows a speedup when shared memory is used, by distributing the computation of result over each host’s processors and performing these operations in parallel.
local (numpy.ndarray) – The locally computed quantity. This can be a shared-memory array, but need not be.
unit_ralloc (ResourceAllocation, optional) – A resource allocation (essentially a comm) for the group of processors that all compute the same local result, so that only the unit_ralloc.rank == 0 processors will contribute to the max operation. If None, then it is assumed that all processors compute different local results.
 Returns
None
 bcast(self, value, root=0)¶
Broadcasts a value from the root processor/host to the others in this resource allocation.
This is similar to a usual MPI broadcast, except it takes advantage of shared memory when it is available. When shared memory is being used, i.e. when this
ResourceAllocation
object has a nontrivial inter-host comm, then this routine places value in a shared memory buffer and uses the resource allocation’s inter-host communicator to broadcast the result from the root host to all the other hosts, using all the processors on the root host in parallel (all processors with the same intra-host rank participate in an MPI broadcast).
 Parameters
value (numpy.ndarray) – The value to broadcast. May be shared memory but doesn’t need to be. This only needs to be specified on the rank-root processor; other processors can provide any value for this argument (it is unused).
root (int) – The rank of the processor whose value is to be broadcast.
 Returns
numpy.ndarray – The broadcast value, in a new, non-shared-memory array.
 __getstate__(self)¶
 class pygsti._CustomLMOptimizer(maxiter=100, maxfev=100, tol=1e-06, fditer=0, first_fditer=0, damping_mode='identity', damping_basis='diagonal_values', damping_clip=None, use_acceleration=False, uphill_step_threshold=0.0, init_munu='auto', oob_check_interval=0, oob_action='reject', oob_check_mode=0, serial_solve_proc_threshold=100)¶
Bases:
Optimizer
A Levenberg-Marquardt optimizer customized for GST-like problems.
 Parameters
maxiter (int, optional) – The maximum number of (outer) iterations.
maxfev (int, optional) – The maximum function evaluations.
tol (float or dict, optional) – The tolerance, specified as a single float or as a dict with keys {‘relx’, ‘relf’, ‘jac’, ‘maxdx’}. A single float sets the ‘relf’ and ‘jac’ elements and leaves the others at their default values.
fditer (int, optional) – Internally compute the Jacobian using a finite-difference method for the first fditer iterations. This is useful when the initial point lies at a special or singular point where the analytic Jacobian is misleading.
first_fditer (int, optional) – Number of finite-difference iterations applied to the first stage of the optimization (only). Unused.
damping_mode ({'identity', 'JTJ', 'invJTJ', 'adaptive'}) – How damping is applied. ‘identity’ means that the damping parameter mu multiplies the identity matrix. ‘JTJ’ means that mu multiplies the diagonal or singular values (depending on scaling_mode) of the JTJ (Fisher information and approximate Hessian) matrix, whereas ‘invJTJ’ means mu multiplies the reciprocals of these values instead. The ‘adaptive’ mode adaptively chooses a damping strategy.
damping_basis ({'diagonal_values', 'singular_values'}) – Whether the diagonal or singular values of the JTJ matrix are used during damping. If ‘singular_values’ is selected, then an SVD of the Jacobian (J) matrix is performed and damping is performed in the basis of (right) singular vectors. If ‘diagonal_values’ is selected, the diagonal values of relevant matrices are used as a proxy for the singular values (saving the cost of performing an SVD).
damping_clip (tuple, optional) – A 2-tuple giving upper and lower bounds for the values that mu multiplies. If damping_mode == “identity” then this argument is ignored, as mu always multiplies a 1.0 on the diagonal of the identity matrix. If None, then no clipping is applied.
use_acceleration (bool, optional) – Whether to include a geodesic acceleration term as suggested in arXiv:1201.5885. This is supposed to increase the rate of convergence with very little overhead. In practice we’ve seen mixed results.
uphill_step_threshold (float, optional) – Allows uphill steps when taking two consecutive steps in nearly the same direction. The condition for accepting an uphill step is that (uphill_step_threshold - beta)*new_objective < old_objective, where beta is the cosine of the angle between successive steps. If uphill_step_threshold == 0 then no uphill steps are allowed, otherwise it should take a value between 1.0 and 2.0, with 1.0 being the most permissive to uphill steps.
init_munu (tuple, optional) – If not None, a (mu, nu) tuple of 2 floats giving the initial values for mu and nu.
oob_check_interval (int, optional) – Every oob_check_interval outer iterations, the objective function (obj_fn) is called with a second argument ‘oob_check’, set to True. In this case, obj_fn can raise a ValueError exception to indicate that it is Out Of Bounds. If oob_check_interval is 0 then this check is never performed; if 1 then it is always performed.
oob_action ({"reject","stop"}) – What to do when the objective function indicates an out-of-bounds condition (by raising a ValueError as described above). “reject” means the step is rejected but the optimization proceeds; “stop” means the optimization stops and returns as converged at the last known-in-bounds point.
oob_check_mode (int, optional) – An advanced option, expert use only. If 0 then the optimization is halted as soon as an attempt is made to evaluate the function out of bounds. If 1 then the optimization is halted only when a would-be accepted step is out of bounds.
serial_solve_proc_threshold (int, optional) – When there are fewer than this many processors, the optimizer will solve linear systems serially, using SciPy on a single processor, rather than using a parallelized Gaussian-elimination (with partial pivoting) algorithm coded in Python. Since SciPy’s implementation is more efficient, it’s not worth using the parallel version until there are many processors to spread the work among.
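The two simplest damping modes can be illustrated with a plain-numpy Levenberg-Marquardt step (an illustrative sketch only; lm_step and its argument handling are hypothetical, not pygsti internals):

```python
import numpy as np

def lm_step(j, f, mu, damping_mode="identity"):
    """One illustrative Levenberg-Marquardt step: solve (JTJ + damping) dx = -JT f.

    damping_mode='identity' adds mu times the identity; 'JTJ' adds mu times the
    diagonal of JTJ, mirroring the two simplest modes described above.
    """
    jtj = j.T @ j
    if damping_mode == "identity":
        a = jtj + mu * np.eye(jtj.shape[0])
    elif damping_mode == "JTJ":
        a = jtj + mu * np.diag(np.diag(jtj))
    else:
        raise ValueError(damping_mode)
    return np.linalg.solve(a, -j.T @ f)

# Fit residuals f(x) = [x0 - 1, 2*(x1 - 3)] from x = (0, 0): one step with
# small damping lands essentially at the least-squares solution (1, 3).
x = np.zeros(2)
jac = np.array([[1.0, 0.0], [0.0, 2.0]])        # Jacobian of the residuals
resid = np.array([x[0] - 1.0, 2.0 * (x[1] - 3.0)])
x = x + lm_step(jac, resid, mu=1e-8)
```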
 _to_nice_serialization(self)¶
 classmethod _from_nice_serialization(cls, state)¶
 run(self, objective, profiler, printer)¶
Perform the optimization.
 Parameters
objective (ObjectiveFunction) – The objective function to optimize.
profiler (Profiler) – A profiler to track resource usage.
printer (VerbosityPrinter) – The printer to use for sending output to stdout.
 class pygsti._Optimizer¶
Bases:
pygsti.baseobjs.nicelyserializable.NicelySerializable
An optimizer. Optimizes an objective function.
 pygsti._dummy_profiler¶
 pygsti.CUSTOMLM = True¶
 pygsti.FLOATSIZE = 8¶
 pygsti.run_lgst(dataset, prep_fiducials, effect_fiducials, target_model, op_labels=None, op_label_aliases=None, guess_model_for_gauge=None, svd_truncate_to=None, verbosity=0)¶
Performs Linear-inversion Gate Set Tomography on the dataset.
 Parameters
dataset (DataSet) – The data used to generate the LGST estimates
prep_fiducials (list of Circuits) – Fiducial Circuits used to construct an informationally complete effective preparation.
effect_fiducials (list of Circuits) – Fiducial Circuits used to construct an informationally complete effective measurement.
target_model (Model) – A model used to specify which operation labels should be estimated and to provide a guess for which gauge these estimates should be returned in.
op_labels (list, optional) – A list of which operation labels (or aliases) should be estimated. Overrides the operation labels in target_model. e.g. [‘Gi’,’Gx’,’Gy’,’Gx2’]
op_label_aliases (dictionary, optional) – Dictionary whose keys are operation label “aliases” and whose values are circuits corresponding to what that operation label should be expanded into before querying the dataset. Defaults to the empty dictionary (no aliases defined) e.g. op_label_aliases[‘Gx^3’] = pygsti.obj.Circuit([‘Gx’,’Gx’,’Gx’])
guess_model_for_gauge (Model, optional) – A model used to compute a gauge transformation that is applied to the LGST estimates before they are returned. This gauge transformation is computed such that if the estimated gates matched the model given, then the operation matrices would match, i.e. the gauge would be the same as the model supplied. Defaults to target_model.
svd_truncate_to (int, optional) – The Hilbert space dimension to truncate the operation matrices to, using an SVD to keep only the largest svd_truncate_to singular values of the I_tilde LGST matrix. Zero means no truncation. Defaults to the dimension of target_model.
verbosity (int, optional) – How much detail to send to stdout.
 Returns
Model – A model containing all of the estimated labels (or aliases)
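The linear-inversion idea behind this routine can be sketched with plain numpy (illustrative only: the fiducial maps a and b below are random stand-ins for the circuit-derived matrices, and gauge fixing and SVD truncation are omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4  # superoperator dimension for one qubit

# Hypothetical effective-measurement (rows of a) and effective-preparation
# (columns of b) maps, standing in for what the fiducial circuits produce.
a = rng.normal(size=(d, d))
b = rng.normal(size=(d, d))
g_true = rng.normal(size=(d, d))       # the gate superoperator to estimate

p_gram = a @ b                         # Gram matrix (empty-circuit data)
p_gate = a @ g_true @ b                # data matrix for the gate

# Linear inversion recovers the gate up to the gauge similarity B^-1 G B.
g_est = np.linalg.solve(p_gram, p_gate)
expected = np.linalg.solve(b, g_true @ b)
```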
 pygsti._lgst_matrix_dims(model, prep_fiducials, effect_fiducials)¶
 pygsti._construct_ab(prep_fiducials, effect_fiducials, model, dataset, op_label_aliases=None)¶
 pygsti._construct_x_matrix(prep_fiducials, effect_fiducials, model, op_label_tuple, dataset, op_label_aliases=None)¶
 pygsti._construct_a(effect_fiducials, model)¶
 pygsti._construct_b(prep_fiducials, model)¶
 pygsti._construct_target_ab(prep_fiducials, effect_fiducials, target_model)¶
 pygsti.gram_rank_and_eigenvalues(dataset, prep_fiducials, effect_fiducials, target_model)¶
Returns the rank and singular values of the Gram matrix for a dataset.
 Parameters
dataset (DataSet) – The data used to populate the Gram matrix
prep_fiducials (list of Circuits) – Fiducial Circuits used to construct an informationally complete effective preparation.
effect_fiducials (list of Circuits) – Fiducial Circuits used to construct an informationally complete effective measurement.
target_model (Model) – A model used to make sense of circuit elements, and to compute the theoretical gram matrix eigenvalues (returned as svalues_target).
 Returns
rank (int) – the rank of the Gram matrix
svalues (numpy array) – the singular values of the Gram matrix
svalues_target (numpy array) – the corresponding singular values of the Gram matrix generated by target_model.
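A toy illustration of why the Gram matrix’s rank and singular values matter (all matrices here are hypothetical stand-ins for the fiducial-derived maps): the rank is capped by the state-space dimension, so a deficient rank signals fiducials that are not informationally complete.

```python
import numpy as np

# Hypothetical effective-measurement (A) and effective-preparation (B) maps
# for a d=2 dimensional state space probed with 3 fiducials each.
a = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])   # shape (3, 2)
b = np.array([[1.0, 0.0, 1.0], [0.0, 1.0, -1.0]])    # shape (2, 3)

gram = a @ b                                         # 3x3 Gram matrix
svals = np.linalg.svd(gram, compute_uv=False)
rank = int(np.sum(svals > 1e-10))   # 2: capped by the state-space dimension
```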
 pygsti.run_gst_fit_simple(dataset, start_model, circuits, optimizer, objective_function_builder, resource_alloc, verbosity=0)¶
Performs core Gate Set Tomography function of model optimization.
Optimizes the parameters of start_model by minimizing the objective function built by objective_function_builder. Probabilities are computed by the model, and outcome counts are supplied by dataset.
 Parameters
dataset (DataSet) – The dataset to obtain counts from.
start_model (Model) – The Model used as a starting point for the least-squares optimization.
circuits (list of (tuples or Circuits)) – Each tuple contains operation labels and specifies a circuit whose probabilities are considered when trying to least-squares-fit the probabilities given in the dataset. e.g. [ (), (‘Gx’,), (‘Gx’,’Gy’) ]
optimizer (Optimizer or dict) – The optimizer to use, or a dictionary of optimizer parameters from which a default optimizer can be built.
objective_function_builder (ObjectiveFunctionBuilder) – Defines the objective function that is optimized. Can also be anything readily converted to an objective function builder, e.g. “logl”.
resource_alloc (ResourceAllocation) – A resource allocation object containing information about how to divide computation amongst multiple processors and any memory limits that should be imposed.
verbosity (int, optional) – How much detail to send to stdout.
 Returns
result (OptimizerResult) – the result of the optimization
model (Model) – the best-fit model.
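The shape of the optimization can be seen in a toy weighted least-squares fit (a sketch using scipy with a hypothetical one-parameter model, not pygsti’s objective machinery): model probabilities are compared against observed frequencies, with residuals weighted by the count statistics.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy analogue of the model-optimization step: fit a one-parameter "model"
# p(theta) = (1 + cos(theta))/2 to observed outcome frequencies, the same
# shape of problem run_gst_fit_simple solves at scale.
counts = np.array([80.0, 20.0])            # hypothetical outcome counts
total = counts.sum()
freq = counts[0] / total

def residuals(theta):
    p = (1.0 + np.cos(theta[0])) / 2.0
    return np.sqrt(total) * np.array([p - freq])   # count-weighted residual

fit = least_squares(residuals, x0=[1.0])
p_fit = (1.0 + np.cos(fit.x[0])) / 2.0             # close to 0.8
```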
 pygsti.run_gst_fit(mdc_store, optimizer, objective_function_builder, verbosity=0)¶
Performs core Gate Set Tomography function of model optimization.
Optimizes the model to the data within mdc_store by minimizing the objective function built by objective_function_builder.
 Parameters
mdc_store (ModelDatasetCircuitsStore) – An object holding a model, data set, and set of circuits. This defines the model to be optimized, the data to fit to, and the circuits where predicted vs. observed comparisons should be made. This object also contains additional information specific to the given model, data set, and circuit list, doubling as a cache for increased performance. This information is also specific to a particular resource allocation, which affects how cached values are stored.
optimizer (Optimizer or dict) – The optimizer to use, or a dictionary of optimizer parameters from which a default optimizer can be built.
objective_function_builder (ObjectiveFunctionBuilder) – Defines the objective function that is optimized. Can also be anything readily converted to an objective function builder, e.g. “logl”. If None, then mdc_store must itself be an alreadybuilt objective function.
verbosity (int, optional) – How much detail to send to stdout.
 Returns
result (OptimizerResult) – the result of the optimization
objfn_store (MDCObjectiveFunction) – the objective function and store containing the best-fit model evaluated at the best-fit point.
 pygsti.run_iterative_gst(dataset, start_model, circuit_lists, optimizer, iteration_objfn_builders, final_objfn_builders, resource_alloc, verbosity=0)¶
Performs Iterative Gate Set Tomography on the dataset.
 Parameters
dataset (DataSet) – The data used to generate MLGST gate estimates
start_model (Model) – The Model used as a starting point for the least-squares optimization.
circuit_lists (list of lists of (tuples or Circuits)) – The ith element is a list of the circuits to be used in the ith iteration of the optimization. Each element of these lists is a circuit, specified as either a Circuit object or as a tuple of operation labels (but all must be specified using the same type). e.g. [ [ (), (‘Gx’,) ], [ (), (‘Gx’,), (‘Gy’,) ], [ (), (‘Gx’,), (‘Gy’,), (‘Gx’,’Gy’) ] ]
optimizer (Optimizer or dict) – The optimizer to use, or a dictionary of optimizer parameters from which a default optimizer can be built.
iteration_objfn_builders (list) – List of ObjectiveFunctionBuilder objects defining which objective functions should be optimized (successively) on each iteration.
final_objfn_builders (list) – List of ObjectiveFunctionBuilder objects defining which objective functions should be optimized (successively) on the final iteration.
resource_alloc (ResourceAllocation) – A resource allocation object containing information about how to divide computation amongst multiple processors and any memory limits that should be imposed.
verbosity (int, optional) – How much detail to send to stdout.
 Returns
models (list of Models) – list whose ith element is the model corresponding to the results of the ith iteration.
optimums (list of OptimizerResults) – list whose ith element is the final optimizer result from that iteration.
final_objfn (MDSObjectiveFunction) – The final iteration’s objective function / store, which encapsulates the final objective function evaluated at the best-fit point (an “evaluated” model-dataset-circuits store).
 pygsti._do_runopt(objective, optimizer, printer)¶
Runs the core model-optimization step within a GST routine by optimizing objective using optimizer.
This is factored out as a separate function because of the differences when running Taylor-term sim-type calculations, which utilize this as a subroutine (see :func:`_do_term_runopt`).
 Parameters
objective (MDSObjectiveFunction) – A “model-dataset” objective function to optimize.
optimizer (Optimizer) – The optimizer to use.
printer (VerbosityPrinter) – An object for printing output.
 Returns
OptimizerResult
 pygsti._do_term_runopt(objective, optimizer, printer)¶
Runs the core model-optimization step for models using the Taylor-term (path integral) method of computing probabilities.
This routine serves the same purpose as :func:`_do_runopt`, but is more complex because an appropriate “path set” must be found, requiring a loop of model optimizations with fixed path sets until a sufficiently good path set is obtained.
 Parameters
objective (MDSObjectiveFunction) – A “model-dataset” objective function to optimize.
optimizer (Optimizer) – The optimizer to use.
printer (VerbosityPrinter) – An object for printing output.
 Returns
OptimizerResult
 pygsti.find_closest_unitary_opmx(operation_mx)¶
Find the closest (in fidelity) unitary superoperator to operation_mx.
Finds the closest operation matrix (by maximizing fidelity) to operation_mx that describes a unitary quantum gate.
 Parameters
operation_mx (numpy array) – The operation matrix to act on.
 Returns
numpy array – The resulting closest unitary operation matrix.
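A related construction can be sketched via the polar decomposition, which gives the closest unitary in Frobenius norm (an illustrative analogue only: find_closest_unitary_opmx maximizes fidelity on superoperators rather than minimizing a Frobenius distance on plain matrices):

```python
import numpy as np

def closest_unitary(m):
    """Nearest unitary to m in Frobenius norm, via the polar decomposition:
    if m = W S Vh (SVD), the nearest unitary is W Vh."""
    w, _, vh = np.linalg.svd(m)
    return w @ vh

# A slightly non-unitary matrix, e.g. a noisy estimate of the identity gate.
m = np.array([[1.1, 0.05], [-0.02, 0.9]])
u = closest_unitary(m)   # u is exactly unitary
```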
 class pygsti._TrivialGaugeGroupElement(dim)¶
Bases:
GaugeGroupElement
Element of
TrivialGaugeGroup
 Parameters
dim (int) – The HilbertSchmidt space dimension of the gauge group.
 property transform_matrix(self)¶
The gauge-transform matrix.
 Returns
numpy.ndarray
 property transform_matrix_inverse(self)¶
The inverse of the gauge-transform matrix.
 Returns
numpy.ndarray
 deriv_wrt_params(self, wrt_filter=None)¶
Computes the derivative of the gauge group at this element.
That is, the derivative of a general element with respect to the gauge group’s parameters, evaluated at this element.
 Parameters
wrt_filter (list or numpy.ndarray, optional) – Indices of the gauge group parameters to differentiate with respect to. If None, differentiation is performed with respect to all the group’s parameters.
 Returns
numpy.ndarray
 to_vector(self)¶
Get the parameter vector corresponding to this transform.
 Returns
numpy.ndarray
 from_vector(self, v)¶
Reinitialize this GaugeGroupElement using the parameter vector v.
 Parameters
v (numpy.ndarray) – A 1D array of length :meth:`num_params`.
 Returns
None
 property num_params(self)¶
Return the number of parameters (degrees of freedom) of this element.
 Returns
int
 _to_nice_serialization(self)¶
 classmethod _from_nice_serialization(cls, state)¶
 pygsti.gaugeopt_to_target(model, target_model, item_weights=None, cptp_penalty_factor=0, spam_penalty_factor=0, gates_metric='frobenius', spam_metric='frobenius', gauge_group=None, method='auto', maxiter=100000, maxfev=None, tol=1e-08, oob_check_interval=0, return_all=False, comm=None, verbosity=0, check_jac=False)¶
Optimize the gauge degrees of freedom of a model to that of a target.
 Parameters
model (Model) – The model to gauge-optimize.
target_model (Model) – The model to optimize to. The metric used for comparing models is given by gates_metric and spam_metric.
item_weights (dict, optional) – Dictionary of weighting factors for gates and spam operators. Keys can be gate, state-preparation, or POVM-effect labels, as well as the special values “spam” or “gates”, which apply the given weighting to all spam operators or gates respectively. Values are floating point numbers. Values given for specific gates or spam operators take precedence over “gates” and “spam” values. The precise use of these weights depends on the model metric(s) being used.
cptp_penalty_factor (float, optional) – If greater than zero, the objective function also contains CPTP penalty terms which penalize nonCPTPness of the gates being optimized. This factor multiplies these CPTP penalty terms.
spam_penalty_factor (float, optional) – If greater than zero, the objective function also contains SPAM penalty terms which penalize nonpositiveness of the state preps being optimized. This factor multiplies these SPAM penalty terms.
gates_metric ({"frobenius", "fidelity", "tracedist"}, optional) – The metric used to compare gates within models. “frobenius” computes the normalized sqrt(sum-of-squared-differences), with weights multiplying the squared differences (see
Model.frobeniusdist()
). “fidelity” and “tracedist” sum the individual infidelities or trace distances of each gate, weighted by the weights.
spam_metric ({"frobenius", "fidelity", "tracedist"}, optional) – The metric used to compare spam vectors within models. “frobenius” computes the normalized sqrt(sum-of-squared-differences), with weights multiplying the squared differences (see
Model.frobeniusdist()
). “fidelity” and “tracedist” sum the individual infidelities or trace distances of each “SPAM gate”, weighted by the weights.
gauge_group (GaugeGroup, optional) – The gauge group which defines which gauge transformations are optimized over. If None, then the model’s default gauge group is used.
method (string, optional) –
The method used to optimize the objective function. Can be any method known by scipy.optimize.minimize such as ‘BFGS’, ‘Nelder-Mead’, ‘CG’, ‘L-BFGS-B’, or additionally:
’auto’ – ‘ls’ when allowed, otherwise ‘L-BFGS-B’
’ls’ – custom least-squares optimizer.
’custom’ – custom CG that often works better than ‘CG’
’supersimplex’ – repeated applications of ‘Nelder-Mead’ until convergence
’basinhopping’ – scipy.optimize.basinhopping using L-BFGS-B as a local optimizer
’swarm’ – particle swarm global optimization algorithm
’evolve’ – evolutionary global optimization algorithm using DEAP
’brute’ – Experimental: scipy.optimize.brute using 4 points along each dimension
maxiter (int, optional) – Maximum number of iterations for the gauge optimization.
maxfev (int, optional) – Maximum number of function evaluations for the gauge optimization. Defaults to maxiter.
tol (float, optional) – The tolerance for the gauge optimization.
oob_check_interval (int, optional) – If greater than zero, gauge transformations are allowed to fail (by raising any exception) to indicate an out-of-bounds condition that the gauge optimizer will avoid. If zero, then any gauge-transform failures just terminate the optimization.
return_all (bool, optional) – When True, return best “goodness” value and gauge matrix in addition to the gauge optimized model.
comm (mpi4py.MPI.Comm, optional) – When not None, an MPI communicator for distributing the computation across multiple processors.
verbosity (int, optional) – How much detail to send to stdout.
check_jac (bool) – When True, check the least-squares analytic Jacobian against finite differences.
 Returns
model if return_all == False
(goodnessMin, gaugeMx, model) if return_all == True – where goodnessMin is the minimum value of the goodness function (the best ‘goodness’) found, gaugeMx is the gauge matrix used to transform the model, and model is the final gauge-transformed model.
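The essence of gauge optimization, minimizing a model-to-target distance over gauge transformations G -> S G S^-1, can be sketched for a hypothetical one-parameter gauge group of 2x2 rotations (an illustrative sketch, not pygsti’s implementation):

```python
import numpy as np
from scipy.optimize import minimize

def rot(t):
    """2x2 rotation matrix: a hypothetical one-parameter gauge group."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s], [s, c]])

g_target = np.diag([1.0, 0.5])
g_model = rot(0.3) @ g_target @ rot(-0.3)    # the target, seen in a rotated gauge

def objective(t):
    # Frobenius-distance analogue of the gates_metric='frobenius' objective.
    s = rot(t[0])
    return np.linalg.norm(s @ g_model @ np.linalg.inv(s) - g_target) ** 2

res = minimize(objective, x0=[0.0])          # recovers the gauge rotation
```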
 pygsti.gaugeopt_custom(model, objective_fn, gauge_group=None, method='L-BFGS-B', maxiter=100000, maxfev=None, tol=1e-08, oob_check_interval=0, return_all=False, jacobian_fn=None, comm=None, verbosity=0)¶
Optimize the gauge of a model using a custom objective function.
 Parameters
model (Model) – The model to gauge-optimize.
objective_fn (function) – The function to be minimized. The function must take a single Model argument and return a float.
gauge_group (GaugeGroup, optional) – The gauge group which defines which gauge transformations are optimized over. If None, then the model’s default gauge group is used.
method (string, optional) –
The method used to optimize the objective function. Can be any method known by scipy.optimize.minimize such as ‘BFGS’, ‘Nelder-Mead’, ‘CG’, ‘L-BFGS-B’, or additionally:
’custom’ – custom CG that often works better than ‘CG’
’supersimplex’ – repeated applications of ‘Nelder-Mead’ until convergence
’basinhopping’ – scipy.optimize.basinhopping using L-BFGS-B as a local optimizer
’swarm’ – particle swarm global optimization algorithm
’evolve’ – evolutionary global optimization algorithm using DEAP
’brute’ – Experimental: scipy.optimize.brute using 4 points along each dimension
maxiter (int, optional) – Maximum number of iterations for the gauge optimization.
maxfev (int, optional) – Maximum number of function evaluations for the gauge optimization. Defaults to maxiter.
tol (float, optional) – The tolerance for the gauge optimization.
oob_check_interval (int, optional) – If greater than zero, gauge transformations are allowed to fail (by raising any exception) to indicate an out-of-bounds condition that the gauge optimizer will avoid. If zero, then any gauge-transform failures just terminate the optimization.
return_all (bool, optional) – When True, return best “goodness” value and gauge matrix in addition to the gauge optimized model.
jacobian_fn (function, optional) – The Jacobian of objective_fn. The function must take three parameters: 1) the untransformed Model, 2) the transformed Model, and 3) the GaugeGroupElement representing the transformation that brings the first argument into the second.
comm (mpi4py.MPI.Comm, optional) – When not None, an MPI communicator for distributing the computation across multiple processors.
verbosity (int, optional) – How much detail to send to stdout.
 Returns
model if return_all == False
(goodnessMin, gaugeMx, model) if return_all == True – where goodnessMin is the minimum value of the goodness function (the best ‘goodness’) found, gaugeMx is the gauge matrix used to transform the model, and model is the final gauge-transformed model.
 pygsti._create_objective_fn(model, target_model, item_weights=None, cptp_penalty_factor=0, spam_penalty_factor=0, gates_metric='frobenius', spam_metric='frobenius', method=None, comm=None, check_jac=False)¶
Creates the objective function and Jacobian (if available) for gaugeopt_to_target.
 pygsti._cptp_penalty_size(mdl)¶
Helper function – same as that in core.py.
 pygsti._spam_penalty_size(mdl)¶
Helper function – same as that in core.py.
 pygsti._cptp_penalty(mdl, prefactor, op_basis)¶
Helper function – CPTP penalty: (sum of trace-norms of gates), which in least-squares optimization means returning an array of the sqrt(trace-norm) of each gate. This function is the same as that in core.py.
 Returns
numpy array – a (real) 1D array of length len(mdl.operations).
 pygsti._spam_penalty(mdl, prefactor, op_basis)¶
Helper function – SPAM penalty: (sum of trace-norms of SPAM vectors), which in least-squares optimization means returning an array of the sqrt(trace-norm) of each SPAM vector. This function is the same as that in core.py.
 Returns
numpy array – a (real) 1D array of length _spam_penalty_size(mdl)
 pygsti._cptp_penalty_jac_fill(cp_penalty_vec_grad_to_fill, mdl_pre, mdl_post, gauge_group_el, prefactor, op_basis, wrt_filter)¶
Helper function – Jacobian of the CPTP penalty (sum of trace-norms of gates). Returns a (real) array of shape (len(mdl.operations), gauge_group_el.num_params).
 pygsti._spam_penalty_jac_fill(spam_penalty_vec_grad_to_fill, mdl_pre, mdl_post, gauge_group_el, prefactor, op_basis, wrt_filter)¶
Helper function – Jacobian of the SPAM penalty. Returns a (real) array of shape (_spam_penalty_size(mdl), gauge_group_el.num_params).
 pygsti._gram_rank_and_evals(dataset, prep_fiducials, effect_fiducials, target_model)¶
Returns the rank and singular values of the Gram matrix for a dataset.
 Parameters
dataset (DataSet) – The data used to populate the Gram matrix
prep_fiducials (list of Circuits) – Fiducial Circuits used to construct an informationally complete effective preparation.
effect_fiducials (list of Circuits) – Fiducial Circuits used to construct an informationally complete effective measurement.
target_model (Model) – A model used to make sense of circuit elements, and to compute the theoretical gram matrix eigenvalues (returned as svalues_target).
 Returns
rank (int) – the rank of the Gram matrix
svalues (numpy array) – the singular values of the Gram matrix
svalues_target (numpy array) – the corresponding singular values of the Gram matrix generated by target_model.
 pygsti.max_gram_basis(op_labels, dataset, max_length=0)¶
Compute a maximal set of basis circuits for a Gram matrix.
That is, a maximal set of strings {S_i} such that the gate strings { S_i S_j } are all present in dataset. If max_length > 0, then restrict len(S_i) <= max_length.
 Parameters
op_labels (list or tuple) – the operation labels to use in Gram matrix basis strings
dataset (DataSet) – the dataset to use when constructing the Gram matrix
max_length (int, optional) – the maximum string length considered for Gram matrix basis elements. Defaults to 0 (no limit).
 Returns
list of tuples – where each tuple contains operation labels and specifies a single circuit.
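The search can be sketched greedily (an illustrative sketch, not pygsti’s actual algorithm; the dataset here is a hypothetical set of circuits given as operation-label tuples):

```python
# Greedy sketch of the max_gram_basis idea: grow a set of strings {S_i} such
# that every product S_i S_j (tuple concatenation) appears in the dataset.
def greedy_gram_basis(candidates, dataset):
    basis = []
    for s in candidates:
        # Accept s only if all products with the current basis (and itself)
        # are present in the dataset.
        if all((s + t) in dataset and (t + s) in dataset for t in basis + [s]):
            basis.append(s)
    return basis

# Hypothetical dataset closed under the products of (), (Gx,), and (Gy,).
dataset = {(), ("Gx",), ("Gx", "Gx"), ("Gy",), ("Gx", "Gy"), ("Gy", "Gx"),
           ("Gy", "Gy")}
basis = greedy_gram_basis([(), ("Gx",), ("Gy",)], dataset)  # all three qualify
```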
 pygsti.max_gram_rank_and_eigenvalues(dataset, target_model, max_basis_string_length=10, fixed_lists=None)¶
Compute the rank and singular values of a maximal Gram matrix.
That is, compute the rank and singular values of the Gram matrix computed using the basis: max_gram_basis(dataset.gate_labels(), dataset, max_basis_string_length).
 Parameters
dataset (DataSet) – the dataset to use when constructing the Gram matrix
target_model (Model) – A model used to make sense of circuits and for the construction of a theoretical gram matrix and spectrum.
max_basis_string_length (int, optional) – the maximum string length considered for Gram matrix basis elements. Defaults to 10.
fixed_lists ((prep_fiducials, effect_fiducials), optional) – 2-tuple of
Circuit
lists, specifying the preparation and measurement fiducials to use when constructing the Gram matrix, and thereby bypassing the search for such lists.
 Returns
rank (integer)
singularvalues (numpy array)
targetsingularvalues (numpy array)
 pygsti.id2x2¶
 pygsti.sigmax¶
 pygsti.sigmay¶
 pygsti.sigmaz¶
 pygsti.unitary_to_pauligate(u)¶
Get the linear operator on (vectorized) density matrices corresponding to an n-qubit unitary operator on states.
 Parameters
u (numpy array) – A d x d array giving the action of the unitary on a state in the sigma-z basis, where d = 2 ** n_qubits.
 Returns
numpy array – The operator on density matrices that have been vectorized as length-d**2 vectors in the Pauli basis.
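For a single qubit this map can be written out directly as the Pauli transfer matrix R[i, j] = Tr(P_i U P_j U^dagger) / d (an illustrative re-implementation of the standard formula, not pygsti’s code):

```python
import numpy as np

# The single-qubit Pauli basis {I, X, Y, Z}.
paulis = [np.eye(2),
          np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]]),
          np.array([[1, 0], [0, -1]], dtype=complex)]

def unitary_to_ptm(u):
    """Pauli transfer matrix of a unitary: R[i, j] = Tr(P_i U P_j U^dag) / d."""
    d = u.shape[0]
    return np.array([[np.trace(p_i @ u @ p_j @ u.conj().T).real / d
                      for p_j in paulis] for p_i in paulis])

x_gate = paulis[1]
ptm = unitary_to_ptm(x_gate)   # diag(1, 1, -1, -1): X preserves X, flips Y and Z
```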
 pygsti.sigmaii¶
 pygsti.sigmaix¶
 pygsti.sigmaiy¶
 pygsti.sigmaiz¶
 pygsti.sigmaxi¶
 pygsti.sigmaxx¶
 pygsti.sigmaxy¶
 pygsti.sigmaxz¶
 pygsti.sigmayi¶
 pygsti.sigmayx¶
 pygsti.sigmayy¶
 pygsti.sigmayz¶
 pygsti.sigmazi¶
 pygsti.sigmazx¶
 pygsti.sigmazy¶
 pygsti.sigmazz¶
 pygsti.single_qubit_gate(hx, hy, hz, noise=0)¶
Construct the single-qubit operation matrix.
Build the operation matrix given by exponentiating -i * (hx*X + hy*Y + hz*Z), where X, Y, and Z are the sigma matrices. Thus, hx, hy, and hz correspond to rotation angles divided by 2. Additionally, uniform depolarization noise can be applied to the gate.
 Parameters
hx (float) – Coefficient of sigmaX matrix in exponent.
hy (float) – Coefficient of sigmaY matrix in exponent.
hz (float) – Coefficient of sigmaZ matrix in exponent.
noise (float, optional) – The amount of uniform depolarizing noise.
 Returns
numpy array – 4x4 operation matrix which operates on a 1-qubit density matrix expressed as a vector in the Pauli basis ( {I,X,Y,Z}/sqrt(2) ).
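The noiseless part of this construction can be sketched with numpy alone (rotation_unitary is a hypothetical helper, not pygsti's code): exponentiate the Hermitian generator via its eigendecomposition.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def rotation_unitary(hx, hy, hz):
    """Return exp(-1j*(hx*X + hy*Y + hz*Z)), computed via the eigendecomposition
    of the Hermitian exponent (avoids needing scipy.linalg.expm)."""
    h = hx * X + hy * Y + hz * Z
    evals, evecs = np.linalg.eigh(h)
    return evecs @ np.diag(np.exp(-1j * evals)) @ evecs.conj().T
```

Since hx is half the rotation angle, hx = pi/2 yields a pi rotation about x: exp(-i*(pi/2)*X) = -i*X.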
 pygsti.two_qubit_gate(ix=0, iy=0, iz=0, xi=0, xx=0, xy=0, xz=0, yi=0, yx=0, yy=0, yz=0, zi=0, zx=0, zy=0, zz=0, ii=0)¶
Construct the two-qubit operation matrix.
Build the operation matrix given by exponentiating -i * (xx*XX + xy*XY + …), where terms in the exponent are tensor products of two Pauli matrices.
 Parameters
ix (float, optional) – Coefficient of IX matrix in exponent.
iy (float, optional) – Coefficient of IY matrix in exponent.
iz (float, optional) – Coefficient of IZ matrix in exponent.
xi (float, optional) – Coefficient of XI matrix in exponent.
xx (float, optional) – Coefficient of XX matrix in exponent.
xy (float, optional) – Coefficient of XY matrix in exponent.
xz (float, optional) – Coefficient of XZ matrix in exponent.
yi (float, optional) – Coefficient of YI matrix in exponent.
yx (float, optional) – Coefficient of YX matrix in exponent.
yy (float, optional) – Coefficient of YY matrix in exponent.
yz (float, optional) – Coefficient of YZ matrix in exponent.
zi (float, optional) – Coefficient of ZI matrix in exponent.
zx (float, optional) – Coefficient of ZX matrix in exponent.
zy (float, optional) – Coefficient of ZY matrix in exponent.
zz (float, optional) – Coefficient of ZZ matrix in exponent.
ii (float, optional) – Coefficient of II matrix in exponent.
 Returns
numpy array – 16x16 operation matrix which operates on a 2-qubit density matrix expressed as a vector in the Pauli-product basis.
 class pygsti._DataSet(oli_data=None, time_data=None, rep_data=None, circuits=None, circuit_indices=None, outcome_labels=None, outcome_label_indices=None, static=False, file_to_load_from=None, collision_action='aggregate', comment=None, aux_info=None)¶
Bases:
object
An association between Circuits and outcome counts, serving as the input data for many QCVV protocols.
The DataSet class associates circuits with counts or time series of counts for each outcome label, and can be thought of as a table with gate strings labeling the rows and outcome labels and/or time labeling the columns. It is designed to behave similarly to a dictionary of dictionaries, so that counts are accessed by:
count = dataset[circuit][outcomeLabel]
in the time-independent case, and in the time-dependent case, for integer time index i >= 0,
outcomeLabel = dataset[circuit][i].outcome
count = dataset[circuit][i].count
time = dataset[circuit][i].time
 Parameters
oli_data (list or numpy.ndarray) – When static == True, a 1D numpy array containing outcome label indices (integers), concatenated for all sequences. Otherwise, a list of 1D numpy arrays, one array per gate sequence. In either case, this quantity is indexed by the values of circuit_indices or the index of circuits.
time_data (list or numpy.ndarray) – Same format as oli_data except stores floating-point timestamp values.
rep_data (list or numpy.ndarray) – Same format as oli_data except stores integer repetition counts for each “data bin” (i.e. (outcome, time) pair). If all repetitions equal 1 (“single-shot” timestamped data), then rep_data can be None (no repetitions).
circuits (list of (tuples or Circuits)) – Each element is a tuple of operation labels or a Circuit object. Indices for these strings are assumed to ascend from 0. These indices must correspond to the time series of spam-label indices (above). Only specify this argument OR circuit_indices, not both.
circuit_indices (ordered dictionary) – An OrderedDict with keys equal to circuits (tuples of operation labels) and values equal to integer indices associating a row/element of counts with the circuit. Only specify this argument OR circuits, not both.
outcome_labels (list of strings or int) – Specifies the set of spam labels for the DataSet. Indices for the spam labels are assumed to ascend from 0, starting with the first element of this list. These indices will associate each element of the time series with a spam label. Only specify this argument OR outcome_label_indices, not both. If an int, specifies that the outcome labels should be those for a standard set of this many qubits.
outcome_label_indices (ordered dictionary) – An OrderedDict with keys equal to spam labels (strings) and values equal to integer indices, associating each spam label with a given index. Only specify this argument OR outcome_labels, not both.
static (bool) – When True, create a read-only (“static”) DataSet which cannot be modified; in this case you must specify the time-series data, circuits, and spam labels. When False, create a DataSet that can have time-series data added to it; in this case you only need to specify the spam labels.
file_to_load_from (string or file object) – Specify this argument and no others to create a static DataSet by loading from a file (just like using the load(…) function).
collision_action ({"aggregate","overwrite","keepseparate"}) – Specifies how duplicate circuits should be handled. “aggregate” adds duplicate-circuit counts to the same circuit’s data at the next integer timestamp. “overwrite” only keeps the latest given data for a circuit. “keepseparate” tags duplicate circuits by setting the .occurrence ID of added circuits that are already contained in this data set to the next available positive integer.
comment (string, optional) – A user-specified comment string that gets carried around with the data. A common use for this field is to attach details regarding the data’s collection.
aux_info (dict, optional) – A user-specified dictionary of per-circuit auxiliary information. Keys should be the circuits in this DataSet and values should be Python dictionaries.
 __iter__(self)¶
 __len__(self)¶
 __contains__(self, circuit)¶
Test whether data set contains a given circuit.
 Parameters
circuit (tuple or Circuit) – A tuple of operation labels or a Circuit instance which specifies the circuit to check for.
 Returns
bool – whether circuit was found.
 __hash__(self)¶
Return hash(self).
 __getitem__(self, circuit)¶
 __setitem__(self, circuit, outcome_dict_or_series)¶
 __delitem__(self, circuit)¶
 _get_row(self, circuit)¶
Get a row of data from this DataSet.
 Parameters
circuit (Circuit or tuple) – The gate sequence to extract data for.
 Returns
_DataSetRow
 _set_row(self, circuit, outcome_dict_or_series)¶
Set the counts for a row of this DataSet.
 Parameters
circuit (Circuit or tuple) – The gate sequence to set data for.
outcome_dict_or_series (dict or tuple) – The outcome count data, either a dictionary of outcome counts (with keys as outcome labels) or a tuple of lists. In the latter case this can be a 2-tuple: (outcome-label-list, time-stamp-list) or a 3-tuple: (outcome-label-list, time-stamp-list, repetition-count-list).
 Returns
None
 keys(self)¶
Returns the circuits used as keys of this DataSet.
 Returns
list – A list of Circuit objects which index the data counts within this data set.
 items(self)¶
Iterator over (circuit, timeSeries) pairs.
Here circuit is a tuple of operation labels and timeSeries is a _DataSetRow instance, which behaves similarly to a list of spam labels whose index corresponds to the time step.
 Returns
_DataSetKVIterator
 values(self)¶
Iterator over _DataSetRow instances corresponding to the time series data for each circuit.
 Returns
_DataSetValueIterator
 property outcome_labels(self)¶
Get a list of all the outcome labels contained in this DataSet.
 Returns
list of strings or tuples – A list where each element is an outcome label (which can be a string or a tuple of strings).
 property timestamps(self)¶
Get a list of all the (unique) timestamps contained in this DataSet.
 Returns
list of floats – A list where each element is a timestamp.
 gate_labels(self, prefix='G')¶
Get a list of all the distinct operation labels used in the circuits of this dataset.
 Parameters
prefix (str) – Filter the circuit labels so that only elements beginning with this prefix are returned. None performs no filtering.
 Returns
list of strings – A list where each element is an operation label.
 degrees_of_freedom(self, circuits=None, method='present_outcomes-1', aggregate_times=True)¶
Returns the number of independent degrees of freedom in the data for the circuits in circuits.
 Parameters
circuits (list of Circuits) – The list of circuits to count degrees of freedom for. If None then all of the DataSet’s strings are used.
method ({'all_outcomes-1', 'present_outcomes-1', 'tuned'}) – How the degrees of freedom should be computed. ‘all_outcomes-1’ takes the number of circuits and multiplies this by the total number of outcomes (the length of what is returned by outcome_labels()) minus one. ‘present_outcomes-1’ counts on a per-circuit basis the number of present (usually = non-zero) outcomes recorded minus one. ‘tuned’ should be the most accurate, as it accounts for low-N “Poisson bump” behavior, but it is not the default because it is still under development. For timestamped data, see aggregate_times below.
aggregate_times (bool, optional) – Whether counts that occur at different times should be tallied separately. If True, then even when counts occur at different times, degrees of freedom are tallied on a per-circuit basis. If False, then counts occurring at distinct times are treated as independent of those at any other time, and are tallied separately. So, for example, if aggregate_times is False and a data row has 0- and 1-counts of 45 & 55 at time=0 and 42 & 58 at time=1, this row would contribute 2 degrees of freedom, not 1. It can sometimes be useful to set this to False when the DataSet holds coarse-grained data, but usually you want this left as True (especially for time-series data).
 Returns
int
 _collisionaction_update_circuit(self, circuit)¶
 _add_explicit_repetition_counts(self)¶
Build internal repetition counts if they don’t exist already.
This method is usually unnecessary, as repetition counts are almost always built as soon as they are needed.
 Returns
None
 add_count_dict(self, circuit, count_dict, record_zero_counts=True, aux=None, update_ol=True)¶
Add a single circuit’s counts to this DataSet
 Parameters
circuit (tuple or Circuit) – A tuple of operation labels specifying the circuit or a Circuit object
count_dict (dict) – A dictionary with keys = outcome labels and values = counts
record_zero_counts (bool, optional) – Whether zero-counts are actually recorded (stored) in this DataSet. If False, then zero counts are ignored, except for potentially registering new outcome labels.
aux (dict, optional) – A dictionary of auxiliary meta information to be included with this set of data counts (associated with circuit).
update_ol (bool, optional) – This argument is for internal use only and should be left as True.
 Returns
None
 add_count_list(self, circuit, outcome_labels, counts, record_zero_counts=True, aux=None, update_ol=True, unsafe=False)¶
Add a single circuit’s counts to this DataSet
 Parameters
circuit (tuple or Circuit) – A tuple of operation labels specifying the circuit or a Circuit object
outcome_labels (list or tuple) – The outcome labels corresponding to counts.
counts (list or tuple) – The counts themselves.
record_zero_counts (bool, optional) – Whether zero-counts are actually recorded (stored) in this DataSet. If False, then zero counts are ignored, except for potentially registering new outcome labels.
aux (dict, optional) – A dictionary of auxiliary meta information to be included with this set of data counts (associated with circuit).
update_ol (bool, optional) – This argument is for internal use only and should be left as True.
unsafe (bool, optional) – True means that outcome_labels is guaranteed to hold tuple-type outcome labels and never plain strings. Only set this to True if you know what you’re doing.
 Returns
None
 add_count_arrays(self, circuit, outcome_index_array, count_array, record_zero_counts=True, aux=None)¶
Add the outcomes for a single circuit, formatted as raw data arrays.
 Parameters
circuit (Circuit) – The circuit to add data for.
outcome_index_array (numpy.ndarray) – An array of outcome indices, which must be values of self.olIndex (which maps outcome labels to indices).
count_array (numpy.ndarray) – An array of integer (or sometimes floating point) counts, one corresponding to each outcome index (element of outcome_index_array).
record_zero_counts (bool, optional) – Whether zero counts (zeros in count_array) should be stored explicitly or not stored and inferred. Setting to False reduces the space taken by data sets containing lots of zero counts, but makes some objective function evaluations less precise.
aux (dict or None, optional) – If not None a dictionary of userdefined auxiliary information that should be associated with this circuit.
 Returns
None
 add_cirq_trial_result(self, circuit, trial_result, key)¶
Add a single circuit’s counts — stored in a Cirq TrialResult — to this DataSet
 Parameters
circuit (tuple or Circuit) – A tuple of operation labels specifying the circuit or a Circuit object. Note that this must be a PyGSTi circuit — not a Cirq circuit.
trial_result (cirq.TrialResult) – The TrialResult to add
key (str) – The string key of the measurement. Set by cirq.measure.
 Returns
None
 add_raw_series_data(self, circuit, outcome_label_list, time_stamp_list, rep_count_list=None, overwrite_existing=True, record_zero_counts=True, aux=None, update_ol=True, unsafe=False)¶
Add a single circuit’s counts to this DataSet
 Parameters
circuit (tuple or Circuit) – A tuple of operation labels specifying the circuit or a Circuit object
outcome_label_list (list) – A list of outcome labels (strings or tuples). An element’s index links it to a particular time step (i.e. the ith element of the list specifies the outcome of the ith measurement in the series).
time_stamp_list (list) – A list of floating point timestamps, each associated with the single corresponding outcome in outcome_label_list. Must be the same length as outcome_label_list.
rep_count_list (list, optional) – A list of integer counts specifying how many outcomes of type given by outcome_label_list occurred at the time given by time_stamp_list. If None, then all counts are assumed to be 1. When not None, must be the same length as outcome_label_list.
overwrite_existing (bool, optional) – Whether to overwrite the data for circuit (if it exists). If False, then the given lists are appended (added) to existing data.
record_zero_counts (bool, optional) – Whether zero-counts (elements of rep_count_list that are zero) are actually recorded (stored) in this DataSet. If False, then zero counts are ignored, except for potentially registering new outcome labels.
aux (dict, optional) – A dictionary of auxiliary meta information to be included with this set of data counts (associated with circuit).
update_ol (bool, optional) – This argument is for internal use only and should be left as True.
unsafe (bool, optional) – When True, don’t bother checking that outcome_label_list contains tuple-type outcome labels and automatically upgrading strings to 1-tuples. Only set this to True if you know what you’re doing and need the marginally faster performance.
 Returns
None
 _add_raw_arrays(self, circuit, oli_array, time_array, rep_array, overwrite_existing, record_zero_counts, aux)¶
 update_ol(self)¶
Updates the internal outcomelabel list in this dataset.
Call this after calling add_count_dict(…) or add_raw_series_data(…) with update_ol=False.
 Returns
None
 add_series_data(self, circuit, count_dict_list, time_stamp_list, overwrite_existing=True, record_zero_counts=True, aux=None)¶
Add a single circuit’s counts to this DataSet
 Parameters
circuit (tuple or Circuit) – A tuple of operation labels specifying the circuit or a Circuit object
count_dict_list (list) – A list of dictionaries holding the outcome-label:count pairs for each time step (times given by time_stamp_list).
time_stamp_list (list) – A list of floating point timestamps, each associated with an entire dictionary of outcomes specified by count_dict_list.
overwrite_existing (bool, optional) – If True, overwrite any existing data for the circuit. If False, add the count data with the next nonnegative integer timestamp.
record_zero_counts (bool, optional) – Whether zero-counts (elements of the dictionaries in count_dict_list that are zero) are actually recorded (stored) in this DataSet. If False, then zero counts are ignored, except for potentially registering new outcome labels.
aux (dict, optional) – A dictionary of auxiliary meta information to be included with this set of data counts (associated with circuit).
 Returns
None
 aggregate_outcomes(self, label_merge_dict, record_zero_counts=True)¶
Creates a DataSet which merges certain outcomes in this DataSet.
Used, for example, to aggregate a 2-qubit 4-outcome DataSet into a 1-qubit 2-outcome DataSet.
 Parameters
label_merge_dict (dictionary) – The dictionary whose keys define the new DataSet outcomes, and whose items are lists of input DataSet outcomes that are to be summed together. For example, if a two-qubit DataSet has outcome labels “00”, “01”, “10”, and “11”, and we want to “aggregate out” the second qubit, we could use label_merge_dict = {'0': ['00','01'], '1': ['10','11']}. When doing this, however, it may be better to use :function:`filter_qubits`, which also updates the circuits.
record_zero_counts (bool, optional) – Whether zero-counts are actually recorded (stored) in the returned (merged) DataSet. If False, then zero counts are ignored, except for potentially registering new outcome labels.
 Returns
merged_dataset (DataSet object) – The DataSet with outcomes merged according to the rules given in label_merge_dict.
 aggregate_std_nqubit_outcomes(self, qubit_indices_to_keep, record_zero_counts=True)¶
Creates a DataSet which merges certain outcomes in this DataSet.
Used, for example, to aggregate a 2-qubit 4-outcome DataSet into a 1-qubit 2-outcome DataSet. This assumes that outcome labels are in the standard format whereby each qubit corresponds to a single ‘0’ or ‘1’ character.
 Parameters
qubit_indices_to_keep (list) – A list of integers specifying which qubits should be kept, that is, not aggregated.
record_zero_counts (bool, optional) – Whether zero-counts are actually recorded (stored) in the returned (merged) DataSet. If False, then zero counts are ignored, except for potentially registering new outcome labels.
 Returns
merged_dataset (DataSet object) – The DataSet with outcomes merged.
 add_auxiliary_info(self, circuit, aux)¶
Add auxiliary meta information to circuit.
 Parameters
circuit (tuple or Circuit) – A tuple of operation labels specifying the circuit or a Circuit object
aux (dict, optional) – A dictionary of auxiliary meta information to be included with this set of data counts (associated with circuit).
 Returns
None
 add_counts_from_dataset(self, other_data_set)¶
Append another DataSet’s data to this DataSet
 Parameters
other_data_set (DataSet) – The dataset to take counts from.
 Returns
None
 add_series_from_dataset(self, other_data_set)¶
Append another DataSet’s series data to this DataSet
 Parameters
other_data_set (DataSet) – The dataset to take time series data from.
 Returns
None
 property meantimestep(self)¶
The mean timestep, averaged over the timestep for each circuit and over circuits.
 Returns
float
 property has_constant_totalcounts_pertime(self)¶
True if the data for every circuit has the same number of total counts at every data collection time.
This will return True even if the total number of counts differs between circuits (i.e., after aggregating over time), as long as every circuit has the same total counts per time step (this can happen when the number of time steps varies between circuits).
 Returns
bool
 property totalcounts_pertime(self)¶
Total counts per time, if this is constant over times and circuits.
When that doesn’t hold, an error is raised.
 Returns
float or int
 property has_constant_totalcounts(self)¶
True if the data for every circuit has the same number of total counts.
 Returns
bool
 property has_trivial_timedependence(self)¶
True if all the data in this DataSet occurs at time 0.
 Returns
bool
 __str__(self)¶
Return str(self).
 to_str(self, mode='auto')¶
Render this DataSet as a string.
 Parameters
mode ({"auto","time-dependent","time-independent"}) – Whether to display the data as time series of outcome counts (“time-dependent”) or to report per-outcome counts aggregated over time (“time-independent”). If “auto” is specified, then the time-independent mode is used only if all time stamps in the DataSet are equal to zero (trivial time dependence).
 Returns
str
 truncate(self, list_of_circuits_to_keep, missing_action='raise')¶
Create a truncated dataset comprising a subset of the circuits in this dataset.
 Parameters
list_of_circuits_to_keep (list of (tuples or Circuits)) – A list of the circuits for the new returned dataset. If a circuit is given in this list that isn’t in the original data set, missing_action determines the behavior.
missing_action ({"raise","warn","ignore"}) – What to do when a string in list_of_circuits_to_keep is not in the data set (raise a KeyError, issue a warning, or do nothing).
 Returns
DataSet – The truncated data set.
 time_slice(self, start_time, end_time, aggregate_to_time=None)¶
Creates a DataSet by aggregating the counts within the [start_time, end_time) interval.
 Parameters
start_time (float) – The starting time.
end_time (float) – The ending time.
aggregate_to_time (float, optional) – If not None, a single timestamp to give all the data in the specified range, resulting in a time-independent DataSet. If None, then the original timestamps are preserved.
 Returns
DataSet
 split_by_time(self, aggregate_to_time=None)¶
Creates a dictionary of DataSets, each of which is an equal-time slice of this DataSet.
The keys of the returned dictionary are the distinct timestamps in this dataset.
 Parameters
aggregate_to_time (float, optional) – If not None, a single timestamp to give all the data in each returned data set, resulting in time-independent DataSets. If None, then the original timestamps are preserved.
 Returns
OrderedDict – A dictionary of DataSet objects whose keys are the timestamp values of the original (this) data set, in sorted order.
 drop_zero_counts(self)¶
Creates a copy of this data set that doesn’t include any zero counts.
 Returns
DataSet
 process_times(self, process_times_array_fn)¶
Manipulate this DataSet’s timestamps according to process_times_array_fn.
For example, the following process_times_array_fn would change the timestamps for each circuit to sequential integers.
def process_times_array_fn(times):
    return list(range(len(times)))
 Parameters
process_times_array_fn (function) – A function which takes a single arrayoftimestamps argument and returns another similarlysized array. This function is called, once per circuit, with the circuit’s array of timestamps.
 Returns
DataSet – A new data set with altered timestamps.
 process_circuits(self, processor_fn, aggregate=False)¶
Create a new data set by manipulating this DataSet’s circuits (keys) according to processor_fn.
The new DataSet’s circuits result from running each of this DataSet’s circuits through processor_fn. This can be useful when “tracing out” qubits in a dataset containing multi-qubit data.
 Parameters
processor_fn (function) – A function which takes a single Circuit argument and returns another (or the same) Circuit. This function may also return None, in which case the data for that string is deleted.
aggregate (bool, optional) – When True, aggregate the data for circuits that processor_fn assigns to the same “new” circuit. When False, use the data from the last original circuit that maps to a given “new” circuit.
 Returns
DataSet
 process_circuits_inplace(self, processor_fn, aggregate=False)¶
Manipulate this DataSet’s circuits (keys) inplace according to processor_fn.
All of this DataSet’s circuits are updated by running each one through processor_fn. This can be useful when “tracing out” qubits in a dataset containing multi-qubit data.
 Parameters
processor_fn (function) – A function which takes a single Circuit argument and returns another (or the same) Circuit. This function may also return None, in which case the data for that string is deleted.
aggregate (bool, optional) – When True, aggregate the data for circuits that processor_fn assigns to the same “new” circuit. When False, use the data from the last original circuit that maps to a given “new” circuit.
 Returns
None
 remove(self, circuits, missing_action='raise')¶
Remove (delete) the data for circuits from this DataSet.
 Parameters
circuits (iterable) – An iterable over Circuit-like objects specifying the keys (circuits) to remove.
missing_action ({"raise","warn","ignore"}) – What to do when a string in circuits is not in this data set (raise a KeyError, issue a warning, or do nothing).
 Returns
None
 _remove(self, gstr_indices)¶
Removes the data in indices given by gstr_indices
 copy(self)¶
Make a copy of this DataSet.
 Returns
DataSet
 copy_nonstatic(self)¶
Make a non-static copy of this DataSet.
 Returns
DataSet
 done_adding_data(self)¶
Promotes a non-static DataSet to a static (read-only) DataSet.
This method should be called after all data has been added.
 Returns
None
 __getstate__(self)¶
 __setstate__(self, state_dict)¶
 save(self, file_or_filename)¶
 write_binary(self, file_or_filename)¶
Write this data set to a binary-format file.
 Parameters
file_or_filename (string or file object) – If a string, interpreted as a filename. If this filename ends in “.gz”, the file will be gzip compressed.
 Returns
None
 load(self, file_or_filename)¶
 read_binary(self, file_or_filename)¶
Read a DataSet from a binary file, clearing any data it contained previously.
The file should have been created with :method:`DataSet.write_binary`
 Parameters
file_or_filename (str or buffer) – The file or filename to load from.
 Returns
None
 rename_outcome_labels(self, old_to_new_dict)¶
Replaces existing outcome labels with new ones as per old_to_new_dict.
 Parameters
old_to_new_dict (dict) – A mapping from old/existing outcome labels to new ones. Strings in keys or values are automatically converted to 1-tuples. Missing outcome labels are left unaltered.
 Returns
None
 add_std_nqubit_outcome_labels(self, nqubits)¶
Adds all the “standard” outcome labels (e.g. ‘0010’) on nqubits qubits.
This is useful to ensure that, even if not all outcomes appear in the data, all are recognized as potentially valid outcomes (so attempts to get counts for these outcomes will return 0 rather than raising an error).
 Parameters
nqubits (int) – The number of qubits. For example, if equal to 3 the outcome labels ‘000’, ‘001’, … ‘111’ are added.
 Returns
None
 add_outcome_labels(self, outcome_labels, update_ol=True)¶
Adds new valid outcome labels.
Ensures that all the elements of outcome_labels are stored as valid outcomes for circuits in this DataSet, adding new outcomes as necessary.
 Parameters
outcome_labels (list or generator) – A list or generator of string or tuplevalued outcome labels.
update_ol (bool, optional) – Whether to update internal mappings to reflect the new outcome labels. Leave this as True unless you really know what you’re doing.
 Returns
None
 auxinfo_dataframe(self, pivot_valuename=None, pivot_value=None, drop_columns=False)¶
Create a Pandas dataframe with auxdata from this dataset.
 Parameters
pivot_valuename (str, optional) – If not None, the resulting dataframe is pivoted using pivot_valuename as the column whose values name the pivoted table’s column names. If None and pivot_value is not None, “ValueName” is used.
pivot_value (str, optional) – If not None, the resulting dataframe is pivoted such that values of the pivot_value column are rearranged into new columns whose names are given by the values of the pivot_valuename column. If None and pivot_valuename is not None, “Value” is used.
drop_columns (bool or list, optional) – A list of column names to drop (prior to performing any pivot). If True appears in this list or is given directly, then all constantvalued columns are dropped as well. No columns are dropped when drop_columns == False.
 Returns
pandas.DataFrame
 pygsti.create_bootstrap_dataset(input_data_set, generation_method, input_model=None, seed=None, outcome_labels=None, verbosity=1)¶
Creates a DataSet used for generating bootstrapped error bars.
 Parameters
input_data_set (DataSet) – The data set to use for generating the “bootstrapped” data set.
generation_method ({ 'nonparametric', 'parametric' }) – The type of dataset to generate. ‘parametric’ generates a DataSet with the same circuits and sample counts as input_data_set but using the probabilities in input_model (which must be provided). ‘nonparametric’ generates a DataSet with the same circuits and sample counts as input_data_set using the count frequencies of input_data_set as probabilities.
input_model (Model, optional) – The model used to compute the probabilities for circuits when generation_method is set to ‘parametric’. If ‘nonparametric’ is selected, this argument must be set to None (the default).
seed (int, optional) – A seed value for numpy’s random number generator.
outcome_labels (list, optional) – The list of outcome labels to include in the output dataset. If None, this defaults to the spam labels of input_data_set.
verbosity (int, optional) – How verbose the function output is. If 0, then printing is suppressed. If 1 (or greater), then printing is not suppressed.
 Returns
DataSet
 pygsti.create_bootstrap_models(num_models, input_data_set, generation_method, fiducial_prep, fiducial_measure, germs, max_lengths, input_model=None, target_model=None, start_seed=0, outcome_labels=None, lsgst_lists=None, return_data=False, verbosity=2)¶
Creates a series of “bootstrapped” Models.
Models are created from a single DataSet (and possibly Model) and are typically used for generating bootstrapped error bars. The resulting Models are obtained by performing MLGST on data generated by repeatedly calling :function:`create_bootstrap_dataset` with consecutive integer seed values.
 Parameters
num_models (int) – The number of models to create.
input_data_set (DataSet) – The data set to use for generating the “bootstrapped” data set.
generation_method ({ 'nonparametric', 'parametric' }) – The type of data to generate. ‘parametric’ generates DataSets with the same circuits and sample counts as input_data_set but using the probabilities in input_model (which must be provided). ‘nonparametric’ generates DataSets with the same circuits and sample counts as input_data_set using the count frequencies of input_data_set as probabilities.
fiducial_prep (list of Circuits) – The state preparation fiducial circuits used by MLGST.
fiducial_measure (list of Circuits) – The measurement fiducial circuits used by MLGST.
germs (list of Circuits) – The germ circuits used by MLGST.
max_lengths (list of ints) – List of integers, one per MLGST iteration, which set truncation lengths for repeated germ strings. The list of circuits for the ith LSGST iteration includes the repeated germs truncated to the Lvalues up to and including the ith one.
input_model (Model, optional) – The model used to compute the probabilities for circuits when generation_method is set to ‘parametric’. If ‘nonparametric’ is selected, this argument must be set to None (the default).
target_model (Model, optional) – Mandatory model to use for as the target model for MLGST when generation_method is set to ‘nonparametric’. When ‘parametric’ is selected, input_model is used as the target.
start_seed (int, optional) – The initial seed value for numpy’s random number generator when generating data sets. For each succesive dataset (and model) that are generated, the seed is incremented by one.
outcome_labels (list, optional) – The list of outcome labels to include in the output dataset. If None, defaults to the effect labels of input_data_set.
lsgst_lists (list of circuit lists, optional) – Provides an explicit list of circuit lists to be used in analysis; to be given if the dataset uses “incomplete” or “reduced” sets of circuits. Default is None.
return_data (bool) – Whether generated data sets should be returned in addition to models.
verbosity (int) – Level of detail printed to stdout.
 Returns
models (list) – The list of generated Model objects.
data (list) – The list of generated DataSet objects, only returned when return_data == True.
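The two generation methods can be illustrated with a toy resampler. This is a hypothetical helper, not pyGSTi’s :function:`create_bootstrap_dataset`: ‘nonparametric’ resamples from the observed count frequencies, while ‘parametric’ resamples from model probabilities.

```python
# Toy illustration of the two generation methods (hypothetical helper,
# not pyGSTi's create_bootstrap_dataset): 'nonparametric' resamples from
# the observed count frequencies, 'parametric' from model probabilities.
import random

def resample_counts(counts, probs=None, seed=0):
    """Resample the outcome counts of one circuit.

    counts : dict outcome-label -> observed count.
    probs  : dict outcome-label -> model probability, or None for
             nonparametric resampling (observed frequencies used).
    """
    rng = random.Random(seed)
    n = sum(counts.values())
    outcomes = sorted(counts)
    if probs is None:   # nonparametric: frequencies as probabilities
        weights = [counts[o] / n for o in outcomes]
    else:               # parametric: probabilities from the input model
        weights = [probs[o] for o in outcomes]
    draws = rng.choices(outcomes, weights=weights, k=n)
    return {o: draws.count(o) for o in outcomes}

observed = {'0': 90, '1': 10}
boot = resample_counts(observed)                         # nonparametric
para = resample_counts(observed, {'0': 0.5, '1': 0.5})   # parametric
```

Either way the total number of counts per circuit is preserved; only the split among outcomes is redrawn, which is what makes repeated calls with incremented seeds useful for error bars.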
 pygsti.gauge_optimize_models(gs_list, target_model, gate_metric='frobenius', spam_metric='frobenius', plot=True)¶
Optimizes the “spam weight” parameter used when gauge optimizing a set of models.
This function gauge optimizes multiple times using a range of spam weights and keeps the one that minimizes the average spam error multiplied by the average gate error (with respect to a target model).
 Parameters
gs_list (list) – The list of Model objects to gauge optimize (simultaneously).
target_model (Model) – The model to compare the gauge-optimized gates with, and also to gauge-optimize them to.
gate_metric ({ "frobenius", "fidelity", "tracedist" }, optional) – The metric used within the gauge optimization to determine error in the gates.
spam_metric ({ "frobenius", "fidelity", "tracedist" }, optional) – The metric used within the gauge optimization to determine error in the state preparation and measurement.
plot (bool, optional) – Whether to create a plot of the model-target discrepancy as a function of spam weight (figure displayed interactively).
 Returns
list – The list of Models gauge-optimized using the best spam weight.
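The selection rule above can be sketched in a few lines. The error curves here are hypothetical toy functions standing in for the gauge-optimization metrics, chosen so that the product has an interior minimum.

```python
# Sketch of the spam-weight selection rule described above: scan a
# range of weights and keep the one minimizing
# (average spam error) x (average gate error).
# The error curves are hypothetical toy functions, not pyGSTi's metrics.

def pick_spam_weight(spam_weights, avg_gate_err, avg_spam_err):
    """Return the weight minimizing avg_spam_err(w) * avg_gate_err(w)."""
    return min(spam_weights, key=lambda w: avg_spam_err(w) * avg_gate_err(w))

weights = [10**k for k in range(-4, 1)]  # 1e-4 ... 1e0
best = pick_spam_weight(weights,
                        avg_gate_err=lambda w: 0.01 + w,
                        avg_spam_err=lambda w: 0.02 + 1e-4 / w)
# with these toy curves the product is minimized at an interior weight
```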
 pygsti._model_stdev(gs_func, gs_ensemble, ddof=1, axis=None, **kwargs)¶
Standard deviation of gs_func over an ensemble of models.
 Parameters
gs_func (function) – A function that takes a Model as its first argument, and whose additional arguments may be given by keyword arguments.
gs_ensemble (list) – A list of Model objects.
ddof (int, optional) – As in numpy.std
axis (int or None, optional) – As in numpy.std
 Returns
numpy.ndarray – The output of numpy.std
 pygsti._model_mean(gs_func, gs_ensemble, axis=None, **kwargs)¶
Mean of gs_func over an ensemble of models.
 Parameters
gs_func (function) – A function that takes a Model as its first argument, and whose additional arguments may be given by keyword arguments.
gs_ensemble (list) – A list of Model objects.
axis (int or None, optional) – As in numpy.mean
 Returns
numpy.ndarray – The output of numpy.mean
 pygsti._to_mean_model(gs_list, target_gs)¶
Take the per-gate-element mean of a set of models.
Return the Model constructed from the mean parameter vector of the models in gs_list, that is, the mean of the parameter vectors of each model in gs_list.
 Parameters
gs_list (list) – A list of Model objects.
target_gs (Model) – A template model used to specify the parameterization of the returned Model.
 Returns
Model
 pygsti._to_std_model(gs_list, target_gs, ddof=1)¶
Take the per-gate-element standard deviation of a list of models.
Return the Model constructed from the standard-deviation parameter vector of the models in gs_list, that is, the standard deviation of the parameter vectors of each model in gs_list.
 Parameters
gs_list (list) – A list of Model objects.
target_gs (Model) – A template model used to specify the parameterization of the returned Model.
ddof (int, optional) – As in numpy.std
 Returns
Model
 pygsti._to_rms_model(gs_list, target_gs)¶
Take the per-gate-element RMS of a set of models.
Return the Model constructed from the root-mean-squared parameter vector of the models in gs_list, that is, the RMS of the parameter vectors of each model in gs_list.
 Parameters
gs_list (list) – A list of Model objects.
target_gs (Model) – A template model used to specify the parameterization of the returned Model.
 Returns
Model
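The three constructors above (mean, standard deviation, RMS) all reduce an ensemble of parameter vectors element-wise. A pure-Python sketch of that reduction, using a hypothetical helper name:

```python
# Pure-Python sketch of what the mean/std/RMS model constructors above
# compute from an ensemble's parameter vectors (hypothetical helper).
import math

def ensemble_stats(param_vecs, ddof=1):
    """Element-wise mean, standard deviation (with ddof) and RMS."""
    n, dim = len(param_vecs), len(param_vecs[0])
    mean = [sum(v[i] for v in param_vecs) / n for i in range(dim)]
    std = [math.sqrt(sum((v[i] - mean[i]) ** 2 for v in param_vecs)
                     / (n - ddof)) for i in range(dim)]
    rms = [math.sqrt(sum(v[i] ** 2 for v in param_vecs) / n)
           for i in range(dim)]
    return mean, std, rms

vecs = [[1.0, 0.0], [3.0, 0.0]]
mean, std, rms = ensemble_stats(vecs)
# mean[0] == 2.0; std[0] == sqrt(2) (ddof=1); rms[0] == sqrt(5)
```

In the actual functions, the resulting vector is loaded into a copy of target_gs via from_vector, which is why target_gs fixes the parameterization of the returned Model.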
 class pygsti._GSTAdvancedOptions(items=None)¶
Bases:
AdvancedOptions
Advanced options for GST driver functions.
 valid_keys¶
the valid (allowed) keys.
 Type
tuple
 valid_keys = ['always_perform_mle', 'bad_fit_threshold', 'circuit_weights', 'contract_start_to_cptp',...¶
 class pygsti._QubitProcessorSpec(num_qubits, gate_names, nonstd_gate_unitaries=None, availability=None, geometry=None, qubit_labels=None, nonstd_gate_symplecticreps=None, aux_info=None)¶
Bases:
ProcessorSpec
The device specification for a quantum computer with one or more qubits.
This object is geared towards multi-qubit devices; many of the contained structures are superfluous in the case of a single qubit.
 Parameters
num_qubits (int) – The number of qubits in the device.
gate_names (list of strings) –
The names of gates in the device. This may include standard gate names known by pyGSTi (see below) or names which appear in the nonstd_gate_unitaries argument. The set of standard gate names includes, but is not limited to:
’Gi’ : the 1-qubit idle operation
’Gx’,’Gy’,’Gz’ : 1-qubit pi/2 rotations
’Gxpi’,’Gypi’,’Gzpi’ : 1-qubit pi rotations
’Gh’ : Hadamard
’Gp’ : phase or S-gate (i.e., ((1,0),(0,i)))
’Gcphase’,’Gcnot’,’Gswap’ : standard 2-qubit gates
Alternative names can be used for all or any of these gates, but then they must be explicitly defined in the nonstd_gate_unitaries dictionary. Including any standard names in nonstd_gate_unitaries overrides the default (built-in) unitary with the one supplied.
nonstd_gate_unitaries (dictionary of numpy arrays) – A dictionary with keys that are gate names (strings) and values that are numpy arrays specifying quantum gates in terms of unitary matrices. This is an additional “lookup” database of unitaries; to add a gate to this QubitProcessorSpec its name still needs to appear in the gate_names list. This dictionary’s values specify additional (target) native gates that can be implemented in the device as unitaries acting on ordinary pure state vectors, in the standard computational basis. These unitaries need not, and often should not, be unitaries acting on all of the qubits. E.g., a CNOT gate is specified by a key that is the desired name for CNOT, and a value that is the standard 4 x 4 complex matrix for CNOT. All gate names must start with ‘G’. As an advanced behavior, a unitary-matrix-returning function which takes a single argument (a tuple of label arguments) may be given instead of a single matrix to create an operation factory which allows continuously-parameterized gates. This function must also return an empty/dummy unitary when None is given as its argument.
availability (dict, optional) – A dictionary whose keys are some subset of the keys (which are gate names) of nonstd_gate_unitaries and the strings (which are gate names) in gate_names, and whose values are lists of qubit-label tuples. Each qubit-label tuple must have length equal to the number of qubits the corresponding gate acts upon, and causes that gate to be available to act on the specified qubits. Instead of a list of tuples, values of availability may take the special values “all-permutations” and “all-combinations”, which, as their names imply, equate to all possible permutations and combinations of the appropriate number of qubit labels (determined by the gate’s dimension). If a gate name is not present in availability, the default is “all-permutations”. So, the availability of a gate only needs to be specified when it cannot act in every valid way on the qubits (e.g., the device does not have all-to-all connectivity).
geometry ({"line","ring","grid","torus"} or QubitGraph, optional) – The type of connectivity among the qubits, specifying a graph used to define neighbor relationships. Alternatively, a QubitGraph object with qubit_labels as the node labels may be passed directly. This argument is only used as a convenient way of specifying gate availability (edge connections are used for gates whose availability is unspecified by availability or whose value there is “all-edges”).
qubit_labels (list or tuple, optional) – The labels (integers or strings) of the qubits. If None, then the integers starting with zero are used.
nonstd_gate_symplecticreps (dict, optional) – A dictionary similar to nonstd_gate_unitaries that supplies, instead of a unitary matrix, the symplectic representation of a Clifford operation, given as a 2-tuple of numpy arrays.
aux_info (dict, optional) – Any additional information that should be attached to this processor spec.
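The special availability values can be expanded explicitly. This is an illustrative sketch with a hypothetical helper name, showing how “all-permutations” (ordered target tuples) differs from “all-combinations” (unordered):

```python
# Illustrative expansion of the special availability values described
# above (hypothetical helper; not pyGSTi's resolution code).
from itertools import combinations, permutations

def resolve_availability(value, qubit_labels, gate_nqubits):
    if value == 'all-permutations':   # every ordering of target qubits
        return tuple(permutations(qubit_labels, gate_nqubits))
    if value == 'all-combinations':   # unordered target-qubit sets
        return tuple(combinations(qubit_labels, gate_nqubits))
    return tuple(value)               # already an explicit tuple list

qubits = (0, 1, 2)
perms = resolve_availability('all-permutations', qubits, 2)  # 6 ordered pairs
combs = resolve_availability('all-combinations', qubits, 2)  # 3 unordered pairs
```

For a directional 2-qubit gate like CNOT, “all-permutations” (the default) makes both (0,1) and (1,0) available, whereas “all-combinations” would offer each pair only once.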
 _to_nice_serialization(self)¶
 classmethod _from_nice_serialization(cls, state)¶
 property num_qubits(self)¶
The number of qubits.
 property primitive_op_labels(self)¶
All the primitive operation labels derived from the gate names and availabilities
 gate_num_qubits(self, gate_name)¶
The number of qubits that a given gate acts upon.
 Parameters
gate_name (str) – The name of the gate.
 Returns
int
 resolved_availability(self, gate_name, tuple_or_function='auto')¶
The availability of a given gate, resolved as either a tuple of sslbl-tuples or a function.
This function does more than just access the availability attribute, as this may hold special values like “all-edges”. It takes the value of self.availability[gate_name] and resolves and converts it into the desired format: either a tuple of state space labels or a function with a single state-space-labels tuple argument.
 Parameters
gate_name (str) – The gate name to get the availability of.
tuple_or_function ({'tuple', 'function', 'auto'}) – The type of object to return. ‘tuple’ means a tuple of state space label tuples, e.g. ((0,1), (1,2)). ‘function’ means a function that takes a single state space label tuple argument and returns True or False to indicate whether the gate is available on the given target labels. If ‘auto’ is given, then either a tuple or function is returned, whichever is more computationally convenient.
 Returns
tuple or function
 _resolve_availability(self, avail_entry, gate_nqubits, tuple_or_function='auto')¶
 is_available(self, gate_label)¶
Check whether a gate at a given location is available.
 Parameters
gate_label (Label) – The gate name and target labels to check availability of.
 Returns
bool
 available_gatenames(self, sslbls)¶
List all the gate names that are available within a set of state space labels.
This function finds all the gate names that are available for at least a subset of sslbls.
 Parameters
sslbls (tuple) – The state space labels to find availability within.
 Returns
tuple of strings – A tuple of gate names (strings).
 available_gatelabels(self, gate_name, sslbls)¶
List all the gate labels that are available for gate_name on at least a subset of sslbls.
 Parameters
gate_name (str) – The gate name.
sslbls (tuple) – The state space labels to find availability within.
 Returns
tuple of Labels – The available gate labels (all with name gate_name).
 force_recompute_gate_relationships(self)¶
Invalidates LRU caches for all compute_* methods of this object, forcing them to recompute their values.
The compute_* methods of this processor spec compute various relationships and properties of its gates. These routines can be computationally intensive, and so their values are cached for performance. If the gates of a processor spec change and its compute_* methods are used, force_recompute_gate_relationships should be called.
 compute_clifford_symplectic_reps(self, gatename_filter=None)¶
Constructs a dictionary of the symplectic representations for all the Clifford gates in this processor spec.
 Parameters
gatename_filter (iterable, optional) – A list, tuple, or set of gate names whose symplectic representations should be returned (if they exist).
 Returns
dict – keys are gate names, values are (symplectic_matrix, phase_vector) tuples.
 compute_one_qubit_gate_relations(self)¶
Computes the basic pairwise relationships between the gates.
1. It multiplies all possible combinations of two 1-qubit gates together, from the full set of gates available on this device. If the two gates multiply to another 1-qubit gate from this set of gates, this is recorded in the dictionary self.oneQgate_relations. If the 1-qubit gate with name name1 followed by the 1-qubit gate with name name2 multiplies (up to phase) to the gate with name name3, then self.oneQgate_relations[name1, name2] = name3.
2. If the inverse of any 1-qubit gate is contained in the model, this is recorded in the dictionary self.gate_inverse.
 Returns
gate_relations (dict) – Keys are (gatename1, gatename2) and values are either the gate name of the product of the two gates or None, signifying the identity.
gate_inverses (dict) – Keys and values are gate names, mapping a gate name to its inverse gate (if one exists).
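The up-to-phase relation-finding described above can be sketched with plain 2x2 complex matrices. This toy version (not pyGSTi’s implementation) records which pair of gate names multiplies to which gate:

```python
# Toy version of the relation-finding described above: multiply pairs
# of 1-qubit gate unitaries and record which pair equals which gate up
# to a global phase (pure-Python 2x2 matrices; not pyGSTi's code).

def matmul2(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def equal_up_to_phase(a, b, tol=1e-9):
    """True if a == exp(i*phi) * b for some global phase phi."""
    for i in range(2):
        for j in range(2):
            if abs(b[i][j]) > tol:         # first nonzero entry of b
                phase = a[i][j] / b[i][j]  # candidate global phase
                if abs(abs(phase) - 1) > tol:
                    return False
                return all(abs(a[r][c] - phase * b[r][c]) < tol
                           for r in range(2) for c in range(2))
    return False

I2 = [[1, 0], [0, 1]]
X = [[0, 1], [1, 0]]
gates = {'Gi': I2, 'Gxpi': X}

relations = {}
for n1, u1 in gates.items():
    for n2, u2 in gates.items():
        prod = matmul2(u2, u1)  # gate n1 applied first, then n2
        for n3, u3 in gates.items():
            if equal_up_to_phase(prod, u3):
                relations[(n1, n2)] = n3
                break
# relations[('Gxpi', 'Gxpi')] == 'Gi'  (X followed by X is the idle)
```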
 compute_multiqubit_inversion_relations(self)¶
Computes the inverses of multi-qubit (>1 qubit) gates.
Finds whether any of the multi-qubit gates in this device also have their inverse in the model. That is, if the unitaries for the multi-qubit gate with name name1 followed by the multi-qubit gate (of the same dimension) with name name2 multiply (up to phase) to the identity, then gate_inverse[name1] = name2 and gate_inverse[name2] = name1.
1-qubit gates are not handled by this method, as they are computed by the method :method:`compute_one_qubit_gate_relations`.
 Returns
gate_inverse (dict) – Keys and values are gate names, mapping a gate name to its inverse gate (if one exists).
 compute_clifford_ops_on_qubits(self)¶
Constructs a dictionary mapping tuples of state space labels to the Clifford operations available on them.
 Returns
dict – A dictionary with keys that are state space label tuples and values that are lists of gate labels, giving the available Clifford gates on those target labels.
 compute_ops_on_qubits(self)¶
Constructs a dictionary mapping tuples of state space labels to the operations available on them.
 Returns
dict – A dictionary with keys that are state space label tuples and values that are lists of gate labels, giving the available gates on those target labels.
 compute_clifford_2Q_connectivity(self)¶
Constructs a graph encoding the connectivity between qubits via 2-qubit Clifford gates.
 Returns
QubitGraph – A graph with nodes equal to the qubit labels and edges present whenever there is a 2-qubit Clifford gate between the vertex qubits.
 compute_2Q_connectivity(self)¶
Constructs a graph encoding the connectivity between qubits via 2-qubit gates.
 Returns
QubitGraph – A graph with nodes equal to the qubit labels and edges present whenever there is a 2-qubit gate between the vertex qubits.
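The edge-extraction step behind these connectivity graphs can be sketched from an explicit availability dict. The data and helper here are hypothetical (the real method returns a QubitGraph):

```python
# Sketch of deriving 2-qubit connectivity edges from an explicit gate
# availability dict (hypothetical data; pyGSTi returns a QubitGraph).
def two_qubit_edges(availability):
    """availability: dict gate-name -> tuple of qubit-label tuples."""
    edges = set()
    for targets in availability.values():
        for t in targets:
            if len(t) == 2:                # only 2-qubit gates add edges
                edges.add(frozenset(t))    # undirected: (0,1) == (1,0)
    return edges

avail = {'Gcnot': ((0, 1), (1, 2)), 'Gxpi': ((0,), (1,), (2,))}
edges = two_qubit_edges(avail)  # a 3-qubit line: {0,1} and {1,2}
```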
 subset(self, gate_names_to_include='all', qubit_labels_to_keep='all')¶
Construct a smaller processor specification by keeping only a select set of gates from this processor spec.
 Parameters
gate_names_to_include (list or tuple or set) – The gate names that should be included in the returned processor spec.
qubit_labels_to_keep (list or tuple or set) – The qubit labels that should be kept in the returned processor spec.
 Returns
QubitProcessorSpec
 map_qubit_labels(self, mapper)¶
Creates a new QubitProcessorSpec whose qubit labels are updated according to the mapping function mapper.
 Parameters
mapper (dict or function) – A dictionary whose keys are the existing self.qubit_labels values and whose values are the new labels, or a function which takes a single (existing qubit-label) argument and returns a new qubit label.
 Returns
QubitProcessorSpec
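The dict-or-function mapper convention can be shown in miniature (hypothetical helper, not pyGSTi’s internals):

```python
# The mapper may be a dict or a function; a minimal sketch of how the
# two forms are accepted interchangeably (hypothetical helper).
def map_labels(labels, mapper):
    get = mapper if callable(mapper) else mapper.__getitem__
    return tuple(get(lbl) for lbl in labels)

assert map_labels((0, 1), {0: 'Q0', 1: 'Q1'}) == ('Q0', 'Q1')
assert map_labels((0, 1), lambda q: q + 10) == (10, 11)
```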
 property idle_gate_names(self)¶
The gate names that correspond to idle operations.
 property global_idle_gate_name(self)¶
The (first) gate name that corresponds to a global idle operation.
 property global_idle_layer_label(self)¶
Similar to global_idle_gate_name but includes the appropriate sslbls (either None or all the qubits).
 class pygsti._Model(state_space)¶
Bases:
pygsti.baseobjs.nicelyserializable.NicelySerializable
A predictive model for a Quantum Information Processor (QIP).
The main function of a Model object is to compute the outcome probabilities of Circuit objects based on the action of the model’s ideal operations plus (potentially) noise which makes the outcome probabilities deviate from the perfect ones.
 Parameters
state_space (StateSpace) – The state space of this model.
 _to_nice_serialization(self)¶
 property state_space(self)¶
State space labels
 Returns
StateSpaceLabels
 property hyperparams(self)¶
Dictionary of hyperparameters associated with this model
 Returns
dict
 property num_params(self)¶
The number of free parameters when vectorizing this model.
 Returns
int – the number of model parameters.
 property num_modeltest_params(self)¶
The parameter count to use when testing this model against data.
Oftentimes this is the same as :method:`num_params`, but there are times when it can be convenient or necessary to use a parameter count different than the actual number of parameters in this model.
 Returns
int – the number of model parameters.
 property parameter_bounds(self)¶
Upper and lower bounds on the values of each parameter, utilized by optimization routines
 set_parameter_bounds(self, index, lower_bound=-_np.inf, upper_bound=_np.inf)¶
Set the bounds for a single model parameter.
These limit the values the parameter can have during an optimization of the model.
 Parameters
index (int) – The index of the parameter whose bounds should be set.
lower_bound (float, optional) – The lower bound for the parameter. Can be set to the special -numpy.inf value to effectively have no bound.
upper_bound (float, optional) – The upper bound for the parameter. Can be set to the special numpy.inf value to effectively have no bound.
 Returns
None
 property parameter_labels(self)¶
A list of labels, usually of the form (op_label, string_description) describing this model’s parameters.
 property parameter_labels_pretty(self)¶
The list of parameter labels but formatted in a nice way.
In particular, tuples where the first element is an op label are made into a single string beginning with the string representation of the operation.
 set_parameter_label(self, index, label)¶
Set the label of a single model parameter.
 Parameters
index (int) – The index of the parameter whose label should be set.
label (object) – An object that serves to label this parameter. Often a string.
 Returns
None
 to_vector(self)¶
Returns the model vectorized according to the optional parameters.
 Returns
numpy array – The vectorized model parameters.
 from_vector(self, v, close=False)¶
Sets this Model’s operations based on parameter values v.
 Parameters
v (numpy.ndarray) – A vector of parameters, with length equal to self.num_params.
close (bool, optional) – Set to True if v is close to the current parameter vector. This can make some operations more efficient.
 Returns
None
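The to_vector / from_vector pair defines the vectorization contract described above: every adjustable number in the model lives in one flat vector of length num_params. A minimal toy sketch (ToyModel is a hypothetical stand-in, not pyGSTi’s Model):

```python
# Minimal sketch of the to_vector / from_vector contract: all of a
# model's adjustable numbers live in one flat parameter vector.
# ToyModel is a hypothetical stand-in, not pyGSTi's Model class.
class ToyModel:
    def __init__(self, params):
        self._params = list(params)

    @property
    def num_params(self):
        return len(self._params)

    def to_vector(self):
        return list(self._params)           # copy of the parameter vector

    def from_vector(self, v, close=False):  # `close` is only a hint
        assert len(v) == self.num_params
        self._params = list(v)

m = ToyModel([0.1, 0.2, 0.3])
v = m.to_vector()
v[0] = 0.5          # tweak one parameter...
m.from_vector(v)    # ...and push the vector back into the model
```

This round trip is what optimizers rely on: they only ever see the flat vector, and from_vector maps it back onto the model’s operations.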
 abstract probabilities(self, circuit, clip_to=None)¶
Construct a dictionary containing the outcome probabilities of circuit.
 Parameters
circuit (Circuit or tuple of operation labels) – The sequence of operation labels specifying the circuit.
clip_to (2-tuple, optional) – (min,max) to clip probabilities to if not None.
 Returns
probs (dictionary) – A dictionary such that probs[SL] = pr(SL,circuit,clip_to) for each spam label (string) SL.
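The outcome-probability dictionary can be illustrated with a pure-Python Born-rule computation for one qubit. The helper names here are hypothetical stand-ins for the abstract method:

```python
# Illustrative Born-rule computation of a 1-qubit circuit's outcome
# probabilities (a pure-Python stand-in for Model.probabilities; the
# helper names are hypothetical).
import math

def apply_gate(u, state):
    return [sum(u[i][j] * state[j] for j in range(2)) for i in range(2)]

def probabilities(circuit, gates, clip_to=None):
    state = [1.0, 0.0]                      # |0> state preparation
    for label in circuit:
        state = apply_gate(gates[label], state)
    probs = {('0',): abs(state[0]) ** 2,    # Born rule
             ('1',): abs(state[1]) ** 2}
    if clip_to is not None:                 # optional (min, max) clipping
        lo, hi = clip_to
        probs = {k: min(max(p, lo), hi) for k, p in probs.items()}
    return probs

s = 1 / math.sqrt(2)
H = [[s, s], [s, -s]]                       # Hadamard ('Gh')
p = probabilities(('Gh',), {'Gh': H})       # both outcomes ~ 0.5
```

Note the outcome keys are tuples of labels, mirroring the probs[SL] convention above.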
 abstract bulk_probabilities(self, circuits, clip_to=None, comm=None, mem_limit=None, smartc=None)¶
Construct a dictionary containing the probabilities for an entire list of circuits.
 Parameters
circuits ((list of Circuits) or CircuitOutcomeProbabilityArrayLayout) – When a list, each element specifies a circuit to compute outcome probabilities for. A CircuitOutcomeProbabilityArrayLayout specifies the circuits along with an internal memory layout that reduces the time required by this function and can restrict the computed probabilities to those corresponding to only certain outcomes.
clip_to (2-tuple, optional) – (min,max) to clip return value if not None.
comm (mpi4py.MPI.Comm, optional) – When not None, an MPI communicator for distributing the computation across multiple processors. Distribution is performed over subtrees of evalTree (if it is split).
mem_limit (int, optional) – A rough memory limit in bytes which is used to determine processor allocation.
smartc (SmartCache, optional) – A cache object to cache & use previously cached values inside this function.
 Returns
probs (dictionary) – A dictionary such that probs[opstr] is an ordered dictionary of (outcome, p) tuples, where outcome is a tuple of labels and p is the corresponding probability.
 _init_copy(self, copy_into, memo)¶
Copies any “tricky” member of this model into copy_into, before deep copying everything else within a .copy() operation.
 _post_copy(self, copy_into, memo)¶
Called after all other copying is done, to perform “linking” between the new model (copy_into) and its members.
 copy(self)¶
Copy this model.
 Returns
Model – a (deep) copy of this model.
 __str__(self)¶
Return str(self).
 __hash__(self)¶
Return hash(self).
 circuit_outcomes(self, circuit)¶
Get all the possible outcome labels produced by simulating this circuit.
 Parameters
circuit (Circuit) – Circuit to get outcomes of.
 Returns
tuple
 pygsti._create_explicit_model(processor_spec, modelnoise, custom_gates=None, evotype='default', simulator='auto', ideal_gate_type='auto', ideal_prep_type='auto', ideal_povm_type='auto', embed_gates=False, basis='pp')¶
 pygsti.ROBUST_SUFFIX_LIST = ['.robust', '.Robust', '.robust+', '.Robust+']¶
 pygsti.DEFAULT_BAD_FIT_THRESHOLD = 2.0¶
 pygsti.run_model_test(model_filename_or_object, data_filename_or_set, processorspec_filename_or_object, prep_fiducial_list_or_filename, meas_fiducial_list_or_filename, germs_list_or_filename, max_lengths, gauge_opt_params=None, advanced_options=None, comm=None, mem_limit=None, output_pkl=None, verbosity=2)¶
Compares a Model’s predictions to a DataSet using GST-like circuits.
This routine tests a Model against a DataSet using a specific set of structured, GST-like circuits (given by fiducials, max_lengths and germs). In particular, circuits are constructed by repeating germ strings an integer number of times such that the length of the repeated germ is less than or equal to the maximum length set in max_lengths. Each string thus constructed is sandwiched between all pairs of (preparation, measurement) fiducial sequences.
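The circuit construction just described can be sketched with circuits as plain tuples of gate labels. This toy helper is hypothetical (it uses the “whole germ powers” truncation scheme), not pyGSTi’s circuit-construction code:

```python
# Toy sketch of the GST-like circuit construction described above, using
# the "whole germ powers" truncation scheme (hypothetical helper, with
# circuits represented as plain tuples of gate labels).
def gst_circuits(prep_fids, meas_fids, germs, max_lengths):
    circuits = []
    for L in max_lengths:
        for germ in germs:
            reps = L // len(germ)      # whole germ powers, length <= L
            power = germ * reps        # repeated germ
            for f1 in prep_fids:
                for f2 in meas_fids:   # sandwich between fiducial pairs
                    circuits.append(f1 + power + f2)
    return circuits

circs = gst_circuits(prep_fids=[('Gx',)], meas_fids=[('Gy',)],
                     germs=[('Gx', 'Gy')], max_lengths=[1, 2, 4])
# e.g. for L=4 the germ appears squared: ('Gx','Gx','Gy','Gx','Gy','Gy')
```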
model_filename_or_object is used directly (without any optimization) as the model estimate at each maximum-length “iteration”. The model is given a trivial default_gauge_group so that it is not altered during any gauge optimization step.
A ModelEstimateResults object is returned, which encapsulates the model estimate and related parameters, and can be used with report-generation routines.
 Parameters
model_filename_or_object (Model or string) – The model to test, specified either directly or by the filename of a model file (text format).
data_filename_or_set (DataSet or string) – The data set object to use for the analysis, specified either directly or by the filename of a dataset file (assumed to be a pickled DataSet if extension is ‘pkl’ otherwise assumed to be in pyGSTi’s text format).
processorspec_filename_or_object (ProcessorSpec or string) – A specification of the processor this model test is to be run on, given either directly or by the filename of a processor-spec file (text format). The processor specification contains basic interface-level information about the processor being tested, e.g., its state space and available gates.
prep_fiducial_list_or_filename ((list of Circuits) or string) – The state preparation fiducial circuits, specified either directly or by the filename of a circuit list file (text format).
meas_fiducial_list_or_filename ((list of Circuits) or string or None) – The measurement fiducial circuits, specified either directly or by the filename of a circuit list file (text format). If None, then use the same strings as specified by prep_fiducial_list_or_filename.
germs_list_or_filename ((list of Circuits) or string) – The germ circuits, specified either directly or by the filename of a circuit list file (text format).
max_lengths (list of ints) – List of integers, one per LSGST iteration, which set truncation lengths for repeated germ strings. The list of circuits for the i-th LSGST iteration includes the repeated germs truncated to the L-values up to and including the i-th one.
gauge_opt_params (dict, optional) – A dictionary of arguments to gaugeopt_to_target(), specifying how the final gauge optimization should be performed. The keys and values of this dictionary may correspond to any of the arguments of gaugeopt_to_target() except for the first model argument, which is specified internally. The target_model argument can be set, but is specified internally when it isn’t. If None, then the dictionary {‘item_weights’: {‘gates’:1.0, ‘spam’:0.001}} is used. If False, then no gauge optimization is performed.
advanced_options (dict, optional) – Specifies advanced options, most of which deal with numerical details of the objective function or expert-level functionality.
comm (mpi4py.MPI.Comm, optional) – When not None, an MPI communicator for distributing the computation across multiple processors.
mem_limit (int or None, optional) – A rough memory limit in bytes which restricts the amount of memory used (per core when run on multi-CPUs).
output_pkl (str or file, optional) – If not None, a file(name) to pickle.dump the returned Results object to (only the rank 0 process performs the dump when comm is not None).
verbosity (int, optional) – The ‘verbosity’ option is an integer specifying the level of detail printed to stdout during the calculation.
 Returns
Results
 pygsti.run_linear_gst(data_filename_or_set, processorspec_filename_or_object, prep_fiducial_list_or_filename, meas_fiducial_list_or_filename, gauge_opt_params=None, advanced_options=None, comm=None, mem_limit=None, output_pkl=None, verbosity=2)¶
Perform Linear Gate Set Tomography (LGST).
This function differs from the lower-level :function:`run_lgst` function in that it may perform a post-LGST gauge optimization and this routine returns a Results object containing the LGST estimate.
Overall, this is a high-level driver routine which can be used similarly to :function:`run_long_sequence_gst`, whereas run_lgst is a low-level routine used when building your own algorithms.
 Parameters
data_filename_or_set (DataSet or string) – The data set object to use for the analysis, specified either directly or by the filename of a dataset file (assumed to be a pickled DataSet if extension is ‘pkl’ otherwise assumed to be in pyGSTi’s text format).
processorspec_filename_or_object (ProcessorSpec or string) – A specification of the processor that LGST is to be run on, given either directly or by the filename of a processor-spec file (text format). The processor specification contains basic interface-level information about the processor being tested, e.g., its state space and available gates.
prep_fiducial_list_or_filename ((list of Circuits) or string) – The state preparation fiducial circuits, specified either directly or by the filename of a circuit list file (text format).
meas_fiducial_list_or_filename ((list of Circuits) or string or None) – The measurement fiducial circuits, specified either directly or by the filename of a circuit list file (text format). If None, then use the same strings as specified by prep_fiducial_list_or_filename.
gauge_opt_params (dict, optional) – A dictionary of arguments to gaugeopt_to_target(), specifying how the final gauge optimization should be performed. The keys and values of this dictionary may correspond to any of the arguments of gaugeopt_to_target() except for the first model argument, which is specified internally. The target_model argument can be set, but is specified internally when it isn’t. If None, then the dictionary {‘item_weights’: {‘gates’:1.0, ‘spam’:0.001}} is used. If False, then no gauge optimization is performed.
advanced_options (dict, optional) – Specifies advanced options, most of which deal with numerical details of the objective function or expert-level functionality. See :function:`run_long_sequence_gst`.
comm (mpi4py.MPI.Comm, optional) – When not None, an MPI communicator for distributing the computation across multiple processors. In this LGST case, this is just the gauge optimization.
mem_limit (int or None, optional) – A rough memory limit in bytes which restricts the amount of memory used (per core when run on multi-CPUs).
output_pkl (str or file, optional) – If not None, a file(name) to pickle.dump the returned Results object to (only the rank 0 process performs the dump when comm is not None).
verbosity (int, optional) – The ‘verbosity’ option is an integer specifying the level of detail printed to stdout during the calculation.
 Returns
Results
 pygsti.run_long_sequence_gst(data_filename_or_set, target_model_filename_or_object, prep_fiducial_list_or_filename, meas_fiducial_list_or_filename, germs_list_or_filename, max_lengths, gauge_opt_params=None, advanced_options=None, comm=None, mem_limit=None, output_pkl=None, verbosity=2)¶
Perform longsequence GST (LSGST).
This analysis fits a model (target_model_filename_or_object) to data (data_filename_or_set) using the outcomes from periodic GST circuits constructed by repeating germ strings an integer number of times such that the length of the repeated germ is less than or equal to the maximum length set in max_lengths. When LGST is applicable (i.e. for explicit models with full or TP parameterizations), the LGST estimate of the gates is computed, gauge optimized, and used as a starting seed for the remaining optimizations.
LSGST iterates len(max_lengths) times, optimizing the chi2 using successively larger sets of circuits. On the i-th iteration, the repeated germ sequences limited by max_lengths[i] are included in the growing set of circuits used by LSGST. The final iteration maximizes the log-likelihood.
Once computed, the model estimates are optionally gauge optimized as directed by gauge_opt_params. A ModelEstimateResults object is returned, which encapsulates the inputs and outputs of this GST analysis, and can generate final end-user output such as reports and presentations.
 Parameters
data_filename_or_set (DataSet or string) – The data set object to use for the analysis, specified either directly or by the filename of a dataset file (assumed to be a pickled DataSet if extension is ‘pkl’ otherwise assumed to be in pyGSTi’s text format).
target_model_filename_or_object (Model or string) – The target model, specified either directly or by the filename of a model file (text format).
prep_fiducial_list_or_filename ((list of Circuits) or string) – The state preparation fiducial circuits, specified either directly or by the filename of a circuit list file (text format).
meas_fiducial_list_or_filename ((list of Circuits) or string or None) – The measurement fiducial circuits, specified either directly or by the filename of a circuit list file (text format). If None, then use the same strings as specified by prep_fiducial_list_or_filename.
germs_list_or_filename ((list of Circuits) or string) – The germ circuits, specified either directly or by the filename of a circuit list file (text format).
max_lengths (list of ints) – List of integers, one per LSGST iteration, which set truncation lengths for repeated germ strings. The list of circuits for the i-th LSGST iteration includes the repeated germs truncated to the L-values up to and including the i-th one.
gauge_opt_params (dict, optional) – A dictionary of arguments to gaugeopt_to_target(), specifying how the final gauge optimization should be performed. The keys and values of this dictionary may correspond to any of the arguments of gaugeopt_to_target() except for the first model argument, which is specified internally. The target_model argument can be set, but is specified internally when it isn’t. If None, then the dictionary {‘item_weights’: {‘gates’:1.0, ‘spam’:0.001}} is used. If False, then no gauge optimization is performed.
advanced_options (dict, optional) –
Specifies advanced options, most of which deal with numerical details of the objective function or expert-level functionality. The allowed keys and values include:
objective = {‘chi2’, ‘logl’}
op_labels = list of strings
circuit_weights = dict or None
starting_point = “LGST-if-possible” (default), “LGST”, or “target”
depolarize_start = float (default == 0)
randomize_start = float (default == 0)
contract_start_to_cptp = True / False (default)
cptpPenaltyFactor = float (default = 0)
tolerance = float or dict w/ ‘relx’, ‘relf’, ‘f’, ‘jac’, ‘maxdx’ keys
max_iterations = int
finitediff_iterations = int
min_prob_clip = float
min_prob_clip_for_weighting = float (default == 1e-4)
prob_clip_interval = tuple (default == (-1e6, 1e6))
radius = float (default == 1e-4)
use_freq_weighted_chi2 = True / False (default)
XX nested_circuit_lists = True (default) / False
XX include_lgst = True / False (default is True)
distribute_method = “default”, “circuits” or “deriv”
profile = int (default == 1)
check = True / False (default)
XX op_label_aliases = dict (default = None)
always_perform_mle = bool (default = False)
only_perform_mle = bool (default = False)
XX truncScheme = “whole germ powers” (default), “truncated germ powers”, or “length as exponent”
appendTo = Results (default = None)
estimateLabel = str (default = “default”)
XX missingDataAction = {‘drop’, ‘raise’} (default = ‘drop’)
XX string_manipulation_rules = list of (find, replace) tuples
germ_length_limits = dict of form {germ: maxlength}
record_output = bool (default = True)
timeDependent = bool (default = False)
comm (mpi4py.MPI.Comm, optional) – When not
None
, an MPI communicator for distributing the computation across multiple processors.
mem_limit (int or None, optional) – A rough memory limit in bytes which restricts the amount of memory used (per core when run on multiple CPUs).
output_pkl (str or file, optional) – If not None, a file(name) to pickle.dump the returned Results object to (only the rank 0 process performs the dump when comm is not None).
verbosity (int, optional) –
The ‘verbosity’ option is an integer specifying the level of detail printed to stdout during the calculation:
0 – prints nothing
1 – shows progress bar for entire iterative GST
2 – shows summary details about each individual iteration
3 – also shows outer iterations of LM algorithm
4 – also shows inner iterations of LM algorithm
5 – also shows detailed info from within jacobian and objective function calls.
 Returns
Results
 pygsti.run_long_sequence_gst_base(data_filename_or_set, target_model_filename_or_object, lsgst_lists, gauge_opt_params=None, advanced_options=None, comm=None, mem_limit=None, output_pkl=None, verbosity=2)¶
A more fundamental interface for performing end-to-end GST.
Similar to
run_long_sequence_gst()
except this function takes lsgst_lists, a list of either raw circuit lists or of PlaquetteGridCircuitStructure
objects to define which circuits are used on each GST iteration.
Parameters
data_filename_or_set (DataSet or string) – The data set object to use for the analysis, specified either directly or by the filename of a dataset file (assumed to be a pickled DataSet if extension is ‘pkl’ otherwise assumed to be in pyGSTi’s text format).
target_model_filename_or_object (Model or string) – The target model, specified either directly or by the filename of a model file (text format).
lsgst_lists (list of lists or PlaquetteGridCircuitStructure(s)) – An explicit list of either the raw circuit lists to be used in the analysis or of
PlaquetteGridCircuitStructure
objects, which additionally contain the structure of a set of circuits. A single PlaquetteGridCircuitStructure object can also be given, which is equivalent to passing a list of successive L-value truncations of this object (e.g. if the object has Ls = [1,2,4] then this is like passing a list of three PlaquetteGridCircuitStructure objects w/ truncations [1], [1,2], and [1,2,4]).
gauge_opt_params (dict, optional) – A dictionary of arguments to
gaugeopt_to_target()
, specifying how the final gauge optimization should be performed. The keys and values of this dictionary may correspond to any of the arguments of gaugeopt_to_target()
except for the first model argument, which is specified internally. The target_model argument can be set, but is specified internally when it isn’t. If None, then the dictionary {‘item_weights’: {‘gates’:1.0, ‘spam’:0.001}} is used. If False, then no gauge optimization is performed.
advanced_options (dict, optional) – Specifies advanced options, most of which deal with numerical details of the objective function or expert-level functionality. See
run_long_sequence_gst()
for a list of the allowed keys, with the exception of “nested_circuit_lists”, “op_label_aliases”, “include_lgst”, and “truncScheme”.
comm (mpi4py.MPI.Comm, optional) – When not
None
, an MPI communicator for distributing the computation across multiple processors.
mem_limit (int or None, optional) – A rough memory limit in bytes which restricts the amount of memory used (per core when run on multiple CPUs).
output_pkl (str or file, optional) – If not None, a file(name) to pickle.dump the returned Results object to (only the rank 0 process performs the dump when comm is not None).
verbosity (int, optional) –
The ‘verbosity’ option is an integer specifying the level of detail printed to stdout during the calculation:
0 – prints nothing
1 – shows progress bar for entire iterative GST
2 – shows summary details about each individual iteration
3 – also shows outer iterations of LM algorithm
4 – also shows inner iterations of LM algorithm
5 – also shows detailed info from within jacobian and objective function calls.
 Returns
Results
 pygsti.run_stdpractice_gst(data_filename_or_set, processorspec_filename_or_object, prep_fiducial_list_or_filename, meas_fiducial_list_or_filename, germs_list_or_filename, max_lengths, modes='full TP,CPTP,Target', gaugeopt_suite='stdgaugeopt', gaugeopt_target=None, models_to_test=None, comm=None, mem_limit=None, advanced_options=None, output_pkl=None, verbosity=2)¶
Perform end-to-end GST analysis using standard practices.
This routine is an even higher-level driver than
run_long_sequence_gst()
. It performs bottled, typically useful, runs of long-sequence GST on a dataset. This essentially boils down to running run_long_sequence_gst()
one or more times using different model parameterizations, and performing commonly useful gauge optimizations, based only on the high-level modes argument.
Parameters
data_filename_or_set (DataSet or string) – The data set object to use for the analysis, specified either directly or by the filename of a dataset file (assumed to be a pickled DataSet if extension is ‘pkl’ otherwise assumed to be in pyGSTi’s text format).
processorspec_filename_or_object (ProcessorSpec or string) – A specification of the processor that GST is to be run on, given either directly or by the filename of a processor-spec file (text format). The processor specification contains basic interface-level information about the processor being tested, e.g., its state space and available gates.
prep_fiducial_list_or_filename ((list of Circuits) or string) – The state preparation fiducial circuits, specified either directly or by the filename of a circuit list file (text format).
meas_fiducial_list_or_filename ((list of Circuits) or string or None) – The measurement fiducial circuits, specified either directly or by the filename of a circuit list file (text format). If
None
, then use the same strings as specified by prep_fiducial_list_or_filename.
germs_list_or_filename ((list of Circuits) or string) – The germ circuits, specified either directly or by the filename of a circuit list file (text format).
max_lengths (list of ints) – List of integers, one per LSGST iteration, which set truncation lengths for repeated germ strings. The list of circuits for the i-th LSGST iteration includes the repeated germs truncated to the L-values up to and including the i-th one.
modes (str, optional) –
A comma-separated list of modes which dictate what types of analyses are performed. Currently, these correspond to different types of parameterizations/constraints to apply to the estimated model. The default value is usually fine. Allowed values are:
“full” : full (completely unconstrained)
“TP” : TP-constrained
“CPTP” : Lindbladian CPTP-constrained
“H+S” : Only Hamiltonian + Stochastic errors allowed (CPTP)
“S” : Only Stochastic errors allowed (CPTP)
“Target” : use the target (ideal) gates as the estimate
<model> : any key in the models_to_test argument
gaugeopt_suite (str or list or dict, optional) –
Specifies which gauge optimizations to perform on each estimate. A string or list of strings (see below) specifies built-in sets of gauge optimizations, otherwise gaugeopt_suite should be a dictionary of gauge-optimization parameter dictionaries, as specified by the gauge_opt_params argument of
run_long_sequence_gst()
. The key names of gaugeopt_suite then label the gauge optimizations within the resulting Estimate objects. The built-in suites are:
“single” : performs only a single “best guess” gauge optimization.
“varySpam” : varies spam weight and toggles SPAM penalty (0 or 1).
“varySpamWt” : varies spam weight but no SPAM penalty.
“varyValidSpamWt” : varies spam weight with SPAM penalty == 1.
“toggleValidSpam” : toggles SPAM penalty (0 or 1); fixed spam weight.
“unreliable2Q” : adds branch to a spam suite that weights 2Q gates less
“none” : no gauge optimizations are performed.
gaugeopt_target (Model, optional) – If not None, a model to be used as the “target” for gauge optimization (only). This argument is useful when you want to gauge optimize toward something other than the ideal target gates given by target_model_filename_or_object, which are used as the default when gaugeopt_target is None.
models_to_test (dict, optional) – A dictionary of Model objects representing (gateset) models to test against the data. These Models are essentially hypotheses for which (if any) model generated the data. The keys of this dictionary can (and must, to actually test the models) be used within the comma-separated list given by the modes argument.
comm (mpi4py.MPI.Comm, optional) – When not
None
, an MPI communicator for distributing the computation across multiple processors.
mem_limit (int or None, optional) – A rough memory limit in bytes which restricts the amount of memory used (per core when run on multiple CPUs).
advanced_options (dict, optional) – Specifies advanced options most of which deal with numerical details of the objective function or expertlevel functionality. See
run_long_sequence_gst()
for a list of the allowed keys for each such dictionary.
output_pkl (str or file, optional) – If not None, a file(name) to pickle.dump the returned Results object to (only the rank 0 process performs the dump when comm is not None).
verbosity (int, optional) – The ‘verbosity’ option is an integer specifying the level of detail printed to stdout during the calculation.
 Returns
Results
 pygsti._load_model(model_filename_or_object)¶
 pygsti._load_dataset(data_filename_or_set, comm, verbosity)¶
Loads a DataSet from the data_filename_or_set argument of functions in this module.
 pygsti._update_objfn_builders(builders, advanced_options)¶
 pygsti._get_badfit_options(advanced_options)¶
 pygsti._output_to_pickle(obj, output_pkl, comm)¶
 pygsti._get_gst_initial_model(target_model, advanced_options)¶
 pygsti._get_gst_builders(advanced_options)¶
 pygsti._get_optimizer(advanced_options, model_being_optimized)¶
 pygsti.parallel_apply(f, l, comm)¶
Apply a function f to every element of a list l in parallel, using MPI.
 Parameters
f (function) – function of an item in the list l
l (list) – list of items as arguments to f
comm (MPI Comm) – MPI communicator object for organizing parallel programs
 Returns
results (list) – list of items after f has been applied
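Conceptually, parallel_apply maps f over l, with the MPI communicator deciding which rank evaluates which elements before the results are gathered back in list order. A rough single-machine analogue using only the standard library (no MPI; parallel_apply_local is an illustrative name, not pyGSTi API):

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_apply_local(f, l, max_workers=4):
    """Apply f to every element of l concurrently, preserving list order.
    Thread-based stand-in for the MPI-distributed pygsti.parallel_apply."""
    with ThreadPoolExecutor(max_workers=max_workers) as executor:
        return list(executor.map(f, l))
```

As with the MPI version, f should be free of side effects so that evaluation order is irrelevant.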
 pygsti.mpi4py_comm()¶
Get a comm object
 Returns
MPI.Comm – Comm object to be passed down to parallel pygsti routines
 pygsti.starmap_with_kwargs(fn, num_runs, num_processors, args_list, kwargs_list)¶
 class pygsti.NamedDict(keyname=None, keytype=None, valname=None, valtype=None, items=())¶
Bases:
dict
, pygsti.baseobjs.nicelyserializable.NicelySerializable
A dictionary that also holds category names and types.
This dict-derived class holds a category name applicable to its keys, and key and value type names indicating the types of its keys and values.
The main purpose of this class is to utilize its :method:`to_dataframe` method.
 Parameters
keyname (str, optional) – A category name for the keys of this dict. For example, if the dict contained the keys “dog” and “cat”, this might be “animals”. This becomes a column header if this dict is converted to a data frame.
keytype ({"float", "int", "category", None}, optional) – The keytype, in correspondence with different pandas series types.
valname (str, optional) – A category name for the values of this dict. This becomes a column header if this dict is converted to a data frame.
valtype ({"float", "int", "category", None}, optional) – The valuetype, in correspondence with different pandas series types.
items (list or dict, optional) – Initial items, used in serialization.
 classmethod create_nested(cls, key_val_type_list, inner)¶
Creates a nested NamedDict.
 Parameters
key_val_type_list (list) – A list of (key, value, type) tuples, one per nesting layer.
inner (various) – The value that will be set to the innermost nested dictionary’s value, supplying any additional layers of nesting (if inner is a NamedDict) or the value contained in all of the nested layers.
 __reduce__(self)¶
Helper for pickle.
 _to_nice_serialization(self)¶
 classmethod _from_nice_serialization(cls, state)¶
 to_dataframe(self)¶
Render this dict as a pandas data frame.
 Returns
pandas.DataFrame
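The data-frame conversion treats each nesting level’s keyname as a column and each leaf value as a row entry. A stripped-down sketch of that flattening step using plain dicts and no pandas (flatten_named is a hypothetical helper, not pyGSTi API):

```python
def flatten_named(d, keynames):
    """Flatten nested dicts whose layers are labeled by keynames into flat
    rows, one per leaf -- roughly the row-building step behind
    NamedDict.to_dataframe, minus the pandas DataFrame construction.
    Assumes keynames covers the full nesting depth of d."""
    name = keynames[0]
    rows = []
    for key, val in d.items():
        if isinstance(val, dict):
            # Recurse: deeper layers contribute additional columns.
            for row in flatten_named(val, keynames[1:]):
                rows.append({name: key, **row})
        else:
            rows.append({name: key, "value": val})
    return rows
```

For example, flatten_named({'dog': {'weight': 10.0}}, ['animal', 'property']) yields a single row with 'animal', 'property', and 'value' columns.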
 _add_to_columns(self, columns, seriestypes, row_prefix)¶
 class pygsti.TypedDict(types=None, items=())¶
Bases:
dict
A dictionary that holds perkey type information.
This type of dict is used for the “leaves” in a tree of nested
NamedDict
objects, specifying a collection of data of different types pertaining to some set of category labels (the index-path of the named dictionaries). When converted to a data frame, each key specifies a different column and values contribute the values of a single data frame row. Columns will be series of the held data types.
 Parameters
types (dict, optional) – Keys are the keys that can appear in this dictionary, and values are valid data frame type strings, e.g. “int”, “float”, or “category”, that specify the type of each value.
items (dict or list) – Initial data, used for serialization.
 __reduce__(self)¶
Helper for pickle.
 as_dataframe(self)¶
Render this dict as a pandas data frame.
 Returns
pandas.DataFrame
 _add_to_columns(self, columns, seriestypes, row_prefix)¶
 pygsti._basis_constructor_dict¶
 pygsti.basis_matrices(name_or_basis, dim, sparse=False)¶
Get the elements of the specified basis-type which span the density-matrix space given by dim.
 Parameters
name_or_basis ({'std', 'gm', 'pp', 'qt'} or Basis) – The basis type. Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt). If a Basis object, then the basis matrices are contained therein, and its dimension is checked to match dim.
dim (int) – The dimension of the density-matrix space.
sparse (bool, optional) – Whether any built matrices should be SciPy CSR sparse matrices or dense numpy arrays (the default).
 Returns
list – A list of N numpy arrays each of shape (dmDim, dmDim), where dmDim is the matrix-dimension of the overall “embedding” density matrix (the sum of dim_or_block_dims) and N is the dimension of the density-matrix space, equal to sum( block_dim_i^2 ).
 pygsti.basis_longname(basis)¶
Get the “long name” for a particular basis, which is typically used in reports, etc.
 Parameters
basis (Basis or str) – The basis or standard basis name.
 Returns
string
 pygsti.basis_element_labels(basis, dim)¶
Get a list of short labels corresponding to the elements of the described basis.
These labels are typically used to label the rows/columns of a boxplot of a matrix in the basis.
 Parameters
basis ({'std', 'gm', 'pp', 'qt'}) – Which basis the model is represented in. Allowed options are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp) and Qutrit (qt). If the basis is not known, then an empty list is returned.
dim (int or list) – Dimension of basis matrices. If a list of integers, then gives the dimensions of the terms in a directsum decomposition of the density matrix space acted on by the basis.
 Returns
list of strings – A list of length dim, whose elements label the basis elements.
 pygsti.is_sparse_basis(name_or_basis)¶
Whether a basis contains sparse matrices.
 Parameters
name_or_basis (Basis or str) – The basis or standard basis name.
 Returns
bool
 pygsti.change_basis(mx, from_basis, to_basis)¶
Convert an operation matrix from one basis of a density matrix space to another.
 Parameters
mx (numpy array) – The operation matrix (a 2D square array) in the from_basis basis.
from_basis ({'std', 'gm', 'pp', 'qt'} or Basis object) – The source basis. Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt) (or a custom basis object).
to_basis ({'std', 'gm', 'pp', 'qt'} or Basis object) – The destination basis. Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt) (or a custom basis object).
 Returns
numpy array – The given operation matrix converted to the to_basis basis. Array size is the same as mx.
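For orthonormal bases, this conversion is a unitary similarity transform: build a matrix M whose rows are the conjugated, flattened destination-basis elements, then mx_new = M mx M†. A single-qubit std-to-pp sketch in plain numpy (the element ordering and row-major vectorization conventions here are assumptions and may differ from pygsti.change_basis):

```python
import numpy as np

# Normalized Pauli ('pp') basis elements for one qubit: P_k / sqrt(2).
_paulis = [np.eye(2),
           np.array([[0, 1], [1, 0]]),
           np.array([[0, -1j], [1j, 0]]),
           np.array([[1, 0], [0, -1]])]
_pp = [p / np.sqrt(2) for p in _paulis]

def std_to_pp(mx_std):
    """Convert a 4x4 superoperator from the matrix-unit ('std') basis to
    the Pauli-product ('pp') basis via mx_pp = M mx_std M^dagger, where
    the rows of the unitary M are the conjugated, row-major-flattened
    pp elements.  Math sketch only, not a drop-in for change_basis."""
    m = np.array([b.conj().ravel() for b in _pp])
    return m @ mx_std @ m.conj().T
```

For example, the superoperator of an X gate, np.kron(X, X) in the std basis (row-major vectorization), becomes the familiar Pauli transfer matrix diag(1, 1, -1, -1).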
 pygsti.create_basis_pair(mx, from_basis, to_basis)¶
Construct a pair of Basis objects for transforming mx between two basis names.
Construct a pair of Basis objects with types from_basis and to_basis, and dimension appropriate for transforming mx (if they’re not already given by from_basis or to_basis being a Basis rather than a str).
 Parameters
mx (numpy.ndarray) – A matrix, assumed to be square and have a dimension that is a perfect square.
from_basis ({'std', 'gm', 'pp', 'qt'} or Basis object) – The source basis (named because it’s usually the source basis for a basis change). Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt) (or a custom basis object). If a custom basis object is provided, its dimension should be equal to sqrt(mx.shape[0]) == sqrt(mx.shape[1]).
to_basis ({'std', 'gm', 'pp', 'qt'} or Basis object) – The destination basis (named because it’s usually the destination basis for a basis change). Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt) (or a custom basis object). If a custom basis object is provided, its dimension should be equal to sqrt(mx.shape[0]) == sqrt(mx.shape[1]).
 Returns
from_basis, to_basis (Basis)
 pygsti.create_basis_for_matrix(mx, basis)¶
Construct a Basis object with type given by basis and dimension appropriate for transforming mx.
The dimension is taken from mx (if it is not given by basis): sqrt(mx.shape[0]).
 Parameters
mx (numpy.ndarray) – A matrix, assumed to be square and have a dimension that is a perfect square.
basis ({'std', 'gm', 'pp', 'qt'} or Basis object) – A basis name or Basis object. Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt) (or a custom basis object). If a custom basis object is provided, its dimension must equal sqrt(mx.shape[0]), as this will be checked.
 Returns
Basis
 pygsti.resize_std_mx(mx, resize, std_basis_1, std_basis_2)¶
Change the basis of mx to a potentially larger or smaller ‘std’-type basis given by std_basis_2.
(mx is assumed to be in the ‘std’-type basis given by std_basis_1.)
This is possible when the two ‘std’-type bases have the same “embedding dimension”, equal to the sum of their block dimensions. If, for example, std_basis_1 has block dimensions (kite structure) of (4,2,1) then mx, expressed as a sum of 4^2 + 2^2 + 1^2 = 21 basis elements, can be “embedded” within a larger ‘std’ basis having a single block with dimension 7 (7^2 = 49 elements).
When std_basis_2 is smaller than std_basis_1 the reverse happens and mx is irreversibly truncated, or “contracted” to a basis having a particular kite structure.
 Parameters
 Returns
numpy.ndarray
 pygsti.flexible_change_basis(mx, start_basis, end_basis)¶
Change mx from start_basis to end_basis allowing embedding expansion and contraction if needed.
(see
resize_std_mx()
for more details).
 pygsti.resize_mx(mx, dim_or_block_dims=None, resize=None)¶
Wrapper for
resize_std_mx()
, that manipulates mx to be in another basis. This function first constructs two ‘std’-type bases using dim_or_block_dims and sum(dim_or_block_dims). The matrix mx is converted from the former to the latter when resize == “expand”, and from the latter to the former when resize == “contract”.
 Parameters
mx (numpy array) – Matrix of size N x N, where N is the dimension of the density matrix space, i.e. sum( dimOrBlockDims_i^2 )
dim_or_block_dims (int or list of ints) – Structure of the densitymatrix space. Gives the matrix dimensions of each block.
resize ({'expand','contract'}) – Whether mx should be expanded or contracted.
 Returns
numpy.ndarray
 pygsti.state_to_stdmx(state_vec)¶
Convert a state vector into a density matrix.
 Parameters
state_vec (list or tuple) – State vector in the standard (sigma-z) basis.
 Returns
numpy.ndarray – A density matrix of shape (d,d), corresponding to the pure state given by the length-d array state_vec.
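The underlying math is just the outer product rho = |psi><psi|. A minimal numpy sketch (assumes state_vec is already normalized; the pyGSTi function also handles basis bookkeeping):

```python
import numpy as np

def state_to_stdmx_sketch(state_vec):
    """Density matrix rho = |psi><psi| of a pure state, in the standard
    basis -- the core of state_to_stdmx for a normalized input."""
    psi = np.asarray(state_vec, dtype=complex)
    return np.outer(psi, psi.conj())
```

A valid output is Hermitian, has unit trace, and satisfies rho @ rho == rho for pure states.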
 pygsti.state_to_pauli_density_vec(state_vec)¶
Convert a single qubit state vector into a Liouville vector in the Pauli basis.
 Parameters
state_vec (list or tuple) – State vector in the sigma-z basis, len(state_vec) == 2
 Returns
numpy array – The 2x2 density matrix of the pure state given by state_vec, given as a 4x1 column vector in the Pauli basis.
 pygsti.vec_to_stdmx(v, basis, keep_complex=False)¶
Convert a vector in this basis to a matrix in the standard basis.
 Parameters
v (numpy array) – The vector, of length 4 (1 qubit) or 16 (2 qubits).
basis ({'std', 'gm', 'pp', 'qt'} or Basis) – The basis type. Allowed values are Matrixunit (std), GellMann (gm), Pauliproduct (pp), and Qutrit (qt). If a Basis object, then the basis matrices are contained therein, and its dimension is checked to match v.
keep_complex (bool, optional) – If True, leave the final (output) array elements as complex numbers when v is complex. Usually, the final elements are real (even though v is complex) and so when keep_complex=False the elements are forced to be real and the returned array is float (not complex) valued.
 Returns
numpy array – The matrix, 2x2 or 4x4 depending on the number of qubits.
 pygsti.gmvec_to_stdmx¶
 pygsti.ppvec_to_stdmx¶
 pygsti.qtvec_to_stdmx¶
 pygsti.stdvec_to_stdmx¶
 pygsti.stdmx_to_vec(m, basis)¶
Convert a matrix in the standard basis to a vector in the Pauli basis.
 Parameters
m (numpy array) – The matrix, shape 2x2 (1Q) or 4x4 (2Q)
basis ({'std', 'gm', 'pp', 'qt'} or Basis) – The basis type. Allowed values are Matrixunit (std), GellMann (gm), Pauliproduct (pp), and Qutrit (qt). If a Basis object, then the basis matrices are contained therein, and its dimension is checked to match m.
 Returns
numpy array – The vector, length 4 or 16 respectively.
 pygsti.stdmx_to_ppvec¶
 pygsti.stdmx_to_gmvec¶
 pygsti.stdmx_to_stdvec¶
 pygsti._deprecated_fn(replacement=None)¶
Decorator for deprecating a function.
 Parameters
replacement (str, optional) – the name of the function that should replace it.
 Returns
function
pygsti.chi2(model, dataset, circuits=None, min_prob_clip_for_weighting=0.0001, prob_clip_interval=(-10000, 10000), op_label_aliases=None, mdc_store=None, comm=None, mem_limit=None)¶
Computes the total (aggregate) chi^2 for a set of circuits.
The chi^2 test statistic is obtained by summing up the contributions of a given set of circuits or of all the circuits available in a dataset. For the gradient or Hessian, see the :function:`chi2_jacobian` and :function:`chi2_hessian` functions.
 Parameters
model (Model) – The model used to specify the probabilities and SPAM labels
dataset (DataSet) – The data used to specify frequencies and counts
circuits (list of Circuits or tuples, optional) – List of circuits whose terms will be included in chi^2 sum. Default value (None) means “all strings in dataset”.
min_prob_clip_for_weighting (float, optional) – defines the clipping interval for the statistical weight.
prob_clip_interval (tuple, optional) – A (min, max) tuple that specifies the minimum (possibly negative) and maximum values allowed for probabilities generated by the model. If the model gives probabilities outside this range they are clipped to min or max. These values can be quite generous, as the optimizers are quite tolerant of badly behaved probabilities.
op_label_aliases (dictionary, optional) – Dictionary whose keys are operation label “aliases” and whose values are tuples corresponding to what that operation label should be expanded into before querying the dataset. Defaults to the empty dictionary (no aliases defined) e.g. op_label_aliases[‘Gx^3’] = (‘Gx’,’Gx’,’Gx’)
mdc_store (ModelDatasetCircuitsStore, optional) – An object that bundles cached quantities along with a given model, dataset, and circuit list. If given, model and dataset and circuits should be set to None.
comm (mpi4py.MPI.Comm, optional) – When not None, an MPI communicator for distributing the computation across multiple processors.
mem_limit (int, optional) – A rough memory limit in bytes which restricts the amount of intermediate values that are computed and stored.
 Returns
chi2 (float) – chi^2 value, equal to the sum of chi^2 terms from all specified circuits
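Per outcome, the quantity being summed is N (p - f)^2 / p, with the weighting denominator clipped by min_prob_clip_for_weighting so near-zero probabilities don't blow up the weight. A toy numpy version of just that formula (pygsti.chi2 itself computes p from a Model and f, N from a DataSet; the exact clipping convention below is an assumption):

```python
import numpy as np

def chi2_sketch(probs, freqs, total_counts, min_prob_clip_for_weighting=1e-4):
    """Aggregate chi^2 over outcomes: sum_s N * (p_s - f_s)^2 / w_s,
    where the weight denominator w_s is p_s clipped to
    [min_prob_clip_for_weighting, 1 - min_prob_clip_for_weighting]."""
    p = np.asarray(probs, dtype=float)
    f = np.asarray(freqs, dtype=float)
    c = min_prob_clip_for_weighting
    w = np.clip(p, c, 1.0 - c)
    return float(np.sum(total_counts * (p - f) ** 2 / w))
```

For example, model probabilities (0.5, 0.5) against observed frequencies (0.6, 0.4) from 100 counts give chi^2 = 2 * 100 * (0.1)^2 / 0.5 = 4.0.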
pygsti.chi2_per_circuit(model, dataset, circuits=None, min_prob_clip_for_weighting=0.0001, prob_clip_interval=(-10000, 10000), op_label_aliases=None, mdc_store=None, comm=None, mem_limit=None)¶
Computes the per-circuit chi^2 contributions for a set of circuits.
This function returns the same value as
chi2()
except the contributions from different circuits are not summed but returned as an array (the contributions of all the outcomes of a given circuit are summed together).
Parameters
model (Model) – The model used to specify the probabilities and SPAM labels
dataset (DataSet) – The data used to specify frequencies and counts
circuits (list of Circuits or tuples, optional) – List of circuits whose terms will be included in chi^2 sum. Default value (None) means “all strings in dataset”.
min_prob_clip_for_weighting (float, optional) – defines the clipping interval for the statistical weight.
prob_clip_interval (tuple, optional) – A (min, max) tuple that specifies the minimum (possibly negative) and maximum values allowed for probabilities generated by the model. If the model gives probabilities outside this range they are clipped to min or max. These values can be quite generous, as the optimizers are quite tolerant of badly behaved probabilities.
op_label_aliases (dictionary, optional) – Dictionary whose keys are operation label “aliases” and whose values are tuples corresponding to what that operation label should be expanded into before querying the dataset. Defaults to the empty dictionary (no aliases defined) e.g. op_label_aliases[‘Gx^3’] = (‘Gx’,’Gx’,’Gx’)
mdc_store (ModelDatasetCircuitsStore, optional) – An object that bundles cached quantities along with a given model, dataset, and circuit list. If given, model and dataset and circuits should be set to None.
comm (mpi4py.MPI.Comm, optional) – When not None, an MPI communicator for distributing the computation across multiple processors.
mem_limit (int, optional) – A rough memory limit in bytes which restricts the amount of intermediate values that are computed and stored.
 Returns
chi2 (numpy.ndarray) – Array of length either len(circuits) or len(dataset.keys()). Values are the chi2 contributions of the corresponding circuit aggregated over outcomes.
pygsti.chi2_jacobian(model, dataset, circuits=None, min_prob_clip_for_weighting=0.0001, prob_clip_interval=(-10000, 10000), op_label_aliases=None, mdc_store=None, comm=None, mem_limit=None)¶
Compute the gradient of the chi^2 function computed by :function:`chi2`.
The returned value holds the derivatives of the chi^2 function with respect to model’s parameters.
 Parameters
model (Model) – The model used to specify the probabilities and SPAM labels
dataset (DataSet) – The data used to specify frequencies and counts
circuits (list of Circuits or tuples, optional) – List of circuits whose terms will be included in chi^2 sum. Default value (None) means “all strings in dataset”.
min_prob_clip_for_weighting (float, optional) – defines the clipping interval for the statistical weight.
prob_clip_interval (tuple, optional) – A (min, max) tuple that specifies the minimum (possibly negative) and maximum values allowed for probabilities generated by the model. If the model gives probabilities outside this range they are clipped to min or max. These values can be quite generous, as the optimizers are quite tolerant of badly behaved probabilities.
op_label_aliases (dictionary, optional) – Dictionary whose keys are operation label “aliases” and whose values are tuples corresponding to what that operation label should be expanded into before querying the dataset. Defaults to the empty dictionary (no aliases defined) e.g. op_label_aliases[‘Gx^3’] = (‘Gx’,’Gx’,’Gx’)
mdc_store (ModelDatasetCircuitsStore, optional) – An object that bundles cached quantities along with a given model, dataset, and circuit list. If given, model and dataset and circuits should be set to None.
comm (mpi4py.MPI.Comm, optional) – When not None, an MPI communicator for distributing the computation across multiple processors.
mem_limit (int, optional) – A rough memory limit in bytes which restricts the amount of intermediate values that are computed and stored.
 Returns
numpy array – The gradient vector of length model.num_params, the number of model parameters.
pygsti.chi2_hessian(model, dataset, circuits=None, min_prob_clip_for_weighting=0.0001, prob_clip_interval=(-10000, 10000), op_label_aliases=None, mdc_store=None, comm=None, mem_limit=None)¶
Compute the Hessian matrix of the
chi2()
function.
Parameters
model (Model) – The model used to specify the probabilities and SPAM labels
dataset (DataSet) – The data used to specify frequencies and counts
circuits (list of Circuits or tuples, optional) – List of circuits whose terms will be included in chi^2 sum. Default value (None) means “all strings in dataset”.
min_prob_clip_for_weighting (float, optional) – defines the clipping interval for the statistical weight.
prob_clip_interval (tuple, optional) – A (min, max) tuple that specifies the minimum (possibly negative) and maximum values allowed for probabilities generated by the model. If the model gives probabilities outside this range they are clipped to min or max. These values can be quite generous, as the optimizers are quite tolerant of badly behaved probabilities.
op_label_aliases (dictionary, optional) – Dictionary whose keys are operation label “aliases” and whose values are tuples corresponding to what that operation label should be expanded into before querying the dataset. Defaults to the empty dictionary (no aliases defined) e.g. op_label_aliases[‘Gx^3’] = (‘Gx’,’Gx’,’Gx’)
mdc_store (ModelDatasetCircuitsStore, optional) – An object that bundles cached quantities along with a given model, dataset, and circuit list. If given, model and dataset and circuits should be set to None.
comm (mpi4py.MPI.Comm, optional) – When not None, an MPI communicator for distributing the computation across multiple processors.
mem_limit (int, optional) – A rough memory limit in bytes which restricts the amount of intermediate values that are computed and stored.
 Returns
numpy array – The Hessian matrix of shape (nModelParams, nModelParams), where nModelParams = model.num_params.
pygsti.chi2_approximate_hessian(model, dataset, circuits=None, min_prob_clip_for_weighting=0.0001, prob_clip_interval=(-10000, 10000), op_label_aliases=None, mdc_store=None, comm=None, mem_limit=None)¶
Compute an approximate Hessian matrix of the
chi2()
function.
This approximation neglects terms proportional to the Hessian of the probabilities w.r.t. the model parameters (which can take a long time to compute). See logl_approximate_hessian for details on the analogous approximation for the log-likelihood Hessian.
 Parameters
model (Model) – The model used to specify the probabilities and SPAM labels
dataset (DataSet) – The data used to specify frequencies and counts
circuits (list of Circuits or tuples, optional) – List of circuits whose terms will be included in chi^2 sum. Default value (None) means “all strings in dataset”.
min_prob_clip_for_weighting (float, optional) – defines the clipping interval for the statistical weight.
prob_clip_interval (tuple, optional) – A (min, max) tuple that specifies the minimum (possibly negative) and maximum values allowed for probabilities generated by the model. If the model gives probabilities outside this range they are clipped to min or max. These values can be quite generous, as the optimizers are quite tolerant of badly behaved probabilities.
op_label_aliases (dictionary, optional) – Dictionary whose keys are operation label “aliases” and whose values are tuples corresponding to what that operation label should be expanded into before querying the dataset. Defaults to the empty dictionary (no aliases defined) e.g. op_label_aliases[‘Gx^3’] = (‘Gx’,’Gx’,’Gx’)
mdc_store (ModelDatasetCircuitsStore, optional) – An object that bundles cached quantities along with a given model, dataset, and circuit list. If given, model and dataset and circuits should be set to None.
comm (mpi4py.MPI.Comm, optional) – When not None, an MPI communicator for distributing the computation across multiple processors.
mem_limit (int, optional) – A rough memory limit in bytes which restricts the amount of intermediate values that are computed and stored.
 Returns
numpy array – The Hessian matrix of shape (nModelParams, nModelParams), where nModelParams = model.num_params.
 pygsti.chialpha(alpha, model, dataset, circuits=None, pfratio_stitchpt=0.01, pfratio_derivpt=0.01, prob_clip_interval=(-10000, 10000), radius=None, op_label_aliases=None, mdc_store=None, comm=None, mem_limit=None)¶
Compute the chialpha objective function.
 Parameters
alpha (float) – The alpha parameter, which lies in the interval (0,1].
model (Model) – The model used to specify the probabilities and SPAM labels
dataset (DataSet) – The data used to specify frequencies and counts
circuits (list of Circuits or tuples, optional) – List of circuits whose terms will be included in chialpha sum. Default value (None) means “all strings in dataset”.
pfratio_stitchpt (float, optional) – The x-value (x = probability/frequency ratio) below which the chialpha function is replaced with its second-order Taylor expansion.
pfratio_derivpt (float, optional) – The x-value at which the Taylor expansion derivatives are evaluated.
prob_clip_interval (tuple, optional) – A (min, max) tuple that specifies the minimum (possibly negative) and maximum values allowed for probabilities generated by model. If the model gives probabilities outside this range they are clipped to min or max. These values can be quite generous, as the optimizers are quite tolerant of badly behaved probabilities.
radius (float, optional) – If radius is not None then a “harsh” method of regularizing the zero-frequency terms (where the local function = N*p) is used. If radius is None, then fmin is used to handle the zero-frequency terms.
op_label_aliases (dictionary, optional) – Dictionary whose keys are operation label “aliases” and whose values are tuples corresponding to what that operation label should be expanded into before querying the dataset. Defaults to the empty dictionary (no aliases defined) e.g. op_label_aliases[‘Gx^3’] = (‘Gx’,’Gx’,’Gx’)
mdc_store (ModelDatasetCircuitsStore, optional) – An object that bundles cached quantities along with a given model, dataset, and circuit list. If given, model and dataset and circuits should be set to None.
comm (mpi4py.MPI.Comm, optional) – When not None, an MPI communicator for distributing the computation across multiple processors.
mem_limit (int, optional) – A rough memory limit in bytes which restricts the amount of intermediate values that are computed and stored.
 Returns
float
 pygsti.chialpha_per_circuit(alpha, model, dataset, circuits=None, pfratio_stitchpt=0.01, pfratio_derivpt=0.01, prob_clip_interval=(-10000, 10000), radius=None, op_label_aliases=None, mdc_store=None, comm=None, mem_limit=None)¶
Compute the per-circuit chialpha objective function.
 Parameters
alpha (float) – The alpha parameter, which lies in the interval (0,1].
model (Model) – The model used to specify the probabilities and SPAM labels
dataset (DataSet) – The data used to specify frequencies and counts
circuits (list of Circuits or tuples, optional) – List of circuits whose terms will be included in chialpha sum. Default value (None) means “all strings in dataset”.
pfratio_stitchpt (float, optional) – The x-value (x = probability/frequency ratio) below which the chialpha function is replaced with its second-order Taylor expansion.
pfratio_derivpt (float, optional) – The x-value at which the Taylor expansion derivatives are evaluated.
prob_clip_interval (tuple, optional) – A (min, max) tuple that specifies the minimum (possibly negative) and maximum values allowed for probabilities generated by model. If the model gives probabilities outside this range they are clipped to min or max. These values can be quite generous, as the optimizers are quite tolerant of badly behaved probabilities.
radius (float, optional) – If radius is not None then a “harsh” method of regularizing the zero-frequency terms (where the local function = N*p) is used. If radius is None, then fmin is used to handle the zero-frequency terms.
op_label_aliases (dictionary, optional) – Dictionary whose keys are operation label “aliases” and whose values are tuples corresponding to what that operation label should be expanded into before querying the dataset. Defaults to the empty dictionary (no aliases defined) e.g. op_label_aliases[‘Gx^3’] = (‘Gx’,’Gx’,’Gx’)
mdc_store (ModelDatasetCircuitsStore, optional) – An object that bundles cached quantities along with a given model, dataset, and circuit list. If given, model and dataset and circuits should be set to None.
comm (mpi4py.MPI.Comm, optional) – When not None, an MPI communicator for distributing the computation across multiple processors.
mem_limit (int, optional) – A rough memory limit in bytes which restricts the amount of intermediate values that are computed and stored.
 Returns
numpy.ndarray – Array of length either len(circuits) or len(dataset.keys()). Values are the chialpha contributions of the corresponding circuit aggregated over outcomes.
 pygsti.chi2fn_2outcome(n, p, f, min_prob_clip_for_weighting=0.0001)¶
Computes chi^2 for a 2-outcome measurement.
The chi-squared function for a 2-outcome measurement using a clipped probability for the statistical weighting.
 Parameters
n (float or numpy array) – Number of samples.
p (float or numpy array) – Probability of 1st outcome (typically computed).
f (float or numpy array) – Frequency of 1st outcome (typically observed).
min_prob_clip_for_weighting (float, optional) – Defines clipping interval (see return value).
 Returns
float or numpy array – n(p-f)^2 / (cp(1-cp)), where cp is the value of p clipped to the interval (min_prob_clip_for_weighting, 1-min_prob_clip_for_weighting)
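The documented formula can be written directly in NumPy; this is a minimal sketch of the return value above, not pyGSTi's implementation, and the function name is illustrative:

```python
import numpy as np

def chi2_two_outcome(n, p, f, min_prob_clip_for_weighting=1e-4):
    # Documented formula: n*(p-f)^2 / (cp*(1-cp)), where cp is p clipped
    # to [min_prob_clip_for_weighting, 1 - min_prob_clip_for_weighting].
    cp = np.clip(p, min_prob_clip_for_weighting, 1.0 - min_prob_clip_for_weighting)
    return n * (p - f) ** 2 / (cp * (1.0 - cp))
```

The clipping keeps the weight 1/(cp(1-cp)) finite as p approaches 0 or 1.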
 pygsti.chi2fn_2outcome_wfreqs(n, p, f)¶
Computes chi^2 for a 2-outcome measurement using frequency weighting.
The chi-squared function for a 2-outcome measurement using the observed frequency in the statistical weight.
 Parameters
n (float or numpy array) – Number of samples.
p (float or numpy array) – Probability of 1st outcome (typically computed).
f (float or numpy array) – Frequency of 1st outcome (typically observed).
 Returns
float or numpy array – n(p-f)^2 / (f*(1-f*)), where f* = (f*n+1)/(n+2) is the frequency value used in the statistical weighting (prevents divide-by-zero errors)
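A sketch of the frequency-weighted variant, using the regularized frequency f* = (f*n+1)/(n+2) from the formula above (an illustrative name, not pyGSTi API):

```python
import numpy as np

def chi2_two_outcome_wfreqs(n, p, f):
    # f* = (f*n + 1) / (n + 2) shrinks f away from 0 and 1 so the
    # weight f*(1 - f*) never vanishes (no divide-by-zero at f = 0 or 1).
    fs = (f * n + 1.0) / (n + 2.0)
    return n * (p - f) ** 2 / (fs * (1.0 - fs))
```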
 pygsti.chi2fn(n, p, f, min_prob_clip_for_weighting=0.0001)¶
Computes the chi^2 term corresponding to a single outcome.
The chi-squared term for a single outcome of a multi-outcome measurement using a clipped probability for the statistical weighting.
 Parameters
n (float or numpy array) – Number of samples.
p (float or numpy array) – Probability of 1st outcome (typically computed).
f (float or numpy array) – Frequency of 1st outcome (typically observed).
min_prob_clip_for_weighting (float, optional) – Defines clipping interval (see return value).
 Returns
float or numpy array – n(p-f)^2 / cp, where cp is the value of p clipped to the interval (min_prob_clip_for_weighting, 1-min_prob_clip_for_weighting)
 pygsti.chi2fn_wfreqs(n, p, f, min_freq_clip_for_weighting=0.0001)¶
Computes the frequency-weighted chi^2 term corresponding to a single outcome.
The chi-squared term for a single outcome of a multi-outcome measurement using the observed frequency in the statistical weight.
 Parameters
n (float or numpy array) – Number of samples.
p (float or numpy array) – Probability of 1st outcome (typically computed).
f (float or numpy array) – Frequency of 1st outcome (typically observed).
min_freq_clip_for_weighting (float, optional) – The minimum frequency used in the weighting, i.e. the largest weighting factor is 1 / min_freq_clip_for_weighting.
 Returns
float or numpy array
 pygsti.bonferroni_correction(significance, numtests)¶
Calculates the standard Bonferroni correction.
This is used for reducing the “local” significance for more than one statistical hypothesis test to guarantee maintaining a “global” significance (i.e., a family-wise error rate) of significance.
 Parameters
significance (float) – Significance of each individual test.
numtests (int) – The number of hypothesis tests performed.
 Returns
The Bonferroni-corrected local significance, given by significance / numtests.
 pygsti.sidak_correction(significance, numtests)¶
Sidak correction.
The Šidák correction gives the “local” significance required of each of numtests independent tests so that the “global” (family-wise) significance is significance, namely 1 - (1 - significance)^(1/numtests).
 Parameters
significance (float) – Significance of each individual test.
numtests (int) – The number of hypothesis tests performed.
 Returns
float
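Both corrections are standard formulas; a minimal sketch under the usual definitions (illustrative, not pyGSTi's code):

```python
def bonferroni_correction(significance, numtests):
    # Local significance that guarantees a family-wise error rate
    # of `significance` across `numtests` tests.
    return significance / numtests

def sidak_correction(significance, numtests):
    # Exact under independence; slightly less conservative than Bonferroni.
    return 1.0 - (1.0 - significance) ** (1.0 / numtests)
```

For any numtests > 1 the Šidák local significance is slightly larger than the Bonferroni one.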
 pygsti.generalized_bonferroni_correction(significance, weights, numtests=None, nested_method='bonferroni', tol=1e-10)¶
Generalized Bonferroni correction.
 Parameters
significance (float) – Significance of each individual test.
weights (array-like) – An array of non-negative floating-point weights, one per individual test, that sum to 1.0.
numtests (int) – The number of hypothesis tests performed.
nested_method ({'bonferroni', 'sidak'}) – Which method is used to find the significance of the composite test.
tol (float, optional) – Tolerance when checking that the weights add to 1.0.
 Returns
float
 class pygsti._Basis(name, longname, real, sparse)¶
Bases:
pygsti.baseobjs.nicelyserializable.NicelySerializable
An ordered set of labeled matrices/vectors.
The base class for basis objects. A basis in pyGSTi is an abstract notion of a set of labeled elements, or “vectors”. Each basis has a certain size, and has .elements, .labels, and .ellookup members, the latter being a dictionary mapping of labels to elements.
An important point to note that isn’t immediately intuitive is that while a Basis object holds elements (in its .elements property) these are not the same as its vectors (given by the object’s vector_elements property). Often, in what we term a “simple” basis, you just flatten an element to get the corresponding vector element. This works for bases where the elements are either vectors (where flattening does nothing) or matrices. By storing elements as distinct from vector_elements, the Basis can capture additional structure of the elements (such as viewing them as matrices) that can be helpful for their display and interpretation. The elements are also sometimes referred to as the “natural elements” because they represent how to display the element in a natural way. A non-simple basis occurs when vector_elements need to be stored as elements in a larger “embedded” way so that these elements can be displayed and interpreted naturally.
A second important note is that there is assumed to be some underlying “standard” basis underneath all the bases in pyGSTi. The elements in a Basis are always written in this standard basis. In the case of the “std”-named basis in pyGSTi, these elements are just the trivial vector or matrix units, so one can rightly view the “std” pyGSTi basis as the “standard” basis for that particular dimension.
The arguments below describe the basic properties of all basis objects in pyGSTi. It is important to remember that the vector_elements of a basis are different from its elements (see the
Basis
docstring), and that dim refers to the vector elements whereas elshape refers to the elements. For example, consider a 2-element Basis containing the I and X Pauli matrices. The size of this basis is 2, as there are two elements (and two vector elements). Since vector elements are the length-4 flattened Pauli matrices, the dimension (dim) is 4. Since the elements are 2x2 Pauli matrices, the elshape is (2,2).
As another example, consider a basis which spans all the diagonal 2x2 matrices. The elements of this basis are the two matrix units with a 1 in the (0,0) or (1,1) location. The vector elements, however, are the length-2 [1,0] and [0,1] vectors obtained by extracting just the diagonal entries from each basis element. Thus, for this basis, size=2, dim=2, and elshape=(2,2), so the dimension is not just the product of elshape entries (equivalently, elsize).
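The first example above can be checked with a few lines of NumPy (plain arrays rather than pyGSTi objects; the variable names are illustrative):

```python
import numpy as np

# A "simple" basis of the I and X Pauli matrices.
elements = [np.eye(2), np.array([[0.0, 1.0], [1.0, 0.0]])]
size = len(elements)                               # number of elements
elshape = elements[0].shape                        # shape of each element
vector_elements = [el.ravel() for el in elements]  # flatten to get vectors
dim = vector_elements[0].size                      # length of vector elements
```

Here size == 2, elshape == (2, 2), and dim == 4. In the diagonal-matrix example the vector elements are not flattenings of the elements, so dim (2) differs from elsize (4).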
 Parameters
name (string) – The name of the basis. This can be anything, but is usually short and abbreviated. There are several types of bases built into pyGSTi that can be constructed by this name.
longname (string) – A more descriptive name for the basis.
real (bool) – Elements and vector elements are always allowed to have complex entries. This argument indicates whether the coefficients in the expression of an arbitrary vector in this basis must be real. For example, if real=True, then when pyGSTi transforms a vector in some other basis to a vector in this basis, it will demand that the values of that vector (i.e. the coefficients which multiply this basis’s elements to obtain a vector in the “standard” basis) are real.
sparse (bool) – Whether the elements of .elements for this Basis are stored (when they are stored at all) as sparse matrices or vectors.
 dim¶
The dimension of the vector space this basis fully or partially spans. Equivalently, the length of the vector_elements of the basis.
 Type
int
 size¶
The number of elements (or vectorelements) in the basis.
 Type
int
 elshape¶
The shape of each element. Typically either a length-1 or length-2 tuple, corresponding to vector or matrix elements, respectively. Note that vector elements always have shape (dim,) (or (dim,1) in the sparse case).
 Type
int
 elndim¶
The number of element dimensions, i.e. len(self.elshape)
 Type
int
 elsize¶
The total element size, i.e. product(self.elshape)
 Type
int
 vector_elements¶
The “vectors” of this basis, always 1D (sparse or dense) arrays.
 Type
list
 classmethod cast(cls, name_or_basis_or_matrices, dim=None, sparse=None, classical_name='cl')¶
Convert various things that can describe a basis into a Basis object.
 Parameters
name_or_basis_or_matrices (various) –
Can take on a variety of values to produce different types of bases:
None: an empty ExplicitBasis
Basis: checked with dim and sparse and passed through.
str: BuiltinBasis or DirectSumBasis with the given name.
list: an ExplicitBasis if given matrices/vectors, or a DirectSumBasis if given (name, dim) pairs.
dim (int or StateSpace, optional) – The dimension of the basis to create. Sometimes this can be inferred based on name_or_basis_or_matrices, other times it must be supplied. This is the dimension of the space that this basis fully or partially spans. This is equal to the number of basis elements in a “full” (ordinary) basis. When a StateSpace object is given, a more detailed direct-sum-of-tensor-product-blocks structure for the state space (rather than a single dimension) is described, and a basis is produced for this space. For instance, a DirectSumBasis of TensorProdBasis components can result when there are multiple tensor-product blocks and these blocks consist of multiple factors.
sparse (bool, optional) – Whether the resulting basis should be “sparse”, meaning that its elements will be sparse rather than dense matrices.
classical_name (str, optional) – An alternate builtin basis name that should be used when constructing the bases for the classical sectors of dim, when dim is a StateSpace object.
 Returns
Basis
 property dim(self)¶
The dimension of the vector space this basis fully or partially spans. Equivalently, the length of the vector_elements of the basis.
 property size(self)¶
The number of elements (or vectorelements) in the basis.
 property elshape(self)¶
The shape of each element. Typically either a length-1 or length-2 tuple, corresponding to vector or matrix elements, respectively. Note that vector elements always have shape (dim,) (or (dim,1) in the sparse case).
 property elndim(self)¶
The number of element dimensions, i.e. len(self.elshape)
 Returns
int
 property elsize(self)¶
The total element size, i.e. product(self.elshape)
 Returns
int
 is_simple(self)¶
Whether the flattenedelement vector space is the same space as the space this basis’s vectors belong to.
 Returns
bool
 is_complete(self)¶
Whether this is a complete basis, i.e. this basis’s vectors span the entire space that they live in.
 Returns
bool
 is_partial(self)¶
The negative of :method:`is_complete`, effectively “is_incomplete”.
 Returns
bool
 property vector_elements(self)¶
The “vectors” of this basis, always 1D (sparse or dense) arrays.
 Returns
list – A list of 1D arrays.
 copy(self)¶
Make a copy of this Basis object.
 Returns
Basis
 with_sparsity(self, desired_sparsity)¶
Returns either this basis or a copy of it with the desired sparsity.
If this basis has the desired sparsity it is simply returned. If not, this basis is copied to one that does.
 Parameters
desired_sparsity (bool) – The sparsity (True for sparse elements, False for dense elements) that is desired.
 Returns
Basis
 abstract _copy_with_toggled_sparsity(self)¶
 __str__(self)¶
Return str(self).
 __getitem__(self, index)¶
 __len__(self)¶
 __eq__(self, other)¶
Return self==value.
 create_transform_matrix(self, to_basis)¶
Get the matrix that transforms a vector from this basis to to_basis.
 Parameters
to_basis (Basis or string) – The basis to transform to or a builtin basis name. In the latter case, a basis to transform to is built with the same structure as this basis but with all components constructed from the given name.
 Returns
numpy.ndarray (even if basis is sparse)
 reverse_transform_matrix(self, from_basis)¶
Get the matrix that transforms a vector from from_basis to this basis.
The reverse of :method:`create_transform_matrix`.
 Parameters
from_basis (Basis or string) – The basis to transform from or a builtin basis name. In the latter case, a basis to transform from is built with the same structure as this basis but with all components constructed from the given name.
 Returns
numpy.ndarray (even if basis is sparse)
 is_normalized(self)¶
Check if a basis is normalized, meaning that Tr(Bi Bi) = 1.0.
Available only to bases whose elements are matrices for now.
 Returns
bool
 property to_std_transform_matrix(self)¶
Retrieve the matrix that transforms a vector from this basis to the standard basis of this basis’s dimension.
 Returns
numpy array or scipy.sparse.lil_matrix – An array of shape (dim, size) where dim is the dimension of this basis (the length of its vectors) and size is the size of this basis (its number of vectors).
 property from_std_transform_matrix(self)¶
Retrieve the matrix that transforms vectors from the standard basis to this basis.
 Returns
numpy array or scipy sparse matrix – An array of shape (size, dim) where dim is the dimension of this basis (the length of its vectors) and size is the size of this basis (its number of vectors).
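For a simple, complete basis these two matrices can be sketched in NumPy by stacking the basis's vector elements as columns and taking a pseudo-inverse; this illustrates the shapes above, and is not pyGSTi's implementation:

```python
import numpy as np

# Normalized Pauli matrices: an orthonormal basis for 2x2 matrices.
paulis = [np.eye(2), np.array([[0, 1], [1, 0]]),
          np.array([[0, -1j], [1j, 0]]), np.array([[1, 0], [0, -1]])]
vector_elements = [(p / np.sqrt(2)).ravel() for p in paulis]

to_std = np.column_stack(vector_elements)   # shape (dim, size) = (4, 4)
from_std = np.linalg.pinv(to_std)           # shape (size, dim) = (4, 4)
```

For an orthonormal basis the pseudo-inverse is just the conjugate transpose, and from_std @ to_std is the identity.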
 property to_elementstd_transform_matrix(self)¶
Get transformation matrix from this basis to the “element space”.
Get the matrix that transforms vectors in this basis (with length equal to the dim of this basis) to vectors in the “element space”  that is, vectors in the same standard basis that the elements of this basis are expressed in.
 Returns
numpy array – An array of shape (element_dim, size) where element_dim is the dimension, i.e. size, of the elements of this basis (e.g. 16 if the elements are 4x4 matrices) and size is the size of this basis (its number of vectors).
 property from_elementstd_transform_matrix(self)¶
Get transformation matrix from “element space” to this basis.
Get the matrix that transforms vectors in the “element space”  that is, vectors in the same standard basis that the elements of this basis are expressed in  to vectors in this basis (with length equal to the dim of this basis).
 Returns
numpy array – An array of shape (size, element_dim) where element_dim is the dimension, i.e. size, of the elements of this basis (e.g. 16 if the elements are 4x4 matrices) and size is the size of this basis (its number of vectors).
 create_equivalent(self, builtin_basis_name)¶
Create an equivalent basis with components of type builtin_basis_name.
Create a
Basis
that is equivalent in structure & dimension to this basis but whose simple components (perhaps just this basis itself) are of the builtin basis type given by builtin_basis_name.
 Parameters
builtin_basis_name (str) – The name of a builtin basis, e.g. “pp”, “gm”, or “std”. Used to construct the simple components of the returned basis.
 Returns
Basis
 create_simple_equivalent(self, builtin_basis_name=None)¶
Create a basis of type builtin_basis_name whose elements are compatible with this basis.
Create a basis that is simple and without components (e.g. a
TensorProdBasis
is a simple basis with components) of the specified builtin type, whose dimension is compatible with the elements of this basis. This function might also be named “element_equivalent”, as it returns the builtin_basis_name analogue of the standard basis that this basis’s elements are expressed in.
 Parameters
builtin_basis_name (str, optional) – The name of the builtin basis to use. If None, then a copy of this basis is returned (if it’s simple) or this basis’s name is used to try to construct a simple and componentfree version of the same builtinbasis type.
 Returns
Basis
 is_compatible_with_state_space(self, state_space)¶
Checks whether this basis is compatible with a given state space.
 Parameters
state_space (StateSpace) – the state space to check.
 Returns
bool
 pygsti.jamiolkowski_iso(operation_mx, op_mx_basis='pp', choi_mx_basis='pp')¶
Given an operation matrix, return the corresponding Choi matrix that is normalized to have trace == 1.
 Parameters
operation_mx (numpy array) – the operation matrix to compute Choi matrix of.
op_mx_basis (Basis object) – The source and destination basis, respectively. Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt) (or a custom basis object).
choi_mx_basis (Basis object) – The source and destination basis, respectively. Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt) (or a custom basis object).
 Returns
numpy array – the Choi matrix, normalized to have trace == 1, in the desired basis.
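As a sketch of the underlying construction, the Choi matrix can be built in NumPy assuming the operation matrix is given in the standard (matrix-unit) basis with row-stacking vectorization; this illustrates the Choi isomorphism only, whereas pyGSTi's implementation also handles basis changes:

```python
import numpy as np

def choi_matrix_std(superop, d):
    # Choi matrix C = (1/d) * sum_ij E_ij (x) Lambda(E_ij), where
    # vec(Lambda(rho)) = superop @ vec(rho) with row-stacking vec().
    choi = np.zeros((d * d, d * d), dtype=complex)
    for i in range(d):
        for j in range(d):
            e_ij = np.zeros((d, d))
            e_ij[i, j] = 1.0
            out = (superop @ e_ij.ravel()).reshape(d, d)
            choi += np.kron(e_ij, out)
    return choi / d  # normalized so trace == 1
```

For the identity channel on a qubit (superop = the 4x4 identity) this yields the maximally entangled state, a rank-1, trace-1 matrix.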
 pygsti.jamiolkowski_iso_inv(choi_mx, choi_mx_basis='pp', op_mx_basis='pp')¶
Given a choi matrix, return the corresponding operation matrix.
This function performs the inverse of :function:`jamiolkowski_iso`.
 Parameters
choi_mx (numpy array) – the Choi matrix, normalized to have trace == 1, to compute operation matrix for.
choi_mx_basis (Basis object) – The source and destination basis, respectively. Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt) (or a custom basis object).
op_mx_basis (Basis object) – The source and destination basis, respectively. Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt) (or a custom basis object).
 Returns
numpy array – operation matrix in the desired basis.
 pygsti.fast_jamiolkowski_iso_std(operation_mx, op_mx_basis)¶
Return the corresponding Choi matrix in the standard basis, normalized to have trace == 1.
This routine only computes the case of the Choi matrix being in the standard (matrix-unit) basis, but does so more quickly than
jamiolkowski_iso()
and so is particularly useful when only the eigenvalues of the Choi matrix are needed.
 Parameters
operation_mx (numpy array) – the operation matrix to compute Choi matrix of.
op_mx_basis (Basis object) – The source and destination basis, respectively. Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt) (or a custom basis object).
 Returns
numpy array – the Choi matrix, normalized to have trace == 1, in the std basis.
 pygsti.fast_jamiolkowski_iso_std_inv(choi_mx, op_mx_basis)¶
Given a choi matrix in the standard basis, return the corresponding operation matrix.
This function performs the inverse of :function:`fast_jamiolkowski_iso_std`.
 Parameters
choi_mx (numpy array) – the Choi matrix in the standard (matrix units) basis, normalized to have trace == 1, to compute operation matrix for.
op_mx_basis (Basis object) – The source and destination basis, respectively. Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt) (or a custom basis object).
 Returns
numpy array – operation matrix in the desired basis.
 pygsti.sum_of_negative_choi_eigenvalues(model, weights=None)¶
Compute the amount of non-CP-ness of a model.
This is defined (somewhat arbitrarily) by summing the negative eigenvalues of the Choi matrix for each gate in model.
 Parameters
model (Model) – The model to act on.
weights (dict) – A dictionary of weights used to multiply the negative eigenvalues of different gates. Keys are operation labels, values are floating point numbers.
 Returns
float – the sum of negative eigenvalues of the Choi matrix for each gate.
 pygsti.sums_of_negative_choi_eigenvalues(model)¶
Compute the amount of non-CP-ness of a model.
This is defined (somewhat arbitrarily) by summing the negative eigenvalues of the Choi matrix for each gate in model separately. This function is different from :function:`sum_of_negative_choi_eigenvalues` in that it returns sums separately for each operation of model.
 Parameters
model (Model) – The model to act on.
 Returns
list of floats – each element == sum of the negative eigenvalues of the Choi matrix for the corresponding gate (as ordered by model.operations.iteritems()).
 pygsti.magnitudes_of_negative_choi_eigenvalues(model)¶
Compute the magnitudes of the negative eigenvalues of the Choi matrices for each gate in model.
 Parameters
model (Model) – The model to act on.
 Returns
list of floats – list of the magnitudes of all negative Choi eigenvalues. The length of this list will vary based on how many negative eigenvalues are found, as positive eigenvalues contribute nothing to this list.
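These quantities reduce to simple eigenvalue bookkeeping; a sketch for a single (Hermitian) Choi matrix, with illustrative names:

```python
import numpy as np

def sum_of_negative_eigenvalues(choi):
    # Sum of the negative eigenvalues (non-positive; 0 for a CP map).
    evals = np.linalg.eigvalsh(choi)  # Choi matrices are Hermitian
    return float(evals[evals < 0].sum())

def magnitudes_of_negative_eigenvalues(choi):
    # Magnitudes only; positive eigenvalues contribute nothing.
    evals = np.linalg.eigvalsh(choi)
    return [float(-ev) for ev in evals if ev < 0]
```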
 pygsti.warn_deprecated(name, replacement=None)¶
Formats and prints a deprecation warning message.
 Parameters
name (str) – The name of the function that is now deprecated.
replacement (str, optional) – the name of the function that should replace it.
 Returns
None
 pygsti.deprecate(replacement=None)¶
Decorator for deprecating a function.
 Parameters
replacement (str, optional) – the name of the function that should replace it.
 Returns
function
 pygsti.deprecate_imports(module_name, replacement_map, warning_msg)¶
Utility to deprecate imports from a module.
This works by swapping the underlying module in the import mechanisms with a ModuleType object that overrides attribute lookup to check against the replacement map.
Note that this will slow down module attribute lookup substantially. If you need to deprecate multiple names, DO NOT call this method more than once on a given module! Instead, use the replacement map to batch multiple deprecations into one call. When using this method, plan to remove the deprecated paths altogether sooner rather than later.
 Parameters
module_name (str) – The fullyqualified name of the module whose names have been deprecated.
replacement_map ({name: function}) – A map of each deprecated name to a factory which will be called with no arguments when importing the name.
warning_msg (str) – A message to be displayed as a warning when importing a deprecated name. Optionally, this may include the format string name, which will be formatted with the deprecated name.
 Returns
None
 pygsti.TOL = 1e-20¶
 pygsti.logl(model, dataset, circuits=None, min_prob_clip=1e-06, prob_clip_interval=(-1000000.0, 1000000.0), radius=0.0001, poisson_picture=True, op_label_aliases=None, wildcard=None, mdc_store=None, comm=None, mem_limit=None)¶
The log-likelihood function.
 Parameters
model (Model) – Model of parameterized gates
dataset (DataSet) – Probability data
circuits (list of (tuples or Circuits), optional) – Each element specifies a circuit to include in the log-likelihood sum. Default value of None implies all the circuits in dataset should be used.
min_prob_clip (float, optional) – The minimum probability treated normally in the evaluation of the log-likelihood. A penalty function replaces the true log-likelihood for probabilities that lie below this threshold so that the log-likelihood never becomes undefined (which improves optimizer performance).
prob_clip_interval (2-tuple or None, optional) – (min, max) values used to clip the probabilities predicted by models during MLEGST’s search for an optimal model (if not None). If None, no clipping is performed.
radius (float, optional) – Specifies the severity of rounding used to “patch” the zero-frequency terms of the log-likelihood.
poisson_picture (boolean, optional) – Whether the log-likelihood-in-the-Poisson-picture terms should be included in the returned logl value.
op_label_aliases (dictionary, optional) – Dictionary whose keys are operation label “aliases” and whose values are tuples corresponding to what that operation label should be expanded into before querying the dataset. Defaults to the empty dictionary (no aliases defined) e.g. op_label_aliases[‘Gx^3’] = (‘Gx’,’Gx’,’Gx’)
wildcard (WildcardBudget) – A wildcard budget to apply to this log-likelihood computation. This increases the returned log-likelihood value by adjusting (by a maximal amount measured in TVD, given by the budget) the probabilities produced by model to optimally match the data (within the budgetary constraints) when evaluating the log-likelihood.
mdc_store (ModelDatasetCircuitsStore, optional) – An object that bundles cached quantities along with a given model, dataset, and circuit list. If given, model and dataset and circuits should be set to None.
comm (mpi4py.MPI.Comm, optional) – When not None, an MPI communicator for distributing the computation across multiple processors.
mem_limit (int, optional) – A rough memory limit in bytes which restricts the amount of intermediate values that are computed and stored.
 Returns
float – The log-likelihood
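A minimal sketch of the Poisson-picture log-likelihood for a single circuit, without the clipping and zero-frequency patching described above (illustrative, not pyGSTi's implementation):

```python
import numpy as np

def poisson_logl(counts, probs, total_counts):
    # Poisson picture: sum_x [ n_x * log(p_x) - N * p_x ].
    # No min_prob_clip / radius regularization is applied here.
    counts = np.asarray(counts, dtype=float)
    probs = np.asarray(probs, dtype=float)
    return float(np.sum(counts * np.log(probs) - total_counts * probs))
```

For fixed counts this value is maximized when the model probabilities equal the observed frequencies, which is why it serves as an objective for MLE.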
 pygsti.logl_per_circuit(model, dataset, circuits=None, min_prob_clip=1e-06, prob_clip_interval=(-1000000.0, 1000000.0), radius=0.0001, poisson_picture=True, op_label_aliases=None, wildcard=None, mdc_store=None, comm=None, mem_limit=None)¶
Computes the per-circuit log-likelihood contribution for a set of circuits.
 Parameters
model (Model) – Model of parameterized gates
dataset (DataSet) – Probability data
circuits (list of (tuples or Circuits), optional) – Each element specifies a circuit to include in the log-likelihood sum. Default value of None implies all the circuits in dataset should be used.
min_prob_clip (float, optional) – The minimum probability treated normally in the evaluation of the log-likelihood. A penalty function replaces the true log-likelihood for probabilities that lie below this threshold so that the log-likelihood never becomes undefined (which improves optimizer performance).
prob_clip_interval (2-tuple or None, optional) – (min, max) values used to clip the probabilities predicted by models during MLEGST’s search for an optimal model (if not None). If None, no clipping is performed.
radius (float, optional) – Specifies the severity of rounding used to “patch” the zero-frequency terms of the log-likelihood.
poisson_picture (boolean, optional) – Whether the log-likelihood-in-the-Poisson-picture terms should be included in the returned logl value.
op_label_aliases (dictionary, optional) – Dictionary whose keys are operation label “aliases” and whose values are tuples corresponding to what that operation label should be expanded into before querying the dataset. Defaults to the empty dictionary (no aliases defined) e.g. op_label_aliases[‘Gx^3’] = (‘Gx’,’Gx’,’Gx’)
wildcard (WildcardBudget) – A wildcard budget to apply to this log-likelihood computation. This increases the returned log-likelihood value by adjusting (by a maximal amount measured in TVD, given by the budget) the probabilities produced by model to optimally match the data (within the budgetary constraints) when evaluating the log-likelihood.
mdc_store (ModelDatasetCircuitsStore, optional) – An object that bundles cached quantities along with a given model, dataset, and circuit list. If given, model and dataset and circuits should be set to None.
comm (mpi4py.MPI.Comm, optional) – When not None, an MPI communicator for distributing the computation across multiple processors.
mem_limit (int, optional) – A rough memory limit in bytes which restricts the amount of intermediate values that are computed and stored.
 Returns
numpy.ndarray – Array of length either len(circuits) or len(dataset.keys()). Values are the log-likelihood contributions of the corresponding circuit aggregated over outcomes.
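To make the Poisson-picture sum concrete, here is a minimal NumPy sketch of the per-circuit contribution (a hypothetical stand-in, not pyGSTi's implementation; the penalty-function patching near zero probabilities is reduced here to a simple clip):

```python
import numpy as np

def logl_per_circuit_sketch(counts, probs, poisson_picture=True, min_prob_clip=1e-6):
    """Illustrative only. counts, probs: arrays of shape (n_circuits, n_outcomes).

    Each circuit's contribution is sum over outcomes of n*log(p),
    minus N_total*p per outcome in the Poisson picture.
    """
    n_total = counts.sum(axis=1, keepdims=True)           # total shots per circuit
    terms = counts * np.log(np.clip(probs, min_prob_clip, None))
    if poisson_picture:
        terms = terms - n_total * probs                   # Poisson-picture extra term
    return terms.sum(axis=1)                              # aggregate over outcomes
```

The real function also handles clipping intervals, wildcard budgets, and MPI distribution, none of which is modeled here.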
 pygsti.logl_jacobian(model, dataset, circuits=None, min_prob_clip=1e-06, prob_clip_interval=(-1000000.0, 1000000.0), radius=0.0001, poisson_picture=True, op_label_aliases=None, mdc_store=None, comm=None, mem_limit=None, verbosity=0)¶
The Jacobian of the log-likelihood function.
 Parameters
model (Model) – Model of parameterized gates (including SPAM)
dataset (DataSet) – Probability data
circuits (list of (tuples or Circuits), optional) – Each element specifies a circuit to include in the log-likelihood sum. Default value of None implies all the circuits in dataset should be used.
min_prob_clip (float, optional) – The minimum probability treated normally in the evaluation of the log-likelihood. A penalty function replaces the true log-likelihood for probabilities that lie below this threshold so that the log-likelihood never becomes undefined (which improves optimizer performance).
prob_clip_interval (2-tuple or None, optional) – (min, max) values used to clip the probabilities predicted by models during MLEGST’s search for an optimal model (if not None). If None, no clipping is performed.
radius (float, optional) – Specifies the severity of rounding used to “patch” the zero-frequency terms of the log-likelihood.
poisson_picture (boolean, optional) – Whether the Poisson-picture log-likelihood should be differentiated.
op_label_aliases (dictionary, optional) – Dictionary whose keys are operation label “aliases” and whose values are tuples corresponding to what that operation label should be expanded into before querying the dataset. Defaults to the empty dictionary (no aliases defined), e.g. op_label_aliases[‘Gx^3’] = (‘Gx’,’Gx’,’Gx’)
mdc_store (ModelDatasetCircuitsStore, optional) – An object that bundles cached quantities along with a given model, dataset, and circuit list. If given, model and dataset and circuits should be set to None.
comm (mpi4py.MPI.Comm, optional) – When not None, an MPI communicator for distributing the computation across multiple processors.
mem_limit (int, optional) – A rough memory limit in bytes which restricts the amount of intermediate values that are computed and stored.
verbosity (int, optional) – How much detail to print to stdout.
 Returns
numpy array – array of shape (M,), where M is the length of the vectorized model.
 pygsti.logl_hessian(model, dataset, circuits=None, min_prob_clip=1e-06, prob_clip_interval=(-1000000.0, 1000000.0), radius=0.0001, poisson_picture=True, op_label_aliases=None, mdc_store=None, comm=None, mem_limit=None, verbosity=0)¶
The Hessian of the log-likelihood function.
 Parameters
model (Model) – Model of parameterized gates (including SPAM)
dataset (DataSet) – Probability data
circuits (list of (tuples or Circuits), optional) – Each element specifies a circuit to include in the log-likelihood sum. Default value of None implies all the circuits in dataset should be used.
min_prob_clip (float, optional) – The minimum probability treated normally in the evaluation of the log-likelihood. A penalty function replaces the true log-likelihood for probabilities that lie below this threshold so that the log-likelihood never becomes undefined (which improves optimizer performance).
prob_clip_interval (2-tuple or None, optional) – (min, max) values used to clip the probabilities predicted by models during MLEGST’s search for an optimal model (if not None). If None, no clipping is performed.
radius (float, optional) – Specifies the severity of rounding used to “patch” the zero-frequency terms of the log-likelihood.
poisson_picture (boolean, optional) – Whether the Poisson-picture log-likelihood should be differentiated.
op_label_aliases (dictionary, optional) – Dictionary whose keys are operation label “aliases” and whose values are tuples corresponding to what that operation label should be expanded into before querying the dataset. Defaults to the empty dictionary (no aliases defined), e.g. op_label_aliases[‘Gx^3’] = (‘Gx’,’Gx’,’Gx’)
mdc_store (ModelDatasetCircuitsStore, optional) – An object that bundles cached quantities along with a given model, dataset, and circuit list. If given, model and dataset and circuits should be set to None.
comm (mpi4py.MPI.Comm, optional) – When not None, an MPI communicator for distributing the computation across multiple processors.
mem_limit (int, optional) – A rough memory limit in bytes which restricts the amount of intermediate values that are computed and stored.
verbosity (int, optional) – How much detail to print to stdout.
 Returns
numpy array – array of shape (M,M), where M is the length of the vectorized model.
 pygsti.logl_approximate_hessian(model, dataset, circuits=None, min_prob_clip=1e-06, prob_clip_interval=(-1000000.0, 1000000.0), radius=0.0001, poisson_picture=True, op_label_aliases=None, mdc_store=None, comm=None, mem_limit=None, verbosity=0)¶
An approximate Hessian of the log-likelihood function.
An approximation to the true Hessian is computed using just the Jacobian (and not the Hessian) of the probabilities w.r.t. the model parameters. Let J = d(probs)/d(params) and denote the Hessian of the log-likelihood w.r.t. the probabilities as d2(logl)/dprobs2 (a diagonal matrix indexed by the term, i.e. probability, of the log-likelihood). Then this function computes:
H = J * d2(logl)/dprobs2 * J.T
which simply neglects the d2(probs)/d(params)2 terms of the true Hessian. Since this curvature is expected to be small at the MLE point, this approximation can be useful for computing approximate error bars.
 Parameters
model (Model) – Model of parameterized gates (including SPAM)
dataset (DataSet) – Probability data
circuits (list of (tuples or Circuits), optional) – Each element specifies a circuit to include in the log-likelihood sum. Default value of None implies all the circuits in dataset should be used.
min_prob_clip (float, optional) – The minimum probability treated normally in the evaluation of the log-likelihood. A penalty function replaces the true log-likelihood for probabilities that lie below this threshold so that the log-likelihood never becomes undefined (which improves optimizer performance).
prob_clip_interval (2-tuple or None, optional) – (min, max) values used to clip the probabilities predicted by models during MLEGST’s search for an optimal model (if not None). If None, no clipping is performed.
radius (float, optional) – Specifies the severity of rounding used to “patch” the zero-frequency terms of the log-likelihood.
poisson_picture (boolean, optional) – Whether the Poisson-picture log-likelihood should be differentiated.
op_label_aliases (dictionary, optional) – Dictionary whose keys are operation label “aliases” and whose values are tuples corresponding to what that operation label should be expanded into before querying the dataset. Defaults to the empty dictionary (no aliases defined), e.g. op_label_aliases[‘Gx^3’] = (‘Gx’,’Gx’,’Gx’)
mdc_store (ModelDatasetCircuitsStore, optional) – An object that bundles cached quantities along with a given model, dataset, and circuit list. If given, model and dataset and circuits should be set to None.
comm (mpi4py.MPI.Comm, optional) – When not None, an MPI communicator for distributing the computation across multiple processors.
mem_limit (int, optional) – A rough memory limit in bytes which restricts the amount of intermediate values that are computed and stored.
verbosity (int, optional) – How much detail to print to stdout.
 Returns
numpy array – array of shape (M,M), where M is the length of the vectorized model.
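The H = J * d2(logl)/dprobs2 * J.T construction above is just a weighted Gram matrix, so it can be sketched in a few lines of NumPy (assuming, for illustration, that the Jacobian has shape (n_probs, n_params) and that the diagonal Hessian w.r.t. probabilities is given as a 1D vector):

```python
import numpy as np

def approx_logl_hessian(jac, d2_logl_dprobs2):
    # jac: (n_probs, n_params) Jacobian d(probs)/d(params)
    # d2_logl_dprobs2: (n_probs,) diagonal of d2(logl)/dprobs2
    # Equivalent to jac.T @ diag(d2) @ jac, without materializing the diagonal matrix.
    return jac.T @ (d2_logl_dprobs2[:, None] * jac)
```

The result is symmetric of shape (n_params, n_params); this neglects the second-derivative-of-probabilities terms exactly as described above.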
 pygsti.logl_max(model, dataset, circuits=None, poisson_picture=True, op_label_aliases=None, mdc_store=None)¶
The maximum log-likelihood possible for a DataSet.
That is, the log-likelihood obtained by a maximal model that can fit perfectly the probability of each circuit.
 Parameters
model (Model) – the model, used only for circuit compilation
dataset (DataSet) – the data set to use.
circuits (list of (tuples or Circuits), optional) – Each element specifies a circuit to include in the max-log-likelihood sum. Default value of None implies all the circuits in dataset should be used.
poisson_picture (boolean, optional) – Whether the Poisson-picture maximum log-likelihood should be returned.
op_label_aliases (dictionary, optional) – Dictionary whose keys are operation label “aliases” and whose values are tuples corresponding to what that operation label should be expanded into before querying the dataset. Defaults to the empty dictionary (no aliases defined), e.g. op_label_aliases[‘Gx^3’] = (‘Gx’,’Gx’,’Gx’)
mdc_store (ModelDatasetCircuitsStore, optional) – An object that bundles cached quantities along with a given model, dataset, and circuit list. If given, model and dataset and circuits should be set to None.
 Returns
float
 pygsti.logl_max_per_circuit(model, dataset, circuits=None, poisson_picture=True, op_label_aliases=None, mdc_store=None)¶
The vector of maximum log-likelihood contributions for each circuit, aggregated over outcomes.
 Parameters
model (Model) – the model, used only for circuit compilation
dataset (DataSet) – the data set to use.
circuits (list of (tuples or Circuits), optional) – Each element specifies a circuit to include in the max-log-likelihood sum. Default value of None implies all the circuits in dataset should be used.
poisson_picture (boolean, optional) – Whether the Poisson-picture maximum log-likelihood should be returned.
op_label_aliases (dictionary, optional) – Dictionary whose keys are operation label “aliases” and whose values are tuples corresponding to what that operation label should be expanded into before querying the dataset. Defaults to the empty dictionary (no aliases defined), e.g. op_label_aliases[‘Gx^3’] = (‘Gx’,’Gx’,’Gx’)
mdc_store (ModelDatasetCircuitsStore, optional) – An object that bundles cached quantities along with a given model, dataset, and circuit list. If given, model and dataset and circuits should be set to None.
 Returns
numpy.ndarray – Array of length either len(circuits) or len(dataset.keys()). Values are the maximum log-likelihood contributions of the corresponding circuit aggregated over outcomes.
 pygsti.two_delta_logl_nsigma(model, dataset, circuits=None, min_prob_clip=1e-06, prob_clip_interval=(-1000000.0, 1000000.0), radius=0.0001, poisson_picture=True, op_label_aliases=None, dof_calc_method='modeltest', wildcard=None)¶
See docstring for :function:`pygsti.tools.two_delta_logl`
 Parameters
model (Model) – Model of parameterized gates
dataset (DataSet) – Probability data
circuits (list of (tuples or Circuits), optional) – Each element specifies a circuit to include in the log-likelihood sum. Default value of None implies all the circuits in dataset should be used.
min_prob_clip (float, optional) – The minimum probability treated normally in the evaluation of the log-likelihood. A penalty function replaces the true log-likelihood for probabilities that lie below this threshold so that the log-likelihood never becomes undefined (which improves optimizer performance).
prob_clip_interval (2-tuple or None, optional) – (min, max) values used to clip the probabilities predicted by models during MLEGST’s search for an optimal model (if not None). If None, no clipping is performed.
radius (float, optional) – Specifies the severity of rounding used to “patch” the zero-frequency terms of the log-likelihood.
poisson_picture (boolean, optional) – Whether the log-likelihood-in-the-Poisson-picture terms should be included in the returned logl value.
op_label_aliases (dictionary, optional) – Dictionary whose keys are operation label “aliases” and whose values are tuples corresponding to what that operation label should be expanded into before querying the dataset. Defaults to the empty dictionary (no aliases defined), e.g. op_label_aliases[‘Gx^3’] = (‘Gx’,’Gx’,’Gx’)
dof_calc_method ({"all", "modeltest"}) – How model’s number of degrees of freedom (parameters) are obtained when computing the number of standard deviations and p-value relative to a chi2_k distribution, where k is the additional degrees of freedom possessed by the maximal model. “all” uses model.num_params whereas “modeltest” uses model.num_modeltest_params (the number of non-gauge parameters by default).
wildcard (WildcardBudget) – A wildcard budget to apply to this log-likelihood computation. This increases the returned log-likelihood value by adjusting (by a maximal amount measured in TVD, given by the budget) the probabilities produced by model to optimally match the data (within the budgetary constraints) when evaluating the log-likelihood.
 Returns
float
 pygsti.two_delta_logl(model, dataset, circuits=None, min_prob_clip=1e-06, prob_clip_interval=(-1000000.0, 1000000.0), radius=0.0001, poisson_picture=True, op_label_aliases=None, dof_calc_method=None, wildcard=None, mdc_store=None, comm=None)¶
Twice the difference between the maximum and actual log-likelihood.
Optionally also can return the N-sigma (# std deviations from mean) and p-value relative to the expected chi^2 distribution (when dof_calc_method is not None).
This function’s arguments are supersets of :function:`logl` and :function:`logl_max`. This is a convenience function, equivalent to 2*(logl_max(...) - logl(...)), whose value is what is often called the log-likelihood ratio between the “maximal model” (that which trivially fits the data exactly) and the model given by model.
 Parameters
model (Model) – Model of parameterized gates
dataset (DataSet) – Probability data
circuits (list of (tuples or Circuits), optional) – Each element specifies a circuit to include in the log-likelihood sum. Default value of None implies all the circuits in dataset should be used.
min_prob_clip (float, optional) – The minimum probability treated normally in the evaluation of the log-likelihood. A penalty function replaces the true log-likelihood for probabilities that lie below this threshold so that the log-likelihood never becomes undefined (which improves optimizer performance).
prob_clip_interval (2-tuple or None, optional) – (min, max) values used to clip the probabilities predicted by models during MLEGST’s search for an optimal model (if not None). If None, no clipping is performed.
radius (float, optional) – Specifies the severity of rounding used to “patch” the zero-frequency terms of the log-likelihood.
poisson_picture (boolean, optional) – Whether the log-likelihood-in-the-Poisson-picture terms should be included in the computed log-likelihood values.
op_label_aliases (dictionary, optional) – Dictionary whose keys are operation label “aliases” and whose values are tuples corresponding to what that operation label should be expanded into before querying the dataset. Defaults to the empty dictionary (no aliases defined), e.g. op_label_aliases[‘Gx^3’] = (‘Gx’,’Gx’,’Gx’)
dof_calc_method ({None, "all", "modeltest"}) – How model’s number of degrees of freedom (parameters) are obtained when computing the number of standard deviations and p-value relative to a chi2_k distribution, where k is the additional degrees of freedom possessed by the maximal model. If None, then Nsigma and pvalue are not returned (see below).
wildcard (WildcardBudget) – A wildcard budget to apply to this log-likelihood computation. This increases the returned log-likelihood value by adjusting (by a maximal amount measured in TVD, given by the budget) the probabilities produced by model to optimally match the data (within the budgetary constraints) when evaluating the log-likelihood.
mdc_store (ModelDatasetCircuitsStore, optional) – An object that bundles cached quantities along with a given model, dataset, and circuit list. If given, model and dataset and circuits should be set to None.
comm (mpi4py.MPI.Comm, optional) – When not None, an MPI communicator for distributing the computation across multiple processors.
 Returns
twoDeltaLogL (float) – 2*(loglikelihood(maximal_model,data) - loglikelihood(model,data))
Nsigma, pvalue (float) – Only returned when dof_calc_method is not None.
 pygsti.two_delta_logl_per_circuit(model, dataset, circuits=None, min_prob_clip=1e-06, prob_clip_interval=(-1000000.0, 1000000.0), radius=0.0001, poisson_picture=True, op_label_aliases=None, dof_calc_method=None, wildcard=None, mdc_store=None, comm=None)¶
Twice the per-circuit difference between the maximum and actual log-likelihood.
Contributions are aggregated over each circuit’s outcomes, but no further.
Optionally (when dof_calc_method is not None) returns parallel vectors containing the N-sigma (# std deviations from mean) and the p-value relative to the expected chi^2 distribution for each sequence.
 Parameters
model (Model) – Model of parameterized gates
dataset (DataSet) – Probability data
circuits (list of (tuples or Circuits), optional) – Each element specifies a circuit to include in the log-likelihood sum. Default value of None implies all the circuits in dataset should be used.
min_prob_clip (float, optional) – The minimum probability treated normally in the evaluation of the log-likelihood. A penalty function replaces the true log-likelihood for probabilities that lie below this threshold so that the log-likelihood never becomes undefined (which improves optimizer performance).
prob_clip_interval (2-tuple or None, optional) – (min, max) values used to clip the probabilities predicted by models during MLEGST’s search for an optimal model (if not None). If None, no clipping is performed.
radius (float, optional) – Specifies the severity of rounding used to “patch” the zero-frequency terms of the log-likelihood.
poisson_picture (boolean, optional) – Whether the log-likelihood-in-the-Poisson-picture terms should be included in the returned logl value.
op_label_aliases (dictionary, optional) – Dictionary whose keys are operation label “aliases” and whose values are tuples corresponding to what that operation label should be expanded into before querying the dataset. Defaults to the empty dictionary (no aliases defined), e.g. op_label_aliases[‘Gx^3’] = (‘Gx’,’Gx’,’Gx’)
dof_calc_method ({"all", "modeltest"}) – How model’s number of degrees of freedom (parameters) are obtained when computing the number of standard deviations and p-value relative to a chi2_k distribution, where k is the additional degrees of freedom possessed by the maximal model.
wildcard (WildcardBudget) – A wildcard budget to apply to this log-likelihood computation. This increases the returned log-likelihood value by adjusting (by a maximal amount measured in TVD, given by the budget) the probabilities produced by model to optimally match the data (within the budgetary constraints) when evaluating the log-likelihood.
mdc_store (ModelDatasetCircuitsStore, optional) – An object that bundles cached quantities along with a given model, dataset, and circuit list. If given, model and dataset and circuits should be set to None.
comm (mpi4py.MPI.Comm, optional) – When not None, an MPI communicator for distributing the computation across multiple processors.
 Returns
twoDeltaLogL_terms (numpy.ndarray)
Nsigma, pvalue (numpy.ndarray) – Only returned when dof_calc_method is not None.
 pygsti.two_delta_logl_term(n, p, f, min_prob_clip=1e-06, poisson_picture=True)¶
Term of the 2*[log(L)-upper-bound - log(L)] sum corresponding to a single circuit and spam label.
 Parameters
n (float or numpy array) – Number of samples.
p (float or numpy array) – Probability of 1st outcome (typically computed).
f (float or numpy array) – Frequency of 1st outcome (typically observed).
min_prob_clip (float, optional) – Minimum probability clip point to avoid evaluating log(number <= zero)
poisson_picture (boolean, optional) – Whether the log-likelihood-in-the-Poisson-picture terms should be included in the returned logl value.
 Returns
float or numpy array
 pygsti.hamiltonian_to_lindbladian(hamiltonian, sparse=False)¶
Construct the Lindbladian corresponding to a given Hamiltonian.
Mathematically, for a d-dimensional Hamiltonian matrix H, this routine constructs the d^2-dimensional Lindbladian matrix L whose action is given by L(rho) = -1j*sqrt(d)/2*[ H, rho ], where square brackets denote the commutator and rho is a density matrix. L is returned as a superoperator matrix that acts on vectorized density matrices.
 Parameters
hamiltonian (ndarray) – The hamiltonian matrix used to construct the Lindbladian.
sparse (bool, optional) – Whether to construct a sparse or dense (the default) matrix.
 Returns
ndarray or Scipy CSR matrix
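The commutator action above becomes a Kronecker-product superoperator once a vectorization convention is fixed. A hedged NumPy sketch, assuming the standard column-stacking convention vec(A X B) = (B.T kron A) vec(X) (pyGSTi's actual basis and ordering conventions may differ):

```python
import numpy as np

def hamiltonian_lindbladian_sketch(ham):
    # Superoperator for L(rho) = -1j*sqrt(d)/2 * [H, rho], using
    # column-stacking vectorization: H@rho -> (I kron H) vec(rho),
    # rho@H -> (H.T kron I) vec(rho).
    d = ham.shape[0]
    iden = np.eye(d)
    commutator_superop = np.kron(iden, ham) - np.kron(ham.T, iden)
    return -1j * np.sqrt(d) / 2.0 * commutator_superop
```

Applying the returned matrix to rho.flatten(order='F') reproduces the commutator action directly.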
 pygsti.stochastic_lindbladian(q, sparse=False)¶
Construct the Lindbladian corresponding to stochastic q-errors.
Mathematically, for a d-dimensional matrix q, this routine constructs the d^2-dimensional Lindbladian matrix L whose action is given by L(rho) = q*rho*q^dag where rho is a density matrix. L is returned as a superoperator matrix that acts on vectorized density matrices.
 Parameters
q (ndarray) – The matrix used to construct the Lindbladian.
sparse (bool, optional) – Whether to construct a sparse or dense (the default) matrix.
 Returns
ndarray or Scipy CSR matrix
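Under the same column-stacking vectorization assumption as above, the q*rho*q^dag action is a single Kronecker product; a minimal sketch (not pyGSTi's implementation, which supports sparse output and its own basis conventions):

```python
import numpy as np

def stochastic_lindbladian_sketch(q):
    # Superoperator for L(rho) = q rho q^dag, column-stacking convention:
    # vec(A X B) = (B.T kron A) vec(X) with A = q, B = q^dag,
    # so B.T = conj(q).
    return np.kron(q.conj(), q)
```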
 pygsti.affine_lindbladian(q, sparse=False)¶
Construct the Lindbladian corresponding to affine q-errors.
Mathematically, for a d-dimensional matrix q, this routine constructs the d^2-dimensional Lindbladian matrix L whose action is given by L(rho) = q where rho is a density matrix. L is returned as a superoperator matrix that acts on vectorized density matrices.
 Parameters
q (ndarray) – The matrix used to construct the Lindbladian.
sparse (bool, optional) – Whether to construct a sparse or dense (the default) matrix.
 Returns
ndarray or Scipy CSR matrix
 pygsti.nonham_lindbladian(Lm, Ln, sparse=False)¶
Construct the Lindbladian corresponding to generalized non-Hamiltonian (stochastic) errors.
Mathematically, for d-dimensional matrices Lm and Ln, this routine constructs the d^2-dimensional Lindbladian matrix L whose action is given by:
L(rho) = Ln*rho*Lm^dag - 1/2*(rho*Lm^dag*Ln + Lm^dag*Ln*rho)
where rho is a density matrix. L is returned as a superoperator matrix that acts on vectorized density matrices.
 Parameters
Lm (numpy.ndarray) – d-dimensional matrix.
Ln (numpy.ndarray) – d-dimensional matrix.
sparse (bool, optional) – Whether to construct a sparse or dense (the default) matrix.
 Returns
ndarray or Scipy CSR matrix
 pygsti.remove_duplicates_in_place(l, index_to_test=None)¶
Remove duplicates from the list passed as an argument.
 Parameters
l (list) – The list to remove duplicates from.
index_to_test (int, optional) – If not None, the index within the elements of l to test. For example, if all the elements of l contain 2-tuples (x,y) then set index_to_test == 1 to remove tuples with duplicate y-values.
 Returns
None
 pygsti.remove_duplicates(l, index_to_test=None)¶
Remove duplicates from a list and return the result.
 Parameters
l (iterable) – The list/set to remove duplicates from.
index_to_test (int, optional) – If not None, the index within the elements of l to test. For example, if all the elements of l contain 2-tuples (x,y) then set index_to_test == 1 to remove tuples with duplicate y-values.
 Returns
list – the list after duplicates have been removed.
 pygsti.compute_occurrence_indices(lst)¶
A 0-based list of integers specifying which occurrence, i.e. enumerated duplicate, each list item is.
For example, if lst = [‘A’,’B’,’C’,’C’,’A’] then the returned list will be [0, 0, 0, 1, 1]. This may be useful when working with DataSet objects that have collisionAction set to “keepseparate”.
 Parameters
lst (list) – The list to process.
 Returns
list
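The behavior described above amounts to counting prior appearances of each item; a short illustrative sketch (hypothetical helper name, not pyGSTi's code):

```python
from collections import defaultdict

def occurrence_indices_sketch(lst):
    # out[i] = number of times lst[i] has already appeared before index i
    seen = defaultdict(int)
    out = []
    for item in lst:
        out.append(seen[item])
        seen[item] += 1
    return out
```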
 pygsti.find_replace_tuple(t, alias_dict)¶
Replace elements of t according to rules in alias_dict.
 Parameters
t (tuple or list) – The object to perform replacements upon.
alias_dict (dictionary) – Dictionary whose keys are potential elements of t and whose values are tuples corresponding to a subsequence that the given element should be replaced with. If None, no replacement is performed.
 Returns
tuple
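The replacement rule described above (each matching element expands into a subsequence) can be sketched as follows (an illustrative stand-in, not pyGSTi's implementation):

```python
def find_replace_tuple_sketch(t, alias_dict):
    # Expand each element of t that appears as a key in alias_dict
    # into the corresponding tuple of replacement elements.
    if alias_dict is None:
        return tuple(t)
    out = []
    for el in t:
        out.extend(alias_dict.get(el, (el,)))  # unmatched elements pass through
    return tuple(out)
```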
 pygsti.find_replace_tuple_list(list_of_tuples, alias_dict)¶
Applies find_replace_tuple() to each element of list_of_tuples.
 Parameters
list_of_tuples (list) – A list of tuple objects to perform replacements upon.
alias_dict (dictionary) – Dictionary whose keys are potential elements of t and whose values are tuples corresponding to a subsequence that the given element should be replaced with. If None, no replacement is performed.
 Returns
list
 pygsti.apply_aliases_to_circuits(list_of_circuits, alias_dict)¶
Applies alias_dict to the circuits in list_of_circuits.
 Parameters
list_of_circuits (list) – A list of circuits to make replacements in.
alias_dict (dict) – A dictionary whose keys are layer Labels (or equivalent tuples or strings), and whose values are Circuits or tuples of labels.
 Returns
list
 pygsti.sorted_partitions(n)¶
Iterate over all sorted (decreasing) partitions of integer n.
A partition of n here is defined as a list of one or more nonzero integers which sum to n. Sorted partitions (those iterated over here) have their integers in decreasing order.
 Parameters
n (int) – The number to partition.
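Sorted partitions as defined above can be generated with a short recursion that caps each successive part by the previous one (an illustrative sketch, not pyGSTi's generator):

```python
def sorted_partitions_sketch(n, _max=None):
    # Yield each partition of n as a tuple of positive integers in
    # decreasing order; _max bounds the largest allowed part.
    if _max is None:
        _max = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, _max), 0, -1):
        for rest in sorted_partitions_sketch(n - first, first):
            yield (first,) + rest
```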
 pygsti.partitions(n)¶
Iterate over all partitions of integer n.
A partition of n here is defined as a list of one or more nonzero integers which sum to n. Every partition is iterated over exactly once; there are no duplicates or repetitions.
 Parameters
n (int) – The number to partition.
 pygsti.partition_into(n, nbins)¶
Iterate over all partitions of integer n into nbins bins.
Here, unlike in :function:`partition`, a “partition” is allowed to contain zeros. For example, (4,1,0) is a valid partition of 5 using 3 bins. This function fixes the number of bins and iterates over all possible length nbins partitions while allowing zeros. This is equivalent to iterating over all usual partitions of length at most nbins and inserting zeros into all possible places for partitions of length less than nbins.
 Parameters
n (int) – The number to partition.
nbins (int) – The fixed number of bins, equal to the length of all the partitions that are iterated over.
 pygsti._partition_into_slow(n, nbins)¶
Helper function for partition_into that performs the same task for a general number n.
 pygsti.incd_product(*args)¶
Like itertools.product but returns the first modified (incremented) index along with the product tuple itself.
 Parameters
*args (iterables) – Any number of iterable things that we’re taking the product of.
 pygsti.dot_mod2(m1, m2)¶
Returns the product over the integers modulo 2 of two matrices.
 Parameters
m1 (numpy.ndarray) – First matrix
m2 (numpy.ndarray) – Second matrix
 Returns
numpy.ndarray
 pygsti.multidot_mod2(mlist)¶
Returns the product over the integers modulo 2 of a list of matrices.
 Parameters
mlist (list) – A list of matrices.
 Returns
numpy.ndarray
 pygsti.det_mod2(m)¶
Returns the determinant of a matrix over the integers modulo 2 (GL(n,2)).
 Parameters
m (numpy.ndarray) – Matrix to take determinant of.
 Returns
numpy.ndarray
 pygsti.matrix_directsum(m1, m2)¶
Returns the direct sum of two square matrices of integers.
 Parameters
m1 (numpy.ndarray) – First matrix
m2 (numpy.ndarray) – Second matrix
 Returns
numpy.ndarray
 pygsti.inv_mod2(m)¶
Finds the inverse of a matrix over GL(n,2)
 Parameters
m (numpy.ndarray) – Matrix to take inverse of.
 Returns
numpy.ndarray
 pygsti.Axb_mod2(A, b)¶
Solves Ax = b over GF(2)
 Parameters
A (numpy.ndarray) – Matrix to operate on.
b (numpy.ndarray) – Vector to operate on.
 Returns
numpy.ndarray
 pygsti.gaussian_elimination_mod2(a)¶
Gaussian elimination mod 2 of a.
 Parameters
a (numpy.ndarray) – Matrix to operate on.
 Returns
numpy.ndarray
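Over GF(2), elimination uses XOR (addition mod 2) instead of subtraction; a compact sketch of the row reduction (illustrative, not pyGSTi's implementation):

```python
import numpy as np

def gaussian_elimination_mod2_sketch(a):
    # Row-reduce a binary matrix over GF(2); returns a new array.
    m = np.array(a, dtype=int) % 2
    rows, cols = m.shape
    r = 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if m[i, c]), None)
        if pivot is None:
            continue                            # no pivot in this column
        m[[r, pivot]] = m[[pivot, r]]           # swap pivot row up
        for i in range(rows):
            if i != r and m[i, c]:
                m[i] = (m[i] + m[r]) % 2        # XOR-eliminate other rows
        r += 1
        if r == rows:
            break
    return m
```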
 pygsti.diagonal_as_vec(m)¶
Returns a 1D array containing the diagonal of the input square 2D array m.
 Parameters
m (numpy.ndarray) – Matrix to operate on.
 Returns
numpy.ndarray
 pygsti.strictly_upper_triangle(m)¶
Returns a matrix containing the strictly upper triangle of m and zeros elsewhere.
 Parameters
m (numpy.ndarray) – Matrix to operate on.
 Returns
numpy.ndarray
 pygsti.diagonal_as_matrix(m)¶
Returns a diagonal matrix containing the diagonal of m.
 Parameters
m (numpy.ndarray) – Matrix to operate on.
 Returns
numpy.ndarray
 pygsti.albert_factor(d, failcount=0)¶
Returns a matrix M such that d = M M.T for symmetric d, where d and M are matrices over [0,1] mod 2.
The algorithm mostly follows the proof in “Orthogonal Matrices Over Finite Fields” by Jessie MacWilliams in The American Mathematical Monthly, Vol. 76, No. 2 (Feb. 1969), pp. 152-164.
There is generally not a unique Albert factorization, and this algorithm is randomized. It will generally return different factorizations from multiple calls.
 Parameters
d (arraylike) – Symmetric matrix mod 2.
failcount (int, optional) – UNUSED.
 Returns
numpy.ndarray
 pygsti.random_bitstring(n, p, failcount=0)¶
Constructs a random bitstring of length n with parity p
 Parameters
n (int) – Number of bits.
p (int) – Parity.
failcount (int, optional) – Internal use only.
 Returns
numpy.ndarray
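One simple way to produce a random bitstring with a prescribed parity is to draw all but one bit freely and fix the last bit to hit the target parity; a hedged sketch (this is the obvious construction, not necessarily pyGSTi's):

```python
import numpy as np

def random_bitstring_sketch(n, p, rng=None):
    # Draw n random bits, then overwrite the last bit so the
    # total number of 1s is congruent to p mod 2.
    rng = np.random.default_rng() if rng is None else rng
    bits = rng.integers(0, 2, size=n)
    bits[-1] = (p - bits[:-1].sum()) % 2
    return bits
```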
 pygsti.random_invertable_matrix(n, failcount=0)¶
Finds a random invertible matrix M over GL(n,2)
 Parameters
n (int) – matrix dimension
failcount (int, optional) – Internal use only.
 Returns
numpy.ndarray
 pygsti.random_symmetric_invertable_matrix(n)¶
Creates a random, symmetric, invertible matrix from GL(n,2)
 Parameters
n (int) – Matrix dimension.
 Returns
numpy.ndarray
 pygsti.onesify(a, failcount=0, maxfailcount=100)¶
Returns M such that M a M.T has ones along the main diagonal
 Parameters
a (numpy.ndarray) – The matrix.
failcount (int, optional) – Internal use only.
maxfailcount (int, optional) – Maximum number of tries before giving up.
 Returns
numpy.ndarray
 pygsti.permute_top(a, i)¶
Permutes the first row & col with the i’th row & col
 Parameters
a (numpy.ndarray) – The matrix to act on.
i (int) – index to permute with first row/col.
 Returns
numpy.ndarray
 pygsti.fix_top(a)¶
Computes the permutation matrix P such that the [1:t,1:t] submatrix of P a P is invertible.
 Parameters
a (numpy.ndarray) – A symmetric binary matrix with ones along the diagonal.
 Returns
numpy.ndarray
 pygsti.proper_permutation(a)¶
Computes the permutation matrix P such that all [n:t,n:t] submatrices of P a P are invertible.
 Parameters
a (numpy.ndarray) – A symmetric binary matrix with ones along the diagonal.
 Returns
numpy.ndarray
 pygsti._check_proper_permutation(a)¶
Check to see if the matrix has been properly permuted.
This should be redundant to what is already built into ‘fix_top’.
 Parameters
a (numpy.ndarray) – A matrix.
 Returns
bool
 pygsti._fastcalc¶
 pygsti.EXPM_DEFAULT_TOL¶
 pygsti.trace(m)¶
The trace of a matrix, sum_i m[i,i].
A memory leak in some versions of numpy can cause repeated calls to numpy’s trace function to eat up all available system memory; this function does not have this problem.
 Parameters
m (numpy array) – the matrix (any object that can be doubleindexed)
 Returns
element type of m – The trace of m.
 pygsti.is_hermitian(mx, tol=1e-09)¶
Test whether mx is a hermitian matrix.
 Parameters
mx (numpy array) – Matrix to test.
tol (float, optional) – Tolerance on absolute magnitude of elements.
 Returns
bool – True if mx is hermitian, otherwise False.
 pygsti.is_pos_def(mx, tol=1e-09)¶
Test whether mx is a positivedefinite matrix.
 Parameters
mx (numpy array) – Matrix to test.
tol (float, optional) – Tolerance on absolute magnitude of elements.
 Returns
bool – True if mx is positive-semidefinite, otherwise False.
 pygsti.is_valid_density_mx(mx, tol=1e-09)¶
Test whether mx is a valid density matrix (hermitian, positive-definite, and unit trace).
 Parameters
mx (numpy array) – Matrix to test.
tol (float, optional) – Tolerance on absolute magitude of elements.
 Returns
bool – True if mx is a valid density matrix, otherwise False.
 pygsti.frobeniusnorm(ar)¶
Compute the frobenius norm of an array (or matrix),
sqrt( sum( each_element_of_a^2 ) )
 Parameters
ar (numpy array) – What to compute the frobenius norm of. Note that ar can be any shape or number of dimensions.
 Returns
float or complex – depending on the element type of ar.
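A minimal numpy sketch of the formula above, sqrt of the sum of squared elements (illustrative only; the helper name is hypothetical):

```python
import numpy as np

# Sketch: sqrt of the sum of squared elements, for an array of any shape.
def frobeniusnorm_sketch(ar):
    ar = np.asarray(ar)
    return np.sqrt(np.sum(ar * ar))

a = np.array([[3.0, 0.0],
              [0.0, 4.0]])
```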
 pygsti.frobeniusnorm_squared(ar)¶
Compute the squared frobenius norm of an array (or matrix),
sum( each_element_of_a^2 )
 Parameters
ar (numpy array) – What to compute the squared frobenius norm of. Note that ar can be any shape or number of dimensions.
 Returns
float or complex – depending on the element type of ar.
 pygsti.nullspace(m, tol=1e-07)¶
Compute the nullspace of a matrix.
 Parameters
m (numpy array) – A matrix of shape (M,N) whose nullspace to compute.
tol (float , optional) – Nullspace tolerance, used when comparing singular values with zero.
 Returns
A matrix of shape (N,K) whose columns contain nullspace basis vectors.
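A common SVD-based construction of a nullspace basis can be sketched as follows (this is illustrative and not necessarily pygsti's exact code; the helper name is hypothetical):

```python
import numpy as np

# Sketch: right singular vectors whose singular values fall below tol
# span the nullspace; their conjugate transpose gives basis columns.
def nullspace_sketch(m, tol=1e-7):
    _, s, vh = np.linalg.svd(m)
    rank = int(np.sum(s > tol))
    return vh[rank:].conj().T   # columns are nullspace basis vectors

m = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])   # rank 1, so the nullspace has dimension 2
ns = nullspace_sketch(m)
```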
 pygsti.nullspace_qr(m, tol=1e-07)¶
Compute the nullspace of a matrix using the QR decomposition.
The QR decomposition is faster but less accurate than the SVD used by nullspace().
 Parameters
m (numpy array) – A matrix of shape (M,N) whose nullspace to compute.
tol (float , optional) – Nullspace tolerance, used when comparing diagonal values of R with zero.
 Returns
A matrix of shape (N,K) whose columns contain nullspace basis vectors.
 pygsti.nice_nullspace(m, tol=1e-07)¶
Computes the nullspace of a matrix, and tries to return a “nice” basis for it.
Columns of the returned value (a basis for the nullspace) each have a maximum absolute value of 1.0 and are chosen so as to align with the original matrix's basis as much as possible (the basis is found by projecting each original basis vector onto an arbitrarily-found nullspace and keeping only a set of linearly independent projections).
 Parameters
m (numpy array) – A matrix of shape (M,N) whose nullspace to compute.
tol (float , optional) – Nullspace tolerance, used when comparing diagonal values of R with zero.
 Returns
A matrix of shape (N,K) whose columns contain nullspace basis vectors.
 pygsti.normalize_columns(m, return_norms=False, ord=None)¶
Normalizes the columns of a matrix.
 Parameters
m (numpy.ndarray or scipy sparse matrix) – The matrix.
return_norms (bool, optional) – If True, also return a 1D array containing the norms of the columns (before they were normalized).
ord (int, optional) – The order of the norm. See :function:`numpy.linalg.norm`.
 Returns
normalized_m (numpy.ndarray) – The matrix after columns are normalized
column_norms (numpy.ndarray) – Only returned when return_norms=True, a 1-dimensional array of the pre-normalization norm of each column.
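For the dense case, column normalization can be sketched with numpy as follows (an illustrative sketch, not pygsti's implementation; the helper name is hypothetical):

```python
import numpy as np

# Sketch of column normalization for a dense matrix: divide each
# column by its norm (2-norm when ord is None, as in numpy.linalg.norm).
def normalize_columns_sketch(m, return_norms=False, ord=None):
    norms = np.linalg.norm(m, ord=ord, axis=0)
    normalized = m / norms
    return (normalized, norms) if return_norms else normalized

m = np.array([[3.0, 0.0],
              [4.0, 2.0]])
normalized, norms = normalize_columns_sketch(m, return_norms=True)
```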
 pygsti.column_norms(m, ord=None)¶
Compute the norms of the columns of a matrix.
 Parameters
m (numpy.ndarray or scipy sparse matrix) – The matrix.
ord (int, optional) – The order of the norm. See :function:`numpy.linalg.norm`.
 Returns
numpy.ndarray – A 1-dimensional array of the column norms (length is number of columns of m).
 pygsti.scale_columns(m, scale_values)¶
Scale each column of a matrix by a given value.
Usually used for normalization purposes, when the matrix columns represent vectors.
 Parameters
m (numpy.ndarray or scipy sparse matrix) – The matrix.
scale_values (numpy.ndarray) – A 1-dimensional array of scale values, one per column of m.
 Returns
numpy.ndarray or scipy sparse matrix – A copy of m with scaled columns, possibly with different sparsity structure.
 pygsti.columns_are_orthogonal(m, tol=1e-07)¶
Checks whether a matrix contains orthogonal columns.
The columns do not need to be normalized. In the complex case, two vectors v and w are considered orthogonal if dot(v.conj(), w) == 0.
 Parameters
m (numpy.ndarray) – The matrix to check.
tol (float, optional) – Tolerance for checking whether dot products are zero.
 Returns
bool
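The orthogonality condition above can be sketched via the Gram matrix (an illustrative sketch with a hypothetical helper name, not pygsti's implementation):

```python
import numpy as np

# Sketch: the Gram matrix m^dagger m must be diagonal, i.e. dot products
# of distinct columns vanish; normalization is not required.
def columns_are_orthogonal_sketch(m, tol=1e-7):
    gram = m.conj().T @ m
    off_diag = gram - np.diag(np.diag(gram))
    return bool(np.all(np.abs(off_diag) <= tol))

ortho = np.array([[1.0, 1.0],
                  [1.0, -1.0]])   # orthogonal but not normalized columns
```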
 pygsti.columns_are_orthonormal(m, tol=1e-07)¶
Checks whether a matrix contains orthonormal columns.
In the complex case, two vectors v and w are considered orthonormal if dot(v.conj(), w) == 0 and each satisfies dot(v.conj(), v) == 1.
 Parameters
m (numpy.ndarray) – The matrix to check.
tol (float, optional) – Tolerance for checking whether dot products are zero.
 Returns
bool
 pygsti.independent_columns(m, initial_independent_cols=None, tol=1e-07)¶
Computes the indices of the linearly-independent columns in a matrix.
Optionally starts with a “base” matrix of independent columns, so that the returned indices indicate the columns of m that are independent of all the base columns and the other independent columns of m.
 Parameters
m (numpy.ndarray or scipy sparse matrix) – The matrix.
initial_independent_cols (numpy.ndarray or scipy sparse matrix, optional) – If not None, a matrix of known-to-be-independent columns to test the columns of m against (in addition to the already-chosen independent columns of m).
tol (float, optional) – Tolerance threshold used to decide whether a singular value is nonzero (it is nonzero if it is greater than tol).
 Returns
list – A list of the independentcolumn indices of m.
 pygsti.pinv_of_matrix_with_orthogonal_columns(m)¶
TODO: docstring
 pygsti.matrix_sign(m)¶
The “sign” matrix of m
 Parameters
m (numpy.ndarray) – the matrix.
 Returns
numpy.ndarray
 pygsti.print_mx(mx, width=9, prec=4, withbrackets=False)¶
Print matrix in pretty format.
Will print real or complex matrices with a desired precision and “cell” width.
 Parameters
mx (numpy array) – the matrix (2D array) to print.
width (int, optional) – the width (in characters) of each printed element
prec (int, optional) – the precision (in characters) of each printed element
withbrackets (bool, optional) – whether to print brackets and commas to make the result something that Python can read back in.
 Returns
None
 pygsti.mx_to_string(m, width=9, prec=4, withbrackets=False)¶
Generate a “pretty-format” string for a matrix.
Will generate strings for real or complex matrices with a desired precision and “cell” width.
 Parameters
m (numpy.ndarray) – array to print.
width (int, optional) – the width (in characters) of each converted element
prec (int, optional) – the precision (in characters) of each converted element
withbrackets (bool, optional) – whether to print brackets and commas to make the result something that Python can read back in.
 Returns
string – matrix m as a pretty-formatted string.
 pygsti.mx_to_string_complex(m, real_width=9, im_width=9, prec=4)¶
Generate a “pretty-format” string for a complex-valued matrix.
 Parameters
m (numpy array) – array to format.
real_width (int, optional) – the width (in characters) of the real part of each element.
im_width (int, optional) – the width (in characters) of the imaginary part of each element.
prec (int, optional) – the precision (in characters) of each element’s real and imaginary parts.
 Returns
string – matrix m as a pretty-formatted string.
 pygsti.unitary_superoperator_matrix_log(m, mx_basis)¶
Construct the logarithm of superoperator matrix m.
This function assumes that m acts as a unitary on density-matrix space (m: rho -> U rho U^dagger) so that log(m) can be written as the action by Hamiltonian H:
log(m): rho -> -i[H, rho].
 Parameters
m (numpy array) – The superoperator matrix whose logarithm is taken
mx_basis ({'std', 'gm', 'pp', 'qt'} or Basis object) – The source and destination basis, respectively. Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt) (or a custom basis object).
 Returns
numpy array – A matrix logM, of the same shape as m, such that m = exp(logM) and logM can be written as the action rho -> -i[H, rho].
 pygsti.near_identity_matrix_log(m, tol=1e-08)¶
Construct the logarithm of superoperator matrix m that is near the identity.
If m is real, the resulting logarithm will be real.
 Parameters
m (numpy array) – The superoperator matrix whose logarithm is taken
tol (float, optional) – The tolerance used when testing for zero imaginary parts.
 Returns
numpy array – A matrix logM, of the same shape as m, such that m = exp(logM) and logM is real when m is real.
 pygsti.approximate_matrix_log(m, target_logm, target_weight=10.0, tol=1e-06)¶
Construct an approximate logarithm of superoperator matrix m that is real and near the target_logm.
The equation m = exp( logM ) is allowed to become inexact in order to make logM close to target_logm. In particular, the objective function that is minimized is (where |.| indicates the 2-norm):
|exp(logM) - m|_1 + target_weight * |logM - target_logm|^2
 Parameters
m (numpy array) – The superoperator matrix whose logarithm is taken
target_logm (numpy array) – The target logarithm
target_weight (float) – A weighting factor used to balance the exactness-of-log term with the closeness-to-target term in the optimized objective function. This value multiplies the latter term.
tol (float, optional) – Optimizer tolerance.
 Returns
logM (numpy array) – A matrix of the same shape as m.
 pygsti.real_matrix_log(m, action_if_imaginary='raise', tol=1e-08)¶
Construct a real logarithm of real matrix m.
This is possible when negative eigenvalues of m come in pairs, so that they can be viewed as complex conjugate pairs.
 Parameters
m (numpy array) – The matrix to take the logarithm of
action_if_imaginary ({"raise","warn","ignore"}, optional) – What action should be taken if a real-valued logarithm cannot be found. “raise” raises a ValueError, “warn” issues a warning, and “ignore” ignores the condition and simply returns the complex-valued result.
tol (float, optional) – An internal tolerance used when testing for equivalence and zero imaginary parts (realness).
 Returns
logM (numpy array) – A matrix logM, of the same shape as m, such that m = exp(logM)
 pygsti.column_basis_vector(i, dim)¶
Returns the ith standard basis vector in dimension dim.
 Parameters
i (int) – Basis vector index.
dim (int) – Vector dimension.
 Returns
numpy.ndarray – An array of shape (dim, 1) that is all zeros except for its ith element, which equals 1.
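As a minimal sketch of what this returns (illustrative only; the helper name is hypothetical):

```python
import numpy as np

# Sketch: a (dim, 1) column vector with a 1 in position i and zeros elsewhere.
def column_basis_vector_sketch(i, dim):
    v = np.zeros((dim, 1))
    v[i, 0] = 1.0
    return v

e2 = column_basis_vector_sketch(2, 4)
```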
 pygsti.vec(matrix_in)¶
Stacks the columns of a matrix to return a vector
 Parameters
matrix_in (numpy.ndarray) –
 Returns
numpy.ndarray
 pygsti.unvec(vector_in)¶
Slices a vector into the columns of a matrix.
 Parameters
vector_in (numpy.ndarray) –
 Returns
numpy.ndarray
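The vec/unvec pair above amounts to column-stacking and its inverse. A hedged numpy sketch (the helper names are hypothetical, and pygsti's exact return shapes may differ):

```python
import numpy as np

# Sketch of the vec/unvec round trip: vec stacks columns (Fortran order),
# unvec reshapes the vector back into a square matrix.
def vec_sketch(matrix_in):
    return matrix_in.flatten(order='F').reshape(-1, 1)

def unvec_sketch(vector_in):
    d = int(round(np.sqrt(vector_in.size)))
    return vector_in.reshape(d, d, order='F')

m = np.array([[1, 3],
              [2, 4]])
v = vec_sketch(m)
```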
 pygsti.norm1(m)¶
Returns the 1-norm of a matrix.
 Parameters
m (numpy.ndarray) – The matrix.
 Returns
numpy.ndarray
 pygsti.random_hermitian(dim)¶
Generates a random Hermitian matrix
 Parameters
dim (int) – the matrix dimension.
 Returns
numpy.ndarray
 pygsti.norm1to1(operator, num_samples=10000, mx_basis='gm', return_list=False)¶
The Hermitian 1-to-1 norm of a superoperator represented in the standard basis.
This is calculated via Monte-Carlo sampling. The definition of the Hermitian 1-to-1 norm can be found in arXiv:1109.6887.
 Parameters
operator (numpy.ndarray) – The operator matrix to take the norm of.
num_samples (int, optional) – Number of Monte-Carlo samples.
mx_basis ({'std', 'gm', 'pp', 'qt'} or Basis) – The basis of operator.
return_list (bool, optional) – Whether the entire list of sampled values is returned or just the maximum.
 Returns
float or list – Depends on the value of return_list.
 pygsti.complex_compare(a, b)¶
Comparison function for complex numbers that compares real part, then imaginary part.
 Parameters
a (complex) –
b (complex) –
 Returns
-1 if a < b, 0 if a == b, +1 if a > b
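A straightforward sketch of this comparison rule (the helper name is hypothetical; pygsti's implementation may differ in detail):

```python
# Sketch: compare complex numbers by real part first, then imaginary part,
# returning -1, 0, or +1 in the style of an old-fashioned cmp function.
def complex_compare_sketch(a, b):
    if a.real != b.real:
        return -1 if a.real < b.real else 1
    if a.imag != b.imag:
        return -1 if a.imag < b.imag else 1
    return 0
```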
 pygsti.prime_factors(n)¶
GCD algorithm to produce the prime factors of n.
 Parameters
n (int) – The number to factorize.
 Returns
list – The prime factors of n.
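To illustrate the expected output format (the docstring mentions a GCD algorithm; this hypothetical sketch uses plain trial division instead):

```python
# Illustrative trial-division factorization returning the prime factors
# of n in ascending order, with multiplicity.
def prime_factors_sketch(n):
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors
```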
 pygsti.minweight_match(a, b, metricfn=None, return_pairs=True, pass_indices_to_metricfn=False)¶
Matches the elements of two vectors, a and b, by minimizing the weight between them.
The weight is defined as the sum of metricfn(x,y) over all (x,y) pairs (x in a and y in b).
 Parameters
a (list or numpy.ndarray) – First 1D array to match elements between.
b (list or numpy.ndarray) – Second 1D array to match elements between.
metricfn (function, optional) – A function of two float parameters, x and y, which defines the cost associated with matching x with y. If None, abs(x-y) is used.
return_pairs (bool, optional) – If True, the matching is also returned.
pass_indices_to_metricfn (bool, optional) – If True, the metric function is passed two indices into the a and b arrays, respectively, instead of the values.
 Returns
weight_array (numpy.ndarray) – The array of weights corresponding to the minweight matching. The sum of this array’s elements is the minimized total weight.
pairs (list) – Only returned when return_pairs == True, a list of 2-tuple pairs of indices (ix,iy) giving the indices into a and b respectively of each matched pair. The first (ix) indices will be in continuous ascending order starting at zero.
 pygsti.minweight_match_realmxeigs(a, b, metricfn=None, pass_indices_to_metricfn=False, eps=1e-09)¶
Matches the elements of a and b, whose elements are assumed to be either real or one-half of a conjugate pair.
Matching is performed by minimizing the weight between elements, defined as the sum of metricfn(x,y) over all (x,y) pairs (x in a and y in b). If straightforward matching fails to preserve eigenvalue conjugacy relations, then real and conjugate pair eigenvalues are matched separately to ensure relations are preserved (but this can result in a suboptimal matching). A ValueError is raised when the elements of a and b have incompatible conjugacy structures (#’s of conjugate vs. real pairs).
 Parameters
a (numpy.ndarray) – First 1D array to match.
b (numpy.ndarray) – Second 1D array to match.
metricfn (function, optional) – A function of two float parameters, x and y, which defines the cost associated with matching x with y. If None, abs(x-y) is used.
pass_indices_to_metricfn (bool, optional) – If True, the metric function is passed two indices into the a and b arrays, respectively, instead of the values.
eps (float, optional) – Tolerance when checking if eigenvalues are equal to each other.
 Returns
pairs (list) – A list of 2-tuple pairs of indices (ix,iy) giving the indices into a and b respectively of each matched pair.
 pygsti._fas(a, inds, rhs, add=False)¶
Fancy Assignment, equivalent to a[*inds] = rhs but with the elements of inds (allowed to be integers, slices, or integer arrays) always specifying a generalized slice along the given dimension. This avoids some weird numpy indexing rules that make using square brackets a pain.
 pygsti._findx_shape(a, inds)¶
Returns the shape of a fancy-indexed array (a[*inds].shape)
 pygsti._findx(a, inds, always_copy=False)¶
Fancy Indexing, equivalent to a[*inds].copy() but with the elements of inds (allowed to be integers, slices, or integer arrays) always specifying a generalized slice along the given dimension. This avoids some weird numpy indexing rules that make using square brackets a pain.
 pygsti.safe_dot(a, b)¶
Performs dot(a,b) correctly when neither, either, or both arguments are sparse matrices.
 Parameters
a (numpy.ndarray or scipy.sparse matrix.) – First matrix.
b (numpy.ndarray or scipy.sparse matrix.) – Second matrix.
 Returns
numpy.ndarray or scipy.sparse matrix
 pygsti.safe_real(a, inplace=False, check=False)¶
Get the real part of a, where a can be either a dense array or a sparse matrix.
 Parameters
a (numpy.ndarray or scipy.sparse matrix.) – Array to take real part of.
inplace (bool, optional) – Whether this operation should be done inplace.
check (bool, optional) – If True, raise a ValueError if a has a nonzero imaginary part.
 Returns
numpy.ndarray or scipy.sparse matrix
 pygsti.safe_imag(a, inplace=False, check=False)¶
Get the imaginary part of a, where a can be either a dense array or a sparse matrix.
 Parameters
a (numpy.ndarray or scipy.sparse matrix.) – Array to take imaginary part of.
inplace (bool, optional) – Whether this operation should be done inplace.
check (bool, optional) – If True, raise a ValueError if a has a nonzero real part.
 Returns
numpy.ndarray or scipy.sparse matrix
 pygsti.safe_norm(a, part=None)¶
Get the frobenius norm of a matrix or vector, a, when it is either a dense array or a sparse matrix.
 Parameters
a (ndarray or scipy.sparse matrix) – The matrix or vector to take the norm of.
part ({None,'real','imag'}) – If not None, return the norm of the real or imaginary part of a.
 Returns
float
 pygsti.safe_onenorm(a)¶
Computes the 1-norm of the dense or sparse matrix a.
 Parameters
a (ndarray or sparse matrix) – The matrix or vector to take the norm of.
 Returns
float
 pygsti.csr_sum_indices(csr_matrices)¶
Precomputes the indices needed to sum a set of CSR sparse matrices.
Computes the index-arrays needed for use in :method:`csr_sum`, along with the index-pointer and column-indices arrays for constructing a “template” CSR matrix to be the destination of csr_sum.
 Parameters
csr_matrices (list) – The SciPy CSR matrices to be summed.
 Returns
ind_arrays (list) – A list of numpy arrays giving the destination data-array indices of each element of csr_matrices.
indptr, indices (numpy.ndarray) – The row-pointer and column-indices arrays specifying the sparsity structure of the destination CSR matrix.
N (int) – The dimension of the destination matrix (and of each member of csr_matrices)
 pygsti.csr_sum(data, coeffs, csr_mxs, csr_sum_indices)¶
Accelerated summation of several CSR-format sparse matrices.
:method:`csr_sum_indices` precomputes the necessary indices for summing directly into the data-array of a destination CSR sparse matrix. If data is the data-array of matrix D (for “destination”), then this method performs:
D += sum_i( coeff[i] * csr_mxs[i] )
Note that D is not returned; the sum is done internally into D’s data-array.
 Parameters
data (numpy.ndarray) – The data-array of the destination CSR matrix.
coeffs (iterable) – The weight coefficients which multiply each summed matrix.
csr_mxs (iterable) – A list of CSR matrix objects whose data-array is given by obj.data (e.g. a SciPy CSR sparse matrix).
csr_sum_indices (list) – A list of precomputed index arrays as returned by :method:`csr_sum_indices`.
 Returns
None
 pygsti.csr_sum_flat_indices(csr_matrices)¶
Precomputes quantities allowing fast computation of linear combinations of CSR sparse matrices.
The returned quantities can later be used to quickly compute a linear combination of the CSR sparse matrices csr_matrices.
Computes the index and data arrays needed for use in :method:`csr_sum_flat`, along with the index-pointer and column-indices arrays for constructing a “template” CSR matrix to be the destination of csr_sum_flat.
 Parameters
csr_matrices (list) – The SciPy CSR matrices to be summed.
 Returns
flat_dest_index_array (numpy array) – A 1D array of one element per nonzero element in any of csr_matrices, giving the destination-index of that element.
flat_csr_mx_data (numpy array) – A 1D array of the same length as flat_dest_index_array, which simply concatenates the data arrays of csr_matrices.
mx_nnz_indptr (numpy array) – A 1D array of length len(csr_matrices)+1 such that the data for the i-th element of csr_matrices lie in the index-range mx_nnz_indptr[i] to mx_nnz_indptr[i+1]-1 of the flat arrays.
indptr, indices (numpy.ndarray) – The row-pointer and column-indices arrays specifying the sparsity structure of the destination CSR matrix.
N (int) – The dimension of the destination matrix (and of each member of csr_matrices)
 pygsti.csr_sum_flat(data, coeffs, flat_dest_index_array, flat_csr_mx_data, mx_nnz_indptr)¶
Computes the summation of several CSR-format sparse matrices.
:method:`csr_sum_flat_indices` precomputes the necessary indices for summing directly into the data-array of a destination CSR sparse matrix. If data is the data-array of matrix D (for “destination”), then this method performs:
D += sum_i( coeff[i] * csr_mxs[i] )
Note that D is not returned; the sum is done internally into D’s data-array.
 Parameters
data (numpy.ndarray) – The data-array of the destination CSR matrix.
coeffs (ndarray) – The weight coefficients which multiply each summed matrix.
flat_dest_index_array (ndarray) – The index array generated by :function:`csr_sum_flat_indices`.
flat_csr_mx_data (ndarray) – The data array generated by :function:`csr_sum_flat_indices`.
mx_nnz_indptr (ndarray) – The number-of-nonzero-elements pointer array generated by :function:`csr_sum_flat_indices`.
 Returns
None
 pygsti.expm_multiply_prep(a, tol=EXPM_DEFAULT_TOL)¶
Computes “prepared” meta-info about matrix a, to be used in expm_multiply_fast.
This includes a shifted version of a.
 Parameters
a (numpy.ndarray) – the matrix that will be later exponentiated.
tol (float, optional) – Tolerance used within matrix exponentiation routines.
 Returns
tuple – A tuple of values to pass to expm_multiply_fast.
 pygsti.expm_multiply_fast(prep_a, v, tol=EXPM_DEFAULT_TOL)¶
Multiplies v by an exponentiated matrix.
 Parameters
prep_a (tuple) – A tuple of values from :function:`expm_multiply_prep` that defines the matrix to be exponentiated and holds other precomputed quantities.
v (numpy.ndarray) – Vector to multiply (take dot product with).
tol (float, optional) – Tolerance used within matrix exponentiation routines.
 Returns
numpy.ndarray
 pygsti._custom_expm_multiply_simple_core(a, b, mu, m_star, s, tol, eta)¶
A helper function. Note that this (python) version works when a is a LinearOperator as well as a SciPy CSR sparse matrix.
 pygsti.expop_multiply_prep(op, a_1_norm=None, tol=EXPM_DEFAULT_TOL)¶
Returns “prepared” meta-info about operation op, which is assumed to be traceless (so no shift is needed).
Used as input for use with _custom_expm_multiply_simple_core or fast C-reps.
 Parameters
op (scipy.sparse.linalg.LinearOperator) – The operator to exponentiate.
a_1_norm (float, optional) – The 1-norm (if computed separately) of op.
tol (float, optional) – Tolerance used within matrix exponentiation routines.
 Returns
tuple – A tuple of values to pass to expm_multiply_fast.
 pygsti.sparse_equal(a, b, atol=1e-08)¶
Checks whether two SciPy sparse matrices are (almost) equal.
 Parameters
a (scipy.sparse matrix) – First matrix.
b (scipy.sparse matrix) – Second matrix.
atol (float, optional) – The tolerance to use, passed to numpy.allclose, when comparing the elements of a and b.
 Returns
bool
 pygsti.sparse_onenorm(a)¶
Computes the 1-norm of the scipy sparse matrix a.
 Parameters
a (scipy sparse matrix) – The matrix or vector to take the norm of.
 Returns
float
 pygsti.ndarray_base(a, verbosity=0)¶
Get the base memory object for numpy array a.
This is found by following .base until it comes up None.
 Parameters
a (numpy.ndarray) – Array to get base of.
verbosity (int, optional) – Print additional debugging information if this is > 0.
 Returns
numpy.ndarray
 pygsti.to_unitary(scaled_unitary)¶
Compute the scaling factor required to turn a scalar multiple of a unitary matrix into a unitary matrix.
 Parameters
scaled_unitary (ndarray) – A scaled unitary matrix
 Returns
scale (float)
unitary (ndarray) – Such that scale * unitary == scaled_unitary.
 pygsti.sorted_eig(mx)¶
Similar to numpy.eig, but returns sorted output.
In particular, the eigenvalues and eigenvectors are sorted by eigenvalue, where sorting is done according to the (real_part, imaginary_part) tuple.
 Parameters
mx (numpy.ndarray) – Matrix to act on.
 Returns
eigenvalues (numpy.ndarray)
eigenvectors (numpy.ndarray)
 pygsti.compute_kite(eigenvalues)¶
Computes the “kite” corresponding to a list of eigenvalues.
The kite is defined as a list of integers, each indicating that there is a degenerate block of that many eigenvalues within eigenvalues. Thus the sum of the list values equals len(eigenvalues).
 Parameters
eigenvalues (numpy.ndarray) – A sorted array of eigenvalues.
 Returns
list – A list giving the multiplicity structure of eigenvalues.
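The kite construction described above can be sketched by grouping consecutive (nearly) equal entries of the sorted eigenvalue array (an illustrative sketch with a hypothetical helper name and a tol parameter not in the documented signature):

```python
import numpy as np

# Sketch: group consecutive nearly-equal entries of a sorted eigenvalue
# array into degenerate blocks; the block sizes form the "kite".
def compute_kite_sketch(eigenvalues, tol=1e-9):
    kite = [1]
    for prev, cur in zip(eigenvalues[:-1], eigenvalues[1:]):
        if abs(cur - prev) <= tol:
            kite[-1] += 1      # same degenerate block
        else:
            kite.append(1)     # start a new block
    return kite

evals = np.array([0.0, 0.0, 1.0, 1.0, 1.0, 2.0])
```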
 pygsti.find_zero_communtant_connection(u, u_inv, u0, u0_inv, kite)¶
Find a matrix R such that u_inv R u0 is diagonal AND log(R) has no projection onto the commutant of G0.
More specifically, find a matrix R such that u_inv R u0 is diagonal (so G = R G0 Rinv if G and G0 share the same eigenvalues and have eigenvectors u and u0 respectively) AND log(R) has no (zero) projection onto the commutant of G0 = u0 diag(evals) u0_inv.
 Parameters
u (numpy.ndarray) – Usually the eigenvector matrix of a gate (G).
u_inv (numpy.ndarray) – Inverse of u.
u0 (numpy.ndarray) – Usually the eigenvector matrix of the corresponding target gate (G0).
u0_inv (numpy.ndarray) – Inverse of u0.
kite (list) – The kite structure of u0.
 Returns
numpy.ndarray
 pygsti.project_onto_kite(mx, kite)¶
Project mx onto kite, so mx is zero everywhere except on the kite.
 Parameters
mx (numpy.ndarray) – Matrix to project.
kite (list) – A kite structure.
 Returns
numpy.ndarray
 pygsti.project_onto_antikite(mx, kite)¶
Project mx onto the complement of kite, so mx is zero everywhere on the kite.
 Parameters
mx (numpy.ndarray) – Matrix to project.
kite (list) – A kite structure.
 Returns
numpy.ndarray
 pygsti.remove_dependent_cols(mx, tol=1e-07)¶
Removes the linearly dependent columns of a matrix.
 Parameters
mx (numpy.ndarray) – The input matrix
 Returns
A linearly independent subset of the columns of mx.
 pygsti.intersection_space(space1, space2, tol=1e-07, use_nice_nullspace=False)¶
TODO: docstring
 pygsti.union_space(space1, space2, tol=1e-07)¶
TODO: docstring
 pygsti.jamiolkowski_angle(hamiltonian_mx)¶
TODO: docstring
 pygsti.zvals_to_dense(self, zvals, superket=True)¶
Construct the dense operator or superoperator representation of a computational basis state.
 Parameters
zvals (list or numpy.ndarray) – The z-values, each 0 or 1, defining the computational basis state.
superket (bool, optional) – If True, the superket representation of the state is returned. If False, then the complex ket representation is returned.
 Returns
numpy.ndarray
 pygsti.int64_parity(x)¶
Compute the parity of x.
Recursively divide a (64-bit) integer (x) into two equal halves and take their XOR until only 1 bit is left.
 Parameters
x (int64) –
 Returns
int64
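The fold-and-XOR scheme described above can be sketched in plain Python (an illustrative sketch with a hypothetical helper name, not pygsti's implementation):

```python
# Sketch of XOR-folding parity: repeatedly fold the 64-bit integer in
# half with XOR; the low bit of the result is the parity of the set bits.
def int64_parity_sketch(x):
    for shift in (32, 16, 8, 4, 2, 1):
        x ^= x >> shift
    return x & 1
```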
 pygsti.zvals_int64_to_dense(zvals_int, nqubits, outvec=None, trust_outvec_sparsity=False, abs_elval=None)¶
Fills a dense array with the superket representation of a computational basis state.
 Parameters
zvals_int (int64) – The array of (up to 64) z-values, encoded as the 0s and 1s in the binary representation of this integer.
nqubits (int) – The number of z-values (up to 64)
outvec (numpy.ndarray, optional) – The output array, which must be a 1D array of length 4**nqubits or None, in which case a new array is allocated.
trust_outvec_sparsity (bool, optional) – When True, it is assumed that the provided outvec starts as all zeros and so only nonzero elements of outvec need to be set.
abs_elval (float) – the value 1 / (sqrt(2)**nqubits), which can be passed here so that it doesn’t need to be recomputed on every call to this function. If None, then we just compute the value.
 Returns
numpy.ndarray
 class pygsti._ExplicitBasis(elements, labels=None, name=None, longname=None, real=False, sparse=None)¶
Bases:
Basis
A Basis whose elements are specified directly.
All explicit bases are simple: their vector space is always taken to be that of the flattened elements.
 Parameters
elements (numpy.ndarray) – The basis elements (sometimes different from the vectors)
labels (list) – The basis labels
name (str, optional) – The name of this basis. If None, then a name will be automatically generated.
longname (str, optional) – A more descriptive name for this basis. If None, then the short name will be used.
real (bool, optional) – Whether the coefficients in the expression of an arbitrary vector as a linear combination of this basis’s elements must be real.
sparse (bool, optional) – Whether the elements of this Basis are stored as sparse matrices or vectors. If None, then this is automatically determined by the type of the initial object: elements[0] (sparse=False is used when len(elements) == 0).
 Count¶
The number of custom bases, used for serialized naming
 Type
int
 Count = 0¶
 _to_nice_serialization(self)¶
 classmethod _from_nice_serialization(cls, state)¶
 property dim(self)¶
The dimension of the vector space this basis fully or partially spans. Equivalently, the length of the vector_elements of the basis.
 property size(self)¶
The number of elements (or vectorelements) in the basis.
 property elshape(self)¶
The shape of each element. Typically either a length1 or length2 tuple, corresponding to vector or matrix elements, respectively. Note that vector elements always have shape (dim,) (or (dim,1) in the sparse case).
 _copy_with_toggled_sparsity(self)¶
 __hash__(self)¶
Return hash(self).
 class pygsti._DirectSumBasis(component_bases, name=None, longname=None)¶
Bases:
LazyBasis
A basis that is the direct sum of one or more “component” bases.
Elements of this basis are the union of the basis elements on each component, each embedded into a common block-diagonal structure where each component occupies its own block. Thus, when there is more than one component, a DirectSumBasis is not a simple basis because the size of its elements is larger than the size of its vector space (which corresponds to just the diagonal blocks of its elements).
 Parameters
component_bases (iterable) – A list of the component bases. Each list elements may be either a Basis object or a tuple of arguments to :function:`Basis.cast`, e.g. (‘pp’,4).
name (str, optional) – The name of this basis. If None, the names of the component bases joined with “+” is used.
longname (str, optional) – A longer description of this basis. If None, then a long name is automatically generated.
 vector_elements¶
The “vectors” of this basis, always 1D (sparse or dense) arrays.
 Type
list
 _to_nice_serialization(self)¶
 classmethod _from_nice_serialization(cls, state)¶
 property dim(self)¶
The dimension of the vector space this basis fully or partially spans. Equivalently, the length of the vector_elements of the basis.
 property size(self)¶
The number of elements (or vectorelements) in the basis.
 property elshape(self)¶
The shape of each element. Typically either a length1 or length2 tuple, corresponding to vector or matrix elements, respectively. Note that vector elements always have shape (dim,) (or (dim,1) in the sparse case).
 __hash__(self)¶
Return hash(self).
 _lazy_build_vector_elements(self)¶
 _lazy_build_elements(self)¶
 _lazy_build_labels(self)¶
 _copy_with_toggled_sparsity(self)¶
 __eq__(self, other)¶
Return self==value.
 property vector_elements(self)¶
The “vectors” of this basis, always 1D (sparse or dense) arrays.
 Returns
list
 property to_std_transform_matrix(self)¶
Retrieve the matrix that transforms a vector from this basis to the standard basis of this basis’s dimension.
 Returns
numpy array or scipy.sparse.lil_matrix – An array of shape (dim, size) where dim is the dimension of this basis (the length of its vectors) and size is the size of this basis (its number of vectors).
 property to_elementstd_transform_matrix(self)¶
Get transformation matrix from this basis to the “element space”.
Get the matrix that transforms vectors in this basis (with length equal to the dim of this basis) to vectors in the “element space” – that is, vectors in the same standard basis that the elements of this basis are expressed in.
 Returns
numpy array – An array of shape (element_dim, size) where element_dim is the dimension, i.e. size, of the elements of this basis (e.g. 16 if the elements are 4x4 matrices) and size is the size of this basis (its number of vectors).
 create_equivalent(self, builtin_basis_name)¶
Create an equivalent basis with components of type builtin_basis_name.
Create a Basis that is equivalent in structure & dimension to this basis but whose simple components (perhaps just this basis itself) are of the builtin basis type given by builtin_basis_name.
 Parameters
builtin_basis_name (str) – The name of a builtin basis, e.g. “pp”, “gm”, or “std”. Used to construct the simple components of the returned basis.
 Returns
DirectSumBasis
 create_simple_equivalent(self, builtin_basis_name=None)¶
Create a basis of type builtin_basis_name whose elements are compatible with this basis.
Create a simple basis, one without components (e.g. a
TensorProdBasis
is a simple basis with components), of the builtin type specified, whose dimension is compatible with the elements of this basis. This function might also be named “element_equivalent”, as it returns the builtin_basis_name-analogue of the standard basis that this basis’s elements are expressed in.
 Parameters
builtin_basis_name (str, optional) – The name of the builtin basis to use. If None, then a copy of this basis is returned (if it’s simple) or this basis’s name is used to try to construct a simple and componentfree version of the same builtinbasis type.
 Returns
Basis
 class pygsti._Label¶
Bases:
object
A label used to identify a gate, circuit layer, or (sub)circuit.
A label consisting of a string along with a tuple of integers or sector-names specifying which qubits (or, more generally, parts of the Hilbert space) are acted upon by the object so-labeled.
 property depth(self)¶
The depth of this label, viewed as a subcircuit.