pygsti.tools

pyGSTi Tools Python Package

Submodules

Package Contents

Classes

NamedDict

A dictionary that also holds category names and types.

TypedDict

A dictionary that holds per-key type information.

Functions

basis_matrices(name_or_basis, dim[, sparse])

Get the elements of the specified basis type, which span the density-matrix space given by dim.

basis_longname(basis)

Get the "long name" for a particular basis, which is typically used in reports, etc.

basis_element_labels(basis, dim)

Get a list of short labels corresponding to the elements of the described basis.

is_sparse_basis(name_or_basis)

Whether a basis contains sparse matrices.

change_basis(mx, from_basis, to_basis)

Convert an operation matrix from one basis of a density matrix space to another.

create_basis_pair(mx, from_basis, to_basis)

Construct a pair of Basis objects for transforming mx between two named bases.

create_basis_for_matrix(mx, basis)

Construct a Basis object with type given by basis and dimension appropriate for transforming mx.

resize_std_mx(mx, resize, std_basis_1, std_basis_2)

Change the basis of mx to a potentially larger or smaller 'std'-type basis given by std_basis_2.

flexible_change_basis(mx, start_basis, end_basis)

Change mx from start_basis to end_basis allowing embedding expansion and contraction if needed.

resize_mx(mx[, dim_or_block_dims, resize])

Wrapper for resize_std_mx() that manipulates mx to be in another basis.

state_to_stdmx(state_vec)

Convert a state vector into a density matrix.

state_to_pauli_density_vec(state_vec)

Convert a single qubit state vector into a Liouville vector in the Pauli basis.

vec_to_stdmx(v, basis[, keep_complex])

Convert a vector in this basis to a matrix in the standard basis.

stdmx_to_vec(m, basis)

Convert a matrix in the standard basis to a vector in the Pauli basis.

chi2(model, dataset[, circuits, ...])

Computes the total (aggregate) chi^2 for a set of circuits.

chi2_per_circuit(model, dataset[, circuits, ...])

Computes the per-circuit chi^2 contributions for a set of circuits.

chi2_jacobian(model, dataset[, circuits, ...])

Compute the gradient of the chi^2 function computed by chi2().

chi2_hessian(model, dataset[, circuits, ...])

Compute the Hessian matrix of the chi2() function.

chi2_approximate_hessian(model, dataset[, circuits, ...])

Compute an approximate Hessian matrix of the chi2() function.

chialpha(alpha, model, dataset[, circuits, ...])

Compute the chi-alpha objective function.

chialpha_per_circuit(alpha, model, dataset[, ...])

Compute the per-circuit chi-alpha objective function.

chi2fn_2outcome(n, p, f[, min_prob_clip_for_weighting])

Computes chi^2 for a 2-outcome measurement.

chi2fn_2outcome_wfreqs(n, p, f)

Computes chi^2 for a 2-outcome measurement using frequency-weighting.

chi2fn(n, p, f[, min_prob_clip_for_weighting])

Computes the chi^2 term corresponding to a single outcome.

chi2fn_wfreqs(n, p, f[, min_freq_clip_for_weighting])

Computes the frequency-weighted chi^2 term corresponding to a single outcome.

calculate_edesign_estimated_runtime(edesign[, ...])

Estimate the runtime for an ExperimentDesign from gate times and batch sizes.

calculate_fisher_information_per_circuit(...[, ...])

Helper function to calculate all Fisher information terms for each circuit.

calculate_fisher_information_matrix(model, circuits[, ...])

Calculate the Fisher information matrix for a set of circuits and a model.

calculate_fisher_information_matrices_by_L(model, ...)

Calculate a set of Fisher information matrices for a set of circuits grouped by iteration.

accumulate_fim_matrix(subcircuits, num_params, ...[, ...])

accumulate_fim_matrix_per_circuit(subcircuits, ...[, ...])

pad_edesign_with_idle_lines(edesign, line_labels)

Utility to explicitly pad out ExperimentDesigns with idle lines.

bonferroni_correction(significance, numtests)

Calculates the standard Bonferroni correction.

sidak_correction(significance, numtests)

Sidak correction.

generalized_bonferroni_correction(significance, weights)

Generalized Bonferroni correction.

jamiolkowski_iso(operation_mx[, op_mx_basis, ...])

Given an operation matrix, return the corresponding Choi matrix that is normalized to have trace == 1.

jamiolkowski_iso_inv(choi_mx[, choi_mx_basis, op_mx_basis])

Given a Choi matrix, return the corresponding operation matrix.

fast_jamiolkowski_iso_std(operation_mx, op_mx_basis)

The corresponding Choi matrix in the standard basis that is normalized to have trace == 1.

fast_jamiolkowski_iso_std_inv(choi_mx, op_mx_basis)

Given a Choi matrix in the standard basis, return the corresponding operation matrix.

sum_of_negative_choi_eigenvalues(model[, weights])

Compute the amount of non-CP-ness of a model.

sums_of_negative_choi_eigenvalues(model)

Compute the amount of non-CP-ness of a model.

magnitudes_of_negative_choi_eigenvalues(model)

Compute the magnitudes of the negative eigenvalues of the Choi matrices for each gate in model.

warn_deprecated(name[, replacement])

Formats and prints a deprecation warning message.

deprecate([replacement])

Decorator for deprecating a function.

deprecate_imports(module_name, replacement_map, ...)

Utility to deprecate imports from a module.

logl(model, dataset[, circuits, min_prob_clip, ...])

The log-likelihood function.

logl_per_circuit(model, dataset[, circuits, ...])

Computes the per-circuit log-likelihood contribution for a set of circuits.

logl_jacobian(model, dataset[, circuits, ...])

The jacobian of the log-likelihood function.

logl_hessian(model, dataset[, circuits, ...])

The hessian of the log-likelihood function.

logl_approximate_hessian(model, dataset[, circuits, ...])

An approximate Hessian of the log-likelihood function.

logl_max(model, dataset[, circuits, poisson_picture, ...])

The maximum log-likelihood possible for a DataSet.

logl_max_per_circuit(model, dataset[, circuits, ...])

The vector of maximum log-likelihood contributions for each circuit, aggregated over outcomes.

two_delta_logl_nsigma(model, dataset[, circuits, ...])

See docstring for pygsti.tools.two_delta_logl()

two_delta_logl(model, dataset[, circuits, ...])

Twice the difference between the maximum and actual log-likelihood.

two_delta_logl_per_circuit(model, dataset[, circuits, ...])

Twice the per-circuit difference between the maximum and actual log-likelihood.

two_delta_logl_term(n, p, f[, min_prob_clip, ...])

Term of the 2*[log(L)-upper-bound - log(L)] sum corresponding to a single circuit and spam label.

basis_matrices(name_or_basis, dim[, sparse])

Get the elements of the specified basis type, which span the density-matrix space given by dim.

create_elementary_errorgen_dual(typ, p[, q, sparse, ...])

Construct a "dual" elementary error generator matrix in the "standard" (matrix-unit) basis.

create_elementary_errorgen(typ, p[, q, sparse])

Construct an elementary error generator as a matrix in the "standard" (matrix-unit) basis.

create_lindbladian_term_errorgen(typ, Lm[, Ln, sparse])

Construct the superoperator for a term in the common Lindbladian expansion of an error generator.

remove_duplicates_in_place(l[, index_to_test])

Remove duplicates from the list passed as an argument.

remove_duplicates(l[, index_to_test])

Remove duplicates from a list and return the result.

compute_occurrence_indices(lst)

A 0-based list of integers specifying which occurrence, i.e. enumerated duplicate, each list item is.

find_replace_tuple(t, alias_dict)

Replace elements of t according to rules in alias_dict.

find_replace_tuple_list(list_of_tuples, alias_dict)

Applies find_replace_tuple() on each element of list_of_tuples.

apply_aliases_to_circuits(list_of_circuits, alias_dict)

Applies alias_dict to the circuits in list_of_circuits.

sorted_partitions(n)

Iterate over all sorted (decreasing) partitions of integer n.

partitions(n)

Iterate over all partitions of integer n.

partition_into(n, nbins)

Iterate over all partitions of integer n into nbins bins.

incd_product(*args)

Like itertools.product but returns the first modified (incremented) index along with the product tuple itself.

lists_to_tuples(obj)

Recursively replaces lists with tuples.

dot_mod2(m1, m2)

Returns the product over the integers modulo 2 of two matrices.

multidot_mod2(mlist)

Returns the product over the integers modulo 2 of a list of matrices.

det_mod2(m)

Returns the determinant of a matrix over the integers modulo 2 (GL(n,2)).

matrix_directsum(m1, m2)

Returns the direct sum of two square matrices of integers.

inv_mod2(m)

Finds the inverse of a matrix over GL(n,2)

Axb_mod2(A, b)

Solves Ax = b over GF(2)

gaussian_elimination_mod2(a)

Gaussian elimination mod2 of a.

diagonal_as_vec(m)

Returns a 1D array containing the diagonal of the input square 2D array m.

strictly_upper_triangle(m)

Returns a matrix containing the strictly upper triangle of m and zeros elsewhere.

diagonal_as_matrix(m)

Returns a diagonal matrix containing the diagonal of m.

albert_factor(d[, failcount, rand_state])

Returns a matrix M such that d = M M.T for symmetric d, where d and M are matrices over [0,1] mod 2.

random_bitstring(n, p[, failcount, rand_state])

Constructs a random bitstring of length n with parity p

random_invertable_matrix(n[, failcount, rand_state])

Finds a random invertible matrix M over GL(n,2).

random_symmetric_invertable_matrix(n[, failcount, ...])

Creates a random, symmetric, invertible matrix from GL(n,2)

onesify(a[, failcount, maxfailcount, rand_state])

Returns M such that M a M.T has ones along the main diagonal

permute_top(a, i)

Permutes the first row & col with the i'th row & col

fix_top(a)

Computes the permutation matrix P such that the [1:t,1:t] submatrix of P a P is invertible.

proper_permutation(a)

Computes the permutation matrix P such that all [n:t,n:t] submatrices of P a P are invertible.

change_basis(mx, from_basis, to_basis)

Convert an operation matrix from one basis of a density matrix space to another.

trace(m)

The trace of a matrix, sum_i m[i,i].

is_hermitian(mx[, tol])

Test whether mx is a hermitian matrix.

is_pos_def(mx[, tol])

Test whether mx is a positive-definite matrix.

is_valid_density_mx(mx[, tol])

Test whether mx is a valid density matrix (hermitian, positive-definite, and unit trace).

frobeniusnorm(ar)

Compute the Frobenius norm of an array (or matrix).

frobeniusnorm_squared(ar)

Compute the squared Frobenius norm of an array (or matrix).

nullspace(m[, tol])

Compute the nullspace of a matrix.

nullspace_qr(m[, tol])

Compute the nullspace of a matrix using the QR decomposition.

nice_nullspace(m[, tol, orthogonalize])

Computes the nullspace of a matrix, and tries to return a "nice" basis for it.

normalize_columns(m[, return_norms, ord])

Normalizes the columns of a matrix.

column_norms(m[, ord])

Compute the norms of the columns of a matrix.

scale_columns(m, scale_values)

Scale each column of a matrix by a given value.

columns_are_orthogonal(m[, tol])

Checks whether a matrix contains orthogonal columns.

columns_are_orthonormal(m[, tol])

Checks whether a matrix contains orthonormal columns.

independent_columns(m[, initial_independent_cols, tol])

Computes the indices of the linearly-independent columns in a matrix.

pinv_of_matrix_with_orthogonal_columns(m)

TODO: docstring

matrix_sign(m)

The "sign" matrix of m

print_mx(mx[, width, prec, withbrackets])

Print matrix in pretty format.

mx_to_string(m[, width, prec, withbrackets])

Generate a "pretty-format" string for a matrix.

mx_to_string_complex(m[, real_width, im_width, prec])

Generate a "pretty-format" string for a complex-valued matrix.

unitary_superoperator_matrix_log(m, mx_basis)

Construct the logarithm of superoperator matrix m.

near_identity_matrix_log(m[, tol])

Construct the logarithm of superoperator matrix m that is near the identity.

approximate_matrix_log(m, target_logm[, ...])

Construct an approximate logarithm of superoperator matrix m that is real and near the target_logm.

real_matrix_log(m[, action_if_imaginary, tol])

Construct a real logarithm of real matrix m.

column_basis_vector(i, dim)

Returns the ith standard basis vector in dimension dim.

vec(matrix_in)

Stacks the columns of a matrix to return a vector

unvec(vector_in)

Slices a vector into the columns of a matrix.

norm1(m)

Returns the 1 norm of a matrix

random_hermitian(dim)

Generates a random Hermitian matrix

norm1to1(operator[, num_samples, mx_basis, return_list])

The Hermitian 1-to-1 norm of a superoperator represented in the standard basis.

complex_compare(a, b)

Comparison function for complex numbers that compares real part, then imaginary part.

prime_factors(n)

GCD algorithm to produce prime factors of n

minweight_match(a, b[, metricfn, return_pairs, ...])

Matches the elements of two vectors, a and b by minimizing the weight between them.

minweight_match_realmxeigs(a, b[, metricfn, ...])

Matches the elements of a and b, whose elements are assumed to be either real or one half of a conjugate pair.

safe_dot(a, b)

Performs dot(a,b) correctly when neither, either, or both arguments are sparse matrices.

safe_real(a[, inplace, check])

Get the real-part of a, where a can be either a dense array or a sparse matrix.

safe_imag(a[, inplace, check])

Get the imaginary-part of a, where a can be either a dense array or a sparse matrix.

safe_norm(a[, part])

Get the frobenius norm of a matrix or vector, a, when it is either a dense array or a sparse matrix.

safe_onenorm(a)

Computes the 1-norm of the dense or sparse matrix a.

csr_sum_indices(csr_matrices)

Precomputes the indices needed to sum a set of CSR sparse matrices.

csr_sum(data, coeffs, csr_mxs, csr_sum_indices)

Accelerated summation of several CSR-format sparse matrices.

csr_sum_flat_indices(csr_matrices)

Precomputes quantities allowing fast computation of linear combinations of CSR sparse matrices.

csr_sum_flat(data, coeffs, flat_dest_index_array, ...)

Computation of the summation of several CSR-format sparse matrices.

expm_multiply_prep(a[, tol])

Computes "prepared" meta-info about matrix a, to be used in expm_multiply_fast.

expm_multiply_fast(prep_a, v[, tol])

Multiplies v by an exponentiated matrix.

expop_multiply_prep(op[, a_1_norm, tol])

Returns "prepared" meta-info about operation op, which is assumed to be traceless (so no shift is needed).

sparse_equal(a, b[, atol])

Checks whether two Scipy sparse matrices are (almost) equal.

sparse_onenorm(a)

Computes the 1-norm of the scipy sparse matrix a.

ndarray_base(a[, verbosity])

Get the base memory object for numpy array a.

to_unitary(scaled_unitary)

Compute the scaling factor required to turn a scalar multiple of a unitary matrix to a unitary matrix.

sorted_eig(mx)

Similar to numpy.linalg.eig, but returns sorted output.

compute_kite(eigenvalues)

Computes the "kite" corresponding to a list of eigenvalues.

find_zero_communtant_connection(u, u_inv, u0, u0_inv, kite)

Find a matrix R such that u_inv R u0 is diagonal AND log(R) has no projection onto the commutant of G0.

project_onto_kite(mx, kite)

Project mx onto kite, so mx is zero everywhere except on the kite.

project_onto_antikite(mx, kite)

Project mx onto the complement of kite, so mx is zero everywhere on the kite.

remove_dependent_cols(mx[, tol])

Removes the linearly dependent columns of a matrix.

intersection_space(space1, space2[, tol, ...])

TODO: docstring

union_space(space1, space2[, tol])

TODO: docstring

jamiolkowski_angle(hamiltonian_mx)

TODO: docstring

zvals_to_dense(self, zvals[, superket])

Construct the dense operator or superoperator representation of a computational basis state.

int64_parity(x)

Compute the parity of x.

zvals_int64_to_dense(zvals_int, nqubits[, outvec, ...])

Fills a dense array with the super-ket representation of a computational basis state.

sign_fix_qr(q, r[, tol])

Change the signs of the columns of Q and rows of R to follow a convention.

parallel_apply(f, l, comm)

Apply a function, f to every element of a list, l in parallel, using MPI.

mpi4py_comm()

Get a comm object

starmap_with_kwargs(fn, num_runs, num_processors, ...)

fidelity(a, b)

Returns the quantum state fidelity between density matrices.

frobeniusdist(a, b)

Returns the frobenius distance between gate or density matrices.

frobeniusdist_squared(a, b)

Returns the square of the frobenius distance between gate or density matrices.

residuals(a, b)

Calculate residuals between the elements of two matrices

tracenorm(a)

Compute the trace norm of matrix a.

tracedist(a, b)

Compute the trace distance between matrices.

diamonddist(a, b[, mx_basis, return_x])

Returns the approximate diamond norm describing the difference between gate matrices.

jtracedist(a, b[, mx_basis])

Compute the Jamiolkowski trace distance between operation matrices.

entanglement_fidelity(a, b[, mx_basis, is_tp, is_unitary])

Returns the "entanglement" process fidelity between gate matrices.

average_gate_fidelity(a, b[, mx_basis, is_tp, is_unitary])

Computes the average gate fidelity (AGF) between two gates.

average_gate_infidelity(a, b[, mx_basis, is_tp, ...])

Computes the average gate infidelity (AGI) between two gates.

entanglement_infidelity(a, b[, mx_basis, is_tp, ...])

Returns the entanglement infidelity (EI) between gate matrices.

gateset_infidelity(model, target_model[, itype, ...])

Computes the average-over-gates of the infidelity between gates in model and the gates in target_model.

unitarity(a[, mx_basis])

Returns the "unitarity" of a channel.

fidelity_upper_bound(operation_mx)

Get an upper bound on the fidelity of the given operation matrix with any unitary operation matrix.

compute_povm_map(model, povmlbl)

Constructs a gate-like quantity for the POVM within model.

povm_fidelity(model, target_model, povmlbl)

Computes the process (entanglement) fidelity between POVM maps.

povm_jtracedist(model, target_model, povmlbl)

Computes the Jamiolkowski trace distance between POVM maps using jtracedist().

povm_diamonddist(model, target_model, povmlbl)

Computes the diamond distance between POVM maps using diamonddist().

decompose_gate_matrix(operation_mx)

Decompose a gate matrix into fixed points, axes of rotation, angles of rotation, and decay rates.

state_to_dmvec(psi)

Compute the vectorized density matrix which acts as the state psi.

dmvec_to_state(dmvec[, tol])

Compute the pure state describing the action of density matrix vector dmvec.

unitary_to_superop(u[, superop_mx_basis])

TODO: docstring

unitary_to_process_mx(u)

unitary_to_std_process_mx(u)

Compute the superoperator corresponding to unitary matrix u.

superop_is_unitary(superop_mx[, mx_basis, rank_tol])

TODO: docstring

superop_to_unitary(superop_mx[, mx_basis, ...])

TODO: docstring

process_mx_to_unitary(superop)

std_process_mx_to_unitary(superop_mx)

Compute the unitary corresponding to the (unitary-action!) super-operator superop.

spam_error_generator(spamvec, target_spamvec, mx_basis)

Construct an error generator from a SPAM vector and its target.

error_generator(gate, target_op, mx_basis[, typ, ...])

Construct the error generator from a gate and its target.

operation_from_error_generator(error_gen, target_op, ...)

Construct a gate from an error generator and a target gate.

elementary_errorgens(dim, typ, basis)

Compute the elementary error generators of a certain type.

elementary_errorgens_dual(dim, typ, basis)

Compute the set of dual-to-elementary error generators of a given type.

extract_elementary_errorgen_coefficients(errorgen, ...)

TODO: docstring

project_errorgen(errorgen, elementary_errorgen_type, ...)

Compute the projections of a gate error generator onto a set of elementary error generators.

create_elementary_errorgen_nqudit(typ, ...[, ...])

TODO: docstring - labels can be, e.g. ('H', 'XX') and basis should be a 1-qubit basis w/single-char labels

create_elementary_errorgen_nqudit_dual(typ, ...[, ...])

TODO: docstring - labels can be, e.g. ('H', 'XX') and basis should be a 1-qubit basis w/single-char labels

rotation_gate_mx(r[, mx_basis])

Construct a rotation operation matrix.

project_model(model, target_model[, projectiontypes, ...])

Construct a new model(s) by projecting the error generator of model onto some sub-space then reconstructing.

compute_best_case_gauge_transform(gate_mx, target_gate_mx)

Returns a gauge transformation that maps gate_mx into a matrix that is co-diagonal with target_gate_mx.

project_to_target_eigenspace(model, target_model[, eps])

Project each gate of model onto the eigenspace of the corresponding gate within target_model.

unitary_to_pauligate(u)

Get the linear operator on (vectorized) density matrices corresponding to an n-qubit unitary operator on states.

is_valid_lindblad_paramtype(typ)

Whether typ is a recognized Lindblad-gate parameterization type.

effect_label_to_outcome(povm_and_effect_lbl)

Extract the outcome label from a "simplified" effect label.

effect_label_to_povm(povm_and_effect_lbl)

Extract the POVM label from a "simplified" effect label.

unitary_to_pauligate(u)

Get the linear operator on (vectorized) density matrices corresponding to an n-qubit unitary operator on states.

single_qubit_gate(hx, hy, hz[, noise])

Construct the single-qubit operation matrix.

two_qubit_gate([ix, iy, iz, xi, xx, xy, xz, yi, yx, ...])

Construct the two-qubit operation matrix.

deprecate([replacement])

Decorator for deprecating a function.

cache_by_hashed_args(obj)

Decorator for caching a function's values.

timed_block(label[, time_dict, printer, verbosity, ...])

Context manager that times a block of code

time_hash()

Get string-version of current time

tvd(p, q)

Calculates the total variational distance between two probability distributions.

classical_fidelity(p, q)

Calculates the (classical) fidelity between two probability distributions.

predicted_rb_number(model, target_model[, weights, d, ...])

Predicts the RB error rate from a model.

predicted_rb_decay_parameter(model, target_model[, ...])

Computes the second largest eigenvalue of the 'L matrix' (see the L_matrix function).

rb_gauge(model, target_model[, weights, mx_basis, ...])

Computes the gauge transformation required so that the RB number matches the average model infidelity.

transform_to_rb_gauge(model, target_model[, weights, ...])

Transforms a Model into the "RB gauge" (see the RB_gauge function).

L_matrix(model, target_model[, weights])

Constructs a generalization of the 'L-matrix' linear operator on superoperators.

R_matrix_predicted_rb_decay_parameter(model, group[, ...])

Returns the second largest eigenvalue of a generalization of the 'R-matrix' [see the R_matrix function].

R_matrix(model, group[, group_to_model, weights])

Constructs a generalization of the 'R-matrix' of Proctor et al Phys. Rev. Lett. 119, 130502 (2017).

errormaps(model, target_model)

Computes the 'left-multiplied' error maps associated with a noisy gate set, along with the average error map.

gate_dependence_of_errormaps(model, target_model[, ...])

Computes the "gate-dependence of errors maps" parameter defined by

length(s)

Returns the length (the number of indices) contained in a slice.

shift(s, offset)

Returns a new slice whose start and stop points are shifted by offset.

intersect(s1, s2)

Returns the intersection of two slices (which must have the same step).

intersect_within(s1, s2)

Returns the intersection of two slices (which must have the same step).

indices(s[, n])

Returns a list of the indices specified by slice s.

list_to_slice(lst[, array_ok, require_contiguous])

Returns a slice corresponding to a given list of (integer) indices, if this is possible.

to_array(slc_or_list_like)

Returns slc_or_list_like as an index array (an integer numpy.ndarray).

divide(slc, max_len)

Divides a slice into sub-slices based on a maximum length (for each sub-slice).

slice_of_slice(slc, base_slc)

A slice that is the composition of base_slc and slc.

slice_hash(slc)

smart_cached(obj)

Decorator for applying a smart cache to a single function or method.

symplectic_form(n[, convention])

Creates the symplectic form for the number of qubits specified.

change_symplectic_form_convention(s[, outconvention])

Maps the input symplectic matrix between the 'standard' and 'directsum' symplectic form conventions.

check_symplectic(m[, convention])

Checks whether a matrix is symplectic.

inverse_symplectic(s)

Returns the inverse of a symplectic matrix over the integers mod 2.

inverse_clifford(s, p)

Returns the inverse of a Clifford gate in the symplectic representation.

check_valid_clifford(s, p)

Checks if a symplectic matrix - phase vector pair (s,p) is the symplectic representation of a Clifford.

construct_valid_phase_vector(s, pseed)

Constructs a phase vector that, when paired with the provided symplectic matrix, defines a Clifford gate.

find_postmultipled_pauli(s, p_implemented, p_target[, ...])

Finds the Pauli layer that should be appended to a circuit to implement a given Clifford.

find_premultipled_pauli(s, p_implemented, p_target[, ...])

Finds the Pauli layer that should be prepended to a circuit to implement a given Clifford.

find_pauli_layer(pvec, qubit_labels[, pauli_labels])

TODO: docstring

find_pauli_number(pvec)

TODO: docstring

compose_cliffords(s1, p1, s2, p2[, do_checks])

Multiplies two cliffords in the symplectic representation.

symplectic_kronecker(sp_factors)

Takes a kronecker product of symplectic representations.

prep_stabilizer_state(nqubits[, zvals])

Construct the (s,p) stabilizer representation for a computational basis state given by zvals.

apply_clifford_to_stabilizer_state(s, p, state_s, state_p)

Applies a clifford in the symplectic representation to a stabilizer state in the standard stabilizer representation.

pauli_z_measurement(state_s, state_p, qubit_index)

Computes the probabilities of 0/1 (+/-) outcomes from measuring a Pauli operator on a stabilizer state.

colsum(i, j, s, p, n)

A helper routine used for manipulating stabilizer state representations.

colsum_acc(acc_s, acc_p, j, s, p, n)

A helper routine used for manipulating stabilizer state representations.

stabilizer_measurement_prob(state_sp_tuple, moutcomes)

Compute the probability of a given outcome when measuring some or all of the qubits in a stabilizer state.

embed_clifford(s, p, qubit_inds, n)

Embeds the (s,p) Clifford symplectic representation into a larger symplectic representation.

compute_internal_gate_symplectic_representations([gllist])

Creates a dictionary of the symplectic representations of 'standard' Clifford gates.

symplectic_rep_of_clifford_circuit(circuit[, ...])

Returns the symplectic representation of the composite Clifford implemented by the specified Clifford circuit.

symplectic_rep_of_clifford_layer(layer[, n, q_labels, ...])

Constructs the symplectic representation of the n-qubit Clifford implemented by a single quantum circuit layer.

one_q_clifford_symplectic_group_relations()

Gives the group relationship between the 'I', 'H', 'P', 'HP', 'PH', and 'HPH' up-to-Paulis operators.

unitary_is_clifford(unitary)

Returns True if the unitary is a Clifford gate (w.r.t the standard basis), and False otherwise.

unitary_to_symplectic(u[, flagnonclifford])

Returns the symplectic representation of a one-qubit or two-qubit Clifford unitary.

random_symplectic_matrix(n[, convention, rand_state])

Returns a symplectic matrix of dimensions 2n x 2n sampled uniformly at random from the symplectic group S(n).

random_clifford(n[, rand_state])

Returns a Clifford, in the symplectic representation, sampled uniformly at random from the n-qubit Clifford group.

random_phase_vector(s, n[, rand_state])

Generates a uniformly random phase vector for a n-qubit Clifford.

bitstring_for_pauli(p)

Get the bitstring corresponding to a Pauli.

apply_internal_gate_to_symplectic(s, gate_name, ...[, ...])

Applies a Clifford gate to the n-qubit Clifford gate specified by the 2n x 2n symplectic matrix.

compute_num_cliffords(n)

The number of Clifford gates in the n-qubit Clifford group.

compute_num_symplectics(n)

The number of elements in the symplectic group S(n) over the 2-element finite field.

compute_num_cosets(n)

Returns the number of different cosets for the symplectic group S(n) over the 2-element finite field.

symplectic_innerproduct(v, w)

Returns the symplectic inner product of two vectors in F_2^(2n).

symplectic_transvection(k, v)

Applies transvection Z k to v.

int_to_bitstring(i, n)

Converts integer i to a length-n array of bits.

bitstring_to_int(b, n)

Converts an n-bit string b to an integer between 0 and 2^n - 1.

find_symplectic_transvection(x, y)

A utility function for selecting a random Clifford element.

compute_symplectic_matrix(i, n)

Returns the 2n x 2n symplectic matrix, over the finite field containing 0 and 1, with the "canonical" index i.

compute_symplectic_label(gn[, n])

Returns the "canonical" index of 2n x 2n symplectic matrix gn over the finite field containing 0 and 1.

random_symplectic_index(n[, rand_state])

The index of a uniformly random 2n x 2n symplectic matrix over the finite field containing 0 and 1.

Attributes

gmvec_to_stdmx

ppvec_to_stdmx

qtvec_to_stdmx

stdvec_to_stdmx

stdmx_to_ppvec

stdmx_to_gmvec

stdmx_to_stdvec

TOL

EXPM_DEFAULT_TOL

IMAG_TOL

id2x2

sigmax

sigmay

sigmaz

sigmaii

sigmaix

sigmaiy

sigmaiz

sigmaxi

sigmaxx

sigmaxy

sigmaxz

sigmayi

sigmayx

sigmayy

sigmayz

sigmazi

sigmazx

sigmazy

sigmazz

pygsti.tools.basis_matrices(name_or_basis, dim, sparse=False)

Get the elements of the specified basis type, which span the density-matrix space given by dim.

Parameters

name_or_basis: {‘std’, ‘gm’, ‘pp’, ‘qt’} or Basis

The basis type. Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt). If a Basis object, then the basis matrices are contained therein, and its dimension is checked to match dim.

dim: int

The dimension of the density-matrix space.

sparse: bool, optional

Whether any built matrices should be SciPy CSR sparse matrices or dense numpy arrays (the default).

Returns

list

A list of N numpy arrays each of shape (dmDim, dmDim), where dmDim is the matrix-dimension of the overall “embedding” density matrix (the sum of dim_or_block_dims) and N is the dimension of the density-matrix space, equal to sum( block_dim_i^2 ).
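For example, a minimal sketch (assuming pyGSTi is importable as pygsti) that retrieves the Pauli-product basis for a single qubit, whose density-matrix space has dimension 4:

    import pygsti

    # Pauli-product basis for one qubit: density-matrix space dimension 4,
    # so each element is a 2x2 complex matrix (dmDim = 2).
    pp_elements = pygsti.tools.basis_matrices('pp', 4)
    print(len(pp_elements))       # 4 elements (normalized I, X, Y, Z)
    print(pp_elements[0].shape)   # (2, 2)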

pygsti.tools.basis_longname(basis)

Get the “long name” for a particular basis, which is typically used in reports, etc.

Parameters

basis: Basis or str

The basis or standard-basis-name.

Returns

string

pygsti.tools.basis_element_labels(basis, dim)

Get a list of short labels corresponding to the elements of the described basis.

These labels are typically used to label the rows/columns of a box-plot of a matrix in the basis.

Parameters

basis: {‘std’, ‘gm’, ‘pp’, ‘qt’}

Which basis the model is represented in. Allowed options are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp) and Qutrit (qt). If the basis is not known, then an empty list is returned.

dim: int or list

Dimension of basis matrices. If a list of integers, then gives the dimensions of the terms in a direct-sum decomposition of the density matrix space acted on by the basis.

Returns

list of strings

A list of length dim, whose elements label the basis elements.
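A short illustrative sketch (the exact label strings are chosen by pyGSTi; those shown in the comment are only the expected form):

    import pygsti

    # Labels for the Pauli-product basis on one qubit (dim = 4).
    labels = pygsti.tools.basis_element_labels('pp', 4)
    print(labels)   # typically ('I', 'X', 'Y', 'Z')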

pygsti.tools.is_sparse_basis(name_or_basis)

Whether a basis contains sparse matrices.

Parameters

name_or_basis: Basis or str

The basis or standard-basis-name.

Returns

bool

pygsti.tools.change_basis(mx, from_basis, to_basis)

Convert an operation matrix from one basis of a density matrix space to another.

Parameters

mx: numpy array

The operation matrix (a 2D square array) in the from_basis basis.

from_basis: {‘std’, ‘gm’, ‘pp’, ‘qt’} or Basis object

The source basis. Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt) (or a custom basis object).

to_basis: {‘std’, ‘gm’, ‘pp’, ‘qt’} or Basis object

The destination basis. Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt) (or a custom basis object).

Returns

numpy array

The given operation matrix converted to the to_basis basis. Array size is the same as mx.
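For example, a minimal sketch that builds a single-qubit X-gate superoperator in the 'std' basis via unitary_to_std_process_mx() (documented later in this module) and converts it to the Pauli-product basis:

    import numpy as np
    import pygsti

    X_unitary = np.array([[0, 1], [1, 0]], dtype=complex)
    X_superop_std = pygsti.tools.unitary_to_std_process_mx(X_unitary)   # 4x4, 'std' basis
    X_superop_pp = pygsti.tools.change_basis(X_superop_std, 'std', 'pp')
    print(np.round(X_superop_pp.real, 6))   # real-valued in the 'pp' basis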

pygsti.tools.create_basis_pair(mx, from_basis, to_basis)

Construct a pair of Basis objects for transforming mx between two named bases.

Construct a pair of Basis objects with types from_basis and to_basis, and dimension appropriate for transforming mx (if they’re not already given by from_basis or to_basis being a Basis rather than a str).

Parameters

mx: numpy.ndarray

A matrix, assumed to be square and have a dimension that is a perfect square.

from_basis: {‘std’, ‘gm’, ‘pp’, ‘qt’} or Basis object

The source basis (named because it’s usually the source basis for a basis change). Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt) (or a custom basis object). If a custom basis object is provided, its dimension should be equal to sqrt(mx.shape[0]) == sqrt(mx.shape[1]).

to_basis: {‘std’, ‘gm’, ‘pp’, ‘qt’} or Basis object

The destination basis (named because it’s usually the destination basis for a basis change). Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt) (or a custom basis object). If a custom basis object is provided, its dimension should be equal to sqrt(mx.shape[0]) == sqrt(mx.shape[1]).

Returns

from_basis, to_basis : Basis

pygsti.tools.create_basis_for_matrix(mx, basis)

Construct a Basis object with type given by basis and dimension appropriate for transforming mx.

The dimension is taken from mx (if it’s not already given by basis) and equals sqrt(mx.shape[0]).

Parameters

mx: numpy.ndarray

A matrix, assumed to be square and have a dimension that is a perfect square.

basis: {‘std’, ‘gm’, ‘pp’, ‘qt’} or Basis object

A basis name or Basis object. Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt) (or a custom basis object). If a custom basis object is provided, its dimension must equal sqrt(mx.shape[0]), as this will be checked.

Returns

Basis

pygsti.tools.resize_std_mx(mx, resize, std_basis_1, std_basis_2)

Change the basis of mx to a potentially larger or smaller ‘std’-type basis given by std_basis_2.

(mx is assumed to be in the ‘std’-type basis given by std_basis_1.)

This is possible when the two ‘std’-type bases have the same “embedding dimension”, equal to the sum of their block dimensions. If, for example, std_basis_1 has block dimensions (kite structure) of (4,2,1) then mx, expressed as a sum of 4^2 + 2^2 + 1^2 = 21 basis elements, can be “embedded” within a larger ‘std’ basis having a single block with dimension 7 (7^2 = 49 elements).

When std_basis_2 is smaller than std_basis_1 the reverse happens and mx is irreversibly truncated, or “contracted” to a basis having a particular kite structure.

Parameters

mx: numpy array

A square matrix in the std_basis_1 basis.

resize: {‘expand’, ‘contract’}

Whether mx can be expanded or contracted.

std_basis_1: Basis

The ‘std’-type basis that mx is currently in.

std_basis_2: Basis

The ‘std’-type basis that mx should be converted to.

Returns

numpy.ndarray

pygsti.tools.flexible_change_basis(mx, start_basis, end_basis)

Change mx from start_basis to end_basis allowing embedding expansion and contraction if needed.

(see resize_std_mx() for more details).

Parameters

mx: numpy array

The operation matrix (a 2D square array) in the start_basis basis.

start_basis: Basis

The source basis.

end_basis: Basis

The destination basis.

Returns

numpy.ndarray

pygsti.tools.resize_mx(mx, dim_or_block_dims=None, resize=None)

Wrapper for resize_std_mx(), that manipulates mx to be in another basis.

This function first constructs two ‘std’-type bases using dim_or_block_dims and sum(dim_or_block_dims). The matrix mx is converted from the former to the latter when resize == “expand”, and from the latter to the former when resize == “contract”.

Parameters

mx: numpy array

Matrix of size N x N, where N is the dimension of the density matrix space, i.e. sum( dimOrBlockDims_i^2 )

dim_or_block_dims: int or list of ints

Structure of the density-matrix space. Gives the matrix dimensions of each block.

resize: {‘expand’, ‘contract’}

Whether mx should be expanded or contracted.

Returns

numpy.ndarray

pygsti.tools.state_to_stdmx(state_vec)

Convert a state vector into a density matrix.

Parameters

state_vec: list or tuple

State vector in the standard (sigma-z) basis.

Returns

numpy.ndarray

A density matrix of shape (d,d), corresponding to the pure state given by the length-d array, state_vec.
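A minimal sketch: the |0> computational-basis state should map to the density matrix [[1, 0], [0, 0]].

    import numpy as np
    import pygsti

    rho = pygsti.tools.state_to_stdmx(np.array([1, 0], dtype=complex))
    print(rho)   # expected: [[1.+0.j, 0.+0.j], [0.+0.j, 0.+0.j]]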

pygsti.tools.state_to_pauli_density_vec(state_vec)

Convert a single qubit state vector into a Liouville vector in the Pauli basis.

Parameters

state_vec: list or tuple

State vector in the sigma-z basis, len(state_vec) == 2

Returns

numpy array

The 2x2 density matrix of the pure state given by state_vec, given as a 4x1 column vector in the Pauli basis.

pygsti.tools.vec_to_stdmx(v, basis, keep_complex=False)

Convert a vector in this basis to a matrix in the standard basis.

Parameters

v: numpy array

The vector, of length 4 or 16 respectively.

basis: {‘std’, ‘gm’, ‘pp’, ‘qt’} or Basis

The basis type. Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt). If a Basis object, then the basis matrices are contained therein, and its dimension is checked to match v.

keep_complex: bool, optional

If True, leave the final (output) array elements as complex numbers when v is complex. Usually, the final elements are real (even though v is complex) and so when keep_complex=False the elements are forced to be real and the returned array is float (not complex) valued.

Returns

numpy array

The matrix, 2x2 or 4x4 depending on nqubits

pygsti.tools.gmvec_to_stdmx
pygsti.tools.ppvec_to_stdmx
pygsti.tools.qtvec_to_stdmx
pygsti.tools.stdvec_to_stdmx
pygsti.tools.stdmx_to_vec(m, basis)

Convert a matrix in the standard basis to a vector in the Pauli basis.

Parameters

m: numpy array

The matrix, shape 2x2 (1Q) or 4x4 (2Q)

basis: {‘std’, ‘gm’, ‘pp’, ‘qt’} or Basis

The basis type. Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt). If a Basis object, then the basis matrices are contained therein, and its dimension is checked to match m.

Returns

numpy array

The vector, length 4 or 16 respectively.
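This is the inverse of vec_to_stdmx() above; a round-trip sketch for a single-qubit density matrix (the returned vector may be a column vector, depending on pyGSTi's conventions):

    import numpy as np
    import pygsti

    rho = np.array([[0.5, 0.0], [0.0, 0.5]], dtype=complex)   # maximally mixed state
    v_pp = pygsti.tools.stdmx_to_vec(rho, 'pp')               # length-4 Pauli vector
    rho_back = pygsti.tools.vec_to_stdmx(v_pp, 'pp', keep_complex=True)
    print(np.allclose(rho, rho_back))                         # True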

pygsti.tools.stdmx_to_ppvec
pygsti.tools.stdmx_to_gmvec
pygsti.tools.stdmx_to_stdvec
pygsti.tools.chi2(model, dataset, circuits=None, min_prob_clip_for_weighting=0.0001, prob_clip_interval=(-10000, 10000), op_label_aliases=None, mdc_store=None, comm=None, mem_limit=None)

Computes the total (aggregate) chi^2 for a set of circuits.

The chi^2 test statistic obtained by summing up the contributions of a given set of circuits or all the circuits available in a dataset. For the gradient or Hessian, see the chi2_jacobian() and chi2_hessian() functions.

Parameters

model: Model

The model used to specify the probabilities and SPAM labels

dataset: DataSet

The data used to specify frequencies and counts

circuits: list of Circuits or tuples, optional

List of circuits whose terms will be included in chi^2 sum. Default value (None) means “all strings in dataset”.

min_prob_clip_for_weighting: float, optional

defines the clipping interval for the statistical weight.

prob_clip_interval: tuple, optional

A (min, max) tuple that specifies the minimum (possibly negative) and maximum values allowed for probabilities generated by the model. If the model gives probabilities outside this range they are clipped to min or max. These values can be quite generous, as the optimizers are quite tolerant of badly behaved probabilities.

op_label_aliases: dictionary, optional

Dictionary whose keys are operation label “aliases” and whose values are tuples corresponding to what that operation label should be expanded into before querying the dataset. Defaults to the empty dictionary (no aliases defined) e.g. op_label_aliases[‘Gx^3’] = (‘Gx’,’Gx’,’Gx’)

mdc_store: ModelDatasetCircuitsStore, optional

An object that bundles cached quantities along with a given model, dataset, and circuit list. If given, model and dataset and circuits should be set to None.

comm: mpi4py.MPI.Comm, optional

When not None, an MPI communicator for distributing the computation across multiple processors.

mem_limit: int, optional

A rough memory limit in bytes which restricts the amount of intermediate values that are computed and stored.

Returns

chi2: float

chi^2 value, equal to the sum of chi^2 terms from all specified circuits
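A usage sketch showing how this function relates to the gradient and per-circuit variants documented below; model and dataset are placeholders for a pyGSTi Model and DataSet constructed by the caller:

    import pygsti

    def chi2_summary(model, dataset):
        """model: a pygsti Model; dataset: a pygsti DataSet -- both built by the caller."""
        total = pygsti.tools.chi2(model, dataset)                     # aggregate over all circuits in dataset
        per_circuit = pygsti.tools.chi2_per_circuit(model, dataset)   # one value per circuit
        grad = pygsti.tools.chi2_jacobian(model, dataset)             # length model.num_params
        return total, per_circuit, grad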

pygsti.tools.chi2_per_circuit(model, dataset, circuits=None, min_prob_clip_for_weighting=0.0001, prob_clip_interval=(-10000, 10000), op_label_aliases=None, mdc_store=None, comm=None, mem_limit=None)

Computes the per-circuit chi^2 contributions for a set of circuits.

This function returns the same value as chi2() except the contributions from different circuits are not summed but returned as an array (the contributions of all the outcomes of a given circuit are summed together).

Parameters

model: Model

The model used to specify the probabilities and SPAM labels

dataset: DataSet

The data used to specify frequencies and counts

circuits: list of Circuits or tuples, optional

List of circuits whose terms will be included in chi^2 sum. Default value (None) means “all strings in dataset”.

min_prob_clip_for_weighting: float, optional

defines the clipping interval for the statistical weight.

prob_clip_interval: tuple, optional

A (min, max) tuple that specifies the minimum (possibly negative) and maximum values allowed for probabilities generated by the model. If the model gives probabilities outside this range they are clipped to min or max. These values can be quite generous, as the optimizers are quite tolerant of badly behaved probabilities.

op_label_aliases: dictionary, optional

Dictionary whose keys are operation label “aliases” and whose values are tuples corresponding to what that operation label should be expanded into before querying the dataset. Defaults to the empty dictionary (no aliases defined) e.g. op_label_aliases[‘Gx^3’] = (‘Gx’,’Gx’,’Gx’)

mdc_store: ModelDatasetCircuitsStore, optional

An object that bundles cached quantities along with a given model, dataset, and circuit list. If given, model and dataset and circuits should be set to None.

comm: mpi4py.MPI.Comm, optional

When not None, an MPI communicator for distributing the computation across multiple processors.

mem_limit: int, optional

A rough memory limit in bytes which restricts the amount of intermediate values that are computed and stored.

Returns

chi2: numpy.ndarray

Array of length either len(circuits) or len(dataset.keys()). Values are the chi2 contributions of the corresponding circuit aggregated over outcomes.

pygsti.tools.chi2_jacobian(model, dataset, circuits=None, min_prob_clip_for_weighting=0.0001, prob_clip_interval=(-10000, 10000), op_label_aliases=None, mdc_store=None, comm=None, mem_limit=None)

Compute the gradient of the chi^2 function computed by chi2().

The returned value holds the derivatives of the chi^2 function with respect to model’s parameters.

Parameters

model: Model

The model used to specify the probabilities and SPAM labels

dataset: DataSet

The data used to specify frequencies and counts

circuits: list of Circuits or tuples, optional

List of circuits whose terms will be included in chi^2 sum. Default value (None) means “all strings in dataset”.

min_prob_clip_for_weighting: float, optional

defines the clipping interval for the statistical weight.

prob_clip_interval: tuple, optional

A (min, max) tuple that specifies the minimum (possibly negative) and maximum values allowed for probabilities generated by the model. If the model gives probabilities outside this range they are clipped to min or max. These values can be quite generous, as the optimizers are quite tolerant of badly behaved probabilities.

op_label_aliases: dictionary, optional

Dictionary whose keys are operation label “aliases” and whose values are tuples corresponding to what that operation label should be expanded into before querying the dataset. Defaults to the empty dictionary (no aliases defined) e.g. op_label_aliases[‘Gx^3’] = (‘Gx’,’Gx’,’Gx’)

mdc_store: ModelDatasetCircuitsStore, optional

An object that bundles cached quantities along with a given model, dataset, and circuit list. If given, model and dataset and circuits should be set to None.

comm: mpi4py.MPI.Comm, optional

When not None, an MPI communicator for distributing the computation across multiple processors.

mem_limit: int, optional

A rough memory limit in bytes which restricts the amount of intermediate values that are computed and stored.

Returns

numpy array

The gradient vector of length model.num_params, the number of model parameters.

pygsti.tools.chi2_hessian(model, dataset, circuits=None, min_prob_clip_for_weighting=0.0001, prob_clip_interval=(-10000, 10000), op_label_aliases=None, mdc_store=None, comm=None, mem_limit=None)

Compute the Hessian matrix of the chi2() function.

Parameters

model: Model

The model used to specify the probabilities and SPAM labels

dataset: DataSet

The data used to specify frequencies and counts

circuits: list of Circuits or tuples, optional

List of circuits whose terms will be included in chi^2 sum. Default value (None) means “all strings in dataset”.

min_prob_clip_for_weighting: float, optional

defines the clipping interval for the statistical weight.

prob_clip_interval: tuple, optional

A (min, max) tuple that specifies the minimum (possibly negative) and maximum values allowed for probabilities generated by the model. If the model gives probabilities outside this range they are clipped to min or max. These values can be quite generous, as the optimizers are quite tolerant of badly behaved probabilities.

op_label_aliases: dictionary, optional

Dictionary whose keys are operation label “aliases” and whose values are tuples corresponding to what that operation label should be expanded into before querying the dataset. Defaults to the empty dictionary (no aliases defined) e.g. op_label_aliases[‘Gx^3’] = (‘Gx’,’Gx’,’Gx’)

mdc_store: ModelDatasetCircuitsStore, optional

An object that bundles cached quantities along with a given model, dataset, and circuit list. If given, model and dataset and circuits should be set to None.

comm: mpi4py.MPI.Comm, optional

When not None, an MPI communicator for distributing the computation across multiple processors.

mem_limit: int, optional

A rough memory limit in bytes which restricts the amount of intermediate values that are computed and stored.

Returns

numpy array or None

On the root processor, the Hessian matrix of shape (nModelParams, nModelParams), where nModelParams = model.num_params. None on non-root processors.

pygsti.tools.chi2_approximate_hessian(model, dataset, circuits=None, min_prob_clip_for_weighting=0.0001, prob_clip_interval=(-10000, 10000), op_label_aliases=None, mdc_store=None, comm=None, mem_limit=None)

Compute an approximate Hessian matrix of the chi2() function.

This approximation neglects terms proportional to the Hessian of the probabilities w.r.t. the model parameters (which can take a long time to compute). See logl_approximate_hessian for details on the analogous approximation for the log-likelihood Hessian.

Parameters

model: Model

The model used to specify the probabilities and SPAM labels

dataset: DataSet

The data used to specify frequencies and counts

circuits: list of Circuits or tuples, optional

List of circuits whose terms will be included in chi^2 sum. Default value (None) means “all strings in dataset”.

min_prob_clip_for_weighting: float, optional

defines the clipping interval for the statistical weight.

prob_clip_interval: tuple, optional

A (min, max) tuple that specifies the minimum (possibly negative) and maximum values allowed for probabilities generated by the model. If the model gives probabilities outside this range they are clipped to min or max. These values can be quite generous, as the optimizers are quite tolerant of badly behaved probabilities.

op_label_aliases: dictionary, optional

Dictionary whose keys are operation label “aliases” and whose values are tuples corresponding to what that operation label should be expanded into before querying the dataset. Defaults to the empty dictionary (no aliases defined) e.g. op_label_aliases[‘Gx^3’] = (‘Gx’,’Gx’,’Gx’)

mdc_store: ModelDatasetCircuitsStore, optional

An object that bundles cached quantities along with a given model, dataset, and circuit list. If given, model and dataset and circuits should be set to None.

comm: mpi4py.MPI.Comm, optional

When not None, an MPI communicator for distributing the computation across multiple processors.

mem_limit: int, optional

A rough memory limit in bytes which restricts the amount of intermediate values that are computed and stored.

Returns

numpy array or None

On the root processor, the approximate Hessian matrix of shape (nModelParams, nModelParams), where nModelParams = model.num_params. None on non-root processors.

pygsti.tools.chialpha(alpha, model, dataset, circuits=None, pfratio_stitchpt=0.01, pfratio_derivpt=0.01, prob_clip_interval=(-10000, 10000), radius=None, op_label_aliases=None, mdc_store=None, comm=None, mem_limit=None)

Compute the chi-alpha objective function.

Parameters

alpha: float

The alpha parameter, which lies in the interval (0,1].

model: Model

The model used to specify the probabilities and SPAM labels

dataset: DataSet

The data used to specify frequencies and counts

circuits: list of Circuits or tuples, optional

List of circuits whose terms will be included in chi-alpha sum. Default value (None) means “all strings in dataset”.

pfratio_stitchpt: float, optional

The x-value (x = probability/frequency ratio) below which the chi-alpha function is replaced with its second-order Taylor expansion.

pfratio_derivpt: float, optional

The x-value at which the Taylor expansion derivatives are evaluated.

prob_clip_interval: tuple, optional

A (min, max) tuple that specifies the minimum (possibly negative) and maximum values allowed for probabilities generated by model. If the model gives probabilities outside this range they are clipped to min or max. These values can be quite generous, as the optimizers are quite tolerant of badly behaved probabilities.

radius: float, optional

If radius is not None then a “harsh” method of regularizing the zero-frequency terms (where the local function = N*p) is used. If radius is None, then fmin is used to handle the zero-frequency terms.

op_label_aliases: dictionary, optional

Dictionary whose keys are operation label “aliases” and whose values are tuples corresponding to what that operation label should be expanded into before querying the dataset. Defaults to the empty dictionary (no aliases defined) e.g. op_label_aliases[‘Gx^3’] = (‘Gx’,’Gx’,’Gx’)

mdc_store: ModelDatasetCircuitsStore, optional

An object that bundles cached quantities along with a given model, dataset, and circuit list. If given, model and dataset and circuits should be set to None.

comm: mpi4py.MPI.Comm, optional

When not None, an MPI communicator for distributing the computation across multiple processors.

mem_limit: int, optional

A rough memory limit in bytes which restricts the amount of intermediate values that are computed and stored.

Returns

float

pygsti.tools.chialpha_per_circuit(alpha, model, dataset, circuits=None, pfratio_stitchpt=0.01, pfratio_derivpt=0.01, prob_clip_interval=(-10000, 10000), radius=None, op_label_aliases=None, mdc_store=None, comm=None, mem_limit=None)

Compute the per-circuit chi-alpha objective function.

Parameters

alpha: float

The alpha parameter, which lies in the interval (0,1].

model: Model

The model used to specify the probabilities and SPAM labels

dataset: DataSet

The data used to specify frequencies and counts

circuits: list of Circuits or tuples, optional

List of circuits whose terms will be included in chi-alpha sum. Default value (None) means “all strings in dataset”.

pfratio_stitchpt: float, optional

The x-value (x = probability/frequency ratio) below which the chi-alpha function is replaced with its second-order Taylor expansion.

pfratio_derivpt: float, optional

The x-value at which the Taylor expansion derivatives are evaluated.

prob_clip_interval: tuple, optional

A (min, max) tuple that specifies the minimum (possibly negative) and maximum values allowed for probabilities generated by model. If the model gives probabilities outside this range they are clipped to min or max. These values can be quite generous, as the optimizers are quite tolerant of badly behaved probabilities.

radius: float, optional

If radius is not None then a “harsh” method of regularizing the zero-frequency terms (where the local function = N*p) is used. If radius is None, then fmin is used to handle the zero-frequency terms.

op_label_aliases: dictionary, optional

Dictionary whose keys are operation label “aliases” and whose values are tuples corresponding to what that operation label should be expanded into before querying the dataset. Defaults to the empty dictionary (no aliases defined) e.g. op_label_aliases[‘Gx^3’] = (‘Gx’,’Gx’,’Gx’)

mdc_store: ModelDatasetCircuitsStore, optional

An object that bundles cached quantities along with a given model, dataset, and circuit list. If given, model and dataset and circuits should be set to None.

comm: mpi4py.MPI.Comm, optional

When not None, an MPI communicator for distributing the computation across multiple processors.

mem_limit: int, optional

A rough memory limit in bytes which restricts the amount of intermediate values that are computed and stored.

Returns

numpy.ndarray

Array of length either len(circuits) or len(dataset.keys()). Values are the chi-alpha contributions of the corresponding circuit aggregated over outcomes.

pygsti.tools.chi2fn_2outcome(n, p, f, min_prob_clip_for_weighting=0.0001)

Computes chi^2 for a 2-outcome measurement.

The chi-squared function for a 2-outcome measurement using a clipped probability for the statistical weighting.

Parameters

n: float or numpy array

Number of samples.

p: float or numpy array

Probability of 1st outcome (typically computed).

f: float or numpy array

Frequency of 1st outcome (typically observed).

min_prob_clip_for_weighting: float, optional

Defines clipping interval (see return value).

Returns

float or numpy array

n(p-f)^2 / (cp(1-cp)), where cp is the value of p clipped to the interval (min_prob_clip_for_weighting, 1-min_prob_clip_for_weighting)
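The returned quantity is simple enough to reproduce directly with NumPy; the following is an illustrative re-implementation of the formula above, not pyGSTi's own code:

    import numpy as np

    def chi2_2outcome_sketch(n, p, f, min_prob_clip_for_weighting=1e-4):
        # cp is p clipped to [min_prob_clip_for_weighting, 1 - min_prob_clip_for_weighting]
        cp = np.clip(p, min_prob_clip_for_weighting, 1.0 - min_prob_clip_for_weighting)
        return n * (p - f)**2 / (cp * (1.0 - cp))

    # 100 samples, model probability 0.52, observed frequency 0.50
    print(chi2_2outcome_sketch(100, 0.52, 0.50))   # ~0.16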

pygsti.tools.chi2fn_2outcome_wfreqs(n, p, f)

Computes chi^2 for a 2-outcome measurement using frequency-weighting.

The chi-squared function for a 2-outcome measurement using the observed frequency in the statistical weight.

Parameters

n: float or numpy array

Number of samples.

p: float or numpy array

Probability of 1st outcome (typically computed).

f: float or numpy array

Frequency of 1st outcome (typically observed).

Returns

float or numpy array

n(p-f)^2 / (f′(1-f′)), where f′ = (f·n + 1)/(n + 2) is the frequency value used in the statistical weighting (this prevents divide-by-zero errors)

pygsti.tools.chi2fn(n, p, f, min_prob_clip_for_weighting=0.0001)

Computes the chi^2 term corresponding to a single outcome.

The chi-squared term for a single outcome of a multi-outcome measurement using a clipped probability for the statistical weighting.

Parameters

n: float or numpy array

Number of samples.

p: float or numpy array

Probability of 1st outcome (typically computed).

f: float or numpy array

Frequency of 1st outcome (typically observed).

min_prob_clip_for_weighting: float, optional

Defines clipping interval (see return value).

Returns

float or numpy array

n(p-f)^2 / cp , where cp is the value of p clipped to the interval (min_prob_clip_for_weighting, 1-min_prob_clip_for_weighting)

pygsti.tools.chi2fn_wfreqs(n, p, f, min_freq_clip_for_weighting=0.0001)

Computes the frequency-weighted chi^2 term corresponding to a single outcome.

The chi-squared term for a single outcome of a multi-outcome measurement using the observed frequency in the statistical weight.

Parameters

n: float or numpy array

Number of samples.

p: float or numpy array

Probability of 1st outcome (typically computed).

f: float or numpy array

Frequency of 1st outcome (typically observed).

min_freq_clip_for_weighting: float, optional

The minimum frequency used in the weighting, i.e. the largest weighting factor is 1 / min_freq_clip_for_weighting.

Returns

float or numpy array

pygsti.tools.calculate_edesign_estimated_runtime(edesign, gate_time_dict=None, gate_time_1Q=None, gate_time_2Q=None, measure_reset_time=0.0, interbatch_latency=0.0, total_shots_per_circuit=1000, shots_per_circuit_per_batch=None, circuits_per_batch=None)

Estimate the runtime for an ExperimentDesign from gate times and batch sizes.

The rough model is that the required circuit shots are split into batches, where each batch runs a subset of the circuits for some fraction of the needed shots. One round consists of running all batches once, i.e. collecting some shots for all circuits, and rounds are repeated until the required number of shots is met for all circuits.

In addition to gate times, the user can also provide the time at the end of each circuit for measurement and/or reset, as well as the latency between batches for classical upload/ communication of the next set of circuits. Since times are user-provided, this function makes no assumption on the units of time, only that a consistent unit is used for all times.

Parameters

edesign: ExperimentDesign

An experiment design containing all required circuits.

gate_time_dict: dict

Dictionary with keys as either gate names or gate labels (for qubit-specific overrides) and values as gate time in user-specified units. All operations in the circuits of edesign must be specified. Either gate_time_dict or both gate_time_1Q and gate_time_2Q must be specified.

gate_time_1Q: float

Gate time in user-specified units for all operations acting on one qubit. Either gate_time_dict or both gate_time_1Q and gate_time_2Q must be specified.

gate_time_2Q: float

Gate time in user-specified units for all operations acting on more than one qubit. Either gate_time_dict or both gate_time_1Q and gate_time_2Q must be specified.

measure_reset_time: float

Measurement and/or reset time in user-specified units. This is applied once for every circuit.

interbatch_latency: float

Time between batches in user-specified units.

total_shots_per_circuit: int

Total number of shots per circuit. Together with shots_per_circuit_per_batch, this will determine the total number of rounds needed.

shots_per_circuit_per_batch: int

Number of shots to do for each circuit within a batch. Together with total_shots_per_circuit, this will determine the total number of rounds needed. If None, this is set to the total shots, meaning that only one round is done.

circuits_per_batch: int

Number of circuits to include in each batch. Together with the number of circuits in edesign, this will determine the number of batches in each round. If None, this is set to the total number of circuits such that only one batch is done.

Returns

float

The estimated time to run the experiment design.
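
The batching arithmetic described above can be sketched as follows; the variable values are hypothetical and this illustrates the rough model rather than the exact implementation:

import math

num_circuits = 500                     # hypothetical number of circuits in edesign
total_shots_per_circuit = 1000
shots_per_circuit_per_batch = 250
circuits_per_batch = 100
time_per_circuit_execution = 5e-6      # hypothetical: gate times + measure/reset, per shot
interbatch_latency = 0.5

rounds = math.ceil(total_shots_per_circuit / shots_per_circuit_per_batch)
batches_per_round = math.ceil(num_circuits / circuits_per_batch)
shot_time = num_circuits * total_shots_per_circuit * time_per_circuit_execution
latency_time = rounds * batches_per_round * interbatch_latency
estimated_runtime = shot_time + latency_time   # in the same (user-chosen) time units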

pygsti.tools.calculate_fisher_information_per_circuit(regularized_model, circuits, approx=False, verbosity=1, comm=None, mem_limit=None)

Helper function to calculate all Fisher information terms for each circuit.

This function can be used to pre-generate a cache for the calculate_fisher_information_matrix() function, and this should be done for computational efficiency when computing many Fisher information matrices.

Parameters

regularized_model: OpModel

The model used to calculate the terms of the Fisher information matrix. This model must already be “regularized” such that there are no small probabilities, usually by adding a small amount of SPAM error.

circuits: list

List of circuits to compute Fisher information for.

approx: bool, optional (default False)

When True, use the approximate Fisher information, which drops the Hessian term. This is significantly faster to compute than including the Hessian.

verbosity: int, optional (default 1)

Used to control the level of output printed by a VerbosityPrinter object.

commmpi4py.MPI.Comm, optional

When not None, an MPI communicator for distributing the computation across multiple processors.

mem_limitint, optional

A rough memory limit in bytes which is used to determine job allocation when there are multiple processors.

Returns

fisher_info_terms: dict

Dictionary where keys are circuits and values are (num_params, num_params) Fisher information matrices for a single circuit.

pygsti.tools.calculate_fisher_information_matrix(model, circuits, num_shots=1, term_cache=None, regularize_spam=True, approx=False, mem_efficient_mode=False, circuit_chunk_size=100, verbosity=1, comm=None, mem_limit=None)

Calculate the Fisher information matrix for a set of circuits and a model.

Note that the model should be regularized so that no probability is very small, for numerical stability. This is done by default for models with a dense SPAM parameterization, but must be done manually if this is not the case (e.g. CPTP parameterization).

Parameters

model: OpModel

The model used to calculate the terms of the Fisher information matrix.

circuits: list

List of circuits in the experiment design.

num_shots: int or dict

If int, specifies how many shots each circuit gets. If dict, keys must be circuits and values are per-circuit counts.

term_cache: dict or None

If provided, should have circuits as keys and per-circuit Fisher information matrices as values, i.e. the output of calculate_fisher_information_per_circuit(). This cache will be updated with any additional circuits that need to be calculated in the given circuit list.

regularize_spam: bool

If True, depolarizing SPAM noise is added to prevent 0 probabilities for numerical stability. Note that this may fail if the model does not have a dense SPAM parameterization. In that case, pass an already “regularized” model and set this to False.

approx: bool, optional (default False)

When True, use the approximate Fisher information, which drops the Hessian term. This is significantly faster to compute than including the Hessian.

mem_efficient_mode: bool, optional (default False)

If true avoid constructing the intermediate term cache to save on memory.

circuit_chunk_size: int, optional (default 100)

Used in conjunction with mem_efficient_mode. This sets the maximum number of circuits for which the per-circuit contributions to the Fisher information are constructed at any one time.

verbosity: int, optional (default 1)

Used to control the level of output printed by a VerbosityPrinter object.

commmpi4py.MPI.Comm, optional

When not None, an MPI communicator for distributing the computation across multiple processors.

mem_limitint, optional

A rough memory limit in bytes which is used to determine job allocation when there are multiple processors.

Returns

fisher_information: numpy.ndarray

Fisher information matrix of size (num_params, num_params)
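
A hedged usage sketch, combining this function with the per-circuit helper documented above (regularized_model and circuits are assumed to already exist; per the docstrings, the model passed to the per-circuit helper should already be regularized):

import pygsti

term_cache = pygsti.tools.calculate_fisher_information_per_circuit(regularized_model, circuits)
fim = pygsti.tools.calculate_fisher_information_matrix(regularized_model, circuits,
                                                       num_shots=100, term_cache=term_cache,
                                                       regularize_spam=False)
# fim is a (num_params, num_params) numpy array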

pygsti.tools.calculate_fisher_information_matrices_by_L(model, circuit_lists, Ls, num_shots=1, term_cache=None, regularize_spam=True, cumulative=True, approx=False, mem_efficient_mode=False, circuit_chunk_size=100, verbosity=1, comm=None, mem_limit=None)

Calculate a set of Fisher information matrices for a set of circuits grouped by iteration.

Parameters

model: OpModel

The model used to calculate the terms of the Fisher information matrix.

circuit_lists: list of lists of circuits or CircuitLists

Circuit lists for the experiment design for each L. Most likely from the value of the circuit_lists attribute of most experiment design objects.

Ls: list of ints

A list of integer values corresponding to the circuit lengths associated with each circuit list as passed in with circuit_lists.

num_shots: int or dict

If int, specifies how many shots each circuit gets. If dict, keys must be circuits and values are per-circuit counts.

term_cache: dict or None

If provided, should have circuits as keys and per-circuit Fisher information matrices as values, i.e. the output of calculate_fisher_information_per_circuit(). This cache will be updated with any additional circuits that need to be calculated in the given circuit list.

regularize_spam: bool

If True, depolarizing SPAM noise is added to prevent 0 probabilities for numerical stability. Note that this may fail if the model does not have a dense SPAM parameterization. In that case, pass an already “regularized” model and set this to False.

cumulative: bool

Whether to include Fisher information matrices for lower L (True) or not.

approx: bool, optional (default False)

When True, use the approximate Fisher information, which drops the Hessian term. This is significantly faster to compute than including the Hessian.

mem_efficient_mode: bool, optional (default False)

If true avoid constructing the intermediate term cache to save on memory.

circuit_chunk_size: int, optional (default 100)

Used in conjunction with mem_efficient_mode. This sets the maximum number of circuits for which the per-circuit contributions to the Fisher information are constructed at any one time.

verbosity: int, optional (default 1)

Used to control the level of output printed by a VerbosityPrinter object.

commmpi4py.MPI.Comm, optional

When not None, an MPI communicator for distributing the computation across multiple processors.

mem_limitint, optional

A rough memory limit in bytes which is used to determine job allocation when there are multiple processors.

Returns

fisher_information_by_L: dict

Dictionary with circuit lengths L as keys and Fisher information matrices as values.

pygsti.tools.accumulate_fim_matrix(subcircuits, num_params, num_shots, outcomes, ps, js, printer, hs=None, approx=False)
pygsti.tools.accumulate_fim_matrix_per_circuit(subcircuits, num_params, outcomes, ps, js, printer, hs=None, approx=False)
pygsti.tools.pad_edesign_with_idle_lines(edesign, line_labels)

Utility to explicitly pad out ExperimentDesigns with idle lines.

Parameters

edesign: ExperimentDesign

The edesign to be padded.

line_labels: tuple of int or str

Full line labels for the padded edesign.

Returns

ExperimentDesign

An edesign where all circuits have been padded out with missing idle lines

pygsti.tools.bonferroni_correction(significance, numtests)

Calculates the standard Bonferroni correction.

This is used for reducing the “local” significance for > 1 statistical hypothesis test so as to maintain a “global” significance (i.e., a family-wise error rate) equal to significance.

Parameters

significancefloat

Significance of each individual test.

numtestsint

The number of hypothesis tests performed.

Returns

The Bonferroni-corrected local significance, given by significance / numtests.

pygsti.tools.sidak_correction(significance, numtests)

Sidak correction.

The Sidak correction computes a “local” significance from a desired “global” (family-wise) significance, and is slightly less conservative than the Bonferroni correction.

Parameters

significancefloat

Significance of each individual test.

numtestsint

The number of hypothesis tests performed.

Returns

float
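
For comparison, minimal sketches of the two corrections (the Bonferroni form follows the formula stated above; the Sidak form is the standard textbook formula and is stated here as an assumption about this function's behavior):

def bonferroni_local_significance(significance, numtests):
    # divide the global significance evenly across the tests
    return significance / numtests

def sidak_local_significance(significance, numtests):
    # choose the local level so that 1 - (1 - local)**numtests == significance
    return 1.0 - (1.0 - significance)**(1.0 / numtests)

# e.g. for a global 5% level over 10 tests:
#   bonferroni_local_significance(0.05, 10)  ->  0.005
#   sidak_local_significance(0.05, 10)       ->  ~0.00512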

pygsti.tools.generalized_bonferroni_correction(significance, weights, numtests=None, nested_method='bonferroni', tol=1e-10)

Generalized Bonferroni correction.

Parameters

significancefloat

Significance of each individual test.

weightsarray-like

An array of non-negative floating-point weights, one per individual test, that sum to 1.0.

numtestsint

The number of hypothesis tests performed.

nested_method{‘bonferroni’, ‘sidak’}

Which method is used to find the significance of the composite test.

tolfloat, optional

Tolerance when checking that the weights add to 1.0.

Returns

float

pygsti.tools.jamiolkowski_iso(operation_mx, op_mx_basis='pp', choi_mx_basis='pp')

Given an operation matrix, return the corresponding Choi matrix that is normalized to have trace == 1.

Parameters

operation_mxnumpy array

the operation matrix to compute Choi matrix of.

op_mx_basisBasis object

The basis of operation_mx (the source basis). Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt) (or a custom basis object).

choi_mx_basisBasis object

The basis for the returned Choi matrix (the destination basis). Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt) (or a custom basis object).

Returns

numpy array

the Choi matrix, normalized to have trace == 1, in the desired basis.
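
Since complete positivity corresponds to a positive semidefinite Choi matrix, the Choi eigenvalues can be inspected directly. A short sketch (operation_mx is an assumed, already-constructed process matrix in the 'pp' basis):

import numpy as np
from pygsti.tools import jamiolkowski_iso

choi = jamiolkowski_iso(operation_mx, op_mx_basis='pp', choi_mx_basis='pp')
evals = np.linalg.eigvalsh(choi)          # Choi matrix is Hermitian for a Hermiticity-preserving map
negativity = evals[evals < 0].sum()       # equals 0 for a completely positive map
# see also sum_of_negative_choi_eigenvalues() further below, which aggregates this over a model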

pygsti.tools.jamiolkowski_iso_inv(choi_mx, choi_mx_basis='pp', op_mx_basis='pp')

Given a choi matrix, return the corresponding operation matrix.

This function performs the inverse of jamiolkowski_iso().

Parameters

choi_mxnumpy array

the Choi matrix, normalized to have trace == 1, to compute operation matrix for.

choi_mx_basisBasis object

The basis of choi_mx (the source basis). Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt) (or a custom basis object).

op_mx_basisBasis object

The basis for the returned operation matrix (the destination basis). Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt) (or a custom basis object).

Returns

numpy array

operation matrix in the desired basis.

pygsti.tools.fast_jamiolkowski_iso_std(operation_mx, op_mx_basis)

The corresponding Choi matrix in the standard basis that is normalized to have trace == 1.

This routine only computes the case of the Choi matrix being in the standard (matrix unit) basis, but does so more quickly than jamiolkowski_iso() and so is particularly useful when only the eigenvalues of the Choi matrix are needed.

Parameters

operation_mxnumpy array

the operation matrix to compute Choi matrix of.

op_mx_basisBasis object

The basis of operation_mx (the source basis; the returned Choi matrix is always in the standard basis). Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt) (or a custom basis object).

Returns

numpy array

the Choi matrix, normalized to have trace == 1, in the std basis.

pygsti.tools.fast_jamiolkowski_iso_std_inv(choi_mx, op_mx_basis)

Given a choi matrix in the standard basis, return the corresponding operation matrix.

This function performs the inverse of fast_jamiolkowski_iso_std().

Parameters

choi_mxnumpy array

the Choi matrix in the standard (matrix units) basis, normalized to have trace == 1, to compute operation matrix for.

op_mx_basisBasis object

The basis for the returned operation matrix (the destination basis; choi_mx is assumed to be in the standard basis). Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt) (or a custom basis object).

Returns

numpy array

operation matrix in the desired basis.

pygsti.tools.sum_of_negative_choi_eigenvalues(model, weights=None)

Compute the amount of non-CP-ness of a model.

This is defined (somewhat arbitrarily) by summing the negative eigenvalues of the Choi matrix for each gate in model.

Parameters

modelModel

The model to act on.

weightsdict

A dictionary of weights used to multiply the negative eigenvalues of different gates. Keys are operation labels, values are floating point numbers.

Returns

float

the sum of negative eigenvalues of the Choi matrix for each gate.

pygsti.tools.sums_of_negative_choi_eigenvalues(model)

Compute the amount of non-CP-ness of a model.

This is defined (somewhat arbitrarily) by summing the negative eigenvalues of the Choi matrix for each gate in model separately. This function is different from sum_of_negative_choi_eigenvalues() in that it returns sums separately for each operation of model.

Parameters

modelModel

The model to act on.

Returns

list of floats

each element == sum of the negative eigenvalues of the Choi matrix for the corresponding gate (as ordered by model.operations.iteritems()).

pygsti.tools.magnitudes_of_negative_choi_eigenvalues(model)

Compute the magnitudes of the negative eigenvalues of the Choi matrices for each gate in model.

Parameters

modelModel

The model to act on.

Returns

list of floats

list of the magnitudes of all negative Choi eigenvalues. The length of this list will vary based on how many negative eigenvalues are found, as positive eigenvalues contribute nothing to this list.

pygsti.tools.warn_deprecated(name, replacement=None)

Formats and prints a deprecation warning message.

Parameters

namestr

The name of the function that is now deprecated.

replacementstr, optional

the name of the function that should replace it.

Returns

None

pygsti.tools.deprecate(replacement=None)

Decorator for deprecating a function.

Parameters

replacementstr, optional

the name of the function that should replace it.

Returns

function

pygsti.tools.deprecate_imports(module_name, replacement_map, warning_msg)

Utility to deprecate imports from a module.

This works by swapping the underlying module in the import mechanisms with a ModuleType object that overrides attribute lookup to check against the replacement map.

Note that this will slow down module attribute lookup substantially. If you need to deprecate multiple names, DO NOT call this method more than once on a given module! Instead, use the replacement map to batch multiple deprecations into one call. When using this method, plan to remove the deprecated paths altogether sooner rather than later.

Parameters

module_namestr

The fully-qualified name of the module whose names have been deprecated.

replacement_map{name: function}

A map of each deprecated name to a factory which will be called with no arguments when importing the name.

warning_msgstr

A message to be displayed as a warning when importing a deprecated name. Optionally, this may include the format placeholder {name}, which will be formatted with the deprecated name.

Returns

None

pygsti.tools.TOL = 1e-20
pygsti.tools.logl(model, dataset, circuits=None, min_prob_clip=1e-06, prob_clip_interval=(-1000000.0, 1000000.0), radius=0.0001, poisson_picture=True, op_label_aliases=None, wildcard=None, mdc_store=None, comm=None, mem_limit=None)

The log-likelihood function.

Parameters

modelModel

Model of parameterized gates

datasetDataSet

Probability data

circuitslist of (tuples or Circuits), optional

Each element specifies a circuit to include in the log-likelihood sum. Default value of None implies all the circuits in dataset should be used.

min_prob_clipfloat, optional

The minimum probability treated normally in the evaluation of the log-likelihood. A penalty function replaces the true log-likelihood for probabilities that lie below this threshold so that the log-likelihood never becomes undefined (which improves optimizer performance).

prob_clip_interval2-tuple or None, optional

(min,max) values used to clip the probabilities predicted by models during MLEGST’s search for an optimal model (if not None). if None, no clipping is performed.

radiusfloat, optional

Specifies the severity of rounding used to “patch” the zero-frequency terms of the log-likelihood.

poisson_pictureboolean, optional

Whether the log-likelihood-in-the-Poisson-picture terms should be included in the returned logl value.

op_label_aliasesdictionary, optional

Dictionary whose keys are operation label “aliases” and whose values are tuples corresponding to what that operation label should be expanded into before querying the dataset. Defaults to the empty dictionary (no aliases defined) e.g. op_label_aliases[‘Gx^3’] = (‘Gx’,’Gx’,’Gx’)

wildcardWildcardBudget

A wildcard budget to apply to this log-likelihood computation. This increases the returned log-likelihood value by adjusting (by a maximal amount measured in TVD, given by the budget) the probabilities produced by model to optimally match the data (within the budgetary constraints) when evaluating the log-likelihood.

mdc_storeModelDatasetCircuitsStore, optional

An object that bundles cached quantities along with a given model, dataset, and circuit list. If given, model and dataset and circuits should be set to None.

commmpi4py.MPI.Comm, optional

When not None, an MPI communicator for distributing the computation across multiple processors.

mem_limitint, optional

A rough memory limit in bytes which restricts the amount of intermediate values that are computed and stored.

Returns

float

The log likelihood

pygsti.tools.logl_per_circuit(model, dataset, circuits=None, min_prob_clip=1e-06, prob_clip_interval=(-1000000.0, 1000000.0), radius=0.0001, poisson_picture=True, op_label_aliases=None, wildcard=None, mdc_store=None, comm=None, mem_limit=None)

Computes the per-circuit log-likelihood contribution for a set of circuits.

Parameters

modelModel

Model of parameterized gates

datasetDataSet

Probability data

circuitslist of (tuples or Circuits), optional

Each element specifies a circuit to include in the log-likelihood sum. Default value of None implies all the circuits in dataset should be used.

min_prob_clipfloat, optional

The minimum probability treated normally in the evaluation of the log-likelihood. A penalty function replaces the true log-likelihood for probabilities that lie below this threshold so that the log-likelihood never becomes undefined (which improves optimizer performance).

prob_clip_interval2-tuple or None, optional

(min,max) values used to clip the probabilities predicted by models during MLEGST’s search for an optimal model (if not None). if None, no clipping is performed.

radiusfloat, optional

Specifies the severity of rounding used to “patch” the zero-frequency terms of the log-likelihood.

poisson_pictureboolean, optional

Whether the log-likelihood-in-the-Poisson-picture terms should be included in the returned logl value.

op_label_aliasesdictionary, optional

Dictionary whose keys are operation label “aliases” and whose values are tuples corresponding to what that operation label should be expanded into before querying the dataset. Defaults to the empty dictionary (no aliases defined) e.g. op_label_aliases[‘Gx^3’] = (‘Gx’,’Gx’,’Gx’)

wildcardWildcardBudget

A wildcard budget to apply to this log-likelihood computation. This increases the returned log-likelihood value by adjusting (by a maximal amount measured in TVD, given by the budget) the probabilities produced by model to optimally match the data (within the budgetary constraints) when evaluating the log-likelihood.

mdc_storeModelDatasetCircuitsStore, optional

An object that bundles cached quantities along with a given model, dataset, and circuit list. If given, model and dataset and circuits should be set to None.

commmpi4py.MPI.Comm, optional

When not None, an MPI communicator for distributing the computation across multiple processors.

mem_limitint, optional

A rough memory limit in bytes which restricts the amount of intermediate values that are computed and stored.

Returns

numpy.ndarray

Array of length either len(circuits) or len(dataset.keys()). Values are the log-likelihood contributions of the corresponding circuit aggregated over outcomes.

pygsti.tools.logl_jacobian(model, dataset, circuits=None, min_prob_clip=1e-06, prob_clip_interval=(-1000000.0, 1000000.0), radius=0.0001, poisson_picture=True, op_label_aliases=None, mdc_store=None, comm=None, mem_limit=None, verbosity=0)

The jacobian of the log-likelihood function.

Parameters

modelModel

Model of parameterized gates (including SPAM)

datasetDataSet

Probability data

circuitslist of (tuples or Circuits), optional

Each element specifies a circuit to include in the log-likelihood sum. Default value of None implies all the circuits in dataset should be used.

min_prob_clipfloat, optional

The minimum probability treated normally in the evaluation of the log-likelihood. A penalty function replaces the true log-likelihood for probabilities that lie below this threshold so that the log-likelihood never becomes undefined (which improves optimizer performance).

prob_clip_interval2-tuple or None, optional

(min,max) values used to clip the probabilities predicted by models during MLEGST’s search for an optimal model (if not None). if None, no clipping is performed.

radiusfloat, optional

Specifies the severity of rounding used to “patch” the zero-frequency terms of the log-likelihood.

poisson_pictureboolean, optional

Whether the Poisson-picture log-likelihood should be differentiated.

op_label_aliasesdictionary, optional

Dictionary whose keys are operation label “aliases” and whose values are tuples corresponding to what that operation label should be expanded into before querying the dataset. Defaults to the empty dictionary (no aliases defined) e.g. op_label_aliases[‘Gx^3’] = (‘Gx’,’Gx’,’Gx’)

mdc_storeModelDatasetCircuitsStore, optional

An object that bundles cached quantities along with a given model, dataset, and circuit list. If given, model and dataset and circuits should be set to None.

commmpi4py.MPI.Comm, optional

When not None, an MPI communicator for distributing the computation across multiple processors.

mem_limitint, optional

A rough memory limit in bytes which restricts the amount of intermediate values that are computed and stored.

verbosityint, optional

How much detail to print to stdout.

Returns

numpy array

array of shape (M,), where M is the length of the vectorized model.

pygsti.tools.logl_hessian(model, dataset, circuits=None, min_prob_clip=1e-06, prob_clip_interval=(-1000000.0, 1000000.0), radius=0.0001, poisson_picture=True, op_label_aliases=None, mdc_store=None, comm=None, mem_limit=None, verbosity=0)

The hessian of the log-likelihood function.

Parameters

modelModel

Model of parameterized gates (including SPAM)

datasetDataSet

Probability data

circuitslist of (tuples or Circuits), optional

Each element specifies a circuit to include in the log-likelihood sum. Default value of None implies all the circuits in dataset should be used.

min_prob_clipfloat, optional

The minimum probability treated normally in the evaluation of the log-likelihood. A penalty function replaces the true log-likelihood for probabilities that lie below this threshold so that the log-likelihood never becomes undefined (which improves optimizer performance).

prob_clip_interval2-tuple or None, optional

(min,max) values used to clip the probabilities predicted by models during MLEGST’s search for an optimal model (if not None). if None, no clipping is performed.

radiusfloat, optional

Specifies the severity of rounding used to “patch” the zero-frequency terms of the log-likelihood.

poisson_pictureboolean, optional

Whether the Poisson-picture log-likelihood should be differentiated.

op_label_aliasesdictionary, optional

Dictionary whose keys are operation label “aliases” and whose values are tuples corresponding to what that operation label should be expanded into before querying the dataset. Defaults to the empty dictionary (no aliases defined) e.g. op_label_aliases[‘Gx^3’] = (‘Gx’,’Gx’,’Gx’)

mdc_storeModelDatasetCircuitsStore, optional

An object that bundles cached quantities along with a given model, dataset, and circuit list. If given, model and dataset and circuits should be set to None.

commmpi4py.MPI.Comm, optional

When not None, an MPI communicator for distributing the computation across multiple processors.

mem_limitint, optional

A rough memory limit in bytes which restricts the amount of intermediate values that are computed and stored.

verbosityint, optional

How much detail to print to stdout.

Returns

numpy array or None

On the root processor, the Hessian matrix of shape (nModelParams, nModelParams), where nModelParams = model.num_params. None on non-root processors.

pygsti.tools.logl_approximate_hessian(model, dataset, circuits=None, min_prob_clip=1e-06, prob_clip_interval=(-1000000.0, 1000000.0), radius=0.0001, poisson_picture=True, op_label_aliases=None, mdc_store=None, comm=None, mem_limit=None, verbosity=0)

An approximate Hessian of the log-likelihood function.

An approximation to the true Hessian is computed using just the Jacobian (and not the Hessian) of the probabilities w.r.t. the model parameters. Let J = d(probs)/d(params) and denote the Hessian of the log-likelihood w.r.t. the probabilities as d2(logl)/dprobs2 (a diagonal matrix indexed by the term, i.e. probability, of the log-likelihood). Then this function computes:

H = J * d2(logl)/dprobs2 * J.T

This simply neglects the d2(probs)/d(params)^2 terms of the true Hessian. Since this curvature is expected to be small at the MLE point, this approximation can be useful for computing approximate error bars.
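
A minimal NumPy sketch of this outer-product approximation, under the assumption that J is d(probs)/d(params) with shape (num_probs, num_params) and that d2logl_dprobs2 holds the diagonal as a length-num_probs vector (with those conventions the contraction below is the same product written above):

import numpy as np

def approx_logl_hessian_sketch(J, d2logl_dprobs2):
    # H ~= J^T diag(d2(logl)/dprobs2) J, neglecting the second-derivative-of-probs terms
    return J.T @ (d2logl_dprobs2[:, None] * J)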

Parameters

modelModel

Model of parameterized gates (including SPAM)

datasetDataSet

Probability data

circuitslist of (tuples or Circuits), optional

Each element specifies a circuit to include in the log-likelihood sum. Default value of None implies all the circuits in dataset should be used.

min_prob_clipfloat, optional

The minimum probability treated normally in the evaluation of the log-likelihood. A penalty function replaces the true log-likelihood for probabilities that lie below this threshold so that the log-likelihood never becomes undefined (which improves optimizer performance).

prob_clip_interval2-tuple or None, optional

(min,max) values used to clip the probabilities predicted by models during MLEGST’s search for an optimal model (if not None). if None, no clipping is performed.

radiusfloat, optional

Specifies the severity of rounding used to “patch” the zero-frequency terms of the log-likelihood.

poisson_pictureboolean, optional

Whether the Poisson-picture log-likelihood should be differentiated.

op_label_aliasesdictionary, optional

Dictionary whose keys are operation label “aliases” and whose values are tuples corresponding to what that operation label should be expanded into before querying the dataset. Defaults to the empty dictionary (no aliases defined) e.g. op_label_aliases[‘Gx^3’] = (‘Gx’,’Gx’,’Gx’)

mdc_storeModelDatasetCircuitsStore, optional

An object that bundles cached quantities along with a given model, dataset, and circuit list. If given, model and dataset and circuits should be set to None.

commmpi4py.MPI.Comm, optional

When not None, an MPI communicator for distributing the computation across multiple processors.

mem_limitint, optional

A rough memory limit in bytes which restricts the amount of intermediate values that are computed and stored.

verbosityint, optional

How much detail to print to stdout.

Returns

numpy array or None

On the root processor, the approximate Hessian matrix of shape (nModelParams, nModelParams), where nModelParams = model.num_params. None on non-root processors.

pygsti.tools.logl_max(model, dataset, circuits=None, poisson_picture=True, op_label_aliases=None, mdc_store=None)

The maximum log-likelihood possible for a DataSet.

That is, the log-likelihood obtained by a maximal model that can fit the probability of each circuit perfectly.

Parameters

modelModel

the model, used only for circuit compilation

datasetDataSet

the data set to use.

circuitslist of (tuples or Circuits), optional

Each element specifies a circuit to include in the max-log-likelihood sum. Default value of None implies all the circuits in dataset should be used.

poisson_pictureboolean, optional

Whether the Poisson-picture maximum log-likelihood should be returned.

op_label_aliasesdictionary, optional

Dictionary whose keys are operation label “aliases” and whose values are tuples corresponding to what that operation label should be expanded into before querying the dataset. Defaults to the empty dictionary (no aliases defined) e.g. op_label_aliases[‘Gx^3’] = (‘Gx’,’Gx’,’Gx’)

mdc_storeModelDatasetCircuitsStore, optional

An object that bundles cached quantities along with a given model, dataset, and circuit list. If given, model and dataset and circuits should be set to None.

Returns

float

pygsti.tools.logl_max_per_circuit(model, dataset, circuits=None, poisson_picture=True, op_label_aliases=None, mdc_store=None)

The vector of maximum log-likelihood contributions for each circuit, aggregated over outcomes.

Parameters

modelModel

the model, used only for circuit compilation

datasetDataSet

the data set to use.

circuitslist of (tuples or Circuits), optional

Each element specifies a circuit to include in the max-log-likelihood sum. Default value of None implies all the circuits in dataset should be used.

poisson_pictureboolean, optional

Whether the Poisson-picture maximum log-likelihood should be returned.

op_label_aliasesdictionary, optional

Dictionary whose keys are operation label “aliases” and whose values are tuples corresponding to what that operation label should be expanded into before querying the dataset. Defaults to the empty dictionary (no aliases defined) e.g. op_label_aliases[‘Gx^3’] = (‘Gx’,’Gx’,’Gx’)

mdc_storeModelDatasetCircuitsStore, optional

An object that bundles cached quantities along with a given model, dataset, and circuit list. If given, model and dataset and circuits should be set to None.

Returns

numpy.ndarray

Array of length either len(circuits) or len(dataset.keys()). Values are the maximum log-likelihood contributions of the corresponding circuit aggregated over outcomes.

pygsti.tools.two_delta_logl_nsigma(model, dataset, circuits=None, min_prob_clip=1e-06, prob_clip_interval=(-1000000.0, 1000000.0), radius=0.0001, poisson_picture=True, op_label_aliases=None, dof_calc_method='modeltest', wildcard=None)

See docstring for pygsti.tools.two_delta_logl()

Parameters

modelModel

Model of parameterized gates

datasetDataSet

Probability data

circuitslist of (tuples or Circuits), optional

Each element specifies a circuit to include in the log-likelihood sum. Default value of None implies all the circuits in dataset should be used.

min_prob_clipfloat, optional

The minimum probability treated normally in the evaluation of the log-likelihood. A penalty function replaces the true log-likelihood for probabilities that lie below this threshold so that the log-likelihood never becomes undefined (which improves optimizer performance).

prob_clip_interval2-tuple or None, optional

(min,max) values used to clip the probabilities predicted by models during MLEGST’s search for an optimal model (if not None). if None, no clipping is performed.

radiusfloat, optional

Specifies the severity of rounding used to “patch” the zero-frequency terms of the log-likelihood.

poisson_pictureboolean, optional

Whether the log-likelihood-in-the-Poisson-picture terms should be included in the returned logl value.

op_label_aliasesdictionary, optional

Dictionary whose keys are operation label “aliases” and whose values are tuples corresponding to what that operation label should be expanded into before querying the dataset. Defaults to the empty dictionary (no aliases defined) e.g. op_label_aliases[‘Gx^3’] = (‘Gx’,’Gx’,’Gx’)

dof_calc_method{“all”, “modeltest”}

How model’s number of degrees of freedom (parameters) are obtained when computing the number of standard deviations and p-value relative to a chi2_k distribution, where k is additional degrees of freedom possessed by the maximal model. “all” uses model.num_params whereas “modeltest” uses model.num_modeltest_params (the number of non-gauge parameters by default).

wildcardWildcardBudget

A wildcard budget to apply to this log-likelihood computation. This increases the returned log-likelihood value by adjusting (by a maximal amount measured in TVD, given by the budget) the probabilities produced by model to optimally match the data (within the budgetary constraints) when evaluating the log-likelihood.

Returns

float

pygsti.tools.two_delta_logl(model, dataset, circuits=None, min_prob_clip=1e-06, prob_clip_interval=(-1000000.0, 1000000.0), radius=0.0001, poisson_picture=True, op_label_aliases=None, dof_calc_method=None, wildcard=None, mdc_store=None, comm=None)

Twice the difference between the maximum and actual log-likelihood.

Optionally also can return the Nsigma (# std deviations from mean) and p-value relative to expected chi^2 distribution (when dof_calc_method is not None).

This function's arguments are a superset of those of logl() and logl_max(). It is a convenience function, equivalent to 2*(logl_max(…) - logl(…)), whose value is what is often called the log-likelihood-ratio between the “maximal model” (which trivially fits the data exactly) and the model given by model.
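
A direct sketch of that equivalence using the functions documented above (model and dataset are assumed to already exist; options such as clipping and wildcard budgets are omitted):

import pygsti

two_delta_logl_value = 2 * (pygsti.tools.logl_max(model, dataset) - pygsti.tools.logl(model, dataset))
# up to those omitted options, this matches pygsti.tools.two_delta_logl(model, dataset)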

Parameters

modelModel

Model of parameterized gates

datasetDataSet

Probability data

circuitslist of (tuples or Circuits), optional

Each element specifies a circuit to include in the log-likelihood sum. Default value of None implies all the circuits in dataset should be used.

min_prob_clipfloat, optional

The minimum probability treated normally in the evaluation of the log-likelihood. A penalty function replaces the true log-likelihood for probabilities that lie below this threshold so that the log-likelihood never becomes undefined (which improves optimizer performance).

prob_clip_interval2-tuple or None, optional

(min,max) values used to clip the probabilities predicted by models during MLEGST’s search for an optimal model (if not None). if None, no clipping is performed.

radiusfloat, optional

Specifies the severity of rounding used to “patch” the zero-frequency terms of the log-likelihood.

poisson_pictureboolean, optional

Whether the log-likelihood-in-the-Poisson-picture terms should be included in the computed log-likelihood values.

op_label_aliasesdictionary, optional

Dictionary whose keys are operation label “aliases” and whose values are tuples corresponding to what that operation label should be expanded into before querying the dataset. Defaults to the empty dictionary (no aliases defined) e.g. op_label_aliases[‘Gx^3’] = (‘Gx’,’Gx’,’Gx’)

dof_calc_method{None, “all”, “modeltest”}

How model’s number of degrees of freedom (parameters) are obtained when computing the number of standard deviations and p-value relative to a chi2_k distribution, where k is additional degrees of freedom possessed by the maximal model. If None, then Nsigma and pvalue are not returned (see below).

wildcardWildcardBudget

A wildcard budget to apply to this log-likelihood computation. This increases the returned log-likelihood value by adjusting (by a maximal amount measured in TVD, given by the budget) the probabilities produced by model to optimally match the data (within the budgetary constraints) when evaluating the log-likelihood.

mdc_storeModelDatasetCircuitsStore, optional

An object that bundles cached quantities along with a given model, dataset, and circuit list. If given, model and dataset and circuits should be set to None.

commmpi4py.MPI.Comm, optional

When not None, an MPI communicator for distributing the computation across multiple processors.

Returns

twoDeltaLogLfloat

2*(loglikelihood(maximal_model,data) - loglikelihood(model,data))

Nsigma, pvaluefloat

Only returned when dof_calc_method is not None.

pygsti.tools.two_delta_logl_per_circuit(model, dataset, circuits=None, min_prob_clip=1e-06, prob_clip_interval=(-1000000.0, 1000000.0), radius=0.0001, poisson_picture=True, op_label_aliases=None, dof_calc_method=None, wildcard=None, mdc_store=None, comm=None)

Twice the per-circuit difference between the maximum and actual log-likelihood.

Contributions are aggregated over each circuit’s outcomes, but no further.

Optionally (when dof_calc_method is not None) returns parallel vectors containing the Nsigma (# std deviations from mean) and the p-value relative to expected chi^2 distribution for each sequence.

Parameters

modelModel

Model of parameterized gates

datasetDataSet

Probability data

circuitslist of (tuples or Circuits), optional

Each element specifies a circuit to include in the log-likelihood sum. Default value of None implies all the circuits in dataset should be used.

min_prob_clipfloat, optional

The minimum probability treated normally in the evaluation of the log-likelihood. A penalty function replaces the true log-likelihood for probabilities that lie below this threshold so that the log-likelihood never becomes undefined (which improves optimizer performance).

prob_clip_interval2-tuple or None, optional

(min,max) values used to clip the probabilities predicted by models during MLEGST’s search for an optimal model (if not None). if None, no clipping is performed.

radiusfloat, optional

Specifies the severity of rounding used to “patch” the zero-frequency terms of the log-likelihood.

poisson_pictureboolean, optional

Whether the log-likelihood-in-the-Poisson-picture terms should be included in the returned logl value.

op_label_aliasesdictionary, optional

Dictionary whose keys are operation label “aliases” and whose values are tuples corresponding to what that operation label should be expanded into before querying the dataset. Defaults to the empty dictionary (no aliases defined) e.g. op_label_aliases[‘Gx^3’] = (‘Gx’,’Gx’,’Gx’)

dof_calc_method{“all”, “modeltest”}

How model’s number of degrees of freedom (parameters) are obtained when computing the number of standard deviations and p-value relative to a chi2_k distribution, where k is additional degrees of freedom possessed by the maximal model.

wildcardWildcardBudget

A wildcard budget to apply to this log-likelihood computation. This increases the returned log-likelihood value by adjusting (by a maximal amount measured in TVD, given by the budget) the probabilities produced by model to optimally match the data (within the budgetary constraints) when evaluating the log-likelihood.

mdc_storeModelDatasetCircuitsStore, optional

An object that bundles cached quantities along with a given model, dataset, and circuit list. If given, model and dataset and circuits should be set to None.

commmpi4py.MPI.Comm, optional

When not None, an MPI communicator for distributing the computation across multiple processors.

Returns

twoDeltaLogL_terms : numpy.ndarray

Nsigma, pvaluenumpy.ndarray

Only returned when dof_calc_method is not None.

pygsti.tools.two_delta_logl_term(n, p, f, min_prob_clip=1e-06, poisson_picture=True)

Term of the 2*[log(L)-upper-bound - log(L)] sum corresponding to a single circuit and spam label.

Parameters

nfloat or numpy array

Number of samples.

pfloat or numpy array

Probability of 1st outcome (typically computed).

ffloat or numpy array

Frequency of 1st outcome (typically observed).

min_prob_clipfloat, optional

Minimum probability clip point to avoid evaluating log(number <= zero)

poisson_pictureboolean, optional

Whether the log-likelihood-in-the-Poisson-picture terms should be included in the returned logl value.

Returns

float or numpy array

pygsti.tools.basis_matrices(name_or_basis, dim, sparse=False)

Get the elements of the specified basis-type which span the density-matrix space given by dim.

Parameters

name_or_basis{‘std’, ‘gm’, ‘pp’, ‘qt’} or Basis

The basis type. Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt). If a Basis object, then the basis matrices are contained therein, and its dimension is checked to match dim.

dimint

The dimension of the density-matrix space.

sparsebool, optional

Whether any built matrices should be SciPy CSR sparse matrices or dense numpy arrays (the default).

Returns

list

A list of N numpy arrays each of shape (dmDim, dmDim), where dmDim is the matrix-dimension of the overall “embedding” density matrix (the sum of dim_or_block_dims) and N is the dimension of the density-matrix space, equal to sum( block_dim_i^2 ).

pygsti.tools.create_elementary_errorgen_dual(typ, p, q=None, sparse=False, normalization_factor='auto')

Construct a “dual” elementary error generator matrix in the “standard” (matrix-unit) basis.

The elementary error generator that is dual to the one computed by calling create_elementary_errorgen() with the same arguments. This dual element can be used to find the coefficient of the original, or “primal”, elementary generator. For example, if A = sum(c_i * E_i), where E_i are the elementary error generators given by create_elementary_errorgen(), then c_i = dot(D_i.conj(), A) where D_i is the dual to E_i.

There are four different types of dual elementary error generators: ‘H’ (Hamiltonian), ‘S’ (stochastic), ‘C’ (correlation), and ‘A’ (active). See arxiv:2103.01928. Each type transforms an input density matrix differently. The action of an elementary error generator L on an input density matrix rho is given by:

Hamiltonian: L(rho) = -1j/(2d^2) * [ p, rho ]

Stochastic: L(rho) = 1/(d^2) p * rho * p

Correlation: L(rho) = 1/(2d^2) ( p * rho * q + q * rho * p)

Active: L(rho) = 1j/(2d^2) ( p * rho * q - q * rho * p)

where d is the dimension of the Hilbert space, e.g. 2 for a single qubit. Square brackets denote the commutator and curly brackets the anticommutator. L is returned as a superoperator matrix that acts on vectorized density matrices.

Parameters

typ{‘H’,’S’,’C’,’A’}

The type of dual error generator to construct.

pnumpy.ndarray

d-dimensional basis matrix.

qnumpy.ndarray, optional

d-dimensional basis matrix; must be non-None if and only if typ is ‘C’ or ‘A’.

sparsebool, optional

Whether to construct a sparse or dense (the default) matrix.

Returns

ndarray or Scipy CSR matrix

pygsti.tools.create_elementary_errorgen(typ, p, q=None, sparse=False)

Construct an elementary error generator as a matrix in the “standard” (matrix-unit) basis.

There are four different types of elementary error generators: ‘H’ (Hamiltonian), ‘S’ (stochastic), ‘C’ (correlation), and ‘A’ (active). See arxiv:2103.01928. Each type transforms an input density matrix differently. The action of an elementary error generator L on an input density matrix rho is given by:

Hamiltonian: L(rho) = -1j * [ p, rho ]

Stochastic: L(rho) = p * rho * p - rho

Correlation: L(rho) = p * rho * q + q * rho * p - 0.5 {{p,q}, rho}

Active: L(rho) = 1j( p * rho * q - q * rho * p + 0.5 {[p,q], rho} )

Square brackets denote the commutator and curly brackets the anticommutator. L is returned as a superoperator matrix that acts on vectorized density matrices.
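
As a concrete check of the action formulas above, the ‘H’ and ‘S’ maps can be applied directly to a density matrix. The sketch below uses a single-qubit Pauli-X as the basis matrix p and applies the map to rho itself rather than building the superoperator matrix this function returns:

import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)     # basis matrix p
rho = np.array([[1, 0], [0, 0]], dtype=complex)   # the density matrix |0><0|

L_H_action = -1j * (X @ rho - rho @ X)            # Hamiltonian-type action: -1j * [p, rho]
L_S_action = X @ rho @ X - rho                    # Stochastic-type action: p rho p - rho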

Parameters

typ{‘H’,’S’,’C’,’A’}

The type of error generator to construct.

pnumpy.ndarray

d-dimensional basis matrix.

qnumpy.ndarray, optional

d-dimensional basis matrix; must be non-None if and only if typ is ‘C’ or ‘A’.

sparsebool, optional

Whether to construct a sparse or dense (the default) matrix.

Returns

ndarray or Scipy CSR matrix

pygsti.tools.create_lindbladian_term_errorgen(typ, Lm, Ln=None, sparse=False)

Construct the superoperator for a term in the common Lindbladian expansion of an error generator.

Mathematically, for d-dimensional matrices Lm and Ln, this routine constructs the d^2-dimension Lindbladian matrix L whose action is given by:

L(rho) = -i [Lm, rho]   (when typ == ‘H’)

or

L(rho) = Ln*rho*Lm^dag - 1/2(rho*Lm^dag*Ln + Lm^dag*Ln*rho) (typ == ‘O’)

where rho is a density matrix. L is returned as a superoperator matrix that acts on vectorized density matrices.

Parameters

typ{‘H’, ‘O’}

The type of error generator to construct.

Lmnumpy.ndarray

d-dimensional basis matrix.

Lnnumpy.ndarray, optional

d-dimensional basis matrix.

sparsebool, optional

Whether to construct a sparse or dense (the default) matrix.

Returns

ndarray or Scipy CSR matrix

pygsti.tools.remove_duplicates_in_place(l, index_to_test=None)

Remove duplicates from the list passed as an argument.

Parameters

llist

The list to remove duplicates from.

index_to_testint, optional

If not None, the index within the elements of l to test. For example, if all the elements of l contain 2 tuples (x,y) then set index_to_test == 1 to remove tuples with duplicate y-values.

Returns

None

pygsti.tools.remove_duplicates(l, index_to_test=None)

Remove duplicates from a list and return the result.

Parameters

literable

The list/set to remove duplicates from.

index_to_testint, optional

If not None, the index within the elements of l to test. For example, if all the elements of l contain 2 tuples (x,y) then set index_to_test == 1 to remove tuples with duplicate y-values.

Returns

list

the list after duplicates have been removed.

pygsti.tools.compute_occurrence_indices(lst)

A 0-based list of integers specifying which occurrence, i.e. enumerated duplicate, each list item is.

For example, if lst = ['A', 'B', 'C', 'C', 'A'] then the returned list will be [0, 0, 0, 1, 1]. This may be useful when working with DataSet objects that have collisionAction set to “keepseparate”.
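
A minimal sketch of the enumeration described above (an illustration, not necessarily pyGSTi's implementation):

from collections import defaultdict

def occurrence_indices_sketch(lst):
    counts = defaultdict(int)
    indices = []
    for item in lst:
        indices.append(counts[item])   # how many times this item has appeared so far
        counts[item] += 1
    return indices

# occurrence_indices_sketch(['A', 'B', 'C', 'C', 'A'])  ->  [0, 0, 0, 1, 1]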

Parameters

lstlist

The list to process.

Returns

list

pygsti.tools.find_replace_tuple(t, alias_dict)

Replace elements of t according to rules in alias_dict.

Parameters

ttuple or list

The object to perform replacements upon.

alias_dictdictionary

Dictionary whose keys are potential elements of t and whose values are tuples corresponding to a sub-sequence that the given element should be replaced with. If None, no replacement is performed.

Returns

tuple

pygsti.tools.find_replace_tuple_list(list_of_tuples, alias_dict)

Applies find_replace_tuple() on each element of list_of_tuples.

Parameters

list_of_tupleslist

A list of tuple objects to perform replacements upon.

alias_dictdictionary

Dictionary whose keys are potential elements of t and whose values are tuples corresponding to a sub-sequence that the given element should be replaced with. If None, no replacement is performed.

Returns

list

pygsti.tools.apply_aliases_to_circuits(list_of_circuits, alias_dict)

Applies alias_dict to the circuits in list_of_circuits.

Parameters

list_of_circuitslist

A list of circuits to make replacements in.

alias_dictdict

A dictionary whose keys are layer Labels (or equivalent tuples or strings), and whose values are Circuits or tuples of labels.

Returns

list

pygsti.tools.sorted_partitions(n)

Iterate over all sorted (decreasing) partitions of integer n.

A partition of n here is defined as a list of one or more non-zero integers which sum to n. Sorted partitions (those iterated over here) have their integers in decreasing order.

Parameters

nint

The number to partition.

pygsti.tools.partitions(n)

Iterate over all partitions of integer n.

A partition of n here is defined as a list of one or more non-zero integers which sum to n. Every partition is iterated over exactly once - there are no duplicates/repetitions.

Parameters

nint

The number to partition.

pygsti.tools.partition_into(n, nbins)

Iterate over all partitions of integer n into nbins bins.

Here, unlike in partitions(), a “partition” is allowed to contain zeros. For example, (4,1,0) is a valid partition of 5 using 3 bins. This function fixes the number of bins and iterates over all possible length-nbins partitions while allowing zeros. This is equivalent to iterating over all usual partitions of length at most nbins and inserting zeros into all possible places for partitions of length less than nbins.

Parameters

nint

The number to partition.

nbinsint

The fixed number of bins, equal to the length of all the partitions that are iterated over.

pygsti.tools.incd_product(*args)

Like itertools.product but returns the first modified (incremented) index along with the product tuple itself.

Parameters

*argsiterables

Any number of iterable things that we’re taking the product of.

pygsti.tools.lists_to_tuples(obj)

Recursively replaces lists with tuples.

Can be useful for fixing tuples that were serialized to json or mongodb. Recurses on lists, tuples, and dicts within obj.

Parameters

objobject

Object to convert.

Returns

object

pygsti.tools.dot_mod2(m1, m2)

Returns the product over the integers modulo 2 of two matrices.

Parameters

m1numpy.ndarray

First matrix

m2numpy.ndarray

Second matrix

Returns

numpy.ndarray
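
In plain NumPy terms, the mod-2 product described above amounts to the following one-line sketch (m1 and m2 are assumed to be 0/1 integer arrays):

import numpy as np
product_mod2 = np.dot(m1, m2) % 2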

pygsti.tools.multidot_mod2(mlist)

Returns the product over the integers modulo 2 of a list of matrices.

Parameters

mlistlist

A list of matrices.

Returns

numpy.ndarray

pygsti.tools.det_mod2(m)

Returns the determinant of a matrix over the integers modulo 2 (GL(n,2)).

Parameters

mnumpy.ndarray

Matrix to take determinant of.

Returns

numpy.ndarray

pygsti.tools.matrix_directsum(m1, m2)

Returns the direct sum of two square matrices of integers.

Parameters

m1numpy.ndarray

First matrix

m2numpy.ndarray

Second matrix

Returns

numpy.ndarray

pygsti.tools.inv_mod2(m)

Finds the inverse of a matrix over GL(n,2)

Parameters

mnumpy.ndarray

Matrix to take inverse of.

Returns

numpy.ndarray

pygsti.tools.Axb_mod2(A, b)

Solves Ax = b over GF(2)

Parameters

Anumpy.ndarray

Matrix to operate on.

bnumpy.ndarray

Vector to operate on.

Returns

numpy.ndarray

pygsti.tools.gaussian_elimination_mod2(a)

Gaussian elimination mod2 of a.

Parameters

anumpy.ndarray

Matrix to operate on.

Returns

numpy.ndarray

pygsti.tools.diagonal_as_vec(m)

Returns a 1D array containing the diagonal of the input square 2D array m.

Parameters

mnumpy.ndarray

Matrix to operate on.

Returns

numpy.ndarray

pygsti.tools.strictly_upper_triangle(m)

Returns a matrix containing the strictly upper triangle of m and zeros elsewhere.

Parameters

mnumpy.ndarray

Matrix to operate on.

Returns

numpy.ndarray

pygsti.tools.diagonal_as_matrix(m)

Returns a diagonal matrix containing the diagonal of m.

Parameters

mnumpy.ndarray

Matrix to operate on.

Returns

numpy.ndarray

pygsti.tools.albert_factor(d, failcount=0, rand_state=None)

Returns a matrix M such that d = M M.T for symmetric d, where d and M are matrices over [0,1] mod 2.

The algorithm mostly follows the proof in “Orthogonal Matrices Over Finite Fields” by Jessie MacWilliams in The American Mathematical Monthly, Vol. 76, No. 2 (Feb., 1969), pp. 152-164

There is generally not a unique Albert factorization, and this algorithm is randomized. It will generally return different factorizations from multiple calls.

Parameters

darray-like

Symmetric matrix mod 2.

failcountint, optional

UNUSED.

rand_statenp.random.RandomState, optional

Random number generator to allow for determinism.

Returns

numpy.ndarray

pygsti.tools.random_bitstring(n, p, failcount=0, rand_state=None)

Constructs a random bitstring of length n with parity p

Parameters

nint

Number of bits.

pint

Parity.

failcountint, optional

Internal use only.

rand_statenp.random.RandomState, optional

Random number generator to allow for determinism.

Returns

numpy.ndarray

pygsti.tools.random_invertable_matrix(n, failcount=0, rand_state=None)

Finds a random invertible matrix M over GL(n,2)

Parameters

nint

matrix dimension

failcountint, optional

Internal use only.

rand_statenp.random.RandomState, optional

Random number generator to allow for determinism.

Returns

numpy.ndarray

pygsti.tools.random_symmetric_invertable_matrix(n, failcount=0, rand_state=None)

Creates a random, symmetric, invertible matrix from GL(n,2)

Parameters

nint

Matrix dimension.

failcountint, optional

Internal use only.

rand_statenp.random.RandomState, optional

Random number generator to allow for determinism.

Returns

numpy.ndarray

pygsti.tools.onesify(a, failcount=0, maxfailcount=100, rand_state=None)

Returns M such that M a M.T has ones along the main diagonal

Parameters

anumpy.ndarray

The matrix.

failcountint, optional

Internal use only.

maxfailcountint, optional

Maximum number of tries before giving up.

rand_statenp.random.RandomState, optional

Random number generator to allow for determinism.

Returns

numpy.ndarray

pygsti.tools.permute_top(a, i)

Permutes the first row & col with the i’th row & col

Parameters

anumpy.ndarray

The matrix to act on.

iint

index to permute with first row/col.

Returns

numpy.ndarray

pygsti.tools.fix_top(a)

Computes the permutation matrix P such that the [1:t,1:t] submatrix of P a P is invertible.

Parameters

anumpy.ndarray

A symmetric binary matrix with ones along the diagonal.

Returns

numpy.ndarray

pygsti.tools.proper_permutation(a)

Computes the permutation matrix P such that all [n:t,n:t] submatrices of P a P are invertible.

Parameters

anumpy.ndarray

A symmetric binary matrix with ones along the diagonal.

Returns

numpy.ndarray

pygsti.tools.change_basis(mx, from_basis, to_basis)

Convert a operation matrix from one basis of a density matrix space to another.

Parameters

mxnumpy array

The operation matrix (a 2D square array) in the from_basis basis.

from_basis: {‘std’, ‘gm’, ‘pp’, ‘qt’} or Basis object

The source basis. Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt) (or a custom basis object).

to_basis{‘std’, ‘gm’, ‘pp’, ‘qt’} or Basis object

The destination basis. Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt) (or a custom basis object).

Returns

numpy array

The given operation matrix converted to the to_basis basis. Array size is the same as mx.

pygsti.tools.EXPM_DEFAULT_TOL
pygsti.tools.trace(m)

The trace of a matrix, sum_i m[i,i].

A memory leak in some versions of numpy can cause repeated calls to numpy’s trace function to eat up all available system memory; this function does not have that problem.

Parameters

mnumpy array

the matrix (any object that can be double-indexed)

Returns

element type of m

The trace of m.

pygsti.tools.is_hermitian(mx, tol=1e-09)

Test whether mx is a hermitian matrix.

Parameters

mxnumpy array

Matrix to test.

tolfloat, optional

Tolerance on absolute magnitude of elements.

Returns

bool

True if mx is hermitian, otherwise False.

pygsti.tools.is_pos_def(mx, tol=1e-09)

Test whether mx is a positive-definite matrix.

Parameters

mxnumpy array

Matrix to test.

tolfloat, optional

Tolerance on absolute magnitude of elements.

Returns

bool

True if mx is positive-semidefinite, otherwise False.

pygsti.tools.is_valid_density_mx(mx, tol=1e-09)

Test whether mx is a valid density matrix (hermitian, positive-definite, and unit trace).

Parameters

mxnumpy array

Matrix to test.

tolfloat, optional

Tolerance on absolute magnitude of elements.

Returns

bool

True if mx is a valid density matrix, otherwise False.

pygsti.tools.frobeniusnorm(ar)

Compute the frobenius norm of an array (or matrix),

sqrt( sum( each_element_of_a^2 ) )

Parameters

arnumpy array

What to compute the frobenius norm of. Note that ar can be any shape or number of dimensions.

Returns

float or complex

depending on the element type of ar.

pygsti.tools.frobeniusnorm_squared(ar)

Compute the squared frobenius norm of an array (or matrix),

sum( each_element_of_a^2 )

Parameters

arnumpy array

What to compute the squared frobenius norm of. Note that ar can be any shape or number of dimensions.

Returns

float or complex

depending on the element type of ar.

pygsti.tools.nullspace(m, tol=1e-07)

Compute the nullspace of a matrix.

Parameters

mnumpy array

A matrix of shape (M,N) whose nullspace is to be computed.

tolfloat , optional

Nullspace tolerance, used when comparing singular values with zero.

Returns

A matrix of shape (N,K) whose columns contain nullspace basis vectors.
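
A minimal usage sketch (illustrative only; the rank-1 example matrix is hypothetical):

    import numpy as np
    from pygsti.tools import nullspace

    m = np.array([[1., 2., 3.],
                  [2., 4., 6.]])        # rank 1, so the nullspace is 2-dimensional
    ns = nullspace(m)                   # columns are nullspace basis vectors
    print(np.allclose(m @ ns, 0.0))     # expect True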

pygsti.tools.nullspace_qr(m, tol=1e-07)

Compute the nullspace of a matrix using the QR decomposition.

The QR decomposition is faster but less accurate than the SVD used by nullspace().

Parameters

mnumpy array

A matrix of shape (M,N) whose nullspace is to be computed.

tolfloat , optional

Nullspace tolerance, used when comparing diagonal values of R with zero.

Returns

A matrix of shape (N,K) whose columns contain nullspace basis vectors.

pygsti.tools.nice_nullspace(m, tol=1e-07, orthogonalize=False)

Computes the nullspace of a matrix, and tries to return a “nice” basis for it.

Columns of the returned value (a basis for the nullspace) each have a maximum absolute value of 1.0 and are chosen so as to align with the original matrix’s basis as much as possible (the basis is found by projecting each original basis vector onto an arbitrarily-found nullspace and keeping only a set of linearly independent projections).

Parameters

mnumpy array

A matrix of shape (M,N) whose nullspace is to be computed.

tolfloat , optional

Nullspace tolerance, used when comparing diagonal values of R with zero.

orthogonalizebool, optional

If True, the nullspace vectors are additionally orthogonalized.

Returns

A matrix of shape (N,K) whose columns contain nullspace basis vectors.

pygsti.tools.normalize_columns(m, return_norms=False, ord=None)

Normalizes the columns of a matrix.

Parameters

mnumpy.ndarray or scipy sparse matrix

The matrix.

return_normsbool, optional

If True, also return a 1D array containing the norms of the columns (before they were normalized).

ordint or list of ints, optional

The order of the norm. See numpy.linalg.norm(). An array of orders can be given to specify the norm on a per-column basis.

Returns

normalized_mnumpy.ndarray

The matrix after columns are normalized

column_normsnumpy.ndarray

Only returned when return_norms=True, a 1-dimensional array of the pre-normalization norm of each column.

pygsti.tools.column_norms(m, ord=None)

Compute the norms of the columns of a matrix.

Parameters

mnumpy.ndarray or scipy sparse matrix

The matrix.

ordint or list of ints, optional

The order of the norm. See numpy.linalg.norm(). An array of orders can be given to specify the norm on a per-column basis.

Returns

numpy.ndarray

A 1-dimensional array of the column norms (length is number of columns of m).
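
For example, a small sketch using both normalize_columns() and column_norms() (illustrative; assumes the default 2-norm):

    import numpy as np
    from pygsti.tools import normalize_columns, column_norms

    m = np.array([[3., 0.],
                  [4., 2.]])
    normalized, norms = normalize_columns(m, return_norms=True)
    print(norms)                        # expect [5., 2.] with the default 2-norm
    print(column_norms(normalized))     # expect [1., 1.]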

pygsti.tools.scale_columns(m, scale_values)

Scale each column of a matrix by a given value.

Usually used for normalization purposes, when the matrix columns represent vectors.

Parameters

mnumpy.ndarray or scipy sparse matrix

The matrix.

scale_valuesnumpy.ndarray

A 1-dimensional array of scale values, one per column of m.

Returns

numpy.ndarray or scipy sparse matrix

A copy of m with scaled columns, possibly with different sparsity structure.

pygsti.tools.columns_are_orthogonal(m, tol=1e-07)

Checks whether a matrix contains orthogonal columns.

The columns do not need to be normalized. In the complex case, two vectors v and w are considered orthogonal if dot(v.conj(), w) == 0.

Parameters

mnumpy.ndarray

The matrix to check.

tolfloat, optional

Tolerance for checking whether dot products are zero.

Returns

bool

pygsti.tools.columns_are_orthonormal(m, tol=1e-07)

Checks whether a matrix contains orthonormal columns.

The columns must be mutually orthogonal and have unit norm. In the complex case, two vectors v and w are considered orthogonal if dot(v.conj(), w) == 0.

Parameters

mnumpy.ndarray

The matrix to check.

tolfloat, optional

Tolerance for checking whether dot products are zero.

Returns

bool

pygsti.tools.independent_columns(m, initial_independent_cols=None, tol=1e-07)

Computes the indices of the linearly-independent columns in a matrix.

Optionally starts with a “base” matrix of independent columns, so that the returned indices indicate the columns of m that are independent of all the base columns and the other independent columns of m.

Parameters

mnumpy.ndarray or scipy sparse matrix

The matrix.

initial_independent_colsnumpy.ndarray or scipy sparse matrix, optional

If not None, a matrix of columns that are already known to be independent, against which the columns of m are tested (in addition to the already-chosen independent columns of m).

tolfloat, optional

Tolerance threshold used to decide whether a singular value is nonzero (it is counted as nonzero if it is greater than tol).

Returns

list

A list of the independent-column indices of m.

pygsti.tools.pinv_of_matrix_with_orthogonal_columns(m)

TODO: docstring

pygsti.tools.matrix_sign(m)

The “sign” matrix of m

Parameters

mnumpy.ndarray

the matrix.

Returns

numpy.ndarray

pygsti.tools.print_mx(mx, width=9, prec=4, withbrackets=False)

Print matrix in pretty format.

Will print real or complex matrices with a desired precision and “cell” width.

Parameters

mxnumpy array

the matrix (2-D array) to print.

widthint, optional

the width (in characters) of each printed element

precint, optional

the precision (in characters) of each printed element

withbracketsbool, optional

whether to print brackets and commas to make the result something that Python can read back in.

Returns

None

pygsti.tools.mx_to_string(m, width=9, prec=4, withbrackets=False)

Generate a “pretty-format” string for a matrix.

Will generate strings for real or complex matrices with a desired precision and “cell” width.

Parameters

mnumpy.ndarray

array to print.

widthint, optional

the width (in characters) of each converted element

precint, optional

the precision (in characters) of each converted element

withbracketsbool, optional

whether to print brackets and commas to make the result something that Python can read back in.

Returns

string

matrix m as a pretty-formatted string.

pygsti.tools.mx_to_string_complex(m, real_width=9, im_width=9, prec=4)

Generate a “pretty-format” string for a complex-valued matrix.

Parameters

mnumpy array

array to format.

real_widthint, optional

the width (in characters) of the real part of each element.

im_widthint, optional

the width (in characters) of the imaginary part of each element.

precint, optional

the precision (in characters) of each element’s real and imaginary parts.

Returns

string

matrix m as a pretty-formatted string.

pygsti.tools.unitary_superoperator_matrix_log(m, mx_basis)

Construct the logarithm of superoperator matrix m.

This function assumes that m acts as a unitary on density-matrix space, (m: rho -> U rho Udagger) so that log(m) can be written as the action by Hamiltonian H:

log(m): rho -> -i[H,rho].

Parameters

mnumpy array

The superoperator matrix whose logarithm is taken

mx_basis{‘std’, ‘gm’, ‘pp’, ‘qt’} or Basis object

The source and destination basis, respectively. Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt) (or a custom basis object).

Returns

numpy array

A matrix logM, of the same shape as m, such that m = exp(logM) and logM can be written as the action rho -> -i[H,rho].

pygsti.tools.near_identity_matrix_log(m, tol=1e-08)

Construct the logarithm of superoperator matrix m that is near the identity.

If m is real, the resulting logarithm will be real.

Parameters

mnumpy array

The superoperator matrix whose logarithm is taken

tolfloat, optional

The tolerance used when testing for zero imaginary parts.

Returns

numpy array

A matrix logM, of the same shape as m, such that m = exp(logM) and logM is real when m is real.

pygsti.tools.approximate_matrix_log(m, target_logm, target_weight=10.0, tol=1e-06)

Construct an approximate logarithm of superoperator matrix m that is real and near the target_logm.

The equation m = exp( logM ) is allowed to become inexact in order to make logM close to target_logm. In particular, the objective function that is minimized is (where || indicates the 2-norm):

|exp(logM) - m|_1 + target_weight * ||logM - target_logm||^2

Parameters

mnumpy array

The superoperator matrix whose logarithm is taken

target_logmnumpy array

The target logarithm

target_weightfloat

A weighting factor used to balance the exactness-of-log term with the closeness-to-target term in the optimized objective function. This value multiplies the latter term.

tolfloat, optional

Optimizer tolerance.

Returns

logMnumpy array

A matrix of the same shape as m.

pygsti.tools.real_matrix_log(m, action_if_imaginary='raise', tol=1e-08)

Construct a real logarithm of real matrix m.

This is possible when negative eigenvalues of m come in pairs, so that they can be viewed as complex conjugate pairs.

Parameters

mnumpy array

The matrix to take the logarithm of

action_if_imaginary{“raise”,”warn”,”ignore”}, optional

What action should be taken if a real-valued logarithm cannot be found. “raise” raises a ValueError, “warn” issues a warning, and “ignore” ignores the condition and simply returns the complex-valued result.

tolfloat, optional

An internal tolerance used when testing for equivalence and zero imaginary parts (real-ness).

Returns

logMnumpy array

A matrix logM, of the same shape as m, such that m = exp(logM).

pygsti.tools.column_basis_vector(i, dim)

Returns the ith standard basis vector in dimension dim.

Parameters

iint

Basis vector index.

dimint

Vector dimension.

Returns

numpy.ndarray

An array of shape (dim, 1) that is all zeros except for its i-th element, which equals 1.

pygsti.tools.vec(matrix_in)

Stacks the columns of a matrix to return a vector

Parameters

matrix_in : numpy.ndarray

Returns

numpy.ndarray

pygsti.tools.unvec(vector_in)

Slices a vector into the columns of a matrix.

Parameters

vector_in : numpy.ndarray

Returns

numpy.ndarray
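
A small round-trip sketch for vec() and unvec() (illustrative; assumes a square input matrix):

    import numpy as np
    from pygsti.tools import vec, unvec

    m = np.array([[1., 2.],
                  [3., 4.]])
    v = vec(m)                     # column-stacked: expect the entries ordered 1, 3, 2, 4
    m_back = unvec(v)              # slicing the vector back into columns recovers m
    print(np.allclose(m_back, m))  # expect True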

pygsti.tools.norm1(m)

Returns the 1 norm of a matrix

Parameters

mnumpy.ndarray

The matrix.

Returns

numpy.ndarray

pygsti.tools.random_hermitian(dim)

Generates a random Hermitian matrix

Parameters

dimint

the matrix dimension.

Returns

numpy.ndarray

pygsti.tools.norm1to1(operator, num_samples=10000, mx_basis='gm', return_list=False)

The Hermitian 1-to-1 norm of a superoperator represented in the standard basis.

This is calculated via Monte-Carlo sampling. The definition of Hermitian 1-to-1 norm can be found in arxiv:1109.6887.

Parameters

operatornumpy.ndarray

The operator matrix to take the norm of.

num_samplesint, optional

Number of Monte-Carlo samples.

mx_basis{‘std’, ‘gm’, ‘pp’, ‘qt’} or Basis

The basis of operator.

return_listbool, optional

Whether the entire list of sampled values is returned or just the maximum.

Returns

float or list

Depends on the value of return_list.

pygsti.tools.complex_compare(a, b)

Comparison function for complex numbers that compares real part, then imaginary part.

Parameters

a : complex

b : complex

Returns

-1 if a < b

0 if a == b

+1 if a > b

pygsti.tools.prime_factors(n)

GCD algorithm to produce prime factors of n

Parameters

nint

The number to factorize.

Returns

list

The prime factors of n.
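
For instance (illustrative only; the exact list format, e.g. whether repeated factors are included, follows the implementation):

    from pygsti.tools import prime_factors

    print(prime_factors(60))   # 60 = 2 * 2 * 3 * 5, so the primes 2, 3 and 5 are expected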

pygsti.tools.minweight_match(a, b, metricfn=None, return_pairs=True, pass_indices_to_metricfn=False)

Matches the elements of two vectors, a and b by minimizing the weight between them.

The weight is defined as the sum of metricfn(x,y) over all (x,y) pairs (x in a and y in b).

Parameters

alist or numpy.ndarray

First 1D array to match elements between.

blist or numpy.ndarray

Second 1D array to match elements between.

metricfnfunction, optional

A function of two float parameters, x and y, which defines the cost associated with matching x with y. If None, abs(x-y) is used.

return_pairsbool, optional

If True, the matching is also returned.

pass_indices_to_metricfnbool, optional

If True, the metric function is passed two indices into the a and b arrays, respectively, instead of the values.

Returns

weight_arraynumpy.ndarray

The array of weights corresponding to the min-weight matching. The sum of this array’s elements is the minimized total weight.

pairslist

Only returned when return_pairs == True, a list of 2-tuple pairs of indices (ix,iy) giving the indices into a and b respectively of each matched pair. The first (ix) indices will be in continuous ascending order starting at zero.
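
A small sketch of the default abs(x-y) matching, using made-up values:

    from pygsti.tools import minweight_match

    a = [0.0, 1.0, 2.0]
    b = [2.1, 0.2, 0.9]
    weights, pairs = minweight_match(a, b)
    # Expect a matching like (0, 1), (1, 2), (2, 0): 0.0<->0.2, 1.0<->0.9, 2.0<->2.1,
    # giving a total weight of about 0.4.
    print(sum(weights), pairs)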

pygsti.tools.minweight_match_realmxeigs(a, b, metricfn=None, pass_indices_to_metricfn=False, eps=1e-09)

Matches the elements of a and b, whose elements are assumed to be either real or one half of a conjugate pair.

Matching is performed by minimizing the weight between elements, defined as the sum of metricfn(x,y) over all (x,y) pairs (x in a and y in b). If straightforward matching fails to preserve eigenvalue conjugacy relations, then real and conjugate-pair eigenvalues are matched separately to ensure relations are preserved (but this can result in a sub-optimal matching). A ValueError is raised when the elements of a and b have incompatible conjugacy structures (#’s of conjugate vs. real pairs).

Parameters

anumpy.ndarray

First 1D array to match.

bnumpy.ndarray

Second 1D array to match.

metricfnfunction, optional

A function of two float parameters, x and y, which defines the cost associated with matching x with y. If None, abs(x-y) is used.

pass_indices_to_metricfnbool, optional

If True, the metric function is passed two indices into the a and b arrays, respectively, instead of the values.

epsfloat, optional

Tolerance when checking if eigenvalues are equal to each other.

Returns

pairslist

A list of 2-tuple pairs of indices (ix,iy) giving the indices into a and b respectively of each matched pair.

pygsti.tools.safe_dot(a, b)

Performs dot(a,b) correctly when neither, either, or both arguments are sparse matrices.

Parameters

anumpy.ndarray or scipy.sparse matrix.

First matrix.

bnumpy.ndarray or scipy.sparse matrix.

Second matrix.

Returns

numpy.ndarray or scipy.sparse matrix

pygsti.tools.safe_real(a, inplace=False, check=False)

Get the real-part of a, where a can be either a dense array or a sparse matrix.

Parameters

anumpy.ndarray or scipy.sparse matrix.

Array to take real part of.

inplacebool, optional

Whether this operation should be done in-place.

checkbool, optional

If True, raise a ValueError if a has a nonzero imaginary part.

Returns

numpy.ndarray or scipy.sparse matrix

pygsti.tools.safe_imag(a, inplace=False, check=False)

Get the imaginary-part of a, where a can be either a dense array or a sparse matrix.

Parameters

anumpy.ndarray or scipy.sparse matrix.

Array to take imaginary part of.

inplacebool, optional

Whether this operation should be done in-place.

checkbool, optional

If True, raise a ValueError if a has a nonzero real part.

Returns

numpy.ndarray or scipy.sparse matrix

pygsti.tools.safe_norm(a, part=None)

Get the frobenius norm of a matrix or vector, a, when it is either a dense array or a sparse matrix.

Parameters

andarray or scipy.sparse matrix

The matrix or vector to take the norm of.

part{None,’real’,’imag’}

If not None, return the norm of the real or imaginary part of a.

Returns

float

pygsti.tools.safe_onenorm(a)

Computes the 1-norm of the dense or sparse matrix a.

Parameters

andarray or sparse matrix

The matrix or vector to take the norm of.

Returns

float

pygsti.tools.csr_sum_indices(csr_matrices)

Precomputes the indices needed to sum a set of CSR sparse matrices.

Computes the index-arrays needed for use in csr_sum(), along with the index pointer and column-indices arrays for constructing a “template” CSR matrix to be the destination of csr_sum.

Parameters

csr_matriceslist

The SciPy CSR matrices to be summed.

Returns

ind_arrayslist

A list of numpy arrays giving the destination data-array indices of each element of csr_matrices.

indptr, indicesnumpy.ndarray

The row-pointer and column-indices arrays specifying the sparsity structure of the destination CSR matrix.

Nint

The dimension of the destination matrix (and of each member of csr_matrices)

pygsti.tools.csr_sum(data, coeffs, csr_mxs, csr_sum_indices)

Accelerated summation of several CSR-format sparse matrices.

csr_sum_indices() precomputes the necessary indices for summing directly into the data-array of a destination CSR sparse matrix. If data is the data-array of matrix D (for “destination”), then this method performs:

D += sum_i( coeff[i] * csr_mxs[i] )

Note that D is not returned; the sum is done internally into D’s data-array.

Parameters

datanumpy.ndarray

The data-array of the destination CSR-matrix.

coeffsiterable

The weight coefficients which multiply each summed matrix.

csr_mxsiterable

A list of CSR matrix objects whose data-array is given by obj.data (e.g. a SciPy CSR sparse matrix).

csr_sum_indiceslist

A list of precomputed index arrays as returned by csr_sum_indices().

Returns

None

pygsti.tools.csr_sum_flat_indices(csr_matrices)

Precomputes quantities allowing fast computation of linear combinations of CSR sparse matrices.

The returned quantities can later be used to quickly compute a linear combination of the CSR sparse matrices csr_matrices.

Computes the index and data arrays needed for use in csr_sum_flat(), along with the index pointer and column-indices arrays for constructing a “template” CSR matrix to be the destination of csr_sum_flat.

Parameters

csr_matriceslist

The SciPy CSR matrices to be summed.

Returns

flat_dest_index_arraynumpy array

A 1D array of one element per nonzero element in any of csr_matrices, giving the destination-index of that element.

flat_csr_mx_datanumpy array

A 1D array of the same length as flat_dest_index_array, which simply concatenates the data arrays of csr_matrices.

mx_nnz_indptrnumpy array

A 1D array of length len(csr_matrices)+1 such that the data for the i-th element of csr_matrices lie in the index-range of mx_nnz_indptr[i] to mx_nnz_indptr[i+1]-1 of the flat arrays.

indptr, indicesnumpy.ndarray

The row-pointer and column-indices arrays specifying the sparsity structure of the destination CSR matrix.

Nint

The dimension of the destination matrix (and of each member of csr_matrices)

pygsti.tools.csr_sum_flat(data, coeffs, flat_dest_index_array, flat_csr_mx_data, mx_nnz_indptr)

Computation of the summation of several CSR-format sparse matrices.

csr_sum_flat_indices() precomputes the necessary indices for summing directly into the data-array of a destination CSR sparse matrix. If data is the data-array of matrix D (for “destination”), then this method performs:

D += sum_i( coeff[i] * csr_mxs[i] )

Note that D is not returned; the sum is done internally into D’s data-array.

Parameters

datanumpy.ndarray

The data-array of the destination CSR-matrix.

coeffsndarray

The weight coefficients which multiply each summed matrix.

flat_dest_index_arrayndarray

The index array generated by csr_sum_flat_indices().

flat_csr_mx_datandarray

The data array generated by csr_sum_flat_indices().

mx_nnz_indptrndarray

The number-of-nonzero-elements pointer array generated by csr_sum_flat_indices().

Returns

None

pygsti.tools.expm_multiply_prep(a, tol=EXPM_DEFAULT_TOL)

Computes “prepared” meta-info about matrix a, to be used in expm_multiply_fast.

This includes a shifted version of a.

Parameters

anumpy.ndarray

the matrix that will be later exponentiated.

tolfloat, optional

Tolerance used within matrix exponentiation routines.

Returns

tuple

A tuple of values to pass to expm_multiply_fast.

pygsti.tools.expm_multiply_fast(prep_a, v, tol=EXPM_DEFAULT_TOL)

Multiplies v by an exponentiated matrix.

Parameters

prep_atuple

A tuple of values from expm_multiply_prep() that defines the matrix to be exponentiated and holds other pre-computed quantities.

vnumpy.ndarray

Vector to multiply (take dot product with).

tolfloat, optional

Tolerance used within matrix exponentiation routines.

Returns

numpy.ndarray

pygsti.tools.expop_multiply_prep(op, a_1_norm=None, tol=EXPM_DEFAULT_TOL)

Returns “prepared” meta-info about operation op, which is assumed to be traceless (so no shift is needed).

Used as input for use with _custom_expm_multiply_simple_core or fast C-reps.

Parameters

opscipy.sparse.linalg.LinearOperator

The operator to exponentiate.

a_1_normfloat, optional

The 1-norm (if computed separately) of op.

tolfloat, optional

Tolerance used within matrix exponentiation routines.

Returns

tuple

A tuple of values to pass to expm_multiply_fast.

pygsti.tools.sparse_equal(a, b, atol=1e-08)

Checks whether two Scipy sparse matrices are (almost) equal.

Parameters

ascipy.sparse matrix

First matrix.

bscipy.sparse matrix

Second matrix.

atolfloat, optional

The tolerance to use, passed to numpy.allclose, when comparing the elements of a and b.

Returns

bool

pygsti.tools.sparse_onenorm(a)

Computes the 1-norm of the scipy sparse matrix a.

Parameters

ascipy sparse matrix

The matrix or vector to take the norm of.

Returns

float

pygsti.tools.ndarray_base(a, verbosity=0)

Get the base memory object for numpy array a.

This is found by following .base until it comes up None.

Parameters

anumpy.ndarray

Array to get base of.

verbosityint, optional

Print additional debugging information if this is > 0.

Returns

numpy.ndarray

pygsti.tools.to_unitary(scaled_unitary)

Compute the scaling factor required to turn a scalar multiple of a unitary matrix to a unitary matrix.

Parameters

scaled_unitaryndarray

A scaled unitary matrix

Returns

scale : float

unitaryndarray

Such that scale * unitary == scaled_unitary.

pygsti.tools.sorted_eig(mx)

Similar to numpy.linalg.eig, but returns sorted output.

In particular, the eigenvalues and eigenvectors are sorted by eigenvalue, where sorting is done according to the (real_part, imaginary_part) tuple.

Parameters

mxnumpy.ndarray

Matrix to act on.

Returns

eigenvalues : numpy.ndarray

eigenvectors : numpy.ndarray

pygsti.tools.compute_kite(eigenvalues)

Computes the “kite” corresponding to a list of eigenvalues.

The kite is defined as a list of integers, each indicating that there is a degenerate block of that many eigenvalues within eigenvalues. Thus the sum of the list values equals len(eigenvalues).

Parameters

eigenvaluesnumpy.ndarray

A sorted array of eigenvalues.

Returns

list

A list giving the multiplicity structure of evals.

pygsti.tools.find_zero_communtant_connection(u, u_inv, u0, u0_inv, kite)

Find a matrix R such that u_inv R u0 is diagonal AND log(R) has no projection onto the commutant of G0.

More specifically, find a matrix R such that u_inv R u0 is diagonal (so G = R G0 Rinv if G and G0 share the same eigenvalues and have eigenvectors u and u0 respectively) AND log(R) has no (zero) projection onto the commutant of G0 = u0 diag(evals) u0_inv.

Parameters

unumpy.ndarray

Usually the eigenvector matrix of a gate (G).

u_invnumpy.ndarray

Inverse of u.

u0numpy.ndarray

Usually the eigenvector matrix of the corresponding target gate (G0).

u0_invnumpy.ndarray

Inverse of u0.

kitelist

The kite structure of u0.

Returns

numpy.ndarray

pygsti.tools.project_onto_kite(mx, kite)

Project mx onto kite, so mx is zero everywhere except on the kite.

Parameters

mxnumpy.ndarray

Matrix to project.

kitelist

A kite structure.

Returns

numpy.ndarray

pygsti.tools.project_onto_antikite(mx, kite)

Project mx onto the complement of kite, so mx is zero everywhere on the kite.

Parameters

mxnumpy.ndarray

Matrix to project.

kitelist

A kite structure.

Returns

numpy.ndarray

pygsti.tools.remove_dependent_cols(mx, tol=1e-07)

Removes the linearly dependent columns of a matrix.

Parameters

mxnumpy.ndarray

The input matrix

Returns

A linearly independent subset of the columns of mx.

pygsti.tools.intersection_space(space1, space2, tol=1e-07, use_nice_nullspace=False)

TODO: docstring

pygsti.tools.union_space(space1, space2, tol=1e-07)

TODO: docstring

pygsti.tools.jamiolkowski_angle(hamiltonian_mx)

TODO: docstring

pygsti.tools.zvals_to_dense(self, zvals, superket=True)

Construct the dense operator or superoperator representation of a computational basis state.

Parameters

zvalslist or numpy.ndarray

The z-values, each 0 or 1, defining the computational basis state.

superketbool, optional

If True, the super-ket representation of the state is returned. If False, then the complex ket representation is returned.

Returns

numpy.ndarray

pygsti.tools.int64_parity(x)

Compute the parity of x.

Recursively divide a (64-bit) integer (x) into two equal halves and take their XOR until only 1 bit is left.

Parameters

x : int64

Returns

int64
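
For example (illustrative; values are passed as numpy int64 to match the documented input type):

    import numpy as np
    from pygsti.tools import int64_parity

    print(int64_parity(np.int64(0b1011)))   # three 1-bits -> expect parity 1
    print(int64_parity(np.int64(0b1001)))   # two 1-bits  -> expect parity 0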

pygsti.tools.zvals_int64_to_dense(zvals_int, nqubits, outvec=None, trust_outvec_sparsity=False, abs_elval=None)

Fills a dense array with the super-ket representation of a computational basis state.

Parameters

zvals_intint64

The array of (up to 64) z-values, encoded as the 0s and 1s in the binary representation of this integer.

nqubitsint

The number of z-values (up to 64)

outvecnumpy.ndarray, optional

The output array, which must be a 1D array of length 4**nqubits or None, in which case a new array is allocated.

trust_outvec_sparsitybool, optional

When True, it is assumed that the provided outvec starts as all zeros and so only non-zero elements of outvec need to be set.

abs_elvalfloat

the value 1 / (sqrt(2)**nqubits), which can be passed here so that it doesn’t need to be recomputed on every call to this function. If None, then we just compute the value.

Returns

numpy.ndarray

pygsti.tools.sign_fix_qr(q, r, tol=1e-06)

Change the signs of the columns of Q and rows of R to follow a convention.

Flips the signs of Q-columns and R-rows from a QR decomposition so that the largest absolute element in each Q-column is positive. This is an arbitrary but consistent convention that resolves sign ambiguity in the output of a QR decomposition.

Parameters

q, rnumpy.ndarray

Input Q and R matrices from QR decomposition.

tolfloat, optional

Tolerance for computing the maximum element, i.e., anything within tol of the true max is counted as a maximal element, the first of which is set positive by the convention.

Returns

qq, rrnumpy.ndarray

Updated versions of q and r.

pygsti.tools.parallel_apply(f, l, comm)

Apply a function, f, to every element of a list, l, in parallel using MPI.

Parameters

ffunction

function of an item in the list l

llist

list of items as arguments to f

commMPI Comm

MPI communicator object for organizing parallel programs

Returns

resultslist

list of items after f has been applied

pygsti.tools.mpi4py_comm()

Get a comm object

Returns

MPI.Comm

Comm object to be passed down to parallel pygsti routines

pygsti.tools.starmap_with_kwargs(fn, num_runs, num_processors, args_list, kwargs_list)
class pygsti.tools.NamedDict(keyname=None, keytype=None, valname=None, valtype=None, items=())

Bases: dict, pygsti.baseobjs.nicelyserializable.NicelySerializable

A dictionary that also holds category names and types.

This dict-derived class holds a category name applicable to its keys, and key and value type names indicating the types of its keys and values.

The main purpose of this class is to utilize its to_dataframe() method.

Parameters

keynamestr, optional

A category name for the keys of this dict. For example, if the dict contained the keys “dog” and “cat”, this might be “animals”. This becomes a column header if this dict is converted to a data frame.

keytype{“float”, “int”, “category”, None}, optional

The key-type, in correspondence with different pandas series types.

valnamestr, optional

A category name for the values of this dict. This becomes a column header if this dict is converted to a data frame.

valtype{“float”, “int”, “category”, None}, optional

The value-type, in correspondence with different pandas series types.

itemslist or dict, optional

Initial items, used in serialization.

Initialize self. See help(type(self)) for accurate signature.

classmethod create_nested(key_val_type_list, inner)

Creates a nested NamedDict.

Parameters
key_val_type_listlist

A list of (key, value, type) tuples, one per nesting layer.

innervarious

The value that will be set to the inner-most nested dictionary’s value, supplying any additional layers of nesting (if inner is a NamedDict) or the value contained in all of the nested layers.

to_dataframe()

Render this dict as a pandas data frame.

Returns

pandas.DataFrame
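
A hypothetical usage sketch (the example keys and values are invented; to_dataframe() requires pandas):

    from pygsti.tools import NamedDict

    d = NamedDict(keyname='animal', keytype='category', valname='count', valtype='int',
                  items=[('dog', 3), ('cat', 5)])
    df = d.to_dataframe()    # expected: a pandas.DataFrame with 'animal' and 'count' columns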

pygsti.tools.IMAG_TOL = 1e-07
pygsti.tools.fidelity(a, b)

Returns the quantum state fidelity between density matrices.

This is given by:

F = Tr( sqrt{ sqrt(a) * b * sqrt(a) } )^2

To compute process fidelity, pass this function the Choi matrices of the two processes, or just call entanglement_fidelity() with the operation matrices.

Parameters

anumpy array

First density matrix.

bnumpy array

Second density matrix.

Returns

float

The resulting fidelity.
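
For two pure states the fidelity reduces to the squared overlap; a small illustrative sketch:

    import numpy as np
    from pygsti.tools import fidelity

    rho0 = np.array([[1., 0.],
                     [0., 0.]])                 # |0><0|
    rho_plus = 0.5 * np.array([[1., 1.],
                               [1., 1.]])       # |+><+|
    print(fidelity(rho0, rho0))      # identical states: expect 1.0
    print(fidelity(rho0, rho_plus))  # expect ~0.5, i.e. |<0|+>|^2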

pygsti.tools.frobeniusdist(a, b)

Returns the frobenius distance between gate or density matrices.

This is given by :

sqrt( sum( (a_ij-b_ij)^2 ) )

Parameters

anumpy array

First matrix.

bnumpy array

Second matrix.

Returns

float

The resulting frobenius distance.

pygsti.tools.frobeniusdist_squared(a, b)

Returns the square of the frobenius distance between gate or density matrices.

This is given by :

sum( (A_ij-B_ij)^2 )

Parameters

anumpy array

First matrix.

bnumpy array

Second matrix.

Returns

float

The resulting frobenius distance.

pygsti.tools.residuals(a, b)

Calculate residuals between the elements of two matrices

Parameters

anumpy array

First matrix.

bnumpy array

Second matrix.

Returns

np.array

residuals

pygsti.tools.tracenorm(a)

Compute the trace norm of matrix a given by:

Tr( sqrt{ a^dagger * a } )

Parameters

anumpy array

The matrix to compute the trace norm of.

Returns

float

pygsti.tools.tracedist(a, b)

Compute the trace distance between matrices.

This is given by:

D = 0.5 * Tr( sqrt{ (a-b)^dagger * (a-b) } )

Parameters

anumpy array

First matrix.

bnumpy array

Second matrix.

Returns

float

pygsti.tools.diamonddist(a, b, mx_basis='pp', return_x=False)

Returns the approximate diamond norm describing the difference between gate matrices.

This is given by :

D = ||a - b ||_diamond = sup_rho || AxI(rho) - BxI(rho) ||_1

Parameters

anumpy array

First matrix.

bnumpy array

Second matrix.

mx_basisBasis object

The source and destination basis, respectively. Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt) (or a custom basis object).

return_xbool, optional

Whether to return a numpy array encoding the state (rho) at which the maximal trace distance occurs.

Returns

dmfloat

Diamond norm

Wnumpy array

Only returned if return_x = True. Encodes the state rho, such that dm = trace( |(J(a)-J(b)).T * W| ).

pygsti.tools.jtracedist(a, b, mx_basis='pp')

Compute the Jamiolkowski trace distance between operation matrices.

This is given by:

D = 0.5 * Tr( sqrt{ (J(a)-J(b))^2 } )

where J(.) is the Jamiolkowski isomorphism map that maps an operation matrix to its corresponding Choi matrix.

Parameters

anumpy array

First matrix.

bnumpy array

Second matrix.

mx_basis{‘std’, ‘gm’, ‘pp’, ‘qt’} or Basis object

The source and destination basis, respectively. Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt) (or a custom basis object).

Returns

float

pygsti.tools.entanglement_fidelity(a, b, mx_basis='pp', is_tp=None, is_unitary=None)

Returns the “entanglement” process fidelity between gate matrices.

This is given by:

F = Tr( sqrt{ sqrt(J(a)) * J(b) * sqrt(J(a)) } )^2

where J(.) is the Jamiolkowski isomorphism map that maps an operation matrix to its corresponding Choi matrix.

When both of the input matrices a and b are TP and the target matrix b is unitary, we can use a more efficient formula:

F= Tr(a @ b.conjugate().T)/d^2

Parameters

anumpy array

First matrix.

bnumpy array

Second matrix.

mx_basis{‘std’, ‘gm’, ‘pp’, ‘qt’} or Basis object

The basis of the matrices. Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt) (or a custom basis object).

is_tpbool, optional (default None)

Flag indicating both matrices are TP. If None (the default), an explicit check is performed. If True/False, the check is skipped and the provided value is used (faster, but should only be used when the user is certain this is true a priori).

is_unitarybool, optional (default None)

Flag indicating that the second matrix, b, is unitary. If None (the default), an explicit check is performed. If True/False, the check is skipped and the provided value is used (faster, but should only be used when the user is certain this is true a priori).

Returns

float
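
A trivial sanity-check sketch (illustrative only):

    import numpy as np
    from pygsti.tools import entanglement_fidelity

    g = np.identity(4)                   # ideal 1-qubit identity superoperator in the 'pp' basis
    print(entanglement_fidelity(g, g))   # a process compared with itself: expect 1.0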

pygsti.tools.average_gate_fidelity(a, b, mx_basis='pp', is_tp=None, is_unitary=None)

Computes the average gate fidelity (AGF) between two gates.

Average gate fidelity (F_g) is related to entanglement fidelity (F_p), via:

F_g = (d * F_p + 1)/(1 + d),

where d is the Hilbert space dimension. This formula, and the definition of AGF, can be found in Phys. Lett. A 303 249-252 (2002).

Parameters

aarray or gate

The gate to compute the AGF to b of. E.g., an imperfect implementation of b.

barray or gate

The gate to compute the AGF to a of. E.g., the target gate corresponding to a.

mx_basis{“std”,”gm”,”pp”} or Basis object, optional

The basis of the matrices.

is_tpbool, optional (default None)

Flag indicating both matrices are TP. If None (the default), an explicit check is performed. If True/False, the check is skipped and the provided value is used (faster, but should only be used when the user is certain this is true a priori).

is_unitarybool, optional (default None)

Flag indicating that the second matrix, b, is unitary. If None (the default), an explicit check is performed. If True/False, the check is skipped and the provided value is used (faster, but should only be used when the user is certain this is true a priori).

Returns

AGFfloat

The AGF of a to b.
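
The quoted relation is simple arithmetic; for example, for a single qubit (d = 2) an entanglement (process) fidelity of 0.98 corresponds to:

    d = 2
    F_p = 0.98
    F_g = (d * F_p + 1) / (d + 1)   # = 0.98666..., the average gate fidelity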

pygsti.tools.average_gate_infidelity(a, b, mx_basis='pp', is_tp=None, is_unitary=None)

Computes the average gate infidelity (AGI) between two gates.

Average gate infidelity is related to entanglement infidelity (EI) via:

AGI = d * EI / (1 + d),

where d is the Hilbert space dimension. This formula, and the definition of AGI, can be found in Phys. Lett. A 303 249-252 (2002).

Parameters

aarray or gate

The gate to compute the AGI to b of. E.g., an imperfect implementation of b.

barray or gate

The gate to compute the AGI to a of. E.g., the target gate corresponding to a.

mx_basis{“std”,”gm”,”pp”} or Basis object, optional

The basis of the matrices.

is_tpbool, optional (default None)

Flag indicating both matrices are TP. If None (the default), an explicit check is performed. If True/False, the check is skipped and the provided value is used (faster, but should only be used when the user is certain this is true a priori).

is_unitarybool, optional (default None)

Flag indicating that the second matrix, b, is unitary. If None (the default), an explicit check is performed. If True/False, the check is skipped and the provided value is used (faster, but should only be used when the user is certain this is true a priori).

Returns

float

pygsti.tools.entanglement_infidelity(a, b, mx_basis='pp', is_tp=None, is_unitary=None)

Returns the entanglement infidelity (EI) between gate matrices.

This is given by:

EI = 1 - Tr( sqrt{ sqrt(J(a)) * J(b) * sqrt(J(a)) } )^2

where J(.) is the Jamiolkowski isomorphism map that maps an operation matrix to its corresponding Choi matrix.

Parameters

anumpy array

First matrix.

bnumpy array

Second matrix.

mx_basis{‘std’, ‘gm’, ‘pp’, ‘qt’} or Basis object

The basis of the matrices. Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt) (or a custom basis object).

is_tpbool, optional (default None)

Flag indicating both matrices are TP. If None (the default), an explicit check is performed. If True/False, the check is skipped and the provided value is used (faster, but should only be used when the user is certain this is true a priori).

is_unitarybool, optional (default None)

Flag indicating that the second matrix, b, is unitary. If None (the default), an explicit check is performed. If True/False, the check is skipped and the provided value is used (faster, but should only be used when the user is certain this is true a priori).

Returns

EIfloat

The EI of a to b.

pygsti.tools.gateset_infidelity(model, target_model, itype='EI', weights=None, mx_basis=None, is_tp=None, is_unitary=None)

Computes the average-over-gates of the infidelity between gates in model and the gates in target_model.

If itype is ‘EI’ then the “infidelity” is the entanglement infidelity; if itype is ‘AGI’ then the “infidelity” is the average gate infidelity (AGI and EI are related by a dimension dependent constant).

This is the quantity to which RB error rates are sometimes claimed to be directly related.

Parameters

modelModel

The model to calculate the average infidelity, to target_model, of.

target_modelModel

The model to calculate the average infidelity, to model, of.

itypestr, optional

The infidelity type. Either ‘EI’, corresponding to entanglement infidelity, or ‘AGI’, corresponding to average gate infidelity.

weightsdict, optional

If not None, a dictionary of floats, whereby the keys are the gates in model and the values are, possibly unnormalized, probabilities. These probabilities correspond to the weighting in the average, so if the model contains gates A and B and weights[A] = 2 and weights[B] = 1 then the output is Inf(A)*2/3 + Inf(B)/3, where Inf(X) is the infidelity (to the corresponding element in the other model) of X. If None, a uniform average is taken, equivalent to setting all the weights to 1.

mx_basis{“std”,”gm”,”pp”} or Basis object, optional

The basis of the models. If None, the basis is obtained from the model.

is_tpbool, optional (default None)

Flag indicating both matrices are TP. If None (the default), an explicit check is performed. If True/False, the check is skipped and the provided value is used (faster, but should only be used when the user is certain this is true a priori).

is_unitarybool, optional (default None)

Flag indicating that the second matrix, b, is unitary. If None (the default), an explicit check is performed. If True/False, the check is skipped and the provided value is used (faster, but should only be used when the user is certain this is true a priori).

Returns

float

The weighted average-over-gates infidelity between the two models.

pygsti.tools.unitarity(a, mx_basis='gm')

Returns the “unitarity” of a channel.

Unitarity is defined as in Wallman et al, “Estimating the Coherence of noise” NJP 17 113020 (2015). The unitarity is given by (Prop 1 in Wallman et al):

u(a) = Tr( A_u^{dagger} A_u ) / (d^2 - 1),

where A_u is the unital submatrix of a, and d is the dimension of the Hilbert space. When a is written in any basis for which the first element is the normalized identity (e.g., the pp or gm bases), the unital submatrix of a is the matrix obtained when the top row and left-hand column are removed from a.

Parameters

aarray or gate

The gate for which the unitarity is to be computed.

mx_basis{“std”,”gm”,”pp”} or a Basis object, optional

The basis of the matrix.

Returns

float

pygsti.tools.fidelity_upper_bound(operation_mx)

Get an upper bound on the fidelity of the given operation matrix with any unitary operation matrix.

The closeness of the result to one indicates how “unitary” the action of operation_mx is.

Parameters

operation_mxnumpy array

The operation matrix to act on.

Returns

float

The resulting upper bound on fidelity(operation_mx, anyUnitaryGateMx)

pygsti.tools.compute_povm_map(model, povmlbl)

Constructs a gate-like quantity for the POVM within model.

This is done by embedding the k-outcome classical output space of the POVM in the Hilbert-Schmidt space of k by k density matrices by placing the classical probability distribution along the diagonal of the density matrix. Currently, this is only implemented for the case when k equals d, the dimension of the POVM’s Hilbert space.

Parameters

modelModel

The model supplying the POVM effect vectors and the basis those vectors are in.

povmlblstr

The POVM label

Returns

numpy.ndarray

The matrix of the “POVM map” in the model.basis basis.

pygsti.tools.povm_fidelity(model, target_model, povmlbl)

Computes the process (entanglement) fidelity between POVM maps.

Parameters

modelModel

The model the POVM belongs to.

target_modelModel

The target model (which also has povmlbl).

povmlblLabel

Label of the POVM to get the fidelity of.

Returns

float

pygsti.tools.povm_jtracedist(model, target_model, povmlbl)

Computes the Jamiolkowski trace distance between POVM maps using jtracedist().

Parameters

modelModel

The model the POVM belongs to.

target_modelModel

The target model (which also has povmlbl).

povmlblLabel

Label of the POVM to get the trace distance of.

Returns

float

pygsti.tools.povm_diamonddist(model, target_model, povmlbl)

Computes the diamond distance between POVM maps using diamonddist().

Parameters

modelModel

The model the POVM belongs to.

target_modelModel

The target model (which also has povmlbl).

povmlblLabel

Label of the POVM to get the diamond distance of.

Returns

float

pygsti.tools.decompose_gate_matrix(operation_mx)

Decompose a gate matrix into fixed points, axes of rotation, angles of rotation, and decay rates.

This function computes how the action of an operation matrix can be decomposed into fixed points, axes of rotation, angles of rotation, and decays. It also determines whether the gate appears to be valid and/or unitary.

Parameters

operation_mxnumpy array

The operation matrix to act on.

Returns

dict

A dictionary describing the decomposed action. Keys are:

‘isValid’bool

whether decomposition succeeded

‘isUnitary’bool

whether operation_mx describes unitary action

‘fixed point’numpy array

the fixed point of the action

‘axis of rotation’numpy array or nan

the axis of rotation

‘decay of diagonal rotation terms’float

decay of diagonal terms

‘rotating axis 1’numpy array or nan

1st axis orthogonal to axis of rotation

‘rotating axis 2’numpy array or nan

2nd axis orthogonal to axis of rotation

‘decay of off diagonal rotation terms’float

decay of off-diagonal terms

‘pi rotations’float

angle of rotation in units of pi radians

pygsti.tools.state_to_dmvec(psi)

Compute the vectorized density matrix which acts as the state psi.

This is just the outer product map |psi> => |psi><psi| with the output flattened, i.e. dot(psi, conjugate(psi).T).

Parameters

psinumpy array

The state vector.

Returns

numpy array

The vectorized density matrix.
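
A small round-trip sketch together with dmvec_to_state() (illustrative):

    import numpy as np
    from pygsti.tools import state_to_dmvec, dmvec_to_state

    psi = np.array([1.0, 1.0]) / np.sqrt(2)   # the |+> state
    dmvec = state_to_dmvec(psi)               # flattened |+><+|, a length-4 vector in the 'std' basis
    psi_back = dmvec_to_state(dmvec)          # recovers the pure state (up to a global phase)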

pygsti.tools.dmvec_to_state(dmvec, tol=1e-06)

Compute the pure state describing the action of density matrix vector dmvec.

If dmvec represents a mixed state, ValueError is raised.

Parameters

dmvecnumpy array

The vectorized density matrix, assumed to be in the standard (matrix unit) basis.

tolfloat, optional

tolerance for determining whether an eigenvalue is zero.

Returns

numpy array

The pure state, as a column vector of shape = (N,1)

pygsti.tools.unitary_to_superop(u, superop_mx_basis='pp')

TODO: docstring

pygsti.tools.unitary_to_process_mx(u)
pygsti.tools.unitary_to_std_process_mx(u)

Compute the superoperator corresponding to unitary matrix u.

Computes a super-operator (that acts on (row)-vectorized density matrices) from a unitary operator (matrix) u which acts on state vectors. This super-operator is given by the tensor product of u and conjugate(u), i.e. kron(u,u.conj).

Parameters

unumpy array

The unitary matrix which acts on state vectors.

Returns

numpy array

The super-operator process matrix.
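
A quick sketch checking the documented kron(u, u.conj) form (illustrative only):

    import numpy as np
    from pygsti.tools import unitary_to_std_process_mx

    u = np.array([[1, 0],
                  [0, 1j]])                    # a single-qubit S (phase) gate
    superop = unitary_to_std_process_mx(u)
    print(np.allclose(superop, np.kron(u, u.conj())))   # expect True, per the description above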

pygsti.tools.superop_is_unitary(superop_mx, mx_basis='pp', rank_tol=1e-06)

TODO: docstring

pygsti.tools.superop_to_unitary(superop_mx, mx_basis='pp', check_superop_is_unitary=True)

TODO: docstring

pygsti.tools.process_mx_to_unitary(superop)
pygsti.tools.std_process_mx_to_unitary(superop_mx)

Compute the unitary corresponding to the (unitary-action!) super-operator superop.

This function assumes superop acts on (row)-vectorized density matrices, and that the super-operator is of the form kron(U,U.conj).

Parameters

superopnumpy array

The superoperator matrix which acts on vectorized density matrices (in the ‘std’ matrix-unit basis).

Returns

numpy array

The unitary matrix which acts on state vectors.

pygsti.tools.spam_error_generator(spamvec, target_spamvec, mx_basis, typ='logGTi')

Construct an error generator from a SPAM vector and its target.

Computes the value of the error generator given by errgen = log( diag(spamvec / target_spamvec) ), where division is element-wise. This results in a (non-unique) error generator matrix E such that spamvec = exp(E) * target_spamvec.

Note: This is currently of very limited use, as the above algorithm fails whenever target_spamvec has zero elements where spamvec doesn’t.

Parameters

spamvecndarray

The SPAM vector.

target_spamvecndarray

The target SPAM vector.

mx_basis{‘std’, ‘gm’, ‘pp’, ‘qt’} or Basis object

The source and destination basis, respectively. Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt) (or a custom basis object).

typ{“logGTi”}

The type of error generator to compute. Allowed values are:

  • “logGTi” : errgen = log( diag(spamvec / target_spamvec) )

Returns

errgenndarray

The error generator.

pygsti.tools.error_generator(gate, target_op, mx_basis, typ='logG-logT', logG_weight=None)

Construct the error generator from a gate and its target.

Computes the value of the error generator given by errgen = log( inv(target_op) * gate ), so that gate = target_op * exp(errgen).

Parameters

gatendarray

The operation matrix

target_opndarray

The target operation matrix

mx_basis{‘std’, ‘gm’, ‘pp’, ‘qt’} or Basis object

The source and destination basis, respectively. Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt) (or a custom basis object).

typ{“logG-logT”, “logTiG”, “logGTi”}

The type of error generator to compute. Allowed values are:

  • “logG-logT” : errgen = log(gate) - log(target_op)

  • “logTiG” : errgen = log( dot(inv(target_op), gate) )

  • “logGTi” : errgen = log( dot(gate,inv(target_op)) )

logG_weight: float or None (default)

Regularization weight for logG-logT penalty of approximate logG. If None, the default weight in approximate_matrix_log() is used. Note that this will result in a logG close to logT, but G may not exactly equal exp(logG). If self-consistency with func:operation_from_error_generator is desired, consider testing lower (or zero) regularization weight.

Returns

errgenndarray

The error generator.

pygsti.tools.operation_from_error_generator(error_gen, target_op, mx_basis, typ='logG-logT')

Construct a gate from an error generator and a target gate.

Inverts the computation done in error_generator() and returns the value of the gate given by gate = target_op * exp(error_gen).

Parameters

error_genndarray

The error generator matrix

target_opndarray

The target operation matrix

mx_basis{‘std’, ‘gm’, ‘pp’, ‘qt’} or Basis object

The source and destination basis, respectively. Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt) (or a custom basis object).

typ{“logG-logT”, “logG-logT-quick”, “logTiG”, “logGTi”}

The type of error generator to invert. Allowed values are:

  • “logG-logT” : gate = exp( errgen + log(target_op) ) using internal logm

  • “logG-logT-quick” : gate = exp( errgen + log(target_op) ) using SciPy logm

  • “logTiG” : gate = dot( target_op, exp(errgen) )

  • “logGTi” : gate = dot( exp(errgen), target_op )

Returns

ndarray

The operation matrix.
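
A minimal round-trip sketch using error_generator() and operation_from_error_generator() (the depolarizing-like example matrix is hypothetical):

    import numpy as np
    from pygsti.tools import error_generator, operation_from_error_generator

    target = np.identity(4)                     # ideal 1-qubit identity superoperator ('pp' basis)
    noisy = np.diag([1.0, 0.99, 0.99, 0.99])    # a slightly depolarizing implementation of it
    errgen = error_generator(noisy, target, 'pp', typ='logGTi')
    recovered = operation_from_error_generator(errgen, target, 'pp', typ='logGTi')
    print(np.allclose(recovered, noisy))        # expect True (up to numerical precision)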

pygsti.tools.elementary_errorgens(dim, typ, basis)

Compute the elementary error generators of a certain type.

Parameters

dimint

The dimension of the error generators to be returned. This is also the associated gate dimension, and must be a perfect square, as sqrt(dim) is the dimension of density matrices. For a single qubit, dim == 4.

typ{‘H’, ‘S’, ‘C’, ‘A’}

The type of error generators to construct.

basisBasis or str

Which basis is used to construct the error generators. Note that this is not the basis of the returned error generators (which is always the ‘std’ matrix-unit basis) but that used to define the different elementary generator operations themselves.

Returns

generatorsnumpy.ndarray

An array of shape (#basis-elements,dim,dim). generators[i] is the generator corresponding to the ith basis matrix in the std (matrix unit) basis. (Note that in most cases #basis-elements == dim, so the size of generators is (dim,dim,dim) ). Each generator is normalized so that as a vector it has unit Frobenius norm.

pygsti.tools.elementary_errorgens_dual(dim, typ, basis)

Compute the set of dual-to-elementary error generators of a given type.

These error generators are dual to the elementary error generators constructed by elementary_errorgens().

Parameters

dimint

The dimension of the error generators to be returned. This is also the associated gate dimension, and must be a perfect square, as sqrt(dim) is the dimension of density matrices. For a single qubit, dim == 4.

typ{‘H’, ‘S’, ‘C’, ‘A’}

The type of error generators to construct.

basisBasis or str

Which basis is used to construct the error generators. Note that this is not the basis of the returned error generators (which is always the ‘std’ matrix-unit basis) but that used to define the different elementary generator operations themselves.

Returns

generatorsnumpy.ndarray

An array of shape (#basis-elements,dim,dim). generators[i] is the generator corresponding to the ith basis matrix in the std (matrix unit) basis. (Note that in most cases #basis-elements == dim, so the size of generators is (dim,dim,dim) ). Each generator is normalized so that as a vector it has unit Frobenius norm.

pygsti.tools.extract_elementary_errorgen_coefficients(errorgen, elementary_errorgen_labels, elementary_errorgen_basis='pp', errorgen_basis='pp', return_projected_errorgen=False)

TODO: docstring

pygsti.tools.project_errorgen(errorgen, elementary_errorgen_type, elementary_errorgen_basis, errorgen_basis='pp', return_dual_elementary_errorgens=False, return_projected_errorgen=False)

Compute the projections of a gate error generator onto a set of elementary error generators. TODO: docstring update

This standard set of errors is given by projection_type, and is constructed from the elements of the projection_basis basis.

Parameters

errorgen: ndarray

The error generator matrix to project.

projection_type{“hamiltonian”, “stochastic”, “affine”}

The type of error generators to project the gate error generator onto. If “hamiltonian”, then use the Hamiltonian generators which take a density matrix rho -> -i*[ H, rho ] for Pauli-product matrix H. If “stochastic”, then use the Stochastic error generators which take rho -> P*rho*P for Pauli-product matrix P (recall P is self adjoint). If “affine”, then use the affine error generators which take rho -> P (superop is |P>><<1|).

projection_basis{‘std’, ‘gm’, ‘pp’, ‘qt’} or Basis object

The source and destination basis, respectively. Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt) (or a custom basis object).

mx_basis{‘std’, ‘gm’, ‘pp’, ‘qt’} or Basis object

The source and destination basis, respectively. Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt) (or a custom basis object).

return_generatorsbool, optional

If True, return the error generators projected against, along with the projection values themselves.

return_scale_fctrbool, optional

If True, also return the scaling factor that was used to multiply the projections onto normalized error generators to get the returned values.

Returns

projectionsnumpy.ndarray

An array of length equal to the number of elements in the basis used to construct the projectors. Typically this is also the dimension of the gate (e.g. 4 for a single qubit).

generatorsnumpy.ndarray

Only returned when return_generators == True. An array of shape (#basis-els,op_dim,op_dim) such that generators[i] is the generator corresponding to the i-th basis element. Note that these matrices are in the std (matrix unit) basis.

pygsti.tools.create_elementary_errorgen_nqudit(typ, basis_element_labels, basis_1q, normalize=False, sparse=False, tensorprod_basis=False)

TODO: docstring - labels can be, e.g. (‘H’, ‘XX’) and basis should be a 1-qubit basis w/single-char labels

pygsti.tools.create_elementary_errorgen_nqudit_dual(typ, basis_element_labels, basis_1q, normalize=False, sparse=False, tensorprod_basis=False)

TODO: docstring - labels can be, e.g. (‘H’, ‘XX’) and basis should be a 1-qubit basis w/single-char labels

pygsti.tools.rotation_gate_mx(r, mx_basis='gm')

Construct a rotation operation matrix.

Build the operation matrix corresponding to the unitary

exp(-i * (r[0]/2*PP[0]*sqrt(d) + r[1]/2*PP[1]*sqrt(d) + …) )

where PP is the array of Pauli-product matrices obtained via pp_matrices(d), where d = sqrt(len(r)+1). The division by 2 is by convention, and the sqrt(d) essentially un-normalizes the matrices returned by pp_matrices() so they are equal to products of the standard Pauli matrices.

Parameters

rtuple

A tuple of coefficients, one per non-identity Pauli-product basis element.

mx_basis{‘std’, ‘gm’, ‘pp’, ‘qt’} or Basis object

The source and destination basis, respectively. Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt) (or a custom basis object).

Returns

numpy array

a d^2 x d^2 operation matrix in the specified basis.

pygsti.tools.project_model(model, target_model, projectiontypes=('H', 'S', 'H+S', 'LND'), gen_type='logG-logT', logG_weight=None)

Construct new model(s) by projecting the error generator of model onto some subspace and then reconstructing.

Parameters

modelModel

The model whose error generator should be projected.

target_modelModel

The set of target (ideal) gates.

projectiontypestuple of {‘H’,’S’,’H+S’,’LND’,’LNDF’}

Which projections to use. The length of this tuple gives the number of Model objects returned. Allowed values are:

  • ‘H’ = Hamiltonian errors

  • ‘S’ = Stochastic Pauli-channel errors

  • ‘H+S’ = both of the above error types

  • ‘LND’ = errgen projected to a normal (CPTP) Lindbladian

  • ‘LNDF’ = errgen projected to an unrestricted (full) Lindbladian

gen_type{“logG-logT”, “logTiG”, “logGTi”}

The type of error generator to compute. For more details, see func:error_generator.

logG_weight: float or None (default)

Regularization weight for approximate logG in logG-logT generator. For more details, see func:error_generator.

Returns

projected_modelslist of Models

Elements are projected versions of model corresponding to the elements of projectiontypes.

Npslist of parameter counts

Integer parameter counts for each model in projected_models. Useful for computing the expected log-likelihood or chi2.

pygsti.tools.compute_best_case_gauge_transform(gate_mx, target_gate_mx, return_all=False)

Returns a gauge transformation that maps gate_mx into a matrix that is co-diagonal with target_gate_mx.

(Co-diagonal means that they share a common set of eigenvectors.)

Gauge transformations effectively change the basis of all the gates in a model. From the perspective of a single gate, a gauge transformation leaves its eigenvalues the same and changes its eigenvectors. This function finds a real transformation that transforms the eigenspaces of gate_mx so that there exists a set of eigenvectors which diagonalize both gate_mx and target_gate_mx.

Parameters

gate_mxnumpy.ndarray

Gate matrix to transform.

target_gate_mxnumpy.ndarray

Target gate matrix.

return_allbool, optional

If true, also return the matrices of eigenvectors Ugate (for gate_mx) and Utgt (for target_gate_mx) such that U = dot(Utgt, inv(Ugate)) is real.

Returns

Unumpy.ndarray

A gauge transformation such that if epgate = U * gate_mx * U_inv, then epgate (which has the same eigenvalues as gate_mx) can be diagonalized with a set of eigenvectors that also diagonalize target_gate_mx. Furthermore, U is real.

Ugate, Utgtnumpy.ndarray

only if return_all == True. See above.

pygsti.tools.project_to_target_eigenspace(model, target_model, eps=1e-06)

Project each gate of model onto the eigenspace of the corresponding gate within target_model.

Returns the resulting Model.

Parameters

modelModel

Model to act on.

target_modelModel

The target model, whose gates define the target eigenspaces being projected onto.

epsfloat, optional

Small magnitude specifying how much to “nudge” the target gates before eigen-decomposing them, so that their spectra will have the same conjugacy structure as the gates of model.

Returns

Model

pygsti.tools.unitary_to_pauligate(u)

Get the linear operator on (vectorized) density matrices corresponding to an n-qubit unitary operator on states.

Parameters

unumpy array

A d x d array giving the action of the unitary on a state in the sigma-z basis, where d = 2**n for n qubits.

Returns

numpy array

The operator acting on density matrices that have been vectorized as length-d**2 vectors in the Pauli basis.
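
For example, a sketch of converting a single-qubit Hadamard unitary into its 4x4 Pauli-basis superoperator:

    import numpy as np
    from pygsti.tools import unitary_to_pauligate

    H = np.array([[1,  1],
                  [1, -1]]) / np.sqrt(2)   # 2x2 unitary, so d = 2 (one qubit)

    superop = unitary_to_pauligate(H)      # acts on length-4 Pauli-basis vectors
    print(superop.shape)                   # -> (4, 4)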

pygsti.tools.is_valid_lindblad_paramtype(typ)

Whether typ is a recognized Lindblad-gate parameterization type.

A Lindblad type consists of a parameter specification followed optionally by an evolution-type suffix. The parameter spec can be “GLND” (general unconstrained Lindbladian), “CPTP” (CPTP-constrained), or any/all of the letters “H” (Hamiltonian), “S” (Stochastic, CPTP), “s” (Stochastic), “A” (Affine), “D” (Depolarization, CPTP), “d” (Depolarization) joined with plus (+) signs. Note that “A” cannot appear without one of {“S”,”s”,”D”,”d”}. The suffix can be absent (density-matrix evolution), “terms” (state-vector terms) or “clifford terms” (stabilizer-state terms). For example, valid Lindblad types are “H+S”, “H+d+A”, “CPTP clifford terms”, or “S+A terms”.

Parameters

typstr

A parameterization type.

Returns

bool
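
A few illustrative checks; the expected results follow from the rules described above:

    from pygsti.tools import is_valid_lindblad_paramtype

    print(is_valid_lindblad_paramtype("H+S"))                  # expected True
    print(is_valid_lindblad_paramtype("CPTP clifford terms"))  # expected True (spec + suffix)
    print(is_valid_lindblad_paramtype("A"))                    # expected False: "A" needs S, s, D, or d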

pygsti.tools.effect_label_to_outcome(povm_and_effect_lbl)

Extract the outcome label from a “simplified” effect label.

Simplified effect labels are not themselves so simple. They combine POVM and effect labels so that accessing any given effect vector is simpler.

If povm_and_effect_lbl is None then “NONE” is returned.

Parameters

povm_and_effect_lblLabel

Simplified effect vector.

Returns

str

pygsti.tools.effect_label_to_povm(povm_and_effect_lbl)

Extract the POVM label from a “simplified” effect label.

Simplified effect labels are not themselves so simple. They combine POVM and effect labels so that accessing any given effect vector is simpler.

If povm_and_effect_lbl is None then “NONE” is returned.

Parameters

povm_and_effect_lblLabel

Simplified effect vector.

Returns

str

pygsti.tools.id2x2
pygsti.tools.sigmax
pygsti.tools.sigmay
pygsti.tools.sigmaz
pygsti.tools.sigmaii
pygsti.tools.sigmaix
pygsti.tools.sigmaiy
pygsti.tools.sigmaiz
pygsti.tools.sigmaxi
pygsti.tools.sigmaxx
pygsti.tools.sigmaxy
pygsti.tools.sigmaxz
pygsti.tools.sigmayi
pygsti.tools.sigmayx
pygsti.tools.sigmayy
pygsti.tools.sigmayz
pygsti.tools.sigmazi
pygsti.tools.sigmazx
pygsti.tools.sigmazy
pygsti.tools.sigmazz
pygsti.tools.single_qubit_gate(hx, hy, hz, noise=0)

Construct the single-qubit operation matrix.

Build the operation matrix given by exponentiating -i * (hx*X + hy*Y + hz*Z), where X, Y, and Z are the sigma matrices. Thus, hx, hy, and hz correspond to rotation angles divided by 2. Additionally, a uniform depolarization noise can be applied to the gate.

Parameters

hxfloat

Coefficient of sigma-X matrix in exponent.

hyfloat

Coefficient of sigma-Y matrix in exponent.

hzfloat

Coefficient of sigma-Z matrix in exponent.

noisefloat, optional

The amount of uniform depolarizing noise.

Returns

numpy array

4x4 operation matrix which operates on a 1-qubit density matrix expressed as a vector in the Pauli basis ( {I,X,Y,Z}/sqrt(2) ).
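
A minimal sketch: since hx, hy, and hz are half-angles, hx = pi/4 gives a pi/2 rotation about X; the noise value is illustrative:

    import numpy as np
    from pygsti.tools import single_qubit_gate

    x_half_pi = single_qubit_gate(np.pi / 4, 0.0, 0.0)                     # ideal pi/2 X rotation
    x_half_pi_noisy = single_qubit_gate(np.pi / 4, 0.0, 0.0, noise=0.01)   # with depolarization

    print(x_half_pi.shape)   # -> (4, 4)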

pygsti.tools.two_qubit_gate(ix=0, iy=0, iz=0, xi=0, xx=0, xy=0, xz=0, yi=0, yx=0, yy=0, yz=0, zi=0, zx=0, zy=0, zz=0, ii=0)

Construct a two-qubit operation matrix.

Build the operation matrix given by exponentiating -i * (xx*XX + xy*XY + …) where terms in the exponent are tensor products of two Pauli matrices.

Parameters

ixfloat, optional

Coefficient of IX matrix in exponent.

iyfloat, optional

Coefficient of IY matrix in exponent.

izfloat, optional

Coefficient of IZ matrix in exponent.

xifloat, optional

Coefficient of XI matrix in exponent.

xxfloat, optional

Coefficient of XX matrix in exponent.

xyfloat, optional

Coefficient of XY matrix in exponent.

xzfloat, optional

Coefficient of XZ matrix in exponent.

yifloat, optional

Coefficient of YI matrix in exponent.

yxfloat, optional

Coefficient of YX matrix in exponent.

yyfloat, optional

Coefficient of YY matrix in exponent.

yzfloat, optional

Coefficient of YZ matrix in exponent.

zifloat, optional

Coefficient of ZI matrix in exponent.

zxfloat, optional

Coefficient of ZX matrix in exponent.

zyfloat, optional

Coefficient of ZY matrix in exponent.

zzfloat, optional

Coefficient of ZZ matrix in exponent.

iifloat, optional

Coefficient of II matrix in exponent.

Returns

numpy array

16x16 operation matrix which operates on a 2-qubit density matrix expressed as a vector in the Pauli-Product basis.
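
A sketch of a gate generated purely by a ZZ interaction (only the zz coefficient is non-zero; the value is illustrative):

    import numpy as np
    from pygsti.tools import two_qubit_gate

    zz_gate = two_qubit_gate(zz=np.pi / 4)   # exp(-i * (pi/4) * ZZ)
    print(zz_gate.shape)                     # -> (16, 16), acting on 2-qubit Pauli-product vectors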

pygsti.tools.deprecate(replacement=None)

Decorator for deprecating a function.

Parameters

replacementstr, optional

the name of the function that should replace it.

Returns

function

pygsti.tools.cache_by_hashed_args(obj)

Decorator for caching a function's values.

Deprecated since version v0.9.8.3: cache_by_hashed_args() will be removed in pyGSTi v0.9.9. Use functools.lru_cache() instead.

Parameters

objfunction

function to decorate

Returns

function

pygsti.tools.timed_block(label, time_dict=None, printer=None, verbosity=2, round_places=6, pre_message=None, format_str=None)

Context manager that times a block of code

Parameters

labelstr

An identifying label for this timed block.

time_dictdict, optional

A dictionary to store the final time in, under the key label.

printerVerbosityPrinter, optional

A printer object to log the timer’s message. If None, this message will be printed directly.

verbosityint, optional

The verbosity level at which to print the time message (if printer is given).

round_placesint, optional

How many decimal places of precision to print time with (in seconds).

pre_messagestr, optional

A format string to print out before the timer's message, which formats the label argument, e.g. “My label is {}”.

format_strstr, optional

A format string used to format the label before the resulting “rendered label” is used as the first argument in the final formatting string “{} took {} seconds”.

pygsti.tools.time_hash()

Get a string version of the current time.

Returns

str

pygsti.tools.tvd(p, q)

Calculates the total variation distance (TVD) between two probability distributions.

The distributions must be dictionaries, where keys are events (e.g., bit strings) and values are the probabilities. If an event in the keys of one dictionary isn't in the keys of the other, then that probability is assumed to be zero. There are no checks that the input probability distributions are valid (i.e., that the probabilities sum to one and are positive).

Parameters

p, qdicts

The distributions to calculate the TVD between.

Returns

float

pygsti.tools.classical_fidelity(p, q)

Calculates the (classical) fidelity between two probability distributions.

The distributions must be dictionaries, where keys are events (e.g., bit strings) and values are the probabilities. If an event in the keys of one dictionary isn't in the keys of the other, then that probability is assumed to be zero. There are no checks that the input probability distributions are valid (i.e., that the probabilities sum to one and are positive).

Parameters

p, qdicts

The distributions to calculate the fidelity between.

Returns

float
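
A small worked sketch of both distribution comparisons; events missing from one dictionary are treated as probability zero, as described above:

    from pygsti.tools import tvd, classical_fidelity

    p = {'00': 0.5, '11': 0.5}
    q = {'00': 0.4, '01': 0.1, '11': 0.5}

    # Under the usual 0.5 * sum |p(x) - q(x)| definition this is 0.1.
    print(tvd(p, q))

    # Classical fidelity between p and q (the missing p('01') counts as 0).
    print(classical_fidelity(p, q))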

pygsti.tools.predicted_rb_number(model, target_model, weights=None, d=None, rtype='EI')

Predicts the RB error rate from a model.

Uses the “L-matrix” theory from Proctor et al Phys. Rev. Lett. 119, 130502 (2017). Note that this gives the same predictions as the theory in Wallman Quantum 2, 47 (2018).

This theory is valid for various types of RB, including standard Clifford RB – i.e., it will accurately predict the per-Clifford error rate reported by standard Clifford RB. It is also valid for “direct RB” under broad circumstances.

For this function to be valid the model should be trace preserving and completely positive in some representation, but the particular representation of the model used is irrelevant, as the predicted RB error rate is a gauge-invariant quantity. The function is likely reliable when complete positivity is slightly violated, although the theory on which it is based assumes complete positivity.

Parameters

modelModel

The model to calculate the RB number of. This model is the model randomly sampled over, so this is not necessarily the set of physical primitives. In Clifford RB this is a set of Clifford gates; in “direct RB” this normally would be the physical primitives.

target_modelModel

The target model, corresponding to model. This function is not invariant under swapping model and target_model: this Model must be the target model, and should consist of perfect gates.

weightsdict, optional

If not None, a dictionary of floats, whereby the keys are the gates in model and the values are the unnormalized probabilities to apply each gate at each stage of the RB protocol. If not None, the values in weights must all be non-negative and must not all be zero, because, when divided by their sum, they must form a valid probability distribution. If None, the weighting defaults to an equal weighting on all gates, as this is used in many RB protocols (e.g., Clifford RB). This weighting is flexible in the “direct RB” protocol, however.

dint, optional

The Hilbert space dimension. If None, then sqrt(model.dim) is used.

rtypestr, optional

The type of RB error rate, either “EI” or “AGI”, corresponding to different dimension-dependent rescalings of the RB decay constant p obtained from fitting to Pm = A + Bp^m. “EI” corresponds to an RB error rate that is associated with entanglement infidelity, which is the probability of error for a gate with stochastic errors. This is the RB error rate defined in the “direct RB” protocol, and is given by:

r = (d^2 - 1)(1 - p)/d^2.

The AGI-type r is given by

r = (d - 1)(1 - p)/d,

which is the conventional r definition in Clifford RB. This r is associated with (gate-averaged) average gate infidelity.

Returns

rfloat.

The predicted RB number.

pygsti.tools.predicted_rb_decay_parameter(model, target_model, weights=None)

Computes the second largest eigenvalue of the ‘L matrix’ (see the L_matrix function).

For standard Clifford RB and direct RB, this corresponds to the RB decay parameter p in Pm = A + Bp^m for “reasonably low error” trace preserving and completely positive gates. See also the predicted_rb_number function.

Parameters

modelModel

The model to calculate the RB decay parameter of. This model is the model randomly sampled over, so this is not necessarily the set of physical primitives. In Clifford RB this is a set of Clifford gates; in “direct RB” this normally would be the physical primitives.

target_modelModel

The target model corresponding to model. This function is not invariant under swapping model and target_model: this Model must be the target model, and should consist of perfect gates.

weightsdict, optional

If not None, a dictionary of floats, whereby the keys are the gates in model and the values are the unnormalized probabilities to apply each gate at each stage of the RB protocol. If not None, the values in weights must all be non-negative and must not all be zero, because, when divided by their sum, they must form a valid probability distribution. If None, the weighting defaults to an equal weighting on all gates, as this is used in many RB protocols (e.g., Clifford RB). This weighting is flexible in the “direct RB” protocol, however.

Returns

pfloat.

The second largest eigenvalue of L. This is the RB decay parameter for various types of RB.
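
A minimal sketch of calling the two prediction functions above, assuming the smq1Q_XYI model pack and Model.depolarize() (as used in pyGSTi's tutorials) stand in for a real RB gate set:

    from pygsti.modelpacks import smq1Q_XYI
    from pygsti.tools import predicted_rb_number, predicted_rb_decay_parameter

    target_model = smq1Q_XYI.target_model()
    model = target_model.depolarize(op_noise=0.01)   # placeholder "noisy" model

    r_ei = predicted_rb_number(model, target_model, rtype='EI')    # entanglement-infidelity convention
    r_agi = predicted_rb_number(model, target_model, rtype='AGI')  # average-gate-infidelity convention
    p = predicted_rb_decay_parameter(model, target_model)          # the underlying decay parameter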

pygsti.tools.rb_gauge(model, target_model, weights=None, mx_basis=None, eigenvector_weighting=1.0)

Computes the gauge transformation required so that the RB number matches the average model infidelity.

This function computes the gauge transformation required so that, when the model is transformed via this gauge transformation, the RB number – as predicted by the function predicted_rb_number – is the average model infidelity between the transformed model and the target model target_model. This transformation is defined in Proctor et al Phys. Rev. Lett. 119, 130502 (2017); see also Wallman Quantum 2, 47 (2018).

Parameters

modelModel

The RB model. This is not necessarily the set of physical primitives – it is the model randomly sampled over in the RB protocol (e.g., the Cliffords).

target_modelModel

The target model corresponding to model. This function is not invariant under swapping model and target_model: this Model must be the target model, and should consist of perfect gates.

weightsdict, optional

If not None, a dictionary of floats, whereby the keys are the gates in model and the values are the unnormalized probabilities to apply each gate at each stage of the RB protocol. If not None, the values in weights must all be non-negative and must not all be zero, because, when divided by their sum, they must form a valid probability distribution. If None, the weighting defaults to an equal weighting on all gates, as this is used in many RB protocols (e.g., Clifford RB). This weighting is flexible in the “direct RB” protocol, however.

mx_basis{“std”,”gm”,”pp”}, optional

The basis of the models. If None, the basis is obtained from the model.

eigenvector_weightingfloat, optional

Must be non-zero. A weighting on the eigenvector with eigenvalue that is the RB decay parameter, in the sum of this eigenvector and the eigenvector with eigenvalue of 1 that defines the returned matrix l_operator. The value of this factor does not change whether this l_operator transforms into a gauge in which r = AGsI, but it may affect other properties of the gates in that gauge. It is irrelevant if the gates are unital.

Returns

l_operatorarray

The matrix defining the gauge-transformation.

pygsti.tools.transform_to_rb_gauge(model, target_model, weights=None, mx_basis=None, eigenvector_weighting=1.0)

Transforms a Model into the “RB gauge” (see the RB_gauge function).

This notion was introduced in Proctor et al Phys. Rev. Lett. 119, 130502 (2017). This gauge is a function of both the model and its target. These may be input in any gauge, for the purposes of obtaining “r = average model infidelity” between the output Model and target_model.

Parameters

modelModel

The RB model. This is not necessarily the set of physical primitives – it is the model randomly sampled over in the RB protocol (e.g., the Cliffords).

target_modelModel

The target model corresponding to model. This function is not invariant under swapping model and target_model: this Model must be the target model, and should consist of perfect gates.

weightsdict, optional

If not None, a dictionary of floats, whereby the keys are the gates in model and the values are the unnormalized probabilities to apply each gate at each stage of the RB protocol. If not None, the values in weights must all be non-negative and must not all be zero, because, when divided by their sum, they must form a valid probability distribution. If None, the weighting defaults to an equal weighting on all gates, as this is used in many RB protocols (e.g., Clifford RB). This weighting is flexible in the “direct RB” protocol, however.

mx_basis{“std”,”gm”,”pp”}, optional

The basis of the models. If None, the basis is obtained from the model.

eigenvector_weightingfloat, optional

Must be non-zero. A weighting on the eigenvector with eigenvalue that is the RB decay parameter, in the sum of this eigenvector and the eigenvector with eigenvalue of 1 that defines the returned matrix l_operator. The value of this factor does not change whether this l_operator transforms into a gauge in which r = AGsI, but it may affect other properties of the gates in that gauge. It is irrelevant if the gates are unital.

Returns

model_in_RB_gaugeModel

The model model transformed into the “RB gauge”.

pygsti.tools.L_matrix(model, target_model, weights=None)

Constructs a generalization of the ‘L-matrix’ linear operator on superoperators.

From Proctor et al Phys. Rev. Lett. 119, 130502 (2017), the ‘L-matrix’ is represented as a matrix via the “stack” operation. The eigenvalues of this matrix describe the decay constant (or constants) in an RB decay curve for an RB protocol whereby random elements of the provided model are sampled according to the weights probability distribution over the model. So, this facilitates predictions of Clifford RB and direct RB decay curves.

Parameters

modelModel

The RB model. This is not necessarily the set of physical primitives – it is the model randomly sampled over in the RB protocol (e.g., the Cliffords).

target_modelModel

The target model corresponding to model. This function is not invariant under swapping model and target_model: this Model must be the target model, and should consist of perfect gates.

weightsdict, optional

If not None, a dictionary of floats, whereby the keys are the gates in model and the values are the unnormalized probabilities to apply each gate at each stage of the RB protocol. If not None, the values in weights must all be non-negative and must not all be zero, because, when divided by their sum, they must form a valid probability distribution. If None, the weighting defaults to an equal weighting on all gates, as this is used in many RB protocols (e.g., Clifford RB). This weighting is flexible in the “direct RB” protocol, however.

Returns

Lfloat

A weighted version of the L operator from Proctor et al Phys. Rev. Lett. 119, 130502 (2017), represented as a matrix using the ‘stacking’ convention.

pygsti.tools.R_matrix_predicted_rb_decay_parameter(model, group, group_to_model=None, weights=None)

Returns the second largest eigenvalue of a generalization of the ‘R-matrix’ [see the R_matrix function].

Introduced in Proctor et al Phys. Rev. Lett. 119, 130502 (2017). This number is a prediction of the RB decay parameter for trace-preserving gates and a variety of forms of RB, including Clifford and direct RB. This function creates a matrix which scales super-exponentially in the number of qubits.

Parameters

modelModel

The model to predict the RB decay parameter for. If group_to_model is None, the labels of the gates in model should be the same as the labels of the group elements in group. For Clifford RB this would be the Clifford model, for direct RB it would be the primitive gates.

groupMatrixGroup

The group that the model model contains gates from (model does not need to be the full group, and could be a subset of group). For Clifford RB and direct RB, this would be the Clifford group.

group_to_modeldict, optional

If not None, a dictionary that maps labels of group elements to labels of model. If model and group elements have the same labels, this dictionary is not required. Otherwise it is necessary.

weightsdict, optional

If not None, a dictionary of floats, whereby the keys are the gates in model and the values are the unnormalized probabilities to apply each gate at each stage of the RB protocol. If not None, the values in weights must all be positive or zero, and they must not all be zero (because, when divided by their sum, they must be a valid probability distribution). If None, the weighting defaults to an equal weighting on all gates, as used in most RB protocols.

Returns

pfloat

The predicted RB decay parameter. Valid for standard Clifford RB or direct RB with trace-preserving gates, and in a range of other circumstances.

pygsti.tools.R_matrix(model, group, group_to_model=None, weights=None)

Constructs a generalization of the ‘R-matrix’ of Proctor et al Phys. Rev. Lett. 119, 130502 (2017).

This matrix describes the exact behaviour of the average success probabilities of RB sequences. This matrix is super-exponentially large in the number of qubits, but can be constructed for 1-qubit models.

Parameters

modelModel

The noisy model (e.g., the Cliffords) to calculate the R matrix of. The corresponding target model (not required in this function) must be equal to or a subset of (a faithful rep of) the group group. If group_to_model is None, the labels of the gates in model should be the same as the labels of the corresponding group elements in group. For Clifford RB, model should be the Clifford model; for direct RB this should be the native model.

groupMatrixGroup

The group that the model model contains gates from. For Clifford RB or direct RB, this would be the Clifford group.

group_to_modeldict, optional

If not None, a dictionary that maps labels of group elements to labels of model. This is required if the labels of the gates in model are different from the labels of the corresponding group elements in group.

weightsdict, optional

If not None, a dictionary of floats, whereby the keys are the gates in model and the values are the unnormalized probabilities to apply each gate at each layer of the RB protocol. If None, the weighting defaults to an equal weighting on all gates, as used in most RB protocols (e.g., Clifford RB).

Returns

Rfloat

A weighted, subset-sampling generalization of the ‘R-matrix’ from Proctor et al Phys. Rev. Lett. 119, 130502 (2017).

pygsti.tools.errormaps(model, target_model)

Computes the ‘left-multiplied’ error maps associated with a noisy gate set, along with the average error map.

This is the model [E_1,…] such that

G_i = E_i T_i,

where T_i is the gate which G_i is a noisy implementation of. There is an additional gate in the set with the key ‘Gavg’, which is the average of the error maps.

Parameters

modelModel

The imperfect model.

target_modelModel

The target model.

Returns

errormapsModel

The left-multiplied error gates, along with the average error map (under the key ‘Gavg’).

pygsti.tools.gate_dependence_of_errormaps(model, target_model, norm='diamond', mx_basis=None)

Computes the “gate-dependence of error maps” parameter defined by

delta_avg = avg_i|| E_i - avg_i(E_i) ||,

where E_i are the error maps, and the norm is either the diamond norm or the 1-to-1 norm. This quantity is defined in Magesan et al PRA 85 042311 2012.

Parameters

modelModel

The actual model

target_modelModel

The target model.

normstr, optional

The norm used in the calculation. Can be either ‘diamond’ for the diamond norm, or ‘1to1’ for the Hermitian 1 to 1 norm.

mx_basis{“std”,”gm”,”pp”}, optional

The basis of the models. If None, the basis is obtained from the model.

Returns

delta_avgfloat

The value of the parameter defined above.

pygsti.tools.length(s)

Returns the length (the number of indices) contained in a slice.

Parameters

sslice

The slice to operate upon.

Returns

int

pygsti.tools.shift(s, offset)

Returns a new slice whose start and stop points are shifted by offset.

Parameters

sslice

The slice to operate upon.

offsetint

The amount to shift the start and stop members of s.

Returns

slice

pygsti.tools.intersect(s1, s2)

Returns the intersection of two slices (which must have the same step).

Parameters

s1slice

First slice.

s2slice

Second slice.

Returns

slice

pygsti.tools.intersect_within(s1, s2)

Returns the intersection of two slices (which must have the same step), along with the sub-slices of s1 and s2 that specify the intersection.

Furthermore, s2 may be an array of indices, in which case the returned slices become arrays as well.

Parameters

s1slice

First slice. Must have definite boundaries (start & stop cannot be None).

s2slice or numpy.ndarray

Second slice or index array.

Returns

intersectionslice or numpy.ndarray

The intersection of s1 and s2.

subslice1slice or numpy.ndarray

The portion of s1 that yields intersection.

subslice2slice or numpy.ndarray

The portion of s2 that yields intersection.

pygsti.tools.indices(s, n=None)

Returns a list of the indices specified by slice s.

Parameters

sslice

The slice to operate upon.

nint, optional

The number of elements in the array being indexed, used for computing negative start/stop points.

Returns

list of ints

pygsti.tools.list_to_slice(lst, array_ok=False, require_contiguous=True)

Returns a slice corresponding to a given list of (integer) indices, if this is possible.

If not, array_ok determines the behavior.

Parameters

lstlist

The list of integers to convert to a slice (must be contiguous if require_contiguous == True).

array_okbool, optional

If True, an integer array (of type numpy.ndarray) is returned when lst does not correspond to a single slice. Otherwise, an AssertionError is raised.

require_contiguousbool, optional

If True, then lst will only be converted to a contiguous (step=1) slice, otherwise either a ValueError is raised (if array_ok is False) or an array is returned.

Returns

numpy.ndarray or slice
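
A quick sketch of the slice/index-list helpers above (exact return representations may differ slightly):

    from pygsti.tools import list_to_slice, indices

    s = list_to_slice([2, 3, 4, 5])                 # contiguous list -> a slice, e.g. slice(2, 6)
    print(indices(s))                               # -> [2, 3, 4, 5]

    arr = list_to_slice([0, 2, 5], array_ok=True)   # cannot be a single slice -> integer array
    print(arr)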

pygsti.tools.to_array(slc_or_list_like)

Returns slc_or_list_like as an index array (an integer numpy.ndarray).

Parameters

slc_or_list_likeslice or list

A slice, list, or array.

Returns

numpy.ndarray

pygsti.tools.divide(slc, max_len)

Divides a slice into sub-slices based on a maximum length (for each sub-slice).

For example: divide(slice(0,10,2), 2) == [slice(0,4,2), slice(4,8,2), slice(8,10,2)]

Parameters

slcslice

The slice to divide

max_lenint

The maximum length (i.e. number of indices) allowed in a sub-slice.

Returns

list of slices

pygsti.tools.slice_of_slice(slc, base_slc)

A slice that is the composition of base_slc and slc.

So that when indexing an array a, a[slice_of_slice(slc, base_slc)] == a[base_slc][slc]

Parameters

slcslice

the slice to take out of base_slc.

base_slcslice

the original “base” slice to act upon.

Returns

slice

pygsti.tools.slice_hash(slc)
pygsti.tools.smart_cached(obj)

Decorator for applying a smart cache to a single function or method.

Parameters

objfunction

function to decorate.

Returns

function

pygsti.tools.symplectic_form(n, convention='standard')

Creates the symplectic form for the number of qubits specified.

There are two variants of the symplectic form, over the finite field of the integers modulo 2, used in pyGSTi, corresponding to the ‘standard’ and ‘directsum’ conventions. In the case of ‘standard’, the symplectic form is the 2n x 2n matrix ((0,1),(1,0)), where ‘1’ and ‘0’ are the identity and all-zeros matrices of size n x n. The ‘standard’ symplectic form is probably the most commonly used, and it is the definition used throughout most of the code, including the Clifford compilers. In the case of ‘directsum’, the symplectic form is the direct sum of n 2x2 bit-flip matrices. This is only used in pyGSTi for sampling from the symplectic group.

Parameters

nint

The number of qubits the symplectic form should be constructed for. That is, the function creates a 2n x 2n matrix that is a symplectic form.

conventionstr, optional

Can be either ‘standard’ or ‘directsum’, which correspond to two different definitions for the symplectic form.

Returns

numpy array

The specified symplectic form.
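
A sketch of the ‘standard’ convention for n = 2, compared against the block structure described above (numpy is used only for the comparison):

    import numpy as np
    from pygsti.tools import symplectic_form

    n = 2
    omega_std = symplectic_form(n, convention='standard')
    omega_ds = symplectic_form(n, convention='directsum')

    # 'standard' form: block matrix ((0, 1), (1, 0)) with n x n identity / zero blocks.
    expected = np.block([[np.zeros((n, n), dtype=int), np.eye(n, dtype=int)],
                         [np.eye(n, dtype=int), np.zeros((n, n), dtype=int)]])
    print(np.array_equal(omega_std, expected))   # expected True per the description above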

pygsti.tools.change_symplectic_form_convention(s, outconvention='standard')

Maps the input symplectic matrix between the ‘standard’ and ‘directsum’ symplectic form conventions.

That is, if the input is a symplectic matrix with respect to the ‘directsum’ convention and outconvention = ‘standard’, the output of this function is the equivalent symplectic matrix in the ‘standard’ symplectic form convention. Similarly, if the input is a symplectic matrix with respect to the ‘standard’ convention and outconvention = ‘directsum’, the output of this function is the equivalent symplectic matrix in the ‘directsum’ symplectic form convention.

Parameters

snumpy.ndarray

The input symplectic matrix.

outconventionstr, optional

Can be either ‘standard’ or ‘directsum’, which correspond to two different definitions for the symplectic form. This is the convention the input is being converted to (and so the input should be a symplectic matrix in the other convention).

Returns

numpy array

The matrix s converted to outconvention.

pygsti.tools.check_symplectic(m, convention='standard')

Checks whether a matrix is symplectic.

Parameters

mnumpy array

The matrix to check.

conventionstr, optional

Can be either ‘standard’ or ‘directsum’. Specifies the convention of the symplectic form with respect to which the matrix should be symplectic.

Returns

bool

A bool specifying whether the matrix is symplectic

pygsti.tools.inverse_symplectic(s)

Returns the inverse of a symplectic matrix over the integers mod 2.

Parameters

snumpy array

The matrix to invert

Returns

numpy array

The inverse of s, over the field of the integers mod 2.

pygsti.tools.inverse_clifford(s, p)

Returns the inverse of a Clifford gate in the symplectic representation.

This uses the formulas derived in Hostens and De Moor PRA 71, 042315 (2005).

Parameters

snumpy array

The symplectic matrix over the integers mod 2 representing the Clifford

pnumpy array

The ‘phase vector’ over the integers mod 4 representing the Clifford

Returns

sinversenumpy array

The symplectic matrix representing the inverse of the input Clifford.

pinversenumpy array

The ‘phase vector’ representing the inverse of the input Clifford.

pygsti.tools.check_valid_clifford(s, p)

Checks if a symplectic matrix - phase vector pair (s,p) is the symplectic representation of a Clifford.

This uses the formulas derived in Hostens and De Moor PRA 71, 042315 (2005).

Parameters

snumpy array

The symplectic matrix over the integers mod 2 representing the Clifford

pnumpy array

The ‘phase vector’ over the integers mod 4 representing the Clifford

Returns

bool

True if (s,p) is the symplectic representation of some Clifford.

pygsti.tools.construct_valid_phase_vector(s, pseed)

Constructs a phase vector that, when paired with the provided symplectic matrix, defines a Clifford gate.

If the seed phase vector, when paired with s, represents some Clifford, this seed is returned. Otherwise 1 mod 4 is added to the required elements of pseed in order to make it a valid phase vector (which is one of many possible phase vectors that, together with s, define a valid Clifford).

Parameters

snumpy array

The symplectic matrix over the integers mod 2 representing the Clifford

pseednumpy array

The seed ‘phase vector’ over the integers mod 4.

Returns

numpy array

Some p such that (s,p) is the symplectic representation of some Clifford.

pygsti.tools.find_postmultipled_pauli(s, p_implemented, p_target, qubit_labels=None)

Finds the Pauli layer that should be appended to a circuit to implement a given Clifford.

If some circuit implements the Clifford described by the symplectic matrix s and the vector p_implemented, this function returns the Pauli layer that should be appended to this circuit to implement the Clifford described by s and the vector p_target.

Parameters

snumpy array

The symplectic matrix over the integers mod 2 representing the Clifford implemented by the circuit

p_implementednumpy array

The ‘phase vector’ over the integers mod 4 representing the Clifford implemented by the circuit

p_targetnumpy array

The ‘phase vector’ over the integers mod 4 that, together with s represents the Clifford that you want to implement. Together with s, this vector must define a valid Clifford.

qubit_labelslist, optional

A list of qubit labels, that are strings or ints. The length of this list should be equal to the number of qubits the Clifford acts on. The ith element of the list is the label corresponding to the qubit at the ith index of s and the two phase vectors. If None, defaults to the integers from 0 to number of qubits - 1.

Returns

list

A list that defines a Pauli layer, with the ith element containing one of the 4 tuples (P, qubit_labels[i]) with P = ‘I’, ‘X’, ‘Y’, or ‘Z’.

pygsti.tools.find_premultipled_pauli(s, p_implemented, p_target, qubit_labels=None)

Finds the Pauli layer that should be prepended to a circuit to implement a given Clifford.

If some circuit implements the Clifford described by the symplectic matrix s and the vector p_implemented, this function returns the Pauli layer that should be prefixed to this circuit to implement the Clifford described by s and the vector p_target.

Parameters

snumpy array

The symplectic matrix over the integers mod 2 representing the Clifford implemented by the circuit

p_implementednumpy array

The ‘phase vector’ over the integers mod 4 representing the Clifford implemented by the circuit

p_targetnumpy array

The ‘phase vector’ over the integers mod 4 that, together with s represents the Clifford that you want to implement. Together with s, this vector must define a valid Clifford.

qubit_labelslist, optional

A list of qubit labels, that are strings or ints. The length of this list should be equal to the number of qubits the Clifford acts on. The ith element of the list is the label corresponding to the qubit at the ith index of s and the two phase vectors. If None, defaults to the integers from 0 to number of qubits - 1.

Returns

list

A list that defines a Pauli layer, with the ith element containing one of the 4 tuples (‘I’,i), (‘X’,i), (‘Y’,i), (‘Z’,i).

pygsti.tools.find_pauli_layer(pvec, qubit_labels, pauli_labels=None)

TODO: docstring pauli_labels defaults to [‘I’, ‘X’, ‘Y’, ‘Z’].

pygsti.tools.find_pauli_number(pvec)

TODO: docstring

pygsti.tools.compose_cliffords(s1, p1, s2, p2, do_checks=True)

Multiplies two cliffords in the symplectic representation.

The output corresponds to the symplectic representation of C2 times C1 (i.e., C1 acts first), where s1 (s2) and p1 (p2) are the symplectic matrix and phase vector, respectively, for Clifford C1 (C2). This uses the formulas derived in Hostens and De Moor PRA 71, 042315 (2005).

Parameters

s1numpy array

The symplectic matrix over the integers mod 2 representing the first Clifford

p1numpy array

The ‘phase vector’ over the integers mod 4 representing the first Clifford

s2numpy array

The symplectic matrix over the integers mod 2 representing the second Clifford

p2numpy array

The ‘phase vector’ over the integers mod 4 representing the second Clifford

do_checksbool

If True (default), check inputs and output are valid cliffords. If False, these checks are skipped (for speed)

Returns

snumpy array

The symplectic matrix over the integers mod 2 representing the composite Clifford

pnumpy array

The ‘phase vector’ over the integers mod 4 representing the composite Clifford
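
A minimal sketch of composing two hard-coded single-qubit gates, obtaining their symplectic representations via compute_internal_gate_symplectic_representations() (documented later in this section):

    from pygsti.tools import compose_cliffords, compute_internal_gate_symplectic_representations

    sreps = compute_internal_gate_symplectic_representations(['H', 'P'])
    s_h, p_h = sreps['H']   # Hadamard
    s_p, p_p = sreps['P']   # phase gate

    # Symplectic representation of P composed after H (H acts first).
    s_ph, p_ph = compose_cliffords(s_h, p_h, s_p, p_p)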

pygsti.tools.symplectic_kronecker(sp_factors)

Takes a kronecker product of symplectic representations.

Construct a single (s,p) symplectic (or stabilizer) representation that corresponds to the tensor (kronecker) product of the objects represented by each (s,p) element of sp_factors.

This is performed by inserting each factor’s s and p elements into the appropriate places of the final (large) s and p arrays. This operation works for combining Clifford operations AND also stabilizer states.

Parameters

sp_factorsiterable

A list of (s,p) symplectic (or stabilizer) representation factors.

Returns

snumpy.ndarray

An array of shape (2n,2n) where n is the total number of qubits (the sum of the number of qubits in each sp_factors element).

pnumpy.ndarray

A 1D array of length 2n.

pygsti.tools.prep_stabilizer_state(nqubits, zvals=None)

Contruct the (s,p) stabilizer representation for a computational basis state given by zvals.

Parameters

nqubitsint

Number of qubits

zvalsiterable, optional

An iterable over anything that can be cast as True/False to indicate the 0/1 value of each qubit in the Z basis. If None, the all-zeros state is created.

Returns

s,pnumpy.ndarray

The stabilizer “matrix” and phase vector corresponding to the desired state. s has shape (2n,2n) (it includes antistabilizers) and p has shape 2n, where n equals nqubits.

pygsti.tools.apply_clifford_to_stabilizer_state(s, p, state_s, state_p)

Applies a clifford in the symplectic representation to a stabilizer state in the standard stabilizer representation.

The output corresponds to the stabilizer representation of the output state.

Parameters

snumpy array

The symplectic matrix over the integers mod 2 representing the Clifford

pnumpy array

The ‘phase vector’ over the integers mod 4 representing the Clifford

state_snumpy array

The matrix over the integers mod 2 representing the stabilizer state

state_pnumpy array

The ‘phase vector’ over the integers mod 4 representing the stabilizer state

Returns

out_snumpy array

The symplectic matrix over the integers mod 2 representing the output state

out_pnumpy array

The ‘phase vector’ over the integers mod 4 representing the output state

pygsti.tools.pauli_z_measurement(state_s, state_p, qubit_index)

Computes the probabilities of 0/1 (+/-) outcomes from measuring a Pauli operator on a stabilizer state.

Parameters

state_snumpy array

The matrix over the integers mod 2 representing the stabilizer state

state_pnumpy array

The ‘phase vector’ over the integers mod 4 representing the stabilizer state

qubit_indexint

The index of the qubit being measured

Returns

p0, p1float

Probabilities of 0 (+ eigenvalue) and 1 (- eigenvalue) outcomes.

state_s_0, state_s_1numpy array

Matrix over the integers mod 2 representing the output stabilizer states.

state_p_0, state_p_1numpy array

Phase vectors over the integers mod 4 representing the output stabilizer states.
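
A minimal sketch combining the stabilizer-state utilities: prepare |00>, apply a Hadamard on qubit 0 (gate representations obtained via compute_internal_gate_symplectic_representations(), documented later in this section), then measure qubit 0:

    from pygsti.tools import (prep_stabilizer_state, apply_clifford_to_stabilizer_state,
                              pauli_z_measurement, symplectic_kronecker,
                              compute_internal_gate_symplectic_representations)

    state_s, state_p = prep_stabilizer_state(2)          # |00> in the (s, p) stabilizer rep

    sreps = compute_internal_gate_symplectic_representations(['H', 'I'])
    layer_s, layer_p = symplectic_kronecker([sreps['H'], sreps['I']])   # H on qubit 0, I on qubit 1

    state_s, state_p = apply_clifford_to_stabilizer_state(layer_s, layer_p, state_s, state_p)

    # pauli_z_measurement also returns the post-measurement representations (ignored here).
    p0, p1, *_ = pauli_z_measurement(state_s, state_p, 0)
    print(p0, p1)   # expected ~0.5, ~0.5 for H|0> on qubit 0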

pygsti.tools.colsum(i, j, s, p, n)

A helper routine used for manipulating stabilizer state representations.

Updates the i-th stabilizer generator (column of s and element of p) with the group-action product of the j-th and the i-th generators, i.e.

generator[i] -> generator[j] + generator[i]

Parameters

iint

Destination generator index.

jint

Source generator index.

snumpy array

The matrix over the integers mod 2 representing the stabilizer state

pnumpy array

The ‘phase vector’ over the integers mod 4 representing the stabilizer state

nint

The number of qubits. s must be shape (2n,2n) and p must be length 2n.

Returns

None

pygsti.tools.colsum_acc(acc_s, acc_p, j, s, p, n)

A helper routine used for manipulating stabilizer state representations.

Similar to colsum() except a separate “accumulator” column is used instead of the i-th column of s and element of p. I.e., this performs:

acc[0] -> generator[j] + acc[0]

Parameters

acc_snumpy array

The matrix over the integers mod 2 representing the “accumulator” stabilizer state

acc_pnumpy array

The ‘phase vector’ over the integers mod 4 representing the “accumulator” stabilizer state

jint

Index of the stabilizer generator being accumulated (see above).

snumpy array

The matrix over the integers mod 2 representing the stabilizer state

pnumpy array

The ‘phase vector’ over the integers mod 4 representing the stabilizer state

nint

The number of qubits. s must be shape (2n,2n) and p must be length 2n.

Returns

None

pygsti.tools.stabilizer_measurement_prob(state_sp_tuple, moutcomes, qubit_filter=None, return_state=False)

Compute the probability of a given outcome when measuring some or all of the qubits in a stabilizer state.

Returns this probability, optionally along with the updated (post-measurement) stabilizer state.

Parameters

state_sp_tupletuple

A (s,p) tuple giving the stabilizer state to measure.

moutcomesarray-like

The z-values identifying which measurement outcome (a computational basis state) to compute the probability for.

qubit_filteriterable, optional

If not None, a list of qubit indices which are measured. len(qubit_filter) should always equal len(moutcomes). If None, then assume all qubits are measured (len(moutcomes) == num_qubits).

return_statebool, optional

Whether the post-measurement (w/outcome moutcomes) state is also returned.

Returns

pfloat

The probability of the given measurement outcome.

state_s,state_pnumpy.ndarray

Only returned when return_state=True. The post-measurement stabilizer state representation (an updated version of state_sp_tuple).

pygsti.tools.embed_clifford(s, p, qubit_inds, n)

Embeds the (s,p) Clifford symplectic representation into a larger symplectic representation.

The action of (s,p) takes place on the qubit indices specified by qubit_inds.

Parameters

snumpy array

The symplectic matrix over the integers mod 2 representing the Clifford

pnumpy array

The ‘phase vector’ over the integers mod 4 representing the Clifford

qubit_indslist

A list or array of integers specifying which qubits s and p act on.

nint

The total number of qubits

Returns

snumpy array

The symplectic matrix over the integers mod 2 representing the embedded Clifford

pnumpy array

The ‘phase vector’ over the integers mod 4 representing the embedded Clifford

pygsti.tools.compute_internal_gate_symplectic_representations(gllist=None)

Creates a dictionary of the symplectic representations of ‘standard’ Clifford gates.

Returns a dictionary containing the symplectic matrices and phase vectors that represent the specified ‘standard’ Clifford gates, or the representations of all the standard gates if no list of operation labels is supplied. These ‘standard’ Clifford gates are those gates that are already known to the code (e.g., the label ‘CNOT’ has a specific meaning in the code), and are recorded as unitaries in “internalgates.py”.

Parameters

gllistlist, optional

If not None, a list of strings corresponding to operation labels for any of the standard gates that have fixed meaning for the code (e.g., ‘CNOT’ corresponds to the CNOT gate with the first qubit the target). For example, this list could be gllist = [‘CNOT’,’H’,’P’,’I’,’X’].

Returns

srep_dictdict

dictionary of (smatrix,svector) tuples, where smatrix and svector are numpy arrays containing the symplectic matrix and phase vector representing the operation label given by the key.

pygsti.tools.symplectic_rep_of_clifford_circuit(circuit, srep_dict=None, pspec=None)

Returns the symplectic representation of the composite Clifford implemented by the specified Clifford circuit.

This uses the formulas derived in Hostens and De Moor PRA 71, 042315 (2005).

Parameters

circuitCircuit

The Clifford circuit to calculate the global action of, input as a Circuit object.

srep_dictdict, optional

If not None, a dictionary providing the (symplectic matrix, phase vector) tuples associated with each operation label. If the circuit contains only ‘standard’ gates which have a hard-coded symplectic representation, this may be None. Alternatively, if pspec is specified and it contains the gates in circuit in a Clifford model, it also does not need to be specified (and it is ignored if it is specified). Otherwise it must be specified.

pspecQubitProcessorSpec, optional

A QubitProcessorSpec that contains a Clifford model that defines the symplectic action of all of the gates in circuit. If this is not None it over-rides srep_dict. Both pspec and srep_dict can only be None if the circuit contains only gates with names that are hard-coded into pyGSTi.

Returns

snumpy array

The symplectic matrix representing the Clifford implemented by the input circuit

pdictionary of numpy arrays

The phase vector representing the Clifford implemented by the input circuit

pygsti.tools.symplectic_rep_of_clifford_layer(layer, n=None, q_labels=None, srep_dict=None, add_internal_sreps=True)

Constructs the symplectic representation of the n-qubit Clifford implemented by a single quantum circuit layer.

(Gates in a “single layer” must act on disjoint sets of qubits, but not all qubits need to be acted upon in the layer.)

Parameters

layerLabel

A layer label, often a compound label with components. Specifies the Clifford gate(s) to calculate the global action of.

nint, optional

The total number of qubits. Must be specified if q_labels is None.

q_labelslist, optional

A list of all the qubit labels. If the layer is over qubits that are not labelled by the integers 0 to n-1, then it is necessary to specify this list. Note that this should contain all the qubit labels for the circuit that this is a layer from, and they should be ordered as in that circuit; otherwise the returned symplectic rep might not have the correct dimension or ordering.

srep_dictdict, optional

If not None, a dictionary providing the (symplectic matrix, phase vector) tuples associated with each operation label. If the circuit layer contains only ‘standard’ gates which have a hard-coded symplectic representation, this may be None. Otherwise it must be specified. If the layer contains some standard gates, it is not necessary to specify the symplectic representation for those gates.

add_internal_srepsbool, optional

If True, the symplectic reps for internal gates are calculated and added to srep_dict. For speed, calculate these reps once, store them in srep_dict, and set this to False.

Returns

snumpy array

The symplectic matrix representing the Clifford implemented by the specified circuit layer

pnumpy array

The phase vector representing the Clifford implemented by the specified circuit layer

pygsti.tools.one_q_clifford_symplectic_group_relations()

Gives the group relationship between the ‘I’, ‘H’, ‘P’, ‘HP’, ‘PH’, and ‘HPH’ up-to-Paulis operators.

The returned dictionary contains keys (A,B) for all A and B in the above list. The value for key (A,B) is C if BA = C x some Pauli operator. E.g., (‘P’,’P’) = ‘I’.

This dictionary is important for compiling multi-qubit Clifford gates without unnecessary 1-qubit gate overheads. But note that this dictionary should not be used for compressing circuits containing these gates when the exact action of the circuit is of importance (not only the up-to-Paulis action of the circuit).

Returns

dict

pygsti.tools.unitary_is_clifford(unitary)

Returns True if the unitary is a Clifford gate (w.r.t the standard basis), and False otherwise.

Parameters

unitarynumpy.ndarray

A unitary matrix to test.

Returns

bool

pygsti.tools.unitary_to_symplectic(u, flagnonclifford=True)

Returns the symplectic representation of a one-qubit or two-qubit Clifford unitary.

The Clifford is input as a complex matrix in the standard computational basis.

Parameters

unumpy array

The unitary matrix to construct the symplectic representation for. This must be a one-qubit or two-qubit gate (so, it is a 2 x 2 or 4 x 4 matrix), and it must be provided in the standard computational basis. It also must be a Clifford gate in the standard sense.

flagnoncliffordbool, optional

If True, a ValueError is raised when the input unitary is not a Clifford gate. If False, when the unitary is not a Clifford the returned s and p are None.

Returns

snumpy array or None

The symplectic matrix representing the unitary, or None if the input unitary is not a Clifford and flagnonclifford is False

pnumpy array or None

The phase vector representing the unitary, or None if the input unitary is not a Clifford and flagnonclifford is False

pygsti.tools.random_symplectic_matrix(n, convention='standard', rand_state=None)

Returns a symplectic matrix of dimensions 2n x 2n sampled uniformly at random from the symplectic group S(n).

This uses the method of Robert Koenig and John A. Smolin, presented in “How to efficiently select an arbitrary Clifford group element”.

Parameters

nint

The size of the symplectic group to sample from.

conventionstr, optional

Can be either ‘standard’ or ‘directsum’, which correspond to two different definitions for the symplectic form. In the case of ‘standard’, the symplectic form is the 2n x 2n matrix of ((0,1),(1,0)), where ‘1’ and ‘0’ are the identity and all-zeros matrices of size n x n. The ‘standard’ symplectic form is the convention used throughout most of the code. In the case of ‘directsum’, the symplectic form is the direct sum of n 2x2 bit-flip matrices.

rand_state: RandomState, optional

A np.random.RandomState object for seeding RNG

Returns

snumpy array

A uniformly sampled random symplectic matrix.

pygsti.tools.random_clifford(n, rand_state=None)

Returns a Clifford, in the symplectic representation, sampled uniformly at random from the n-qubit Clifford group.

The core of this function uses the method of Robert Koenig and John A. Smolin, presented in “How to efficiently select an arbitrary Clifford group element”, for sampling a uniformly random symplectic matrix.

Parameters

nint

The number of qubits the Clifford group is over.

rand_state: RandomState, optional

A np.random.RandomState object for seeding RNG

Returns

snumpy array

The symplectic matrix representing the uniformly sampled random Clifford.

pnumpy array

The phase vector representing the uniformly sampled random Clifford.
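
A minimal sketch of drawing a seeded, uniformly random 3-qubit Clifford and validating it with check_valid_clifford() (documented earlier in this section):

    import numpy as np
    from pygsti.tools import random_clifford, check_valid_clifford

    rng = np.random.RandomState(2023)        # seeded for reproducibility
    s, p = random_clifford(3, rand_state=rng)

    print(s.shape, p.shape)                  # expected (6, 6) and (6,) for n = 3
    print(check_valid_clifford(s, p))        # expected True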

pygsti.tools.random_phase_vector(s, n, rand_state=None)

Generates a uniformly random phase vector for a n-qubit Clifford.

(This vector, together with the provided symplectic matrix, defines a valid Clifford operation.) In combination with a uniformly random s, the returned p defines a uniformly random Clifford gate.

Parameters

snumpy array

The symplectic matrix to construct a random phase vector for.

nint

The number of qubits the Clifford group is over.

rand_state: RandomState, optional

A np.random.RandomState object for seeding RNG

Returns

pnumpy array

A phase vector sampled uniformly at random from all those phase vectors that, as a pair with s, define a valid n-qubit Clifford.

pygsti.tools.bitstring_for_pauli(p)

Get the bitstring corresponding to a Pauli.

Returns the state, represented as a bitstring, that the Pauli operator encoded by the phase vector p creates when acting on the standard input state.

Parameters

pnumpy.ndarray

Phase vector of a symplectic representation, encoding a Pauli operation.

Returns

list

A list of 0 or 1 elements.

pygsti.tools.apply_internal_gate_to_symplectic(s, gate_name, qindex_list, optype='row')

Applies a Clifford gate to the n-qubit Clifford gate specified by the 2n x 2n symplectic matrix.

The Clifford gate is specified by the internally hard-coded name gate_name. This gate is applied to the qubits with indices in qindex_list, where these indices are w.r.t. the indices of s. This gate is applied from the left (right) of s if optype is ‘row’ (‘column’), and has a row-action (column-action) on s. E.g., the Hadamard (‘H’) on the qubit with index i swaps the ith row (or column) with the (i+n)th row (or column) of s; CNOT adds rows, etc.

Note that this function updates s, and returns None.

Parameters

snp.array

A even-dimension square array over [0,1] that is the symplectic representation of some (normally multi-qubit) Clifford gate.

gate_namestr

The gate name. Should be one of the gate-names of the hard-coded gates used internally in pyGSTi that is also a Clifford gate. Currently not all of those gates are supported, and gate_name must be one of: ‘H’, ‘P’, ‘CNOT’, ‘SWAP’.

qindex_listlist or tuple

The qubit indices that gate_name acts on (can be either length 1 or 2 depending on whether the gate acts on 1 or 2 qubits).

optype{‘row’, ‘column’}, optional

Whether the symplectic operator type uses rows or columns: TODO: docstring - better explanation.

Returns

None

pygsti.tools.compute_num_cliffords(n)

The number of Clifford gates in the n-qubit Clifford group.

Code from “How to efficiently select an arbitrary Clifford group element” by Robert Koenig and John A. Smolin.

Parameters

nint

The number of qubits the Clifford group is over.

Returns

long integer

The cardinality of the n-qubit Clifford group.

pygsti.tools.compute_num_symplectics(n)

The number of elements in the symplectic group S(n) over the 2-element finite field.

Code from “How to efficiently select an arbitrary Clifford group element” by Robert Koenig and John A. Smolin.

Parameters

nint

S(n) group parameter.

Returns

int

pygsti.tools.compute_num_cosets(n)

Returns the number of different cosets for the symplectic group S(n) over the 2-element finite field.

Code from “How to efficiently select an arbitrary Clifford group element” by Robert Koenig and John A. Smolin.

Parameters

nint

S(n) group parameter.

Returns

int

pygsti.tools.symplectic_innerproduct(v, w)

Returns the symplectic inner product of two vectors in F_2^(2n).

Here F_2 is the finite field containing 0 and 1, and 2n is the length of the vectors. Code from “How to efficiently select an arbitrary Clifford group element” by Robert Koenig and John A. Smolin.

Parameters

vnumpy.ndarray

A length-2n vector.

wnumpy.ndarray

A length-2n vector.

Returns

int

pygsti.tools.symplectic_transvection(k, v)

Applies the transvection Z_k to v.

Code from “How to efficiently select an arbitrary Clifford group element” by Robert Koenig and John A. Smolin.

Parameters

knumpy.ndarray

A length-2n vector.

vnumpy.ndarray

A length-2n vector.

Returns

numpy.ndarray

pygsti.tools.int_to_bitstring(i, n)

Converts the integer i to a length-n array of bits.

Code from “How to efficiently select an arbitrary Clifford group element” by Robert Koenig and John A. Smolin.

Parameters

iint

Any integer.

nint

Number of bits

Returns

numpy.ndarray

Integer array of 0s and 1s.

pygsti.tools.bitstring_to_int(b, n)

Converts an n-bit string b to an integer between 0 and 2^n - 1.

Code from “How to efficiently select an arbitrary Clifford group element” by Robert Koenig and John A. Smolin.

Parameters

blist, tuple, or array

Sequence of bits (a bitstring).

nint

Number of bits.

Returns

int
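
A quick round-trip sketch of the two conversion helpers above (the bit ordering follows the internal convention):

    from pygsti.tools import int_to_bitstring, bitstring_to_int

    bits = int_to_bitstring(6, 4)        # length-4 bit array encoding the integer 6
    print(bitstring_to_int(bits, 4))     # expected 6 (round trip)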

pygsti.tools.find_symplectic_transvection(x, y)

A utility function for selecting a random Clifford element.

Code from “How to efficiently select an arbitrary Clifford group element” by Robert Koenig and John A. Smolin.

Parameters

xnumpy.ndarray

A length-2n vector.

ynumpy.ndarray

A length-2n vector.

Returns

numpy.ndarray

pygsti.tools.compute_symplectic_matrix(i, n)

Returns the 2n x 2n symplectic matrix, over the finite field containing 0 and 1, with the “canonical” index i.

Code from “How to efficiently select an arbitrary Clifford group element” by Robert Koenig and John A. Smolin.

Parameters

iint

Canonical index.

nint

Number of qubits.

Returns

numpy.ndarray

pygsti.tools.compute_symplectic_label(gn, n=None)

Returns the “canonical” index of 2n x 2n symplectic matrix gn over the finite field containing 0 and 1.

Code from “How to efficiently select an arbitrary Clifford group element” by Robert Koenig and John A. Smolin.

Parameters

gnnumpy.ndarray

symplectic matrix

nint, optional

Number of qubits (if None, use gn.shape[0] // 2).

Returns

int

The canonical index of gn.

pygsti.tools.random_symplectic_index(n, rand_state=None)

The index of a uniformly random 2n x 2n symplectic matrix over the finite field containing 0 and 1.

Code from “How to efficiently select an arbitrary Clifford group element” by Robert Koenig and John A. Smolin.

Parameters

nint

Number of qubits (half dimension of symplectic matrix).

rand_state: RandomState, optional

A np.random.RandomState object for seeding RNG

Returns

numpy.ndarray

class pygsti.tools.TypedDict(types=None, items=())

Bases: dict

A dictionary that holds per-key type information.

This type of dict is used for the “leaves” in a tree of nested NamedDict objects, specifying a collection of data of different types pertaining to some set of category labels (the index-path of the named dictionaries).

When converted to a data frame, each key specifies a different column and values contribute the values of a single data frame row. Columns will be series of the held data types.

Parameters

typesdict, optional

Keys are the keys that can appear in this dictionary, and values are valid data frame type strings, e.g. “int”, “float”, or “category”, that specify the type of each value.

itemsdict or list

Initial data, used for serialization.

Initialize self. See help(type(self)) for accurate signature.

as_dataframe()

Render this dict as a pandas data frame.

Returns

pandas.DataFrame
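
A minimal sketch of building a TypedDict and rendering it as a one-row data frame; the column names and type strings are illustrative:

    from pygsti.tools import TypedDict

    row = TypedDict(types={'depth': 'int', 'success_prob': 'float', 'protocol': 'category'},
                    items=[('depth', 8), ('success_prob', 0.91), ('protocol', 'RB')])

    df = row.as_dataframe()   # a one-row pandas.DataFrame with the declared column types
    print(df)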