pygsti

A Python implementation of Gate Set Tomography

Subpackages

Submodules

Package Contents

Classes

_DummyProfiler

A dummy profiler that doesn't do anything.

BuiltinBasis

A basis that is included within and integrated into pyGSTi.

VerbosityPrinter

Class responsible for logging things to stdout or a file.

DirectSumBasis

A basis that is the direct sum of one or more "component" bases.

_CircuitList

An immutable list (a tuple) of Circuit objects and associated metadata.

_ResourceAllocation

Describes available resources and how they should be allocated.

_CustomLMOptimizer

A Levenberg-Marquardt optimizer customized for GST-like problems.

_Optimizer

An optimizer. Optimizes an objective function.

_TrivialGaugeGroupElement

Element of TrivialGaugeGroup

_DataSet

An association between Circuits and outcome counts, serving as the input data for many QCVV protocols.

_GSTAdvancedOptions

Advanced options for GST driver functions.

_QubitProcessorSpec

The device specification for a quantum computer with one or more qubits.

_Model

A predictive model for a Quantum Information Processor (QIP).

NamedDict

A dictionary that also holds category names and types.

TypedDict

A dictionary that holds per-key type information.

_Basis

An ordered set of labeled matrices/vectors.

_ExplicitBasis

A Basis whose elements are specified directly.

_DirectSumBasis

A basis that is the direct sum of one or more "component" bases.

_Label

A label used to identify a gate, circuit layer, or (sub-)circuit.

_LocalElementaryErrorgenLabel

Labels an elementary error generator by a type and one or two basis element labels.

Functions

contract(model, to_what, dataset=None, maxiter=1000000, tol=0.01, use_direct_cp=True, method='Nelder-Mead', verbosity=0)

Contract a Model to a specified space.

_contract_to_xp(model, dataset, verbosity, method='Nelder-Mead', maxiter=100000, tol=1e-10)

_contract_to_cp(model, verbosity, method='Nelder-Mead', maxiter=100000, tol=0.01)

_contract_to_cp_direct(model, verbosity, tp_also=False, maxiter=100000, tol=1e-08)

_contract_to_tp(model, verbosity)

_contract_to_valid_spam(model, verbosity=0)

Contract the state preparation and measurement (SPAM) operations of a model to valid quantum operations.

run_lgst(dataset, prep_fiducials, effect_fiducials, target_model, op_labels=None, op_label_aliases=None, guess_model_for_gauge=None, svd_truncate_to=None, verbosity=0)

Performs Linear-inversion Gate Set Tomography on the dataset.

_lgst_matrix_dims(model, prep_fiducials, effect_fiducials)

_construct_ab(prep_fiducials, effect_fiducials, model, dataset, op_label_aliases=None)

_construct_x_matrix(prep_fiducials, effect_fiducials, model, op_label_tuple, dataset, op_label_aliases=None)

_construct_a(effect_fiducials, model)

_construct_b(prep_fiducials, model)

_construct_target_ab(prep_fiducials, effect_fiducials, target_model)

gram_rank_and_eigenvalues(dataset, prep_fiducials, effect_fiducials, target_model)

Returns the rank and singular values of the Gram matrix for a dataset.

run_gst_fit_simple(dataset, start_model, circuits, optimizer, objective_function_builder, resource_alloc, verbosity=0)

Performs core Gate Set Tomography function of model optimization.

run_gst_fit(mdc_store, optimizer, objective_function_builder, verbosity=0)

Performs core Gate Set Tomography function of model optimization.

run_iterative_gst(dataset, start_model, circuit_lists, optimizer, iteration_objfn_builders, final_objfn_builders, resource_alloc, verbosity=0)

Performs Iterative Gate Set Tomography on the dataset.

_do_runopt(objective, optimizer, printer)

Runs the core model-optimization step within a GST routine by optimizing objective using optimizer.

_do_term_runopt(objective, optimizer, printer)

Runs the core model-optimization step for models using the Taylor-term (path integral) method of computing probabilities.

find_closest_unitary_opmx(operation_mx)

Find the closest (in fidelity) unitary superoperator to operation_mx.

gaugeopt_to_target(model, target_model, item_weights=None, cptp_penalty_factor=0, spam_penalty_factor=0, gates_metric='frobenius', spam_metric='frobenius', gauge_group=None, method='auto', maxiter=100000, maxfev=None, tol=1e-08, oob_check_interval=0, return_all=False, comm=None, verbosity=0, check_jac=False)

Optimize the gauge degrees of freedom of a model to that of a target.

gaugeopt_custom(model, objective_fn, gauge_group=None, method='L-BFGS-B', maxiter=100000, maxfev=None, tol=1e-08, oob_check_interval=0, return_all=False, jacobian_fn=None, comm=None, verbosity=0)

Optimize the gauge of a model using a custom objective function.

_create_objective_fn(model, target_model, item_weights=None, cptp_penalty_factor=0, spam_penalty_factor=0, gates_metric='frobenius', spam_metric='frobenius', method=None, comm=None, check_jac=False)

Creates the objective function and jacobian (if available)

_cptp_penalty_size(mdl)

Helper function - same as that in core.py.

_spam_penalty_size(mdl)

Helper function - same as that in core.py.

_cptp_penalty(mdl, prefactor, op_basis)

Helper function - CPTP penalty: the sum of trace norms of the gates.

_spam_penalty(mdl, prefactor, op_basis)

Helper function - SPAM penalty: the sum of trace norms of the SPAM vectors.

_cptp_penalty_jac_fill(cp_penalty_vec_grad_to_fill, mdl_pre, mdl_post, gauge_group_el, prefactor, op_basis, wrt_filter)

Helper function - jacobian of CPTP penalty (sum of tracenorms of gates)

_spam_penalty_jac_fill(spam_penalty_vec_grad_to_fill, mdl_pre, mdl_post, gauge_group_el, prefactor, op_basis, wrt_filter)

Helper function - jacobian of SPAM penalty (sum of trace norms of SPAM vectors)

_gram_rank_and_evals(dataset, prep_fiducials, effect_fiducials, target_model)

Returns the rank and singular values of the Gram matrix for a dataset.

max_gram_basis(op_labels, dataset, max_length=0)

Compute a maximal set of basis circuits for a Gram matrix.

max_gram_rank_and_eigenvalues(dataset, target_model, max_basis_string_length=10, fixed_lists=None)

Compute the rank and singular values of a maximal Gram matrix.

unitary_to_pauligate(u)

Get the linear operator on (vectorized) density matrices corresponding to an n-qubit unitary operator on states.
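
As an illustration of what this superoperator looks like, here is a minimal one-qubit numpy sketch (the helper name and normalization here are illustrative, not pyGSTi's implementation):

```python
import numpy as np

# Single-qubit Pauli matrices
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
PAULIS = [I2, X, Y, Z]

def pauli_transfer_matrix(u):
    """Superoperator of rho -> u rho u^dag in the normalized Pauli basis:
    R[i, j] = Tr(P_i u P_j u^dag) / d."""
    d = u.shape[0]
    return np.array([[np.trace(Pi @ u @ Pj @ u.conj().T).real / d
                      for Pj in PAULIS] for Pi in PAULIS])

R_x = pauli_transfer_matrix(X)  # an X gate: diag(1, 1, -1, -1)
```

The resulting 4x4 real matrix acts on the Pauli-basis components (I, X, Y, Z) of a density matrix.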

single_qubit_gate(hx, hy, hz, noise=0)

Construct the single-qubit operation matrix.

two_qubit_gate(ix=0, iy=0, iz=0, xi=0, xx=0, xy=0, xz=0, yi=0, yx=0, yy=0, yz=0, zi=0, zx=0, zy=0, zz=0, ii=0)

Construct the two-qubit operation matrix.

create_bootstrap_dataset(input_data_set, generation_method, input_model=None, seed=None, outcome_labels=None, verbosity=1)

Creates a DataSet used for generating bootstrapped error bars.

create_bootstrap_models(num_models, input_data_set, generation_method, fiducial_prep, fiducial_measure, germs, max_lengths, input_model=None, target_model=None, start_seed=0, outcome_labels=None, lsgst_lists=None, return_data=False, verbosity=2)

Creates a series of "bootstrapped" Models.

gauge_optimize_models(gs_list, target_model, gate_metric='frobenius', spam_metric='frobenius', plot=True)

Optimizes the "spam weight" parameter used when gauge optimizing a set of models.

_model_stdev(gs_func, gs_ensemble, ddof=1, axis=None, **kwargs)

Standard deviation of gs_func over an ensemble of models.

_model_mean(gs_func, gs_ensemble, axis=None, **kwargs)

Mean of gs_func over an ensemble of models.

_to_mean_model(gs_list, target_gs)

Take the per-gate-element mean of a set of models.

_to_std_model(gs_list, target_gs, ddof=1)

Take the per-gate-element standard deviation of a list of models.

_to_rms_model(gs_list, target_gs)

Take the per-gate-element RMS of a set of models.

_create_explicit_model(processor_spec, modelnoise, custom_gates=None, evotype='default', simulator='auto', ideal_gate_type='auto', ideal_prep_type='auto', ideal_povm_type='auto', embed_gates=False, basis='pp')

run_model_test(model_filename_or_object, data_filename_or_set, processorspec_filename_or_object, prep_fiducial_list_or_filename, meas_fiducial_list_or_filename, germs_list_or_filename, max_lengths, gauge_opt_params=None, advanced_options=None, comm=None, mem_limit=None, output_pkl=None, verbosity=2)

Compares a Model's predictions to a DataSet using GST-like circuits.

run_linear_gst(data_filename_or_set, processorspec_filename_or_object, prep_fiducial_list_or_filename, meas_fiducial_list_or_filename, gauge_opt_params=None, advanced_options=None, comm=None, mem_limit=None, output_pkl=None, verbosity=2)

Perform Linear Gate Set Tomography (LGST).

run_long_sequence_gst(data_filename_or_set, target_model_filename_or_object, prep_fiducial_list_or_filename, meas_fiducial_list_or_filename, germs_list_or_filename, max_lengths, gauge_opt_params=None, advanced_options=None, comm=None, mem_limit=None, output_pkl=None, verbosity=2)

Perform long-sequence GST (LSGST).

run_long_sequence_gst_base(data_filename_or_set, target_model_filename_or_object, lsgst_lists, gauge_opt_params=None, advanced_options=None, comm=None, mem_limit=None, output_pkl=None, verbosity=2)

A more fundamental interface for performing end-to-end GST.

run_stdpractice_gst(data_filename_or_set, processorspec_filename_or_object, prep_fiducial_list_or_filename, meas_fiducial_list_or_filename, germs_list_or_filename, max_lengths, modes='full TP,CPTP,Target', gaugeopt_suite='stdgaugeopt', gaugeopt_target=None, models_to_test=None, comm=None, mem_limit=None, advanced_options=None, output_pkl=None, verbosity=2)

Perform end-to-end GST analysis using standard practices.

_load_model(model_filename_or_object)

_load_dataset(data_filename_or_set, comm, verbosity)

Loads a DataSet from the data_filename_or_set argument of functions in this module.

_update_objfn_builders(builders, advanced_options)

_get_badfit_options(advanced_options)

_output_to_pickle(obj, output_pkl, comm)

_get_gst_initial_model(target_model, advanced_options)

_get_gst_builders(advanced_options)

_get_optimizer(advanced_options, model_being_optimized)

parallel_apply(f, l, comm)

Apply a function, f to every element of a list, l in parallel, using MPI.

mpi4py_comm()

Get a comm object

starmap_with_kwargs(fn, num_runs, num_processors, args_list, kwargs_list)

basis_matrices(name_or_basis, dim, sparse=False)

Get the elements of the specified basis-type which span the density-matrix space given by dim.

basis_longname(basis)

Get the "long name" for a particular basis, which is typically used in reports, etc.

basis_element_labels(basis, dim)

Get a list of short labels corresponding to the elements of the described basis.

is_sparse_basis(name_or_basis)

Whether a basis contains sparse matrices.

change_basis(mx, from_basis, to_basis)

Convert an operation matrix from one basis of a density matrix space to another.

create_basis_pair(mx, from_basis, to_basis)

Constructs bases from transforming mx between two basis names.

create_basis_for_matrix(mx, basis)

Construct a Basis object with type given by basis and dimension appropriate for transforming mx.

resize_std_mx(mx, resize, std_basis_1, std_basis_2)

Change the basis of mx to a potentially larger or smaller 'std'-type basis given by std_basis_2.

flexible_change_basis(mx, start_basis, end_basis)

Change mx from start_basis to end_basis allowing embedding expansion and contraction if needed.

resize_mx(mx, dim_or_block_dims=None, resize=None)

Wrapper for resize_std_mx(), that manipulates mx to be in another basis.

state_to_stdmx(state_vec)

Convert a state vector into a density matrix.
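
The conversion from a pure-state vector to a density matrix is just an outer product; a minimal sketch (the helper name is illustrative):

```python
import numpy as np

def state_to_density_matrix(state_vec):
    """Outer product |psi><psi| of a (normalized) pure-state vector."""
    psi = np.asarray(state_vec, dtype=complex).reshape(-1, 1)
    return psi @ psi.conj().T

plus = np.array([1, 1]) / np.sqrt(2)   # the |+> state
rho = state_to_density_matrix(plus)    # all four entries equal 0.5
```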

state_to_pauli_density_vec(state_vec)

Convert a single qubit state vector into a Liouville vector in the Pauli basis.

vec_to_stdmx(v, basis, keep_complex=False)

Convert a vector in this basis to a matrix in the standard basis.

stdmx_to_vec(m, basis)

Convert a matrix in the standard basis to a vector in the Pauli basis.

_deprecated_fn(replacement=None)

Decorator for deprecating a function.

chi2(model, dataset, circuits=None, min_prob_clip_for_weighting=0.0001, prob_clip_interval=(-10000, 10000), op_label_aliases=None, mdc_store=None, comm=None, mem_limit=None)

Computes the total (aggregate) chi^2 for a set of circuits.

chi2_per_circuit(model, dataset, circuits=None, min_prob_clip_for_weighting=0.0001, prob_clip_interval=(-10000, 10000), op_label_aliases=None, mdc_store=None, comm=None, mem_limit=None)

Computes the per-circuit chi^2 contributions for a set of circuits.

chi2_jacobian(model, dataset, circuits=None, min_prob_clip_for_weighting=0.0001, prob_clip_interval=(-10000, 10000), op_label_aliases=None, mdc_store=None, comm=None, mem_limit=None)

Compute the gradient of the chi^2 function computed by :func:`chi2`.

chi2_hessian(model, dataset, circuits=None, min_prob_clip_for_weighting=0.0001, prob_clip_interval=(-10000, 10000), op_label_aliases=None, mdc_store=None, comm=None, mem_limit=None)

Compute the Hessian matrix of the chi2() function.

chi2_approximate_hessian(model, dataset, circuits=None, min_prob_clip_for_weighting=0.0001, prob_clip_interval=(-10000, 10000), op_label_aliases=None, mdc_store=None, comm=None, mem_limit=None)

Compute an approximate Hessian matrix of the chi2() function.

chialpha(alpha, model, dataset, circuits=None, pfratio_stitchpt=0.01, pfratio_derivpt=0.01, prob_clip_interval=(-10000, 10000), radius=None, op_label_aliases=None, mdc_store=None, comm=None, mem_limit=None)

Compute the chi-alpha objective function.

chialpha_per_circuit(alpha, model, dataset, circuits=None, pfratio_stitchpt=0.01, pfratio_derivpt=0.01, prob_clip_interval=(-10000, 10000), radius=None, op_label_aliases=None, mdc_store=None, comm=None, mem_limit=None)

Compute the per-circuit chi-alpha objective function.

chi2fn_2outcome(n, p, f, min_prob_clip_for_weighting=0.0001)

Computes chi^2 for a 2-outcome measurement.

chi2fn_2outcome_wfreqs(n, p, f)

Computes chi^2 for a 2-outcome measurement using frequency-weighting.

chi2fn(n, p, f, min_prob_clip_for_weighting=0.0001)

Computes the chi^2 term corresponding to a single outcome.

chi2fn_wfreqs(n, p, f, min_freq_clip_for_weighting=0.0001)

Computes the frequency-weighted chi^2 term corresponding to a single outcome.
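
The per-outcome chi^2 term has a simple closed form; a sketch of the weighting (the exact clipping behavior in pyGSTi's implementation may differ):

```python
import numpy as np

def chi2_term(n, p, f, min_prob_clip_for_weighting=1e-4):
    """One outcome's chi^2 contribution, N*(p - f)^2 / p, where the
    weighting probability is clipped away from 0 and 1 for stability.
    n: total counts, p: model probability, f: observed frequency."""
    cp = np.clip(p, min_prob_clip_for_weighting,
                 1 - min_prob_clip_for_weighting)
    return n * (p - f) ** 2 / cp
```

The clip is what keeps near-zero predicted probabilities from dominating the objective.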

bonferroni_correction(significance, numtests)

Calculates the standard Bonferroni correction.

sidak_correction(significance, numtests)

Sidak correction.

generalized_bonferroni_correction(significance, weights, numtests=None, nested_method='bonferroni', tol=1e-10)

Generalized Bonferroni correction.
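
The plain Bonferroni and Šidák corrections are one-liners; a sketch (illustrative helper names):

```python
def bonferroni(significance, numtests):
    """Per-test significance that bounds the family-wise error rate:
    split the global level evenly across the tests."""
    return significance / numtests

def sidak(significance, numtests):
    """Exact per-test level for independent tests: 1 - (1 - a)^(1/n).
    Slightly less conservative than Bonferroni."""
    return 1.0 - (1.0 - significance) ** (1.0 / numtests)
```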

jamiolkowski_iso(operation_mx, op_mx_basis='pp', choi_mx_basis='pp')

Given an operation matrix, return the corresponding Choi matrix that is normalized to have trace == 1.

jamiolkowski_iso_inv(choi_mx, choi_mx_basis='pp', op_mx_basis='pp')

Given a Choi matrix, return the corresponding operation matrix.

fast_jamiolkowski_iso_std(operation_mx, op_mx_basis)

The corresponding Choi matrix in the standard basis that is normalized to have trace == 1.

fast_jamiolkowski_iso_std_inv(choi_mx, op_mx_basis)

Given a Choi matrix in the standard basis, return the corresponding operation matrix.

sum_of_negative_choi_eigenvalues(model, weights=None)

Compute the amount of non-CP-ness of a model.

sums_of_negative_choi_eigenvalues(model)

Compute the amount of non-CP-ness of a model.

magnitudes_of_negative_choi_eigenvalues(model)

Compute the magnitudes of the negative eigenvalues of the Choi matrices for each gate in model.

warn_deprecated(name, replacement=None)

Formats and prints a deprecation warning message.

deprecate(replacement=None)

Decorator for deprecating a function.

deprecate_imports(module_name, replacement_map, warning_msg)

Utility to deprecate imports from a module.

logl(model, dataset, circuits=None, min_prob_clip=1e-06, prob_clip_interval=(-1000000.0, 1000000.0), radius=0.0001, poisson_picture=True, op_label_aliases=None, wildcard=None, mdc_store=None, comm=None, mem_limit=None)

The log-likelihood function.

logl_per_circuit(model, dataset, circuits=None, min_prob_clip=1e-06, prob_clip_interval=(-1000000.0, 1000000.0), radius=0.0001, poisson_picture=True, op_label_aliases=None, wildcard=None, mdc_store=None, comm=None, mem_limit=None)

Computes the per-circuit log-likelihood contribution for a set of circuits.

logl_jacobian(model, dataset, circuits=None, min_prob_clip=1e-06, prob_clip_interval=(-1000000.0, 1000000.0), radius=0.0001, poisson_picture=True, op_label_aliases=None, mdc_store=None, comm=None, mem_limit=None, verbosity=0)

The jacobian of the log-likelihood function.

logl_hessian(model, dataset, circuits=None, min_prob_clip=1e-06, prob_clip_interval=(-1000000.0, 1000000.0), radius=0.0001, poisson_picture=True, op_label_aliases=None, mdc_store=None, comm=None, mem_limit=None, verbosity=0)

The hessian of the log-likelihood function.

logl_approximate_hessian(model, dataset, circuits=None, min_prob_clip=1e-06, prob_clip_interval=(-1000000.0, 1000000.0), radius=0.0001, poisson_picture=True, op_label_aliases=None, mdc_store=None, comm=None, mem_limit=None, verbosity=0)

An approximate Hessian of the log-likelihood function.

logl_max(model, dataset, circuits=None, poisson_picture=True, op_label_aliases=None, mdc_store=None)

The maximum log-likelihood possible for a DataSet.

logl_max_per_circuit(model, dataset, circuits=None, poisson_picture=True, op_label_aliases=None, mdc_store=None)

The vector of maximum log-likelihood contributions for each circuit, aggregated over outcomes.

two_delta_logl_nsigma(model, dataset, circuits=None, min_prob_clip=1e-06, prob_clip_interval=(-1000000.0, 1000000.0), radius=0.0001, poisson_picture=True, op_label_aliases=None, dof_calc_method='modeltest', wildcard=None)

See docstring for :func:`pygsti.tools.two_delta_logl`

two_delta_logl(model, dataset, circuits=None, min_prob_clip=1e-06, prob_clip_interval=(-1000000.0, 1000000.0), radius=0.0001, poisson_picture=True, op_label_aliases=None, dof_calc_method=None, wildcard=None, mdc_store=None, comm=None)

Twice the difference between the maximum and actual log-likelihood.

two_delta_logl_per_circuit(model, dataset, circuits=None, min_prob_clip=1e-06, prob_clip_interval=(-1000000.0, 1000000.0), radius=0.0001, poisson_picture=True, op_label_aliases=None, dof_calc_method=None, wildcard=None, mdc_store=None, comm=None)

Twice the per-circuit difference between the maximum and actual log-likelihood.

two_delta_logl_term(n, p, f, min_prob_clip=1e-06, poisson_picture=True)

Term of the 2*[log(L)-upper-bound - log(L)] sum corresponding to a single circuit and spam label.

hamiltonian_to_lindbladian(hamiltonian, sparse=False)

Construct the Lindbladian corresponding to a given Hamiltonian.
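
The Hamiltonian part of a Lindbladian is the superoperator for rho -> -i[H, rho]; a numpy sketch using the column-stacking convention vec(A X B) = (B^T kron A) vec(X) (pyGSTi's basis and normalization conventions may differ):

```python
import numpy as np

def hamiltonian_superop(h):
    """Superoperator of rho -> -i*(h@rho - rho@h), acting on
    column-stacked (Fortran-order) vectorized density matrices."""
    d = h.shape[0]
    iden = np.eye(d)
    return -1j * (np.kron(iden, h) - np.kron(h.T, iden))

# Check against the commutator computed directly on a test state
Z = np.diag([1.0, -1.0]).astype(complex)
rho = 0.5 * np.ones((2, 2), dtype=complex)           # |+><+|
lhs = hamiltonian_superop(Z) @ rho.flatten(order='F')
rhs = (-1j * (Z @ rho - rho @ Z)).flatten(order='F')
```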

stochastic_lindbladian(q, sparse=False)

Construct the Lindbladian corresponding to stochastic q-errors.

affine_lindbladian(q, sparse=False)

Construct the Lindbladian corresponding to affine q-errors.

nonham_lindbladian(Lm, Ln, sparse=False)

Construct the Lindbladian corresponding to generalized non-Hamiltonian (stochastic) errors.

remove_duplicates_in_place(l, index_to_test=None)

Remove duplicates from the list passed as an argument.

remove_duplicates(l, index_to_test=None)

Remove duplicates from a list and return the result.

compute_occurrence_indices(lst)

A 0-based list of integers specifying which occurrence, i.e. enumerated duplicate, each list item is.
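
"Occurrence index" here means: for each item, how many times that same item has appeared earlier in the list. A sketch (the helper name is illustrative):

```python
def occurrence_indices(lst):
    """For each item, the count of earlier appearances of that item,
    so duplicates get 0, 1, 2, ... in order of appearance."""
    counts = {}
    out = []
    for item in lst:
        out.append(counts.get(item, 0))
        counts[item] = out[-1] + 1
    return out
```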

find_replace_tuple(t, alias_dict)

Replace elements of t according to rules in alias_dict.

find_replace_tuple_list(list_of_tuples, alias_dict)

Applies find_replace_tuple() on each element of list_of_tuples.

apply_aliases_to_circuits(list_of_circuits, alias_dict)

Applies alias_dict to the circuits in list_of_circuits.

sorted_partitions(n)

Iterate over all sorted (decreasing) partitions of integer n.

partitions(n)

Iterate over all partitions of integer n.

partition_into(n, nbins)

Iterate over all partitions of integer n into nbins bins.
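
Sorted (decreasing) integer partitions can be generated recursively by bounding the largest allowed part; a sketch (pyGSTi's generators may order results differently):

```python
def decreasing_partitions(n, _max=None):
    """Yield every partition of n as a tuple of decreasing parts."""
    if _max is None:
        _max = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, _max), 0, -1):
        for rest in decreasing_partitions(n - k, k):
            yield (k,) + rest
```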

_partition_into_slow(n, nbins)

Helper function for partition_into that performs the same task for

incd_product(*args)

Like itertools.product but returns the first modified (incremented) index along with the product tuple itself.

dot_mod2(m1, m2)

Returns the product over the integers modulo 2 of two matrices.

multidot_mod2(mlist)

Returns the product over the integers modulo 2 of a list of matrices.

det_mod2(m)

Returns the determinant of a matrix over the integers modulo 2 (GL(n,2)).
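
Arithmetic over GF(2) reduces every matrix operation modulo 2; a sketch of the product and determinant (the determinant uses elimination rather than floating-point det, to avoid round-off):

```python
import numpy as np

def dot_mod2(m1, m2):
    """Matrix product with entries reduced modulo 2."""
    return (np.asarray(m1) @ np.asarray(m2)) % 2

def det_mod2(m):
    """Determinant over GF(2): 1 iff the matrix is invertible mod 2."""
    a = np.asarray(m).copy() % 2
    n = a.shape[0]
    for i in range(n):
        pivot = next((r for r in range(i, n) if a[r, i]), None)
        if pivot is None:
            return 0           # no pivot in this column => singular
        if pivot != i:
            a[[i, pivot]] = a[[pivot, i]]
        for r in range(i + 1, n):
            if a[r, i]:
                a[r] = (a[r] + a[i]) % 2
    return 1
```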

matrix_directsum(m1, m2)

Returns the direct sum of two square matrices of integers.

inv_mod2(m)

Finds the inverse of a matrix over GL(n,2)

Axb_mod2(A, b)

Solves Ax = b over GF(2)

gaussian_elimination_mod2(a)

Gaussian elimination mod2 of a.

diagonal_as_vec(m)

Returns a 1D array containing the diagonal of the input square 2D array m.

strictly_upper_triangle(m)

Returns a matrix containing the strictly upper triangle of m and zeros elsewhere.

diagonal_as_matrix(m)

Returns a diagonal matrix containing the diagonal of m.

albert_factor(d, failcount=0)

Returns a matrix M such that d = M M.T for symmetric d, where d and M are matrices over [0,1] mod 2.

random_bitstring(n, p, failcount=0)

Constructs a random bitstring of length n with parity p

random_invertable_matrix(n, failcount=0)

Finds a random invertible matrix M over GL(n,2)

random_symmetric_invertable_matrix(n)

Creates a random, symmetric, invertible matrix from GL(n,2)

onesify(a, failcount=0, maxfailcount=100)

Returns M such that M a M.T has ones along the main diagonal

permute_top(a, i)

Permutes the first row & col with the i'th row & col

fix_top(a)

Computes the permutation matrix P such that the [1:t,1:t] submatrix of P a P is invertible.

proper_permutation(a)

Computes the permutation matrix P such that all [n:t,n:t] submatrices of P a P are invertible.

_check_proper_permutation(a)

Check to see if the matrix has been properly permuted.

trace(m)

The trace of a matrix, sum_i m[i,i].

is_hermitian(mx, tol=1e-09)

Test whether mx is a hermitian matrix.

is_pos_def(mx, tol=1e-09)

Test whether mx is a positive-definite matrix.

is_valid_density_mx(mx, tol=1e-09)

Test whether mx is a valid density matrix (hermitian, positive semidefinite, and unit trace).
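
Each of the three density-matrix conditions can be checked directly; a numpy sketch (tolerance handling in pyGSTi may differ):

```python
import numpy as np

def is_valid_density(mx, tol=1e-9):
    """Hermitian, positive semidefinite (up to tol), and unit trace."""
    mx = np.asarray(mx, dtype=complex)
    hermitian = np.allclose(mx, mx.conj().T, atol=tol)
    psd = bool(np.all(np.linalg.eigvalsh((mx + mx.conj().T) / 2) >= -tol))
    unit_trace = abs(np.trace(mx) - 1.0) < tol
    return hermitian and psd and unit_trace
```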

frobeniusnorm(ar)

Compute the Frobenius norm of an array (or matrix).

frobeniusnorm_squared(ar)

Compute the squared Frobenius norm of an array (or matrix).

nullspace(m, tol=1e-07)

Compute the nullspace of a matrix.

nullspace_qr(m, tol=1e-07)

Compute the nullspace of a matrix using the QR decomposition.

nice_nullspace(m, tol=1e-07)

Computes the nullspace of a matrix, and tries to return a "nice" basis for it.
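
A standard way to compute a nullspace basis is via the SVD: the right singular vectors whose singular values fall below tol span the nullspace. A sketch:

```python
import numpy as np

def svd_nullspace(m, tol=1e-7):
    """Columns form an orthonormal basis of the right nullspace of m."""
    _, s, vh = np.linalg.svd(m)
    rank = int(np.sum(s > tol))
    return vh[rank:].conj().T

m = np.array([[1.0, 1.0], [1.0, 1.0]])
ns = svd_nullspace(m)   # one basis vector, proportional to (1, -1)
```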

normalize_columns(m, return_norms=False, ord=None)

Normalizes the columns of a matrix.

column_norms(m, ord=None)

Compute the norms of the columns of a matrix.

scale_columns(m, scale_values)

Scale each column of a matrix by a given value.

columns_are_orthogonal(m, tol=1e-07)

Checks whether a matrix contains orthogonal columns.

columns_are_orthonormal(m, tol=1e-07)

Checks whether a matrix contains orthogonal columns.

independent_columns(m, initial_independent_cols=None, tol=1e-07)

Computes the indices of the linearly-independent columns in a matrix.

pinv_of_matrix_with_orthogonal_columns(m)

TODO: docstring

matrix_sign(m)

The "sign" matrix of m

print_mx(mx, width=9, prec=4, withbrackets=False)

Print matrix in pretty format.

mx_to_string(m, width=9, prec=4, withbrackets=False)

Generate a "pretty-format" string for a matrix.

mx_to_string_complex(m, real_width=9, im_width=9, prec=4)

Generate a "pretty-format" string for a complex-valued matrix.

unitary_superoperator_matrix_log(m, mx_basis)

Construct the logarithm of superoperator matrix m.

near_identity_matrix_log(m, tol=1e-08)

Construct the logarithm of superoperator matrix m that is near the identity.

approximate_matrix_log(m, target_logm, target_weight=10.0, tol=1e-06)

Construct an approximate logarithm of superoperator matrix m that is real and near the target_logm.

real_matrix_log(m, action_if_imaginary='raise', tol=1e-08)

Construct a real logarithm of real matrix m.

column_basis_vector(i, dim)

Returns the ith standard basis vector in dimension dim.

vec(matrix_in)

Stacks the columns of a matrix to return a vector

unvec(vector_in)

Slices a vector into the columns of a matrix.
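
Both operations follow the column-stacking ("Fortran order") convention; a sketch:

```python
import numpy as np

def vec(matrix_in):
    """Stack the columns of a matrix into one vector (column-major)."""
    return np.asarray(matrix_in).flatten(order='F')

def unvec(vector_in):
    """Inverse of vec: refold a length d^2 vector into a d x d matrix."""
    v = np.asarray(vector_in)
    d = int(round(np.sqrt(v.size)))
    return v.reshape((d, d), order='F')
```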

norm1(m)

Returns the 1-norm of a matrix

random_hermitian(dim)

Generates a random Hermitian matrix

norm1to1(operator, num_samples=10000, mx_basis='gm', return_list=False)

The Hermitian 1-to-1 norm of a superoperator represented in the standard basis.

complex_compare(a, b)

Comparison function for complex numbers that compares real part, then imaginary part.

prime_factors(n)

GCD algorithm to produce prime factors of n

minweight_match(a, b, metricfn=None, return_pairs=True, pass_indices_to_metricfn=False)

Matches the elements of two vectors, a and b by minimizing the weight between them.

minweight_match_realmxeigs(a, b, metricfn=None, pass_indices_to_metricfn=False, eps=1e-09)

Matches the elements of a and b, whose elements are assumed to be either real or one half of a conjugate pair.

_fas(a, inds, rhs, add=False)

Fancy Assignment, equivalent to a[*inds] = rhs but with the option (add=True) to add to rather than replace the existing values.

_findx_shape(a, inds)

Returns the shape of a fancy-indexed array (a[*inds].shape)

_findx(a, inds, always_copy=False)

Fancy Indexing, equivalent to a[*inds].copy() but with a copy made only when necessary (or always, when always_copy=True).

safe_dot(a, b)

Performs dot(a,b) correctly when neither, either, or both arguments are sparse matrices.

safe_real(a, inplace=False, check=False)

Get the real-part of a, where a can be either a dense array or a sparse matrix.

safe_imag(a, inplace=False, check=False)

Get the imaginary-part of a, where a can be either a dense array or a sparse matrix.

safe_norm(a, part=None)

Get the frobenius norm of a matrix or vector, a, when it is either a dense array or a sparse matrix.

safe_onenorm(a)

Computes the 1-norm of the dense or sparse matrix a.

csr_sum_indices(csr_matrices)

Precomputes the indices needed to sum a set of CSR sparse matrices.

csr_sum(data, coeffs, csr_mxs, csr_sum_indices)

Accelerated summation of several CSR-format sparse matrices.

csr_sum_flat_indices(csr_matrices)

Precomputes quantities allowing fast computation of linear combinations of CSR sparse matrices.

csr_sum_flat(data, coeffs, flat_dest_index_array, flat_csr_mx_data, mx_nnz_indptr)

Computation of the summation of several CSR-format sparse matrices.

expm_multiply_prep(a, tol=EXPM_DEFAULT_TOL)

Computes "prepared" meta-info about matrix a, to be used in expm_multiply_fast.

expm_multiply_fast(prep_a, v, tol=EXPM_DEFAULT_TOL)

Multiplies v by an exponentiated matrix.

_custom_expm_multiply_simple_core(a, b, mu, m_star, s, tol, eta)

A helper function; note that this (Python) version works when a is a LinearOperator

expop_multiply_prep(op, a_1_norm=None, tol=EXPM_DEFAULT_TOL)

Returns "prepared" meta-info about operation op, which is assumed to be traceless (so no shift is needed).

sparse_equal(a, b, atol=1e-08)

Checks whether two Scipy sparse matrices are (almost) equal.

sparse_onenorm(a)

Computes the 1-norm of the scipy sparse matrix a.

ndarray_base(a, verbosity=0)

Get the base memory object for numpy array a.

to_unitary(scaled_unitary)

Compute the scaling factor required to turn a scalar multiple of a unitary matrix to a unitary matrix.

sorted_eig(mx)

Similar to numpy.linalg.eig, but returns sorted output.

compute_kite(eigenvalues)

Computes the "kite" corresponding to a list of eigenvalues.

find_zero_communtant_connection(u, u_inv, u0, u0_inv, kite)

Find a matrix R such that u_inv R u0 is diagonal AND log(R) has no projection onto the commutant of G0.

project_onto_kite(mx, kite)

Project mx onto kite, so mx is zero everywhere except on the kite.

project_onto_antikite(mx, kite)

Project mx onto the complement of kite, so mx is zero everywhere on the kite.

remove_dependent_cols(mx, tol=1e-07)

Removes the linearly dependent columns of a matrix.

intersection_space(space1, space2, tol=1e-07, use_nice_nullspace=False)

TODO: docstring

union_space(space1, space2, tol=1e-07)

TODO: docstring

jamiolkowski_angle(hamiltonian_mx)

TODO: docstring

zvals_to_dense(self, zvals, superket=True)

Construct the dense operator or superoperator representation of a computational basis state.

int64_parity(x)

Compute the parity of x.

zvals_int64_to_dense(zvals_int, nqubits, outvec=None, trust_outvec_sparsity=False, abs_elval=None)

Fills a dense array with the super-ket representation of a computational basis state.

_flat_mut_blks(i, j, block_dims)

_hack_sqrtm(a)

fidelity(a, b)

Returns the quantum state fidelity between density matrices.
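
The quantum state (Uhlmann) fidelity is F(a, b) = (Tr sqrt(sqrt(a) b sqrt(a)))^2; a numpy sketch using an eigendecomposition-based PSD square root (note some texts define fidelity without the final square):

```python
import numpy as np

def psd_sqrt(m):
    """Square root of a Hermitian positive-semidefinite matrix."""
    evals, evecs = np.linalg.eigh(m)
    return evecs @ np.diag(np.sqrt(np.clip(evals, 0.0, None))) @ evecs.conj().T

def state_fidelity(a, b):
    """F(a, b) = (Tr sqrt(sqrt(a) @ b @ sqrt(a)))^2 for density matrices."""
    sa = psd_sqrt(np.asarray(a, dtype=complex))
    return np.trace(psd_sqrt(sa @ b @ sa)).real ** 2
```

For pure states this reduces to |<psi|phi>|^2.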

frobeniusdist(a, b)

Returns the frobenius distance between gate or density matrices.

frobeniusdist_squared(a, b)

Returns the square of the frobenius distance between gate or density matrices.

residuals(a, b)

Calculate residuals between the elements of two matrices

tracenorm(a)

Compute the trace norm of matrix a, given by Tr(sqrt(a^dagger a)).

tracedist(a, b)

Compute the trace distance between matrices.
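
Trace distance is half the trace norm of the difference, i.e. half the sum of the singular values of a - b; a sketch:

```python
import numpy as np

def trace_distance(a, b):
    """0.5 * ||a - b||_1, with the trace norm computed from the SVD."""
    diff = np.asarray(a) - np.asarray(b)
    return 0.5 * np.sum(np.linalg.svd(diff, compute_uv=False))
```

Orthogonal pure states are at the maximum distance of 1.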

diamonddist(a, b, mx_basis='pp', return_x=False)

Returns the approximate diamond norm describing the difference between gate matrices.

jtracedist(a, b, mx_basis='pp')

Compute the Jamiolkowski trace distance between operation matrices.

entanglement_fidelity(a, b, mx_basis='pp')

Returns the "entanglement" process fidelity between gate matrices.

average_gate_fidelity(a, b, mx_basis='pp')

Computes the average gate fidelity (AGF) between two gates.

average_gate_infidelity(a, b, mx_basis='gm')

Computes the average gate infidelity (AGI) between two gates.

entanglement_infidelity(a, b, mx_basis='pp')

Returns the entanglement infidelity (EI) between gate matrices.

gateset_infidelity(model, target_model, itype='EI', weights=None, mx_basis=None)

Computes the average-over-gates of the infidelity between gates in model and the gates in target_model.

unitarity(a, mx_basis='gm')

Returns the "unitarity" of a channel.

fidelity_upper_bound(operation_mx)

Get an upper bound on the fidelity of the given operation matrix with any unitary operation matrix.

compute_povm_map(model, povmlbl)

Constructs a gate-like quantity for the POVM within model.

povm_fidelity(model, target_model, povmlbl)

Computes the process (entanglement) fidelity between POVM maps.

povm_jtracedist(model, target_model, povmlbl)

Computes the Jamiolkowski trace distance between POVM maps using jtracedist().

povm_diamonddist(model, target_model, povmlbl)

Computes the diamond distance between POVM maps using diamonddist().

decompose_gate_matrix(operation_mx)

Decompose a gate matrix into fixed points, axes of rotation, angles of rotation, and decay rates.

state_to_dmvec(psi)

Compute the vectorized density matrix which acts as the state psi.

dmvec_to_state(dmvec, tol=1e-06)

Compute the pure state describing the action of density matrix vector dmvec.

unitary_to_process_mx(u)

Compute the superoperator corresponding to unitary matrix u.
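
With column-stacked density matrices, the superoperator of the unitary channel rho -> U rho U^dag is conj(U) kron U; a sketch (pyGSTi's stacking convention may differ):

```python
import numpy as np

def unitary_superop(u):
    """Standard-basis superoperator of rho -> u @ rho @ u^dag, acting on
    column-stacked vec(rho): vec(u rho u^dag) = (conj(u) kron u) vec(rho)."""
    return np.kron(u.conj(), u)

X = np.array([[0, 1], [1, 0]], dtype=complex)
rho0 = np.diag([1.0, 0.0]).astype(complex)            # |0><0|
out = unitary_superop(X) @ rho0.flatten(order='F')    # vec(|1><1|)
```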

process_mx_to_unitary(superop)

Compute the unitary corresponding to the (unitary-action!) super-operator superop.

spam_error_generator(spamvec, target_spamvec, mx_basis, typ='logGTi')

Construct an error generator from a SPAM vector and its target.

error_generator(gate, target_op, mx_basis, typ='logG-logT', logG_weight=None)

Construct the error generator from a gate and its target.

operation_from_error_generator(error_gen, target_op, mx_basis, typ='logG-logT')

Construct a gate from an error generator and a target gate.

std_scale_factor(dim, projection_type)

Gets the scaling factors required to turn std_error_generators() output into projectors.

std_error_generators(dim, projection_type, projection_basis)

Compute the gate error generators for a standard set of errors.

std_errorgen_projections(errgen, projection_type, projection_basis, mx_basis='gm', return_generators=False, return_scale_fctr=False)

Compute the projections of a gate error generator onto generators for a standard set of errors.

_assert_shape(ar, shape, sparse=False)

Asserts ar.shape == shape; works with sparse matrices too

lindblad_error_generator(errorgen_type, basis_element_labels, basis_1q, normalize, sparse=False, tensorprod_basis=False)

TODO: docstring - labels can be, e.g. ('H', 'XX') and basis should be a 1-qubit basis w/single-char labels

lindblad_error_generators(dmbasis_ham, dmbasis_other, normalize, other_mode='all')

Compute the superoperator-generators corresponding to Lindblad terms.

lindblad_errorgen_projections(errgen, ham_basis, other_basis, mx_basis='gm', normalize=True, return_generators=False, other_mode='all', sparse=False)

Compute the projections of an error generator onto generators for the Lindblad-term errors.

projections_to_lindblad_terms(ham_projs, other_projs, ham_basis, other_basis, other_mode='all', return_basis=True)

Converts error-generator projections into a dictionary of error coefficients.

lindblad_terms_to_projections(lindblad_term_dict, basis, other_mode='all')

Convert a set of Lindblad terms into a dense matrix/grid of projections.

lindblad_param_labels(ham_basis, other_basis, param_mode='cptp', other_mode='all', dim=None)

Generate human-readable labels for the Lindblad parameters.

lindblad_projections_to_paramvals(ham_projs, other_projs, param_mode='cptp', other_mode='all', truncate=True)

Compute Lindblad-gate parameter values from error generator projections.

lindblad_terms_projection_indices(ham_basis, other_basis, other_mode='all')

Constructs a dictionary mapping Lindblad term labels to projection coefficients.

paramvals_to_lindblad_projections(paramvals, ham_basis_size, other_basis_size, param_mode='cptp', other_mode='all', cache_mx=None)

Construct Lindblad-term projections from Lindblad-operator parameter values.

paramvals_to_lindblad_projections_deriv(paramvals, ham_basis_size, other_basis_size, param_mode='cptp', other_mode='all', cache_mx=None)

Construct derivative of Lindblad-term projections with respect to the parameter values.

rotation_gate_mx(r, mx_basis='gm')

Construct a rotation operation matrix.

project_model(model, target_model, projectiontypes=('H', 'S', 'H+S', 'LND'), gen_type='logG-logT', logG_weight=None)

Construct a new model(s) by projecting the error generator of model onto some sub-space then reconstructing.

compute_best_case_gauge_transform(gate_mx, target_gate_mx, return_all=False)

Returns a gauge transformation that maps gate_mx into a matrix that is co-diagonal with target_gate_mx.

project_to_target_eigenspace(model, target_model, eps=1e-06)

Project each gate of model onto the eigenspace of the corresponding gate within target_model.

unitary_to_pauligate(u)

Get the linear operator on (vectorized) density matrices corresponding to a n-qubit unitary operator on states.

is_valid_lindblad_paramtype(typ)

Whether typ is a recognized Lindblad-gate parameterization type.

effect_label_to_outcome(povm_and_effect_lbl)

Extract the outcome label from a "simplified" effect label.

effect_label_to_povm(povm_and_effect_lbl)

Extract the POVM label from a "simplified" effect label.

single_qubit_gate(hx, hy, hz, noise=0)

Construct the single-qubit operation matrix.

two_qubit_gate(ix=0, iy=0, iz=0, xi=0, xx=0, xy=0, xz=0, yi=0, yx=0, yy=0, yz=0, zi=0, zx=0, zy=0, zz=0, ii=0)

Construct a two-qubit operation matrix.

cache_by_hashed_args(obj)

Decorator for caching a function's values
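As a rough illustration of what such a caching decorator does, here is a hand-rolled memoizer (a hypothetical stand-in; pyGSTi's cache_by_hashed_args additionally handles arguments, such as arrays, that are not directly hashable):

```python
import functools

def cache_by_hashed_args_sketch(fn):
    # Memoize results keyed on the (hashable) positional arguments.
    cache = {}

    @functools.wraps(fn)
    def wrapper(*args):
        if args not in cache:
            cache[args] = fn(*args)
        return cache[args]
    return wrapper

calls = []

@cache_by_hashed_args_sketch
def slow_square(x):
    calls.append(x)  # track how often the underlying computation runs
    return x * x
```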

timed_block(label, time_dict=None, printer=None, verbosity=2, round_places=6, pre_message=None, format_str=None)

Context manager that times a block of code

time_hash()

Get string-version of current time

tvd(p, q)

Calculates the total variational distance between two probability distributions.

classical_fidelity(p, q)

Calculates the (classical) fidelity between two probability distributions.
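Both quantities have short closed forms: TVD(p, q) = (1/2) Σ_i |p_i − q_i|, and the classical fidelity is the squared Bhattacharyya coefficient (Σ_i √(p_i q_i))². A pure-Python sketch (function names are illustrative; check pyGSTi's docstrings for its exact fidelity convention):

```python
import math

def tvd_sketch(p, q):
    # Total variational distance: half the L1 distance between distributions.
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

def classical_fidelity_sketch(p, q):
    # Squared Bhattacharyya coefficient; whether pyGSTi squares the sum
    # is a convention to verify against its documentation.
    return sum(math.sqrt(pi * qi) for pi, qi in zip(p, q)) ** 2
```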

predicted_rb_number(model, target_model, weights=None, d=None, rtype='EI')

Predicts the RB error rate from a model.

predicted_rb_decay_parameter(model, target_model, weights=None)

Computes the second largest eigenvalue of the 'L matrix' (see the L_matrix function).

rb_gauge(model, target_model, weights=None, mx_basis=None, eigenvector_weighting=1.0)

Computes the gauge transformation required so that the RB number matches the average model infidelity.

transform_to_rb_gauge(model, target_model, weights=None, mx_basis=None, eigenvector_weighting=1.0)

Transforms a Model into the "RB gauge" (see the RB_gauge function).

L_matrix(model, target_model, weights=None)

Constructs a generalization of the 'L-matrix' linear operator on superoperators.

R_matrix_predicted_rb_decay_parameter(model, group, group_to_model=None, weights=None)

Returns the second largest eigenvalue of a generalization of the 'R-matrix' [see the R_matrix function].

R_matrix(model, group, group_to_model=None, weights=None)

Constructs a generalization of the 'R-matrix' of Proctor et al Phys. Rev. Lett. 119, 130502 (2017).

errormaps(model, target_model)

Computes the 'left-multiplied' error maps associated with a noisy gate set, along with the average error map.

gate_dependence_of_errormaps(model, target_model, norm='diamond', mx_basis=None)

Computes the "gate-dependence of error maps" parameter.

length(s)

Returns the length (the number of indices) contained in a slice.

shift(s, offset)

Returns a new slice whose start and stop points are shifted by offset.

intersect(s1, s2)

Returns the intersection of two slices (which must have the same step).

intersect_within(s1, s2)

Returns the intersection of two slices (which must have the same step).

indices(s, n=None)

Returns a list of the indices specified by slice s.

list_to_slice(lst, array_ok=False, require_contiguous=True)

Returns a slice corresponding to a given list of (integer) indices, if this is possible.
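The conversion is only possible when the indices are contiguous (or, more generally, evenly spaced). A simplified stand-in illustrating the idea (pyGSTi's version adds `array_ok` and `require_contiguous` handling):

```python
def list_to_slice_sketch(lst):
    # Convert a list of evenly spaced integers into an equivalent slice;
    # raise if no such slice exists.
    if len(lst) == 0:
        return slice(0, 0)
    step = lst[1] - lst[0] if len(lst) > 1 else 1
    if any(b - a != step for a, b in zip(lst, lst[1:])):
        raise ValueError("list is not expressible as a slice")
    return slice(lst[0], lst[-1] + step, step)
```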

to_array(slc_or_list_like)

Returns slc_or_list_like as an index array (an integer numpy.ndarray).

divide(slc, max_len)

Divides a slice into sub-slices based on a maximum length (for each sub-slice).

slice_of_slice(slc, base_slc)

A slice that is the composition of base_slc and slc.

slice_hash(slc)

smart_cached(obj)

Decorator for applying a smart cache to a single function or method.

symplectic_form(n, convention='standard')

Creates the symplectic form for the number of qubits specified.
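Over GF(2) the symplectic form is a 2n × 2n binary matrix; the two conventions differ only in how the X/Z halves are ordered. A sketch (the exact matrices pyGSTi assigns to 'standard' and 'directsum' should be checked against its docs):

```python
def symplectic_form_sketch(n, convention='standard'):
    # 'standard' here means the block form [[0, I], [I, 0]];
    # 'directsum' places 2x2 blocks [[0, 1], [1, 0]] down the diagonal.
    omega = [[0] * (2 * n) for _ in range(2 * n)]
    if convention == 'standard':
        for i in range(n):
            omega[i][n + i] = omega[n + i][i] = 1
    else:  # 'directsum'
        for i in range(n):
            omega[2 * i][2 * i + 1] = omega[2 * i + 1][2 * i] = 1
    return omega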

change_symplectic_form_convention(s, outconvention='standard')

Maps the input symplectic matrix between the 'standard' and 'directsum' symplectic form conventions.

check_symplectic(m, convention='standard')

Checks whether a matrix is symplectic.

inverse_symplectic(s)

Returns the inverse of a symplectic matrix over the integers mod 2.

inverse_clifford(s, p)

Returns the inverse of a Clifford gate in the symplectic representation.

check_valid_clifford(s, p)

Checks if a symplectic matrix - phase vector pair (s,p) is the symplectic representation of a Clifford.

construct_valid_phase_vector(s, pseed)

Constructs a phase vector that, when paired with the provided symplectic matrix, defines a Clifford gate.

find_postmultipled_pauli(s, p_implemented, p_target, qubit_labels=None)

Finds the Pauli layer that should be appended to a circuit to implement a given Clifford.

find_premultipled_pauli(s, p_implemented, p_target, qubit_labels=None)

Finds the Pauli layer that should be prepended to a circuit to implement a given Clifford.

find_pauli_layer(pvec, qubit_labels, pauli_labels=['I', 'X', 'Y', 'Z'])

TODO: docstring

find_pauli_number(pvec)

TODO: docstring

compose_cliffords(s1, p1, s2, p2, do_checks=True)

Multiplies two cliffords in the symplectic representation.

symplectic_kronecker(sp_factors)

Takes a kronecker product of symplectic representations.

prep_stabilizer_state(nqubits, zvals=None)

Construct the (s,p) stabilizer representation for a computational basis state given by zvals.

apply_clifford_to_stabilizer_state(s, p, state_s, state_p)

Applies a clifford in the symplectic representation to a stabilizer state in the standard stabilizer representation.

pauli_z_measurement(state_s, state_p, qubit_index)

Computes the probabilities of 0/1 (+/-) outcomes from measuring a Pauli operator on a stabilizer state.

colsum(i, j, s, p, n)

A helper routine used for manipulating stabilizer state representations.

colsum_acc(acc_s, acc_p, j, s, p, n)

A helper routine used for manipulating stabilizer state representations.

stabilizer_measurement_prob(state_sp_tuple, moutcomes, qubit_filter=None, return_state=False)

Compute the probability of a given outcome when measuring some or all of the qubits in a stabilizer state.

embed_clifford(s, p, qubit_inds, n)

Embeds the (s,p) Clifford symplectic representation into a larger symplectic representation.

compute_internal_gate_symplectic_representations(gllist=None)

Creates a dictionary of the symplectic representations of 'standard' Clifford gates.

symplectic_rep_of_clifford_circuit(circuit, srep_dict=None, pspec=None)

Returns the symplectic representation of the composite Clifford implemented by the specified Clifford circuit.

symplectic_rep_of_clifford_layer(layer, n=None, q_labels=None, srep_dict=None, add_internal_sreps=True)

Constructs the symplectic representation of the n-qubit Clifford implemented by a single quantum circuit layer.

one_q_clifford_symplectic_group_relations()

Gives the group relationships between the ‘I’, ‘H’, ‘P’, ‘HP’, ‘PH’, and ‘HPH’ up-to-Paulis operators.

unitary_is_clifford(unitary)

Returns True if the unitary is a Clifford gate (w.r.t the standard basis), and False otherwise.

_unitary_to_symplectic_1q(u, flagnonclifford=True)

Returns the symplectic representation of a single-qubit Clifford unitary.

_unitary_to_symplectic_2q(u, flagnonclifford=True)

Returns the symplectic representation of a two-qubit Clifford unitary.

unitary_to_symplectic(u, flagnonclifford=True)

Returns the symplectic representation of a one-qubit or two-qubit Clifford unitary.

random_symplectic_matrix(n, convention='standard', rand_state=None)

Returns a symplectic matrix of dimensions 2n x 2n sampled uniformly at random from the symplectic group S(n).

random_clifford(n, rand_state=None)

Returns a Clifford, in the symplectic representation, sampled uniformly at random from the n-qubit Clifford group.

random_phase_vector(s, n, rand_state=None)

Generates a uniformly random phase vector for a n-qubit Clifford.

bitstring_for_pauli(p)

Get the bitstring corresponding to a Pauli.

apply_internal_gate_to_symplectic(s, gate_name, qindex_list, optype='row')

Applies a Clifford gate to the n-qubit Clifford gate specified by the 2n x 2n symplectic matrix.

compute_num_cliffords(n)

The number of Clifford gates in the n-qubit Clifford group.

compute_num_symplectics(n)

The number of elements in the symplectic group S(n) over the 2-element finite field.

compute_num_cosets(n)

Returns the number of different cosets for the symplectic group S(n) over the 2-element finite field.
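These counts follow standard formulas: |Sp(2n, 2)| = 2^(n²) ∏_{j=1}^{n} (4^j − 1), and each symplectic matrix pairs with 4^n phase vectors to give the size of the n-qubit Clifford group (counted up to global phase). A sketch:

```python
def num_symplectics_sketch(n):
    # |Sp(2n, 2)| = 2^(n^2) * prod_{j=1..n} (4^j - 1)
    count = 2 ** (n * n)
    for j in range(1, n + 1):
        count *= 4 ** j - 1
    return count

def num_cliffords_sketch(n):
    # 4^n valid phase vectors per symplectic matrix
    return 4 ** n * num_symplectics_sketch(n)
```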

symplectic_innerproduct(v, w)

Returns the symplectic inner product of two vectors in F_2^(2n).

symplectic_transvection(k, v)

Applies transvection Z k to v.
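The transvection Z_k adds k to v exactly when the symplectic inner product ⟨k, v⟩ is 1. A sketch using the [[0, I], [I, 0]] form (pyGSTi's stored convention may differ):

```python
def symp_inner_sketch(v, w):
    # Binary symplectic inner product in F_2^(2n):
    # <v, w> = sum_i (v_i * w_{n+i} + v_{n+i} * w_i) mod 2
    n = len(v) // 2
    return sum(v[i] * w[n + i] + v[n + i] * w[i] for i in range(n)) % 2

def transvection_sketch(k, v):
    # Z_k(v) = v + <k, v> * k  (mod 2)
    t = symp_inner_sketch(k, v)
    return [(vi + t * ki) % 2 for vi, ki in zip(v, k)]
```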

int_to_bitstring(i, n)

Converts integer i to a length-n array of bits.

bitstring_to_int(b, n)

Converts an n-bit string b to an integer between 0 and 2^`n` - 1.
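These two conversions are inverses of each other. A sketch assuming most-significant-bit-first ordering (pyGSTi's actual bit ordering should be checked before relying on this):

```python
def int_to_bitstring_sketch(i, n):
    # Extract n bits of i, most-significant bit first
    return [(i >> (n - 1 - k)) & 1 for k in range(n)]

def bitstring_to_int_sketch(b, n):
    # Reassemble the integer from its n-bit representation
    return sum(bit << (n - 1 - k) for k, bit in enumerate(b[:n]))
```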

find_symplectic_transvection(x, y)

A utility function for selecting a random Clifford element.

compute_symplectic_matrix(i, n)

Returns the 2n x 2n symplectic matrix, over the finite field containing 0 and 1, with the "canonical" index i.

compute_symplectic_label(gn, n=None)

Returns the "canonical" index of 2n x 2n symplectic matrix gn over the finite field containing 0 and 1.

random_symplectic_index(n, rand_state=None)

The index of a uniformly random 2n x 2n symplectic matrix over the finite field containing 0 and 1.

_numpy14einsumfix()

Calling str() on the first argument of einsum skirts a bug in NumPy 1.14.0

Attributes

__version__

_dummy_profiler

CUSTOMLM

FLOATSIZE

id2x2

sigmax

sigmay

sigmaz

sigmaii

sigmaix

sigmaiy

sigmaiz

sigmaxi

sigmaxx

sigmaxy

sigmaxz

sigmayi

sigmayx

sigmayy

sigmayz

sigmazi

sigmazx

sigmazy

sigmazz

ROBUST_SUFFIX_LIST

DEFAULT_BAD_FIT_THRESHOLD

_basis_constructor_dict

gmvec_to_stdmx

ppvec_to_stdmx

qtvec_to_stdmx

stdvec_to_stdmx

stdmx_to_ppvec

stdmx_to_gmvec

stdmx_to_stdvec

TOL

_fastcalc

EXPM_DEFAULT_TOL

IMAG_TOL

id2x2

sigmax

sigmay

sigmaz

sigmaii

sigmaix

sigmaiy

sigmaiz

sigmaxi

sigmaxx

sigmaxy

sigmaxz

sigmayi

sigmayx

sigmayy

sigmayz

sigmazi

sigmazx

sigmazy

sigmazz

pygsti.__version__ = 0.9.10.post4
pygsti.contract(model, to_what, dataset=None, maxiter=1000000, tol=0.01, use_direct_cp=True, method='Nelder-Mead', verbosity=0)

Contract a Model to a specified space.

All contraction operations except ‘vSPAM’ operate entirely on the gate matrices and leave state preparations and measurements alone, while ‘vSPAM’ operates only on the SPAM operations.

Parameters
  • model (Model) – The model to contract

  • to_what (string) –

    Specifies which space the model is contracted to. Allowed values are:

    • ’TP’ – All gates are manifestly trace-preserving maps.

    • ’CP’ – All gates are manifestly completely-positive maps.

    • ’CPTP’ – All gates are manifestly completely-positive and trace-preserving maps.

    • ’XP’ – All gates are manifestly “experimentally-positive” maps.

    • ’XPTP’ – All gates are manifestly “experimentally-positive” and trace-preserving maps.

    • ’vSPAM’ – state preparation and measurement operations are valid.

    • ’nothing’ – no contraction is performed.

  • dataset (DataSet, optional) – Dataset to use to determine whether a model is in the “experimentally-positive” (XP) space. Required only when contracting to XP or XPTP.

  • maxiter (int, optional) – Maximum number of iterations for iterative contraction routines.

  • tol (float, optional) – Tolerance for iterative contraction routines.

  • use_direct_cp (bool, optional) – Whether to use a faster direct-contraction method for CP contraction. This method essentially transforms to the Choi matrix, truncates any negative eigenvalues to zero, then transforms back to an operation matrix.

  • method (string, optional) – The method used when contracting to XP and non-directly to CP (i.e. use_direct_cp == False).

  • verbosity (int, optional) – How much detail to send to stdout.

Returns

Model – The contracted model
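The direct-CP method described under use_direct_cp can be illustrated in miniature: clip negative Choi eigenvalues to zero, then rescale to preserve the trace. A dependency-free sketch on a diagonal Choi matrix (the helper name is hypothetical; the real routine performs a full eigendecomposition and basis transformations):

```python
def clip_choi_eigenvalues(evals):
    # Zero out negative Choi-matrix eigenvalues, then rescale so the
    # total (trace) is preserved.  A diagonal example keeps the sketch
    # free of any linear-algebra dependency.
    total = sum(evals)
    clipped = [max(ev, 0.0) for ev in evals]
    scale = total / sum(clipped)
    return [ev * scale for ev in clipped]
```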

pygsti._contract_to_xp(model, dataset, verbosity, method='Nelder-Mead', maxiter=100000, tol=1e-10)
pygsti._contract_to_cp(model, verbosity, method='Nelder-Mead', maxiter=100000, tol=0.01)
pygsti._contract_to_cp_direct(model, verbosity, tp_also=False, maxiter=100000, tol=1e-08)
pygsti._contract_to_tp(model, verbosity)
pygsti._contract_to_valid_spam(model, verbosity=0)

Contract the state preparation and measurement operations of a Model to the space of valid quantum operations.

Parameters
  • model (Model) – The model to contract

  • verbosity (int) – How much detail to send to stdout.

Returns

Model – The contracted model

class pygsti._DummyProfiler

Bases: object

A dummy profiler that doesn’t do anything.

A class which implements the same interface as Profiler but which doesn’t actually do any profiling (consists of stub functions).

add_time(self, name, start_time, prefix=0)

Stub function that does nothing

Parameters
  • name (string) – The name of the timer to add elapsed time into (if the name doesn’t exist, one is created and initialized to the elapsed time).

  • start_time (float) – The starting time used to compute the elapsed, i.e. the value time.time()-start_time, which is added to the named timer.

  • prefix (int, optional) – Prefix the timer name with the current stack depth and this number of function names, starting with the current function and moving up the call stack. When zero, no prefix is added. For example, with prefix == 1, “Total” might map to ” 3: myFunc: Total”.

Returns

None

add_count(self, name, inc=1, prefix=0)

Stub function that does nothing

Parameters
  • name (string) – The name of the counter to add inc into (if the name doesn’t exist, one is created and initialized to inc).

  • inc (int, optional) – The increment (the value to add to the counter).

  • prefix (int, optional) – Prefix the timer name with the current stack depth and this number of function names, starting with the current function and moving up the call stack. When zero, no prefix is added. For example, with prefix == 1, “Total” might map to ” 3: myFunc: Total”.

Returns

None

memory_check(self, name, printme=None, prefix=0)

Stub function that does nothing

Parameters
  • name (string) – The name of the memory checkpoint. (Later, memory information can be organized by checkpoint name.)

  • printme (bool, optional) – Whether or not to print the memory usage during this function call (if None, the default, then the value of default_print_memcheck specified during Profiler construction is used).

  • prefix (int, optional) – Prefix the timer name with the current stack depth and this number of function names, starting with the current function and moving up the call stack. When zero, no prefix is added. For example, with prefix == 1, “Total” might map to ” 3: myFunc: Total”.

Returns

None

class pygsti.BuiltinBasis(name, dim_or_statespace, sparse=False)

Bases: LazyBasis

A basis that is included within and integrated into pyGSTi.

Such a basis may, in most cases, be represented merely by its name. (In actuality, a dimension is also required, but this can often be inferred from context.)

Parameters
  • name ({"pp", "gm", "std", "qt", "id", "cl", "sv"}) – Name of the basis to be created.

  • dim_or_statespace (int or StateSpace) – The dimension of the basis to be created or the state space for which a basis should be created. Note that when this is an integer it is the dimension of the vectors, which correspond to flattened elements in simple cases. Thus, a 1-qubit basis would have dimension 2 in the state-vector (name=”sv”) case and dimension 4 when constructing a density-matrix basis (e.g. name=”pp”).

  • sparse (bool, optional) – Whether basis elements should be stored as SciPy CSR sparse matrices or dense numpy arrays (the default).

_to_nice_serialization(self)
classmethod _from_nice_serialization(cls, state)
property dim(self)

The dimension of the vector space this basis fully or partially spans. Equivalently, the length of the vector_elements of the basis.

property size(self)

The number of elements (or vector-elements) in the basis.

property elshape(self)

The shape of each element. Typically either a length-1 or length-2 tuple, corresponding to vector or matrix elements, respectively. Note that vector elements always have shape (dim,) (or (dim,1) in the sparse case).

__hash__(self)

Return hash(self).

_lazy_build_elements(self)
_lazy_build_labels(self)
_copy_with_toggled_sparsity(self)
__eq__(self, other)

Return self==value.

class pygsti.VerbosityPrinter(verbosity=1, filename=None, comm=None, warnings=True, split=False, clear_file=True)

Bases: object

Class responsible for logging things to stdout or a file.

Controls verbosity and can print progress bars. ex:

>>> VerbosityPrinter(1)

would construct a printer that printed out messages of level one or higher to the screen.

>>> VerbosityPrinter(3, 'output.txt')

would construct a printer that sends verbose output to a text file

The static function create_printer() will construct a printer from either an integer or an already existing printer. It is a static method of the VerbosityPrinter class, so it is called like so:

>>> VerbosityPrinter.create_printer(2)

or

>>> VerbosityPrinter.create_printer(VerbosityPrinter(3, 'output.txt'))

printer.log('status') would log ‘status’ if the printer’s verbosity was one or higher. printer.log('status2', 2) would log ‘status2’ if the printer’s verbosity was two or higher.

printer.error('something terrible happened') would ALWAYS log ‘something terrible happened’. printer.warning('something worrisome happened') would log if verbosity was one or higher - the same as a normal status.

Both printer.error and printer.warning will prepend ‘ERROR: ‘ or ‘WARNING: ‘ to the message they are given. Optionally, printer.log() can also prepend ‘Status_n’ to the message, where n is the message level.

Logging of progress bars/iterations:

>>> with printer_instance.progress_logging(verbosity):
>>>     for i, item in enumerate(data):
>>>         printer.show_progress(i, len(data))
>>>         printer.log(...)

will output either a progress bar or iteration statuses depending on the printer’s verbosity

Parameters
  • verbosity (int) – How verbose the printer should be.

  • filename (str, optional) – Where to put output (If none, output goes to screen)

  • comm (mpi4py.MPI.Comm or ResourceAllocation, optional) – Restricts output if the program is running in parallel (by default, if the rank is 0, output is sent to the screen; otherwise output is sent to commfiles 1, 2, …).

  • warnings (bool, optional) – Whether or not to print warnings

  • split (bool, optional) – Whether to split output between stdout and stderr as appropriate, or to combine the streams so everything is sent to stdout.

  • clear_file (bool, optional) – Whether or not filename should be cleared (overwritten) or simply appended to.

_comm_path

relative path where comm files (outputs of non-root ranks) are stored.

Type

str

_comm_file_name

root filename for comm files (outputs of non-root ranks).

Type

str

_comm_file_ext

filename extension for comm files (outputs of non-root ranks).

Type

str

_comm_path =
_comm_file_name =
_comm_file_ext = .txt
_create_file(self, filename)
_get_comm_file(self, comm_id)
clone(self)

Instead of deepcopy, initialize a new printer object and feed it some select deepcopied members

Returns

VerbosityPrinter

static create_printer(verbosity, comm=None)

Function for converting between interfaces

Parameters
  • verbosity (int or VerbosityPrinter object, required) – the object to build a printer from

  • comm (mpi4py.MPI.Comm object, optional) – Comm object to build printers with. !Will override!

Returns

VerbosityPrinter – The printer object, constructed from either an integer or another printer

__add__(self, other)

Increase the verbosity of a VerbosityPrinter

__sub__(self, other)

Decrease the verbosity of a VerbosityPrinter

__getstate__(self)
__setstate__(self, state_dict)
_append_to(self, filename, message)
_put(self, message, flush=True, stderr=False)
_record(self, typ, level, message)
error(self, message)

Log an error to the screen/file

Parameters

message (str) – the error message

Returns

None

warning(self, message)

Log a warning to the screen/file if verbosity > 1

Parameters

message (str) – the warning message

Returns

None

log(self, message, message_level=None, indent_char='  ', show_statustype=False, do_indent=True, indent_offset=0, end='\n', flush=True)

Log a status message to screen/file.

Determines whether the message should be printed based on current verbosity setting, then sends the message to the appropriate output

Parameters
  • message (str) – the message to print (or log)

  • message_level (int, optional) – the minimum verbosity level at which this message is printed.

  • indent_char (str, optional) – what constitutes an “indent” (messages at higher levels are indented more when do_indent=True).

  • show_statustype (bool, optional) – if True, prepend lines with “Status Level X” indicating the message_level.

  • do_indent (bool, optional) – whether messages at higher message levels should be indented. Note that if this is False it may be helpful to set show_statustype=True.

  • indent_offset (int, optional) – an additional number of indentations to add, on top of any due to the message level.

  • end (str, optional) – the character (or string) to end message lines with.

  • flush (bool, optional) – whether stdout should be flushed right after this message is printed (this avoids delays in on-screen output due to buffering).

Returns

None

_progress_bar(self, iteration, total, bar_length, num_decimals, fill_char, empty_char, prefix, suffix, indent)
_verbose_iteration(self, iteration, total, prefix, suffix, verbose_messages, indent, end)
__str__(self)

Return str(self).

verbosity_env(self, level)

Create a temporary environment with a different verbosity level.

This is context manager, controlled using Python’s with statement:

>>> with printer.verbosity_env(2):
        printer.log('Message1') # printed at verbosity level 2
        printer.log('Message2') # printed at verbosity level 2
Parameters

level (int) – the verbosity level of the environment.

progress_logging(self, message_level=1)

Context manager for logging progress bars/iterations.

(The printer will return to its normal, unrestricted state when the progress logging has finished)

Parameters

message_level (int, optional) – progress messages will not be shown until the verbosity level reaches message_level.

show_progress(self, iteration, total, bar_length=50, num_decimals=2, fill_char='#', empty_char='-', prefix='Progress:', suffix='', verbose_messages=[], indent_char='  ', end='\n')

Displays a progress message (to be used within a progress_logging block).

Parameters
  • iteration (int) – the 0-based current iteration – the iteration number this message is for.

  • total (int) – the total number of iterations expected.

  • bar_length (int, optional) – the length, in characters, of a text-format progress bar (only used when the verbosity level is exactly equal to the progress_logging message level).

  • num_decimals (int, optional) – number of places after the decimal point that are displayed in progress bar’s percentage complete.

  • fill_char (str, optional) – replaces ‘#’ as the bar-filling character

  • empty_char (str, optional) – replaces ‘-’ as the empty-bar character

  • prefix (str, optional) – message in front of the bar

  • suffix (str, optional) – message after the bar

  • verbose_messages (list, optional) – A list of strings to display after an initial “Iter X of Y” line when the verbosity level is higher than the progress_logging message level and so more verbose messages are shown (and a progress bar is not). The elements of verbose_messages will occur, one per line, after the initial “Iter X of Y” line.

  • indent_char (str, optional) – what constitutes an “indentation”.

  • end (str, optional) – the character (or string) to end message lines with.

Returns

None

_end_progress(self)
start_recording(self)

Begins recording the output (to memory).

Begins recording (in memory) a list of (type, verbosityLevel, message) tuples that is returned by the next call to :method:`stop_recording`.

Returns

None

is_recording(self)

Returns whether this VerbosityPrinter is currently recording.

Returns

bool

stop_recording(self)

Stops recording and returns recorded output.

Stops a “recording” started by :method:`start_recording` and returns the list of (type, verbosityLevel, message) tuples that have been recorded since then.

Returns

list

class pygsti.DirectSumBasis(component_bases, name=None, longname=None)

Bases: LazyBasis

A basis that is the direct sum of one or more “component” bases.

Elements of this basis are the union of the basis elements on each component, each embedded into a common block-diagonal structure where each component occupies its own block. Thus, when there is more than one component, a DirectSumBasis is not a simple basis because the size of its elements is larger than the size of its vector space (which corresponds to just the diagonal blocks of its elements).

Parameters
  • component_bases (iterable) – A list of the component bases. Each list elements may be either a Basis object or a tuple of arguments to :function:`Basis.cast`, e.g. (‘pp’,4).

  • name (str, optional) – The name of this basis. If None, the names of the component bases joined with “+” is used.

  • longname (str, optional) – A longer description of this basis. If None, then a long name is automatically generated.

vector_elements

The “vectors” of this basis, always 1D (sparse or dense) arrays.

Type

list

_to_nice_serialization(self)
classmethod _from_nice_serialization(cls, state)
property dim(self)

The dimension of the vector space this basis fully or partially spans. Equivalently, the length of the vector_elements of the basis.

property size(self)

The number of elements (or vector-elements) in the basis.

property elshape(self)

The shape of each element. Typically either a length-1 or length-2 tuple, corresponding to vector or matrix elements, respectively. Note that vector elements always have shape (dim,) (or (dim,1) in the sparse case).

__hash__(self)

Return hash(self).

_lazy_build_vector_elements(self)
_lazy_build_elements(self)
_lazy_build_labels(self)
_copy_with_toggled_sparsity(self)
__eq__(self, other)

Return self==value.

property vector_elements(self)

The “vectors” of this basis, always 1D (sparse or dense) arrays.

Returns

list

property to_std_transform_matrix(self)

Retrieve the matrix that transforms a vector from this basis to the standard basis of this basis’s dimension.

Returns

numpy array or scipy.sparse.lil_matrix – An array of shape (dim, size) where dim is the dimension of this basis (the length of its vectors) and size is the size of this basis (its number of vectors).

property to_elementstd_transform_matrix(self)

Get transformation matrix from this basis to the “element space”.

Get the matrix that transforms vectors in this basis (with length equal to the dim of this basis) to vectors in the “element space” - that is, vectors in the same standard basis that the elements of this basis are expressed in.

Returns

numpy array – An array of shape (element_dim, size) where element_dim is the dimension, i.e. size, of the elements of this basis (e.g. 16 if the elements are 4x4 matrices) and size is the size of this basis (its number of vectors).

create_equivalent(self, builtin_basis_name)

Create an equivalent basis with components of type builtin_basis_name.

Create a Basis that is equivalent in structure & dimension to this basis but whose simple components (perhaps just this basis itself) is of the builtin basis type given by builtin_basis_name.

Parameters

builtin_basis_name (str) – The name of a builtin basis, e.g. “pp”, “gm”, or “std”. Used to construct the simple components of the returned basis.

Returns

DirectSumBasis

create_simple_equivalent(self, builtin_basis_name=None)

Create a basis of type builtin_basis_name whose elements are compatible with this basis.

Create a basis that is both simple and component-free (e.g. a TensorProdBasis is a simple basis with components) of the builtin type specified, whose dimension is compatible with the elements of this basis. This function might also be named “element_equivalent”, as it returns the builtin_basis_name-analogue of the standard basis that this basis’s elements are expressed in.

Parameters

builtin_basis_name (str, optional) – The name of the built-in basis to use. If None, then a copy of this basis is returned (if it’s simple) or this basis’s name is used to try to construct a simple and component-free version of the same builtin-basis type.

Returns

Basis

class pygsti._CircuitList(circuits, op_label_aliases=None, circuit_weights=None, name=None)

Bases: pygsti.baseobjs.nicelyserializable.NicelySerializable

An immutable list (a tuple) of Circuit objects and associated metadata.

Parameters
  • circuits (list) – The list of circuits that constitutes the primary data held by this object.

  • op_label_aliases (dict, optional) – Dictionary of circuit meta-data whose keys are operation label “aliases” and whose values are circuits corresponding to what that operation label should be expanded into before querying the dataset. Defaults to the empty dictionary (no aliases defined). e.g. op_label_aliases[‘Gx^3’] = pygsti.obj.Circuit([‘Gx’,’Gx’,’Gx’])

  • circuit_weights (numpy.ndarray, optional) – If not None, an array of per-circuit weights (of length equal to the number of circuits) that are typically used to multiply the counts extracted for each circuit.

  • name (str, optional) – An optional name for this list, used for status messages.

classmethod cast(cls, circuits)

Convert (if needed) an object into a CircuitList.

Parameters

circuits (list or CircuitList) – The object to convert.

Returns

CircuitList

_to_nice_serialization(self)
classmethod _from_nice_serialization(cls, state)
__len__(self)
__getitem__(self, index)
__iter__(self)
apply_aliases(self)

Applies any operation-label aliases to this circuit list.

Returns

list – A list of :class:`Circuit`s.

truncate(self, circuits_to_keep)

Builds a new circuit list containing only a given subset.

This can be safer than just creating a new CircuitList because it preserves the aliases, etc., of this list.

Parameters

circuits_to_keep (list or set) – The circuits to retain in the returned circuit list.

Returns

CircuitList

truncate_to_dataset(self, dataset)

Builds a new circuit list containing only those elements in dataset.

Parameters

dataset (DataSet) – The dataset to check. Aliases are applied to the circuits in this circuit list before they are tested.

Returns

CircuitList

__hash__(self)

Return hash(self).

__eq__(self, other)

Return self==value.

__setstate__(self, state_dict)
class pygsti._ResourceAllocation(comm=None, mem_limit=None, profiler=None, distribute_method='default', allocated_memory=0)

Bases: object

Describes available resources and how they should be allocated.

This includes the number of processors and amount of memory, as well as a strategy for how computations should be distributed among them.

Parameters
  • comm (mpi4py.MPI.Comm, optional) – MPI communicator holding the number of available processors.

  • mem_limit (int, optional) – A rough per-processor memory limit in bytes.

  • profiler (Profiler, optional) – A lightweight profiler object for tracking resource usage.

  • distribute_method (str, optional) – The name of a distribution strategy.

classmethod cast(cls, arg)

Cast arg to a ResourceAllocation object.

If arg is already a ResourceAllocation instance, it is simply returned. Otherwise this function attempts to create a new instance from arg.

Parameters

arg (ResourceAllocation or dict) – An object that can be cast to a ResourceAllocation.

Returns

ResourceAllocation

build_hostcomms(self)
property comm_rank(self)

A safe way to get self.comm.rank (0 if self.comm is None)

property comm_size(self)

A safe way to get self.comm.size (1 if self.comm is None)
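The comm_rank / comm_size properties are just None-guards around the communicator, so serial (comm=None) runs behave like a single-processor MPI job. A minimal sketch of the pattern (the class name CommWrapper is hypothetical; this is not pyGSTi source):

```python
class CommWrapper:
    """Illustrative None-safe wrapper mimicking ResourceAllocation's
    comm_rank / comm_size properties (sketch only)."""

    def __init__(self, comm=None):
        self.comm = comm  # an mpi4py communicator, or None for serial runs

    @property
    def comm_rank(self):
        # Rank 0 when no communicator exists, so serial code paths work.
        return self.comm.rank if self.comm is not None else 0

    @property
    def comm_size(self):
        # A serial run behaves like a 1-processor MPI job.
        return self.comm.size if self.comm is not None else 1

serial = CommWrapper(None)
assert (serial.comm_rank, serial.comm_size) == (0, 1)
```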

property is_host_leader(self)

True if this processor is the rank-0 “leader” of its host (node); False otherwise.

host_comm_barrier(self)

Calls self.host_comm.barrier() when self.host_comm is not None.

This convenience function provides an often-used barrier that follows code where a single “leader” processor modifies a memory block shared between all members of self.host_comm, and the other processors must wait until this modification is performed before proceeding with their own computations.

Returns

None

copy(self)

Copy this object.

Returns

ResourceAllocation

reset(self, allocated_memory=0)

Resets internal allocation counters to given values (defaults to zero).

Parameters

allocated_memory (int64) – The value to set the memory allocation counter to.

Returns

None

add_tracked_memory(self, num_elements, dtype='d')

Adds num_elements * itemsize bytes to the total amount of allocated memory being tracked.

If the total (tracked) memory exceeds self.mem_limit a MemoryError exception is raised.

Parameters
  • num_elements (int) – The number of elements to track allocation of.

  • dtype (numpy.dtype, optional) – The type of elements, needed to compute the number of bytes per element.

Returns

None
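The bookkeeping behind add_tracked_memory can be sketched with the standard library: multiply the element count by the per-element byte size and raise MemoryError past the limit. This is a schematic only (the MemTracker name is hypothetical, and the real ResourceAllocation does considerably more):

```python
import struct

class MemTracker:
    """Sketch of the tracked-memory bookkeeping described above
    (illustrative; not pyGSTi source)."""

    def __init__(self, mem_limit=None):
        self.mem_limit = mem_limit    # bytes, or None for no limit
        self.allocated_memory = 0     # running tracked total in bytes

    def add_tracked_memory(self, num_elements, dtype='d'):
        # struct.calcsize gives the per-element byte size, e.g. 8 for 'd'
        self.allocated_memory += num_elements * struct.calcsize(dtype)
        if self.mem_limit is not None and self.allocated_memory > self.mem_limit:
            raise MemoryError("tracked memory exceeds limit")

tracker = MemTracker(mem_limit=100)
tracker.add_tracked_memory(10)        # 10 doubles = 80 bytes: within limit
try:
    tracker.add_tracked_memory(10)    # would bring the total to 160 bytes
    raised = False
except MemoryError:
    raised = True
assert raised
```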

check_can_allocate_memory(self, num_elements, dtype='d')

Checks that allocating num_elements elements doesn’t cause the memory limit to be exceeded.

This memory isn’t actually tracked - the would-be allocation is just added to the current tracked total, and a MemoryError exception is raised if the result exceeds self.mem_limit.

Parameters
  • num_elements (int) – The number of elements to track allocation of.

  • dtype (numpy.dtype, optional) – The type of elements, needed to compute the number of bytes per element.

Returns

None

temporarily_track_memory(self, num_elements, dtype='d')

Temporarily adds num_elements elements to tracked memory (a context manager).

A MemoryError exception is raised if the tracked memory exceeds self.mem_limit.

Parameters
  • num_elements (int) – The number of elements to track allocation of.

  • dtype (numpy.dtype, optional) – The type of elements, needed to compute the number of bytes per element.

Returns

contextmanager
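The temporary-tracking behavior can be sketched with contextlib: the bytes are tracked on entry and released on exit, even if the wrapped block raises. A schematic (the TempTracker name is hypothetical; not pyGSTi source):

```python
import struct
from contextlib import contextmanager

class TempTracker:
    """Sketch of temporarily_track_memory's context-manager behavior
    (illustration only)."""

    def __init__(self, mem_limit=None):
        self.mem_limit = mem_limit
        self.allocated_memory = 0

    @contextmanager
    def temporarily_track_memory(self, num_elements, dtype='d'):
        nbytes = num_elements * struct.calcsize(dtype)
        self.allocated_memory += nbytes
        if self.mem_limit is not None and self.allocated_memory > self.mem_limit:
            self.allocated_memory -= nbytes
            raise MemoryError("tracked memory exceeds limit")
        try:
            yield
        finally:
            self.allocated_memory -= nbytes  # released when the block exits

t = TempTracker(mem_limit=1000)
with t.temporarily_track_memory(100):    # 100 doubles = 800 bytes inside
    assert t.allocated_memory == 800
assert t.allocated_memory == 0           # back to zero afterwards
```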

gather_base(self, result, local, slice_of_global, unit_ralloc=None, all_gather=False)

Gather or all-gather operation using local arrays and a unit resource allocation.

Similar to a normal MPI gather call, but more easily integrates with a hierarchy of processor divisions, or nested comms, by taking a unit_ralloc argument. This is essentially another comm that specifies the groups of processors that have all computed the same local array, i.e., the same slice of the final to-be-gathered array. So, when gathering the result, only processors with unit_ralloc.rank == 0 need to contribute to the gather operation.

Parameters
  • result (numpy.ndarray, possibly shared) – The destination “global” array. When shared memory is being used, i.e. when this ResourceAllocation object has a nontrivial inter-host comm, this array must be allocated as a shared array using this ralloc or a larger one, so that result is shared between all the processors for this resource allocation’s intra-host communicator. This allows a speedup when shared memory is used by having multiple smaller gather operations in parallel instead of one large gather.

  • local (numpy.ndarray) – The locally computed quantity. This can be a shared-memory array, but need not be.

  • slice_of_global (slice or numpy.ndarray) – The slice of result that local constitutes, i.e., in the end result[slice_of_global] = local. This may be a Python slice or a NumPy array of indices.

  • unit_ralloc (ResourceAllocation, optional) – A resource allocation (essentially a comm) for the group of processors that all compute the same local result, so that only the unit_ralloc.rank == 0 processors will contribute to the gather operation. If None, then it is assumed that all processors compute different local results.

  • all_gather (bool, optional) – Whether the final result should be gathered on all the processors of this ResourceAllocation or just the root (rank 0) processor.

Returns

None
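Without MPI, the unit_ralloc idea reduces to: of each group of processors that computed the same slice, only the group's rank-0 "leader" writes into the global array. A purely serial simulation of that bookkeeping (no MPI involved; the data are made up):

```python
# Serial simulation of the unit-ralloc gather idea: processors are
# (unit_rank, slice, local_data) triples; only unit_rank == 0 members
# contribute their slice to the global result, avoiding duplicate writes.
result = [0] * 6
procs = [
    (0, slice(0, 3), [1, 2, 3]),   # group A leader
    (1, slice(0, 3), [1, 2, 3]),   # group A duplicate -- skipped
    (0, slice(3, 6), [4, 5, 6]),   # group B leader
    (1, slice(3, 6), [4, 5, 6]),   # group B duplicate -- skipped
]
for unit_rank, slc, local in procs:
    if unit_rank == 0:             # only group leaders write
        result[slc] = local

assert result == [1, 2, 3, 4, 5, 6]
```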

gather(self, result, local, slice_of_global, unit_ralloc=None)

Gather local arrays into a global result array potentially with a unit resource allocation.

Similar to a normal MPI gather call, but more easily integrates with a hierarchy of processor divisions, or nested comms, by taking a unit_ralloc argument. This is essentially another comm that specifies the groups of processors that have all computed the same local array, i.e., the same slice of the final to-be-gathered array. So, when gathering the result, only processors with unit_ralloc.rank == 0 need to contribute to the gather operation.

The global array is only gathered on the root (rank 0) processor of this resource allocation.

Parameters
  • result (numpy.ndarray, possibly shared) – The destination “global” array, only needed on the root (rank 0) processor. When shared memory is being used, i.e. when this ResourceAllocation object has a nontrivial inter-host comm, this array must be allocated as a shared array using this ralloc or a larger one, so that result is shared between all the processors for this resource allocation’s intra-host communicator. This allows a speedup when shared memory is used by having multiple smaller gather operations in parallel instead of one large gather.

  • local (numpy.ndarray) – The locally computed quantity. This can be a shared-memory array, but need not be.

  • slice_of_global (slice or numpy.ndarray) – The slice of result that local constitutes, i.e., in the end result[slice_of_global] = local. This may be a Python slice or a NumPy array of indices.

  • unit_ralloc (ResourceAllocation, optional) – A resource allocation (essentially a comm) for the group of processors that all compute the same local result, so that only the unit_ralloc.rank == 0 processors will contribute to the gather operation. If None, then it is assumed that all processors compute different local results.

Returns

None

allgather(self, result, local, slice_of_global, unit_ralloc=None)

All-gather local arrays into global arrays on each processor, potentially using a unit resource allocation.

Similar to a normal MPI gather call, but more easily integrates with a hierarchy of processor divisions, or nested comms, by taking a unit_ralloc argument. This is essentially another comm that specifies the groups of processors that have all computed the same local array, i.e., the same slice of the final to-be-gathered array. So, when gathering the result, only processors with unit_ralloc.rank == 0 need to contribute to the gather operation.

Parameters
  • result (numpy.ndarray, possibly shared) – The destination “global” array. When shared memory is being used, i.e. when this ResourceAllocation object has a nontrivial inter-host comm, this array must be allocated as a shared array using this ralloc or a larger one, so that result is shared between all the processors for this resource allocation’s intra-host communicator. This allows a speedup when shared memory is used by having multiple smaller gather operations in parallel instead of one large gather.

  • local (numpy.ndarray) – The locally computed quantity. This can be a shared-memory array, but need not be.

  • slice_of_global (slice or numpy.ndarray) – The slice of result that local constitutes, i.e., in the end result[slice_of_global] = local. This may be a Python slice or a NumPy array of indices.

  • unit_ralloc (ResourceAllocation, optional) – A resource allocation (essentially a comm) for the group of processors that all compute the same local result, so that only the unit_ralloc.rank == 0 processors will contribute to the gather operation. If None, then it is assumed that all processors compute different local results.

Returns

None

allreduce_sum(self, result, local, unit_ralloc=None)

Sum local arrays on different processors, potentially using a unit resource allocation.

Similar to a normal MPI reduce call (with MPI.SUM type), but more easily integrates with a hierarchy of processor divisions, or nested comms, by taking a unit_ralloc argument. This is essentially another comm that specifies the groups of processors that have all computed the same local array. So, when performing the sum, only processors with unit_ralloc.rank == 0 contribute to the sum. This handles the case where simply summing the local contributions from all processors would result in over-counting because multiple processors hold the same logical result (summand).

Parameters
  • result (numpy.ndarray, possibly shared) – The destination “global” array, with the same shape as all the local arrays being summed. This can be any shape (including any number of dimensions). When shared memory is being used, i.e. when this ResourceAllocation object has a nontrivial inter-host comm, this array must be allocated as a shared array using this ralloc or a larger one, so that result is shared between all the processors for this resource allocation’s intra-host communicator. This allows a speedup when shared memory is used by distributing computation of result over each host’s processors and performing these sums in parallel.

  • local (numpy.ndarray) – The locally computed quantity. This can be a shared-memory array, but need not be.

  • unit_ralloc (ResourceAllocation, optional) – A resource allocation (essentially a comm) for the group of processors that all compute the same local result, so that only the unit_ralloc.rank == 0 processors will contribute to the sum operation. If None, then it is assumed that all processors compute different local results.

Returns

None

allreduce_sum_simple(self, local, unit_ralloc=None)

A simplified sum over quantities on different processors that doesn’t use shared memory.

The shared memory usage of :method:`allreduce_sum` can be overkill when just summing a single scalar quantity. This method provides a way to easily sum a quantity across all the processors in this ResourceAllocation object using a unit resource allocation.

Parameters
  • local (int or float) – The local (per-processor) value to sum.

  • unit_ralloc (ResourceAllocation, optional) – A resource allocation (essentially a comm) for the group of processors that all compute the same local value, so that only the unit_ralloc.rank == 0 processors will contribute to the sum. If None, then it is assumed that each processor computes a logically different local value.

Returns

float or int – The sum of all local quantities, returned on all the processors.

allreduce_min(self, result, local, unit_ralloc=None)

Take elementwise min of local arrays on different processors, potentially using a unit resource allocation.

Similar to a normal MPI reduce call (with MPI.MIN type), but more easily integrates with a hierarchy of processor divisions, or nested comms, by taking a unit_ralloc argument. This is essentially another comm that specifies the groups of processors that have all computed the same local array. So, when performing the min operation, only processors with unit_ralloc.rank == 0 contribute.

Parameters
  • result (numpy.ndarray, possibly shared) – The destination “global” array, with the same shape as all the local arrays being operated on. This can be any shape (including any number of dimensions). When shared memory is being used, i.e. when this ResourceAllocation object has a nontrivial inter-host comm, this array must be allocated as a shared array using this ralloc or a larger one, so that result is shared between all the processors for this resource allocation’s intra-host communicator. This allows a speedup when shared memory is used by distributing computation of result over each host’s processors and performing these operations in parallel.

  • local (numpy.ndarray) – The locally computed quantity. This can be a shared-memory array, but need not be.

  • unit_ralloc (ResourceAllocation, optional) – A resource allocation (essentially a comm) for the group of processors that all compute the same local result, so that only the unit_ralloc.rank == 0 processors will contribute to the min operation. If None, then it is assumed that all processors compute different local results.

Returns

None

allreduce_max(self, result, local, unit_ralloc=None)

Take elementwise max of local arrays on different processors, potentially using a unit resource allocation.

Similar to a normal MPI reduce call (with MPI.MAX type), but more easily integrates with a hierarchy of processor divisions, or nested comms, by taking a unit_ralloc argument. This is essentially another comm that specifies the groups of processors that have all computed the same local array. So, when performing the max operation, only processors with unit_ralloc.rank == 0 contribute.

Parameters
  • result (numpy.ndarray, possibly shared) – The destination “global” array, with the same shape as all the local arrays being operated on. This can be any shape (including any number of dimensions). When shared memory is being used, i.e. when this ResourceAllocation object has a nontrivial inter-host comm, this array must be allocated as a shared array using this ralloc or a larger one, so that result is shared between all the processors for this resource allocation’s intra-host communicator. This allows a speedup when shared memory is used by distributing computation of result over each host’s processors and performing these operations in parallel.

  • local (numpy.ndarray) – The locally computed quantity. This can be a shared-memory array, but need not be.

  • unit_ralloc (ResourceAllocation, optional) – A resource allocation (essentially a comm) for the group of processors that all compute the same local result, so that only the unit_ralloc.rank == 0 processors will contribute to the max operation. If None, then it is assumed that all processors compute different local results.

Returns

None

bcast(self, value, root=0)

Broadcasts a value from the root processor/host to the others in this resource allocation.

This is similar to a usual MPI broadcast, except it takes advantage of shared memory when it is available. When shared memory is being used, i.e. when this ResourceAllocation object has a nontrivial inter-host comm, then this routine places value in a shared memory buffer and uses the resource allocation’s inter-host communicator to broadcast the result from the root host to all the other hosts using all the processors on the root host in parallel (all processors with the same intra-host rank participate in an MPI broadcast).

Parameters
  • value (numpy.ndarray) – The value to broadcast. May be shared memory but doesn’t need to be. This only needs to be specified on the root processor; other processors can provide any value for this argument (it is unused).

  • root (int) – The rank of the processor whose value is to be broadcast.

Returns

numpy.ndarray – The broadcast value, in a new, non-shared-memory array.

__getstate__(self)
class pygsti._CustomLMOptimizer(maxiter=100, maxfev=100, tol=1e-06, fditer=0, first_fditer=0, damping_mode='identity', damping_basis='diagonal_values', damping_clip=None, use_acceleration=False, uphill_step_threshold=0.0, init_munu='auto', oob_check_interval=0, oob_action='reject', oob_check_mode=0, serial_solve_proc_threshold=100)

Bases: Optimizer

A Levenberg-Marquardt optimizer customized for GST-like problems.

Parameters
  • maxiter (int, optional) – The maximum number of (outer) iterations.

  • maxfev (int, optional) – The maximum number of function evaluations.

  • tol (float or dict, optional) – The tolerance, specified as a single float or as a dict with keys {‘relx’, ‘relf’, ‘jac’, ‘maxdx’}. A single float sets the ‘relf’ and ‘jac’ elements and leaves the others at their default values.

  • fditer (int, optional) – Internally compute the Jacobian using a finite-difference method for the first fditer iterations. This is useful when the initial point lies at a special or singular point where the analytic Jacobian is misleading.

  • first_fditer (int, optional) – Number of finite-difference iterations applied to the first stage of the optimization (only). Unused.

  • damping_mode ({'identity', 'JTJ', 'invJTJ', 'adaptive'}) – How damping is applied. ‘identity’ means that the damping parameter mu multiplies the identity matrix. ‘JTJ’ means that mu multiplies the diagonal or singular values (depending on damping_basis) of the JTJ (Fisher information and approximate Hessian) matrix, whereas ‘invJTJ’ means mu multiplies the reciprocals of these values instead. The ‘adaptive’ mode adaptively chooses a damping strategy.

  • damping_basis ({'diagonal_values', 'singular_values'}) – Whether the diagonal or singular values of the JTJ matrix are used during damping. If ‘singular_values’ is selected, then an SVD of the Jacobian (J) matrix is performed and damping is performed in the basis of (right) singular vectors. If ‘diagonal_values’ is selected, the diagonal values of relevant matrices are used as a proxy for the singular values (saving the cost of performing an SVD).

  • damping_clip (tuple, optional) – A 2-tuple giving upper and lower bounds for the values that mu multiplies. If damping_mode == “identity” then this argument is ignored, as mu always multiplies a 1.0 on the diagonal of the identity matrix. If None, then no clipping is applied.

  • use_acceleration (bool, optional) – Whether to include a geodesic acceleration term as suggested in arXiv:1201.5885. This is supposed to increase the rate of convergence with very little overhead. In practice we’ve seen mixed results.

  • uphill_step_threshold (float, optional) – Allows uphill steps when taking two consecutive steps in nearly the same direction. The condition for accepting an uphill step is that (uphill_step_threshold-beta)*new_objective < old_objective, where beta is the cosine of the angle between successive steps. If uphill_step_threshold == 0 then no uphill steps are allowed, otherwise it should take a value between 1.0 and 2.0, with 1.0 being the most permissive to uphill steps.

  • init_munu (tuple, optional) – If not None, a (mu, nu) tuple of 2 floats giving the initial values for mu and nu.

  • oob_check_interval (int, optional) – Every oob_check_interval outer iterations, the objective function (obj_fn) is called with a second argument ‘oob_check’, set to True. In this case, obj_fn can raise a ValueError exception to indicate that it is Out Of Bounds. If oob_check_interval is 0 then this check is never performed; if 1 then it is always performed.

  • oob_action ({"reject","stop"}) – What to do when the objective function indicates that it is out of bounds (by raising a ValueError as described above). “reject” means the step is rejected but the optimization proceeds; “stop” means the optimization stops and returns as converged at the last known in-bounds point.

  • oob_check_mode (int, optional) – An advanced option, expert use only. If 0 then the optimization is halted as soon as an attempt is made to evaluate the function out of bounds. If 1 then the optimization is halted only when a would-be accepted step is out of bounds.

  • serial_solve_proc_threshold (int, optional) – When there are fewer than this many processors, the optimizer will solve linear systems serially, using SciPy on a single processor, rather than using a parallelized Gaussian Elimination (with partial pivoting) algorithm coded in Python. Since SciPy’s implementation is more efficient, it’s not worth using the parallel version until there are many processors to spread the work among.
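The uphill-step rule quoted above is a simple inequality. The helper below is a hypothetical sketch of that test only (the function name and calling convention are illustrative; pyGSTi's optimizer embeds this logic differently): accept when (uphill_step_threshold - beta) * new_objective < old_objective, with beta the cosine of the angle between successive steps.

```python
import math

def accept_uphill_step(new_obj, old_obj, step, prev_step, threshold):
    """Illustrative sketch of the uphill_step_threshold acceptance test
    (hypothetical helper; not pyGSTi source)."""
    # beta = cosine of the angle between the current and previous steps
    dot = sum(a * b for a, b in zip(step, prev_step))
    norm = math.sqrt(sum(a * a for a in step)) * math.sqrt(sum(b * b for b in prev_step))
    beta = dot / norm
    if threshold == 0:
        return new_obj < old_obj          # no uphill steps allowed
    return (threshold - beta) * new_obj < old_obj

# Two nearly parallel steps (beta close to 1) let a slightly uphill move through:
assert accept_uphill_step(new_obj=1.05, old_obj=1.0,
                          step=[1.0, 0.01], prev_step=[1.0, 0.0],
                          threshold=1.0)
# With threshold 0 the same uphill move is rejected:
assert not accept_uphill_step(1.05, 1.0, [1.0, 0.01], [1.0, 0.0], 0)
```

This shows why threshold values near 1.0 are the most permissive: for consecutive steps in nearly the same direction, (threshold - beta) shrinks toward zero, so even a modest objective increase passes the test.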

_to_nice_serialization(self)
classmethod _from_nice_serialization(cls, state)
run(self, objective, profiler, printer)

Perform the optimization.

Parameters
  • objective (ObjectiveFunction) – The objective function to optimize.

  • profiler (Profiler) – A profiler to track resource usage.

  • printer (VerbosityPrinter) – printer to use for sending output to stdout.

class pygsti._Optimizer

Bases: pygsti.baseobjs.nicelyserializable.NicelySerializable

An optimizer. Optimizes an objective function.

classmethod cast(cls, obj)

Cast obj to an Optimizer.

If obj is already an Optimizer it is just returned, otherwise this function tries to create a new object using obj as a dictionary of constructor arguments.

Parameters

obj (Optimizer or dict) – The object to cast.

Returns

Optimizer

pygsti._dummy_profiler
pygsti.CUSTOMLM = True
pygsti.FLOATSIZE = 8
pygsti.run_lgst(dataset, prep_fiducials, effect_fiducials, target_model, op_labels=None, op_label_aliases=None, guess_model_for_gauge=None, svd_truncate_to=None, verbosity=0)

Performs Linear-inversion Gate Set Tomography on the dataset.

Parameters
  • dataset (DataSet) – The data used to generate the LGST estimates

  • prep_fiducials (list of Circuits) – Fiducial Circuits used to construct an informationally complete effective preparation.

  • effect_fiducials (list of Circuits) – Fiducial Circuits used to construct an informationally complete effective measurement.

  • target_model (Model) – A model used to specify which operation labels should be estimated, and a guess for the gauge in which these estimates should be returned.

  • op_labels (list, optional) – A list of which operation labels (or aliases) should be estimated. Overrides the operation labels in target_model. e.g. [‘Gi’,’Gx’,’Gy’,’Gx2’]

  • op_label_aliases (dictionary, optional) – Dictionary whose keys are operation label “aliases” and whose values are circuits corresponding to what that operation label should be expanded into before querying the dataset. Defaults to the empty dictionary (no aliases defined) e.g. op_label_aliases[‘Gx^3’] = pygsti.obj.Circuit([‘Gx’,’Gx’,’Gx’])

  • guess_model_for_gauge (Model, optional) – A model used to compute a gauge transformation that is applied to the LGST estimates before they are returned. This gauge transformation is computed such that if the estimated gates matched the model given, then the operation matrices would match, i.e. the gauge would be the same as the model supplied. Defaults to target_model.

  • svd_truncate_to (int, optional) – The Hilbert space dimension to truncate the operation matrices to, using an SVD to keep only the largest svd_truncate_to singular values of the I_tilde LGST matrix. Zero means no truncation. Defaults to the dimension of target_model.

  • verbosity (int, optional) – How much detail to send to stdout.

Returns

Model – A model containing all of the estimated labels (or aliases)
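The linear-inversion idea behind run_lgst can be sketched with NumPy: each measured probability is ⟨E_j| G |ρ_i⟩, so the matrix of probabilities for a gate is X = A G B, where A stacks the (fiducial-derived) effect row-vectors and B stacks the preparation column-vectors; G is then recovered, up to gauge, via pseudoinverses. A noise-free toy with made-up random fiducial matrices (illustrative math only, not pyGSTi code):

```python
import numpy as np

# Toy linear-inversion GST: A stacks effect (row) vectors from the effect
# fiducials, B stacks preparation (column) vectors from the prep fiducials.
rng = np.random.default_rng(0)
dim = 4                                  # Hilbert-Schmidt dimension (1 qubit)
A = rng.normal(size=(dim, dim))          # informationally complete effects
B = rng.normal(size=(dim, dim))         # informationally complete preps
G_true = rng.normal(size=(dim, dim))     # the unknown gate superoperator

X = A @ G_true @ B                       # matrix of measured probabilities
G_est = np.linalg.pinv(A) @ X @ np.linalg.pinv(B)   # linear inversion

assert np.allclose(G_est, G_true)        # exact recovery without noise
```

With finite-count data X is only an estimate, which is why run_lgst additionally applies SVD truncation (svd_truncate_to) and a gauge transformation toward guess_model_for_gauge.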

pygsti._lgst_matrix_dims(model, prep_fiducials, effect_fiducials)
pygsti._construct_ab(prep_fiducials, effect_fiducials, model, dataset, op_label_aliases=None)
pygsti._construct_x_matrix(prep_fiducials, effect_fiducials, model, op_label_tuple, dataset, op_label_aliases=None)
pygsti._construct_a(effect_fiducials, model)
pygsti._construct_b(prep_fiducials, model)
pygsti._construct_target_ab(prep_fiducials, effect_fiducials, target_model)
pygsti.gram_rank_and_eigenvalues(dataset, prep_fiducials, effect_fiducials, target_model)

Returns the rank and singular values of the Gram matrix for a dataset.

Parameters
  • dataset (DataSet) – The data used to populate the Gram matrix

  • prep_fiducials (list of Circuits) – Fiducial Circuits used to construct an informationally complete effective preparation.

  • effect_fiducials (list of Circuits) – Fiducial Circuits used to construct an informationally complete effective measurement.

  • target_model (Model) – A model used to make sense of circuit elements, and to compute the theoretical gram matrix eigenvalues (returned as svalues_target).

Returns

  • rank (int) – the rank of the Gram matrix

  • svalues (numpy array) – the singular values of the Gram matrix

  • svalues_target (numpy array) – the corresponding singular values of the Gram matrix generated by target_model.

pygsti.run_gst_fit_simple(dataset, start_model, circuits, optimizer, objective_function_builder, resource_alloc, verbosity=0)

Performs core Gate Set Tomography function of model optimization.

Optimizes the parameters of start_model by minimizing the objective function built by objective_function_builder. Probabilities are computed by the model, and outcome counts are supplied by dataset.

Parameters
  • dataset (DataSet) – The dataset to obtain counts from.

  • start_model (Model) – The Model used as a starting point for the least-squares optimization.

  • circuits (list of (tuples or Circuits)) – Each tuple contains operation labels and specifies a circuit whose probabilities are considered when trying to least-squares-fit the probabilities given in the dataset. e.g. [ (), (‘Gx’,), (‘Gx’,’Gy’) ]

  • optimizer (Optimizer or dict) – The optimizer to use, or a dictionary of optimizer parameters from which a default optimizer can be built.

  • objective_function_builder (ObjectiveFunctionBuilder) – Defines the objective function that is optimized. Can also be anything readily converted to an objective function builder, e.g. “logl”.

  • resource_alloc (ResourceAllocation) – A resource allocation object containing information about how to divide computation amongst multiple processors and any memory limits that should be imposed.

  • verbosity (int, optional) – How much detail to send to stdout.

Returns

  • result (OptimizerResult) – the result of the optimization

  • model (Model) – the best-fit model.

pygsti.run_gst_fit(mdc_store, optimizer, objective_function_builder, verbosity=0)

Performs core Gate Set Tomography function of model optimization.

Optimizes the model to the data within mdc_store by minimizing the objective function built by objective_function_builder.

Parameters
  • mdc_store (ModelDatasetCircuitsStore) – An object holding a model, data set, and set of circuits. This defines the model to be optimized, the data to fit to, and the circuits where predicted vs. observed comparisons should be made. This object also contains additional information specific to the given model, data set, and circuit list, doubling as a cache for increased performance. This information is also specific to a particular resource allocation, which affects how cached values are stored.

  • optimizer (Optimizer or dict) – The optimizer to use, or a dictionary of optimizer parameters from which a default optimizer can be built.

  • objective_function_builder (ObjectiveFunctionBuilder) – Defines the objective function that is optimized. Can also be anything readily converted to an objective function builder, e.g. “logl”. If None, then mdc_store must itself be an already-built objective function.

  • verbosity (int, optional) – How much detail to send to stdout.

Returns

  • result (OptimizerResult) – the result of the optimization

  • objfn_store (MDCObjectiveFunction) – the objective function and store containing the best-fit model evaluated at the best-fit point.

pygsti.run_iterative_gst(dataset, start_model, circuit_lists, optimizer, iteration_objfn_builders, final_objfn_builders, resource_alloc, verbosity=0)

Performs Iterative Gate Set Tomography on the dataset.

Parameters
  • dataset (DataSet) – The data used to generate MLGST gate estimates

  • start_model (Model) – The Model used as a starting point for the least-squares optimization.

  • circuit_lists (list of lists of (tuples or Circuits)) – The i-th element is a list of the circuits to be used in the i-th iteration of the optimization. Each element of these lists is a circuit, specified as either a Circuit object or as a tuple of operation labels (but all must be specified using the same type). e.g. [ [ (), (‘Gx’,) ], [ (), (‘Gx’,), (‘Gy’,) ], [ (), (‘Gx’,), (‘Gy’,), (‘Gx’,’Gy’) ] ]

  • optimizer (Optimizer or dict) – The optimizer to use, or a dictionary of optimizer parameters from which a default optimizer can be built.

  • iteration_objfn_builders (list) – List of ObjectiveFunctionBuilder objects defining which objective functions should be optimized (successively) on each iteration.

  • final_objfn_builders (list) – List of ObjectiveFunctionBuilder objects defining which objective functions should be optimized (successively) on the final iteration.

  • resource_alloc (ResourceAllocation) – A resource allocation object containing information about how to divide computation amongst multiple processors and any memory limits that should be imposed.

  • verbosity (int, optional) – How much detail to send to stdout.

Returns

  • models (list of Models) – list whose i-th element is the model corresponding to the results of the i-th iteration.

  • optimums (list of OptimizerResults) – list whose i-th element is the final optimizer result from that iteration.

  • final_objfn (MDSObjectiveFunction) – The final iteration’s objective function / store, which encapsulates the final objective function evaluated at the best-fit point (an “evaluated” model-dataset-circuits store).

pygsti._do_runopt(objective, optimizer, printer)

Runs the core model-optimization step within a GST routine by optimizing objective using optimizer.

This is factored out as a separate function because of the differences when running Taylor-term simtype calculations, which utilize this as a subroutine (see :function:`_do_term_runopt`).

Parameters
  • objective (MDSObjectiveFunction) – A “model-dataset” objective function to optimize.

  • optimizer (Optimizer) – The optimizer to use.

  • printer (VerbosityPrinter) – An object for printing output.

Returns

OptimizerResult

pygsti._do_term_runopt(objective, optimizer, printer)

Runs the core model-optimization step for models using the Taylor-term (path integral) method of computing probabilities.

This routine serves the same purpose as :function:`_do_runopt`, but is more complex because an appropriate “path set” must be found, requiring a loop of model optimizations with fixed path sets until a sufficiently “good” path set is obtained.

Parameters
  • objective (MDSObjectiveFunction) – A “model-dataset” objective function to optimize.

  • optimizer (Optimizer) – The optimizer to use.

  • printer (VerbosityPrinter) – An object for printing output.

Returns

OptimizerResult

pygsti.find_closest_unitary_opmx(operation_mx)

Find the closest (in fidelity) unitary superoperator to operation_mx.

Finds the closest operation matrix (by maximizing fidelity) to operation_mx that describes a unitary quantum gate.

Parameters

operation_mx (numpy array) – The operation matrix to act on.

Returns

numpy array – The resulting closest unitary operation matrix.
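A note on what “closest unitary” means: find_closest_unitary_opmx works on superoperator matrices and maximizes fidelity, but the closely related Frobenius-norm version of the problem has a clean closed form via the polar (SVD) decomposition. The numpy sketch below illustrates that simpler version only; it is not pyGSTi’s implementation.

```python
import numpy as np

def closest_unitary(m):
    # Closest unitary to m in Frobenius norm, via the polar decomposition:
    # if m = W @ diag(s) @ Vh is the SVD of m, the closest unitary is W @ Vh.
    w, _, vh = np.linalg.svd(m)
    return w @ vh

# A non-unitary matrix with unbalanced singular values; its closest
# unitary (in this norm) is the 2x2 identity.
m = np.array([[2.0, 0.0], [0.0, 0.5]])
u = closest_unitary(m)
```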

class pygsti._TrivialGaugeGroupElement(dim)

Bases: GaugeGroupElement

Element of TrivialGaugeGroup

Parameters

dim (int) – The Hilbert-Schmidt space dimension of the gauge group.

property transform_matrix(self)

The gauge-transform matrix.

Returns

numpy.ndarray

property transform_matrix_inverse(self)

The inverse of the gauge-transform matrix.

Returns

numpy.ndarray

deriv_wrt_params(self, wrt_filter=None)

Computes the derivative of the gauge group at this element.

That is, the derivative of a general element with respect to the gauge group’s parameters, evaluated at this element.

Parameters

wrt_filter (list or numpy.ndarray, optional) – Indices of the gauge group parameters to differentiate with respect to. If None, differentiation is performed with respect to all the group’s parameters.

Returns

numpy.ndarray

to_vector(self)

Get the parameter vector corresponding to this transform.

Returns

numpy.ndarray

from_vector(self, v)

Reinitialize this GaugeGroupElement using the parameter vector v.

Parameters

v (numpy.ndarray) – A 1D array of length :method:`num_params`

Returns

None

property num_params(self)

Return the number of parameters (degrees of freedom) of this element.

Returns

int

_to_nice_serialization(self)
classmethod _from_nice_serialization(cls, state)
pygsti.gaugeopt_to_target(model, target_model, item_weights=None, cptp_penalty_factor=0, spam_penalty_factor=0, gates_metric='frobenius', spam_metric='frobenius', gauge_group=None, method='auto', maxiter=100000, maxfev=None, tol=1e-08, oob_check_interval=0, return_all=False, comm=None, verbosity=0, check_jac=False)

Optimize the gauge degrees of freedom of a model to that of a target.

Parameters
  • model (Model) – The model to gauge-optimize

  • target_model (Model) – The model to optimize to. The metric used for comparing models is given by gates_metric and spam_metric.

  • item_weights (dict, optional) – Dictionary of weighting factors for gates and spam operators. Keys can be gate, state preparation, or POVM effect, as well as the special values “spam” or “gates” which apply the given weighting to all spam operators or gates respectively. Values are floating point numbers. Values given for specific gates or spam operators take precedence over “gates” and “spam” values. The precise use of these weights depends on the model metric(s) being used.

  • cptp_penalty_factor (float, optional) – If greater than zero, the objective function also contains CPTP penalty terms which penalize non-CPTP-ness of the gates being optimized. This factor multiplies these CPTP penalty terms.

  • spam_penalty_factor (float, optional) – If greater than zero, the objective function also contains SPAM penalty terms which penalize non-positive-ness of the state preps being optimized. This factor multiplies these SPAM penalty terms.

  • gates_metric ({"frobenius", "fidelity", "tracedist"}, optional) – The metric used to compare gates within models. “frobenius” computes the normalized sqrt(sum-of-squared-differences), with weights multiplying the squared differences (see Model.frobeniusdist()). “fidelity” and “tracedist” sum the individual infidelities or trace distances of each gate, weighted by the weights.

  • spam_metric ({"frobenius", "fidelity", "tracedist"}, optional) – The metric used to compare spam vectors within models. “frobenius” computes the normalized sqrt(sum-of-squared-differences), with weights multiplying the squared differences (see Model.frobeniusdist()). “fidelity” and “tracedist” sum the individual infidelities or trace distances of each “SPAM gate”, weighted by the weights.

  • gauge_group (GaugeGroup, optional) – The gauge group which defines which gauge transformations are optimized over. If None, then the model’s default gauge group is used.

  • method (string, optional) –

    The method used to optimize the objective function. Can be any method known by scipy.optimize.minimize such as ‘BFGS’, ‘Nelder-Mead’, ‘CG’, ‘L-BFGS-B’, or additionally:

    • ’auto’ – ‘ls’ when allowed, otherwise ‘L-BFGS-B’

    • ’ls’ – custom least-squares optimizer.

    • ’custom’ – custom CG that often works better than ‘CG’

    • ’supersimplex’ – repeated application of ‘Nelder-Mead’ to converge it

    • ’basinhopping’ – scipy.optimize.basinhopping using L-BFGS-B as a local optimizer

    • ’swarm’ – particle swarm global optimization algorithm

    • ’evolve’ – evolutionary global optimization algorithm using DEAP

    • ’brute’ – Experimental: scipy.optimize.brute using 4 points along each dimension

  • maxiter (int, optional) – Maximum number of iterations for the gauge optimization.

  • maxfev (int, optional) – Maximum number of function evaluations for the gauge optimization. Defaults to maxiter.

  • tol (float, optional) – The tolerance for the gauge optimization.

  • oob_check_interval (int, optional) – If greater than zero, gauge transformations are allowed to fail (by raising any exception) to indicate an out-of-bounds condition that the gauge optimizer will avoid. If zero, then any gauge-transform failures just terminate the optimization.

  • return_all (bool, optional) – When True, return best “goodness” value and gauge matrix in addition to the gauge optimized model.

  • comm (mpi4py.MPI.Comm, optional) – When not None, an MPI communicator for distributing the computation across multiple processors.

  • verbosity (int, optional) – How much detail to send to stdout.

  • check_jac (bool) – When True, check least squares analytic jacobian against finite differences

Returns

  • model if return_all == False

  • (goodnessMin, gaugeMx, model) if return_all == True – where goodnessMin is the minimum value of the goodness function (the best ‘goodness’) found, gaugeMx is the gauge matrix used to transform the model, and model is the final gauge-transformed model.
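To make the objective concrete, here is a minimal numpy sketch of a single gauge transformation and the Frobenius distance it is scored by. The transformation convention G → inv(S) @ G @ S and the toy 4x4 matrices are illustrative assumptions, not pyGSTi’s exact internals.

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical 4x4 (one-qubit superoperator-sized) target gate, and a
# gauge-rotated copy of it: G = S @ G_target @ inv(S) for some gauge S.
g_target = np.diag([1.0, 0.9, 0.9, 0.8])
s = np.eye(4) + 0.1 * rng.standard_normal((4, 4))  # a toy gauge element
g = s @ g_target @ np.linalg.inv(s)

def frobenius_dist(a, b):
    # The (unweighted) sqrt(sum-of-squared-differences) metric.
    return np.sqrt(np.sum(np.abs(a - b) ** 2))

# Applying the gauge transformation G -> inv(S) @ G @ S with the right S
# recovers the target, driving the gauge-optimization objective to zero.
g_fixed = np.linalg.inv(s) @ g @ s
```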

pygsti.gaugeopt_custom(model, objective_fn, gauge_group=None, method='L-BFGS-B', maxiter=100000, maxfev=None, tol=1e-08, oob_check_interval=0, return_all=False, jacobian_fn=None, comm=None, verbosity=0)

Optimize the gauge of a model using a custom objective function.

Parameters
  • model (Model) – The model to gauge-optimize

  • objective_fn (function) – The function to be minimized. The function must take a single Model argument and return a float.

  • gauge_group (GaugeGroup, optional) – The gauge group which defines which gauge transformations are optimized over. If None, then the model’s default gauge group is used.

  • method (string, optional) –

    The method used to optimize the objective function. Can be any method known by scipy.optimize.minimize such as ‘BFGS’, ‘Nelder-Mead’, ‘CG’, ‘L-BFGS-B’, or additionally:

    • ’custom’ – custom CG that often works better than ‘CG’

    • ’supersimplex’ – repeated application of ‘Nelder-Mead’ to converge it

    • ’basinhopping’ – scipy.optimize.basinhopping using L-BFGS-B as a local optimizer

    • ’swarm’ – particle swarm global optimization algorithm

    • ’evolve’ – evolutionary global optimization algorithm using DEAP

    • ’brute’ – Experimental: scipy.optimize.brute using 4 points along each dimension

  • maxiter (int, optional) – Maximum number of iterations for the gauge optimization.

  • maxfev (int, optional) – Maximum number of function evaluations for the gauge optimization. Defaults to maxiter.

  • tol (float, optional) – The tolerance for the gauge optimization.

  • oob_check_interval (int, optional) – If greater than zero, gauge transformations are allowed to fail (by raising any exception) to indicate an out-of-bounds condition that the gauge optimizer will avoid. If zero, then any gauge-transform failures just terminate the optimization.

  • return_all (bool, optional) – When True, return best “goodness” value and gauge matrix in addition to the gauge optimized model.

  • jacobian_fn (function, optional) – The jacobian of objective_fn. The function must take three parameters, 1) the un-transformed Model, 2) the transformed Model, and 3) the GaugeGroupElement representing the transformation that brings the first argument into the second.

  • comm (mpi4py.MPI.Comm, optional) – When not None, an MPI communicator for distributing the computation across multiple processors.

  • verbosity (int, optional) – How much detail to send to stdout.

Returns

  • model if return_all == False

  • (goodnessMin, gaugeMx, model) if return_all == True – where goodnessMin is the minimum value of the goodness function (the best ‘goodness’) found, gaugeMx is the gauge matrix used to transform the model, and model is the final gauge-transformed model.

pygsti._create_objective_fn(model, target_model, item_weights=None, cptp_penalty_factor=0, spam_penalty_factor=0, gates_metric='frobenius', spam_metric='frobenius', method=None, comm=None, check_jac=False)

Creates the objective function and jacobian (if available) for gaugeopt_to_target

pygsti._cptp_penalty_size(mdl)

Helper function - same as that in core.py.

pygsti._spam_penalty_size(mdl)

Helper function - same as that in core.py.

pygsti._cptp_penalty(mdl, prefactor, op_basis)

Helper function - CPTP penalty: (sum of tracenorms of gates), which in least squares optimization means returning an array of the sqrt(tracenorm) of each gate. This function is the same as that in core.py.

Returns

numpy array – a (real) 1D array of length len(mdl.operations).

pygsti._spam_penalty(mdl, prefactor, op_basis)

Helper function - SPAM penalty: (sum of tracenorms of state preparations), which in least squares optimization means returning an array of the sqrt(tracenorm) of each state preparation. This function is the same as that in core.py.

Returns

numpy array – a (real) 1D array of length _spam_penalty_size(mdl)

pygsti._cptp_penalty_jac_fill(cp_penalty_vec_grad_to_fill, mdl_pre, mdl_post, gauge_group_el, prefactor, op_basis, wrt_filter)

Helper function - jacobian of the CPTP penalty (sum of tracenorms of gates). Returns a (real) array of shape (len(mdl.operations), gauge_group_el.num_params).

pygsti._spam_penalty_jac_fill(spam_penalty_vec_grad_to_fill, mdl_pre, mdl_post, gauge_group_el, prefactor, op_basis, wrt_filter)

Helper function - jacobian of the SPAM penalty (sum of tracenorms of state preparations). Returns a (real) array of shape (_spam_penalty_size(mdl), gauge_group_el.num_params).

pygsti._gram_rank_and_evals(dataset, prep_fiducials, effect_fiducials, target_model)

Returns the rank and singular values of the Gram matrix for a dataset.

Parameters
  • dataset (DataSet) – The data used to populate the Gram matrix

  • prep_fiducials (list of Circuits) – Fiducial Circuits used to construct an informationally complete effective preparation.

  • effect_fiducials (list of Circuits) – Fiducial Circuits used to construct an informationally complete effective measurement.

  • target_model (Model) – A model used to make sense of circuit elements, and to compute the theoretical gram matrix eigenvalues (returned as svalues_target).

Returns

  • rank (int) – the rank of the Gram matrix

  • svalues (numpy array) – the singular values of the Gram matrix

  • svalues_target (numpy array) – the corresponding singular values of the Gram matrix generated by target_model.
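A minimal numpy sketch of the quantities returned: build a Gram matrix from measured probabilities indexed by the (effect fiducial, prep fiducial) pair, then take its rank and singular values. The probability table here is fabricated purely for illustration and is chosen to be rank deficient.

```python
import numpy as np

# Hypothetical measured probabilities, keyed by (effect_fiducial, prep_fiducial).
probs = {('A', 'A'): 1.0, ('A', 'B'): 0.5,
         ('B', 'A'): 0.5, ('B', 'B'): 0.25}
fids = ['A', 'B']

# Gram matrix: rows indexed by effect fiducials, columns by prep fiducials.
gram = np.array([[probs[(e, p)] for p in fids] for e in fids])

svals = np.linalg.svd(gram, compute_uv=False)  # singular values
rank = np.linalg.matrix_rank(gram)             # numerical rank
```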

pygsti.max_gram_basis(op_labels, dataset, max_length=0)

Compute a maximal set of basis circuits for a Gram matrix.

That is, a maximal set of strings {S_i} such that the gate strings { S_i S_j } are all present in dataset. If max_length > 0, then restrict len(S_i) <= max_length.

Parameters
  • op_labels (list or tuple) – the operation labels to use in Gram matrix basis strings

  • dataset (DataSet) – the dataset to use when constructing the Gram matrix

  • max_length (int, optional) – the maximum string length considered for Gram matrix basis elements. Defaults to 0 (no limit).

Returns

list of tuples – where each tuple contains operation labels and specifies a single circuit.
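The defining condition above can be checked directly: every pairwise concatenation of basis strings must itself appear as a circuit in the dataset. The greedy search below is a toy illustration of that condition on plain tuples, not pyGSTi’s actual algorithm.

```python
# Toy sketch: grow a set of strings {S_i} such that every concatenation
# S_i + S_j (both orders, including i == j) is a key of the dataset.
def greedy_gram_basis(candidates, dataset_keys):
    basis = []
    for s in candidates:
        trial = basis + [s]
        if all(a + b in dataset_keys for a in trial for b in trial):
            basis = trial
    return basis

# Dataset containing all length-2 products of Gx and Gy, but nothing with Gz.
keys = {('Gx', 'Gx'), ('Gx', 'Gy'), ('Gy', 'Gx'), ('Gy', 'Gy')}
basis = greedy_gram_basis([('Gx',), ('Gy',), ('Gz',)], keys)
# ('Gz',) is rejected because e.g. ('Gx', 'Gz') is not in the dataset.
```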

pygsti.max_gram_rank_and_eigenvalues(dataset, target_model, max_basis_string_length=10, fixed_lists=None)

Compute the rank and singular values of a maximal Gram matrix.

That is, compute the rank and singular values of the Gram matrix computed using the basis: max_gram_basis(dataset.gate_labels(), dataset, max_basis_string_length).

Parameters
  • dataset (DataSet) – the dataset to use when constructing the Gram matrix

  • target_model (Model) – A model used to make sense of circuits and for the construction of a theoretical gram matrix and spectrum.

  • max_basis_string_length (int, optional) – the maximum string length considered for Gram matrix basis elements. Defaults to 10.

  • fixed_lists ((prep_fiducials, effect_fiducials), optional) – 2-tuple of Circuit lists, specifying the preparation and measurement fiducials to use when constructing the Gram matrix, and thereby bypassing the search for such lists.

Returns

  • rank (integer)

  • singularvalues (numpy array)

  • targetsingularvalues (numpy array)

pygsti.id2x2
pygsti.sigmax
pygsti.sigmay
pygsti.sigmaz
pygsti.unitary_to_pauligate(u)

Get the linear operator on (vectorized) density matrices corresponding to a n-qubit unitary operator on states.

Parameters

u (numpy array) – A d x d array giving the action of the unitary on a state in the sigma-z basis, where d = 2**n for n qubits.

Returns

numpy array – The operator on density matrices that have been vectorized as d**2 vectors in the Pauli basis.
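The returned operator is what is commonly called the Pauli transfer matrix. A one-qubit numpy sketch of that construction follows; pyGSTi’s basis normalization conventions may differ slightly, so treat this as illustrative.

```python
import numpy as np

# One-qubit Pauli basis (I, X, Y, Z), satisfying Tr(P_i P_j) = 2 * delta_ij.
paulis = [np.eye(2, dtype=complex),
          np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]], dtype=complex),
          np.array([[1, 0], [0, -1]], dtype=complex)]

def pauli_transfer_matrix(u):
    # M[i, j] = Tr(P_i U P_j U^dagger) / 2: the action of rho -> U rho U^dag
    # on density matrices expanded in the normalized Pauli basis.
    return np.array([[(np.trace(pi @ u @ pj @ u.conj().T) / 2).real
                      for pj in paulis] for pi in paulis])

# Conjugation by X fixes I and X and flips the sign of Y and Z.
m = pauli_transfer_matrix(paulis[1])
```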

pygsti.sigmaii
pygsti.sigmaix
pygsti.sigmaiy
pygsti.sigmaiz
pygsti.sigmaxi
pygsti.sigmaxx
pygsti.sigmaxy
pygsti.sigmaxz
pygsti.sigmayi
pygsti.sigmayx
pygsti.sigmayy
pygsti.sigmayz
pygsti.sigmazi
pygsti.sigmazx
pygsti.sigmazy
pygsti.sigmazz
pygsti.single_qubit_gate(hx, hy, hz, noise=0)

Construct the single-qubit operation matrix.

Build the operation matrix given by exponentiating -i * (hx*X + hy*Y + hz*Z), where X, Y, and Z are the sigma matrices. Thus, hx, hy, and hz correspond to rotation angles divided by 2. Additionally, a uniform depolarization noise can be applied to the gate.

Parameters
  • hx (float) – Coefficient of sigma-X matrix in exponent.

  • hy (float) – Coefficient of sigma-Y matrix in exponent.

  • hz (float) – Coefficient of sigma-Z matrix in exponent.

  • noise (float, optional) – The amount of uniform depolarizing noise.

Returns

numpy array – 4x4 operation matrix which operates on a 1-qubit density matrix expressed as a vector in the Pauli basis ( {I,X,Y,Z}/sqrt(2) ).
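A sketch of the unitary part of this construction, using scipy’s matrix exponential. The actual function additionally converts this unitary to the 4x4 Pauli-basis superoperator and can apply depolarizing noise; those steps are omitted here.

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def unitary_part(hx, hy, hz):
    # The unitary exp(-i (hx*X + hy*Y + hz*Z)) underlying the gate.
    return expm(-1j * (hx * X + hy * Y + hz * Z))

# hx = pi/2 (a half-angle coefficient) gives a pi rotation about X:
# the resulting unitary equals -i * X, i.e. an X gate up to global phase.
u = unitary_part(np.pi / 2, 0.0, 0.0)
```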

pygsti.two_qubit_gate(ix=0, iy=0, iz=0, xi=0, xx=0, xy=0, xz=0, yi=0, yx=0, yy=0, yz=0, zi=0, zx=0, zy=0, zz=0, ii=0)

Construct a two-qubit operation matrix.

Build the operation matrix given by exponentiating -i * (xx*XX + xy*XY + …) where terms in the exponent are tensor products of two Pauli matrices.

Parameters
  • ix (float, optional) – Coefficient of IX matrix in exponent.

  • iy (float, optional) – Coefficient of IY matrix in exponent.

  • iz (float, optional) – Coefficient of IZ matrix in exponent.

  • xi (float, optional) – Coefficient of XI matrix in exponent.

  • xx (float, optional) – Coefficient of XX matrix in exponent.

  • xy (float, optional) – Coefficient of XY matrix in exponent.

  • xz (float, optional) – Coefficient of XZ matrix in exponent.

  • yi (float, optional) – Coefficient of YI matrix in exponent.

  • yx (float, optional) – Coefficient of YX matrix in exponent.

  • yy (float, optional) – Coefficient of YY matrix in exponent.

  • yz (float, optional) – Coefficient of YZ matrix in exponent.

  • zi (float, optional) – Coefficient of ZI matrix in exponent.

  • zx (float, optional) – Coefficient of ZX matrix in exponent.

  • zy (float, optional) – Coefficient of ZY matrix in exponent.

  • zz (float, optional) – Coefficient of ZZ matrix in exponent.

  • ii (float, optional) – Coefficient of II matrix in exponent.

Returns

numpy array – 16x16 operation matrix which operates on a 2-qubit density matrix expressed as a vector in the Pauli-Product basis.

class pygsti._DataSet(oli_data=None, time_data=None, rep_data=None, circuits=None, circuit_indices=None, outcome_labels=None, outcome_label_indices=None, static=False, file_to_load_from=None, collision_action='aggregate', comment=None, aux_info=None)

Bases: object

An association between Circuits and outcome counts, serving as the input data for many QCVV protocols.

The DataSet class associates circuits with counts or time series of counts for each outcome label, and can be thought of as a table with gate strings labeling the rows and outcome labels and/or time labeling the columns. It is designed to behave similarly to a dictionary of dictionaries, so that counts are accessed by:

count = dataset[circuit][outcomeLabel]

in the time-independent case, and in the time-dependent case, for integer time index i >= 0,

outcomeLabel = dataset[circuit][i].outcome
count = dataset[circuit][i].count
time = dataset[circuit][i].time

Parameters
  • oli_data (list or numpy.ndarray) – When static == True, a 1D numpy array containing outcome label indices (integers), concatenated for all sequences. Otherwise, a list of 1D numpy arrays, one array per gate sequence. In either case, this quantity is indexed by the values of circuit_indices or the index of circuits.

  • time_data (list or numpy.ndarray) – Same format as oli_data except stores floating-point timestamp values.

  • rep_data (list or numpy.ndarray) – Same format as oli_data except stores integer repetition counts for each “data bin” (i.e. (outcome,time) pair). If all repetitions equal 1 (“single-shot” timestamped data), then rep_data can be None (no repetitions).

  • circuits (list of (tuples or Circuits)) – Each element is a tuple of operation labels or a Circuit object. Indices for these strings are assumed to ascend from 0. These indices must correspond to the time series of spam-label indices (above). Only specify this argument OR circuit_indices, not both.

  • circuit_indices (ordered dictionary) – An OrderedDict with keys equal to circuits (tuples of operation labels) and values equal to integer indices associating a row/element of counts with the circuit. Only specify this argument OR circuits, not both.

  • outcome_labels (list of strings or int) – Specifies the set of spam labels for the DataSet. Indices for the spam labels are assumed to ascend from 0, starting with the first element of this list. These indices will associate each element of the time series with a spam label. Only specify this argument OR outcome_label_indices, not both. If an int, specifies that the outcome labels should be those for a standard set of this many qubits.

  • outcome_label_indices (ordered dictionary) – An OrderedDict with keys equal to spam labels (strings) and values equal to integer indices associating each spam label with a given index. Only specify this argument OR outcome_labels, not both.

  • static (bool) – When True, create a read-only, i.e. “static”, DataSet which cannot be modified. In this case you must specify the time-series data, circuits, and spam labels. When False, create a DataSet that can have time series data added to it; in this case you only need to specify the spam labels.

  • file_to_load_from (string or file object) – Specify this argument and no others to create a static DataSet by loading from a file (just like using the load(…) function).

  • collision_action ({"aggregate","overwrite","keepseparate"}) – Specifies how duplicate circuits should be handled. “aggregate” adds duplicate-circuit counts to the same circuit’s data at the next integer timestamp. “overwrite” only keeps the latest given data for a circuit. “keepseparate” tags duplicate-circuits by setting the .occurrence ID of added circuits that are already contained in this data set to the next available positive integer.

  • comment (string, optional) – A user-specified comment string that gets carried around with the data. A common use for this field is to attach to the data details regarding its collection.

  • aux_info (dict, optional) – A user-specified dictionary of per-circuit auxiliary information. Keys should be the circuits in this DataSet and value should be Python dictionaries.

__iter__(self)
__len__(self)
__contains__(self, circuit)

Test whether data set contains a given circuit.

Parameters

circuit (tuple or Circuit) – A tuple of operation labels or a Circuit instance which specifies the circuit to check for.

Returns

bool – whether circuit was found.

__hash__(self)

Return hash(self).

__getitem__(self, circuit)
__setitem__(self, circuit, outcome_dict_or_series)
__delitem__(self, circuit)
_get_row(self, circuit)

Get a row of data from this DataSet.

Parameters

circuit (Circuit or tuple) – The gate sequence to extract data for.

Returns

_DataSetRow

_set_row(self, circuit, outcome_dict_or_series)

Set the counts for a row of this DataSet.

Parameters
  • circuit (Circuit or tuple) – The gate sequence to extract data for.

  • outcome_dict_or_series (dict or tuple) – The outcome count data, either a dictionary of outcome counts (with keys as outcome labels) or a tuple of lists. In the latter case this can be a 2-tuple: (outcome-label-list, timestamp-list) or a 3-tuple: (outcome-label-list, timestamp-list, repetition-count-list).

Returns

None

keys(self)

Returns the circuits used as keys of this DataSet.

Returns

list – A list of Circuit objects which index the data counts within this data set.

items(self)

Iterator over (circuit, timeSeries) pairs.

Here circuit is a tuple of operation labels and timeSeries is a _DataSetRow instance, which behaves similarly to a list of spam labels whose index corresponds to the time step.

Returns

_DataSetKVIterator

values(self)

Iterator over _DataSetRow instances corresponding to the time series data for each circuit.

Returns

_DataSetValueIterator

property outcome_labels(self)

Get a list of all the outcome labels contained in this DataSet.

Returns

list of strings or tuples – A list where each element is an outcome label (which can be a string or a tuple of strings).

property timestamps(self)

Get a list of all the (unique) timestamps contained in this DataSet.

Returns

list of floats – A list where each element is a timestamp.

gate_labels(self, prefix='G')

Get a list of all the distinct operation labels used in the circuits of this dataset.

Parameters

prefix (str) – Filter the operation labels so that only those beginning with this prefix are returned. None performs no filtering.

Returns

list of strings – A list where each element is an operation label.

degrees_of_freedom(self, circuits=None, method='present_outcomes-1', aggregate_times=True)

Returns the number of independent degrees of freedom in the data for the circuits in circuits.

Parameters
  • circuits (list of Circuits) – The list of circuits to count degrees of freedom for. If None then all of the DataSet’s strings are used.

  • method ({'all_outcomes-1', 'present_outcomes-1', 'tuned'}) – How the degrees of freedom should be computed. ‘all_outcomes-1’ takes the number of circuits and multiplies this by the total number of outcomes (the length of what is returned by outcome_labels()) minus one. ‘present_outcomes-1’ counts on a per-circuit basis the number of present (usually = non-zero) outcomes recorded minus one. ‘tuned’ should be the most accurate, as it accounts for low-N “Poisson bump” behavior, but it is not the default because it is still under development. For timestamped data, see aggregate_times below.

  • aggregate_times (bool, optional) – Whether counts occurring at different times should be aggregated together (True) or tallied separately (False). If True, then even when counts occur at different times, degrees of freedom are tallied on a per-circuit basis. If False, then counts occurring at distinct times are treated as independent of those at any other time, and are tallied separately. So, for example, if aggregate_times is False and a data row has 0- and 1-counts of 45 & 55 at time=0 and 42 & 58 at time=1, this row would contribute 2 degrees of freedom, not 1. It can sometimes be useful to set this to False when the DataSet holds coarse-grained data, but usually you want this to be left as True (especially for time-series data).

Returns

int
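A toy illustration of the default ‘present_outcomes-1’ rule, using a plain dictionary in place of a DataSet: each circuit contributes (number of recorded outcomes) minus one, since a circuit’s outcome frequencies must sum to one.

```python
# Hypothetical per-circuit outcome counts (stand-in for a DataSet).
data = {
    'circuitA': {'0': 45, '1': 55},            # 2 outcomes -> 1 dof
    'circuitB': {'00': 10, '01': 5, '10': 5},  # 3 outcomes -> 2 dof
}

# 'present_outcomes-1': sum over circuits of (present outcomes - 1).
dof = sum(len(counts) - 1 for counts in data.values())
```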

_collisionaction_update_circuit(self, circuit)
_add_explicit_repetition_counts(self)

Build internal repetition counts if they don’t exist already.

This method is usually unnecessary, as repetition counts are almost always built as soon as they are needed.

Returns

None

add_count_dict(self, circuit, count_dict, record_zero_counts=True, aux=None, update_ol=True)

Add a single circuit’s counts to this DataSet

Parameters
  • circuit (tuple or Circuit) – A tuple of operation labels specifying the circuit or a Circuit object

  • count_dict (dict) – A dictionary with keys = outcome labels and values = counts

  • record_zero_counts (bool, optional) – Whether zero-counts are actually recorded (stored) in this DataSet. If False, then zero counts are ignored, except for potentially registering new outcome labels.

  • aux (dict, optional) – A dictionary of auxiliary meta information to be included with this set of data counts (associated with circuit).

  • update_ol (bool, optional) – This argument is for internal use only and should be left as True.

Returns

None

add_count_list(self, circuit, outcome_labels, counts, record_zero_counts=True, aux=None, update_ol=True, unsafe=False)

Add a single circuit’s counts to this DataSet

Parameters
  • circuit (tuple or Circuit) – A tuple of operation labels specifying the circuit or a Circuit object

  • outcome_labels (list or tuple) – The outcome labels corresponding to counts.

  • counts (list or tuple) – The counts themselves.

  • record_zero_counts (bool, optional) – Whether zero-counts are actually recorded (stored) in this DataSet. If False, then zero counts are ignored, except for potentially registering new outcome labels.

  • aux (dict, optional) – A dictionary of auxiliary meta information to be included with this set of data counts (associated with circuit).

  • update_ol (bool, optional) – This argument is for internal use only and should be left as True.

  • unsafe (bool, optional) – True means that outcome_labels is guaranteed to hold tuple-type outcome labels and never plain strings. Only set this to True if you know what you’re doing.

Returns

None

add_count_arrays(self, circuit, outcome_index_array, count_array, record_zero_counts=True, aux=None)

Add the outcomes for a single circuit, formatted as raw data arrays.

Parameters
  • circuit (Circuit) – The circuit to add data for.

  • outcome_index_array (numpy.ndarray) – An array of outcome indices, which must be values of self.olIndex (which maps outcome labels to indices).

  • count_array (numpy.ndarray) – An array of integer (or sometimes floating point) counts, one corresponding to each outcome index (element of outcome_index_array).

  • record_zero_counts (bool, optional) – Whether zero counts (zeros in count_array) should be stored explicitly or not stored and inferred. Setting to False reduces the space taken by data sets containing lots of zero counts, but makes some objective function evaluations less precise.

  • aux (dict or None, optional) – If not None a dictionary of user-defined auxiliary information that should be associated with this circuit.

Returns

None

add_cirq_trial_result(self, circuit, trial_result, key)

Add a single circuit’s counts — stored in a Cirq TrialResult — to this DataSet

Parameters
  • circuit (tuple or Circuit) – A tuple of operation labels specifying the circuit or a Circuit object. Note that this must be a PyGSTi circuit — not a Cirq circuit.

  • trial_result (cirq.TrialResult) – The TrialResult to add

  • key (str) – The string key of the measurement. Set by cirq.measure.

Returns

None

add_raw_series_data(self, circuit, outcome_label_list, time_stamp_list, rep_count_list=None, overwrite_existing=True, record_zero_counts=True, aux=None, update_ol=True, unsafe=False)

Add a single circuit’s counts to this DataSet

Parameters
  • circuit (tuple or Circuit) – A tuple of operation labels specifying the circuit or a Circuit object

  • outcome_label_list (list) – A list of outcome labels (strings or tuples). An element’s index links it to a particular time step (i.e. the i-th element of the list specifies the outcome of the i-th measurement in the series).

  • time_stamp_list (list) – A list of floating point timestamps, each associated with the single corresponding outcome in outcome_label_list. Must be the same length as outcome_label_list.

  • rep_count_list (list, optional) – A list of integer counts specifying how many outcomes of type given by outcome_label_list occurred at the time given by time_stamp_list. If None, then all counts are assumed to be 1. When not None, must be the same length as outcome_label_list.

  • overwrite_existing (bool, optional) – Whether to overwrite the data for circuit (if it exists). If False, then the given lists are appended (added) to existing data.

  • record_zero_counts (bool, optional) – Whether zero-counts (elements of rep_count_list that are zero) are actually recorded (stored) in this DataSet. If False, then zero counts are ignored, except for potentially registering new outcome labels.

  • aux (dict, optional) – A dictionary of auxiliary meta information to be included with this set of data counts (associated with circuit).

  • update_ol (bool, optional) – This argument is for internal use only and should be left as True.

  • unsafe (bool, optional) – When True, don’t bother checking that outcome_label_list contains tuple-type outcome labels and automatically upgrading strings to 1-tuples. Only set this to True if you know what you’re doing and need the marginally faster performance.

Returns

None

_add_raw_arrays(self, circuit, oli_array, time_array, rep_array, overwrite_existing, record_zero_counts, aux)
update_ol(self)

Updates the internal outcome-label list in this dataset.

Call this after calling add_count_dict(…) or add_raw_series_data(…) with update_ol=False.

Returns

None

add_series_data(self, circuit, count_dict_list, time_stamp_list, overwrite_existing=True, record_zero_counts=True, aux=None)

Add a single circuit’s counts to this DataSet

Parameters
  • circuit (tuple or Circuit) – A tuple of operation labels specifying the circuit or a Circuit object

  • count_dict_list (list) – A list of dictionaries holding the outcome-label:count pairs for each time step (times given by time_stamp_list).

  • time_stamp_list (list) – A list of floating point timestamps, each associated with an entire dictionary of outcomes specified by count_dict_list.

  • overwrite_existing (bool, optional) – If True, overwrite any existing data for the circuit. If False, add the count data with the next non-negative integer timestamp.

  • record_zero_counts (bool, optional) – Whether zero-counts (elements of the dictionaries in count_dict_list that are zero) are actually recorded (stored) in this DataSet. If False, then zero counts are ignored, except for potentially registering new outcome labels.

  • aux (dict, optional) – A dictionary of auxiliary meta information to be included with this set of data counts (associated with circuit).

Returns

None

aggregate_outcomes(self, label_merge_dict, record_zero_counts=True)

Creates a DataSet which merges certain outcomes in this DataSet.

Used, for example, to aggregate a 2-qubit 4-outcome DataSet into a 1-qubit 2-outcome DataSet.

Parameters
  • label_merge_dict (dictionary) – The dictionary whose keys define the new DataSet outcomes, and whose items are lists of input DataSet outcomes that are to be summed together. For example, if a two-qubit DataSet has outcome labels “00”, “01”, “10”, and “11”, and we want to “aggregate out” the second qubit, we could use label_merge_dict = {‘0’:[‘00’,’01’],’1’:[‘10’,’11’]}. When doing this, however, it may be better to use :function:`filter_qubits` which also updates the circuits.

  • record_zero_counts (bool, optional) – Whether zero-counts are actually recorded (stored) in the returned (merged) DataSet. If False, then zero counts are ignored, except for potentially registering new outcome labels.

Returns

merged_dataset (DataSet object) – The DataSet with outcomes merged according to the rules given in label_merge_dict.
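The merging rule can be illustrated with a minimal pure-Python sketch (the plain dictionary below stands in for one circuit’s counts; it is not a real DataSet):

```python
# Sketch of the outcome-merging rule used by aggregate_outcomes.
# `counts` stands in for one circuit's outcome counts; it is NOT a DataSet.
def merge_outcomes(counts, label_merge_dict):
    """Sum the counts of each group of old outcome labels into a new label."""
    merged = {}
    for new_label, old_labels in label_merge_dict.items():
        merged[new_label] = sum(counts.get(ol, 0) for ol in old_labels)
    return merged

counts = {'00': 10, '01': 5, '10': 3, '11': 2}
# Aggregate out the second qubit:
merged = merge_outcomes(counts, {'0': ['00', '01'], '1': ['10', '11']})
# merged == {'0': 15, '1': 5}
```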

aggregate_std_nqubit_outcomes(self, qubit_indices_to_keep, record_zero_counts=True)

Creates a DataSet which merges certain outcomes in this DataSet.

Used, for example, to aggregate a 2-qubit 4-outcome DataSet into a 1-qubit 2-outcome DataSet. This assumes that outcome labels are in the standard format whereby each qubit corresponds to a single ‘0’ or ‘1’ character.

Parameters
  • qubit_indices_to_keep (list) – A list of integers specifying which qubits should be kept, that is, not aggregated.

  • record_zero_counts (bool, optional) – Whether zero-counts are actually recorded (stored) in the returned (merged) DataSet. If False, then zero counts are ignored, except for potentially registering new outcome labels.

Returns

merged_dataset (DataSet object) – The DataSet with outcomes merged.

add_auxiliary_info(self, circuit, aux)

Add auxiliary meta information to circuit.

Parameters
  • circuit (tuple or Circuit) – A tuple of operation labels specifying the circuit or a Circuit object

  • aux (dict, optional) – A dictionary of auxiliary meta information to be included with this set of data counts (associated with circuit).

Returns

None

add_counts_from_dataset(self, other_data_set)

Append another DataSet’s data to this DataSet

Parameters

other_data_set (DataSet) – The dataset to take counts from.

Returns

None

add_series_from_dataset(self, other_data_set)

Append another DataSet’s series data to this DataSet

Parameters

other_data_set (DataSet) – The dataset to take time series data from.

Returns

None

property meantimestep(self)

The mean time-step, averaged over the time-step for each circuit and over circuits.

Returns

float

property has_constant_totalcounts_pertime(self)

True if the data for every circuit has the same number of total counts at every data collection time.

This will return True if there is a different number of total counts per circuit (i.e., after aggregating over time), as long as every circuit has the same total counts per time step (this will happen when the number of time-steps varies between circuits).

Returns

bool

property totalcounts_pertime(self)

Total counts per time, if this is constant over times and circuits.

When that doesn’t hold, an error is raised.

Returns

float or int

property has_constant_totalcounts(self)

True if the data for every circuit has the same number of total counts.

Returns

bool

property has_trivial_timedependence(self)

True if all the data in this DataSet occurs at time 0.

Returns

bool

__str__(self)

Return str(self).

to_str(self, mode='auto')

Render this DataSet as a string.

Parameters

mode ({"auto","time-dependent","time-independent"}) – Whether to display the data as time-series of outcome counts (“time-dependent”) or to report per-outcome counts aggregated over time (“time-independent”). If “auto” is specified, then the time-independent mode is used only if all time stamps in the DataSet are equal to zero (trivial time dependence).

Returns

str

truncate(self, list_of_circuits_to_keep, missing_action='raise')

Create a truncated dataset comprised of a subset of the circuits in this dataset.

Parameters
  • list_of_circuits_to_keep (list of (tuples or Circuits)) – A list of the circuits for the new returned dataset. If a circuit is given in this list that isn’t in the original data set, missing_action determines the behavior.

  • missing_action ({"raise","warn","ignore"}) – What to do when a string in list_of_circuits_to_keep is not in the data set (raise a KeyError, issue a warning, or do nothing).

Returns

DataSet – The truncated data set.

time_slice(self, start_time, end_time, aggregate_to_time=None)

Creates a DataSet by aggregating the counts within the [start_time, end_time) interval.

Parameters
  • start_time (float) – The starting time.

  • end_time (float) – The ending time.

  • aggregate_to_time (float, optional) – If not None, a single timestamp to give all the data in the specified range, resulting in a time-independent DataSet. If None, then the original timestamps are preserved.

Returns

DataSet
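The [start_time, end_time) selection and optional re-stamping can be sketched in plain Python (the series below stands in for one circuit’s time-series records; it is not a real DataSet):

```python
# Sketch of the [start_time, end_time) selection performed by time_slice,
# applied to one circuit's list of (timestamp, count_dict) records.
def slice_series(series, start_time, end_time, aggregate_to_time=None):
    kept = [(t, counts) for (t, counts) in series if start_time <= t < end_time]
    if aggregate_to_time is not None:
        # Collapse all kept records onto a single timestamp.
        total = {}
        for _, counts in kept:
            for label, n in counts.items():
                total[label] = total.get(label, 0) + n
        return [(aggregate_to_time, total)]
    return kept

series = [(0.0, {'0': 4}), (1.0, {'0': 3, '1': 1}), (2.0, {'1': 2})]
sliced = slice_series(series, 0.0, 2.0, aggregate_to_time=0.0)
# sliced == [(0.0, {'0': 7, '1': 1})]  (the t=2.0 record falls outside the interval)
```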

split_by_time(self, aggregate_to_time=None)

Creates a dictionary of DataSets, each of which is an equal-time slice of this DataSet.

The keys of the returned dictionary are the distinct timestamps in this dataset.

Parameters

aggregate_to_time (float, optional) – If not None, a single timestamp to give all the data in each returned data set, resulting in time-independent `DataSet`s. If None, then the original timestamps are preserved.

Returns

OrderedDict – A dictionary of DataSet objects whose keys are the timestamp values of the original (this) data set in sorted order.

drop_zero_counts(self)

Creates a copy of this data set that doesn’t include any zero counts.

Returns

DataSet

process_times(self, process_times_array_fn)

Manipulate this DataSet’s timestamps according to process_times_array_fn.

For example, the following process_times_array_fn would change the timestamps for each circuit to sequential integers.

```
def process_times_array_fn(times):
    return list(range(len(times)))
```

Parameters

process_times_array_fn (function) – A function which takes a single array-of-timestamps argument and returns another similarly-sized array. This function is called, once per circuit, with the circuit’s array of timestamps.

Returns

DataSet – A new data set with altered timestamps.

process_circuits(self, processor_fn, aggregate=False)

Create a new data set by manipulating this DataSet’s circuits (keys) according to processor_fn.

The new DataSet’s circuits result from running each of this DataSet’s circuits through processor_fn. This can be useful when “tracing out” qubits in a dataset containing multi-qubit data.

Parameters
  • processor_fn (function) – A function which takes a single Circuit argument and returns another (or the same) Circuit. This function may also return None, in which case the data for that string is deleted.

  • aggregate (bool, optional) – When True, aggregate the data for circuits that processor_fn assigns to the same “new” circuit. When False, use the data from the last original circuit that maps to a given “new” circuit.

Returns

DataSet
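The key-mapping semantics can be sketched in plain Python (the dictionary below stands in for this DataSet’s circuit-to-counts mapping; it is not pyGSTi code):

```python
# Sketch of process_circuits: keys for which processor_fn returns None are
# dropped; with aggregate=True, data mapping to the same new key is summed,
# otherwise the last original circuit's data wins.
def process_keys(data, processor_fn, aggregate=False):
    out = {}
    for key, counts in data.items():
        new_key = processor_fn(key)
        if new_key is None:
            continue  # delete this circuit's data
        if aggregate and new_key in out:
            for label, n in counts.items():
                out[new_key][label] = out[new_key].get(label, 0) + n
        else:
            out[new_key] = dict(counts)
    return out

data = {('Gx', 'Gy'): {'0': 5}, ('Gx', 'Gz'): {'0': 3}}
# "Trace out" the second layer so both circuits map to ('Gx',):
result = process_keys(data, lambda c: c[:1], aggregate=True)
# result == {('Gx',): {'0': 8}}
```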

process_circuits_inplace(self, processor_fn, aggregate=False)

Manipulate this DataSet’s circuits (keys) in-place according to processor_fn.

All of this DataSet’s circuits are updated by running each one through processor_fn. This can be useful when “tracing out” qubits in a dataset containing multi-qubit data.

Parameters
  • processor_fn (function) – A function which takes a single Circuit argument and returns another (or the same) Circuit. This function may also return None, in which case the data for that string is deleted.

  • aggregate (bool, optional) – When True, aggregate the data for circuits that processor_fn assigns to the same “new” circuit. When False, use the data from the last original circuit that maps to a given “new” circuit.

Returns

None

remove(self, circuits, missing_action='raise')

Remove (delete) the data for circuits from this DataSet.

Parameters
  • circuits (iterable) – An iterable over Circuit-like objects specifying the keys (circuits) to remove.

  • missing_action ({"raise","warn","ignore"}) – What to do when a string in circuits is not in this data set (raise a KeyError, issue a warning, or do nothing).

Returns

None

_remove(self, gstr_indices)

Removes the data at the indices given by gstr_indices.

copy(self)

Make a copy of this DataSet.

Returns

DataSet

copy_nonstatic(self)

Make a non-static copy of this DataSet.

Returns

DataSet

done_adding_data(self)

Promotes a non-static DataSet to a static (read-only) DataSet.

This method should be called after all data has been added.

Returns

None

__getstate__(self)
__setstate__(self, state_dict)
save(self, file_or_filename)
write_binary(self, file_or_filename)

Write this data set to a binary-format file.

Parameters

file_or_filename (string or file object) – If a string, interpreted as a filename. If this filename ends in “.gz”, the file will be gzip compressed.

Returns

None

load(self, file_or_filename)
read_binary(self, file_or_filename)

Read a DataSet from a binary file, clearing any previously contained data.

The file should have been created with :method:`DataSet.write_binary`

Parameters

file_or_filename (str or buffer) – The file or filename to load from.

Returns

None

rename_outcome_labels(self, old_to_new_dict)

Replaces existing outcome labels with new ones as per old_to_new_dict.

Parameters

old_to_new_dict (dict) – A mapping from old/existing outcome labels to new ones. Strings in keys or values are automatically converted to 1-tuples. Missing outcome labels are left unaltered.

Returns

None
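The relabeling rule, including the automatic upgrade of strings to 1-tuples, can be illustrated with a small pure-Python sketch (not pyGSTi code):

```python
# Sketch of rename_outcome_labels: strings are upgraded to 1-tuples, and
# labels missing from old_to_new_dict are left unaltered.
def rename_labels(labels, old_to_new_dict):
    def as_tuple(lbl):
        return (lbl,) if isinstance(lbl, str) else lbl
    mapping = {as_tuple(k): as_tuple(v) for k, v in old_to_new_dict.items()}
    return [mapping.get(as_tuple(lbl), as_tuple(lbl)) for lbl in labels]

renamed = rename_labels(['0', '1', 'extra'], {'0': 'up', '1': 'down'})
# renamed == [('up',), ('down',), ('extra',)]
```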

add_std_nqubit_outcome_labels(self, nqubits)

Adds all the “standard” outcome labels (e.g. ‘0010’) on nqubits qubits.

This is useful to ensure that, even if not all outcomes appear in the data, all are recognized as being potentially valid outcomes (and so attempts to get counts for these outcomes will return 0 rather than raising an error).

Parameters

nqubits (int) – The number of qubits. For example, if equal to 3 the outcome labels ‘000’, ‘001’, … ‘111’ are added.

Returns

None
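The set of “standard” labels added for a given nqubits can be generated with a one-line stdlib sketch:

```python
import itertools

# Sketch of the "standard" n-qubit outcome labels: all length-n bitstrings,
# one '0'/'1' character per qubit, in lexicographic order.
def std_nqubit_labels(nqubits):
    return [''.join(bits) for bits in itertools.product('01', repeat=nqubits)]

labels = std_nqubit_labels(3)
# labels == ['000', '001', '010', '011', '100', '101', '110', '111']
```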

add_outcome_labels(self, outcome_labels, update_ol=True)

Adds new valid outcome labels.

Ensures that all the elements of outcome_labels are stored as valid outcomes for circuits in this DataSet, adding new outcomes as necessary.

Parameters
  • outcome_labels (list or generator) – A list or generator of string- or tuple-valued outcome labels.

  • update_ol (bool, optional) – Whether to update internal mappings to reflect the new outcome labels. Leave this as True unless you really know what you’re doing.

Returns

None

auxinfo_dataframe(self, pivot_valuename=None, pivot_value=None, drop_columns=False)

Create a Pandas dataframe with aux-data from this dataset.

Parameters
  • pivot_valuename (str, optional) – If not None, the resulting dataframe is pivoted using pivot_valuename as the column whose values name the pivoted table’s column names. If None and pivot_value is not None, “ValueName” is used.

  • pivot_value (str, optional) – If not None, the resulting dataframe is pivoted such that values of the pivot_value column are rearranged into new columns whose names are given by the values of the pivot_valuename column. If None and pivot_valuename is not None, “Value” is used.

  • drop_columns (bool or list, optional) – A list of column names to drop (prior to performing any pivot). If True appears in this list or is given directly, then all constant-valued columns are dropped as well. No columns are dropped when drop_columns == False.

Returns

pandas.DataFrame

pygsti.create_bootstrap_dataset(input_data_set, generation_method, input_model=None, seed=None, outcome_labels=None, verbosity=1)

Creates a DataSet used for generating bootstrapped error bars.

Parameters
  • input_data_set (DataSet) – The data set to use for generating the “bootstrapped” data set.

  • generation_method ({ 'nonparametric', 'parametric' }) – The type of dataset to generate. ‘parametric’ generates a DataSet with the same circuits and sample counts as input_data_set but using the probabilities in input_model (which must be provided). ‘nonparametric’ generates a DataSet with the same circuits and sample counts as input_data_set using the count frequencies of input_data_set as probabilities.

  • input_model (Model, optional) – The model used to compute the probabilities for circuits when generation_method is set to ‘parametric’. If ‘nonparametric’ is selected, this argument must be set to None (the default).

  • seed (int, optional) – A seed value for numpy’s random number generator.

  • outcome_labels (list, optional) – The list of outcome labels to include in the output dataset. If None are specified, defaults to the spam labels of input_data_set.

  • verbosity (int, optional) – How verbose the function output is. If 0, then printing is suppressed. If 1 (or greater), then printing is not suppressed.

Returns

DataSet
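The ‘nonparametric’ resampling rule can be sketched for a single circuit using only the standard library (the counts dictionary is illustrative; this is not pyGSTi code):

```python
import random

# Sketch of 'nonparametric' bootstrapping: for each circuit, redraw the same
# total number of samples using the observed frequencies as probabilities.
def bootstrap_counts(counts, rng):
    labels = list(counts)
    total = sum(counts.values())
    draws = rng.choices(labels, weights=[counts[l] for l in labels], k=total)
    return {l: draws.count(l) for l in labels}

rng = random.Random(0)  # fixed seed for reproducibility
resampled = bootstrap_counts({'0': 70, '1': 30}, rng)
# The resampled counts sum to the original total of 100.
```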

pygsti.create_bootstrap_models(num_models, input_data_set, generation_method, fiducial_prep, fiducial_measure, germs, max_lengths, input_model=None, target_model=None, start_seed=0, outcome_labels=None, lsgst_lists=None, return_data=False, verbosity=2)

Creates a series of “bootstrapped” Models.

Models are created from a single DataSet (and possibly Model) and are typically used for generating bootstrapped error bars. The resulting Models are obtained by performing MLGST on data generated by repeatedly calling :function:`create_bootstrap_dataset` with consecutive integer seed values.

Parameters
  • num_models (int) – The number of models to create.

  • input_data_set (DataSet) – The data set to use for generating the “bootstrapped” data set.

  • generation_method ({ 'nonparametric', 'parametric' }) – The type of data to generate. ‘parametric’ generates DataSets with the same circuits and sample counts as input_data_set but using the probabilities in input_model (which must be provided). ‘nonparametric’ generates DataSets with the same circuits and sample counts as input_data_set using the count frequencies of input_data_set as probabilities.

  • fiducial_prep (list of Circuits) – The state preparation fiducial circuits used by MLGST.

  • fiducial_measure (list of Circuits) – The measurement fiducial circuits used by MLGST.

  • germs (list of Circuits) – The germ circuits used by MLGST.

  • max_lengths (list of ints) – List of integers, one per MLGST iteration, which set truncation lengths for repeated germ strings. The list of circuits for the i-th LSGST iteration includes the repeated germs truncated to the L-values up to and including the i-th one.

  • input_model (Model, optional) – The model used to compute the probabilities for circuits when generation_method is set to ‘parametric’. If ‘nonparametric’ is selected, this argument must be set to None (the default).

  • target_model (Model, optional) – Mandatory model to use as the target model for MLGST when generation_method is set to ‘nonparametric’. When ‘parametric’ is selected, input_model is used as the target.

  • start_seed (int, optional) – The initial seed value for numpy’s random number generator when generating data sets. For each successive dataset (and model) that is generated, the seed is incremented by one.

  • outcome_labels (list, optional) – The list of Outcome labels to include in the output dataset. If None are specified, defaults to the effect labels of input_data_set.

  • lsgst_lists (list of circuit lists, optional) – Provides an explicit list of circuit lists to be used in the analysis; to be given if the dataset uses “incomplete” or “reduced” sets of circuits. Default is None.

  • return_data (bool) – Whether generated data sets should be returned in addition to models.

  • verbosity (int) – Level of detail printed to stdout.

Returns

  • models (list) – The list of generated Model objects.

  • data (list) – The list of generated DataSet objects, only returned when return_data == True.

pygsti.gauge_optimize_models(gs_list, target_model, gate_metric='frobenius', spam_metric='frobenius', plot=True)

Optimizes the “spam weight” parameter used when gauge optimizing a set of models.

This function gauge optimizes multiple times using a range of spam weights and takes the one that minimizes the average spam error multiplied by the average gate error (with respect to a target model).

Parameters
  • gs_list (list) – The list of Model objects to gauge optimize (simultaneously).

  • target_model (Model) – The model to compare the gauge-optimized gates with, and also to gauge-optimize them to.

  • gate_metric ({ "frobenius", "fidelity", "tracedist" }, optional) – The metric used within the gauge optimization to determine the error in the gates.

  • spam_metric ({ "frobenius", "fidelity", "tracedist" }, optional) – The metric used within the gauge optimization to determine the error in the state preparation and measurement.

  • plot (bool, optional) – Whether to create a plot of the model-target discrepancy as a function of spam weight (figure displayed interactively).

Returns

list – The list of Models gauge-optimized using the best spamWeight.

pygsti._model_stdev(gs_func, gs_ensemble, ddof=1, axis=None, **kwargs)

Standard deviation of gs_func over an ensemble of models.

Parameters
  • gs_func (function) – A function that takes a Model as its first argument, and whose additional arguments may be given by keyword arguments.

  • gs_ensemble (list) – A list of Model objects.

  • ddof (int, optional) – As in numpy.std

  • axis (int or None, optional) – As in numpy.std

Returns

numpy.ndarray – The output of numpy.std

pygsti._model_mean(gs_func, gs_ensemble, axis=None, **kwargs)

Mean of gs_func over an ensemble of models.

Parameters
  • gs_func (function) – A function that takes a Model as its first argument, and whose additional arguments may be given by keyword arguments.

  • gs_ensemble (list) – A list of Model objects.

  • axis (int or None, optional) – As in numpy.mean

Returns

numpy.ndarray – The output of numpy.mean

pygsti._to_mean_model(gs_list, target_gs)

Take the per-gate-element mean of a set of models.

Return the Model constructed from the mean parameter vector of the models in gs_list, that is, the mean of the parameter vectors of each model in gs_list.

Parameters
  • gs_list (list) – A list of Model objects.

  • target_gs (Model) – A template model used to specify the parameterization of the returned Model.

Returns

Model

pygsti._to_std_model(gs_list, target_gs, ddof=1)

Take the per-gate-element standard deviation of a list of models.

Return the Model constructed from the standard-deviation parameter vector of the models in gs_list, that is, the standard deviation of the parameter vectors of each model in gs_list.

Parameters
  • gs_list (list) – A list of Model objects.

  • target_gs (Model) – A template model used to specify the parameterization of the returned Model.

  • ddof (int, optional) – As in numpy.std

Returns

Model

pygsti._to_rms_model(gs_list, target_gs)

Take the per-gate-element RMS of a set of models.

Return the Model constructed from the root-mean-squared parameter vector of the models in gs_list, that is, the RMS of the parameter vectors of each model in gs_list.

Parameters
  • gs_list (list) – A list of Model objects.

  • target_gs (Model) – A template model used to specify the parameterization of the returned Model.

Returns

Model

class pygsti._GSTAdvancedOptions(items=None)

Bases: AdvancedOptions

Advanced options for GST driver functions.

valid_keys

the valid (allowed) keys.

Type

tuple

valid_keys = ['always_perform_mle', 'bad_fit_threshold', 'circuit_weights', 'contract_start_to_cptp',...
class pygsti._QubitProcessorSpec(num_qubits, gate_names, nonstd_gate_unitaries=None, availability=None, geometry=None, qubit_labels=None, nonstd_gate_symplecticreps=None, aux_info=None)

Bases: ProcessorSpec

The device specification for a one or more qubit quantum computer.

This object is geared toward multi-qubit devices; many of the contained structures are superfluous in the case of a single qubit.

Parameters
  • num_qubits (int) – The number of qubits in the device.

  • gate_names (list of strings) –

    The names of gates in the device. This may include standard gate names known by pyGSTi (see below) or names which appear in the nonstd_gate_unitaries argument. The set of standard gate names includes, but is not limited to:

    • ’Gi’ : the 1Q idle operation

    • ’Gx’,’Gy’,’Gz’ : 1-qubit pi/2 rotations

    • ’Gxpi’,’Gypi’,’Gzpi’ : 1-qubit pi rotations

    • ’Gh’ : Hadamard

    • ’Gp’ : phase or S-gate (i.e., ((1,0),(0,i)))

    • ’Gcphase’,’Gcnot’,’Gswap’ : standard 2-qubit gates

    Alternative names can be used for all or any of these gates, but then they must be explicitly defined in the nonstd_gate_unitaries dictionary. Including any standard names in nonstd_gate_unitaries overrides the default (builtin) unitary with the one supplied.

  • nonstd_gate_unitaries (dictionary of numpy arrays) – A dictionary with keys that are gate names (strings) and values that are numpy arrays specifying quantum gates in terms of unitary matrices. This is an additional “lookup” database of unitaries - to add a gate to this QubitProcessorSpec its name still needs to appear in the gate_names list. This dictionary’s values specify additional (target) native gates that can be implemented in the device as unitaries acting on ordinary pure-state vectors, in the standard computational basis. These unitaries need not, and often should not, be unitaries acting on all of the qubits. E.g., a CNOT gate is specified by a key that is the desired name for CNOT, and a value that is the standard 4 x 4 complex matrix for CNOT. All gate names must start with ‘G’. As an advanced behavior, a unitary-matrix-returning function which takes a single argument - a tuple of label arguments - may be given instead of a single matrix to create an operation factory which allows continuously-parameterized gates. This function must also return an empty/dummy unitary when None is given as its argument.

  • availability (dict, optional) – A dictionary whose keys are some subset of the keys of nonstd_gate_unitaries and the strings in gate_names (all of which are gate names), and whose values are lists of qubit-label-tuples. Each qubit-label-tuple must have length equal to the number of qubits the corresponding gate acts upon, and causes that gate to be available to act on the specified qubits. Instead of a list of tuples, values of availability may take the special values “all-permutations” and “all-combinations”, which as their names imply, equate to all possible permutations and combinations of the appropriate number of qubit labels (determined by the gate’s dimension). If a gate name is not present in availability, the default is “all-permutations”. So, the availability of a gate only needs to be specified when it cannot act in every valid way on the qubits (e.g., the device does not have all-to-all connectivity).

  • geometry ({"line","ring","grid","torus"} or QubitGraph, optional) – The type of connectivity among the qubits, specifying a graph used to define neighbor relationships. Alternatively, a QubitGraph object with qubit_labels as the node labels may be passed directly. This argument is only used as a convenient way of specifying gate availability (edge connections are used for gates whose availability is unspecified by availability or whose value there is “all-edges”).

  • qubit_labels (list or tuple, optional) – The labels (integers or strings) of the qubits. If None, then the integers starting with zero are used.

  • nonstd_gate_symplecticreps (dict, optional) – A dictionary similar to nonstd_gate_unitaries that supplies, instead of a unitary matrix, the symplectic representation of a Clifford operation, given as a 2-tuple of numpy arrays.

  • aux_info (dict, optional) – Any additional information that should be attached to this processor spec.

_to_nice_serialization(self)
classmethod _from_nice_serialization(cls, state)
property num_qubits(self)

The number of qubits.

property primitive_op_labels(self)

All the primitive operation labels derived from the gate names and availabilities

gate_num_qubits(self, gate_name)

The number of qubits that a given gate acts upon.

Parameters

gate_name (str) – The name of the gate.

Returns

int

resolved_availability(self, gate_name, tuple_or_function='auto')

The availability of a given gate, resolved as either a tuple of sslbl-tuples or a function.

This function does more than just access the availability attribute, as this may hold special values like “all-edges”. It takes the value of self.availability[gate_name] and resolves and converts it into the desired format: either a tuple of state-space labels or a function with a single state-space-labels-tuple argument.

Parameters
  • gate_name (str) – The gate name to get the availability of.

  • tuple_or_function ({'tuple', 'function', 'auto'}) – The type of object to return. ‘tuple’ means a tuple of state space label tuples, e.g. ((0,1), (1,2)). ‘function’ means a function that takes a single state space label tuple argument and returns True or False to indicate whether the gate is available on the given target labels. If ‘auto’ is given, then either a tuple or function is returned - whichever is more computationally convenient.

Returns

tuple or function
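As an illustration of the ‘tuple’ resolution of the special “all-permutations” availability value, here is a minimal stdlib sketch (not pyGSTi’s implementation):

```python
import itertools

# Sketch: "all-permutations" availability for a gate acting on gate_nqubits
# qubits resolves to every ordered tuple of distinct qubit labels.
def resolve_all_permutations(qubit_labels, gate_nqubits):
    return tuple(itertools.permutations(qubit_labels, gate_nqubits))

avail = resolve_all_permutations((0, 1, 2), 2)
# avail == ((0, 1), (0, 2), (1, 0), (1, 2), (2, 0), (2, 1))
```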

_resolve_availability(self, avail_entry, gate_nqubits, tuple_or_function='auto')
is_available(self, gate_label)

Check whether a gate at a given location is available.

Parameters

gate_label (Label) – The gate name and target labels to check availability of.

Returns

bool

available_gatenames(self, sslbls)

List all the gate names that are available within a set of state space labels.

This function finds all the gate names that are available for at least a subset of sslbls.

Parameters

sslbls (tuple) – The state space labels to find availability within.

Returns

tuple of strings – A tuple of gate names (strings).

available_gatelabels(self, gate_name, sslbls)

List all the gate labels that are available for gate_name on at least a subset of sslbls.

Parameters
  • gate_name (str) – The gate name.

  • sslbls (tuple) – The state space labels to find availability within.

Returns

tuple of Labels – The available gate labels (all with name gate_name).

force_recompute_gate_relationships(self)

Invalidates LRU caches for all compute_* methods of this object, forcing them to recompute their values.

The compute_* methods of this processor spec compute various relationships and properties of its gates. These routines can be computationally intensive, and so their values are cached for performance. If the gates of a processor spec change and its compute_* methods are used, force_recompute_gate_relationships should be called.

compute_clifford_symplectic_reps(self, gatename_filter=None)

Constructs a dictionary of the symplectic representations for all the Clifford gates in this processor spec.

Parameters

gatename_filter (iterable, optional) – A list, tuple, or set of gate names whose symplectic representations should be returned (if they exist).

Returns

dict – keys are gate names, values are (symplectic_matrix, phase_vector) tuples.

compute_one_qubit_gate_relations(self)

Computes the basic pair-wise relationships between the gates.

1. It multiplies all possible combinations of two 1-qubit gates together, from the full set of gates available on this device. If the two gates multiply to another 1-qubit gate from this set of gates, this is recorded in the dictionary self.oneQgate_relations. If the 1-qubit gate with name name1 followed by the 1-qubit gate with name name2 multiplies (up to phase) to the gate with name name3, then self.oneQgate_relations[name1, name2] = name3.

2. If the inverse of any 1-qubit gate is contained in the model, this is recorded in the dictionary self.gate_inverse.

Returns

  • gate_relations (dict) – Keys are (gatename1, gatename2) and values are either the gate name of the product of the two gates or None, signifying the identity.

  • gate_inverses (dict) – Keys and values are gate names, mapping a gate name to its inverse gate (if one exists).

compute_multiqubit_inversion_relations(self)

Computes the inverses of multi-qubit (>1 qubit) gates.

Finds whether any of the multi-qubit gates in this device also have their inverse in the model. That is, if the unitaries for the multi-qubit gate with name name1 followed by the multi-qubit gate (of the same dimension) with name name2 multiply (up to phase) to the identity, then gate_inverse[name1] = name2 and gate_inverse[name2] = name1.

1-qubit gates are not computed by this method, as they are computed by the method :method:`compute_one_qubit_gate_relations`.

Returns

gate_inverse (dict) – Keys and values are gate names, mapping a gate name to its inverse gate (if one exists).

compute_clifford_ops_on_qubits(self)

Constructs a dictionary mapping tuples of state space labels to the clifford operations available on them.

Returns

dict – A dictionary with keys that are state space label tuples and values that are lists of gate labels, giving the available Clifford gates on those target labels.

compute_ops_on_qubits(self)

Constructs a dictionary mapping tuples of state space labels to the operations available on them.

Returns

dict – A dictionary with keys that are state space label tuples and values that are lists of gate labels, giving the available gates on those target labels.

compute_clifford_2Q_connectivity(self)

Constructs a graph encoding the connectivity between qubits via 2-qubit Clifford gates.

Returns

QubitGraph – A graph with nodes equal to the qubit labels and edges present whenever there is a 2-qubit Clifford gate between the vertex qubits.

compute_2Q_connectivity(self)

Constructs a graph encoding the connectivity between qubits via 2-qubit gates.

Returns

QubitGraph – A graph with nodes equal to the qubit labels and edges present whenever there is a 2-qubit gate between the vertex qubits.

subset(self, gate_names_to_include='all', qubit_labels_to_keep='all')

Construct a smaller processor specification by keeping only a select set of gates from this processor spec.

Parameters

gate_names_to_include (list or tuple or set) – The gate names that should be included in the returned processor spec.

Returns

QubitProcessorSpec

map_qubit_labels(self, mapper)

Creates a new QubitProcessorSpec whose qubit labels are updated according to the mapping function mapper.

Parameters

mapper (dict or function) – A dictionary whose keys are the existing self.qubit_labels values and whose value are the new labels, or a function which takes a single (existing qubit-label) argument and returns a new qubit label.

Returns

QubitProcessorSpec

property idle_gate_names(self)

The gate names that correspond to idle operations.

property global_idle_gate_name(self)

The (first) gate name that corresponds to a global idle operation.

property global_idle_layer_label(self)

Similar to global_idle_gate_name but includes the appropriate sslbls (either None or all the qubits).

class pygsti._Model(state_space)

Bases: pygsti.baseobjs.nicelyserializable.NicelySerializable

A predictive model for a Quantum Information Processor (QIP).

The main function of a Model object is to compute the outcome probabilities of Circuit objects based on the action of the model’s ideal operations plus (potentially) noise which makes the outcome probabilities deviate from the perfect ones.

Parameters

state_space (StateSpace) – The state space of this model.

_to_nice_serialization(self)
property state_space(self)

State space labels

Returns

StateSpaceLabels

property hyperparams(self)

Dictionary of hyperparameters associated with this model

Returns

dict

property num_params(self)

The number of free parameters when vectorizing this model.

Returns

int – the number of model parameters.

property num_modeltest_params(self)

The parameter count to use when testing this model against data.

Often this is the same as :method:`num_params`, but there are times when it can be convenient or necessary to use a parameter count different from the actual number of parameters in this model.

Returns

int – the number of model parameters.

property parameter_bounds(self)

Upper and lower bounds on the values of each parameter, utilized by optimization routines

set_parameter_bounds(self, index, lower_bound=- _np.inf, upper_bound=_np.inf)

Set the bounds for a single model parameter.

These limit the values the parameter can have during an optimization of the model.

Parameters
  • index (int) – The index of the parameter whose bounds should be set.

  • lower_bound (float, optional) – The lower bound for the parameter. Set to -numpy.inf (the default) to leave the parameter unbounded below.

  • upper_bound (float, optional) – The upper bound for the parameter. Set to numpy.inf (the default) to leave the parameter unbounded above.

Returns

None
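A minimal sketch of how per-parameter bounds might be stored and applied during an optimization step, assuming a NumPy array of (lower, upper) rows; this mirrors the idea, not pyGSTi's internals:

```python
import numpy as np

# Illustrative only (not pyGSTi internals): per-parameter bounds as an
# (num_params, 2) array of (lower, upper) rows, defaulting to unbounded.
num_params = 4
bounds = np.tile([-np.inf, np.inf], (num_params, 1))

# Analogue of set_parameter_bounds(index=2, lower_bound=0.0, upper_bound=1.0):
bounds[2] = (0.0, 1.0)

# An optimizer could then clip a trial parameter vector to the bounds:
v = np.array([5.0, -3.0, 2.5, 0.1])
clipped = np.clip(v, bounds[:, 0], bounds[:, 1])
print(clipped)  # [ 5.  -3.   1.   0.1]
```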

property parameter_labels(self)

A list of labels, usually of the form (op_label, string_description) describing this model’s parameters.

property parameter_labels_pretty(self)

The list of parameter labels but formatted in a nice way.

In particular, tuples where the first element is an op label are made into a single string beginning with the string representation of the operation.

set_parameter_label(self, index, label)

Set the label of a single model parameter.

Parameters
  • index (int) – The index of the parameter whose label should be set.

  • label (object) – An object that serves to label this parameter. Often a string.

Returns

None

to_vector(self)

Returns this model's parameters as a single vectorized array.

Returns

numpy array – The vectorized model parameters.

from_vector(self, v, close=False)

Sets this Model’s operations based on parameter values v.

Parameters
  • v (numpy.ndarray) – A vector of parameters, with length equal to self.num_params.

  • close (bool, optional) – Set to True if v is close to the current parameter vector. This can make some operations more efficient.

Returns

None

abstract probabilities(self, circuit, clip_to=None)

Construct a dictionary containing the outcome probabilities of circuit.

Parameters
  • circuit (Circuit or tuple of operation labels) – The sequence of operation labels specifying the circuit.

  • clip_to (2-tuple, optional) – (min,max) to clip probabilities to if not None.

Returns

probs (dictionary) – A dictionary such that probs[SL] = pr(SL,circuit,clip_to) for each spam label (string) SL.

abstract bulk_probabilities(self, circuits, clip_to=None, comm=None, mem_limit=None, smartc=None)

Construct a dictionary containing the probabilities for an entire list of circuits.

Parameters
  • circuits ((list of Circuits) or CircuitOutcomeProbabilityArrayLayout) – When a list, each element specifies a circuit to compute outcome probabilities for. A CircuitOutcomeProbabilityArrayLayout specifies the circuits along with an internal memory layout that reduces the time required by this function and can restrict the computed probabilities to those corresponding to only certain outcomes.

  • clip_to (2-tuple, optional) – (min,max) to clip return value if not None.

  • comm (mpi4py.MPI.Comm, optional) – When not None, an MPI communicator for distributing the computation across multiple processors. Distribution is performed over subtrees of evalTree (if it is split).

  • mem_limit (int, optional) – A rough memory limit in bytes which is used to determine processor allocation.

  • smartc (SmartCache, optional) – A cache object to cache & use previously cached values inside this function.

Returns

probs (dictionary) – A dictionary such that probs[opstr] is an ordered dictionary of (outcome, p) tuples, where outcome is a tuple of labels and p is the corresponding probability.

_init_copy(self, copy_into, memo)

Copies any “tricky” member of this model into copy_into, before deep copying everything else within a .copy() operation.

_post_copy(self, copy_into, memo)

Called after all other copying is done, to perform “linking” between the new model (copy_into) and its members.

copy(self)

Copy this model.

Returns

Model – a (deep) copy of this model.

__str__(self)

Return str(self).

__hash__(self)

Return hash(self).

circuit_outcomes(self, circuit)

Get all the possible outcome labels produced by simulating this circuit.

Parameters

circuit (Circuit) – Circuit to get outcomes of.

Returns

tuple

compute_num_outcomes(self, circuit)

The number of outcomes of circuit, given by its existing or implied POVM label.

Parameters

circuit (Circuit) – The circuit whose outcomes should be counted.

Returns

int

complete_circuit(self, circuit)

Adds any implied preparation or measurement layers to circuit

Parameters

circuit (Circuit) – Circuit to act on.

Returns

Circuit – Possibly the same object as circuit, if no additions are needed.

pygsti._create_explicit_model(processor_spec, modelnoise, custom_gates=None, evotype='default', simulator='auto', ideal_gate_type='auto', ideal_prep_type='auto', ideal_povm_type='auto', embed_gates=False, basis='pp')
pygsti.ROBUST_SUFFIX_LIST = ['.robust', '.Robust', '.robust+', '.Robust+']
pygsti.DEFAULT_BAD_FIT_THRESHOLD = 2.0
pygsti.run_model_test(model_filename_or_object, data_filename_or_set, processorspec_filename_or_object, prep_fiducial_list_or_filename, meas_fiducial_list_or_filename, germs_list_or_filename, max_lengths, gauge_opt_params=None, advanced_options=None, comm=None, mem_limit=None, output_pkl=None, verbosity=2)

Compares a Model’s predictions to a DataSet using GST-like circuits.

This routine tests a Model against a DataSet using a specific set of structured, GST-like circuits (given by fiducials, max_lengths and germs). In particular, circuits are constructed by repeating germ strings an integer number of times such that the length of the repeated germ is less than or equal to the maximum length set in max_lengths. Each string thus constructed is sandwiched between all pairs of (preparation, measurement) fiducial sequences.
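The circuit construction described above can be sketched as follows; the gate labels are illustrative and circuits are plain tuples rather than pyGSTi Circuit objects:

```python
# Sketch of the structured GST circuit construction: repeat each germ
# k = L // len(germ) times ("whole germ powers"), then sandwich the result
# between every (prep, meas) fiducial pair. Illustrative only.

def gst_circuits(prep_fids, meas_fids, germs, max_lengths):
    circuits = []
    for L in max_lengths:
        for germ in germs:
            reps = L // len(germ)  # largest whole number of germ repetitions <= L
            core = germ * reps     # empty when the germ is longer than L
            for pf in prep_fids:
                for mf in meas_fids:
                    circuits.append(pf + core + mf)
    return circuits

circs = gst_circuits(prep_fids=[('Gx',)], meas_fids=[('Gy',)],
                     germs=[('Gx', 'Gy')], max_lengths=[1, 2, 4])
print(circs[-1])  # ('Gx', 'Gx', 'Gy', 'Gx', 'Gy', 'Gy')
```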

model_filename_or_object is used directly (without any optimization) as the model estimate at each maximum-length “iteration”. The model is given a trivial default_gauge_group so that it is not altered during any gauge optimization step.

A ModelEstimateResults object is returned, which encapsulates the model estimate and related parameters, and can be used with report-generation routines.

Parameters
  • model_filename_or_object (Model or string) – The model to test, specified either directly or by the filename of a model file (text format).

  • data_filename_or_set (DataSet or string) – The data set object to use for the analysis, specified either directly or by the filename of a dataset file (assumed to be a pickled DataSet if extension is ‘pkl’ otherwise assumed to be in pyGSTi’s text format).

  • processorspec_filename_or_object (ProcessorSpec or string) – A specification of the processor this model test is to be run on, given either directly or by the filename of a processor-spec file (text format). The processor specification contains basic interface-level information about the processor being tested, e.g., its state space and available gates.

  • prep_fiducial_list_or_filename ((list of Circuits) or string) – The state preparation fiducial circuits, specified either directly or by the filename of a circuit list file (text format).

  • meas_fiducial_list_or_filename ((list of Circuits) or string or None) – The measurement fiducial circuits, specified either directly or by the filename of a circuit list file (text format). If None, then use the same strings as specified by prep_fiducial_list_or_filename.

  • germs_list_or_filename ((list of Circuits) or string) – The germ circuits, specified either directly or by the filename of a circuit list file (text format).

  • max_lengths (list of ints) – List of integers, one per LSGST iteration, which set truncation lengths for repeated germ strings. The list of circuits for the i-th LSGST iteration includes the repeated germs truncated to the L-values up to and including the i-th one.

  • gauge_opt_params (dict, optional) – A dictionary of arguments to gaugeopt_to_target(), specifying how the final gauge optimization should be performed. The keys and values of this dictionary may correspond to any of the arguments of gaugeopt_to_target() except for the first model argument, which is specified internally. The target_model argument can be set, but is specified internally when it isn’t. If None, then the dictionary {‘item_weights’: {‘gates’:1.0, ‘spam’:0.001}} is used. If False, then no gauge optimization is performed.

  • advanced_options (dict, optional) – Specifies advanced options, most of which deal with numerical details of the objective function or expert-level functionality.

  • comm (mpi4py.MPI.Comm, optional) – When not None, an MPI communicator for distributing the computation across multiple processors.

  • mem_limit (int or None, optional) – A rough memory limit in bytes which restricts the amount of memory used (per core when run on multi-CPUs).

  • output_pkl (str or file, optional) – If not None, a file(name) to pickle.dump the returned Results object to (only the rank 0 process performs the dump when comm is not None).

  • verbosity (int, optional) – The ‘verbosity’ option is an integer specifying the level of detail printed to stdout during the calculation.

Returns

Results

pygsti.run_linear_gst(data_filename_or_set, processorspec_filename_or_object, prep_fiducial_list_or_filename, meas_fiducial_list_or_filename, gauge_opt_params=None, advanced_options=None, comm=None, mem_limit=None, output_pkl=None, verbosity=2)

Perform Linear Gate Set Tomography (LGST).

This function differs from the lower-level :function:`run_lgst` function in that it may perform a post-LGST gauge optimization, and it returns a Results object containing the LGST estimate.

Overall, this is a high-level driver routine which can be used similarly to :function:`run_long_sequence_gst` whereas run_lgst is a low-level routine used when building your own algorithms.

Parameters
  • data_filename_or_set (DataSet or string) – The data set object to use for the analysis, specified either directly or by the filename of a dataset file (assumed to be a pickled DataSet if extension is ‘pkl’ otherwise assumed to be in pyGSTi’s text format).

  • processorspec_filename_or_object (ProcessorSpec or string) – A specification of the processor that LGST is to be run on, given either directly or by the filename of a processor-spec file (text format). The processor specification contains basic interface-level information about the processor being tested, e.g., its state space and available gates.

  • prep_fiducial_list_or_filename ((list of Circuits) or string) – The state preparation fiducial circuits, specified either directly or by the filename of a circuit list file (text format).

  • meas_fiducial_list_or_filename ((list of Circuits) or string or None) – The measurement fiducial circuits, specified either directly or by the filename of a circuit list file (text format). If None, then use the same strings as specified by prep_fiducial_list_or_filename.

  • gauge_opt_params (dict, optional) – A dictionary of arguments to gaugeopt_to_target(), specifying how the final gauge optimization should be performed. The keys and values of this dictionary may correspond to any of the arguments of gaugeopt_to_target() except for the first model argument, which is specified internally. The target_model argument can be set, but is specified internally when it isn’t. If None, then the dictionary {‘item_weights’: {‘gates’:1.0, ‘spam’:0.001}} is used. If False, then no gauge optimization is performed.

  • advanced_options (dict, optional) – Specifies advanced options, most of which deal with numerical details of the objective function or expert-level functionality. See :function:`run_long_sequence_gst`.

  • comm (mpi4py.MPI.Comm, optional) – When not None, an MPI communicator for distributing the computation across multiple processors. In this LGST case, this is just the gauge optimization.

  • mem_limit (int or None, optional) – A rough memory limit in bytes which restricts the amount of memory used (per core when run on multi-CPUs).

  • output_pkl (str or file, optional) – If not None, a file(name) to pickle.dump the returned Results object to (only the rank 0 process performs the dump when comm is not None).

  • verbosity (int, optional) – The ‘verbosity’ option is an integer specifying the level of detail printed to stdout during the calculation.

Returns

Results

pygsti.run_long_sequence_gst(data_filename_or_set, target_model_filename_or_object, prep_fiducial_list_or_filename, meas_fiducial_list_or_filename, germs_list_or_filename, max_lengths, gauge_opt_params=None, advanced_options=None, comm=None, mem_limit=None, output_pkl=None, verbosity=2)

Perform long-sequence GST (LSGST).

This analysis fits a model (target_model_filename_or_object) to data (data_filename_or_set) using the outcomes from periodic GST circuits constructed by repeating germ strings an integer number of times such that the length of the repeated germ is less than or equal to the maximum length set in max_lengths. When LGST is applicable (i.e. for explicit models with full or TP parameterizations), the LGST estimate of the gates is computed, gauge optimized, and used as a starting seed for the remaining optimizations.

LSGST iterates len(max_lengths) times, optimizing the chi2 using successively larger sets of circuits. On the i-th iteration, the repeated germ sequences limited by max_lengths[i] are included in the growing set of circuits used by LSGST. The final iteration maximizes the log-likelihood.

Once computed, the model estimates are optionally gauge optimized as directed by gauge_opt_params. A ModelEstimateResults object is returned, which encapsulates the input and outputs of this GST analysis, and can generate final end-user output such as reports and presentations.

Parameters
  • data_filename_or_set (DataSet or string) – The data set object to use for the analysis, specified either directly or by the filename of a dataset file (assumed to be a pickled DataSet if extension is ‘pkl’ otherwise assumed to be in pyGSTi’s text format).

  • target_model_filename_or_object (Model or string) – The target model, specified either directly or by the filename of a model file (text format).

  • prep_fiducial_list_or_filename ((list of Circuits) or string) – The state preparation fiducial circuits, specified either directly or by the filename of a circuit list file (text format).

  • meas_fiducial_list_or_filename ((list of Circuits) or string or None) – The measurement fiducial circuits, specified either directly or by the filename of a circuit list file (text format). If None, then use the same strings as specified by prep_fiducial_list_or_filename.

  • germs_list_or_filename ((list of Circuits) or string) – The germ circuits, specified either directly or by the filename of a circuit list file (text format).

  • max_lengths (list of ints) – List of integers, one per LSGST iteration, which set truncation lengths for repeated germ strings. The list of circuits for the i-th LSGST iteration includes the repeated germs truncated to the L-values up to and including the i-th one.

  • gauge_opt_params (dict, optional) – A dictionary of arguments to gaugeopt_to_target(), specifying how the final gauge optimization should be performed. The keys and values of this dictionary may correspond to any of the arguments of gaugeopt_to_target() except for the first model argument, which is specified internally. The target_model argument can be set, but is specified internally when it isn’t. If None, then the dictionary {‘item_weights’: {‘gates’:1.0, ‘spam’:0.001}} is used. If False, then no gauge optimization is performed.

  • advanced_options (dict, optional) –

    Specifies advanced options, most of which deal with numerical details of the objective function or expert-level functionality. The allowed keys and values include:

    • objective = {‘chi2’, ‘logl’}

    • op_labels = list of strings

    • circuit_weights = dict or None

    • starting_point = “LGST-if-possible” (default), “LGST”, or “target”

    • depolarize_start = float (default == 0)

    • randomize_start = float (default == 0)

    • contract_start_to_cptp = True / False (default)

    • cptpPenaltyFactor = float (default = 0)

    • tolerance = float or dict w/ ‘relx’, ‘relf’, ‘f’, ‘jac’, ‘maxdx’ keys

    • max_iterations = int

    • finitediff_iterations = int

    • min_prob_clip = float

    • min_prob_clip_for_weighting = float (default == 1e-4)

    • prob_clip_interval = tuple (default == (-1e6, 1e6))

    • radius = float (default == 1e-4)

    • use_freq_weighted_chi2 = True / False (default)

    • XX nested_circuit_lists = True (default) / False

    • XX include_lgst = True / False (default is True)

    • distribute_method = “default”, “circuits” or “deriv”

    • profile = int (default == 1)

    • check = True / False (default)

    • XX op_label_aliases = dict (default = None)

    • always_perform_mle = bool (default = False)

    • only_perform_mle = bool (default = False)

    • XX truncScheme = “whole germ powers” (default), “truncated germ powers”, or “length as exponent”

    • appendTo = Results (default = None)

    • estimateLabel = str (default = “default”)

    • XX missingDataAction = {‘drop’,’raise’} (default = ‘drop’)

    • XX string_manipulation_rules = list of (find,replace) tuples

    • germ_length_limits = dict of form {germ: maxlength}

    • record_output = bool (default = True)

    • timeDependent = bool (default = False)

  • comm (mpi4py.MPI.Comm, optional) – When not None, an MPI communicator for distributing the computation across multiple processors.

  • mem_limit (int or None, optional) – A rough memory limit in bytes which restricts the amount of memory used (per core when run on multi-CPUs).

  • output_pkl (str or file, optional) – If not None, a file(name) to pickle.dump the returned Results object to (only the rank 0 process performs the dump when comm is not None).

  • verbosity (int, optional) –

    The ‘verbosity’ option is an integer specifying the level of detail printed to stdout during the calculation:

    • 0 – prints nothing

    • 1 – shows progress bar for entire iterative GST

    • 2 – shows summary details about each individual iteration

    • 3 – also shows outer iterations of LM algorithm

    • 4 – also shows inner iterations of LM algorithm

    • 5 – also shows detailed info from within jacobian and objective function calls

Returns

Results

pygsti.run_long_sequence_gst_base(data_filename_or_set, target_model_filename_or_object, lsgst_lists, gauge_opt_params=None, advanced_options=None, comm=None, mem_limit=None, output_pkl=None, verbosity=2)

A more fundamental interface for performing end-to-end GST.

Similar to run_long_sequence_gst() except this function takes lsgst_lists, a list of either raw circuit lists or of PlaquetteGridCircuitStructure objects to define which circuits are used on each GST iteration.

Parameters
  • data_filename_or_set (DataSet or string) – The data set object to use for the analysis, specified either directly or by the filename of a dataset file (assumed to be a pickled DataSet if extension is ‘pkl’ otherwise assumed to be in pyGSTi’s text format).

  • target_model_filename_or_object (Model or string) – The target model, specified either directly or by the filename of a model file (text format).

  • lsgst_lists (list of lists or PlaquetteGridCircuitStructure(s)) – An explicit list of either the raw circuit lists to be used in the analysis or of PlaquetteGridCircuitStructure objects, which additionally contain the structure of a set of circuits. A single PlaquetteGridCircuitStructure object can also be given, which is equivalent to passing a list of successive L-value truncations of this object (e.g. if the object has Ls = [1,2,4] then this is like passing a list of three PlaquetteGridCircuitStructure objects w/truncations [1], [1,2], and [1,2,4]).

  • gauge_opt_params (dict, optional) – A dictionary of arguments to gaugeopt_to_target(), specifying how the final gauge optimization should be performed. The keys and values of this dictionary may correspond to any of the arguments of gaugeopt_to_target() except for the first model argument, which is specified internally. The target_model argument can be set, but is specified internally when it isn’t. If None, then the dictionary {‘item_weights’: {‘gates’:1.0, ‘spam’:0.001}} is used. If False, then no gauge optimization is performed.

  • advanced_options (dict, optional) – Specifies advanced options, most of which deal with numerical details of the objective function or expert-level functionality. See run_long_sequence_gst() for a list of the allowed keys, with the exception of “nested_circuit_lists”, “op_label_aliases”, “include_lgst”, and “truncScheme”.

  • comm (mpi4py.MPI.Comm, optional) – When not None, an MPI communicator for distributing the computation across multiple processors.

  • mem_limit (int or None, optional) – A rough memory limit in bytes which restricts the amount of memory used (per core when run on multi-CPUs).

  • output_pkl (str or file, optional) – If not None, a file(name) to pickle.dump the returned Results object to (only the rank 0 process performs the dump when comm is not None).

  • verbosity (int, optional) –

    The ‘verbosity’ option is an integer specifying the level of detail printed to stdout during the calculation:

    • 0 – prints nothing

    • 1 – shows progress bar for entire iterative GST

    • 2 – shows summary details about each individual iteration

    • 3 – also shows outer iterations of LM algorithm

    • 4 – also shows inner iterations of LM algorithm

    • 5 – also shows detailed info from within jacobian and objective function calls

Returns

Results

pygsti.run_stdpractice_gst(data_filename_or_set, processorspec_filename_or_object, prep_fiducial_list_or_filename, meas_fiducial_list_or_filename, germs_list_or_filename, max_lengths, modes='full TP,CPTP,Target', gaugeopt_suite='stdgaugeopt', gaugeopt_target=None, models_to_test=None, comm=None, mem_limit=None, advanced_options=None, output_pkl=None, verbosity=2)

Perform end-to-end GST analysis using standard practices.

This routine is an even higher-level driver than run_long_sequence_gst(). It performs pre-packaged, typically-useful runs of long-sequence GST on a dataset. This essentially boils down to running run_long_sequence_gst() one or more times using different model parameterizations, and performing commonly-useful gauge optimizations, based only on the high-level modes argument.

Parameters
  • data_filename_or_set (DataSet or string) – The data set object to use for the analysis, specified either directly or by the filename of a dataset file (assumed to be a pickled DataSet if extension is ‘pkl’ otherwise assumed to be in pyGSTi’s text format).

  • processorspec_filename_or_object (ProcessorSpec or string) – A specification of the processor that GST is to be run on, given either directly or by the filename of a processor-spec file (text format). The processor specification contains basic interface-level information about the processor being tested, e.g., its state space and available gates.

  • prep_fiducial_list_or_filename ((list of Circuits) or string) – The state preparation fiducial circuits, specified either directly or by the filename of a circuit list file (text format).

  • meas_fiducial_list_or_filename ((list of Circuits) or string or None) – The measurement fiducial circuits, specified either directly or by the filename of a circuit list file (text format). If None, then use the same strings as specified by prep_fiducial_list_or_filename.

  • germs_list_or_filename ((list of Circuits) or string) – The germ circuits, specified either directly or by the filename of a circuit list file (text format).

  • max_lengths (list of ints) – List of integers, one per LSGST iteration, which set truncation lengths for repeated germ strings. The list of circuits for the i-th LSGST iteration includes the repeated germs truncated to the L-values up to and including the i-th one.

  • modes (str, optional) –

    A comma-separated list of modes which dictate what types of analyses are performed. Currently, these correspond to different types of parameterizations/constraints to apply to the estimated model. The default value is usually fine. Allowed values are:

    • ”full” : full (completely unconstrained)

    • ”TP” : TP-constrained

    • ”CPTP” : Lindbladian CPTP-constrained

    • ”H+S” : Only Hamiltonian + Stochastic errors allowed (CPTP)

    • ”S” : Only Stochastic errors allowed (CPTP)

    • ”Target” : use the target (ideal) gates as the estimate

    • <model> : any key in the models_to_test argument

  • gaugeopt_suite (str or list or dict, optional) –

    Specifies which gauge optimizations to perform on each estimate. A string or list of strings (see below) specifies built-in sets of gauge optimizations, otherwise gaugeopt_suite should be a dictionary of gauge-optimization parameter dictionaries, as specified by the gauge_opt_params argument of run_long_sequence_gst(). The key names of gaugeopt_suite then label the gauge optimizations within the resulting Estimate objects. The built-in suites are:

    • ”single” : performs only a single “best guess” gauge optimization.

    • ”varySpam” : varies spam weight and toggles SPAM penalty (0 or 1).

    • ”varySpamWt” : varies spam weight but no SPAM penalty.

    • ”varyValidSpamWt” : varies spam weight with SPAM penalty == 1.

    • ”toggleValidSpam” : toggles SPAM penalty (0 or 1); fixed SPAM weight.

    • ”unreliable2Q” : adds branch to a spam suite that weights 2Q gates less

    • ”none” : no gauge optimizations are performed.

  • gaugeopt_target (Model, optional) – If not None, a model to be used as the “target” for gauge optimization (only). This argument is useful when you want to gauge optimize toward something other than the ideal target gates given by target_model_filename_or_object, which are used as the default when gaugeopt_target is None.

  • models_to_test (dict, optional) – A dictionary of Model objects representing (gate-set) models to test against the data. These Models are essentially hypotheses for which (if any) model generated the data. The keys of this dictionary can (and must, to actually test the models) be used within the comma-separated list given by the modes argument.

  • comm (mpi4py.MPI.Comm, optional) – When not None, an MPI communicator for distributing the computation across multiple processors.

  • mem_limit (int or None, optional) – A rough memory limit in bytes which restricts the amount of memory used (per core when run on multi-CPUs).

  • advanced_options (dict, optional) – Specifies advanced options, most of which deal with numerical details of the objective function or expert-level functionality. See run_long_sequence_gst() for a list of the allowed keys for each such dictionary.

  • output_pkl (str or file, optional) – If not None, a file(name) to pickle.dump the returned Results object to (only the rank 0 process performs the dump when comm is not None).

  • verbosity (int, optional) – The ‘verbosity’ option is an integer specifying the level of detail printed to stdout during the calculation.

Returns

Results

pygsti._load_model(model_filename_or_object)
pygsti._load_dataset(data_filename_or_set, comm, verbosity)

Loads a DataSet from the data_filename_or_set argument of functions in this module.

pygsti._update_objfn_builders(builders, advanced_options)
pygsti._get_badfit_options(advanced_options)
pygsti._output_to_pickle(obj, output_pkl, comm)
pygsti._get_gst_initial_model(target_model, advanced_options)
pygsti._get_gst_builders(advanced_options)
pygsti._get_optimizer(advanced_options, model_being_optimized)
pygsti.parallel_apply(f, l, comm)

Apply a function f to every element of a list l in parallel, using MPI.

Parameters
  • f (function) – function of an item in the list l

  • l (list) – list of items as arguments to f

  • comm (MPI Comm) – MPI communicator object for organizing parallel programs

Returns

results (list) – list of items after f has been applied
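A simplified sketch of this contract: split list items across MPI ranks, apply f locally, and gather results back in order. This is not pyGSTi's implementation, and with comm=None it degenerates to a serial map:

```python
# Simplified sketch of a parallel apply over MPI. With comm=None it falls
# back to a plain serial map, which is what the example below exercises.

def parallel_apply_sketch(f, l, comm=None):
    if comm is None or comm.Get_size() == 1:
        return [f(x) for x in l]          # serial fallback
    rank, size = comm.Get_rank(), comm.Get_size()
    local = [f(l[i]) for i in range(rank, len(l), size)]  # round-robin split
    chunks = comm.allgather(local)        # every rank receives all partial results
    results = [None] * len(l)
    for r, chunk in enumerate(chunks):
        for j, val in enumerate(chunk):
            results[r + j * size] = val   # undo the round-robin interleaving
    return results

print(parallel_apply_sketch(lambda x: x * x, [1, 2, 3, 4]))  # [1, 4, 9, 16]
```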

pygsti.mpi4py_comm()

Get a comm object

Returns

MPI.Comm – Comm object to be passed down to parallel pygsti routines

pygsti.starmap_with_kwargs(fn, num_runs, num_processors, args_list, kwargs_list)
class pygsti.NamedDict(keyname=None, keytype=None, valname=None, valtype=None, items=())

Bases: dict, pygsti.baseobjs.nicelyserializable.NicelySerializable

A dictionary that also holds category names and types.

This dict-derived class holds a category name applicable to its keys, and key and value type names indicating the types of its keys and values.

The main purpose of this class is to utilize its :method:`to_dataframe` method.

Parameters
  • keyname (str, optional) – A category name for the keys of this dict. For example, if the dict contained the keys “dog” and “cat”, this might be “animals”. This becomes a column header if this dict is converted to a data frame.

  • keytype ({"float", "int", "category", None}, optional) – The key-type, in correspondence with different pandas series types.

  • valname (str, optional) – A category name for the values of this dict. This becomes a column header if this dict is converted to a data frame.

  • valtype ({"float", "int", "category", None}, optional) – The value-type, in correspondence with different pandas series types.

  • items (list or dict, optional) – Initial items, used in serialization.

classmethod create_nested(cls, key_val_type_list, inner)

Creates a nested NamedDict.

Parameters
  • key_val_type_list (list) – A list of (key, value, type) tuples, one per nesting layer.

  • inner (various) – The value that will be set to the inner-most nested dictionary’s value, supplying any additional layers of nesting (if inner is a NamedDict) or the value contained in all of the nested layers.

__reduce__(self)

Helper for pickle.

_to_nice_serialization(self)
classmethod _from_nice_serialization(cls, state)
to_dataframe(self)

Render this dict as a pandas data frame.

Returns

pandas.DataFrame
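Conceptually, the conversion looks like the sketch below (plain dicts, no pandas; the helper and column names are illustrative): the keyname and valname category names become column headers, and each dict entry becomes one row.

```python
# Conceptual sketch of NamedDict.to_dataframe, using plain dicts instead of
# a pandas DataFrame. Names are illustrative, not pyGSTi API.

def flatten_named(d, keyname, valname):
    """Flatten a one-level 'named' dict into data-frame-style rows."""
    return [{keyname: k, valname: v} for k, v in d.items()]

fidelities = {'Gx': 0.99, 'Gy': 0.98}
rows = flatten_named(fidelities, keyname='gate', valname='fidelity')
print(rows)  # [{'gate': 'Gx', 'fidelity': 0.99}, {'gate': 'Gy', 'fidelity': 0.98}]
```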

_add_to_columns(self, columns, seriestypes, row_prefix)
class pygsti.TypedDict(types=None, items=())

Bases: dict

A dictionary that holds per-key type information.

This type of dict is used for the “leaves” in a tree of nested NamedDict objects, specifying a collection of data of different types pertaining to some set of category labels (the index-path of the named dictionaries).

When converted to a data frame, each key specifies a different column and values contribute the values of a single data frame row. Columns will be series of the held data types.

Parameters
  • types (dict, optional) – Keys are the keys that can appear in this dictionary, and values are valid data frame type strings, e.g. “int”, “float”, or “category”, that specify the type of each value.

  • items (dict or list) – Initial data, used for serialization.

__reduce__(self)

Helper for pickle.

as_dataframe(self)

Render this dict as a pandas data frame.

Returns

pandas.DataFrame

_add_to_columns(self, columns, seriestypes, row_prefix)
pygsti._basis_constructor_dict
pygsti.basis_matrices(name_or_basis, dim, sparse=False)

Get the elements of the specified basis type which span the density-matrix space given by dim.

Parameters
  • name_or_basis ({'std', 'gm', 'pp', 'qt'} or Basis) – The basis type. Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt). If a Basis object, then the basis matrices are contained therein, and its dimension is checked to match dim.

  • dim (int) – The dimension of the density-matrix space.

  • sparse (bool, optional) – Whether any built matrices should be SciPy CSR sparse matrices or dense numpy arrays (the default).

Returns

list – A list of N numpy arrays each of shape (dmDim, dmDim), where dmDim is the matrix-dimension of the overall “embedding” density matrix (the sum of dim_or_block_dims) and N is the dimension of the density-matrix space, equal to sum( block_dim_i^2 ).
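As a concrete illustration of what such a basis looks like, here is a minimal numpy sketch of the dim=4 Pauli-product ('pp') case, assuming pyGSTi's convention of normalizing each Pauli matrix by 1/sqrt(2) so the set is orthonormal under the Hilbert-Schmidt inner product (this sketch is not pyGSTi's actual construction routine):

```python
import numpy as np

# Hypothetical construction of the dim=4 Pauli-product ('pp') basis elements,
# assuming normalization by 1/sqrt(2) so the set is orthonormal under the
# Hilbert-Schmidt inner product Tr(A^dag B).
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
pp_elements = [P / np.sqrt(2) for P in (I, X, Y, Z)]

# Orthonormality check: Tr(B_i^dag B_j) == delta_ij
gram = np.array([[np.trace(Bi.conj().T @ Bj) for Bj in pp_elements]
                 for Bi in pp_elements])
assert np.allclose(gram, np.eye(4))
```

Here N = 4 elements, each of shape (2, 2), matching the description above for a single block of dimension 2.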

pygsti.basis_longname(basis)

Get the “long name” for a particular basis, which is typically used in reports, etc.

Parameters

basis (Basis or str) – The basis or standard-basis-name.

Returns

string

pygsti.basis_element_labels(basis, dim)

Get a list of short labels corresponding to the elements of the described basis.

These labels are typically used to label the rows/columns of a box-plot of a matrix in the basis.

Parameters
  • basis ({'std', 'gm', 'pp', 'qt'}) – Which basis the model is represented in. Allowed options are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp) and Qutrit (qt). If the basis is not known, then an empty list is returned.

  • dim (int or list) – Dimension of basis matrices. If a list of integers, then gives the dimensions of the terms in a direct-sum decomposition of the density matrix space acted on by the basis.

Returns

list of strings – A list of length dim, whose elements label the basis elements.

pygsti.is_sparse_basis(name_or_basis)

Whether a basis contains sparse matrices.

Parameters

name_or_basis (Basis or str) – The basis or standard-basis-name.

Returns

bool

pygsti.change_basis(mx, from_basis, to_basis)

Convert an operation matrix from one basis of a density matrix space to another.

Parameters
  • mx (numpy array) – The operation matrix (a 2D square array) in the from_basis basis.

  • from_basis ({'std', 'gm', 'pp', 'qt'} or Basis object) – The source basis. Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt) (or a custom basis object).

  • to_basis ({'std', 'gm', 'pp', 'qt'} or Basis object) – The destination basis. Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt) (or a custom basis object).

Returns

numpy array – The given operation matrix converted to the to_basis basis. Array size is the same as mx.
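The basis change above can be sketched directly with numpy. Assuming an orthonormal basis {B_i} (here normalized Paulis) and a row-major flattening convention (pyGSTi's internal conventions may differ), the transform matrix T has columns vec(B_i), and a matrix M expressed in that basis maps to the standard basis as M_std = T M_b T†:

```python
import numpy as np

# Sketch of a basis change for an orthonormal basis: T's columns are the
# flattened basis elements, and T is unitary, so conversion is reversible.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
T = np.column_stack([(P / np.sqrt(2)).ravel() for P in (I2, X, Y, Z)])

assert np.allclose(T.conj().T @ T, np.eye(4))   # T is unitary

rng = np.random.default_rng(0)
M_pp = rng.normal(size=(4, 4))            # some matrix in the 'pp'-like basis
M_std = T @ M_pp @ T.conj().T             # convert to the standard basis
M_back = T.conj().T @ M_std @ T           # and convert back
assert np.allclose(M_back, M_pp)          # round trip preserves the matrix
```

The returned array has the same size as the input, as the docstring states, since T is square and invertible.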

pygsti.create_basis_pair(mx, from_basis, to_basis)

Constructs a pair of bases for transforming mx between two basis names.

Construct a pair of Basis objects with types from_basis and to_basis, and dimension appropriate for transforming mx (if they’re not already given by from_basis or to_basis being a Basis rather than a str).

Parameters
  • mx (numpy.ndarray) – A matrix, assumed to be square and have a dimension that is a perfect square.

  • from_basis ({'std', 'gm', 'pp', 'qt'} or Basis object) – The source basis (named because it’s usually the source basis for a basis change). Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt) (or a custom basis object). If a custom basis object is provided, its dimension should be equal to sqrt(mx.shape[0]) == sqrt(mx.shape[1]).

  • to_basis ({'std', 'gm', 'pp', 'qt'} or Basis object) – The destination basis (named because it’s usually the destination basis for a basis change). Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt) (or a custom basis object). If a custom basis object is provided, its dimension should be equal to sqrt(mx.shape[0]) == sqrt(mx.shape[1]).

Returns

from_basis, to_basis (Basis)

pygsti.create_basis_for_matrix(mx, basis)

Construct a Basis object with type given by basis and dimension appropriate for transforming mx.

The dimension is taken from mx (if it’s not given by basis), i.e. it is sqrt(mx.shape[0]).

Parameters
  • mx (numpy.ndarray) – A matrix, assumed to be square and have a dimension that is a perfect square.

  • basis ({'std', 'gm', 'pp', 'qt'} or Basis object) – A basis name or Basis object. Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt) (or a custom basis object). If a custom basis object is provided, its dimension must equal sqrt(mx.shape[0]), as this will be checked.

Returns

Basis

pygsti.resize_std_mx(mx, resize, std_basis_1, std_basis_2)

Change the basis of mx to a potentially larger or smaller ‘std’-type basis given by std_basis_2.

(mx is assumed to be in the ‘std’-type basis given by std_basis_1.)

This is possible when the two ‘std’-type bases have the same “embedding dimension”, equal to the sum of their block dimensions. If, for example, std_basis_1 has block dimensions (kite structure) of (4,2,1) then mx, expressed as a sum of 4^2 + 2^2 + 1^2 = 21 basis elements, can be “embedded” within a larger ‘std’ basis having a single block with dimension 7 (7^2 = 49 elements).

When std_basis_2 is smaller than std_basis_1 the reverse happens and mx is irreversibly truncated, or “contracted” to a basis having a particular kite structure.
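The "embedding dimension" arithmetic described above can be made concrete with a short sketch (an illustration of the idea, not pyGSTi's actual routine): a kite structure (4, 2, 1) has 4² + 2² + 1² = 21 basis elements but embedding dimension 4 + 2 + 1 = 7, so its blocks fit along the diagonal of a single 7x7 'std' block:

```python
import numpy as np

# Kite structure (4, 2, 1): 21 basis elements, embedding dimension 7.
block_dims = (4, 2, 1)
n_elements = sum(d**2 for d in block_dims)   # dimension of the smaller space
embed_dim = sum(block_dims)                  # matrix dim of the embedding space
assert n_elements == 21 and embed_dim == 7

# Place block matrices along the diagonal of a 7x7 "embedded" matrix:
blocks = [np.full((d, d), fill) for d, fill in zip(block_dims, (1.0, 2.0, 3.0))]
embedded = np.zeros((embed_dim, embed_dim))
off = 0
for b in blocks:
    d = b.shape[0]
    embedded[off:off + d, off:off + d] = b
    off += d
assert embedded[0, 0] == 1.0 and embedded[4, 4] == 2.0 and embedded[6, 6] == 3.0
```

Contraction is the reverse: keeping only the diagonal blocks of the 7x7 matrix and discarding the off-block entries, which is why it is irreversible.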

Parameters
  • mx (numpy array) – A square matrix in the std_basis_1 basis.

  • resize ({'expand','contract'}) – Whether mx can be expanded or contracted.

  • std_basis_1 (Basis) – The ‘std’-type basis that mx is currently in.

  • std_basis_2 (Basis) – The ‘std’-type basis that mx should be converted to.

Returns

numpy.ndarray

pygsti.flexible_change_basis(mx, start_basis, end_basis)

Change mx from start_basis to end_basis allowing embedding expansion and contraction if needed.

(see resize_std_mx() for more details).

Parameters
  • mx (numpy array) – The operation matrix (a 2D square array) in the start_basis basis.

  • start_basis (Basis) – The source basis.

  • end_basis (Basis) – The destination basis.

Returns

numpy.ndarray

pygsti.resize_mx(mx, dim_or_block_dims=None, resize=None)

Wrapper for resize_std_mx(), that manipulates mx to be in another basis.

This function first constructs two ‘std’-type bases using dim_or_block_dims and sum(dim_or_block_dims). The matrix mx is converted from the former to the latter when resize == “expand”, and from the latter to the former when resize == “contract”.

Parameters
  • mx (numpy array) – Matrix of size N x N, where N is the dimension of the density matrix space, i.e. sum( dimOrBlockDims_i^2 )

  • dim_or_block_dims (int or list of ints) – Structure of the density-matrix space. Gives the matrix dimensions of each block.

  • resize ({'expand','contract'}) – Whether mx should be expanded or contracted.

Returns

numpy.ndarray

pygsti.state_to_stdmx(state_vec)

Convert a state vector into a density matrix.

Parameters

state_vec (list or tuple) – State vector in the standard (sigma-z) basis.

Returns

numpy.ndarray – A density matrix of shape (d,d), corresponding to the pure state given by the length-d array, state_vec.

pygsti.state_to_pauli_density_vec(state_vec)

Convert a single qubit state vector into a Liouville vector in the Pauli basis.

Parameters

state_vec (list or tuple) – State vector in the sigma-z basis, len(state_vec) == 2

Returns

numpy array – The 2x2 density matrix of the pure state given by state_vec, given as a 4x1 column vector in the Pauli basis.
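The two state conversions above can be sketched with numpy, assuming normalized-Pauli conventions (a sketch of the math, not pyGSTi's implementation): a pure state |psi> becomes the density matrix rho = |psi><psi|, and its Pauli-basis Liouville vector has components Tr(P_i rho)/sqrt(2):

```python
import numpy as np

# |0> in the sigma-z basis -> density matrix -> Pauli-basis Liouville vector.
psi = np.array([1.0, 0.0], dtype=complex)
rho = np.outer(psi, psi.conj())                  # state_to_stdmx analogue
assert np.allclose(rho, [[1, 0], [0, 0]])

paulis = [np.eye(2, dtype=complex),
          np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]], dtype=complex),
          np.array([[1, 0], [0, -1]], dtype=complex)]
pauli_vec = np.real([np.trace(P @ rho) / np.sqrt(2) for P in paulis])
assert np.allclose(pauli_vec, [1 / np.sqrt(2), 0, 0, 1 / np.sqrt(2)])
```

The non-zero I and Z components reflect that |0> is a +1 eigenstate of sigma-z.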

pygsti.vec_to_stdmx(v, basis, keep_complex=False)

Convert a vector in this basis to a matrix in the standard basis.

Parameters
  • v (numpy array) – The vector length 4 or 16 respectively.

  • basis ({'std', 'gm', 'pp', 'qt'} or Basis) – The basis type. Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt). If a Basis object, then the basis matrices are contained therein, and its dimension is checked to match v.

  • keep_complex (bool, optional) – If True, leave the final (output) array elements as complex numbers when v is complex. Usually, the final elements are real (even though v is complex) and so when keep_complex=False the elements are forced to be real and the returned array is float (not complex) valued.

Returns

numpy array – The matrix, 2x2 or 4x4 depending on nqubits

pygsti.gmvec_to_stdmx
pygsti.ppvec_to_stdmx
pygsti.qtvec_to_stdmx
pygsti.stdvec_to_stdmx
pygsti.stdmx_to_vec(m, basis)

Convert a matrix in the standard basis to a vector in the Pauli basis.

Parameters
  • m (numpy array) – The matrix, shape 2x2 (1Q) or 4x4 (2Q)

  • basis ({'std', 'gm', 'pp', 'qt'} or Basis) – The basis type. Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt). If a Basis object, then the basis matrices are contained therein, and its dimension is checked to match m.

Returns

numpy array – The vector, length 4 or 16 respectively.

pygsti.stdmx_to_ppvec
pygsti.stdmx_to_gmvec
pygsti.stdmx_to_stdvec
pygsti._deprecated_fn(replacement=None)

Decorator for deprecating a function.

Parameters

replacement (str, optional) – the name of the function that should replace it.

Returns

function

pygsti.chi2(model, dataset, circuits=None, min_prob_clip_for_weighting=0.0001, prob_clip_interval=(- 10000, 10000), op_label_aliases=None, mdc_store=None, comm=None, mem_limit=None)

Computes the total (aggregate) chi^2 for a set of circuits.

The chi^2 test statistic is obtained by summing up the contributions of a given set of circuits, or of all the circuits available in a dataset. For the gradient or Hessian, see the chi2_jacobian() and chi2_hessian() functions.

Parameters
  • model (Model) – The model used to specify the probabilities and SPAM labels

  • dataset (DataSet) – The data used to specify frequencies and counts

  • circuits (list of Circuits or tuples, optional) – List of circuits whose terms will be included in chi^2 sum. Default value (None) means “all strings in dataset”.

  • min_prob_clip_for_weighting (float, optional) – defines the clipping interval for the statistical weight.

  • prob_clip_interval (tuple, optional) – A (min, max) tuple that specifies the minimum (possibly negative) and maximum values allowed for probabilities generated by the model. If the model gives probabilities outside this range they are clipped to min or max. These values can be quite generous, as the optimizers are quite tolerant of badly behaved probabilities.

  • op_label_aliases (dictionary, optional) – Dictionary whose keys are operation label “aliases” and whose values are tuples corresponding to what that operation label should be expanded into before querying the dataset. Defaults to the empty dictionary (no aliases defined) e.g. op_label_aliases[‘Gx^3’] = (‘Gx’,’Gx’,’Gx’)

  • mdc_store (ModelDatasetCircuitsStore, optional) – An object that bundles cached quantities along with a given model, dataset, and circuit list. If given, model and dataset and circuits should be set to None.

  • comm (mpi4py.MPI.Comm, optional) – When not None, an MPI communicator for distributing the computation across multiple processors.

  • mem_limit (int, optional) – A rough memory limit in bytes which restricts the amount of intermediate values that are computed and stored.

Returns

chi2 (float) – chi^2 value, equal to the sum of chi^2 terms from all specified circuits

pygsti.chi2_per_circuit(model, dataset, circuits=None, min_prob_clip_for_weighting=0.0001, prob_clip_interval=(- 10000, 10000), op_label_aliases=None, mdc_store=None, comm=None, mem_limit=None)

Computes the per-circuit chi^2 contributions for a set of circuits.

This function returns the same value as chi2() except the contributions from different circuits are not summed but returned as an array (the contributions of all the outcomes of a given circuit are summed together).

Parameters
  • model (Model) – The model used to specify the probabilities and SPAM labels

  • dataset (DataSet) – The data used to specify frequencies and counts

  • circuits (list of Circuits or tuples, optional) – List of circuits whose terms will be included in chi^2 sum. Default value (None) means “all strings in dataset”.

  • min_prob_clip_for_weighting (float, optional) – defines the clipping interval for the statistical weight.

  • prob_clip_interval (tuple, optional) – A (min, max) tuple that specifies the minimum (possibly negative) and maximum values allowed for probabilities generated by the model. If the model gives probabilities outside this range they are clipped to min or max. These values can be quite generous, as the optimizers are quite tolerant of badly behaved probabilities.

  • op_label_aliases (dictionary, optional) – Dictionary whose keys are operation label “aliases” and whose values are tuples corresponding to what that operation label should be expanded into before querying the dataset. Defaults to the empty dictionary (no aliases defined) e.g. op_label_aliases[‘Gx^3’] = (‘Gx’,’Gx’,’Gx’)

  • mdc_store (ModelDatasetCircuitsStore, optional) – An object that bundles cached quantities along with a given model, dataset, and circuit list. If given, model and dataset and circuits should be set to None.

  • comm (mpi4py.MPI.Comm, optional) – When not None, an MPI communicator for distributing the computation across multiple processors.

  • mem_limit (int, optional) – A rough memory limit in bytes which restricts the amount of intermediate values that are computed and stored.

Returns

chi2 (numpy.ndarray) – Array of length either len(circuits) or len(dataset.keys()). Values are the chi2 contributions of the corresponding circuit aggregated over outcomes.

pygsti.chi2_jacobian(model, dataset, circuits=None, min_prob_clip_for_weighting=0.0001, prob_clip_interval=(- 10000, 10000), op_label_aliases=None, mdc_store=None, comm=None, mem_limit=None)

Compute the gradient of the chi^2 function computed by chi2().

The returned value holds the derivatives of the chi^2 function with respect to model’s parameters.

Parameters
  • model (Model) – The model used to specify the probabilities and SPAM labels

  • dataset (DataSet) – The data used to specify frequencies and counts

  • circuits (list of Circuits or tuples, optional) – List of circuits whose terms will be included in chi^2 sum. Default value (None) means “all strings in dataset”.

  • min_prob_clip_for_weighting (float, optional) – defines the clipping interval for the statistical weight.

  • prob_clip_interval (tuple, optional) – A (min, max) tuple that specifies the minimum (possibly negative) and maximum values allowed for probabilities generated by the model. If the model gives probabilities outside this range they are clipped to min or max. These values can be quite generous, as the optimizers are quite tolerant of badly behaved probabilities.

  • op_label_aliases (dictionary, optional) – Dictionary whose keys are operation label “aliases” and whose values are tuples corresponding to what that operation label should be expanded into before querying the dataset. Defaults to the empty dictionary (no aliases defined) e.g. op_label_aliases[‘Gx^3’] = (‘Gx’,’Gx’,’Gx’)

  • mdc_store (ModelDatasetCircuitsStore, optional) – An object that bundles cached quantities along with a given model, dataset, and circuit list. If given, model and dataset and circuits should be set to None.

  • comm (mpi4py.MPI.Comm, optional) – When not None, an MPI communicator for distributing the computation across multiple processors.

  • mem_limit (int, optional) – A rough memory limit in bytes which restricts the amount of intermediate values that are computed and stored.

Returns

numpy array – The gradient vector of length model.num_params, the number of model parameters.

pygsti.chi2_hessian(model, dataset, circuits=None, min_prob_clip_for_weighting=0.0001, prob_clip_interval=(- 10000, 10000), op_label_aliases=None, mdc_store=None, comm=None, mem_limit=None)

Compute the Hessian matrix of the chi2() function.

Parameters
  • model (Model) – The model used to specify the probabilities and SPAM labels

  • dataset (DataSet) – The data used to specify frequencies and counts

  • circuits (list of Circuits or tuples, optional) – List of circuits whose terms will be included in chi^2 sum. Default value (None) means “all strings in dataset”.

  • min_prob_clip_for_weighting (float, optional) – defines the clipping interval for the statistical weight.

  • prob_clip_interval (tuple, optional) – A (min, max) tuple that specifies the minimum (possibly negative) and maximum values allowed for probabilities generated by the model. If the model gives probabilities outside this range they are clipped to min or max. These values can be quite generous, as the optimizers are quite tolerant of badly behaved probabilities.

  • op_label_aliases (dictionary, optional) – Dictionary whose keys are operation label “aliases” and whose values are tuples corresponding to what that operation label should be expanded into before querying the dataset. Defaults to the empty dictionary (no aliases defined) e.g. op_label_aliases[‘Gx^3’] = (‘Gx’,’Gx’,’Gx’)

  • mdc_store (ModelDatasetCircuitsStore, optional) – An object that bundles cached quantities along with a given model, dataset, and circuit list. If given, model and dataset and circuits should be set to None.

  • comm (mpi4py.MPI.Comm, optional) – When not None, an MPI communicator for distributing the computation across multiple processors.

  • mem_limit (int, optional) – A rough memory limit in bytes which restricts the amount of intermediate values that are computed and stored.

Returns

numpy array – The Hessian matrix of shape (nModelParams, nModelParams), where nModelParams = model.num_params.

pygsti.chi2_approximate_hessian(model, dataset, circuits=None, min_prob_clip_for_weighting=0.0001, prob_clip_interval=(- 10000, 10000), op_label_aliases=None, mdc_store=None, comm=None, mem_limit=None)

Compute an approximate Hessian matrix of the chi2() function.

This approximation neglects terms proportional to the Hessian of the probabilities w.r.t. the model parameters (which can take a long time to compute). See logl_approximate_hessian for details on the analogous approximation for the log-likelihood Hessian.

Parameters
  • model (Model) – The model used to specify the probabilities and SPAM labels

  • dataset (DataSet) – The data used to specify frequencies and counts

  • circuits (list of Circuits or tuples, optional) – List of circuits whose terms will be included in chi^2 sum. Default value (None) means “all strings in dataset”.

  • min_prob_clip_for_weighting (float, optional) – defines the clipping interval for the statistical weight.

  • prob_clip_interval (tuple, optional) – A (min, max) tuple that specifies the minimum (possibly negative) and maximum values allowed for probabilities generated by the model. If the model gives probabilities outside this range they are clipped to min or max. These values can be quite generous, as the optimizers are quite tolerant of badly behaved probabilities.

  • op_label_aliases (dictionary, optional) – Dictionary whose keys are operation label “aliases” and whose values are tuples corresponding to what that operation label should be expanded into before querying the dataset. Defaults to the empty dictionary (no aliases defined) e.g. op_label_aliases[‘Gx^3’] = (‘Gx’,’Gx’,’Gx’)

  • mdc_store (ModelDatasetCircuitsStore, optional) – An object that bundles cached quantities along with a given model, dataset, and circuit list. If given, model and dataset and circuits should be set to None.

  • comm (mpi4py.MPI.Comm, optional) – When not None, an MPI communicator for distributing the computation across multiple processors.

  • mem_limit (int, optional) – A rough memory limit in bytes which restricts the amount of intermediate values that are computed and stored.

Returns

numpy array – The Hessian matrix of shape (nModelParams, nModelParams), where nModelParams = model.num_params.

pygsti.chialpha(alpha, model, dataset, circuits=None, pfratio_stitchpt=0.01, pfratio_derivpt=0.01, prob_clip_interval=(- 10000, 10000), radius=None, op_label_aliases=None, mdc_store=None, comm=None, mem_limit=None)

Compute the chi-alpha objective function.

Parameters
  • alpha (float) – The alpha parameter, which lies in the interval (0,1].

  • model (Model) – The model used to specify the probabilities and SPAM labels

  • dataset (DataSet) – The data used to specify frequencies and counts

  • circuits (list of Circuits or tuples, optional) – List of circuits whose terms will be included in chi-alpha sum. Default value (None) means “all strings in dataset”.

  • pfratio_stitchpt (float, optional) – The x-value (x = probability/frequency ratio) below which the chi-alpha function is replaced with its second-order Taylor expansion.

  • pfratio_derivpt (float, optional) – The x-value at which the Taylor expansion derivatives are evaluated.

  • prob_clip_interval (tuple, optional) – A (min, max) tuple that specifies the minimum (possibly negative) and maximum values allowed for probabilities generated by the model. If the model gives probabilities outside this range they are clipped to min or max. These values can be quite generous, as the optimizers are quite tolerant of badly behaved probabilities.

  • radius (float, optional) – If radius is not None then a “harsh” method of regularizing the zero-frequency terms (where the local function = N*p) is used. If radius is None, then fmin is used to handle the zero-frequency terms.

  • op_label_aliases (dictionary, optional) – Dictionary whose keys are operation label “aliases” and whose values are tuples corresponding to what that operation label should be expanded into before querying the dataset. Defaults to the empty dictionary (no aliases defined) e.g. op_label_aliases[‘Gx^3’] = (‘Gx’,’Gx’,’Gx’)

  • mdc_store (ModelDatasetCircuitsStore, optional) – An object that bundles cached quantities along with a given model, dataset, and circuit list. If given, model and dataset and circuits should be set to None.

  • comm (mpi4py.MPI.Comm, optional) – When not None, an MPI communicator for distributing the computation across multiple processors.

  • mem_limit (int, optional) – A rough memory limit in bytes which restricts the amount of intermediate values that are computed and stored.

Returns

float

pygsti.chialpha_per_circuit(alpha, model, dataset, circuits=None, pfratio_stitchpt=0.01, pfratio_derivpt=0.01, prob_clip_interval=(- 10000, 10000), radius=None, op_label_aliases=None, mdc_store=None, comm=None, mem_limit=None)

Compute the per-circuit chi-alpha objective function.

Parameters
  • alpha (float) – The alpha parameter, which lies in the interval (0,1].

  • model (Model) – The model used to specify the probabilities and SPAM labels

  • dataset (DataSet) – The data used to specify frequencies and counts

  • circuits (list of Circuits or tuples, optional) – List of circuits whose terms will be included in chi-alpha sum. Default value (None) means “all strings in dataset”.

  • pfratio_stitchpt (float, optional) – The x-value (x = probability/frequency ratio) below which the chi-alpha function is replaced with its second-order Taylor expansion.

  • pfratio_derivpt (float, optional) – The x-value at which the Taylor expansion derivatives are evaluated.

  • prob_clip_interval (tuple, optional) – A (min, max) tuple that specifies the minimum (possibly negative) and maximum values allowed for probabilities generated by the model. If the model gives probabilities outside this range they are clipped to min or max. These values can be quite generous, as the optimizers are quite tolerant of badly behaved probabilities.

  • radius (float, optional) – If radius is not None then a “harsh” method of regularizing the zero-frequency terms (where the local function = N*p) is used. If radius is None, then fmin is used to handle the zero-frequency terms.

  • op_label_aliases (dictionary, optional) – Dictionary whose keys are operation label “aliases” and whose values are tuples corresponding to what that operation label should be expanded into before querying the dataset. Defaults to the empty dictionary (no aliases defined) e.g. op_label_aliases[‘Gx^3’] = (‘Gx’,’Gx’,’Gx’)

  • mdc_store (ModelDatasetCircuitsStore, optional) – An object that bundles cached quantities along with a given model, dataset, and circuit list. If given, model and dataset and circuits should be set to None.

  • comm (mpi4py.MPI.Comm, optional) – When not None, an MPI communicator for distributing the computation across multiple processors.

  • mem_limit (int, optional) – A rough memory limit in bytes which restricts the amount of intermediate values that are computed and stored.

Returns

numpy.ndarray – Array of length either len(circuits) or len(dataset.keys()). Values are the chi-alpha contributions of the corresponding circuit aggregated over outcomes.

pygsti.chi2fn_2outcome(n, p, f, min_prob_clip_for_weighting=0.0001)

Computes chi^2 for a 2-outcome measurement.

The chi-squared function for a 2-outcome measurement using a clipped probability for the statistical weighting.

Parameters
  • n (float or numpy array) – Number of samples.

  • p (float or numpy array) – Probability of 1st outcome (typically computed).

  • f (float or numpy array) – Frequency of 1st outcome (typically observed).

  • min_prob_clip_for_weighting (float, optional) – Defines clipping interval (see return value).

Returns

float or numpy array – n(p-f)^2 / (cp(1-cp)), where cp is the value of p clipped to the interval (min_prob_clip_for_weighting, 1-min_prob_clip_for_weighting)
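The return-value formula above translates directly into a short numpy sketch (an illustration of the formula, not pyGSTi's implementation):

```python
import numpy as np

# chi2 = n*(p - f)^2 / (cp*(1 - cp)), with p clipped away from 0 and 1 so the
# statistical weight stays finite.
def chi2_2outcome_sketch(n, p, f, min_prob_clip_for_weighting=1e-4):
    cp = np.clip(p, min_prob_clip_for_weighting, 1 - min_prob_clip_for_weighting)
    return n * (p - f)**2 / (cp * (1 - cp))

# 100 samples, model probability 0.5, observed frequency 0.4:
val = chi2_2outcome_sketch(100, 0.5, 0.4)
assert np.isclose(val, 100 * 0.01 / 0.25)  # = 4.0

# Clipping keeps the weight finite even when p -> 0:
assert np.isfinite(chi2_2outcome_sketch(100, 0.0, 0.1))
```

The same clipping idea appears in the multi-outcome chi2fn() below, where the denominator is cp alone rather than cp(1-cp).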

pygsti.chi2fn_2outcome_wfreqs(n, p, f)

Computes chi^2 for a 2-outcome measurement using frequency-weighting.

The chi-squared function for a 2-outcome measurement using the observed frequency in the statistical weight.

Parameters
  • n (float or numpy array) – Number of samples.

  • p (float or numpy array) – Probability of 1st outcome (typically computed).

  • f (float or numpy array) – Frequency of 1st outcome (typically observed).

Returns

float or numpy array – n(p-f)^2 / (f*(1-f*)), where f* = (f*n + 1)/(n + 2) is the frequency value used in the statistical weighting (this prevents divide-by-zero errors)

pygsti.chi2fn(n, p, f, min_prob_clip_for_weighting=0.0001)

Computes the chi^2 term corresponding to a single outcome.

The chi-squared term for a single outcome of a multi-outcome measurement using a clipped probability for the statistical weighting.

Parameters
  • n (float or numpy array) – Number of samples.

  • p (float or numpy array) – Probability of 1st outcome (typically computed).

  • f (float or numpy array) – Frequency of 1st outcome (typically observed).

  • min_prob_clip_for_weighting (float, optional) – Defines clipping interval (see return value).

Returns

float or numpy array – n(p-f)^2 / cp , where cp is the value of p clipped to the interval (min_prob_clip_for_weighting, 1-min_prob_clip_for_weighting)

pygsti.chi2fn_wfreqs(n, p, f, min_freq_clip_for_weighting=0.0001)

Computes the frequency-weighed chi^2 term corresponding to a single outcome.

The chi-squared term for a single outcome of a multi-outcome measurement using the observed frequency in the statistical weight.

Parameters
  • n (float or numpy array) – Number of samples.

  • p (float or numpy array) – Probability of 1st outcome (typically computed).

  • f (float or numpy array) – Frequency of 1st outcome (typically observed).

  • min_freq_clip_for_weighting (float, optional) – The minimum frequency used in the weighting, i.e. the largest weighting factor is 1 / min_freq_clip_for_weighting.

Returns

float or numpy array

pygsti.bonferroni_correction(significance, numtests)

Calculates the standard Bonferroni correction.

This is used to reduce the “local” significance of each of multiple statistical hypothesis tests, so as to guarantee that a “global” significance (i.e., a family-wise error rate) of significance is maintained.

Parameters
  • significance (float) – Significance of each individual test.

  • numtests (int) – The number of hypothesis tests performed.

Returns

The Bonferroni-corrected local significance, given by significance / numtests.

pygsti.sidak_correction(significance, numtests)

Sidak correction.

For numtests independent tests, the standard Šidák correction conventionally gives a local significance of 1 - (1 - significance)^(1/numtests).

Parameters
  • significance (float) – Significance of each individual test.

  • numtests (int) – The number of hypothesis tests performed.

Returns

float
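Both corrections above can be sketched in plain Python (standard multiple-testing formulas; pyGSTi's implementations may differ in detail). Each maps a desired family-wise significance to a smaller per-test ("local") significance:

```python
# Bonferroni: local = global / m. Sidak: local = 1 - (1 - global)^(1/m),
# exact for independent tests and slightly less conservative.
def bonferroni_sketch(significance, numtests):
    return significance / numtests

def sidak_sketch(significance, numtests):
    return 1.0 - (1.0 - significance) ** (1.0 / numtests)

# For 10 tests at global significance 0.05:
b = bonferroni_sketch(0.05, 10)   # 0.005
s = sidak_sketch(0.05, 10)        # slightly larger than 0.005
assert b == 0.005
assert b < s < 0.05
```

The generalized_bonferroni_correction() below extends this idea by distributing the global significance across tests according to a weight vector instead of uniformly.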

pygsti.generalized_bonferroni_correction(significance, weights, numtests=None, nested_method='bonferroni', tol=1e-10)

Generalized Bonferroni correction.

Parameters
  • significance (float) – Significance of each individual test.

  • weights (array-like) – An array of non-negative floating-point weights, one per individual test, that sum to 1.0.

  • numtests (int) – The number of hypothesis tests performed.

  • nested_method ({'bonferroni', 'sidak'}) – Which method is used to find the significance of the composite test.

  • tol (float, optional) – Tolerance when checking that the weights add to 1.0.

Returns

float

class pygsti._Basis(name, longname, real, sparse)

Bases: pygsti.baseobjs.nicelyserializable.NicelySerializable

An ordered set of labeled matrices/vectors.

The base class for basis objects. A basis in pyGSTi is an abstract notion of a set of labeled elements, or “vectors”. Each basis has a certain size, and has .elements, .labels, and .ellookup members, the latter being a dictionary mapping of labels to elements.

An important point to note, which isn’t immediately intuitive, is that while a Basis object holds elements (in its .elements property), these are not the same as its vectors (given by the object’s vector_elements property). Often, in what we term a “simple” basis, you just flatten an element to get the corresponding vector element. This works for bases whose elements are either vectors (where flattening does nothing) or matrices. By storing elements as distinct from vector_elements, the Basis can capture additional structure of the elements (such as viewing them as matrices) that can be helpful for their display and interpretation. The elements are also sometimes referred to as the “natural elements” because they represent how to display the element in a natural way. A non-simple basis occurs when vector_elements need to be stored as elements in a larger “embedded” way so that these elements can be displayed and interpreted naturally.

A second important note is that there is assumed to be some underlying “standard” basis underneath all the bases in pyGSTi. The elements in a Basis are always written in this standard basis. In the case of the “std”-named basis in pyGSTi, these elements are just the trivial vector or matrix units, so one can rightly view the “std” pyGSTi basis as the “standard” basis for that particular dimension.

The arguments below describe the basic properties of all basis objects in pyGSTi. It is important to remember that the vector_elements of a basis are different from its elements (see the Basis docstring), and that dim refers to the vector elements whereas elshape refers to the elements.

For example, consider a 2-element Basis containing the I and X Pauli matrices. The size of this basis is 2, as there are two elements (and two vector elements). Since vector elements are the length-4 flattened Pauli matrices, the dimension (dim) is 4. Since the elements are 2x2 Pauli matrices, the elshape is (2,2).

As another example consider a basis which spans all the diagonal 2x2 matrices. The elements of this basis are the two matrix units with a 1 in the (0,0) or (1,1) location. The vector elements, however, are the length-2 [1,0] and [0,1] vectors obtained by extracting just the diagonal entries from each basis element. Thus, for this basis, size=2, dim=2, and elshape=(2,2) - so the dimension is not just the product of elshape entries (equivalently, elsize).
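The diagonal-matrix example above can be spelled out with numpy (an illustrative sketch, not pyGSTi's Basis internals): the elements are the 2x2 matrix units E00 and E11, while the vector elements keep only the diagonal, so size=2, dim=2, and elshape=(2,2):

```python
import numpy as np

# Basis spanning the diagonal 2x2 matrices: elements vs. vector elements.
elements = [np.array([[1.0, 0.0], [0.0, 0.0]]),    # E00
            np.array([[0.0, 0.0], [0.0, 1.0]])]    # E11
vector_elements = [np.diag(E) for E in elements]   # [1, 0] and [0, 1]

size = len(elements)               # number of elements -> 2
elshape = elements[0].shape        # shape of each element -> (2, 2)
dim = vector_elements[0].shape[0]  # length of each vector element -> 2

assert size == 2 and elshape == (2, 2) and dim == 2
assert dim != int(np.prod(elshape))  # dim is NOT elsize for this basis
```

This is exactly the non-simple case described above: flattening an element gives a length-4 vector, not the length-2 vector element, so the basis must store the two representations separately.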

Parameters
  • name (string) – The name of the basis. This can be anything, but is usually short and abbreviated. There are several types of bases built into pyGSTi that can be constructed by this name.

  • longname (string) – A more descriptive name for the basis.

  • real (bool) – Elements and vector elements are always allowed to have complex entries. This argument indicates whether the coefficients in the expression of an arbitrary vector in this basis must be real. For example, if real=True, then when pyGSTi transforms a vector in some other basis to a vector in this basis, it will demand that the values of that vector (i.e. the coefficients which multiply this basis’s elements to obtain a vector in the “standard” basis) are real.

  • sparse (bool) – Whether the elements of .elements for this Basis are stored (when they are stored at all) as sparse matrices or vectors.

dim

The dimension of the vector space this basis fully or partially spans. Equivalently, the length of the vector_elements of the basis.

Type

int

size

The number of elements (or vector-elements) in the basis.

Type

int

elshape

The shape of each element. Typically either a length-1 or length-2 tuple, corresponding to vector or matrix elements, respectively. Note that vector elements always have shape (dim,) (or (dim,1) in the sparse case).

Type

tuple

elndim

The number of element dimensions, i.e. len(self.elshape)

Type

int

elsize

The total element size, i.e. product(self.elshape)

Type

int

vector_elements

The “vectors” of this basis, always 1D (sparse or dense) arrays.

Type

list

classmethod cast(cls, name_or_basis_or_matrices, dim=None, sparse=None, classical_name='cl')

Convert various things that can describe a basis into a Basis object.

Parameters
  • name_or_basis_or_matrices (various) –

    Can take on a variety of values to produce different types of bases:

    • None: an empty ExplicitBasis

    • Basis: checked with dim and sparse and passed through.

    • str: BuiltinBasis or DirectSumBasis with the given name.

    • list: an ExplicitBasis if given matrices/vectors, or a DirectSumBasis if given (name, dim) pairs.

  • dim (int or StateSpace, optional) – The dimension of the basis to create. Sometimes this can be inferred based on name_or_basis_or_matrices, other times it must be supplied. This is the dimension of the space that this basis fully or partially spans. This is equal to the number of basis elements in a “full” (ordinary) basis. When a StateSpace object is given, a more detailed direct-sum-of-tensor-product-blocks structure for the state space (rather than a single dimension) is described, and a basis is produced for this space. For instance, a DirectSumBasis basis of TensorProdBasis components can result when there are multiple tensor-product blocks and these blocks consist of multiple factors.

  • sparse (bool, optional) – Whether the resulting basis should be “sparse”, meaning that its elements will be sparse rather than dense matrices.

  • classical_name (str, optional) – An alternate builtin basis name that should be used when constructing the bases for the classical sectors of dim, when dim is a StateSpace object.

Returns

Basis

property dim(self)

The dimension of the vector space this basis fully or partially spans. Equivalently, the length of the vector_elements of the basis.

property size(self)

The number of elements (or vector-elements) in the basis.

property elshape(self)

The shape of each element. Typically either a length-1 or length-2 tuple, corresponding to vector or matrix elements, respectively. Note that vector elements always have shape (dim,) (or (dim,1) in the sparse case).

property elndim(self)

The number of element dimensions, i.e. len(self.elshape)

Returns

int

property elsize(self)

The total element size, i.e. product(self.elshape)

Returns

int

is_simple(self)

Whether the flattened-element vector space is the same space as the space this basis’s vectors belong to.

Returns

bool

is_complete(self)

Whether this is a complete basis, i.e. this basis’s vectors span the entire space that they live in.

Returns

bool

is_partial(self)

The negative of :method:`is_complete`, effectively “is_incomplete”.

Returns

bool

property vector_elements(self)

The “vectors” of this basis, always 1D (sparse or dense) arrays.

Returns

list – A list of 1D arrays.

copy(self)

Make a copy of this Basis object.

Returns

Basis

with_sparsity(self, desired_sparsity)

Returns either this basis or a copy of it with the desired sparsity.

If this basis has the desired sparsity it is simply returned. If not, this basis is copied to one that does.

Parameters

desired_sparsity (bool) – The sparsity (True for sparse elements, False for dense elements) that is desired.

Returns

Basis

abstract _copy_with_toggled_sparsity(self)
__str__(self)

Return str(self).

__getitem__(self, index)
__len__(self)
__eq__(self, other)

Return self==value.

create_transform_matrix(self, to_basis)

Get the matrix that transforms a vector from this basis to to_basis.

Parameters

to_basis (Basis or string) – The basis to transform to or a built-in basis name. In the latter case, a basis to transform to is built with the same structure as this basis but with all components constructed from the given name.

Returns

numpy.ndarray (even if basis is sparse)
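The idea behind this transform can be sketched in plain numpy under the assumption that both bases are complete: stack each basis’s flattened elements as the columns of a “to-std” matrix, then combine one matrix with the pseudoinverse of the other. This is only an illustrative model of the computation, not pyGSTi’s implementation:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I = np.eye(2, dtype=complex)

# Normalized Pauli ("pp"-style) basis and the matrix-unit ("std") basis.
pp = [M / np.sqrt(2) for M in (I, X, Y, Z)]
std = [np.outer(np.eye(2)[i], np.eye(2)[j]) for i in range(2) for j in range(2)]

# Columns of each "to-std" matrix are the flattened (vector) elements.
to_std_pp = np.column_stack([M.flatten() for M in pp])    # shape (dim, size)
to_std_std = np.column_stack([M.flatten() for M in std])

# Transform pp-basis coefficient vectors into std-basis coefficient vectors:
# go pp -> standard element space -> std.
T = np.linalg.pinv(to_std_std) @ to_std_pp

v_pp = np.array([np.sqrt(2), 0, 0, 0])   # coefficients representing the matrix I
v_std = T @ v_pp                         # coefficients of the flattened identity
print(np.round(v_std.real, 6))
```

Here `v_pp` expresses the identity matrix in the normalized Pauli basis, and the transformed vector comes out as the flattened identity, [1, 0, 0, 1], in the matrix-unit basis.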

reverse_transform_matrix(self, from_basis)

Get the matrix that transforms a vector from from_basis to this basis.

The reverse of :method:`create_transform_matrix`.

Parameters

from_basis (Basis or string) – The basis to transform from or a built-in basis name. In the latter case, a basis to transform from is built with the same structure as this basis but with all components constructed from the given name.

Returns

numpy.ndarray (even if basis is sparse)

is_normalized(self)

Check if a basis is normalized, meaning that Tr(Bi Bi) = 1.0.

Available only to bases whose elements are matrices for now.

Returns

bool

property to_std_transform_matrix(self)

Retrieve the matrix that transforms a vector from this basis to the standard basis of this basis’s dimension.

Returns

numpy array or scipy.sparse.lil_matrix – An array of shape (dim, size) where dim is the dimension of this basis (the length of its vectors) and size is the size of this basis (its number of vectors).

property from_std_transform_matrix(self)

Retrieve the matrix that transforms vectors from the standard basis to this basis.

Returns

numpy array or scipy sparse matrix – An array of shape (size, dim) where dim is the dimension of this basis (the length of its vectors) and size is the size of this basis (its number of vectors).

property to_elementstd_transform_matrix(self)

Get transformation matrix from this basis to the “element space”.

Get the matrix that transforms vectors in this basis (with length equal to the dim of this basis) to vectors in the “element space” - that is, vectors in the same standard basis that the elements of this basis are expressed in.

Returns

numpy array – An array of shape (element_dim, size) where element_dim is the dimension, i.e. size, of the elements of this basis (e.g. 16 if the elements are 4x4 matrices) and size is the size of this basis (its number of vectors).

property from_elementstd_transform_matrix(self)

Get transformation matrix from “element space” to this basis.

Get the matrix that transforms vectors in the “element space” - that is, vectors in the same standard basis that the elements of this basis are expressed in - to vectors in this basis (with length equal to the dim of this basis).

Returns

numpy array – An array of shape (size, element_dim) where element_dim is the dimension, i.e. size, of the elements of this basis (e.g. 16 if the elements are 4x4 matrices) and size is the size of this basis (its number of vectors).

create_equivalent(self, builtin_basis_name)

Create an equivalent basis with components of type builtin_basis_name.

Create a Basis that is equivalent in structure & dimension to this basis but whose simple components (perhaps just this basis itself) is of the builtin basis type given by builtin_basis_name.

Parameters

builtin_basis_name (str) – The name of a builtin basis, e.g. “pp”, “gm”, or “std”. Used to construct the simple components of the returned basis.

Returns

Basis

create_simple_equivalent(self, builtin_basis_name=None)

Create a basis of type builtin_basis_name whose elements are compatible with this basis.

Create a simple basis that also has no components (for contrast, a TensorProdBasis is a simple basis that does have components) of the builtin type specified, whose dimension is compatible with the elements of this basis. This function might also be named “element_equivalent”, as it returns the builtin_basis_name-analogue of the standard basis that this basis’s elements are expressed in.

Parameters

builtin_basis_name (str, optional) – The name of the built-in basis to use. If None, then a copy of this basis is returned (if it’s simple) or this basis’s name is used to try to construct a simple and component-free version of the same builtin-basis type.

Returns

Basis

is_compatible_with_state_space(self, state_space)

Checks whether this basis is compatible with a given state space.

Parameters

state_space (StateSpace) – the state space to check.

Returns

bool

pygsti.jamiolkowski_iso(operation_mx, op_mx_basis='pp', choi_mx_basis='pp')

Given an operation matrix, return the corresponding Choi matrix, normalized to have trace == 1.

Parameters
  • operation_mx (numpy array) – the operation matrix to compute the Choi matrix of.

  • op_mx_basis (Basis object) – The basis of operation_mx. Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt) (or a custom basis object).

  • choi_mx_basis (Basis object) – The basis for the returned Choi matrix. Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt) (or a custom basis object).

Returns

numpy array – the Choi matrix, normalized to have trace == 1, in the desired basis.

pygsti.jamiolkowski_iso_inv(choi_mx, choi_mx_basis='pp', op_mx_basis='pp')

Given a Choi matrix, return the corresponding operation matrix.

This function performs the inverse of :function:`jamiolkowski_iso`.

Parameters
  • choi_mx (numpy array) – the Choi matrix, normalized to have trace == 1, to compute operation matrix for.

  • choi_mx_basis (Basis object) – The basis of choi_mx. Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt) (or a custom basis object).

  • op_mx_basis (Basis object) – The basis for the returned operation matrix. Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt) (or a custom basis object).

Returns

numpy array – operation matrix in the desired basis.

pygsti.fast_jamiolkowski_iso_std(operation_mx, op_mx_basis)

Given an operation matrix, return the corresponding Choi matrix in the standard basis, normalized to have trace == 1.

This routine only computes the case where the Choi matrix is in the standard (matrix-unit) basis, but does so more quickly than jamiolkowski_iso() and so is particularly useful when only the eigenvalues of the Choi matrix are needed.

Parameters
  • operation_mx (numpy array) – the operation matrix to compute Choi matrix of.

  • op_mx_basis (Basis object) – The basis of operation_mx. Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt) (or a custom basis object).

Returns

numpy array – the Choi matrix, normalized to have trace == 1, in the std basis.
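The quantity these isomorphism routines compute can be sketched directly from the definition J(Λ) = (1/d) Σᵢⱼ Eᵢⱼ ⊗ Λ(Eᵢⱼ). The helper below is a slow, illustrative re-derivation (assuming column-stacking vectorization), not pyGSTi’s faster implementation, and its conventions may differ from pyGSTi’s in element ordering:

```python
import numpy as np

def choi_std(op_mx):
    """Toy trace-normalized Choi matrix (std basis) of a superoperator
    acting on column-vectorized d x d matrices. Illustrative only."""
    dim = op_mx.shape[0]
    d = int(np.sqrt(dim))
    J = np.zeros((dim, dim), dtype=complex)
    for i in range(d):
        for j in range(d):
            E = np.zeros((d, d))
            E[i, j] = 1.0                                   # matrix unit E_ij
            lam_E = (op_mx @ E.flatten('F')).reshape((d, d), order='F')
            J += np.kron(E, lam_E)
    return J / d                                            # trace == 1

J = choi_std(np.eye(4))            # identity channel on one qubit
evals = np.linalg.eigvalsh(J)      # expect a single eigenvalue 1, rest 0
print(np.isclose(np.trace(J).real, 1.0))
```

For the identity channel the result is the (normalized) projector onto the maximally entangled state, so the Choi matrix has trace 1 and a single unit eigenvalue, confirming complete positivity.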

pygsti.fast_jamiolkowski_iso_std_inv(choi_mx, op_mx_basis)

Given a Choi matrix in the standard basis, return the corresponding operation matrix.

This function performs the inverse of :function:`fast_jamiolkowski_iso_std`.

Parameters
  • choi_mx (numpy array) – the Choi matrix in the standard (matrix units) basis, normalized to have trace == 1, to compute operation matrix for.

  • op_mx_basis (Basis object) – The basis for the returned operation matrix. Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt) (or a custom basis object).

Returns

numpy array – operation matrix in the desired basis.

pygsti.sum_of_negative_choi_eigenvalues(model, weights=None)

Compute the amount of non-CP-ness of a model.

This is defined (somewhat arbitrarily) by summing the negative eigenvalues of the Choi matrix for each gate in model.

Parameters
  • model (Model) – The model to act on.

  • weights (dict) – A dictionary of weights used to multiply the negative eigenvalues of different gates. Keys are operation labels, values are floating point numbers.

Returns

float – the sum of negative eigenvalues of the Choi matrix for each gate.

pygsti.sums_of_negative_choi_eigenvalues(model)

Compute the amount of non-CP-ness of a model.

This is defined (somewhat arbitrarily) by summing the negative eigenvalues of the Choi matrix for each gate in model separately. This function is different from :function:`sum_of_negative_choi_eigenvalues` in that it returns sums separately for each operation of model.

Parameters

model (Model) – The model to act on.

Returns

list of floats – each element == sum of the negative eigenvalues of the Choi matrix for the corresponding gate (as ordered by model.operations.iteritems()).

pygsti.magnitudes_of_negative_choi_eigenvalues(model)

Compute the magnitudes of the negative eigenvalues of the Choi matrices for each gate in model.

Parameters

model (Model) – The model to act on.

Returns

list of floats – list of the magnitudes of all negative Choi eigenvalues. The length of this list will vary based on how many negative eigenvalues are found, as positive eigenvalues contribute nothing to this list.
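The relationship between these three non-CP-ness measures is easy to see on hypothetical eigenvalue data. The sketch below uses a made-up Choi spectrum for a single slightly non-CP gate; it only illustrates the aggregation, not how pyGSTi obtains the eigenvalues:

```python
import numpy as np

# Hypothetical Choi-matrix eigenvalues of one slightly non-CP gate.
choi_evals = np.array([0.70, 0.25, 0.08, -0.03])

neg = choi_evals[choi_evals < 0]
per_gate_sum = neg.sum()        # what sums_of_negative_choi_eigenvalues collects per gate
magnitudes = list(-neg)         # what magnitudes_of_negative_choi_eigenvalues collects

print(per_gate_sum, magnitudes)  # -0.03 [0.03]
```

Summing `per_gate_sum` over all gates (optionally weighted) gives the scalar returned by sum_of_negative_choi_eigenvalues.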

pygsti.warn_deprecated(name, replacement=None)

Formats and prints a deprecation warning message.

Parameters
  • name (str) – The name of the function that is now deprecated.

  • replacement (str, optional) – the name of the function that should replace it.

Returns

None

pygsti.deprecate(replacement=None)

Decorator for deprecating a function.

Parameters

replacement (str, optional) – the name of the function that should replace it.

Returns

function
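The general pattern behind such a decorator can be sketched with the standard-library warnings module. This is a hypothetical re-implementation illustrating the idiom, not pyGSTi’s actual code:

```python
import functools
import warnings

def deprecate(replacement=None):
    """Sketch of a deprecation decorator in the spirit of pygsti.deprecate
    (hypothetical re-implementation, for illustration only)."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            msg = f"{fn.__name__} is deprecated"
            if replacement is not None:
                msg += f"; use {replacement} instead"
            warnings.warn(msg, DeprecationWarning, stacklevel=2)
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@deprecate(replacement="new_norm")     # 'new_norm' is a made-up name
def old_norm(x):
    return sum(v * v for v in x) ** 0.5

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = old_norm([3, 4])

print(result, caught[0].category.__name__)  # 5.0 DeprecationWarning
```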

pygsti.deprecate_imports(module_name, replacement_map, warning_msg)

Utility to deprecate imports from a module.

This works by swapping the underlying module in the import mechanisms with a ModuleType object that overrides attribute lookup to check against the replacement map.

Note that this will slow down module attribute lookup substantially. If you need to deprecate multiple names, DO NOT call this method more than once on a given module! Instead, use the replacement map to batch multiple deprecations into one call. When using this method, plan to remove the deprecated paths altogether sooner rather than later.

Parameters
  • module_name (str) – The fully-qualified name of the module whose names have been deprecated.

  • replacement_map ({name: function}) – A map of each deprecated name to a factory which will be called with no arguments when importing the name.

  • warning_msg (str) – A message to be displayed as a warning when importing a deprecated name. Optionally, this may include the format string name, which will be formatted with the deprecated name.

Returns

None

pygsti.TOL = 1e-20
pygsti.logl(model, dataset, circuits=None, min_prob_clip=1e-06, prob_clip_interval=(- 1000000.0, 1000000.0), radius=0.0001, poisson_picture=True, op_label_aliases=None, wildcard=None, mdc_store=None, comm=None, mem_limit=None)

The log-likelihood function.

Parameters
  • model (Model) – Model of parameterized gates

  • dataset (DataSet) – Probability data

  • circuits (list of (tuples or Circuits), optional) – Each element specifies a circuit to include in the log-likelihood sum. Default value of None implies all the circuits in dataset should be used.

  • min_prob_clip (float, optional) – The minimum probability treated normally in the evaluation of the log-likelihood. A penalty function replaces the true log-likelihood for probabilities that lie below this threshold so that the log-likelihood never becomes undefined (which improves optimizer performance).

  • prob_clip_interval (2-tuple or None, optional) – (min,max) values used to clip the probabilities predicted by models during MLEGST’s search for an optimal model (if not None). if None, no clipping is performed.

  • radius (float, optional) – Specifies the severity of rounding used to “patch” the zero-frequency terms of the log-likelihood.

  • poisson_picture (boolean, optional) – Whether the log-likelihood-in-the-Poisson-picture terms should be included in the returned logl value.

  • op_label_aliases (dictionary, optional) – Dictionary whose keys are operation label “aliases” and whose values are tuples corresponding to what that operation label should be expanded into before querying the dataset. Defaults to the empty dictionary (no aliases defined) e.g. op_label_aliases[‘Gx^3’] = (‘Gx’,’Gx’,’Gx’)

  • wildcard (WildcardBudget) – A wildcard budget to apply to this log-likelihood computation. This increases the returned log-likelihood value by adjusting (by a maximal amount measured in TVD, given by the budget) the probabilities produced by model to optimally match the data (within the budgetary constraints) when evaluating the log-likelihood.

  • mdc_store (ModelDatasetCircuitsStore, optional) – An object that bundles cached quantities along with a given model, dataset, and circuit list. If given, model and dataset and circuits should be set to None.

  • comm (mpi4py.MPI.Comm, optional) – When not None, an MPI communicator for distributing the computation across multiple processors.

  • mem_limit (int, optional) – A rough memory limit in bytes which restricts the amount of intermediate values that are computed and stored.

Returns

float – The log likelihood
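The role of min_prob_clip can be illustrated with a toy version of the plain (non-Poisson-picture) log-likelihood sum Σ n·log(p). The sketch below uses simple clipping in place of pyGSTi’s actual penalty function, and the counts/probabilities are made up:

```python
import numpy as np

# Two toy circuits, 100 shots each: observed counts and a model's predicted
# outcome probabilities. The second circuit is badly fit: the model assigns
# nearly zero probability to an outcome that was observed 10 times.
counts = np.array([48.0, 52.0, 90.0, 10.0])
probs = np.array([0.5, 0.5, 1.0 - 1e-9, 1e-9])

# Clip probabilities from below so the log never diverges. (pyGSTi uses a
# smooth penalty function below min_prob_clip, not a hard clip.)
min_prob_clip = 1e-6
logl = np.sum(counts * np.log(np.clip(probs, min_prob_clip, None)))
print(round(logl, 2))
```

Without the clip the third term would be finite but the last would approach -inf as the predicted probability goes to zero; clipping keeps the objective well defined for the optimizer.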

pygsti.logl_per_circuit(model, dataset, circuits=None, min_prob_clip=1e-06, prob_clip_interval=(- 1000000.0, 1000000.0), radius=0.0001, poisson_picture=True, op_label_aliases=None, wildcard=None, mdc_store=None, comm=None, mem_limit=None)

Computes the per-circuit log-likelihood contribution for a set of circuits.

Parameters
  • model (Model) – Model of parameterized gates

  • dataset (DataSet) – Probability data

  • circuits (list of (tuples or Circuits), optional) – Each element specifies a circuit to include in the log-likelihood sum. Default value of None implies all the circuits in dataset should be used.

  • min_prob_clip (float, optional) – The minimum probability treated normally in the evaluation of the log-likelihood. A penalty function replaces the true log-likelihood for probabilities that lie below this threshold so that the log-likelihood never becomes undefined (which improves optimizer performance).

  • prob_clip_interval (2-tuple or None, optional) – (min,max) values used to clip the probabilities predicted by models during MLEGST’s search for an optimal model (if not None). if None, no clipping is performed.

  • radius (float, optional) – Specifies the severity of rounding used to “patch” the zero-frequency terms of the log-likelihood.

  • poisson_picture (boolean, optional) – Whether the log-likelihood-in-the-Poisson-picture terms should be included in the returned logl value.

  • op_label_aliases (dictionary, optional) – Dictionary whose keys are operation label “aliases” and whose values are tuples corresponding to what that operation label should be expanded into before querying the dataset. Defaults to the empty dictionary (no aliases defined) e.g. op_label_aliases[‘Gx^3’] = (‘Gx’,’Gx’,’Gx’)

  • wildcard (WildcardBudget) – A wildcard budget to apply to this log-likelihood computation. This increases the returned log-likelihood value by adjusting (by a maximal amount measured in TVD, given by the budget) the probabilities produced by model to optimally match the data (within the budgetary constraints) when evaluating the log-likelihood.

  • mdc_store (ModelDatasetCircuitsStore, optional) – An object that bundles cached quantities along with a given model, dataset, and circuit list. If given, model and dataset and circuits should be set to None.

  • comm (mpi4py.MPI.Comm, optional) – When not None, an MPI communicator for distributing the computation across multiple processors.

  • mem_limit (int, optional) – A rough memory limit in bytes which restricts the amount of intermediate values that are computed and stored.

Returns

numpy.ndarray – Array of length either len(circuits) or len(dataset.keys()). Values are the log-likelihood contributions of the corresponding circuit aggregated over outcomes.

pygsti.logl_jacobian(model, dataset, circuits=None, min_prob_clip=1e-06, prob_clip_interval=(- 1000000.0, 1000000.0), radius=0.0001, poisson_picture=True, op_label_aliases=None, mdc_store=None, comm=None, mem_limit=None, verbosity=0)

The jacobian of the log-likelihood function.

Parameters
  • model (Model) – Model of parameterized gates (including SPAM)

  • dataset (DataSet) – Probability data

  • circuits (list of (tuples or Circuits), optional) – Each element specifies a circuit to include in the log-likelihood sum. Default value of None implies all the circuits in dataset should be used.

  • min_prob_clip (float, optional) – The minimum probability treated normally in the evaluation of the log-likelihood. A penalty function replaces the true log-likelihood for probabilities that lie below this threshold so that the log-likelihood never becomes undefined (which improves optimizer performance).

  • prob_clip_interval (2-tuple or None, optional) – (min,max) values used to clip the probabilities predicted by models during MLEGST’s search for an optimal model (if not None). if None, no clipping is performed.

  • radius (float, optional) – Specifies the severity of rounding used to “patch” the zero-frequency terms of the log-likelihood.

  • poisson_picture (boolean, optional) – Whether the Poisson-picture log-likelihood should be differentiated.

  • op_label_aliases (dictionary, optional) – Dictionary whose keys are operation label “aliases” and whose values are tuples corresponding to what that operation label should be expanded into before querying the dataset. Defaults to the empty dictionary (no aliases defined) e.g. op_label_aliases[‘Gx^3’] = (‘Gx’,’Gx’,’Gx’)

  • mdc_store (ModelDatasetCircuitsStore, optional) – An object that bundles cached quantities along with a given model, dataset, and circuit list. If given, model and dataset and circuits should be set to None.

  • comm (mpi4py.MPI.Comm, optional) – When not None, an MPI communicator for distributing the computation across multiple processors.

  • mem_limit (int, optional) – A rough memory limit in bytes which restricts the amount of intermediate values that are computed and stored.

  • verbosity (int, optional) – How much detail to print to stdout.

Returns

numpy array – array of shape (M,), where M is the length of the vectorized model.

pygsti.logl_hessian(model, dataset, circuits=None, min_prob_clip=1e-06, prob_clip_interval=(- 1000000.0, 1000000.0), radius=0.0001, poisson_picture=True, op_label_aliases=None, mdc_store=None, comm=None, mem_limit=None, verbosity=0)

The hessian of the log-likelihood function.

Parameters
  • model (Model) – Model of parameterized gates (including SPAM)

  • dataset (DataSet) – Probability data

  • circuits (list of (tuples or Circuits), optional) – Each element specifies a circuit to include in the log-likelihood sum. Default value of None implies all the circuits in dataset should be used.

  • min_prob_clip (float, optional) – The minimum probability treated normally in the evaluation of the log-likelihood. A penalty function replaces the true log-likelihood for probabilities that lie below this threshold so that the log-likelihood never becomes undefined (which improves optimizer performance).

  • prob_clip_interval (2-tuple or None, optional) – (min,max) values used to clip the probabilities predicted by models during MLEGST’s search for an optimal model (if not None). if None, no clipping is performed.

  • radius (float, optional) – Specifies the severity of rounding used to “patch” the zero-frequency terms of the log-likelihood.

  • poisson_picture (boolean, optional) – Whether the Poisson-picture log-likelihood should be differentiated.

  • op_label_aliases (dictionary, optional) – Dictionary whose keys are operation label “aliases” and whose values are tuples corresponding to what that operation label should be expanded into before querying the dataset. Defaults to the empty dictionary (no aliases defined) e.g. op_label_aliases[‘Gx^3’] = (‘Gx’,’Gx’,’Gx’)

  • mdc_store (ModelDatasetCircuitsStore, optional) – An object that bundles cached quantities along with a given model, dataset, and circuit list. If given, model and dataset and circuits should be set to None.

  • comm (mpi4py.MPI.Comm, optional) – When not None, an MPI communicator for distributing the computation across multiple processors.

  • mem_limit (int, optional) – A rough memory limit in bytes which restricts the amount of intermediate values that are computed and stored.

  • verbosity (int, optional) – How much detail to print to stdout.

Returns

numpy array – array of shape (M,M), where M is the length of the vectorized model.

pygsti.logl_approximate_hessian(model, dataset, circuits=None, min_prob_clip=1e-06, prob_clip_interval=(- 1000000.0, 1000000.0), radius=0.0001, poisson_picture=True, op_label_aliases=None, mdc_store=None, comm=None, mem_limit=None, verbosity=0)

An approximate Hessian of the log-likelihood function.

An approximation to the true Hessian is computed using just the Jacobian (and not the Hessian) of the probabilities w.r.t. the model parameters. Let J = d(probs)/d(params) and denote the Hessian of the log-likelihood w.r.t. the probabilities as d2(logl)/dprobs2 (a diagonal matrix indexed by the term, i.e. probability, of the log-likelihood). Then this function computes:

H = J * d2(logl)/dprobs2 * J.T

This simply neglects the d2(probs)/d(params)2 terms of the true Hessian. Since this curvature is expected to be small at the MLE point, the approximation can be useful for computing approximate error bars.
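The formula H = J · d2(logl)/dprobs2 · J.T is straightforward to sketch in numpy. The Jacobian values below are made up, and the diagonal second derivative uses the plain log-likelihood Σ n·log(p), for which d2(logl)/dp² = -n/p²:

```python
import numpy as np

counts = np.array([50.0, 50.0])       # toy outcome counts
probs = np.array([0.5, 0.5])          # model-predicted probabilities

# Jacobian of probabilities w.r.t. two model parameters, laid out so the
# document's formula H = J @ diag(d2) @ J.T yields an (M, M) matrix.
# (Hypothetical values.)
J = np.array([[0.2, -0.2],
              [-0.1, 0.1]])

d2 = -counts / probs**2               # d2(sum n log p)/dp^2, diagonal entries
H = J @ np.diag(d2) @ J.T             # approximate (M, M) Hessian

print(H.shape, np.allclose(H, H.T))
```

Because d2 is negative wherever the data are nonzero, this approximate Hessian is negative semidefinite, as expected near a log-likelihood maximum.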

Parameters
  • model (Model) – Model of parameterized gates (including SPAM)

  • dataset (DataSet) – Probability data

  • circuits (list of (tuples or Circuits), optional) – Each element specifies a circuit to include in the log-likelihood sum. Default value of None implies all the circuits in dataset should be used.

  • min_prob_clip (float, optional) – The minimum probability treated normally in the evaluation of the log-likelihood. A penalty function replaces the true log-likelihood for probabilities that lie below this threshold so that the log-likelihood never becomes undefined (which improves optimizer performance).

  • prob_clip_interval (2-tuple or None, optional) – (min,max) values used to clip the probabilities predicted by models during MLEGST’s search for an optimal model (if not None). if None, no clipping is performed.

  • radius (float, optional) – Specifies the severity of rounding used to “patch” the zero-frequency terms of the log-likelihood.

  • poisson_picture (boolean, optional) – Whether the Poisson-picture log-likelihood should be differentiated.

  • op_label_aliases (dictionary, optional) – Dictionary whose keys are operation label “aliases” and whose values are tuples corresponding to what that operation label should be expanded into before querying the dataset. Defaults to the empty dictionary (no aliases defined) e.g. op_label_aliases[‘Gx^3’] = (‘Gx’,’Gx’,’Gx’)

  • mdc_store (ModelDatasetCircuitsStore, optional) – An object that bundles cached quantities along with a given model, dataset, and circuit list. If given, model and dataset and circuits should be set to None.

  • comm (mpi4py.MPI.Comm, optional) – When not None, an MPI communicator for distributing the computation across multiple processors.

  • mem_limit (int, optional) – A rough memory limit in bytes which restricts the amount of intermediate values that are computed and stored.

  • verbosity (int, optional) – How much detail to print to stdout.

Returns

numpy array – array of shape (M,M), where M is the length of the vectorized model.

pygsti.logl_max(model, dataset, circuits=None, poisson_picture=True, op_label_aliases=None, mdc_store=None)

The maximum log-likelihood possible for a DataSet.

That is, the log-likelihood obtained by a maximal model that perfectly fits the probability of each circuit.

Parameters
  • model (Model) – the model, used only for circuit compilation

  • dataset (DataSet) – the data set to use.

  • circuits (list of (tuples or Circuits), optional) – Each element specifies a circuit to include in the max-log-likelihood sum. Default value of None implies all the circuits in dataset should be used.

  • poisson_picture (boolean, optional) – Whether the Poisson-picture maximum log-likelihood should be returned.

  • op_label_aliases (dictionary, optional) – Dictionary whose keys are operation label “aliases” and whose values are tuples corresponding to what that operation label should be expanded into before querying the dataset. Defaults to the empty dictionary (no aliases defined) e.g. op_label_aliases[‘Gx^3’] = (‘Gx’,’Gx’,’Gx’)

  • mdc_store (ModelDatasetCircuitsStore, optional) – An object that bundles cached quantities along with a given model, dataset, and circuit list. If given, model and dataset and circuits should be set to None.

Returns

float
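The “maximal model” idea can be made concrete with a toy sum: the model predicting each outcome’s observed frequency exactly, p = n/N, maximizes Σ n·log(p). The sketch below uses the plain (non-Poisson-picture) form with made-up counts:

```python
import numpy as np

counts = np.array([48.0, 52.0])        # toy observed counts for one circuit
N = counts.sum()
freqs = counts / N                     # the maximal model's probabilities

logl_max = np.sum(counts * np.log(freqs))
logl_model = np.sum(counts * np.log([0.5, 0.5]))   # some other model's probs

print(logl_max >= logl_model)          # True: no model can beat the frequencies
```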

pygsti.logl_max_per_circuit(model, dataset, circuits=None, poisson_picture=True, op_label_aliases=None, mdc_store=None)

The vector of maximum log-likelihood contributions for each circuit, aggregated over outcomes.

Parameters
  • model (Model) – the model, used only for circuit compilation

  • dataset (DataSet) – the data set to use.

  • circuits (list of (tuples or Circuits), optional) – Each element specifies a circuit to include in the max-log-likelihood sum. Default value of None implies all the circuits in dataset should be used.

  • poisson_picture (boolean, optional) – Whether the Poisson-picture maximum log-likelihood should be returned.

  • op_label_aliases (dictionary, optional) – Dictionary whose keys are operation label “aliases” and whose values are tuples corresponding to what that operation label should be expanded into before querying the dataset. Defaults to the empty dictionary (no aliases defined) e.g. op_label_aliases[‘Gx^3’] = (‘Gx’,’Gx’,’Gx’)

  • mdc_store (ModelDatasetCircuitsStore, optional) – An object that bundles cached quantities along with a given model, dataset, and circuit list. If given, model and dataset and circuits should be set to None.

Returns

numpy.ndarray – Array of length either len(circuits) or len(dataset.keys()). Values are the maximum log-likelihood contributions of the corresponding circuit aggregated over outcomes.

pygsti.two_delta_logl_nsigma(model, dataset, circuits=None, min_prob_clip=1e-06, prob_clip_interval=(- 1000000.0, 1000000.0), radius=0.0001, poisson_picture=True, op_label_aliases=None, dof_calc_method='modeltest', wildcard=None)

See docstring for :function:`pygsti.tools.two_delta_logl`

Parameters
  • model (Model) – Model of parameterized gates

  • dataset (DataSet) – Probability data

  • circuits (list of (tuples or Circuits), optional) – Each element specifies a circuit to include in the log-likelihood sum. Default value of None implies all the circuits in dataset should be used.

  • min_prob_clip (float, optional) – The minimum probability treated normally in the evaluation of the log-likelihood. A penalty function replaces the true log-likelihood for probabilities that lie below this threshold so that the log-likelihood never becomes undefined (which improves optimizer performance).

  • prob_clip_interval (2-tuple or None, optional) – (min,max) values used to clip the probabilities predicted by models during MLEGST’s search for an optimal model (if not None). if None, no clipping is performed.

  • radius (float, optional) – Specifies the severity of rounding used to “patch” the zero-frequency terms of the log-likelihood.

  • poisson_picture (boolean, optional) – Whether the log-likelihood-in-the-Poisson-picture terms should be included in the returned logl value.

  • op_label_aliases (dictionary, optional) – Dictionary whose keys are operation label “aliases” and whose values are tuples corresponding to what that operation label should be expanded into before querying the dataset. Defaults to the empty dictionary (no aliases defined) e.g. op_label_aliases[‘Gx^3’] = (‘Gx’,’Gx’,’Gx’)

  • dof_calc_method ({"all", "modeltest"}) – How model’s number of degrees of freedom (parameters) are obtained when computing the number of standard deviations and p-value relative to a chi2_k distribution, where k is additional degrees of freedom possessed by the maximal model. “all” uses model.num_params whereas “modeltest” uses model.num_modeltest_params (the number of non-gauge parameters by default).

  • wildcard (WildcardBudget) – A wildcard budget to apply to this log-likelihood computation. This increases the returned log-likelihood value by adjusting (by a maximal amount measured in TVD, given by the budget) the probabilities produced by model to optimally match the data (within the budgetary constraints) when evaluating the log-likelihood.

Returns

float

pygsti.two_delta_logl(model, dataset, circuits=None, min_prob_clip=1e-06, prob_clip_interval=(- 1000000.0, 1000000.0), radius=0.0001, poisson_picture=True, op_label_aliases=None, dof_calc_method=None, wildcard=None, mdc_store=None, comm=None)

Twice the difference between the maximum and actual log-likelihood.

Optionally also can return the Nsigma (# std deviations from mean) and p-value relative to expected chi^2 distribution (when dof_calc_method is not None).

This function’s arguments are supersets of those of :func:`logl` and :func:`logl_max`. This is a convenience function, equivalent to 2*(logl_max(…) - logl(…)), whose value is what is often called the log-likelihood-ratio between the “maximal model” (that which trivially fits the data exactly) and the model given by model.

Parameters
  • model (Model) – Model of parameterized gates

  • dataset (DataSet) – Probability data

  • circuits (list of (tuples or Circuits), optional) – Each element specifies a circuit to include in the log-likelihood sum. Default value of None implies all the circuits in dataset should be used.

  • min_prob_clip (float, optional) – The minimum probability treated normally in the evaluation of the log-likelihood. A penalty function replaces the true log-likelihood for probabilities that lie below this threshold so that the log-likelihood never becomes undefined (which improves optimizer performance).

  • prob_clip_interval (2-tuple or None, optional) – (min,max) values used to clip the probabilities predicted by models during MLEGST’s search for an optimal model (if not None). If None, no clipping is performed.

  • radius (float, optional) – Specifies the severity of rounding used to “patch” the zero-frequency terms of the log-likelihood.

  • poisson_picture (boolean, optional) – Whether the log-likelihood-in-the-Poisson-picture terms should be included in the computed log-likelihood values.

  • op_label_aliases (dictionary, optional) – Dictionary whose keys are operation label “aliases” and whose values are tuples corresponding to what that operation label should be expanded into before querying the dataset. Defaults to the empty dictionary (no aliases defined) e.g. op_label_aliases[‘Gx^3’] = (‘Gx’,’Gx’,’Gx’)

  • dof_calc_method ({None, "all", "modeltest"}) – How model’s number of degrees of freedom (parameters) are obtained when computing the number of standard deviations and p-value relative to a chi2_k distribution, where k is additional degrees of freedom possessed by the maximal model. If None, then Nsigma and pvalue are not returned (see below).

  • wildcard (WildcardBudget) – A wildcard budget to apply to this log-likelihood computation. This increases the returned log-likelihood value by adjusting (by a maximal amount measured in TVD, given by the budget) the probabilities produced by model to optimally match the data (within the budgetary constraints) when evaluating the log-likelihood.

  • mdc_store (ModelDatasetCircuitsStore, optional) – An object that bundles cached quantities along with a given model, dataset, and circuit list. If given, model and dataset and circuits should be set to None.

  • comm (mpi4py.MPI.Comm, optional) – When not None, an MPI communicator for distributing the computation across multiple processors.

Returns

  • twoDeltaLogL (float) – 2*(loglikelihood(maximal_model,data) - loglikelihood(model,data))

  • Nsigma, pvalue (float) – Only returned when dof_calc_method is not None.

pygsti.two_delta_logl_per_circuit(model, dataset, circuits=None, min_prob_clip=1e-06, prob_clip_interval=(- 1000000.0, 1000000.0), radius=0.0001, poisson_picture=True, op_label_aliases=None, dof_calc_method=None, wildcard=None, mdc_store=None, comm=None)

Twice the per-circuit difference between the maximum and actual log-likelihood.

Contributions are aggregated over each circuit’s outcomes, but no further.

Optionally (when dof_calc_method is not None) returns parallel vectors containing the Nsigma (# std deviations from mean) and the p-value relative to expected chi^2 distribution for each sequence.

Parameters
  • model (Model) – Model of parameterized gates

  • dataset (DataSet) – Probability data

  • circuits (list of (tuples or Circuits), optional) – Each element specifies a circuit to include in the log-likelihood sum. Default value of None implies all the circuits in dataset should be used.

  • min_prob_clip (float, optional) – The minimum probability treated normally in the evaluation of the log-likelihood. A penalty function replaces the true log-likelihood for probabilities that lie below this threshold so that the log-likelihood never becomes undefined (which improves optimizer performance).

  • prob_clip_interval (2-tuple or None, optional) – (min,max) values used to clip the probabilities predicted by models during MLEGST’s search for an optimal model (if not None). If None, no clipping is performed.

  • radius (float, optional) – Specifies the severity of rounding used to “patch” the zero-frequency terms of the log-likelihood.

  • poisson_picture (boolean, optional) – Whether the log-likelihood-in-the-Poisson-picture terms should be included in the returned logl value.

  • op_label_aliases (dictionary, optional) – Dictionary whose keys are operation label “aliases” and whose values are tuples corresponding to what that operation label should be expanded into before querying the dataset. Defaults to the empty dictionary (no aliases defined) e.g. op_label_aliases[‘Gx^3’] = (‘Gx’,’Gx’,’Gx’)

  • dof_calc_method ({"all", "modeltest"}) – How model’s number of degrees of freedom (parameters) are obtained when computing the number of standard deviations and p-value relative to a chi2_k distribution, where k is additional degrees of freedom possessed by the maximal model.

  • wildcard (WildcardBudget) – A wildcard budget to apply to this log-likelihood computation. This increases the returned log-likelihood value by adjusting (by a maximal amount measured in TVD, given by the budget) the probabilities produced by model to optimally match the data (within the budgetary constraints) when evaluating the log-likelihood.

  • mdc_store (ModelDatasetCircuitsStore, optional) – An object that bundles cached quantities along with a given model, dataset, and circuit list. If given, model and dataset and circuits should be set to None.

  • comm (mpi4py.MPI.Comm, optional) – When not None, an MPI communicator for distributing the computation across multiple processors.

Returns

  • twoDeltaLogL_terms (numpy.ndarray)

  • Nsigma, pvalue (numpy.ndarray) – Only returned when dof_calc_method is not None.

pygsti.two_delta_logl_term(n, p, f, min_prob_clip=1e-06, poisson_picture=True)

Term of the 2*[log(L)-upper-bound - log(L)] sum corresponding to a single circuit and spam label.

Parameters
  • n (float or numpy array) – Number of samples.

  • p (float or numpy array) – Probability of 1st outcome (typically computed).

  • f (float or numpy array) – Frequency of 1st outcome (typically observed).

  • min_prob_clip (float, optional) – Minimum probability clip point to avoid evaluating log(number <= zero)

  • poisson_picture (boolean, optional) – Whether the log-likelihood-in-the-Poisson-picture terms should be included in the returned logl value.

Returns

float or numpy array
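As a concrete illustration, in the Poisson picture this term reduces to 2*n*(p - f + f*log(f/p)) per outcome. A minimal NumPy sketch follows; it uses a hard clip in place of pyGSTi's smooth penalty function for small probabilities, and the function name is hypothetical:

```python
import numpy as np

def two_delta_logl_term_sketch(n, p, f, min_prob_clip=1e-6, poisson_picture=True):
    """Illustrative per-(circuit, outcome) 2*(logl_max - logl) term.
    n: sample count, p: model probability, f: observed frequency."""
    p = np.clip(np.asarray(p, dtype=float), min_prob_clip, None)  # crude patch for log(0)
    f = np.asarray(f, dtype=float)
    # f*log(f/p) with the 0*log(0) -> 0 convention for zero-frequency terms
    flogfp = np.where(f > 0, f * np.log(np.where(f > 0, f, 1.0) / p), 0.0)
    if poisson_picture:
        return 2.0 * n * (flogfp - f + p)   # 2N(p - f + f*log(f/p))
    return 2.0 * n * flogfp
```

Note that the term vanishes when the model probability exactly matches the observed frequency and is positive otherwise.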

pygsti.hamiltonian_to_lindbladian(hamiltonian, sparse=False)

Construct the Lindbladian corresponding to a given Hamiltonian.

Mathematically, for a d-dimensional Hamiltonian matrix H, this routine constructs the d^2-dimensional Lindbladian matrix L whose action is given by L(rho) = -1j*sqrt(d)/2*[ H, rho ], where square brackets denote the commutator and rho is a density matrix. L is returned as a superoperator matrix that acts on vectorized density matrices.

Parameters
  • hamiltonian (ndarray) – The hamiltonian matrix used to construct the Lindbladian.

  • sparse (bool, optional) – Whether to construct a sparse or dense (the default) matrix.

Returns

ndarray or Scipy CSR matrix

pygsti.stochastic_lindbladian(q, sparse=False)

Construct the Lindbladian corresponding to stochastic q-errors.

Mathematically, for a d-dimensional matrix q, this routine constructs the d^2-dimensional Lindbladian matrix L whose action is given by L(rho) = q*rho*q^dag where rho is a density matrix. L is returned as a superoperator matrix that acts on vectorized density matrices.

Parameters
  • q (ndarray) – The matrix used to construct the Lindbladian.

  • sparse (bool, optional) – Whether to construct a sparse or dense (the default) matrix.

Returns

ndarray or Scipy CSR matrix

pygsti.affine_lindbladian(q, sparse=False)

Construct the Lindbladian corresponding to affine q-errors.

Mathematically, for a d-dimensional matrix q, this routine constructs the d^2-dimensional Lindbladian matrix L whose action is given by L(rho) = q where rho is a density matrix. L is returned as a superoperator matrix that acts on vectorized density matrices.

Parameters
  • q (ndarray) – The matrix used to construct the Lindbladian.

  • sparse (bool, optional) – Whether to construct a sparse or dense (the default) matrix.

Returns

ndarray or Scipy CSR matrix

pygsti.nonham_lindbladian(Lm, Ln, sparse=False)

Construct the Lindbladian corresponding to generalized non-Hamiltonian (stochastic) errors.

Mathematically, for d-dimensional matrices Lm and Ln, this routine constructs the d^2-dimensional Lindbladian matrix L whose action is given by:

L(rho) = Ln*rho*Lm^dag - 1/2(rho*Lm^dag*Ln + Lm^dag*Ln*rho)

where rho is a density matrix. L is returned as a superoperator matrix that acts on vectorized density matrices.

Parameters
  • Lm (numpy.ndarray) – d-dimensional matrix.

  • Ln (numpy.ndarray) – d-dimensional matrix.

  • sparse (bool, optional) – Whether to construct a sparse or dense (the default) matrix.

Returns

ndarray or Scipy CSR matrix
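Assuming a column-stacking vectorization convention, vec(A X B) = (B^T ⊗ A) vec(X), the superoperator above can be sketched directly (an illustrative re-implementation for clarity, not pyGSTi's code, whose vectorization convention and sparse handling may differ):

```python
import numpy as np

def nonham_lindbladian_sketch(Lm, Ln):
    """Superoperator for L(rho) = Ln rho Lm^dag - (rho Lm^dag Ln + Lm^dag Ln rho)/2,
    under column-stacking vectorization: vec(A X B) = kron(B.T, A) vec(X)."""
    d = Lm.shape[0]
    I = np.eye(d)
    MdagN = Lm.conj().T @ Ln
    return (np.kron(Lm.conj(), Ln)        # vec(Ln rho Lm^dag) term
            - 0.5 * np.kron(MdagN.T, I)   # vec(rho Lm^dag Ln) term
            - 0.5 * np.kron(I, MdagN))    # vec(Lm^dag Ln rho) term
```

The returned d^2 x d^2 matrix applied to a column-stacked rho reproduces the direct action of L(rho).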

pygsti.remove_duplicates_in_place(l, index_to_test=None)

Remove duplicates from the list passed as an argument.

Parameters
  • l (list) – The list to remove duplicates from.

  • index_to_test (int, optional) – If not None, the index within the elements of l to test. For example, if all the elements of l contain 2 tuples (x,y) then set index_to_test == 1 to remove tuples with duplicate y-values.

Returns

None

pygsti.remove_duplicates(l, index_to_test=None)

Remove duplicates from a list and return the result.

Parameters
  • l (iterable) – The list/set to remove duplicates from.

  • index_to_test (int, optional) – If not None, the index within the elements of l to test. For example, if all the elements of l contain 2 tuples (x,y) then set index_to_test == 1 to remove tuples with duplicate y-values.

Returns

list – the list after duplicates have been removed.

pygsti.compute_occurrence_indices(lst)

A 0-based list of integers specifying which occurrence, i.e. enumerated duplicate, each list item is.

For example, if lst = [ ‘A’,’B’,’C’,’C’,’A’] then the returned list will be [ 0 , 0 , 0 , 1 , 1 ]. This may be useful when working with DataSet objects that have collisionAction set to “keepseparate”.

Parameters

lst (list) – The list to process.

Returns

list
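The enumeration described above can be sketched with a simple running counter (an illustrative re-implementation, not pyGSTi's code):

```python
from collections import defaultdict

def compute_occurrence_indices_sketch(lst):
    """0-based occurrence index for each item: the nth time an item
    appears in lst, it is assigned index n-1."""
    counts = defaultdict(int)
    out = []
    for item in lst:
        out.append(counts[item])   # how many times we've seen this item before
        counts[item] += 1
    return out
```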

pygsti.find_replace_tuple(t, alias_dict)

Replace elements of t according to rules in alias_dict.

Parameters
  • t (tuple or list) – The object to perform replacements upon.

  • alias_dict (dictionary) – Dictionary whose keys are potential elements of t and whose values are tuples corresponding to a sub-sequence that the given element should be replaced with. If None, no replacement is performed.

Returns

tuple

pygsti.find_replace_tuple_list(list_of_tuples, alias_dict)

Applies find_replace_tuple() to each element of list_of_tuples.

Parameters
  • list_of_tuples (list) – A list of tuple objects to perform replacements upon.

  • alias_dict (dictionary) – Dictionary whose keys are potential elements of the tuples in list_of_tuples and whose values are tuples corresponding to a sub-sequence that the given element should be replaced with. If None, no replacement is performed.

Returns

list

pygsti.apply_aliases_to_circuits(list_of_circuits, alias_dict)

Applies alias_dict to the circuits in list_of_circuits.

Parameters
  • list_of_circuits (list) – A list of circuits to make replacements in.

  • alias_dict (dict) – A dictionary whose keys are layer Labels (or equivalent tuples or strings), and whose values are Circuits or tuples of labels.

Returns

list

pygsti.sorted_partitions(n)

Iterate over all sorted (decreasing) partitions of integer n.

A partition of n here is defined as a list of one or more non-zero integers which sum to n. Sorted partitions (those iterated over here) have their integers in decreasing order.

Parameters

n (int) – The number to partition.

pygsti.partitions(n)

Iterate over all partitions of integer n.

A partition of n here is defined as a list of one or more non-zero integers which sum to n. Every partition is iterated over exactly once; there are no duplicates/repetitions.

Parameters

n (int) – The number to partition.
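The enumeration of partitions described above can be sketched recursively; this sketch yields each partition exactly once, in the decreasing (sorted) canonical form (illustrative only, not pyGSTi's implementation):

```python
def partitions_sketch(n, _max=None):
    """Yield every partition of n (lists of positive ints summing to n)
    exactly once, with parts in decreasing order within each partition."""
    if _max is None:
        _max = n
    if n == 0:
        yield []
        return
    # choose the largest part first, then partition the remainder with
    # parts no larger than it (this guarantees each partition appears once)
    for first in range(min(n, _max), 0, -1):
        for rest in partitions_sketch(n - first, first):
            yield [first] + rest
```

For example, n = 5 yields the seven partitions [5], [4,1], [3,2], [3,1,1], [2,2,1], [2,1,1,1], [1,1,1,1,1].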

pygsti.partition_into(n, nbins)

Iterate over all partitions of integer n into nbins bins.

Here, unlike in :func:`partitions`, a “partition” is allowed to contain zeros. For example, (4,1,0) is a valid partition of 5 using 3 bins. This function fixes the number of bins and iterates over all possible length-nbins partitions while allowing zeros. This is equivalent to iterating over all usual partitions of length at most nbins and inserting zeros into all possible places for partitions of length less than nbins.

Parameters
  • n (int) – The number to partition.

  • nbins (int) – The fixed number of bins, equal to the length of all the partitions that are iterated over.

pygsti._partition_into_slow(n, nbins)

Helper function for partition_into that performs the same task for a general number n.

pygsti.incd_product(*args)

Like itertools.product but returns the first modified (incremented) index along with the product tuple itself.

Parameters

*args (iterables) – Any number of iterable things that we’re taking the product of.

pygsti.dot_mod2(m1, m2)

Returns the product over the integers modulo 2 of two matrices.

Parameters
  • m1 (numpy.ndarray) – First matrix

  • m2 (numpy.ndarray) – Second matrix

Returns

numpy.ndarray

pygsti.multidot_mod2(mlist)

Returns the product over the integers modulo 2 of a list of matrices.

Parameters

mlist (list) – A list of matrices.

Returns

numpy.ndarray

pygsti.det_mod2(m)

Returns the determinant of a matrix over the integers modulo 2 (GL(n,2)).

Parameters

m (numpy.ndarray) – Matrix to take determinant of.

Returns

numpy.ndarray

pygsti.matrix_directsum(m1, m2)

Returns the direct sum of two square matrices of integers.

Parameters
  • m1 (numpy.ndarray) – First matrix

  • m2 (numpy.ndarray) – Second matrix

Returns

numpy.ndarray

pygsti.inv_mod2(m)

Finds the inverse of a matrix over GL(n,2)

Parameters

m (numpy.ndarray) – Matrix to take inverse of.

Returns

numpy.ndarray

pygsti.Axb_mod2(A, b)

Solves Ax = b over GF(2)

Parameters
  • A (numpy.ndarray) – Matrix to operate on.

  • b (numpy.ndarray) – Vector to operate on.

Returns

numpy.ndarray

pygsti.gaussian_elimination_mod2(a)

Gaussian elimination mod2 of a.

Parameters

a (numpy.ndarray) – Matrix to operate on.

Returns

numpy.ndarray
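The mod-2 linear algebra used by Axb_mod2 and gaussian_elimination_mod2 can be sketched in pure Python (an illustrative Gauss-Jordan solver over GF(2), not pyGSTi's numpy-based code; it assumes a solution exists and sets free variables to 0):

```python
def solve_mod2_sketch(A, b):
    """Solve A x = b over GF(2). A is a list of 0/1 rows, b a list of 0/1 values."""
    n, ncols = len(A), len(A[0])
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix [A | b]
    pivots = []
    for col in range(ncols):
        # find a pivot row with a 1 in this column, below the rows already used
        piv = next((r for r in range(len(pivots), n) if M[r][col]), None)
        if piv is None:
            continue  # no pivot here -> free variable (left as 0)
        r0 = len(pivots)
        M[r0], M[piv] = M[piv], M[r0]
        for r in range(n):
            if r != r0 and M[r][col]:
                M[r] = [x ^ y for x, y in zip(M[r], M[r0])]  # row-XOR = mod-2 subtraction
        pivots.append(col)
    x = [0] * ncols
    for r, c in enumerate(pivots):
        x[c] = M[r][-1]
    return x
```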

pygsti.diagonal_as_vec(m)

Returns a 1D array containing the diagonal of the input square 2D array m.

Parameters

m (numpy.ndarray) – Matrix to operate on.

Returns

numpy.ndarray

pygsti.strictly_upper_triangle(m)

Returns a matrix containing the strictly upper triangle of m and zeros elsewhere.

Parameters

m (numpy.ndarray) – Matrix to operate on.

Returns

numpy.ndarray

pygsti.diagonal_as_matrix(m)

Returns a diagonal matrix containing the diagonal of m.

Parameters

m (numpy.ndarray) – Matrix to operate on.

Returns

numpy.ndarray

pygsti.albert_factor(d, failcount=0)

Returns a matrix M such that d = M M.T for symmetric d, where d and M are matrices over [0,1] mod 2.

The algorithm mostly follows the proof in “Orthogonal Matrices Over Finite Fields” by Jessie MacWilliams in The American Mathematical Monthly, Vol. 76, No. 2 (Feb., 1969), pp. 152-164

There is generally not a unique Albert factorization, and this algorithm is randomized; it will generally return different factorizations across multiple calls.

Parameters
  • d (array-like) – Symmetric matrix mod 2.

  • failcount (int, optional) – UNUSED.

Returns

numpy.ndarray

pygsti.random_bitstring(n, p, failcount=0)

Constructs a random bitstring of length n with parity p

Parameters
  • n (int) – Number of bits.

  • p (int) – Parity.

  • failcount (int, optional) – Internal use only.

Returns

numpy.ndarray
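One simple way to realize the behavior described above (an illustrative alternative, not pyGSTi's implementation): draw n-1 free bits, then fix the final bit to achieve the requested parity:

```python
import random

def random_bitstring_sketch(n, p):
    """Uniformly random length-n 0/1 list whose sum mod 2 equals parity p."""
    bits = [random.randint(0, 1) for _ in range(n - 1)]
    bits.append((p - sum(bits)) % 2)   # last bit forces the requested parity
    return bits
```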

pygsti.random_invertable_matrix(n, failcount=0)

Finds a random invertible matrix M over GL(n,2)

Parameters
  • n (int) – matrix dimension

  • failcount (int, optional) – Internal use only.

Returns

numpy.ndarray

pygsti.random_symmetric_invertable_matrix(n)

Creates a random, symmetric, invertible matrix from GL(n,2)

Parameters

n (int) – Matrix dimension.

Returns

numpy.ndarray

pygsti.onesify(a, failcount=0, maxfailcount=100)

Returns M such that M a M.T has ones along the main diagonal

Parameters
  • a (numpy.ndarray) – The matrix.

  • failcount (int, optional) – Internal use only.

  • maxfailcount (int, optional) – Maximum number of tries before giving up.

Returns

numpy.ndarray

pygsti.permute_top(a, i)

Permutes the first row & col with the i’th row & col

Parameters
  • a (numpy.ndarray) – The matrix to act on.

  • i (int) – index to permute with first row/col.

Returns

numpy.ndarray

pygsti.fix_top(a)

Computes the permutation matrix P such that the [1:t,1:t] submatrix of P a P is invertible.

Parameters

a (numpy.ndarray) – A symmetric binary matrix with ones along the diagonal.

Returns

numpy.ndarray

pygsti.proper_permutation(a)

Computes the permutation matrix P such that all [n:t,n:t] submatrices of P a P are invertible.

Parameters

a (numpy.ndarray) – A symmetric binary matrix with ones along the diagonal.

Returns

numpy.ndarray

pygsti._check_proper_permutation(a)

Check to see if the matrix has been properly permuted.

This should be redundant with what is already built into ‘fix_top’.

Parameters

a (numpy.ndarray) – A matrix.

Returns

bool

pygsti._fastcalc
pygsti.EXPM_DEFAULT_TOL
pygsti.trace(m)

The trace of a matrix, sum_i m[i,i].

A memory leak in some versions of NumPy can cause repeated calls to NumPy’s trace function to eat up all available system memory; this function does not have that problem.

Parameters

m (numpy array) – the matrix (any object that can be double-indexed)

Returns

element type of m – The trace of m.

pygsti.is_hermitian(mx, tol=1e-09)

Test whether mx is a hermitian matrix.

Parameters
  • mx (numpy array) – Matrix to test.

  • tol (float, optional) – Tolerance on absolute magnitude of elements.

Returns

bool – True if mx is hermitian, otherwise False.

pygsti.is_pos_def(mx, tol=1e-09)

Test whether mx is a positive-definite matrix.

Parameters
  • mx (numpy array) – Matrix to test.

  • tol (float, optional) – Tolerance on absolute magnitude of elements.

Returns

bool – True if mx is positive-definite, otherwise False.

pygsti.is_valid_density_mx(mx, tol=1e-09)

Test whether mx is a valid density matrix (hermitian, positive-definite, and unit trace).

Parameters
  • mx (numpy array) – Matrix to test.

  • tol (float, optional) – Tolerance on absolute magnitude of elements.

Returns

bool – True if mx is a valid density matrix, otherwise False.

pygsti.frobeniusnorm(ar)

Compute the Frobenius norm of an array (or matrix):

sqrt( sum( each_element_of_a^2 ) )

Parameters

ar (numpy array) – What to compute the Frobenius norm of. Note that ar can be any shape or number of dimensions.

Returns

float or complex – depending on the element type of ar.

pygsti.frobeniusnorm_squared(ar)

Compute the squared Frobenius norm of an array (or matrix):

sum( each_element_of_a^2 )

Parameters

ar (numpy array) – What to compute the squared Frobenius norm of. Note that ar can be any shape or number of dimensions.

Returns

float or complex – depending on the element type of ar.

pygsti.nullspace(m, tol=1e-07)

Compute the nullspace of a matrix.

Parameters
  • m (numpy array) – A matrix of shape (M, N) whose nullspace is to be computed.

  • tol (float , optional) – Nullspace tolerance, used when comparing singular values with zero.

Returns

A matrix of shape (N, K) whose columns contain nullspace basis vectors.
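A standard SVD-based sketch of this computation (illustrative, not pyGSTi's code): the rows of Vh corresponding to (near-)zero singular values span the nullspace:

```python
import numpy as np

def nullspace_sketch(m, tol=1e-7):
    """Orthonormal nullspace basis of an (M, N) matrix via SVD.
    Returns an (N, K) array whose columns are nullspace basis vectors."""
    _, s, vh = np.linalg.svd(m)            # full_matrices=True: vh is (N, N)
    rank = int(np.sum(s > tol))            # singular values missing from s count as zero
    return vh[rank:].conj().T
```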

pygsti.nullspace_qr(m, tol=1e-07)

Compute the nullspace of a matrix using the QR decomposition.

The QR decomposition is faster but less accurate than the SVD used by nullspace().

Parameters
  • m (numpy array) – A matrix of shape (M, N) whose nullspace is to be computed.

  • tol (float , optional) – Nullspace tolerance, used when comparing diagonal values of R with zero.

Returns

A matrix of shape (N, K) whose columns contain nullspace basis vectors.

pygsti.nice_nullspace(m, tol=1e-07)

Computes the nullspace of a matrix, and tries to return a “nice” basis for it.

Columns of the returned value (a basis for the nullspace) each have a maximum absolute value of 1.0 and are chosen so as to align with the original matrix’s basis as much as possible (the basis is found by projecting each original basis vector onto an arbitrarily-found nullspace and keeping only a set of linearly independent projections).

Parameters
  • m (numpy array) – A matrix of shape (M, N) whose nullspace is to be computed.

  • tol (float , optional) – Nullspace tolerance, used when comparing diagonal values of R with zero.

Returns

A matrix of shape (N, K) whose columns contain nullspace basis vectors.

pygsti.normalize_columns(m, return_norms=False, ord=None)

Normalizes the columns of a matrix.

Parameters
  • m (numpy.ndarray or scipy sparse matrix) – The matrix.

  • return_norms (bool, optional) – If True, also return a 1D array containing the norms of the columns (before they were normalized).

  • ord (int, optional) – The order of the norm. See :func:`numpy.linalg.norm`.

Returns

  • normalized_m (numpy.ndarray) – The matrix after columns are normalized

  • column_norms (numpy.ndarray) – Only returned when return_norms=True, a 1-dimensional array of the pre-normalization norm of each column.
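A dense-only sketch of this normalization using NumPy broadcasting (illustrative; pyGSTi's function also accepts scipy sparse matrices):

```python
import numpy as np

def normalize_columns_sketch(m, return_norms=False, ord=None):
    """Divide each column of a dense matrix by its norm."""
    norms = np.linalg.norm(m, ord=ord, axis=0)    # one norm per column
    normalized = m / norms                        # broadcasts across columns
    return (normalized, norms) if return_norms else normalized
```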

pygsti.column_norms(m, ord=None)

Compute the norms of the columns of a matrix.

Parameters
  • m (numpy.ndarray or scipy sparse matrix) – The matrix.

  • ord (int, optional) – The order of the norm. See :func:`numpy.linalg.norm`.

Returns

numpy.ndarray – A 1-dimensional array of the column norms (length is number of columns of m).

pygsti.scale_columns(m, scale_values)

Scale each column of a matrix by a given value.

Usually used for normalization purposes, when the matrix columns represent vectors.

Parameters
  • m (numpy.ndarray or scipy sparse matrix) – The matrix.

  • scale_values (numpy.ndarray) – A 1-dimensional array of scale values, one per column of m.

Returns

numpy.ndarray or scipy sparse matrix – A copy of m with scaled columns, possibly with different sparsity structure.

pygsti.columns_are_orthogonal(m, tol=1e-07)

Checks whether a matrix contains orthogonal columns.

The columns do not need to be normalized. In the complex case, two vectors v and w are considered orthogonal if dot(v.conj(), w) == 0.

Parameters
  • m (numpy.ndarray) – The matrix to check.

  • tol (float, optional) – Tolerance for checking whether dot products are zero.

Returns

bool

pygsti.columns_are_orthonormal(m, tol=1e-07)

Checks whether a matrix contains orthonormal columns.

In the complex case, two columns v and w are considered orthonormal if dot(v.conj(), w) == 0 and each has unit norm, dot(v.conj(), v) == 1.

Parameters
  • m (numpy.ndarray) – The matrix to check.

  • tol (float, optional) – Tolerance for checking whether dot products are zero.

Returns

bool

pygsti.independent_columns(m, initial_independent_cols=None, tol=1e-07)

Computes the indices of the linearly-independent columns in a matrix.

Optionally starts with a “base” matrix of independent columns, so that the returned indices indicate the columns of m that are independent of all the base columns and the other independent columns of m.

Parameters
  • m (numpy.ndarray or scipy sparse matrix) – The matrix.

  • initial_independent_cols (numpy.ndarray or scipy sparse matrix, optional) – If not None, a matrix of columns that are known to be independent, with respect to which the columns of m are tested (in addition to the already-chosen independent columns of m).

  • tol (float, optional) – Tolerance threshold used to decide whether a singular value is nonzero (it is considered nonzero if it is greater than tol).

Returns

list – A list of the independent-column indices of m.

pygsti.pinv_of_matrix_with_orthogonal_columns(m)

TODO: docstring

pygsti.matrix_sign(m)

The “sign” matrix of m

Parameters

m (numpy.ndarray) – the matrix.

Returns

numpy.ndarray

pygsti.print_mx(mx, width=9, prec=4, withbrackets=False)

Print matrix in pretty format.

Will print real or complex matrices with a desired precision and “cell” width.

Parameters
  • mx (numpy array) – the matrix (2-D array) to print.

  • width (int, optional) – the width (in characters) of each printed element

  • prec (int, optional) – the precision (in characters) of each printed element

  • withbrackets (bool, optional) – whether to print brackets and commas to make the result something that Python can read back in.

Returns

None

pygsti.mx_to_string(m, width=9, prec=4, withbrackets=False)

Generate a “pretty-format” string for a matrix.

Will generate strings for real or complex matrices with a desired precision and “cell” width.

Parameters
  • m (numpy.ndarray) – array to print.

  • width (int, optional) – the width (in characters) of each converted element

  • prec (int, optional) – the precision (in characters) of each converted element

  • withbrackets (bool, optional) – whether to print brackets and commas to make the result something that Python can read back in.

Returns

string – matrix m as a pretty-formatted string.

pygsti.mx_to_string_complex(m, real_width=9, im_width=9, prec=4)

Generate a “pretty-format” string for a complex-valued matrix.

Parameters
  • m (numpy array) – array to format.

  • real_width (int, optional) – the width (in characters) of the real part of each element.

  • im_width (int, optional) – the width (in characters) of the imaginary part of each element.

  • prec (int, optional) – the precision (in characters) of each element’s real and imaginary parts.

Returns

string – matrix m as a pretty-formatted string.

pygsti.unitary_superoperator_matrix_log(m, mx_basis)

Construct the logarithm of superoperator matrix m.

This function assumes that m acts as a unitary on density-matrix space, (m: rho -> U rho Udagger) so that log(m) can be written as the action by Hamiltonian H:

log(m): rho -> -i[H,rho].

Parameters
  • m (numpy array) – The superoperator matrix whose logarithm is taken

  • mx_basis ({'std', 'gm', 'pp', 'qt'} or Basis object) – The source and destination basis, respectively. Allowed values are Matrix-unit (std), Gell-Mann (gm), Pauli-product (pp), and Qutrit (qt) (or a custom basis object).

Returns

numpy array – A matrix logM, of the same shape as m, such that m = exp(logM) and logM can be written as the action rho -> -i[H,rho].

pygsti.near_identity_matrix_log(m, tol=1e-08)

Construct the logarithm of superoperator matrix m that is near the identity.

If m is real, the resulting logarithm will be real.

Parameters
  • m (numpy array) – The superoperator matrix whose logarithm is taken

  • tol (float, optional) – The tolerance used when testing for zero imaginary parts.

Returns

numpy array – A matrix logM, of the same shape as m, such that m = exp(logM) and logM is real when m is real.

pygsti.approximate_matrix_log(m, target_logm, target_weight=10.0, tol=1e-06)

Construct an approximate logarithm of superoperator matrix m that is real and near the target_logm.

The equation m = exp( logM ) is allowed to become inexact in order to make logM close to target_logm. In particular, the objective function that is minimized is (where || indicates the 2-norm):

|exp(logM) - m|_1 + target_weight * ||logM - target_logm||^2

Parameters
  • m (numpy array) – The superoperator matrix whose logarithm is taken

  • target_logm (numpy array) – The target logarithm

  • target_weight (float) – A weighting factor used to balance the exactness-of-log term with the closeness-to-target term in the optimized objective function. This value multiplies the latter term.

  • tol (float, optional) – Optimizer tolerance.

Returns

logM (numpy array) – A matrix of the same shape as m.

pygsti.real_matrix_log(m, action_if_imaginary='raise', tol=1e-08)

Construct a real logarithm of real matrix m.

This is possible when negative eigenvalues of m come in pairs, so that they can be viewed as complex conjugate pairs.

Parameters
  • m (numpy array) – The matrix to take the logarithm of

  • action_if_imaginary ({"raise","warn","ignore"}, optional) – What action should be taken if a real-valued logarithm cannot be found. “raise” raises a ValueError, “warn” issues a warning, and “ignore” ignores the condition and simply returns the complex-valued result.

  • tol (float, optional) – An internal tolerance used when testing for equivalence and zero imaginary parts (real-ness).

Returns

logM (numpy array) – A matrix logM, of the same shape as m, such that m = exp(logM).

pygsti.column_basis_vector(i, dim)

Returns the ith standard basis vector in dimension dim.

Parameters
  • i (int) – Basis vector index.

  • dim (int) – Vector dimension.

Returns

numpy.ndarray – An array of shape (dim, 1) that is all zeros except for its i-th element, which equals 1.

pygsti.vec(matrix_in)

Stacks the columns of a matrix to return a vector

Parameters

matrix_in (numpy.ndarray) –

Returns

numpy.ndarray

pygsti.unvec(vector_in)

Slices a vector into the columns of a matrix.

Parameters

vector_in (numpy.ndarray) –

Returns

numpy.ndarray
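Both operations correspond to Fortran-order (column-major) reshaping. A minimal sketch (illustrative; the exact output shape of pyGSTi's vec may differ):

```python
import numpy as np

def vec_sketch(matrix_in):
    """Stack the columns of a d x d matrix into a (d*d, 1) column vector."""
    return matrix_in.reshape(-1, 1, order='F')

def unvec_sketch(vector_in):
    """Refill a length-d^2 vector into a d x d matrix, column by column."""
    d = int(round(np.sqrt(vector_in.size)))
    return vector_in.reshape(d, d, order='F')
```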

pygsti.norm1(m)

Returns the 1 norm of a matrix

Parameters

m (numpy.ndarray) – The matrix.

Returns

numpy.ndarray

pygsti.random_hermitian(dim)

Generates a random Hermitian matrix

Parameters

dim (int) – the matrix dimensinon.

Returns

numpy.ndarray

pygsti.norm1to1(operator, num_samples=10000, mx_basis='gm', return_list=False)

The Hermitian 1-to-1 norm of a superoperator represented in the standard basis.

This is calculated via Monte-Carlo sampling. The definition of Hermitian 1-to-1 norm can be found in arxiv:1109.6887.

Parameters
  • operator (numpy.ndarray) – The operator matrix to take the norm of.

  • num_samples (int, optional) – Number of Monte-Carlo samples.

  • mx_basis ({'std', 'gm', 'pp', 'qt'} or Basis) – The basis of operator.

  • return_list (bool, optional) – Whether the entire list of sampled values is returned or just the maximum.

Returns

float or list – Depends on the value of return_list.

pygsti.complex_compare(a, b)

Comparison function for complex numbers that compares real part, then imaginary part.

Parameters
  • a (complex) –

  • b (complex) –

Returns

  • -1 if a < b

  • 0 if a == b

  • +1 if a > b

pygsti.prime_factors(n)

GCD algorithm to produce prime factors of n

Parameters

n (int) – The number to factorize.

Returns

list – The prime factors of n.
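
The input/output contract can be sketched with simple trial division (an illustrative standalone version; the docstring above mentions a GCD-based algorithm, which this sketch does not reproduce):

```python
def prime_factors(n):
    """Return the prime factors of n, with multiplicity, in ascending order."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:   # divide out each prime completely
            factors.append(d)
            n //= d
        d += 1
    if n > 1:               # whatever remains is itself prime
        factors.append(n)
    return factors
```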

pygsti.minweight_match(a, b, metricfn=None, return_pairs=True, pass_indices_to_metricfn=False)

Matches the elements of two vectors, a and b, by minimizing the total weight between them.

The weight is defined as the sum of metricfn(x,y) over all (x,y) pairs (x in a and y in b).

Parameters
  • a (list or numpy.ndarray) – First 1D array to match elements between.

  • b (list or numpy.ndarray) – Second 1D array to match elements between.

  • metricfn (function, optional) – A function of two float parameters, x and y, which defines the cost associated with matching x with y. If None, abs(x-y) is used.

  • return_pairs (bool, optional) – If True, the matching is also returned.

  • pass_indices_to_metricfn (bool, optional) – If True, the metric function is passed two indices into the a and b arrays, respectively, instead of the values.

Returns

  • weight_array (numpy.ndarray) – The array of weights corresponding to the min-weight matching. The sum of this array’s elements is the minimized total weight.

  • pairs (list) – Only returned when return_pairs == True, a list of 2-tuple pairs of indices (ix,iy) giving the indices into a and b respectively of each matched pair. The first (ix) indices will be in continuous ascending order starting at zero.
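
Min-weight matching of this kind is the classic linear assignment problem, so the documented behavior can be sketched with scipy's solver (an illustrative standalone version, not pyGSTi's implementation; the function name here is hypothetical):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def minweight_match_sketch(a, b, metricfn=None):
    # Build the pairwise cost matrix metricfn(x, y); abs(x - y) by default.
    if metricfn is None:
        metricfn = lambda x, y: abs(x - y)
    cost = np.array([[metricfn(x, y) for y in b] for x in a])
    # Solve the linear assignment problem for the min-weight matching.
    row_ind, col_ind = linear_sum_assignment(cost)
    weights = cost[row_ind, col_ind]       # per-pair weights
    pairs = list(zip(row_ind, col_ind))    # (ix, iy) index pairs, ix ascending
    return weights, pairs
```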

pygsti.minweight_match_realmxeigs(a, b, metricfn=None, pass_indices_to_metricfn=False, eps=1e-09)

Matches the elements of a and b, whose elements are assumed to be either real or one-half of a conjugate pair.

Matching is performed by minimizing the weight between elements, defined as the sum of metricfn(x,y) over all (x,y) pairs (x in a and y in b). If straightforward matching fails to preserve eigenvalue conjugacy relations, then real and conjugate-pair eigenvalues are matched separately to ensure the relations are preserved (though this can result in a sub-optimal matching). A ValueError is raised when the elements of a and b have incompatible conjugacy structures (numbers of conjugate vs. real pairs).

Parameters
  • a (numpy.ndarray) – First 1D array to match.

  • b (numpy.ndarray) – Second 1D array to match.

  • metricfn (function, optional) – A function of two float parameters, x and y, which defines the cost associated with matching x with y. If None, abs(x-y) is used.

  • pass_indices_to_metricfn (bool, optional) – If True, the metric function is passed two indices into the a and b arrays, respectively, instead of the values.

  • eps (float, optional) – Tolerance when checking if eigenvalues are equal to each other.

Returns

pairs (list) – A list of 2-tuple pairs of indices (ix,iy) giving the indices into a and b respectively of each matched pair.

pygsti._fas(a, inds, rhs, add=False)

Fancy Assignment, equivalent to a[*inds] = rhs but with the elements of inds (allowed to be integers, slices, or integer arrays) always specifying a generalized slice along the given dimension. This avoids some weird numpy indexing rules that make using square brackets a pain.

pygsti._findx_shape(a, inds)

Returns the shape of a fancy-indexed array (a[*inds].shape)

pygsti._findx(a, inds, always_copy=False)

Fancy Indexing, equivalent to a[*inds].copy() but with the elements of inds (allowed to be integers, slices, or integer arrays) always specifying a generalized slice along the given dimension. This avoids some weird numpy indexing rules that make using square brackets a pain.

pygsti.safe_dot(a, b)

Performs dot(a,b) correctly when neither, either, or both arguments are sparse matrices.

Parameters
  • a (numpy.ndarray or scipy.sparse matrix.) – First matrix.

  • b (numpy.ndarray or scipy.sparse matrix.) – Second matrix.

Returns

numpy.ndarray or scipy.sparse matrix
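
A minimal sketch of the dispatch logic, assuming scipy.sparse operands (an illustrative re-implementation, not pyGSTi's internal code):

```python
import numpy as np
import scipy.sparse as sps

def safe_dot(a, b):
    # Sparse operands carry their own .dot; for dense @ sparse we use the
    # transpose identity (a @ b) == (b.T @ a.T).T so the sparse operand
    # drives the multiplication. Plain dense arrays fall back to np.dot.
    if sps.issparse(a):
        return a.dot(b)
    elif sps.issparse(b):
        return (b.T.dot(a.T)).T
    else:
        return np.dot(a, b)
```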

pygsti.safe_real(a, inplace=False, check=False)

Get the real-part of a, where a can be either a dense array or a sparse matrix.

Parameters
  • a (numpy.ndarray or scipy.sparse matrix.) – Array to take real part of.

  • inplace (bool, optional) – Whether this operation should be done in-place.

  • check (bool, optional) – If True, raise a ValueError if a has a nonzero imaginary part.

Returns

numpy.ndarray or scipy.sparse matrix

pygsti.safe_imag(a, inplace=False, check=False)

Get the imaginary-part of a, where a can be either a dense array or a sparse matrix.

Parameters
  • a (numpy.ndarray or scipy.sparse matrix.) – Array to take imaginary part of.

  • inplace (bool, optional) – Whether this operation should be done in-place.

  • check (bool, optional) – If True, raise a ValueError if a has a nonzero real part.

Returns

numpy.ndarray or scipy.sparse matrix

pygsti.safe_norm(a, part=None)

Get the Frobenius norm of a matrix or vector, a, when it is either a dense array or a sparse matrix.

Parameters
  • a (ndarray or scipy.sparse matrix) – The matrix or vector to take the norm of.

  • part ({None,'real','imag'}) – If not None, return the norm of the real or imaginary part of a.

Returns

float

pygsti.safe_onenorm(a)

Computes the 1-norm of the dense or sparse matrix a.

Parameters

a (ndarray or sparse matrix) – The matrix or vector to take the norm of.

Returns

float

pygsti.csr_sum_indices(csr_matrices)

Precomputes the indices needed to sum a set of CSR sparse matrices.

Computes the index-arrays needed for use in :method:`csr_sum`, along with the index pointer and column-indices arrays for constructing a “template” CSR matrix to be the destination of csr_sum.

Parameters

csr_matrices (list) – The SciPy CSR matrices to be summed.

Returns

  • ind_arrays (list) – A list of numpy arrays giving the destination data-array indices of each element of csr_matrices.

  • indptr, indices (numpy.ndarray) – The row-pointer and column-indices arrays specifying the sparsity structure of the destination CSR matrix.

  • N (int) – The dimension of the destination matrix (and of each member of csr_matrices)

pygsti.csr_sum(data, coeffs, csr_mxs, csr_sum_indices)

Accelerated summation of several CSR-format sparse matrices.

:method:`csr_sum_indices` precomputes the necessary indices for summing directly into the data-array of a destination CSR sparse matrix. If data is the data-array of matrix D (for “destination”), then this method performs:

D += sum_i( coeff[i] * csr_mxs[i] )

Note that D is not returned; the sum is done internally into D’s data-array.

Parameters
  • data (numpy.ndarray) – The data-array of the destination CSR-matrix.

  • coeffs (iterable) – The weight coefficients which multiply each summed matrix.

  • csr_mxs (iterable) – A list of CSR matrix objects whose data-array is given by obj.data (e.g. a SciPy CSR sparse matrix).

  • csr_sum_indices (list) – A list of precomputed index arrays as returned by :method:`csr_sum_indices`.

Returns

None
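
The reference behavior that csr_sum accelerates can be sketched the simple way with scipy (illustrative only; the names below are not pyGSTi internals, and pyGSTi instead sums directly into the destination's data array using the precomputed index maps):

```python
import numpy as np
import scipy.sparse as sps

mxs = [sps.csr_matrix(np.array([[1., 0.], [0., 2.]])),
       sps.csr_matrix(np.array([[0., 3.], [0., 1.]]))]
coeffs = [2.0, -1.0]

# "Template" with the union sparsity pattern of all summands (the role
# played by the indptr/indices arrays from csr_sum_indices):
template = sum(abs(m) for m in mxs).tocsr()
template.data[:] = 0.0

# D += sum_i( coeff[i] * csr_mxs[i] ), computed with intermediates here;
# csr_sum performs this update in-place on D's data array.
D = template + sum(c * m for c, m in zip(coeffs, mxs))
```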

pygsti.csr_sum_flat_indices(csr_matrices)

Precomputes quantities allowing fast computation of linear combinations of CSR sparse matrices.

The returned quantities can later be used to quickly compute a linear combination of the CSR sparse matrices csr_matrices.

Computes the index and data arrays needed for use in :method:`csr_sum_flat`, along with the index pointer and column-indices arrays for constructing a “template” CSR matrix to be the destination of csr_sum_flat.

Parameters

csr_matrices (list) – The SciPy CSR matrices to be summed.

Returns

  • flat_dest_index_array (numpy array) – A 1D array of one element per nonzero element in any of csr_matrices, giving the destination-index of that element.

  • flat_csr_mx_data (numpy array) – A 1D array of the same length as flat_dest_index_array, which simply concatenates the data arrays of csr_matrices.

  • mx_nnz_indptr (numpy array) – A 1D array of length len(csr_matrices)+1 such that the data for the i-th element of csr_matrices lie in the index-range of mx_nnz_indptr[i] to mx_nnz_indptr[i+1]-1 of the flat arrays.

  • indptr, indices (numpy.ndarray) – The row-pointer and column-indices arrays specifying the sparsity structure of the destination CSR matrix.

  • N (int) – The dimension of the destination matrix (and of each member of csr_matrices)

pygsti.csr_sum_flat(data, coeffs, flat_dest_index_array, flat_csr_mx_data, mx_nnz_indptr)

Computes the sum of several CSR-format sparse matrices.

:method:`csr_sum_flat_indices` precomputes the necessary indices for summing directly into the data-array of a destination CSR sparse matrix. If data is the data-array of matrix D (for “destination”), then this method performs:

D += sum_i( coeff[i] * csr_mxs[i] )

Note that D is not returned; the sum is done internally into D’s data-array.

Parameters
  • data (numpy.ndarray) – The data-array of the destination CSR matrix.

  • coeffs (iterable) – The weight coefficients which multiply each summed matrix.

  • flat_dest_index_array (numpy.ndarray) – The destination-index array returned by :method:`csr_sum_flat_indices`.

  • flat_csr_mx_data (numpy.ndarray) – The concatenated data array returned by :method:`csr_sum_flat_indices`.

  • mx_nnz_indptr (numpy.ndarray) – The nonzero-count pointer array returned by :method:`csr_sum_flat_indices`.

Returns

None

pygsti.expm_multiply_prep(a, tol=EXPM_DEFAULT_TOL)

Computes “prepared” meta-info about matrix a, to be used in expm_multiply_fast.

This includes a shifted version of a.

Parameters
  • a (numpy.ndarray) – The matrix that will later be exponentiated.

  • tol (float, optional) – Tolerance used within matrix exponentiation routines.

Returns

tuple – A tuple of values to pass to expm_multiply_fast.

pygsti.expm_multiply_fast(prep_a, v, tol=EXPM_DEFAULT_TOL)

Multiplies v by an exponentiated matrix.

Parameters
  • prep_a (tuple) – A tuple of values from :function:`expm_multiply_prep` that defines the matrix to be exponentiated and holds other pre-computed quantities.

  • v (numpy.ndarray) – Vector to multiply (take dot product with).

  • tol (float, optional) – Tolerance used within matrix exponentiation routines.

Returns

numpy.ndarray

pygsti._custom_expm_multiply_simple_core(a, b, mu, m_star, s, tol, eta)

A helper function. Note that this (Python) version works when a is a LinearOperator as well as a SciPy CSR sparse matrix.

pygsti.expop_multiply_prep(op, a_1_norm=None, tol=EXPM_DEFAULT_TOL)

Returns “prepared” meta-info about operation op, which is assumed to be traceless (so no shift is needed).

Used as input for use with _custom_expm_multiply_simple_core or fast C-reps.

Parameters
  • op (scipy.sparse.linalg.LinearOperator) – The operator to exponentiate.

  • a_1_norm (float, optional) – The 1-norm (if computed separately) of op.

  • tol (float, optional) – Tolerance used within matrix exponentiation routines.

Returns

tuple – A tuple of values to pass to expm_multiply_fast.

pygsti.sparse_equal(a, b, atol=1e-08)

Checks whether two SciPy sparse matrices are (almost) equal.

Parameters
  • a (scipy.sparse matrix) – First matrix.

  • b (scipy.sparse matrix) – Second matrix.

  • atol (float, optional) – The tolerance to use, passed to numpy.allclose, when comparing the elements of a and b.

Returns

bool
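
A simple (memory-heavy) sketch of this check, comparing the densified matrices elementwise (illustrative only; pyGSTi's version presumably avoids densifying):

```python
import numpy as np
import scipy.sparse as sps

def sparse_equal(a, b, atol=1e-8):
    # Matrices of different shape can never be equal; otherwise compare
    # all entries to within the absolute tolerance.
    if a.shape != b.shape:
        return False
    return np.allclose(a.toarray(), b.toarray(), atol=atol)
```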

pygsti.sparse_onenorm(a)

Computes the 1-norm of the scipy sparse matrix a.

Parameters

a (scipy sparse matrix) – The matrix or vector to take the norm of.

Returns

float

pygsti.ndarray_base(a, verbosity=0)

Get the base memory object for numpy array a.

This is found by following .base until it is None.

Parameters
  • a (numpy.ndarray) – Array to get base of.

  • verbosity (int, optional) – Print additional debugging information if this is > 0.

Returns

numpy.ndarray

pygsti.to_unitary(scaled_unitary)

Compute the scaling factor required to turn a scalar multiple of a unitary matrix into a unitary matrix.

Parameters

scaled_unitary (ndarray) – A scaled unitary matrix

Returns

  • scale (float)

  • unitary (ndarray) – Such that scale * unitary == scaled_unitary.
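
Since M = s * U with U unitary implies M M† = |s|² I, the scale is recoverable from any diagonal entry of M M†. A standalone sketch of this identity (not pyGSTi's internal code):

```python
import numpy as np

def to_unitary(scaled_unitary):
    # M @ M.conj().T == |s|**2 * I for M = s * U, so read off |s| from
    # the (0, 0) entry and divide it out.
    s = np.sqrt(abs((scaled_unitary @ scaled_unitary.conj().T)[0, 0]))
    return s, scaled_unitary / s
```

Note that the returned scale is real and non-negative; any complex phase of the original multiplier stays in the unitary factor.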

pygsti.sorted_eig(mx)

Similar to numpy.linalg.eig, but returns sorted output.

In particular, the eigenvalues and eigenvectors are sorted by eigenvalue, where sorting is done according to the (real_part, imaginary_part) tuple.

Parameters

mx (numpy.ndarray) – Matrix to act on.

Returns

  • eigenvalues (numpy.ndarray)

  • eigenvectors (numpy.ndarray)
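
The sort described above can be sketched as a permutation of numpy's unsorted eigendecomposition (an illustrative standalone version):

```python
import numpy as np

def sorted_eig(mx):
    # Sort eigenpairs by the lexicographic (real, imag) key, permuting
    # the eigenvector columns to match.
    evals, evecs = np.linalg.eig(mx)
    order = sorted(range(len(evals)),
                   key=lambda i: (evals[i].real, evals[i].imag))
    return evals[order], evecs[:, order]
```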

pygsti.compute_kite(eigenvalues)

Computes the “kite” corresponding to a list of eigenvalues.

The kite is defined as a list of integers, each indicating that there is a degenerate block of that many eigenvalues within eigenvalues. Thus the sum of the list's values equals len(eigenvalues).

Parameters

eigenvalues (numpy.ndarray) – A sorted array of eigenvalues.

Returns

list – A list giving the multiplicity structure of eigenvalues.
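
A sketch of the block-counting walk over the sorted eigenvalues (illustrative only; the tol parameter is a hypothetical addition not in the documented signature):

```python
import numpy as np

def compute_kite(eigenvalues, tol=1e-9):
    # Grow the current degenerate block while consecutive sorted
    # eigenvalues agree to within tol; close the block otherwise.
    if len(eigenvalues) == 0:
        return []
    kite = []
    blk = 1
    for prev, cur in zip(eigenvalues[:-1], eigenvalues[1:]):
        if abs(cur - prev) < tol:
            blk += 1
        else:
            kite.append(blk)
            blk = 1
    kite.append(blk)
    return kite
```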

pygsti.find_zero_communtant_connection(u, u_inv, u0, u0_inv, kite)

Find a matrix R such that u_inv R u0 is diagonal AND log(R) has no projection onto the commutant of G0.

More specifically, find a matrix R such that u_inv R u0 is diagonal (so G = R G0 Rinv if G and G0 share the same eigenvalues and have eigenvectors u and u0 respectively) AND log(R) has no (zero) projection onto the commutant of G0 = u0 diag(evals) u0_inv.

Parameters
  • u (numpy.ndarray) – Usually the eigenvector matrix of a gate (G).

  • u_inv (numpy.ndarray) – Inverse of u.

  • u0 (numpy.ndarray) – Usually the eigenvector matrix of the corresponding target gate (G0).

  • u0_inv (numpy.ndarray) – Inverse of u0.

  • kite (list) – The kite structure of u0.

Returns

numpy.ndarray

pygsti.project_onto_kite(mx, kite)

Project mx onto kite, so mx is zero everywhere except on the kite.

Parameters
  • mx (numpy.ndarray) – Matrix to project.

  • kite (list) – A kite structure.

Returns

numpy.ndarray
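
Since the kite describes a block-diagonal structure, the projection just keeps the diagonal blocks and zeros everything else. A standalone sketch (not pyGSTi's internal code):

```python
import numpy as np

def project_onto_kite(mx, kite):
    # Copy each diagonal block named by the kite; leave the rest zero.
    out = np.zeros_like(mx)
    start = 0
    for blk in kite:
        out[start:start + blk, start:start + blk] = \
            mx[start:start + blk, start:start + blk]
        start += blk
    return out
```

The complementary project_onto_antikite would return mx minus this projection.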

pygsti.project_onto_antikite(mx, kite)

Project mx onto the complement of kite, so mx is zero everywhere on the kite.

Parameters
  • mx (numpy.ndarray) – Matrix to project.

  • kite (list) – A kite structure.

Returns

numpy.ndarray

pygsti.remove_dependent_cols(mx, tol=1e-07)

Removes the linearly dependent columns of a matrix.

Parameters

mx (numpy.ndarray) – The input matrix

Returns

A linearly independent subset of the columns of mx.

pygsti.intersection_space(space1, space2, tol=1e-07, use_nice_nullspace=False)

TODO: docstring

pygsti.union_space(space1, space2, tol=1e-07)

TODO: docstring

pygsti.jamiolkowski_angle(hamiltonian_mx)

TODO: docstring

pygsti.zvals_to_dense(self, zvals, superket=True)

Construct the dense operator or superoperator representation of a computational basis state.

Parameters
  • zvals (list or numpy.ndarray) – The z-values, each 0 or 1, defining the computational basis state.

  • superket (bool, optional) – If True, the super-ket representation of the state is returned. If False, then the complex ket representation is returned.

Returns

numpy.ndarray

pygsti.int64_parity(x)

Compute the parity of x.

Recursively divide a (64-bit) integer (x) into two equal halves and take their XOR until only 1 bit is left.

Parameters

x (int64) –

Returns

int64
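
The XOR-folding described above halves the word width at each step. A standalone sketch for non-negative 64-bit integers:

```python
def int64_parity(x):
    # Fold the 64-bit word onto itself with XOR until one bit remains;
    # the low bit is then the parity of the original word.
    for shift in (32, 16, 8, 4, 2, 1):
        x ^= x >> shift
    return x & 1
```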

pygsti.zvals_int64_to_dense(zvals_int, nqubits, outvec=None, trust_outvec_sparsity=False, abs_elval=None)

Fills a dense array with the super-ket representation of a computational basis state.

Parameters
  • zvals_int (int64) – The array of (up to 64) z-values, encoded as the 0s and 1s in the binary representation of this integer.

  • nqubits (int) – The number of z-values (up to 64)

  • outvec (numpy.ndarray, optional) – The output array, which must be a 1D array of length 4**nqubits or None, in which case a new array is allocated.

  • trust_outvec_sparsity (bool, optional) – When True, it is assumed that the provided outvec starts as all zeros and so only non-zero elements of outvec need to be set.

  • abs_elval (float) – the value 1 / (sqrt(2)**nqubits), which can be passed here so that it doesn’t need to be recomputed on every call to this function. If None, then we just compute the value.

Returns

numpy.ndarray

class pygsti._ExplicitBasis(elements, labels=None, name=None, longname=None, real=False, sparse=None)

Bases: Basis

A Basis whose elements are specified directly.

All explicit bases are simple: their vector space is always taken to be that of the flattened elements.

Parameters
  • elements (numpy.ndarray) – The basis elements (sometimes different from the vectors)

  • labels (list) – The basis labels

  • name (str, optional) – The name of this basis. If None, then a name will be automatically generated.

  • longname (str, optional) – A more descriptive name for this basis. If None, then the short name will be used.

  • real (bool, optional) – Whether the coefficients in the expression of an arbitrary vector as a linear combination of this basis’s elements must be real.

  • sparse (bool, optional) – Whether the elements of this Basis are stored as sparse matrices or vectors. If None, then this is automatically determined by the type of the initial object: elements[0] (sparse=False is used when len(elements) == 0).

Count

The number of custom bases, used for serialized naming

Type

int

Count = 0
_to_nice_serialization(self)
classmethod _from_nice_serialization(cls, state)
property dim(self)

The dimension of the vector space this basis fully or partially spans. Equivalently, the length of the vector_elements of the basis.

property size(self)

The number of elements (or vector-elements) in the basis.

property elshape(self)

The shape of each element. Typically either a length-1 or length-2 tuple, corresponding to vector or matrix elements, respectively. Note that vector elements always have shape (dim,) (or (dim,1) in the sparse case).

_copy_with_toggled_sparsity(self)
__hash__(self)

Return hash(self).

class pygsti._DirectSumBasis(component_bases, name=None, longname=None)

Bases: LazyBasis

A basis that is the direct sum of one or more “component” bases.

Elements of this basis are the union of the basis elements on each component, each embedded into a common block-diagonal structure where each component occupies its own block. Thus, when there is more than one component, a DirectSumBasis is not a simple basis because the size of its elements is larger than the size of its vector space (which corresponds to just the diagonal blocks of its elements).

Parameters
  • component_bases (iterable) – A list of the component bases. Each list element may be either a Basis object or a tuple of arguments to :function:`Basis.cast`, e.g. ('pp', 4).

  • name (str, optional) – The name of this basis. If None, the names of the component bases joined with “+” is used.

  • longname (str, optional) – A longer description of this basis. If None, then a long name is automatically generated.

vector_elements

The “vectors” of this basis, always 1D (sparse or dense) arrays.

Type

list

_to_nice_serialization(self)
classmethod _from_nice_serialization(cls, state)
property dim(self)

The dimension of the vector space this basis fully or partially spans. Equivalently, the length of the vector_elements of the basis.

property size(self)

The number of elements (or vector-elements) in the basis.

property elshape(self)

The shape of each element. Typically either a length-1 or length-2 tuple, corresponding to vector or matrix elements, respectively. Note that vector elements always have shape (dim,) (or (dim,1) in the sparse case).

__hash__(self)

Return hash(self).

_lazy_build_vector_elements(self)
_lazy_build_elements(self)
_lazy_build_labels(self)
_copy_with_toggled_sparsity(self)
__eq__(self, other)

Return self==value.

property vector_elements(self)

The “vectors” of this basis, always 1D (sparse or dense) arrays.

Returns

list

property to_std_transform_matrix(self)

Retrieve the matrix that transforms a vector from this basis to the standard basis of this basis’s dimension.

Returns

numpy array or scipy.sparse.lil_matrix – An array of shape (dim, size) where dim is the dimension of this basis (the length of its vectors) and size is the size of this basis (its number of vectors).

property to_elementstd_transform_matrix(self)

Get transformation matrix from this basis to the “element space”.

Get the matrix that transforms vectors in this basis (with length equal to the dim of this basis) to vectors in the “element space” - that is, vectors in the same standard basis that the elements of this basis are expressed in.

Returns

numpy array – An array of shape (element_dim, size) where element_dim is the dimension, i.e. size, of the elements of this basis (e.g. 16 if the elements are 4x4 matrices) and size is the size of this basis (its number of vectors).

create_equivalent(self, builtin_basis_name)

Create an equivalent basis with components of type builtin_basis_name.

Create a Basis that is equivalent in structure & dimension to this basis but whose simple components (perhaps just this basis itself) are of the builtin basis type given by builtin_basis_name.

Parameters

builtin_basis_name (str) – The name of a builtin basis, e.g. “pp”, “gm”, or “std”. Used to construct the simple components of the returned basis.

Returns

DirectSumBasis

create_simple_equivalent(self, builtin_basis_name=None)

Create a basis of type builtin_basis_name whose elements are compatible with this basis.

Create a basis of the builtin type specified that is simple and has no components (note that a basis may be simple yet still have components, e.g. a TensorProdBasis), and whose dimension is compatible with the elements of this basis. This function might also be named "element_equivalent", as it returns the builtin_basis_name-analogue of the standard basis in which this basis's elements are expressed.

Parameters

builtin_basis_name (str, optional) – The name of the built-in basis to use. If None, then a copy of this basis is returned (if it’s simple) or this basis’s name is used to try to construct a simple and component-free version of the same builtin-basis type.

Returns

Basis

class pygsti._Label

Bases: object

A label used to identify a gate, circuit layer, or (sub-)circuit.

A label consists of a string along with a tuple of integers or sector-names specifying which qubits, or more generally, which parts of the Hilbert space are acted upon by the object so-labeled.

property depth(self)

The depth of this label, viewed as a sub-circuit.