pygsti.extras.rb.benchmarker

Encapsulates RB results and dataset objects

Module Contents

Classes

Benchmarker

A class that encapsulates RB results and dataset objects.

class pygsti.extras.rb.benchmarker.Benchmarker(specs, ds=None, summary_data=None, predicted_summary_data=None, dstype='standard', success_outcome='success', success_key='target', dscomparator=None)

Bases: object

A class that encapsulates RB results and dataset objects. (Full docstring: todo.)

dstype : either 'success-fail' or 'standard', specifying the type of dataset(s) held.

specs : dictionary of (name, RBSpec) key-value pairs. The names are arbitrary labels for the experiment specifications.
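
A minimal construction sketch using the signature above. It is hypothetical: my_specs and my_dataset stand in for a user-built dictionary of specs and a pyGSTi dataset, and are not defined here; the keyword values shown are simply the documented defaults made explicit:

    from pygsti.extras.rb import benchmarker

    # Hypothetical inputs (not defined here): my_specs is a dictionary of
    # (name, RBSpec) key-value pairs and my_dataset is a pyGSTi dataset
    # holding the corresponding counts.
    b = benchmarker.Benchmarker(
        specs=my_specs,
        ds=my_dataset,
        dstype='standard',          # 'success-fail' is the other documented option
        success_outcome='success',  # default shown explicitly
        success_key='target',       # default shown explicitly
    )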

select_volumetric_benchmark_regions(depths, boundary, widths='all', datatype='success_probabilities', statistic='mean', merit='aboveboundary', specs=None, aggregate=True, passnum=None, rescaler='auto')
volumetric_benchmark_data(depths, widths='all', datatype='success_probabilities', statistic='mean', specs=None, aggregate=True, rescaler='auto')
flattened_data(specs=None, aggregate=True)
test_pass_stability(formatdata=False, verbosity=1)
generate_success_or_fail_dataset(overwrite=False)
summary_data(datatype, specindex, qubits=None)
create_summary_data(predictions=None, verbosity=2, auxtypes=None)

todo
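
A sketch of generating summary data and then querying volumetric-benchmark-style statistics with the methods listed above, assuming a Benchmarker b constructed as in the earlier sketch. The depths shown are illustrative, and the structure of the returned object is not specified here:

    # Compute summary statistics (e.g. success probabilities) from the raw data.
    b.create_summary_data(verbosity=2)

    # Mean success probability at each circuit depth, for all widths,
    # aggregated over the stored specs.
    vb = b.volumetric_benchmark_data(
        depths=[0, 2, 4, 8, 16],    # example depths; use those in your experiment design
        widths='all',
        datatype='success_probabilities',
        statistic='mean',
        aggregate=True,
    )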

analyze(specindices=None, analysis='adjusted', bootstraps=200, verbosity=1)

todo. Note: this method partly ignores specindices.
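
A sketch of running the analysis with its documented defaults, again assuming a Benchmarker b as above; what the analysis produces and where the results are stored is not specified here:

    # 'adjusted' analysis with 200 bootstrap resamples, per the defaults above.
    b.analyze(analysis='adjusted', bootstraps=200, verbosity=1)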

filter_experiments(numqubits=None, containqubits=None, onqubits=None, sampler=None, two_qubit_gate_prob=None, prefilter=None, benchmarktype=None)

Filters the stored experiments by properties such as the number of qubits, the qubits they contain or act on, the circuit sampler, the two-qubit gate probability, and the benchmark type (description inferred from the parameter names; full docstring todo).
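
A sketch of filtering the stored experiments via the keyword arguments above, assuming a Benchmarker b as in the earlier sketches; the argument values and qubit labels are hypothetical, and the return type is not documented here:

    # Keep only the experiments on exactly two qubits that include the
    # (hypothetical) qubit labels 'Q0' and 'Q1'.
    selected = b.filter_experiments(numqubits=2, containqubits=['Q0', 'Q1'])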