pygsti.io

pyGSTi Input/Output Python Package

Submodules

Package Contents

Classes

StdInputParser

Encapsulates a text parser for reading GST input files.

QubitProcessorSpec

The device specification for a quantum computer with one or more qubits.

QuditProcessorSpec

The device specification for a quantum computer with one or more qudits.

Functions

load_dataset(filename[, cache, collision_action, ...])

Deprecated!

read_dataset(filename[, cache, collision_action, ...])

Load a DataSet from a file.

load_multidataset(filename[, cache, collision_action, ...])

Deprecated!

read_multidataset(filename[, cache, collision_action, ...])

Load a MultiDataSet from a file.

load_time_dependent_dataset(filename[, cache, ...])

Deprecated!

read_time_dependent_dataset(filename[, cache, ...])

Load time-dependent (time-stamped) data as a DataSet.

load_model(filename)

Load a Model from a file, formatted using the standard text-format for models.

load_circuit_dict(filename)

Load a circuit dictionary from a file, formatted using the standard text-format.

load_circuit_list(filename[, read_raw_strings, ...])

Deprecated!

read_circuit_list(filename[, read_raw_strings, ...])

Load a circuit list from a file, formatted using the standard text-format.

convert_strings_to_circuits(obj)

Converts an object resulting from convert_circuits_to_strings() back to its original.

read_circuit_strings(filename)

Load various Circuit-containing standard objects from a file where

load_protocol_from_dir(dirname[, quick_load, comm])

Deprecated!

read_protocol_from_dir(dirname[, quick_load, comm])

Load a Protocol from a directory on disk.

read_protocol_from_mongodb(mongodb, doc_id[, quick_load])

Load a Protocol from a MongoDB database.

remove_protocol_from_mongodb(mongodb, doc_id[, ...])

Remove a Protocol from a MongoDB database.

load_edesign_from_dir(dirname[, quick_load, comm])

Deprecated!

read_edesign_from_dir(dirname[, quick_load, comm])

Load an ExperimentDesign from a directory on disk.

create_edesign_from_dir(dirname)

read_edesign_from_mongodb(mongodb, doc_id[, ...])

Load an ExperimentDesign from a MongoDB database.

remove_edesign_from_mongodb(mongodb, doc_id[, ...])

Remove an ExperimentDesign from a MongoDB database.

load_data_from_dir(dirname[, quick_load, comm])

Deprecated!

read_data_from_dir(dirname[, preloaded_edesign, ...])

Load a ProtocolData from a directory on disk.

read_data_from_mongodb(mongodb, doc_id[, ...])

Load a ProtocolData from a MongoDB database.

remove_data_from_mongodb(mongodb, doc_id[, session, ...])

Remove ProtocolData from a MongoDB database.

load_results_from_dir(dirname[, name, preloaded_data, ...])

Deprecated!

read_results_from_dir(dirname[, name, preloaded_data, ...])

Load a ProtocolResults or ProtocolResultsDir from a directory on disk.

read_results_from_mongodb(mongodb, doc_id[, ...])

Load a ProtocolResults from a MongoDB database.

read_resultsdir_from_mongodb(mongodb, doc_id[, ...])

Load a ProtocolResultsDir from a MongoDB database.

remove_results_from_mongodb(mongodb, doc_id[, comm, ...])

Remove ProtocolResults data from a MongoDB database.

remove_resultsdir_from_mongodb(mongodb, doc_id[, ...])

Remove ProtocolResultsDir data from a MongoDB database.

load_meta_based_dir(root_dir[, auxfile_types_member, ...])

Load the contents of a root_dir into a dict.

write_meta_based_dir(root_dir, valuedict[, ...])

Write a dictionary of quantities to a directory.

write_obj_to_meta_based_dir(obj, dirname, ...[, ...])

Write the contents of obj to dirname using a 'meta.json' file and an auxfile-types dictionary.

write_dict_to_json_or_pkl_files(d, dirname)

Write each element of d into a separate file in dirname.

parse_model(filename)

Parse a model file into a Model object.

write_empty_dataset(filename, circuits[, ...])

Write an empty dataset file to be used as a template.

write_dataset(filename, dataset[, circuits, ...])

Write a text-formatted dataset file.

write_multidataset(filename, multidataset[, circuits, ...])

Write a text-formatted multi-dataset file.

write_circuit_list(filename, circuits[, header])

Write a text-formatted circuit list file.

write_model(model, filename[, title])

Write a text-formatted model file.

write_empty_protocol_data(dirname, edesign[, sparse, ...])

Write to disk an empty ProtocolData object.

fill_in_empty_dataset_with_fake_data(dataset_filename, ...)

Fills in the text-format data set file dataset_filename with simulated data counts using model.

convert_circuits_to_strings(obj)

Converts a list or dictionary potentially containing Circuit objects to a JSON-able one with circuit strings.

write_circuit_strings(filename, obj)

Write various Circuit-containing standard objects with circuits replaced by their string representations.

read_auxtree_from_mongodb(mongodb, collection_name, doc_id)

Read a document containing links to auxiliary documents from a MongoDB database.

read_auxtree_from_mongodb_doc(mongodb, doc[, ...])

Load the contents of a MongoDB document into a dict.

write_obj_to_mongodb_auxtree(obj, mongodb, ...[, ...])

Write the attributes of an object to a MongoDB database, potentially as multiple documents.

add_obj_auxtree_write_ops_and_update_doc(obj, doc, ...)

Similar to write_obj_to_mongodb_auxtree, but just collect write operations and update a main-doc dictionary.

write_auxtree_to_mongodb(mongodb, collection_name, ...)

Write a dictionary of quantities to a MongoDB database, potentially as multiple documents.

add_auxtree_write_ops_and_update_doc(doc, write_ops, ...)

Similar to write_auxtree_to_mongodb, but just collect write operations and update a main-doc dictionary.

remove_auxtree_from_mongodb(mongodb, collection_name, ...)

Remove some or all of the MongoDB documents written by write_auxtree_to_mongodb

read_dict_from_mongodb(mongodb, collection_name, ...)

Read a dictionary serialized via write_dict_to_mongodb() into a dictionary.

write_dict_to_mongodb(d, mongodb, collection_name, ...)

Write each element of d as a separate document in a MongoDB collection

add_dict_to_mongodb_write_ops(d, write_ops, mongodb, ...)

Similar to write_dict_to_mongodb, but just collect write operations and update a main-doc dictionary.

remove_dict_from_mongodb(mongodb, collection_name, ...)

Remove the elements (separate documents) of a dictionary stored in a MongoDB collection.

create_mongodb_indices_for_pygsti_collections(mongodb)

Create, if not existing already, indices useful for speeding up pyGSTi MongoDB operations.

Attributes

QUICK_LOAD_MAX_SIZE

pygsti.io.load_dataset(filename, cache=False, collision_action='aggregate', record_zero_counts=True, ignore_zero_count_lines=True, with_times='auto', circuit_parse_cache=None, verbosity=1)

Deprecated!

pygsti.io.read_dataset(filename, cache=False, collision_action='aggregate', record_zero_counts=True, ignore_zero_count_lines=True, with_times='auto', circuit_parse_cache=None, verbosity=1)

Load a DataSet from a file.

This function first tries to load file as a saved DataSet object, then as a standard text-formatted DataSet.

Parameters

filename : string

The name of the file

cache : bool, optional

When set to True, a pickle file with the name filename + “.cache” is searched for and loaded instead of filename if it exists and is newer than filename. If no cache file exists or one exists but it is older than filename, a cache file will be written after loading from filename.

collision_action : {“aggregate”, “keepseparate”}

Specifies how duplicate circuits should be handled. “aggregate” adds duplicate-circuit counts, whereas “keepseparate” tags duplicate circuits by setting their .occurrence IDs to sequential positive integers.

record_zero_counts : bool, optional

Whether zero-counts are actually recorded (stored) in the returned DataSet. If False, then zero counts are ignored, except for potentially registering new outcome labels. When reading from a cache file (using cache==True) this argument is ignored: the presence of zero-counts is dictated by the value of record_zero_counts when the cache file was created.

ignore_zero_count_lines : bool, optional

Whether circuits for which there are no counts should be ignored (i.e. omitted from the DataSet) or not.

with_times : bool or “auto”, optional

Whether the time-stamped data format should be read in. If “auto”, then the time-stamped format is allowed but not required on a per-circuit basis (so the dataset can contain both formats). Typically you only need to set this to False when reading in a template file.

circuit_parse_cache : dict, optional

A dictionary mapping qubit string representations into created Circuit objects, which can improve performance by reducing or eliminating the need to parse circuit strings.

verbosity : int, optional

If zero, no output is shown. If greater than zero, loading progress is shown.

Returns

DataSet
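The cache rule described above (use filename + “.cache” when it exists and is newer than the source file, otherwise parse and rewrite the cache) can be sketched with the standard library alone. This is an illustrative helper, not pyGSTi’s implementation; the `load_with_cache` name and the `parse_fn` callback are hypothetical.

```python
import os
import pickle

def load_with_cache(filename, parse_fn):
    """Sketch of the documented cache rule: load filename + '.cache' (a pickle)
    when it exists and is newer than filename; otherwise parse the text file
    with parse_fn and (re)write the cache for next time."""
    cache = filename + ".cache"
    if os.path.exists(cache) and os.path.getmtime(cache) > os.path.getmtime(filename):
        with open(cache, "rb") as f:
            return pickle.load(f)          # cache hit: skip text parsing
    data = parse_fn(filename)              # cache miss or stale cache: parse
    with open(cache, "wb") as f:
        pickle.dump(data, f)               # refresh the cache
    return data
```

A second call on an unchanged file would then load the pickle instead of re-parsing.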

pygsti.io.load_multidataset(filename, cache=False, collision_action='aggregate', record_zero_counts=True, verbosity=1)

Deprecated!

pygsti.io.read_multidataset(filename, cache=False, collision_action='aggregate', record_zero_counts=True, verbosity=1)

Load a MultiDataSet from a file.

This function first tries to load file as a saved MultiDataSet object, then as a standard text-formatted MultiDataSet.

Parameters

filename : string

The name of the file

cache : bool, optional

When set to True, a pickle file with the name filename + “.cache” is searched for and loaded instead of filename if it exists and is newer than filename. If no cache file exists or one exists but it is older than filename, a cache file will be written after loading from filename.

collision_action : {“aggregate”, “keepseparate”}

Specifies how duplicate circuits should be handled. “aggregate” adds duplicate-circuit counts, whereas “keepseparate” tags duplicate circuits by setting their .occurrence IDs to sequential positive integers.

record_zero_counts : bool, optional

Whether zero-counts are actually recorded (stored) in the returned MultiDataSet. If False, then zero counts are ignored, except for potentially registering new outcome labels. When reading from a cache file (using cache==True) this argument is ignored: the presence of zero-counts is dictated by the value of record_zero_counts when the cache file was created.

verbosity : int, optional

If zero, no output is shown. If greater than zero, loading progress is shown.

Returns

MultiDataSet
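The collision_action semantics documented above (“aggregate” sums counts for duplicate circuits; “keepseparate” tags duplicates with sequential occurrence ids) can be illustrated with plain dictionaries. This is a toy sketch of the documented behavior, not pyGSTi’s DataSet machinery; `merge_counts` and its data shapes are invented for illustration.

```python
def merge_counts(rows, collision_action="aggregate"):
    """Toy sketch of collision_action: rows is a list of
    (circuit_string, counts_dict) pairs.  Returns a dict keyed by circuit,
    or by (circuit, occurrence_id) for duplicates under 'keepseparate'."""
    out = {}
    occurrences = {}
    for circ, counts in rows:
        if circ not in occurrences:
            occurrences[circ] = 0
            out[circ] = dict(counts)
        elif collision_action == "aggregate":
            for outcome, n in counts.items():   # add duplicate-circuit counts
                out[circ][outcome] = out[circ].get(outcome, 0) + n
        else:  # "keepseparate": sequential positive occurrence ids
            occurrences[circ] += 1
            out[(circ, occurrences[circ])] = dict(counts)
    return out
```

For example, two lines for the same circuit either merge into one total or stay as distinct keyed entries.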

pygsti.io.load_time_dependent_dataset(filename, cache=False, record_zero_counts=True)

Deprecated!

pygsti.io.read_time_dependent_dataset(filename, cache=False, record_zero_counts=True)

Load time-dependent (time-stamped) data as a DataSet.

Parameters

filename : string

The name of the file

cache : bool, optional

Reserved to perform caching similar to read_dataset. Currently this argument doesn’t do anything.

record_zero_counts : bool, optional

Whether zero-counts are actually recorded (stored) in the returned DataSet. If False, then zero counts are ignored, except for potentially registering new outcome labels.

Returns

DataSet

pygsti.io.load_model(filename)

Load a Model from a file, formatted using the standard text-format for models.

Parameters

filename : string

The name of the file

Returns

Model

pygsti.io.load_circuit_dict(filename)

Load a circuit dictionary from a file, formatted using the standard text-format.

Parameters

filename : string

The name of the file.

Returns

Dictionary with keys = circuit labels and values = Circuit objects.

pygsti.io.load_circuit_list(filename, read_raw_strings=False, line_labels='auto', num_lines=None)

Deprecated!

pygsti.io.read_circuit_list(filename, read_raw_strings=False, line_labels='auto', num_lines=None)

Load a circuit list from a file, formatted using the standard text-format.

Parameters

filename : string

The name of the file

read_raw_strings : boolean

If True, circuits are not converted to Circuit objects.

line_labels : iterable, optional

The (string valued) line labels used to initialize Circuit objects when line label information is absent from the one-line text representation contained in filename. If ‘auto’, then line labels are taken to be the list of all state-space labels present in the circuit’s layers. If there are no such labels then the special value ‘*’ is used as a single line label.

num_lines : int, optional

Specify this instead of line_labels to set the latter to the integers between 0 and num_lines-1.

Returns

list of Circuit objects
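With read_raw_strings=True the circuit strings are returned without being parsed into Circuit objects. A rough stdlib-only approximation is shown below; it assumes the text format is one circuit string per line with blank lines and ‘#’ comment/header lines skipped, which is an assumption about the file layout rather than a description of pyGSTi’s actual parser.

```python
def read_raw_circuit_strings(filename):
    """Rough sketch of read_circuit_list(..., read_raw_strings=True):
    return circuit strings, one per line, skipping blank lines and
    '#'-prefixed comment lines.  (Illustrative assumption, not pyGSTi's parser.)"""
    circuits = []
    with open(filename) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#"):
                circuits.append(line)
    return circuits
```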

pygsti.io.convert_strings_to_circuits(obj)

Converts an object resulting from convert_circuits_to_strings() back to its original.

Parameters

obj : list or tuple or dict

The object to convert.

Returns

object

pygsti.io.read_circuit_strings(filename)

Load various Circuit-containing standard objects from a file where circuits have been replaced by their string representations.

pygsti.io.load_protocol_from_dir(dirname, quick_load=False, comm=None)

Deprecated!

pygsti.io.read_protocol_from_dir(dirname, quick_load=False, comm=None)

Load a Protocol from a directory on disk.

Parameters

dirname : string

Directory name.

quick_load : bool, optional

Setting this to True skips the loading of components that may take a long time to load. This can be useful when this information isn’t needed and loading takes a long time.

comm : mpi4py.MPI.Comm, optional

When not None, an MPI communicator used to synchronize file access.

Returns

Protocol

pygsti.io.read_protocol_from_mongodb(mongodb, doc_id, quick_load=False)

Load a Protocol from a MongoDB database.

Parameters

mongodb : pymongo.database.Database

The MongoDB instance to load data from.

doc_id : str

The user-defined identifier of the protocol object to load.

quick_load : bool, optional

Setting this to True skips the loading of components that may take a long time to load. This can be useful when this information isn’t needed and loading takes a long time.

Returns

Protocol

pygsti.io.remove_protocol_from_mongodb(mongodb, doc_id, session=None, recursive=False)

Remove a Protocol from a MongoDB database.

If no protocol object with doc_id exists, this function returns False, otherwise it returns True.

Parameters

mongodb : pymongo.database.Database

The MongoDB instance to remove data from.

doc_id : str

The user-defined identifier of the protocol object to remove.

session : pymongo.client_session.ClientSession, optional

MongoDB session object to use when interacting with the MongoDB database. This can be used to implement transactions among other things.

recursive : RecursiveRemovalSpecification, optional

An object that filters the type of documents that are removed. Used when working with inter-related experiment designs, data, and results objects to only remove the types of documents you know aren’t being shared with other documents.

Returns

bool

True if the specified protocol object was removed, False if it didn’t exist.

pygsti.io.load_edesign_from_dir(dirname, quick_load=False, comm=None)

Deprecated!

pygsti.io.read_edesign_from_dir(dirname, quick_load=False, comm=None)

Load an ExperimentDesign from a directory on disk.

Parameters

dirname : string

Directory name.

quick_load : bool, optional

Setting this to True skips the loading of components that may take a long time to load. This can be useful when this information isn’t needed and loading takes a long time.

comm : mpi4py.MPI.Comm, optional

When not None, an MPI communicator used to synchronize file access.

Returns

ExperimentDesign

pygsti.io.create_edesign_from_dir(dirname)

pygsti.io.read_edesign_from_mongodb(mongodb, doc_id, quick_load=False, comm=None)

Load an ExperimentDesign from a MongoDB database.

Parameters

mongodb : pymongo.database.Database

The MongoDB instance to load data from.

doc_id : str

The user-defined identifier of the experiment design to load.

quick_load : bool, optional

Setting this to True skips the loading of components that may take a long time to load. This can be useful when this information isn’t needed and loading takes a long time.

comm : mpi4py.MPI.Comm, optional

When not None, an MPI communicator used to synchronize file access.

Returns

ExperimentDesign

pygsti.io.remove_edesign_from_mongodb(mongodb, doc_id, session=None, recursive='default')

Remove an ExperimentDesign from a MongoDB database.

If no experiment design with doc_id exists, this function returns False, otherwise it returns True.

Parameters

mongodb : pymongo.database.Database

The MongoDB instance to remove data from.

doc_id : str

The user-defined identifier of the experiment design to remove.

session : pymongo.client_session.ClientSession, optional

MongoDB session object to use when interacting with the MongoDB database. This can be used to implement transactions among other things.

recursive : RecursiveRemovalSpecification, optional

An object that filters the type of documents that are removed. Used when working with inter-related experiment designs, data, and results objects to only remove the types of documents you know aren’t being shared with other documents.

Returns

bool

True if the specified experiment design was removed, False if it didn’t exist.

pygsti.io.load_data_from_dir(dirname, quick_load=False, comm=None)

Deprecated!

pygsti.io.read_data_from_dir(dirname, preloaded_edesign=None, quick_load=False, comm=None)

Load a ProtocolData from a directory on disk.

Parameters

dirname : string

Directory name.

preloaded_edesign : ExperimentDesign, optional

The experiment design belonging to the to-be-loaded data object, in cases when this has been loaded already (only use this if you know what you’re doing).

quick_load : bool, optional

Setting this to True skips the loading of components that may take a long time to load. This can be useful when this information isn’t needed and loading takes a long time.

comm : mpi4py.MPI.Comm, optional

When not None, an MPI communicator used to synchronize file access.

Returns

ProtocolData

pygsti.io.read_data_from_mongodb(mongodb, doc_id, preloaded_edesign=None, quick_load=False, comm=None)

Load a ProtocolData from a MongoDB database.

Parameters

mongodb : pymongo.database.Database

The MongoDB instance to load data from.

doc_id : str

The user-defined identifier of the data to load.

preloaded_edesign : ExperimentDesign, optional

The experiment design belonging to the to-be-loaded data object, in cases when this has been loaded already (only use this if you know what you’re doing).

quick_load : bool, optional

Setting this to True skips the loading of components that may take a long time to load. This can be useful when this information isn’t needed and loading takes a long time.

comm : mpi4py.MPI.Comm, optional

When not None, an MPI communicator used to synchronize database access.

Returns

ProtocolData

pygsti.io.remove_data_from_mongodb(mongodb, doc_id, session=None, recursive='default')

Remove ProtocolData from a MongoDB database.

If no data object with doc_id exists, this function returns False, otherwise it returns True.

Parameters

mongodb : pymongo.database.Database

The MongoDB instance to remove data from.

doc_id : str

The user-defined identifier of the data to remove.

session : pymongo.client_session.ClientSession, optional

MongoDB session object to use when interacting with the MongoDB database. This can be used to implement transactions among other things.

recursive : RecursiveRemovalSpecification, optional

An object that filters the type of documents that are removed. Used when working with inter-related experiment designs, data, and results objects to only remove the types of documents you know aren’t being shared with other documents.

Returns

bool

True if the specified data was removed, False if it didn’t exist.

pygsti.io.load_results_from_dir(dirname, name=None, preloaded_data=None, quick_load=False, comm=None)

Deprecated!

pygsti.io.read_results_from_dir(dirname, name=None, preloaded_data=None, quick_load=False, comm=None)

Load a ProtocolResults or ProtocolResultsDir from a directory on disk.

Which object type is loaded depends on whether name is given: if it is, then a ProtocolResults object is loaded. If not, a ProtocolResultsDir is loaded.

Parameters

dirname : string

Directory name. This should be a “base” directory, containing subdirectories like “edesign”, “data”, and “results”

name : string or None

The ‘name’ of a particular ProtocolResults object, which is a sub-directory beneath dirname/results/. If None, then all the results (all names) at the given base-directory are loaded and returned as a ProtocolResultsDir object.

preloaded_data : ProtocolData, optional

The data object belonging to the to-be-loaded results, in cases when this has been loaded already (only use this if you know what you’re doing).

quick_load : bool, optional

Setting this to True skips the loading of data and experiment-design components that may take a long time to load. This can be useful when all the information of interest lies only within the results objects.

comm : mpi4py.MPI.Comm, optional

When not None, an MPI communicator used to synchronize file access.

Returns

ProtocolResults or ProtocolResultsDir

pygsti.io.read_results_from_mongodb(mongodb, doc_id, preloaded_data=None, quick_load=False, comm=None)

Load a ProtocolResults from a MongoDB database.

Parameters

mongodb : pymongo.database.Database

The MongoDB instance to load data from.

doc_id : str

The user-defined identifier of the results directory to load.

preloaded_data : ProtocolData, optional

The data object belonging to the to-be-loaded results, in cases when this has been loaded already (only use this if you know what you’re doing).

quick_load : bool, optional

Setting this to True skips the loading of data and experiment-design components that may take a long time to load. This can be useful when all the information of interest lies only within the results objects.

comm : mpi4py.MPI.Comm, optional

When not None, an MPI communicator used to synchronize database access.

Returns

ProtocolResults

pygsti.io.read_resultsdir_from_mongodb(mongodb, doc_id, preloaded_data=None, quick_load=False, read_all_results_for_data=False, comm=None)

Load a ProtocolResultsDir from a MongoDB database.

Parameters

mongodb : pymongo.database.Database

The MongoDB instance to load data from.

doc_id : str

The user-defined identifier of the results directory to load.

preloaded_data : ProtocolData, optional

The data object belonging to the to-be-loaded results, in cases when this has been loaded already (only use this if you know what you’re doing).

quick_load : bool, optional

Setting this to True skips the loading of data and experiment-design components that may take a long time to load. This can be useful when all the information of interest lies only within the results objects.

read_all_results_for_data : bool, optional

If True, the loaded result directory and sub-directories will read in all the results objects stored in the database associated with their ProtocolData object. Duplicate keys will be renamed to avoid collisions, and warning messages are printed. Setting this to True can sometimes be useful for loading old results that have been overwritten but still exist in the database. If False (the default), then only the specific results associated with the directory when it was last saved are loaded.

comm : mpi4py.MPI.Comm, optional

When not None, an MPI communicator used to synchronize database access.

Returns

ProtocolResultsDir

pygsti.io.remove_results_from_mongodb(mongodb, doc_id, comm=None, session=None, recursive='default')

Remove ProtocolResults data from a MongoDB database.

Which object type is removed depends on whether name is given: if it is, then data corresponding to a ProtocolResults object is removed. If not, that of a ProtocolResultsDir is removed.

Parameters

mongodb : pymongo.database.Database

The MongoDB instance to remove data from.

doc_id : str

The user-defined identifier of the results directory to remove.

comm : mpi4py.MPI.Comm, optional

When not None, an MPI communicator used to synchronize database access.

session : pymongo.client_session.ClientSession, optional

MongoDB session object to use when interacting with the MongoDB database. This can be used to implement transactions among other things.

recursive : RecursiveRemovalSpecification, optional

An object that filters the type of documents that are removed. Used when working with inter-related experiment designs, data, and results objects to only remove the types of documents you know aren’t being shared with other documents.

Returns

None

pygsti.io.remove_resultsdir_from_mongodb(mongodb, doc_id, comm=None, session=None, recursive='default')

Remove ProtocolResultsDir data from a MongoDB database.

Parameters

mongodb : pymongo.database.Database

The MongoDB instance to remove data from.

doc_id : str

The user-defined identifier of the results directory to remove.

comm : mpi4py.MPI.Comm, optional

When not None, an MPI communicator used to synchronize database access.

session : pymongo.client_session.ClientSession, optional

MongoDB session object to use when interacting with the MongoDB database. This can be used to implement transactions among other things.

recursive : RecursiveRemovalSpecification, optional

An object that filters the type of documents that are removed. Used when working with inter-related experiment designs, data, and results objects to only remove the types of documents you know aren’t being shared with other documents.

Returns

None

pygsti.io.QUICK_LOAD_MAX_SIZE

pygsti.io.load_meta_based_dir(root_dir, auxfile_types_member='auxfile_types', ignore_meta=('type',), separate_auxfiletypes=False, quick_load=False)

Load the contents of a root_dir into a dict.

The de-serialization uses the ‘meta.json’ file within root_dir to describe how the directory was serialized.

Parameters

root_dir : str

The directory name.

auxfile_types_member : str, optional

The key within meta.json that is used to describe how other members have been serialized into files. Unless you know what you’re doing, leave this as the default.

ignore_meta : tuple, optional

Keys within meta.json that should be ignored, i.e. not loaded into elements of the returned dict. By default, “type” is in this category because it describes a class name to be built and is used in a separate first-pass processing to construct an object. Unless you know what you’re doing, leave this as the default.

separate_auxfiletypes : bool, optional

If True, then return the auxfile_types_member element (a dict describing how quantities that aren’t in ‘meta.json’ have been serialized) as a separate return value, instead of placing it within the returned dict.

quick_load : bool, optional

Setting this to True skips the loading of members that may take a long time to load, namely those in separate files whose files are large. When the loading of an attribute is skipped, it is set to None.

Returns

loaded_qtys : dict

A dictionary of the quantities in ‘meta.json’ plus any loaded from the auxiliary files.

auxfile_types : dict

Only returned as a separate value when separate_auxfiletypes=True. A dict describing how members of loaded_qtys that weren’t loaded directly from ‘meta.json’ were serialized.
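The meta.json scheme described here (JSON-able members live in ‘meta.json’; members listed in the auxfile-types dict are written to separate files) can be sketched as a tiny round-trip with json and pathlib. All names below (`write_simple_meta_dir`, `load_simple_meta_dir`, the `.txt` aux format) are invented for illustration; pyGSTi supports many aux-file types and extensions.

```python
import json
from pathlib import Path

def write_simple_meta_dir(root_dir, valuedict, auxfile_types):
    """Toy sketch: JSON-able members (plus the auxfile-types map) go into
    meta.json; members named in auxfile_types go into separate files."""
    root = Path(root_dir)
    root.mkdir(parents=True, exist_ok=True)
    meta = {"auxfile_types": auxfile_types}
    for key, val in valuedict.items():
        if key in auxfile_types:                      # "special" member: own file
            (root / (key + ".txt")).write_text(str(val))
        else:
            meta[key] = val                           # ordinary member: in meta.json
    (root / "meta.json").write_text(json.dumps(meta))

def load_simple_meta_dir(root_dir):
    """Inverse: read meta.json, then load each auxfile-typed member from its file."""
    root = Path(root_dir)
    meta = json.loads((root / "meta.json").read_text())
    aux_types = meta.pop("auxfile_types")             # describes the extra files
    for key in aux_types:
        meta[key] = (root / (key + ".txt")).read_text()
    return meta
```

The loader mirrors load_meta_based_dir’s use of the stored auxfile-types map to decide which members live outside ‘meta.json’.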

pygsti.io.write_meta_based_dir(root_dir, valuedict, auxfile_types=None, init_meta=None)

Write a dictionary of quantities to a directory.

Write the dictionary by placing everything in a ‘meta.json’ file except for special key/value pairs (“special” usually because they lend themselves to a non-JSON format or they simply cannot be rendered as JSON) which are placed in “auxiliary” files formatted according to auxfile_types (which itself is saved in meta.json).

Parameters

root_dir : str

The directory to write to (will be created if needed).

valuedict : dict

The dictionary of values to serialize to disk.

auxfile_types : dict, optional

A dictionary whose keys are a subset of the keys of valuedict, and whose values are known “aux-file” types. auxfile_types[key] says that valuedict[key] should be serialized into a separate file (whose name is usually key + an appropriate extension) of the given format rather than be included directly in ‘meta.json’. If None, this dictionary is assumed to be valuedict[‘auxfile_types’].

init_meta : dict, optional

A dictionary of “initial” meta-data to be included in the ‘meta.json’ (but that isn’t in valuedict). For example, the class name of an object is often stored in the “type” field of meta.json when the object’s .__dict__ is used as valuedict.

Returns

None

pygsti.io.write_obj_to_meta_based_dir(obj, dirname, auxfile_types_member, omit_attributes=(), include_attributes=None, additional_meta=None)

Write the contents of obj to dirname using a ‘meta.json’ file and an auxfile-types dictionary.

This is similar to write_meta_based_dir(), except it takes an object (obj) whose .__dict__, minus omitted attributes, is used as the dictionary to write and whose auxfile-types comes from another object attribute.

Parameters

obj : object

the object to serialize

dirname : str

the directory name

auxfile_types_member : str or None

the name of the attribute within obj that holds the dictionary mapping of attributes to auxiliary-file types. Usually this is “auxfile_types”.

omit_attributes : list or tuple

List of (string-valued) names of attributes to omit when serializing this object. Usually you should just leave this empty.

include_attributes : list or tuple or None

A list of (string-valued) names of attributes to specifically include when serializing this object. If None, then all attributes are included except those specifically omitted via omit_attributes. If include_attributes is not None then omit_attributes is ignored.

additional_meta : dict, optional

A dictionary of additional meta-data to be included in the ‘meta.json’ file (but that isn’t an attribute of obj).

Returns

None

pygsti.io.write_dict_to_json_or_pkl_files(d, dirname)

Write each element of d into a separate file in dirname.

If the element is json-able, it is JSON-serialized and the “.json” extension is used. If not, pickle is used to serialize the element, and the “.pkl” extension is used. This is the reverse of _read_json_or_pkl_files_to_dict().

Parameters

d : dict

the dictionary of elements to serialize.

dirname : str

the directory name.

Returns

None
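The JSON-or-pickle fallback described above is straightforward to sketch: try json.dumps first and fall back to pickle when the value isn’t JSON-able. The `write_json_or_pkl` helper below is an illustrative stand-in, not pyGSTi’s internal writer.

```python
import json
import os
import pickle

def write_json_or_pkl(d, dirname):
    """Sketch of the documented fallback: each element of d is JSON-serialized
    to key + '.json' when possible, otherwise pickled to key + '.pkl'."""
    os.makedirs(dirname, exist_ok=True)
    for key, val in d.items():
        try:
            text = json.dumps(val)               # succeeds only for JSON-able values
            with open(os.path.join(dirname, key + ".json"), "w") as f:
                f.write(text)
        except TypeError:                        # e.g. sets, custom objects
            with open(os.path.join(dirname, key + ".pkl"), "wb") as f:
                pickle.dump(val, f)
```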

class pygsti.io.StdInputParser

Bases: object

Encapsulates a text parser for reading GST input files.

Create a new standard-input parser object

use_global_parse_cache = True
parse_circuit(s, lookup=None, create_subcircuits=True)

Parse a circuit from a string.

Parameters

s : string

The string to parse.

lookup : dict, optional

A dictionary with keys == reflbls and values == tuples of operation labels which can be used for substitutions using the S<reflbl> syntax.

create_subcircuits : bool, optional

Whether to create sub-circuit-labels when parsing string representations or to just expand these into non-subcircuit labels.

Returns

Circuit
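The S&lt;reflbl&gt; substitution that lookup enables can be illustrated with a toy string expansion: each S&lt;name&gt; token is replaced by the operation labels that lookup maps name to. This regex sketch is illustrative only; the real parser builds Circuit objects and handles far more syntax.

```python
import re

def expand_ref_labels(circuit_str, lookup):
    """Toy sketch of S<reflbl> substitution: replace each S<name> token in
    circuit_str with the concatenated operation labels from lookup[name]."""
    def sub(match):
        return "".join(lookup[match.group(1)])   # splice in the referenced labels
    return re.sub(r"S<(\w+)>", sub, circuit_str)
```

For instance, with lookup = {"prep": ("Gi", "Gi")}, the string "S&lt;prep&gt;GxGy" expands to "GiGiGxGy".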

parse_circuit_raw(s, lookup=None, create_subcircuits=True)

Parse a circuit’s constituent pieces from a string.

This doesn’t actually create a circuit object, which may be desirable in some scenarios.

Parameters

s : string

The string to parse.

lookup : dict, optional

A dictionary with keys == reflbls and values == tuples of operation labels which can be used for substitutions using the S<reflbl> syntax.

create_subcircuits : bool, optional

Whether to create sub-circuit-labels when parsing string representations or to just expand these into non-subcircuit labels.

Returns

label_tuple: tuple

Tuple of operation labels representing the circuit’s layers.

line_labels: tuple or None

A tuple or None giving the parsed line labels (following the ‘@’ symbol) of the circuit.

occurrence_id: int or None

The “occurrence id” - an integer following a second ‘@’ symbol that identifies a particular copy of this circuit.

compilable_indices : tuple or None

A tuple of layer indices (into label_tuple) marking the layers that can be “compiled”, and are not followed by a barrier so they can be compiled with following layers. This is non-None only when there are explicit markers within the circuit string indicating the presence or absence of barriers.

parse_dataline(s, lookup=None, expected_counts=-1, create_subcircuits=True, line_labels=None)

Parse a data line (dataline in grammar)

Parameters

s : string

The string to parse.

lookup : dict, optional

A dictionary with keys == reflbls and values == tuples of operation labels which can be used for substitutions using the S<reflbl> syntax.

expected_counts : int, optional

The expected number of counts to accompany the circuit on this data line. If < 0, no check is performed; otherwise raises ValueError if the number of counts does not equal expected_counts.

create_subcircuits : bool, optional

Whether to create sub-circuit-labels when parsing string representations or to just expand these into non-subcircuit labels.

Returns

circuit : Circuit

The circuit.

counts : list

List of counts following the circuit.

parse_dictline(s)

Parse a circuit dictionary line (dictline in grammar)

Parameters

s : string

The string to parse.

Returns

circuitLabel : string

The user-defined label to represent this circuit.

circuitTuple : tuple

The circuit as a tuple of operation labels.

circuitStr : string

The string representation of the circuit in the dictline.

circuitLineLabels : tuple

The line labels of the circuit.

occurrence : object

Circuit’s occurrence id, or None if there is none.

compilable_indices : tuple or None

A tuple of layer indices (into label_tuple) marking the layers that can be “compiled”, and are not followed by a barrier so they can be compiled with following layers. This is non-None only when there are explicit markers within the circuit string indicating the presence or absence of barriers.

parse_stringfile(filename, line_labels='auto', num_lines=None, create_subcircuits=True)

Parse a circuit list file.

Parameters

filename : string

The file to parse.

line_labels : iterable, optional

The (string valued) line labels used to initialize Circuit objects when line label information is absent from the one-line text representation contained in filename. If ‘auto’, then line labels are taken to be the list of all state-space labels present in the circuit’s layers. If there are no such labels then the special value ‘*’ is used as a single line label.

num_lines : int, optional

Specify this instead of line_labels to set the latter to the integers between 0 and num_lines-1.

create_subcircuits : bool, optional

Whether to create sub-circuit-labels when parsing string representations or to just expand these into non-subcircuit labels.

Returns

list of Circuits

The circuits read from the file.

parse_dictfile(filename)

Parse a circuit dictionary file.

Parameters

filename : string

The file to parse.

Returns

dict

Dictionary with keys == circuit labels and values == Circuits.

parse_datafile(filename, show_progress=True, collision_action='aggregate', record_zero_counts=True, ignore_zero_count_lines=True, with_times='auto')

Parse a data set file into a DataSet object.

Parameters

filename : string

The file to parse.

show_progress : bool, optional

Whether or not progress should be displayed.

collision_action : {“aggregate”, “keepseparate”}

Specifies how duplicate circuits should be handled. “aggregate” adds duplicate-circuit counts, whereas “keepseparate” tags duplicate circuits by setting their .occurrence IDs to sequential positive integers.

record_zero_counts : bool, optional

Whether zero-counts are actually recorded (stored) in the returned DataSet. If False, then zero counts are ignored, except for potentially registering new outcome labels.

ignore_zero_count_lines : bool, optional

Whether circuits for which there are no counts should be ignored (i.e. omitted from the DataSet) or not.

with_times : bool or “auto”, optional

Whether the time-stamped data format should be read in. If “auto”, then this format is allowed but not required. Typically you only need to set this to False when reading in a template file.

Returns

DataSet

A static DataSet object.
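The two collision_action modes above can be sketched with a plain dictionary standing in for a DataSet. The insert helper below is hypothetical and purely illustrative; pyGSTi's DataSet handles this internally:

```python
from collections import Counter

def insert(dataset, circuit, counts, collision_action="aggregate"):
    """Toy model of inserting counts for a possibly-duplicate circuit.

    'aggregate' sums counts into the existing entry, while
    'keepseparate' stores the duplicate under a fresh
    (circuit, occurrence) key.  (Simplified numbering: the first copy
    here gets occurrence 0.)
    """
    if collision_action == "aggregate":
        dataset.setdefault((circuit, 0), Counter()).update(counts)
    else:  # keepseparate: find the next unused occurrence id
        occ = 0
        while (circuit, occ) in dataset:
            occ += 1
        dataset[(circuit, occ)] = Counter(counts)

agg = {}
insert(agg, "GxGy", {"0": 10, "1": 90})
insert(agg, "GxGy", {"0": 5, "1": 95})
print(agg[("GxGy", 0)]["1"])  # 185

sep = {}
insert(sep, "GxGy", {"0": 10}, "keepseparate")
insert(sep, "GxGy", {"0": 5}, "keepseparate")
print(sorted(sep))  # [('GxGy', 0), ('GxGy', 1)]
```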

parse_multidatafile(filename, show_progress=True, collision_action='aggregate', record_zero_counts=True, ignore_zero_count_lines=True)

Parse a multiple data set file into a MultiDataSet object.

Parameters

filename : string

The file to parse.

show_progress : bool, optional

Whether or not progress should be displayed.

collision_action : {“aggregate”, “keepseparate”}

Specifies how duplicate circuits should be handled. “aggregate” adds duplicate-circuit counts, whereas “keepseparate” tags duplicate circuits by setting their .occurrence IDs to sequential positive integers.

record_zero_counts : bool, optional

Whether zero-counts are actually recorded (stored) in the returned MultiDataSet. If False, then zero counts are ignored, except for potentially registering new outcome labels.

ignore_zero_count_lines : bool, optional

Whether circuits for which there are no counts should be ignored (i.e. omitted from the MultiDataSet) or not.

Returns

MultiDataSet

A MultiDataSet object.

parse_tddatafile(filename, show_progress=True, record_zero_counts=True, create_subcircuits=True)

Parse a time-stamped data set file into a DataSet object.

Parameters

filename : string

The file to parse.

show_progress : bool, optional

Whether or not progress should be displayed.

record_zero_counts : bool, optional

Whether zero-counts are actually recorded (stored) in the returned DataSet. If False, then zero counts are ignored, except for potentially registering new outcome labels.

create_subcircuits : bool, optional

Whether to create sub-circuit-labels when parsing string representations or to just expand these into non-subcircuit labels.

Returns

DataSet

A static DataSet object.

pygsti.io.parse_model(filename)

Parse a model file into a Model object.

Parameters

filename : string

The file to parse.

Returns

Model

class pygsti.io.QubitProcessorSpec(num_qubits, gate_names, nonstd_gate_unitaries=None, availability=None, geometry=None, qubit_labels=None, nonstd_gate_symplecticreps=None, prep_names=('rho0',), povm_names=('Mdefault',), instrument_names=(), nonstd_preps=None, nonstd_povms=None, nonstd_instruments=None, aux_info=None)

Bases: QuditProcessorSpec

The device specification for a quantum computer with one or more qubits.

Parameters

num_qubits : int

The number of qubits in the device.

gate_names : list of strings

The names of gates in the device. This may include standard gate names known by pyGSTi (see below) or names which appear in the nonstd_gate_unitaries argument. The set of standard gate names includes, but is not limited to:

  • ‘Gi’ : the 1Q idle operation

  • ‘Gx’,’Gy’,’Gz’ : 1-qubit pi/2 rotations

  • ‘Gxpi’,’Gypi’,’Gzpi’ : 1-qubit pi rotations

  • ‘Gh’ : Hadamard

  • ‘Gp’ : phase or S-gate (i.e., ((1,0),(0,i)))

  • ‘Gcphase’,’Gcnot’,’Gswap’ : standard 2-qubit gates

Alternative names can be used for all or any of these gates, but then they must be explicitly defined in the nonstd_gate_unitaries dictionary. Including any standard names in nonstd_gate_unitaries overrides the default (builtin) unitary with the one supplied.

nonstd_gate_unitaries : dictionary of numpy arrays

A dictionary with keys that are gate names (strings) and values that are numpy arrays specifying quantum gates in terms of unitary matrices. This is an additional “lookup” database of unitaries - to add a gate to this QubitProcessorSpec its name still needs to appear in the gate_names list. This dictionary’s values specify additional (target) native gates that can be implemented in the device as unitaries acting on ordinary pure-state vectors, in the standard computational basis. These unitaries need not, and often should not, be unitaries acting on all of the qubits. E.g., a CNOT gate is specified by a key that is the desired name for CNOT, and a value that is the standard 4 x 4 complex matrix for CNOT. All gate names must start with ‘G’. As an advanced behavior, a unitary-matrix-returning function which takes a single argument - a tuple of label arguments - may be given instead of a single matrix to create an operation factory which allows continuously-parameterized gates. This function must also return an empty/dummy unitary when None is given as its argument.

availability : dict, optional

A dictionary whose keys are some subset of the gate names in nonstd_gate_unitaries and gate_names, and whose values are lists of qubit-label-tuples. Each qubit-label-tuple must have length equal to the number of qubits the corresponding gate acts upon, and causes that gate to be available to act on the specified qubits. Instead of a list of tuples, values of availability may take the special values “all-permutations” and “all-combinations”, which, as their names imply, equate to all possible permutations and combinations of the appropriate number of qubit labels (determined by the gate’s dimension). If a gate name is not present in availability, the default is “all-permutations”. So, the availability of a gate only needs to be specified when it cannot act in every valid way on the qubits (e.g., the device does not have all-to-all connectivity).

geometry : {“line”, “ring”, “grid”, “torus”} or QubitGraph, optional

The type of connectivity among the qubits, specifying a graph used to define neighbor relationships. Alternatively, a QubitGraph object with qubit_labels as the node labels may be passed directly. This argument is only used as a convenient way of specifying gate availability (edge connections are used for gates whose availability is unspecified by availability or whose value there is “all-edges”).

qubit_labels : list or tuple, optional

The labels (integers or strings) of the qubits. If None, then the integers starting with zero are used.

nonstd_gate_symplecticreps : dict, optional

A dictionary similar to nonstd_gate_unitaries that supplies, instead of a unitary matrix, the symplectic representation of a Clifford operation, given as a 2-tuple of numpy arrays.

aux_info : dict, optional

Any additional information that should be attached to this processor spec.
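The special availability values map onto itertools in the obvious way. The sketch below illustrates the documented behavior with a hypothetical resolve_availability helper (not part of the pyGSTi API; pyGSTi resolves these values internally when building a QubitProcessorSpec):

```python
from itertools import combinations, permutations

def resolve_availability(spec, n_gate_qubits, qubit_labels):
    """Expand the special availability values into explicit tuples.

    'all-permutations' includes every ordering of the target qubits;
    'all-combinations' treats orderings as equivalent.
    """
    if spec == "all-permutations":
        return tuple(permutations(qubit_labels, n_gate_qubits))
    if spec == "all-combinations":
        return tuple(combinations(qubit_labels, n_gate_qubits))
    return tuple(spec)  # already an explicit list of label tuples

qubits = (0, 1, 2)
print(resolve_availability("all-permutations", 2, qubits))
# ((0, 1), (0, 2), (1, 0), (1, 2), (2, 0), (2, 1))
print(resolve_availability("all-combinations", 2, qubits))
# ((0, 1), (0, 2), (1, 2))
```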

property qubit_labels

The qubit labels.

property qubit_graph

The qubit graph (geometry).

property num_qubits

The number of qubits.

gate_num_qubits(gate_name)

The number of qubits that a given gate acts upon.

Parameters
gate_name : str

The name of the gate.

Returns

int

compute_ops_on_qubits()

Constructs a dictionary mapping tuples of state space labels to the operations available on them.

Returns
dict

A dictionary with keys that are state space label tuples and values that are lists of gate labels, giving the available gates on those target labels.

subset(gate_names_to_include='all', qubit_labels_to_keep='all')

Construct a smaller processor specification by keeping only a select set of gates from this processor spec.

Parameters
gate_names_to_include : list or tuple or set

The gate names that should be included in the returned processor spec.

Returns

QubitProcessorSpec

map_qubit_labels(mapper)

Creates a new QubitProcessorSpec whose qubit labels are updated according to the mapping function mapper.

Parameters
mapper : dict or function

A dictionary whose keys are the existing self.qubit_labels values and whose values are the new labels, or a function which takes a single (existing qubit-label) argument and returns a new qubit label.

Returns

QubitProcessorSpec
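The dict-or-function duality of the mapper argument can be sketched in a few lines. The normalize_mapper helper below is hypothetical, used only to mirror the documented behavior:

```python
def normalize_mapper(mapper):
    """Accept either a dict or a callable as a qubit-label mapper.

    A dict is wrapped so both forms can be called the same way.
    """
    return mapper.__getitem__ if isinstance(mapper, dict) else mapper

old_labels = (0, 1, 2)
as_dict = normalize_mapper({0: "Q0", 1: "Q1", 2: "Q2"})
as_func = normalize_mapper(lambda lbl: f"Q{lbl}")
print([as_dict(l) for l in old_labels])  # ['Q0', 'Q1', 'Q2']
print([as_func(l) for l in old_labels])  # ['Q0', 'Q1', 'Q2']
```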

force_recompute_gate_relationships()

Invalidates LRU caches for all compute_* methods of this object, forcing them to recompute their values.

The compute_* methods of this processor spec compute various relationships and properties of its gates. These routines can be computationally intensive, and so their values are cached for performance. If the gates of a processor spec change and its compute_* methods are used, force_recompute_gate_relationships should be called.

compute_clifford_symplectic_reps(gatename_filter=None)

Constructs a dictionary of the symplectic representations for all the Clifford gates in this processor spec.

Parameters
gatename_filter : iterable, optional

A list, tuple, or set of gate names whose symplectic representations should be returned (if they exist).

Returns
dict

keys are gate names, values are (symplectic_matrix, phase_vector) tuples.

compute_one_qubit_gate_relations()

Computes the basic pair-wise relationships between the gates.

1. It multiplies all possible combinations of two 1-qubit gates together, from the full set available on this device. If the two gates multiply to another 1-qubit gate from this set of gates, this is recorded in the dictionary self.oneQgate_relations. If the 1-qubit gate with name name1 followed by the 1-qubit gate with name name2 multiplies (up to phase) to the gate with name name3, then self.oneQgate_relations[name1, name2] = name3.

2. If the inverse of any 1-qubit gate is contained in the model, this is recorded in the dictionary self.gate_inverse.

Returns
gate_relations : dict

Keys are (gatename1, gatename2) and values are either the gate name of the product of the two gates or None, signifying the identity.

gate_inverses : dict

Keys and values are gate names, mapping a gate name to its inverse gate (if one exists).
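The kind of table this method builds can be illustrated with a toy gate set of z-axis rotations, where composition is simply addition of quarter turns mod 4. The gate names below are hypothetical, and a real implementation compares unitaries up to phase rather than turn counts:

```python
# Toy gate set: rotations about z by multiples of pi/2, labeled by
# quarter turns.  Composition is addition mod 4, so pair-wise
# relations and inverses can be tabulated exactly.
gates = {"Gi": 0, "Gz": 1, "Gzpi": 2, "Gzmq": 3}  # hypothetical names
by_turns = {v: k for k, v in gates.items()}

oneQgate_relations = {}
gate_inverse = {}
for n1, t1 in gates.items():
    for n2, t2 in gates.items():
        product = (t1 + t2) % 4
        oneQgate_relations[n1, n2] = by_turns[product]
        if product == 0:  # the composition is the identity
            gate_inverse[n1] = n2

print(oneQgate_relations["Gz", "Gz"])  # Gzpi
print(gate_inverse["Gz"])              # Gzmq
```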

compute_multiqubit_inversion_relations()

Computes the inverses of multi-qubit (>1 qubit) gates.

Finds whether any of the multi-qubit gates in this device also have their inverse in the model. That is, if the unitaries for the multi-qubit gate with name name1 followed by the multi-qubit gate (of the same dimension) with name name2 multiply (up to phase) to the identity, then gate_inverse[name1] = name2 and gate_inverse[name2] = name1.

1-qubit gates are not handled by this method, as they are computed by the method compute_one_qubit_gate_relations().

Returns
gate_inverse : dict

Keys and values are gate names, mapping a gate name to its inverse gate (if one exists).

compute_clifford_ops_on_qubits()

Constructs a dictionary mapping tuples of state space labels to the clifford operations available on them.

Returns
dict

A dictionary with keys that are state space label tuples and values that are lists of gate labels, giving the available Clifford gates on those target labels.

compute_clifford_2Q_connectivity()

Constructs a graph encoding the connectivity between qubits via 2-qubit Clifford gates.

Returns
QubitGraph

A graph with nodes equal to the qubit labels and edges present whenever there is a 2-qubit Clifford gate between the vertex qubits.

compute_2Q_connectivity()

Constructs a graph encoding the connectivity between qubits via 2-qubit gates.

Returns
QubitGraph

A graph with nodes equal to the qubit labels and edges present whenever there is a 2-qubit gate between the vertex qubits.

class pygsti.io.QuditProcessorSpec(qudit_labels, qudit_udims, gate_names, nonstd_gate_unitaries=None, availability=None, geometry=None, prep_names=('rho0',), povm_names=('Mdefault',), instrument_names=(), nonstd_preps=None, nonstd_povms=None, nonstd_instruments=None, aux_info=None)

Bases: ProcessorSpec

The device specification for a quantum computer with one or more qudits.

Parameters

num_qubits : int

The number of qubits in the device.

gate_names : list of strings

The names of gates in the device. This may include standard gate names known by pyGSTi (see below) or names which appear in the nonstd_gate_unitaries argument. The set of standard gate names includes, but is not limited to:

  • ‘Gi’ : the 1Q idle operation

  • ‘Gx’,’Gy’,’Gz’ : 1-qubit pi/2 rotations

  • ‘Gxpi’,’Gypi’,’Gzpi’ : 1-qubit pi rotations

  • ‘Gh’ : Hadamard

  • ‘Gp’ : phase or S-gate (i.e., ((1,0),(0,i)))

  • ‘Gcphase’,’Gcnot’,’Gswap’ : standard 2-qubit gates

Alternative names can be used for all or any of these gates, but then they must be explicitly defined in the nonstd_gate_unitaries dictionary. Including any standard names in nonstd_gate_unitaries overrides the default (builtin) unitary with the one supplied.

nonstd_gate_unitaries : dictionary of numpy arrays

A dictionary with keys that are gate names (strings) and values that are numpy arrays specifying quantum gates in terms of unitary matrices. This is an additional “lookup” database of unitaries - to add a gate to this QubitProcessorSpec its name still needs to appear in the gate_names list. This dictionary’s values specify additional (target) native gates that can be implemented in the device as unitaries acting on ordinary pure-state vectors, in the standard computational basis. These unitaries need not, and often should not, be unitaries acting on all of the qubits. E.g., a CNOT gate is specified by a key that is the desired name for CNOT, and a value that is the standard 4 x 4 complex matrix for CNOT. All gate names must start with ‘G’. As an advanced behavior, a unitary-matrix-returning function which takes a single argument - a tuple of label arguments - may be given instead of a single matrix to create an operation factory which allows continuously-parameterized gates. This function must also return an empty/dummy unitary when None is given as its argument.

availability : dict, optional

A dictionary whose keys are some subset of the gate names in nonstd_gate_unitaries and gate_names, and whose values are lists of qubit-label-tuples. Each qubit-label-tuple must have length equal to the number of qubits the corresponding gate acts upon, and causes that gate to be available to act on the specified qubits. Instead of a list of tuples, values of availability may take the special values “all-permutations” and “all-combinations”, which, as their names imply, equate to all possible permutations and combinations of the appropriate number of qubit labels (determined by the gate’s dimension). If a gate name is not present in availability, the default is “all-permutations”. So, the availability of a gate only needs to be specified when it cannot act in every valid way on the qubits (e.g., the device does not have all-to-all connectivity).

geometry : {“line”, “ring”, “grid”, “torus”} or QubitGraph, optional

The type of connectivity among the qubits, specifying a graph used to define neighbor relationships. Alternatively, a QubitGraph object with qubit_labels as the node labels may be passed directly. This argument is only used as a convenient way of specifying gate availability (edge connections are used for gates whose availability is unspecified by availability or whose value there is “all-edges”).

qubit_labels : list or tuple, optional

The labels (integers or strings) of the qubits. If None, then the integers starting with zero are used.

aux_info : dict, optional

Any additional information that should be attached to this processor spec.

TODO: update this docstring for qudits

property num_qudits

The number of qudits.

property primitive_op_labels

All the primitive operation labels derived from the gate names and availabilities.

property idle_gate_names

The gate names that correspond to idle operations.

property global_idle_gate_name

The (first) gate name that corresponds to a global idle operation.

property global_idle_layer_label

Similar to global_idle_gate_name but includes the appropriate sslbls (either None or all the qudits).

prep_specifier(name)

The specifier for a given state preparation name.

The returned value specifies a state in one of several ways. It can either be a string identifying a standard state preparation (like “rho0”), or a complex array describing a pure state.

Parameters
name : str

The name of the state preparation to access.

Returns

str or numpy.ndarray

povm_specifier(name)

The specifier for a given POVM name.

The returned value specifies a POVM in one of several ways. It can either be a string identifying a standard POVM (like “Mz”), or a dictionary with values describing the pure states that each element of the POVM projects onto. Each value can be either a string describing a standard state or a complex array.

Parameters
name : str

The name of the POVM to access.

Returns

str or numpy.ndarray

instrument_specifier(name)

The specifier for a given instrument name.

The returned value specifies an instrument in one of several ways. It can either be a string identifying a standard instrument (like “Iz”), or a dictionary with values that are lists/tuples of 2-tuples describing each instrument member as the sum of rank-1 process matrices. Each 2-tuple element can be a string describing a standard state or a complex array describing an arbitrary pure state.

Parameters
name : str

The name of the instrument to access.

Returns

str or dict

gate_num_qudits(gate_name)

The number of qudits that a given gate acts upon.

Parameters
gate_namestr

The name of the gate.

Returns

int

rename_gate_inplace(existing_gate_name, new_gate_name)

Renames a gate within this processor specification.

Parameters
existing_gate_name : str

The existing gate name you want to change.

new_gate_name : str

The new gate name.

Returns

None

resolved_availability(gate_name, tuple_or_function='auto')

The availability of a given gate, resolved as either a tuple of sslbl-tuples or a function.

This function does more than just access the availability attribute, as this may hold special values like “all-edges”. It takes the value of self.availability[gate_name] and resolves and converts it into the desired format: either a tuple of state-space labels or a function with a single state-space-labels-tuple argument.

Parameters
gate_name : str

The gate name to get the availability of.

tuple_or_function : {‘tuple’, ‘function’, ‘auto’}

The type of object to return. ‘tuple’ means a tuple of state space label tuples, e.g. ((0,1), (1,2)). ‘function’ means a function that takes a single state space label tuple argument and returns True or False to indicate whether the gate is available on the given target labels. If ‘auto’ is given, then either a tuple or function is returned - whichever is more computationally convenient.

Returns

tuple or function
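The ‘function’ return mode can be sketched as a membership test over an explicit availability tuple. The as_function helper below is hypothetical, not the pyGSTi implementation:

```python
def as_function(availability):
    """Convert an explicit availability tuple into a membership test.

    Returns a callable taking a state-space-labels tuple and
    returning True or False, mirroring the 'function' return mode.
    """
    explicit = set(availability)
    return lambda sslbls: tuple(sslbls) in explicit

avail = ((0, 1), (1, 2))
is_avail = as_function(avail)
print(is_avail((0, 1)))  # True
print(is_avail((2, 0)))  # False
```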

is_available(gate_label)

Check whether a gate at a given location is available.

Parameters
gate_label : Label

The gate name and target labels to check availability of.

Returns

bool

available_gatenames(sslbls)

List all the gate names that are available within a set of state space labels.

This function finds all the gate names that are available for at least a subset of sslbls.

Parameters
sslbls : tuple

The state space labels to find availability within.

Returns
tuple of strings

A tuple of gate names (strings).

available_gatelabels(gate_name, sslbls)

List all the gate labels that are available for gate_name on at least a subset of sslbls.

Parameters
gate_name : str

The gate name.

sslbls : tuple

The state space labels to find availability within.

Returns
tuple of Labels

The available gate labels (all with name gate_name).

compute_ops_on_qudits()

Constructs a dictionary mapping tuples of state space labels to the operations available on them.

Returns
dict

A dictionary with keys that are state space label tuples and values that are lists of gate labels, giving the available gates on those target labels.

subset(gate_names_to_include='all', qudit_labels_to_keep='all')

Construct a smaller processor specification by keeping only a select set of gates from this processor spec.

Parameters
gate_names_to_include : list or tuple or set

The gate names that should be included in the returned processor spec.

Returns

QuditProcessorSpec

map_qudit_labels(mapper)

Creates a new QuditProcessorSpec whose qudit labels are updated according to the mapping function mapper.

Parameters
mapper : dict or function

A dictionary whose keys are the existing self.qudit_labels values and whose values are the new labels, or a function which takes a single (existing qudit-label) argument and returns a new qudit label.

Returns

QuditProcessorSpec

pygsti.io.write_empty_dataset(filename, circuits, header_string='## Columns = 1 frequency, count total', num_zero_cols=None, append_weights_column=False)

Write an empty dataset file to be used as a template.

Parameters

filename : string

The filename to write.

circuits : list of Circuits

List of circuits to write, each to be followed by num_zero_cols zeros.

header_string : string, optional

Header string for the file; should start with a pound (#) or double-pound (##) so it is treated as a comment or directive, respectively.

num_zero_cols : int, optional

The number of zero columns to place after each circuit. If None, then header_string must begin with “## Columns = “ and the number of zero columns will be inferred.

append_weights_column : bool, optional

Add an additional ‘weights’ column.

Returns

None

pygsti.io.write_dataset(filename, dataset, circuits=None, outcome_label_order=None, fixed_column_mode='auto', with_times='auto')

Write a text-formatted dataset file.

Parameters

filename : string

The filename to write.

dataset : DataSet

The data set from which counts are obtained.

circuits : list of Circuits, optional

The list of circuits to include in the written dataset. If None, all circuits are output.

outcome_label_order : list, optional

A list of the outcome labels in dataset which specifies the column order in the output file.

fixed_column_mode : bool or ‘auto’, optional

When True, a file is written with column headers indicating which outcome each column of counts corresponds to. If a row doesn’t have any counts for an outcome, ‘--’ is used in its place. When False, each row’s counts are written in an expanded form that includes the outcome labels (each “count” has the format <outcomeLabel>:<count>).

with_times : bool or “auto”, optional

Whether to include (save) time-stamp information in output. This can only be True when fixed_column_mode=False. “auto” will set this to True if fixed_column_mode=False and dataset has data at non-trivial (non-zero) times.

Returns

None

pygsti.io.write_multidataset(filename, multidataset, circuits=None, outcome_label_order=None)

Write a text-formatted multi-dataset file.

Parameters

filename : string

The filename to write.

multidataset : MultiDataSet

The multi data set from which counts are obtained.

circuits : list of Circuits, optional

The list of circuits to include in the written dataset. If None, all circuits are output.

outcome_label_order : list, optional

A list of the SPAM labels in multidataset which specifies the column order in the output file.

Returns

None

pygsti.io.write_circuit_list(filename, circuits, header=None)

Write a text-formatted circuit list file.

Parameters

filename : string

The filename to write.

circuits : list of Circuits

The list of circuits to include in the written dataset.

header : string, optional

Header line (first line of file). Prepended with a pound sign (#), so no need to include one.

Returns

None

pygsti.io.write_model(model, filename, title=None)

Write a text-formatted model file.

Parameters

model : Model

The model to write to file.

filename : string

The filename to write.

title : string, optional

Header line (first line of file). Prepended with a pound sign (#), so no need to include one.

Returns

None

pygsti.io.write_empty_protocol_data(dirname, edesign, sparse='auto', clobber_ok=False)

Write to disk an empty ProtocolData object.

Write to a directory an experimental design (edesign) and the dataset template files needed to load in a ProtocolData object, e.g. using the read_data_from_dir() function, after the template files are filled in.

Parameters

dirname : str

The root directory to write into. This directory will have ‘edesign’ and ‘data’ subdirectories created beneath it.

edesign : ExperimentDesign

The experiment design defining the circuits that need to be performed.

sparse : bool or “auto”, optional

If True, then the template data set(s) are written in a sparse-data format, i.e. in a format where not all the outcomes need to be given. If False, then a dense data format is used, where counts for all possible bit strings are given. “auto” causes the sparse format to be used when the number of qubits is > 2.

clobber_ok : bool, optional

If True, then a template dataset file will be written even if a file of the same name already exists (this may overwrite existing data with an empty template file, so be careful!).

Returns

None

pygsti.io.fill_in_empty_dataset_with_fake_data(dataset_filename, model, num_samples, sample_error='multinomial', seed=None, rand_state=None, alias_dict=None, collision_action='aggregate', record_zero_counts=True, comm=None, mem_limit=None, times=None, fixed_column_mode='auto')

Fills in the text-format data set file dataset_filename with simulated data counts using model.

Parameters

dataset_filename : str

The path to the text-formatted data set file.

model : Model

The model to use to simulate the data.

num_samples : int or list of ints or None

The simulated number of samples for each circuit. This only has effect when sample_error == "binomial" or "multinomial". If an integer, all circuits have this number of total samples. If a list, integer elements specify the number of samples for the corresponding circuit. If None, then model_or_dataset must be a DataSet, and total counts are taken from it (on a per-circuit basis).

sample_error : string, optional

What type of sample error is included in the counts. Can be:

  • “none” - no sample error: counts are floating point numbers such that the exact probability can be found by the ratio of count / total.

  • “clip” - no sample error, but clip probabilities to [0,1] so, e.g., counts are always positive.

  • “round” - same as “clip”, except counts are rounded to the nearest integer.

  • “binomial” - the number of counts is taken from a binomial distribution. Distribution has parameters p = (clipped) probability of the circuit and n = number of samples. This can only be used when there are exactly two SPAM labels in model_or_dataset.

  • “multinomial” - counts are taken from a multinomial distribution. Distribution has parameters p_k = (clipped) probability of the gate string using the k-th SPAM label and n = number of samples.

seed : int, optional

If not None, a seed for numpy’s random number generator, which is used to sample from the binomial or multinomial distribution.

rand_state : numpy.random.RandomState

A RandomState object to generate samples from. Can be useful to set instead of seed if you want reproducible distribution samples across multiple random function calls but you don’t want to bother with manually incrementing seeds between those calls.

alias_dict : dict, optional

A dictionary mapping single operation labels into tuples of one or more other operation labels which translate the given circuits before values are computed using model_or_dataset. The resulting Dataset, however, contains the un-translated circuits as keys.

collision_action : {“aggregate”, “keepseparate”}

Determines how duplicate circuits are handled by the resulting DataSet. Please see the constructor documentation for DataSet.

record_zero_counts : bool, optional

Whether zero-counts are actually recorded (stored) in the returned DataSet. If False, then zero counts are ignored, except for potentially registering new outcome labels.

comm : mpi4py.MPI.Comm, optional

When not None, an MPI communicator for distributing the computation across multiple processors and ensuring that the same dataset is generated on each processor.

mem_limit : int, optional

A rough memory limit in bytes which is used to determine job allocation when there are multiple processors.

times : iterable, optional

When not None, a list of time-stamps at which data should be sampled. num_samples samples will be simulated at each time value, meaning that each circuit in circuits will be evaluated with the given time value as its start time.

fixed_column_mode : bool or ‘auto’, optional

How the underlying data set file is written - see write_dataset().

Returns

DataSet

The generated data set (also written in place of the template file).
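The sample_error modes above can be sketched with the standard library's random module. The sample_counts helper below is hypothetical and covers only the “none”, “round”, and “multinomial” modes; pyGSTi's implementation also clips probabilities and supports per-circuit sample counts:

```python
import random

def sample_counts(probs, num_samples, sample_error="multinomial", seed=None):
    """Turn outcome probabilities into simulated counts.

    'none' keeps exact floating-point counts, 'round' rounds them to
    integers, and 'multinomial' draws num_samples outcomes at random
    with the given probabilities as weights.
    """
    if sample_error == "none":
        return {k: p * num_samples for k, p in probs.items()}
    if sample_error == "round":
        return {k: round(p * num_samples) for k, p in probs.items()}
    rng = random.Random(seed)
    draws = rng.choices(list(probs), weights=list(probs.values()),
                        k=num_samples)
    return {k: draws.count(k) for k in probs}

probs = {"0": 0.25, "1": 0.75}
print(sample_counts(probs, 100, "none"))  # {'0': 25.0, '1': 75.0}
counts = sample_counts(probs, 100, seed=0)
print(sum(counts.values()))               # 100
```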

pygsti.io.convert_circuits_to_strings(obj)

Converts a list or dictionary potentially containing Circuit objects to a JSON-able one with circuit strings.

Parameters

obj : list or tuple or dict

The object to convert.

Returns

object

A JSON-able object containing circuit string representations in place of Circuit objects.

pygsti.io.write_circuit_strings(filename, obj)

TODO: docstring - write various Circuit-containing standard objects with circuits replaced by their string reps

pygsti.io.read_auxtree_from_mongodb(mongodb, collection_name, doc_id, auxfile_types_member='auxfile_types', ignore_meta=('_id', 'type'), separate_auxfiletypes=False, quick_load=False)

Read a document containing links to auxiliary documents from a MongoDB database.

Parameters

mongodb : pymongo.database.Database

The MongoDB instance to load data from.

collection_name : str

The MongoDB collection within mongodb to read from.

doc_id : bson.objectid.ObjectId

The identifier of the root document to load from the database.

auxfile_types_member : str

The name of the attribute within the document that holds the dictionary mapping of attributes to auxiliary-file (document) types. Usually this is “auxfile_types”.

ignore_meta : tuple, optional

Keys within the root document that should be ignored, i.e. not loaded into elements of the returned dict. By default, “_id” and “type” are in this category because they give the database id and a class name to be built, respectively, and are not needed in the constructed dictionary. Unless you know what you’re doing, leave this as the default.

separate_auxfiletypes : bool, optional

If True, then return the auxfile_types_member element (a dict describing how quantities that aren’t in the main document have been serialized) as a separate return value, instead of placing it within the returned dict.

quick_load : bool, optional

Setting this to True skips the loading of members that may take a long time to load, namely those in separate documents that are large. When the loading of an attribute is skipped, it is set to None.

Returns

loaded_qtys : dict

A dictionary of the quantities in the main document plus any loaded from the auxiliary documents.

auxfile_types : dict

Only returned as a separate value when separate_auxfiletypes=True. A dict describing how members of loaded_qtys that weren’t loaded directly from the main document were serialized.

pygsti.io.read_auxtree_from_mongodb_doc(mongodb, doc, auxfile_types_member='auxfile_types', ignore_meta=('_id', 'type'), separate_auxfiletypes=False, quick_load=False)

Load the contents of a MongoDB document into a dict.

The de-serialization possibly uses metadata within the root document to describe how associated data is stored in other collections.

Parameters

mongodb : pymongo.database.Database

The MongoDB instance to load data from.

doc : dict

The already-retrieved main document being read in.

auxfile_types_member : str, optional

The key within the root document that is used to describe how other members have been serialized into documents. Unless you know what you’re doing, leave this as the default.

ignore_meta : tuple, optional

Keys within the root document that should be ignored, i.e. not loaded into elements of the returned dict. By default, “_id” and “type” are in this category because they give the database id and a class name to be built, respectively, and are not needed in the constructed dictionary. Unless you know what you’re doing, leave this as the default.

separate_auxfiletypes : bool, optional

If True, then return the auxfile_types_member element (a dict describing how quantities that aren’t in the root document have been serialized) as a separate return value, instead of placing it within the returned dict.

quick_load : bool, optional

Setting this to True skips the loading of members that may take a long time to load, namely those in separate documents that are large. When the loading of an attribute is skipped, it is set to None.

Returns

loaded_qtys : dict

A dictionary of the quantities in the main document plus any loaded from the auxiliary documents.

auxfile_types : dict

Only returned as a separate value when separate_auxfiletypes=True. A dict describing how members of loaded_qtys that weren’t loaded directly from the root document were serialized.
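The separate_auxfiletypes convention used by both read_auxtree functions can be illustrated with plain dicts standing in for MongoDB documents. The load_doc helper below is a hypothetical simplification of the real behavior, not pyGSTi’s implementation:

```python
# Sketch of the separate_auxfiletypes return convention, using a plain dict
# in place of a retrieved MongoDB document. load_doc is illustrative only.

def load_doc(doc, auxfile_types_member="auxfile_types",
             ignore_meta=("_id", "type"), separate_auxfiletypes=False):
    """Copy a 'root document', dropping ignored metadata keys and optionally
    splitting the aux-file-types map out as a second return value."""
    loaded = {k: v for k, v in doc.items() if k not in ignore_meta}
    if separate_auxfiletypes:
        aux_types = loaded.pop(auxfile_types_member, {})
        return loaded, aux_types
    return loaded

doc = {"_id": "abc123", "type": "MyModel",
       "auxfile_types": {"dataset": "pickle"}, "name": "demo"}

merged = load_doc(doc)
print(merged)  # {'auxfile_types': {'dataset': 'pickle'}, 'name': 'demo'}

qtys, aux = load_doc(doc, separate_auxfiletypes=True)
print(qtys, aux)  # {'name': 'demo'} {'dataset': 'pickle'}
```

Note that with the default separate_auxfiletypes=False, the aux-file-types map simply remains a key of the returned dictionary.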

pygsti.io.write_obj_to_mongodb_auxtree(obj, mongodb, collection_name, doc_id, auxfile_types_member, omit_attributes=(), include_attributes=None, additional_meta=None, session=None, overwrite_existing=False)

Write the attributes of an object to a MongoDB database, potentially as multiple documents.

Parameters

obj : object

The object that is to be written.

mongodb : pymongo.database.Database

The MongoDB instance to write data to.

collection_name : str

The MongoDB collection within mongodb to write to.

doc_id : bson.objectid.ObjectId

The identifier of the root document to store in the database. If None, a new id will be created.

auxfile_types_member : str, optional

The attribute of obj that is used to describe how other members should be serialized into separate “auxiliary” documents. Unless you know what you’re doing, leave this as the default.

omit_attributes : list or tuple

List of (string-valued) names of attributes to omit when serializing this object. Usually you should just leave this empty.

include_attributes : list or tuple or None

A list of (string-valued) names of attributes to specifically include when serializing this object. If None, then all attributes are included except those specifically omitted via omit_attributes. If include_attributes is not None then omit_attributes is ignored.

additional_meta : dict, optional

A dictionary of additional meta-data to be included in the main document (but that isn’t an attribute of obj).

session : pymongo.client_session.ClientSession, optional

MongoDB session object to use when interacting with the MongoDB database. This can be used to implement transactions among other things.

overwrite_existing : bool, optional

Whether existing documents should be overwritten. The default of False causes a ValueError to be raised if a document with the given doc_id already exists. Setting this to True mimics the behaviour of a typical filesystem, where writing to a path can be done regardless of whether it already exists.

Returns

bson.objectid.ObjectId

The identifier of the root document that was written.

pygsti.io.add_obj_auxtree_write_ops_and_update_doc(obj, doc, write_ops, mongodb, collection_name, auxfile_types_member, omit_attributes=(), include_attributes=None, additional_meta=None, overwrite_existing=False)

Similar to write_obj_to_mongodb_auxtree, but just collect write operations and update a main-doc dictionary.

This function effectively performs all the heavy-lifting to write an object to a MongoDB database without actually executing any write operations. Instead, a dictionary representing the main document (which we typically assume will be written later) is updated and additional write operations (for auxiliary documents) are added to a WriteOpsByCollection object. This function is intended for use within a MongoSerializable-derived object’s _add_auxiliary_write_ops_and_update_doc method.

Parameters

objobject

The object that is to be written.

doc : dict

The root-document data, which is updated as needed and is expected to be initialized at least with an _id key-value pair.

write_ops : WriteOpsByCollection

An object that keeps track of pymongo write operations on a per-collection basis. This object accumulates write operations to be performed at some point in the future.

mongodb : pymongo.database.Database

The MongoDB instance that is planned to be written to. Used to test for existing records, not to write to, as writing is assumed to be done later, potentially as a bulk write operation.

collection_name : str

The MongoDB collection within mongodb that is planned to be written to.

auxfile_types_member : str, optional

The attribute of obj that is used to describe how other members should be serialized into separate “auxiliary” documents. Unless you know what you’re doing, leave this as the default.

omit_attributes : list or tuple

List of (string-valued) names of attributes to omit when serializing this object. Usually you should just leave this empty.

include_attributes : list or tuple or None

A list of (string-valued) names of attributes to specifically include when serializing this object. If None, then all attributes are included except those specifically omitted via omit_attributes. If include_attributes is not None then omit_attributes is ignored.

additional_meta : dict, optional

A dictionary of additional meta-data to be included in the main document (but that isn’t an attribute of obj).

overwrite_existing : bool, optional

Whether existing documents should be overwritten. The default of False causes a ValueError to be raised if a document with the given doc_id already exists. Setting this to True mimics the behaviour of a typical filesystem, where writing to a path can be done regardless of whether it already exists.

Returns

bson.objectid.ObjectId

The identifier of the root document that was written.

pygsti.io.write_auxtree_to_mongodb(mongodb, collection_name, doc_id, valuedict, auxfile_types=None, init_meta=None, session=None, overwrite_existing=False)

Write a dictionary of quantities to a MongoDB database, potentially as multiple documents.

Write the dictionary by placing everything in valuedict into a root document except for special key/value pairs (“special” usually because they lend themselves to a non-JSON format or because they can be particularly large and may exceed MongoDB’s maximum document size), which are placed in “auxiliary” documents formatted according to auxfile_types (which is itself saved in the root document).

Parameters

mongodb : pymongo.database.Database

The MongoDB instance to write data to.

collection_name : str

The MongoDB collection within mongodb to write to.

doc_id : bson.objectid.ObjectId

The identifier of the root document to store in the database. If None, a new id will be created.

valuedict : dict

The dictionary of values to serialize to the database.

auxfile_types : dict, optional

A dictionary whose keys are a subset of the keys of valuedict, and whose values are known “aux-file” types. auxfile_types[key] says that valuedict[key] should be serialized into a separate document with the given format rather than be included directly in the root document. If None, this dictionary is assumed to be valuedict[‘auxfile_types’].

init_meta : dict, optional

A dictionary of “initial” meta-data to be included in the root document (but that isn’t in valuedict). For example, the class name of an object is often stored in the “type” field of the root document when a model object’s .__dict__ is used as valuedict.

session : pymongo.client_session.ClientSession, optional

MongoDB session object to use when interacting with the MongoDB database. This can be used to implement transactions among other things.

overwrite_existing : bool, optional

Whether existing documents should be overwritten. The default of False causes a ValueError to be raised if a document with the given doc_id already exists. Setting this to True mimics the behaviour of a typical filesystem, where writing to a path can be done regardless of whether it already exists.

Returns

bson.objectid.ObjectId

The identifier of the root document that was written.
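The root-document / auxiliary-document split described above can be sketched with in-memory dicts standing in for MongoDB collections. The collection names and the “auxdoc_of” link field below are assumptions for illustration, not pyGSTi’s actual schema:

```python
# Minimal sketch of the auxtree write pattern: small JSON-able values stay
# in a root document, while values named in auxfile_types are split into
# separate "auxiliary" documents. In-memory dicts stand in for collections;
# field names ("auxdoc_of", "member", "format") are illustrative only.

def write_auxtree(collections, doc_id, valuedict, auxfile_types=None):
    if auxfile_types is None:
        auxfile_types = valuedict.get("auxfile_types", {})
    root = {"_id": doc_id, "auxfile_types": auxfile_types}
    for key, val in valuedict.items():
        if key in auxfile_types:
            # Special values go into a separate auxiliary document that
            # links back to the root document by its id.
            collections.setdefault("aux", {})[(doc_id, key)] = {
                "auxdoc_of": doc_id, "member": key,
                "format": auxfile_types[key], "value": val}
        elif key != "auxfile_types":
            root[key] = val  # ordinary values stay in the root document
    collections.setdefault("root", {})[doc_id] = root
    return doc_id

db = {}
write_auxtree(db, "id1", {"name": "demo", "big_array": list(range(5)),
                          "auxfile_types": {"big_array": "json"}})
print(db["root"]["id1"]["name"])          # demo
print(("id1", "big_array") in db["aux"])  # True
```

Because auxfile_types is stored in the root document, a later read can rediscover which members live in auxiliary documents and how to deserialize them.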

pygsti.io.add_auxtree_write_ops_and_update_doc(doc, write_ops, mongodb, collection_name, valuedict, auxfile_types=None, init_meta=None, overwrite_existing=False)

Similar to write_auxtree_to_mongodb, but just collect write operations and update a main-doc dictionary.

This function effectively performs all the heavy-lifting to write a dictionary to multiple documents within a MongoDB database without actually executing any write operations. Instead, a dictionary representing the main document (which we typically assume will be written later) is updated and additional write operations (for auxiliary documents) are added to a WriteOpsByCollection object. This function is intended for use within a MongoSerializable-derived object’s _add_auxiliary_write_ops_and_update_doc method.

Parameters

doc : dict

The root-document data, which is updated as needed and is expected to be initialized at least with an _id key-value pair.

write_ops : WriteOpsByCollection

An object that keeps track of pymongo write operations on a per-collection basis. This object accumulates write operations to be performed at some point in the future.

mongodb : pymongo.database.Database

The MongoDB instance that is planned to be written to. Used to test for existing records, not to write to, as writing is assumed to be done later, potentially as a bulk write operation.

collection_name : str

The MongoDB collection within mongodb that is planned to be written to.

valuedict : dict

The dictionary of values to serialize to the database.

auxfile_types : dict, optional

A dictionary whose keys are a subset of the keys of valuedict, and whose values are known “aux-file” types. auxfile_types[key] says that valuedict[key] should be serialized into a separate document with the given format rather than be included directly in the root document. If None, this dictionary is assumed to be valuedict[‘auxfile_types’].

init_meta : dict, optional

A dictionary of “initial” meta-data to be included in the root document (but that isn’t in valuedict). For example, the class name of an object is often stored in the “type” field of the root document when a model object’s .__dict__ is used as valuedict.

overwrite_existing : bool, optional

Whether existing documents should be overwritten. The default of False causes a ValueError to be raised if a document with the given doc_id already exists. Setting this to True mimics the behaviour of a typical filesystem, where writing to a path can be done regardless of whether it already exists.

Returns

bson.objectid.ObjectId

The identifier of the root document that was written.

pygsti.io.remove_auxtree_from_mongodb(mongodb, collection_name, doc_id, auxfile_types_member='auxfile_types', session=None, recursive=None)

Remove some or all of the MongoDB documents written by write_auxtree_to_mongodb.

Removes a root document and possibly auxiliary documents.

Parameters

mongodb : pymongo.database.Database

The MongoDB instance to remove documents from.

collection_name : str

The MongoDB collection within mongodb to remove documents from.

doc_id : bson.objectid.ObjectId

The identifier of the root document stored in the database.

auxfile_types_member : str, optional

The key of the stored document used to describe how other members are serialized into separate “auxiliary” documents. Unless you know what you’re doing, leave this as the default.

session : pymongo.client_session.ClientSession, optional

MongoDB session object to use when interacting with the MongoDB database. This can be used to implement transactions among other things.

recursive : RecursiveRemovalSpecification, optional

An object that filters the type of documents that are removed. Used when working with inter-related experiment designs, data, and results objects to only remove the types of documents you know aren’t being shared with other documents.

Returns

pymongo.results.DeleteResult

The result of deleting (or attempting to delete) the root record.

pygsti.io.read_dict_from_mongodb(mongodb, collection_name, identifying_metadata)

Read a dictionary serialized via write_dict_to_mongodb() into a dictionary.

The elements of the constructed dictionary are stored as separate documents in the specified MongoDB collection.

Parameters

mongodb : pymongo.database.Database

The MongoDB instance to read data from.

collection_name : str

The MongoDB collection within mongodb to read from.

identifying_metadata : dict

JSON-able metadata that identifies the dictionary being retrieved.

Returns

dict

pygsti.io.write_dict_to_mongodb(d, mongodb, collection_name, identifying_metadata, overwrite_existing=False, session=None)

Write each element of d as a separate document in a MongoDB collection.

A document corresponding to each (key, value) pair of d is created that contains:

1. the metadata identifying the collection (identifying_metadata)
2. the pair’s key, stored under the key “key”
3. the pair’s value, stored under the key “value”

If the element is JSON-able, its value is written as a JSON-like dictionary. If not, pickle is used to serialize the element and store it in a bson.binary.Binary object within the database.

Parameters

d : dict

The dictionary of elements to serialize.

mongodb : pymongo.database.Database

The MongoDB instance to write data to.

collection_name : str

The MongoDB collection within mongodb to write to.

identifying_metadata : dict

JSON-able metadata that identifies the dictionary being serialized. This metadata should be saved for later retrieval of the elements of d from the collection.

overwrite_existing : bool, optional

Whether existing documents should be overwritten. The default of False causes a ValueError to be raised if a document with the given doc_id already exists.

session : pymongo.client_session.ClientSession, optional

MongoDB session object to use when interacting with the MongoDB database. This can be used to implement transactions among other things.

Returns

None
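The per-element document shape described above (identifying metadata plus “key” and “value” fields, with a pickle fallback for non-JSON-able values) can be sketched as follows. The make_documents helper and the “value_pickled” field name are assumptions for illustration, not pyGSTi’s actual implementation:

```python
# Sketch of the document shape used when writing a dict element-by-element:
# each (key, value) pair becomes one document carrying the identifying
# metadata plus "key" and "value" fields. Non-JSON-able values fall back to
# pickle; the "value_pickled" field name here is illustrative only.
import json
import pickle

def make_documents(d, identifying_metadata):
    docs = []
    for key, value in d.items():
        doc = dict(identifying_metadata)  # shared identifying metadata
        doc["key"] = key
        try:
            json.dumps(value)             # JSON-able? store it directly
            doc["value"] = value
        except TypeError:                 # otherwise serialize via pickle
            doc["value_pickled"] = pickle.dumps(value)
        docs.append(doc)
    return docs

docs = make_documents({"n": 3, "obj": {1, 2}}, {"owner": "demo"})
print(docs[0])                      # {'owner': 'demo', 'key': 'n', 'value': 3}
print("value_pickled" in docs[1])   # True (a set is not JSON-able)
```

Because every document carries the identifying metadata, a later query on that metadata recovers exactly the documents that make up the dictionary.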

pygsti.io.add_dict_to_mongodb_write_ops(d, write_ops, mongodb, collection_name, identifying_metadata, overwrite_existing)

Similar to write_dict_to_mongodb, but just collect write operations and update a main-doc dictionary.

Parameters

d : dict

The dictionary of elements to serialize.

write_ops : WriteOpsByCollection

An object that keeps track of pymongo write operations on a per-collection basis. This object accumulates write operations to be performed at some point in the future.

mongodb : pymongo.database.Database

The MongoDB instance that is planned to be written to. Used to test for existing records, not to write to, as writing is assumed to be done later, potentially as a bulk write operation.

collection_name : str

The MongoDB collection within mongodb that is planned to be written to.

identifying_metadata : dict

JSON-able metadata that identifies the dictionary being serialized. This metadata should be saved for later retrieval of the elements of d from the collection.

overwrite_existing : bool, optional

Whether existing documents should be overwritten. The default of False causes a ValueError to be raised if a document with the given doc_id already exists.

Returns

None

pygsti.io.remove_dict_from_mongodb(mongodb, collection_name, identifying_metadata, session=None)

Remove the elements (stored as separate documents) of a dictionary from a MongoDB collection.

Parameters

mongodb : pymongo.database.Database

The MongoDB instance to remove data from.

collection_name : str

The MongoDB collection within mongodb to remove documents from.

identifying_metadata : dict

JSON-able metadata that identifies the dictionary being serialized.

session : pymongo.client_session.ClientSession, optional

MongoDB session object to use when interacting with the MongoDB database. This can be used to implement transactions among other things.

Returns

None

pygsti.io.create_mongodb_indices_for_pygsti_collections(mongodb)

Create, if they do not already exist, indices useful for speeding up pyGSTi MongoDB operations.

Indices are created as necessary within pygsti_* collections. While not necessary for database operations, these indices may dramatically speed up the reading and writing of pyGSTi objects to/from a Mongo database. You only need to call this once per database, typically when the database is first set up.

Parameters

mongodb : pymongo.database.Database

The MongoDB instance to create indices in.

Returns

None