pygsti.baseobjs
A sub-package holding utility objects
Subpackages
Submodules
pygsti.baseobjs.advancedoptions
pygsti.baseobjs.basis
pygsti.baseobjs.basisconstructors
pygsti.baseobjs.errorgenbasis
pygsti.baseobjs.errorgenlabel
pygsti.baseobjs.errorgenspace
pygsti.baseobjs.exceptions
pygsti.baseobjs.label
pygsti.baseobjs.mongoserializable
pygsti.baseobjs.nicelyserializable
pygsti.baseobjs.outcomelabeldict
pygsti.baseobjs.polynomial
pygsti.baseobjs.profiler
pygsti.baseobjs.protectedarray
pygsti.baseobjs.qubitgraph
pygsti.baseobjs.resourceallocation
pygsti.baseobjs.smartcache
pygsti.baseobjs.statespace
pygsti.baseobjs.unitarygatefunction
pygsti.baseobjs.verbosityprinter
Package Contents
Classes
- SmartCache : Cache object that profiles itself.
- VerbosityPrinter : Class responsible for logging things to stdout or a file.
- Profiler : Profiler objects are used for tracking both time and memory usage.
- Basis : An ordered set of labeled matrices/vectors.
- BuiltinBasis : A basis that is included within and integrated into pyGSTi.
- ExplicitBasis : A Basis whose elements are specified directly.
- TensorProdBasis : A Basis that is the tensor product of one or more "component" bases.
- DirectSumBasis : A basis that is the direct sum of one or more "component" bases.
- Label : A label used to identify a gate, circuit layer, or (sub-)circuit.
- CircuitLabel : A (sub-)circuit label.
- NicelySerializable : The base class for all "nicely serializable" objects in pyGSTi.
- MongoSerializable : Base class for objects that can be serialized to a MongoDB database.
- OutcomeLabelDict : An ordered dictionary of outcome labels, whose keys are tuple-valued outcome labels.
- StateSpace : Base class for defining a state space (Hilbert or Hilbert-Schmidt space).
- QubitSpace : A state space consisting of N qubits.
- ExplicitStateSpace : A customizable definition of a state space.
- ResourceAllocation : Describes available resources and how they should be allocated.
- QubitGraph : A directed or undirected graph data structure used to represent geometrical layouts of qubits or qubit gates.
- ElementaryErrorgenBasis : A basis for error-generator space defined by a set of elementary error generators.
- ExplicitElementaryErrorgenBasis : A basis for error-generator space defined by a set of elementary error generators.
- CompleteElementaryErrorgenBasis : A basis spanned by the elementary error generators of given type(s) (e.g. "Hamiltonian" and/or "other").
- ErrorgenSpace : A vector space of error generators, spanned by some basis.
- UnitaryGateFunction : A convenient base class for building serializable "functions" for unitary gate matrices.
- class pygsti.baseobjs.SmartCache(decorating=(None, None))
Bases:
object
Cache object that profiles itself
Parameters
- decoratingtuple
module and function being decorated by the smart cache
Attributes
- StaticCacheListlist
A list of all
SmartCache
instances.
Construct a smart cache object
Parameters
- decoratingtuple
module and function being decorated by the smart cache
- StaticCacheList = []
- add_digest(custom)
Add a “custom” digest function, used for hashing otherwise un-hashable types.
Parameters
- customfunction
A hashing function, which takes two arguments: md5 (a running MD5 hash) and val (the value to be hashed). It should call md5.update to add to the running hash, and needn’t return anything.
Returns
None
- low_overhead_cached_compute(fn, arg_vals, kwargs=None)
Cached compute with less profiling. See
cached_compute()
docstring.
Parameters
- fnfunction
Cached function
- arg_valstuple or list
Arguments to cached function
- kwargsdictionary
Keyword arguments to cached function
Returns
key : the key used to hash the function call
result : result of fn called with arg_vals and kwargs
- cached_compute(fn, arg_vals, kwargs=None)
Cached compute that also records the cache's effectiveness.
Parameters
- fnfunction
Cached function
- arg_valstuple or list
Arguments to cached function
- kwargsdictionary
Keyword arguments to cached function
Returns
key : the key used to hash the function call
result : result of fn called with arg_vals and kwargs
- static global_status(printer)
Show the statuses of all Cache objects
Parameters
- printerVerbosityPrinter
The printer to use for output.
Returns
None
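To illustrate the cached_compute pattern above (hash the call, store the result, and count hits and misses), here is a minimal sketch in plain Python. TinyCache and all of its members are hypothetical analogues for illustration only; they are not part of pyGSTi.

```python
import hashlib
import pickle

class TinyCache:
    """Illustrative stand-in for SmartCache's cached_compute pattern."""
    def __init__(self):
        self.cache = {}
        self.hits = 0
        self.misses = 0

    def cached_compute(self, fn, arg_vals, kwargs=None):
        kwargs = kwargs or {}
        # Hash the function name and arguments to form a cache key
        # (SmartCache similarly builds a running MD5 digest of the call).
        md5 = hashlib.md5()
        md5.update(fn.__name__.encode())
        md5.update(pickle.dumps((arg_vals, sorted(kwargs.items()))))
        key = md5.hexdigest()
        if key in self.cache:
            self.hits += 1
        else:
            self.misses += 1
            self.cache[key] = fn(*arg_vals, **kwargs)
        return key, self.cache[key]

cache = TinyCache()
key1, val1 = cache.cached_compute(pow, (2, 10))
key2, val2 = cache.cached_compute(pow, (2, 10))  # second call is served from the cache
```

Note that, as in the documented API, the call returns both the key and the result, so callers can inspect or invalidate specific entries.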
- class pygsti.baseobjs.VerbosityPrinter(verbosity=1, filename=None, comm=None, warnings=True, split=False, clear_file=True)
Bases:
object
Class responsible for logging things to stdout or a file.
Controls verbosity and can print progress bars. ex:
>>> VerbosityPrinter(1)
would construct a printer that printed out messages of level one or higher to the screen.
>>> VerbosityPrinter(3, 'output.txt')
would construct a printer that sends verbose output to a text file
The static function
create_printer()
will construct a printer from either an integer or an already existing printer. It is a static method of the VerbosityPrinter class, so it is called like so:
>>> VerbosityPrinter.create_printer(2)
or
>>> VerbosityPrinter.create_printer(VerbosityPrinter(3, 'output.txt'))
>>> printer.log('status')
would log ‘status’ if the printer’s verbosity was one or higher.
>>> printer.log('status2', 2)
would log ‘status2’ if the printer’s verbosity was two or higher.
>>> printer.error('something terrible happened')
would ALWAYS log ‘something terrible happened’.
>>> printer.warning('something worrisome happened')
would log if verbosity was one or higher - the same as a normal status.
Both printer.error and printer.warning will prepend ‘ERROR: ’ or ‘WARNING: ’ to the message they are given. Optionally, printer.log() can also prepend ‘Status_n’ to the message, where n is the message level.
Logging of progress bars/iterations:
>>> with printer_instance.progress_logging(verbosity):
>>>     for i, item in enumerate(data):
>>>         printer.show_progress(i, len(data))
>>>         printer.log(...)
will output either a progress bar or iteration statuses depending on the printer’s verbosity
Parameters
- verbosityint
How verbose the printer should be.
- filenamestr, optional
Where to put output (If none, output goes to screen)
- commmpi4py.MPI.Comm or ResourceAllocation, optional
Restricts output if the program is running in parallel (by default, if the rank is 0, output is sent to screen, and otherwise sent to comm files 1, 2, …).
- warningsbool, optional
Whether or not to print warnings
- splitbool, optional
Whether to split output between stdout and stderr as appropriate, or to combine the streams so everything is sent to stdout.
- clear_filebool, optional
Whether or not filename should be cleared (overwritten) or simply appended to.
Attributes
- _comm_pathstr
relative path where comm files (outputs of non-root ranks) are stored.
- _comm_file_namestr
root filename for comm files (outputs of non-root ranks).
- _comm_file_extstr
filename extension for comm files (outputs of non-root ranks).
Customize a verbosity printer object
Parameters
- verbosityint, optional
How verbose the printer should be.
- filenamestr, optional
Where to put output (If none, output goes to screen)
- commmpi4py.MPI.Comm or ResourceAllocation, optional
Restricts output if the program is running in parallel (by default, if the rank is 0, output is sent to screen, and otherwise sent to comm files 1, 2, …).
- warningsbool, optional
Whether or not to print warnings
- clone()
Instead of deepcopy, initialize a new printer object and feed it some select deepcopied members
Returns
VerbosityPrinter
- static create_printer(verbosity, comm=None)
Function for converting between interfaces
Parameters
- verbosityint or VerbosityPrinter object, required:
object to build a printer from
- commmpi4py.MPI.Comm object, optional
Comm object to build printers with. Note: this will override the comm of an existing printer.
Returns
- VerbosityPrinter :
The printer object, constructed from either an integer or another printer
- error(message)
Log an error to the screen/file
Parameters
- messagestr
the error message
Returns
None
- warning(message)
Log a warning to the screen/file if verbosity > 1
Parameters
- messagestr
the warning message
Returns
None
- log(message, message_level=None, indent_char=' ', show_statustype=False, do_indent=True, indent_offset=0, end='\n', flush=True)
Log a status message to screen/file.
Determines whether the message should be printed based on current verbosity setting, then sends the message to the appropriate output
Parameters
- messagestr
the message to print (or log)
- message_levelint, optional
the minimum verbosity level at which this message is printed.
- indent_charstr, optional
what constitutes an “indent” (messages at higher levels are indented more when do_indent=True).
- show_statustypebool, optional
if True, prepend lines with “Status Level X” indicating the message_level.
- do_indentbool, optional
whether messages at higher message levels should be indented. Note that if this is False it may be helpful to set show_statustype=True.
- indent_offsetint, optional
an additional number of indentations to add, on top of any due to the message level.
- endstr, optional
the character (or string) to end message lines with.
- flushbool, optional
whether stdout should be flushed right after this message is printed (this avoids delays in on-screen output due to buffering).
Returns
None
- verbosity_env(level)
Create a temporary environment with a different verbosity level.
This is context manager, controlled using Python’s with statement:
>>> with printer.verbosity_env(2):
>>>     printer.log('Message1')  # printed at verbosity level 2
>>>     printer.log('Message2')  # printed at verbosity level 2
Parameters
- levelint
the verbosity level of the environment.
- progress_logging(message_level=1)
Context manager for logging progress bars/iterations.
(The printer will return to its normal, unrestricted state when the progress logging has finished)
Parameters
- message_levelint, optional
progress messages will not be shown until the verbosity level reaches message_level.
- show_progress(iteration, total, bar_length=50, num_decimals=2, fill_char='#', empty_char='-', prefix='Progress:', suffix='', verbose_messages=None, indent_char=' ', end='\n')
Displays a progress message (to be used within a progress_logging block).
Parameters
- iterationint
the 0-based current iteration, i.e. the iteration number this message is for.
- totalint
the total number of iterations expected.
- bar_lengthint, optional
the length, in characters, of a text-format progress bar (only used when the verbosity level is exactly equal to the progress_logging message level).
- num_decimalsint, optional
number of places after the decimal point that are displayed in progress bar’s percentage complete.
- fill_charstr, optional
replaces ‘#’ as the bar-filling character
- empty_charstr, optional
replaces ‘-’ as the empty-bar character
- prefixstr, optional
message in front of the bar
- suffixstr, optional
message after the bar
- verbose_messageslist, optional
A list of strings to display after an initial “Iter X of Y” line when the verbosity level is higher than the progress_logging message level and so more verbose messages are shown (and a progress bar is not). The elements of verbose_messages will occur, one per line, after the initial “Iter X of Y” line.
- indent_charstr, optional
what constitutes an “indentation”.
- endstr, optional
the character (or string) to end message lines with.
Returns
None
- start_recording()
Begins recording the output (to memory).
Begins recording (in memory) a list of (type, verbosityLevel, message) tuples that is returned by the next call to
stop_recording()
.
Returns
None
- stop_recording()
Stops recording and returns recorded output.
Stops a “recording” started by
start_recording()
and returns the list of (type, verbosityLevel, message) tuples that have been recorded since then.
Returns
list
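The level-gating rule described above (a message appears only when its level is at or below the printer's verbosity, while errors are always shown) can be sketched without pygsti. MiniPrinter below is a hypothetical analogue that captures messages in a list instead of printing, purely for illustration:

```python
class MiniPrinter:
    """Illustrative analogue of VerbosityPrinter's verbosity gating."""
    def __init__(self, verbosity=1):
        self.verbosity = verbosity
        self.lines = []  # captured instead of printed, for demonstration

    def log(self, message, message_level=1):
        # Show the message only if the printer is at least as verbose
        # as the message's level.
        if self.verbosity >= message_level:
            self.lines.append(message)

    def warning(self, message):
        self.log("WARNING: " + message, 1)

    def error(self, message):
        self.log("ERROR: " + message, 0)  # level 0: always shown

p = MiniPrinter(verbosity=1)
p.log("status")        # level 1 -> shown at verbosity 1
p.log("status2", 2)    # level 2 -> suppressed at verbosity 1
p.error("bad thing")   # always shown, with "ERROR: " prepended
```

The real class adds file output, MPI-aware routing, indentation, and progress bars on top of this same gating idea.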
- class pygsti.baseobjs.Profiler(comm=None, default_print_memcheck=False)
Bases:
object
Profiler objects are used for tracking both time and memory usage.
Parameters
- commmpi4py.MPI.Comm, optional
The active MPI communicator.
- default_print_memcheckbool, optional
Whether to print memory checks.
Construct a new Profiler instance.
Parameters
- commmpi4py.MPI.Comm, optional
MPI communicator so only profile and print messages on root proc.
- add_time(name, start_time, prefix=0)
Adds an elapsed time to a named “timer”-type accumulator.
Parameters
- namestring
The name of the timer to add elapsed time into (if the name doesn’t exist, one is created and initialized to the elapsed time).
- start_timefloat
The starting time used to compute the elapsed time, i.e. the value time.time()-start_time, which is added to the named timer.
- prefixint, optional
Prefixes the timer name with the current stack depth and this number of function names, starting with the current function and moving up the call stack. When zero, no prefix is added. For example, with prefix == 1, “Total” might map to “ 3: myFunc: Total”.
Returns
None
- add_count(name, inc=1, prefix=0)
Adds a given value to a named “counter”-type accumulator.
Parameters
- namestring
The name of the counter to add inc into (if the name doesn’t exist, one is created and initialized to inc).
- incint, optional
The increment (the value to add to the counter).
- prefixint, optional
Prefixes the timer name with the current stack depth and this number of function names, starting with the current function and moving up the call stack. When zero, no prefix is added. For example, with prefix == 1, “Total” might map to “ 3: myFunc: Total”.
Returns
None
- memory_check(name, printme=None, prefix=0)
Record the memory usage at this point and tag with a name.
Parameters
- namestring
The name of the memory checkpoint. (Later, memory information can be organized by checkpoint name.)
- printmebool, optional
Whether or not to print the memory usage during this function call (if None, the default, then the value of default_print_memcheck specified during Profiler construction is used).
- prefixint, optional
Prefixes the timer name with the current stack depth and this number of function names, starting with the current function and moving up the call stack. When zero, no prefix is added. For example, with prefix == 1, “Total” might map to “ 3: myFunc: Total”.
Returns
None
- print_memory(name, show_minmax=False)
Prints the current memory usage (but doesn’t store it).
Useful for debugging, this function prints the current memory usage - optionally giving the minimum, maximum, and average across all the processors.
Parameters
- namestring
A label to print before the memory usage number(s).
- show_minmaxbool, optional
If True and there are multiple processors, print the min, average, and max memory usage from among the processors. Note that this will invoke MPI collective communication and so this print_memory call must be executed by all the processors. If False and there are multiple processors, only the rank 0 processor prints output.
Returns
None
- print_message(msg, all_ranks=False)
Prints a message to stdout, possibly from all ranks.
A utility function used in debugging, this function offers a convenient way to print a message on only the root processor or on all processors.
Parameters
- msgstring
The message to print.
- all_ranksbool, optional
If True, all processors will print msg, preceded by their rank label (e.g. “Rank4: “). If False, only the rank 0 processor will print the message.
Returns
None
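The accumulator pattern behind add_time and add_count (add time.time() - start_time, or an increment, into a named bucket, creating the bucket on first use) can be sketched in plain Python. MiniProfiler is a hypothetical analogue for illustration, not the pygsti class:

```python
import time

class MiniProfiler:
    """Illustrative analogue of Profiler's named timer/counter accumulators."""
    def __init__(self):
        self.timers = {}
        self.counters = {}

    def add_time(self, name, start_time):
        # Accumulate elapsed time into the named timer, creating it if needed.
        elapsed = time.time() - start_time
        self.timers[name] = self.timers.get(name, 0.0) + elapsed

    def add_count(self, name, inc=1):
        # Accumulate an increment into the named counter, creating it if needed.
        self.counters[name] = self.counters.get(name, 0) + inc

prof = MiniProfiler()
t0 = time.time()
total = sum(range(100000))  # some work to time
prof.add_time("summing", t0)
prof.add_count("iterations", 100000)
```

Calling add_time repeatedly with the same name keeps a running total, which is what makes the pattern useful for profiling code sections that execute many times.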
- class pygsti.baseobjs.Basis(name, longname, real, sparse)
Bases:
pygsti.baseobjs.nicelyserializable.NicelySerializable
An ordered set of labeled matrices/vectors.
The base class for basis objects. A basis in pyGSTi is an abstract notion of a set of labeled elements, or “vectors”. Each basis has a certain size, and has .elements, .labels, and .ellookup members, the latter being a dictionary mapping of labels to elements.
An important point to note that isn’t immediately intuitive is that while a Basis object holds elements (in its .elements property), these are not the same as its vectors (given by the object’s vector_elements property). Often, in what we term a “simple” basis, you just flatten an element to get the corresponding vector element. This works for bases whose elements are either vectors (where flattening does nothing) or matrices. By storing elements as distinct from vector_elements, the Basis can capture additional structure of the elements (such as viewing them as matrices) that can be helpful for their display and interpretation. The elements are also sometimes referred to as the “natural elements” because they represent how to display the element in a natural way. A non-simple basis occurs when vector_elements need to be stored as elements in a larger “embedded” way so that these elements can be displayed and interpreted naturally.
A second important note is that there is assumed to be some underlying “standard” basis underneath all the bases in pyGSTi. The elements in a Basis are always written in this standard basis. In the case of the “std”-named basis in pyGSTi, these elements are just the trivial vector or matrix units, so one can rightly view the “std” pyGSTi basis as the “standard” basis for that particular dimension.
The arguments below describe the basic properties of all basis objects in pyGSTi. It is important to remember that the vector_elements of a basis are different from its elements (see the
Basis
docstring), and that dim refers to the vector elements whereas elshape refers to the elements.
For example, consider a 2-element Basis containing the I and X Pauli matrices. The size of this basis is 2, as there are two elements (and two vector elements). Since vector elements are the length-4 flattened Pauli matrices, the dimension (dim) is 4. Since the elements are 2x2 Pauli matrices, the elshape is (2,2).
As another example consider a basis which spans all the diagonal 2x2 matrices. The elements of this basis are the two matrix units with a 1 in the (0,0) or (1,1) location. The vector elements, however, are the length-2 [1,0] and [0,1] vectors obtained by extracting just the diagonal entries from each basis element. Thus, for this basis, size=2, dim=2, and elshape=(2,2) - so the dimension is not just the product of elshape entries (equivalently, elsize).
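The I/X example above can be checked numerically with plain Python lists (no pygsti needed): flattening an element yields its vector element, and size, dim, and elshape fall out directly.

```python
# A 2-element "simple" basis: the I and X Pauli matrices (2x2 elements).
I = [[1, 0], [0, 1]]
X = [[0, 1], [1, 0]]
elements = [I, X]

# Vector elements are the flattened (length-4) matrices.
vector_elements = [[entry for row in el for entry in row] for el in elements]

size = len(elements)           # number of elements: 2
dim = len(vector_elements[0])  # length of a vector element: 4
elshape = (len(I), len(I[0]))  # shape of each element: (2, 2)
```

The diagonal-matrix example works the same way except that its vector elements are not full flattenings, which is exactly what makes that basis non-simple.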
Parameters
- namestring
The name of the basis. This can be anything, but is usually short and abbreviated. There are several types of bases built into pyGSTi that can be constructed by this name.
- longnamestring
A more descriptive name for the basis.
- realbool
Elements and vector elements are always allowed to have complex entries. This argument indicates whether the coefficients in the expression of an arbitrary vector in this basis must be real. For example, if real=True, then when pyGSTi transforms a vector in some other basis to a vector in this basis, it will demand that the values of that vector (i.e. the coefficients which multiply this basis’s elements to obtain a vector in the “standard” basis) are real.
- sparsebool
Whether the elements of .elements for this Basis are stored (when they are stored at all) as sparse matrices or vectors.
Attributes
- dimint
The dimension of the vector space this basis fully or partially spans. Equivalently, the length of the vector_elements of the basis.
- sizeint
The number of elements (or vector-elements) in the basis.
- elshapeint
The shape of each element. Typically either a length-1 or length-2 tuple, corresponding to vector or matrix elements, respectively. Note that vector elements always have shape (dim,) (or (dim,1) in the sparse case).
- elndimint
The number of element dimensions, i.e. len(self.elshape)
- elsizeint
The total element size, i.e. product(self.elshape)
- vector_elementslist
The “vectors” of this basis, always 1D (sparse or dense) arrays.
- abstract property dim
The dimension of the vector space this basis fully or partially spans. Equivalently, the length of the vector_elements of the basis.
- abstract property size
The number of elements (or vector-elements) in the basis.
- abstract property elshape
The shape of each element. Typically either a length-1 or length-2 tuple, corresponding to vector or matrix elements, respectively. Note that vector elements always have shape (dim,) (or (dim,1) in the sparse case).
- property first_element_is_identity
True if the first element of this basis is proportional to the identity matrix, False otherwise.
- property vector_elements
The “vectors” of this basis, always 1D (sparse or dense) arrays.
Returns
- list
A list of 1D arrays.
- property to_std_transform_matrix
Retrieve the matrix that transforms a vector from this basis to the standard basis of this basis’s dimension.
Returns
- numpy array or scipy.sparse.lil_matrix
An array of shape (dim, size) where dim is the dimension of this basis (the length of its vectors) and size is the size of this basis (its number of vectors).
- property from_std_transform_matrix
Retrieve the matrix that transforms vectors from the standard basis to this basis.
Returns
- numpy array or scipy sparse matrix
An array of shape (size, dim) where dim is the dimension of this basis (the length of its vectors) and size is the size of this basis (its number of vectors).
- property to_elementstd_transform_matrix
Get transformation matrix from this basis to the “element space”.
Get the matrix that transforms vectors in this basis (with length equal to the dim of this basis) to vectors in the “element space” - that is, vectors in the same standard basis that the elements of this basis are expressed in.
Returns
- numpy array
An array of shape (element_dim, size) where element_dim is the dimension, i.e. size, of the elements of this basis (e.g. 16 if the elements are 4x4 matrices) and size is the size of this basis (its number of vectors).
- property from_elementstd_transform_matrix
Get transformation matrix from “element space” to this basis.
Get the matrix that transforms vectors in the “element space” - that is, vectors in the same standard basis that the elements of this basis are expressed in - to vectors in this basis (with length equal to the dim of this basis).
Returns
- numpy array
An array of shape (size, element_dim) where element_dim is the dimension, i.e. size, of the elements of this basis (e.g. 16 if the elements are 4x4 matrices) and size is the size of this basis (its number of vectors).
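As a sketch of what to_std_transform_matrix contains: its columns are the vector elements of the basis, so multiplying it by a coefficient vector expresses that vector in the standard basis. This plain-Python illustration (no pygsti; the variable names are hypothetical) uses the flattened I and X Pauli matrices:

```python
# Vector elements of a 2-element basis: flattened I and X Pauli matrices.
vec_I = [1, 0, 0, 1]
vec_X = [0, 1, 1, 0]
columns = [vec_I, vec_X]

# The (dim x size) transform matrix to the standard basis has these
# vectors as its columns, matching the shape documented above.
dim, size = len(vec_I), len(columns)
to_std = [[columns[j][i] for j in range(size)] for i in range(dim)]

# A vector [a, b] in this basis corresponds to a*I + b*X in the standard basis.
coeffs = [2.0, 3.0]
std_vec = [sum(to_std[i][j] * coeffs[j] for j in range(size)) for i in range(dim)]
```

from_std_transform_matrix goes the other way; for a complete orthonormal basis it is the (pseudo)inverse of this matrix.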
- classmethod cast(name_or_basis_or_matrices, dim=None, sparse=None, classical_name='cl')
Convert various things that can describe a basis into a Basis object.
Parameters
- name_or_basis_or_matricesvarious
Can take on a variety of values to produce different types of bases:
None: an empty ExplicitBasis
Basis: checked with dim and sparse and passed through.
str: BuiltinBasis or DirectSumBasis with the given name.
- list: an ExplicitBasis if given matrices/vectors, or a DirectSumBasis if given (name, dim) pairs.
- dimint or StateSpace, optional
The dimension of the basis to create. Sometimes this can be inferred based on name_or_basis_or_matrices, other times it must be supplied. This is the dimension of the space that this basis fully or partially spans. This is equal to the number of basis elements in a “full” (ordinary) basis. When a StateSpace object is given, a more detailed direct-sum-of-tensor-product-blocks structure for the state space (rather than a single dimension) is described, and a basis is produced for this space. For instance, a DirectSumBasis basis of TensorProdBasis components can result when there are multiple tensor-product blocks and these blocks consist of multiple factors.
- sparsebool, optional
Whether the resulting basis should be “sparse”, meaning that its elements will be sparse rather than dense matrices.
- classical_namestr, optional
An alternate builtin basis name that should be used when constructing the bases for the classical sectors of dim, when dim is a StateSpace object.
Returns
Basis
- is_simple()
Whether the flattened-element vector space is the same space as the space this basis’s vectors belong to.
Returns
bool
- is_complete()
Whether this is a complete basis, i.e. this basis’s vectors span the entire space that they live in.
Returns
bool
- is_partial()
The negative of
is_complete()
, effectively “is_incomplete”.
Returns
bool
- with_sparsity(desired_sparsity)
Returns either this basis or a copy of it with the desired sparsity.
If this basis has the desired sparsity it is simply returned. If not, this basis is copied to one that does.
Parameters
- desired_sparsitybool
The sparsity (True for sparse elements, False for dense elements) that is desired.
Returns
Basis
- is_equivalent(other, sparseness_must_match=True)
Tests whether this basis is equal to another basis, optionally ignoring sparseness.
Parameters
- otherBasis or str
The basis to compare with.
- sparseness_must_matchbool, optional
If False then comparison ignores differing sparseness, and this function returns True when the two bases are equal except for their .sparse values.
Returns
bool
- create_transform_matrix(to_basis)
Get the matrix that transforms a vector from this basis to to_basis.
Parameters
- to_basisBasis or string
The basis to transform to or a built-in basis name. In the latter case, a basis to transform to is built with the same structure as this basis but with all components constructed from the given name.
Returns
numpy.ndarray (even if basis is sparse)
- reverse_transform_matrix(from_basis)
Get the matrix that transforms a vector from from_basis to this basis.
The reverse of
create_transform_matrix()
.
Parameters
- from_basisBasis or string
The basis to transform from or a built-in basis name. In the latter case, a basis to transform from is built with the same structure as this basis but with all components constructed from the given name.
Returns
numpy.ndarray (even if basis is sparse)
- is_normalized()
Check if a basis is normalized, meaning that Tr(Bi Bi) = 1.0.
Available only to bases whose elements are matrices for now.
Returns
bool
- create_equivalent(builtin_basis_name)
Create an equivalent basis with components of type builtin_basis_name.
Create a
Basis
that is equivalent in structure & dimension to this basis but whose simple components (perhaps just this basis itself) are of the builtin basis type given by builtin_basis_name.
Parameters
- builtin_basis_namestr
The name of a builtin basis, e.g. “pp”, “gm”, or “std”. Used to construct the simple components of the returned basis.
Returns
Basis
- create_simple_equivalent(builtin_basis_name=None)
Create a basis of type builtin_basis_name whose elements are compatible with this basis.
Create a simple basis, and one without components (e.g. a
TensorProdBasis
is a simple basis but has components), of the builtin type specified, whose dimension is compatible with the elements of this basis. This function might also be named “element_equivalent”, as it returns the builtin_basis_name-analogue of the standard basis that this basis’s elements are expressed in.
Parameters
- builtin_basis_namestr, optional
The name of the built-in basis to use. If None, then a copy of this basis is returned (if it’s simple) or this basis’s name is used to try to construct a simple and component-free version of the same builtin-basis type.
Returns
Basis
- class pygsti.baseobjs.BuiltinBasis(name, dim_or_statespace, sparse=False)
Bases:
LazyBasis
A basis that is included within and integrated into pyGSTi.
Such bases may, in most cases, be represented merely by their name. (In actuality, a dimension is also required, but this can often be inferred from context.)
Parameters
- name{“pp”, “gm”, “std”, “qt”, “id”, “cl”, “sv”}
Name of the basis to be created.
- dim_or_statespaceint or StateSpace
The dimension of the basis to be created or the state space for which a basis should be created. Note that when this is an integer it is the dimension of the vectors, which correspond to flattened elements in simple cases. Thus, a 1-qubit basis would have dimension 2 in the state-vector (name=”sv”) case and dimension 4 when constructing a density-matrix basis (e.g. name=”pp”).
- sparsebool, optional
Whether basis elements should be stored as SciPy CSR sparse matrices or dense numpy arrays (the default).
Creates a new LazyBasis. Parameters are the same as those to
Basis.__init__()
.
- property dim
The dimension of the vector space this basis fully or partially spans. Equivalently, the length of the vector_elements of the basis.
- property size
The number of elements (or vector-elements) in the basis.
- property elshape
The shape of each element. Typically either a length-1 or length-2 tuple, corresponding to vector or matrix elements, respectively. Note that vector elements always have shape (dim,) (or (dim,1) in the sparse case).
- property first_element_is_identity
True if the first element of this basis is proportional to the identity matrix, False otherwise.
- is_equivalent(other, sparseness_must_match=True)
Tests whether this basis is equal to another basis, optionally ignoring sparseness.
Parameters
- otherBasis or str
The basis to compare with.
- sparseness_must_matchbool, optional
If False then comparison ignores differing sparseness, and this function returns True when the two bases are equal except for their .sparse values.
Returns
bool
- class pygsti.baseobjs.ExplicitBasis(elements, labels=None, name=None, longname=None, real=False, sparse=None, vector_elements=None)
Bases:
Basis
A Basis whose elements are specified directly.
All explicit bases are simple: their vector space is taken to be that of the flattened elements unless separate vector_elements are given.
Parameters
- elementsnumpy.ndarray
The basis elements (sometimes different from the vectors)
- labelslist
The basis labels
- namestr, optional
The name of this basis. If None, then a name will be automatically generated.
- longnamestr, optional
A more descriptive name for this basis. If None, then the short name will be used.
- realbool, optional
Whether the coefficients in the expression of an arbitrary vector as a linear combination of this basis’s elements must be real.
- sparsebool, optional
Whether the elements of this Basis are stored as sparse matrices or vectors. If None, then this is automatically determined by the type of the initial object: elements[0] (sparse=False is used when len(elements) == 0).
- vector_elementsnumpy.ndarray, optional
A list or array of the 1D vectors corresponding to each element. If None, then the flattened elements are used as vectors. The size of these vectors sets the dimension of the basis.
Attributes
- Countint
The number of custom bases, used for serialized naming
Create a new ExplicitBasis.
Parameters
- elementsiterable
A list of the elements of this basis.
- labelsiterable, optional
A list of the labels corresponding to the elements of elements. If given, len(labels) must equal len(elements).
- namestr, optional
The name of this basis. If None, then a name will be automatically generated.
- longnamestr, optional
A more descriptive name for this basis. If None, then the short name will be used.
- realbool, optional
Whether the coefficients in the expression of an arbitrary vector as a linear combination of this basis’s elements must be real.
- sparsebool, optional
Whether the elements of this Basis are stored as sparse matrices or vectors. If None, then this is automatically determined by the type of the initial object: elements[0] (sparse=False is used when len(elements) == 0).
- vector_elementsnumpy.ndarray, optional
A list or array of the 1D vectors corresponding to each element. If None, then the flattened elements are used as vectors. The size of these vectors sets the dimension of the basis.
- property dim
The dimension of the vector space this basis fully or partially spans. Equivalently, the length of the vector_elements of the basis.
- property size
The number of elements (or vector-elements) in the basis.
- property elshape
The shape of each element. Typically either a length-1 or length-2 tuple, corresponding to vector or matrix elements, respectively. Note that vector elements always have shape (dim,) (or (dim,1) in the sparse case).
- property vector_elements
The “vectors” of this basis, always 1D (sparse or dense) arrays.
Returns
- list
A list of 1D arrays.
- Count = 0
- class pygsti.baseobjs.TensorProdBasis(component_bases, name=None, longname=None)
Bases:
LazyBasis
A Basis that is the tensor product of one or more “component” bases.
The elements of a TensorProdBasis consist of all tensor products of component basis elements (respecting the order given). The components of a TensorProdBasis must be simple bases so that kronecker products can be used to produce the parent basis’s elements.
A TensorProdBasis is a “simple” basis in that its flattened elements do correspond to its vectors.
Parameters
- component_bases : iterable
A list of the component bases. Each list element may be either a Basis object or a tuple of arguments to Basis.cast(), e.g. (‘pp’, 4).
- name : str, optional
The name of this basis. If None, the name is formed by joining the component bases’ names with “*”.
- longname : str, optional
A longer description of this basis. If None, then a long name is automatically generated.
Create a new TensorProdBasis whose elements are the tensor products of the elements of a set of “component” bases.
Parameters
- component_bases : iterable
A list of the component bases. Each list element may be either a Basis object or a tuple of arguments to Basis.cast(), e.g. (‘pp’, 4).
- name : str, optional
The name of this basis. If None, the name is formed by joining the component bases’ names with “*”.
- longname : str, optional
A longer description of this basis. If None, then a long name is automatically generated.
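The Kronecker-product construction described above can be sketched with plain NumPy. This is an illustration only: the 2x2 matrices below are ad-hoc stand-ins, not pyGSTi's actual (normalized) basis elements.

```python
import numpy as np

# Two hypothetical "component" bases, each a list of 2x2 matrix elements
# (illustrative stand-ins; real component bases come from Basis.cast).
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
component_a = [I2, X]
component_b = [I2, X]

# The parent basis's elements are all Kronecker products of component
# elements, respecting the component order.
elements = [np.kron(a, b) for a in component_a for b in component_b]

# Sizes multiply (2 * 2 = 4 elements) and each element is 4x4, so the
# flattened elements are the length-16 vectors of this "simple" basis.
assert len(elements) == 4
assert elements[0].shape == (4, 4)
```

This is why the components must themselves be simple bases: the Kronecker product acts on their matrix elements directly.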
- property dim
The dimension of the vector space this basis fully or partially spans. Equivalently, the length of the vector_elements of the basis.
- property size
The number of elements (or vector-elements) in the basis.
- property elshape
The shape of each element. Typically either a length-1 or length-2 tuple, corresponding to vector or matrix elements, respectively. Note that vector elements always have shape (dim,) (or (dim,1) in the sparse case).
- is_equivalent(other, sparseness_must_match=True)
Tests whether this basis is equal to another basis, optionally ignoring sparseness.
Parameters
- other : Basis or str
The basis to compare with.
- sparseness_must_match : bool, optional
If False then comparison ignores differing sparseness, and this function returns True when the two bases are equal except for their .sparse values.
Returns
bool
- create_equivalent(builtin_basis_name)
Create an equivalent basis with components of type builtin_basis_name.
Create a Basis that is equivalent in structure & dimension to this basis but whose simple components (perhaps just this basis itself) are of the builtin basis type given by builtin_basis_name.
Parameters
- builtin_basis_name : str
The name of a builtin basis, e.g. “pp”, “gm”, or “std”. Used to construct the simple components of the returned basis.
Returns
TensorProdBasis
- create_simple_equivalent(builtin_basis_name=None)
Create a basis of type builtin_basis_name whose elements are compatible with this basis.
Create a basis that is both simple and without components (a TensorProdBasis, for example, is a simple basis that has components) of the specified builtin type, whose dimension is compatible with the elements of this basis. This function might also be named “element_equivalent”, as it returns the builtin_basis_name-analogue of the standard basis that this basis’s elements are expressed in.
Parameters
- builtin_basis_name : str, optional
The name of the built-in basis to use. If None, then a copy of this basis is returned (if it’s simple) or this basis’s name is used to try to construct a simple and component-free version of the same builtin-basis type.
Returns
Basis
- class pygsti.baseobjs.DirectSumBasis(component_bases, name=None, longname=None)
Bases:
LazyBasis
A basis that is the direct sum of one or more “component” bases.
Elements of this basis are the union of the basis elements on each component, each embedded into a common block-diagonal structure where each component occupies its own block. Thus, when there is more than one component, a DirectSumBasis is not a simple basis because the size of its elements is larger than the size of its vector space (which corresponds to just the diagonal blocks of its elements).
Parameters
- component_bases : iterable
A list of the component bases. Each list element may be either a Basis object or a tuple of arguments to Basis.cast(), e.g. (‘pp’, 4).
- name : str, optional
The name of this basis. If None, the name is formed by joining the component bases’ names with “+”.
- longname : str, optional
A longer description of this basis. If None, then a long name is automatically generated.
Attributes
- vector_elements : list
The “vectors” of this basis, always 1D (sparse or dense) arrays.
Create a new DirectSumBasis - a basis for a space that is the direct-sum of the spaces spanned by other “component” bases.
Parameters
- component_bases : iterable
A list of the component bases. Each list element may be either a Basis object or a tuple of arguments to Basis.cast(), e.g. (‘pp’, 4).
- name : str, optional
The name of this basis. If None, the name is formed by joining the component bases’ names with “+”.
- longname : str, optional
A longer description of this basis. If None, then a long name is automatically generated.
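The block-diagonal embedding described above can be sketched with a small NumPy helper (illustrative only, assuming hypothetical component dimensions; `embed_in_block` is not a pyGSTi function):

```python
import numpy as np

def embed_in_block(element, block_dims, block_index):
    """Place `element` in the `block_index`-th diagonal block of a
    block-diagonal matrix whose diagonal blocks have sizes `block_dims`;
    everything outside that block is zero."""
    total = sum(block_dims)
    out = np.zeros((total, total))
    start = sum(block_dims[:block_index])
    d = block_dims[block_index]
    out[start:start + d, start:start + d] = element
    return out

# A 2x2 element of the first component of a direct sum with block
# dimensions [2, 1], embedded into the common 3x3 structure:
x = np.array([[0.0, 1.0], [1.0, 0.0]])
embedded = embed_in_block(x, [2, 1], 0)
assert embedded.shape == (3, 3)   # elements are larger (3x3 = 9 entries)
assert embedded[2, 2] == 0.0      # than the component's own block
```

The size mismatch visible here (9-entry elements for a vector space built from just the diagonal blocks) is exactly why a multi-component DirectSumBasis is not a simple basis.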
- property dim
The dimension of the vector space this basis fully or partially spans. Equivalently, the length of the vector_elements of the basis.
- property size
The number of elements (or vector-elements) in the basis.
- property elshape
The shape of each element. Typically either a length-1 or length-2 tuple, corresponding to vector or matrix elements, respectively. Note that vector elements always have shape (dim,) (or (dim,1) in the sparse case).
- property vector_elements
The “vectors” of this basis, always 1D (sparse or dense) arrays.
Returns
list
- property to_std_transform_matrix
Retrieve the matrix that transforms a vector from this basis to the standard basis of this basis’s dimension.
Returns
- numpy array or scipy.sparse.lil_matrix
An array of shape (dim, size) where dim is the dimension of this basis (the length of its vectors) and size is the size of this basis (its number of vectors).
- property to_elementstd_transform_matrix
Get transformation matrix from this basis to the “element space”.
Get the matrix that transforms vectors in this basis (with length equal to the dim of this basis) to vectors in the “element space” - that is, vectors in the same standard basis that the elements of this basis are expressed in.
Returns
- numpy array
An array of shape (element_dim, size) where element_dim is the dimension, i.e. size, of the elements of this basis (e.g. 16 if the elements are 4x4 matrices) and size is the size of this basis (its number of vectors).
- is_equivalent(other, sparseness_must_match=True)
Tests whether this basis is equal to another basis, optionally ignoring sparseness.
Parameters
- other : Basis or str
The basis to compare with.
- sparseness_must_match : bool, optional
If False then comparison ignores differing sparseness, and this function returns True when the two bases are equal except for their .sparse values.
Returns
bool
- create_equivalent(builtin_basis_name)
Create an equivalent basis with components of type builtin_basis_name.
Create a Basis that is equivalent in structure & dimension to this basis but whose simple components (perhaps just this basis itself) are of the builtin basis type given by builtin_basis_name.
Parameters
- builtin_basis_name : str
The name of a builtin basis, e.g. “pp”, “gm”, or “std”. Used to construct the simple components of the returned basis.
Returns
DirectSumBasis
- create_simple_equivalent(builtin_basis_name=None)
Create a basis of type builtin_basis_name whose elements are compatible with this basis.
Create a basis that is both simple and without components (a TensorProdBasis, for example, is a simple basis that has components) of the specified builtin type, whose dimension is compatible with the elements of this basis. This function might also be named “element_equivalent”, as it returns the builtin_basis_name-analogue of the standard basis that this basis’s elements are expressed in.
Parameters
- builtin_basis_name : str, optional
The name of the built-in basis to use. If None, then a copy of this basis is returned (if it’s simple) or this basis’s name is used to try to construct a simple and component-free version of the same builtin-basis type.
Returns
Basis
- class pygsti.baseobjs.Label
Bases:
object
A label used to identify a gate, circuit layer, or (sub-)circuit.
A label consists of a string along with a tuple of integers or sector names specifying which qubits (or, more generally, which parts of the Hilbert space) are acted upon by the labeled object.
Creates a new Model-item label, which is divided into a simple string label and a tuple specifying the part of the Hilbert space upon which the item acts (often just qubit indices).
Parameters
- name : str
The item name. E.g., ‘CNOT’ or ‘H’.
- state_space_labels : list or tuple, optional
A list or tuple that identifies which sectors/parts of the Hilbert space are acted upon. In many cases, this is a list of integers specifying the qubits on which a gate acts, where the ordering in the list defines the ‘direction’ of the gate. If something other than a list or tuple is passed, a single-element tuple is created containing the passed object.
- time : float
The time at which this label occurs (can be relative or absolute).
- args : iterable of hashable types, optional
A list of “arguments” for this label. Having arguments makes the Label resemble a function call even more, and supplies parameters for the object (often a gate or layer operation) being labeled that are fixed at circuit-creation time (i.e. are not optimized over). For example, the angle of a continuously-variable X-rotation gate could be an argument of a gate label, and one might create a label: Label(‘Gx’, (0,), args=(pi/3,))
- property depth
The depth of this label, viewed as a sub-circuit.
- property reps
Number of repetitions (of this label’s components) that this label represents.
- property has_nontrivial_components
- collect_args()
- strip_args()
- expand_subcircuits()
Expand any sub-circuits within this label.
Returns a list of component labels which doesn’t include any CircuitLabel labels. This effectively expands any “boxes” or “exponentiation” within this label.
Returns
- tuple
A tuple of component Labels (none of which should be CircuitLabel objects).
- class pygsti.baseobjs.CircuitLabel
Bases:
Label
, tuple
A (sub-)circuit label.
This class encapsulates a complete circuit as a single layer. It lacks some of the methods and metadata of a true Circuit object, but contains the essentials: the tuple of layer labels (held as the label’s components) and line labels (held as the label’s state-space labels).
Initialize self. See help(type(self)) for accurate signature.
- property name
This label’s name (a string).
- property sslbls
This label’s state-space labels, often qubit labels (a tuple).
- property reps
Number of repetitions (of this label’s components) that this label represents.
- abstract property args
This label’s arguments.
- property components
The sub-label components of this label, or just (self,) if no sub-labels exist.
- property qubits
An alias for sslbls, since commonly these are just qubit indices. (a tuple)
- property num_qubits
The number of qubits this label “acts” on (an integer). None if self.sslbls is None.
- property depth
The depth of this label, viewed as a sub-circuit.
- has_prefix(prefix, typ='all')
Whether this label has the given prefix.
Usually used to test whether the label names a given type.
Parameters
- prefix : str
The prefix to check for.
- typ : {“any”, “all”}
Whether, when there are multiple parts to the label, the prefix must occur in any or all of the parts.
Returns
bool
- map_state_space_labels(mapper)
Apply a mapping to this Label’s state-space (qubit) labels.
Return a copy of this Label with all of the state-space labels (often just qubit labels) updated according to a mapping function.
For example, calling this function with mapper = {0: 1, 1: 3} on the Label “Gcnot:0:1” would return “Gcnot:1:3”.
Parameters
- mapper : dict or function
A dictionary whose keys are the existing state-space-label values and whose values are the new labels, or a function which takes a single (existing state-space-label) argument and returns a new state-space-label.
Returns
CircuitLabel
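The documented example (mapper = {0: 1, 1: 3} turning “Gcnot:0:1” into “Gcnot:1:3”) can be sketched in plain Python on the string form of a label. The helper below is purely illustrative, not a pyGSTi function:

```python
def map_label_sslbls(label_str, mapper):
    """Apply a state-space-label mapping to a label written in the
    "name:lbl1:lbl2" string form. `mapper` may be a dict or a function,
    mirroring the interface documented for map_state_space_labels."""
    name, *sslbls = label_str.split(':')
    # Dicts are looked up directly; anything else is called as a function.
    lookup = mapper.__getitem__ if isinstance(mapper, dict) else mapper
    return ':'.join([name] + [str(lookup(int(s))) for s in sslbls])

assert map_label_sslbls('Gcnot:0:1', {0: 1, 1: 3}) == 'Gcnot:1:3'
assert map_label_sslbls('Gx:0', lambda q: q + 2) == 'Gx:2'
```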
- abstract strip_args()
- to_native()
Returns this label as native python types.
Useful for faster serialization.
Returns
tuple
- replace_name(oldname, newname)
Returns a label with oldname replaced by newname.
Parameters
- oldnamestr
Name to find.
- newnamestr
Name to replace found name with.
Returns
CircuitLabel
- is_simple()
Whether this is a “simple” (opaque w/a true name, from a circuit perspective) label or not.
Returns
bool
- expand_subcircuits()
Expand any sub-circuits within this label.
Returns a list of component labels which doesn’t include any CircuitLabel labels. This effectively expands any “boxes” or “exponentiation” within this label.
Returns
- tuple
A tuple of component Labels (none of which should be CircuitLabel objects).
- class pygsti.baseobjs.NicelySerializable(doc_id=None)
Bases:
pygsti.baseobjs.mongoserializable.MongoSerializable
The base class for all “nicely serializable” objects in pyGSTi.
A “nicely serializable” object can be converted to and created from a native Python object (like a string or dict) that contains only other native Python objects. In addition, there are constraints on the makeup of these objects so that they can be easily serialized to standard text-based formats, e.g. JSON. For example, dictionary keys must be strings, and the list vs. tuple distinction cannot be assumed to be preserved during serialization.
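The “niceness” constraints mentioned above can be seen directly with the standard json module. This is a generic illustration of the constraints, not pyGSTi code:

```python
import json

# Dictionary keys must be strings for the state to survive JSON.
state = {"name": "pp", "dim": 4, "labels": ["I", "X", "Y", "Z"]}
assert json.loads(json.dumps(state)) == state

# The list-vs-tuple distinction is NOT preserved: tuples come back as
# lists, so deserialization code cannot rely on the distinction.
roundtripped = json.loads(json.dumps({"shape": (2, 2)}))
assert roundtripped["shape"] == [2, 2]
```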
- classmethod read(path, format=None)
Read an object of this type, or a subclass of this type, from a file.
Parameters
- path : str or Path or file-like
The filename to open or an already open input stream.
- format : {‘json’, None}
The format of the file. If None, this is determined automatically by the filename extension of a given path.
Returns
NicelySerializable
- classmethod load(f, format='json')
Load an object of this type, or a subclass of this type, from an input stream.
Parameters
- f : file-like
An open input stream to read from.
- format : {‘json’}
The format of the input stream data.
Returns
NicelySerializable
- classmethod loads(s, format='json')
Load an object of this type, or a subclass of this type, from a string.
Parameters
- s : str
The serialized object.
- format : {‘json’}
The format of the string data.
Returns
NicelySerializable
- classmethod from_nice_serialization(state)
Create and initialize an object from a “nice” serialization.
A “nice” serialization here means one created by a prior call to to_nice_serialization using this class or a subclass of it. Nice serializations adhere to additional rules (e.g. that dictionary keys must be strings) that make them amenable to common file formats (e.g. JSON).
The state argument is typically a dictionary containing ‘module’ and ‘state’ keys specifying the type of object that should be created. This type must be this class or a subclass of it.
Parameters
- state : object
An object, usually a dictionary, representing the object to de-serialize.
Returns
object
- to_nice_serialization()
Serialize this object in a way that adheres to “niceness” rules of common text file formats.
Returns
- object
Usually a dictionary representing the serialized object, but may also be another native Python type, e.g. a string or list.
- write(path, **format_kwargs)
Writes this object to a file.
Parameters
- path : str or Path
The name of the file that is written.
- format_kwargs : dict, optional
Additional arguments specific to the format being used. For example, the JSON format accepts indent as an argument because json.dump does.
Returns
None
- dump(f, format='json', **format_kwargs)
Serializes and writes this object to a given output stream.
Parameters
- f : file-like
A writable output stream.
- format : {‘json’, ‘repr’}
The format to write.
- format_kwargs : dict, optional
Additional arguments specific to the format being used. For example, the JSON format accepts indent as an argument because json.dump does.
Returns
None
- dumps(format='json', **format_kwargs)
Serializes this object and returns it as a string.
Parameters
- format : {‘json’, ‘repr’}
The format to write.
- format_kwargs : dict, optional
Additional arguments specific to the format being used. For example, the JSON format accepts indent as an argument because json.dump does.
Returns
str
- class pygsti.baseobjs.MongoSerializable(doc_id=None)
Bases:
object
Base class for objects that can be serialized to a MongoDB database.
At the very least, saving an object to a database creates a document in the collection named by the collection_name class variable (which can be overridden by derived classes). Additionally, saving the object may create other documents outside of this collection (e.g., if the object contains MongoSerializable attributes that specify their own collection name).
This interface also allows an object to save large chunks of data using, e.g., MongoDB’s GridFS system, when serializing itself instead of trying to write an enormous JSON dictionary as a single document (as an object that is NicelySerializable might do).
- collection_name = 'pygsti_objects'
- classmethod from_mongodb(mongodb, doc_id, **kwargs)
Create and initialize an object from a MongoDB instance.
Parameters
- mongodb : pymongo.database.Database
The MongoDB instance to load from.
- doc_id : bson.objectid.ObjectId or dict
The object ID or filter used to find a single object ID within the database. This document is loaded from the collection given by the collection_name attribute of this class.
- **kwargs : dict
Additional keyword arguments potentially used by subclass implementations. Any arguments allowed by a subclass’s _create_obj_from_doc_and_mongodb method are allowed here.
Returns
object
- classmethod from_mongodb_doc(mongodb, collection_name, doc, **kwargs)
Create and initialize an object from a MongoDB instance and pre-loaded primary document.
Parameters
- mongodb : pymongo.database.Database
The MongoDB instance to load from.
- collection_name : str
The collection name within mongodb that doc was loaded from. This is needed for the sole purpose of setting the created (returned) object’s database “coordinates”.
- doc : dict
The already-retrieved main document for the object being loaded. This takes the place of giving an identifier for this object.
- **kwargs : dict
Additional keyword arguments potentially used by subclass implementations. Any arguments allowed by a subclass’s _create_obj_from_doc_and_mongodb method are allowed here.
Returns
object
- write_to_mongodb(mongodb, session=None, overwrite_existing=False, **kwargs)
Write this object to a MongoDB database.
The collection name used is self.collection_name, and the _id is either: 1) the ID used by a previous write or initial read-in, if one exists, OR 2) a new random bson.objectid.ObjectId
Parameters
- mongodb : pymongo.database.Database
The MongoDB instance to write data to.
- session : pymongo.client_session.ClientSession, optional
MongoDB session object to use when interacting with the MongoDB database. This can be used to implement transactions among other things.
- overwrite_existing : bool, optional
Whether existing documents should be overwritten. The default of False causes a ValueError to be raised if a document with the given _id already exists and is different from what is being written.
- **kwargs : dict
Additional keyword arguments potentially used by subclass implementations. Any arguments allowed by a subclass’s _add_auxiliary_write_ops_and_update_doc method are allowed here.
Returns
- bson.objectid.ObjectId
The identifier (_id value) of the main document that was written.
- add_mongodb_write_ops(write_ops, mongodb, overwrite_existing=False, **kwargs)
Accumulate write and update operations for writing this object to a MongoDB database.
Similar to write_to_mongodb() but collects write operations instead of actually executing any write operations on the database. This function may be preferred to write_to_mongodb() when this object is being written as part of a larger entity and execution of the write operations is deferred until the end.
As in write_to_mongodb(), self.collection_name is the collection name and _id is either: 1) the ID used by a previous write or initial read-in, if one exists, OR 2) a new random bson.objectid.ObjectId
Parameters
- write_ops : WriteOpsByCollection
An object that keeps track of pymongo write operations on a per-collection basis. This object accumulates write operations to be performed at some point in the future.
- mongodb : pymongo.database.Database
The MongoDB instance to write data to.
- overwrite_existing : bool, optional
Whether existing documents should be overwritten. The default of False causes a ValueError to be raised if a document with the given _id already exists and is different from what is being written.
- **kwargs : dict
Additional keyword arguments potentially used by subclass implementations. Any arguments allowed by a subclass’s _add_auxiliary_write_ops_and_update_doc method are allowed here.
Returns
- bson.objectid.ObjectId
The identifier (_id value) of the main document that was written.
- remove_me_from_mongodb(mongodb, session=None, recursive='default')
- classmethod remove_from_mongodb(mongodb, doc_id, collection_name=None, session=None, recursive='default')
Remove the documents corresponding to an instance of this class from a MongoDB database.
Parameters
- mongodb : pymongo.database.Database
The MongoDB instance to remove documents from.
- doc_id : bson.objectid.ObjectId
The identifier of the root document stored in the database.
- collection_name : str, optional
The MongoDB collection within mongodb where the main document resides. If None, then <this_class>.collection_name is used (which is usually what you want).
- session : pymongo.client_session.ClientSession, optional
MongoDB session object to use when interacting with the MongoDB database. This can be used to implement transactions among other things.
- recursive : RecursiveRemovalSpecification, optional
An object that filters the type of documents that are removed. Used when working with inter-related experiment designs, data, and results objects to only remove the types of documents you know aren’t being shared with other documents.
Returns
None
- class pygsti.baseobjs.OutcomeLabelDict(items=None)
Bases:
collections.OrderedDict
An ordered dictionary of outcome labels, whose keys are tuple-valued outcome labels.
This class extends an ordinary OrderedDict by mapping string-valued single-outcome labels to 1-tuples containing that label (and vice versa), allowing the use of strings as outcome labels from the user’s perspective.
Parameters
- items : list or dict, optional
Initial values. Should only be used as part of de-serialization.
Attributes
- _strict : bool
Whether mapping from strings to 1-tuples is performed.
Creates a new OutcomeLabelDict.
Parameters
- items : list, optional
Used by pickle and other serializations to initialize elements.
- classmethod to_outcome(val)
Converts string outcomes like “0” to proper outcome tuples, like (“0”,).
(Also converts non-tuples to tuples, e.g. [“0”,”1”] to (“0”,”1”).)
Parameters
- val : str or tuple
The value to convert into an outcome label (i.e. a tuple)
Returns
tuple
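The documented conversion rule can be sketched as follows (a stand-in helper for illustration, not the class method itself):

```python
def to_outcome_sketch(val):
    """Strings become 1-tuples; other non-tuples become tuples; tuples
    pass through unchanged -- mirroring the documented behavior of
    OutcomeLabelDict.to_outcome."""
    if isinstance(val, str):
        return (val,)
    return tuple(val)

assert to_outcome_sketch("0") == ("0",)
assert to_outcome_sketch(["0", "1"]) == ("0", "1")
assert to_outcome_sketch(("0",)) == ("0",)
```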
- get(key, default)
Return the value for key if key is in the dictionary, else default.
- getitem_unsafe(key, defaultval)
Gets an item without checking that key is a properly formatted outcome tuple.
Only use this method when you’re sure key is an outcome tuple and not, e.g., just a string.
Parameters
- key : object
The key to retrieve.
- defaultval : object
The default value to use (if the key is absent).
Returns
object
- setitem_unsafe(key, val)
Sets item without checking that the key is a properly formatted outcome tuple.
Only use this method when you’re sure key is an outcome tuple and not, e.g., just a string.
Parameters
- key : object
The key to set.
- val : object
The value to set.
Returns
None
- class pygsti.baseobjs.StateSpace
Bases:
pygsti.baseobjs.nicelyserializable.NicelySerializable
Base class for defining a state space (Hilbert or Hilbert-Schmidt space).
This base class just sets the API for a “state space” in pyGSTi, accessed as the direct sum of one or more tensor products of Hilbert spaces.
- abstract property udim
Integer Hilbert (unitary operator) space dimension of this quantum state space.
Raises an error if this space is not a quantum state space.
- abstract property dim
Integer Hilbert-Schmidt (super-operator) or classical dimension of this state space.
- abstract property num_qubits
The number of qubits in this quantum state space.
Raises a ValueError if this state space doesn’t consist entirely of qubits.
- abstract property num_qudits
The number of qudits in this quantum state space.
Raises a ValueError if this state space doesn’t consist entirely of qudits.
- abstract property num_tensor_product_blocks
The number of tensor-product blocks which are direct-summed to get the final state space.
Returns
int
- property sole_tensor_product_block_labels
The labels of the first and only tensor product block within this state space.
If there are multiple blocks, a ValueError is raised.
- abstract property tensor_product_blocks_labels
The labels for all the tensor-product blocks.
Returns
tuple of tuples
- abstract property tensor_product_blocks_dimensions
The superoperator dimensions for all the tensor-product blocks.
Returns
tuple of tuples
- abstract property tensor_product_blocks_udimensions
The unitary operator dimensions for all the tensor-product blocks.
Returns
tuple of tuples
- abstract property tensor_product_blocks_types
The type (quantum vs classical) of all the tensor-product blocks.
Returns
tuple of tuples
- property is_entirely_qubits
Whether this state space is just the tensor product of qubit subspaces.
Returns
bool
- property common_dimension
Returns the common super-op dimension of all the labels in this space.
If not all the labels in this space have the same dimension, then None is returned to indicate this.
This property is useful when working with stencils, where operations are created for a “stencil space” that is not exactly a subspace of a StateSpace space but will be mapped to one in the future.
Returns
int or None
- property common_udimension
Returns the common unitary-op dimension of all the labels in this space.
If not all the labels in this space have the same dimension, then None is returned to indicate this.
This property is useful when working with stencils, where operations are created for a “stencil space” that is not exactly a subspace of a StateSpace space but will be mapped to one in the future.
Returns
int or None
- property state_space_labels
Return a tuple corresponding to the concatenation of the constituent state space labels within each tensor product block of this StateSpace object.
Returns
- flattened_state_space_label_list : tuple
A tuple containing a flattened list of all of the state space labels appearing within the tensor product blocks of this StateSpace object’s label list.
- classmethod cast(obj)
Casts obj into a StateSpace object if possible.
If obj is already of this type, it is simply returned without modification.
Parameters
- obj : StateSpace or int or list
Either an already-built state space object, an integer specifying the number of qubits, or a list of labels as would be provided to the first argument of ExplicitStateSpace.__init__().
Returns
StateSpace
- abstract label_dimension(label)
The superoperator dimension of the given label (from any tensor product block)
Parameters
- label : str or int
The label whose dimension should be retrieved.
Returns
int
- abstract label_udimension(label)
The unitary operator dimension of the given label (from any tensor product block)
Parameters
- label : str or int
The label whose dimension should be retrieved.
Returns
int
- abstract label_tensor_product_block_index(label)
The index of the tensor product block containing the given label.
Parameters
- label : str or int
The label whose index should be retrieved.
Returns
int
- abstract label_type(label)
The type (quantum or classical) of the given label (from any tensor product block).
Parameters
- label : str or int
The label whose type should be retrieved.
Returns
str
- tensor_product_block_labels(i_tpb)
The labels for the i_tpb-th tensor-product block.
Parameters
- i_tpb : int
Tensor-product block index.
Returns
tuple
- tensor_product_block_dimensions(i_tpb)
The superoperator dimensions for the factors in the i_tpb-th tensor-product block.
Parameters
- i_tpb : int
Tensor-product block index.
Returns
tuple
- tensor_product_block_udimensions(i_tpb)
The unitary-operator dimensions for the factors in the i_tpb-th tensor-product block.
Parameters
- i_tpb : int
Tensor-product block index.
Returns
tuple
- is_compatible_with(other_state_space)
Whether another state space is compatible with this one.
Two state spaces are considered “compatible” when their overall dimensions agree (even if their tensor product block structure and labels do not). (This checks whether the Hilbert spaces are isomorphic.)
Parameters
- other_state_space : StateSpace
The state space to check compatibility with.
Returns
bool
- is_entire_space(labels)
True if this state space is a single tensor product block with (exactly, in order) the given set of labels.
Parameters
- labels : iterable
The labels to test.
Returns
bool
- contains_labels(labels)
True if this state space contains all of a given set of labels.
Parameters
- labels : iterable
The labels to test.
Returns
bool
- contains_label(label)
True if this state space contains a given label.
Parameters
- label : str or int
The label to test for.
Returns
bool
- create_subspace(labels)
Create a sub-StateSpace object from a set of existing labels.
Parameters
- labels : iterable
The labels to include in the returned state space.
Returns
StateSpace
- intersection(other_state_space)
Create a state space whose labels are the intersection of the labels of this space and one other.
Dimensions associated with the labels are preserved, as is the ordering of tensor product blocks. If the two spaces have the same label, but their dimensions or indices do not agree, an error is raised.
Parameters
- other_state_space : StateSpace
The other state space.
Returns
StateSpace
- union(other_state_space)
Create a state space whose labels are the union of the labels of this space and one other.
Dimensions associated with the labels are preserved, as is the tensor product block index. If the two spaces have the same label, but their dimensions or indices do not agree, an error is raised.
Parameters
- other_state_space : StateSpace
The other state space.
Returns
StateSpace
- create_stencil_subspace(labels)
Create a template sub-StateSpace object from a set of potentially stencil-type labels.
That is, the elements of labels don’t need to actually exist within this state space – they may be stencil labels that will resolve to a label in this state space later on.
Parameters
- labels : iterable
The labels to include in the returned state space.
Returns
StateSpace
- class pygsti.baseobjs.QubitSpace(nqubits_or_labels)
Bases:
QuditSpace
A state space consisting of N qubits.
- property udim
Integer Hilbert (unitary operator) space dimension of this quantum state space.
- property dim
Integer Hilbert-Schmidt (super-operator) or classical dimension of this state space.
- property qubit_labels
The labels of the qubits.
- property num_qubits
The number of qubits in this quantum state space.
- property num_tensor_product_blocks
Get the number of tensor-product blocks which are direct-summed to get the final state space.
Returns
int
- property tensor_product_blocks_labels
Get the labels for all the tensor-product blocks.
Returns
tuple of tuples
- property tensor_product_blocks_dimensions
Get the superoperator dimensions for all the tensor-product blocks.
Returns
tuple of tuples
- property tensor_product_blocks_udimensions
Get the unitary operator dimensions for all the tensor-product blocks.
Returns
tuple of tuples
- property tensor_product_blocks_types
Get the type (quantum vs classical) of all the tensor-product blocks.
Returns
tuple of tuples
- label_dimension(label)
The superoperator dimension of the given label (from any tensor product block)
Parameters
- label : str or int
The label whose dimension should be retrieved.
Returns
int
- label_udimension(label)
The unitary operator dimension of the given label (from any tensor product block)
Parameters
- label : str or int
The label whose dimension should be retrieved.
Returns
int
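For concreteness, the udim and dim properties above are related by a square: an N-qubit space has Hilbert dimension 2**N, and its Hilbert-Schmidt dimension is that value squared, 4**N. A minimal sketch of this bookkeeping (plain Python, not pygsti code):

```python
# Dimension bookkeeping for an N-qubit state space, mirroring the udim
# and dim properties described above (illustrative sketch only).
def qubit_space_dims(num_qubits):
    udim = 2 ** num_qubits  # Hilbert (unitary operator) space dimension
    dim = udim ** 2         # Hilbert-Schmidt (superoperator) dimension
    return udim, dim

# A 2-qubit space has udim 4 and dim 16.
```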
- class pygsti.baseobjs.ExplicitStateSpace(label_list, udims=None, types=None)
Bases:
StateSpace
A customizable definition of a state space.
An ExplicitStateSpace object describes, using string/int labels, how an entire Hilbert state space is decomposed into the direct sum of terms which themselves are tensor products of smaller (typically qubit-sized) Hilbert spaces.
Parameters
- label_list : str or int or iterable
Most generally, this can be a list of tuples, where each tuple contains the state-space labels (which can be strings or integers) for a single “tensor product block” formed by taking the tensor product of the spaces associated with the labels. The full state space is the direct sum of all the tensor product blocks. E.g. [(‘Q0’,’Q1’), (‘Q2’,)].
If just an iterable of labels is given, e.g. (‘Q0’,’Q1’), it is assumed to specify the first and only tensor product block.
If a single state space label is given, e.g. ‘Q2’, then it is assumed to completely specify the first and only tensor product block.
- udims : int or iterable, optional
The dimension of each state-space label, as an integer, tuple of integers, or list of tuples of integers matching the structure of label_list. E.g., if label_list=(‘Q0’,’Q1’) then udims should be a tuple of 2 integers, and if label_list=’Q0’ then udims should be an integer. Values specify unitary-evolution state-space dimensions, i.e. 2 for a qubit, 3 for a qutrit, etc. If None, then the dimensions are inferred, if possible, from the following naming rules:
if the label starts with ‘L’, udim=1 (a single Level)
if the label starts with ‘Q’ OR is an int, udim=2 (a Qubit)
if the label starts with ‘T’, udim=3 (a quTrit)
- types : str or iterable, optional
A list of label types, either ‘Q’ or ‘C’ for “quantum” and “classical” respectively, indicating the type of state space associated with each label. Like udims, types must match the structure of label_list. A quantum state space of dimension d is a d-by-d density matrix, whereas a classical state space of dimension d is a vector of d probabilities. If None, then all labels are assumed to be quantum.
- property udim
Integer Hilbert (unitary operator) space dimension of this quantum state space.
Raises an error if this space is not a quantum state space.
- property dim
Integer Hilbert-Schmidt (super-operator) or classical dimension of this state space.
- property num_qubits
The number of qubits in this quantum state space.
Raises a ValueError if this state space doesn’t consist entirely of qubits.
- property num_qudits
The number of qudits in this quantum state space.
Raises a ValueError if this state space doesn’t consist entirely of qudits.
- property num_tensor_product_blocks
The number of tensor-product blocks which are direct-summed to get the final state space.
Returns
int
- property tensor_product_blocks_labels
The labels for all the tensor-product blocks.
Returns
tuple of tuples
- property tensor_product_blocks_dimensions
The superoperator dimensions for all the tensor-product blocks.
Returns
tuple of tuples
- property tensor_product_blocks_udimensions
The unitary operator dimensions for all the tensor-product blocks.
Returns
tuple of tuples
- property tensor_product_blocks_types
The type (quantum vs classical) of all the tensor-product blocks.
Returns
tuple of tuples
- label_dimension(label)
The superoperator dimension of the given label (from any tensor product block)
Parameters
- label : str or int
The label whose dimension should be retrieved.
Returns
int
- label_udimension(label)
The unitary operator dimension of the given label (from any tensor product block)
Parameters
- label : str or int
The label whose dimension should be retrieved.
Returns
int
- class pygsti.baseobjs.ResourceAllocation(comm=None, mem_limit=None, profiler=None, distribute_method='default', allocated_memory=0)
Bases:
object
Describes available resources and how they should be allocated.
This includes the number of processors and amount of memory, as well as a strategy for how computations should be distributed among them.
Parameters
- comm : mpi4py.MPI.Comm, optional
MPI communicator holding the number of available processors.
- mem_limit : int, optional
A rough per-processor memory limit in bytes.
- profiler : Profiler, optional
A lightweight profiler object for tracking resource usage.
- distribute_method : str, optional
The name of a distribution strategy.
- property comm_rank
A safe way to get self.comm.rank (0 if self.comm is None)
- property comm_size
A safe way to get self.comm.size (1 if self.comm is None)
- property is_host_leader
True if this processor is the rank-0 “leader” of its host (node). False otherwise.
- classmethod cast(arg)
Cast arg to a ResourceAllocation object.
If arg is already a ResourceAllocation instance, it is simply returned. Otherwise this function attempts to create a new instance from arg.
Parameters
- arg : ResourceAllocation or dict
An object that can be cast to a ResourceAllocation.
Returns
ResourceAllocation
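The cast idiom described above — return the argument unchanged if it is already the right type, otherwise construct a new instance from it — can be sketched generically (illustrative class and constructor arguments, not pygsti's code):

```python
# Sketch of the cast() pattern: pass through instances, build new
# objects from a dict of constructor keyword arguments, and fall back
# to defaults for None.  Class name and fields are hypothetical.
class Resources:
    def __init__(self, mem_limit=None, distribute_method='default'):
        self.mem_limit = mem_limit
        self.distribute_method = distribute_method

    @classmethod
    def cast(cls, arg):
        if isinstance(arg, cls):
            return arg            # already the right type: pass through
        if arg is None:
            return cls()          # default-constructed instance
        return cls(**arg)         # assume a dict of keyword arguments

r = Resources.cast({'mem_limit': 1024})
```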
- build_hostcomms()
- host_comm_barrier()
Calls self.host_comm.barrier() when self.host_comm is not None.
This convenience function provides an often-used barrier that follows code where a single “leader” processor modifies a memory block shared between all members of self.host_comm, and the other processors must wait until this modification is performed before proceeding with their own computations.
Returns
None
- reset(allocated_memory=0)
Resets internal allocation counters to given values (defaults to zero).
Parameters
- allocated_memory : int
The value to set the memory allocation counter to.
Returns
None
- add_tracked_memory(num_elements, dtype='d')
Adds num_elements * itemsize bytes to the total amount of allocated memory being tracked.
If the total (tracked) memory exceeds self.mem_limit, a MemoryError exception is raised.
Parameters
- num_elements : int
The number of elements to track allocation of.
- dtype : numpy.dtype, optional
The type of elements, needed to compute the number of bytes per element.
Returns
None
- check_can_allocate_memory(num_elements, dtype='d')
Checks that allocating num_elements doesn’t cause the memory limit to be exceeded.
This memory isn’t tracked - it’s just added to the current tracked memory, and a MemoryError exception is raised if the result exceeds self.mem_limit.
Parameters
- num_elements : int
The number of elements to check allocation of.
- dtype : numpy.dtype, optional
The type of elements, needed to compute the number of bytes per element.
Returns
None
- temporarily_track_memory(num_elements, dtype='d')
Temporarily adds num_elements to tracked memory (a context manager).
A MemoryError exception is raised if the tracked memory exceeds self.mem_limit.
Parameters
- num_elements : int
The number of elements to track allocation of.
- dtype : numpy.dtype, optional
The type of elements, needed to compute the number of bytes per element.
Returns
contextmanager
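The temporary-tracking pattern this context manager implements can be sketched as follows (a hedged stand-in, assuming the counter is restored on exit; the class and attribute names are hypothetical):

```python
from contextlib import contextmanager
import numpy as np

# Sketch of temporary memory tracking: add num_elements * itemsize
# bytes to a running counter, raise MemoryError if a limit would be
# exceeded, and restore the counter when the with-block exits.
class MemTracker:
    def __init__(self, mem_limit):
        self.mem_limit = mem_limit
        self.allocated = 0

    @contextmanager
    def temporarily_track_memory(self, num_elements, dtype='d'):
        nbytes = num_elements * np.dtype(dtype).itemsize
        if self.mem_limit is not None and self.allocated + nbytes > self.mem_limit:
            raise MemoryError("tracked memory would exceed the limit")
        self.allocated += nbytes
        try:
            yield
        finally:
            self.allocated -= nbytes  # restore the counter on exit
```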
- gather_base(result, local, slice_of_global, unit_ralloc=None, all_gather=False)
Gather or all-gather operation using local arrays and a unit resource allocation.
Similar to a normal MPI gather call, but more easily integrates with a hierarchy of processor divisions, or nested comms, by taking a unit_ralloc argument. This is essentially another comm that specifies the groups of processors that have all computed the same local array, i.e., slice of the final to-be gathered array. So, when gathering the result, only processors with unit_ralloc.rank == 0 need to contribute to the gather operation.
Parameters
- result : numpy.ndarray, possibly shared
The destination “global” array. When shared memory is being used, i.e. when this ResourceAllocation object has a nontrivial inter-host comm, this array must be allocated as a shared array using this ralloc or a larger one, so that result is shared between all the processors for this resource allocation’s intra-host communicator. This allows a speedup when shared memory is used, by having multiple smaller gather operations in parallel instead of one large gather.
- local : numpy.ndarray
The locally computed quantity. This can be a shared-memory array, but need not be.
- slice_of_global : slice or numpy.ndarray
The slice of result that local constitutes, i.e., in the end result[slice_of_global] = local. This may be a Python slice or a NumPy array of indices.
- unit_ralloc : ResourceAllocation, optional
A resource allocation (essentially a comm) for the group of processors that all compute the same local result, so that only the unit_ralloc.rank == 0 processors will contribute to the gather operation. If None, then it is assumed that all processors compute different local results.
- all_gather : bool, optional
Whether the final result should be gathered on all the processors of this ResourceAllocation or just the root (rank 0) processor.
Returns
None
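Setting MPI and shared memory aside, the core semantics of this gather are that each unit group's representative writes its local slice into the global result array. A serial sketch of those semantics (hypothetical helper, not the MPI implementation):

```python
import numpy as np

# Serial sketch of the gather semantics described above: each entry of
# `contributions` plays the role of one unit group's rank-0 local
# array, paired with the slice of the global result it occupies.
def gather_sketch(global_shape, contributions):
    result = np.zeros(global_shape)
    for slice_of_global, local in contributions:
        result[slice_of_global] = local  # result[slice_of_global] = local
    return result

out = gather_sketch((4,), [(slice(0, 2), np.array([1.0, 2.0])),
                           (slice(2, 4), np.array([3.0, 4.0]))])
```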
- gather(result, local, slice_of_global, unit_ralloc=None)
Gather local arrays into a global result array potentially with a unit resource allocation.
Similar to a normal MPI gather call, but more easily integrates with a hierarchy of processor divisions, or nested comms, by taking a unit_ralloc argument. This is essentially another comm that specifies the groups of processors that have all computed the same local array, i.e., slice of the final to-be gathered array. So, when gathering the result, only processors with unit_ralloc.rank == 0 need to contribute to the gather operation.
The global array is only gathered on the root (rank 0) processor of this resource allocation.
Parameters
- result : numpy.ndarray, possibly shared
The destination “global” array, only needed on the root (rank 0) processor. When shared memory is being used, i.e. when this ResourceAllocation object has a nontrivial inter-host comm, this array must be allocated as a shared array using this ralloc or a larger one, so that result is shared between all the processors for this resource allocation’s intra-host communicator. This allows a speedup when shared memory is used, by having multiple smaller gather operations in parallel instead of one large gather.
- local : numpy.ndarray
The locally computed quantity. This can be a shared-memory array, but need not be.
- slice_of_global : slice or numpy.ndarray
The slice of result that local constitutes, i.e., in the end result[slice_of_global] = local. This may be a Python slice or a NumPy array of indices.
- unit_ralloc : ResourceAllocation, optional
A resource allocation (essentially a comm) for the group of processors that all compute the same local result, so that only the unit_ralloc.rank == 0 processors will contribute to the gather operation. If None, then it is assumed that all processors compute different local results.
Returns
None
- allgather(result, local, slice_of_global, unit_ralloc=None)
All-gather local arrays into global arrays on each processor, potentially using a unit resource allocation.
Similar to a normal MPI gather call, but more easily integrates with a hierarchy of processor divisions, or nested comms, by taking a unit_ralloc argument. This is essentially another comm that specifies the groups of processors that have all computed the same local array, i.e., slice of the final to-be gathered array. So, when gathering the result, only processors with unit_ralloc.rank == 0 need to contribute to the gather operation.
Parameters
- result : numpy.ndarray, possibly shared
The destination “global” array. When shared memory is being used, i.e. when this ResourceAllocation object has a nontrivial inter-host comm, this array must be allocated as a shared array using this ralloc or a larger one, so that result is shared between all the processors for this resource allocation’s intra-host communicator. This allows a speedup when shared memory is used, by having multiple smaller gather operations in parallel instead of one large gather.
- local : numpy.ndarray
The locally computed quantity. This can be a shared-memory array, but need not be.
- slice_of_global : slice or numpy.ndarray
The slice of result that local constitutes, i.e., in the end result[slice_of_global] = local. This may be a Python slice or a NumPy array of indices.
- unit_ralloc : ResourceAllocation, optional
A resource allocation (essentially a comm) for the group of processors that all compute the same local result, so that only the unit_ralloc.rank == 0 processors will contribute to the gather operation. If None, then it is assumed that all processors compute different local results.
Returns
None
- allreduce_sum(result, local, unit_ralloc=None)
Sum local arrays on different processors, potentially using a unit resource allocation.
Similar to a normal MPI reduce call (with MPI.SUM type), but more easily integrates with a hierarchy of processor divisions, or nested comms, by taking a unit_ralloc argument. This is essentially another comm that specifies the groups of processors that have all computed the same local array. So, when performing the sum, only processors with unit_ralloc.rank == 0 contribute to the sum. This handles the case where simply summing the local contributions from all processors would result in over-counting because multiple processors hold the same logical result (summand).
Parameters
- result : numpy.ndarray, possibly shared
The destination “global” array, with the same shape as all the local arrays being summed. This can be any shape (including any number of dimensions). When shared memory is being used, i.e. when this ResourceAllocation object has a nontrivial inter-host comm, this array must be allocated as a shared array using this ralloc or a larger one, so that result is shared between all the processors for this resource allocation’s intra-host communicator. This allows a speedup when shared memory is used, by distributing computation of result over each host’s processors and performing these sums in parallel.
- local : numpy.ndarray
The locally computed quantity. This can be a shared-memory array, but need not be.
- unit_ralloc : ResourceAllocation, optional
A resource allocation (essentially a comm) for the group of processors that all compute the same local result, so that only the unit_ralloc.rank == 0 processors will contribute to the sum operation. If None, then it is assumed that all processors compute different local results.
Returns
None
- allreduce_sum_simple(local, unit_ralloc=None)
A simplified sum over quantities on different processors that doesn’t use shared memory.
The shared memory usage of allreduce_sum() can be overkill when just summing a single scalar quantity. This method provides a way to easily sum a quantity across all the processors in this ResourceAllocation object using a unit resource allocation.
Parameters
- local : int or float
The local (per-processor) value to sum.
- unit_ralloc : ResourceAllocation, optional
A resource allocation (essentially a comm) for the group of processors that all compute the same local value, so that only the unit_ralloc.rank == 0 processors will contribute to the sum. If None, then it is assumed that each processor computes a logically different local value.
Returns
- float or int
The sum of all local quantities, returned on all the processors.
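The over-counting problem these unit_ralloc arguments address can be illustrated serially: when several processors hold the same logical summand, only each unit group's rank-0 member should contribute. A sketch (plain Python, not MPI code; the rank bookkeeping is an assumption for illustration):

```python
# Sketch of the over-counting guard: per_proc_values[i] is processor
# i's local value and unit_ranks[i] is its rank within its unit group.
# Only unit-rank-0 members contribute, so duplicated summands held by
# the rest of each group are not counted twice.
def allreduce_sum_sketch(per_proc_values, unit_ranks):
    return sum(v for v, r in zip(per_proc_values, unit_ranks) if r == 0)

# Four processors in two unit groups of two: naive summing would give
# 16 (double-counted); counting only unit-rank-0 members gives 3 + 5.
total = allreduce_sum_sketch([3, 3, 5, 5], [0, 1, 0, 1])
```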
- allreduce_min(result, local, unit_ralloc=None)
Take elementwise min of local arrays on different processors, potentially using a unit resource allocation.
Similar to a normal MPI reduce call (with MPI.MIN type), but more easily integrates with a hierarchy of processor divisions, or nested comms, by taking a unit_ralloc argument. This is essentially another comm that specifies the groups of processors that have all computed the same local array. So, when performing the min operation, only processors with unit_ralloc.rank == 0 contribute.
Parameters
- result : numpy.ndarray, possibly shared
The destination “global” array, with the same shape as all the local arrays being operated on. This can be any shape (including any number of dimensions). When shared memory is being used, i.e. when this ResourceAllocation object has a nontrivial inter-host comm, this array must be allocated as a shared array using this ralloc or a larger one, so that result is shared between all the processors for this resource allocation’s intra-host communicator. This allows a speedup when shared memory is used, by distributing computation of result over each host’s processors and performing these operations in parallel.
- local : numpy.ndarray
The locally computed quantity. This can be a shared-memory array, but need not be.
- unit_ralloc : ResourceAllocation, optional
A resource allocation (essentially a comm) for the group of processors that all compute the same local result, so that only the unit_ralloc.rank == 0 processors will contribute to the min operation. If None, then it is assumed that all processors compute different local results.
Returns
None
- allreduce_max(result, local, unit_ralloc=None)
Take elementwise max of local arrays on different processors, potentially using a unit resource allocation.
Similar to a normal MPI reduce call (with MPI.MAX type), but more easily integrates with a hierarchy of processor divisions, or nested comms, by taking a unit_ralloc argument. This is essentially another comm that specifies the groups of processors that have all computed the same local array. So, when performing the max operation, only processors with unit_ralloc.rank == 0 contribute.
Parameters
- result : numpy.ndarray, possibly shared
The destination “global” array, with the same shape as all the local arrays being operated on. This can be any shape (including any number of dimensions). When shared memory is being used, i.e. when this ResourceAllocation object has a nontrivial inter-host comm, this array must be allocated as a shared array using this ralloc or a larger one, so that result is shared between all the processors for this resource allocation’s intra-host communicator. This allows a speedup when shared memory is used, by distributing computation of result over each host’s processors and performing these operations in parallel.
- local : numpy.ndarray
The locally computed quantity. This can be a shared-memory array, but need not be.
- unit_ralloc : ResourceAllocation, optional
A resource allocation (essentially a comm) for the group of processors that all compute the same local result, so that only the unit_ralloc.rank == 0 processors will contribute to the max operation. If None, then it is assumed that all processors compute different local results.
Returns
None
- bcast(value, root=0)
Broadcasts a value from the root processor/host to the others in this resource allocation.
This is similar to a usual MPI broadcast, except it takes advantage of shared memory when it is available. When shared memory is being used, i.e. when this ResourceAllocation object has a nontrivial inter-host comm, this routine places value in a shared memory buffer and uses the resource allocation’s inter-host communicator to broadcast the result from the root host to all the other hosts, using all the processors on the root host in parallel (all processors with the same intra-host rank participate in an MPI broadcast).
Parameters
- value : numpy.ndarray
The value to broadcast. May be shared memory but doesn’t need to be. This only needs to be specified on the root processor; other processors can provide any value for this argument (it’s unused).
- root : int
The rank of the processor whose value will be broadcast.
Returns
- numpy.ndarray
The broadcast value, in a new, non-shared-memory array.
- class pygsti.baseobjs.QubitGraph(qubit_labels, initial_connectivity=None, initial_edges=None, directed=True, direction_names=None)
Bases:
pygsti.baseobjs.nicelyserializable.NicelySerializable
A directed or undirected graph data structure used to represent geometrical layouts of qubits or qubit gates.
Qubits are nodes in the graph (and can be labeled), and edges represent the ability to perform one or more types of gates between qubits (equivalent, usually, to geometrical proximity).
Parameters
- qubit_labels : list
A list of string or integer labels of the qubits. The length of this list equals the number of qubits (nodes) in the graph.
- initial_connectivity : numpy.ndarray, optional
A (nqubits, nqubits) boolean or integer array giving the initial connectivity of the graph. If an integer array, then 0 indicates no edge and positive integers indicate present edges in the “direction” given by the positive integer. For example, 1 may correspond to “left” and 2 to “right”. Names must be associated with these directions using direction_names. If a boolean array, then initial_connectivity[i,j]=True indicates an edge from qubit i to qubit j (integer indices of qubit labels are given by their position in qubit_labels). When directed=False, only the upper triangle is used.
- initial_edges : list
A list of (qubit_label1, qubit_label2) 2-tuples or (qubit_label1, qubit_label2, direction) 3-tuples specifying which edges are initially present. direction can either be a positive integer, similar to those used in initial_connectivity (in which case direction_names must be specified), or a string labeling the direction, e.g. “left”.
- directed : bool, optional
Whether the graph is directed or undirected. Directions can only be used when directed=True.
- direction_names : iterable, optional
A list (or tuple, etc.) of string-valued direction names such as “left” or “right”. These strings label the directions referenced by index in either initial_connectivity or initial_edges, and this argument is required whenever such indices are used.
Initialize a new QubitGraph.
Can specify at most one of initial_connectivity and initial_edges.
- property node_names
All the node labels of this graph.
These correspond to integer indices where appropriate, e.g. for shortest_path_distance_matrix().
Returns
tuple
- classmethod common_graph(num_qubits=0, geometry='line', directed=True, qubit_labels=None, all_directions=False)
Create a QubitGraph that is one of several standard types of graphs.
Parameters
- num_qubits : int, optional
The number of qubits (nodes in the graph).
- geometry : {“line”, “ring”, “grid”, “torus”}
The type of graph. What these correspond to should be self-evident.
- directed : bool, optional
Whether the graph is directed or undirected.
- qubit_labels : iterable, optional
The labels for the qubits. Must be of length num_qubits. If None, then the integers from 0 to num_qubits-1 are used.
- all_directions : bool, optional
Whether to include edges with all directions. Typically it only makes sense to set this to True when directed=True also.
Returns
QubitGraph
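As an illustration of the geometry argument, the "line" and "ring" geometries over integer-labeled qubits correspond to edge lists like the following (hypothetical helper, not common_graph itself):

```python
# Edge lists for the "line" and "ring" standard geometries described
# above, over integer qubit labels 0..num_qubits-1 (sketch only).
def common_edges(num_qubits, geometry='line'):
    edges = [(i, i + 1) for i in range(num_qubits - 1)]  # chain of qubits
    if geometry == 'ring' and num_qubits > 2:
        edges.append((num_qubits - 1, 0))  # close the loop for a ring
    return edges
```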
- map_qubit_labels(mapper)
Creates a new QubitGraph whose qubit (node) labels are updated according to the mapping function mapper.
Parameters
- mapper : dict or function
A dictionary whose keys are the existing self.node_names values and whose values are the new labels, or a function which takes a single (existing qubit-label) argument and returns a new qubit label.
Returns
QubitGraph
- add_edges(edges)
Add edges (list of tuple pairs) to graph.
Parameters
- edges : list
A list of (qubit_label1, qubit_label2) 2-tuples.
Returns
None
- add_edge(node1, node2, direction=None)
Add an edge between the qubits labeled by node1 and node2.
Parameters
- node1 : str or int
Qubit (node) label.
- node2 : str or int
Qubit (node) label.
- direction : str or int, optional
Either a direction name or a direction index.
Returns
None
- remove_edge(node1, node2)
Remove the edge between the qubits labeled by node1 and node2.
Parameters
- node1 : str or int
Qubit (node) label.
- node2 : str or int
Qubit (node) label.
Returns
None
- edges(double_for_undirected=False, include_directions=False)
Get a list of the edges in this graph as 2-tuples of node/qubit labels.
When undirected, the index of the 2-tuple’s first label will always be less than its second unless double_for_undirected == True, in which case both directed edges are included. The edges are sorted (by label index) in ascending order.
Parameters
- double_for_undirected : bool, optional
Whether, for the case of an undirected graph, two 2-tuples, giving both edge directions, should be included in the returned list.
- include_directions : bool, optional
Whether to include direction labels. If True and directions are present, a list of (node1, node2, direction_name) 3-tuples is returned instead of the usual (node1, node2) 2-tuples.
Returns
list
- radius(base_nodes, max_hops)
Find all the nodes reachable in max_hops from any node in base_nodes.
Get a (sorted) array of node labels that can be reached from traversing at most max_hops edges starting from a node (vertex) in base_nodes.
Parameters
- base_nodes : iterable
A list of node/qubit labels giving the possible starting locations.
- max_hops : int
The maximum number of hops (see above).
Returns
- list
A list of the node labels reachable from base_nodes in at most max_hops edge traversals.
- connected_combos(possible_nodes, size)
Computes the number of different connected subsets of possible_nodes containing size nodes.
Parameters
- possible_nodes : list
A list of node (qubit) labels.
- size : int
The size of the connected subsets being sought (counted).
Returns
int
- is_connected(node1, node2)
Is node1 connected to node2 (does there exist a path of any length between them?)
Parameters
- node1 : str or int
Qubit (node) label.
- node2 : str or int
Qubit (node) label.
Returns
bool
- has_edge(edge)
Is edge an edge in this graph?
Note that if this graph is undirected, either node order in edge will return True.
Parameters
- edge : tuple
(node1,node2) tuple specifying the edge.
Returns
bool
- is_directly_connected(node1, node2)
Is node1 directly connected to node2 (does there exist an edge between them?)
Parameters
- node1 : str or int
Qubit (node) label.
- node2 : str or int
Qubit (node) label.
Returns
bool
- is_connected_graph()
Computes whether this graph is connected (there exist paths between every pair of nodes).
Returns
bool
- is_connected_subgraph(nodes)
Does a given set of nodes form a connected subgraph?
That is, does there exist a path from every node in nodes to every other node in nodes using only the edges between the nodes in nodes.
Parameters
- nodes : list
A list of node (qubit) labels.
Returns
bool
- find_all_connected_sets()
Finds all subgraphs (connected sets of vertices) up to the full graph size.
Graph edges are treated as undirected.
Returns
- dict
A dictionary with integer keys. The value of key k is a list of all the subgraphs of length k. A subgraph is given as a tuple of sorted vertex labels.
- shortest_path(node1, node2)
Get the shortest path between two nodes (qubits).
Parameters
- node1 : str or int
Qubit (node) label.
- node2 : str or int
Qubit (node) label.
Returns
- list
A list of the node labels to traverse.
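A shortest path on an undirected graph like this can be found with breadth-first search; a self-contained sketch over an edge list (illustrative only, not pygsti's implementation):

```python
from collections import deque

# BFS shortest path on an undirected graph given as an edge list,
# returning the list of node labels to traverse (as shortest_path
# does), or None if the nodes are not connected.
def bfs_shortest_path(edges, node1, node2):
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    prev, queue, seen = {}, deque([node1]), {node1}
    while queue:
        n = queue.popleft()
        if n == node2:
            path = [n]                       # walk predecessors back
            while path[-1] != node1:
                path.append(prev[path[-1]])
            return path[::-1]
        for m in adj.get(n, ()):
            if m not in seen:
                seen.add(m)
                prev[m] = n
                queue.append(m)
    return None

path = bfs_shortest_path([(0, 1), (1, 2), (2, 3)], 0, 3)
```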
- shortest_path_edges(node1, node2)
Like shortest_path(), but returns a list of (nodeA, nodeB) tuples.
These tuples define a path from node1 to node2, so the first tuple’s nodeA == node1 and the final tuple’s nodeB == node2.
Parameters
- node1 : str or int
Qubit (node) label.
- node2 : str or int
Qubit (node) label.
Returns
- list
A list of the edges (2-tuples of node labels) to traverse.
- shortest_path_intersect(node1, node2, nodes_to_intersect)
Check whether the shortest path between node1 and node2 contains any of the nodes in nodes_to_intersect.
Parameters
- node1 : str or int
Qubit (node) label.
- node2 : str or int
Qubit (node) label.
- nodes_to_intersect : list
A list of node labels.
Returns
- bool
True if the shortest path intersects any node in nodes_to_intersect.
- shortest_path_distance(node1, node2)
Get the distance of the shortest path between node1 and node2.
Parameters
- node1 : str or int
Qubit (node) label.
- node2 : str or int
Qubit (node) label.
Returns
int
- shortest_path_distance_matrix()
Returns a matrix of shortest path distances.
This matrix is indexed by the integer index of each node label (as specified to __init__). The list of index-ordered node labels is given by node_names().
Returns
- numpy.ndarray
An array of shortest-path distances, of shape (n, n) where n is the number of nodes in this graph.
- shortest_path_predecessor_matrix()
Returns a matrix of predecessors used to construct the shortest path between two nodes.
This matrix is indexed by the integer index of each node label (as specified to __init__). The list of index-ordered node labels is given by node_names().
Returns
- numpy.ndarray
An array of predecessor node indices, of shape (n, n) where n is the number of nodes in this graph.
- subgraph(nodes_to_keep, reset_nodes=False)
Return a graph that includes only nodes_to_keep and the edges between them.
Parameters
- nodes_to_keep : list
A list of node labels defining the subgraph to return.
- reset_nodes : bool, optional
If True, nodes of returned subgraph are relabelled to be the integers starting at 0 (in 1-1 correspondence with the ordering in nodes_to_keep).
Returns
QubitGraph
- resolve_relative_nodelabel(relative_nodelabel, target_labels)
Resolve a “relative nodelabel” into an actual node in this graph.
Relative node labels can use “@” to index elements of target_labels and can contain “+<dir>” directives to move along directions defined in this graph.
Parameters
- relative_nodelabel : int or str
An absolute or relative node-label. For example: 0, “@0”, “@0+right”, “@1+left+up”
- target_labels : list or tuple
A list of (absolute) node labels present in this graph that may be referred to using the “@” syntax within relative_nodelabel.
Returns
int or str
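The "@" and "+&lt;dir&gt;" syntax above can be sketched with a toy resolver; the explicit moves table standing in for the graph's direction structure is a hypothetical assumption for illustration:

```python
# Toy resolver for relative node labels: "@k" indexes target_labels,
# and each "+<dir>" suffix moves one step in a named direction via a
# user-supplied moves table {(node, direction): next_node}.
def resolve_relative(relative_nodelabel, target_labels, moves):
    if isinstance(relative_nodelabel, int):
        return relative_nodelabel            # already an absolute label
    parts = relative_nodelabel.split('+')    # "@0+right" -> ["@0", "right"]
    node = target_labels[int(parts[0][1:])]  # resolve the "@k" index
    for direction in parts[1:]:
        node = moves[(node, direction)]      # one step per direction
    return node

moves = {(0, 'right'): 1, (1, 'right'): 2}
node = resolve_relative('@0+right+right', [0, 5], moves)
```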
- move_in_directions(start_node, directions)
The node you end up on after moving in directions from start_node.
Parameters
- start_node : str or int
Qubit (node) label.
- directions : iterable
A sequence of direction names.
Returns
- str or int or None
The ending node label or None if the directions were invalid.
- move_in_direction(start_node, direction)
Get the node that is one step in direction from start_node.
Parameters
- start_node : int or str
The starting point (a node label of this graph).
- direction : str or int
The name of a direction or its index within this graph’s .directions member.
Returns
- str or int or None
the node in the given direction or None if there is no node in that direction (e.g. if you reach the end of a chain).
- class pygsti.baseobjs.ElementaryErrorgenBasis
Bases:
object
A basis for error-generator space defined by a set of elementary error generators.
Elements are ordered (have definite indices) and labeled. Intersection and union can be performed as a set.
- label_indices(labels, ok_if_missing=False)
TODO: docstring
- class pygsti.baseobjs.ExplicitElementaryErrorgenBasis(state_space, labels, basis1q=None)
Bases:
ElementaryErrorgenBasis
A basis for error-generator space defined by a set of elementary error generators.
Elements are ordered (have definite indices) and labeled. Intersection and union can be performed as a set.
- property labels
- property elemgen_supports_and_matrices
- label_index(label, ok_if_missing=False)
Return the index of the given label within this basis.
Parameters
label
- ok_if_missing : bool
If True, then returns None instead of an integer when the given label is not present.
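The ok_if_missing behavior amounts to the difference between a dict lookup and dict.get, sketched here with a plain label-to-index mapping (illustrative only; the real basis stores elementary error-generator labels, and the label strings below are made up):

```python
# Sketch of label_index semantics: an ordered basis keeps a
# label -> index mapping; ok_if_missing=True turns a KeyError for an
# absent label into a None return value.
labels = ('H(X)', 'S(Z)')
index_of = {lbl: i for i, lbl in enumerate(labels)}

def label_index(label, ok_if_missing=False):
    if ok_if_missing:
        return index_of.get(label)  # None when the label is absent
    return index_of[label]          # KeyError when the label is absent
```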
- create_subbasis(must_overlap_with_these_sslbls)
Create a sub-basis of this basis by including only the elements that overlap the given support (state space labels).
- union(other_basis)
- intersection(other_basis)
- difference(other_basis)
- class pygsti.baseobjs.CompleteElementaryErrorgenBasis(basis_1q, state_space, elementary_errorgen_types=('H', 'S', 'C', 'A'), max_ham_weight=None, max_other_weight=None, must_overlap_with_these_sslbls=None)
Bases:
ElementaryErrorgenBasis
A basis spanned by the elementary error generators of the given type(s) (e.g. “Hamiltonian” and/or “other”), with elements corresponding to a Basis, usually of Paulis.
- property labels
- property elemgen_supports_and_dual_matrices
- property elemgen_supports_and_matrices
- to_explicit_basis()
- label_index(elemgen_label, ok_if_missing=False)
Return the index of the given elementary error-generator label within this basis.
Parameters
elemgen_label
- ok_if_missing : bool
If True, then returns None instead of an integer when the given label is not present.
- create_subbasis(must_overlap_with_these_sslbls, retain_max_weights=True)
Create a sub-basis of this basis by including only the elements that overlap the given support (state space labels).
- union(other_basis)
- intersection(other_basis)
- difference(other_basis)
- class pygsti.baseobjs.ErrorgenSpace(vectors, basis)
Bases:
object
A vector space of error generators, spanned by some basis.
This object collects the information needed to specify a space within the space of all error generators.
- intersection(other_space, free_on_unspecified_space=False, use_nice_nullspace=False)
Compute the intersection of this error-generator space with other_space, returning a new ErrorgenSpace.
- abstract union(other_space)
Compute the union of this error-generator space with other_space.
- class pygsti.baseobjs.UnitaryGateFunction
Bases:
pygsti.baseobjs.nicelyserializable.NicelySerializable
A convenient base class for building serializable “functions” for unitary gate matrices.
Subclasses that don’t need to initialize any attributes other than shape need only implement the __call__ method and declare their shape as either a class or instance variable.
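A minimal subclass along these lines might look as follows (a hypothetical sketch: the class name and the Z-rotation are made up, the base class is replaced by a plain object so the snippet is self-contained, and in real use __call__ would typically return a numpy array):

```python
import cmath

# Hypothetical example of the pattern described above: declare shape
# as a class variable and implement only __call__, which returns the
# unitary matrix for the given parameter.
class MyZRotation:  # in real use: class MyZRotation(UnitaryGateFunction)
    shape = (2, 2)

    def __call__(self, theta):
        return [[cmath.exp(-1j * theta / 2), 0],
                [0, cmath.exp(1j * theta / 2)]]
```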