lambeq package

class lambeq.AtomicType(value)[source]

Bases: discopy.rigid.Ty, enum.Enum

Standard pregroup atomic types mapping to their rigid type.

CONJUNCTION = Ty('conj')
NOUN = Ty('n')
NOUN_PHRASE = Ty('n')
PREPOSITIONAL_PHRASE = Ty('p')
PUNCTUATION = Ty('punc')
SENTENCE = Ty('s')
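Note that NOUN and NOUN_PHRASE map to the same rigid type Ty('n'). A minimal, illustrative stand-in (not the lambeq class, which subclasses discopy.rigid.Ty) shows how equal enum values become aliases:

```python
from enum import Enum

# Illustrative sketch only: lambeq.AtomicType subclasses discopy.rigid.Ty
# and enum.Enum; here plain strings stand in for the rigid types.
class AtomicTypeSketch(Enum):
    CONJUNCTION = 'conj'
    NOUN = 'n'
    PREPOSITIONAL_PHRASE = 'p'
    PUNCTUATION = 'punc'
    SENTENCE = 's'
    # A member whose value is already taken becomes an alias, mirroring
    # how NOUN_PHRASE maps to the same Ty('n') as NOUN.
    NOUN_PHRASE = 'n'

print(AtomicTypeSketch.NOUN_PHRASE is AtomicTypeSketch.NOUN)  # True
```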
class lambeq.BaseAnsatz(ob_map: Mapping[rigid.Ty, monoidal.Ty])[source]

Bases: abc.ABC

Base class for ansatz.

abstract __call__(diagram: discopy.rigid.Diagram) discopy.monoidal.Diagram[source]

Convert a DisCoPy diagram into a DisCoPy circuit or tensor.

abstract __init__(ob_map: Mapping[rigid.Ty, monoidal.Ty]) None[source]

Instantiate an ansatz.

Parameters
ob_mapdict

A mapping from discopy.rigid.Ty to a type in the target category. In the category of quantum circuits, this type is the number of qubits; in the category of vector spaces, this type is a vector space.

exception lambeq.BobcatParseError(sentence: str)[source]

Bases: Exception

__init__(sentence: str) None[source]
class lambeq.BobcatParser(model_name_or_path: str = 'bert', root_cats: Optional[Iterable[str]] = None, device: int = -1, cache_dir: Optional[StrPathT] = None, force_download: bool = False, verbose: str = 'progress', **kwargs: Any)[source]

Bases: lambeq.text2diagram.ccg_parser.CCGParser

CCG parser using Bobcat as the backend.

__init__(model_name_or_path: str = 'bert', root_cats: Optional[Iterable[str]] = None, device: int = -1, cache_dir: Optional[StrPathT] = None, force_download: bool = False, verbose: str = 'progress', **kwargs: Any) None[source]

Instantiate a BobcatParser.

Parameters
model_name_or_pathstr, default: ‘bert’
Can be either:
  • The path to a directory containing a Bobcat model.

  • The name of a pre-trained model. By default, it uses the “bert” model. See also: BobcatParser.available_models()

root_catsiterable of str, optional

A list of the categories allowed at the root of the parse tree.

deviceint, default: -1

The GPU device ID on which to run the model, if non-negative. If negative (the default), run on the CPU.

cache_dirstr or os.PathLike, optional

The directory to which a downloaded pre-trained model should be cached instead of the standard cache ($XDG_CACHE_HOME or ~/.cache).

force_downloadbool, default: False

Force the model to be downloaded, even if it is already available locally.

verbosestr, default: ‘progress’

See VerbosityLevel for options.

**kwargsdict, optional

Additional keyword arguments to be passed to the underlying parsers (see Other Parameters). By default, they are set to the values in the pipeline_config.json file in the model directory.

Other Parameters
Tagger parameters:
batch_sizeint, optional

The number of sentences per batch.

tag_top_kint, optional

The maximum number of tags to keep. If 0, keep all tags.

tag_prob_thresholdfloat, optional

The probability multiplier used for the threshold to keep tags.

tag_prob_threshold_strategy{‘relative’, ‘absolute’}

If “relative”, the probability threshold is relative to the highest scoring tag. Otherwise, the probability is an absolute threshold.

span_top_kint, optional

The maximum number of entries to keep per span. If 0, keep all entries.

span_prob_thresholdfloat, optional

The probability multiplier used for the threshold to keep entries for a span.

span_prob_threshold_strategy{‘relative’, ‘absolute’}

If “relative”, the probability threshold is relative to the highest scoring entry. Otherwise, the probability is an absolute threshold.

Chart parser parameters:
eisner_normal_formbool, default: True

Whether to use Eisner normal form.

max_parse_treesint, optional

A safety limit to the number of parse trees that can be generated per parse before automatically failing.

beam_sizeint, optional

The beam size to use in the chart cells.

input_tag_score_weightfloat, optional

A scaling multiplier to the log-probabilities of the input tags. This means that a weight of 0 causes all of the input tags to have the same score.

missing_cat_scorefloat, optional

The default score for a category that is generated but not part of the grammar.

missing_span_scorefloat, optional

The default score for a category that is part of the grammar but has no score, due to being below the threshold kept by the tagger.

static available_models() list[str][source]

List the available models.

sentences2trees(sentences: SentenceBatchType, tokenised: bool = False, suppress_exceptions: bool = False, verbose: Optional[str] = None) list[Optional[CCGTree]][source]

Parse multiple sentences into a list of CCGTree objects.

Parameters
sentenceslist of str, or list of list of str

The sentences to be parsed, passed either as strings or as lists of tokens.

suppress_exceptionsbool, default: False

Whether to suppress exceptions. If True, then if a sentence fails to parse, instead of raising an exception, its return entry is None.

tokenisedbool, default: False

Whether each sentence has been passed as a list of tokens.

verbosestr, optional

See VerbosityLevel for options. If set, takes priority over the verbose attribute of the parser.

Returns
list of CCGTree or None

The parsed trees. May contain None if exceptions are suppressed.
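The suppress_exceptions contract above can be sketched in plain Python; toy_parse below is a hypothetical stand-in for a real parser backend, not part of lambeq:

```python
# When suppress_exceptions is True, a failed parse becomes a None entry
# instead of aborting the whole batch with an exception.
def toy_parse(sentence: str) -> str:
    if not sentence.strip():
        raise ValueError(f'failed to parse: {sentence!r}')
    return f'tree({sentence})'

def sentences2trees_sketch(sentences, suppress_exceptions=False):
    trees = []
    for sentence in sentences:
        try:
            trees.append(toy_parse(sentence))
        except ValueError:
            if not suppress_exceptions:
                raise
            trees.append(None)  # failed parse -> None entry
    return trees

print(sentences2trees_sketch(['John walks', ''], suppress_exceptions=True))
# ['tree(John walks)', None]
```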

class lambeq.CCGAtomicType(value)

Bases: lambeq.text2diagram.ccg_types._CCGAtomicTypeMeta

Standard CCG atomic types mapping to their biclosed type.

CONJUNCTION = Ty('conj')
NOUN = Ty('n')
NOUN_PHRASE = Ty('n')
PREPOSITIONAL_PHRASE = Ty('p')
PUNCTUATION = Ty('punc')
SENTENCE = Ty('s')
exception lambeq.CCGBankParseError(sentence: str = '', message: str = '')[source]

Bases: Exception

Error raised if parsing fails in CCGBank.

__init__(sentence: str = '', message: str = '')[source]
class lambeq.CCGBankParser(root: Union[str, os.PathLike[str]], verbose: str = 'suppress')[source]

Bases: lambeq.text2diagram.ccg_parser.CCGParser

A parser for CCGBank trees.

__init__(root: Union[str, os.PathLike[str]], verbose: str = 'suppress')[source]

Initialise a CCGBank parser.

Parameters
rootstr or os.PathLike

Path to the root of the corpus. The sections must be located in <root>/data/AUTO.

verbosestr, default: ‘suppress’

See VerbosityLevel for options.

section2diagrams(section_id: int, planar: bool = False, suppress_exceptions: bool = False, verbose: Optional[str] = None) dict[str, Optional[Diagram]][source]

Parse a CCGBank section into diagrams.

Parameters
section_idint

The section to parse.

planarbool, default: False

Force diagrams to be planar when they contain crossed composition.

suppress_exceptionsbool, default: False

Stop exceptions from being raised, instead returning None for a diagram.

verbosestr, optional

See VerbosityLevel for options. If set, takes priority over the verbose attribute of the parser.

Returns
diagramsdict

A dictionary of diagrams labelled by their ID in CCGBank. If a diagram fails to draw and exceptions are suppressed, that entry is replaced by None.

Raises
CCGBankParseError

If parsing fails and exceptions are not suppressed.

section2diagrams_gen(section_id: int, planar: bool = False, suppress_exceptions: bool = False, verbose: Optional[str] = None) Iterator[tuple[str, Optional[Diagram]]][source]

Parse a CCGBank section into diagrams, given as a generator.

The generator only reads data when it is accessed, providing the user with control over the reading process.

Parameters
section_idint

The section to parse.

planarbool, default: False

Force diagrams to be planar when they contain crossed composition.

suppress_exceptionsbool, default: False

Stop exceptions from being raised, instead returning None for a diagram.

verbosestr, optional

See VerbosityLevel for options. If set, takes priority over the verbose attribute of the parser.

Yields
ID, diagramtuple of str and Diagram

ID in CCGBank and the corresponding diagram. If a diagram fails to draw and exceptions are suppressed, that entry is replaced by None.

Raises
CCGBankParseError

If parsing fails and exceptions are not suppressed.

section2trees(section_id: int, suppress_exceptions: bool = False, verbose: Optional[str] = None) dict[str, Optional[CCGTree]][source]

Parse a CCGBank section into trees.

Parameters
section_idint

The section to parse.

suppress_exceptionsbool, default: False

Stop exceptions from being raised, instead returning None for a tree.

verbosestr, optional

See VerbosityLevel for options. If set, takes priority over the verbose attribute of the parser.

Returns
treesdict

A dictionary of trees labelled by their ID in CCGBank. If a tree fails to parse and exceptions are suppressed, that entry is None.

Raises
CCGBankParseError

If parsing fails and exceptions are not suppressed.

section2trees_gen(section_id: int, suppress_exceptions: bool = False, verbose: Optional[str] = None) Iterator[tuple[str, Optional[CCGTree]]][source]

Parse a CCGBank section into trees, given as a generator.

The generator only reads data when it is accessed, providing the user with control over the reading process.

Parameters
section_idint

The section to parse.

suppress_exceptionsbool, default: False

Stop exceptions from being raised, instead returning None for a tree.

verbosestr, optional

See VerbosityLevel for options. If set, takes priority over the verbose attribute of the parser.

Yields
ID, treetuple of str and CCGTree

ID in CCGBank and the corresponding tree. If a tree fails to parse and exceptions are suppressed, that entry is None.

Raises
CCGBankParseError

If parsing fails and exceptions are not suppressed.

sentences2trees(sentences: SentenceBatchType, tokenised: bool = False, suppress_exceptions: bool = False, verbose: Optional[str] = None) list[Optional[CCGTree]][source]

Parse a CCGBank sentence derivation into a CCGTree.

The sentence must be in the format outlined in the CCGBank manual section D.2 and not just a list of words.

Parameters
sentenceslist of str

List of sentences to parse.

suppress_exceptionsbool, default: False

Stop exceptions from being raised, instead returning None for a tree.

tokenisedbool, default: False

Whether the sentence has been passed as a list of tokens. For CCGBankParser, it should be kept False.

verbosestr, optional

See VerbosityLevel for options. If set, takes priority over the verbose attribute of the parser.

Returns
treeslist of CCGTree

A list of trees. If a tree fails to parse and exceptions are suppressed, that entry is None.

Raises
CCGBankParseError

If parsing fails and exceptions are not suppressed.

ValueError

If the tokenised flag is True (not valid for CCGBankParser).

class lambeq.CCGParser(root_cats: Optional[Iterable[str]] = None, verbose: str = 'suppress')[source]

Bases: lambeq.text2diagram.base.Reader

Base class for CCG parsers.

abstract __init__(root_cats: Optional[Iterable[str]] = None, verbose: str = 'suppress') None[source]

Initialise the CCG parser.

sentence2diagram(sentence: Union[str, List[str]], tokenised: bool = False, planar: bool = False, suppress_exceptions: bool = False) Optional[discopy.rigid.Diagram][source]

Parse a sentence into a DisCoPy diagram.

Parameters
sentencestr or list of str

The sentence to be parsed.

planarbool, default: False

Force diagrams to be planar when they contain crossed composition.

suppress_exceptionsbool, default: False

Whether to suppress exceptions. If True, then if the sentence fails to parse, instead of raising an exception, returns None.

tokenisedbool, default: False

Whether the sentence has been passed as a list of tokens.

Returns
discopy.Diagram or None

The parsed diagram, or None on failure.

sentence2tree(sentence: Union[str, List[str]], tokenised: bool = False, suppress_exceptions: bool = False) Optional[lambeq.text2diagram.ccg_tree.CCGTree][source]

Parse a sentence into a CCGTree.

Parameters
sentencestr, list[str]

The sentence to be parsed, passed either as a string, or as a list of tokens.

suppress_exceptionsbool, default: False

Whether to suppress exceptions. If True, then if the sentence fails to parse, instead of raising an exception, returns None.

tokenisedbool, default: False

Whether the sentence has been passed as a list of tokens.

Returns
CCGTree or None

The parsed tree, or None on failure.

sentences2diagrams(sentences: SentenceBatchType, tokenised: bool = False, planar: bool = False, suppress_exceptions: bool = False, verbose: Optional[str] = None) list[Optional[Diagram]][source]

Parse multiple sentences into a list of DisCoPy diagrams.

Parameters
sentenceslist of str, or list of list of str

The sentences to be parsed.

planarbool, default: False

Force diagrams to be planar when they contain crossed composition.

suppress_exceptionsbool, default: False

Whether to suppress exceptions. If True, then if a sentence fails to parse, instead of raising an exception, its return entry is None.

tokenisedbool, default: False

Whether each sentence has been passed as a list of tokens.

verbosestr, optional

See VerbosityLevel for options. Not all parsers implement all three levels of progress reporting, see the respective documentation for each parser. If set, takes priority over the verbose attribute of the parser.

Returns
list of discopy.Diagram or None

The parsed diagrams. May contain None if exceptions are suppressed.

abstract sentences2trees(sentences: SentenceBatchType, tokenised: bool = False, suppress_exceptions: bool = False, verbose: Optional[str] = None) list[Optional[CCGTree]][source]

Parse multiple sentences into a list of CCGTree objects.

Parameters
sentenceslist of str, or list of list of str

The sentences to be parsed, passed either as strings or as lists of tokens.

suppress_exceptionsbool, default: False

Whether to suppress exceptions. If True, then if a sentence fails to parse, instead of raising an exception, its return entry is None.

tokenisedbool, default: False

Whether each sentence has been passed as a list of tokens.

verbosestr, optional

See VerbosityLevel for options. Not all parsers implement all three levels of progress reporting, see the respective documentation for each parser. If set, takes priority over the verbose attribute of the parser.

Returns
list of CCGTree or None

The parsed trees. May contain None if exceptions are suppressed.

class lambeq.CCGRule(value)[source]

Bases: str, enum.Enum

An enumeration of the available CCG rules.

BACKWARD_APPLICATION = 'BA'
BACKWARD_COMPOSITION = 'BC'
BACKWARD_CROSSED_COMPOSITION = 'BX'
BACKWARD_TYPE_RAISING = 'BTR'
CONJUNCTION = 'CONJ'
FORWARD_APPLICATION = 'FA'
FORWARD_COMPOSITION = 'FC'
FORWARD_CROSSED_COMPOSITION = 'FX'
FORWARD_TYPE_RAISING = 'FTR'
GENERALIZED_BACKWARD_COMPOSITION = 'GBC'
GENERALIZED_BACKWARD_CROSSED_COMPOSITION = 'GBX'
GENERALIZED_FORWARD_COMPOSITION = 'GFC'
GENERALIZED_FORWARD_CROSSED_COMPOSITION = 'GFX'
LEXICAL = 'L'
REMOVE_PUNCTUATION_LEFT = 'LP'
REMOVE_PUNCTUATION_RIGHT = 'RP'
UNARY = 'U'
UNKNOWN = 'UNK'
__call__(dom: discopy.biclosed.Ty, cod: discopy.biclosed.Ty) discopy.biclosed.Diagram[source]

Produce a DisCoPy diagram for this rule.

If it is not possible to produce a valid diagram with the given parameters, the domain may be rewritten.

Parameters
domdiscopy.biclosed.Ty

The expected domain of the diagram.

coddiscopy.biclosed.Ty

The expected codomain of the diagram.

Returns
discopy.biclosed.Diagram

The resulting diagram.

Raises
CCGRuleUseError

If a diagram cannot be produced.

check_match(left: discopy.biclosed.Ty, right: discopy.biclosed.Ty) None[source]

Raise an exception if left does not match right.

classmethod infer_rule(dom: discopy.biclosed.Ty, cod: discopy.biclosed.Ty) lambeq.text2diagram.ccg_rule.CCGRule[source]

Infer the CCG rule that admits the given domain and codomain.

Return CCGRule.UNKNOWN if no other rule matches.

Parameters
domdiscopy.biclosed.Ty

The domain of the rule.

coddiscopy.biclosed.Ty

The codomain of the rule.

Returns
CCGRule

A CCG rule that admits the required domain and codomain.

property symbol: str

The standard CCG symbol for the rule.
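Because CCGRule subclasses both str and enum.Enum, its members behave as plain strings, so parser output such as 'FA' can be compared or looked up directly. A minimal stand-in (not the lambeq class) illustrates this:

```python
from enum import Enum

# Sketch of the `str, Enum` pattern used by CCGRule.
class CCGRuleSketch(str, Enum):
    FORWARD_APPLICATION = 'FA'
    BACKWARD_APPLICATION = 'BA'
    UNKNOWN = 'UNK'

rule = CCGRuleSketch('FA')  # look up a member by its string value
print(rule is CCGRuleSketch.FORWARD_APPLICATION)  # True
print(rule == 'FA')  # True: the str mixin makes members compare to strings
```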

exception lambeq.CCGRuleUseError(rule: lambeq.text2diagram.ccg_rule.CCGRule, message: str)[source]

Bases: Exception

Error raised when a CCGRule is applied incorrectly.

__init__(rule: lambeq.text2diagram.ccg_rule.CCGRule, message: str) None[source]
class lambeq.CCGTree(text: Optional[str] = None, *, rule: Union[str, CCGRule] = CCGRule.UNKNOWN, biclosed_type: Ty, children: Optional[Sequence[CCGTree]] = None)[source]

Bases: object

Derivation tree for a CCG.

This provides a standard derivation interface between the parser and the rest of the model.

__init__(text: Optional[str] = None, *, rule: Union[str, CCGRule] = CCGRule.UNKNOWN, biclosed_type: Ty, children: Optional[Sequence[CCGTree]] = None) None[source]

Initialise a CCG tree.

Parameters
textstr, optional

The word or phrase associated to the whole tree. If None, it is inferred from its children.

ruleCCGRule, default: CCGRule.UNKNOWN

The final CCGRule used in the derivation.

biclosed_typediscopy.biclosed.Ty

The type associated to the derived phrase.

childrenlist of CCGTree, optional

The subtrees of this node, if any. The types of these subtrees can be combined with the rule to produce the output type. A leaf node has no children.

property child: lambeq.text2diagram.ccg_tree.CCGTree

Get the child of a unary tree.

deriv(word_spacing: int = 2, use_slashes: bool = True, use_ascii: bool = False, vertical: bool = False) str[source]

Produce a string representation of the tree.

Parameters
word_spacingint, default: 2

The minimum number of spaces between the words of the diagram. Only used for horizontal diagrams.

use_slashes: bool, default: True

Whether to use slashes in the CCG types instead of arrows. Automatically set to True when use_ascii is True.

use_ascii: bool, default: False

Whether to draw using ASCII characters only.

vertical: bool, default: False

Whether to create a vertical tree representation, instead of the standard horizontal one.

Returns
str

A string that contains the graphical representation of the CCG tree.

classmethod from_json(data: None) None[source]
classmethod from_json(data: Union[str, Dict[str, Any]]) lambeq.text2diagram.ccg_tree.CCGTree

Create a CCGTree from a JSON representation.

A JSON representation of a derivation contains the following fields:

textstr or None

The word or phrase associated to the whole tree. If None, it is inferred from its children.

ruleCCGRule

The final CCGRule used in the derivation.

typediscopy.biclosed.Ty

The type associated to the derived phrase.

childrenlist or None

A list of JSON subtrees. The types of these subtrees can be combined with the rule to produce the output type. A leaf node has an empty list of children.
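A hypothetical payload with the shape described above might look as follows. The field values, in particular the biclosed-type strings, are illustrative placeholders and not guaranteed to match lambeq's exact serialisation:

```python
import json

# Hypothetical derivation for "John walks" using the fields listed above
# (text, rule, type, children). Leaf nodes have an empty children list.
derivation = {
    'text': 'John walks',
    'rule': 'BA',
    'type': 's',
    'children': [
        {'text': 'John',  'rule': 'L', 'type': 'n',    'children': []},
        {'text': 'walks', 'rule': 'L', 'type': 'n>>s', 'children': []},
    ],
}

# from_json accepts either a dict or its JSON-encoded string form.
payload = json.dumps(derivation)
print(json.loads(payload)['children'][0]['text'])  # 'John'
```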

property left: lambeq.text2diagram.ccg_tree.CCGTree

Get the left child of a binary tree.

property right: lambeq.text2diagram.ccg_tree.CCGTree

Get the right child of a binary tree.

property text: str

The word or phrase associated to the tree.

to_biclosed_diagram(planar: bool = False) discopy.biclosed.Diagram[source]

Convert tree to a derivation in DisCoPy form.

Parameters
planarbool, default: False

Force the diagram to be planar. This only affects trees using cross composition.

to_diagram(planar: bool = False) discopy.rigid.Diagram[source]

Convert tree to a DisCoCat diagram.

Parameters
planarbool, default: False

Force the diagram to be planar. This only affects trees using cross composition.

to_json() Dict[str, Any][source]

Convert tree into JSON form.

without_trivial_unary_rules() lambeq.text2diagram.ccg_tree.CCGTree[source]

Create a new CCGTree from the current tree, with all trivial unary rules (i.e. rules that map X to X) removed.

This might happen because there is no exact correspondence between CCG types and pregroup types, e.g. both CCG types NP and N are mapped to the same pregroup type n.

Returns
lambeq.text2diagram.CCGTree

A new tree free of trivial unary rules.

class lambeq.Checkpoint[source]

Bases: collections.abc.Mapping

Checkpoint class.

Attributes
entriesdict

All data, stored as part of the checkpoint.

__init__() None[source]

Initialise a Checkpoint.

add_many(values: Mapping[str, Any]) None[source]

Add several values to the checkpoint.

Parameters
valuesMapping from str to any

The values to be added into the checkpoint.

classmethod from_file(path: Union[str, os.PathLike[str]]) lambeq.training.checkpoint.Checkpoint[source]

Load the checkpoint contents from the file.

Parameters
pathstr or PathLike

Path to the checkpoint file.

Raises
FileNotFoundError

If no file is found at the given path.

to_file(path: Union[str, os.PathLike[str]]) None[source]

Save the entries to a file and delete the in-memory copy.

Parameters
pathstr or PathLike

Path to the checkpoint file.
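The round trip documented above (add entries, save with to_file, reload with from_file) can be sketched with a minimal, illustrative re-implementation; lambeq's class also implements the Mapping interface, which this sketch omits:

```python
import os
import pickle
import tempfile

# Illustrative stand-in for lambeq.Checkpoint, not the real class.
class CheckpointSketch:
    def __init__(self):
        self.entries = {}

    def add_many(self, values):
        self.entries.update(values)  # add several values at once

    def to_file(self, path):
        with open(path, 'wb') as f:
            pickle.dump(self.entries, f)
        self.entries = {}  # drop the in-memory copy after saving

    @classmethod
    def from_file(cls, path):
        if not os.path.isfile(path):
            raise FileNotFoundError(path)
        ckpt = cls()
        with open(path, 'rb') as f:
            ckpt.entries = pickle.load(f)
        return ckpt

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, 'model.ckpt')
    ckpt = CheckpointSketch()
    ckpt.add_many({'epoch': 3, 'weights': [0.1, 0.2]})
    ckpt.to_file(path)
    print(CheckpointSketch.from_file(path).entries['epoch'])  # 3
```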

class lambeq.CircuitAnsatz(ob_map: Mapping[Ty, int], n_layers: int, n_single_qubit_params: int, circuit: Callable[[int, np.ndarray], Circuit], discard: bool = False, single_qubit_rotations: Optional[list[Circuit]] = None, postselection_basis: Circuit = Id(1))[source]

Bases: lambeq.ansatz.base.BaseAnsatz

Base class for circuit ansatz.

__call__(diagram: discopy.rigid.Diagram) discopy.quantum.circuit.Circuit[source]

Convert a DisCoPy diagram into a DisCoPy circuit.

__init__(ob_map: Mapping[Ty, int], n_layers: int, n_single_qubit_params: int, circuit: Callable[[int, np.ndarray], Circuit], discard: bool = False, single_qubit_rotations: Optional[list[Circuit]] = None, postselection_basis: Circuit = Id(1)) None[source]

Instantiate a circuit ansatz.

Parameters
ob_mapdict

A mapping from discopy.rigid.Ty to the number of qubits it uses in a circuit.

n_layersint

The number of layers used by the ansatz.

n_single_qubit_paramsint

The number of single qubit rotations used by the ansatz.

circuitcallable

Circuit generator used by the ansatz. This is a function (or a class constructor) that takes a number of qubits and a numpy array of parameters, and returns the ansatz of that size, with parameterised boxes.

discardbool, default: False

Discard open wires instead of post-selecting.

postselection_basis: Circuit, default: Id(qubit)

Basis to post-select in, by default the computational basis.

single_qubit_rotations: list of Circuit, optional

The rotations to be used for a single qubit. When only a single qubit is present, the ansatz defaults to applying a series of rotations in a cycle, determined by this parameter and n_single_qubit_params.

ob_size(pg_type: discopy.rigid.Ty) int[source]

Calculate the number of qubits used for a given type.

abstract params_shape(n_qubits: int) tuple[int, ...][source]

Calculate the shape of the parameters required.

class lambeq.CoordinationRewriteRule(words: Optional[Container[str]] = None)[source]

Bases: lambeq.rewrite.base.RewriteRule

A rewrite rule for coordination.

This rule matches the word ‘and’ with codomain a.r @ a @ a.l for pregroup type a, and replaces the word, based on [Kar2016], with a layer of interleaving spiders.

__init__(words: Optional[Container[str]] = None) None[source]

Instantiate a CoordinationRewriteRule.

Parameters
wordscontainer of str, optional

A list of words to be rewritten by this rule. If a box does not have one of these words, it will not be rewritten, even if the codomain matches. If omitted, the rewrite applies only to the word “and”.

matches(box: discopy.rigid.Box) bool[source]

Check if the given box should be rewritten.

rewrite(box: discopy.rigid.Box) discopy.rigid.Diagram[source]

Rewrite the given box.

class lambeq.CurryRewriteRule[source]

Bases: lambeq.rewrite.base.RewriteRule

A rewrite rule using map-state duality.

__init__() None[source]

Instantiate a CurryRewriteRule.

This rule uses the map-state duality by iteratively uncurrying on both sides of each box. When used in conjunction with normal_form(), this removes cups from the diagram in exchange for depth. Diagrams with fewer cups become circuits with less post-selection, which results in faster QML experiments.

matches(box: discopy.rigid.Box) bool[source]

Check if the given box should be rewritten.

rewrite(box: discopy.rigid.Box) discopy.rigid.Diagram[source]

Rewrite the given box.

class lambeq.Dataset(data: list[Any], targets: list[Any], batch_size: int = 0, shuffle: bool = True)[source]

Bases: object

Dataset class for the training of a lambeq model.

Data is returned in the format of discopy.tensor.Tensor’s backend, which by default is set to NumPy. For example, to access the dataset as PyTorch tensors:

>>> dataset = Dataset(['data1'], [[0, 1, 2, 3]])
>>> with Tensor.backend('pytorch'):
...     print(dataset[0])  # becomes pytorch tensor
('data1', tensor([0, 1, 2, 3]))
>>> print(dataset[0])  # numpy array again
('data1', array([0, 1, 2, 3]))
__init__(data: list[Any], targets: list[Any], batch_size: int = 0, shuffle: bool = True) None[source]

Initialise a Dataset for lambeq training.

Parameters
datalist

Data used for training.

targetslist

List of labels.

batch_sizeint, default: 0

Batch size for batch generation, by default full dataset.

shufflebool, default: True

Enable data shuffling during training.

Raises
ValueError

When ‘data’ and ‘targets’ do not match in size.

static shuffle_data(data: list[Any], targets: list[Any]) tuple[list[Any], list[Any]][source]

Shuffle a given dataset.

Parameters
datalist

List of data points.

targetslist

List of labels.

Returns
Tuple of list and list

The shuffled dataset.
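The essential property is that data and targets are permuted together, so each data point keeps its label. An illustrative sketch (not the lambeq implementation, which uses the tensor backend):

```python
import random

# Shuffle data and targets with the same permutation.
def shuffle_data_sketch(data, targets):
    indices = list(range(len(data)))
    random.shuffle(indices)
    return [data[i] for i in indices], [targets[i] for i in indices]

data, targets = shuffle_data_sketch(list('abcd'), [0, 1, 2, 3])
# Pairing is preserved: 'a' <-> 0, 'b' <-> 1, ...
print(sorted(zip(data, targets)) == [('a', 0), ('b', 1), ('c', 2), ('d', 3)])
# True
```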

exception lambeq.DepCCGParseError(sentence: str)[source]

Bases: Exception

__init__(sentence: str) None[source]
class lambeq.DepCCGParser(*, lang: str = 'en', model: Optional[str] = None, use_model_unary_rules: bool = False, annotator: str = 'janome', tokenize: Optional[bool] = None, device: int = -1, root_cats: Optional[Iterable[str]] = None, verbose: str = 'progress', **kwargs: Any)[source]

Bases: lambeq.text2diagram.ccg_parser.CCGParser

CCG parser using depccg as the backend.

__init__(*, lang: str = 'en', model: Optional[str] = None, use_model_unary_rules: bool = False, annotator: str = 'janome', tokenize: Optional[bool] = None, device: int = -1, root_cats: Optional[Iterable[str]] = None, verbose: str = 'progress', **kwargs: Any) None[source]

Instantiate a parser based on depccg.

Parameters
lang{ ‘en’, ‘ja’ }

The language to use: ‘en’ for English, ‘ja’ for Japanese.

modelstr, optional

The name of the model variant to use, if any. depccg only has English model variants, namely ‘elmo’, ‘rebank’ and ‘elmo_rebank’.

use_model_unary_rulesbool, default: False

Use the unary rules supplied by the model instead of the ones by lambeq.

annotatorstr, default: ‘janome’

The annotator to use, if any. depccg supports ‘candc’ and ‘spacy’ for English, and ‘janome’ and ‘jigg’ for Japanese. By default, no annotator is used for English, and ‘janome’ is used for Japanese.

tokenizebool, optional

Whether to tokenise the input when annotating. This option should only be specified when using the ‘spacy’ annotator.

deviceint, optional

The ID of the GPU to use. By default, uses the CPU.

root_catsiterable of str, optional

A list of categories allowed at the root of the parse. By default, the English categories are:

  • S[dcl]

  • S[wq]

  • S[q]

  • S[qem]

  • NP

and the Japanese categories are:
  • NP[case=nc,mod=nm,fin=f]

  • NP[case=nc,mod=nm,fin=t]

  • S[mod=nm,form=attr,fin=t]

  • S[mod=nm,form=base,fin=f]

  • S[mod=nm,form=base,fin=t]

  • S[mod=nm,form=cont,fin=f]

  • S[mod=nm,form=cont,fin=t]

  • S[mod=nm,form=da,fin=f]

  • S[mod=nm,form=da,fin=t]

  • S[mod=nm,form=hyp,fin=t]

  • S[mod=nm,form=imp,fin=f]

  • S[mod=nm,form=imp,fin=t]

  • S[mod=nm,form=r,fin=t]

  • S[mod=nm,form=s,fin=t]

  • S[mod=nm,form=stem,fin=f]

  • S[mod=nm,form=stem,fin=t]

verbosestr, default: ‘progress’

Controls the command-line output of the parser. Only the ‘progress’ option is available for this parser.

**kwargsdict, optional

Optional arguments passed to depccg.

sentence2diagram(sentence: Union[str, List[str]], tokenised: bool = False, planar: bool = False, suppress_exceptions: bool = False) Optional[discopy.rigid.Diagram][source]

Parse a sentence into a DisCoPy diagram.

Parameters
sentencestr, list[str]

The sentence to be parsed, passed either as a string, or as a list of tokens.

suppress_exceptionsbool, default: False

Whether to suppress exceptions. If True, then if the sentence fails to parse, instead of raising an exception, returns None.

tokenisedbool, default: False

Whether the sentence has been passed as a list of tokens.

Returns
discopy.Diagram or None

The parsed diagram, or None on failure.

Raises
ValueError

If tokenised does not match the input type.

sentence2tree(sentence: Union[str, List[str]], tokenised: bool = False, suppress_exceptions: bool = False) Optional[lambeq.text2diagram.ccg_tree.CCGTree][source]

Parse a sentence into a CCGTree.

Parameters
sentencestr, list[str]

The sentence to be parsed, passed either as a string, or as a list of tokens.

suppress_exceptionsbool, default: False

Whether to suppress exceptions. If True, then if the sentence fails to parse, instead of raising an exception, returns None.

tokenisedbool, default: False

Whether the sentence has been passed as a list of tokens.

Returns
CCGTree or None

The parsed tree, or None on failure.

Raises
ValueError

If tokenised does not match the input type.

sentences2trees(sentences: SentenceBatchType, tokenised: bool = False, suppress_exceptions: bool = False, verbose: Optional[str] = None) list[Optional[CCGTree]][source]

Parse multiple sentences into a list of CCGTree objects.

Parameters
sentenceslist of str, or list of list of str

The sentences to be parsed, passed either as strings or as lists of tokens.

suppress_exceptionsbool, default: False

Whether to suppress exceptions. If True, then if a sentence fails to parse, instead of raising an exception, its return entry is None.

tokenisedbool, default: False

Whether each sentence has been passed as a list of tokens.

verbosestr, optional

Controls the form of progress tracking. If set, takes priority over the verbose attribute of the parser. This class only supports the ‘progress’ verbosity level (a progress bar).

Returns
list of CCGTree or None

The parsed trees. May contain None if exceptions are suppressed.

Raises
ValueError

If tokenised does not match the input type, or if verbosity is set to an unsupported value.

class lambeq.IQPAnsatz(ob_map: Mapping[Ty, int], n_layers: int, n_single_qubit_params: int = 3, discard: bool = False)[source]

Bases: lambeq.ansatz.circuit.CircuitAnsatz

Instantaneous Quantum Polynomial ansatz.

An IQP ansatz interleaves layers of Hadamard gates with diagonal unitaries. This class uses n_layers-1 adjacent CRz gates to implement each diagonal unitary.

__init__(ob_map: Mapping[Ty, int], n_layers: int, n_single_qubit_params: int = 3, discard: bool = False) None[source]

Instantiate an IQP ansatz.

Parameters
ob_mapdict

A mapping from discopy.rigid.Ty to the number of qubits it uses in a circuit.

n_layersint

The number of layers used by the ansatz.

n_single_qubit_paramsint, default: 3

The number of single qubit rotations used by the ansatz.

discardbool, default: False

Discard open wires instead of post-selecting.

params_shape(n_qubits: int) tuple[int, ...][source]

Calculate the shape of the parameters required.

class lambeq.LinearReader(combining_diagram: discopy.rigid.Diagram, word_type: discopy.rigid.Ty = Ty('s'), start_box: discopy.rigid.Diagram = Id(Ty()))[source]

Bases: lambeq.text2diagram.base.Reader

A reader that combines words linearly using a stair diagram.

__init__(combining_diagram: discopy.rigid.Diagram, word_type: discopy.rigid.Ty = Ty('s'), start_box: discopy.rigid.Diagram = Id(Ty())) None[source]

Initialise a linear reader.

Parameters
combining_diagramDiagram

The diagram that is used to combine two word boxes. It is continuously applied on the left-most wires until a single output wire remains.

word_typeTy, default: core.types.AtomicType.SENTENCE

The type of each word box. By default, it uses the sentence type from core.types.AtomicType.

start_boxDiagram, default: Id()

The start box used as a sentinel value for combining. By default, the empty diagram is used.

sentence2diagram(sentence: Union[str, List[str]], tokenised: bool = False) discopy.rigid.Diagram[source]

Parse a sentence into a DisCoPy diagram.

If tokenised is True, the sentence is treated as a pre-tokenised list of tokens; otherwise it is split into tokens by whitespace. This method creates a box for each token and combines them linearly.

Parameters
sentencestr or list of str

The input sentence, passed either as a string or as a list of tokens.

tokenisedbool, default: False

Set to True, if the sentence is passed as a list of tokens instead of a single string. If set to False, words are split by whitespace.

Raises
ValueError

If sentence does not match tokenised flag, or if an invalid mode or parser is passed to the initialiser.
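The combination scheme is essentially a left fold: the combining diagram is applied to the two left-most items until one remains. A toy plain-Python sketch of that shape, with strings standing in for diagrams:

```python
from functools import reduce

def linear_combine(word_boxes, combine, start=None):
    # Repeatedly combine the two left-most items, mirroring the stair
    # diagram built by LinearReader; `start` plays the role of the
    # optional start box.
    items = word_boxes if start is None else [start, *word_boxes]
    return reduce(combine, items)

result = linear_combine(["John", "likes", "Mary"],
                        lambda a, b: f"({a} {b})")
```

The bracketing shows the stair shape: the left-most pair is always combined first.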

class lambeq.MPSAnsatz(ob_map: Mapping[Ty, Dim], bond_dim: int, max_order: int = 3)[source]

Bases: lambeq.ansatz.tensor.TensorAnsatz

Split large boxes into matrix product states.

BOND_TYPE: discopy.rigid.Ty = Ty('B')
__call__(diagram: discopy.rigid.Diagram) discopy.tensor.Diagram[source]

Convert a DisCoPy diagram into a DisCoPy tensor.

__init__(ob_map: Mapping[Ty, Dim], bond_dim: int, max_order: int = 3) None[source]

Instantiate a matrix product state ansatz.

Parameters
ob_mapdict

A mapping from discopy.rigid.Ty to the dimension space it uses in a tensor network.

bond_dim: int

The size of the bonding dimension.

max_order: int

The maximum order of each tensor in the matrix product state, which must be at least 3.
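The splitting move behind a matrix product state can be sketched in plain NumPy: an order-4 tensor becomes two order-3 tensors joined by a bond wire. This is conceptual only; MPSAnsatz operates on diagrams, not raw arrays:

```python
import numpy as np

# Split an order-4 tensor (too big for max_order=3) into two order-3
# tensors connected by a bond index, via an exact SVD factorisation.
rng = np.random.default_rng(0)
T = rng.normal(size=(2, 2, 2, 2))
M = T.reshape(4, 4)                 # group wires (0,1) vs (2,3)
U, s, Vh = np.linalg.svd(M)
bond = len(s)                       # bond dimension (4 here, untruncated)
A = (U * s).reshape(2, 2, bond)     # first order-3 tensor
B = Vh.reshape(bond, 2, 2)          # second order-3 tensor
T_rec = np.einsum("ijb,bkl->ijkl", A, B)
```

Fixing the bond dimension (bond_dim) instead of keeping all singular values is what trades accuracy for parameter count.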

class lambeq.Model[source]

Bases: abc.ABC

Model base class.

Attributes
symbolslist of symbols

A sorted list of all Symbols occurring in the data.

weightsCollection

A data structure containing the numeric values of the model’s parameters.

__call__(*args: Any, **kwds: Any) Any[source]

Call self as a function.

__init__() None[source]

Initialise an instance of Model base class.

abstract forward(x: list[Any]) Any[source]

The forward pass of the model.

classmethod from_checkpoint(checkpoint_path: Union[str, os.PathLike[str]], **kwargs: Any) lambeq.training.model.Model[source]

Load the weights and symbols from a training checkpoint.

Parameters
checkpoint_pathstr or PathLike

Path that points to the checkpoint file.

Other Parameters
backend_configdict

Dictionary containing the backend configuration for the TketModel. Must include the fields ‘backend’, ‘compilation’ and ‘shots’.

classmethod from_diagrams(diagrams: list[Diagram], **kwargs: Any) Model[source]

Build model from a list of Diagrams.

Parameters
diagramslist of Diagram

The tensor or circuit diagrams to be evaluated.

Other Parameters
backend_configdict

Dictionary containing the backend configuration for the TketModel. Must include the fields ‘backend’, ‘compilation’ and ‘shots’.

use_jitbool, default: False

Whether to use JAX’s Just-In-Time compilation in NumpyModel.

abstract get_diagram_output(diagrams: list[Diagram]) Any[source]

Return the diagram prediction.

Parameters
diagramslist of Diagram

The tensor or circuit diagrams to be evaluated.

abstract initialise_weights() None[source]

Initialise the weights of the model.

load(checkpoint_path: Union[str, os.PathLike[str]]) None[source]

Load model data from a path pointing to a lambeq checkpoint.

Checkpoints that are created by a lambeq Trainer usually have the extension .lt.

Parameters
checkpoint_pathstr or PathLike

Path that points to the checkpoint file.

save(checkpoint_path: Union[str, os.PathLike[str]]) None[source]

Create a lambeq Checkpoint and save to a path.

Example:

>>> from lambeq import PytorchModel
>>> model = PytorchModel()
>>> model.save('my_checkpoint.lt')

Parameters
checkpoint_pathstr or PathLike

Path that points to the checkpoint file.
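The save/load pair round-trips model state through a checkpoint file. A hypothetical sketch of that contract using a plain dict and pickle (the real .lt format is handled by lambeq’s Checkpoint class and stores more than this):

```python
import os
import pickle
import tempfile

# Hypothetical stand-in for a checkpoint: the symbols and weights a
# lambeq Model persists. Illustrative only; not the real .lt format.
checkpoint = {"symbols": ["s_0", "s_1"], "weights": [0.1, 0.2]}

path = os.path.join(tempfile.mkdtemp(), "my_checkpoint.lt")
with open(path, "wb") as f:
    pickle.dump(checkpoint, f)          # Model.save analogue
with open(path, "rb") as f:
    restored = pickle.load(f)           # Model.load analogue
```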

class lambeq.NumpyModel(use_jit: bool = False)[source]

Bases: lambeq.training.quantum_model.QuantumModel

A lambeq model for an exact classical simulation of a quantum pipeline.

__init__(use_jit: bool = False) None[source]

Initialise a NumpyModel.

Parameters
use_jitbool, default: False

Whether to use JAX’s Just-In-Time compilation.

forward(x: list[Diagram]) Any[source]

Perform default forward pass of a lambeq model.

In case of a different datapoint (e.g. list of tuple) or additional computational steps, please override this method.

Parameters
xlist of Diagram

The Circuits to be evaluated.

Returns
numpy.ndarray

Array containing model’s prediction.

get_diagram_output(diagrams: list[Diagram]) Union[jnp.ndarray, numpy.ndarray][source]

Return the exact prediction for each diagram.

Parameters
diagramslist of Diagram

The Circuits to be evaluated.

Returns
np.ndarray

Resulting array.

Raises
ValueError

If model.weights or model.symbols are not initialised.

lambdas: dict[Diagram, Callable[..., Any]]
symbols: list[Union[Symbol, SymPySymbol]]
weights: np.ndarray
class lambeq.Optimizer(model: Model, hyperparams: dict[Any, Any], loss_fn: Callable[[Any, Any], float], bounds: Optional[ArrayLike] = None)[source]

Bases: abc.ABC

Optimizer base class.

__init__(model: Model, hyperparams: dict[Any, Any], loss_fn: Callable[[Any, Any], float], bounds: Optional[ArrayLike] = None) None[source]

Initialise the optimizer base class.

Parameters
modelQuantumModel

A lambeq model.

hyperparamsdict of str to float.

A dictionary containing the model’s hyperparameters.

loss_fnCallable

A loss function of form loss(prediction, labels).

boundsArrayLike, optional

The range of each of the model’s parameters.

abstract backward(batch: tuple[Iterable[Any], np.ndarray]) float[source]

Calculate the gradients of the loss function.

The gradient is calculated with respect to the model parameters.

Parameters
batchtuple of list and numpy.ndarray

Current batch.

Returns
float

The calculated loss.

abstract load_state_dict(state: Mapping[str, Any]) None[source]

Load state of the optimizer from the state dictionary.

abstract state_dict() dict[str, Any][source]

Return optimizer states as dictionary.

abstract step() None[source]

Perform optimisation step.

zero_grad() None[source]

Reset the gradients to zero.

class lambeq.PytorchModel[source]

Bases: lambeq.training.model.Model, torch.nn.modules.module.Module

A lambeq model for the classical pipeline using PyTorch.

__init__() None[source]

Initialise a PytorchModel.

forward(x: list[Diagram]) torch.Tensor[source]

Perform default forward pass by contracting tensors.

In case of a different datapoint (e.g. list of tuple) or additional computational steps, please override this method.

Parameters
xlist of Diagram

The Diagrams to be evaluated.

Returns
torch.Tensor

Tensor containing model’s prediction.

get_diagram_output(diagrams: list[Diagram]) torch.Tensor[source]

Contract diagrams using tensornetwork.

Parameters
diagramslist of Diagram

The Diagrams to be evaluated.

Returns
torch.Tensor

Resulting tensor.

Raises
ValueError

If model.weights or model.symbols are not initialised.

initialise_weights() None[source]

Initialise the weights of the model.

Raises
ValueError

If model.symbols are not initialised.

symbols: list[Symbol]
training: bool
weights: torch.nn.ParameterList
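Contracting a diagram amounts to contracting the word tensors along their shared wires. A conceptual NumPy sketch (PytorchModel itself contracts torch tensors via the tensornetwork library):

```python
import numpy as np

# Transitive sentence "subj verb obj": the verb is an order-3 tensor,
# and the cups contract its noun wires with the two noun vectors,
# leaving a vector in the sentence space.
verb = np.random.default_rng(0).normal(size=(2, 2, 2))
subj = np.array([1.0, 0.0])
obj = np.array([0.0, 1.0])
sentence = np.einsum("i,isj,j->s", subj, verb, obj)
```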
class lambeq.PytorchTrainer(model: PytorchModel, loss_function: Callable[..., torch.Tensor], epochs: int, optimizer: type[torch.optim.Optimizer] = <class 'torch.optim.adamw.AdamW'>, learning_rate: float = 0.001, device: int = -1, *, optimizer_args: Optional[dict[str, Any]] = None, evaluate_functions: Optional[Mapping[str, _EvalFuncT]] = None, evaluate_on_train: bool = True, use_tensorboard: bool = False, log_dir: Optional[_StrPathT] = None, from_checkpoint: bool = False, verbose: str = 'text', seed: Optional[int] = None)[source]

Bases: lambeq.training.trainer.Trainer

A PyTorch trainer for the classical pipeline.

__init__(model: PytorchModel, loss_function: Callable[..., torch.Tensor], epochs: int, optimizer: type[torch.optim.Optimizer] = <class 'torch.optim.adamw.AdamW'>, learning_rate: float = 0.001, device: int = -1, *, optimizer_args: Optional[dict[str, Any]] = None, evaluate_functions: Optional[Mapping[str, _EvalFuncT]] = None, evaluate_on_train: bool = True, use_tensorboard: bool = False, log_dir: Optional[_StrPathT] = None, from_checkpoint: bool = False, verbose: str = 'text', seed: Optional[int] = None) None[source]

Initialise a Trainer instance using the PyTorch backend.

Parameters
modelPytorchModel

A lambeq Model using PyTorch for tensor computation.

loss_functioncallable

A PyTorch loss function from torch.nn.

epochsint

Number of training epochs.

optimizertorch.optim.Optimizer, default: torch.optim.AdamW

A PyTorch optimizer from torch.optim.

learning_ratefloat, default: 1e-3

The learning rate provided to the optimizer for training.

deviceint, default: -1

CUDA device ID used for tensor operation speed-up. A negative value uses the CPU.

optimizer_argsdict of str to Any, optional

Any extra arguments to pass to the optimizer.

evaluate_functionsmapping of str to callable, optional

Mapping from evaluation metric names to functions, with the structure {'metric': func}. Each function takes the prediction y_hat and the label y as input; the validation step calls func(y_hat, y).

evaluate_on_trainbool, default: True

Evaluate the metrics on the train dataset.

use_tensorboardbool, default: False

Use Tensorboard for visualisation of the training logs.

log_dirstr or PathLike, optional

Location of model checkpoints (and tensorboard log). Default is runs/**CURRENT_DATETIME_HOSTNAME**.

from_checkpointbool, default: False

Starts training from the checkpoint, saved in the log_dir.

verbosestr, default: ‘text’,

See VerbosityLevel for options.

seedint, optional

Random seed.

model: PytorchModel
train_costs: list[float]
train_epoch_costs: list[float]
train_results: dict[str, list[Any]]
training_step(batch: tuple[list[Any], torch.Tensor]) tuple[torch.Tensor, float][source]

Perform a training step.

Parameters
batchtuple of list and torch.Tensor

Current batch.

Returns
Tuple of torch.Tensor and float

The model predictions and the calculated loss.

val_costs: list[float]
val_results: dict[str, list[Any]]
validation_step(batch: tuple[list[Any], torch.Tensor]) tuple[torch.Tensor, float][source]

Perform a validation step.

Parameters
batchtuple of list and torch.Tensor

Current batch.

Returns
Tuple of torch.Tensor and float

The model predictions and the calculated loss.

class lambeq.QuantumModel[source]

Bases: lambeq.training.model.Model

Quantum Model base class.

Attributes
symbolslist of symbols

A sorted list of all Symbols occurring in the data.

weightsarray

A data structure containing the numeric values of the model parameters

SMOOTHINGfloat

A smoothing constant

__call__(*args: Any, **kwargs: Any) Any[source]

Call self as a function.

__init__() None[source]

Initialise a QuantumModel.

abstract forward(x: list[Diagram]) Any[source]

Compute the forward pass of the model using get_model_output.

abstract get_diagram_output(diagrams: list[Diagram]) Union[jnp.ndarray, np.ndarray][source]

Return the diagram prediction.

Parameters
diagramslist of Diagram

The Circuits to be evaluated.

initialise_weights() None[source]

Initialise the weights of the model.

Raises
ValueError

If model.symbols are not initialised.

symbols: list[Union[Symbol, SymPySymbol]]
weights: np.ndarray
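The SMOOTHING constant guards probability estimates against exact zeros. A hypothetical sketch of such a normalisation step (the constant’s value below is made up for illustration):

```python
import numpy as np

SMOOTHING = 1e-9  # hypothetical value, for illustration only

def normalise(counts):
    # Replace exact zeros by a tiny constant and renormalise, so a
    # downstream log-based loss never evaluates log(0).
    probs = np.asarray(counts, dtype=float)
    probs[probs == 0] = SMOOTHING
    return probs / probs.sum()

p = normalise([0, 3, 1])
```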
class lambeq.QuantumTrainer(model: QuantumModel, loss_function: Callable[..., float], epochs: int, optimizer: type[Optimizer], optim_hyperparams: dict[str, float], *, optimizer_args: Optional[dict[str, Any]] = None, evaluate_functions: Optional[Mapping[str, _EvalFuncT]] = None, evaluate_on_train: bool = True, use_tensorboard: bool = False, log_dir: Optional[_StrPathT] = None, from_checkpoint: bool = False, verbose: str = 'text', seed: Optional[int] = None)[source]

Bases: lambeq.training.trainer.Trainer

A Trainer for the quantum pipeline.

__init__(model: QuantumModel, loss_function: Callable[..., float], epochs: int, optimizer: type[Optimizer], optim_hyperparams: dict[str, float], *, optimizer_args: Optional[dict[str, Any]] = None, evaluate_functions: Optional[Mapping[str, _EvalFuncT]] = None, evaluate_on_train: bool = True, use_tensorboard: bool = False, log_dir: Optional[_StrPathT] = None, from_checkpoint: bool = False, verbose: str = 'text', seed: Optional[int] = None) None[source]

Initialise a Trainer using a quantum backend.

Parameters
modelQuantumModel

A lambeq Model.

loss_functioncallable

A loss function.

epochsint

Number of training epochs.

optimizerOptimizer

An optimizer of type lambeq.training.Optimizer.

optim_hyperparamsdict of str to float

The hyperparameters to be used by the optimizer.

optimizer_argsdict of str to Any, optional

Any extra arguments to pass to the optimizer.

evaluate_functionsmapping of str to callable, optional

Mapping from evaluation metric names to functions, with the structure {'metric': func}. Each function takes the prediction y_hat and the label y as input; the validation step calls func(y_hat, y).

evaluate_on_trainbool, default: True

Evaluate the metrics on the train dataset.

use_tensorboardbool, default: False

Use Tensorboard for visualisation of the training logs.

log_dirstr or PathLike, optional

Location of model checkpoints (and tensorboard log). Default is runs/**CURRENT_DATETIME_HOSTNAME**.

from_checkpointbool, default: False

Starts training from the checkpoint, saved in the log_dir.

verbosestr, default: ‘text’,

See VerbosityLevel for options.

seedint, optional

Random seed.

fit(train_dataset: lambeq.training.dataset.Dataset, val_dataset: Optional[lambeq.training.dataset.Dataset] = None, evaluation_step: int = 1, logging_step: int = 1) None[source]

Fit the model on the training data and, optionally, evaluate it on the validation data.

Parameters
train_datasetDataset

Dataset used for training.

val_datasetDataset, optional

Validation dataset.

evaluation_stepint, default: 1

Sets the intervals at which the metrics are evaluated on the validation dataset.

logging_stepint, default: 1

Sets the intervals at which the training statistics are printed if verbose = ‘text’ (otherwise ignored).

model: QuantumModel
train_costs: list[float]
train_epoch_costs: list[float]
train_results: dict[str, list[Any]]
training_step(batch: tuple[list[Any], np.ndarray]) tuple[np.ndarray, float][source]

Perform a training step.

Parameters
batchtuple of list and np.ndarray

Current batch.

Returns
Tuple of np.ndarray and float

The model predictions and the calculated loss.

val_costs: list[float]
val_results: dict[str, list[Any]]
validation_step(batch: tuple[list[Any], np.ndarray]) tuple[np.ndarray, float][source]

Perform a validation step.

Parameters
batchtuple of list and np.ndarray

Current batch.

Returns
tuple of np.ndarray and float

The model predictions and the calculated loss.

class lambeq.Reader[source]

Bases: abc.ABC

Base class for readers and parsers.

abstract sentence2diagram(sentence: Union[str, List[str]], tokenised: bool = False) Optional[discopy.rigid.Diagram][source]

Parse a sentence into a DisCoPy diagram.

sentences2diagrams(sentences: SentenceBatchType, tokenised: bool = False) list[Optional[Diagram]][source]

Parse multiple sentences into a list of DisCoPy diagrams.

class lambeq.RewriteRule[source]

Bases: abc.ABC

Base class for rewrite rules.

__call__(box: discopy.rigid.Box) Optional[discopy.rigid.Diagram][source]

Apply the rewrite rule to a box.

Parameters
boxdiscopy.rigid.Box

The candidate box to be tested against this rewrite rule.

Returns
discopy.rigid.Diagram, optional

The rewritten diagram, or None if rule does not apply.

Notes

The default implementation uses the matches() and rewrite() methods, but derived classes may choose to not use them, since the default Rewriter implementation does not call those methods directly, only this one.

abstract matches(box: discopy.rigid.Box) bool[source]

Check if the given box should be rewritten.

abstract rewrite(box: discopy.rigid.Box) discopy.rigid.Diagram[source]

Rewrite the given box.
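The default dispatch described in the Notes above can be mimicked in plain Python: __call__ applies rewrite() only when matches() accepts the box (strings stand in for discopy boxes here):

```python
# Toy rule showing the matches()/rewrite() dispatch that the default
# __call__ implements; real rules operate on discopy boxes.
class UpperCaseRule:
    def matches(self, box):
        return isinstance(box, str) and box.islower()

    def rewrite(self, box):
        return box.upper()

    def __call__(self, box):
        return self.rewrite(box) if self.matches(box) else None

rule = UpperCaseRule()
```

As in RewriteRule, calling the rule returns None when it does not apply.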

class lambeq.Rewriter(rules: Optional[Iterable[Union[str, RewriteRule]]] = None)[source]

Bases: object

Class that rewrites diagrams.

Comes with a set of default rules.

__call__(diagram: discopy.rigid.Diagram) discopy.rigid.Diagram[source]

Apply the rewrite rules to the given diagram.

__init__(rules: Optional[Iterable[Union[str, RewriteRule]]] = None) None[source]

Initialise a rewriter.

Parameters
rulesiterable of str or RewriteRule, optional

A list of rewrite rules to use. RewriteRule instances are used directly, str objects are used as names of the default rules. See Rewriter.available_rules() for the list of rule names. If omitted, all the default rules are used.

add_rules(*rules: Union[str, lambeq.rewrite.base.RewriteRule]) None[source]

Add rules to this rewriter.

classmethod available_rules() list[str][source]

The list of default rule names.

class lambeq.SPSAOptimizer(model: QuantumModel, hyperparams: dict[str, float], loss_fn: Callable[[Any, Any], float], bounds: Optional[ArrayLike] = None)[source]

Bases: lambeq.training.optimizer.Optimizer

An Optimizer using SPSA.

SPSA stands for Simultaneous Perturbation Stochastic Approximation. See https://ieeexplore.ieee.org/document/705889 for details.

__init__(model: QuantumModel, hyperparams: dict[str, float], loss_fn: Callable[[Any, Any], float], bounds: Optional[ArrayLike] = None) None[source]

Initialise the SPSA optimizer.

The hyperparameters must contain the following key value pairs:

hyperparams = {
    'a': A learning rate parameter, float
    'c': The parameter shift scaling factor, float
    'A': A stability constant, float
}

A good value for ‘A’ is approximately 0.01 * (number of training steps).

Parameters
modelQuantumModel

A lambeq quantum model.

hyperparamsdict of str to float.

A dictionary containing the model’s hyperparameters.

loss_fnCallable

A loss function of form loss(prediction, labels).

boundsArrayLike, optional

The range of each of the model parameters.

Raises
ValueError

If the hyperparameters are not set correctly, or if the length of bounds does not match the number of the model parameters.

backward(batch: tuple[Iterable[Any], np.ndarray]) float[source]

Calculate the gradients of the loss function.

The gradients are calculated with respect to the model parameters.

Parameters
batchtuple of Iterable and numpy.ndarray

Current batch. Contains an Iterable of diagrams in index 0, and the targets in index 1.

Returns
float

The calculated loss.

load_state_dict(state_dict: Mapping[str, Any]) None[source]

Load state of the optimizer from the state dictionary.

Parameters
state_dictdict

A dictionary containing a snapshot of the optimizer state.

model: QuantumModel
project: Callable[[np.ndarray], np.ndarray]
state_dict() dict[str, Any][source]

Return optimizer states as dictionary.

Returns
dict

A dictionary containing the current state of the optimizer.

step() None[source]

Perform optimisation step.

update_hyper_params() None[source]

Update the hyperparameters of the SPSA algorithm.
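The gradient estimate behind SPSA needs only two loss evaluations per step, regardless of the number of parameters. A minimal NumPy sketch of one update (the decay exponents 0.602 and 0.101 are the standard choices from the SPSA literature; this is not lambeq’s implementation):

```python
import numpy as np

def spsa_step(params, loss_fn, k, a=0.1, c=0.1, A=10.0, rng=None):
    # One SPSA update: perturb all parameters simultaneously along a
    # random +/-1 direction and difference the two loss values.
    rng = rng or np.random.default_rng(k)
    a_k = a / (k + 1 + A) ** 0.602        # decaying learning rate
    c_k = c / (k + 1) ** 0.101            # decaying perturbation size
    delta = rng.choice([-1.0, 1.0], size=params.shape)
    diff = loss_fn(params + c_k * delta) - loss_fn(params - c_k * delta)
    grad = diff / (2 * c_k * delta)       # elementwise: delta_i = +/-1
    return params - a_k * grad

# Minimise a toy quadratic loss from a fixed starting point.
x = np.array([1.0, -1.0])
for k in range(200):
    x = spsa_step(x, lambda p: float(np.sum(p ** 2)), k)
```

Both loss evaluations share the same random perturbation, which is what keeps the cost independent of the parameter count.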

class lambeq.Sim14Ansatz(ob_map: Mapping[Ty, int], n_layers: int, n_single_qubit_params: int = 3, discard: bool = False)[source]

Bases: lambeq.ansatz.circuit.CircuitAnsatz

Modification of circuit 14 from Sim et al.

Replaces circuit-block construction with two rings of CRx gates, in opposite orientation.

Paper at: https://arxiv.org/pdf/1905.10876.pdf

__init__(ob_map: Mapping[Ty, int], n_layers: int, n_single_qubit_params: int = 3, discard: bool = False) None[source]

Instantiate a Sim 14 ansatz.

Parameters
ob_mapdict

A mapping from discopy.rigid.Ty to the number of qubits it uses in a circuit.

n_layersint

The number of layers used by the ansatz.

n_single_qubit_paramsint, default: 3

The number of single qubit rotations used by the ansatz.

discardbool, default: False

Discard open wires instead of post-selecting.

params_shape(n_qubits: int) tuple[int, ...][source]

Calculate the shape of the parameters required.

class lambeq.Sim15Ansatz(ob_map: Mapping[Ty, int], n_layers: int, n_single_qubit_params: int = 3, discard: bool = False)[source]

Bases: lambeq.ansatz.circuit.CircuitAnsatz

Modification of circuit 15 from Sim et al.

Replaces circuit-block construction with two rings of CNOT gates, in opposite orientation.

Paper at: https://arxiv.org/pdf/1905.10876.pdf

__init__(ob_map: Mapping[Ty, int], n_layers: int, n_single_qubit_params: int = 3, discard: bool = False) None[source]

Instantiate a Sim 15 ansatz.

Parameters
ob_mapdict

A mapping from discopy.rigid.Ty to the number of qubits it uses in a circuit.

n_layersint

The number of layers used by the ansatz.

n_single_qubit_paramsint, default: 3

The number of single qubit rotations used by the ansatz.

discardbool, default: False

Discard open wires instead of post-selecting.

params_shape(n_qubits: int) tuple[int, ...][source]

Calculate the shape of the parameters required.

class lambeq.SimpleRewriteRule(cod: Ty, template: Diagram, words: Optional[Container[str]] = None, case_sensitive: bool = False)[source]

Bases: lambeq.rewrite.base.RewriteRule

A simple rewrite rule.

This rule matches each box against a required codomain and, if provided, a set of words. If they match, the word box is rewritten into a set template.

__init__(cod: Ty, template: Diagram, words: Optional[Container[str]] = None, case_sensitive: bool = False) None[source]

Instantiate a simple rewrite rule.

Parameters
coddiscopy.rigid.Ty

The type that the codomain of each box is matched against.

templatediscopy.rigid.Diagram

The diagram that a matching box is replaced with. A special placeholder box is replaced by the word in the matched box, and can be created using SimpleRewriteRule.placeholder().

wordscontainer of str, optional

If provided, this is a list of words that are rewritten by this rule. If a box does not have one of these words, it is not rewritten, even if the codomain matches. If omitted, all words are permitted.

case_sensitivebool, default: False

This indicates whether the list of words specified above are compared case-sensitively. The default is False.

matches(box: discopy.rigid.Box) bool[source]

Check if the given box should be rewritten.

classmethod placeholder(cod: discopy.rigid.Ty) discopy.grammar.pregroup.Word[source]

Helper function to generate the placeholder for a template.

Parameters
coddiscopy.rigid.Ty

The codomain of the placeholder, and hence the word in the resulting rewritten diagram.

Returns
discopy.rigid.Box

A placeholder box with the given codomain.

rewrite(box: discopy.rigid.Box) discopy.rigid.Diagram[source]

Rewrite the given box.

class lambeq.SpacyTokeniser[source]

Bases: lambeq.tokeniser.base.Tokeniser

Tokeniser class based on SpaCy.

__init__() None[source]
split_sentences(text: str) list[str][source]

Split input text into a list of sentences.

Parameters
textstr

A single string that contains one or multiple sentences.

Returns
list of str

List of sentences, one sentence in each string.

tokenise_sentences(sentences: Iterable[str]) list[list[str]][source]

Tokenise a list of sentences.

Parameters
sentenceslist of str

A list of untokenised sentences.

Returns
list of list of str

A list of tokenised sentences, where each sentence is a list of tokens.

class lambeq.SpiderAnsatz(ob_map: Mapping[Ty, Dim], max_order: int = 2)[source]

Bases: lambeq.ansatz.tensor.TensorAnsatz

Split large boxes into spiders.

__call__(diagram: discopy.rigid.Diagram) discopy.tensor.Diagram[source]

Convert a DisCoPy diagram into a DisCoPy tensor.

__init__(ob_map: Mapping[Ty, Dim], max_order: int = 2) None[source]

Instantiate a spider ansatz.

Parameters
ob_mapdict

A mapping from discopy.rigid.Ty to the dimension space it uses in a tensor network.

max_order: int

The maximum order of each tensor, which must be at least 2.
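A spider denotes a generalised Kronecker delta: splitting a large box with spiders shares one index across several legs. A NumPy sketch of the tensor a spider stands for:

```python
import numpy as np

def spider(dim, legs):
    # Generalised Kronecker delta: 1 where all indices agree, else 0.
    s = np.zeros((dim,) * legs)
    for i in range(dim):
        s[(i,) * legs] = 1.0
    return s

d = spider(2, 3)   # a 3-legged spider on a 2-dimensional wire
```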

class lambeq.StronglyEntanglingAnsatz(ob_map: Mapping[Ty, int], n_layers: int, n_single_qubit_params: int = 3, ranges: Optional[list[int]] = None, discard: bool = False)[source]

Bases: lambeq.ansatz.circuit.CircuitAnsatz

Strongly entangling ansatz.

Ansatz using three single qubit rotations (RzRyRz) followed by a ladder of CNOT gates with different ranges per layer.

This is adapted from the PennyLane implementation of pennylane.StronglyEntanglingLayers, pursuant to the Apache 2.0 licence.

The original paper which introduces the architecture can be found here.

__init__(ob_map: Mapping[Ty, int], n_layers: int, n_single_qubit_params: int = 3, ranges: Optional[list[int]] = None, discard: bool = False) None[source]

Instantiate a strongly entangling ansatz.

Parameters
ob_mapdict

A mapping from discopy.rigid.Ty to the number of qubits it uses in a circuit.

n_layersint

The number of circuit layers used by the ansatz.

n_single_qubit_paramsint, default: 3

The number of single qubit rotations used by the ansatz.

rangeslist of int, optional

The range of the CNOT gate between wires in each layer. By default, the range starts at one (i.e. adjacent wires) and increases by one for each subsequent layer.

discardbool, default: False

Discard open wires instead of post-selecting.

circuit(n_qubits: int, params: numpy.ndarray) discopy.quantum.circuit.Circuit[source]
params_shape(n_qubits: int) tuple[int, ...][source]

Calculate the shape of the parameters required.

class lambeq.Symbol(name: str, size: int = 1, **assumptions: bool)[source]

Bases: sympy.core.symbol.Symbol

A sympy symbol augmented with extra information.

Attributes
sizeint

The size of the tensor that this symbol represents.

default_assumptions = {}
name: str
size: int
sort_key(order: Literal[None] = None) tuple[Any, ...][source]

Return a sort key.

Examples

>>> from sympy import S, I
>>> sorted([S(1)/2, I, -I], key=lambda x: x.sort_key())
[1/2, -I, I]
>>> S("[x, 1/x, 1/x**2, x**2, x**(1/2), x**(1/4), x**(3/2)]")
[x, 1/x, x**(-2), x**2, sqrt(x), x**(1/4), x**(3/2)]
>>> sorted(_, key=lambda x: x.sort_key())
[x**(-2), 1/x, x**(1/4), sqrt(x), x, x**(3/2), x**2]
class lambeq.TensorAnsatz(ob_map: Mapping[Ty, Dim])[source]

Bases: lambeq.ansatz.base.BaseAnsatz

Base class for tensor network ansatz.

__call__(diagram: discopy.rigid.Diagram) discopy.tensor.Diagram[source]

Convert a DisCoPy diagram into a DisCoPy tensor.

__init__(ob_map: Mapping[Ty, Dim]) None[source]

Instantiate a tensor network ansatz.

Parameters
ob_mapdict

A mapping from discopy.rigid.Ty to the dimension space it uses in a tensor network.

class lambeq.TketModel(backend_config: dict[str, Any])[source]

Bases: lambeq.training.quantum_model.QuantumModel

Model based on tket.

This can run either shot-based simulations of a quantum pipeline or experiments run on quantum hardware using tket.

__init__(backend_config: dict[str, Any]) None[source]

Initialise TketModel based on the t|ket> backend.

Other Parameters
backend_configdict

Dictionary containing the backend configuration. Must include the fields backend, compilation and shots.

Raises
KeyError

If backend_config is not provided or has missing fields.

forward(x: list[Diagram]) np.ndarray[source]

Perform default forward pass of a lambeq quantum model.

In case of a different datapoint (e.g. list of tuple) or additional computational steps, please override this method.

Parameters
xlist of Diagram

The Circuits to be evaluated.

Returns
np.ndarray

Array containing model’s prediction.

get_diagram_output(diagrams: list[Diagram]) np.ndarray[source]

Return the prediction for each diagram using t|ket>.

Parameters
diagramslist of Diagram

The Circuits to be evaluated.

Returns
np.ndarray

Resulting array.

Raises
ValueError

If model.weights or model.symbols are not initialised.

symbols: list[Union[Symbol, SymPySymbol]]
weights: np.ndarray
class lambeq.Tokeniser[source]

Bases: abc.ABC

Base class for all tokenisers.

abstract split_sentences(text: str) list[str][source]

Split input text into a list of sentences.

Parameters
textstr

A single string that contains one or multiple sentences.

Returns
list of str

List of sentences, one sentence in each string.

tokenise_sentence(sentence: str) list[str][source]

Tokenise a sentence.

Parameters
sentencestr

An untokenised sentence.

Returns
list of str

A tokenised sentence, given as a list of tokens (strings).

abstract tokenise_sentences(sentences: Iterable[str]) list[list[str]][source]

Tokenise a list of sentences.

Parameters
sentenceslist of str

A list of untokenised sentences.

Returns
list of list of str

A list of tokenised sentences, where each sentence is a list of tokens (strings).
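A naive stand-in for a Tokeniser subclass, splitting on sentence-final punctuation and whitespace. SpacyTokeniser is far more robust; this only illustrates the interface:

```python
import re

class NaiveTokeniser:
    # Toy implementation of the Tokeniser interface; real tokenisers
    # handle abbreviations, clitics, punctuation tokens, etc.
    def split_sentences(self, text):
        return [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]

    def tokenise_sentences(self, sentences):
        return [s.split() for s in sentences]

    def tokenise_sentence(self, sentence):
        return self.tokenise_sentences([sentence])[0]

tok = NaiveTokeniser()
sents = tok.split_sentences("John likes Mary. Mary likes John.")
```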

class lambeq.Trainer(model: Model, loss_function: Callable[..., Any], epochs: int, evaluate_functions: Optional[Mapping[str, _EvalFuncT]] = None, evaluate_on_train: bool = True, use_tensorboard: bool = False, log_dir: Optional[_StrPathT] = None, from_checkpoint: bool = False, verbose: str = 'text', seed: Optional[int] = None)[source]

Bases: abc.ABC

Base class for a lambeq trainer.

__init__(model: Model, loss_function: Callable[..., Any], epochs: int, evaluate_functions: Optional[Mapping[str, _EvalFuncT]] = None, evaluate_on_train: bool = True, use_tensorboard: bool = False, log_dir: Optional[_StrPathT] = None, from_checkpoint: bool = False, verbose: str = 'text', seed: Optional[int] = None) None[source]

Initialise a lambeq trainer.

Parameters
modelModel

A lambeq Model.

loss_functioncallable

A loss function to compare the prediction to the true label.

epochsint

Number of training epochs.

evaluate_functionsmapping of str to callable, optional

Mapping of evaluation metric functions from their names.

evaluate_on_trainbool, default: True

Evaluate the metrics on the train dataset.

use_tensorboardbool, default: False

Use Tensorboard for visualisation of the training logs.

log_dirstr or PathLike, optional

Location of model checkpoints (and tensorboard log). Default is runs/**CURRENT_DATETIME_HOSTNAME**.

from_checkpointbool, default: False

Starts training from the checkpoint, saved in the log_dir.

verbosestr, default: ‘text’,

See VerbosityLevel for options.

seedint, optional

Random seed.

fit(train_dataset: lambeq.training.dataset.Dataset, val_dataset: Optional[lambeq.training.dataset.Dataset] = None, evaluation_step: int = 1, logging_step: int = 1) None[source]

Fit the model on the training data and, optionally, evaluate it on the validation data.

Parameters
train_datasetDataset

Dataset used for training.

val_datasetDataset, optional

Validation dataset.

evaluation_stepint, default: 1

Sets the intervals at which the metrics are evaluated on the validation dataset.

logging_stepint, default: 1

Sets the intervals at which the training statistics are printed if verbose = ‘text’ (otherwise ignored).

load_training_checkpoint(log_dir: Union[str, os.PathLike[str]]) lambeq.training.checkpoint.Checkpoint[source]

Load model from a checkpoint.

Parameters
log_dirstr or PathLike

The path to the model.lt checkpoint file.

Returns
Checkpoint

Checkpoint containing the model weights, symbols and the training history.

Raises
FileNotFoundError

If the file does not exist.

save_checkpoint(save_dict: Mapping[str, Any], log_dir: _StrPathT) None[source]

Save checkpoint.

Parameters
save_dictmapping of str to any

Mapping containing the checkpoint information.

log_dirstr or PathLike

The path where to store the model.lt checkpoint file.

abstract training_step(batch: tuple[list[Any], Any]) tuple[Any, float][source]

Perform a training step.

Parameters
batchtuple of list and any

Current batch.

Returns
Tuple of any and float

The model predictions and the calculated loss.

abstract validation_step(batch: tuple[list[Any], Any]) tuple[Any, float][source]

Perform a validation step.

Parameters
batchtuple of list and any

Current batch.

Returns
Tuple of any and float

The model predictions and the calculated loss.
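The control flow that fit() wraps around these two hooks can be sketched as follows (illustrative only; the real method also handles logging, checkpoints and metric evaluation):

```python
def fit_sketch(trainer, train_batches, val_batches,
               epochs, evaluation_step=1):
    # Epoch loop: train on every batch, validate every
    # `evaluation_step` epochs.
    for epoch in range(1, epochs + 1):
        for batch in train_batches:
            trainer.training_step(batch)
        if val_batches and epoch % evaluation_step == 0:
            for batch in val_batches:
                trainer.validation_step(batch)

class CountingTrainer:
    # Dummy trainer that records how often each hook is called.
    train_calls = 0
    val_calls = 0

    def training_step(self, batch):
        self.train_calls += 1
        return None, 0.0

    def validation_step(self, batch):
        self.val_calls += 1
        return None, 0.0

t = CountingTrainer()
fit_sketch(t, [1, 2], [1], epochs=3, evaluation_step=2)
```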

class lambeq.TreeReader(ccg_parser: typing.Union[lambeq.text2diagram.ccg_parser.CCGParser, typing.Callable[[], lambeq.text2diagram.ccg_parser.CCGParser]] = <class 'lambeq.text2diagram.bobcat_parser.BobcatParser'>, mode: lambeq.text2diagram.tree_reader.TreeReaderMode = TreeReaderMode.NO_TYPE, word_type: discopy.rigid.Ty = Ty('s'))[source]

Bases: lambeq.text2diagram.base.Reader

A reader that combines words according to a parse tree.

__init__(ccg_parser: typing.Union[lambeq.text2diagram.ccg_parser.CCGParser, typing.Callable[[], lambeq.text2diagram.ccg_parser.CCGParser]] = <class 'lambeq.text2diagram.bobcat_parser.BobcatParser'>, mode: lambeq.text2diagram.tree_reader.TreeReaderMode = TreeReaderMode.NO_TYPE, word_type: discopy.rigid.Ty = Ty('s')) None[source]

Initialise a tree reader.

Parameters
ccg_parserCCGParser or callable, default: BobcatParser

A CCGParser object or a function that returns it. The parse tree produced by the parser is used to generate the tree diagram.

modeTreeReaderMode, default: TreeReaderMode.NO_TYPE

Determines what boxes are used to combine the tree. See TreeReaderMode for options.

word_typeTy, default: core.types.AtomicType.SENTENCE

The type of each word box. By default, it uses the sentence type from core.types.AtomicType.

classmethod available_modes() list[str][source]

The list of modes for initialising a tree reader.

sentence2diagram(sentence: Union[str, List[str]], tokenised: bool = False, suppress_exceptions: bool = False) Optional[discopy.rigid.Diagram][source]

Parse a sentence into a Diagram.

This produces a tree-shaped diagram based on the output of the CCG parser.

Parameters
sentencestr or list of str

The sentence to be parsed.

tokenisedbool, default: False

Whether the sentence has been passed as a list of tokens.

suppress_exceptionsbool, default: False

Whether to suppress exceptions. If True, then if a sentence fails to parse, instead of raising an exception, its return entry is None.

Returns
discopy.rigid.Diagram or None

The parsed diagram, or None on failure.

static tree2diagram(tree: lambeq.text2diagram.ccg_tree.CCGTree, mode: lambeq.text2diagram.tree_reader.TreeReaderMode = TreeReaderMode.NO_TYPE, word_type: discopy.rigid.Ty = Ty('s'), suppress_exceptions: bool = False) Optional[discopy.rigid.Diagram][source]

Convert a CCGTree into a Diagram.

This produces a tree-shaped diagram based on the output of the CCG parser.

Parameters
treeCCGTree

The CCG tree to be converted.

modeTreeReaderMode, default: TreeReaderMode.NO_TYPE

Determines what boxes are used to combine the tree. See TreeReaderMode for options.

word_typeTy, default: core.types.AtomicType.SENTENCE

The type of each word box. By default, it uses the sentence type from core.types.AtomicType.

suppress_exceptionsbool, default: False

Whether to suppress exceptions. If True, then if a sentence fails to parse, instead of raising an exception, its return entry is None.

Returns
discopy.rigid.Diagram or None

The parsed diagram, or None on failure.

class lambeq.TreeReaderMode(value)[source]

Bases: enum.Enum

An enumeration for TreeReader.

The words in the tree diagram can be combined using the following four modes:

NO_TYPE

The ‘no type’ mode names every rule box UNIBOX.

RULE_ONLY

The ‘rule name’ mode names every rule box based on the name of the original CCG rule. For example, for the forward application rule FA(N << N), the rule box will be named FA.

RULE_TYPE

The ‘rule type’ mode names every rule box based on the name and type of the original CCG rule. For example, for the forward application rule FA(N << N), the rule box will be named FA(N << N).

HEIGHT

The ‘height’ mode names every rule box based on the tree height of its subtree. For example, a rule box directly combining two words will be named layer_1.

HEIGHT = 3
NO_TYPE = 0
RULE_ONLY = 1
RULE_TYPE = 2
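The naming behaviour described above can be summarised as a small Python function. This is a sketch derived from the mode descriptions, not lambeq's internal naming code; the rule, rule_type and height arguments are illustrative.

```python
# Sketch of how each TreeReaderMode names a rule box,
# following the descriptions above.
def box_name(mode, rule='FA', rule_type='n << n', height=1):
    if mode == 'NO_TYPE':
        return 'UNIBOX'                 # every box gets the same name
    if mode == 'RULE_ONLY':
        return rule                     # e.g. 'FA'
    if mode == 'RULE_TYPE':
        return f'{rule}({rule_type})'   # e.g. 'FA(n << n)'
    if mode == 'HEIGHT':
        return f'layer_{height}'        # e.g. 'layer_1' just above the words
    raise ValueError(mode)
```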
class lambeq.VerbosityLevel(value)[source]

Bases: enum.Enum

Level of verbosity for progress reporting.

Table 3 Available Options

  Option     Value         Description
  PROGRESS   'progress'    Use progress bar.
  TEXT       'text'        Give text report.
  SUPPRESS   'suppress'    No output.

All outputs are printed to stderr. Visual Studio Code does not always display progress bars correctly, so use the 'progress' level in Visual Studio Code at your own risk.

PROGRESS = 'progress'
SUPPRESS = 'suppress'
TEXT = 'text'
classmethod has_value(value: str) bool[source]
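The behaviour of has_value() can be reproduced with a standard-library Enum. The class below is a stand-in re-creation for illustration, not the class exported by lambeq: it checks whether a string is one of the defined verbosity values.

```python
from enum import Enum

# Stdlib re-creation of VerbosityLevel, illustrating has_value().
class VerbosityLevel(Enum):
    PROGRESS = 'progress'
    TEXT = 'text'
    SUPPRESS = 'suppress'

    @classmethod
    def has_value(cls, value):
        # True iff `value` matches the value of some member
        return value in (level.value for level in cls)
```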
exception lambeq.WebParseError(sentence: str)[source]

Bases: OSError

__init__(sentence: str) None[source]
class lambeq.WebParser(parser: str = 'depccg', verbose: str = 'suppress')[source]

Bases: lambeq.text2diagram.ccg_parser.CCGParser

Wrapper that allows passing parser queries to an online service.

__init__(parser: str = 'depccg', verbose: str = 'suppress') None[source]

Initialise a web parser.

Parameters
parserstr, default: ‘depccg’

The web parser to use. By default, this is the depccg parser.

verbosestr, default: ‘suppress’

See VerbosityLevel for options.

sentences2trees(sentences: SentenceBatchType, tokenised: bool = False, suppress_exceptions: bool = False, verbose: Optional[str] = None) list[Optional[CCGTree]][source]

Parse multiple sentences into a list of CCGTrees.

Parameters
sentenceslist of str, or list of list of str

The sentences to be parsed.

tokenisedbool, default: False

Whether each sentence has been passed as a list of tokens.

suppress_exceptionsbool, default: False

Whether to suppress exceptions. If True, then if a sentence fails to parse, instead of raising an exception, its return entry is None.

verbosestr, optional

See VerbosityLevel for options. If set, it takes priority over the verbose attribute of the parser.

Returns
list of CCGTree or None

The parsed trees. May contain None if exceptions are suppressed.

Raises
URLError

If the service URL is not well formed.

ValueError

If a sentence is blank, or if the type of the sentence does not match the tokenised flag.

WebParseError

If the parser fails to obtain a parse tree from the server.
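The suppress_exceptions semantics shared by the batch-parsing methods can be sketched in plain Python. The parse_batch and toy_parse functions below are stand-ins for illustration, not lambeq code: with suppression on, a failing sentence yields None in the output list instead of raising.

```python
# Sketch of the suppress_exceptions contract for batch parsing.
def parse_batch(sentences, parse, suppress_exceptions=False):
    trees = []
    for sentence in sentences:
        try:
            trees.append(parse(sentence))
        except Exception:
            if not suppress_exceptions:
                raise             # default: propagate the failure
            trees.append(None)    # suppressed: record None for this entry
    return trees

def toy_parse(sentence):
    if not sentence.strip():
        raise ValueError('blank sentence')
    return sentence.split()

parse_batch(['she goes home', ''], toy_parse, suppress_exceptions=True)
# → [['she', 'goes', 'home'], None]
```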

lambeq.create_pregroup_diagram(words: list[Word], cod: Ty, morphisms: list[tuple[type, int, int]]) Diagram[source]

Create a discopy.rigid.Diagram from cups and swaps.

>>> n, s = Ty('n'), Ty('s')
>>> words = [Word('she', n), Word('goes', n.r @ s @ n.l),
...          Word('home', n)]
>>> morphisms = [(Cup, 0, 1), (Cup, 3, 4)]
>>> diagram = create_pregroup_diagram(words, Ty('s'), morphisms)
Parameters
wordslist of discopy.grammar.pregroup.Word

A list of Words corresponding to the words of the sentence.

coddiscopy.rigid.Ty

The output type of the diagram.

morphisms: list of tuple[type, int, int]
A list of tuples of the form:

(morphism, start_wire_idx, end_wire_idx).

Morphisms can be Cups or Swaps, while the two numbers define the indices of the wires on which the morphism is applied.

Returns
discopy.rigid.Diagram

The generated pregroup diagram.

Raises
discopy.cat.AxiomError

If the provided morphism list does not type-check properly.
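The wire-index convention used by the morphisms argument can be checked by hand for the doctest above: wires are numbered left to right across the concatenated word types. The sketch below uses plain Python lists (no discopy) to show which wires the two cups contract.

```python
# Wire indexing for the doctest example, pure Python for illustration.
words = [('she', ['n']), ('goes', ['n.r', 's', 'n.l']), ('home', ['n'])]
wires = [t for _, types in words for t in types]
# wires == ['n', 'n.r', 's', 'n.l', 'n']
# (Cup, 0, 1) contracts 'n' with 'n.r'; (Cup, 3, 4) contracts 'n.l' with 'n'.
remaining = [t for i, t in enumerate(wires) if i not in {0, 1, 3, 4}]
# remaining == ['s'], the sentence type, matching cod=Ty('s')
```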

lambeq.diagram2str(diagram: discopy.rigid.Diagram, word_spacing: int = 2, discopy_types: bool = False, compress_layers: bool = True, use_ascii: bool = False) str[source]

Produce a string that graphically represents the input diagram using text characters, without first creating a printer. For specific arguments, see the constructor of the TextDiagramPrinter class.

lambeq.is_pregroup_diagram(diagram: discopy.rigid.Diagram) bool[source]

Check if a diagram is a pregroup diagram.

Adapted from discopy.grammar.pregroup.draw.

Parameters
diagramdiscopy.rigid.Diagram

The diagram to be checked.

Returns
bool

Whether the diagram is a pregroup diagram.

lambeq.remove_cups(diagram: discopy.rigid.Diagram) discopy.rigid.Diagram[source]

Remove cups from a discopy.rigid.Diagram.

Diagrams with fewer cups become circuits with less post-selection, resulting in faster QML experiments.

Parameters
diagramdiscopy.rigid.Diagram

The diagram from which cups will be removed.

Returns
discopy.rigid.Diagram

Diagram with some cups removed.

lambeq.remove_swaps(diagram: discopy.rigid.Diagram) discopy.rigid.Diagram[source]

Produce a proper pregroup diagram by removing any swaps.

Direct conversion of a CCG derivation into string-diagram form may introduce swaps, caused by cross-composition rules and by unary rules that can change types and the direction of composition at any point in the derivation. This method removes the swaps, producing a valid pregroup diagram (in J. Lambek’s sense) as follows:

  1. Eliminate swap morphisms by swapping the actual atomic types of the words.

  2. Scan the new diagram for any detached parts, and remove them by merging words together when possible.

Parameters
diagramdiscopy.rigid.Diagram

The input diagram.

Returns
discopy.rigid.Diagram

A copy of the input diagram without swaps.

Raises
ValueError

If the input diagram is not in DisCoPy’s “pregroup” form, i.e. when words do not strictly precede the morphisms.

Notes

The method trades off diagrammatic simplicity and conformance to a formal pregroup grammar for a larger vocabulary, since each word is associated with more types than before and new words (combined tokens) are added to the vocabulary. Depending on the size of your dataset, this might lead to data sparsity problems during training.

Examples

In the following example, “am” and “not” are combined at the CCG level using cross composition, which introduces the interwoven pattern of wires.

I       am            not        sleeping
─  ───────────  ───────────────  ────────
n  n.r·s·s.l·n  s.r·n.r.r·n.r·s   n.r·s
│   │  │  │  ╰─╮─╯    │    │  │    │  │
│   │  │  │  ╭─╰─╮    │    │  │    │  │
│   │  │  ╰╮─╯   ╰─╮──╯    │  │    │  │
│   │  │  ╭╰─╮   ╭─╰──╮    │  │    │  │
│   │  ╰──╯  ╰─╮─╯    ╰─╮──╯  │    │  │
│   │        ╭─╰─╮    ╭─╰──╮  │    │  │
│   ╰────────╯   ╰─╮──╯    ╰╮─╯    │  │
│                ╭─╰──╮    ╭╰─╮    │  │
╰────────────────╯    ╰─╮──╯  ╰────╯  │
                      ╭─╰──╮          │
                      │    ╰──────────╯

Applying the remove_swaps() method will return:

I     am not    sleeping
─  ───────────  ────────
n  n.r·s·s.l·n   n.r·s
╰───╯  │  │  ╰────╯  │
       │  ╰──────────╯

removing the swaps and combining “am” and “not” into one token.