lambeq package

class lambeq.AtomicType(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)[source]

Bases: Ty, Enum

Standard pregroup atomic types.

CONJUNCTION = Ty(conj)
NOUN = Ty(n)
NOUN_PHRASE = Ty(n)
PREPOSITIONAL_PHRASE = Ty(p)
PUNCTUATION = Ty(punc)
SENTENCE = Ty(s)
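Each member is itself a pregroup Ty, so it composes directly with @ and the .l / .r adjoints. A minimal sketch:

>>> from lambeq import AtomicType
>>> n, s = AtomicType.NOUN, AtomicType.SENTENCE
>>> print(n.r @ s @ n.l)  # pregroup type of a transitive verb
n.r @ s @ n.l
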
class lambeq.BaseAnsatz(ob_map: Mapping[Ty, Dim])[source]

Bases: ABC

Base class for ansatz.

abstract __call__(diagram: Diagram) Diagram[source]

Convert a diagram into a circuit or tensor.

abstract __init__(ob_map: Mapping[Ty, Dim]) None[source]

Instantiate an ansatz.

Parameters:
ob_map : dict

A mapping from lambeq.backend.grammar.Ty to a type in the target category. In the category of quantum circuits, this type is the number of qubits; in the category of vector spaces, this type is a vector space.

class lambeq.BinaryCrossEntropyLoss(sparse: bool = False, use_jax: bool = False, epsilon: float = 1e-09)[source]

Bases: CrossEntropyLoss

Binary cross-entropy loss function.

Parameters:
y_pred: np.ndarray or jnp.ndarray

Predicted labels from model. When sparse is False, expected to be of shape [batch_size, 2], where each row is a probability distribution. When sparse is True, expected to be of shape [batch_size, ] where each element indicates P(1).

y_true: np.ndarray or jnp.ndarray

Ground truth labels. When sparse is False, expected to be of shape [batch_size, 2], where each row is a one-hot vector. When sparse is True, expected to be of shape [batch_size, ] where each element is an integer indicating class label.

__init__(sparse: bool = False, use_jax: bool = False, epsilon: float = 1e-09) None[source]

Initialise a binary cross-entropy loss function.

Parameters:
sparse : bool, default: False

If True, each input element indicates P(1); otherwise, a probability distribution over the two classes is expected.

use_jax : bool, default: False

Whether to use the Jax variant of numpy.

epsilon : float, default: 1e-9

Smoothing constant used to prevent calculating log(0).

calculate_loss(y_pred: np.ndarray | jnp.ndarray, y_true: np.ndarray | jnp.ndarray) float[source]

Calculate value of BCE loss function.
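A small usage sketch (the array values are arbitrary):

>>> import numpy as np
>>> from lambeq import BinaryCrossEntropyLoss
>>> bce = BinaryCrossEntropyLoss()
>>> y_pred = np.array([[0.9, 0.1], [0.2, 0.8]])  # rows: distributions over {0, 1}
>>> y_true = np.array([[1.0, 0.0], [0.0, 1.0]])  # rows: one-hot labels
>>> loss = bce(y_pred, y_true)
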

exception lambeq.BobcatParseError(sentence: str)[source]

Bases: Exception

__init__(sentence: str) None[source]
class lambeq.BobcatParser(model_name_or_path: str = 'bert', root_cats: Iterable[str] | None = None, device: int = -1, cache_dir: StrPathT | None = None, force_download: bool = False, verbose: str = 'progress', **kwargs: Any)[source]

Bases: CCGParser

CCG parser using Bobcat as the backend.

__init__(model_name_or_path: str = 'bert', root_cats: Iterable[str] | None = None, device: int = -1, cache_dir: StrPathT | None = None, force_download: bool = False, verbose: str = 'progress', **kwargs: Any) None[source]

Instantiate a BobcatParser.

Parameters:
model_name_or_path : str, default: ‘bert’
Can be either:
  • The path to a directory containing a Bobcat model.

  • The name of a pre-trained model. By default, it uses the “bert” model. See also: BobcatParser.available_models()

root_cats : iterable of str, optional

A list of the categories allowed at the root of the parse tree.

device : int, default: -1

The GPU device ID on which to run the model, if positive. If negative (the default), run on the CPU.

cache_dir : str or os.PathLike, optional

The directory to which a downloaded pre-trained model should be cached instead of the standard cache ($XDG_CACHE_HOME or ~/.cache).

force_download : bool, default: False

Force the model to be downloaded, even if it is already available locally.

verbose : str, default: ‘progress’

See VerbosityLevel for options.

**kwargs : dict, optional

Additional keyword arguments to be passed to the underlying parsers (see Other Parameters). By default, they are set to the values in the pipeline_config.json file in the model directory.

Other Parameters:
Tagger parameters:
batch_size : int, optional

The number of sentences per batch.

tag_top_k : int, optional

The maximum number of tags to keep. If 0, keep all tags.

tag_prob_threshold : float, optional

The probability multiplier used for the threshold to keep tags.

tag_prob_threshold_strategy : {‘relative’, ‘absolute’}

If “relative”, the probability threshold is relative to the highest scoring tag. Otherwise, the probability is an absolute threshold.

span_top_k : int, optional

The maximum number of entries to keep per span. If 0, keep all entries.

span_prob_threshold : float, optional

The probability multiplier used for the threshold to keep entries for a span.

span_prob_threshold_strategy : {‘relative’, ‘absolute’}

If “relative”, the probability threshold is relative to the highest scoring entry. Otherwise, the probability is an absolute threshold.

Chart parser parameters:
eisner_normal_form : bool, default: True

Whether to use Eisner normal form.

max_parse_trees : int, optional

A safety limit to the number of parse trees that can be generated per parse before automatically failing.

beam_size : int, optional

The beam size to use in the chart cells.

input_tag_score_weight : float, optional

A scaling multiplier for the log-probabilities of the input tags. A weight of 0 gives all input tags the same score.

missing_cat_score : float, optional

The default score for a category that is generated but not part of the grammar.

missing_span_score : float, optional

The default score for a category that is part of the grammar but has no score, due to being below the threshold kept by the tagger.

static available_models() list[str][source]

List the available models.

sentences2trees(sentences: List[str] | List[List[str]], tokenised: bool = False, suppress_exceptions: bool = False, verbose: str | None = None) list[CCGTree] | None[source]

Parse multiple sentences into a list of CCGTree objects.

Parameters:
sentences : list of str, or list of list of str

The sentences to be parsed, passed either as strings or as lists of tokens.

suppress_exceptions : bool, default: False

Whether to suppress exceptions. If True, then if a sentence fails to parse, instead of raising an exception, its return entry is None.

tokenised : bool, default: False

Whether each sentence has been passed as a list of tokens.

verbose : str, optional

See VerbosityLevel for options. If set, takes priority over the verbose attribute of the parser.

Returns:
list of CCGTree or None

The parsed trees. (May contain None if exceptions are suppressed)
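Typical usage, assuming network access for the first-time model download:

>>> from lambeq import BobcatParser
>>> parser = BobcatParser(verbose='suppress')
>>> diagram = parser.sentence2diagram('John walks in the park')
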

exception lambeq.CCGBankParseError(sentence: str = '', message: str = '')[source]

Bases: Exception

Error raised if parsing fails in CCGBank.

__init__(sentence: str = '', message: str = '') None[source]
class lambeq.CCGBankParser(root: StrPathT, verbose: str = 'suppress')[source]

Bases: CCGParser

A parser for CCGBank trees.

__init__(root: StrPathT, verbose: str = 'suppress') None[source]

Initialise a CCGBank parser.

Parameters:
root : str or os.PathLike

Path to the root of the corpus. The sections must be located in <root>/data/AUTO.

verbose : str, default: ‘suppress’

See VerbosityLevel for options.

section2diagrams(section_id: int, planar: bool = False, suppress_exceptions: bool = False, verbose: str | None = None) dict[str, Diagram | None][source]

Parse a CCGBank section into diagrams.

Parameters:
section_id : int

The section to parse.

planar : bool, default: False

Force diagrams to be planar when they contain crossed composition.

suppress_exceptions : bool, default: False

Stop exceptions from being raised, instead returning None for a diagram.

verbose : str, optional

See VerbosityLevel for options. If set, takes priority over the verbose attribute of the parser.

Returns:
diagrams : dict

A dictionary of diagrams labelled by their ID in CCGBank. If a diagram fails to draw and exceptions are suppressed, that entry is replaced by None.

Raises:
CCGBankParseError

If parsing fails and exceptions are not suppressed.

section2diagrams_gen(section_id: int, planar: bool = False, suppress_exceptions: bool = False, verbose: str | None = None) Iterator[tuple[str, Diagram | None]][source]

Parse a CCGBank section into diagrams, given as a generator.

The generator only reads data when it is accessed, providing the user with control over the reading process.

Parameters:
section_id : int

The section to parse.

planar : bool, default: False

Force diagrams to be planar when they contain crossed composition.

suppress_exceptions : bool, default: False

Stop exceptions from being raised, instead returning None for a diagram.

verbose : str, optional

See VerbosityLevel for options. If set, takes priority over the verbose attribute of the parser.

Yields:
ID, diagram : tuple of str and Diagram

ID in CCGBank and the corresponding diagram. If a diagram fails to draw and exceptions are suppressed, that entry is replaced by None.

Raises:
CCGBankParseError

If parsing fails and exceptions are not suppressed.

section2trees(section_id: int, suppress_exceptions: bool = False, verbose: str | None = None) dict[str, CCGTree | None][source]

Parse a CCGBank section into trees.

Parameters:
section_id : int

The section to parse.

suppress_exceptions : bool, default: False

Stop exceptions from being raised, instead returning None for a tree.

verbose : str, optional

See VerbosityLevel for options. If set, takes priority over the verbose attribute of the parser.

Returns:
trees : dict

A dictionary of trees labelled by their ID in CCGBank. If a tree fails to parse and exceptions are suppressed, that entry is None.

Raises:
CCGBankParseError

If parsing fails and exceptions are not suppressed.

section2trees_gen(section_id: int, suppress_exceptions: bool = False, verbose: str | None = None) Iterator[tuple[str, CCGTree | None]][source]

Parse a CCGBank section into trees, given as a generator.

The generator only reads data when it is accessed, providing the user with control over the reading process.

Parameters:
section_id : int

The section to parse.

suppress_exceptions : bool, default: False

Stop exceptions from being raised, instead returning None for a tree.

verbose : str, optional

See VerbosityLevel for options. If set, takes priority over the verbose attribute of the parser.

Yields:
ID, tree : tuple of str and CCGTree

ID in CCGBank and the corresponding tree. If a tree fails to parse and exceptions are suppressed, that entry is None.

Raises:
CCGBankParseError

If parsing fails and exceptions are not suppressed.

sentences2trees(sentences: List[str] | List[List[str]], tokenised: bool = False, suppress_exceptions: bool = False, verbose: str | None = None) list[CCGTree | None][source]

Parse a CCGBank sentence derivation into a CCGTree.

The sentence must be in the format outlined in the CCGBank manual section D.2 and not just a list of words.

Parameters:
sentences : list of str

List of sentences to parse.

suppress_exceptions : bool, default: False

Stop exceptions from being raised, instead returning None for a tree.

tokenised : bool, default: False

Whether the sentence has been passed as a list of tokens. For CCGBankParser, it should be kept False.

verbose : str, optional

See VerbosityLevel for options. If set, takes priority over the verbose attribute of the parser.

Returns:
trees : list of CCGTree

A list of trees. If a tree fails to parse and exceptions are suppressed, that entry is None.

Raises:
CCGBankParseError

If parsing fails and exceptions are not suppressed.

ValueError

If tokenised flag is True (not valid for CCGBankParser).
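A usage sketch; the corpus root below is a hypothetical path to a local copy of CCGBank:

>>> from lambeq import CCGBankParser
>>> parser = CCGBankParser('/path/to/ccgbank')
>>> trees = parser.section2trees(0, suppress_exceptions=True)
>>> parsed = {k: t for k, t in trees.items() if t is not None}
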

class lambeq.CCGParser(root_cats: Iterable[str] | None = None, verbose: str = 'suppress')[source]

Bases: Reader

Base class for CCG parsers.

abstract __init__(root_cats: Iterable[str] | None = None, verbose: str = 'suppress') None[source]

Initialise the CCG parser.

sentence2diagram(sentence: str | List[str], tokenised: bool = False, planar: bool = False, collapse_noun_phrases: bool = True, suppress_exceptions: bool = False) Diagram | None[source]

Parse a sentence into a lambeq diagram.

Parameters:
sentence : str or list of str

The sentence to be parsed.

tokenised : bool, default: False

Whether the sentence has been passed as a list of tokens.

planar : bool, default: False

Force diagrams to be planar when they contain crossed composition.

collapse_noun_phrases : bool, default: True

If set, then before converting the tree to a diagram, all noun phrase types in the tree are changed into nouns. This includes sub-types, e.g. S/NP becomes S/N.

suppress_exceptions : bool, default: False

Whether to suppress exceptions. If True, then if the sentence fails to parse, instead of raising an exception, returns None.

Returns:
lambeq.backend.grammar.Diagram or None

The parsed diagram, or None on failure.

sentence2tree(sentence: str | List[str], tokenised: bool = False, suppress_exceptions: bool = False) CCGTree | None[source]

Parse a sentence into a CCGTree.

Parameters:
sentence : str or list of str

The sentence to be parsed, passed either as a string, or as a list of tokens.

suppress_exceptions : bool, default: False

Whether to suppress exceptions. If True, then if the sentence fails to parse, instead of raising an exception, returns None.

tokenised : bool, default: False

Whether the sentence has been passed as a list of tokens.

Returns:
CCGTree or None

The parsed tree, or None on failure.

sentences2diagrams(sentences: List[str] | List[List[str]], tokenised: bool = False, planar: bool = False, collapse_noun_phrases: bool = True, suppress_exceptions: bool = False, verbose: str | None = None) list[Diagram | None][source]

Parse multiple sentences into a list of lambeq diagrams.

Parameters:
sentences : list of str, or list of list of str

The sentences to be parsed.

tokenised : bool, default: False

Whether each sentence has been passed as a list of tokens.

planar : bool, default: False

Force diagrams to be planar when they contain crossed composition.

collapse_noun_phrases : bool, default: True

If set, then before converting each tree to a diagram, any noun phrase types in the tree are changed into nouns. This includes sub-types, e.g. S/NP becomes S/N.

suppress_exceptions : bool, default: False

Whether to suppress exceptions. If True, then if a sentence fails to parse, instead of raising an exception, its return entry is None.

verbose : str, optional

See VerbosityLevel for options. Not all parsers implement all three levels of progress reporting; see the respective documentation for each parser. If set, takes priority over the verbose attribute of the parser.

Returns:
list of lambeq.backend.grammar.Diagram or None

The parsed diagrams. May contain None if exceptions are suppressed.

abstract sentences2trees(sentences: List[str] | List[List[str]], tokenised: bool = False, suppress_exceptions: bool = False, verbose: str | None = None) list[CCGTree | None][source]

Parse multiple sentences into a list of CCGTree objects.

Parameters:
sentences : list of str, or list of list of str

The sentences to be parsed, passed either as strings or as lists of tokens.

suppress_exceptions : bool, default: False

Whether to suppress exceptions. If True, then if a sentence fails to parse, instead of raising an exception, its return entry is None.

tokenised : bool, default: False

Whether each sentence has been passed as a list of tokens.

verbose : str, optional

See VerbosityLevel for options. Not all parsers implement all three levels of progress reporting; see the respective documentation for each parser. If set, takes priority over the verbose attribute of the parser.

Returns:
list of CCGTree or None

The parsed trees. May contain None if exceptions are suppressed.

class lambeq.CCGRule(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)[source]

Bases: str, Enum

An enumeration of the available CCG rules.

BACKWARD_APPLICATION = 'BA'
BACKWARD_COMPOSITION = 'BC'
BACKWARD_CROSSED_COMPOSITION = 'BX'
BACKWARD_TYPE_RAISING = 'BTR'
CONJUNCTION = 'CONJ'
FORWARD_APPLICATION = 'FA'
FORWARD_COMPOSITION = 'FC'
FORWARD_CROSSED_COMPOSITION = 'FX'
FORWARD_TYPE_RAISING = 'FTR'
GENERALIZED_BACKWARD_COMPOSITION = 'GBC'
GENERALIZED_BACKWARD_CROSSED_COMPOSITION = 'GBX'
GENERALIZED_FORWARD_COMPOSITION = 'GFC'
GENERALIZED_FORWARD_CROSSED_COMPOSITION = 'GFX'
LEXICAL = 'L'
REMOVE_PUNCTUATION_LEFT = 'LP'
REMOVE_PUNCTUATION_RIGHT = 'RP'
UNARY = 'U'
UNKNOWN = 'UNK'
__call__(dom: Sequence[CCGType], cod: CCGType | None = None) Diagram[source]

Call self as a function.

apply(dom: Sequence[CCGType], cod: CCGType | None = None) Diagram[source]

Produce a lambeq diagram for this rule.

This is primarily used by CCG trees that have been resolved. This means, for example, that diagrams cannot be produced for the conjunction rule, since they are rewritten when resolved.

Parameters:
dom : list of CCGType

The domain of the diagram.

cod : CCGType, optional

The codomain of the diagram. This is only used for type-raising rules.

Returns:
lambeq.backend.grammar.Diagram

The resulting diagram.

Raises:
CCGRuleUseError

If a diagram cannot be produced.

check_match(left: CCGType, right: CCGType) None[source]

Raise an exception if the two arguments do not match.

classmethod infer_rule(dom: Sequence[CCGType], cod: CCGType) CCGRule[source]

Infer the CCG rule that admits the given domain and codomain.

Return CCGRule.UNKNOWN if no other rule matches.

Parameters:
dom : list of CCGType

The domain of the rule.

cod : CCGType

The codomain of the rule.

Returns:
CCGRule

A CCG rule that admits the required domain and codomain.
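For example, forward application is inferred for a domain X/Y, Y with codomain X:

>>> from lambeq import CCGRule, CCGType
>>> s, n = CCGType.SENTENCE, CCGType.NOUN
>>> CCGRule.infer_rule([s << n, n], s)
<CCGRule.FORWARD_APPLICATION: 'FA'>
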

resolve(dom: Sequence[CCGType], cod: CCGType) tuple[CCGType, ...][source]

Perform type resolution on this rule use.

This is used to propagate any type changes that have occurred in the codomain to the domain, such that applying this rule to the rewritten domain produces the provided codomain, while remaining as compatible as possible with the provided domain.

Parameters:
dom : list of CCGType

The original domain of this rule use.

cod : CCGType

The required codomain of this rule use.

Returns:
tuple of CCGType

The rewritten domain.

property symbol: str

The standard CCG symbol for the rule.

exception lambeq.CCGRuleUseError(rule: CCGRule, message: str)[source]

Bases: Exception

Error raised when a CCGRule is applied incorrectly.

__init__(rule: CCGRule, message: str) None[source]
class lambeq.CCGTree(text: str | None = None, *, rule: CCGRule | str = CCGRule.UNKNOWN, biclosed_type: CCGType, children: Iterable[CCGTree] | None = None, metadata: dict[Any, Any] | None = None)[source]

Bases: object

Derivation tree for a CCG.

This provides a standard derivation interface between the parser and the rest of the model.

__init__(text: str | None = None, *, rule: CCGRule | str = CCGRule.UNKNOWN, biclosed_type: CCGType, children: Iterable[CCGTree] | None = None, metadata: dict[Any, Any] | None = None) None[source]

Initialise a CCG tree.

Parameters:
text : str, optional

The word or phrase associated to the whole tree. If None, it is inferred from its children.

rule : CCGRule, default: CCGRule.UNKNOWN

The final CCGRule used in the derivation.

biclosed_type : CCGType

The type associated to the derived phrase.

children : list of CCGTree, optional

A list of subtrees. The types of these subtrees can be combined with the rule to produce the output type. A leaf node has an empty list of children.

metadata : dict, optional

A dictionary of miscellaneous data.

property child: CCGTree

Get the child of a unary tree.

collapse_noun_phrases() CCGTree[source]

Change noun phrase types into noun types.

This includes sub-types, e.g. S/NP becomes S/N.

deriv(word_spacing: int = 2, use_slashes: bool = True, use_ascii: bool = False, vertical: bool = False) str[source]

Produce a string representation of the tree.

Parameters:
word_spacing : int, default: 2

The minimum number of spaces between the words of the diagram. Only used for horizontal diagrams.

use_slashes: bool, default: True

Whether to use slashes in the CCG types instead of arrows. Automatically set to True when use_ascii is True.

use_ascii: bool, default: False

Whether to draw using ASCII characters only.

vertical: bool, default: False

Whether to create a vertical tree representation, instead of the standard horizontal one.

Returns:
str

A string that contains the graphical representation of the CCG tree.

classmethod from_json(data: None) None[source]
classmethod from_json(data: Dict[str, Any] | str) CCGTree

Create a CCGTree from a JSON representation.

A JSON representation of a derivation contains the following fields:

text : str or None

The word or phrase associated to the whole tree. If None, it is inferred from its children.

rule : CCGRule

The final CCGRule used in the derivation.

type : CCGType

The type associated to the derived phrase.

children : list or None

A list of JSON subtrees. The types of these subtrees can be combined with the rule to produce the output type. A leaf node has an empty list of children.
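A minimal sketch of a leaf node (field values chosen for illustration):

>>> from lambeq import CCGTree
>>> leaf = CCGTree.from_json({'text': 'John',
...                           'rule': 'L',
...                           'type': 'N',
...                           'children': []})
>>> leaf.text
'John'
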

property left: CCGTree

Get the left child of a binary tree.

property right: CCGTree

Get the right child of a binary tree.

property text: str

The word or phrase associated to the tree.

to_diagram(planar: bool = False, collapse_noun_phrases: bool = True) Diagram[source]

Convert tree to a DisCoCat diagram.

Parameters:
planar : bool, default: False

Force the diagram to be planar. This only affects trees using cross composition.

collapse_noun_phrases : bool, default: True

If set, all noun phrase types in the tree are changed into nouns before conversion. This includes sub-types, e.g. S/NP becomes S/N.

to_json() Dict[str, Any][source]

Convert tree into JSON form.

without_trivial_unary_rules() CCGTree[source]

Create a new CCGTree from the current tree, with all trivial unary rules (i.e. rules that map X to X) removed.

This might happen because there is no exact correspondence between CCG types and pregroup types, e.g. both CCG types NP and N are mapped to the same pregroup type n.

Returns:
lambeq.text2diagram.CCGTree

A new tree free of trivial unary rules.

class lambeq.CCGType(name: str | None = None, result: CCGType | None = None, direction: str | None = None, argument: CCGType | None = None)[source]

Bases: object

A type in Combinatory Categorial Grammar (CCG).

Attributes:
name : str

The name of an atomic CCG type.

result : CCGType

The result of a complex CCG type.

direction : ‘/’ or ‘\’

The direction of a complex CCG type.

argument : CCGType

The argument of a complex CCG type.

is_empty : bool

Whether the CCG type is the empty type.

is_atomic : bool

Whether the CCG type is an atomic type.

is_complex : bool

Whether the CCG type is a complex type.

is_over : bool

Whether the argument of a complex CCG type appears on the right, i.e. X/Y.

is_under : bool

Whether the argument of a complex CCG type appears on the left, i.e. X\Y.

CONJUNCTION: ClassVar[CCGType] = CCGType(conj)
CONJ_TAG: ClassVar[str] = '[conj]'
NOUN: ClassVar[CCGType] = CCGType(n)
NOUN_PHRASE: ClassVar[CCGType] = CCGType(np)
PREPOSITIONAL_PHRASE: ClassVar[CCGType] = CCGType(p)
PUNCTUATION: ClassVar[CCGType] = CCGType(punc)
SENTENCE: ClassVar[CCGType] = CCGType(s)
__init__(name: str | None = None, result: CCGType | None = None, direction: str | None = None, argument: CCGType | None = None) None[source]

Initialise a CCG type.

Parameters:
name : str, optional

(Atomic types only) The name of an atomic CCG type.

result : CCGType, optional

(Complex types only) The result of a complex CCG type.

direction : { ‘/’, ‘\’ }, optional

(Complex types only) The direction of a complex CCG type.

argument : CCGType, optional

(Complex types only) The argument of a complex CCG type.

property argument: CCGType

The argument of a complex CCG type.

Raises an error if called on a non-complex CCG type.

property direction: str

The direction of a complex CCG type.

Raises an error if called on a non-complex CCG type.

is_atomic: bool
is_complex: bool
property is_conjoinable: bool

Whether the CCG type can be used to conjoin words.

is_empty: bool
is_over: bool
is_under: bool
property left: CCGType

The left-hand side (diagrammatically) of a complex CCG type.

Raises an error if called on a non-complex CCG type.

property name: str

The name of an atomic CCG type.

Raises an error if called on a non-atomic CCG type.

over(argument: CCGType) CCGType[source]

Create a complex CCG type with the argument on the right.

classmethod parse(cat: str, map_atomic: Callable[[str], str] | None = None) CCGType[source]

Parse a CCG category string into a CCGType.

The string should follow the following grammar:

atomic_cat  = { <any character except "(", ")", "/", "\"> }
op          = "/" | "\"
bracket_cat = atomic_cat
              | "(" bracket_cat [ op bracket_cat ] ")"
cat         = bracket_cat [ op bracket_cat ] [ "[conj]" ]
Parameters:
cat : str

The category string to be parsed.

map_atomic : callable, optional

If provided, this function is called on the atomic type names in the original string, and should return their name in the output CCGType. This can be used to fix any inconsistencies in capitalisation or unify types, such as noun and noun phrase types.

Returns:
CCGType

The parsed category as a CCGType.

Raises:
CCGParseError

If parsing fails.

Notes

Conjunctions follow the CCGBank convention of:

x   and  y
C  conj  C
 \    \ /
  \ C[conj]
   \ /
    C

thus C[conj] is equivalent to C\C.
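For example, parsing a transitive verb category:

>>> cat = CCGType.parse(r'(S\NP)/NP')
>>> cat.is_over  # outermost argument on the right
True
>>> print(cat.argument)
NP
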

replace(original: CCGType, replacement: CCGType) CCGType[source]

Replace all occurrences of a sub-type with a different type.

replace_result(original: CCGType, replacement: CCGType, direction: str = '|') tuple[CCGType, CCGType | None][source]

Replace the innermost category result with a new category.

This performs a lenient replacement operation. It attempts to replace the specified result category original with replacement, but if original cannot be found, the innermost result category is replaced instead (still by replacement). This makes it suitable for cases where type resolution has occurred, so that type rewrites can propagate. The method returns the new category, alongside the category that has been replaced.

direction can be used to specify a particular structure that must be satisfied by the replacement operation. If this is not satisfied, then no replacement takes place, and the returned replaced result category is None.

Parameters:
original : CCGType

The category that should be replaced.

replacement : CCGType

The replacement for the new category.

direction : str

Used to check the operations in the category. Consists of either 1 or 2 characters, each being one of ‘/’, ‘\’, ‘|’. If 2 characters, the first checks the innermost operation, and the second checks the rest. If only 1 character, it is used for all checks.

Returns:
CCGType

The new category. If replacement fails, this is set to the original category.

CCGType or None

The replaced result category. If replacement fails, this is set to None.

Notes

This function is mainly used for substituting inner types in generalised versions of CCG rules. (See infer_rule())

Examples

>>> a, b, c, x, y = map(CCGType, 'abcxy')

Example 1: b >> c in a >> (b >> c) is matched and replaced with x.

>>> new, replaced = (a >> (b >> c)).replace_result(b >> c, x)
>>> print(new, replaced)
x\a c\b

Example 2: x cannot be matched, so the innermost category c is replaced instead.

>>> new, replaced = (a >> (b >> c)).replace_result(x, x << y)
>>> print(new, replaced)
((x/y)\b)\a c

Example 3: if not all operators are <<, then nothing is replaced.

>>> new, replaced = (a >> (c << b)).replace_result(x, y, '/')
>>> print(new, replaced)
(c/b)\a None

Example 4: the innermost use of << is on c and b, so the target c is replaced with y.

>>> new, replaced = (a >> (c << b)).replace_result(x, y, '/|')
>>> print(new, replaced)
(y/b)\a c

Example 5: the innermost use of >> is on a and (c << b), so its target (c << b) is replaced by y.

>>> new, replaced = (a >> (c << b)).replace_result(x, y, r'\|')
>>> print(new, replaced)
y\a c/b
property result: CCGType

The result of a complex CCG type.

Raises an error if called on a non-complex CCG type.

property right: CCGType

The right-hand side (diagrammatically) of a complex CCG type.

Raises an error if called on a non-complex CCG type.

slash(direction: str, argument: CCGType) CCGType[source]

Create a complex CCG type.

split(base: CCGType) tuple[grammar.Ty, grammar.Ty, grammar.Ty][source]

Isolate the inner type of a CCG type, in lambeq.

For example, if the input is T = (X\Y)/Z, the lambeq type would be Y.r @ X @ Z.l so:

>>> T = CCGType.parse(r'(X\Y)/Z')
>>> left, mid, right = T.split(CCGType('X'))
>>> print(left, mid, right, sep='  +  ')
Y.r  +  X  +  Z.l
>>> left, mid, right = T.split(CCGType.parse(r'X\Y'))
>>> print(left, mid, right, sep='  +  ')
Ty()  +  Y.r @ X  +  Z.l
to_grammar(Ty: type | None = None) grammar.Ty | Any[source]

Turn the CCG type into a lambeq grammar type.

to_string(pretty: bool = False) str[source]

Convert a CCG type to string.

Parameters:
pretty : bool

Stringify in a pretty format, using arrows instead of slashes. Note that this switches the placement of types in an “under” type, i.e. X\Y becomes Y↣X.

under(argument: CCGType) CCGType[source]

Create a complex CCG type with the argument on the left.

class lambeq.Checkpoint[source]

Bases: Mapping

Checkpoint class.

Attributes:
entries : dict

All data, stored as part of the checkpoint.

__init__() None[source]

Initialise a Checkpoint.

add_many(values: Mapping[str, Any]) None[source]

Add several values to the checkpoint.

Parameters:
values : mapping of str to any

The values to be added into the checkpoint.

classmethod from_file(path: str | PathLike[str]) Checkpoint[source]

Load the checkpoint contents from the file.

Parameters:
path : str or PathLike

Path to the checkpoint file.

Raises:
FileNotFoundError

If no file is found at the given path.

to_file(path: str | PathLike[str]) None[source]

Save entries to a file and delete the in-memory copy.

Parameters:
path : str or PathLike

Path to the checkpoint file.
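A round-trip sketch (‘run.lt’ is a hypothetical file name):

>>> from lambeq import Checkpoint
>>> ckpt = Checkpoint()
>>> ckpt.add_many({'epoch': 5, 'model_weights': [0.1, 0.2]})
>>> ckpt.to_file('run.lt')
>>> Checkpoint.from_file('run.lt')['epoch']
5
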

class lambeq.CircuitAnsatz(ob_map: ~collections.abc.Mapping[~lambeq.backend.grammar.Ty, int], n_layers: int, n_single_qubit_params: int, circuit: ~collections.abc.Callable[[int, ~numpy.ndarray], ~lambeq.backend.quantum.Diagram], discard: bool = False, single_qubit_rotations: list[~typing.Type[~lambeq.backend.quantum.Rotation]] | None = None, postselection_basis: ~lambeq.backend.quantum.Diagram = Diagram(dom=Ty(qubit), cod=Ty(qubit), layers=[], __hash__=<function Diagram.__hash__>))[source]

Bases: BaseAnsatz

Base class for circuit ansatz.

__call__(diagram: Diagram) Diagram[source]

Convert a lambeq diagram into a lambeq circuit.

__init__(ob_map: ~collections.abc.Mapping[~lambeq.backend.grammar.Ty, int], n_layers: int, n_single_qubit_params: int, circuit: ~collections.abc.Callable[[int, ~numpy.ndarray], ~lambeq.backend.quantum.Diagram], discard: bool = False, single_qubit_rotations: list[~typing.Type[~lambeq.backend.quantum.Rotation]] | None = None, postselection_basis: ~lambeq.backend.quantum.Diagram = Diagram(dom=Ty(qubit), cod=Ty(qubit), layers=[], __hash__=<function Diagram.__hash__>)) None[source]

Instantiate a circuit ansatz.

Parameters:
ob_map : dict

A mapping from lambeq.backend.grammar.Ty to the number of qubits it uses in a circuit.

n_layers : int

The number of layers used by the ansatz.

n_single_qubit_params : int

The number of single qubit rotations used by the ansatz. It only affects wires that ob_map maps to a single qubit.

circuit : callable

Circuit generator used by the ansatz. This is a function (or a class constructor) that takes a number of qubits and a numpy array of parameters, and returns the ansatz of that size, with parameterised boxes.

discard : bool, default: False

Discard open wires instead of post-selecting.

postselection_basis: Circuit, default: Id(qubit)

Basis to post-select in, by default the computational basis.

single_qubit_rotations: list of Circuit, optional

The rotations to be used for a single qubit. When only a single qubit is present, the ansatz defaults to applying a series of rotations in a cycle, determined by this parameter and n_single_qubit_params.

ob_size(pg_type: Ty) int[source]

Calculate the number of qubits used for a given type.

abstract params_shape(n_qubits: int) tuple[int, ...][source]

Calculate the shape of the parameters required.

class lambeq.CoordinationRewriteRule(words: Container[str] | None = None)[source]

Bases: RewriteRule

A rewrite rule for coordination.

This rule matches the word ‘and’ with codomain a.r @ a @ a.l for pregroup type a, and replaces the word, based on [Kar2016], with a layer of interleaving spiders.

__init__(words: Container[str] | None = None) None[source]

Instantiate a CoordinationRewriteRule.

Parameters:
words : container of str, optional

A list of words to be rewritten by this rule. If a box does not have one of these words, it will not be rewritten, even if the codomain matches. If omitted, the rewrite applies only to the word “and”.

matches(box: Box) bool[source]

Check if the given box should be rewritten.

rewrite(box: Box) Diagrammable[source]

Rewrite the given box.

class lambeq.CrossEntropyLoss(use_jax: bool = False, epsilon: float = 1e-09)[source]

Bases: LossFunction

Multiclass cross-entropy loss function.

Parameters:
y_pred: np.ndarray or jnp.ndarray

Predicted labels from model. Expected to be of shape [batch_size, n_classes], where each row is a probability distribution.

y_true: np.ndarray or jnp.ndarray

Ground truth labels. Expected to be of shape [batch_size, n_classes], where each row is a one-hot vector.

__init__(use_jax: bool = False, epsilon: float = 1e-09) None[source]

Initialise a multiclass cross-entropy loss function.

Parameters:
use_jax : bool, default: False

Whether to use the Jax variant of numpy.

epsilon : float, default: 1e-9

Smoothing constant used to prevent calculating log(0).

calculate_loss(y_pred: np.ndarray | jnp.ndarray, y_true: np.ndarray | jnp.ndarray) float[source]

Calculate value of CE loss function.

class lambeq.CurryRewriteRule[source]

Bases: RewriteRule

A rewrite rule using map-state duality.

__init__() None[source]

Instantiate a CurryRewriteRule.

This rule uses the map-state duality by iteratively uncurrying on both sides of each box. When used in conjunction with lambeq.backend.grammar.Diagram.pregroup_normal_form(), this removes cups from the diagram in exchange for depth. Diagrams with fewer cups become circuits with fewer post-selections, which results in faster QML experiments.

matches(box: Box) bool[source]

Check if the given box should be rewritten.

rewrite(box: Box) Diagrammable[source]

Rewrite the given box.
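This rule is usually applied through a Rewriter, under its default rule name ‘curry’. A sketch:

>>> from lambeq import BobcatParser, Rewriter
>>> parser = BobcatParser(verbose='suppress')
>>> diagram = parser.sentence2diagram('John gave Mary a flower')
>>> rewritten = Rewriter(['curry'])(diagram).pregroup_normal_form()
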

class lambeq.Dataset(data: list[Any], targets: list[Any], batch_size: int = 0, shuffle: bool = True)[source]

Bases: object

Dataset class for the training of a lambeq model.

Data is returned in the format of lambeq’s numerical backend, which by default is set to NumPy. For example, to access the dataset as PyTorch tensors:

>>> from lambeq.backend import numerical_backend
>>> dataset = Dataset(['data1'], [[0, 1, 2, 3]])
>>> with numerical_backend.backend('pytorch'):
...     print(dataset[0])  # becomes pytorch tensor
('data1', tensor([0, 1, 2, 3]))
>>> print(dataset[0])  # numpy array again
('data1', array([0, 1, 2, 3]))
__init__(data: list[Any], targets: list[Any], batch_size: int = 0, shuffle: bool = True) None[source]

Initialise a Dataset for lambeq training.

Parameters:
data : list

Data used for training.

targets : list

List of labels.

batch_size : int, default: 0

Batch size for batch generation; by default, the full dataset.

shuffle : bool, default: True

Enable data shuffling during training.

Raises:
ValueError

When ‘data’ and ‘targets’ do not match in size.

static shuffle_data(data: list[Any], targets: list[Any]) tuple[list[Any], list[Any]][source]

Shuffle a given dataset.

Parameters:
data : list

List of data points.

targets : list

List of labels.

Returns:
Tuple of list and list

The shuffled dataset.
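A usage sketch, assuming iteration over the dataset yields (data, targets) batches:

>>> from lambeq import Dataset
>>> dataset = Dataset(data=['s1', 's2', 's3', 's4'],
...                   targets=[[1, 0], [0, 1], [1, 0], [0, 1]],
...                   batch_size=2)
>>> for batch_data, batch_targets in dataset:
...     pass  # two batches of size 2
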

exception lambeq.DepCCGParseError(sentence: str)[source]

Bases: Exception

__init__(sentence: str) None[source]
class lambeq.DepCCGParser(*, lang: str = 'en', model: str | None = None, use_model_unary_rules: bool = False, annotator: str = 'janome', tokenize: bool | None = None, device: int = -1, root_cats: Iterable[str] | None = None, verbose: str = 'progress', **kwargs: Any)[source]

Bases: CCGParser

CCG parser using depccg as the backend.

__init__(*, lang: str = 'en', model: str | None = None, use_model_unary_rules: bool = False, annotator: str = 'janome', tokenize: bool | None = None, device: int = -1, root_cats: Iterable[str] | None = None, verbose: str = 'progress', **kwargs: Any) None[source]

Instantiate a parser based on depccg.

Parameters:
lang : { ‘en’, ‘ja’ }

The language to use: ‘en’ for English, ‘ja’ for Japanese.

model : str, optional

The name of the model variant to use, if any. depccg only has English model variants, namely ‘elmo’, ‘rebank’ and ‘elmo_rebank’.

use_model_unary_rules : bool, default: False

Use the unary rules supplied by the model instead of the ones by lambeq.

annotator : str, default: ‘janome’

The annotator to use, if any. depccg supports ‘candc’ and ‘spacy’ for English, and ‘janome’ and ‘jigg’ for Japanese. By default, no annotator is used for English, and ‘janome’ is used for Japanese.

tokenize : bool, optional

Whether to tokenise the input when annotating. This option should only be specified when using the ‘spacy’ annotator.

device : int, optional

The ID of the GPU to use. By default, uses the CPU.

root_cats : iterable of str, optional

A list of categories allowed at the root of the parse. By default, the English categories are:

  • S[dcl]

  • S[wq]

  • S[q]

  • S[qem]

  • NP

and the Japanese categories are:
  • NP[case=nc,mod=nm,fin=f]

  • NP[case=nc,mod=nm,fin=t]

  • S[mod=nm,form=attr,fin=t]

  • S[mod=nm,form=base,fin=f]

  • S[mod=nm,form=base,fin=t]

  • S[mod=nm,form=cont,fin=f]

  • S[mod=nm,form=cont,fin=t]

  • S[mod=nm,form=da,fin=f]

  • S[mod=nm,form=da,fin=t]

  • S[mod=nm,form=hyp,fin=t]

  • S[mod=nm,form=imp,fin=f]

  • S[mod=nm,form=imp,fin=t]

  • S[mod=nm,form=r,fin=t]

  • S[mod=nm,form=s,fin=t]

  • S[mod=nm,form=stem,fin=f]

  • S[mod=nm,form=stem,fin=t]

verbose : str, default: ‘progress’

Controls the command-line output of the parser. Only the ‘progress’ option is available for this parser.

**kwargs : dict, optional

Optional arguments passed to depccg.

sentence2diagram(sentence: SentenceType, tokenised: bool = False, planar: bool = False, collapse_noun_phrases: bool = True, suppress_exceptions: bool = False) Diagram | None[source]

Parse a sentence into a lambeq diagram.

Parameters:
sentence : str or list of str

The sentence to be parsed, passed either as a string, or as a list of tokens.

tokenised : bool, default: False

Whether the sentence has been passed as a list of tokens.

collapse_noun_phrases : bool, default: True

If set, then before converting each tree to a diagram, all noun phrase types in the tree are changed into nouns. This includes sub-types, e.g. S/NP becomes S/N.

suppress_exceptions : bool, default: False

Whether to suppress exceptions. If True, then if the sentence fails to parse, instead of raising an exception, returns None.

Returns:
lambeq.backend.grammar.Diagram or None

The parsed diagram, or None on failure.

Raises:
ValueError

If tokenised does not match with the input type.
sentence2tree(sentence: str | List[str], tokenised: bool = False, suppress_exceptions: bool = False) CCGTree | None[source]

Parse a sentence into a CCGTree.

Parameters:
sentence : str or list of str

The sentence to be parsed, passed either as a string, or as a list of tokens.

suppress_exceptions : bool, default: False

Whether to suppress exceptions. If True, then if the sentence fails to parse, instead of raising an exception, returns None.

tokenised : bool, default: False

Whether the sentence has been passed as a list of tokens.

Returns:
CCGTree or None

The parsed tree, or None on failure.

Raises:
ValueError

If tokenised does not match with the input type.
sentences2trees(sentences: List[str] | List[List[str]], tokenised: bool = False, suppress_exceptions: bool = False, verbose: str | None = None) list[CCGTree | None][source]

Parse multiple sentences into a list of CCGTree objects.

Parameters:
sentences : list of str, or list of list of str

The sentences to be parsed, passed either as strings or as lists of tokens.

suppress_exceptions : bool, default: False

Whether to suppress exceptions. If True, then if a sentence fails to parse, instead of raising an exception, its return entry is None.

tokenised : bool, default: False

Whether each sentence has been passed as a list of tokens.

verbose : str, optional

Controls the form of progress tracking. If set, takes priority over the verbose attribute of the parser. This class only supports the ‘progress’ verbosity level (a progress bar).

Returns:
list of CCGTree or None

The parsed trees. May contain None if exceptions are suppressed.

Raises:
ValueError

If tokenised does not match with the input type, or if verbosity is set to an unsupported value.
class lambeq.DiagramRewriter[source]

Bases: ABC

Base class for diagram level rewriters.

__call__(target: list[Diagram]) list[Diagram][source]
__call__(target: Diagram) Diagram

Rewrite the given diagram(s) if the rule applies.

Parameters:
diagram : lambeq.backend.grammar.Diagram or list of Diagram

The candidate diagram(s) to be rewritten.

Returns:
lambeq.backend.grammar.Diagram or list of Diagram

The rewritten diagram. If the rule does not apply, the original diagram is returned.

abstract matches(diagram: Diagram) bool[source]

Check if the given diagram should be rewritten.

abstract rewrite(diagram: Diagram) Diagram[source]

Rewrite the given diagram.
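A sketch of a concrete subclass: a hypothetical rewriter that rewrites every diagram to its pregroup normal form:

>>> from lambeq import DiagramRewriter
>>> from lambeq.backend.grammar import Diagram
>>> class NormalFormRewriter(DiagramRewriter):
...     def matches(self, diagram: Diagram) -> bool:
...         return True  # applies to every diagram
...     def rewrite(self, diagram: Diagram) -> Diagram:
...         return diagram.pregroup_normal_form()
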

class lambeq.IQPAnsatz(ob_map: Mapping[Ty, int], n_layers: int, n_single_qubit_params: int = 3, discard: bool = False)[source]

Bases: CircuitAnsatz

Instantaneous Quantum Polynomial ansatz.

An IQP ansatz interleaves layers of Hadamard gates with diagonal unitaries. This class uses n_layers-1 adjacent CRz gates to implement each diagonal unitary.

Code adapted from DisCoPy.

__init__(ob_map: Mapping[Ty, int], n_layers: int, n_single_qubit_params: int = 3, discard: bool = False) None[source]

Instantiate an IQP ansatz.

Parameters:
ob_map : dict

A mapping from lambeq.backend.grammar.Ty to the number of qubits it uses in a circuit.

n_layers : int

The number of layers used by the ansatz.

n_single_qubit_params : int, default: 3

The number of single qubit rotations used by the ansatz. It only affects wires that ob_map maps to a single qubit.

discard : bool, default: False

Discard open wires instead of post-selecting.

circuit(n_qubits: int, params: ndarray) Diagram[source]
params_shape(n_qubits: int) tuple[int, ...][source]

Calculate the shape of the parameters required.
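A usage sketch, mapping both atomic types to one qubit:

>>> from lambeq import AtomicType, BobcatParser, IQPAnsatz
>>> n, s = AtomicType.NOUN, AtomicType.SENTENCE
>>> ansatz = IQPAnsatz({n: 1, s: 1}, n_layers=2)
>>> parser = BobcatParser(verbose='suppress')
>>> circuit = ansatz(parser.sentence2diagram('John walks'))
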

class lambeq.LinearReader(combining_diagram: Diagram, word_type: Ty = Ty(s), start_box: Diagram = Id(Ty()))[source]

Bases: Reader

A reader that combines words linearly using a stair diagram.

__init__(combining_diagram: Diagram, word_type: Ty = Ty(s), start_box: Diagram = Id(Ty())) None[source]

Initialise a linear reader.

Parameters:
combining_diagram : Diagram

The diagram that is used to combine two word boxes. It is continuously applied on the left-most wires until a single output wire remains.

word_type : Ty, default: core.types.AtomicType.SENTENCE

The type of each word box. By default, it uses the sentence type from core.types.AtomicType.

start_box : Diagram, default: Id()

The start box used as a sentinel value for combining. By default, the empty diagram is used.

sentence2diagram(sentence: str | List[str], tokenised: bool = False) Diagram[source]

Parse a sentence into a lambeq diagram.

If tokenised is True, the sentence is treated as a list of tokens; otherwise, it is split into tokens by whitespace. This method creates a box for each token, and combines them linearly.

Parameters:
sentencestr or list of str

The input sentence, passed either as a string or as a list of tokens.

tokenised : bool, default: False

Set to True, if the sentence is passed as a list of tokens instead of a single string. If set to False, words are split by whitespace.

Raises:
ValueError

If sentence does not match tokenised flag, or if an invalid mode or parser is passed to the initialiser.
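A sketch with a hypothetical combining box that merges two sentence wires into one:

>>> from lambeq import AtomicType, LinearReader
>>> from lambeq.backend.grammar import Box
>>> S = AtomicType.SENTENCE
>>> combine = Box('COMBINE', S @ S, S)
>>> reader = LinearReader(combine)
>>> diagram = reader.sentence2diagram('John walks in the park')
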

class lambeq.LossFunction(use_jax: bool = False)[source]

Bases: ABC

Loss function base class.

Attributes:
backend : ModuleType

The module to use for array numerical functions. Either numpy or jax.numpy.

__call__(y_pred: np.ndarray | jnp.ndarray, y_true: np.ndarray | jnp.ndarray) float[source]

Call self as a function.

__init__(use_jax: bool = False) None[source]

Initialise a loss function.

Parameters:
use_jax : bool, default: False

Whether to use the Jax variant of numpy as backend.

abstract calculate_loss(y_pred: np.ndarray | jnp.ndarray, y_true: np.ndarray | jnp.ndarray) float[source]

Calculate value of loss function.

class lambeq.MPSAnsatz(ob_map: Mapping[Ty, Dim], bond_dim: int, max_order: int = 3)[source]

Bases: TensorAnsatz

Split large boxes into matrix product states.

BOND_TYPE: Ty = Ty(B)
__call__(diagram: Diagram) Diagram[source]

Convert a diagram into a tensor.

__init__(ob_map: Mapping[Ty, Dim], bond_dim: int, max_order: int = 3) None[source]

Instantiate a matrix product state ansatz.

Parameters:
ob_map : dict

A mapping from lambeq.backend.grammar.Ty to the dimension space it uses in a tensor network.

bond_dim: int

The size of the bonding dimension.

max_order: int

The maximum order of each tensor in the matrix product state, which must be at least 3.
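A usage sketch (assuming Dim is importable from lambeq.backend.tensor):

>>> from lambeq import AtomicType, MPSAnsatz
>>> from lambeq.backend.tensor import Dim
>>> n, s = AtomicType.NOUN, AtomicType.SENTENCE
>>> ansatz = MPSAnsatz({n: Dim(4), s: Dim(2)}, bond_dim=3)
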

class lambeq.MSELoss(use_jax: bool = False)[source]

Bases: LossFunction

Mean squared error loss function.

Parameters:
y_pred: np.ndarray or jnp.ndarray

Predicted values from model. Shape must match y_true.

y_true: np.ndarray or jnp.ndarray

Ground truth values.

calculate_loss(y_pred: np.ndarray | jnp.ndarray, y_true: np.ndarray | jnp.ndarray) float[source]

Calculate value of MSE loss function.
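A small sketch:

>>> import numpy as np
>>> from lambeq import MSELoss
>>> mse = MSELoss()
>>> loss = mse(np.array([0.9, 0.1]), np.array([1.0, 0.0]))  # 0.01
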

class lambeq.Model[source]

Bases: ABC

Model base class.

Attributes:
symbols : list of symbols

A sorted list of all Symbols occurring in the data.

weights : Collection

A data structure containing the numeric values of the model’s parameters.

__call__(*args: Any, **kwds: Any) Any[source]

Call self as a function.

__init__() None[source]

Initialise an instance of Model base class.

abstract forward(x: list[Any]) Any[source]

The forward pass of the model.

classmethod from_checkpoint(checkpoint_path: StrPathT, **kwargs: Any) Model[source]

Load the weights and symbols from a training checkpoint.

Parameters:
checkpoint_path : str or PathLike

Path that points to the checkpoint file.

Other Parameters:
backend_config : dict

Dictionary containing the backend configuration for the TketModel. Must include the fields ‘backend’, ‘compilation’ and ‘shots’.

classmethod from_diagrams(diagrams: list[Diagram], **kwargs: Any) Model[source]

Build model from a list of Diagrams.

Parameters:
diagrams : list of Diagram

The tensor or circuit diagrams to be evaluated.

Other Parameters:
backend_config : dict

Dictionary containing the backend configuration for the TketModel. Must include the fields ‘backend’, ‘compilation’ and ‘shots’.

use_jit : bool, default: False

Whether to use JAX’s Just-In-Time compilation in NumpyModel.

abstract get_diagram_output(diagrams: list[Diagram]) Any[source]

Return the diagram prediction.

Parameters:
diagrams : list of Diagram

The tensor or circuit diagrams to be evaluated.

abstract initialise_weights() None[source]

Initialise the weights of the model.

load(checkpoint_path: StrPathT) None[source]

Load model data from a path pointing to a lambeq checkpoint.

Checkpoints that are created by a lambeq Trainer usually have the extension .lt.

Parameters:
checkpoint_path : str or PathLike

Path that points to the checkpoint file.

save(checkpoint_path: StrPathT) None[source]

Create a lambeq Checkpoint and save to a path.

Example:

>>> from lambeq import PytorchModel
>>> model = PytorchModel()
>>> model.save('my_checkpoint.lt')

Parameters:
checkpoint_path : str or PathLike

Path that points to the checkpoint file.

class lambeq.NelderMeadOptimizer(*, model: QuantumModel, loss_fn: Callable[[Any, Any], float], hyperparams: dict[str, float] | None = None, bounds: Buffer | _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | bool | int | float | complex | str | bytes | _NestedSequence[bool | int | float | complex | str | bytes] | None = None)[source]

Bases: Optimizer

An optimizer based on the Nelder-Mead algorithm.

This implementation is based heavily on SciPy’s optimize.minimize.

__init__(*, model: QuantumModel, loss_fn: Callable[[Any, Any], float], hyperparams: dict[str, float] | None = None, bounds: Buffer | _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | bool | int | float | complex | str | bytes | _NestedSequence[bool | int | float | complex | str | bytes] | None = None) None[source]

Initialise the Nelder-Mead optimizer.

The hyperparameters may contain the following key-value pairs:

  • adaptive: bool, default: False

    Adjust the algorithm’s parameters based on the dimensionality of the problem. This is particularly helpful when minimizing functions in high-dimensional spaces.

  • maxfev: int, default: 1000

    Maximum number of function evaluations allowed.

  • initial_simplex: ArrayLike (N+1, N), default: None

If provided, replaces the initial model weights. Each row should contain the coordinates of the i-th vertex of the N+1 vertices in the simplex, where N is the dimension.

  • xatol: float, default: 1e-4

    The acceptable level of absolute error in the optimal model weights (optimal solution) between iterations that indicates convergence.

  • fatol: float, default: 1e-4

    The acceptable level of absolute error in the loss value between iterations that indicates convergence.

Parameters:
model : QuantumModel

A lambeq quantum model.

hyperparams : dict of str to float

A dictionary containing the model’s hyperparameters.

loss_fn : Callable[[ArrayLike, ArrayLike], float]

A loss function of form loss(prediction, labels).

bounds : ArrayLike, optional

The range of each of the model parameters.

Raises:
ValueError
  • If the hyperparameters are not set correctly, or if the length of bounds does not match the number of the model parameters.

  • If the lower bounds are greater than the upper bounds.

  • If the initial simplex is not a 2D array.

  • If the initial simplex does not have N+1 rows, where N is the number of model parameters.

Warnings:
  • If the initial model weights are not within the bounds.

References

Gao, Fuchang & Han, Lixing. (2012). Implementing the Nelder-Mead Simplex Algorithm with Adaptive Parameters. Computational Optimization and Applications, 51. 259-277. 10.1007/s10589-010-9329-3.

backward(batch: tuple[Iterable[Any], ndarray]) float[source]

Calculate the gradients of the loss function.

The gradients are calculated with respect to the model parameters.

Parameters:
batch : tuple of Iterable and numpy.ndarray

Current batch. Contains an Iterable of diagrams in index 0, and the targets in index 1.

Returns:
float

The calculated loss.

bounds: ndarray | None
load_state_dict(state_dict: Mapping[str, Any]) None[source]

Load state of the optimizer from the state dictionary.

Parameters:
state_dict : dict

A dictionary containing a snapshot of the optimizer state.

model: QuantumModel
objective(x: Iterable[Any], y: Buffer | _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | bool | int | float | complex | str | bytes | _NestedSequence[bool | int | float | complex | str | bytes], w: Buffer | _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | bool | int | float | complex | str | bytes | _NestedSequence[bool | int | float | complex | str | bytes]) float[source]

The objective function to be minimized.

Parameters:
x : ArrayLike

The input data.

y : ArrayLike

The labels.

w : ArrayLike

The model parameters.

Returns:
result: float

The result of the objective function.

Raises:
ValueError

If the objective function does not return a scalar value.

project(x: ndarray) ndarray[source]
state_dict() dict[str, Any][source]

Return optimizer states as dictionary.

Returns:
dict

A dictionary containing the current state of the optimizer.

step() None[source]

Perform optimisation step.

update_hyper_params() None[source]

Update the hyperparameters of the Nelder-Mead algorithm.

class lambeq.NumpyModel(use_jit: bool = False)[source]

Bases: QuantumModel

A lambeq model for an exact classical simulation of a quantum pipeline.

__init__(use_jit: bool = False) None[source]

Initialise a NumpyModel.

Parameters:
use_jit : bool, default: False

Whether to use JAX’s Just-In-Time compilation.

forward(x: list[Diagram]) Any[source]

Perform default forward pass of a lambeq model.

In case of a different datapoint (e.g. list of tuple) or additional computational steps, please override this method.

Parameters:
x : list of Diagram

The Circuits to be evaluated.

Returns:
numpy.ndarray

Array containing model’s prediction.

get_diagram_output(diagrams: list[Diagram]) jnp.ndarray | numpy.ndarray[source]

Return the exact prediction for each diagram.

Parameters:
diagrams : list of Diagram

The Circuits to be evaluated.

Returns:
np.ndarray

Resulting array.

Raises:
ValueError

If model.weights or model.symbols are not initialised.

lambdas: dict[Diagram, Callable[..., Any]]
symbols: list[Symbol | SymPySymbol]
weights: np.ndarray
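A minimal sketch, assuming circuits is a list of parameterised circuits produced by an ansatz:

>>> from lambeq import NumpyModel
>>> model = NumpyModel.from_diagrams(circuits)
>>> model.initialise_weights()
>>> predictions = model(circuits)
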
class lambeq.Optimizer(*, model: Model, loss_fn: Callable[[Any, Any], float], hyperparams: dict[Any, Any] | None = None, bounds: Buffer | _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | bool | int | float | complex | str | bytes | _NestedSequence[bool | int | float | complex | str | bytes] | None = None)[source]

Bases: ABC

Optimizer base class.

__init__(*, model: Model, loss_fn: Callable[[Any, Any], float], hyperparams: dict[Any, Any] | None = None, bounds: Buffer | _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | bool | int | float | complex | str | bytes | _NestedSequence[bool | int | float | complex | str | bytes] | None = None) None[source]

Initialise the optimizer base class.

Parameters:
model : QuantumModel

A lambeq model.

loss_fn : Callable

A loss function of form loss(prediction, labels).

hyperparams : dict of str to float, optional

A dictionary containing the model’s hyperparameters.

bounds : ArrayLike, optional

The range of each of the model’s parameters.

abstract backward(batch: tuple[Iterable[Any], ndarray]) float[source]

Calculate the gradients of the loss function.

The gradient is calculated with respect to the model parameters.

Parameters:
batch : tuple of list and numpy.ndarray

Current batch.

Returns:
float

The calculated loss.

abstract load_state_dict(state: Mapping[str, Any]) None[source]

Load state of the optimizer from the state dictionary.

abstract state_dict() dict[str, Any][source]

Return optimizer states as dictionary.

abstract step() None[source]

Perform optimisation step.

zero_grad() None[source]

Reset the gradients to zero.

class lambeq.PennyLaneModel(probabilities: bool = True, normalize: bool = True, diff_method: str = 'best', backend_config: dict[str, Any] | None = None)[source]

Bases: Model, Module

A lambeq model for the quantum and hybrid quantum/classical pipeline using PennyLane circuits. It uses PyTorch as a backend for all tensor operations.

__init__(probabilities: bool = True, normalize: bool = True, diff_method: str = 'best', backend_config: dict[str, Any] | None = None) None[source]

Initialise a PennyLaneModel instance with an empty circuit_map dictionary.

Parameters:
probabilities : bool, default: True

Whether to use probabilities or states for the output.

backend_config : dict, optional

Configuration for hardware or simulator to be used. Defaults to using the default.qubit PennyLane simulator analytically, with normalized probability outputs. Keys that can be used include ‘backend’, ‘device’, ‘probabilities’, ‘normalize’, ‘shots’, and ‘noise_model’.

circuit_map: dict[Diagram, PennyLaneCircuit]
forward(x: list[Diagram]) Tensor[source]

Perform default forward pass by running circuits.

In case of a different datapoint (e.g. list of tuple) or additional computational steps, please override this method.

Parameters:
x : list of Diagram

The Circuits to be evaluated.

Returns:
torch.Tensor

Tensor containing model’s prediction.

classmethod from_diagrams(diagrams: list[Diagram], probabilities: bool = True, normalize: bool = True, diff_method: str = 'best', backend_config: dict[str, Any] | None = None, **kwargs: Any) PennyLaneModel[source]

Build model from a list of Circuits.

Parameters:
diagrams : list of Diagram

The circuit diagrams to be evaluated.

backend_config : dict, optional

Configuration for hardware or simulator to be used. Defaults to using the default.qubit PennyLane simulator analytically, with normalized probability outputs. Keys that can be used include ‘backend’, ‘device’, ‘probabilities’, ‘normalize’, ‘shots’, and ‘noise_model’.

get_diagram_output(diagrams: list[Diagram]) Tensor[source]

Evaluate outputs of circuits using PennyLane.

Parameters:
diagramslist of Diagram

The Diagrams to be evaluated.

Returns:
torch.Tensor

Resulting tensor.

Raises:
ValueError

If model.weights or model.symbols are not initialised.

initialise_weights() None[source]

Initialise the weights of the model.

Raises:
ValueError

If model.symbols are not initialised.

symbol_weight_map: dict[Symbol, torch.FloatTensor]
symbols: list[Symbol]
training: bool
weights: torch.nn.ParameterList
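
Example (a minimal sketch; circuits is assumed to be a list of circuit diagrams produced by a circuit ansatz, as elsewhere in this documentation):

from lambeq import PennyLaneModel

model = PennyLaneModel.from_diagrams(circuits, probabilities=True,
                                     normalize=True)
model.initialise_weights()
predictions = model(circuits)  # forward pass, returns a torch.Tensor
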
class lambeq.PytorchModel[source]

Bases: Model, Module

A lambeq model for the classical pipeline using PyTorch.

__init__() None[source]

Initialise a PytorchModel.

forward(x: list[Diagram]) Tensor[source]

Perform default forward pass by contracting tensors.

In case of a different datapoint (e.g. list of tuple) or additional computational steps, please override this method.

Parameters:
xlist of Diagram

The Diagrams to be evaluated.

Returns:
torch.Tensor

Tensor containing model’s prediction.

get_diagram_output(diagrams: list[Diagram]) Tensor[source]

Contract diagrams using tensornetwork.

Parameters:
diagramslist of Diagram

The Diagrams to be evaluated.

Returns:
torch.Tensor

Resulting tensor.

Raises:
ValueError

If model.weights or model.symbols are not initialised.

initialise_weights() None[source]

Initialise the weights of the model.

Raises:
ValueError

If model.symbols are not initialised.

symbols: list[Symbol]
training: bool
weights: torch.nn.ParameterList
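
Example (a minimal sketch of the classical pipeline; assumes a locally available Bobcat model, and that Dim is importable from lambeq.backend.tensor as in recent versions):

from lambeq import AtomicType, BobcatParser, PytorchModel, SpiderAnsatz
from lambeq.backend.tensor import Dim

parser = BobcatParser()
diagrams = parser.sentences2diagrams(['Alice runs', 'Bob walks quickly'])

# Map each atomic type to a vector-space dimension.
ansatz = SpiderAnsatz({AtomicType.NOUN: Dim(2), AtomicType.SENTENCE: Dim(2)})
tensor_diagrams = [ansatz(d) for d in diagrams]

model = PytorchModel.from_diagrams(tensor_diagrams)
model.initialise_weights()
predictions = model(tensor_diagrams)  # torch.Tensor
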
class lambeq.PytorchTrainer(model: PytorchModel, loss_function: Callable[..., torch.Tensor], epochs: int, optimizer: type[torch.optim.Optimizer] = <class 'torch.optim.adamw.AdamW'>, learning_rate: float = 0.001, device: int = -1, *, optimizer_args: dict[str, Any] | None = None, evaluate_functions: Mapping[str, EvalFuncT] | None = None, evaluate_on_train: bool = True, use_tensorboard: bool = False, log_dir: StrPathT | None = None, from_checkpoint: bool = False, verbose: str = 'text', seed: int | None = None)[source]

Bases: Trainer

A PyTorch trainer for the classical pipeline.

__init__(model: PytorchModel, loss_function: Callable[..., torch.Tensor], epochs: int, optimizer: type[torch.optim.Optimizer] = <class 'torch.optim.adamw.AdamW'>, learning_rate: float = 0.001, device: int = -1, *, optimizer_args: dict[str, Any] | None = None, evaluate_functions: Mapping[str, EvalFuncT] | None = None, evaluate_on_train: bool = True, use_tensorboard: bool = False, log_dir: StrPathT | None = None, from_checkpoint: bool = False, verbose: str = 'text', seed: int | None = None) None[source]

Initialise a Trainer instance using the PyTorch backend.

Parameters:
modelPytorchModel

A lambeq Model using PyTorch for tensor computation.

loss_functioncallable

A PyTorch loss function from torch.nn.

epochsint

Number of training epochs.

optimizertorch.optim.Optimizer, default: torch.optim.AdamW

A PyTorch optimizer from torch.optim.

learning_ratefloat, default: 1e-3

The learning rate provided to the optimizer for training.

deviceint, default: -1

CUDA device ID used for tensor operation speed-up. A negative value uses the CPU.

optimizer_argsdict of str to Any, optional

Any extra arguments to pass to the optimizer.

evaluate_functionsmapping of str to callable, optional

Mapping from metric names to evaluation functions, of structure {“metric”: func}. Each function takes the prediction “y_hat” and the label “y” as input. The validation step calls “func(y_hat, y)”.

evaluate_on_trainbool, default: True

Evaluate the metrics on the train dataset.

use_tensorboardbool, default: False

Use Tensorboard for visualisation of the training logs.

log_dirstr or PathLike, optional

Location of model checkpoints (and tensorboard log). Default is runs/CURRENT_DATETIME_HOSTNAME.

from_checkpointbool, default: False

Starts training from the checkpoint, saved in the log_dir.

verbosestr, default: ‘text’,

See VerbosityLevel for options.

seedint, optional

Random seed.

model: PytorchModel
train_costs: list[float]
train_durations: list[float]
train_epoch_costs: list[float]
train_epoch_durations: list[float]
train_eval_results: dict[str, list[Any]]
training_step(batch: tuple[list[Any], Tensor]) tuple[Tensor, float][source]

Perform a training step.

Parameters:
batchtuple of list and torch.Tensor

Current batch.

Returns:
Tuple of torch.Tensor and float

The model predictions and the calculated loss.

val_costs: list[float]
val_durations: list[float]
val_eval_results: dict[str, list[Any]]
validation_step(batch: tuple[list[Any], Tensor]) tuple[Tensor, float][source]

Perform a validation step.

Parameters:
batchtuple of list and torch.Tensor

Current batch.

Returns:
Tuple of torch.Tensor and float

The model predictions and the calculated loss.
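
Example (a hedged sketch of a training run; train_diagrams, train_labels, val_diagrams and val_labels are assumed to be prepared as in the PytorchModel example above, and the accuracy metric is illustrative):

import torch
from lambeq import Dataset, PytorchModel, PytorchTrainer

model = PytorchModel.from_diagrams(train_diagrams + val_diagrams)

def accuracy(y_hat, y):  # illustrative binary accuracy metric
    return (torch.round(torch.sigmoid(y_hat)) == y).float().mean().item()

trainer = PytorchTrainer(
    model=model,
    loss_function=torch.nn.BCEWithLogitsLoss(),
    optimizer=torch.optim.AdamW,
    learning_rate=3e-2,
    epochs=10,
    evaluate_functions={'acc': accuracy},
    evaluate_on_train=True,
    verbose='text',
    seed=0)

trainer.fit(Dataset(train_diagrams, train_labels, batch_size=30),
            Dataset(val_diagrams, val_labels))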

class lambeq.QuantumModel[source]

Bases: Model

Quantum Model base class.

Attributes:
symbolslist of symbols

A sorted list of all Symbols occurring in the data.

weightsarray

A data structure containing the numeric values of the model parameters.

__call__(*args: Any, **kwargs: Any) Any[source]

Call self as a function.

__init__() None[source]

Initialise a QuantumModel.

abstract forward(x: list[Diagram]) Any[source]

Compute the forward pass of the model using get_model_output.

abstract get_diagram_output(diagrams: list[Diagram]) jnp.ndarray | np.ndarray[source]

Return the diagram prediction.

Parameters:
diagramslist of Diagram

The Circuits to be evaluated.

initialise_weights() None[source]

Initialise the weights of the model.

Raises:
ValueError

If model.symbols are not initialised.

symbols: list[Symbol | SymPySymbol]
weights: np.ndarray
class lambeq.QuantumTrainer(model: QuantumModel, loss_function: Callable[..., float], epochs: int, optimizer: type[Optimizer], optim_hyperparams: dict[str, float], *, optimizer_args: dict[str, Any] | None = None, evaluate_functions: Mapping[str, EvalFuncT] | None = None, evaluate_on_train: bool = True, use_tensorboard: bool = False, log_dir: StrPathT | None = None, from_checkpoint: bool = False, verbose: str = 'text', seed: int | None = None)[source]

Bases: Trainer

A Trainer for the quantum pipeline.

__init__(model: QuantumModel, loss_function: Callable[..., float], epochs: int, optimizer: type[Optimizer], optim_hyperparams: dict[str, float], *, optimizer_args: dict[str, Any] | None = None, evaluate_functions: Mapping[str, EvalFuncT] | None = None, evaluate_on_train: bool = True, use_tensorboard: bool = False, log_dir: StrPathT | None = None, from_checkpoint: bool = False, verbose: str = 'text', seed: int | None = None) None[source]

Initialise a Trainer using a quantum backend.

Parameters:
modelQuantumModel

A lambeq Model.

loss_functioncallable

A loss function.

epochsint

Number of training epochs.

optimizerOptimizer

An optimizer of type lambeq.training.Optimizer.

optim_hyperparamsdict of str to float

The hyperparameters to be used by the optimizer.

optimizer_argsdict of str to Any, optional

Any extra arguments to pass to the optimizer.

evaluate_functionsmapping of str to callable, optional

Mapping from metric names to evaluation functions, of structure {“metric”: func}. Each function takes the prediction “y_hat” and the label “y” as input. The validation step calls “func(y_hat, y)”.

evaluate_on_trainbool, default: True

Evaluate the metrics on the train dataset.

use_tensorboardbool, default: False

Use Tensorboard for visualisation of the training logs.

log_dirstr or PathLike, optional

Location of model checkpoints (and tensorboard log). Default is runs/CURRENT_DATETIME_HOSTNAME.

from_checkpointbool, default: False

Starts training from the checkpoint, saved in the log_dir.

verbosestr, default: ‘text’,

See VerbosityLevel for options.

seedint, optional

Random seed.

fit(train_dataset: Dataset, val_dataset: Dataset | None = None, log_interval: int = 1, eval_interval: int = 1, eval_mode: str = 'epoch', early_stopping_criterion: str | None = None, early_stopping_interval: int | None = None, minimize_criterion: bool = True, full_timing_report: bool = False) None[source]

Fit the model on the training data and, optionally, evaluate it on the validation data.

Parameters:
train_datasetDataset

Dataset used for training.

val_datasetDataset, optional

Validation dataset.

log_intervalint, default: 1

Sets the intervals at which the training statistics are printed if verbose = ‘text’ (otherwise ignored). If None, the statistics are printed at the end of each epoch.

eval_intervalint, default: 1

Sets the number of epochs at which the metrics are evaluated on the validation dataset. If None, the validation is performed at the end of each epoch.

eval_modeEvalMode, default: ‘epoch’

Sets the evaluation mode. If ‘epoch’, the metrics are evaluated after multiples of eval_interval epochs. If ‘step’, the metrics are evaluated after multiples of eval_interval steps. Ignored if val_dataset is None.

early_stopping_criterionstr, optional

If specified, the value of this on val_dataset (if provided) will be used as the stopping criterion instead of the (default) validation loss.

early_stopping_intervalint, optional

If specified, training is stopped if the validation loss does not improve for early_stopping_interval validation cycles.

minimize_criterion: bool, default: True

Flag indicating if we should minimize or maximize the early stopping criterion.

full_timing_report: bool, default: False

Flag for including mean timing statistics in the logs.

Raises:
ValueError

If eval_mode is not a valid EvalMode.

model: QuantumModel
train_costs: list[float]
train_durations: list[float]
train_epoch_costs: list[float]
train_epoch_durations: list[float]
train_eval_results: dict[str, list[Any]]
training_step(batch: tuple[list[Any], ndarray]) tuple[ndarray, float][source]

Perform a training step.

Parameters:
batchtuple of list and np.ndarray

Current batch.

Returns:
Tuple of np.ndarray and float

The model predictions and the calculated loss.

val_costs: list[float]
val_durations: list[float]
val_eval_results: dict[str, list[Any]]
validation_step(batch: tuple[list[Any], ndarray]) tuple[ndarray, float][source]

Perform a validation step.

Parameters:
batchtuple of list and np.ndarray

Current batch.

Returns:
tuple of np.ndarray and float

The model predictions and the calculated loss.
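
Example (a hedged sketch; train_circuits, train_labels, val_circuits and val_labels are assumed to have been produced with a circuit ansatz, and NumpyModel is another QuantumModel subclass from this package):

from lambeq import (BinaryCrossEntropyLoss, Dataset, NumpyModel,
                    QuantumTrainer, SPSAOptimizer)

model = NumpyModel.from_diagrams(train_circuits + val_circuits)

trainer = QuantumTrainer(
    model,
    loss_function=BinaryCrossEntropyLoss(),
    epochs=100,
    optimizer=SPSAOptimizer,
    optim_hyperparams={'a': 0.05, 'c': 0.06, 'A': 0.01 * 100},
    evaluate_on_train=True,
    verbose='text',
    seed=0)

trainer.fit(Dataset(train_circuits, train_labels, batch_size=30),
            Dataset(val_circuits, val_labels))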

class lambeq.Reader[source]

Bases: ABC

Base class for readers and parsers.

abstract sentence2diagram(sentence: str | List[str], tokenised: bool = False) Diagram | None[source]

Parse a sentence into a lambeq diagram.

sentences2diagrams(sentences: List[str] | List[List[str]], tokenised: bool = False) list[Diagram | None][source]

Parse multiple sentences into a list of lambeq diagrams.

class lambeq.RemoveCupsRewriter[source]

Bases: DiagramRewriter

Removes cups from a given diagram.

Diagrams with fewer cups become circuits with less post-selection, which results in faster QML experiments.

matches(diagram: Diagram) bool[source]

Check if the given diagram should be rewritten.

rewrite(diagram: Diagram) Diagram[source]

Rewrite the given diagram.
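
Example (a minimal sketch; assumes a locally available Bobcat model):

from lambeq import BobcatParser, RemoveCupsRewriter

diagram = BobcatParser().sentence2diagram('Alice runs')
diagram_without_cups = RemoveCupsRewriter()(diagram)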

class lambeq.RemoveSwapsRewriter[source]

Bases: DiagramRewriter

Produce a proper pregroup diagram by removing any swaps.

Direct conversion of a CCG derivation into a string diagram form may introduce swaps, caused by cross-composition rules and unary rules that may change types and the directionality of composition at any point of the derivation. This class removes swaps, producing a valid pregroup diagram (in J. Lambek’s sense) as follows:

  1. Eliminate swap morphisms by swapping the actual atomic types of the words.

  2. Scan the new diagram for any detached parts, and remove them by merging words together when possible.

Parameters:
diagramlambeq.backend.grammar.Diagram

The input diagram.

Returns:
lambeq.backend.grammar.Diagram

A copy of the input diagram without swaps.

Raises:
ValueError

If the input diagram is not in “pregroup” form, i.e. when words do not strictly precede the morphisms.

Notes

The class trades off diagrammatic simplicity and conformance to a formal pregroup grammar for a larger vocabulary, since each word is associated with more types than before and new words (combined tokens) are added to the vocabulary. Depending on the size of your dataset, this might lead to data sparsity problems during training.

Examples

In the following example, “am” and “not” are combined at the CCG level using cross composition, which introduces the interwoven pattern of wires.

I       am            not        sleeping
─  ───────────  ───────────────  ────────
n  n.r·s·s.l·n  s.r·n.r.r·n.r·s   n.r·s
│   │  │  │  ╰─╮─╯    │    │  │    │  │
│   │  │  │  ╭─╰─╮    │    │  │    │  │
│   │  │  ╰╮─╯   ╰─╮──╯    │  │    │  │
│   │  │  ╭╰─╮   ╭─╰──╮    │  │    │  │
│   │  ╰──╯  ╰─╮─╯    ╰─╮──╯  │    │  │
│   │        ╭─╰─╮    ╭─╰──╮  │    │  │
│   ╰────────╯   ╰─╮──╯    ╰╮─╯    │  │
│                ╭─╰──╮    ╭╰─╮    │  │
╰────────────────╯    ╰─╮──╯  ╰────╯  │
                      ╭─╰──╮          │
                      │    ╰──────────╯

Rewriting with the RemoveSwapsRewriter class will return:

I     am not    sleeping
─  ───────────  ────────
n  n.r·s·s.l·n   n.r·s
╰───╯  │  │  ╰────╯  │
       │  ╰──────────╯

removing the swaps and combining “am” and “not” into one token.

matches(diagram: Diagram) bool[source]

Check if the given diagram should be rewritten.

rewrite(diagram: Diagram) Diagram[source]

Rewrite the given diagram.

class lambeq.RewriteRule[source]

Bases: ABC

Base class for rewrite rules.

__call__(box: Box) Diagrammable | None[source]

Apply the rewrite rule to a box.

Parameters:
boxlambeq.backend.grammar.Box

The candidate box to be tested against this rewrite rule.

Returns:
lambeq.backend.grammar.Diagram, optional

The rewritten diagram, or None if the rule does not apply.

Notes

The default implementation uses the matches() and rewrite() methods, but derived classes may choose to not use them, since the default Rewriter implementation does not call those methods directly, only this one.

abstract matches(box: Box) bool[source]

Check if the given box should be rewritten.

abstract rewrite(box: Box) Diagrammable[source]

Rewrite the given box.

class lambeq.Rewriter(rules: Iterable[RewriteRule | str] | None = None)[source]

Bases: object

Class that rewrites diagrams.

Comes with a set of default rules.

__call__(diagram: Diagram) Diagram[source]

Apply the rewrite rules to the given diagram.

__init__(rules: Iterable[RewriteRule | str] | None = None) None[source]

Initialise a rewriter.

Parameters:
rulesiterable of str or RewriteRule, optional

A list of rewrite rules to use. RewriteRule instances are used directly, str objects are used as names of the default rules. See Rewriter.available_rules() for the list of rule names. If omitted, all the default rules are used.

add_rules(*rules: RewriteRule | str) None[source]

Add rules to this rewriter.

classmethod available_rules() list[str][source]

The list of default rule names.
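
Example (a minimal sketch using two of the default rules; assumes a locally available Bobcat model):

from lambeq import BobcatParser, Rewriter

diagram = BobcatParser().sentence2diagram('The sun shines in the sky')
rewriter = Rewriter(['determiner', 'prepositional_phrase'])
rewritten_diagram = rewriter(diagram)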

class lambeq.RotosolveOptimizer(*, model: QuantumModel, loss_fn: Callable[[Any, Any], float], hyperparams: dict[str, float] | None = None, bounds: Buffer | _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | bool | int | float | complex | str | bytes | _NestedSequence[bool | int | float | complex | str | bytes] | None = None)[source]

Bases: Optimizer

An optimizer using the Rotosolve algorithm.

Rotosolve is an optimizer for parametrized quantum circuits. It applies a shift of ±π/2 radians to each parameter, then updates the parameter based on the resulting loss. The loss function is assumed to be a linear combination of Hamiltonian measurements.

This optimizer is designed to work with ansätze that are composed of single-qubit rotations, such as the StronglyEntanglingAnsatz, Sim14Ansatz and Sim15Ansatz.

See Ostaszewski et al. for details.

__init__(*, model: QuantumModel, loss_fn: Callable[[Any, Any], float], hyperparams: dict[str, float] | None = None, bounds: Buffer | _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | bool | int | float | complex | str | bytes | _NestedSequence[bool | int | float | complex | str | bytes] | None = None) None[source]

Initialise the Rotosolve optimizer.

Parameters:
modelQuantumModel

A lambeq quantum model.

loss_fncallable

A loss function of the form loss(prediction, labels).

hyperparamsdict of str to float, optional

Unused.

boundsArrayLike, optional

Unused.

backward(batch: tuple[Iterable[Any], ndarray]) float[source]

Perform a single backward pass.

Rotosolve does not calculate a global gradient. Instead, the parameters are updated after applying a shift of ±π/2 radians to each parameter. Therefore, there is no global step to take.

Parameters:
batchtuple of Iterable and numpy.ndarray

Current batch. Contains an Iterable of diagrams in index 0, and the targets in index 1.

Returns:
float

The calculated loss after the backward pass.

load_state_dict(state_dict: Mapping[str, Any]) None[source]

Load state of the optimizer from the state dictionary.

model: QuantumModel
static project(x: ndarray) ndarray[source]
state_dict() dict[str, Any][source]

Return optimizer states as dictionary.

step() None[source]

Perform optimisation step.

class lambeq.SPSAOptimizer(*, model: QuantumModel, loss_fn: Callable[[Any, Any], float], hyperparams: dict[str, Any] | None, bounds: Buffer | _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | bool | int | float | complex | str | bytes | _NestedSequence[bool | int | float | complex | str | bytes] | None = None)[source]

Bases: Optimizer

An Optimizer using SPSA.

SPSA stands for Simultaneous Perturbation Stochastic Approximation. See https://ieeexplore.ieee.org/document/705889 for details.

__init__(*, model: QuantumModel, loss_fn: Callable[[Any, Any], float], hyperparams: dict[str, Any] | None, bounds: Buffer | _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | bool | int | float | complex | str | bytes | _NestedSequence[bool | int | float | complex | str | bytes] | None = None) None[source]

Initialise the SPSA optimizer.

The hyperparameters must contain the following key-value pairs:

hyperparams = {
    'a': A learning rate parameter, float
    'c': The parameter shift scaling factor, float
    'A': A stability constant, float
}

A good value for ‘A’ is approximately 0.01 × the number of training steps.

Parameters:
modelQuantumModel

A lambeq quantum model.

loss_fnCallable

A loss function of the form loss(prediction, labels).

hyperparamsdict of str to float.

A dictionary containing the model’s hyperparameters.

boundsArrayLike, optional

The range of each of the model parameters.

Raises:
ValueError

If the hyperparameters are not set correctly, or if the length of bounds does not match the number of the model parameters.

backward(batch: tuple[Iterable[Any], ndarray]) float[source]

Calculate the gradients of the loss function.

The gradients are calculated with respect to the model parameters.

Parameters:
batchtuple of Iterable and numpy.ndarray

Current batch. Contains an Iterable of diagrams in index 0, and the targets in index 1.

Returns:
float

The calculated loss.

load_state_dict(state_dict: Mapping[str, Any]) None[source]

Load state of the optimizer from the state dictionary.

Parameters:
state_dictdict

A dictionary containing a snapshot of the optimizer state.

model: QuantumModel
project: Callable[[ndarray], ndarray]
state_dict() dict[str, Any][source]

Return optimizer states as dictionary.

Returns:
dict

A dictionary containing the current state of the optimizer.

step() None[source]

Perform optimisation step.

update_hyper_params() None[source]

Update the hyperparameters of the SPSA algorithm.
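
Example: an optimizer is usually driven by a QuantumTrainer, but it can also be stepped manually. A hedged sketch, assuming model, train_circuits and train_labels are prepared as elsewhere in this documentation:

from lambeq import BinaryCrossEntropyLoss, SPSAOptimizer

optimizer = SPSAOptimizer(
    model=model,
    loss_fn=BinaryCrossEntropyLoss(),
    hyperparams={'a': 0.05, 'c': 0.06, 'A': 0.01 * 100})

loss = optimizer.backward((train_circuits, train_labels))
optimizer.step()  # update model.weights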

class lambeq.Sim14Ansatz(ob_map: Mapping[Ty, int], n_layers: int, n_single_qubit_params: int = 3, discard: bool = False)[source]

Bases: CircuitAnsatz

Modification of circuit 14 from Sim et al.

Replaces circuit-block construction with two rings of CRx gates, in opposite orientation.

Paper at: https://arxiv.org/abs/1905.10876

Code adapted from DisCoPy.

__init__(ob_map: Mapping[Ty, int], n_layers: int, n_single_qubit_params: int = 3, discard: bool = False) None[source]

Instantiate a Sim 14 ansatz.

Parameters:
ob_mapdict

A mapping from lambeq.backend.grammar.Ty to the number of qubits it uses in a circuit.

n_layersint

The number of layers used by the ansatz.

n_single_qubit_paramsint, default: 3

The number of single qubit rotations used by the ansatz. It only affects wires that ob_map maps to a single qubit.

discardbool, default: False

Discard open wires instead of post-selecting.

circuit(n_qubits: int, params: ndarray) Diagram[source]
params_shape(n_qubits: int) tuple[int, ...][source]

Calculate the shape of the parameters required.
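
Example (a minimal sketch, mapping each atomic type to a single qubit; the same pattern applies to the other CircuitAnsatz subclasses in this reference):

from lambeq import AtomicType, BobcatParser, Sim14Ansatz

diagram = BobcatParser().sentence2diagram('Alice runs')
ansatz = Sim14Ansatz({AtomicType.NOUN: 1, AtomicType.SENTENCE: 1},
                     n_layers=2)
circuit = ansatz(diagram)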

class lambeq.Sim15Ansatz(ob_map: Mapping[Ty, int], n_layers: int, n_single_qubit_params: int = 3, discard: bool = False)[source]

Bases: CircuitAnsatz

Modification of circuit 15 from Sim et al.

Replaces circuit-block construction with two rings of CNOT gates, in opposite orientation.

Paper at: https://arxiv.org/abs/1905.10876

Code adapted from DisCoPy.

__init__(ob_map: Mapping[Ty, int], n_layers: int, n_single_qubit_params: int = 3, discard: bool = False) None[source]

Instantiate a Sim 15 ansatz.

Parameters:
ob_mapdict

A mapping from lambeq.backend.grammar.Ty to the number of qubits it uses in a circuit.

n_layersint

The number of layers used by the ansatz.

n_single_qubit_paramsint, default: 3

The number of single qubit rotations used by the ansatz. It only affects wires that ob_map maps to a single qubit.

discardbool, default: False

Discard open wires instead of post-selecting.

circuit(n_qubits: int, params: ndarray) Diagram[source]
params_shape(n_qubits: int) tuple[int, ...][source]

Calculate the shape of the parameters required.

class lambeq.Sim4Ansatz(ob_map: Mapping[Ty, int], n_layers: int, n_single_qubit_params: int = 3, discard: bool = False)[source]

Bases: CircuitAnsatz

Circuit 4 from Sim et al.

Ansatz with a layer of Rx and Rz gates, followed by a ladder of CRxs.

Paper at: https://arxiv.org/abs/1905.10876

__init__(ob_map: Mapping[Ty, int], n_layers: int, n_single_qubit_params: int = 3, discard: bool = False) None[source]

Instantiate a Sim 4 ansatz.

Parameters:
ob_mapdict

A mapping from lambeq.backend.grammar.Ty to the number of qubits it uses in a circuit.

n_layersint

The number of layers used by the ansatz.

n_single_qubit_paramsint, default: 3

The number of single qubit rotations used by the ansatz. It only affects wires that ob_map maps to a single qubit.

discardbool, default: False

Discard open wires instead of post-selecting.

circuit(n_qubits: int, params: ndarray) Diagram[source]
params_shape(n_qubits: int) tuple[int, ...][source]

Calculate the shape of the parameters required.

class lambeq.SimpleRewriteRule(cod: Ty, template: Diagrammable, words: Container[str] | None = None, case_sensitive: bool = False)[source]

Bases: RewriteRule

A simple rewrite rule.

This rule matches each box against a required codomain and, if provided, a set of words. If they match, the word box is rewritten into a set template.

__init__(cod: Ty, template: Diagrammable, words: Container[str] | None = None, case_sensitive: bool = False) None[source]

Instantiate a simple rewrite rule.

Parameters:
codlambeq.backend.grammar.Ty

The type that the codomain of each box is matched against.

templatelambeq.backend.grammar.Diagrammable

The diagram that a matching box is replaced with. A special placeholder box is replaced by the word in the matched box, and can be created using SimpleRewriteRule.placeholder().

wordscontainer of str, optional

If provided, this is a list of words that are rewritten by this rule. If a box does not have one of these words, it is not rewritten, even if the codomain matches. If omitted, all words are permitted.

case_sensitivebool, default: False

This indicates whether the list of words specified above are compared case-sensitively. The default is False.

matches(box: Box) bool[source]

Check if the given box should be rewritten.

classmethod placeholder(cod: Ty) Word[source]

Helper function to generate the placeholder for a template.

Parameters:
codlambeq.backend.grammar.Ty

The codomain of the placeholder, and hence the word in the resulting rewritten diagram.

Returns:
lambeq.backend.grammar.Word

A placeholder word with the given codomain.

rewrite(box: Box) Diagrammable[source]

Rewrite the given box.
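
Example (a hedged sketch of a negation-style rule; assumes Box and Id are importable from lambeq.backend.grammar, and the NOT box is purely illustrative). The rule matches auxiliary words of pregroup type n.r @ s @ s.l @ n and appends a NOT box to their sentence wire; the resulting rule can then be applied through a Rewriter:

from lambeq import AtomicType, SimpleRewriteRule
from lambeq.backend.grammar import Box, Id

N = AtomicType.NOUN
S = AtomicType.SENTENCE
NOT = Box('NOT', S, S)  # illustrative negation box

negation_rule = SimpleRewriteRule(
    cod=N.r @ S @ S.l @ N,
    template=(SimpleRewriteRule.placeholder(N.r @ S @ S.l @ N)
              >> Id(N.r) @ NOT @ Id(S.l @ N)),
    words=['is', 'was', 'has', 'have'])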

class lambeq.SpacyTokeniser[source]

Bases: Tokeniser

Tokeniser class based on SpaCy.

__init__() None[source]
split_sentences(text: str) list[str][source]

Split input text into a list of sentences.

Parameters:
textstr

A single string that contains one or multiple sentences.

Returns:
list of str

List of sentences, one sentence in each string.

tokenise_sentences(sentences: Iterable[str]) list[list[str]][source]

Tokenise a list of sentences.

Parameters:
sentenceslist of str

A list of untokenised sentences.

Returns:
list of list of str

A list of tokenised sentences, where each sentence is a list of tokens.
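
Example:

from lambeq import SpacyTokeniser

tokeniser = SpacyTokeniser()
sentences = tokeniser.split_sentences('Alice runs. Bob walks.')
tokens = tokeniser.tokenise_sentences(sentences)
# [['Alice', 'runs', '.'], ['Bob', 'walks', '.']]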

class lambeq.SpiderAnsatz(ob_map: Mapping[Ty, Dim], max_order: int = 2)[source]

Bases: TensorAnsatz

Split large boxes into spiders.

__call__(diagram: Diagram) Diagram[source]

Convert a diagram into a tensor.

__init__(ob_map: Mapping[Ty, Dim], max_order: int = 2) None[source]

Instantiate a spider ansatz.

Parameters:
ob_mapdict

A mapping from lambeq.backend.grammar.Ty to the dimension space it uses in a tensor network.

max_order: int

The maximum order of each tensor, which must be at least 2.

class lambeq.StronglyEntanglingAnsatz(ob_map: Mapping[Ty, int], n_layers: int, n_single_qubit_params: int = 3, ranges: list[int] | None = None, discard: bool = False)[source]

Bases: CircuitAnsatz

Strongly entangling ansatz.

Ansatz using three single qubit rotations (RzRyRz) followed by a ladder of CNOT gates with different ranges per layer.

This is adapted from the PennyLane implementation of pennylane.StronglyEntanglingLayers, pursuant to the Apache 2.0 licence.

The architecture was originally introduced in Schuld et al., https://arxiv.org/abs/1804.00633.

__init__(ob_map: Mapping[Ty, int], n_layers: int, n_single_qubit_params: int = 3, ranges: list[int] | None = None, discard: bool = False) None[source]

Instantiate a strongly entangling ansatz.

Parameters:
ob_mapdict

A mapping from lambeq.backend.grammar.Ty to the number of qubits it uses in a circuit.

n_layersint

The number of circuit layers used by the ansatz.

n_single_qubit_paramsint, default: 3

The number of single qubit rotations used by the ansatz. It only affects wires that ob_map maps to a single qubit.

rangeslist of int, optional

The range of the CNOT gate between wires in each layer. By default, the range starts at one (i.e. adjacent wires) and increases by one for each subsequent layer.

discardbool, default: False

Discard open wires instead of post-selecting.

circuit(n_qubits: int, params: ndarray) Diagram[source]
params_shape(n_qubits: int) tuple[int, ...][source]

Calculate the shape of the parameters required.

class lambeq.Symbol(name: str, directed_dom: int = 1, directed_cod: int = 1, **assumptions: bool)[source]

Bases: Symbol

A sympy symbol augmented with extra information.

Attributes:
directed_domint

The size of the domain of the tensor-box that this symbol represents.

directed_codint

The size of the codomain of the tensor-box that this symbol represents.

sizeint

The total size of the tensor that this symbol represents (directed_dom * directed_cod).

default_assumptions = {}
directed_cod: int
directed_dom: int
name: str
property size: int
sort_key(order: Literal[None] = None) tuple[Any, ...][source]

Return a sort key.

Examples

>>> from sympy import S, I
>>> sorted([S(1)/2, I, -I], key=lambda x: x.sort_key())
[1/2, -I, I]
>>> S("[x, 1/x, 1/x**2, x**2, x**(1/2), x**(1/4), x**(3/2)]")
[x, 1/x, x**(-2), x**2, sqrt(x), x**(1/4), x**(3/2)]
>>> sorted(_, key=lambda x: x.sort_key())
[x**(-2), 1/x, x**(1/4), sqrt(x), x, x**(3/2), x**2]
class lambeq.TensorAnsatz(ob_map: Mapping[Ty, Dim])[source]

Bases: BaseAnsatz

Base class for tensor network ansatz.

__call__(diagram: Diagram) Diagram[source]

Convert a diagram into a tensor.

__init__(ob_map: Mapping[Ty, Dim]) None[source]

Instantiate a tensor network ansatz.

Parameters:
ob_mapdict

A mapping from lambeq.backend.grammar.Ty to the dimension space it uses in a tensor network.
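
Example (a minimal sketch; assumes a locally available Bobcat model, and that Dim is importable from lambeq.backend.tensor as in recent versions):

from lambeq import AtomicType, BobcatParser, TensorAnsatz
from lambeq.backend.tensor import Dim

diagram = BobcatParser().sentence2diagram('Alice runs')
ansatz = TensorAnsatz({AtomicType.NOUN: Dim(4), AtomicType.SENTENCE: Dim(2)})
tensor_diagram = ansatz(diagram)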

class lambeq.TketModel(backend_config: dict[str, Any])[source]

Bases: QuantumModel

Model based on tket.

This can run either shot-based simulations of a quantum pipeline or experiments on quantum hardware, using tket.

__init__(backend_config: dict[str, Any]) None[source]

Initialise TketModel based on the t|ket> backend.

Other Parameters:
backend_configdict

Dictionary containing the backend configuration. Must include the fields backend, compilation and shots.

Raises:
KeyError

If backend_config is not provided or has missing fields.

forward(x: list[Diagram]) ndarray[source]

Perform default forward pass of a lambeq quantum model.

In case of a different datapoint (e.g. list of tuple) or additional computational steps, please override this method.

Parameters:
xlist of Diagram

The Circuits to be evaluated.

Returns:
np.ndarray

Array containing model’s prediction.

get_diagram_output(diagrams: list[Diagram]) ndarray[source]

Return the prediction for each diagram using t|ket>.

Parameters:
diagramslist of Diagram

The Circuits to be evaluated.

Returns:
np.ndarray

Resulting array.

Raises:
ValueError

If model.weights or model.symbols are not initialised.

symbols: list[Symbol | SymPySymbol]
weights: np.ndarray
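
Example (a hedged configuration sketch using a local Aer simulator; requires the pytket-qiskit extension, and the shots value is illustrative; circuits is assumed to be a list of circuit diagrams):

from pytket.extensions.qiskit import AerBackend
from lambeq import TketModel

backend = AerBackend()
backend_config = {
    'backend': backend,
    'compilation': backend.default_compilation_pass(2),
    'shots': 8192,
}
model = TketModel.from_diagrams(circuits, backend_config=backend_config)
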
class lambeq.Tokeniser[source]

Bases: ABC

Base class for all tokenisers.

abstract split_sentences(text: str) list[str][source]

Split input text into a list of sentences.

Parameters:
textstr

A single string that contains one or multiple sentences.

Returns:
list of str

List of sentences, one sentence in each string.

tokenise_sentence(sentence: str) list[str][source]

Tokenise a sentence.

Parameters:
sentencestr

An untokenised sentence.

Returns:
list of str

A tokenised sentence, given as a list of string tokens.

abstract tokenise_sentences(sentences: Iterable[str]) list[list[str]][source]

Tokenise a list of sentences.

Parameters:
sentenceslist of str

A list of untokenised sentences.

Returns:
list of list of str

A list of tokenised sentences, where each sentence is a list of string tokens.

class lambeq.Trainer(model: Model, loss_function: Callable[[...], Any], epochs: int, evaluate_functions: Mapping[str, Callable[[Any, Any], Any]] | None = None, evaluate_on_train: bool = True, use_tensorboard: bool = False, log_dir: str | PathLike[str] | None = None, from_checkpoint: bool = False, verbose: str = 'text', seed: int | None = None)[source]

Bases: ABC

Base class for a lambeq trainer.

__init__(model: Model, loss_function: Callable[[...], Any], epochs: int, evaluate_functions: Mapping[str, Callable[[Any, Any], Any]] | None = None, evaluate_on_train: bool = True, use_tensorboard: bool = False, log_dir: str | PathLike[str] | None = None, from_checkpoint: bool = False, verbose: str = 'text', seed: int | None = None) None[source]

Initialise a lambeq trainer.

Parameters:
modelModel

A lambeq Model.

loss_functioncallable

A loss function to compare the prediction to the true label.

epochsint

Number of training epochs.

evaluate_functionsmapping of str to callable, optional

Mapping from metric names to evaluation functions.

evaluate_on_trainbool, default: True

Evaluate the metrics on the train dataset.

use_tensorboardbool, default: False

Use Tensorboard for visualisation of the training logs.

log_dirstr or PathLike, optional

Location of model checkpoints (and tensorboard log). Default is runs/CURRENT_DATETIME_HOSTNAME.

from_checkpointbool, default: False

Starts training from the checkpoint, saved in the log_dir.

verbosestr, default: ‘text’,

See VerbosityLevel for options.

seedint, optional

Random seed.

fit(train_dataset: Dataset, val_dataset: Dataset | None = None, log_interval: int = 1, eval_interval: int = 1, eval_mode: str = 'epoch', early_stopping_criterion: str | None = None, early_stopping_interval: int | None = None, minimize_criterion: bool = True, full_timing_report: bool = False) None[source]

Fit the model on the training data and, optionally, evaluate it on the validation data.

Parameters:
train_datasetDataset

Dataset used for training.

val_datasetDataset, optional

Validation dataset.

log_intervalint, default: 1

Sets the intervals at which the training statistics are printed if verbose = ‘text’ (otherwise ignored). If None, the statistics are printed at the end of each epoch.

eval_intervalint, default: 1

Sets the number of epochs at which the metrics are evaluated on the validation dataset. If None, the validation is performed at the end of each epoch.

eval_modeEvalMode, default: ‘epoch’

Sets the evaluation mode. If ‘epoch’, the metrics are evaluated after multiples of eval_interval epochs. If ‘step’, the metrics are evaluated after multiples of eval_interval steps. Ignored if val_dataset is None.

early_stopping_criterionstr, optional

If specified, the value of this on val_dataset (if provided) will be used as the stopping criterion instead of the (default) validation loss.

early_stopping_intervalint, optional

If specified, training is stopped if the validation loss does not improve for early_stopping_interval validation cycles.

minimize_criterion: bool, default: True

Flag indicating if we should minimize or maximize the early stopping criterion.

full_timing_report: bool, default: False

Flag for including mean timing statistics in the logs.

Raises:
ValueError

If eval_mode is not a valid EvalMode.

load_training_checkpoint(log_dir: str | PathLike[str]) Checkpoint[source]

Load model from a checkpoint.

Parameters:
log_dirstr or PathLike

The path to the model.lt checkpoint file.

Returns:
Checkpoint

Checkpoint containing the model weights, symbols and the training history.

Raises:
FileNotFoundError

If the file does not exist.

save_checkpoint(save_dict: Mapping[str, Any], log_dir: str | PathLike[str], prefix: str = '') None[source]

Save checkpoint.

Parameters:
save_dictmapping of str to any

Mapping containing the checkpoint information.

log_dirstr or PathLike

The path where to store the model.lt checkpoint file.

prefixstr, default: ‘’

Prefix for the checkpoint file name.

abstract training_step(batch: tuple[list[Any], Any]) tuple[Any, float][source]

Perform a training step.

Parameters:
batchtuple of list and any

Current batch.

Returns:
Tuple of any and float

The model predictions and the calculated loss.

abstract validation_step(batch: tuple[list[Any], Any]) tuple[Any, float][source]

Perform a validation step.

Parameters:
batchtuple of list and any

Current batch.

Returns:
Tuple of any and float

The model predictions and the calculated loss.

class lambeq.TreeReader(ccg_parser: ~lambeq.text2diagram.ccg_parser.CCGParser | ~collections.abc.Callable[[], ~lambeq.text2diagram.ccg_parser.CCGParser] = <class 'lambeq.text2diagram.bobcat_parser.BobcatParser'>, mode: ~lambeq.text2diagram.tree_reader.TreeReaderMode = TreeReaderMode.NO_TYPE, word_type: ~lambeq.backend.grammar.Ty = Ty(s))[source]

Bases: Reader

A reader that combines words according to a parse tree.

__init__(ccg_parser: ~lambeq.text2diagram.ccg_parser.CCGParser | ~collections.abc.Callable[[], ~lambeq.text2diagram.ccg_parser.CCGParser] = <class 'lambeq.text2diagram.bobcat_parser.BobcatParser'>, mode: ~lambeq.text2diagram.tree_reader.TreeReaderMode = TreeReaderMode.NO_TYPE, word_type: ~lambeq.backend.grammar.Ty = Ty(s)) None[source]

Initialise a tree reader.

Parameters:
ccg_parserCCGParser or callable, default: BobcatParser

A CCGParser object or a function that returns it. The parse tree produced by the parser is used to generate the tree diagram.

modeTreeReaderMode, default: TreeReaderMode.NO_TYPE

Determines what boxes are used to combine the tree. See TreeReaderMode for options.

word_typeTy, default: core.types.AtomicType.SENTENCE

The type of each word box. By default, it uses the sentence type from core.types.AtomicType.

classmethod available_modes() list[str][source]

The list of modes for initialising a tree reader.

sentence2diagram(sentence: str | List[str], tokenised: bool = False, collapse_noun_phrases: bool = True, suppress_exceptions: bool = False) Diagram | None[source]

Parse a sentence into a lambeq diagram.

This produces a tree-shaped diagram based on the output of the CCG parser.

Parameters:
sentencestr or list of str

The sentence to be parsed.

tokenisedbool, default: False

Whether the sentence has been passed as a list of tokens.

collapse_noun_phrasesbool, default: True

If set, then before converting each tree to a diagram, any noun phrase types in the tree are changed into nouns. This includes sub-types, e.g. S/NP becomes S/N.

suppress_exceptionsbool, default: False

Whether to suppress exceptions. If True, then if a sentence fails to parse, instead of raising an exception, its return entry is None.

Returns:
lambeq.backend.grammar.Diagram or None

The parsed diagram, or None on failure.

static tree2diagram(tree: CCGTree, mode: TreeReaderMode = TreeReaderMode.NO_TYPE, word_type: Ty = Ty(s), suppress_exceptions: bool = False) Diagram | None[source]

Convert a CCGTree into a Diagram.

This produces a tree-shaped diagram based on the output of the CCG parser.

Parameters:
treeCCGTree

The CCG tree to be converted.

modeTreeReaderMode, default: TreeReaderMode.NO_TYPE

Determines what boxes are used to combine the tree. See TreeReaderMode for options.

word_typeTy, default: core.types.AtomicType.SENTENCE

The type of each word box. By default, it uses the sentence type from core.types.AtomicType.

suppress_exceptionsbool, default: False

Whether to suppress exceptions. If True, then if a sentence fails to parse, instead of raising an exception, its return entry is None.

Returns:
lambeq.backend.grammar.Diagram or None

The parsed diagram, or None on failure.
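
Example (a minimal sketch; assumes a locally available Bobcat model):

from lambeq import TreeReader, TreeReaderMode

reader = TreeReader(mode=TreeReaderMode.RULE_ONLY)
diagram = reader.sentence2diagram('Alice loves Bob')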

class lambeq.TreeReaderMode(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)[source]

Bases: Enum

An enumeration for TreeReader.

The words in the tree diagram can be combined using the following four modes:

NO_TYPE

The ‘no type’ mode names every rule box UNIBOX.

RULE_ONLY

The ‘rule name’ mode names every rule box based on the name of the original CCG rule. For example, for the forward application rule FA(N << N), the rule box will be named FA.

RULE_TYPE

The ‘rule type’ mode names every rule box based on the name and type of the original CCG rule. For example, for the forward application rule FA(N << N), the rule box will be named FA(N << N).

HEIGHT

The ‘height’ mode names every rule box based on the tree height of its subtree. For example, a rule box directly combining two words will be named layer_1.

HEIGHT = 3
NO_TYPE = 0
RULE_ONLY = 1
RULE_TYPE = 2
class lambeq.UnifyCodomainRewriter(output_type: Ty = Ty(s))[source]

Bases: DiagramRewriter

Unifies the codomain of diagrams to match a given type.

A rewriter that takes diagrams with d.cod != output_type and appends a d.cod -> output_type box.

Attributes:
output_typelambeq.backend.grammar.Ty, default S

The output type of the appended box.

__init__(output_type: Ty = Ty(s)) None
matches(diagram: Diagram) bool[source]

Check if the given diagram should be rewritten.

output_type: Ty = Ty(s)
rewrite(diagram: Diagram) Diagram[source]

Rewrite the given diagram.

class lambeq.UnknownWordsRewriteRule(vocabulary: Container[str | tuple[str, Ty]], unk_token: str = '<UNK>')[source]

Bases: RewriteRule

A rewrite rule for unknown words.

This rule matches any word not included in its vocabulary and, when passed a diagram, replaces all the boxes containing an unknown word with an UNK box corresponding to the same pregroup type.

__init__(vocabulary: Container[str | tuple[str, Ty]], unk_token: str = '<UNK>') None[source]

Instantiate an UnknownWordsRewriteRule.

Parameters:
vocabularycontainer of str or tuple of str and Ty

A list of words (or words with specific output types) to not be rewritten by this rule.

unk_tokenstr, default: ‘<UNK>’

The string to use for the UNK token.

classmethod from_diagrams(diagrams: Iterable[Diagram], min_freq: int = 1, unk_token: str = '<UNK>', ignore_types: bool = False) UnknownWordsRewriteRule[source]

Create the rewrite rule from a set of diagrams.

The vocabulary is the set of words that occur at least min_freq times throughout the set of diagrams.

Parameters:
diagramslist of Diagram

Diagrams from which the vocabulary is created.

min_freqint, default: 1

The minimum frequency required for a word to be included in the vocabulary.

unk_tokenstr, default: ‘<UNK>’

The string to use for the UNK token.

ignore_typesbool, default: False

Whether to consider only the word when determining its frequency, or to also take into account the output type of the box (the default behaviour).

matches(box: Box) bool[source]

Check if the given box should be rewritten.

rewrite(box: Box) Box[source]

Rewrite the given box.
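
Example (a hedged sketch of the typical workflow: build the vocabulary from the training diagrams, then rewrite both training and test diagrams so that rare or unseen words share the same UNK box):

from lambeq import Rewriter, UnknownWordsRewriteRule

unk_rule = UnknownWordsRewriteRule.from_diagrams(train_diagrams, min_freq=2)
rewriter = Rewriter([unk_rule])
train_diagrams = [rewriter(d) for d in train_diagrams]
test_diagrams = [rewriter(d) for d in test_diagrams]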

class lambeq.VerbosityLevel(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)[source]

Bases: Enum

Level of verbosity for progress reporting.

Table 4: Available options

Option      Value         Description
PROGRESS    'progress'    Use progress bar.
TEXT        'text'        Give text report.
SUPPRESS    'suppress'    No output.

All outputs are printed to stderr. Visual Studio Code does not always display progress bars correctly; use 'progress' level reporting in Visual Studio Code at your own risk.

PROGRESS = 'progress'
SUPPRESS = 'suppress'
TEXT = 'text'
classmethod has_value(value: str) bool[source]
exception lambeq.WebParseError(sentence: str)[source]

Bases: OSError

__init__(sentence: str) None[source]
class lambeq.WebParser(parser: str = 'depccg', verbose: str = 'suppress')[source]

Bases: CCGParser

Wrapper that allows passing parser queries to an online service.

__init__(parser: str = 'depccg', verbose: str = 'suppress') None[source]

Initialise a web parser.

Parameters:
parserstr, optional

The web parser to use. By default, this is the depccg parser.

verbosestr, default: ‘suppress’,

See VerbosityLevel for options.

sentences2trees(sentences: List[str] | List[List[str]], tokenised: bool = False, suppress_exceptions: bool = False, verbose: str | None = None) list[CCGTree | None][source]

Parse multiple sentences into a list of CCGTrees.

Parameters:
sentenceslist of str, or list of list of str

The sentences to be parsed.

suppress_exceptionsbool, default: False

Whether to suppress exceptions. If True, then if a sentence fails to parse, instead of raising an exception, its return entry is None.

verbosestr, optional

See VerbosityLevel for options. If set, it takes priority over the verbose attribute of the parser.

Returns:
list of CCGTree or None

The parsed trees. May contain None if exceptions are suppressed.

Raises:
URLError

If the service URL is not well formed.

ValueError

If a sentence is blank, or if the type of the sentence does not match the tokenised flag.

WebParseError

If the parser fails to obtain a parse tree from the server.
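
Example (requires internet access; sentence2diagram is inherited from CCGParser):

from lambeq import WebParser

parser = WebParser()
diagram = parser.sentence2diagram('Alice runs')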