Step 4: Training
In lambeq, all low-level processing that takes place in training is hidden in the training package, which provides convenient high-level abstractions for all important supervised learning scenarios with the toolkit, classical and quantum. More specifically, the training package contains the following high-level/abstract classes, along with several concrete implementations for them:
Dataset: A class that provides functionality for easy management and manipulation of datasets, including batching, shuffling, and preparation based on the selected backend (tket, NumPy, PyTorch). A usage sketch follows this list.

Model: The abstract interface for lambeq models. A model bundles the basic attributes and methods used for training, given a specific backend. It stores the symbols and the corresponding weights, and implements the forward pass of the model. Concrete implementations are the PytorchModel, TketModel, NumpyModel, and PennyLaneModel classes (for more details see Section Choosing a model below).

LossFunction: Implementations of this class compute the distance between the predicted values of the model and the true values in the dataset. This is used to adjust the model weights so that the average loss across all data instances can be minimised. lambeq supports a number of loss functions, such as CrossEntropyLoss, BinaryCrossEntropyLoss, and MSELoss (see the second sketch after this list).

Optimizer: A lambeq optimizer calculates the gradient of a given loss function with respect to the parameters of a model. It contains a step() method that modifies the model parameters according to the optimizer's update rule. Currently, for the quantum case we support the SPSA algorithm by [Spa98], implemented in the SPSAOptimizer class; the Rotosolve algorithm [OGB21], with class RotosolveOptimizer; and the Nelder-Mead algorithm [GH12, NM65], with class NelderMeadOptimizer. For the classical and hybrid cases we support PyTorch optimizers.

Trainer: The main interface for supervised learning in lambeq. A trainer implements the (quantum) machine learning routine given a specific backend, using a loss function and an optimizer. Concrete implementations are the PytorchTrainer and QuantumTrainer classes.
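For instance, a Dataset can be created from any sequence of inputs paired with matching targets; the toy values below are purely illustrative:

```python
from lambeq import Dataset

# Toy data: any sequence of inputs with matching targets will do.
data = ['A', 'B', 'C', 'D']
targets = [[1, 0], [0, 1], [0, 1], [1, 0]]

dataset = Dataset(data, targets, batch_size=2, shuffle=True)

# Iterating over the dataset yields (inputs, targets) batches.
for batch_inputs, batch_targets in dataset:
    print(batch_inputs, batch_targets)
```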
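Similarly, a model is typically built from the circuits prepared in the earlier steps of this tutorial, and a loss function is applied to its predictions. The following is a minimal sketch only, assuming that circuits and labels were produced in those earlier steps:

```python
from lambeq import BinaryCrossEntropyLoss, NumpyModel

# `circuits` and `labels` are assumed to come from the previous steps
# (sentence -> string diagram -> parameterised quantum circuit).
model = NumpyModel.from_diagrams(circuits)  # collects the symbols
model.initialise_weights()                  # random initial weights

y_pred = model(circuits)                    # forward pass with current weights

# Measure the distance between predictions and the true labels.
bce = BinaryCrossEntropyLoss()
print(bce(y_pred, labels))
```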
The process of training a model involves the following steps:

1. Instantiate the Model.
2. Instantiate a Trainer, passing to it a model, a loss function, and an optimizer.
3. Create a Dataset for training, and optionally, one for evaluation.
4. Train the model by handing the dataset to the fit() method of the trainer (see the sketch below).
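Putting the four steps together for the quantum case might look as follows. This is a sketch only: train_circuits, train_labels, val_circuits, and val_labels are assumed to come from the previous steps of this tutorial, and the hyperparameter values are illustrative.

```python
from lambeq import (BinaryCrossEntropyLoss, Dataset, NumpyModel,
                    QuantumTrainer, SPSAOptimizer)

EPOCHS = 100

# Step 1: instantiate the model from all circuits it will encounter.
model = NumpyModel.from_diagrams(train_circuits + val_circuits)

# Step 2: instantiate a trainer with a loss function and an optimizer.
trainer = QuantumTrainer(
    model,
    loss_function=BinaryCrossEntropyLoss(),
    epochs=EPOCHS,
    optimizer=SPSAOptimizer,
    optim_hyperparams={'a': 0.05, 'c': 0.06, 'A': 0.01 * EPOCHS},
    seed=0,
    verbose='text')

# Step 3: create a dataset for training and one for evaluation.
train_dataset = Dataset(train_circuits, train_labels, batch_size=30)
val_dataset = Dataset(val_circuits, val_labels, shuffle=False)

# Step 4: train the model.
trainer.fit(train_dataset, val_dataset)
```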
Note

lambeq covers a wide range of training use cases, which are described in detail under lambeq use cases. Depending on your specific use case (e.g. classical or (simulated) quantum machine learning), you can choose from a variety of models and their corresponding trainers. Refer to Section Choosing a model for a detailed overview of the available models and trainers.
The following examples demonstrate the usage of the training package for classical and quantum training scenarios.
See also: