graphdoc.train.single_prompt_trainer module

class graphdoc.train.single_prompt_trainer.SinglePromptTrainer(prompt: SinglePrompt, optimizer_type: str, optimizer_kwargs: Dict[str, Any], mlflow_model_name: str, mlflow_experiment_name: str, mlflow_tracking_uri: str, trainset: List[Example], evalset: List[Example])[source]

Bases: ABC

__init__(prompt: SinglePrompt, optimizer_type: str, optimizer_kwargs: Dict[str, Any], mlflow_model_name: str, mlflow_experiment_name: str, mlflow_tracking_uri: str, trainset: List[Example], evalset: List[Example])[source]

Initialize the SinglePromptTrainer. This is the abstract base class for implementing a trainer for a single prompt.

Parameters:
  • prompt (SinglePrompt) – The prompt to train.

  • optimizer_type (str) – The type of optimizer to use.

  • optimizer_kwargs (Dict[str, Any]) – The keyword arguments for the optimizer.

  • mlflow_model_name (str) – The name of the model in MLflow.

  • mlflow_experiment_name (str) – The name of the experiment in MLflow.

  • mlflow_tracking_uri (str) – The URI of the MLflow tracking server.

  • trainset (List[dspy.Example]) – The training set.

  • evalset (List[dspy.Example]) – The evaluation set.
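
A minimal construction sketch follows. SinglePromptTrainer is abstract, so the concrete subclass QualityPromptTrainer, the quality_prompt object, the optimizer key, and the example fields below are hypothetical names used for illustration only:

    import dspy

    # Hypothetical training data; the field names are assumptions for illustration.
    train_examples = [
        dspy.Example(document="type User { id: ID! }", rating=4).with_inputs("document"),
    ]
    eval_examples = list(train_examples)

    # QualityPromptTrainer is a hypothetical concrete subclass of SinglePromptTrainer.
    trainer = QualityPromptTrainer(
        prompt=quality_prompt,                    # a SinglePrompt instance (assumed)
        optimizer_type="bootstrap_few_shot",      # assumed optimizer key
        optimizer_kwargs={"max_bootstrapped_demos": 4},
        mlflow_model_name="doc-quality-prompt",
        mlflow_experiment_name="single-prompt-training",
        mlflow_tracking_uri="http://localhost:5000",
        trainset=train_examples,
        evalset=eval_examples,
    )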

abstract evaluation_metrics(base_evaluation, optimized_evaluation)[source]

Log the evaluation metrics of the base and optimized models to MLflow.

Parameters:
  • base_evaluation (Any) – The evaluation metrics of the base model.

  • optimized_evaluation (Any) – The evaluation metrics of the optimized model.
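
A hedged sketch of one possible implementation, assuming both evaluation objects are plain dictionaries mapping metric names to floats (the flat-dict shape and the key prefixes are assumptions, not part of the documented interface):

    import mlflow

    def evaluation_metrics(self, base_evaluation, optimized_evaluation):
        # Assumption: both evaluations are Dict[str, float].
        for name, value in base_evaluation.items():
            mlflow.log_metric(f"base_{name}", value)
        for name, value in optimized_evaluation.items():
            mlflow.log_metric(f"optimized_{name}", value)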

abstract evaluate_training(base_model, optimized_model) → Tuple[Dict[str, Any], Dict[str, Any]][source]

Evaluate the training of the model, comparing the base and optimized models.

Parameters:
  • base_model (Any) – The base model.

  • optimized_model (Any) – The optimized model.
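
One way a subclass might satisfy this contract, sketched with dspy's Evaluate helper; the self.metric attribute and the single-key score dictionaries are assumptions:

    from dspy.evaluate import Evaluate

    def evaluate_training(self, base_model, optimized_model):
        # Assumptions: both models are callable dspy programs and
        # self.metric is a dspy-style metric(example, prediction) function.
        evaluator = Evaluate(devset=self.evalset, metric=self.metric, display_progress=False)
        base_score = evaluator(base_model)
        optimized_score = evaluator(optimized_model)
        return {"score": base_score}, {"score": optimized_score}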

abstract train(load_model_args: Dict[str, Any] | None = None, save_model: bool = True)[source]

Train the model.

Parameters:
  • load_model_args (Dict[str, Any] | None) – The arguments used to load the model, if any.

  • save_model (bool) – Whether to save the model.
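
A sketch of a concrete train override, assuming the constructor arguments are stored as same-named attributes, that the prompt exposes its underlying dspy module as prompt.infer, and that BootstrapFewShot stands in for whatever optimizer_type selects; the mlflow.dspy flavor used for saving requires a recent MLflow release:

    import mlflow
    from dspy.teleprompt import BootstrapFewShot

    def train(self, load_model_args=None, save_model=True):
        # load_model_args is left unused in this sketch.
        mlflow.set_tracking_uri(self.mlflow_tracking_uri)
        mlflow.set_experiment(self.mlflow_experiment_name)
        with mlflow.start_run():
            # Assumption: self.prompt.infer is the dspy module to optimize.
            base_program = self.prompt.infer
            optimizer = BootstrapFewShot(**self.optimizer_kwargs)
            optimized_program = optimizer.compile(base_program, trainset=self.trainset)

            base_eval, optimized_eval = self.evaluate_training(base_program, optimized_program)
            self.evaluation_metrics(base_eval, optimized_eval)

            if save_model:
                mlflow.dspy.log_model(optimized_program, artifact_path=self.mlflow_model_name)

Read this way, train drives the overall optimization loop while the evaluate_training and evaluation_metrics hooks let each subclass decide how a given prompt is scored and reported.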