graphdoc.train package
- class graphdoc.train.TrainerFactory[source]
Bases: object
- static single_trainer(trainer_class: str, prompt: SinglePrompt, optimizer_type: str, optimizer_kwargs: Dict[str, Any], mlflow_tracking_uri: str, mlflow_model_name: str, mlflow_experiment_name: str, trainset: List[Example], evalset: List[Example])[source]
Returns an instance of the specified trainer class.
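For example, a trainer can be built through the factory roughly as below. This is a minimal sketch: the trainer_class and optimizer_type identifiers, the optimizer kwargs, and the tracking URI are illustrative assumptions, not values documented here.

```python
from typing import List

import dspy

from graphdoc.train import TrainerFactory


def build_trainer(prompt, trainset: List[dspy.Example], evalset: List[dspy.Example]):
    # prompt is a SinglePrompt instance (e.g. a DocQualityPrompt or DocGeneratorPrompt)
    # constructed elsewhere. The string identifiers and kwargs below are placeholders;
    # consult your graphdoc configuration for the values it actually accepts.
    return TrainerFactory.single_trainer(
        trainer_class="doc_quality_trainer",   # hypothetical identifier
        prompt=prompt,
        optimizer_type="miprov2",              # hypothetical identifier
        optimizer_kwargs={"auto": "light"},    # optimizer-specific settings (assumed)
        mlflow_tracking_uri="http://localhost:5000",
        mlflow_model_name="doc_quality_model",
        mlflow_experiment_name="doc_quality_experiment",
        trainset=trainset,
        evalset=evalset,
    )
```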
- class graphdoc.train.DocGeneratorTrainer(prompt: DocGeneratorPrompt, optimizer_type: str, optimizer_kwargs: Dict[str, Any], mlflow_model_name: str, mlflow_experiment_name: str, mlflow_tracking_uri: str, trainset: List[Example], evalset: List[Example])[source]
Bases: SinglePromptTrainer
- __init__(prompt: DocGeneratorPrompt, optimizer_type: str, optimizer_kwargs: Dict[str, Any], mlflow_model_name: str, mlflow_experiment_name: str, mlflow_tracking_uri: str, trainset: List[Example], evalset: List[Example])[source]
Initialize the DocGeneratorTrainer.
- Parameters:
prompt (DocGeneratorPrompt) – The prompt to train.
optimizer_type (str) – The type of optimizer to use.
optimizer_kwargs (Dict[str, Any]) – The keyword arguments for the optimizer.
mlflow_model_name (str) – The name of the model in mlflow.
mlflow_experiment_name (str) – The name of the experiment in mlflow.
mlflow_tracking_uri (str) – The URI of the mlflow tracking server.
trainset (List[dspy.Example]) – The training set.
evalset (List[dspy.Example]) – The evaluation set.
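The trainer can also be constructed directly. A minimal sketch, assuming a DocGeneratorPrompt instance is already available; the optimizer identifier, kwargs, and tracking URI are illustrative placeholders.

```python
from typing import List

import dspy

from graphdoc.train import DocGeneratorTrainer


def build_doc_generator_trainer(
    prompt,  # a DocGeneratorPrompt instance, constructed elsewhere
    trainset: List[dspy.Example],
    evalset: List[dspy.Example],
) -> DocGeneratorTrainer:
    # The optimizer identifier and kwargs are placeholders, not documented values.
    return DocGeneratorTrainer(
        prompt=prompt,
        optimizer_type="bootstrap_few_shot",          # hypothetical identifier
        optimizer_kwargs={"max_bootstrapped_demos": 4},
        mlflow_model_name="doc_generator_model",
        mlflow_experiment_name="doc_generator_experiment",
        mlflow_tracking_uri="http://localhost:5000",
        trainset=trainset,
        evalset=evalset,
    )
```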
- _calculate_average_score(evaluation: dict) → float [source]
Given a dictionary of evaluation results, calculate the average score.
- evaluation_metrics(base_evaluation: Dict[str, Any], optimized_evaluation: Dict[str, Any]) → None [source]
Log evaluation metrics to mlflow.
- evaluate_training(base_model, optimized_model) → Tuple[Dict[str, Any], Dict[str, Any]] [source]
Evaluate the training run by comparing the base and optimized models.
- Parameters:
base_model (Any) – The base model.
optimized_model (Any) – The optimized model.
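After optimization, the two programs can be compared and the comparison logged with the documented methods, roughly as in the sketch below. The base_model and optimized_model objects are whatever the trainer's optimization step produced; the structure of the returned dictionaries depends on the prompt's format_metric.

```python
from typing import Any, Dict, Tuple


def compare_and_log(trainer, base_model: Any, optimized_model: Any) -> Tuple[Dict[str, Any], Dict[str, Any]]:
    # trainer is a DocGeneratorTrainer or DocQualityTrainer instance.
    # Evaluate the base and optimized programs against each other...
    base_evaluation, optimized_evaluation = trainer.evaluate_training(base_model, optimized_model)
    # ...and log the comparison to mlflow (DocQualityTrainer also logs
    # per-category scores as a CSV artifact).
    trainer.evaluation_metrics(base_evaluation, optimized_evaluation)
    return base_evaluation, optimized_evaluation
```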
- class graphdoc.train.DocQualityTrainer(prompt: DocQualityPrompt, optimizer_type: str, optimizer_kwargs: Dict[str, Any], mlflow_model_name: str, mlflow_experiment_name: str, mlflow_tracking_uri: str, trainset: List[Example], evalset: List[Example])[source]
Bases: SinglePromptTrainer
- __init__(prompt: DocQualityPrompt, optimizer_type: str, optimizer_kwargs: Dict[str, Any], mlflow_model_name: str, mlflow_experiment_name: str, mlflow_tracking_uri: str, trainset: List[Example], evalset: List[Example])[source]
Initialize the DocQualityTrainer. This trainer implements SinglePromptTrainer for a DocQualityPrompt.
- Parameters:
prompt (DocQualityPrompt) – The prompt to train.
optimizer_type (str) – The type of optimizer to use.
optimizer_kwargs (Dict[str, Any]) – The keyword arguments for the optimizer.
mlflow_model_name (str) – The name of the model in mlflow.
mlflow_experiment_name (str) – The name of the experiment in mlflow.
mlflow_tracking_uri (str) – The URI of the mlflow tracking server.
trainset (List[dspy.Example]) – The training set.
evalset (List[dspy.Example]) – The evaluation set.
- evaluation_metrics(base_evaluation, optimized_evaluation)[source]
Log evaluation metrics to mlflow. Both the overall scores and the per-category scores are logged; per-category scores are written as a CSV artifact.
- Parameters:
base_evaluation (Any) – The evaluation metrics of the base model.
optimized_evaluation (Any) – The evaluation metrics of the optimized model.
- evaluate_training(base_model, optimized_model) → Tuple[Dict[str, Any], Dict[str, Any]] [source]
Evaluate the training run by comparing the base and optimized models.
- Parameters:
base_model (Any) – The base model.
optimized_model (Any) – The optimized model.
- graphdoc.train.optimizer_compile(optimizer_type: str, optimizer_kwargs: Dict[str, Any])[source]
Compiles the optimizer given the optimizer type and optimizer kwargs.
Optimizer kwargs are optimizer-specific and must include a student field that maps to a dspy.ChainOfThought, dspy.Predict, or similar module.
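For instance, a call might look like the sketch below. The "miprov2" identifier is an assumption; only the required student entry is documented above, and any remaining kwargs depend on the chosen optimizer.

```python
import dspy

from graphdoc.train import optimizer_compile


class DocumentationRating(dspy.Signature):
    """Rate the quality of a documentation string."""

    documentation: str = dspy.InputField()
    rating: int = dspy.OutputField()


optimizer_kwargs = {
    # Required: the DSPy module to optimize.
    "student": dspy.ChainOfThought(DocumentationRating),
    # Any remaining entries are optimizer-specific (metric, trainset, etc.).
}
compiled = optimizer_compile(optimizer_type="miprov2", optimizer_kwargs=optimizer_kwargs)
```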
- class graphdoc.train.SinglePrompt(prompt: Signature | SignatureMeta, prompt_type: Literal['predict', 'chain_of_thought'] | Callable, prompt_metric: Any)[source]
Bases: ABC
- __init__(prompt: Signature | SignatureMeta, prompt_type: Literal['predict', 'chain_of_thought'] | Callable, prompt_metric: Any) → None [source]
Initialize a single prompt.
- Parameters:
prompt (dspy.Signature) – The prompt to use.
prompt_type (Union[Literal["predict", "chain_of_thought"], Callable]) – The type of prompt to use. Can be “predict” or “chain_of_thought”. Optionally, pass another dspy.Module.
prompt_metric (Any) – The metric to use. Marked as Any for flexibility (as metrics can be other prompts).
- abstract evaluate_metric(example: Example, prediction: Prediction, trace=None) → Any [source]
The metric used to evaluate the prompt.
- Parameters:
example (dspy.Example) – The example to evaluate the metric on.
prediction (dspy.Prediction) – The prediction to evaluate the metric on.
trace (Any) – The trace supplied by DSPy during optimization, if any.
- abstract format_metric(examples: List[Example], overall_score: float, results: List, scores: List) → Dict[str, Any] [source]
Take the results from evaluate_evalset and apply any formatting required by the metric type.
- Parameters:
examples (List[dspy.Example]) – The examples to evaluate the metric on.
overall_score (float) – The overall score of the metric.
results (List) – The results from evaluate_evalset.
scores (List) – The scores from the evaluate_evalset.
- abstract compare_metrics(base_metrics: Any, optimized_metrics: Any, comparison_value: str = 'overall_score') → bool [source]
Compare the metrics of the base and optimized models. Return True if the optimized model is better than the base model.
- Parameters:
base_metrics (Any) – The metrics of the base model.
optimized_metrics (Any) – The metrics of the optimized model.
comparison_value (str) – The value to compare the metrics on. Determines which metric is used to compare the models.
- Returns:
True if the optimized model is better than the base model, False otherwise.
- Return type:
bool
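A concrete SinglePrompt therefore implements the three abstract methods above. A minimal sketch follows; the exact-match scoring, the answer field names, and the metric dictionary keys are illustrative, not part of the documented API.

```python
from typing import Any, Dict, List

import dspy

from graphdoc.train import SinglePrompt


class ExactMatchPrompt(SinglePrompt):
    def evaluate_metric(self, example: dspy.Example, prediction: dspy.Prediction, trace=None) -> Any:
        # Score 1.0 when the predicted answer matches the gold answer (illustrative fields).
        return float(example.answer == prediction.answer)

    def format_metric(
        self, examples: List[dspy.Example], overall_score: float, results: List, scores: List
    ) -> Dict[str, Any]:
        # Package the raw evaluation output into a single dictionary.
        return {"overall_score": overall_score, "results": results, "scores": scores}

    def compare_metrics(self, base_metrics: Any, optimized_metrics: Any, comparison_value: str = "overall_score") -> bool:
        # The optimized prompt wins if it improves the chosen comparison value.
        return optimized_metrics[comparison_value] > base_metrics[comparison_value]
```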
- evaluate_evalset(examples: List[Example], num_threads: int = 1, display_progress: bool = True, display_table: bool = True) → Dict[str, Any] [source]
Evaluate the prompt on a list of examples and aggregate the results.
- Parameters:
examples (List[dspy.Example]) – The examples to evaluate on.
num_threads (int) – The number of threads to use during evaluation.
display_progress (bool) – Whether to display evaluation progress.
display_table (bool) – Whether to display a table of results.
- Returns:
A dictionary containing the overall score, results, and scores.
- Return type:
Dict[str, Any]
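Using the ExactMatchPrompt sketch above, an evaluation run might look like the following. The dictionary keys are assumed from the description above, the model name is illustrative, and a DSPy language model must be configured before any program can run.

```python
import dspy

# A configured LM is required before evaluation (model name illustrative).
dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))


class QuestionAnswer(dspy.Signature):
    """Answer the question."""

    question: str = dspy.InputField()
    answer: str = dspy.OutputField()


# prompt_metric is unused by this sketch, so None is passed.
prompt = ExactMatchPrompt(prompt=QuestionAnswer, prompt_type="predict", prompt_metric=None)
examples = [dspy.Example(question="What is 2 + 2?", answer="4").with_inputs("question")]

evaluation = prompt.evaluate_evalset(examples, num_threads=1, display_progress=False, display_table=False)
print(evaluation)  # e.g. {"overall_score": ..., "results": [...], "scores": [...]} (assumed keys)
```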
- class graphdoc.train.SinglePromptTrainer(prompt: SinglePrompt, optimizer_type: str, optimizer_kwargs: Dict[str, Any], mlflow_model_name: str, mlflow_experiment_name: str, mlflow_tracking_uri: str, trainset: List[Example], evalset: List[Example])[source]
Bases: ABC
- __init__(prompt: SinglePrompt, optimizer_type: str, optimizer_kwargs: Dict[str, Any], mlflow_model_name: str, mlflow_experiment_name: str, mlflow_tracking_uri: str, trainset: List[Example], evalset: List[Example])[source]
Initialize the SinglePromptTrainer. This is the base class for implementing a trainer for a single prompt.
- Parameters:
prompt (SinglePrompt) – The prompt to train.
optimizer_type (str) – The type of optimizer to use.
optimizer_kwargs (Dict[str, Any]) – The keyword arguments for the optimizer.
mlflow_model_name (str) – The name of the model in mlflow.
mlflow_experiment_name (str) – The name of the experiment in mlflow.
mlflow_tracking_uri (str) – The URI of the mlflow tracking server.
trainset (List[dspy.Example]) – The training set.
evalset (List[dspy.Example]) – The evaluation set.
- abstract evaluation_metrics(base_evaluation, optimized_evaluation)[source]
Log evaluation metrics to mlflow.
- Parameters:
base_evaluation (Any) – The evaluation metrics of the base model.
optimized_evaluation (Any) – The evaluation metrics of the optimized model.
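A concrete trainer supplies at least this hook. A minimal sketch follows; the overall_score key is an assumption about the evaluation dictionaries produced by the prompt's format_metric, and the base class may define further abstract methods not shown in this section.

```python
import mlflow

from graphdoc.train import SinglePromptTrainer


class MyPromptTrainer(SinglePromptTrainer):
    def evaluation_metrics(self, base_evaluation, optimized_evaluation) -> None:
        # Log the headline comparison to mlflow (dictionary keys assumed).
        mlflow.log_metric("base_overall_score", base_evaluation["overall_score"])
        mlflow.log_metric("optimized_overall_score", optimized_evaluation["overall_score"])
```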