graphdoc.eval package

class graphdoc.eval.DocGeneratorEvaluator(generator: DocGeneratorModule | Module | Any, evaluator: DocQualityPrompt | SinglePrompt | Any, evalset: List[Example] | Any, mlflow_helper: MlflowDataHelper, mlflow_experiment_name: str = 'doc_generator_eval', generator_prediction_field: str = 'documented_schema', evaluator_prediction_field: str = 'rating', readable_value: int = 25)[source]

Bases: Module

__init__(generator: DocGeneratorModule | Module | Any, evaluator: DocQualityPrompt | SinglePrompt | Any, evalset: List[Example] | Any, mlflow_helper: MlflowDataHelper, mlflow_experiment_name: str = 'doc_generator_eval', generator_prediction_field: str = 'documented_schema', evaluator_prediction_field: str = 'rating', readable_value: int = 25)[source]

A simple module for evaluating the quality of generated documentation. We will extend this with more complex evaluation metrics in the future.

Important: we assume that the rating values returned by the evaluator are [1, 2, 3, 4]. We will make this more flexible in the future.
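
A minimal wiring sketch, assuming the generator, evaluator, evaluation set, and MlflowDataHelper instances have already been constructed elsewhere; only the DocGeneratorEvaluator signature itself is taken from this page:

    from graphdoc.eval import DocGeneratorEvaluator

    doc_eval = DocGeneratorEvaluator(
        generator=generator,              # a DocGeneratorModule (assumed constructed elsewhere)
        evaluator=quality_prompt,         # a DocQualityPrompt (assumed constructed elsewhere)
        evalset=evalset,                  # List[Example] of evaluation examples (assumed)
        mlflow_helper=mlflow_helper,      # an MlflowDataHelper instance (assumed)
        mlflow_experiment_name="doc_generator_eval",
        generator_prediction_field="documented_schema",
        evaluator_prediction_field="rating",
        readable_value=25,
    )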

forward(database_schema: str) → dict[str, Any][source]

Takes a database schema, documents it, and then evaluates each component and the aggregate.
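
A minimal sketch of a single forward pass. The schema string below is an illustrative GraphQL-style example, and the exact keys of the returned dictionary are not specified on this page:

    schema = """
    type Customer {
      id: ID!
      name: String
    }
    """

    results = doc_eval.forward(database_schema=schema)
    # results maps each schema component and the aggregate to its evaluation,
    # with ratings assumed to fall in [1, 2, 3, 4]
    print(results)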

evaluate()[source]

Batches the evaluation set and logs the results to MLflow.
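
Running the full evaluation set is then a single call; results are logged to the MLflow experiment configured at construction time (by default "doc_generator_eval"):

    doc_eval.evaluate()
    # The logged runs can then be inspected, e.g. via the MLflow UI (mlflow ui).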

Submodules