graphdoc.prompts.schema_doc_generation module
- class graphdoc.prompts.schema_doc_generation.DocGeneratorSignature(*, database_schema: str, documented_schema: str)[source]
Bases: Signature
### TASK: Analyze the provided GraphQL Schema and generate detailed yet concise descriptions for each field within the database tables and enums.
### Requirements:
- Utilize only the verified information from the schema to ensure accuracy.
- Descriptions should be factual, straightforward, and avoid any speculative language.
- Refrain from using the phrase “in the { table } table” within your descriptions.
- Ensure that the documentation adheres to standard schema formatting without modifying the underlying schema structure.
- Make sure that the entities themselves are documented.
### Formatting:
- Maintain consistency with the existing documentation style and structure.
- Focus on clarity and precision to aid developers and system architects in understanding the schema’s components effectively.
- database_schema: str
- documented_schema: str
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model; should be a dictionary conforming to pydantic.config.ConfigDict.
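A minimal sketch of driving this signature directly with DSPy, assuming a language model has already been configured; the model name and schema snippet below are illustrative only, not part of this API:

```python
import dspy

from graphdoc.prompts.schema_doc_generation import DocGeneratorSignature

# Illustrative model choice; any DSPy-supported LM works here.
dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))

# Wrap the signature in a predictor (dspy.ChainOfThought would also work).
generate_docs = dspy.Predict(DocGeneratorSignature)

schema = """
type Token @entity {
  id: ID!
  symbol: String!
}
"""

# `database_schema` is the input field; the generated documentation
# comes back on the prediction as `documented_schema`.
result = generate_docs(database_schema=schema)
print(result.documented_schema)
```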
- class graphdoc.prompts.schema_doc_generation.DocGeneratorHelperSignature(*, database_schema: str, documented_schema: str)[source]
Bases: Signature
### TASK: Analyze the provided GraphQL Schema and generate detailed yet concise descriptions for each field within the database tables and enums.
### Requirements:
- If the field is unclear, and the documentation result is ambiguous, request additional information: “WARNING: Please provide additional information to avoid confusion”.
- Utilize only the verified information from the schema to ensure accuracy.
- Descriptions should be factual, straightforward, and avoid any speculative language.
- Refrain from using the phrase “in the { table } table” within your descriptions.
- Ensure that the documentation adheres to standard schema formatting without modifying the underlying schema structure.
### Formatting:
- Maintain consistency with the existing documentation style and structure.
- Focus on clarity and precision to aid developers and system architects in understanding the schema’s components effectively.
- database_schema: str
- documented_schema: str
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model; should be a dictionary conforming to pydantic.config.ConfigDict.
- class graphdoc.prompts.schema_doc_generation.BadDocGeneratorSignature(*, database_schema: str, documented_schema: str)[source]
Bases: Signature
### TASK: Given a GraphQL Schema, generate intentionally incorrect documentation for the columns of the tables in the database.
### Requirements:
- Every table, entity, enum, etc. must have at least one column with a description that is obviously incorrect.
- The documentation must be incorrect and misleading.
- The documentation should be scattered, with only some columns having documentation.
### Formatting:
- Ensure that the schema maintains proper documentation formatting, as is provided.
- database_schema: str
- documented_schema: str
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model; should be a dictionary conforming to pydantic.config.ConfigDict.
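BadDocGeneratorSignature is presumably intended for producing negative examples (for instance, to train or test a documentation-quality classifier); a brief sketch under that assumption, reusing the DSPy setup from the sketch above:

```python
import dspy

from graphdoc.prompts.schema_doc_generation import BadDocGeneratorSignature

# Assumes dspy.configure(...) has been called as in the earlier sketch.
generate_bad_docs = dspy.Predict(BadDocGeneratorSignature)

negative = generate_bad_docs(
    database_schema="type Pool @entity { id: ID! token0: Token! }",
)

# Intentionally incorrect, sparsely documented schema, usable as negative data.
print(negative.documented_schema)
```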
- graphdoc.prompts.schema_doc_generation.doc_gen_factory(key: str | Signature | SignatureMeta) → Signature | SignatureMeta [source]
Factory function to return the correct signature based on the key. Currently only supports three signatures (zero_shot_doc_gen, doc_gen_helper, bad_doc_gen).
- Parameters:
key (Union[str, dspy.Signature, dspy.SignatureMeta]) – The key to return the signature for.
- Returns:
The signature for the given key.
- Return type:
Union[dspy.Signature, dspy.SignatureMeta]
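A short sketch of resolving signatures through the factory; the string keys are the three listed above, and the assumption that an already-instantiated signature is passed through unchanged is mine, not something stated on this page:

```python
from graphdoc.prompts.schema_doc_generation import (
    DocGeneratorSignature,
    doc_gen_factory,
)

# Resolve a signature by string key.
zero_shot = doc_gen_factory("zero_shot_doc_gen")
helper = doc_gen_factory("doc_gen_helper")
bad = doc_gen_factory("bad_doc_gen")

# Passing an existing signature is assumed to return it unchanged.
passthrough = doc_gen_factory(DocGeneratorSignature)
```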
- class graphdoc.prompts.schema_doc_generation.DocGeneratorPrompt(prompt: str | Signature | SignatureMeta, prompt_type: Literal['predict', 'chain_of_thought'] | Callable, prompt_metric: DocQualityPrompt)[source]
Bases: SinglePrompt
DocGeneratorPrompt class for generating documentation for GraphQL schemas.
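A hedged sketch of constructing a DocGeneratorPrompt. The DocGeneratorPrompt arguments follow the signature above; the DocQualityPrompt import path and constructor arguments shown here are assumptions about that class, not something documented on this page:

```python
from graphdoc.prompts import DocQualityPrompt
from graphdoc.prompts.schema_doc_generation import DocGeneratorPrompt

# The quality prompt serves as the metric for the generator prompt.
# Its constructor arguments are assumed, not documented on this page.
quality_prompt = DocQualityPrompt(
    prompt="doc_quality",
    prompt_type="predict",
    prompt_metric="rating",
)

doc_generator = DocGeneratorPrompt(
    prompt="doc_gen_helper",         # key resolved via doc_gen_factory
    prompt_type="chain_of_thought",  # or "predict", per the Literal type
    prompt_metric=quality_prompt,
)
```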
- evaluate_documentation_quality(schema: Example, pred: Prediction, trace=None, scalar=True) → int [source]
Evaluate the quality of the documentation, using the instantiated metric type.
- Parameters:
schema (dspy.Example) – The schema to evaluate the documentation for.
pred (dspy.Prediction) – The predicted documentation.
trace (Any) – The trace of the prediction.
scalar (bool) – Whether to return a squared score or the full evaluation object.
- Returns:
The squared score or the full evaluation object.
- Return type:
int
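A minimal sketch of scoring a single generation with evaluate_documentation_quality, reusing the doc_generator instance from the construction sketch; the toy schema and prediction below are placeholders:

```python
import dspy

# A toy example whose input field matches the signature's `database_schema`.
example = dspy.Example(
    database_schema="type Token @entity { id: ID! }",
).with_inputs("database_schema")

# A prediction shaped like the generator's output.
prediction = dspy.Prediction(
    documented_schema='"Token entity." type Token @entity { "Unique identifier." id: ID! }',
)

# scalar=True returns the squared score; scalar=False returns the
# full evaluation object, per the parameter description above.
score = doc_generator.evaluate_documentation_quality(example, prediction, scalar=True)
print(score)
```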
- evaluate_metric(example: Example, prediction: Prediction, trace=None) → Any [source]
This is the metric used to evaluate the prompt.
- Parameters:
example (dspy.Example) – The example to evaluate the metric on.
prediction (dspy.Prediction) – The prediction to evaluate the metric on.
trace (Any) – The trace to evaluate the metric on. This is for DSPy.
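Because evaluate_metric has the (example, prediction, trace) shape DSPy expects of a metric, it can plausibly be handed to dspy.Evaluate or a DSPy optimizer; a small sketch, with a placeholder dev set reusing the toy example above:

```python
import dspy

from graphdoc.prompts.schema_doc_generation import DocGeneratorSignature

devset = [example]  # placeholder; a real dev set would hold many examples

evaluator = dspy.Evaluate(devset=devset, metric=doc_generator.evaluate_metric)

# Score any DSPy module that maps database_schema -> documented_schema.
program = dspy.Predict(DocGeneratorSignature)
print(evaluator(program))
```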
- format_metric(examples: List[Example], overall_score: float, results: List, scores: List) → Dict[str, Any] [source]
Format the metric results into a dictionary.
- Parameters:
examples (List[dspy.Example]) – The examples used to evaluate the metric.
overall_score (float) – The overall score of the metric.
results (List) – The results of the metric.
scores (List) – The scores of the metric.
- compare_metrics(base_metrics: Any, optimized_metrics: Any, comparison_value: str = 'overall_score') → bool [source]
Compare the base and optimized metrics.
- Parameters:
base_metrics (Any) – The base metrics.
optimized_metrics (Any) – The optimized metrics.
comparison_value (str) – The metric key to compare on; defaults to 'overall_score'.
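A sketch of the optimize-then-compare flow these last two methods suggest. The metric dictionaries are assumed to come from format_metric and to contain an 'overall_score' entry (the default comparison_value), and the return value is assumed to be True when the optimized metrics win; both are assumptions, not guarantees from this page:

```python
# Placeholder metric dictionaries; in practice these would be the
# format_metric outputs for the baseline and the optimized prompt.
base_metrics = {"overall_score": 61.3}
optimized_metrics = {"overall_score": 74.8}

keep_optimized = doc_generator.compare_metrics(
    base_metrics,
    optimized_metrics,
    comparison_value="overall_score",
)

print("keep optimized prompt" if keep_optimized else "keep baseline prompt")
```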