# Understanding AI-Generated Documentation Quality
aprity uses AI to generate human-readable documentation from Salesforce metadata. Understanding how AI is used -- and where its limitations lie -- helps you set appropriate expectations and get the most value from the tool.
## How aprity Uses AI
aprity follows a strict principle: deterministic logic first, AI second.
### What is determined by code (not AI)
The following are computed deterministically by aprity's analysis engine, with no AI involvement:
- Dependency graphs -- Which objects depend on which other objects.
- Execution order -- The sequence in which triggers, flows, and validation rules fire.
- Metadata structure -- Fields, relationships, and configurations extracted from Salesforce.
- DML classification -- Whether automation fires on insert, update, delete, or other events.
- Security rules -- Permission sets, sharing rules, and access controls.
These elements are facts derived directly from your Salesforce metadata. They are not AI-generated and are not subject to AI hallucination.
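The distinction above can be illustrated with a minimal Python sketch of deterministic metadata analysis. The names (`ValidationRule`, `referenced_fields`) are hypothetical and illustrative, not aprity's actual API; the point is that the dependency facts fall out of string matching against the metadata, with no model call anywhere.

```python
from dataclasses import dataclass

@dataclass
class ValidationRule:
    # Illustrative stand-in for one piece of Salesforce metadata.
    name: str
    object_name: str
    formula: str

def referenced_fields(rule: ValidationRule, known_fields: set[str]) -> set[str]:
    """Deterministically extract which known fields a rule's formula mentions.
    Pure text analysis -- the result is a fact, not a generated guess."""
    return {f for f in known_fields if f in rule.formula}

rule = ValidationRule(
    name="Discount_Limit",
    object_name="Opportunity",
    formula="Discount__c > 0.40 && StageName != 'Closed Won'",
)
deps = referenced_fields(rule, {"Discount__c", "StageName", "Amount"})
# deps == {"Discount__c", "StageName"} -- read from metadata, never invented
```

An AI layer could then be asked to *explain* this rule, but the edge `Discount_Limit -> Discount__c` itself is fixed before any prompt is sent.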
### What AI generates
AI is used exclusively for explanation and narration:
- Business descriptions -- Plain-language explanations of what an object, rule, or process does.
- Business rule summaries -- Human-readable descriptions of validation rules, triggers, and flows.
- Process documentation -- Narrative descriptions of end-to-end business processes.
- Impact analysis -- Explanations of how changes to one component affect others.
- Classification -- Categorizing objects as business objects, technical services, parameters, etc.
:::info Key principle

AI explains facts. AI does not decide facts. The underlying graph, dependencies, and metadata are always deterministic. AI adds the narrative layer on top.

:::
## Quality Indicators
### Signs of high-quality output
- Specific references -- The description mentions specific fields, objects, or conditions from your metadata rather than generic language.
- Accurate business context -- The description correctly identifies the business purpose of the automation.
- Consistent terminology -- The same concepts are described using the same terms throughout the documentation.
- Correct execution sequence -- Process documentation reflects the actual order of operations.
### Signs that feedback is needed
- Generic descriptions -- Language like "this rule validates data" without specifying what data or what conditions.
- Incorrect business context -- The description assigns the wrong business purpose to an automation (e.g., describing a discount validation as an inventory check).
- Missing information -- Important aspects of a business rule or process are not mentioned.
- Outdated references -- The description references fields or objects that no longer exist.
## How aprity Mitigates AI Limitations
### Structured prompts
aprity uses carefully engineered prompts that include:
- The full metadata context (field definitions, trigger code, flow structure).
- The dependency graph showing relationships between components.
- Previous feedback entries from your team.
- Language and terminology preferences.
This structured context significantly reduces hallucination compared to asking an AI to describe something from scratch.
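As a rough sketch of what "structured context" means in practice, the following assembles the four ingredients listed above into a single prompt string. This is a hypothetical illustration, not aprity's real prompt format; the function name and layout are assumptions.

```python
def build_prompt(metadata: dict[str, str],
                 dependency_graph: list[tuple[str, str]],
                 feedback: list[str],
                 language: str = "en") -> str:
    """Assemble deterministic facts into context before asking a model to narrate.
    Every fact arrives precomputed; the model's job is explanation only."""
    lines = [f"Output language: {language}", "## Metadata"]
    lines += [f"- {key}: {value}" for key, value in metadata.items()]
    lines.append("## Dependencies (deterministic -- do not alter)")
    lines += [f"- {src} -> {dst}" for src, dst in dependency_graph]
    if feedback:
        lines.append("## Team feedback to respect")
        lines += [f"- {note}" for note in feedback]
    lines.append("Explain in plain language what this automation does.")
    return "\n".join(lines)

prompt = build_prompt(
    {"object": "Opportunity", "type": "ValidationRule"},
    [("Discount_Limit", "Discount__c")],
    ["Our team says 'rebate', not 'discount'."],
)
```

Because the model sees the actual field names, graph edges, and prior corrections, it has far less room to fill gaps with plausible-sounding inventions.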
### Deterministic foundation
Because the dependency graph, execution order, and metadata structure are computed deterministically, the AI cannot invent relationships or dependencies that do not exist. The AI narrates what the code shows -- it does not speculate about what the code might do.
### Feedback integration
Active feedback entries are included in the prompt context for subsequent scans. This means:
- Corrections you submit are incorporated into future outputs.
- The AI learns from your team's domain expertise over time.
- Each scan cycle improves on the previous one.
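The mechanism described above can be sketched as a simple filter over stored feedback: only active entries for the relevant component are carried into the next scan's prompt context. `FeedbackEntry` and `feedback_for_scan` are hypothetical names for illustration, not aprity's schema.

```python
from dataclasses import dataclass

@dataclass
class FeedbackEntry:
    # Illustrative record of one team correction.
    component: str
    correction: str
    active: bool = True  # inactive entries are retained but no longer applied

def feedback_for_scan(entries: list[FeedbackEntry], component: str) -> list[str]:
    """Select the corrections that should shape the next scan's output."""
    return [e.correction for e in entries
            if e.active and e.component == component]

entries = [
    FeedbackEntry("Opportunity", "Describe Discount_Limit as a rebate check."),
    FeedbackEntry("Opportunity", "Superseded wording.", active=False),
    FeedbackEntry("Account", "Unrelated component."),
]
carried = feedback_for_scan(entries, "Opportunity")
# carried == ["Describe Discount_Limit as a rebate check."]
```

Each cycle the active set grows with your team's corrections, which is why output converges toward your domain vocabulary over successive scans.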
### Language support
aprity generates documentation in six languages (English, French, Spanish, German, Italian, Portuguese). The quality is highest in English, as it is the primary language used during development and testing. For other languages, review output carefully during the first few scan cycles and submit feedback for any translation issues.
## When to Submit Feedback
Submit feedback when you identify:
- Factual errors -- The AI misidentified the purpose or behavior of a rule.
- Missing context -- Business-specific information that the AI could not infer from metadata alone (e.g., external system integrations, business process context).
- Terminology issues -- The AI used generic terms instead of your organization's specific vocabulary.
- Clarity issues -- The description is technically correct but difficult to understand.
Do not submit feedback for:
- Metadata issues -- If the source trigger code or flow has confusing naming, fix the metadata in Salesforce. aprity documents what exists.
- Structural issues -- The organization of chapters and sections is deterministic, not AI-driven. Contact support if you need structural changes.
## Improving Output Quality Over Time
### First scan cycle
Expect the first scan to produce documentation that is approximately correct but may lack business-specific nuance. The AI has access to your metadata but not to your institutional knowledge.
Action: Review the output for the most critical objects and submit feedback for any inaccuracies.
### Second and third scan cycles
With feedback incorporated, output quality improves noticeably. Business-specific terminology and corrections are reflected in the new documentation.
Action: Expand your review to additional objects. Submit feedback for finer-grained improvements.
### Ongoing scans
After three to four cycles of feedback, documentation quality typically stabilizes at a high level. Ongoing feedback focuses on changes introduced by new deployments rather than baseline accuracy.
Action: Focus feedback on newly added or modified automation. Review existing documentation periodically for staleness.
## Limitations to Be Aware Of
- External integrations -- AI cannot document the behavior of external systems that are called from your Salesforce org. It can identify that a callout exists but cannot describe the external system's response or behavior.
- Undocumented business rules -- If a business rule is implemented through code patterns that are unusual or obfuscated, the AI may not fully capture the intent. Feedback is essential for these cases.
- Complex conditional logic -- Deeply nested conditional logic in Apex triggers may result in simplified descriptions. The AI captures the main paths but may not enumerate every edge case.
- Cross-org dependencies -- aprity documents each org independently. Dependencies between orgs (e.g., org-to-org integrations) are not captured automatically.
## Related Pages
- Feedback System -- How to submit feedback
- Building a Feedback Loop -- Systematic feedback practices
- Scan Strategy -- Prioritizing what to document