Building a Feedback Loop

aprity's Feedback feature allows you to submit corrections, additions, and clarifications that are incorporated into subsequent scans. Over time, this feedback loop progressively improves the quality and accuracy of your generated documentation.

How Feedback Works

When you submit feedback through the aprity app, it is stored as an ACTIVE entry associated with a specific object and documentation section. The next time a scan runs for that object, the AI analysis phase receives all active feedback entries as additional context, allowing it to:

  • Correct factual errors flagged in previous outputs.
  • Incorporate missing information provided through additions.
  • Adjust language and clarity based on clarification entries.
  • Remove irrelevant content flagged through rejections.

Each scan cycle builds on previous feedback, creating a cumulative improvement effect.
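
To make the mechanics concrete, here is a minimal sketch in Python of how ACTIVE entries could be gathered into extra context for the analysis phase. The FeedbackEntry fields and the feedback_context helper are illustrative assumptions, not aprity's actual schema or API.

```python
from dataclasses import dataclass

@dataclass
class FeedbackEntry:
    # Illustrative fields only; not aprity's published schema.
    kind: str         # "CORRECTION", "REJECTION", "ADDITION", or "CLARIFICATION"
    status: str       # "ACTIVE", "ARCHIVED", or "SUPERSEDED"
    target_type: str  # "RULE" or "OVERVIEW"
    object_name: str  # e.g. "Opportunity"
    section: str      # chapter/section within the object's documentation
    text: str         # the correction, addition, or clarification itself

def feedback_context(entries: list[FeedbackEntry], object_name: str) -> str:
    """Collect ACTIVE feedback for one object as extra context for the AI phase."""
    active = [e for e in entries
              if e.status == "ACTIVE" and e.object_name == object_name]
    return "\n".join(f"[{e.kind}] {e.section}: {e.text}" for e in active)
```

Only entries with status ACTIVE are included, which is why archiving resolved feedback (covered below) keeps the context relevant.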

Types of Feedback

Choose the feedback kind that best matches your correction:

  • Correction -- The generated text is factually wrong. Example: a validation rule is described as applying to Accounts, but it actually applies to Opportunities.
  • Rejection -- The generated text should be removed entirely. Example: a paragraph describes behavior that does not exist in the org.
  • Addition -- Important information is missing. Example: a business rule interacts with an external system that is not mentioned.
  • Clarification -- The text is technically correct but unclear. Example: a process description uses generic language instead of the business-specific terminology your team uses.

Submitting Effective Feedback

Be specific

The more precise your feedback, the better the AI can incorporate it. Compare:

  • Vague: "The description of this rule is wrong."
  • Specific: "This validation rule fires on Opportunity close, not on creation. It checks that the Discount field does not exceed the threshold defined in the Pricing_Config custom setting."

Target the right level

Use the cascading picklists in the Feedback form to narrow down to the exact section:

  1. Select the target type (RULE or OVERVIEW).
  2. Select the object (e.g., Opportunity).
  3. Select the chapter and section within that object's documentation.

Feedback targeted at the correct section is applied more precisely during the next scan.
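
As an illustration of that narrowing, the lookup below mirrors the three cascading picklists. The catalog values are invented for illustration; the real picklist options come from your own org's documentation.

```python
# Hypothetical target catalog; real values are populated from your scans.
CATALOG = {
    "RULE": {
        "Opportunity": {"Validation Rules": ["Discount Threshold", "Stage Transitions"]},
    },
    "OVERVIEW": {
        "Opportunity": {"Overview": ["Business Context", "Key Processes"]},
    },
}

def sections_for(target_type: str, object_name: str, chapter: str) -> list[str]:
    """Apply the same three-step narrowing the Feedback form performs."""
    return CATALOG.get(target_type, {}).get(object_name, {}).get(chapter, [])

print(sections_for("RULE", "Opportunity", "Validation Rules"))
# -> ['Discount Threshold', 'Stage Transitions']
```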

One correction per entry

Submit separate feedback entries for each distinct correction rather than combining multiple issues into a single entry. This makes each piece of feedback easier for the AI to process and apply.
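
For example, the validation-rule feedback from earlier is better submitted as two entries, one per issue. The payload shape here is illustrative only:

```python
# Instead of one combined entry...
combined = {
    "kind": "CORRECTION",
    "text": "Fires on close, not creation; also, the threshold comes from Pricing_Config.",
}

# ...submit one entry per distinct issue:
entries = [
    {"kind": "CORRECTION",
     "text": "This rule fires on Opportunity close, not on creation."},
    {"kind": "ADDITION",
     "text": "The threshold is defined in the Pricing_Config custom setting."},
]
```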

Building a Systematic Feedback Practice

After each scan

  1. Assign reviewers to specific objects based on their domain expertise.
  2. Set a review deadline (e.g., within 3 business days of scan completion).
  3. Reviewers browse documentation in the Doc Browser and submit feedback for any inaccuracies.
  4. The documentation lead reviews all submitted feedback for completeness.

Before the next scan

  1. Review open feedback entries in the Feedback tab.
  2. Verify that all corrections are still relevant (the underlying Salesforce configuration may have changed).
  3. Archive outdated feedback that no longer applies.
  4. Run the next scan, which incorporates all ACTIVE feedback.

After the follow-up scan

  1. Verify corrections -- Check that previously reported issues are resolved in the new output.
  2. Submit new feedback if corrections were not fully applied or if new issues are found.
  3. Archive resolved entries that are no longer needed.

Feedback Lifecycle

Feedback entries progress through the following statuses:

ACTIVE --> ARCHIVED (manually archived by the user)
ACTIVE --> SUPERSEDED (replaced by newer feedback for the same target)

  • ACTIVE entries are applied to every subsequent scan.
  • ARCHIVED entries are no longer applied. Archive feedback when the underlying issue is resolved or no longer relevant.
  • SUPERSEDED entries were replaced automatically when a newer feedback entry targeting the same object and section was submitted; like archived entries, they are no longer applied.
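
A minimal sketch of that supersede-on-submit behavior, assuming entries are plain records keyed by object and section (illustrative, not aprity's implementation):

```python
def submit_feedback(entries: list[dict], new: dict) -> None:
    """Mark older ACTIVE entries on the same target SUPERSEDED, then store the new one."""
    for e in entries:
        if (e["status"] == "ACTIVE"
                and e["object"] == new["object"]
                and e["section"] == new["section"]):
            e["status"] = "SUPERSEDED"  # replaced by the newer entry
    new["status"] = "ACTIVE"            # only ACTIVE entries feed the next scan
    entries.append(new)
```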

Measuring Improvement

Track documentation quality over successive scans by monitoring:

  • Feedback volume per scan -- A decreasing trend indicates improving documentation quality.
  • Correction vs. addition ratio -- When additions outnumber corrections, the AI is getting the basics right but missing nuances.
  • Repeat corrections -- If the same issue reappears after being corrected, the feedback may need to be more specific.
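
One way to track the first two signals, assuming you can export the kind of each feedback entry per scan (a sketch, not a built-in aprity report):

```python
from collections import Counter

def scan_metrics(kinds_by_scan: dict[str, list[str]]) -> None:
    """Print volume and the addition-to-correction ratio for each scan."""
    for scan, kinds in kinds_by_scan.items():
        counts = Counter(kinds)
        ratio = counts["ADDITION"] / max(counts["CORRECTION"], 1)
        print(f"{scan}: volume={len(kinds)} additions/corrections={ratio:.2f}")

scan_metrics({
    "Scan 1": ["CORRECTION"] * 8 + ["ADDITION"] * 2,
    "Scan 2": ["CORRECTION"] * 3 + ["ADDITION"] * 4,
})
```

A falling volume alongside a rising addition share is the pattern you want: fewer outright errors, with remaining feedback filling in nuance.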

Common Pitfalls

Not using feedback at all

Generating documentation without reviewing and correcting it misses the primary value of the feedback loop. Even a quick review of the most critical objects yields significant quality improvements.

Feedback that is too vague

Generic feedback like "improve this section" does not give the AI enough context to make meaningful changes. Always include the specific correction or addition you want.

Never archiving old feedback

Over time, accumulated feedback entries may become outdated as the Salesforce org evolves. Review and archive stale feedback periodically to prevent the AI from applying corrections that are no longer valid.