Feedback System

The aprity feedback system lets you refine AI-generated documentation by submitting structured corrections directly from the app. Your feedback is incorporated into subsequent scans, progressively improving documentation accuracy.

info

The feedback system is a feature-gated capability. It must be enabled for your tenant before it becomes available in the navigation.

Interface Layout

The feedback screen uses a two-panel layout:

  • Left panel -- the feedback submission form.
  • Right panel -- a list of all previously submitted feedback entries with status indicators.

Submitting Feedback

Target Selection

Every feedback entry targets a specific piece of generated documentation. Start by selecting the target type:

  • Business Rule -- Feedback on a specific business rule (e.g., a validation rule, trigger, or flow).
  • Overview Field -- Feedback on an object-level overview field (e.g., the business description of an object).

Narrowing the Scope

After selecting a target type, use the cascading picklists to narrow down the exact documentation element:

  1. Object -- select the Salesforce object (e.g., Account, Opportunity, a custom object).
  2. Chapter -- select the documentation chapter within that object.
  3. Section -- select the specific section containing the text you want to address.

The picklist options are populated from the most recent completed scan, so they reflect the actual structure of your generated documentation.
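The cascading behavior above can be sketched as a simple lookup over the latest scan's structure. This is an illustrative sketch only: the data shape and the function names (`chapters_for`, `sections_for`) are assumptions, not aprity's actual API.

```python
# Hypothetical structure of a completed scan: object -> chapter -> sections.
# The example objects and chapter names are illustrative, not real scan output.
SCAN_STRUCTURE = {
    "Account": {
        "Overview": ["Business Description", "Key Relationships"],
        "Validation Rules": ["Rule Summaries"],
    },
    "Opportunity": {
        "Overview": ["Business Description"],
    },
}

def chapters_for(obj: str) -> list[str]:
    """Chapter options shown once an object is selected."""
    return sorted(SCAN_STRUCTURE.get(obj, {}))

def sections_for(obj: str, chapter: str) -> list[str]:
    """Section options shown once a chapter is selected."""
    return SCAN_STRUCTURE.get(obj, {}).get(chapter, [])
```

Each picklist only offers children of the current selection, so a stale or empty combination simply yields no options.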

Feedback Kind

Select the type of feedback you are submitting:

  • Correction -- The generated text is factually wrong. Provide the corrected version.
  • Rejection -- The generated text is irrelevant or misleading and should be removed entirely.
  • Addition -- Information is missing. Provide the text that should be added.
  • Clarification -- The generated text is technically correct but unclear. Provide a clearer version.

Original and Corrected Text

  • Original text -- pre-populated with the current documentation text for the selected target. This field is read-only.
  • Corrected text -- enter your revised text here. For rejections, you may leave this empty or provide a brief explanation of why the content should be removed.
  • Comment -- an optional free-text field (up to 2000 characters) for additional context or reasoning behind your feedback.

tip

Be as specific as possible in your corrected text. aprity uses your feedback as training context for future scans, so precise language produces the best results.
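The form fields and their constraints can be sketched as a small data model. The class and field names here are hypothetical, inferred from the form described above, and the validation rules (comment length, when corrected text is required) are assumptions based on this page, not aprity's actual schema.

```python
from dataclasses import dataclass

COMMENT_MAX_LEN = 2000  # the documented limit on the optional comment field

@dataclass
class FeedbackEntry:
    """Hypothetical shape of a feedback entry; names are illustrative."""
    target_type: str     # "business_rule" or "overview_field"
    kind: str            # "correction", "rejection", "addition", "clarification"
    original_text: str   # pre-populated, read-only in the UI
    corrected_text: str  # may be empty for rejections
    comment: str = ""    # optional free text

    def validate(self) -> None:
        if len(self.comment) > COMMENT_MAX_LEN:
            raise ValueError(f"comment exceeds {COMMENT_MAX_LEN} characters")
        # Only rejections may omit the corrected text.
        if self.kind != "rejection" and not self.corrected_text.strip():
            raise ValueError("corrected text is required for this feedback kind")
```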

Saving

Click Save to submit your entry. The feedback appears immediately in the right panel list.

Feedback List

The right panel lists all feedback entries for the current org, newest first. Each entry shows:

  • Target object and section.
  • Feedback kind (color-coded badge).
  • Current status.
  • Submission date.

Status Tracking

Feedback entries move through the following statuses:

  • Pending -- Submitted but not yet consumed by a scan.
  • Active -- Live and will be applied to the next scan.
  • Archived -- Manually archived by the user; no longer applied.
  • Superseded -- Replaced by a newer feedback entry for the same target.

The list view includes a filter bar with options: All, Active, Archived, and Superseded. Each entry also shows an "Applied" badge when the feedback has been consumed in a completed scan.
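The status lifecycle and the filter bar can be sketched as follows. The transition table is an assumption inferred from the status descriptions above (it is not aprity's documented state machine), and the function names are illustrative.

```python
# Assumed status lifecycle: Pending entries become Active; Active entries
# end as Archived (manual) or Superseded (replaced by a newer entry).
ALLOWED_TRANSITIONS = {
    "PENDING": {"ACTIVE"},
    "ACTIVE": {"ARCHIVED", "SUPERSEDED"},
    "ARCHIVED": set(),     # terminal
    "SUPERSEDED": set(),   # terminal
}

def can_transition(current: str, target: str) -> bool:
    return target in ALLOWED_TRANSITIONS.get(current, set())

def filter_entries(entries: list[dict], tab: str) -> list[dict]:
    """Mirror the filter bar: 'All' shows everything, other tabs match status."""
    if tab == "All":
        return entries
    return [e for e in entries if e["status"] == tab.upper()]
```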

Scan-Locked State

When a scan is currently running, the feedback form enters a locked state. You can view existing feedback but cannot submit new entries until the scan completes. This prevents conflicting modifications during active processing.

note

The locked state applies only during the RUNNING phase. Once the scan reaches any terminal status (COMPLETED, COMPLETED_WITH_ERRORS, or FAILED), the form unlocks automatically.

How Feedback Improves Documentation

When a new scan runs, aprity loads all ACTIVE feedback entries for the target org. The AI analysis phase receives this feedback as additional context, allowing it to:

  • Correct factual errors flagged in previous outputs.
  • Incorporate missing information provided through additions.
  • Adjust tone and clarity based on clarification entries.
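The selection step described above, loading all ACTIVE feedback for the target org, can be sketched as a simple filter. The entry shape and function name are assumptions for illustration:

```python
def feedback_context_for_scan(entries: list[dict], org_id: str) -> list[dict]:
    """Select the ACTIVE entries a new scan for this org receives as AI context."""
    return [
        e for e in entries
        if e["org_id"] == org_id and e["status"] == "ACTIVE"
    ]
```

Archived and superseded entries are excluded, which is why only the most recent feedback for a given target influences the next scan.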

Over multiple scan cycles, feedback accumulates and documentation quality improves progressively.