Live Survey Access

Open the running survey and use the live experience.

This page is for collaborators who can open the running AXP web survey. Here, live survey access means access to the deployed survey page itself, not to the code or repository. The public participant entry point is alteredxproject.com; collaborators can use alteredxproject.com/dev/ for test runs that should not count as ordinary participant data.

What live survey access lets you do

  • Open the current live questionnaire URL that was shared for testing.
  • Choose a language and walk the actual running flow.
  • Submit a response and reach the feedback view.
  • Check how wording, answer options, and required-field behavior appear in the running survey.
  • Use the feedback comparison controls, legend, reading guide, and share or download image actions after submission.

Live survey workflow

  1. Open the current app URL.

    Use the public participant link for real collection, or the dev link for collaborator test runs. Start in the running survey rather than a local or static copy.

  2. Choose your language.

    Check the live wording in the language that matters for the current task.

  3. Complete the combined entry step.

    The live app shows substance, dose, optional context, and the app-side consent checkbox together before the slider pages.

  4. Answer the slider pages and optional final free-text page.

    Slider order may be randomized by the current questionnaire definition. The optional q_free page, when present, is the last page before feedback.

  5. Submit and review the feedback screen.

    Do not stop before submit if the task depends on feedback or downstream behavior. The stored duration is measured from the moment the context/consent step is displayed until submit; a sketch of that timing follows this list.
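
A minimal sketch of the duration measurement mentioned in step 5, written in R because the live app runs on Shiny. The object and field names below are assumptions chosen for illustration, not the production code.

```r
# Illustrative only: one way the stored duration could be derived, counted
# from when the context/consent step is first shown until submit.
# All names here are assumed, not taken from the production app.
mark_context_shown <- function(state) {
  # Record the first time the context/consent step is displayed.
  if (is.null(state$context_shown_at)) {
    state$context_shown_at <- Sys.time()
  }
  state
}

duration_at_submit <- function(state) {
  # Whole seconds between that first display and the moment of submit.
  as.integer(difftime(Sys.time(), state$context_shown_at, units = "secs"))
}

state <- mark_context_shown(list())
# ... participant completes the slider pages and optional q_free page ...
duration_at_submit(state)
```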

What a submitted response stores

  • The questionnaire answers needed for scoring and feedback.
  • The questionnaire version, language, definition hash, and presented item order needed to interpret the response later.
  • Context values used for peer comparison; participant-facing comparison controls remain limited to the enabled, stable rows from comparison_tokens.
  • Broad timing and operational quality metadata used to review whether the submitted response looks like a reasonable completed run.

The public guide intentionally does not publish the exact review criteria. The goal is to document that this metadata exists and is stored without exposing implementation details that would make quality review less reliable.
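
Purely for orientation, the sketch below shows the kind of record those bullets describe, written as an R list. Every field name is an assumption chosen for illustration, not the production schema, and the quality metadata is left as an unspecified placeholder in line with the note above.

```r
# Hypothetical record shape only; names are illustrative, not the real schema.
submitted_response <- list(
  answers   = list(q_example_1 = 72, q_example_2 = 15),  # slider answers keyed by item_id
  free_text = "optional q_free comment",
  questionnaire = list(
    version         = "assumed-version-label",
    language        = "en",
    definition_hash = "assumed-hash-of-loaded-definition",
    item_order      = c("q_example_2", "q_example_1")    # items in the order presented
  ),
  context = list(substance = "psilocybin", dose_bucket = "medium"),
  duration_seconds = 412,   # context/consent display to submit
  quality_metadata = list() # broad timing and operational fields, intentionally not detailed here
)
```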

What the feedback screen now includes

  • A radial violin comparison view that uses the selected canonical induction and dose bucket.
  • Comparison controls populated from enabled comparison_tokens rows. The current production-enabled options are cannabis, psilocybin, alcohol, and MDMA for substance, and low, medium, and high for dose (see the sketch after this list).
  • A purple self profile over the selected peer distribution, plus a legend that names the current submission and peer group.
  • A peer count note showing how many complete comparison profiles were used for the selected bucket.
  • A reading guide toggle that explains the violin, quartiles, whiskers, and median marks.
  • Share or download image actions for the current feedback snapshot.
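
A minimal sketch, assuming comparison_tokens is loaded as a data frame with hypothetical token, type, and enabled columns, of how the comparison controls and the peer count note could be derived. This is illustrative, not the production code.

```r
# Illustrative only: limit feedback controls to enabled comparison_tokens rows
# and count complete peer profiles for the selected bucket.
# Column names (token, type, enabled, ...) are assumptions.
build_comparison_choices <- function(comparison_tokens) {
  enabled <- comparison_tokens[comparison_tokens$enabled, , drop = FALSE]
  list(
    substances   = enabled$token[enabled$type == "substance"],  # e.g. cannabis, psilocybin, alcohol, MDMA
    dose_buckets = enabled$token[enabled$type == "dose"]        # e.g. low, medium, high
  )
}

peer_count <- function(peer_profiles, substance, dose_bucket) {
  # Number of complete comparison profiles in the selected bucket,
  # as shown in the peer count note on the feedback screen.
  sum(peer_profiles$substance == substance &
        peer_profiles$dose_bucket == dose_bucket &
        peer_profiles$complete)
}
```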

What reload does

  • Opening or refreshing the survey page starts a new Shiny session, but the same browser tab can resume an in-progress questionnaire from session storage when the saved draft still matches the loaded definition.
  • The in-app questionnaire reload keeps the current session open and attempts a best-effort answer restore after the sheet definition refreshes.
  • If item_id values and input types stay stable, previously entered values are most likely to survive reload (see the sketch after this list).
  • If the questionnaire structure changes, affected fields can re-render or reset, so questionnaire reload is safest before someone starts answering.
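
A minimal sketch of the best-effort restore described above, assuming the saved draft keeps each answer together with the item_id and input type it was entered under. Names and structures are assumptions, not the production implementation.

```r
# Illustrative only: keep a saved answer when the refreshed definition still
# contains the same item_id with the same input type; otherwise drop it so
# the field re-renders empty.
restore_answers <- function(saved_draft, new_definition) {
  kept <- list()
  for (item_id in names(saved_draft)) {
    row <- new_definition[new_definition$item_id == item_id, , drop = FALSE]
    same_type <- nrow(row) == 1 &&
      identical(row$input_type, saved_draft[[item_id]]$input_type)
    if (same_type) {
      kept[[item_id]] <- saved_draft[[item_id]]$value
    }
  }
  kept
}
```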

Need the big picture?

Open the diagrams when you want to connect the running survey to upstream Google Sheets edits and downstream data examples and exports.