mock-data/submission_table.csv
Submission-level metadata written once per survey session, including the session duration, measured from when the context/consent step is displayed until the participant submits; the exact item order shown to the participant; and high-level data-quality review metadata. The exact review criteria are intentionally not documented in this public guide.
Datatype: MariaDB table excerpt shown as CSV. Shape: one row per submission_id.
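The duration definition above can be sketched as follows. The field and function names here are hypothetical illustrations; the live submission table's columns are not documented in this public guide.

```python
from datetime import datetime
import json

def build_submission_row(submission_id, context_shown_at, submitted_at, item_order):
    """Assemble one submission-level metadata row (hypothetical shape).

    Duration is measured, per this guide, from the context/consent
    step being displayed to the submit event; the item order is kept
    exactly as it was shown to the participant.
    """
    return {
        "submission_id": submission_id,
        "duration_seconds": (submitted_at - context_shown_at).total_seconds(),
        "item_order": json.dumps(item_order),  # exact order shown
    }
```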
mock-data/response_numeric_table.csv
The durable numeric answer rows keyed by submission and item id. The live schema enforces one row per submission_id + item_id.
Datatype: MariaDB table excerpt shown as CSV. Shape: one row per numeric answer.
mock-data/response_text_table.csv
The durable text/context rows. Public export never includes these raw text values. STORE_TEXT_RESPONSES controls full text storage, while STORE_CONTEXT_TEXT separately controls whether q0/q1 context is kept for peer comparability when full text storage is off.
Datatype: MariaDB table excerpt shown as CSV. Shape: one row per text or context field.
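The interaction of the two flags can be sketched as a filter over incoming text rows. This is a hedged illustration of the rule stated above, not the live implementation; the function name and row shape are assumptions.

```python
CONTEXT_ITEMS = {"q0", "q1"}  # context items named in this guide

def rows_to_store(text_rows, store_text_responses, store_context_text):
    """Filter raw text rows per the two storage flags (sketch).

    STORE_TEXT_RESPONSES on: keep every text row.
    STORE_TEXT_RESPONSES off but STORE_CONTEXT_TEXT on: keep only
    the q0/q1 context rows, preserving peer comparability.
    Both off: store nothing.
    """
    if store_text_responses:
        return list(text_rows)
    if store_context_text:
        return [r for r in text_rows if r["item_id"] in CONTEXT_ITEMS]
    return []
```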
mock-data/score_table.csv
The durable ASC factor score rows written after scoring succeeds. The schema enforces one row per submission_id + scale_id, and exports require all 11 canonical factors for every submission.
Datatype: MariaDB table excerpt shown as CSV. Shape: one row per canonical ASC factor per submission.
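The export requirement of all 11 canonical factors per submission suggests a simple completeness check over the score rows. A minimal sketch, assuming rows carry submission_id and scale_id fields as described above; the canonical factor IDs themselves are not listed in this guide.

```python
from collections import defaultdict

NUM_CANONICAL_FACTORS = 11  # per this guide

def submissions_missing_factors(score_rows):
    """Group score rows by submission_id and flag submissions whose
    distinct scale_id count differs from the 11 canonical factors."""
    scales = defaultdict(set)
    for row in score_rows:
        scales[row["submission_id"]].add(row["scale_id"])
    return sorted(
        sid for sid, s in scales.items()
        if len(s) != NUM_CANONICAL_FACTORS
    )
```

Any submission returned here would be blocked from export under the rule described above.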