
AI-Assisted Features Overview

Askalot integrates AI assistance throughout the survey research workflow, helping researchers design, execute, and analyze surveys more efficiently while maintaining formal rigor through SMT-validated questionnaire logic.

The AI-Assisted Workflow

AI capabilities are embedded at each stage of the survey lifecycle:

```mermaid
flowchart LR
    subgraph design["1. Design"]
        d1[Research Brief] --> d2[Research Agent]
        d2 --> d3[Research Document]
        d3 --> d4[Designer Agent]
        d4 --> d5[SMT Validation]
    end

    subgraph campaign["2. Campaign"]
        c1[Wizard Guidance] --> c2[Strategy Design]
        c2 --> c3[Pool Generation]
        c3 --> c4[Launch]
    end

    subgraph execute["3. Execute"]
        e1[Real Respondents] --> e2[Survey Completion]
        e3[Persona Simulation] --> e2
    end

    subgraph analyze["4. Analyze"]
        a1[Bronze Dataset] --> a2[Quality Metrics]
        a2 --> a3[AI Analyst]
        a3 --> a4[Structured Report]
    end

    design --> campaign
    campaign --> execute
    execute --> analyze
```

Feature Summary

| Feature | Stage | Purpose | Key Capabilities |
|---|---|---|---|
| Questionnaire Generation | Design | Transform research briefs into validated QML | Research Agent + Designer Agent collaboration, semantic indexing, SMT validation |
| Campaign Management | Campaign | Guide campaign setup and execution | Wizard flow, chat interface, sampling strategy design |
| Response Generation | Execute | Simulate campaigns with synthetic data | Persona profiles, realistic responses, pipeline testing |
| Result Analysis | Analyze | Evaluate data quality and generate reports | Sample representativeness, response quality metrics, AI analyst reports |

AI-Assisted Questionnaire Generation

Stage: Design | Tool: Armiger

Transform research documents and briefs into formally verified QML questionnaires using two specialized AI agents.

Two agents collaborate: a Research Agent analyzes uploaded documents (PDFs, specs, regulations) and extracts requirements, then a Designer Agent plans the questionnaire structure and writes formally verified QML section by section, validating each piece against the Z3 SMT solver before assembly.
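To illustrate the kind of property that validation checks, here is a minimal stand-in. Armiger uses the Z3 SMT solver; this sketch enumerates a small finite domain instead, and the answer domains and skip-logic guards are invented for the example:

```python
from itertools import product

# Hypothetical mini-model of a questionnaire's skip logic. The real
# Designer Agent proves these properties with Z3; brute force over a
# small finite domain keeps the sketch self-contained.
AGE = ["under_18", "18_64", "65_plus"]
EMPLOYMENT = ["employed", "unemployed", "retired"]

def shows_workplace(age, employment):
    # Guard: "workplace" is asked only of employed respondents.
    return employment == "employed"

def shows_alcohol(age, employment):
    # Guard: "alcohol" is asked only of adults.
    return age != "under_18"

# Property 1 (consistency): no respondent path both answers
# "unemployed" and sees the workplace question.
violations = [
    (a, e)
    for a, e in product(AGE, EMPLOYMENT)
    if e == "unemployed" and shows_workplace(a, e)
]
assert violations == []

# Property 2 (reachability): every question is reachable by at least
# one respondent, i.e. the skip logic has no dead branches.
assert any(shows_alcohol(a, e) for a, e in product(AGE, EMPLOYMENT))
print("skip-logic checks passed")
```

A solver-based check proves the same properties symbolically, so it scales past what enumeration can cover.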



AI-Assisted Campaign Management

Stage: Campaign | Tool: Targetor

Two complementary interfaces for campaign setup and management.

Campaign Wizard

Guided multi-step flow for beginners:

  1. Campaign Design - Name, questionnaire selection, mode
  2. Target Audience - Demographics, sample size
  3. Sampling Strategy - Stratification factors, distributions
  4. Respondent Pool - Generate and preview pool quality
  5. Review & Launch - Summary, validation, deployment
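A minimal sketch of how the wizard's step gating could work, assuming a hypothetical draft-campaign dict; the field names are illustrative, not Askalot's actual schema:

```python
# Each wizard step has required fields; "Review & Launch" is reachable
# only when every earlier step is complete. (Illustrative sketch.)
REQUIRED_BY_STEP = {
    "campaign_design": ["name", "questionnaire_id", "mode"],
    "target_audience": ["demographics", "sample_size"],
    "sampling_strategy": ["stratification_factors", "distributions"],
    "respondent_pool": ["pool_id"],
}

def first_incomplete_step(draft):
    """Return the first wizard step with missing fields, or None if launch-ready."""
    for step, fields in REQUIRED_BY_STEP.items():
        if any(draft.get(f) in (None, "", []) for f in fields):
            return step
    return None

draft = {"name": "Q3 brand tracker", "questionnaire_id": "qml-42", "mode": "online"}
print(first_incomplete_step(draft))  # → target_audience
```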

Chat Interface

On-demand conversational interface for power users:

  • "Merge these two pools to get more respondents"
  • "Show me which interviewers have the lowest completion rates"
  • "Create a new pool with only respondents aged 25-34"
  • "Compare response rates between campaign versions"



Agentic Response Generation

Stage: Execute | Tool: Portor MCP (mass_fill_surveys)

Generate synthetic survey data for testing, validation, and demos. The mass_fill_surveys tool offers four distribution strategies that trade off speed against response quality:

Distribution Strategies

| Strategy | Engine | Speed | Quality | Best For |
|---|---|---|---|---|
| realistic | Weighted random | Instant | Medium | Default; demographically influenced responses |
| random | Uniform random | Instant | Low | Stress testing, edge case discovery |
| stratified | Strata-matched | Instant | Medium | Quota-balanced samples |
| llm | Claude Haiku | ~1s/question | High | Realistic synthetic data, demos, presentations |

llm mode produces the highest quality synthetic data: each question is answered by Claude Haiku with the full persona profile, respondent demographics, and accumulated Q&A history for intra-survey consistency. A respondent who answers "unemployed" won't describe a workplace later.

Rule-based modes (realistic, random, stratified) use weighted random selection based on persona traits — faster for high-volume pipeline testing where response quality matters less.
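A sketch of the weighted-selection idea behind the realistic mode, with an invented trait-to-weight table (Portor's actual weighting is internal):

```python
import random

def realistic_answer(options, persona, weights_by_trait, rng):
    """Draw one answer with weights nudged by the persona's traits."""
    weights = [1.0] * len(options)
    for trait in persona:
        for i, bump in weights_by_trait.get(trait, {}).items():
            weights[i] *= bump
    return rng.choices(options, weights=weights, k=1)[0]

options = ["never", "sometimes", "often"]
# Made-up bias: a "health_conscious" persona leans toward "never"
# (index 0 boosted) and away from "often" (index 2 suppressed).
weights_by_trait = {"health_conscious": {0: 3.0, 2: 0.2}}

rng = random.Random(7)  # seeded for reproducibility
answers = [
    realistic_answer(options, ["health_conscious"], weights_by_trait, rng)
    for _ in range(1000)
]
assert answers.count("never") > answers.count("often")
```

The same mechanism covers the stratified mode if the weight table is keyed by stratum instead of trait.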



AI-Assisted Result Analysis

Stage: Analyze | Tool: Balansor

Evaluate survey data quality across two dimensions and generate methodology-grounded reports.

Two Quality Dimensions

| Dimension | Metrics | Question Answered |
|---|---|---|
| Sample Representativeness | RMSE, MAE, Chi-Square, Max Deviation, Quality Score | Does the sample match the target population? |
| Response Quality | Normalized Entropy, Straightlining, Cronbach's Alpha, Acquiescence Bias | Are responses reliable and informative? |
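Two of these metrics are simple enough to sketch with their standard textbook definitions (not necessarily Balansor's exact implementation):

```python
import math
from collections import Counter

def rmse(sample_props, target_props):
    """Sample representativeness: root-mean-square error between the
    sample's demographic proportions and the target population's."""
    keys = target_props.keys()
    return math.sqrt(
        sum((sample_props.get(k, 0.0) - target_props[k]) ** 2 for k in keys)
        / len(keys)
    )

def normalized_entropy(responses):
    """Response quality: Shannon entropy of the answer distribution,
    scaled to [0, 1] by the maximum log2(k) over the k observed options.
    Near 0 means everyone picked the same option (uninformative)."""
    counts = Counter(responses)
    n, k = len(responses), len(counts)
    if k <= 1:
        return 0.0
    h = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return h / math.log2(k)

target = {"18-34": 0.30, "35-54": 0.40, "55+": 0.30}
sample = {"18-34": 0.25, "35-54": 0.45, "55+": 0.30}
print(round(rmse(sample, target), 4))            # → 0.0408
print(normalized_entropy(["a", "a", "a", "a"]))  # → 0.0
```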

AI Analyst Agent

The analyst agent runs quality assessment tools via MCP, gathers campaign context, and produces a structured report:

  1. Executive Summary — overall fitness for purpose
  2. Sample Representativeness — per-factor breakdown with specific numbers
  3. Weighting Assessment — Bronze vs Silver improvement analysis
  4. Key Findings — data-driven observations
  5. Recommendations — prioritized, actionable next steps

The agent is grounded in established survey methodology (AAPOR, Kish, Groves, Krosnick, ESOMAR) and interprets statistical metrics in practical research context.



Grounding in Peer-Reviewed Methodology

Askalot's Designer, Manager, and Analyst agents consult a shared library of ~30 peer-reviewed survey-methodology books and papers (Dillman, Krosnick, Groves, Tourangeau, Bethlehem, Heeringa, Schouten, Fowler, and others) covering the full research process — design, sampling, fielding, analysis, weighting — when a question goes beyond their built-in skills or when a recommendation needs citable evidence.

The library is a graph-aware knowledge base built from Docling-parsed markdown of each paper, stored in the LightRAG vector + graph backend, and retrievable in five modes (keyword, entity-centric, concept-centric, hybrid, and mixed). Agents cite the paper, year, and section when they rely on it, so researchers can verify the claim directly.

| Agent | Topics for Which It Consults the Library |
|---|---|
| Designer | Validity and reliability (Taherdoost, Aithal), questionnaire logic and skip patterns (Fagan & Greenberg, Elliott, Feeney & Feeney, Schiopu-Kratina, Manski & Molinari), question wording and response scales (Krosnick, Bradburn, Fowler) |
| Manager | Sampling design, stratification, cluster sampling, design effects and weighting fundamentals (Heeringa, Bethlehem, Schouten), adaptive survey design |
| Analyst | Total Survey Error, nonresponse bias, raking and post-stratification, response-quality frameworks |

The library is consulted on demand — agents default to their distilled skills for routine questions to keep latency low, and only reach for the full corpus when the customer asks for a citation or when the question sits outside the skill's scope.


Integration Across Stages

The AI features work together seamlessly:

Document → Data Flow

Research Documents
    ↓ [AI Questionnaire Generation]
QML Questionnaire
    ↓ [AI Campaign Management]
Campaign with Sampling Strategy
    ↓ [Agentic Response Generation - optional]
Survey Responses (real or simulated)
    ↓ [AI Result Analysis]
Insights and Reports

Shared Context

AI assistants share context across the workflow:

  • The Research Agent's document analysis informs the Designer Agent's QML generation
  • Questionnaire structure informs sampling strategy recommendations
  • Campaign demographics guide persona selection for simulation
  • Survey responses feed directly into analysis tools
  • Insights can trigger questionnaire refinements (closing the loop)

Project-Scoped Isolation

All AI-indexed documents are scoped by the active project, ensuring that research materials stay within team boundaries. In non-private organizations, each user's AI sessions are automatically scoped to their default project. This isolation carries through the entire workflow — from document indexing in Phase 1 to the Designer Agent's MCP tool access in Phase 2.
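Conceptually, the scoping is a filter applied before any retrieval or agent tool call ever sees a document; the sketch below uses invented names (Document, search) to show the pattern, not Askalot's API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Document:
    doc_id: str
    project_id: str
    text: str

INDEX = [
    Document("d1", "proj-a", "EU accessibility regulation brief"),
    Document("d2", "proj-b", "Retail pricing survey spec"),
]

def search(index, query, *, project_id):
    """Scope first, rank second: the project filter runs before any
    relevance matching, so cross-project leakage cannot occur."""
    scoped = [d for d in index if d.project_id == project_id]
    return [d for d in scoped if query.lower() in d.text.lower()]

assert [d.doc_id for d in search(INDEX, "survey", project_id="proj-b")] == ["d2"]
assert search(INDEX, "survey", project_id="proj-a") == []
```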


AI Provider

All AI agent tasks in Askalot run on Anthropic's Claude models, which the platform reaches through either of two gateways:

| Gateway | Models | Best For |
|---|---|---|
| Anthropic API | Claude Opus, Sonnet, Haiku | Default, direct API access |
| AWS Bedrock | Claude Opus, Sonnet, Haiku | Enterprise deployments that need AWS IAM, VPC, and data-residency controls |

Customers bring their own API credentials, so there is no vendor lock-in at the account level, and Bedrock lets enterprises keep all traffic inside their own AWS account.

Local Inference

The platform includes a local Ollama instance for document embedding only (BGE-M3 model for semantic indexing in Questionnaire Generation). All AI agent tasks — questionnaire design, campaign management, response simulation, and result analysis — require Claude via the Anthropic API or AWS Bedrock. The server does not have GPU acceleration for running large language models locally.
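Once chunks are embedded (by BGE-M3 via the local Ollama instance, in Askalot's case), semantic indexing reduces to nearest-neighbor ranking. This sketch uses tiny made-up vectors in place of real embeddings:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy 3-dimensional stand-ins for real embedding vectors.
chunks = {
    "consent wording": [0.9, 0.1, 0.0],
    "sampling frame": [0.1, 0.8, 0.2],
    "data retention": [0.0, 0.2, 0.9],
}
query_vec = [0.85, 0.15, 0.05]  # pretend embedding of "informed consent text"

ranked = sorted(chunks, key=lambda c: cosine(query_vec, chunks[c]), reverse=True)
print(ranked[0])  # → consent wording
```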


Getting Started