
AI-Assisted Features Overview

Askalot integrates AI assistance throughout the survey research workflow, helping researchers design, execute, and analyze surveys more efficiently while maintaining formal rigor through SMT-based validation.

The AI-Assisted Workflow

AI capabilities are embedded at each stage of the survey lifecycle:

```mermaid
flowchart LR
    subgraph design["1. Design"]
        d1[Research Brief] --> d2[Research Agent]
        d2 --> d3[Research Document]
        d3 --> d4[Designer Agent]
        d4 --> d5[SMT Validation]
    end

    subgraph campaign["2. Campaign"]
        c1[Wizard Guidance] --> c2[Strategy Design]
        c2 --> c3[Pool Generation]
        c3 --> c4[Launch]
    end

    subgraph execute["3. Execute"]
        e1[Real Respondents] --> e2[Survey Completion]
        e3[Persona Simulation] --> e2
    end

    subgraph analyze["4. Analyze"]
        a1[Bronze Dataset] --> a2[Quality Metrics]
        a2 --> a3[AI Analyst]
        a3 --> a4[Structured Report]
    end

    design --> campaign
    campaign --> execute
    execute --> analyze
```

Feature Summary

| Feature | Stage | Purpose | Key Capabilities |
|---|---|---|---|
| Questionnaire Generation | Design | Transform research briefs into validated QML | Research Agent + Designer Agent collaboration, semantic indexing, SMT validation |
| Campaign Management | Campaign | Guide campaign setup and execution | Wizard flow, chat interface, sampling strategy design |
| Response Generation | Execute | Simulate campaigns with synthetic data | Persona profiles, realistic responses, pipeline testing |
| Result Analysis | Analyze | Evaluate data quality and generate reports | Sample representativeness, response quality metrics, AI analyst reports |

AI-Assisted Questionnaire Generation

Stage: Design | Tool: Armiger

Transform research documents and briefs into formally verified QML questionnaires using two specialized AI agents.

Two agents collaborate: a Research Agent analyzes uploaded documents (PDFs, specs, regulations) and extracts requirements, then a Designer Agent generates formally verified QML with parallel block generation and SMT validation.
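The kind of consistency check SMT validation performs can be illustrated with a toy example: verifying that skip logic can never show a question whose precondition is unsatisfied. This sketch uses brute-force enumeration instead of a real SMT solver (which checks the same property symbolically); the question names and routing rule are hypothetical, not actual QML.

```python
from itertools import product

# Hypothetical questionnaire: Q1 asks employment status; Q2 (workplace size)
# must only be reachable when Q1 == "employed".
ANSWERS = {"Q1": ["employed", "unemployed", "student"]}

def route(answers):
    """Toy routing logic: return the questions shown for a given answer set."""
    shown = ["Q1"]
    if answers["Q1"] == "employed":
        shown.append("Q2")
    return shown

def validate_reachability():
    """Enumerate every answer combination and collect violations where Q2
    is shown even though its precondition does not hold."""
    violations = []
    for combo in product(*ANSWERS.values()):
        answers = dict(zip(ANSWERS.keys(), combo))
        if "Q2" in route(answers) and answers["Q1"] != "employed":
            violations.append(answers)
    return violations

print(validate_reachability())  # [] means the skip logic is consistent
```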



AI-Assisted Campaign Management

Stage: Campaign | Tool: Targetor

Two complementary interfaces for campaign setup and management.

Campaign Wizard

Guided multi-step flow for beginners:

  1. Campaign Design - Name, questionnaire selection, mode
  2. Target Audience - Demographics, sample size
  3. Sampling Strategy - Stratification factors, distributions
  4. Respondent Pool - Generate and preview pool quality
  5. Review & Launch - Summary, validation, deployment
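The five wizard steps map naturally onto a single campaign configuration that is validated before launch. The field names below are hypothetical, not Targetor's actual schema; this is only a sketch of the shape of the data each step fills in.

```python
from dataclasses import dataclass, field

@dataclass
class CampaignConfig:
    # Step 1: Campaign Design
    name: str
    questionnaire_id: str
    mode: str = "online"
    # Step 2: Target Audience
    target_demographics: dict = field(default_factory=dict)
    sample_size: int = 0
    # Step 3: Sampling Strategy
    stratification_factors: list = field(default_factory=list)

    def validate(self) -> list:
        """Step 5 review: collect blocking issues before deployment."""
        issues = []
        if not self.name:
            issues.append("campaign name is required")
        if self.sample_size <= 0:
            issues.append("sample size must be positive")
        return issues

cfg = CampaignConfig(name="Pilot", questionnaire_id="qml-001", sample_size=500,
                     stratification_factors=["age", "region"])
print(cfg.validate())  # [] when the config is ready to launch
```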

Chat Interface

On-demand conversational interface for power users:

  • "Merge these two pools to get more respondents"
  • "Show me which interviewers have the lowest completion rates"
  • "Create a new pool with only respondents aged 25-34"
  • "Compare response rates between campaign versions"
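Behind the scenes, a request like "only respondents aged 25-34" reduces to a filter over the respondent pool. A minimal sketch with a hypothetical pool structure (field names are illustrative):

```python
pool = [
    {"id": "r1", "age": 29, "completed": True},
    {"id": "r2", "age": 41, "completed": False},
    {"id": "r3", "age": 33, "completed": True},
]

def filter_pool(pool, min_age, max_age):
    """Create a new pool containing only respondents in the age range."""
    return [r for r in pool if min_age <= r["age"] <= max_age]

print([r["id"] for r in filter_pool(pool, 25, 34)])  # ['r1', 'r3']
```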



Agentic Response Generation

Stage: Execute | Tool: Portor MCP (mass_fill_surveys)

Generate synthetic survey data for testing, validation, and demo purposes. The mass_fill_surveys tool offers four distribution strategies to balance speed vs. response quality:

Distribution Strategies

| Strategy | Engine | Speed | Quality | Best For |
|---|---|---|---|---|
| realistic | Weighted random | Instant | Medium | Default; demographically influenced responses |
| random | Uniform random | Instant | Low | Stress testing, edge case discovery |
| stratified | Strata-matched | Instant | Medium | Quota-balanced samples |
| llm | Claude Haiku | ~1s/question | High | Realistic synthetic data, demos, presentations |

llm mode produces the highest quality synthetic data: each question is answered by Claude Haiku with the full persona profile, respondent demographics, and accumulated Q&A history for intra-survey consistency. A respondent who answers "unemployed" won't describe a workplace later.
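The intra-survey consistency described above comes from sending each question to the model together with everything answered so far. A simplified sketch of that prompt assembly (the format is illustrative, not Portor's actual prompt):

```python
def build_prompt(persona, demographics, history, question):
    """Assemble a per-question prompt that includes accumulated Q&A history
    so later answers stay consistent with earlier ones."""
    lines = [f"You are answering a survey as: {persona}",
             f"Demographics: {demographics}"]
    if history:
        lines.append("Your previous answers:")
        lines += [f"  Q: {q}\n  A: {a}" for q, a in history]
    lines.append(f"Next question: {question}")
    return "\n".join(lines)

history = [("What is your employment status?", "unemployed")]
prompt = build_prompt("cautious 34-year-old", {"age": 34, "region": "urban"},
                      history, "Describe your workplace.")
print("unemployed" in prompt)  # True: the model sees the earlier answer
```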

Rule-based modes (realistic, random, stratified) use weighted random selection based on persona traits — faster for high-volume pipeline testing where response quality matters less.
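The realistic strategy boils down to weighted random choice, with weights shifted by persona traits. A minimal sketch (the weights are made up for illustration):

```python
import random

def realistic_answer(options, persona_weights, rng=random.Random(0)):
    """Pick an option with probability proportional to its persona-adjusted
    weight; unlisted options default to weight 1.0."""
    weights = [persona_weights.get(o, 1.0) for o in options]
    return rng.choices(options, weights=weights, k=1)[0]

# A hypothetical budget-conscious persona leans toward cheaper options.
options = ["premium", "standard", "budget"]
weights = {"premium": 0.5, "standard": 1.0, "budget": 3.0}
sample = [realistic_answer(options, weights) for _ in range(1000)]
print(sample.count("budget") > sample.count("premium"))  # True
```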



AI-Assisted Result Analysis

Stage: Analyze | Tool: Balansor

Evaluate survey data quality across two dimensions and generate methodology-grounded reports.

Two Quality Dimensions

| Dimension | Metrics | Question Answered |
|---|---|---|
| Sample Representativeness | RMSE, MAE, Chi-Square, Max Deviation, Quality Score | Does the sample match the target population? |
| Response Quality | Normalized Entropy, Straightlining, Cronbach's Alpha, Acquiescence Bias | Are responses reliable and informative? |
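Most of these metrics have textbook definitions. A sketch of three of them (the formulas below are the standard ones, not necessarily Balansor's exact implementation):

```python
import math

def rmse(sample_props, target_props):
    """Representativeness: root-mean-square error between sample and target
    proportions across demographic cells."""
    cells = target_props.keys()
    return math.sqrt(sum((sample_props.get(c, 0.0) - target_props[c]) ** 2
                         for c in cells) / len(cells))

def normalized_entropy(counts):
    """Response quality: Shannon entropy of the answer distribution, scaled
    to [0, 1]. Low values mean most respondents picked the same option."""
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    h = -sum(p * math.log2(p) for p in probs)
    return h / math.log2(len(counts)) if len(counts) > 1 else 0.0

def straightlining_rate(grid_answers):
    """Share of respondents giving the identical answer to every grid item."""
    flat = sum(1 for row in grid_answers if len(set(row)) == 1)
    return flat / len(grid_answers)

print(rmse({"18-34": 0.30, "35+": 0.70}, {"18-34": 0.35, "35+": 0.65}))
print(round(normalized_entropy([50, 30, 20]), 3))
print(straightlining_rate([[3, 3, 3], [1, 4, 2]]))  # 0.5
```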

AI Analyst Agent

The analyst agent runs quality assessment tools via MCP, gathers campaign context, and produces a structured report:

  1. Executive Summary — overall fitness for purpose
  2. Sample Representativeness — per-factor breakdown with specific numbers
  3. Weighting Assessment — Bronze vs Silver improvement analysis
  4. Key Findings — data-driven observations
  5. Recommendations — prioritized, actionable next steps

The agent is grounded in established survey methodology (AAPOR, Kish, Groves, Krosnick, ESOMAR) and interprets statistical metrics in practical research context.



Integration Across Stages

The AI features pass context and output from one stage to the next:

Document → Data Flow

```
Research Documents
    ↓ [AI Questionnaire Generation]
QML Questionnaire
    ↓ [AI Campaign Management]
Campaign with Sampling Strategy
    ↓ [Agentic Response Generation - optional]
Survey Responses (real or simulated)
    ↓ [AI Result Analysis]
Insights and Reports
```

Shared Context

AI assistants share context across the workflow:

  • The Research Agent's document analysis informs the Designer Agent's QML generation
  • Questionnaire structure informs sampling strategy recommendations
  • Campaign demographics guide persona selection for simulation
  • Survey responses feed directly into analysis tools
  • Insights can trigger questionnaire refinements (closing the loop)

Project-Scoped Isolation

All AI-indexed documents are scoped by the active project, ensuring that research materials stay within team boundaries. In non-private organizations, each user's AI sessions are automatically scoped to their default project. This isolation carries through the entire workflow — from document indexing in Phase 1 to the Designer Agent's MCP tool access in Phase 2.
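In effect, project scoping means every retrieval over indexed documents is filtered by the active project. A toy sketch of the rule (field names are hypothetical):

```python
documents = [
    {"doc_id": "d1", "project_id": "alpha", "text": "EU regulation excerpt"},
    {"doc_id": "d2", "project_id": "beta", "text": "Internal spec"},
]

def retrieve(documents, active_project):
    """Only documents indexed under the active project are visible to agents."""
    return [d for d in documents if d["project_id"] == active_project]

print([d["doc_id"] for d in retrieve(documents, "alpha")])  # ['d1']
```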


Provider Flexibility

Organizations can configure their preferred AI provider:

| Provider | Models | Best For |
|---|---|---|
| Anthropic | Claude Sonnet, Claude Haiku | Default, best reasoning |
| AWS Bedrock | Claude, Llama | Enterprise deployments |
| OpenAI | GPT-4, GPT-4o | Alternative provider |

Customers can bring their own API credentials—no vendor lock-in.

Local Inference

The platform includes a local Ollama instance for document embedding only (BGE-M3 model for semantic indexing in Questionnaire Generation). All AI agent tasks — questionnaire design, campaign management, response simulation, and result analysis — require a cloud AI provider (Anthropic, AWS Bedrock, or OpenAI). The server does not have GPU acceleration for running large language models locally.


Getting Started