
AI-Assisted Campaign Management

This document describes the AI-assisted campaign management features that help users design and manage survey campaigns through conversational interfaces.

Overview

AI-assisted campaign management provides two complementary interfaces for interacting with the Askalot platform:

  1. Campaign Wizard - A guided, multi-step flow for beginners to create campaigns from scratch
  2. Chat Interface - An on-demand conversational interface for ad-hoc requests and complex operations

Campaign Wizard

The Campaign Wizard guides users through the complete campaign creation process, making it accessible for users unfamiliar with survey research workflows.

Purpose

  • Lower the barrier to entry for new users
  • Ensure all required steps are completed in the correct order
  • Provide contextual help and recommendations at each step
  • Reduce errors by validating inputs before proceeding

Wizard Steps

flowchart TD
    subgraph step1["Step 1: Campaign Design"]
        s1a[Name and description] --> s1b[Select or create questionnaire]
        s1b --> s1c[Choose campaign mode]
    end

    subgraph step2["Step 2: Target Audience"]
        s2a[Define demographic requirements] --> s2b[Set sample size target]
        s2b --> s2c[Configure stratification factors]
    end

    subgraph step3["Step 3: Sampling Strategy"]
        s3a[Select or create sampling strategy] --> s3b[Review target distributions]
        s3b --> s3c[Set oversample factor]
    end

    subgraph step4["Step 4: Respondent Pool"]
        s4a[Generate pool from strategy] --> s4b[Preview pool quality metrics]
        s4b --> s4c[Optionally refine pool manually]
    end

    subgraph step5["Step 5: Review & Launch"]
        s5a[Summary of all settings] --> s5b[Estimated completion metrics]
        s5b --> s5c[Launch campaign or save as draft]
    end

    step1 --> step2
    step2 --> step3
    step3 --> step4
    step4 --> step5

AI Assistance in Wizard

At each step, the AI assistant can:

  • Campaign Design: suggest a questionnaire based on research goals; recommend a campaign mode
  • Target Audience: recommend a sample size based on statistical requirements; suggest demographic factors
  • Sampling Strategy: analyze the research topic, recommend stratification factors beyond gender/age, propose target distributions, and explain trade-offs (see AI-Assisted Sampling Strategy Design)
  • Respondent Pool: explain quality metrics; suggest improvements if the distribution doesn't match targets
  • Review & Launch: summarize the campaign setup; flag potential issues

Example Wizard Interaction

User: "I want to conduct a customer satisfaction survey"

AI: I'll help you set up a customer satisfaction campaign. Let's start with the basics.

    Step 1: Campaign Design

    What would you like to name this campaign?

    [Input: Campaign Name]

    I found these questionnaires that match "customer satisfaction":
    - customer_satisfaction_nps.qml (NPS + open feedback)
    - customer_satisfaction_detailed.qml (comprehensive 20-question survey)

    Which would you like to use, or should I help you create a new one?

AI-Assisted Sampling Strategy Design

Designing a representative sample requires selecting the right stratification factors. While gender and age are standard orthogonal factors for most research, many studies require additional factors specific to the research topic. The AI assistant helps identify and configure these research-specific factors.

Standard vs Research-Specific Factors

  • Standard (always included): Gender, Age. Use in every campaign for fundamental demographic representation.
  • Geographic: Region, Urban/Rural, County. Use for location-dependent research and regional comparisons.
  • Socioeconomic: Income, Education, Employment. Use for consumer research and economic studies.
  • Attitudinal: Political orientation, Values. Use for opinion polls and social research.
  • Behavioral: Product usage, Media consumption. Use for market research and behavioral studies.

Research Topic Analysis

The AI analyzes the research topic and questionnaire content to recommend appropriate stratification factors.

How it works:

  1. Topic Understanding: AI analyzes the campaign description and questionnaire
  2. Factor Recommendation: Suggests factors that could introduce bias if not controlled
  3. Distribution Proposals: Recommends target distributions based on population data
  4. Trade-off Explanation: Explains the cost/benefit of adding each factor
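A minimal sketch of this flow, for illustration only: the keyword lookup stands in for the AI's topic analysis, and the factor names, proportions, and trade-off notes are placeholders rather than platform data.

from dataclasses import dataclass

@dataclass
class FactorRecommendation:
    name: str                      # e.g. "Education"
    rationale: str                 # why omitting the factor could bias results (step 2)
    target_distribution: dict      # proposed population proportions (step 3)
    trade_off: str                 # cost/benefit note for adding the factor (step 4)

# Toy keyword heuristics standing in for the AI's topic understanding (steps 1-2).
TOPIC_FACTORS = {
    "election": [
        FactorRecommendation("Education", "correlates strongly with voting patterns",
                             {"primary": 0.20, "secondary": 0.48, "higher": 0.32},
                             "needs education recorded for most respondents"),
        FactorRecommendation("Settlement Type", "urban/rural divide affects political orientation",
                             {"urban": 0.40, "suburban": 0.32, "rural": 0.28},
                             "rural respondents are often scarce in panels"),
    ],
    "coffee": [
        FactorRecommendation("Income Bracket", "premium products require disposable income",
                             {"low": 0.15, "mid": 0.50, "high": 0.35},
                             "adds cells, so the pool must be larger"),
    ],
}

def recommend_factors(campaign_description: str) -> list:
    """Match the research topic and return factors with proposed target distributions."""
    text = campaign_description.lower()
    recommendations = []
    for keyword, factors in TOPIC_FACTORS.items():
        if keyword in text:
            recommendations.extend(factors)
    return recommendations

print([f.name for f in recommend_factors("Measure party preferences for the upcoming election")])
# -> ['Education', 'Settlement Type']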

Example: Party Preference Research

User: "I want to create a campaign to measure party preferences for the
       upcoming election"

AI: For election polling, I recommend stratifying beyond standard demographics.

    Standard Factors (recommended):
    ┌─────────────────┬────────────────────────────────────────┐
    │ Factor          │ Target Distribution                    │
    ├─────────────────┼────────────────────────────────────────┤
    │ Gender          │ Male 48%, Female 50%, Other 2%         │
    │ Age             │ 18-29: 18%, 30-44: 22%, 45-59: 25%,    │
    │                 │ 60+: 35%                               │
    └─────────────────┴────────────────────────────────────────┘

    Research-Specific Factors (recommended for election polling):
    ┌─────────────────┬────────────────────────────────────────┐
    │ Factor          │ Rationale                              │
    ├─────────────────┼────────────────────────────────────────┤
    │ Education       │ Strong correlation with voting         │
    │                 │ patterns and party preference          │
    ├─────────────────┼────────────────────────────────────────┤
    │ Settlement Type │ Urban/suburban/rural divide affects    │
    │                 │ political orientation significantly    │
    ├─────────────────┼────────────────────────────────────────┤
    │ Prior Vote      │ Previous election choice is the        │
    │ (2022)          │ strongest predictor of current intent  │
    └─────────────────┴────────────────────────────────────────┘

    ⚠️ Note on Political Orientation:
    Including "left/right self-placement" as a stratification factor
    is controversial - it can introduce bias if the target distribution
    is incorrectly specified. I recommend using it for weighting
    post-collection rather than pre-stratification.

    Would you like me to:
    1. Add Education and Settlement Type as factors?
    2. Explain how to handle Prior Vote recall?
    3. Show the statistical impact of each factor?

Example: Consumer Product Research

User: "Create a campaign for testing a new premium coffee brand"

AI: For premium product research, your sample needs to represent the
    potential customer base, not just the general population.

    Standard Factors:
    ┌─────────────────┬────────────────────────────────────────┐
    │ Gender          │ Male 48%, Female 50%, Other 2%         │
    │ Age             │ Focus on 25-54 (primary coffee buyers) │
    └─────────────────┴────────────────────────────────────────┘

    Recommended Research-Specific Factors:
    ┌─────────────────┬────────────────────────────────────────┐
    │ Factor          │ Rationale                              │
    ├─────────────────┼────────────────────────────────────────┤
    │ Income Bracket  │ Premium products require disposable    │
    │                 │ income - undersample low-income        │
    ├─────────────────┼────────────────────────────────────────┤
    │ Coffee          │ Non-coffee drinkers are irrelevant;    │
    │ Consumption     │ heavy users provide better insights    │
    ├─────────────────┼────────────────────────────────────────┤
    │ Purchase        │ Supermarket vs. specialty store vs.    │
    │ Channel         │ online affects brand perception        │
    └─────────────────┴────────────────────────────────────────┘

    Should I configure these factors with recommended distributions?

Factor Configuration Details

For each recommended factor, the AI provides:

  • Attribute Path: which respondent field to use (e.g., custom_attributes.education)
  • Factor Type: discrete (categories), bucketed (ranges), or geographic
  • Target Distribution: population proportions to match
  • Weight: relative importance vs. other factors
  • Data Source: census, panel data, or a custom specification
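Put together, a configured factor might look like the sketch below. The fields mirror the list above, but the class name and exact attribute names are assumptions, not the platform's actual strategy schema.

from dataclasses import dataclass

@dataclass
class StratificationFactor:
    """One factor in a sampling strategy, as configured by the AI or the user."""
    name: str                              # display name, e.g. "Education"
    attribute_path: str                    # respondent field, e.g. "custom_attributes.education"
    factor_type: str                       # "discrete", "bucketed", or "geographic"
    target_distribution: dict              # population proportions to match
    weight: float = 1.0                    # relative importance vs. other factors
    data_source: str = "census"            # "census", "panel", or "custom"

education = StratificationFactor(
    name="Education",
    attribute_path="custom_attributes.education",
    factor_type="discrete",
    target_distribution={"primary": 0.20, "secondary": 0.48, "higher": 0.32},
)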

Adding Custom Factors

User: "I also want to stratify by smartphone brand preference"

AI: I can add "Smartphone Brand" as a custom factor.

    Configuration needed:
    ┌────────────────────┬──────────────────────────────────────┐
    │ Setting            │ Value                                │
    ├────────────────────┼──────────────────────────────────────┤
    │ Factor Name        │ Smartphone Brand                     │
    │ Attribute Path     │ custom_attributes.smartphone_brand   │
    │ Factor Type        │ discrete                             │
    │ Categories         │ [Apple, Samsung, Other Android,      │
    │                    │  Other, None]                        │
    └────────────────────┴──────────────────────────────────────┘

    What target distribution would you like?

    Option A: Match market share (Apple 28%, Samsung 25%, Other Android 35%, ...)
    Option B: Equal representation (20% each)
    Option C: Custom distribution

    Note: Your respondent database must have this attribute populated.
    Currently 67% of respondents have smartphone_brand recorded.

Strategy Quality Preview

Before finalizing, the AI shows how well the strategy can be achieved:

AI: Strategy Preview for "Election Poll Q1 2026"

    Factors configured: Gender, Age, Education, Settlement Type

    Achievability Analysis:
    ┌─────────────────┬───────────┬───────────┬─────────────────┐
    │ Factor          │ Target    │ Available │ Match Quality   │
    ├─────────────────┼───────────┼───────────┼─────────────────┤
    │ Gender          │ 48/50/2   │ 47/51/2   │ ✅ Excellent    │
    │ Age 18-29       │ 18%       │ 12%       │ ⚠️ Undersample  │
    │ Age 60+         │ 35%       │ 41%       │ ⚠️ Oversample   │
    │ Higher Ed       │ 32%       │ 38%       │ ✅ Good         │
    │ Rural           │ 28%       │ 19%       │ ❌ Gap          │
    └─────────────────┴───────────┴───────────┴─────────────────┘

    Recommendations:
    1. Young respondents (18-29) are underrepresented in your database.
       Consider: Panel recruitment or oversample factor of 1.5x

    2. Rural respondents are significantly underrepresented.
       Options:
       - Accept bias and apply post-stratification weighting
       - Import additional rural respondents from external panel
       - Reduce rural target to match available pool

    How would you like to proceed?
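At its core, the achievability analysis compares each target proportion with the share available in the respondent database. The sketch below runs that comparison on the numbers from the preview; the thresholds and the suggested-oversample heuristic (target divided by available, e.g. 0.18 / 0.12 = 1.5x for ages 18-29) are illustrative assumptions, not the platform's actual scoring rules.

def assess_cell(name: str, target: float, available: float) -> str:
    """Compare a target share with the share available in the respondent database."""
    gap = available - target
    if abs(gap) <= 0.02:
        return f"{name}: good match (target {target:.0%}, available {available:.0%})"
    if gap < 0:
        # Shortfall: the suggested oversample factor is how much harder this cell must be
        # sampled to hit its target, e.g. 0.18 / 0.12 = 1.5x for respondents aged 18-29.
        return f"{name}: undersupplied, oversample ~{target / available:.1f}x, recruit, or weight"
    return f"{name}: oversupplied by {gap:.0%}, easy to fill"

# Target vs. available shares taken from the preview above.
for name, target, available in [
    ("Age 18-29", 0.18, 0.12),
    ("Age 60+",   0.35, 0.41),
    ("Higher Ed", 0.32, 0.38),
    ("Rural",     0.28, 0.19),
]:
    print(assess_cell(name, target, available))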

Chat Interface

The Chat Interface provides a conversational way to perform ad-hoc operations and complex multi-step tasks without navigating through traditional UI forms.

Purpose

  • Enable power users to execute operations quickly through natural language
  • Handle complex requests that span multiple entities
  • Provide a "thinking partner" for campaign planning and analysis
  • Support batch operations and data manipulation

Capabilities

Entity Management

  • Create entities: "Create a new project called 'Q1 Research'"
  • Modify entities: "Update the campaign description to include the new deadline"
  • Query entities: "Show me all campaigns that are currently active"
  • Delete entities: "Remove the test respondents from the pool"

Sampling Strategy Operations

  • Create strategy: "Create a sampling strategy for election polling with education as an extra factor"
  • Modify factors: "Add settlement type as a factor to the existing strategy"
  • Analyze coverage: "Can my respondent database support a 1000-sample strategy with these factors?"
  • Compare strategies: "Show me how the two strategies differ in their target distributions"

Pool Operations

Pools are designed for single-campaign use—respondents who have participated are generally not surveyed again. Use strategies (reusable templates) to generate fresh pools for new campaigns.

  • Generate pool: "Generate a new pool from the election strategy for my Q2 campaign"
  • Merge pools: "Merge the 'Urban' and 'Suburban' pools into a new 'Metro' pool"
  • Filter pools: "Create a new pool with only respondents aged 25-34 from the main pool"
  • Analyze pools: "How does my current pool compare to the target distribution?"
  • Refresh pools: "Regenerate the pool using the updated strategy"
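Behind the scenes, the agent translates requests like these into MCP tool calls. The tool name and parameters below are hypothetical, included only to show the shape of that mapping, not Askalot's actual MCP schema.

# Hypothetical mapping from a chat request to an MCP tool invocation.
request = "Create a new pool with only respondents aged 25-34 from the main pool"

tool_call = {
    "tool": "filter_pool",                                     # hypothetical tool name
    "arguments": {
        "source_pool": "main",
        "filters": [{"attribute": "age", "min": 25, "max": 34}],
        "new_pool_name": "Main pool, ages 25-34",
    },
}
# The agent would present this plan (and ask for confirmation on destructive
# operations) before executing the call.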

Campaign Operations

  • Status check: "What's the completion rate for the NPS campaign?"
  • Bulk actions: "Send reminders to all respondents who haven't started"
  • Reassignment: "Move the incomplete surveys from interviewer A to interviewer B"
  • Analysis: "Compare response rates between the two campaign versions"

Example Chat Interactions

Pool Merging

User: "Merge the 'Panel A' and 'Panel B' pools to get more respondents
       for the satisfaction campaign"

AI: I'll merge those pools for you.

    Found:
    - Panel A: 450 respondents
    - Panel B: 380 respondents

    Creating merged pool 'Panel A + Panel B' with 830 respondents...

    ✓ Merged pool created successfully

    Would you like me to:
    1. Assign this pool to the satisfaction campaign?
    2. Check for duplicate respondents between the pools?
    3. Review the demographic distribution of the merged pool?

Complex Query

User: "Show me which interviewers have the lowest completion rates
       and suggest how to redistribute their workload"

AI: Analyzing interviewer performance for the active campaign...

    Completion Rates:
    ┌─────────────────┬────────────┬───────────┬──────────┐
    │ Interviewer     │ Assigned   │ Completed │ Rate     │
    ├─────────────────┼────────────┼───────────┼──────────┤
    │ Maria Santos    │ 45         │ 42        │ 93%      │
    │ John Chen       │ 50         │ 38        │ 76%      │
    │ Alex Kim        │ 48         │ 22        │ 46%  ⚠️  │
    │ Sarah Johnson   │ 52         │ 18        │ 35%  ⚠️  │
    └─────────────────┴────────────┴───────────┴──────────┘

    Recommendation:
    Alex Kim and Sarah Johnson have significantly lower completion rates.

    I suggest redistributing their remaining 60 surveys to Maria Santos
    and John Chen, who have capacity and higher completion rates.

    Should I proceed with this redistribution?

Conversation Context

The Chat Interface maintains context across messages, allowing for:

  • Follow-up questions: "Now filter that pool to only include females"
  • Clarification requests: "What do you mean by 'oversample factor'?"
  • Multi-turn workflows: Building complex operations step by step
  • Undo/rollback: "Actually, undo that last merge"
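One way to picture this is a small per-conversation state object that remembers the last entity touched and the inverse of each executed operation. The sketch below is an assumption about how such context could be tracked, not the actual implementation; the class, field, and tool names are illustrative.

from dataclasses import dataclass, field

@dataclass
class ConversationContext:
    """Per-conversation state the chat agent keeps for follow-ups and undo."""
    last_entity: str = ""                            # e.g. the pool created by the previous turn
    history: list = field(default_factory=list)      # executed operations, newest last

    def record(self, operation: str, undo: dict) -> None:
        self.history.append({"operation": operation, "undo": undo})

    def undo_last(self):
        """Return the inverse operation for requests like 'Actually, undo that last merge'."""
        return self.history.pop()["undo"] if self.history else None

ctx = ConversationContext(last_entity="pool: Panel A + Panel B")
ctx.record("merge_pools", undo={"tool": "delete_pool", "arguments": {"pool": "Panel A + Panel B"}})
print(ctx.undo_last())   # the agent would replay this inverse call to roll the merge back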

Agent Architecture

The Wizard and Chat Interface are powered by AI agents that interact with the Askalot platform through MCP (Model Context Protocol) tools.

Agent Types

  • Manager: orchestrates multi-step workflows (Claude Sonnet). Used for campaign creation and complex operations.
  • Designer: assists with questionnaire design (Claude Sonnet). Used for QML editing and question flow.
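As a rough illustration, this setup can be thought of as an agent registry plus a router that sends questionnaire work to the Designer and everything else to the Manager. The registry keys, model identifiers, and routing heuristic below are assumptions, not the platform's actual configuration.

# Illustrative agent registry; keys and model identifiers are assumptions.
AGENTS = {
    "manager": {
        "model": "claude-sonnet",        # orchestrates multi-step workflows
        "use_cases": ["campaign creation", "complex operations"],
    },
    "designer": {
        "model": "claude-sonnet",        # assists with questionnaire design
        "use_cases": ["QML editing", "question flow"],
    },
}

def route(request: str) -> str:
    """Toy router: questionnaire/QML work goes to the designer, everything else to the manager."""
    keywords = ("questionnaire", "qml", "question")
    return "designer" if any(k in request.lower() for k in keywords) else "manager"

print(route("Add a follow-up question to the NPS questionnaire"))   # -> designer
print(route("Generate a pool from the election strategy"))          # -> manager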

MCP Tools Available to Agents

Agents have access to the full Askalot API through MCP tools covering projects, campaigns, questionnaires, respondents, pools, sampling strategies, and more.

See the MCP Interface Reference for the complete list of available tools and their parameters.

Provider Flexibility

Organizations can configure their preferred AI provider:

  • Anthropic: Claude Sonnet, Claude Haiku. The default provider, with the strongest reasoning.
  • AWS Bedrock: Claude, Llama. Suited to enterprise deployments.
  • OpenAI: GPT-4, GPT-4o. An alternative provider.
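A provider preference might be expressed as a small configuration block like the sketch below; the keys and model identifiers are illustrative assumptions, not the actual settings schema.

# Illustrative organization-level provider configuration (keys are assumptions).
AI_PROVIDER_CONFIG = {
    "provider": "anthropic",            # "anthropic" (default), "bedrock", or "openai"
    "models": {
        "manager": "claude-sonnet",     # orchestration and complex operations
        "designer": "claude-sonnet",    # questionnaire design
    },
}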

User Permissions

AI-assisted features respect the existing permission model:

  • Owner: full wizard access, full chat access
  • Manager: full wizard access, full chat access
  • Analyst: view-only wizard access, query-only chat access
  • Interviewer: no wizard access, limited chat access (own surveys only)
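The list above maps directly to a role-based gate. The sketch below encodes those access levels; the capability names and the example check are illustrative, not the platform's actual permission API.

# Access levels from the list above; the check function itself is illustrative.
ACCESS = {
    "owner":       {"wizard": "full",      "chat": "full"},
    "manager":     {"wizard": "full",      "chat": "full"},
    "analyst":     {"wizard": "view_only", "chat": "query_only"},
    "interviewer": {"wizard": "none",      "chat": "own_surveys_only"},
}

def can_launch_campaign(role: str) -> bool:
    """Only roles with full wizard access may launch a campaign from the wizard."""
    return ACCESS.get(role, {}).get("wizard") == "full"

print(can_launch_campaign("analyst"))   # -> False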

Best Practices

When to Use the Wizard

  • First-time campaign creation
  • Onboarding new team members
  • Standard campaign types with typical configurations
  • When step-by-step validation is important

When to Use the Chat Interface

  • Power users who know what they want
  • Complex operations spanning multiple entities
  • Ad-hoc queries and data exploration
  • Batch operations on existing data