# FunnelStory Documentation > Customer Intelligence Platform — product documentation for B2B revenue, success, and ops teams. This file contains all documentation content in a single document following the llmstxt.org standard. ## Getting Started: Configuring the Account Model Welcome to FunnelStory. This guide walks you through configuring the **Account data model** -- the foundational model that everything else in FunnelStory depends on. Whether you're pulling data from a CRM like Salesforce or HubSpot, a data warehouse like Snowflake or BigQuery, or a combination of both, this guide covers what you need to get it right. ## What is the Account Model? In B2B SaaS, the **account** (the customer organization) is the unit that matters most. Individual users come and go, but the account is what you sell to, renew, expand, and measure health against. The Account model in FunnelStory is the structured representation of your customer accounts. It tells FunnelStory: - **Who your customers are** -- company names, domains, unique identifiers - **What their contracts look like** -- ARR, start dates, renewal dates - **How they're organized** -- parent-child hierarchies, product lines - **Who owns them internally** -- CSM assignments, account executives - **What their status is** -- active, churned, at risk Every other model in FunnelStory (Users, Product Activity, Support Tickets, etc.) links back to accounts. That's why the Account model is **mandatory** and should be configured first. ## What You'll Configure By the end of this guide, you will have: 1. Connected FunnelStory to your data source (CRM, warehouse, or both) 2. Written a query to pull account data 3. Mapped your data columns to FunnelStory's account properties 4. Configured advanced settings like ARR, churn status, CSM assignment, and parent-child hierarchies 5. 
Verified that your accounts are populating correctly ## The Configuration Journey ``` Connect Data Source --> Write Query --> Map Fields --> Save & Verify ``` - **Connect Data Source**: Select which integration or database FunnelStory should pull account data from. - **Write Query**: Write a query in the appropriate format for your data source (SQL, SOQL, HubSpot API, etc.) to retrieve account records. - **Map Fields**: Tell FunnelStory which columns in your query output correspond to which account properties (ID, name, ARR, etc.). - **Save & Verify**: Activate the model, refresh the data, and confirm accounts appear correctly in the Accounts View. ## How This Guide Is Organized | Document | What It Covers | |----------|---------------| | [Prerequisites](./prerequisites) | Setting up data connections before you configure the model | | [Configuration Walkthrough](./configuration-walkthrough) | Step-by-step UI walkthrough of the configuration flow | | [Writing Queries](./writing-queries) | Query format reference for every supported data source type | | [Field Reference](./field-reference) | Complete list of account properties, types, and timestamp formatting rules | | [Advanced Configuration](./advanced-configuration) | ARR, churn, CSM assignment, parent-child, test exclusion, multi-source joins | | [Real-World Examples](./real-world-examples) | End-to-end worked examples for Salesforce, HubSpot, warehouse, and hybrid setups | | [Verification & Troubleshooting](./verification) | How to verify your setup and fix common issues | ## Before You Begin You will need: - **Admin or Data Admin access** to your FunnelStory workspace - **Credentials or OAuth access** to at least one data source containing your account data - **Knowledge of your data schema** -- where accounts, contracts/deals, and related data live in your systems If you haven't set up a data connection yet, start with [Prerequisites](./prerequisites). 
If you already have a connection configured, skip ahead to [Configuration Walkthrough](./configuration-walkthrough). --- ## Prerequisites: Setting Up Data Connections Before configuring the Account model, you need at least one data connection in FunnelStory. A data connection is a link between FunnelStory and an external system that holds your account data. This page helps you decide which connection type to use and how to set it up. ## Choosing Your Data Source Most organizations store account data in one of three places: ### CRM (Customer Relationship Management) Use your CRM if it is the **primary system of record** for customer accounts. CRMs typically contain account names, owner assignments, deal/opportunity data, and lifecycle stages. | CRM | Connection Type | Query Format | |-----|----------------|-------------| | Salesforce | OAuth | SOQL block queries | | HubSpot | OAuth | HubSpot Search API (HS block queries) | | Attio | API Key | Attio API (AT block queries) | **Best for**: Organizations where the sales or CS team maintains account data directly in the CRM. Salesforce and HubSpot are the most common. ### Data Warehouse Use your warehouse if you have a **curated analytics layer** (dim tables, views) that consolidates data from multiple systems, or if your account data lives in a product database rather than a CRM. 
| Warehouse / Database | Connection Type | Query Format | |---------------------|----------------|-------------| | PostgreSQL | Host + credentials | Standard SQL | | MySQL | Host + credentials | Standard SQL | | MS SQL Server | Host + credentials | Standard SQL | | Snowflake | Account + credentials | Standard SQL (Snowflake dialect) | | BigQuery | Service account (JSON key) | Standard SQL (BigQuery dialect) | | Databricks | Host + token | Standard SQL (Spark SQL dialect) | | Redshift | Host + credentials | Standard SQL (Redshift dialect) | | Amazon Athena | Access key + region | Standard SQL (Presto/Trino dialect) | **Best for**: Organizations with a data team that maintains clean, modeled data in a warehouse. Often produces the richest account models because you can join data from many systems in a single query. ### Both (Hybrid) Use a **combination** when no single source has everything. For example: - **Salesforce for account metadata** (name, owner, ARR, renewal date) + **warehouse for product usage data** (last login, feature adoption) - **HubSpot for deal data** + **PostgreSQL for subscription/billing data** FunnelStory supports this through **Data Joins**, where your primary account query runs against one connection and is enriched with data from a second connection. See [Advanced Configuration](./advanced-configuration#join-with-data-from-other-connections) for details. ### Other Supported Sources | Source | Connection Type | Query Format | |--------|----------------|-------------| | MongoDB | Connection string | MQL block queries | | Gainsight | API | GS block queries | | Elasticsearch | Host + credentials | ES block queries | ## Setting Up a Connection ### Step 1: Navigate to Connections Go to **Configuration** > **Integrations** in the left sidebar. ### Step 2: Add a New Connection Click **Add Connection** and select your data source type. 
### Step 3: Provide Credentials Each connection type requires different credentials: **For CRMs (Salesforce, HubSpot)**: - Click **Connect** to initiate an OAuth flow - Authorize FunnelStory in the popup window - The connection is established once you return to FunnelStory **For Data Warehouses (Snowflake, BigQuery, Databricks)**: - **Snowflake**: Account identifier, warehouse name, database, schema, username, password (or key pair). The user needs `USAGE` grants on the warehouse, database, and schema, plus `SELECT` on relevant tables/views. - **BigQuery**: Upload a service account JSON key file. The service account needs `BigQuery Data Viewer` and `BigQuery Job User` roles on the relevant dataset. - **Databricks**: Server hostname, HTTP path, and personal access token. **For SQL Databases (PostgreSQL, MySQL, MS SQL)**: - Host, port, database name, username, password - Ensure FunnelStory's IP addresses are allowed through your firewall - For private networks, configure an [SSH tunnel](./prerequisites#ssh-tunnels) first ### Step 4: Test the Connection After entering credentials, click **Test Connection**. A successful test confirms FunnelStory can reach your data source and authenticate. ## SSH Tunnels If your database is in a private network (VPC, private subnet), you can set up an SSH tunnel so FunnelStory connects through a bastion/jump host: 1. Go to **Configuration** > **Integrations** > **SSH Tunnels** 2. Add a tunnel with the bastion host, port, and SSH key 3. When creating the database connection, select the tunnel ## Refresh Intervals Once a connection is used by a model, FunnelStory periodically refreshes the data. You can choose: - **Hourly** (`1h`): For data that changes frequently (e.g., CRM deal stages, support tickets) - **Daily** (`24h`): For data that changes less often (e.g., account metadata, warehouse dim tables) You set the refresh interval when configuring the Account model, not when creating the connection itself. 
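As a concrete illustration of the warehouse permissions described in Step 3, the Snowflake grants could be set up with statements along these lines. This is a sketch only -- the role, warehouse, database, and schema names are hypothetical placeholders for your own:

```sql
-- Hypothetical read-only role for FunnelStory's connection user
GRANT USAGE ON WAREHOUSE analytics_wh TO ROLE funnelstory_ro;
GRANT USAGE ON DATABASE analytics TO ROLE funnelstory_ro;
GRANT USAGE ON SCHEMA analytics.funnelstory TO ROLE funnelstory_ro;

-- SELECT on the tables/views the Account model query will read
GRANT SELECT ON ALL TABLES IN SCHEMA analytics.funnelstory TO ROLE funnelstory_ro;
GRANT SELECT ON ALL VIEWS IN SCHEMA analytics.funnelstory TO ROLE funnelstory_ro;

-- Attach the role to the user whose credentials you enter in FunnelStory
GRANT ROLE funnelstory_ro TO USER funnelstory_user;
```

A scoped, read-only role like this keeps the connection limited to exactly the data the model needs.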
## Decision Guide Not sure which path to take? Here's a quick decision tree: 1. **Do you have a data warehouse with a curated account view/table?** - Yes -> Use the warehouse. It's usually the cleanest source. - No -> Continue to step 2. 2. **Is your CRM (Salesforce or HubSpot) the system of record for accounts?** - Yes -> Use the CRM directly. - No -> Continue to step 3. 3. **Is account data spread across multiple systems?** - Yes -> Pick the richest source as primary, then use Data Joins for the rest. - No -> Use whatever database or system holds your account records. ## Next Step Once you have at least one data connection configured, proceed to [Configuration Walkthrough](./configuration-walkthrough) to set up the Account model. --- ## Configuration Walkthrough: Account Model This page walks through the FunnelStory UI step by step to configure the Account model. If you haven't set up a data connection yet, start with [Prerequisites](./prerequisites). ## Opening the Account Model Configuration 1. In the left sidebar, click **Configuration**. 2. Select **Data Models** from the dropdown. 3. Click the **Account** model card. 4. If this is your first time, click on the "+ New Data Model" button and select Account Model. This opens the model configuration flow, which has three steps: **Connect Data**, **Write Query**, and **Map Information**. ## Step 1: Connect Data In this step you select which data connection to use for the Account model. ### Fields to fill in: - **Model Name**: Give your model a descriptive name (e.g., "Accounts", "Customer Accounts", "Salesforce Accounts"). This is for your reference only. - **Description** (optional): Brief note about what this model pulls and from where. - **Connection Type**: Select the category of your data source. Options include databases (PostgreSQL, Snowflake, BigQuery, etc.), CRMs (Salesforce, HubSpot), and other integrations. - **Connection**: Select the specific connection you created in the prerequisites step. 
- **Refresh Interval**: Choose how often FunnelStory should re-pull data. Hourly for frequently changing data, daily for stable data. Click **Continue** to proceed to the query step. ## Step 2: Write Query This is where you tell FunnelStory exactly what data to pull from your data source. ### The query editor The editor accepts different query formats depending on your connection type: - **SQL databases and warehouses**: Write standard SQL (PostgreSQL, MySQL, Snowflake, BigQuery, etc.) - **Salesforce**: Write SOQL block queries (special syntax with `-- SOQL_START` / `-- SOQL_END` markers) - **HubSpot**: Write HubSpot block queries (special syntax with `-- HS_START` / `-- HS_END` markers) See [Writing Queries](./writing-queries) for the full format reference for each data source type. ### Required fields The editor displays a badge showing which fields your query **should** return. For the Account model, the expected fields are: - `account_id` (required -- the unique key) - `name` - `domain` - `created_at` Your query column names don't have to match these exactly -- you'll map them in the next step. But your query must return data that can be mapped to at least `account_id`. ### Validating the query Click **Validate** (or press **Ctrl+Enter** / **Cmd+Enter**) to test your query against the live data source. Validation will: - Execute the query (with a row limit for preview) - Report any syntax errors or connection issues - Show you a preview of the returned columns and data A green **"Query validated successfully"** message means you're ready to proceed. If you see an error, fix the query and validate again. Click **Continue** to proceed to field mapping. ## Step 3: Map Information This step connects the columns from your query output to FunnelStory's account properties. 
### How mapping works Each row in the mapping table has: - **Property** (left side): The FunnelStory account property (e.g., `account_id`, `name`, `domain`, `created_at`) - **Column** (right side): A dropdown of columns from your query output. Select which column maps to each property. The default properties (`account_id`, `name`, `domain`, `created_at`) are pre-populated. You must map at least `account_id` for the model to be valid. ### Adding custom properties Click **Add another field here** at the bottom to map additional columns as custom account properties. Common additions include: - `amount` -- for ARR / contract value - `expires_at` -- for contract renewal date - `parent_account_id` -- for parent-child hierarchy - `is_churned` -- for churn status - `csm_email` / `csm_name` -- for auto-assigning accounts to CSMs See [Field Reference](./field-reference) for the complete list of recognized properties and their expected data types. ### Viewing query results Click **Run Test** to see the actual data your query returned. This helps verify you're mapping the right columns. ## Saving the Model After mapping is complete, click **Save Model**. A dialog appears asking for: - **Model Name**: Confirm or update the name - **Description**: Optional - **Activate Model?**: Choose whether to save as **Active** (data will start refreshing immediately) or **Draft** (saved but not running) If you're still iterating on the query or mappings, save as **Draft** and activate later. If you're confident in the configuration, save as **Active**. ## Refreshing and Verifying After saving an active model: 1. Click **Refresh Model** on the model card to trigger an immediate data pull. 2. Navigate to the **Accounts View** (in the left sidebar) to confirm accounts are populating. 3. Spot-check a few accounts to verify that names, domains, ARR values, and other properties look correct. See [Verification & Troubleshooting](./verification) for detailed verification steps and common issues. 
## Next Steps - Need help writing the query? See [Writing Queries](./writing-queries). - Want to understand all available properties? See [Field Reference](./field-reference). - Ready to configure advanced features like ARR, churn, or CSM assignment? See [Advanced Configuration](./advanced-configuration). --- ## Writing Queries for the Account Model The query you write determines what data FunnelStory pulls into the Account model. The format depends on your data source type. This page covers the query syntax for every supported connection type. ## How Queries Work in FunnelStory Regardless of your data source, the goal is the same: return a **flat table of rows**, where each row is one account. The columns should include at least `account_id` (or a column you'll map to it), plus any other account fields you want to track. FunnelStory executes your query against the connected data source, validates the results, and lets you map columns to account properties. The query runs on every refresh cycle (hourly or daily, as configured). --- ## SQL Databases and Data Warehouses **Applies to**: PostgreSQL, MySQL, MS SQL Server, Snowflake, BigQuery, Databricks, Redshift, Amazon Athena These connections use **standard SQL**. Write a `SELECT` statement that returns one row per account. ### Basic example ```sql SELECT id AS account_id, company_name AS name, website AS domain, created_at FROM accounts WHERE status != 'deleted' ``` ### With aggregated deal/contract data If account metadata and revenue data live in different tables, join and aggregate them: ```sql SELECT a.id AS account_id, a.name, a.domain, a.created_at, SUM(d.amount) AS amount, MAX(d.close_date) AS expires_at, a.csm_email FROM accounts a LEFT JOIN deals d ON a.id = d.account_id AND d.stage = 'closed_won' GROUP BY a.id, a.name, a.domain, a.created_at, a.csm_email ``` ### Timestamp formatting FunnelStory has specific preferences for how timestamp values should be formatted. 
Getting this right ensures properties like `created_at`, `expires_at`, and `churned_at` work correctly throughout the platform. **Preference order (most preferred first):** 1. **Native database timestamp/datetime types** -- the preferred format. If your column is already `TIMESTAMP`, `DATETIME`, or `TIMESTAMPTZ`, the database driver preserves the type and FunnelStory reads it directly. 2. **Unix timestamp in seconds** (integer) -- a single integer representing seconds since January 1, 1970 UTC. Unambiguous and reliable. 3. **RFC 3339 string format** (e.g., `2024-01-15T12:00:00Z`) -- acceptable when timestamps must be strings 4. **Other standard ISO 8601 formats** -- generally recognized but less reliable 5. **Arbitrary string formats** (e.g., `"Jan 15, 2024"`, `"15/01/2024"`) -- **will NOT work**. FunnelStory cannot parse non-standard date strings. **Converting to Unix seconds by SQL dialect:** | Dialect | Conversion | |---------|-----------| | PostgreSQL | `EXTRACT(EPOCH FROM created_at)::BIGINT` | | MySQL | `UNIX_TIMESTAMP(created_at)` | | BigQuery | `UNIX_SECONDS(created_at)` | | Snowflake | `DATE_PART(EPOCH_SECOND, created_at)` | | Databricks | `UNIX_TIMESTAMP(created_at)` | | Redshift | `EXTRACT(EPOCH FROM created_at)::BIGINT` | | MS SQL | `DATEDIFF(SECOND, '1970-01-01', created_at)` | **When to convert**: If your source column is already a native `TIMESTAMP`, `DATETIME`, or `TIMESTAMPTZ` type, you generally don't need to convert -- the database driver will handle it. Convert when: - The column is stored as a string (e.g., `VARCHAR` with `"2024-01-15"`) - The column is a Unix timestamp in **milliseconds** (divide by 1000) - The column uses a non-standard date format **Example -- converting a millisecond timestamp in PostgreSQL:** ```sql SELECT id AS account_id, name, domain, (created_at_ms / 1000) AS created_at FROM accounts ``` ### Warehouse-specific tips **Snowflake**: Use fully qualified names if needed (`database.schema.table`). 
Ensure your Snowflake user has `SELECT` grants on the relevant tables/views. ```sql SELECT * FROM analytics.funnelstory.account_view ``` **BigQuery**: Use backtick-quoted dataset and table names. ```sql SELECT * FROM `my-project.analytics.accounts` ``` **Databricks**: Reference tables from your configured catalog and schema. ```sql SELECT * FROM catalog.schema.accounts ``` --- ## Salesforce (SOQL Block Queries) Salesforce connections use a special **block query format**. You write SOQL (Salesforce Object Query Language) inside marked blocks, and FunnelStory fetches those records from Salesforce's API. You then write SQLite SQL outside the blocks to transform and combine the fetched data. ### How it works 1. **SOQL blocks** fetch data from Salesforce objects and store results in named temporary tables 2. **SQLite SQL** (outside the blocks) queries those temporary tables to produce the final output This two-layer approach lets you pull from multiple Salesforce objects and join them locally. ### Syntax ``` -- SOQL_START: table_name (optional_column_hints) SELECT Field1, Field2, Field3 FROM SalesforceObject WHERE conditions -- SOQL_END SELECT * FROM table_name ``` - `-- SOQL_START: table_name` begins a block. `table_name` is the alias you'll use in the SQLite SQL. - The optional `(col1, col2, col3)` after the table name provides column hints (used when the SOQL returns zero rows, to still know the schema). - Everything between `START` and `END` is valid **SOQL** sent directly to the Salesforce Query API. - `-- SOQL_END` closes the block. - SQL written outside any block is **SQLite SQL** that runs against the fetched tables. 
### Single-object example Pull accounts directly from Salesforce's Account object: ``` -- SOQL_START: accounts (Id, Name, Website, CreatedDate, AnnualRevenue, ParentId, OwnerId) SELECT Id, Name, Website, CreatedDate, AnnualRevenue, ParentId, OwnerId FROM Account WHERE IsDeleted = false -- SOQL_END SELECT Id AS account_id, Name AS name, Website AS domain, CreatedDate AS created_at, AnnualRevenue AS amount, ParentId AS parent_account_id, OwnerId AS owner_id FROM accounts ``` ### Multi-object example (Account + Opportunity) Pull accounts and their closed-won opportunities, then aggregate: ``` -- SOQL_START: accounts (Id, Name, Website, CreatedDate, ParentId) SELECT Id, Name, Website, CreatedDate, ParentId FROM Account WHERE IsDeleted = false -- SOQL_END -- SOQL_START: opps (Id, AccountId, Amount, CloseDate, StageName) SELECT Id, AccountId, Amount, CloseDate, StageName FROM Opportunity WHERE StageName = 'Closed Won' AND IsDeleted = false -- SOQL_END SELECT a.Id AS account_id, a.Name AS name, a.Website AS domain, a.CreatedDate AS created_at, a.ParentId AS parent_account_id, SUM(o.Amount) AS amount, MAX(o.CloseDate) AS expires_at FROM accounts a LEFT JOIN opps o ON a.Id = o.AccountId GROUP BY a.Id, a.Name, a.Website, a.CreatedDate, a.ParentId ``` ### SOQL tips - Use standard Salesforce field names (e.g., `Id`, `Name`, `CreatedDate`, `IsDeleted`) - Custom fields end with `__c` (e.g., `Customer_ARR__c`, `CS_Status__c`) - SOQL supports `WHERE`, `ORDER BY`, `LIMIT`, and relationship queries (e.g., `Owner.Email`) - You can include multiple SOQL blocks in one query - The SQLite SQL between/after blocks supports `JOIN`, `GROUP BY`, `HAVING`, subqueries, CTEs (`WITH`), and all standard SQLite functions ### Important notes on SOQL blocks - Every `SOQL_START` must have a matching `SOQL_END` - Blocks cannot be nested - The table name must be a single word (no spaces or special characters) - Column hints in parentheses are optional but recommended -- they provide a fallback schema 
when the SOQL returns zero rows --- ## HubSpot (HS Block Queries) HubSpot connections use a **block query format** similar to Salesforce, but the block body is a **JSON payload** that maps to HubSpot's CRM Search API. ### How it works 1. **HS blocks** call HubSpot's Search API to fetch objects (companies, deals, contacts) and store results in named temporary tables 2. **SQLite SQL** (outside the blocks) queries those temporary tables ### Syntax ``` -- HS_START: table_name { "object_type": "companies", "properties": ["property1", "property2"], "filter_groups": [...] } -- HS_END SELECT * FROM table_name ``` ### JSON body fields | Field | Type | Required | Description | |-------|------|----------|-------------| | `object_type` | string | Yes | HubSpot object type: `companies`, `deals`, `contacts`, `tickets`, etc. | | `properties` | array of strings | No | Which HubSpot properties to return. If omitted, returns default properties. | | `filter_groups` | array of objects | No | Search filters following HubSpot's CRM Search API format. | | `limit` | integer | No | Max records per page (default 100). | | `sleep_ms` | integer | No | Milliseconds to wait between pagination requests (for rate limiting). | | `error_on_rate_limit` | boolean | No | If true, fails on rate limit instead of stopping pagination. | ### Filter groups format Filter groups follow HubSpot's standard [CRM Search API](https://developers.hubspot.com/docs/api/crm/search) syntax: ```json "filter_groups": [ { "filters": [ { "propertyName": "dealstage", "operator": "EQ", "value": "closedwon" } ] } ] ``` Common operators: `EQ`, `NEQ`, `GT`, `GTE`, `LT`, `LTE`, `CONTAINS_TOKEN`, `NOT_CONTAINS_TOKEN`, `HAS_PROPERTY`, `NOT_HAS_PROPERTY`. 
### Example: Pull companies ``` -- HS_START: companies { "object_type": "companies", "properties": ["name", "domain", "createdate", "annualrevenue", "hubspot_owner_id"] } -- HS_END SELECT id AS account_id, name, domain, createdate AS created_at, annualrevenue AS amount FROM companies ``` Note: HubSpot always includes the record `id` in results. It's available as a column in the SQLite query. ### Example: Pull closed-won deals, then aggregate by company ``` -- HS_START: deals { "object_type": "deals", "properties": ["dealname", "amount", "closedate", "hs_object_id", "associations.company"], "filter_groups": [{ "filters": [{ "propertyName": "dealstage", "operator": "EQ", "value": "closedwon" }] }] } -- HS_END SELECT company_id AS account_id, SUM(CAST(amount AS REAL)) AS amount, MAX(closedate) AS expires_at FROM deals WHERE company_id IS NOT NULL GROUP BY company_id ``` ### HubSpot tips - Property names use HubSpot's internal names (lowercase, underscored), not display names - Use `createdate` for company creation, `closedate` for deal close date - Filters within a filter group are ANDed; separate filter groups are ORed together - Rate limiting: If you're pulling large datasets, set `sleep_ms` (e.g., `200`) to avoid hitting HubSpot's API rate limits --- ## Attio (AT Block Queries) Attio uses a block format similar to HubSpot with a JSON body. ### Syntax ``` -- AT_START: table_name { "object_type": "companies", "filter": { ...
}, "sorts": [{ "field": "created_at", "direction": "desc" }], "limit": 100 } -- AT_END SELECT * FROM table_name ``` ### JSON body fields | Field | Type | Required | Description | |-------|------|----------|-------------| | `object_type` | string | Yes | Attio object type (e.g., `companies`, `people`, `deals`) | | `filter` | object | No | Filter conditions | | `sorts` | array | No | Sort order | | `limit` | integer | No | Maximum records to fetch | ### Example ``` -- AT_START: companies { "object_type": "companies", "filter": {}, "limit": 10000 } -- AT_END SELECT id AS account_id, name, primary_domain AS domain, created_at FROM companies ``` --- ## MongoDB (MQL Block Queries) MongoDB connections use MQL blocks with a JSON body specifying the database, collection, and optional filters. ### Syntax ``` -- MQL_START: table_name { "database": "your_db", "collection": "accounts", "filter": { "type": "customer" }, "limit": 10000, "sort": { "created_at": -1 } } -- MQL_END SELECT * FROM table_name ``` ### JSON body fields | Field | Type | Required | Description | |-------|------|----------|-------------| | `database` | string | Yes | MongoDB database name | | `collection` | string | Yes | Collection name | | `filter` | object | No | MongoDB query filter | | `limit` | integer | No | Maximum documents | | `sort` | object | No | Sort specification | --- ## Gainsight (GS Block Queries) Gainsight uses blocks with an API endpoint and request body. ### Syntax ``` -- GS_START: table_name { "endpoint": "/v1/data/objects/query/Company", "body": { "select": ["Gsid", "Name", "Arr", "RenewalDate", "Stage"] } } -- GS_END SELECT * FROM table_name ``` The `endpoint` field is required and specifies which Gainsight API endpoint to call. --- ## Common Pattern: Blocks + SQLite SQL All block-based query formats (SOQL, HS, AT, MQL, GS) share the same architecture: 1. **Blocks** fetch raw data from the external API and store it in named temporary tables 2. 
**SQLite SQL** outside the blocks transforms, joins, filters, and reshapes the data This means you can: - **Use multiple blocks** to pull from different objects/tables, then join them in SQLite - **Use CTEs** (`WITH ... AS`) to build complex transformations - **Apply aggregations** (SUM, COUNT, MIN, MAX, GROUP BY) in the SQLite layer - **Filter** with WHERE clauses in the SQLite layer (in addition to or instead of API-level filters) - **Use standard SQLite functions** for string manipulation, date handling, CASE expressions, etc. The blocks handle API-specific syntax (SOQL, HubSpot Search JSON, etc.), while the SQLite SQL gives you full relational query power over the fetched results. --- ## Next Steps - See [Field Reference](./field-reference) for which properties to map your query columns to - See [Advanced Configuration](./advanced-configuration) for query patterns that handle ARR aggregation, churn detection, parent-child hierarchies, and more - See [Real-World Examples](./real-world-examples) for complete worked examples --- ## Account Model Field Reference This page documents every recognized account property in FunnelStory, its expected data type, and what it's used for. When mapping your query columns to account properties, use this as your reference. ## Property Categories ### Required Properties These properties must be present for the model to function. | Property | Type | Description | |----------|------|-------------| | `account_id` | string | The unique identifier for the account. This is the **model key** -- each row must have a unique, non-null `account_id`. FunnelStory uses this to identify and deduplicate accounts across refreshes. | ### Default Properties These are pre-populated in the mapping UI. While not strictly required, they are strongly recommended for a useful account model. | Property | Type | Description | |----------|------|-------------| | `name` | string | The display name of the account (company name). 
Shown throughout the FunnelStory UI. | | `domain` | string | The primary web domain of the account (e.g., `example.com`). Used for enrichment, domain-based matching, and deduplication. Should be lowercase, without `http://` or `www.` prefixes. | | `created_at` | timestamp | When the account was created or became a customer. Used as the account start date in lifecycle tracking. See [Timestamp Formatting](#timestamp-formatting) for format requirements. | ### Revenue and Lifecycle | Property | Type | Description | |----------|------|-------------| | `amount` | float | The account's revenue value (ARR, contract value, MRR, etc.). Used in revenue reporting, health scoring, and dashboards. For accounts with multiple deals/contracts, aggregate the total in your query. | | `expires_at` | timestamp | When the account's contract or subscription expires. Drives renewal management and expiry-based alerts. See [Timestamp Formatting](#timestamp-formatting). | ### Churn Tracking | Property | Type | Description | |----------|------|-------------| | `is_churned` | boolean | Whether the account has churned. Set to `true` for churned accounts, `false` or omit for active accounts. FunnelStory uses this to segment accounts and track churn metrics. | | `churned_at` | timestamp | When the account churned. Used for churn timeline analysis. See [Timestamp Formatting](#timestamp-formatting). | ### Account Hierarchy | Property | Type | Description | |----------|------|-------------| | `parent_account_id` | string | The `account_id` of this account's parent account. Used to build parent-child hierarchies. The parent account must exist in the same model. FunnelStory resolves the relationship and can roll up metrics from children to parents. | ### CRM Identifiers | Property | Type | Description | |----------|------|-------------| | `sfdc_account_id` | string | The Salesforce Account record ID (18-character ID). Links this account to its Salesforce record for CRM sync and opportunity tracking. 
| | `hubspot_company_id` | string | The HubSpot Company record ID. Links this account to its HubSpot record for CRM sync. | ### Role-based assignment (CSM, AE, …) These properties enable **automatic account assignment** by role. When FunnelStory refreshes the Account model, it matches the email or name values against workspace users and creates account assignments for the corresponding designation. | Property | Type | Description | |----------|------|-------------| | `csm_email` | string | Email of the assigned Customer Success Manager. Matched against workspace users. | | `csm_name` | string | Name of the assigned CSM. Used as fallback when email is not available. | | `cse_email` | string | Email of the assigned Customer Success Engineer. | | `cse_name` | string | Name of the assigned CSE. | | `ae_email` | string | Email of the assigned Account Executive. | | `ae_name` | string | Name of the assigned AE. | | `se_email` | string | Email of the assigned Sales Engineer. | | `se_name` | string | Name of the assigned SE. | **How auto-assignment works**: During each account refresh, FunnelStory looks at these property values and attempts to match them against users in your workspace. If a match is found, the account is automatically assigned to that user under the corresponding designation (CSM, AE, etc.). Manual assignments are never overwritten by auto-assignment. ### Shared team assignment This property ties an account to a **[Shared team](../shared-teams.md)** (a group of workspace users). It works alongside the role-based properties above, not instead of them. | Property | Type | Description | |----------|------|-------------| | `team_id` | string | Must match the **External Team ID** of a shared team defined under **Admin → Team → Shared Teams**. On each Account model refresh, FunnelStory assigns **all members** of that team to the account (auto-assigned, without a CSM/AE-style designation). 
If the value is empty or does not match any team, no assignment is made from this property. Manual assignment for a given user on that account prevents team-based auto-assignment for that same user. | ### Custom Properties Any column in your query that you map to a property name not listed above becomes a **custom account property**. Custom properties: - Appear in the account detail view - Can be used as filters in Accounts View and Audiences - Can be used in workflow conditions - Must follow the naming pattern: letters, numbers, hyphens, underscores, and spaces (regex: `^[a-zA-Z0-9-_ ]+$`) Common custom properties include: `industry`, `tier`, `plan_name`, `region`, `employee_count`, `health_score`, `contract_type`. --- ## Data Type Reference | Type | Description | Example Values | |------|------------|----------------| | `string` | Text values. Strings longer than 4,000 characters are truncated. | `"Acme Corp"`, `"enterprise"` | | `float` | Decimal numbers. Used for monetary values, scores, percentages. | `50000.00`, `0.85` | | `integer` | Whole numbers. | `42`, `1000` | | `timestamp` | Date/time values. See [Timestamp Formatting](#timestamp-formatting) for format rules. | `1705305600`, `"2024-01-15T12:00:00Z"` | | `boolean` | True/false values. | `true`, `false`, `1`, `0` | | `json` | Structured data stored as JSON. Used for complex/nested values. | `{"key": "value"}` | --- ## Timestamp Formatting Timestamp formatting is critical for properties like `created_at`, `expires_at`, and `churned_at` to work correctly across FunnelStory. The system needs to interpret these values as actual points in time, not just text. ### Format preference (most preferred first) **1. Native database timestamp type** -- Preferred If your database column is typed as `TIMESTAMP`, `DATETIME`, `TIMESTAMPTZ`, or similar, the database driver preserves the type and FunnelStory can read it directly. 
This is the most natural format when querying from SQL databases and warehouses -- no conversion needed. ```sql -- These work because the column type carries through: SELECT created_at FROM accounts -- where created_at is TIMESTAMP ``` **2. Unix timestamp in seconds (integer)** -- Good A single integer representing seconds since January 1, 1970 UTC. Unambiguous and reliable, with no timezone confusion or format ambiguity. ``` 1705320000 ``` This represents `2024-01-15T12:00:00Z`. **3. RFC 3339 / ISO 8601 string** -- Acceptable When timestamps must be represented as strings, RFC 3339 is the preferred string format: ``` 2024-01-15T12:00:00Z 2024-01-15T12:00:00+00:00 2024-01-15T07:00:00-05:00 ``` Other ISO 8601 variations (e.g., `2024-01-15 12:00:00`) are generally recognized but less reliable. **4. Arbitrary string formats** -- Will NOT work FunnelStory cannot parse non-standard date strings. These formats **will fail**: ``` Jan 15, 2024 15/01/2024 01-15-2024 January 15th, 2024 ``` If your source data uses these formats, you must convert them in your query. ### Converting timestamps in your query If your source data has timestamps in a non-ideal format, convert them in the query before FunnelStory receives them.
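For instance, a date stored as free text like `Jan 15, 2024` can be parsed and emitted as Unix seconds in a single step. A minimal PostgreSQL sketch (the `accounts` table and `signup_date_text` column are illustrative assumptions, not FunnelStory names):

```sql
-- PostgreSQL: parse a non-standard text date, then emit Unix seconds.
-- TO_TIMESTAMP with a format pattern handles strings like 'Jan 15, 2024'.
SELECT
  id AS account_id,
  EXTRACT(EPOCH FROM TO_TIMESTAMP(signup_date_text, 'Mon DD, YYYY'))::BIGINT AS created_at
FROM accounts
```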
**Timestamp to Unix seconds:** | Dialect | Conversion | |---------|-----------| | PostgreSQL | `EXTRACT(EPOCH FROM created_at)::BIGINT` | | MySQL | `UNIX_TIMESTAMP(created_at)` | | BigQuery | `UNIX_SECONDS(created_at)` or `UNIX_SECONDS(TIMESTAMP(date_string))` | | Snowflake | `DATE_PART(EPOCH_SECOND, created_at)` | | Databricks | `UNIX_TIMESTAMP(created_at)` | | Redshift | `EXTRACT(EPOCH FROM created_at)::BIGINT` | | MS SQL | `DATEDIFF(SECOND, '1970-01-01', created_at)` | **Unix milliseconds to seconds:** ```sql SELECT (created_at_ms / 1000) AS created_at FROM accounts ``` **String to timestamp type (when column is VARCHAR):** ```sql -- PostgreSQL SELECT created_at::TIMESTAMP AS created_at FROM accounts -- BigQuery SELECT TIMESTAMP(date_string_column) AS created_at FROM accounts -- Snowflake SELECT TO_TIMESTAMP(date_string_column) AS created_at FROM accounts ``` ### Which properties need timestamp formatting? All properties with type `timestamp` in the tables above: - `created_at` -- account creation / start date - `expires_at` -- contract / subscription expiry - `churned_at` -- when the account churned - Any custom property you intend to use as a date/time value --- ## Next Steps - See [Writing Queries](./writing-queries) for query format details per data source type - See [Advanced Configuration](./advanced-configuration) for how to use these properties for ARR, churn, CSM assignment, and more - See [Real-World Examples](./real-world-examples) for complete worked examples with mappings --- ## Advanced Account Model Configuration This page covers advanced scenarios for the Account model. Each section explains a specific configuration goal, how it works in FunnelStory, and provides query examples for multiple data source types.
Before reading this page, you should be familiar with: - [Writing Queries](./writing-queries) -- query syntax for your data source - [Field Reference](./field-reference) -- available properties and their types --- ## Calculate and Set ARR (Annual Recurring Revenue) Map your revenue data to the `amount` property. FunnelStory uses this value for revenue dashboards, health scoring, and account prioritization. ### Where the data comes from | Source | Common column/field names | | -------------- | -------------------------------------------------------- | | Salesforce | `AnnualRevenue`, `Customer_ARR__c`, Opportunity `Amount` | | HubSpot | Deal `amount`, Company `annualrevenue` | | Data warehouse | `arr`, `mrr * 12`, `contract_value`, `amount` | ### Simple case: ARR on the account record If ARR is stored directly on the account/company record: **SQL (warehouse):** ```sql SELECT id AS account_id, name, domain, created_at, arr AS amount FROM accounts ``` **Salesforce:** ``` -- SOQL_START: accounts SELECT Id, Name, Website, CreatedDate, AnnualRevenue FROM Account -- SOQL_END SELECT Id AS account_id, Name AS name, Website AS domain, CreatedDate AS created_at, AnnualRevenue AS amount FROM accounts ``` ### Aggregating from deals/opportunities When ARR must be calculated from individual deals, use `SUM`: **SQL (warehouse):** ```sql SELECT a.id AS account_id, a.name, a.domain, a.created_at, COALESCE(SUM(d.annual_amount), 0) AS amount FROM accounts a LEFT JOIN deals d ON a.id = d.account_id AND d.stage = 'closed_won' AND d.is_active = true GROUP BY a.id, a.name, a.domain, a.created_at ``` **Salesforce (multi-block):** ``` -- SOQL_START: accounts SELECT Id, Name, Website, CreatedDate FROM Account -- SOQL_END -- SOQL_START: opps SELECT AccountId, Amount FROM Opportunity WHERE StageName = 'Closed Won' AND IsClosed = true -- SOQL_END SELECT a.Id AS account_id, a.Name AS name, a.Website AS domain, a.CreatedDate AS created_at, COALESCE(SUM(o.Amount), 0) AS amount FROM accounts a LEFT 
JOIN opps o ON a.Id = o.AccountId GROUP BY a.Id, a.Name, a.Website, a.CreatedDate ``` **Mapping**: Map the `amount` column to the `amount` property. --- ## Set Start Date and Expiry Date ### Start date (`created_at`) The `created_at` property represents when the account became a customer. Common sources: - **CRM**: `CreatedDate` (Salesforce), `createdate` (HubSpot), or a custom field like `First_Closed_Won_Date__c` - **Warehouse**: `created_at`, `signup_date`, `first_contract_date` If you want to use the first closed-won deal date instead of the account creation date: ```sql SELECT a.id AS account_id, a.name, a.domain, MIN(d.close_date) AS created_at FROM accounts a JOIN deals d ON a.id = d.account_id AND d.stage = 'closed_won' GROUP BY a.id, a.name, a.domain ``` ### Expiry date (`expires_at`) The `expires_at` property drives renewal management. Map it from contract end dates: ```sql SELECT a.id AS account_id, a.name, MAX(c.end_date) AS expires_at FROM accounts a LEFT JOIN contracts c ON a.id = c.account_id AND c.status = 'active' GROUP BY a.id, a.name ``` Using `MAX` picks the latest contract end date when an account has multiple active contracts. **Timestamp formatting reminder**: Ensure `created_at` and `expires_at` are in Unix seconds, native database timestamps, or RFC 3339 format. See [Timestamp Formatting](./field-reference#timestamp-formatting). --- ## Set Domain The `domain` property is used for account enrichment, matching, and deduplication. It should be a clean, lowercase domain without protocols or paths. 
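If your source stores full URLs rather than bare domains, a regex-based cleanup can normalize protocol, `www.` prefix, and path in one pass. A sketch assuming PostgreSQL's regex functions and a raw `website` column (adjust function names for your dialect):

```sql
-- PostgreSQL: strip optional protocol and 'www.', then drop any path after the first '/'
SELECT
  id AS account_id,
  name,
  LOWER(SPLIT_PART(REGEXP_REPLACE(website, '^(https?://)?(www\.)?', ''), '/', 1)) AS domain,
  created_at
FROM accounts
```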
### From a direct field ```sql SELECT id AS account_id, name, LOWER(REPLACE(REPLACE(website, 'https://', ''), 'http://', '')) AS domain, created_at FROM accounts ``` ### Extracted from email If you don't have a domain field but have contact emails: ```sql SELECT a.id AS account_id, a.name, LOWER(SPLIT_PART(MIN(c.email), '@', 2)) AS domain, -- PostgreSQL a.created_at FROM accounts a LEFT JOIN contacts c ON a.id = c.account_id GROUP BY a.id, a.name, a.created_at ``` --- ## Identify Actual Customers Using CRM Properties Not everything in your CRM is a paying customer. Use your CRM's stage, status, or type fields to filter down to real customers. ### Salesforce: Filter by Opportunity Stage ``` -- SOQL_START: accounts SELECT Id, Name, Website, CreatedDate FROM Account WHERE Id IN (SELECT AccountId FROM Opportunity WHERE StageName = 'Closed Won') -- SOQL_END SELECT Id AS account_id, Name AS name, Website AS domain, CreatedDate AS created_at FROM accounts ``` Or filter by a custom account status field: ``` -- SOQL_START: accounts SELECT Id, Name, Website, CreatedDate FROM Account WHERE Customer_Status__c = 'Active Customer' -- SOQL_END SELECT Id AS account_id, Name AS name, Website AS domain, CreatedDate AS created_at FROM accounts ``` ### HubSpot: Filter by Deal Stage ``` -- HS_START: deals { "object_type": "deals", "properties": ["hs_object_id", "amount", "closedate"], "filter_groups": [{ "filters": [{ "propertyName": "dealstage", "operator": "EQ", "value": "closedwon" }] }] } -- HS_END -- HS_START: companies { "object_type": "companies", "properties": ["name", "domain", "createdate"] } -- HS_END SELECT DISTINCT c.id AS account_id, c.name, c.domain, c.createdate AS created_at FROM companies c INNER JOIN deals d ON c.id = d.associations_company ``` ### Warehouse: Filter by status or type ```sql SELECT id AS account_id, name, domain, created_at FROM accounts WHERE account_type = 'customer' AND status IN ('active', 'renewal_pending') ``` --- ## Define Parent-Child Account 
Relationships Map the `parent_account_id` property to create hierarchical account structures. The parent account must also exist in the Account model (i.e., it must be returned by the same query with a matching `account_id`). ### How it works When FunnelStory processes the Account model: 1. It looks at each account's `parent_account_id` value 2. It finds the parent account by matching against `account_id` in the same model 3. It establishes the parent-child relationship Parent accounts (sometimes called "container accounts") can aggregate metrics from their children, including `aggregate_amount` (sum of child ARR) and `earliest_child_expiry`. ### Salesforce (using ParentId) ``` -- SOQL_START: accounts SELECT Id, Name, Website, CreatedDate, ParentId FROM Account WHERE IsDeleted = false -- SOQL_END SELECT Id AS account_id, Name AS name, Website AS domain, CreatedDate AS created_at, ParentId AS parent_account_id FROM accounts ``` ### Warehouse ```sql SELECT id AS account_id, name, domain, created_at, parent_id AS parent_account_id FROM accounts ``` ### Important notes - An account cannot be its own parent (`parent_account_id` must differ from `account_id`) - Circular references (A is parent of B, B is parent of A) will cause issues - The parent account must exist in the model -- if the parent isn't returned by your query, the relationship won't be created - You can have multi-level hierarchies (grandparent -> parent -> child) ### Container accounts If you want a parent account to be treated purely as a grouping container (not a customer account itself), you can include a custom property `is_container` set to `true`: ```sql SELECT id AS account_id, name, domain, created_at, parent_id AS parent_account_id, CASE WHEN account_type = 'holding_company' THEN 'true' ELSE 'false' END AS is_container FROM accounts ``` --- ## Auto-Assign Accounts to CSMs FunnelStory can automatically assign accounts to workspace users based on properties in the Account model. 
This works for CSMs, CSEs, AEs, SEs, and any other designations configured in your workspace. ### How it works 1. You map CSM email/name from your data source to properties like `csm_email` and `csm_name` 2. On each Account model refresh, FunnelStory: - Reads the `csm_email` value for each account - Looks up that email in the workspace's user list - If a match is found, assigns the account to that user under the "CSM" designation 3. **Manual assignments are never overwritten** -- only accounts without an existing assignment for that designation are auto-assigned ### Supported designations | Email property | Name property | Designation | | -------------- | ------------- | ------------------------- | | `csm_email` | `csm_name` | Customer Success Manager | | `cse_email` | `cse_name` | Customer Success Engineer | | `ae_email` | `ae_name` | Account Executive | | `se_email` | `se_name` | Sales Engineer | You can use any combination. Most teams map at least `csm_email`. ### Salesforce example ``` -- SOQL_START: accounts SELECT Id, Name, Website, CreatedDate, Owner.Email, CSM_Email__c FROM Account -- SOQL_END SELECT Id AS account_id, Name AS name, Website AS domain, CreatedDate AS created_at, "Owner.Email" AS ae_email, CSM_Email__c AS csm_email FROM accounts ``` ### Warehouse example ```sql SELECT a.id AS account_id, a.name, a.domain, a.created_at, u_csm.email AS csm_email, u_csm.name AS csm_name, u_ae.email AS ae_email FROM accounts a LEFT JOIN users u_csm ON a.csm_user_id = u_csm.id LEFT JOIN users u_ae ON a.owner_user_id = u_ae.id ``` ### Prerequisites - The users being assigned must already exist in your FunnelStory workspace (invited and accepted) - The email addresses in your data must match the email addresses users signed up with in FunnelStory --- ## Combine Multiple Contracts or Deals for a Single Account When an account has multiple active contracts, subscriptions, or deals, you typically want to produce **one row per account** with aggregated values. 
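The aggregate pattern below sums across all active deals. If you instead want the values from only the most recent contract per account, a window function is a common alternative. A sketch assuming a PostgreSQL-compatible warehouse and a hypothetical `contracts` table:

```sql
-- Keep only the most recent active contract per account
SELECT account_id, name, domain, created_at, amount, expires_at
FROM (
  SELECT
    a.id AS account_id,
    a.name,
    a.domain,
    a.created_at,
    c.amount,                      -- this contract's value, not a SUM
    c.end_date AS expires_at,
    ROW_NUMBER() OVER (
      PARTITION BY a.id
      ORDER BY c.start_date DESC   -- newest contract first
    ) AS rn
  FROM accounts a
  JOIN contracts c ON a.id = c.account_id AND c.status = 'active'
) ranked
WHERE rn = 1
```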
### Aggregate pattern

```sql
SELECT
  a.id AS account_id,
  a.name,
  a.domain,
  COUNT(d.id) AS deal_count,                          -- custom property
  SUM(d.amount) AS amount,                            -- total ARR
  MIN(d.start_date) AS created_at,                    -- earliest contract start
  MAX(d.end_date) AS expires_at,                      -- latest contract end
  GROUP_CONCAT(DISTINCT d.product_name) AS products   -- custom property (MySQL/SQLite; use STRING_AGG in PostgreSQL)
FROM accounts a
LEFT JOIN deals d ON a.id = d.account_id AND d.status = 'active'
GROUP BY a.id, a.name, a.domain
```

Here `created_at` comes from the earliest deal rather than the account record; derive each property from one source only, so every output column alias stays unique.

### Salesforce: Account + multiple Opportunities

```
-- SOQL_START: accounts
SELECT Id, Name, Website, CreatedDate FROM Account
-- SOQL_END

-- SOQL_START: opps
SELECT Id, AccountId, Amount, CloseDate, StageName FROM Opportunity WHERE StageName = 'Closed Won'
-- SOQL_END

SELECT
  a.Id AS account_id,
  a.Name AS name,
  a.Website AS domain,
  a.CreatedDate AS created_at,
  COUNT(o.Id) AS deal_count,
  SUM(o.Amount) AS amount,
  MAX(o.CloseDate) AS expires_at
FROM accounts a
LEFT JOIN opps o ON a.Id = o.AccountId
GROUP BY a.Id, a.Name, a.Website, a.CreatedDate
```

### HubSpot: Companies + multiple Deals

```
-- HS_START: companies
{ "object_type": "companies", "properties": ["name", "domain", "createdate"] }
-- HS_END

-- HS_START: deals
{ "object_type": "deals", "properties": ["amount", "closedate", "associations.company"], "filter_groups": [{ "filters": [{ "propertyName": "dealstage", "operator": "EQ", "value": "closedwon" }] }] }
-- HS_END

SELECT
  c.id AS account_id,
  c.name,
  c.domain,
  c.createdate AS created_at,
  COUNT(d.id) AS deal_count,
  SUM(CAST(d.amount AS REAL)) AS amount,
  MAX(d.closedate) AS expires_at
FROM companies c
LEFT JOIN deals d ON c.id = d.associations_company
GROUP BY c.id, c.name, c.domain, c.createdate
```

---

## Identify and Set Churned Status

Map `is_churned` (boolean) and optionally `churned_at` (timestamp) to track which accounts have churned and when.
### From a status field If your CRM or warehouse has an explicit churn status: ```sql SELECT id AS account_id, name, domain, created_at, CASE WHEN status = 'churned' THEN true ELSE false END AS is_churned, churned_date AS churned_at FROM accounts ``` ### Salesforce (custom status field) ``` -- SOQL_START: accounts SELECT Id, Name, Website, CreatedDate, CS_Status__c, Churn_Date__c FROM Account -- SOQL_END SELECT Id AS account_id, Name AS name, Website AS domain, CreatedDate AS created_at, CASE WHEN CS_Status__c = 'Churned' THEN 1 ELSE 0 END AS is_churned, Churn_Date__c AS churned_at FROM accounts ``` ### Derived from contract expiry If you don't have an explicit churn flag, derive it from expired contracts with no renewal: ```sql SELECT a.id AS account_id, a.name, a.domain, a.created_at, CASE WHEN MAX(c.end_date) < CURRENT_DATE AND NOT EXISTS ( SELECT 1 FROM contracts c2 WHERE c2.account_id = a.id AND c2.status = 'active' ) THEN true ELSE false END AS is_churned, MAX(c.end_date) AS churned_at FROM accounts a LEFT JOIN contracts c ON a.id = c.account_id GROUP BY a.id, a.name, a.domain, a.created_at ``` --- ## Exclude Test and Internal Accounts Filter out sandbox, test, demo, and internal accounts so they don't pollute your dashboards and metrics. Do this with `WHERE` clauses in your query. 
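Beyond the inline filters below, some teams maintain a dedicated exclusion table so the list can grow without editing the model query each time. A sketch with a hypothetical `excluded_accounts` table (standard SQL; works in most warehouse dialects):

```sql
-- Anti-join against a maintained exclusion list
SELECT a.id AS account_id, a.name, a.domain, a.created_at
FROM accounts a
WHERE NOT EXISTS (
  SELECT 1 FROM excluded_accounts e WHERE e.account_id = a.id
)
```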
### By domain ```sql SELECT id AS account_id, name, domain, created_at FROM accounts WHERE domain NOT IN ('example.com', 'test.com', 'yourcompany.com') AND domain NOT LIKE '%sandbox%' ``` ### By name pattern ```sql SELECT id AS account_id, name, domain, created_at FROM accounts WHERE name NOT LIKE '%[TEST]%' AND name NOT LIKE '%[DEMO]%' AND name NOT LIKE 'Sandbox%' ``` ### By account type or flag ```sql SELECT id AS account_id, name, domain, created_at FROM accounts WHERE is_test = false AND account_type != 'internal' ``` ### Salesforce ``` -- SOQL_START: accounts SELECT Id, Name, Website, CreatedDate FROM Account WHERE Type != 'Test' AND Name != NULL AND IsDeleted = false -- SOQL_END SELECT Id AS account_id, Name AS name, Website AS domain, CreatedDate AS created_at FROM accounts WHERE Name NOT LIKE '%[TEST]%' ``` Note that you can filter at both levels: in the SOQL (API-side filtering) and in the SQLite SQL (local filtering). Use API-side filtering for large datasets to reduce data transfer. ### HubSpot Use `filter_groups` to exclude at the API level, or filter in the SQLite SQL: ``` -- HS_START: companies { "object_type": "companies", "properties": ["name", "domain", "createdate", "hs_object_id"], "filter_groups": [{ "filters": [{ "propertyName": "hs_is_target_account", "operator": "EQ", "value": "true" }] }] } -- HS_END SELECT id AS account_id, name, domain, createdate AS created_at FROM companies WHERE name NOT LIKE '%test%' ``` --- ## Join with Data from Other Connections FunnelStory supports **Data Joins** to combine data from two different connections. This is how you build a hybrid Account model -- for example, CRM account data enriched with warehouse product data. ### How Data Joins work 1. Your **primary data source** (the model's main connection) provides the base account data 2. A **secondary data source** (configured as a Data Join) provides additional columns 3. 
You specify **join columns** -- which column from the primary maps to which column in the secondary 4. FunnelStory performs a **left join**: all primary rows are kept, matched secondary rows add their columns ### Setting up a Data Join in the UI 1. In the **Map Information** step, scroll to the bottom 2. Click **Add Data Join** 3. Select the **connection** for the secondary data source 4. Write a **query** against the secondary connection 5. Configure **join columns**: - **Left column**: A column from your primary query (e.g., `account_id`) - **Right column**: A column from the secondary query (e.g., `account_id` or `external_id`) 6. The joined columns become available for mapping alongside your primary columns ### Example: Salesforce accounts + warehouse usage data **Primary query** (Salesforce connection): ``` -- SOQL_START: accounts SELECT Id, Name, Website, CreatedDate, AnnualRevenue, OwnerId FROM Account WHERE IsDeleted = false -- SOQL_END SELECT Id AS account_id, Name AS name, Website AS domain, CreatedDate AS created_at, AnnualRevenue AS amount FROM accounts ``` **Data Join query** (Snowflake connection): ```sql SELECT salesforce_account_id AS account_id, last_login_at, total_logins_30d, active_users_count FROM product_analytics.account_usage_summary ``` **Join columns**: - Left column: `account_id` - Right column: `account_id` After the join, you can map `last_login_at`, `total_logins_30d`, and `active_users_count` as custom account properties. 
### Example: Warehouse accounts + HubSpot deal data **Primary query** (PostgreSQL connection): ```sql SELECT id AS account_id, name, domain, created_at, hubspot_id FROM accounts ``` **Data Join query** (HubSpot connection): ``` -- HS_START: deals { "object_type": "deals", "properties": ["amount", "closedate", "dealstage", "associations.company"], "filter_groups": [{ "filters": [{ "propertyName": "dealstage", "operator": "EQ", "value": "closedwon" }] }] } -- HS_END SELECT associations_company AS company_id, SUM(CAST(amount AS REAL)) AS deal_amount, MAX(closedate) AS latest_close_date FROM deals GROUP BY associations_company ``` **Join columns**: - Left column: `hubspot_id` - Right column: `company_id` ### Tips for Data Joins - Column name collisions: if both queries return a column with the same name, the primary query's column takes precedence - The join is always a **left join** -- every row from the primary query is preserved, even if there's no match in the secondary query - You can add multiple Data Joins to combine data from more than two connections - Keep secondary queries focused -- only return the columns you need for the join to minimize data transfer - Ensure the join column values match between the two sources (e.g., both use the same Salesforce Account ID format) --- ## Next Steps - See [Real-World Examples](./real-world-examples) for complete end-to-end configurations that combine many of these scenarios - See [Verification & Troubleshooting](./verification) to confirm your configuration is working correctly --- ## Real-World Examples This page provides complete, end-to-end Account model configurations for common setups. Each example includes the full query, the property mapping table, and an explanation of what's being configured. Use these as starting points and adapt them to your data schema. --- ## Example 1: Salesforce as Sole Source **Scenario**: Your organization uses Salesforce as the system of record. 
Accounts, opportunities, and owner assignments all live in Salesforce. You want to pull customer accounts with aggregated ARR from closed-won opportunities, set parent-child relationships, track churn, and auto-assign CSMs. ### Connection - **Type**: Salesforce - **Refresh**: Daily (24h) ### Query ``` -- SOQL_START: accounts (Id, Name, Website, CreatedDate, ParentId, AnnualRevenue, Customer_Status__c, Churn_Date__c, Type) SELECT Id, Name, Website, CreatedDate, ParentId, AnnualRevenue, Customer_Status__c, Churn_Date__c, Type FROM Account WHERE IsDeleted = false AND Type != 'Prospect' -- SOQL_END -- SOQL_START: opps (Id, AccountId, Amount, CloseDate, StageName) SELECT Id, AccountId, Amount, CloseDate, StageName FROM Opportunity WHERE StageName = 'Closed Won' AND IsDeleted = false -- SOQL_END -- SOQL_START: owners (Id, Email, Name) SELECT Id, Email, Name FROM User -- SOQL_END -- SOQL_START: account_owners (AccountId, OwnerId, CSM_Email__c, CSM_Name__c) SELECT Id AS AccountId, OwnerId, CSM_Email__c, CSM_Name__c FROM Account WHERE IsDeleted = false AND Type != 'Prospect' -- SOQL_END SELECT a.Id AS account_id, a.Name AS name, a.Website AS domain, a.CreatedDate AS created_at, a.ParentId AS parent_account_id, a.Id AS sfdc_account_id, -- ARR: use AnnualRevenue if set, otherwise sum opportunities COALESCE( NULLIF(a.AnnualRevenue, 0), opp_agg.total_amount ) AS amount, -- Expiry: latest opportunity close date opp_agg.latest_close AS expires_at, -- Churn CASE WHEN a.Customer_Status__c = 'Churned' THEN 1 ELSE 0 END AS is_churned, a.Churn_Date__c AS churned_at, -- CSM assignment ao.CSM_Email__c AS csm_email, ao.CSM_Name__c AS csm_name, ow.Email AS ae_email, ow.Name AS ae_name, -- Custom properties a.Type AS account_type FROM accounts a LEFT JOIN ( SELECT AccountId, SUM(Amount) AS total_amount, MAX(CloseDate) AS latest_close FROM opps GROUP BY AccountId ) opp_agg ON a.Id = opp_agg.AccountId LEFT JOIN account_owners ao ON a.Id = ao.AccountId LEFT JOIN owners ow ON ao.OwnerId = 
ow.Id WHERE a.Name NOT LIKE '%[TEST]%' AND a.Name NOT LIKE '%Sandbox%' ``` ### Property Mapping | Column | Property | Notes | |--------|----------|-------| | `account_id` | `account_id` | Salesforce Account Id | | `name` | `name` | | | `domain` | `domain` | From Account.Website | | `created_at` | `created_at` | Salesforce CreatedDate (native timestamp) | | `parent_account_id` | `parent_account_id` | Salesforce ParentId | | `sfdc_account_id` | `sfdc_account_id` | Same as account_id; enables CRM sync | | `amount` | `amount` | ARR from AnnualRevenue or summed opps | | `expires_at` | `expires_at` | Latest opp close date | | `is_churned` | `is_churned` | Derived from Customer_Status__c | | `churned_at` | `churned_at` | Custom Churn_Date__c field | | `csm_email` | `csm_email` | Auto-assigns CSM | | `csm_name` | `csm_name` | Fallback for CSM assignment | | `ae_email` | `ae_email` | Account Owner email | | `ae_name` | `ae_name` | Account Owner name | | `account_type` | `account_type` | Custom property for filtering | ### What this achieves - Pulls all non-prospect, non-deleted Salesforce accounts - Calculates ARR from AnnualRevenue field (preferred) or summed closed-won opportunities (fallback) - Sets expiry from latest opportunity close date - Builds parent-child hierarchy from ParentId - Identifies churned accounts from a custom status field - Auto-assigns CSMs and AEs from Salesforce user data - Excludes test and sandbox accounts via name filters --- ## Example 2: HubSpot as Sole Source **Scenario**: Your organization uses HubSpot. Companies and deals are in HubSpot. You want to pull companies with at least one closed-won deal, aggregate deal values for ARR, and track basic account metadata. 
### Connection - **Type**: HubSpot - **Refresh**: Daily (24h) ### Query ``` -- HS_START: companies { "object_type": "companies", "properties": [ "name", "domain", "createdate", "annualrevenue", "hubspot_owner_id", "industry", "numberofemployees", "lifecyclestage" ] } -- HS_END -- HS_START: deals { "object_type": "deals", "properties": [ "dealname", "amount", "closedate", "dealstage", "hs_object_id", "associations.company" ], "filter_groups": [{ "filters": [{ "propertyName": "dealstage", "operator": "EQ", "value": "closedwon" }] }] } -- HS_END -- HS_START: owners { "object_type": "owners", "properties": ["email", "firstName", "lastName"] } -- HS_END SELECT c.id AS account_id, c.name, c.domain, c.createdate AS created_at, c.id AS hubspot_company_id, -- ARR: company-level annualrevenue or sum of deals COALESCE( NULLIF(CAST(c.annualrevenue AS REAL), 0), deal_agg.total_amount ) AS amount, -- Expiry: latest deal close date deal_agg.latest_close AS expires_at, -- Owner as AE ow.email AS ae_email, (ow.firstName || ' ' || ow.lastName) AS ae_name, -- Custom properties c.industry, c.numberofemployees AS employee_count, c.lifecyclestage AS lifecycle_stage FROM companies c LEFT JOIN ( SELECT associations_company AS company_id, SUM(CAST(amount AS REAL)) AS total_amount, MAX(closedate) AS latest_close, COUNT(*) AS deal_count FROM deals WHERE associations_company IS NOT NULL GROUP BY associations_company ) deal_agg ON c.id = deal_agg.company_id LEFT JOIN owners ow ON c.hubspot_owner_id = ow.id WHERE c.name IS NOT NULL AND c.name != '' AND c.name NOT LIKE '%test%' AND deal_agg.company_id IS NOT NULL -- only companies with closed-won deals ``` ### Property Mapping | Column | Property | Notes | |--------|----------|-------| | `account_id` | `account_id` | HubSpot Company ID | | `name` | `name` | | | `domain` | `domain` | | | `created_at` | `created_at` | HubSpot createdate | | `hubspot_company_id` | `hubspot_company_id` | Enables CRM sync | | `amount` | `amount` | From annualrevenue 
or deal sum | | `expires_at` | `expires_at` | Latest deal close date | | `ae_email` | `ae_email` | HubSpot owner email | | `ae_name` | `ae_name` | HubSpot owner name | | `industry` | `industry` | Custom property | | `employee_count` | `employee_count` | Custom property | | `lifecycle_stage` | `lifecycle_stage` | Custom property | --- ## Example 3: Data Warehouse (Snowflake) **Scenario**: Your data team maintains a curated analytics schema in Snowflake. Account data is in a dimension table that already joins CRM, billing, and product data. This is the cleanest approach when you have a well-maintained warehouse. ### Connection - **Type**: Snowflake - **Refresh**: Daily (24h) ### Query ```sql SELECT a.account_id, a.account_name AS name, LOWER(a.domain) AS domain, EXTRACT(EPOCH FROM a.first_contract_date)::BIGINT AS created_at, a.salesforce_id AS sfdc_account_id, a.hubspot_id AS hubspot_company_id, a.parent_account_id, -- Revenue a.current_arr AS amount, -- Expiry EXTRACT(EPOCH FROM a.contract_end_date)::BIGINT AS expires_at, -- Churn CASE WHEN a.status = 'churned' THEN true ELSE false END AS is_churned, CASE WHEN a.status = 'churned' THEN EXTRACT(EPOCH FROM a.churn_date)::BIGINT ELSE NULL END AS churned_at, -- Team assignment csm.email AS csm_email, csm.full_name AS csm_name, ae.email AS ae_email, ae.full_name AS ae_name, -- Custom properties a.tier, a.industry, a.region, a.plan_name, a.employee_count, a.health_score FROM analytics.funnelstory.dim_accounts a LEFT JOIN analytics.funnelstory.dim_users csm ON a.csm_user_id = csm.user_id LEFT JOIN analytics.funnelstory.dim_users ae ON a.ae_user_id = ae.user_id WHERE a.is_deleted = false AND a.is_test_account = false AND a.account_type = 'customer' ``` ### Property Mapping | Column | Property | Notes | |--------|----------|-------| | `account_id` | `account_id` | Internal account identifier | | `name` | `name` | | | `domain` | `domain` | Lowercased | | `created_at` | `created_at` | Converted to Unix seconds | | 
`sfdc_account_id` | `sfdc_account_id` | For Salesforce sync | | `hubspot_company_id` | `hubspot_company_id` | For HubSpot sync | | `parent_account_id` | `parent_account_id` | For hierarchy | | `amount` | `amount` | Current ARR | | `expires_at` | `expires_at` | Converted to Unix seconds | | `is_churned` | `is_churned` | Derived from status | | `churned_at` | `churned_at` | Converted to Unix seconds | | `csm_email` | `csm_email` | Auto-assigns CSM | | `csm_name` | `csm_name` | Fallback for CSM assignment | | `ae_email` | `ae_email` | Auto-assigns AE | | `ae_name` | `ae_name` | Fallback for AE assignment | | `tier` | `tier` | Custom: customer tier | | `industry` | `industry` | Custom: industry vertical | | `region` | `region` | Custom: geographic region | | `plan_name` | `plan_name` | Custom: subscription plan | | `employee_count` | `employee_count` | Custom | | `health_score` | `health_score` | Custom | ### What this achieves - Uses the warehouse as the single source of truth for clean, pre-modeled data - Converts all timestamps to Unix seconds (Snowflake `EXTRACT(EPOCH FROM ...)`) - Links to both Salesforce and HubSpot for CRM sync - Full parent-child hierarchy - Complete churn tracking with derived boolean and timestamp - Auto-assigns both CSMs and AEs - Rich custom properties for filtering and segmentation - Excludes deleted, test, and non-customer accounts at the query level --- ## Example 4: Hybrid (Salesforce + Data Warehouse) **Scenario**: Salesforce is your CRM for account metadata and ownership, but product usage and billing data lives in a PostgreSQL warehouse. You need data from both sources. This uses a **Data Join** -- Salesforce as the primary source, PostgreSQL as a secondary source joined by Salesforce Account ID. 
### Primary Connection: Salesforce **Query:** ``` -- SOQL_START: accounts (Id, Name, Website, CreatedDate, ParentId, AnnualRevenue, OwnerId, CSM_Email__c, Customer_Status__c) SELECT Id, Name, Website, CreatedDate, ParentId, AnnualRevenue, OwnerId, CSM_Email__c, Customer_Status__c FROM Account WHERE IsDeleted = false AND Type = 'Customer' -- SOQL_END -- SOQL_START: owners (Id, Email, Name) SELECT Id, Email, Name FROM User -- SOQL_END SELECT a.Id AS account_id, a.Name AS name, a.Website AS domain, a.CreatedDate AS created_at, a.ParentId AS parent_account_id, a.Id AS sfdc_account_id, a.AnnualRevenue AS amount, a.CSM_Email__c AS csm_email, ow.Email AS ae_email, CASE WHEN a.Customer_Status__c = 'Churned' THEN 1 ELSE 0 END AS is_churned FROM accounts a LEFT JOIN owners ow ON a.OwnerId = ow.Id WHERE a.Name NOT LIKE '%[TEST]%' ``` ### Data Join: PostgreSQL **Connection**: Your product PostgreSQL database **Query:** ```sql SELECT salesforce_account_id, EXTRACT(EPOCH FROM MAX(last_login_at))::BIGINT AS last_login_at, COUNT(DISTINCT active_user_id) AS active_users_30d, SUM(api_calls_30d) AS api_calls_30d, MAX(subscription_plan) AS plan_name, EXTRACT(EPOCH FROM MAX(subscription_end_date))::BIGINT AS contract_expires_at FROM product_analytics.account_summary WHERE last_login_at > NOW() - INTERVAL '90 days' GROUP BY salesforce_account_id ``` **Join Columns:** | Left Column (Salesforce) | Right Column (PostgreSQL) | |--------------------------|---------------------------| | `sfdc_account_id` | `salesforce_account_id` | ### Combined Property Mapping After the join, you have columns from both sources: | Column | Source | Property | Notes | |--------|--------|----------|-------| | `account_id` | Salesforce | `account_id` | | | `name` | Salesforce | `name` | | | `domain` | Salesforce | `domain` | | | `created_at` | Salesforce | `created_at` | | | `parent_account_id` | Salesforce | `parent_account_id` | | | `sfdc_account_id` | Salesforce | `sfdc_account_id` | | | `amount` | Salesforce 
| `amount` | AnnualRevenue | | `csm_email` | Salesforce | `csm_email` | | | `ae_email` | Salesforce | `ae_email` | | | `is_churned` | Salesforce | `is_churned` | | | `last_login_at` | PostgreSQL | `last_login_at` | Custom; Unix seconds | | `active_users_30d` | PostgreSQL | `active_users_30d` | Custom | | `api_calls_30d` | PostgreSQL | `api_calls_30d` | Custom | | `plan_name` | PostgreSQL | `plan_name` | Custom | | `contract_expires_at` | PostgreSQL | `expires_at` | Unix seconds | ### What this achieves - Salesforce provides CRM-managed data (account info, ownership, ARR, hierarchy, churn) - PostgreSQL provides product usage metrics (last login, active users, API usage, subscription details) - The Data Join links them by Salesforce Account ID - Product metrics become custom account properties, visible on dashboards and usable in workflows - Timestamps from PostgreSQL are converted to Unix seconds for reliable timestamp handling --- ## Choosing the Right Example for Your Setup | Your situation | Start with | |---------------|-----------| | Salesforce is your primary CRM | Example 1 | | HubSpot is your primary CRM | Example 2 | | You have a curated data warehouse | Example 3 | | CRM + warehouse (need data from both) | Example 4 | | Multiple CRMs or complex multi-source | Combine patterns from Examples 1-4 using Data Joins | --- ## Next Steps - Ready to configure? Go back to [Configuration Walkthrough](./configuration-walkthrough) - Need to verify your setup? See [Verification & Troubleshooting](./verification) --- ## Verification and Troubleshooting After configuring and saving your Account model, use this page to verify it's working correctly and troubleshoot common issues. ## Verification Checklist ### 1. Refresh the Model After saving the model as **Active**: 1. Go to **Configuration** > **Data Models** 2. Find the Account model card 3. 
Click **Refresh Model** to trigger an immediate data pull The refresh may take a few seconds to several minutes depending on the data volume and connection type. ### 2. Check the Accounts View Navigate to the **Accounts View** in the sidebar. Verify: - Accounts are appearing in the list - Account names look correct (not IDs or garbled text) - The total account count is reasonable for your business - Domains are populated (if you mapped `domain`) ### 3. Spot-Check Individual Accounts Click on a few accounts and verify: - **Name and domain** match what's in your source system - **ARR / Amount** is correct (cross-reference with your CRM or billing system) - **Created At** date makes sense (not `1970-01-01` or null) - **Expiry date** is in the future for active accounts (if mapped) - **CSM / AE assignments** show the right people (if mapped) - **Custom properties** are populated and have expected values ### 4. Verify Parent-Child Relationships If you configured `parent_account_id`: - Parent accounts show child accounts in their detail view - Child accounts show their parent - No orphaned parent references (parent IDs pointing to non-existent accounts) - `aggregate_amount` on parent accounts reflects the sum of children's ARR ### 5. Verify Churn Status If you configured `is_churned`: - Churned accounts are marked as churned in the UI - Active accounts are NOT marked as churned - The ratio of churned to active accounts looks reasonable ### 6. Verify role-based assignments (CSM, AE, …) If you configured `csm_email` or other **role-based** assignment properties: - Accounts are assigned to the correct workspace users - Navigate to an assigned account and confirm the CSM/AE matches your CRM - Users not in the workspace are not auto-assigned (check if expected users need to be invited first) ### 7. 
Verify shared team assignments (`team_id`) If you mapped **`team_id`** to match **[Shared teams](../shared-teams.md)**: - After **Refresh model**, accounts whose `team_id` matches a team’s **External Team ID** show assignments for **all** members of that team - Accounts with an empty `team_id` or a value that does not match any shared team have **no** team-driven assignments from this property - If you manually assigned a user to an account, confirm team-based auto-assignment did not replace that relationship for that user (manual pairs are preserved) --- ## Common Issues and Fixes ### Query validation fails **Symptoms**: Error message when clicking Validate in the query editor. **Common causes**: - **SQL syntax error**: Check for missing commas, unclosed quotes, wrong table/column names. The error message usually includes the specific SQL error. - **Connection issue**: The data source may be unreachable. Test the connection in **Configuration** > **Integrations**. - **Permission denied**: The database user may lack `SELECT` privileges on the table. Check grants. - **SOQL/HS block errors**: Ensure `SOQL_START` has a matching `SOQL_END`. Check that JSON in HS blocks is valid. Blocks cannot be nested. ### No accounts appear after refresh **Symptoms**: Accounts View is empty after refreshing the model. **Common causes**: - **Query returns zero rows**: Run your query directly against the source and confirm it returns data. Your `WHERE` clause might be too restrictive. - **Model is in Draft**: Check the model card -- if it says "Draft", it's not active. Edit the model and save as Active. - **Mapping issue**: `account_id` might not be mapped, or mapped to a column that contains NULLs. - **Refresh didn't complete**: Wait a few minutes and refresh again. Check for any error indicators on the model card. ### Duplicate accounts **Symptoms**: The same account appears multiple times. 
**Common causes**: - **Query returns multiple rows per account**: If you're joining with deals/contracts, you need `GROUP BY` on the account ID to aggregate to one row per account. Review the [combining contracts](./advanced-configuration#combine-multiple-contracts-or-deals-for-a-single-account) section. - **Non-unique account_id values**: Ensure your `account_id` column truly contains unique values. Run `SELECT account_id, COUNT(*) FROM (...) AS q GROUP BY account_id HAVING COUNT(*) > 1` against your query to find duplicates (the `AS q` derived-table alias is required by databases such as PostgreSQL and MySQL). ### Incorrect ARR / amount values **Symptoms**: Amount values are wrong -- too high, too low, or zero. **Common causes**: - **Not aggregating**: If an account has multiple deals, you need `SUM()` to get the total. Without it, you'll get only one deal's amount. - **Wrong column mapped**: Verify the `amount` property is mapped to the correct column (e.g., `total_amount`, not `single_deal_amount`). - **Currency/unit mismatch**: Some sources store amounts in cents (divide by 100) or in different currencies. - **NULL values**: Use `COALESCE(SUM(amount), 0)` to handle accounts with no deals. ### Timestamps show as wrong dates or "1970-01-01" **Symptoms**: `created_at`, `expires_at`, or other dates show January 1, 1970 or obviously wrong dates. **Common causes**: - **Milliseconds instead of seconds**: If your source stores Unix timestamps in milliseconds, divide by 1000 in your query: `(created_at_ms / 1000) AS created_at` - **String format not recognized**: FunnelStory needs Unix seconds, native database timestamps, or RFC 3339 strings. Arbitrary date strings like `"Jan 15, 2024"` won't work. Convert them in your query. See [Timestamp Formatting](./field-reference#timestamp-formatting). - **Column type mismatch**: The column might be typed as `VARCHAR` instead of `TIMESTAMP` in your database. Cast it: `CAST(date_column AS TIMESTAMP)` or convert to Unix seconds. - **Timezone issues**: If dates are off by hours, the timezone might not be specified.
Use UTC-aware timestamps or explicit timezone conversion. ### CSM auto-assignment not working **Symptoms**: Accounts have `csm_email` values but aren't assigned to CSMs in FunnelStory. **Common causes**: - **Email doesn't match**: The email in your data must exactly match the email the user registered with in FunnelStory. Check for case differences, extra spaces, or different email aliases. - **User not in workspace**: The CSM must be invited to and have accepted their invitation to the FunnelStory workspace. - **Manual assignment exists**: Auto-assignment never overwrites manual assignments. If an account was manually assigned to a different CSM, the auto-assignment is skipped. - **Property name typo**: Ensure you're mapping to `csm_email` (not `csm-email` or `csmEmail`). ### Parent-child relationships not appearing **Symptoms**: `parent_account_id` is mapped but accounts don't show hierarchy. **Common causes**: - **Parent doesn't exist in model**: The parent account must be returned by the same query. If `parent_account_id` points to an account ID not in your results, the relationship can't be created. - **Self-referencing**: An account cannot be its own parent. Ensure `parent_account_id != account_id`. - **Circular reference**: Account A is parent of B, and B is parent of A. This is invalid. - **NULL vs empty string**: Accounts without parents should have `NULL` for `parent_account_id`, not an empty string. ### Data Join columns not appearing **Symptoms**: You configured a Data Join but the joined columns aren't available for mapping. **Common causes**: - **Join column mismatch**: The values in the left and right join columns must match. For example, if the primary query uses a Salesforce 18-character ID and the secondary uses a 15-character ID, they won't match. - **Secondary query error**: The Data Join's query might be failing. Try validating it independently. 
- **Column name collision**: If both queries return a column with the same name, the primary query's column takes precedence. Rename columns in the secondary query to avoid collisions. ### Model stuck in Draft / "Missing Configuration" **Symptoms**: The model card shows "Draft" or lists missing configuration items. **Common causes**: - **No data source configured**: The model needs a connection and query. - **Missing required mapping**: `account_id` must be mapped to a column. - **Empty query**: The query field is blank. - **Join misconfigured**: A Data Join exists but is missing its query or join columns. To fix: edit the model, complete any missing fields, and save as Active. --- ## Refresh Intervals and Data Freshness - **Hourly refresh**: Data is at most 1 hour old. Good for CRM data that changes frequently. - **Daily refresh**: Data is at most 24 hours old. Good for warehouse data that's updated on a schedule. You can always trigger a manual refresh from the model card. During a refresh, the previous data remains visible -- it's replaced once the new refresh completes successfully. If a refresh fails, the previous data is preserved. --- ## Getting Help If you've gone through this troubleshooting guide and the issue persists: 1. **Check the query independently**: Run your query directly against the source system to verify it returns expected results 2. **Simplify**: Start with a minimal query (just `account_id`, `name`, `created_at`) and add fields incrementally 3. **Check mappings**: Edit the model and review that each property is mapped to the correct column 4. 
**Review field types**: Ensure timestamps are in a supported format, booleans are `true`/`false` or `1`/`0`, and amounts are numeric --- ## Next Steps Once your Account model is verified and working: - Configure the **Users model** to link users to accounts - Set up **Product Activity** models to track user engagement - Connect **Support Ticket** or **Conversation** models for customer support data --- ## Inviting Users Workspace **Admins** and **Super Admins** grow the team by sending **invites** tied to a specific **role** (and optional **designation**). Invited users receive an email with a secure link to join the same tenant everyone else uses. ## Send an invite 1. Go to **Admin Settings → Team permissions**. 2. Choose **Invite user** (or equivalent). 3. Enter **email**, **display name**, and the **role** that matches what they should be allowed to change. 4. Optionally set a **designation** such as **CSM**, **AE**, **SE**, or a custom value your workspace uses for reporting slices. 5. Send the invite. The user must accept before they appear as active. Invites can be **resent** or **revoked** while pending. After someone accepts, manage them alongside existing users on the same **Team permissions** screen. ## Roles at a glance Pick the smallest role that still lets someone do their job. Full detail is in **[Roles and permissions](./rbac.md)**.
| Role | Typical invite scenario | |------|-------------------------| | **Super Admin** | IT owner, billing owner, or security—full tenant control | | **Admin** | RevOps or CS leadership configuring models, users, and notifications | | **Data Admin** | Analytics engineer owning connections and queries without user administration | | **Manager** | Team lead who reassigns accounts and needle movers but does not change global configuration | | **Account User** | Individual CSM or AM working an assigned book of business | | **Renari User** | Copilot-only access for people who should ask questions but not change configuration | Avoid inviting the entire company as **Admin**; use **Account user** plus **account assignment** for frontline scale. ## Account assignment and “My accounts” Before people rely on **Needle Movers**, predictions, or portfolio views, make sure they have the right book of business: - **Per user** — assign accounts to a person (or rely on fields on your **Account model**, such as **`csm_email`**, so **My accounts** fills automatically). See **[Field reference](./configuring-accounts/05-field-reference.md)**. - **Per shared team** — under **Admin → Team**, use the **Shared Teams** tab to define teams (each has an **External Team ID** and a member list). Map your Account model’s **`team_id`** property to a column whose values match those IDs. After an **Account model refresh**, every member of the matching team is auto-assigned to the account. Full workflow: **[Shared teams](./shared-teams.md)**. On the same screen, the **Team Members** tab is only for workspace users and invites—not for defining shared teams. Many of those views default to **My accounts**. Confirm assignment rules early so new users see a populated portfolio on first login. 
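Because auto-assignment from `csm_email` requires an exact match against the email each user registered with, it is worth auditing your source data for near-misses before relying on **My accounts**. A minimal sketch of such an audit — the workspace user set, account rows, and helper name are all hypothetical:

```python
# Hypothetical pre-import audit: flag csm_email values that would fail
# exact-match auto-assignment (case differences, stray whitespace, aliases).
workspace_users = {"jane.doe@acme.com", "ravi@acme.com"}  # sample registered emails

account_rows = [
    {"account_id": "a1", "csm_email": "jane.doe@acme.com"},   # exact match: assigned
    {"account_id": "a2", "csm_email": "Jane.Doe@Acme.com "},  # case + trailing space: not assigned
    {"account_id": "a3", "csm_email": "ravi+cs@acme.com"},    # alias: not assigned
]

def near_misses(rows, users):
    """Rows whose csm_email is not an exact workspace match, with a likely reason."""
    flagged = []
    for r in rows:
        email = r["csm_email"]
        if email in users:
            continue
        if email.strip().lower() in users:
            flagged.append((r["account_id"], email, "case/whitespace mismatch"))
        else:
            flagged.append((r["account_id"], email, "no workspace user"))
    return flagged

for acct, email, reason in near_misses(account_rows, workspace_users):
    print(acct, repr(email), reason)
```

Case/whitespace mismatches are usually fixed in the query (`LOWER(TRIM(...))` in your source); "no workspace user" rows typically mean the CSM still needs to be invited.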
## Single sign-on and auto-provisioning If your workspace enables **SSO**, users may **auto-join** the first time they authenticate with an email domain your admin has approved—skipping manual invites for large rollouts. Domain rules and default roles are configured in **[SSO](../platform/sso.md)**. Mixed mode is common: executives SSO in while contractors stay on email invites. ## Deactivation - **SSO users** — access is ultimately controlled by your **identity provider**. Remove or disable the user there so they can no longer authenticate into FunnelStory. - **Manually invited users** — remove them by **deleting** the user from **Admin Settings → Team permissions**. Deleted or IdP-removed users retain historical attribution on comments and audit entries where the product preserves it. ## Related - [Shared teams](./shared-teams.md) - [Roles and permissions](./rbac.md) - [SSO](../platform/sso.md) - [Workspace management](../platform/workspace-management.md) --- ## FunnelStory 101 FunnelStory is a **Customer SuperIntelligence Platform for B2B teams**. It acts as a unified intelligence and action layer that converts scattered enterprise data into a **Customer Intelligence Graph**, which powers AI copilots, agents, and automations — enabling B2B teams to work entirely through any combination of AI copilots, native UI, or developer apps. ## What Can You Do With FunnelStory? ### Customer Success FunnelStory transforms Customer Success from a reactive support function into a proactive revenue engine. - **End surprise churn** — AI predictions with 80–90% recall surface hidden risks that traditional health scores miss. - **See risk and opportunity 3–9 months out** — leading indicators forecast expansion and churn risk by learning historical patterns and your business DNA. - **Automate grunt work** — pre-built Skills in your AI copilot automatically synthesize account briefs, QBRs, and actionable intelligence. 
- **Zero developer dependency** — Vibe Coding lets any team member build complex workflows and agents in plain English. - **Automate with Agentic Workflows** — deploy agentic workflows to automate CS processes end-to-end, from risk alerts to executive reporting. ### B2B Customer Intelligence for Frontline Teams FunnelStory brings the Customer Intelligence Graph to every revenue-facing team — not just Customer Success. Intelligence is delivered where each team already works — inside Slack, CRM, CSM tools, or custom apps built on FunnelStory's API — and agentic workflows automate the grunt work across all functions. **Marketing** - Identify case study and spotlight candidates from account signals - Surface ROI stories, references, and testimonials at the right moment - Find champions and advocates, and track leads with enriched account context **Product** - Track adoption funnels and correlate retention with feature usage - Build power user profiles and identify beta candidates - Quantify bug impact across the customer base and surface market fit signals **Support** - Monitor SLA compliance and surface escalation risk early - Identify repeat issues and knowledge gaps before they compound - Benchmark resolution performance and track cost per account ### B2B Customer Intelligence for Ops and Leadership Teams For leadership, FunnelStory serves as the **Truth Layer** needed for accurate forecasting and confident decision-making. - **Agentic Governance** — AI agents automate complex analysis, revenue metrics tracking, and risk reporting on a continuous basis. - **Custom Intelligence Delivery** — synthesized intelligence is pushed directly to leadership through dashboards, executive reports, and Slack. ## What Is Inside FunnelStory? Standard LLMs are powerful reasoners but have no knowledge of your customers, your contracts, or your business history. 
FunnelStory provides the missing layer: unified customer intelligence grounded in your actual data, structured so that every AI agent, every copilot, and every team member works from the same shared reality. Five components make this possible: - **The Intelligence Fabric** — a shared intelligence layer for frontline teams, ops, and leadership, delivered in the right place and format for each audience. - **The Customer Intelligence Graph** — the core platform engine that connects accounts, contacts, interactions, product usage, and signals across all your tools into a single inspectable context. - **Time-Travel Discovery** — the graph maps every interaction from the first touchpoint, building a historical context that learns your specific business DNA over time. This makes it possible to reason about what happened, when, and why — not just what is true right now. - **Agentic Workflows** — a framework for running autonomous or human-in-the-loop workflows based on deep context and pre-computed intelligence from the graph. Workflows are assembled using a modular "LEGO block" approach and can be created in plain English using **Vibe Coding**, without waiting for engineering. - **Context for Agents** — by grounding every AI agent's reasoning in a shared, inspectable context graph, FunnelStory ensures agents act on mathematically validated facts rather than hallucinations. Every signal is scored using Precision, Recall, and F1 metrics. ## Where to Go Next - [How FunnelStory Works](../core-concepts/overview.md) — the platform's three-layer architecture in detail - [Quick Start](./quick-start.md) — get your first accounts loading in FunnelStory - [Configuring Accounts](./configuring-accounts/01-introduction.md) — the full guide to setting up your account model - [AI Agents](/ai/overview) — Renari, agent automation, and the MCP Server --- ## Prerequisites Before you roll FunnelStory out to a full revenue or success organization, confirm the items below. 
Confirming these items up front lets most teams finish onboarding in **hours** rather than **weeks** of back-and-forth. ## People and access Match people in your rollout to the **FunnelStory roles** they will use in the product (full detail in [Roles and permissions](./rbac.md)): | Role | What you need during onboarding | |------|----------------------------------| | **Super Admin** | At least one person who can create the workspace, billing relationship, first admins, and IdP setup if you use SSO | | **Admin** | Someone who can add connections, approve integrations, manage users under **Admin Settings → Team permissions**, and tune models and notifications | | **Data Admin** | Optional split: owns **Configure** paths (connections, queries, model mappings, refresh) without full user administration—useful when IT owns OAuth apps and warehouse users but RevOps owns day-to-day config | If you use **single sign-on**, involve whoever manages your IdP so SAML or OIDC metadata can be exchanged. See [SSO](../platform/sso.md). ## Data you should have in mind FunnelStory is built around **B2B accounts** rolling up **users**, **usage**, **CRM**, and **support** context. You do *not* need perfect data on day one, but you should know: 1. **Where the account list lives** (CRM, warehouse table, or MDM) and which column is the stable **account ID**. 2. **Where renewal and revenue fields live** if you want predictions and dashboards tuned to ARR and contract dates—usually on Account or Opportunity objects. 3. **Where product usage is stored** (Segment, warehouse events table, or native product DB) if needle movers and journeys should reflect adoption. If you are unsure, start from the **[Account model guide](./configuring-accounts/01-introduction.md)** and map one source of truth first. ## Supported connections Any integration you plan to use must be available under **Configure → Connections** for your workspace edition. 
The catalog is documented in **[Data connections](../data-connections/overview.md)**—databases, CRMs, support tools, chat, meetings, email, enrichment, and more. **Allowlisted egress** applies to many warehouse and database setups. If your security team requires fixed IPs, share **[Allowed IP addresses](../data-connections/allowed-ip-addresses.md)** before scheduling the first sync. ## Model literacy inside your team Someone on the customer side should be comfortable with **queries or object pickers** at the level your sources require (SQL for warehouses, SOQL for Salesforce, filters for HubSpot, and so on). FunnelStory accelerates authoring but does not remove the need for sensible joins and filters. ## Browser and network Use a **current evergreen browser**. Corporate proxies that terminate TLS can interfere with OAuth flows—test a connection from the same network your CSMs will use day to day. ## Related - [Quick Start](./quick-start.md) - [Inviting users](./inviting-users.md) - [Roles and permissions](./rbac.md) --- ## Quick Start This guide takes you from an empty workspace to a **first dashboard you can explore** in one sitting. It assumes you can sign in and have permission to add **connections** and **models** (typically **Admin** or **Data Admin**—see [Roles and permissions](./rbac.md)). ## Choose how you connect data When you start onboarding, FunnelStory asks how you want to supply data for the first models: | Path | Best for | |------|----------| | **Sample database** | Exploring the product quickly with realistic shape (accounts, users, activity) | | **Your own connection** | Going live against CRM, warehouse, or product analytics you already trust | Either path lands in the same **model configuration** experience; the difference is only *which connection* backs the suggested models. 
## AI-suggested models and queries After you pick a connection, FunnelStory analyzes available tables or objects and **suggests models** that usually matter for B2B intelligence—often starting with **Accounts**, **Users**, and **Product activity**. For each suggestion you can: 1. **Review** the proposed model type and description. 2. **Inspect** the **AI-generated query** (SQL, SOQL, or provider-specific shape) that pulls rows for that model. 3. **Accept or adjust** the query before saving, if your workspace allows edits. The goal is to remove blank-page friction: you are validating and tightening rather than writing everything from scratch. ## Lock in the Account model first Every other model hangs off **accounts**. Configure only **one** Account model per workspace. Complete that model with a stable **account ID** mapping before layering users, subscriptions, or tickets. The deep walkthrough lives in [Configuring the Account model](./configuring-accounts/01-introduction.md). ## First dashboard Once required models are saved and a **refresh** has run, your usual landing spot is the **Focus Areas** page—where you see prioritized accounts and early signals (exact content depends on what you connected). To drill into a single customer, use the **[Accounts](../dashboard-insights/accounts-view.md)** experience. ## Invite the rest of the team When core data looks right, add teammates from **Admin Settings → Team permissions** and give them **Account user** or **Manager** roles for day-to-day work. See [Inviting users](./inviting-users.md). ## Related - [Prerequisites](./prerequisites.md) - [FunnelStory 101](https://docs.funnelstory.ai/) - [Data connections overview](../data-connections/overview.md) --- ## Roles and Permissions FunnelStory uses **workspace roles** to decide who can change **configuration**, who can **write** to customer records, and who can use **Renari** and **MCP** integrations. 
Roles are assigned per user in a workspace and can be paired with an optional **designation** (for example **CSM** or **AE**) for reporting—not for authorization by itself. ## Role catalog | Role | Who it is for | Typical capabilities | |------|----------------|----------------------| | **Super Admin** | Tenant owners | Full read/write across workspace, users, billing-level actions where exposed, audit visibility, MCP administration, and destructive operations such as workspace delete where the product supports it | | **Admin** | Operations and RevOps leaders | Manage users, connections, models, audiences, notifications, funnels, and most workspace settings—without some cross-tenant controls reserved for Super Admin | | **Data Admin** | Analytics engineers | Own **Configure** paths: connections, queries, model mappings, and refresh behavior—without full user administration | | **Manager** | Team leads | Read broadly, update **accounts** they coordinate, create tasks, and move work items; often paired with reassignment of needle movers | | **Account User** | CSMs and AMs | Day-to-day portfolio work: assigned accounts, needle movers, predictions, notes, and tasks—no global configuration | | **Renari User** | Copilot-only collaborators | **Renari** conversations, read access to accounts and engagement surfaces the role allows—intentionally narrow for vendors or executives who should not edit models | | **Access Token** | Automation identities | Minimal read used for scoped API or MCP-style access—treat like a service account | Exact permission strings evolve with the product; when in doubt, try the action in a staging workspace or ask a **Super Admin** to confirm. ## Designations **Designation** (CSM, AE, SE, CSE, or custom values) labels a user for **filters and reporting**. It does not, by itself, grant extra privileges—pair it with the correct **role**. 
## Account write vs configuration write Most customer-facing teams care about this split: - **Configuration write** — changing connections, models, property mappings, and workspace-wide rules. Restricted to **Super Admin**, **Admin**, and **Data Admin** depending on the screen. - **Account write** — updating account fields, assignments, notes, tasks, and needle mover state. Granted to roles that carry **accounts:write** (for example **Admin**, **Data Admin**, **Manager**, **Account User** in typical setups). ## MCP access **MCP** (Model Context Protocol) clients use dedicated permissions for **read** and **write** of MCP resources. **Super Admin** and **Admin** usually configure which tools and datasets an assistant may call; end users consume MCP through approved clients. See **[MCP server overview](../platform/mcp-server/overview.md)** for the customer-facing introduction. ## Changing roles **Super Admins** and **Admins** can change another user’s role from **Admin Settings → Team permissions**. The product prevents removing the **last Super Admin** to avoid lockout. ## Shared teams **Shared teams** (user groups used with Account model **`team_id`**) are created and maintained under **Admin → Team → Shared Teams**, alongside invites on the same screen—only roles that can use **Admin** settings see them, not **Manager** or **Account user**. See **[Shared teams](./shared-teams.md)**. ## Related - [Shared teams](./shared-teams.md) - [Inviting users](./inviting-users.md) - [SSO](../platform/sso.md) - [Audit log](../platform/audit-log.md) --- ## Shared teams **Shared teams** are groups of **workspace users** you define in FunnelStory. When an account’s data includes a matching **team identifier**, every member of that team can be **auto-assigned** to the account after an **Account model refresh**—useful when portfolios are owned by pods or regions rather than a single CSM. This page is about **Shared teams** in **Admin → Team**. 
It is not the same as the **[Microsoft Teams](../data-connections/communication/ms-teams.md)** data connection used for notifications and channels. ## Where to configure teams 1. Open **Admin Settings → Team Permissions**. 2. Use the tabs: - **Team Members** — workspace users, invites, roles, and designations (see **[Inviting users](./inviting-users.md)**). - **Shared Teams** — create and manage named teams and their members. ## Create a shared team On **Shared Teams**: 1. Choose **Create team**. 2. Set **External Team ID** — a stable string you will reuse in your Account model (for example `portfolio-east` or a value that already exists in your warehouse). This value cannot be changed after the team is created; pick something durable. 3. Set **Name** and optional **Description** for humans reading the list. 4. Add **members** (workspace users) to the team. The **External Team ID** is what FunnelStory matches against the account property **`team_id`** (see **[Field reference](./configuring-accounts/05-field-reference.md)**). ## Link accounts with `team_id` In your **Account model**, map a column or expression to the reserved property **`team_id`**. For each account row, the value must **exactly match** the shared team’s **External Team ID** (same spelling and casing as stored in FunnelStory). - If the value is empty or does not match any shared team, no team-based assignments are created from that property. - If it matches, on **Account model refresh** FunnelStory assigns **every member** of that team to the account. **Coexistence with CSM / AE properties**: Role-based auto-assignment from fields like **`csm_email`** is separate. Both can apply to the same account when the model defines both. **Manual assignments**: If someone is already **manually** assigned to an account, FunnelStory does not overwrite that with a team-based auto-assignment for the same user on that account. 
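The matching and preservation rules above can be sketched as a simplified model — this is not FunnelStory internals, just the documented behavior (exact `team_id` match, every team member assigned, manual pairs preserved), with made-up IDs and emails:

```python
# Simplified model of team-based auto-assignment on Account model refresh.
shared_teams = {
    # External Team ID -> workspace members of that shared team
    "portfolio-east": {"jane@acme.com", "ravi@acme.com"},
}

accounts = [
    {"account_id": "a1", "team_id": "portfolio-east"},
    {"account_id": "a2", "team_id": "Portfolio-East"},  # casing differs: no match
    {"account_id": "a3", "team_id": ""},                # empty: no team-based assignment
]

manual = {("a1", "lee@acme.com")}  # manually assigned (account, user) pairs

def refresh_assignments(accounts, shared_teams, manual):
    assigned = set(manual)  # manual pairs survive the refresh untouched
    for acct in accounts:
        # Exact-match lookup: spelling and casing must equal the External Team ID.
        members = shared_teams.get(acct["team_id"], set())
        for user in members:
            assigned.add((acct["account_id"], user))
    return assigned

result = refresh_assignments(accounts, shared_teams, manual)
```

In this sample, `a1` ends up with three assignees (the manual one plus both team members) while `a2` and `a3` get none, which is exactly the failure mode to check for when a `team_id` value's casing drifts from the External Team ID.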
## When assignments update Team-based assignments are reconciled when the **Account model** runs its refresh (scheduled or manual **Refresh model**). After changing teams, members, or `team_id` values in your source data, run a refresh and spot-check assignments (see **[Verification](./configuring-accounts/08-verification.md)**). ## Deleting a shared team Deleting a team removes the team definition from the workspace. **Auto-assigned** account memberships that came from that team are cleared on the **next Account model refresh** after deletion. ## Related - [Inviting users](./inviting-users.md) — Team Members tab, roles, invites - [Field reference](./configuring-accounts/05-field-reference.md) — reserved properties including **`team_id`** - [Verification and troubleshooting](./configuring-accounts/08-verification.md) — checklist after changes - [Workspace management](../platform/workspace-management.md) — Admin vs Configure --- ## Account Hierarchy Overview # Account hierarchy overview **Account hierarchy** in FunnelStory links customer organizations into **parent–child trees** using your own stable `account_id` values. Alongside **[products](/data-models/products)** on each account, this lets you model enterprises with subsidiaries, holding companies, regional rollups, and **multi-product** relationships in one workspace. This section explains how hierarchy is represented, how it affects revenue rollups, **product-level funnels**, and **product-level predictions**, and what you need in your data models. ## What hierarchy is - Each account row still has a unique **`account_id`** (the model key). - Optional **`parent_account_id`** points to another row’s **`account_id`** in the **same** Account model. FunnelStory resolves that into an internal parent link when the parent exists. - You can model **multiple levels** (for example, global parent → region → sold-to account). See [Setting up hierarchy](./setting-up-hierarchy) for rules and edge cases. 
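For intuition, a multi-level tree such as global parent → region → sold-to account is just rows whose `parent_account_id` points one level up. A minimal sketch with invented ids:

```python
# Illustrative account rows; field names follow the Account model, ids are made up.
rows = [
    {"account_id": "globex-global", "parent_account_id": None},
    {"account_id": "globex-emea",   "parent_account_id": "globex-global"},
    {"account_id": "globex-de",     "parent_account_id": "globex-emea"},
]

def children_of(rows, parent_id):
    """Direct children: rows whose parent_account_id points at parent_id."""
    return [r["account_id"] for r in rows if r.get("parent_account_id") == parent_id]
```

Here `globex-global` has one direct child (`globex-emea`), which in turn parents `globex-de`; top-level rows simply leave `parent_account_id` empty.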
Hierarchy is **data-driven**: you express it in your Account model query and mappings, then refresh the model. Step-by-step mapping and examples live in **[Configuring the account model](/getting-started/configuring-accounts/introduction)** and **[Advanced configuration: parent–child](/getting-started/configuring-accounts/advanced-configuration#define-parent-child-account-relationships)**. ## How products fit in **Products** are a separate optional **[Products model](/data-models/products)**. Accounts carry a **`products`** list (JSON array of `product_id` values) that must align with that catalog. Product-level funnels and predictions use those ids to scope behavior **per product** while hierarchy scopes **which accounts** roll up to which parent. See also: **[Accounts model](/data-models/accounts-model)**. ## Why it matters in the product | Area | Role of hierarchy / products | |------|------------------------------| | **Revenue & renewal** | Parent accounts can show **rolled-up** revenue and renewal timing from **direct** child accounts (`aggregate_amount`, `earliest_child_expiry`). Details: [Parent account rollups](./parent-rollups). | | **Journeys** | **Product-level funnels** define stages **per product**; filters evaluate in the context of each account (and its data). See [Product-level funnels](./product-level-funnels). | | **Predictions** | When your workspace uses multi-product and hierarchy features, FunnelStory can maintain **per-product** prediction scores for non-container accounts and roll container parents separately. See [Product-level predictions](./product-level-predictions). | | **Metrics** | **[Account metrics](/data-models/account-metrics)** can be product-scoped; hierarchy changes **which accounts** appear under a parent in the UI. | ## Workspace capability and UI The full **multi-product and account hierarchy** experience in the app (for example hierarchy panels on account revenue views and related layouts) is enabled **per workspace**. 
If you do not see hierarchy or per-product controls, confirm with your FunnelStory admin that this capability is enabled for your workspace. ## Where to go next 1. [Setting up hierarchy](./setting-up-hierarchy) — `parent_account_id`, `is_container`, validation. 2. [Product-level funnels](./product-level-funnels) — stages, activation, refresh. 3. [Product-level predictions](./product-level-predictions) — scores and data dependencies. 4. [Parent account rollups](./parent-rollups) — `aggregate_amount` and `earliest_child_expiry`. --- ## Parent Account Rollups # Parent account rollups When accounts have **child rows** linked by **`parent_account_id`**, FunnelStory maintains two helpful rollup fields on the **parent** (and on any account that has children): | Field | Meaning | |-------|---------| | **`aggregate_amount`** | This account’s **`amount`** plus the **`amount`** of each **direct child** account. | | **`earliest_child_expiry`** | The **earliest** `expires_at` among: this account’s own **`expires_at`** (if set) and each **direct child’s** `expires_at`. | These are updated during **Account model refresh** after child rows are upserted. They are intended for **revenue dashboards**, renewal views, and filters that reference subscription-style totals on parents. :::info Direct children only Rollups use **one hierarchy level**: each child’s own **`amount`** and **`expires_at`**, not the child’s `aggregate_amount` and not **grandchildren**. Example: if `Global → Region → Account`, **Global’s** `aggregate_amount` includes **Region’s** `amount` only, **not** the leaf Account’s amount unless that amount is included in Region’s row in your source query. If you need **full-subtree** revenue on the top node, aggregate in your **warehouse query** (for example sum all descendant contracts into the top row’s `amount`) or flatten the tree for reporting. 
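A sketch of the direct-children rule in Python (illustrative only, not FunnelStory internals):

```python
def rollup(account, direct_children):
    """Compute the two rollup helpers from an account row and its direct children.

    Uses each child's own `amount` and `expires_at` only -- never a child's
    `aggregate_amount`, and never grandchildren.
    """
    aggregate_amount = (account.get("amount") or 0) + sum(
        (c.get("amount") or 0) for c in direct_children
    )
    # Earliest expiry among this row's own expires_at (if set) and each
    # direct child's expires_at; ISO date strings compare chronologically.
    expiries = [account.get("expires_at")] + [c.get("expires_at") for c in direct_children]
    expiries = [e for e in expiries if e is not None]
    earliest_child_expiry = min(expiries) if expiries else None
    return aggregate_amount, earliest_child_expiry
```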
::: ## How this appears in the product In list and detail experiences, subscription-style columns on a parent may show **`aggregate_amount`** instead of only the parent row’s native `amount`, and renewal timing may reflect **`earliest_child_expiry`**. Exact column labels follow the FunnelStory UI for your workspace. Underlying account fields remain: - **`amount`** / **`expires_at`** — what you mapped from your source for **that** row. - **`aggregate_amount`** / **`earliest_child_expiry`** — **computed** helpers for parent-style reporting. ## Configuring your data for sensible rollups 1. Put the **contract or ARR** amounts you want summed at the parent on **child** accounts when children are the economic sold-to entities; the parent **`amount`** can be zero or corporate-level ARR depending on how you report. 2. Set **`expires_at`** on children when renewals happen at the child; the parent’s **`earliest_child_expiry`** then surfaces the **next** renewal among that account and its **direct children**. 3. Use **`is_container`** on pure grouping parents so teams know not to expect standalone product motion on that row — see [Setting up hierarchy](./setting-up-hierarchy). ## Related - [Account hierarchy overview](./overview) - [Setting up hierarchy](./setting-up-hierarchy) - [Advanced configuration](/getting-started/configuring-accounts/advanced-configuration#define-parent-child-account-relationships) --- ## Product-Level Funnels # Product-level funnels **Product-level funnels** are **journey definitions scoped to a single catalog product** (`product_id` / internal product record). They let each product have its own stages and entry rules while still evaluating **accounts** (and their activities, subscriptions, and metrics) in your workspace. They complement the workspace **Funnels** area in the FunnelStory app. 
Use that section for account- or workspace-level journey definitions (including **[timeline vs last match](../../platform/funnels/evaluators.md)** evaluation); **product funnels** appear in product- and hierarchy-aware account views when your workspace has **multi-product and account hierarchy** enabled. ## Prerequisites 1. **[Products model](/data-models/products)** configured and refreshed. 2. Accounts list **`products`** (JSON array of `product_id`) consistent with that catalog — see [Setting up hierarchy](./setting-up-hierarchy). 3. Data you want stages to filter on (subscriptions, activities, **[account metrics](/data-models/account-metrics)**, and so on) modeled and refreshing as usual. ## What you configure For each product funnel: - **Product** — Exactly one product per funnel definition. - **Name** — Label for your team. - **Stages** — Ordered steps. Each stage has a **filter** (same conceptual filter builder as elsewhere in FunnelStory). **Stages with no filter conditions do not count** as active funnel stages for evaluation. **One active funnel per product** — Activating a funnel for a product while another is active will fail until you **deactivate** the other. Draft or inactive funnels can coexist. ## Product funnels and evaluators **Workspace Funnels** (main **Funnels** section) can use either **timeline** or **last match** evaluation. That choice changes whether progression is driven by **historical replay** or by **which stage filters match right now**. See **[Evaluators: timeline vs last match](../../platform/funnels/evaluators.md)** for a full comparison and when to use each. **Product-level funnels** work differently: on each refresh, FunnelStory evaluates your stage filters against **current** account data and assigns each account to the **rightmost** (latest in order) stage whose filter matches. That is the same **placement rule** as **last match** on workspace funnels. 
Product funnels **do not** replay the full timeline the way a workspace funnel set to **timeline** does. If your configuration UI shows an evaluator field on a product funnel, it is stored with the funnel; **stage placement on refresh** still follows this **current-data, rightmost match** behavior. ## How evaluation and refresh work - FunnelStory evaluates product funnels on a **background schedule**, and you can trigger work from the app when you **activate** or **refresh** a funnel (depending on your permission level). - After you change stage filters or activation, run a **refresh** on the funnel (or wait for processing) so account stage membership stays aligned with current data. - Stage **statistics** (accounts entered, in stage, exited) are available for reporting in the funnel experience. Default stage **names** suggested in the product include Acquisition, Activation, Realized Value, Growth, and Purchase Intent — you can rename and replace them; what matters is the **filter** on each stage you want to enforce. ## Hierarchy and product funnels - Funnel membership is still **per account** (and driven by that account’s data). - **Parent** rows do not automatically “inherit” a child’s funnel position unless your stage filters explicitly encode that. - If parents are **Container** rollups only, you typically define stages for **child** accounts that carry the product usage signals. ## Viewing product funnels With multi-product and hierarchy enabled, product funnel progress appears alongside **per-product** revenue and journey context on account views and related dashboards. Exact placement can evolve with the product; look for product selectors and funnel panels on the **Accounts** / revenue experience for your workspace. 
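The placement rule described above (evaluate each stage filter against current account data and keep the rightmost match) can be sketched as follows; stage names and predicates here are invented for illustration:

```python
def place_account(stages, account):
    """Assign an account to the rightmost stage whose filter matches current data.

    stages: ordered list of (name, predicate) pairs. Stages with no filter
    conditions should simply be left out, mirroring the rule that they do not
    count as active funnel stages. Returns a stage name, or None if no match.
    """
    placed = None
    for name, predicate in stages:  # evaluate in order; a later match wins
        if predicate(account):
            placed = name
    return placed
```

An account matching both an early and a late stage lands in the later one; an account matching nothing is in no stage of that product funnel.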
## Related - [Account hierarchy overview](./overview) - [Funnel evaluators (timeline vs last match)](../../platform/funnels/evaluators.md) - [Product-level predictions](./product-level-predictions) - [Products model](/data-models/products) --- ## Product-Level Predictions # Product-level predictions When your workspace is set up for **multi-product** analysis, FunnelStory can store **prediction scores per account _and_ per product** — in addition to the overall account-level prediction you may already use. Product-level scores help teams answer: “How is this account trending **on Product A** vs **Product B**?” especially inside **[hierarchy](./overview)** where a parent may aggregate several subsidiaries with different product mixes. ## Prerequisites 1. **Prediction / ML configuration** is enabled and maintained for your workspace. 2. **[Products model](/data-models/products)** with stable **`product_id`** values. 3. Accounts carry a **`products`** array listing which product ids apply to that account (see [Setting up hierarchy](./setting-up-hierarchy)). Ids that are not in the Products catalog are ignored. **Accounts with no valid products do not get product-level predictions.** 4. Rich signals improve all predictions: **[Conversations](/data-models/conversations)**, **[Notes](/data-models/note)**, support integrations, **[account metrics](/data-models/account-metrics)** (including **product-scoped** metrics where relevant), and subscriptions. ## What gets stored For each qualifying account, FunnelStory can persist: - **Latest** per-product score and history for charts and trends. - **Factors and recommendations** attached to that product-scoped run, analogous to account-level predictions. ## Container parents **Container** accounts (see [Setting up hierarchy](./setting-up-hierarchy)) are **skipped** for normal per-product scoring because they are not leaf “customer motion” rows. 
After scores are computed for real accounts, FunnelStory can **roll up** container parents from children for **account-level** container behavior. Treat **Container Parents** as: “this row’s intelligence is mostly the sum / story of its children.” ## Hierarchy tips - Put **`products`** on the accounts that actually generate telemetry for that product (often **child** sold-to accounts). Parents then benefit from rollups and summaries without duplicating product ids across every intermediate node—unless your business truly attributes the same product to every level. - Keep **`product_id`** consistent across subscriptions, metrics, and `products` on the account. ## Related - [Account hierarchy overview](./overview) - [Parent account rollups](./parent-rollups) - [Account metrics](/data-models/account-metrics) --- ## Setting Up Hierarchy # Setting up account hierarchy Use your **Account model** to define parent–child relationships and optional **container** rows. **Primary references:** [Configuring the account model](/getting-started/configuring-accounts/introduction), [Account model field reference](/getting-started/configuring-accounts/field-reference), and [Advanced configuration: parent–child](/getting-started/configuring-accounts/advanced-configuration#define-parent-child-account-relationships). ## Map `parent_account_id` 1. In **Configure → Models → Accounts**, map a column to the **`parent_account_id`** property. 2. Values must be the **`account_id`** of the parent row, **not** the parent’s name or CRM name. 3. The parent row **must appear in the same Account model** (same query result set). If the parent is missing, FunnelStory cannot attach the child to a parent. 4. Leave **`parent_account_id`** empty (or omit it) for **top-level** accounts. After you **save** and **refresh** the Account model, open an account in the app and confirm the parent link or hierarchy view (when multi-product and hierarchy are enabled for your workspace). 
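A hedged sketch of how a row's parent link resolves under these mappings (illustrative Python; FunnelStory's internals differ):

```python
def resolve_parent(row, all_ids):
    """Resolve one row's parent link per the mapping rules for the Account model.

    all_ids: the set of every account_id in the same Account model result set.
    Returns the parent account_id, or None for a top-level account.
    """
    parent = row.get("parent_account_id")
    if not parent:
        return None  # empty or omitted -> top-level account
    if parent == row["account_id"]:
        return None  # self-parent is ignored; treated as top-level
    if parent not in all_ids:
        return None  # parent must appear in the same model, by account_id
    return parent
```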
## Rules the product enforces | Rule | Behavior | |------|----------| | **Self-parent** | If `parent_account_id` equals this row’s **`account_id`**, the parent link is **ignored** (treated as a top-level account). | | **Unknown parent** | If the value does not match any **`account_id`** in the model, the account has **no** parent in FunnelStory. | | **Circular chains** | **Avoid** A → B and B → A (or longer rings); they produce an inconsistent tree and unpredictable rollups. | | **Multi-level trees** | Allowed: a child can point to a parent that is itself a child of another account, and so on. Rollups use **direct children only**; see [Parent account rollups](./parent-rollups). | ## Container accounts Some organizations use a **parent row only as a rollup bucket** (holding company, “All APAC,” internal grouping) with **no** standalone revenue motion. For those rows, add a custom property: - **`is_container`**: set to a truthy value FunnelStory recognizes (`true`, `"true"`, `"1"`, or non-zero numbers). See [Advanced configuration](/getting-started/configuring-accounts/advanced-configuration#container-accounts) for SQL examples. Effects at a high level: - **Predictions**: container accounts are not scored like leaf customers; scores for containers can be **derived from children** after child accounts are updated. - **Other features**: several flows treat containers differently from sellable accounts; keep `is_container` accurate so parents that are real customers stay scored and messaged normally. If a parent is both a **real customer** and a **grouping** parent, leave `is_container` false and use normal **`amount`** / **`expires_at`** on that row. ## Products on accounts (`products`) For multi-product hierarchy, each account should expose **`products`** as a **JSON array** of **`product_id`** strings that exist in your **[Products](/data-models/products)** model. Unknown product ids are dropped on ingest. 
This list drives product-scoped funnels, filters, and predictions. See [Combine multiple contracts](/getting-started/configuring-accounts/advanced-configuration#combine-multiple-contracts-or-deals-for-a-single-account) for patterns that build `products` from joined deal or subscription data. ## Verification checklist 1. **Parent exists** for every non-empty `parent_account_id`. 2. **No cycles** in parent pointers. 3. **`account_id`** remains unique and stable across refreshes. 4. **`products`** ids match the Products model when you use product funnels or predictions. 5. Run **Refresh model**, then spot-check a deep node and a parent in **Accounts** / revenue views. If something looks wrong after refresh, use [Verification & troubleshooting](/getting-started/configuring-accounts/verification). ## Related - [Account hierarchy overview](./overview) - [Parent account rollups](./parent-rollups) - [Accounts data model](/data-models/accounts-model) --- ## Accounts An **account** in FunnelStory is the **customer organization** you sell to, renew, and measure: the anchor for revenue, health, ownership, and every downstream model (users, subscriptions, product usage, support, meetings, and notes). You reach for the account lens whenever you need a single place to see how one customer is doing across systems and time. ## How accounts show up in the product Your workspace’s **[Accounts model](../data-models/accounts-model.md)** defines the canonical list: each row needs a stable **`account_id`**, plus the properties you map from CRM, warehouse, or other connections. On each **refresh**, those rows sync into the **Customer Intelligence Graph** and power the Accounts view, account detail, predictions, needle movers, tasks, audiences, and more. You do not need to memorize every property here. **[Configuring the account model](../getting-started/configuring-accounts/01-introduction.md)** is the operational guide for queries, mappings, verification, and troubleshooting. 
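As a rough illustration of what one synced row can carry, here is a hypothetical account; the field names follow the model documentation, but every value is invented:

```python
# Hypothetical account row after an Account model refresh. Field names come
# from the Accounts model reference; values are made up for illustration.
account_row = {
    "account_id": "acme-001",          # stable unique key (mandatory)
    "name": "Acme Corp",
    "domain": "acme.com",
    "amount": 120000,                  # contract/ARR mapped from your source
    "expires_at": "2026-01-31",        # renewal timing
    "csm_email": "csm@example.com",    # drives role-based auto-assignment
    "parent_account_id": None,         # top-level account, no parent
    "is_container": False,             # a real customer, not a grouping row
    "products": ["prod-analytics"],    # ids must exist in the Products model
}
```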
## Properties and “what you see” Accounts carry a mix of **standard fields** (identity, contract timing, revenue signals, assignments) and **custom properties** your team maps for filtering and workflows. Well-populated **name**, **domain**, and **created_at** (where available) make search, timelines, and handoffs usable on day one. Hierarchy-specific fields such as **`parent_account_id`** and **`is_container`** change how rollups and multi-product experiences behave; those concepts live in the hierarchy section below rather than as a long field list on this page. ## Lifecycle in plain language 1. **Connect** data sources and define the Accounts model query and mappings. 2. **Save and refresh** so FunnelStory ingests current account rows. 3. **Use** account screens and signals; as source systems change, the next refresh updates FunnelStory’s copy. If counts or fields look wrong after a refresh, treat it as a **model or source-data** issue first and walk **[Verification & troubleshooting](../getting-started/configuring-accounts/08-verification.md)** before changing product configuration elsewhere. ## Hierarchy and container accounts Many B2B businesses sell to **enterprises with subsidiaries** or track **multiple products** on one customer. FunnelStory represents that with optional **parent–child links** between account rows (via **`parent_account_id`**) and optional **container** rows (**`is_container`**) that roll up children for reporting without duplicating product-level scores. How rollups, **product-level funnels**, and **product-level predictions** interact with **[Products](../data-models/products.md)** is documented in the hierarchy guides—start with **[Account hierarchy overview](./account-hierarchy/overview.md)** and **[Setting up hierarchy](./account-hierarchy/setting-up-hierarchy.md)**. ## Related - **[Accounts model](../data-models/accounts-model.md)** — mandatory model, field pointers, and links to configuration. 
- **[Customer Intelligence Graph](./customer-intelligence-graph.md)** — how accounts sit in the broader graph. - **[Predictions](./predictions.md)** and **[Needle movers](./needle-movers.md)** — account-scoped intelligence built on account data. - **[Data models](./data-models.md)** — how models relate to one another at a high level. - **[Notes](./notes.md)** — capturing context on accounts. --- ## Customer Intelligence Graph The Customer Intelligence Graph is FunnelStory's organized context graph for B2B customer data. It connects customer entities — accounts, contacts, meetings, activities, and interactions — with internal workspace entities like teams, notes, and assignments, and continuously computes derived intelligence on top of that unified context. Every AI agent, copilot, and workflow in FunnelStory reasons against this graph rather than querying raw source systems directly. ## What the Graph Contains The graph is structured in two layers that work together. **Knowledge Layer** The knowledge layer holds the raw and normalized facts about your customers and your workspace: - **Customer objects** — accounts, contacts, interactions, product usage events, support tickets, conversations, meetings, emails, and signals sourced from your connected tools - **Workspace objects** — teams, CSM assignments, workflows, recorded actions, notes, and internal configurations Every object is timestamped, source-attributed, and versioned. When source data is incomplete or contradictory — a missing champion contact, duplicate account records, empty CRM fields — those gaps and conflicts surface explicitly as graph properties rather than being silently dropped. Wrong or empty fields are preserved with their provenance so you can see where data came from and why it looks the way it does. 
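As a loose illustration of that idea (this is not FunnelStory's actual schema), a knowledge-layer object might carry its provenance and its gaps explicitly:

```python
# Hypothetical shape of a knowledge-layer object: timestamped, source-attributed,
# versioned, with gaps surfaced as explicit properties rather than dropped.
contact = {
    "type": "contact",
    "value": {"name": "Dana Lee", "role": "champion"},
    "source": "salesforce",                 # where this value came from
    "observed_at": "2025-04-01T12:00:00Z",  # when it was ingested
    "version": 3,
    "gaps": ["email_missing"],              # explicit, queryable gap properties
}

def has_gap(obj, gap):
    """Agents and workflows can act on a gap instead of silently ignoring it."""
    return gap in obj.get("gaps", [])
```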
**Derived Intelligence Layer** The derived intelligence layer sits on top of the knowledge layer and holds everything FunnelStory computes from the raw context before any agent or user queries it: - **Sentiment** — tone and themes extracted from conversations, tickets, and emails - **Events** — detected moments of significance (a spike in usage, a drop in engagement, a contract milestone) - **Prediction scores** — churn risk, renewal likelihood, expansion probability, and confidence ratings - **Cohort factors** — how an account compares to peers with similar profiles and histories - **Process outputs** — results from agentic workflows and agent runs - **Norms and gaps** — baselines for what "normal" looks like for an account, and deviations from those baselines This separation is deliberate. By computing intelligence before the agent touches raw data, FunnelStory ensures that every query returns grounded, validated outputs — not inferences made on the fly from noisy source data. ## How the Graph Is Built FunnelStory connects to your data sources through over 37 connectors spanning CRMs, data warehouses, product analytics platforms, support tools, communication platforms, and enrichment providers. Structured data (usage records, CRM fields, tickets) and unstructured data (call transcripts, chat logs, documents, notes) are ingested and fused into a single unified model at ingestion time — not joined at query time. This patent-pending **information fusion architecture** is what makes the graph reliable for AI reasoning. Parallel data silos that must be reconciled at runtime introduce latency, inconsistency, and hallucination risk. By resolving everything at ingestion, the graph presents a single coherent view that agents can trust. **Handling Messy Enterprise Data** Enterprise data is rarely clean. CRM records have custom fields, duplicates, and partial entries that vary by tenant. 
FunnelStory's ingestion pipeline handles this through: - **Schema alignment and normalization** — source fields are mapped to canonical entity types and relationships regardless of how each system structures them - **Entity resolution and deduplication** — where connectors and rules allow, duplicate records for the same account or contact are resolved into a single entity - **Provenance tracking** — every record carries its source, timestamp, and version, so you can trace any value back to where it came from The graph does not erase bad source data. If a CRM field is wrong, that wrong value is preserved with its source attribution. If a champion contact is missing, that gap is an explicit property of the account node — something agents and workflows can act on — rather than a silent omission. ## Querying the Graph The graph uses a **semantic database** with a SQL interface and a federated query engine. It is designed specifically for agent-driven access: virtual tables present the graph in a familiar relational format while the engine manages the underlying complexity of fetching, joining, and computing data across different systems. The database provides: - **Workspace isolation** — each FunnelStory workspace's data is fully isolated - **Encryption** — data is encrypted at rest and in transit - **Embedding helpers** — built-in support for document embeddings used in semantic search and unstructured data queries **Renari and AI Agents** query the graph through this interface, retrieving pre-computed intelligence and raw context to answer questions, generate briefings, and take actions. Because intelligence is pre-computed at the derived layer, agents retrieve fast, validated answers rather than performing expensive joins or LLM-side reasoning over raw data. **The MCP Server** exposes the graph's tools and data to external AI environments — Claude Desktop, Cursor, ChatGPT Enterprise, and any MCP-compatible client — using the same semantic database interface. 
External copilots get the same grounded context that Renari uses internally. ## Related - [How FunnelStory Works](./overview.md) — how the graph fits into the three-layer platform architecture - [Predictions](./predictions.md) — how prediction scores in the derived intelligence layer are computed and validated - [AI Agents](/ai/overview) — Renari, agent automation, and the MCP Server - [Data Connections](../data-connections/overview.md) — the connectors that feed data into the graph --- ## Data models FunnelStory **data models** define how data from your connections becomes structured entities—accounts, users, subscriptions, activities, tickets, and more—with scheduled refreshes into the product. This concept is documented in detail under **[Data models overview](/data-models/overview)**. Start there for how models relate to connections, how to configure them in the UI, AI-suggested models, refresh behavior, and a full list of model types with links to each reference page. --- ## Needle Movers A **Needle Mover** is a leading indicator of churn or expansion detected 3–9 months before renewal — early enough to act decisively. Where traditional health scores tell you something has already gone wrong, Needle Movers surface the signals that precede deterioration, giving your team the runway to intervene while it still matters. ## Why Timing Is Everything Traditional health scores are lagging indicators. By the time a score turns red — overall product usage has dropped, your champion has gone dark, engagement across the account has collapsed — the window for effective intervention has largely closed. The chance of a successful recovery at 1–2 months before renewal is below 15%. The signals that actually predict churn appear much earlier: a champion's tone shifting from positive to neutral in emails, a competitor mentioned favorably in a QBR, a key team quietly abandoning a sticky feature. 
These are the moments FunnelStory detects and surfaces as Needle Movers — during the **Value Phase**, 6–9 months out, when the chance of successful intervention exceeds 75%. | | Health Scores | Needle Movers | |---|---|---| | **Indicator type** | Lagging | Leading | | **Purpose** | Insights into past performance | Predictive, allows timely adjustments | | **When detected** | After deterioration has occurred | 3–9 months before renewal | | **Intervention window** | < 15% success rate | > 75% success rate | ## How Needle Movers Are Detected On a daily — and frequently hourly — basis, FunnelStory AI analyzes thousands to millions of data points across conversations, usage behavior, and third-party data. It looks for topics, moments, business signals, and usage patterns that have historically preceded churn, expansion, or renewal events in your customer base. FunnelStory powers this through two patented technologies — **AI Customer Journeys** and **AI Health Scoring** — combined with temporal prediction modeling. The detection pipeline runs end-to-end across data ingestion, customer journey mining and NLP, feature engineering, ML modeling, and collaborative workflow execution. The temporal analysis learns your specific business DNA: what combinations of signals preceded churn for customers like this one, at this stage of their journey, with this usage profile. 
## Needle Mover Types Needle Movers are organized by type, surfaced as tabs across the top of the list view: | Type | What it captures | |---|---| | **Pricing** | Concerns about cost, subscription value, or competitive pricing comparisons | | **Feature Requests** | Gaps between what the product does and what the customer needs | | **Task, Issue or Bug** | Unresolved product problems creating friction or distrust | | **Personnel Change** | Champion departures, new economic buyers, team restructures | | **Competitor** | Mentions of competitors, evaluation activity, or favorable comparisons | Additional types can be configured per workspace based on your business. ## Impact Classification Every Needle Mover is classified by direction and severity, visualized as three icons at the left of each row: - **Opportunity** — green dollar signs ($$$) indicate an expansion or retention opportunity - **Risk** — red triangles (△△△) indicate a churn or contraction risk Severity is shown by the number of filled icons across a three-point scale: **High** (all three filled), **Med** (two filled), **Low** (one filled). This makes triage instant: three filled red triangles is a high-priority risk; a single green dollar sign is a low-priority opportunity. ## The Needle Movers List The main **Needle Movers** view — *"Search, track, and prioritize needle movers across all accounts"* — displays every open signal as a sortable table. ![Needle Movers list view showing impact icons, titles, tags, company, assignee, and last activity](/img/needle-movers/list-view.png) Each row shows: - **Impact icons** — Retain or Churn direction with severity level - **Title** — a descriptive, AI-generated summary of the signal - **Tags** — type, comment count, Retain/Churn label, source count, account status (e.g. 
Account Expiring) - **Company** — the account name, with "+N more" when a signal spans multiple accounts - **Assignee** — current owner, or unassigned - **Last Activity** — the date of the most recent update **Filtering and search:** The toolbar provides fast controls to narrow the list: - **My Accounts** — toggle to show only your assigned accounts (default on login) - **Select Audiences** — filter by a specific set of accounts; the account picker shows each account's name, current journey stage, engagement frequency (Daily/Weekly/Monthly), and activity count - **Impact** — filter by Risk or Opportunity, and by severity (High/Med/Low) - **Open / Closed** — filter by state (default: Open) - **Assignee** — filter by owner - **Account** — filter to a specific account - **All Time** — filter by date range You can also search by keyword across needle mover titles, account names, and content. ## The Needle Mover Detail Clicking a row opens the detail view, navigable with arrows (**1 / 217 Needle Movers**) so you can move through your queue without returning to the list. ![Needle Mover detail view showing AI Summary, Renari input, Overview panel, and Activity Timeline](/img/needle-movers/detail-view.png) **Left panel — Overview:** - **AI Summary** — a synthesized explanation of why this signal was detected, drawn from the source data. Expandable with "Show more." - **Ask Renari Anything** — an inline Renari input, pre-loaded with the context of this needle mover. Ask follow-up questions, request a draft email, or get a recommended next action without leaving the view. - **Overview metadata** — Type, State (Open/Closed), Company, Assignee, Added On, Last Activity, and Source **Right panel — Activity Timeline:** A chronological record of everything associated with this Needle Mover, sortable ascending or descending. Each entry shows: - **Source and system** — e.g. 
*Chat (Postgres)*, *Ticket (Postgres)* — what type of interaction it came from and which connection surfaced it - **Category** — a label for the topic (e.g. *General System Issue*, *Pricing Discussions*, *Team Changes*) - **Excerpts** — the specific phrases or quotes that triggered detection - **Participants** — who was involved in the conversation - **Summary** — a short AI-generated synthesis of that interaction's relevance - **Details** — the full source text, expandable inline The first entry on every timeline is always **"Needle mover created"**, timestamped to when FunnelStory first detected the signal. ## Taking Action At the bottom of the detail view, a comment and task input — *"Type to add a task, use @ to mention users and / for more options"* — lets you collaborate, assign, and act directly from the needle mover. Supported actions: 1. **Assign** — assign to yourself or a team member from the **Assignee** dropdown; all changes are recorded on the timeline 2. **Discuss internally** — comment and @mention colleagues to align before engaging the customer 3. **Close** — change **State** from Open to Closed once the issue is resolved; closed needle movers are removed from the open queue 4. **Create a task** — use `/` in the comment input for task and action options 5. **Email the customer** — initiate a customer-facing message with full needle mover context 6. **Create a CRM task** — push the issue to Salesforce or HubSpot for Account Executive follow-up 7. **Run a Playbook** — execute a recommended response playbook; steps can be manual, agentic, or a combination. FunnelStory automatically suggests a playbook if one has been configured for this type Any action taken on a Needle Mover is recorded on its activity timeline and counts as implicit acceptance. Unaccepted Needle Movers older than 3 months automatically decay from the open view, keeping the workspace focused on current signals. 
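The 3-month decay rule can be pictured as a simple filter over open signals. The sketch below is illustrative only (field names are hypothetical, and 3 months is approximated as 90 days):

```python
from datetime import date, timedelta

today = date(2025, 6, 1)

# Hypothetical open needle movers; "accepted" means some action was taken
needle_movers = [
    {"title": "Pricing concern",   "accepted": False, "created": date(2025, 5, 20)},
    {"title": "Champion departed", "accepted": False, "created": date(2025, 1, 5)},
    {"title": "Bug escalation",    "accepted": True,  "created": date(2024, 11, 1)},
]

# Unaccepted signals older than ~3 months decay out of the open view;
# any action on a signal counts as acceptance and keeps it visible.
open_view = [
    nm for nm in needle_movers
    if nm["accepted"] or today - nm["created"] <= timedelta(days=90)
]

print([nm["title"] for nm in open_view])
# ['Pricing concern', 'Bug escalation']
```

The stale unaccepted signal decays, while the accepted one stays in the queue regardless of age.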
## For CSMs and Account Reps The default view on login filters to your assigned accounts. Use the type tabs and filter toolbar to focus on the signals that need your attention. The navigation arrows in the detail view let you move through your queue efficiently. Renari is available inline on every needle mover for instant context and suggested responses — no need to switch to a separate interface. ## For Customer Success and Sales Leaders Toggle off **My Accounts** to see needle movers across all accounts and team members. From this view you can: - **Monitor risk and opportunity signals** across the full portfolio - **Reprioritize and reassign** — edit the Assignee on any needle mover to shift ownership - **Track team responsiveness** — Last Activity and comment history show how each signal is being worked - **Configure playbooks** for each Needle Mover type, so that when a pricing concern or a personnel change is detected, the right playbook is automatically attached and ready to run ## Relationship to Predictions Needle Movers and [Predictions](./predictions.md) work together but serve different purposes. Predictions give you a score — the probability that an account will churn or expand. Needle Movers give you the *reason* — the specific, sourced signal that is moving that probability. Together they provide both the "what" and the "why" needed to take confident action. 
## Related - [Predictions](./predictions.md) — account-level risk and expansion probability scores - [How FunnelStory Works](./overview.md) — where Needle Movers fit in the pre-computed intelligence layer - [Needle Movers: Notifications](../needle-movers/notifications.md) — configuring alerts for new and updated Needle Movers - [AI Agents](../ai/agents-overview.md) — automating multi-step responses to Needle Mover signals --- ## Notes **Notes** are rich-text records your team adds in FunnelStory—or that arrive from an **[imported Note model](../data-models/note.md)**—and attach to one or more **accounts**. Use them when you want durable, human-written context (QBR prep, escalation history, meeting recap) alongside synced system data. ## How notes work Each note has a **title**, **body**, **note type** (**General** or **Meeting** in the composer), an **event date** (defaults to today; you can backdate), and **one or more associated accounts**. You can add **labels** (multi-select under **Select labels**), **@mention** workspace users in the body (they can get email when mentioned), and save. Notes created in the product are editable; **imported** notes (rows from a Note model refresh) show as read-only in the UI—**Edit** is disabled for imported notes. Imported notes carry stable **`note_id`** values from the source; FunnelStory upserts on refresh so the same external row updates in place. ## Where you see and manage notes ### Notes on an account Open an account, then the **Notes** tab (alongside Properties, Meetings, and Tasks). You can browse notes for that account, open the editor, and create new ones in context. ### All notes Use the workspace **Notes** page (path **`/notes`**) for a cross-account list. The toolbar supports **time range** filtering, narrowing to **one account**, **My accounts** (assignee filter for the signed-in user), pagination, and **New note** to open the composer without starting from a single account. 
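As noted above, imported notes carry a stable `note_id` and are upserted on each refresh. The merge semantics can be sketched like this (a conceptual illustration, not FunnelStory's internal implementation; the row fields mirror the Note model reference):

```python
# Workspace note store, keyed by the stable external note_id
notes = {}

def refresh(imported_rows):
    """Upsert imported rows: a repeated note_id updates the existing
    note in place instead of creating a duplicate."""
    for row in imported_rows:
        notes[row["note_id"]] = row  # insert or overwrite

# First refresh inserts the row; a later refresh with the same
# note_id (edited at the source) updates it in place.
refresh([{"note_id": "crm-101", "title": "QBR prep", "body": "v1"}])
refresh([{"note_id": "crm-101", "title": "QBR prep", "body": "v2 (edited at source)"}])

print(len(notes))                # 1
print(notes["crm-101"]["body"])  # v2 (edited at source)
```

This is why imported notes are read-only in the UI: the source row is the system of record, and the next refresh would overwrite any in-app edit.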
### Tasks When creating or editing a task, use **Add/Remove notes** to **link existing notes** to the task as associations so the task carries the same narrative context. ### Comments on tasks The task detail **Comments** thread is separate from account-level **Notes**; use comments for short discussion on the task itself, and Notes when the write-up should live on the account and be reusable elsewhere. ## Slack, conversations, and imported rows **Slack** content can surface in FunnelStory in different ways: - **Slack connection → conversations** — Adding a **[Slack](../data-connections/communication/slack.md)** connection indexes channel messages and threads for **conversation-style** features tied to accounts. That path uses the **[Conversation model](../data-models/conversations.md)** (a Conversation model may be created for you after you connect—see the Slack connection page). It is oriented around threaded chat intelligence, not the same pipeline as manually authored **Notes**. - **Rows in the Notes list** — The in-app **Notes** surface is fed by notes your users create plus rows ingested through the **[Note model](../data-models/note.md)**. There is **no direct “Slack → Note” control** in the product. To land Slack-derived (or Slack-exported) content as first-class notes—with `note_id`, title, body, timestamps, and optional author **email**—you typically model it in a **queryable source** (warehouse, CRM, etc.) and map it through the **Note** data model. Imported notes stay **read-only** in the UI; change them at the source and refresh the Note model. For **`timestamp`** vs **`created_at`** on imports, see the **[Note model](../data-models/note.md)** field reference. ## Labels and default templates **Labels** help teams categorize notes; pick them in **Select labels** when creating or editing.
The composer loads labels configured for your workspace—if you need **new** label values, work with your **FunnelStory admin** (label maintenance is not a self-serve settings screen in the app today). On the workspace **Notes** page, combine time and account filters with label chips to narrow large histories. Your workspace can store **default templates per note type** (**general** and **meeting**) that pre-fill title, body, type, and labels when someone starts a note. Placeholders in template text: | Placeholder | Replaced with | |-------------|----------------| | `{{ACCOUNT_NAME}}` | The active account name when opening the composer from an account. | | `{{DATE}}` | Today’s date in the workspace display format when the composer loads. | ## Audit trail Creating, updating, or deleting a note generates **audit log** entries your admins can review. See **[Audit log](../platform/audit-log.md)**. ## Related - **[Note model](../data-models/note.md)** — importing notes from CRM or warehouse queries. - **[Accounts](./accounts.md)** — notes attach to account records from your Accounts model. --- ## How FunnelStory Works FunnelStory is built around a modular three-layer architecture — **Get, Set, Go** — designed to take your scattered enterprise data and turn it into grounded, actionable intelligence for every team and every AI system in your organization. 
```mermaid
flowchart BT
  subgraph DS["📦 Data Sources"]
    direction LR
    SD["Structured Data\nProduct usage · CRM · Tickets · Issues"] ~~~ UD["Unstructured Data\nMeetings · Chats · Documents · Notes"] ~~~ TPD["3rd Party Data & Intelligence\nEnrichment · Social · News · Articles"]
  end
  subgraph IL["🧠 GET — Intelligence Layer"]
    direction TB
    subgraph CL["Context Layer"]
      CIG["Customer Intelligence Graph\nAccounts · Users · Products · Activity"]
    end
    subgraph PIL["Pre-computed Intelligence"]
      direction LR
      OOB["Out-of-the-box\nPredictions · Needle Movers · Customer Journey\nCohorts · Temporal Analysis · Revenue Forecast…"]
      CUSTOM["Custom\nAgent Outputs · External Models\nTeam-specific Analysis"]
    end
    CIG --> PIL
  end
  subgraph AAL["⚙️ SET — Agent & Automation Layer"]
    CP["Control Plane\nAI Agents · Workflows · Triggers"]
  end
  subgraph UXAX["🖥️ GO — UX & AX Layer"]
    direction LR
    UX["UX — FunnelStory UI"] ~~~ APPS["UX — External Apps\nSlack · CRM · CSM Tools\nDeveloper Apps via API"] ~~~ AX["AX\nRenari · Claude · Cursor"]
  end
  DS --> CL
  IL --> AAL
  IL --> UXAX
  AAL --> UXAX
```
## Step 1: GET — The Intelligence Layer The Intelligence Layer is the foundation. It ingests your enterprise data and builds the **Customer Intelligence Graph** — a patent-pending graph that synthesizes four pillars (Usage, CRM, Conversations, and Business Intel) into a single source of truth. This graph maps every interaction from the first touchpoint, enabling **Time-Travel Discovery**: the ability to reason about historical patterns, not just current state. The Intelligence Layer has two sub-components that work in sequence.
**Context Layer** FunnelStory connects to your data sources and consolidates all enterprise data into the shared Customer Intelligence Graph: - **Structured data** — product usage, CRM records, tickets, billing and revenue data - **Unstructured data** — meetings, chats, documents, and notes - **3rd party data and intelligence** — enrichment data, social signals, news, articles, and external intelligence feeds The graph links accounts, users, products, and interactions into a single unified context that every downstream layer builds on. **Pre-computed Intelligence Layer** On top of that context, FunnelStory continuously runs intelligence computations and materializes the outputs. Pre-computing intelligence centrally means every team member and every AI agent works from the same shared reality — eliminating individually computed, divergent versions. Intelligence is generated once, not re-derived per query, which eliminates token inflation, reduces latency, and enables role-based access controls over which outputs each user or agent is permitted to see. 
There are two types of pre-computed intelligence: *Out-of-the-box Intelligence* FunnelStory ships a standard set of intelligence models that activate as soon as your data is connected: - **Predictive Scores** — renewal likelihood, expansion signals, and churn risk for every account - **Needle Movers** — leading indicators of risk and opportunity that surface accounts needing immediate attention - **Customer Journey** — where each account sits in their lifecycle at any point in time - **Historical Engagement Patterns** — how an account has engaged with your product, support, and team over time - **Temporal Analysis** — what behavioral patterns look like in defined windows, such as the six months preceding churn - **Topics and Sentiments** — themes and tone extracted from support tickets, calls, and conversations - **Revenue Forecast** — ARR projections based on renewal signals and expansion likelihood - **Cohort Analysis** — performance and behavior comparisons across account groups *Custom Intelligence* The pre-computed intelligence layer is extensible. Teams can add externally computed scores, outputs from custom AI agents, or third-party models alongside the out-of-the-box set. Custom intelligence can be tailored to any team's workflows — from Customer Success QBR prep and churn modeling to Marketing's advocacy pipeline and Product's adoption funnel analysis — and surfaces through the same shared layer with the same RBAC guarantees. ## Step 2: SET — The Agent & Automation Layer The Agent & Automation Layer is a control plane for configuring, managing, and running AI Agents through FunnelStory. It features a "LEGO block" approach, allowing teams to snap together modular workflows and components without writing code. Agents are triggered by intelligence signals — a prediction change, a needle mover, a stage transition — and execute actions such as summarizing account activity, flagging contract risks, updating CRM records, or generating briefings. 
Workflows provide the orchestration layer for routing outputs to Slack, HubSpot, Salesforce, or any webhook. Two capabilities define this layer: - **Vibe Coding** — create custom AI agents and automations by describing what you want in natural language. No engineering dependency. - **Background Execution** — vibe-coded agents continuously analyze data and trigger workflows even while your team is offline. ## Step 3: GO — The UX & AX Layer This is where intelligence and action surface to humans and AI systems. FunnelStory supports three access surfaces that can be used independently or in combination. **UX — FunnelStory UI** FunnelStory's built-in interface gives frontline and leadership teams direct access to pre-computed intelligence through views like the Accounts dashboard, Renewal Management, Needle Movers, and custom dashboards. Teams can consume intelligence, take notes, manage workflows, and configure agents without leaving the platform. **UX — External Apps** FunnelStory's intelligence is also consumable from the tools teams already work in. Third-party apps like Slack, CRM platforms, and CSM tooling can surface FunnelStory data in-context — for example, account health alerts in Slack or renewal scores embedded in Salesforce. Developers can build custom applications on top of FunnelStory's APIs to create tailored experiences for specific workflows or personas. **AX — AI Experience** FunnelStory uses a **Bring Your Own Copilot (BYOC)** model, piping the Customer Intelligence Graph directly into AI interfaces via a secure Enterprise MCP Server. This eliminates tool fatigue — CSMs and other team members access all intelligence within the tools they already use: - **Renari** — FunnelStory's built-in AI assistant. Ask questions about any account, get briefings, run analyses, and take action without leaving the platform. 
- **External AI Copilots** — connect Claude Desktop, Cursor, ChatGPT Enterprise, or any MCP-compatible client to query accounts, predictions, and activity directly. Across all three surfaces, three categories of work are supported: - **Pre-computed intelligence** — surfacing out-of-the-box and custom intelligence outputs to frontline and leadership teams, on demand - **Skills** — generating customized intelligence and executing actions tailored to a specific account, role, or workflow - **Vibe coding** — creating, updating, and testing agentic workflows directly through a conversational interface, without writing pipeline code ## Related - [FunnelStory 101](../getting-started/overview.md) — platform overview and use cases - [Customer Intelligence Graph](./customer-intelligence-graph.md) — how the graph is built and queried - [Predictions](./predictions.md) — how predictive scores are computed and validated - [Notes](./notes.md) — account notes, imports, labels, and templates - [AI Agents](/ai/overview) — Renari, agent automation, and the MCP Server --- ## Predictions A **Prediction** is an account-level score that estimates the probability of churn or renewal — calculated continuously from your actual customer data, not a manually configured formula. Where health scores ask you to decide in advance what matters, FunnelStory's prediction models learn the patterns from your own historical outcomes: which accounts renewed, which churned, and what their data looked like in the months before. The result is a score grounded in the specific reality of your customer base, not a generic industry template. ## The Health Score Every account receives a **health score from 0 to 100**. - **50** is neutral — no strong signal in either direction - **Below 50** — increasing churn risk - **Above 50** — healthy trajectory trending toward renewal The score is a net result of two competing signals: the probability the account will stay, weighed against the probability it will churn. 
When both signals are strong, the score reflects genuine uncertainty — an account with high product usage but also high support escalations, for example, will land near the middle until the pattern resolves. ## Predicted Outcome Each account is assigned a predicted outcome alongside its health score: | Outcome | What it means | |---------|---------------| | **Churn** | The account matches patterns historically associated with churn | | **Retention** | The account matches patterns historically associated with renewal | | **Neutral** | No strong signal in either direction | ## Confidence Every prediction includes a confidence level reflecting how clearly the data matches the predicted outcome: | Confidence | What it means | |------------|---------------| | **High** | Strong, clear signal — the prediction is reliable and actionable | | **Medium** | Moderate signal — worth investigating and acting on | | **Low** | Weak signal — use alongside other context | | **Neutral** | Insufficient data to form a reliable prediction | Low confidence most commonly appears for newer accounts that haven't yet accumulated enough history to match patterns clearly. ## Driving Factors Each prediction surfaces the specific factors contributing to the score, split into two categories: - **Increase/Maintain these values** — factors currently supporting retention. Protecting these is as important as addressing risk signals. - **Decrease/Maintain these values** — factors that are pushing the score toward churn. These are your intervention priorities. Each factor shows the account's current value on a min-max scale relative to the broader population. This makes it immediately clear whether an account is above or below average on any given signal — and by how much. 
![Driving factors view showing Increase/Maintain and Decrease/Maintain factor columns with impact percentages and min-max sliders](/img/predictions/driving-factors.png) Driving factors pull from both structured data (product usage events, CRM attributes, support activity) and unstructured data (conversation sentiment, ticket themes, meeting transcripts). This combination is what allows the model to surface signals that pure usage-based health scores miss entirely. ## What-If Analysis The **What-If Analysis** lets you simulate how changing an account's data would affect its prediction. Enter a hypothetical value for any driving factor — reduced usage, fewer active users, resolved support tickets — and see the projected impact on the health score. This is useful for prioritizing which gaps to close ahead of a renewal conversation: if increasing one metric would move the score substantially, that's where to focus. ## How Predictions Learn Your Business FunnelStory models are trained on your specific outcomes, not a generic baseline. The system learns what "churn" and "retention" look like in your customer base by analyzing historical accounts — which ones renewed, which ones churned, and what combinations of signals preceded each. This is configured through **Revenue Tags**: you define what a churned account looks like and what a retained account looks like, using filters or specific account examples. The prediction model uses these labeled examples as its training set. ![Revenue Tags configuration showing Churn and Retention tags with account counts and a Retrain Models button](/img/predictions/revenue-tags.png) The more precise your Revenue Tags, the more accurately the model can learn the patterns that matter for your specific business. Your FunnelStory team works with you during setup to configure these correctly. 
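Conceptually, Revenue Tags turn your account history into a labeled training set. The sketch below is illustrative only — the account fields and tag predicates are hypothetical, and FunnelStory's actual model uses far richer signals:

```python
from datetime import date

# Hypothetical historical accounts (not FunnelStory's schema)
accounts = [
    {"name": "Acme",    "status": "churned", "renewed_on": None},
    {"name": "Globex",  "status": "active",  "renewed_on": date(2024, 6, 1)},
    {"name": "Initech", "status": "active",  "renewed_on": None},
]

# Revenue Tags act like filters that label known outcomes:
def churn_tag(account):
    return account["status"] == "churned"

def retention_tag(account):
    return account["status"] == "active" and account["renewed_on"] is not None

# The labeled examples become the model's training set; accounts with
# no clear outcome yet are excluded rather than guessed at.
training_set = []
for account in accounts:
    if churn_tag(account):
        training_set.append((account["name"], "churn"))
    elif retention_tag(account):
        training_set.append((account["name"], "retention"))

print(training_set)  # [('Acme', 'churn'), ('Globex', 'retention')]
```

The precision of the tag predicates is what determines training quality, which is why imprecise Revenue Tags degrade the model more than missing data does.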
### Needle Mover Weights As part of model configuration, each Needle Mover type is assigned an **impact weight** — controlling how much influence conversation signals (competitor mentions, pricing concerns, personnel changes, etc.) have on the prediction score relative to structured activity data. ![Needle Mover type weights showing Competitor, Feature Request, Personnel Change, Pricing, and Task/Issue/Bug with impact sliders](/img/predictions/needle-mover-weights.png) ## How Predictions Improve Over Time Prediction models continuously improve as new outcomes are recorded. When an account predicted to churn actually churns — or an account predicted to renew actually renews — that outcome is used to validate and refine the model in the next training cycle. Missed predictions are equally valuable: an account the model predicted as healthy that churned unexpectedly teaches the model to look for signals it may have underweighted. This feedback loop means the model becomes more accurate over time, adapting to changes in your customer behavior, your product, and your market. You can trigger a manual retrain from the Revenue Tags configuration page when you've made significant changes to your tagging criteria. ## Per-Product Predictions For accounts with multiple products, FunnelStory generates **per-product predictions** — a separate health score and driving factors breakdown for each product line. This is useful when: - Different products have different renewal timelines or contract structures - A single account has separate CSM ownership for different products - You want to isolate which product relationship is at risk before a consolidated renewal conversation ## Acting on Predictions Predictions are designed to trigger action, not just inform awareness. From any account's prediction view, you can: 1. **Review driving factors** — understand exactly what is moving the score before engaging the customer 2. 
**Run a What-If analysis** — model which interventions would have the most impact 3. **Jump to Needle Movers** — see the specific conversation signals and behavioral changes behind the prediction 4. **Launch a playbook** — execute a structured response workflow directly from the prediction detail 5. **Create a CRM task** — push the risk or opportunity to Salesforce or HubSpot for Account Executive follow-up 6. **Ask Renari** — get an AI-synthesized action recommendation with full account context ## Relationship to Needle Movers Predictions and [Needle Movers](./needle-movers.md) are complementary, not redundant. A Prediction gives you the **score** — the probability that an account will churn or expand. A Needle Mover gives you the **reason** — the specific, sourced signal (a competitor mentioned in a QBR, a champion who has gone quiet, an unresolved pricing concern) that is moving that probability. Together they provide both the "what" and the "why" needed to take confident action. ## Related - [Needle Movers](./needle-movers.md) — the specific signals driving prediction scores - [Customer Intelligence Graph](./customer-intelligence-graph.md) — how prediction scores are computed and stored as derived intelligence - [How FunnelStory Works](./overview.md) — where predictions fit in the pre-computed intelligence layer - [AI Agents](../ai/agents-overview) — automating responses when predictions cross risk thresholds --- ## Allowed IP addresses When you host a data store or restrict access with a firewall or cloud security group, you may need to **allow inbound traffic from FunnelStory** so the product can connect (for example to PostgreSQL, MySQL, Microsoft SQL Server, etc.). ## Static egress IPs FunnelStory connects from these **static** IP addresses: - `54.188.137.72` - `44.227.107.126` Add both addresses to your allowlist (security group, `pg_hba.conf`, Atlas IP access list, S3 bucket policy `Condition` on `IpAddress`, and so on) for the port your service uses. 
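For example, if FunnelStory connects directly to a self-hosted PostgreSQL instance, the `pg_hba.conf` entries might look like the sketch below (the database name `analytics`, role `funnelstory_reader`, and auth method are placeholders — adjust to your environment):

```
# TYPE  DATABASE   USER                ADDRESS             METHOD
host    analytics  funnelstory_reader  54.188.137.72/32    scram-sha-256
host    analytics  funnelstory_reader  44.227.107.126/32   scram-sha-256
```

Remember to also open the database port (for example, 5432 for PostgreSQL) to both addresses in the surrounding firewall or cloud security group.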
## When you do not need this list - **OAuth-only** or **SaaS API** integrations (Salesforce, HubSpot, Zendesk, and similar) use outbound HTTPS to the vendor; you normally do not allowlist these IPs on your side. - **SSH tunnels** — You run the tunnel from *your* host to FunnelStory; database rules often allow that host instead of FunnelStory’s egress IPs. See [SSH tunnels](./ssh-tunnels.md). ## Related links - [Data connections overview](./overview.md) - [SSH tunnels](./ssh-tunnels.md) --- ## Mixpanel The **Mixpanel** connection uses Mixpanel’s **Project ID** and **service account** credentials to pull product events into FunnelStory. ## What FunnelStory uses it for - **Product events** — Ingested events support product-activity views and account timelines. - **Sync window** — Optional **`time_range_days`** controls how many days of history each sync considers (default **1** if unset in config). ## Before you connect - In Mixpanel **Project Settings**, create a **service account** and note the **project ID**, **username**, and **password** (used as HTTP basic auth to `data.mixpanel.com`). ## Add the connection in FunnelStory 1. Open **[Configuration → Connections](https://app.funnelstory.ai/configure/connections)** → **Add connection**, then choose **Mixpanel**. 2. Complete the fields in the connection form: ![Mixpanel connection form with project and service account fields](/img/data-connections/mixpanel-02-configuration.png) | Field | Description | | ----- | ----------- | | **Connection name** | Display name. | | **project_id** | Mixpanel numeric project id. | | **username** | Service account username. | | **password** | Service account secret (sensitive). | | **time_range_days** | Optional. Integer day window for ingestion. | 3. Click **Validate**, then **Add Connection**. ## After you connect Once you’ve added the Mixpanel data connection, a **Product Activity Model** will be automatically created for you. 
Stable **`distinct_id`** and group keys in Mixpanel help FunnelStory tie events to the right accounts. ## Related links - [Pendo](./pendo.md) - [Data connections overview](../overview.md) --- ## Pendo The **Pendo** connection uses Pendo’s **integration key** to read visitor and account data into FunnelStory for **product analytics** and account-level activity. ## What FunnelStory uses it for - **Product usage** — Pendo data backs product-activity style views and timelines in the product. - **Sync window** — **`time_range_days`** defaults to **7** when unset; increase carefully for large subscriptions. ## Before you connect - In Pendo **Settings → Integrations → API**, create an **integration key** with read access. - If you use a dedicated Pendo **hostname**, set **`base_url`**; otherwise it defaults to `https://app.pendo.io`. ## Add the connection in FunnelStory 1. Open **[Configuration → Connections](https://app.funnelstory.ai/configure/connections)** → **Add connection**, then choose **Pendo**. 2. Complete the fields in the connection form: ![Pendo connection form](/img/data-connections/pendo-03-connection-form.png) | Field | Description | | ----- | ----------- | | **Connection name** | Display name. | | **integration_key** | Required. Pendo integration key. | | **base_url** | Optional. Pendo app base URL (no trailing slash). | | **time_range_days** | Optional. Positive integer; default 7. | 3. Click **Validate**, then **Add Connection**. ## After you connect Once you’ve added the Pendo data connection, a **Product Activity Model** will be automatically created for you. ## Related links - [Mixpanel](./mixpanel.md) - [Segment](./segment.md) - [Data connections overview](../overview.md) --- ## Segment The **Segment** connection identifies your workspace in FunnelStory so **Segment track/identify/group calls** sent to FunnelStory’s **HTTP destination** are stored for **product activity** and account views. 
## What FunnelStory uses it for - **Inbound events** — Webhook ingestion; events feed built-in product behavior rather than user-defined SQL models. ## Before you connect - In Segment, add (or configure) an **HTTP/Webhook** destination pointing at the **FunnelStory ingestion URL** and shared secret from your workspace documentation or support. - The connection entry in **Configuration → Connections** ties incoming events to this workspace; you do **not** paste a Segment **read** API token for sync. ## Add the connection in FunnelStory 1. Open **[Configuration → Connections](https://app.funnelstory.ai/configure/connections)** → **Add connection**, then choose **Segment**. 2. Complete the fields in the connection form: ![Segment connection settings (shared secret)](/img/data-connections/segment-03-configuration.png) | Field | Description | | ----- | ----------- | | **Connection name** | Display name. | | **Shared secret** (or equivalent) | Value Segment sends to authenticate webhook calls to FunnelStory—match what you configure in the Segment destination. | 3. Click **Validate**, then **Add Connection** (or **Connect**, as the UI shows). ![Segment connection saved](/img/data-connections/segment-04-webhook-settings.png) ## Configure the Segment webhook destination Inbound events use a **Webhook** destination in Segment (see [Segment docs](https://segment.com/docs/)). 1. In Segment, open **Catalog** and add (or edit) a **Webhooks** destination. ![Segment catalog — Webhook destination](/img/data-connections/segment-05-destination-url.png) 2. In **Destinations**, open your webhook destination and paste the **FunnelStory ingestion URL** and credentials from your workspace (**connection** settings or onboarding docs). ![Segment destination settings with webhook URL](/img/data-connections/segment-06-test-event.png) ## After you connect Finish Segment **destination** setup so events include stable IDs that line up with accounts in FunnelStory.
Events far in the past or future may be rejected—see product limits or support. Once you’ve added the Segment data connection, a **Product Activity Model** will be automatically created for you. ## Related links - [Mixpanel](./mixpanel.md) - [Data connections overview](../overview.md) --- ## Fathom The **Fathom** connection uses **OAuth** to import meeting summaries and transcripts from Fathom into FunnelStory for **meetings** and AI-ready context on accounts. ## What FunnelStory uses it for - **Meetings** — Fathom recordings and summaries sync per built-in meeting behavior. ## Before you connect - A Fathom admin must allow the FunnelStory integration, and the user who signs in must have access to the meetings you want synced. ## Add the connection in FunnelStory 1. Open **[Configuration → Connections](https://app.funnelstory.ai/configure/connections)** → **Add connection**, then choose **Fathom**. 2. Complete the fields in the connection form: ![Fathom connection / OAuth step](/img/data-connections/fathom-03-connection-form.png) | Field | Description | | ----- | ----------- | | **Connection name** | Display name. | 3. Complete **OAuth** in the browser, then **Validate** / **Add Connection** in FunnelStory as prompted. ## After you connect Once you’ve added the Fathom data connection, a **Meeting Model** will be automatically created for you. ## Related links - [Update.ai](./update-ai.md) - [Data connections overview](../overview.md) --- ## Gong The **Gong** connection uses Gong’s **access key** authentication to pull calls and transcripts into FunnelStory for **meeting** history and account-level context. ## What FunnelStory uses it for - **Calls and transcripts** — Gong calls sync per the product’s built-in meeting ingestion. ## Before you connect - In Gong, create an **API access key** and **access key secret** with permission to read calls your team needs in FunnelStory. ## Add the connection in FunnelStory 1.
Open **[Configuration → Connections](https://app.funnelstory.ai/configure/connections)** → **Add connection**, then choose **Gong**. 2. Complete the fields in the connection form: ![Gong connection form](/img/data-connections/gong-03-connection-form.png) | Field | Description | | ----- | ----------- | | **Connection name** | Display name. | | **access_key** | Gong API access key. | | **access_key_secret** | Gong API secret. | | **base_url** | Optional. Override API base URL when directed by Gong. | 3. Click **Validate**, then **Add Connection**. ## After you connect Once you’ve added the Gong data connection, a **Meeting Model** will be automatically created for you. ## Related links - [Zoom](./zoom.md) - [Data connections overview](../overview.md) --- ## Microsoft Teams The **Microsoft Teams** connection uses **Microsoft OAuth** to read Teams **meetings**, **channel conversations**, and **group chats** (within granted Graph permissions) so FunnelStory can show meetings and conversations on accounts. ## What FunnelStory uses it for - **Meetings** — Calendar and online meetings synced from Teams. - **Channel and chat content** — Posts and chats indexed for conversation-style features. ## Before you connect - An Azure AD / Microsoft 365 admin needs to consent to the FunnelStory Microsoft app for the required **Microsoft Graph** scopes. - The signed-in user must be able to access the teams and chats you want indexed. ## Add the connection in FunnelStory 1. Open **[Configuration → Connections](https://app.funnelstory.ai/configure/connections)** → **Add connection**, then choose **Microsoft Teams**. 2. Complete the fields in the connection form: ![Microsoft Teams connection configuration](/img/data-connections/ms-teams-02-configuration.png) | Field | Description | | ----- | ----------- | | **Connection name** | Display name. | The product records the Azure **`tenant_id`** from the token for API calls. 3. 
Sign in with Microsoft and complete **Microsoft OAuth** and admin consent if required. ![Microsoft sign-in or consent](/img/data-connections/ms-teams-03-authorization-01.png) ![Microsoft OAuth completed](/img/data-connections/ms-teams-04-authorization-02.png) ## After you connect Once you’ve added the Microsoft Teams data connection, **Meeting** and **Conversation** models will be automatically created for you. ## Related links - [Slack](./slack.md) - [Data connections overview](../overview.md) --- ## Slack The **Slack** connection authorizes FunnelStory to read channel messages and threads (within the Slack scopes granted) so content can appear in **conversation-style** features on accounts. Slack can also be used as a **notification** channel. ## What FunnelStory uses it for - **Messages and threads** — Indexed Slack content is tied to accounts per the product’s built-in behavior. - **Notifications** — Workspace features that post notifications to Slack. ## Before you connect - A Slack workspace admin must install or approve the FunnelStory Slack app. - FunnelStory uses **OAuth**. ## Add the connection in FunnelStory 1. Open **[Configuration → Connections](https://app.funnelstory.ai/configure/connections)** → **Add connection**, then choose **Slack**. 2. Complete the fields in the connection form: ![Slack connection configuration before OAuth](/img/data-connections/slack-02-configuration.png) | Field | Description | | ----- | ----------- | | **Connection name** | Display name. | 3. Continue to **Connect with Slack** / **Authorize**, then approve the Slack app permissions in the browser flow (scopes your workspace requests). ![Slack OAuth — approve app](/img/data-connections/slack-03-authorization-01.png) ![Slack OAuth — success or workspace selection](/img/data-connections/slack-04-authorization-02.png) ## After you connect Once you’ve added the Slack data connection, a **Conversation Model** will be automatically created for you. 
## Related links - [Microsoft Teams](./ms-teams.md) - [Data connections overview](../overview.md) --- ## Update.ai The **Update.ai** connection uses an **API key and secret** from Update.ai to import meetings and notes into FunnelStory for **meetings** and conversation-style enrichment. ## What FunnelStory uses it for - **Meetings** — Update.ai meetings sync per built-in meeting behavior. - **Optional filtering** — **`ignore_domains`** (comma-separated) can skip content tied to specific email domains. ## Before you connect - In Update.ai, create **API credentials** with **`api_key`** and **`api_secret`** as issued by their admin console. ## Add the connection in FunnelStory 1. Open **[Configuration → Connections](https://app.funnelstory.ai/configure/connections)** → **Add connection**, then choose **Update.ai**. 2. Complete the fields in the connection form: ![Update.ai connection form](/img/data-connections/update-ai-03-connection-form.png) | Field | Description | | ----- | ----------- | | **Connection name** | Display name. | | **api_key** | Update.ai API key. | | **api_secret** | Update.ai API secret. | | **ignore_domains** | Optional. Comma-separated domains to ignore. | 3. Click **Validate**, then **Add Connection**. ## After you connect Once you’ve added the Update.ai data connection, a **Meeting Model** will be automatically created for you. ## Related links - [Fathom](./fathom.md) - [Data connections overview](../overview.md) --- ## Zoom The **Zoom** connection uses **Zoom OAuth** to pull meeting metadata and related data into FunnelStory for **meetings** on accounts and timeline-style views. ## What FunnelStory uses it for - **Meetings** — Zoom meetings sync according to the product’s built-in meeting ingestion. ## Before you connect - A Zoom **account admin** is required to authorize the integration. 
- In **Zoom App Marketplace** (or app settings your org uses), enable the permissions FunnelStory needs—for example read access for meetings and related features your workspace uses. ![Zoom App Marketplace — enable read/write app permissions](/img/data-connections/zoom-01-marketplace-permissions.png) - Under **User Management** (or equivalent admin UI), allow **Users (View)** (or the permission your integration checklist specifies). ![Zoom admin — Users (View) permission](/img/data-connections/zoom-02-user-management-view-users.png) - Enable **Meeting summary with AI Companion** and **Meetings (Entire account)** (or current equivalents) if your integration requires them. ![Zoom — Meeting summary with AI Companion](/img/data-connections/zoom-03-meeting-summary-ai-companion.png) ![Zoom — Meetings (Entire account)](/img/data-connections/zoom-04-meetings-entire-account.png) ## Add the connection in FunnelStory 1. Open **[Configuration → Connections](https://app.funnelstory.ai/configure/connections)** → **Add connection**, then choose **Zoom** under application integrations. 2. Complete the fields in the connection form: ![Zoom connection form — name and ignore domains](/img/data-connections/zoom-06-connection-form.png) | Field | Description | | ----- | ----------- | | **Connection name** | Display name. | | **Ignore domains** | Optional. Comma- or line-separated domains where internal chatter should be ignored. | 3. Click **Connect** to launch Zoom OAuth. ## After you connect Once you’ve added the Zoom data connection, a **Meeting Model** will be automatically created for you. ## Related links - [Gong](./gong.md) - [Data connections overview](../overview.md) --- ## Attio The **Attio** connection lets FunnelStory read CRM objects from your Attio workspace so you can configure **data models** using Attio’s API-shaped **AT blocks** (filter, sort, and object type) in the model query step. 
## What FunnelStory uses it for - **Data models** — Companies, people, deals, or other objects Attio exposes, mapped to FunnelStory properties after you validate the query. ## Before you connect - Create an **API key** or OAuth credential in Attio with access to the object types and records you need (follow Attio’s current docs for API access). - Decide which **object types** (for example `companies`, `deals`) you will model first. ## Add the connection in FunnelStory 1. Open **[Configuration → Connections](https://app.funnelstory.ai/configure/connections)** → **Add connection**, then choose **Attio**. 2. Complete the fields in the connection form: ![Attio connection or API key form](/img/data-connections/attio-03-connection-form.png) | Field | Description | | ----- | ----------- | | **Connection name** | Display name in FunnelStory. | | **API key** / **Token** | Attio credential as prompted by the UI. | 3. Click **Validate**, then **Add Connection**. ## After you connect - For **models**, write **AT** blocks (filter, sort, and object type) using the format in [Writing queries](../../getting-started/configuring-accounts/writing-queries). - Use **Refresh** on the connection or wait for scheduled syncs, per your workspace settings. ## Related links - [Writing queries (AT blocks)](../../getting-started/configuring-accounts/writing-queries) - [Salesforce](./salesforce.md) - [HubSpot](./hubspot.md) - [Data connections overview](../overview.md) --- ## HubSpot The **HubSpot** connection links FunnelStory to your HubSpot portal via OAuth so you can build **data models** on companies, deals, and contacts using HubSpot’s search APIs (**HS blocks** in the query step), power **CRM sync**, and use HubSpot in workflows or analytics where enabled. ## What FunnelStory uses it for - **Data models** — Structured JSON “HS” query blocks for objects such as companies and deals (see [Writing queries](../../getting-started/configuring-accounts/writing-queries)). 
- **CRM sync** — Optional updates back to HubSpot when configured. - **Enrichment** — HubSpot can act as an enrichment source in supported setups. ## Before you connect - Use a HubSpot user with access to the **objects and properties** you plan to query or sync. - In HubSpot **Settings → Integrations → Private Apps** (or OAuth apps, depending on your setup), ensure the scopes FunnelStory requests are approved—your onboarding or support team can confirm the exact scope list for your workspace. ## Add the connection in FunnelStory 1. Open **[Configuration → Connections](https://app.funnelstory.ai/configure/connections)** → **Add connection**, then choose **HubSpot** under application integrations. 2. Complete the fields in the connection form: ![HubSpot connection name and Connect](/img/data-connections/hubspot-03.png) | Field | Description | | ----- | ----------- | | **Connection name** | Label in FunnelStory (for example `HubSpot Production`). | 3. Click **Connect** to start OAuth. 4. Approve access on HubSpot’s consent screen. ![HubSpot OAuth consent](/img/data-connections/hubspot-04.png) ## After you connect - Configure **models** with **HS** blocks as documented in [Writing queries](../../getting-started/configuring-accounts/writing-queries). - Refresh manually or on schedule according to your workspace. ## Related links - [Salesforce](./salesforce.md) - [Attio](./attio.md) - [Data connections overview](../overview.md) --- ## Salesforce The **Salesforce** connection authorizes FunnelStory to read your CRM data using OAuth. Use it to back **data models** (with **SOQL** in the model builder), to support **CRM sync** where your workspace writes selected fields back to Salesforce, and anywhere else the product surfaces Salesforce as a source. ## What FunnelStory uses it for - **Data models** — Query Accounts, Opportunities, custom objects, and more using SOQL blocks or combined setups your workspace supports. 
- **CRM sync** — Optional outbound updates to Salesforce when configured separately. - **Enrichment** — Salesforce can participate in enrichment-style flows where your workspace enables it. ## Before you connect - Use a Salesforce user (often a dedicated integration user) whose **profile and permission sets** allow read (and, if you use sync, write) on the objects and fields FunnelStory needs. - Know your Salesforce **login URL** (production, sandbox, or custom domain). ## Add the connection in FunnelStory 1. Open **[Configuration → Connections](https://app.funnelstory.ai/configure/connections)** → **Add connection**, then choose **Salesforce**. 2. Complete the fields in the connection form: ![Salesforce OAuth or environment step](/img/data-connections/salesforce-03-oauth.png) | Field | Description | | ----- | ----------- | | **Connection name** | How this org appears in FunnelStory (for example `Salesforce Production`). | 3. Click **Connect** or **Authorize** to start OAuth. 4. Approve the Salesforce consent screen and return to FunnelStory when prompted. ![Salesforce connection connected](/img/data-connections/salesforce-04-connected.png) ## After you connect - For **models**, write **SOQL** using the block format in [Writing queries](../../getting-started/configuring-accounts/writing-queries). - Use **Refresh** on the connection or wait for scheduled syncs, per your workspace settings. ## Related links - [Writing queries (SOQL)](../../getting-started/configuring-accounts/writing-queries) - [HubSpot](./hubspot.md) - [Data connections overview](../overview.md) --- ## Amazon Athena The **Amazon Athena** connection lets FunnelStory run queries in your AWS account against data in S3 via Athena, so **data models** can use curated tables in the AWS Glue Data Catalog (or supported catalogs). ## What FunnelStory uses it for - **Data models** — Athena SQL (Presto/Trino-flavored) over cataloged tables. 
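As a sketch of what an Athena-backed model query can look like — the database, table, column names, and date below are placeholders, not part of any FunnelStory or AWS schema:

```sql
-- Hypothetical accounts model query; replace names with whatever your
-- Glue catalog actually contains.
SELECT
  account_id,
  account_name,
  arr,
  renewal_date
FROM analytics.dim_accounts
-- Filtering on a partition column (here a string 'dt' partition) limits
-- how much S3 data Athena scans, which is what you pay for.
WHERE dt >= '2025-01-01'
```

Scoping queries to specific partitions in this way is the main lever for keeping Athena scan costs down.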
## Before you connect - **IAM** — Provide AWS credentials (access key + secret, or the mechanism your FunnelStory workspace supports) for an IAM principal that can: - Start and read Athena queries (`athena:StartQueryExecution`, `athena:GetQueryResults`, etc., as required by your setup). - Read the underlying S3 data and Glue catalog objects. - **S3 query results location** — Athena needs a bucket/prefix for query results (often the same path your Athena workgroup or admin expects). - **Cross-account access** (if applicable) — If your org uses role assumption, have **Role ARN** and **External ID** ready; leave blank when using access keys only. ## Add the connection in FunnelStory 1. Open **[Configuration → Connections](https://app.funnelstory.ai/configure/connections)** → **Add connection**, then choose **Amazon Athena**. 2. Complete the fields in the connection form: ![Amazon Athena connection form](/img/data-connections/athena-03-connection-form.png) | Field | Description | | ----- | ----------- | | **Connection name** | Display name in FunnelStory. | | **AWS Access Key** / **AWS Secret Access Key** | IAM credentials with permission to run Athena queries and read the underlying S3 data and Glue catalog. | | **AWS Region** | Region for Athena (for example `us-east-1`). | | **S3 Output Location** | `s3://bucket/prefix/` where Athena writes query results. | | **Role ARN** (optional) | IAM role to assume when your workspace uses cross-account or role-based access. | | **External ID** (optional) | External ID for the trust relationship, if your admin requires it. | 3. Click **Validate**, then **Add Connection**. ## After you connect Use Athena SQL in **models**. Large scans can incur AWS costs; scope queries to the partitions and tables you need. See [Writing queries](../../getting-started/configuring-accounts/writing-queries). 
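The IAM permissions listed under **Before you connect** could be sketched as a policy document like the following. This is illustrative only — the action lists and resource ARNs are placeholders, and your AWS admin should scope both to your actual workgroup, catalog, and buckets:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AthenaQueries",
      "Effect": "Allow",
      "Action": [
        "athena:StartQueryExecution",
        "athena:GetQueryExecution",
        "athena:GetQueryResults"
      ],
      "Resource": "*"
    },
    {
      "Sid": "GlueCatalogRead",
      "Effect": "Allow",
      "Action": ["glue:GetDatabase", "glue:GetTable", "glue:GetPartitions"],
      "Resource": "*"
    },
    {
      "Sid": "S3DataAndResults",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket", "s3:PutObject"],
      "Resource": [
        "arn:aws:s3:::your-data-bucket",
        "arn:aws:s3:::your-data-bucket/*",
        "arn:aws:s3:::your-results-bucket/*"
      ]
    }
  ]
}
```

Note that `s3:PutObject` is needed on the results prefix because Athena writes query output to the S3 output location you configure.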
## Related links - [Data connections overview](../overview.md) - [Amazon S3](../storage/s3.md) (if you also ingest files directly) --- ## BigQuery The **BigQuery** connection uses a Google **service account** so FunnelStory can run read-only jobs against your datasets and drive **data models** from BigQuery tables and views. ## What FunnelStory uses it for - **Data models** — Query BigQuery with SQL compatible with the BigQuery engine. ## Before you connect FunnelStory needs a service account with **read-only** access to BigQuery. Minimum permissions include: 1. `bigquery.datasets.get` 2. `bigquery.tables.get` 3. `bigquery.tables.getData` 4. `bigquery.tables.list` 5. `bigquery.jobs.create` You can define a **custom IAM role** with those permissions, or attach the predefined roles **BigQuery Data Viewer** and **BigQuery Job User** to the service account. - **BigQuery Data Viewer** — View datasets, tables, and read table data (no writes). - **BigQuery Job User** — Create and manage query jobs in the project (required to run queries). ### Enable the BigQuery API 1. Open the [Google Cloud Console](https://console.cloud.google.com/) and select the correct project from the **project picker** at the top of the page. 2. In the left navigation, open **APIs & Services**. Then either: - Go to **Dashboard** and click **+ ENABLE APIS AND SERVICES** (or **Enable APIs and Services**), **or** - Go to **Library** (or **Enabled APIs & services** → **+ ENABLE APIS AND SERVICES**). 3. Search for **BigQuery API**, open it, and click **Enable**. ### Create a service account and grant roles 1. In the left navigation, go to **IAM & Admin** → **Service accounts**. 2. Click **+ CREATE SERVICE ACCOUNT** (or **Create service account**). 3. Enter a **Service account name** (and optional description), then continue. 4. 
On the **Grant this service account access to project** step, click **Select a role**, add: - **BigQuery Data Viewer** - **BigQuery Job User** Add each role with **Add another role** if the UI offers it, then continue and finish creating the account. ![Service account creation showing BigQuery Data Viewer and BigQuery Job User roles](/img/data-connections/bigquery-service-account-roles.png) ### If roles are missing from the service account If the account was created without those roles, add them on the project IAM policy: 1. Go to **IAM & Admin** → **IAM**. 2. Click **Grant access** (or **+ GRANT ACCESS** / **Add** depending on console version). 3. Paste the **service account email** (for example `name@project-id.iam.gserviceaccount.com`). 4. Assign **BigQuery Data Viewer** and **BigQuery Job User**, then save. ### Project ID, JSON key, and optional endpoint **Project ID** - Use the **project picker** at the top of the console: the project’s **ID** is shown in the project details (not only the display name). - Or open **IAM & Admin** → **Settings** and copy **Project ID**. - The downloaded key file also contains `"project_id"`. **Credentials JSON** 1. Go to **IAM & Admin** → **Service accounts**. 2. Open your service account (click the email or name). 3. Open the **Keys** tab. 4. Select **Add key** → **Create new key** → **JSON**. A key file downloads immediately. Store it securely; you will paste or upload its contents in FunnelStory. **Endpoint (optional)** - Leave **Endpoint** empty unless your organization requires a non-default BigQuery API base URL (for example a restricted or private endpoint). If unsure, leave it blank. ## Add the connection in FunnelStory 1. Open **[Configuration → Connections](https://app.funnelstory.ai/configure/connections)** → **Add connection**, then choose **BigQuery**. 2. 
Complete the fields in the connection form: ![BigQuery connection form with fields and Validate](/img/data-connections/bigquery-connection-form.png) | Field | Description | | ----- | ----------- | | **Connection name** | Display name (for example `BigQuery — production`). | | **Project ID** (optional) | GCP project ID. Often inferred from the JSON; set explicitly if needed. | | **Endpoint** (optional) | Custom BigQuery API base URL only if required by your network or org policy. | | **Credentials JSON** | Full contents of the service account **JSON** key. The account must have **BigQuery Data Viewer** and **BigQuery Job User** (or equivalent custom permissions). | 3. Click **Validate**, then **Add Connection**. ## After you connect Configure **models** with BigQuery SQL. See [Writing queries](../../getting-started/configuring-accounts/writing-queries) for guidance on types and timestamps. ## Related links - [Data connections overview](../overview.md) --- ## Databricks The **Databricks** connection lets FunnelStory run SQL against Databricks SQL warehouses (or SQL endpoints) so **data models** can read from Unity Catalog–backed tables and views your team already curates. ## What FunnelStory uses it for - **Data models** — Queries executed through the Databricks SQL interface you configure. ## Before you connect FunnelStory connects with a **Databricks service principal** and a **personal access token** created for that principal (not the interactive UI token flow for end users). At a high level: 1. **Create a service principal** in Databricks account settings. ![Databricks account console — create service principal](/img/data-connections/databricks-service-principal.png) 2. **Create a group** and add the service principal to it. ![Databricks account console — group with service principal](/img/data-connections/databricks-group-add-sp.png) 3. **Allow the service principal to use tokens** (token management permission). 
![Databricks — allow service principal to use tokens](/img/data-connections/databricks-token-access.png) 4. Use the **Databricks CLI** (per [Databricks PAT docs](https://docs.databricks.com/en/dev-tools/auth/pat.html)) to create an on-behalf-of token for the service principal; keep the `token_value` for FunnelStory. 5. In the Databricks SQL editor, **grant** the group access to the catalog or objects the principal should read (tighten grants to least privilege in production). Also note the **server hostname** (workspace SQL API host), **HTTP path** for your SQL warehouse (from **SQL Warehouses** → **Connection details**), and optional default **catalog** / **schema**. ## Add the connection in FunnelStory 1. Open **[Configuration → Connections](https://app.funnelstory.ai/configure/connections)** → **Add connection**, then choose **Databricks**. 2. Complete the fields in the connection form: ![Databricks connection form and SQL warehouse connection details](/img/data-connections/databricks-connection-form.png) | Field | Description | | ----- | ----------- | | **Connection name** | Display name. | | **Server hostname** | Databricks workspace SQL API host. | | **HTTP path** | Path to the SQL warehouse (from Databricks SQL settings). | | **Token** | Personal access token. | | **Catalog** / **Schema** (optional) | Defaults for unqualified table names if the form includes them. | 3. Click **Validate**, then **Add Connection**. ## After you connect Use this connection in **models** with Databricks SQL. See [Writing queries](../../getting-started/configuring-accounts/writing-queries). ## Related links - [Data connections overview](../overview.md) --- ## MongoDB The **MongoDB** connection lets FunnelStory read documents from your MongoDB databases to power **data models**. Queries use FunnelStory’s **MQL block** format in the model query step (JSON describing database, collection, filter, and limits)—not ad hoc SQL against Mongo. 
## What FunnelStory uses it for - **Data models** — Define what to pull from each collection and map returned fields to FunnelStory properties. ## Before you connect - **Connection string** — Have your MongoDB URI ready (`mongodb://` or `mongodb+srv://`), including auth database and TLS parameters if you use Atlas or enforced TLS. - **User** — A read-only user scoped to the databases and collections FunnelStory should access. - **Network** — Allow FunnelStory’s static egress IPs ([Allowed IP addresses](../allowed-ip-addresses.md)) or use Atlas IP allowlist / peering as required. ## Add the connection in FunnelStory 1. Open **[Configuration → Connections](https://app.funnelstory.ai/configure/connections)** → **Add connection**, then choose **MongoDB**. 2. Complete the fields in the connection form: ![MongoDB connection form](/img/data-connections/mongodb-03-connection-form.png) | Field | Description | | ----- | ----------- | | **Connection name** | Display name. | | **Connection URI** | Full MongoDB connection string (or split host/user/password if the UI separates them). | | **Database** (if asked separately) | Default database name for validation. | 3. Click **Validate**, then **Add Connection**. ## After you connect In the **model** query step, use the **MQL block** format documented in [Writing queries](../../getting-started/configuring-accounts/writing-queries). ## Related links - [Allowed IP addresses](../allowed-ip-addresses.md) - [Data connections overview](../overview.md) --- ## MS SQL Server The **Microsoft SQL Server** connection lets FunnelStory run read-only queries against SQL Server (or Azure SQL where supported) to back **data models** such as accounts, contracts, and usage summaries. ## What FunnelStory uses it for - **Data models** — You supply T-SQL (or SQL Server–compatible SQL), validate it, and map columns to FunnelStory properties. 
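For example, a model query might look like the following T-SQL — the tables and columns here are hypothetical placeholders for whatever your own schema uses:

```sql
-- Hypothetical accounts query; replace names with your own tables and columns.
SELECT
    a.account_id,
    a.account_name,
    c.arr,
    c.renewal_date
FROM dbo.accounts AS a
JOIN dbo.contracts AS c
    ON c.account_id = a.account_id
WHERE c.status = 'active';
```

Each selected column becomes a candidate for mapping to a FunnelStory property in the next step.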
## Before you connect - **Network** — Ensure the instance is reachable from FunnelStory’s servers ([Allowed IP addresses](../allowed-ip-addresses.md)), or use an [SSH tunnel](../ssh-tunnels.md) to a host that can reach SQL Server. - **Permissions** — Grant a dedicated login **db_datareader** (or equivalent `SELECT` only) on the databases or views you query. ## Add the connection in FunnelStory 1. Open **[Configuration → Connections](https://app.funnelstory.ai/configure/connections)** → **Add connection**, then choose **Microsoft SQL Server**. 2. Complete the fields in the connection form: ![Microsoft SQL Server connection form](/img/data-connections/mssql-connection-form.png) | Field | Description | | ----- | ----------- | | **Connection name** | Label in FunnelStory. | | **Host** | Server hostname or IP. | | **Port** | SQL Server port (often `1433`). | | **Database** | Initial database / catalog. | | **Username** | SQL login. | | **Password** | SQL password. | | **SSH** (optional) | Tunnel-related fields if shown; see [SSH tunnels](../ssh-tunnels.md). | 3. Click **Validate**, then **Add Connection**. ## After you connect Attach the connection to **models** and write queries in SQL Server dialect. For cross-dialect notes, see [Writing queries](../../getting-started/configuring-accounts/writing-queries). ## Related links - [Allowed IP addresses](../allowed-ip-addresses.md) - [SSH tunnels](../ssh-tunnels.md) - [Data connections overview](../overview.md) --- ## MySQL The **MySQL** connection lets FunnelStory run read-only SQL against your MySQL database to power **data models**—accounts, users, billing tables, or any view you maintain for analytics. ## What FunnelStory uses it for - **Data models** — SQL queries in the model builder, validated against your instance, with column mapping to FunnelStory properties. 
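If you want a dedicated SELECT-only user for this (as recommended under **Before you connect** below), a minimal sketch looks like this — the user name, host pattern, schema, and password are all placeholders:

```sql
-- All names are placeholders; use a strong password.
CREATE USER 'funnelstory_ro'@'%' IDENTIFIED BY 'change-me';
-- SELECT only, scoped to the schema FunnelStory should read.
GRANT SELECT ON appdb.* TO 'funnelstory_ro'@'%';
```

If your policy allows it, tighten the `'%'` host pattern to FunnelStory's static egress IPs instead of any host.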
## Before you connect - **Network** — Allow inbound access from FunnelStory’s static egress IPs ([Allowed IP addresses](../allowed-ip-addresses.md)), or use an [SSH tunnel](../ssh-tunnels.md). - **Permissions** — Use a user with **SELECT** only on the schemas or views FunnelStory should read. ## Add the connection in FunnelStory 1. Open **[Configuration → Connections](https://app.funnelstory.ai/configure/connections)** → **Add connection**, then choose **MySQL**. 2. Complete the fields in the connection form: ![MySQL connection form](/img/data-connections/mysql-connection-form.png) | Field | Description | | ----- | ----------- | | **Connection name** | Display name in FunnelStory. | | **Host** | Server hostname or IP. | | **Port** | MySQL port (often `3306`). | | **Database** | Database name. | | **Username** | MySQL user. | | **Password** | MySQL password. | | **SSH** fields (optional) | If the form offers tunnel parameters; otherwise configure [SSH tunnels](../ssh-tunnels.md) separately. | 3. Click **Validate**, then **Add Connection**. ## After you connect Use the connection when configuring **models**. SQL follows MySQL syntax. See [Writing queries](../../getting-started/configuring-accounts/writing-queries) for types and timestamps. ## Related links - [Allowed IP addresses](../allowed-ip-addresses.md) - [SSH tunnels](../ssh-tunnels.md) - [Data connections overview](../overview.md) --- ## PostgreSQL The **PostgreSQL** connection lets FunnelStory run read-only SQL against your database to power **data models** (for example accounts, users, subscriptions, and warehouse-backed metrics). Use it when your source of truth already lives in Postgres or when you expose reporting views for FunnelStory to query. ## What FunnelStory uses it for - **Data models** — You write SQL in the model builder, validate the query, and map result columns to FunnelStory properties. 
- **Joins** — The same connection (or additional connections) can participate in multi-source model configuration where your workspace supports it. ## Before you connect - **Network** — The database must accept inbound connections from FunnelStory’s servers, unless you use an [SSH tunnel](../ssh-tunnels.md). Allow the static egress IPs in [Allowed IP addresses](../allowed-ip-addresses.md) in your firewall or security group. - **TLS** — Enable SSL/TLS for production databases when your policy requires it. - **Permissions** — FunnelStory only needs **read** access. A dedicated database user with `SELECT` on the schemas you query is recommended. ### Optional: read-only user

```sql
CREATE USER funnelstory_readonly WITH PASSWORD 'change-me'; -- replace with a strong password
GRANT CONNECT ON DATABASE your_database TO funnelstory_readonly;
GRANT USAGE ON SCHEMA public TO funnelstory_readonly;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO funnelstory_readonly;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO funnelstory_readonly;
```

Replace `public` with your schema if tables live elsewhere. ## Add the connection in FunnelStory 1. Open **[Configuration → Connections](https://app.funnelstory.ai/configure/connections)** → **Add connection**, then choose **PostgreSQL**. 2. Complete the fields in the connection form: ![PostgreSQL connection form with host, port, database, and credentials](/img/data-connections/postgres-connection-form.png) | Field | Description | | ----- | ----------- | | **Connection name** | Label shown in FunnelStory (for example `Prod Postgres`). | | **Host** | Hostname or IP of the Postgres server. | | **Port** | Port (default `5432`). | | **Database** | Database name. | | **Username** | Database user. | | **Password** | Database password. | | **SSH host / user / private key (Base64)** (optional) | Only if you use tunnel fields on the form instead of a separately registered tunnel—see [SSH tunnels](../ssh-tunnels.md). | 3. Click **Validate**, then **Add Connection**.
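A common pattern, since this section mentions exposing reporting views, is to create a curated view and point the model query at it. A hedged sketch — every table and column name below is a placeholder:

```sql
-- Hypothetical reporting view created by a privileged user; the read-only
-- FunnelStory user then only needs SELECT on it.
CREATE VIEW public.funnelstory_accounts AS
SELECT
  o.id   AS account_id,
  o.name AS account_name,
  s.arr,
  s.renewal_date
FROM organizations AS o
JOIN subscriptions AS s ON s.org_id = o.id;
```

With a view like this in place, the model query itself can be as simple as `SELECT * FROM public.funnelstory_accounts`.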
## After you connect The connection appears on the connections list. Use **Refresh** when you need an immediate pull. Attach this connection when you [configure models](../../getting-started/configuring-accounts/configuration-walkthrough); SQL dialect is standard PostgreSQL. For timestamp and query tips, see [Writing queries](../../getting-started/configuring-accounts/writing-queries). ## Related links - [Allowed IP addresses](../allowed-ip-addresses.md) - [SSH tunnels](../ssh-tunnels.md) - [Data connections overview](../overview.md) --- ## Amazon Redshift The **Amazon Redshift** connection lets FunnelStory query your Redshift cluster or Serverless workgroup with read-only SQL to power **data models**—often used for warehouse-native account and revenue facts. ## What FunnelStory uses it for - **Data models** — PostgreSQL-compatible SQL against tables and views you expose to the FunnelStory database user. ## Before you connect - **Network** — The cluster must accept connections from FunnelStory’s static egress IPs ([Allowed IP addresses](../allowed-ip-addresses.md)), or you route access via a network path your team manages (for example VPN + bastion patterns may require [SSH tunnels](../ssh-tunnels.md) to an intermediate host). - **User** — Create a Redshift user with `SELECT` on the schemas FunnelStory should read (no write or admin rights required). ## Add the connection in FunnelStory 1. Open **[Configuration → Connections](https://app.funnelstory.ai/configure/connections)** → **Add connection**, then choose **Amazon Redshift** (or **Redshift** as labeled). 2. Complete the fields in the connection form: ![Amazon Redshift connection form](/img/data-connections/redshift-03-connection-form.png) | Field | Description | | ----- | ----------- | | **Connection name** | Display name. | | **Host** | Redshift endpoint hostname. | | **Port** | Usually `5439`. | | **Database** | Database name. | | **Username** | Redshift user. | | **Password** | Redshift password. | 3. 
Click **Validate**, then **Add Connection**. ## After you connect Attach to **models** and write Redshift-compatible SQL. See [Writing queries](../../getting-started/configuring-accounts/writing-queries). ## Related links - [Allowed IP addresses](../allowed-ip-addresses.md) - [SSH tunnels](../ssh-tunnels.md) - [Data connections overview](../overview.md) --- ## Snowflake The **Snowflake** connection lets FunnelStory query your warehouse with read-only access so you can build **data models** on curated tables and views—common for ARR, account dimensions, and historical metrics. ## What FunnelStory uses it for - **Data models** — Standard SQL against Snowflake; ideal for large datasets and role-based access you already manage in Snowflake. ## Before you connect Create a **dedicated user and role** with least privilege: 1. **Role** — Create a role (for example `FUNNELSTORY_ROLE`), grant `USAGE` on the warehouse FunnelStory will use, and grant the role to your user. 2. **User** — Create a user with `DEFAULT_ROLE` set to that role. 3. **Database/schema** — `GRANT USAGE` on the database and schema, then `GRANT SELECT ON ALL TABLES` / `VIEWS` in that schema, plus `FUTURE` grants if you want new objects included automatically. Snowflake supports **password** or **RSA key pair** authentication. For key pair auth, register the public key on the user (`ALTER USER … SET RSA_PUBLIC_KEY='…'`), base64-encode the private key, and paste it (with optional passphrase) into the FunnelStory form when the UI requests it. See [Snowflake key-pair documentation](https://docs.snowflake.com/en/user-guide/key-pair-auth) for key generation details. ## Add the connection in FunnelStory 1. Open **[Configuration → Connections](https://app.funnelstory.ai/configure/connections)** → **Add connection**, then choose **Snowflake**. 2. 
Complete the fields in the connection form: ![Snowflake connection form](/img/data-connections/snowflake-connection-form.png) | Field | Description | | ----- | ----------- | | **Connection name** | Display name. | | **Account** | Snowflake account identifier (as in the Snowflake URL). | | **Username** | Snowflake user. | | **Password** | Password (omit if using private key only). | | **Database** | Default database. | | **Warehouse** | Warehouse to run queries. | | **Schema** (optional) | Default schema. | | **Role** (optional) | Role to assume. | | **Region** (optional) | If your account requires an explicit region. | | **Private key (Base64)** / **Passphrase** (optional) | For RSA authentication when enabled. | 3. Click **Validate**, then **Add Connection**. ## After you connect Use Snowflake SQL in **models**. See [Writing queries](../../getting-started/configuring-accounts/writing-queries) for value types and timestamps. ## Related links - [Data connections overview](../overview.md) --- ## Mailgun Connect **Mailgun** so your workspace can send email **through your Mailgun account**. Messages are sent from **domains you configure and verify in Mailgun** (for example `mg.yourcompany.com`)—not from FunnelStory’s domain. This connection validates your **domain** and **API credentials**; workflow actions use them to deliver mail on your behalf. ## What FunnelStory uses it for - **Workflow and product email** — Email actions in **workflows** (and similar features) can target this connection. Set **From** to an address on your Mailgun **sending domain**. ## Before you connect - In Mailgun, add and **activate** a **sending domain** and create a **private API key**. - Choose the correct **`base_url`** (include the `/v3` segment as in Mailgun’s documentation): - US: `https://api.mailgun.net/v3` - EU: `https://api.eu.mailgun.net/v3` - Validation calls Mailgun’s API to confirm the domain is **active**; inactive domains fail **Validate**. 
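The region choice above can be double-checked before you validate. A minimal shell sketch; the commented `curl` line is a hypothetical pre-flight against Mailgun’s `GET /v3/domains/{name}` endpoint (FunnelStory performs its own check during **Validate**):

```shell
# Pick the Mailgun API base URL by region; keep the /v3 segment.
REGION="us"   # set to "eu" for EU-hosted Mailgun domains
if [ "$REGION" = "eu" ]; then
  BASE_URL="https://api.eu.mailgun.net/v3"
else
  BASE_URL="https://api.mailgun.net/v3"
fi
echo "$BASE_URL"

# Hypothetical pre-flight: confirm the sending domain is active before
# pasting credentials into FunnelStory (requires your private API key):
#   curl -s --user "api:$MAILGUN_API_KEY" "$BASE_URL/domains/mg.yourcompany.com"
# A domain whose state is "active" should pass Validate.
```

Using the wrong region host is a common cause of **Validate** failures even when the API key and domain are correct.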
## Add the connection in FunnelStory 1. Open **[Configuration → Connections](https://app.funnelstory.ai/configure/connections)** → **Add connection**, then choose **Mailgun**. 2. Complete the fields in the connection form: ![Mailgun connection form](/img/data-connections/mailgun-03-connection-form.png) | Field | Description | | ----- | ----------- | | **Connection name** | Display name. | | **api_key** | Mailgun private API key. | | **domain** | Mailgun sending domain name. | | **base_url** | Mailgun API base URL (US or EU, with `/v3`). | 3. Click **Validate**, then **Add Connection**. ## After you connect - Select this connection in **workflow** email steps (or other supported email features) and use **From** addresses on your verified Mailgun domain. - Use **Refresh** on the connection as needed for your workspace. ## Related links - [SendGrid](./sendgrid.md) - [Amazon SES](./ses.md) - [Postmark](./postmark.md) - [Data connections overview](../overview.md) --- ## Postmark Connect **Postmark** so your workspace can send email **through your Postmark server**. Recipients see messages from **sender addresses and domains you verify in Postmark**—not from FunnelStory. FunnelStory stores your **Server API token** (server token), calls Postmark to send from workflows, and can **ingest Postmark webhooks** so delivery, opens, clicks, and related events show up in the product when webhooks are pointed at your workspace. ## What FunnelStory uses it for - **Workflow and product email** — Email actions in **workflows** use this connection’s **server token** to send via Postmark. You set **From** in the workflow; it must be allowed for your Postmark **Server** (verified sender or domain). - **Opens and link tracking** — Outbound sends enable Postmark **open** and **link** tracking so webhook events can reflect engagement when webhooks are configured. - **Webhook analytics** — Postmark can POST events to FunnelStory (for example delivery, open, click). 
**Validate** calls Postmark’s **outbound stats** API with your token—if that succeeds, the token can reach Postmark’s API. ## Before you connect - In the [Postmark](https://postmarkapp.com/) UI, open the **Server** you use for transactional (or outbound) mail and copy the **Server API token** (FunnelStory field **`server_token`**). - **Verify** every **From** address or domain you plan to use, per Postmark’s requirements. - For **webhooks**, plan the HTTPS URL FunnelStory exposes for your workspace and connection (pattern: `/api/webhooks/{workspace_id}/data_connections/{data_connection_id}/postmark`). Enable the event types you care about in Postmark (see [Postmark email webhooks](https://postmarkapp.com/developer/webhooks/webhooks-overview)). ## Add the connection in FunnelStory 1. Open **[Configuration → Connections](https://app.funnelstory.ai/configure/connections)** → **Add connection**, then choose **Postmark**. 2. Complete the fields in the connection form: ![Postmark connection form](/img/data-connections/postmark-03-connection-form.png) | Field | Description | | ----- | ----------- | | **Connection name** | Display name in FunnelStory. | | **server_token** | Postmark **Server API token** for the server that will send mail. | | **reply_to_email** | Optional. Default **Reply-To** for sends using this connection when the workflow does not override it. | 3. Click **Validate**, then **Add Connection**. ## After you connect - In **workflows** (or other email features), choose this connection and a **From** address Postmark accepts. - In Postmark, add a **webhook** targeting your workspace URL for this connection so **Delivery**, **Open**, **Click**, and other supported events reach FunnelStory. ## Related links - [SendGrid](./sendgrid.md) - [Amazon SES](./ses.md) - [Mailgun](./mailgun.md) - [Data connections overview](../overview.md) --- ## SendGrid Connect **SendGrid** so your workspace can send email **through your own SendGrid account**. 
Messages are delivered from **addresses and domains you verify in SendGrid** (for example `notifications@yourcompany.com`)—not from FunnelStory’s domain. FunnelStory uses your **API key** only to send on your behalf and, when webhooks are configured, to **record delivery and engagement** in the product. ## What FunnelStory uses it for - **Workflow and product email** — Email actions in **workflows** (and similar features) can use this connection. You choose the **From** address in the workflow; it must be allowed by your SendGrid account. - **Event webhooks** — SendGrid can POST opens, clicks, bounces, deliveries, and related events to FunnelStory. Those events are tied to workflow runs when metadata matches, so you can see outcomes alongside automation in the workspace. ## Before you connect - In SendGrid, create an **API key** with permission to **send** mail. - **Verify** the sender identity or domain you will use for **From** addresses (SendGrid requirement). - Plan **Event Webhook** setup in SendGrid (URL, signing secret, and event types) using the values your workspace or onboarding provides—without that, sends still work but inbound analytics in FunnelStory may be incomplete. ## Add the connection in FunnelStory 1. Open **[Configuration → Connections](https://app.funnelstory.ai/configure/connections)** → **Add connection**, then choose **SendGrid**. 2. Complete the fields in the connection form: ![SendGrid API key and connection settings](/img/data-connections/sendgrid-03-configuration.png) | Field | Description | | ----- | ----------- | | **Connection name** | Display name in FunnelStory. | | **api_key** | SendGrid API key. | | **reply_to_email** | Optional. Default **Reply-To** for sends that use this connection when the workflow does not override it. | | **use_sandbox** | Optional. Set to `true` to use SendGrid’s **sandbox** during testing (no mail delivered to real inboxes). | 3. Click **Validate**, then **Add Connection**. 
## After you connect - Point **workflow email** steps (or other email features) at this connection and set **From** to an address verified in SendGrid. - Finish **Event Webhook** configuration in SendGrid so sent, delivered, opened, clicked, and bounce events can appear in FunnelStory. ## Related links - [Amazon SES](./ses.md) - [Mailgun](./mailgun.md) - [Data connections overview](../overview.md) --- ## Amazon SES Connect **Amazon SES** so your workspace can send email **through your AWS account**. Recipients see mail from **identities and domains you verify in SES** (for example `alerts@yourcompany.com`)—not from FunnelStory. FunnelStory stores **SES event notifications** (bounces, complaints, deliveries, and related types) when you route them through **Amazon SNS** (or an equivalent path) to FunnelStory’s endpoints, so engagement and delivery issues can surface in the product. ## What FunnelStory uses it for - **Workflow and product email** — Email actions in **workflows** can use this connection. The **From** address must be an identity permitted in **your** SES configuration. - **SES event ingestion** — Configuration-set events from SES (via SNS → HTTPS, as documented for your workspace) are stored and can drive visibility into sends tied to workflows. ## Before you connect - Create an **IAM user** (or a role whose keys you use) for FunnelStory. - **Validate connection (Ping)** uses **`sts:GetCallerIdentity`** with the access key, region, and secret you enter. For **actual sending**, the same principal also needs SES **sending** permissions in that region (for example permissions that allow `ses:SendEmail` / `ses:SendRawEmail` or the **v2** API operations your account uses). If Ping succeeds but sends fail, widen the IAM policy to match SES’s docs for programmatic sending. - Know the **`aws_region`** where you send and where SES is configured.
- For **event visibility in FunnelStory**, configure **SES → SNS → HTTPS** (or your org’s approved variant) so notifications reach the webhook URLs your workspace provides. ## Add the connection in FunnelStory 1. Open **[Configuration → Connections](https://app.funnelstory.ai/configure/connections)** → **Add connection**, then choose **Amazon SES**. 2. Complete the fields in the connection form: ![Amazon SES connection form](/img/data-connections/ses-03-connection-form.png) | Field | Description | | ----- | ----------- | | **Connection name** | Display name. | | **aws_access_key_id** | IAM access key ID. | | **aws_secret_access_key** | IAM secret access key. | | **aws_region** | AWS region (for example `us-east-1`). | | **reply_to_email** | Optional. Default **Reply-To** for sends using this connection. | 3. Click **Validate**, then **Add Connection**. ## After you connect - Use this connection in **workflow** email steps with a **From** address that SES accepts for your account. - Complete the **SNS (or equivalent) subscription** so the events SES publishes reach FunnelStory; without that path, sends may work while in-product event history stays empty. - Use **Refresh** on the connection per your workspace’s schedule or when troubleshooting. ## Related links - [SendGrid](./sendgrid.md) - [Mailgun](./mailgun.md) - [Data connections overview](../overview.md) --- ## Apollo The **Apollo** connection stores an **`api_key`** so FunnelStory can call Apollo’s APIs to **enrich** accounts and contacts during enrichment runs or downstream jobs. ## What FunnelStory uses it for - **Data Enrichment** — The connection supplies live lookups against Apollo. ## Before you connect - Generate an **API key** in Apollo with access to the endpoints your plan allows. ## Add the connection in FunnelStory 1. Open **[Configuration → Connections](https://app.funnelstory.ai/configure/connections)** → **Add connection**, then choose **Apollo**. 2.
Complete the fields in the connection form: ![Apollo connection form](/img/data-connections/apollo-03-connection-form.png) | Field | Description | | ----- | ----------- | | **Connection name** | Display name. | | **api_key** | Apollo API key. | 3. Click **Validate**, then **Add Connection**. ## After you connect Enable enrichment where the product exposes it (workflows, settings, or account tools). Monitor usage against Apollo rate limits. ## Related links - [Clearbit](./clearbit.md) - [Data connections overview](../overview.md) --- ## Clearbit The **Clearbit** connection stores an **`api_key`** so FunnelStory can call Clearbit for **company and person enrichment** when those features are enabled. ## What FunnelStory uses it for - **Enrichment only** — Data is fetched on demand from Clearbit. ## Before you connect - Copy the **secret API key** from Clearbit. ## Add the connection in FunnelStory 1. Open **[Configuration → Connections](https://app.funnelstory.ai/configure/connections)** → **Add connection**, then choose **Clearbit**. 2. Complete the fields in the connection form: ![Clearbit connection form](/img/data-connections/clearbit-03-connection-form.png) | Field | Description | | ----- | ----------- | | **Connection name** | Display name. | | **api_key** | Clearbit secret API key. | 3. Click **Validate**, then **Add Connection**. ## After you connect Use enrichment in the surfaces your workspace provides; avoid enriching addresses your policy treats as personal-only. ## Related links - [Apollo](./apollo.md) - [Data connections overview](../overview.md) --- ## Gainsight The **Gainsight** connection uses your Gainsight **tenant domain** and **access key** to query Gainsight’s **MDA/API** from FunnelStory’s **model** layer (virtual tables over HTTP), similar to other analytical, query-shaped sources. After the connection is saved, you **configure data models** in the model builder—Gainsight uses **GS block** queries. 
## What FunnelStory uses it for - **Data models** — Gainsight objects (timeline, company, and others) are exposed through models you define; map fields and align identifiers with **CRM** keys where your workspace needs a single account view. ## Before you connect - Obtain a Gainsight **`access_key`** with API access to the objects you need. - Know your Gainsight **`domain`** (API host / tenant identifier as provided by Gainsight—often the subdomain portion used in API URLs). ## Add the connection in FunnelStory 1. Open **[Configuration → Connections](https://app.funnelstory.ai/configure/connections)** → **Add connection**, then choose **Gainsight**. 2. Complete the fields in the connection form: ![Gainsight connection form](/img/data-connections/gainsight-03-connection-form.png) | Field | Description | | ----- | ----------- | | **Connection name** | Display name. | | **domain** | Gainsight API domain / tenant host fragment. | | **access_key** | Gainsight API access key. | 3. Click **Validate**, then **Add Connection**. ## After you connect In the **model** builder, add or edit models that use this connection: configure **GS blocks** (endpoint and body), map columns to FunnelStory **properties**, and **refresh** on schedule. See [Writing queries](../../getting-started/configuring-accounts/writing-queries) for Gainsight block format. ## Related links - [Salesforce](../crm/salesforce.md) - [Writing queries](../../getting-started/configuring-accounts/writing-queries) - [Data connections overview](../overview.md) --- ## Data Connections Overview A **connection** is a secure link between your FunnelStory workspace and an external system—your CRM, warehouse, support tool, chat app, email provider, or analytics product. For **databases** and **CRM** systems you add a connection, then configure **models** (queries, objects, mappings) so FunnelStory can pull structured data on a schedule. 
Many **app** integrations (support, chat, meetings, product analytics APIs, webhooks, and similar) work once the connection is authorized—**no model** setup is required in the model builder. ## How connections relate to models **Models** define *what* you want from queryable sources (accounts, revenue, custom objects, and so on). **Connections** define *where* that data lives. For **databases** and **CRM**, you configure a model by picking a connection, writing a query or choosing objects, and mapping fields to FunnelStory **properties**. Until those connections are valid and models are saved, FunnelStory cannot refresh that tabular/CRM data. **App connections** (for example Zendesk, Segment, or Slack) typically **do not** use that model flow: after you connect, FunnelStory ingests or receives the integration’s data according to the product’s built-in behavior. ## What you can connect | Category | Examples | Typical use | |----------|----------|-------------| | **Databases & warehouses** | PostgreSQL, MySQL, Snowflake, BigQuery, Databricks, … | Query tables and views for models (often the source of truth for accounts and revenue data). | | **CRM** | Salesforce, HubSpot, Attio | CRM objects for models. | | **Support & ticketing** | Zendesk, Intercom, Freshdesk, … | Tickets and conversations on accounts. | | **Communication & meetings** | Slack, Microsoft Teams, Zoom, Gong, … | Messages and meetings for conversation features. | | **Email** | SendGrid, SES, Mailgun, Postmark | Send from **your** ESP and domain in **workflows**. | | **Product analytics** | Mixpanel, Pendo, Segment | Product usage and events. | | **Enrichment** | Apollo, Clearbit | Enriching account and contact records. | | **Storage & search** | S3, Elasticsearch | Files or search-backed data for models. | ## Adding a connection 1. Open **[FunnelStory](https://app.funnelstory.ai)** and go to **Configuration → Connections**. 2. Click **Add connection**. 3. 
Choose the integration, enter the required fields, and use **Validate** to test. 4. Click **Connect** to save. OAuth-based apps (for example Salesforce or HubSpot) redirect you to the provider to approve access. API-key and database connections stay in the form. ## Refresh behavior Once a connection is established, FunnelStory **refreshes** data on a schedule and also lets you trigger a **manual refresh** from the connections UI where supported. Model runs use the latest successful data from the underlying connection. ## Network allowlists (firewalls and security groups) If your database or other self-hosted endpoint only accepts traffic from known IPs, allow FunnelStory’s **static egress addresses** listed in [Allowed IP addresses](./allowed-ip-addresses.md). ## Private databases and SSH tunnels If your database is not on the public internet, use an **SSH tunnel** so FunnelStory can reach it securely. See [SSH tunnels](./ssh-tunnels.md). ## Next steps - Configure your first data models: [Configuring the Account model](../getting-started/configuring-accounts/introduction). - Query formats by source: [Writing queries](../getting-started/configuring-accounts/writing-queries). - Per-product setup: browse the subsections in the sidebar (Databases & warehouses, CRM, Support, and so on). --- ## SSH Tunnels Use an **SSH tunnel** when your database sits in a private network or VPC and should not be exposed on the public internet. The tunnel encrypts traffic and only requires outbound SSH from a small host you control to FunnelStory’s tunnel service. ## When to use a tunnel Reach for a tunnel when: - The database has no public hostname, or your policy forbids opening it to the internet. - You prefer a bastion or jump host in front of the data store. After the tunnel is running, create your database connection in FunnelStory and select the tunnel configuration so queries route through it.
## What you need A Linux instance (for example a small EC2 instance) that: - Can connect to your database on the private network. - Can open an **outbound** SSH session to FunnelStory’s tunnel host. - Can keep a long-lived SSH process running (manually, via `autossh`, or `systemd`). ## Register a tunnel in FunnelStory 1. Go to **Configuration → Tunnels** (or your workspace’s tunnel management screen). 2. Click **Add tunnel** and follow the prompts. 3. Copy the **private key** FunnelStory provides and install it on your tunnel instance with restrictive permissions: ```bash chmod 0400 /path/to/key.pem ``` ## Start the tunnel FunnelStory shows an SSH command similar to: ```bash ssh -i key.pem -o "ExitOnForwardFailure=yes" -NR 127.0.0.1:50000:${DB_HOST}:${DB_PORT} tunnel@tunnel.funnelstory.ai ``` Replace placeholders with your database host, port, and the local port assigned in the UI. `ExitOnForwardFailure=yes` makes SSH exit if forwarding cannot be established. ### Optional: restart loop For a simple keep-alive loop: ```bash #!/bin/bash KEY_PATH="/path/to/key.pem" while true; do ssh -i "$KEY_PATH" -o "ExitOnForwardFailure=yes" -o "ServerAliveInterval=10" -o "ServerAliveCountMax=3" \ -NR 127.0.0.1:50000:my_private_postgres:5432 tunnel@tunnel.funnelstory.ai echo "Tunnel dropped; retrying in 5s..." sleep 5 done ``` ### Optional: autossh or systemd Use **`autossh`** or a **`systemd` unit** to supervise the tunnel so it restarts on failure and optionally starts on boot. Point `ExecStart` at the same `ssh` (or `autossh`) line the UI provides. ## Connect your database in FunnelStory When the tunnel process is healthy, add your PostgreSQL, MySQL, or other supported database connection in **Configuration → Connections** and choose the tunnel you registered. Validate and connect as usual. 
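The `systemd` option above can be sketched as a minimal unit file. Everything here is a placeholder mirroring the sample `ssh` command (unit name, key path, database host, and local port):

```ini
# /etc/systemd/system/funnelstory-tunnel.service (example; paths and ports are placeholders)
[Unit]
Description=FunnelStory SSH tunnel
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/bin/ssh -i /path/to/key.pem \
  -o "ExitOnForwardFailure=yes" \
  -o "ServerAliveInterval=10" -o "ServerAliveCountMax=3" \
  -NR 127.0.0.1:50000:my_private_postgres:5432 tunnel@tunnel.funnelstory.ai
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
```

Enable it with `sudo systemctl daemon-reload && sudo systemctl enable --now funnelstory-tunnel`, then check `systemctl status funnelstory-tunnel` after drops or reboots.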
## Related links - [Data connections overview](./overview.md) - Database guides: [PostgreSQL](./databases/postgres.md), [MySQL](./databases/mysql.md), [MS SQL Server](./databases/ms-sql.md) --- ## Elasticsearch The **Elasticsearch** connection talks to Elasticsearch **8.x** (or compatible) using either **host/port** or **Elastic Cloud ID**, with flexible auth options. ## What FunnelStory uses it for - **Data models** — SQL-style queries run against Elasticsearch-backed virtual tables. ## Before you connect - If the cluster only accepts traffic from known IPs, allow FunnelStory’s static egress addresses from [Allowed IP addresses](../allowed-ip-addresses.md). - Gather either: - **`host`** and **`port`** (HTTPS URL is built as `https://{host}:{port}`), or - **`cloud_id`** for Elastic Cloud. - Choose one auth mode: - **`user`** + **`password`**, or - **`api_key`**, or - **`service_token`**. ## Add the connection in FunnelStory 1. Open **[Configuration → Connections](https://app.funnelstory.ai/configure/connections)** → **Add connection**, then choose **Elasticsearch**. 2. Complete the fields in the connection form (only the combination your cluster needs): ![Elasticsearch connection form](/img/data-connections/elasticsearch-03-connection-form.png) | Field | Description | | ----- | ----------- | | **Connection name** | Display name. | | **host** | Cluster hostname (with **port**). | | **port** | HTTPS port (often `9243` or `443`). | | **cloud_id** | Elastic Cloud deployment id (alternative to host/port). | | **user** / **password** | Basic authentication. | | **api_key** | Base64-style API key auth. | | **service_token** | Bearer service token when used. | 3. Click **Validate**, then **Add Connection**. ## After you connect Author **queries** in the model builder and **refresh** on schedule. For clusters only reachable inside a VPC, use private networking options or an [SSH tunnel](../ssh-tunnels.md) if your workspace supports it. 
## Related links - [Allowed IP addresses](../allowed-ip-addresses.md) - [Writing queries](../../getting-started/configuring-accounts/writing-queries) - [Amazon S3](./s3.md) - [Data connections overview](../overview.md) --- ## Amazon S3 The **Amazon S3** connection reads a **single CSV object** from S3 using **IAM access keys** so you can build **file-backed models**. ## What FunnelStory uses it for - **Data models** — The configured object is loaded into FunnelStory’s query layer like other database-backed sources. ## Before you connect - Upload or maintain a **CSV** file in a bucket. - Create **IAM keys** with `s3:GetObject` on that object (and list permission if your policy requires it). - Note **`aws_region`**, **`s3_bucket_name`**, and the object **`s3_object_key`** (full key path to the file—not a folder prefix). ## Add the connection in FunnelStory 1. Open **[Configuration → Connections](https://app.funnelstory.ai/configure/connections)** → **Add connection**, then choose **Amazon S3**. 2. Complete the fields in the connection form. Enter **`aws_access_key_id`**, **`aws_secret_access_key`**, **`aws_region`**, **`s3_bucket_name`**, **`s3_object_key`**, and **`s3_file_type`** (`csv` only). Optionally set **`s3_csv_params`** as JSON, for example `{"delimiter": ";"}`. ![Amazon S3 connection form](/img/data-connections/s3-03-connection-form.png) | Field | Description | | ----- | ----------- | | **Connection name** | Display name. | | **aws_access_key_id** | IAM access key ID. | | **aws_secret_access_key** | IAM secret key. | | **aws_region** | Bucket region. | | **s3_bucket_name** | Bucket name. | | **s3_object_key** | Path to the CSV object. | | **s3_file_type** | Must be `csv`. | | **s3_csv_params** | Optional. JSON with CSV options (e.g. delimiter). | 3. Click **Validate**, then **Add Connection**. ## After you connect If the bucket policy restricts by IP, allow FunnelStory’s static egress addresses from [Allowed IP addresses](../allowed-ip-addresses.md). 
Define **models** against the loaded CSV and **refresh** after you replace the object or update the key. ## Related links - [Allowed IP addresses](../allowed-ip-addresses.md) - [Elasticsearch](./elasticsearch.md) - [SSH tunnels](../ssh-tunnels.md) - [Data connections overview](../overview.md) --- ## Freshdesk The **Freshdesk** connection syncs support tickets from Freshdesk into FunnelStory for **ticket views** and account-level support context. ## What FunnelStory uses it for - **Tickets** — Ticket records sync into FunnelStory’s support shape per built-in behavior. - **Account linkage** — Tickets associate with accounts based on requester email, company fields, or rules the product applies. ## Before you connect - In Freshdesk **Profile settings → API**, generate an **API key** for an agent with access to the tickets and contacts you need. - Know your **Freshdesk subdomain** (the hostname part before `.freshdesk.com`, same value Freshdesk uses in API URLs). ## Add the connection in FunnelStory 1. Open **[Configuration → Connections](https://app.funnelstory.ai/configure/connections)** → **Add connection**, then choose **Freshdesk**. 2. Complete the fields in the connection form: ![Freshdesk connection form](/img/data-connections/freshdesk-03-connection-form.png) | Field | Description | | ----- | ----------- | | **Connection name** | Display name. | | **subdomain** | Your Freshdesk subdomain (not the full URL). | | **api_key** | Freshdesk API key. | 3. Click **Validate**, then **Add Connection**. ## After you connect Once you’ve added the Freshdesk data connection, a **Support Ticket Model** will be automatically created for you. ## Related links - [Zendesk](./zendesk.md) - [Data connections overview](../overview.md) --- ## Intercom The **Intercom** connection brings conversations and related metadata from Intercom into FunnelStory for **conversation-style** views on accounts and support analytics. 
## What FunnelStory uses it for - **Conversations** — Intercom threads sync and link to accounts per the product’s built-in behavior. ## Before you connect - FunnelStory uses **OAuth** to access Intercom. An Intercom workspace admin must approve the app and required read scopes. ## Add the connection in FunnelStory 1. Open **[Configuration → Connections](https://app.funnelstory.ai/configure/connections)** → **Add connection**, then choose **Intercom**. 2. Complete the fields in the connection form: ![Intercom connection form](/img/data-connections/intercom-03.png) | Field | Description | | ----- | ----------- | | **Connection name** | Display name. | 3. Click **Connect** to start OAuth. 4. Sign in and authorize on Intercom. ![Intercom OAuth / sign-in](/img/data-connections/intercom-04.png) ## After you connect Once you’ve added the Intercom data connection, a **Conversation Model** will be automatically created for you. ## Related links - [Pylon](./pylon.md) - [Data connections overview](../overview.md) --- ## Jira The **Jira** connection (Jira Cloud via Atlassian OAuth) exposes Jira issues to FunnelStory’s **query/model layer**, similar to a database connection, so you can join issue data with CRM, product, and support sources. ## What FunnelStory uses it for - **Data models** — Define models with queries against Jira-backed virtual tables (JQL/issue data), not only a single “ticket” preset. ## Before you connect - Use an Atlassian account that can **OAuth** into your Jira Cloud site and read the projects you need. - Know your **site URL** (for example `https://yourcompany.atlassian.net`). This is stored as **`site_url`**. ## Add the connection in FunnelStory 1. Open **[Configuration → Connections](https://app.funnelstory.ai/configure/connections)** → **Add connection**, then choose **Jira**. 2. 
Complete the fields in the connection form: ![Jira connection form with site URL](/img/data-connections/jira-03-connection-form.png) | Field | Description | | ----- | ----------- | | **Connection name** | Display name. | | **site_url** | Base URL of your Jira Cloud site (e.g. `https://yourcompany.atlassian.net`). | 3. Click **Connect** / **Authorize** to complete OAuth with Atlassian. 4. When you return to FunnelStory, **Validate** and finish **Add Connection** (or **Connect**) as the UI shows. ## After you connect Add or edit **models** and write queries in the model builder as you would for other SQL-style sources. **Refresh** to ingest updated issues within the configured windows. ## Related links - [Writing queries](../../getting-started/configuring-accounts/writing-queries) - [Data connections overview](../overview.md) --- ## Pylon The **Pylon** connection imports customer support issues from Pylon into FunnelStory for **support ticket** views and account-level context. ## What FunnelStory uses it for - **Issues** — Pylon issues sync per built-in support-ticket behavior. ## Before you connect - Create a **Pylon API token** with access to the issues you want synced (`https://api.usepylon.com`). ## Add the connection in FunnelStory 1. Open **[Configuration → Connections](https://app.funnelstory.ai/configure/connections)** → **Add connection**, then choose **Pylon**. 2. Complete the fields in the connection form: ![Pylon connection form](/img/data-connections/pylon-03-connection-form.png) | Field | Description | | ----- | ----------- | | **Connection name** | Display name. | | **api_token** | Pylon bearer token. | 3. Click **Validate**, then **Add Connection**. ## After you connect Once you’ve added the Pylon data connection, a **Support Ticket Model** will be automatically created for you. 
## Related links - [Intercom](./intercom.md) - [Data connections overview](../overview.md) --- ## Zendesk The **Zendesk** connection pulls support tickets and related conversation data from Zendesk into FunnelStory so tickets can appear on **accounts** and feed support context in the product. ## What FunnelStory uses it for - **Support tickets** — Synced tickets and conversations are stored and linked to accounts when the requester’s email matches a known user in your workspace. - **Optional `match_tld`** — When ticket requesters use a **subdomain** email (for example `you@mail.customer.com`) that does not match any user, enabling **`match_tld`** makes FunnelStory retry matching after **dropping the leftmost label** of the domain (here, `you@customer.com`). If that still matches nobody, the ticket is skipped. Emails with only a **registrable domain + TLD** (for example `you@customer.com`, two hostname parts) are unchanged. This is a fallback for account linking, not a separate “TLD-only” match mode. ## Before you connect - FunnelStory connects to Zendesk with **OAuth**. A Zendesk admin can authorize the integration. - Know your **Zendesk subdomain** (the hostname part before `.zendesk.com`). ## Add the connection in FunnelStory 1. Open **[Configuration → Connections](https://app.funnelstory.ai/configure/connections)** → **Add connection**, then choose **Zendesk**. 2. Complete the fields in the connection form: ![Zendesk connection form with subdomain](/img/data-connections/zendesk-03.png) | Field | Description | | ----- | ----------- | | **Connection name** | Display name. | | **Subdomain** | Required. Your Zendesk subdomain. | | **match_tld** | Optional boolean. If the requester’s full email matches no user, retry once using the same local part and a domain with the **first subdomain label removed** (only when the domain has more than two dot-separated parts). Helps when corporate mail uses an extra subdomain. | 3. Click **Connect** to start OAuth. 4. 
Sign in on Zendesk’s authorization screen to finish linking the workspace. ![Zendesk OAuth / sign-in](/img/data-connections/zendesk-04.png) ## After you connect Once you’ve added the Zendesk data connection, a **Support Ticket Model** will be automatically created for you. ## Related links - [Freshdesk](./freshdesk.md) - [Data connections overview](../overview.md) --- ## Zoho Desk The **Zoho Desk** connection syncs tickets from Zoho Desk into FunnelStory for **ticket history** on accounts. ## What FunnelStory uses it for - **Tickets** — Tickets and metadata sync per built-in support behavior. - **Filtering** — Optional **`ignore_domains`** skips ticket requester emails whose domains you list (comma-separated). ## Before you connect - FunnelStory uses **OAuth** with Zoho. You need your Zoho Desk **organization ID** (`org_id`) from Zoho Desk admin/settings. - If your account uses a Zoho **data center** other than the default, you may need **`zoho_domain`** (for example `zoho.eu`); when empty, the product defaults to `zoho.com`. ## Add the connection in FunnelStory 1. Open **[Configuration → Connections](https://app.funnelstory.ai/configure/connections)** → **Add connection**, then choose **Zoho Desk**. 2. Complete the fields in the connection form: ![Zoho Desk connection form](/img/data-connections/zoho-desk-03-connection-form.png) | Field | Description | | ----- | ----------- | | **Connection name** | Display name. | | **org_id** | Required. Zoho Desk organization ID. | | **zoho_domain** | Optional. Zoho region host; defaults to `zoho.com` when unset. | | **ignore_domains** | Optional. Comma-separated email domains to exclude from ingestion. | 3. Click **Connect** to start OAuth in Zoho, then return to FunnelStory and **Validate** / **Add Connection** as the UI shows. ## After you connect Once you’ve added the Zoho Desk data connection, a **Support Ticket Model** will be automatically created for you. 
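The `ignore_domains` filter described above is a simple comma-separated domain exclusion. A minimal sketch of the matching (not FunnelStory code; assumes exact, case-insensitive domain comparison, and the helper name is illustrative):

```python
def should_ingest(requester_email: str, ignore_domains: str) -> bool:
    """Return False when the requester's email domain is on the ignore list."""
    ignored = {d.strip().lower() for d in ignore_domains.split(",") if d.strip()}
    domain = requester_email.rsplit("@", 1)[-1].lower()
    return domain not in ignored
```

For example, with `ignore_domains` set to `internal.example, test.example`, a ticket from `bot@internal.example` would be skipped while `you@customer.com` would be ingested.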
## Related links - [Zendesk](./zendesk.md) - [Data connections overview](../overview.md) --- ## Account metrics # Account metrics model **Account metrics** are **custom time-series** attached to accounts: any numeric (or parsed numeric) signal you want to trend over time—health scores, adoption indices, usage ratios, survey results, etc. They appear in account detail and feed **Metric changed** style behaviors where configured. ## Prerequisites - **[Accounts model](./accounts-model)** - Optional: **[Products](./products)** if you scope metrics with `product_id` ## Field reference ### Model keys | Property | Type | Description | |----------|------|-------------| | `account_id` | string | Account the metric row belongs to. | | `metric_id` | string | Stable name or id for the metric (for example `health_score`, `weekly_active_users`). | | `timestamp` | timestamp | Point in time for the sample. | ### Default properties | Property | Type | Description | |----------|------|-------------| | `value` | json | Numeric value stored in a JSON-compatible form; the pipeline parses numeric values from both strings and numbers. | ### Optional properties | Property | Type | Description | |----------|------|-------------| | `product_id` | string | When set, scopes the metric to a product from the [Products](./products) model for multi-product accounts. | ## Configure 1. **Add model → Account metric**. 2. Build a query that returns one row per sample: account, metric id, timestamp, and value. 3. Map to the properties above. Ensure timestamps are valid—see [Timestamp formatting](/getting-started/configuring-accounts/field-reference#timestamp-formatting). 4. **Validate**, **save**, **refresh**. 5. Open account detail for a test account and confirm the metric series. ## Notes - Duplicate rows for the same account, metric, product, and time window may be deduplicated or skipped depending on refresh logic; prefer idempotent queries or clear time bucketing. 
- Use consistent **`metric_id`** strings so charts aggregate correctly across refreshes. ## Related - [Products](./products) - [Usage](./usage) (plan-dimension utilization, distinct from custom metrics) - [Data models overview](./overview) --- ## Accounts model The **account** is the primary B2B customer entity in FunnelStory: the organization you sell to, renew, and measure. Every other model ultimately ties back to `account_id` from this model. The accounts model is **mandatory**. Configure it before users, subscriptions, activities, and most integrations. ## What you get - A canonical list of customer accounts with stable IDs - Properties for name, domain, revenue, churn, hierarchy, CRM links, and internal ownership - Optional **custom properties** for filtering, audiences, and workflows ## Account hierarchy If your customers include **subsidiaries, regions, or holding companies**, map **`parent_account_id`** (and optionally **`is_container`**) as described in [Configuring the account model](/getting-started/configuring-accounts/introduction). 
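Mechanically, `parent_account_id` is just a pointer from a child row to its parent row. For intuition, a sketch (not FunnelStory code, with illustrative ids) of walking such pointers to the top-level parent, guarding against accidental cycles:

```python
def top_parent(account_id, parent_of):
    """parent_of maps account_id -> parent_account_id (None or missing for roots)."""
    seen = set()
    while parent_of.get(account_id) and account_id not in seen:
        seen.add(account_id)
        account_id = parent_of[account_id]
    return account_id
```

So `top_parent("emea-subsidiary", {"emea-subsidiary": "holdco", "holdco": None})` resolves to `"holdco"`.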
For how rollups, **per-product funnels**, and **per-product predictions** behave—and how they interact with the **[Products](/data-models/products)** model—see the dedicated guides: - [Account hierarchy overview](/core-concepts/account-hierarchy/overview) - [Setting up hierarchy](/core-concepts/account-hierarchy/setting-up-hierarchy) - [Product-level funnels](/core-concepts/account-hierarchy/product-level-funnels) - [Product-level predictions](/core-concepts/account-hierarchy/product-level-predictions) - [Parent account rollups](/core-concepts/account-hierarchy/parent-rollups) ## Full configuration guide Step-by-step setup, query patterns for every connection type, the complete property list, advanced options (ARR, parent–child, assignments), examples, and troubleshooting live here: **[Getting started: Configuring the account model](/getting-started/configuring-accounts/introduction)** That guide replaces long account-specific content on this page. Use it as the source of truth while mapping columns and validating data. ## Minimum expectations At a minimum, your account model must provide a unique **`account_id`** per row. FunnelStory pre-populates recommended mappings such as **`name`**, **`domain`**, and **`created_at`**; filling these makes the rest of the product usable. See [Account model field reference](/getting-started/configuring-accounts/field-reference) for every supported property and data type. ## After configuration 1. Save and activate the model. 2. Run **Refresh model** (or wait for scheduled refresh). 3. Open **Accounts** in the app and confirm counts and key fields. 4. If something looks wrong, use [Verification & troubleshooting](/getting-started/configuring-accounts/verification). 
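Per the minimum expectations above, every row needs a unique, non-empty `account_id`. A quick pre-flight check you might run on your query output before mapping (a sketch, assuming rows as a list of dicts; not part of FunnelStory):

```python
def check_account_rows(rows):
    """Raise if any row lacks account_id or if ids repeat; return the row count."""
    ids = [r.get("account_id") for r in rows]
    if not all(ids):
        raise ValueError("every row needs a non-empty account_id")
    if len(ids) != len(set(ids)):
        raise ValueError("account_id must be unique per row")
    return len(ids)
```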
## Related - [Data models overview](./overview) - [Account hierarchy](/core-concepts/account-hierarchy/overview) (parent–child, products, rollups, funnels) - [Users model](./users-model) (depends on accounts) - [Subscriptions](./subscriptions) and [Plans](./plans) (entitlements) - [Products](./products) (catalog for multi-product and hierarchy) --- ## Conversations # Conversations model The **conversation** model represents threaded discussions across channels such as **chat**, **Slack**, or **support-style messaging**. It feeds sentiment and topic features where enabled and creates **Conversation created** timeline activity. ## Managed models from messaging connections Many workspaces get a **Conversation** model automatically when you connect a supported messaging integration—for example **Intercom**, **Slack**, or **Microsoft Teams** (Teams may also supply meetings; see [Meetings](./meetings)). 1. Add the connection from [Data connections](/data-connections/overview). 2. **Authorize** and complete any required setup. 3. Under **Configuration → Data models**, confirm the **Conversation** model for that connection. 4. **Refresh** and validate on an account with recent conversations. ## Query-based configuration On **HubSpot**, **Salesforce**, **warehouses**, and other query-capable sources, add a **Conversation** model manually and map rows to the properties below. ## Field reference ### Model keys | Property | Type | Description | |----------|------|-------------| | `account_id` | string | Account associated with the conversation. | | `conversation_id` | string | Stable id for the thread in the source system. | ### Default properties | Property | Type | Description | |----------|------|-------------| | `created_at` | timestamp | When the conversation started or was opened. | | `sentiment` | float | Optional aggregate sentiment. 
| ### Optional properties | Property | Type | Description | |----------|------|-------------| | `channel_type` | string | Channel (chat, email, etc.) when the source distinguishes it. | | `link` | string | Deep link to the conversation in the source app. | | `type` | string | Conversation type or category. | | `status` | string | Open, closed, snoozed, etc. | | `text_analysis` | json | Structured NLP output when present. | | `label_analysis` | json | Labels or topics extracted from content. | ## HubSpot and CRM notes When conversations (or email threads) are modeled from **HubSpot** or **Salesforce**, use the connection’s query format and map external ids and account keys consistently with your [Accounts](./accounts-model) and [Users](./users-model) models. See [Writing queries](/getting-started/configuring-accounts/writing-queries) for HubSpot HS blocks and SQL-style sources. ## Related - [Support tickets](./support-tickets) - [Meetings](./meetings) - [Data models overview](./overview) --- ## Meetings # Meetings model The **meetings** model captures **calls and meetings**: start time, duration, participants, and optional **transcript-derived** fields such as summary and sentiment. It powers **Meeting occurred** timeline activity and meeting-centric analytics. ## Typical sources Meeting models are commonly backed by: - **Zoom** - **Gong** - **Microsoft Teams** (may coexist with Teams **conversations**) - **UpdateAI** - **Fathom** Exact availability depends on your workspace’s connections; see [Data connections](/data-connections/overview). Setup is usually **connection-driven**: add the integration, authorize, then confirm the **Meeting** model under **Configuration → Data models** and **refresh**. ## Query-based configuration On unified CRM or warehouse connections (for example **Salesforce**, **HubSpot**, or a database), you can define a **Meeting** model with a query if your workspace supports it. 
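Whatever the source, each row should land in the shape the field reference below describes. A sketch of normalizing a provider payload into model columns (the payload field names are hypothetical, not any specific provider's schema), converting minutes to seconds:

```python
from datetime import datetime, timezone

def to_meeting_row(account_id, meeting):
    """meeting: hypothetical provider payload with an epoch start and minute duration."""
    return {
        "account_id": account_id,
        "meeting_id": str(meeting["id"]),
        "start_time": datetime.fromtimestamp(meeting["start_epoch"], tz=timezone.utc).isoformat(),
        # seconds, per the field reference (or your connector's documented unit)
        "duration": meeting["duration_minutes"] * 60,
    }
```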
## Field reference ### Model keys | Property | Type | Description | |----------|------|-------------| | `account_id` | string | Account associated with the meeting (must exist in accounts model). | | `meeting_id` | string | Stable identifier for the meeting in the source. | ### Default properties | Property | Type | Description | |----------|------|-------------| | `start_time` | timestamp | Scheduled or actual start. | | `duration` | integer | Length in seconds (or source-native unit as documented for your connector). | | `sentiment` | float | Optional aggregate sentiment when transcripts are analyzed. | ### Optional properties | Property | Type | Description | |----------|------|-------------| | `summary` | string | Short summary from notes or transcript analysis. | | `attendees` | json | List or structure of attendees. | | `invitees` | json | Invited participants if distinct from attendees. | | `source` | string | System or provider name for the meeting. | | `zoom_meeting_id` | integer | Provider-specific meeting id when using Zoom-related pipelines. | Additional mapped columns may be stored as custom properties where supported. ## Verification After refresh, open an account with known calls and check the timeline for **Meeting occurred** and any summary or sentiment fields in the UI. ## Related - [Conversations](./conversations) - [Data models overview](./overview) --- ## Non-product activity **Non-product activity (NPA)** models capture events that are **not** core in-app product usage: support-adjacent actions, marketing engagement, docs visits, emails, or any other stream you want on the account timeline without labeling it as product activity. ## Prerequisites Configure the **[Accounts model](./accounts-model)**. Map **`user_id`** when the event is attributable to a person and you want user-level joins; otherwise some rows may be account-only if your pipeline supports it. ## Model keys | Property | Type | Key? 
| Description | |----------|------|------|-------------| | `activity_id` | string | yes | Unique identifier for the event. | | `account_id` | string | yes | Account the event belongs to. | ## Default properties | Property | Type | Description | |----------|------|-------------| | `user_id` | string | User associated with the event, if known. | | `timestamp` | timestamp | When the event occurred. | Optional columns can be mapped to custom properties for filtering or reporting where supported. ## Activities Rows feed **Activity occurred** timeline events for this model’s name, similar to other activity-shaped models. ## Configure 1. **Configuration → Data models → Add model → Non-product activity**. 2. Choose a connection and provide a query or managed source. 3. Map to `activity_id`, `account_id`, and default properties. 4. **Validate**, **save**, **refresh**, then check an account timeline. Supported connections for this model type include warehouses, databases, HubSpot, Salesforce, Segment, SES-backed pipelines, and others per your workspace—see [Data connections](/data-connections/overview). --- ## HubSpot page views and marketing activity If you use **HubSpot**, you can bring **page view** and similar **marketing** events into FunnelStory as non-product activities. That requires: 1. **HubSpot tracking** on your marketing site, product surfaces, and documentation so page views are collected. 2. **Visitor identify** — HubSpot must associate page views with a **known contact email** for events to attach to contacts you can sync. Use HubSpot’s tracking API `identify` (for example after login) so prior page views can be backfilled to the contact. 3. A **HubSpot connection** in FunnelStory with a non-product activity (or activity) model whose query returns those events with ids and account (or user) linkage consistent with your accounts and users models. 
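The account linkage in point 3 is commonly email-domain based. A minimal sketch, assuming a lookup dict built from your accounts model's `domain` column (names are illustrative, not FunnelStory internals):

```python
def account_for_email(email, domain_to_account):
    """domain_to_account: e.g. {"customer.com": "acct_123"}, built from the accounts model."""
    return domain_to_account.get(email.rsplit("@", 1)[-1].lower())
```

A contact `You@Customer.com` then resolves to `acct_123`; unknown domains resolve to nothing and would need account-level mapping rules instead.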
### Identify visitors (tracking code) After the HubSpot tracking snippet loads, call `identify` with the user’s email when you know it (for example post-authentication): ```js _hsq.push([ 'identify', { email: 'user@example.com', }, ]); ``` See HubSpot’s documentation: [Identify a visitor](https://developers.hubspot.com/docs/api/events/tracking-code#identify-a-visitor). ### Single-page applications For SPAs, trigger page view tracking on route changes: ```js _hsq.push(['trackPageView']); ``` See [Track page view](https://developers.hubspot.com/docs/api/events/tracking-code#track-page-view). ### Install the snippet Use the tracking code from HubSpot **Settings → Tracking & analytics → Tracking code**. HubSpot’s guide: [Install the HubSpot tracking code](https://knowledge.hubspot.com/reports/install-the-hubspot-tracking-code). ### Map into FunnelStory Your model query should return stable `activity_id`, `account_id` (and `user_id` / timestamp) consistent with how you resolve HubSpot contacts to accounts—often via email alignment with the users model or account-level mapping rules. Validate mappings and refresh the model; then confirm events on an account timeline. For the HubSpot **connection** itself, see [HubSpot](/data-connections/crm/hubspot). --- ## Related - [Product & activity models](./product-activity) - [Data models overview](./overview) - [Conversations](./conversations) (for chat-style channels) --- ## Note # Note model The **note** model imports **external notes** (for example from a CRM or data warehouse) into FunnelStory’s notes system. Each row becomes or updates a **note** linked to one or more **accounts**, with optional author resolution by **email**. ## Prerequisites - **[Accounts model](./accounts-model)** — rows without matching accounts are skipped during refresh. ## Field reference ### Model key | Property | Type | Description | |----------|------|-------------| | `note_id` | string | Stable external identifier for the note; required. 
Rows with an empty `note_id` are skipped. | ### Default properties | Property | Type | Description | |----------|------|-------------| | `account_id` | string | Primary account id for the note (must exist in the accounts model). | | `title` | string | Note title or subject. | | `content` | string | Body text. | | `email` | string | Author email; if it matches a workspace user, the note is attributed to that user. | | `timestamp` | timestamp | Display or event time for the note. If unset, `created_at` is used when present. | ### Optional properties | Property | Type | Description | |----------|------|-------------| | `created_at` | timestamp | Creation time in the source; used when `timestamp` is missing. | | `account_ids` | json | JSON array of additional `account_id` strings to link the same note to multiple accounts. | ## Configure 1. **Configuration → Data models → Add model → Note**. 2. Choose a connection (Salesforce, HubSpot, warehouse, etc.). 3. Map `note_id`, `account_id`, `title`, `content`, and time fields. 4. **Validate**, **save**, **refresh**. 5. Open linked accounts in the app and confirm notes appear with expected attribution. ## Related - [Using notes in the product](../core-concepts/notes) — in-app notes, labels, templates, and tasks. - [Task](./task) - [Strategy](./strategy) - [Data models overview](./overview) --- ## Data models overview A **data model** in FunnelStory is a typed definition that tells the platform how to read rows from a [data connection](/data-connections/overview), map columns to **properties**, and keep that data **refreshed** on a schedule. Models power Accounts View, timelines, health, workflows, and analytics. ## Connections versus models - **Connection** — credentials and access to a system (warehouse, CRM, support tool, etc.). - **Model** — a named configuration bound to a connection: query (or managed source), column mappings, refresh interval, and optional joins. 
Some integrations **create and manage** certain models for you (for example support tickets from Zendesk). Others require you to **add a model** and supply a query and mappings (typical for databases and many CRM-backed models). ## Configure models in the product 1. Open **Configuration → Data models**. 2. **Add model** (or open an existing model), pick the **model type**, and choose a **connection**. 3. Provide a **query** or accept the managed source, then **map** each required column to a property name. 4. **Validate**, run a **quick test** if offered, and **save**. 5. Use **Refresh model** (or wait for the scheduled refresh) to load data. Exact query syntax depends on the connection type (SQL, SOQL, HubSpot HS blocks, etc.). For the account model specifically, see the dedicated guide: [Configuring the account model](/getting-started/configuring-accounts/introduction). ## AI-suggested models During onboarding, FunnelStory can **suggest** models and queries from your connection metadata (for example tables in a warehouse). Suggested models appear in the data model list; review mappings, run validation, then activate them like any other model. ## Model refresh Each model has a **refresh interval** (default six hours; your workspace may differ). Refreshes pull new and updated rows from the source, update entities, and drive timeline and metric updates. You can trigger a **manual refresh** from the Data models UI when you need data sooner. Timestamp and type expectations for mapped columns follow the same rules as account properties where applicable; see [Timestamp formatting](/getting-started/configuring-accounts/field-reference#timestamp-formatting) in the account field reference. 
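When in doubt about a mapped timestamp column, emitting an unambiguous ISO-8601 UTC string is a safe choice (a sketch; confirm the exact accepted formats in the linked field reference):

```python
from datetime import datetime, timezone

# Render a source timestamp as ISO-8601 in UTC before mapping it.
sample = datetime(2024, 5, 1, 12, 30, tzinfo=timezone.utc)
print(sample.isoformat())  # 2024-05-01T12:30:00+00:00
```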
## Model types (reference) | Type | Purpose | Documentation | |------|---------|----------------| | **Account** | Customer organizations; required foundation | [Accounts model](./accounts-model) → [full setup guide](/getting-started/configuring-accounts/introduction) | | **User** | People belonging to accounts | [Users model](./users-model) | | **Activity** | Generic in-product events | [Product & activity models](./product-activity) | | **Product activity** | Product usage events | [Product & activity models](./product-activity) | | **Product feature activity** | Events used for feature adoption | [Product & activity models](./product-activity) | | **Invite** | Invitations (sender, invitee, acceptance) | [Product & activity models](./product-activity) | | **Non-product activity** | Events outside core product (e.g. marketing page views) | [Non-product activity](./non-product-activity) | | **Support ticket** | Support cases | [Support tickets](./support-tickets) | | **Conversation** | Chat and messaging threads | [Conversations](./conversations) | | **Meeting** | Calls and meetings (recordings, transcripts) | [Meetings](./meetings) | | **Plan** | Plan definitions for entitlements | [Plans](./plans) | | **Subscription** | Entitlements and contracts per account | [Subscriptions](./subscriptions) | | **Usage** | Time-series usage of plan dimensions | [Usage](./usage) | | **Product** | Product catalog for multi-product setups | [Products](./products) | | **Account metric** | Custom account-level time-series metrics | [Account metrics](./account-metrics) | | **Note** | Notes attached to accounts | [Note](./note) | | **Strategy** | Account-scoped strategy fields | [Strategy](./strategy) | | **Task** | Work tracked per accounts and assignees | [Task](./task) | ## Suggested reading order 1. [Accounts model](./accounts-model) (mandatory) and the [configuring accounts](/getting-started/configuring-accounts/introduction) guide 2. 
[Users model](./users-model) if you need user-level analytics 3. [Plans](./plans), [Subscriptions](./subscriptions), [Usage](./usage), and [Products](./products) for revenue and entitlement analytics 4. Activity models ([product](./product-activity), [non-product](./non-product-activity)) for timelines and engagement 5. Integration-backed models ([support](./support-tickets), [conversations](./conversations), [meetings](./meetings)) as you connect those systems --- ## Plans # Plans model The **plan** model defines the **catalog of plans** your customers can subscribe to: names, plan types, and **default dimensions** (limits and feature flags). **Subscriptions** reference `plan_id` to inherit those defaults; per-subscription **dimensions** can override plan defaults for entitlements and **usage** reporting. ## Prerequisites Plans are referenced by **subscriptions**. Configure **accounts** first, then **plans** before or alongside **subscriptions** so `plan_id` values resolve correctly. ## Field reference ### Model key | Property | Type | Description | |----------|------|-------------| | `plan_id` | string | Stable identifier for the plan (SKU or internal id). | ### Default properties | Property | Type | Description | |----------|------|-------------| | `name` | string | Human-readable plan name. | | `type` | string | Plan category used in FunnelStory: typically one of `free`, `trial`, `paid`, `poc`, `partner`, or `internal` (align values with your billing system). | | `dimensions` | json | Default **dimension map** for this plan: keys are dimension names (for example seats, API units, feature flags); values are numeric limits or booleans as appropriate. These flow into subscription and usage analytics when subscriptions do not override them. | ### Optional custom properties You may map additional columns as custom properties for filtering or reporting where exposed. ## Configure 1. **Configuration → Data models → Add model → Plan**. 2. 
Choose a warehouse, database, or CRM connection that holds plan definitions. 3. Query one row per plan with `plan_id`, `name`, `type`, and `dimensions`. 4. **Validate** that each `dimensions` value is a well-formed JSON object. 5. **Save** and **refresh**. ## Relationship to subscriptions and usage - Each **subscription** row carries a `plan_id` and optional **`dimensions`** / **`products`** overrides—see [Subscriptions](./subscriptions). - **Usage** rows reference **`dimension`** names that must align with keys in plan and subscription dimension maps—see [Usage](./usage). ## Related - [Subscriptions](./subscriptions) - [Usage](./usage) - [Products](./products) - [Data models overview](./overview) --- ## Product & activity models FunnelStory distinguishes several **activity-shaped** models. All describe something that happened at a point in time, usually tied to an **account** and often a **user**. They differ by **semantic use** in the product (for example product usage vs invitations vs generic events). This page covers: 1. **Activity** (generic) 2. **Product activity** 3. **Product feature activity** 4. **Invite** ## Shared concepts - **Model key:** `activity_id` + `account_id` identify a row for the non-invite types: activity, product activity, and product feature activity. - **Invite** uses `user_id` + `account_id` + `invitee_email` as its composite key (see below). - Each row produces **Activity occurred** (or invite-specific) timeline events when configured. - **Prerequisites:** [Accounts model](./accounts-model). **Users model** is required when your events are user-scoped and you map `user_id`. For timestamp and JSON types, see the account [field reference](/getting-started/configuring-accounts/field-reference). --- ## 1. Activity (generic) Use the **Activity** model type when you have a single stream of in-product or application events and do not need the separate **product** vs **product feature** semantics. ### Field reference | Property | Type | Key? 
| Description | |----------|------|------|-------------| | `activity_id` | string | yes | Unique id for the event. | | `account_id` | string | yes | Owning account; must exist in the accounts model. | | `user_id` | string | no* | User who performed the action, if applicable. | | `timestamp` | timestamp | no | When the event occurred. | \*Required for a correct user association when the event is user-driven; omit only for account-level events. --- ## 2. Product activity **Product activity** captures usage inside your product (logins, actions, API calls, etc.). Use this when events represent general product engagement. ### Field reference Same shape as generic **Activity**: | Property | Type | Key? | Description | |----------|------|------|-------------| | `activity_id` | string | yes | Unique id for the event. | | `account_id` | string | yes | Owning account. | | `user_id` | string | no* | User associated with the event. | | `timestamp` | timestamp | no | Event time. | --- ## 3. Product feature activity **Product feature activity** uses the **same required fields** as product activity. Configure it when events should feed **feature adoption** metrics and reporting. Technically the mapping and query are the same shape; choose the model type that matches how FunnelStory should treat the stream. --- ## 4. Invite model The **invite** model tracks invitations: who was invited, by whom, to which account, and whether the invite was accepted. It is configured as its own model type (not as product activity), but is documented here alongside other user- and account-scoped events. ### Model keys | Property | Type | Key? | Description | |----------|------|------|-------------| | `invitee_email` | string | yes | Email of the invited person. | | `user_id` | string | yes | Inviter (or primary actor) user id in your system. | | `account_id` | string | yes | Account context for the invite. 
| ### Default properties | Property | Type | Description | |----------|------|-------------| | `accepted` | boolean | Whether the invite was accepted. | | `created_at` | timestamp | When the invite was sent or created. | ### Activities Invite models drive **Created invite** and **Updated invite** style activities on timelines when rules are enabled. --- ## Configure (query-based connections) 1. Ensure [Accounts](./accounts-model) (and usually [Users](./users-model)) are configured. 2. **Configuration → Data models → Add model** and choose **Activity**, **Product activity**, **Product feature activity**, or **Invite**. 3. Pick a connection (database, warehouse, HubSpot, Salesforce, Segment, Mixpanel, etc., depending on what your workspace supports). 4. Write a query that returns one row per event (or invite) with stable ids. 5. Map columns to properties, **validate**, **quick test**, **save**. 6. **Refresh** and confirm events on an account timeline. --- ## HubSpot and other CRM sources When **HubSpot** (or Salesforce) is your connection, you typically express the query using the connection’s native query format (for HubSpot, HS blocks in the query step—see [Writing queries](/getting-started/configuring-accounts/writing-queries)). Map HubSpot properties into the same FunnelStory property names as in the tables above. Ensure contact or user identifiers align with your **users** model if you map `user_id`. --- ## Segment, Mixpanel, Pendo These integrations support activity-shaped models per your workspace’s connection capabilities. Use **Product activity** or **Product feature activity** for in-product analytics streams, and map provider fields to `activity_id`, `account_id`, `user_id`, and `timestamp` (or equivalent) per your pipeline. 
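For any of these providers, the goal is one row per event in the shared activity shape. A sketch of reshaping a generic analytics payload (field names are hypothetical, not a specific provider's schema, and the event is assumed already resolved to an account):

```python
def to_activity_row(event):
    """event: hypothetical analytics payload already resolved to an account."""
    return {
        "activity_id": event["event_id"],
        "account_id": event["account_id"],
        "user_id": event.get("user_id"),  # absent for account-level events
        "timestamp": event["occurred_at"],
    }
```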
--- ## Related - [Non-product activity](./non-product-activity) (events outside core product, including marketing page views) - [Data models overview](./overview) - [Users model](./users-model) --- ## Products # Products model The **products** model is your **product catalog**: discrete products or SKUs that can be tied to **subscriptions** and **account metrics** for multi-product reporting. Use it when a single account can buy multiple products and you need consistent `product_id` values across models. ## Prerequisites - **[Accounts model](./accounts-model)** - Optional but common: **[Subscriptions](./subscriptions)** using the `products` JSON field; **[Account metrics](./account-metrics)** when metrics are product-scoped ## Field reference ### Model key | Property | Type | Description | |----------|------|-------------| | `product_id` | string | Stable product identifier across FunnelStory models. | ### Default properties | Property | Type | Description | |----------|------|-------------| | `name` | string | Display name. | | `description` | string | Optional longer description. | Custom properties may be mapped for segmentation. ## Configure 1. **Add model → Product**. 2. Query your source of truth for product definitions (warehouse table, CRM product object, etc.). 3. Map `product_id`, `name`, and `description`. 4. **Save** and **refresh**. 5. In **subscriptions**, ensure `products` JSON references these ids consistently. For **account metrics**, map `product_id` when metrics are per product—see [Account metrics](./account-metrics). ## Related - [Subscriptions](./subscriptions) - [Account metrics](./account-metrics) - [Data models overview](./overview) --- ## Strategy # Strategy model The **strategy** model stores **account-scoped strategy data**: one row (or aggregated row) per account with a flexible set of **properties**. Typical sources include CRM **strategy** or **account plan** objects synced from a warehouse or API-backed connection. 
## Prerequisites - **[Accounts model](./accounts-model)** ## How strategy rows work - Rows are keyed by **`account_id`**. - **Every column** you map becomes a **property** on the strategy entity for that account (including fields beyond the defaults below). Use consistent property names for reporting and workflows. - Invalid rows may be skipped on refresh; ensure `account_id` values exist in your accounts model. ## Field reference ### Required key | Property | Type | Description | |----------|------|-------------| | `account_id` | string | Account this strategy record describes. | ### Default guidance The configuration UI may pre-populate **`account_id`** as the minimum mapping. Add mappings for every strategic field your source provides (for example goals, priorities, renewal strategy, competitor notes)—each becomes a named property in FunnelStory. There is no fixed global list: treat this model like a **wide, account-level profile** driven by your query. ## Configure 1. **Add model → Strategy**. 2. Point at the connection and query that returns **one logical strategy row per account** (or that FunnelStory aggregates per account per refresh rules). 3. Map `account_id` and map additional columns to descriptive property names. 4. **Validate**, **save**, **refresh**. 5. Confirm strategy fields on account records in the product UI where exposed. ## Related - [Accounts model](./accounts-model) - [Note](./note) - [Tasks](./task) - [Data models overview](./overview) --- ## Subscriptions # Subscriptions model The **subscription** model represents **entitlements** for an account: which **plan** applies, the subscription **type**, validity window, optional **per-subscription dimensions**, and linked **products**. In many stacks the same model holds **opportunities**, **contracts**, or **deals** as long as you map them to these properties. 
## Prerequisites - **[Accounts model](./accounts-model)** - **[Plans model](./plans)** (strongly recommended so `plan_id` resolves) - **[Products model](./products)** if you use the `products` field ## Field reference ### Model keys | Property | Type | Description | |----------|------|-------------| | `subscription_id` | string | Stable id for this subscription or contract row. | | `account_id` | string | Account that holds the entitlement. | ### Default properties | Property | Type | Description | |----------|------|-------------| | `plan_id` | string | References a row in the **plans** model. | | `type` | string | Subscription category: typically `free`, `trial`, `paid`, `poc`, `partner`, or `internal`. | | `created_at` | timestamp | When the subscription record was created. | | `valid_from` | timestamp | Start of the entitlement window. | | `valid_until` | timestamp | End of the entitlement window (if applicable). | | `dimensions` | json | Per-subscription overrides to plan **dimensions** (same key semantics as plan `dimensions`). | | `products` | json | Product linkage for multi-product contracts—align with [Products](./products) `product_id` values where used. | ### Optional properties | Property | Type | Description | |----------|------|-------------| | `subscription_type` | string | Additional subtype or commercial label from your source when distinct from `type`. | Custom-mapped fields may appear as extra properties for filtering or workflows. ## Activities Subscription models participate in subscription **created** and **updated** style activities for timelines when those rules are enabled. ## Configure 1. **Add model → Subscription**. 2. Point at the connection that contains subscription, deal, or contract data. 3. Map identifiers and dates; ensure `account_id` matches the accounts model. 4. Serialize `dimensions` and `products` as JSON in the query result if those columns are not native JSON in the source. 5. 
**Validate**, **save**, **refresh**, then check subscription details on accounts in the app. ## Related - [Plans](./plans) - [Usage](./usage) - [Products](./products) - [Accounts model](./accounts-model) - [Data models overview](./overview) --- ## Support tickets # Support tickets model The **support ticket** model represents customer support cases. It powers ticket counts, sentiment, SLA-style signals where configured, and **Support ticket created** timeline activity. ## Managed models from support connections For several integrations, FunnelStory **creates a support ticket model automatically** when you add and authorize the connection—you do not pick a manual SQL query. Examples include **Zendesk**, **Freshdesk**, and **Pylon** (exact behavior follows your workspace’s integration version). ### Setup 1. Add the support connection from [Data connections](/data-connections/overview) (for example [Zendesk](/data-connections/support/zendesk)). 2. Complete **OAuth** or credentials and **authorize**. 3. Open **Configuration → Data models** and confirm a **Support ticket** model exists for that connection. 4. **Refresh** the model and open an account that has tickets to verify the timeline. ## Query-based configuration On **database**, **warehouse**, **HubSpot**, **Salesforce**, and other data sources that expose ticket-like objects, you can **add a support ticket model** manually: 1. **Add model → Support ticket**. 2. Select the connection and provide a query (or source) returning one row per ticket. 3. Map properties below, **validate**, **save**, **refresh**. ## Field reference ### Model keys | Property | Type | Description | |----------|------|-------------| | `account_id` | string | Account the ticket belongs to; must match your accounts model. | | `ticket_id` | string | Stable ticket identifier from the source system. 
| ### Default properties | Property | Type | Description | |----------|------|-------------| | `created_at` | timestamp | When the ticket was opened or created. | | `sentiment` | float | Optional sentiment score if your pipeline produces one. | ### Optional properties | Property | Type | Description | |----------|------|-------------| | `link` | string | URL to the ticket in the source system. | | `type` | string | Ticket type or category. | | `status` | string | Current status (open, pending, solved, etc.). | | `priority` | string | Priority label or code. | | `sla_breach` | string | SLA breach indicator if modeled in the source. | | `related_issues` | string | Related ticket or issue references. | | `text_analysis` | json | Structured output from text analysis pipelines. | | `label_analysis` | json | Structured label or topic output. | Custom-mapped columns become additional properties where the product consumes them. ## Verification After refresh, pick an account with known tickets and confirm **Support ticket created** (or equivalent) appears on the account timeline and counts look correct. ## Related - [Conversations](./conversations) - [Data models overview](./overview) --- ## Task # Task model The **task** model imports **work items** from an external system into FunnelStory **tasks**: titles, bodies, status, assignees (matched by email), due dates, and associations to **one or more accounts**. ## Prerequisites - **[Accounts model](./accounts-model)** — tasks without resolvable accounts are skipped for association. - Workspace **users** whose emails appear in `assignee_emails` for assignee matching. ## Field reference ### Model key | Property | Type | Description | |----------|------|-------------| | `task_id` | string | Stable external id; required. Empty ids are skipped. | ### Default properties | Property | Type | Description | |----------|------|-------------| | `title` | string | Task title. | | `created_at` | timestamp | Creation time in the source. 
| ### Optional properties | Property | Type | Description | |----------|------|-------------| | `content` | string | Body or description (mapped to task body in FunnelStory). | | `is_done` | boolean | When true, task is **done** / closed. | | `in_progress` | boolean | When true (and not done), task is treated as **in progress**. | | `due_at` | timestamp | Due date/time. | | `expires_at` | timestamp | Optional expiry. | | `priority` | string | Priority label; interpreted when it matches known priority values in the product. | | `assignee_emails` | json | JSON array of email strings; each is resolved to a workspace user when possible. | | `account_ids` | json | JSON array of `account_id` values to link the task to multiple accounts. | ## Behavior notes - **Status** is derived from `is_done` and `in_progress` when those flags are set. - Assignees are linked only for emails that resolve to users in your workspace. - Existing tasks are **updated** on refresh when the same `task_id` is seen again. ## Configure 1. **Add model → Task**. 2. Select the connection and query tasks from your source. 3. Map `task_id`, `title`, and timestamps; add optional fields as available. 4. Ensure `account_ids` (or equivalent) aligns with your accounts model. 5. **Validate**, **save**, **refresh**, then verify tasks on linked accounts. ## Related - [Note](./note) - [Strategy](./strategy) - [Data models overview](./overview) --- ## Usage # Usage model The **usage** model stores **time-series measurements** of how accounts consume **dimensions** defined on **plans** and **subscriptions** (for example seats consumed, API credits, storage). It powers utilization views such as **license utilization** percentages when those features are enabled. 
## Prerequisites - **[Accounts model](./accounts-model)** - **[Plans](./plans)** and **[Subscriptions](./subscriptions)** should define **dimension** keys so usage rows align with entitlements ## Field reference ### Model keys | Property | Type | Description | |----------|------|-------------| | `account_id` | string | Account being measured. | | `dimension` | string | Name of the dimension being reported; must match a key from plan or subscription `dimensions` for consistent rollups. | | `timestamp` | timestamp | Time the measurement applies to (sample time, period end, etc.). | ### Default properties | Property | Type | Description | |----------|------|-------------| | `value` | json | Measured usage. Often a number; JSON type allows structured payloads if your pipeline requires it. Parse expectations follow your workspace configuration. | ## Configure 1. **Add model → Usage**. 2. Provide a query that emits one row per account, dimension, and timestamp (or grain you use). 3. Map columns exactly to `account_id`, `dimension`, `timestamp`, and `value`. 4. **Validate**, **save**, **refresh**. 5. Confirm utilization or usage-driven widgets on **Accounts** for test accounts. ## Tips - Keep **`dimension`** strings stable across plan, subscription, and usage models. - If you aggregate usage in the warehouse, ensure **`timestamp`** reflects the period your charts expect (for example daily buckets). ## Related - [Plans](./plans) - [Subscriptions](./subscriptions) - [Data models overview](./overview) --- ## Users model The **users** model represents people who belong to customer accounts. It lets FunnelStory attribute product activity, invites, and other events to individuals, roll up to the account level, and power user-centric views. ## Prerequisites Configure the **[Accounts model](./accounts-model)** first. Every user row must reference a valid **`account_id`** that exists in your account model. 
## Model keys Rows are uniquely identified by the combination of **`user_id`** and **`account_id`** (the same person could theoretically appear under different accounts with different IDs). ## Field reference ### Required key properties | Property | Type | Description | |----------|------|-------------| | `user_id` | string | Stable identifier for the user within your product or source system. | | `account_id` | string | The account this user belongs to; must match an account from the accounts model. | ### Default properties (strongly recommended) | Property | Type | Description | |----------|------|-------------| | `name` | string | Display name. | | `email` | string | Email address; used for matching, enrichment, and assignment flows where applicable. | | `created_at` | timestamp | When the user record was created; used for lifecycle and timelines. | ### Optional properties | Property | Type | Description | |----------|------|-------------| | `role` | string | Product role or persona label; may be used for role tagging in FunnelStory. | | `traits` | json | Arbitrary JSON object of additional attributes from the source (for example Segment-style traits). | | `fs_user_id` | string | Internal or cross-system user identifier when you use one. | | `user_type` | string | Distinguish user categories if your source provides them. | ### Custom properties Any additional column you map becomes a **custom user property**, subject to naming rules (letters, numbers, spaces, hyphens, underscores). Custom properties can be used for filtering and display where the product exposes them. For general type and timestamp rules, see [Data type reference](/getting-started/configuring-accounts/field-reference#data-type-reference) and [Timestamp formatting](/getting-started/configuring-accounts/field-reference#timestamp-formatting). 
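As a concrete sketch of the mapping above, the query below aliases hypothetical source columns to the FunnelStory user property names and filters out rows where either key is null (sqlite3 stands in for your actual connection; the `app_users` table is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE app_users (
        id         TEXT,
        org_id     TEXT,
        full_name  TEXT,
        email      TEXT,
        created_at TEXT
    )
""")
conn.executemany(
    "INSERT INTO app_users VALUES (?, ?, ?, ?, ?)",
    [
        ("u-1", "acct-42", "Ada Lovelace", "ada@example.com", "2024-01-15T00:00:00Z"),
        ("u-2", None,      "No Account",   "lost@example.com", "2024-02-01T00:00:00Z"),
    ],
)

# Columns aliased to FunnelStory user property names. Rows with a null
# user_id or account_id are filtered up front rather than skipped later.
rows = conn.execute("""
    SELECT id         AS user_id,
           org_id     AS account_id,
           full_name  AS name,
           email      AS email,
           created_at AS created_at
    FROM app_users
    WHERE id IS NOT NULL AND org_id IS NOT NULL
""").fetchall()
print(rows)
```

Filtering nulls in the query keeps validation clean and makes it obvious which rows were dropped and why.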
## Activities from this model When a user model is saved, FunnelStory creates default activity rules such as **Created user** and **Updated user** so account timelines reflect user lifecycle changes. ## Configure the model 1. Add a [data connection](/data-connections/overview) that contains your users (warehouse, Postgres, HubSpot, Salesforce, etc.). 2. Open **Configuration → Data models → Add model → User**. 3. Select the connection and provide a **query** (or table selection) that returns one row per user per account with the properties above. 4. **Map** each source column to the correct property name. Ensure **`user_id`** and **`account_id`** are never null for rows you want to keep. 5. **Validate** the query, **run quick test** if available, then **save**. 6. **Refresh** the model and confirm user counts on the **Accounts** and **Users** views in the app. ## Related - [Accounts model](./accounts-model) - [Product & activity models](./product-activity) - [Data models overview](./overview) --- ## Accounts View The **Accounts** view (`/accounts`) is the primary grid for your customer organizations: predictions, renewal context, model-driven columns, and filters that match how RevOps and CS actually slice the business. Open it whenever you need to find accounts, narrow a cohort, sort by risk or revenue, or export a list for offline work. ## How the grid works Each row is an **account** from your **Account model**. Columns come from your workspace configuration—identifiers, ARR, funnel stage, prediction scores, custom properties, sparkline-style metric cells, and other fields your admin mapped. Not every workspace shows the same columns. ## Filtering and scope Use the **filter** control and sidebar chips to combine criteria. Common dimensions include: | Dimension | Purpose | |-----------|---------| | **Search** | Quick text match across configured searchable fields. | | **Prediction / revenue tag** | Narrow by churn, retention, or neutral outcomes. 
| | **Audiences** | Intersect the grid with one or more saved **[Audiences](../platform/audiences/overview.md)** segments. | | **Assignees** | Filter to accounts owned by specific internal users. | | **Funnel stage** | Limit to accounts in selected journey stages or stage states. | | **Time period** | Restrict metrics or summaries to a reporting window. | | **Advanced / selector filters** | Boolean groups (`AND` / `OR`) using the structured filter builder. | Active filters appear as chips; clear them to widen the result set again. Filter choices are reflected in the URL so you can bookmark or share a view. ## Sorting and pagination Column headers drive **sort** direction where sorting is enabled for that column. **Pagination** and **page size** are query-driven so large books of business stay responsive. Combine sort with filters to answer questions like “highest ARR among at-risk accounts this quarter.” ## Export When your role includes export, you can download the **currently filtered** account set as **CSV**. Export uses the same visible columns as the grid (chart-style cells are skipped). Use export for ad-hoc analysis; for recurring CRM pushes, prefer **[CRM sync](../platform/crm-sync/overview.md)** or **[Audience sync](../platform/crm-sync/audience-sync.md)**. ## Account hierarchy (multi-product) When **account hierarchy** is enabled for your workspace, the Accounts experience may switch to a layout optimized for **parent and child** revenue and adoption across products. You still filter and drill in, but summaries and rollups align to the hierarchy your admin configured—see **[Account hierarchy](../core-concepts/account-hierarchy/overview.md)**. ## Opening an account Select a row to open the **account detail** experience: predictions, subscriptions, timeline, relationship map (if enabled), notes, and other tabs your workspace exposes. ## Related - **[Dashboard overview](./overview.md)** — workspace-level summary cards. 
- **[Metrics](./metrics.md)** — metric columns and history. - **[Signals](./signals.md)** — rules that can appear in the timeline and drive alerts. - **[Audiences](../platform/audiences/overview.md)** — saved segments used in filters. - **[Predictions](../predictions/overview.md)** — scores behind prediction filters. - **[Account model](../data-models/accounts-model.md)** — where row data originates. --- ## Custom Dashboards **Custom dashboards** are curated, chart-first layouts—often provisioned for **enterprise** workspaces. Use them when leadership needs a fixed board with specific KPIs, branding, and layout, while analysts still use **Accounts** and **Journey** for operational drill-down. ## Layout and editing When edit mode is available, authorized users can **rearrange** chart tiles on a responsive grid and **save** layout positions so the view persists for everyone. ## Time range Charts honor a workspace-level **time range** control (commonly defaulting to roughly the last week). Align the window with how your team runs standups or pipeline reviews. ## Requesting or changing a custom dashboard Custom dashboards are **configured by the FunnelStory team** together with your admins: target personas, metrics definitions, chart types, color palette, and refresh expectations. Contact your **FunnelStory account manager** or customer success lead with: - The decisions each dashboard should support (executive QBR, CS capacity, product adoption). - The **metrics**, **funnels**, and **segments** that must appear. - Any **export** or **deep link** requirements into **Accounts** or **Journey**. ## Related - **[Dashboard overview](./overview.md)** — default workspace dashboard. - **[Metrics](./metrics.md)** — definitions behind charts. - **[Accounts view](./accounts-view.md)** — drill-down from dashboard KPIs. - **[Funnels](../platform/funnels/overview.md)** — funnel analytics alongside dashboards. 
--- ## Metrics **Metrics** in FunnelStory are numeric or aggregate measures tied to **accounts** (and sometimes **users** or **products**) that refresh with your workspace **data models**. You interact with them on the **default dashboard** as summary cards, inside the **Accounts** grid as columns or sparklines, on **account detail** as trends, and—when enabled—under **Other views → Property charts** for cross-account distributions. ## How metrics are produced Admins define **account metrics** (and related configurations) in **Configure → Models** and related tooling. Each metric has a definition, aggregation window, and display rules. After a **refresh**, FunnelStory recomputes values so dashboards and grids stay aligned with the same underlying numbers. ## Account-level metric history On an **account**, open the metrics or analytics sections your workspace exposes (wording varies by version). There you can: - Compare current values to prior periods when history is stored. - See **trend** visualizations for key drivers of health or adoption. - Correlate movements with **Timeline** events or **Needle movers**. If a metric looks flat or empty, confirm the underlying **data connection** and model mapping are populating for that account. ## Property charts (workspace-wide) When **Property charts** is enabled for your user, **Other views → Property charts** lists charts built from **account properties** across the full account set. Pick a time range to see how distributions or aggregates shift week over week. This is useful for segmentation reviews with Marketing or Finance that go beyond a single account. Access is often gated by workspace policy—if you do not see the view, ask an admin. ## Using metrics with filters and signals - In **Accounts**, metric columns and sparklines respect the same **filters** as the rest of the grid. 
- **Signals** can reference metric thresholds; when a signal fires, it may appear on **Timeline** and power **[Notifications](../platform/notifications/overview.md)** or **[CRM sync](../platform/crm-sync/overview.md)**. ## Related - **[Accounts view](./accounts-view.md)** — metric columns in the grid. - **[Dashboard overview](./overview.md)** — summary cards. - **[Account metrics](../data-models/account-metrics.md)** — data model reference. - **[Signals](./signals.md)** — threshold-driven rules. - **[Configuring accounts](../getting-started/configuring-accounts/01-introduction.md)** — how model fields feed metrics. --- ## Dashboard Overview The **Dashboard & insights** area of FunnelStory is where you see your book of business at a glance, drill into **accounts**, follow **metrics** and **signals**, and review **activity over time**. ## How the pieces fit together | Area | Route (typical) | When to use it | |------|------------------|----------------| | **Accounts** | `/accounts` | Search, filter, sort, and export the customer list; open any account for detail. | | **Metrics & charts** | Dashboard cards, account detail, **Other views → Property charts** (when enabled) | Trends, distributions, and account metric history. | | **Timeline** | Inside an **account** (or user) | Chronological product activity, integrations, and optional **signals** on one record. | | **Signals** | **Configure → Signals** (when available) | Workspace rules that drive alerts, filters, and downstream automation. | | **Custom dashboards** | `/v2/dashboards` (when provisioned) | Curated, chart-first layouts built with your FunnelStory team. | Exact navigation labels can vary slightly by role and workspace configuration. ## Other entry points - **Focus Areas** — When enabled, a guided landing for prioritized work (otherwise you may land on **Accounts**). - **Journey** — Funnel analytics and stage configuration; complements dashboard funnel cards. 
- **Relationship maps** — On an account, visualize internal–external relationship strength; see **[Relationship maps](./relationship-maps.md)**. ## Related - **[Accounts view](./accounts-view.md)** — filters, columns, and export. - **[Metrics](./metrics.md)** — reading trends and account metric history. - **[Timeline](./timeline.md)** — event stream on an account. - **[Signals](./signals.md)** — workspace signal rules. - **[Custom dashboards](./custom-dashboards.md)** — enterprise layouts on `/v2/dashboards`. - **[Funnels](../platform/funnels/overview.md)** — journey definitions behind stage-based reporting. --- ## Relationship maps A **relationship map** visualizes **who on your team talks with which customer contacts**, how **strong** those ties are, and how **frequently** conversations happen—grounded in the meetings, emails, chats, and tickets FunnelStory already ingests. Open it on an account when you need a **single-pane map** for coverage planning, escalation handoffs, or spotting single-threaded accounts. ## What you see on the map - **Team members** — internal users with mapped relationships. - **Contacts** — customer-side people pulled from your CRM or directory data. - **Edges** — lines between a team member and a contact, sized and shaded by **relationship score**, **weekly frequency**, **sentiment**, and **conversation mix** (email, chat, ticket, meeting). Controls let you **search** people, filter by **conversation type**, adjust the **time range**, and raise or lower how many **top relationships** display when an account is very large. ## Analysis panel Expand **Analysis** to load a **natural-language summary** of the relationship graph for the selected window. The copy is generated from the same signals powering the map, so it is useful for pasting into account briefs or QBR docs when you need narrative context fast. 
## Data expectations Relationship maps only populate when FunnelStory has **conversations** (or related activity models) tied to the account. Sparse data yields shorter maps—fix upstream **[Conversations](../data-models/conversations.md)** or **[Meetings](../data-models/meetings.md)** coverage before troubleshooting the visualization itself. ## Related - **[Accounts view](./accounts-view.md)** — open an account before launching the map. - **[Predictions overview](../predictions/overview.md)** — account-level risk and opportunity scores alongside relationship context. - **[Needle movers](../needle-movers/overview.md)** — high-signal conversation events that often feed relationship strength. - **[Accounts](../core-concepts/accounts.md)** — the anchor record for every map. --- ## Signals A **signal** is a workspace rule that **fires** when an account (or related entity) meets a defined condition—often on a **metric** threshold, a **prediction** state, or another modeled attribute. Signals bridge analytics and action: they can appear on **Timeline**, drive **[Notifications](../platform/notifications/overview.md)**, inform **[CRM sync](../platform/crm-sync/overview.md)** mappings, and power **[AI agents](../platform/notifications/ai-agents.md)** triggers when you need more than a single static alert. ## Common signal patterns | Pattern | Example use | |---------|-------------| | **Metric threshold** | “Support tickets opened in 7d > 5” for proactive outreach. | | **Prediction-based** | “Churn outcome with high confidence” to prioritize QBRs. | | **Stage or funnel** | “Account dropped two stages in 30d” for RevOps review. | | **Composite** | Combine properties and metrics when your workspace supports richer rule builders. | Exact operators depend on the metric type (numeric, categorical, windowed). 
## Where signals surface for day-to-day users - **Accounts filters** — Narrow the grid to accounts that currently satisfy (or recently triggered) selected signals, depending on how your workspace wires filters. - **Timeline** — Toggle **Include signals** to see firings alongside behavioral events on an account. - **Notifications** — Built-in Slack or Teams posts can subscribe to signal-driven events. - **CRM sync** — Optional outbound fields may include signal state when mapped. For branching playbooks or LLM summarization, prefer **AI agents** over stacking dozens of overlapping signals. ## Related - **[Accounts view](./accounts-view.md)** — signal-aware filtering. - **[Timeline](./timeline.md)** — include signal events in the feed. - **[Metrics](./metrics.md)** — what thresholds measure. - **[Notifications](../platform/notifications/overview.md)** — channel delivery. - **[Predictions](../predictions/overview.md)** — prediction-backed conditions. --- ## Timeline The **Event timeline** is a chronological feed of what happened for an **account** or **user**—product activity, integrations, model changes, and (optionally) **signals**—so CSMs and AMs can reconstruct the story before a call or escalation without opening five tools. ## Where to find it Open any **account** from **Accounts** (`/accounts`) and choose the timeline or activity tab. A similar timeline exists on **user** detail when your hierarchy exposes user-level history. ## How events are grouped Events load in **reverse chronological** order, bucketed by **calendar day** for scanning. Each row shows a human-readable **message**, a **relative timestamp** (for example “3 days ago”), and an absolute timestamp on hover. Some rows include an **additional details** control when structured payload or links are available. The list **loads more as you scroll** so long histories stay performant. 
## What appears in practice Typical categories include **product activity** (logins, feature usage), **CRM or support** updates, **conversation-derived** highlights when those models are connected, **metric** or **prediction** changes when wired into timeline, and **subscription** or contract milestones. Exact event types depend on which **data models** and processors your workspace runs. ## Using timeline with other views Pair timeline review with **[Predictions](../predictions/overview.md)** and **[Needle movers](../needle-movers/overview.md)** on the same account: predictions show where things stand; needle movers explain sharp changes; timeline shows the sequence that led there. ## Related - **[Accounts view](./accounts-view.md)** — opening accounts. - **[Signals](./signals.md)** — optional signal rows in the feed. - **[Metrics](./metrics.md)** — metric shifts that may log timeline entries. - **[Product activity](../data-models/product-activity.md)** and **[Non-product activity](../data-models/non-product-activity.md)** — common event sources. --- ## Account Predictions Every account in FunnelStory has a health score and a predicted outcome, updated continuously as your data refreshes. These appear throughout the product — on account cards, in the accounts view, and on each account's detail page. ## The Health Score Badge The health score is displayed as a two-part badge: - **Left**: the numeric score (0–100) - **Right**: the confidence level (High, Medium, Low) The badge is color-coded by predicted outcome: **green** for retention, **pink** for churn, **gray** for neutral. The shade reflects confidence — a high-confidence churn prediction is darker than a low-confidence one. 
## Viewing Predictions in the Accounts View The accounts view includes a prediction filter that lets you focus on a specific outcome: - **All** — show every account regardless of prediction - **Churn** — accounts predicted to churn - **Retention** — accounts predicted to renew - **Neutral** — accounts with no strong signal The `health score` and `confidence` columns are always visible. You can sort by health score to quickly surface the most at-risk accounts in your book of business. ## Account Detail Opening an individual account shows the full prediction breakdown: - Health score and predicted outcome - Confidence level, with a data confidence warning if FunnelStory has limited data on this account (for example, few historical data points or a short tenure) - **Contributors** — the factors pushing the score toward retention - **Detractors** — the factors pulling it toward churn - Each factor includes the account's current value and how that compares to the broader population ## Score Trends The prediction chart on each account shows how the health score has moved over time. A declining score over several months is often more meaningful than a single low reading — it indicates a worsening trajectory rather than a stable at-risk state. ## Data Confidence Alongside prediction confidence, FunnelStory also surfaces a separate **data confidence** indicator. This reflects how much data is available for the account. A high-confidence prediction ("High") on low data might still be less reliable than a medium-confidence prediction on a well-established account with years of history. When data confidence is medium or low, a warning is shown with a brief explanation — for example, "Insufficient data" or "Limited conversation history." 
## Related - [How Predictions Work](./overview.md) — health scores, outcomes, and confidence explained - [Confidence Ratings](./confidence-ratings.md) — how confidence levels are determined - [What-If Analysis](./what-if-analysis.md) — simulate changes and see their impact on the score --- ## Confidence Ratings Every prediction includes a confidence rating — **High**, **Medium**, **Low**, or **Neutral** — that tells you how strongly the account's data supports the predicted outcome. ## Confidence Levels | Level | Meaning | |-------|---------| | **High** | The account's data strongly matches the predicted outcome. Act on this. | | **Medium** | A moderate signal — meaningful but not conclusive. Worth monitoring and investigating. | | **Low** | A weak signal. The prediction is directional but should be one input among several. | | **Neutral** | No meaningful signal in either direction, often due to limited data. | Confidence is derived from the strength of the prediction score. A score far from neutral — in either direction — produces a higher confidence rating. A score close to neutral produces a lower one. ## Data Confidence Confidence ratings reflect the strength of the signal, not the quality of the underlying data. FunnelStory also surfaces a separate **data confidence** indicator that reflects how much data is available for the account. When data confidence is medium or low, a warning appears on the account with a brief explanation: - **Insufficient data** — the account doesn't have enough history for a reliable prediction - **Limited conversation history** — few or no meetings, notes, or activities have been logged A high prediction confidence with low data confidence means the model found a pattern, but there isn't much data behind it. Treat those predictions with appropriate skepticism. ## Using Confidence in Practice **High confidence, churn** — prioritize these accounts for immediate outreach. 
The signal is strong enough to act on without waiting for more data. **Medium confidence, churn** — schedule a check-in. Review the detractors to understand what's driving the signal before the conversation. **Low confidence** — keep an eye on trend direction. A low-confidence churn prediction that's been moving lower over several months is worth treating more seriously than an isolated reading. **Neutral** — focus elsewhere unless there's a near-term renewal. Neutral accounts with upcoming renewals are worth a proactive touch regardless of prediction. ## Related - [Account Predictions](./account-predictions.md) — where confidence is displayed - [How Predictions Work](./overview.md) — how scores and outcomes are calculated --- ## Predictions A **Prediction** is an account-level score that estimates the probability of churn or renewal — calculated continuously from your actual customer data, not a manually configured formula. Where health scores ask you to decide in advance what matters, FunnelStory's prediction models learn the patterns from your own historical outcomes: which accounts renewed, which churned, and what their data looked like in the months before. The result is a score grounded in the specific reality of your customer base, not a generic industry template. ## The Health Score Every account receives a **health score from 0 to 100**. - **50** is neutral — no strong signal in either direction - **Below 50** — increasing churn risk - **Above 50** — healthy trajectory trending toward renewal The score is a net result of two competing signals: the probability the account will stay, weighed against the probability it will churn. When both signals are strong, the score reflects genuine uncertainty — an account with high product usage but also high support escalations, for example, will land near the middle until the pattern resolves.
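Because the health score is exposed as a column on the semantic `accounts` table, you can also pull your most at-risk accounts directly in an agent query. The sketch below follows the agent examples later in this document; the `health_score` and `churned` column names are assumptions borrowed from those examples, so verify them against your workspace schema:

```sql
-- Sketch: surface active accounts well below the neutral score of 50.
-- Column names (health_score, churned) are assumptions; check your schema.
SELECT account_id, name, health_score
FROM accounts
WHERE churned = 0
  AND health_score < 40
ORDER BY health_score ASC
LIMIT 50;
```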
## Predicted Outcome Each account is assigned a predicted outcome alongside its health score: | Outcome | What it means | |---------|---------------| | **Churn** | The account matches patterns historically associated with churn | | **Retention** | The account matches patterns historically associated with renewal | | **Neutral** | No strong signal in either direction | ## Confidence Every prediction includes a confidence level reflecting how clearly the data matches the predicted outcome: | Confidence | What it means | |------------|---------------| | **High** | Strong, clear signal — the prediction is reliable and actionable | | **Medium** | Moderate signal — worth investigating and acting on | | **Low** | Weak signal — use alongside other context | | **Neutral** | Insufficient data to form a reliable prediction | Low confidence most commonly appears for newer accounts that haven't yet accumulated enough history to match patterns clearly. ## Driving Factors Each prediction surfaces the specific factors contributing to the score, split into two categories: - **Increase/Maintain these values** — factors currently supporting retention. Protecting these is as important as addressing risk signals. - **Decrease/Maintain these values** — factors that are pushing the score toward churn. These are your intervention priorities. Each factor shows the account's current value on a min-max scale relative to the broader population. This makes it immediately clear whether an account is above or below average on any given signal — and by how much. Driving factors pull from both structured data (product usage events, CRM attributes, support activity) and unstructured data (conversation sentiment, ticket themes, meeting transcripts). This combination is what allows the model to surface signals that pure usage-based health scores miss entirely. ## What-If Analysis The **What-If Analysis** lets you simulate how changing an account's data would affect its prediction. 
Enter a hypothetical value for any driving factor — reduced usage, fewer active users, resolved support tickets — and see the projected impact on the health score. This is useful for prioritizing which gaps to close ahead of a renewal conversation: if increasing one metric would move the score substantially, that's where to focus. ## How Predictions Learn Your Business FunnelStory models are trained on your specific outcomes, not a generic baseline. The system learns what "churn" and "retention" look like in your customer base by analyzing historical accounts — which ones renewed, which ones churned, and what combinations of signals preceded each. This is configured through **Revenue Tags**: you define what a churned account looks like and what a retained account looks like, using filters or specific account examples. The prediction model uses these labeled examples as its training set. The more precise your Revenue Tags, the more accurately the model can learn the patterns that matter for your specific business. Your FunnelStory team works with you during setup to configure these correctly. ## How Predictions Improve Over Time Prediction models continuously improve as new outcomes are recorded. When an account predicted to churn actually churns — or an account predicted to renew actually renews — that outcome is used to validate and refine the model in the next training cycle. Missed predictions are equally valuable: an account the model predicted as healthy that churned unexpectedly teaches the model to look for signals it may have underweighted. This feedback loop means the model becomes more accurate over time, adapting to changes in your customer behavior, your product, and your market. You can trigger a manual retrain from the Revenue Tags configuration page when you've made significant changes to your tagging criteria. 
## Per-Product Predictions For accounts with multiple products, FunnelStory generates **per-product predictions** — a separate health score and driving factors breakdown for each product line. This is useful when: - Different products have different renewal timelines or contract structures - A single account has separate CSM ownership for different products - You want to isolate which product relationship is at risk before a consolidated renewal conversation ## Acting on Predictions Predictions are designed to trigger action, not just inform awareness. From any account's prediction view, you can: 1. **Review driving factors** — understand exactly what is moving the score before engaging the customer 2. **Run a What-If analysis** — model which interventions would have the most impact 3. **Jump to Needle Movers** — see the specific conversation signals and behavioral changes behind the prediction 4. **Launch a playbook** — execute a structured response workflow directly from the prediction detail 5. **Create a CRM task** — push the risk or opportunity to Salesforce or HubSpot for Account Executive follow-up 6. **Ask Renari** — get an AI-synthesized action recommendation with full account context ## Relationship to Needle Movers Predictions and [Needle Movers](../core-concepts/needle-movers.md) are complementary, not redundant. A Prediction gives you the **score** — the probability that an account will churn or expand. A Needle Mover gives you the **reason** — the specific, sourced signal (a competitor mentioned in a QBR, a champion who has gone quiet, an unresolved pricing concern) that is moving that probability. Together they provide both the "what" and the "why" needed to take confident action. 
## Related - [Needle Movers](../core-concepts/needle-movers.md) — the specific signals driving prediction scores - [Customer Intelligence Graph](../core-concepts/customer-intelligence-graph.md) — how prediction scores are computed and stored as derived intelligence - [How FunnelStory Works](../core-concepts/overview.md) — where predictions fit in the pre-computed intelligence layer - [AI Agents](/ai/agents-overview) — automating responses when predictions cross risk thresholds --- ## Prediction Triggers When an account's prediction changes, FunnelStory emits a **signal**. You can use that signal to notify your team or kick off automated follow-up. ## Notifications Signals can be wired to notification triggers, which deliver alerts to **Slack** or **Microsoft Teams** when a prediction changes. Notifications are configured by workspace admins under **Settings → Notifications**. From there, you can: - Select which accounts are in scope (all accounts, specific segments, or accounts assigned to specific CSMs) - Choose the delivery channel - Customize the message template A typical notification includes the account name, health score, predicted outcome, confidence level, and a direct link to the account in FunnelStory. ## Email via Agents For email delivery, you can configure an [AI agent](../ai/agents-overview.md) to watch for prediction signals and send email summaries. Agents can be set up to run on a schedule or in response to signal events, and the email content is generated based on the account's current prediction data. This is useful for weekly digests of at-risk accounts or immediate alerts when a high-confidence churn signal emerges. 
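As a sketch, an agent that reacts to prediction signals would use a `signal` trigger in its `trigger_config`. The shape below follows the trigger reference later in this document; the rule ID is a placeholder you would replace with a real signal rule from your workspace:

```json
"trigger_config": {
  "type": "signal",
  "signal": {
    "rule_ids": ["<your-prediction-signal-rule-id>"]
  }
}
```

The flow body can then read fields such as `$.trigger.signal.account_id`, fetch additional context with a `semantic.query` step, and finish with a `CALL` to `email.send`.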
## Related - [Account Predictions](./account-predictions.md) — how predictions are displayed per account - [Confidence Ratings](./confidence-ratings.md) — understanding what confidence levels mean - [AI Agents](../ai/agents-overview.md) — configuring agents for email notifications and automating actions in response to account signals --- ## Product-Level Predictions For accounts with multiple products, FunnelStory generates a separate prediction for each product — its own health score, outcome, confidence level, and factor breakdown. This is useful when different products have different renewal dates, different owners, or meaningfully different usage patterns. An account might be healthy overall while one product is quietly at risk. ## What's Included Each product prediction includes: - **Health score** — 0–100, same scale as account-level predictions - **Predicted outcome** — churn, retention, or neutral for this product - **Confidence** — how strongly the data supports the prediction - **Contributors and Detractors** — the factors driving the score for this specific product - **Diagnosis summary** — a short AI-generated summary of what the data suggests about this product's trajectory - **Symptoms** — a list of observable signals behind the summary, drawn from the product's metrics and activity ## How Product-Scoped Metrics Feed Predictions Product predictions are trained on metrics and activity that are scoped to each product — not just the account overall. If your Account model includes product-specific usage data (for example, feature adoption per product or support tickets filed against a specific product), those signals feed directly into that product's prediction. This means the factors driving a product prediction can differ substantially from the account-level factors. An account may have strong overall engagement while a specific add-on has low adoption, a separate renewal timeline, and a churn signal.
## Viewing Product Predictions Product predictions are visible on the account detail page, within each product's section. The layout mirrors the account-level prediction: badge, factor breakdown, and trend chart. ## Related - [Account Predictions](./account-predictions.md) — account-level scoring and the accounts view - [How Predictions Work](./overview.md) — how the prediction model is trained - [What-If Analysis](./what-if-analysis.md) — simulate changes for a specific product --- ## What-If Analysis What-if analysis lets you simulate how changes to an account's data would affect its prediction. Adjust a factor value and FunnelStory immediately shows you the updated health score, outcome, and confidence. ## How It Works The what-if panel is available on each account's prediction detail view. It shows the account's significant factors in two groups: - **Increase or maintain these** — factors where a higher value improves the prediction (for example, product adoption, meeting frequency, feature usage) - **Decrease or maintain these** — factors where a lower value improves the prediction (for example, open support tickets, time since last login) For each factor, you can see the account's current value plotted against the distribution across all accounts — the 25th percentile, median, and 75th percentile. You can enter a simulated value and see the prediction update in real time. ## What It's Good For **Renewal conversations** — before a QBR, use what-if to identify which one or two changes would have the biggest impact on the account's score. That gives you a concrete, data-backed recommendation to bring to the call. **Prioritizing CSM effort** — if you're deciding between two interventions, what-if analysis shows which one moves the needle more. This is especially useful when you have limited time before a renewal date. 
**Setting expectations with customers** — "if you reach this usage threshold by next quarter, here's what we'd expect to see in your health score" is a more compelling message than a generic health score improvement goal. ## What-If vs. Actual Predictions What-if simulations are hypothetical — they show what the prediction *would be* if the account's data were different, not what it will be if nothing changes. The actual prediction continues to update based on real data as it refreshes. ## Related - [Account Predictions](./account-predictions.md) — viewing the current prediction and factors - [Confidence Ratings](./confidence-ratings.md) — understanding how confidence changes with the score - [How Predictions Work](./overview.md) — the model behind the predictions --- ## AI Summaries **AI summaries** turn long threads, tickets, and usage context into short, sourced explanations attached to each **needle mover**. They answer “why is this here?” without asking you to open every underlying document first. ## The AI Summary panel On the needle mover **detail** view, the left column opens with an **AI Summary**: a narrative of the theme FunnelStory detected, the direction (**risk** vs **opportunity**), and the main evidence. Use **Show more** when the default paragraph is truncated; the model may surface additional nuance or secondary contributors. The summary is **not** a replacement for the **activity timeline**—it is a guided entry point. When you need citations or raw text, scroll the timeline on the right. ## How titles are generated The **title** in the list view is a separate, highly compressed line meant for triage at a glance. 
Titles prioritize: - The **business theme** (for example pricing pressure, competitor evaluation, champion departure) - **Severity** cues reflected in impact icons - **Recency** of the strongest supporting excerpt If a title feels slightly generic, open the detail view—the summary and timeline usually carry the specificity you need for customer conversations. ## Activity timeline entries Each timeline row represents a **source** FunnelStory used when building or updating the needle mover. Typical fields include: | Field | What it shows | |-------|----------------| | **Source and system** | Channel (chat, ticket, meeting, and so on) and which **connection** supplied the row | | **Category** | A topic label the model assigned to that interaction | | **Excerpts** | Short quotes that triggered or reinforced detection | | **Participants** | People involved, when available from the source | | **Summary** | A per-source micro-summary | | **Details** | Expandable full text for verification | The first timeline entry is always **Needle mover created** with the detection timestamp so you can tell **age of signal** at a glance. ## Inline Renari **Ask Renari Anything** sits on the same screen so you can drill deeper without losing context. Renari inherits the needle mover, account, and timeline as grounding—use it to draft follow-up emails, compare similar accounts, or ask for recommended next steps. See [Renari](../platform/renari.md). ## Quality and limitations Summaries depend on the **quality and coverage** of connected data. If a timeline row shows thin excerpts, check whether the upstream integration captures full bodies (some ticketing tools default to short previews). When the model is uncertain, the summary uses cautious language; pair that with human review before external commitments. 
## Related - [Needle Movers (Core concepts)](../core-concepts/needle-movers.md) - [Managing Needle Movers](./lifecycle.md) - [Renari](../platform/renari.md) --- ## Managing Needle Movers Use this page when you need to **triage**, **assign**, **collaborate on**, or **close** needle movers in the main workspace views. It complements the product tour in [Needle Movers (Core concepts)](../core-concepts/needle-movers.md) with operational detail. ## Open and closed states Every needle mover is either **Open** or **Closed**. - **Open** — Active in your queue; appears in default list filters and counts toward team responsiveness metrics. - **Closed** — Resolved or no longer actionable. Closed items drop out of the default open queue but remain searchable and readable for audit purposes. Change state from the needle mover **detail** view. Closing records the moment on the **activity timeline** and frees the signal from day-to-day triage. ## Assignment and ownership Use the **Assignee** field to route a needle mover to yourself or a teammate. Assignment is visible in the list view and on the account, so everyone knows who is driving the response. **Managers** and leaders can filter the full portfolio (toggle off **My Accounts**) and reassign work when load shifts or ownership changes mid-quarter. ## Comments, mentions, and tasks The composer at the bottom of the detail view is where collaboration happens: 1. **Comment** — Add context for your team; use **@** to mention another workspace user so they receive a notification (see [Needle Mover notifications](./notifications.md)). 2. **Tasks** — Use **/** in the composer to create or link **tasks** tied to the needle mover when your workspace uses task workflows. Every accepted action is reflected on the **activity timeline**, which acts as the system of record for the signal. ## Titles and metadata The **title** is generated to summarize the theme of the signal. 
If your role allows edits, small corrections can keep portfolio rollups readable—prefer editing when the AI phrasing is misleading, not for full rewrites of customer intent. **Type** tabs and tag chips categorize the needle mover (pricing, competitor, personnel change, and so on). **Impact** icons communicate **risk** vs **opportunity** and **High / Med / Low** severity at a glance. ## Multiple accounts Some needle movers span more than one account (for example a parent conversation that references subsidiaries). The list shows the primary **Company** with **+N more** when additional accounts are linked. Open the detail view to see the full association set before you take customer-facing action. ## List views: filters, search, and sorting The main **Needle Movers** list is designed for daily CSM and AM workflows: | Control | Use it to | |---------|-----------| | **My Accounts** | Limit to accounts you own (common default after login) | | **Select Audiences** | Narrow to a saved **audience** of accounts | | **Impact** | Filter by risk vs opportunity and severity | | **Open / Closed** | Switch between active and historical queues | | **Assignee** | See only your team’s ownership | | **Account** | Drill to one customer | | **All Time** | Restrict by date range | | **Search** | Match keywords in titles, accounts, and surfaced content | Sort columns to prioritize by **last activity**, severity, or account name—whatever matches your stand-up or QBR prep process. ## Detail view and queue navigation Opening a row shows the **AI Summary**, **Ask Renari Anything** inline box, overview metadata, and the **activity timeline** (see [AI summaries](./ai-summaries.md)). Use the **arrow** controls to move **1 / N** through the filtered queue without returning to the list—useful for dedicated “needle mover morning” blocks. 
## Decay and housekeeping Very old **open** needle movers that were never accepted can **decay** out of the default open view so the workspace stays focused on current risk and opportunity. This is automatic hygiene, not a penalty—if you still need one, adjust filters or search to find it. ## Related - [Needle Movers (Core concepts)](../core-concepts/needle-movers.md) - [Needle Mover notifications](./notifications.md) - [AI summaries](./ai-summaries.md) - [Accounts view](../dashboard-insights/accounts-view.md) --- ## Needle Mover Notifications Your team learns about needle movers through **in-app activity**, **email**, **channel posts** (Slack or Microsoft Teams), and **mentions** in comments. This page describes what to expect by default and where to configure richer routing. ## In-app visibility Anyone with **accounts** access can open the **Needle Movers** list and subscribe to views with filters that match their portfolio. New and updated signals appear as you **refresh** data. Assignees see ownership changes immediately in the list and on each **account** record. ## Email Workspace email notifications typically cover: - **Assignment changes** when you become the assignee - **@mentions** in needle mover comments (always directed at the mentioned user) ## Slack and Microsoft Teams For **“post to a channel when something happens”** delivery—across needle movers, **signals**, and other account events—use **Admin → Notifications** and connect **Slack** or **Teams** under **Configure → Connections** first. See: - [Notifications overview](../platform/notifications/overview.md) - [Slack notifications](../platform/notifications/slack.md) - [Microsoft Teams notifications](../platform/notifications/ms-teams.md) Built-in channel notifications fire **after** FunnelStory processes refreshed account data, aligned with your model refresh cadence—not necessarily the instant an upstream ticket changes. 
## Mentions in comments When you **@mention** a colleague in a needle mover comment, they receive a targeted notification (email and/or in-product, depending on workspace configuration). Use mentions for **handoffs**, **legal review**, or **leadership visibility** without reassigning the entire needle mover. ## Assignment and status changes Becoming **assignee** or having a needle mover **closed** usually generates a notification to the people affected. Treat these as lightweight operational pings rather than full narrative digests—the **detail view** and **AI Summary** remain the source of truth. ## Advanced: AI agents When you need **branching logic**, **LLM judgment**, or **multi-step playbooks** (for example “if severity is High and renewal is within 60 days, post to `#risk` and open a task”), use **[AI Agents](../ai/agents-overview.md)** with triggers on needle mover or signal events. Agents complement built-in notifications rather than replacing them. ## Related - [Managing Needle Movers](./lifecycle.md) - [Signals](../dashboard-insights/signals.md) - [AI agents and notifications](../platform/notifications/ai-agents.md) --- ## Needle Movers Overview **Needle Movers** are leading indicators of churn or expansion that FunnelStory surfaces from your connected conversations, product usage, support data, and related signals—early enough for your team to act while outcomes are still in motion. This section covers how detection fits into your workspace, how data feeds the pipeline, and where to go for day-to-day workflows. For the conceptual model, UI walkthrough, and relationship to **Predictions**, start with [Needle Movers in Core concepts](../core-concepts/needle-movers.md). The pages here focus on **operations**: managing the lifecycle, notifications, and how **AI summaries** are produced. 
## How detection fits together FunnelStory continuously ingests data from your **connections** and **models** (accounts, users, product activity, tickets, meetings, chats, notes, and more). On each processing cycle, needle-mover detection evaluates new and updated activity against patterns that historically preceded renewal, expansion, or churn for *your* customer base—not a generic template. What you see in the product—**type** (for example pricing, competitor, personnel change), **impact** (risk vs opportunity and severity), **title**, and **timeline** entries—is the output of that pipeline. You do not configure individual rules in the UI; the system learns from outcomes and surfaces candidates for your team to review, assign, and close. ## What feeds a Needle Mover Needle Movers are grounded in evidence from your workspace. Typical inputs include: | Input | Role | |-------|------| | **Conversations and chats** | Transcripts and threads from connected communication tools | | **Support tickets** | Subjects, descriptions, and status changes from helpdesk integrations | | **Product activity** | Usage signals tied to accounts and users from your product analytics or warehouse models | | **Meetings** | Summaries and excerpts where your workspace ingests meeting content | | **Notes** | Human-written context on accounts (see [Notes](../core-concepts/notes.md)) | When the same underlying theme appears across multiple sources, FunnelStory can consolidate it into one needle mover with multiple **sources** on the timeline. ## Cold start and ongoing refresh **Cold start** refers to the period right after you connect data: the graph is still accumulating history, so the model has fewer comparable renewal and churn examples. Expect confidence and volume of needle movers to grow as more refresh cycles complete and outcomes (renewals, churns, expansions) are observed in your data. 
**Ongoing refresh** means each model and connection **refresh** can add new timeline entries, adjust severity, or occasionally merge or supersede older signals when the situation changes. You do not need to manually “re-run” detection; staying on a regular refresh schedule keeps the queue current. ## Where to go next | Topic | Page | |-------|------| | List and detail views, assign, comment, close, playbooks | [Managing Needle Movers](./lifecycle.md) | | Email, Slack, Teams, mentions, and workspace alerts | [Needle Mover notifications](./notifications.md) | | AI Summary, titles, and timeline excerpts | [AI summaries](./ai-summaries.md) | | Built-in Slack/Teams routing for account events | [Notifications overview](../platform/notifications/overview.md) | | Automating multi-step responses | [AI Agents overview](../ai/agents-overview.md) | ## Related - [Needle Movers (Core concepts)](../core-concepts/needle-movers.md) - [Predictions](../predictions/overview.md) - [Signals](../dashboard-insights/signals.md) --- ## Examples These **patterns** are starting points—swap table names, thresholds, Slack channels, and connection IDs for your workspace. Each example lists the **intent**, **trigger**, and **shape** of the graph; paste fragments into the canvas or provide them to an assistant following [Vibe coding](./agents-vibe-coding.md). :::tip Build with AI Natural-language version: “Build me the *Incremental ETL* pattern from the docs using dataset `processed_tickets`.” Point the assistant at this page plus the [flow authoring guide](./agents-flow-authoring-guide.md). ::: ## 1. Account health alert (schedule + Slack) **Intent:** Every weekday morning, find accounts whose **health score** is below a threshold and post a compact list to Slack. **Trigger:** `schedule` (cron) with timezone. **Graph sketch:** 1. `CALL` `semantic.query` → `@.at_risk` 2. `CONDITION` on `len(results) > 0` (derive boolean in a small `TRANSFORM` if needed) 3. 
`CALL` `slack.send_message` with interpolated summary text **MCP-style prompt:** “Create a scheduled agent weekdays 9am PT. Query accounts with health_score < 40 and churned = 0, limit 50. Post `account_id`, `name`, `health_score` as bullets to Slack channel C0123 using connection ``.” ## 2. Ticket analysis pipeline (conversation trigger) **Intent:** When a **ticket** conversation is ingested, classify sentiment and topics, then store JSON in a dataset for dashboards. **Trigger:** `conversation` with `types: ["ticket"]`. The conversation trigger payload provides `key`, `metadata`, and `timestamp` — not the full ticket text. To load the full content, use a `semantic.query` step early in the flow. **Graph sketch:** 1. `CALL` `semantic.query` to fetch the full ticket using identifiers from `$.trigger.conversation.key` (e.g. join `tickets` on the id found in the key object). 2. `AGENT` `small` with JSON-only system prompt → `@.classification` 3. `CALL` `dataset.record.upsert` keyed by ticket id. ## 3. Renewal risk summary (query trigger + email) **Intent:** Once per UTC day, select accounts renewing in **N** days with risk signals, summarize with a **large** model, email the owning CSM list. **Trigger:** `query` SQL returning `account_id`, `csm_email`, `name`, … **with LIMIT**. **Graph sketch:** 1. `CALL` `semantic.query` for core renewal rows → `@.renewals` 2. Optional second `CALL` for needle movers / metrics. 3. `AGENT` `large` to craft per-account or batched summaries depending on volume. 4. `CALL` `email.send` with both `body` and optional `html`. Remember **query-trigger cadence** and idempotency ([Triggers](./agents-triggers.md)). ## 4. CRM sync (interval + update) **Intent:** Every few hours, select accounts whose derived fields changed in FunnelStory and push properties to **HubSpot** or **Salesforce**. **Trigger:** `interval` such as `"6h"`. **Graph sketch:** 1. `CALL` `semantic.query` for the change set. 2. 
`LOOP` over rows calling `hubspot.update_record` or `salesforce.update_record` with mapped fields. Respect API rate limits—keep SQL selective and batches small. ## 5. Chat agent (manual + tools) **Intent:** A user opens **Chat**, asks questions, and the LLM may call `semantic.query` or create tasks. **Trigger:** `manual` (plus chat entrypoint). **Graph sketch:** - Final `AGENT` must write user-visible text to **`@.response`**. - Attach tools for `semantic.query` and `tasks.create` as needed. ## 6. Incremental ETL (query + dataset dedupe) **Intent:** Process only rows not yet present in a **dataset** of finished keys. **Trigger:** `query` SQL using `LEFT JOIN dataset_records('processed')` pattern (see flow authoring guide). **Graph sketch:** 1. `CALL` `semantic.query` returning pending rows. 2. `LOOP` with inner `AGENT` or `CALL` processing. 3. `CALL` `dataset.record.upsert` marking completion keys. ## 7. Parallel notify (BRANCH + JOIN) **Intent:** When a condition hits, notify **Slack** and **email** in parallel, then continue cleanup only after both succeed. **Graph sketch:** 1. `CONDITION` on severity predicate. 2. `BRANCH` into `path_slack` and `path_email`, each ending at `JOIN`. 3. After `JOIN`, mark a dataset row or call `tasks.create`. This pattern avoids serial delays when integrations are independent. ## Related - [Triggers](./agents-triggers.md) - [Operations](./agents-operations.md) - [Functions reference](./agents-functions-reference.md) - [Variables and data](./agents-variables-and-data.md) --- ## Flow Authoring Guide Create FunnelStory agent definitions (flows) — JSON configurations, edited in the agent builder or exported and imported, that define multi-step data processing and LLM agent workflows. ## Before You Start Clarify these before writing any JSON: 1. **What's the goal?** One-shot batch job, chat-driven flow, or scheduled pipeline? 2. **Where does data come from?** Semantic tables, external connections, or existing datasets? 3.
**Which pattern fits?** ETL, multi-query aggregation, or incremental processing? (See [Common Patterns](#common-patterns)) 4. **Do you need an AGENT step?** Not all flows require LLM calls (e.g., pure data pipelines, usage alerts). If yes, `small` for structured extraction, `large` for complex reasoning. ## Workflow Follow this order when building a flow: 1. **Define `input_schema`** (if needed) — what the user provides when triggering. Omit if the flow is self-contained. 2. **Decide how the flow starts** — manual/API/MCP only, or add `trigger_config` for automatic runs (see [Triggers](#triggers-trigger_config)). 3. **Sketch the step graph** — entrypoint → ... → terminal (`"next": ""`) 4. **Choose the right op for each step** — see [Operation Types](#operation-types) 5. **Wire up variable passing** — decide local vs global for each output 6. **If this is a chat flow** — the final AGENT step **must** store its output in `@.response`. This is the variable the runtime reads to send a reply to the user. Omitting this is the #1 cause of silent chat flows. 7. **Add guardrails** — error checks after CALL steps, incremental processing for large datasets ## Flow Structure Published flows that should run automatically set `"draft": false` and include `trigger_config`. Draft flows are not picked up by the background runner. ```json { "name": "My Flow", "draft": true, "trigger_config": null, "input_schema": [...], "config": { "entrypoint": "first_step", "steps": { ... } } } ``` ## Triggers (`trigger_config`) Optional. When set on a **non-draft** flow, the agent can run automatically when events occur. | `type` | Purpose | Config | |--------|---------|--------| | `schedule` | Cron | `schedule.expr` (optional `schedule.timezone`) | | `interval` | Fixed repeat | `interval.duration` (e.g. 
`"6h"`, `"30m"`) | | `activity` | Model activity events | `activity.activity_ids` (array); optional `filter_expr` | | `signal` | Signal events | `signal.rule_ids` (array); optional `filter_expr` | | `needle_mover` | Needle mover rows | `needle_mover.labels` and/or `needle_mover.impacts` (arrays; need at least one value across both); optional `filter_expr` | | `conversation` | Conversations | `conversation.types` (array); optional `filter_expr` | | `query` | Semantic DB rows | `query.query` — SQL against the semantic DB; **each result row starts one run** | ### Query trigger - The SQL is the same dialect/workspace tables you use in `semantic.query` inside the flow (e.g. `accounts`, `dataset_records('my_dataset')`, …). - Each row becomes one run. The row is available at runtime as **`@.trigger.row.`** (and in templates as `{{ $.trigger.row. }}`). Only include columns you need; the idempotency key is derived from the **entire row JSON** (stable row → deduped runs). - **Cadence:** query evaluation runs at most **once per UTC day** (first successful tick of the `flows` runner that day for the workspace). Use `LIMIT` in SQL to cap work per day. - Prefer narrow `WHERE` clauses so you do not enqueue more than you need; the runner also caps how many new runs it creates per cycle. Example: ```json "trigger_config": { "type": "query", "query": { "query": "SELECT account_id, name FROM accounts WHERE subscription_remaining_days < 90 LIMIT 50" } } ``` Steps can reference `{{ $.trigger.row.account_id }}`, etc. ### Trigger data available in your flow Each trigger type exposes different fields under `@.trigger` (and `$.trigger` in templates). Only one shape applies per run. 
| Trigger type | Runtime fields under `$.trigger` | |---|---| | **query** | `row.` — columns from your SQL SELECT | | **activity** | `activity.activity_id`, `activity.model_id`, `activity.account_id`, `activity.user_id`, `activity.timestamp`, `activity.count` | | **signal** | `signal.signal_id`, `signal.rule_id`, `signal.type`, `signal.account_id`, `signal.timestamp`, plus optional `signal.message`, `signal.attributes`, `signal.value`, `signal.previous_value` | | **needle_mover** | `needle_mover.needle_mover_id`, `needle_mover.title`, `needle_mover.description`, `needle_mover.state`, `needle_mover.impact`, `needle_mover.label`, `needle_mover.created_at` | | **conversation** | `conversation.key`, `conversation.metadata`, `conversation.timestamp` | Event-driven runs also include `$.trigger.account_ids` when account scope is available. **Important:** `$.trigger.row` only exists for **query** triggers. If a step references `$.trigger.row` but the run was started by an activity, signal, needle mover, or conversation trigger, the value will be empty or missing. Use the matching path for the trigger type — for example `$.trigger.activity.account_id` on an activity-triggered run. If you need data beyond what the trigger provides, add a `semantic.query` step to look it up. **Naming differences between saved config and runtime:** | Topic | In the saved trigger JSON | On the run under `$.trigger` | |---|---|---| | Needle mover | `impacts` (plural array of filters) | `needle_mover.impact` (singular string for this event) | | Activity | `activity_ids` (which activities fire the flow) | `activity.activity_id` (the specific event) | | Conversation | `types` (which conversation kinds fire the flow) | `conversation.key`, `conversation.metadata`, `conversation.timestamp` | ### Testing trigger-shaped flows without waiting When running from the builder or starting a test run, you can supply sample **trigger data** so the run behaves as though a real trigger started it. 
The sample JSON must match the trigger type you are building for. **Query-shaped sample** (columns from your SQL): ```json { "trigger": { "row": { "account_id": "acct_123", "name": "Example Corp" } } } ``` **Activity-shaped sample:** ```json { "trigger": { "activity": { "activity_id": "019bacd1-e737-7bef-a310-c35ff896febd", "account_id": "acct_456", "timestamp": "2026-04-01T09:00:00Z", "count": 1 } } } ``` The same shapes apply when an assistant runs a flow with a simulated trigger via MCP. ## Input Schema (Optional) ```json "input_schema": [ { "id": "account_id", "type": "string", "description": "The account ID to process", "value": null }, { "id": "limit", "type": "number", "description": "Max records to process", "value": 100 } ] ``` - `id` (required): Variable name (access via `{{ $.account_id }}`) - `type` (required): `"string"`, `"number"`, `"boolean"`, `"object"`, `"array"` - `description` (optional): Human-readable label for UI - `value` (optional): Default value if input not provided **Note**: `input_schema` is optional. Many flows (scheduled pipelines, usage alerts) don't need user input at all. Only add it when the flow requires parameters at trigger time. 
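Declared inputs are read like ordinary local variables. As a minimal sketch, a `CALL` step consuming both inputs from the schema above (the `activities` table and its columns come from the semantic tables documented later in this guide):

```json
{
  "id": "fetch_activity",
  "op": "CALL",
  "next": "",
  "out": { "set": "@.rows" },
  "call": {
    "function_id": "semantic.query",
    "args": {
      "query": "SELECT * FROM activities WHERE account_id = '{{ $.account_id }}' LIMIT {{ $.limit }}"
    }
  }
}
```

Because `limit` declares `"value": 100` as a default, the `LIMIT` clause still resolves when the caller omits that input.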
## Operation Types | Op | Purpose | Key Fields | |----|---------|------------| | `CALL` | Call a function | `call.function_id`, `call.args` | | `AGENT` | Run LLM agent | `agent.system`, `agent.user`, `agent.model_type` | | `LOOP` | Iterate over array | `loop.over`, `loop.var`, `loop.step` | | `CONDITION` | Boolean gate (stops if false) | `condition.condition` | | `BRANCH` | Run parallel paths | `branch.parallel_paths` | | `JOIN` | Wait for all branches | (no config) | | `TRANSFORM` | Format or extract data | `transform.type`, `transform.input` | | `WAIT` | Pause execution | `wait.duration` | | `SPAWN` | Start subplan | `spawn.plan_id`, `spawn.input` | ## Step Structure Every step has these common fields: ```json { "step_name": { "id": "step_name", "op": "CALL", "next": "next_step_name", "out": { "set": "@.result_var" } } } ``` - `id`: **Must match** the key name - `next`: Next step to execute. Empty string `""` = end of flow. - `out`: Where to store the result ### Output Configuration ```json "out": { "set": "@.my_result" } // Global — accessible by any subsequent step "out": { "set": "my_result" } // Local — only in current scope "out": { "append": "@.all_results" } // Append to array (useful in loops) "out": { "merge": "@.summary" } // Merge object fields into existing object ``` **CRITICAL decision rule for `@.` vs bare name:** - **ALWAYS** use `@.` globals when a downstream step outside the current loop/branch needs the value - Use bare names only for values consumed by the immediately next step in the same scope - When in doubt, use `@.` — it's always safe ## Variable Syntax | Syntax | When to use | Example | |--------|-------------|---------| | `"$.var"` (quoted, no braces) | Pass an entire object/array as-is | `"record": { "data": "$.analysis" }` | | `"{{ $.var }}"` | Interpolate into a string | `"WHERE id = '{{ $.id }}'"` | | `@.var` | Reference a global variable | `"out": { "set": "@.results" }` | | `@.trigger.*` | Payload from the run’s trigger (event, 
query row, …) | `@.trigger.row.account_id`, `{{ $.trigger.row.account_id }}` | `@.trigger` is only present when the run was started with trigger context (automatic triggers or a manual request that supplied `trigger`). **NEVER** use `{{ $.var }}` to pass an entire object — template interpolation stringifies objects unpredictably. Use `"$.var"` (quoted, no braces) instead. ### Variable Scopes | Prefix | Scope | Lifecycle | |--------|-------|-----------| | `$.var` | Local | Input variables + loop vars. Available to the current step and chained steps within the same scope. | | `@.var` | Global | Persisted across the entire flow run. Any step can read/write. | --- ## CALL — Calling Functions ```json { "id": "fetch_data", "op": "CALL", "next": "process_data", "out": { "set": "@.data" }, "call": { "function_id": "semantic.query", "args": { "query": "SELECT * FROM accounts WHERE id = '{{ $.account_id }}'" } } } ``` **IMPORTANT**: If a CALL step fails (function error, query failure, size limit exceeded), the step returns an error — no result is stored and execution of the current path stops. This is true for all step types, not just CALL. ### Available Functions #### `semantic.query` Query the semantic database (workspace data). 
```json "call": { "function_id": "semantic.query", "args": { "query": "SELECT * FROM accounts LIMIT 10" } } ``` **Returns**: `{ "results": [...], "columns": [...], "total_rows": N }` **Common tables and columns:** | Table | Columns | |-------|------| | `accounts` | `account_id` TEXT, `domain` TEXT, `name` TEXT, `amount` REAL, `created_at` TIMESTAMP, `properties` JSON, `expires_at` TIMESTAMP, `churned` BOOLEAN, `churned_at` TIMESTAMP, `prediction` TEXT, `prediction_score` REAL, `assignees` JSON (array of assignee emails), `activity_score` REAL, `conversation_sentiment` REAL, `feature_adoption` REAL, `health_score` REAL, `license_utilization` REAL, `product_engagement` TEXT, `subscription_remaining_days` REAL, `support_sentiment` REAL, `total_conversations` REAL, `total_support_tickets` REAL, `total_users` REAL | | `meetings` | `id` TEXT, `source` TEXT, `title` TEXT, `link` TEXT, `timestamp` TIMESTAMP, `duration_seconds` INTEGER, `sentiment` REAL, `summary` TEXT, `key` JSON, `data` JSON, `metadata` JSON, `participants` JSON | | `conversations` | `id` TEXT, `parent_conversation_id` TEXT, `key` JSON, `metadata` JSON, `data` JSON, `timestamp` TIMESTAMP | | `tickets` | `source` TEXT, `id` TEXT, `timestamp` TIMESTAMP, `key` JSON, `sentiment` REAL, `link` TEXT, `title` TEXT, `text` TEXT, `contact_email` TEXT, `contact_name` TEXT, `assignee_email` TEXT, `resolved_at` TIMESTAMP, `status` TEXT, `priority` TEXT, `metadata` JSON, `data` JSON, `custom_fields` JSON, `tags` JSON | | `topics` | `reference_type` TEXT, `reference_id` TEXT, `account_id` TEXT, `user_id` TEXT, `user_email` TEXT, `topic` TEXT, `sentiment` TEXT, `created_at` TIMESTAMP, `link` TEXT | | `notes` | `id` TEXT, `title` TEXT, `content` TEXT, `link` TEXT, `note_type` TEXT, `created_at` TIMESTAMP, `updated_at` TIMESTAMP, `created_by_email` TEXT, `updated_by_email` TEXT, `account_id` TEXT, `timestamp` TIMESTAMP | | `tasks` | `id` TEXT, `title` TEXT, `body` TEXT, `link` TEXT, `status` TEXT, `created_at` TIMESTAMP, 
`updated_at` TIMESTAMP, `expires_at` TIMESTAMP, `created_by_email` TEXT, `assigned_to_email` TEXT, `account_id` TEXT | | `activities` | `activity_id` TEXT, `activity_name` TEXT, `account_id` TEXT, `user_id` TEXT, `count` INT, `timestamp` TIMESTAMP, `user_email` TEXT | | `contacts` | `id` TEXT, `name` TEXT, `email` TEXT, `domain` TEXT | | `workspace_users` | `user_id` TEXT, `name` TEXT, `email` TEXT, `user_role` TEXT, `user_designation` TEXT, `assignable` BOOLEAN, `last_activity` TIMESTAMP, `deactivated_at` TIMESTAMP | | `account_metrics` | `account_id` TEXT, `metric_id` TEXT, `value` REAL | | `dataset_records(name)` | `key` TEXT, `record` JSON — see [Dataset Operations](#dataset-operations) | #### `data_connection.query` Query external data connections (CRM, etc.). ```json "call": { "function_id": "data_connection.query", "args": { "data_connection_id": "019b3c9e-...", "query": "SELECT * FROM companies WHERE ..." } } ``` **Returns**: Same shape as `semantic.query`. On failure, the CALL step fails (no error payload is returned). #### `dataset.record.upsert` Save or update a record in a dataset. ```json "call": { "function_id": "dataset.record.upsert", "args": { "dataset": "my_dataset", "key": "{{ $.item.id }}", "record": { "field1": "{{ $.item.name }}", "field2": "$.analysis" } } } ``` **Note**: Use `"$.analysis"` (quoted, no braces) to store entire objects. Use `"{{ $.item.name }}"` to interpolate strings. **Returns**: `{ "dataset": "...", "key": "..." }` #### `dataset.record.set_field` Update a single field in a dataset record. ```json "call": { "function_id": "dataset.record.set_field", "args": { "dataset": "my_dataset", "key": "{{ $.item.id }}", "field": "status", "value": "completed" } } ``` **Returns**: `{ "dataset": "...", "key": "...", "field": "..." }` #### `dataset.record.delete` Delete a record from a dataset. 
```json "call": { "function_id": "dataset.record.delete", "args": { "dataset": "my_dataset", "key": "{{ $.item.id }}" } } ``` **Returns**: `null` #### `tasks.create` Create a task. ```json "call": { "function_id": "tasks.create", "args": { "title": "{{ $.summary }}", "body": "{{ $.details }}" } } ``` **Returns**: `{ "task_id": "...", "title": "..." }` #### `slack.send_message` Send a Slack message. ```json "call": { "function_id": "slack.send_message", "args": { "connection_id": "slack_conn_123", "channel_id": "C01234567", "text": "{{ $.message }}" } } ``` - `connection_id` (required): Slack connection ID - `channel_id` (required): Slack channel ID - `text` or `blocks` (at least one required): Plain text message or Slack Block Kit blocks. If both are provided, `text` becomes the notification fallback. **Returns**: `{ "success": true, "response_channel": "...", "response_timestamp": "..." }` #### `email.send` Send an email. ```json "call": { "function_id": "email.send", "args": { "to": ["owner@acme.com", "csm@acme.com"], "subject": "Weekly summary", "body": "Your weekly summary is ready", "html": "Your weekly summary is ready" } } ``` **Returns**: `{ "sent": true, "recipients": [...], "recipient_count": N }` **IMPORTANT**: `to`, `subject`, and `body` are **all mandatory**. Even if you provide `html`, you must still include `body` with a plain-text version — emails will fail without it. #### `salesforce.read_record` Read a Salesforce record. ```json "call": { "function_id": "salesforce.read_record", "args": { "data_connection_id": "sf_conn_123", "object_type": "Account", "record_id": "001XXXXXXXXXXXXXXX", "fields": ["Name", "Industry", "AnnualRevenue"] } } ``` **Returns**: `{ "record": { ... } }` #### `salesforce.update_record` Update a Salesforce record. 
```json "call": { "function_id": "salesforce.update_record", "args": { "data_connection_id": "sf_conn_123", "object_type": "Account", "record_id": "001XXXXXXXXXXXXXXX", "fields": { "Customer_Health__c": "At Risk", "Renewal_Risk_Score__c": "82" } } } ``` **Returns**: `{ "ok": true }` #### `hubspot.read_record` Read a HubSpot record. ```json "call": { "function_id": "hubspot.read_record", "args": { "data_connection_id": "hs_conn_123", "object_type": "companies", "record_id": "123456789", "fields": ["name", "domain", "industry"] } } ``` **Returns**: `{ "record": { ... } }` #### `hubspot.update_record` Update a HubSpot record. ```json "call": { "function_id": "hubspot.update_record", "args": { "data_connection_id": "hs_conn_123", "object_type": "companies", "record_id": "123456789", "fields": { "funnel_stage": "Expansion", "health_status": "watch" } } } ``` **Returns**: `{ "ok": true }` #### `search.web` Run a web search. ```json "call": { "function_id": "search.web", "args": { "query": "latest customer onboarding best practices", "recency_filter": "30d" } } ``` **Returns**: `[ { "title": "...", "url": "...", "snippet": "...", "date": "..." } ]` **`recency_filter` options**: `7d`, `30d`, `90d`, `1y` #### `accounts.select` Select accounts using Funnel filters. Uses the UniversalFilter (`FilterGroup`) shape. Filters use boolean logic: `and_group` contains `or_group` arrays, each `or_group` contains individual `filter` entries. 
```json "call": { "function_id": "accounts.select", "args": { "filter": { "and_group": [ { "or_group": [ { "filter": { "name": "", "metric_filter": { "metric_id": "product_engagement", "condition": "equal", "value": "daily_active" } } }, { "filter": { "name": "", "rule_filter": { "activity_id": "019bacd1-e737-7bef-a310-c35ff896febd", "condition": "count_is_more_than_or_equal", "value": "5" } } } ] } ] } } } ``` **Returns**: `{ "total_count": N, "account_ids": ["acct_1", "acct_2"] }` **Note**: `metric_id` and `activity_id` values are workspace-specific. These must be looked up from the workspace configuration — they cannot be guessed. #### `template.render` Render a stored template with variables. ```json "call": { "function_id": "template.render", "args": { "template_id": "my_template", "vars": { "name": "{{ $.account_name }}", "score": "{{ $.health_score }}" } } } ``` **Returns**: The rendered template string. - `template_id` (required): ID of the template to render - `vars` (optional): Key-value map of variables to inject into the template --- ## AGENT — Running an LLM ```json { "id": "analyze", "op": "AGENT", "next": "", "out": { "set": "@.response" }, "agent": { "model_type": "small", "system": "You are an analyst. Output ONLY raw JSON, no markdown, no backticks.", "user": "Analyze this data:\n{{ $.data }}", "tools": [ { "name": "semantic_query", "function_id": "semantic.query" } ] } } ``` **Decision rules:** - **`small`**: Use for structured extraction, classification, formatting — any task with a clear expected output shape - **`large`**: Use only for complex reasoning over multiple inputs, nuanced analysis, or open-ended generation - **For JSON output**: **ALWAYS** include "Output ONLY raw JSON, no markdown, no backticks" in the system prompt - **Chat final response**: The last AGENT step **must** use `"out": { "set": "@.response" }`. This is the variable the chat runtime reads to send a reply. If you omit this, the chat will produce no visible response. 
### Tools on AGENT steps (vs CALL steps) CALL steps can invoke the **full function catalog** — the flow author decides when each function runs and what arguments go in. AGENT steps can also invoke functions, but only when you grant access by attaching **tools**. The set of functions available as agent tools is currently smaller than the full CALL catalog: - `semantic.query` - `email.send` - `slack.send_message` - `tasks.create` Each tool entry has **`name`** (what the model invokes) and **`function_id`** (which function runs). Example: `{ "name": "query_semantic_db", "function_id": "semantic.query" }`. Functions not in this list (CRM read/update, dataset operations, web search, etc.) remain available in ordinary CALL steps. ### Fixed arguments on tools For each tool you can set **`fixed_args`** — argument keys the model must not choose. Fixed values are merged at runtime and those keys are hidden from the model's view of the tool schema, so it only sees parameters it is allowed to fill in. ```json { "tools": [ { "name": "post_to_slack", "function_id": "slack.send_message", "fixed_args": { "connection_id": "slack_conn_123", "channel_id": "C01234567" } } ] } ``` In this example the model can only compose the message; connection and channel are locked by the flow author. ### Advanced: threads, memory, and multi-turn These are optional add-ons for specialized flows: - **`thread_id`**: Optional. Restores/continues a conversation thread from previous runs, enabling multi-turn agent interactions. - **`variable_store`**: Set to `true` to give the agent persistent key-value storage tools (`variable_get`, `variable_set`, `variable_push`, `variable_pop`, `variable_clear`) for scratchpad memory across tool calls. - **`multi_turn`**: Set to `true` to allow the agent to pause execution and wait for external input before continuing. 
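As a sketch, one step combining these add-ons (the field names come from this section; the prompts, tool name, and variable names are illustrative):

```json
{
  "id": "assistant_turn",
  "op": "AGENT",
  "next": "",
  "out": { "set": "@.response" },
  "agent": {
    "model_type": "large",
    "system": "You are a helpful workspace assistant.",
    "user": "{{ $.question }}",
    "thread_id": "{{ $.thread_id }}",
    "variable_store": true,
    "multi_turn": true,
    "tools": [
      { "name": "query_semantic_db", "function_id": "semantic.query" }
    ]
  }
}
```

Here the agent can continue an earlier thread, keep scratchpad state across tool calls, and pause for external input, while still writing its final text to `@.response` for the chat runtime.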
--- ## LOOP — Iterating Over Arrays ```json { "id": "process_items", "op": "LOOP", "next": "", "loop": { "over": "items.results", "var": "item", "step": "handle_item" } } ``` - `over`: Path to array (e.g., `"items.results"`) - `var`: Current item variable name (access via `$.item`) - `step`: Step to execute for each item — this step can chain to further steps via `next` - The loop variable (`$.item`) and any local variables set within the loop body are available to all chained steps **within the same iteration** - **Max 1000 iterations** — the step fails if the array exceeds this **CRITICAL — Loop Scheduling Gotcha:** Loops use **breadth-first scheduling**, not depth-first. For a loop over `[A, B, C]` with steps `process → save`, the execution order is: ``` process(A) → process(B) → process(C) → save(A) → save(B) → save(C) ``` **NOT** the intuitive `process(A) → save(A) → process(B) → save(B) → ...` All first steps in each iteration run before any second steps. This means: - **NEVER** write to a `@.` global in one step and read it in a later step within the same loop — the global will hold the value from the *last* iteration by the time any later steps run - **ALWAYS** use local variables (bare names, no `@.`) to pass data between chained steps within the same iteration — each iteration has its own isolated scope, so this is safe - Use `@.` globals inside loops **only** for accumulating results (e.g., `out.append`) that you read *after* the loop completes --- ## CONDITION — Boolean Gate ```json { "id": "check_data", "op": "CONDITION", "next": "process_data", "condition": { "condition": "$.has_data" } } ``` - If **true**: proceeds to `next` - If **false**: execution **stops** (no next step scheduled) - **There is no if/else.** For branching, use BRANCH with separate CONDITION steps on each path. 
--- ## BRANCH + JOIN — Parallel Execution ```json { "id": "parallel_tasks", "op": "BRANCH", "next": "join_results", "branch": { "parallel_paths": ["task1", "task2", "task3"] } } ``` ```json { "id": "join_results", "op": "JOIN", "next": "aggregate" } ``` - All paths in `parallel_paths` execute concurrently - JOIN waits for all branches to complete before proceeding --- ## TRANSFORM — Formatting and Extraction ```json { "id": "format_output", "op": "TRANSFORM", "next": "save", "out": { "set": "@.formatted" }, "transform": { "type": "format", "input": "$.raw_data", "template": "Summary: {{ $.input.summary }}, Count: {{ $.input.count }}" } } ``` **Transform types:** 1. **`format`**: Apply Go template to input - `input`: Path to input variable - `template`: Go template string (access input as `$.input`) 2. **`regexp_extract`**: Extract text using regex - `input`: Path to string variable - `pattern`: Regex pattern - `group`: Capture group index (default: 1) --- ## WAIT — Pause Execution ```json { "id": "wait", "op": "WAIT", "next": "continue", "wait": { "duration": "5m" } } ``` Duration — for example `"1s"`, `"5m"`, `"1h"`. --- ## SPAWN — Start a Subplan ```json { "id": "spawn_subflow", "op": "SPAWN", "next": "continue", "spawn": { "plan_id": "subplan_name", "input": { "param1": "{{ $.value1 }}" } } } ``` - `plan_id`: ID of subplan defined in `config.subplans` - Subplan runs **concurrently** with the parent flow --- ## Template Syntax Flows use Go's `text/template` syntax for string interpolation. ### Basic Interpolation ``` "SELECT * FROM accounts WHERE id = '{{ $.account_id }}'" ``` ### Conditionals ``` "{{if $.limit}}LIMIT {{ $.limit }}{{else}}LIMIT 100{{end}}" ``` ### Accessing Nested Data Use the `index` function for array access and complex keys: ``` "{{ index (index $.data.results 0) \"domain\" }}" ``` **IMPORTANT**: Go templates do NOT support bracket syntax like `$.results[0]`. Always use the `index` function. 
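Combining the pieces above, `index` for nested access plus a conditional default, in one interpolated SQL string (variable names illustrative):

```
"SELECT * FROM tickets WHERE account_id = '{{ index (index $.accounts.results 0) \"account_id\" }}' {{if $.limit}}LIMIT {{ $.limit }}{{else}}LIMIT 25{{end}}"
```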
### Important Notes - Missing variables resolve to **empty string** (not errors) - Use single quotes for SQL strings: `'{{ $.id }}'` - No `| json` filter available — LLM receives raw object representation - Template errors are silent — a malformed template produces empty output, not a runtime error --- ## Dataset Operations Datasets are persistent key-value stores for flow outputs. Records are stored as JSON objects with a string key. **IMPORTANT**: Dataset writes (`upsert`, `set_field`, `delete`) are permanent, side-effectful operations. Only use them when the user has explicitly asked to store data, or when the flow's documented purpose is to persist results for later processing. Do not write to a dataset speculatively. **CRITICAL — Dataset Name Rules:** - **NEVER** invent a dataset name. Only use a dataset if the user has explicitly provided the dataset name in their request. - **ALWAYS** verify the dataset exists before using it by querying `dataset_records('name')` (e.g., `SELECT 1 FROM dataset_records('my_dataset') LIMIT 1`). If the query fails or returns no schema, the dataset does not exist — stop and inform the user rather than proceeding. - There is **no MCP tool to create a new dataset**. If the user asks to create one, explain that dataset creation is not currently supported. 
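The existence check described above, wrapped as a `CALL` step (dataset name illustrative; `write_records` stands in for the step that performs the write). Per the CALL failure semantics earlier in this guide, if the dataset does not exist the query fails, the step errors, and the path stops before any write is attempted:

```json
{
  "id": "check_dataset",
  "op": "CALL",
  "next": "write_records",
  "out": { "set": "@.dataset_check" },
  "call": {
    "function_id": "semantic.query",
    "args": { "query": "SELECT 1 FROM dataset_records('my_dataset') LIMIT 1" }
  }
}
```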
### Querying Dataset Records Use `dataset_records('name')` as a table in `semantic.query`: ```sql -- Basic query SELECT * FROM dataset_records('my_dataset') LIMIT 100 -- Filter by record fields (JSONB operators) SELECT * FROM dataset_records('my_dataset') WHERE record->>'status' = 'completed' -- Access nested JSON SELECT key, record->>'name' as name, record->'analysis'->>'category' as category FROM dataset_records('my_dataset') ``` ### Writing Records Use the CALL functions in flows: - **`dataset.record.upsert`** — Create or overwrite a full record - **`dataset.record.set_field`** — Update a single field - **`dataset.record.delete`** — Delete a record by key --- ## Common Patterns ### 1. Fetch → Process → Store (ETL) ```json { "entrypoint": "fetch", "steps": { "fetch": { "id": "fetch", "op": "CALL", "next": "loop", "out": { "set": "items" }, "call": { "function_id": "semantic.query", "args": { "query": "SELECT * FROM tickets LIMIT 100" } } }, "loop": { "id": "loop", "op": "LOOP", "next": "", "loop": { "over": "items.results", "var": "item", "step": "process" } }, "process": { "id": "process", "op": "AGENT", "next": "save", "out": { "set": "analysis" }, "agent": { "model_type": "small", "system": "Extract insights as JSON. Output ONLY raw JSON.", "user": "{{ $.item }}" } }, "save": { "id": "save", "op": "CALL", "next": "", "call": { "function_id": "dataset.record.upsert", "args": { "dataset": "my_analysis", "key": "{{ $.item.id }}", "record": { "analysis": "$.analysis" } } } } } } ``` ### 2. Multi-Query Aggregation → Report ```json { "entrypoint": "query1", "steps": { "query1": { "id": "query1", "op": "CALL", "next": "query2", "out": { "set": "@.data1" }, "call": { "function_id": "semantic.query", "args": { "query": "..." } } }, "query2": { "id": "query2", "op": "CALL", "next": "generate", "out": { "set": "@.data2" }, "call": { "function_id": "semantic.query", "args": { "query": "..." 
} } }, "generate": { "id": "generate", "op": "AGENT", "next": "", "out": { "set": "@.response" }, "agent": { "model_type": "large", "system": "Generate a report.", "user": "Data 1:\n{{ $.data1 }}\n\nData 2:\n{{ $.data2 }}" } } } } ``` ### 3. Incremental Processing (Skip Already Processed) ```sql SELECT t.* FROM tickets t LEFT JOIN dataset_records('processed') d ON t.id = d.key WHERE d.key IS NULL LIMIT {{if $.limit}}{{ $.limit }}{{else}}100{{end}} ``` This is the standard pattern for flows that run repeatedly and should only process new records. --- ## Common Mistakes - **NEVER** use `{{ $.var }}` to pass an entire object — use `"$.var"` (quoted, no braces). Template interpolation stringifies objects unpredictably. - **NEVER** assume a `CALL` step always succeeds. `semantic.query` and `data_connection.query` fail the step on query errors — the current path stops. Design flows to be resilient to step failures. - **NEVER** store loop results as bare local variables if you need them after the loop — use `@.` globals or `out.append`. - **NEVER** write to a `@.` global in one loop step and read it in a later step of the same loop — loops are breadth-first, so all first steps run before any second steps. Use local variables to pass data within an iteration. - **NEVER** use `CONDITION` expecting if/else — it's a gate that stops execution on false. Use `BRANCH` for true branching. - **ALWAYS** add "Output ONLY raw JSON, no markdown, no backticks" to AGENT system prompts when you need JSON. - **ALWAYS** set `"next": ""` on the last step — omitting it causes runtime errors. 
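The loop rules above, sketched end to end: a bare local (`scored`) carries the LLM output to the next step within the same iteration, and an `append` to a global accumulates results that a post-loop step reads. Prompts and variable names are illustrative; the `report` step (not shown) would read `@.scores` after the loop completes.

```json
{
  "loop": {
    "id": "loop",
    "op": "LOOP",
    "next": "report",
    "loop": { "over": "items.results", "var": "item", "step": "score" }
  },
  "score": {
    "id": "score",
    "op": "AGENT",
    "next": "record",
    "out": { "set": "scored" },
    "agent": {
      "model_type": "small",
      "system": "Score this item from 1 to 5. Output ONLY raw JSON, no markdown, no backticks.",
      "user": "{{ $.item }}"
    }
  },
  "record": {
    "id": "record",
    "op": "TRANSFORM",
    "next": "",
    "out": { "append": "@.scores" },
    "transform": { "type": "format", "input": "$.scored", "template": "{{ $.input }}" }
  }
}
```

Because scheduling is breadth-first, every `score` runs before any `record`; the pattern stays correct only because `scored` is iteration-local and `@.scores` is written append-only.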
## Runtime Limits | Limit | Value | Behavior | |-------|-------|----------| | Function output (`CALL`) | 100 KiB | Step fails | | Global variables | 1 MiB | Run fails | | Step memory | 10 MiB | Run fails | | Loop iterations | 1000 | Step fails | | Events size | 5 MiB | Run fails | | Chat ticks per request | 1000 | Request stops | | Background run ticks per execution | 100 | Run pauses until next runner cycle | | Chat wakeup/wait | 1 minute | Fails with `WAIT_TOO_LONG` | | Run timeout | 24 hours | Background/scheduled runs | ## Useful Tips ### Getting the Current Date/Time The flow runtime does not expose the current date directly. Use a `semantic.query` CALL to get it: ```json "call": { "function_id": "semantic.query", "args": { "query": "SELECT date('now') as today, strftime('%w', 'now') as day_of_week" } } ``` This returns: `{ "results": [{ "today": "2026-03-02", "day_of_week": "1" }] }` (0=Sunday, 6=Saturday). Use this before any step that needs date-aware logic. ### Debugging 1. **Template not interpolating?** Check variable path. Use `{{ . }}` to see entire context. 2. **Empty results?** Verify query syntax and table/column names. 3. **Wrong variable scope?** Use `@.var` for global, `$.var` for local. When in doubt, use `@.`. 4. **LLM output malformed?** Add explicit formatting instructions in system prompt. 5. **Object passed as string?** You used `{{ $.var }}` instead of `"$.var"`. 6. **Chat produces no response?** Ensure the final AGENT step stores output in `@.response`. --- ## Functions reference **CALL** steps invoke **functions** by `function_id`. Each function validates its arguments; failures surface as step errors (the run stops along that path). :::tip Build with AI Use [Vibe coding](./agents-vibe-coding.md) so an assistant can query the semantic schema and wire arguments safely. ::: :::info For AI assistants Argument and return shapes, edge cases, and SQL dialect notes live in the [flow authoring guide](./agents-flow-authoring-guide.md). 
::: ## Data ### `semantic.query` Runs **SQL** against the workspace semantic database (accounts, metrics, conversations, tickets, etc.). **Returns:** `{ "results": [...], "columns": [...], "total_rows": N }` ```json { "function_id": "semantic.query", "args": { "query": "SELECT account_id, name, health_score FROM accounts WHERE churned = 0 LIMIT 100" } } ``` Common tables include **`accounts`**, **`meetings`**, **`conversations`**, **`tickets`**, **`topics`**, **`notes`**, **`tasks`**, **`activities`**, **`contacts`**, **`workspace_users`**, **`account_metrics`**, and **`dataset_records('your_dataset')`** for dataset-backed tables. Column lists and types are documented in the flow authoring guide. ### `data_connection.query` Runs SQL against a configured **external** connection (warehouse, CRM mirror, etc.). **Args:** `data_connection_id`, `query` **Returns:** Same general shape as `semantic.query`. ```json { "function_id": "data_connection.query", "args": { "data_connection_id": "019b3c9e-aaaa-bbbb-cccc-ddddeeeeffff", "query": "SELECT id, name FROM companies WHERE updated_at > current_timestamp - interval '1' day" } } ``` ### `accounts.select` Returns account IDs matching a **filter group** (AND of OR groups of filters)—the same structured filters used elsewhere in FunnelStory. **Returns:** `{ "total_count": N, "account_ids": ["..."] }` ```json { "function_id": "accounts.select", "args": { "filter": { "and_group": [ { "or_group": [ { "filter": { "name": "", "metric_filter": { "metric_id": "product_engagement", "condition": "equal", "value": "daily_active" } } } ] } ] } } } ``` Metric and activity IDs are **workspace-specific**—look them up in your configuration; do not guess. ## Datasets ### `dataset.record.upsert` Creates or replaces a record keyed by string **`key`**. **Returns:** `{ "dataset": "...", "key": "..." 
}` ```json { "function_id": "dataset.record.upsert", "args": { "dataset": "qbr_notes", "key": "{{ $.account_id }}", "record": { "summary": "$.llm_output", "updated_at": "{{ $.now }}" } } } ``` Use quoted `"$.var"` form when passing **whole objects**; use `{{ $.id }}` for string interpolation inside SQL or text. ### `dataset.record.set_field` Updates a single field on an existing record. ### `dataset.record.delete` Deletes a record by key. ## Communication ### `slack.send_message` **Required:** `connection_id`, `channel_id`, and at least one of **`text`** or **`blocks`**. **Returns:** `{ "success": true, "response_channel": "...", "response_timestamp": "..." }` ```json { "function_id": "slack.send_message", "args": { "connection_id": "slack_conn_123", "channel_id": "C01234567", "text": "{{ $.message }}" } } ``` ### `email.send` **Required:** `to` (array), `subject`, **`body`** (plain text). **`html`** is optional but **`body`** is still required. **Returns:** `{ "sent": true, "recipients": [...], "recipient_count": N }` ```json { "function_id": "email.send", "args": { "to": ["csm@example.com"], "subject": "Weekly risk digest", "body": "Plain text version here", "html": "HTML version" } } ``` ## CRM ### `salesforce.read_record` / `salesforce.update_record` **Args:** `data_connection_id`, `object_type`, `record_id`, plus `fields` array for reads or `fields` object for updates. ### `hubspot.read_record` / `hubspot.update_record` **Args:** `data_connection_id`, `object_type` (e.g. `companies`), `record_id`, and fields as above. ## Utility ### `tasks.create` Creates a FunnelStory **task**. Title can be driven from template fields; see the flow authoring guide for optional priority, assignee, due date, actions, and references. ### `template.render` Renders a stored **message template** by `template_id` with a `vars` map. ### `search.web` Runs a **web search**; optional `recency_filter`: `7d`, `30d`, `90d`, `1y`. 
## Related - [Variables and data](./agents-variables-and-data.md) — interpolation and `$.` vs `@.` - [LLM steps](./agents-llm-steps.md) — which functions may appear as **AGENT** tools - [Examples](./agents-examples.md) --- ## Getting started with AI Agents This walkthrough takes you from an empty workspace to a **saved, testable agent** that reads account data, summarizes it with an LLM, and sends a **Slack** message—so you learn triggers, `CALL` steps, and an `AGENT` step in one pass. :::tip Build with AI Prefer natural language? See [Vibe coding](./agents-vibe-coding.md) to have an assistant create the same agent over MCP. ::: ## Prerequisites - A FunnelStory **workspace** with models and data you can query. - At least one **Slack** [data connection](../data-connections/overview.md) if you follow the notification step (you can swap Slack for **email** using `email.send` instead). ## Steps 1. Open **Agents** in the product (`/agents`). Choose **Create agent** (or **Add agent** when the list is empty). 2. **[screenshot placeholder]** Name the agent and pick a **Manual** trigger while you learn; manual agents run only when you or an API client starts a run. 3. On the **canvas**, add a **CALL** step and choose **Semantic query**. Configure a simple query, for example accounts with health data: ```json { "function_id": "semantic.query", "args": { "query": "SELECT account_id, name, health_score FROM accounts WHERE health_score IS NOT NULL ORDER BY health_score ASC LIMIT 20" } } ``` Wire the step so the graph has a clear entry and `next` pointing forward. 4. Add an **AGENT** step after the query. Use a **small** model for structured summaries. Example prompts: - **System:** `You are a CSM assistant. Output plain text bullet points, no markdown code fences.` - **User:** `Summarize risks and positives for this cohort:\n{{ $.query_results }}` (use the variable name your previous step wrote; the UI shows available names.) 5.
Add another **CALL** step—**Send Slack message**—with your workspace Slack **connection id**, **channel id**, and `text` that includes the agent output, for example `{{ $.summary }}` if you stored the LLM output in `summary`. 6. Use **Test** on each step (or the trigger **test run**) to confirm SQL returns rows, the LLM returns text, and Slack accepts the payload. **[screenshot placeholder]** 7. **Save**, then toggle the agent **Active** when you are ready for scheduled or event triggers (configure those on [Triggers](./agents-triggers.md)). ## Next - [Triggers](./agents-triggers.md) — run on a schedule, on needle movers, on SQL rows, and more - [Testing and runs](./agents-testing-and-runs.md) — run history, limits, debugging - [Examples](./agents-examples.md) — copy-paste patterns --- ## LLM steps (AGENT) # LLM steps (`AGENT`) An **`AGENT`** step runs an LLM with a **system** prompt, a **user** prompt, optional **tools**, and optional **multi-turn** behavior. Use it whenever you need summarization, classification, routing decisions, or natural-language output—while keeping deterministic work in **`CALL`** steps. :::tip Build with AI Prompt-heavy agents are easiest to iterate via MCP; see [Vibe coding](./agents-vibe-coding.md). ::: ## Model sizes | `model_type` | When to use it | |--------------|----------------| | `small` | Structured extraction, tagging, short summaries, JSON you will parse downstream. | | `large` | Multi-input reasoning, long narratives, nuanced recommendations. | ## CALL steps vs tools on LLM steps FunnelStory has a shared catalog of **functions** — `semantic.query`, `slack.send_message`, `email.send`, `tasks.create`, CRM read/update, dataset operations, web search, and more. The difference is *who decides when to call them*: - **CALL steps** let *you* (the flow author) invoke any function in the catalog with arguments you control. You decide which step runs the function and what values go in. 
- **Tools on an AGENT step** let *the model* invoke functions during the step. Only functions the product exposes as agent tools can be attached — currently a smaller set than the full CALL catalog. Functions not in the agent-tool list remain available in ordinary CALL steps. See [Functions reference](./agents-functions-reference.md) for the full catalog. ## Available agent tools These functions can be attached as tools on an AGENT step today: - `semantic.query` - `email.send` - `slack.send_message` - `tasks.create` Each tool entry maps a **`name`** (what the model sees) to a **`function_id`** (what actually runs): ```json { "tools": [ { "name": "query_workspace", "function_id": "semantic.query" } ] } ``` ## Fixed arguments For each tool, you can **lock specific arguments** so the model cannot choose them. Fixed keys are merged at runtime and hidden from the model's view of the tool schema — the model only sees the remaining parameters it is allowed to fill in. Use fixed arguments to narrow scope: force a specific connection, pin a Slack channel, or constrain part of a query. ```json { "tools": [ { "name": "post_to_slack", "function_id": "slack.send_message", "fixed_args": { "connection_id": "slack_conn_123", "channel_id": "C01234567" } } ] } ``` In this example the model can only choose the message content; connection and channel are locked by the flow author. ## Prompting for JSON When downstream steps parse JSON, add explicit instructions to the system prompt, for example: **"Output ONLY raw JSON, no markdown, no backticks."** ## Threads, memory, and multi-turn (advanced) These are optional add-ons for specialized flows. Most agents do not need them. | Field | Purpose | |-------|---------| | `thread_id` | Optional stable id so repeated runs can restore prior LLM context.
| | `variable_store` | When true, exposes persistent key-value storage tools (`variable_get`, `variable_set`, `variable_push`, `variable_pop`, `variable_clear`) for scratchpad memory inside the step. | | `multi_turn` | When true, allows the agent to pause execution and wait for external input before continuing (uses the `signal` tool path). | Details on these capabilities belong in the [flow authoring guide](./agents-flow-authoring-guide.md). ## Chat responses For agents you talk to in the product **Chat** experience, the **final** `AGENT` step should write its user-visible text to the global **`@.response`** variable (`"out": { "set": "@.response" }`). If you skip this, the thread may complete without showing a reply. ## Example ```json { "id": "analyze", "op": "AGENT", "next": "", "out": { "set": "@.analysis" }, "agent": { "model_type": "small", "system": "You are an analyst. Output ONLY raw JSON, no markdown, no backticks.", "user": "Classify renewal risk for this payload:\n{{ $.account_payload }}", "tools": [ { "name": "query_semantic_db", "function_id": "semantic.query" } ] } } ``` ## Related - [Flow authoring guide](./agents-flow-authoring-guide.md) — full JSON reference for `agent` blocks - [Operations](./agents-operations.md) - [Functions reference](./agents-functions-reference.md) - [Testing and runs](./agents-testing-and-runs.md) --- ## Operations (step types) Each **step** in an agent graph has an **`op`** that tells the runner what to do next. Steps share common fields: **`id`** (must match the key in the `steps` map), **`next`** (empty string `""` ends the path), optional **`out`** for where results are stored, and an op-specific block (`call`, `agent`, `loop`, etc.). :::tip Build with AI See [Vibe coding](./agents-vibe-coding.md) and the [flow authoring guide](./agents-flow-authoring-guide.md) for complete JSON patterns. 
::: :::info For AI assistants Authoritative JSON shapes and edge cases (especially **LOOP** scheduling) are in the [flow authoring guide](./agents-flow-authoring-guide.md). ::: ## Summary | `op` | Purpose | Config block | |------|---------|----------------| | `CALL` | Invoke a function | `call.function_id`, `call.args` | | `AGENT` | Run an LLM (optional tools) | `agent` — see [LLM steps](./agents-llm-steps.md) | | `LOOP` | Iterate an array | `loop.over`, `loop.var`, `loop.step` | | `CONDITION` | Continue only if true | `condition.condition` | | `BRANCH` | Start parallel paths | `branch.parallel_paths` | | `JOIN` | Wait for all branches | *(no extra config)* | | `TRANSFORM` | Template format or regex extract | `transform` | | `WAIT` | Pause wall-clock | `wait.duration` | | `SPAWN` | Run a subplan | `spawn.plan_id`, `spawn.input` | ## CALL Invokes one **function** by id (for example `semantic.query`). On failure the step errors and the current path stops. ```json { "id": "fetch_accounts", "op": "CALL", "next": "summarize", "out": { "set": "@.rows" }, "call": { "function_id": "semantic.query", "args": { "query": "SELECT account_id, name FROM accounts LIMIT 50" } } } ``` See [Functions reference](./agents-functions-reference.md). ## AGENT Runs an LLM with prompts and optional tools. Detailed options are on [LLM steps](./agents-llm-steps.md). ## LOOP Repeats a child step for each element in an array. **Scheduling is breadth-first:** for body steps `A → B`, the engine runs `A` for all items before any `B`. Do not rely on a global written in `A` being read in `B` within the same iteration—use **locals** within the iteration and **`out.append`** on globals only when aggregating after all iterations. Max **1000** iterations. ```json { "id": "each_ticket", "op": "LOOP", "next": "", "loop": { "over": "tickets.results", "var": "ticket", "step": "classify_one" } } ``` ## CONDITION If the condition expression is **true**, execution follows **`next`**. 
If **false**, the **current path stops** (this is not an if/else). Use **`BRANCH`** with separate paths when you need alternatives. ```json { "id": "gate", "op": "CONDITION", "next": "notify", "condition": { "condition": "$.has_results" } } ``` ## BRANCH and JOIN **BRANCH** schedules multiple entry steps at once; **JOIN** waits until all paths reach it before continuing. ```json { "id": "fan_out", "op": "BRANCH", "next": "join_all", "branch": { "parallel_paths": ["path_slack", "path_email"] } } ``` ```json { "id": "join_all", "op": "JOIN", "next": "done" } ``` ## TRANSFORM **`format`** applies a Go `text/template` to an input object. **`regexp_extract`** pulls a capture group from a string. ```json { "id": "format_body", "op": "TRANSFORM", "next": "send", "out": { "set": "@.body" }, "transform": { "type": "format", "input": "$.payload", "template": "Hello {{ $.input.name }}" } } ``` ## WAIT Pauses for a duration string (e.g. `"5s"`, `"10m"`, `"1h"`). ```json { "id": "pause", "op": "WAIT", "next": "after_pause", "wait": { "duration": "5m" } } ``` ## SPAWN Starts a **subplan** defined under `config.subplans` in parallel with the parent. ```json { "id": "side_effect", "op": "SPAWN", "next": "continue_main", "spawn": { "plan_id": "audit_trail", "input": { "note": "{{ $.reason }}" } } } ``` ## Rules that prevent silent failures - The **`id`** field must equal the step’s key in the `steps` map. - The **last** step on a path should use `"next": ""`. - Prefer **globals** with the **`@.`** prefix when a later step outside the same loop/branch needs the value ([Variables and data](./agents-variables-and-data.md)). 
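Putting these rules together, here is a minimal sketch of a two-step plan fragment; the query and the condition expression are illustrative, and the exact surrounding fields follow the flow authoring guide.

```json
{
  "entrypoint": "fetch",
  "steps": {
    "fetch": {
      "id": "fetch",
      "op": "CALL",
      "next": "gate",
      "out": { "set": "@.rows" },
      "call": {
        "function_id": "semantic.query",
        "args": { "query": "SELECT account_id FROM accounts LIMIT 10" }
      }
    },
    "gate": {
      "id": "gate",
      "op": "CONDITION",
      "next": "",
      "condition": { "condition": "@.rows" }
    }
  }
}
```

Note that each `id` equals its key in the `steps` map and the final step ends its path with `"next": ""`.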
## Related - [Functions reference](./agents-functions-reference.md) - [LLM steps](./agents-llm-steps.md) - [Examples](./agents-examples.md) --- ## AI Agents overview # AI Agents **AI Agents** are configurable automations in FunnelStory that run a series of steps: pull data from your workspace or connections, call an LLM where you need judgment or language, then take action (Slack, email, tasks, CRM updates, datasets, and more). You choose **when** they run—on a schedule, when events occur, when SQL returns matching rows, or only when someone starts a run manually or from chat. :::tip Build with AI You can design and refine agents with Cursor, Claude, or any MCP client. See [Vibe coding](./agents-vibe-coding.md). ::: ## How it works Each agent is backed by a **configuration** (a directed graph): an **entry** step, **operations** on each node, and **variables** that pass results between steps. **Triggers** decide when the platform starts a new **run**; each run records **events** so you can inspect what happened. ```mermaid flowchart LR subgraph build [Build] UI[Canvas UI] MCP[MCP plus AI assistant] end subgraph config [Agent configuration] Trigger[Trigger] Graph[Step graph] Funcs[Functions] end subgraph exec [Execution] Runner[Flow runner] LLM[LLM provider] Integrations[Slack Email CRM] end UI --> config MCP --> config Trigger --> Runner Graph --> Runner Runner --> LLM Runner --> Integrations Funcs --> Integrations ``` ## Core concepts | Concept | Meaning | |--------|---------| | **Trigger** | When a new run is created (manual, schedule, interval, activity, signal, needle mover, conversation, or query). Published agents use `draft: false` and a trigger so the runner can enqueue work. | | **Step graph** | Steps wired with `next`; each step has an **operation** (`CALL`, `AGENT`, `LOOP`, `CONDITION`, `BRANCH`, `JOIN`, `TRANSFORM`, `WAIT`, `SPAWN`). | | **CALL** | Invokes a **function** (for example `semantic.query`, `slack.send_message`) with arguments you configure. 
| | **AGENT** | Runs an LLM with optional **tools** (a subset of the same functions) for multi-step reasoning inside one step. | | **Variables** | Step outputs go to **local** or **global** variables; templates interpolate values into strings and SQL. | | **Run** | One execution of the graph from trigger or manual start through completion, failure, or wait states. | ## Two ways to build 1. **Canvas** — Open **Agents**, create an agent, set the trigger, add blocks on the canvas, configure each step in the side panel, test, then save and activate. 2. **Vibe coding** — Connect an AI assistant to the [FunnelStory MCP server](../platform/mcp-server/getting-started.md); it can read the [flow authoring guide](./agents-flow-authoring-guide.md), create or update agents, and run them for you. ## Related - [Getting started](./agents-getting-started.md) — first agent end-to-end - [Triggers](./agents-triggers.md) — when agents run - [Operations](./agents-operations.md) — step types - [Functions reference](./agents-functions-reference.md) — what each `CALL` can do - [LLM steps](./agents-llm-steps.md) — configuring `AGENT` steps - [Vibe coding](./agents-vibe-coding.md) — MCP and skill file workflow - [MCP server overview](../platform/mcp-server/overview.md) — tools and auth for assistants --- ## Testing and runs FunnelStory gives you **step-level tests**, **trigger previews**, **Chat** tryouts, and a **run history** so you can ship agents confidently. :::tip Build with AI MCP **`run_flow`** is ideal for regression checks after your assistant edits JSON—see [Vibe coding](./agents-vibe-coding.md). ::: ## Step testing In the agent builder, select a step and use **Test** to execute it with the current variable context. Fix SQL, templates, and missing globals before wiring more complexity. ## Trigger preview The trigger panel in the agent builder can **preview** what would match before you rely on the trigger in production. 
What you see depends on the trigger type: | Trigger type | What preview shows | |---|---| | **Query** | Sample rows the SQL would return | | **Activity / Signal / Needle mover / Conversation** | Matching events (where the product supports preview for that type) | | **Schedule / Interval** | Upcoming run times in a window | | **Manual** | Nothing — manual triggers have no automatic matches to preview | You typically need a saved trigger configuration to preview. If the list is empty, check that the trigger is configured correctly and that matching data exists in your workspace. ## Test runs Start a **test run** from the builder to stream logs and intermediate LLM output without waiting for production schedules. This uses the same runtime as production but is initiated explicitly from the UI. When testing a trigger-driven agent, you can supply **sample trigger data** in the test run panel. The sample must match the trigger type you are building for: - A **query-shaped** sample uses `row` with column values from your SQL. - An **activity-shaped** sample uses `activity` with fields like `activity_id`, `account_id`, `timestamp`. - Other event types follow the same pattern — see [Variables and data — trigger data by type](./agents-variables-and-data.md#trigger-data-by-type). If the sample shape does not match the trigger type, steps that reference `$.trigger.row` (or the wrong event path) will see empty values. ## Chat testing From the **Agents** list, open **Chat** on an agent to supply inputs and stream responses. Chat-capable graphs should finish by writing to **`@.response`** ([LLM steps](./agents-llm-steps.md)). ## Run history Use **View runs** on an agent card to inspect past executions, statuses, and **event timelines** for a selected run. ## Statuses you will see | Status | Meaning | |--------|---------| | `RUNNING` | Work is still executing or waiting within limits. | | `COMPLETED` | All paths finished successfully. 
| | `FAILED` | An error stopped the run (check events for the failing step). | | `WAITING_APPROVAL` | A step requested human approval before continuing. | ## Runtime limits (selected) Exact numbers belong in the [flow authoring guide](./agents-flow-authoring-guide.md); highlights for operators: | Area | Behavior | |------|-----------| | CALL output | Large function results fail the step—keep SQL selective. | | Globals | Very large `@.` state fails the run—avoid storing unbounded query results when possible. | | Loops | Capped at **1000** iterations. | | Background ticks | Long runs may pause between runner cycles—design idempotent retries. | ## Debugging checklist - Templates show up empty → wrong variable path or missing quotes for object passthrough. - SQL returns nothing → verify table/column names against the semantic schema resource. - Chat is silent → confirm the final `AGENT` writes **`@.response`**. - Loop appears "out of order" → remember **breadth-first** scheduling ([Operations](./agents-operations.md)). - `$.trigger.row` is empty → the run was not started by a query trigger; use the matching trigger path (see [Variables and data](./agents-variables-and-data.md#trigger-data-by-type)). ## Related - [Triggers](./agents-triggers.md) — simulating trigger payloads on manual runs - [Examples](./agents-examples.md) --- ## Triggers Triggers define **when** FunnelStory starts a new run for an agent. A published (non-draft) agent with a configured trigger runs automatically; draft agents and agents with no trigger only run when someone starts them manually. :::tip Build with AI Assistants should read the full schema in the [flow authoring guide](./agents-flow-authoring-guide.md) while editing trigger configuration. See also [Vibe coding](./agents-vibe-coding.md). ::: :::info For AI assistants If you are helping a user author JSON, use the [flow authoring guide](./agents-flow-authoring-guide.md) for exact shapes, CEL filter hints, and idempotency behavior. 
::: ## Summary | `type` | When it fires | Main config | |--------|----------------|-------------| | `manual` | Someone starts a run from the UI, Chat, or an integration | None in stored config | | `schedule` | Cron expression | `schedule.expr`, optional `schedule.timezone` | | `interval` | Repeating duration | `interval.duration` (e.g. `"6h"`, `"30m"`) | | `activity` | Model activity events | `activity.activity_ids` (array); optional `filter_expr` | | `signal` | Signal rule fires | `signal.rule_ids` (array); optional `filter_expr` | | `needle_mover` | Needle mover recorded | `needle_mover.labels` and/or `needle_mover.impacts` (at least one value across both); optional `filter_expr` | | `conversation` | Conversation ingested | `conversation.types` (e.g. meeting, ticket, email, chat, call); optional `filter_expr` | | `query` | SQL against semantic DB returns rows | `query.query`; **each row starts one run** | Optional on several types: **`account_filter`** — restricts which accounts the trigger applies to. If a trigger "never fires" for certain accounts, check the account filter and trigger rules first. Uses the same filter-group concept as elsewhere in FunnelStory. :::caution Trigger data at runtime Each trigger type exposes different fields under `$.trigger`. For example, query triggers provide `$.trigger.row.*` while activity triggers provide `$.trigger.activity.*`. Referencing `$.trigger.row` on a non-query run produces empty or missing values. See [Variables and data — trigger data by type](./agents-variables-and-data.md#trigger-data-by-type) for the full table. ::: ## Manual Use **manual** while building. Runs start only when someone triggers them — from the UI, Chat, or an integration. Nothing schedules automatically. ```json { "type": "manual" } ``` ## Schedule Runs on a **cron** expression, optionally in a named timezone. 
```json { "type": "schedule", "schedule": { "expr": "0 9 * * 1-5", "timezone": "America/Los_Angeles" } } ``` ## Interval Runs every **duration** after the previous eligible tick (e.g. `"30s"`, `"15m"`, `"6h"`, `"24h"`). ```json { "type": "interval", "interval": { "duration": "6h" } } ``` ## Activity Fires when configured **activity** IDs appear in your modeled activity stream. ```json { "type": "activity", "activity": { "activity_ids": ["019bacd1-e737-7bef-a310-c35ff896febd"] }, "filter_expr": "true" } ``` Replace IDs with values from **your** workspace configuration. ## Signal Fires when specific **signal rules** emit. ```json { "type": "signal", "signal": { "rule_ids": ["019c0a12-3456-7890-abcd-ef1234567890"] } } ``` ## Needle mover Fires when a needle mover matches configured **labels** and/or **impacts**. ```json { "type": "needle_mover", "needle_mover": { "labels": ["pricing"], "impacts": ["positive"] } } ``` ## Conversation Fires when a conversation of given **types** is ingested. ```json { "type": "conversation", "conversation": { "types": ["ticket", "meeting"] } } ``` The trigger payload provides `key`, `metadata`, and `timestamp` for the conversation — not the full message body. To load the full content, use a `semantic.query` step referencing ids from `$.trigger.conversation.key`. ## Query **Query** triggers run SQL against the **semantic** workspace database. **Each row** returned becomes **one run**; row columns are exposed as **`@.trigger.row.`** in steps and templates. Important product rules: - Evaluation runs at most **once per UTC day** for the workspace (first successful pass that day), so design SQL with a **`LIMIT`** and a selective **`WHERE`**. - **Idempotency** is derived from the **full row JSON** — stable identical rows dedupe; changing any column changes the key. - Use **`LIMIT`** to cap how many new runs enqueue per cycle. 
```json { "type": "query", "query": { "query": "SELECT account_id, name FROM accounts WHERE subscription_remaining_days < 90 ORDER BY subscription_remaining_days ASC LIMIT 50" } } ``` In downstream steps, reference `{{ $.trigger.row.account_id }}` or the global trigger payload patterns described in the flow authoring guide. ## Testing without waiting for the trigger When running from the builder or starting a manual test run, you can supply sample **trigger data** so the run behaves as though a real trigger started it. The sample JSON must match the trigger type you are building for. **Query-shaped sample** (columns from your SQL): ```json { "trigger": { "row": { "account_id": "acct_123", "name": "Example Corp" } } } ``` **Activity-shaped sample:** ```json { "trigger": { "activity": { "activity_id": "019bacd1-e737-7bef-a310-c35ff896febd", "account_id": "acct_456", "timestamp": "2026-04-01T09:00:00Z", "count": 1 } } } ``` The same shapes apply when an assistant runs a flow with a simulated trigger via MCP. ## Related - [Operations](./agents-operations.md) - [Variables and data](./agents-variables-and-data.md) - [Testing and runs](./agents-testing-and-runs.md) --- ## Variables and data Steps pass data through **variables**. Templates turn those values into SQL, JSON arguments, and prompts. :::tip Build with AI The [flow authoring guide](./agents-flow-authoring-guide.md) documents every interpolation edge case (including "silent empty string" behavior). ::: :::info For AI assistants Read the [flow authoring guide](./agents-flow-authoring-guide.md) before generating JSON—especially **LOOP** ordering and object passing rules. ::: ## Syntax | Syntax | Use | |--------|-----| | `{{ $.name }}` | Interpolate a **string** inside SQL, JSON text fields, or prompts. | | `"$.object"` | Pass a **whole object or array** through without stringifying (quoted path, no braces). | | `@.name` | **Global** variable—readable from any later step in the run. 
| | `$.name` | **Local** variable—scoped to the current chain / iteration. | | `@.trigger.*` | Payload from the run's trigger — the shape depends on trigger type (see below). | **Do not** wrap whole objects in `{{ }}`—you will stringify unpredictably. Use `"$.var"` instead. ## Trigger data by type Each trigger type populates different fields under `$.trigger` (and `@.trigger` for globals). Only one shape applies per run. | Trigger type | Available under `$.trigger` | |---|---| | **query** | `row.` — columns from your SQL SELECT | | **activity** | `activity.activity_id`, `activity.model_id`, `activity.account_id`, `activity.user_id`, `activity.timestamp`, `activity.count` | | **signal** | `signal.signal_id`, `signal.rule_id`, `signal.type`, `signal.account_id`, `signal.timestamp`, plus optional `signal.message`, `signal.attributes`, `signal.value`, `signal.previous_value` | | **needle_mover** | `needle_mover.needle_mover_id`, `needle_mover.title`, `needle_mover.description`, `needle_mover.state`, `needle_mover.impact`, `needle_mover.label`, `needle_mover.created_at` | | **conversation** | `conversation.key`, `conversation.metadata`, `conversation.timestamp` | Event-driven runs also include `$.trigger.account_ids` when account scope is available. :::warning $.trigger.row is query-only Referencing `$.trigger.row` when the run was started by an activity, signal, needle mover, or conversation trigger produces empty or missing values. Use the matching path for the trigger type — for example `$.trigger.activity.account_id` on an activity-triggered run. If you need data that is not in the trigger payload, add a `semantic.query` step to look it up. ::: :::note Saved config vs runtime naming Some field names differ between the saved trigger configuration and the runtime payload. For example, the saved needle mover config uses **`impacts`** (plural array) while the runtime event carries **`impact`** (singular string). 
See [Triggers](./agents-triggers.md) for the saved shapes and the table above for runtime fields. ::: ## Outputs (`out`) | Form | Meaning | |------|---------| | `{ "set": "@.global" }` | Store result globally. | | `{ "set": "local" }` | Store locally for tight chains inside one scope. | | `{ "append": "@.list" }` | Append each result to an array (common in loops). | | `{ "merge": "@.obj" }` | Shallow-merge object keys into an existing global object. | **Rule of thumb:** if any later step **outside** the current loop or branch needs the value, use an **`@.`** global. When unsure, prefer **`@.`**—locals are easy to lose across `LOOP` scheduling. ## Templates String fields go through Go **`text/template`** semantics: - Conditionals: `{{if $.limit}}LIMIT {{ $.limit }}{{else}}LIMIT 100{{end}}` - Indexing arrays: `{{ index $.rows 0 }}` — bracket syntax like `$.rows[0]` is **not** supported. - Missing variables become **empty string** (not an error), which can make SQL silently wrong—validate in testing. ## Datasets **Datasets** are durable key/value stores of JSON records. Read them through **`semantic.query`** using `dataset_records('dataset_name')` as a table. **Rules:** - Do **not** invent dataset names—use names your workspace already has. - Verify a dataset exists (for example `SELECT 1 FROM dataset_records('name') LIMIT 1`) before writing. - Writes (`upsert`, `set_field`, `delete`) are **destructive**—only use them when the agent's purpose includes persistence. ## Common patterns - **Fan-out then aggregate:** use `BRANCH`/`JOIN` or sequential `CALL`s writing to distinct `@.` keys, then an `AGENT` step that reads multiple globals. - **Incremental processing:** keep a dataset of processed keys; query with `LEFT JOIN` to skip completed rows (see [Examples](./agents-examples.md)). 
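The incremental-processing pattern can be sketched as a single `semantic.query` call. The dataset name `processed_tickets`, the ticket column names, and the assumption that `dataset_records(...)` exposes a `key` column are all illustrative; substitute a dataset and columns that actually exist in your workspace.

```json
{
  "function_id": "semantic.query",
  "args": {
    "query": "SELECT t.id, t.subject FROM tickets t LEFT JOIN dataset_records('processed_tickets') d ON d.key = t.id WHERE d.key IS NULL LIMIT 50"
  }
}
```

After handling each row, upsert its id into the dataset with `dataset.record.upsert` so the next run skips it.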
## Related - [Operations](./agents-operations.md) - [Testing and runs](./agents-testing-and-runs.md) - [Examples](./agents-examples.md) --- ## Vibe coding **Vibe coding** means describing the agent you want in everyday language and letting an **AI assistant** (Cursor, Claude Desktop, etc.) draft the JSON, call APIs, and run tests on your behalf. FunnelStory supports this through the **MCP server** and a published **flow authoring guide** your assistant can read. :::tip Human-first path Prefer clicking the canvas? Start with [Getting started](./agents-getting-started.md)—you can still paste JSON from an assistant later. ::: ## Prerequisites 1. Connect your assistant using [MCP Getting Started](../platform/mcp-server/getting-started.md) and [Authentication](../platform/mcp-server/authentication.md). 2. Ensure your user can access the target **workspace** and connections (Slack, CRM, email). ## Approach A — MCP (recommended) Once connected, the assistant gains tools such as: | Tool | Role | |------|------| | `query_semantic_db` | Explore live schema + sample rows | | `get_flows` | List agents or fetch a full configuration | | `configure_flow` | Create or update an agent graph | | `run_flow` | Execute by id with optional inputs / simulated `trigger` payload | FunnelStory also exposes the **flow authoring guide** as a resource (`file://flow/guide.md` over MCP). In practice you can say: > “Read the flow guide, then create a draft agent that runs daily, selects accounts with `health_score < 35`, and posts details to Slack channel `C0123` using connection ``.” The assistant should: 1. Pull the guide resource. 2. Draft `trigger_config`, `input_schema` (if needed), and `config.entrypoint` + `steps`. 3. Call **`configure_flow`** with `draft: true`. 4. Call **`run_flow`** with sample `input` / `trigger` objects to validate. 5. Flip `draft` false only after you approve. See also [MCP examples](../platform/mcp-server/examples.md). 
## Approach B — Skill file (offline) Download the same guide your workspace would expose: - **Hosted copy:** [Flow authoring guide](./agents-flow-authoring-guide.md) (same material as the MCP `file://flow/guide.md` resource) Add it to your assistant’s context: | Client | Suggested location | |--------|--------------------| | **Cursor** | `.cursor/rules/funnelstory-agents.mdc` or project rules referencing the file | | **Claude** | Project knowledge / uploaded doc | | **Other** | Paste into a pinned system prompt | Keep the file **version-stamped** in your internal wiki if you fork it—upstream behavior evolves with the product. ## Prompt snippets that work well - **Portfolio sweep:** “List accounts where prediction = churn and subscription_remaining_days < 60. Build a query-triggered agent capped at 200 rows/day.” - **Narrative QBR:** “Given flow input `account_id`, pull last 90 days of meetings + tickets, then summarize risks/opps with citations in plain text.” - **Regression test:** “Fetch flow ``, run it with trigger row JSON `{...}`, and diff the events against yesterday’s run.” ## Guardrails for assistants - Start **`draft: true`** until a human confirms side effects (email, CRM writes, datasets). - Never invent **dataset** names—confirm they exist via SQL. - Respect **query-trigger** daily cadence and **`LIMIT`**. ## Site map for LLMs - [llms.txt](https://docs.funnelstory.ai/llms.txt) — machine-oriented index of documentation URLs (generated at build time for the whole site). - [llms-full.txt](https://docs.funnelstory.ai/llms-full.txt) — single-file bundle of doc content for assistants that ingest one document. 
## Related - [AI Agents overview](./agents-overview.md) - [MCP server overview](../platform/mcp-server/overview.md) - [Available MCP tools](../platform/mcp-server/available-tools.md) --- ## Overview # AI Agents **AI Agents** are configurable automations in FunnelStory that run a series of steps: pull data from your workspace or connections, call an LLM where you need judgment or language, then take action (Slack, email, tasks, CRM updates, datasets, and more). You choose **when** they run — on a schedule, when events occur, when SQL returns matching rows, or only when someone starts a run manually or from chat. Everything is grounded in your **Customer Intelligence Graph** — the same workspace data your team already trusts. ## Get started - [AI Agents overview](./agents-overview.md) — how agents work, core concepts, and two ways to build - [Getting started](./agents-getting-started.md) — build your first agent end-to-end - [Triggers](./agents-triggers.md) — when agents run - [Operations](./agents-operations.md) — step types - [Functions reference](./agents-functions-reference.md) — what each `CALL` can do ## Related - [Renari](../platform/renari.md) — the in-product AI copilot - [MCP Server](../platform/mcp-server/overview.md) — tool-based API for external AI assistants - [How FunnelStory works](../core-concepts/overview.md) - [Customer Intelligence Graph](../core-concepts/customer-intelligence-graph.md) - [Notifications overview](../platform/notifications/overview.md) — when to use built-in Slack/Teams vs agents --- ## AI Providers FunnelStory ships with **built-in LLM usage** for product features such as **Renari**, **Agents**, embeddings, and other AI-assisted workflows—so most teams can use those capabilities without touching this screen. 
**Bring your own LLM (BYO LLM)** is for organizations that want AI traffic to run through **their preferred vendor and models** instead: your own API keys or cloud credentials, your commercial terms with that vendor, and (where supported) your choice of model names and regions. **AI Providers** under **Configure** is where workspace administrators register those vendors and wire them into FunnelStory. If you never add a provider, the product keeps using its default LLM path for your workspace (subject to your plan and agreement). Access requires an **Admin**, **Data Admin**, or **Super Admin** role. ## Providers tab Use **Providers** to add and maintain **your** vendor connections when you are on BYO LLM. Skip this tab if you are not planning to route traffic to a third-party vendor whose bill you control. 1. Go to **Configure → AI Providers**. 2. Open the **Providers** tab. 3. Choose **Add Provider** and pick a vendor: | Vendor | What you supply | |--------|-----------------| | **OpenAI** | API key for your OpenAI organization | | **Google Gemini** | API key for Google AI Studio–style access | | **Google Vertex AI** | Service account **JSON** key plus **project** and **region** (location) that host Vertex | 4. Save the provider. You can add **multiple** providers (for example, OpenAI for chat-quality models and Vertex for embeddings) and **edit** credentials later when keys rotate. Include these keys in your vault's rotation policy; updating a key here takes effect immediately for new AI requests. ## Custom Model tab When you use BYO LLM, open the **Custom Model** tab after at least one provider is saved. There you map **which of your models** FunnelStory should call for each class of work: - **Large** — higher-capability model for complex reasoning or long outputs. - **Small** — faster, cheaper model for structured tasks and high volume. - **Embedding** — model used to vectorize text for search and similarity. 
Each slot is a **provider + model name** pair chosen from the lists supported in the UI for that provider. If you have not completed a custom mapping, the UI may indicate that **FunnelStory’s default** models are still in use for some workloads—once you save mappings for the slots you care about, eligible features use **your** vendor for those calls. Changing mappings applies to **new** AI runs; in-flight jobs finish on the configuration they started with. ## Operational notes - **Why BYO**: Common reasons include enterprise procurement (single LLM vendor), data residency, existing spend commits, or security review requirements for a specific provider. - **Credential scope**: API keys and service accounts should be least-privilege—only the model endpoints FunnelStory needs. - **Data residency**: For BYO LLM, your provider’s region and data policy apply to prompts and completions sent to that vendor; pick providers and regions that match your compliance requirements. - **Audit trail**: Creating, updating, or deleting providers and editing the custom model map generates entries in the [Audit log](./audit-log.md). ## Related - [Renari](./renari.md) — primary in-product assistant - [AI Agents overview](../ai/agents-overview.md) — agents that call LLM steps - [MCP server overview](./mcp-server/overview.md) — optional developer access to the same workspace --- ## Audience filters **Audience filters** are the rules FunnelStory evaluates to decide which **accounts** (and, where applicable, **users**) belong to an audience. You combine conditions into **groups** so the same audience can express realistic segments—for example, "Enterprise region **and** (expansion prediction **or** high usage)". ## How the builder is structured Filters are organized as a tree of **AND** and **OR** groups, with an optional **NOT** branch to exclude a subgroup. 
Inside each leaf, you pick a **field or signal type**, an **operator** (such as equals, greater than, in list, within time window), and a **value**. The exact operators depend on the property type (text, number, date, categorical). Preview the audience while editing to see **matching accounts and users** and sanity-check counts before saving. ## Common condition categories | Category | Examples of what you can filter on | |----------|-----------------------------------| | **Account model** | Custom **properties** mapped from your warehouse or CRM (industry, region, ARR band, segment flags). | | **Journey / funnel** | Accounts in a given **funnel** **stage**, stage changes within a **time range**, or stage-related thresholds. | | **Signals** | Workspace **signals** (including multi-select) tied to account behavior or integrations. | | **Predictions** | **Prediction** **states** or scores for churn, expansion, or other models enabled for the workspace. | | **Metrics** | **Account metrics** with aggregations and thresholds (for example percentile-style bands where configured). | | **Activity** | Specific **product** or **non-product** activities and time-bounded patterns. | | **Traits** | **Enrichment-backed** or derived **traits** on accounts (from connections such as **[Apollo](../../data-connections/enrichment/apollo.md)** or **[Clearbit](../../data-connections/enrichment/clearbit.md)**). | | **Users** | Conditions that reference **users** (for example specific user IDs) when the audience should narrow to certain people on matching accounts. | Not every workspace exposes every category; available rows depend on your **data models**, **funnels**, **predictions**, and **signals** configuration. ## Prediction-based audiences When **predictions** are enabled, you can build audiences directly from **prediction state** (and related fields your admin mapped). 
That is the usual pattern for "all accounts in **at risk** for churn" or "accounts eligible for **expansion**" cohorts that downstream **[notifications](../notifications/overview.md)** or **[CRM sync](../crm-sync/overview.md)** should use. See **[Predictions](../../predictions/overview.md)** for how scores and states are produced. ## Using audiences as an Accounts filter On the **Accounts** view, use **Select Audiences** in the global filters to intersect the grid with one or more saved audiences. Chips in the filter bar show which audiences are active; clear them to return to the full book of business. This path reuses the **same** audience definitions you maintain under **Audiences**, so marketing, CS, and RevOps stay aligned on segment membership. ## Related - **[Audiences overview](./overview.md)** — creating audiences and where they apply. - **[Funnels](../funnels/overview.md)** — configuring stages used in funnel filters. - **[Predictions](../../predictions/overview.md)** — prediction-backed conditions. --- ## Audiences An **audience** is a **saved definition** of which **accounts** (and related **users**) match a set of rules in your workspace. Create audiences when you need repeatable segments for the **Accounts** view, **CRM list sync**, **workflows**, or campaign-style follow-up—without rewriting filters each time. ## How audiences work Each audience stores a **name**, optional **description**, and a **filter** built from conditions you combine with **AND**, **OR**, and **NOT** groups (see **[Audience filters](./filters.md)**). FunnelStory evaluates that filter against your **Account model** and related data on refresh, then stores **how many accounts and users** match, optional **top account traits** for quick scanning, and timestamps such as **last refreshed**. Some audiences are marked as **AI-discovered** when they originate from suggested segments; you can still edit, clone, or delete them like manually created audiences. 
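The **AND**, **OR**, and **NOT** grouping described above is easiest to picture as a tree. The JSON below is purely illustrative (it is not FunnelStory's stored format) and encodes the example segment from the filters guide, "Enterprise region **and** (expansion prediction **or** high usage)", with a **NOT** branch excluding test accounts; every field name and operator spelling here is a hypothetical placeholder.

```json
{
  "all": [
    { "field": "region", "operator": "equals", "value": "Enterprise" },
    {
      "any": [
        { "field": "prediction_state", "operator": "equals", "value": "expansion" },
        { "field": "weekly_active_users", "operator": "greater_than", "value": 50 }
      ]
    }
  ],
  "not": [
    { "field": "is_test_account", "operator": "equals", "value": true }
  ]
}
```

Whatever the stored shape, the preview in the builder is the authoritative way to confirm which accounts and users a tree like this actually matches.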
## Creating and managing audiences 1. Open **Audiences** (navigation path **`/audiences`**). 2. Choose **New Audience** (or start from an empty state that creates a draft with a generated name). 3. Open the audience, build or adjust **filters**, and use **preview** to see matching **users** and **accounts** before you rely on the segment in production. 4. **Save** changes; FunnelStory updates match counts as data refreshes. You can **clone** an existing audience to iterate on a variant, **rename** or refine filters over time, and **delete** audiences you no longer need. Match counts and previews help confirm the definition matches your intent after model or funnel changes. ## AI-suggested audiences FunnelStory can propose **suggested audiences** derived from patterns in your current book of business. Review each suggestion, inspect the implied filters and match rates, then **save** or adapt them into a standard audience you own. ## Where audiences show up | Area | Use | |------|-----| | **Accounts** | Restrict the working account list with **Select Audiences** (global filters and sidebar chips) so day-to-day work stays inside a segment. | | **Audience detail** | Open a single audience to inspect **matching accounts** and summary stats. | | **CRM sync** | Attach a **HubSpot list** sync to an audience so membership stays aligned with CRM—see **[Audience sync](../crm-sync/audience-sync.md)**. | | **Workflows** | Reference audiences when configuring workflow filters so automations run for the same segment. | ## Related - **[Audience filters](./filters.md)** — condition types and combining logic. - **[Apollo](../../data-connections/enrichment/apollo.md)** and **[Clearbit](../../data-connections/enrichment/clearbit.md)** — common sources for account traits used in filters. - **[Audience sync](../crm-sync/audience-sync.md)** — HubSpot list sync from an audience. - **[Accounts](../../core-concepts/accounts.md)** — the customer records audiences select. 
--- ## Audit Log The **Audit Log** is the workspace-level history of important configuration and integration actions: who did what, and when. Use it for governance (proving that only approved admins changed a sync), security investigations, and troubleshooting (“we thought the model saved—which user saved it last?”). ## How it works Each entry records an **activity** type (for example, connection created, model updated, CRM sync rule changed), the **user** who performed it (name and email when available), a **timestamp**, and a **data** payload with structured details about the change. The list is read-only; you cannot edit or delete audit rows from the UI. Activities cover the lifecycle of major workspace objects: **data models**, **data connections** (create, update, delete, authorize, and related actions), **CRM sync** configurations, **enrichments**, **audiences**, **dashboards** and charts, **agents** (create/update/delete), **LLM providers** and **LLM configuration**, **tasks**, **notes**, **labels**, **invitations**, **workspace** settings and resets, **MCP clients**, **reports**, and more. When assistants or integrations act on behalf of a user, the entry is still attributed to that user where the platform has user context. ## Viewing the workspace audit log 1. Open **Admin** in the main navigation (you need an admin role that includes access to admin settings). 2. Go to **Audit Log**. 3. Choose a **time range** at the top to narrow the window (for example, last week). 4. Scan columns **Performed on**, **Activity**, **Performed by**, **Email**, and **Data**. The **Data** column shows JSON with fields relevant to that activity (ids, names, or diffs depending on the event). If a user was later deactivated, the log can still show their historical actions; deactivated users may be labeled accordingly in the **Performed by** column. ## Other audit-style views Some objects—such as a **funnel**—also expose an **Audit** tab on their own detail page. 
That tab is scoped to that object, whereas the workspace **Audit Log** aggregates events across the whole tenant. ## Related - [Security](./security.md) - [Workspace management](./workspace-management.md) - [CRM sync overview](./crm-sync/overview.md) — many audit entries come from sync configuration changes --- ## Audience sync **Audience sync** keeps a **HubSpot static list** aligned with a **FunnelStory audience** so marketing workflows, enrollment triggers, and outbound sequences always target the same accounts your data science team defined. Use it when HubSpot—not only the FunnelStory UI—must reflect **live membership** of a saved segment. ## Prerequisites - A **HubSpot** data connection with list permissions. - An existing **audience** with stable match rules (see **[Audiences](../audiences/overview.md)**). - CRM sync permissions for the user configuring the binding. ## How it behaves When you bind an audience to HubSpot, FunnelStory stores a **sync configuration** that includes the audience ID and HubSpot **`list_id`** (creating or selecting the list during setup). **Sync runs**—manual or scheduled—evaluate the audience filter, then **add or remove** contacts/companies on the HubSpot side so the list mirrors FunnelStory membership. Membership updates follow the same **interval** you set on the sync (or **Manual** runs from **Sync Data**). ## Configuring from an audience 1. Open the audience from **Audiences**. 2. Open **CRM sync** or **HubSpot list** setup from the audience actions (wording matches your app version) and choose the HubSpot **connection**. 3. Pick an existing **static list** or **create** a new one; save the binding. 4. Run **Sync now** the first time from **Sync Data**, then rely on the configured **schedule** for ongoing refreshes. ## Related - **[HubSpot sync](./hubspot.md)** — deal/company/custom object field sync. - **[CRM sync overview](./overview.md)** — auditing and schedules. 
- **[Audience filters](../audiences/filters.md)** — how membership is computed. --- ## HubSpot sync **HubSpot sync** publishes FunnelStory **account** or **user** data into HubSpot so lists, workflows, and deal boards can branch on the same health, usage, and prediction fields your CS team sees internally. You need an authorized **[HubSpot connection](../../data-connections/crm/hubspot.md)** before creating a sync. ## Account sync targets For **account** sync, pick where FunnelStory should write: | Target | When to use it | |--------|----------------| | **Deal** (default) | You want one HubSpot **deal** per FunnelStory account, commonly paired with pipeline reporting. | | **Company** | You prefer fields directly on the **company** record. | | **Custom object** | Your operations model stores customer telemetry on a HubSpot **custom object**—select the object type ID your admin provisioned. | The form labels these as **Deal**, **Company**, and **Custom Object** in the HubSpot sync type selector. ## Shared HubSpot settings - **Property group** — Name the HubSpot **property group** FunnelStory should create or reuse; managed fields stay grouped for easy governance. - **Stage filter** (account sync) — Optionally restrict which FunnelStory accounts participate based on their **funnel stage** before anything is upserted in HubSpot. - **Schedule** — Same interval ladder as Salesforce (**1 Hour** … **24 Hour**) or **Manual**. - **Field sections** — Choose **account properties**, **activities**, **subscriptions**, **signals**, and **metrics** exactly like the **[Sync properties](./sync-properties.md)** reference describes. ## Associations **Deal** syncs attempt to **associate** deals with existing HubSpot **companies** when a match is found so CRM users see company context on the deal record. ## User sync **User sync** maps FunnelStory **users** to HubSpot **contacts**, carrying user properties, activities, and selected **subscription** fields from the parent account. 
Pick the same connection, property group, schedule, and field lists—scoped to user entities. ## Operational notes - If someone **deletes** a HubSpot record that FunnelStory created, the next sync may **not automatically recreate** it depending on CRM state—treat deletions as intentional and re-save or reconfigure if you need a fresh object. - For **audience-driven lists**, use **[Audience sync](./audience-sync.md)** in addition to (or instead of) broad account sync when Marketing only needs membership. ## Related - **[CRM sync overview](./overview.md)** — scheduling and auditing. - **[Audience sync](./audience-sync.md)** — HubSpot lists tied to audiences. - **[HubSpot connection](../../data-connections/crm/hubspot.md)** — connecting HubSpot to FunnelStory. --- ## CRM sync **CRM sync** pushes selected **FunnelStory intelligence** into **Salesforce** or **HubSpot** so revenue teams can report, automate, and message using the same scores and fields they already work with in the CRM. Use it when Marketing Ops or Sales Ops wants **FunnelStory as the system of analysis** but **the CRM as the system of engagement**. ## What sync does (and does not do) - **Outbound** — FunnelStory **writes** configured properties, metrics, activities, signals, subscriptions, and **traits** to CRM records on a schedule you choose (or **manually**). - **Not a full data warehouse** — Sync sends **curated slices** you opt into per configuration; it does not replace your CRM’s native objects or every column from the **Account model**. - **Auditable** — Each **sync run** records start/end time, success, counts, and errors so you can troubleshoot with your admin and **[Audit log](../audit-log.md)** entries. Inbound CRM data still flows through your **data connections** and **data models** separately; CRM sync is the **return path** for FunnelStory-derived fields. 
## Supported systems | CRM | Account-oriented sync | User-oriented sync | |-----|-------------------------|---------------------| | **Salesforce** | Yes — see **[Salesforce sync](./salesforce.md)** | Yes | | **HubSpot** | Yes — see **[HubSpot sync](./hubspot.md)** | Yes | Both require an **active, authorized** **[Salesforce](../../data-connections/crm/salesforce.md)** or **[HubSpot](../../data-connections/crm/hubspot.md)** data connection in the same workspace. ## Where to configure sync 1. Open **Sync Data** (path **`/syncs`**). 2. Choose **Add New Configuration** (or edit an existing one). 3. Pick **Account sync** or **User sync**, the **connection**, optional **funnel stage filter** (HubSpot account sync), **property group** naming (HubSpot), and the **fields** to include. 4. Set **schedule** — intervals such as **1h**, **3h**, … **24h**, or **Manual** only when you want human-triggered runs. 5. **Save**, then **Sync now** (or wait for the next interval) to push updates. ## Audience-specific sync Audiences can drive **HubSpot static lists** through a dedicated sync binding—see **[Audience sync](./audience-sync.md)**. ## When you need more than scheduled field sync Built-in CRM sync is ideal when you want **stable, mapped fields** on a cadence everyone can govern in RevOps. Some situations need **branching logic, lookups across the workspace, LLM judgment, or several CRM-side actions in one flow**—for example “when renewal risk crosses a threshold, summarize the account, update a custom field, and only open an opportunity when the model says it is urgent.” For that, use **[AI agents](../../ai/agents-overview.md)** together with CRM-oriented **functions** (see the **[functions reference](../../ai/agents-functions-reference.md)**). 
Agents are **graphs of steps** with **triggers** (schedule, signals, activities, manual runs, and more); they can pull FunnelStory intelligence and push updates through the same integrations your team already trusts, in richer patterns than a single sync configuration object allows. Use **CRM sync** for **broad, low-lift coverage** of agreed fields; use **agents** when a **specific playbook** should run end to end. ## Related - **[Sync properties](./sync-properties.md)** — what can be mapped outbound. - **[Salesforce sync](./salesforce.md)** and **[HubSpot sync](./hubspot.md)** — CRM-specific behavior. - **[Audiences](../audiences/overview.md)** — segments you can attach to HubSpot list sync. - **[Getting started with agents](../../ai/agents-getting-started.md)** — build playbooks that combine intelligence and CRM actions. --- ## Salesforce sync **Salesforce sync** writes FunnelStory **account** or **user** intelligence into Salesforce so reports, list views, and flows can consume the same metrics your team already trusts in FunnelStory. Configure it after you have a working **[Salesforce connection](../../data-connections/crm/salesforce.md)** with appropriate admin permissions. ## Sync types | Type | Best for | |------|-----------| | **Account sync** | Pushing account-level properties, activities, subscriptions, signals, and metrics to **Account** (or configured) records. | | **User sync** | Pushing user-level properties and activities, plus selected **subscription** attributes scoped to each user’s **account**. | Pick the type in **Sync Data → Add New Configuration → Select sync type**. ## Configuring an account sync 1. Open **`/syncs`** → **Add New Configuration**. 2. Choose **Account sync**. 3. Select your **Salesforce data connection**. 4. Choose a **schedule** (fixed intervals such as **1 Hour** … **24 Hour**, or **Manual**). 5. 
Under each section (**Account properties**, **Account activities**, **Account subscriptions**, **Account signals**, **Account metrics**), tick the fields you want outbound—only populated options appear. 6. **Save**, then trigger **Sync now** (or wait for the interval). First successful runs typically create or populate the **Salesforce custom structures** your implementation agreed on (often namespaced so admins can find FunnelStory-owned objects quickly). Use Salesforce **Setup → Object Manager** to verify fields after the first run. ## Configuring a user sync Follow the same entry path, choose **User sync**, and select **user properties**, **user activities**, and any **account subscription** fields that should ride along for each contact’s parent account. Save and run like account sync. ## Runs, errors, and auditing Each configuration shows **last sync run** status. Failed runs surface error text in the run history so you can correct mappings or Salesforce permissions. Important configuration changes and runs also appear in the workspace **[Audit log](../audit-log.md)**. ## Related - **[CRM sync overview](./overview.md)** — concepts shared with HubSpot. - **[Sync properties](./sync-properties.md)** — field categories in detail. - **[Salesforce connection](../../data-connections/crm/salesforce.md)** — OAuth and prerequisites. --- ## Sync properties Each **CRM sync** configuration lists which **FunnelStory fields** to copy into Salesforce or HubSpot. This page summarizes the **categories** you can opt into; the live form only shows resources your workspace actually has (models, funnels, metrics, signals, etc.). ## Account sync payloads When **sync type** is **Account**, you can typically include any combination of: | Category | What gets sent | |----------|----------------| | **Account properties** | Mapped **Account model** columns you select (including custom properties). 
| | **Account activities** | Chosen **product** or **non-product** activities summarized for the account. | | **Subscription properties** | Fields such as **subscription type**, **valid from**, **valid until** when subscriptions are modeled. | | **Signals** | Workspace **signals** you pick from the list. | | **Account metrics** | Configured **account metrics** (for example health components) with the aggregations you select in the form. | | **Account traits** | **Enrichment-backed** or derived **traits** enabled for sync (connect **[Apollo](../../data-connections/enrichment/apollo.md)** or **[Clearbit](../../data-connections/enrichment/clearbit.md)**—or your workspace’s enrichment source—so traits exist to map). | HubSpot **account** sync can also target **Deal**, **Company**, or a **custom object**—see **[HubSpot sync](./hubspot.md)**. ## User sync payloads When **sync type** is **User**, you select parallel categories for **users** (properties, activities) plus, where offered, **account-level subscription** slices for the **account the user belongs to**, so contact-level automation in the CRM still reflects contract context. ## Custom mapping and naming - **HubSpot** — You define a **property group name**; FunnelStory writes managed properties inside that group so CRM admins can distinguish FunnelStory-owned fields. - **Salesforce** — Field mappings follow your admin’s configuration for the sync template; custom objects or fields may be provisioned as part of onboarding—confirm with your FunnelStory implementation contact if you need a new target field. ## Choosing what to sync 1. Start with the **smallest** set of properties teams actively use in the CRM. 2. Add **metrics** and **signals** once stakeholders agree on definitions (to avoid noisy or confusing CRM fields). 3. Keep **traits** behind feature flags or pilot sandboxes until enrichment quality is validated. ## Related - **[CRM sync overview](./overview.md)** — schedules, runs, and navigation. 
- **[HubSpot sync](./hubspot.md)** and **[Salesforce sync](./salesforce.md)** — CRM-specific steps. - **[Data models overview](../../data-models/overview.md)** — where account and user fields originate. --- ## Configuring stages Each funnel is an **ordered list of stages** (minimum **three**, maximum **five** in the current product). Accounts are placed on a stage according to the funnel’s **evaluator** and each stage’s **filter**—see **[Evaluators](./evaluators.md)** and **[Stage conditions](./stage-conditions.md)**. ## Opening the stage editor From a funnel at `/funnels/:funnelId`, use **View Configuration** on the overview when stages are already wired, or stay in the initial **Configure Funnel Stages** layout after creating a funnel. The editor shows every stage as a column with a preview of its filters. ## Stage names and emojis Each stage has a **label** and optional **emoji** prefix (for example “🚀 Activation”). Names appear wherever funnel stages surface—**Accounts**, tooltips, and filters—so keep them short and recognizable for revenue and success teams. ## Ordering Stages are evaluated in **definition order** from first to last. That order is part of the journey semantics: - **Timeline** evaluator walks history in time while respecting that order. - **Last match** evaluator picks the **rightmost** stage in order whose filter matches **now**. Reorder only when you intend to change what “further along” means for the journey. ## Building filters per stage Click a stage column to open the **Configure Funnel Stage** drawer. Add rows with the filter builder, then **Save Filters** to persist that stage. Repeat until every stage you care about has the right gates. The filter types available depend on the funnel’s evaluator—see **[Stage conditions](./stage-conditions.md)**. 
## Minimum configuration before “overview” mode The product expects at least **three stages with real filter configuration** before treating the funnel as ready for the overview experience (where **Funnel Overview** and evaluator controls appear alongside a compact summary). Until then, you remain in the full-screen stage manager. ## Saving the funnel Use **Save Funnel** in the stage manager header to open the save dialog: set the funnel **name**, choose whether to **activate** (when allowed), and persist. Active funnels can still be edited; consider **refresh** after substantive filter changes—see **[Managing workspace funnels](./managing-workspace-funnels.md)**. ## Done and completion semantics With the **Timeline** evaluator, when an account has satisfied every stage in order, FunnelStory treats the account as having completed the journey (**Done** semantics in evaluation). **Last match** does not use the same ordered-completion story; see **[Evaluators](./evaluators.md)**. ## Related - **[Stage conditions](./stage-conditions.md)** — filter groups and allowed field types - **[Evaluators](./evaluators.md)** — how stage order interacts with placement - **[Managing workspace funnels](./managing-workspace-funnels.md)** — activate, refresh, delete --- ## Evaluators # Funnel evaluators (timeline vs last match) An **evaluator** tells FunnelStory **how to place an account on a workspace funnel’s stages** when stage filters can be true at different times, overlap, or change as data updates. This page applies to funnels in the **Funnels** section (`/funnels`). **[Product-level funnels](../../core-concepts/account-hierarchy/product-level-funnels.md)** use the same filter ideas for stages but **refresh placement from current matches** with **last-match-style** rules; see that guide for hierarchy-specific behavior. ## Where you set it When you create a funnel, pick **Timeline** or **Last match** in the create dialog. 
On an existing funnel, change it from **Funnel Overview** with the **Evaluator** dropdown (defaults to **Timeline** when unset). Changing evaluator affects which **filter types** are available when editing stages—see **[Stage conditions](./stage-conditions.md)**. ## Timeline (`timeline`) **What it does** - FunnelStory **replays the account’s history** (activities and other signals the funnel uses) in **chronological order** and drives a **state machine** so stages advance in sequence. - **Order matters:** progression respects your stage order as the replay walks forward in time. - **Completed stages stick:** once a stage is recorded as **completed** for that account, it **remains completed** even if the same filter would **fail later** (for example a usage metric drops after the account already qualified). That keeps the journey faithful to what actually happened. - **Current stage** is the **first stage that is not yet complete** at evaluation time. - When **every** stage has completed, the account is shown in the funnel’s **Done** state (the built-in completion stage). **When to use it** - Onboarding or **strict sequences** (“must complete A before B”). - Motions where **history** matters more than “what would pass a filter if we only looked at today.” - Reporting where you do **not** want accounts to “fall back” to an earlier stage just because a later-stage metric cooled off. **Stage filters** Only **metrics**, **activities**, and **account properties** are available—inputs compatible with historical replay. ## Last match (`last_match`) **What it does** - FunnelStory **does not** run the full historical replay to decide where the account sits **right now**. - It evaluates each stage’s filter against the account’s **current** data, then picks the **rightmost** stage in your order whose filter **matches** (the **latest** qualifying stage). - If **no** stage matches, the account is **not in the funnel** for that evaluation. 
Unlike **Timeline**, there is no built-in **Done** state that means “finished every stage in order”—placement is always “**rightmost stage that matches today**.” - If data changes so only an **earlier** stage’s filter matches, the account can appear to **move backward** relative to a strict historical story. **When to use it** - Stages represent **current maturity or health** (tiers, risk bands) where it is normal for an account to **move up or down** as metrics change. - You want the funnel view to answer: **“Which stage definition fits this account today?”** not **“What path did it take through time?”** **Stage filters** The builder exposes the broader palette (signals, subscriptions, account metrics, funnel stage/state helpers, etc.) in addition to the timeline-compatible types. ## Side-by-side | Question | Timeline | Last match | |----------|----------|------------| | Default if unset | Yes | No | | Uses chronological replay | Yes | No | | How “current stage” is chosen | First **incomplete** stage in order | **Rightmost** stage whose filter **currently** matches | | Earlier stage already completed, later filter stops matching | Account can sit in **next incomplete** or **Done**; earlier completion **stays** | Account moves to whatever matches **now** (often an **earlier** stage if only that filter passes) | | All stages satisfied in order | **Done** | Still based on **rightmost match**, not a special “finished” flag | ## Product-level funnels (reminder) Product funnels documented under **[Product-level funnels](../../core-concepts/account-hierarchy/product-level-funnels.md)** evaluate on refresh with **current data** and **rightmost** placement—the same placement rule as **Last match** here—even if a stored evaluator field is shown in configuration. 
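The two placement rules above can be sketched in a few lines of Python. This is illustrative only, not FunnelStory's implementation: stage filters are reduced to predicates over account snapshots, and history is assumed pre-sorted by time.

```python
from typing import Callable, Optional, Sequence

# Hypothetical shape: a stage filter as a predicate over an account snapshot
Stage = Callable[[dict], bool]

def timeline_place(stages: Sequence[Stage], history: Sequence[dict]) -> Optional[int]:
    """Timeline: replay history chronologically; completed stages stick.

    Returns the index of the first incomplete stage, or None for Done.
    """
    current = 0
    for snapshot in history:                    # events assumed sorted by time
        while current < len(stages) and stages[current](snapshot):
            current += 1                        # stage completed; it stays completed
    return current if current < len(stages) else None   # None == Done

def last_match_place(stages: Sequence[Stage], today: dict) -> Optional[int]:
    """Last match: rightmost stage whose filter matches the current snapshot."""
    placed = None
    for i, stage in enumerate(stages):
        if stage(today):
            placed = i
    return placed                               # None == not in the funnel
```

Run against a two-stage funnel where a usage metric rises and later falls, `timeline_place` returns Done (completion sticks) while `last_match_place` places the account back on the first stage, matching the side-by-side table above.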
## Related - **[Overview](./overview.md)** - **[Stage conditions](./stage-conditions.md)** - **[Managing workspace funnels](./managing-workspace-funnels.md)** - **[Product-level funnels](../../core-concepts/account-hierarchy/product-level-funnels.md)** --- ## Funnel analytics Funnel analytics in FunnelStory are the **counts, conversion views, and timing hints** you get once accounts are being evaluated against a funnel—not a separate “analytics SKU.” You read them from the **Funnels** experience and from **Accounts** when a funnel is active for the relevant product. ## Funnel list and refresh On **Funnels** (`/funnels`), each row reflects the funnel’s **active** or **draft** status. Use **refresh** on a funnel when you need evaluation to catch up after large data loads or configuration changes so downstream counts stay credible. ## Per-funnel overview Inside `/funnels/:funnelId`, **Funnel Overview** summarizes: - **Product** tied to the definition - **Evaluator** (**Timeline** or **Last match**) with an inline control to change it - **Status** (active vs draft) - How many stages have **configured filters** - **Created by** metadata Open **View Configuration** to return to the stage editor when you need to adjust gates. ## Accounts experience Where your workspace surfaces funnel context on **Accounts** (layout varies by feature flags), you typically see: - **Stage columns or selectors** so reps can sort and scan accounts by journey position. - **Stage headers** with account counts; hover tooltips often include a **conversion rate** between stages derived from those counts. - A compact **Sankey-style funnel chart** per account row that visualizes progression across stages using the same stage ordering as the definition. Together, these elements answer “how many accounts are in each stage?” and “how sharp are the transitions between stages?” without exporting to a BI tool. 
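One plausible way a stage-to-stage conversion rate can be derived from stage counts, sketched in Python. The definition of "reached" and the data shapes here are assumptions for illustration, not the product's exact formula; rely on in-product tooltips for precise definitions.

```python
from typing import Optional, Sequence

def conversion_rates(placements: Sequence[Optional[int]], n_stages: int) -> list:
    """Derive stage-to-stage conversion from placements (illustrative).

    placements: each account's current stage index, or None once Done.
    reached(i) counts accounts whose journey has reached stage i or beyond.
    """
    def reached(i: int) -> int:
        return sum(1 for p in placements if p is None or p >= i)

    rates = []
    for i in range(n_stages - 1):
        denom = reached(i)
        rates.append(reached(i + 1) / denom if denom else 0.0)
    return rates
```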
## Timing and health cues For **Timeline** funnels, FunnelStory can still compute **duration-oriented** hints (for example whether an account is tracking **on time**, **slow**, or **stuck** relative to historical completion patterns for a stage) when that data is exposed in your workspace’s accounts UI. **Last match** funnels emphasize **current** placement; treat timing labels as secondary to “which stage matches today?” Exact labels and charts can evolve between releases; rely on the in-product tooltips and legends for precise definitions. ## Related - **[Managing workspace funnels](./managing-workspace-funnels.md)** — refresh and activation - **[Evaluators](./evaluators.md)** — what the numbers mean under timeline vs last match - **[Dashboard & insights → Accounts view](../../dashboard-insights/accounts-view.md)** — working with accounts and funnel filters --- ## Managing workspace funnels The **Funnels** page is the control plane for journey definitions: see everything in one table, narrow by product, and drill into a single funnel to edit stages or change its **evaluator**. ## Prerequisites - **Products** configured in your workspace so each funnel can be associated with a catalog product (see **[Products model](../../data-models/products.md)**). - Models and data behind your stage filters (activities, metrics, properties) set up and refreshing as usual. ## The funnel list 1. Open **Funnels** (`/funnels`). 2. Use the **product** control to filter the table to one product or view **all** funnels. 3. Each row shows the funnel name, product, status, and actions such as open, **refresh**, activate/deactivate, and delete. You can maintain **multiple funnel records per product**. That is useful for experiments, seasonal motions, or handoff between teams—only one of them should be **active** for a given product at a time (see below). ## Creating a funnel 1. Click **New Funnel** (or the empty-state action). 2. 
In the dialog, **select a product**—this association cannot be skipped for the current multi-product experience. 3. Choose the **evaluator** (**Timeline** or **Last match**). You can change it later on the funnel overview as well. 4. Click **Configure manually**. FunnelStory creates the funnel with a **default stage template** (five stages such as Acquisition through a purchase-oriented stage) so you can rename and add filters quickly. You land on the funnel detail screen. If stages are not yet configured with filters, the product keeps you in the **Configure Funnel Stages** experience until enough stages are ready. ## Saving, naming, and activation When you **Save Funnel** from the stage editor: - Set a clear **name** (for example “Enterprise trial motion”). - If **no other funnel is active** for that product, you can turn on **Activate this Funnel** in the save dialog. - If another funnel for the same product is already active, the activation switch is not offered—you must **deactivate** the other funnel first from the list or detail flow, then activate the one you want. After saving, FunnelStory may **activate** the funnel and navigate you back to the funnel list, depending on your choices in the dialog. ## Active vs draft | State | Meaning | |-------|--------| | **Active** | This funnel’s stages and evaluator drive the primary journey experience for its product on **Accounts** and related surfaces (subject to your workspace layout). | | **Draft** | Editable definition that does not replace the active funnel until you activate it (and deactivate any competitor for that product). | Use **deactivate** on a row when you need to pause a motion without deleting its configuration. ## Refresh **Refresh** re-runs evaluation for that funnel so account placement and stage stats reflect the latest ingested data. Run a refresh after meaningful configuration changes or if results look stale relative to your sync cadence.
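The activation rule above (one active funnel per product; deactivate before switching) can be sketched as follows. The class and method names are hypothetical, not FunnelStory's API; the sketch only mirrors the behavior the UI enforces.

```python
class FunnelRegistry:
    """Illustrative sketch of the one-active-funnel-per-product rule."""

    def __init__(self):
        self.active = {}  # product_id -> funnel_id

    def can_activate(self, product_id: str, funnel_id: str) -> bool:
        current = self.active.get(product_id)
        return current is None or current == funnel_id

    def activate(self, product_id: str, funnel_id: str) -> None:
        if not self.can_activate(product_id, funnel_id):
            # mirrors the UI: the activation switch is not offered
            raise ValueError("another funnel is already active for this product")
        self.active[product_id] = funnel_id

    def deactivate(self, product_id: str) -> None:
        self.active.pop(product_id, None)  # the funnel reverts to draft
```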
## Removing a funnel **Delete** removes the definition from the workspace. If you might need the structure again, prefer **deactivate** and keep the draft until you are sure. ## Related - **[Overview](./overview.md)** — concepts and vocabulary - **[Configuring stages](./configuring-stages.md)** — stage editor and ordering - **[Evaluators](./evaluators.md)** — timeline vs last match - **[Dashboard & insights → Accounts view](../../dashboard-insights/accounts-view.md)** — funnel filters and columns --- ## Overview (Funnels) # Funnels **Funnels** (also called **journey funnels** in the product) are ordered stages that describe how **accounts** move through your customer journey for a **catalog product**. You define each stage with filters on activities, metrics, account properties, and more; FunnelStory evaluates those filters and shows where accounts sit in the journey. Use funnels when you want a shared definition of “where is this account in our motion?” that powers **Accounts** views, filters, and reporting—without rebuilding the same logic in spreadsheets or one-off reports. ## Multiple funnels per workspace Unlike older versions of FunnelStory that supported a single workspace funnel, you can maintain **many funnel definitions**. Each funnel is tied to a **product** from your **Products** model. Teams often keep **draft** funnels while iterating, then **activate** the one that should drive the live Accounts experience for that product. Only **one funnel per product** can be **active** at a time. That matches how Accounts and related surfaces pick the journey to display. ## Where you work in the app Open **Funnels** in the navigation (`/funnels`). You get a table of funnels, optional **product** filter, and actions to create, open, refresh, activate or deactivate, and remove definitions. Opening a funnel takes you to its detail route (`/funnels/:funnelId`) for overview, evaluator selection, and stage configuration.
Workspaces without **multi-product and account hierarchy** features enabled may still see an earlier single-funnel layout until those capabilities are rolled out for your tenant. ## How this relates to “product-level funnels” **[Product-level funnels](../../core-concepts/account-hierarchy/product-level-funnels.md)** (under **Core concepts → Account hierarchy**) are a separate feature for hierarchy-heavy workspaces: they are scoped to a product inside the hierarchy product experience and follow their own activation and refresh rules. The guides in this section describe the main **Funnels** area—the journey definitions that pair with **evaluators** (**Timeline** vs **Last match**) as documented in **[Evaluators](./evaluators.md)**. ## Related - **[Managing workspace funnels](./managing-workspace-funnels.md)** — list, create, activate, refresh - **[Evaluators](./evaluators.md)** — timeline vs last match placement - **[Account hierarchy → Product-level funnels](../../core-concepts/account-hierarchy/product-level-funnels.md)** — hierarchy-scoped funnels - **[Dashboard & insights → Accounts view](../../dashboard-insights/accounts-view.md)** — where funnel stages surface --- ## Stage conditions A **stage condition** is the **filter** attached to a funnel stage. When filters pass, an account can sit on that stage (subject to the funnel’s **evaluator** rules). Conditions are built with the same structured filter experience used elsewhere in FunnelStory—groups of rules combined with **AND** / **OR** logic inside the builder. ## What you can filter on depends on the evaluator In the funnel stage drawer, FunnelStory passes different **filter palettes** based on the funnel’s evaluator: | Evaluator | Stage filter types (high level) | |-----------|-----------------------------------| | **Timeline** | **Metrics**, **activities**, and **account properties** only. 
| | **Last match** | Broader palette aligned with audience-style filters—for example **signals**, **subscriptions**, **account metrics**, **health tags**, **funnel stages** / **funnel states**, and more—plus the timeline-compatible types. | **Why the split:** the **Timeline** evaluator replays history with stricter semantics; it only consumes filter types that are meaningful for that replay. **Last match** evaluates the account’s **current** snapshot, so additional dimensions are available. If you switch evaluators after building filters, revisit each stage: types that are incompatible with **Timeline** should be removed or simplified so evaluation stays deterministic. ## Condition structure - **Groups** — Organize rules into AND/OR combinations the way you would for audiences or account list filters. - **Empty stages** — A stage with **no** configured filter conditions does not count as “configured” for readiness checks and does not contribute useful gating. - **Save path** — Each stage saves independently via **Save Filters** in the drawer. ## Practical tips - Start with the **minimum** rules that truly define “this stage is achieved,” then add refinements—dense filters are harder to reason about in **Last match** and can cause unexpected movement when data changes. - Align metric windows and activity definitions with how your **models** refresh so accounts are not stuck waiting for stale inputs. - When a stage is only for reporting nuance inside a broader step, consider merging steps instead of stacking many near-duplicate filters. ## Related - **[Evaluators](./evaluators.md)** — timeline vs last match behavior - **[Configuring stages](./configuring-stages.md)** — stage editor workflow - **[Platform → Audiences → Filters](../audiences/filters.md)** — similar filter concepts for segments --- ## MCP Authentication FunnelStory authenticates MCP clients using OAuth 2.0 with PKCE. 
This is the same browser-based authorization flow used across enterprise software — your credentials stay in FunnelStory, and the client receives a scoped access token. ## The Authorization Flow When an MCP client connects for the first time: 1. The client fetches OAuth metadata from `/.well-known/oauth-authorization-server` to discover FunnelStory's authorization endpoints 2. The client registers itself using Dynamic Client Registration (DCR) — no manual setup needed 3. The client redirects you to FunnelStory's authorization page 4. You log in (if not already) and approve the access request 5. The client exchanges the authorization code for an access token 6. The client uses that token for all subsequent requests to `/api/mcp` After the initial setup, the client refreshes its token automatically. You won't need to re-authorize unless you revoke access. ## PKCE PKCE (Proof Key for Code Exchange) protects the authorization flow in environments where a static client secret can't be kept confidential — like desktop apps. Even if an authorization code is intercepted mid-flow, it can't be exchanged for a token by anyone other than the client that started the flow. ## Revoking Access To remove a client's access to your workspace: 1. Go to the profile menu (avatar) → **MCP Clients** 2. Find the client and click the delete icon The client's tokens are invalidated immediately. ## For Custom MCP Clients If you're integrating a custom client with FunnelStory's OAuth server: | Endpoint | Standard | |----------|---------| | `/.well-known/oauth-authorization-server` | RFC 8414 | | `/.well-known/oauth-protected-resource` | RFC 9728 | Dynamic Client Registration is supported — clients discover the registration endpoint from the server metadata. ## Next Steps - [Getting Started](./getting-started.md) — connect your MCP client - [Available Tools](./available-tools.md) — what authenticated clients can access --- ## Available MCP Tools FunnelStory exposes four tools through the MCP server.
| Tool | What it does | |------|-------------| | `query_semantic_db` | Run SQL queries against your workspace data | | `get_flows` | List flows or retrieve a specific flow's configuration | | `configure_flow` | Create or update a flow | | `run_flow` | Execute a flow and return its output | --- ## query_semantic_db Runs a SQL query against FunnelStory's semantic database — a structured view of your workspace data including accounts, metrics, predictions, activities, meetings, notes, tasks, tickets, and needle movers. Your AI assistant can read the schema first (available as the `file://semantic/schema.sql` resource) to understand what tables and columns are available before querying. **Input:** | Field | Type | Description | |-------|------|-------------| | `query` | string | SQL query to run | --- ## get_flows Returns flow metadata for the workspace. Pass a `flow_id` to get the full configuration of a specific flow — including its trigger, input schema, and graph. Omit it to get a summary list of all flows. **Input:** | Field | Type | Description | |-------|------|-------------| | `flow_id` | string | *(Optional)* ID of the flow to retrieve | --- ## configure_flow Creates a new flow or updates an existing one. Pass `flow_id` to update; omit it to create. Flows saved with `draft: true` won't trigger automatically — useful while you're still building. Read the `file://flow/guide.md` resource for the complete flow authoring reference. 
**Input:** | Field | Type | Description | |-------|------|-------------| | `flow_id` | string | *(Optional)* Flow to update; omit to create | | `name` | string | Display name | | `draft` | boolean | If true, saves without publishing | | `trigger_config` | object | When and how the agent runs — see [Triggers](../../ai/agents-triggers.md) (`manual`, `schedule`, `interval`, `activity`, `signal`, `needle_mover`, `conversation`, `query`) | | `input_schema` | array | Input fields the flow accepts | | `config` | object | Flow graph with `entrypoint` and `steps` | --- ## run_flow Executes a flow by ID and waits for it to complete. If the flow sets a response variable, that value is returned to the caller. **Input:** | Field | Type | Description | |-------|------|-------------| | `flow_id` | string | ID of the flow to run | | `input` | object | *(Optional)* Input values, merged with flow defaults | --- ## Next Steps - [Examples](./examples.md) — example prompts using these tools - [Getting Started](./getting-started.md) — connect your MCP client - [AI Agents](../../ai/agents-overview.md) — product concepts, triggers, and examples - [Vibe coding](../../ai/agents-vibe-coding.md) — using MCP to author agents end-to-end --- ## MCP Examples Example prompts for use with Claude Desktop, Cursor, or any MCP-compatible client connected to FunnelStory. ## Account Preparation > "Pull up everything I need to know about Acme Corp before my call tomorrow — health score, prediction, open renewal date, recent needle movers, and any notes from the last 90 days." Pulls account data, predictions, needle mover history, and recent activity in a single pass. Useful for call prep without opening multiple tabs. --- > "Which accounts are predicted to churn, have a renewal date in the next 60 days, and haven't had a meeting logged in the last 30 days?" Crosses predictions, account properties, and activity logs. Returns a prioritized list for immediate outreach. 
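Under the hood, a prompt like the second one typically becomes a single `query_semantic_db` call. Below is a sketch of the tool input in Python: the table names follow the tools reference, but the column names, values, and SQL dialect are assumptions, so have your assistant read the `file://semantic/schema.sql` resource for the real schema before writing queries like this.

```python
import json

# Hypothetical columns and dialect; table names (accounts, predictions,
# meetings) come from the semantic database description.
sql = """
SELECT a.name, a.renewal_date
FROM accounts AS a
JOIN predictions AS p ON p.account_id = a.id
LEFT JOIN meetings AS m
       ON m.account_id = a.id
      AND m.occurred_at >= DATE('now', '-30 days')
WHERE p.outcome = 'churn'
  AND a.renewal_date <= DATE('now', '+60 days')
GROUP BY a.id, a.name, a.renewal_date
HAVING COUNT(m.id) = 0
ORDER BY a.renewal_date
""".strip()

# query_semantic_db takes a single string field named `query`
tool_input = {"query": sql}
payload = json.dumps(tool_input)
```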
--- ## Portfolio Analysis > "Break down my book of business by health score — how many accounts are above 70, how many are below 40, and what's the total ARR at risk?" Queries accounts with health scores and ARR, buckets them, and returns a distribution summary with dollar amounts. --- > "Show me all accounts in the Healthcare segment with a churn prediction that are missing our Advanced Analytics add-on. Group by CSM." Filters by segment, prediction outcome, and product ownership, then groups by CSM assignment. --- ## QBR Preparation > "I have a QBR with Globex next week. Summarize their health over the last 6 months — health score trend, needle movers flagged, metrics that improved or declined, and what our prediction says going into renewal." Returns a narrative summary backed by historical score data, the needle mover timeline, metric trends, and the current prediction. --- ## Working with AI Agents > "Run the renewal risk summary flow for Q2 renewals and show me the output." Calls `get_flows` to find the right flow, then `run_flow` to execute it. Returns the output without navigating to the Flows section in the UI. --- > "Create a flow that runs daily, finds accounts with a health score below 35, and sends a Slack message to #cs-alerts with the account name, health score, and CSM." Uses `configure_flow` to build a scheduled flow saved as a draft for review before publishing. --- ## Next Steps - [Available Tools](./available-tools.md) — reference for all four MCP tools - [Getting Started](./getting-started.md) — connect your AI assistant - [AI Agents overview](../../ai/agents-overview.md) — triggers, operations, and run lifecycle in the product - [Vibe coding](../../ai/agents-vibe-coding.md) — recommended workflow for assistants --- ## Getting Started with MCP This guide walks through connecting an MCP-compatible AI client to your FunnelStory workspace. 
## Prerequisites - A FunnelStory workspace with at least one account configured - An MCP-compatible client: [Claude Desktop](https://claude.ai/download), Cursor, or similar ## Connect Your Client **Get the MCP URL from FunnelStory.** Open the profile menu (avatar) → **MCP Clients**, choose **Create Client**, give it a name, then copy the **MCP URL** from the dialog (it uses your app's origin, e.g. `https://app.funnelstory.ai/api/mcp` in production). If your assistant asks for OAuth credentials, copy the **Client ID** and **Client secret** from the same dialog; the secret is one-time only. Many MCP clients also support **Dynamic Client Registration (DCR)** — after you paste only the MCP URL, the client may register itself automatically against FunnelStory's OAuth metadata without you typing a client secret. Use the UI-generated secret when the client does not support DCR or when you prefer a named, revocable client. The exact UI in the assistant varies by product: - **Claude Desktop**: **Settings → Developer → Edit Config** and add the server under `mcpServers` - **Cursor**: **Settings → MCP** and add a new server entry ## Authorize Access The first time you connect, your client opens a browser window for authorization. Log in to FunnelStory if prompted, then click **Authorize**. Your credentials stay in FunnelStory — the client receives a time-limited access token, not your password. ## Select a Workspace After authorizing, select which workspace the client should connect to. If you have access to multiple workspaces, choose the one you want to work with. To switch to a different workspace later, disconnect the server from your client and reconnect — the selection prompt will appear again. ## Verify the Connection Try a prompt to confirm everything is working: > "List the accounts with the lowest health scores in FunnelStory." If the client returns account data from your workspace, the connection is set up correctly. 
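As a concrete sketch, the Claude Desktop `mcpServers` entry described above might look like the following. The server key name and the `mcp-remote` bridge (a community shim often used to connect desktop clients to remote OAuth-protected servers) are assumptions; some clients accept the MCP URL directly instead.

```json
{
  "mcpServers": {
    "funnelstory": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "https://app.funnelstory.ai/api/mcp"]
    }
  }
}
```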
## Managing Clients On the **MCP Clients** page (profile menu → **MCP Clients**) you can **Create Client** (new URL + credentials bundle), see existing clients, and **revoke** access with the delete action so tokens stop working immediately. ## Next Steps - [Available Tools](./available-tools.md) — what your AI assistant can do once connected - [Authentication](./authentication.md) — details on how the OAuth flow works - [Examples](./examples.md) — example prompts to try --- ## MCP Server Overview FunnelStory's MCP server gives AI assistants — Claude, Cursor, or any [MCP-compatible](https://modelcontextprotocol.io) client — live access to your workspace data. Instead of copy-pasting account details into a chat window, your assistant can query FunnelStory directly and answer questions against your current data. ## What Is MCP? MCP (Model Context Protocol) is an open standard for connecting AI assistants to external tools and data sources. It defines a common interface so clients can discover what a system can do and call it without custom integration work on either side. The hosted MCP endpoint lives at `{your FunnelStory app origin}/api/mcp` (for example `https://app.funnelstory.ai/api/mcp`). ## How Connections Work FunnelStory uses **OAuth 2.0 with PKCE** for MCP access. Many clients also use **Dynamic Client Registration (DCR)** — they read server metadata, call the registration endpoint, and obtain a client id and secret without you filling in developer-console fields. Other clients only need the MCP URL plus credentials you create once in the product. How you connect depends on whether you use **personal** Claude (or another desktop client) or **Claude for Work** (enterprise) with org-managed connectors. ### Personal Claude and desktop clients 1. In FunnelStory, open the profile menu from your avatar in the sidebar, then choose **MCP Clients**. 2. Enter a short name (for example `claude`) and choose **Create Client**. 3. 
In the confirmation dialog, copy the **MCP URL** — it matches your environment (production, staging, or a private deployment). Add that URL in your assistant's MCP or connector settings. 4. If the client asks for OAuth **Client ID** and **Client secret**, copy them from the same dialog. The secret is shown **only once**; store it somewhere safe or create a new client if you lose it. The first time the assistant talks to FunnelStory, your browser opens for sign-in and workspace selection. After that, tokens refresh in the background until you revoke the client. ### Claude Enterprise (Claude for Work) End users usually cannot add arbitrary MCP URLs themselves. A **workspace admin** enables FunnelStory as an organization connector in the Claude admin console. The high-level steps are: 1. Open [Claude admin connectors](https://claude.ai/admin-settings/connectors). 2. Create a **custom Web** connector. 3. Set **Name** to `FunnelStory` and **URL** to your MCP endpoint (for production: `https://app.funnelstory.ai/api/mcp`). 4. Save the connector. Members typically enable it under **Customize → Connectors** in their own Claude settings. ## What Your AI Assistant Can Do Once connected, your AI assistant can: - **Query accounts** — health scores, prediction outcomes, renewal dates, and account properties - **Read metrics and dashboards** — pull chart data for any account or across your book of business - **Review needle movers** — see what signals have been flagged and their history - **Run SQL against your workspace** — the semantic database includes accounts, metrics, predictions, activities, meetings, notes, tasks, tickets, and more - **Work with AI Agents** — list, inspect, create, and execute declarative agents (flows) in your workspace ## Supported Clients Any MCP-compatible client works with FunnelStory. 
Common choices: - **Claude** — personal apps, Claude Desktop, or enterprise connectors depending on your plan - **Cursor** — AI-powered code editor - Any client implementing the [MCP spec](https://modelcontextprotocol.io/specification) ## Next Steps - [Getting Started](./getting-started.md) — connect your MCP client to FunnelStory - [Authentication](./authentication.md) — OAuth, PKCE, and DCR details - [Available Tools](./available-tools.md) — complete reference for each MCP tool - [Examples](./examples.md) — example prompts to get started - [AI Agents in the product](../../ai/agents-overview.md) — how agents work in FunnelStory beyond MCP --- ## AI agents and notifications **[AI agents](../../ai/agents-overview.md)** are the right tool when **built-in Slack and Teams notifications** are not enough: you need **multiple steps**, **queries across your workspace**, **LLM summarization or classification**, or **several outputs** from one run (for example Slack plus a task plus an email, with logic in between). Agents are **configurable automations**—a **directed graph** of steps with **triggers** that decide when a new **run** starts. You can build **simple** agents (a few steps) or **complex** ones (branching, loops, subflows) using the **agent builder** in the product or JSON with the same schema the runner understands. 
## How agents differ from Admin notifications | | **Admin → Notifications** (built-in) | **AI agents** | |--|----------------------------------------|-----------------| | **Authoring** | UI wizards for events, accounts, channels, templates | Canvas / JSON **flow** with `CALL`, `AGENT`, `CONDITION`, `LOOP`, and more | | **Best for** | “When **activity** or **signal** X fires for these accounts, post to Slack/Teams” | Custom logic, data pulls, LLM steps, multi-channel outcomes | | **Delivery** | Slack and Teams (built-in templates) | **Functions** you attach—commonly `slack.send_message`, Teams integrations via configured connections, `tasks.create`, `email.send`, CRM, datasets, `semantic.query`, and others ([functions reference](../../ai/agents-functions-reference.md)) | Use built-in notifications for **broad, low-lift coverage**; use agents when a **specific playbook** should run end to end. ## Notifying from an agent Agents do not use the same “notification configuration” object as **Admin → Notifications**. Instead, you add **steps** that call platform **functions**—for example a `CALL` to `slack.send_message` after a `semantic.query` or an `AGENT` step that decides whether to notify. Triggers can be **manual**, **scheduled**, **interval**, **activity**, **signal**, **conversation**, **needle mover**, **query**, and more ([triggers](../../ai/agents-triggers.md)). 
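A minimal sketch of what such an agent could look like as `configure_flow` input. The top-level fields (`name`, `draft`, `trigger_config`, `input_schema`, `config` with `entrypoint` and `steps`) come from the MCP tools reference; the step shapes, argument names, and templating syntax inside `config` are illustrative guesses, so consult the `file://flow/guide.md` resource for the real schema.

```json
{
  "name": "Churn-risk Slack alert",
  "draft": true,
  "trigger_config": { "type": "schedule" },
  "input_schema": [],
  "config": {
    "entrypoint": "find_accounts",
    "steps": [
      {
        "id": "find_accounts",
        "op": "CALL",
        "function": "semantic.query",
        "args": { "query": "SELECT name, health_score FROM accounts WHERE health_score < 35" },
        "next": "notify"
      },
      {
        "id": "notify",
        "op": "CALL",
        "function": "slack.send_message",
        "args": { "channel": "#cs-alerts", "message": "Accounts below 35: {{find_accounts.output}}" }
      }
    ]
  }
}
```

Saving with `"draft": true` follows the guidance above: the flow will not trigger automatically while you iterate.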
## Where to learn more - [AI Agents overview](../../ai/agents-overview.md) — mental model and when to use agents - [Getting started with agents](../../ai/agents-getting-started.md) - [Triggers](../../ai/agents-triggers.md) — when runs start - [Operations (step types)](../../ai/agents-operations.md) — `CALL`, `AGENT`, control flow - [Functions reference](../../ai/agents-functions-reference.md) — Slack, email, tasks, CRM, and the full catalog ## Related - [Notifications overview](./overview.md) — built-in Slack and Teams - [Slack notifications](./slack.md) - [Microsoft Teams notifications](./ms-teams.md) --- ## Microsoft Teams notifications Post FunnelStory **activity** and **signal** updates to **Microsoft Teams** channels your team monitors day to day. The flow matches Slack: each **notification configuration** wires events, account scope, Teams destinations, and templates together. ## Before you start Complete a **Microsoft Teams** [data connection](../../data-connections/communication/ms-teams.md) so FunnelStory can read teams and channels you authorize. ## Create a Teams configuration 1. Open **Admin → Notifications**. 2. Choose **New Configuration** and select **Teams**. 3. Complete each section: ### When these happen Select **activities** and **signals** that should trigger this configuration. Available signal types depend on what your workspace has configured (for example prediction- or audience-driven signals). ### In any of these accounts Use **Select accounts** to target specific accounts or apply a **filter** so only matching accounts generate notifications. ### Notify these channels Choose the **Teams connection**, then the **team** and **channel** (or channels) to receive messages. If you need **different Teams destinations for activities versus signals**, use the channel picker in the product to assign each side independently, or create **separate configurations** when policies require distinct routing. 
### Templates Pick **activity** and **signal** templates so posts stay readable in Teams. 4. **Save** the configuration. Add more configurations when different teams or regions need their own routing. ## Operational tips - Ensure the FunnelStory Teams app (or integration) is installed where your organization requires it, and that channel membership allows the posting identity to deliver messages. - Delivery follows the same refresh-driven timing as other channel notifications—see the [notifications overview](./overview.md). ## Related - [Notifications overview](./overview.md) - [Slack notifications](./slack.md) - [AI agents and notifications](./ai-agents.md) — multi-step or conditional delivery - [Microsoft Teams data connection](../../data-connections/communication/ms-teams.md) --- ## Notifications overview FunnelStory includes **built-in notifications** so your team hears about important **account activity** and **signals** in **Slack** and **Microsoft Teams**. You configure everything **in the product UI** under **Admin → Notifications**: which events matter, which accounts are in scope, where messages go, and how they read—no code required for the standard “when this happens, post there” pattern. That path is ideal when a **single rule** per configuration is enough: fixed channels, template-based messages, and refresh-aligned delivery. ## Built-in channel notifications | Channel | Where you configure | What you get | |--------|---------------------|--------------| | **Slack** | **Admin → Notifications** → new **Slack** configuration | Posts to Slack channels you choose | | **Microsoft Teams** | **Admin → Notifications** → new **Teams** configuration | Posts to Teams channels you authorize | See [Slack notifications](./slack.md) and [Microsoft Teams notifications](./ms-teams.md) for step-by-step setup (including connecting Slack or Teams under **Configure → Connections** first). 
## When you need more than built-in notifications Some situations need **branching logic, data lookups, LLM judgment, or several actions in one flow**—for example “if renewal risk crosses a threshold, summarize the account in Slack, open a task, and only email the CSM when the model says it’s urgent.” For that, use **[AI Agents](../../ai/agents-overview.md)**. Agents are configurable automations with **triggers** (schedule, events, SQL rows, manual, and more); they can call the same integrations in richer ways, including **Slack**, **Teams**, **tasks**, and **email**, plus CRM, datasets, and semantic queries. Read [AI agents and notifications](./ai-agents.md) for how this fits next to built-in notifications. ## Who can configure built-in notifications Workspace **Admin** access is required for **Admin → Notifications**. If you do not see it, ask a **Super Admin** or **Admin** in your organization. ## Timing Built-in channel notifications are sent **after** FunnelStory has processed relevant account data—typically within a short window following a **model refresh**, not the instant an upstream system changes. ## Related - [Slack notifications](./slack.md) - [Microsoft Teams notifications](./ms-teams.md) - [AI agents and notifications](./ai-agents.md) - [Signals](../../dashboard-insights/signals.md) — many notification events are signal-driven - [Workspace management](../workspace-management.md) — **Admin** vs **Configure** --- ## Slack notifications Send FunnelStory **activity** and **signal** updates to **Slack** so CSMs and leadership see changes in the tools they already live in. Each **notification configuration** ties together events, account scope, channels, and message templates. ## Before you start Add a **Slack** [data connection](../../data-connections/communication/slack.md) for the workspace and complete OAuth so FunnelStory can list channels and post on your behalf. ## Create a Slack configuration 1. Open **Admin → Notifications**. 2. 
Choose **New Configuration** (or equivalent) and select **Slack**. 3. Work through the panels in order: ### When these happen Choose one or more **activities** and **signals** that should fire this configuration. Signals can include types your workspace has enabled—such as **predictions**, **audiences**, or **subscription tags**—alongside standard signal rules. ### In any of these accounts Limit delivery with **Select accounts**: all selected accounts, or a **saved filter** so only matching accounts trigger posts. ### Notify these channels Pick the **Slack connection** (if you have more than one) and the **channel(s)** where messages should appear. You can send **activities** and **signals** to **different channels** within the same configuration when you need separation (for example, `#signals` vs `#usage-alerts`). ### Templates Choose an **activity template** and a **signal template**. Templates control formatting and how much context each message includes. 4. **Save** the configuration. You can create **multiple** Slack configurations—for example, different channels or account scopes for different teams. ## Operational tips - Confirm the Slack app is invited to private channels before expecting deliveries there. - Changes to configurations are reflected on the next eligible runs after data refresh; see the [notifications overview](./overview.md) for timing. ## Related - [Notifications overview](./overview.md) - [Microsoft Teams notifications](./ms-teams.md) - [AI agents and notifications](./ai-agents.md) — multi-step or conditional delivery - [Slack data connection](../../data-connections/communication/slack.md) --- ## Renari **Renari** is FunnelStory's AI copilot — a conversational agent that reasons directly against your Customer Intelligence Graph. Ask it anything about your accounts, your portfolio, or your customers, and it returns grounded, sourced answers drawn from your actual data, not generic LLM guesses. 
Renari is available everywhere in FunnelStory: from the global **Ask Anything** interface, inline on every account, and embedded directly in Needle Movers and Predictions. ![Renari Ask Anything interface showing conversation history, suggested prompts, and the main query input](/img/renari/ask-anything.png) ## What You Can Ask Renari answers questions across the full breadth of your customer data. Common use cases include: **Account intelligence** - "Which accounts have renewals in the next 50 days and haven't had a meeting this quarter?" - "Summarize the recent activity for Acme Corp." - "Who is the champion at Globex and when did we last talk to them?" **Portfolio analysis** - "Show me all accounts at risk with high ARR." - "Which accounts have had the most feature requests in the past 30 days?" - "How many accounts are in the Expansion stage?" **Trend and signal queries** - "What are the most common competitor mentions across my book of business this month?" - "Which accounts have had a personnel change in the last 60 days?" **Drafting and action** - "Draft a QBR summary for Initech." - "Write a follow-up email for the pricing concern raised by Umbrella Corp." - "What should I do before the renewal call with Hooli next week?" ## Show Your Work Renari shows its reasoning in real time. As it processes your question, you can see which tools it is using, what data it is pulling, and what steps it is taking — before the final answer arrives. This transparency lets you follow along and catch anything that doesn't look right. ## Verifiable Answers with Citations Every answer Renari generates is traceable back to its sources. When Renari uses data from a specific conversation, ticket, meeting, or CRM record, it cites that source inline. Clicking a citation opens the original record — so you can verify any claim directly against the data it came from. 
This makes Renari's outputs safe to act on: you're not taking an AI's word for it, you're reading a synthesis with the receipts attached. ## Playbook-Guided Responses For complex, multi-step questions, Renari uses **playbooks** — structured sequences of steps that encode best practices for common tasks. When you ask Renari to analyze an at-risk account or prepare for a renewal, it retrieves the appropriate playbook and follows it, rather than improvising from scratch. Playbooks can be configured for your workspace, ensuring Renari follows your team's processes and priorities — not just generic advice. ## Conversation Threads Renari maintains full conversation history. You can ask a follow-up question and Renari will remember the context of the exchange. Conversations are organized by date in the sidebar and can be revisited at any time. This makes Renari useful for ongoing investigations — return to a thread, pick up where you left off, and continue building context without re-explaining the situation. ## Suggested Prompts The **Ask Anything** interface surfaces suggested prompts based on common workflows — like analyzing accounts with upcoming renewals or identifying feature request patterns. These act as starting points you can run as-is or customize for your specific needs. ## Where Renari Appears Renari is available throughout FunnelStory: | Location | What it's pre-loaded with | |----------|--------------------------| | **Global Ask Anything** | Your full workspace — all accounts, all data | | **Account detail** | The full context of that specific account | | **Needle Mover detail** | The signal, its sources, and the account | | **Predictions** | The account's health score and driving factors | When you open Renari from within an account or a Needle Mover, it already knows what you're looking at — you don't need to re-state the context before asking your question. 
## Related - [Needle Movers](../core-concepts/needle-movers.md) — Renari is available inline on every Needle Mover - [Predictions](../core-concepts/predictions.md) — ask Renari for action recommendations based on prediction scores - [Customer Intelligence Graph](../core-concepts/customer-intelligence-graph.md) — the data layer Renari reasons against - [AI Agents](../ai/agents-overview.md) — configurable agents that execute multi-step workflows --- ## Security FunnelStory is built to keep your customer data confidential, available only to authorized users in your organization, and protected against accidental loss or unauthorized change. This page summarizes how we approach security at a high level. For contractual terms, subprocessors, and formal questionnaires, use your agreement and your FunnelStory contact. ## How we protect data Traffic between your browser and FunnelStory is encrypted in transit using industry-standard TLS. Stored data is encrypted at rest using cloud provider key management, so secrets and customer content are not held in plaintext on disk. ## Application and infrastructure FunnelStory runs on a major cloud provider (Amazon Web Services) in a multi-tenant architecture with strict network and identity controls. We use least-privilege access for operational staff, regular dependency and vulnerability management, and periodic third-party application reviews. Infrastructure access is gated with strong authentication and logging. ## Isolation between customers Each **workspace** is a separate tenant: models, connections, agents, and account data belong to that workspace and are not shared with other customers. Administrative actions that affect a workspace are scoped to users who have been granted access to it. ## Compliance and reviews We maintain a security and privacy program designed for enterprise expectations (including readiness work aligned with SOC 2). 
Your procurement or security team can request the latest security overview, questionnaire answers, or penetration-test summaries through your FunnelStory account team. ## Related - [Single Sign-On (SSO)](./sso.md) — how enterprise login is wired to your identity provider - [Audit log](./audit-log.md) — who changed what inside a workspace - [Workspace management](./workspace-management.md) — tenants, access, and lifecycle --- ## Single Sign-On (SSO) Single Sign-On lets your team sign into FunnelStory using your company identity provider (IdP), so access follows the same policies, groups, and lifecycle as your other business applications. ## What FunnelStory supports FunnelStory supports **enterprise SSO** (commonly **SAML**-based with major identity providers), **SCIM** for user provisioning from your directory, and options such as **enforcing SSO** so members of your organization sign in only through your IdP when that fits your security policy. ## Configuring SSO SSO is set up together with **your IT team and FunnelStory**. If you want to enable SSO, **contact your FunnelStory team**—they will work with you on the configuration, testing, and rollout for your workspace. ## Related - [Security](./security.md) — encryption, isolation, and reviews - [Workspace management](./workspace-management.md) — workspaces, access, and administration --- ## Workspace Management A **workspace** is your organization’s tenant in FunnelStory: it holds **accounts**, **data models**, **connections**, **agents**, dashboards, and all other configuration. Users can belong to one workspace or several, depending on how your company is set up. This page covers day-to-day workspace lifecycle from an administrator’s perspective. ## Switching and organizing workspaces After sign-in, use the **workspace picker** to open the environment you need. From there you can: - **Select** an existing workspace to load its data and settings. 
- **Create** a new workspace when your contract allows additional tenants (for example, a separate division or sandbox). - **Rename** a workspace to keep names aligned with how your team refers to them. - **Delete** a workspace only when you are certain the tenant is no longer needed; deletion is permanent for that tenant’s data. Exact options depend on your role and your company’s policy. ## Configure vs Admin Day-to-day modeling work lives under **Configure**—models, connections, signals, health tags, **AI Providers**, and similar. **Admin** is for cross-cutting controls: **Audit Log**, **Team** (invitations on **Team Members**, plus [Shared teams](../getting-started/shared-teams.md) for account assignment), **Workspaces** (for operators who manage multiple tenants), **Subscription** and billing contacts where exposed, **Notifications**, **MCP connections**, and other tenant-wide settings. If you cannot see an Admin item, your user may not have the required role; ask a **Super Admin** or **Admin** in your organization. ## Related - [Audit log](./audit-log.md) - [Security](./security.md) - [Getting started overview](../getting-started/overview.md) — first-time orientation - [Data connections overview](../data-connections/overview.md) — what you attach to a workspace --- ## Release Notes ## March 1, 2026 #### Template Management UI - Create and manage email and Slack notification templates from a dedicated UI. - Preview and iterate on templates without leaving the app. #### Shareable Dashboard Page - Share dashboards with a clean, focused page experience. - Recipients see key metrics without needing to log in. #### Cohort Analysis Refinements - Streaming support for cohort insights for a smoother, faster experience. - Improved prompts for more actionable recommendations. **Fixes and improvements** - UI polish for needle movers, dashboards, and focus areas. - Fixed layout and scroll issues across several pages. 
## February 1, 2026 #### What's Next — Cohort Analysis - Our new What's Next feature analyzes account cohorts to surface upsell, cross-sell, and expansion opportunities. - AI-generated insights highlight product combinations, needle movers, and recommended actions. - Streaming responses bring insights to life as they're generated. #### Template Testing and Resource Grants - Test email and Slack templates before sending to ensure formatting and variables work correctly. - Resource grants give workspace admins fine-grained control over feature access. #### Flow Enhancements - Flows can now send emails and integrate with HubSpot for reading and updating records. - Support for GCP (Google Cloud) components expands your integration options. **Fixes and improvements** - Dashboard chart editing and stat-type chart support for cleaner visualizations. - Needle mover soft delete and undelete for better lifecycle management. - Account metrics now work in filters across the platform. ## January 1, 2026 #### AI Agents and Flow Automation - Introducing AI Agents: create automated workflows powered by Renari that run on schedules or triggers. - Agents can query data connections, send Slack messages, and integrate with Salesforce. - Build custom flows with an interactive chat interface to test and refine behavior. #### Focus Areas - The new Focus Areas page surfaces tasks near due dates and recent notes in one place. - Prioritize your day without jumping between accounts. #### Shareable Dashboards - Create dashboards and share them via a public link. - Perfect for sharing insights with stakeholders who don't need full workspace access. #### Container Accounts and Product Recall - Container accounts now aggregate children for timelines, scores, and predictions. - Product Recall provides per-product audience analysis for renewal and upsell planning. **Fixes and improvements** - Multi-product account views support global filters and improved column options. 
- Fixed account breadcrumbs and prediction display for product-specific models. ## December 1, 2025 #### Account Hierarchy and Multi-Product View - Visualize parent-child account relationships with our new Account Hierarchy feature. - Multi-product workspaces get dedicated product tabs, breadcrumbs, and product-specific views. - See adoption, metrics, and driving factors per product for a complete picture. #### New Connections: Attio and Fathom - Connect Attio for CRM and deal data. - Connect Fathom for meeting recordings and notes. #### Needle Mover Email Notifications - Get notified when needle movers are assigned, updated, or when you're mentioned in comments. - Stay on top of critical account events without constantly checking the app. #### Mixpanel and Enhanced Data - Mixpanel connection now supports virtual table queries for deeper product analytics. - Workspace defaults and grounding configurations are easier to manage. **Fixes and improvements** - Improved product journey funnel navigation and filter behavior. - Fixes for multi-product account views, row selection, and CSV downloads. ## November 1, 2025 #### Microsoft Teams Integration - Connect Microsoft Teams to bring meetings and conversations into FunnelStory. - Sync group chats and meetings for a complete view of customer touchpoints. - Renari and needle movers now consider Teams data alongside Slack, Zoom, and other sources. #### Jira Integration - Connect Jira to sync issues, epics, and project data. - Link support tickets to Jira for streamlined workflow and visibility. #### Custom AI Providers and Models - Configure custom AI providers, including Google Vertex AI, with your own credentials. - Choose the models that work best for your workspace's AI features. #### Nylas Email and Playbooks - Nylas integration supports sending emails and syncing threads via webhooks. - Playbooks provide automation templates for common workflows and actions. 
**Fixes and improvements** - Subscription tag signals are now enriched with additional context in notifications. - Improved handling of campaign events and conversation sentiment during syncs. ## October 1, 2025 #### Smarter Task Creation - Create tasks from natural language: our AI parses due dates, assignees, and account associations from your task titles. - Support for multiple assignees lets you loop in the right people on shared tasks. - AI-suggested tasks are now powered by content labels for higher relevance. #### New Data Connection: Amazon Athena - Connect your Amazon Athena data source to run queries and build models on your data lake. - Seamlessly integrate Athena with your existing FunnelStory workflows. #### Subscription and Product Filters - Filter subscriptions by product and use count-based dimension filters for more precise segmentation. - Better control over which subscription data flows into your models and predictions. **Fixes and improvements** - Needle mover matching has been improved for competitor mentions and bug reports. - We've refined the AI Task Manager experience and task filters. - Performance improvements for needle mover processing and chatbot responses. ## September 1, 2025 #### Content Labels — Smarter Categorization - Introducing Content Labels: our AI automatically categorizes conversations and support tickets by topic, sentiment, and impact. - Get clearer needle mover matching for issues, bugs, and competitor mentions. - Renari can now query content labels to answer questions with more accurate, up-to-date context. #### Notes for Multiple Accounts - Attach a single note to multiple accounts for easier cross-account reference. - Keep related customer conversations and insights organized in one place. #### Single Sign-On (SSO) - Workspace admins can enable SSO for secure, unified access. - Streamline login for your team with your identity provider. 
#### Needle Mover Reports and Grounding - Needle mover Slack reports now support customizable filters and improved message formatting. - Workspace grounding lets you provide context that Renari uses across all answers. **Fixes and improvements** - Needle mover state and assignee filters have been refined for clearer results. - Improved handling of deleted or stale account references in reports. ## August 1, 2025 #### Renari 3.0 — A Refreshed AI Experience - We've introduced a new Renari experience with an improved chat interface and real-time streaming responses. - Get detailed AI-generated cards with charts and insights directly in your conversations. - Share feedback on Renari's responses to help us improve the assistant for your workspace. #### Needle Movers, Reimagined - Our redesigned Needle Movers experience gives you a dedicated page with powerful filtering, sorting, and pagination. - View AI-generated summaries for each needle mover and dive into a detailed timeline of events. - Chat with Renari directly from a needle mover to get context and next steps. - Needle movers can now be associated with multiple accounts for a clearer picture of impact. #### Campaigns and Calendar Settings - Stay on top of your email outreach with the new Campaigns page. - Configure your calendar integration and meeting sync preferences in the new Calendar Settings page. **Fixes and improvements** - Rich text editing is now supported for task comments and campaign content. - Needle mover filters now include impact type to help you focus on the most critical events. - We've improved onboarding completion checks and account property display. ## July 1, 2025 #### Turn Conversations into Tasks, Automatically - Our AI now reads your emails, call notes, and meeting transcripts to identify action items and promises. - Automatically suggests tasks for your team, ensuring nothing falls through the cracks. - Every suggested task links back to the original conversation for complete context. 
- Save time by eliminating the need to re-read long emails or listen to call recordings to create to-do lists. #### New Connections and a Smarter Search - We've added new ways to bring your customer information together in one place for a fuller picture of every account. - Connect to Elastic to pull in new data sources and get meeting notes automatically from Update.ai. - Ask our AI assistant, Renari, to search your public Slack channels and find important customer details hidden in everyday conversations. **Fixes and improvements** - **Pin Important Account Details:** Keep the most critical info at the top of an account page by pinning it. - **Add Multiple Domains to One Account:** You can now add multiple email domains to a single company account. - **A Faster, Smoother App:** We've rolled out significant performance improvements, making account pages load faster and CSV exports more reliable for a polished overall experience. ## June 1, 2025 #### Introducing the Account Strategy Tab - The new Account Strategy tab serves as a central hub for your goals by ingesting account plans and objectives directly from tools like Jira, Salesforce, and HubSpot. - Centralize and display account plans directly on the account page for a complete, contextual view. - Our health scoring now considers progress toward these goals, making scores more accurate and context-aware. #### Prediction Confidence Ratings - Our new Prediction Confidence Ratings add a crucial layer of transparency, so you can always trust the data you're acting on. - See "low" or "medium" confidence ratings next to scores where key data is missing. - Get a clear explanation for each warning, so you know exactly what data is needed to improve confidence. #### Visualize the Impact of Needle Movers - We're visualizing the directional impact of critical needle mover events to give you a clearer picture of influencing factors. - Quickly identify whether a needle mover signals a churn risk or an expansion opportunity. 
- Gauge the magnitude of the impact with a simple three-point scale for each event. - See an at-a-glance summary of how all recent needle movers are collectively impacting an account. ## May 1, 2025 #### Introducing AI Tasks - We're excited to launch AI Tasks, powered by Renari, to automatically identify "Needle Movers" in customer accounts. - Renari analyzes signals like feature requests, support issues, or competitor mentions, evaluates their revenue impact, and creates ready-to-go tasks. - Reduce team workload by up to 50% and focus on high-impact actions. - Proactively prevent churn by addressing critical issues as they arise. #### Renari's Enhanced Search Capabilities - Renari can now search through notes stored in FunnelStory and answer questions based on the content. - Quickly find information across your entire knowledge base without manual review. - Surface insights that might otherwise remain hidden in your notes. - Save valuable time when preparing for customer meetings. #### New Data Connections: Zoho Desk and Postmark - FunnelStory now supports two new powerful data connections. - **Zoho Desk Integration:** Connect your Zoho Desk account to run sentiment analysis, identify needle movers, and enable Renari to answer questions based on ticket data. - **Postmark Integration:** With our new Postmark connection, you can leverage FunnelStory's email workflow capabilities, send professional email updates, and track engagement. ## April 1, 2025 #### Renari 2.0 - World’s First AI Data Engineer - We launched Renari 2.0, our revolutionary AI Data Engineer powered by a patent-pending Agentic Customer Intelligence architecture. - Transforms how Customer Success teams operate with a single-agent approach that centralizes data ingestion, research, and problem-solving. - Proactively monitors usage, conversations, and engagement to highlight warning signals and opportunities. 
- Intelligently prioritizes risk factors and growth pathways to focus on the highest-impact actions first. - Frees your team to focus on strategic relationship-building by engaging customers directly. #### New Notes Page - View all your notes from different companies in one convenient location with our new Notes page. - See both imported and manually created notes in a single, streamlined interface. - Quickly find the information you need without switching between different company pages. #### New Export Button on the Accounts Dashboard - Take your account data wherever you need it with our new export functionality. - Export your accounts data to CSV format with just a few clicks. - Select exactly which columns you want to include in your export for customized reports. ## March 1, 2025 #### Customer Relationship Map - Introducing our new Customer Relationship Map to visualize company-to-company connections. - See exactly which of your team members are talking to which individuals at your customer's company across all communication channels. - Quickly identify relationship strength with visual indicators based on communication frequency and engagement. - Discover weak links or missing connections in customer relationships to strengthen them proactively. #### Import Notes From Other Tools - We've added the ability to import notes from other tools like Gainsight directly into FunnelStory. - Flexible setup allows you to customize the notes system to work exactly how you need it. - Eliminate manual copying and keep all your important customer information in one place. #### New Audience Signals - Introducing signals that alert you when accounts move between Audiences. - Get immediate notifications whenever accounts enter or exit your Audiences. - Set up automatic emails to notify team members when these changes happen. - Audience signals can be sent to Slack and Microsoft Teams for seamless team communication. 
**Fixes and improvements** - Improved the loading speed of the new Accounts Dashboard. - Enhanced accuracy of "Needle-Movers" signals to better track key account events. ## February 1, 2025 #### Revolutionary New Accounts Dashboard - We've completely reimagined our dashboard to provide a unified view of your customer data. - See all your customer data in one place - from adoption to renewal - with instant access to account health and revenue predictions. - Our new "Needle-Movers" feature automatically spots key account signals like personnel changes and pricing discussions. - Enhanced health score now includes adoption, usage, and customer conversations. - Advanced filtering lets you quickly find accounts using powerful filters for revenue predictions, sentiment, funnel stage, and more. #### Notes and Tasks - Introducing our new Notes and Tasks feature for better organization and collaboration. - Create and manage notes across all your accounts. - Assign tasks and collaborate with team members. - Maintain privacy controls for sensitive information. **Fixes and improvements** - Fixed minor UI issues in the Renari dashboard to improve user experience. - Improved Renari AI's response accuracy and processing speed. ## January 1, 2025 #### Renari AI Assistant Enhancements - We've made major improvements to Renari, our AI assistant, to help you track action items and feature requests effortlessly. - Responses now appear in an interactive card format for better readability. - Smart conversation analysis that: - Identifies action items from discussions. - Tracks feature requests from customers. - Summarizes key tasks from meetings. - Analyzes conversations across Intercom, Slack, Zendesk, and other platforms. #### New Data Connection: Redshift Database - You can now connect directly to your Amazon Redshift database. - Easily configure FunnelStory data models for better analytics and reporting. 
**Fixes and improvements** - Improved AI-generated email reports to provide more insightful trends. - Fixed minor UI inconsistencies for a cleaner interface. ## November 1, 2024 #### New Data Connections - We've expanded our integration capabilities with support for several new data connections: - MongoDB: Connect directly to your MongoDB database - Microsoft SQL Server: Connect to your SQL Server database - Gainsight: Track customer events and success metrics - Amazon S3: Integrate with your data lake - Mailgun: Send emails through our workflow system - Pendo: Track product analytics and user events #### Agentic Email Reports - Introducing automated email reports powered by AI for effortless insights. - Schedule reports at daily or weekly intervals. - Receive AI-generated insights about your customer data directly in your inbox. - Track trends and patterns automatically without manual effort. - Share insights effortlessly with your team. **Fixes and improvements** - Enhanced What-If Analysis with improved predictive modeling. - Optimized AI-powered task recommendations to better align with renewal management. ## October 1, 2024 #### Task Recommendation in Renewal Management - Added AI-powered task recommendations to our Renewal Management feature. - Get smart task suggestions based on account predictions. - Easily add recommended tasks to your Task Management dashboard. #### What-If Analysis for Renewal Management - Introduced a What-If Analysis feature as part of the Renewal Management toolkit. - Customer Success Managers can now simulate how changes in customer activities or interactions might affect retention/churn likelihood. - Users can adjust factor values to see potential impacts on customer retention. - The analysis provides clear differentiation between simulated and actual likelihood changes. - Results show the new likelihood and corresponding analysis for the simulated scenario. 
#### Amazon Simple Email Service Integration - Added support for Amazon Simple Email Service as a data connection. - Enables sending emails through our email workflows using Amazon SES. **Fixes and improvements** - Product activity models now offer an optional toggle to affect product engagement calculations. - Improved the display of JSON-type Account Properties on the account page with better formatting for readability. ## August 1, 2024 #### Salesforce Service Cloud Integration - Added support for Salesforce Service Cloud as a data connection. - Track support tickets in FunnelStory, similar to Zendesk and Freshdesk integrations. #### Enhanced Renewal Prediction - Major improvements to the Renewal Prediction feature for more accurate results. - Incorporated various additional data points into the prediction model. - Updated UI for easier understanding and improved usability. - Enhanced chatbot to answer questions related to renewal predictions. **Fixes and improvements** - Added ability to sync "Account Properties" as part of the User Type CRM Sync. - Resolved a UI issue in the task management feature where the text box didn't automatically clear after adding a note. - Added domain field to Slack alert templates, enhancing the information provided in notifications. ## July 1, 2024 #### Renewal Management - Introduced the AI-powered Renewal Management feature. - Identifies potential churn risks early. - Recommends appropriate retention actions for proactive customer relationship management. #### HubSpot Deals Query Support - Added support for querying HubSpot deals in model queries. - Uses HS_START and HS_END delimiter blocks, following the same approach as the existing SOQL blocks. **Fixes and improvements** - Implemented filtering of customer journeys by audience in the Journey Map. - Corrected a navigation error where clicking on topic column values led to undefined pages, ensuring proper redirection. 
## June 1, 2024

#### Enhanced Chatbot Experience
- Significantly improved the overall chatbot experience.
- Increased accuracy of chatbot responses for better user interaction.
- Added a feedback mechanism (thumbs up/down) for continuous improvement of chatbot responses.

#### Freshdesk Integration
- Added support for Freshdesk as a new data connection.
- Enables fetching support tickets from Freshdesk.
- Generates support ticket insights including ticket sentiment and topic tracking.

**Fixes and improvements**
- Resolved an issue where CSV data was overflowing from the chatbot UI, improving the overall data presentation.
- Fixed a display issue in the notification configuration screen when selecting signal triggers.

## May 1, 2024

#### FunnelStory Chatbot
- Meet our newest AI-powered assistant: the FunnelStory Chatbot!
- Interact and ask questions about accounts, users, and activities within your workspace.
- Get instant answers to queries like account activities, opportunities, and user engagement.
- Simplify your data exploration with a friendly chat interface.

#### New Data Connections
- We've added Microsoft Teams integration for direct notifications in your MS Teams channels.
- Connect your Databricks database to build custom models on your data.
- New Gong integration generates meeting summaries tracked in FunnelStory.

**Fixes and improvements**
- We've added new filters to our Smart Task Management feature.

## April 1, 2024

#### Smart Task Management
- Introducing Smart Task Management: a smarter way to handle tasks.
- Our platform automatically creates tasks when accounts are stuck, marked as risks, or represent opportunities.
- Tasks are intelligently generated from conversations, meetings, and support escalations.
- Teams can manually add tasks with details like title, description, assignee, and due date.
- View all your prioritized tasks in one dashboard, reassign, add notes, and mark as complete.
#### Journey Map with Additional Insights
- We've enhanced our Journey Map feature with additional insights.
- Explore the retention zone to visualize activities and accounts within specific areas.
- Gain a comprehensive overview with our new Journey Statistics section, including journey completion times and singleton activities.

#### Activity Charts View
- Introducing our new Activity Charts View page for visualizing account activities.
- Gain insights into your audience's behavior with visual representations of activity occurrences and associated account counts.
- Make data-driven decisions to enhance your customers' experience.

**Fixes and improvements**
- We've improved the UI/UX of the Journey Map for better interactions.
- Minor bug fixes have been implemented across the platform.

## March 1, 2024

#### Journey Map
- We're excited to introduce our new Journey Map feature, providing invaluable insights into your customers' journeys.
- This visual representation displays a flowchart-like view of account journeys, starting from a common entry point.
- Apply filters to isolate and analyze specific account journeys, uncovering deeper insights.
- Use these journeys to configure a Journey Funnel for more targeted analysis.

#### Discovered AI Audience
- Our Audiences feature now leverages AI to discover audiences based on the firmographic traits of your accounts.
- Gain valuable insights into your customer base with AI-driven audience discovery.
- Create and manage lists of users based on specific criteria for effective targeting and engagement.

**Fixes and improvements**
- We've enhanced the overall performance of the platform for a smoother user experience.
- Minor UI improvements have been made across various features.

## February 1, 2024

#### [Meetings Tracking using Zoom](/data-connections/communication/zoom)
- We're excited to introduce our new Zoom Integration feature. This feature allows you to maximize your meeting visibility.
- Easily track your Zoom meetings and leverage AI-generated summaries/transcripts to extract key points and measure overall sentiment.
- Gain insights into customer sentiment, revisit past discussions, and assess relationship dynamics with ease. This will enable you to manage customer relationships effectively.

#### [Account Enrichment](/platform/audiences/overview)
- Our Account Enrichment feature has been enhanced. You can now enrich your account data with firmographic insights for improved targeting.
- These account traits can be used as filters throughout the product, ensuring precise audience segmentation and personalized engagement and campaigns.

#### [ICP Audiences](/platform/audiences/overview)
- Discover your Ideal Customer Profile (ICP) effortlessly with our improved Audiences feature. Our AI will guide you in identifying top-performing user profiles and suggest relevant ICPs.
- You can create targeted user lists based on ICP filters, and easily sync them with HubSpot for streamlined marketing workflows.

**Fixes and improvements**
- We've improved the Data Model Configuration flow to provide a better user experience.
- We've resolved a minor UX issue where users were getting stuck on the model config screen during onboarding.

## January 1, 2024

#### [Subscription Tags & License Utilization Metric](/dashboard-insights/signals)
- Introducing Subscription Tags! You can now configure these tags (Retention, Upsell, Expansion, Churn) based on the signals and metrics that matter most to you.
- Our new metric allows you to keep track of your license utilization. You can also configure subscription tags on top of the license utilization metric.

#### [Slack Notifications v2](/platform/notifications/slack)
- We've updated our Slack notifications: you can now send notifications to different channels.
- We've added a new notification template that provides teams with additional information about the event.
- The UI/UX of this feature has undergone a makeover for a more user-friendly experience.

#### Updated Dashboard View
- We've enhanced the overview dashboard with additional charts and insights.
- Now you can gain a clearer understanding of how your accounts are performing and identify areas for improvement.

**Fixes and improvements**
- We've improved the AI suggestions for data models and journey funnels.
- We've fixed minor UI issues that occurred during user onboarding.

## December 1, 2023

#### [AI-Driven Data Modeling](/data-models/overview)
- We've improved data modeling with the integration of AI capabilities.
- AI now suggests data models by analyzing database tables, eliminating the need for manual SQL queries.
- This enhancement aims to reduce configuration time, allowing users to focus more on data analysis and insights.

#### [AI-Driven Journey Funnels](/platform/funnels/overview)
- We've updated the funnel planning process with AI insights.
- AI now recommends activities for each funnel stage based on an analysis of top-performing customer behaviors.
- This data-driven approach is designed to optimize funnel activities, aligning them with proven successful patterns.

#### [AI-Driven Onboarding](/getting-started/quick-start)
- Experience an updated onboarding process with AI efficiency.
- Our new onboarding flow, powered by AI, ensures users can start operating within 5 minutes.
- AI takes care of all configurations, providing a seamless introduction to FunnelStory, complete with a sample database for practical exploration.

#### [AI-Driven Workflows and Email Recommendations](/ai/agents-overview)
- We've introduced AI recommendations to enhance workflow automation.
- FunnelStory's AI analyzes triggers and actions within the funnel, suggesting optimized workflows.
- AI support extends to email workflows with personalized email content recommendations, simplifying the automation process.
**Fixes and improvements**
- The Workflows UI has been updated, improving user navigation and speeding up the configuration process.
- We've improved charts on the account page, making them more user-friendly and easier to understand.

## November 1, 2023

#### Enhanced Onboarding Checklist
- We've revamped the onboarding checklist for a smoother experience.
- The checklist is now more user-friendly and includes additional steps to guide you in setting up your initial workflows.

#### Intercom Chatbot Integration
- FunnelStory now features an Intercom chatbot.
- The chatbot provides quick help and directs you to the right support, making your experience with our app more efficient.

**Fixes and improvements**
- Improved chart loading speed on the dashboard for a quicker experience.
- We've simplified the data model configuration process: now only the data connections that are compatible with a specific model will be visible.

## October 1, 2023

#### [Audit Logs](/platform/audit-log)
- Introducing **Audit Logs**, a new feature for FunnelStory customers.
- Audit logs give you a quick look at all the actions performed by users in your workspace, such as creating, updating, or deleting resources.

#### [Manage Subscriptions](/platform/workspace-management)
- Introducing **Subscriptions**, a new feature for FunnelStory customers.
- Now you can easily manage your subscription plan and billing details, and see how much you're using FunnelStory, all from the new **Subscriptions** page.

#### [Added Mixpanel as a Data Connection](/data-connections/analytics/mixpanel)
- Introducing **Mixpanel**, a new data connection for FunnelStory customers.
- Connect your Mixpanel account to FunnelStory to analyze user behavior and product usage.

**Fixes and improvements**
- We've improved the look and usability of the Journey Funnel setup screen.
- We've also added new columns to the topics dashboard, showing the list of accounts that relate to each topic.
## September 1, 2023

#### [FunnelStory Dashboard](/dashboard-insights/overview)
- Introducing the **FunnelStory Dashboard**, providing a comprehensive overview of your product business.
- Real-time metrics on account progression, risk assessments, funnel health, and customer sentiment.
- Dive into your data to understand customer behavior and track key performance indicators.
- Get actionable insights for data-driven decision-making.

#### [Topic Tracking](/dashboard-insights/metrics)
- Introducing **Topic Tracking**, a new feature for FunnelStory customers.
- Monitor keywords and topics mentioned by your customers.
- Analyze sentiment, explore related topics, and track keyword mentions.
- Enhance your decision-making process with comprehensive insights derived from Zendesk, Slack, and Intercom data sources.

#### [Users Dashboard](/dashboard-insights/accounts-view)
- Introducing the **Users Dashboard**, a new tool for FunnelStory customers.
- Access a comprehensive list of users across all accounts in your workspace.
- Customize your dashboard by selecting the columns to display.

**Fixes and improvements**
- Updated email notifications to include even more information about your workspace.
- Updated the overall UI of the product to improve the user experience.
- Added new charts to the account properties tab on the account page.

## August 1, 2023

#### [Alluvial, Sankey & Scatter Plot Charts](/dashboard-insights/accounts-view)
- Introducing three new charts to the accounts dashboard: _Alluvial_, _Sankey_, and _Scatter Plot_ charts.
- The _Alluvial_ chart visualizes the flow of accounts between stages in your funnel.
- The _Sankey_ chart visualizes the flow of accounts within a stage.
- The _Scatter Plot_ chart provides insights into account funnel state relative to funnel stage conditions.

#### SLA Breaches
- Unveiling the new _SLA Breaches_ feature to monitor account SLA breaches.
- _Support SLA Breaches_ are computed from Zendesk support tickets, while _Conversation SLA Breaches_ stem from Slack conversations.
- Dedicated dashboards for _Support SLA Breaches_ and _Conversation SLA Breaches_ are available on the account page. Gain insights into status, priority, severity levels, sentiment, and more. Jira ticket links are displayed if your Zendesk support ticket is linked to Jira.
- New columns related to SLA breaches have been added to the accounts dashboard, including SLA Tier Name, SLA Tier Assigned By, SLA Breach Support Ticket, SLA Breach Conversation, and more.

#### [Assign Accounts To Team Members](/getting-started/inviting-users)
- Introducing the _Assign Accounts_ feature, allowing account assignment to team members. This will help your team members focus on accounts that require attention, as they will have a personalized view of their assigned accounts on the accounts dashboard.
- On top of assigning accounts to team members, you can also assign a designation to each team member. Here is the list of designations available:
  - _Account Executive_
  - _Customer Success Engineer_
  - _Customer Success Manager_
  - _Sales Engineer_

#### [Email Notifications](/platform/notifications/overview)
- Launching Daily/Weekly email notifications to users. These emails provide an account summary based on the configured Funnel.

**Fixes and improvements**
- Automated assignment of names to conditions in the funnel stage and health tags. Manual naming of conditions is no longer required.
- Added new filters to the accounts dashboard, such as _Funnel State_, _Recent Activity_, _Recent Signal_, and more.
- The Workspace ID is now part of the URL, so you can easily share your workspace with others and have multiple workspaces open in different tabs.

## July 1, 2023

#### [Funnel State](/platform/funnels/overview)
- Introducing the _Funnel State_ column in the accounts dashboard. Easily track whether active accounts are Stuck, Slow, or On Time in the funnel.
- This feature is invaluable for monitoring and addressing accounts that are stuck and require attention.
- Gain insights into the median time taken by accounts to transition between stages, identifying accounts moving slowly through the funnel.
- New _Funnel State Change Signal_ added to track changes in the funnel state of accounts.

#### [Done Stage](/platform/funnels/overview)
- Addition of the _Done_ funnel stage at the end of the funnel. This facilitates tracking of accounts that have completed all funnel stages.

#### [Health Tags](/dashboard-insights/signals)
- Unveiling the _Health Tags_ feature, offering three distinct health tags:
  - _Opportunity Tag_
  - _Risk Tag_
  - _Caution Tag_
- Configure health tags as per your business requirements. For example, assign the _Opportunity Tag_ to accounts with daily product engagement.

**Fixes and improvements**
- Refined funnel stage logic. Accounts now need to fulfill conditions of previous stages to progress, ensuring accurate funnel progression.
- Enhanced account page UI for a cleaner and more user-friendly experience.

## June 1, 2023

#### Activity Charts on Account Page
- Introducing a variety of activity charts to the account page, enhancing insights into account and user changes. Charts include:
  - Product Activity
  - Non-Product Activity
  - Account Engagement
  - Total Users
  - Feature Adoption
- Added the ability to filter charts by date range. Chart downloads are also supported.

#### [New Accounts Dashboard](/dashboard-insights/accounts-view)
- Revamped accounts dashboard for a quick overview of your accounts. Customize columns to display essential data.
- Filter accounts based on engagement, support ticket sentiment, and more.
- Sort accounts based on column headers.

**Fixes and improvements**
- Improved UI for model configuration, supporting addition of multiple account properties.
- Enhanced UI for the account page, simplifying navigation between sections.
- Notification icon added at the top for easier tracking of account signal updates.

## May 1, 2023

#### [AND/OR Conditions in Journey Funnel](/platform/funnels/overview)
- Introducing the ability to incorporate AND/OR conditions within the journey funnel. This enhancement empowers you to construct intricate funnels tailored to your needs.
- Additionally, you can now utilize metrics and account properties as conditions while configuring a funnel stage.

#### [Support for New Data Sources](/data-connections/overview)
- FunnelStory now extends its support to include Snowflake and BigQuery as data sources, expanding your options for data integration.

#### [CRM Syncs](/platform/crm-sync/hubspot)
- Enhancing data synchronization capabilities, FunnelStory enables the seamless transfer of FunnelStory-generated data back to your CRM. Leverage this functionality to synchronize data and drive customized workflows within your CRM environment.
- We've also introduced an Account Type sync feature, automating the creation of deals in your HubSpot CRM.

**Fixes and improvements**
- Refined the funnel stage logic for improved accuracy.
- Introduced a search bar on the account page, streamlining account search.
- Added the ability to rename an activity by modifying the associated data model.

## April 1, 2023

#### [SSH Tunneling](/data-connections/ssh-tunnels)
- Introducing SSH tunneling support, enabling secure and efficient connection to your database, even if it's not publicly accessible.

#### [Slack Notifications](/platform/notifications/slack)
- Seamlessly integrate with your Slack channel to receive real-time notifications. Stay informed about changes in your product and business, ensuring that your sales, marketing, and support teams are always up to date.

**Fixes and improvements**
- Optimized model refresh time for faster data synchronization.
- Implementation of audit logs for model refresh, providing transparency into refresh time and synchronized row count.

## March 1, 2023

#### [Conversation & Support Ticket Sentiment Analysis](/dashboard-insights/timeline)
- Enhancing user engagement analysis, FunnelStory now tracks sentiment and emotion in your conversations and support tickets. Gain insights into user sentiment and emotions while using your product.
- Introducing signals to monitor sentiment changes. Triggered alerts notify you of transitions between positive, negative, and indifferent sentiments in support tickets and general conversations.

#### [Nudge & Lifecycle Events Workflow Template](/platform/funnels/overview)
- Introducing new templates in the Journey Funnel, facilitating the creation of Nudge & Lifecycle Events workflows.
- Effectively re-engage users who haven't interacted with your product, or orchestrate lifecycle events for users who have completed specific actions.

**Fixes and improvements**
- Enhanced user experience in workspace management, including workspace renaming and deletion.
- Resolved the issue of selecting the same activity multiple times in multiple stages of the workflow.

## February 2, 2023

#### [New Signals [Refactoring]](/dashboard-insights/signals)
- Revamped signal functionality for enhanced flexibility. Introducing new signals to monitor pattern changes.
- New signals include:
  - Activity Frequency Signal
  - User Growth Signal
  - Support Tickets Signal
  - Support Tonality Signal
  - Conversation Tonality Signal

#### [Journey Funnel [Refactoring]](/platform/funnels/overview)
- Streamlined Journey Funnel for simplified workflow creation. Formerly known as Playbooks, the Journey Funnel now offers improved usability.
- Additional templates provided for accelerated workflow setup, catering to both account-level and user-level scenarios.

**Fixes and improvements**
- Expanded event timeline per-page limit to 30 records for comprehensive event tracking.
- Rectified the issue of workflow failures caused by allowing users to have multiple active playbooks.
- Enhanced journey funnel audit log, offering detailed insights into workflow runs.

## January 1, 2023

#### User Tagging
- Introducing custom user tagging capabilities. Effectively categorize users based on specific attributes, enhancing user tracking and segmentation.

#### [Improved Event Timeline](/dashboard-insights/timeline)
- Enhanced event timeline functionality for improved tracking of product changes. Gain deeper insights into customer interactions and product engagement.

**Fixes and improvements**
- Resolved OAuth flow issues for seamless user authentication.
- Streamlined user invite and RBAC (Role-Based Access Control) flow, simplifying the process of inviting new users to your organization.
- Rectified data synchronization inconsistencies during model refresh.
- To simplify workspace management, we've transitioned from "Organization" to "Workspace" for better clarity.

## December 1, 2022

#### [Role-Based Access Control (RBAC)](/getting-started/rbac)
- Enhancing organization management, FunnelStory introduces Role-Based Access Control (RBAC). Invite and assign users to distinct roles, ensuring precise data access management.
- Available roles include:
  - Super Admin
  - Administrator
  - Data Admin
  - Account User

**Fixes and improvements**
- Improved error handling during data connection setup.
- Resolved the issue of playbook stages failing to detect activities.

## November 1, 2022

#### [Add New Data Sources](/data-connections/overview)
- Expanding integration capabilities, FunnelStory introduces support for new data sources:
  - Zendesk
  - HubSpot
  - Intercom
  - Gmail

#### [Add Salesforce Integration](/data-connections/crm/salesforce)
- Seamlessly integrate Salesforce as a data source. Monitor account opportunities and activities within your Salesforce account.
**Fixes and improvements**
- Fixed an issue where an organization could be created without a name.
- Streamlined the data connection validation process.
- Enhanced account dashboard pagination for improved navigation.