ACCELERATOR · InVoIQ

InVoIQ — Voice-driven AI BI. Speak the question. See the answer.

Voice-driven and natural-language BI on live enterprise data. Speech-to-text, LLM intent, semantic-layer text-to-SQL, query execution, and visualisation — in seconds, with citations.

Built on Whisper / Deepgram ASR, Claude / GPT for intent, dbt MetricFlow / Cube / AtScale / LookML semantic layers, and your warehouse (Snowflake, BigQuery, Databricks, Redshift). BYOM (bring your own model) — a vendor-agnostic alternative to Tableau Pulse, Power BI Copilot, and ThoughtSpot Sage.

Natural-language BI fails on enterprise data without a semantic layer. "Show me revenue" can mean five different things. InVoIQ pairs voice + LLM intent with a semantic layer so the right metric, the right grain, and the right filters are unambiguous before any SQL is generated.

The pipeline

Each stage is a pluggable component. We swap implementations to fit the customer's stack and constraints.

  1. Voice capture. Browser Web Audio or mobile native; on-device VAD (voice activity detection) trims silence.
  2. Speech-to-text (ASR). OpenAI Whisper (self-hosted or API), Deepgram, Amazon Transcribe, or Google Speech-to-Text. Indian-English and multi-Indic language coverage varies by provider.
  3. Intent & entity extraction. An LLM (Claude or GPT) reads the transcript, extracts the metric, the dimensions, the time window, and the filters — constrained to what the semantic layer exposes.
  4. Semantic-layer query. The structured intent is compiled to a query against dbt MetricFlow, Cube, AtScale, or Looker LookML (whichever the customer uses). The semantic layer enforces metric definitions and joins.
  5. Warehouse execution. The compiled SQL runs against the customer's warehouse — Snowflake, BigQuery, Databricks, Redshift, Postgres. RBAC is enforced at the warehouse, not the chatbot.
  6. Visualisation. An LLM picks an appropriate chart type from the result shape (time-series → line; categorical → bar; etc.), rendered via Plotly, Apache Superset embedding, or the customer's existing BI surface.
  7. Voice response (optional). Text-to-speech via Amazon Polly, ElevenLabs, or Google Cloud TTS — most users want the visual.

Why a semantic layer matters

Naive text-to-SQL fails on real enterprise schemas. Joins are wrong. Metric definitions disagree across departments. The same word means different things in different tables. The semantic layer is the contract that resolves all of this before SQL is generated.
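To make the "contract" concrete, here is a toy registry in the spirit of a semantic layer: one canonical definition per metric, and SQL is only generated for metric/dimension pairs the registry exposes. The table, column, and metric names are invented for illustration and do not reflect any real MetricFlow, Cube, AtScale, or LookML schema:

```python
# One place where "revenue" is defined -- departments cannot disagree
# with it, and the LLM cannot invent joins around it.
SEMANTIC_LAYER = {
    "revenue": {
        "expr": "SUM(order_lines.net_amount)",
        "dimensions": {"region", "product_line", "order_month"},
    },
}


def compile_sql(metric: str, dimension: str) -> str:
    """Generate SQL only for modelled metric/dimension pairs."""
    spec = SEMANTIC_LAYER.get(metric)
    if spec is None or dimension not in spec["dimensions"]:
        raise ValueError(f"not modelled: {metric} by {dimension}")
    return (
        f"SELECT {dimension}, {spec['expr']} AS value "
        f"FROM order_lines GROUP BY {dimension}"
    )
```

The point is the failure mode: an un-modelled request raises before any SQL exists, instead of producing plausible-but-wrong SQL.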

For depth, see our resource on why text-to-SQL needs a semantic layer.

What it does well — and what it doesn't

InVoIQ is honest about its operating envelope.

It does well

  • Defined metrics over defined dimensions ("revenue by region last quarter").
  • Common time-window comparisons ("month over month", "year over year").
  • Top-N and bottom-N queries.
  • Drill-down within metrics already modelled in the semantic layer.
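These query shapes are tractable precisely because they reduce to deterministic templates once the intent is structured. A sketch for the Top-N case, with illustrative table and column names:

```python
def top_n_sql(
    metric_expr: str, dimension: str, table: str, n: int, descending: bool = True
) -> str:
    """Render a Top-N / Bottom-N query from structured intent fields.

    Hypothetical helper: real semantic layers emit this SQL themselves;
    the template just shows how little is left to the LLM at this stage.
    """
    order = "DESC" if descending else "ASC"
    return (
        f"SELECT {dimension}, {metric_expr} AS value "
        f"FROM {table} GROUP BY {dimension} "
        f"ORDER BY value {order} LIMIT {n}"
    )
```

"Top 5 products by revenue" becomes `top_n_sql("SUM(net_amount)", "product_line", "order_lines", 5)` — no free-form SQL generation involved.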

It doesn't pretend to do

  • Free-form analysis on un-modelled data — if it's not in the semantic layer, the agent says so and asks.
  • Statistical inference (causal claims, regression) — hands off to a notebook or analyst.
  • Forecasts — it defers to the team's existing forecasting models where they exist; otherwise it declines.
  • Anything that involves PII without proper authorisation context.
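The boundaries above amount to a routing decision made before any query runs. A minimal sketch — the category names, fields, and refusal strings are illustrative assumptions, not InVoIQ's actual guardrail implementation:

```python
MODELLED_METRICS = {"revenue", "orders", "active_users"}  # illustrative


def route(intent: dict) -> str:
    """Decide whether to answer, ask, refuse, or hand off."""
    if intent.get("needs_pii") and not intent.get("authorised"):
        return "refuse: no authorisation context for PII"
    if intent.get("kind") in {"causal", "regression", "forecast"}:
        return "hand off: statistical work goes to a notebook or analyst"
    if intent.get("metric") not in MODELLED_METRICS:
        return "ask: metric is not in the semantic layer"
    return "answer"
```

Checking authorisation and modelling status first means the agent says "I can't" or "I need more" instead of guessing — the behaviour the list above commits to.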

How it compares to native BI tools

The major BI vendors now ship voice / natural-language layers — Tableau Pulse, Power BI Copilot, ThoughtSpot Sage, Looker Conversational Analytics. They work well inside their platforms.

InVoIQ is for organisations that want one vendor-agnostic, BYOM voice layer across their warehouse and BI estate, rather than an assistant tied to a single vendor's platform.

Where it fits

InVoIQ is the voice-driven layer on top of AI-Powered BI. It runs on the Data Platform we build, governed by Responsible AI controls, with quality assured by Eval@Core.

We are an intent away