A completely new chat for Spark

We rebuilt Spark's chat from the ground up. The result is a fundamentally different research experience: faster, deeper, more transparent, and more reliable.

Here's what changed.

Watch your research happen in real time

Previously, you typed a question and waited. A spinner spun. Eventually, an answer appeared. You had no idea what happened in between.

Now you can see the entire research process unfold. A research workflow panel shows three stages as they happen:

  1. Request Analysis -- Spark interprets your question
  2. Source Collection -- specialized agents search the web, companies, scientific literature, and patents
  3. Answer Synthesis -- findings are combined into a sourced answer

Each active agent appears as a card showing what it's doing: which queries it's running, which sources it's finding, and whether it's still collecting or already done. You can expand any agent to see the full search protocol: every query executed, every source discovered.

Four specialized research agents, working in parallel

The old chat had a single AI that picked from a handful of generic tools, one or two at a time.

The new chat dispatches up to four specialized research agents simultaneously:

  • Web Research -- general web search with multiple search engines
  • Company Research -- startup discovery, market landscape, funding data
  • Science Research -- academic papers, chemical compounds, research trends
  • Patent Research -- patent search with keyword, assignee, and classification strategies

When you ask a complex question, these agents work in parallel rather than sequentially. A question like "What startups are working on solid-state batteries and what does the latest research say?" now triggers web, company, and science research at the same time, each running its own search strategy.
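
The core idea is fan-out concurrency: total latency is bounded by the slowest agent rather than the sum of all of them. A minimal sketch of this dispatch pattern (agent names and return shapes are illustrative, not Spark's actual API):

```python
import asyncio

# Each "agent" is an async function: question in, findings out.
# These stubs stand in for real web/company/science research agents.
async def web_research(question):
    return {"agent": "web", "findings": [f"web result for {question!r}"]}

async def company_research(question):
    return {"agent": "company", "findings": [f"company result for {question!r}"]}

async def science_research(question):
    return {"agent": "science", "findings": [f"science result for {question!r}"]}

async def dispatch(question, agents):
    # All agents run concurrently; gather preserves input order,
    # so results line up with the agent list.
    return await asyncio.gather(*(agent(question) for agent in agents))

results = asyncio.run(dispatch(
    "solid-state batteries",
    [web_research, company_research, science_research],
))
```

The same pattern extends to four agents; the patent agent simply joins the list when the question calls for it.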

Iterative research, not one-shot

The old chat ran one or two searches and immediately tried to answer. If those searches didn't cover the question well enough, you got a shallow answer.

The new chat can go deeper. After the first round of research agents returns, the system reviews the findings. If it determines that a part of your question isn't well covered yet, or that the initial results point to something worth investigating further, it spins up additional research agents to dig into those gaps. This can happen multiple times.

You'll see this in the research workflow panel: new agent cards appearing after the first round completes, each targeting a specific follow-up question that emerged from the initial findings. The result is a more thorough answer that covers angles a single round of searches would miss.
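
The review-and-follow-up loop described above can be sketched as a bounded iteration: research, look for gaps, research the gaps, repeat until nothing is missing or a round limit is hit. Everything here (function names, the round limit, the stub gap detector) is an illustrative assumption:

```python
def iterative_research(question, run_agents, find_gaps, max_rounds=3):
    # Round 1: research the question directly.
    findings = run_agents([question])
    for _ in range(max_rounds - 1):
        # Review what came back; each uncovered angle becomes
        # a follow-up question for a fresh agent.
        gaps = find_gaps(question, findings)
        if not gaps:
            break
        findings += run_agents(gaps)
    return findings

# Stubs to show the control flow.
def run_agents(questions):
    return [f"findings for {q}" for q in questions]

def find_gaps(question, findings):
    # Pretend round one surfaced exactly one uncovered angle.
    return ["latest solid-state research"] if len(findings) == 1 else []

result = iterative_research("solid-state battery startups", run_agents, find_gaps)
```

The `max_rounds` cap is what keeps "this can happen multiple times" from becoming an unbounded loop.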

Smarter search strategies

Each agent doesn't just run one or two search queries. It generates multiple keyword variants (synonyms, reformulations, related terms) and runs them in parallel across different search services and databases. Results are deduplicated automatically.
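
In outline, that is query expansion followed by URL-based deduplication. A sketch under stated assumptions (the real system generates variants with an LLM; the hard-coded variant table and stub search services here are purely illustrative):

```python
from itertools import chain

def expand_queries(query):
    # Illustrative stand-in for LLM-generated synonyms and reformulations.
    variants = {"solid-state batteries": ["solid state battery",
                                          "all-solid-state cell"]}
    return [query] + variants.get(query, [])

def search_all(query, services):
    # Run every variant against every service, then keep the first
    # result seen for each URL.
    results = chain.from_iterable(
        service(v) for v in expand_queries(query) for service in services)
    seen, unique = set(), []
    for r in results:
        if r["url"] not in seen:
            seen.add(r["url"])
            unique.append(r)
    return unique

# Two stub services with overlapping results.
def service_a(query):
    return [{"url": "https://example.com/a", "title": query}]

def service_b(query):
    return [{"url": "https://example.com/a", "title": query},
            {"url": "https://example.com/b", "title": query}]

results = search_all("solid-state batteries", [service_a, service_b])
```

Despite nine raw hits across variants and services, only two unique URLs survive deduplication.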

This means the system is far less likely to miss relevant information just because you phrased your question one way instead of another.

Better results through quality filtering

Raw search results are no longer passed directly into the answer. Before synthesis, results go through multiple quality gates:

  • Relevance filtering -- an AI evaluates whether each result actually matches your question
  • URL health checks -- dead links are detected and removed before they can appear in your answer
  • Market data verification -- for financial queries, specific numbers are extracted and verified

You get cleaner answers with fewer irrelevant tangents or broken links.
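
The gates above compose as a simple filter pipeline, each stage narrowing the candidate set before synthesis. A minimal sketch, assuming the gates are passed in as predicates (the real relevance check is an AI evaluation and the health check is a live HTTP probe):

```python
def apply_quality_gates(results, is_relevant, url_is_alive, verify_market_data=None):
    # Gate 1: relevance -- drop results that don't match the question.
    results = [r for r in results if is_relevant(r)]
    # Gate 2: URL health -- drop dead links before they reach the answer.
    results = [r for r in results if url_is_alive(r["url"])]
    # Gate 3 (financial queries only): verify extracted numbers.
    if verify_market_data is not None:
        results = [r for r in results if verify_market_data(r)]
    return results

raw = [
    {"url": "https://example.com/good", "relevant": True},
    {"url": "https://example.com/offtopic", "relevant": False},
    {"url": "https://example.com/dead", "relevant": True},
]
clean = apply_quality_gates(
    raw,
    is_relevant=lambda r: r["relevant"],
    url_is_alive=lambda url: "dead" not in url,
)
```

Ordering matters for cost: the cheap relevance check runs before the slower network-bound health check touches any URL.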

Better citing of sources

In the old chat, citations were inconsistent. The AI was asked to cite sources, but it didn't always comply.

Now, citation is enforced structurally. Each research agent extracts source URLs from its findings and passes them forward with explicit instructions to cite them. The system requires every factual statement to include an inline source link.
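
One way to picture "enforced structurally": a validation pass that flags any answer line lacking an inline Markdown source link. This is a simplified illustration of the idea, not Spark's actual enforcement mechanism:

```python
import re

# Matches an inline Markdown link: [title](https://...)
INLINE_LINK = re.compile(r"\[[^\]]+\]\(https?://[^)]+\)")

def uncited_lines(answer):
    # Return non-empty lines of the answer that carry no source link.
    return [line for line in answer.splitlines()
            if line.strip() and not INLINE_LINK.search(line)]

answer = ("Fact one is well sourced [source](https://example.com).\n"
          "Fact two has no citation.")
flagged = uncited_lines(answer)
```

A failed check can then trigger a regeneration or a correction pass before the answer is shown.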

Time-aware research

Ask "What happened in Q4 last year at Palantir?" and the system now knows exactly what time window you mean. Every agent receives today's date and follows explicit rules for interpreting temporal expressions: "last year" means the previous calendar year, "past year" means the last 365 days, and so on.

Search APIs receive the correct date filters, so results are properly scoped to the time period you're asking about.
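
The two rules quoted above ("last year" = previous calendar year, "past year" = trailing 365 days) translate directly into date arithmetic. A sketch of how such expressions could resolve to the date filters the search APIs receive:

```python
from datetime import date, timedelta

def interpret(expression, today):
    # Rules from the text: "last year" means the previous calendar
    # year; "past year" means the last 365 days.
    if expression == "last year":
        return date(today.year - 1, 1, 1), date(today.year - 1, 12, 31)
    if expression == "past year":
        return today - timedelta(days=365), today
    raise ValueError(f"unknown temporal expression: {expression!r}")

# With today fixed, the two expressions resolve to different windows.
today = date(2025, 6, 1)
last_year = interpret("last year", today)
past_year = interpret("past year", today)
```

Passing `today` explicitly (rather than reading the clock inside the function) is what makes the behavior reproducible and testable.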

Fewer failures

If a model provider times out or errors, the old chat simply failed. You'd have to try again.

The new chat uses fallback models. Each research agent tries its primary model first; if that call fails or exceeds its time limit, the agent automatically switches to an alternative provider. You experience fewer failed requests, and the system recovers gracefully instead of giving up.
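
The fallback chain is a small, well-known pattern: try providers in order and let any error fall through to the next. A sketch with illustrative stub providers (not Spark's actual provider list):

```python
def call_with_fallback(prompt, providers):
    # Try each provider in order; any exception (timeout, API error)
    # falls through to the next. Re-raise only if all of them fail.
    last_error = None
    for provider in providers:
        try:
            return provider(prompt)
        except Exception as exc:
            last_error = exc
    raise last_error

def primary(prompt):
    raise TimeoutError("primary provider timed out")

def fallback(prompt):
    return f"answer to {prompt!r}"

result = call_with_fallback("What changed in Q4?", [primary, fallback])
```

In production this would typically wrap each call in an explicit timeout and log which provider answered, but the control flow is the same.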

Copy the research workflow

You can copy the entire research workflow to your clipboard as a structured Markdown table. The export includes the agent name, the question it was given, every search query it ran, and every source it found, with titles and URLs.

This is useful when you need to document how you arrived at a conclusion, share your research methodology with colleagues, or simply keep a record of what was searched.
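
The export described above amounts to serializing each agent's record into table rows. A minimal sketch of what such a Markdown export could look like (the field names and one-row-per-agent layout are assumptions, not the exact export format):

```python
def workflow_to_markdown(agents):
    # One row per agent: name, question asked, queries run, sources found.
    lines = ["| Agent | Question | Queries | Sources |",
             "| --- | --- | --- | --- |"]
    for a in agents:
        queries = "; ".join(a["queries"])
        sources = "; ".join(f"[{title}]({url})" for title, url in a["sources"])
        lines.append(f"| {a['name']} | {a['question']} | {queries} | {sources} |")
    return "\n".join(lines)

table = workflow_to_markdown([{
    "name": "Web Research",
    "question": "What are solid-state batteries?",
    "queries": ["solid-state batteries", "all-solid-state cell"],
    "sources": [("Example article", "https://example.com")],
}])
```

Because the output is plain Markdown, it pastes cleanly into notes, wikis, and pull-request descriptions.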

Everything else you know, preserved

All existing features carry over unchanged:

  • PDF upload and analysis
  • Image upload and analysis
  • Knowledge-base ("My Contents") querying
  • Chat session history
  • Follow-up question suggestions
  • "Help me ask" prompt improvement
  • Copy answer to clipboard
  • Save answer to project

Try the new chat β†’