T3 Chat FAQ

Can I trust that the AI is telling me the truth?

To put it simply: no. Large language models have gotten incredibly good, but they're far from perfect. It's good to double-check all answers you get. Turning on search can help surface more recent information, but it doesn't guarantee correctness.

AI models sometimes "hallucinate" and produce incorrect or irrelevant answers. Sometimes a simple retry will produce a better answer. As mentioned above, we recommend verifying that responses from the model are accurate, and not relying on them as a sole source of information.

Do you use my data for training?

We do not train our own AI models, and we have opted out of training-data collection with our model providers where possible. Models from the following providers have been configured so that user inputs are not used for training:

  • OpenAI: Training is disabled by default, we have not opted in. Reference
  • Anthropic: Training is disabled by default, we have not opted in. Reference
  • Google: Training is disabled on paid models. Reference
  • OpenRouter: We have explicitly disabled the option to route requests to providers that allow training on user inputs. Reference

Sometimes there are new models that can only be used with training enabled. When this is the case, we clearly indicate it so you can make an informed decision about whether to use that model.

If you are bringing your own API key, you are responsible for ensuring that your organization has configured their settings to not allow training if you do not want to allow it.

How does the usage meter work? It seems like it jumps around or drains faster sometimes.

The meter uses two buckets: a Base bucket and an Overage bucket. Usage always spends from Base first, then Overage. Base refills every 4 hours, and Overage refills on your monthly renewal date. If both are depleted, you need to wait for capacity to refill. In settings, these are shown as separate percentage bars so you can see each bucket independently.

As a simple example, suppose usage is measured in "points": Base holds 20 points and Overage holds 200 points.

Whenever you send a message, we reserve an expected cost from your available balances. We apply it to Base first, then Overage. Once the response completes, we settle the final cost by either crediting back unused reserved amount or deducting any additional usage.

Because of this reserve-then-settle flow, your usage can temporarily move up before settling back down, which is why the meter may appear to go up and down at times.

Starting State

Fresh Base bucket and full Overage bucket. Total capacity is 220 points (20 from Base + 200 from Overage).

  • Base: 20/20
  • Overage: 200/200

Partial Base Usage

12 points are used from Base.

  • Base: 8/20
  • Overage: 200/200

Reset Before Base Exhaustion

A reset happens before Base is exhausted, refilling Base to 20 points. Total capacity remains 220.

  • Base: 20/20
  • Overage: 200/200

Base Exhausted

Base hits zero first. Overage is still full at this point.

  • Base: 0/20
  • Overage: 200/200

After 50 Points in 4 Hours

20 points come from Base, then 30 from Overage, leaving Overage at 170.

  • Base: 0/20
  • Overage: 170/200

After Reset (with Overage already used)

Base is refilled, but Overage stays reduced. Total available capacity is now 190 points (20 from Base + 170 from Overage).

  • Base: 20/20
  • Overage: 170/200

Common things that can make cost per message go up:

  • Long threads (more context is sent each turn)
  • Large or many attachments
  • Search-enabled requests
  • More expensive models

Do you have feature XYZ?

We are always looking to improve T3 Chat, and are open to feature requests. If you have a feature you'd like to see, please let us know by opening a feature request on https://feedback.t3.chat.

How do I get support?

If you have any questions or need help, please contact us at support@t3.chat.

How do I set up custom search URLs?

You can start a new chat by opening /new with query parameters.

For example: /new?model=claude-4.6-sonnet&q=Write%20a%20dramatic%20courtroom%20defense%20for%20a%20penguin%20accused%20of%20stealing%20all%20the%20fish

Make sure the prompt in q is URL-encoded.
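As a sketch, the example URL above can be built with `encodeURIComponent`, which handles the URL-encoding of `q` (the model ID is copied from the example; substitute any valid model ID):

```typescript
// Build a /new URL with a URL-encoded prompt.
const prompt =
  "Write a dramatic courtroom defense for a penguin accused of stealing all the fish";
const url = `/new?model=claude-4.6-sonnet&q=${encodeURIComponent(prompt)}`;
console.log(url);
// → /new?model=claude-4.6-sonnet&q=Write%20a%20dramatic%20courtroom%20defense...
```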

q (string, required)

Sends this text as the first user message in the new chat.

Notes: If missing or left as %s, the app redirects to home.

model (string, optional)

Selects which model the new chat uses.

Notes: Use a valid model ID, such as claude-4.6-sonnet.

effort (string, optional)

Sets reasoning effort level for the selected model.

Notes: Invalid or unsupported values fall back to that model's default behavior.

search (boolean, optional)

Enables or disables web search for that new chat.

search_limit (integer, optional)

Sets the max number of search queries or tools for the run.

Notes: Normalized to a whole number and clamped to 1..5.

profile (string, optional)

Chooses the profile for the new chat.

Notes: Accepts either a profile ID or profile name, case-insensitive.

temporary (boolean, optional)

Creates a temporary chat instead of a persisted thread.

Notes: Defaults to false if omitted.