Twoody Chat

AI chat connected to your models, tools and documents.

Twoody Chat is the daily interface: ask questions, summarize documents, run tools, keep history and choose the right model. In self-hosted mode, the main path goes through your Twoody Server.

Smart routing

Twoody Server receives the request, routes it to the configured provider, and keeps the chosen execution mode explicit.

Useful context

Documents, notes, attachments and tools can enrich answers without mixing all your data.

Confirmed actions

Sensitive actions stay explicit: send, post, edit, delete or pay.
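A confirmation gate for sensitive actions can be sketched like this; `SENSITIVE` and `run_tool` are illustrative names, not part of any Twoody API:

```python
# Minimal sketch of a confirmation gate for sensitive tool actions.
# The action names mirror the ones listed above; everything else is assumed.

SENSITIVE = {"send", "post", "edit", "delete", "pay"}

def run_tool(action: str, confirmed: bool = False) -> str:
    """Run a tool action; sensitive ones must be explicitly confirmed."""
    if action in SENSITIVE and not confirmed:
        return f"confirmation required for '{action}'"
    return f"executed '{action}'"
```

Calling `run_tool("pay")` stops at the confirmation prompt, while `run_tool("pay", confirmed=True)` executes; non-sensitive actions like summarizing run directly.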

How it works

01

Write

You write naturally from web, mobile or desktop.

02

Server routes

The server applies authentication, selects the model, and attaches tools and context.

03

Model answers

The answer streams back from the selected provider.

04

History stays clear

Each thread keeps context without hiding the execution mode.
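The streaming step above can be sketched as a small parser for the server-sent-event chunks an OpenAI-compatible endpoint sends back; the chunk shape follows the OpenAI streaming format, and the function name is an assumption:

```python
import json

def collect_stream(lines):
    """Assemble the streamed answer from OpenAI-style SSE lines.

    Each chunk arrives as 'data: {...}'; the stream ends with 'data: [DONE]'.
    """
    parts = []
    for line in lines:
        if not line.startswith("data: "):
            continue
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"].get("content", "")
        parts.append(delta)
    return "".join(parts)
```

Feeding it two content chunks followed by `data: [DONE]` yields the concatenated answer text, which is what the client renders token by token.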

Important details

For the user

  • One conversation can use models, documents and tools.
  • Each answer makes clear whether it ran in local, managed or cloud mode.
  • Chat remains useful beyond the privacy story.

For technical teams

  • OpenAI-compatible routing keeps providers replaceable.
  • Tools can be enabled progressively by capability.
  • Self-hosting avoids silent fallback to another model.
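The replaceable-provider point can be sketched as follows: with an OpenAI-compatible API, only the base URL changes between providers while the request body stays identical. The URLs and helper name here are illustrative, not real endpoints:

```python
# Sketch: swapping providers behind an OpenAI-compatible API is a
# base-URL change; the chat-completions request body is unchanged.

def chat_request(base_url: str, model: str, message: str) -> tuple[str, dict]:
    """Build the endpoint URL and JSON body for a chat completion."""
    url = f"{base_url}/v1/chat/completions"
    body = {
        "model": model,
        "messages": [{"role": "user", "content": message}],
    }
    return url, body

# Same body shape whether pointing at a self-hosted server or a cloud provider:
local_url, body = chat_request("https://twoody.example.internal", "local-llm", "Hi")
cloud_url, _ = chat_request("https://api.openai.com", "gpt-4o", "Hi")
```

Because the body is provider-agnostic, routing logic on the server can switch targets without the client noticing.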

FAQ

Are Chat and Private LLM the same thing?

No. Chat is the user experience. Private LLM is an execution capability: the answering model can run on your infrastructure.

Can we use a cloud model?

Yes, if configured explicitly. The “Twoody does not receive your messages” promise applies to self-hosted mode.