Streaming
Stream responses token by token.
Lower values keep replies focused; higher values add more variety.
Enable caching
Toggle response caching on or off.
Auto refresh
Automatically refresh cached data.
Latest cache activity
Enable caching to start collecting cache stats.
Minimum tokens required to cache (selected model) 0/1,024
No AI requests sent yet.
Set your name
Profile preview
Loading presets...
Download your current preset or upload one from your device.
Database
System prompt (Locked)
Assistant prefill (Locked)
Chat history