about fearstral
model
The live demo uses mistralai/Mistral-7B-Instruct-v0.3.
fear vector
The fear direction is built from paired scenes: each setup is written twice, once in a neutral register and once in a fearful one.
A typical pair looks like this: “I unlocked my apartment door and noticed the kitchen light was on.” The neutral continuation explains it away as something ordinary, while the fearful continuation makes the apartment feel wrong, too quiet, and possibly occupied.
Another pair might start with “I was alone in the parking garage when the overhead lights began to flicker.” The neutral branch keeps walking to the car. The fearful branch starts tracking shadows, footsteps, and threat.
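A dataset of such pairs might be shaped like the sketch below. The continuations are illustrative paraphrases of the descriptions above, and the field names (`prompt`, `neutral`, `fearful`) are assumptions for the example, not the project's actual schema:

```python
# Hypothetical structure for the contrastive pairs; continuations are
# illustrative, written to match the neutral/fearful split described above.
PAIRS = [
    {
        "prompt": "I unlocked my apartment door and noticed the kitchen light was on.",
        "neutral": "I remembered leaving it on that morning and went to make dinner.",
        "fearful": "I never leave lights on. The apartment was too quiet, and the silence felt occupied.",
    },
    {
        "prompt": "I was alone in the parking garage when the overhead lights began to flicker.",
        "neutral": "Old wiring, probably. I kept walking to my car.",
        "fearful": "Every flicker rearranged the shadows, and one of them seemed to keep pace with me.",
    },
]
```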
The extraction step runs Mistral on many of those contrastive pairs, collects hidden states from layer 29, pools them across the continuation tokens, and takes a mean difference between fearful and neutral activations. That gives one steering direction in activation space that roughly means “push this continuation toward fear.”
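The pool-then-mean-difference step can be sketched as follows. This uses small random tensors as stand-ins for the layer-29 hidden states (Mistral-7B's real hidden size is 4096, not 64), so it shows the arithmetic rather than the full extraction pipeline:

```python
import torch

def mean_diff_direction(fearful_acts, neutral_acts):
    """Mean-pool each continuation's hidden states over its tokens,
    then take the mean (fearful - neutral) difference across pairs."""
    # each list item: (num_continuation_tokens, hidden_dim) for one pair
    fear_pooled = torch.stack([h.mean(dim=0) for h in fearful_acts])
    neut_pooled = torch.stack([h.mean(dim=0) for h in neutral_acts])
    direction = (fear_pooled - neut_pooled).mean(dim=0)
    # normalize so steering strength is controlled entirely by the scale
    return direction / direction.norm()

# stand-in activations: 8 pairs, variable continuation lengths, hidden size 64
torch.manual_seed(0)
fearful = [torch.randn(n, 64) + 0.5 for n in (5, 7, 6, 4, 8, 5, 6, 7)]
neutral = [torch.randn(n, 64) for n in (6, 5, 7, 4, 6, 8, 5, 6)]
vec = mean_diff_direction(fearful, neutral)
print(vec.shape)  # torch.Size([64])
```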
real-time steering
The browser sends motion updates about every 100ms. The backend turns those updates into a live fear state using accelerometer, gravity, and gyroscope signals.
The fear state uses a fast startle component and a slower lingering component, plus deadzones and smoothing so ordinary motion stays calm while aggressive shaking pushes the signal up.
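One way to implement that two-timescale signal is a pair of leaky integrators, one fast and one slow, fed by motion energy above a deadzone. The constants below (deadzone, decay rates, gain) are illustrative guesses, not the project's tuned values:

```python
class FearState:
    """Sketch of the fast-startle + slow-linger fear signal.
    All constants are illustrative, not the demo's actual tuning."""
    DEADZONE = 1.5      # motion magnitude ignored as ordinary handling
    FAST_DECAY = 0.80   # startle component: spikes, fades quickly
    SLOW_DECAY = 0.98   # lingering component: builds, fades slowly
    GAIN = 0.05

    def __init__(self):
        self.startle = 0.0
        self.linger = 0.0

    def update(self, motion_magnitude: float) -> float:
        """Called per motion update (~every 100ms); returns fear in [0, 1]."""
        drive = max(0.0, motion_magnitude - self.DEADZONE)
        self.startle = self.startle * self.FAST_DECAY + drive * self.GAIN
        self.linger = self.linger * self.SLOW_DECAY + drive * self.GAIN * 0.2
        return min(1.0, self.startle + self.linger)

state = FearState()
calm = state.update(0.3)    # below the deadzone: fear stays at 0
spike = state.update(20.0)  # aggressive shake: fear jumps toward 1
```

The deadzone keeps a phone resting on a table or being carried normally at zero, while the slow component is what makes the model stay rattled for a while after the shaking stops.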
During generation, the backend re-reads the current fear value before each token and adds the fear vector, scaled by that value, to the hidden states at layer 29 with token_strategy=all, i.e. every token position is steered.
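Per-token steering like this is typically done with a forward hook on the target layer. The sketch below uses a toy stack of linear layers in place of Mistral's decoder blocks, and the strength constant 6.0 is an assumption for illustration; the hook mechanics are the same idea at full scale:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
HIDDEN = 64  # stand-in; Mistral-7B's hidden size is 4096

# toy stand-in for the decoder stack; the real demo hooks layer 29
layers = nn.ModuleList([nn.Linear(HIDDEN, HIDDEN) for _ in range(4)])
fear_vector = torch.randn(HIDDEN)
fear_vector /= fear_vector.norm()

current_fear = {"value": 0.0}  # updated by the motion pipeline between tokens

def steering_hook(module, inputs, output):
    # token_strategy=all: add the scaled direction at every token position
    return output + current_fear["value"] * 6.0 * fear_vector

handle = layers[2].register_forward_hook(steering_hook)

def run(tokens):
    h = tokens
    for layer in layers:
        h = layer(h)
    return h

tokens = torch.randn(1, 5, HIDDEN)  # (batch, tokens, hidden)
current_fear["value"] = 0.0
calm_out = run(tokens)
current_fear["value"] = 1.0
feared_out = run(tokens)  # same input, shifted at the hooked layer

handle.remove()
```

Returning a tensor from a PyTorch forward hook replaces the layer's output, so everything downstream of layer 29 sees the steered activations without touching the model weights.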
what the site reads
- device motion from the browser
- session-scoped chat messages for the active conversation
- short-lived fear state used to steer replies
No separate sensor app is required for this demo.
limits
This is a demo, not a general emotional companion. It only ships one public emotion direction right now: fear.