Signals are all the hype right now. Well. At least in the web-framework space. However, not everything is rosy. In this blog post I will describe some of the dark sides of (Angular) Signals.
TL;DR: Signals are super awesome, because they manage dependencies for us. But they are side-effect-heavy by nature, so you always have to keep those side effects in mind when implementing basically anything.
This blog starts with the basics. If you know what signals are and how they work on a high level, then you can jump to the start of the content.
What are Signals?
Signals. Everyone wants them, everyone adds them. From Svelte Runes to Signals in Angular and Preact. The general concept is framework agnostic though.
In short, I would describe Signals as auto-dependency-tracking event streams, which always hold a value.
Usually, you find an example like this one when Signals are described:
// All examples in this blog post use the Angular Signals system, because it's
// the one I'm most familiar with.
// Store the currently selected (e.g. by the user) number in
// a Signal
const selectedNumber = signal(0);
// "doubled" is a (read-only) Signal, which doubles the selectedNumber.
const doubled = computed(() => selectedNumber() * 2);
console.log(doubled()); // Prints 0
// Update the selected number.
selectedNumber.set(5);
console.log(doubled()); // Prints 10
The most important part in this example is the doubled Signal. It interacts with the selectedNumber, and just by that alone the system knows the dependency between those two Signals.
The second important concept of Signals is effects. Like the name implies, those are for side effects of Signals. For example:
const selectedNumber = signal(0);
effect(() => {
console.log("The selected number is:", selectedNumber());
});
selectedNumber.set(1); // triggers the effect
selectedNumber.set(-13); // triggers the effect
The output will be:
The selected number is: 0
The selected number is: 1
The selected number is: -13
So…
How do Signals work?
On the surface, the trick is actually pretty simple.
Whenever the callback inside a computed (or an effect) is executed, the system basically sets a flag that says "everything being executed right now is in the context of this Signal" (let's call this Signal A). And every time a Signal is called (e.g. Signal B), it checks: "Is this Signal being called within the context of another Signal?" If yes, the called Signal B will record the currently active Signal A as a dependant. So every time Signal B changes, Signal A has to re-evaluate as well.
Because I think most of you readers feel more comfortable reading code, here is the same thing as pseudo TypeScript code:
// This variable tracks the current context a signal is called in
let activeSignal: Signal | undefined;
export function signal(initialValue: any): WriteableSignal {
  let currentValue = initialValue; // Assume that this can be updated via .set()
  // All Signals that depend on this one and need to re-evaluate when it changes
  const dependants = new Set<Signal>();

  return function () {
    if (activeSignal) {
      // This Signal was called while inside the context of another Signal.
      // Whenever this Signal becomes dirty, the other Signal needs to be
      // re-evaluated as well
      dependants.add(activeSignal);
    }
    return currentValue;
  };
}
export function computed(computingFn: () => any): Signal {
let computedValue: any;
const signal = function () {
return computedValue;
};
// Start of the computation //
// First: backup the active Signal - it needs to be restored later
const previousActiveSignal = activeSignal;
// Set the newly created signal as active
activeSignal = signal;
// Execute the computation. Because of the implementation of the Signal above,
// every Signal that is being called right now will add the activeSignal as
// dependant.
computedValue = computingFn();
// Restore the previously active Signal
activeSignal = previousActiveSignal;
return signal;
}
Of course the devil lies in the details, and it’s never as easy as we - the users - get to experience it. But for this blog post, this is all we need to keep in mind: Signals work via side effects.
Side Effects. Side Effects Everywhere
Every time a computed Signal or an effect is evaluated, all Signals will behave differently for the duration of the computation or the effect-ing. That is the whole point of Signals though, so why care?
Side effects are invisible. You - the programmer - always have to keep them in mind when you do stuff.
Let’s Write A Cache
To show you the dark side of side effects, I want to write a simple cache for Things:
interface Thing {
id: string;
name: string;
age: number;
}
class ThingCache {
// The store is immutable to avoid updates via side effects
private readonly store = signal<Readonly<Record<string, Thing>>>({});
// Signals are awesome, because we can write sync code like this!
// No need to "await firstValueFrom()" like you need with RxJS Observables
has(id: string): boolean {
return this.store()[id] != null;
}
// Add something to the cache
add(entity: Thing): void {
this.store.update((store) => {
return {
...store,
[entity.id]: entity,
};
});
}
// Get something from the cache as Signal
get(id: string): Signal<Thing | undefined> {
return computed(() => this.store()[id]);
}
}
That looks like an awesome, observable cache in my opinion. It’s just a few lines of code but most likely already covers a lot of use-cases.
Of course we want to use it, too. I'll quickly whip up a naive implementation (= from the perspective of a developer who does not know how the ThingCache is implemented) to push Things into the cache.
// Let this be our global cache
const cache = new ThingCache();
// Stand-in for e.g. an API response or a WebSocket stream
let thingSignal = signal<Thing>({
id: "first",
name: "Awesome Thing",
age: 10,
});
effect(() => {
// Every time the thing changes...
const thing = thingSignal();
// ...check if the cache has this thing already...
if (!cache.has(thing.id)) {
// ...and add it to the cache if not.
cache.add(thing);
}
});
Part one: Transitivity of Signal Writes
Right off the bat, you will get an error when you try to run this code.
Angular disallows writing to Signals inside an effect. And for good reason: they could lead to an infinite loop. Writing to one Signal may lead to Signal updates that trigger the effect again. Because you (or Angular) cannot guarantee that the Signals you subscribed to have nothing to do with the Signals you are writing to (and never will - code changes could introduce a dependency cycle down the line), it makes perfect sense to block Signal writes.
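To make the risk concrete, here is a minimal sketch of the kind of cycle this check guards against: an effect that reads and writes the same Signal would simply re-schedule itself forever if the check were bypassed.

const count = signal(0);

effect(
  () => {
    // Reading count subscribes this effect to it...
    const current = count();
    // ...and writing to it marks count as dirty, which schedules this very
    // effect again: 0 -> 1 -> 2 -> ... forever.
    count.set(current + 1);
  },
  { allowSignalWrites: true }
);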
In our case, though, we must write to a Signal. How else would we update the cache? So we have two options.
Option 1: Set the allowSignalWrites flag.
effect(
() => {
const thing = thingSignal();
if (!cache.has(thing.id)) {
cache.add(thing);
}
},
{ allowSignalWrites: true }
);
This flag instructs Angular's effect() to skip the check for Signal writes. As you can imagine, this is a code smell. Right now, it doesn't cause any issues, because our mocked "API response" 100% does not depend on the cache. But what if we update the code such that we read the cache first before fetching the Thing:
const thingId = "first";
const thingSignal: Signal<Thing> = cache.has(thingId)
? // The cache has the thing
cache.get(thingId)
: // The cache does not have the thing
fetchFromApi(thingId);
// ...
// there might be many many lines of code here, so that the issue isn't as
// obvious
// ...
effect(
() => {
const thing = thingSignal();
if (!cache.has(thing.id)) {
cache.add(thing);
}
},
{ allowSignalWrites: true }
);
All of a sudden, we have an infinite loop.
Okay, so option 2: Wrap the code with untracked(). Angular's untracked() function allows you to "escape" the reactive context in order to do some operations outside of it. For example:
interface User {
  name: string;
}

const counter = signal(0);
setInterval(() => counter.update((count) => count + 1), 1_000);

const currentUser = signal<User>({ name: "Jane Doe" });

effect(() => {
  const user = currentUser(); // Subscribe to updates of currentUser
  const count = untracked(() => counter()); // Don't subscribe
  console.log(`Current user ${user.name} and counter is ${count}`);
});
This effect will only re-run when the currentUser Signal updates, but not when the counter Signal does.
That means we can trick Angular into not detecting the Signal write by leaving the reactive context first:
effect(() => {
const thing = thingSignal();
if (!cache.has(thing.id)) {
untracked(() => cache.add(thing));
}
});
While you won’t get an error this way either, is this really any better? At least we are properly marking what code is writing to Signals this time instead of just globally disabling the check for the entire effect.
But then again, why does the consumer of the cache have to know that the cache writes to a Signal in the first place? In my opinion, this is an abstraction leak - you are exposing the inner workings of the cache.
So the one benefit the use of untracked() has - in my opinion - is that the implementation can be responsible for handling this case, rather than leaving this to the consumer.
class ThingCache {
// ...
add(entity: Thing): void {
// This could be executed inside a reactive context, so wrap it
untracked(() => {
this.store.update((store) => {
return {
...store,
[entity.id]: entity,
};
});
});
}
// ...
}
Keep in mind, though, that this will not magically solve any infinite loop that you might introduce as described above.
untracked(). untracked() Everywhere
Another consequence of pushing the untracked() down to the cache implementation, though, is that we can never know if our code is being executed inside a reactive context or not. Basically, in every class, service, utility function, etc. you write, you have to consider: "What if this is run inside an effect() or computed()?"
And I think the answer to that question basically boils down to "I guess I have to wrap it with untracked() then." And this is just as bad as not having Signals in the first place, in my opinion. If 90% of our (service) logic has to be wrapped with untracked(), then why even bother creating a whole ecosystem of Signals?
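To make this concrete, here is a sketch of what that defensive style looks like in practice, assuming a hypothetical currentLocale Signal and a small formatting helper that has no idea whether its caller is inside an effect() or computed():

const currentLocale = signal("en-US");

// A plain utility function. It only *reads* a Signal, but because it might be
// called from inside a reactive context, it defensively opts out of tracking -
// otherwise every caller would silently subscribe to currentLocale.
function formatPrice(price: number): string {
  const locale = untracked(() => currentLocale());
  return new Intl.NumberFormat(locale, {
    style: "currency",
    currency: "EUR",
  }).format(price);
}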
And I'm not alone with this issue. Angular's very own async pipe needs to wrap the subscription part with untracked(), because there could be Signal calls somewhere in the code it is executing!
Part two: Unintentional Dependencies
If you looked at the code carefully, you might have already noticed early on: calling cache.has(thingId) will call the cache's this.store(). By extension, this also means that our effect() will subscribe to the cache's store Signal. That in turn means that any changes to the cache will re-run the effect:
let thingSignal = signal<Thing>({
id: "first",
name: "Awesome Thing",
age: 10,
});
effect(() => {
const thing = thingSignal();
if (!cache.has(thing.id)) {
cache.add(thing);
}
});
// Some other code wants to push things into the cache...
setTimeout(() => {
cache.add({
id: "second",
name: "New Content",
age: 1,
});
}, 500);
// ...and triggers our effect.
Again, there are two solutions. The first one is rather straightforward: the cache needs to use untracked() when calling this.store:
class ThingCache {
has(id: string): boolean {
// This is the same as "const store = untracked(() => this.store())"
const store = untracked(this.store);
return store[id] != null;
}
// ...
}
Great, another untracked() 😄
At least this time there's a pattern we can extract: if a service (= our cache) is executing synchronous code and is calling Signals, it needs to wrap the Signal call with untracked().
Side note: If you have an async function, then you have to be doubly careful, because everything above the first await will be executed as soon as the function is called (aka, potentially in the same reactive context as the caller)! So just being in an async function doesn't automatically mean that you can never run into this problem.
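A sketch of that pitfall - lastSaved and somePersistCall() are made-up stand-ins for this example:

// Stand-in for e.g. an HTTP call
declare function somePersistCall(thing: Thing): Promise<void>;

const lastSaved = signal<Thing | undefined>(undefined);

async function saveThing(thing: Thing): Promise<void> {
  // Everything before the first await runs synchronously in the caller's
  // context. If the caller is an effect() or computed(), this read is tracked
  // and the caller now depends on lastSaved without knowing it.
  const previous = lastSaved();
  console.log("Replacing", previous?.name, "with", thing.name);

  // From here on, the code runs in a later microtask, outside of whatever
  // reactive context the caller had.
  await somePersistCall(thing);
  lastSaved.set(thing);
}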
The more sensible solution, however, would be to return a Signal<boolean> instead of just a boolean - basically communicating to the caller: "Hey, this value might update over time." As a bonus, the consumers of your service can still decide not to react to changes by using the untracked() wrapper.
If you can afford to change the call signature of the has() method, this should be the way to go.
The updated code:
class ThingCache {
has(id: string): Signal<boolean> {
return computed(() => this.store()[id] != null);
}
// ...
}
let thingSignal = signal<Thing>({
id: "first",
name: "Awesome Thing",
age: 10,
});
effect(() => {
const thing = thingSignal();
const inCache: Signal<boolean> = cache.has(thing.id);
// When the thing is not (or no longer) in the
// cache, add it to the cache
if (!inCache()) {
cache.add(thing);
}
});
Signal Factories Build Headaches
But not all services that return a Signal are this trivial. Some of them might need to do stuff before they can return a value. Let's write another service.
Over time we have noticed that all our components follow the same pattern: fetch a Thing, add it to the cache, and be updated when the Thing in the cache changes (e.g. is updated later on). This sounds like something we should put into a service, so that all components just have to call a single method to do all of that.
Here is my ThingService:
// This could be a global singleton.
const cache = new ThingCache();
export class ThingService {
// Very simple implementation
fetchAndCache(thingId: string): Signal<Thing | undefined> {
fetchThing(thingId).then((thing) => cache.add(thing));
return cache.get(thingId);
}
}
Great, now let’s use our new service:
// Could be user input
const thingIdSignal = signal("123");
const thingSignal = computed((): Thing | undefined => {
const thingId = thingIdSignal();
const fetchedThing = thingService.fetchAndCache(thingId);
return fetchedThing();
});
What you probably want when writing code like this is that the thingSignal only updates when:
- Either the thingIdSignal changes,
- Or the thing in the cache changes.
But this is not what is happening.
To understand this, we need some graphs. Each node in the graph is a "reactive node" (like a computed() Signal or an effect()). The edges in the graph represent the dependencies between them.
After creating the cache and the thingIdSignal, there are two nodes: the cache's store and the, well, thingIdSignal. When we call computed() to create the thingSignal, the dependency tracking starts and a new node is created - represented by the node thingSignal. This is the state right after calling computed(): there are no dependencies yet.
Now the callback of the computed() is executed. In the first line of the code, the thingIdSignal is called:
const thingId = thingIdSignal();
Because the thingIdSignal is called, Angular knows that there is a dependency between the thingSignal and the thingIdSignal.
The second line of the computed() callback is:
const fetchedThing = thingService.fetchAndCache(thingId);
We have to follow the code into the ThingService implementation. The first line there is an asynchronous call to the API to fetch the Thing:
fetchThing(thingId).then((thing) => cache.add(thing));
It is resolved some time later, so we skip over this for now.
In the next line, fetchAndCache() calls the ThingCache's get():
return cache.get(thingId);
…which will create a new computed Signal:
class ThingCache {
get(id: string): Signal<Thing | undefined> {
return computed(() => this.store()[id]);
}
}
I'll represent the return value of the get() method as get* in the graph.
Now we have to go back to the computed() callback, at line number three:
return fetchedThing();
Okay, so we are executing the Signal, which will execute the implementation of the get* Signal. That's this line of code:
// This is the code of Signal returned by the ThingCache.get()
this.store()[id];
Let’s track this dependency in our graph.
Due to the fetchedThing call, we are furthermore creating a dependency between the thingSignal node and the get* node:
So far so good.
Then the Promise for the fetchThing() call resolves.
As a reminder, this is how the code looked:
fetchThing(thingId).then((thing) => cache.add(thing));
It is updating the cache, hence making the store node dirty. When the store node is dirty, all its dependants need to be re-evaluated.
There is only one dependant, the get* node. When re-evaluating, Angular notices that this node now returns a different value (the newly fetched thing), so it can't re-use the cached value. Hence, get*'s dependants also need to be re-evaluated.
When the thingSignal node is re-evaluated, we have to execute the full callback again:
1. The thingIdSignal is called - the dependency therefore remains as before.
2. The service is called again.
   1. It will call fetchThing() again - this Promise resolves later (again).
   2. It will call cache.get again, which will create a new Signal again - let's call this one get**.
3. The returned Signal is called.
After this interaction, the graph looks like this:
Note: get* is no longer referenced by anything, hence not executed, hence eventually garbage collected.
But as you might have guessed, the Promise from step 2.1 will eventually resolve as well, and all of a sudden we are at step 1 again.
We have successfully created an infinite loop.
There is a solution to this, but it’s not nice. Let me show you:
const thingIdSignal = signal("123");
// This is a Signal that emits Signals
const thingSignalSignal: Signal<Signal<Thing | undefined>> = computed(() => {
const thingId = thingIdSignal();
return thingService.fetchAndCache(thingId);
});
const thingSignal = computed(() => {
return thingSignalSignal()();
});
Why does this work? I’ll start with the graph again, but speed up the process this time.
- The thingSignalSignal only calls the thingIdSignal. The service will still create a new Signal (get*), but there is no dependency between the thingSignalSignal node and the get* node.
- The thingSignal will call the thingSignalSignal and the Signal it returns (the get* node).
This is the resulting dependency graph:
What happens now when the fetchThing Promise resolves?
- The store is updated.
- Hence, the get* gets re-evaluated.
- Hence, the thingSignal is re-evaluated.
But because the thingIdSignal is not dirty, the thingSignalSignal will just return the last returned value. So the ThingService is not called! Great success!
But at what cost?
Our nice looking code from before needs to be split in two. The type of the first Signal is, in my opinion, quite a mess (a Signal that returns a Signal? Are we back to using switchMap now?). And most importantly: all of this needs to be handled by the consumer of the service!
I’ve found a workaround to the consumer issue, but the code is only slightly cleaner that way:
export class ThingService {
// Take a Signal as parameter instead of the raw value
fetchAndCache(thingIdSignal: Signal<string>): Signal<Thing | undefined> {
const thingSignalSignal: Signal<Signal<Thing | undefined>> = computed(
() => {
const thingId = thingIdSignal();
// Only when the thingIdSignal changes, fetch the latest thing
fetchThing(thingId).then((thing) => cache.add(thing));
return cache.get(thingId);
}
);
// Create a second Signal, that "flattens" the thingSignalSignal
const thingSignal = computed(() => {
return thingSignalSignal()();
});
return thingSignal;
}
}
We are essentially doing what the consumer had to do before inside the service method. That way the consumer does not need to handle the very weird infinite loop issue.
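With that, the consumer code from earlier shrinks back to a single call - a quick sketch, reusing the thingIdSignal from above:

const thingIdSignal = signal("123");

// The Signal-of-Signal dance now lives inside the service.
const thingSignal: Signal<Thing | undefined> =
  thingService.fetchAndCache(thingIdSignal);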
But there are drawbacks. What if the service consumer doesn't have a Signal for the thingId? For example, the value could be emitted by an RxJS Observable. Or there could be a new hot thing - like a Signal2 - that is not compatible with Signals?
You might think to yourself, "This is a really convoluted and contrived issue." But it's actually the exact issue we stumbled over when we tried to migrate one of our services from Observables to Signals! Because of this issue we decided to stay with Observables.
You might also think, "None of this would have happened if you didn't have a side effect in your computed()." And you are right! But in reality, we want to write service methods that are useful to us. They should reduce the amount of code we have to write and the cognitive load we have to keep in our meat memory. If this means we have to do some side effect, then there needs to be a way to achieve this. Otherwise we are stuck with writing the same code over and over again.
Part three: The Lack of Visibility
Apart from all the points above, look again at the original effect:
let thingSignal = signal<Thing>({
id: "first",
name: "Awesome Thing",
age: 10,
});
effect(() => {
const thing = thingSignal();
if (!cache.has(thing.id)) {
cache.add(thing);
}
});
Nothing here looks like it would cause any issues! There is no hint that any of this code is writing to a Signal. No linter, no compiler, nothing will complain about this code. Yet it can cause so much headache.
I’m not saying that “compilers need to find and fix everything for us”, but we should (be able to) write code such that a compiler can come in and do sanity checks.
Take “exhaustive switch cases” for example:
type Dog = { type: 'dog'; bark(): void };
type Cat = { type: 'cat'; miau(): void };
type Mouse = { type: 'mouse'; squeak(): void };
type Animal = Dog | Cat | Mouse;

declare const animal: Animal;

switch (animal.type) {
  case 'dog': {
    animal.bark();
    break;
  }
  case 'cat': {
    animal.miau();
    break;
  }
  case 'mouse': {
    animal.squeak();
    break;
  }
  default: {
    // Allow me to tell the compiler "this
    // should have been exhaustive"
    animal satisfies never;
    throw new Error(`Unknown animal type: ${(animal as Animal).type}`);
  }
}
I don't want the compiler to mark each and every switch statement as "this is not exhaustive". But I want the tools in my hand to instruct the compiler to check this for me, such that any code change to the Animal type will trigger a compiler error here.
Same for the Signal issues. One idea is to mark functions that read Signals with something like reading and functions that write to Signals with writing. We could then write:
reading function cacheHas(thingId: string): boolean {
  return cache.store()[thingId] != null;
}

writing function addToCache(thing: Thing): void {
  cache.store.update((store) => ({
    ...store,
    [thing.id]: thing,
  }));
}
It would work like async functions, but for Signals.
But this should be the last resort, as we already have a war between blue and red functions in JavaScript.
We can also try to establish a naming convention, such that function names clearly tell the consumer what a function does. But then again, who is enforcing this naming convention? What if I forget to name something properly? Or the implementation changes and now the function name does not make any sense anymore?
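For illustration, such a convention could look like this - the read/write prefixes are purely a team agreement in this sketch, nothing enforces them:

// Convention: functions prefixed with "read" create Signal dependencies...
function readThingFromCache(id: string): Thing | undefined {
  return cache.get(id)();
}

// ...and functions prefixed with "write" update Signals.
function writeThingToCache(thing: Thing): void {
  cache.add(thing);
}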
In summary, I'm still a very big fan of Signals. Though after using them for a couple of months now in production, I've stubbed my toes a bit too often. After the hype settled, I'm a lot more cautious about them now than I was before. But I'm still hoping that people much smarter than me will find solutions to my problems.