one thing that's missing in JS to fully harness the benefits of immutability is some kind of equality semantics where two structurally identical objects are treated the same
even when performance might not be an issue or an objective, there are other concerns about a userland implementation: lack of syntax is a bummer, and lack of support in the ecosystem is the other giant one - for example, can I use this as props for a React component?
yes, I'm aware of composites (and of the sad fate of Records and Tuples) and I'm hopeful they will improve things. One thing that I'm not getting from the spec is the behavior of the equality semantics in case a Date (or a Temporal object) is part of the object.
In other words, what is the result of Composite.equal(Composite({a: new Date(2025, 10, 19)}), Composite({a: new Date(2025, 10, 19)}))? What is the result of Composite.equal(Composite({a: new Temporal.PlainDate(2025, 10, 19)}), Composite({a: new Temporal.PlainDate(2025, 10, 19)}))?
Also, interestingly, the ClojureScript compiler in many cases emits safer JS code despite being dynamically typed. TypeScript removes all the type info from the emitted JS, while ClojureScript retains strong typing guarantees in the compiled code.
Cool. I didn’t realize ML had such a focus on immutability as well. I have never done any serious work in ML and it’s a hole in my knowledge. I have to go back and do a project of some sort using it (and probably one in OCaml as well). What data structures does ML use under the hood to keep things efficient? Clojure uses Bagwell’s Hash Array-Mapped Tries (HAMT), but Bagwell only wrote the first papers on that in about 2000. Okasaki’s book came out in 1998, and much of the work around persistent data structures was done in the late 1980s and 1990s. But ML predates most of that, right?
Immutability is also overrated. I mostly blame react for that. It has done a lot to push the idea that all state and model objects should be immutable. Immutability does have advantages in some contexts. But it's one tool. If that's your only hammer, you are missing other advantages.
The only benefit to mutability is efficiency. If you make immutability cheap, you almost never need mutability. When you do, it’s easy enough to expose mechanisms that bypass immutability. For instance in Clojure, all values are immutable by default. Sometimes, you really want more efficiency and Clojure provides its concept of “transients”[1] which allow for limited modification of structures where that’s helpful. But even then, Clojure enforces some discipline on the programmer and the expectation is that transient structures will be converted back to immutable (persistent) structures once the modifications are complete. In practice, there’s rarely a reason to use transients. I’ve written a lot of Clojure code for 15 years and only reached for it a couple of times.
Immutability is really valuable for most application logic, especially:
- State management
- Concurrency
- Testing
- Reasoning about code flow
Not a panacea, but calling it "overrated" usually means "I haven't felt its benefits yet" or "I'm optimizing for the wrong thing"
Also, experiencing immutability benefits in a mutable-first language can feel like 'meh'. In immutable-first languages - Clojure, Haskell, Elixir - immutability feels like a superpower. In JavaScript, it feels like a chore.
A lot of these concepts don't mean anything to most developers I've found. A lot of the time I struggle to get the guy I work with to compile and run his code. Even something relatively simple as determinism and pure functions just isn't happening.
This is shockingly common and most developers will never ever hear of Clojure, Haskell or Elixir.
I really feel there are like two completely different developer worlds. One where these things are discussed, and the one I am in, where I am hoping that I don't have to make a Teams call to tell a guy "please can you make sure you actually run the code before making a PR" because my superiors won't can him.
> Not a panacea, but calling it "overrated" usually means "I haven't felt its benefits yet" or "I'm optimizing for the wrong thing"
I think immutability is good, and should be highly rated. Just not as highly rated as it is. I like immutable structures and use them frequently. However, I sometimes think the best solution is one that involves a mutable data structure, which is heresy in some circles. That's what I mean by overrated.
Also, kind of unrelated, but "state management" is another term popularized by react. Almost all programming is state management. Early on, react had no good answer for making information available across a big component tree. So they came up with this idea called "state management" and said that react was not concerned with it. That's not a limitation of the framework, see; it's just not part of the mission statement. That's "state management".
Almost every programming language has "state management" as part of its fundamental capabilities. And sometimes I think immutable structures are part of the best solution. Just not all the time.
> I like immutable structures and use them frequently.
Are you talking about immutable structures in Clojure(script)/Haskell/Elixir, or TS/JS? Because like I said - the difference in experience can be quite drastic. Especially in the context of state management. Mutable state is the source of many different bugs and frustration. Sometimes it feels that I don't even have to think of those in Clojure(script) - it's like the entire class of problems simply is non-existent.
Of the languages you listed, I've really only used TS/JS significantly. Years ago, I made a half-hearted attempt to learn Haskell, but got stuck on vocabulary early on. I don't have much energy to try again at the moment.
Anyway, regardless of the capabilities of the language, some things work better with mutable structures. Consider a histogram function. It takes a sequence of elements, and returns tuples of (element, count). I'm not aware of an immutable algorithm that can do that in O(n) like the trivial algorithm using a key-value map.
Try Clojure(script) - everything that felt confusing in Haskell becomes crystal clear, I promise.
> Consider a histogram function.
You can absolutely do this efficiently with immutable structures in Clojure, something like
(reduce (fn [acc x]
          (update acc x (fn [v] (inc (or v 0)))))
        {}
        coll)
This is O(n) and uses immutable maps. The key insight: immutability in Clojure doesn't mean inefficiency. Each `update` returns a new map, but:
1. Persistent data structures share structure under the hood - they don't copy everything
2. The algorithmic complexity is the same as mutable approaches
3. You get thread-safety and easier reasoning as a bonus
In JS/TS, you'd need a mutable object - JS makes mutability efficient, so immutability feels awkward.
But Clojure's immutable structures are designed for this shit - they're not slow copies, they're efficient data structures optimized for functional programming.
> immutability in Clojure doesn't mean inefficiency.
You are still doing a gazillion allocations compared to:
for (let i = 0; i < data.length; i++) { hist[data[i]] = (hist[data[i]] ?? 0) + 1; }
But apart from that the mutable code in many cases is just much clearer compared to something like your fold above. Sometimes it's genuinely easier to assemble a data structure "as you go" instead of from the "bottom up" as in FP.
Sure, that’s faster. But do you really care? How big is your data? How many distinct things are you counting? What are their data types? All that matters. It’s easy to write a simple for-loop and say “It’s faster.” Most of the time, it doesn’t matter that much. When that’s the case, Clojure allows you to operate at a higher level with inherent thread safety. If you figure out that this particular code matters, then Clojure gives you the ability to optimize it, either with transients or by dropping down into Java interop where you have standard Java mutable arrays and other data structures at your disposal. When you use Java interop, you give up the safety of Clojure’s immutable data structures, but you can write code that is more optimized to your particular problem. I’ll be honest that I’ve never had to do that. But it’s nice to know that it’s there.
The allocation overhead rarely matters in practice - in some cases it does. For the majority of "general-purpose" tasks like web services, etc., it doesn't - GC is extremely fast; allocations are cheap on modern VMs.
The second point I don't even buy anymore - once you're used to `reduce`, it's equally (if not more) readable. Besides, in practice you don't typically use it - there are tons of helper functions in the core library to deal with data. I'd probably use `(frequencies coll)` - I just didn't mention it so it didn't feel like I'm cheating. One function call - still O(n), idiomatic, no reduce boilerplate, intent is crystal clear. Aggressively optimized under the hood and far more readable.
Let's not get into strawman olympics - I'm not selling snake oil. Clojure wasn't written in some garage by a grad student last week - it's a mature and battle-tested language endorsed by many renowned CS people, there are tons of companies using it in production. In the context of (im)mutability it clearly demonstrates incontestable, pragmatic benefits. Yes, of course, it's not a silver bullet, nothing is. There are legitimate cases where it's not a good choice, but you can argue that point pretty much about any tool.
If there were a language that didn't require pure and impure code to look different but still tracked mutability at the type level like the ST monad (so you can't call an impure function from a pure one) - so not Clojure - then that'd be perfect.
But as it stands immutability often feels like jumping through unnecessary hoops for little gain really.
There's no such thing as "perfect" for everyone and for every case.
> feels like jumping through unnecessary hoops for little gain really.
I dunno what you're talking about - Apple runs their payment backend, Walmart their billing system, Cisco their cybersec stack, Netflix their social data analysis; Nubank empowers entire Latin America - they're all running Clojure, pushing massive amounts of data through it.
I suppose they just have shitload of money and can afford to go through "unnecessary hoops". But wait, why then tons of smaller startups running on Clojure, on Elixir? I guess they just don't know any better - stupid fucks.
But ok, if mutability is always worse, why not use a pure language then? No more cowardly swap! and transient data structures or sending messages back and forth like in Erlang.
But then you get to monads (otherwise you'd end up with Elm and I'd like to see Apple's payment backend written in Elm), monad transformers, arrows and the like and coincidentally that's when many Clojure programmers start whining about "jumping through unnecessary hoops" :D
Anyway, this was just a private observation I've reached after being an FP zealot for a decade, all is good, no need to convert me, Clojure is cool :)
Clojure is not "cool". Matter of fact, for a novice it may look distasteful, it really does. Ask anyone with a prior programming experience - Python, JS, Java to read some Clojure code for the first time and they start cringing.
What Clojure actually is - it is a "down to earth PL"; it values substance over marketing and prioritizes developer happiness in the long run - which comes in a spectrum; it doesn't pretend everyone wants the same thing. A junior can write useful code quickly, while someone who wants to dive into FP theory can do that too. Both are first-class citizens.
One doesn't need to "wear a tie" to learn Clojure - syntax is so simple it can be explained on a napkin. You need to get:
1. An editor with structural editing features - google: "paredit vim/emacs/sublime/etc.", on VSCode - simply install Calva.
2. How to connect to the REPL. Calva has the quickstart guide or something like that.
3. How to eval commands in place. Don't type them directly into the REPL console! You can, but that's not how Lispers typically work. They examine the code as they navigate/edit it - in place. It feels like playing a game - very interactive.
That's all you need to know to begin with. VSCode's Calva is great to mess around with. Even if you don't use it (I don't), it's good for beginners.
Knowing Clojure comes in super handy, even when you don't write any projects in it - it's one of the best tools to dissect some data - small and large. I don't even deal with json to inspect some curl results - I pipe them through borkdude/jet, then into babashka, and in the REPL I can filter, group, sort, slice, dice, salt & pepper that shit; I can even throw some visualizations on top - it looks delicious; and it takes not even a minute to get there - if I type fast enough, I slash through it in seconds!
Honestly, Clojure feels to be the only no bullshit, no highfalutin, no hidden tricks language in my experience, and jeeeesus I've been through just a bit more than a few - starting with BASIC in my youth and Pascal and C in college; then Delphi, VB, then dotnet stuff - vb.net, c#, f#, java, ruby; all sorts of altjs shit - livescript, coffeescript, icedcoffeescript, gorillascript, fay, haste, ghcjs, typescript, haskell, python, lua, all sorts of Lisps; even some weird language where every operator was in Russian; damn, I've been trying to write some code for a good while. I'm stupid or something but even in years I just failed to find a perfect language to write perfect code - all of dem feel like they got made by some motherfluggin' annoyin' bilge-suckin' vexin' barnacle-brained galoots. Even my current pick of Clojure can be sometimes annoying, but it's the least irksome one... so far. I've been eyeing Rust and Zig, and they sound nice (but every one of dem motherfuckers look nice before you start fiddling with 'em) yet ten years from now, if I'm still kicking the caret, I will be feeding some data into a clj repl, I'm tellin' ya. That shit just fucking works and makes sense to me. I don't know how making it stop making sense, it just fucking does.
I just want a way of doing immutability until production and letting a compiler figure out how to optimize that into potentially mutable, efficient code, since it can rely on those guarantees.
Clojure's persistent data structures are extremely fast and memory efficient. Yes, it's technically not completely zero-overhead; pragmatically speaking, the overhead is extremely tiny. Performance usually is not a bottleneck - typically you're I/O bound or algorithm-bound, not immutability-bound. When it truly matters, you can always drop to mutable host-language structures - Clojure is a "hosted" language, it sits atop your language stack - JVM/JS/Dart - so it all depends on the runtime. When in javaland, JVM optimizations feel like blackmagicfuckery - there's JIT, escape analysis (it proves objects don't escape and stack-allocates them), dead code elimination, etc. For like 95% of use cases, using an immutable-first language (in this example Clojure) is almost never a perf problem.
Haskell can be even faster because it's pure by default, so the compiler optimizes aggressively.
Elixir is a bit of a different story - it might be slower than Clojure for CPU-bound work, but only because BEAM focuses on consistent (not peak) performance.
Pragmatically, for tasks that are CPU-bound where the requirement is "absolute zero-cost immutability", Rust is a great choice today. However, the trade-off is that the development cycle is dramatically slower in Rust compared to Clojure. The REPL-driven nature of Clojure allows you to prototype and build very fast.
From many utilitarian standpoints, Clojure is an enormously practical language; I highly recommend getting some familiarity with it, even if it feels very niche today. I think it was Stu Halloway who said something like: "when Python was the same age as Clojure, it was also a niche language".
This doesn’t make much sense. One of the benefits of immutability is that once you create a data structure, it doesn’t change and you can treat it as a value (pass it around, share it between threads without cloning it, etc.). If you now allow modifications, you’re suddenly violating all those guarantees and you need to write code that defensively makes clones, so you’re right back where you started. In Clojure, you can cheat at points with transients where the programmer knows that a certain data structure is only seen by a single thread of execution, but you’re still immutable most of the time.
Depends on your target. Clojure targets the JVM by default and that has very different constraints than say, compiling to JavaScript for the browser or node.
For compiling to a JS engine this would be great, because immutability has a runtime cost
The runtime cost of using ClojureScript is undeniably there, but for most applications it's a pretty negligible price to pay for the big wins. In practice, ClojureScript apps can often perform faster than similar apps built traditionally - especially for things like render optimization (immutable data enables cheap equality checks for memoization, preventing unnecessary re-renders), data transform pipelines (transducers give you lazy evaluation, great for filtering/mapping through large datasets), and caching (immutable data is safe to cache indefinitely, so you don't have to worry about stale data).
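To make the cheap-equality-check point concrete, here's a minimal TypeScript sketch (the helper name and shape are mine, not from any framework):

// Hypothetical memoization helper: with immutable updates, a changed value
// is always a new object, so reference equality (===) is a sound O(1)
// "did anything change?" check - no deep compare needed.
function memoizeByRef<T extends object, R>(fn: (input: T) => R): (input: T) => R {
  let lastInput: T | undefined;
  let lastResult!: R;
  return (input: T): R => {
    if (input !== lastInput) {
      lastInput = input;
      lastResult = fn(input);
    }
    return lastResult;
  };
}

// Usage: the computation reruns only when state is a new object.
const summarize = memoizeByRef((state: { items: readonly string[] }) =>
  state.items.join(", "));

This is essentially the idea behind things like React.memo: mutation would silently defeat the check, immutability makes it trivially correct.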
You guys keep worrying about some theoretical "costs" - in practice, I have yet to encounter a problem that genuinely makes it so impossibly slow that ClojureScript just outright can't be used. Situations where it incurs a practical cost are outliers, not the general rule.
Not all, and it is always preferable for it not to have a cost.
> I have yet to encounter a problem that genuinely makes it so impossibly slow that ClojureScript just outright can't be used
There’s a big swath of work that could benefit from the development streamlining that something like ClojureScript or similar projects can give, but where any performance hit is deadly, like e-commerce.
There is also the fact that it doesn’t have 100% bindings to the raw JS and DOM APIs, if I recall correctly; it’s often wrapped around React, or assumed to be.
> Also, experiencing immutability benefits in a mutable-first language can feel like 'meh'.
I felt that way in the latest versions of Scheme, even. It’s bolted on. In contrast, in Clojure, it’s extremely fundamental and baked in from the start.
exactly, react could not deal with mutable objects, so they decided to make immutability seem like something that, if you didn't use it, meant you didn't understand programming.
It's redundant in a single-threaded environment. Everyone moved to mobile while pages are getting slower and slower, using more and more memory. This is not the way. Immutability has its uses, but it's not good for most web pages.
Yes, JS runs user code in a single-threaded environment, but immutability still provides immense value: predictability, simpler debugging, time-travel debugging, react/framework optimizations.
Modern JS engines are optimized for short-lived objects, and creating new objects instead of mutating uses more memory only temporarily. The performance impact of immutability is negligible compared to so many other factors (large bundles, unoptimized images, excessive DOM manipulation).
You're blaming the wrong thing for overblown memory use. I don't know a single website that is bloated and slow only because the makers decided to use immutable datastructures. In fact, you might be exactly incorrect - maybe web pages are getting slower and slower because we're now trying to have more logic in them, building more sophisticated programs into them, and the problem is exactly that - we are reaching the point where it is no longer simple to reason about them? Reasoning about the code in an immutable-first PL is so much simpler, you probably have no idea, otherwise you wouldn't be saying "this is not the way"
We shouldn't forget that there are trade-offs, however. And it depends on the language's runtime in question.
As we all know, TypeScript is a super-set of JavaScript, so at the end of the day your code is running as an interpreted language in V8, JavaScriptCore, or SpiderMonkey, depending on what browser the end user is using. It is also a loosely typed language with zero concept of immutability at the native runtime level.
And immutability in JavaScript, without native support that we could hopefully see in some hypothetical future version of EcmaScript, has the potential to impact runtime performance.
I work for a SaaS company that makes a B2B web application that has over 4 million lines of TypeScript code. It shouldn't surprise anyone to learn that we are pushing the browser to its limits and are learning a lot about scalability. One of my team-mates is a performance engineer who has code checked into Chrome and will often show us what our JavaScript code is doing in the V8 source code.
One expensive operation in JavaScript is cloning objects (which, in JavaScript, includes arrays). If you do that a lot... if, say, you're using something like Redux or ngrx where immutability is a design goal and so you're cloning your application's runtime state object with each and every single state change, you are extremely de-optimized for performance depending on how much state you are holding onto.
And, for better or worse, there is a push towards making web applications as stateful as native desktop applications. Gone are the days where your servers can own your state and your clients just be "dumb" presentation and views. Businesses want full "offline mode." The relationship is shifting to one where your backends are becoming leaner .. in some cases being reduced to storage engines, while the bulk of your application's implementation happens in the client. Not because we engineers want to, but because the business goal necessitates it.
Then consider the spread operator, and how much you might see it in TypeScript code:
const foo = {
  ...bar, // shallow-clones bar, so the cost of this simple expression is pegged to how large that object is
  newPropertyValue,
};

// same thing: clones the original array in order to push a single item,
// because "immutability is good, because I was told it is"
const foo2 = [...array, newItem];
And then consider all of the "immutable" Array functions like .reduce(), .map(), .filter()
They're nice, syntactically ... I love them from a code maintenance and readability point of view. But I'm coming across "intermediate" web developers who don't know how to write a classic for-loop and will make an O(N) operation into an O(N^3) because they're chaining these together with no consideration for the performance impact.
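To illustrate the trap (hypothetical shapes and names, not from any real codebase; one nested scan already makes it quadratic, and deeper nesting compounds it):

type User = { id: string; name: string; active: boolean };
type Order = { userId: string };
declare const users: User[];
declare const orders: Order[];

// Chained version: the nested .filter() rescans all orders once per user,
// so this is O(users * orders).
const totals = users
  .filter(u => u.active)
  .map(u => ({
    name: u.name,
    orderCount: orders.filter(o => o.userId === u.id).length,
  }));

// Index first, then a single pass: O(users + orders).
const countByUser = new Map<string, number>();
for (const o of orders) {
  countByUser.set(o.userId, (countByUser.get(o.userId) ?? 0) + 1);
}
const totals2 = users
  .filter(u => u.active)
  .map(u => ({ name: u.name, orderCount: countByUser.get(u.id) ?? 0 }));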
And of course you can write performant code or non-performant code in any language. And I am the first to preach that you should write clean, easy to maintain code and then profile to discover your bottlenecks and optimize accordingly. But that doesn't change the fact that JavaScript has no native immutability, and the way to write immutable JavaScript will put you in a position where performance is going to be worse overall, because the tools you are forced to reach for, as a matter of course, are themselves inherently de-optimized.
Like @drob518 noted already - the only benefit of mutation is performance. That's all. That's the only, distinct, single, valid point for it. Everything else is nothing but problems. Mutable shared state is the root of many bugs, especially in concurrent programs.
"One of the most difficult elements of program design is reasoning about the possible states of complex objects. Reasoning about the state of immutable objects, on the other hand, is trivial." - Brian Goetz.
So, if immutable, persistent collections are so good, and the only problem is that they are slower, then we just need to make them faster, yes?
That's the only problem that needs to be solved in the runtime to gain countless benefits, almost for free, which you are acknowledging.
But, please, don't call it a "trade-off" - that implies that you're getting some positive benefits on both sides, which is inaccurate and misleading - you should be framing mutation as "safety price for necessary performance" - just like Rust describes unsafe blocks.
> But, please, don't call it a "trade-off" - that implies that you're getting some positive benefits on both sides, which is inaccurate and misleading - you should be framing mutation as "safety price for necessary performance" - just like Rust describes unsafe blocks.
I would have agreed with that statement a few years ago.
But what I am seeing in the wild, is an ideological attachment to the belief that "immutability is always good, so always do that"
And what we're seeing is NOT a ton of bugs and defects that are caused by state mutation bugs. We're seeing customers walk away with millions of dollars because of massive performance degradation caused, in some part, by developers who are programming in a language that does not support native immutability but they're trying to shoe-horn it in because of a BELIEF that it will for sure, always cut down on the number of defects.
Everything is contextual. Everything is a trade-off in engineering. If you disagree with that, you are making an ideological statement, not a factual one.
Any civil engineer would talk to you about tolerances. Only programmers ever say something is "inherently 'right'" or "inherently 'wrong'" regardless of other situations.
If your data is telling you that the number one complaint of your customers is runtime performance, and a statistically significant number of your observed defects can be traced to trying to shoe-horn in a paradigm that the runtime does not support natively, then you've lost the argument about the benefits of immutability. In that context, immutability is demonstrably providing you with negative value and, by saying "we should make the runtime faster", you are hand-waving to a degree that would and should get you fired by that company.
If you work in academia, or are a compiler engineer, then the context you are sitting in might make it completely appropriate to spend your time and resources talking about language theory and how to improve the runtime performance of the machine being programmed for.
In a different context, when you are a software engineer who is being paid to develop customer facing features, "just make the runtime faster" is not a viable option. Not something even worth talking about since you have no direct influence on that.
And the reason I brought this up, is because we're talking about JavaScript / TypeScript specifically.
In any other language, like Clojure, it's moot because immutability is baked in. But within JavaScript it is not "nice" to see people trying to shoe-horn that in. We can't, on the one hand, bitch and moan about how poorly websites all over the Internet are performing on our devices while also saying "JavaScript developers should do immutability MORE."
At my company, measurable performance degradation is considered a defect that would block a release. So you can't even say you're reducing defects through immutability if you can point to one single PR that causes a perf degradation by trying to do something in an immutable way.
So yeah, it's all trade-offs. It comes down to what you are prioritizing. Runtime performance or data integrity? Not all applications will value both equally.
Alright, I admit, I have not worked on teams where immutable.js was used a lot, so I don't have any insight specifically on its impact on performance.
Still, personally I wouldn't call immutability a "trade-off", even in a JS context - for the majority of kinds of apps, it's still a big win. I've seen that many times with ClojureScript, which doesn't have a native runtime - it eventually emits JavaScript. I love Clojure, but I honestly refuse to believe that it invariably emits higher-performing JS code compared to vanilla JS with Immutable.js on top.
For some kinds of apps, yes, for sure, performance is the ultimate priority. In my mind, that's a similar "trade-off" to using C or even assembly because of required performance. It's undeniably important, yet these situations represent only a small fraction of overall use cases.
But sure, I agree with everything you say - Immutability is great in general, but not for every given case.
Yes, if your immutability is implemented via simple cloning of everything, it’s going to be slow. You need immutable, persistent data structures such as those in Clojure.
ReScript/ReasonML is still in development, and a more seasoned dev team can easily pick it as a better alternative to TypeScript.
It's a bummer Haxe did not promote itself more for the web, as it's an amazingly good piece of tech. The language shows its age, but has an awesome type system and metaprogramming capabilities.
It's quite rare to see interop between compile-to-JS languages, though. Also rare to see projects using more than one compile-to-JS language (if not in the middle of a rewrite/port). YMMV.
> It's quite rare to see interop between compile-to-JS languages, though
JS interop in ClojureScript is dead simple. Moreover, you can have shared logic between different runtimes. The promise of Node.js for code re-use turned out, in practice, to be not as straightforward, even though you have a JS runtime in both places. With ClojureScript, you can have the shared logic in the same namespace - it's mindblowing, you can have functions that work on both - JVM and JS.
I'm not talking about X->JS->X interop, but X->Y->JS->Y->X interop, where X, Y = compile-to-JS languages.
> it's mindblowing, you can have functions that work on both - JVM and JS
That's basically what you could do in Haxe long before Node.js (which made server/client code sharing popular) came out. You could target a huge number of targets from a single codebase.
Are you saying that getting two different compile-to-JS languages to interoperate is messy and you have to go through multiple transpilation layers to make them talk to each other?
The ClojureScript way isn't about transpiling between different compile-to-JS languages. It's simpler: write once in Clojure, compile to both JVM and JS directly. No intermediate language chains needed. And you are free to use whatever JS and Java libs you want, directly.
Yes, sure - valid point about Haxe, you're right, it actually did this before ClojureScript or even before Node existed. IIRC Haxe could compile a single codebase to multiple targets. That multi-target approach, though, required writing in a lowest-common-denominator language. Cljs is practical in the sense that you get the full power of Clojure on the JVM side and reasonable JS semantics on the front-end - without compromise. Haxe often meant sacrificing language features to stay compatible across all targets.
ClojureScript is surprisingly pragmatic in that sense and works well. The downside: you can use Cljs on its own, but it truly shines when paired with Clojure, and the JVM, despite being an amazing piece of tech, has a marketing problem - people hear JVM and immediately think Java.
This is some criticism that lacks any depth or insight.
I've deployed projects in Elm, Scala, Clojure, and PureScript, and TypeScript has many great qualities that the others don't have.
It's an incredibly powerful language with a great type system which requires some effort to understand (e.g. 99% of candidates don't even know what a mapped type is - it's written in the docs...) and minimal discipline to not fall into the JS pitfalls.
On top of that you have access to tons of tools and libraries, which alternative ecosystems either don't have (e.g. no compile-to-js language) or have to interoperate with at js level (anything from Reason to Gleam) anyway.
Beyond that, there's other important considerations in choosing a language beyond its syntax/semantics and ecosystem, such as hiring or even AI-friendliness.
Stricter TS is absolutely a valuable effort to chase.
The issue with TS is that it's way too easy to fall back to unsafe code. Also, the TS type system is WAY, WAY too complex. They keep piling on hard-to-grasp niche features that have made it really hard to learn.
The TS sweet spot was (imho) somewhere around the 1.8-2.0 era. These days you can run Doom in the type system.
I can't speak to hiring, as I don't hire a dev that knows language X; I hire engineers who know the ins and outs of how software should be written, and know when to pick Go, when to pick OCaml, and when to go with C/Rust.
Also, I would never use 99.9% of npm packages (JS or TS), so I don't really care that much.
As an example, writing type definitions for ReasonML is not really that hard; as a benefit, you know exactly what parts you use.
Also, I don't use AI, and we don't accept any PRs that are vibe-coded.
It's not that easy to fall into unsafe code if you know what you're doing, don't have TypeScript skill issues, and use the right libraries, such as fp-ts or effect-ts (an evolution of Scala's ZIO on TypeScript).
Those ecosystems are huge, by the way. Effect is still very niche-y due to its usage of functional programming and effect systems - but niche in a relative sense: it gets more downloads than Angular.
I have my beefs with TypeScript's complexity, verbosity, and limits, don't get me wrong, but I don't see any realistic alternative for anyone who wants to write type-safe code.
I loved Elm and Reason, but they are not realistic nor productive choices unless your IT team has a North American startup budget.
In my real world, we don't have the budgets to lure the kind of brilliant engineers that know when to pick Ocaml and when to go with Rust (or have ever used any of them).
I get the idea of libraries like fp-ts and effect-ts, but like most libraries in this area, they are just bolt-ons. I don't like to write unidiomatic code for a given language that is not designed for it. This means that if I wrote an applicative that must satisfy the homomorphism law in JavaScript:
APP.ap(APP.of(ab), APP.of(a)) = APP.of(ab(a))
I'm pretty sure not a single dev would understand what the hell is going on, and I would have to buy quite a few beers to get that passed in a CR.
I write idiomatic code for the language:
E.g. in Go I loop and use mutability when it's the correct thing to do, but in OCaml I almost exclusively use recursion and favour a monadic API; JavaScript being a middle ground, I tend to just use the builtins map/filter/reduce and pals for most things. I don't need a monad for the sake of it.
That said, I tend to step out of this rule when it comes to errors, and avoid throwing as much as possible. Errors as values is invaluable.
> I'm pretty sure not a single dev would understand what the hell is going on, and I would have to buy quite a few beers to get that passed in a CR.
That's a questionable example imho, for a few reasons.
The first is that understanding an applicative functor requires you to first understand map, apply and lift.
Starting from an applicative, in any language, including Haskell, is like starting from a monad (as you know very well, they are almost the same thing, as a monad is an applicative with one more rule), and then it's quite clear why we get endless blog posts that leave you no brighter.
The second is that we're talking about TypeScript, not JavaScript.
Let's make an example.
Given a definition for map in TypeScript:
map: <A, B>(f: (a: A) => B) => ((fa: F<A>) => F<B>)
which requires minimal TypeScript understanding to read (given a function from A to B and a value of type F<A>, you get F<B>).
If you can read map you can read ap in TypeScript:
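Presumably something like this (my sketch by analogy with map; `F` remains a pseudo type constructor, since TypeScript has no native higher-kinded types):

ap: <A, B>(fab: F<(a: A) => B>) => ((fa: F<A>) => F<B>)

i.e. apply a wrapped function to a wrapped value.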
If I need something that must be proven correct. Something that really can't (ungracefully) fail, where there are no null pointer exceptions or other panics (e.g. Rust has unwrap, and Go has nils).
Something that does not need to run at C speeds (both Go and OCaml have similar performance, somewhere around 80-90% of C/Rust give or take), and when I want to have a fast feedback cycle (the OCaml compiler is even faster than Go's compiler).
Basically, OCaml when I want to have a good sleep at night and be sure I'm not paged at 2AM for some weird panic or null pointer error.
That's why I glue stuff together: I might have some "this-is-critical-as-fuck" code written in OCaml, a webserver written in Go, and some perf-critical feature written in Rust/C.
It all depends on the requirements. It's a shame devs shy away from that and use their only tool (language) for all things, leading to more brittle and overcomplex software.
This is the same _type_ of elitism that I've seen from the Scala community and really makes TS seem unpalatable. Big "A monad is just a monoid in the category of endofunctors" vibes.
Yeah, I'm quite sure inverting trees is more relevant to real world programming than knowing the fundamentals of a type system people say they are expert in. /s
In any case, I am against technical interviews and never ask any technical questions beyond just general talk of how people like to work, and their previous projects.
But I know for a fact the overwhelming majority of people that say they know TypeScript don't know a tenth of what's written in the docs, and mapped types were just a very basic example.
It is also how C++ and Objective-C got users from C land.
The examples on the JVM and CLR got through by targeting the same bytecode, and Swift, even if imposed from above, also had to make interop with Objective-C first class, and is now in the process of doing the same for C++.
Turns out adoption is really hard if a full rewrite is asked for, unless someone gets to pay for those rewrites, or gets to earn some claim to fame, like in the RIG and RIR stuff.
I am a fan of immutability. I was toying around with javascript making copies of arguments (even when they are complex arrays or objects). But, strangely, when I made a comment about it, it just got voted down.
There's something in here for sure, switch over to TS with strict typing and you've got generics to help you out more, at least for validation.
A deep clone isn't a bad approach, but given TS's typing, I don't know if they allow a pure 'eval' by default... Still playing with this in my free time, though, and it's still tricky.
One thought I recently had, since using deepCopy is going to slow things down, is whether the source code for QuickJS could be changed to just make copies. Then load up QuickJS as a replacement for the browser's JavaScript by invoking it as wasm.
This has really irrationally interested me now. I'm sure there is something there with the internal setters on TS, but damn, I need to test now. My thinking is overriding the setter to evaluate whether it's mutable or not - the obvious approach.
Yeah, there's a lot you could do with property setter overrides in conditional types, but the tricky part is somehow getting TypeScript to do it by default. I've got a feeling that `object` and `{}` are just too low-level in TypeScript's type system today to do those sorts of things. The `Object` in lib.d.ts is mostly for adding new prototype methods, not so much changing underlying property behavior.
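For the opt-in half, a minimal sketch of what mapped types can already express today (the `DeepReadonly` name is mine; simplified, with no special cases for Map/Set):

// Recursive mapped type: every property becomes readonly, all the way down.
type DeepReadonly<T> = T extends (...args: any[]) => any
  ? T // pass functions through unchanged
  : T extends (infer U)[]
    ? readonly DeepReadonly<U>[]
    : T extends object
      ? { readonly [K in keyof T]: DeepReadonly<T[K]> }
      : T;

const config: DeepReadonly<{ a: { b: number[] } }> = { a: { b: [1, 2] } };
// config.a.b[0] = 9;    // error: index signature permits only reading
// config.a = { b: [] }; // error: 'a' is a read-only property

The remaining gap is exactly what the comment above describes: you can opt a type in like this, but there's no hook to make the compiler apply it to every object type by default.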
Aside: Why do we use the terms "mutable" and "immutable" to describe those concepts? I feel they are needlessly hard to say and too easily confused when reading and writing.
I say "read-write" or "writable" and "writability" for "mutable" and "mutability", and "read-only" and "read-only-ness" for "immutable" and "immutability". Typically, I make exceptions only when the language has multiple similar immutability-like concepts for which the precise terms are the only real option to avoid confusion.
Read-only does not carry (to me) the fact that something cannot change, just that I cannot make it change. For example, you could make a read-only facade to a mutable object; that would not make it immutable.
> Why do we use the terms "mutable" and "immutable" to describe those concepts?
Mutable is from Latin 'mutabilis' (changeable), which derives from 'mutare' (to change)
You can't call them read-only/writable/etc. without confusing them with access permissions. 'Read-only' typically means something is read-only in the local scope, but the underlying object might still be mutable and changed elsewhere - like a const pointer in C++, or a read-only DB view that prevents you from writing while the underlying data can still be changed by others. In contrast, an immutable string (in Java, C#) cannot be changed by anyone, ever.
Computer science is a branch of mathematics; you can't just use whatever words feel more comfortable to you - names have implications, they are a form of theorem-stating. It's like not letting kids call multiplication "stick-piling". We don't do that for reasons.
Same reason doors say PUSH and PULL instead of PUSH and YANK. We enjoy watching people faceplant into doors... er... it's not a sufficiently real problem to compel people to start doing something differently.
This is tangential but one thing that bothers me about C# is that you can declare a `readonly struct` but not a `readonly class`. You can also declare an `in` param to specify a passed-in `struct` can’t be mutated but again there’s nothing for `class`.
It may be beside the point. In my experience, the best developers in corporate environments care about things like this but for the masses it’s mutable code and global state all the way down. Delivering features quickly with poor practices is often easier to reward than late but robust projects.
`readonly class` exists in C# today and is called (just) `record`.
`in` already implies the reference cannot be mutated, which is the bit that actually passes to the function. (Also the only reason you would need `in` and not just a normal function parameter for a class.) If you want to assert the function is given only a `record` there's no type constraint for that today, but you'd mostly only need such a type constraint if you are doing Reflection and Reflection would already tell you there are no public setters on any `record` you pass it.
We may be going off topic, though. As I understand it, objects in TypeScript/JS are explicitly mutable, as expected, via the interpreter. But I will try and play with it.
Without persistent data structures (structural sharing), every change requires copying the entire data structure: memory usage explodes, time complexity suffers, GC pressure increases dramatically.
With persistent data structures - only the changed parts are new; unchanged parts are shared between versions; adding to a list might only create a few new nodes while reusing most of the structure; it's memory efficient, time efficient, multiple versions can coexist cheaply. And you get countless benefits - fearless concurrency, easier reasoning, elimination of a whole class of bugs.
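A toy TypeScript sketch of the sharing idea (a cons list, nowhere near a real HAMT, but it shows why "modified" versions are cheap):

// Toy persistent cons list: "adding" prepends a node and shares the
// entire existing list as the tail - nothing is copied.
type PNode<T> = { readonly head: T; readonly tail: PList<T> };
type PList<T> = PNode<T> | null;

const cons = <T>(head: T, tail: PList<T>): PNode<T> => ({ head, tail });

const v1 = cons(2, cons(1, null)); // [2, 1]
const v2 = cons(3, v1);            // [3, 2, 1] - O(1), shares v1 as its tail
const v3 = cons(4, v1);            // [4, 2, 1] - also shares v1

console.log(v2.tail === v1, v3.tail === v1); // true true - structural sharing

Real persistent maps/vectors (HAMTs) apply the same principle to wide trees, so updates copy only a path of a few nodes instead of the whole structure.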
> That should make arr[1] possible but arr[1] = 9 impossible.
I believe you want `=`, `push`, etc. to return a new object rather than just disallow it. Then you can make it efficient by using functional data structures.
At the TypeScript level, I think simply disallowing them makes much more sense. You can already replace .push with .concat, .sort with .toSorted, etc. to get the non-mutating behavior, so why complicate things.
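A small sketch of what that looks like with `readonly` array types (assuming an ES2023 lib for `toSorted`):

const arr: readonly number[] = [1, 2, 3];

// arr[1] = 9;   // error: index signature permits only reading
// arr.push(4);  // error: 'push' does not exist on 'readonly number[]'

const sorted = arr.toSorted();  // ok: returns a new, sorted array
const longer = arr.concat(9);   // ok: returns a new array; arr is untouched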
You might want that, I might too. But it’s outside the constraints set by the post/author. They want to establish immutable semantics with unmodified TypeScript, which doesn’t have any effect on the semantics of assignment or built in prototypes.
Well said. (I too want that.) I found my first reaction to `MutableArray` was "why not make it a persistent array‽"
Then took a moment to tame my disappointment and realized that the author only wants immutability checking by the typescript compiler (delineated mutation) not to change the design of their programs. A fine choice in itself.
I love this idea so so much. I have maybe 100k lines of code that's almost all immutable, which is mostly run on the honor system. Because if you use `readonly` or `ReadOnlyDeep` or whatnot, they tend to proliferate like a virus through your codebase (unless I'm doing it wrong...)
That's probably because reassignment is already covered by using `const`.
Of course, it doesn't help that the immutable modifier for Swift is `let`. But also, in Swift, if you assign a list via `let`, the list is also immutable.
Unless you need the index, you can write: for (const x of iterable) { ... } or for (const attribute in keyValueMap) { ... }. However, loops often change state, so it's probably not the way to go if you can't change any variable.
If you need the index, you can use .keys() or .entries() on the iterable, e.g.
for (const [index, value] of ["a", "b", "c", "d", "e"].entries()) {
  console.log(index, value);
}
Or forEach, or map. Basically, use a higher level language. The traditional for loop tells an interpreter "how" to do things, but unless you need the low level performance, it's better to tell it "what", that is, use more functional programming constructs. This is also the way to go for immutable variables, generally speaking.
There's no difference between for (x of a) stmt; and a.forEach(x => stmt), except for scope, and lack of flow control in forEach. There's no reason to prefer .forEach(). I don't see how it is "more functional."
Since sibling comments have pointed out the various ES5 methods and ES6 for-of loops, I'll note two things:
1. This isn't an effort to make all variables `const`. It's an effort to make all objects immutable. You can still reassign any variable, just not mutate objects on the heap (by default)
> If you figure out how to do this completely, please contact me—I must know!
I think you want to use a TypeScript compiler extension / ts-patch
This is a bit difficult as it's not very well documented, but take a look at the examples in https://github.com/nonara/ts-patch
Essentially, you add a preprocessing stage to the compiler that can either enforce rules or alter the code
It could quietly transform all object-like types into having read-only semantics. This would then make any mutation error out, with a message saying you were attempting to violate field properties.
You would need to decide what to do about Proxies though. Maybe you just tolerate that as an escape hatch (like eval or calling plain JS)
Could be a fun project!
One "solution" is to use Object.freeze(), although I think this just makes any mutations fail silently, whereas the objective with this is to make it explicit and a type error.
I used to have code somewhere that would recursively call Object.freeze on a given object and all its children, till it couldn't "freeze" anymore.
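From memory, it looked something like this minimal sketch (close to the pattern MDN documents; the name `deepFreeze` is conventional, not special):

// Recursively freeze an object and everything reachable from it.
// Skips functions; the isFrozen check avoids re-walking frozen subtrees
// but is not full cycle handling.
function deepFreeze<T extends object>(obj: T): Readonly<T> {
  for (const key of Reflect.ownKeys(obj)) {
    const value = (obj as Record<PropertyKey, unknown>)[key];
    if (typeof value === "object" && value !== null && !Object.isFrozen(value)) {
      deepFreeze(value);
    }
  }
  return Object.freeze(obj);
}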
I thought Object.freeze threw an exception on mutation. Digging a little more, it looks like we're both right. Per MDN, it throws if it is in "use strict" mode and silently ignores the mutation otherwise.
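A quick sketch of that behavior (the `as any` cast bypasses TypeScript's compile-time readonly check so the runtime semantics show through):

"use strict";
const frozen = Object.freeze({ immutable: true });
(frozen as any).immutable = false; // strict mode: throws TypeError
                                   // sloppy mode: silently ignored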
Isn't the idea to get a compile time error, rather than a runtime exception?
const exploring = Object.freeze({ immutable: true })
exploring.thing = 'new'
Property 'thing' does not exist on type 'Readonly<{ immutable: true; }>'.ts(2339)
So it would be a simple way to achieve it.
That's opting into immutability, the point of the experiment is having it by default. Plus, that's just the type system preventing you from adding a property. It won't stop you from trying to change the `immutable` field.
I'm genuinely curious, was this AI generated, or just a lack of understanding?
It’s interesting to watch other languages discover the benefits of immutability. Once you’ve worked in an environment where it’s the norm, it’s difficult to move back. I’d note that Clojure delivered default immutability in 2009 and it’s one of the keys to its programming model.
I don't think the benefits of immutability haven't been discovered in js. Immutable.js has existed for over a decade, and JavaScript itself has built-in immutability features (seal, freeze). This is an effort to make vanilla TypeScript have default immutable properties at compile time.
It doesn't make sense to say that. Other languages had it from the start, and it has been a success. Immutable.js is 10% as good as built-in immutability and 90% as painful. Seal/freeze, readonly are tiny local fixes that again are good, but nothing like "default" immutability.
It's too late and you can't dismiss it as "been tried and didn't get traction".
That's not what I said, and that's not what my reply is about. The value of immutability is known. That's the point of this post. The author isn't a TC39 member (or at least I don't think they are). They're doing what they can with the tools they have.
You didn't understand what you were replying to. Immutability cannot be discovered later on in that sense (in practice).
JavaScript DOES NOT in fact have built-in immutability similar to Clojure's immutable structures - seal/freeze are shallow, runtime-enforced restrictions, while Clojure's immutable structures provide deep, structural immutability. They are based on structural sharing and are very memory/performance efficient.
Default immutability in Clojure is a pretty big-deal idea. Rich Hickey spent around two years designing the language around its persistent structures. They are not superficial runtime restrictions but an essential part of the language's data model.
I didn't say that it does have exhaustive immutability support. I said the value of it is known. They wouldn't have added the (limited) support that they did if they didn't understand this. The community wouldn't have built innumerable tools for immutability if they didn't understand the benefits. And in any case, you can't just shove a whole different model of handling objects into a thirty year old language that didn't see any truly structural changes until ten years ago.
> I didn't say that it does have exhaustive immutability support
seal and freeze in js are not 'immutability'. You said what you said - "JavaScript itself has built in immutability features (seal, freeze)".
I corrected you, don't feel bad about it. It's totally fine not to know some things and it's completely normal to be wrong on occasion. We are all here to learn, not to argue whose toy truck is better. Learning means going from the state of not knowing to the state of TIL.
> you can't just shove a whole different model of handling objects into a thirty year old language
Clojurescript did. Like 14-15 years ago or so. And it's not so dramatically difficult to use. Far simpler than Javascript, in fact.
Your toy truck is being overly pedantic
I am not being pedantic, there's a critical, fundamental conceptual difference that has real implications for how people write and reason about code.
There's performance reasoning, different level of guarantees, and entirely different programming model.
When someone hears "JS has built-in immutability features", they might think, "great, why do I even need to look at Haskell, Elixir, Clojure, if I have all the FP features I need right here?". Conflating these concepts helps no one - it's like saying: "wearing a raincoat means you're waterproof". Okay, you're technically not 100% wrong, but it's so misleading that it becomes effectively wrong for anyone trying to understand the actual concept.
Sure, though Immutable.js did have persistent data structures like Clojure's.
yeah, Immutable.js is a solid engineering effort to retrofit immutability onto a mutable-first language. It works, but: it's never as ergonomic as language-native immutability and it just feels like you're swimming upstream against JS defaults. It's nowhere near Clojure's elegance. The Clojure ecosystem assumes immutability everywhere and has more mature patterns built around it.
In Clojure, it just feels natural. In js - it feels like extra work. But for sure, if I'm not allowed to write in Clojurescript, Immutable.js is a good compromise.
I meant to point out that of course there is value in immutability beyond shared data structures.
I tried Immutable.js back in the day and hated it like any bolted-on solution.
Especially before Typescript, what happened is that you'd accidentally assign foo.bar = 42 when you should have called foo.set('bar', 42) and cause annoying bugs since it didn't update anything. You could never just use normal JS operations.
Really more trouble than it was worth.
And my issue with Clojure after using it five years is the immense amount of work it took to understand code without static typing. I remember following code with pencil and paper to figure out wtf was happening. And doing a bunch of research to see if it was intentional that, e.g. a user map might not have a :username key/val. Like does that represent a user in a certain state or is that a bug? Rinse and repeat.
> immense amount of work it took to understand code without static typing.
I've used it almost a decade - only felt that way briefly at the start. Idiomatic Clojure data passing is straightforward once you internalize the patterns. Data is transparent - a map is just a map - you can inspect it instantly, in place - no hidden state, no wrapping it in objects. When you need some rigidity - Spec/Malli are great. A missing key in a map is such a rare problem for me; honestly, I think it's a design problem - you cannot blame a dynamically-typed lang for it, and Clojure is dynamic for many good reasons. The language by default doesn't enforce rigor, so you must impose it yourself, and when you don't, you may get confused, but that's not a language flaw - it's the trade-off of dynamic typing. On the other hand, when I want to express something like "function must accept only prime numbers", I can't even do that in a statically typed language without plucking my eyebrows. Static typing solves some problems but creates others. Dynamic typing eschews compile-time guarantees but grants you enormous runtime flexibility - trade-offs.
one thing that's missing in JS to fully harness the benefits of immutability is some kind of equality semantics where two identical objects are treated the same
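For anyone who hasn't hit this, a quick illustration of the gap:

    const a = { x: 1 };
    const b = { x: 1 };
    console.log(a === b); // false - objects compare by identity
    console.log(1 === 1); // true - primitives compare by value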
They were going to do this with Records and Tuples but that got scrapped for reasons I’m not entirely clear on.
It appears a small proposal along these lines has appeared in the wake of that, called Composites[0]. It's a less ambitious version, certainly.
[0]: https://github.com/tc39/proposal-composites
Records and Tuples were scrapped, but as this is JavaScript, there is a user-land implementation available here: https://github.com/seanmorris/libtuple
Userland implementations are never as performant as native implementations. That's the whole point of trying to add immutability to the standard.
even when performance might not be an issue or an objective, there are other concerns about a userland implementation: lack of syntax is a bummer, and lack of support in the ecosystem is the other giant one - for example, can I use this as props for a React component?
yes, I'm aware of composites (and of the sad fate of Records and Tuples) and I'm hopeful they will improve things. One thing that I'm not getting from the spec is the behavior of the equality semantics in case a Date (or a Temporal object) is part of the object.
In other words, what is the result of Composite.equal(Composite({a: new Date(2025, 10, 19)}), Composite({a: new Date(2025, 10, 19)}))? What is the result of Composite.equal(Composite({a: Temporal.PlainDate(2025, 10, 19)}), Composite({a: Temporal.PlainDate(2025, 10, 19)}))?
Also, interestingly, the Clojurescript compiler in many cases emits safer js code despite being dynamically typed. Typescript removes all the type info from the emitted js, while Clojure retains strong typing guarantees in compiled code.
If we are pointing to dates, ML did it in 1973, or if you prefer the first mature implementation, SML, in 1983.
The Purely Functional Data Structures book, that Clojure data structures are based on, is from 1996.
This is how far back we're behind the times.
Cool. I didn’t realize ML had such a focus on immutability as well. I have never done any serious work in ML and it’s a hole in my knowledge. I have to go back and do a project of some sort using it (and probably one in Ocaml as well). What data structures does ML use under the hood to keep things efficient? Clojure uses Bagwell’s Hashed Array-Mapped Tries (HAMT), but Bagwell only wrote the first papers on that in about 2000. Okasaki’s book came out in 1998, and much of the work around persistent data structures was done in the late 1980s and 1990s. But ML predates most of that, right?
Mutability is overrated.
Immutability is also overrated. I mostly blame react for that. It has done a lot to push the idea that all state and model objects should be immutable. Immutability does have advantages in some contexts. But it's one tool. If that's your only hammer, you are missing other advantages.
The only benefit to mutability is efficiency. If you make immutability cheap, you almost never need mutability. When you do, it’s easy enough to expose mechanisms that bypass immutability. For instance in Clojure, all values are immutable by default. Sometimes, you really want more efficiency and Clojure provides its concept of “transients”[1] which allow for limited modification of structures where that’s helpful. But even then, Clojure enforces some discipline on the programmer and the expectation is that transient structures will be converted back to immutable (persistent) structures once the modifications are complete. In practice, there’s rarely a reason to use transients. I’ve written a lot of Clojure code for 15 years and only reached for it a couple of times.
[1] https://clojure.org/reference/transients
Immutability is really valuable for most application logic, especially:
- State management
- Concurrency
- Testing
- Reasoning about code flow
Not a panacea, but calling it "overrated" usually means "I haven't felt its benefits yet" or "I'm optimizing for the wrong thing"
Also, experiencing immutability benefits in a mutable-first language can feel like 'meh'. In immutable-first languages - Clojure, Haskell, Elixir immutability feels like a superpower. In Javascript, it feels like a chore.
A lot of these concepts don't mean anything to most developers I've found. A lot of the time I struggle to get the guy I work with to compile and run his code. Even something relatively simple as determinism and pure functions just isn't happening.
This is shockingly common and most developers will never ever hear of Clojure, Haskell or Elixir.
I really feel there are like two completely different developer worlds. One where these things are discussed, and the one I am in, where I am hoping that I don't have to make a Teams call to tell a guy "please can you make sure you actually run the code before making a PR" because my superiors won't can him.
Well, yes, if your shop hires poorly, immutability won’t save you. In fact, nothing will save you.
> Not a panacea, but calling it "overrated" usually means "I haven't felt its benefits yet" or "I'm optimizing for the wrong thing"
I think immutability is good, and should be highly rated. Just not as highly rated as it is. I like immutable structures and use them frequently. However, I sometimes think the best solution is one that involves a mutable data structure, which is heresy in some circles. That's what I mean by over-rated.
Also, kind of unrelated, but "state management" is another term popularized by react. Almost all programming is state management. Early on, react had no good answer for making information available across a big component tree. So they came up with this idea called "state management" and said that react was not concerned with it. That's not a limitation of the framework, see - it's just not part of the mission statement. That's "state management".
Almost every programming language has "state management" as part of its fundamental capabilities. And sometimes I think immutable structures are part of the best solution. Just not all the time.
I think we're talking past each other.
> I like immutable structures and use them frequently.
Are you talking about immutable structures in Clojure(script)/Haskell/Elixir, or TS/JS? Because like I said - the difference in experience can be quite drastic. Especially in the context of state management. Mutable state is the source of many different bugs and frustration. Sometimes it feels that I don't even have to think of those in Clojure(script) - it's like the entire class of problems simply is non-existent.
Of the languages you listed, I've really only used TS/JS significantly. Years ago, I made a half-hearted attempt to learn Haskell, but got stuck on vocabulary early on. I don't have much energy to try again at the moment.
Anyway, regardless of the capabilities of the language, some things work better with mutable structures. Consider a histogram function. It takes a sequence of elements, and returns tuples of (element, count). I'm not aware of an immutable algorithm that can do that in O(n) like the trivial algorithm using a key-value map.
> I made a half-hearted attempt to learn Haskell
Try Clojure(script) - everything that felt confusing in Haskell becomes crystal clear, I promise.
> Consider a histogram function.
You can absolutely do this efficiently with immutable structures in Clojure, something like
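    ;; a sketch of the shape (core's `frequencies` does the same thing):
    (defn histogram [coll]
      (reduce (fn [counts x]
                (update counts x (fnil inc 0)))
              {}
              coll))

    ;; (histogram [:a :b :a]) ;=> {:a 2, :b 1}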
This is O(n) and uses immutable maps. The key insight: immutability in Clojure doesn't mean inefficiency. Each `update` returns a new map, but:
1. Persistent data structures share structure under the hood - they don't copy everything
2. The algorithmic complexity is the same as mutable approaches
3. You get thread-safety and easier reasoning for a bonus
In JS/TS, you'd need a mutable object - JS makes mutability efficient, so immutability feels awkward.
But Clojure's immutable structures are designed for this shit - they're not slow copies, they're efficient data structures optimized for functional programming.
> immutability in Clojure doesn't mean inefficiency.
You are still doing a gazillion allocations compared to:
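    // the plain mutable version (a sketch, for comparison)
    const counts = new Map();
    for (const x of coll) {
      counts.set(x, (counts.get(x) ?? 0) + 1);
    }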
But apart from that, the mutable code in many cases is just much clearer compared to something like your fold above. Sometimes it's genuinely easier to assemble a data structure "as you go" instead of from the "bottom up" as in FP.
Sure, that's faster. But do you really care? How big is your data? How many distinct things are you counting? What are their data types? All that matters. It's easy to write a simple for-loop and say "It's faster." Most of the time, it doesn't matter that much. When that's the case, Clojure allows you to operate at a higher level with inherent thread safety. If you figure out that this particular code matters, then Clojure gives you the ability to optimize it, either with transients or by dropping down into Java interop where you have standard Java mutable arrays and other data structures at your disposal. When you use Java interop, you give up the safety of Clojure's immutable data structures, but you can write code that is more optimized to your particular problem. I'll be honest that I've never had to do that. But it's nice to know that it's there.
The allocation overhead rarely matters in practice - though in some cases it does. For the majority of "general-purpose" tasks like web services, etc. it doesn't - GC is extremely fast; allocations are cheap on modern VMs.
The second point I don't even buy anymore - once you're used to `reduce`, it's equally (if not more) readable. Besides, in practice you don't typically write it by hand - there are tons of helper functions in the core library to deal with data. I'd probably use `(frequencies coll)` - I just didn't mention it so it didn't feel like I'm cheating. One function call - still O(n), idiomatic, no reduce boilerplate, intent is crystal clear. Aggressively optimized under the hood and far more readable.
Let's not get into strawman olympics - I'm not selling snake oil. Clojure wasn't written in some garage by a grad student last week - it's a mature and battle-tested language endorsed by many renowned CS people, there are tons of companies using it in production. In the context of (im)mutability it clearly demonstrates incontestable, pragmatic benefits. Yes, of course, it's not a silver bullet, nothing is. There are legitimate cases where it's not a good choice, but you can argue that point pretty much about any tool.
If there was a language that didn't require pure and impure code to look different but still tracked mutability at the type level like the ST monad (so you can't call an impure function from a pure one) - so not Clojure - then that'd be perfect.
But as it stands immutability often feels like jumping through unnecessary hoops for little gain really.
> then that'd be perfect.
There's no such thing as "perfect" for everyone and for every case.
> feels like jumping through unnecessary hoops for little gain really.
I dunno what you're talking about - Apple runs their payment backend, Walmart their billing system, Cisco their cybersec stack, Netflix their social data analysis, Nubank empowers entire Latin America - they're all running Clojure, pushing massive amounts of data through it.
I suppose they just have shitload of money and can afford to go through "unnecessary hoops". But wait, why then tons of smaller startups running on Clojure, on Elixir? I guess they just don't know any better - stupid fucks.
The topic was immutability, not Clojure?
But ok, if mutability is always worse, why not use a pure language then? No more cowardly swap! and transient data structures or sending messages back and forth like in Erlang.
But then you get to monads (otherwise you'd end up with Elm and I'd like to see Apple's payment backend written in Elm), monad transformers, arrows and the like and coincidentally that's when many Clojure programmers start whining about "jumping through unnecessary hoops" :D
Anyway, this was just a private observation I've reached after being an FP zealot for a decade, all is good, no need to convert me, Clojure is cool :)
> Clojure is cool
Clojure is not "cool". Matter of fact, for a novice it may look distasteful, it really does. Ask anyone with prior programming experience - Python, JS, Java - to read some Clojure code for the first time and they'll start cringing.
What Clojure actually is - it is a "down to earth PL". It values substance over marketing and prioritizes developer happiness in the long run - which comes in a spectrum; it doesn't pretend everyone wants the same thing. A junior can write useful code quickly, while someone who wants to dive into FP theory can do that too. Both are first-class citizens.
> If there was a language that didn't require pure and impure code to look different
I've occasionally wondered what life would be like if I tried writing all my pure Haskell code in the Identity monad.
Same!
Next time I feel an itch to learn a language, I'll probably pick Clojure, based mostly on this comment. Not sure when that will be though.
One doesn't need to "wear a tie" to learn Clojure - syntax is so simple it can be explained on a napkin. You need to get:
1. An editor with structural editing features - google: "paredit vim/emacs/sublime/etc.", on VSCode - simply install Calva.
2. How to connect to the REPL. Calva has the quickstart guide or something like that.
3. How to eval commands in place. Don't type them directly into the REPL console! You can, but that's not how Lispers typically work. They examine the code as they navigate/edit it - in place. It feels like playing a game - very interactive.
That's all you need to know to begin with. VSCode's Calva is great to mess around with. Even if you don't use it (I don't), it's good for beginners.
Knowing Clojure comes in super handy, even when you don't write any projects in it - it's one of the best tools to dissect some data - small and large. I don't even deal with json to inspect some curl results - I pipe them through borkdude/jet, then into babashka, and in the REPL I can filter, group, sort, slice, dice, salt & pepper that shit; I can even throw some visualizations on top - it looks delicious; and it takes not even a minute to get there - if I type fast enough, I slash through it in seconds!
Honestly, Clojure feels to be the only no bullshit, no highfalutin, no hidden tricks language in my experience, and jeeeesus I've been through just a bit more than a few - starting with BASIC in my youth and Pascal and C in college; then Delphi, VB, then dotnet stuff - vb.net, c#, f#, java, ruby; all sorts of altjs shit - livescript, coffeescript, icedcoffeescript, gorillascript, fay, haste, ghcjs, typescript, haskell, python, lua, all sorts of Lisps; even some weird language where every operator was in Russian; damn, I've been trying to write some code for a good while. I'm stupid or something but even in years I just failed to find a perfect language to write perfect code - all of dem feel like they got made by some motherfluggin' annoyin' bilge-suckin' vexin' barnacle-brained galoots. Even my current pick of Clojure can be sometimes annoying, but it's the least irksome one... so far. I've been eyeing Rust and Zig, and they sound nice (but every one of dem motherfuckers look nice before you start fiddling with 'em) yet ten years from now, if I'm still kicking the caret, I will be feeding some data into a clj repl, I'm tellin' ya. That shit just fucking works and makes sense to me. I don't know how making it stop making sense, it just fucking does.
I just want a way of doing immutability until production, and to let a compiler figure out how to optimize that into potentially mutable, efficient code, since it can rely on those guarantees.
No runtime cost in production is the goal
> No runtime cost in production is the goal
Clojure's persistent data structures are extremely fast and memory efficient. Yes, technically it's not complete zero-overhead; pragmatically speaking, the overhead is extremely tiny. Performance usually is not a bottleneck - typically you're I/O bound or algorithm-bound, not immutability-bound. When it truly matters, you can always drop down to mutable host-language structures - Clojure is a "hosted" language, it sits atop your language stack - JVM/JS/Dart - and then it all depends on the runtime. When in javaland, JVM optimizations feel like blackmagicfuckery - there's JIT, escape analysis (it proves objects don't escape and stack-allocates them), dead code elimination, etc. For like 95% of use cases, perf with an immutable-first language (in this example, Clojure) is almost never a problem.
Haskell is even faster because it's pure by default, so the compiler can optimize aggressively.
Elixir is a bit of a different story - it might be slower than Clojure for CPU-bound work, but only because BEAM focuses on consistent (not peak) performance.
Pragmatically, for tasks that are CPU-bound where the requirement is "absolute zero-cost immutability" - Rust is a great choice today. However, the trade-off is that the development cycle is dramatically slower in Rust compared to Clojure. The REPL-driven nature of Clojure allows you to prototype and build very fast.
From many different utilitarian points of view, Clojure is an enormously practical language; I highly recommend getting some familiarity with it, even if it feels very niche today. I think it was Stu Halloway who said something like: "when Python was the same age as Clojure, it was also a niche language"
This doesn’t make much sense. One of the benefits of immutability is that once you create a data structure, it doesn’t change and you can treat it as a value (pass it around, share it between threads without cloning it, etc.). If you now allow modifications, you’re suddenly violating all those guarantees and you need to write code that defensively makes clones, so you’re right back where you started. In Clojure, you can cheat at points with transients where the programmer knows that a certain data structure is only seen by a single thread of execution, but you’re still immutable most of the time.
Depends on your target. Clojure targets the JVM by default and that has very different constraints than say, compiling to JavaScript for the browser or node.
Compiling to a JS engine, this would be great, because immutability has a runtime cost
Clojurescript supports transients https://clojureverse.org/t/transients-in-clojurescript/9102/...
Runtime cost of using Clojurescript is undeniably there, but for most applications it's a pretty negligible price to pay for the big wins. In practice, Clojurescript apps can often perform faster than similar apps built traditionally - especially for things like render optimization (immutable data enables cheap equality checks for memoization, preventing unnecessary re-renders), data transform pipelines (transducers give you lazy evaluation, great for filtering/mapping through large datasets), and caching (immutable data is safe to cache indefinitely - you don't have to worry about stale data).
You guys keep worrying about some theoretical "costs" - in practice, I have yet to encounter a problem that genuinely makes it so impossibly slow that Clojurescript just outright can't be used. Situations where it incurs a practical cost to pay are outliers, not the general rule.
> but for most applications it's a pretty negligible price to pay
Not all, and it is always preferable for it not to have a cost.
> I have yet to encounter a problem that genuinely makes it so impossibly slow that Clojurescript just outright can't be used
There’s a big swath of work that could benefit from the development streamlining that something like clojurescript or similar projects can give but any performance hit is deadly, like e-commerce.
There is also the fact that it doesn't have 100% bindings to the raw JS and DOM APIs, if I recall correctly; it's often wrapped around React, or assumed to be.
> Also, experiencing immutability benefits in a mutable-first language can feel like 'meh'.
I felt that way in the latest versions of Scheme, even. It’s bolted on. In contrast, in Clojure, it’s extremely fundamental and baked in from the start.
exactly, react could not deal with mutable objects, so they decided to make immutability seem like something that, if you did not use it, meant you did not understand programming.
It's redundant in a single-threaded environment. Everyone moved to mobile while pages are getting slower and slower, using more and more memory. This is not the way. Immutability has its uses, but it's not good for most web pages.
You're just waving off the whole bag of benefits:
Yes, js runs in a single-threaded environment for user code, but immutability still provides immense value: predictability, simpler debugging, time-travel debugging, react/framework optimizations.
Modern js engines are optimized for short-lived objects, and creating new objects instead of mutating uses more memory only temporarily. The performance impact of immutability is absolutely negligible compared to so many other factors (large bundles, unoptimized images, excessive DOM manipulation).
You're blaming the wrong thing for overblown memory. I don't know a single website that is bloated and slow only because the makers decided to use immutable data structures. In fact, you might be exactly incorrect - maybe web pages are getting slower and slower because we're now trying to have more logic in them, building more sophisticated programs into them, and the problem is exactly that - we are reaching the point where it is no longer simple to reason about them? Reasoning about the code in an immutable-first PL is so much simpler, you probably have no idea, otherwise you wouldn't be saying "this is not the way"
programming with immutability has been best practice in js/ts for almost a decade
however, enforcing it is somewhat difficult & there is still quite a bit lacking when working with plain objects or maps/sets.
We shouldn't forget that there are trade-offs, however. And it depends on the language's runtime in question.
As we all know, TypeScript is a super-set of JavaScript, so at the end of the day your code is running, as an interpreted language, in V8, JavaScriptCore or SpiderMonkey - depending on what browser the end user is using. It is also a loosely typed language with zero concept of immutability at the native runtime level.
And immutability in JavaScript, without native support that we could hopefully see in some hypothetical future version of EcmaScript, has the potential to impact runtime performance.
I work for a SaaS company that makes a B2B web application that has over 4 million lines of TypeScript code. It shouldn't surprise anyone to learn that we are pushing the browser to its limits and are learning a lot about scalability. One of my team-mates is a performance engineer who has code checked into Chrome and will often show us what our JavaScript code is doing in the V8 source code.
One expensive operation in JavaScript is cloning objects, which includes arrays in JavaScript. If you do that a lot.. if, say, you're using something like Redux or ngrx where immutability is a design goal and so you're cloning your application's runtime state object with each and every single state change, you are extremely de-optimized for performance depending on how much state you are holding onto.
And, for better or worse, there is a push towards making web applications as stateful as native desktop applications. Gone are the days where your servers can own your state and your clients just be "dumb" presentation and views. Businesses want full "offline mode." The relationship is shifting to one where your backends are becoming leaner .. in some cases being reduced to storage engines, while the bulk of your application's implementation happens in the client. Not because we engineers want to, but because the business goal necessitates it.
Then consider the spread operator, and how much you might see it in TypeScript code:
    // clones the original object in order to change a single property
    const foo = {
      ...original,
      bar: newValue,
    };

    // same thing, clones original array in order to push a single item,
    // because "immutability is good, because I was told it is"
    const foo = [...array, newItem];
And then consider all of the "immutable" Array functions like .reduce(), .map(), .filter()
They're nice, syntactically ... I love them from a code maintenance and readability point of view. But I'm coming across "intermediate" web developers who don't know how to write a classic for-loop and will make an O(N) operation into an O(N^3) because they're chaining these together with no consideration for the performance impact.
And of course you can write performant code or non-performant code in any language. And I am the first to preach that you should write clean, easy to maintain code and then profile to discover your bottlenecks and optimize accordingly. But that doesn't change the fact that JavaScript has no native immutability and the way to write immutable JavaScript will put you in a position where performance is going to be worse overall because the tools you are forced to reach for, as matter of course, are themselves inherently de-optimized.
> We shouldn't forget that there are trade-offs
Like @drob518 noted already - the only benefit of mutation is performance. That's all. That's the only, distinct, single, valid point for it. Everything else is nothing but problems. Mutable shared state is the root of many bugs, especially in concurrent programs.
"One of the most difficult elements of program design is reasoning about the possible states of complex objects. Reasoning about the state of immutable objects, on the other hand, is trivial." - Brian Goetz.
So, if immutable, persistent collections are so good, and the only problem is that they are slower, then we just need to make them faster, yes?
That's the only problem that needs to be solved in the runtime to gain countless benefits, almost for free, which you are acknowledging.
But, please, don't call it a "trade-off" - that implies that you're getting some positive benefits on both sides, which is inaccurate and misleading - you should be framing mutation as "safety price for necessary performance" - just like Rust describes unsafe blocks.
> But, please, don't call it a "trade-off" - that implies that you're getting some positive benefits on both sides, which is inaccurate and misleading - you should be framing mutation as "safety price for necessary performance" - just like Rust describes unsafe blocks.
I would have agreed with that statement a few years ago.
But what I am seeing in the wild, is an ideological attachment to the belief that "immutability is always good, so always do that"
And what we're seeing is NOT a ton of bugs and defects that are caused by state mutation bugs. We're seeing customers walk away with millions of dollars because of massive performance degradation caused, in some part, by developers who are programming in a language that does not support native immutability, but who are trying to shoe-horn it in because of a BELIEF that it will, for sure, always cut down on the number of defects.
Everything is contextual. Everything is a trade-off in engineering. If you disagree with that, you are making an ideological statement, not a factual one.
Any civil engineer would talk to you about tolerances. Only programmers ever say something is "inherently 'right'" or "inherently 'wrong'" regardless of other situations.
If your data is telling you that the number one complaint of your customers is runtime performance, and a statistically significant number of your observed defects can be traced to trying to shoe-horn in a paradigm that the runtime does not support natively, then you've lost the argument about the benefits of immutability. In that context, immutability is demonstrably providing you with negative value and, by saying "we should make the runtime faster", you are hand-waving to a degree that would and should get you fired by that company.
If you work in academia, or are a compiler engineer, then the context you are sitting in might make it completely appropriate to spend your time and resources talking about language theory and how to improve the runtime performance of the machine being programmed for.
In a different context, when you are a software engineer who is being paid to develop customer facing features, "just make the runtime faster" is not a viable option. Not something even worth talking about since you have no direct influence on that.
And the reason I brought this up, is because we're talking about JavaScript / TypeScript specifically.
In any other language, like Clojure, it's moot because immutability is baked in. But within JavaScript it is not "nice" to see people trying to shoe-horn that in. We can't, on the one hand, bitch and moan about how poorly websites all over the Internet are performing on our devices while also saying "JavaScript developers should do immutability MORE."
At my company, measurable performance degradation is considered a defect that would block a release. So you can't even say you're reducing defects through immutability if you can point to one single PR that causes a perf degradation by trying to do something in an immutable way.
So yeah, it's all trade-offs. It comes down to what you are prioritizing. Runtime performance or data integrity? Not all applications will value both equally.
Alright, I admit, I have not worked on teams where immutable.js was used a lot, so I don't have any insight specifically on its impact on performance.
Still, I personally wouldn't call immutability a "trade-off", even in a js context - for the majority of apps, it's still a big win. I've seen that many times with Clojurescript, which doesn't have a native runtime - it eventually emits javascript. I love Clojure, but I honestly refuse to believe that it invariably emits higher-performing js code compared to vanilla js with immutablejs on top.
For some kind of apps, yes, for sure, the performance is an ultimate priority. In my mind, that's a similar "trade-off" as using C or even assembly, because of required performance. It's undeniably important, yet these situations represent only a small fraction of overall use cases.
But sure, I agree with everything you say - Immutability is great in general, but not for every given case.
Yes, if your immutability is implemented via simple cloning of everything, it’s going to be slow. You need immutable, persistent data structures such as those in Clojure.
Sounds easier to just use some other compile-to-js language, it's not like there are no other options out there.
I'm still mad about Reason/ReScript for fumbling the bag here.
Rescript/reasonml is still in development, and a more seasoned dev team can easily pick it as a better alternative to typescript.
It's a bummer haxe did not promote itself more for the web, as it's an amazingly good piece of tech. The language shows its age, but has an awesome type system and metaprogramming capabilities.
That said, haxe 5 is on the horizon.
While TS allows easy integration with JS, this doesn't work well with other languages that compile to JS.
You lose all type benefits of libraries that are written in TS.
It's quite rare to see interop between compile-to-js languages tho. Also rare to see projects using more than one compile-to-js language (if not in the middle of a rewrite/port). YMMV.
> It's quite rare to see interop between compile-to-js languages tho
js interop in Clojurescript is dead simple. Moreover, you can have shared logic between different runtimes. The promise of Nodejs for code re-use turned out, in practice, to be not so straightforward, even though you have a js runtime in both places. With Clojurescript, you can have the shared logic in the same namespace - it's mindblowing, you can have functions that work on both - JVM and JS.
Im not talking about X->JS->X interop, but X->Y->JS->Y->X interop, where X,Y = compile to JS language.
> it's mindblowing, you can have functions that work on both - JVM and JS
That's basically what you could do in Haxe long before Nodejs (which made server/client code sharing popular) came out. You could target a huge number of targets from a single codebase.
Are you saying that getting two different compile-to-JS languages to interoperate is messy and you have to go through multiple transpilation layers to make them talk to each other?
The Clojurescript way isn't about transpiling between different compile-to-JS languages. It's simpler: write once in Clojure, compile to both JVM and JS directly. No intermediate language chains needed. And you are free to use whatever js and java libs directly.
Yes, sure - valid point about Haxe, you're right, it actually did this before Clojurescript or even before Node existed. IIRC Haxe could compile a single codebase to multiple targets. That multi-target approach tho required writing in a lowest-common-denominator lang. Cljs is practical in the sense that you get the full power of Clojure on the JVM side and reasonable JS semantics on the front-end - without compromise. Haxe often meant sacrificing language features to stay compatible across all targets.
Clojurescript is surprisingly pragmatic in that sense and works well. The downside: you can use Cljs on its own, but it truly shines when paired with Clojure, and the JVM, despite being an amazing piece of tech, has a marketing problem - people hear JVM and immediately think Java.
Agreed. Gleam is a great one that targets JavaScript and outputs easy to read code
Yup. Also rescript if you're not a fan of the elm architecture.
Not if you want to use typescript.
Typescript is the obvious choice if all you know/want to learn is JS. But the language is still garbage because of "valid js is valid ts".
And yes, I know that is what made it popular.
This is some criticism that lacks any depth or insight.
I've deployed projects in Elm, Scala, Clojure and Purescript, and TypeScript has many great qualities that the others don't have.
It's an incredibly powerful language with a great type system which requires some effort in understanding (e.g. 99% of candidates don't even know what a mapped type is, it's written in the docs...) and minimal discipline to not fall into the js pitfalls.
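For reference, a mapped type just walks the keys of another type - e.g. a hand-rolled inverse of Readonly<T> (names here are illustrative):

    // remove `readonly` from every property of T
    type Mutable<T> = {
      -readonly [K in keyof T]: T[K];
    };

    type Frozen = Readonly<{ a: number; b: string }>;
    type Thawed = Mutable<Frozen>; // { a: number; b: string }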
On top of that you have access to tons of tools and libraries, which alternative ecosystems either don't have (e.g. no compile-to-js language) or have to interoperate with at js level (anything from Reason to Gleam) anyway.
Beyond that, there's other important considerations in choosing a language beyond its syntax/semantics and ecosystem, such as hiring or even AI-friendliness.
Stricter TS is absolutely a valuable effort to chase.
The issue with TS is that it's way too easy to fall back to unsafe code. Also, the TS type system is WAY, WAY too complex. They keep piling on hard-to-grasp niche features, which has made it really hard to learn.
The TS sweetspot was (imho) somewhere around the 1.8-2.0 era. These days you can run Doom in the type system.
I can't say about hiring, as I don't hire a dev that knows language X; I hire engineers that know the ins and outs of how software should be written, and know when to pick Go, when to pick ocaml and when to go with C/Rust.
Also I would never use 99.9% of npm packages (js or ts), so I don't really care that much.
As an example, writing typedefs for reasonml is not really that hard, and as a benefit you know exactly what parts you use.
Also I don't use AI, and we don't accept any PRs that are vibecoded.
It's not that easy to fall back into unsafe code if you know what you're doing, do not have typescript skill issues, and use the right libraries, such as fp-ts or effect-ts (an evolution of Scala's ZIO on TypeScript).
Those ecosystems are huge, by the way. Effect is still very niche-y due to its usage of functional programming and effect systems - niche only in a sense, though: it gets more downloads than Angular.
[1] https://gcanti.github.io/fp-ts/modules/
[2] https://effect.website/
I have my beefs with TypeScript's complexity, verbosity and limits, don't get me wrong but I don't see any realistic alternative for who wants to write type safe code.
I loved Elm or Reason, but they are not realistic nor productive choices unless your IT team has a North American startup budget.
In my real world, we don't have the budgets to lure the kind of brilliant engineers that know when to pick Ocaml and when to go with Rust (or have ever used any of them).
I get the idea of libraries like fp-ts and effect-ts, but like most libraries in this area, they are just bolt-ons. I don't like to write unidiomatic code for a given language if it is not designed for it. This means if I wrote an applicative that must satisfy homomorphism in javascript:
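    // a sketch, using Array as the applicative; names are illustrative
    // homomorphism law: ap(of(f))(of(x)) must equal of(f(x))
    const of = <A>(a: A): A[] => [a];
    const ap = <A, B>(ff: Array<(a: A) => B>) =>
      (fa: A[]): B[] => ff.flatMap(f => fa.map(f));

    const f = (n: number) => n + 1;
    ap(of(f))(of(2)); // [3]
    of(f(2));         // [3] - same result, law holds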
I'm pretty sure not a single dev would understand what the hell is going on, and I would have to buy quite a few beers to get that passed in a CR.
I write idiomatic code for the language:
Eg. in Go I loop and use mutability when it's the correct thing to do, but in OCaml I almost exclusively use recursion and favour a monadic api, and javascript being a middle-ground, I tend to just use the builtins map/filter/reduce and pals for most things. I don't need a monad for the sake of it.
That said, I tend to step out of this rule when it comes to errors, and avoid throwing as much as possible. Errors as values are invaluable.
> I'm pretty sure not a single dev would understand what the hell is going on, and I would have to buy quite a few beers to get that passed in a CR.
That's a questionable example imho, for a few reasons.
The first is that understanding an applicative functor requires you to first understand map, apply and lift.
Starting from an applicative, in any language, including Haskell, is like starting from a monad (as you know very well, they are almost the same thing, as a monad is an applicative with one more rule), and then it's quite clear why we get endless blog posts that leave you no brighter.
The second is that we're talking about TypeScript, not JavaScript.
Let's make an example.
Given a definition for map in TypeScript:
    map: <A, B>(f: (a: A) => B) => ((fa: F<A>) => F<B>)
which requires minimal typescript understanding to read (given a function from A to B and a value of type F<A>, you get F<B>).
If you can read map you can read ap in TypeScript:
    const ap: <A>(fa: F<A>) => <B>(fab: F<(a: A) => B>) => F<B>
It's almost the same, with one major difference: you don't have a function from A to B, but one that is lifted into a datatype F, thus F<A => B>.
Why would you ever pick OCaml over Rust or Go?
If I need something that must be proven correct. Something that really can't (ungracefully) fail, where there are no null pointer exceptions or other panics (e.g. rust has unwrap, and Go has nils).
Something that does not need to run at C speeds (both Go and OCaml have similar performance, somewhere around 80-90% of C/Rust, give or take), and when I want to have a fast feedback cycle (the OCaml compiler is even faster than Go's compiler).
Basically, OCaml when I want to have a good sleep at night, and be sure I'm not paged at 2AM for some weird panic or null pointer error.
That's why I glue stuff together: I might have some "this-is-critical-as-fuck" code written in OCaml, a webserver written in Go, and some perf-critical feature written in Rust/C.
It all depends on the requirements. It's a shame devs shy away from that, and use their only tool (language) for all things, leading to more brittle and overcomplex software.
> 99% of candidates don't even know what a mapped type is, it's written in the docs
Please don't ask shit like that during interviews. For the love of god.
This is the same _type_ of elitism that I've seen from the Scala community and really makes TS seem unpalatable. Big "A monad is just a monoid in the category of endofunctors" vibes.
Yeah, I'm quite sure inverting trees is more relevant to real world programming than knowing the fundamentals of a type system people say they are expert in. /s
In any case, I am against technical interviews and never ask any technical questions beyond just general talk of how people like to work, and their previous projects.
But I know for a fact the overwhelming majority of people that say they know TypeScript don't know a tenth of what's written in the docs, and mapped types were just a very basic example.
It is also how C++ and Objective-C got users from C land.
The examples on the JVM and CLR got it through by targeting the same bytecode, and Swift, even if imposed from above, also had to make interop with Objective-C first class, and is now in the process of doing the same for C++.
Turns out adoption is really hard if a full rewrite is asked for, unless someone gets to pay for those rewrites, or gets to earn some claim to fame, like in the RIG and RIR stuff.
Rust compiles to wasm right?
ScalaJs!
I am a fan of immutability. I was toying around with javascript making copies of arguments (even when they are complex arrays or objects). But, strangely, when I made a comment about it, it just got voted down.
https://news.ycombinator.com/item?id=45771794
I made a little function to do deep copies but am still experimenting with it.
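A minimal version of such a function might look like this (a sketch - no cycles, Dates, Maps or Sets; the built-in structuredClone() is the sturdier option where available):

    function deepCopy<T>(value: T): T {
      // primitives and null pass through unchanged
      if (value === null || typeof value !== "object") return value;
      if (Array.isArray(value)) return value.map(deepCopy) as unknown as T;
      const out: Record<string, unknown> = {};
      for (const [k, v] of Object.entries(value)) out[k] = deepCopy(v);
      return out as T;
    }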
There's something in here for sure; switch over to TS with strict typing and you've got generics to help you out more, at least for validation.
A deep clone isn't a bad approach, but given TS's typing, I don't know if they allow a pure 'eval' by default.. Still playing with this in my free time though, and it's still tricky.
One thought I recently had, since using deepCopy is going to slow things down, is whether the source code for QuickJS could be changed to just make copies. Then load up QuickJS as a replacement for the browser's javascript by invoking it as wasm.
This has really irrationally interested me now. I'm sure there is something there with the internal setters on TS, but damn, I need to test now. My thinking is to override the setter to evaluate whether it's mutable or not - the obvious approach.
Yeah there's a lot you could do with property setter overrides in conditional types, but the tricky magic trick is somehow getting Typescript to do it by default. I've got a feeling that `object` and `{}` are just too low-level in Typescript's type system today to do those sorts of things. The `Object` in lib.d.ts is mostly for adding new prototype methods, not as much changing underlying property behavior.
Aside: Why do we use the terms "mutable" and "immutable" to describe those concepts? I feel they are needlessly hard to say and too easily confused when reading and writing.
I say "read-write" or "writable" and "writability" for "mutable" and "mutability", and "read-only" and "read-only-ness" for "immutable" and "immutability". Typically, I make exceptions only when the language has multiple similar immutability-like concepts for which the precise terms are the only real option to avoid confusion.
Read only does not carry (to me) the fact that something cannot change, just that I cannot make it change. For example you could make a read only facade to a mutable object, that would not make it immutable.
> Why do we use the terms "mutable" and "immutable" to describe those concepts?
Mutable is from Latin 'mutabilis' - (changeable), which derives from 'mutare' (to change)
You can't call them read-only/writable/etc. without confusing them with access permissions. 'Read-only' typically means something read-only to local scope, but the underlying object might still be mutable and changed elsewhere - like a const pointer in C++ or a read-only db view that prevents you from writing, but the underlying data can still be changed by others. In contrast, an immutable string (in java, c#) cannot be changed by anyone, ever.
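The distinction shows up directly in TypeScript, for example:

    // a read-only *view* does not make the underlying data immutable
    const arr: number[] = [1, 2, 3];
    const view: readonly number[] = arr;
    // view[0] = 9;       // compile error: index signature is read-only
    arr[0] = 9;           // but the data can still change underneath
    console.log(view[0]); // 9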
Computer science is a branch of mathematics; you can't just use whatever words feel more comfortable to you - names have implications, they are a form of theorem-stating. It's like not letting kids call multiplication "stick-piling". We don't do that for reasons.
Same reason doors say PUSH and PULL instead of PUSH and YANK. We enjoy watching people faceplant into doors... er... it's not a sufficiently real problem to compel people to start doing something differently.
"read-only-ness" is much more of a mouthful than "immutable"!
Generally immutability is also a programming style that comes with language constructs and efficient data structures.
Whereas 'read-only' (to me) is just a way of describing a variable or object.
This is tangential but one thing that bothers me about C# is that you can declare a `readonly struct` but not a `readonly class`. You can also declare an `in` param to specify a passed-in `struct` can’t be mutated but again there’s nothing for `class`.
It may be beside the point. In my experience, the best developers in corporate environments care about things like this but for the masses it’s mutable code and global state all the way down. Delivering features quickly with poor practices is often easier to reward than late but robust projects.
`readonly class` exists in C# today and is called (just) `record`.
`in` already implies the reference cannot be mutated, which is the bit that actually passes to the function. (Also the only reason you would need `in` and not just a normal function parameter for a class.) If you want to assert the function is given only a `record` there's no type constraint for that today, but you'd mostly only need such a type constraint if you are doing Reflection and Reflection would already tell you there are no public setters on any `record` you pass it.
I'm not sure if it's what you mean, but can't you have all your properties without a setter, and only init them inside the constructor, for example?
Would your 'readonly' annotation dictate that at compile time?
eg
    class Test {
        // sketch: getter-only property, assignable only in the constructor
        public int Value { get; }
        public Test(int value) { Value = value; }
    }

We may be going off topic though. As I understand, objects in typescript/js are explicitly mutable, as expected to be via the interpreter. But will try and play with it.
I think you would want to use an init only property for your example
I'm not a C# expert though, and there seem to be many ways to do the same thing.
I don't use the init decorator myself but I would hazard a guess it's similar. Don't quote me on that though.
The point does stand though: outside of modifying properties, I'm not sure what a "private" class itself achieves.
> I don't use the init decorator myself but I would hazard a guess it's similar.
Genuinely curious, why not? Seems to be less verbose. I don’t write C#s so I’m not sure of the downsides of any particular feature.
For immutability to be effective you'd also need persistent data structures (structural sharing). Otherwise you'll quickly grind to a halt.
Why would you quickly grind to a halt?
Without persistent data structures (structural sharing), every change requires copying the entire data structure - memory usage explodes, time complexity suffers, GC pressure increases dramatically.
With persistent data structures, only the changed parts are new; unchanged parts are shared between versions. Adding to a list might only create a few new nodes while reusing most of the structure. It's memory efficient, time efficient, and multiple versions can coexist cheaply. And you get countless benefits - fearless concurrency, easier reasoning, elimination of a whole class of bugs.
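A minimal sketch of structural sharing in TypeScript - a persistent singly-linked list where "adding" allocates one node and shares the whole rest:

    type List<T> = { readonly head: T; readonly tail: List<T> } | null;

    const prepend = <T>(tail: List<T>, head: T): List<T> => ({ head, tail });

    const v1 = prepend(prepend(null, 1), 2); // 2 -> 1
    const v2 = prepend(v1, 3);               // 3 -> 2 -> 1
    // v1 is untouched; v2 reuses v1's nodes instead of copying them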
> That should make arr[1] possible but arr[1] = 9 impossible.
I believe you want `=`, `push`, etc. to return a new object rather than just disallow it. Then you can make it efficient by using functional data structures.
https://www.cs.cmu.edu/~rwh/students/okasaki.pdf
At TypeScript-level, I think simply disallowing them makes much more sense. You can already replace .push with .concat, .sort with .toSorted, etc. to get the non-mutating behavior so why complicate things.
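e.g., with a readonly array type the mutating methods simply aren't there:

    const xs: readonly number[] = [3, 1, 2];
    const ys = xs.concat(4);  // new array; xs untouched
    const zs = xs.toSorted(); // ES2023 non-mutating sort
    // xs.push(4);            // error: push doesn't exist on readonly number[]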
You might want that, I might too. But it’s outside the constraints set by the post/author. They want to establish immutable semantics with unmodified TypeScript, which doesn’t have any effect on the semantics of assignment or built in prototypes.
Well said. (I too want that.) I found my first reaction to `MutableArray` was "why not make it a persistent array‽"
Then took a moment to tame my disappointment and realized that the author only wants immutability checking by the typescript compiler (delineated mutation) not to change the design of their programs. A fine choice in itself.
I love this idea so so much. I have maybe 100k lines of code that's almost all immutable, which is mostly run on the honor system. Because if you use `readonly` or `ReadOnlyDeep` or whatnot, they tend to proliferate like a virus through your codebase (unless I'm doing it wrong...)
Definitely need purely functional data structures then. Is there a rich ecosystem for that for TypeScript?
fp-ts is the strictest fp implementation in typescript land.
https://gcanti.github.io/fp-ts/modules/
But the most popular functional ecosystem is effect-ts, though it does its best to _hide_ the functional part, in the same spirit as ZIO.
https://effect.website/
How do immutable variables work with something like a for loop?
Is TFA (or anyone else for that matter) actually concerned with "immutable variables"?
e.g., `let i = 0; i++;`
They seem to be only worried about modifying objects, not reassignment of variables.
That's probably because reassignment is already covered by using `const`.
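Worth noting `const` only prevents rebinding, not mutation:

    const nums = [1, 2, 3];
    // nums = [];   // error: cannot assign to a const binding
    nums.push(4);   // fine - const says nothing about the value's innards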
Of course, it doesn't help that the immutable modifier for Swift is `let`. But also, in Swift, if you assign a list via `let`, the list is also immutable.
Looks like Rust is the same: https://doc.rust-lang.org/stable/std/keyword.let.html
Erlang doesn't allow variable reassignment. Elixir apparently does, but I've never played with it.
typescript handles that well already
Unless you need the index, you can write: for (const x of iterable) { ... } or for (const attribute in keyValueMap) { ... }. However, loops often change state, so it's probably not the way to go if you can't change any variable.
If you need the index, you can use .keys() or .entries() on the iterable, e.g.
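    for (const [i, x] of ["a", "b"].entries()) {
      console.log(i, x); // 0 'a', then 1 'b'
    }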
Or forEach, or map. Basically, use a higher level language. The traditional for loop tells an interpreter "how" to do things, but unless you need the low level performance, it's better to tell it "what", that is, use more functional programming constructs. This is also the way to go for immutable variables, generally speaking.
There's no difference between for (x of a) stmt; and a.forEach(x => stmt), except for scope, and lack of flow control in forEach. There's no reason to prefer .forEach(). I don't see how it is "more functional."
You use something else like map/filter/reduce or recursion.
`for` loops are a superfluous language feature if your collections have `map` for transformations and `forEach` for producing side effects
Since sibling comments have pointed out the various ES5 methods and ES6 for-of loops, I'll note two things:
1. This isn't an effort to make all variables `const`. It's an effort to make all objects immutable. You can still reassign any variable, just not mutate objects on the heap (by default)
2. Recursion still works ;)
They don't work. The language has to provide list and map operations to compensate.
[flagged]
@dang likely a bot
"@dang" is a no-op.
Email mods at hn@ycombinator.com.
<https://news.ycombinator.com/item?id=42177590>
Making web pages even slower. Normal people complain about it all the time, arguing that modern programmers are lazy.