Figma’s Journey to TypeScript (figma.com)
261 points by soheilpro 24 days ago | 248 comments



Surprising to hear Figma had a custom language that compiled to JS. Even more surprising that it was faster than TS. And then they migrated off it onto slower TS!

Seems to happen a lot though. Company makes custom stuff early on, gets big, then migrates to something "standard".


It is actually a fairly telling story. At the start there is a brilliant individual (Evan) who sets up an entire toolchain + core of the product. They then move on (or get pushed out, or get bored), and with the team (and the product) now being much bigger, things get replatformed to a more familiar, widely used stack. The success of these steps heavily depends on how robust the eng culture is at the organisation. I suspect (no evidence though!) that Evan and the other founders set up an excellent eng culture at Figma, and even if they make a mistake at some point, there is sufficient resilience in place to correct it. All power to them!


It's ironic. For the past 5 years I've been writing type-strict PHP. People love to shit on PHP, yet I found that when I started using strict types my code quality improved, the number of lines needed to produce a result decreased, and the unit tests necessary to produce the same result also decreased.

Then a few months ago I decided to write a TS project from scratch. For the record, I have 18 years of JavaScript experience. What I found was that the biggest barrier to entry was configuring webpack to be "just right". Other devs on my team with a similar level of experience would have their eyes glaze over when webpack came up and get annoyed. For good reason. It took me several days to get it to work right. The fact that the tsconfig has 20 options that can affect the transpiler and doesn't have good docs is a problem. The fact that there need to be two tsconfigs in a React Native project that compiles down to web as a secondary build target is another problem. The fact that you need a very experienced dev to spend days configuring webpack is another problem. Finding information about the right configuration is like the blind leading the blind. Most search results on the topic are riddled with half-truths and nonsense. Many devs rely on a preexisting webpack config, and if they do anything to mess it up they are often completely unable to fix it.

Typescript is fine. I guess. The code produced is nicer. But having to rely on webpack is an issue.

I actually like TS but I wish I didn't need to transpile anything or have to bang my head against a webpack config for days. It's by far the biggest barrier to entry because while you're banging your head against it you're not writing code. And for non-technical stakeholders, when you have nothing visual to show at stand-ups, that can create friction and make the engineers seem like they aren't doing anything.

So far at multiple companies I've had to configure webpack for extremely complex JavaScript-based single page apps, which took me literally months of messing around with it until it worked just right. And until it does work "just right", the non-technical folks think you're wasting time.


> What I found was that the biggest barrier to entry was configuring webpack to be "just right".

As someone who has just spent a whole week trying to plumb Vite + Rollup into an ASP.NET web application, I can relate to this on many levels.

I can produce 90% of our functionality with vanilla JavaScript + a sprinkling of jQuery, but getting something 'modern' in Vue.js to fit into the application comfortably is a bloody chore. Sparing the gory details, it feels like orchestrating a thousand moving parts while being blind, with a gun named 'ship or die' held to your back.

For comparison, EF Core at least gives me logs. C# is a delight to debug. Print statements can tell me what I need. These parts feel holistic.

Yet the web stuff is just so scattered, so much to configure, so many options where if you want to do something even slightly non-standard you are in the dark, mashing the conf files until it works and you aren't even sure why but you have to move on.

This feels different from mastering one language, even though it has a steep learning curve. I hit roadblocks in perl but they weren't as frustrating and it felt like everything was feeding back to a cohesive whole. With webdev, it doesn't feel like that at all. I don't know why, I wish it wasn't so.


Yep, you're walking in one of the voids that is largely ignored by modern front-end web dev. Frameworks like React and Vue advertise that you can easily add them to any page, and that's technically true... but when you have a real-world app built with a backend framework and you want to integrate it with React/Vue in a sane way... good luck to you!

All the pieces exist to make it work, but you won't find much documentation to help you. You'll have to rely on finding blog posts, but of course if the post is more than a year old, most of the libs or tools it talks about will have totally changed. Once you do get everything up and running, you'll often find that the dev experience is less than great.


That is really unfortunate. Webpack is a nightmare and outdated. I wonder how you came to use it? Node has a nice intro: https://nodejs.org/en/learn/getting-started/nodejs-with-type... (skip ts-node and go directly to tsx). Or Deno or Bun run your TS code directly. Modern frontend frameworks like Vue or Svelte have their own tooling, mostly based on Vite and Esbuild. I think it was just bad luck that you came across Webpack ...


I know you're trying to help, but this further highlights the problem with frontend dev.


It’s like Gradle vs Maven for the Java ecosystem.

They do fundamentally the same things but with very different approaches and tradeoffs.

Webpack and Vite are very different approaches to the same problem with different tradeoffs[0][1]

[0]: namely, webpack and its inevitable successor rspack are way more flexible and arguably more powerful, but at the cost of higher complexity and more proprietary features like the webpack/rspack-specific runtime. They're superior in asset handling in many respects, though, and the level of optimization you can reach once you hit a certain complexity threshold is greater than what Vite/Rollup currently has without extensive custom plugins

[1]: Vite or Rollup is most likely what most projects need. I’d recommend always starting there, as most of the advanced and flexible features of webpack/rspack are very much not what most need


Yes, so on the only sizeable TS project I did (which was a library, to be used by other teams) I bypassed webpack entirely and went for a mixture of tsc and esbuild. But knowing to steer clear of webpack (or even - that you can!) is a barrier.


I use TypeScript and esbuild for all my stuff. Even then, I often spend a crazy amount of time getting modules working.

Between the various TypeScript module options and various package.json module options (and various code patterns used), modules make JavaScript way more painful than it should be.

I think most of the JS language standards work over the past 10 years has been awesome, but modules were definitely rushed and poorly thought through, causing years of frustration.


Yep. Webpack is absolutely terrible but still unfortunately seems to have a lot of mindshare.


I agree that config and tooling are the hardest part of getting Typescript working. Everybody is saying use a framework, but if your use case deviates from the frameworks it can get pretty difficult. The use cases that were very tricky to configure for me were:

- SSR rendering of react in an express app (both typescript).

- Trying to get the VSCode visual debugger to work for both the client and server code paths.

- Getting the various test libraries to work correctly (I still can’t get the NYC code coverage library to work).

- Mix of ESM, CommonJS, misconfigured npm packages that don’t expose their types correctly.

I ultimately used Vite, and got things working 90% the way I wanted and called it good enough.


If you’re using vite you should use Vitest with code coverage. NYC is redundant at that point


Never start with Webpack. Use Vite with a template and go from there.


Or just go straight to esbuild. I've found vite just makes things more complicated and slower. Particularly, the "smart reloading" breaks in subtle ways and turning every source file into a request doesn't scale well. This can probably be configured away somehow, but again, that just makes things more complicated.


The request-per-module thing only happens with the Vite dev server: production builds are bundled with rollup similar to webpack.


Remix. Vite done right, mostly pre-configured out of the box.


Vue, Nuxt or Svelte are even better. No need to waste time and energy with React and its peculiarities. However, if you want or must use React, then Remix, hands down.


While I very much like Svelte, it's not fully mature yet and it doesn't have a very deep ecosystem, plus the additional hiring time/cost basically means building anything other than a small-scale solo project with it is going to put you in the red.

On the Vue vs React debate, honestly it comes down to preferring templates vs components. Vue is simpler but there are good reasons for a lot of the React complexity, and React still has a stronger ecosystem and more developers.


But then you have to waste time with Vue/Svelte's particularities ;). It's just a matter of what you're used to imo


> Remix. Vite done right…

Remix is moving to Vite as its default compiler. https://remix.run/docs/en/main/guides/vite


I'm aware, I've been on the experimental vite branch for a while. It is literally "vite, done right"


If you want to spend a minute or two waiting before your server is back up after making any kind of change, Remix sounds about right.

My project is relatively tiny too. I’ve never regretted a choice more than Remix.


My projects reload instantly in WSL, what environment are you using?


A minute or two? A sizeable Rails web app I am working on takes maybe 5 seconds to start...


I'd make the exact opposite suggestion: always use Webpack. There will be a package in the future that requires a particular Webpack configuration, and you don't want to have to figure out how to do that in another bundler.


No thanks. Avoiding those hypothetical future packages is a better tradeoff.


I’m always amazed to see the number of SaaS companies based over here in the more privacy focused, non-Microsoft open source EU centric spaces, that use PHP but without a care in the world for strict typing.

It leads to scenarios where I receive OpenAPI specs that look like this:

   type:
     - string/integer
They just don’t give a shit because this kind of crap works in PHP.

They could use:

   oneOf:
     - type: string
     - type: integer
Which is nastier to deal with in a typed-language client, but at least it conforms to the spec.

So thank you for actually caring about types in PHP.


And even after all the work configuring webpack, did you have readable stacktraces when an error happens?


Thank you. The replies to your post all seem variants of “you should’ve used X instead of Y”, but when you’re transpiling, you’re inviting a world of subtle bugs and edge cases. The added value, if it at all exists, is almost never worth the trouble, IMO.


tsx, esbuild, Bun, Deno, etc...all things that don't require you to use Webpack and just write/run TS...


It's worth pointing out that Evan (co-founder of Figma) also created esbuild.


Aren’t there enough frameworks available to avoid Webpack? Seems pretty low level for app developers these days.

I’ve done Webpack configurations and Browserify before that. I’ll be glad if I never go there again.


As a solo dev with a successful electron app I can say that the 5+ year journey from babel+flow+webpack to typescript+webpack, between two targets (main node and renderer chromium) not to mention native modules, node ABI, dual package jsons, electron itself as a giant shifting foundation… has been one of the most intimidating challenges in my dev career and I’m coming out the other side much stronger and confident. Props to everyone involved.


This is the reason that JS frameworks are a thing. Next is buggy and overbuilt, but Remix is pretty much plug and play, I strongly recommend checking it out.


+1

I’ve been using remix for the last 6 months and I’m super happy with it.


I'm surprised that Remix doesn't get much love in the community. Or is it because Vercel and their influencer team are yelling so loud about Next that we can't hear the Remix people?


I feel like Remix is rising pretty fast. The death of create-react-app has pushed people to frameworks, and Next (while loudly marketed by Vercel) feels overweight and underpolished for people who just want something that focuses on the most common use cases with minimal setup/fiddling, which is where Remix shines.


It’s going to take some time but it’s going to take over.

Next really fucked up with the app router, and people are realizing everything is just a trap to get more customers on vercel.


You want to do a lot but you don't want to pay for it. There is a shit ton of complexity on the web and the current frameworks (ie: NextJS/React/TypeScript) try to hide/manage this complexity but this only goes so far.

As soon as you hit an edge outside of their matrix of management, you open the dark Pandora's box of front-end development.


You don't have to transpile anything, you can just put your TS types in JavaScript comments. Or am I missing something?
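
A rough sketch of what that looks like with JSDoc (hedged; the function and variable names are made up, and it assumes checkJs or a @ts-check pragma so the compiler actually checks the comments):

    // @ts-check
    /**
     * @param {number} a
     * @param {number} b
     * @returns {number}
     */
    function add(a, b) {
      return a + b;
    }

    /** @type {string[]} */
    const names = [];
    names.push(add(1, 2)); // error: number is not assignable to string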


You can, but it's ugly and time-consuming. And makes it harder to parse what's actually a comment.


Do you get any editor support when going this route?


Yes, vscodium just works.


Webpack is legacy.


I remember when I first read this blog post, https://www.figma.com/blog/how-we-built-the-figma-plugin-sys.... Besides now feeling a bit old realizing this was 5 years ago, I remember thinking what an amazing engineering culture they must have at Figma (besides having a bunch of brilliant people). I mean, they talked about essentially trying out a tech path for a month and then deciding that path was a dead end - I find this so rare in startups where there is a lot of pressure to continually demonstrate "progress".

As a corollary, though, I think those kinds of cultures are only possible if your team is composed of primarily brilliant people, because these brilliant people can move faster than most competitors even if they do wander down an unproductive path for a while, and there is total trust that the folks on your team are capable and self-motivated.


If you read carefully they migrated all the perf sensitive parts to C++/WASM. At that point only glue was left and their custom language didn't have a reason to exist anymore.


It happened in the PHP community as well, facebook being the poster child, but Yahoo also had a fair number of internal optimizations and I saw a few other companies tweak their way to get better perf/security.

Then comes a point where the community catches on and has bigger momentum than the company, so it makes sense to move to the standard implementation.

I'd kinda see Google's Borg -> k8s move as slightly similar, though they're the ones inviting the community around a standard they built themselves.


k8s isn't allowed for use in most projects utilizing the internal stack, at least not yet. In fact, it hasn't reached the feature parity necessary to replace any big projects running on Borg.


If other companies can run big projects on Kubernetes, so can Google.


If you genuinely believe that you have a better understanding of how Google's internal infrastructure runs, you should be able to provide better evidence than "other companies can do it".


I'm sorry but this statement is pretty funny to anyone who has seen Google's internal infra. Where are those companies that run 10-20k machine Kubernetes clusters?


Isn't FB/Meta still on Hack + HHVM?


They are. At this point it's a completely different language with its own stack; as far as I know it will be a core component for the foreseeable future.

Now they also have modern PHP and some other languages alongside Hack, from what I understand.


It’s not completely different, it’s still mostly the same language with some big additions. If you read their dev blog, you’ll see they fixed a bug that was also present in php a few months ago.


I don’t think there’s any vanilla php in use at meta


Don't know if it's vanilla, but there seems to be some:

https://medium.com/@aarthimanikandan2006/does-facebook-still...


That’s just some guy that asserts stuff though. I mean, so am I to you, but fwiw I’ve never seen or heard of any non-hack PHP in the current codebase


I don't think Google uses K8s internally. Borg is still here though.


They run k8s on Borg. From what I can tell, as someone who worked a lot on early k8s and with a large GCP customer, they still mostly use Borg internally. k8s is their path to creating cloud portability and neutralizing AWS "network" effects.


That is an impressive misrepresentation of history.

PHP and its community were dying by the time FB used it. People here on HN kept talking php down.

In 2014 FB made their own flavour of php with a bunch of perf features, called hack.

Eventually a lot of the perf features from Hack made their way into PHP.

FB is still on its own flavour. Php community is still dying.


Perhaps I’m misreading your comment, but PHP was definitely not dying in 2004. Nor was anyone talking it down on HN, as HN didn’t exist.


The days of running /index.php for your own forum or script are indeed in the rear view. But it's still very big for small to medium enterprise, where Java + Spring would be excessive. Most of my local web consultancies who produce things like ticketing websites or specialised directories will reach for it.

But maybe the php forums + guestbooks was what you had in mind with 'php community', in that case you have a point. Most of the kids have moved on.


You’re crazy if you think PHP is dying. Development on the language is steadily pushing forward with big QoL improvements coming to 8.4 later this year.

The popular frameworks are still growing steadily, and WordPress is starting to slowly loosen its stranglehold on old versions now that most webhosts don't even offer them. That said, even WordPress will work out of the box with old versions (they just don't write new features against new PHP versions).

This is arguably the most exciting time to be a PHP dev!


Dude, go to TIOBE, the Stack Overflow survey, Google Trends, or basically any ranking/metric that compares programming language popularity and user base, and without any question PHP is getting less and less usage over the last few years, often even ignored. Fewer and fewer job postings mention PHP at all. These are my first-hand observations; if you don't believe it, you are welcome to look it up right now.


Meh, as far as I can see PHP has a ton of very active development around web services and frameworks, which is its core value proposition. PHP as a language should probably slow down in general but the people who use it don't seem to be really dying out or slowing down as much as the bubble leads us to believe.


People hate PHP for no reason. They talk about performance or whatever while building REST CRUD apps. Literally any language can handle that easily and your bottleneck is usually the database. I've scaled startups on PHP to hundreds of thousands of users running on a few cheap EC2 instances. But no one wants to build new PHP projects, focusing instead on Go, Python, or Ruby. I honestly don't get it. PHP devs earn less. The syntax is super easy to pick up. Don't you want cheaper labor?

I've started to learn the ecosystems of the other languages. It's all the same shit. Really.


I think most people experienced PHP as the guilty party behind a lot of really, really terrible LAMP stack projects, but most of the time it was because mod_php was serving requests.

I used PHP to run some service workers managed with supervisord and it was fine. I just get annoyed with the class-based hierarchy but I'd guess they've evolved since 2017 or whenever I used it last.


I think PHP has a gentler learning curve, but you still need the same level of expertise to get something decent out the door. From the recruiting side it's still a PITA to find good engineers, and it's reflected in the final cost of hiring. I might be biased, but moving jobs every now and then had more impact than doing PHP or Ruby (the other contender would be Node.js; I think Go and Python tend to be used for different purposes or as a complement to the web stack).


I’ve written PHP off and on since the .php3 extension was a thing. People had very good reasons to hate PHP then. It’s greatly improved, but largely due to the composer ecosystem helping to paper over the worse bits. The global functions are still an awful mess.


Composer’s means of including packages doesn’t do the language any favours imo — it doubles down on namespaces (and complex PSR 4/7 ones at that) and the cli isn’t particularly intuitive.

To me, what PHP needs is a simple module system with scoped functions and variables, an object literal syntax rather than `new \stdClass`, and first-class simple to use threading/async/promises for concurrent requests and IO.


The best thing about PHP: shared nothing architecture.

The worst thing about PHP: shared nothing architecture.

It works extremely well until it doesn’t really scale anymore.


There was a lot of tension about where PHP was going, but PHP 7 got it out of the tunnel (in no small part because of Hack, as you mention, but not only). Some companies diversified before that, in particular moving to Node.js, but others took PHP from there as it got a lot of attention again, and PHP 8 didn't disappoint either.

There's still a ton of stuff that could be fixed, and PHP will always be talked down in some way or another, but I don't think it's in a bad position as it is now. There are pretty significant codebases newly built on PHP right now, even if it's not making the headlines.


> PHP and its community were dying by the time FB used it.

> Php community is still dying.

https://www.tiobe.com/tiobe-index/php/

https://w3techs.com/technologies/overview/programming_langua...

Yeah I dunno about that one.


Those stats are based on public information so if someone starts a private project you won't even know it. It makes sense that a mature ecosystem would have fewer open source new projects.


The w3techs number is often quoted, but it has never been validated by anybody else, and its methodology is very questionable if not flat out wrong.


I'm not one to make perfect the enemy of good. If it's the best we have, it's what I will cite until better is provided.


My opinion is that there isn't anybody else providing the numbers because it's impossible to get a correct number. Suppose the Google homepage runs PHP, google.com/a uses Go, google.com/b uses Django, netflix.com uses a custom framework running on Node.js, and facebook.com uses an unknown framework. W3Techs will happily tell you that the entire google.com is powered by PHP, fail to attribute Netflix to JavaScript, and won't count facebook.com at all. That's exactly what's happening with their methodology. They only attribute each website at most once and rely on decade-old hints from headers and error pages that are often nonexistent on newer/in-house web frameworks these days.

Want a real example? Everybody knows that Instagram runs Python. But https://w3techs.com/sites/info/instagram.com can't tell which server side language it runs. There you have it.

I would not use the number unless it can be validated, rather than use it simply because it's "the best we have". No, it's not even remotely good.


What nonsense! Just because you don't use it or (as is apparent) don't know anything about it doesn't mean it's "dying".


Faster than TS for a narrow use case is a fairly low bar. But as you scale, it's a bar you have to measure against the available talent on the market. You've got to weigh the cost of a developer who can pick up a completely new framework sight unseen against that of a developer who has documented experience in a platform or technology you're already using. If you go full custom development you can get incredible performance out of literally any language and framework out there. But as your project needs to handle more and more scenarios, the number of developers needed to maintain it reaches a point where you're better off with off-the-shelf software despite it being slower and not specifically suited to your needs.

If you can maintain your Unicorn hiring criteria long term, maybe fully custom stacks are maintainable. For most organizations, they need to move to something that the average hire can maintain going forward. That means big name boring software vendors for the most part.


Weird and quirky things like custom languages are bad for your resume if you're just a user (e.g a dev who uses it) rather than the implementor. That means ambitious people will consider it to be a reason to leave. It also makes hiring harder and onboarding takes longer. You also have a big transformation project to do if you ever drop your custom language.

Sometimes it's worth it if the speed boost is huge, or if you can write safer code, or if you actually want to gatekeep hiring to people who like learning new languages.


Often because the "standard" thing was not always the standard.

Like, all those people that chose Flow now have something "non-standard."


Coffeescript used to be "standard" in Ruby on Rails community.

(insert canned laughter)


CoffeeScript had astonishing success; many of its constructs made it into the ECMAScript standard.

CoffeeScript is still better than JS in many ways - everything is an expression, comprehensions, the existential operator, an extended switch statement, chained comparisons, and an overall terse, readable syntax.

Some things are terrible, e.g. type annotations through clunky comments.


I think the absolutely fatal mistake CoffeeScript made was implicit variable declarations, and how that worked with variable shadowing. Once it became clear that it was downright dangerous not to hack explicit variable declarations in using IIFEs, the entire language became a clunky mess.

I have a ton of respect for the language and all of the stuff it cross-pollinated into JS, but it’s an interesting object lesson in how a seemingly tiny design choice can turn out to be disastrous.


Yes, that was stupid. Also not embracing Flow/TS when they had a chance, while still riding on Atom/Rails.

I guess when exploring uncharted territory it's a bit of a dice roll - you can't keep winning all the time.

Otherwise, a great contribution to advancing the frontier.


>implicit variable declarations, and how that worked with variable shadowing.

Coffeescript was before my time so I never used it, can you give an example of the problems this caused?


Basically, if you want to declare a local variable, you just assign it where you want it declared. It gets scoped automatically to whatever block it’s first assigned in. But if a variable with the same name was already declared earlier (for example in the parent block), then the language doesn’t provide a way to say “this is a new declaration”.

So what can happen is that you have some large block of code where, for example, near the top, you’ve said “x = 1”. Then, maybe a few hundred lines down you have a loop, and inside the loop you say “x = getWidget()”. You think you’re declaring a new variable, but actually you’re reusing a variable from the outer scope. There’s no way to know if you’re accidentally doing this except to search the entire enclosing scope.

It’s even worse in the other direction. You have a small block way down in a function where you’ve said “x = getWidget()” and this is all fine and correct. But you need to add something to the top of the function, and you add “x = 0”. You’ve now retroactively changed the scope of some random variable you weren’t even thinking about.

Edit: Actually my memory’s a little rusty, but I think they actually removed any kind of block scoping entirely by version 1, but everything I said above still applies to nested functions, which you tend to use liberally in CS.
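
Roughly, the compiled output looks like this (a sketch with made-up names; CoffeeScript hoists a single var for the whole function, so the assignment in the loop silently reuses it):

    var widgets = [{}, {}];                    // made-up stand-ins
    var getWidget = function (w) { return w; };

    var process = function () {
      var x;                                   // one declaration for the entire function
      x = 1;                                   // the "x = 1" near the top
      // ... hundreds of lines ...
      widgets.forEach(function (w) {
        x = getWidget(w);                      // the "x = getWidget()" in the loop: same x, no new declaration
      });
      return x;                                // now a widget, not 1
    };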


> existential operator,

JavaScript has had this for a bit now and it is really nice.

https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
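
For reference, the modern JS forms look roughly like this (a tiny hedged example; the config object is made up):

    const config: { server?: { port?: number } } = {};        // made-up shape
    const port = config.server?.port ?? 3000;                 // optional chaining + nullish coalescing
    const title = document.querySelector('h1')?.textContent;  // safe even if the element is missing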


The existential operator can be used as an optional chaining operator, but also in trailing position, i.e.:

    if window?
      env = 'browser'
      ...
is equivalent to:

    if (typeof window !== "undefined" && window !== null) {
      env = 'browser'
      ...
    }
But yes, nullish coalescing and optional chaining that came from existential operator are good.


Why the laughter? CoffeeScript was great. TypeScript is even greater. All IMHO and YMMV, of course.


Some companies built a lot in CoffeeScript. Maintaining that won't be fun.


IIRC it compiled down to readable JS, so one reasonable option is just to delete the CoffeeScript and maintain the generated JS code.


Also it feels like LLMs were kind of born for that type of conversion.


And Prototype.js before jQuery!


Why? Because Evan Wallace is gone. That’s my wild guess.


One of the first points in the article was that WASM became widespread enough that they could migrate the performance-critical parts to C++ and the rest to TypeScript. The necessity for Skew simply wasn't there anymore.


We actually recognized that Skew had become a liability years ago, and Evan worked on a proof-of-concept to remove Skew around ~2020. But as described in the article, benchmarks showed that performance would have been severely impacted (mobile Safari was an especially big problem). The rewrite only became possible once mobile WASM was performant enough for us to move the hot parts of our code to it and we had enough engineering resources to do it safely.


It does smell like someone's pet project. I'm wondering what the folks at Microsoft had not figured out that they indeed had at Figma. Typescript is open source, so why wasn't this optimisation just made as a contribution to Typescript at the time?

Also, was Typescript really as claimed in its "infancy" at the time mentioned in the article? They didn't mention a particular year.


Let's not paint pet projects black. Most things we're using today were somebody's pet projects, including Linux, LLVM/Clang, Swift, etc.


>Let's not paint pet projects black.

I don't follow. Where did I do this?


> It does smell like someone's pet project.

"Smell" has negative connotations in English. If you say that "this code smells" it means it's bad, and likewise it's very easy to read your "It does smell like" as you meaning it's a bad thing.


Thank you for the English lesson, but it's a big leap from this supposed connotation to "painting pet projects black".


Not so great a leap; every part of your comment implies you think they made a strange decision going the route of using someone's pet project.


It's something Typescript doesn't want to do. Their goal is to remove the type annotations, maybe add some code for backwards compatibility, and otherwise do as little as possible to stray from the untyped JavaScript equivalent.
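
A tiny illustration of that philosophy (hedged sketch): the emitted JS is essentially the input with the annotations stripped.

    // TypeScript in:
    const greet = (name: string): string => `Hello, ${name}`;
    // JavaScript out (roughly):
    //   const greet = (name) => `Hello, ${name}`;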


With enough investment, a pet project can become a mature platform.


Unix and C were pet projects until they enabled secretaries to better write patent proposals.


They likely realized that maintaining their own language is very risky and going full on standardized means they can take advantage of massive progress in the entire ts ecosystem.

Custom is great right up to when the lead eng leaves.


Hey, I worked on this project! Wrote a bit more on Twitter here: https://twitter.com/andrew_k_chan/status/1786769203912925477

The title of the post is misleading because we used Typescript at Figma for nearly a decade in other parts of the codebase, and there was more Typescript than Skew for almost that entire time. As the blog post explains, Skew was used in our mobile engine (and eventually for our prototyping player, mirroring feature, and maybe one or two other product surfaces I'm forgetting).


Skew wasn't just a little bit faster than TypeScript.

According to Evan Wallace (former Figma CTO), it was 1.5x to 2x faster due to better optimizations enabled by stricter type system.


I really wish browsers had continued to develop a "use strong" mode for JS. It sounded like there were significant challenges, but curbing some dynamism in exchange for more predictable optimisation sounds like a great tradeoff for production-quality apps.


Take that to the extreme and you get WASM, or its predecessor asm.js.

It's also what JITs like V8 do internally; you'll get major performance hits if you do weird dynamic things.


If you want complete type erasure, then a system that ensures monomorphism is critical to good JS performance.

If you can leave the type system in place, then you could move toward an ML-style type system and see massive improvements within strictly-typed parts of the code as you'd only need type guards and inline caches at the boundaries of the strictly-typed code.


> Take that to the extreme and you get WASM, or it's predecessor asm.js

Only if your code was pure number crunching.

Normal code can't just go through some belt tightening to become WASM/asm.js code. Compiling it down that way is very different from making the control flow more understandable to the optimizer.


I think taken to the extreme "use strong" is more like putting Dart in the browser. But yes, you can sort of get the benefits of use strong with the right discipline.


Yeah, I wonder where that performance increase actually comes from. This[1] lists their optimizations.

My guess is mainly the integer optimizations. And I guess making sure that functions are always called with the same argument types. The other optimizations are already done by the JITs.

[1] https://evanw.github.io/skew-lang.org/



> According to Evan Wallace (former Figma CTO), it was 1.5x to 2x faster due to better optimizations enabled by stricter type system.

It's probably not as simple as that. If hotpaths are optimized, the 2x advantage is quite likely to vanish.

But then one could say - well, you're forced to write optimized JS in several places and that impacts readability. Sure, but the trade off was to use an entirely new compile-to-js language with less tooling support and mindshare. It seems now that it wasn't worth it. The blog post sort of sugarcoats this contentious previous technical choice.


> It seems now that it wasn't worth it.

While I'm not going to debate this particular instance, I would caution against assuming all moves (with or without blog posts written about them) are indeed the best and wisest moves that could be made given the situation.

It's incredibly hard to look at our industry at large and declare that teams/companies are doing the best thing they can at any given point (where "best" is defined here as the most prudent thing, all things considered).


The 2x speed difference was the most surprising part of the whole article to me!

I wonder if it’s possible to limit one’s use of typescript to just the subset that gives that same performance…


> To complete an operation like `const [a, b] = function_that_returns_an_array()`, JavaScript constructs an iterator that iterates through the array instead of directly indexing from the array

This is interesting. Why doesn't JS just directly index arrays for destructuring?


Any object can become iterable by adding Symbol.iterator, and destructuring should work for them. You can even patch Symbol.iterator on arrays itself, and the VM has to cope:

    > Array.prototype[Symbol.iterator] = function*() { yield 1; yield 2; yield 3; }
    > [...[4, 5, 6]]
    [1, 2, 3]
The terrible performance of the iterator protocol was discussed and ignored at the time, by saying that escape analysis would solve it [0]. Nearly 10 years later, and escape analysis has still not solved it. It's extremely GC-hungry and still sucks. It's just a bad spec, designed by people who are not performance-conscious.

It might make sense for engines to specialize destructuring assignment and spreading of Arrays to remove their iterator protocol overhead (if the user hasn't patched Symbol.iterator), but that's a whole other can of worms.

[0] https://esdiscuss.org/topic/performance-of-iterator-next-as-...
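
To make the cost concrete: conceptually, the destructuring form has to go through the iterator protocol while indexing doesn't (a hedged sketch, not actual engine internals):

    const arr: number[] = [10, 20];
    // Roughly what `const [a, b] = arr` implies per the spec:
    const it = arr[Symbol.iterator]();
    const a = it.next().value;      // allocates an iterator result object
    const b = it.next().value;      // allocates another one
    // The allocation-free alternative on a hot path:
    const a2 = arr[0];
    const b2 = arr[1];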


Because things like proxies exist, I guess.

Also, what's even more crazy is that destructuring {0: foo, 1: bar} can be faster in some javascript engines when destructuring an array with two or three element.


Quite a bit faster it looks like: https://jsbench.me/6zlvrupqmj/1


91.86% slower on my Android phone with Firefox, so it definitely depends on which JS engine.


Yeah, absolutely. With the linked benchmark I just got the following results on desktop Linux:

- 17M ops/s ± 1.65% for array destructuring in Chrome

- 169M ops/s ± 0.78% for object destructuring in Chrome

- 545M ops/s ± 3.68% for array destructuring in Firefox

- 81M ops/s ± 0.8% for object destructuring in Firefox

So per the principle of "optimize for the bottleneck", one could choose to use object destructuring, because the slowest Firefox is still comparable to the fastest Chrome option. Or when, for example, you're running Node on a server and know which JS engine you're using.


77M ops/s ± 0.31%; 1.3B ops/s ± 0.17% for Safari. Difference is huge

800M ops/s ± 0.5%; 837M ops/s ± 0.32% for Chrome on the same computer.


What a cool website!


They don't really mention the ongoing developer-experience impact (even if it is outweighed by the popularity of TypeScript) of losing Skew's niceties; they just talk about the one-off transpilation when migrating the codebase. For example, the fact that it's easy to end up with files that need to be imported in the right order in TypeScript, or things will break; or the fact that destructuring is slow and so should not be used (when performance is at all important). I know from using TypeScript for years that there are dozens of these gotchas (some inherited from JavaScript, some not), requiring an extensive style guide at the very least, especially if you have a lot of engineers.

I wonder if some engineers were sad to see Skew go.


It was certainly important for us to make sure developers were happy with the result. That's why our rollout included stuff like a phase where developers continued writing Skew but their changes automatically checked-in generated TS. This way, devs could see what it would look like in PR review and report issues. As for performance and runtime correctness, you're right that there are some gotchas with TS. We caught issues like the array destructuring one with instrumentation and strict monitoring.

I was definitely sad to see some features of Skew go. For example, operator overloading and integer types. But the move was ultimately a decision the whole team made, and I agree with them that it was the right one.


>>> Modern JavaScript features like async/await and a more flexible type system

So Skew only had callbacks?


Maybe `Promise`s?


For people like me who know little about Figma, what motivates their use of WebAssembly?


You probably know that Figma is a UX design software. This means it’s basically a graphics program: you draw shapes, you scroll and zoom around. It does that extremely well. It’s unbelievably snappy even on a very large canvas with many complex UI screens. Very few desktop applications run nearly as well these days. I’m convinced that this kind of optimization is an important part of their success.


unbelievably snappy until you build something complex with complex components that have a lot of hidden variants, which exponentially scales the number of existing layers, and then it goes to dogshit with <10 fps, lag, and screen freezes (even on Apple silicon)


That sounds more like a "Doc, it hurts when I do this" problem though.


Yes, but it’s hard to explain that to a designer without technical background.


I'm curious about how it all hangs together, WebAssembly and Typescript.

The very first mention of WebAssembly in the article:

"Some years after WebAssembly obtained widespread mobile support, we replaced many core components of our Skew engine"


I'd assume all the graphics computation happens in C++ code via WebAssembly, which is then rendered in the browser via WebGL. The Typescript part is the glue and all the non-gfx parts of the interface, like the top bar / sidebars / etc.

Since Figma is also all about multiplayer, I imagine they might have a system that takes changes to a document, packages them up in a compact binary format, and then sends that over the wire (to Figma or to other connected clients). A decent target for a WASM module would probably be that serialization/deserialization step.
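
For the curious, the general shape of that kind of TS-around-WASM glue looks something like this (purely a generic sketch, not Figma's actual code; the module path and export name are made up):

    async function loadEngine() {
      const { instance } = await WebAssembly.instantiateStreaming(
        fetch('/engine.wasm'),    // made-up module path
        {}                        // import object with whatever the module expects
      );
      // WASM exports are untyped from TS's point of view, so the glue asserts a shape:
      return instance.exports as unknown as {
        applyEdit(ptr: number, len: number): void;
      };
    }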


Vector editing has a lot of CPU bound tasks. Evan has some writing regarding the transition to WASM https://medium.com/figma-design/webassembly-cut-figmas-load-...


Thanks, that is helpful.


They started with the goal of Photoshop in the browser.


Photopea is great and replaced my need for PS. I only do light weed editing though


> light weed editing

Sounds like getting stoned and making a meme or something


Haha. Auto-correct. I think I meant light "work" editing maybe??


This other blog post has some interesting details about how Figma wrote a custom TypeScript DSL + compiler to solve security problems (permissions).

https://www.figma.com/blog/how-we-rolled-out-our-own-permiss...


This pains me a bit. Every bigger company has its own in-house tooling, language, Kubernetes. Why not share?

If Skew had been open sourced, maybe it would have become a better TypeScript.


Skew is open source, but no longer maintained: https://github.com/evanw/skew


I’m not a fan of this take and it usually comes from lack of OSS experience.

Just because something is open source, it doesn’t mean you get free contributions. Every non-trivial PR must be followed by lengthy reviews, discussions and possibly rewrites.


I like TypeScript and we have a full-stack system on TypeScript but it's not perfect. Configuring TypeScript for monorepos is a nightmare. Having to make sense of it with internal packages under a pnpm monorepo requires lots of manual tsconfig.json work to make all of the paths work with each other. And our production toolchain was basically unmaintainable until the excellent tsx package became available.

It's also crazy slow. We're having issues with Zod where it slows down our TypeScript language server performance significantly, so as a result we've had to introduce project references and disable project reference redirects.

All-in-all, there's plenty of work to be done to make TypeScript better. Especially in monorepos, and especially in making it performant.


Do note that Zod is a bit of an outlier in how slow it can make type checking, e.g. https://dev.to/nicklucas/typescript-runtime-validators-and-d... one of the tests hits nearly 300ms, but no other library even touches 100ms.


Zod is known to be slow. Typebox for example is much faster - not saying you should switch, but that it's not a Typescript issue or something that needs to be addressed by Typescript.

Your monorepo issues sound like you didn't use the same config in all the packages? If that's the case, enforcing the same config and coding standards would be the first thing I would fix. Again, not a Typescript issue. Or did you use ts-node before tsx? Yeah, tsx is much more robust. It just works.


We already do all of this, thanks.

We’re considering Typebox but it’s a big lift and the author of Typebox, while responsive in GitHub, doesn’t seem interested in improving interoperability and documentation regarding usage with tRPC.


> I like TypeScript and we have a full-stack system on TypeScript but it's not perfect. Configuring TypeScript for monorepos is a nightmare.

It isn't TypeScript that needs to support your particular set of tooling and library choices, but the other way round. We have a mid-sized monorepo (multiple apps, many services) which is mostly typescript. It works alright with a boring npm workspaces based configuration.


Zod is recommended by most modern type safe packages. Also this is not the first time we’ve run into TS performance issues. MUI also suffers from poor TS performance.

I agree with your assessment re: library choices in theory, but it sucks to run into these DX problems that you wouldn't normally run into with other languages, like Go for example.


Saying TypeScript is slow because Zod is slow is like saying C++ is slow because JavaScript is slow. Not to say that TypeScript is quick in any way (how could it be, as something written in JavaScript). But letting TypeScript execute code to infer types in a large-scale application seems like a self-inflicted issue.


What do you mean by that? I don't think Zod is doing anything special as far as the TS type system goes, although its types are necessarily complex to make its "magic" work. But it's not executing code to infer the types


But TypeScript IS slow. It just so happens that the issue is currently with Zod but JavaScript in general has always been less performant than other lower level languages. This is why JavaScript tooling is all built in other languages.


It’s interesting to read comment threads of people that are dead set against Typescript. It’s a tool that has very few downsides and that improves nearly every single line of code you write. Either they’re scared to learn something new, not willing to take the time, or misunderstanding how useful it is. For anyone reading these comments and agreeing with Typescript naysayers, I would think more about why the commenter and yourself feel that way. You’re putting yourself at a big disadvantage.


As with anything: "it depends"™. I did not notice "every single line of code" getting better at all. Yes, it makes things easier on a large team where people do not have time to do codebase discovery - or where people are moved around to be highly interchangeable - on big codebases. Yes, static verification can help those teams and those codebases.

But it also introduces a lot of extra work "just to appease the type system". It rarely improves performance (if ever). Because TS has no runtime inference/validation, working with larger libraries or the browser can be a chore because half of your code are type signatures or casts.

So - not necessarily a naysayer, but I do believe that TS is oversold and with smaller teams/projects it might be slowing you down as opposed to helping.


I manage a relatively junior developer who has been using ts-ignore statements a couple of times. I have said to him that every time he feels inclined to either use ts-ignore or do type coercion, he should call me first.

Every single time it is a reasoning flaw implementing a solution that is subpar and bug-riddled. Had they just let the types guide them, they would have become better developers and not broken the application.

I am curious though: can you provide a snippet where types would be a detriment?


The worst I had to deal with was converting anything browser-native into data structures that would satisfy the type checker (dealing with native references), and the whole "struct or class instance" dichotomy. Specifically - when there is a lot of DOM-native input (like drag&drop events and their targets and the targets of their targets) that have to be "repackaged" into a TS tree (ending up with properties which would be a JS-version of void*).

An example of what I call "ceremony" would be

  interface BlockIndex {
    [key: string]: UploaderBlock;
  }
  const perServerId = {} as BlockIndex;
  uploaderFiles.map((fe) => fe.blocks.map((b) => b.serverId && (perServerId[b.serverId] = b)));
While somewhat useful, this is in internal code which never gets used directly, and there are 4 lines of ceremony for 1 line of actual code.


The ceremony is caused not by TypeScript but by your misuse of map. You don't need to create perServerId as an object first. Instead you could flatten fe.blocks, then filter by b.serverId, then map to a key/value array and use Object.fromEntries to turn this into a keyed object.

Something like:

    const perServerId = Object.fromEntries(uploaderFiles.flatMap(fe => fe.blocks).filter(b => Boolean(b.serverId)).map(b => [b.serverId,b]))
And typescript infers the types correctly. But I still wouldn’t write it as one line, and I’d use lodash instead.


For most frameworks these typings are built in (e.g. React).

My expectation is that there are some packages/DOM typings so you don't need to write them yourself?

Regardless, your point stands: typing external dependencies is a pain.


Duck typing can lead to a false sense of security when you /think/ you have Foo when in reality you have Bar with the same shape.

Also Typescript sucks at keeping track of type changes in a single scope. While in Rust I can assign string to foo and then update it with int, I can't in Typescript. This leads to worse types or worse code for the same operation. Combined with typescript's lack of statements as values, conditionally initializing a value is pretty obtuse.

Those are the issues that come to mind right now.


> Duck typing can lead to a false sense of security when you /think/ you have Foo when in reality you have Bar with the same shape.

This is literally always your problem with javascript, its only sometimes your problem with typescript. It's a weird argument.

> Also Typescript sucks at keeping track of type changes in a single scope.

Isn't this considered a very bad practice? Also rust does not allow this, it only allows shadowing.

> Combined with typescript's lack of statements as values, conditionally initializing a value is pretty obtuse.

Can you give an example?


For the first one: It's not an issue in JavaScript because there isn't some compiler telling me yeah that's fine, I have to confirm myself.

For the second one: I know it is shadowing, what I mean is I find commonly that I'd like to have it in Typescript as well. In JavaScript is not necessary since I can just use the same variable.

For the third one: If I have some string variable that needs to be created from either one set of instructions or another, in Rust I do exactly that:

let foo = if x { ... } else { ... }

In ts your options are making it mutable undefined and mutate it inside the if else, using a very weird unreadable ternary, using an IIFE that returns into the constant, or creating extra functions to move the logic out. None of these are even close in readability, locality, or soundness to the rust example.

I find the _combination_ of those things that make it harder to write ts than js.
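
To make that concrete, the usual TS workarounds look something like this (made-up condition and helpers):

    declare const cond: boolean;
    declare function computeA(): string;
    declare function computeB(): string;

    const viaTernary = cond ? computeA() : computeB();  // fine until a branch needs statements

    const viaIife = (() => {                            // IIFE returning into the constant
      if (cond) return computeA();
      return computeB();
    })();

    let viaLet: string;                                 // or: mutable + assignment in the if/else
    if (cond) {
      viaLet = computeA();
    } else {
      viaLet = computeB();
    }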


1: You are similarly able to confirm yourself when you ducktype in TS, regardless: once you've ducktyped once in TS you are then at least helped by the compiler. Again, this is really not a good argument at all.

2: This is a programming practice I never see and would seriously question if its necessary ever, let alone "commonly". I think you may have picked up bad practices from writing in dynamic languages. Please see this for a few example arguments against this practice: https://softwareengineering.stackexchange.com/questions/1873...

3: You are now debating that Rust has better typing than TS, which makes sense because Rust is made from the ground up to have extremely well done static type checking, whereas typescript has to comply with dynamic typing originating from JS. It follows trivially that Rust has the better design because it has more freedom to do what it wants. JS < TS < Rust


I am curious about any example where changing a variable's type within a scope is more performant or more readable.

It is not really an argument against TypeScript that JavaScript is so bad that you need to spend time tracking your changes.


While in Rust I can assign string to foo and then update it with int

When do you need to do that? Can you give an example?


I suspect they're talking about shadowing. You can't change the type of an existing variable, but you can create a new variable with the same name but a different type.


You can use branded types for the first case.
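
E.g., a minimal branded-type sketch (names are made up); the brand makes structurally identical types non-interchangeable at compile time:

    type Foo = string & { readonly __brand: 'Foo' };
    type Bar = string & { readonly __brand: 'Bar' };

    const makeFoo = (s: string): Foo => s as Foo;

    declare function useFoo(f: Foo): void;

    useFoo(makeFoo('hello'));   // ok
    // useFoo('hello' as Bar);  // compile error: Bar's brand doesn't match Foo's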


Even as a one person developer, you inevitably need to come back to old code and understand what's happening. Types help with that. The size of the team or codebase is irrelevant.


Small projects have a habit of getting bigger and small teams have a habit of growing also - usually to deal with the mess of the small project that is now bigger


Is that a bad thing? If you're building an MVP, do you really worry about how 2k developers are going to work on this 10 years from now?

Requirements change and the code base needs to adjust with those requirements; that's gonna happen no matter what. I've met a lot of people trying to predict future requirements, deciding to overengineer today for a brighter future. I have very rarely seen anyone guess the future requirements accurately.


Small projects become bigger projects much faster than that! I’m not suggesting that anybody should think too far ahead when it comes to building mvps, but if it’s a choice between a typed language and a dynamic one like JavaScript, baking a poor decision in early is going to hurt later. And later is much sooner than you think.

That’s not going to negatively affect your initial velocity, if it does, the team isn’t strong enough.

If the project is just a one off website or something genuinely small, sure, who cares? Otherwise it's worth realising that you'll be dealing with the fallout of poor early decisions pretty quickly.


> That’s not going to negatively affect your initial velocity, if it does, the team isn’t strong enough.

This. Note also that "a poor decision" might as well be "have developers fight the type system instead of delivering UI and pivoting if users don't like it".


There are also cases where small teams that have grown grow disproportionately to the size of the product, and while the product is set up in a fairly sane way (and there is little wrong with it!), having 20 fresh people swarm into it destroys both the architecture and the execution. And with a small team, enforcing cohesion for both of these is much easier! So a small project might as well stay small, but this should be somewhat of a priority.

Mythical man month in action.


Some people also hate parking between the lines and returning shopping carts at the grocery store. Those are similar in that they have negative value to the individual but help the community around them.

TS often can interrupt an individual's flow, so feels like a negative value. It's only when the whole team is using it on a bigger codebase with lots of changes that the benefits start to manifest.


Not just with teams, going back to a solo project after some time is so much more of a hassle if you don't have any types to guide you.


A million times this. Many a time I have done something "clever" to elegantly solve a problem in Javascript, only to come back to it a year later and not understand what the hell I did. The context for the problem wasn't fresh, so I didn't understand why I was doing that "cleverness", nor what restrictions there were on that solution, etc.

I rewrote one of those projects in Typescript a while back, and came across a similar "clever" solution (mainly having to do with dates having potentially multiple sources, so being in potentially multiple formats), and it made the code _infinitely_ easier to understand. So much so that when I came back to it recently, one quick glance at the types for that section of code gave me all the information I needed to confidently extend that code without worrying about bizarre runtime errors.

People forget that even in single-person teams, you're actually working with many different "people" over the lifetime of the project, given how different you and your understanding of the context of your code will be over time.


Imagine you come from a small town where there are no parking lines at all, and everyone efficiently parks on unmarked blacktop in a respectful way.

Now imagine you go to a big city where they have a bunch of lines in the parking lot and people only half use them correctly, parking over the lines, diagonal, etc.

The existence of lines doesn't guarantee good behavior. The absence of lines doesn't guarantee bad behavior.

This is the argument I see for javascript-only folks who don't necessarily enjoy using "the world's most bloated javascript linter".

For the record, I am a Typescript enjoyer and I use it in my personal projects as well as professionally, but even I can admit that it's not automatically superior to javascript and it has a number of really frustrating and time-consuming downsides.

It's very easy to type the args and returns of a function and protect callers, but it's much more challenging to work with types between libraries and APIs all together. Lots of `as unknown as Type` or even the dreaded `any` to try and cobble the stack together.


100%. Great to have type consistency. Terrible to deal with similar but conflicting extended types in an enterprise codebase, each carrying minor changes because someone couldn't figure out a compiler issue 5 years ago.

For the record, I don't like the syntax either. Combining ES object spreading with TS type annotations makes for difficult reading, in my opinion. Why settle for this bastardized language and not just compile something made to be strongly typed into JS?


That's apples and oranges, though. If you have a dev team that "parks via the shuffle algorithm", sure, painting lines isn't going to help.

But if you have a dev team that is taking the time to efficiently park in a respectful way, if you paint lines, _you're going to make that parking job a hell of a lot easier to do!_ And THAT's the big win of Typescript.


> the dreaded `any`

You're dreading javascript


This kind of aggressive "there is something wrong with you if you don't have the same preferences and priorities as me" is such a turn-off.

I don't use it because the compiler is just too slow; waiting 2.5 seconds for even simple files is a massive pain. I want the old "CoffeeScript experience" where you compile stuff on-demand in development and output errors in the webpage or stderr. It works very well, is low complexity, and requires almost no effort. But you can't as it's just too slow.

esbuild doesn't typecheck, so it's not an option. And hugely complex background builders are, well, hugely complex, so they're not an option either.

TypeScript-the-language may be nice, but TypeScript-the-tooling is not particularly great.

And even if this was solved: any build step will add complexity. The ability to "just" fetch /script.js and "just" edit it is quite nice. This is also why I've avoided CSS pre-processors since forever (needed a bit less now that variables are widely supported).

Of course different projects are different and for some projects these downsides are less pronounced. There is no one perfect solution. But there are definitely downsides to using TypeScript.


I don't think anti-TS people are 'scared to learn something new'. I'm sure most of those people write TS on a daily basis, because it's an industry standard right now.


I'm not against TypeScript, but I don't really see the massive advantage. I rarely see problems that are due to typing, and the downside is usually limited as I keep my JS on the frontend, not the backend. Regular JS/ES6 just flows better.


>I rarely see problems that are due to typing

This is a fallacy similar to the Blub paradox: if your language has a weak[1] type system, then it isn't capable of recognizing many problems as "type error". But stronger type systems can express stronger invariants. So something that isn't a type error in one language will be a type error in another. This changes how the programmer conceives of problems.

Example: missing a case in a switch statement isn't a "type error" in C or Java, but it is a "type error" in languages like Rust or ML, because they have sum types with exhaustiveness checking. Other examples: array bounds checks can be eliminated with dependent types; lifecycle bugs like use-after-free and double-free can be eliminated with substructural types.

[1] "weak" in an informal sense of "not very expressive"


The real Blub paradox to me is that the most powerful and expressive language is best characterized by minimalism at the language level.


Correct me if I'm wrong, but doesn't the Blub Paradox imply that languages dedicated to Code Golfing are at the pinnacle of expressiveness, and look down on everyone else's languages (for their extreme verbosity, compared to the golfing languages)?


No I don't think it's about compactness of expression, but rather what it's possible to express at all.


Yeah or even simple typos, or mixing up the order of arguments, are things that are hard to catch in regular JS (except at runtime) but trivial in TS.

I suspect a lot of people might have had bad experiences with codebases which overuse complex types or trying to type things like Redux which is messy. When I use TS for personal stuff I’ll typically be a bit loose about things like any in places where I don’t care (for now) and I feel it doesn’t add much overhead, but I have been using it for a long time so it’s become second nature.
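
For instance, a misspelled property name (hypothetical names, just to illustrate) is a silent `undefined` in plain JS but a compile-time error in TS:

  interface User {
    firstName: string;
    lastName: string;
  }

  function greet(user: User): string {
    // return `Hello ${user.fristName}`; // error: Property 'fristName' does not exist on type 'User'
    return `Hello ${user.firstName}`;    // the same typo in plain JS just yields "Hello undefined"
  }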


This is a very fair comment, and you seem open to understanding why types are useful.

"problems that are due to typing" is a very difficult thing to unpack because types can mean _so_ many things.

Static types are absolutely useless (and, really, a net negative) if you're not using them well.

Types don't help if you don't spend the time modeling with the type system. You can use the type system to your advantage to prevent invalid states from being represented _at all_.

As an example, consider a music player that keeps track of the current song and the current position in the song.

If you model this naively you might do something like: https://gist.github.com/shepherdjerred/d0f57c99bfd69cf9eada4...

In the example above you _are_ using types. It might not be obvious that some of these issues can be solved with stronger types, that is, you might say that "You rarely see problems that are due to typing".

Here's an example where the type system can give you a lot more safety: https://gist.github.com/shepherdjerred/0976bc9d86f0a19a75757...

You'll notice that this kind of safety is pretty limited. If you're going to write a music app, you'll probably need API calls, local storage, URL routes, etc.

TypeScript's typechecking ends at the "boundaries" of the type system, e.g. it cannot automatically verify that your fetch or localStorage calls return the correct types. If you're casting, you're bypassing the type system and making it worthless. Runtime type checking libraries like Zod [0] can take care of this for you and are able to typecheck at the boundaries of your app so that the type system can work _extremely_ well.

[0]: https://zod.dev/ note: I mentioned Zod because I like it. There are _many_ similar libraries.
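
As a rough sketch of what validating at the boundary looks like (the Song schema and endpoint are made up for illustration):

  import { z } from "zod";

  const Song = z.object({
    id: z.string(),
    title: z.string(),
    durationSeconds: z.number(),
  });
  type Song = z.infer<typeof Song>;

  async function fetchSong(id: string): Promise<Song> {
    const res = await fetch(`/api/songs/${id}`);
    // parse() throws if the payload doesn't match the schema, so the
    // static Song type can actually be trusted past this boundary.
    return Song.parse(await res.json());
  }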


Do you ever see null or undefined access errors? As a TypeScript developer I haven’t seen one for many years.

Also, when you have types it changes how you code itself. When I change a schema or refactor some function, I don’t need to think at all to make sure I’ve updated all the code that depended on the old schema or API; just fire the TypeScript compiler and it tells me everything that needs to be updated.

I’ve also not seen any issues for a long while where I’ve missed some conditional case, because I use discriminated unions with switch statements more, something that looks weird in normal JS but is very useful with types, since it tells me if I missed a case automatically.

Add that I’m managing a team of engineers, and so I can easily make sure they’re also not missing cases either, by setting the convention and having them see the light.

Putting aside other things like for instance always knowing that we’ve validated inputs for API endpoints since unvalidated inputs are the unknown type and therefore effectively unusable; or always knowing we’ve parsed and serialized dates correctly since we use branded string types to distinguish them from any other string with 0 runtime impact; the list goes on.

So yeah, it might just be the case that you haven’t actually internalized what coding with types even means, so you’re unable to imagine how it can help you.
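
For reference, a branded string type can be as small as this (a minimal sketch; the IsoDateString name and helper are illustrative, not any particular library's API):

  type IsoDateString = string & { readonly __brand: "IsoDateString" };

  function toIsoDateString(d: Date): IsoDateString {
    // The brand exists only in the type system; at runtime this is a plain string.
    return d.toISOString() as IsoDateString;
  }

  function scheduleReminder(at: IsoDateString): void {
    /* ... */
  }

  scheduleReminder(toIsoDateString(new Date())); // ok
  // scheduleReminder("2024-01-01");             // error: a plain string is not branded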


I always feel like those comments are written by people working on 2-person projects who have never worked in a 50+ person shared codebase, and who don't understand that a world different from theirs exists, and what challenges that brings.


Please don't sneer, including at the rest of the community.

https://news.ycombinator.com/newsguidelines.html


It's not that it's bad. But sometimes the project and the team are not big enough for its qualities to matter. And you lose a little bit of readability with type-intensive code.

And nicely written TypeScript looks awesome, but badly written TypeScript can be a huge mess, as with any language. TypeScript purists sometimes forget that the language is just one part of a nicely written and designed system.


I would describe typed code as more readable, not less. I take “readability” to mean ease of understanding, not how much my code sounds like written english. Not knowing what the type of something is makes understanding harder.


Inferred types seem to be an indication that even the most type-safe languages (e.g. Rust) recognize that explicit type annotations hinder readability, at least in some ways.


>You’re putting yourself at a big disadvantage.

Why not just appreciate the diversity of opinion and move on, rather than lecture people?


It's the Great Typing War all over again.

Some people feel more comfortable with JavaScript, Common Lisp, Lua, etc.

Some people feel more comfortable with TypeScript, Typed Racket, Luau, etc.

And that's okay.


> It’s a tool that has very few downsides and that improves nearly every single line of code you write.

Sometimes I just don't feel like dealing with those very few downsides though, but I can accept it's mostly personal preference.

At my age, sometimes I just don't want to deal with:

1. Yet another configuration file (tsconfig.json in this case). When something breaks, having one more place to look at is not something I want. The more extra files like this are needed for the development environment to even work (as in, something undesirable happens if you remove them), the less confidence I have in the project's long term reliability/stability.

2. That same configuration has misleading naming. The `"strict": true` setting should be called `"recommended": true`, or at least `"preset": "recommended"`, because it's not even strict. I would expect this `strict` flag to enable everything in the most restrictive way possible, and let devs disable the checks they don't want. In its current state it doesn't enable strict checks like `noFallthroughCasesInSwitch`, `noImplicitOverride`, `noImplicitReturns`, `noUncheckedIndexedAccess`, `noUnusedLocals`, `noUnusedParameters` (I might be missing more); see the sketch after this list.

3. Related to previous point: Inconsistencies between projects. So I work on one project with strict settings, tsc properly mentions possibly undefined accesses, etc; and then I move to a different project, and if I forget to context switch ("TypeScript config is different here"), I could be accidentally trusting the compiler to keep undefined accesses (and other stuff) in check, when it's not actually doing so.

4. Last time I checked, I couldn't just have a git repo "foolib" that is 100% TypeScript (100% .ts files, zero .js files), and `npm install` that repo on a separate project, and have it Just Work™. There's always extra steps that need to be done if you want to use .ts files from a separate package (usually compile to .js and install that; or using a bundler (read first point again)).

5. Why does the "!" (non-null assertion) operator even exist? Or at least, why isn't there a flag to forbid it (the strict flag, for example)? In my experience, using it is just developer laziness, where someone doesn't want to write proper checks because "it's noise".
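
To make point 2 concrete, here's a sketch of the kind of config needed today to get what "strict" sounds like it should mean (these flag names exist in current tsc; adjust to taste):

  {
    "compilerOptions": {
      "strict": true,
      // Checks that "strict" does not currently turn on:
      "noFallthroughCasesInSwitch": true,
      "noImplicitOverride": true,
      "noImplicitReturns": true,
      "noUncheckedIndexedAccess": true,
      "noUnusedLocals": true,
      "noUnusedParameters": true
    }
  }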

---

Those 5 points came off the top of my head so I'm almost certainly forgetting stuff.

It's mostly "death by a thousand cuts" kind of stuff, so sometimes I might not mind, but other times I might not be in the mood to deal with it, and that heavily influences my decision to go with TypeScript (keeping it approachable to as many people as possible) or a different language/ecosystem altogether.

Yes, I could "just" write a package that I can just npm install and it autoconfigures TypeScript and other stuff for me (and I have done so, for my own sanity). But I shouldn't need to do that, and it's too brittle for my taste.


Don’t let perfection be the enemy of the good


[flagged]


Please don't cross into personal attack, no matter how wrong someone is or you feel they are.

https://news.ycombinator.com/newsguidelines.html


People think typescript is really the silver bullet. If a developer writes crappy code, using tool X will not make them write great code, and the tool may not improve the project either. It's really tiring to work with people like you who are morbidly in love with a tool.


[flagged]


Why don't you like it?

Personally, I don't like it either. Firstly - entirely irrationally - because I don't like Microsoft. I grew up with them acting incredibly hostile towards the open source community and the rest of the industry, and I find that hard to forget. I know that has nothing to do with the merits of TypeScript, but I can't say it's not there.

Secondly, I don't like the complexity incurred by any compile-to-JS language. For Figma I bet it's worth it; I bet they did their homework. But the number of tiny projects I've seen that are an unreasonable maintenance burden because of all the packages and toolchains they use... it'd be funny if it weren't sad.

I might still use TypeScript, but for me it's more of a "give me a reason to use it" than a "give me a reason not to use it" kind of situation.


You may like JSDoc[1] if you just want some type-safety from the IDE without the compilation overhead.

It’s done wonders when I’ve had to wrangle poorly commented legacy JavaScript codebases where most of the overhead is tracing what type the input parameters are. Otherwise it’s just nice when writing a small library where you don’t want to set up a bunch of build tools.

Personally, I’m impartial to TypeScript or JSDoc at this point. But I’d rather have either over plain JavaScript.

[1] https://jsdoc.app/
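
A minimal sketch of what that looks like (hypothetical function, not from any particular codebase): a plain .js file with a `// @ts-check` comment gets editor type-checking from nothing but the JSDoc annotations.

  // @ts-check

  /**
   * @param {string} name
   * @param {number} [retries]
   * @returns {Promise<string>}
   */
  async function fetchGreeting(name, retries = 3) {
    return `Hello, ${name} (up to ${retries} retries)`;
  }

  // fetchGreeting(42); // the editor (or tsc) flags this: 42 is not a string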


I finally kinda came around to it after a decade of writing JavaScript. It took me a while because I’d spent about a year converting Coffeescript back into JavaScript after it fell out of favor at a company I worked at. I told myself I’d never work on another js-transpiled language.

I do love strong typing, but what I don’t love is a language that mimics strong typing over a loosely typed system. Every single time I’ve used it, I’ve run into some weird problems. A third party package might not have a types file, or it might have an incorrectly-defined type, etc. It can be extremely frustrating in my experience.


I also hate microsoft. I worked with their piece of shit browser for 20 years. From ie3 onward they all sucked and lagged behind. Their OS was complete garbage and somehow they managed to penetrate like 90% of the desktop market.

but beyond that, I find typescript is cognitive overload for someone who considers themselves an expert at vanilla javascript. It more than doubles the development time. I rarely use the type hints because 50% of the time they point to "any", so they're utterly useless... and to these kids who claim "but you won't get an undefined error!", my response is: just fix your fucking undefined error when you see it. Why do I need to 3x my dev time and hurt my brain in the process so some jackass doesn't have to worry about an undefined variable?

I also think the reason MS released TS is that they were unable to find good devs who knew JS. All those typed-language devs (most enterprise crap like java and .net/c#) could now be converted to ts devs without having to rehire. TS is for offshore teams imo.

That being said, I will say that on a team of 50+ devs on the same codebase I would vote for typescript, because otherwise you will get a shit load of js spaghetti from these people who can't be bothered to understand js.


I'd argue that if you ever use a single "any" then you're not using Typescript but simply writing JavaScript with more steps and a compile time. Any can be useful during development, but it should never reach production. I think the reason you see so much of it in a lot of open source packages is that they've only really adopted Typescript because their consumers expect it and it was just easier for them to do it this way.

We use very strict Typescript, but a lot of our internal utility packages are actually written in pure JavaScript because of the way the Node package environment works. Like our date tool, our helper API package for OData, and our fascist linting extension package. They're all pure JS with provided types. This is because, for the handful of people and the handful of times we work on these libraries, it's always people who know exactly what they are doing while they do it. These same people (myself included) won't necessarily be in similar positions when we write Typescript, which is why we don't use JavaScript in our day-to-day development: our strict Typescript setup protects us from ourselves.

I'm not personally a big fan of Microsoft. Professionally I do think they are one of the absolute best IT-business partners for organisations either enterprise-sized or approaching, because of how they run their support for operations. That being said, I do think what they've done with both Typescript and C# where they sort of mix the best from both languages, has been really good for both languages. At least until you look at something like Blazor which is basically just rebranded web-forms. But I guess that's what happens with large organisations, and at least as far as Typescript is concerned, they seem to be making the "right" decisions.


I use JSDoc instead of TypeScript on a codebase where I am the sole developer. It supercharges autocompletion, helps with memory recall after time away, and helps prevent the stupid human errors that I am bound to make unconsciously. That's IMO what type systems are designed to do - reduce the error rate of human-driven development.


I felt the same about typescript early on. Now I really like it as I can easily program logic for hours without running the app once. I'm not sure I could do that without typing.

It also makes reading code a lot easier. I can see where something is used, where it's defined, etc etc. It's much easier to get around.


Yeah, I think that's a good point. I'm actually quite on the fence about dynamic and static typing - I like both. With a small team that knows what they're doing, I tend to go dynamic. With a larger team, it does tend to become a bit of a mess and weird workarounds to not having static typing (like some mystical 100% test coverage) start to surface.

I love the approach Python and MyPy took there: Python supports but ignores type annotations. You can add them to any part you want and check it with MyPy, but you can truly do it gradually, and you can always decide not to do it. An approach like that would fit JavaScript beautifully, I think.


I'm not absolutely certain, but doesn't Typescript work exactly the same way Python does if you set it up to do so? I think the reason you may think you can't have dynamic types in Typescript is that it's generally recommended not to use them - sort of because if you're using dynamic types, then you might as well be using pure JavaScript with JSDoc. Having a couple of decades of experience with dynamic and static typing, I tend to prefer static typing because it's just "easier" (more maintainable, lets you onboard people faster, lets you do a lot less code-reviewing, and other advantages, all of which are more on the "managing people" side of things). Dynamic typing can be good and it certainly lets you prototype faster, but to do it well over a 5 year period you're going to need some serious governance.

One thing I never liked about Python is that sometimes it "guesses" the types wrong behind the scenes. This is sort of an issue with most "magic" that you'll also see in some of the parts where Java, C# and others, lets you skip writing a lot of boring code by "guessing" your intentions. But where it's relatively easy to tell the runtime environment how to do it the way you want it in something like C#, it's not in Python. It's also often a lot less obvious unless something fails in a manner you've taken steps to expect.

With JavaScript (and this includes Typescript) you have another set of issues. Since classes, types and so on are all abstractions over objects, and since JavaScript will happily pass "nothing, but not null" around, it's just so easy to make spaghetti, or even code which isn't very performant. Things like classes have very little function in Typescript - I'm not saying you should never use them, because you should never be religious about these things - but for the most part it's typically better to use a type/interface and standalone, non-hoisted functions.

Personally I prefer rather opinionated languages like Go, or very strict languages like Rust, where everything is very locked down and immutable until you specifically tell it not to be. Of course Rust comes with its own "interesting" things like the borrow checker.


They say TS is optional, but I've run into issues using 3rd party libraries that require it when calling their methods.

I just never really used typing. I started with perl and then did php and ruby for awhile before focusing on javascript. So I never knew what I was missing.

I'm just much much faster in vanilla js than I am with typesh


> supports but ignores type annotations

That’s how TS works too though - compilation just strips the types via babel etc, with type checking a separate process. You opt in file by file by switching .js to .ts


Well, the nice thing with Python types is that the _only_ difference to untyped Python is the type annotations. Last time I worked with TypeScript (two and a half years ago), it felt more like a different language _similar_ to JS. In my experience it was quite... viral. With MyPy I've genuinely seen just specific parts of a code base become typed and didn't notice any friction.

I wonder what would happen if that proposal for type comments in JS went through. Would TypeScript become just a type checker / optimizing compiler?

Google's Closure had an (IMHO) nicer approach (using doc comments for type annotations, see: https://github.com/google/closure-compiler/wiki/Types-in-the...), but I don't get the impression it'll ever catch on outside Google.


The Closure team have also deprecated a lot of the old tooling. Closure was ahead of its time for sure, and as someone who heavily used the Closure Library and Closure compiler in advanced mode, it is sad that it did not catch on. However, using tsickle you can transpile TypeScript into JavaScript that the Closure compiler can then use for advanced optimisations.


I don't like Typescript because it forces me to think about types and data structures and stuff. Which is a Good Thing because I absolutely have to think about that stuff when working on large codebases with a team of colleagues: without the inline documentation and text editor help TS gives me when working on those codebases I'd be (at least!) 10x slower when refactoring old code or adding new code. And nobody wants to pay a slow developer!

However ... the one place I refuse to use Typescript is in my side project - a JS canvas library. I can justify this because: 1. it's a big codebase, but I know every line of it intimately having spent the last 10 years (re-)writing it; 2. nobody else contributes (and I kinda like it that way); and 3. I keep a close eye on competing canvas libraries and I've watched several of them go through the immense (frustrating!) work of converting their codebases to TS over the past few years and, seriously, I don't need that pain in my not-paid-for life.

Even so, I do maintain a .d.ts file for the library's 'API' (the functions devs would use when building a canvas using my library) because the testing, documentation and autocompletion help it offers is too useful to ignore. It is additional work, but it's just one file[1] and I can live with that.

[1] https://github.com/KaliedaRik/Scrawl-canvas/blob/v8/source/s...
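
For anyone unfamiliar with the pattern, a hand-maintained .d.ts next to plain JS can be as simple as this (a hypothetical excerpt with made-up names, not the actual Scrawl-canvas API):

  export interface CanvasOptions {
    name: string;
    width?: number;
    height?: number;
  }

  export interface Canvas {
    render(): Promise<void>;
    destroy(): void;
  }

  export function makeCanvas(options: CanvasOptions): Canvas;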


Lesson to learn: don’t build custom languages


>… TypeScript, the industry standard language for the web

This breaks my heart.


Why? It's the truth.


The frontend language for the web is JavaScript.

People who don’t like JavaScript but, for whatever reason, still want to write frontend apps have chosen to invent a new language that transpiles to it. To call that effort a standard is heartbreaking to me.


things can both be true and heartbreaking


Ah yes.. Switch to Typescript and give up all your advances to get a brand new set of crutches.


They should have brought you in as a consultant so that they would have arrived at the correct decision. Alas, they are now doomed to be wrong forever.


Can you elaborate what you consider "crutches" in this case?


They had a custom language giving them full flexibility to build a product in the browser that put them ahead of everyone else, and instead of embracing that, they threw it out to be able to hire off-the-shelf developers to code in a language that doesn't let them move freely. Typescript is like crutches in the sense that it might support you in not falling, but you only really need them if you're crippled in some way.


> They had a custom language giving them full flexibility to achieve making a product in the browser that put them ahead of everyone else

From the article it sounds like it was only used for the prototyping system so a single code base could run in the iOS client and the Web client.

I believe the main UI of Figma and what makes it so performant and magical to use is C++ https://www.figma.com/blog/webassembly-cut-figmas-load-time-...


What are you talking about? Typescript is a fantastic improvement on JavaScript.

Skew doesn't look enough better, fundamentally, to be worth the downsides. Lack of IDE support alone is probably enough to cancel out any productivity gains from a better language.


I like that typescript catches when I need to do null checks so I don't end up with the most notorious runtime error seared into the brain of every JS developer "cannot read property of undefined".

Some parts are nice, like the string literal typing "this" | "that". Other things are hacky, like "branded types", gross.

But then I think of my commercial codebase, which is extremely well tested, regular old JS, and wonder if it is worth the hassle.


Off topic: Does anyone know of any sites made with Figma so I can see what the UX is like?

As a user I don't care about DX if the resulting UX is bad.


Basically every significant app you interface with is designed with Figma at this point.

Figma itself has no opinion about the resulting UX. It's a tool for designers, and they can design great UX, horrible UX, and everything in between.

Your question is kind of like saying "I want to see a house built with a hammer, to see if a hammer makes nice looking houses".


You wouldn’t really be able to tell. Figma is often just a tool for working out designs and building up a design system for your site. That later gets translated to your front end by your devs. Figma itself isn’t a styling library like tailwind, etc.


But are the designs lean or bloated?

If Figma helps building lean designs it's good otherwise not.

Designers tend to put too much useless parts into sites, like unnecessary transparencies and animations.


That's like asking if a pen or keyboard leads to bloat. It depends on who's using them.


More like if AI leads to spam or if guns kill people.

The latter one is harder if you only have a knife instead of an assault rifle.


No it’s more like, does vscode lead to better code than jetbrains? Figma doesn’t really provide meaningful constraints on a design that would let you identify it as Figma sourced, much as you won’t be able to tell whether a site was written in a particular IDE.


You seem to have a fundamental misunderstanding of what Figma is. Your question would be better answered if you took some time to look at figma, rather than looking at the designs people create with figma.


So tell me why the UX in the web gets worse if all these tools and framework make the DX better?

On mobile I have a limited data plan and the speed isn't always the best, and on many pages I have to wait because, besides the ads, I have to download 3, 5, 10 or even more MB of data just to get pages where buttons, links and headlines are indistinguishable because of recent design decisions.

Seems to me these tools are worthless in the end, if you are a user. After bootstrap it went pretty much downhill.


It's good with SVGs, themes, grouping elements into reusable components from your design system, and doing an art board per screen so you can get an idea of the workflow through the app.

Not great at animations, not great at fully exploding all of the app state. Overall pretty good middle ground for designers and devs to interact.


If a designer uses ‘unnecessary’ transparencies and animations just because a design tool makes those things easy, that is not a problem with the design tool.


If a tool makes something bad easy it is partially responsible.


It’s just a system for animating the transitions between flat designs and adding scroll areas so you can preview an estimation of a design on mobile and web.

Doesn’t even support more advanced prototyping things like inputs and dynamic changes.

Not a site builder


Most large sites you see are made "with Figma". Designers use Figma for mocks and hand it off to web developers to implement.


You can safely assume everything that comes out of Microsoft is designed with Figma.


As mentioned, Figma is the de facto standard for UX design. Probably six or seven out of ten websites, apps, and desktop applications were designed on Figma, if they're relatively new. But Figma doesn't make sites, it's just a tool for designing them.

If you sign up for Figma, you can quickly get a sense of what it does. Some features are behind a paywall, but mostly things related to collaborating or managing large teams and design systems.

There are also approximately 1 billion Figma tutorial videos on Youtube that would show you the interface.



