Bun 1.1 (bun.sh)
462 points by ksec 58 days ago | 263 comments



Request for the bun team: please provide a clear support policy/EOL timeline. Bonus points for clarity on the stability guarantees that are offered between versions and modules.

https://github.com/oven-sh/bun/issues/7990 (Via https://github.com/endoflife-date/endoflife.date/pull/4382)


I don't think the bun devs are currently parcelling things up like that, so I doubt this would be worth the effort at this time - i.e. I don't think they're backporting any fixes from 1.1 into a 1.0.x release.


Stating that would be helpful as well.



No LTS versions please. Release often, but don't break old code. Be more like Rust (v1 forever) than like Node (new major releases every year or so).


> Release often, but don't break old code

That may become expensive fast


As far as I'm aware, Rust has been doing this for 9 years now (77 new versions). I'm not sure if that's been "expensive", but people seem to like it and it's working well so far.


I use both Deno and Bun in production (albeit, on different projects).

They are both great upgrades from node, in particular with the first-class support for TypeScript.

Bun is great for large projects, with enhanced DX over any node-based environment I have worked on - I use it for a mono-repo project with several frontends and a GraphQL backend. Involved test suites run in 5 seconds, etc.

Deno seems to work really well in lambda-style environments (I use them with Supabase) due to its entirely standalone module approach. This is great for small scripts to glue things together.


Bun has -i/--install=fallback which I thought was pretty similar to Deno but I haven't used Deno much to compare. I was thinking about starting to write my scripts with `#!bun -i` but haven't fiddled with it much yet.


The API for the shell function is kind of neat, in that it seems to prevent you from accidentally creating shell injection vulnerabilities by calling it without properly quoting the arguments.

For example, in Python you could easily do this:

  import shlex
  import subprocess

  message = '; cat /etc/passwd'

  # Whoops, shell injection vulnerability!
  subprocess.run(f'echo Message: {message}', shell=True)

  # Correct (assuming sh-compatible shell).
  subprocess.run(f'echo Message: {shlex.quote(message)}', shell=True)

  # Correct (without using shell).
  subprocess.run(['/bin/echo', 'Message:', message])

But the Bun API doesn't separate quoting from executing the command, so you can't make that kind of mistake:

  import { $ } from 'bun';

  let message = '; cat /etc/passwd';

  // Works correctly.
  await $`echo Message: ${message}`.text();

  // Fails safely by throwing an error about incorrect usage.
  await $('echo Message: ' + message).text();


It uses types to get quoting right? Or does it quote everything (regardless of whether it's already quoted)?

Ironically, the first time I saw the former was in a Python templating library (in the early 2000s -- from distant memory I think it might have been the work of the MemsExchange team?)


Formatters basically differentiate the literal parts of the string and the template arguments. There's also a neat postgres library that does the same for sql quoting.
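
For the curious, a minimal sketch of how a tag function sees those parts (the quoting here is illustrative only, not audited for safety):

  // The tag receives the literal parts separately from the interpolated
  // values, so it can escape each value before joining them back together.
  function sh(strings: TemplateStringsArray, ...values: unknown[]): string {
    const quote = (v: unknown) => `'${String(v).replace(/'/g, `'\\''`)}'`;
    return strings.reduce(
      (out, part, i) => out + part + (i < values.length ? quote(values[i]) : ''),
      '',
    );
  }

  const message = '; cat /etc/passwd';
  console.log(sh`echo Message: ${message}`);
  // => echo Message: '; cat /etc/passwd'   (quoted, so it stays inert)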


It is very cool, but not new. zx does that too. And I did the same thing for MySQL years ago.


Tagged templates[0], the language feature that enables this, were introduced in ECMAScript 2015 apparently – arguably at least somewhat new in the lifespan of JavaScript. :)

Java is getting a similar feature with template processors[1], currently in preview.

It would be nice to have it in Python as well – i.e. not just f-strings, but something that (like tagged templates) allows a template function process the interpolated values to properly encode them for whatever language is appropriate (e.g. shell, SQL, HTML, etc.). Apparently someone is working on a proposal[2], although there doesn't seem to be much recent progress.

[0] https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...

[1] https://openjdk.org/jeps/459

[2] https://github.com/jimbaker/tagstr/blob/main/docs/pep.rst


Did not realize Bun had (even if rudimentary) macros - support for executing code at bundle time. That is pretty neat! https://bun.sh/docs/bundler/macros
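
A minimal sketch of how that looks, going by those docs (the file names here are made up; macro return values must be serializable, and older Bun versions spelled the attribute `assert` rather than `with`):

  // build-info.ts - this function runs at bundle time, not at runtime.
  export function buildTimestamp() {
    return new Date().toISOString();
  }

  // app.ts - the attribute marks the import as a macro; the bundler
  // executes the function during bundling and inlines the returned value.
  import { buildTimestamp } from './build-info.ts' with { type: 'macro' };

  console.log(`Built at ${buildTimestamp()}`);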


Yes, it's neat - I wish more bundlers would adopt it!

In the past I've written a little plugin that is available for almost all bundlers allowing you to do that: https://dev.to/florianrappl/getting-bundling-superpowers-usi...


This is one of the most interesting new things in the bundler space - Parcel is introducing some form of macro too.

It's way overdue to have some way to program what happens at bundling time (without writing your own bundler plugin)


Wow that’s pretty neat. Because of that I also learned about import attributes (https://github.com/tc39/proposal-import-attributes) which is probably going to be quite useful and make the 50 lines of imports in some of my files look even dumber.
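
For reference, the canonical example from that proposal is declaratively importing JSON (a sketch - runtime/bundler support varies):

  // The attribute tells the host how to interpret the module,
  // instead of inferring it from the file extension alone.
  import config from './config.json' with { type: 'json' };

  console.log(config);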


Some loaders already encode stuff in the path, URL style, e.g. vite-svg-loader:

import iconUrl from './my-icon.svg?url'

Maybe I'm not imaginative enough but this seems like a reasonably restricted (i.e. simple) way of parametrizing imports.


That's what I've been waiting for too. The URL params always seemed rather sketchy to me. No bundlers seem to be using it that way though.


Didn't Google's Closure Compiler do something similar way before we had imports?


Seems like a good release. I watched their video, and some charts were a bit unclear, as in, I didn't know if they were comparing with the previous Bun version or Node.js.

My experience with using Bun in side projects has been good. The built-in APIs work well in my experience, and I hope popular runtimes adopt at least a subset of them. The hashing and the SQLite bindings come to mind as APIs that I wish were available in Deno and Node.js as well.

They collect some telemetry by default. I don't think the install script tells you that. The only mention of it that I've found is in the bunfig documentation: https://bun.sh/docs/runtime/bunfig#telemetry

I'd prefer if it was opt-in, and that users were given instructions for disabling it if they want to during installation.

They offer an option to create a dependency-free executable for your project, which bundles the runtime with your .js entrypoint. That works great if you want a single binary to distribute to users, but at the moment, the file size is still pretty big (above 90MB on GNU/Linux for a small project). Not terrible, but nothing comparable to Go or QuickJS yet. I wonder if in the future, Bun would offer an option to compile itself with certain features disabled, so we'd get a smaller binary.

I have been playing with using Bun as a Haxe target. It works pretty well, and IMO it's a choice to consider if you like Haxe more than TypeScript, or if you want to add a web server to an existing Haxe project without adding another programming language. You can also do things like generating validation code at compile time, which seems hard to do with just TypeScript.


Note that we do not currently send telemetry anywhere. The extent of what we track right now is a bitset of which builtin modules were imported, a count of how many HTTP requests were sent with fetch(), and a few things like that. This is used in the message printed when Bun panics, so we can have a better idea of what the code was doing when it crashed.


Sorry, I didn't understand your post. From the last two sentences, it seems to be the case that you do collect some data.

So, when you say

> Note that we do not currently send telemetry anywhere.

you mean that Oven does not send that data to someone else, right?

I still see value in having a privacy policy so that users can find out, in a concise way, what is collected by Oven and how to opt out of it. As far as I know, the fact that any data is collected at all, and that there's a flag to disable it, is only mentioned in a documentation page for Bun's TOML config file.


> They collect some telemetry by default

I wondered why every time I upgraded bun, macOS would pop up a permissions dialog. This explains that.

Anyway, it can be disabled by adding the following to your environment:

    export DO_NOT_TRACK=1

This is the data collected:

https://github.com/oven-sh/bun/blob/801e475c72b3573a91e0fb4c...

This 64 bit machine_id line is concerning:

https://github.com/oven-sh/bun/blob/801e475c72b3573a91e0fb4c...

It may be a unique identifier for your machine.


That is unrelated. This is a code signing / notarization issue: we don't distribute Bun via the Mac App Store, and likely it was installed via npm or something other than the .zip file we distribute. Code signing is necessary due to the JIT entitlement in Bun (otherwise Bun would be a whole lot slower).


Mac code signing needs the machine id?

https://github.com/oven-sh/bun/blob/801e475c72b3573a91e0fb4c...

https://github.com/oven-sh/bun/blob/801e475c72b3573a91e0fb4c...

Why then is the unique machine id collected on Linux?


This is dead code from before Bun 1.0. The code exists but is not run, and is probably stripped from the final executable (Zig is great at dead code elimination). We do get the Linux kernel version, to detect whether syscalls like pidfd_open are supported and enable fast paths.


Thank you for letting me know about telemetry; I had no idea.


And yet people went crazy when Golang announced they wanted to enable informed opt-out telemetry.


> I'd prefer if it was opt-in, and that users were given instructions for disabling it if they want to during installation.

So you mean an informed opt-out, right?

Thank you for the interesting hands-on experiences and insights regarding Bun. I have followed the coverage and posts by Jarred here on HN for quite a while, I think since the initial alpha release, but haven't used it.

Will keep your examples of helpful added platform APIs in mind for when I hopefully come around to doing something with Bun!

It sounds like a great platform for JS scripting as well - I also think that could be a good and easy way to test the waters.

Really, kudos to Bun and Jarred Sumner for living up to the promise he made when the first version was announced!


> So you mean an informed opt-out, right?

I worded it wrong - I meant I'd prefer if it was opt-in, or at least informed opt-out, like you said. Thanks for pointing it out. :)

> It sounds like a great platform for JS scripting as well - I also think that could be a good and easy way to test the waters.

It is. Some other features you might enjoy are the built-in TypeScript support and test runner. It works well for one-off scripts too, if you'd prefer not using Bash. For me, it was refreshing coming from Node.js. Hope it is an enjoyable experience for you as well.


I work on Bun and am happy to answer any questions.

Note that Bun v1.1 is still compiling at the time of writing (probably for another 20 minutes)


Now that Discord will start showing ads, is there any chance Bun will support a communication platform that is open & ad-free like IRC, XMPP, or Matrix?


What kind of question is this? He is working on a JavaScript/TypeScript runtime not building a communication product. Why would you run to him to solve your pet grievance with Discord?


I imagine the complaint is around Bun's use of Discord as community coordination tooling -- linked in the site's header. The post isn't implying Bun should be involved in the creation of an alternative.

(I'm not here to throw shade at using Discord for OSS "communities" - I do as well - And am concerned about the path forward. Just want to clarify the question's intent.)


If open source, free software is a good enough ethos for your code base, it should be good enough for your community communications. Supporting only a proprietary platform locks out a swath of users, & Discord in particular is an information black hole - and now with ads!


Which users are unable to use Discord? Note that unable is different from unwilling, and "locked out" is not an accurate description of the latter.


Me! Discord does not consider me a real person if I don't have a phone number it approves of.


Users that need special clients (accessibility, hardware, etc.). Users blocked by US sanctions. Users that have been moderated off the platform for something not in your community (account bans even happen accidentally). Users with privacy/anonymity concerns about the data collection (& now ads) - especially the chat rooms that require a SIM card. Users that take their FOSS or otherwise ethical software views or anti-corporate views to heart & want something built on those principles - from wanting to use free software to make free software, to wanting to outright avoid what some now call enshittification, where a free (gratis) account clashes with the idea of freedoms, etc.


I'm assuming it is because Bun uses Discord? There are Discord links on their site and on GitHub, though the links don't seem to work.


I think they used the word "support" to mean "use", rather than "build". That is - they weren't asking Bun developers to _build_ an alternative to Discord, but rather to stop _using_ Discord.


Ship an open-source Discord alternative that more people want to use than Discord and we’ll happily switch.


Is there a privacy policy available for the telemetry collected by default by Bun?


The opt-out telemetry is worrisome, along with the fact that there doesn't appear to be a way to disable it for single-file executables, if you plan to distribute a Bun CLI app to users: https://github.com/oven-sh/bun/issues/8927


According to this, it seems they don't actually send the telemetry:

https://news.ycombinator.com/item?id=39901755

Hope my interpretation is correct.


Yeah, I saw a GitHub discussion where he mentioned that the code for uploading telemetry data was disabled, but he also said he plans to re-enable it at some point: https://github.com/oven-sh/bun/discussions/2605#discussionco...

I would prefer to have the telemetry become opt-in before data collection is turned on.


Completely agree.


Are there plans for adding concurrent or parallel execution to the test runner? I recently tried looking at the code base to maybe implement it myself, and it looks like it wouldn't be easy without some reworks.


We need to do some form of this, but I'm not exactly sure what yet. I suspect same process but multiple globals might work well. A lot of tests spend time sleeping or waiting for things. They might benefit from that kind of parallelism (like async/await, except between things it runs a whole other global object).

Threads could also work, but the problem is you have to re-parse & evaluate all the code. That's a lot of duplicate work. It's probably still worth it for large enough apps.


Isn't there some way of cloning a loaded VM after loading a module? I would imagine that should be possible somehow; that way you could parse once, then execute in multiple threads.


When will the `worker` API be ready for production?


Not a question, but wanted to say: Great job!


> I work on Bun and am happy to answer any questions.

I think I saw somewhere that 1.0 did not support NextJS. Does 1.1?


I think I answered my own question. According to https://bun.sh/guides/ecosystem/nextjs Bun does not yet support NextJS.


Your and the team's performance is so impressive. I'm not brave enough to use Bun in production yet, but count me in in a year or so ... great stuff.


When are we getting UDP/dgram support?


GG for Windows support and all the additions in 1.1 :) Thxx


love it, even `bun upgrade` is fast.

On my Raspberry Pi 4, which I capped to 600MHz for performance testing:

    Bun v1.1.0 is out! You're on 1.0.36 [3.93s] Upgraded.


I find it hilarious that we now present runtimes and other programming stuff like it was Apple presenting a new iPhone. This would have been satire 15 years ago. No disrespect to Bun though, I love Bun.


The audience for these types of announcement is bigger.

My intuition is that there are many more consumers of node-like environments today than any runtimes 15 years ago.


That's because there's so much cacophony emanating from the internet that you have to shout to be heard today.


I find these articles useful when migrating a legacy system, as they sometimes contain migration notes or rare minor details from the developers.

This, combined with the Wayback Machine, makes for a great way to keep track of detailed information.


I feel like it’s meant to be satire here as well.


If it is... they're doing a really good job staying composed, because I couldn't tell. It would be amusing if true.


I feel like such a downer when I ask this about Bun and Deno, but: why should I use them instead of Node?

I don’t mean to take away from the obviously impressive engineering effort here. But VC funding always gives me pause because I don’t know how long the product is going to be around. I was actually more interested in Deno when it promised a reboot of the JS ecosystem but both Bun and Deno seem to have discovered that Node interoperability is a requirement so they’re all in the same (kinda crappy) ecosystem. I’m just not sure what the selling point is that makes it worth the risk.


We could drastically simplify the building and deployment process of our services. By far the greatest advantage is that it runs TS natively. Dropping the compilation stage simplifies everything. From docker imaging to writing DB migrations to getting proper stack traces.

You don't need source maps. You don't have to map printed stack traces to the source. Debugging just works. You don't need to configure directories because src/ is different than dist/ for DB migrations. You don't have to build a `tsc --watch & node --watch` pipeline to get hot reloading. You don't need cross-env. No more issues with cjs/esm interop. Maybe you don't even need a multi-stage Dockerfile.

That's for bun. Deno might have a similar story. We did not opt-in to the Bun-specific APIs, so we can migrate back if Bun fails. Maybe we could even migrate to something like ts-node. Shouldn't be that hard in that case.

IMHO the API of Bun, as well as the package manager, sometimes tries to be _too_ convenient or is too permissive.


Kind of. When you do try to run bun in production you'll find out that it has significant differences to node -- like not handling uncaught exceptions: https://github.com/oven-sh/bun/issues/429

Then you'll use bun build and run node in production, only to find that sourcemaps don't actually work at all: https://github.com/oven-sh/bun/issues/7427

So you'll switch to esbuild + node in production :)

Definitely excited for the promise of bun, but it's not quite baked yet.


We’ll add the uncaught exceptions handler likely before Bun 1.2 and fix the issue with sourcemaps. Sourcemaps do work (with some rough edges) at runtime when running TS/JSX/JS files, but are in a worse state with “bun build” right now.

We’ve been so busy with adding Windows support that it’s been hard to prioritize much else


Every couple weeks I try again to run my app fully on bun. For now I just use it as a packager.

The big ones for me are continual Prisma issues, mainly due to poor decisions on their side, it seems…

Vite crashing, because I'm using Remix.

And then the worst one I don't see a way around: Sentry profiling, which requires V8.

I can’t wait for the day everything can be on bun. Everything else sucks and is so slow or requires really bad configuration to make it work.

Can’t believe node itself and TS are so terrible with module compatibility. Bun solves all of this and is 20000x faster when I can use it!


What prisma issues are you running into? For us we just installed node alongside bun in our docker container and then ran prisma with node… was there something else?


You're still running your app using the node runtime, though.

Half the reason I want to use bun is to not use node for the runtime, so that it's faster and the docker image is also smaller.


Much appreciated and definitely rooting for bun! It’s still my goto choice for dev and can’t wait to switch production back to bun :)


Curious: does it run TS natively or does it just transpile for you? Because the former suggests exciting opportunity for better compiling or JITting if it can actually commit to holding on to the typing.


It does not do any type checking. You have to run tsc with noEmit separately. If you run `bun run foo.ts`, it just ignores all type annotations. It is transpiled to JS internally by removing the types (or it skips the types while parsing). While doing that, it keeps track of the original source locations. If you see some stack trace, you get the original location in the ts source.

Running tsc with noEmit is pretty much the standard in the frontend as well, as the TS is bundled by esbuild/rollup directly.
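
A quick way to see what that produces is Bun's transpiler API - a minimal sketch (output formatting may differ between Bun versions):

  const transpiler = new Bun.Transpiler({ loader: 'ts' });

  const source = `
    function add(a: number, b: number): number {
      return a + b;
    }
  `;

  // The annotations are dropped; what actually executes is plain JS,
  // roughly: function add(a, b) { return a + b; }
  console.log(transpiler.transformSync(source));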


Is there such a thing as running TS natively? Even tsc doesn't do that.


You can use tsx as loader with node if you want to directly run typescript.

node --import tsx ./file.ts


The problem is, if you have ESM and then a tool in your repo like Jest that requires CommonJS, now you have to compile stuff.

In my case I've had apps use certain tsconfig options, and then another library has a start script which is incompatible.

So you're stuck needing a different tsconfig for both things. These annoyances are solved with bun.


Does it support editing the source-files while in the debugger?

I've been hesitant to move to TypeScript because I'm unsure how well the debugger works in practice.

My current platform is Node.js + WebStorm IDE as a debugger. I can debug the JavaScript and I can modify it while stopping at a breakpoint or debugger-statement. It is a huge time-saver that I don't have to see something wrong with my code while in the debugger and then find the original source-file to modify and recompile and then restart.

Just curious, do Deno and Bun support edit-while-debug, out of the box? Or do I need to install some dependencies to make that work?


I'm not sure how difficult it would be for Nodejs to support .ts files natively. But if that's the main reason to use Bun, I'd be worried about its long term viability. Node could announce native .ts support at any time and then Bun might not look so good.


It's still faster. `bun test` is like a gajillion times faster than jest + all the voodoo to make it run TS.


>a gajillion

That seems like a made up unit of measurement. If you have real statistics, show those. If not, then this is just hearsay.


The benchmark is right on the main page of bun.sh, just ctrl+f for "bun test".


What if I don't like TypeScript, and like duck-typing my way through, adding annotations when necessary via JSDoc?

Not everyone likes TypeScript or finds it useful - e.g. Svelte dropping TypeScript.

Not to take away from the amazing work the people at bun have done.


As many people have commented, bun is all the tools in a single dependency: a test runner with an in-memory DB included? Shell support for Windows? Single-file executable packaging? With macros? Code scratchpad that auto-installs dependencies? Programmatic APIs for transpiling/loading jsx (not tsx)? ...and so on.


ts-node has all of these features too?


Yes, but it's slower. Here are the times for running a script that just prints hello.

    $ time bun hello.ts 
    real  0m0.015s
    user  0m0.008s
    sys   0m0.008s

    $ time ts-node hello.ts 
    real  0m0.727s
    user  0m1.534s
    sys   0m0.077s


ts-node does type checking by default using `tsc`. Bun does not, so this isn't an equivalent comparison.


Now try with SWC enabled.


> By far the greatest advantage is that it runs TS natively.

So why doesn't any major runtime run Java natively? Or C++ natively? Or Rust natively?

Why is this such a cool unlock that hasn't been done for any other language?

---

85% of this is people tired/bored of Node.js.


Python is among the most popular languages and it doesn't require a compile step.

TypeScript is entirely metadata, so it just doesn't make sense to need to compile it.


I don't understand your point. I don't even understand your argument.

The java runtime runs java.

And C++ and Rust are compiled and have no runtime.


JVM runs java bytecode.

Java bytecode is compiled from Java (javac), Scala (scalac), etc


Java is compiled, too. The Java runtime is the JVM which runs byte code.


The JVM bytecode was designed to be bytecode from day 1.

JS is TS's bytecode, but it was designed to be a language to develop in, which causes impedance mismatches as tools and people get confused about the usage context.


Just, what?!


The most compelling argument for Deno is the permission system in my opinion. Node added a permission system recently, but it's much more coarse grained than Deno's. Being able to limit a script to listening on a specific hostname and port, or only allowing it to read a specific environment variable is pretty cool, and makes me less paranoid about trusting third party dependencies. Both Bun and Deno are also more performant than Node in many cases, and add a bunch of little quality of life improvements.
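
To make that concrete, here's a minimal sketch of those fine-grained flags in shebang form (the hostname and variable name are placeholders):

  #!/usr/bin/env -S deno run --allow-net=example.com:443 --allow-env=API_TOKEN
  // Only example.com:443 is reachable and only API_TOKEN is readable;
  // any other network or env access fails with a permission error.
  const token = Deno.env.get('API_TOKEN');
  const res = await fetch('https://example.com/');
  console.log(res.status, token ? '(token set)' : '(no token)');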


The real question is how much you can trust this. Those kinds of permission systems have been tried before - e.g. .NET used to have something called "Code Access Security". It was retired largely because the very notion of VM-enforced sandbox was deemed inadequate from experience. IIRC SecurityManager in Java was something similar, also deprecated for similar reasons. I'm afraid that Deno will just be a repeat of that.


I definitely wouldn't make the Deno sandbox my only line of defense — I'm a strong proponent of defense in depth. Now having said that, there's definitely a precedent for trusting V8's sandboxing capabilities. Cloudflare is running untrusted user code across their entire network and relying on V8 isolates as a sandboxing mechanism for Cloudflare Workers. I'm not sure I would go that far, but I do think we should be taking advantage of the strides browser developers have been making from a security perspective. When I re-watched Ryan Dahl's original conference talk where he introduced Deno, the sandboxing aspect was the part that resonated the most with me. But again, it's always best to have multiple layers of security. You should sandbox your applications and audit your dependencies, those mitigation techniques aren't mutually exclusive.


.NET sandbox was used for ClickOnce and Silverlight for many years. Java's was used for applets even longer. It worked until it didn't.

The people who designed those things ultimately threw in the towel and said that if you want that kind of security, use containers or VMs.


> The people who designed those things ultimately threw in the towel and said that if you want that kind of security, use containers or VMs.

I can see why they chose that route. It's a huge maintenance burden. I can't imagine Google throwing in the towel when it comes to securing their browser's JS engine though.


That's assuming that you can fit everything within the needs of the browser sandbox. But server-side JS needs are broader than that.


It's much easier to worry about locking down the few server-side modules which allow access to the underlying OS, than it is to have to worry about securing V8's JIT compiler. Node's module-based permission system literally just bans certain standard library modules from being imported (Deno's is more fine grained thankfully). That's a much smaller attack surface area to worry about compared to securing the underlying JS engine.


Also, with Deno it becomes very easy to write a typed CLI. A .ts file can be run as a script very easily, with permission access defined at the top of the script, such as:

#!/usr/bin/env -S deno run --allow-net

Then one can just run ./test.ts if the script has +x permission.

Also, projects such as https://cliffy.io have made writing CLIs way more enjoyable than node.

It is a good idea to beware of the VC funding. So it is a good idea to support projects such as Hono (projects that conform to modern web standards, and are runtime-agnostic for JS).


> Also, with Deno it becomes very easy to write a typed CLI. A .ts file can be run as a script very easily, with permission access defined at the top of the script, such as:

I do this all the time. I used to use `npx tsx` in my hashbang line to run TS scripts with Node, but I've started using Deno more because of the permissions. Another great package for shell scripting with Deno is Dax, which is like the Deno version of Bun shell: https://github.com/dsherret/dax

> Also, projects such as https://cliffy.io have made writing CLIs way more enjoyable than node.

This looks cool. I've always used the npm package Inquirer (which also works with Deno), but I'll have to compare Cliffy to that and see how it stacks up in comparison.

> Hono (projects that conform to modern web standards, and are runtime-agnostic for JS)

Hono is awesome. It's fast, very well typed, runs on all JS runtimes, and has zero dependencies.


What do you think of WebAssembly modules? It looks to me like the shared memory support that came with the threading proposal (which seems somewhat widely supported) should allow libraries to be isolated in their own modules and exchange data through explicitly shared memory even if they run on the same thread.

With secure isolation being a requirement for web browsers, and with the backing of multiple big companies, it seems like there should be enough momentum to make it work properly. Or maybe the browsers rely entirely on process isolation between different origins and won't care about the security of isolating individual modules?


Wasm is better positioned for this in that, as a lower-level spec, the attack surface is inherently much smaller. But also in many ways it is essentially a VM, just for an architecture that has no dedicated hardware outside of VMs.

Wasm shared memory semantics are very similar to process isolation on OSes, so presumably the same techniques can be used there (and if those techniques are faulty, then so are containers and VMs).


Imho those permission systems are still too rudimentary and too non-automated. Instead of CLI flags, I would like to see permissions enforced at dependency boundaries, e.g.

    import foo from 'foo' with {permissions: ['fs', 'net']}


Enforcing permissions at dependency boundaries would be the ultimate goal, but trying to separate first-party code from third-party code within the same thread is a herculean task (if I pass a callback to a dependency, which permissions does it run with for example), and you can't really lean on JS engines to do the heavy lifting, because they weren't designed with that threat model in mind.

The best you can do currently is run your dependencies in a Worker, and enforce permissions programmatically for the worker [1]:

For example:

  new Worker(import.meta.resolve('./worker.js'), {
    type: 'module',
    deno: {
      permissions: {
        net: ['news.ycombinator.com'],
        read: [new URL('./sqlite.db', import.meta.url)],
        write: false,
      },
    },
  })

This isn't perfect by any means, and you shouldn't rely on it like a silver bullet, but if given the choice I'd rather have permissions in my security toolbox.

[1] https://docs.deno.com/runtime/manual/runtime/workers#specify...


The speed increases are nothing to sneeze at; I've moved a few Vite projects over to Bun and even without specific optimizations it's still noticeably faster.

A specific use case where Bun beat the pants out of Node for me was making a standalone executable. Node has a very VERY in-development API for this that requires a lot of work and doesn't support much, and all the other options (pkg, NEXE, ncc, nodejs-static) are out-of-date, unmaintained, support a single OS, etc.

`bun build --compile` worked out-of-the-box for me, with the caveat of not supporting native node libraries at the time—this 1.1 release fixes that issue.


I've hit significant failures every time I've tried to use `bun build --compile`; the most recent code I was trying to compile hit this one[1].

I documented how to build a binary from a JS app with node, deno, and bun here [2]. Node SEA is a bad DX, but not that complex once you figure it out.

1: https://github.com/oven-sh/bun/issues/6832

2: https://github.com/llimllib/node-esbuild-executable


I’m pretty sure issue #6842 was fixed in Bun v1.0.32 or so and we forgot to mark the issue as fixed

Will check once I get to my computer

Edit: was not fixed. We made other changes to fs.readSync to improve Node compatibility, but missed this. It will get fixed though


Bun's standalone executables are great, but as far as I'm aware unlike Deno and Node there's no cross compilation support, and Node supports more CPU/OS combinations than either Deno or Bun. Node supports less common platforms like Windows ARM for example (which will become more important once the new Snapdragon X Elite laptops start rearing their heads [1]).

[1] https://www.youtube.com/watch?v=uWH2rHYNj3c


We'll add cross-compilation support and Windows arm64 eventually. I don't expect much difficulty from Windows ARM once we figure out how to get JSC to compile on that platform. We support Linux ARM64 and macOS arm64.


Does it support building an EXE with all source-code removed?


It also helps avoid a node/v8 monoculture, just like with web browsers. I'm sure the ecosystem as a whole will get better because of it, even if you decide not to use it.


Avoiding a monoculture by introducing a VC-backed alternative is just asking for another monoculture.


I used both Deno and Bun.

Bun is really nicely compatible with node.

Speed of course is excellent, but the main reason I use Bun and recommend it:

You replace node, npm, tsx, jest, nodemon and esbuild with one single integrated tool that's faster than all of the others.


Nicely compatible, until it's not.

We banned all these forks at work.

The dev onboarding overhead is not worth the benefits.

Having all 1700 repos using the same build tooling is more important than slight increases in build performance.


Why do you have 1700 repos?


The wonders of JavaScript package dependencies, where basic CS stuff is a function exported as a package.


Probably an agency environment, or an enterprise environment that insists on having private mirrors of all 3rd party code.


Banned? Is that why you had to post this on a green text account? Because that sounds immature. If you really have so many repos it sounds annoying that there isn't room for team level experimentation.


Devil's advocate: Deno and Bun are not yet fully backwards compatible with Node. I myself have run into a _ton_ of pain trying to introduce Bun for my team.

This can become a big time sink on bigger teams. That time could be saved by just not allowing it until a full team initiative is agreed on.


It's not immature, it's pragmatic. You do have to weigh the benefits of being able to use non-standard tools against the cost of not being able to reuse the same tooling, linters, compilers, and what-not for all projects.

When you have a lot of projects to support, it's rare for the benefits to outweigh the costs.


> If you really have so many repos it sounds annoying that there isn't room for team level experimentation.

For what it's worth, I'll say that I can understand such top-down governance: you'd have an easier time moving across projects that you work on within the org, there'd be less risk of a low bus factor, and BOM and documentation/onboarding might become easier.

Same as how there are Java or .NET shops out there, that might also focus on a particular runtime (e.g. JDK vendor) or tooling (an IDE, or a particular CI solution, even if it's Jenkins).

On the other hand, if the whole org would use MySQL but you'd have a use case for which PostgreSQL might be a better fit, or vice versa, it'd kind of suck to be in that particular situation.

It's probably the same story for Node/Deno/Bun, React/Vue/Angular or anything else out there.

No reason why that mandated option couldn't eventually be Bun, though, for better or worse.


I can give a bit of perspective here. I'm currently porting the Vanilla Forums frontend (~500k lines of TypeScript) from Node, Yarn (we adopted it back before npm supported lockfiles) and Webpack, to building with bun and Vite.

There are a few notable differences:

- The out of the box typescript interoperability is actually very nice, and much faster than using `ts-node` as we were before.

- Installations (although rare) are a fair bit faster.

- With bun I don't have to do the frankly crazy song and dance that node now requires for ES modules.

- Using bun is allowing us to drop `jest` and related packages as a dependency entirely and it executes our test suite a lot faster than jest did.

For my personal projects I now reach for bun rather than node because

- It has Typescript support out of the box.

- It has a nice test runner out of the box.

- It has much better runtime compatibility with browsers (`fetch` is a good example).

- The built-in web server is sufficient for small projects and avoids the need to pull in various dependencies.
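
On that last point, a minimal sketch of the built-in server (the port is arbitrary):

  const server = Bun.serve({
    port: 3000,
    fetch(req: Request): Response {
      // Echo the requested path back; no router or framework needed.
      return new Response(`Hello from ${new URL(req.url).pathname}`);
    },
  });

  console.log(`Listening on http://localhost:${server.port}`);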


My old-dog experience has proven multiple times that staying with the main reference tool for the platform always pays off long term, as most forks or guest languages eventually fade out after the hype cycle is over.

The existing tools eventually get the features that actually matter, and I avoid rewriting stuff twice; in the meantime I gladly help some of those projects backport into the reference tooling for the platform.

The only place I really haven't followed this approach is with C++ on UNIX, which at first sight might look like I am contradicting myself. However, many tend to forget C++ was born at Bell Labs, in the same building as the UNIX folks, and the CFront tooling was symbiotic with UNIX.


Yepp! There is absolutely no reason to switch early. If it becomes the reference tool one day, then it should be easy to switch.

But you also have to remember this is JS we’re talking about… stuff changes every 10 minutes.


Bun test is so enjoyably faster than Jest.

I have a file of thousands of string manipulation tests that Jest just crashes on after 3 minutes, while Bun runs it in milliseconds.


How is the Bun test runner’s compatibility with Jest’s methods? Can a mature test suite be easily ported?

We are currently looking at vite and vitest to run 1600 jest tests.


You can track the progress here: https://github.com/oven-sh/bun/issues/1825

There's still a ways to go but folks are actively contributing.


FYI if you want to make a list on HN you're gonna need to add an extra line break everywhere.


ESM interop is inarguable. But these days Node has a test runner and compatibility with browsers (it implements fetch)… I guess I feel like Node is likely to catch up with most of this stuff over the lifetime of any long running project.


One of the things that makes me more bullish on bun rather than Deno is that bun is intentionally aiming for compatibility with node and the npm ecosystem, while Deno doesn't seem to be.


Sure, but node also has a huge amount of baggage and as others have pointed out is much slower.


> - The built-in web server is sufficient for small projects and avoids the need to pull in various dependencies.

ElysiaJS is a good library when you do need a bit more with the routing + middleware. It has great benchmarks as well.


Doesn't node have a built-in test runner too these days?


I really like Deno for small scripts and small side projects - it's just fast to get started with. And it allows me to use web standards, like URL imports to grab packages from CDNs instead of having a config file. There's just less to think about, like oh what was Node's crypto thing? Node is making strides in web compatibility, and building in things like a test runner. And I don't have much interest in migrating company projects away from Node. But Deno feels really fresh and light when I just need to run some JS.
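
For example, a URL import needs no package.json or node_modules at all (a sketch; the std version pin here is arbitrary):

  // Fetched and cached by Deno on first run.
  import { assertEquals } from 'https://deno.land/std@0.224.0/assert/mod.ts';

  assertEquals(2 + 2, 4);
  console.log('ok');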


Bun's equivalent of "npm i" is extremely fast, at least an order of magnitude faster on all 3 of my machines.

Since I run "npm i" many times per day, that in itself is a big timesaver, not just for local dev but also in CI pipelines.


Have you tried pnpm?

https://pnpm.io/


How does it compare to pnpm?


I found Bun to be faster. Monorepo support is a bit kludgy though. Once you know of the workarounds, it's ok. See my comment on https://github.com/oven-sh/bun/issues/5413#issuecomment-1956...

AFAIK, Pnpm monorepos do not follow standard npm. Bun does follow standard npm monorepos.

Pnpm's feature to override dependency versions is nice for legacy projects with many 3rd party dependencies. Not sure if Bun has the same feature. I mostly use it on greenfield projects with dependencies that I control.




Node is the safe choice, IMO. I tried Deno and I think it's cool, but I'm staying on Node for the time being. Things that Deno makes easier are not that hard with Node, and stability matters to me. For example, I had to spend a few hours rewriting my tiny service due to a changed Deno API. Don't have any experience with bun, though.


If there are any specific places where Deno didn't support a node API, please file a bug -- we definitely shoot for as close to node.js as possible, and if you have to modify your code that's most often on us.

(I'm a deno core dev)


No, I wrote my service some time ago (basically a GitHub webhook which does some crypto to validate the payload and invokes kubectl) using the Deno API, and a few months ago I took some time to update dependencies but found out that with the new standard library version some APIs were either deprecated or removed - I don't really remember, but I felt the need to rewrite it.

Don't take it as a criticism, I totally understand that you need to iterate on API and nobody promised me that it'd be stable till the end of time, but still work is work.


Ack, I ran into a similar issue when writing a github webhook. It might be related to JWT handling if it's the same issue.

If you recall that exact issue, definitely feel free to file an issue.


How do we report that? Some issues have come up with AWS on esm.sh: https://esm.sh/@aws-sdk/client-secrets-manager@3.540.0

It is just not working.


https://github.com/denoland/deno/issues is the ideal place -- we try to triage all incoming issues, the more specific the repro the easier it is to address but we will take a look at everything that comes in.


Bun has been great as a package manager, test suite runner and TypeScript interpreter. We use node in prod.


I asked myself the same question a couple of weeks ago and decided to use Node for some side stuff, simply because Node is the most mature, boring choice. Still, I like the DX improvements of both Bun and Deno a lot. We'll see how it all plays out in some years.


Cute logo.

And UX is pretty great: integrated fetch, simplified fs api, integrated test runner (I miss good old TAP style assertions though), ESM/CJS modules just work, some async sugar.

I think if they offer me a paid *worker solution, with sqlite, that's something I'm willing to pay for.


I don't know how you're using Node and not thinking "I wish there was a better option than this". I can't wait to jump ship but Bun/Deno aren't quite there yet, for my needs.


Curious what you are missing to make the jump.


I can't speak for Deno, but bun is drop-in compatible for most things, and the test runner speed alone is enough to make it worth using.


I haven’t used them yet for full sized apps, but they are both fantastic for scripting and small CLIs. Between the ease of running scripts, nice standard libraries, npm ecosystem, and excellent type system, I now feel TypeScript is a better scripting language than Python or Ruby.


I used to think so too, but that was because I had never really used Python. I still think Ruby is a mess, but it's so amazing how easy it is to manipulate data in Python, and so much faster.

I recently wrote a Node/Bun/Deno app that parses a 5k line text file into JSON.

The JavaScript on any runtime takes 30-45 seconds.

The Python implementation is sub 1 second.

I would not have been able to finish the tool so quickly if I were stuck relying on JS.

I still love Typescript but I'm not as blind about it now.


That runtime doesn't make any sense. This script creates a 1,000,000 line CSV string and then parses it into JSON in 700ms with Bun, and this is doing both things the slow way, creating the string with a giant array map and join, and parsing with a giant split('\n') on newline and map.

https://gist.github.com/david-crespo/8fea68cb38ea89edceb161d...


What does the code do? 30-45 seconds to parse a 5k line text file into JSON sounds like something is going very wrong


The real question is why would I use Bun over Vite? Even the ThreeJS developers determined Vite is the best.


Bun and Vite are not analogous. Bun is a runtime with a standard library, bundler, test runner. Vite is a bundler. You can run Vite through Bun.


Bundler for backend vs bundler for FE


What's the revenue model for Bun? What happens when the VC funding runs out?


Javascript edge hosting.

Bun team stated that here:

https://news.ycombinator.com/item?id=35966373


I imagine some hosted service like everyone does is a likely option.


I'm not sure why this is being downvoted -- it feels like a valid question for anything that enjoys wide adoption.


It's a valid question, but does it matter for anyone except the dev team? Bun is open source, so VC-backing is mostly a helpful jumpstart. If they find a viable business model – great, development can be funded in perpetuity. If they don't, development was funded for a while by someone else's money and then Bun is just like any other open source project that lacks direct funding (most of them).


I suppose you're right -- the MIT model makes it a non-issue


I think it does matter. Open-source software can still suffer from "enshittification" when there is constant need to generate profit. Fortunately it is open source, so it can be forked when things get bad, but even then there still may be lots of tech debt to undo.


Right...but if you're going to fork it and create fragmentation, then we might as well go back to Node which has at least been stable for the past few years.


Huge fan of Bun.

Came for the ts interoperability, stayed for the performance.

Also seems like the most sensible project in the space - I tried Deno and it was... rough. Bun on the other hand was easy to integrate and a very pleasant experience.


I started using Bun by default for small personal projects. Having to set up Node, with Typescript and reloads always took the fun out of quickly prototyping something.

Have yet to run Bun in production tho.


Is it me, or does this project try to do too many things at once? "Bun is an npm-compatible package manager", and an http server, and a websocket server, and a test library, and a bundler, and... why?


I think it's that modern languages like Go and Rust (there are surely others, but I don't have experience with any other language that ships with all the tools) ship with everything you need: formatter, linter, test runner, etc. Go goes even further than Rust and ships a very complete std focused on the web, and JavaScript is used primarily on the web, so it makes sense that it ships with all the libraries needed to build a web server. Also, WebSocket is a standard, so it's easy to implement and make it work with the browser.

Nowadays if you start a new JavaScript project you need to set up vite/esbuild/webpack, eslint/oxlint/biome, prettier, typescript, etc. That is a ton of dependencies that YOU need to maintain for years; if it is part of the tool you are using, then you don't. Ideally there shouldn't be a breaking change - let's see how bun manages that when the time comes.

I am waiting for bun, or a tool that has everything I need, to bundle my frontend app. I'm very tired of fiddling with all the dependencies and trying to make every dependency work together. I have a legacy project that I work on; I would have migrated to another tool a long time ago, but there is none that would fix the current issue, which is managing the project's build, test, formatter, and lint dependencies. After using Rust, I feel super frustrated with the state of the JavaScript ecosystem.

Also, you asked why? I want to work on the project; there is already a lot of work in keeping the dependencies up to date, and the tooling should not be part of that work.


> Nowadays if you start a new JavaScript project you need to set up vite/esbuild/webpack, eslint/oxlint/biome, prettier, typescript, etc. That is a ton of dependencies that YOU need to maintain for years

That's a choice YOU make; not everyone makes that choice, especially because they want to be able to continue working on a project for months/years without accruing automatic technical debt as all those projects move forward without actually thinking about backwards compatibility.


Right, but that's why languages like Rust have an excellent DX. There is no complex choice about what linter, formatter, test runner, doc builder, or package manager to use. These are all such common requirements for building software at scale with lots of contributors that the language tooling just includes them. It's not hard to maintain, because the language toolchain is so foundational it needs to be stable.

JS doesn’t have anything like that, which is why projects like Deno, Bun, Biome, etc are interesting. These projects explore how JS can also get a great out of the box experience without requiring the complex setup and maintenance steps that so many existing tools require.

Besides, professionally, you normally don’t make the choice in a vacuum. Linters, package managers, testing, bundling/building, type safety, and even formatting are all very useful in big projects with lots of people. So you often don’t get to say “ah we just won’t have unit tests because jest doesn’t care enough about backwards compatibility.”


No one likes to set up or update things. But when everything is coupled (why not even include a framework in Bun?), you are even more dependent on choices made by them and will be forced to upgrade no matter what. For example, if they decide that their implementation of the test component wasn't good enough in version X and completely reimplement it in version X+1, you will have to upgrade your code. Maybe at that point you don't want to rewrite all your test suites - you just want the new HTTP/3 request handler - but you still must rewrite all your test suites...

When things are less coupled - independent components you bring together - you can update the web server and/or some components, and keep the old version of the test library for now, until you decide it's time to upgrade.


Language-level things tend to be more stable than the myriad of random npm packages out there. Hopefully Bun remains pretty stable.

I'm currently fighting PHP + Laravel. I want to upgrade PHP but I can't because the version of Laravel I'm using depends on an older version of PHP. So I have to upgrade them both in lockstep anyway.


Because that's what people want. That's how you get a really good developer experience similar to golang or other languages: just install one tool to build, lint, format, run tests, and run your local project. No time spent trying to set up a bundler when what you want is to build a new project.

Regarding runtime libraries, it's similar to the batteries-included approach of go or python: you get what you need to get started out of the box and only reach for dependencies when you want to go further. A testing library, an http server, a websocket server - all perfectly reasonable to have as the core library of a runtime developed to run web servers.


Because the project would be a failure otherwise, and including those is the main goal of the project.


If the goal is to be an alternative runtime (which is what it is actually claimed to be), I don't think I like these non-standard APIs.

I do, however, find Bun more useful as a Swiss army knife, to use alongside nodejs, to reduce the number of development dependencies.


That's the selling point


That's the scary point


Migrating bundlers and changing test-runners is no fun. If bun can handle that with consistent performance and no goofy upgrade paths, I'm happy.


I like their selling point and if there is enough demand, I think the node community will implement at least a few of those features.


"...Bun on Windows passes 98% of our own test suite for Bun on macOS and Linux."

Does this mean the release was made with failing tests, or am I misunderstanding?


Looks like it; it seems the 2% are mostly odd platform-specific issues that the authors did not deem very important (my assumption for why the release happened anyway). AFAIK this[1] PR tries to fix them.

[1]: https://github.com/oven-sh/bun/pull/9729


Skipping particular tests depending on platform is a very common practice, for better or worse.


That's not what's happening. Bun has tests that are supposed to work on the platform, but currently don't.

Skipping tests from a Linux suite that don't make sense to run on Windows is very common. Skipping tests that should pass on a platform but don't, just in order to cut a release, isn't as common.


Every growing project has things the maintainers would like to work but that currently do not. The job of the maintainer is to balance the value of releasing a cut of the software as it stands, and learning about other unknown issues while continuing to work on the known, against working on the known and ignoring the unknown until finished with the known. Neither way is strictly good or bad.


At some point it becomes infeasible to 100% the test suite for all configurations.

At some point further away it becomes infeasible to 100% the test suite for any configuration.


[flagged]


> Bun is run by a little prince.

What does that even mean? Bun is run by a fictional French character capable of interplanetary travel? Or are you calling them a spoiled brat with money? Why does that affect code organisation? Or is it code quality?

If you’re going to criticise the project and the people who run it, some clarity of communication and specifics on the issues would be appreciated so others can evaluate your claims. Otherwise it’s just empty insults which do not advance the discussion.


Really impressive list of changes. Bun sounds like the dream node alternative, I hope they succeed in their mission.

And I'm glad they spent time on Windows support; that's something often neglected in the web development world.


I am curious: why do you think they have not succeeded yet?


From their website:

> The goal of Bun is to run most of the world's server-side JavaScript and provide tools to improve performance, reduce complexity, and multiply developer productivity.

Bun is still pretty young and experimental, and not really production-ready, though it's getting there fast. If it grows enough to force node to improve, or if it takes over node, that would be a success based on their own goal.


There have been some polls on social media: the overall picture was 80-90% using Node, then Bun, then Deno. I'd bet in the real world it's 99% Node for production. If in 3 years 5% were using Bun, it would be a great success (Node usage is huge). I think they're on track, but I would not recommend Bun for production backends as of now.


Hooray for Windows support! That was keeping me from using Bun since I'm on Windows a fair bit. My experience with Bun has been excellent so far and I'm looking forward to using it more.


I just tried using Bun to run one of our more complex projects. Did the same with Deno a week ago and too many things weren't working; with Bun everything loaded perfectly, and I could immediately drop ts-node and nodemon because they're essentially redundant when using Bun. Great stuff!


Is Bun executing TS or is it also compiling down to JS and executing that?

Edit:

The docs mention:

> Because Bun can directly execute TypeScript, you may not need to transpile your TypeScript to run in production. Bun internally transpiles every file it executes (both .js and .ts), so the additional overhead of directly executing your .ts/.tsx source files is negligible.

https://bun.sh/docs/runtime/typescript

The idea I'm getting from this is that both JS and TS are transpiled to something else. Are types preserved in this bytecode, AST, or whatever it is?


Transpiling TS is a really easy task, because the TS developers made a huge effort to make it possible. You basically remove all types and that's about it. It could probably be done with a simple character-streaming algorithm.

At this point, IMO, it should just be implemented within V8. That would make things much simpler for everyone.


Totally agree.

There is a TC39 proposal for that: https://github.com/tc39/proposal-type-annotations


That proposal is not fully compatible with Typescript: https://github.com/tc39/proposal-type-annotations?tab=readme...


It is more compatible than not, and there are workarounds for most of the things that wouldn't work; many of them are things you shouldn't be using in Typescript today anyway, have known workarounds (enums), or are features you probably already have lint rules warning against (namespaces). (There are more details elsewhere in that proposal document, outside the directly linked FAQ question.)


Great point. It is 100% possible to survive without enums and namespaces.

And in fact: I bet the Typescript team itself would deprecate them (or at least add extra checks to TS to avoid them) if the TC39 proposal above passed.


Yeah, if this type annotations proposal gets to higher stages I expect the associated ES target to get far more strict warnings/errors on those features.

Though I'd suggest that the Typescript team also doesn't seem to be waiting on that to deprecate them, either. They've made it somewhat clear that the only reason enums and namespaces survive is a commitment to deep backward compatibility (both go all the way back to 1.0), and that neither is a feature they would add today (without waiting on TC-39 proposals for those features in JS first). As an interesting side note: up until very recent versions, the Typescript codebase itself was one of the biggest users of both enums and namespaces. (They managed to finally do a lot of namespace removal in the recent ESM rewrite. I haven't checked where they are at on enum removal.) It is always fascinating to me how important backward compatibility becomes when you bootstrap your compiler in its own language.


Hmmm good catch.

But on the other hand, it would be already fantastic to have at least a subset of TypeScript...


It's not just stripping types: enums have a runtime representation that needs to be generated.
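
For example, this enum (the output below is roughly what tsc emits; the exact shape varies by version and config):

  enum Color { Red, Green }

becomes JavaScript that exists at runtime:

  var Color;
  (function (Color) {
      Color[Color["Red"] = 0] = "Red";
      Color[Color["Green"] = 1] = "Green";
  })(Color || (Color = {}));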


Why would doing work at run time that could be done at build time be a good idea? I have CI; I'd rather have it do the build work than delegate that to my production servers.


(Note that you still have to run tsc on everything in CI anyway to check the types. So when you ship TS files to production, your CI does the hard work but then doesn't finish the easy part, so your prod server has to do it? Why?)


You can also ask why every node server traverses the file system to load dependencies at runtime instead of build time. Packaging your server with bun build can be the answer to both.
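
If I have the flags right (worth double-checking against the current docs), something like:

  bun build ./server.ts --compile --outfile server

bundles your code, dependencies, and the runtime into one executable, so neither transpilation nor module resolution happens on the production box.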


"TypeScript & JSX support. You can directly execute .jsx, .ts, and .tsx files; Bun's transpiler converts these to vanilla JavaScript before execution."

--

https://bun.sh/docs#design-goals


It’s not running TS directly, it’s just preconfigured to transpile TS to JS without the user having to bring extra tooling. Neat, but you’ll see the docs still recommend tsc for type checking at build.


I wonder what's the benefit of TS if there's no type-checking? If types are not checked that means the TS type-declarations could be totally wrong and nobody would know. In other words they could be misleading.

Why incur the type-declaration overhead if they are not used after all?


This is how typescript is run today. Typescript types never exist at runtime regardless of how typescript is run. There is no runtime overhead from defining types because they are erased during transpilation. The purpose of typescript is to make the editor experience better (autocomplete, error highlighting). Typically type-checking is run in addition to tests to make sure there aren't a bunch of errors no one saw in the editor.


So "type-checking is run". Could it not be run by Bun automatically?


It could be, but even today without Bun, a common approach is to do type checking in a separate step from the build. This is because tsc doesn’t parallelize well, so type checking will slow down the build a lot. So you can put the type check step in a separate CI job, and have it fail like unit tests would. Then the main build can be a lot faster since it just has to strip the annotations.

Plus, for local dev, iteration and watch/rebuild is more important than failing with invalid types on every change. Sometimes it’s helpful to circle back to fix/update types after you’ve tried a couple approaches. (TS can still be finicky at times!) On top of that, your IDE should report type errors as you work anyways.
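
A minimal sketch of that split in package.json (the script names are just my own convention):

  {
    "scripts": {
      "typecheck": "tsc --noEmit",
      "build": "bun build ./src/index.ts --outdir dist",
      "test": "bun test"
    }
  }

CI can run typecheck and test as parallel jobs, and the build step never waits on tsc.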


Makes sense.

I would still prefer, though, that Bun did it for me, in a separate process perhaps, so I wouldn't need to configure a separate CI job or manually run the tsc command. I read that Bun has its own test runner, so why not its own type-checker as well?

On Node.js I just edit the source code, restart the debugger on it, edit while in the debugger, and rinse and repeat.

I use runtime assertions to catch errors in argument-types etc. as needed.
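
By runtime assertions I mean hand-rolled guards like this (my own helper pattern, nothing standard; it also works as plain JS if you drop the annotations):

  function assertString(x: unknown, name = "arg"): asserts x is string {
    // fail loudly at the call site instead of deep inside the callee
    if (typeof x !== "string") throw new TypeError(`${name} must be a string`);
  }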


the only time you run the type-checker is in CI. The majority of the time you only need the code to run, and your editor/IDE already has its own bundled type-checker. Unless Bun ships its own type-checker, which means playing catch-up with tsc (if that's even possible; typescript's type system is very complex), I don't see a lot of benefit in Bun merely calling tsc for me.


> or is it also compiling down to JS and executing that?

This is the only way to execute TypeScript. That's how every tool that "executes" TypeScript works.


I don't buy it. That might be how every tool today works, but there's no reason V8 or whatever can't run TS the same way it runs JS.


If it runs TS "the same way" it runs JS then how is that different to what I described? It would be a JS engine that strips the types before execution.

There is a persistent fantastical hope that TS can somehow be a compiled language. It can't, not without breaking compatibility with JS. Until, of course, someone manages to compile JS - but at that point TS would be irrelevant.

I say this as someone who loves TS and wouldn't want to be without it: TS is a fancy linter. JS defines the execution semantics of the language.


Bun transpiles Typescript to JavaScript before execution, removing types and doing some dead code elimination


We've been using bun for a while now. We love the speed, but we love the integration even more. No need to use node, npm, nodemon, tsx, esbuild and jest.

Bun is our one-stop-shop for Typescript.

Thank you!


Can I ask what you use it for specifically?


Bun is used in github.com/yolmio/boost as a Typescript runner to build a complete system. We use it also for all kinds of other scripts.


I use Bun as a test and dev build runner for TS programs. I still compile with tsc though.


Not sure I see the benefit of the bun shell. I use shell scripts when I know that the other people using the script will be able to run it in a similar shell to me, in order to cut down on dependencies. If I need it to be cross platform I just use a scripting language like JS.

Bun shell keeps the more esoteric syntax of Unix-like shells but also requires a dependency (Bun itself). If you already have Bun installed why wouldn't you just write a JS script?


I mandate WSL when working with Windows developers, and GNU coreutils for Mac developers, so that I can assume some things about their dev environments. This solves that problem for shell scripts; there are still other ways where it's useful, but a scripting environment is probably the biggest one.

The velocity of writing filesystem and file-manipulation code in shell is many times greater than in javascript. Bun shell lets you leverage the power of shell for that stuff while being able to leverage your javascript code at the same time, with fairly minor downsides in exchange.
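
A sketch of the kind of thing I mean (file names and the "keep" field are invented for the example):

  import { $ } from "bun";

  // shell one-liners and plain JS in the same file
  const files = (await $`ls *.json`.text()).trim().split("\n");
  for (const f of files) {
    const data = await Bun.file(f).json(); // back to JS for the fiddly bits
    if (!data.keep) await $`rm ${f}`;      // interpolated values are escaped safely
  }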


> The velocity of writing filesystem and file manipulation files in shell is many times greater than in javascript.

I think you're right, after achieving a level of expertise. I've never gotten there myself which probably explains my bias, but I appreciate hearing your perspective.


I can definitely see the value of not wanting to write the boilerplate kind of stuff you need for more shell-like scripting. I can also see wanting to emulate the traditional syntax while abstracting away whether you are on a Unix system, and still allowing traditional JavaScript syntax in between the shell-y parts.

On the dependency side it'd be slick if you could `bun build --compile` these like normal bun apps.


It's really nice when I need to do shell stuff - much nicer to be able to use JS to write a shell script than to either go look up shell syntax again or use JS with child_process.spawn().


One of my favorites: Bun has a working FFI that is also fast.

https://bun.sh/docs/api/ffi

So does Deno, but Bun's feels more evolved, as it comes bundled with tools and sufficient examples for working with pointers. The only thing Bun is missing right now is the Deno equivalent of non-blocking FFI calls, i.e. `await mylib.myFunc()`.
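
For reference, the bun:ffi flavor looks roughly like this (the library name/path is an assumption that may need adjusting per platform, e.g. libm is folded into libc on some systems):

  import { dlopen, FFIType, suffix } from "bun:ffi";

  // call sqrt from the system math library; "suffix" resolves to so/dylib/dll
  const { symbols } = dlopen(`libm.${suffix}`, {
    sqrt: { args: [FFIType.double], returns: FFIType.double },
  });

  console.log(symbols.sqrt(2)); // 1.4142135623730951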

Another one on my wishlist for Bun: embedded Bun. A library distribution would be nice, so that we can call into Bun as, e.g., "libbun.so" from other languages. It would be more useful than just embedding WebKit/SpiderMonkey/V8, as these lack any real capabilities besides running vanilla JS.


Looks like a great development.

One thing I miss in Node.js is the ability to run an HTTPS server in a simple way, without having to muddle with generating and installing the correct type of certificate. I understand there can be a "self-signed certificate", but there doesn't seem to be any npm module I could install to take care of that.

Since Bun is a "server-side" JavaScript platform, it would be great if it could support HTTPS out of the box too.
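
To be fair, Bun.serve does already take TLS options, though you still have to bring your own certificate; a minimal sketch, assuming cert.pem/key.pem were generated with something like mkcert:

  Bun.serve({
    port: 8443,
    tls: {
      cert: Bun.file("cert.pem"),
      key: Bun.file("key.pem"),
    },
    fetch(req) {
      return new Response("hello over https");
    },
  });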


Does this work? https://get.localhost.direct/

No need to mess with self signed certs, there are already some public 127.0.0.1 wildcard domains like the one I linked.


Caddy supports self-signed certs by default

https://caddyserver.com/docs/automatic-https


More precisely, it runs its own local CA to issue certs for itself. Not exactly the same as self-signed certs (a self-signed cert is one signed with its own key), but better, because leaf certs are short-lived and easily cycled. This makes setting up trust easy: you just need to trust the one root CA cert, and every leaf cert for any domain served will be trusted.


Does it work with Node.js or Bun or Deno?


You can proxy to internal services with a few lines of config:

  localhost:2080

  reverse_proxy :9000
Then access https://localhost:2080 and you'll be accessing whatever is on port 9000, be it node or bun or anything else.

https://caddyserver.com/docs/quick-starts/reverse-proxy


Curious how you actually see this working. Care to elaborate?


Faster this, faster that. Is it finally segfault-free? I've tried it like 3 times in the span of the last year with different projects, only to find out it segfaults at runtime or when installing packages.


Same. Tons of tooling breaks and segfaults. Our codebase has a dylib unknown symbol error that hasn't been fixed since before v1.


I only use bun for tests/builds/storybook, but I haven't had it segfault at all. I suspect that you've got a dependency that is hitting an undocumented node API that isn't fully implemented. They talk about those in the blog post, they're a known thing.


But look how quickly it segfaults!


Lots of good changes here, Bun shell + windows support solves a real problem.

Is `remix run dev` fully working now with the additional node API support?


I admire your video production values :) Nice job there. One thing: what gear do you guys use? The audio was really good for what was probably a shotgun mic?

Can you tell I don't use bun yet? Because none of my questions are about the release. But I tried it out a few weeks ago on my NextJS project and it didn't work; I will try bun 1.1.


I was quite curious about the .bunx file format. I think this could be quite a useful thing, a universal binary format. Then I see the shim DLL:

https://github.com/oven-sh/bun/blob/801e475c72b3573a91e0fb4c...

Even before this past week's XZ backdoor revelation, checking binaries into source control rather than building from source seemed quite questionable. In fairness to the Bun developers, they have a comment in their build.zig file acknowledging that this shim should be built more normally rather than being checked in.

Then I look into the source for it:

https://github.com/oven-sh/bun/blob/801e475c72b3573a91e0fb4c...

For no discernible reason, it is using a bunch of undocumented Windows APIs. The source cites this Zig issue as one reason why they think it is OK to use undocumented APIs:

https://github.com/ziglang/zig/issues/1840

I don't see any good reasons cited here for using undocumented, unstable interfaces. For Zig's part, there seems to be some poorly explained interest in linking against "lower level" libraries without any motivating use case (just some hand-waving about security and drivers, neither of which makes much sense: Onecore.lib is a thing if you want a documented way of linking an executable that runs on a diverse set of Windows form factors, and compiling drivers may as well be treated as a separate target, since function names are different).

For Bun, I assume they are trying for low binary size. But targeting NTDLL vs. Kernel32 should not make a big difference, especially when the shim is just doing basic file IO. For an example of making a small executable with the standard API, you can make a 4kb hello world using MSVC just by using /NODEFAULTLIB and /ENTRY:main with link.exe and this program:

    #include <Windows.h>
    #define MY_MSG "Hello World!\n"
    int main() {
        // lpNumberOfBytesWritten may only be NULL for overlapped IO, so pass
        // a real variable; sizeof(MY_MSG) - 1 skips the trailing NUL
        DWORD written;
        WriteFile(GetStdHandle(STD_OUTPUT_HANDLE), MY_MSG, sizeof(MY_MSG) - 1, &written, nullptr);
        return 0;
    }
So it should be possible to make a .bunx shim of small size without having to resort to undocumented APIs (the current exe is 12kb). But even if the shim exe were 100kb, that would still be a more acceptable tradeoff for me than having to debug any problem that results from using non-standard APIs.


the motivation behind zig#1840 is that while the functions in ntdll aren't as well documented as the kernel32 functions, they're not unstable, and not having our binaries depend on kernel32.dll would lead to faster startup times as well as allow us to do things like use more performant algorithms for UTF-8 <-> UTF-16 conversion. on top of the things mentioned in the issue, like having APIs with more powerful features.


For Bun's shim, it is linking against kernel32 anyway. And there is nothing special about its use of NtCreateFile, NtReadFile, NtWriteFile, and NtClose that would preclude it from using the standard functions.

I'm not sure it's possible to not have kernel32 loaded into your process anyway. Even if you create an EXE that imports 0 DLLs, kernel32 gets loaded into the process by NTDLL. The call stack from main:

    ConsoleApplication1.exe!main()
    kernel32.dll!BaseThreadInitThunk()
    ntdll.dll!RtlUserThreadStart()
There are valid reasons to use APIs from NTDLL. Where I disagree with zig#1840 is the idea that it is always better to use the NTDLL versions of APIs. Every other software ecosystem uses the standard Win32 APIs, and diverging from that without a good reason seems like a good way to get unexpected behavior. One concrete example: most users and programmers expect Windows to redirect some file system paths when running on WOW64. But this is implemented in Kernel32, not ntdll.

https://github.com/ziglang/zig/issues/11894

