JavaScript macros in Bun (bun.sh)
136 points by fagnerbrack 11 months ago | 55 comments



I was ready to be excited by the title, but was utterly disappointed :(

IMO these aren’t macros in the Lisp sense of the word (or Rust, or even C); yeah, they run code at compile time, but that’s where the commonality ends.

Macros should be able to apply syntactic transformations to the code. Lisp is famous for allowing that by representing code as lists. Rust has a compiler-level API that hands your macro a token stream, runs arbitrary code, and spits new tokens out. C macros operate at the token level, so with enough magic you can transform code into the shape you want.

This… isn’t any of that.

A pretty good example (and something I’m still sad that it didn’t take off) of macros in JS is Sweet.js[0]. Babel macros[1] are a bit higher level, where macros require the input to already be a valid AST, but that’s also something I’d call macros.
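
For illustration, here’s roughly what a Sweet.js macro looks like (a sketch based on the Sweet.js docs; details may vary by version). The macro receives syntax and returns new syntax, which is the transformation Bun’s feature doesn’t do:

    // Define a compile-time transformation: every `hi` below
    // expands into the template returned here.
    syntax hi = function (ctx) {
      return #`console.log('hello, world!')`;
    };

    hi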

This… I’d say it’s more of a compile time code execution feature, not a macro feature.

[0]: https://www.sweetjs.org/ [1]: https://babeljs.io/blog/2017/09/11/zero-config-with-babel-ma...


An earlier version of macros in Bun looked more like what you describe. We nerfed it because the API was too complicated. But we probably will revisit in the future.

https://gist.github.com/Jarred-Sumner/454da846d614f7bb4bcceb...


Came here to suggest renaming this feature for the reasons described here.

Ideas are cheap, and naming is hard, but I like Zig’s “comptime” term, which is closer to this than “macro”. “comptime” distinguishes it from C preprocessor stuff, but also describes what it actually does more clearly.


I've been using ts-morph for this recently, the accompanying AST viewer makes getting started easy.

https://github.com/dsherret/ts-morph

https://ts-ast-viewer.com
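
For a taste of the API, a minimal sketch (hypothetical file and names):

    import { Project } from "ts-morph";

    const project = new Project();
    const sourceFile = project.createSourceFile(
      "example.ts",
      "export function oldName() { return 42 }",
    );

    // ts-morph wraps the TypeScript compiler API in a friendlier, mutable AST.
    sourceFile.getFunctionOrThrow("oldName").rename("newName");

    console.log(sourceFile.getFullText());
    // => export function newName() { return 42 }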


Out of curiosity, what have you made?


It was mostly utilitarian, modifying the outputs of some code generation tools where maintaining a fork was going to be difficult. The most recent was munging some file upload details where the OpenAPI definition is incomplete (or only recently defined). Also adding some aspect oriented changes across the code (audit/log/debug).


Every time I see a new Bun update, it's always because Bun has implemented a feature that violates the spec in some way. While this does rely on syntax from a proposal, said proposal has not made it into the language yet. Another failure I see is that the "type" property is not meant to be used for this kind of language augmentation; it's meant to describe the type of the file without inferring it from the file extension. I think Bun has a habit of moving too fast and breaking things, and if it catches on I worry it will have just as many legacy anti-spec things as Node does.


Import Attributes are a Stage 3 proposal, which means they are very likely to become part of the language; Stage 3 is the last stage before inclusion.

> Another failure I see is the fact that the "type" property is not meant to be used for this type of language augmentation, it's meant to describe the type of the file without inferring from the file extension

It sounds like you’re confusing Import Attributes with Import Assertions, which was the previous iteration of Import Attributes.

Interpretation of import attributes is host defined. For Import Assertions, that wasn’t true - they were intended never to impact runtime evaluation. That’s the difference with Import Attributes. Import Attributes do impact runtime evaluation.
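
For reference, the call site under discussion looks roughly like this (adapted from the Bun docs; `random.ts` is a placeholder module):

    // random.ts
    export function random() {
      return Math.random();
    }

    // app.ts: the attribute opts the import into macro expansion, so
    // random() runs at bundle time and its result is inlined into the output.
    import { random } from "./random.ts" with { type: "macro" };

    console.log(`Your random number is ${random()}`);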


What I specifically meant is that the "type" property can impact runtime evaluation as per the spec; however, it's implied to be used specifically for impacting runtime evaluation based on a file's type. Say I have a module in a language that transpiles to JavaScript based on the "type" property. In Bun, I would not be able to use any functions exported by it as a macro, because the "type" property is being used for defining the module as a macro. That's the specific issue I have.


Exactly. Import attributes are supposed to affect the imported module, not the importing module.

This turns what looks like a function call in the importer into something like macro expansion (it doesn't look like actual macros though).


I think this is mostly consistent in terms of effect. With any application of import attributes that affects the dependency, the dependent module could behave differently for any affected aspect it accesses.

The only inconsistency is that Bun is front loading some of these effects to the server/bundler runtime and effectively memoizing the equivalent behavior before it reaches a client. But it’s not doing that of its own volition, it’s doing it to address an explicit attribute in the source code.

The only way this would be a meaningful problem is if the explicit value has a chance of colliding with either existing code in the ecosystem (highly unlikely, no one is really using this syntax yet except perhaps in an equally experimental context), or some plausible pending standard (unless one was proposed in the last couple weeks, I’m pretty sure I can rule that out too).

I share other commenters’ lament that this is not a true macro solution (and I think it should actually just be renamed to something like comptime). But I don’t think this deserves the deviation-from-standards challenge it’s getting in this thread. And lest I come off as a Bun fanatic, I think I’m one of the people who more frequently questions potential Bun spec deviations when they come up on Twitter.


I mean, it looks like they support Import Assertions, too:

https://bun.sh/blog/bun-macros#how-it-works

So the worry about having legacy anti-spec things still seems pretty valid.


What about adding signals?


Eh. Just treat Bun as a testing ground for (very) experimental new features. It's no secret that the project moves very quickly, and anything that gains enough traction can be folded back into the spec.


I like that there's an engine willing to break the spec. It's good to have variety and see how people experiment with this stuff, at least for the fun of it!


How do you think features make it into the spec?

Specification and implementation are chicken-and-egg.


Is bun actively participating in these discussions, or are they just implementing whatever they wish and not participating in the chicken-and-egg process?


The main JS implementations do not run ahead of the spec. They implement at Stage 3.


Import attributes are at Stage 3: https://tc39.es/proposal-import-attributes/


I'm well aware, but this is not just import attributes. It's using import attributes syntax to trigger a transform of the importing module, in addition to changing the behavior of the exporting module.


More concise than my previous reply: that’s just moving the same outcome around in time. If a fully runtime-only import attribute would result in inlining a static-at-build-time value where its export is called, this is just inlining it sooner. That seems fine to me unless it doesn’t fit the import attribute use case. In which case… don’t voluntarily use it?


A. Import attributes are stage 3.

B. If you blur the line between "runtime" and "compiler", you'll realize that a lot of the JS ecosystem (e.g. Babel) runs ahead of that.


I guess today's software development culture requires it: constantly discover new things, and move forward by breaking the old.


This blog post is missing a few things from the official docs: https://bun.sh/docs/bundler/macros

---

The way we do this kind of thing at Notion is very simple. We have normal CLI commands that generate code and write it to disk, and we check in those outputs. Then in CI, we run all the generation commands and verify the codegen is up-to-date. Checking in generated code means it's really easy to understand the behavior of codegen and how it changes, and everyone gets excellent typechecking and auto-completion of codegen'd artifacts.
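
A minimal sketch of that CI check (hypothetical script name and commands, not our actual setup):

    // scripts/check-codegen.ts
    import { execSync } from "node:child_process";

    // Re-run every generator, then fail CI if any checked-in output changed.
    execSync("npm run codegen", { stdio: "inherit" });
    try {
      execSync("git diff --exit-code", { stdio: "inherit" });
    } catch {
      console.error("Generated code is stale; run `npm run codegen` and commit.");
      process.exit(1);
    }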

The downside to static codegen is that the "templates" we use for codegen need to be valid Typescript files, or we risk breakage if imports or types change.

Bun macros have some advantages, like execution at build time for stuff like Git SHA interpolation, but because these macros can't actually emit code, they feel less powerful than straight-up codegen. I could replace a lot of the Notion CLI commands with Bun macros if:

1. There's a return type for macros that emits code, like:

    // src/macros/prism.ts
    export function emitPrismImports() {
      const importOrder = toposortLanguages(PRISM_LANGUAGES)
      const imports = importOrder
        .map(importPath => `await import(/* webpackChunkName: "prism" */ "${importPath}")`)
        .join('\n')
      return Bun.code`${imports}`
    }

    // src/client/syntaxHighlighting.ts
    import { emitPrismImports } from '@notionhq/macros/prism' with { type: 'macro' }
    async function highlight(text, lang) {
      emitPrismImports()
      return Prism.highlight(text, lang)
    }
    
2. There's a way to run `bun build` and just do macro evaluation. I need to continue using Typescript, Webpack, Jest, etc for now, so we need eval'd macros on disk so they can be typechecked, tab-completed, and bundled by other tools.

The uncompiled versions still wouldn't typecheck nicely though :thinking_face:


> the "templates" we use for codegen need to be valid Typescript files, or we risk breakage if imports or types change.

Any tips on how to do this? I’ve been running into similar problems with code generation in TS, and I haven’t been able to come up with a good technique for how to solve this problem yet. (Anything I’ve been able to find is either an AST that’s awkward to use, or string templating that can’t be valid code since it has extra syntax in it.)


The way I do it is pretty dumb. We have files like src/test/_template.test.ts that are valid TS, which our codegen commands like `notion new test` regex-replace to “instantiate” the template. The convention we follow is to name replacement targets with double underscores. No need for some kind of specialized template library - just regex & replaceAll.

The scripts that consume our templates are mostly very simple, ~15-line affairs, but there are a few very complex 1000+ line ones that use the TypeScript type checker API to codegen model classes and test data generators from hand-written interface types. Those things read ASTs, but writing ASTs seems like a huge waste of time to me. An AST is only nice if you need to very carefully morph existing code, and even then I usually end up doing a string replacement at an AST node’s position in its original file. In all that code I think the only thing that writes using an AST is the import statement stuff.
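
That position-based replacement is easy with the TypeScript compiler API; a sketch (not our actual code):

    import ts from "typescript";

    // Splice replacement text over an AST node's span in the original source.
    function replaceNodeText(sourceText: string, node: ts.Node, replacement: string): string {
      return sourceText.slice(0, node.getStart()) + replacement + sourceText.slice(node.getEnd());
    }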

Luckily that super complex stuff runs on every commit so it can’t break. It’s the one-off codegen commands like “make me a new component” or “make me an integration test” that are in danger of going stale, and even those we could write some CI tests for like “make a new thingy, check that it typechecks”.

Anyways here’s a full example template file:

    /*__FILE_METADATA__*/
    /* ================================================================================
    
     __TEMPLATE_CLASS_NAME__.
     Docs: https://dev.notion.so/notion/Record-Framework-1f41f97e1cc746628a6af57ba75d75ad
    
     THIS FILE IS PARTIALLY GENERATED. ONLY EDIT THE EDITABLE REGION BELOW.
     __GENERATED_BY__
     __GENERATED_FROM__
    
    ================================================================================ */
    
    import type {
     /*__VALUE_IMPORT__*/ BlockValue as __VALUE_TYPE__,
     /*__TABLE_IMPORT__*/ BlockTable as __TABLE__,
    } from /*__SCHEMA_IMPORT__*/ "../schemas/Block"
    
    import { Model } from "./Model"
    
    /*__REMOVE_LINE__*/ // : Value <Value> as __MODEL__<Value> | undefined <- Conform to template args in RecordStore.
    
    /**
     * This class is generated from {@link __VALUE_TYPE__}.
     * To customize, edit the section in {@link __TEMPLATE_CLASS_NAME__} below.
     */
    abstract class Generated__TEMPLATE_CLASS_NAME__<
     Value extends __VALUE_TYPE__ = __VALUE_TYPE__
    > extends Model<Value, __TABLE__> {
     /*__REMOVE_LINE__*/
     __BODY__: undefined // Put the goods here.
    }
    
    /**
     * Read and interpret the data of a __RECORD__ record.
     */
    export class __TEMPLATE_CLASS_NAME__<
     Value extends __VALUE_TYPE__ = __VALUE_TYPE__
    > extends Generated__TEMPLATE_CLASS_NAME__<Value> {
     /*__USER_EDITABLE_SECTION__*/
    }


Cool, thanks! “Just use a template syntax that’s also valid code” is a neat trick; I was sort of headed in that direction, but having a concrete example is useful.

What are the commented bits like /*__REMOVE_LINE__*/ and /*__SCHEMA_IMPORT__*/ used for?


The template replacer function takes a map from regex to replacement, and asserts that each regex must occur at least once, which helps catch mistakes where you update the template but not the code calling it.

__REMOVE_LINE__ is a common hack with our template system to remove lines in the template that are needed to make the template valid, but aren't wanted in the resulting code. We replace /^.*__REMOVE_LINE__.*$/gm with "". In this case we need to accept the __MODEL__ template var because a different template called with the same arguments needs it, but we don't care in this one.

__SCHEMA_IMPORT__ is another marker we use to replace a whole line, here we replace it with `} from "${generateData.schemaImportPath}"`
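
Put together, the replacer is something like this (a hypothetical sketch, not our actual code; paths and names made up):

    import { readFileSync, writeFileSync } from "node:fs";

    // Apply each replacement, asserting the pattern matched at least once so a
    // renamed marker fails loudly instead of silently producing broken output.
    function instantiateTemplate(source: string, replacements: Map<RegExp, string>): string {
      let result = source;
      for (const [pattern, replacement] of replacements) {
        if (!result.match(pattern)) {
          throw new Error(`Template missing expected pattern: ${pattern}`);
        }
        result = result.replace(pattern, replacement);
      }
      return result;
    }

    const template = readFileSync("src/records/_template.ts", "utf8");
    writeFileSync("src/records/Block.ts", instantiateTemplate(template, new Map([
      [/^.*__REMOVE_LINE__.*$/gm, ""],
      [/__TEMPLATE_CLASS_NAME__/g, "Block"],
    ])));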


Out of curiosity, what are you trying to make?


I’ve run into this in a few different contexts, which usually boil down to “I have a schema, and I’d like to automatically write code that uses it.” (Think things like API definitions, database models, binary serialization formats, and the like.) Trying to do it at runtime with something like io-ts results in a lot of indirection that’s hard to follow, and writing the files by hand results in silly mistakes because it’s tedious to get all the details right. Generating the files and checking them in gives you the best of both worlds: the readability of hand-written code, with the accuracy of automation. But I’ve never been able to get the generator itself as nice as I’d like; string templating mostly works adequately, but as the parent says it lacks integration with the regular toolchains.


Pretty cool! Sounds like it was inspired by comptime in Zig (which shouldn't be too surprising since Bun is written in Zig).


It was mostly inspired by comptime in Zig. Babel macros came to mind too, but those rely on an AST and a Babel-specific API. What’s really cool about comptime in Zig is that it feels like ordinary code. Macros in Bun aim for that too.


If Zig drops LLVM, where does that leave Bun development? There is no alternative Zig implementation.


Woah, nice. Zig has a nice sweet spot of providing useful functionality at compile time vs. having "macros can make the text you read mean practically anything" like in, e.g., Lisp.


I feel like macros might be a poor name for the feature. Most people in Javascriptland expect macros to enable some sort of AST manipulation, whereas these are "just" compile-time statements. Still pretty cool, but putting "Javascript macros" right there in the title might've been the wrong choice


Is there a way to disable arbitrary I/O in macros? I would like to use this feature to compute lookup tables and inline the results of pure functions, but I really really do not want my bundler making arbitrary http calls or other sorts of nondeterminism. I need to be able to reproduce my bundles confidently.


It's cool to be able to compute things at compile time, but one thing that doesn't seem clear from this post is: can macros return code? If so, how? It seems like they can only return values that are computed at compile time, which severely limits what you can do with macros.


This is mentioned under "Limitations":

> The result of the macro must be serializable!

> Functions and instances of most classes (except those mentioned above) are not serializable.
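
So a macro can return data, just not behavior. Something like the git-hash example works because the result is a plain string (a sketch assuming Bun's spawnSync API):

    // getGitCommitHash.ts (import it with { type: 'macro' } to use it)
    export function getGitCommitHash(): string {
      // Runs at bundle time; only the returned string ends up in the output.
      const { stdout } = Bun.spawnSync({
        cmd: ["git", "rev-parse", "HEAD"],
        stdout: "pipe",
      });
      return stdout.toString().trim();
    }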


Well those are both slightly different than "code". I mean something more like returning an S-expression, or a JS AST object, and having that be inlined. That should have no problem with serialization, since it could just be dropped into the AST (unlike a function or class).


Neat! I like that it's not allowed in npm modules. Module authors can do whatever compile-time codegen they want as part of their own builds.


gotta say I'm puzzled why time is spent on features like these, rather than focusing on getting the remaining node apis finished


I wrote nearly all of macros in Bun over a year ago, when Bun was in private beta. We just never documented it or talked about it much

Our focus is very much on Node.js compatibility


Second this. I've tried Bun a couple times and have been surprised by how fast it actually is, but I've also run into issues getting it to work with existing libraries - issues that make it seem like it's almost there, but just not quite able to replace Node yet in most cases.

It's surprising to see effort spent coming up with and developing niche new features rather than on bridging the gaps (within reason) with node.

I've never worked on a language project like this before though, so I'm not in a place to cast any judgement; just echoing the sentiment that this seems strange from the outside. Maybe this is just what it takes to keep the project interesting for Jarred? I can relate to getting bored with a project once I've proved out the difficult parts and most of what remains just feels like chores.


I love macros and feel like they would be a great addition to the JavaScript language and ecosystem. That being said, I would much rather they be standardised. With the past few years of TC39, it feels like introducing big new things has worked only when one of the big vendors made them available de facto, and then they got a real chance to progress through the review stages (take for example decorators, which were boosted by TypeScript).


Babel has macros too! My super small npm package gets git commit hashes using babel-plugin-macros.

https://www.npmjs.com/package/babel-plugin-macros

https://github.com/AbhyudayaSharma/react-git-info
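
Usage looks roughly like this (from memory of the README, so treat the exact shape as an assumption; the `/macro` import path is what babel-plugin-macros keys off):

    import GitInfo from 'react-git-info/macro';

    // Expanded at build time by babel-plugin-macros, so the commit hash is
    // inlined and there's no runtime dependency on git.
    const gitInfo = GitInfo();
    console.log(gitInfo.commit.hash);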


Hmm, seems to me that this is for a really niche use case (adding metadata like timestamps or git commit tags to your JS bundle). Is it really something that needs to have native language and syntax support over just adding a custom build step?


Very nice. This should get rid of a lot of clunky webpack/esbuild/etc junk I have lying around to inline various constants or otherwise one-time configure the runtime.


Are people really turning to Bun over Vite these days? I struggle to see any advantage it offers.


Is Bun the current killer app in Zig?


Written in Zig? But that doesn’t matter?


It matters for Ziglang and Zig users.


How so?

The "killer app" is the app that necessitates adoption of the platform.

E.g. Lotus 1-2-3 is the killer app of the IBM PC. Lots of users want Lotus 1-2-3, therefore they need to adopt the IBM PC platform.


Real-world testing requires a real-world workload.


Sure, but that's not what "killer app" usually means.


I agree. I just think “killer app” means “killer end user application”, whereas you seem to be using it as “killer application of Zig”.



