On top of that, if you look at the 'Pointers & ownership' and 'Collections' sections, the Bun codebase was already prepared, using internal smart pointer types that map 1-to-1 to Rust equivalents, and a `bun_collections` Rust crate already exists.
This gives the impression that the rewrite was planned a long time ago and was the Bun team's proposition to Anthropic during the acquisition deal.
Yeah, I don't know what's true when reading about LLMs. Same with comments here on Hacker News. With so much money on the line, it's clear they would seed communities with marketing shills (and some people are just tribal).
Same. Since they own Bun, they have every incentive to make this seem easier than it was.
This is a huge problem with AI specifically. Tech is becoming very adversarial for workers, since the lines between marketing and technical information are blurring more and more.
Ignoring things like whether the Rust that was output could be deemed qualitatively good, whether the resulting line count is appropriate, how much the codebase was ready or primed for this kind of exercise going in, and so on, is it fair to say that a 622 line artefact created up front is a relatively small cost for a potential increase in consistency or quality of output when the output is ~1M LoC? It seems like there's a multiplicative power here given how much output there is. Or is that missing a lot of nuance?
I'd also be interested generally in how much tacit knowledge was needed to come up with these rules and how much iteration on this file was needed, for example how many of the rules here came from a failure case hit as part of iterating on the translation.
I would guess it was a for ... each loop. They likely wrote a bunch of skills. The for loop went through each file and generated a complementary file, then had another process integrate/validate.
I doubt the entire process was a single week, just whatever harness they specially prepared for the work.
> I doubt the entire process was a single week, just whatever harness they specially prepared for the work.
it wasn't. probably quite a lot of preparation i would think. and it's very much a first pass which is far from idiomatic rust and far from memory safe. still impressive though for what it is.
It would be _so_ easy to alleviate any doubt from this and hype up the IPO even more. They just need to start a separate repo with all the hidden work they needed to do to prod the AI along, and let everyone replicate the results. After all, isn't that what all their customers are trying to achieve? A million lines of usable code in "7" days? Never mind the fact that it will also boost Anthropic's usage metrics as everyone tries to replicate it in their workflows.
If it was beautiful, they would've started with a blog post about this with links and instructions. Perhaps I will still be proven wrong and a blog post is being written as I type this.
This is approaching the size of the Rust compiler itself; except that BunJS is mostly a JavaScript interpreter wrapper + a reimplementation of the NodeJS library (Rust STD wrapper).
I think BunJS is becoming the canary for software complexity management in the LLM era.
Not accurate. Bun is a batteries-included JavaScript & CSS transpiler (parser), minifier, bundler, npm-like package manager, Jest-like test runner, as well as runtime APIs like a builtin Postgres, MySQL and Redis client. This is naturally a ton of code.
It wouldn't have been that hard to do that from Zig if they'd wanted to. They don't, because they want to do everything themselves so that it works exactly the way they want (except the core JS engine for which this is infeasible—though even that has custom patches). After all, there are already plenty of libraries on npm for those other parts of the stack and they do work in Bun.
Bun is not a JavaScript interpreter, it's "only" a reimplementation of the NodeJS library + various other libraries. Bun uses JavaScriptCore as its JS engine. So Bun itself does (or at least should do) no JavaScript parsing, interpreting or JITing.
EDIT: I misread, sorry! You said "JavaScript interpreter wrapper", which is correct.
Bun is now almost twice the size of JavaScriptCore, too, by linecount after this.
This is the 'world class' engineering that Jarred claims he can't hire Americans to do, by the way https://x.com/jarredsumner/status/1969751721737077247. This company is parasitic to its literal (javascript) core.
No, it does parsing and a bunch more. The Bun founder says it best in this comment:
"Bun is a batteries-included JavaScript & CSS transpiler (parser), minifier, bundler, npm-like package manager, Jest-like test runner, as well as runtime APIs like a builtin Postgres, MySQL and Redis client. This is naturally a ton of code."
I'm not sure if it's just the leading '+' or if there are other factors for phone number detection on iOS, but on mobile the line count changes are underlined and I can tap it to start a call, which, if it is because of the diff size, is something I find pretty amusing.
Apple has had a feature called Apple Data Detectors since the 90's that looks for different patterns in text and allows you to perform actions on them.
So if the text includes a phone number, email address, flight number, package tracking number, street address or other pattern in the data it is underlined and allows you to perform one or more actions.
The patterns it looks for and actions it takes are extensible by developers.
Interestingly, the entire line gets formatted once it reaches seven digits, +lines and -lines both, so I guess the -lines prefix is just interpreted as a dash. But your eight-digit string doesn't. Perhaps it's not interesting, though I've never really given it a second thought before.
There’s certainly some regex or similar involved that tries to recognize phone numbers, and then hyperlinks the whole thing. My point was that it’s not solely the plus sign that is triggering it.
I think the unusual thing is that it was written in a week. I highly doubt that they read and understood all 1M lines. But if it works and people use it, what does that mean for software? Should we still care about the code that’s written? Should we even look? I’ve always thought so, but maybe I’m just biased.
I think we should care way more about the validation story of the code. The obvious question is: does it all work? I'm happy to not look at any code if we have good ways to validate what is there. The other thing I care about is the architectural structure of the code. Given it's a port, I don't think that would have changed.
I don't know enough about what Bun does... But Rust is so insanely complicated, it's hard for me to wrap my head around how Bun is equally complicated.
If anything, it's a little surprising that the Rust code isn't significantly larger because I tend to think of Rust as requiring somewhat more boilerplate than JS.
Not to mention how trigger happy LLMs can be when it comes to being overly verbose and adding unnecessary bits even with explicit direction not to do so.
> I think BunJS is becoming the canary for software complexity management in the LLM era.
Yeah, Cursor did the same thing, bragging about how many lines of code they managed to produce for a semi-working browser, completely missing the point that less code is better, not the other way around.
I think their point was that the project is complex, with the implicit assumption that the complexity is to a large degree inherent.
Even if it's mostly accidental, and the code is overengineered slop (which it is), the system being able to decompose a problem and deliver something is impressive in terms of stability: it wasn't sucked into rewriting everything from scratch every time it would run into issues, it didn't have infinite subagent recursion with a one-agent-per-line type workflow, etc.
Cool you can just search specifically for potentially unsafe code in Rust. How do you search for unsafe code in Zig? Or do you just have to assume it's everywhere?
If half of your code is unsafe then, unless you exercise tremendous discipline (Claude basically doesn't), you will just end up with a big ball of unsafe, peppered with hallucinations in whatever random documentation comments Claude decided to make. I doubt they enforced the confinement of unsafe to a specific architectural layer or anything like that.
Aren't the Rust unsafes a reflection of the Zig it was ported from? However now that you're working with Rust, you're in a position to continue improving and eliminating the unsafes.
if half of your files in a million line codebase are unsafe that doesn't tell you much any more. Presumably the point of a Rust rewrite is that you actually make use of Rust's safety features in a coherent way.
But given the whole "let AI rewrite this for me" stunt nature of this project that was not going to happen because that would require well, actual thinking and a re-design. So now you have Zig disguised as Rust and a line-by-line port because the semantics of idiomatic Rust don't map on the semantics of Zig.
>if half of your files in a million line codebase are unsafe that doesn't tell you much any more.
If half of your files in the first pass of a million-line rewrite are unsafe, then that's completely fine. Do you understand what the `unsafe` keyword actually means? It doesn't even mean that the code is actually unsafe, just that the compiler can't guarantee its safety, which can happen for a number of reasons, some benign.
Who rewrites a 700K-line codebase trying to be idiomatic from the get-go? That's setting yourself up for failure, whether you're a human or a machine.
And? This is absolutely the correct and standardized way to do mechanical rewrites: you do a rewrite that maps directly to the original source so you can rely on the original correctness guarantees and bug-for-bug compatibility and log issues, and then you go into the next phase where you begin to use idiomatic constructs.
This is the same in COBOL-to-Java ports that have been done in banking and insurance for the past 20 years.
it isn't, because those guys didn't think a naive 1:1 machine translation would give them the benefits of Java, which somehow the people involved in this Rust rewrite seem to think they've already gained despite the virtually identical code.
If the whole point genuinely had been to do a purely mechanical translation, they could and should have written a transpiler, which would have had significantly higher correctness guarantees than this, given that it'd be deterministic. But of course that would have defeated the PR purpose of this whole thing, which frankly just looks like marketing for Anthropic.
> If the whole point genuinely had been to do a purely mechanical translation, they could and should have written a transpiler, which would have had significantly higher correctness guarantees than this, given that it'd be deterministic. But of course that would have defeated the PR purpose of this whole thing, which frankly just looks like marketing for Anthropic.
If it were just a marketing stunt you wouldn't have all but a fraction of a percent of the test suite passing, with the remaining bugs being realistically very fixable, and everything written in languages with type systems that give far more guarantees than what is possible with COBOL.
You're being extremely negative about this whole endeavour without looking at the evidence that this effort is going far more smoothly than expected, and maps with many people's experience with using LLMs for tasks like these.
>You're being extremely negative about this whole endeavour without looking at the evidence that this effort is going far more smoothly than expected
no I'm being negative because as I just said, if you want to do a purely syntactic translation you don't even need an LLM, that's called transpilation and we've been doing it programmatically for decades.
This is the kind of thing that looks great to people who can't program, think this is some new superpower unlocked by the mystery magic of LLMs and that is exactly the kind of impression Claude wants to sell.
Half of the files contain the 'unsafe' keyword? That doesn't seem like a good rewrite. What is the point of rewriting into Rust if ~half of your code is still unsafe?
Bun is fundamentally a boundary-heavy system and it also rolls its own version of a lot of things that people typically use via libraries, where unsafe is hidden. (no async, memory arenas, etc). It also uses FFI heavily which requires unsafe.
It also looks like the top 2 maintainers are currently actively working on getting the amount of unsafe down and it's going down quickly.
1. Get hired into a company where you have a solid bet on making multi-century lasting generational wealth (>$50,000,000).
2. Every waking moment do everything in your power to boost the company that might give you the ability to define the direction of technology for the rest of your life.
3. Use the only thing you have (bun) to help push you in this direction and do things to help boost LLM marketing (a technology that already deeply struggles to find customers and has to rely on welfare (lucrative government contracts) to make sales).
---
Honestly think this generation of tech workers in SF are more evil than those that worked at Google + Facebook in the early 10s.
What does that have to do with rewriting from zig to rust??? This thread is what's pushing LLM marketing, not the rewrite itself.
If the rewrite is just a stunt and it will crash and burn it will do that whether we spend our free (or work) time writing comments. If there is any hype around this particular topic, it's happening here not in the GitHub repo.
Google and Facebook workers just made a lot of cash and mostly made everyone's life harder by Leetcode and bad interview process, they didn't threaten and actively work to put millions of SE on the street.
> they didn't threaten and actively work to put millions of SE on the street
Programmers in the 90s weren't less evil, nor did they have a stronger moral compass. They simply didn't have the opportunity to reduce the need for their fellow developers on a massive scale. They (we) would have, had we had the chance.
They (we) did it to tons of other industries. And we collectively patted ourselves on the back, saying that automation is a good thing and we're the good guys for doing it and people who lost their jobs will adapt and maybe they should just learn to code.
Now it's happening to (some of) us and suddenly it's evil?
No. The point is: programmers are whores. We like to act all righteous on forums, but very very few of us care enough about the consequences of our code to do something about it.
We either don't think about it ("what could go wrong?"), don't care about it (eh), justify it ("I need to eat!!!", "I'm just following orders"), or actively embrace it ("It's the future!").
> Programmers in the 90s weren't less evil, nor did they have a stronger moral compass. They simply didn't have the opportunity to reduce the need for their fellow developers on a massive scale. They (we) would have, had we had the chance.
Nah. The fact that such opportunity wasn't available attracted a different sort of person.
What is it with tech bros and ridiculous asocial agenda? You have some guilt complex or whatever shit?
> No. The point is: programmers are whores. We like to act all righteous on forums, but very very few of us care enough about the consequences of our code to do something about it.
unsafe just means that you take responsibility for the safety of the code contained within. Calling into non-Rust libraries has to be wrapped in unsafe. Making syscalls has to be wrapped in unsafe.
Bun needs to interact with FFI code. This gets wrapped in unsafe blocks.
There are many places where a JavaScript interpreter and library would need to make unsafe calls and operations.
It doesn't literally mean the code is unsafe. It means the code contained within is not something that can be checked by the compiler, so the writer takes responsibility for it.
There are many low-level data munging and other benign operations that a human can demonstrate are safe, but that need to be wrapped in unsafe because they do things outside of what the compiler can check.
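To make that concrete, here is a minimal sketch (getpid is a real POSIX call; everything else is illustrative):

    extern "C" {
        fn getpid() -> i32;
    }

    fn main() {
        // SAFETY: getpid takes no arguments, touches no memory we own, and
        // cannot violate any Rust invariant. The `unsafe` block only marks
        // that the compiler cannot verify this for us.
        let pid = unsafe { getpid() };
        println!("running as pid {pid}");
    }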
There's actually a good example of this in the rewrite [1], in `PathString::slice`. They are doing an unsafe operation to return a slice that could be a use-after-free, if the caller had not already guaranteed that an invariant will remain true. Following proper rust idiomatic practices, claude has added a SAFETY comment to the unsafe block to explain why it's safe: "caller guarantees the borrowed memory outlives this".
Now, normally, you'd communicate this contract to your API users by marking the type's constructor (PathString::init) as "unsafe", and including the contract in its documentation. Unfortunately in this case, this invariant does not exist - it appears to have been fabricated out of thin air by the LLM [2]. So, not only does this particular codebase have UB problems caused by unsafe code, the SAFETY blocks for the unsafe code are also, well, lies.
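For illustration, this is roughly the shape that contract would normally take (a simplified sketch with assumed field names, not the actual Bun code):

    struct PathString {
        ptr: *const u8,
        len: usize,
    }

    impl PathString {
        // SAFETY contract: caller guarantees `bytes` outlives every use of
        // the returned PathString. Marking the constructor `unsafe` is what
        // surfaces that contract to API users.
        unsafe fn init(bytes: &[u8]) -> PathString {
            PathString { ptr: bytes.as_ptr(), len: bytes.len() }
        }

        fn slice(&self) -> &[u8] {
            // SAFETY: upheld by the documented contract on `init`.
            unsafe { std::slice::from_raw_parts(self.ptr, self.len) }
        }
    }

    fn main() {
        let data = b"src/main.rs".to_vec();
        // SAFETY: `data` lives to the end of main, past every use of `p`.
        let p = unsafe { PathString::init(&data) };
        println!("{:?}", p.slice());
    }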
`PathString` worked the exact same way in our Zig code, with less visibility from the compiler & type system. And yes, it will be refactored heavily (or deleted overall) in the next week or so.
One potential way to solve this in a principled manner is to turn at least some "unsafe" annotations into ghost capability tokens that are explicitly threaded through the code and consistently checked by the compiler. Manufacturing the capability could itself be left as an unsafe operation, or require a runtime check of some kind.
You already see this in some cases, for example the NonZero<T> generic type can be viewed as a T endowed with a capability or token that just says "this particular value of type T is nonzero, so the zero value is available for niche purposes". But this could be expanded a lot, especially with some AI assistance.
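A toy version of the idea (NonZeroProof, check_divisor, and divide are invented for illustration; only NonZeroU32 is real std):

    use std::num::NonZeroU32;

    // A zero-sized "ghost" token: proof that a divisor was checked.
    struct NonZeroProof;

    // Manufacturing the proof requires passing the runtime check.
    fn check_divisor(d: u32) -> Option<(u32, NonZeroProof)> {
        if d != 0 { Some((d, NonZeroProof)) } else { None }
    }

    // The token is threaded through explicitly, so the obligation shows up
    // in the signature and is enforced by the compiler.
    fn divide(n: u32, d: u32, _proof: NonZeroProof) -> u32 {
        n / d // cannot panic if the proof was manufactured honestly
    }

    fn main() {
        if let Some((d, proof)) = check_divisor(4) {
            println!("{}", divide(12, d, proof)); // 3
        }
        // std already fuses value and proof into a single type:
        let d = NonZeroU32::new(4).unwrap();
        println!("{}", 12 / d.get());
    }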
This already happens all the time in rust, including in the standard library. The typical pattern is to define your CheckedType to be
pub struct CheckedType(UncheckedType);
e.g. where its inner field is private. Then, you only present safe constructors that check your invariant, and only provide methods that maintain the invariant.
For a concrete example, String in rust is a Vec<u8> with the guarantee that the underlying bytes correspond to valid UTF8. Concretely, it is defined as

pub struct String {
    vec: Vec<u8>,
}
You can construct a string from a vec of bytes via
fn from_utf8(vec: Vec<u8>) -> Result<String, FromUtf8Error>;
as well as the unsafe method
unsafe fn from_utf8_unchecked(vec: Vec<u8>) -> String;
Note here that there isn't a separate capability/token though. That is typically viewed as bad practice in rust, as you can always ignore checking a capability/token. See for example rust's mutexes, Mutex<T>, which carry the data (T) that you want access to themselves. So, to get access to the data, you must call .lock(). There is a similar philosophy behind Rust's `Result` type: to get the data underlying it, you must handle the possibility of an error somehow (which can include panicking upon detecting the error, of course).
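Tying it together, a self-contained toy version of the pattern (NonEmptyString is a made-up stand-in for String's Vec<u8>-plus-valid-UTF8 invariant):

    pub struct NonEmptyString(String); // private field: invariant can't be bypassed

    impl NonEmptyString {
        // Safe constructor: checks the invariant.
        pub fn new(s: String) -> Option<NonEmptyString> {
            if s.is_empty() { None } else { Some(NonEmptyString(s)) }
        }

        // Unchecked constructor, mirroring from_utf8_unchecked.
        // SAFETY: caller guarantees `s` is non-empty.
        pub unsafe fn new_unchecked(s: String) -> NonEmptyString {
            NonEmptyString(s)
        }

        // Every method can now rely on the invariant.
        pub fn first_char(&self) -> char {
            self.0.chars().next().unwrap() // never panics: non-empty by construction
        }
    }

    fn main() {
        let s = NonEmptyString::new("hello".to_string()).unwrap();
        println!("{}", s.first_char()); // h

        // The unchecked path skips the runtime test but shifts the burden to us.
        // SAFETY: "hi" is non-empty.
        let t = unsafe { NonEmptyString::new_unchecked("hi".to_string()) };
        println!("{}", t.first_char()); // h
    }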
The entire point of unsafe blocks and SAFETY comments is that they are easy for humans to find and audit, but not compiler checkable. If it can be compiler-checked by some clever token system, then ... it's just plain safe rust, and you don't need to document any special safety invariants in the first place
even when you can review the code, it's good to have the compiler check for you. This is for similar reasons why it's better to have CI check correctness on each code change, vs testing the code thoroughly one time, and then being careful going forward.
For the forseeable future, the AI market competition is not about which product can provide the most valuable utility to users. It's about which product can be holding the protective aura of social media and investment zeitgeist while competitors buckle under the strain from unfulfilled hype and over-leveraging.
Utility, engineering, efficiency... these are all menial details for the winners to reluctantly iron out in 2035.
Someone correct me if I'm wrong, but it's unlikely they wrote this initial Rust version and will leave it unchanged as-is. What's there now is a step in a long process, not the final destination.
I think the goal was to do a massive rewrite for Anthropic (they acquired bun) and show that rewriting projects from lang -> lang with Claude can reduce security vulnerabilities to help with the hype for an IPO.
This is an interesting experiment but I’m skeptical of any claims of success by Jarred/Anthropic due to the incentive to hype agents. There’s probably a trillion dollars at stake with the IPO. And Anthropic seems to be developing this part of their business with Mythos and the super review features.
But I'd like to see the same experiment done on a project without so much riding on the story being a success.
There's a reasonable request to run the same analysis for the Zig version of the code as a comparison.
In lieu of that, it seems the Swivel devs ran an analysis on Tigerbeetle, one of the other major Zig projects, and found only 7 medium/low priority issues:
Still writing the blog post about this. Will share more details.
For where this is coming from, skim the bugfixes in the Bun v1.3.14 and earlier release notes. Rust won’t catch all of these - leaks from holding references too long and anything that re-enters across the JS boundary are still on us. But a large % of that list is use-after-free, double-free, and forgot-to-free-on-error-path, which become compile errors or automatic cleanup.
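To illustrate with a toy example (not Bun code), the simplest use-after-free shape doesn't even compile in safe Rust:

    fn main() {
        let buf = vec![1u8, 2, 3];
        drop(buf); // the "free"
        // Any later use is rejected at compile time:
        // println!("{:?}", buf); // error[E0382]: borrow of moved value: `buf`
    }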
> This whole thread is an overreaction. 302 comments about code that does not work. We haven’t committed to rewriting. There’s a very high chance all this code gets thrown out completely.
I'm really out the loop here so maybe you can help answer me a question - why is HN unhappy about this rewrite? why are people writing here almost as if they feel betrayed by Bun being rewritten from Zig into Rust?
I genuinely don't get it. I've been following this Bun stuff a bit but I don't understand where the HN sentiment is coming from.
My read is it's less the rewrite and more the messaging around the rewrite. Nine days between "you're over-reacting" and merge is surprising, to say the least. Sure will be interesting to see that blog post!
posting my read (since it differs so much from the others')- there's a 'holy war' being waged by people that think LLMs shouldn't do full rewrites of software. There are various reasons people think this (think LLMs are parrots that make slop and are incapable of writing good code, have environmental concerns, or are angry that software licenses can be circumvented). I call it a 'holy war' because I think most see our current trajectory as a bit inevitable and have a strong urge to proselytize their views and chide maintainers that use LLMs in ways they don't like.
Very similar angry comments happened with the discussions of the Chardet rewrite, next.js/vinext, and JSONata/gnata if you want to look at this in context.
Still, do you folks never do something to see how you feel about it, then choose to go one way or another? I'm not sure why it's so hard to see that it was an overreaction at the time, because it was an experiment; then at one point it stopped being an experiment, and now they've chosen to actually run with it.
Is this not a common occurrence for other people? Personally I change my mind all the time, especially based on new evidence, which usually experiments like this surface, I'm not sure I understand the whole "You said X some days ago" outrage that seems to cause people's reaction here.
The top comment at that link points out how many of the sibling comments are delirious and emotional, kneejerk responding to the news rather than giving any sort of sober analysis.
That people were overreacting with emotional meltdowns (common in AI-related threads) is perfectly compatible with the branch making enough progress to get merged.
I'm reading through the top comments next to his and don't see that. You can always find delirious and emotional takes, but those didn't dominate the discussion
> [...] Time will tell how this will turn out. Would be nice if the Bun maintainers could give some clarification about what they’re doing here, and why they’re doing this.
> I wonder if a successful, albeit slower, approach would be to walk the git commit history in lockstep, applying the behavioral intent behind each commit. If they did this, I would be interested in knowing if they were able to skip certain bug fix commits because the Rust implementation sidestepped the problem.
No, what we knew then is still what was known then. Today is different, and seemingly they've committed to the rewrite, so now it makes sense that people have strong feelings about it, as it's no longer just an experiment.
> so now it makes sense that people have strong feelings about it, as it's no longer just an experiment.
It also makes sense to have strong feelings when you're able to pattern match well enough to predict something will happen despite others trying to convince you that your predictions are incorrect.
It's not overreacting when correctly predicting the future, just because others couldn't. In the same vein, the idea that "everyone's out to get you" is not called paranoia when there are people actually out to get you. That's better called being observant.
Some of those who predicted correctly might also have overreacted, but I believe that the majority understood that to be a blanket statement about prediction as a whole vs any specific individual reaction.
See what coming?! I really don't understand what's going on here. Correctly predicted what, that Bun was being rewritten into Rust? I'm not sure anyone doubted that, all the work they did was public???
> This whole thread is an overreaction. 302 comments about code that does not work. We haven’t committed to rewriting. There’s a very high chance all this code gets thrown out completely.
With the nearly complete PR containing the port to Rust, a number of people predicted that it was going to happen. They were assured it was unlikely to happen, and then they were accused of overreacting over effectively nothing. When those same people, who were already upset about the rewrite, learned that their predictions (the same ones that were rudely dismissed) were in fact correct, they became upset again, this time about being lied to.
Correct or not, it's reasonable to conclude they were lied to. Especially given they correctly predicted the future.
>Correct or not, it's reasonable to conclude they were lied to.
No it's not. If we were 9 days away from a human-written version of this experiment, then yeah, it would be reasonable to conclude they were lied to, because a human-written version would progress so much slower and steadier that it's very unlikely you hadn't made up most of your mind a week before merge time.
But it's not human-written. It's months, perhaps years of work compressed into a week, where the machine can go from 'nothing is working' to 'everything is working' in a few days. There is nothing reasonable about concluding you must have been lied to when such a delta in such a short time is possible. And if people fail to see that, then perhaps the initial assertions about an emotional meltdown were not so far off after all.
I might surprise you, but tech projects have a social side to them. Decisions like that are discussed with the community. It is completely fine to not give a single shit about the community, but then don't act surprised when the community doesn't give a shit about you.
Decisions like this are discussed however the maintainers of the project wish to discuss them. And a majority of the time, these decisions are made and discussed solely by the maintainers, so I really have no idea what you're talking about.
9 days ago this is how the migration was described:
> I work on Bun and this is my branch
> This whole thread is an overreaction. 302 comments about code that does not work. We haven’t committed to rewriting. There’s a very high chance all this code gets thrown out completely.
> I’m curious to see what a working version of this looks, what it feels like, how it performs and if/how hard it’d be to get it to pass Bun’s test suite and be maintainable. I’d like to be able to compare a viable Rust version and a Zig version side by side.
9 days after that comment, the rewrite has been merged to master.
9 days after "this is my branch" "the code doesn't work" "I'm just curious" "high chance it's thrown out"... it's merged to master.
-
Some people saw the original as an attempt to downplay the importance of the branch in response to negative feedback, rather than accurately describing what the branch represented.
Those people essentially predicted that Bun's actions would shortly reflect much more conviction than was being let on.
Experiments graduate to production all the time, but given the timelines involved, their predictions were correct.
Stop thinking about '9 days' like it means the same thing in an era where machines can generate thousands of lines of code in a few hours.
There is no way a human rewrite like this wouldn't be roughly at the same stage with a 9 day delta. In that case, some of these accusations would be reasonable to make. But that is not the case here.
Yes, because it was an experiment and tests were indeed failing at that point in time, but guess what? When an experiment succeeds, you probably don't throw away the results.
You're free to look down on whoever you want. I'm free to tell you I couldn't care less, and that both replies so far just confirm how much of an emotional meltdown the reactions here really are. Your comment has managed to have nothing to do with the point I was making.
Just because the machines can generate code that quickly doesn't mean that human thought has started moving faster. Everyone's had a problem they were working on where the solution doesn't come while sitting at the desk staring at the code, but three days later in the shower, when eureka! hits. Machines writing code hasn't changed the underlying speed of human thought. That's why people see nine days as too fast, even in this sped-up AI era.
Human-speed thought doesn't matter here because it's not human-reviewed. The code was generated. It exists and it (now) works to the extent they're satisfied with going through with a canary release. Going on about '9 days' is working with a mental model that simply does not apply here. That is my point.
If you think there should be human review or that there should have been a lot more human collaboration, that's one thing but accusing Jarred of lying about his intentions is another thing entirely, and one where '9 days' is not remotely the proof people think it is in this situation.
The chain we're on and the comments I originally responded to have such concerns. And I mean, if it's not going to be reviewed by humans, then really, what makes 9 days too soon? Should the code just sit there collecting dust until everyone agrees an arbitrary amount of time has passed?
Maybe the people who "were overreacting" just happened to have more foresight than you and me? Perhaps they saw where this was heading, and that led to their "overreaction"?
In what way? Foresight about what? It was an experiment before, regardless of people's reaction at the time doesn't make it less of an experiment back then. I feel like I'm misunderstanding this entire conversation right now.
> It was an experiment before, regardless of people's reaction at the time doesn't make it less of an experiment back then. I feel like I'm misunderstanding this entire conversation right now.
Yes - I think I didn't explain my feelings well. But now I've finally understood them! So:
It was an experiment back then. Now, nine days and a million lines later, it suddenly isn't an experiment anymore? I understand there's a comprehensive test suite (yay!) but still... a million-line diff in nine days still sounds like an experiment to me.
The difference is an assumption of good faith, for the most part, and that is to some extent modulated by how reasonable people believe a large scale LLM and/or rust rewrite is a reasonable idea.
Why are you defending them so much, lol. It's no longer an underdog open source project fighting for survival, it's a freaking Anthropic subsidiary that has been bought for hundreds of millions of dollars.
This actually happened to me a couple months ago. Started a Rust rewrite of a project as an experiment, then a few weeks later it was presented to the team and promoted to mainline.
Although in that case the language change was almost incidental — the rewrite was very much not a straight 1:1 port, but more of a substantive architectural overhaul and longstanding tech debt cleanup; Rust was just one of many tools and design decisions that helped get the best possible end result. There were also various reasons it made sense to attempt a rewrite within that particular window of time.
The upshot is we've ended up with a substantially stronger QA posture, a much higher-quality and more maintainable codebase, and an extremely positive audit report by a group that was brought in to review the project. There were some early kinks to work out, but the longer we've lived in this version of code the more it's proven itself to be a stronger foundation than its predecessor.
Of course, Bun is its own thing and all circumstances are unique. I have no idea how that rewrite was approached, whether it was the right decision, or how it will ultimately prove itself. Just saying the shift from "experiment" to "official new direction" is normal and credible, and that I'd give it some time to see how it handles contact with reality before passing judgement. If it's truly a disaster, nothing's stopping them from reversing course and backporting any new changes to the old Zig codebase.
It's a high profile open source project. While Bun/Jarred don't owe anything to anyone, nobody should be surprised when decisions like these result in strong backlash.
Imagine if Guido or Linus said a couple of days ago that they're just experimenting and then submitted and merged complete machine-assisted rewrite of CPython or Linux in Rust.
I bet the blog post will make no mention of pressure from Anthropic to do this and instead will celebrate the fact that “it passes all tests”, of course omitting how many tests were modified to forcibly pass.
Looking forward to the blog post. Do you plan to run both the Zig and Rust binaries side-by-side across a wide range of real applications (potentially shadowing in production) to weed out bugs?
They have a PR (~~closed by GitHub bot as AI slop, ironically~~ this was wrong info, it was apparently closed by Jarred himself as it missed a conversion of some 20 Zig files to Rust) to remove the Zig code.
I bet the answer is industry changing even if the token cost is high.
This work was impossibly expensive in terms of people-hours and time before: architectural planning, engineering alignment and politics, phased engineering that gets interrupted by changing priorities.
That it's possible to do the R&D, the port, and get 99.X% of tests passing in less than 2 weeks is so much more efficient for the humans.
Did you (or will you) implement some kind of e2e (fuzzy?) testing comparing the two binaries? Do you have particular plans regarding the release of this (for ex to not break users workflows or things like that)?
Surprisingly, they appear to have not disclosed any vulnerabilities whatsoever. It's likely there have been numerous vulnerabilities in the past, but they are all being ignored.
Yeah! Why would the company that stands to make themselves look better in front of an IPO do such a thing?! Next thing you're going to tell me is that this whole rewrite was another marketing ploy to help potentially turn themselves into multi-millionaires!
Oh yes, I don't doubt they'd eventually be able to seriously reduce that number, probably to a handful of places. I don't doubt the strategy employed here either: rewrite it while keeping it similar, then slowly change it. I do still doubt they'd be able to completely eliminate memory issues in the end regardless.
When I read what you wrote, I was like "of course, duh, I'm stupid", but running `ag "unsafe" src | grep -i "bunsafety"` it doesn't actually seem to be the case; I see zero bunsafety mentions from it.
However, `ag unsafe` does over-count anyways, just in a different way, matching stuff like SSL_OP_ALLOW_UNSAFE_LEGACY_RENEGOTIATION and _unsafe_ptr_do_not_use and others.
A better command on the same previous commit, `ag -w unsafe src | wc -l`, reports 13,914 "unsafe" usages; slightly better, but still pretty awful.
My understanding is that that's because they were trying to do a structurally homologous port from Zig to Rust, precisely to keep their mental model and not change "too much" at once, and then they plan to refactor to make it safe Rust later.
it's clear that as of the time of this merge, no human has read any appreciable fraction of current mainline bun, so it's not particularly clear how much of a "mental model" exists anymore.
Coming on a bit strong, no? Isn't it possible one could have done an experiment almost two weeks ago, the experiment concluded by today, and a choice has now been made?
Did you think "experiment" meant 100% this will be thrown away? Wouldn't make much sense to experiment with something you know you'll throw away, unless you have some specific reason for it.
Model open source leadership. Imagine the meltdown if Linus said the Linux kernel is not going to be rewritten and then one day woke up and merged a full machine-assisted rewrite in Rust.
Ah, I thought you were referring to a person. I'm sorry for misreading you.
It's still a bad HN comment, I'm afraid (denunciatory rather than curious, for one thing), but it wasn't a personal attack and not a post that would normally clear the bar for a mod reply.
I don't know if the intent was to deceive, but the comments certainly had the effect of deceiving me. I came away from that first thread thinking, "Ah, so the 'story' here is that someone on the project tried an experiment on a branch that they probably should have put in a branch on their personal fork." I was no longer thinking it was a serious possibility that an AI rewrite would get merged.
Wow. This is going to be interesting to follow. There's absolutely no way any of this code was reviewed, but maybe we're in a post-human world now where you can trust the models to write and review the code. This is like Gastown but on a higher profile project. Will be fascinating to see how this project is able to add new features going forward (or even _if_ it will be able to).
Does anyone know how exactly Bun is used by Anthropic? Is it a part of Claude Code? I'm more than slightly worried about using Bun going forward myself, but I'm not sure to what extent that applies to using Claude as well.
Reminds me of going on linkedin and seeing all these sales and product people who are talking big game about engineering now. Well yeah they are definitely producing something but not sure I'd call it "engineering."
You can trust them to flag some things during review that may or may not be relevant. But just like with human review and unit testing, you cannot guarantee the absence of bugs after an LLM code review. It's just another set of (virtual) eyeballs.
I trust them somewhat to flag bugs. I don't trust them to produce clean, maintainable code - even code maintainable by the LLM itself. Any sufficiently complex LLM changeset can be assumed to contain duplicated logic, method scope creep, and code changes without accompanying documentation changes that the model often will not catch no matter how many rounds of review you run. If those issues make it into a commit, the next time you ask the LLM to update some of the functionality that it introduced earlier, bugs will creep in.
I find that documentation creep is wildly better in AI-coded environments than human ones. You can deterministically force a documentation sync process on every PR; documentation rot has gotten way better.
Tests can only prove the presence of bugs, but not their absence. If the AI can access the tests, it can easily make them pass by just adding additional if statements. It doesn't mean the code is actually correct.
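A toy illustration of that failure mode (entirely made up, runnable with `cargo test`): the implementation special-cases exactly what the test checks.

    // A "passing" implementation that games the test instead of being correct:
    fn add(a: i32, b: i32) -> i32 {
        if a == 2 && b == 2 {
            return 4; // special-case the known test input
        }
        0 // wrong for everything else, but the suite stays green
    }

    #[test]
    fn adds_two_numbers() {
        assert_eq!(add(2, 2), 4);
    }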
It also modified many of the tests to make them pass in mischievous ways. You can't trust a test suite to catch regressions if the new version doesn't use the same test suite.
I'm actually excited about somebody experimenting with automated translation, but I'm afraid there will be lots of backwards-compatibility issues.
I started looking at the commits, and it's basically solving the "tests don't pass" problem by changing the tests themselves. The real work of making it work on programs that are already deployed is just starting now.
The only silver lining I see is that the server side JS community for some reason is already used to breakages all the time.
The whole idea that my RUNTIME contains code that not a single human has looked at does make me uncomfortable, but if this actually works without a ton of issues it's pretty remarkable.
The speed of the change did. This is the “climate has always been changing” argument climate deniers make. It is a true statement which is still a lie by omission. Climate deniers purposely ignore that the climate has never changed at the current rate, and AI-stans neglect to mention that before AI nobody was merging a 1M+ lines of code in one go.
> I started looking at the commits, and it's basically solving the "tests don't pass" problem by changing the tests themselves
Not sure if these decisions were made by the LLM, but I've always felt that Claude is more prone to doing "shady stuff" like modifying tests than finding correct solutions to problems.
Yeah, Claude is very creative in finding ways of "solving" problems that go against what the user probably intended.
Having said that, after looking at some of the test changes, they seem to be minor things, like changing timeouts, not changing the actual intended semantics of the tests. But it's too much code to review everything, so I might be completely wrong about that, and in real-world usage, even minor changes like these will cause issues.
I doubt it will end up as a stable release very soon, but I'm happy to be proven wrong. I have some skepticism about this whole rewrite; Jarred Sumner has an enormous internet following and it feels like an ad.
How do you wish to define "ad", and why does it matter? If I tell you I had lunch, I mean, okay, great. If I tell you I had a delicious Coca-Cola with my lunch, sure. If I happen to work at Coca-Cola, does that now become an ad? At what level does it become an issue? And what is the issue?
If you work for Coca-Cola then yea there’s reason to question your intent even if simply because you aren’t objective due to your proximity to Coca-Cola.
On the other hand, the sleep fits the test description better: "should allow reading stdout after a few milliseconds". Even if 1 != 'a few'. It's possible the part of the commit reverted here, https://github.com/oven-sh/bun/commit/a42bf70139980c4d13cc55..., defeated the purpose of the test by removing the sleep. I don't think adding the sleep back is an example of AI cheating.
> I started looking at the commits, and it's basically solving the "tests don't pass" problem by changing the tests themselves. The real work of making it work on programs that are already deployed is just starting now.
Wow, This is definitely quite something for sure.
Can Jarred comment on whether he has read the commits, or respond to your comment? This has basically made me lose the small faith I had in what Bun is doing, if it turns out to be correct.
It's OK, we'll see how it goes. He and Anthropic are giving it to us for free, and nowadays just forking the old version is easy if a project needs that. Even maintenance is much easier using LLMs.
I'm happy it's not a project I'm depending on, but a large enough project had to try this at some point so that we all can learn from how it goes.
I think this is why Anthropic bought Bun: so they can sell big code translation as a feature to all the banks that have wanted to get rid of their COBOL code for a long time.
Still, those banks / enterprises won't appreciate the number of unit test changes.
And I agree with another comment that Codex xhigh is much better for these kinds of tasks, but still hard on this kind of scale.
Jarred has commented on this elsewhere in the thread, basically claiming the parent you replied to is outright lying: it removed no tests and did not meaningfully change annotations to reduce coverage or effectiveness. It added additional tests and made a few changes to hard-coded values due to differences in, for example, how LLVM and Zig handle stack frames.
The MR is right there, linked at the top of this page. You can check who is telling the truth.
That said, I don't know how anyone is actually claiming to have done that. All day, the size of the MR makes the diff take too long to load and GitHub dies. I'll have to pull it later to check myself.
in tsz[0] 100% of tests pass, yet I have a ton of bugs. I don't think any software out there is fully tested, really. I'm experimenting with this idea as well. So far I've learned a ton.
I'm convinced the future of writing code is heavily LLM assisted
> it's basically solving the "tests don't pass" problem by changing the tests themselves.
False.
0 test files were deleted. 0 pre-existing tests were skipped, todo’d, or had assertions removed. 5 new tests were added in test.skip/test.todo state to track known not-yet-fixed bugs in the port that lacked test coverage before.
The merge changed 28 test files in total.
+1,312 lines
−141 lines
Most of that +1,312 is new tests.
The depth-of-recursion tests for TOML/JSONC parsers went from 25_000 -> 200_000 because Rust's smaller stack frames (LLVM lifetime annotations let the optimizer reuse stack slots) mean 25k levels no longer exhaust the 18 MB stack on Windows.
Same, just gonna stick with node. On the other hand, the trial by fire will be interesting to see... long term I can only imagine the kinks will surely work themselves out
As an educational thread, see this one from a week ago where Jarred again deflects from a merge decision and legions of foot soldiers attack anyone who predicted the impending merge:
From "This whole thread is an overreaction. 302 comments about code that does not work. We haven’t committed to rewriting. There’s a very high chance all this code gets thrown out completely." and what seems to amount to some experimental curiosity -- to merging the whole thing in 10 days!? This seems really crazy.
Having just migrated all my team's repos to Bun, I feel… stupid. I was already feeling a little nervous by the time of the acquisition, but this is pretty rough.
Love seeing the tests themselves getting modified, with random `sleep(1)` thrown around in a few of them. This bodes well, I pray some idiot at some large AI co actually ends up using this garbage in prod
It's going to work for the most part. Most people know that. It's a file by file, mostly function by function, conversion from one low level language to another with a very large test suite (with lots of Rust unsafe to work around differences). I've done that for C tools and it's fine, with some obscure edge cases here and there. The challenges are going to be making the new, very ugly, alien codebase idiomatic Rust in future and adding features or debugging the complex issues. I wish the developers luck. They're in for a slog.
I think given the novelty of this, a lot of eyes will be on it, so a lot of issues will be dealt with out of the gate. The problem will be when smaller projects that aren't in the spotlight think it's safe too and then do stuff like this after being encouraged by bun, and for those projects then lots of bugs will just remain unfixed. Basically a nation state adversary's wildest dreams came true today.
If that scenario happens it just means the collapse will be slower but still inevitable as anecdotes pile up and reach critical mass of common knowledge.
If most of the glaring problems are addressed (massive unsafe usage), and metrics show improvement (fewer crashes), then did it really go wrong? The fact that the code is not idiomatic is less interesting, because that can be addressed incrementally. Let's wait 3 months and reflect.
I'm thinking regressions and broken tests. Bun is already known to segfault a lot and their existing tests were rather lackluster, the Rust port being just as unsafe would be the least of their problems.
This assumes that the memory safety bugs in the unsafe Rust port are the same as the Zig codebase. A total rewrite with so little review is virtually guaranteed to introduce many new bugs which very well may be more severe than the old bugs.
I expect it will be just fine. It's like bragging about getting the words right on a mental health exam. AI was given the answer, it just repeated it back in a slightly different format. Even a stupid human could have done that.
However, you can never prove that it hasn't gone wrong, because there are so many long-form problems with software (quiet bugs, maintainability issues, etc). This creates FUD.
did you read their Mythos paper? they're anthropomorphizing it like crazy. Maybe it's just cheap heat, but if they really believe the LLM is conscious..wew
This kind of frivolous nonsense disqualifies bun from ever being a serious option to me. I'm not building any kind of software used in a professional setting on 1M lines of unreviewed code.
Odd take. Bun was not an option for me because of Zig: there was no memory safety, and the issue tracker has 3000 issues about segfaults. Now I might actually reconsider.
I don't believe you actually think it's odd to not want to run unreviewed code in prod. I accept that you might disagree, but I don't believe this is a take you haven't heard a million times before.
Regardless of the outcome, this is such a disrespectful move towards the huge number of contributors who invested time and effort to learn the project and make it better.
I hope the zig/dev community forks the project and continues the development. I'd rather use the fork than this project that has sacrificed its contributors for marketing purposes.
> this is such a disrespectful move towards the huge amount of contributors who invested time and effort to learn the project and make it better.
What? How?
You contribute to projects run by others with the understanding that others run the project, is this not the default assumption others have too when contributing to FOSS?
Is it disrespectful if my proposed feature was merged, but then later was removed because the maintainer just didn't want the feature anymore? In my mind, pretty clear it wouldn't, I'm only a contributor after all, not the maintainer or the person running the project.
> Is it disrespectful if my proposed feature was merged, but then later was removed because the maintainer just didn't want the feature anymore?
No, the big difference is that the described scenario does not require getting familiar with a new 1M LoC codebase written in a different language to be able to continue contributing to the project.
For who? What you say is true for everyone who doesn't know Rust (previously, Zig), and not true for everyone else, same as it has always been for every single FOSS project out there.
So it's disrespectful because before you could contribute, but because of the direction of the project, you no longer can?
Does that also mean it'd be disrespectful to make projects more complicated and complex, because maybe someone who contributed initially doesn't know these new concepts, so introducing them would require this individual to learn about those things?
All of this still sounds like entitlement to me. Open source literally isn't about you; let people run their projects as they wish. Them making choices they think are better isn't disrespectful to anyone else, and you're not forced to contribute to any FOSS projects.
> For who? What you say is true for everyone who doesn't know Rust (previously, Zig), and not true for everyone else, same as it has always been for every single FOSS project out there.
Even if you are fluent in rust, it is going to require significant effort to contribute to a new 1M LoC codebase.
> Open source literally isn't about you, let people run their projects as they so wish, them making choices they think are better
This is so far from reality. The power of open source comes from the contributors. Contributors are the most valuable asset of an open source project: without them most of the free tools you use would be significantly worse, including bun. The reason my open source projects got somewhat successful is the community that formed around them. And it is hard to create a community when you give contributors no chance to participate in the project's direction, especially in such a critical decision with enormous consequences.
> Even if you are fluent in rust, it is going to require significant effort to contribute to a new 1M LoC codebase.
Of course, but this is true for any project and any language; it can hardly be disrespectful of me to choose Clojure just because you don't happen to know it? That sounds crazy to me.
> Contributors are the most valuable assets of an open source project
You're talking about something else. Open source is literally about "this code has a specific license that allows you to do X", where X differs by license. Whether or not there are contributors matters squat for whether some project is open source.
Don't mix concerns here, you're talking about "open development" or something else, not specifically open source.
Sure it's hard to create a community and get contributors and whatnot. But a maintainer choosing a different language, and people feel that's "disrespectful" instead of just "stupid" or "dumb"? No, give me a break: you run your projects your way and let others run theirs their way; they're not made for you, they just happen to be available to you because someone was nice enough to make it so. Don't spoil that by acting so entitled about how they should maintain and develop their project.
> Of course, but this is true for any project and any language; it can hardly be disrespectful of me to choose Clojure just because you don't happen to know it?
Nobody said that the problem is not knowing rust. The problem is changing the whole stack of a project overnight. This requires significant effort to get familiar with, even if a contributor has all the experience in the world with the new stack.
> Don't mix concerns here, you're talking about "open development"
Call it whatever you want; bun could not be the tool it is without its >800 contributors.
I think most maintainers would rather you not contribute to their project if your contribution comes with the idea in your head that you're now a stakeholder who has some share in the project's technical direction.
Of course they're a stakeholder. They've made an investment of time and effort, and they're hoping that it will pay off. The question is whether a maintainer will respect that.
If you want to maintain sole ownership of something that >800 people contributed to, that reflects on you. People will judge you. Most maintainers would feel obligated to concede some control. But LLMs have intentionally aimed to devalue programming, so this transition is totally consistent with the new ownership. And it may be wildly successful, because they've got an unlimited supply of tokens for the foreseeable future.
But I'd say the opposite: Most maintainers would feel blessed to have a lot of contributors so invested that they felt a need to have a say in the direction of the project.
> No, give me a break: you run your projects your way and let others run theirs their way; they're not made for you, they just happen to be available to you because someone was nice enough to make it so. Don't spoil that by acting so entitled about how they should maintain and develop their project.
Well in this case Jarred and Bun can run their project their way, and since they're not made for me, they can just happen to be available to someone else like Claude Code, and they can stay in their happy read-only land.
> Don't spoil that by acting so entitled about how they should maintain and develop their project.
Are you sure you even understand what entitled means?
> Open source literally isn't about you; let people run their projects as they wish. Them making choices they think are better isn't disrespectful to anyone else, and you're not forced to contribute to any FOSS projects.
Tell me you've never worked on any meaningful OSS project.
Good luck to Bun. If I were on its contributor list, and not on Anthropic's payroll, I'd say goodbye and never touch the project with a ten-foot pole. And I say this as honest feedback; save your "don't let the door hit you on the way out".
The difference is exactly the speed. Slowly transitioning from one thing to another gives the opportunity to contributors to get involved in the process.
Just because some set of hypothetical contributors want a slow-moving target and the maintainers want to be on Rust now, I'm supposed to be mad at the maintainers? Why?
PR so thick, the page failed to load the first time I opened it, and the comments still continue to fail to load. Absolutely hilarious. Though that may be just GitHub having a normal one, hard to tell these days.
1,009,257 lines added
4,024 lines removed
6,755 commits
2,188 files touched
I haven't the slightest clue how anyone would even remotely hope to review this. I guess by just using even more AI? Or maybe by throwing some über-hardcore lint pass at it? It really seems more like an exercise in risk assessment than code review.
The maddening thing is that there's a right way to do this if you have the patience and professionalism to do so. It requires building a bit of scaffolding (feature flags, cross-language calling support, harnesses for shadow testing, etc.), then you ship-of-theseus the codebase incrementally. This is not even incompatible with LLM-assistance, plus it breaks the thing up into smaller, reviewable changes that don't break your diff tool!
However, doing it the right way takes a bit more time, involves community feedback, and doesn't produce headlines about huge codebases being rewritten by LLMs in just a few days, so ...
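A minimal sketch of that scaffolding, assuming the old Zig implementation stays linked in behind a C ABI during the transition (`zig_parse_header` is a made-up symbol here, not anything from Bun):

    extern "C" {
        // Old Zig implementation, kept linked in during the transition.
        fn zig_parse_header(input: *const u8, len: usize) -> i32;
    }

    // New Rust implementation (placeholder body).
    fn rust_parse_header(input: &[u8]) -> i32 {
        input.len() as i32
    }

    // Feature-flagged entry point: in shadow mode, run both
    // implementations, log divergences, and keep serving the proven
    // one until the new path has earned trust.
    fn parse_header(input: &[u8], use_rust: bool, shadow: bool) -> i32 {
        let old = || unsafe { zig_parse_header(input.as_ptr(), input.len()) };
        if shadow {
            let (o, n) = (old(), rust_parse_header(input));
            if o != n {
                eprintln!("shadow mismatch: zig={o} rust={n}");
            }
            return o;
        }
        if use_rust { rust_parse_header(input) } else { old() }
    }

Each function migrated this way is a small, reviewable diff, and the shadow logs tell you when a piece is safe to cut over.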
> you can always claim you would have used even more caution and process.
Well, specifically, my claim is that any serious professional in this industry would have done so. But we're essentially in agreement, in the sense that yes, I am allowed to make this claim, and in fact already did, in the comment you are replying to.
EDIT: Actually I've been thinking about this a bit more. The thing about commenting on something that someone did is that you must always comment on it after they did it, otherwise it wasn't "something they did." However, being a "Monday morning quarterback", as I understand it in this context, means "criticism of someone's actions afterwards", so it would appear that I am doing that. I also understand this phrase to have a negative connotation, and I would hate to connote negatively in this otherwise very positive community. Quite a dilemma! Glad I have my life coach LLM to help me sort all this out.
Ah yes, you are actually describing fish shell's Rust rewrite. They specifically called it The Fish Of Theseus which is of course a reference to the ship of Theseus.
Not sure there is much of a point in reviewing a port of this size. It has >1000 instances of `unsafe` and uses the same patterns as the Zig code, according to Jarred. It feels like a vibe-ported version of what the TypeScript team is doing porting TypeScript to Go with codemods.
I just skimmed through the porting guide, and based on the number of unsafe blocks, this looks like a fairly straightforward mechanical translation.
If that is the case, why didn't they just "vibe-code" a Zig->Rust translator and a small Rust/TS/JS/whatever script to orchestrate things? You don't even need pretty-printing support because rustfmt exists.
You'll save a bunch of tokens, probably a lot of time/energy, the process becomes auditable and (hopefully) deterministic, and if there's a mass bug in the translation, you only have to fix it in one spot.
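Something like this driver, say (a sketch; `zig2rust` stands in for the imagined deterministic translator, which doesn't exist):

    use std::{fs, path::Path, process::Command};

    // Walk the Zig tree, translate each file, then let rustfmt handle
    // pretty-printing. A translation bug gets fixed once, in the
    // translator, and the whole run is reproducible.
    fn translate_tree(src: &Path, dst: &Path) -> std::io::Result<()> {
        for entry in fs::read_dir(src)? {
            let path = entry?.path();
            if path.is_dir() {
                let sub = dst.join(path.file_name().unwrap());
                fs::create_dir_all(&sub)?;
                translate_tree(&path, &sub)?;
            } else if path.extension().is_some_and(|e| e == "zig") {
                let out = dst.join(path.with_extension("rs").file_name().unwrap());
                let ok = Command::new("zig2rust")
                    .arg(&path).arg("-o").arg(&out)
                    .status()?.success();
                assert!(ok, "translation failed: {}", path.display());
                Command::new("rustfmt").arg(&out).status()?;
            }
        }
        Ok(())
    }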
I'm confused. Never heard of Bun until a few days ago here on HN. It's some nodejs wrapper thingy, written in Zig, and someone decided to use LLM to rewrite it in Rust. Is this a big deal? Who is even using this software? Why is this big?
Bun isn't a node.js wrapper. It's an alternative to node.js that sits at roughly the same spot in the stack.
Node.js is a distribution of the V8 JavaScript engine (the thing that executes JavaScript in the Chrome browser), along with a bunch of standard library code written mostly in C++.
Bun is a distribution of the JavaScriptCore engine (the thing that executes JavaScript in the Safari browser), along with a bunch of standard library code written mostly in Zig (and now Rust). Bun's standard library is in many cases compatible with or inspired by the Node.js standard library, but with some changes for convenience and performance.
Unfortunately your answer doesn't address "who is even using this software". I am honestly curious. I've never seen it "in the wild" (in job descriptions, hearing from past colleagues, meetups, etc.). The only places I've heard about it are HN and Twitter.
It's primarily used by people who tend to sit on the cutting edge e.g. startups and developers who follow the latest tools. It's not well worn enough to be adopted by slower enterprise environments. Bun is well known within web development but if you don't work in the space and don't keep up to date with modern tooling it's unlikely you would have awareness of it.
To my limited knowledge, "serious" production systems most likely use Node.js instead of any alternatives, and I don't see any movement towards adopting Bun.
I don't think Rust vs. Zig has anything to do with why people are talking about this. It is a large piece of "real software" that underwent a full language transition in ~1 week using LLMs. That is a big deal regardless of the language and will be a case study regardless of how it turns out.
It’s a watershed moment. Basically one of the most controlled applications of an LLM into a robust codebase without regard for the implications of doing so.
Anthropic needed something like this and it must proceed flawlessly. My guess is that nothing will explicitly break. But that’s the difficulty of LLM generated code: nothing breaks. You sit with a codebase that swallows all errors and appears to be working. Silently failing makes debugging performance and behavior much harder.
Bun is not a node.js wrapper, it is a node.js alternative. It had non-trivial adoption, tens of thousands of stars on github for whatever that's worth (before the AI spam took over stars). It was then purchased by Anthropic and now we're witnessing open source software that people used be sacrificed to the altar of LLM marketing hype.
I think relatively few people are probably running Bun in production, but as a dependency management system and bundler for the JavaScript ecosystem, it's similar to `uv` from the Python ecosystem in how much faster it is compared to the most popular alternatives so it's fairly popular in that space.
I've never done any JavaScript development of any kind and had never heard of this either. I thought it was a package manager at first, but apparently it's an entire runtime.
My question is, if it's this trivial to rewrite Zig to Rust, and trivial in general to write Rust at all, why not just use Rust for your server side code in the first place? What's the value of continuing to use JavaScript and putting so much effort into the runtime?
>Is this a big deal? Who is even using this software? Why is this big?
Let's see. $10T in market cap, a significant chunk of everyone's assets and retirement funds, is currently dedicated to the AI build-out because of the potential of AI like Claude Code, which is recently doing $3b in revenue and is built completely on Bun.
If Bun is able to successfully vibe code a complete language shift in this short of time, it much more concretely validates the potential of vibe coding / AI for the entire industry.
I don't understand the rationale behind how any project, especially of this magnitude, can seriously build something stable this way.
My consolation - and it could be pure cope - is that at least I am in the same boat as a huge company like Anthropic, and they surely wouldn't be stupid enough to also build their cli tools around something that they saw as risky.
Cue the clueless CEOs of zig shops (I don't know many, but still):
"Rust is faster and safer! Port it! If you don't do it, I'll do it myself, because AI can do everything a programmer can, including the stuff you don't want to do. Ship it!"
Why would it be? There are projects like Roc that did the opposite: they went from Rust to Zig, as they (had to) use lots of unsafe Rust. And before you ask, no, it was not an AI-generated rewrite.
So the geniuses in the datacenter prefer to rewrite the full codebase in another language instead of maintaining and improving their own fork or contributing to make the current language better.
Impressive to rewrite 1MLOC in a week, yes, but this is more the job of a million monkey programmers crammed in a datacenter than a bunch of geniuses. And I would know, since I'm a monkey programmer who is in danger now... Or maybe the Zig team is in greater danger, since their brains hold the genius juice the clankers are missing, and they should have it by 2027...
> Or maybe the Zig team is in a greater danger, since their brains hold the genius juice the clankers are missing and they should have it by 2027
Imagine you want to monopolize programming by pushing LLMs as an obligatory middleman. Then people who can program without LLMs are a direct threat to your business plan. It's time for us to start hiding. I'm considering adding `co-authored by Claude Code` to my hand-written commits and running Claude in useless loops to mock API usage.
No matter how I look at this, it's churn for the sake of churn.
Even if the translation was free and into ideal idiomatic Rust (and it's obviously not - it's now Zig with Rust syntax) then this would be churn for the sake of churn.
At some project scale the language really stops being any limiting factor, and you're instead mostly dealing with working around past architectural decisions, integration of large changes, deep optimization, steering the codebase into alignment with project roadmaps and long-term goals, regression testing as features get introduced, maintenance of multiple release trains... Experienced software engineers mostly stop caring about simple things like the programming language choice at that point, because whatever issues come from that choice have already been resolved. What matters is stability, careful orchestration of large changes, and a stable and comprehensive test suite.
> At some project scale the language really stops being any limiting factor
That's not entirely true. At a certain scale, some languages start becoming increasingly more of a factor. Memory issues in C/C++ codebases, for example. This is pretty well established at this point, which is why there's a push to move away from memory-unsafe languages. Which likely would include Zig, for better or worse.
I agree that new software should avoid memory unsafe languages, but I would disagree that rewriting existing projects in a memory safe language at all cost is a universally good idea.
I think it's not churn for the sake of churn. It's likely encouraged by the fact that Zig itself will not accept AI-written code contributions.
So now imagine your company and project -- written in Zig -- has just been acquired by the world's biggest/second-biggest AI company.
That company's most successful and popular tool is running on your platform that is written in Zig.
And Zig maintainers want nothing to do with you.
What kind of pressures, real or imagined, do you think that puts on the developers of Bun?
Honestly, from what I've seen from a distance, actual rigorous software engineering doesn't happen at Anthropic. From what we saw of the Claude Code source, the reliability issues over the last few months, and now this. It's just a bunch of people getting high on their own supply falling all over each other. Quality issues galore and a delirious frenzy.
FWIW I don't think it's intrinsic to AI. Codex is very well written (in Rust, BTW), fast, and consistent.
The "idiomatic Rust" thing rubs me the wrong way. If someone writes Rust that compiles and works, that's Rust. full stop. Telling people it doesn't count until it's "idiomatic" is just gatekeeping. It quietly says you're not a real Rust dev until you've put in years and absorbed all the unwritten rules, which shuts out exactly the people who are still learning. Everyone writes "non-idiomatic" code when they start. That's not a failure, that's how learning works. Even if being written by LLMs, the devs still will need to improve their knowledge to keep the codebase.
But that's not idiomatic. Idiomatic would look something like this
    fn add<T: std::ops::Add<Output = T>>(a: T, b: T) -> T {
        a + b
    }
The benefit of the idiomatic approach is that now you have a function which handles a bunch of types, from u32 to f64, and it also handles custom types that implement the Add trait.
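For instance, with a made-up Vec2 type (a quick self-contained sketch):

    use std::ops::Add;

    fn add<T: Add<Output = T>>(a: T, b: T) -> T {
        a + b
    }

    #[derive(Debug, Clone, Copy, PartialEq)]
    struct Vec2 { x: f64, y: f64 }

    impl Add for Vec2 {
        type Output = Vec2;
        fn add(self, rhs: Vec2) -> Vec2 {
            Vec2 { x: self.x + rhs.x, y: self.y + rhs.y }
        }
    }

    fn main() {
        assert_eq!(add(1u32, 2u32), 3);       // integers
        assert_eq!(add(1.5f64, 2.5f64), 4.0); // floats
        let v = add(Vec2 { x: 1.0, y: 2.0 }, Vec2 { x: 3.0, y: 4.0 });
        assert_eq!(v, Vec2 { x: 4.0, y: 6.0 }); // custom type, add() unchanged
    }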
The first method is what you might write if you were, for example, translating from C to Rust. It isn't idiomatic but it's easy to do.
The other thing to realize is that compiler authors optimize for idiomatic. The more you do things in a strange fashion, the more likely you are to stumble over a way of writing code which isn't being looked at when the language team is looking at performance and compile time optimizations.
There's nothing wrong with non-idiomatic code per se. However, part of learning a language is learning the idioms. It makes you better at that language.
I believe q3k's comment should be read as "[even if it's acceptable to the most stringent of gatekeepers] then this would be churn for the sake of churn."
Not really. Rust is designed to be written in a certain way. If you machine translate C into Rust you end up with a load of `unsafe` code that follows the C style but consequently doesn't get any of the benefits of being written in Rust.
Imagine if you translated assembly to C++, but you just did it by putting everything in `asm("...")` calls. That's not idiomatic C++ and you wouldn't get any of the benefits of using C++.
That said, the Rust code I skimmed actually did look surprisingly idiomatic. It wasn't full of `unsafe` like I would have expected.
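To make the contrast concrete, here's roughly what a mechanical translation versus an idiomatic rewrite of the same routine looks like (an illustrative sketch, not code from the PR):

    // What a mechanical C-to-Rust translation tends to look like:
    // the pointer arithmetic survives, and so does the risk.
    unsafe fn sum_raw(ptr: *const i32, len: usize) -> i32 {
        let mut total = 0;
        for i in 0..len {
            total += *ptr.add(i); // no bounds or lifetime checks
        }
        total
    }

    // The idiomatic version: the slice type carries the bounds and
    // lifetime guarantees, so the compiler does the checking.
    fn sum(values: &[i32]) -> i32 {
        values.iter().sum()
    }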
> AI is entirely besides the point here. The changes in this Zig fork are not desirable to upstream for several reasons. [1]
So my view here is that, besides AI policies to filter low-value contributions and "contributor poker" [2] to attract contributors rather than just contributions, a well-thought-out, genius implementation aligned with the Zig roadmap, instead of the "hacky implementation for a flashy headline" [1], would have made the cut.
But then again, this entertaining drama will sadly get deprecated by mid-2027 as the datacenters churn out their own opusrust and clankzig.
Honest question, how many of the leaks and crashes can be attributed to zig the language vs possibly (maybe, we don't know) a loosey-goosey, slot machine approach to development heavily reliant on AI? Will the inherent leaks and crashes be fixed, purely by dint of porting to Rust?
If LLMs can achieve this level of task in 9 days, why do we even need Bun in the first place? Shouldn't we just write our apps in Rust and not even deal with JS?
Why even Rust in the first place? I don't see why we can't go straight from natural language -> Claude -> HTML/JS/CSS bundle. Instead of writing a webpage, one can just write a prompt for each page and serve it with claude.cgi.
Yes, it can. Just vibe code Claude to connect to your lithography machine and voila! Claude will run your factories. Claude can even apply oil to your rusty machines if you choose the $1000/month package.
And if you inject information about the user into the context, everyone can have their own personalized version and we'll turn the internet into the tower of babel where no two people see or experience the same thing.
So many of the code comments on the new port concern only how it was ported, usually referring the reader to the original Zig implementation.
So now I'd basically be reading 2x the amount of comments and code to understand _why_ anything is happening.
I think one of the things I had forgotten about, but which sheds some more light on how this was done, is that Anthropic bought Bun.
The author's change of tone about the capabilities of Claude. The strategy of merging everything at once instead of a slower, careful cutover. The "single author" story that every company loves to put forth.
Well, that escalated quickly. I think I first heard rumors of this a week or two ago. That's a very fast turnaround for such massive code churn. I don't know how to feel about this.
Too bad modern computers are not capable of processing 800 paragraphs of text. That’s several hundred kilobytes! Maybe the technology will advance thanks to AI…
GitHub actually made my computer lag when there were no comments at all, because of the 1 million lines of code added. I could've been the first to respond, but I wanted to say something meaningful, didn't have anything, so I just closed it. I literally had to force-quit my browser because of how much it lagged.
If the Bun team is around, I would be interested to get their opinion on this: in the old days, migrating a 1M-line codebase from one language to another meant you would pretty much become an expert in the target language. The output of the work was team experience/knowledge + the actual rewrite. With this Bun rewrite, do you feel that the Bun team learned something other than "Claude can rewrite a very large codebase in no time", which is impressive in itself? Is the output only the rewrite, or did you learn something along the way? And how do you feel about your answer? Not a snark question; like a lot of others, I'm trying to understand how I feel about how our profession is/has been changing.
Then I decided that software is of limited value without a team to maintain it. Not necessarily because they fix it, but because they represent a bunch of humans who collectively understand it and therefore give it more possibilities.
One of Bun's longstanding issues was that bootstrapping Bun required Bun, so distributions were unable to ship it or anything that depended on it: https://github.com/oven-sh/bun/issues/6887
Any ideas if this is now changing and Bun can be bootstrapped with "just" Rust?
That's pretty... brave? Not releasing it in parallel and spending a few months testing it against the old mainline version to surface issues BEFORE a potential merge?
This may be the largest AI-generated codebase right now, by a lot. It'll be interesting to see how this plays out.
Frontier AI software development still falls short in the design/architecture department, in my recent experience. Though it's pretty impressive at making "working" code.
This being a fairly direct conversion from one language to another, even keeping the same interfaces across files, means the architecture is already in place.
The detailed test coverage is also very helpful for Claude. But even detailed testing can't cover every edge case.
So my questions are:
How well did Claude do on the edge cases?
And how maintainable will this codebase be going forward?
> This may be the largest AI-generated codebase right now, by a lot.
I'm sure there are lots of other large-scale applications of AI, just not many (or any) projects that are open source and this high profile, with changes of this scale being done so far.
Personally, in the past 3 months I've shipped about 2.3M lines of a legacy project migration, though the new codebase is Java + Oracle ADF because of reasons™, and instead of being an interesting codebase, it's more forms-heavy and essentially acts as a front end for a large Oracle instance - think more CRUD than application runtime (with an upsetting amount of XML).
The difference also is that it wasn't migrated by running AI on every file, but rather by dumping the DB schema into JSON and converting the old form contents to a YAML intermediate format that describes what's in the forms, and iterating ever since on creating code that generates code - basically AI-assisted development of a codegen solution, plus AI-assisted sidecars that get merged with the generated code based on markers when something can't be automated that way, and oftentimes also AI-controlled browser-based testing (since Playwright is in the cards for everything, but not yet).
Seems to be going pretty okay so far; it will probably take months more of iteration and fixes. Currently the automated testing is taking a while because, let me tell you, not only is Oracle ADF shit, but so is WebLogic - fuck, I'd be so much closer to being done if I were allowed to pick Python + HTMX or even Java + Thymeleaf. That's still better than a team spending a year on the migration and getting like 10% of the way there.
Obviously there are no more details to publicly share, but the overall vibe is clear: as long as you can test any changes, you can iterate faster than without AI - and the code ends up more readable than what colleagues would often write. The problem is that people previously would squint at the suggestion of 100% test coverage, so most code is written in a way that is straight-up not testable (and often nothing is decoupled from the framework properly, and tests take way too long, in both time and resources).
I for one think it's a fascinating experiment to see how well it goes. Though if it actually works and leads to bun getting better over the coming months, I suspect the arguments against it will just take on a different flavor.
The problem is that many negative effects of this kind of thing won't be clear or immediate, so it's not an easy test to make useful. At minimum, this increases the opacity of the box, reducing perceived trustworthiness.
Zig is still a moving target with big fundamental changes being made to the language from version to version - nowhere near v1. When rust was at this stage of its development you wouldn’t have been able to name many projects either.
Node.js itself is getting quite close to running TypeScript natively, but it doesn't support ES imports of CJS packages or imports without an extension qualifier.
If this means that segfaults become rarer with Bun I might consider using it in production again. As it stands, Bun has been great as an all-in-one TS/JS package manager, build system and test runner but unstable enough that I still want Node running in production backends.
Hopefully this means Bun can now support things that were limitations of the Zig libraries, like being able to upgrade standard TCP sockets to TLS without closing them.
By reading this thread I've learned that, apparently, you are not allowed to rewrite a large piece of software backed by a large test suite in another language within two weeks otherwise you are a witch and need to be burned on a stake. You are also not allowed to move from the PoC phase to lets-do-it phase within a couple of days without being called names. Why are we concerned with speed all of a sudden? Are we in the "people will literally die if a car moved faster than 25 mph" era of software engineering? Let them do whatever they want, they've shown the will to move on from wrong decisions, they will do it again if the Rust port fails to deliver and the whole industry gets to learn from it, whatever "it" might become.
I can't ignore how much this sounds like Stockton Rush.
> "Apparently if you build a submersible with carbon fiber you are a witch and need to be burned on a stake. But look we're making reliable trips down to the Titanic with no problems."
Realistically, this is a forum of experienced engineers watching a company make some extremely questionable but very flashy engineering decisions. There's going to be a lot of people standing around here going "gee I dunno, that seems questionable".
Personally, I think the rewrite will largely work - logically, direct translations from one language to another are pretty well within the realm of the few things LLMs should perform extremely well at. But I also think more information will come out showing this was much more bespoke than just prompting an agent to do the translation. This just feels too much like an ad for Anthropic, I think it's likely there was a lot more human involvement and planning than we are being told.
That you're only just "learning" that these things are true is a damning admission. And to fix your bad analogy, it's more like "hey maybe we shouldn't be allowing f1 street races through school zones".
That analogy might work if this situation were "reckless behaviour risking children's safety", but in this case it's much closer to "we made a large, potentially risky change that you can choose to avoid until it's more mature".
They never denied they'd switch, just said they'd need solid improvements confirmed before they switched. Clearly they've internally decided they've seen the gains necessary to carry on with the switch.
This is silly IMHO. They haven’t released a new official Bun version with this code yet. It is a canary release. Give them a chance to figure it out and try it out and see how the limited number of production users of bun as a runtime experience the move. If it succeeds, this will massively accelerate development and they will have much to teach us all about how to safely code 1M lines with AI and merge it in days. If it fails, we will know that AI isn’t ready for that yet
> By reading this thread I've learned that, apparently, you are not allowed to rewrite a large piece of software backed by a large test suite in another language within two weeks otherwise you are a witch and need to be burned on a stake.
You've just learned that you can't do random shit and not get called out? Were you born yesterday?
The AI polarization is making me sick. Please don't let this style of comment become normalized on HN (and that includes equivalently tribalistic anti-AI comments).
Anyone running bun in production right now has to be sweating lol, this is a ridiculous change for a part of your software stack that really ought to be reliable.
Heavy implications on how the future will be formed if things go well with this port. It would prove a lot of people wrong if things go well 3 months down the road.
The top comment in the thread explains it pretty well, so please don't pretend it's anything else. The point is they went from "chillax, it's just an experiment" to "we'll switch languages via a 1M line vibecoded patch" in two days. People that rely on this software are understandably fearful, since there is no way this change has been properly revised and tested. Although perhaps the mistake was relying on such software in the first place... And so are contributors too, which have seen essentially the entire codebase replaced in a week.
People relying on this software can absolutely choose to stay on current/recent versions until this becomes more mature. My assumption is that the current state allows for public testing, but anyone needing a stable version wouldn't be affected and can choose to not be affected by it.
Merging a complete rewrite in another language in 9 days seems insane to me. Maybe I'm just too cautious but with something like this I'd split off as a separate binary and get some heavy use customers involved as testers first to see if it causes any unforeseen problems before slowly expanding it out.
I'd want to be pretty damn confident it won't cause any regressions before sunsetting the original codebase in favor of this one.
I don't think you're too cautious. Big upgrades and rewrites are somewhat of a "work hobby" of mine, and this seems waaay too fast. I don't know how the Bun canary process works, and I guess their test suite is better than typical projects', but still... I can't imagine this working out well without testing it on a variety of big projects for a significant amount of time.
There’s probably loads(?) of observable behaviors that people rely on, consciously or not. Even _if_ the new thing is 100% spec compliant, it might still be breaking or otherwise problematic for heavy users.
That said, I’d love to be proven wrong. I use Bun from time to time on small stuff and I enjoy it, so I wish them well (:
I thought for sure the peanut gallery was overreacting, especially when the concern was absurd - because who would do such an insane thing? Like, at the time I legitimately thought "no way a project switches over in a few months". Even as an absurd hypothetical, I couldn't imagine the prospect of it being done in a matter of days.
It seems it was an experiment at that moment, and that it went well? I do hope they release it under 2.x though; I can't even imagine how many ways 1M LoC can break, especially if what xiphias says is true:
If I got magically handed the perfect rust rewrite for a project of this magnitude, it would take way longer than 9 days to merge, because I would need to make sure it's actually good.
> it would take way longer than 9 days to merge, because I would need to make sure it's actually good
What if another (unstated) goal of your rewrite was to provide marketing material for how advanced your acquirers AI tools are? The faster the turnaround, the better they (and therefore you) look.
> It seems it was an experiment at that moment, and that it went well?
There's no way they can know that for sure. A change of this magnitude cannot go from experiment to success in such a short time frame. Even if all the code were 100% correct, you can't call it a success until it's battle-tested in real-world scenarios for a while, and that is impossible without time. Same way you can't cook properly by throwing food into a volcano. It's not just about the temperature.
Either the “experiment” claim was a lie or they are being irresponsible.
You have no idea if it was a lie or not. I routinely have my clanker fleet spend a couple days toiling on some crap that I assume I will throw away, but it turns out pretty awesome, so I keep it.
It's entirely plausible that when that comment was posted, he doubted it would work well enough to keep.
(Sensible default for LLM code, btw. But sometimes it works great.)
I have a friend who gets super mad when he fails ">80% chance of success" throws.
That isn't the case here, though. Even if he said there was a high chance the RIIR would be thrown out, 9 days is still an insanely short time for such a rewrite if you're planning to have some sort of community around the project.
Surely the mods will be here to remind you that it's against the rules to direct personal attacks towards other community members, to fulminate and brigade.
Or do those protections only cover whiny open source developers upset about a chat bot writing blogs?
Does anything from that comment say that there was 0% chance the experiment wouldn't be merged into main? I see "very high chance all this code gets thrown out completely", which just means the low chance of it not being thrown out has occurred.
It doesn't say what will happen, but isn't their comment responding to people who don't like the look of this rewrite, and telling them basically that they don't have to think/worry about it? I definitely read it as 'not yet' and not 'another week or so'.
It's also a recipe for failure for ports in general. Same goes for the "not idiomatic Rust" comments above — that would be nonsense.
You want to port it as faithfully as possible to the original, porting it bug-for-bug, quirk-for-quirk. Then, over time, after the port has been proven to be as identical to the original as possible, you can gradually fix those kinds of internals.
That's why TypeScript's tsgo native port is so good.
tsgo will inherit many benefits from Go, even if it is never fully "idiomatic".
This is in direct contrast to this port, which requires significant re-architecting (or made "idiomatic", if you wish) in rust to achieve any of the benefits of the language. You can't re-architect one step at a time.
I don't think you want to achieve any benefits of Rust in the initial port. Because at this scale you will definitely introduce new, and probably subtle, bugs that are not present in the Zig version.
You just want it to be the same, to the maximum extent the language allows. E.g. 1000+ unsafe is the right move, for now.
Reaping the benefits of Rust is for _future_ development.
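Concretely, the incremental path looks something like this (a sketch with made-up names, nothing from the Bun codebase):

    // Step 1: the faithful port keeps the original pointer-level
    // logic, clearly fenced off as unsafe.
    unsafe fn decode_len_raw(buf: *const u8, len: usize) -> Option<usize> {
        if len < 4 {
            return None;
        }
        let b = std::slice::from_raw_parts(buf, 4);
        Some(u32::from_le_bytes([b[0], b[1], b[2], b[3]]) as usize)
    }

    // Step 2, later: a safe wrapper becomes the only entry point.
    // Callers migrate one at a time, and the unsafe body can then be
    // rewritten in place without touching the API again.
    fn decode_len(buf: &[u8]) -> Option<usize> {
        unsafe { decode_len_raw(buf.as_ptr(), buf.len()) }
    }

The safe wrapper is the seam: once every caller goes through it, the unsafe body can be replaced without another round of API churn.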
With weird sadness I have to say, we are being targeted with a new kind of marketing. It doesn't look like this was just a technical decision. If anyone was following what was going on on X, it was crazy, with the amount of content about it.
I couldn't believe before that all the fearmongering was marketing, but I am coming to the conclusion that it is. It's hard to get any signal over noise in the attention economy. They know what they are doing, and it's déjà vu of crypto, but now we are the targets, with rage baits, guerrilla marketing, buzz.
It's cool how you can just do this now in 2026. I hope it gets cheaper and easier to do with other big projects written in outdated or just not good enough languages
Depending on the model I could easily see it approaching 7 figures since Mythos security scans have been 6 figures already and don't require nearly as much output.
I hope it's obvious why I'm removing Bun dependency in all my projects. Would be great to have a non-affiliated zig-bun fork that focuses on, well, runtime.
I have full faith; it's the same really smart people who built Bun (Jarred and team) who spearheaded this and are running it. So I have no reason to believe that this was done carelessly.
That said, I'm still shocked and amazed that something this big is possible these days. But as we've seen multiple times now, one of the most important things your codebase can have is a solid test suite.
I will continue to use bun, because at the end of the day, it isn't just the technology, but the talent/people behind the technology that ensures that it will be solid.
And since that hasn't changed, I will still trust bun and its direction.
Also, Bun is mostly glue code and sort of "user space" libraries (my words), as Jarred has said on X; most of the underlying runtimes like JavaScriptCore, etc. weren't rewritten.
So this isn't like 100% of what we think of as bun was rewritten. It's more like the scaffolding and harness.
Doesn't doing this in the matter of a week or so, by definition mean it was done carelessly?
How could it be possible to test such a complicated piece of software, and review such a large amount of code in such a small timeframe?
Spoiler, it's not. They're merging slop.
yeah but it also made some tests pass by changing the tests. i'm not super familiar so i'll dig more over the weekend, but it seems sus pending more review. i've had ai do similar things that i caught in manual review. cheating the test is bad.
It is well known that agents can cheat or go off on tangents and not recover. Just recently one deleted a bunch of code files that I didn't ask it to touch. The code wasn't even used anywhere.
This is a wild experiment! I do think the incentives are heavily weighted to Anthropic for this to go well. I have mixed feelings about how it will go, but it will result in an important outcome…
I don't really understand the point of this. Is it Anthropic showing off how well their LLMs work? Was it too difficult to find Zig devs, so Bun swapped to Rust? Did Jarred read one too many memes about "rewriting it in Rust" and take them at face value?
I would imagine that there will be bugs migrating all at once, performance will probably be close to the same, and the maintainers will need to context shift from Zig to Rust. A very confusing decision for sure.
Claude is significantly better at Rust than Zig. Zig is changing all the time. If you check my profile comments, I did a quick experiment recently to demonstrate. Essentially, Claude could generate a basic working TCP echo server in Rust in a few seconds. For Zig, whether asking it to do it with just Zig, or with specific versions (0.15 and 0.16, because some fundamental language changes necessitate different implementations), it failed to produce working code in all three cases and also took orders of magnitude longer to generate the code.
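(For scale, the Rust version of that task is a handful of lines - a minimal blocking sketch, with the port number an arbitrary choice:)

    use std::io::{Read, Write};
    use std::net::TcpListener;

    fn main() -> std::io::Result<()> {
        let listener = TcpListener::bind("127.0.0.1:7878")?;
        for stream in listener.incoming() {
            let mut stream = stream?;
            std::thread::spawn(move || {
                let mut buf = [0u8; 1024];
                // Echo every chunk back until the peer closes the connection.
                while let Ok(n) = stream.read(&mut buf) {
                    if n == 0 { break; }
                    if stream.write_all(&buf[..n]).is_err() { break; }
                }
            });
        }
        Ok(())
    }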
Aside from the big marketing play, Claude not being able to easily generate Zig code was probably a big motivator - it doesn't make Anthropic look good and it doesn't fit into how they're doing things.
Also, you're assuming that actual traditional maintainers even exist now. Likely it's a smaller team of people running Mythos agents with an unlimited budget and no real need to fully understand the code.
Probably some combination of: Anthropic is heavily invested in the Rust ecosystem and they want their core tools to be built on Rust. More Rust developers. More Rust training data so LLMs write better Rust code than Zig code. Advertisement for Claude Code doing major work on a high profile open source project.
On one hand I kinda feel validated for having jumped ship on Zig 3+ years ago[1] and moving everything to Rust[2], the language simply being too unstable and unsafe in my eyes, despite my love for comptime and people arguing that Bun and TigerBeetle were proof that it wasn't the language's fault.
But I also feel bad for the Zig project losing one of its flagship projects, because while I find the project ultimately anachronistic, I know what it's like to pour your sweat, heart and soul into something, and having it replaced within a week is a sobering experience even from afar.
A couple years ago this would have been unthinkable because of how slow legacy codebases and rewrites are.
I wonder if Tigerbeetle will also have problems arguing for their solution now that the other project they can point to for customer assurance is gone. And I wonder if they will follow suit eventually simply due to marketing pressure (after having been bitten by the Zig compiler I was surprised that they were putting their super duper high reliability database on top of it at all, but with another big player using it there was at least some peace of mind for their enterprise customers).
> I wonder if Tigerbeetle will also have problems arguing for their solution now that the other project they can point to for customer assurance is gone.
In general, we never like to appeal to popularity (a logical fallacy), but why would you assume here that we would point to Bun specifically (or any project for that matter) [1] as an example of Zig’s quality?
We prefer to judge Zig’s quality on its own intrinsic merit:
For example, we subject the language through TigerBeetle to inordinate amounts of fuzzing, perhaps more than any other language (you could say Zig is lucky to have TB’s test suite aimed against it!).
Literally 1,024 dedicated CPU cores, 24/7.
Zig holds up remarkably well.
We also recently pledged $512K to the ZSF, together with Synadia.
These are the kinds of things we prefer to point to. Not hype, but real end-to-end systems engineering, and long term financial support, regardless of the language we choose to use.
[1] I picked Zig back in July 2020. At the time, the largest project was River, but already Zig was a phenomenal choice, and the years have only shown that Zig was probably one of the best design decisions in the development of TigerBeetle. It turned out better than I imagined.
Correct me if I'm wrong, but the three largest Zig projects (by far, with a huge gap between them and the rest of the pack) are Bun, Ghostty, and TigerBeetle.
A language so niche that it only has 3 major projects is a liability. Now it has 2 major projects, one of which is yours. Even I as a weird language connoisseur would raise an eyebrow at that.
After switching from Zig to Rust, I felt like the language was helping me improve the correctness of my project, to argue that the fuzzing of your project helps improve the correctness of the language feels backwards and adds to my suspicions.
We both know that fuzzing is great, but whether you fuzz with 1,000 cores or 1,000,000 cores, with an exponentially growing state space it doesn't make (that much of a) difference (I know you guys are not doing naive fuzzing, which is extremely cool, but the shape of the problem is still O of evil-shaped). Most things you can find with fuzzing are shallow-ish, and if you want to go deeper you need formal verification (for which a strong type system is a good first approximation, and I'm not aware of something like Kani for Zig).
I like TigerBeetle and I still wish you guys all the success in the world, but I can't help and wonder where you could be by now if your language was lifting you up, instead of you having to lift up your language.
While I don’t have personal experience with either project, I feel it is safe to say that Bun and TigerBeetle are not comparable projects: TigerBeetle has a strong focus on testing and correctness, and Bun maybe not so much. IIRC, TB did well in the Jepsen test and had one segfault in a client library. Bun has had quite a few memory safety issues, in fact, the stated motivation for the Rust move is to eliminate those going forward. We shall see how that pans out.
I'm pretty sure they'll miss the full developer salary that Oven used to sponsor, which it no longer does.
I'd wager one doesn't do a rewrite like that, if you are in great personal standing with the language foundation.
That same "just don't use it" attitude was what drove me away from Zig btw. I would have been fine in restricting myself to a somewhat stable subset, e.g. if, loop + function calls, but they didn't want to provide any tiered stability guarantees for the language.
Opinionated is great, no local minima is great, but you have to accept that if you don't want to engage with the needs of your (professional) community then what you do is a hobby project. A very cool hobby project beloved by thousands, but a hobby project.
I'm not expecting the whole language to be stable, but I expect certain parts of it to be more stable than others. E.g. control flow vs. async.
I'm not saying that they can't work that way; more power to them. But then expecting anybody to use it in a professional setting is also unrealistic. You can't have your cake and eat it too: either it's your personal project and you are fine with nobody using it but you, or you evangelise for people to use it, but then you also need to make at least some effort to not break their stuff on a whim, or to accept their change requests when they put in the work, as was apparently the case for Bun.
Tbh I don't see Zig hit 1.0 with a meaningful user-base, it's probably going to mostly get eaten by Rust or some other language and will continue to exist as a niche thing, kinda like D.
Having one of the flagship/showcase codebases rewritten to Rust in a week feels like a death knell. Either the community or the language is too unworkable if someone that heavily invested into it jumps ship, and I'm afraid it's kinda both.
Will be interesting to see how this pans out. Some people will see minor issues as proof that AI is terrible, but honestly, if this gets released and is relatively uneventful, it just highlights how the art of building software has changed completely in the last few years.
It's not that weird to end up with this when translating C/Zig/C++ to Rust. A first pass can use unsafe and then when the code is in Rust you can work on reducing the unsafe.
Trying to eliminate all unsafe as part of the rewrite, whether done by human or LLM, would be making too big of a change in the process of rewriting.
Sure, but that's kind of orthogonal. Even imagining doing this by hand, I still think going like-for-like with the Zig, even if that means a lot of unsafe, is a good approach.
But I suppose if you are already using LLMs it's more reasonable to try and go from Zig straight to Rust with no/minimal unsafe.
The benefit of using Rust is that you know exactly where the unsafe code is so you can handle it explicitly and deliberately to avoid issues by imposing carefully crafted constraints... oh.
The result is so horrible that Anthropic will quietly move to Node in 6 months. Now they got their headlines and in 6 months everyone will have forgotten about it.
To me the interesting thing to watch about this project is that if it fails and Bun becomes a piece of shit even with all the resources at their disposal, it means LLMs are probably not going to be the revolutionary tech everyone has been hyping them up to be. They're useful, sure, but software engineers aren't going away. How could anyone interpret this any other way?
I wonder if the whole acquisition was done so that they have guinea pigs that can’t say no…
or if I want to be cynical… so that they have a big enough project where they can force gigantic rewrites without considering the outcome from the project’s point of view, all so that they can fuel their marketing strategy.
I mean, aside from the somewhat... dishonest statements from the people involved: giving false explanations is one thing, but calling people who smelled this "overreacting" gives this a weird taste.
I am neutral on such a rewrite itself, there are pros and cons to the whole "rewrite in Rust" topic. People are making decent arguments. But the way the initiator here reacted makes it seem like the Bun team itself thinks they are doing something weird here...
Guess reviewing any code isn't exactly their thing either anymore? And I guess adjusting the tests themselves is certainly one way to make things pass.
Ultimately this just seems like it was done specifically to make Bun more "ai friendly". Whether it turns out good or not that appears to be the motivation behind it.
It's interesting that the developer who spearheaded the hype of Zig abandoned the engineering without addressing the segfault.
They could have also taken the approach of gradually porting from Zig to Rust via FFI.
Yes, this is a slop show by the AI lab.
Well, this is uncomfy. Wait... a week ago this was just framed as an experiment, and now it's being rammed through?
Even if it works/is correct/etc, this is shockingly careless.
If I'm going to be using your thing to build on top of, I sure as hell don't want to see you 180'ing a week after you just said you weren't going to do exactly what you just did.
i find it hilarious how desperate people are to cope that this can’t possibly work, must be horrible, etc. for all i know, it is. but let’s just see how well it works, rather than “no true scotsman” grouse about it. it is so sad.
it reeks of “doth protest too much” energy. if it were so obvious that ai was insufficient to do the work, then i don’t think you’d have to circle the wagons about it. you could just confidently watch the market turn on the product and know the reason why. and all that would prove is just how special you all are that ai cannot replicate your genius.
the reality is that foundation model makers have been dogfooding their own vibes for multiple years now, and it clearly is good enough for _them_. but yeah, i'm sure that's just a total fluke and they are all idiots. /eyeroll
What does this mean for bun add-ons like opencode's opentui? Did FFI also somehow get ported or will that have to be updated? https://github.com/anomalyco/opentui
Node's been calling native code distributed in an npm package "add-ons" for a decade and a half.
Fair call on the same C ABI. Adapting to Node 26.1.0's new FFI is happening in https://github.com/anomalyco/opentui/pull/104 . There are also some new FFI adapters opentui is adding there, and they're adding a worker.
So there is some adaptation. That was the sort of interesting, useful, actual look I thought might be informative, whereas I feel like you were mostly just trying to be curt and maintain a status quo of keeping us all uninformed. Let's try actually providing useful steps forward when we post, ok?
This will go down in history as the biggest mistake of software engineering of all time.
Bun is the runtime of Claude Code, which is the core product of a trillion-dollar company, which now sits on a vibe-coded app that not a single person in the world has a proper mental model of.
Claude Code itself is purely vibecoded, both CC and Bun leads are saying that humans are not writing code at Anthropic anymore. It is amazing how much money they intend to squander, because it's all funny money to them, investors just give it to them hand over fist for them to burn. Developing wrappers around the model isn't even the hard part and yet they're going to burn themselves to the ground getting high on their own supply.
> Claude Code itself is purely vibecoded [...] money they intend to squander [...] going to burn themselves to the ground getting high on their own supply.
This really really really isn't the burn you think it is. Going from 0 to 2B+ in revenue from a "purely vibecoded" thing is what they've said they're doing, and what they've actually done. Like, already done. It's not going back, no matter how many nuh-uhs people write. They've already shown this can be done.
People will continue to think that this is some sort of a gotcha. But it's actually precisely what they've done: they showed that dogfooding works. If this works, why not x y z?
2B+ in revenue on hundreds of billions in investments and future commitments is completely worthless. Anybody can turn $100b into $2b, that's not a fucking accomplishment. And to the extent that something is driving any revenue, it is the model, not the TUI. Any success Claude is having is despite the godawful TUI, not because of it.
claude.ai (their ChatGPT equivalent) was nowhere before CC came about. CC was coded in a few weeks by people, then for a few months by people + CC, then mostly CC took the wheel. It is without a doubt the main reason why they're successful. It is also the main reason why their coding models are as good as they are. They've incorporated the early data into their training recipes, and evolved model + harness together.
They appear to be lining up a funding round at a $900 billion valuation. Or, to be more conservative, they already raised at $380 billion. A long way from worthless.
Maybe this is the best marketing trick for Claude Code ever. Maybe there was pressure from Anthropic to do this and prove the value. Even partial success is enough to prove the value, justify the value and usage, and AI dependency even further.
Running the Rust version in their prod for two weeks should be long enough to catch the biggest crashes and fix them. It'll be up to bug bounty hunters to find the big one that crashes all their app servers at once.
Well, realistically, humans also gave us software that is full of security holes (and bugs); which one have you seen that a human perfected the first time around? Give AI some time as well, to be fair.
My initial reaction was that this is pure insanity but in fairness this is a fairly 1:1 port of existing code, so the developer's mental model of it should still match fairly well.
I did pick that at random but it does look like the best case. I skimmed through a lot of the Rust code and there's a surprisingly small amount of `unsafe`.
Still pretty insane to merge this in such a short time with so little testing, but I can easily think of bigger software engineering mistakes. Hell it's not like Bun even needs to be commercially successful any more.
We have hundreds of projects that run on Bun. (Some are Bun-specific for whatever reason, but most are runtime-agnostic TypeScript code that runs on Bun, Node 24.2+, and Deno, which means they run their test suites on Bun in addition to the other two.)
Out of curiosity, I installed the canary Bun and just ran a bunch of them. It didn't take me long to find one that works on stable Bun and crashes on "canary" Bun.
schematic git:(main) bun upgrade --canary
[1.55s] Upgraded.
Welcome to Bun's latest canary build!
Report any bugs:
https://github.com/oven-sh/bun/issues
Changelog:
https://github.com/oven-sh/bun/compare/0d9b296af...19d8ade2c
schematic git:(main) bun run main.ts serve
Schematic Editor running at http://localhost:4200
Bundled page in 25ms: src/web/index.html
frontend TypeError: Cannot destructure property 'isLikelyComponentType' from null or undefined value
at V0 (http://localhost:4200/_bun/client/index-00000000ac7e3555.js:24:2534)
at reactRefreshAccept (http://localhost:4200/_bun/client/index-00000000ac7e3555.js:21:6090)
at http://localhost:4200/_bun/client/index-00000000ac7e3555.js:8766:27
at CY (http://localhost:4200/_bun/client/index-00000000ac7e3555.js:21:8973)
at nY (http://localhost:4200/_bun/client/index-00000000ac7e3555.js:21:9285)
(...more like this...)
at m (http://localhost:4200/_bun/client/index-00000000ac7e3555.js:21:8773)
at http://localhost:4200/_bun/client/index-00000000ac7e3555.js:24:6482
at http://localhost:4200/_bun/client/index-00000000ac7e3555.js:24:6548
from browser tab http://localhost:4200/
^C
schematic git:(main) bun upgrade --stable
Downgrading from Bun 1.3.14-canary to Bun v1.3.14
[2.02s] Upgraded.
Welcome to Bun v1.3.14!
What's new in Bun v1.3.14:
https://bun.com/blog/release-notes/bun-v1.3.14
Report any bugs:
https://github.com/oven-sh/bun/issues
Commit log:
https://github.com/oven-sh/bun/compare/bun-v1.3.14...bun-v1.3.14
schematic git:(main) bun run main.ts serve
Schematic Editor running at http://localhost:4200
[browser] Version mismatch, hard-reloading
Bundled page in 20ms: src/web/index.html
# working fine as usual... ¯\_(ಠ_ಠ)_/¯
I mean "passes test suite" is one thing. And a good thing. But... "doesn't break any (or even, say 99.5%) of the apps deployed around the world that are built on bun" is a pretty radically different thing.
It's hard to feel like this is responsible behavior, but I will reserve judgement for now, and see how long they persist this "canary" phase.
If they extend it for a lengthy period, and even like, fix bugs on the Zig version and the Rust "canary" version, then... I would be mollified to a great extent, since it is so easy to switch between the Zig stable version and the Rust canary version.
As a pretty heavy user of Bun, I'm actually pretty psyched for it to switch to Rust... but given the abruptness and speed so far, I can't quite shake the "new AI dealer getting high on his own supply" vibe.
But I hope they enter an intensive phase of prioritizing any and all "canary" bugs, and come out on the other side with a better product, and an even faster rate of improvement (which has honestly been pretty wild already).
(Yes, of course, I will have my clanker file a bug report with repro... but that may take a few days.)
Vibe coders keep saying that now you can have 100x productivity, that you can write a million lines of code in a week and do what would take a team of 10 experienced developers a year.
So where are all these million-line vibe-coded projects? I don't see them. It's all hype.
This PR appears to be over a million lines (though GitHub won't load for me).
Of course the quality is the real question. I haven't had amazing results with LLMs with Rust, but they're less bad at it than they are at Zig, which is probably the reason for the rewrite.
At least in this case the original code was written carefully by hand, so the design is sane, and now just the auto-translation is in question. Now it just needs to be battle tested.
I trust Jarred to make the right decisions regarding bun, which seems to be his passion.
Bun has always been amazing since I first tried it; it had some bugs along the way, which didn't last long.
Anything bad that comes from this will simply be fixed.
I hope more software does this and gets rid of its segfault-producing code written in C++ and other unsafe languages.
I might not necessarily agree with the haste / stability of this, but I commend Jarred for pushing boundaries on what AI coding is capable of, can't deny that.
4 years ago this would've seemed like science fiction.