Hacker News | cookiengineer's comments

If someone drops 5 confirmed ring 0 exploits/bypasses within 3 months and claims to have a 6th one... why on earth would you assume that the 6th is suddenly fake?

Do you know how hard discovering even one of those is? And how many months of work it takes?


this claim is in another galaxy, not your average 0-day

Note that RedSun and Bluehammer were silently patched: Microsoft never responded to the CVEs and never credited the researcher's work.

That's what this is about: Microsoft engaging in bad security practices while trying to get away with it, which led to this outcome.

The researcher also claims to have another version ready which also allows bypassing TPM+PIN via a similar backdoor, which I'm inclined to believe.

Why do I believe that? Finding 5 ring 0 zero days within 3 months, all by the same person, is statistically extremely unlikely. Whoever this person is really knows their exploits, and must be in the league of Juan Sacco.


the only way to bypass PIN would be an actual backdoor in Bitlocker. no way around that. an actual backdoor in microsoft encryption was never documented, and there are Snowden documents showing FBI pressing Microsoft into introducing one and Microsoft refusing

so I call bullshit on the PIN bypass


You're assuming the PIN was ever connected to the key itself in the first place. We don't know how that mechanism works, it could just be a totally separate gate that IS bypassable.

> the only way to bypass PIN would be an actual backdoor in Bitlocker. no way around that. an actual backdoor in microsoft encryption was never documented, and there are Snowden documents showing FBI pressing Microsoft into introducing one and Microsoft refusing

A USB stick containing a masterkey to decrypt a bitlocker volume is literally the definition of a backdoor.

Go on, try it out. It works.


no, to access a bitlocker volume which automatically decrypts

that's an LPE, not an encryption backdoor

the USB stick doesn't decrypt bitlocker, it just gives you root after bitlocker was AUTOMATICALLY decrypted


Smells like a compromise. Microsoft enables BitLocker by default, thus protecting companies and users at scale. But the price is a backdoor they hope no one finds.

Someone else claimed this doesn't affect people who actually care about security and enable boot-time password protection.


> no, to access a bitlocker volume which automatically decrypts

> that's an LPE, not an encryption backdoor

No. RedSun and Bluehammer were LPEs.

> the USB stick doesn't decrypt bitlocker, it just gives you root after bitlocker was AUTOMATICALLY decrypted

No, that's not what the bypass does. Maybe go try it out and verify it before jumping to conclusions?

It's not tied to "automatically decrypted" volumes, whatever that would even imply for a setup that requires a TPM keystore, which would then be pretty pointless.

If your claim were true, it would also imply that BitLocker's cryptography never really worked, because volumes would be automatically decryptable without needing a password/hash/whatever to get the keys from the keystore. That would actually make it so much worse, even worse than the previously known cold boot attacks.


its pretty obvious you have no idea how bitlocker works, and its various modes - TPM only, TPM+PIN, PIN only

> its pretty obvious you have no idea how bitlocker works, and its various modes - TPM only, TPM+PIN, PIN only

How could anybody besides a Microsoft employee know that, given the appearance of this bypass technique?


Linux can decrypt BitLocker-encrypted drives. The cryptography is known and solid. The issue is that, as 'aiscoming says, its surroundings in Windows make the quality of the cryptography irrelevant.

In the default BitLocker configuration, Windows puts all the key material in the TPM, locked behind the usual trusted-boot stuff: known-good BIOS hashes the bootloader and tells the TPM, bootloader hashes the kernel and tells the TPM, kernel hashes the initial process and tells the TPM, (I’m not sure how far it goes in this specific application,) and at the end of it the TPM won’t release the keys unless the entire chain was correct. This process does (modulo TPM flaws) ensure the disk will only be decryptable when in the original computer running the original OS. It does not ensure that the original OS will not subsequently give a root shell to anyone who walks up to the keyboard and types in a cheat code, and that’s essentially what’s happening here.
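The "measure and extend" part of that trusted-boot chain can be sketched as a toy model (not the real TPM API; the component names and chain length are made up for illustration): each stage hashes the next one into a PCR, and the sealed key is only released if the final PCR matches the known-good value.

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    # TPM PCR extend: new PCR = H(old PCR || H(measured component))
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

def measure_boot(components: list[bytes]) -> bytes:
    pcr = bytes(32)  # PCRs start zeroed at power-on
    for component in components:
        pcr = extend(pcr, component)
    return pcr

# "Seal" against the PCR value produced by the known-good chain.
good_chain = [b"bios", b"bootloader", b"kernel", b"init"]
sealed_pcr = measure_boot(good_chain)

def tpm_releases_key(chain: list[bytes]) -> bool:
    # The TPM only releases the key if the measured chain matches exactly.
    return measure_boot(chain) == sealed_pcr

print(tpm_releases_key(good_chain))                                       # True
print(tpm_releases_key([b"bios", b"evil loader", b"kernel", b"init"]))    # False
```

Note what the sketch does and does not promise: any tampered component changes every subsequent PCR value, so the key stays sealed. But once the good chain has run and the key is out, nothing here constrains what the booted OS does with it afterwards, which is exactly the gap being discussed.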

Cellebrite et al. take a similar approach: after your Android phone boots and you first enter your PIN (which, unlike with BitLocker defaults, is required to unlock the TPM, hence the distinction between "before first unlock" aka BFU and "after first unlock" aka AFU), the key material is already in RAM and breaking dm-crypt is not necessary; all that's needed is to find a USB stack vulnerability or a Bluetooth stack vulnerability or whatnot that can be leveraged into a root shell.


Note that Microsoft did take the "Linux can decrypt drives in TPM-only" scenario into account. If any UEFI settings related to things like boot order are changed, the computer is supposed to see that the settings have changed and require the recovery password to unlock the volume. Knowing the quality of vendor firmware implementations, I'm not sure how well this works in practice.

Agreed that the default BitLocker config is much less secure than having a PIN at boot time, due to the amount of code that gets run.


Same experience. I had saved up for an X1C and got one, but luckily I was able to send it back within the 2-week window. I live in the EU, so I was able to demand a refund.

Now I am rebuilding my old Ender 3 with OpenBuilds parts into a CoreXY setup: all-metal hotend, sturdier metal frame, and a newer RAMPS board with a Raspberry Pi and Klipper setup. I don't know enough about the multi-tool side of things yet, but maybe I'll focus on that afterwards.

I am having tons of fun doing so; it has been quite a while since I rebuilt my Anet A8 into an AM8 with custom Marlin firmware.


> APIs are contracts. Not the pinky promise of "I'll do my best guess"

You have never had to work with PHP backends, have you?

JSON in PHP is a flustercluck. Undefined, null, "" or "null", that is always the question.

If you use a typed Go/Rust client and schemas, you usually end up with "look ahead schemas" that try to detect the actual types behind the scenes, either with custom marshallers or with some v1/v2/v3 etc schema structs.
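The comment is about custom marshallers in typed Go/Rust clients, but the same defensive normalization can be sketched in Python (the field names and payload are hypothetical): collapse the PHP backend's undefined / null / "" / "null" variants into one proper "no value" representation before the rest of the code sees it.

```python
import json
from typing import Optional

def norm_str(obj: dict, key: str) -> Optional[str]:
    # Collapse the PHP-backend variants: a missing key, null, "",
    # and the string "null" all mean "no value" here; anything else
    # is treated as the real string.
    value = obj.get(key)
    if value is None or value == "" or value == "null":
        return None
    return str(value)

payload = json.loads('{"name": "alice", "email": "null", "phone": ""}')
print(norm_str(payload, "name"))   # 'alice'
print(norm_str(payload, "email"))  # None
print(norm_str(payload, "phone"))  # None
print(norm_str(payload, "fax"))    # None (undefined key)
```

The obvious caveat: this heuristic eats any field whose legitimate value is the literal string "null", which is exactly why such look-ahead schemas feel like whack-a-mole in practice.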

It's so painful to deal with duck-typed languages ... that's something I wouldn't wish on anyone.


Yes I have. You learn the quirks and then it’s ok.

I mean, there are still people who think that a UFO was sighted in Roswell, at the radar testing site of Area 51.

Imagine that: 70-ish years later, there are people who cannot grasp how modern the A-12 prototype was. [1]

In my opinion the US has a real scientific education problem. Even granting alien life advanced enough to build machines that can bridge distances of light years of travel time, the belief that such beings would remotely resemble us in appearance is statistically so close to 0 that I have no words for how unlikely it is. You have a greater chance of getting hit by a lightning strike in every millisecond of your life than of this being the case.

[1] https://en.wikipedia.org/wiki/Lockheed_A-12


We have control flow. It's requirements specifications and test driven development. You just have to enforce it, so the agents cannot cheat their way around it.

I decided to build my agentic environment differently: local only, sandboxed, and enforced with Go-specific requirement definitions that the different agent roles cannot break, as a contract.

That alone is far better than any of the hyped markdown-storage-sold-as-memory projects I've seen in recent weeks.

Currently I am experimenting with skills tailored to other languages, because agent skills are actually kinda useless: they're not enforced, nor can any of their metadata be used to predictably verify their behavior.

My recommendation to others: treat LLM output as malware. Analyse its behavior, not its code. Never let LLMs work outside your sandbox, and make sure they cannot escape it. That includes removing the Bash tool, for example, because that's not a reproducible sandbox.

Also, choose a language that comes with a strong unit-testing methodology. I chose Go because it allows me to write unit tests for my tools, and even for agent-to-agent communication down the line (with some limitations due to TestMain, but at least it's possible).

If you write your agent environment or harness in TypeScript, you have already failed before you started: the compiled code isn't type-safe at runtime, because the compiler doesn't generate type checks in the resulting JS code.

Anyway, my two cents from the purple-teaming perspective of trying to make LLMs as deterministic as possible.


Fun fact: you still can't build the vllm container with updated dependencies since llmlite got pwned, either due to regression bugs or due to impossible transitive dependencies in the dependency tree that cannot be resolved. There is just too much slopcode down the line, and too many dependencies relying on pinned, outdated (and unpublished) dependencies.

I switched to llama.cpp because of that.

To me it feels more and more like the slopcode world is the opposite of the reproducible-builds philosophy. It's like the anti-methodology of how to work in that regard.

Before, everyone was publishing breaking changes in patch releases because nobody adhered to any API versioning standard. Now it's every commit that can break things. That is not an improvement.


Write-only code is such a bad, bad idea. No one is reviewing 20k LOC PRs with 15 new dependencies in an afternoon. Sorry, it's just not happening; I don't care how many years you have been a software engineer. Yet that's the new thing and how we are all supposed to work, or else we are all Luddites.

I'm personally waiting to be downgraded to simply being called "lazy".

When I see pages of obviously generated prose submitted as any kind of documentation, my eyes just glaze over. I feel guilty sharing similar stuff too, though to my credit, I at least always lead with a self-written TLDR; the slop is just for reference. But it's so bad, genuinely distressing even. I don't want to read all that junk, and more and more of it gets produced.

Prose type docs have always been my Achilles heel, and this is like the worst possible evolution of that.

For a brief period in the past few weeks, they somehow managed to make a change to ChatGPT Thinking that made it succinct. The tone was super fact-oriented too. It was honestly like waking up from a fever dream.


slopcode is a pejorative that means nothing to me. if you have an actual criticism to make, then do it

If I am not allowed to criticize software stability or lack thereof, what am I allowed to criticize? The color of the terminal output or what? What is an "actual criticism" for you?

Can you elaborate on why those bugs weren't found by e.g. fuzzing in the past?

I'm genuinely curious what "types" of implementation mistakes these were, e.g. whether they were library-usage bugs, state-management bugs, control-flow bugs, etc.

Would love to see a writeup about these findings. Maybe Mythos is a hint that better fuzzing tools are needed?


If I had to guess, I'd say that AI is better at finding TOCTOU bugs than fuzzing because it starts by looking at the code and trying to find problems with it, which naturally leads it to experiment with questions like "is there any way to make this assumption false?", whereas fuzzing is more brute force. Fuzzing can explore way more possible states, but AI is better at picking good ones.

In this particular sense, AI tends to find bugs that are closer to what we'd see from a human researcher reading the code. Fuzz bugs are often more "here's a seemingly innocuous sequence of statements that randomly happen to collide three corner cases in an unexpected way".

Outside of SpiderMonkey, my understanding is that many of the best vulnerabilities were in code that is difficult to fuzz effectively for whatever reason.
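For readers unfamiliar with the bug class mentioned above: a TOCTOU (time-of-check to time-of-use) flaw is a gap between validating a resource and using it, during which the resource can change. A minimal file-based sketch (toy size limit and file contents are made up):

```python
import os
import tempfile

def read_if_small(path: str) -> bytes:
    # Time of check: stat() the path and validate its size...
    if os.stat(path).st_size > 1024:
        raise ValueError("file too large")
    # ...race window: another process can swap or grow the file here...
    # Time of use: open() may see a different file than stat() did.
    with open(path, "rb") as f:
        return f.read()

def read_if_small_safe(path: str) -> bytes:
    # Safer pattern: check and use go through the same open handle,
    # so there is no path-based window to race against.
    with open(path, "rb") as f:
        if os.fstat(f.fileno()).st_size > 1024:
            raise ValueError("file too large")
        return f.read()

with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"ok")
racy = read_if_small(tmp.name)
safe = read_if_small_safe(tmp.name)
print(racy, safe)  # both b'ok' when nobody races us
os.unlink(tmp.name)
```

This fits the point above: a fuzzer mutating inputs rarely lands in the narrow race window, while reading the code makes the "can this assumption go stale between check and use?" question jump out.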


Fuzzing isn't good at things like dealing with code behind a CRC check, whereas the audit-based approach using an LLM can see the sketchy code, then calculate the CRC itself to come up with a test case. I think you end up having to write custom fuzzing harnesses to get at the vulnerable parts of the code. (This is an example from a talk by somebody at Anthropic.)
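The CRC problem can be illustrated with a tiny sketch (the packet format and function names are invented for the example): a parser that rejects anything whose CRC32 trailer doesn't match, and a harness helper that recomputes the checksum after mutation so fuzz inputs actually reach the parsing logic instead of dying at the gate.

```python
import struct
import zlib

def parse(packet: bytes) -> bytes:
    # Toy format: payload followed by a 4-byte little-endian CRC32 trailer.
    payload, crc = packet[:-4], struct.unpack("<I", packet[-4:])[0]
    if zlib.crc32(payload) != crc:
        raise ValueError("bad checksum")  # naive fuzz inputs die here...
    return payload                        # ...and never reach the real logic

def fix_crc(packet: bytes) -> bytes:
    # Fuzzing-harness helper: recompute the trailer after mutation,
    # so coverage gets past the checksum check.
    payload = packet[:-4]
    return payload + struct.pack("<I", zlib.crc32(payload))

good = b"hello" + struct.pack("<I", zlib.crc32(b"hello"))
mutated = b"hellX" + good[-4:]    # a blind byte flip breaks the CRC
print(parse(fix_crc(mutated)))    # b'hellX', reaches the parser body
```

An LLM auditor effectively plays the role of `fix_crc` on its own: it sees the checksum in the code and constructs a passing input by hand, which is what the talk example describes.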

That being said, I think there's a lot of potential for synergy here: if LLMs make writing code easier, that includes fuzzers, so maybe fuzzers will also end up finding a lot more bugs. I saw somebody on Twitter say they used an LLM to write a fuzzer for Chrome and found a number of security bugs that they reported.


I never understood why there is no interactive Help program like there was in the "old days", when CHM files on Windows 95/98/XP were a thing. CHM files and their interactivity are heavily underrated; they were some really good documentation, especially the ones from IDEs and compiler suites.

Today I wish there was something like this, but made for tutorials and wizards. If someone presses "Help", they should not have to go to your website just to literally never find any help for their problem.

We are in the golden age of LLMs, yet nobody uses them to explore and discover locally hosted knowledge bases ... which are, in my opinion, their single most useful use case. You could build such a great UX with it.

For example, I'm self-hosting a lot of archived wikis via a Kiwix server: DevDocs, Wikipedia, and various dev and cyber related wikis. Having an LLM assistant running on top of those locally was probably the best improvement to my learning experience. The workflow is integrated into my custom New Tab page; it's literally a search field on my browser's home page, so it's always accessible.


Damn. This went from car battery style electrocution to custom PCB design unexpectedly quick.

Kudos, amazing design.

