It scales quite well. Linux itself is developed in this way. Or perhaps you think Linux isn't at a large enough scale? (No sarcasm; I know there are projects out there much bigger than Linux.)
I'm not sure about "scale" but Linux, as an open-source project where external actors want their code included, can push arbitrary amounts of work onto those actors with no additional cost to itself.
They can say "to get your code upstream you have to do twice as much work" or 3x or 4x or whatever. It's not their cost to bear.
An internal team pays that cost itself. It has to consider whether a pristine commit history is worth the additional overhead of producing it.
I personally care more about PR size and am happy to squash all commits in a PR. If that is too big, I'd rather see multiple smaller PRs.
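A minimal sketch of the squash approach, using `git merge --squash` (GitHub's "Squash and merge" button does the equivalent server-side). It builds a throwaway repo so the commands are runnable end to end; the branch and file names are illustrative.

```shell
set -e
dir=$(mktemp -d); cd "$dir"
git init -q -b master
git config user.email dev@example.com
git config user.name Dev

echo base > f.txt; git add f.txt; git commit -qm "base"

# A PR branch with messy work-in-progress commits:
git checkout -q -b pr-branch
echo a >> f.txt; git commit -qam "wip 1"
echo b >> f.txt; git commit -qam "wip 2"

# Squash-merge: stages the combined changes on master without committing,
# then one commit records the whole PR.
git checkout -q master
git merge -q --squash pr-branch
git commit -qm "feature: one squashed commit"
git log --oneline
```

The upshot is that master's history gains exactly one commit per PR, regardless of how many commits the branch accumulated during review.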
I think the extra work for a single developer to perform atomic commits is justifiable.
How does this work with multiple developers working on the same repo? I'm assuming everyone works on their own feature branch and sends a PR once their branch is done? Should the commits also be tagged with the feature branch they're on? And should the CI approval workflow run against every combination of commits on the feature branch, or only against the final HEAD?
With git, I think it helps to think of development in terms of patch series.
An individual commit is a single patch, intended to do one thing (and hopefully do it well), and a feature branch is a patch series.
A pull request is then a request to review the series. If you need to change things, git allows you to rewrite your commits to send in a revised set of patches.
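One common way to "rewrite your commits" for a revised series is fixup commits plus an autosquash rebase. The sketch below builds a throwaway repo so it runs end to end; branch names, file names, and `GIT_SEQUENCE_EDITOR=:` (which accepts the rebase todo list unedited) are all just for illustration.

```shell
set -e
dir=$(mktemp -d); cd "$dir"
git init -q -b master
git config user.email dev@example.com
git config user.name Dev

echo base > base.txt; git add base.txt; git commit -qm "base"

# A feature branch as a two-patch series:
git checkout -q -b my-feature
echo one > one.txt; git add one.txt; git commit -qm "feature: add one"
echo two > two.txt; git add two.txt; git commit -qm "feature: add two"

# Review feedback: the first patch needs a fix. Record it as a fixup
# commit targeting that patch (HEAD~1 here):
echo one-fixed > one.txt
git add one.txt
git commit -q --fixup=HEAD~1

# Fold the fixup into its target, rewriting the series in place:
GIT_SEQUENCE_EDITOR=: git rebase -i --autosquash master

git log --oneline master..my-feature
```

After the rebase the branch is still two clean patches, with the fix absorbed into the commit it belongs to; force-pushing the branch (ideally with `--force-with-lease`) then updates the PR with the revised series.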
Before merging, your CI would create a temporary branch off the current master, merge your feature branch to that, and run tests against the result. I don't think testing individual commits (fully, at least) in a series makes much sense if you're going to merge all of them to master anyway.
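The pre-merge CI step described above can be sketched as follows: merge the feature branch into a temporary branch off master, test the result, and leave master untouched. The repo setup and the `grep` standing in for a test suite are illustrative, as are the branch names.

```shell
set -e
dir=$(mktemp -d); cd "$dir"
git init -q -b master
git config user.email ci@example.com
git config user.name CI

echo base > app.txt; git add app.txt; git commit -qm "base"

git checkout -q -b my-feature
echo feature >> app.txt; git commit -qam "feature work"

# What CI would do before merging:
git checkout -q master
git checkout -q -b ci-merge-test           # temporary branch off master
git merge -q --no-ff --no-edit my-feature  # merge the whole series
grep -q feature app.txt                    # stand-in for the test suite
git checkout -q master
git branch -q -D ci-merge-test             # discard the temp branch
```

If the tests pass on the temporary merge result, the real merge to master is known-good; if they fail, master was never touched.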