I suppose I could use my experience to give some #PEP517 build system recommendations.
For pure #Python packages:
1. #flit_core (https://pypi.org/project/flit-core/) — it's lightweight and simple, and has no dependencies on modern Python versions (for older Pythons it vendors tomli).
2. #hatchling (https://pypi.org/project/hatchling/) — it's popular and quite powerful, but has many vendored dependencies and no stand-alone test suite (which makes it painful to maintain in #Gentoo).
For Python packages with C extensions: #meson-python (https://pypi.org/project/meson-python/) — which combines the power and correctness of the Meson build system with very good Python integration.
For Python packages with Rust extensions: #maturin (https://pypi.org/project/maturin/) — which is simply a good builder for precisely that kind of package.
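All of these speak the same #PEP517 hook interface, which is exactly why they're interchangeable from a frontend's point of view. Here's a minimal, simplified sketch of what a tool like `build` or `pip` effectively does behind the scenes (ignoring build isolation, `backend-path` and a few other details; the backend module names in the comment are the real entry points, the rest is illustrative):

```python
# Rough sketch of how a PEP 517 frontend drives any of these backends:
# read [build-system] from pyproject.toml, import the named backend module,
# and call the standard build_wheel() hook.  (Real frontends also handle
# build isolation, backend-path and "module:object" backend names.)
import importlib
import os
import tomllib  # Python 3.11+; older versions need the tomli package instead

with open("pyproject.toml", "rb") as f:
    build_system = tomllib.load(f)["build-system"]

# e.g. "flit_core.buildapi", "hatchling.build", "mesonpy" or "maturin";
# the packages listed in build_system["requires"] must already be installed
# in this environment for the import and the hook call to work.
backend = importlib.import_module(build_system["build-backend"])

os.makedirs("dist", exist_ok=True)
wheel_name = backend.build_wheel("dist")  # returns the wheel's file name
print("built", wheel_name)
```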
Now, I strongly discourage:
A. #setuptools — lots of vendored NIH dependencies (which can alternatively be unvendored, at the price of cyclic deps), lots of deprecations over time (we're still seeing tons of deprecation warnings all over the place), many unsolved bugs (e.g. parallel builds of C extensions are broken in a few ways), a lot of technical debt, and if all that weren't enough, it's slow.
B. #poetry-core — a very tricky build system with lots of pitfalls (I've reported a lot of mistakes made when migrating to it).
C. Practically any other build system — writing new backends is trivial, so everyone and their grandmother must have one. These backends often carry a lot of NIH dependencies (if you're reinventing a build system, you might as well reinvent everything else), their authors often lack experience and reintroduce the same old bugs. And if that weren't enough, packaging them in distributions is a lot of work for no real benefit to anyone.
Some time ago, two #PEP517 build systems introduced #PyPI trove classifier verification. At first glance, it makes sense. After all, if you made a mistake somewhere, you'd rather know early than when you try to upload the package. The problem is that the verification also fires for people building packages locally — including #Gentoo users.
Now, this check is based on the #Python "trove-classifiers" package. Whenever a new classifier is introduced, a new release of that package is made. When you're building a package using tools such as `build` or `pip` (an isolated build), the newest version of this package is installed from the Internet. On the other hand, a Gentoo user may have an old version, unless we enforce an upgrade via package dependencies. Then building packages that use newer classifiers will fail, and with a confusing message too. Confusing because: 1) contrary to the message, the classifier is valid; and 2) even if it weren't, it doesn't affect us in any way.
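To illustrate the failure mode, here's roughly what such a check boils down to (a simplified sketch, not the actual backend code; the trove-classifiers package really is just a regularly updated snapshot of the known classifier strings):

```python
# The backend can only compare against whatever snapshot of trove-classifiers
# happens to be installed locally.
from trove_classifiers import classifiers  # a plain set of known classifier strings

classifier = "Programming Language :: Python :: 3.13"  # imagine this was just added
if classifier not in classifiers:
    # With an isolated build pulling the latest trove-classifiers this never
    # triggers; with a distro's older snapshot, a perfectly valid classifier
    # gets rejected.
    raise SystemExit(f"unknown classifier: {classifier!r}")
```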
And so we asked for an ability to disable this check. While it took some time, the #Hatchling maintainer showed understanding and eventually merged my patch. On the other hand, the #setuptools maintainer… well, started a long and tedious debate that resulted in ignoring the trivial solution to the actual problem (as "unnecessary complexity"). Instead, we were given another option: we could entirely disable `pyproject.toml` validation. That's not really acceptable, for two reasons: 1) setuptools actually relies on this validation (so removing it could result in broken package installs instead of an error if the file is invalid), and 2) it produces an awful warning on every package build. So we'd end up bullying Gentoo users with false warnings, and some of them would probably end up filing invalid bugs to various upstreams.
The bottom line is: don't use setuptools.
https://github.com/pypa/hatch/issues/1368
https://github.com/pypa/setuptools/issues/4459