bolha.us is one of the many independent Mastodon servers you can use to participate in the fediverse.
We're a Brazilian IT community. We love IT/DevOps/Cloud, but we also love to talk about life, the universe, and more.

#hardenedbsd


Current status: Setting up an internal test #Radicle network. I'd like to see if we can at least provide our own Radicle seed network for the #HardenedBSD src and ports trees.

I don't want to place undue burden on the main Radicle network. At least, not until we confirm that it can handle our larger repos.

Continued thread

Setting the promisc flag manually on em0 didn't help.

What helped was adding the em0 interface to a new bridge0 and setting the IPv4/IPv6 addresses on the bridge. Things seem to be stable (for now).

The promisc flag is indeed still set on em0.

Weird.
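For anyone wanting to replicate the workaround above, a bridge like that can be made persistent in /etc/rc.conf. This is a sketch only: the addresses below are illustrative placeholders, not the actual ones in use.

# /etc/rc.conf fragment: create bridge0, add em0 as a member,
# and move the IP configuration onto the bridge
cloned_interfaces="bridge0"
ifconfig_em0="up"
ifconfig_bridge0="addm em0 inet 192.0.2.10/24 up"
ifconfig_bridge0_ipv6="inet6 2001:db8::10/64"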

One of our two Cavium ThunderX1 servers is now online! It's currently building the base set of packages I use.

Once the server is fully set up, I'll kick off our first full arm64 package build in multiple years.

arm64 support is coming back to #HardenedBSD!

Replied to Shawn Webb

@lattera @phreakmonkey
IIRC, until some point (before the switch to OpenZFS), opensolaris_load="YES" had to appear BEFORE zfs_load="YES" in /boot/loader.conf.
Both the older ZFS code in FreeBSD (aka ZoF, in contrast with ZoL) and OpenZFS are derived from CDDL-licensed code from OpenSolaris.
Moreover, HardenedBSD is derived from FreeBSD, so at least some of the error messages from ZFS contain "Solaris" in their wording.
#ZFS #HardenedBSD #FreeBSD #OpenZFS #Solaris
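The ordering described above would look like this in /boot/loader.conf on those older, pre-OpenZFS systems:

# /boot/loader.conf: opensolaris.ko had to be loaded before zfs.ko
opensolaris_load="YES"
zfs_load="YES"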

#HardenedBSD goal for this weekend: Deploy two servers that have been gathering dust.

The first server will let us provide signed test images for when we need to bisect the codebase for bugs.

The second server will be for other experiments.

Amazing that it took them this long: the new product lineup from #Synology for this year only accepts Synology-branded drives.

If you happen to own an older Synology and are looking to replace it, consider moving to a more standardized platform, like the Zimacube. Or, if you're feeling a bit more adventurous, build your own.

Don't switch to #QNAP, #Asustor or whatever - they'll be the next brand to enshittify after Synology.

There is some fantastic and open software available. I personally prefer #HardenedBSD with #ZFS - but there is so much more!

I migrated the #HardenedBSD #Vaultwarden instance from one host to another.

Today, I'm grateful for vm-bhyve and #ZFS.

On the original host:

# zfs snapshot tank/bhyve/vaultwarden-01@2025-03-13
# zfs send tank/bhyve/vaultwarden-01@2025-03-13 | ssh sync@second-host "cat > /path/to/sync/storage/vaultwarden-01.2025-03-13.zfs"

Then on the new host:

# zfs recv tank/bhyve/vaultwarden-01 < /path/to/sync/storage/vaultwarden-01.2025-03-13.zfs
# vm config vaultwarden-01
[editor brought up to change network0_switch to the proper value for the new host]
# vm start vaultwarden-01

And I have the zfstools package taking automatic snapshots of the entire VM's storage every 15 minutes, hour, day, week, month, and year.

That way, if the VM is ever compromised, I can simply roll back the entire VM to the last known-good state.
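As a sketch of how that snapshot schedule is typically wired up: zfstools drives its zfs-auto-snapshot script from cron, and datasets opt in via the com.sun:auto-snapshot property. The entries below follow the pattern from the port's documentation; the retention counts and the dataset name are illustrative assumptions, not necessarily what's running on this host.

# Opt the VM's dataset in to automatic snapshots:
# zfs set com.sun:auto-snapshot=true tank/bhyve/vaultwarden-01

# /etc/crontab entries (illustrative): zfs-auto-snapshot <label> <keep-count>
PATH=/etc:/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin
15,30,45 * * * * root /usr/local/sbin/zfs-auto-snapshot frequent  4
0        * * * * root /usr/local/sbin/zfs-auto-snapshot hourly   24
7        0 * * * root /usr/local/sbin/zfs-auto-snapshot daily     7
14       0 * * 7 root /usr/local/sbin/zfs-auto-snapshot weekly    4
28       0 1 * * root /usr/local/sbin/zfs-auto-snapshot monthly  12

Rolling back after a compromise is then a matter of picking the last known-good auto-snapshot and running zfs rollback against it.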

The #HardenedBSD dev/build infrastructure will be powered down around an hour from now (it is currently 23:07 UTC), in preparation for the planned electrical work tomorrow (12 Mar 2025).

The infrastructure will be back online within 48 hours.

Thank you for your patience and understanding.