If I were to host regular #HardenedBSD hacking sessions via #Signal, using screen sharing, would you attend?
I wish Signal Desktop supported screen sharing. I would probably do regular #HardenedBSD live hacking sessions over Signal if it did.
edit[0]: According to https://infosec.exchange/@david_chisnall/114270027799791523, Signal does support screen sharing! I'm definitely gonna give this a shot.
building #GerbilScheme from source on #HardenedBSD, as one does.
I need to give this Scheme a try, since it is one of the most "batteries-included" Schemes I have seen, with documentation that doesn't make me feel great despair.
Current status: Setting up an internal test #Radicle network. I'd like to see if we can at least provide our own Radicle seed network for the #HardenedBSD src and ports trees.
I don't want to place undue burden on the main Radicle network. At least, not until we confirm that it can handle our larger repos.
We've now exposed the #HardenedBSD arm64 package builder web interface. You can now follow along on the progress of our arm64 package builds.
Setting the promisc flag manually on em0 didn't help.

What helped is adding the em0 interface to a new bridge0 and setting the IPv4/IPv6 addresses on the bridge. Things seem to be stable (for now.)

The promisc flag is indeed still set on em0.

Weird.
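For reference, the bridge setup described above can be made persistent in /etc/rc.conf. This is a sketch only: em0 and bridge0 match the post, but the addresses are placeholders standing in for the real ones.

cloned_interfaces="bridge0"
ifconfig_em0="up"
ifconfig_bridge0="addm em0 inet 192.0.2.10/24 up"
ifconfig_bridge0_ipv6="inet6 2001:db8::10/64"

Putting the addresses on bridge0 rather than em0 is what keeps the member interface as a plain bridge port; the bridge code sets promisc on members automatically, which is why the flag stays set on em0.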
OSDay 2025 - Why Choose to Use the BSDs in 2025
There was limited time, so I couldn't go into much detail and had to keep things more general and structured than usual.
https://it-notes.dragas.net/2025/03/23/osday-2025-why-choose-bsd-in-2025/
One of our two Cavium ThunderX1 servers is now online! It's currently building the base set of packages I use.
Once the server is fully set up, I'll kick off our first full arm64 package build in multiple years.
arm64 support is coming back to #HardenedBSD!
@lattera @phreakmonkey
IIRC, until some point (but before the switch to OpenZFS), opensolaris_load="YES" had to come BEFORE zfs_load="YES" in /boot/loader.conf.
And both the ancient ZFS code in FreeBSD (aka ZoF, in contrast with ZoL) and OpenZFS are derived from CDDL'ed code from OpenSolaris.
What's more, HardenedBSD is derived from FreeBSD. So at least some of the error messages from ZFS contain "Solaris" in their wording.
#ZFS #HardenedBSD #FreeBSD #OpenZFS #Solaris
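The module ordering mentioned above would have looked like this in /boot/loader.conf (a sketch of the historical configuration; on modern OpenZFS-based FreeBSD/HardenedBSD, zfs_load="YES" alone is sufficient):

opensolaris_load="YES"
zfs_load="YES"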
That tower system on the bottom: that's #HardenedBSD's first ever crowd-funded system for the project. That system is a decade old and still in use :-)
I long for the day when we at #HardenedBSD can switch to #Radicle as our code forge.
#HardenedBSD goal for this weekend: Deploy two servers that have been gathering dust.
First server will be one that we can provide signed test images for when we need to bisect the codebase for bugs.
The second server will be for other experiments.
The electrician completed his work. Both of the new 20A circuits are working. The #HardenedBSD infrastructure has now been moved to the new circuits. Zero downtime. :-)
The electrician will be here shortly. I plan to keep the #HardenedBSD dev/build infrastructure online unless the electrician needs me to power it off.
Amazing that it took them this long. The new product lineup from #Synology for this year only accepts Synology branded drives.
If you happen to own an older Synology and are looking to replace it: consider moving towards a more standardized platform, like the Zimacube. Or, if you are feeling a bit more adventurous, build your own.
Don't switch to #QNAP, #Asustor or whatever - they'll be the next brand to enshittify after Synology.
There is some fantastic and open software available. I personally prefer #HardenedBSD with #ZFS - but there is so much more!
The dogs are finally tired. Now I can take care of some #HardenedBSD stuff, starting with the libc/csu/rtld related merge conflict in our 14-stable branch.
Goal for this weekend: Resolve the #HardenedBSD 14-STABLE merge conflict with upstream #FreeBSD.
This involves the libc/csu/rtld issues from before. I gotta re-learn and apply what I did on 15-CURRENT.
I migrated the #HardenedBSD #Vaultwarden instance from one host to another.
Today, I'm grateful for vm-bhyve and #ZFS.
On the original host:
# zfs snapshot tank/bhyve/vaultwarden-01@2025-03-13
# zfs send tank/bhyve/vaultwarden-01@2025-03-13 | ssh sync@second-host "cat > /path/to/sync/storage/vaultwarden-01.2025-03-13.zfs"
Then on the new host:
# zfs recv tank/bhyve/vaultwarden-01 < /path/to/sync/storage/vaultwarden-01.2025-03-13.zfs
# vm config vaultwarden-01
[editor brought up to change network0_switch to the proper value for the new host]
# vm start vaultwarden-01
And I have the zfstools package performing auto-snapshots of the entire VM's storage every 15 minutes, hour, day, week, month, and year.
That way, if the VM is compromised, I can simply rollback the entire VM to the last known good state.
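A rollback like that might look like the following (a sketch: the snapshot label follows the zfs-auto-snap naming convention that zfstools uses, but the exact name is an assumption):

# vm stop vaultwarden-01
# zfs list -t snapshot -r tank/bhyve/vaultwarden-01
# zfs rollback -r tank/bhyve/vaultwarden-01@zfs-auto-snap_2025-03-13-14h15
# vm start vaultwarden-01

The -r flag destroys any snapshots newer than the one being rolled back to, so it pays to pick the last known good snapshot from the list first.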
The #HardenedBSD dev/build infrastructure will be powered down in around an hour from now (currently 23:07 UTC). This is in preparation for the planned electrical work tomorrow (12 Mar 2025).
The infrastructure will be back online within 48 hours.
Thank you for your patience and understanding.