bolha.us is one of the many independent Mastodon servers you can use to participate in the fediverse.
We're a Brazilian IT community. We love IT/DevOps/Cloud, but we also love to talk about life, the universe, and more.

#k8s


Great kudos to the k9s creators k9scli.io/ - it is really what I need to get a quick overview of and info about my #kubernetes clusters. Once the keybindings are memorized, it's the fastest way to gather information that I've found so far. No browser-based dashboard has convinced me yet.
#k8s

k9scli.io · K9s - Manage Your Kubernetes Clusters In Style: K9s provides a terminal UI to interact with your Kubernetes clusters. The aim of this project is to make it easier to navigate, observe and manage your Kuber...

The new k8s bug has a lame name: IngressNightmare. *sigh* Where's the clever word play?

Too many people are relying on simply appending "Nightmare" to the name of the attack surface these days... might as well get an LLM to name them if we're just going to copy the last bug's name all over again.

Anyway, you can read about it here:

wiz.io/blog/ingress-nginx-kube

#threatintel, #k8s

wiz.io · Remote Code Execution Vulnerabilities in Ingress NGINX | Wiz Blog: Wiz Research uncovered RCE vulnerabilities (CVE-2025-1097, 1098, 24514, 1974) in Ingress NGINX for Kubernetes allowing cluster-wide secret access.

Made my first #Go contribution. It's very basic, but it's on the Helm project.
It will help people easily see how long they have been waiting for their #k8s resources to become ready.
It will be useful when "helm upgrade --wait" is taking too long.

More to come.
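
For the curious, here's a back-of-the-napkin Go sketch of the idea (not the actual Helm change; `waitWithProgress` and the fake readiness check are made-up names for illustration): poll until things are ready and periodically log the elapsed time, so a long `helm upgrade --wait` stops being a silent pause.

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// waitWithProgress polls `ready` until it returns true or the timeout
// expires. Every `logEvery` it prints how long we have been waiting,
// so a stuck wait is no longer a silent pause.
func waitWithProgress(ctx context.Context, timeout, logEvery time.Duration, ready func() bool) error {
	start := time.Now()
	ctx, cancel := context.WithTimeout(ctx, timeout)
	defer cancel()

	poll := time.NewTicker(2 * time.Second) // readiness poll interval
	defer poll.Stop()
	progress := time.NewTicker(logEvery) // progress log interval
	defer progress.Stop()

	for {
		select {
		case <-ctx.Done():
			return fmt.Errorf("gave up after %s: %w", time.Since(start).Round(time.Second), ctx.Err())
		case <-progress.C:
			fmt.Printf("still waiting for resources to be ready (elapsed: %s)\n",
				time.Since(start).Round(time.Second))
		case <-poll.C:
			if ready() {
				fmt.Printf("resources ready after %s\n", time.Since(start).Round(time.Second))
				return nil
			}
		}
	}
}

func main() {
	// Fake readiness check that flips to true after ~15s, standing in for
	// "are all of the release's k8s resources ready yet?".
	readyAt := time.Now().Add(15 * time.Second)
	err := waitWithProgress(context.Background(), time.Minute, 5*time.Second, func() bool {
		return time.Now().After(readyAt)
	})
	if err != nil {
		fmt.Println("wait failed:", err)
	}
}
```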

Continued thread

I know that I could manually delete some Pods from the full nodes and k8s would likely reschedule them on the open node, but that would be a manual process.

So my question: is there a way to trigger k8s to look at all Pods and reschedule some of them, potentially moving Pods from one node to another?

2/2

Possibly an incredibly stupid question, but does Kubernetes have some sort of "look at the current Pods and potentially reschedule them on other nodes" trigger? I've just had the situation where, after a sequential node restart, two nodes were completely full, to the point of some Pods being stuck Pending, while the last node to be rebooted still had CPU free. But to make use of that, some Pods on the full nodes would need to be rescheduled.

1/2
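
For what it's worth, the manual route from the previous post ("delete some Pods from the full nodes and let the scheduler re-place them") can at least be scripted. A rough client-go sketch, assuming a kubeconfig in the default location and a hypothetical node name full-node-1; it goes through the Eviction API so PodDisruptionBudgets are honored, and it only helps Pods owned by a controller (Deployment, StatefulSet, ...) that will recreate them:

```go
package main

import (
	"context"
	"fmt"
	"path/filepath"

	policyv1 "k8s.io/api/policy/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Hypothetical target: one of the "full" nodes.
	const node = "full-node-1"

	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// List every Pod currently scheduled on that node.
	pods, err := client.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "spec.nodeName=" + node,
	})
	if err != nil {
		panic(err)
	}

	for _, p := range pods.Items {
		eviction := &policyv1.Eviction{
			ObjectMeta: metav1.ObjectMeta{Name: p.Name, Namespace: p.Namespace},
		}
		// The Eviction API respects PodDisruptionBudgets, unlike a plain delete.
		if err := client.PolicyV1().Evictions(p.Namespace).Evict(context.TODO(), eviction); err != nil {
			fmt.Printf("could not evict %s/%s: %v\n", p.Namespace, p.Name, err)
			continue
		}
		fmt.Printf("evicted %s/%s, the scheduler will place it again\n", p.Namespace, p.Name)
	}
}
```

A real script would also want to skip DaemonSet and mirror Pods and pace the evictions instead of draining everything at once.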

Today I've been experimenting with a slow-rollout canary idea for Kubernetes. Basically:

1. Set minReadySeconds to >= your probe's `failureThreshold * periodSeconds`

2. Set maxSurge to something like 5% so it deploys small batches of pods at a time

3. Have the health-check endpoint return 500s if that pod's error rate is above your error threshold (I'm using middleware to track error count vs. total request count; rough sketch below)
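
Point 1 works out to, for example, minReadySeconds >= 30 if the probe uses failureThreshold 3 and periodSeconds 10. And here's point 3 in rough Go, assuming a plain net/http service; the names (`errorRateTracker`, `/healthz`, the 5% threshold) are illustrative, not any particular framework's API. The middleware counts total vs. 5xx responses, and the health endpoint starts failing once the observed error rate crosses the threshold, which lets a bad canary batch fail its probe before the rollout moves on:

```go
package main

import (
	"net/http"
	"sync/atomic"
)

// errorRateTracker counts total requests and server errors so the
// health endpoint can report whether this pod is misbehaving.
type errorRateTracker struct {
	total  atomic.Int64
	errors atomic.Int64
}

// statusRecorder lets the middleware see the status code a handler wrote.
type statusRecorder struct {
	http.ResponseWriter
	status int
}

func (r *statusRecorder) WriteHeader(code int) {
	r.status = code
	r.ResponseWriter.WriteHeader(code)
}

// Middleware wraps a handler and records total vs. 5xx responses.
func (t *errorRateTracker) Middleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
		rec := &statusRecorder{ResponseWriter: w, status: http.StatusOK}
		next.ServeHTTP(rec, req)
		t.total.Add(1)
		if rec.status >= 500 {
			t.errors.Add(1)
		}
	})
}

// Healthz returns 500 once the error rate exceeds threshold (e.g. 0.05),
// so the readiness probe fails and the rollout pauses on this batch.
func (t *errorRateTracker) Healthz(threshold float64) http.HandlerFunc {
	return func(w http.ResponseWriter, req *http.Request) {
		total := t.total.Load()
		if total > 0 && float64(t.errors.Load())/float64(total) > threshold {
			http.Error(w, "error rate above threshold", http.StatusInternalServerError)
			return
		}
		w.WriteHeader(http.StatusOK)
	}
}

func main() {
	tracker := &errorRateTracker{}

	app := http.NewServeMux()
	app.HandleFunc("/work", func(w http.ResponseWriter, req *http.Request) {
		w.Write([]byte("ok")) // application traffic being measured
	})

	root := http.NewServeMux()
	root.Handle("/", tracker.Middleware(app))      // measured routes
	root.Handle("/healthz", tracker.Healthz(0.05)) // probe endpoint, not measured
	http.ListenAndServe(":8080", root)
}
```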

Man, Prometheus is a pain to recover once its data store is in any way out of shape. It did NOT help that it was buried inside Kubernetes, inside a PVC.

Thankfully it was only the Dev environment today, but if this ever pages on Prod, we're losing data as it stands.

I'll write something up for a runbook, but eesh.