Cloud Native & Open Source: A Team Lead’s Working Journal 💻

A team lead’s perspective on building and managing modern, open technology.

The Exploitability Gap: Insights from Datadog’s State of DevSecOps 2026

We have all been there: a Slack notification triggers an alert for a “Critical” CVE, and the scramble to patch begins. But as our clusters grow, so does the noise. The most jarring security stories are often the ones happening silently inside our own production environments. Datadog recently released its State of DevSecOps 2026 report, and the numbers provide a sobering reality check for anyone managing cloud-native infrastructure. The report reveals that 87% of organizations are currently running at least one known exploitable vulnerability in their deployed services. Even more concerning is that many of these services rely on libraries that have been abandoned by their maintainers. This is not just a theoretical problem; it is based on telemetry from thousands of real-world cloud environments, making the findings impossible to dismiss. ...
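The “known exploitable” distinction is the practical takeaway: not every CVE alert deserves the same scramble. One common way to cut the noise is to intersect scanner findings with a list of CVEs with confirmed exploitation, such as CISA’s Known Exploited Vulnerabilities (KEV) catalog. A minimal sketch of that triage step (the service names and CVE IDs below are placeholders, not findings from the report):

```python
def exploitable_findings(findings, known_exploited):
    """Return only the scanner findings whose CVE appears on a
    known-exploited list (e.g. CISA KEV) -- these get patched first."""
    kev = set(known_exploited)
    return [f for f in findings if f["cve"] in kev]

# Placeholder data for illustration only.
findings = [
    {"service": "payments-api", "cve": "CVE-2026-0001"},
    {"service": "billing-worker", "cve": "CVE-2026-0002"},
]
kev_list = ["CVE-2026-0001"]

for f in exploitable_findings(findings, kev_list):
    print(f"patch first: {f['service']} ({f['cve']})")
```

Everything not on the list still matters, but it goes into the normal patch cadence instead of the incident channel.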

March 6, 2026 · 3 min · 626 words · Matteo Bisi

Amsterdam Bound: Gearing Up for KubeCon EU 2026

The countdown to KubeCon + CloudNativeCon Europe 2026 has officially entered its final stage! In just a few weeks, the global cloud-native community will descend upon the RAI Amsterdam (March 23–26), and I couldn’t be more excited. For ReeVo, this is a massive and strategic milestone. We are returning as a proud sponsor, and we are arriving in force. This isn’t just a marketing trip; it’s a mission. ...

March 4, 2026 · 3 min · 535 words · Matteo Bisi

ACTUI Follow-Up: Submenus and Image Management

Quick follow-up: after publishing the initial ACTUI article, I kept developing the tool. I started using it regularly and shared it with my team. Feedback came in, and I kept improving things in my free time. This is a quick update on what changed. The original flat menu worked for a demo but felt cluttered as features grew, so I restructured the interface into three main sections: ...

February 27, 2026 · 2 min · 355 words · Matteo Bisi

How Distillation Attacks Are Reshaping the Global AI Landscape

The AI race has largely boiled down to a high-stakes contest between the US and China. On one side, established US companies like Anthropic, OpenAI, Google, and X have continuously pushed the boundaries of frontier AI models. Anthropic, the research lab behind Claude, is best known for its focus on AI safety and its unique ‘constitutional’ approach to alignment. Meanwhile, several Chinese tech firms have been fast-tracking models to compete with the best systems coming out of the US. This competition reached a turning point when Anthropic revealed it had been targeted by industrial-scale ‘distillation attacks’ from three major Chinese AI labs. ...

February 23, 2026 · 3 min · 562 words · Matteo Bisi

Back to Basics: Why Containers Are Just Fancy Linux Processes

The path into platform engineering has changed. Many engineers today start their careers working directly with Kubernetes, writing YAML and managing Helm charts before they ever spend extended time at a Linux terminal. The tooling is so well-abstracted that you can be genuinely productive for months before the underlying system ever becomes relevant. That is a real achievement for the ecosystem. The gap shows up at the worst moments, though: a container crashes with a permission error, a security team flags a pod running as root, a privilege escalation CVE lands and it is not clear whether the cluster is exposed. These are Linux problems, and they are much easier to reason about once you understand what the YAML actually maps to at the kernel level. I have been in those conversations many times, and I always come back to the same set of fundamentals. ...
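The “containers are just processes” point is directly observable from the host. A container’s isolation is membership in a set of kernel namespaces, and every process exposes its namespace memberships as symlinks under `/proc/<pid>/ns/`: two processes are “in the same container” exactly when the corresponding links resolve to the same identifiers. A small Linux-only sketch (not code from the article):

```python
import os

def namespace_ids(pid="self"):
    """Read a process's namespace memberships from /proc/<pid>/ns/.
    Each entry is a symlink like 'pid:[4026531836]'; matching values
    across two PIDs mean they share that namespace."""
    ns_dir = f"/proc/{pid}/ns"
    return {name: os.readlink(os.path.join(ns_dir, name))
            for name in sorted(os.listdir(ns_dir))}

if __name__ == "__main__":
    for name, ident in namespace_ids().items():
        print(f"{name:10} {ident}")
```

Run it inside a container and on the host against the containerized PID, and you will see the same kernel, the same `/proc`, and differing namespace IDs; that is the whole trick the YAML eventually maps to.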

February 20, 2026 · 11 min · 2292 words · Matteo Bisi

The Challenge of Securing AI Agents: A DevSecOps Perspective

As a DevSecOps Team Leader, my job is to help customers adopt modern technologies securely. Sounds straightforward, right? The reality is far more complex. Every day, I face the challenge of enabling innovation while maintaining security. The rapid adoption of AI has introduced a new dimension to this challenge: agentic AI assistants that do not just chat, they act. This challenge connects directly to something I wrote about recently. In my article on spec-driven development with GitHub Spec-Kit, I discussed how structure and governance matter when using AI for coding. The same principle applies here: when AI agents can execute code, access secrets, and operate with user privileges, we need structure and governance more than ever. ...

February 17, 2026 · 5 min · 1059 words · Matteo Bisi

Testing Spec-Kit: Building a Functional Container TUI in 2.5 Hours

Theory meets practice: in my previous article about GitHub Spec-Kit, I explored the theoretical foundations of spec-driven development: why structured AI workflows matter for compliance, auditability, and team collaboration. I discussed the high-level concepts of audit trails, liability, and how spec-kit transforms “vibe coding” into a rigorous, documented process. Today, I’m sharing something different: a raw, unfiltered hands-on experience building a real tool from scratch using spec-kit. This is a chronological journey documenting what actually happened when I let spec-kit drive the development process from constitution to working code. ...

February 12, 2026 · 9 min · 1747 words · Matteo Bisi

AI CLI Standardization: From Tool Lock-in to Portability

AI is a powerful tool, and for IT professionals, the most effective way to leverage it is through CLI tools like GitHub Copilot CLI, Claude Code, Gemini CLI, or similar agents. In previous articles, like the one on GitHub Spec-Kit, I explored spec-driven development and structured AI workflows, but I realized I skipped fundamental concepts: why CLI tools beat web chatbots and how to standardize your AI setup for portability. ...

February 6, 2026 · 12 min · 2506 words · Matteo Bisi

When Your Update System Becomes the Attack Vector: The Notepad++ Supply Chain Compromise

The recent Notepad++ supply chain compromise shows how even widely trusted, open-source tools become vectors for state-sponsored espionage when their distribution infrastructure falls into the wrong hands. This was a surgical, six-month operation that bypassed traditional code security controls by exploiting the update mechanism. In 2025, Notepad++, a widely used open-source text editor, suffered a sophisticated supply chain attack. Chinese state-sponsored threat actors compromised the project’s shared hosting provider in June, gaining control of the update distribution system. Even after losing direct server access in September following a kernel update, the attackers maintained persistence through stolen credentials until December 2. The fixed version 8.8.9, with hardened update verification, was released on December 9. ...
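The details of Notepad++’s hardened verification aren’t covered here, but the general pattern behind any such fix is the same: the updater must refuse a payload unless it matches a digest or signature pinned out-of-band, so a compromised download server can swap the bytes but not the trust anchor. An illustrative sketch of that check (not Notepad++’s actual code; real updaters should use code-signing signatures, not bare hashes):

```python
import hashlib
import hmac

def verify_update(payload: bytes, pinned_sha256: str) -> bool:
    """Accept an update only if its SHA-256 digest matches a value
    pinned over a separate, trusted channel."""
    digest = hashlib.sha256(payload).hexdigest()
    # Constant-time comparison avoids leaking digest prefixes via timing.
    return hmac.compare_digest(digest, pinned_sha256.lower())

# Illustrative payloads only.
good = b"update-package-bytes"
pinned = hashlib.sha256(good).hexdigest()
print(verify_update(good, pinned))              # matching payload passes
print(verify_update(b"tampered-bytes", pinned)) # swapped payload fails
```

The lesson of the incident is exactly where this check must live: in the client, against a key or digest the hosting provider never controls.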

February 3, 2026 · 7 min · 1370 words · Matteo Bisi

ClawdBot → MoltBot → OpenClaw: A Case Study in Confusion Attacks and Security Risks

What is ClawdBot/MoltBot/OpenClaw? For those unfamiliar with the project, OpenClaw (formerly MoltBot, previously ClawdBot) is a personal AI assistant platform that integrates with multiple messaging channels including WhatsApp, Telegram, Discord, Slack, Signal, iMessage, and many others. The project is available at github.com/openclaw/openclaw and maintains a website at openclaw.ai. The tool is designed to be a “local-first, single-user assistant” with capabilities that include shell command execution, filesystem operations, browser automation, and integration with various cloud services. It’s essentially a bridge between AI models and your entire digital ecosystem. However, OpenClaw does not provide model access itself; users must configure it with their own API keys from providers like Anthropic, OpenAI, or others. ...

January 31, 2026 · 11 min · 2145 words · Matteo Bisi