Book Series

The Fuckup Almanac

A guided tour of humanity’s finest screwups: from outages and crashes to real-world disasters — exposing how complex systems fail, and why “human error” is almost never the real cause.

About the Collection

Bookstores are full of books about success stories.
This series is not one of them.

Instead of following survivorship bias and dissecting how something worked under a perfect alignment of circumstances, The Fuckup Almanac looks at the opposite: how IT and engineering (do not) work — universally. By studying failures rather than victories, the series focuses on rules that apply regardless of ambition, competence, or good intentions.

The Fuckup Almanac is not a simple collection of disasters. Failures are the entry point — sometimes dramatic, sometimes absurd — used to explain the mechanisms behind modern technology, engineering, and the systems that quietly run the world.

Yes, the goal is education.
But education delivered in a way that remains readable, engaging, and occasionally satisfying in the same way disaster stories always are. If you come for the schadenfreude, you’ll get it. If you stay, you’ll walk away with mental models that help you recognize the same failure patterns long before they make headlines.

The series is designed to work both linearly and out of order. Reading it end to end gradually expands the perspective — from digital foundations, through software and algorithms, into physical engineering and human factors. At the same time, every volume and chapter stands on its own, with enough context to be understood even when read in isolation.

Throughout the series, a few principles remain constant:

  • complex ideas are simplified, but not distorted;
  • respect is paid where it’s due, without overdramatization;
  • humor is used where it helps understanding, and dropped where it doesn’t.

No prior knowledge required.

The primary audience is curious minds who want to understand how IT and engineering actually behave at scale. Thanks to the accuracy and care put into the explanations, the series remains useful not only to newcomers but to students and experienced professionals alike.

Volumes in the Series

Start from the beginning or jump in anywhere.

Vol 2: Stuff We Built on Top

With the foundations in place, Volume II moves upward — to the layers we eagerly piled on top of them.
This is where abstractions multiply, dependencies sprawl, and systems become fragile long before anyone notices.

The focus shifts to software ecosystems, automation, and algorithms: brittle chains of trust built on open source, interfaces that actively invite mistakes, and systems that collapse the moment popularity exceeds expectations.
Failure here isn’t rare — it’s designed into the scaling model.

Yes, there is plenty about AI.
And no, we didn’t need it to find countless ways to shoot ourselves in the foot.
It merely helped us do it faster, at scale, and with far more confidence.

Vol 3: Hard Engineering & Physics

Volume III steps outside IT and into the physical world, where failures are no longer measured in downtime but in force, heat, pressure, and consequences that don’t roll back. It turns out reality has a higher budget than most disaster movies.

Explosions, toxic releases, collapsing bridges, and structures that manage to set cars on fire — scenes that would be rejected in Hollywood for being “a bit much.” When things go wrong here, the results are spectacular, sometimes literally.

And yet, once the smoke clears, the failure patterns are uncomfortably familiar. Assumptions fail, margins vanish, warnings get ignored — proving that the same rules apply, whether the system runs on code, steel, or concrete.

Vol 4: Human Factor & Hubris

After a tour of disasters across technologies and industries, Volume IV finally turns to what truly connects all of them: people. Not as a footnote, but as the common denominator behind every system that failed.

Ego, arrogance, overconfidence, cost-cutting, hype — just the opening entries in a long list of our finest qualities. Decisions made under pressure, misaligned incentives, and shortcuts taken “just this once” prove to be highly effective ways of manufacturing catastrophe.

This is not a story about a few bad actors. It’s about organizations and cultures collapsing exactly as designed — because sometimes the most dangerous part of any system is the one convinced it knows better.
