This place needs more of this kind of documentation.
I failed to use iptables for years. I bought books. I copied recipes from blog posts. Nothing made sense, and everything I did was brittle. Until I finally found a schematic showing the flowchart of a packet through the kernel, which gives the exact order in which each rule chain is applied, and where some of the sysctl values are enforced. All of a sudden, I could write rules that did exactly what I wanted, or intelligently choose between rules that have equivalent behavior in isolation but could have different performance implications.
After studying the schematic, everything would just work on the first try. A good schematic makes a world of difference!
Was it this one? https://en.wikipedia.org/wiki/File%3aNetfilter-packet-flow.s...
One of my favourite webpages. Have used it countless times over the years.
I use this all the time when writing iptable rules.
It is also worth mentioning the TRACE target, which dumps to the logs exactly which rules a packet hit; it's invaluable on big firewalls.
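For example, tracing SSH traffic looks roughly like this (a minimal sketch; port 22 and the logging backend are my own assumptions, not from the parent comment):

    # trace incoming/outgoing SSH packets through every chain they traverse
    iptables -t raw -A PREROUTING -p tcp --dport 22 -j TRACE
    iptables -t raw -A OUTPUT     -p tcp --sport 22 -j TRACE
    # on legacy iptables you may also need the nf_log logging backend loaded
    modprobe nf_log_ipv4
    # each table/chain/rule the packet hits is then written to the kernel log
    journalctl -k | grep TRACE
    # delete the rules afterwards; TRACE logs every matching packet and is noisy
    iptables -t raw -D PREROUTING -p tcp --dport 22 -j TRACE
    iptables -t raw -D OUTPUT     -p tcp --sport 22 -j TRACE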
Can you share the diagram? Would love to become iptables-enlightened.
Eventually I used more detailed diagrams, but this one was like a lightbulb going off:
https://www.frozentux.net/iptables-tutorial/images/tables_tr...
I couldn't find one that annotated where the sysctl configurables apply. But that is a useful annotation to have, even if it's left as an exercise for the reader.
It is time to be nftables enlightened instead.
Similar diagram, right in the nftables wiki:
https://wiki.nftables.org/wiki-nftables/index.php/Netfilter_...
It's more of a netfilter (the thing behind iptables and nftables) diagram rather than just iptables.
If you know how iptables maps to that diagram you are very likely to be able to quickly understand how nftables does too.
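As a rough sketch (the table and chain names are my own placeholders), here is the same stateful-accept rule written both ways, attaching to the same netfilter input hook from the diagram:

    # iptables: filter table, INPUT chain
    iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

    # nftables: a base chain registered on the same "input" hook
    nft add table inet filter
    nft add chain inet filter input '{ type filter hook input priority 0; policy accept; }'
    nft add rule  inet filter input ct state established,related accept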
Sure, but we really shouldn’t be encouraging the use of iptables in 2025.
That's not realistic for most of the Linux world.
Soooo many systems are still using iptables even though we "should" be using nft everywhere.
If you're going to be a Linux Sys/Net Admin today, you need an understanding of both systems.
Besides the diagram you'll find tutorials on https://www.frozentux.net/category/linux/iptables/ too.
And at http://www.easyfwgen.morizot.net/ there's an old, but still useful generator for an iptables setup. That should help to understand iptables.
For anyone who is interested, the author of this diagram also made a Linux disk I/O diagram (https://zenodo.org/records/15234151). These diagrams are from his book Operativni sustavi i računalne mreže - Linux u primjeni (https://zenodo.org/records/17371946)
Shout out to the brilliant and generous work of the author!
Do you know if there is an English version of the book?
If the author agrees, I could try to learn Serbo-Croatian (I'm Polish, good with languages) and translate it to English. I'm kind of a burnt-out Linux geek who can't look at computers much anymore. Translating a book would be fun, but I would need some sponsorship. Amadeusz at [the old name of icloud].com
the book is licensed under CC BY-SA, so you should be OK with translating it as long as you follow the licence terms.
you could try doing a first pass with an AI model to translate it and then proofread it, for a quicker translation. good luck, it would be fun and potentially impactful ;)
Sadly, to my knowledge there is no English version of it. I too am wishing for a future English version so that I can read it. But I guess it would be a lot of work to translate it into English.
The Disk I/O diagram is excellent, thank you for sharing.
The Linux kernel map is another good one: https://commons.wikimedia.org/wiki/File:Linux_kernel_map.png
That's pretty cool!
If someone could program a visualization tool that would generate such diagrams automatically, that would be even cooler (but likely a mission impossible).
Automatic generation would be really tough because of all the levels of abstraction traversed in this diagram in particular... But tools like Mermaid / PlantUML can get you in the ballpark, and PGF/TikZ could be a reasonable target if you want to attack that mission by generating text instead of images.
*simplified.
Doesn't even go into iptables/nftables
If you look closely to iptables, iptables looks back at you.
For containers you can also have a separate TCP/IP stack, similar to what is shown for the VM on the diagram; this is the case when a container uses slirp4netns to provide networking. The alternative is to use the kernel's TCP/IP stack, which is what happens when pasta is used for networking. The diagram on this page shows the details: https://passt.top/passt/about/.
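With Podman, for example, you can pick between the two per container (a hedged sketch; the flag names assume a reasonably recent rootless Podman):

    # slirp4netns: a user-space TCP/IP stack handles the container's traffic
    podman run --rm --network=slirp4netns alpine ip addr
    # pasta: still unprivileged, but connections are relayed to the host
    # kernel's TCP/IP stack via ordinary sockets
    podman run --rm --network=pasta alpine ip addr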
Is it possible to see the diagram as an SVG? I'm only seeing it embedded in the PDF, and it's really difficult to read.
Click on "Download" below the embedded PDF viewer and you'll get the PDF.
I'm surprised to realize I'm familiar with most of the stack just from decades of Linux usage and no formal study of the stack.
I'm not sure if this takes into account para-virtualized networks on VMs, i.e. VMware VMs with "virtual" hardware access.
It's been a few years for me tho, so perhaps it's covered with the VM section.
Lovely diagram, thanks for sharing it!
These usually attach in the bridge or NAT flow.
Anyone figure out what the colour scheme means?
s/Aplication/Application/g
Any recommendations for a map of Linux user-space network management options?
qdisc is too small in this diagram and too easy to miss.
wow
Fools admire complexity.
There’s complication, and there’s complexity. Fools admire complication, engineers design solutions to complex problems. This is a diagram explaining the latter.
I think it was put pretty well by describing things as accidental complexity (of which you want as little as possible) and essential complexity, which is inherent to the problem domain you're working in and which there is no way around.
The same thing can sometimes fall into either category - like going for a microservices architecture when you need to serve about 10'000 clients in total, vs. at some point actually needing to serve a lot of concurrent requests at any given time.
> inherent to the problem domain you're working in and which there is no way around
I'd phrase it as reasonable trade-offs taken for customer/user support and/or selling products.
> going for a microservices architecture when you need to serve about 10'000 clients
So far, the only use case I am aware of for microservices is shipping/releasing fast at the cost of technical debt (a non-synchronized master). As I understand it, this is to a large degree due to git's shortcomings and no more efficient/scalable replacement being in sight. Or can you explain further use cases with a technical necessity?
Where/how would you simplify it without losing features?