You had me until declarative syntax and then I lost my lunch
Ok, I watched this wonderful talk, The Six Stages of Systemd, and wanted to follow the speaker through his process. Unfortunately, as someone who actually builds cloud computing environments spanning multiple data centers, doing the systems engineering across everything from embedded devices, to appliances, to entire data centers filled with hypervisors, I don't buy it. There are a number of reasons for this, most of which have to do with the reality of all the edge cases I've had to deal with across hardware vendors and their interactions with systemd and udev. While the notion of a declarative syntax sounds appealing, and is currently hip among people learning Puppet, or Salt, or Chef, or whatever the hip configuration management system of the day is, having actually done factory automation for high volumes of Linux servers, I can say it ultimately fails for the very same reason that we have large numbers of hackish, irregular init scripts that have acquired decades of hard-won lessons.
Let's go to udev, which is now merged into systemd. Udev does not, under all circumstances, enumerate hardware on a bus in a consistent fashion. I have had the misfortune of dealing with several vendors where, due to the way the traces on the motherboard get reused/abused (Supermicro, I'm looking at you), systemd does not consistently enumerate the built-in ethernet controllers. This is particularly fun when you have a pair of 1GbE and a pair of 10GbE, with different bonding and SNAT rules on separate physical networks. Imagine the fun of having to manually gut udev and hand-rewrite udev rules for 50 racks of servers! It also required going in and lobotomizing Red Hat so that it could never attempt to "fix" the device mapping. It made me so nostalgic for the perl script that used to inspect the vendor ID and the MAC addresses and write the network scripts to bind to the correct device, I ended up repeating myself. Systemd enforces this ass fuckery as a core value, and makes it even harder to get consistent configurations across a datacenter.
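For the record, pinning a NIC to a stable name by its MAC address is just a couple of udev rules; the MACs and names below are placeholders, not a real Supermicro layout:

```
# /etc/udev/rules.d/70-persistent-net.rules  (MAC addresses are hypothetical)
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="0c:c4:7a:00:00:01", NAME="eth0"
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="0c:c4:7a:00:00:02", NAME="eth1"
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="0c:c4:7a:aa:00:01", NAME="eth2"
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="0c:c4:7a:aa:00:02", NAME="eth3"
```

Hand-maintaining 50 racks' worth of files like this, with every MAC unique per chassis, is exactly the chore the old perl generator existed to automate.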
Next let's consider the wonderfully deterministic rules which specify the ordering of how things get initialized. I mean, of course, the system-wide adoption of a vague, hand-wavy precedence system based on whatever the fuck the packager of some package chose. Install a new package that has a .service file and "poof", your entire boot behavior and order can change. Got a cyclic dependency? Fuck, you won't even discover it until that unplanned reboot. Too bad you're not running a purely deterministic process like a bash script. A developer installed a new service that conflicts with the precedences you set to get out of your last config nightmare? Well, fuck and fuck again. Why the fuck do I let other people touch these machines again? Oh yeah, because I can't do everything and people don't grok dependency trees. Seriously, go install xterm on a minimal install of any Linux; how many packages do you guess will be installed? I've seen between 80 and 130 on two different Linuxes in the past month. Do you think that declarative syntax in the .spec files solves the problem of exponential complexity?
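To be concrete, the precedence knobs in question are just directives in each packager's unit file. A minimal sketch, with hypothetical service names rather than anything a distro actually ships:

```
# /etc/systemd/system/myapp.service  (hypothetical unit)
[Unit]
Description=Example app
Wants=network-online.target
After=network-online.target postgresql.service

[Service]
ExecStart=/usr/local/bin/myapp

[Install]
WantedBy=multi-user.target
```

If some other package's unit later declares an After= that loops back to this one, you have an ordering cycle, and systemd breaks it by dropping a job from the boot transaction, which is precisely the kind of thing you only discover during that unplanned reboot.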
But what about faster boot times? Aren't faster boot times and cgroups the greatest boons ever? Please. I use cgroups for all of my services. I get them for free using lxc containers. I don't need fucking systemd to boot lxc. Seriously, my base system exists to boot containers these days. Using init to boot services? Please. My systems boot to the point they can run a container fast enough, and the rest of it is a bash script. Seriously, a single bash script. With service discovery, my machines ping a DNS server, find out what containers they need to run, and then run them in the order specified. No fucking around with precedence levels, no messing with chkconfig and rc.d, no writing fucking .service files. I want my init process to get me a shell and get the fuck out of my way. Computers are supposed to work for me, and to paraphrase the great Alan Cox, I know how to program fucking state machines; a computer is a fucking state machine.
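The whole arrangement fits in a sketch like this; the TXT record name and the lxc-start invocation are assumptions about my setup, not something to drop onto your boxes as-is:

```sh
#!/bin/sh
# Sketch of the boot flow described above. The DNS record layout is an
# assumption: one hypothetical TXT record per host, whose word order is
# the container start order, e.g. "web db cache".

# Strip the quotes dig prints around a TXT record: '"web db cache"' -> 'web db cache'
parse_txt() {
    printf '%s\n' "$1" | tr -d '"'
}

# Ask DNS which containers this host should run (record name is hypothetical).
containers_for() {
    parse_txt "$(dig +short TXT "containers.$1.example.com")"
}

# Start each container, in the order the DNS record gave us.
start_all() {
    for c in $(containers_for "$(hostname -s)"); do
        lxc-start -n "$c" -d
    done
}
```

The only clever part is that the word order in the DNS record is the start order, so changing what a host runs is a DNS edit, not a config push to the machine.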
In my personal experience, systemd has no fucking clue how to model a state machine. Not only does it fail miserably at the smell test for deterministic behavior, but I find myself ripping the guts out of udev every chance I get so my devices actually work. The benefits of faster boot are hard to realize, as most boot times are constrained by the POST routines of various pre-OS firmware, and the purported ease of configuration shits the bed the second you have to deal with squirrely hardware. Maybe I've been doing this for too long and have too much experience with the core issues. But systemd is one giant fucking backwards leap that makes hard things impossible, and easy things fraught with peril. I've already begun warning my customers against RHEL and future CentOS versions. Should Debian adopt systemd, I will likely switch all of my servers to busybox + mdev. After all, Linux is a marvelous kernel, and I don't need the distros fucking things up in user space if they only choose to get in my way.
PS. I run Slackware at home since '94... Patrick Volkerding has had the right idea for a long time.