
In my experience, it's almost no effort. For standalone machines or laptops, you make your own package repo, point all the machines at it, and let them run updates. You can install a cron job that lists installed packages and emails the list out whenever the machine is connected, so you can track whether and how packages actually get updated.

It's even easier with machines that are hardwired/non-portable, because you can NFS-boot them with a read-only root and remove the hard drives from the local machines entirely. Updates in that environment come from booting from a different root directory, which lets you test new applications and upgrades without affecting everyone. The NFS-root deployment was for a call center (obviously a limited application set, but if you're deploying a large number of machines, many of them will have overlapping usage profiles). Nowadays I could see storing all the different root configurations in git, making it easy to roll back, create new versions, and deploy them (though you could use git for a Windows install too).
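The nightly package-audit step could be sketched roughly like this (everything here is an assumption for illustration: the pkg_report name, the admin address, and the dpkg-query/mail pipeline shown in the comment, which would vary by distro):

```shell
#!/bin/sh
# Sketch of the audit step: turn "name version" lines on stdin
# into a sorted report body with a trailing package count.
pkg_report() {
    sort | awk '{n++; print} END {print "total:", n}'
}

# In the actual cron job you would feed in the package manager's
# listing and mail the result, e.g. (hypothetical):
#   dpkg-query -W -f='${Package} ${Version}\n' | pkg_report \
#     | mail -s "pkg report: $(hostname)" admin@example.com
printf 'zsh 5.9\nbash 5.2\n' | pkg_report
```

Dropping a script like that into /etc/cron.daily means disconnected laptops simply report the next time they're online.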

I've deployed both setups, and support issues went way down compared to Windows. The biggest recurring problem with the NFS-root setup was hardware failures, but the "fix" there is to swap out the machine, which takes five minutes, and the user keeps working. With NFS home directories, nothing lives on the local machine, so there's no need to copy files over when giving someone new hardware either. With standalone machines, the backup strategy is also simpler, because you KNOW the user can only write to their home directory.

The hardest part of the NFS root was going through all the configs and changing anything that required local write access (logging, for example) to either not need it or to use a ramdisk scratch area. If you're experienced with Linux, this shouldn't be a problem, and I know there are projects out there meant to do most of this work for you, so it may be even easier now.
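The client-side mounts for that kind of setup might look something like the fstab fragment below (server name, export paths, and tmpfs sizes are all made up for illustration): the root comes from the file server read-only, and the handful of paths daemons insist on writing to get small tmpfs mounts instead.

```shell
# Hypothetical /etc/fstab for a read-only NFS-root client.
# Root is shared and read-only; scratch/writable paths are tmpfs.
fileserver:/exports/roots/callcenter-v2  /         nfs    ro,nolock  0 0
tmpfs                                    /tmp      tmpfs  size=64m   0 0
tmpfs                                    /var/log  tmpfs  size=32m   0 0
tmpfs                                    /var/run  tmpfs  size=16m   0 0
fileserver:/exports/home                 /home     nfs    rw,nolock  0 0
```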



I agree with your points for the 90% of software that is pre-packaged and ready to install on your distro of choice. I'm more curious about the remaining 10% of apparently problematic third-party software that is 'hard' to install on Linux: how much work would it take someone knowledgeable to create packages for it, and how does that compare with the time someone wastes on Windows with more mundane deployment issues?


Are these really comparable? The time it takes someone knowledgeable to do something difficult vs. the time spent on more mundane issues, where the amount of knowledge doesn't really come into play?

But I see what you are saying. The Microsoft-provided software-maintenance environment for Windows is really far behind every Linux distribution.

If you have a network of non-mobile machines that mount root via NFS, installing hard-to-install software (hard because it's not available as a package) doesn't necessarily even require creating a package -- you install it onto the shared root. I did this with some one-off stuff we needed that ended up in /usr/local; voila, everyone has it. And while we all know this kind of packageless software management leads to a "messy" system, it's actually easier to pay off that technical debt, because you can later build a whole new, cleanly managed root environment for your NFS-root users and move them to it WITHOUT causing them any downtime at all for the upgrade.

In a properly managed environment, the apparently problematic third-party software is just that: apparently problematic. You do the hard stuff ONCE, and that scales out to X machines (mobile or not). With Windows, you keep doing the hard stuff over and over, because each machine diverges, there are so many things you need to touch during an install, and even installation itself is nowhere near as automatable as it is on Linux.
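The zero-downtime upgrade could go roughly like this (a sketch only; the export paths, version names, and the PXE config location are all assumptions, and the commands would run as root on the file server):

```shell
#!/bin/sh
# Clone the current shared root so running clients are untouched.
cp -a /exports/roots/callcenter-v2 /exports/roots/callcenter-v3

# Upgrade or add software inside the clone (package manager is
# an assumption; this is where /usr/local one-offs go too).
chroot /exports/roots/callcenter-v3 sh -c 'apt-get update && apt-get -y upgrade'

# Point a few test machines at v3 first; once it checks out,
# flip everyone's boot config and let them pick it up on reboot.
sed -i 's/callcenter-v2/callcenter-v3/' /srv/tftp/pxelinux.cfg/default
```

The old root stays on disk, so rolling back is just flipping the boot config back to v2.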

What would be really interesting (though I don't care about this much anymore, since I no longer do even small Windows network deployments, nor plan to) is some kind of installwatch-style system for Windows: something that watches what setup.exe installs and re-packages it into, let's say, RPMs, including {pre,post}-install scripts that modify the registry (this is possible; I've used tools that do registry diffs). You'd need a clean master Windows machine to do this properly (easily solvable with virtualization), but it could be the difference between night and day when managing Windows installations, compared with Microsoft's massive updates and each vendor's own installation method.



