As long as you are looking for “free and someone else maintains it” you are going to be hopping services regularly to stay on something that has VC money to burn for goodwill.
If you host it yourself you either have a maintenance headache or a shortage of features.
Depends what your time is worth, really; I use Buildkite with runners on AWS for work, and it doesn’t suck.
I used Drone CI in the past and liked the focus on Docker containers for everything. It seemed to contain the sprawling bash monsters that tend to grow in enterprise pipelines.
I’ve been quite pleased with the free, unlimited GitHub Actions for public repos. I’m sure the party will end eventually, but it’s easy enough to move to whatever is best whenever that happens. And it’s not VC money that’s funding it, so who knows how much goodwill and market share Microsoft wants to buy?
GitLab CI is nice. If you're comfortable with GitHub as your primary remote, it's easy enough to set up GitLab as a secondary remote (and have most git actions duplex to both).
We’ve been running GitLab on-prem for around 6 years. We run self-hosted GitLab CI runners on an on-prem Kubernetes cluster. We’re a team of around 20 and collectively execute around 1,000 GitLab CI jobs every day across numerous client projects for web, mobile, cloud deployments, etc. It works amazingly well.
I know this isn't super helpful, but I built my own bors-style CI bot over a few weekends, and I'm very satisfied with it. It looks for PR comments with a specific keyword, then pulls the branch, tests it, and pushes a merge commit automatically.
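In sketch form, the loop looks something like this (the trigger keyword, branch names, and test command here are illustrative, not the actual bot):

```python
import subprocess

TRIGGER = "@bot merge"  # illustrative trigger keyword

def should_merge(comments, trigger=TRIGGER):
    """True if any PR comment body contains the trigger keyword."""
    return any(trigger in c.get("body", "") for c in comments)

def merge_pr(repo_dir, branch, target="main"):
    """Pull the branch, run the tests, and push a merge commit.
    Sketch only: a real bot also needs auth, locking, and PR status updates."""
    def git(*args):
        subprocess.run(["git", "-C", repo_dir, *args], check=True)

    git("fetch", "origin", branch, target)
    git("checkout", "-B", "bot-merge", f"origin/{branch}")
    subprocess.run(["make", "test"], cwd=repo_dir, check=True)  # gate on tests
    git("checkout", target)
    git("merge", "--no-ff", "bot-merge", "-m", f"Merge {branch}")
    git("push", "origin", target)
```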
Taking this approach has two big upsides. First, the bot is just a binary running on a cheap VPS, so I know that it'll be fast and that I can SSH in at any time to debug if necessary. Second, if there's a feature that I want, I can just add it, rather than twiddling my thumbs. For example, I noticed that I was manually deleting my merged PR branches every time, so I added a few lines to the bot and now it deletes them automatically.
The obvious downsides of this approach are: 1) Implementing even an MVP of such a bot can consume significant time and energy, which you may not have a lot of; 2) If there's a bug, it's your fault and now you need to spend more time and energy tracking it down; and 3) Your bot might have a security vulnerability that exposes secrets, allows injection attacks, etc.
I'd love to see tooling/frameworks that make it easier to create custom CI systems. I think startups (and perhaps larger companies as well) could benefit a lot from building a CI in-house, since it allows you to optimize for your own specific needs. I see a lot of parallels to code linters, where spending a bit of time writing custom lint rules or static analysis tools can have a large payoff.
I did something similar for a self-hosted github enterprise installation:
* A webhook would be fired at my host when a new pull-request was created, or updated.
* When the webhook was received, we'd store details of the repository, the branch name, and the PR ID from the webhook payload into a queue.
* A bunch of worker machines would poll the queue, and when a new message was received they'd check out the repository, run "make test", and add the output of the run as a comment to the PR.
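In sketch form (payload field names follow GitHub's pull_request webhook; the queue and the comment-posting step are stand-ins for the real services):

```python
import queue

jobs = queue.Queue()  # stand-in for the real queue service

def on_webhook(payload):
    """Store repository, branch name, and PR ID from the webhook payload."""
    pr = payload["pull_request"]
    jobs.put({
        "repo": payload["repository"]["clone_url"],
        "branch": pr["head"]["ref"],
        "pr_id": pr["number"],
    })

def worker_step():
    """One poll cycle: the commands a worker would run for the next job."""
    job = jobs.get()
    return [
        f"git clone --branch {job['branch']} {job['repo']} workdir",
        "make -C workdir test",
        f"post-pr-comment --pr {job['pr_id']}",  # hypothetical helper
    ]
```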
This was done as a temporary thing, rather than spinning up Jenkins or similar, with the expectation that we'd get GitHub Actions coming to GitHub Enterprise in the near future.
Chiming in to say: Avoid Jenkins like the plague. Jenkins is a bottomless pit of vulnerabilities and obscure bugs and outdated documentation that will waste weeks of your life.
(Caveat: If you plan to do devops at a Big Corp, then you might as well get good at Jenkins because they already use it.)
I say "almost" because GitLab CI lacks one critical thing: support for tasks independent of a commit or other event. Stuff like "take a dump of the production database and synchronize it to the integration environment".
Also, GitLab CI is slower than Jenkins, which does matter for some people: runners poll for work instead of the master pushing jobs to agents, and each job spins up a new container instead of reusing the same environment.
A particularly dumb case showing this is when something needs to be done on a remote server via SSH. In Jenkins, one clicks "SSH Agent", chooses the credential, and can then use "ssh user@host" just fine and do whatever one wants. In GitLab CI, one has to check whether ssh is available on the runner image, install it if it isn't, eval ssh-agent, and only then does it work - and all of this needs to be re-done on each run (additionally meaning that your jobs have a dependency on an Internet connection plus the distribution's package servers!). The same goes for tools: in Jenkins, with a proper tool configuration I can specify something like Maven or NodeJS and Jenkins will install the tool automatically if it is not present, and then never again, while in GitLab, since everything is stateless, all of this has to be re-done every single build.
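For reference, the per-job dance usually ends up as something like this in `.gitlab-ci.yml` (assuming a Debian-based runner image and an `SSH_PRIVATE_KEY` CI variable):

```yaml
deploy:
  before_script:
    # re-done on every run, and depends on the distro's package servers
    - apt-get update -qq && apt-get install -y -qq openssh-client
    - eval "$(ssh-agent -s)"
    - echo "$SSH_PRIVATE_KEY" | ssh-add -
  script:
    - ssh user@host "run-deploy"
```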
Hey OP - wanted to chime in here; some of the things you said aren’t accurate anymore.
GitLab CI can do SSH on the runners: you deploy a runner and configure it with the SSH executor. Then it won’t use containers at all and will connect over SSH instead.
The same is true for configuring the runner with the shell executor. You can then reuse the same environment over and over, just like Jenkins does.
As for Maven and NodeJS: if you’re using containers, you simply build a Docker image with those baked in and use it for your builds. GitLab also has a built-in container registry that lets your images work seamlessly and quickly with the runners.
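For example, a build image with both baked in might look like this (base image and versions are illustrative):

```dockerfile
# Maven + JDK from the official image; NodeJS layered on top
FROM maven:3.9-eclipse-temurin-17
RUN apt-get update \
    && apt-get install -y --no-install-recommends nodejs npm \
    && rm -rf /var/lib/apt/lists/*
```

Jobs then just reference it with something like `image: registry.example.com/group/build-image:latest`, and nothing gets installed at run time.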
For independent tasks without commits, you can easily configure a GitLab job to trigger only if a pipeline variable is present, then trigger the pipeline via an HTTP POST request, via the UI, or via an event.
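Roughly like this (script name, host, and project ID are placeholders):

```yaml
# .gitlab-ci.yml - job only runs when the variable is set
db-sync:
  script: ./sync-prod-to-integration.sh
  rules:
    - if: '$RUN_DB_SYNC == "true"'
```

Kicked off through GitLab's pipeline trigger API:

```
curl -X POST \
  --form token="$TRIGGER_TOKEN" \
  --form ref=main \
  --form "variables[RUN_DB_SYNC]=true" \
  "https://gitlab.example.com/api/v4/projects/<project-id>/trigger/pipeline"
```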
I talk about and demonstrate all of these topics on my blog www.lackastack.com - Shameless plug, but I hope it helps.
That's a good point, there is a vacuum for a good general purpose task automation tool that Jenkins has historically filled.
The problem is that each of the magic functions you listed above are separate plugins. They're not part of Jenkins itself. Each plugin may (and often will) push breaking changes and vulnerabilities to your Jenkins instance, if they haven't been outright abandoned by their maintainers. Over time, your builds will steadily accumulate hacks to work around broken plugins, and your Jenkins instance rots.
The same is largely true of GitHub Actions... so don't use any: if you have Jenkins doing anything other than task management and running shell scripts you maintain, I'd argue strongly that you are doing it all wrong (as if nothing else you are buying into lock-in for no reason).
Manually running pipelines is much easier and more configurable in Jenkins. There are many issues filed against GitLab that would make it easier to migrate from Jenkins, but those issues are not prioritized.
Jenkins is powerful and painfully complex. You might need that complexity for large projects but personally I've found it to be so frustrating to set up simple stuff.
I really liked the expressivity of the YAML syntax when I did some prototyping with Concourse recently, and the Resource abstraction is very nice.
I found the docs to be thorough for the API, but really sparse on how to actually wire up a GitHub build pipeline or do a deploy to k8s. It seems it's really niche and not many people are writing guides, which concerns me.
I also love Concourse and feel it has the perfect set of abstractions to be able to compose arbitrarily complex pipelines. You can extend it by creating your own resources (I actually wrote a tool to help make this easier[0]). The pipeline visualization is better than anything else I've seen. I'm not aware of a hosted Concourse solution, but there really should be.
BOSH is not a requirement to host it yourself, and I've never used it.
Probably the easiest way to run it is to use the helm chart[1] and run it on Kubernetes. There are some things to keep in mind, like the workers have their own containerizer that is not Kubernetes-aware. You don't want it competing with Kubernetes for scheduling containers, so it's best to let the worker pods be the "owner" of their node, and you can use affinities to enforce that.
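In the chart's values that arrangement looks roughly like this (label and taint names are illustrative; check the chart's values file for the exact keys):

```yaml
worker:
  nodeSelector:
    dedicated: concourse-worker
  tolerations:
    - key: dedicated
      operator: Equal
      value: concourse-worker
      effect: NoSchedule
```

Combined with a matching taint on the worker nodes, this keeps Kubernetes from scheduling other workloads where the Concourse containerizer is running.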
At one company where we ran it directly on VMs in AWS, I used Ansible to build the CloudFormation stacks for the database (Aurora) and autoscaling groups, and the machines self-configured with Ansible to download and extract the Concourse tarball and start it. I wish I could make that code public but it now lives inside the company where I used it. I guess my point is that running it isn't magic, it's just a program (actually two, web and worker) that you start with some arguments that are pretty well documented[2].
It all runs on a couple of EC2 instances in AWS. I'm not the guy who installed it (and I started at my current company after it was set up, so maybe it was complicated?) but as far as maintenance goes it's been pretty much set and forget. Our team is pretty DevOps heavy and we never really need to do anything with it.
It's definitely easier than using something like Jenkins, though. Personally I'm a big fan of GitHub Actions, but I've never used it in a production setting, only for personal projects.