Very often on HN I see links describing the latest and greatest GitLab feature. It really seems like they've been working hard and doing a good job, and they seem to have a clear vision ("master plan", in their terms). Where is GitHub in all of this? Aren't they concerned about being outinnovated?
GitHub is so ubiquitous that they can't afford to move fast and break things like GitLab can. Another team at work considered moving to GitLab to use all these new features, found bugs everywhere, and eventually went back to GitHub.
That said, I find the openness with which core architectural issues are discussed to be refreshing. (e.g. see the very detailed writeup explaining the root cause of the above issue, here https://gitlab.com/gitlab-com/infrastructure/issues/677)
Mainly intermittent errors clicking around the interface, particularly 404s when clicking on files that existed - they're not sure whether that was a problem in the web interface or the files actually temporarily went missing.
The HN community at some point decided to collectively turn on GitHub, because of some incident where (choose your own adventure: someone at GitHub was accidentally trying to help someone who wasn't white/male/heterosexual OR they're too busy waging identity wars to actually work OR someone with the right intentions definitely went too far, actual damage was limited, but it's the type of issue that people remember forever)
I think the collective criticism has long ago become unfair, for two reasons:
– The OSS community is currently working incredibly well. It's innovative, productive, inclusive, and the quality is excellent. GitHub is responsible for a large chunk of this development. Go check out SourceForge to get an idea of how it used to be. Seriously – it's like David Foster Wallace's fish/water metaphor: we don't know how good we have it, because there's no point of reference to compare it to.
It's also quite obvious that GitLab profits from the conceptual work done at GitHub. I'm not advocating that there should be any protection for software concepts, screen layouts etc. But I like to at least acknowledge where good ideas originated, and GitHub has had quite a few (once again: it's easy to forget after getting used to it).
I also had the experience that GL was dog-slow when I tried it. That has probably changed, or else I really couldn't understand anybody using it.
What spoke volumes was when they then removed it because the "concept of meritocracy is divisive"... they really did get infected with some silly ideas.
It's funny how threatened people feel by what GH was/is trying to do, when that remains their first and only association (two?) years after a minor incident where some support person made a mistake.
I'd also question the assumption that it's possible to run a business completely "neutral". That just means remaining in the Status Quo, which is exactly what the "other side" of that debate wants. Is it unfair to promote women so as to increase the share of female CEOs (Fortune 500: 4%)? Yes, it could be. But doing nothing is not neutral.
GitHub hired her after that to thought-police all of GitHub. I don't particularly care how they're run internally, but I don't want to do business with a service that will ban your account for political disagreements.
I honestly don't remember the details – a customer service rep locked or deleted a repository because of a complaint about a word in it, which he completely misunderstood. Something like master/slave.
Someone else also proclaimed their support for affirmative-action hiring on twitter, in words that some of my white cis-male brothers felt were insulting. (they actually were, but until I've picked cotton for a decade or been shot by police I'm not going to consider myself victimized).
From there on, GH has been under strict observation. They're obviously on the progressive end of the spectrum so people are now finding new transgressions left and right. Community guidelines were the last. They're not different from Google or any number of SV companies, but the association seems to stick.
Super quick one. One of the things I'm working on at work is a staging-environment-per-branch system. We use k8s too. The biggest consideration here is individual environment variables per branch.
E.g. for testing we have a separate db to work on, so we have test data and a db where we can perform migrations if needed.
Another example would be adding a new feature that requires a different env variable for another api key.
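For what it's worth, GitLab CI lets you express something like this today: the `environment:` keyword creates a dynamic environment per branch, and job-level `variables:` can carry branch-specific values. A minimal sketch (the database URL, API key value, and deploy script are hypothetical placeholders, not anything from GitLab's docs):

```yaml
# Hypothetical .gitlab-ci.yml sketch: one review environment per branch,
# with environment variables set per deploy job.
deploy_review:
  stage: deploy
  variables:
    # Assumption: a per-branch test database, named after the branch.
    DATABASE_URL: "postgres://test-db.internal/$CI_COMMIT_REF_NAME"
    # Assumption: a staging-only API key, stored as a protected CI variable.
    API_KEY: "$STAGING_API_KEY"
  script:
    - ./deploy.sh "$CI_COMMIT_REF_NAME"   # hypothetical script targeting k8s
  environment:
    name: review/$CI_COMMIT_REF_NAME
    url: https://$CI_COMMIT_REF_NAME.staging.example.com
  except:
    - master
```

Migrations against the per-branch test db would just be another line in `script:`, guarded however you like.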
Curious how or whether these features support that. Super sorry if it's in the docs. Since you were doing an AMA I thought I might ask it here :)
This video is really neat and it seems like the promised plan is almost done.
Can you clarify at all what the plan is for supporting other container manager systems? I looked at Openshift and it doesn't look like something you can run on your own hardware. Maybe I am misunderstanding this though.
--
EDIT: I had spent my time looking at openshift.com and didn't notice that there is an open source version located at openshift.org. Sorry for the confusion.
--
I'm interested in using all the functionality shown in the video, but for a small team and on our own hardware. Maybe it doesn't make sense that way, any clarification would help!
Thanks for your kind words. We would love supporting other container management systems. From the OP: "We believe container schedulers such as Kubernetes are the future of application lifecycle management and are working on Mesosphere support. We would love it if people would contribute support for other container schedulers such as Docker Swarm and for other Kubernetes providers such as Tectonic."
In the demo we use our own OpenShift Origin installation on our own cloud servers; for more information please see https://www.openshift.org/
Thank you so much. I had spent 20 minutes this afternoon on openshift.com and was looking for an open source version and just failed to find it. Sorry for causing confusion with my earlier post.
No problem. At GitLab we used to have separate .org and .com sites, but we noticed it caused duplicate content and confusion, so we consolidated under .com
Although great companies like Wordpress are doing a good job making it work.
> I looked at Openshift and it doesn't look like something you can run on your own hardware.
I work on a competing platform, Cloud Foundry. But I'm pretty darn sure OpenShift can be installed on your own platforms. Red Hat know a thing or two about Linux, after all.
Speaking of Red Hat, they have a dog in this fight in Fabric8.
Disclosure: I work for Pivotal, we're the majority donors of engineering to Cloud Foundry.
This is probably most interesting to those who don't want to use OpenShift. There's some more fleshing out to do re: CI runners, but the basics are there.
What's the minimum monthly cost for running this setup? Can it be done on a single server? If the minimum setup requires a lot of redundancy, then I think this will still be a blocker for many side projects and small startups.
I can't comment on the monthly cost. You should be able to run everything on a single server, but I suspect that if everything is under load, that would need to be a decent server.
You can use our Docker registry on GitLab.com for free.
I wasn't sure what to think about Cycle Analytics, and the metrics it provides ("time from thoughts to issue", "time from issue to code", "time spent reviewing"). IMHO these numbers may be too synthetic to give meaningful information. Plus, once we have this kind of dashboard, there is a risk of starting to optimize for the numbers (instead of optimizing the reality underneath).
That said, I see with this demo how these metrics could be useful to track, well, when reviews are taking too much time (needs more people? better distribution of reviewers?) – or when maintenance tasks are slowing new features (needs better tests? more maintainers?). I guess I'll have to try this out :)
Do you mean that the numbers are too coarse to indicate a specific problem?
Of course there is the problem of gaming the numbers. But we do think that compared to many other ways of measuring productivity (for example the number of issues solved) this is relatively robust against manipulation. Getting something out sooner is better most of the time.
But thanks for the thoughts and I hope you try it out soon.
I agree that everything will be gamed and that culture is the best protection. I do think that some metrics are easier to manipulate (cyclomatic complexity comes to mind) and some metrics don't correspond with effectiveness (lines of code written).
At Pivotal some of the PMs have kicked around "Time to Value", which is the gap between an entry in Pivotal Tracker and a buck being turned on that feature.
Then there's Time to Customer Value, which is the time it takes before a customer using the feature turns a buck on it.
> there is a risk of starting to optimize for the numbers
There absolutely is. You can only use your own judgment of the balance of risk between flying blind and becoming obsessed with instrumentation.
Ah, makes sense, thanks. I get Time to Customer Value now (the user's customer ordering because of the new feature). Not sure about Time to Value, though. Who is billing whom? Pivotal billing the user?
We sell a bunch of software subscriptions these days. If someone buys or increases their subscription on a new feature, that'd tick the marker.
It's probably not precisely measurable, but it's a great thought exercise. It prompts us to think about the entire process from noticing an idea to a customer deciding to buy.
And Time to Customer's Customer Value -- I forgot that TTC^2V was the alternative name -- is even better. It makes us think about the journey from us noticing a need, developing it, releasing, customer installing, their team using it, releasing to their customers, who decide it's worth paying for.
When you turn that into a loop, you get something very like the Gitlab master plan. We call it the "Circle of Code", Onsi Fakhouri talked about it early this year: https://youtu.be/7APZD0me1nU?t=23m6s
Nothing, we just had to pick a quick way to install it and Openshift looked nice. All the features are intended to work with all Kubernetes installations down the road.
We release all of GitLab's components at the same time, every month on the 22nd, guaranteeing that they work together. Updating GitLab is no more work than 'apt-get install'.
Should I upgrade all my hosts, docker, kubernetes, openshift, gitlab, mattermost ... what are the chances of something breaking to the point of ruining my day/week?
Not everything is merely "apt-get upgrade"-able, alas.
Idea -> Design -> Build -> Test -> Deploy -> Maintain
For example, Infrastructure Operations used to be responsible for providing a Test environment, now developers can test without Ops because tools like GitLab automatically provision test environments on demand (and then destroy them when they are no longer needed). You still need Ops to take care of that CI system, but you don't need as many personnel.
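The provision-then-destroy lifecycle described above maps onto GitLab CI's `environment` / `on_stop` pairing: a deploy job declares a stop job, which tears the environment down when the branch is deleted or triggered manually. A sketch, assuming hypothetical provision/teardown scripts that create and delete a k8s namespace:

```yaml
# Sketch: on-demand test environments that are destroyed when no longer needed.
review:
  stage: deploy
  script:
    - ./provision.sh "$CI_COMMIT_REF_NAME"   # hypothetical: create namespace + deploy
  environment:
    name: review/$CI_COMMIT_REF_NAME
    on_stop: stop_review

stop_review:
  stage: deploy
  script:
    - ./teardown.sh "$CI_COMMIT_REF_NAME"    # hypothetical: delete the namespace
  when: manual
  environment:
    name: review/$CI_COMMIT_REF_NAME
    action: stop
```

Ops still maintains the runners and the cluster, but no human has to hand out test environments one by one.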
I'm sorry to hear that. I should have probably included a drawing. Can others maybe paraphrase what they think it means or does it not make sense to anyone?
You're saying that people used to edit locally, build locally, test locally, then upload to staging/the cloud; now you upload to the cloud first, then build/test/etc. (Of course, people did it this way with Heroku for years because it was harder to build locally, then the pendulum swung back with Docker...)