
Using message queues to decouple components suffers from the same problem as microservices: you start to accumulate a lot of implicit dependencies that you must document.


Beyond that, you often realize you haven't made the logical split of components in quite the right place, or the "right place" for that split changes over time. Then you've got the fun task of moving functionality from one component to another, which is extremely expensive in development time.

(or, more often than not, because of the cost of doing this, you end up putting up with a slightly batshit-insane design...)


What kind of implicit dependencies?


Service A sends a message to Service B, so Service A is dependent on Service B to function properly.

With careful management, it can be kept to a minimum, but I haven't yet seen that work for systems with a large number of developers working on them.


Or in my experience, Service A sends a message to Service B, which sends a message to Service C, which sends a message to Services D, E, and F, and Service F sends another message to Service C, which this time it sends a message to Service G (so at least it's not completely circular), which then hits the database and returns information back up the chain.

I'm exaggerating a little bit, but not too much (on further reflection, I might actually be downplaying it a bit. Some of our services schedule tasks with something like 10 different services for every item processed, and we do tens of thousands a day).

Debugging issues in this mess is not fun, because there are so many places you need to check to see whether each is the source of the failure or not, and a failure in one service could really originate in a different service, so you have to test all the way up and down the chain. For every bug.


That's a problem of badly specified services. You should be able to look at the messages only, and see where a bug starts.
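One common way to make that possible (not something this thread spells out, so the names here are made up) is to stamp a correlation id on each message where it first enters the system and log it in every service, so the path of a single request can be reconstructed from the logs alone:

```python
import logging
import uuid

def new_message(payload):
    # stamp a correlation id where the request first enters the system
    return {"correlation_id": str(uuid.uuid4()), "payload": payload}

def handle(service_name, msg):
    # each service logs the id before doing any work; grepping the logs
    # for one id then shows exactly where in the chain a bug starts
    logging.info("%s received %s", service_name, msg["correlation_id"])
    return msg  # the same id is passed along to downstream services
```

The key point is that the id is created once and never changed, so every log line for one request shares it.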

But then, I'm mostly against microservices because they lead to harder problems on every place. Documentation isn't even the worst.


But I was under the impression that the point of message brokering is that it doesn't have to be service A that puts it there; only that some service puts the message there.

I feel like the article is attacking message brokering by discussing the disadvantages of bad use cases for them. A good use case for message brokers is when work needs to be done on an item, but not immediately.

My company uses them in a way I believe is quite effective. We pull in data from an external source, and send the ID of the item to about five different queues to do different tasks. Each time one of the queues finishes the work, it sends a message to a validator that checks to see if all the work is done. If it isn't, it waits to get that message again from another worker. If it is, it marks that item as ready for the end user.
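A minimal sketch of that validator logic, with made-up task names and the queue plumbing omitted (a real validator would keep this state in a database, not in memory):

```python
REQUIRED_TASKS = {"parse", "enrich", "validate", "thumbnail", "index"}  # hypothetical

completed = {}  # item_id -> set of finished tasks (in-memory, illustration only)
ready = []      # items marked ready for the end user

def on_task_done(item_id, task):
    """Called once per completion message pulled off the validator's queue."""
    done = completed.setdefault(item_id, set())
    done.add(task)  # duplicate messages are harmless: adding to a set is idempotent
    if done >= REQUIRED_TASKS and item_id not in ready:
        ready.append(item_id)
```

Because the completion check is idempotent, redelivered messages (which most brokers allow) don't mark an item ready twice.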

These are the kinds of use cases I think message brokers should be used for. Not to send a message and wait to get an answer back. Why not just use an HTTP request for that?


> But I was under the impression that the point of message brokering is that it doesn't have to be service A that puts it there; only that some service puts the message there.

It doesn't matter which service puts the message there; the issue is more about debugging. If Service A does not work correctly, where is the bug? Is it Service A? Service B? The network?

The use at your company is idiomatic to the paradigm. You have n different units of work that can run separately, so you do that and communicate with messages.


If you are using messages correctly, it shouldn't be difficult to debug. You have an input and output for each service and you see where something happens differently than expected. I'm not sure where you are saying the difficulty comes from.


That matches my experience. I've seen a fairly common learning process: someone adopts a message queue, decides it's great and uses it all over the place, and then spends a while working through the various failures that didn't happen in their development/testing environment, so they're making decisions about how to handle dropped messages, duplicates, etc. in a rush.

It's not that hard to do, but it seems to take people by surprise, and it isn't helped by some poor defaults: almost everything in the RabbitMQ ecosystem silently blocks when the queue fills up. (This probably happened transiently multiple times before it hit the level where it caused a visible outage, but how many people will notice that if the default isn't to log or raise an error?)
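The difference between the two defaults is easy to demonstrate with Python's stdlib queue standing in for a broker's bounded queue (this is an analogy, not RabbitMQ's actual API):

```python
import logging
import queue

q = queue.Queue(maxsize=2)  # small bound to make the "full" case easy to hit

def publish_blocking(msg):
    q.put(msg)  # silently blocks the caller when the queue is full

def publish_failfast(msg):
    # fail loudly instead: drop (or divert) the message and log it
    try:
        q.put_nowait(msg)
        return True
    except queue.Full:
        logging.error("queue full, dropping message: %r", msg)
        return False
```

With the blocking version the producer just stalls with no indication of why; with the fail-fast version you at least get a log line to alert on before it becomes a visible outage.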



