
Normally in a library you'd provide functions that allocate and free structures defined by that library. That's how successful C libraries do it.

I think semver covers this pretty well actually. The public API is not just the function signatures, but basically anything that makes your API - whether it's code or documentation. So either:

- your structs in the library are opaque and the library handles all the memory management itself - addition of fields is backwards compatible, or

- your structs are open and your functions expect the structs from outside - addition of fields is not backwards compatible

The second one still falls under "Major version X (X.y.z | X > 0) MUST be incremented if any backwards incompatible changes are introduced to the public API."

To make a comparison in a dynamic language: you've got public function `f(d)` where `d` is some dictionary with elements `a` and `b`. Now in the next version you start to require `c` - same signature in the code, but it's not backwards compatible. It's still a public API, even if defined by the documentation.
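The two struct styles above can be sketched as follows. This is a minimal illustration with hypothetical names (`Widget`, `widget_create`, etc.), not an API from any real library; in a real project the declaration and definition would live in separate header and source files as the comments indicate.

```cpp
// widget.h -- public header: the struct is only declared, never defined,
// so callers cannot depend on its size or layout (the opaque style).
struct Widget;
Widget *widget_create();                      // library allocates
void    widget_set_id(Widget *w, int id);
int     widget_get_id(const Widget *w);
void    widget_destroy(Widget *w);            // library frees

// widget.cpp -- private definition; fields can be added in a minor
// release without breaking callers compiled against the old header.
struct Widget {
    int id;
    // int new_field;  // safe to add later: only the library sees this
};

Widget *widget_create() { return new Widget{0}; }
void    widget_set_id(Widget *w, int id) { w->id = id; }
int     widget_get_id(const Widget *w) { return w->id; }
void    widget_destroy(Widget *w) { delete w; }
```

In the open style, by contrast, the full `struct Widget { ... };` definition would sit in the header and callers would allocate it themselves, so adding a field changes the size every caller compiled in and requires a major version bump.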



> your structs are open and your functions expect the structs from outside - addition of fields is not backwards compatible

Sometimes it's possible to handle this case (if you've planned ahead) by including unused space within the original struct definition that you can then repurpose later.
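A sketch of that reserved-space trick, with hypothetical field names (some real C APIs reserve pointers or bytes in exactly this way):

```cpp
// Version 1.0: the struct is part of the public API, so its size is
// fixed, but padding is reserved for future use. Callers are told to
// zero-initialize the whole struct.
struct Config {
    int   timeout_ms;
    int   retries;
    void *reserved[4];  // unused in 1.0; must be zero
};

// Version 1.1 could then repurpose a slot without changing the size:
//
// struct Config {
//     int   timeout_ms;
//     int   retries;
//     void *log_callback;  // was reserved[0]; zero means "unset"
//     void *reserved[3];
// };
```

Because zeroed reserved space reads as "feature not requested", old callers that zero-initialize keep working unchanged against the new version.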


> That's how successful C libraries do it.

What about C++? Object allocation is always handled by `new` on the consumer side, isn't it? Also, in C, does the order of function declarations matter?


I don't do C++ normally, but the situation seems similar. There's http://wiki.c2.com/?PimplIdiom for opaque objects, so you don't have to make their members part of the public API. But for things you do want to make public it's the same solution - make it part of your public API by documenting it. If you know a change breaks compatibility, it requires a major version bump.

And no, after compilation, I don't believe the symbol order matters.
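A minimal sketch of the pimpl idiom linked above, using hypothetical names (`TextBox` here is illustrative, not a real Qt class); in practice the two halves would be split across a header and a source file as the comments show:

```cpp
#include <memory>
#include <string>

// textbox.h -- only a pointer to the implementation is exposed, so the
// private members can change without breaking or recompiling clients.
class TextBox {
public:
    TextBox();
    ~TextBox();
    void set_text(const std::string &s);
    std::string text() const;

private:
    struct Impl;                  // defined only in textbox.cpp
    std::unique_ptr<Impl> impl_;  // the "pointer to implementation"
};

// textbox.cpp -- the real members live here, hidden from the ABI.
struct TextBox::Impl {
    std::string text;  // fields can be added freely in later versions
};

TextBox::TextBox() : impl_(new Impl) {}
TextBox::~TextBox() = default;  // defined where Impl is complete
void TextBox::set_text(const std::string &s) { impl_->text = s; }
std::string TextBox::text() const { return impl_->text; }
```

Note the destructor is defined in the source file, after `Impl` is complete: `std::unique_ptr` needs the full type to delete it, which is why pimpl classes declare `~TextBox()` in the header but define it out of line.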


The pimpl method looks like it adds a level of indirection to every private member access. Is that an issue in practice?


Qt uses it extensively. For big, complex objects like QTextEdit it's probably the least of your concerns, but they do avoid it for simple value types like QPoint.


A good chunk of these questions I'm asking are because I half remember Qt's binary compatibility guidelines from when I did an internship at Trolltech many years ago :) A similar list is here: https://community.kde.org/Policies/Binary_Compatibility_Issu... . It looks like the virtual function restrictions are what I was talking about, that's where pure additions can break compatibility.


If you worked at Trolltech, you almost certainly know more than me. I was just a user of Qt who peeked under the covers occasionally.


It's an issue if the methods on that object are short-lived enough that the overhead of indirection is significant.

It's also an issue if very performance-sensitive routines need to read data from the object. But in that case those routines already know exactly what's in there, so "pimpl" hiding has no benefit in the first place.

Implementations should be hidden if they are likely to change. In other words, there are non-obvious data or method members and/or it's likely that functionality will be added in the future. These are the high-level architectural situations, not the leaf-level ones where you optimize out every cycle possible.



