Hacker News

Exactly. I expect people to know, be proficient with, and use all relevant tools available to them. Including AI; but not just AI. There are a lot of people trying to argue why they don't want/need to use certain tools. A job interview would be the wrong place to have that argument. A good, open interview question these days would be asking candidates how they are using AI in their work and what challenges they are facing with it.

Say you are a Python developer working on some FastAPI REST service. Do you 1) ask ChatGPT to generate documentation for your API, 2) do this by hand, or 3) routinely skip that sort of thing? 1) would be the correct answer; 2) would be you being inefficient and slow; 3) would be lazy. It takes a minute to generate perfectly usable documentation. Tweak it a little if you need to and the job is done.

Bonus points for generating most/all of the endpoints. I did that a few weeks ago. And the tests that prove it works. And the documentation. And the bloody README as well. Slightly tedious and you need to be good at prompting to get what you need. But I managed.

Artisanal software is not going to be a thing for very long. I expect people to get standard stuff like that out of the way efficiently; not to waste days doing things manually.

I would also encourage candidates to use ChatGPT during coding tests. If I actually believed in those, which I don't. I'd be more interested in their ability to understand the challenge than in their ability to produce a working solution from memory. Use tools for that.



> It takes 1 minute to generate perfectly usable documentation.

Ok. So why generate it at all? Just have the user generate it when needed, or automate it as part of the build. Seems like option 3, the lazy option, might be the right one.
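To make the "automate it as part of the build" option concrete: with nothing but the stdlib you can render documentation from docstrings on demand, so no generated output needs to be checked in. A rough sketch; the helper and demo names here are made up for illustration, not from any real project.

```python
# Rough sketch of "generate docs when needed": render Markdown from
# docstrings with nothing but the stdlib. Helper names are invented.
import inspect
import types

def render_markdown_docs(module) -> str:
    """Build a Markdown page from a module's public function docstrings."""
    lines = [f"# {module.__name__}", ""]
    for name, obj in inspect.getmembers(module, inspect.isfunction):
        if name.startswith("_"):
            continue  # skip private helpers
        lines.append(f"## {name}{inspect.signature(obj)}")
        lines.append(inspect.getdoc(obj) or "(undocumented)")
        lines.append("")
    return "\n".join(lines)

# Demo: a throwaway module with one documented function.
demo = types.ModuleType("demo")
def greet(name: str) -> str:
    """Say hello to someone."""
    return f"Hello, {name}!"
demo.greet = greet

print(render_markdown_docs(demo))
```

Run this as a build step and the docs can never drift from the code, because they do not exist independently of it.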


I agree. But there is one thing I cannot quite put my finger on...

I mean, we know how - and why - to write documentation. We have all the basic skills and just use LLMs to automate that for us. These skills, however, were won through endless manual labour, not by reading about it. We practiced until we could do it with our eyes closed.

Where will the next generation come from? I appreciate most companies don't have to worry about this, but if I were the head of a very large multi-generational enterprise I would worry about the future of knowledge workers. AGI better pan out or we are all fucked.


> Bonus points for generating most/all of the endpoints.

Swagger has solved that one for years now. In any case I think it's foolish to have multiple competing "sources of truth"... in the worst case you have stuff written on your website/Confluence/wiki, some manually written docs on Dockerhub, a README in the repository, an examples folder in the repository, Javadoc-based comments on classes and functions, Swagger docs (semi-)autogenerated from these, inline documentation in the code, years worth of shit on StackOverflow and finally whatever garbage ChatGPT made out of ingesting all of that.

And (especially in the Javascript world) there's so much movement and breakage that all these "sources of truth" are outdated - especially StackOverflow and ChatGPT - which leaves you as the potential user of a library/API often enough with no other choice than to delve deep into the code, made even messier by the intricacies of build systems, bundlers and include/module systems (looking at you Maven/Gradle/Java Modules and webpack/rollup/npm/yarn/AMD/CommonJS/ESM). The worst nightmare is "examples" that clearly haven't been run in years; otherwise they would reflect the breaking API change whose commit landed five years ago. That's a sure way of sending your users into rage fits.

IMHO, there is only one way: your "examples" should be proper unit/integration/e2e tests (that way you can show your users how you intend for them to use your interface, and you'll notice when your examples are broken because they are literally your tests!), and your documentation (no matter the format, be it HTML, Markdown or Swagger) should all be auto-generated from in-code Javadoc/whatever. Inline comments should be reserved for stuff only someone intending to work on the actual library needs to know, to describe why you took a certain way of implementing a Thing (say, to work around some sort of deficiency) - think of the infamous "total hours wasted here" joke [1].
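A toy illustration of the "your examples are your tests" point; the slugify helper here is invented for the sketch, not from any real library.

```python
# Toy illustration of "examples are tests": the usage examples for this
# invented slugify helper are literally the test functions, so a breaking
# change cannot leave them silently stale.

def slugify(title: str) -> str:
    """Turn a title into a URL slug: lowercase, whitespace -> hyphens."""
    return "-".join(title.lower().split())

# Example usage, doubling as the test suite (pytest picks these up as-is):
def test_readme_example():
    assert slugify("Hello World") == "hello-world"

def test_messy_whitespace():
    assert slugify("  Spaces \t everywhere ") == "spaces-everywhere"

if __name__ == "__main__":
    test_readme_example()
    test_messy_whitespace()
```

The moment the slug format changes, the "example" in the README pipeline fails CI instead of rotting quietly.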

My go-to example, even if it is not perfect (the website is manually maintained, but the maintainers are doing a fucking good job at that!), is the Symfony PHP framework. With the exception of the security stack (that one is an utter, utter nightmare to keep up with), their documentation is excellent, legible, and easy to understand, and the examples they provide on usage are clear and to the point. Even if you haven't worked with it for a year or two, it's easy to get up to speed again.

[1] https://nickyreinert.medium.com/ne-13-total-hours-wasted-her...


> Swagger has solved that one for years now.

I was referring to both the implementation and generating good/complete documentation strings for it, which is tedious and repetitive work. Obviously, I'm using OpenAPI (aka Swagger); support for that is built into FastAPI. That's one of the reasons I picked it.

The tests are also generated, but I screened them manually and iterated on them with ChatGPT ("also test this", "what about that edge case", etc.). I know how to write good tests. But generating them is a far better use of my time than writing them manually. Especially spelling out all the permutations of what can go wrong and should be tested, and asserting for that. AIs are better at that sort of thing.

These are simple CRUD endpoints. I've written loads of them over the years. If you are still doing that manually, you are wasting time.
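The "spell out all the permutations" part looks something like a case table. A hedged sketch of the kind of edge-case enumeration an LLM is good at producing for a CRUD validator; the validator and its rules are invented for this example.

```python
# Sketch of LLM-style edge-case enumeration for a CRUD input validator.
# The validator and its rules (3-20 chars, alphanumeric or underscore)
# are invented for illustration.

def validate_username(name) -> bool:
    """Return True for a valid username: 3-20 chars, alphanumeric or '_'."""
    return (
        isinstance(name, str)
        and 3 <= len(name) <= 20
        and all(c.isalnum() or c == "_" for c in name)
    )

# The permutation table: one line per way the input can go wrong (or right).
CASES = [
    ("alice", True),
    ("under_score", True),
    ("ab", False),        # too short
    ("a" * 21, False),    # too long
    ("spaces no", False), # whitespace not allowed
    ("", False),          # empty string
    (None, False),        # wrong type entirely
]

for value, expected in CASES:
    assert validate_username(value) is expected, value
```

Writing the validator takes a minute; exhaustively listing the failure modes is the tedious part worth delegating, then screening by hand as described above.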



