
I've been thinking about this. The distinction between IO-bound and compute-bound tasks helped the design in some ways. But as time goes by, the code you are dealing with can bounce between the two modes, and that is where all the limitations that once seemed fine for you start to turn bad.

That's what I suspect happened in the Python / scientific-computing community. They thought (and some people in this thread still think) that Python could be the driver that waits for compute-bound tasks to finish (effectively making the Python code IO-bound, i.e. waiting for another language to finish). But over time that changed. People wrote more and more code in the Python driver, and suddenly the highly optimized C code is no longer the bottleneck; the Python code is.
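A minimal sketch of that shift (assuming numpy is available; the function names here are illustrative, not from any particular codebase): the same arithmetic is cheap while it runs inside an optimized C kernel, with Python merely waiting on the result, but becomes the bottleneck once it migrates into Python-level driver code.

```python
import time
import numpy as np  # assumption: numpy installed

def c_heavy(arr):
    # Compute-bound work done inside numpy's C kernels:
    # the Python driver just dispatches and waits.
    return np.sqrt(arr) * 2.0

def python_heavy(arr):
    # The same arithmetic expressed as a Python-level loop:
    # now the interpreter, not the C kernel, does the work.
    return np.array([(x ** 0.5) * 2.0 for x in arr])

arr = np.arange(1_000_000, dtype=np.float64)

t0 = time.perf_counter()
fast = c_heavy(arr)
t_c = time.perf_counter() - t0

t0 = time.perf_counter()
slow = python_heavy(arr)
t_py = time.perf_counter() - t0

print(f"numpy (C) path:   {t_c:.4f}s")
print(f"pure-Python path: {t_py:.4f}s")
```

On a typical machine the pure-Python path is one to two orders of magnitude slower for the same result, which is the point: nothing about the problem changed, only where the work lives.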

We've seen this bouncing pattern enough times that we really need to think through whether we want two different design patterns for IO-bound vs. compute-bound problems.


