
I used inverse transform sampling extensively in my thesis. I used it for fitting distributions to data (new methods) and for similarity analysis of CDFs. In the software engineering field I am frequently appalled by how often basic probability techniques are ignored in favor of just the mean and standard deviation.
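For anyone unfamiliar with the technique: inverse transform sampling turns uniform random numbers into samples from an arbitrary distribution by pushing them through the inverse CDF. A minimal sketch (my own illustration using the exponential distribution, whose inverse CDF has a closed form; not a method from the thesis):

```python
import math
import random

def sample_exponential(rate, n, seed=0):
    """Draw n samples from Exponential(rate) via inverse transform sampling.

    The exponential CDF is F(x) = 1 - exp(-rate * x), so its inverse is
    F^-1(u) = -ln(1 - u) / rate. Feeding Uniform(0, 1) draws through F^-1
    yields samples from the target distribution.
    """
    rng = random.Random(seed)
    return [-math.log(1.0 - rng.random()) / rate for _ in range(n)]

samples = sample_exponential(rate=2.0, n=100_000)
mean = sum(samples) / len(samples)  # should be close to 1/rate = 0.5
```

The same recipe works for any distribution whose inverse CDF you can evaluate, even numerically (e.g. by bisection on a tabulated empirical CDF).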


For any individual, there is limited time to study material. Any time allocated to studying, say, statistics is time not spent studying, say, a new programming language paradigm.

I am also surprised by how lightly some computer engineers take mathematics and statistics. Even so, I understand that there is a pragmatic case for doing one field well (computer engineering) rather than spreading effort across several. If knowing the mean and variance suffices for the work, then that is good enough.


I'm curious if you could elaborate on this more. What other probability techniques would you recommend knowing?


I'd recommend knowing about bootstrapping[1]. There must be simpler articles, but the basic idea is to randomly sample with replacement to generate new datasets, which can then be used to estimate the variability of statistics computed on the original dataset.

[1] https://en.wikipedia.org/wiki/Bootstrapping_(statistics)
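A minimal sketch of the percentile bootstrap described above (names and the toy dataset are my own, chosen for illustration): resample with replacement, recompute the statistic each time, and read a confidence interval off the empirical quantiles.

```python
import random
import statistics

def bootstrap_ci(data, stat, n_resamples=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a statistic.

    Draws n_resamples datasets of the same size by sampling `data` with
    replacement, evaluates `stat` on each, and returns the empirical
    alpha/2 and 1 - alpha/2 quantiles of those values.
    """
    rng = random.Random(seed)
    n = len(data)
    stats = sorted(
        stat([rng.choice(data) for _ in range(n)])
        for _ in range(n_resamples)
    )
    lo = stats[int((alpha / 2) * n_resamples)]
    hi = stats[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

# Toy example: a bootstrap interval for the median, a statistic whose
# sampling distribution has no simple closed form.
data = [2.1, 2.5, 2.3, 3.0, 2.8, 2.2, 2.9, 2.4, 2.6, 2.7]
lo, hi = bootstrap_ci(data, statistics.median)
```

The appeal is that the same few lines work for medians, trimmed means, correlation coefficients, or any other statistic, with no distributional assumptions beyond the sample itself.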



