
The database has analysed the query that is being run, and decided that the quickest way to fulfil that query is to take 27MB of data and sort it into a file on disc before reading it back in with the correct order. This is caused by the work_mem setting in the database being set too low, preventing the database from contemplating just sorting the whole thing in memory.
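For illustration, this kind of spill shows up directly in the query plan. The query and numbers below are hypothetical, but the "external merge" sort method is how Postgres reports that it sorted on disk rather than in memory:

```sql
-- Hypothetical query; the plan fragment in the comment below shows the
-- shape of the output, not the actual plan being discussed above.
EXPLAIN ANALYZE
SELECT o.customer_id, sum(o.total) AS spend
FROM orders o
JOIN customers c ON c.id = o.customer_id
GROUP BY o.customer_id
ORDER BY spend DESC;

-- Sort  (cost=... rows=...) (actual time=...)
--   Sort Key: (sum(o.total)) DESC
--   Sort Method: external merge  Disk: 27224kB   <-- spilled to disk
```

If work_mem were large enough to hold the sort, that line would instead read something like `Sort Method: quicksort  Memory: ...kB`.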

The default work_mem setting for Postgres has historically been very low (4MB in recent versions). It's fine for reading single rows from a single table using an index, but as soon as you put a few joins in there it's not adequate. Increasing this limit should be one of the first steps when setting up Postgres.
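A sketch of how to inspect and raise it, using standard Postgres commands (the 64MB value is just an example, not a recommendation for every workload):

```sql
-- Check the current value:
SHOW work_mem;

-- Raise it for the current session only (good for one-off heavy queries):
SET work_mem = '64MB';

-- Or persist it server-wide (requires superuser), then reload the config:
ALTER SYSTEM SET work_mem = '64MB';
SELECT pg_reload_conf();
```

One caveat: work_mem is a per-sort/per-hash limit, not a per-server one, so a single query with several sort or hash nodes, multiplied by many concurrent connections, can use far more than the setting suggests. That's why the default is conservative and why session-level SET is often the safer first move.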


