Some things I've learned about memory (neugierig.org)
82 points by llambda on June 3, 2012 | hide | past | favorite | 13 comments


> For the best picture I recommend smem, a tool that was written by the same person who added the hooks to the kernel to make PSS computation possible: sudo apt-get install smem.

Who's that? Feels to me like he or she is due some credit.


# yum info smem | grep URL

URL : http://www.selenic.com/smem/

First changelog entry is by "Matt Mackall":

http://selenic.com/repo/smem/shortlog/61?revcount=120


... the lead developer and creator of the Mercurial DVCS.


Offtopic, related to the submission:

Is anyone here able to guess (or even state as fact) what this blog is built on? I've been trying to get into blogging for a long time, and this is the cleanest and by _far_ the fastest site I've seen on HN in a long time (being used to European internet speeds, I often find myself cringing here). What's the secret sauce, if the author isn't just writing static HTML files by hand?


Thanks!

The other guesses are correct. The code is here, but it's undocumented and specific to the site. https://github.com/martine/cms

It's pretty straightforward to write your own such thing (though you must be careful about the details when generating a feed) or use one of the many other static site generators like Jekyll.


It looks like static HTML hosted on NearlyFreeSpeech.


Not sure, but you might want to check out a Markdown-based static site generator. A popular one is Jekyll, which GitHub also uses (it was written by one of GitHub's founders). No database required.

EDIT: evmar answered the question before I posted my reply. That's what I get for keeping HN stories open in tabs.


I think the author is creating static HTML files manually.

He probably has some basic (self-made?) constructs like a templating system, and the front page could be maintained by a simple cron script that looks for new entries.

If you look at the source, it is exceedingly simple. The linked article, for example, is 25 lines of header, the text body, and a call to Google's Urchin.


This is a good article about memory for anybody who doesn't understand it.

Bear in mind that there are many ways memory can be used, including (in the case of some applications) specialized allocators that manage memory within a process, for example SLAB allocation.


He mentions running out of memory addresses when there's still ample physical memory available.

The obvious thought is: We need to create more addressable space to take advantage of this plentiful physical memory.

The less obvious thought is: Given that we have such plentiful physical memory, why are we using an old hack that was designed for a problem that no longer exists?[1] The name of the problem: Not enough physical memory to keep processes in their own space. The name of the hack: Shared memory and dynamic linking.

1. Of course, we can keep making software larger and more resource intensive to perpetuate the problem and thus justify the ongoing use of the solution (shared memory). A comparative example might be disk space. We have so much disk space that we can keep making files larger (e.g. programs, operating systems, document/media files, data collection) to justify a need for more space.

Commence downvoting.


Dynamic linking was a solution to a lot more issues than just a lack of physical memory. It provided convenient ways to ensure consistent modules across otherwise unrelated processes, to upgrade functionality for every process on a platform without altering them, to insulate programs from changes to the underlying platform, and so on.


What is dll/dependency hell?


Just 'cause it is a solution, doesn't mean it doesn't create its own problems. ;-)

I do agree that we may be ready for something cleaner than the old shared-library model. Just don't suggest it was a one-trick pony.



