Hacker News
Ask HN: What can I do with 48GB of RAM?
23 points by zaptheimpaler on Feb 26, 2022 | hide | past | favorite | 59 comments
Hi HN,

After a mobo upgrade, I have ended up with an ungodly 48 GB of 3200 MHz DDR4 RAM. This is a ridiculous amount for me to have on a personal machine. What are some cool things I can do with this much RAM?

All ideas are welcome: video/audio editing, databases, running an OS off a ramdisk, anything.



Run Slack and MS Teams at the same time


2 chats at the same time? I've always wanted to do that, man


With only 48 GB of RAM? Unlikely.


I do it all day long in 16 GB with zero issues; of course it's a Mac, so…


I hope you still feel dirty at least! :)


I have felt dirtier…

I know Teams gets a lot of hate on here, but in a 3 decade career I can tell you it’s far from the worst piece of software an employer has made me use. Lotus Notes gets that top prize. What a piece of crap that monstrosity was…

I’d be willing to bet that pretty much everyone who has used Lotus Notes probably at worst has a “meh, at least it’s not Lotus Notes” opinion of Teams.


Doesn't macOS compress memory on the fly, or something like that?


Yes, but that behaves just like fast swap. You can do the same on Linux with zram.
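For anyone curious, a minimal zram swap setup on Linux looks something like this (the 4G size and zstd algorithm are just illustrative choices; requires root and the zram kernel module):

```shell
# Load the zram module and grab a free device (requires root).
modprobe zram
# Create a 4G compressed RAM block device; zstd is a common algorithm choice.
DEV=$(zramctl --find --size 4G --algorithm zstd)
# Format it as swap and enable it with higher priority than any disk swap.
mkswap "$DEV"
swapon --priority 100 "$DEV"
```

Most distros also ship a packaged version of this (zram-generator, zram-config) so you don't have to wire it up by hand.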


No idea…I’m long past the point where I get deep into the gears of the devices I use, I just want them to work when I need them to work.


Man ain't that the truth. I think it just means we're getting old.


But what if I wanted to browse internet at the same time?


Lol.. good one! Also add chrome to that list


This is psychopath behavior though. I think the feds have a watchlist waiting for you.


I see you mentioned running an OS off a ramdisk. I recommend this, just to see how incredibly fast it can be.

And also how incredibly not-fast. The fact is that most applications are memory-bandwidth bound once you eliminate the disk as a bottleneck, not CPU bound. So when you run off a ramdisk, it's not actually helping as much as I thought it would.

But! One really neat thing you can do is to save VM checkpoints, so that backing up your computer is as simple as checkpointing the VM. So there are other advantages.
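With libvirt/QEMU, for example, checkpointing can be a one-liner (the VM name "mydesktop" is made up; assumes the guest is managed by libvirt):

```shell
# Snapshot the running guest, including memory state (requires libvirt).
virsh snapshot-create-as mydesktop pre-upgrade --description "before tinkering"
# List snapshots, and roll back later if something breaks.
virsh snapshot-list mydesktop
virsh snapshot-revert mydesktop pre-upgrade
```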

Doing some video editing is fun too, and 3D modeling. Ever want to dabble with ZBrush? Now's your chance. Get yourself a nice big monitor and Wacom tablet. Yum.

(And then, y'know, set the hobby down and never touch it again, just like the rest of us. But it's fun while it lasts.)


Hey sillysaurus, thanks for the ideas. Incidentally, I did recently go on a little Procreate drawing kick, so ZBrush sounds perfect. BTW I really appreciate your writing and community building online.


Can you please describe an example of where an application might be memory bandwidth bound, and what engineering techniques might be used to circumvent this restriction?


In modern times, it's hard to describe an example where an application isn't memory bandwidth bound. It's basically the primary bottleneck.

Most programs spend little time doing computation, or reading I/O. Everyone knows I/O is expensive, so it's minimized.

But there's no getting around the fact that every time you want to do anything at all, you have to shuffle around memory. There's no choice.

One way to circumvent this restriction is to make memory faster. This is difficult with traditional approaches.

I was going to point to Memristors as a possible way forward, but honestly I don't know enough about the subject.

We're getting to the point where we're speed-of-light bound, I believe. I.e. running up against fundamental limits.

Still, there's a lot of room. One interesting thing is to read Feynman's lectures on computation: https://theswissbay.ch/pdf/Gentoomen%20Library/Extra/Richard...

He points out that a reversible computer is actually the most efficient, from an energy perspective. But the tradeoff is that things take more time. If you want to take less time, it generates more heat. And more heat means inevitable delay.



This is the kind of blog post that just ignites a passion for serious software engineering.


In linux-land, sometimes I'll use a 2-3G ramdisk (tmpfs) for `$HOME/.cache` just to reduce wear-and-tear on my SSD. The web browsers put a ton of junk there, and I rarely reboot my machine.
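For the curious, a sketch of that setup via /etc/fstab (the home path, uid/gid, and 3G size are illustrative; adjust to your own user):

```shell
# /etc/fstab entry: mount ~/.cache as a 3G tmpfs owned by uid/gid 1000.
# tmpfs lives in RAM (spilling to swap under pressure), so it's wiped on reboot.
tmpfs  /home/alice/.cache  tmpfs  rw,size=3G,uid=1000,gid=1000,mode=0700  0  0
```

Or do it ad hoc with `sudo mount -t tmpfs -o size=3G tmpfs ~/.cache`.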


I would be surprised if managing disk cache by hand is going to beat the Linux cache allocator. RAM is never wasted, every byte not being used by an application is used by the kernel for disk cache.

If you dedicate 2 gigabytes of it to the .cache folder, then either it's going to be mostly empty and you'll be causing more thrashing as the kernel unloads stuff it didn't need to, or it fills up and your system falls over when something tries to put a big temporary file in that folder.


Then your cache is lost on every reboot... Evolution (mail client) by default stores fetched email in there (I moved it to other place for backup purposes). It'd suck to have your mail client re-fetch emails every time.


I use webmail, so haven't had any problems. Seems like something under `$HOME/.local`, or maybe its own dotdir, would be a better place for downloaded messages.


How often do you reboot linux? I go months between reboots, and that's with crappy hardware.


Enjoy your unfixed CVEs.

About weekly at the longest.


Same. Also for when compiling packages from source and whatnot.


Whoa. Super interesting! Maybe even useful for compiling during development ... develop in workspace, use unison to copy over to the ramdisk, then do all builds from the ramdisk dir?

Would you agree with this article's recommendations regarding ramdisk setup? https://www.linuxbabe.com/command-line/create-ramdisk-linux There seems to be controversy in the comments as to whether tmpfs is a proper ramdisk - although no clear tutorial as to a better method. Interested to learn more!


Strangely, I've gotten slightly better performance out of an ordinary filesystem (XFS) than tmpfs. Perhaps filesystem caching is more performant at the moment? I've never really dug into the bowels of it. Any kernel FS gurus have an answer?


Yeah, the two terms get used interchangeably a lot, but the top commenter is probably technically correct.

In this case I'm just using tmpfs as outlined in your link. Keep in mind tmpfs can actually swap when out of space, if that matters to you.
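For completeness, rolling a tmpfs "ramdisk" by hand is a single mount command (mount point and 8G size are arbitrary; requires root):

```shell
# Create a mount point and mount an 8G tmpfs on it.
sudo mkdir -p /mnt/ramdisk
sudo mount -t tmpfs -o size=8G,mode=1777 tmpfs /mnt/ramdisk
# ramfs is the variant that can never swap out -- but it also has no size cap,
# so a runaway writer can eat all your RAM:
#   sudo mount -t ramfs ramfs /mnt/ramdisk
```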


Good idea, I'll try that, thanks.


>just to reduce wear-and-tear on my SSD

SSDs can handle a lot of writes. There's no need to do that.


It's not necessary, but it is beneficial. Why not increase throughput and decrease latency of storage?


Have fun with in-memory columnar databases or SQL engines and see how fast they are (the ones that use CPU-efficient data structures and data element access, plus SIMD processing). For example Apache Arrow datafusion [1]

Edit: Also, run a cluster of anything (in VMs or containers) and muck around killing individual cluster nodes or just suspending them/throttling them to be extremely slow to simulate a brownout/straggling cluster node.

[1] https://github.com/apache/arrow-datafusion


Make a relatively simple application, say an async video chat app. Build it with 'micro'services for everything (e.g. thumbnail generator, contacts, groups, sending, receiving, email/sms notifications). Deploy all of them in containers with redundancy and use a distributed datastore in VMs (to simulate separate machines, run some of them in different timezones).

Alternatively, try running Elasticsearch to index something.


You can devote 1/3 of it to CISA's Malcom, which has a minimum requirement of 16GB: https://github.com/cisagov/Malcolm

As for the other 2/3... ZFS, Google Chrome, or Electron apps maybe?


Looking at that repo I would say it would take several gigs just to load the README...


If you like playing with different things (Operating systems, misc software) - virtual machines are fun.

I allocate 32 of my 128GB to 'hugepages' - basically reserved areas of memory for the VMs (or other capable software) to use. It helps performance a bit.

Aside from that, I make pretty liberal usage of tmpfs (memory backed storage) for various things. When building software with many files it can make a big difference.

Fedpkg/mock will chew through 40-50GB depending on what I'm building/the chroot involved
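If anyone wants to try the hugepages approach, a rough sketch (the page count assumes the default 2 MiB huge page size, so 16384 pages ≈ 32 GB):

```shell
# Reserve 16384 x 2 MiB huge pages (~32 GB) at runtime (requires root).
sysctl vm.nr_hugepages=16384
# Make it persistent across reboots.
echo 'vm.nr_hugepages = 16384' > /etc/sysctl.d/hugepages.conf
# Check how many were actually reserved (memory fragmentation can limit it).
grep HugePages /proc/meminfo
```

Doing this early after boot helps, since the kernel needs contiguous memory to satisfy the reservation.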


Install Qubes OS: https://qubes-os.org. Sometimes my 32 GB is not enough.


Qubes is really quite amazing when you have enough memory.

Your post may sound like you're mentioning it as a joke, but it's really something more people should use.


My main workstation has 192GB of RAM (also running twin 20-core Xeon Golds plus a shedload of SSDs/HDDs/NVMes)... Anyway, long story boring, I run VMs for dev/test as required... That's the main reason I got such a machine. [EDIT] I am running Windows Server 2022 on this box, not a standard Windows desktop OS, and use Hyper-V for the hypervisor...


Open a second tab in chrome


1/ If you're into ML, hosting vector search databases can be expensive.

At a super high level, an ML algorithm converts content (text, images, or audio) into vectors (aka embeddings). Similar content should generate similar embeddings, so more RAM lets you keep more embeddings in memory and search over more of them. Lots of RAM also makes training models easier.

2/ Data leaks can be fun to explore, but are often gigabytes of data. More ram makes them faster to query.


A normal desktop machine for a 3D visual effects artist doing complex shots and/or (offline) rendering of those commonly has (at least) 64GB of RAM nowadays.

Get Houdini Apprentice and if you like it, upgrade to Houdini Indie and get a free 3Delight license for rendering and try to max that combo out.

Clarisse is another option.


https://github.com/myspaghetti/macos-virtualbox

Put macOS and Linux (I recommend Ubuntu MATE for desktop) on there as a couple of VMs and poof, a much less ungodly 16 GB of RAM per OS.


Assuming a decent OS, all that RAM is already full of filesystem caches, so you are already benefitting from it.

Just use your computer as you planned. There is nothing "cool" about large memory applications anyway. But maybe next time, don't waste money on RAM you don't really need.


What's the point of being a technologist if you don't waste money on things you don't need? At least in this one area of life. :)

You're not wrong though.


This isn't that much... If you want more, you can go to your favorite cloud provider and spin up many hundreds of gigs of RAM in an instant. Of course it will be slower in the cloud. But still... not much use for this on a personal machine, IMO.


Spin up a few VMs, build a Kubernetes cluster.

Alternatively, create a VM with an interesting application (for example a Genera VLM) and run that in the background with your other stuff.

Run several Electron apps at once (VSCode, Teams, Slack, etc.).


Install Gentoo and set up a tmpfs on 2/3 of your RAM for compiling.
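Concretely, that usually means an fstab entry like this (32G is the "2/3 of 48" figure; the path is Gentoo's default build directory):

```shell
# /etc/fstab: build packages in RAM instead of hammering the SSD.
tmpfs  /var/tmp/portage  tmpfs  size=32G,uid=portage,gid=portage,mode=0775  0  0
```

Big packages (gcc, chromium, libreoffice) can genuinely need tens of gigs of build space, so 32G is not overkill here.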


Chess analysis is one of the most RAM consuming processes you can try.


Put /tmp on tmpfs if it isn't already, and use profile-sync-daemon.


Try running Chrome with multiple tabs :)


24 Chrome tabs?


Run firefox


I'm running 6 Firefox windows with 30 active tabs (out of about 200 open but inactive tabs) across 4 virtual desktops and it's running 12 processes and using 4 GB of RAM. I'm running Chromium with 2 tabs and it's running 17 processes and using over 1 GB of RAM.


Firefox has caused me to run out of memory multiple times and I have 64 GiB of RAM. I don't use chrome beyond a couple tabs at a time so I personally can't speak on chrome's requirements.


I'm running it on Linux.


Weird approach to the subject. If you didn't have an application in mind, why did you upgrade the computer at all?


...or at least, why did they buy more RAM?



