2GB vs 4GB RAM


Not obsolete yet either, is it?
Tell that to my Pandora... it's getting drunk every day now...

 
I did not find those 2GB videos to be persuasive since they didn't simulate real browsing. They just loaded a bunch of pages and did not portray interacting with them.

For web application use, 2GB is the bare minimum today, particularly if you use Firefox with JavaScript. You will hate yourself 2 years from now if you don't get 4GB. The Pyra CPU is already quite outdated so why make yourself have to worry about another performance bottleneck?

The difference in power draw between the two in normal use (i.e. with the screen on) will be minimal.

If you are not someone who uses lots of browsing tabs or virtual machines, then 2GB wouldn't be so irritating.
 
Yeah, I'd like to see some more comprehensive tests, such as the Discord web app, Amazon.com, and other stuff with lots of scripting.
 
I did not find those 2GB videos to be persuasive since they didn't simulate real browsing. They just loaded a bunch of pages and did not portray interacting with them.

For web application use, 2GB is the bare minimum today, particularly if you use Firefox with JavaScript.
That's why I use NoScript, so I don't have to spend any cycles letting Google or Facebook etc. track me, or spend any cycles beyond displaying a simple image on your advertising to me.
You will hate yourself 2 years from now if you don't get 4GB.
I plan to get another CPU board in around 2 years. By then I'll have a much better idea of whether 2GB is enough or not.
 
I did not find those 2GB videos to be persuasive since they didn't simulate real browsing. They just loaded a bunch of pages and did not portray interacting with them.

For web application use, 2GB is the bare minimum today, particularly if you use Firefox with JavaScript. You will hate yourself 2 years from now if you don't get 4GB. The Pyra CPU is already quite outdated so why make yourself have to worry about another performance bottleneck?

The difference in power draw between the two in normal use (i.e. with the screen on) will be minimal.

If you are not someone who uses lots of browsing tabs or virtual machines, then 2GB wouldn't be so irritating.
Well "They" as you put it would be me, I spent a couple years using that OMAP5 devboard as a primary workstation on my electronics bench, it handles web browsing fairly well, I typically only have a half dozen of tabs open even on my powerful desktop. I find anymore open at once is a distraction. A 2GB OMAP5 would be fine for just everyday browsing, it's a handheld a typical sane person doesn't need 20+ tabs open at once, bookmarks exist for a reason. Some observations I have, so I'm running 64-bit x86 Debian Stretch with MATE on my desktop, for some reason it's using ~800 MBs just idling at the desktop no windows open, no background services. Running 32-bit armhf Debian Stretch on the OMAP5 devboard with MATE idles ~80-100MBs of usage. So in my opinion 2GB goes farther in an armhf environment.
 
Some observations: I'm running 64-bit x86 Debian Stretch with MATE on my desktop, and for some reason it uses ~800 MB just idling at the desktop with no windows open and no background services. 32-bit armhf Debian Stretch with MATE on the OMAP5 devboard idles at ~80-100 MB. So in my opinion, 2GB goes further in an armhf environment.

Debian preemptively caches HDD contents to RAM, which makes it difficult to see how much RAM the system actually requires. The more RAM the workstation has, the more of it gets used for caching. I think it might top out around 2.5GB; that is about what my system rolls up to a few minutes after booting (with no interaction other than opening Gkrellm). It has 48GB of RAM total and pulls data from a pair of 500GB SSDs in RAID10.

But understanding how much RAM Debian (and its variants) is using is a lot more complicated than just looking at the memory-in-use numbers. There has to be some way to shut off all device caching; if you find it, let me know. I want to disable the write cache system-wide and force it to finish actually writing before telling me it is done (a whole other issue with current Debian, and super annoying when using removable media).
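For what it's worth, you can at least empty the read caches on demand to get a rough idea of the baseline (a sketch, assuming a reasonably recent kernel with the usual procfs interface; this drops the caches once rather than disabling caching):

    sync                                        # flush any dirty pages first
    echo 3 | sudo tee /proc/sys/vm/drop_caches  # drop page cache, dentries and inodes
    free -h                                     # "used" now approximates what the system really needs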
 
Yeah, you can see the amount used for caching in top or htop, though. top reports it on the "MiB Mem" line, and my htop currently displays it as part of a graph, but that can be changed. free also reports cached memory separately from used memory.
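For anyone skimming later, a quick sketch of where to look (output formats vary slightly between tool versions):

    free -h                        # the buff/cache column is the disk cache; "available" is the figure that matters
    top                            # same information on the "MiB Mem" summary line
    grep -i cached /proc/meminfo   # the raw numbers the tools above are reading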
 
Well "They" as you put it would be me, I spent a couple years using that OMAP5 devboard as a primary workstation on my electronics bench, it handles web browsing fairly well, I typically only have a half dozen of tabs open even on my powerful desktop. I find anymore open at once is a distraction. A 2GB OMAP5 would be fine for just everyday browsing, it's a handheld a typical sane person doesn't need 20+ tabs open at once, bookmarks exist for a reason. Some observations I have, so I'm running 64-bit x86 Debian Stretch with MATE on my desktop, for some reason it's using ~800 MBs just idling at the desktop no windows open, no background services. Running 32-bit armhf Debian Stretch on the OMAP5 devboard with MATE idles ~80-100MBs of usage. So in my opinion 2GB goes farther in an armhf environment.

You misconstrued what I was referring to by my use of "they." The antecedent was the videos, not the person who made them.

It sounds like 2GB is sufficient for you, but from my observations lots of people use lots of tabs. People also often keep tabs open because they are reading at a specific place on a page.

My work involves lots of web applications, video editing, extensive JavaScript debugging and flipping between many tabs. Without working OpenGL this would be bad enough; with only 2GB it becomes much harder still.

The way most web developers are going, I suspect that in a few years almost everyone will hate having 2GB.

ARM binaries are smaller but that doesn't matter if you're required to use a lot of bloated web sites.
 
I want to disable the write cache system-wide and force it to finish actually writing before telling me it is done (a whole other issue with current Debian, and super annoying when using removable media).

YES. Whoever decided we needed to be returned to a prompt before cp is done needs to be ... ... scolded.

What's the freaking point? Why does it even delay a while before fraudulently appearing to finish copying? Why not just return to the prompt immediately after invoking your cp?

Oh wait, we already have & to background tasks if we want. This is a mind-boggling 'feature'.
 
You clearly don't do a lot of scripting. You invoke the cp as part of the script, and when it returns (once it's got a copy of all of the data into a secure buffer that can't be changed, unlike the source file) you can get on with doing something that's CPU-bound or that uses other devices, and your script will complete more quickly and make more efficient use of the hardware. Perhaps that's somewhat less of a concern when Linux is a single-user system, but it's a big win when you've got dozens of users hanging off a single instance.
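A contrived sketch of that pattern (the filenames are made up): the copy returns once the data is buffered, the CPU-bound step overlaps with the background writeback, and an explicit sync at the end is what actually guarantees everything is on the disk.

    #!/bin/sh
    cp big-dataset.tar /mnt/backup/   # returns as soon as the data is safely buffered
    gzip -9 logs/*.log                # CPU-bound work proceeds while writeback continues
    sync                              # block until everything really is on the disk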
 
Whoever decided we needed to be returned to a prompt before cp is done needs to be ... ... scolded.
cp is done at that point: it returns because cp has exited and all of its associated resources have been freed. The write caching is an independent feature of the kernel and its filesystem drivers.

Windows has done the same since Vista.
 
Uhm, you can just disable that by changing a simple number in a config file.
Without caching the drive has to deal with every little changed bit, though; normally all changes are held back for around 5 seconds.
I usually do the opposite: an effectively infinite caching time lets you actually spin down hard drives on a running system, so you get rid of the noise.
I just have to be careful when shutting the machine down.
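Presumably the "simple number" here is one of the vm.dirty_* sysctls; a sketch of both directions (the exact values are a matter of taste, and they need to go in /etc/sysctl.conf or a file under /etc/sysctl.d/ to survive a reboot):

    # flush dirty data almost immediately, which approximates "no write caching"
    sysctl -w vm.dirty_writeback_centisecs=100
    sysctl -w vm.dirty_expire_centisecs=100

    # or the opposite: hold writes in RAM for ages so an idle hard drive can spin down
    sysctl -w vm.dirty_writeback_centisecs=60000
    sysctl -w vm.dirty_expire_centisecs=60000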
 
cp is done at that point: it returns because cp has exited and all of its associated resources have been freed. The write caching is an independent feature of the kernel and its filesystem drivers.

Windows has done the same since Vista.

I was being imprecise, I know...

What *is* the purpose of this kernel write caching? Is it useful for single-user, low-write-frequency systems? Can I disable it on certain devices, like USB sticks and SD cards?
 
You can add the sync and dirsync options to either your mount command if mounting manually, or your fstab if mounting automatically. There's probably a way to do it if you run an automounter too, but I don't generally run one on systems I maintain myself, so I don't know about that. Alternatively, you can run the 'sync' command (from the 'coreutils' package in Arch) any time you need to be sure the caches are flushed, but let it use caches otherwise. Or you can do as I do with removable drives, which is to umount them (umount is in 'util-linux', the same package as 'mount') before removing them.
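A minimal sketch of those two approaches, assuming an example device and mount point:

    # per-mount in /etc/fstab: write everything through as it happens
    /dev/sdb1  /media/stick  vfat  sync,dirsync,noauto,user  0  0

    # or keep normal caching and flush by hand before pulling the stick
    sync
    umount /media/stick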

Edit: As far as I can see there's little benefit to them if you're running a single-user system mostly interactively. Doing it in the mount options will apparently shorten the life of certain SSDs, but probably only really old ones that don't do any sort of wear levelling, and only relatively slightly, I think.

Edit2: On consideration, and as the post below hints at, this could actually worsen the lifespan of flash-based devices in one relatively common scenario: consider the system logging lines to a file. These will rarely be longer than the block size of the flash chips, so writing each one out immediately means the flash chip has to erase the entire block, write the text in, then repopulate the block with all of the data that wasn't overwritten. If you have caching, it's more likely the system can wait until there's at least a full block to write, and any complete block will only need to be blanked and rewritten once, compared to dozens of times for small log updates.
 
What *is* the purpose of this kernel write caching?
There are so many reasons that I'm baffled you can't think of even a single one yourself... Block-based devices (in this context the Pandora's NAND chip is the only thing that is not block-based; even the eMMC chip of the Pyra is block-based due to its internal FTL) can only access a whole sector at a time, which is a PITA if you're only performing small data transfers (especially on modern 2K-sector devices). On top of that there are parallel accesses to the same range of sectors by multiple programs, access-time optimizations that reduce head movement based on the physical layout of HDDs (NCQ), protocol-level optimizations that reduce the number of transactions, and synergies with the device's own caches and cache-like structures such as the SLC buffers in TLC SSDs...

On old HDDs, transferring large amounts of data via dd is usually fastest if you use the largest transfer block size that still fits into the HDD's cache.
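For example (an illustrative invocation rather than a recommendation of these exact sizes; status=progress needs a reasonably recent coreutils):

    dd if=/dev/sda of=disk.img bs=16M status=progress   # large transfer blocks keep the drive streaming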

Is it useful for single-user, low-write-frequency systems? Can I disable it on certain devices, like USB sticks and SD cards?
It's a mount option of the filesystem driver. Like every other mount option, you configure it in your fstab file.

We have access-indicator LEDs that show you when it's done writing; just look at them if you're too lazy to properly unmount your SD cards before ejecting them. You won't make it write any faster by disabling this kernel-side caching. Quite the contrary: it will take even longer, because more transactions are required to do the same amount of work.
 
My primary frustration with write caching is on removable drives.

I drag-and-drop copy 40GB of data from my SSD RAID to a USB stick on my workstation.
30 seconds later the GUI tells me it is done.
It is not. All it has done is read the 40GB into RAM (the system has 48GB).
I can tell using Gkrellm that it hasn't even started writing to the USB stick.
5-10 minutes later I see disk activity start on the USB device. This can also be forced to start by requesting an unmount.
At some point it may or may not actually be done, but it is freakishly hard to know for sure if you miss the momentary "OK to remove" message.
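One way to watch that from a terminal (a rough sketch; the exact figures depend on your system): the Dirty and Writeback lines in /proc/meminfo drop back towards zero once the data has genuinely reached the stick.

    watch -n1 'grep -E "^(Dirty|Writeback):" /proc/meminfo'
    sync && echo "safe to remove"    # or simply block until everything is flushed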
 