Question About New RAM Throughput


Phawx

maciek_urbanski wrote:
The default bandwidth, latency, etc. are normally unchanged, unless TI changed something in the process (I don't think so). So far, the information about the 256MB upgrade is limited. Most things like the extra battery drain, etc. are pure guesswork, based on doubling the 128's figures.

All information is on Micron site here(link).
Based on the OMAP3530 specs it seems that the Pandora team switched memory from the 512Mbit (128MB) MT46H16M32LFCG-6(link) to the 1Gbit (256MB) MT46H32M32LFJG-6(link).
Both are x32 LPDDR333, so memory bandwidth is still 4*2*166M = 1266.5MB/s.
Sadly, datasheets are not available for either part (but you can send a request). Looking at similar parts (VFBGA package instead of PoP), power consumption should in fact be greater. Burst read current in the 256MB part is up to 20% greater, but deep power-down current is the same... so it all depends on the usage scenario.

Laurent wrote:

As I like to be off-topic... :D The new Micron 256 MB chips have been tested on a BeagleBoard with OMAP3530 ES3.0 and bring 20% more bandwidth (measured by Mans Rullgard, who has highly tuned assembly routines for memory copying, with and without the PLE). This is probably a combination of enhancements to the RAM chips and of revision r1p3 of the Cortex-A8.

Phawx:
20% more bandwidth than 1266.5MB/s? So the 256MB config runs at 1519.8MB/s? Can you link the source?
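
For what it's worth, here is a quick C sanity check of those figures. This is only a ballpark sketch (buffer size and iteration count are arbitrary), not the tuned assembly Laurent mentions, and the naive memcpy() baseline will land well below the theoretical peak:

```c
/* Sanity check of the figures above: theoretical bandwidth for a 32-bit
 * LPDDR bus at 166 MHz (double data rate), plus a naive memcpy() baseline.
 * This is NOT the tuned assembly Laurent refers to, just a rough sketch. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define BUF_SIZE   (32 * 1024 * 1024)   /* 32 MiB, well past any cache */
#define ITERATIONS 8

int main(void)
{
    /* 4 bytes per transfer * 2 transfers per clock * 166 MHz */
    double peak = 4.0 * 2.0 * 166e6;
    printf("theoretical peak: %.1f MiB/s\n", peak / (1024.0 * 1024.0));
    printf("+20%%:             %.1f MiB/s\n", 1.2 * peak / (1024.0 * 1024.0));

    char *src = malloc(BUF_SIZE), *dst = malloc(BUF_SIZE);
    if (!src || !dst)
        return 1;
    memset(src, 1, BUF_SIZE);           /* touch pages so they are mapped */

    clock_t start = clock();
    for (int i = 0; i < ITERATIONS; i++)
        memcpy(dst, src, BUF_SIZE);
    double secs = (double)(clock() - start) / CLOCKS_PER_SEC;

    /* count bytes read plus bytes written */
    double mib = 2.0 * BUF_SIZE * ITERATIONS / (1024.0 * 1024.0);
    printf("naive memcpy:     %.1f MiB/s\n", mib / secs);

    free(src);
    free(dst);
    return 0;
}
```

That prints 1266.5 MiB/s for the theoretical peak and 1519.8 MiB/s for the +20% figure, which matches the numbers quoted above.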


***


So how is this BeagleBoard showing better performance than the theoretical limit of the RAM? Perhaps he had his BeagleBoard overclocked?

Does anyone know where this 20% increase in bandwidth is coming from?
 
The actual bandwidth isn't going to change just by changing the size of the total memory. The 20% benchmark was on some sort of Ubuntu variant, so I assumed the additional RAM allowed more data to be stored in RAM rather than swap. In other words, Ubuntu (or the Linux kernel) was able to get data faster because it was in RAM and it didn't have to look it up in swap.
 
The clock speed and theoretical throughput are the same, but the CAS latency has probably dropped. CAS latency affects how "responsive" the RAM is as you move between different columns of data. Most applications change columns frequently, so you're likely to get a fairly decent real-world gain.

Some of the other RAM timings may also have changed to get this improvement.

Edit: Totally guessing here, don't take it as gospel.
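
For scale, here is a back-of-envelope sketch of what a CAS latency change would amount to in nanoseconds. Treat it as illustration only, since MWeston notes further down that the new part is still CL3:

```c
/* Back-of-envelope for the guess above: what a lower CAS latency would be
 * worth in wall-clock terms at the 166 MHz the OMAP3530 drives the LPDDR
 * at.  (Later in the thread MWeston confirms the part is still CL3.) */
#include <stdio.h>

int main(void)
{
    double period_ns = 1e9 / 166e6;     /* roughly 6.0 ns per memory clock */

    for (int cl = 2; cl <= 3; cl++)
        printf("CL%d at 166 MHz = %.1f ns to first data\n",
               cl, cl * period_ns);
    return 0;
}
```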
 
The improvement isn't because of better caching in Linux (Laurent's post makes this clear). Bear in mind that he said one of the improvements comes from a revision of the CPU, not the memory, although the memory could have lower latency too.

If I'm not mistaken, the CPU improvement is that it allows critical-word-first L2 cache line reads; i.e., it lets the miss stall end as soon as the needed word is obtained, rather than having to wait for the entire line to be filled. This is an improvement in applications that don't benefit from temporal locality of reference, like any kind of data streaming.
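
To make that concrete, here is a minimal C sketch of the kind of read pattern where critical-word-first helps. The 64-byte line size and the buffer size are assumptions for illustration, not Pandora-specific figures:

```c
/* Minimal sketch of an access pattern where critical-word-first matters:
 * each iteration misses on a fresh cache line and only needs the first
 * word of it right away, so resuming as soon as that word arrives (rather
 * than after the whole line fill) saves time on every miss. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define CACHE_LINE 64                       /* assumed L2 line size */
#define BUF_SIZE   (16 * 1024 * 1024)       /* far larger than the caches */

int main(void)
{
    uint8_t *buf = calloc(BUF_SIZE, 1);
    if (!buf)
        return 1;

    uint64_t sum = 0;
    /* Stride by one cache line: every read is the "critical word" of a
     * line the cache has never seen, so each access starts with a miss. */
    for (size_t i = 0; i < BUF_SIZE; i += CACHE_LINE)
        sum += buf[i];

    printf("%llu\n", (unsigned long long)sum);  /* keep the loop alive */
    free(buf);
    return 0;
}
```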
 
I was going to comment on Laurent's post. (He's responded on the other forum that the bandwidth increase is from the Cortex-A8.)

Thanks Exophase. But I have a question:

Was texture streaming not possible in the 128MB version of Pandora?

and

Besides texture streaming and AV streaming, what other types of scenarios benefit from streaming?
 
Most programs are designed with locality in mind: they put data close to other data that it will frequently be used with.

CPUs optimize for this case: when you read a small object from memory (say, 4 bytes), the CPU will actually read a much larger area (say, 64 bytes) into a cache. This large area is called a cache line. It does this so that subsequent reads near that initial small object will be much much faster.

If Exophase is correct, the old revision of the Cortex A8 forced the program to wait for an entire cache line to load. The new CPU gives you what you requested as fast as possible, while continuing to load the rest of the cache line in the background.

This helps any time you read a small object from a cache line in memory for the first time. Instead of waiting, the program will be able to immediately start working on the data. This helps with all such cases, not just streaming.

In short: it keeps the CPU as busy as possible.

Streaming is when a chunk of memory is used only once. In the streaming case, the memory doesn't need to be cached because there will be no subsequent reads -- it would actually slow things down, because it pushes out a cache line that was actually in use. This optimization has nothing to do with streaming.
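
Here is a small C sketch of the two cases described above. The 64-byte line size and the struct layout are made up for illustration, not Pandora specifics:

```c
/* Case 1 relies on spatial locality inside a cache line; case 2 is a
 * streaming pass that touches each byte exactly once. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

struct particle {              /* 16 bytes: four fit in one 64-byte line */
    float x, y, z, mass;
};

#define N_PARTICLES (1 << 20)
#define STREAM_SIZE (32 * 1024 * 1024)

int main(void)
{
    /* Case 1: spatial locality.  Reading p[i].x pulls a whole cache line
     * in, so p[i].mass (same line) and the next few particles are then
     * nearly free. */
    struct particle *p = calloc(N_PARTICLES, sizeof *p);
    if (!p)
        return 1;
    float weighted = 0.0f;
    for (size_t i = 0; i < N_PARTICLES; i++)
        weighted += p[i].x * p[i].mass;

    /* Case 2: streaming.  Every byte is touched exactly once, so the
     * lines the cache fills are never reused; caching buys nothing and
     * can evict data that was genuinely being reused. */
    uint8_t *stream = calloc(STREAM_SIZE, 1);
    if (!stream)
        return 1;
    uint64_t checksum = 0;
    for (size_t i = 0; i < STREAM_SIZE; i++)
        checksum += stream[i];

    printf("%f %llu\n", (double)weighted, (unsigned long long)checksum);
    free(p);
    free(stream);
    return 0;
}
```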
 
I believe you're right, hch. But that is a quote from maciek_urbanski; it's just an error. Changed the highlighting to fix readability.
 
Phawx said:
I believe you're right, hch. But that is a quote from maciek_urbanski; it's just an error.
Fine, I was just worried you were comparing the wrong specs. Thanks for looking them up, btw.
 
So if I'm following this right, the fact that the memory is 256MB isn't really the change; it's the CPU that was improved. So we could have had 128MB and still had the improved bandwidth?
 
Phawx said:
I believe you're right, hch. But that is a quote from maciek_urbanski; it's just an error. Changed the highlighting to fix readability.
Hi folks - of course it's an error. :)

Interesting that nobody else caught this...

Anyway, I'd better go and edit the original post on the Pandora forums...

It should go like this:
Based on the OMAP3530 specs it seems that the Pandora team switched memory from the 1Gbit (128MB) MT46H32M32LFJG-6(link) to the 2Gbit (256MB) MT46H64M32L2JG-6(link).
 
TI says the speed increase has nothing to do with ES3.0. They say it is all the RAM. From the datasheet, I see no speed improvement. It's still CL3 and 166MHz so the assumption was always that Linux had more memory to cache data rather than going to swap.
 
From Laurent's source, we don't know if his routine was using swap before the revision. But if nothing changed, I suppose all we can do is assume.
 
MWeston said:
TI says the speed increase has nothing to do with ES3.0. They say it is all the RAM. From the datasheet, I see no speed improvement. It's still CL3 and 166MHz so the assumption was always that Linux had more memory to cache data rather than going to swap.

OK, so I'm not going crazy. That's what I said at the beginning.
 