CPU for Pandora 2?


Thanks notaz. Just to confirm, that was done from privileged mode, right?

The values are:
L2 data RAM read = 2 cycles

Load data forwarding = disabled
Write combining = enabled
Write allocate delay = enabled
Write allocate combine = enabled
Write allocate = enabled
Parity/ECC = disabled
L2 observes outer cacheability

Data RAM latency = 3 cycles
Tag RAM latency = 2 cycles

So if these values are correct it means the following things can be tried:
- Data RAM latency changed from 3 cycles to 2 cycles, and hope there's something funny about how it reckons a "cycle" relative to a clock cycle (tag RAM is supposed to be looked up in parallel with the data fetch, so somehow this could mean 1 cycle = 4 clock cycles)
- L2 data RAM read changed from 2 cycles to 1 cycle
- Load data forwarding enabled: apparently this was disabled on the Beagle as a workaround for a NEON bug which could cause it to lock up in rare cases. It was said that it didn't hurt performance much, but it'd be interesting to see what impact, if any, it has here.
 
Exophase said:
Thanks notaz. Just to confirm, that was done from privileged mode, right?
Yup, undefined instruction exception otherwise.

Attempting to change that register in privileged mode also results in an undefined instruction; haven't read up on how to get into secure mode yet.
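
For reference, this is roughly what the access looks like (just a sketch, assuming the register being dumped here is the Cortex-A8 L2 Cache Auxiliary Control Register; register choice is arbitrary and it has to run in privileged code):
Code:
 @ read the L2 Cache Auxiliary Control Register into r0
 @ (undefined instruction from user mode, hence privileged code)
 mrc p15, 1, r0, c9, c0, 2
 @ the corresponding write is what faults from non-secure privileged mode:
 @ mcr p15, 1, r0, c9, c0, 2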
 
One quick note: Cortex-A8 and Cortex-A9 don't share any RTL except for the NEON unit and even so the NEON ld/st don't behave the same. So don't assume anything about Cortex-A9 based on your Cortex-A8 experiments :) In particular a NEON memcpy won't be faster.
 
Exophase said:
(btw, for anyone who was wondering: http://infocenter.arm.com/help/topic/com.arm.doc.ddi0246e/Chdcjfia.html < L2 access time of 8 cycles.. I was worried it'd be higher latency since it was moved off the core and shared, guess not)
These are not real figures, but rough ones to compare the benefits of using a L2 vs not using one. The L1 typical access time of 1-2 cycles proves this as you never get a 1 cycle access time out of d$.

Look at this for L2 latency as measured on Tegra2: http://forum.canardpc.com/showpost.php?p=3295102&postcount=491 This is in French, but I guess it's easy to understand.
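
The usual way to measure this kind of thing is a dependent-load (pointer-chasing) loop, roughly like the sketch below (not necessarily exactly what that test does; buffer setup, stride and iteration count are left out, register choices arbitrary):
Code:
 @ r0 points into a buffer where each word holds the address of the next
 @ element, sized/strided so the accesses miss L1 but hit L2
measure:
 ldr r0, [r0]       @ each load depends on the previous result
 subs r1, r1, #1    @ r1 = iteration count
 bne measure
 @ elapsed cycles / iterations ~ load-to-use latency of the level being hit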

For the micro TLB, the advantage is a much lower latency when compared to a main TLB lookup. Big TLBs (for instance the main TLB) are typically implemented using RAM arrays, whereas micro TLBs are just an array of parallel comparators (note I don't mean this is the way it's done in the A9, just the classical way of implementing it). IIRC the latest Intel processors are also using multi-level TLBs.

As far as VIPT vs PIPT goes, you should remember that A9 cache lines are 32 bytes and caches can be 64KB, so if you were VIPT you'd need to take care of aliasing, either through OS support or through some HW.
 
Afaik L2 on Cortex-A9 has the option of running at the CPU clock or at half the CPU clock. Maybe Tegra 2 picked the latter?

If Cortex-A9's cache can't provide anywhere close to 8 cycles then that chart is really misleading.

Of course you get 1-cycle access out of L1 dcache if the clock speed is low enough and the cache and associativity are small enough; when we say "1-cycle access" we mean you receive the data two cycles after the one you request it at, right? At the very least ARM9 needs that to achieve its 1-cycle load-use penalty (at clock speeds up to > 800MHz, if overclocks on Pollux are any indication); not sure if Cortex-A8 and A9 avoid this requirement by having address generation earlier. You can get 2-cycle access out of 16KB at up to 3.4GHz on the 130nm Pentium 4 parts. I'm sure that held back yields substantially (the 90nm and 65nm versions brought it all the way up to 4 cycles), but it still stands to reason that you can get 1-cycle latency out of > 1.5GHz.
 
Laurent said:
These are not real figures, but rough ones to compare the benefits of using a L2 vs not using one. The L1 typical access time of 1-2 cycles proves this as you never get a 1 cycle access time out of d$.
A8 has a latency of 3 cycles for instructions which access the L1 cache. Assuming the first cycle is for address generation, the actual L1 access takes 2 cycles.

The instruction cache is also 2 cycles, but they manage to squeeze in some minimal instruction decoding to check for branches during that time, so I guess the data is returned in slightly less than 2 cycles. (Or maybe they predecode the instructions and flag branches in the cache, but I doubt it.)
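
In scheduling terms (a simplified single-issue view, registers arbitrary), that 3-cycle load path means something like this:
Code:
 ldr r0, [r1]      @ result ready roughly 3 cycles later
 add r2, r2, #1    @ independent work fills the load-use slots
 add r3, r3, #4    @ ...
 add r4, r4, r0    @ first consumer of r0; no interlock if scheduled like this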

Laurent said:
Look at this for L2 latency as measured on Tegra2: http://forum.canardpc.com/showpost.php?p=3295102&postcount=491 This is in French, but I guess it's easy to understand.
So 25 cycles on A9, versus 20 cycles on A8.

Laurent said:
For the micro TLB, the advantage is a much lower latency when compared to a main TLB lookup. Big TLBs (for instance the main TLB) are typically implemented using RAM arrays, whereas micro TLBs are just an array of parallel comparators (note I don't mean this is the way it's done in the A9, just the classical way of implementing it). IIRC the latest Intel processors are also using multi-level TLBs.
Yeah, but the question is whether it is better than virtually indexing the cache. I suppose it saves some die area in the cache by not having separate index/tag addresses.

Laurent said:
As far as VIPT vs PIPT goes, you should remember that A9 cache lines are 32 bytes and caches can be 64KB, so if you were VIPT you'd need to take care of aliasing, either through OS support or through some HW.
So they reduced the L2 latency by making cache lines 32 bytes instead of 64 bytes? I don't think that's a good design tradeoff.

For VIPT, generally you just need to check that none of the tags match when loading a new cache line. For a 4-way set associative cache, you need to check 4 tags for aliasing, which is not hard to do when you're going to be waiting 20-25 cycles anyway. Of course this means you thrash the cache if the process maps the same page at two addresses and accesses them both, but that's a pretty rare case. (Usually only happens when doing write-allocate of zero-filled pages.)
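
To be concrete about where the aliasing comes from (a sketch, assuming a 64KB 4-way cache with 32-byte lines: 16KB per way / 32B = 512 sets, so the set index is address bits [13:5], and bits 13:12 sit above the 4KB page offset):
Code:
 ubfx r1, r0, #5, #9    @ r1 = set index, address bits [13:5]
 ubfx r2, r0, #12, #2   @ r2 = the two index bits above the 4KB page offset
 @ those two bits are virtual, so the same physical line could be cached
 @ under 4 different virtual indexes; that's what the check on line fill
 @ has to catch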
 
Even assuming the pinouts between the OMAP 3530 and the desired processor (maybe OMAP 4430, etc) aren't similar, couldn't someone make an interface board that moves the pins to their correct locations?
 
Aurekana said:
Even assuming the pinouts between the OMAP 3530 and the desired processor (maybe OMAP 4430, etc) aren't similar, couldn't someone make an interface board that moves the pins to their correct locations?

These sorts of boards are sometimes made where the pinout is compatible. But this requires that the OMAP4 provide all the peripheral interfaces that the board expects to connect to, in some valid pin-mux configuration.. it also needs to be able to run off of the same power rails. Even having more power pins than the OMAP3530 could be a problem. This could be the case if it draws the same or more power at lower voltages, since there'd be a higher aggregate current load and it could need more power pins to distribute it better.
 
Exophase said:
it also needs to be able to run off of the same power rails. Even having more power pins than the OMAP3530 could be a problem. This could be the case if it draws the same or more power at lower voltages, since there'd be a higher aggregate current load and it could need more power pins to distribute it better.

Early reports on the PandaBoard suggest it has some power management issues. Apparently it draws too much current to be powered by USB, and the recommended power supply is 20 watts.

I don't know how much power the OMAP4 really needs, but it may be awhile before there's a drop-in replacement for the Pandora.
 
Ari64 said:
I'd be surprised if the board really needed that much power.

I don't know how much power the OMAP4 really needs, but it may be awhile before there's a drop-in replacement for the Pandora.
C-A9 shouldn't require much more power than C-A8. Of course there are two of them... I wonder if the rest of the SoC consumes much more.

Also, IIUC the OMAP4 chips on the PandaBoard are engineering samples, which means they are not yet tuned.
 
I'm curious, how feasible is it to do something similar to standard mobo upgrades for PCs? With custom desktops (unfortunately, proprietary desktop MFGs like Dell and HP don't think this way, for efficiency as well as limiting internal costs... but...), if you wish to move from a Pentium to an i5/i7 or from DDR to DDR3, it's as simple as buying a newer mobo with a compatible CPU and RAM. Could a SoC board be made that keeps all the connections in the same places, uses a newer CPU and (if more power is necessary) uses the battery and its connections as they are, without the need to modify the battery or battery connectors in the case? If this were done, then it would be quite some time before a Pandora 2 would even really be talked about... the whole concept behind a v2 would be more hardware functionality like built-in GPS, built-in WiMAX, possibly an ION2-like GPU, etc. (vs. more flash, RAM and CPU)... To be honest, I wonder if this question was asked amongst the inner OPT group prior to creation/design completion, for simple upgradeability down the line... Just a thought... :)
 
If you were to go that route, you could probably do something similar to Gumstix in that the SoC, flash, wifi, power chip, etc are all on one standard board, and the rest of the system has complementary connectors.

I'd assume the OPT didn't do something similar as an effort to keep costs and complexity (and size) down.
 
Ari64 said:
Early reports on the PandaBoard suggest it has some power management issues. Apparently it draws too much current to be powered by USB, and the recommended power supply is 20 watts.

I don't know how much power the OMAP4 really needs, but it may be awhile before there's a drop-in replacement for the Pandora.

It not running off of USB doesn't strictly mean that 500mA @ 5V is insufficient. They could be powering it from a Windows machine, where no more than 100mA will be available if the USB device doesn't request more; this will happen, for instance, if the hub used reports itself as powered. Another possibility is that it's using 5V straight from the power input somewhere on the board and the USB-provided 5V is too unstable.

Plus, just because the recommended adapter is 4A at 5V doesn't mean it needs that much; it could simply need any amount over 500mA at 5V. And it's possible that there's an inefficient power regulator involved somewhere here, giving it far less than 2.5A at around 1V (or whatever the main voltage levels on OMAP4 are).

EDIT: Woah, I didn't notice they're linking an OMAP4 TRM too. http://focus.ti.com/pdfs/wtbu/OMAP4430_ES2.0_Public_TRM_vJ.pdf
 
Exophase said:
EDIT: Woah, I didn't notice they're linking an OMAP4 TRM too. http://focus.ti.com/pdfs/wtbu/OMAP4430_ES2.0_Public_TRM_vJ.pdf
It gives a detailed description of the various power management modes, but no estimates of total power consumption. It's a huge document, so maybe I missed something.

I was a little surprised by this warning:
CAUTION: The PandaBoard may reach elevated temperatures. Avoid handling the PandaBoard while power is applied.
Maybe they're just being paranoid, but the beagleboard does not get that hot. It gets warm, but not dangerous to handle.

Any idea what the TDP of the pandaboard is?
 
Don't know, need to wait for a datasheet most likely. Might end up posted here:

http://focus.ti.com/general/docs/wtbu/wtbudocumentcenter.tsp?templateId=6123&navigationId=12667
 
I was looking through the Cortex-A8 TRM to see if there was any mention of the 2-cycle issue rate for branches. Not only are branches called out as single cycle, but their timing example clearly shows them not having any stall penalty when correctly predicted.

http://infocenter.ar...363e/index.html

It claims L2 latency is 8 cycles, but that can be somewhat explained away with design options chosen by OMAP3.. this one is less likely. I think I'll ask on the ARM forum, although I'm not optimistic I'll get a reply.
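
One way to check it directly would be to time a loop that's nothing but taken branches (a sketch only; label and register choices arbitrary, and it assumes the loop has run enough times that every branch is in the BTB and predicted taken) and see whether it comes out near 1 or near 2 cycles per branch:
Code:
 @ nothing but branches plus the loop counter
1: b 2f
2: b 3f
3: b 4f
4: subs r0, r0, #1   @ r0 = iteration count
 bne 1b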

EDIT: Looks like the TRM has several verified errors. This thread highlights some: http://newsgroups.derkeiler.com/Archive/Comp/comp.sys.arm/2010-10/msg00000.html
 
Exophase said:
I was looking through the Cortex-A8 TRM to see if there was any mention of the 2-cycle issue rate for branches. Not only are branches called out as single cycle, but their timing example clearly shows them not having any stall penalty when correctly predicted.
Branches issue at a rate of one per cycle. The stall occurs in instruction fetch when a branch is predicted taken. There is a one-cycle delay before the target address from the branch predictor is passed to the L1 cache. Since the instruction decoder maintains a queue, if the pipeline is stalled for some other reason (eg load-use interlock) this may hide the fetch penalty. There is no penalty for not-taken branches.

It also depends on the alignment. If the target address of the branch is not divisible by eight, then the first fetch after branch prediction will only get one instruction instead of two.
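
So in practice you'd keep hot branch targets 64-bit aligned; the alignment directive is the only point of this sketch, the loop body is arbitrary:
Code:
 .align 3        @ 2^3 = 8 bytes, so the loop head starts on a fetch boundary
loop:
 add r2, r2, r3
 subs r0, r0, #1
 bne loop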
 
Ari64 said:
Branches issue at a rate of one per cycle. The stall occurs in instruction fetch when a branch is predicted taken. There is a one-cycle delay before the target address from the branch predictor is passed to the L1 cache. Since the instruction decoder maintains a queue, if the pipeline is stalled for some other reason (eg load-use interlock) this may hide the fetch penalty. There is no penalty for not-taken branches.

It also depends on the alignment. If the target address of the branch is not divisible by eight, then the first fetch after branch prediction will only get one instruction instead of two.

You already explained all of this before, both here and on the Wiki. The TRM contradicts it completely, and barring some other explanation it's a fairly serious error. For that matter, it goes so far as to claim that the target of a branch can dual issue with the branch, which should be impossible for any standard branch given a single 64-bit fetch per cycle. That is unless we're to assume that this example starts with the fetch queue several instructions ahead, which is pretty unreasonable. It still shouldn't be possible given your claims that when a branch is in pipe 0 the instruction at the next sequential PC will dispatch in pipe 1 regardless of prediction, and will at least go far enough through the pipeline to reach the memory load stage. Possibly issued as an "execute never" instruction.

My link didn't work, but look at the timing example in 16.8.

What surprises me the most is that other people, who are finding more obscure errors in the TRM, are not noticing this. I don't really see how you can perform any cycle counting benchmarks and not see it.
 
From the tests Ari64 and I have done so far, it looks like the fetch unit is stalled during L1 dcache misses (even though I don't really see why it would have to be), so you don't gain a chance to get ahead there. There are still probably enough interlocks (dependencies, load-use, address generation, issue restrictions, etc.) that let normal code get one ahead with fetching, but it could still cost a cycle on highly optimized loops.

If there's really a 1-cycle stall on branch target resolution for the fetch unit then you might be able to hide it in Thumb-2 code. You would need four 16-bit instructions for every branch. For instance, a simple 4 instruction loop could execute in two cycles instead of the three it would take in ARM code.
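
Something like this is what I have in mind (sketch only, unified syntax, registers arbitrary; all four instructions should come out as 16-bit encodings, so the whole loop is one 64-bit fetch):
Code:
 .syntax unified
 .thumb
 .align 3
loop:
 adds r2, r2, r3   @ 16-bit encoding
 adds r4, r4, r5   @ 16-bit encoding
 subs r0, r0, #1   @ 16-bit encoding
 bne loop          @ 16-bit encoding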

If this works out to be true it'd be good to come up with a comprehensive list of how to write optimized Thumb-2 code, to add to Ari64's Cortex-A8 assembly optimization documentation. Add to this if I'm missing anything:

- Still align basic blocks on 64 bits, which can be harder than just aligning on 2 instructions.
- Don't have more than one branch per 64-bit block
- Keep branches as close as possible to the end of a 64-bit block.. the excess instructions will be wasted fetch; hopefully only the first excess will actually see a pipeline issue (if any)
- Don't cross 32-bit instructions over (64 byte) cache lines

I could see this actually being more worthwhile in Cortex-A9, if branches are folded out prior to decode.. then you really could get free branches.
 
Exophase said:
You already explained all of this before, both here and on the Wiki. The TRM contradicts it completely, and barring some other explanation it's a fairly serious error. For that matter, it goes so far as to claim that the target of a branch can dual issue with the branch, which should be impossible for any standard branch given a single 64-bit fetch per cycle. That is unless we're to assume that this example starts with the fetch queue several instructions ahead, which is pretty unreasonable. It still shouldn't be possible given your claims that when a branch is in pipe 0 the instruction at the next sequential PC will dispatch in pipe 1 regardless of prediction, and will at least go far enough through the pipeline to reach the memory load stage. Possibly issued as an "execute never" instruction.
The issue that I ran into with the loads dual-issuing with branches was code like the following
Code:
 b target
 ldr r0,[r1]
If the branch has not been seen before, it will have no entry in the BTB and will be predicted not-taken. This pair of instructions can dual-issue, and since the memory load starts in E1 (or E2?) and branch resolution happens in E4, the memory load is already in progress when the branch misprediction is detected. This can be demonstrated experimentally by running code like this, and will result in reduced performance due to the extraneous cache loads/evictions (and probably TLB too).

Of course, this doesn't happen when the branch is predicted correctly, but it is guaranteed to happen the first time each branch is encountered.

I assume that the target of a branch can dual issue with the branch when the branch is predicted taken, if there are enough instructions in the queue to keep the pipeline full. I haven't actually tested this, since the queue ended up empty in most of the examples I tested. It should be easy enough to experiment with unrolling a loop containing MUL or LDM if you want to try it yourself.
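
For instance, something like this (sketch only, registers and labels arbitrary): the MULs should let the fetch queue get ahead of issue, and the cycle count then tells you whether the add at the branch target paired with the b.
Code:
 .align 3
loop:
 mul r4, r5, r6      @ multi-cycle ops so the fetch queue can fill up
 mul r7, r8, r9
 b 1f                @ unconditional, predicted taken once it's in the BTB
1: add r2, r2, r3    @ does this pair with the branch above?
 subs r0, r0, #1
 bne loop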


Exophase said:
My link didn't work, but look at the timing example in 16.8.
http://infocenter.arm.com/help/topic/com.arm.doc.ddi0344k/Babeghic.html

Yeah, this example assumes a full queue, which is unlikely.
 