128MB of RAM...


Exophase said:
Great, the instruction set/architecture is totally documented, so I don't see what will stop compilers or at least assemblers from being made, if they aren't already available.

If performance of GCC on Blackfin is any indication I'm thinking we might want to write ASM for this one.
gcc is not very good for VLIW architectures... and that DSP is a VLIW beast, so indeed asm might be the only way to go. But then you will need an optimizing assembler, because there are many things you wouldn't want to do by hand, such as the static scheduling which is mandatory for correct execution.

Anyway as you wrote this is totally documented, we just need some people to work on the tools :)
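To give a feel for what that static scheduling buys you, here's a hedged, plain-C illustration (not TI toolchain output, and the function names are made up): a C64x-class core can issue up to eight instructions per cycle, so a naive serial loop leaves most of the units idle unless the work is unrolled into independent operations that an optimizing assembler or compiler can pack into execute packets.

```c
/* Hedged illustration only: plain C, not C64x assembly or intrinsics.
 * The point is the dependency structure, which is what a static
 * scheduler has to work with. */
int dot_serial(const short *a, const short *b, int n)
{
    int sum = 0;
    for (int i = 0; i < n; i++)
        sum += a[i] * b[i];          /* one MAC per iteration, fully serial */
    return sum;
}

int dot_unrolled(const short *a, const short *b, int n)
{
    /* assumes n is a multiple of 4 */
    int s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    for (int i = 0; i < n; i += 4) {
        s0 += a[i]     * b[i];       /* these four MACs are independent,  */
        s1 += a[i + 1] * b[i + 1];   /* so a static scheduler can spread  */
        s2 += a[i + 2] * b[i + 2];   /* them across multiple units in the */
        s3 += a[i + 3] * b[i + 3];   /* same execute packet.              */
    }
    return s0 + s1 + s2 + s3;
}
```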
 
What, exactly, would the applications for the DSP be? Could it be used much like the 940 on the GP2X?

(Keep in mind I'm not an assembly programmer, although I understand most of it)
 
atomicthumbs said:
What, exactly, would the applications for the DSP be? Could it be used much like the 940 on the GP2X?

The C64x is not very well-suited for real applications (you could conceivably build full apps on a 940).
You may think of it as a processor intended for codec programming :) A kind of specialized processor, similar to what a GPU or PPU is.
 
Laurent said:
Exophase said:
Great, the instruction set/architecture is totally documented, so I don't see what will stop compilers or at least assemblers from being made, if they aren't already available.

If performance of GCC on Blackfin is any indication I'm thinking we might want to write ASM for this one.
gcc is not very good for VLIW architectures... and that DSP is a VLIW beast, so indeed asm might be the only way to go. But then you will need an optimizing assembler, because there are many things you wouldn't want to do by hand, such as the static scheduling which is mandatory for correct execution.

Anyway as you wrote this is totally documented, we just need some people to work on the tools :)


I'd love to see more optimizing assemblers, personally (although I might be crazy enough to try to do some static scheduling of this magnitude myself... it'd be kind of nice having an assembler which gave you a report on the utilization of the units as you went through, so you could hand-tweak it).

So far I'm very impressed with the C64x compared to Blackfin. Eight execution units of four different types, so it already has 2x redundancy for every operation, and a lot of them can handle the same basic things. It also has predication, which I imagine is a major win for a platform that hates branching, and unlike Blackfin it has a ton of general-purpose 32-bit registers (32x2, great). Undeniably it has several DSP features, but at its heart it almost feels more like a mobile Itanium.

It's not suited for general-purpose code that has a lot of branches and not a lot of inherent parallel execution, but I bet that, given a few pivotal instructions, it could pull off some very efficient rendering code for 2D emulators (even the DS and Saturn have a ton of 2D hardware).
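To make the predication point concrete, here's a rough sketch in plain C (not C64x intrinsics, and the function names are invented): a colour-keyed blit written with a branch, and the branch-free form that a predicated architecture can execute as conditional operations instead of jumps.

```c
#include <stdint.h>

/* Branchy version: the if() becomes a jump that hurts on a deep pipeline. */
void blit_keyed(uint16_t *dst, const uint16_t *src, int n, uint16_t key)
{
    for (int i = 0; i < n; i++) {
        if (src[i] != key)
            dst[i] = src[i];
    }
}

/* Branch-free version: the condition becomes a mask/select, which maps
 * naturally onto predicated instructions. */
void blit_keyed_branchless(uint16_t *dst, const uint16_t *src, int n,
                           uint16_t key)
{
    for (int i = 0; i < n; i++) {
        uint16_t take = (uint16_t)-(src[i] != key);   /* 0xFFFF or 0x0000 */
        dst[i] = (uint16_t)((src[i] & take) | (dst[i] & ~take));
    }
}
```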
 
flatmush said:
Well, to answer my own question with respect to the fillrate: the SGX uses tile-based deferred rendering onto a tile in onboard memory, which is then copied out to the framebuffer. That leaves me wondering how you'd go about changing render targets.

I'm not sure what you mean. It simply writes to different locations in memory for different render targets.
 
Xmas said:
flatmush said:
Well, to answer my own question with respect to the fillrate: the SGX uses tile-based deferred rendering onto a tile in onboard memory, which is then copied out to the framebuffer. That leaves me wondering how you'd go about changing render targets.

I'm not sure what you mean. It simply writes to different locations in memory for different render targets.
Using TBDR means that it stores a list of triangles and draws them all when you swap the buffers/flush. So changing render targets would surely require the whole list of triangles to be saved to memory, then loaded back afterwards or something like that.
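For anyone following along, here is a very rough sketch of the binning idea behind a tile-based deferred renderer. This is illustrative C only, under my own assumptions about tile size and list layout, and not how SGX actually structures its parameter buffers: triangles are first sorted into per-tile lists in memory, and only at flush/swap time does the chip rasterize one tile at a time into its small on-chip buffer.

```c
#define TILE_W        32       /* assumed tile size, purely illustrative */
#define TILE_H        32
#define MAX_PER_TILE  1024

typedef struct { float x[3], y[3]; } Triangle;

typedef struct {
    int      count;
    Triangle tris[MAX_PER_TILE];
} TileBin;

/* Phase 1 (as geometry is submitted): bin each triangle into every tile
 * its bounding box overlaps. These bins live in ordinary memory. */
void bin_triangle(TileBin *bins, int tiles_x, int tiles_y, const Triangle *t)
{
    float minx = t->x[0], maxx = t->x[0], miny = t->y[0], maxy = t->y[0];
    for (int i = 1; i < 3; i++) {
        if (t->x[i] < minx) minx = t->x[i];
        if (t->x[i] > maxx) maxx = t->x[i];
        if (t->y[i] < miny) miny = t->y[i];
        if (t->y[i] > maxy) maxy = t->y[i];
    }
    for (int ty = (int)(miny / TILE_H); ty <= (int)(maxy / TILE_H); ty++) {
        for (int tx = (int)(minx / TILE_W); tx <= (int)(maxx / TILE_W); tx++) {
            if (tx < 0 || ty < 0 || tx >= tiles_x || ty >= tiles_y)
                continue;
            TileBin *bin = &bins[ty * tiles_x + tx];
            if (bin->count < MAX_PER_TILE)
                bin->tris[bin->count++] = *t;
        }
    }
}

/* Phase 2 (at flush / buffer swap): walk the bins tile by tile, rasterize
 * into the small on-chip tile buffer, and write each finished tile out to
 * the framebuffer. A mid-frame render-target change would mean flushing
 * the bins early, which is exactly the concern raised above. */
```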
 
flatmush said:
Xmas said:
flatmush said:
Well, to answer my own question with respect to the fillrate: the SGX uses tile-based deferred rendering onto a tile in onboard memory, which is then copied out to the framebuffer. That leaves me wondering how you'd go about changing render targets.

I'm not sure what you mean. It simply writes to different locations in memory for different render targets.
Using TBDR means that it stores a list of triangles and draws them all when you swap the buffers/flush. So changing render targets would surely require the whole list of triangles to be saved to memory, then loaded back afterwards or something like that.


I don't really see what you're getting at either. RAM is RAM, isn't it? If the video chipset can address the RAM then it shouldn't matter if it's faster than normal SDRAM or if it's shared with the CPU or whatnot.
 
flatmush said:
Using TBDR means that it stores a list of triangles and draws them all when you swap the buffers/flush. So changing render targets would surely require the whole list of triangles to be saved to memory, then loaded back afterwards or something like that.
The list of triangles is stored in memory in the first place. Anyway, why would you want to change render targets mid-frame?
 
Xmas said:
flatmush said:
Using TBDR means that it stores a list of triangles and draws them all when you swap the buffers/flush. So changing render targets would surely require the whole list of triangles to be saved to memory, then loaded back afterwards or something like that.
The list of triangles is stored in memory in the first place. Anyway, why would you want to change render targets mid-frame?


I think Wikipedia says that the PowerVR SGX doesn't use TBDR.

Am I misreading the Wikipedia entry on the PowerVR? http://en.wikipedia.org/wiki/PowerVR#Technology - doesn't this say that TBDR was only used on Series 1 & 2? (Minor point: I think nubie said it was Series 3 in use on the Dreamcast, but according to this it was Series 2.)

So, the SGX (series 5) doesn't actually have TBDR?

Sorry if this is a dumb question - you guys all know a LOT more about this than me - I was just trying to read along and learn something...
 
jdh2550 said:
Am I misreading the Wikipedia entry on the PowerVR? http://en.wikipedia.org/wiki/PowerVR#Technology - doesn't this say that TBDR was only used on Series 1 & 2? (Minor point: I think nubie said it was Series 3 in use on the Dreamcast, but according to this it was Series 2.)

That Wikipedia article is wrong and/or outdated on some points. However, all PowerVR GPUs are tile-based deferred renderers; what you're referring to is automatic sorting of transparent objects, a feature which was dropped with the Kyro family.
 
Xmas said:
jdh2550 said:
Am I misreading the Wikipedia entry on the PowerVR? http://en.wikipedia.org/wiki/PowerVR#Technology - doesn't this say that TBDR was only used on Series 1 & 2? (Minor point: I think nubie said it was Series 3 in use on the Dreamcast, but according to this it was Series 2.)

That Wikipedia article is wrong and/or outdated on some points. However, all PowerVR GPUs are tile-based deferred renderers; what you're referring to is automatic sorting of transparent objects, a feature which was dropped with the Kyro family.

Thanks!
 
QUOTE
I don't really see what you're getting at either. RAM is RAM, isn't it? If the video chipset can address the RAM then it shouldn't matter if it's faster than normal SDRAM or if it's shared with the CPU or whatnot.

Ah, there I go making assumptions again (gotta stop doing that). I had just assumed that the triangle lists were stored on the chip itself, but obviously that can't be the case, as the RAM requirement for the triangle lists is going to be larger than would fit in any on-chip cache.
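Just to put a hedged number on that (the figures below are my own assumptions, not SGX specifics): even a modest scene's per-frame geometry is far bigger than any on-chip tile buffer could plausibly hold.

```c
#include <stdio.h>

int main(void)
{
    const unsigned long triangles     = 20000; /* assumed scene complexity        */
    const unsigned long verts_per_tri = 3;
    const unsigned long bytes_per_vtx = 32;    /* position + UV + colour, assumed */

    unsigned long bytes = triangles * verts_per_tri * bytes_per_vtx;
    printf("~%lu KB of vertex data per frame\n", bytes / 1024);  /* ~1875 KB */
    return 0;
}
```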
 
icurafu said:
ARM is big endian, DSP is little endian.
http://focus.ti.com/docs/prod/folders/print/omap3530.html

There was a typo, but it has been corrected. The ARM and DSP are both little-endian, at least as far as I know. :)

The C64x+ is actually pretty good, in my estimation, for general programming, but the OMAP3 architecture is centered around the ARM processor. Faster C64x processors are available from TI (somewhat less focused on mobile power levels).

I'm already on the hook other places saying that TI will offer a C64x compiler for non-commercial use, so what's the harm in saying it one more time here?
 
Another forum thread with the off-topic syndrome, well...

What about ROMs for certain systems? There are really big Neo Geo ROMs, so more than 128MB could be interesting to avoid stuff like prefetching.

I don't want to read the 12 pages of the thread, but it seems the SoC used (the OMAP one) can't support more than 128MB of RAM. Can anyone confirm whether that's true?
 
timofonic said:
I don't want to read the 12 pages of the thread, but it seems the SoC used (the OMAP one) can't support more than 128MB of RAM. Can anyone confirm whether that's true?
The SoC can support more than 128MB of RAM.

"16, 32-bit Memory Controller With 2G-Byte Total Address Space" at http://focus.ti.com/docs/prod/folders/print/omap3530.html

I won't claim that adding all of that memory would be trivial, so I wouldn't expect the Pandora specs to change based on me saying this.
 
timofonic said:
I don't want to read the 12 pages of the thread, but it seems the SoC used (the OMAP one) can't support more than 128MB of RAM. Can anyone confirm whether that's true?
aka "I can't be bothered to read the whole thread to find the answer, can someone else do it for me?" :)
 
timofonic said:
Another forum thread with the off-topic syndrome, well...

What about roms for certain systems? There are really big Neogeo roms, so more than 128M could be interesting for avoid stuff like prefetching.

I don't want to read the 12 pages of the thread, but it seems the SoC used (the OMAP one) can't support more than 128MB of RAM. Can anyone confirm whether that's true?
For large ROMs the devs will need to work some kind of virtual-memory-style paging system into the emulator. It's not ideal, but them's the breaks.
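As a hedged sketch of what such a scheme might look like (the names, page size, and cache size are all my own assumptions, not anything from an actual Pandora emulator): keep only a window of the ROM resident in RAM and fault pages in from storage on demand.

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

#define PAGE_SHIFT 16                        /* 64 KB pages, assumed         */
#define PAGE_SIZE  (1u << PAGE_SHIFT)
#define NUM_SLOTS  64                        /* 4 MB of ROM resident at once */

static FILE    *rom_file;
static uint8_t  slot_data[NUM_SLOTS][PAGE_SIZE];
static int32_t  slot_page[NUM_SLOTS];        /* which ROM page each slot holds */

void rom_cache_init(FILE *f)
{
    rom_file = f;
    memset(slot_page, 0xFF, sizeof(slot_page));     /* mark every slot empty */
}

uint8_t rom_read8(uint32_t addr)
{
    uint32_t page = addr >> PAGE_SHIFT;
    uint32_t slot = page % NUM_SLOTS;               /* direct-mapped cache */

    if (slot_page[slot] != (int32_t)page) {         /* miss: load from storage */
        fseek(rom_file, (long)page << PAGE_SHIFT, SEEK_SET);
        fread(slot_data[slot], 1, PAGE_SIZE, rom_file);
        slot_page[slot] = (int32_t)page;
    }
    return slot_data[slot][addr & (PAGE_SIZE - 1)];
}
```

The emulator's memory handlers would then call something like rom_read8() (or a wider variant) instead of indexing a fully loaded ROM buffer, at the cost of a cache check on every ROM access.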

Further back in this thread it was stated by Craig or MWeston that a) the actual RAM limit is far below the 2GB theoretical limit and b) 128MB was chosen because that was the amount they could get in the quantity, pricing, and timing necessary.

None of the actual devs who have commented seem to think it will be a problem. I'd normally be of the "more is better" mindset, but for now I'll take their word on it.
 
It isn't regular RAM, it is special mobile RAM, and possibly in a special package.

+ Less power
+ Makes the device smaller
+ Very short connection paths

- Expensive
- Smaller overall memory sizes

I think that we are fine, but in the future the RAM could be expanded, if there is even a real need.
 