Graphics Concept

Do you want to see this in future Pandora games?



Mr.Confuzed

Does anyone else hate seeing polygons where there shouldn't be polygons? I know I do, so I've been working on a method for drawing curved 3D shapes in real time. First I implemented an in-shader raycaster with RenderMonkey. I have some simple shapes down:

[screenshot: sphere rendered with the in-shader raycaster]

Now, raycasting isn't usually an efficient algorithm, unless you have upwards of a million primitives in your scene. So I've written up a software sphere rasterizer instead. The results:



I figure this method fits well with the Pandora because the DSP can run the rasterizer while the GPU does the pixel shading. If I'm right, the rasterizer should be able to support geometry and viewport distortions. For example, a fisheye effect should be a low-cost option. What do you guys think? AFAIK, this hasn't really been done before, so it's risky, but I think it looks promising.
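If anyone wants to see what the raycaster actually does per pixel, it boils down to the usual ray/sphere quadratic. Here's a rough C sketch of it (not the actual RenderMonkey shader; the names are made up):

/* Per-pixel ray/sphere test, written in C for readability.  The camera sits
 * at the origin, 'dir' is the (unnormalized) ray through the pixel, 'cen' is
 * the sphere centre relative to the camera, 'r' its radius. */
#include <math.h>

typedef struct { double x, y, z; } vec3;

static double dot(vec3 u, vec3 v) { return u.x*v.x + u.y*v.y + u.z*v.z; }

/* Returns 1 and writes the hit distance t along 'dir' plus the surface
 * normal if the ray hits the sphere, 0 otherwise. */
static int ray_sphere(vec3 dir, vec3 cen, double r, double *t_out, vec3 *n_out)
{
    /* Solve (t*dir - cen).(t*dir - cen) = r^2 for t; it's the same quadratic
     * as in the derivation further down the thread. */
    double A = dot(dir, dir);
    double B = -2.0 * dot(dir, cen);
    double C = dot(cen, cen) - r * r;
    double d = B * B - 4.0 * A * C;
    if (d < 0.0)
        return 0;                          /* ray misses the sphere */
    double t = (-B - sqrt(d)) / (2.0 * A); /* nearer of the two hits */
    if (t < 0.0)
        return 0;                          /* nearest hit is behind the camera */
    if (t_out) *t_out = t;
    if (n_out) {                           /* normal = (hit point - centre) / r */
        n_out->x = (t * dir.x - cen.x) / r;
        n_out->y = (t * dir.y - cen.y) / r;
        n_out->z = (t * dir.z - cen.z) / r;
    }
    return 1;
}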
 
mindlord said:
How well do you think something like this would work with voxels instead of polygons?
I hope you're joking.

Edit: Did you mean adding in cloud/particle effects as appropriate?
 
Does the renderer still output geometry? Otherwise I don't see how they'll get pixel shaded. Sure, you could have a bunch of point sprites, but I think that'd be way too much for the SGX to handle.
 
Exophase said:
Does the renderer still output geometry? Otherwise I don't see how they'll get pixel shaded. Sure, you could have a bunch of point sprites, but I think that'd be way too much for the SGX to handle.
No, at the moment it produces an alpha texture, but the plan is to store the relevant data in texture memory (say, surface coords and normal vector). The potential drawback here is that if you need multiple shaders, you either need to use the stencil buffer or add ugly conditionals and amass a conglomerate shader. If we ever get GPGPU functionality though, it shouldn't be a problem assuming you can use function pointers.
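Concretely, each covered pixel would get something along these lines written into the texture, and a full-screen pass would shade from it. This is just a sketch; the exact fields and packing aren't decided:

/* One texel of the hypothetical "relevant data" texture. */
typedef struct {
    float pos[3];            /* surface point in camera space              */
    float normal[3];         /* unit surface normal                        */
    unsigned char material;  /* which branch of the conglomerate shader    */
    unsigned char pad[3];    /* padding to keep the texel size round       */
} surface_texel;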

If you need geometry (maybe on other platforms?), lines might work better considering that's how the rasterizer works anyway.
 
Sorry, I don't really get the point of this.

Why would you try to avoid polygons on a small display like you have on the Pandora?
And what's the huge advantage of your "new" technique? Why can't I just draw a sphere using a quad in a fragment shader if I want it to be perfect?

I just don't see any reason for doing this unless you have a specific game idea which is only possible using a technique like this.

And won't we have OpenVG for this in the future? I remember reading something about this on the PowerVR page.
 
Might not be fully on topic,
but I think it'd kick ass to get some nice tech demos to show my friends on the Pandora.

Or "demoscene" demo apps :)
 
The only reason I voted yes is because I think it's interesting, but I don't know if it's practical. Can you make a demo that will run on a PC? Maybe show us how this would work in a game?
 
IDK if they did it like this, but in TF2 "the cart" (also called "the bomb") is perfectly rounded but doesn't cause the extreme lag you'd think a high-poly object would cause (I always imagined they were somehow drawing vector graphics that changed based on your perspective).
 
It sounds interesting. I don't really know too much about 3D rendering, but if you see something that no one else has seen, then this may inspire others to try this unused technique. I agree with the suggestion for some videos :)
 
JayFoxRox said:
Sorry, I don't really get the point of this.

Why would you try to avoid polygons on a small display like you have on the Pandora?
And what's the huge advantage of your "new" technique? Why can't I just draw a sphere using a quad in a fragment shader if I want it to be perfect?

I just don't see any reason for doing this unless you have a specific game idea which is only possible using a technique like this.

And won't we have OpenVG for this in the future? I remember reading something about this on the PowerVR page.
You can draw a sphere using a quad in a fragment shader. That's pretty much what I did in the first picture, but can you modify the depth buffer appropriately at the same time?

As far as OpenVG goes, I'm fairly certain that is for 2D graphics only.

I don't see how the size of the display matters.

Huge advantage? Well, I never said it was huge, but smoother graphics are always nice, not to mention that it's more efficient to draw one sphere than 100+ triangles. Although I'm not just talking about spheres here; I'm talking about all manner of 3-dimensional curves.

There is an alternative method, by which you enclose the object in polygons and raycast in the shader. It should work fine until you get two objects that are really close to each other/intersecting.
 
I don't know about practical applications aside from balls and planets and other spherical things at this moment, but I do enjoy new tech.

I could potentially see the use of something like this in a game, but at the moment I'm kinda stumped as to what's going on. Thus, questions! (I haven't voted yet because I don't understand yet)

- Does this work only on shapes you've programmed in, or for ALL curved surfaces?
- Is this something that will work alongside other polygonal shapes, or does it replace the engine for drawing 3D completely?
- The way I'm thinking it's working at the moment, you're drawing the 3D object to some kind of buffer, then doing some edge processing to it to create a smoother curve, and outputting that to a texture, which is drawn to the screen?

My brain kinda sees some potential use of this, but maybe I'm not seeing enough info to know exactly what I'd use it for yet. If it only works on certain shapes, I don't think I'd use this over a regular polygon sphere, but if it will work on any shape, say a human figure or other non-primitive shapes, I can see more use for it.
 
-Tj- said:
I don't know about practical applications aside from balls and planets and other spherical things at this moment, but I do enjoy new tech.

I could potentially see the use of something like this in a game, but at the moment I'm kinda stumped as to what's going on. Thus, questions! (I haven't voted yet because I don't understand yet)

- Does this work only on shapes you've programmed in, or for ALL curved surfaces?
- Is this something that will work alongside other polygonal shapes, or does it replace the engine for drawing 3D completely?
- The way I'm thinking it's working at the moment, you're drawing the 3D object to some kind of buffer, then doing some edge processing to it to create a smoother curve, and outputting that to a texture, which is drawn to the screen?

My brain kinda sees some potential use of this, but maybe I'm not seeing enough info to know exactly what I'd use it for yet. If it only works on certain shapes, I don't think I'd use this over a regular polygon sphere, but if it will work on any shape, say a human figure or other non-primitive shapes, I can see more use for it.
Yes, the shapes have to be programmed in, but there is a mathematical process for converting from the original formula to the drawing formulae. It could probably be automated and should work for any shape that has two intersections per 'ray'. I haven't tried any more complicated ones yet, but if push comes to shove, build it in pieces.

I would prefer that polygons be drawn in software purely because I don't like the way OpenGL handles depth. I prefer full floating-point ranges instead of 0.0 to 1.0, or 1.0 to 0.0. You don't have to worry about near and far clipping this way, unless you want to. Regardless, if you want to have the SGX do what it does best, you should be able to write to the depth buffer in software. Then the SGX would draw its triangles, and you would follow it up with a second pass to fill in where the triangles didn't cover.
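By a software depth buffer I just mean a plain float buffer with whatever range you like. Something like this sketch (names made up):

#include <stddef.h>

typedef struct {
    size_t w, h;
    float *depth;   /* w*h camera-space z values, initialised to +infinity */
} zbuffer;

/* Returns 1 if the fragment at (px, py) with camera-space depth z is the
 * closest seen so far and records it, 0 otherwise (smaller z = nearer). */
static int depth_test_write(zbuffer *zb, size_t px, size_t py, float z)
{
    float *slot = &zb->depth[py * zb->w + px];
    if (z >= *slot)
        return 0;
    *slot = z;
    return 1;
}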

There is no edge processing, just edge formation via maths. (Heh, maths)
I've got nothing better to do, so I'll describe the process in an edit.

You start with your shape equation (sphere).
: x^2 + y^2 + z^2 = r^2

Substitute as if you were raytracing, where x,y,z is the ray direction; a,b,c is the offset from the camera; and t is the ray's scaling factor, meaning t*{x,y,z} = intersection point.
: (x*t - a)^2 + (y*t - b)^2 + (z*t - c)^2 = r^2

Solve for t.
: x*x*t*t - 2*x*t*a + a*a + y*y*t*t - 2*y*t*b + b*b + z*z*t*t - 2*z*t*c + c*c = r*r
: t*t*(x*x + y*y + z*z) - 2*t*(x*a + y*b + z*c) + a*a + b*b + c*c - r*r = 0
quadratic formula: t = ( 2*(x*a + y*b + z*c) +- sqrt( 4*(x*a + y*b + z*c)^2 - 4*(x*x + y*y + z*z)*(a*a + b*b + c*c - r*r) ) ) / ( 2*(x*x + y*y + z*z) )

Now, if both solutions are equal then the positive sqrt is equal to the negative sqrt. The only number that is both positive and negative is zero, so:
: sqrt( 4*(x*a + y*b + z*c)^2 - 4*(x*x + y*y + z*z)*(a*a + b*b + c*c - r*r) ) = 0
simplify: (x*a + y*b + z*c)^2 - (x*x + y*y + z*z)*(a*a + b*b + c*c - r*r) = 0
Think of it like we cut the sphere into two pieces: the piece we can see, and the piece we can't see. Where these two halves intersect forms the perimeter of the sphere from our point of view.

Now we have a projection of a sphere. How do we rasterize it? We use the same method again to eliminate another variable.
Solve for x.
: x*x*a*a + y*y*b*b + z*z*c*c + 2*x*a*y*b + 2*x*a*z*c + 2*y*b*z*c - x*x*(a*a + b*b + c*c - r*r) - (y*y + z*z)*(a*a + b*b + c*c - r*r) = 0
: -x*x*(b*b + c*c - r*r) + 2*x*a*(y*b + z*c) + y*y*b*b + z*z*c*c + 2*y*b*z*c - (y*y + z*z)*(a*a + b*b + c*c - r*r) = 0
quadratic formula: x = ( -2*a*(y*b + z*c) +- sqrt( 4*a*a*(y*b + z*c)^2 + 4*(b*b + c*c - r*r)*(y*y*b*b + z*z*c*c + 2*y*b*z*c - (y*y + z*z)*(a*a + b*b + c*c - r*r)) ) ) / ( -2*(b*b + c*c - r*r) )

Equate solutions.
: sqrt( 4*a*a*(y*b + z*c)^2 + 4*(b*b + c*c - r*r)*(y*y*b*b + z*z*c*c + 2*y*b*z*c - (y*y + z*z)*(a*a + b*b + c*c - r*r)) ) = 0
simplify: a*a*(y*b + z*c)^2 + (b*b + c*c - r*r)*(y*y*b*b + z*z*c*c + 2*y*b*z*c - (y*y + z*z)*(a*a + b*b + c*c - r*r)) = 0

Solve for y.
: a*a*(y*y*b*b + 2*y*b*z*c + z*z*c*c) + (b*b + c*c - r*r)*(-y*y*(a*a + c*c - r*r) - z*z*(a*a + b*b - r*r) + 2*y*b*z*c) = 0
: y*y*(a*a*b*b - (a*a + c*c - r*r)*(b*b + c*c - r*r)) + 2*y*b*z*c*(a*a + b*b + c*c - r*r) + z*z*(a*a*c*c - (a*a + b*b - r*r)*(b*b + c*c - r*r)) = 0
: -y*y*(c*c - r*r)*(a*a + b*b + c*c - r*r) + 2*y*b*z*c*(a*a + b*b + c*c - r*r) - z*z*(b*b - r*r)*(a*a + b*b + c*c - r*r) = 0
: -y*y*(c*c - r*r) + 2*y*b*z*c - z*z*(b*b - r*r) = 0
quadratic formula: y = ( -2*b*z*c +- sqrt( 4*(b*z*c)^2 - 4*(c*c - r*r)*z*z*(b*b - r*r) ) ) / ( -2*(c*c - r*r) )
simplify: y = ( b*z*c +- sqrt( (b*z*c)^2 - (c*c - r*r)*z*z*(b*b - r*r) ) ) / (c*c - r*r)
simplify: y = z*( b*c +- sqrt( (b*c)^2 - (c*c - r*r)*(b*b - r*r) ) ) / (c*c - r*r)
simplify: y = z*( b*c +- sqrt( r*r*b*b + r*r*c*c - r^4 ) ) / (c*c - r*r)
simplify: y = z*( b*c +- r*sqrt( b*b + c*c - r*r ) ) / (c*c - r*r)

This equation provides two solutions: the top and bottom of the sphere projection. Then you repeatedly feed y-values into one of the previous equations to produce pairs of x-values. Voila! Instant rasterization! I hope that clears things up, lol.
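In code, the scanline loop works out to roughly this (a C sketch of the formulas above, with the ray's z component fixed at 1; mapping ray-space x,y to actual pixels is left out, and the names are made up):

#include <math.h>

typedef void (*span_fn)(double y, double x_left, double x_right);

/* Camera at the origin, sphere centre at (a,b,c) relative to the camera,
 * radius r.  Walks the projected silhouette from one extreme y to the other
 * and emits a left/right edge pair per scanline. */
static void rasterize_sphere(double a, double b, double c, double r,
                             double y_step, span_fn emit)
{
    double cr   = c * c - r * r;            /* recurring (c^2 - r^2) term      */
    double disc = b * b + c * c - r * r;    /* under the sqrt in the y bounds  */
    if (disc < 0.0 || cr == 0.0)
        return;                             /* degenerate cases not handled    */

    /* y = (b*c +- r*sqrt(b*b + c*c - r*r)) / (c*c - r*r), with z = 1 */
    double s  = r * sqrt(disc);
    double y0 = (b * c - s) / cr;
    double y1 = (b * c + s) / cr;
    if (y0 > y1) { double tmp = y0; y0 = y1; y1 = tmp; }

    for (double y = y0; y <= y1; y += y_step) {
        /* Quadratic in x from above: -x^2*(b^2+c^2-r^2) + 2xa(yb+zc) + ... = 0 */
        double z = 1.0;
        double A = -disc;
        double B = 2.0 * a * (y * b + z * c);
        double C = y * y * b * b + z * z * c * c + 2.0 * y * b * z * c
                 - (y * y + z * z) * (a * a + b * b + c * c - r * r);
        double d = B * B - 4.0 * A * C;
        if (d < 0.0)
            continue;                       /* numerical noise at the poles    */
        double sd = sqrt(d);
        double xl = (-B - sd) / (2.0 * A);
        double xr = (-B + sd) / (2.0 * A);
        if (xl > xr) { double tmp = xl; xl = xr; xr = tmp; }
        emit(y, xl, xr);                    /* fill pixels between xl and xr   */
    }
}

Each span is just the left and right silhouette edge for that scanline; filling between them, and plugging each pixel back into the earlier quadratic for t if you want depth, is the straightforward part.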
 
So you can't pick pixels like in a ray tracer? You have to test enough y and x values until your image is of high enough quality. How is this usable for a game, and how can you gain real-time performance?

Because I'm using LuxRender as we speak to render a very simple but extremely realistic image of a sphere, using the same technique of "firing photons". It has so far taken my quad-core computer with an OpenGL 3.1 graphics card about 13 hours to render the image, and I'm not even satisfied yet:


Can your system produce a similarly smooth but shadable surface (as Exophase was asking), so that I could, say, add per-pixel lighting and a shadow volume shader?
 
Maybe a little off topic:
Q3 also uses a technique to draw perfect curves and spheres (NOT polygonal but REAL curves).
So I think this idea has a point.
 
Mr.Confuzed said:
JayFoxRox said:
Sorry, I don't really get the point of this.

Why would you try to avoid polygons on a small display like you have on the Pandora?
And what's the huge advantage of your "new" technique? Why can't I just draw a sphere using a quad in a fragment shader if I want it to be perfect?

I just don't see any reason for doing this unless you have a specific game idea which is only possible using a technique like this.

And won't we have OpenVG for this in the future? I remember reading something about this on the PowerVR page.
You can draw a sphere using a quad in a fragment shader. That's pretty much what I did in the first picture, but can you modify the depth buffer appropriately at the same time?

Yes, kinda. You can discard pixels which don't belong to the sphere in the pixel shader. Do you plan to intersect the sphere that much?
You could still slice it into a triangle fan of 8 polys (or more if you're paranoid) and pull the center closer to the camera, getting rid of some problems. It might not be the ideal solution, but it's probably faster and easier than your method.

GizmoTheGreen: Will happen, I'm already working on it when I'm bored ;)
 
JayFoxRox said:
You can discard pixels which don't belong to the sphere in the pixel shader.
Not in practice, because GLSL::discard is too slow on the SGX iirc (someone made a post about this a couple of months ago). You shouldn't use it, ever. It's faster to just return alpha=0 and use ARGB textures.
 
Yes, I also heard that discard is a bottleneck, but then again, just use an alpha texture like you said. I don't see why this would be much of a problem.
Unless you want to draw millions of spheres, performance shouldn't be a real problem on the Pandora (nor the battery usage, I guess).
However, figures would be really interesting.
 
Mr.Confuzed said:
I would prefer that polygons be drawn in software purely because I don't like the way OpenGL handles depth. I prefer full floating-point ranges instead of 0.0 to 1.0, or 1.0 to 0.0. You don't have to worry about near and far clipping this way, unless you want to. Regardless, if you want to have the SGX do what it does best, you should be able to write to the depth buffer in software. Then the SGX would draw its triangles, and you would follow it up with a second pass to fill in where the triangles didn't cover.

The near/far range is irrelevant; the only part that matters is whether or not the depth buffer is floating point. On SGX the internal depth buffer/depth calculations are 32-bit floating point. So you get the same amount of precision between extremes, which is what's important, since the depth values in the scene can be scaled as appropriate.

As far as depth buffer writing goes, this is far removed from how the SGX works, and how depth buffers work in general for that matter. The point of a depth buffer is that it allows writing fragments in an arbitrary order, if you ignore alpha blending at least. If the fragments can themselves update depth then you're forcing some kind of other ordering on them, and if this requires sorting by depth then you've defeated the entire purpose of the depth buffer in the first place. On SGX this case is even worse since the depth must be evaluated for every fully visible fragment before they can even be drawn, thanks to how the binning works.

It sounds like what you want to do is generate a bunch of shapes in software that are sent to the SGX as a bunch of orthogonal quads with depth textures and whatever other information for lighting. Unfortunately OGL ES 2 for SGX doesn't appear to support depth textures so you'd be pretty out of luck with this. I don't really see anything that would prevent the hardware from being capable of doing this, since it's possible to get it to output a depth buffer and read it in OGL ES (somehow, it has to be or else the depth24 and depth_stencil extensions don't make any sense) and presumably it's possible for it to preload the depth buffer too. It wouldn't be using the SGX that efficiently since the depth buffer wouldn't be solely on-chip and depth textures eliminate the potential for some hierarchical-Z optimizations, but it should still be doable.
 