The Communication Cube


Ok, this means I could make a new one ^^ This is something for the weekend, as it's raining anyway..
I'll rename my channel to something a bit cooler: "Matzes Gadged Pallace" (I think the Germans will get the reference ^^)
The only problem then is that I'd have to moderate my comment section, as there would be quite a shitstorm from the BIBI followers.. ^^

I don't know if I'll make my channel open for everyone, or just for those who have my link..

EDIT:
And because of corona, they're now also using one of those cool American school buses in our small town. I didn't notice until I was about to drive out of our garage and saw this big yellow thing coming down our street in my rear-view mirror ^^..
It's maybe a bit too big for our streets, but if I still went to school, I think I'd find it pretty awesome ^^
 
I leave for my beach vacation in a matter of hours. I'll be bringing my iPad mini though, so I'll probably still be checking in here once or twice a day for the next week.
 
[Attached image: 10k.png]

And it's only taken 13 years. :cool:
 
Look! Nvidia invented video chat the same way MMORPG avatars share their conversations!
 
Look! Nvidia invented video chat the same way MMORPG avatars share their conversations!
Funny, the BBC News channel showed a comparison between standard video calls and this approach, and this approach had noticeable glitches compared to the real stream. Then I noticed they were using artificially throttled H.264 for the comparison. But maybe those glitches are the price you pay for the much reduced bandwidth. I'm also not sure why they needed a GAN for this, when warping a picture from point to point is an age-old problem that was solved a long time ago. I guess that wouldn't handle hair movement well, but it seems to me they could save a lot of CPU churn by only sending that bit to the neural network and doing the skin movement with more traditional methods.
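
To make the "traditional" approach concrete: track a handful of facial keypoints, send only their new coordinates every frame, and warp a stored reference frame at the receiving end. A toy sketch of that idea with OpenCV (purely to illustrate the concept, nothing to do with Nvidia's actual pipeline; the function and variable names are made up):

```python
import cv2
import numpy as np

def encode_frame(keypoints: np.ndarray) -> bytes:
    """'Compress' the current frame by sending only its facial keypoints (x, y float32 pairs)."""
    return keypoints.astype(np.float32).tobytes()

def decode_frame(ref_frame: np.ndarray,
                 ref_keypoints: np.ndarray,
                 payload: bytes) -> np.ndarray:
    """Rebuild the current frame by warping the reference frame so that its
    keypoints land on the received keypoint positions."""
    cur_keypoints = np.frombuffer(payload, dtype=np.float32).reshape(-1, 2)
    # Fit a similarity/affine transform mapping the reference keypoints to the new ones.
    M, _ = cv2.estimateAffinePartial2D(ref_keypoints.astype(np.float32), cur_keypoints)
    h, w = ref_frame.shape[:2]
    return cv2.warpAffine(ref_frame, M, (w, h))
```

A single affine warp obviously can't move the mouth independently of the rest of the head, which is presumably where the GAN earns its keep; the bandwidth saving is the same either way, since you're shipping a few dozen floats per frame instead of macroblocks.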
 
I'd love a 2D cartoon animation compression scheme. See, in cartoons the frame stays still while only the mouth moves (this is especially true in anime, where it's mostly panning and the mouth movement is just 3 repeating frames that emulate speech). MPEG-4 already does a good job of finding the differences and only sending those (treating the rest as a "static frame", sent once in a while as a keyframe), but it only looks at the previous frame, not several frames back. Playback would of course require more memory, since the decoder would need to remember many frames back (for example, a conversation cutting between two alternating faces), and mid-stream playback would be hindered too: you could only start from keyframes, which is just like MPEG, except these keyframes would be further apart.
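
Roughly what I have in mind, as a toy sketch (pure numpy, nowhere near a real codec; the class name and parameters are made up for illustration):

```python
from collections import deque

import numpy as np

class MultiRefDeltaEncoder:
    """Toy 'cartoon' encoder: compare each new frame against several past frames,
    not just the previous one, and store only the blocks that actually changed."""

    def __init__(self, num_refs: int = 8, block: int = 16, threshold: float = 2.0):
        self.refs = deque(maxlen=num_refs)  # ring of recently reconstructed frames
        self.block = block
        self.threshold = threshold

    def encode(self, frame: np.ndarray):
        if not self.refs:
            self.refs.append(frame.copy())
            return ("key", frame.copy())  # first frame is a full keyframe

        # Pick the stored frame closest to the new one, e.g. the last time
        # this particular face was on screen.
        diffs = [np.abs(frame.astype(np.int16) - r.astype(np.int16)).mean()
                 for r in self.refs]
        ref_idx = int(np.argmin(diffs))
        ref = self.refs[ref_idx]

        # Keep only the blocks whose content changed (usually just the mouth).
        b = self.block
        changed = []
        for y in range(0, frame.shape[0], b):
            for x in range(0, frame.shape[1], b):
                cur = frame[y:y + b, x:x + b]
                old = ref[y:y + b, x:x + b]
                if np.abs(cur.astype(np.int16) - old.astype(np.int16)).mean() > self.threshold:
                    changed.append((y, x, cur.copy()))

        # Reconstruct the frame exactly as the decoder will, so both sides keep
        # identical reference rings (unchanged blocks come from the reference).
        recon = ref.copy()
        for y, x, blk in changed:
            recon[y:y + b, x:x + b] = blk
        self.refs.append(recon)
        return ("delta", ref_idx, changed)
```

The decoder keeps the same ring of frames, copies the chosen reference and pastes the changed blocks on top. That's exactly where the extra memory I mentioned goes: both sides have to hold every candidate reference frame.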

Then again, there must be a reason why games like Ecstatica (a world made out of ellipsoids) and Another World (a world made out of untextured polygons) didn't make it while textured polygons did... (well, lots of triangles, really). For a while at least; now we have splines with textures and those look even better.
 
I got my flu shot today.

I expect this flu season to be lighter than last year's though, thanks to coronavirus measures. I think people have forgotten that the last couple flu seasons in the US have been pretty bad.
 
I'd love a 2D cartoon animation compression scheme. See, in cartoons the frame stays still while only the mouth moves (this is especially true in anime, where it's mostly panning and the mouth movement is just 3 repeating frames that emulate speech). MPEG-4 already does a good job of finding the differences and only sending those (treating the rest as a "static frame", sent once in a while as a keyframe), but it only looks at the previous frame, not several frames back. Playback would of course require more memory, since the decoder would need to remember many frames back (for example, a conversation cutting between two alternating faces), and mid-stream playback would be hindered too: you could only start from keyframes, which is just like MPEG, except these keyframes would be further apart.

I remember reading literature about that kind of video compression back in 2006. After checking, it seems that H.264 allows looking back up to 32 keyframes, and if I remember correctly it also allows dynamic keyframes. So a good multi-pass encoder should already be able to handle conversations with keyframes for each face. I'm not sure about panning, but that might be handled as well.
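
For instance, with libx264 through ffmpeg you can already raise the reference-frame count and push the keyframes far apart yourself. Something like this (the -refs and -g options are standard ffmpeg/libx264 ones; the filenames and values are just placeholders):

```python
import subprocess

# Encode with many reference frames, so a block can be predicted from an older
# frame showing the same face, and with a long keyframe interval.
subprocess.run([
    "ffmpeg", "-i", "talking_heads.mp4",
    "-c:v", "libx264",
    "-preset", "veryslow",  # give the encoder time to search its references
    "-refs", "16",          # number of reference frames to search back through
    "-g", "600",            # keyframe at most every 600 frames
    "out.mp4",
], check=True)
```

Whether the encoder actually reaches back to "the other face" depends on its motion search, but the knobs are there.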
 