The first thing people care about around here is whether or not the script actually works. I saw one pro try to give people style tips on the rpgmakerweb forum for learning Ruby, and some people actually jumped down his throat for it. I don't think you'll get a ton of complaints over coding style; I haven't noticed many people taking a serious Rubyist approach in the RPG Maker community.

I actually have an old Dell Inspiron 6000 series laptop collecting cobwebs; it has the crappy factory GPU upgrade that never seemed to get driver updates. Should be a Radeon X300 or some such in it. Not sure how good it'd be for the test, but I'll let you know if I crash with it, provided I can find the laptop charger the kid may have thrown in the garbage on me.

I can go on forever about the technical aspects, and I'm sure you'll get bored of it, but if you have questions just ask me.

If I was using glBegin/glEnd, all the performance gains would be lost, as there is more locking going on with fixed-function GL. That's why shaders and VBOs are used: as much as possible is kept in GPU memory and away from the locking of RGSS. (There is actually a massive lock every time a Win32API call is made, so getting into the GPU context as soon as possible from the function call is critical, and keeping the matrix maths fast matters for the same reason.)

The Sprite class in RGSS is more of an object manager; it doesn't handle the Bitmaps (it holds a handle to the displayed bitmap, that is all), so there's little relationship to be found. What I've done is essentially write a new Bitmap class, but that doesn't mean everything bitmap-related can be replaced: Game.exe still renders Bitmaps to the screen at the end of everything, so I still need to present a GDI+ Bitmap back to Game.exe.

And there is a lot more to it than just linking up opengl32.dll. I had to write a lot of boilerplate code for allocating dedicated GPU memory for streaming to/from the RM Bitmaps, and I've included some research from my job (I'm an engine programmer) for dealing with the matrices (RGSS is far too slow for the matrix maths to be implemented in script).

The performance gains come mostly from having the GPU do the rendering operations. With the default RGSS software rendering (GDI+) you pretty much have a single IO head writing to each (32-bit!) pixel in CPU memory; it does this for blitting and the other bitmap operations, and it is fast enough for people to be satisfied with RPG Maker's default feature set. When you have a GPU involved, the whole blitting stage becomes parallel because of the streaming nature of GPU shader cores. So the real performance gain is that the rendering itself is done on the GPU, rather than the CPU being freed up (which is also true, but it involves sticking Ruby script between kicking off the draw calls and grabbing the FBO contents from the GPU to bring back into the RGSS bitmaps).

I use the same streaming method for uploading that id Tech 5 uses for every single texture in the game world; if that is fast enough to maintain 60 FPS on an iPhone 3G, it is fast enough for RPG Maker texture uploads. Copying textures with the CPU is slow when going from CPU memory to CPU memory; when it is uploading to the GPU, it is fast enough.

Your knowledge seems to be incredibly dated and somewhat off, my friend.

What you've done so far shows how to bind to Win32 APIs; the only thing we would also need is dynamic linkage to the opengl32 dll. If I wasn't trying to complete a game, I would attempt this. I actually might know more about this than I originally believed.
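To make the "dynamic linkage to the opengl32 dll" part concrete, here is a rough sketch of what that binding could look like from an RGSS script, using the Win32API class the engine already exposes. Only the function names and constants are standard OpenGL 1.1 exports of opengl32.dll; everything else (having a GL context already current for Game.exe's window, and having the Bitmap's pixels pulled out as a packed RGBA string) is an assumption for illustration, not the poster's actual script.

# Minimal sketch: assumes a GL context is already created and current, and
# that `pixels` is a packed RGBA byte string extracted from an RGSS Bitmap.
GL_TEXTURE_2D    = 0x0DE1
GL_RGBA          = 0x1908
GL_UNSIGNED_BYTE = 0x1401

# 'L' = 32-bit argument, 'P' = pointer (packed Ruby string), 'V' = void return.
GlGenTextures = Win32API.new('opengl32', 'glGenTextures', ['L', 'P'], 'V')
GlBindTexture = Win32API.new('opengl32', 'glBindTexture', ['L', 'L'], 'V')
GlTexImage2D  = Win32API.new('opengl32', 'glTexImage2D',
                             ['L', 'L', 'L', 'L', 'L', 'L', 'L', 'L', 'P'], 'V')

# Create one texture object and upload a pixel buffer to it once.
def upload_texture(width, height, pixels)
  id_buf = [0].pack('L')              # room for one GLuint
  GlGenTextures.call(1, id_buf)
  tex_id = id_buf.unpack('L').first
  GlBindTexture.call(GL_TEXTURE_2D, tex_id)
  GlTexImage2D.call(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                    GL_RGBA, GL_UNSIGNED_BYTE, pixels)
  tex_id                              # keep this handle; the pixel data now lives GPU-side
end

The point of returning and holding on to the texture id is exactly the "keep it in GPU space" argument above: after the one-time upload, later draw calls only reference the handle instead of touching the pixel data from Ruby again.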
In most cases, any sort of blitting in OpenGL will require some pointer to a GL texture, so you are essentially replacing the Sprite class anyway. I haven't looked at things closely, so you are definitely correct that every blt call would need to be rewritten within a glBegin()/glEnd() context. But even if you use OGL for the blitting, if you are copying the GDI+ bitmaps into video memory every frame, you are going to have a bad time. Where the performance comes from is when the textures are stored in video memory and the blitting instructions are carried out on the graphics card, using texture memory that is ONBOARD and doesn't require the CPU to access and copy it over. This also frees up the CPU to do other things, like execute scripts. Otherwise, the CPU is still copying an entire screen over to the GPU to blit it, which doesn't really give you any performance gain. I would think that to prevent all the CPU copying, the textures would have to be GL textures.
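As a rough illustration of the "don't copy the GDI+ bitmaps into video memory every frame" point: once a texture lives GPU-side, you only need to re-upload when the source bitmap actually changes. This sketch reuses the GL constants and GlBindTexture binding from the sketch above; glTexSubImage2D is a real OpenGL 1.1 export, but CachedTexture, mark_dirty, and the way the pixel string is obtained are made-up names for illustration only.

# Re-upload pixels only when the source bitmap changed, instead of every frame.
GlTexSubImage2D = Win32API.new('opengl32', 'glTexSubImage2D',
                               ['L', 'L', 'L', 'L', 'L', 'L', 'L', 'L', 'P'], 'V')

class CachedTexture
  def initialize(tex_id, width, height)
    @tex_id, @width, @height = tex_id, width, height
    @dirty = false
  end

  # Flag the texture whenever the source RGSS Bitmap is drawn on.
  def mark_dirty
    @dirty = true
  end

  # Call before drawing each frame. On the (common) frames where nothing
  # changed, this is a no-op and no pixel data crosses the bus at all.
  # `pixels` is the bitmap's RGBA data, however you choose to extract it.
  def sync(pixels)
    return @tex_id unless @dirty
    GlBindTexture.call(GL_TEXTURE_2D, @tex_id)
    GlTexSubImage2D.call(GL_TEXTURE_2D, 0, 0, 0, @width, @height,
                         GL_RGBA, GL_UNSIGNED_BYTE, pixels)
    @dirty = false
    @tex_id
  end
end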