Computation using GPU instead of CPU

Hello. I have a little question about computation on a classical computer. As you may know, I developed a little 2D engine in order to play with pixels, mathematics, pseudo-physics, geometry and more. To keep it simple: each pixel is computed according to some algorithm, geometry or simple coordinates, and then everything is displayed each frame.

For a little game like a Minesweeper, everything runs smoothly (lines, sprites, states, synthesis and so on), but in some stress tests I noticed a low frame rate. I coded a fluid simulation using Fick's laws of diffusion (although the Lattice Boltzmann method seems better suited), and the frame rate is really poor for one main reason: all the computation runs on my CPU (and I suspect my code could be improved too).

I would like to "export" these computations to my GPU. My understanding of the underlying principle is as poor as that phrasing suggests. So, is there a way to offload some heavy computation to the GPU instead of the CPU? I know about CUDA, but it seems a truly labyrinthine system to me. Do you know an alternative? Thank you. Have a nice day ++
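P.S. To give an idea of the per-pixel work involved, the diffusion step each frame boils down to something like this (a simplified sketch, not my exact code; the field names are made up):

#include <vector>

// Explicit finite-difference step for Fick's second law,
// d(phi)/dt = D * laplacian(phi), on a W x H grid with h = 1.
// Boundary cells are skipped for brevity.
// Stable only while D * dt <= 0.25.
void diffuse(const std::vector<float>& phi, std::vector<float>& next,
             int W, int H, float D, float dt)
{
    for (int y = 1; y < H - 1; ++y) {
        for (int x = 1; x < W - 1; ++x) {
            const int i = y * W + x;
            const float lap = phi[i - 1] + phi[i + 1]     // left, right
                            + phi[i - W] + phi[i + W]     // up, down
                            - 4.0f * phi[i];              // 5-point Laplacian
            next[i] = phi[i] + D * dt * lap;              // forward Euler update
        }
    }
}

That runs once per pixel per frame, which is why the CPU chokes on large grids.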
There is also Boost.Compute, which should let you offload work to the GPU. There are, of course, game engines like Unity that would handle these things for you.
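To give you a taste, the basic Boost.Compute pattern is: copy to the device, run an algorithm there, copy back. Roughly like this, following the library's own introductory examples:

#include <boost/compute/core.hpp>
#include <boost/compute/algorithm/copy.hpp>
#include <boost/compute/algorithm/transform.hpp>
#include <boost/compute/container/vector.hpp>
#include <boost/compute/functional/math.hpp>
#include <vector>

namespace compute = boost::compute;

int main()
{
    // grab the default compute device (usually the GPU) and set up
    // a context plus a command queue for it
    compute::device gpu = compute::system::default_device();
    compute::context ctx(gpu);
    compute::command_queue queue(ctx, gpu);

    std::vector<float> host(1000000, 3.14f);

    // upload to device memory
    compute::vector<float> dev(host.size(), ctx);
    compute::copy(host.begin(), host.end(), dev.begin(), queue);

    // run sqrt over the whole vector on the device
    compute::transform(dev.begin(), dev.end(), dev.begin(),
                       compute::sqrt<float>(), queue);

    // download the results back to host memory
    compute::copy(dev.begin(), dev.end(), host.begin(), queue);
}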

Googling around will give you several options, but they usually aren't pretty, as you saw with CUDA.
You said everything. I came from Unity. I used it for many years to create my own games and to participate in events like Ludum Dare and other code jams, but now I want to use my own engine instead of a big factory. Sometimes, when creating a project in Unity, it feels like I am eating my soup with a shovel :)
I don't know Boost.Compute; I'll go look for documentation on this alternative right now. Thank you ++
If you read the documentation for Boost.Compute, you'll find out it uses OpenCL.

Boost Compute Documentation wrote:
Boost.Compute uses OpenCL as its interface for executing code on parallel devices such as GPUs and multi-core CPUs.

So why not skip the middle-man and use OpenCL by itself?

https://www.khronos.org/opencl/
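For a flavor of what "by itself" means, a minimal, self-contained host program looks roughly like this (OpenCL 1.x style; error checking dropped to keep it short):

#include <CL/cl.h>
#include <cstdio>

// The kernel is plain OpenCL C, compiled at run time.
static const char *kSource =
    "__kernel void scale(__global float *v, const float k) {\n"
    "    const size_t i = get_global_id(0);\n"
    "    v[i] *= k;\n"
    "}\n";

int main()
{
    cl_platform_id platform;
    cl_device_id device;
    clGetPlatformIDs(1, &platform, nullptr);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, nullptr);

    cl_context ctx = clCreateContext(nullptr, 1, &device, nullptr, nullptr, nullptr);
    cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, nullptr);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &kSource, nullptr, nullptr);
    clBuildProgram(prog, 1, &device, nullptr, nullptr, nullptr);
    cl_kernel kernel = clCreateKernel(prog, "scale", nullptr);

    float data[1024];
    for (int i = 0; i < 1024; ++i) data[i] = static_cast<float>(i);

    // upload: CL_MEM_COPY_HOST_PTR copies 'data' into device memory
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                sizeof(data), data, nullptr);

    const float k = 2.0f;
    clSetKernelArg(kernel, 0, sizeof(cl_mem), &buf);
    clSetKernelArg(kernel, 1, sizeof(float), &k);

    size_t global = 1024;                    // one work-item per element
    clEnqueueNDRangeKernel(queue, kernel, 1, nullptr, &global, nullptr,
                           0, nullptr, nullptr);

    // blocking read (CL_TRUE): download the results back to host memory
    clEnqueueReadBuffer(queue, buf, CL_TRUE, 0, sizeof(data), data,
                        0, nullptr, nullptr);

    std::printf("data[3] = %f\n", data[3]);  // expect 6.0

    clReleaseMemObject(buf);
    clReleaseKernel(kernel);
    clReleaseProgram(prog);
    clReleaseCommandQueue(queue);
    clReleaseContext(ctx);
}

Verbose, yes, but every step is explicit, which is exactly what the wrappers hide from you.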
CUDA is Nvidia's proprietary framework and, to the best of my knowledge, works on Nvidia GPUs only.

If you want to support non-Nvidia GPUs too, then your options include OpenCL or DirectCompute.

OpenCL is to OpenGL what DirectCompute is to Direct3D, more or less.

A word of warning: in my experience, the speed-up you gain by "offloading" computations from the CPU to the GPU is often eaten up by the extra cost of uploading your input data from CPU memory to GPU memory and downloading the results back from GPU memory to CPU memory. The usual remedy is pipelining, i.e. uploading the next "chunk" of data to GPU memory while the current chunk is still being processed on the GPU. But that is really not trivial to do...
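Roughly, with OpenCL the idea looks like the sketch below: two command queues and two buffers played ping-pong, with events keeping the ordering straight. All names are placeholders, and error checking and the result download are omitted.

#include <CL/cl.h>
#include <cstddef>

// Two-deep pipeline ("double buffering"): while the kernel crunches
// chunk N in one buffer, chunk N+1 is already uploading into the other.
// io_queue and compute_queue are two in-order queues on the same device.
void run_pipelined(cl_command_queue io_queue, cl_command_queue compute_queue,
                   cl_kernel kernel, cl_mem buf[2],
                   const float *host_data, std::size_t num_chunks,
                   std::size_t chunk_elems)
{
    cl_event kernel_done[2] = { nullptr, nullptr };

    for (std::size_t chunk = 0; chunk < num_chunks; ++chunk) {
        const int cur = static_cast<int>(chunk % 2);   // ping-pong index
        cl_event upload_done = nullptr;

        // Non-blocking upload (CL_FALSE). Before overwriting buf[cur],
        // wait for the kernel that used it two iterations ago.
        clEnqueueWriteBuffer(io_queue, buf[cur], CL_FALSE, 0,
                             chunk_elems * sizeof(float),
                             host_data + chunk * chunk_elems,
                             kernel_done[cur] ? 1 : 0,
                             kernel_done[cur] ? &kernel_done[cur] : nullptr,
                             &upload_done);
        if (kernel_done[cur]) clReleaseEvent(kernel_done[cur]);

        // The kernel waits only for its *own* upload, so the other
        // buffer's transfer can overlap with this computation.
        clSetKernelArg(kernel, 0, sizeof(cl_mem), &buf[cur]);
        clEnqueueNDRangeKernel(compute_queue, kernel, 1, nullptr,
                               &chunk_elems, nullptr,
                               1, &upload_done, &kernel_done[cur]);
        clReleaseEvent(upload_done);
    }

    clFinish(compute_queue);   // drain the pipeline
    for (cl_event e : kernel_done)
        if (e) clReleaseEvent(e);
}

The two queues are the important part: with a single in-order queue, the uploads and the kernels would simply serialize again and you would be back where you started.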