CUDA – GPU-Accelerated Hardware Simulation?

I am investigating whether GPGPU can be used to accelerate hardware emulation.
My reasoning is this: since hardware is inherently parallel, why emulate it on a highly sequential CPU?

GPUs would seem well suited to this, if not for their restrictive programming model: you have a kernel running, and so on.

I have no experience with GPGPU programming, but can events or queues be used in OpenCL/CUDA?

Edit: By hardware simulation I do not mean emulation, but bit-accurate behavioral simulation (as in VHDL behavioral simulation).

I am not aware of any approach to VHDL simulation on GPUs (or of a general scheme for mapping discrete-event simulation), but there are application areas where discrete-event simulation is typically used and which can be simulated efficiently on GPUs (e.g. transportation networks, as in this paper or this one, or stochastic simulation of chemical systems, as described in this paper).

Is it possible to reformulate the problem in a way that makes a discrete time-stepped simulator feasible? In that case, simulation on the GPU should be much simpler (and still faster, even if it seems wasteful, because the time steps must be sufficiently small; see, for example, this paper on GPU-based cellular automata simulation).
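To make the time-stepped idea concrete, here is a minimal CUDA sketch (my own illustration, not taken from any of the papers above): every gate is evaluated once per global time step with a unit delay, so no event queue is needed. The netlist encoding, the tiny ring-oscillator example, and all names are invented for illustration.

    // Minimal sketch of time-stepped, bit-accurate gate-level simulation in CUDA.
    // Assumption: 2-input gates, gate i drives net i, one unit delay per gate.

    #include <cstdio>
    #include <cuda_runtime.h>

    enum GateOp { OP_AND, OP_OR, OP_XOR, OP_NOT };

    struct Gate { GateOp op; int in0, in1; };   // in1 is ignored for OP_NOT

    // One thread per gate: read net values from the previous time step,
    // write this step's outputs into a separate buffer. Double buffering
    // makes the result independent of thread scheduling within a step.
    __global__ void stepKernel(const Gate* gates, int n,
                               const unsigned char* prev, unsigned char* next)
    {
        int g = blockIdx.x * blockDim.x + threadIdx.x;
        if (g >= n) return;
        unsigned char a = prev[gates[g].in0];
        unsigned char b = prev[gates[g].in1];
        unsigned char r = 0;
        switch (gates[g].op) {
            case OP_AND: r = a & b; break;
            case OP_OR:  r = a | b; break;
            case OP_XOR: r = a ^ b; break;
            case OP_NOT: r = a ^ 1; break;
        }
        next[g] = r;
    }

    int main()
    {
        // Toy netlist: a 3-inverter ring oscillator (net i = output of gate i).
        // With unit delays the three nets cycle with period 6.
        Gate h_gates[3] = { {OP_NOT, 2, 2}, {OP_NOT, 0, 0}, {OP_NOT, 1, 1} };
        unsigned char h_nets[3] = {0, 1, 0};

        Gate* d_gates; unsigned char *d_prev, *d_next;
        cudaMalloc(&d_gates, sizeof(h_gates));
        cudaMalloc(&d_prev, sizeof(h_nets));
        cudaMalloc(&d_next, sizeof(h_nets));
        cudaMemcpy(d_gates, h_gates, sizeof(h_gates), cudaMemcpyHostToDevice);
        cudaMemcpy(d_prev, h_nets, sizeof(h_nets), cudaMemcpyHostToDevice);

        for (int t = 0; t < 6; ++t) {
            // For real netlists use (n + 255) / 256 blocks of 256 threads.
            stepKernel<<<1, 128>>>(d_gates, 3, d_prev, d_next);
            cudaMemcpy(h_nets, d_next, sizeof(h_nets), cudaMemcpyDeviceToHost);
            printf("t=%d  nets = %d %d %d\n", t + 1, h_nets[0], h_nets[1], h_nets[2]);
            unsigned char* tmp = d_prev; d_prev = d_next; d_next = tmp;  // swap buffers
        }

        cudaFree(d_gates); cudaFree(d_prev); cudaFree(d_next);
        return 0;
    }

The double-buffered net arrays are what make this work without synchronization inside a step; the obvious downside is exactly the one mentioned above, namely that every gate is evaluated at every step whether or not its inputs have changed.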

But note that this is still likely to be a non-trivial (research) problem, and the reason there is no general solution (yet) is exactly what you already assumed: event queues are hard to implement on the GPU, and most GPU simulation speedups come from clever memory layout, application-specific optimizations, and modifications of the problem itself.
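As an illustration of the kind of memory-layout concern meant here (again only a hedged sketch, with invented names): keeping each field of the netlist in its own array usually lets neighbouring threads issue coalesced loads, whereas an array of gate structs does not.

    // Hypothetical structure-of-arrays netlist layout: threads for gates
    // g, g+1, ... read adjacent elements of each array, so global-memory
    // accesses coalesce.
    struct NetlistSoA {
        int*           op;    // op[g]  : gate type of gate g
        int*           in0;   // in0[g] : first input net of gate g
        int*           in1;   // in1[g] : second input net of gate g
        unsigned char* val;   // val[n] : current value of net n
    };
    // versus the array-of-structures form (struct Gate { ... }; Gate* gates;),
    // where each thread's load is strided and wastes most of a cache line.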

