There are a handful of EDA companies exploiting the potential speed-up from GPUs:
Agilent – EM simulation using NVIDIA
Mentor – RET using Cell BE
Nascentric – FastSPICE using NVIDIA
Gauda – OPC using NVIDIA
The typical barrier to adopting GPUs is simple: you have to rewrite your code. Many EDA companies do not want to rewrite code because it is too costly in time, or because the original software architect has moved on to another project or company and isn’t available to explore the benefits.
The technology competitor to GPUs is simply to stick with AMD and Intel CPUs on their quest to add more cores. This approach still requires a software rewrite to keep each core busy.
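To make the cost of that rewrite concrete, here is a minimal sketch of the transformation involved: a per-element loop recast as a CUDA kernel. The Params type and evaluate_device routine are hypothetical stand-ins, not any vendor’s code, and a multicore-CPU port needs a similar restructuring, just with threads instead of a kernel launch.

    // Hypothetical stand-ins for a real tool's device model and evaluation routine.
    struct Params { float gain; };
    __host__ __device__ inline float evaluate_device(const Params& p, float x) {
        return p.gain * x;                       // placeholder for the real per-element work
    }

    // The sequential loop most tools start with: one element at a time on the CPU.
    void evaluate_all_cpu(const Params* params, const float* in, float* out, int n) {
        for (int i = 0; i < n; ++i)
            out[i] = evaluate_device(params[i], in[i]);
    }

    // The same work expressed as a CUDA kernel: one GPU thread per element.
    __global__ void evaluate_all_gpu(const Params* params, const float* in,
                                     float* out, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            out[i] = evaluate_device(params[i], in[i]);
    }

    // Host side, after copying the arrays into device memory:
    //   evaluate_all_gpu<<<(n + 255) / 256, 256>>>(d_params, d_in, d_out, n);

Neither version is hard in isolation; the pain is that a mature tool has years of sequential assumptions baked in around loops like this.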
I was impressed to learn last week that Synopsys is now offering multicore support in their FastSPICE simulators under the new product name CustomSim (NanoSim, HSIM, XA). Users don’t have to pay any extra upgrade fee to get the benefit of speed improvements using multicore workstations. Other EDA vendors are charging extra for multicore support.
Perhaps start-ups will lead the way in using GPUs and multicore to maximum benefit, then get acquired by the established EDA vendors.
“The typical barrier to adopting GPUs is simple: you have to rewrite your code.”
There are some applications that can take advantage of the power of the GPU computing model, but many are thwarted by Amdahl’s law: they contain a key step that cannot be partitioned.
Sean,
Yes, that’s one of the classic limitations of parallelism when you’re trying to improve throughput. Still, I’m surprised at how slowly EDA companies are exploring hardware to accelerate their algorithms.
Daniel,
EDA companies are in the business of selling licenses. Any substantial runtime improvement may mean that I need fewer licenses to complete my jobs.
That’s why multi-threading options to any tool cost extra.
As for GPU support, I am sure that NVIDIA and IBM have specifically requested/worked with these EDA vendors to establish such support.
Thomas,
Well, CustomSim is an exception because users get multicore support at no upgrade cost.
When I first entered EDA in 1986 we used to sell licenses based on how many MIPS the customers were using. The more MIPS, the higher the tool price. Of course, that pricing policy didn’t stand up over time because EDA competitors started offering tools at a flat rate, not based on MIPS.
I simply believe we are entering that same era with multicore, where EDA vendors will eventually have one price for their tools that does not depend on how many cores you have.
I am always amused by the “they don’t want to make it run faster so they can sell more licenses” theory.
Believe me, we (I am an AE for Mentor Graphics) get pounded daily on making our tools run faster and the EDA industry has enough competitive pressure that everyone is trying everything possible to get more performance.
But, at least in simulation, it goes back to Amdahl’s law as was stated in a previous post. Perhaps there will eventually be a breakthrough in that space, but I haven’t seen it yet.
Mentor has taken advantage of multi-threading by putting extraneous tasks, such as logging, in a separate thread. This approach is now being adopted by some other simulators.
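For the curious, here is a minimal sketch of that pattern in standard C++ (my own illustration, not Mentor’s code): the simulation thread pushes messages into a queue and returns immediately, while a background thread drains the queue and does the slow file I/O.

    #include <condition_variable>
    #include <fstream>
    #include <mutex>
    #include <queue>
    #include <string>
    #include <thread>

    class AsyncLogger {
    public:
        explicit AsyncLogger(const std::string& path)
            : out_(path), done_(false), worker_([this] { drain(); }) {}

        ~AsyncLogger() {
            { std::lock_guard<std::mutex> lk(m_); done_ = true; }
            cv_.notify_one();
            worker_.join();
        }

        // Called from the simulation loop: just a queue push, no file I/O.
        void log(std::string msg) {
            { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(msg)); }
            cv_.notify_one();
        }

    private:
        // Runs on the background thread and performs the slow writes.
        void drain() {
            std::unique_lock<std::mutex> lk(m_);
            while (!done_ || !q_.empty()) {
                cv_.wait(lk, [this] { return done_ || !q_.empty(); });
                while (!q_.empty()) {
                    std::string msg = std::move(q_.front());
                    q_.pop();
                    lk.unlock();
                    out_ << msg << '\n';   // I/O happens off the simulator's critical path
                    lk.lock();
                }
            }
        }

        std::ofstream out_;
        bool done_;
        std::mutex m_;
        std::condition_variable cv_;
        std::queue<std::string> q_;
        std::thread worker_;   // declared last so everything else is ready before it starts
    };

The simulator’s hot path pays only for a queue push; whether that wins depends on how chatty the logging is, but it never blocks on disk.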
Ray,
Would a 4X speed improvement in SPICE or FastSPICE circuit simulation be compelling enough to consider using a GPU approach?
I certainly have to side with Ray’s comment. I’ve been in the EDA space for a good long time, and I’d hold that faster performance of an application has never been a deterrent to additional license acquisitions.
The honeymoon with a significantly faster version of an application lasts a month (tweaked algorithms, the latest bump in CPU clock speed, etc.).
Invariably, the user ramps up the workload: finer-grained analysis, additional design-space investigation, more Monte Carlo and worst-case investigation; the list goes on. The cry for “better performance” starts again.
GPUs, custom accelerator boards, and co-processors have a place, and I’m sure we’ll see more vendors taking advantage of these capabilities to gain a competitive edge. But don’t expect faster applications to slow down the rate of license acquisition. I’ve not seen it happen so far.
There seems to be a misconception about Amdahl’s Law with respect to multi-core in the EDA arena. Amdahl’s Law states that the performance of the system will be limited by the most serial part. So, if 90% of our runtime is spent in code that can be parallelized, and 10% is not, then the fastest we can speed up our program is 10X (with infinite parallelism).
However, in EDA it is often the case that 99% or even 99.999% of the runtime is in code that can be parallelized. Beyond this, we have the competing effect of Moore’s Law, which in practice means the data size we have to work with doubles every 24 months. So even if we *only* spend 90% of our runtime in parallelizable code today, then, assuming the serial work stays roughly fixed while the parallel work grows with the design, two years from now it will be 95%, in two more years 97%, and so on.
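To make both numbers concrete, here is a small back-of-the-envelope helper (my own sketch, not from the original comment):

    // Amdahl's-law speedup when a fraction p of the runtime parallelizes across N workers.
    double amdahl_speedup(double p, double N) {
        return 1.0 / ((1.0 - p) + p / N);
    }
    // amdahl_speedup(0.90,  1e9) ~= 10    -> the 10X ceiling for the 90% example above
    // amdahl_speedup(0.999, 1e9) ~= 1000  -> near-linear scaling when almost everything parallelizes
    //
    // The growth argument: if the serial work stays roughly fixed while the
    // parallelizable work doubles with the design size, the parallel fraction becomes
    //   2 * 0.90 / (0.10 + 2 * 0.90) ~= 0.95, and after another doubling ~= 0.97.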
As for a “fair” licensing cost for multi-core applications, I have been advocating that for N licenses you get 2^N nodes of compute. This is a model that at least one medium-sized EDA company I know of uses.
“Embarrassingly parallel” isn’t a fine-grained enough term to settle the multicore vs. GPU question; it only describes how well a problem can be parallelized at all. GPUs are SIMD, multicore CPUs are MIMD, so the litmus test for applying GPUs boils down to, “Do I need to branch?” To make that concrete: simulating 20,000 DFFs with different inputs on a GPU = good; simulating 1 DFF and 1 SRFF on a GPU with any inputs = bad.
I’ve never worked on EDA software, but simulation seems like an obvious and easy place to use multicore (assuming you know how to avoid coherency faux pas), whereas unless the design has a large number of similar components it will thwart GPU attempts. Synthesis and implementing designs look very NP-complete to me, and I do have experience with SAT and SMT, so I’ll let that experience talk: while you can probably run a bunch of helper utilities alongside the main execution to help make better decisions (produce lemmas), overall the problem is going to defy multicore, let alone GPU, attacks (as long as P != NP, that is…)
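To illustrate the branching litmus test, here is a minimal CUDA sketch (the kernels and update rules are hypothetical illustrations, not anyone’s product code). In the first kernel every thread applies the same D-flip-flop rule to its own element, so warps execute in lockstep; in the second, mixed cell types force threads in the same warp down different branches, which the hardware serializes.

    // 20,000 identical DFFs with different data: every lane runs the same instructions.
    __global__ void step_dffs(const char* d, char* q, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) q[i] = d[i];                    // Q <= D on the clock edge
    }

    enum CellType { DFF = 0, SRFF = 1 };

    // Mixed cell types: lanes in the same warp diverge, and the two paths run one
    // after the other, giving up much of the SIMD advantage.
    __global__ void step_mixed(const int* type, const char* s, const char* r,
                               const char* d, char* q, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        if (type[i] == DFF) {
            q[i] = d[i];
        } else {                                   // SR flip-flop path
            if (s[i])      q[i] = 1;
            else if (r[i]) q[i] = 0;               // otherwise hold the previous state
        }
    }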
Jonathan,
Thanks for the info, I learned a thing or two.
Alas, the only EDA company to apply a GPU to SPICE simulation went out of business (Nascentric).
Many EDA tools are exploiting multi-core to speed up results for logic synthesis, place & route, static timing analysis, circuit simulation, formal checking, functional verification, DRC, LVS, 3D field solvers, etc.