CPU Tests: Microbenchmarks

A y-Cruncher Sprint

The y-cruncher website has a large amount of benchmark data showing how different CPUs perform when calculating pi to a given number of digits. Not only are the pi world records present, but below them are a few CPUs showing how the hardware scales: the time to compute moving from 25 million digits to 50 million, 100 million, 250 million, and all the way up to 10 billion, to showcase how the performance scales with digit count (assuming everything stays in memory). This range of results, from 25 million to 10 billion, is something I’ve dubbed a ‘sprint’.

I have written some code to perform a sprint on every CPU we test. It detects how much DRAM is installed, works out the largest digit count that can be calculated in that amount of memory, and works upward starting from 25 million digits. For the tests that go up to ~25 billion digits, it only adds an extra 15 minutes to the suite on an 8-core Ryzen CPU. With this test, we can see the effect of increasing memory requirements on the workload, and how performance scales as the digit count grows.
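
As a rough sketch of how such a harness might build its ladder of test sizes, here is a minimal Python version. The function names, the ~5-bytes-per-digit memory estimate, and the Linux-only memory probe are my assumptions for illustration, not the actual in-house tool:

```python
import os

def sprint_sizes(total_bytes, bytes_per_digit=5.0):
    """Build the 25M -> 50M -> 100M -> 250M -> ... digit ladder,
    stopping once a size would no longer fit in memory.
    bytes_per_digit is a rough approximation of the footprint."""
    limit = total_bytes / bytes_per_digit
    sizes, n, i = [], 25_000_000, 0
    steps = (2, 2, 2.5)              # 25 -> 50 -> 100 -> 250 -> ...
    while n <= limit:
        sizes.append(int(n))
        n *= steps[i % 3]
        i += 1
    return sizes

def detect_memory_bytes():
    # Linux-only sketch; other platforms need different probes
    return os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES")
```

Each resulting size would then be handed to y-cruncher for a timed run; the actual command-line invocation is omitted here.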

Longer lines indicate more memory installed in the system at the time

For this sprint, we’ve converted each result into how many million digits are calculated per second at each of the dataset sizes. The more cores a system has, the better the compute, and Intel gets a bonus here as well because the software can use AVX-512. But as the dataset gets larger, there is more shuffling of values back and forth between memory and cache, so keeping high bandwidth alongside low latency to all cores is crucial in this test, especially as the dataset size increases.

The 8-channel, 64-core TR Pro 3995WX does very well here, peaking at around 80 million digits per second, and it is still very fast at the end of the test. It sits above the EPYC 7742 because it has a higher TDP and frequency. Both are well above the Threadripper 3990X, which only has quad-channel memory, which is the reason for its decline as the dataset grows.

The W-3175X from Intel has the AVX-512 advantage, which is why its 28 cores can compete with AMD’s 64; however, the six-channel memory bandwidth, and probably the mesh interconnect, quickly becomes a bottleneck as each core tries to feed its AVX-512 units. This is the sort of situation where in-package HBM is likely to make a big difference. At the smaller dataset sizes, at least, the W-3175X can feed enough data across the mesh to the AVX-512 units to reach peak throughput.

Core-to-Core Latency

As the core count of modern CPUs grows, we are reaching a point where the latency to access one core from another is no longer constant. Even before the advent of heterogeneous SoC designs, processors built on large rings or meshes could have different latencies when accessing the nearest core compared to the furthest core. This rings true especially in multi-socket server environments.

But modern CPUs, even desktop and consumer CPUs, can have variable access latency to get to another core. For example, first-generation Threadripper CPUs had four dies on the package (two of them active), each active die carrying eight cores, with a different core-to-core latency depending on whether the access was on-die or off-die. This gets more complex with products like Lakefield, which has two different communication buses depending on which core is talking to which.

If you are a regular reader of AnandTech’s CPU reviews, you will recognize our Core-to-Core latency test. It’s a great way to show exactly how groups of cores are laid out on the silicon. This is a custom in-house test built by Andrei, and while we know there are competing tests out there, we feel ours gives the most accurate picture of how quickly an access between two cores can happen.

Due to a test limitation, we’re only probing the first 64 threads of the system, but the scaling out to 128 threads would be identical. This generation of Threadripper Pro is built on Zen 2, similar to the Threadripper 3990X and the EPYC 7742, so we only have quad-core CCXes in play here. A thread speaking to itself has a latency of around 7 nanoseconds, communication inside a quad-core CCX takes around 18-19 nanoseconds, and accessing any other core varies from 77-89 nanoseconds. Even accessing the other CCX on the same chiplet incurs the same latency, as the communication is designed to go out to the central IO die first. If Threadripper Pro moves to Zen 3 for the next generation, this will be a big uplift, as we’ve already seen with Zen 3 elsewhere. But TR Pro with Zen 3 might only launch when Zen 4 comes out, and we’ll be talking about that difference when it happens.
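
The idea behind a test like this can be sketched as a value ping-ponged between two pinned cores. This is a heavily simplified stand-in for the in-house tool (the function names and the pinning approach are mine), and in CPython the absolute numbers are inflated by interpreter overhead, so only the relative differences between core pairs are meaningful:

```python
import multiprocessing as mp
import os
import time

def _pin(core):
    # Best-effort pinning; Linux-only, silently skipped elsewhere
    try:
        os.sched_setaffinity(0, {core})
    except (AttributeError, OSError):
        pass

def _pong(flag, iters, core):
    _pin(core)
    for _ in range(iters):
        while flag.value != 1:       # wait for the ping
            pass
        flag.value = 0               # reply with a pong

def pingpong_ns(core_a=0, core_b=1, iters=2000):
    """Average round-trip time, in ns, for a shared value
    bounced between core_a and core_b."""
    ctx = mp.get_context("fork")          # POSIX-only sketch
    flag = ctx.Value("i", 0, lock=False)  # shared int, no locking
    p = ctx.Process(target=_pong, args=(flag, iters, core_b))
    p.start()
    _pin(core_a)
    t0 = time.perf_counter_ns()
    for _ in range(iters):
        flag.value = 1               # ping
        while flag.value != 0:       # spin until the pong arrives
            pass
    dt = time.perf_counter_ns() - t0
    p.join()
    return dt / iters
```

Sweeping `core_b` over every core while holding `core_a` fixed produces one row of the latency matrix.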

Frequency Ramping

Both AMD and Intel have, over the past few years, introduced features to their processors that speed up the transition from idle to a high-power state. The effect of this is that users get peak performance sooner, but the biggest knock-on effect is on battery life in mobile devices, especially if a system can turbo up and turbo down quickly, ensuring that it stays in the lowest, most efficient power state for as long as possible.

Intel’s technology is called SpeedShift, although SpeedShift was not enabled until Skylake.

One of the issues with this technology is that the adjustments in frequency can sometimes be so fast that software cannot detect them. If the frequency is changing on the order of microseconds, but your software is only probing it in milliseconds (or seconds), then quick changes will be missed. Not only that, but as an observer probing the frequency, you could be affecting the actual turbo performance. When the CPU changes frequency, it essentially has to pause all compute while it aligns the frequency of the whole core.

We wrote an extensive review analysis piece on this, called ‘Reaching for Turbo: Aligning Perception with AMD’s Frequency Metrics’, due to an issue where users were not observing the peak turbo speeds for AMD’s processors.

We got around the issue by making the workload that probes the frequency also the workload that causes the turbo. The software is able to detect frequency adjustments on a microsecond scale, so we can see how quickly a system gets to its boost frequencies. Our Frequency Ramp tool has already been in use in a number of reviews.
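
A toy version of that idea looks like this; the names and constants are mine, and the real tool works at microsecond granularity with far less overhead:

```python
import time

def ramp_samples(chunks=200, work=20_000, idle_s=0.05):
    """Idle briefly so the core drops to a low-power state, then
    time fixed units of work back-to-back. The measurement loop is
    itself the load that triggers the turbo ramp, so no external
    observer perturbs the result."""
    time.sleep(idle_s)               # let the core go idle
    samples = []
    for _ in range(chunks):
        t0 = time.perf_counter_ns()
        x = 0
        for i in range(work):        # fixed amount of integer work
            x += i
        samples.append(time.perf_counter_ns() - t0)
    return samples
```

If the core ramps, the first handful of samples take noticeably longer than the steady-state ones, and the point where the per-chunk time settles gives the ramp time.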

The frequency ramp here is around one millisecond, indicative of AMD implementing its CPPC2 management design.

The AMD Threadripper Pro 3995WX Review: 280W, or Does It Turbo To More?
Comments Locked



  • Fellovv - Tuesday, February 9, 2021 - link

    Agreed— picked up a P620 with 16c for $2500; could have gotten it for lower from Lenovo if they didn’t have weeks of lead time. Ian- you may see Lenovo discount all the crazy prices by about 50% all year, and sometimes there are Honey coupons to knock off hundreds more.
    I have read that the 16c 2-CCX 3955WX may only get 4-channel RAM, not the full 8. I may be able to confirm in the near future. Gracias for the fine and thorough review. My only request is to ensure the TR 3990X is included in every graph— it was MIA or AWOL in several. I went with the TR Pro for the RAM and PCIe 4 lanes. Seeing the results confirms it was a good choice for me. Can’t wait for Zen 3!
  • realbabilu - Tuesday, February 9, 2021 - link

    Nice 👍 about MKL. How about BLIS and OpenBLAS? Did they suffer the high multi-core problem?
  • MonkeyMan73 - Wednesday, February 10, 2021 - link

    AMD has the performance crown in most scenarios, but it comes at an extremely high price point. It might not be worth this kind of money even for the most extreme power user. Maybe get a dual core Xeon? Might be cheaper.

    BTW, your last pic of this review is definitely not an OPPO Reno 2 :)
  • MonkeyMan73 - Wednesday, February 10, 2021 - link

    Apologies, not a Dual core Xeon, that will not cut it; I meant a Dual Socket Xeon setup.
  • Oxford Guy - Wednesday, February 10, 2021 - link

    The worst aspect of the price-to-performance is that it’s using outdated tech rather than Zen 3.
  • MonkeyMan73 - Sunday, February 28, 2021 - link

    Correct, there is always some sort of trade-off.
  • Greg13 - Wednesday, February 10, 2021 - link

    I feel like you guys really need to get some more memory-intensive workloads to test. So often in these Threadripper / Threadripper Pro / EPYC reviews, the consumer CPU (5950X in this case) is often faster or not far behind even on highly multithreaded applications. I do some pretty large thermal fluid system simulations in Simscape whereby, once a system is designed, I use an optimisation algorithm to find the optimal operating parameters of the system. This involves running multiple simulations of the same model in parallel using the Matlab Parallel Computing Toolbox along with their Global Optimisation Toolbox. Last year I bought a 3950X and 128GB ram to do this, but as far as I can tell it is massively memory bandwidth limited. It's also memory capacity limited too... Each simulation uses around 10GB ram, so I generally only run 12 parallel workers to keep within the 128GB of ram. However, in terms of throughput I see barely any change when dropping down to 8 parallel workers, suggesting, I think, that with 12 workers it's massively memory bandwidth limited. This also seems to be the case in terms of the CPU power: even with 12 workers going, the CPU power reported is pretty low, which leads me to think it's waiting for data from memory?

    I assume that this would be better with Threadripper, or even better with Threadripper Pro, with their double and quadruple memory bandwidth. However I don't have the funds to buy a selection of kit and test it to see if the extra cost is worth it. It would be good if you guys could add some more memory-intensive tests to the suite (ideally for me some parallel Simscape simulations!) to show the benefit these extra memory channels (and capacity) offer.
  • Shmee - Wednesday, February 10, 2021 - link

    Yeah, I would wait for Zen 3 TR for sure. That said, this would only make sense because X570 has limited IO. It would be great to have a nice 16-core TR that had great OC capability and ST performance, was great in games, and did not have the IO limitations of X570. I really don't need all the cores; mainly I care about gaming, but the current gaming platforms just don't have the SATA and M.2 ports I would like. Extra memory bandwidth is also nice.
  • eastcoast_pete - Wednesday, February 10, 2021 - link

    Thanks Ian! I really wanted one, until I saw the system price (: But, for what these pro TRs can do, it's a price many are willing and able to pay.
    Also, as it almost always comes up in discussions of AMD vs Intel workstation processors: could you write a backgrounder on what AVX is and is used for, and how open or open source extensions like AVX-512 really are? My understanding is that much of this is proprietary to Intel, but are those AVX-512 extensions available to AMD, or do they have to engineer around them?
  • kgardas - Wednesday, February 10, 2021 - link

    AVX-512 is an instruction set invented and implemented by Intel. It is currently available in Tiger Lake laptops and Xeon W desktops, plus of course the server Xeons. The previous generation was AVX2, and the generation before that was AVX. AVX came with Intel's Sandy Bridge cores 9 years ago IIRC; AVX2 with Haswell.
    Due to various reasons, IIRC, AMD and Intel cross-licensed their instruction sets years ago. Intel needed AMD's AMD64 to compete. Not sure if future extensions are also part of the deal, but I would guess so, since AMD has since implemented both AVX and AVX2. Currently AMD sees no big pressure from Intel, hence I guess it is not motivated enough to implement AVX-512. Once it is, I guess we will see AMD chips with AVX-512 too.
