Comments on: IDF 2016: Intel To Demo Optane XPoint, Announces Optane Testbed for Enterprise Customers
https://pcper.com/2016/08/idf-2016-intel-to-demo-optane-xpoint-announces-optane-testbed-for-enterprise-customers/

By: Paul A. Mitchell
Mon, 31 Oct 2016 05:42:02 +0000

http://semiaccurate.com/2016/09/12/intels-xpoint-pretty-much-broken/

By: MRFS
Thu, 18 Aug 2016 22:16:21 +0000

I apologize for my math error above:
it was truly a day to forget
(I’ll spare you the political details).

Here’s a simplified version of
our bandwidth comparison with DDR3-1600:

Assume:
DDR3-1600 (parallel bus)
1,600 MT/s x 8 bytes per transfer = 12,800 MB/second
(i.e. exactly TWICE PC2-6400 = 800 MT/s x 8 bytes)

Now, serialize with PCIe 3.0
(8 GT/s signaling + 128b/130b encoding):

1 x NVMe PCIe 3.0 lane
= 8 GT/s / 8.125 bits per byte = 984.6 MB/second

4 x NVMe PCIe 3.0 lanes
= 4 x 984.6 MB/second = 3,938.4 MB/second

4 x 2.5″ NVMe SSDs in RAID-0 (zero controller overhead)
= 4 x 3,938.4 = 15,753.6 MB/second

Compute aggregate overhead:
1.0 - (12,800 / 15,753.6) = 18.75% total overhead

Highpoint calculated 15,760 (almost identical):
http://highpoint-tech.com/PDF/RR3800/RocketRAID_3840A_PR_16_08_04.pdf

Conclusion:
assuming aggregate controller overhead of 18.75%,
four 2.5″ NVMe SSDs in RAID-0
exactly equal the raw bandwidth
of DDR3-1600 DRAM.
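
For anyone who wants to re-check the arithmetic, here is a minimal Python sketch of the same comparison (the script and its variable names are mine, not part of the original figures):

# Back-of-envelope check of the DDR3-1600 vs. 4x NVMe RAID-0 comparison above.
DDR3_1600_MBPS = 1600 * 8                 # 12,800 MB/s raw DRAM bandwidth
PCIE3_GTPS = 8.0                          # PCIe 3.0: 8 GT/s per lane
BITS_PER_BYTE = 130 / 16                  # 128b/130b: 8.125 bits on the wire per payload byte

lane_mbps = PCIE3_GTPS * 1000 / BITS_PER_BYTE   # ~984.6 MB/s per PCIe 3.0 lane
ssd_mbps = 4 * lane_mbps                        # one x4 NVMe SSD: ~3,938.5 MB/s
raid0_mbps = 4 * ssd_mbps                       # four members, zero controller overhead: ~15,753.8 MB/s
overhead = 1.0 - DDR3_1600_MBPS / raid0_mbps    # 0.1875, i.e. 18.75%

print(round(lane_mbps, 1), round(ssd_mbps, 1), round(raid0_mbps, 1), round(overhead * 100, 2))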

By: MRFS
Thu, 18 Aug 2016 00:07:15 +0000

In reply to MRFS.

OOPS! This is an error:

> With perfect scaling and zero controller overhead,
> an NVMe RAID Controller with 3 members of a RAID-0 array
> might come close:
> 3 @ 8G = 24 Gbps vs. 26 Gb/s (see above)

Correction: each array member uses 4 PCIe 3.0 lanes:

3 @ 32G = 96 Gbps vs. the 104 Gb/s total (see above)

Sorry about the typo.

By: MRFS
Wed, 17 Aug 2016 23:38:05 +0000

Now, for a future possibility:

IF (BIG IF here) …
IF NVMe 4.0 “syncs” with PCIe 4.0, THEN
re-calculate with 16 GT/s serial data channels:

assume a RAID-0 array with zero controller overhead and
4 member NVMe 4.0 SSDs with lanes signaling
at 16 GT/s and using 128b/130b encoding:

THEN …

4 NVMe SSDs x 4 PCIe 4.0 lanes x 16 GT/s per lane / 8.125 bits per byte
= 31.5 GB/second (roughly 2 GB/s per lane)

So, under those assumptions, such an NVMe storage subsystem
does exceed the raw bandwidth of DDR3-1600, theoretically.
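
A minimal Python sketch of that PCIe 4.0 projection, under the same zero-overhead assumption (the variable names are mine):

# Projected bandwidth of 4 NVMe SSDs, each on 4 PCIe 4.0 lanes, 128b/130b encoding.
PCIE4_GTPS = 16.0                 # PCIe 4.0: 16 GT/s per lane
BITS_PER_BYTE = 130 / 16          # 8.125 bits on the wire per payload byte

lane_gbps = PCIE4_GTPS / BITS_PER_BYTE      # ~1.97 GB/s per lane
array_gbps = 4 * 4 * lane_gbps              # 16 lanes total: ~31.5 GB/s
print(round(lane_gbps, 2), round(array_gbps, 2), array_gbps > 12.8)   # 12.8 GB/s = DDR3-1600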

Reviews of Highpoint’s RocketRAID 3840A (recently announced)
should tell us a LOT about how close the above calculations are.

By: MRFS
Wed, 17 Aug 2016 23:22:34 +0000

In reply to Anonymous.

There’s an assumption that is often hidden in these predictions:
that NVDIMMs will be deployed IN PLACE OF DDR4+.

On the other hand, I can visualize
a triple-channel memory subsystem which uses 2 of 3
“banks” for normal DRAM, while the third bank is populated
with NVDIMMs that spend most of their time on READs,
e.g. launching programs.

Hosting an OS in non-volatile memory has a LOT to
recommend it, e.g. almost INSTANT-ON restarts.

As such, durability is not a singular concept, but
should come with separate metrics for READs and WRITEs.

By: MRFS
Wed, 17 Aug 2016 23:13:55 +0000

In reply to Allyn Malventano.

> The same thing would happen if someone tried to make an NVMe SSD full of DRAM.

Allyn’s excellent point here is very easy to prove,
using some simple arithmetic:

Take DDR3-1600, just to illustrate
(yes, DDR4 is the “latest” but let’s go with
the larger installed base of DRAM):

1,600 MT/s x 8 bytes = 12,800 MB/second (stock speed / no overclock)

There is a very large installed base of DDR3-1600 DRAM
(e.g. in a myriad of laptops).

Now, serialize that data stream, assuming PCIe 3.0 specs:

12,800 MB/second x 8.125 bits per byte = 104.0 Gb/sec

One NVMe device uses 4 x PCIe 3.0 lanes at 8 GT/s per lane

104 Gb/s / 4 NVMe PCIe lanes = 26 Gb/s per PCIe 3.0 lane

BUT, each PCIe 3.0 lane signals at only 8 GT/s presently.

With perfect scaling and zero controller overhead,
an NVMe RAID Controller with 3 members of a RAID-0 array
might come close:

3 @ 8G = 24 Gbps vs. 26 Gb/s (see above)

With realistic scaling and non-zero controller overheads,
a RAID-0 array with 4 members might come close.
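
For reference, here is a short Python sketch of that per-lane arithmetic (my own variable names, same assumptions as above):

# How many Gb/s per PCIe 3.0 lane would be needed to match DDR3-1600 once serialized.
DDR3_MBPS = 1600 * 8                       # 12,800 MB/s
wire_gbps = DDR3_MBPS * 8.125 / 1000       # 104 Gb/s on the wire with 128b/130b encoding
per_lane_gbps = wire_gbps / 4              # 26 Gb/s needed per lane of one x4 NVMe device
print(wire_gbps, per_lane_gbps)            # vs. 8 GT/s actually available per PCIe 3.0 lane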

Another good comparison would be a RAID-0 array
of 12G SAS devices: 12G SAS signals faster
than one NVMe PCIe 3.0 lane, but SAS still uses the
legacy 8b/10b encoding, so there’s a trade-off.

By: BlackDove
Wed, 17 Aug 2016 11:28:39 +0000

In reply to Allyn Malventano.

Skylake Purley and Knights Hill, as well as (probably) Fujitsu’s ARMv8 CPU for their Post-K exascale architecture, will all likely be using XPoint directly addressed by the CPU.

That should show its true capabilities much better than PCIe-bottlenecked SSDs do.

My only concern with XPoint is some kind of artificially imposed endurance limitation.

By: Allyn Malventano
Wed, 17 Aug 2016 00:11:07 +0000

In reply to Anonymous.

Yeah, that article is kinda out to lunch. First gen XPoint was meant for storage-class memory, not to act as DRAM, which is what it will take to realize the raw 1000x performance gains over flash. XPoint basically pushes storage class devices into their other bottlenecks. The same thing would happen if someone tried to make an NVMe SSD full of DRAM.

By: Anonymous
Tue, 16 Aug 2016 22:43:39 +0000

Really, those XPoint haters are responding to Intel’s marketing claims, and who doesn’t mistrust the marketing “profession”? So it’s really a response to Intel’s pie-in-the-sky claims about XPoint’s actually obtainable performance metrics in a non-vaporware form.

Most are hoping that XPoint will have enough durability to justify using it in DIMMs alongside DRAM, and enough of an improvement over NAND to justify XPoint’s added cost, but more testing is in order, hopefully on engineering samples, to see beyond the marketing hype!

“Intel’s Optane XPoint DIMMs pushed back – source”

http://www.theregister.co.uk/2016/08/16/intel_optane_xpoint_dimms_and_ssds_delayed/
