Q3 Memory Fabric Forum
August 27, 2024 @ 9:00 am - 5:00 pm PDT

Memory fabrics unlock the power to pool, tier, and share memory across a data center. The result is vastly more memory capacity and lower cost through higher utilization, exactly what memory-hungry generative AI applications need.
Attend this webinar for a comprehensive update on CXL® market adoption, technology, products, and use cases.
Register to attend and/or watch on YouTube after the event.
Part 1 – Industry Landscape. Highlighted by Samsung’s 9:10 PT presentation of a “Moonshot” project to leapfrog from AI clusters of 10,000 nodes to 1,000,000 nodes.
Part 2 – Technology and Products for the Enterprise. Highlighted by NVIDIA’s 12:09 PT overview of its portfolio of high-bandwidth, low-latency networking products for AI.
Part 3 – Technology and Products for Developers. Highlighted by new CXL forecast data presented by Montage at 1:26 PT.
Best, Frank Berry, VP of Marketing at MemVerge.
Q3 Memory Fabric Forum Agenda
Register to attend

# | Time | Speaker | Company | Presentation

Industry Landscape
1 | 9:00 PT | Frank Berry | MemVerge | The AI Big Bang
2 | 9:10 PT | Siamak Tavallaei | Samsung | At-scale Systems: Interconnecting Massively Parallel xPUs
3 | 9:41 PT | Kurtis Bowman | CXL Consortium | CXL® Advancing Coherent Connectivity
4 | 9:53 PT | Jim Handy | Objective Analysis | CXL Is Exciting, But Where is It Headed?
5 | 10:16 PT | Mark Nossokoff | Hyperion Research | HPC/AI Market Update and Industry Composability Snapshot
6 | 10:32 PT | Jack Gidding | STAC Research | STAC Overview
7 | 10:46 PT | Seth Friedman | Liquid Markets | Liquid-Markets-Solutions: Introduction to UberNIC
Technology and Products for the Enterprise
8 | 11:10 PT | Anil Godbole | Intel | Intel Compute Express Link™ Enablement
9 | 11:25 PT | Michael Abraham | Micron | CXL-Compatible Memory Modules
10 | 11:45 PT | Torry Steed | SMART Modular | CXL Memory Expansion Advantages
11 | 12:09 PT | Reggie Reynolds | NVIDIA | NVIDIA Networking for HPC, AI, and Accelerated IO
12 | 12:41 PT | Steve Scargall | MemVerge | Memory Machine™ for CXL
13 | 1:00 PT | Philip Maher | MSI | S2301 CXL Memory Expansion Server
Technology and Products for Developers
14 | 1:08 PT | Michael Ocampo | Astera Labs | Accelerating AI & ML with CXL-Attached Memory
15 | 1:26 PT | Geof Findley | Montage | CXL 2.0 Use Case: Using Both DDR4 & DDR5 on the Same Server
16 | 1:43 PT | JP Jiang | XConn | CXL Switch for Scalable & Composable Memory Pooling/Sharing
17 | 2:05 PT | Gary Ruggles | Synopsys | Enabling New Memory Applications Using CXL IP
18 | 2:37 PT | Nilesh Shah | ZeroPoint | Hyperscale Composable Memory Systems with Dynamically Adjusting Compressed Tier
19 | 3:00 PT | Grant Mackey | Jackrabbit Labs | You Don’t Know Jack: CXL Fabric Orchestration and Management
20 | 3:19 PT | Bill Gervasi | Wolley | NVMe Over CXL: How CXL Lets Us Do Controller Memory Buffers the Right Way
21 | 3:39 PT | Bill Gervasi | Wolley | FleX: Bringing CXL to the Motherboard
22 | 3:59 PT | Bill Gervasi | Wolley | CXL Native Memory: Do We Really Need DDR?
23 | 4:20 PT | Arvind Jagannath | VMware | VMware Memory Tiering: Customer-Ready Today