In this blog

Summary

These real-time workflows and applications have exposed a large performance gap in existing solutions. Ideally, these applications would live entirely in system memory, but legacy architectures cannot support such a deployment: they lack the required capacity, performance, or availability. This data center challenge has opened the door to new, innovative software-defined memory solutions like MemVerge Memory Machine.

The MemVerge Memory Machine solution enables administrators to combine DRAM and Intel Optane Persistent Memory Modules (PMem) into a single memory pool and present it to their in-memory applications without rewriting them. In addition, Memory Machine provides data services such as in-memory snapshots and replication, addressing the application fault-tolerance and data-durability challenges that arise when applications are deployed on persistent memory.
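Because Memory Machine presents this pool as a transparent layer underneath the application, an unmodified binary is simply launched through the mm wrapper, the same launcher used in the test runs below. A minimal sketch, with a placeholder application path and arguments:

## launch an unmodified in-memory application under Memory Machine
## (application path and arguments below are placeholders)
# mm /path/to/in-memory-app --app-args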

In the ATC Insight section, you will see how we tested the performance of MemVerge Memory Machine with a real-world in-memory database application, KDB+.

MemVerge Memory Machine Test


Expectations

The MemVerge lab is a comprehensive set of tests recommended for general-purpose Proof of Concept testing. The purpose is to showcase the features, performance, and functionality of Memory Machine, which in turn shows MemVerge customers how to use it to serve their business goals. The application used in the demo is KDB+, a time-series DBMS.

ATC Insight

Hardware

A pair of Dell PowerEdge Intel-based dual-socket servers were used for the testing, each configured with 384 GB of DRAM and 3 TB of Intel Optane Persistent Memory. It is recommended that users have at least 1 TB of PMEM in each server node.
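Before installing Memory Machine, it is worth confirming how the Optane modules are provisioned. A quick sanity check with Intel's standard PMem tools (assuming ipmctl and ndctl are installed; Memory Machine consumes PMem in App Direct mode):

## show how PMem capacity is provisioned (App Direct vs. Memory Mode)
# ipmctl show -memoryresources
## list the namespaces created on top of the App Direct regions
# ndctl list -N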

Hardware of Dell PowerEdge Intel-based dual socket servers

Software Versions: 


 

Performance Test

KDB Performance Testing

Kx's KDB+ is a time-series in-memory database. It is known for its speed and efficiency and is, for that reason, very popular in the financial services industry. One big constraint for KDB+ is the limit on DRAM capacity. MemVerge Memory Machine fits perfectly here: KDB+ can take full advantage of PMEM for expanded memory space at performance similar to that of DRAM.

KDB Benchmark test with bulk insert 

Using a small amount of DRAM as cache, MemVerge Memory Machine closes the performance gap, and in some cases even surpasses DRAM performance, thanks to its use of huge pages and a better memory allocator.

KDB Benchmark test with bulk insert
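One quick way to observe the huge-page side of this on a Linux host is to watch the kernel's huge-page counters while an mm-launched run is active; this is standard Linux, nothing MemVerge-specific:

## check the huge page pool and how much of it is in use
# grep Huge /proc/meminfo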

 

KDB Benchmark test with Read Test

The results show that adding DRAM cache closes most of the gap to the pure DRAM-only configuration, and with enough DRAM caching Memory Machine can even surpass it.

KDB Benchmark test with Read Test

Test Cases

KDB+ is a column-based relational time-series database with in-memory abilities, developed and marketed by Kx Systems. The database is commonly used in high-frequency trading to store, analyze, process, and retrieve large data sets at high speed.
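The single-versus-bulk distinction that the benchmark measures is easy to reproduce in a few lines of q. A minimal sketch using the same trade-style schema as the benchmark output below (table and column names are illustrative):

q)/ empty table with typed columns
q)trade:([] time:`time$(); sym:`symbol$(); price:`float$(); size:`int$())
q)/ single insert: one row per call
q)`trade insert (09:30:00.000;`a;10.75;100i)
q)/ bulk insert: one call appends 100 rows as column vectors, amortizing per-call overhead
q)`trade insert (100#09:30:00.000;100#`a;100#10.75;100#100i)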

KDB Benchmark test with bulk insert 

Bulk insert on KDB when it runs on DRAM

# cd kdb
# export QHOME=/root/memverge/demo/kdb
# numactl -C 1-16 /root/memverge/demo/kdb/l64/q inserts-mps.q
KDB+ 4.0 2020.05.04 Copyright (C) 1993-2020 Kx Systems
l64/ 16(16)core 93587MB root memverge1 192.168.61.240 EXPIRE 2021.08.14 charlie.yu@memverge.com KOD #4172451

,0
(`s#+(,`sym)!,,`a)!+(,`size)!,,100i

time          sym price size
---------------------------
09:30:00.000 a    10.75 100

2.618 million inserts per second (single insert)
35.714 million inserts per second (bulk insert 10)
142.857 million inserts per second (bulk insert 100)
142.857 million inserts per second (bulk insert 1000)
48.544 million inserts per second (bulk insert 10000)
75.758 million inserts per second (bulk insert 15000)
80 million inserts per second (bulk insert 20000)
74.627 million inserts per second (bulk insert 25000)

Bulk insert on KDB when it runs on Memory Machine with only PMEM

# numactl -C 1-16 mm /root/memverge/demo/kdb/l64/q inserts-mps.q
KDB+ 4.0 2020.05.04 Copyright (C) 1993-2020 Kx Systems
l64/ 16(16)core 93587MB root memverge1 192.168.61.240 EXPIRE 2021.08.14 charlie.yu@memverge.com KOD #4172451

,0
(`s#+(,`sym)!,,`a)!+(,`size)!,,100i

time          sym price size
---------------------------
09:30:00.000 a    10.75 100

2.632 million inserts per second (single insert)
28.571 million inserts per second (bulk insert 10)
55.556 million inserts per second (bulk insert 100)
62.5 million inserts per second (bulk insert 1000)
36.9 million inserts per second (bulk insert 10000)
54.945 million inserts per second (bulk insert 15000)
61.92 million inserts per second (bulk insert 20000)
59.524 million inserts per second (bulk insert 25000)

Enable a 20 GB DRAM cache by adding these lines to /etc/memverge/mvmalloc.yml:

DramCacheGB: 20                  # size of the DRAM cache, in GB
HugepageDram: true               # back the DRAM cache with huge pages
DramCacheNumaInterleave: false   # do not interleave the cache across NUMA nodes

## or simply copy and overwrite the file with this command
# cp mvmalloc.yml /etc/memverge/mvmalloc.yml

Bulk insert on KDB when it runs on Memory Machine with PMEM and small DRAM cache

# numactl -C 1-16 mm /root/memverge/demo/kdb/l64/q inserts-mps.q
KDB+ 4.0 2020.05.04 Copyright (C) 1993-2020 Kx Systems
l64/ 16(16)core 93587MB root memverge1 192.168.61.240 EXPIRE 2021.08.14 charlie.yu@memverge.com KOD #4172451

,0
(`s#+(,`sym)!,,`a)!+(,`size)!,,100i

time          sym price size
---------------------------
09:30:00.000 a   10.75 100

2.688 million inserts per second (single insert)
35.714 million inserts per second (bulk insert 10)
125 million inserts per second (bulk insert 100)
142.857 million inserts per second (bulk insert 1000)
82.645 million inserts per second (bulk insert 10000)
107.914 million inserts per second (bulk insert 15000)
120.482 million inserts per second (bulk insert 20000)
122.549 million inserts per second (bulk insert 25000)
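
Pulling the three bulk-insert runs above into one view (million inserts per second):

Bulk size   DRAM only   PMEM only   PMEM + 20 GB DRAM cache
1000        142.857     62.5        142.857
10000       48.544      36.9        82.645
25000       74.627      59.524      122.549

At the peak batch size the cached configuration matches pure DRAM, and at the larger batch sizes it actually beats it, consistent with the huge-page and allocator advantages noted earlier.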

 

KDB Benchmark test with Read Test

Read test on KDB when it runs on DRAM

# cd kdb
# export QHOME=/root/memverge/demo/kdb
# numactl -C 1-16 /root/memverge/demo/kdb/l64/q readTest.q
KDB+ 4.0 2020.05.04 Copyright (C) 1993-2020 Kx Systems
l64/ 16(16)core 93587MB root memverge1 192.168.61.240 EXPIRE 2021.08.14 charlie.yu@memverge.com KOD #4172451

dram mode
create list - 4215 MiB/sec
memory usage
dram  | 34360098832
optane| 0

Read test on KDB when it runs on Memory Machine with only PMEM. The Optane reading should be ignored: Memory Machine is transparent to KDB+, so KDB+ itself never reports Optane usage (the dram figure of 34360098832 bytes is simply the roughly 32 GiB the application believes it allocated).

# sed -i 's/DramCacheGB: .*/DramCacheGB: 0/' /etc/memverge/mvmalloc.yml
# numactl -C 1-16 mm /root/memverge/demo/kdb/l64/q readTest.q
KDB+ 4.0 2020.05.04 Copyright (C) 1993-2020 Kx Systems
l64/ 16(16)core 93587MB root memverge1 192.168.61.240 EXPIRE 2021.08.14 charlie.yu@memverge.com KOD #4172451

dram mode
create list - 2874 MiB/sec
memory usage
dram  | 34360098832
optane| 0

Re-enable the 20 GB DRAM cache by modifying /etc/memverge/mvmalloc.yml:

# sed -i 's/DramCacheGB: .*/DramCacheGB: 20/' /etc/memverge/mvmalloc.yml

Read test on KDB when it runs on Memory Machine with PMEM and small DRAM cache

# numactl -C 1-16 mm /root/memverge/demo/kdb/l64/q readTest.q
KDB+ 4.0 2020.05.04 Copyright (C) 1993-2020 Kx Systems
l64/ 16(16)core 93587MB root memverge1 192.168.61.240 EXPIRE 2021.08.14 charlie.yu@memverge.com KOD #4172451

dram mode
create list - 3945 MiB/sec
memory usage
dram  | 34360098832
optane| 0
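
Side by side, the three read runs above give:

Configuration                        create list (MiB/sec)
DRAM only                            4215
Memory Machine, PMEM only            2874
Memory Machine, PMEM + 20 GB cache   3945

The 20 GB cache recovers roughly 94% of the DRAM-only throughput, versus about 68% for the uncached PMEM run.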

Technology under test

MemVerge Memory Machine


Conclusion

As you can see from the graphical results above, MemVerge Memory Machine is able to present a pool of high-speed memory to a natively installed KDB+ database instance and perform everyday database tasks faster than it ever could with a conventional solution. The combination of DRAM, Intel Persistent Memory Modules, and MemVerge Memory Machine allows administrators to build a highly performant, large-memory solution with a great return on investment compared to previous solutions that rely on slower, more expensive LRDIMMs. The solution also provides an environment with more than twice the memory capacity of conventional solutions, without the headache of rewriting the application or sacrificing performance for capacity.
 

Technologies