I am currently evaluating a 2 TB RamSan-620 SSD storage array from Texas Memory Systems attached to a vSphere 4 infrastructure. The RamSan-620 is a rack-mounted SLC NAND Flash SSD that provides plenty of shareable, high-performance storage. The RamSan-620 has extremely low latency and delivers 250,000 IOPS of sustained performance, according to its specifications.
The RamSan-620 is not on VMware's HCL, but it passed all of our technical tests and worked with all features of vSphere 4. After verifying that the unit worked correctly with vSphere, it was time to test the performance of the device. The RamSan-620 is equipped with two dual-port 4Gb Fibre Channel controllers and can be expanded with two additional controllers, resulting in eight Fibre Channel ports.
In the test setup the RamSan-620 is connected to two HP BL460 G6 blades through two HP Brocade 8Gb SAN switches. Each blade has a QLogic QMH2462 4Gb dual-port FC HBA and runs ESXi 4 Update 1. The queue depth of the QMH2462 has been adjusted to 256.
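For reference, the queue depth change can be made from the ESXi command line by setting the QLogic driver module option and rebooting the host. This is a sketch, assuming the default qla2xxx driver module used by the QMH2462 on ESXi 4:

```shell
# Set the QLogic HBA queue depth to 256 (assumption: qla2xxx driver module)
esxcfg-module -s ql2xmaxqdepth=256 qla2xxx
# Verify the option took effect, then reboot the host
esxcfg-module -g qla2xxx
```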
Each ESXi 4 server contains a Windows 2003 virtual machine with a VMware Paravirtual SCSI (PVSCSI) adapter and four vCPUs, running in a datastore on a RamSan-620 LUN. The LUNs are configured with the VMware round-robin path policy.
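Switching a LUN to round robin can be done from the ESXi 4 command line with esxcli. A minimal sketch, where the naa identifier is a placeholder for the actual RamSan-620 device ID:

```shell
# Set the round-robin path selection policy on a LUN
# (naa.<device-id> is a placeholder; find the real ID with: esxcli nmp device list)
esxcli nmp device setpolicy --device naa.<device-id> --psp VMW_PSP_RR
```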
I used the SQLIO performance tool from Microsoft to perform various tests on the unit. To test the I/O throughput I ran the following SQLIO command on both virtual machines:
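The exact command is not preserved here; the following is a reconstruction based on the workload described below (four threads, 4 KB random reads, 2 GB test file). The duration and outstanding-I/O values are assumptions:

```shell
rem param.txt defines the test file: path, threads, CPU mask, size in MB
rem E:\testfile.dat 4 0x0 2048
sqlio -kR -frandom -b4 -t4 -o32 -s120 -LS -BN -Fparam.txt
```

Here -kR selects reads, -frandom random access, -b4 a 4 KB block size, -t4 four threads, -LS latency statistics and -BN unbuffered I/O; -o32 (outstanding I/Os per thread) and -s120 (duration in seconds) are illustrative values.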
This simulated four threads performing 4 KB random reads on 2 GB files. Each VM performs around 60,218 4 KB random reads per second, resulting in a total of 120,436 IOPS. During these tests all vCPUs run at 100% utilization. 99% of all reads complete with a reported latency of 0 ms, i.e. under one millisecond. Very good results!
On the RamSan-620 Fibre Channel interfaces we can see that the I/O load is spread equally among all interfaces.
Using ATTO Disk Benchmark I tested the throughput by starting a benchmark on one virtual machine. With 8192 KB block reads, a throughput of 800 MB/sec is reached. In this case the 4Gb HBA of the blade server becomes the bottleneck: it delivers 800 MB/sec throughput per channel in full-duplex mode. Since ESX 4 uses only one active path at a time with the round-robin path policy, a maximum of 800 MB/sec throughput can be reached.
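The 800 MB/sec ceiling follows from simple link arithmetic. A 4Gb Fibre Channel link carries roughly 400 MB/sec of usable payload per direction (the 4.25 Gbaud line rate minus 8b/10b encoding overhead), and full duplex doubles that:

```shell
# Back-of-the-envelope ceiling for a single 4Gb FC channel
# ~400 MB/s usable payload per direction is the commonly quoted figure
per_direction=400
echo $(( per_direction * 2 ))   # full-duplex ceiling in MB/s: 800
```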
From these basic performance tests the RamSan-620 SSD storage device proves to be a high-performance storage device with extremely low latency, and it is very useful as tier 0 storage for a VMware vSphere infrastructure. In the coming weeks I will test the device further and post my results.