Texas Memory Systems RamSan-620

I am currently evaluating a 2 TB RamSan-620 SSD storage array from Texas Memory Systems attached to a vSphere 4 infrastructure. The RamSan-620 is a rack-mounted SLC NAND flash SSD that provides plenty of shareable, high-performance storage. The RamSan-620 has extremely low latency and delivers 250,000 IOPS of sustained performance according to the specifications.

The RamSan-620 is not on the VMware HCL, but it passed all of the defined technical tests and worked with all features of vSphere 4. So after verifying that the unit worked correctly with vSphere, it was time to test the performance of the device. The RamSan-620 is equipped with two dual-port 4 Gb Fibre Channel controllers and can be expanded with two additional controllers, for a total of eight Fibre Channel ports.

In the test setup the RamSan-620 is connected to two HP BL460c G6 blades through two HP Brocade 8 Gb SAN switches. Each blade has a QLogic QMH2462 4 Gb dual-port FC HBA and runs ESXi 4 Update 1. The queue depth of the QMH2462 has been increased to 256.


Each ESXi 4 server contains a Windows 2003 virtual machine with a VMware Paravirtual SCSI (PVSCSI) adapter and 4 vCPUs, running in a datastore on a RamSan-620 LUN. The LUNs are configured with the VMware round-robin path policy.

I used the SQLIO performance tool from Microsoft to perform various tests on the unit. To test the I/O throughput I fired up the following SQLIO command on both virtual machines:
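The command itself appears to have been an image and did not survive extraction. A typical SQLIO invocation matching the workload described below (four threads of 4 KB random reads against a 2 GB test file) would look like this; the duration (-s120), outstanding I/O count (-o8), and file path are illustrative assumptions, not the author's actual values:

```shell
rem param.txt defines the test file: <path> <threads> <mask> <size in MB>
rem c:\testfile.dat 4 0x0 2048

rem -kR = reads, -frandom = random I/O, -b4 = 4 KB blocks,
rem -LS = report latency, -BN = no buffering
sqlio -kR -frandom -b4 -o8 -s120 -LS -BN -Fparam.txt
```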


This simulated four threads performing 4 KB random reads on 2 GB files. Each VM performed around 60,218 4 KB random reads per second, resulting in a total of 120,436 IOPS. During these tests all vCPUs ran at 100% utilization, and 99% of all reads had an average latency of 0 ms. Very good results!
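As a quick sanity check on these numbers (my own arithmetic, not from the original post), the combined IOPS and the read bandwidth they imply work out as follows:

```python
# Combined 4 KB random-read throughput of the two VMs (figures from the test above)
iops_per_vm = 60218           # 4 KB random reads per second, per VM
total_iops = 2 * iops_per_vm  # two blades, one VM each

block_size = 4 * 1024         # 4 KB blocks, in bytes
bandwidth_mb_s = total_iops * block_size / 1e6  # aggregate read bandwidth

print(total_iops)             # 120436 IOPS
print(round(bandwidth_mb_s))  # ~493 MB/s across both fabrics
```

So even at a small 4 KB block size the array is pushing roughly half a gigabyte per second of random reads through the two HBAs.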

On the RamSan-620 Fibre Channel interfaces we can see that the I/O load is spread equally across all interfaces.


Using the ATTO Disk Benchmark I tested the throughput by starting a benchmark on one virtual machine. With 8192 KB block reads, a throughput of 800 MB/s is reached. In this case the 4 Gb HBA of the blade server becomes the bottleneck: the 4 Gb HBA delivers 800 MB/s of throughput per channel in full-duplex mode. Since ESX 4 always uses only one active path at a time with the round-robin path policy, a maximum of 800 MB/s of throughput can be reached.
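The 800 MB/s ceiling follows from the 4GFC link budget. A back-of-the-envelope calculation (the signalling rate and encoding figures are my own additions, not from the post):

```python
# Why one 4 Gb FC port tops out around 400 MB/s per direction
line_rate = 4.25e9            # 4GFC signalling rate in baud
data_rate = line_rate * 8/10  # 8b/10b encoding carries 8 data bits per 10 baud
per_direction = data_rate / 8 / 1e6  # raw payload rate in MB/s, per direction

# Frame headers, CRC and inter-frame gaps eat a few percent, giving the
# commonly quoted ~400 MB/s per direction, ~800 MB/s full duplex.
full_duplex = 2 * per_direction
print(round(per_direction), round(full_duplex))  # 425 850 (raw, before framing)
```

That matches the observed plateau: with one active path, reads and writes together cannot exceed roughly 800 MB/s per HBA channel.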


From these basic performance tests the RamSan-620 SSD storage device proves to be a high-performance storage device with extremely low latency, and it is very useful as a tier 0 storage device for a VMware vSphere infrastructure environment. In the coming weeks I will test the device further and post my results.

6 Responses to Texas Memory Systems RamSan-620

  1. […] This post was mentioned on Twitter by Texas Memory Systems, Levi Norman. Levi Norman said: RT @TexasMemory: "RamSan-620…very useful as a tier 0 storage device for a VMware vSphere Infrastructure environment" http://bit.ly/9D1Nwn […]

  2. RW says:

    I’m very interested to see if you’ve had any further test results since this posting. I’m thinking of purchasing the RamSan-620 for our VDI implementation. Any further info would be helpful.

  3. Ted Steenvoorden says:

    Hi RW. I have done additional testing on the unit but haven’t had time to process the results and write a blog post. But feel free to ask any questions!

  4. RW says:

    Just checking to see if you’ve had time to post any of the other results from the RAM SAN 620 that you were testing.

  5. Alex says:

    Hi Ted,

    Can you say anything about the write performance?
    I’m very interested to see what the random write performance of the RAM SAN 620 will be with an average block size of 4K.

  6. Andreas says:

    Sounds quite interesting to me. We have several customer cases here in Europe. If you need anything, drop me a mail. Cheers, Andreas
