Thecus N2310 NAS Server Network Storage Review



Network Terminology

Benchmark Reviews primarily uses metric data measurement for testing storage products. For anyone interested in the relevant history of this sore spot in the industry, I’ve included a short explanation below:

The basic unit of data measurement is called a bit (one single binary digit). Computers use these bits, which are composed of ones and zeros, to communicate their contents. All files are stored as binary data, and translated into working files by the Operating System. This two-digit system is called the “binary number system”. In comparison, the decimal number system has ten unique digits, zero through nine. Essentially, it boils down to the difference between binary and metric measurements, because testing is deeply impacted unless the two are carefully separated. For example, the transfer time for a one-Gigabyte (1,000 Megabytes) file is going to be noticeably shorter than for a true binary Gigabyte (referred to as a Gibibyte), which contains 1,024 Mebibytes. The larger the file used for data transfer, the bigger the difference will be.

Have you ever wondered why your 500 GB hard drive only shows about 466 GB once it has been formatted? Most Operating Systems use the binary number system to express file and disk sizes, but the prefixes for the multiples are borrowed from the metric system. So even though a metric “Kilo” equals 1,000, a binary “Kilo” equals 1,024. Are you confused yet? Don’t be surprised, because even the most tech-savvy people often mistake the two. Plainly put, what the OS labels a Kilobyte (1,000 bytes) actually comprises 1,024 bytes.
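The arithmetic behind that shrinkage is simple enough to sketch in a few lines of Python (the figures assume an exact metric 500 GB; real drives also reserve a little capacity for formatting overhead):

```python
# Back-of-the-envelope check: the drive maker counts 500 * 10^9 bytes,
# but the OS divides by 1024^3 and still prints the "GB" label.
metric_bytes = 500 * 10**9            # capacity printed on the box
reported_gb = metric_bytes / 1024**3  # what the OS calls "GB" (really GiB)
print(f"{reported_gb:.1f} GB")        # ~465.7 "GB"
```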

Most network engineers are not aware that the IEC changed the way we calculate and name data chunks when it published new International Standards back in December 1998. The International Electrotechnical Commission (IEC) replaced the metric prefixes for binary multiples with new prefixes formed from the first two letters of the metric prefix plus “bi”, the first two letters of the word “binary”. For example, instead of Megabyte (MB) or Gigabyte (GB), the new terms are Mebibyte (MiB) and Gibibyte (GiB). While this is the official IEC International Standard, it has not been widely adopted, because many institutions either still don’t know about it or simply don’t use it.
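As a rough illustration (this helper is my own sketch, not anything defined by the IEC standard itself), a formatter that applies the binary prefixes versus the metric ones might look like:

```python
# Sketch: format a byte count with the IEC binary prefixes (KiB, MiB,
# GiB, ...) described above, or with the metric SI prefixes.
def human_size(n_bytes: int, binary: bool = True) -> str:
    base = 1024 if binary else 1000
    units = ["B", "KiB", "MiB", "GiB", "TiB"] if binary else \
            ["B", "KB", "MB", "GB", "TB"]
    size = float(n_bytes)
    for unit in units:
        if size < base or unit == units[-1]:
            return f"{size:.2f} {unit}"
        size /= base

print(human_size(10**9))                # 953.67 MiB
print(human_size(10**9, binary=False))  # 1.00 GB
```

Note how the same billion bytes reads as a round metric Gigabyte but falls short of a full Gibibyte.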

Testing Methodology

Not every NAS device we test can accommodate every disk configuration, so our current test protocol is based on two of the most popular setups: a basic (single) disk and a RAID 5 array. Most NAS products that support RAID 5 go beyond the minimum number of drive bays to a total of four, so four is the number of drives I typically test with, even though I could get by with only three. During initial setup, I didn’t see any opportunity to upgrade to the latest firmware, and I didn’t see the option on the Thecus website, at least for the N2310. The firmware installed on the N2310 was OS6.build_677 when I received it, and it stayed that way throughout the testing. I downloaded the ThecusOS application, a browser-based monitoring and control application, from the dedicated installation page at http://install.thecus.com, and got version 1.01.08. It’s a relatively new version, because the latest user manual on the Thecus website references version 1.01.07.

I connected the NAS directly to an Intel X520-T2 10Gbps Ethernet NIC in the test-bench system, with a ten-foot CAT6 patch cable. I set up a static IP address on the host PC, consistent with the default address of the NAS unit, and we were in business.

Although I wasn’t expecting to get throughput results faster than GbE speeds, I kept the test bench PC configuration the same as it has been since I upgraded it for 10GbE testing. That means I was bypassing the SSD on the test rig and sending/receiving data from a RAM Disk. I’m using RAMDisk v3.5.1.130R22 from Dataram, based on performance tests in several reviews (we read ’em, too….) and its reasonable cost structure. I assigned a little more than 10GB of space to the RAM Disk, in order to replicate the test protocol I’ve been using for all my NAS testing. One other trick was necessary: to get the RAM Disk to transfer files larger than 2GB, I had to use the “Convert” utility in Windows to turn the RAM Disk into an NTFS volume.


For basic throughput evaluation, the NAS product received one test transfer followed by at least three timed transfers. Each test file was sent to the Western Digital Caviar Black 750GB (WD7502AAEX) hard drive installed in the NAS for a timed NAS write test, and that same file was sent back to the RAM Disk in the test system to perform a NAS read test. Each test was repeated several times, the high and low values were discarded and the average of the remaining results was recorded and charted.
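The recording step above — throw out the best and worst runs, then average what’s left — can be sketched like this (the timing values are hypothetical):

```python
# Sketch of the averaging protocol: discard the single highest and
# lowest results, then average the remaining runs.
def trimmed_average(results: list[float]) -> float:
    if len(results) < 3:
        raise ValueError("need at least three runs to trim high and low")
    kept = sorted(results)[1:-1]  # drop one low and one high value
    return sum(kept) / len(kept)

runs = [98.4, 101.2, 99.7, 110.5, 100.1]  # hypothetical seconds per transfer
print(f"{trimmed_average(runs):.2f} s")   # 100.33 s
```

Trimming like this keeps one anomalous run (a background task firing mid-transfer, say) from skewing the charted result.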

The Read and Write transfer tests were conducted on each NAS appliance using the 1 GB file and then a 10 GB file. A second set of tests was conducted with Jumbo Frames enabled, i.e. the MTU value for all Ethernet controllers on the network was increased from 1500 to 9000. Most of the NAS products tested to date in the Windows 7 environment have supported the Jumbo Frame configuration. Only the NETGEAR ReadyNAS NV+ v2 uses the 1500 MTU setting by default, and has no user-accessible controls to change that; you’ll see that reflected in the charts. I used a single GbE connection for all tests; there’s only one available on the N2310, and I have not achieved consistent results using the IEEE 802.3ad Link Aggregation Control Protocol (LACP) mode.
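A quick way to see why a larger MTU can help, under the simplifying assumption of full-size Ethernet II frames carrying plain IPv4/TCP with no options:

```python
# Rough payload efficiency per frame at standard vs. jumbo MTU.
# Assumes Ethernet II framing plus IPv4 and TCP headers with no
# options; real traffic varies, so treat these as ballpark figures.
ETH_OVERHEAD = 18    # Ethernet II header (14) + frame check sequence (4)
IP_TCP_HEADERS = 40  # IPv4 (20) + TCP (20), no options

def payload_efficiency(mtu: int) -> float:
    payload = mtu - IP_TCP_HEADERS
    return payload / (mtu + ETH_OVERHEAD)

for mtu in (1500, 9000):
    print(f"MTU {mtu}: {payload_efficiency(mtu):.1%} payload")
# MTU 1500: 96.2% payload
# MTU 9000: 99.4% payload
```

The per-frame gain looks modest, but fewer frames also means fewer interrupts and less per-packet processing on both ends, which is where Jumbo Frames usually earn their keep.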

I also ran the Intel NAS Performance Toolkit (NASPT) version 1.7.1, which was originally designed to run on a Windows XP client. People smarter than me have figured out how to run it under Windows 7, including the 64-bit version that is used more often than the 32-bit version these days. NASPT brings an important perspective to our test protocol, as it is designed to measure the performance of a NAS system as viewed from the end user’s perspective. Benchmarks like ATTO use Direct I/O Access to accurately measure disk performance with minimal influence from the OS and the host platform. This provides important, objective data that can be used to measure raw, physical performance. While it’s critical to measure the base performance, it’s also important to quantify what you can expect using real-world applications, and that’s exactly what NASPT does. One of the disadvantages of NASPT is that it is influenced by the amount of memory installed on the client, and it was designed for systems that had 2-4 GB of RAM. Consequently, two of the tests give unrealistic results, because they are measuring the speed of the buffer on the client, instead of the actual NAS performance. For that reason, we will ignore the results for “HD Video Record” and “File Copy to NAS”. I’m also not going to pay too much attention to the “Content Creation” test, as it is too heavily focused on computing tasks that aren’t really handled by the NAS.


Benchmark Reviews was also able to measure NAS performance using some tests that are traditionally used for internal drives. The ATTO Disk Benchmark program is free, and offers a comprehensive set of test variables to work with. In terms of disk performance, it measures interface transfer rates at various intervals for a user-specified length and then reports read and write speeds for these spot-tests. CrystalDiskMark 3.0 is a file transfer and operational bandwidth benchmark tool from Crystal Dew World that offers performance transfer speed results using sequential, 512KB random, and 4KB random samples. Benchmark Reviews uses CrystalDiskMark to illustrate operational IOPS performance with multiple threads, which allows us to determine operational bandwidth under heavy load.


The chart above, showing the typical results from an ATTO Disk Benchmark test run, highlights the one problem I had with the Thecus N2310 configuration. When I switched over to Jumbo Frames on the network controllers, the Write performance degraded immensely. You can see how small the red bars are in the graph here. With the standard MTU of 1500, the Read and Write performance were much closer to one another. I suspect this problem was caused by the NIC in my test bed PC, which uses an MTU of 9014 instead of the nominal value of 9000, which is what the Thecus uses. MTU mismatches are a no-no, and I don’t know why there isn’t better compliance to some standard for the Jumbo Frame settings. You’ll see the effects of this anomaly in the basic file transfer tests, because they show results for both 1500 and 9000 (nominal) MTU. I only published the results from the 1500 MTU configuration for the remainder of the benchmarks. This is clearly a situation you need to be aware of when setting up your own NAS and the rest of your network. At this point, my recommendation is to stick with an MTU value of 1500 for every device on the network.

We are continuing our NAS testing with the exclusive use of Windows 7 as the testing platform for the host system. The performance differences between Win7 and XP are huge, as we documented in our QNAP TS-259 Pro review. The adoption rate for Win 7 has been very high, and Benchmark Reviews has been using Win 7 in all of our other testing for some time now. It was definitely time to make the jump for NAS products.

NAS Comparison Products

Support Equipment

  • (1) Western Digital Caviar Black WD7502AAEX 750GB 7200 RPM 64MB Cache SATA 6.0Gb/s 3.5″
  • Intel E10G42BT, X520-T2, 10Gbps Ethernet NIC, PCIe 2.0 x8, 2x CAT6a
  • Dataram RAMDisk v3.5.1.130R22
  • Intel NAS Performance Toolkit (NASPT) version 1.7.1
  • ATTO Disk Benchmark v2.47
  • CrystalDiskMark 3.0
  • 10-Foot Category-6 Solid Copper Shielded Twisted Pair Patch Cable
  • 1 metric Gigabyte Test File (1 GB = 1,000,000,000 bytes)
  • 10 metric Gigabyte Test File (10 GB = 10,000,000,000 bytes)

Test System




  1. Wade Buskirk

I’m using my N2310 to host a personal web site, hold backups of household computers (ultrabooks) and host media to play on a network receiver.

My disappointment at this time is the lack of implementation of WOL and other power management features built into the SOC but apparently never utilized by Thecus. A power interruption causes problems with custom network configurations on top of the flash-based OS, and the unit has to be turned back on manually with a flesh-and-blood finger.

    1. Bruce Normann

      Yeah, it’s unusual that WOL would not be implemented if it is available in the hardware. Might be a good use for a UPS.
