QNAP TS-853 Pro 8-bay Intel Bay Trail SMB NAS Review
by Ganesh T S on December 29, 2014 7:30 AM EST
Introduction and Testbed Setup
QNAP has focused on Intel's Bay Trail platform for this generation of NAS units (compared to Synology's efforts with Intel Rangeley). While the choice made sense for the home user / prosumer-targeted TS-x51 series, we were a bit surprised to see the TS-x53 Pro series (targeting business users) also use the same Bay Trail platform. Having evaluated 8-bay solutions from Synology (the DS1815+) and Asustor (the AS7008T), we requested QNAP to send over their 8-bay solution, the TS-853 Pro-8G. Hardware-wise, the main difference between the three units lies in the host processor and the amount of RAM.
The specifications of our sample of the QNAP TS-853 Pro are provided in the table below:
| QNAP TS-853 Pro-8G Specifications | |
| --- | --- |
| Processor | Intel Celeron J1900 (4C/4T Silvermont x86 @ 2.0 GHz) |
| Drive Bays | 8x 3.5"/2.5" SATA II / III HDD / SSD (Hot-Swappable) |
| Network Links | 4x 1 GbE |
| External I/O Peripherals | 3x USB 3.0, 2x USB 2.0 |
| VGA / Display Out | HDMI (with HD Audio Bitstreaming) |
| Full Specifications Link | QNAP TS-853 Pro-8G Specifications |
Note that the $1195 price point is for the 8GB RAM version. The default 2 GB version retails for $986. The extra RAM is important if the end user wishes to take advantage of the unit as a VM host using the Virtualization Station package.
The TS-853 Pro runs Linux (kernel version 3.12.6). Other aspects of the platform can be gleaned by accessing the unit over SSH.
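For readers who want to verify these details themselves, a few standard Linux commands over SSH reveal the kernel, CPU, and memory (a minimal sketch; the exact output depends on the firmware version):

```shell
# Run on the NAS after logging in over SSH (e.g. ssh admin@<nas-ip>)
uname -sr                             # kernel name and version
grep -m1 "model name" /proc/cpuinfo   # host CPU model
grep MemTotal /proc/meminfo           # installed RAM
```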
Compared to the TS-451, we find that the host CPU is now a quad-core Celeron (J1900) instead of a dual-core one (J1800). The amount of RAM is doubled. However, the platform and setup impressions are otherwise similar to the TS-451. Hence, we won't go into those details in our review.
One of the main limitations of the TS-x51 units is that they can have only one virtual machine (VM) active at a time. The TS-x53 Pro relaxes that restriction and allows two simultaneous VMs. Between our review of the TS-x51 and this piece, QNAP introduced QvPC, a unique way to use the display output from the TS-x51 and TS-x53 Pro series. We will first take a look at the technology and how it shaped our evaluation strategy.
Beyond QvPC, we follow our standard NAS evaluation routine - benchmark numbers for both single and multi-client scenarios across a number of different client platforms as well as access protocols. We have a separate section devoted to the performance of the NAS with encrypted shared folders, as well as RAID operation parameters (rebuild durations and power consumption). Prior to all that, we will take a look at our testbed setup and testing methodology.
Testbed Setup and Testing Methodology
The QNAP TS-853 Pro can take up to 8 drives. Users can opt for either JBOD, RAID 0, RAID 1, RAID 5, RAID 6 or RAID 10 configurations. We expect typical usage to be with multiple volumes in a RAID-5 or RAID-6 disk group. However, to keep things consistent across different NAS units, we benchmarked a single RAID-5 volume across all disks. Eight Western Digital WD4000FYYZ RE drives were used as the test disks. Our testbed configuration is outlined below.
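As a point of reference for the configurations above, usable capacity varies considerably by RAID level; with eight 4 TB drives (the configuration benchmarked here), the rough numbers work out as follows:

```shell
# Approximate usable capacity with eight 4 TB drives (ignores filesystem overhead)
DRIVES=8; SIZE_TB=4
echo "RAID 0:  $(( DRIVES * SIZE_TB )) TB"          # striping, no redundancy
echo "RAID 5:  $(( (DRIVES - 1) * SIZE_TB )) TB"    # one drive's worth of parity
echo "RAID 6:  $(( (DRIVES - 2) * SIZE_TB )) TB"    # two drives' worth of parity
echo "RAID 10: $(( DRIVES * SIZE_TB / 2 )) TB"      # mirrored pairs
```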
| AnandTech NAS Testbed Configuration | |
| --- | --- |
| Motherboard | Asus Z9PE-D8 WS Dual LGA2011 SSI-EEB |
| CPU | 2 x Intel Xeon E5-2630L |
| Coolers | 2 x Dynatron R17 |
| Memory | G.Skill RipjawsZ F3-12800CL10Q2-64GBZL (8x8GB) CAS 10-10-10-30 |
| OS Drive | OCZ Technology Vertex 4 128GB |
| Secondary Drive | OCZ Technology Vertex 4 128GB |
| Tertiary Drive | OCZ Z-Drive R4 CM88 (1.6TB PCIe SSD) |
| Other Drives | 12 x OCZ Technology Vertex 4 64GB (Offline in the Host OS) |
| Network Cards | 6 x Intel ESA I-340 Quad-GbE Port Network Adapter |
| Chassis | SilverStoneTek Raven RV03 |
| PSU | SilverStoneTek Strider Plus Gold Evolution 850W |
| OS | Windows Server 2008 R2 |
| Network Switch | Netgear ProSafe GSM7352S-200 |
The above testbed runs 25 Windows 7 VMs simultaneously, each with a dedicated 1 Gbps network interface. This simulates a real-life workload of up to 25 clients for the NAS being evaluated. All the VMs connect to the network switch to which the NAS is also connected (with link aggregation, as applicable). The VMs generate the NAS traffic for performance evaluation.
We thank the following companies for helping us out with our NAS testbed:
- Thanks to Intel for the Xeon E5-2630L CPUs and the ESA I-340 quad port network adapters
- Thanks to Asus for the Z9PE-D8 WS dual LGA 2011 workstation motherboard
- Thanks to Dynatron for the R17 coolers
- Thanks to G.Skill for the RipjawsZ 64GB DDR3 DRAM kit
- Thanks to OCZ Technology for the two 128GB Vertex 4 SSDs, twelve 64GB Vertex 4 SSDs and the OCZ Z-Drive R4 CM88
- Thanks to SilverStone for the Raven RV03 chassis and the 850W Strider Gold Evolution PSU
- Thanks to Netgear for the ProSafe GSM7352S-200 L3 48-port Gigabit Switch with 10 GbE capabilities.
- Thanks to Western Digital for the eight WD RE hard drives (WD4000FYYZ) to use in the NAS under test.
Comments
lorribot - Monday, December 29, 2014 - link
Two things strike me. $210 for 8GB of RAM - how can anyone justify that? Even Apple isn't that expensive.
RAID 5, really? With 4TB SATA disks, if you are going to bother with redundancy then RAID 6, please. From painful experience, RAID 5 no longer cuts the mustard for protection given SATA's poor data verification and the huge rebuild time on a 4TB-based array. I really wouldn't bother; if your data is that important then you need to be backing up the changes or use a proper storage system.
Pro NAS boxes like these are overpriced for what they offer, which in reality is not a lot. As for running VMs off of it, I personally wouldn't bother.
Halve the price and offer some form of asynchronous replication and you may just be on to something.
As it is, one of HP's micro servers with a bunch of disks in it would offer better value.
mhaubr2 - Monday, December 29, 2014 - link
Seriously not trolling here - trying to better understand. Coming from the original Windows Home Server, its Drive Pool concept has me spoiled. I'm now using WHS2011 and Drive Bender, and it seems like the way to go. With pooled drives I can expand capacity easily using mix-and-match drives of different brands, sizes and vintages. This seems far less risky than using 3 or more identical drives in a RAID-5 or -6 array. I don't have to worry about getting a bad batch of drives or having a second (or third) drive fail on rebuild. This is how I see it, but I know there are plenty of folks out there who are proponents of RAID-x. I'm looking to build a new media server, so why should I consider a RAID setup over drive pooling?
PEJUman - Monday, December 29, 2014 - link
I actually have the same thought process as you, but my mindset was shaped by a single family file server's demands, where a single drive with duplication would be sufficient in terms of performance/reliability. RAID arrays allow much higher theoretical performance than Drive Bender's, not to mention better-than-N/2 efficiency for single-disk failure tolerance.
I personally like Drive Bender's solution for my needs, but would not use it for business-oriented needs: 100% uptime, high performance and a multi-disk failure tolerant setup.
DanNeely - Tuesday, December 30, 2014 - link
Between long rebuild times and the risk of a URE bringing down the array, RAID10 (or its equivalents) has largely replaced RAID5/6 in larger arrays and SANs.
DanNeely - Tuesday, December 30, 2014 - link
FWIW I'm running WHS2011, but with DrivePool instead. Quite happy with it so far, but it's only 16 months until end of life; and with the WHS series seemingly dead as well, I've been paying closer attention to the rest of the NAS world hoping to find a suitable replacement. So far without much luck.
ZFS seems like the closest option; but unless I've missed something (or newer features have been added since the blogs/etc. that I've read), options for expanding are limited to: swapping out all the drives one at a time for larger ones, rebuilding each time, and only getting more usable space after all the drives have been replaced; or adding a minimum of two drives (in a RAID 1 analog) as a separate sub-array.
Aside from Drobo - which has recovery issues due to its proprietary FS (no option to pull drives and stick them into a normal PC to get data off if it goes down) and is reported to slow down severely as it fills to near capacity - I'm not aware of anything else on the market that would allow for creating and expanding a mirrored storage pool out of mismatched disks the way WHSv1 did, or WHS2011 does with 3rd-party disk management addons.
Brett Howse - Tuesday, December 30, 2014 - link
If you are happy with WHS 2011 (that's what I run too), you may want to check out Storage Spaces in Windows 8/8.1 and Server 2012/2012 R2.
It's like WHS v1's Drive Extender, but done right. You can do mirror or parity to one or more drives, as well as mix and match the drives, including SSDs for different speed tiers. Might be worth your time to check out.
Because this is all available on Windows 8.1, you can do it for a low cost compared to buying Windows Server. What you'd lose, though (and this is why I haven't moved off WHS yet), is the amazing full device backup that WHS offers. This is only available in Windows Server Essentials as far as I know, which is a big licensing fee compared to what WHS used to retail for.
Gigaplex - Wednesday, December 31, 2014 - link
It's not done right. If you have a parity pool and want to add one more drive later - well, you can't. If you started with 3 drives, the only way to expand is to add 3 drives at a time.
jabber - Tuesday, December 30, 2014 - link
Why do folks keep bleating on about RAID5? It's been classed as obsolete for nearly 5 years.
Move on, folks.
fackamato - Friday, January 2, 2015 - link
Because it's still applicable for small drives, e.g. SSDs or sub-2TB.
chocosmith - Tuesday, December 30, 2014 - link
I have the TS-453 Pro. As a NAS it's great, but I also got it for the HDMI so I could kill two birds with one stone and use it as a media box.
Unfortunately, there is a huge amount of video tearing, and the power supply fan is too loud for it to hang near the TV. Overall, if I were doing it again, I'd simply get a Celeron chip and a small case and build it myself; I'd also probably use Windows.
Also, as others noted about the RAID setup: after failing a RAID 1 during a rebuild, I now simply use no RAID. One disk can flood a 1Gb LAN, so speed isn't an issue.
Instead I just have the two disks, one shared and the other not. At 2am every morning I copy the changed files to the other. This also gives me some "oops, I deleted something" breathing space. I don't need critical RAID.
My primary is an SSD; it's also used for torrents and other chatty stuff.
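The nightly "copy changed files" approach described in the comment above could be sketched as a simple update-copy (illustrated here with temp directories; on a NAS the two paths would be the shared and unshared data disks, typically driven by a cron entry):

```shell
# One-way "copy changed files" pass, as a stand-in for a 2am cron job.
SRC=$(mktemp -d) && DST=$(mktemp -d)
echo "holiday photos" > "$SRC/photos.txt"
cp -au "$SRC/." "$DST/"     # -a: archive copy; -u: only copy files newer than the target's
ls "$DST"                   # photos.txt
```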