# General Business Category > Technology Forum > Hybrid Hard Drives

## IanF

Hi, I am contemplating replacing the hard drives in our desktop PCs with hybrid hard drives, to see if this makes it faster to start up and switch between programmes.
Has anyone done this and did it make a noticeable difference?

----------


## SilverNodashi

It does help a bit, but it all depends on what you do. Generally, the most-used data gets moved to the SSD cache from the slower SATA portion. You'll see a difference in Windows boot-up, accessing local files (i.e. opening "My Documents", etc.) and using applications with databases, like Pastel / QuickBooks / Outlook / etc.

How much of an improvement you'll see depends a lot on other factors as well, like the CPU type (Core ix class CPUs access the SATA bus differently from non Core ix class CPUs), the SATA bus type, and in some (rare) cases the SATA cables. If you have a newer PC, make sure to use the SATA300 or SATA600 ports (if it has some), and put in new SATA cables if needed as well.

----------


## irneb

Rudi's correct. Hybrid helps. It should do well if your programs aren't huge. As soon as the solid-state cache is used up, there's no difference between a spindle disc and the hybrid - at that point the hybrid becomes just another spindle.

So if your programs aren't larger than the SSD portion it should work nearly as well as a full SSD disc. Though be warned, I've seen some complaints about Seagate's Hybrids failing more than usual - so don't use them as data discs.

Personally, I much prefer a setup where I have my OS & programs on a full SSD and my data on a normal spindle (HDD). Speed-wise compared to cost I find this the most economical. W7 boots up (even after 2 years of use) within 10 sec, Linux in under 5 - on the HDD in my laptop W7 takes near 40 sec. The difference with programs is even bigger - launch is near immediate from the SSD, but you sit and wait for the W7 "circle" on everything from the HDD (even just Notepad takes a few seconds).

----------


## IanF

OK, I have ordered some drives and will clone the hard drives and see. We do store our artwork on the server, but not the data files for email, quoting, etc.

----------


## SilverNodashi

Dave, you'll see SSD failure much quicker in this kind of setup, because the SSDs now take far more writes than they would if they were used for plain storage. SSD technology is flash based, and flash disks generally have limited write cycles - something like 100,000 writes per block. Since the SSD is used as a cache and you now read / write a lot of data on it, it gets written much more frequently.
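To get a feel for what limited write cycles mean in practice, here's a rough back-of-envelope sketch. The capacity, cycle count, daily write volume and write-amplification factor below are made-up illustrative numbers, not the specs of any particular drive:

```python
# Back-of-envelope SSD endurance estimate (hypothetical numbers, not a spec).

def ssd_lifetime_years(capacity_gb, pe_cycles, daily_writes_gb,
                       write_amplification=2.0):
    """Rough years until the flash wears out, assuming perfect wear levelling.

    write_amplification: flash writes per logical write (controllers rewrite
    whole blocks, so the flash sees more writes than the host sends).
    """
    total_writable_gb = capacity_gb * pe_cycles          # raw endurance budget
    effective_daily_gb = daily_writes_gb * write_amplification
    return total_writable_gb / effective_daily_gb / 365

# An 8GB cache portion rated at 5,000 P/E cycles, absorbing 20GB/day of cache
# traffic, wears out in roughly 2.7 years:
years = ssd_lifetime_years(8, 5000, 20)
```

The point of the exercise: the same flash used as plain storage would see far fewer writes per day, so the cache role is what eats the endurance budget.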

IanF, another option, if your PC / laptop supports it, is to buy a USB 3 flash disk. 8GB or 16GB might be enough, but if you get a larger one it will last longer, for the same reason as with the SSD cache above. Get the fastest one you can afford - typically an "SDHC" or "SDXC" type card with the "UHS-I" or "UHS-II" specification. You can configure Windows to "speed up" the PC with these (the ReadyBoost feature) and you'll see somewhat of a boost as well.

----------


## irneb

There's a bit of contention over whether SSD has a "long enough" life compared to HDD. But I've seen some tests a few years ago which suggest that current SSDs actually handle more write cycles than the magnetic coating on the HDDs' platters. Rudi's making a good point though - for cache it's probably not so great - hundreds of times more reads/writes than a normal disc. I think you might even be better served by adding more RAM and setting your server to use it as cache.


> Ok I have ordered some drives and will clone the hard drives and see. We do store our artwork on the server but not the data files for email quoting etc.


That would actually make me think thrice about hybrids. I'd prefer a striped raid with parity (e.g. Raid 5), or even better a software raid like RaidZ. For servers the main issue is concurrency; with striping/mirroring you split the data across several discs - in effect amalgamating their speed (striping helps read+write; mirroring is even better for read, not so good for write). Not to mention you get the added bonus of a bit of protection against HDD failure, and the ability to extend your data disc without needing too many setting changes.
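To illustrate the parity idea behind Raid 5 / RaidZ, here's a toy byte-level sketch (not a real raid driver - the "disc" contents are just made-up strings): XOR-ing the data blocks together gives a parity block, and XOR-ing the survivors with the parity rebuilds any single lost block.

```python
from functools import reduce

def parity(blocks):
    """XOR the corresponding bytes of each block together."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# Three "discs" each holding one stripe, plus the computed parity block:
disc1, disc2, disc3 = b"ARTW", b"ORK_", b"DATA"
p = parity([disc1, disc2, disc3])

# Disc 2 dies: XOR-ing the survivors with the parity rebuilds its data,
# because d1 ^ d3 ^ (d1 ^ d2 ^ d3) == d2.
rebuilt = parity([disc1, disc3, p])
assert rebuilt == disc2
```

This is also why only one parity "disc" worth of space is lost per stripe, while any single member can fail.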

Even a Raid 0 (i.e. no error correction, fully striped) might perform faster than a hybrid in a server. This is because the hybrid's SSD cache is going to see lots of cache misses, i.e. it needs to reload data from the HDD portion for each new user. This would be exacerbated if the files are large - you also mention artwork (these files tend to be several megs - ours can run from 20MB all the way to 2GB easily without even trying). That type of scenario usually makes caching useless.

Edit: of course if you're going to place the hybrids in Raid anyway - then it's probably a slight bit faster. Though again, dependent on size issues.

----------


## IanF

The drives are going into the desktops not the server. 
For the server I use unRAID, which makes a parity master disk, plus there is 8GB of RAM in the server. This works well so far. In December I replaced all the drives in the server. It took a few days: you first replace the parity drive and it rebuilds that, then the data drives one at a time and it rebuilds those.
Anyway, I am waiting for delivery and will keep the thread going.

----------


## irneb

Then for desktops, I think you should be fine. At least in general.

Just did a bit of research on unRAID. Yep, those are the same reasons I prefer RaidZ on a ZFS file system on a Solaris/BSD OS over any hardware raid cards: no need to keep all hardware the same, no need for exact disc matching. I'm waiting for Linux's BTRFS to get out of beta testing before I'll attempt it there - using ZFS on Linux is not a good idea for production.

The only difference I can see between unRAID and RaidZ is that unRAID never splits a file across multiple drives (in that respect it works similarly to AuFS on Linux), while RaidZ behaves more like a hardware system, putting pieces of the file across several discs.

At least that's my understanding from their technical explanations. If that's so, then simply cloning the old disc onto its replacement should have given you the same result as the rebuilding technique - only much faster. Otherwise it's the same as with RaidZ - slow rebuilds of a crashed / removed disc using the parity data. Though you can choose to include mirroring as well, so 2 discs are exact copies of each other at all times (not the default, but possible): pro - replacing is now a lot faster; con - less overall space using the same number of discs.

----------


## IanF

Irneb, I bumble my way through this, so I am no expert.
With unRAID your parity disk must be equal to or larger than your largest data disk. My server, an HP MicroServer, only has 4 HDD bays, so there is not enough physical space to mirror the drives as well. Also, to have more than 2 data drives you need to buy a licence.

----------


## irneb

> Irneb, I bumble my way through this, so I am no expert.
> With unRAID your parity disk must be equal to or larger than your largest data disk. My server, an HP MicroServer, only has 4 HDD bays, so there is not enough physical space to mirror the drives as well. Also, to have more than 2 data drives you need to buy a licence.


Ouch. Yep, that is a problem. Not too sure about RaidZ's parity requirements, but there are other problems too.

ZFS uses something strange: one or more discs make up a vdev (virtual device), which can have any of the Raid-Z settings applied to the entire group. This is basically the "raid" portion. Over this it creates what's known as a zpool, which combines 1 (or more) vdevs and which the OS then sees as a single drive. You can add or remove discs in a vdev, but its capacity will never grow beyond its smallest member - so to increase capacity you need to add an extra vdev to the zpool.

Depending on the RaidZ level, a vdev can survive and reconstruct failed discs: Z1 handles one failed disc (same as Raid 5), Z2 allows reconstruction of 2 discs, Z3 of 3. A vdev can also mirror across 2 to 11 discs. Note, though, that the zpool will NOT survive a vdev failing entirely (e.g. if vdev #2 is set to Z1 and loses 2 discs at once, the whole pool is lost, not just that vdev). People usually put similar-sized discs into one vdev, then when expanding create a new vdev and add it to the pool - so you "upgrade" in batches (this is one point I don't like too much about ZFS, and why I want to try BTRFS).
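Those sizing rules can be sketched as a little capacity model (the disc sizes and layouts below are hypothetical, and real ZFS also reserves some metadata/slop space, which this ignores):

```python
# Rough usable-capacity model for a zpool built from raidz vdevs.

def vdev_usable_tb(disc_sizes_tb, parity_discs):
    """Every disc in a vdev counts as its smallest member;
    parity_discs = 1 for Z1, 2 for Z2, 3 for Z3."""
    n = len(disc_sizes_tb)
    return (n - parity_discs) * min(disc_sizes_tb)

def zpool_usable_tb(vdevs):
    """The pool simply sums its vdevs' usable space."""
    return sum(vdev_usable_tb(sizes, p) for sizes, p in vdevs)

# Sticking a 4TB disc into a vdev of 2TB discs buys you nothing extra:
print(vdev_usable_tb([2, 2, 4], 1))   # → 4 (still capped by the 2TB members)

# The "upgrade in batches" pattern: one Z1 vdev of 4x2TB, later a second
# Z1 vdev of 4x4TB added to the same pool:
print(zpool_usable_tb([([2, 2, 2, 2], 1), ([4, 4, 4, 4], 1)]))  # → 18
```

This is why expanding by whole vdevs is the usual practice: within a vdev the smallest disc sets the ceiling.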

What I'm not sure of is the ratio for the parity disc, but I guess it's the same as the largest disc.

So I'm guessing ZFS is not very well suited to your case - you'll only start seeing real benefits with 4 or more discs. ZFS is a bit useless as a recovery system on fewer than 2 discs; prefer 3 or more. Also, 8GB RAM is the recommended minimum when using ZFS. And extremely important: your RAM needs to be ECC certified - many have seen huge issues with normal commercial RAM corrupting ZFS's caches, which it then cannot fix. So ZFS is only bullet-proof if you have lots of very expensive RAM. See why I want something else?

An alternative poor-man's solution would be to use a Linux server, format all your discs to Ext4 (or whatever FS you prefer, but Ext4 is reasonable) and then mount them as a "unified" disc using AuFS. This would be similar to Raid 0, but each disc is still separate as well - if you remove one (or one fails) the unified disc will still work, but that disc's data will no longer be there. Also, since each disc is still a "normal" disc, you can simply clone it like any other if you want to replace it (you can plug it into another PC and it will have its data as normal).

I use this for my media server at home, since I don't worry too much if I lose a few movies / songs. I'd not recommend it for business-critical data though - there's no error avoidance built in with AuFS beyond the discs' own journaling (or whatever). But I much prefer this method over Windows' "Extending Volumes", which makes the discs unusable individually (i.e. an Extended Volume is a software Raid 0).

AuFS has some settings for how it arranges files across the multiple discs - the default is fill-from-first, or otherwise save to the disc with the most free space, but a possibly better option is round-robin (each consecutive file being saved to a different disc). It also has a setting to check all discs' contents when listing files - usually turned off for performance, but a good idea if someone's going to work directly on one of the discs (otherwise AuFS has a refresh feature) instead of through the unified mount.
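To make the placement policies concrete, here's a toy simulation of two of them - fill-from-first and round-robin. This is just an illustration of the idea, not AuFS itself, and the branch paths and file names are made up:

```python
from itertools import cycle

def place_round_robin(files, branches):
    """Round-robin policy: each consecutive file goes to the next disc."""
    rr = cycle(branches)
    return {f: next(rr) for f in files}

def place_fill_first(files, branches, capacity_gb, size_gb):
    """Fill-from-first policy: use the first branch with enough space."""
    used = {b: 0 for b in branches}
    placement = {}
    for f in files:
        for b in branches:
            if used[b] + size_gb[f] <= capacity_gb[b]:
                placement[f] = b
                used[b] += size_gb[f]
                break
    return placement

files = ["a.jpg", "b.jpg", "c.jpg", "d.jpg"]
branches = ["/mnt/disc1", "/mnt/disc2", "/mnt/disc3"]

plan = place_round_robin(files, branches)
# a.jpg → /mnt/disc1, b.jpg → /mnt/disc2, c.jpg → /mnt/disc3,
# d.jpg wraps back to /mnt/disc1
```

Round-robin spreads concurrent reads across spindles, which is why it can behave a bit like striping for many small files even though no single file is ever split.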

To put it into Windows parlance: It would be like adding a drive X: which is simply combining drives M:, N:, O:, etc. while they're each also still available on their own. Actually, it can be even just folders on those drives. E.g. X: mapping to M:\MyFiles added to N:\SomeOtherFiles added to O:\YetAnotherFolderOfFiles.

----------


## IanF

I tried the Ubuntu server option, but didn't have a spare DVD drive when I set it up; I eventually tried unRAID and it worked really easily for me.
Still waiting for the disks.

----------


## SilverNodashi

You can definitely use ZFS with 1, 2 or 3 drives, but you see more benefits with 4 drives+.
With ZFS you need 1GB RAM per TB of storage.
You can use a small ZIL (write-intent log) drive, and it can even be on USB 3.

----------


## irneb

> You can definitely use ZFS with 1, 2 or 3 drives, but you see more benefits with 4 drives+.
> With ZFS you need 1GB RAM per TB of storage.
> You can use a small ZIL (write-intent log) drive, and it can even be on USB 3.


+1 ... what I meant to say is that ZFS would only help Ian if he had 4 or more drives (over what he already has with UnRaid).

The 1GB/TB rule is good, yes, but the absolute minimum RAM is 4GB, and 8GB is recommended. This is due to ZFS's in-RAM caching and logging. And if you turn on ZFS's de-duplication (i.e. it never saves duplicate blocks of data more than once, even between different files) it gets even worse, since this is also calculated in RAM. But the main issue is that you absolutely MUST use ECC-compliant RAM chips, so the off-the-shelf stuff is not good enough (it gets worse at higher altitudes due to more cosmic radiation causing bit-flips in RAM). http://hardforum.com/showthread.php?t=1689724
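Putting those rules of thumb into numbers (the dedup overhead of roughly 5GB RAM per TB of deduplicated data is a commonly quoted community figure, not an official spec):

```python
# ZFS RAM sizing rules of thumb, as quoted above - illustrative only.

def zfs_ram_gb(pool_tb, dedup_tb=0, minimum_gb=4):
    """1GB RAM per TB of pool, with a 4GB floor, plus ~5GB per TB
    of deduplicated data if dedup is enabled."""
    return max(minimum_gb, pool_tb) + dedup_tb * 5

print(zfs_ram_gb(6))              # 6TB pool, no dedup → 6
print(zfs_ram_gb(2))              # small pool still needs the 4GB floor → 4
print(zfs_ram_gb(4, dedup_tb=4))  # dedup across the whole 4TB → 24
```

Which is why dedup is usually the first thing to leave off on a modest home box.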

I've tested this for about a year on a normal PC which I converted to a server, and I've found this to be excellent advice. It has 16GB of "normal" RAM, and I found at least one corrupt file per week - the only fix was to copy from backup; ZFS's scrub didn't help at all, it didn't even "know" that the file had got corrupted. The first 2 months were on Linux with the separate ZfsOnLinux install, then I used FreeBSD instead - thinking that was why the corruptions happened. But no, even with FreeBSD and the native ZFS driver I still got the same corruptions. So it was definitely the non-ECC RAM combined with ZFS's sensitivity to RAM bit-flips. I've had the Ext4 + AuFS system since last November on the exact same PC, and have yet to find any file corruption - that file system is less affected by RAM problems.

----------


## irneb

> But the main issue is that you absolutely MUST use ECC compliant RAM chips


Here's an example of what I mean:
4GB DDR3-1600 RAM from the same supplier & manufacturer: the commercial module is R600, the ECC server edition R850.
Normal desktop RAM (4GB / 8GB)
http://www.comx-computers.co.za/JM16...uy-p-65967.php
http://www.comx-computers.co.za/JM16...uy-p-86804.php

ECC Ram meant for servers:
http://www.comx-computers.co.za/TS51...uy-p-87527.php
http://www.comx-computers.co.za/TS1G...uy-p-66890.php

See the price jump? So instead of me spending around R2100 for a 16GB machine's RAM, I'd have spent closer to R3700. And that, just because I'm using ZFS instead of nearly any other FS.

----------



## SilverNodashi

> +1 ... what I meant to say is that ZFS would only help Ian if he had 4 or more drives (over what he already has with UnRaid).
> 
> The 1GB/TB rule is good, yes, but the absolute minimum RAM is 4GB, and 8GB is recommended. This is due to ZFS's in-RAM caching and logging. And if you turn on ZFS's de-duplication (i.e. it never saves duplicate blocks of data more than once, even between different files) it gets even worse, since this is also calculated in RAM. But the main issue is that you absolutely MUST use ECC-compliant RAM chips, so the off-the-shelf stuff is not good enough (it gets worse at higher altitudes due to more cosmic radiation causing bit-flips in RAM). http://hardforum.com/showthread.php?t=1689724
> 
> I've tested this for about a year on a normal PC which I converted to a server, and I've found this to be excellent advice. It has 16GB of "normal" RAM, and I found at least one corrupt file per week - the only fix was to copy from backup; ZFS's scrub didn't help at all, it didn't even "know" that the file had got corrupted. The first 2 months were on Linux with the separate ZfsOnLinux install, then I used FreeBSD instead - thinking that was why the corruptions happened. But no, even with FreeBSD and the native ZFS driver I still got the same corruptions. So it was definitely the non-ECC RAM combined with ZFS's sensitivity to RAM bit-flips. I've had the Ext4 + AuFS system since last November on the exact same PC, and have yet to find any file corruption - that file system is less affected by RAM problems.


Interestingly, I haven't picked up this behaviour. My home server runs FreeNAS on a normal desktop PC with 8 HDDs and normal DDR3 RAM, and yet I haven't picked up anything like that. It's filled with photos, music, videos and saved data from both my wife's and my own laptops and tablet. Our production servers all run ECC RAM, so I can't vouch for them though.

But, as far as I know, you don't absolutely *need* ECC RAM for ZFS, same as you don't absolutely need a ZIL drive or cache drive, but it's definitely recommended for optimal performance.

----------


## irneb

> It's filled with photos, music, videos and saved data from both my wife and my own laptops and tablet.


That's why. The issue usually only arises when saving files. You're probably not saving and re-saving and editing and re-saving those JPGs/MP3s/AVIs, are you? I'm with you on using FreeNAS for such a media server - easiest thing in the world, and ZFS is awesome (especially with this type of data). I've used it as such too, and loved it - used FreeNAS for exactly the same purpose. Went so far as doing the same for my "working file server" too, though I installed FreeBSD instead (i.e. the full OS, unlike the minimalist FreeNAS) - which is when I started noticing the problem. And then started doing research on how to fix it - only solution: ECC RAM.

It's just that when you want to use ZFS in a production server where writes are going to happen on a near per-second basis, you should very seriously consider ECC RAM; otherwise it's not a huge problem, since the corruption only occurs very rarely. Yes, not a "necessity" ... same as working brakes aren't "necessary" to be able to drive your car.

----------


## IanF

OK I got the drives, now just to find a quiet few hours to clone and install the new drives. Probably over the weekend would be best.

----------


## SilverNodashi

> That's why. The issue usually only arises when saving files. You're probably not saving and re-saving and editing and re-saving those JPGs/MP3s/AVIs, are you? I'm with you on using FreeNAS for such a media server - easiest thing in the world, and ZFS is awesome (especially with this type of data). I've used it as such too, and loved it - used FreeNAS for exactly the same purpose. Went so far as doing the same for my "working file server" too, though I installed FreeBSD instead (i.e. the full OS, unlike the minimalist FreeNAS) - which is when I started noticing the problem. And then started doing research on how to fix it - only solution: ECC RAM.
> 
> It's just that when you want to use ZFS in a production server where writes are going to happen on a near per-second basis, you should very seriously consider ECC RAM; otherwise it's not a huge problem, since the corruption only occurs very rarely. Yes, not a "necessity" ... same as working brakes aren't "necessary" to be able to drive your car.


We take a fair amount of personal photos and videos and save them to the drive on a daily basis. My wife also does some professional photography and saves / edits / saves clients' stuff on there on a regular basis.
Sure, it's not the same as a production server, but it's similar to a small office file server.

----------


## irneb

> OK I got the drives, now just to find a quiet few hours to clone and install the new drives. Probably over the weekend would be best.


Great, let us know how it goes. Which brand did you get? Seagate / WD / other?

Sorry for the off-topic about file systems though.




> We take a fair amount of personal photos and videos and save them to the drive on a daily basis. My wife also does some professional photography and saves / edits / saves clients' stuff on there on a regular basis.
> Sure, it's not the same as a production server, but it's similar to a small office file server.


Then I don't know - I probably just have some bad commercial RAM. Or a bad spot where the server sits. Or bad power. It is on 24/7 though - so that might also be a factor.

----------


## IanF

I got Seagate Desktop SSHD 1TB drives.
Don't worry about the off topic file systems that is how we learn.

----------


## irneb

> I got Seagate Desktop SSHD 1TB drives.


Great - personally I like Seagate ... always have. From my own experience they're the drives with the least trouble. Though as I've mentioned before, I've seen some tests which actually show the opposite: Seagates having the highest failure rate in data servers, then WD, with Toshibas the best of the lot.

The Toshiba I do have a sample of, which seems to corroborate the test: I had a very old Iomega external 80GB drive (around 9 years old). The casing & USB circuitry demolished itself after falling off a cupboard, so I've stuck the 2.5" Toshiba disc into a SATA port - it still works perfectly. I've got another 250GB Toshiba (around 6 years old), which is the one I replaced recently with a new 3TB - it's now plugged into an Iomega iConnect together with the 80GB, using some self-frankensteined SATA-to-USB converters; both are still running fine. But even though I've had 100% reliability with these, I don't consider 2 discs statistically significant.

Some WDs I've had previously have all failed, and of the 20 or so Seagates I've had since the 90s, only 1 has ever failed on me (though that was around 2 years ago - a 2TB 3.5" Green Barracuda); the rest are still running, or in a box (the oldest still in use is a 5-year-old 500GB 2.5").




> Don't worry about the off topic file systems that is how we learn.


Thanks! BTW, since you're using it: how easy is it to set up unRAID using several different-sized discs, and later swapping in / adding new larger discs? With ZFS I had lots of learning to do - a vdev can't be "grown" after first creation (even if you add new larger discs it only uses the original size); the only way is to add new vdevs to the zpool. To me that sounded a bit convoluted.

----------


## IanF

With unRAID, as long as the parity drive is at least the size of the largest data drive, it's easy to put in a new, bigger hard drive. I replaced 500GB drives with 1TB drives; after the rebuild there was more space. It just took a while to choose the right options to get the rebuild going.
Don't know about ZFS though.

----------


## irneb

ZFS's ZIL (ZFS Intent Log) is usually a partition it creates on one of the drives in the vdev. You can set it to a different disc - some people put it on an SSD, because it is written to for every single action, which makes things faster. Apparently it's not fixed according to the drives inside the vdev - it's actually a dynamically expanding volume. Its size is governed by the RAM cache, the bandwidth on the network, the block size of the files (note ZFS has varying block sizes per file, not per disc as nearly all other FSs have), etc.


> Don't know about ZFS though.


No, it's got a bit of an issue, especially if you don't have lots of free SATA ports. See the answers on this exact question: http://superuser.com/questions/62224...nt-size-drives

ZFS doesn't like having varying-sized discs inside a single vdev, especially if you turn on mirroring, in which case the vdev's size is only that of the smallest disc in the batch. With only striping + log it's all right, but once the vdev is created, adding another disc to it will not increase the total size. That is what the zpool is for: you add the new disc into a new vdev, then add that into the zpool.

FreeNAS (i.e. the plug-n-play OS which uses ZFS) does the striping idea by default. Since ZFS's ZIL by default forms part of a partition inside each vdev, if you only have one disc inside the vdev, part of it is used for the ZIL. FreeNAS's default is to create a new vdev for each new HDD - you can change this if you wish (even through its web interface), but this way is the simplest for extending the raid's capacity (not the most robust though).

From most of my research, it seems the "best practice" method is to keep similar-sized discs in a vdev, then pool the different-sized vdevs together. I.e. when getting new disc(s) you'd need to either add them to a vdev with similar-sized discs, or create a new vdev and add that to the zpool as well.

----------


## IanF

Here is a screen shot of the server control panel.
I can't see any options for ZFS.

I looked in the other tabs and there is nothing there, you probably have to open a terminal window.

----------


## irneb

> I looked in the other tabs and there is nothing there, you probably have to open a terminal window.


No, I don't think unRAID has ZFS built in. unRAID is based on Linux (not Solaris / BSD). Unfortunately there's a licence incompatibility between ZFS and Linux, so there's no native ZFS for Linux - there's only ZFS through FUSE, or the separately installable ZFSonLinux, which re-implements ZFS as a native file system.

From unRAID's FAQs it seems it uses ReiserFS. That was the first journalling FS for Linux, when ext2 was still the norm. Then ext3 included journalling, but did so much more slowly than Reiser - it was like an add-on to the FS. Now with ext4 they're very close performance-wise. It's not a bad FS at all, just a bit older than ZFS - though that means it's had more time to work out any bugs.

ZFS (and BTRFS) meanwhile use a technique which I think is the best idea in FSs since forever: Copy-on-Write. This places new data in a new empty space, and only when finished points the file handle to the new data and releases the old. Thus even with a power failure during a save, the worst that happens is you lose the new data (the old stuff is still intact); with overwriting FSs a power failure means the file WILL be corrupt, and you probably won't be able to recover any useful data.
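You can mimic the copy-on-write idea at application level too - a minimal sketch of write-new-then-swap-the-handle, using a plain temp file and an atomic rename (the same principle on a much smaller scale than a CoW file system):

```python
import os
import tempfile

def cow_save(path, data: bytes):
    """Write `data` to a fresh temp file, then atomically swap it into place.
    A crash mid-save leaves the old file untouched."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory)   # new data goes to empty space
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())                # make sure the new blocks hit disc
        os.replace(tmp, path)                   # atomic "pointer" swap
    except BaseException:
        os.remove(tmp)                          # clean up the half-written temp
        raise
```

After `cow_save(p, b"new")` either the old contents or the complete new contents exist at `p` - never a half-written mix, which is exactly the guarantee the post describes.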

You might want to look through this: http://en.wikipedia.org/wiki/Comparison_of_file_systems

Note the ZFS I'm referring to is the one made in 2004 by Sun Microsystems for their Solaris Unix (not the zFS by IBM in 2001).

----------


## IanF

I chose unRAID as it was made to be installed on a USB stick, and it was easier than the other systems I tried.
Thanks for the research, looks like a good choice.

----------


## IanF

I installed the first hard drive on Friday; it took about 90 mins.
I used Seagate DiscWizard to clone the drive and it worked well.
The only problem was with my MIS system: I found out they use the drive serial number and a few other things to work out the user number, so the MIS wouldn't work until they updated the details in their registration module.
I must find the time to change from this system.

----------

