Hi, I am contemplating putting hybrid hard drives in place of the hard drives in our desktop PCs, to see if this makes it faster to switch between programmes and to start up.
Has anyone done this and did it make a noticeable difference?
It does help a bit, but it all depends on what you do. Generally, the most-used data gets moved to the SSD cache from the slower SATA portion. You'll see a difference in Windows boot-up, accessing local files (e.g. opening "My Documents") and using applications with databases, like Pastel, QuickBooks, Outlook, etc.
How much of an improvement you'll see depends a lot on other factors as well, like the CPU type (Core ix CPUs access the SATA bus differently from non-Core ix CPUs), the SATA bus type, and in some (rare) cases the SATA cables. If you have newer PCs, make sure to use the SATA300 or SATA600 ports (if they have some), and put in new SATA cables if needed.
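To picture why the gains depend on what you do, here's a rough Python sketch of the idea - a small fast cache in front of a slow disc, with made-up timings and sizes (purely illustrative, not how the drive firmware actually works):

```python
from collections import OrderedDict

# Purely illustrative model of a hybrid drive: a small fast cache (the
# SSD portion) in front of a large slow store (the spindle portion).
# Timings are made-up round numbers, not real drive figures.
SSD_READ_MS, HDD_READ_MS = 0.1, 10.0

class HybridDrive:
    def __init__(self, cache_blocks):
        self.cache = OrderedDict()          # block -> True, kept in LRU order
        self.cache_blocks = cache_blocks

    def read(self, block):
        if block in self.cache:             # cache hit: served from flash
            self.cache.move_to_end(block)
            return SSD_READ_MS
        # cache miss: read from the spindle, then promote into the cache
        if len(self.cache) >= self.cache_blocks:
            self.cache.popitem(last=False)  # evict least-recently-used block
        self.cache[block] = True
        return HDD_READ_MS

def total_time(drive, blocks):
    return sum(drive.read(b) for b in blocks)

# A working set that fits in the cache gets fast after the first pass...
drive = HybridDrive(cache_blocks=100)
print("fits in cache:  %.0f ms" % total_time(drive, list(range(50)) * 20))

# ...but a working set bigger than the cache keeps missing.
print("exceeds cache:  %.0f ms" % total_time(drive, list(range(500)) * 2))
```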
Rudi's correct. Hybrid helps, and it should do well if your programs aren't huge. As soon as the solid-state cache is used up there's no difference between a spindle disc and the hybrid - at that point the hybrid is just another spindle.
So if your programs aren't larger than the SSD portion, it should work nearly as well as a full SSD. Be warned though, I've seen some complaints about Seagate's hybrids failing more than usual - so don't use them as data discs.
Personally, I much prefer a setup where I have my OS and programs on a full SSD and my data on a normal spindle (HDD). Speed relative to cost, I find this the most economical. W7 boots up (even after 2 years of use) within 10 seconds, Linux in under 5 - on the HDD in my laptop W7 takes nearly 40 seconds. Program launches show it even more: near-immediate from the SSD, but from the HDD you sit and watch the W7 "circle" on everything (even Notepad takes a few seconds).
Ok I have ordered some drives and will clone the hard drives and see. We do store our artwork on the server but not the data files for email quoting etc.
Dave, you'll see SSD failure much quicker in this kind of setup, because the SSDs now get far more writes than they would if they were only used for storage. SSD technology is flash-based, and flash generally has limited write cycles - something like 100,000 writes per block. Since the SSD is used as a cache and you now read/write a lot of data on it, it gets written to much more frequently.
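To put the limited-write-cycles point into rough numbers, here's a back-of-envelope sketch; the cache size, daily writes, write amplification and the lower cycle count are all assumed figures for illustration - real drives publish a TBW rating instead:

```python
# Back-of-envelope flash endurance estimate. All figures are assumptions
# purely for illustration; real drives publish a TBW (terabytes written)
# rating that rolls this up for you.
CACHE_GB = 8                 # SSD cache portion size (assumed)
DAILY_WRITES_GB = 40         # data (re)written to the cache per day (assumed)
WRITE_AMPLIFICATION = 2.0    # flash writes more internally than you ask for (assumed)

def years_of_life(cycles_per_block):
    total_writable_gb = CACHE_GB * cycles_per_block
    days = total_writable_gb / (DAILY_WRITES_GB * WRITE_AMPLIFICATION)
    return days / 365

for cycles in (3_000, 100_000):   # a pessimistic figure vs the one quoted above
    print(f"{cycles:>7} cycles/block -> ~{years_of_life(cycles):.1f} years")
```

The point being that the same formula gives anything from under a year to decades depending on which cycle rating you believe - and cache duty pushes the daily-writes figure up sharply.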
IanF, another option, if your PC / laptop supports it, is to buy a USB 3 flash disk or a fast memory card. 8GB or 16GB might be enough, but a larger one will last longer, for the same reason as with the SSD cache above. Get the fastest one you can afford - typically an "SDHC" or "SDXC" card with the "UHS-I" or "UHS-II" specification. You can then configure Windows (the ReadyBoost feature) to use it as a cache and you'll see somewhat of a boost as well.
There's a bit of contention over whether SSDs have a "long enough" life compared to HDDs. But I've seen some tests from a few years ago which suggest that current SSDs actually have a longer write life than the magnetic coating on HDD platters. Rudi's making a good point though - as a cache it's probably not so great: hundreds of times more reads/writes than a normal disc. I think you might even be better served by adding some extra RAM and setting your server to use it as cache.
Originally posted by IanF
Ok I have ordered some drives and will clone the hard drives and see. We do store our artwork on the server but not the data files for email quoting etc.
That would actually make me think thrice about hybrids. I'd prefer a striped RAID with parity (e.g. RAID 5), or even better a software RAID like RaidZ. For servers the main issue is concurrency: with striping/mirroring you split the data across several discs, in effect combining their speed (striping helps both reads and writes; mirroring is even better for reads, not so good for writes). Not to mention you get the added bonus of some protection against HDD failure, and the ability to extend your data disc without too many setting changes.
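To make the striping-with-parity idea concrete, here's a minimal Python sketch of how a single parity chunk (the RAID 5 / RaidZ1 principle) lets you rebuild one lost disc with XOR - real implementations rotate the parity and work at block level, this is just the principle:

```python
# Minimal illustration of single-parity striping (the RAID 5 / RaidZ1 idea):
# parity = XOR of the data chunks, so any ONE missing chunk can be rebuilt.
def xor_bytes(*chunks):
    out = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, b in enumerate(chunk):
            out[i] ^= b
    return bytes(out)

# One "stripe" spread across three data discs plus one parity chunk.
disc1, disc2, disc3 = b"AAAA", b"BBBB", b"CCCC"
parity = xor_bytes(disc1, disc2, disc3)

# Disc 2 dies: rebuild its chunk from the survivors plus the parity.
rebuilt = xor_bytes(disc1, disc3, parity)
assert rebuilt == disc2
print("rebuilt disc 2:", rebuilt)
```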
Even a RAID 0 (i.e. no redundancy, fully striped) might perform faster than a hybrid in a server. This is because the hybrid's SSD cache is going to see lots of cache misses - i.e. it has to reload data from the HDD portion for each new user. This gets worse if the files are large, and you also mention artwork (those files tend to be several megs - ours easily run from 20MB all the way to 2GB without even trying). That type of scenario usually makes caching useless.
Edit: of course, if you're going to place the hybrids in a RAID anyway, then it's probably a slight bit faster. Though again, that depends on the size issues above.
The drives are going into the desktops not the server.
For the server I use unRAID, which makes a parity master disk, plus there is 8GB of RAM in the server. This works well so far. In December I replaced all the drives in the server. It took a few days: you first replace the parity drive and it rebuilds that, then the data drives one at a time and it rebuilds those.
Anyway, I am waiting for delivery and will keep the thread going.
Then for desktops, I think you should be fine. At least in general.
Just did a bit of research on unRAID. Yep, those are the same reasons I prefer RaidZ on a ZFS file system on a Solaris/BSD OS over any hardware RAID card - i.e. no need to keep all the hardware the same, no need for exact disc matching. I'm eagerly awaiting the Linux BTRFS system getting out of beta testing before I'll attempt it, and using ZFS on Linux is not a good idea for production.
The only difference I can see between unRAID and RaidZ is that unRAID never splits a file across multiple drives; RaidZ works more like a hardware system by putting pieces of the file across several discs. In other words, unRAID works similarly to AuFS on Linux.
At least that's my understanding from their technical explanations. If that's so, then simply cloning the old disc onto its replacement should have given you the same result as the rebuilding technique - only much faster. Otherwise it's the same with RaidZ: slow rebuilds of a crashed/removed disc using the parity data. Though you can choose to include mirroring as well, so 2 discs are exact copies of each other at all times (not the default, but possible): pro - replacing a disc is a lot faster; con - less overall space from the same number of discs.
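If I've understood their explanations right, a toy sketch of the two placement strategies looks something like this (file names and sizes are made up; it only illustrates why a single unRAID-style disc stays readable on its own while a striped disc doesn't):

```python
# Toy comparison of the two placement strategies discussed above.
# Sizes and file names are made up purely for illustration.
from itertools import cycle

files = {"logo.psd": 300, "quote.pdf": 2, "photo.tif": 800}   # sizes in MB
discs = ["disc1", "disc2", "disc3"]

# unRAID-style: each file lands whole on one disc (round-robin here),
# so pulling out disc2 on its own still gives you complete, readable files.
whole = {d: [] for d in discs}
for (name, size), disc in zip(files.items(), cycle(discs)):
    whole[disc].append((name, size))

# Stripe-style (hardware RAID / RaidZ): every file is cut into chunks
# spread over all discs, so no single disc holds a usable copy of anything.
CHUNK_MB = 100
striped = {d: [] for d in discs}
for name, size in files.items():
    for offset, disc in zip(range(0, size, CHUNK_MB), cycle(discs)):
        striped[disc].append((name, offset))

print("whole-file layout:", whole)
print("chunks per disc:  ", {d: len(c) for d, c in striped.items()})
```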
Irneb, I bumble my way through this, so I am no expert.
With unRAID your parity disk must be equal to or larger than your largest data disk. My server, an HP MicroServer, only has 4 HDD bays, so there is not enough physical space to mirror the drives as well. Also, to have more than 2 data drives you need to buy a licence.
Ouch. Yep, that is a problem. Not too sure about RaidZ's parity requirements, but there are other problems too.
ZFS uses something strange: one or more discs go into a vdev (virtual device), which can have any of the RaidZ levels applied to the whole group - this is basically the "raid" portion. Over that it creates what's known as a zpool, which combines one or more vdevs and which the OS then sees as a single drive. A vdev's capacity will never grow beyond its smallest disc, so to increase your capacity you add an extra vdev to the zpool. Depending on the RaidZ level, failed discs in a vdev can be reconstructed: Z1 tolerates 1 failed disc (same as RAID 5), Z2 tolerates 2, Z3 tolerates 3. A vdev can also be a mirror of 2 or more discs.
The important caveat is that if an entire vdev fails (e.g. a Z1 vdev loses 2 discs at once), the whole zpool is lost - so the redundancy level you pick per vdev is what actually protects you. People usually add similar-sized discs into one vdev, then when expanding create a new vdev and add it to the pool - so you "upgrade" in batches (this is one point I don't like too much about ZFS and why I want to try BTRFS).
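A quick sketch of that vdev/zpool arithmetic - usable space and how many disc failures each vdev tolerates (the disc sizes and layouts are hypothetical, and it ignores ZFS's metadata overhead):

```python
# Rough vdev/zpool arithmetic for the layouts described above.
# Disc sizes are hypothetical; real pools lose a bit more to metadata.
def vdev_summary(kind, disc_sizes_tb):
    n, smallest = len(disc_sizes_tb), min(disc_sizes_tb)
    if kind == "mirror":
        usable, tolerates = smallest, n - 1
    elif kind.startswith("raidz"):
        parity = int(kind[-1])             # raidz1 / raidz2 / raidz3
        usable, tolerates = smallest * (n - parity), parity
    else:                                  # plain stripe, no redundancy
        usable, tolerates = smallest * n, 0
    return usable, tolerates

# A pool is the sum of its vdevs - but lose a WHOLE vdev and the pool
# is gone, so the per-vdev redundancy is what actually protects you.
pool = [("raidz1", [2, 2, 2, 2]), ("mirror", [4, 4])]
total = 0
for kind, sizes in pool:
    usable, tolerates = vdev_summary(kind, sizes)
    total += usable
    print(f"{kind:<7} {sizes}: {usable} TB usable, survives {tolerates} disc failure(s)")
print(f"zpool usable capacity: ~{total} TB")
```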
What I'm not sure of is the equivalent of unRAID's parity-disc requirement - RaidZ spreads the parity across the vdev's discs rather than keeping a dedicated parity disc, but in Z1 it still costs you roughly one disc's worth of space per vdev.
So I'm guessing ZFS is not very conducive to your case - you'll only start seeing real benefits with 4 or more discs, and as a recovery system it's a bit useless on fewer than 2 discs (prefer 3 or more). Also, 8GB of RAM is the recommended minimum when using ZFS. And, extremely important, your RAM needs to be ECC: many have seen huge issues with normal commercial RAM causing corruption of ZFS's caches - which ZFS then cannot fix. So ZFS is only bullet-proof if you have lots of very expensive RAM. See why I want something else?
An alternative poor-man's solution would be to use a Linux server, format all your discs to Ext4 (or whatever FS you prefer, but Ext4 is reasonable) and then mount them as one "unified" disc using AuFS. This behaves like a spanned volume rather than a true RAID: each disc is still separate as well, so if you remove one (or one fails) the unified disc will still work, but that disc's data will no longer be there. And since each disc is still a "normal" disc, you can simply clone it like any other if you want to replace it (you can plug it into another PC and it will have its data as normal). I use this for my media server at home, since I don't worry too much if I lose a few movies/songs. I wouldn't recommend it for a business's critical data though - there's no error avoidance built into AuFS beyond the discs' own journaling (or whatever).
But I much prefer this method over Windows' "extend volume" feature, which makes the discs unusable individually (an extended volume is effectively a software spanned volume that only works as a set). AuFS has settings for how it arranges files across the discs - the default is to fill from the first disc, or to save to the disc with the most free space, but a possibly better option is round-robin (each consecutive file being saved to a different disc). There's also a setting to check all discs' contents when listing files - usually turned off for performance, but a good idea if someone's going to work directly on one of the discs (otherwise AuFS has a refresh feature) instead of through the unified mount.
To put it into Windows parlance: it would be like having a drive X: that simply combines drives M:, N:, O:, etc., while each of them is also still available on its own. Actually, it can even be just folders on those drives, e.g. X: mapping to M:\MyFiles plus N:\SomeOtherFiles plus O:\YetAnotherFolderOfFiles.
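In code form, that union idea looks roughly like this - a Python sketch that only mimics the behaviour described (the branch paths and the round-robin policy are assumptions for illustration; it isn't AuFS itself):

```python
import os
from itertools import cycle

# Toy "union mount": several real directories (the individual discs)
# presented as one combined view, with round-robin placement for new files.
# The branch paths are placeholders.
BRANCHES = ["/mnt/disc1", "/mnt/disc2", "/mnt/disc3"]
_next_branch = cycle(BRANCHES)

def union_listing():
    """List every file visible across all branches (duplicates collapse)."""
    names = set()
    for branch in BRANCHES:
        if os.path.isdir(branch):
            names.update(os.listdir(branch))
    return sorted(names)

def union_open_for_read(name):
    """First branch that has the file wins, like reading via the union."""
    for branch in BRANCHES:
        path = os.path.join(branch, name)
        if os.path.exists(path):
            return open(path, "rb")
    raise FileNotFoundError(name)

def union_create(name):
    """Round-robin policy: each new file goes to the next disc in turn."""
    return open(os.path.join(next(_next_branch), name), "wb")
```

The key property, as above: remove any one branch directory and the other two keep serving their own files untouched.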
I tried the Ubuntu server option but didn't have a spare DVD drive when I set it up. Eventually I tried unRAID and it worked very easily for me.
Still waiting for the disks
You can definitely use ZFS with 1, 2 or 3 drives, but you see more benefits with 4 drives or more.
With ZFS you need roughly 1GB of RAM per TB of storage.
You can use a small ZIL (intent log) drive to speed up synchronous writes, and it can even be on USB 3.
+1 ... what I meant to say is that ZFS would only help Ian if he had 4 or more drives (over what he already has with UnRaid).
The 1GB/TB rule is good, yes, but the absolute minimum RAM is 4GB, and 8GB is recommended. This is due to ZFS's in-RAM caching and logging. And if you turn on ZFS's de-duplication (i.e. it never saves duplicate blocks of data more than once, even between different files) it gets even worse, since that is calculated in RAM as well. But the main issue is that you absolutely MUST use ECC RAM, so the off-the-shelf stuff is not good enough (it gets worse at higher altitudes due to more cosmic radiation causing bit-flips in RAM). http://hardforum.com/showthread.php?t=1689724
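Putting those RAM rules of thumb together in a quick sketch - the 1GB/TB figure, the 4GB/8GB floors, and an extra hit for dedup (the 5GB per TB dedup figure is just a commonly quoted rough number, treat it as an assumption):

```python
# Rough ZFS RAM sizing from the rules of thumb in this thread.
# The dedup figure is a commonly quoted rough number, not gospel.
GB_PER_TB_BASE = 1        # the 1 GB RAM per TB of storage rule
MINIMUM_GB = 4            # absolute minimum mentioned above
RECOMMENDED_GB = 8        # recommended floor mentioned above
GB_PER_TB_DEDUP = 5       # extra RAM per TB if dedup is on (assumed figure)

def zfs_ram_estimate(pool_tb, dedup=False):
    ram = max(RECOMMENDED_GB, MINIMUM_GB, pool_tb * GB_PER_TB_BASE)
    if dedup:
        ram += pool_tb * GB_PER_TB_DEDUP
    return ram

for tb in (4, 8, 16):
    print(f"{tb} TB pool: ~{zfs_ram_estimate(tb)} GB RAM, "
          f"~{zfs_ram_estimate(tb, dedup=True)} GB with dedup")
```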
I've tested this for about a year on a normal PC which I converted to a server, and I've found this to be excellent advice. It has 16GB of "normal" RAM and I found at least one corrupt file per week - the only way to fix it was to copy from backup; ZFS's scrub didn't help at all, it didn't even "know" that the file had become corrupt. The first 2 months were on Linux with the separate ZfsOnLinux install, then I used FreeBSD instead, thinking that was why the corruptions happened. But no, even with FreeBSD and the native ZFS driver I still got the same corruptions. So it was definitely the non-ECC RAM combined with ZFS's sensitivity to bit-flips. I've had the Ext4 + AuFS system since November last year on the exact same PC, and have yet to find any file corruption - that file system is less affected by RAM problems.
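The reason scrub can't save you here is the ordering: if the bit flips in RAM before ZFS computes the block checksum, the checksum faithfully matches the already-corrupt data. A tiny sketch of that ordering problem, with plain SHA-256 standing in for ZFS's block checksums:

```python
import hashlib

def checksum(block: bytes) -> str:
    return hashlib.sha256(block).hexdigest()

def flip_bit(block: bytes, bit: int) -> bytes:
    """Simulate a single RAM bit-flip in an in-memory buffer."""
    out = bytearray(block)
    out[bit // 8] ^= 1 << (bit % 8)
    return bytes(out)

original = b"important customer artwork data"

# Flip AFTER checksumming: a later scrub/verify catches the mismatch.
good_sum = checksum(original)
corrupted_on_disk = flip_bit(original, 42)
print("flip after checksum  -> detected:", checksum(corrupted_on_disk) != good_sum)

# Flip BEFORE checksumming (bad RAM while the block sits in the write path):
# the stored checksum matches the corrupt data, so a scrub sees nothing wrong.
corrupted_in_ram = flip_bit(original, 42)
bad_sum = checksum(corrupted_in_ram)
print("flip before checksum -> detected:", checksum(corrupted_in_ram) != bad_sum)
```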
But the main issue is that you absolutely MUST use ECC compliant RAM chips
Here's an example of what I mean:
4GB DDR3-1600 RAM from the same supplier & manufacturer. Commercial R600, ECC Server edition R850.
See the price jump? So instead of me spending around R2100 for a 16GB machine's RAM, I'd have spent closer to R3700. And that, just because I'm using ZFS instead of nearly any other FS.
Interestingly, I haven't picked up this behaviour. My home server runs FreeNAS on a normal desktop PC with 8 HDDs and normal DDR3 RAM, and yet I haven't picked up anything like that. It's filled with photos, music, videos and saved data from both my wife's and my own laptops and tablets. Our production servers all run ECC RAM, so I can't vouch for them though.
But, as far as I know, you don't absolutely *need* ECC RAM for ZFS, the same as you don't absolutely need a ZIL drive or cache drive - but it's definitely recommended if you care about data integrity.