Hybrid Hard Drives

  • irneb
    Gold Member

    • Apr 2007
    • 625

    #16
    Originally posted by SoftDux-Rudi
    It's filled with photos, music, videos and saved data from both my wife and my own laptops and tablet.
    That's why. The issue usually only arises when saving files. You're probably not saving, editing and re-saving those JPGs/MP3s/AVIs over and over, are you? I'm with you on using FreeNAS for such a media server - easiest thing in the world, and ZFS is awesome (especially with this type of data). I've used FreeNAS for exactly the same purpose and loved it. I went so far as doing the same for my "working file server" too, though I installed FreeBSD instead (i.e. the full OS, unlike the minimalist FreeNAS) - which is when I started noticing the problem. I then started researching how to fix it - only solution: ECC RAM.

    It's just that when you want to use ZFS in a production server where writes are going to happen on a near per-second basis, you should very seriously consider ECC RAM; otherwise it's not a huge problem, since the corruption only occurs very rarely. Yes, not a "necessity" ... same as working brakes aren't "necessary" to be able to drive your car.
    Gold is the money of kings; silver is the money of gentlemen; barter is the money of peasants; but debt is the money of slaves. - Norm Franz
    And central banks are the slave clearing houses


    • IanF
      Moderator

      • Dec 2007
      • 2681

      #17
      OK, I got the drives - now just to find a quiet few hours to clone and install them. Probably over the weekend would be best.
      Only stress when you can change the outcome!


      • SilverNodashi
        Platinum Member

        • May 2007
        • 1197

        #18
        Originally posted by irneb
        That's why. The issue usually only arises when saving files. You're probably not saving, editing and re-saving those JPGs/MP3s/AVIs over and over, are you? I'm with you on using FreeNAS for such a media server - easiest thing in the world, and ZFS is awesome (especially with this type of data). I've used FreeNAS for exactly the same purpose and loved it. I went so far as doing the same for my "working file server" too, though I installed FreeBSD instead (i.e. the full OS, unlike the minimalist FreeNAS) - which is when I started noticing the problem. I then started researching how to fix it - only solution: ECC RAM.

        It's just that when you want to use ZFS in a production server where writes are going to happen on a near per-second basis, you should very seriously consider ECC RAM; otherwise it's not a huge problem, since the corruption only occurs very rarely. Yes, not a "necessity" ... same as working brakes aren't "necessary" to be able to drive your car.
        We take a fair amount of personal photos and videos and save them to the drive on a daily basis. My wife also does some professional photography and saves / edits / re-saves clients' work on there on a regular basis.
        Sure, it's not the same as a production server, but it's similar to a small office file server.
        Get superfast South African Hosting at WebHostingZone


        • irneb
          Gold Member

          • Apr 2007
          • 625

          #19
          Originally posted by IanF
          OK, I got the drives - now just to find a quiet few hours to clone and install them. Probably over the weekend would be best.
          Great, let us know how it goes. Which brand did you get? Seagate / WD / other?

          Sorry for the off-topic about file systems though.

          Originally posted by SoftDux-Rudi
          We take a fair amount of personal photos and videos and save them to the drive on a daily basis. My wife also does some professional photography and saves / edits / re-saves clients' work on there on a regular basis.
          Sure, it's not the same as a production server, but it's similar to a small office file server.
          Then I don't know - I probably just have some bad consumer RAM. Or a bad spot where the server sits. Or bad power. It is on 24/7 though, so that might also be a factor.

          • IanF
            Moderator

            • Dec 2007
            • 2681

            #20
            I got Seagate desktop SSHD 1 TB drives.
            Don't worry about the off-topic file systems - that's how we learn.

            • irneb
              Gold Member

              • Apr 2007
              • 625

              #21
              Originally posted by IanF
              I got Seagate desktop SSHD 1 TB drives.
              Great - personally I like Seagate, always have. In my own experience they're the drives with the least trouble. Though as I've mentioned before, I've seen some tests which actually show the opposite: Seagates having the highest failure rate in data servers, then WD, with Toshibas the best of the lot.

              The Toshiba case I do have a sample of, which seems to corroborate the test: I had a very old Iomega external 80GB drive (around 9 years old). The casing & USB circuitry demolished itself after falling off a cupboard, so I've stuck the 2.5" Toshiba disc into a SATA port - it still works perfectly. I've got another 250GB Toshiba (around 6 years old), which is the one I recently replaced with a new 3TB - it's now plugged into an Iomega iConnect together with the 80GB using some self-frankensteined SATA-to-USB converters; both are still running fine. But even though I've had 100% reliability with these, I don't consider 2 discs statistically significant.

              The WDs I've had previously have all failed, while of the 20 or so Seagates I've had since the 90s only 1 ever failed on me (though that was around 2 years ago, on a 2TB 3.5" Green Barracuda); the rest are still running, or in a box (the oldest I'm still using is a 5-year-old 500GB 2.5").

              Originally posted by IanF
              Don't worry about the off-topic file systems - that's how we learn.
              Thanks! BTW, since you're using it, how easy is it to set up unRAID with several different-sized discs, and later to swap in / add new, larger discs? With ZFS I had lots of learning to do - a vdev can't be "grown" after first creation (even if you add new larger discs it only uses the original size); the only way is to add new vdevs into the zpool. To me that sounded a bit convoluted.

              • IanF
                Moderator

                • Dec 2007
                • 2681

                #22
                With unRAID, as long as the parity drive is the same size as the largest drive, it's easy to put in a new, bigger hard drive. I changed 500 GB drives for 1 TB drives, and after the rebuild there was more space. It just took a while to choose the right options to get the rebuild going.
                Don't know about ZFS though.
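                The rule above - parity at least as large as the largest data drive, usable space being the sum of the data drives - can be sketched as a toy capacity check. This is just an illustration of the arithmetic, not unRAID's actual code:

                ```python
                def unraid_capacity(data_drives_gb, parity_gb):
                    """Toy model of unRAID's single-parity layout.

                    Usable space is the sum of the data drives (unlike a
                    traditional RAID, the drives may differ in size), but the
                    parity drive must be at least as large as the largest
                    data drive, or the array is invalid.
                    """
                    if parity_gb < max(data_drives_gb):
                        raise ValueError("parity drive must be >= largest data drive")
                    return sum(data_drives_gb)

                # Swapping 500 GB data drives for 1 TB ones (with a 1 TB parity)
                # grows the usable space after the rebuild:
                print(unraid_capacity([500, 500, 500], parity_gb=1000))     # 1500
                print(unraid_capacity([1000, 1000, 1000], parity_gb=1000))  # 3000
                ```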

                • irneb
                  Gold Member

                  • Apr 2007
                  • 625

                  #23
                  ZFS's ZIL (ZFS Intent Log) is usually a partition it creates on one of the drives in the vdev. You can set it to a different disc - some people put it on an SSD, since it is written to for every single action, which makes things faster. Apparently its size is not fixed according to the drives inside the vdev - it's actually a dynamically expanding volume. Its size is governed by the RAM cache, the bandwidth on the network, the block size of the files (note ZFS has varying block sizes per file, not per disc as nearly all other FSs have), etc.
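                  A rule of thumb often cited for sizing a separate log device: it only needs to buffer a couple of transaction groups of sustained writes. The 5-second flush interval and factor of 2 below are assumptions for illustration, not fixed ZFS constants:

                  ```python
                  def slog_size_estimate(write_mb_per_s, txg_seconds=5, txg_count=2):
                      """Rough SLOG sizing: hold roughly two transaction groups'
                      worth of incoming writes. txg_seconds=5 is an assumed flush
                      interval; tune both parameters for your own pool."""
                      return write_mb_per_s * txg_seconds * txg_count

                  # A gigabit link tops out around 125 MB/s, so even a small
                  # SSD partition is plenty for the log:
                  print(slog_size_estimate(125))  # 1250 (MB)
                  ```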
                  Originally posted by IanF
                  Don't know about ZFS though.
                  It's got a bit of an issue there, especially if you don't have lots of free SATA ports. See the answers on this exact question: http://superuser.com/questions/62224...nt-size-drives

                  ZFS doesn't like having varying-sized discs inside a single vdev, especially if you turn on mirroring, in which case the vdev's size is only that of the smallest disc in the batch. With only striping + log it's all right, but once the vdev is created, adding another disc to it will not increase the total size. That is what the zpool is for: you add the new disc into a new vdev, then add that into the zpool.

                  FreeNAS (i.e. the plug-and-play OS which uses ZFS) does the striping idea by default. Since ZFS's ZIL by default forms part of a partition inside each vdev, if you only have one disc inside the vdev, part of it is used for the ZIL. FreeNAS's default is to create a new vdev for each new HDD - you can change this if you wish (even through its web interface), but this way is the simplest to extend the raid's capacity (not the most robust, though).

                  From most of my research, it seems the "best practice" method is to keep similar-sized discs in a vdev, then pool the different-sized vdevs together. I.e. when getting new disc(s) you'd need to either add them to a vdev with similar-sized discs, or create a new vdev and add that to the zpool as well.
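                  The capacity arithmetic above - a mirrored vdev limited to its smallest member, and a pool that grows by adding whole vdevs rather than enlarging one - can be sketched like this (a toy illustration, not real ZFS behaviour in code):

                  ```python
                  def mirror_vdev_capacity(disks_gb):
                      """A mirrored vdev only exposes its smallest member's size."""
                      return min(disks_gb)

                  def zpool_capacity(vdev_capacities_gb):
                      """A zpool's capacity is the sum of its vdevs, which is why
                      you grow a pool by adding new vdevs, not by adding a disc
                      to an existing one."""
                      return sum(vdev_capacities_gb)

                  old_vdev = mirror_vdev_capacity([1000, 1000])  # 1000
                  new_vdev = mirror_vdev_capacity([3000, 3000])  # 3000
                  print(zpool_capacity([old_vdev, new_vdev]))    # 4000
                  ```

                  Mixing a 1 TB and a 3 TB disc in one mirror would waste 2 TB (`mirror_vdev_capacity([1000, 3000])` is only 1000), which is exactly why the similar-sizes-per-vdev practice exists.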

                  • IanF
                    Moderator

                    • Dec 2007
                    • 2681

                    #24
                    Here is a screen shot of the server control panel.
                    I can't see any options for ZFS.
                    [Screenshot attachment: unraid.jpg]
                    I looked in the other tabs and there is nothing there, you probably have to open a terminal window.

                    • irneb
                      Gold Member

                      • Apr 2007
                      • 625

                      #25
                      Originally posted by IanF
                      I looked in the other tabs and there is nothing there, you probably have to open a terminal window.
                      No, I don't think unRAID has ZFS built in. unRAID is based on Linux (not Solaris / BSD). Unfortunately there's a license incompatibility between ZFS and Linux, so there's no native ZFS in the mainline kernel. There's only ZFS through FUSE, or the separately installable ZFS on Linux project, which re-implements ZFS as a native file system.

                      From unRAID's FAQs it seems it uses ReiserFS. That was the first journalling FS in Linux, back when ext2 was still the norm. Then ext3 added journalling, but did so much slower than Reiser - it was like an add-on to the FS. Now, with ext4, they're very close performance-wise. It's not a bad FS at all, just a bit older than ZFS, though that means it's had more time to work out any bugs. ZFS also uses a technique which I think is the best idea in file systems in forever: Copy-on-Write. This places new data in a new empty space, then only when finished points the file handle to the new data and releases the old. Thus even if a power failure occurs during a save, the worst that happens is you lose the new data (the old stuff is still intact); with overwriting FSs a power failure means the file WILL be corrupt and you probably won't be able to recover any useful data.
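                      You can mimic that write-new-then-swap-the-pointer idea at the application level with the classic write-to-temp-then-rename trick - a sketch of the same safety property, not how ZFS itself is implemented:

                      ```python
                      import os
                      import tempfile

                      def atomic_write(path, data):
                          """Write new contents elsewhere first, then atomically
                          point the name at the new data, like Copy-on-Write: a
                          crash mid-write leaves the old file intact and never
                          leaves a half-written one behind."""
                          dir_ = os.path.dirname(os.path.abspath(path))
                          fd, tmp = tempfile.mkstemp(dir=dir_)
                          try:
                              with os.fdopen(fd, "w") as f:
                                  f.write(data)
                                  f.flush()
                                  os.fsync(f.fileno())  # push the new data to disc
                              os.replace(tmp, path)     # atomic rename on POSIX
                          except BaseException:
                              os.unlink(tmp)            # clean up the half-written temp
                              raise

                      atomic_write("demo.txt", "old contents")
                      atomic_write("demo.txt", "new contents")
                      print(open("demo.txt").read())  # new contents
                      ```

                      The key point is that `os.replace` swaps the directory entry in one step, so a reader (or a crash) only ever sees the complete old file or the complete new one.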

                      You might want to look through this: http://en.wikipedia.org/wiki/Comparison_of_file_systems

                      Note the ZFS I'm referring to is the one made in 2004 by Sun Microsystems for their Solaris Unix (not the zFS by IBM in 2001).

                      • IanF
                        Moderator

                        • Dec 2007
                        • 2681

                        #26
                        I chose unRAID as it was made to be installed on a USB stick and was easier than the other systems I tried.
                        Thanks for the research, looks like a good choice.

                        • IanF
                          Moderator

                          • Dec 2007
                          • 2681

                          #27
                          I installed the first hard drive on Friday; it took about 90 minutes.
                          I used Seagate DiscWizard to clone the drive, and it worked well.
                          The only problem was with my MIS system: I found out it uses the drive serial number and a few other things to work out the user number, so the MIS wouldn't work until they updated the details in their registration module.
                          I must find the time to change from this system.