Storage Spread Over 5 Drives
My storage server will have (5) 6TB drives in it for a total of 30TB. Is there a "best way" to distribute recordings over all of the drives? I am NOT planning to use Raid for several reasons.
Thank you,
Eric
- knight-of-ni
Re: Storage Spread Over 5 Drives
ZoneMinder only supports recording to a single mounted location.
It is up to you to configure the underlying filesystem to present one logical volume. The thing that does that is called RAID. Sometimes there is a fancy name applied to it, like ZFS or glusterfs, but the underlying behaviour is still RAID.
If you want, you could use mdraid to just jbod the disks together, but the problem with that is your system will only be as fast as a single drive. A properly configured RAID volume will stripe the data across multiple drives for a performance boost. This is necessary for all but the smallest surveillance systems.
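For what it's worth, the mdraid/striping route boils down to just a few commands. A minimal sketch, assuming the five drives show up as /dev/sdb through /dev/sdf and are dedicated to ZoneMinder (device names, RAID level, and mount point are only examples):
Code:
# Build a RAID 5 array across the five drives (one drive's worth of capacity goes to parity)
sudo mdadm --create /dev/md0 --level=5 --raid-devices=5 /dev/sd[b-f]
# Put a filesystem on the array and mount it where ZoneMinder will write
sudo mkfs.ext4 /dev/md0
sudo mkdir -p /mnt/zmstorage
sudo mount /dev/md0 /mnt/zmstorage
# Record the array so it assembles at boot (config file path varies by distro)
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf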
Visit my blog for ZoneMinder related projects using the Raspberry Pi, Orange Pi, Odroid, and the ESP8266
All of these can be found at https://zoneminder.blogspot.com/
Re: Storage Spread Over 5 Drives
Thank you for the reply. I totally understand RAID, but I was hoping to avoid using it in this case. I have worked with systems in the past that let you designate which channels record to which drive, and I was hoping there was a way to do something similar.
Thanks again!
Re: Storage Spread Over 5 Drives
Use LVM in Linux to add them to a single volume group. You can add and remove disks as necessary. Google volume groups.
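A rough sketch of that, assuming the five drives are /dev/sdb through /dev/sdf and using the placeholder names zm_vg / zm_events (stop ZoneMinder and move any existing events aside before mounting over the events directory):
Code:
# Mark each drive as an LVM physical volume
sudo pvcreate /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
# Pool them into one volume group
sudo vgcreate zm_vg /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
# One big logical volume using all the space, then format and mount it
sudo lvcreate -l 100%FREE -n zm_events zm_vg
sudo mkfs.ext4 /dev/zm_vg/zm_events
sudo mount /dev/zm_vg/zm_events /var/lib/zoneminder/events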
Re: Storage Spread Over 5 Drives
LVM of 5 drives is another implementation of "jbod" (strapping 5 real disks into 1 big virtual disk), as mentioned by @knnniggett
This doesn't solve the OP's apparent problem of wanting to spread (distribute) over 5 disks -- for whatever reason -- as a) Zoneminder can only write to 1 location, and b) if the location is an LVM/mdraid "jbod" system, it is likely all data will go on disk 1. When it's full, disk 2. When it's full ... that's not really spreading per recording device.
So I'm not sure that's what the OP is after. Maybe a better question is "What do you think you will gain by spreading over 5 drives?"
Redundancy? That's what RAID is for. Organisation? ZM already organises your output into folders. Wait. Hold on.
Thought: At a push, and I'm not guaranteeing this is a good idea, you COULD create 5 separate file systems, on 5 drives, and then use symbolic links to them, from under the ZM event directory?
E.g. currently /mnt/point/zoneminder/events holds one numbered directory per camera (named after the camera ID) plus a symlink with the camera's friendly name, and all events are filed somewhere under those.
New way: Create 5 file systems, on your 5 disks, with mk*fs, then mount them individually on /mnt/point/bigdisk1 /mnt/point/bigdisk2 ....
Then go into /mnt/point/zoneminder/events and link them OVER the ZM directory points.
ln -s /mnt/point/bigdisk1 1 # For camera ID 1
ln -s /mnt/point/bigdisk2 2 # For camera ID 2
etc.
This way, ZM won't have any idea that it's actually writing to multiple file systems on different disks. All without LVM or RAID.
Not sure why you'd want to, but that might do it.
Do this with ZM stopped, and be aware that zmaudit will notice that your new filesystems are now "empty" and will delete any events in the database, as the files are "gone" (actually, probably still in the old ZM dirs!)
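If anyone wants to try the symlink idea, the whole dance might look roughly like this. Untested sketch: the device names, mount points, camera IDs 1 and 2, and the www-data user are all assumptions, and existing events get copied across before the links go in:
Code:
sudo systemctl stop zoneminder
# One filesystem per drive, mounted individually
sudo mkfs.ext4 /dev/sdb1 && sudo mkdir -p /mnt/point/bigdisk1 && sudo mount /dev/sdb1 /mnt/point/bigdisk1
sudo mkfs.ext4 /dev/sdc1 && sudo mkdir -p /mnt/point/bigdisk2 && sudo mount /dev/sdc1 /mnt/point/bigdisk2
# ...repeat for bigdisk3 to bigdisk5...
cd /mnt/point/zoneminder/events
# Copy existing events onto the new disks, then swap each camera directory for a symlink
sudo cp -a 1/. /mnt/point/bigdisk1/ && sudo rm -rf 1 && sudo ln -s /mnt/point/bigdisk1 1
sudo cp -a 2/. /mnt/point/bigdisk2/ && sudo rm -rf 2 && sudo ln -s /mnt/point/bigdisk2 2
# ZoneMinder's web/daemon user must be able to write to the new filesystems (user varies by distro)
sudo chown -R www-data:www-data /mnt/point/bigdisk1 /mnt/point/bigdisk2
sudo systemctl start zoneminder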
Re: Storage Spread Over 5 Drives
OK, so I have decided to go with ZFS, following https://wiki.zoneminder.com/Using_a_ded ... Hard_Drive as a guide. This works until the part where you need to edit /etc/fstab (since you cannot map to ZFS there).
How do I essentially do this for zfs, so I can map the old directories to the new?:
Code:
/newdrive/zoneminder/images /var/lib/zoneminder/images none defaults,bind 0 2
/newdrive/zoneminder/events /var/lib/zoneminder/events none defaults,bind 0 2
- knight-of-ni
Re: Storage Spread Over 5 Drives
Nice choice. That's what I'm using.
With ZFS, you can create unlimited filesystems within your storage pool, so just create two filesystems. e.g. tank/images and tank/events
Note, however, that the images folder never contains a lot of images, only a few MB at a time, so it is safe to leave this folder as part of your root filesystem if you want. You could get by with just tank/events.
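For reference, creating those filesystems is just a couple of commands; tank is only the example pool name used below, not necessarily what you called yours:
Code:
# Create separate filesystems inside the existing pool
sudo zfs create tank/events
sudo zfs create tank/images   # optional, per the note above
# They all share the pool's free space; check with:
zfs list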
Here is my ZFS filesystem (the bittorrent filesystem is for Linux distros, of course):
Code:
[abauer@bauerhaus ~]$ sudo zfs list
[sudo] password for abauer:
NAME                 USED  AVAIL  REFER  MOUNTPOINT
tank                1.83T  1.68T    96K  none
tank/LargeDataDisk  1.31T  1.68T   916G  /mnt/LargeDataDisk
tank/bittorrent     62.3G  1.68T  62.3G  /usr/local/bittorrent
tank/home           6.30G  1.68T  4.26G  /home
tank/zmevents        469G  30.9G   469G  /var/lib/zoneminder/events
- knight-of-ni
Re: Storage Spread Over 5 Drives
Oh, and I made the initial mistake of enabling automatic daily snapshots on my events filesystem. Yeah, don't do that. It causes your filesystem space to be consumed by the snapshots in about a week.
Re: Storage Spread Over 5 Drives
What command did you use to map the new file system over? Looks like ln is bad....
From what I have seen, all my data is showing up as images in the images folder; I have not yet seen anything in the events folder (but maybe that's because I am not recording yet).
- knight-of-ni
Re: Storage Spread Over 5 Drives
See this guide:
https://pthree.org/2012/12/17/zfs-admin ... lesystems/
It is very helpful.
After you create the filesystem, change its mount point to point to /var/lib/zoneminder/events like I did
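If the filesystem already exists, changing its mount point afterwards is just a property change (tank/zmevents here is the example name from the listing earlier; do this with ZoneMinder stopped and the old directory empty or moved aside):
Code:
sudo zfs set mountpoint=/var/lib/zoneminder/events tank/zmevents
# Confirm where it landed
zfs get mountpoint tank/zmevents
df -h /var/lib/zoneminder/events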
- knight-of-ni
Re: Storage Spread Over 5 Drives
Oh, and just to make sure you caught this.... ZFS mount points are handled directly by the ZFS service. You don't mount them from fstab.
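In other words, the mount point lives as a property on the dataset and the ZFS services mount everything at boot; a quick way to sanity-check (unit names may differ by distro):
Code:
# Mount every dataset whose mountpoint property is set
sudo zfs mount -a
# List what ZFS currently has mounted, and where
zfs mount
# On systemd-based distros this is normally handled by the ZFS mount unit
systemctl status zfs-mount.service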
Re: Storage Spread Over 5 Drives
I'm getting ready to do the same thing: combine multiple drives into one big space. What is the advantage of using ZFS over ext4 and LVM?
- knight-of-ni
Re: Storage Spread Over 5 Drives
ZFS has lots of advantages.
- ZFS is like taking LVM a step further
- Filesystems in the ZFS pool all share the storage in the pool, so creating a large filesystem for, say, DVD images doesn't lock up a bunch of unused space that could have gone to another filesystem
- You can search for and read about claims that ZFS is more reliable, prevents bit rot, things like that
- You can use run-of-the-mill hard drives. No expensive enterprise drives required. Hardware RAID controllers are not required, and are even unwanted.
- Native support for snapshots, which is awesome for filesystems that don't change a lot, but not so much for zoneminder's event folder
- Native support for quotas
- Copy on write filesystem = fast
- You can configure different RAID equivalents. I created multiple mirrored vdevs which get striped together, a.k.a. RAID 10. You could also build RAIDZ (https://pthree.org/2012/12/05/zfs-admin ... -ii-raidz/); see the sketch after this list
The disadvantages include:
- learning curve
- Some distros implement ZFS via kernel module, which requires special care when a new kernel is released
- Almost forgot: copy-on-write filesystems are not good for databases. I keep my ZoneMinder DB on the root volume, which is normal ext4 on my system. I have not investigated why this is.
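As promised above, a rough sketch of the pool layouts mentioned in the RAID item, plus a quota. The five example drives /dev/sdb through /dev/sdf, the pool name tank, and the 4T cap are all placeholders:
Code:
# RAIDZ (roughly RAID 5) across five drives:
sudo zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
# ...or striped mirrors (roughly RAID 10) -- needs drives in pairs:
# sudo zpool create tank mirror /dev/sdb /dev/sdc mirror /dev/sdd /dev/sde
# Native quota: keep the events filesystem from eating the whole pool
sudo zfs create tank/events
sudo zfs set quota=4T tank/events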
Re: Storage Spread Over 5 Drives
"ZFS is like taking LVM a step further" - Sort of. They both nearly do the same thing, if you allow that LVM needs a filesystem as well. There is also btrfs on Linux. There are a lot of variables here and you have to start from the hardware and work up, whilst taking into account the rest of the environment.
The gold standard for surviving a power outage is "battery backed". I don't care how fancy your journal is but if you don't have a battery backed cache then you will probably end up with holes in your filesystem eventually after a power out. To be fair, this is more likely to affect virtual machines rather than physical hosts. If you run your ZM as a VM then you must accept that price.
A single disc will only go so fast. A RAID 0 (stripe across multiple discs, no redundancy) will write at roughly single-disc speed times the number of discs. The other RAID levels also have their advantages.
ZM is all about streaming and that means that if you are stressing the hardware you need something that writes quickly. You will also want to read it quickly as well so take that into account.
The filesystem in use is nearly immaterial - use ZFS or btrfs or xfs or ext3/4 - it really won't matter, in my opinion, with regard to performance. To be honest, ext2 with a decent RAID controller + battery will probably be faster anyway and survive a power out, but it will take ages to recheck.
Oh, sorry: ZFS - yes, it does things like RAID, but it won't magically make your discs faster. Neither will LVM - that's just a funky partitioning and block redirection thingie. btrfs is similar to ZFS.
The take home is: don't fixate on a filesystem to make things magically faster. Get the hardware right.
Cheers
Jon
Re: Storage Spread Over 5 Drives
Regarding fast hard disks, this article was an interesting read (with benchmarks):
http://www.tomshardware.com/reviews/sur ... 831-6.html
Based on that article, I ended up with a pair of Western Digital Se drives. Each drive has two processors and handles multiple simultaneous streams really well, such as when several cameras are recording at once. I'm not sure if ZM uses the drives in this way, but it was worth a shot.
For me, the ZFS or LVM option is all about being able to add more drives to the pool to increase volume space on the fly. When space gets tight, it's great to just add a drive to the pool and presto you've got more space. Or remove a drive when you want to swap it out, such as if a drive gets flaky. Shrink the volume, remove the drive, insert a new one then grow the volume back. You would lose some data, but better than losing ALL data.
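For the LVM case, the grow/shrink dance described above looks roughly like this; zm_vg / zm_events and the device names are placeholders, and note that ext4 grows online but must be shrunk before shrinking the LV:
Code:
# Grow: add a new drive to the volume group, then extend the LV and the filesystem
sudo pvcreate /dev/sdg
sudo vgextend zm_vg /dev/sdg
sudo lvextend -l +100%FREE /dev/zm_vg/zm_events
sudo resize2fs /dev/zm_vg/zm_events       # grows ext4 while mounted
# Remove a flaky drive: migrate its data to the remaining drives first
sudo pvmove /dev/sdc                      # needs enough free space on the other PVs
sudo vgreduce zm_vg /dev/sdc
sudo pvremove /dev/sdc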