Zoneminder stops recording at 75% disk space

Forum for questions and support relating to the 1.24.x releases only.
theforce
Posts: 129
Joined: Tue May 11, 2010 5:22 am

Zoneminder stops recording at 75% disk space

Post by theforce »

I'm having a problem with one of my ZoneMinder installations: it stops recording when disk usage reaches 75%. I think it's a disk space issue, because when I free up some space it starts recording again.

I'm running Ubuntu 10.04 and ZoneMinder 1.24.2, recording files to a 120GB ext4 drive.

I have the file purge set to 95% and to run in the background.

Any ideas on why it's doing this?
bb99
Posts: 943
Joined: Wed Apr 02, 2008 12:04 am

Post by bb99 »

Take a look at your actual disk usage via a file manager or disk usage tool. I'm going to guess you don't have a separate /var partition and that 75% equals full. Set purge-when-full to some value less than 70% (65% or less for large events) and you'll be OK (be sure to select "run in background").
theforce
Posts: 129
Joined: Tue May 11, 2010 5:22 am

Post by theforce »

My main drive is the boot drive; it's 10GB and 34% used.

The second drive is the 120GB one, and it has just the events folder on it.

Here are the results of df -T:
Filesystem Type 1K-blocks Used Available Use% Mounted on
/dev/sda1 ext4 9341440 2957288 5909632 34% /
none devtmpfs 185696 220 185476 1% /dev
none tmpfs 189916 100 189816 1% /dev/shm
none tmpfs 189916 140 189776 1% /var/run
none tmpfs 189916 0 189916 0% /var/lock
none tmpfs 189916 0 189916 0% /lib/init/rw
/dev/sdb1 ext4 115377640 79578568 29938160 73% /media/data
Paranoid
Posts: 129
Joined: Thu Feb 05, 2009 10:40 pm

Post by Paranoid »

theforce wrote:My main drive is the boot drive; it's 10GB and 34% used.

The second drive is the 120GB one, and it has just the events folder on it.

Here are the results of df -T:
Filesystem Type 1K-blocks Used Available Use% Mounted on
/dev/sda1 ext4 9341440 2957288 5909632 34% /
none devtmpfs 185696 220 185476 1% /dev
none tmpfs 189916 100 189816 1% /dev/shm
none tmpfs 189916 140 189776 1% /var/run
none tmpfs 189916 0 189916 0% /var/lock
none tmpfs 189916 0 189916 0% /lib/init/rw
/dev/sdb1 ext4 115377640 79578568 29938160 73% /media/data
Check the inode usage. You might be running out of inodes.
Run "df -i"
theforce
Posts: 129
Joined: Tue May 11, 2010 5:22 am

Post by theforce »

I've never heard of inodes. That must be my problem. So how do I fix it?


Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/sda1 593344 166817 426527 29% /
none 46424 703 45721 2% /dev
none 47479 3 47476 1% /dev/shm
none 47479 57 47422 1% /var/run
none 47479 3 47476 1% /var/lock
none 47479 1 47478 1% /lib/init/rw
none 593344 166817 426527 29% /var/lib/ureadahead/debugfs
/dev/sdb1 7331840 7185613 146227 99% /media/data
Paranoid
Posts: 129
Joined: Thu Feb 05, 2009 10:40 pm

Post by Paranoid »

theforce wrote:I've never heard of inodes. That must be my problem. So how do I fix it?


Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/sda1 593344 166817 426527 29% /
none 46424 703 45721 2% /dev
none 47479 3 47476 1% /dev/shm
none 47479 57 47422 1% /var/run
none 47479 3 47476 1% /var/lock
none 47479 1 47478 1% /lib/init/rw
none 593344 166817 426527 29% /var/lib/ureadahead/debugfs
/dev/sdb1 7331840 7185613 146227 99% /media/data
The bad news is that you cannot change the number of inodes once you have created the filesystem.

The good news is that you can set the number of inodes when you create it.

Try the following ***** WARNING: YOU WILL LOSE ALL DATA ON THE DISK ******

mkfs -t ext4 -i 8192 /dev/sdb1

This will reduce the available space by a small amount but should ensure you have enough inodes to fill the disk (it should double the number of inodes).
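
For reference, the full sequence might look something like this. This is only a sketch: it assumes the events partition is /dev/sdb1 mounted at /media/data (as in your df output), that there is an fstab entry for it, and that the init script is named zoneminder; adjust for your setup.

# stop ZoneMinder so nothing is writing to the events partition
sudo /etc/init.d/zoneminder stop

# unmount and recreate the filesystem with one inode per 8192 bytes
# ***** WARNING: this destroys everything on /dev/sdb1 *****
sudo umount /media/data
sudo mkfs -t ext4 -i 8192 /dev/sdb1

# remount and confirm the new inode count
sudo mount /media/data
df -i /media/data

# start ZoneMinder again
sudo /etc/init.d/zoneminder start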
theforce
Posts: 129
Joined: Tue May 11, 2010 5:22 am

Post by theforce »

Thanks. I just ran the command on the drive. I'll report back in a few weeks on whether it worked.

I have another ZoneMinder installation that's going to have the same problem. How did you get the number 8192? I want to do the same thing to my 1.5TB drive, but I'm guessing 8192 won't work on it.
Paranoid
Posts: 129
Joined: Thu Feb 05, 2009 10:40 pm

Post by Paranoid »

theforce wrote:Thanks. I just ran the command on the drive. I'll report back in a few weeks on whether it worked.

I have another ZoneMinder installation that's going to have the same problem. How did you get the number 8192? I want to do the same thing to my 1.5TB drive, but I'm guessing 8192 won't work on it.
If you run "dumpe2fs -h /dev/sdb1" it dumps out information about your filesystem. The important bits are:

Inode count:
Block count:
Block size:

You can calculate the value of the -i (bytes per inode) parameter with the following formula:

bytes per inode = (Block size) * (Block count) / (Inode count)

Then round the result up to the nearest power of 2 (2, 4, 8, 16 ... 8192, 16384, etc.). I calculated yours as 16384 from the output of the df commands. If you decrease the "bytes per inode" value, the number of inodes automatically increases, since the block size and block count stay the same. The minimum value for "bytes per inode" is the block size.
There are disadvantages to lowering the "bytes per inode" value. For example, you lost about 1.7G of disk space when you set the value to 8192.

You can roughly calculate whether you will run out of inodes or disk space first. Assuming the disk is only used for event images, check what size the images are. If they are larger than the calculated "bytes per inode" value, you will run out of disk space before you run out of inodes. If they are smaller, you will run out of inodes first.
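
If it helps, the same arithmetic can be scripted straight from the dumpe2fs header. A sketch only, assuming the events partition is /dev/sdb1:

sudo dumpe2fs -h /dev/sdb1 2>/dev/null | awk -F: '
  /^Inode count/ { inodes = $2 }
  /^Block count/ { blocks = $2 }
  /^Block size/  { bsize  = $2 }
  END { printf "bytes per inode: %d\n", bsize * blocks / inodes }'

Using the figures from your df output (about 115377640 1K blocks and 7331840 inodes), this comes out a little under 16384, which is where that number came from.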
theforce
Posts: 129
Joined: Tue May 11, 2010 5:22 am

Post by theforce »

Thanks for the help.
theforce
Posts: 129
Joined: Tue May 11, 2010 5:22 am

Post by theforce »

OK, I guess my 1.5TB drive was the same. I came up with 16384, so I set it to 8192. That doubled the inodes, so I think I'm all good now. I'll just have to remember to set the inodes for future ZoneMinder installations.
kingofkya
Posts: 1110
Joined: Mon Mar 26, 2007 6:07 am
Location: Las Vegas, Nevada

Post by kingofkya »

Using the deep file structure built into ZM will fix this issue. I have a 1.5TB system on ext3 and I never set inodes.
theforce
Posts: 129
Joined: Tue May 11, 2010 5:22 am

Post by theforce »

I don't think that will work, since the deep file structure fixes the 32k-files-per-folder limit. My problem is that I have too many files in total on the whole drive. That's my understanding, anyway.
Paranoid
Posts: 129
Joined: Thu Feb 05, 2009 10:40 pm

Post by Paranoid »

kingofkya wrote:Using the deep file structure built into ZM will fix this issue. I have a 1.5TB system on ext3 and I never set inodes.
Every file and directory in an ext3/4 filesystem requires exactly one inode. What the calculation above tells you is that one inode was created for every 16k of space on the disk. If the images are larger than 16k, you will not run out of inodes before you run out of disk space. If the images are smaller than 16k by at least a block size (4k), you will run out of inodes before you run out of disk space.

The 32k-files-per-folder limit is a separate issue.
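
A quick way to check which limit you will hit first is to compare the average file size under the events directory with the bytes-per-inode figure. A rough sketch, assuming the events live under /media/data/events (adjust the path to wherever ZM stores events on your system):

# average size, in bytes, of the files ZoneMinder has written so far
find /media/data/events -type f -printf '%s\n' | \
  awk '{ total += $1; n++ } END { if (n) printf "%d files, average %d bytes\n", n, total / n }'

If the average is well above the bytes-per-inode value the filesystem was created with, you will run out of disk space first; if it is below, inodes will go first.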
theforce
Posts: 129
Joined: Tue May 11, 2010 5:22 am

Post by theforce »

Looks like increasing the inodes fixed the problem. I'm now at 80% disk space with 54% inodes.

Thanks!
xeo
Posts: 3
Joined: Tue Sep 07, 2010 7:44 am

Post by xeo »

Hi,
I'm a newbie to ZoneMinder and I'm evaluating it, and the only problem I haven't found a solution for is this one.
I'm running Ubuntu 10.04; I changed the deep structure, but it didn't solve the problem.
The inodes are at 8192, I think.
A df -i command shows that the system has run out of inodes.

Any ideas on how to solve this?

Thank you in advance.

P.S. - ZoneMinder is running, but I can't see the cameras in montage view or directly.