Can't increase Image Buffer Size (frames) past 40

Forum for questions and support relating to the 1.24.x releases only.
BlankMan
Posts: 147
Joined: Tue Jan 19, 2010 2:53 am
Location: Milwaukee, WI USA

Can't increase Image Buffer Size (frames) past 40

Post by BlankMan »

In openSuSE 11.2

Code: Select all

Jan 22 22:52:33 router zma_m1[13984]: INF [Debug Level = 0, Debug Log = <none>]
Jan 22 22:52:33 router zma_m1[13984]: ERR [Can't shmget, probably not enough shared memory space free: Invalid argument]
My shared memory settings are:

kernel.shmmax = 18446744073709551615
kernel.shmall = 1152921504606846720
kernel.shmmni = 4096

So what's up?
jfkastner
Posts: 74
Joined: Wed Jun 17, 2009 11:52 pm

Post by jfkastner »

Did you solve this one?

I think your memory settings are scarily high; you don't need to allow more shared memory than you actually have ...

Also, some distros use the real memory size (bytes) for shmmax and shmall, while some use pages (of 4096 bytes) - check your actual limits with 'ipcs -l'.
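Something like this shows both views side by side (just a sketch; output varies by distro):

Code: Select all

# what the kernel currently enforces
ipcs -l | grep -i 'seg\|total'
# the raw sysctl values, for comparison
sysctl kernel.shmmax kernel.shmall
# the page size, in case shmall is counted in pages
getconf PAGE_SIZE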
BlankMan
Posts: 147
Joined: Tue Jan 19, 2010 2:53 am
Location: Milwaukee, WI USA

Post by BlankMan »

No, I didn't change the shared memory settings. For some reason, from what I can tell, SuSE has been using those memory settings for quite a while; there's a lot of posts in forums asking about them. I'm kind of leery to change them until I can understand why SuSE is doing that - just playing it safe. I've got 8G of memory, but I really want to understand why they are doing that so I don't break anything. If anyone has any ideas or suggestions what to set them to, I'd be willing to give it a try.
kevin_robson
Posts: 247
Joined: Sun Jan 16, 2005 11:26 am

Post by kevin_robson »

I use Suse 11.1 and it didn't even set those two settings by default - they were blank before I added them. Not sure why 11.2 would.
Those do look far too high.
Set them both to 256000000.
You can always put them back; you won't do permanent damage.
Run sysctl -p afterwards, or better still reboot.
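E.g., something like this in /etc/sysctl.conf (just a sketch - if your distro counts shmall in pages this allows far more than you need, but it is still safe):

Code: Select all

# /etc/sysctl.conf
kernel.shmmax = 256000000
kernel.shmall = 256000000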
BlankMan
Posts: 147
Joined: Tue Jan 19, 2010 2:53 am
Location: Milwaukee, WI USA

Post by BlankMan »

The part I'm not understanding here is that usually, when a process has a problem allocating memory, it's because the values are not set high enough. In this case I believe they are set to the maximum, so how can that be the case? Unless the memory has become too fragmented and it can't allocate a contiguous block?
BlankMan
Posts: 147
Joined: Tue Jan 19, 2010 2:53 am
Location: Milwaukee, WI USA

Post by BlankMan »

And it gets weirder. I've been trying to track down why these values appear not to be enough...

My original kernel settings were:

kernel.shmmax = 18446744073709551615
kernel.shmall = 1152921504606846720
kernel.shmmni = 4096

I cut them in half and, using echo, set them to:

kernel.shmmax = 9223372036854775807
kernel.shmall = 576460752303423487
kernel.shmmni = 4096
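For reference, the commands were along these lines (writing to /proc directly, so the change doesn't survive a reboot):

Code: Select all

echo 9223372036854775807 > /proc/sys/kernel/shmmax
echo 576460752303423487 > /proc/sys/kernel/shmall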

Without ever stopping ZM, I was then able to increase the buffers on both cameras to 60, and zmc ran.

To verify this was the fix, I set the buffers back down to 40 and went back to the original settings of:

kernel.shmmax = 18446744073709551615
kernel.shmall = 1152921504606846720
kernel.shmmni = 4096

Again, without stopping ZM, I tried to set the buffers to 60 again, and zmc started - no shmget error. I even successfully set the buffers to 400 on one camera as a test, and it worked.

This leads me to believe that it just may be a memory fragmentation issue.

I'm going to leave it there and let it run for a while at 40, then try to go back up to 60 and see if it fails again. If it does, I'm hoping I can determine the maximum contiguous free memory block size available; if that turns out to be less than what zmc is trying to allocate for a camera (based on buffers and resolution) when the problem occurs, that would sure point to memory fragmentation. I've used tools to get the maximum contiguous free memory size in UNIX before, but I've never tried it in Linux, so I hope I can.
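If anyone knows the right tool here, speak up; my guess is that /proc/buddyinfo is the closest thing on Linux - it lists free physical memory blocks by contiguous size, per zone:

Code: Select all

# each column is the count of free blocks of 2^0, 2^1, ... 2^10 pages
cat /proc/buddyinfo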
jfkastner
Posts: 74
Joined: Wed Jun 17, 2009 11:52 pm

Post by jfkastner »

Post your

ipcs -l

and

ipcs -m

Again, change the numbers in sysctl and see how the reported amounts change, to find out IF you need # of bytes or # of pages in there (depends on the distro/kernel version).
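E.g. a quick test (a sketch - remember to set it back afterwards):

Code: Select all

# set shmall to a known value ...
sysctl -w kernel.shmall=262144
# ... then check what ipcs reports: 1048576 kbytes means shmall
# is counted in 4096-byte pages, 256 kbytes means plain bytes
ipcs -l | grep 'max total'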

Also try much lower, realistic sizes, e.g. 100MB per camera. You only need

xresolution x yresolution x 3 x #bufferframes (in BYTES, for EACH camera)

(the 3, of course, is 24 color bits / 8 bits per byte)
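So for a hypothetical 640x480 color cam with a 40-frame buffer:

Code: Select all

# width x height x 3 bytes x frames
echo $((640*480*3*40))    # 36864000 bytes, about 35MB per camera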

I've got 11 IP cams on 768MByte running fine - you've got 18446744 TB ...
BlankMan
Posts: 147
Joined: Tue Jan 19, 2010 2:53 am
Location: Milwaukee, WI USA

Post by BlankMan »

Thanks, here ya go. Yeah, I know how to lower it, and it appears lowering it will correct it; I just can't understand why higher is the problem. Like I said, this type of problem is usually because the values are too low, not too high. I'd really like to understand the why here.

I was starting to wonder if it's message related - msgmax, msgmni, & msgmnb - but I haven't played around with those numbers yet.

Or "min seg size" being 1 byte; that really has the potential to fragment the memory on a busy machine.

# ipcs -l

------ Shared Memory Limits --------
max number of segments = 4096
max seg size (kbytes) = 18014398509481983
max total shared memory (kbytes) = 4611686018427386880
min seg size (bytes) = 1

------ Semaphore Limits --------
max number of arrays = 1024
max semaphores per array = 250
max semaphores system wide = 256000
max ops per semop call = 32
semaphore max value = 32767

------ Messages: Limits --------
max queues system wide = 4000
max size of message (bytes) = 65536
default max size of queue (bytes) = 65536

# ipcs -m

------ Shared Memory Segments --------
key shmid owner perms bytes nattch status
0x00000000 4456448 root 777 135168 1
0x7a6d0001 6291457 wwwrun 700 41473524 2
0x7a6d0002 6324226 wwwrun 700 40551924 2
0x00000000 6356995 root 600 67108864 11 dest
0x0056a4d5 4390916 root 660 488 0
0x0056a4d6 4423685 root 660 65536 0
jfkastner
Posts: 74
Joined: Wed Jun 17, 2009 11:52 pm

Post by jfkastner »

It says

0x7a6d0001 6291457 wwwrun 700 41473524 2
0x7a6d0002 6324226 wwwrun 700 40551924 2

There you've got 2 cams; each uses about 40MB, which seems OK.

My guess is that since ZM is a mix of PHP, AJAX, C, JS, etc., there are some pointers/memory references that can NOT handle thousands of TB.

Also, every OS has limits on the # of pages allocated, allowed, managed, etc.

So I guess if you stick with low, realistic numbers it's more likely to work (since nobody has tested it on, let's say, more than 8 or 16 GB).

I don't think fragmentation is a problem, since it's all virtual anyway (with GDTs etc.) - unless you exceed the manageable number of pages! That's a hard limit that some kernel developer set (and forgot to tell anyone about).
Paranoid
Posts: 129
Joined: Thu Feb 05, 2009 10:40 pm

Post by Paranoid »

BlankMan wrote: Again, without stopping ZM, I tried to set the buffers to 60 again, and zmc started - no shmget error. I even successfully set the buffers to 400 on one camera as a test, and it worked.

This leads me to believe that it just may be a memory fragmentation issue.
This is all down to how the OS allocates and manages shared memory.
Once a shared memory segment is allocated, its size cannot change beyond the original allocation. This means that, for example, you could start ZM with a 60-frame buffer, reduce it to 40, and then increase it back to 60, but you could not increase it beyond the initial 60. You would also have problems increasing the image size, as this would also increase the shared memory needed.

The only solution is to de-allocate the memory prior to increasing it. This is what you effectively do when you stop and restart ZM.
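In shell terms, assuming the stock zmpkg.pl control script, that amounts to something like:

Code: Select all

zmpkg.pl stop        # all ZM processes detach
ipcs -m              # any leftover ZM segments? note the shmid
ipcrm -m <shmid>     # remove a stale segment by hand if one lingers
zmpkg.pl start       # segments are recreated at the new size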

Changing the code to do this automatically may not work, as the memory will not be freed until all processes using it have detached from it, and there may be several processes attached.
jfkastner
Posts: 74
Joined: Wed Jun 17, 2009 11:52 pm

Post by jfkastner »

Interestingly, I saw the increase in allocated memory right after the buffer size increase with "ipcs -m" (going bigger than ever before).

Of course that change did force zmc to restart, and that's when it starts using the memory.

Do a "watch ipcs -m" and see how it changes!

EDIT: it seems it can "grow" the buffer right away, but then NOT "shrink" it back to a lower number.
Paranoid
Posts: 129
Joined: Thu Feb 05, 2009 10:40 pm

Post by Paranoid »

jfkastner wrote: Interestingly, I saw the increase in allocated memory right after the buffer size increase with "ipcs -m" (going bigger than ever before).

Of course that change did force zmc to restart, and that's when it starts using the memory.

Do a "watch ipcs -m" and see how it changes!

EDIT: it seems it can "grow" the buffer right away, but then NOT "shrink" it back to a lower number.
If you saw an increase in the allocated memory, you would have also seen the shmid change. This would show that the original memory had been released and new shared memory then allocated.

From the shmget man page:
EINVAL A new segment was to be created and size less than SHMMIN or size greater than SHMMAX, or no new segment was to be created, a segment with given key existed, but size is greater than the size of that segment.
The "EINVAL" translates to "Invalid argument". Since the new buffer size isn't beyond the bounds of SHMMAX and SHMMIN, that only leaves "the key existed but size is greater than the size of the segment". In other words, once you have created a shared memory segment, you cannot increase its size.
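You can watch the failing call directly by running the capture daemon under strace (the binary name and monitor id here are just examples):

Code: Select all

# trace only the shmget calls made by zmc for monitor 1
strace -e trace=shmget zmc -m 1
# a failure shows up as something like:
#   shmget(0x7a6d0001, ..., 0700) = -1 EINVAL (Invalid argument)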
BlankMan
Posts: 147
Joined: Tue Jan 19, 2010 2:53 am
Location: Milwaukee, WI USA

Post by BlankMan »

Yep, I have to agree with that; this is what I saw:

Code: Select all

Previous:
0x7a6d0001  6291457   wwwrun     700        41473524   2
0x7a6d0002  6324226   wwwrun     700        40551924   2

Current:
0x7a6d0001 14024707   wwwrun     700        82946164   2
0x7a6d0002 14057477   wwwrun     700        60827444   2
I cut my shmmax and shmall in half on a running system and then was able to increase the number of buffers.