Does hardware impose limits on the size of shared memory?
I am running a Dell E6330 3rd-generation i7 box with 8 GB of RAM. The system, by default, configures 4 GB of shared memory.
7 cams run just fine at 720p resolution, but when I bump them up to 1080p, the df command shows /dev/shm 100% used and just one of the 7 cams fails. Converting that cam back to 720p results in all cams running fine. Nothing I do to the kernel shmmax and shmall parameters seems to change the 4 GB shared-memory maximum. I would have thought shared memory could be limited by the size of RAM, not capped at half of it. Am I running into a Linux or hardware limitation here? Thx!
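For reference, the knobs I've been poking at can be read straight from /proc; a quick sketch of how I'm checking them (these are the standard SysV shared-memory sysctls):

```python
# Print the SysV shared-memory limits mentioned above.
for name in ("shmmax", "shmall"):
    with open(f"/proc/sys/kernel/{name}") as f:
        print(name, "=", f.read().strip())
```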
That's the output of the df command, which shows filesystem space utilization. In the output you will see that the device mounted at /dev/shm is nearly full. Despite the label, /dev/shm is a ramdisk (a tmpfs filesystem), not System V shared memory, which is why tuning the shmmax and shmall kernel parameters has no effect on it.
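If you'd rather not eyeball df, the same numbers are available programmatically. A minimal Python sketch, assuming /dev/shm is the tmpfs mount point, as it is on most distributions:

```python
import os

# Query the filesystem backing /dev/shm -- the same numbers df reports.
st = os.statvfs("/dev/shm")

total = st.f_blocks * st.f_frsize  # total size of the tmpfs mount
free = st.f_bavail * st.f_frsize   # space still available
used = total - free

gib = 1 << 30
print(f"/dev/shm: {total / gib:.2f} GiB total, "
      f"{used / gib:.2f} GiB used ({100 * used / total:.0f}%)")
```

On a default mount the total should come out to roughly half of physical RAM, which matches the 4 GB you're seeing on an 8 GB box.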
A ramdisk is a filesystem that resides in real memory rather than on a physical disk. Processes can read, write, and create files on it using normal file I/O operations, but because the filesystem is a ramdisk, the data stays in real memory and is never written to or read from a physical disk. The cost is that some of the physical memory is now dedicated to just a few processes and unavailable to the rest of the system. Access to a ramdisk may be faster than access to a physical disk, but not necessarily.
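To illustrate: a file under /dev/shm behaves exactly like an ordinary file, it just lives in RAM. A minimal sketch with a made-up filename:

```python
import os

path = "/dev/shm/ramdisk-demo.txt"  # hypothetical file on the tmpfs mount

# Ordinary file I/O -- open, write, read -- but no disk is ever touched.
with open(path, "w") as f:
    f.write("this data lives in RAM, not on disk\n")

with open(path) as f:
    print(f.read(), end="")

os.remove(path)  # deleting the file gives the memory back to the system
```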
The ZoneMinder designers chose to use a ramdisk. By default, Linux limits a tmpfs ramdisk to half of physical RAM, which is where your 4 GB ceiling comes from. A minor configuration change raises it: remount the tmpfs with a larger size= option, e.g. mount -o remount,size=6G /dev/shm, or make it permanent with the equivalent entry in /etc/fstab.
On my system, I needed a ramdisk only a little larger than the default.
Well, strictly, they chose to use memory-mapped files; those can actually live on a non-ramdisk filesystem, but it's pretty futile... unless the physical device is, say, a *hardware* ramdisk, which they do make.
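For the curious, the mechanism looks roughly like this; a minimal Python sketch, not ZoneMinder's actual code, with a made-up path and size. Each process maps the same file into its address space, and writes through one mapping are immediately visible through the others:

```python
import mmap
import os

path = "/dev/shm/mmap-demo.buf"  # hypothetical shared buffer on the tmpfs
size = 4096

# Create the backing file and give it a fixed size.
with open(path, "wb") as f:
    f.truncate(size)

fd = os.open(path, os.O_RDWR)
buf = mmap.mmap(fd, size)  # MAP_SHARED by default on Unix

# Writes land in the shared mapping; any other process that maps the
# same file would see them immediately.
buf[:11] = b"hello, zma!"
print(bytes(buf[:11]).decode())

buf.close()
os.close(fd)
os.remove(path)
```

Put the backing file on a disk filesystem instead of /dev/shm and the same code still runs; it's just that the kernel will eventually flush those pages to the physical device, which is the futility mentioned above.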
A lot of systems are shipping with SSDs instead of physical drives with spinning platters. Does anyone know if there is still a significant benefit to a ramdisk held in RAM versus the same files stored on an SSD? File I/O has a ton of overhead compared to a non-I/O-based implementation, but avoiding it drives up software complexity.
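One way to get a feel for the difference is to time bulk writes to each location. A rough sketch: both target paths are assumptions (pick a directory you know is disk-backed; on many systems /tmp is itself a tmpfs), and this measures sequential throughput only, not the small frame-sized writes a camera buffer actually does:

```python
import os
import time

def write_throughput(path, mib=256):
    """Write `mib` MiB to `path` and return MiB/s."""
    chunk = os.urandom(1 << 20)  # 1 MiB of incompressible data
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(mib):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())  # essentially free on tmpfs, a real cost on SSD
    elapsed = time.perf_counter() - start
    os.remove(path)
    return mib / elapsed

print(f"tmpfs: {write_throughput('/dev/shm/bench.tmp'):.0f} MiB/s")
print(f"ssd:   {write_throughput('/var/tmp/bench.tmp'):.0f} MiB/s")  # /var/tmp is usually disk-backed
```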
My inclination is to say that if you put your shm on an SSD, you're going to burn it out *really* fast; this is really a RAM job, papered over by a filesystem to make interprocess sharing easier.