Todo #14011


Update memory graphs to account for changes in memory reporting

Added by Jim Pingle 3 months ago. Updated about 1 month ago.

Operating System
Target version:
Start date:
Due date:
% Done:
Estimated time:
Plus Target Version:
Release Notes:


FreeBSD reports memory usage broken down differently than it did in the past. The code handling the memory graphs and usage counts (Dashboard system info widget, RRD Graphs/Status > Monitoring, etc) needs to be updated to match the current methods.

For example, the RRD data file and data collection script use the following values:

MEM=`/sbin/sysctl -qn vm.stats.vm.v_page_count vm.stats.vm.v_active_count vm.stats.vm.v_inactive_count vm.stats.vm.v_free_count vm.stats.vm.v_cache_count vm.stats.vm.v_wire_count |  /usr/bin/awk '{getline active;getline inactive;getline free;getline cache;getline wire;printf ((active/$0) * 100)":"((inactive/$0) * 100)":"((free/$0) * 100)":"((cache/$0) * 100)":"(wire/$0 * 100)}'`
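As a quick sanity check of that awk arithmetic, the same pipeline can be fed fixed sample counts instead of live sysctl output (the numbers below are hypothetical, and a plain `awk` stands in for the full path):

```shell
# Sample counts in the order the script reads them:
# total, active, inactive, free, cache, wire (hypothetical values).
MEM=`printf '1000\n200\n100\n500\n0\n200\n' | awk '{getline active;getline inactive;getline free;getline cache;getline wire;printf ((active/$0) * 100)":"((inactive/$0) * 100)":"((free/$0) * 100)":"((cache/$0) * 100)":"(wire/$0 * 100)}'`
echo "$MEM"
```

With those inputs the result is the colon-separated percentage string the RRD update expects, e.g. 20:10:50:0:20.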

Which queries this list of OIDs:

vm.stats.vm.v_page_count
vm.stats.vm.v_active_count
vm.stats.vm.v_inactive_count
vm.stats.vm.v_free_count
vm.stats.vm.v_cache_count
vm.stats.vm.v_wire_count

However, on current versions the cache count is always 0 since it's now a dummy value kept for compatibility, and there are new values for the laundry and user-wired counts that may be interesting to graph.

vm.stats.vm.v_active_count: Active pages
vm.stats.vm.v_cache_count: Dummy for compatibility
vm.stats.vm.v_free_count: Free pages
vm.stats.vm.v_inactive_count: Inactive pages
vm.stats.vm.v_laundry_count: Pages eligible for laundering
vm.stats.vm.v_page_count: Total number of pages in system
vm.stats.vm.v_user_wire_count: User-wired virtual memory
vm.stats.vm.v_wire_count: Wired pages

So we need to check around the system and make sure that anything performing memory calculations is doing so properly: for example, dropping any references to vm.stats.vm.v_cache_count and, where appropriate, including the laundry count in calculations.
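An updated collection pipeline might query v_laundry_count in place of the dummy v_cache_count. The sketch below (all names and numbers are illustrative, not the final pfSense code) factors the awk step into a function so it can be exercised with sample counts; on a live system the same function would be fed by sysctl:

```shell
# OIDs an updated script might query: v_cache_count dropped,
# v_laundry_count added (assumption, not the shipped list).
OIDS="vm.stats.vm.v_page_count vm.stats.vm.v_active_count vm.stats.vm.v_inactive_count vm.stats.vm.v_free_count vm.stats.vm.v_laundry_count vm.stats.vm.v_wire_count"

# Reads total, active, inactive, free, laundry, wire counts (one per
# line) and prints each as a percentage of the total page count.
mem_pcts() {
    awk '{getline active;getline inactive;getline free;getline laundry;getline wire;
          printf "%.1f:%.1f:%.1f:%.1f:%.1f", (active/$0)*100, (inactive/$0)*100, (free/$0)*100, (laundry/$0)*100, (wire/$0)*100}'
}

# On a live system: sysctl -qn $OIDS | mem_pcts
# With hypothetical sample counts:
printf '1000\n200\n100\n500\n50\n150\n' | mem_pcts
```

Keeping the math in one function means the dashboard widget and the RRD collector could share the same percentage logic.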

This should also include upgrade code to adjust existing memory RRD data files to include the new fields in question, and updates to the Status > Monitoring frontend graphs to display the new values.
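One possible upgrade path relies on rrdtool 1.5+ being able to add a data source to an existing file with `tune`. The DS names, heartbeat, and max below are illustrative assumptions rather than the actual pfSense definitions, and the loop only echoes the commands as a dry run:

```shell
#!/bin/sh
# Dry-run sketch: print the rrdtool tune commands that would add the
# new data sources to an existing memory RRD file.
RRD=/var/db/rrd/system-memory.rrd
NEW_DS="DS:laundry:GAUGE:120:0:10000000000 DS:userwire:GAUGE:120:0:10000000000"
for ds in $NEW_DS; do
    # On a live system this would invoke rrdtool directly instead of echoing.
    echo "rrdtool tune $RRD $ds"
done
```

Newly added data sources start out as unknown (U) until fresh samples arrive, so old graph history simply shows gaps for the new fields.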


Actions #1

Updated by Jim Pingle 3 months ago

  • Description updated (diff)
Actions #2

Updated by Jim Pingle 3 months ago

  • Assignee set to Jim Pingle

I'll take a look at this one soonish

Actions #3

Updated by Jim Pingle 3 months ago

Internal MR:

Initial diff attached here as well since that URL is not public.

You can install the System Patches package, create an entry for the patch, paste it in, and apply the fix. Reboot after applying to ensure the existing memory database is upgraded properly.

Actions #4

Updated by Jim Pingle 3 months ago

  • Status changed from Pull Request Review to Feedback
Actions #5

Updated by Jim Pingle about 2 months ago

  • Status changed from Feedback to In Progress

Looks like the command that gets run at boot to put "unknown" values into the RRD (in the source:/src/etc/inc/ file) isn't using enough data sources now. It throws an error, but it's mostly cosmetic. Good to fix, but it doesn't need a patches entry.

Mar 27 13:52:58     php-cgi     2007     rc.bootup: The command '/usr/bin/nice -n20 /usr/local/bin/rrdtool update /var/db/rrd/system-memory.rrd N:U:U:U:U:U' returned exit code '1', the output was 'ERROR: /var/db/rrd/system-memory.rrd: expected 8 data source readings (got 5) from N' 
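The error above is the update string carrying five U placeholders against an RRD that now defines eight data sources. A small helper like this (a sketch, not the pfSense code) would build the argument from the DS count so the two can't drift apart:

```shell
#!/bin/sh
# Build the rrdtool "unknown" update argument for a given number of
# data sources, e.g. N:U:U:U:U:U:U:U:U for 8 DSes.
unknown_arg() {
    n=$1
    arg="N"
    i=0
    while [ "$i" -lt "$n" ]; do
        arg="$arg:U"
        i=$((i + 1))
    done
    echo "$arg"
}
unknown_arg 8
```

The caller would then pass the result to `rrdtool update /var/db/rrd/system-memory.rrd` instead of a hard-coded string.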
Actions #6

Updated by Jim Pingle about 2 months ago

  • Status changed from In Progress to Feedback
Actions #7

Updated by Chris Linstruth about 2 months ago

If cache is now always 0 why is it 28.8%?

Shell Output - sysctl -a | grep vm.stats.vm.v

vm.stats.vm.v_pdpages: 1182959
vm.stats.vm.v_tcached: 0
vm.stats.vm.v_cache_count: 0
vm.stats.vm.v_user_wire_count: 0
vm.stats.vm.v_free_severe: 721
vm.stats.vm.v_interrupt_free_min: 2
vm.stats.vm.v_pageout_free_min: 34
vm.stats.vm.v_laundry_count: 11067
vm.stats.vm.v_inactive_count: 20514
vm.stats.vm.v_inactive_target: 5607
vm.stats.vm.v_active_count: 12592
vm.stats.vm.v_wire_count: 102289
vm.stats.vm.v_free_count: 25908
vm.stats.vm.v_free_min: 1152
vm.stats.vm.v_free_target: 3738
vm.stats.vm.v_free_reserved: 290
vm.stats.vm.v_page_count: 172507
vm.stats.vm.v_page_size: 4096
vm.stats.vm.v_kthreadpages: 0
vm.stats.vm.v_rforkpages: 0
vm.stats.vm.v_vforkpages: 889320
vm.stats.vm.v_forkpages: 2529521
vm.stats.vm.v_kthreads: 21
vm.stats.vm.v_rforks: 0
vm.stats.vm.v_vforks: 16102
vm.stats.vm.v_forks: 67070
vm.stats.vm.v_tfree: 19505580
vm.stats.vm.v_pfree: 14365915
vm.stats.vm.v_dfree: 30588
vm.stats.vm.v_pdshortfalls: 8
vm.stats.vm.v_pdwakeups: 41
vm.stats.vm.v_reactivated: 17792
vm.stats.vm.v_intrans: 127
vm.stats.vm.v_vnodepgsout: 20146
vm.stats.vm.v_vnodepgsin: 37872
vm.stats.vm.v_vnodeout: 14374
vm.stats.vm.v_vnodein: 5110
vm.stats.vm.v_swappgsout: 1925
vm.stats.vm.v_swappgsin: 662
vm.stats.vm.v_swapout: 299
vm.stats.vm.v_swapin: 114
vm.stats.vm.v_ozfod: 1360
vm.stats.vm.v_zfod: 12694908
vm.stats.vm.v_cow_optim: 64
vm.stats.vm.v_cow_faults: 2900099
vm.stats.vm.v_io_faults: 4865
vm.stats.vm.v_vm_faults: 16891972

Actions #8

Updated by Jim Pingle about 2 months ago

Because the "cache" value is now a dummy in the FreeBSD sysctl tree, I used "cache" on the graph for the ZFS ARC (when using ZFS) or the UFS directory hash (when using UFS), and subtracted those from the wired count on the graph so it still totals up correctly.
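The adjustment is simple arithmetic; a minimal sketch with made-up byte counts (on a ZFS system the ARC size would come from something like kstat.zfs.misc.arcstats.size, an assumption here):

```shell
#!/bin/sh
# Report the ZFS ARC size as "cache" and subtract it from wired so the
# graph segments still sum to the total. Sample byte counts are made up.
wired=1073741824      # raw wired bytes from the kernel (hypothetical)
arc=268435456         # ARC size in bytes (hypothetical)
cache=$arc
wired_display=$((wired - arc))
echo "cache=$cache wired=$wired_display"
```

This is why the graph can show a sizable "cache" slice even though vm.stats.vm.v_cache_count itself reads 0.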

See my notes on 0d83ed084a987f3446a0cbdcf249fc5b8722726f. I updated the docs to reflect this as well, since this code is in the System Patches package.

Actions #9

Updated by Chris Linstruth about 1 month ago

OK cool.

Actions #10

Updated by Jim Pingle about 1 month ago

  • Status changed from Feedback to Resolved

This all appears to be working as expected. It's also available in the system patches package and people have been running it there successfully.

