ZFS

From pressy's brainbackup

Limit the ARC Cache

To prevent ZFS from consuming all of the system's memory, you should limit the size of the ZFS ARC cache. Otherwise, applications with large RAM requirements can see poor performance while they wait for the ARC cache to be freed up again.

Setting the ARC Cache

# tail -3 /etc/system
*** ZFS ARC Cache Limitation (10GB)
set zfs:zfs_arc_max=0x280000000
********
# init 6
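The value for zfs_arc_max is given in bytes, as hex. A quick way to compute it for a given limit (plain shell arithmetic; the 10 GiB figure matches the example above):

```shell
# Compute the zfs_arc_max value for a 10 GiB ARC limit:
printf '0x%x\n' $((10 * 1024 * 1024 * 1024))
# prints 0x280000000
```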

Limit ARC Cache without reboot

This depends on the Solaris version; the following works on Solaris 10u10, but I am not sure about older releases. In this example the cache is limited to 10 GB, so that the FS cache does not eat up my (nice) 512 GB of RAM...

root@server # mdb -kw
Loading modules: [ unix genunix dtrace specfs ufs sd mpt px md qlc fctl fcp ssd sockfs ip hook neti sctp arp usba nca zfs random sppp cpc crypto fcip logindmux ptm nfs mpt_sas mr_sas ipc isp ]
> arc_stats::print -a arcstat_p.value.ui64 arcstat_c.value.ui64 arcstat_c_max.value.ui64
7054e528 arcstat_p.value.ui64 = 0x3ec5082e00
7054e558 arcstat_c.value.ui64 = 0x7d8a150000
7054e5b8 arcstat_c_max.value.ui64 = 0x7d8a150000
> 7054e528/Z 0x140000000
arc_stats+0x500:0x3ec5082e00            =       0x140000000
> 7054e558/Z 0x280000000
arc_stats+0x530:0x7d8a150000            =       0x280000000
> 7054e5b8/Z 0x280000000
arc_stats+0x590:0x7d8a150000            =       0x280000000
> ::quit
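The three writes above set arc_p, arc_c, and arc_c_max; in this session arc_p is set to half of the 10 GiB target (that split is just what this example uses, not a fixed rule). The hex values check out:

```shell
# arc_c / arc_c_max: 10 GiB; arc_p: half of that, as in the mdb session
printf '0x%x\n' $((10 * 1024 * 1024 * 1024))       # 0x280000000
printf '0x%x\n' $((10 * 1024 * 1024 * 1024 / 2))   # 0x140000000
```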
 
root@server # kstat -m zfs | head
module: zfs                             instance: 0
name:   arcstats                        class:    misc
        c                               10737418240
        c_max                           10737418240
        c_min                           67398443008
        crtime                          511.291508
        data_size                       10702071808
        deleted                         3378
        demand_data_hits                0
        demand_data_misses              0
 
#########  decimal view #########
root@server # echo "arc_stats::print -d arcstat_size.value.ui64" | mdb -k
arcstat_size.value.ui64 = 0t10737535816
root@server # echo "10737535816/1024/1024/1024" | bc
10
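For comparison, the /etc/system limit converted to decimal, next to the measured arcstat_size from above (plain shell arithmetic instead of bc):

```shell
# 0x280000000 in decimal, and the measured ARC size in integer GiB:
echo $((0x280000000))                       # 10737418240
echo $((10737535816 / 1024 / 1024 / 1024))  # 10
```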
root@server #  mdb -k
Loading modules: [ unix genunix dtrace specfs ufs sd mpt px md qlc fctl fcp ssd sockfs ip hook neti sctp arp usba nca zfs random sppp cpc crypto fcip logindmux ptm nfs mpt_sas mr_sas ipc isp ]
>  ::memstat
Page Summary                Pages                MB  %Tot
------------     ----------------  ----------------  ----
Kernel                    3445179             26915    5%
ZFS File Data             1308485             10222    2%
Anon                     46117794            360295   70%
Exec and libs              163098              1274    0%
Page cache                 583288              4556    1%
Free (cachelist)          1136262              8877    2%
Free (freelist)          13195758            103091   20%
 
Total                    65949864            515233
Physical                 65927716            515060
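The MB column in ::memstat is simply pages times the page size; the numbers above imply 8 KiB pages, as expected on SPARC (an inference from this output, not something the tool states):

```shell
# "ZFS File Data": 1308485 pages * 8 KiB / 1024 = the MB value shown above
echo $((1308485 * 8 / 1024))   # 10222
```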
> 

New Way @ Solaris 11.1+

There is a new kernel parameter in Solaris 11 (11.1 SRU 20.5) which lets you set a percentage of memory to reserve for user data; the ARC then shrinks to respect it. Oracle also provides a script to tune this on a live system (MOS Doc ID 1663861.1).

root@server# ./set_user_reserve.sh -fp 85
Adjusting user_reserve_hint_pct from 0 to 85
Tue Jun 23 21:57:47 CEST 2015 : waiting for current value : 53 to grow to target : 55
Tue Jun 23 21:58:00 CEST 2015 : waiting for current value : 55 to grow to target : 60
Tue Jun 23 21:58:21 CEST 2015 : waiting for current value : 72 to grow to target : 75
Tue Jun 23 21:58:29 CEST 2015 : waiting for current value : 77 to grow to target : 80
Tue Jun 23 21:58:36 CEST 2015 : waiting for current value : 82 to grow to target : 85
Adjustment of user_reserve_hint_pct to 85 successful.
Make the setting persistent across reboot by adding to /etc/system

#
# Tuning based on MOS note 1663861.1, script version 1.0
# added Tue Jun 23 22:06:23 CEST 2015 by system administrator : <me>
set user_reserve_hint_pct=85

root@server#
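As a rough sketch of what the reservation means on this box (the 512 GB figure is from the example above; the actual headroom left for the kernel and the ARC will differ in practice):

```shell
# With user_reserve_hint_pct=85 on ~512 GB RAM, roughly this many GB
# remain for the kernel, including the ZFS ARC:
echo $((512 * (100 - 85) / 100))   # 76
```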