About database tuning on Node 1
Here is the general situation. We have two COMPAQ GS 160 hosts (OLTP servers) with the database on shared storage, and performance is currently not a major problem. The hardware was expanded earlier (node 1 went from 8 CPUs / 11 GB to 16 CPUs / 16 GB of memory; node 2 from 8), but the database parameters were never adjusted, for reasons such as keeping the system stable over the holidays, security considerations during the Two Sessions period, and not being able to interrupt the business during peak periods. We plan to make the adjustments tonight. Below are our considerations; please take a look and tell us whether they seem appropriate.
1. Operating system status
orabus@Ahyz1> vmstat 1 20
Virtual Memory Statistics: (pagesize = 8192)
procs memory pages intr cpu
r w u act free wire fault cow zero react pin pout in sy cs us sy id
48 959 310 961K 925K 168K 3G 353M 1G 26271 645M 0 5K 24K 22K 17 10 73
62 951 300 960K 925K 168K 2002 266 1010 0 443 0 7K 22K 22K 80 15 5
49 958 308 961K 925K 168K 2495 75 553 0 228 0 6K 25K 23K 83 14 3
53 954 308 961K 924K 168K 1972 173 784 0 598 0 6K 24K 22K 81 15 4
66 952 297 961K 924K 168K 1065 96 434 0 292 0 5K 22K 22K 80 12 8
52 955 311 962K 923K 168K 2067 75 427 0 234 0 8K 26K 24K 82 15 3
59 960 299 962K 923K 168K 2368 179 512 0 398 0 8K 24K 24K 82 15 3
62 959 300 963K 923K 168K 3022 173 979 0 602 0 8K 25K 24K 82 16 2
63 958 300 963K 922K 168K 2505 157 877 0 480 0 8K 28K 25K 80 17 3
65 952 303 963K 922K 168K 2006 98 821 0 366 0 8K 24K 25K 78 14 8
55 969 299 964K 921K 168K 4094 149 1464 0 457 0 7K 24K 23K 81 15 4
58 970 298 966K 920K 168K 3634 182 1393 0 682 0 5K 25K 23K 79 13 7
46 980 298 965K 920K 168K 1738 38 300 0 84 0 4K 24K 21K 80 14 6
49 974 300 965K 920K 168K 1660 139 558 0 442 0 5K 23K 22K 83 13 4
63 962 297 965K 920K 168K 1278 27 610 0 95 0 5K 25K 22K 82 13 5
56 964 305 966K 919K 168K 2396 86 490 0 298 0 8K 24K 24K 82 14 4
66 962 297 967K 918K 168K 2349 119 786 0 394 0 8K 26K 25K 80 15 5
40 986 298 967K 919K 168K 1801 66 1054 0 283 0 8K 22K 23K 79 16 5
45 969 305 967K 918K 168K 1569 95 673 0 301 0 9K 24K 24K 78 18 4
54 968 298 967K 918K 168K 1095 20 185 0 113 0 8K 23K 26K 80 16 4
orabus@Ahyz1>
orabus@Ahyz1> top
load averages: 28.47, 29.85, 30.70 11:23:08
643 processes: 26 running, 66 waiting, 216 sleeping, 328 idle, 7 zombie
CPU states: % user, % nice, % system, % idle
Memory: Real: 6669M/16G act/tot Virtual: 18727M use/tot Free: 6450M
PID USERNAME PRI NICE SIZE RES STATE TIME CPU COMMAND
524288 root 0 0 20G 609M run 617.0H 81.50% kernel idle
556369 buswork 56 0 3376M 1179K run 42:01 64.60% oracle
733961 buswork 42 0 3376M 1155K run 42:22 61.20% oracle
524817 buswork 42 0 3376M 1261K WAIT 43:12 48.70% oracle
750447 orabus 52 0 3382M 7086K run 6:53 47.20% oracle
677254 buswork 48 0 3385M 8462K sleep 35:09 41.90% oracle
525117 buswork 48 0 3385M 8437K run 33:47 40.20% oracle
960115 buswork 47 0 3385M 9740K run 35:30 38.40% oracle
807149 buswork 49 0 3385M 8445K run 33:15 38.00% oracle
654356 buswork 47 0 3377M 2056K run 31:50 37.20% oracle
1046508 buswork 48 0 3385M 9478K run 36:23 37.00% oracle
569891 buswork 49 0 3385M 8454K run 34:12 36.40% oracle
587602 buswork 48 0 3385M 8740K sleep 32:46 36.40% oracle
860992 buswork 47 0 3385M 8429K run 33:35 34.90% oracle
667424 buswork 49 0 3377M 2088K sleep 34:09 34.40% oracle
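To see which database sessions the busy oracle processes in the top listing belong to, the usual approach is to join v$session to v$process on the process address. The query below is only a minimal sketch: the OS pid (556369 here, taken from the top output above) is an example that has to be substituted by hand, and it assumes dedicated server connections where v$process.spid matches the OS pid.

select s.sid, s.serial#, s.username, s.status, s.sql_hash_value, p.spid
  from v$session s, v$process p
 where s.paddr = p.addr          -- link each session to its server process
   and p.spid = '556369';        -- OS pid of one hot process from top (example)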
orabus@Ahyz1> vmstat -P
Total Physical Memory = 16384.00 M
= 2097152 pages
Physical Memory Clusters:
start_pfn end_pfn type size_pages / size_bytes
0 504 pal 504 / 3.94M
504 524271 os 523767 / 4091.93M
524271 524288 pal 17 / 136.00k
8388608 8912872 os 524264 / 4095.81M
8912872 8912896 pal 24 / 192.00k
16777216 17301480 os 524264 / 4095.81M
17301480 17301504 pal 24 / 192.00k
25165824 25690088 os 524264 / 4095.81M
25690088 25690112 pal 24 / 192.00k
Physical Memory Use:
start_pfn end_pfn type size_pages / size_bytes
504 1032 scavenge 528 / 4.12M
1032 1963 text 931 / 7.27M
1963 2048 scavenge 85 / 680.00k
2048 2278 data 230 / 1.80M
2278 2756 bss 478 / 3.73M
2756 3007 kdebug 251 / 1.96M
3007 3014 cfgmgmt 7 / 56.00k
3014 3016 locks 2 / 16.00k
3016 3032 pmap 16 / 128.00k
3032 6695 unixtable 3663 / 28.62M
6695 6701 logs 6 / 48.00k
6701 15673 vmtables 8972 / 70.09M
15673 524271 managed 508598 / 3973.42M
524271 8388608 hole 7864337 / 61440.13M
8388608 8388609 unixtable 1 / 8.00k
8388609 8388612 pmap 3 / 24.00k
8388612 8389128 scavenge 516 / 4.03M
8389128 8390059 text 931 / 7.27M
8390059 8398104 vmtables 8045 / 62.85M
8398104 8912872 managed 514768 / 4021.62M
8912872 16777216 hole 7864344 / 61440.19M
16777216 16777217 unixtable 1 / 8.00k
16777217 16777220 pmap 3 / 24.00k
16777220 16777736 scavenge 516 / 4.03M
16777736 16778667 text 931 / 7.27M
16778667 16786712 vmtables 8045 / 62.85M
16786712 17301480 managed 514768 / 4021.62M
17301480 25165824 hole 7864344 / 61440.19M
25165824 25165825 unixtable 1 / 8.00k
25165825 25165828 pmap 3 / 24.00k
25165828 25166344 scavenge 516 / 4.03M
25166344 25167275 text 931 / 7.27M
25167275 25175320 vmtables 8045 / 62.85M
25175320 25690088 managed 514768 / 4021.62M
============================
Total Physical Memory Use: 2096559 / 16379.37M
Managed Pages Break Down:
free pages = 870044
active pages = 580959
inactive pages = 207561
wired pages = 168231
ubc pages = 228268
==================
Total = 2055063
WIRED Pages Break Down:
vm wired pages = 14077
ubc wired pages = 0
meta data pages = 62820
malloc pages = 79557
contig pages = 1242
user ptepages = 2396
kernel ptepages = 492
free ptepages = 15
==================
Total = 160599
orabus@Ahyz1>
orabus@Ahyz1>
orabus@Ahyz1> sar -u 1 30
OSF1 Ahyz1 V5.1 732 alpha 06May2003
11:26:21 %usr %sys %wio %idle
11:26:22 85 14 2 0
11:26:23 83 16 1 0
11:26:24 82 12 5 1
11:26:25 82 13 5 0
11:26:26 85 13 2 0
11:26:27 85 14 1 0
11:26:28 86 14 1 0
11:26:29 85 15 1 0
11:26:30 85 14 1 0
11:26:31 84 13 3 0
11:26:32 88 12 0 0
11:26:33 87 11 2 0
11:26:34 87 12 1 0
11:26:35 86 12 2 0
11:26:36 87 11 1 0
11:26:37 87 12 1 0
11:26:38 87 12 1 0
11:26:39 86 13 1 0
11:26:40 86 13 1 0
11:26:41 84 14 2 0
11:26:42 85 14 1 0
11:26:43 81 16 3 0
11:26:44 80 16 3 0
11:26:45 85 13 2 0
11:26:46 86 11 2 0
11:26:47 84 15 1 0
11:26:48 87 11 2 0
11:26:49 86 12 1 0
11:26:50 87 13 0 0
11:26:51 86 13 0 0
Average 85 13 2 0
SQL/Business>select count(*),status from v$session group by status ;
COUNT(*) STATUS
---------- --------
89 ACTIVE
303 INACTIVE
4 SNIPED
Elapsed: 00:00:00.31
SQL/Business>/
COUNT(*) STATUS
---------- --------
91 ACTIVE
302 INACTIVE
4 SNIPED
Elapsed: 00:00:00.03
SQL/Business>/
COUNT(*) STATUS
---------- --------
92 ACTIVE
301 INACTIVE
4 SNIPED
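Before enlarging the buffer cache and shared pool as proposed in the next section, it may be worth checking whether they are actually under pressure. The two queries below are a minimal sketch, assuming the standard statistic names in v$sysstat and v$sgastat for this Oracle release:

-- rough buffer cache hit ratio from cumulative instance statistics
select 1 - (phy.value / (cur.value + con.value)) as buffer_cache_hit_ratio
  from v$sysstat cur, v$sysstat con, v$sysstat phy
 where cur.name = 'db block gets'
   and con.name = 'consistent gets'
   and phy.name = 'physical reads';

-- memory still unused in the shared pool
select name, bytes
  from v$sgastat
 where name = 'free memory';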
2. Changes we plan to make first
Parameter                     Current value    Recommended value
Db_block_buffers              307200           448000
Shared_pool_size              828375040        1258291200 (about 1.2 GB)
Log_buffer                    1048576          4194304
Fast_start_io_target          307200           0
Processes                     600              650
Db_block_max_dirty_target     307200           0
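These are all static parameters on a pre-9i release (the names db_block_buffers and db_block_max_dirty_target suggest 8i), so the usual procedure would be to change them in the init.ora and restart the instance during the maintenance window. The query below is a minimal sketch for verifying which values are actually in effect after the bounce:

select name, value
  from v$parameter
 where name in ('db_block_buffers', 'shared_pool_size', 'log_buffer',
                'fast_start_io_target', 'processes',
                'db_block_max_dirty_target');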