Repost

Oracle Recover Case: 50TB ASM crash case

Unless marked as a repost, articles on this site are original. Reposted from: love wife & love life — Roger's Oracle technical blog

Original link: Oracle Recover Case: 50TB ASM crash case

A customer's 50 TB ASM diskgroup crashed; after a joint rescue effort it was brought back to normal, and I am writing down a brief record here. In the end the case turned out to be much simpler than I had expected. Below is a detailed account of the failure.

-- db alert log

Mon Jul  6 15:17:43 2015
Errors in file /oraclehome/admin/xxxx/udump/xxxx_ora_262606.trc:
ORA-27063: number of bytes read/written is incorrect
IBM AIX RISC System/6000 Error: 6: No such device or address
Additional information: -1
Additional information: 1048576
ORA-19502: write error on file "+DG_xxxx/xxxx/datafile/xxxx_load_1.679.847025081", blockno 3136352 (blocksize=32768)
ORA-27063: number of bytes read/written is incorrect
......
......
Errors in file /oraclehome/admin/xxxx/bdump/xxxx_lgwr_185246.trc:
ORA-00340: IO error processing online log 9 of thread 1
ORA-00345: redo log write error block 394389 count 2
ORA-00312: online log 9 thread 1: '+DG_xxxx/xxxx/onlinelog/group_9.433.789664647'
ORA-15078: ASM diskgroup was forcibly dismounted
Mon Jul  6 15:18:46 2015
LGWR: terminating instance due to error 340
Instance terminated by LGWR, pid = 185246

The database alert log shows an I/O error that prevented the LGWR process from writing to the online redo log, after which LGWR forcibly terminated the instance. Clearly, we need to work out why LGWR could not write its redo. The ASM alert log shows the following:

Thu Jul  3 08:24:26 2014
NOTE: ASMB process exiting due to lack of ASM file activity
Mon Jul  6 15:18:44 2015
Errors in file /oraclehome/product/10.2.0/admin/+ASM/udump/+asm_ora_353008.trc:
ORA-27091: unable to queue I/O
ORA-27072: File I/O error
IBM AIX RISC System/6000 Error: 6: No such device or address
Additional information: 7
Additional information: -939091968
Additional information: -1
Mon Jul  6 15:18:46 2015
NOTE: cache initiating offline of disk 32  group 1
WARNING: offlining disk 32.1115675731 (DG_xxxx_0032) with mask 0x3
NOTE: PST update: grp = 1, dsk = 32, mode = 0x6
Mon Jul  6 15:18:46 2015
ERROR: too many offline disks in PST (grp 1)
Mon Jul  6 15:18:46 2015
ERROR: PST-initiated MANDATORY DISMOUNT of group DG_xxxx

From the above it is clear that disk 32 of the ASM diskgroup hit an I/O problem, the diskgroup was forcibly dismounted, and the database instance then crashed. The customer later tried to mount the diskgroup manually and got errors like the following:

SQL> alter diskgroup datadg mount
Mon Jul  6 15:33:50 2015
Errors in file /oraclehome/product/10.2.0/admin/+ASM/bdump/+asm_dbw0_275092.trc:
ORA-15066: offlining disk "DG_xxxx_0032" may result in a data loss
ORA-15066: offlining disk "DG_xxxx_0032" may result in a data loss
ORA-15066: offlining disk "DG_xxxx_0032" may result in a data loss
ORA-15066: offlining disk "DG_xxxx_0032" may result in a data loss
......
......
ORA-15066: offlining disk "DG_xxxx_0032" may result in a data loss
Mon Jul  6 15:33:51 2015
Errors in file /oraclehome/product/10.2.0/admin/+ASM/bdump/+asm_b000_360654.trc:
ORA-00600: internal error code, arguments: [kfcDismount02], [], [], [], [], [], [], []
Mon Jul  6 15:33:52 2015
NOTE: cache dismounting group 1/0xDDDF2CC7 (DG_xxxx)
NOTE: dbwr not being msg'd to dismount
Mon Jul  6 15:33:52 2015

This ORA-600 [kfcDismount02] error is very likely caused by a known bug. After the corresponding patch was installed, the mount was attempted again; it still failed, but the error had changed:

SQL> alter diskgroup dg_xxxx mount
Tue Jul  7 05:49:29 2015
NOTE: cache registered group DG_xxxx number=1 incarn=0x72661a1f
Tue Jul  7 05:49:31 2015
NOTE: Hbeat: instance first (grp 1)
Tue Jul  7 05:49:36 2015
NOTE: start heartbeating (grp 1)
NOTE: cache opening disk 0 of grp 1: DG_xxxx_0000 path:/dev/rhdiskpower41
NOTE: cache opening disk 1 of grp 1: DG_xxxx_0001 path:/dev/rhdiskpower42
NOTE: cache opening disk 2 of grp 1: DG_xxxx_0002 path:/dev/rhdiskpower43
NOTE: cache opening disk 3 of grp 1: DG_xxxx_0003 path:/dev/rhdiskpower44
NOTE: cache opening disk 4 of grp 1: DG_xxxx_0004 path:/dev/rhdiskpower45
......
......
NOTE: cache opening disk 33 of grp 1: DG_xxxx_0033 path:/dev/rhdiskpower15
NOTE: cache opening disk 34 of grp 1: DG_xxxx_0034 path:/dev/rhdiskpower14
NOTE: cache opening disk 35 of grp 1: DG_xxxx_0035 path:/dev/rhdiskpower13
NOTE: cache mounting (first) group 1/0x72661A1F (DG_xxxx)
NOTE: starting recovery of thread=1 ckpt=6295.7329 group=1
NOTE: crash recovery signalled OER-15131
ERROR: ORA-15131 signalled during mount of diskgroup DG_xxxx
NOTE: cache dismounting group 1/0x72661A1F (DG_xxxx)
ERROR: diskgroup DG_xxxx was not mounted
Tue Jul  7 05:50:10 2015
Shutting down instance: further logons disabled

As you can see, ASM has to perform crash recovery while mounting, and the checkpoint position is 6295.7329. The trace file shows the location being read for that checkpoint:

NOTE: starting recovery of thread=1 ckpt=6295.7329 group=1
CE: (0x7000000107c9640)  group=1 (DG_xxxx) obj=625  blk=256 (indirect)
    hashFlags=0x0100  lid=0x0002  lruFlags=0x0000  bastCount=1
    redundancy=0x11  fileExtent=0  AUindex=0 blockIndex=0
    copy #0:  disk=32  au=1638611
BH: (0x70000001079c360)  bnum=143 type=rcv reading state=rcvRead chgSt=not modifying
    flags=0x00000000  pinmode=excl  lockmode=null  bf=0x70000001048e000
    kfbh_kfcbh.fcn_kfbh = 0.0  lowAba=0.0  highAba=0.0
    last kfcbInitSlot return code=null cpkt lnk is null
*** 2015-07-07 05:26:12.382

So Oracle needs to read AU number 1638611 on disk 32. With the 10g default AU size of 1 MB, that position is roughly 1.6 TB into the disk. The checkpoint position itself is easy to locate; it can be read directly with kfed, as follows:

[xxxx:/oraclehome]$ kfed read /xxx/rhdiskpower13 aun=3 blkn=0|more
kfbh.endian:                          0 ; 0x000: 0x00
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            7 ; 0x002: KFBTYP_ACDC
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       0 ; 0x004: T=0 NUMB=0x0
kfbh.block.obj:                       3 ; 0x008: TYPE=0x0 NUMB=0x3
kfbh.check:                  1350450563 ; 0x00c: 0x507e3d83
kfbh.fcn.base:                        0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfracdc.eyec[0]:                     65 ; 0x000: 0x41
kfracdc.eyec[1]:                     67 ; 0x001: 0x43
kfracdc.eyec[2]:                     68 ; 0x002: 0x44
kfracdc.eyec[3]:                     67 ; 0x003: 0x43
kfracdc.thread:                       1 ; 0x004: 0x00000001
kfracdc.lastAba.seq:         4294967295 ; 0x008: 0xffffffff
kfracdc.lastAba.blk:         4294967295 ; 0x00c: 0xffffffff
kfracdc.blk0:                         1 ; 0x010: 0x00000001
kfracdc.blks:                     10751 ; 0x014: 0x000029ff
kfracdc.ckpt.seq:                  6295 ; 0x018: 0x00001897    --- checkpoint seq
kfracdc.ckpt.blk:                  7329 ; 0x01c: 0x00001ca1    --- checkpoint blk
kfracdc.fcn.base:             297751371 ; 0x020: 0x11bf534b
kfracdc.fcn.wrap:                     0 ; 0x024: 0x00000000
kfracdc.bufBlks:                     64 ; 0x028: 0x00000040
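To see why AU 1638611 matters, a quick back-of-the-envelope calculation (a sketch of my arithmetic, not part of the recovery session itself) puts that AU at roughly 1.56 TiB into the disk:

```python
# Where on disk 32 does the checkpoint extent live?
# au=1638611 comes from the CE dump above; 1 MiB is the 10g default AU size.

AU_SIZE = 1024 * 1024          # 1 MiB, default ASM allocation unit size in 10g

def au_offset_bytes(au_index, au_size=AU_SIZE):
    """Byte offset of the start of the given AU within a disk."""
    return au_index * au_size

offset = au_offset_bytes(1638611)
print(offset)            # byte offset into the physical disk
print(offset / 2**40)    # ~1.56 TiB, matching the "roughly 1.6T" estimate
```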

After various further attempts, the customer still got the following error while mounting the diskgroup:

Tue Jul  7 18:03:03 2015
Errors in file /oraclehome/product/10.2.0/admin/+ASM/udump/+asm_ora_438636.trc:
ORA-00600: internal error code, arguments: [kfcChkAio01], [], [], [], [], [], [], []
ORA-15196: invalid ASM block header [kfc.c:5553] [blk_kfbl] [625] [2147483904] [2147483905 != 2147483904]
NOTE: crash recovery signalled OER-600
ERROR: ORA-600 signalled during mount of diskgroup DG_xxxx
NOTE: cache dismounting group 1/0xE70AB6D0 (DG_xxxx)
ERROR: diskgroup DG_xxxx was not mounted
Tue Jul  7 18:05:38 2015

This matches the Oracle MOS note "Understanding and Fixing Errors ORA-600 [kfcChkAio01] and ORA-15196" (Doc ID 757529.1) exactly, so we advised the customer to follow that note. The fix is actually very simple: the note supplies a shell script, and only the block number in it needs to be changed. Once that was done, the diskgroup mounted successfully:

Tue Jul  7 18:05:38 2015
SQL> alter diskgroup dg_xxxx mount
Tue Jul  7 18:05:38 2015
NOTE: cache registered group DG_xxxx number=1 incarn=0xce0ab6d3
Tue Jul  7 18:05:38 2015
NOTE: Hbeat: instance first (grp 1)
Tue Jul  7 18:05:43 2015
NOTE: start heartbeating (grp 1)
......
......
NOTE: cache mounting (first) group 1/0xCE0AB6D3 (DG_xxxx)
NOTE: starting recovery of thread=1 ckpt=6295.7329 group=1
NOTE: advancing ckpt for thread=1 ckpt=6295.8649
NOTE: cache recovered group 1 to fcn 0.297779775
Tue Jul  7 18:05:43 2015
NOTE: opening chunk 1 at fcn 0.297779775 ABA
NOTE: seq=6296 blk=8650
Tue Jul  7 18:05:43 2015
NOTE: cache mounting group 1/0xCE0AB6D3 (DG_xxxx) succeeded
SUCCESS: diskgroup DG_xxxx was mounted
Tue Jul  7 18:05:45 2015
NOTE: recovering COD for group 1/0xce0ab6d3 (DG_xxxx)
SUCCESS: completed COD recovery for group 1/0xce0ab6d3 (DG_xxxx)

Finally, let's step back and explain why this happened in the first place. The root cause is that the customer had added disks incorrectly some time before, as the ASM alert log shows:

Tue May 20 15:43:26 2014
SQL> alter diskgroup dg_xxxx add disk '/xxx/rhdiskpower24' size 1677721M,
......
'/xxx/rhdiskpower16' size 1677721M,
'/xxx/rhdiskpower15' size 1677721M,
'/xxx/rhdiskpower14' size 1677721M,
'/xxx/rhdiskpower13' size 1677721M
rebalance power 8
Wed May 21 08:45:13 2014
Starting background process ASMB
ASMB started with pid=13, OS id=267028
Wed May 21 08:45:14 2014
NOTE: ASMB process exiting due to lack of ASM file activity
Wed May 21 12:24:34 2014
NOTE: stopping process ARB5
NOTE: stopping process ARB2
NOTE: stopping process ARB6
NOTE: stopping process ARB1
NOTE: stopping process ARB3
NOTE: stopping process ARB7
NOTE: stopping process ARB4
NOTE: stopping process ARB0
Wed May 21 12:24:38 2014
SUCCESS: rebalance completed for group 1/0x595ad46e (DG_xxxx)
Wed May 21 12:24:38 2014
SUCCESS: rebalance completed for group 1/0x595ad46e (DG_xxxx)

The disk that failed earlier is disk 32, which was added with a declared size of 1677721M; on inspection we found that its actual physical size is only 1638400M.

In other words, when the disk was added, a size larger than the disk's real capacity was specified, and Oracle took it at face value. Admittedly, this also shows that ASM's size validation in Oracle 10g is not strict enough.

So the problem is clear: the failing AU number 1638611 is greater than 1638400, a location that does not physically exist on the disk, and that is why ASM crashed.
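The whole failure boils down to one bounds check. A minimal sketch, using the sizes and AU number from the logs above (with 1 MB AUs, a disk of N megabytes has valid AU indexes 0 through N-1):

```python
# The root cause in one inequality: the declared size made ASM believe
# AU 1638611 existed, but the physical disk only has 1638400 one-MB AUs.

PHYSICAL_MB = 1638400   # real size of the disk, in MB
DECLARED_MB = 1677721   # size given in the ALTER DISKGROUP ... ADD DISK
FAILING_AU  = 1638611   # AU that crash recovery tried to read (1 AU = 1 MB)

def au_exists(au_index, disk_size_mb):
    """With 1 MB AUs, valid AU indexes run from 0 to disk_size_mb - 1."""
    return 0 <= au_index < disk_size_mb

print(au_exists(FAILING_AU, DECLARED_MB))   # True:  ASM thought the AU was valid
print(au_exists(FAILING_AU, PHYSICAL_MB))   # False: the read hit nonexistent space
```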

Note: the customer's ASM diskgroup used 36 disks of about 1.6 TB each, roughly 53 TB in total, and it was almost completely full. Fortunately the damage could be repaired this simply; otherwise the recovery effort and workload would have been enormous. Undeniably, 云和恩墨 remains the strongest database recovery company in China, bar none!
