Recovering an XFS filesystem.

Broken, fell apart, doesn't work...


kraps
Junior member
Posts: 2
Joined: 31 Jan 2014, 08:24
From: Moscow

Recovering an XFS filesystem.

Post by kraps » 31 Jan 2014, 08:50

Hello, dear forum members.

The other day I ran into the following problem.
There is a server with an Areca 1260 RAID controller and a 7.5 TB disk array (9 in the raid set + 3 hot spares). The OS is Gentoo Linux, installed on a separate disk.

Code: Select all

uname -a
Linux host 3.10.25-gentoo #2 SMP Fri Jan 24 14:13:10 MSK 2014 x86_64 Intel(R) Xeon(TM) CPU 3.00GHz GenuineIntel GNU/Linux
The RAID array's filesystem is XFS, on /dev/sdb1.
Recently the RAID had a failure, after which this entry appeared in the controller log:

Code: Select all

2014-01-24 07:12:34 H/W Monitor Raid Powered On
It is hard to say for sure whether there was a power outage, since the server is located remotely and is connected to guaranteed power with a backup, so I do not rule out a software/hardware fault in the controller itself, after which the volume was unmounted. A RAID rebuild followed and completed successfully. The array's state is now stable, but the filesystem will not mount. This is the first such incident in 5 years of operating this disk array.

Code: Select all

Copyright (c) 2004-2011 Areca, Inc. All Rights Reserved.
Areca CLI, Version: 1.86, Arclib: 310, Date: Nov  1 2011( Linux )

 S  #   Name       Type             Interface
==================================================
[*] 1   ARC-1260   Raid Controller  PCI
==================================================


CMD     Description
==========================================================
main    Show Command Categories.
set     General Settings.
rsf     RaidSet Functions.
vsf     VolumeSet Functions.
disk    Physical Drive Functions.
sys     System Functions.
net     Ethernet Functions.
event   Event Functions.
hw      Hardware Monitor Functions.
mail    Mail Notification Functions.
snmp    SNMP Functions.
ntp     NTP Functions.
exit    Exit CLI.
==========================================================
Command Format: <CMD> [Sub-Command] [Parameters].
Note: Use <CMD> -h or -help to get details.


CLI> sys info
The System Information
===========================================
Main Processor     : 500MHz
CPU ICache Size    : 32KB
CPU DCache Size    : 32KB
CPU SCache Size    : 0KB
System Memory      : 256MB/333MHz/ECC
Firmware Version   : V1.49 2010-12-02
BOOT ROM Version   : V1.49 2010-12-02
Serial Number      : Y706CAANAR600367
Controller Name    : ARC-1260
Current IP Address : 192.168.90.250
===========================================
GuiErrMsg<0x00>: Success.


CLI> rsf info
 #  Name             Disks TotalCap  FreeCap DiskChannels       State          
===============================================================================
 1  Raid Set # 00       12 9000.0GB    0.0GB 123F465E9ABD       Normal
===============================================================================
GuiErrMsg<0x00>: Success.


CLI> vsf info
  # Name             Raid Name       Level   Capacity Ch/Id/Lun  State         
===============================================================================
  1 ARC-1260-VOL#00  Raid Set # 00   Raid6   7500.0GB 00/00/00   Normal
===============================================================================
GuiErrMsg<0x00>: Success.

CLI> disk info
  # Ch# ModelName                       Capacity  Usage
===============================================================================
  1  1  ST3750640NS                      750.2GB  Raid Set # 00   
  2  2  ST3750640NS                      750.2GB  Raid Set # 00   
  3  3  ST3750640NS                      750.2GB  Raid Set # 00   
  4  4  ST3750640NS                      750.2GB  Raid Set # 00   
  5  5  GB1000EAMYC                     1000.2GB  Raid Set # 00   
  6  6  ST3750330NS                      750.2GB  Raid Set # 00   
  7  7  GB1000EAMYC                     1000.2GB  HotSpare[Global]
  8  8  GB1000EAMYC                     1000.2GB  HotSpare[Global]
  9  9  ST3750640NS                      750.2GB  Raid Set # 00   
 10 10  ST3750640NS                      750.2GB  Raid Set # 00   
 11 11  ST3750640NS                      750.2GB  Raid Set # 00   
 12 12  ST3750330NS                      750.2GB  HotSpare[Global]
 13 13  ST3750640NS                      750.2GB  Raid Set # 00   
 14 14  ST3750640NS                      750.2GB  Raid Set # 00   
 15 15  ST3750640NS                      750.2GB  Raid Set # 00   
 16 16  N.A.                               0.0GB  N.A.      
===============================================================================
GuiErrMsg<0x00>: Success.


CLI> hw info
The Hardware Monitor Information
===========================================
Fan#1 Speed (RPM)   : 1188
Battery Status      : 100%
HDD #1  Temp.       : 31
HDD #2  Temp.       : 30
HDD #3  Temp.       : 29
HDD #4  Temp.       : 34
HDD #5  Temp.       : 31
HDD #6  Temp.       : 27
HDD #7  Temp.       : 29
HDD #8  Temp.       : 33
HDD #9  Temp.       : 32
HDD #10 Temp.       : 31
HDD #11 Temp.       : 29
HDD #12 Temp.       : 32
HDD #13 Temp.       : 33
HDD #14 Temp.       : 33
HDD #15 Temp.       : 31
HDD #16 Temp.       : 0
===========================================
GuiErrMsg<0x00>: Success.


CLI> event info
Date-Time            Device           Event Type            Elapsed Time Errors
===============================================================================
2014-01-28 04:17:30  Proxy Or Inband  HTTP Log In                              
2014-01-27 05:06:56  Proxy Or Inband  HTTP Log In                              
2014-01-27 04:14:16  H/W MONITOR      Raid Powered On                          
2014-01-27 03:58:22  H/W MONITOR      Raid Powered On                          
2014-01-27 03:54:31  RS232 Terminal   VT100 Log In                             
2014-01-27 03:54:16  H/W MONITOR      Raid Powered On                          
2014-01-26 11:06:04  Proxy Or Inband  HTTP Log In                              
2014-01-25 12:32:07  Proxy Or Inband  HTTP Log In                              
2014-01-24 15:31:35  Proxy Or Inband  HTTP Log In                              
2014-01-24 14:46:17  ARC-1260-VOL#00  Complete Rebuild      006:22:29          
2014-01-24 08:23:48  ARC-1260-VOL#00  Start Rebuilding                         
2014-01-24 08:23:46  IDE Channel #12  Device Failed                            
2014-01-24 08:23:46  Raid Set # 00    Rebuild RaidSet                          
2014-01-24 08:23:45  Raid Set # 00    RaidSet Degraded                         
2014-01-24 08:23:45  ARC-1260-VOL#00  Volume Degraded                          
2014-01-24 08:09:00  Proxy Or Inband  HTTP Log In                              
2014-01-24 07:12:34  H/W MONITOR      Raid Powered On                          
2014-01-24 06:23:01  H/W MONITOR      Raid Powered On                          
===============================================================================
GuiErrMsg<0x00>: Success.


CLI> sys showcfg
The System Configuration
=====================================================
System Beeper Setting         : Disabled
Background Task Priority      : Medium(50%)
JBOD/RAID Configuration       : RAID
Max SATA Mode Supported       : SATA300+NCQ
HDD Read Ahead Cache          : Enabled
Volume Data Read Ahead        : Normal
Stagger Power On Control      : 0.7
Spin Down Idle HDD (Minutes)  : Disabled
HDD SMART Status Polling      : Enabled
Empty HDD Slot LED            : ON
Auto Activate Incomplete Raid : Enabled
Disk Write Cache Mode         : Enabled
Disk Capacity Truncation Mode : Multiples Of 10G
=====================================================
GuiErrMsg<0x00>: Success.
When I try to run the xfs_repair recovery utility, I get the following:

Code: Select all

xfs_repair -P /dev/sdb1

Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
ERROR: The filesystem has valuable metadata changes in a log which needs to
be replayed.  Mount the filesystem to replay the log, and unmount it before
re-running xfs_repair.  If you are unable to mount the filesystem, then use
the -L option to destroy the log and attempt a repair.
Note that destroying the log may cause corruption -- please attempt a mount
of the filesystem before doing this.
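The error text above spells out the least destructive order of operations: mount the filesystem so the kernel replays the journal, cleanly unmount it, and only then run xfs_repair; -L, which zeroes the log, is the last resort. A minimal sketch of that sequence, using the /dev/sdb1 device from the post and a hypothetical mount point /mnt/raid (run as root):

```shell
#!/bin/sh
# Least-destructive sequence suggested by the xfs_repair error message.
# /dev/sdb1 is the damaged partition; /mnt/raid is a hypothetical mount point.

if mount -t xfs /dev/sdb1 /mnt/raid 2>/dev/null; then
    # Mount succeeded: the kernel has replayed the journal.
    umount /mnt/raid
    xfs_repair /dev/sdb1
else
    # Mount impossible: xfs_repair -L zeroes the log and can lose the
    # transactions still sitting in it, so image the device before trying it.
    echo "mount failed; xfs_repair -L /dev/sdb1 is the destructive last resort"
fi
```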
The output of xfs_repair -n is more informative, but -n prevents it from making any changes to the filesystem:

Code: Select all

xfs_repair -n /dev/sdb1

Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - scan filesystem freespace and inode maps...
block (3,1498933-1498933) multiply claimed by cnt space tree, state - 2
agf_freeblks 255168644, counted 255168626 in ag 2
agf_freeblks 259940761, counted 259940776 in ag 3
agf_freeblks 255012362, counted 255012365 in ag 4
agf_freeblks 260627255, counted 260627372 in ag 5
agf_freeblks 207044983, counted 207044984 in ag 6
agf_freeblks 243646150, counted 243646100 in ag 1
block (0,9288775-9288775) multiply claimed by cnt space tree, state - 2
block (0,9292880-9292880) multiply claimed by cnt space tree, state - 2
block (0,9311746-9311746) multiply claimed by cnt space tree, state - 2
block (0,9313774-9313774) multiply claimed by cnt space tree, state - 2
block (0,4010552-4010552) multiply claimed by cnt space tree, state - 2
block (0,7294010-7294010) multiply claimed by cnt space tree, state - 2
block (0,6907114-6907114) multiply claimed by cnt space tree, state - 2
block (0,4058360-4058360) multiply claimed by cnt space tree, state - 2
block (0,3891784-3891784) multiply claimed by cnt space tree, state - 2
block (0,9322824-9322824) multiply claimed by cnt space tree, state - 2
agf_freeblks 228242757, counted 228242913 in ag 0
sb_fdblocks 1709684933, counted 1709685157
        - found root inode chunk
Phase 3 - for each AG...
        - scan (but don't clear) agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
xfs_bmbt_read_verify: XFS_CORRUPTION_ERROR
data fork in ino 16762966 claims free block 16654424
bad nblocks 256 for inode 16764668, would reset to 255
data fork in ino 16767882 claims free block 9317836
data fork in ino 16767882 claims free block 9317837
bad nblocks 530 for inode 16767882, would reset to 545
data fork in ino 16770934 claims free block 9309594
data fork in ino 16770934 claims free block 9309595
bad nblocks 2396 for inode 16772596, would reset to 2395
data fork in ino 16775619 claims free block 9319785
data fork in ino 16775619 claims free block 9319786
bad nblocks 6284 for inode 16775619, would reset to 6291
bad nblocks 103 for inode 16780498, would reset to 102
bad nextents 27 for inode 16780498, would reset to 26
data fork in ino 16781959 claims free block 7295214
data fork in ino 16781959 claims free block 7295215
bad nblocks 76 for inode 16781959, would reset to 81
bad key in bmbt root (is 1856, would reset to 1844) in inode 16782070 data fork
bad nblocks 3060 for inode 16782070, would reset to 3059
bad nextents 642 for inode 16782070, would reset to 641
data fork in ino 16783403 claims free block 1345779579
data fork in ino 16783403 claims free block 1345779580
bad nblocks 3029 for inode 16783403, would reset to 3043
bad nblocks 927 for inode 16783493, would reset to 926
bad nblocks 977 for inode 16783553, would reset to 971
data fork in ino 16786396 claims free block 8430572
bad nblocks 60 for inode 16786396, would reset to 65
data fork in ino 16786416 claims free block 9288774
data fork in ino 16786416 claims free block 9288775
bad nblocks 719 for inode 16786416, would reset to 721
data fork in ino 16786803 claims free block 9307090
data fork in ino 16786803 claims free block 9307091
bad nblocks 56 for inode 16786803, would reset to 65
bad nblocks 536 for inode 16787010, would reset to 535
data fork in ino 16792026 claims free block 9312758
data fork in ino 16792026 claims free block 9312759
bad nblocks 301 for inode 16792026, would reset to 305
bad nblocks 3059 for inode 16792057, would reset to 3045
bad nextents 580 for inode 16792057, would reset to 579
data fork in ino 16792827 claims free block 9317987
data fork in ino 16792827 claims free block 9317988
bad nblocks 88 for inode 16792827, would reset to 97
data fork in ino 16797309 claims free block 9316639
data fork in ino 16797309 claims free block 9316640
bad nblocks 1115 for inode 16797309, would reset to 1121
data fork in ino 16797369 claims free block 5187785
data fork in ino 16797369 claims free block 5187786
data fork in ino 16801363 claims free block 5195413
data fork in ino 16801363 claims free block 5195414
data fork in ino 16805149 claims free block 16857856
bad nblocks 3072 for inode 16805235, would reset to 3071
data fork in ino 16806242 claims free block 9318771
data fork in ino 16806242 claims free block 9318772
bad nblocks 3048 for inode 16806242, would reset to 3058
bad nblocks 1355 for inode 16809840, would reset to 1354
bad nblocks 2467 for inode 16812697, would reset to 2466
data fork in ino 16818259 claims free block 9305797
data fork in ino 16818259 claims free block 9305798
data fork in ino 16824269 claims free block 9319278
bad nblocks 767 for inode 16824269, would reset to 769
bad nblocks 275 for inode 16826120, would reset to 274
bad nextents 95 for inode 16826120, would reset to 94
data fork in ino 16826213 claims free block 272608246
data fork in ino 16828470 claims free block 9316767
data fork in ino 16828470 claims free block 9316768
bad nblocks 3069 for inode 16828470, would reset to 3075
bad nblocks 193 for inode 16828767, would reset to 184
data fork in ino 16829192 claims free block 539292365
bad nblocks 818 for inode 16829192, would reset to 833
data fork in ino 16829681 claims free block 5675633
data fork in ino 16831045 claims free block 6119618
data fork in ino 16833544 claims free block 1378633
bad nblocks 97 for inode 16833658, would reset to 91
bad nblocks 48 for inode 16836020, would reset to 49
data fork in ino 16837615 claims free block 9317968
data fork in ino 16837615 claims free block 9317969
bad nblocks 1237 for inode 16837615, would reset to 1249
bad nblocks 622 for inode 16843855, would reset to 621
data fork in ino 16851046 claims free block 9299867
data fork in ino 16851046 claims free block 9299868
bad nblocks 811 for inode 16851046, would reset to 817
bad nblocks 94 for inode 16852952, would reset to 93
data fork in ino 16858919 claims free block 1047326
bad nblocks 649 for inode 16858919, would reset to 657
bad nblocks 121 for inode 16861780, would reset to 120
bad nblocks 9585 for inode 16863095, would reset to 9457
bad nextents 235 for inode 16863095, would reset to 234
bad nblocks 433 for inode 16868691, would reset to 423
bad nextents 206 for inode 16868691, would reset to 205
bad nblocks 2721 for inode 16870801, would reset to 2720
data fork in ino 16870820 claims free block 9322255
bad nblocks 1025 for inode 16870900, would reset to 1015
data fork in ino 16871311 claims free block 9321968
bad nblocks 2371 for inode 16871311, would reset to 2402
data fork in ino 16871664 claims free block 272107090
data fork in ino 16871664 claims free block 272107091
data fork in ino 16871687 claims free block 272686198
data fork in ino 16872270 claims free block 9302219
data fork in ino 16872270 claims free block 9302220
bad nblocks 547 for inode 16873993, would reset to 561
data fork in ino 16876441 claims free block 9309470
bad nblocks 3071 for inode 16876441, would reset to 3073
bad nblocks 27 for inode 16876582, would reset to 26
bad nblocks 32 for inode 16889354, would reset to 33
data fork in ino 16892870 claims free block 273676067
data fork in ino 16896171 claims free block 9310630
data fork in ino 16896171 claims free block 9310631
bad nblocks 682 for inode 16896171, would reset to 689
data fork in ino 16896792 claims free block 9314447
bad nblocks 1617 for inode 16896792, would reset to 1618
data fork in ino 16906370 claims free block 9313860
data fork in ino 16906370 claims free block 9313861
bad nblocks 244 for inode 16906370, would reset to 257
data fork in ino 16908888 claims free block 9305813
data fork in ino 16908888 claims free block 9305814
bad nblocks 2417 for inode 16911368, would reset to 2416
bad nblocks 950 for inode 16912682, would reset to 949
data fork in ino 16916686 claims free block 5096396
data fork in ino 16916686 claims free block 5096397
data fork in ino 16922077 claims free block 9311859
data fork in ino 16922077 claims free block 9311860
data fork in ino 16923072 claims free block 1077183854
bad nblocks 2350 for inode 16923072, would reset to 2354
data fork in ino 16923549 claims free block 9304733
data fork in ino 16923549 claims free block 9304734
bad nblocks 1016 for inode 16923549, would reset to 1025
data fork in ino 16927417 claims free block 9321495
data fork in ino 16927417 claims free block 9321496
bad magic # 0x20313030 in inode 16927721 (data fork) bmbt block 9305534
bad data fork in inode 16927721
would have cleared inode 16927721
data fork in ino 16928450 claims free block 9318480
data fork in ino 16928450 claims free block 9318481
bad nblocks 241 for inode 16938363, would reset to 240
bad nblocks 289 for inode 16940400, would reset to 257
bad nextents 32 for inode 16940400, would reset to 31
data fork in ino 16942122 claims free block 9304143
data fork in ino 16942122 claims free block 9304144
bad nblocks 106 for inode 16942122, would reset to 113
data fork in ino 16946405 claims free block 9311443
data fork in ino 16946405 claims free block 9311444
bad nblocks 1053 for inode 16946405, would reset to 1057
data fork in ino 16948776 claims free block 9317665
data fork in ino 16948776 claims free block 9317666
bad nblocks 681 for inode 16948776, would reset to 689
bad nblocks 562 for inode 16949011, would reset to 561
bad nextents 200 for inode 16949011, would reset to 199
data fork in ino 16951530 claims free block 8418968
data fork in ino 16957296 claims free block 8435643
bad nblocks 142 for inode 16957296, would reset to 145
data fork in ino 16960362 claims free block 9320795
data fork in ino 16960362 claims free block 9320796
bad nblocks 51 for inode 16960362, would reset to 65
data fork in ino 16965029 claims free block 9304412
data fork in ino 16965029 claims free block 9304413
data fork in ino 16967072 claims free block 9322240
bad nblocks 898 for inode 16967072, would reset to 913
data fork in ino 16972513 claims free block 9322096
bad nblocks 354 for inode 16972513, would reset to 369
data fork in ino 16976981 claims free block 272642965
data fork in ino 16980431 claims free block 9305966
data fork in ino 16981023 claims free block 9313215
data fork in ino 16981023 claims free block 9313216
bad nblocks 508 for inode 16981023, would reset to 513
data fork in ino 16983271 claims free block 28015187
data fork in ino 16983271 claims free block 28015188
bad nblocks 216 for inode 16983271, would reset to 225
data fork in ino 16983280 claims free block 9321906
data fork in ino 16983280 claims free block 9321907
bad nblocks 83 for inode 16983280, would reset to 97
data fork in ino 16987049 claims free block 9314631
data fork in ino 16987049 claims free block 9314632
bad nblocks 422 for inode 16987049, would reset to 433
data fork in ino 16989722 claims free block 5014097
data fork in ino 16990238 claims free block 9318424
data fork in ino 16990238 claims free block 9318425
bad nblocks 1056 for inode 16990306, would reset to 1058
data fork in ino 16992687 claims free block 9318671
data fork in ino 16992687 claims free block 9318672
bad nblocks 180 for inode 16992687, would reset to 193
data fork in ino 16995116 claims free block 4010551
data fork in ino 16995161 claims free block 27301755
data fork in ino 16995239 claims free block 7039534
bad nblocks 39 for inode 16995239, would reset to 49
data fork in ino 16997344 claims free block 9316751
data fork in ino 16997344 claims free block 9316752
bad nblocks 3085 for inode 16997344, would reset to 3091
data fork in ino 17000640 claims free block 1076254748
bad nblocks 390 for inode 17000640, would reset to 401
data fork in ino 17004824 claims free block 9292879
data fork in ino 17004824 claims free block 9292880
bad nblocks 1345 for inode 17004824, would reset to 1346
bad nblocks 21 for inode 17005621, would reset to 33
data fork in ino 17005995 claims free block 5365367
data fork in ino 17005995 claims free block 5365368
data fork in ino 17018696 claims free block 9316278
data fork in ino 17018696 claims free block 9316279
data fork in ino 17019111 claims free block 9311403
data fork in ino 17019111 claims free block 9311404
bad nblocks 2320 for inode 17019111, would reset to 2323
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 5
        - agno = 6
        - agno = 4
entry "10.6.114.148" at block 297 offset 496 in directory inode 19125 references free inode 16927721
        would clear inode number in entry at offset 496...
bad nblocks 256 for inode 16764668, would reset to 255
bad nblocks 530 for inode 16767882, would reset to 545
bad nblocks 2396 for inode 16772596, would reset to 2395
bad nblocks 6284 for inode 16775619, would reset to 6291
bad nblocks 103 for inode 16780498, would reset to 102
bad nextents 27 for inode 16780498, would reset to 26
bad nblocks 76 for inode 16781959, would reset to 81
bad key in bmbt root (is 1856, would reset to 1844) in inode 16782070 data fork
bad nblocks 3060 for inode 16782070, would reset to 3059
bad nextents 642 for inode 16782070, would reset to 641
bad nblocks 3029 for inode 16783403, would reset to 3043
bad nblocks 927 for inode 16783493, would reset to 926
bad nblocks 977 for inode 16783553, would reset to 971
bad nblocks 60 for inode 16786396, would reset to 65
bad nblocks 719 for inode 16786416, would reset to 721
bad nblocks 56 for inode 16786803, would reset to 65
bad nblocks 536 for inode 16787010, would reset to 535
bad nblocks 301 for inode 16792026, would reset to 305
bad nblocks 3059 for inode 16792057, would reset to 3045
bad nextents 580 for inode 16792057, would reset to 579
bad nblocks 88 for inode 16792827, would reset to 97
bad nblocks 1115 for inode 16797309, would reset to 1121
bad nblocks 3072 for inode 16805235, would reset to 3071
bad nblocks 3048 for inode 16806242, would reset to 3058
bad nblocks 1355 for inode 16809840, would reset to 1354
bad nblocks 2467 for inode 16812697, would reset to 2466
bad nblocks 767 for inode 16824269, would reset to 769
bad nblocks 275 for inode 16826120, would reset to 274
bad nextents 95 for inode 16826120, would reset to 94
bad nblocks 3069 for inode 16828470, would reset to 3075
bad nblocks 193 for inode 16828767, would reset to 184
bad nblocks 818 for inode 16829192, would reset to 833
bad nblocks 97 for inode 16833658, would reset to 91
bad nblocks 48 for inode 16836020, would reset to 49
bad nblocks 1237 for inode 16837615, would reset to 1249
bad nblocks 622 for inode 16843855, would reset to 621
bad nblocks 811 for inode 16851046, would reset to 817
bad nblocks 94 for inode 16852952, would reset to 93
bad nblocks 649 for inode 16858919, would reset to 657
bad nblocks 121 for inode 16861780, would reset to 120
bad nblocks 9585 for inode 16863095, would reset to 9457
bad nextents 235 for inode 16863095, would reset to 234
bad nblocks 433 for inode 16868691, would reset to 423
bad nextents 206 for inode 16868691, would reset to 205
bad nblocks 2721 for inode 16870801, would reset to 2720
bad nblocks 1025 for inode 16870900, would reset to 1015
bad nblocks 2371 for inode 16871311, would reset to 2402
bad nblocks 547 for inode 16873993, would reset to 561
bad nblocks 3071 for inode 16876441, would reset to 3073
bad nblocks 27 for inode 16876582, would reset to 26
bad nblocks 32 for inode 16889354, would reset to 33
bad nblocks 682 for inode 16896171, would reset to 689
bad nblocks 1617 for inode 16896792, would reset to 1618
bad nblocks 244 for inode 16906370, would reset to 257
bad nblocks 2417 for inode 16911368, would reset to 2416
bad nblocks 950 for inode 16912682, would reset to 949
bad nblocks 2350 for inode 16923072, would reset to 2354
bad nblocks 1016 for inode 16923549, would reset to 1025
bad magic # 0x20313030 in inode 16927721 (data fork) bmbt block 9305534
bad data fork in inode 16927721
would have cleared inode 16927721
bad nblocks 241 for inode 16938363, would reset to 240
bad nblocks 289 for inode 16940400, would reset to 257
bad nextents 32 for inode 16940400, would reset to 31
bad nblocks 106 for inode 16942122, would reset to 113
bad nblocks 1053 for inode 16946405, would reset to 1057
bad nblocks 681 for inode 16948776, would reset to 689
bad nblocks 562 for inode 16949011, would reset to 561
bad nextents 200 for inode 16949011, would reset to 199
bad nblocks 142 for inode 16957296, would reset to 145
bad nblocks 51 for inode 16960362, would reset to 65
bad nblocks 898 for inode 16967072, would reset to 913
bad nblocks 354 for inode 16972513, would reset to 369
bad nblocks 508 for inode 16981023, would reset to 513
bad nblocks 216 for inode 16983271, would reset to 225
bad nblocks 83 for inode 16983280, would reset to 97
bad nblocks 422 for inode 16987049, would reset to 433
bad nblocks 1056 for inode 16990306, would reset to 1058
bad nblocks 180 for inode 16992687, would reset to 193
bad nblocks 39 for inode 16995239, would reset to 49
bad nblocks 3085 for inode 16997344, would reset to 3091
bad nblocks 390 for inode 17000640, would reset to 401
bad nblocks 1345 for inode 17004824, would reset to 1346
bad nblocks 21 for inode 17005621, would reset to 33
bad nblocks 2320 for inode 17019111, would reset to 2323
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
        - traversing filesystem ...
entry "10.6.114.148" in directory inode 19125 points to free inode 16927721, would junk entry
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify link counts...
No modify flag set, skipping filesystem flush and exiting
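One way to see what a destructive repair would actually do, without touching the array: xfsprogs ships xfs_metadump and xfs_mdrestore, which copy only the filesystem metadata into a compact image, so the repair can be rehearsed on that copy first. A sketch, assuming the device from the post and illustrative paths under /var/tmp:

```shell
#!/bin/sh
# Rehearse the destructive repair on a metadata-only copy, not on the array.
# /dev/sdb1 is the damaged partition; the /var/tmp paths are illustrative.
if ! command -v xfs_metadump >/dev/null 2>&1 || [ ! -r /dev/sdb1 ]; then
    echo "xfsprogs or /dev/sdb1 not available here" >&2
    exit 0
fi
xfs_metadump -o /dev/sdb1 /var/tmp/sdb1.metadump       # -o keeps real file names
xfs_mdrestore /var/tmp/sdb1.metadump /var/tmp/sdb1.img # restore to a sparse image
xfs_repair -L /var/tmp/sdb1.img                        # zero the log on the COPY
xfs_repair -n /var/tmp/sdb1.img                        # inspect what -L left behind
```

The image holds metadata only (no file contents), so it stays small even for a 7.5 TB volume, and any number of repair attempts can be made against fresh restores of the same dump.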
I filed a bug about this on the Bugzilla at http://oss.sgi.com/bugzilla/show_bug.cgi?id=1045, where I attached the server's full technical specifications and command output. Unfortunately, the developers have not responded yet.
My one question right now is: can the filesystem be recovered without substantial data loss? I have read that using the -L option almost certainly leads to total loss. Is that true? Has anyone on this forum had positive or negative experience with it? What options are there for recovering an XFS filesystem?
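Regarding -L: whatever route is taken, the usual advice before the first write to the volume is a raw sector-level copy, so that even a bad xfs_repair -L outcome stays reversible. A plain-dd sketch; /backup/sdb1.img is a hypothetical target on separate storage with at least the array's 7.5 TB free:

```shell
#!/bin/sh
# Raw image of the damaged partition before any destructive repair.
# /backup/sdb1.img is a hypothetical target on a different storage device.
[ -r /dev/sdb1 ] || { echo "/dev/sdb1 not accessible here" >&2; exit 0; }

# noerror: keep going past unreadable sectors; sync: pad them so offsets
# in the image still line up with the original partition.
dd if=/dev/sdb1 of=/backup/sdb1.img bs=64M conv=noerror,sync
```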

Thanks in advance for your help.

gs
Trinity staff
Posts: 16650
Joined: 23 Aug 2002, 17:34
From: Moscow

Re: Recovering an XFS filesystem.

Post by gs » 31 Jan 2014, 14:05

Ask the data recovery specialists: http://rlab.ru/forum/board,14.0

kraps
Junior member
Posts: 2
Joined: 31 Jan 2014, 08:24
From: Moscow

Re: Recovering an XFS filesystem.

Post by kraps » 31 Jan 2014, 16:27

gs wrote: Ask the data recovery specialists: http://rlab.ru/forum/board,14.0
Thanks, I'll try that.


