Current array configuration:
7 RAID1 logical drives in a single Logical Volume. Stripe size 128k, write-back cache. The logical volume is carved into 3 LUNs of 2047 GB each.
4 network interfaces, each in its own group, addresses 10.111.111.10-10.111.111.13; 802.3ad is not used. Jumbo frames at 9k.
One client connected:
ESX 4.1U1 (empty for now, I'm testing the array), HP DL380 G6. After reading the discussions I decided to use Software iSCSI instead of Broadcom's semi-hardware offload (the NICs show up as HBAs), because Broadcom's iSCSI offload does not support jumbo frames. Four NICs are used, each in its own vSwitch; each vSwitch has one port group with a single vmknic, addresses 10.111.111.70-73. Everything is at MTU 9000, and vmkping -s 9000 works fine. Redundant paths have been removed, leaving 4: 10.111.111.70<->10.111.111.10, 10.111.111.71<->10.111.111.11, etc., to all LUNs. A single VMFS was created from the 3 LUNs of 2047 GB; Path Selection on every extent is set to Round Robin, and the round-robin I/O Operation Limit is set to 3 (instead of the default 1000).
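For reference, on ESX 4.1 the Round Robin policy and the I/O Operation Limit are set per device with esxcli roughly as below. This is a sketch from memory of the 4.1-era `esxcli nmp` syntax, not a verbatim transcript; `naa.xxxxxxxx` is a placeholder for the actual device ID of each LUN:

```
# List NMP devices to find the naa IDs of the three LUNs
esxcli nmp device list

# Set the path selection policy to Round Robin (placeholder device ID)
esxcli nmp device setpolicy --device naa.xxxxxxxx --psp VMW_PSP_RR

# Lower the I/O Operation Limit from the default 1000 to 3,
# so the path rotates every 3 I/Os instead of every 1000
esxcli nmp roundrobin setconfig --device naa.xxxxxxxx --type iops --iops 3
```

The same has to be repeated for each of the three extents backing the VMFS.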
Problems:
1. The enclosure won't boot with the drives inserted: the LCD shows A16E-2130-4 2GB Ram Wait... and nothing works. With the drives pulled it boots fine; you then reinsert the drives, bring the Logical Volume online, and everything works. Scary to keep anything valuable on it.
2. Speed: in theory, 14 spindles in RAID10 should easily saturate 4x Gigabit Ethernet; in practice the picture looks like this (from a CentOS 5.2 VM, 8 GB RAM, 2 CPUs):
Code:
[root@centos-test ~]# iozone -t 4 -s 2G -r 128k -F /test/test.file1 /test/test.file2 /test/test.file3 /test/test.file4 -k 4
Iozone: Performance Test of File I/O
Version $Revision: 3.394 $
Compiled for 64 bit mode.
Build: linux
Contributors: William Norcott, Don Capps, Isom Crawford, Kirby Collins
Al Slater, Scott Rhine, Mike Wisner, Ken Goss
Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
Randy Dunlap, Mark Montague, Dan Million, Gavin Brebner,
Jean-Marc Zucconi, Jeff Blomberg, Benny Halevy, Dave Boone,
Erik Habbinga, Kris Strecker, Walter Wong, Joshua Root,
Fabrice Bacchella, Zhenghua Xue, Qin Li, Darren Sawyer.
Ben England.
Run began: Fri Jun 10 12:05:14 2011
File size set to 2097152 KB
Record Size 128 KB
POSIX Async I/O (no bcopy). Depth 4
Command line used: iozone -t 4 -s 2G -r 128k -F /test/test.file1 /test/test.file2 /test/test.file3 /test/test.file4 -k 4
Output is in Kbytes/sec
Time Resolution = 0.000001 seconds.
Processor cache size set to 1024 Kbytes.
Processor cache line size set to 32 bytes.
File stride size set to 17 * record size.
Throughput test with 4 processes
Each process writes a 2097152 Kbyte file in 128 Kbyte records
Children see throughput for 4 initial writers = 78850.19 KB/sec
Parent sees throughput for 4 initial writers = 59846.87 KB/sec
Min throughput per process = 17606.41 KB/sec
Max throughput per process = 22486.40 KB/sec
Avg throughput per process = 19712.55 KB/sec
Min xfer = 1655552.00 KB
Children see throughput for 4 rewriters = 86950.00 KB/sec
Parent sees throughput for 4 rewriters = 83156.26 KB/sec
Min throughput per process = 20489.76 KB/sec
Max throughput per process = 22871.87 KB/sec
Avg throughput per process = 21737.50 KB/sec
Min xfer = 1880704.00 KB
Children see throughput for 4 readers = 55541.98 KB/sec
Parent sees throughput for 4 readers = 55523.83 KB/sec
Min throughput per process = 12860.56 KB/sec
Max throughput per process = 14874.53 KB/sec
Avg throughput per process = 13885.50 KB/sec
Min xfer = 1813632.00 KB
Children see throughput for 4 re-readers = 54986.37 KB/sec
Parent sees throughput for 4 re-readers = 54976.36 KB/sec
Min throughput per process = 12888.38 KB/sec
Max throughput per process = 14548.03 KB/sec
Avg throughput per process = 13746.59 KB/sec
Min xfer = 1858304.00 KB
Children see throughput for 4 reverse readers = 6726.46 KB/sec
Parent sees throughput for 4 reverse readers = 6725.39 KB/sec
Min throughput per process = 1557.19 KB/sec
Max throughput per process = 1794.72 KB/sec
Avg throughput per process = 1681.61 KB/sec
Min xfer = 1819648.00 KB
Children see throughput for 4 stride readers = 6175.26 KB/sec
Parent sees throughput for 4 stride readers = 6174.51 KB/sec
Min throughput per process = 1482.83 KB/sec
Max throughput per process = 1606.44 KB/sec
Avg throughput per process = 1543.81 KB/sec
Min xfer = 1936512.00 KB
Children see throughput for 4 random readers = 19618.78 KB/sec
Parent sees throughput for 4 random readers = 19614.92 KB/sec
Min throughput per process = 4842.36 KB/sec
Max throughput per process = 4970.65 KB/sec
Avg throughput per process = 4904.70 KB/sec
Min xfer = 2044288.00 KB
Children see throughput for 4 mixed workload = 112107.81 KB/sec
Parent sees throughput for 4 mixed workload = 66222.86 KB/sec
Min throughput per process = 702.15 KB/sec
Max throughput per process = 81855.67 KB/sec
Avg throughput per process = 28026.95 KB/sec
Min xfer = 18048.00 KB
Children see throughput for 4 random writers = 107123.92 KB/sec
Parent sees throughput for 4 random writers = 20822.12 KB/sec
Min throughput per process = 1134.89 KB/sec
Max throughput per process = 103492.42 KB/sec
Avg throughput per process = 26780.98 KB/sec
Min xfer = 23040.00 KB
Children see throughput for 4 pwrite writers = 101778.14 KB/sec
Parent sees throughput for 4 pwrite writers = 82528.44 KB/sec
Min throughput per process = 23193.72 KB/sec
Max throughput per process = 29385.88 KB/sec
Avg throughput per process = 25444.54 KB/sec
Min xfer = 1658752.00 KB
Children see throughput for 4 pread readers = 46689.54 KB/sec
Parent sees throughput for 4 pread readers = 46682.30 KB/sec
Min throughput per process = 11376.91 KB/sec
Max throughput per process = 11910.46 KB/sec
Avg throughput per process = 11672.39 KB/sec
Min xfer = 2003328.00 KB
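The back-of-envelope arithmetic behind the "14 spindles should saturate 4x GigE" claim, as a quick sanity check. The ~75 MB/s per-spindle sequential rate is an assumed figure for typical 7.2k drives, not a measured one, and the ~118 MB/s per-link payload rate is an assumed practical GigE ceiling after TCP/iSCSI overhead:

```python
# Rough ceilings: array spindles vs. the 4x GigE iSCSI network.
spindles = 14
per_disk = 75                              # MB/s per spindle, assumed, not measured
write_ceiling = spindles // 2 * per_disk   # RAID10: every write lands on both mirrors
read_ceiling = spindles * per_disk         # reads can be served from either mirror
link = 118                                 # MB/s practical GigE payload, assumed
net_ceiling = 4 * link

print(write_ceiling, read_ceiling, net_ceiling)  # 525 1050 472
```

So even on the pessimistic side the network (~472 MB/s across 4 links) should be the bottleneck, yet iozone reports ~78 MB/s for sequential writes and ~55 MB/s for reads, several times lower than either ceiling.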