Implementing software RAID in a CentOS system

Category: CentOS Tutorials · Reads: 32681

1. Create test users and adjust the mount options



[root@localhost ~]# useradd user1    --create two test users
[root@localhost ~]# useradd user2
[root@localhost ~]# mount -o remount,usrquota,grpquota /mnt/sdb    --remount with the quota options
[root@localhost ~]# mount -l    --check the mount options
/dev/mapper/VolGroup-lv_root on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw,rootcontext="system_u:object_r:tmpfs_t:s0")
/dev/sda1 on /boot type ext4 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
/dev/sdb1 on /mnt/sdb type ext4 (rw,usrquota,grpquota)
[root@localhost ~]# quotacheck -avug -mf    --generate the two quota files
quotacheck: Your kernel probably supports journaled quota but you are not using it. Consider switching to journaled quota to avoid running quotacheck after an unclean shutdown.
quotacheck: Scanning /dev/sdb1 [/mnt/sdb] done
quotacheck: Cannot stat old user quota file: No such file or directory
quotacheck: Cannot stat old group quota file: No such file or directory
quotacheck: Cannot stat old user quota file: No such file or directory
quotacheck: Cannot stat old group quota file: No such file or directory
quotacheck: Checked 2 directories and 0 files
quotacheck: Old file not found.
quotacheck: Old file not found.
[root@localhost ~]# ll /mnt/sdb    --check the two generated files
total 26
-rw-------. 1 root root  6144 Jan  9 17:59 aquota.group
-rw-------. 1 root root  6144 Jan  9 17:59 aquota.user
drwx------. 2 root root 12288 Jan  9 17:55 lost+found
[root@localhost ~]# quotaon -avug    --turn the quota feature on
/dev/sdb1 [/mnt/sdb]: group quotas turned on
/dev/sdb1 [/mnt/sdb]: user quotas turned on
[root@localhost ~]# edquota -u user1    --user1 must not use more than 20M under /mnt/sdb
Disk quotas for user user1 (uid 500):
  Filesystem    blocks    soft    hard    inodes    soft    hard
  /dev/sdb1          0   10000   20000         0       0       0
[root@localhost ~]# edquota -u user2    --user2 is capped at 10M; anything above 5M must be cleared within the grace period
Disk quotas for user user2 (uid 501):
  Filesystem    blocks    soft    hard    inodes    soft    hard
  /dev/sdb1          0    5000   10000         0       0       0
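The soft and hard numbers that edquota stores are counts of 1 KB blocks, which is why 10 MB/20 MB appear above as 10000/20000. A minimal sketch of that conversion (the helper name is invented here; `setquota` is mentioned only as the standard non-interactive alternative to the edquota editor):

```shell
# edquota limits are in 1 KB blocks; the article uses 1000 blocks per MB.
mb_to_blocks() {
  echo $(( $1 * 1000 ))
}

soft=$(mb_to_blocks 10)   # user1's soft limit, 10 MB
hard=$(mb_to_blocks 20)   # user1's hard limit, 20 MB
# setquota applies the same numbers without opening an editor:
echo "setquota -u user1 $soft $hard 0 0 /dev/sdb1"
```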

2. Verify the quotas


[root@localhost ~]# su - user1
[user1@localhost ~]$ cd /mnt/sdb
[user1@localhost sdb]$ dd if=/dev/zero of=12 bs=1M count=5    --a 5M file is created with no warning, as expected
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.0525754 s, 99.7 MB/s
[user1@localhost sdb]$ ll -h 12
-rw-rw-r--. 1 user1 user1 5.0M Jan  9 18:16 12
[user1@localhost sdb]$ dd if=/dev/zero of=123 bs=1M count=21    --writing a 21M file triggers a warning and fails
sdb1: warning, user block quota exceeded.
sdb1: write failed, user block limit reached.
dd: writing `123': Disk quota exceeded
20+0 records in
19+0 records out
20475904 bytes (20 MB) copied, 0.20094 s, 102 MB/s
[user1@localhost sdb]$ ll -h 123
-rw-rw-r--. 1 user1 user1 0 Jan  9 18:17 123
[user1@localhost sdb]$ exit
logout
[root@localhost ~]# su - user2    --test with user2
[user2@localhost ~]$ cd /mnt/sdb
[user2@localhost sdb]$ dd if=/dev/zero of=23 bs=1M count=8    --writing an 8M file succeeds, with a soft-limit warning
sdb1: warning, user block quota exceeded.
8+0 records in
8+0 records out
8388608 bytes (8.4 MB) copied, 0.0923618 s, 90.8 MB/s
[user2@localhost sdb]$ ll -h 23    --check the file size
-rw-rw-r--. 1 user2 user2 8.0M Jan  9 18:23 23
[user2@localhost sdb]$ dd if=/dev/zero of=23 bs=1M count=11    --writing an 11M file fails
sdb1: warning, user block quota exceeded.
sdb1: write failed, user block limit reached.
dd: writing `23': Disk quota exceeded
10+0 records in
9+0 records out
10235904 bytes (10 MB) copied, 0.106298 s, 96.3 MB/s
[user2@localhost sdb]$
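The `19+0 records out` line for user1 is exactly what the 20000-block hard limit predicts: only 19 full 1 MiB records fit under 20000 KB before dd hits the quota. A quick check of that arithmetic:

```shell
hard_blocks=20000                     # user1's hard limit, in 1 KB blocks
hard_bytes=$(( hard_blocks * 1024 ))
bs=$(( 1024 * 1024 ))                 # dd bs=1M
full_records=$(( hard_bytes / bs ))   # whole records dd can complete
echo "full records: $full_records"    # prints 19, matching "19+0 records out"
```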

3. View the quota configuration, change the grace time, and disable quota


[root@localhost ~]# quota -vu user1 user2    --quota info for the specified users
Disk quotas for user user1 (uid 500):
     Filesystem  blocks   quota   limit   grace   files   quota   limit   grace
      /dev/sdb1       0   10000   20000               0       0       0
Disk quotas for user user2 (uid 501):
     Filesystem  blocks   quota   limit   grace   files   quota   limit   grace
      /dev/sdb1   8193*    5000   10000   6days       1       0       0
[root@localhost ~]# repquota -av    --quota info for all users
*** Report for user quotas on device /dev/sdb1
Block grace time: 7days; Inode grace time: 7days
                        Block limits                File limits
User            used    soft    hard  grace    used  soft  hard  grace
----------------------------------------------------------------------
root      --      13       0       0              2     0     0
user1     --       0   10000   20000              0     0     0
user2     +-    8193    5000   10000  6days       1     0     0

Statistics:
Total blocks: 7
Data blocks: 1
Entries: 3
Used average: 3.000000
[root@localhost ~]# edquota -t    --change the grace period (block days / inode days)
Grace period before enforcing soft limits for users:
Time units may be: days, hours, minutes, or seconds
  Filesystem             Block grace period     Inode grace period
  /dev/sdb1                     7days                  7days
[root@localhost ~]# vim /etc/warnquota.conf    --review the warning message settings
[root@localhost ~]# quotaoff /mnt/sdb    --turn the quota feature off
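repquota marks users over their soft block limit with a leading `+` in the flags column (user2 shows `+-` above). A sketch of pulling those users out of repquota-style rows with awk, using pasted sample text rather than live output:

```shell
# Sample rows shaped like repquota's per-user table: name, flags, then limits.
report='root -- 13 0 0 2 0 0
user1 -- 0 10000 20000 0 0 0
user2 +- 8193 5000 10000 1 0 0'

# Field 2 is the flags column; a leading "+" means the block soft limit is exceeded.
over=$(printf '%s\n' "$report" | awk '$2 ~ /^\+/ { print $1 }')
echo "over soft limit: $over"   # prints user2
```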

4. Partition the disks and change the partition type for software RAID


[root@localhost ~]# sfdisk -l    --how many disks does the system have?
Disk /dev/sda: 1044 cylinders, 255 heads, 63 sectors/track
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0
   Device Boot Start     End   #cyls    #blocks   Id  System
/dev/sda1   *     0+     63-     64-    512000   83  Linux
/dev/sda2        63+   1044-    981-   7875584   8e  Linux LVM
/dev/sda3         0       -       0          0    0  Empty
/dev/sda4         0       -       0          0    0  Empty

Disk /dev/sdb: 74 cylinders, 255 heads, 63 sectors/track    --second disk
Disk /dev/sdc: 79 cylinders, 255 heads, 63 sectors/track    --third disk
Disk /dev/sdd: 74 cylinders, 255 heads, 63 sectors/track    --fourth disk
Disk /dev/mapper/VolGroup-lv_root: 849 cylinders, 255 heads, 63 sectors/track
Disk /dev/mapper/VolGroup-lv_swap: 130 cylinders, 255 heads, 63 sectors/track
[root@localhost ~]# fdisk -cu /dev/sdb    --partition and change the partition type (do the same on the other disks; the output is shown only once)
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0x2255ec93.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First sector (2048-1196031, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-1196031, default 1196031): +100M

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 2
First sector (206848-1196031, default 206848):
Using default value 206848
Last sector, +sectors or +size{K,M,G} (206848-1196031, default 1196031): +100M

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 3
First sector (411648-1196031, default 411648):
Using default value 411648
Last sector, +sectors or +size{K,M,G} (411648-1196031, default 1196031): +100M

Command (m for help): t
Partition number (1-4): 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help): t
Partition number (1-4): 2
Hex code (type L to list codes): fd
Changed system type of partition 2 to fd (Linux raid autodetect)

Command (m for help): t
Partition number (1-4): 3
Hex code (type L to list codes): fd
Changed system type of partition 3 to fd (Linux raid autodetect)

Command (m for help): p

Disk /dev/sdb: 612 MB, 612368384 bytes
255 heads, 63 sectors/track, 74 cylinders, total 1196032 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x2255ec93

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048      206847      102400   fd  Linux raid autodetect
/dev/sdb2          206848      411647      102400   fd  Linux raid autodetect
/dev/sdb3          411648      616447      102400   fd  Linux raid autodetect

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@localhost ~]# partx -a /dev/sdb    --force the kernel to re-read the partition table
BLKPG: Device or resource busy
error adding partition 1
BLKPG: Device or resource busy
error adding partition 2
BLKPG: Device or resource busy
error adding partition 3
[root@localhost ~]# partx -a /dev/sdc    --force the kernel to re-read the partition table
BLKPG: Device or resource busy
error adding partition 1
BLKPG: Device or resource busy
error adding partition 2
BLKPG: Device or resource busy
error adding partition 3
[root@localhost ~]# partx -a /dev/sdd    --force the kernel to re-read the partition table
BLKPG: Device or resource busy
error adding partition 1
BLKPG: Device or resource busy
error adding partition 2
BLKPG: Device or resource busy
error adding partition 3
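Each `+100M` answer above carves out the same number of 512-byte sectors, which is why every partition in the printed table is exactly 204800 sectors (102400 of fdisk's 1 KB blocks) long. The arithmetic:

```shell
# 100 MiB expressed in 512-byte sectors and in fdisk's 1 KB blocks.
sectors=$(( 100 * 1024 * 1024 / 512 ))
blocks=$(( sectors / 2 ))
echo "$sectors sectors, $blocks blocks"   # 204800 sectors, 102400 blocks

# Matches the table: /dev/sdb1 runs from sector 2048 to 206847 inclusive.
span=$(( 206847 - 2048 + 1 ))
echo "span: $span"                        # 204800
```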

5. Build a RAID 0 array from the first partitions of the second and third disks


[root@localhost ~]# mdadm --create /dev/md0 --raid-devices=2 --level=0 /dev/sd{b,c}1    --RAID 0 from the first partitions of the second and third disks
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
[root@localhost ~]# cat /proc/mdstat    --check the RAID status
Personalities : [raid0]
md0 : active raid0 sdc1[1] sdb1[0]
      224256 blocks super 1.2 512k chunks
unused devices: <none>
[root@localhost ~]# mkfs.ext4 /dev/md0    --format the array
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
Stride=512 blocks, Stripe width=1024 blocks
56224 inodes, 224256 blocks
11212 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=67371008
28 block groups
8192 blocks per group, 8192 fragments per group
2008 inodes per group
Superblock backups stored on blocks:
	8193, 24577, 40961, 57345, 73729, 204801, 221185

Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 38 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
[root@localhost ~]# mount /dev/md0 /mnt/sdb
[root@localhost ~]#
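RAID 0 stripes data across both members in 512 KB chunks (the `512k chunks` line above), alternating disks chunk by chunk. A sketch of that mapping, where member 0 stands for sdb1 and member 1 for sdc1 (the helper name is invented for illustration):

```shell
chunk=$(( 512 * 1024 ))   # 512 KB chunk size, as reported by /proc/mdstat
ndisks=2                  # sdb1 and sdc1

# Which member holds the chunk containing a given byte offset?
member_of() {
  echo $(( ( $1 / chunk ) % ndisks ))
}

echo "offset 0    -> member $(member_of 0)"                  # first chunk on member 0
echo "offset 512K -> member $(member_of $chunk)"             # next chunk on member 1
echo "offset 1M   -> member $(member_of $(( 2 * chunk )) )"  # back to member 0
```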

6. Build a RAID 1 array from the second partitions of the second and third disks


[root@localhost ~]# mdadm --create /dev/md1 --raid-devices=2 --level=1 /dev/sd{b,c}2
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.
[root@localhost ~]# mkfs.ext4 /dev/md1
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
Stride=0 blocks, Stripe width=0 blocks
28112 inodes, 112320 blocks
5616 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=67371008
14 block groups
8192 blocks per group, 8192 fragments per group
2008 inodes per group
Superblock backups stored on blocks:
	8193, 24577, 40961, 57345, 73729

Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 35 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
[root@localhost ~]# mount /dev/md1 /mnt/sdb1/

7. Build a RAID 5 array from the third partitions of the second, third, and fourth disks


[root@localhost ~]# mdadm --create /dev/md2 --raid-devices=3 --level=5 /dev/sd{b,c,d}3
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md2 started.
[root@localhost ~]# mkfs.ext4 /dev/md2
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
Stride=512 blocks, Stripe width=1024 blocks
56224 inodes, 224256 blocks
11212 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=67371008
28 block groups
8192 blocks per group, 8192 fragments per group
2008 inodes per group
Superblock backups stored on blocks:
	8193, 24577, 40961, 57345, 73729, 204801, 221185

Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 35 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
[root@localhost ~]# mount /dev/md2 /mnt/sdb2/
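RAID 5 spends one member's worth of space on parity, so usable capacity is (n-1) members. An illustrative sketch with three 100 MiB partitions; the exact size `/proc/mdstat` reports also reflects metadata overhead and the actual partition sizes on the test disks, so the real figure differs:

```shell
member_kb=102400   # nominal size of each partition, in 1 KB blocks
n=3                # sdb3, sdc3, sdd3
usable_kb=$(( (n - 1) * member_kb ))   # one member's worth goes to parity
echo "$usable_kb KB usable"            # 204800 KB
```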

8. View RAID information


[root@localhost ~]# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4]
md2 : active raid5 sdd3[3] sdc3[1] sdb3[0]
      224256 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
md1 : active raid1 sdc2[1] sdb2[0]
      112320 blocks super 1.2 [2/2] [UU]
md0 : active raid0 sdc1[1] sdb1[0]
      224256 blocks super 1.2 512k chunks
unused devices: <none>
[root@localhost ~]# df -TH
Filesystem                    Type   Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root  ext4   6.9G  6.4G  166M  98% /
tmpfs                         tmpfs  262M     0  262M   0% /dev/shm
/dev/sda1                     ext4   508M   48M  435M  10% /boot
/dev/md0                      ext4   223M  6.4M  205M   3% /mnt/sdb
/dev/md1                      ext4   112M  5.8M  100M   6% /mnt/sdb1
/dev/md2                      ext4   223M  6.4M  205M   3% /mnt/sdb2
[root@localhost ~]#
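The layout of /proc/mdstat is regular enough to script against: each array's header line reads `mdN : active LEVEL members...`. A sketch that lists array names and levels from a snapshot of the output above (pasted as sample text here, not read from the live file):

```shell
mdstat='md2 : active raid5 sdd3[3] sdc3[1] sdb3[0]
md1 : active raid1 sdc2[1] sdb2[0]
md0 : active raid0 sdc1[1] sdb1[0]'

# On each header line, $1 is the array name and $4 the RAID level.
printf '%s\n' "$mdstat" | awk '/^md/ { print $1, $4 }'
```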

9. RAID failure recovery and building an LVM logical volume on RAID


[root@localhost ~]# mdadm -a /dev/md2 /dev/sdd1    --add a partition to the RAID 5 array
mdadm: added /dev/sdd1
[root@localhost ~]# mdadm -f /dev/md2 /dev/sdd3    --mark the third partition in the RAID 5 array as failed
mdadm: set /dev/sdd3 faulty in /dev/md2
[root@localhost ~]# mdadm -r /dev/md2 /dev/sdd3    --remove the failed partition from the RAID 5 array
mdadm: hot removed /dev/sdd3 from /dev/md2
[root@localhost ~]# cat /proc/mdstat    --check all partitions in the RAID 5 array
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4]
md2 : active raid5 sdd1[4] sdc3[1] sdb3[0]
      224256 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
md1 : active raid1 sdc2[1] sdb2[0]
      112320 blocks super 1.2 [2/2] [UU]
md0 : active raid0 sdc1[1] sdb1[0]
      224256 blocks super 1.2 512k chunks
unused devices: <none>
[root@localhost ~]# pvcreate /dev/md2    --turn the RAID 5 array into a physical volume
  Physical volume "/dev/md2" successfully created
[root@localhost ~]# vgcreate vg0 /dev/md2    --combine physical volumes into a volume group
  Volume group "vg0" successfully created
[root@localhost ~]# lvcreate -L 150M -n test /dev/vg0    --carve a logical volume out of the volume group
  Rounding up size to full physical extent 152.00 MiB
  Logical volume "test" created
[root@localhost ~]# mkfs.ext4 /dev/vg0/test    --format the logical volume
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
Stride=512 blocks, Stripe width=1024 blocks
38912 inodes, 155648 blocks
7782 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=67371008
19 block groups
8192 blocks per group, 8192 fragments per group
2048 inodes per group
Superblock backups stored on blocks:
	8193, 24577, 40961, 57345, 73729

Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 36 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
[root@localhost ~]# mount /dev/vg0/test /mnt/sdb2/    --mount it
[root@localhost ~]# df -TH
Filesystem                    Type   Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root  ext4   6.9G  6.4G  166M  98% /
tmpfs                         tmpfs  262M     0  262M   0% /dev/shm
/dev/sda1                     ext4   508M   48M  435M  10% /boot
/dev/md0                      ext4   223M  6.4M  205M   3% /mnt/sdb
/dev/md1                      ext4   112M  5.8M  100M   6% /mnt/sdb1
/dev/mapper/vg0-test          ext4   155M  5.8M  141M   4% /mnt/sdb2
[root@localhost ~]#
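The `Rounding up size to full physical extent 152.00 MiB` message comes from LVM's allocation unit: vgcreate defaults to 4 MiB physical extents, and lvcreate rounds the requested size up to a whole number of extents. The arithmetic:

```shell
pe_mib=4     # default physical extent size
req_mib=150  # lvcreate -L 150M
extents=$(( (req_mib + pe_mib - 1) / pe_mib ))   # round up to whole extents
echo "$(( extents * pe_mib )) MiB in $extents extents"   # 152 MiB in 38 extents
```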


Implementing logical volumes (LVM) on a Linux system: http://tongcheng.blog.51cto.com/6214144/1350144