Background: a vSphere virtual machine comes with two disks by default, one for the system and one for the /app partition. The task is to expand /app. After adding 100G to the /app disk in the vSphere console and rebooting, the host came up normally. First, a look at the relevant state and names.
After partitioning with fdisk:
lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
fd0 2:0 1 4K 0 disk
sda 8:0 0 100G 0 disk
├─sda1 8:1 0 500M 0 part /boot
└─sda2 8:2 0 99.5G 0 part
  ├─centos-root 253:0 0 41.5G 0 lvm /
  ├─centos-swap 253:1 0 8G 0 lvm [SWAP]
  └─centos-home 253:3 0 50G 0 lvm /home
sdb 8:16 0 200G 0 disk
├─sdb1 8:17 0 100G 0 part
│ └─bqopensu04_vg-bqopensu04_lv 253:2 0 99.5G 0 lvm /app
└─sdb2 8:18 0 100G 0 part
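The fdisk session that carved out /dev/sdb2 is not shown in the transcript; it presumably looked roughly like this (interactive answers, not exact output):
fdisk /dev/sdb
# n -> new partition, p -> primary, 2 -> partition number (sdb1 already exists)
# Enter, Enter -> accept the default first/last sector (use all of the new 100G)
# w -> write the table and exit
partprobe /dev/sdb # make the kernel re-read the partition table without a reboot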
pvcreate /dev/sdb2 # command
# output:
# Physical volume "/dev/sdb2" successfully created.
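Note: for /dev/sdb2 to appear under bqopensu04_vg in the pvs output below, it has to be added to that VG first; that step is missing from the transcript and was presumably:
vgextend bqopensu04_vg /dev/sdb2
# Volume group "bqopensu04_vg" successfully extended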
# check that the PV was created successfully
pvs # command
# output:
# PV VG Fmt Attr PSize PFree
# /dev/sda2 centos lvm2 a-- <99.51g 4.00m
# /dev/sdb1 bqopensu04_vg lvm2 a-- <100.00g 508.00m
# /dev/sdb2 bqopensu04_vg lvm2 a-- <100.00g <100.00g
vgs # command
# output:
# VG #PV #LV #SN Attr VSize VFree
# bqopensu04_vg 2 1 0 wz--n- 199.99g 100.49g
# centos 1 3 0 wz--n- <99.51g 4.00m
# extend the LV
lvextend -l +100%FREE /dev/bqopensu04_vg/bqopensu04_lv # normal output looks like this:
# Size of logical volume bqopensu04_vg/bqopensu04_lv changed from 99.50 GiB (25472 extents) to 199.99 GiB (51198 extents).
# Logical volume bqopensu04_vg/bqopensu04_lv successfully resized.
vgs
# VG            #PV #LV #SN Attr   VSize   VFree
# bqopensu04_vg   2   1   0 wz--n- 199.99g    0
# centos          1   3   0 wz--n- <99.51g 4.00m
lvs # command
# output:
# LV            VG            Attr       LSize   Pool Origin Data% Meta% Move Log Cpy%Sync Convert
# bqopensu04_lv bqopensu04_vg -wi-ao---- 199.99g # previously 99.50g
# home          centos        -wi-ao---- 50.00g
# root          centos        -wi-ao---- 41.50g
# swap          centos        -wi-ao---- 8.00g
xfs_growfs /app # command; output:
meta-data=/dev/mapper/bqopensu04_vg-bqopensu04_lv isize=512 agcount=4, agsize=6520832 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=26083328, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=12736, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 26083328 to 52426752
You have mail in /var/spool/mail/root
[root@bqopensu04 ~]# df -hT
Filesystem Type Size Used Avail Use% Mounted on
devtmpfs devtmpfs 16G 0 16G 0% /dev
tmpfs tmpfs 16G 4.0K 16G 1% /dev/shm
tmpfs tmpfs 16G 12M 16G 1% /run
tmpfs tmpfs 16G 0 16G 0% /sys/fs/cgroup
/dev/mapper/centos-root xfs 42G 37G 5.1G 88% /
/dev/sda1 xfs 497M 136M 362M 28% /boot
/dev/mapper/bqopensu04_vg-bqopensu04_lv xfs 200G 9.0G 192G 5% /app
/dev/mapper/centos-home xfs 50G 3.6G 47G 8% /home
tmpfs tmpfs 3.2G 0 3.2G
Partitioning: fdisk + device
Partition the disk: e.g. /dev/sdb becomes /dev/sdb1 /dev/sdb2 /dev/sdb3 after partitioning
Write a filesystem, e.g. ext4, xfs, etc.
# Example: /dev/sdc is the newly added disk,
# the vgname is centos,
# the lvname is /dev/mapper/centos-root
pvcreate /dev/sdc
vgextend centos /dev/sdc
lvextend -l +100%FREE /dev/mapper/centos-root
xfs_growfs /dev/mapper/centos-root
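Side note: lvextend can also grow the filesystem in the same command via -r (--resizefs), which calls fsadm and handles ext4 as well as xfs, so the last two steps can usually be collapsed into one:
lvextend -r -l +100%FREE /dev/mapper/centos-root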
Recommended reading:
linux fdisk 分区、格式化、挂载！_时光与流水的博客-CSDN博客_fdisk
Common commands. Check the current partition layout:
fdisk -l
Partition a disk:
fdisk /dev/sdb
# inside the interactive prompt, m shows help; the letters you normally need are:
Step 1: n - new partition
Step 2: choose p for a primary partition (on a non-GPT disk there are usually 1-2 primary partitions and 1-2 free slots, 4 primaries at most)
Step 3: choose a number 1 2 3 4, i.e. whether the result is named /dev/sdb3 or /dev/sdb4
Step 4: press Enter (or type a sector number) for the start, then again for the end; two plain Enters take the defaults and use all the remaining space.
Step 5: w to save, q to quit without saving
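For scripting, the same answers can be piped in. A minimal sketch, assuming a brand-new /dev/sdb with no existing partitions (double-check the device name first; pointing this at the wrong disk destroys data):
printf 'n\np\n1\n\n\nt\n8e\nw\n' | fdisk /dev/sdb # new primary partition 1 using the whole disk, type 8e (Linux LVM)
partprobe /dev/sdb # make the kernel re-read the partition table without a reboot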
# partition types you may need:
# 8e Linux LVM
# fd Linux raid auto
# extend the VG named vg-group-name
vgextend vg-group-name /dev/sdb3
# extend the LV
lvextend -L +800 /dev/myVG/vol01 # -L +size: grow by 800M
lvextend -L 1.5G /dev/myVG/vol01 # -L size: set to exactly 1.5G
Check:
e2fsck -f /dev/myVG/vol01
# -f Force checking even if the file system seems clean.
Resize the FS (for the filesystem, the expansion only takes effect at this point):
resize2fs /dev/myVG/vol01 # for ext3/ext4; for xfs use xfs_growfs <mountpoint>
To summarize:
RAID (together with distributed storage) is about keeping the service running when a disk dies, e.g. RAID 1 (at least two disks, mirrored) or RAID 5 (at least three disks).
LVM is about flexible disk management; the key terms are PV -> VG -> LV, where the VG is the pool of disks. After growing a disk, the simple path is a +100%FREE extend, with no need for risky operations like unmounting (the several Ubuntu DNS losses were umount's fault, not mine, heh).
Finally, screenshots of the LVM layout Ubuntu set up and of the disk I added myself:
The resize2fs line is what makes the filesystem pick up the new size; at the end, df -h shows the root filesystem grown by the capacity of the newly created /dev/sda3.
Deleting: if a VG's PVs have gone missing, first run vgreduce --removemissing vgname to recover it, then delete it.
----------------- The detailed LVM and RAID+LVM commands below are quoted from other articles, kept here for future reference ------
Create PVs:
pvcreate /dev/sdb1 /dev/sdb2
Verify the PVs:
pvdisplay
pvdisplay /dev/sdb1
pvdisplay /dev/sdb2
Remove PVs:
pvremove /dev/sdb1 /dev/sdb2
Create a VG:
vgcreate myVG /dev/sdb1 /dev/sdb2
or
vgcreate myVG /dev/sdb1; vgextend myVG /dev/sdb2
Verify the VG:
vgdisplay
vgscan # vgscan scans all SCSI(sd[a-t]), (E)IDE disks([hd[a-t]]), multiple devices(raid) and a bunch of other disk devices in the system looking for LVM physical volumes and volume groups.
Rename a VG:
vgrename myVG myNewVG
Remove a VG (if its PVs are missing, first recover it with vgreduce --removemissing vgname, then remove it):
vgremove myVG
Create LVs:
lvcreate -L 400 -n vol01 myVG; lvcreate --name vol01 --size 400M myVG
lvcreate -L 1000 -n vol02 myVG
# -L size, 400M, 1000M
# -n  the LV's name
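Besides -L with an absolute size, -l allocates by extents or by percentage, which is handy for "give it everything that's left" (names here are just illustrative):
lvcreate -l 100%FREE -n vol03 myVG # all remaining free space in the VG
lvcreate -l 50%VG -n vol04 myVG # half of the total VG size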
Verify the LVs:
lvdisplay
lvscan
Rename an LV:
lvrename myVG vol01 vol01_new; lvrename /dev/myVG/vol01 myVG/vol01_new
Remove an LV:
lvremove /dev/myVG/vol01
Create filesystems:
mkfs.ext3 /dev/myVG/vol01
mkfs.ext3 /dev/myVG/vol02
mkfs.xfs /dev/myVG/vol03
mkfs.reiserfs /dev/myVG/vol04
Mount the filesystems:
mkdir -p /data1 /data2
mount /dev/myVG/vol01 /data1
mount /dev/myVG/vol02 /data2
Unmount an FS:
umount /data1
Verify:
df -h
Add the mounts to /etc/fstab:
# file system|mount point|type|options|dump|pass
/dev/myVG/vol01 /data1 ext3 rw,noatime 0 0
/dev/myVG/vol02 /data2 ext3 rw,noatime 0 0
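Before relying on these entries, it is worth confirming they parse and mount cleanly:
mount -a # mounts everything listed in fstab; errors surface immediately
findmnt --verify # fstab syntax/device sanity check (needs a reasonably recent util-linux)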
Extend the VG:
vgextend vg-group-name /dev/sdb3
Extend the LV:
lvextend -L +800 /dev/myVG/vol01 # -L +size: grow by 800M
lvextend -L 1.5G /dev/myVG/vol01 # -L size: set to exactly 1.5G
Check:
e2fsck -f /dev/myVG/vol01
# -f Force checking even if the file system seems clean.
Resize the FS (for the filesystem, the expansion only takes effect at this point):
resize2fs /dev/myVG/vol01 # for ext3/ext4; for xfs use xfs_growfs <mountpoint>
resize_reiserfs /dev/fileserver/media # reiserfs
xfs_growfs /dev/fileserver/backup # xfs
meta-data=/dev/fileserver/backup isize=256    agcount=8, agsize=163840 blks
         =                       sectsz=512   attr=0
data     =                       bsize=4096   blocks=1310720, imaxpct=25
         =                       sunit=0      swidth=0 blks, unwritten=1
naming   =version 2              bsize=4096
log      =internal               bsize=4096   blocks=2560, version=1
         =                       sectsz=512   sunit=0 blks
realtime =none                   extsz=65536  blocks=0, rtextents=0
Mount:
mount /data1
------ Shrinking a filesystem ----------------
Unmount first:
umount /data1
Check:
e2fsck -f /dev/myVG/vol01
Shrink the fs:
resize2fs /dev/myVG/vol01 2G
Shrink the LV:
lvreduce -L 1G /dev/myVG/vol01 # -L size: set to 1G
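Shrinking is the one direction where ordering matters: the filesystem must shrink before (or together with) the LV, and xfs cannot be shrunk at all. A safer equivalent of the steps above is to let lvreduce drive the filesystem resize (a sketch, assuming ext3/ext4):
umount /data1
lvreduce -r -L 1G /dev/myVG/vol01 # -r runs fsck + resize2fs down to 1G before shrinking the LV
mount /data1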
-- Adding a new PV to an existing VG ----------------------------------
Create a partition:
fdisk /dev/sdf # type 8e Linux LVM
Create the PV:
pvcreate /dev/sdf1
Add it to the VG:
vgextend myVG /dev/sdf1
Verify:
vgdisplay
-- Removing a PV from a VG ----------------------------------
Migrate the data off it first:
pvmove /dev/sdb_old /dev/sdf_new
Then remove the old PV from the VG:
vgreduce myVG /dev/sdb_old
Confirm the VG no longer contains the old PV:
vgdisplay
Remove the old PV:
pvremove /dev/sdb_old
Verify:
pvdisplay
----- Tearing everything down --------------------------------------
umount /data1
lvremove /dev/myVG/vol01
vgremove myVG
pvremove /dev/sdb1 /dev/sdb2
restore /etc/fstab manually
shutdown -r now
Verify:
vgdisplay
pvdisplay
lvdisplay
df -h
===========LVM On RAID1===========
LV |/dev/myVG/share | /dev/myVG/backup | /dev/myVG/media
---|-------------------|----------------------|------------------------
VG | myVG
---|----------------------------------------------------------------
PV | /dev/md0 | /dev/md1
|----------------------------------------------------------------
|/dev/sdb1 | /dev/sdc1 | /dev/sdd1 | /dev/sde1
---|----------------------------------------------------------------
Build the initial state:
pvcreate /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
vgcreate myVG /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
lvcreate --name share --size 40G myVG
lvcreate --name backup --size 5G myVG
lvcreate --name media --size 1G myVG
mkfs.ext3 /dev/myVG/share
mkfs.xfs /dev/myVG/backup
mkfs.reiserfs /dev/myVG/media
mount /dev/myVG/share /var/share
mount /dev/myVG/backup /var/backup
mount /dev/myVG/media /var/media
df -h
Start the rebuild.
Clear sdc1 and sde1:
modprobe dm-mirror
pvmove /dev/sdc1
pvmove /dev/sde1
vgreduce myVG /dev/sdc1
vgreduce myVG /dev/sde1
pvremove /dev/sdc1
pvremove /dev/sde1
Repartition them as RAID type:
fdisk /dev/sdc # type fd Linux raid auto
fdisk /dev/sde # type fd Linux raid auto
Create the RAID arrays:
# add /dev/sdc1 to /dev/md0 and /dev/sde1 to /dev/md1.
# Because the second nodes (/dev/sdb1 and /dev/sdd1) are not ready yet, we must specify missing in the following commands:
mdadm --create /dev/md0 --auto=yes -l 1 -n 2 /dev/sdc1 missing
# -l level
# -n Specify the number of active devices in the array.
mdadm --create /dev/md1 --auto=yes -l 1 -n 2 /dev/sde1 missing
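To confirm the degraded arrays came up as intended (one active member, one slot missing):
mdadm --detail /dev/md0 # should show one active device and one "removed" slot
cat /proc/mdstat # md0/md1 appear as raid1 with [2/1] and [U_] or [_U]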
Create PVs on the RAID devices:
pvcreate /dev/md0 /dev/md1
Add the new RAID PVs to the VG:
vgextend myVG /dev/md0 /dev/md1
Verify:
pvdisplay
# PV Name /dev/sdb1
# PV Name /dev/sdd1
# PV Name /dev/md0
# PV Name /dev/md1
Migrate the data:
pvmove /dev/sdb1 /dev/md0
pvmove /dev/sdd1 /dev/md1
Remove /dev/sdb1 and /dev/sdd1 from the VG and delete them:
vgreduce myVG /dev/sdb1 /dev/sdd1
pvremove /dev/sdb1 /dev/sdd1
Verify:
pvdisplay
# PV Name /dev/md0
# PV Name /dev/md1
Set /dev/sdb and /dev/sdd to RAID type:
fdisk /dev/sdb # type fd Linux raid auto
fdisk /dev/sdd # type fd Linux raid auto
# add /dev/sdb1 to /dev/md0 and /dev/sdd1 to /dev/md1:
mdadm --manage /dev/md0 --add /dev/sdb1
mdadm --manage /dev/md1 --add /dev/sdd1
Watch the sync progress until it reaches 100%:
cat /proc/mdstat
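Rather than re-running cat by hand, watch refreshes it automatically:
watch -n 2 cat /proc/mdstat # redraws every 2 seconds; Ctrl-C to quit once resync hits 100%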
=========Replacing The Hard Disks With Bigger Ones=====
# The procedure is as follows:
# first we remove /dev/sdb and /dev/sdd from the RAID arrays, replace them with bigger hard disks,
# put them back into the RAID arrays, and then we do the same again with /dev/sdc and /dev/sde.
# First we mark /dev/sdb1 as failed:
mdadm --manage /dev/md0 --fail /dev/sdb1
Verify:
cat /proc/mdstat
Remove it:
mdadm --manage /dev/md0 --remove /dev/sdb1
Verify:
cat /proc/mdstat
Do the same for /dev/sdd1:
mdadm --manage /dev/md1 --fail /dev/sdd1
cat /proc/mdstat
mdadm --manage /dev/md1 --remove /dev/sdd1
cat /proc/mdstat
# now shut it down, pull out the 25GB /dev/sdb and /dev/sdd and replace them with 80GB ones.
Partition the new sdb and sdd:
fdisk /dev/sdb # type fd Linux raid auto
fdisk /dev/sdd # type fd Linux raid auto
Add the new partitions back into the RAID arrays:
mdadm --manage /dev/md0 --add /dev/sdb1
mdadm --manage /dev/md1 --add /dev/sdd1
Watch the sync progress until it reaches 100%:
cat /proc/mdstat
# Now we do the same process again, this time replacing /dev/sdc and /dev/sde:
mdadm --manage /dev/md0 --fail /dev/sdc1
mdadm --manage /dev/md0 --remove /dev/sdc1
mdadm --manage /dev/md1 --fail /dev/sde1
mdadm --manage /dev/md1 --remove /dev/sde1
Insert the new sdc and sde:
fdisk /dev/sdc # type fd Linux raid auto
fdisk /dev/sde # type fd Linux raid auto
mdadm --manage /dev/md0 --add /dev/sdc1
mdadm --manage /dev/md1 --add /dev/sde1
# Wait until the synchronization has finished.
cat /proc/mdstat
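One step the quoted excerpt stops short of: after both mirrors are rebuilt on the 80GB disks, the md arrays and PVs still have their old size. To actually use the extra space, the usual follow-up (standard mdadm/LVM commands, not shown in the original) is:
mdadm --grow /dev/md0 --size=max # expand the array to the new partition size
mdadm --grow /dev/md1 --size=max
pvresize /dev/md0 # let LVM see the bigger PV
pvresize /dev/md1
vgdisplay # VFree should now include the added capacity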
===== If the PVs are prepared in advance, the RAID VG can be built directly like this =====
mdadm --create /dev/md2 --auto=yes -l 1 -n 2 /dev/sdb2 /dev/sdc2
mdadm --create /dev/md3 --auto=yes -l 1 -n 2 /dev/sdd2 /dev/sde2
cat /proc/mdstat # Wait until the synchronization has finished.
Original article: Linux LVM 简单操作 - goozgk - 博客园
------------------------------------------------------------- Other notes: umount and DNS ---------------------------------------------------------------------------
------------------------- Finally figured out why DNS kept breaking: it was my umount/mount games with the root directory. So how do you add another disk's capacity to the root filesystem? Configuring IPs on Ubuntu 16 and 18 is not hard; see the article in my private space. -----------------------
Further reading on Ubuntu 18 DNS: Ubuntu 18.04的DNS问题(已解决) - openthings的个人空间 - OSCHINA - 中文开源技术交流社区
Kubernetes Pods could not resolve DNS (Ubuntu 16.04); after some digging that was solved (see: Kubernetes中的Pod无法访问外网-Ubuntu16.04 LTS). But after upgrading to Ubuntu 18.04 the problem came back, and the old fix no longer worked!
What now?
Another DNS article: Ubuntu 18.04设置dns - breezey - 博客园
Recently I ran some services on the latest Ubuntu 18.04 and noticed the server frequently losing connectivity, mainly certain domains failing to resolve.
Checking /etc/resolv.conf, the nameserver I had set kept being rewritten to 127.0.0.53; whatever I changed it to, it reverted after a while.
The comment at the top of /etc/resolv.conf explains why:
# This file is managed by man:systemd-resolved(8). Do not edit.
So the file is managed by the systemd-resolved service.
netstat -tnpl | grep systemd-resolved shows that this service listens on port 53.
Its configuration file is /etc/systemd/resolved.conf, which looks roughly like this:
[Resolve]
DNS=1.1.1.1 1.0.0.1
#FallbackDNS=
#Domains=
LLMNR=no
#MulticastDNS=no
#DNSSEC=no
#Cache=yes
#DNSStubListener=yes
To make the settings in /etc/resolv.conf take effect permanently, add them to the DNS option in this systemd-resolved config file (as in the example above, where that change is already made), then restart the systemd-resolved service.
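That restart is presumably just:
systemctl restart systemd-resolved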
An even simpler option is to stop the systemd-resolved service outright; after that, edits to /etc/resolv.conf stick.
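Concretely, that second approach looks something like this (note: on Ubuntu 18.04 /etc/resolv.conf is usually a symlink into /run/systemd/resolve/, so replace it with a regular file):
systemctl disable --now systemd-resolved # stop the stub listener on 127.0.0.53
rm /etc/resolv.conf # typically a symlink to the stub file
printf 'nameserver 1.1.1.1\nnameserver 1.0.0.1\n' > /etc/resolv.conf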