
Ceph Cluster Setup Notes


Environment preparation


Base environment

Hostname    IP
node00      192.168.247.144
node01      192.168.247.135
node02      192.168.247.143

VMware did not hand out consecutive IPs; that doesn't matter, so we continue.


Configure passwordless SSH login



  1. Set the hostname on each node

hostnamectl set-hostname node00   # run on the first node
hostnamectl set-hostname node01   # run on the second node
hostnamectl set-hostname node02   # run on the third node


  2. Edit the hosts file

[root@node00 ~]# vi /etc/hosts
192.168.247.144 node00
192.168.247.135 node01
192.168.247.143 node02
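
The same three entries must be present on every node. A quick reachability check from the admin node, assuming the hosts file above is in place:

for h in node00 node01 node02; do ping -c 1 "$h" > /dev/null && echo "$h reachable"; done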


  3. The official docs recommend not using built-in system accounts; create a user named ceph_user and set its password to 123456:

useradd -d /home/ceph_user -m ceph_user
passwd ceph_user
# Grant passwordless root privileges via sudo
echo "ceph_user ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph_user
sudo chmod 0440 /etc/sudoers.d/ceph_user
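
A quick way to confirm the sudoers rule took effect (a sketch; it should print "root" without asking for a password):

su - ceph_user -c 'sudo whoami'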


  4. Generate a key: switch user with su ceph_user, run ssh-keygen, and press Enter through the prompts to generate the RSA key pair.
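
     If you prefer to script this step, ssh-keygen can accept all the defaults non-interactively; a sketch run as ceph_user:

su - ceph_user -c 'ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa'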

  5. Distribute the public key to each node

ssh-copy-id ceph_user@node00
ssh-copy-id ceph_user@node01
ssh-copy-id ceph_user@node02
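
To verify that passwordless login works from the admin node (each command should print the remote hostname without a password prompt):

for h in node00 node01 node02; do ssh ceph_user@"$h" hostname; done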


  6. Edit ~/.ssh/config on the admin node to simplify SSH input. The admin node has both root and ceph_user accounts, and ssh logs in as the current user by default, so connecting as root would still prompt for a password. The config below makes every connection go out as ceph_user instead.

su root
vim /root/.ssh/config

Paste in the following content:

Host node00
    Hostname node00
    User ceph_user
Host node01
    Hostname node01
    User ceph_user
Host node02
    Hostname node02
    User ceph_user

Note: fix the file permissions; do not use the wide-open 777 mode:

chmod 600 ~/.ssh/config

NTP time synchronization

# Install the tools
yum install ntp ntpdate ntp-doc -y
# Make sure the timezone is correct, then enable ntpd at boot:
systemctl enable ntpd
# Sync the clock once at boot by appending to /etc/rc.d/rc.local:
echo "/usr/sbin/ntpdate ntp1.aliyun.com > /dev/null 2>&1; /sbin/hwclock -w" >> /etc/rc.d/rc.local
# Then re-sync every hour via cron: run crontab -e and add the line below
0 */1 * * * /usr/sbin/ntpdate ntp1.aliyun.com > /dev/null 2>&1; /sbin/hwclock -w
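
Once ntpd is up, you can check that the node has actually picked an upstream server; a minimal sketch:

systemctl start ntpd
ntpq -p   # the server currently selected for sync is marked with an asterisk (*)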

Set up Ceph mirror repositories

rpm -ivh https://mirrors.aliyun.com/ceph/rpm-mimic/el7/noarch/ceph-release-1-1.el7.noarch.rpm

yum install epel-release

yum install ceph-deploy python-setuptools python2-subprocess32

Then write the mirror repo configuration:

echo '
[Ceph]
name=Ceph packages for $basearch
baseurl=https://mirrors.tuna.tsinghua.edu.cn/ceph/rpm-mimic/el7/x86_64/
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
[Ceph-noarch]
name=Ceph noarch packages
# Official upstream source
#baseurl=http://download.ceph.com/rpm-mimic/el7/noarch
# Tsinghua mirror
baseurl=https://mirrors.tuna.tsinghua.edu.cn/ceph/rpm-mimic/el7/noarch/
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
[ceph-source]
name=Ceph source packages
baseurl=https://mirrors.tuna.tsinghua.edu.cn/ceph/rpm-mimic/el7/SRPMS/
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc' > /etc/yum.repos.d/ceph.repo
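
Because the deploy step later uses --no-adjust-repos, this repo file must exist on every node, not just the admin node. A sketch that copies it out, relying on the ceph_user SSH access and NOPASSWD sudo configured earlier:

for h in node01 node02; do
  ssh "$h" 'sudo tee /etc/yum.repos.d/ceph.repo > /dev/null' < /etc/yum.repos.d/ceph.repo
done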


  1. Open the required ports; in a non-production environment you can simply disable the firewall:

systemctl stop firewalld.service
systemctl disable firewalld.service


  2. Disable SELinux:

setenforce 0

To make it permanent, edit /etc/selinux/config and set:

SELINUX=disabled
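
The same edit can be done in one line and verified immediately; a sketch:

sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
grep '^SELINUX=' /etc/selinux/config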

Installation



  1. Install the ceph-deploy tool; on the master node run:

yum update && yum -y install ceph ceph-deploy

It can also be installed on its own from the repo configured earlier (yum install ceph-deploy python-setuptools python2-subprocess32).



  2. Create a working directory

mkdir -p /opt/ceph/ceph-cluster && cd /opt/ceph/ceph-cluster


  3. Create the cluster:

ceph-deploy new node00 node01 node02

This generates a configuration file:

vi /opt/ceph/ceph-cluster/ceph.conf

[global]
fsid = 7d75db39-d457-4764-b7d2-1b48645b2781
mon_initial_members = node00, node01, node02
mon_host = 192.168.247.144,192.168.247.135,192.168.247.143
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
# Public network
public network = 192.168.247.0/24
# Default number of replicas per pool (the upstream default is 3)
osd pool default size = 2
# Tolerate larger clock drift
mon clock drift allowed = 2
mon clock drift warn backoff = 30
# Allow pools to be deleted
mon_allow_pool_delete = true
[mgr]
# Enable the web dashboard
mgr modules = dashboard

Here public network is the cluster's public subnet (for a VMware VM, the subnet of the vmnet8 NIC).
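
If you edit ceph.conf after the nodes have already received a copy, the file has to be pushed out again; a sketch from the working directory:

cd /opt/ceph/ceph-cluster
ceph-deploy --overwrite-conf config push node00 node01 node02
systemctl restart ceph-mon.target   # restart daemons that should pick up the change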



  4. Deploy the packages to the nodes

ceph-deploy install node00 node01 node02 --no-adjust-repos

The --no-adjust-repos flag tells ceph-deploy not to rewrite the repo files on the nodes, so the mirrors configured above keep providing the speed-up.



  5. Initialize the monitors:

ceph-deploy mon create-initial
## if ceph.conf has changed since the last run: ceph-deploy --overwrite-conf mon create-initial

After this finishes, several key files ending in .keyring appear in the local working directory.
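
To confirm, list them; on this ceph-deploy version the set typically includes the monitor keyring, the admin keyring, and several bootstrap keyrings:

ls -l /opt/ceph/ceph-cluster/*.keyring
# roughly: ceph.mon.keyring, ceph.client.admin.keyring, ceph.bootstrap-{mds,mgr,osd,rgw}.keyring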



  6. Push the admin config and keyrings to all nodes

ceph-deploy admin node00 node01 node02

[root@node00 ceph-cluster]# ceph-deploy admin node00 node01 node02
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy admin node00 node01 node02
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf :
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] client : ['node00', 'node01', 'node02']
[ceph_deploy.cli][INFO ] func :
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to node00
[node00][DEBUG ] connected to host: node00
[node00][DEBUG ] detect platform information from remote host
[node00][DEBUG ] detect machine type
[node00][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to node01
[node01][DEBUG ] connection detected need for sudo
[node01][DEBUG ] connected to host: node01
[node01][DEBUG ] detect platform information from remote host
[node01][DEBUG ] detect machine type
[node01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to node02
[node02][DEBUG ] connection detected need for sudo
[node02][DEBUG ] connected to host: node02
[node02][DEBUG ] detect platform information from remote host
[node02][DEBUG ] detect machine type
[node02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[root@node00 ceph-cluster]# ls /etc/ceph/
ceph.client.admin.keyring  ceph.conf  rbdmap  tmp4R_f2T  tmphCOe3j
[root@node00 ceph-cluster]# ceph-deploy mgr create node00 node01 node02
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy mgr create node00 node01 node02
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] mgr : [('node00', 'node00'), ('node01', 'node01'), ('node02', 'node02')]
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : create
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf :
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func :
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.mgr][DEBUG ] Deploying mgr, cluster ceph hosts node00:node00 node01:node01 node02:node02
[node00][DEBUG ] connected to host: node00
[node00][DEBUG ] detect platform information from remote host
[node00][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO ] Distro info: CentOS Linux 7.9.2009 Core
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to node00
[node00][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[node00][WARNIN] mgr keyring does not exist yet, creating one
[node00][DEBUG ] create a keyring file
[node00][DEBUG ] create path recursively if it doesn't exist
[node00][INFO ] Running command: ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.node00 mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-node00/keyring
[node00][INFO ] Running command: systemctl enable ceph-mgr@node00
[node00][WARNIN] Created symlink from /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@node00.service to /usr/lib/systemd/system/ceph-mgr@.service.
[node00][INFO ] Running command: systemctl start ceph-mgr@node00
[node00][INFO ] Running command: systemctl enable ceph.target
[node01][DEBUG ] connection detected need for sudo
[node01][DEBUG ] connected to host: node01
[node01][DEBUG ] detect platform information from remote host
[node01][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO ] Distro info: CentOS Linux 7.9.2009 Core
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to node01
[node01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[node01][WARNIN] mgr keyring does not exist yet, creating one
[node01][DEBUG ] create a keyring file
[node01][DEBUG ] create path recursively if it doesn't exist
[node01][INFO ] Running command: sudo ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.node01 mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-node01/keyring
[node01][INFO ] Running command: sudo systemctl enable ceph-mgr@node01
[node01][WARNIN] Created symlink from /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@node01.service to /usr/lib/systemd/system/ceph-mgr@.service.
[node01][INFO ] Running command: sudo systemctl start ceph-mgr@node01
[node01][INFO ] Running command: sudo systemctl enable ceph.target
[node02][DEBUG ] connection detected need for sudo
[node02][DEBUG ] connected to host: node02
[node02][DEBUG ] detect platform information from remote host
[node02][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO ] Distro info: CentOS Linux 7.9.2009 Core
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to node02
[node02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[node02][WARNIN] mgr keyring does not exist yet, creating one
[node02][DEBUG ] create a keyring file
[node02][DEBUG ] create path recursively if it doesn't exist
[node02][INFO ] Running command: sudo ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.node02 mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-node02/keyring
[node02][INFO ] Running command: sudo systemctl enable ceph-mgr@node02
[node02][WARNIN] Created symlink from /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@node02.service to /usr/lib/systemd/system/ceph-mgr@.service.
[node02][INFO ] Running command: sudo systemctl start ceph-mgr@node02
[node02][INFO ] Running command: sudo systemctl enable ceph.target


  7. Install mgr (the manager daemon); it is required for releases after 12.x, and since we installed the latest version we run:

ceph-deploy mgr create node00 node01 node02


  8. Create the OSDs (object storage devices) and mount them

Check the attached disks with fdisk -l.

Create an OSD on each node's attached disk:

ceph-deploy osd create --data /dev/sdb node00
ceph-deploy osd create --data /dev/sdb node01
ceph-deploy osd create --data /dev/sdb node02
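
Once the three commands finish, it is worth confirming that every OSD registered and came up; a quick check:

ceph osd tree   # expect osd.0 through osd.2, one per host, all marked "up"
ceph osd stat   # expect something like: 3 osds: 3 up, 3 in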


  9. Verify the installation:

ceph -s


Set up the management dashboard



  1. Enable the dashboard module

ceph mgr module enable dashboard


  2. Generate a self-signed certificate

ceph dashboard create-self-signed-cert


  3. Create a directory

mkdir mgr-dashboard && cd mgr-dashboard

[root@node00 mgr-dashboard]# pwd
/opt/ceph/ceph-cluster/mgr-dashboard



  4. Generate a key pair

cd /opt/ceph/ceph-cluster/mgr-dashboard
openssl req -new -nodes -x509 -subj "/O=IT/CN=ceph-mgr-dashboard" -days 3650 -keyout dashboard.key -out dashboard.crt -extensions v3_ca

[root@node00 mgr-dashboard]# ll
total 8
-rw-rw-r-- 1 ceph_user ceph_user 1155 Jul 14 02:26 dashboard.crt
-rw-rw-r-- 1 ceph_user ceph_user 1704 Jul 14 02:26 dashboard.key
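
The guide never wires this key pair into the dashboard, and on many releases the certificate from step 2 is already sufficient. If you do want the dashboard to serve this certificate, the commands differ by release; a hedged sketch:

# Mimic-era releases stored the pair via config-key:
ceph config-key set mgr/dashboard/crt -i dashboard.crt
ceph config-key set mgr/dashboard/key -i dashboard.key
# Nautilus and later use dedicated dashboard commands instead:
# ceph dashboard set-ssl-certificate -i dashboard.crt
# ceph dashboard set-ssl-certificate-key -i dashboard.key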



  5. Restart the dashboard

# disable and re-enable the module so it reloads with the new settings
ceph mgr module disable dashboard
ceph mgr module enable dashboard


  6. Set the IP and port

ceph config set mgr mgr/dashboard/server_addr 192.168.247.146
ceph config set mgr mgr/dashboard/server_port 18843


  7. Disable HTTPS

ceph config set mgr mgr/dashboard/ssl false


  8. Check the service info

[root@node00 ceph-cluster]# ceph mgr services
{
"dashboard": "http://192.168.247.146:18843/"
}



  9. Set the dashboard admin username and password

ceph dashboard set-login-credentials admin admin


  10. Visit http://192.168.247.146:18843/ in a browser.


Troubleshooting notes


ceph -s reports problems:

This may be a time-synchronization problem:

systemctl start ntpd                 # the newly added node had not started ntpd
systemctl restart ceph-mon.target    # then restart the monitors
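
To confirm the skew is resolved, check again from the admin node; a sketch:

ceph health detail      # any remaining skew shows up as a MON_CLOCK_SKEW warning
ceph time-sync-status   # reports the measured skew for each monitor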

Common ceph-deploy mon issues:

ceph-deploy mon fails with mon.node40 monitor is not yet in quorum, tries left: 5:

[root@node40 ceph-cluster]# ceph-deploy --overwrite-conf mon create-initial
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy --overwrite-conf mon create-initial
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : True
[ceph_deploy.cli][INFO ] subcommand : create-initial
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf :
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func :
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] keyrings : None
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts node40 node41 node42
[ceph_deploy.mon][DEBUG ] detecting platform for host node40 ...
[node40][DEBUG ] connection detected need for sudo
[node40][DEBUG ] connected to host: node40
[node40][DEBUG ] detect platform information from remote host
[node40][DEBUG ] detect machine type
[node40][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO ] distro info: CentOS Linux 7.9.2009 Core
[node40][DEBUG ] determining if provided host has same hostname in remote
[node40][DEBUG ] get remote short hostname
[node40][DEBUG ] deploying mon to node40
[node40][DEBUG ] get remote short hostname
[node40][DEBUG ] remote hostname: node40
[node40][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[node40][DEBUG ] create the mon path if it does not exist
[node40][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-node40/done
[node40][DEBUG ] create a done file to avoid re-doing the mon deployment
[node40][DEBUG ] create the init path if it does not exist
[node40][INFO ] Running command: sudo systemctl enable ceph.target
[node40][INFO ] Running command: sudo systemctl enable ceph-mon@node40
[node40][INFO ] Running command: sudo systemctl start ceph-mon@node40
[node40][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node40.asok mon_status
[node40][DEBUG ] ********************************************************************************
[node40][DEBUG ] status for monitor: mon.node40
[node40][DEBUG ] {
[node40][DEBUG ] "election_epoch": 1,
[node40][DEBUG ] "extra_probe_peers": [
[node40][DEBUG ] {
[node40][DEBUG ] "addrvec": [
[node40][DEBUG ] {
[node40][DEBUG ] "addr": "192.168.247.142:6789",
[node40][DEBUG ] "nonce": 0,
[node40][DEBUG ] "type": "v1"
[node40][DEBUG ] }
[node40][DEBUG ] ]
[node40][DEBUG ] },
[node40][DEBUG ] {
[node40][DEBUG ] "addrvec": [
[node40][DEBUG ] {
[node40][DEBUG ] "addr": "192.168.247.141:3300",
[node40][DEBUG ] "nonce": 0,
[node40][DEBUG ] "type": "v2"
[node40][DEBUG ] },
[node40][DEBUG ] {
[node40][DEBUG ] "addr": "192.168.247.141:6789",
[node40][DEBUG ] "nonce": 0,
[node40][DEBUG ] "type": "v1"
[node40][DEBUG ] }
[node40][DEBUG ] ]
[node40][DEBUG ] },
[node40][DEBUG ] {
[node40][DEBUG ] "addrvec": [
[node40][DEBUG ] {
[node40][DEBUG ] "addr": "192.168.247.142:3300",
[node40][DEBUG ] "nonce": 0,
[node40][DEBUG ] "type": "v2"
[node40][DEBUG ] },
[node40][DEBUG ] {
[node40][DEBUG ] "addr": "192.168.247.142:6789",
[node40][DEBUG ] "nonce": 0,
[node40][DEBUG ] "type": "v1"
[node40][DEBUG ] }
[node40][DEBUG ] ]
[node40][DEBUG ] }
[node40][DEBUG ] ],
[node40][DEBUG ] "feature_map": {
[node40][DEBUG ] "mon": [
[node40][DEBUG ] {
[node40][DEBUG ] "features": "0x3ffddff8ffecffff",
[node40][DEBUG ] "num": 1,
[node40][DEBUG ] "release": "luminous"
[node40][DEBUG ] }
[node40][DEBUG ] ]
[node40][DEBUG ] },
[node40][DEBUG ] "features": {
[node40][DEBUG ] "quorum_con": "0",
[node40][DEBUG ] "quorum_mon": [],
[node40][DEBUG ] "required_con": "0",
[node40][DEBUG ] "required_mon": []
[node40][DEBUG ] },
[node40][DEBUG ] "monmap": {
[node40][DEBUG ] "created": "2022-04-08 14:14:20.855876",
[node40][DEBUG ] "epoch": 0,
[node40][DEBUG ] "features": {
[node40][DEBUG ] "optional": [],
[node40][DEBUG ] "persistent": []
[node40][DEBUG ] },
[node40][DEBUG ] "fsid": "b3299c95-745f-467f-91e4-a3e30c490483",
[node40][DEBUG ] "min_mon_release": 0,
[node40][DEBUG ] "min_mon_release_name": "unknown",
[node40][DEBUG ] "modified": "2022-04-08 14:14:20.855876",
[node40][DEBUG ] "mons": [
[node40][DEBUG ] {
[node40][DEBUG ] "addr": "192.168.247.140:6789/0",
[node40][DEBUG ] "name": "node40",
[node40][DEBUG ] "public_addr": "192.168.247.140:6789/0",
[node40][DEBUG ] "public_addrs": {
[node40][DEBUG ] "addrvec": [
[node40][DEBUG ] {
[node40][DEBUG ] "addr": "192.168.247.140:3300",
[node40][DEBUG ] "nonce": 0,
[node40][DEBUG ] "type": "v2"
[node40][DEBUG ] },
[node40][DEBUG ] {
[node40][DEBUG ] "addr": "192.168.247.140:6789",
[node40][DEBUG ] "nonce": 0,
[node40][DEBUG ] "type": "v1"
[node40][DEBUG ] }
[node40][DEBUG ] ]
[node40][DEBUG ] },
[node40][DEBUG ] "rank": 0
[node40][DEBUG ] },
[node40][DEBUG ] {
[node40][DEBUG ] "addr": "192.168.247.142:6789/0",
[node40][DEBUG ] "name": "node42",
[node40][DEBUG ] "public_addr": "192.168.247.142:6789/0",
[node40][DEBUG ] "public_addrs": {
[node40][DEBUG ] "addrvec": [
[node40][DEBUG ] {
[node40][DEBUG ] "addr": "192.168.247.142:6789",
[node40][DEBUG ] "nonce": 0,
[node40][DEBUG ] "type": "v1"
[node40][DEBUG ] }
[node40][DEBUG ] ]
[node40][DEBUG ] },
[node40][DEBUG ] "rank": 1
[node40][DEBUG ] },
[node40][DEBUG ] {
[node40][DEBUG ] "addr": "0.0.0.0:0/1",
[node40][DEBUG ] "name": "node41",
[node40][DEBUG ] "public_addr": "0.0.0.0:0/1",
[node40][DEBUG ] "public_addrs": {
[node40][DEBUG ] "addrvec": [
[node40][DEBUG ] {
[node40][DEBUG ] "addr": "0.0.0.0:0",
[node40][DEBUG ] "nonce": 1,
[node40][DEBUG ] "type": "v1"
[node40][DEBUG ] }
[node40][DEBUG ] ]
[node40][DEBUG ] },
[node40][DEBUG ] "rank": 2
[node40][DEBUG ] }
[node40][DEBUG ] ]
[node40][DEBUG ] },
[node40][DEBUG ] "name": "node40",
[node40][DEBUG ] "outside_quorum": [
[node40][DEBUG ] "node40"
[node40][DEBUG ] ],
[node40][DEBUG ] "quorum": [],
[node40][DEBUG ] "rank": 0,
[node40][DEBUG ] "state": "probing",
[node40][DEBUG ] "sync_provider": []
[node40][DEBUG ] }
[node40][DEBUG ] ********************************************************************************
[node40][INFO ] monitor: mon.node40 is running
[node40][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node40.asok mon_status
[ceph_deploy.mon][DEBUG ] detecting platform for host node41 ...
[node41][DEBUG ] connection detected need for sudo
[node41][DEBUG ] connected to host: node41
[node41][DEBUG ] detect platform information from remote host
[node41][DEBUG ] detect machine type
[node41][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO ] distro info: CentOS Linux 7.9.2009 Core
[node41][DEBUG ] determining if provided host has same hostname in remote
[node41][DEBUG ] get remote short hostname
[node41][DEBUG ] deploying mon to node41
[node41][DEBUG ] get remote short hostname
[node41][DEBUG ] remote hostname: node41
[node41][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[node41][DEBUG ] create the mon path if it does not exist
[node41][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-node41/done
[node41][DEBUG ] create a done file to avoid re-doing the mon deployment
[node41][DEBUG ] create the init path if it does not exist
[node41][INFO ] Running command: sudo systemctl enable ceph.target
[node41][INFO ] Running command: sudo systemctl enable ceph-mon@node41
[node41][INFO ] Running command: sudo systemctl start ceph-mon@node41
[node41][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node41.asok mon_status
[node41][DEBUG ] ********************************************************************************
[node41][DEBUG ] status for monitor: mon.node41
[node41][DEBUG ] {
[node41][DEBUG ] "election_epoch": 117,
[node41][DEBUG ] "extra_probe_peers": [],
[node41][DEBUG ] "feature_map": {
[node41][DEBUG ] "mon": [
[node41][DEBUG ] {
[node41][DEBUG ] "features": "0x3ffddff8ffecffff",
[node41][DEBUG ] "num": 1,
[node41][DEBUG ] "release": "luminous"
[node41][DEBUG ] }
[node41][DEBUG ] ]
[node41][DEBUG ] },
[node41][DEBUG ] "features": {
[node41][DEBUG ] "quorum_con": "0",
[node41][DEBUG ] "quorum_mon": [],
[node41][DEBUG ] "required_con": "2449958747315912708",
[node41][DEBUG ] "required_mon": [
[node41][DEBUG ] "kraken",
[node41][DEBUG ] "luminous",
[node41][DEBUG ] "mimic",
[node41][DEBUG ] "osdmap-prune",
[node41][DEBUG ] "nautilus"
[node41][DEBUG ] ]
[node41][DEBUG ] },
[node41][DEBUG ] "monmap": {
[node41][DEBUG ] "created": "2022-04-08 14:02:08.362899",
[node41][DEBUG ] "epoch": 1,
[node41][DEBUG ] "features": {
[node41][DEBUG ] "optional": [],
[node41][DEBUG ] "persistent": [
[node41][DEBUG ] "kraken",
[node41][DEBUG ] "luminous",
[node41][DEBUG ] "mimic",
[node41][DEBUG ] "osdmap-prune",
[node41][DEBUG ] "nautilus"
[node41][DEBUG ] ]
[node41][DEBUG ] },
[node41][DEBUG ] "fsid": "b3299c95-745f-467f-91e4-a3e30c490483",
[node41][DEBUG ] "min_mon_release": 14,
[node41][DEBUG ] "min_mon_release_name": "nautilus",
[node41][DEBUG ] "modified": "2022-04-08 14:02:08.362899",
[node41][DEBUG ] "mons": [
[node41][DEBUG ] {
[node41][DEBUG ] "addr": "192.168.247.141:6789/0",
[node41][DEBUG ] "name": "node41",
[node41][DEBUG ] "public_addr": "192.168.247.141:6789/0",
[node41][DEBUG ] "public_addrs": {
[node41][DEBUG ] "addrvec": [
[node41][DEBUG ] {
[node41][DEBUG ] "addr": "192.168.247.141:6789",
[node41][DEBUG ] "nonce": 0,
[node41][DEBUG ] "type": "v1"
[node41][DEBUG ] }
[node41][DEBUG ] ]
[node41][DEBUG ] },
[node41][DEBUG ] "rank": 0
[node41][DEBUG ] },
[node41][DEBUG ] {
[node41][DEBUG ] "addr": "192.168.247.142:6789/0",
[node41][DEBUG ] "name": "node42",
[node41][DEBUG ] "public_addr": "192.168.247.142:6789/0",
[node41][DEBUG ] "public_addrs": {
[node41][DEBUG ] "addrvec": [
[node41][DEBUG ] {
[node41][DEBUG ] "addr": "192.168.247.142:6789",
[node41][DEBUG ] "nonce": 0,
[node41][DEBUG ] "type": "v1"
[node41][DEBUG ] }
[node41][DEBUG ] ]
[node41][DEBUG ] },
[node41][DEBUG ] "rank": 1
[node41][DEBUG ] },
[node41][DEBUG ] {
[node41][DEBUG ] "addr": "0.0.0.0:0/1",
[node41][DEBUG ] "name": "node40",
[node41][DEBUG ] "public_addr": "0.0.0.0:0/1",
[node41][DEBUG ] "public_addrs": {
[node41][DEBUG ] "addrvec": [
[node41][DEBUG ] {
[node41][DEBUG ] "addr": "0.0.0.0:0",
[node41][DEBUG ] "nonce": 1,
[node41][DEBUG ] "type": "v1"
[node41][DEBUG ] }
[node41][DEBUG ] ]
[node41][DEBUG ] },
[node41][DEBUG ] "rank": 2
[node41][DEBUG ] }
[node41][DEBUG ] ]
[node41][DEBUG ] },
[node41][DEBUG ] "name": "node41",
[node41][DEBUG ] "outside_quorum": [
[node41][DEBUG ] "node41"
[node41][DEBUG ] ],
[node41][DEBUG ] "quorum": [],
[node41][DEBUG ] "rank": 0,
[node41][DEBUG ] "state": "probing",
[node41][DEBUG ] "sync_provider": []
[node41][DEBUG ] }
[node41][DEBUG ] ********************************************************************************
[node41][INFO ] monitor: mon.node41 is running
[node41][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node41.asok mon_status
[ceph_deploy.mon][DEBUG ] detecting platform for host node42 ...
[node42][DEBUG ] connection detected need for sudo
[node42][DEBUG ] connected to host: node42
[node42][DEBUG ] detect platform information from remote host
[node42][DEBUG ] detect machine type
[node42][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO ] distro info: CentOS Linux 7.9.2009 Core
[node42][DEBUG ] determining if provided host has same hostname in remote
[node42][DEBUG ] get remote short hostname
[node42][DEBUG ] deploying mon to node42
[node42][DEBUG ] get remote short hostname
[node42][DEBUG ] remote hostname: node42
[node42][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[node42][DEBUG ] create the mon path if it does not exist
[node42][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-node42/done
[node42][DEBUG ] create a done file to avoid re-doing the mon deployment
[node42][DEBUG ] create the init path if it does not exist
[node42][INFO ] Running command: sudo systemctl enable ceph.target
[node42][INFO ] Running command: sudo systemctl enable ceph-mon@node42
[node42][INFO ] Running command: sudo systemctl start ceph-mon@node42
[node42][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node42.asok mon_status
[node42][ERROR ] admin_socket: exception getting command descriptions: [Errno 2] No such file or directory
[node42][WARNIN] monitor: mon.node42, might not be running yet
[node42][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node42.asok mon_status
[node42][ERROR ] admin_socket: exception getting command descriptions: [Errno 2] No such file or directory
[node42][WARNIN] monitor node42 does not exist in monmap
[ceph_deploy.mon][INFO ] processing monitor mon.node40
[node40][DEBUG ] connection detected need for sudo
[node40][DEBUG ] connected to host: node40
[node40][DEBUG ] detect platform information from remote host
[node40][DEBUG ] detect machine type
[node40][DEBUG ] find the location of an executable
[node40][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node40.asok mon_status
[ceph_deploy.mon][WARNIN] mon.node40 monitor is not yet in quorum, tries left: 5
[ceph_deploy.mon][WARNIN] waiting 5 seconds before retrying
[node40][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node40.asok mon_status
[ceph_deploy.mon][WARNIN] mon.node40 monitor is not yet in quorum, tries left: 4
[ceph_deploy.mon][WARNIN] waiting 10 seconds before retrying
[node40][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node40.asok mon_status
[ceph_deploy.mon][WARNIN] mon.node40 monitor is not yet in quorum, tries left: 3
[ceph_deploy.mon][WARNIN] waiting 10 seconds before retrying
[node40][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node40.asok mon_status
[ceph_deploy.mon][WARNIN] mon.node40 monitor is not yet in quorum, tries left: 2
[ceph_deploy.mon][WARNIN] waiting 15 seconds before retrying
[node40][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node40.asok mon_status
[ceph_deploy.mon][WARNIN] mon.node40 monitor is not yet in quorum, tries left: 1
[ceph_deploy.mon][WARNIN] waiting 20 seconds before retrying
[ceph_deploy.mon][INFO ] processing monitor mon.node41
[node41][DEBUG ] connection detected need for sudo
[node41][DEBUG ] connected to host: node41
[node41][DEBUG ] detect platform information from remote host
[node41][DEBUG ] detect machine type
[node41][DEBUG ] find the location of an executable
[node41][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node41.asok mon_status
[ceph_deploy.mon][WARNIN] mon.node41 monitor is not yet in quorum, tries left: 5
[ceph_deploy.mon][WARNIN] waiting 5 seconds before retrying
[node41][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node41.asok mon_status
[ceph_deploy.mon][WARNIN] mon.node41 monitor is not yet in quorum, tries left: 4
[ceph_deploy.mon][WARNIN] waiting 10 seconds before retrying

The error means that the mon.node40 monitor has not joined the quorum, and all retries were exhausted.

Possible causes gathered from the web; see the quick checks after the list:



  1. Firewall: the monitor ports (6789 for the v1 protocol, 3300 for v2) are blocked.

  2. The hosts entries do not match the actual hostnames.

  3. A wrong or missing public_network setting.

Note that some other write-ups spell the option public_network; in ceph.conf, spaces and underscores in option names are interchangeable, so this is the same setting as the public network used above.

Reference: [分布式存储]Ceph环境部署失败问题总结
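
A few quick checks matching the three causes above, run on each monitor node (a sketch; adjust names to your cluster):

systemctl status firewalld                         # cause 1: should be inactive/disabled
hostname && grep -n "$(hostname)" /etc/hosts       # cause 2: the short hostname must resolve
grep -nE 'public[ _]network' /etc/ceph/ceph.conf   # cause 3: the subnet must cover the node IPs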




