Deploying Ceph 13.2.5 Mimic with ceph-deploy

1. Introduction to Ceph

1. The operating system must run kernel 3.10 or later, i.e. deploy on CentOS 7 or above; a quick check is shown after this list.

2. The ceph-deploy tool simplifies the deployment process; this article uses ceph-deploy 1.5.39.

3. Prepare six hosts: one ceph-admin management node, three mon/mgr/mds nodes, and two osd nodes.
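
A quick way to verify the kernel and OS requirement on each host (a minimal check; the exact output depends on your environment):

shell> uname -r                    # should report 3.10 or newer
shell> cat /etc/redhat-release     # should report CentOS 7.x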

# IP address and hostname of each host
10.52.0.180 bj-zone1-ceph-admin
10.52.0.181 bj-zone1-ceph-mon-node1
10.52.0.182 bj-zone1-ceph-mon-node2
10.52.0.183 bj-zone1-ceph-mon-node3
10.52.0.201 bj-zone1-ceph-osd-node1
10.52.0.202 bj-zone1-ceph-osd-node2

2. Installing Ceph

2.1 Set the hostname and YUM repositories

Note: bj-zone1-ceph-mon-node1 is used as the example below; make the same changes on the other nodes.

a) Set the hostname

shell> hostnamectl --static set-hostname bj-zone1-ceph-mon-node1

b) Configure the yum repositories

shell> wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
shell> wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
shell> yum clean all
shell> yum makecache

2.2 Deploy the ceph-admin management node

Note: Unless otherwise stated, all of the following commands are executed on the management node.

a) Set the hostname and configure the hosts file

shell> hostnamectl --static set-hostname bj-zone1-ceph-admin
# Edit /etc/hosts and add the IP-to-hostname mapping of every node.
shell> cat /etc/hosts
10.52.0.181 bj-zone1-ceph-mon-node1
10.52.0.182 bj-zone1-ceph-mon-node2
10.52.0.183 bj-zone1-ceph-mon-node3
10.52.0.201 bj-zone1-ceph-osd-node1
10.52.0.202 bj-zone1-ceph-osd-node2

b) Generate an SSH key and copy it to every node

shell> ssh-keygen 
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:TvZDQwvZpIKFAeSyh8Y1QhEOG9EzKaHaNN1rMl8kxfI root@bj-zone1-ceph-admin
The key's randomart image is:
+---[RSA 2048]----+
|=O=o.o... . |
|*+=..+...= |
|+++=o +o= o |
|o*o.. =Eo . |
|+oo o o S + |
|.. = = o . |
| . . o |
| . |
| |
+----[SHA256]-----+
# Copy the key to every node to enable passwordless login.
shell> ssh-copy-id bj-zone1-ceph-mon-node1
shell> ssh-copy-id bj-zone1-ceph-mon-node2
shell> ssh-copy-id bj-zone1-ceph-mon-node3
shell> ssh-copy-id bj-zone1-ceph-osd-node1
shell> ssh-copy-id bj-zone1-ceph-osd-node2
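
To confirm passwordless login works, and to give every node the same name resolution, a small loop helps (a minimal sketch; the node list matches the hosts file above):

shell> for h in bj-zone1-ceph-mon-node{1..3} bj-zone1-ceph-osd-node{1..2}; do scp /etc/hosts $h:/etc/hosts; ssh $h hostname; done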

c) Install ceph-deploy

# Configure the local yum repositories
shell> wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
shell> yum clean all
shell> yum makecache
shell> yum -y install https://mirrors.aliyun.com/ceph/rpm-mimic/el7/noarch/ceph-deploy-1.5.39-0.noarch.rpm
shell> ceph-deploy --version
1.5.39

d) Create the deployment directory

shell> mkdir deploy_ceph_cluster && cd deploy_ceph_cluster

2.3 Deploy the mon/mgr/mds nodes

a) Create the Ceph Monitor nodes

# Generate the ceph config file, the monitor keyring, and the deployment log file.
shell> ceph-deploy new bj-zone1-ceph-mon-node1 bj-zone1-ceph-mon-node2 bj-zone1-ceph-mon-node3
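
After this command the deployment directory should contain the newly generated files (names as produced by ceph-deploy):

shell> ls
ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring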

b) Add the following to /root/deploy_ceph_cluster/ceph.conf (comments are omitted for brevity; message the author for an annotated version)

shell> cd /root/deploy_ceph_cluster
# The first seven lines are generated automatically by the command above and need no changes; append the remaining parameters below them as shown.
shell> cat /root/deploy_ceph_cluster/ceph.conf
[global]
 fsid = e93e2cf2-4eb3-400e-b21b-563db9330864
 mon_initial_members = bj-zone1-ceph-mon-node1, bj-zone1-ceph-mon-node2, bj-zone1-ceph-mon-node3
 mon_host = 10.52.0.181,10.52.0.182,10.52.0.183
 auth_cluster_required = cephx
 auth_service_required = cephx
 auth_client_required = cephx
 osd_pool_default_min_size = 1
 osd_pool_default_size = 3
 public_network = 10.52.0.0/24
 cluster_network = 10.52.0.0/24
 rbd_default_features = 1
 cephx_require_signatures = true
 cephx_cluster_require_signatures = true
 cephx_service_require_signatures = true
 cephx_sign_messages = true
[mon]
 mon_cpu_threads = 8
 mon_clock_drift_allowed = 2
 mon_clock_drift_warn_backoff = 30
 mon_allow_pool_delete = true
 mon_data_avail_crit = 10
 mon_data_avail_warn = 30
 mon_data_size_warn = 16106127360
 mon_osd_min_down_reporters = 3
[osd]
 objecter_inflight_ops = 819200
 objecter_inflight_op_bytes = 1048576000
 osd_client_message_cap = 1000
 osd_client_message_size_cap = 2147483648
 osd_crush_chooseleaf_type = 0
 osd_deep_scrub_stride = 131072
 osd_enable_op_tracker = true
 osd_journal_size = 10240
 osd_map_cache_size = 1024
 osd_max_backfills = 4
 osd_max_write_size = 512
 osd_min_pg_log_entries = 30000
 osd_max_pg_log_entries = 100000
 osd_mon_heartbeat_interval = 40
 osd_op_log_threshold = 50
 osd_recovery_max_active = 10
 osd_recovery_op_priority = 5
 osd_heartbeat_interval = 10
 osd_heartbeat_grace = 60
 journal_max_write_bytes = 1073741824
 journal_max_write_entries = 10000
 ms_dispatch_throttle_bytes = 148576000
 filestore_fd_cache_size = 1024
 filestore_min_sync_interval = 10
 filestore_max_sync_interval = 15
 filestore_queue_max_bytes = 1048576000
 filestore_merge_threshold = 40
 filestore_op_threads = 32
 filestore_queue_max_ops = 25000
 filestore_split_multiple = 8
[mds]
 debug_ms = 1/5
[client]
 rbd_cache = true
 rbd_cache_max_dirty = 134217728
 rbd_cache_target_dirty = 67108864
 rbd_cache_max_dirty_age = 30
 rbd_cache_max_dirty_object = 2
 rbd_cache_size = 134217728
 rbd_cache_writethrough_until_flush = false

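If ceph.conf is edited again after the cluster is up, it can be redistributed from the deploy directory with ceph-deploy's config push subcommand; restart the affected daemons afterwards for the changes to take effect:

shell> ceph-deploy --overwrite-conf config push \
bj-zone1-ceph-mon-node1 bj-zone1-ceph-mon-node2 bj-zone1-ceph-mon-node3
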
c) Install the Ceph packages

shell> ceph-deploy install \
bj-zone1-ceph-mon-node1 bj-zone1-ceph-mon-node2 bj-zone1-ceph-mon-node3 \
--release mimic \
--repo-url http://mirror.tuna.tsinghua.edu.cn/ceph/rpm-mimic/el7/ \
--gpg-url http://mirror.tuna.tsinghua.edu.cn/ceph/keys/release.asc

d) Create the initial monitors and collect all keys

shell> ceph-deploy mon create-initial
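
On success, ceph-deploy gathers the bootstrap keyrings into the deploy directory; the listing should look roughly like this:

shell> ls *.keyring
ceph.bootstrap-mds.keyring  ceph.bootstrap-osd.keyring  ceph.client.admin.keyring
ceph.bootstrap-mgr.keyring  ceph.bootstrap-rgw.keyring  ceph.mon.keyring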

e) Distribute the configuration files

# Use ceph-deploy to copy the config file and admin keyring to the other nodes, so the cluster can be managed from them without specifying the monitor address and user credentials explicitly.
shell> ceph-deploy admin \
bj-zone1-ceph-mon-node1 bj-zone1-ceph-mon-node2 bj-zone1-ceph-mon-node3
 

f) Configure mgr

# At this point running ceph health prints:
# HEALTH_WARN no active mgr
# Since Ceph 12, a manager daemon is mandatory: add one mgr for every machine running a monitor, otherwise the cluster stays in HEALTH_WARN.
shell> ceph-deploy mgr create \
bj-zone1-ceph-mon-node1:cephsvr-16101 \
bj-zone1-ceph-mon-node2:cephsvr-16102 \
bj-zone1-ceph-mon-node3:cephsvr-16103
# Tip: a ceph-mgr failure effectively puts the whole cluster in serious trouble,
# so create an independent ceph-mgr on every mon (at least 3 mon nodes) using the command above; give each mgr its own unique name.
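
To verify the managers, ceph -s should now report one active mgr with the rest on standby:

shell> ceph -s | grep -w mgr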

2.4 Deploy the osd nodes

a) Install the Ceph packages

shell> ceph-deploy install bj-zone1-ceph-osd-node1 bj-zone1-ceph-osd-node2 \
--release mimic \
--repo-url http://mirror.tuna.tsinghua.edu.cn/ceph/rpm-mimic/el7/ \
--gpg-url http://mirror.tuna.tsinghua.edu.cn/ceph/keys/release.asc

b) Configure the osd nodes

shell> ceph-deploy disk zap bj-zone1-ceph-osd-node1:sdb bj-zone1-ceph-osd-node1:sdc bj-zone1-ceph-osd-node1:sdd
shell> ceph-deploy osd create bj-zone1-ceph-osd-node1:sdb bj-zone1-ceph-osd-node1:sdc bj-zone1-ceph-osd-node1:sdd
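
The second osd node is prepared the same way; a sketch assuming it exposes the same sdb/sdc/sdd disks (adjust to the actual disk layout):

shell> ceph-deploy disk zap bj-zone1-ceph-osd-node2:sdb bj-zone1-ceph-osd-node2:sdc bj-zone1-ceph-osd-node2:sdd
shell> ceph-deploy osd create bj-zone1-ceph-osd-node2:sdb bj-zone1-ceph-osd-node2:sdc bj-zone1-ceph-osd-node2:sdd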

c) Distribute the configuration files

shell> ceph-deploy admin bj-zone1-ceph-osd-node1 bj-zone1-ceph-osd-node2
# Check the status of the osd nodes
shell> ceph -s
shell> ceph osd tree

3. Enabling the Dashboard

3.1 Enable dashboard support (run the following on any node)

# Enable the dashboard module
shell> ceph mgr module enable dashboard
# Generate a self-signed certificate
shell> ceph dashboard create-self-signed-cert
Self-signed certificate created
# Set the dashboard listening port
shell> ceph config set mgr mgr/dashboard/server_port 8080
# Set the dashboard login credentials
shell> ceph dashboard set-login-credentials root 123456
Username and password updated
# Disable SSL and serve the dashboard over plain HTTP
shell> ceph config set mgr mgr/dashboard/ssl false
# Restart ceph-mgr on each mon node so the dashboard settings take effect
shell> systemctl restart ceph-mgr@$HOSTNAME
# Browse to http://10.52.0.181:8080
# List the ceph-mgr services
shell> ceph mgr services
{
 "dashboard": "http://shyt-ceph-mon1:8080/"
}

4. Creating the Ceph MDS Role

4.1 Install ceph mds

# Deploy multiple MDS nodes to avoid a single point of failure
shell> ceph-deploy mds create bj-zone1-ceph-mon-node1 bj-zone1-ceph-mon-node2 bj-zone1-ceph-mon-node3

4.2 Manually create the data and metadata pools

shell> ceph osd pool create data 128 128
shell> ceph osd pool create metadata 128 128
shell> ceph fs new cephfs metadata data
# Check the MDS status
shell> ceph mds stat
cephfs-1/1/1 up {0=bj-zone1-ceph-mon-node3=up:active}, 2 up:standby
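
The new filesystem and its pools can be double-checked with the standard commands:

shell> ceph fs ls
shell> ceph osd pool ls detail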

4.3 Mount the CephFS filesystem

shell> wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
shell> wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
shell> cat > /etc/yum.repos.d/ceph.repo << EOF
[ceph]
name=Ceph packages for \$basearch
baseurl=http://mirror.tuna.tsinghua.edu.cn/ceph/rpm-mimic/el7/\$basearch
enabled=1
gpgcheck=1
priority=1
type=rpm-md
gpgkey=http://mirror.tuna.tsinghua.edu.cn/ceph/keys/release.asc
[ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirror.tuna.tsinghua.edu.cn/ceph/rpm-mimic/el7/noarch
enabled=1
gpgcheck=1
priority=1
type=rpm-md
gpgkey=http://mirror.tuna.tsinghua.edu.cn/ceph/keys/release.asc
[ceph-source]
name=Ceph source packages
baseurl=http://mirror.tuna.tsinghua.edu.cn/ceph/rpm-mimic/el7/SRPMS
enabled=0
gpgcheck=1
type=rpm-md
gpgkey=http://mirror.tuna.tsinghua.edu.cn/ceph/keys/release.asc
priority=1
EOF
shell> yum clean all
shell> yum makecache
shell> yum -y install https://mirrors.aliyun.com/ceph/rpm-mimic/el7/x86_64/ceph-fuse-13.2.5-0.el7.x86_64.rpm
# Create the ceph directory and copy ceph.client.admin.keyring and ceph.conf from the deploy directory into it.
shell> mkdir /etc/ceph/
# Create the mount point
shell> mkdir /storage
shell> ceph-fuse /storage
# Mount automatically at boot
shell> echo "ceph-fuse /storage" >> /etc/rc.d/rc.local

Author: 龍龍小寶

Original: https://www.cnblogs.com/91donkey/p/10938488.html
