The Future of Distributed Storage: Installing a Ceph 14.2.5 Cluster

Ceph 14.2.5 Cluster Install

Pattern: Ceph Cluster Install | Author: Aleon

1. This article assumes some prior familiarity with Ceph.

2. The cluster is deployed by hand in order to build a deeper understanding of Ceph.

一、Environment Information

1、Before you start:

A. This installation follows the official documentation

B. As an experimental test only

C. Ceph Official website

2、Software:

Ceph version: 14.2.5

Ceph: Ceph is an open-source, massively scalable, software-defined storage system that provides object, block, and file system storage from a single clustered platform. Ceph's main goals are to be completely distributed without a single point of failure, scalable to the exabyte level, and freely available. Data is replicated, making the system fault tolerant. Ceph runs on commodity hardware and is designed to be self-healing, self-managing, and "self-awesome".


3、Machine:

<code>
Hostname  IP               System      Config                     Role
admin     192.168.184.131  CentOS 7.7  1C2G && disk*2 && net*1
node1     192.168.184.132  CentOS 7.7  1C2G && disk*2 && net*1
node2     192.168.184.133  CentOS 7.7  1C2G && disk*2 && net*1
node3     192.168.184.134  CentOS 7.7  1C2G && disk*2 && net*1
</code>
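The OSD steps below assume each host carries a spare, empty data disk. A quick sanity check is sketched here (my own addition, assuming the data disk appears as /dev/sdb as the OSD steps later expect, and that root SSH by password is possible since keys are only distributed in the next section):

<code>
# Confirm the spare data disk exists on every host (assumes /dev/sdb).
for ip in 192.168.184.131 192.168.184.132 192.168.184.133 192.168.184.134
do
  echo "== $ip =="; ssh root@$ip "lsblk /dev/sdb"
done
</code>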

二、Host Configuration

1、Basic configuration on all four hosts

<code>
# Stop firewalld
systemctl stop firewalld && systemctl disable firewalld

# Disable SELinux
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config \
  && setenforce 0

# Install the ntp server
timedatectl set-timezone Asia/Shanghai \
  && yum install ntp

# Configure ntp on admin
vi /etc/ntp.conf
restrict 192.168.184.131 nomodify notrap
server cn.ntp.org.cn iburst
systemctl enable ntpd && systemctl restart ntpd

# Configure ntp on node*3
vi /etc/ntp.conf
server 192.168.184.131 iburst
systemctl enable ntpd && systemctl restart ntpd

# Configure hosts
cat << EOF >> /etc/hosts
192.168.184.131 admin.example.com admin
192.168.184.132 node1.example.com node1
192.168.184.133 node2.example.com node2
192.168.184.134 node3.example.com node3
EOF
for ip in admin node1 node2 node3
do
  scp /etc/hosts $ip:/etc/
done

# Configure passwordless SSH
ssh-keygen
for ip in admin node1 node2 node3
do
  ssh-copy-id root@$ip
done

# Configure the repo
yum -y install epel-release yum-plugin-priorities \
  https://mirrors.tuna.tsinghua.edu.cn/ceph/rpm-nautilus/el7/noarch/ceph-release-1-1.el7.noarch.rpm
sed -i -e "s/enabled=1/enabled=1\npriority=1/g" /etc/yum.repos.d/ceph.repo
cat << EOF >> /etc/yum.repos.d/ceph.repo
[ceph]
name=ceph
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/x86_64/
gpgcheck=0
[ceph-noarch]
name=cephnoarch
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/noarch/
gpgcheck=0
EOF
for ip in admin node1 node2 node3
do
  scp /etc/yum.repos.d/ceph.repo $ip:/etc/yum.repos.d/
done
</code>
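Before installing Ceph it may be worth confirming that time sync and name resolution actually work everywhere; a minimal check, assuming the hosts file and SSH keys from the block above are already in place:

<code>
# Sanity check: hostname, NTP peers, and name resolution on each host.
for ip in admin node1 node2 node3
do
  echo "== $ip =="
  ssh $ip "hostname; ntpq -p | head -5; getent hosts node1"
done
</code>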

2、Install Ceph

<code>
for ip in admin node1 node2 node3
do
  ssh $ip yum -y install ceph ceph-radosgw
done

for ip in admin node1 node2 node3
do
  ssh $ip ceph -v
done
</code>
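If the mirrors are not fully in sync, hosts can end up on different point releases; a small check of the installed package version (expecting 14.2.5 everywhere) is sketched below, reusing the same SSH loop:

<code>
# All hosts should report the same ceph package release (14.2.5).
for ip in admin node1 node2 node3
do
  echo -n "$ip: "; ssh $ip "rpm -q ceph"
done
</code>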

三、Ceph Configuration

1、Mon Configuration

<code>
# A. Add the ceph configuration file
vi /etc/ceph/ceph.conf
[global]
fsid = 497cea05-5471-4a1a-9a4d-c86974b27d49
mon initial members = node1
mon host = 192.168.184.132
public network = 192.168.184.0/24
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
osd journal size = 1024
# Replica count
osd pool default size = 3
# Minimum replica count
osd pool default min size = 2
osd pool default pg num = 333
osd pool default pgp num = 333
osd crush chooseleaf type = 1
osd_mkfs_type = xfs
max mds = 5
mds max file size = 100000000000000
mds cache size = 1000000
# If an OSD stays down for 900s, mark it out of the cluster and remap its data to other nodes.
mon osd down out interval = 900
[mon]
# Allow 0.5s of clock drift (default 0.05s); the heterogeneous PCs in this cluster drift
# more than 0.05s, so the limit is relaxed to keep the monitors in sync.
mon clock drift allowed = .50

# B. Create the mon keyring
ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'

# C. Create the admin keyring
sudo ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin \
  --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'

# D. Create the bootstrap-osd keyring
sudo ceph-authtool --create-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring \
  --gen-key -n client.bootstrap-osd --cap mon 'profile bootstrap-osd'

# E. Import both keyrings into ceph.mon.keyring
sudo ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
sudo ceph-authtool /tmp/ceph.mon.keyring --import-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring

# F. Use the hostname, IP address and FSID to create the monmap
monmaptool --create --add node1 192.168.184.132 \
  --fsid 497cea05-5471-4a1a-9a4d-c86974b27d49 /tmp/monmap

# G. Create the default data dir
sudo -u ceph mkdir /var/lib/ceph/mon/ceph-node1

# H. Give the mon keyring to the ceph user
chown ceph.ceph /tmp/ceph.mon.keyring

# I. Initialize the mon
sudo -u ceph ceph-mon --mkfs -i node1 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
ll /var/lib/ceph/mon/ceph-node1

# J. Create the "done" marker so the mon is not re-initialized on reinstall
sudo touch /var/lib/ceph/mon/ceph-node1/done

# K. Enable and start the mon
systemctl enable ceph-mon@node1 \
  && systemctl restart ceph-mon@node1 \
  && systemctl status ceph-mon@node1 -l \
  && ceph -s

# L. Copy the keyring and config files to the cluster hosts
for ip in admin node1 node2 node3
do
  scp /etc/ceph/* root@$ip:/etc/ceph/
done
</code>
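Before adding the remaining monitors it is worth confirming that the first one has formed a quorum by itself; a minimal check, run from any host that already has /etc/ceph/ceph.conf and the admin keyring:

<code>
# Expect a single mon (node1) in quorum; health will warn until OSDs are added.
ceph -s
ceph mon dump
ceph quorum_status --format json-pretty
</code>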

2、Add the other mons

<code>
# A. Create the default dir
ssh node2 sudo -u ceph mkdir /var/lib/ceph/mon/ceph-node2

# B. Fetch the mon keyring into /tmp
ssh node2 ceph auth get mon. -o /tmp/ceph.mon.keyring

# C. Fetch the mon map
ssh node2 ceph mon getmap -o /tmp/ceph.mon.map

# D. Fix the keyring ownership
ssh node2 chown ceph.ceph /tmp/ceph.mon.keyring

# E. Initialize the mon
ssh node2 sudo -u ceph ceph-mon --mkfs -i node2 --monmap /tmp/ceph.mon.map --keyring /tmp/ceph.mon.keyring

# F. Start the mon
ssh node2 systemctl start ceph-mon@node2
ssh node2 systemctl enable ceph-mon@node2 && ceph -s
</code>
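The walkthrough only covers node2, but the machine list also includes node3; presumably the same sequence applies there. A condensed sketch mirroring the node2 steps, assuming the config and keyrings were already copied out in step L above:

<code>
# Same procedure for node3 (assumed; mirrors the node2 steps).
ssh node3 sudo -u ceph mkdir /var/lib/ceph/mon/ceph-node3
ssh node3 ceph auth get mon. -o /tmp/ceph.mon.keyring
ssh node3 ceph mon getmap -o /tmp/ceph.mon.map
ssh node3 chown ceph.ceph /tmp/ceph.mon.keyring
ssh node3 sudo -u ceph ceph-mon --mkfs -i node3 --monmap /tmp/ceph.mon.map --keyring /tmp/ceph.mon.keyring
ssh node3 systemctl start ceph-mon@node3
ssh node3 systemctl enable ceph-mon@node3 && ceph -s
</code>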

3、Configure OSD

<code>
# A. Create the Ceph data & journal disk with bluestore
sudo ceph-volume lvm create --data /dev/sdb

# B. List the OSD number
sudo ceph-volume lvm list

# C. Start the OSD
systemctl enable ceph-osd@0 && systemctl start ceph-osd@0

# Notes: the other hosts need to fetch the bootstrap-osd keyring first
ceph auth get client.bootstrap-osd -o /var/lib/ceph/bootstrap-osd/ceph.keyring
sudo ceph-volume lvm create --data /dev/sdb
sudo ceph-volume lvm list
systemctl start ceph-osd@1 && systemctl enable ceph-osd@1
</code>
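After the OSDs on the remaining nodes have been created the same way, the cluster map should show one OSD per node, all up and in; a quick check (the OSD ids simply increment in creation order, so 0 and 1 above are the first two):

<code>
# Verify the OSDs are up and in.
ceph osd tree
ceph osd stat
ceph -s
</code>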

4、Install Mgr on admin

<code>
# A. Create the mgr keyring
ceph auth get-or-create mgr.admin mon 'allow profile mgr' osd 'allow *' mds 'allow *'

# B. Create the dir as the ceph user
sudo -u ceph mkdir /var/lib/ceph/mgr/ceph-admin/

# C. Write the key into the dir
ceph auth get mgr.admin -o /var/lib/ceph/mgr/ceph-admin/keyring

# D. Enable and start the Mgr
systemctl enable ceph-mgr@admin && systemctl restart ceph-mgr@admin

# Notes: check the resulting cluster status (screenshot omitted)
</code>
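The mgr by itself only collects and reports cluster state; Nautilus also ships a web dashboard as an mgr module, which the original steps do not cover. A hedged sketch, assuming the ceph-mgr-dashboard package is available from the same repository:

<code>
# Optional: enable the dashboard module on the admin node (my addition, not part of the original steps).
yum -y install ceph-mgr-dashboard
ceph mgr module enable dashboard
ceph dashboard create-self-signed-cert
# Show where the dashboard (and other mgr services) are listening.
ceph mgr services
</code>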

四、Use Ceph block device

<code>
# A、Create an OSD pool
ceph osd pool create rbd 128

# B、Check
ceph osd lspools

# C、Initialize the pool
rbd pool init rbd

# D、Create an rbd disk
rbd create disk01 --size 2G --image-feature layering && rbd ls -l

# E、Map the rbd block device locally
sudo rbd map disk01

# F、Show the mapping
rbd showmapped

# G、Format the disk
sudo mkfs.xfs /dev/rbd0

# H、Mount the disk
sudo mount /dev/rbd0 /mnt && df -Th
</code>
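Since this image is only for testing, it can be torn down again once you are done; a small cleanup sketch (my addition, assuming the disk01 image and /dev/rbd0 mapping from the steps above):

<code>
# Clean up the test image again.
sudo umount /mnt
sudo rbd unmap /dev/rbd0
rbd rm disk01
</code>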

五、Use filesystem

<code>
# A、Configure the MDS on node1
sudo -u ceph mkdir /var/lib/ceph/mds/ceph-node1

# B、Create the MDS keyring
ceph auth get-or-create mds.node1 osd "allow rwx" mds "allow" mon "allow profile mds"

# C、Write the MDS keyring
ceph auth get mds.node1 -o /var/lib/ceph/mds/ceph-node1/keyring

# D、Configure ceph.conf
cat << EOF >> /etc/ceph/ceph.conf
[mds.node1]
host = node1
EOF

# E、Start the MDS
systemctl enable ceph-mds@node1 && systemctl start ceph-mds@node1

# F、Create the pools and the filesystem
ceph osd pool create cephfs_data 128
ceph osd pool create cephfs_metadata 128
ceph fs new cephfs cephfs_metadata cephfs_data
ceph fs ls
ceph mds stat

# G、Mount CephFS on the client
yum -y install ceph-fuse
ssh node1 "sudo ceph-authtool -p /etc/ceph/ceph.client.admin.keyring" > admin.key
chmod 600 admin.key
mount -t ceph node1:6789:/ /mnt -o name=admin,secretfile=admin.key && df -Th
</code>
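The block above installs ceph-fuse but then mounts with the kernel client; if the kernel mount is not available, the FUSE client can be used instead. A sketch, assuming /etc/ceph/ceph.conf and the admin keyring are present on the client:

<code>
# Alternative: mount CephFS with the FUSE client (my addition; the guide itself uses the kernel mount above).
sudo ceph-fuse -m node1:6789 /mnt && df -Th
# Unmount when finished.
sudo umount /mnt
</code>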