
Installation Reference for Deploying Portworx on Openshift 4 【IBM CP4D Edition】



1. Preface

Portworx is the global leader in cloud-native container storage and data services.

On October 13, 2020, Pure Storage reached an agreement to acquire Portworx for as much as US$370 million. The Portworx homepage now shows the new logo and tagline.

https://typorabyethancheung911.oss-cn-shanghai.aliyuncs.com/typora/image-20201018102407419.png

The GigaOm Radar for Data Storage for Kubernetes report shows the following.

https://typorabyethancheung911.oss-cn-shanghai.aliyuncs.com/typora/image-20201018093951276.png

GigaOm's analysis of each container storage vendor's market positioning:

https://typorabyethancheung911.oss-cn-shanghai.aliyuncs.com/typora/image-20201018094122237.png

https://typorabyethancheung911.oss-cn-shanghai.aliyuncs.com/typora/image-20201018094150954.png

A few things stand out:

1. Target markets are segmented into SMBs, large enterprises, and ISPs/MSPs.

2. Product architectures are divided into traditional storage arrays with CSI plugins, software-defined storage optimized for containers, and cloud-native solutions.

It is worth noting that the traditional vendors IBM, Dell EMC, and NetApp all follow the classic traditional-array-plus-CSI architecture.

Other vendors:

Red Hat's architecture falls into the SDS-optimized category.

Portworx is a purely cloud-native design, which makes it a good fit for large enterprises and ISPs/MSPs.

Thanks to its acquisition of Red Hat, IBM enjoys the Openshift ecosystem dividend and can keep pushing OCS going forward.

Next, look at the report's Key Criteria and Evaluation Metrics:

https://typorabyethancheung911.oss-cn-shanghai.aliyuncs.com/typora/image-20201018095502436.png

https://typorabyethancheung911.oss-cn-shanghai.aliyuncs.com/typora/image-20201018095523673.png

Portworx scores +++ in all but two categories; the vendors ranked next are Pure Storage and robin.io.

The areas where Portworx scores slightly lower are multi-tenancy and efficiency.

Red Hat still has some way to go.

Finally, the well-known GigaOm radar chart:

https://typorabyethancheung911.oss-cn-shanghai.aliyuncs.com/typora/image-20201018095853434.png

The all-flash story is no longer new, and the container storage market is at its sharpest; every vendor is pushing into hybrid-cloud storage, which is what drove the capital behind Pure Storage's acquisition of Portworx.

In the chart, the top half is maturity and the bottom half is innovation; the left side is a feature play and the right side is a platform play.

【A word on the platform play】

Cloud-native storage is about more than storage. As cloud-native technology keeps innovating and evolving, what is really missing is data management and data service capability; building a platform whose core value is data is the central battleground of cloud-native storage. The Portworx platform offers complete container data services, supporting storage, backup, disaster recovery, migration, security, and automation for containerized applications.

The figure below shows the capabilities of the Portworx data services management platform.

https://typorabyethancheung911.oss-cn-shanghai.aliyuncs.com/typora/image-20201018103347783.png

Portworx clearly sits in the leader position.

Portworx's strengths are its hybrid-cloud foundation, rich disaster recovery options, and formal integration with the Prometheus monitoring platform.

That said, Red Hat still holds strong advantages in monitoring, management, and usability.

A separate installation reference for Red Hat Openshift Container Storage (OCS) will follow, in particular its combination with Openshift Virtualization (CNV), migration, and disaster recovery.

Container storage is full of technical detail: storage is the hardest part of deploying stateful containers, and hands-on work such as fine-grained data backup is riddled with pitfalls.

So: technology iterates endlessly and no product leads forever. By walking through the deployment of Portworx, the current leader, we can learn something about the essence behind the technology.

2. Preparation

2.1 Prerequisites

The Portworx bundled with IBM CP4D comes in an Enterprise Edition and a Standard Edition; the latter is actually an OEM build of Portworx Essentials.

【Key raw-disk prerequisites】

1. Each compute node needs at least 1 TB of raw, unformatted disk, plus at least 100 GB of raw, unformatted disk for metadata storage.

2. The raw disks must use the same device names on every compute node, e.g. /dev/sdb for the metadata disk and /dev/sdc for the capacity disk on all nodes.

3. All cluster nodes must run the latest CRI-O available from the Red Hat repositories (v1.11.16 or later). Openshift 3.11 and 4.3 or later are not affected by this requirement; a quick version check is sketched below.
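
For reference, a minimal way to confirm the CRI-O version on a node (a sketch, assuming cluster-admin access; the node name follows this lab's naming):

# Print the CRI-O version on one worker node; repeat for each node.
oc debug node/worker1.openshift4.cj.io -- chroot /host crio version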

2.2 Environment Configuration

Log in to VMware (the lab environment runs vSphere 7).

【Key step: attach one disk of at least 100 GB and one disk of at least 1 TB to each compute node】

https://typorabyethancheung911.oss-cn-shanghai.aliyuncs.com/typora/image-20201018084516756.png

Be sure to 【disable the "Secure Boot" option in the virtual machine hardware settings】, otherwise the portworx.service unit on each node's CoreOS will report errors.

https://typorabyethancheung911.oss-cn-shanghai.aliyuncs.com/typora/image-20201017160728167.png

2.3 Verify the Disks

Log in to each compute node; see the earlier installation article for how this environment was set up.

Original reference: https://mp.weixin.qq.com/s/vlpmDINHCRMckiy_2hakcg

ssh -i /data/boot-files/ignition/openshift4/ssh-key/id_rsa core@worker1.openshift4.cj.io

【Check the disks】

[core@worker1 ~]$ sudo fdisk -l
Disk /dev/sda: 250 GiB, 268435456000 bytes, 524288000 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 00000000-0000-4000-A000-000000000001

Device       Start       End   Sectors   Size Type
/dev/sda1     2048    788479    786432   384M Linux filesystem
/dev/sda2   788480   1048575    260096   127M EFI System
/dev/sda3  1048576   1050623      2048     1M BIOS boot
/dev/sda4  1050624 524287966 523237343 249.5G Linux filesystem

Disk /dev/mapper/coreos-luks-root-nocrypt: 249.5 GiB, 267880742400 bytes, 523204575 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

# This is the 100 GiB raw disk used for metadata, not partitioned
Disk /dev/sdb: 100 GiB, 107374182400 bytes, 209715200 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
# This is the 1 TiB raw disk used as the capacity disk, not partitioned
Disk /dev/sdc: 1000 GiB, 1073741824000 bytes, 2097152000 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

【Make sure there is no partition information】

# TYPE "disk" with no partitions underneath means a raw disk
[core@worker1 ~]$ sudo lsblk
NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                            8:0    0   250G  0 disk
├─sda1                         8:1    0   384M  0 part /boot
├─sda2                         8:2    0   127M  0 part /boot/efi
├─sda3                         8:3    0     1M  0 part
└─sda4                         8:4    0 249.5G  0 part
  └─coreos-luks-root-nocrypt 253:0    0 249.5G  0 dm   /sysroot
sdb                            8:16   0   100G  0 disk
sdc                            8:32   0  1000G  0 disk
sr0                           11:0    1  1024M  0 rom 

【Note: every node must be checked to make sure the device names match】

The key point here is to confirm the disk names are identical on every node. From the output above, the metadata disk is sdb at 100 GB and the capacity disk is sdc at 1 TB, which meets the requirements. A one-shot check across all workers is sketched below.
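
A minimal sketch for checking the disk layout on all workers in one pass, assuming the same SSH key and node names used earlier in this article:

# The sdb/sdc lines should look identical on every node.
for node in worker1 worker2 worker3; do
  echo "=== ${node}.openshift4.cj.io ==="
  ssh -i /data/boot-files/ignition/openshift4/ssh-key/id_rsa \
      core@${node}.openshift4.cj.io "lsblk -d -o NAME,SIZE,TYPE"
done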

3. Installation

Log in to the cluster.

[root@support ~]$oc login
Authentication required for https://api.openshift4.cj.io:6443 (openshift)
Username: admin
Password:
Login successful.

You have access to 57 projects, the list has been suppressed. You can list all projects with 'oc projects'

Using project "default".

# Run the downloader bundled with IBM CP4D; CP4D_STE_Portworx.bin.bin is the downloader and it downloads into ./standard
./CP4D_STE_Portworx.bin.bin
# Enter the directory and extract the archive
tar zxvf cpdv3.0.1_portworx.tg

The result is a set of files under the standard directory, at the following path:

[root@support cpd-portworx]$pwd
/root/standard/cpd-portworx

The scripts we will run next live mainly in the px-images directory (image pushing), the px-install-4.x directory (installation scripts), and px-install-3.11 (for Openshift 3.x).

[root@support cpd-portworx]$tree
.
├── px-images
│   ├── images.lst
│   ├── imgtemp
│   ├── package-px-images.sh
│   ├── process-px-images.sh
│   ├── px_2.5.5-dist.tgz
│   ├── README.txt
│   └── utils.sh
├── px-install-3.11
│   ├── px-images.sh
│   ├── px-install.sh
│   ├── px-sc.sh
│   ├── px-uninstall.sh
│   ├── px-upgrade-precheck.sh
│   ├── px-upgrade.sh
│   ├── px-wipe.sh
│   ├── README.txt
│   └── trial-license-cleaner
├── px-install-4.x
│   ├── 42-cp4d.yaml
│   ├── ContainerRuntimeConfig.yaml
│   ├── cp-pwx-x86.YAML
│   ├── px-install.sh
│   ├── px-install.sh.bak
│   ├── px-sc.sh
│   ├── px-test.yaml
│   ├── px-uninstall.sh
│   ├── README.txt
│   └── versions
└── README.txt

Next, push the images to an image registry. The official documentation recommends the internal Openshift registry because the internal registry does not need a pull secret.

This article pushes directly to an external registry instead; a sketch of the internal-registry alternative follows.
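
For reference, the internal-registry route would look roughly like this (a minimal sketch using the standard Openshift commands for exposing the registry; none of this comes from the CP4D package):

# Expose the internal registry via its default route, then log in with the current user's token.
oc patch configs.imageregistry.operator.openshift.io/cluster \
   --type=merge --patch '{"spec":{"defaultRoute":true}}'
REG=$(oc get route default-route -n openshift-image-registry -o jsonpath='{.spec.host}')
docker login -u "$(oc whoami)" -p "$(oc whoami -t)" "$REG"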

# AAA and BBB are the username and password for the external registry
[root@support ~]$ ./process-px-images.sh  -r registry.cj.io:5000 -u AAA -p BBB -s portworx -t px_2.5.5-dist.tgz

f749b9b0fb21: Pushing [=================================>                 ] 80.61 MB/120.3 MB
f749b9b0fb21: Pushing [==================================>                ]  82.2 MB/120.3 MB
f749b9b0fb21: Pushing [==================================>                ] 82.72 MB/120.3 MB
f749b9b0fb21: Pushing [==================================>                ] 83.25 MB/120.3 MB 

【Note: if an x509 error occurs while pushing images, edit /etc/docker/daemon.json】

# registry.cj.io:5000 is the registry address; change it to your own
vi /etc/docker/daemon.json 
{
   "insecure-registries" : ["registry.cj.io:5000"]
}
# Restart the service
systemctl restart docker
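
If the host pushes with podman/buildah instead of docker, the equivalent (a sketch, not part of the CP4D package) is an insecure-registry entry in /etc/containers/registries.conf:

# Append an insecure-registry entry in the registries.conf v2 format.
cat >> /etc/containers/registries.conf <<'EOF'
[[registry]]
location = "registry.cj.io:5000"
insecure = true
EOF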

【After the push completes, verify the images; the portworx repository namespace now contains the relevant images】

[root@support px-images]$curl -u openshift:redhat https://registry.cj.io:5000/v2/_catalog | jq .
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1152  100  1152    0     0   5690      0 --:--:-- --:--:-- --:--:--  5731
{
  "repositories": [
    "buildah",
    "k8s.gcr.io/pause",
    "ocp4/openshift4",
    "ocs4/mcg-rhel8-operator",
    "pause",
    "portworx/autopilot",
    "portworx/csi-attacher",
    "portworx/csi-node-driver-registrar",
    "portworx/csi-provisioner",
    "portworx/csi-resizer",
    "portworx/csi-snapshotter",
    "portworx/kube-controller-manager-amd64",
    "portworx/kube-scheduler-amd64",
    "portworx/lh-config-sync",
    "portworx/lh-stork-connector",
    "portworx/oci-monitor",
    "portworx/pause",
    "portworx/px-enterprise-ibm-icp4d-oem",
    "portworx/px-lighthouse",
    "portworx/px-node-wiper",
    "portworx/px-operator",
    "portworx/stork",
    "portworx/talisman",
    "rhel7/support-tools",
    "rhscl/ruby-26-rhel7"
  ]
}

【Create a pull secret so that the Portworx images can be pulled in the kube-system namespace】

[root@support px-install-4.x]$./px-install.sh create-secret registry.cj.io:5000 openshift redhat
2020-10-17 23:24:57  INFO: Creating registry secret
secret/px-install-secret created
2020-10-17 23:24:57  INFO: registry secret created successfully 

You can log in to the web console to verify this, or check from the CLI as sketched below.

https://typorabyethancheung911.oss-cn-shanghai.aliyuncs.com/typora/image-20201017232554890.png
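
A CLI equivalent (a minimal sketch; the secret name px-install-secret comes from the script output above):

# Confirm the pull secret exists in kube-system; the TYPE column should show a dockerconfigjson secret.
oc -n kube-system get secret px-install-secret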

【Core installation steps】

The official documentation is quite brief here and gives little detail. According to the README, installation actually takes three steps: first, create the pull secret for the external registry (if one is used); second, create the operator; finally, create the storage cluster instance. The two commands below are both required.

./px-install.sh -reg-sec -reg-pull registry.cj.io:5000 -reg-suffix portworx install-operator
./px-install.sh -reg-sec -reg-pull registry.cj.io:5000 -reg-suffix portworx install-storage /dev/sdb /dev/sdc 

Be patient: portworx-operator comes up, portworx-wiper initializes the disks, and portworx-api keeps reporting errors until px-storage-cluster finishes. Progress can be watched as sketched below.
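
A minimal way to watch the rollout while waiting (a sketch; it assumes the operator has registered the StorageCluster CRD, and the cluster name px-storage-cluster matches the output later in this article):

# Watch the Portworx pods in kube-system; Ctrl-C once the px-storage-cluster pods are Running.
oc -n kube-system get pods -w
# Check the StorageCluster object itself.
oc -n kube-system get storagecluster px-storage-cluster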

4. Verification

Verify that Portworx has been deployed correctly:

PX_POD=$(oc get pods -l name=portworx -n kube-system -o jsonpath='{.items[0].metadata.name}')
oc exec $PX_POD -n kube-system -- /opt/pwx/bin/pxctl status

The output looks like this:

# The actual command
[root@support px-install-4.x]$kubectl exec px-storage-cluster-4zsgn -n kube-system -- /opt/pwx/bin/pxctl status

# If you see the message "Status: PX is operational", Portworx was deployed successfully
Status: PX is operational
# OEM license information
License: IBM Cloud Pak for Data
Node ID: 5941071e-4d9b-4b32-962c-8afa7e01c34a
        IP: 172.18.1.49
        Local Storage Pool: 1 pool
        POOL    IO_PRIORITY     RAID_LEVEL      USABLE          USED    STATUS  ZONE    REGION
        0       HIGH            raid0           1000 GiB        12 GiB  Online  default default
        Local Storage Devices: 1 device
        Device  Path            Media Type              Size                    Last-Scan
        0:1     /dev/sdc        STORAGE_MEDIUM_MAGNETIC 1000 GiB                17 Oct 20 17:28 UTC
        total                   -                       1000 GiB
        Cache Devices:
        No cache devices
        Metadata Device:
        1       /dev/sdb        STORAGE_MEDIUM_MAGNETIC
# Note there are no cache devices; the metadata disk is /dev/sdb and the capacity disk is /dev/sdc
# The cluster name is px-storage-cluster, with three nodes
Cluster Summary
        Cluster ID: px-storage-cluster
        Cluster UUID: 970062e8-7af3-4fe1-8546-10f39585686a
        Scheduler: kubernetes
        Nodes: 3 node(s) with storage (3 online)
        IP              ID                                      SchedulerNodeName               StorageNode     Used    Capacity        Status  StorageStatus   Version         Kernel                          OS
        172.18.1.47     aeaa1684-1aa9-4dee-a1a3-5f2df142453a    worker1.openshift4.cj.io        Yes             0 B     1000 GiB        Online  Up              2.5.5.0-bef5691 4.18.0-193.23.1.el8_2.x86_64    Red Hat Enterprise Linux CoreOS 45.82.202009261329-0 (Ootpa)
        172.18.1.49     5941071e-4d9b-4b32-962c-8afa7e01c34a    worker3.openshift4.cj.io        Yes             0 B     1000 GiB        Online  Up (This node)  2.5.5.0-bef5691 4.18.0-193.23.1.el8_2.x86_64    Red Hat Enterprise Linux CoreOS 45.82.202009261329-0 (Ootpa)
        172.18.1.48     360a19bd-3920-4e1e-ba59-35802722d077    worker2.openshift4.cj.io        Yes             0 B     1000 GiB        Online  Up              2.5.5.0-bef5691 4.18.0-193.23.1.el8_2.x86_64    Red Hat Enterprise Linux CoreOS 45.82.202009261329-0 (Ootpa)
Global Storage Pool
        Total Used      :  0 B
        Total Capacity  :  2.9 TiB

Log in to each node and check:

[core@worker1 ~]$ lsblk
NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                            8:0    0   250G  0 disk 
├─sda1                         8:1    0   384M  0 part /boot
├─sda2                         8:2    0   127M  0 part /boot/efi
├─sda3                         8:3    0     1M  0 part 
└─sda4                         8:4    0 249.5G  0 part 
  └─coreos-luks-root-nocrypt 253:0    0 249.5G  0 dm   /sysroot
sdb                            8:16   0   110G  0 disk 
sdc                            8:32   0   1.5T  0 disk 
├─sdc1                         8:33   0     3G  0 part 
└─sdc2                         8:34   0   1.5T  0 part 
sr0                           11:0    1  1024M  0 rom  

Check the pods:

[root@support px-install-3.11]$kubectl get pods -n kube-system  -o wide
NAME                                READY   STATUS    RESTARTS   AGE     IP            NODE                       NOMINATED NODE   READINESS GATES
portworx-api-7dtmw                  1/1     Running   0      7h48m   172.18.1.47   worker1.openshift4.cj.io   <none>           <none>
portworx-api-qm6jk                  1/1     Running   0      7h48m   172.18.1.49   worker3.openshift4.cj.io   <none>           <none>
portworx-api-vdw75                  1/1     Running   0      7h48m   172.18.1.48   worker2.openshift4.cj.io   <none>           <none>
portworx-operator-579dc8dd6-p7m2f   1/1     Running   0      7h51m   10.131.0.9    worker1.openshift4.cj.io   <none>           <none>
px-storage-cluster-4zsgn            1/1     Running   0      7h48m   172.18.1.49   worker3.openshift4.cj.io   <none>           <none>
px-storage-cluster-ct6gx            1/1     Running   0      7h48m   172.18.1.48   worker2.openshift4.cj.io   <none>           <none>
px-storage-cluster-p258z            1/1     Running   0      7h48m   172.18.1.47   worker1.openshift4.cj.io   <none>           <none>
stork-9769c7445-7gtmp               1/1     Running   0      7h48m   10.128.2.22   worker2.openshift4.cj.io   <none>           <none>
stork-9769c7445-9hp8r               1/1     Running   0      7h48m   10.129.2.19   worker3.openshift4.cj.io   <none>           <none>
stork-9769c7445-g7vcv               1/1     Running   0      7h48m   10.131.0.11   worker1.openshift4.cj.io   <none>           <none>
stork-scheduler-54656c8858-6jb4z    1/1     Running   0      7h48m   10.129.2.18   worker3.openshift4.cj.io   <none>           <none>
stork-scheduler-54656c8858-7nzjf    1/1     Running   0      7h48m   10.131.0.12   worker1.openshift4.cj.io   <none>           <none>
stork-scheduler-54656c8858-ls9q7    1/1     Running   0      7h48m   10.129.2.17   worker3.openshift4.cj.io   <none>           <none>

5. Troubleshooting

Portworx reports a PX filesystem dependencies error.

On the Openshift 4 web console, the error can be read directly from the pod logs:

Failed to load PX filesystem dependencies for kernel 4.18.0-193.23.1.el8_2.x86_64

There are two possible causes:

1. The px_modules need to be updated.

tar --strip-components 4 -C /var/lib/osd/pxfs/latest -xvf /opt/pwx/oci/rootfs/pxlib_data/px-fslibs/px_modules.8.tgz x86_64/<4.18.0-193.13.2.el8_2.x86_64>/version/8/px.ko &&mv /var/lib/osd/pxfs/latest/{,8.}px.ko

On the latest Openshift 4 (version 4.5.14, kernel 4.18.0-193.23.1.el8_2.x86_64), the underlying OS is Red Hat Enterprise Linux CoreOS 45.82.202009261329-0 (Ootpa), and the local directory /opt/pwx/oci/rootfs/pxlib_data/px-fslibs ships px_modules.10.7z. Extract it, find 4.18.0-193.23.1.el8_2.x86_64/version/10/px.ko, copy it to /var/lib/osd/pxfs/latest/, and rename the copies to 8.px.ko and 10.px.ko.

2. Disable Secure Boot. As described in the environment preparation section above, turn off Secure Boot for the virtual machine in VMware; when deploying on bare metal, disable it in the BIOS. A quick way to confirm the state from inside a node is sketched at the end of this section.

When "Secure Boot" is enabled, unsigned kernel extensions will not be allowed to load. vmmon.ko and vmnet.ko are of course not signed with the Fedora cert, so they simply will not run.

cd /tmp
tar -xzvf /usr/lib/vmware/modules/source/vmmon.tar
cd vmmon-only/
make
cp vmmon.ko /lib/modules/2.6.32-504.el6.x86_64/misc/vmmon.ko
modprobe vmmon

Start the VM.
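
To confirm from inside a node whether Secure Boot is actually off (a minimal sketch; the exact kernel message varies by version, and mokutil may not be present in every RHCOS image):

# The kernel logs the Secure Boot state at boot.
sudo dmesg | grep -i 'secure boot'
# Alternatively, if mokutil is available in the image:
sudo mokutil --sb-state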

6. Create the StorageClasses

Create the storage classes required by IBM CP4D. Skip this step if you are not installing IBM CP4D.

[root@support px-install-4.x]$./px-sc.sh
creating storage classes
storageclass.storage.k8s.io/portworx-couchdb-sc created
storageclass.storage.k8s.io/portworx-elastic-sc created
storageclass.storage.k8s.io/portworx-solr-sc created
storageclass.storage.k8s.io/portworx-cassandra-sc created
storageclass.storage.k8s.io/portworx-kafka-sc created
storageclass.storage.k8s.io/portworx-metastoredb-sc created
storageclass.storage.k8s.io/portworx-rwx-gp3-sc created
storageclass.storage.k8s.io/portworx-shared-gp3 created
storageclass.storage.k8s.io/portworx-rwx-gp2-sc created
storageclass.storage.k8s.io/portworx-dv-shared-gp created
storageclass.storage.k8s.io/portworx-shared-gp-allow created
storageclass.storage.k8s.io/portworx-rwx-gp-sc created
storageclass.storage.k8s.io/portworx-shared-gp created
storageclass.storage.k8s.io/portworx-gp3-sc created
storageclass.storage.k8s.io/portworx-nonshared-gp2 created
storageclass.storage.k8s.io/portworx-db-gp2-sc created
storageclass.storage.k8s.io/portworx-db-gp3-sc created
storageclass.storage.k8s.io/portworx-db2-rwx-sc created
storageclass.storage.k8s.io/portworx-db2-rwo-sc created
storageclass.storage.k8s.io/portworx-db2-sc created
storageclass.storage.k8s.io/portworx-watson-assistant-sc created
storageclass.storage.k8s.io/portworx-db2-fci-sc created
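
To confirm they were created (a minimal sketch, not part of the px-sc.sh output):

# List the Portworx storage classes just created by px-sc.sh.
oc get storageclass | grep portworx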

7. Sample

Create a PVC:

[root@support px-install-4.x]$./px-install.sh install-sample-pvc
[root@support px-install-4.x]$oc create -f px-test.yaml

【Verification】

[root@support px-install-4.x]$oc get pod -n portworx -owide
NAME          READY   STATUS    RESTARTS   AGE   IP            NODE                       NOMINATED NODE   READINESS GATES
px-test-pod   1/1     Running   0          41s   10.131.0.15   worker1.openshift4.cj.io   <none>           <none>

【View the attached volume with the Portworx CLI】

[root@support px-install-4.x]$kubectl exec px-storage-cluster-4zsgn -n kube-system -- /opt/pwx/bin/pxctl volume list
ID                   NAME                                 SIZE    HA    SHARED  ENCRYPTED       IO_PRIORITY     STATUS   SNAP-ENABLED
186741747872486621   pvc-e1db9081-6021-428c-a3f0-2f75053848b2    1 GiB   3    v4   no      HIGH    up - attached on 172.18.1.47    no
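
To dig a little deeper (a minimal sketch; the pod name and volume name are taken from the output above and will differ in your environment):

# Check that the sample PVC is Bound, then inspect the backing Portworx volume in detail.
oc get pvc -n portworx
kubectl exec px-storage-cluster-4zsgn -n kube-system -- /opt/pwx/bin/pxctl volume inspect pvc-e1db9081-6021-428c-a3f0-2f75053848b2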

by 张诚, October 18, 2020