Channel: ONTAP Discussions topics

Performance Capacity calculation raw to final


I'm trying to work out how the "current_utilization" value in the raw output of "statistics show -object resource_headroom_cpu -raw" correlates with (i.e. is calculated/converted to) the percentage shown in the standard "statistics show -object resource_headroom_cpu" output.

 

EXAMPLE:

 

cluster1::*> statistics show -object resource_headroom_cpu

Object: resource_headroom_cpu
Instance: CPU_cluster1-n1
Start-time: 11/9/2018 10:23:20
End-time: 11/9/2018 10:31:39
Elapsed-time: 498s
Scope: cluster1-n1

Counter Value
-------------------------------- --------------------------------
current_latency 330us
current_ops 4215
current_utilization 56%             <<<<<<<<<<<<<

 

...

 

cluster1::*> statistics show -object resource_headroom_cpu -raw

Object: resource_headroom_cpu
Instance: CPU_cluster1-n1
Start-time: 11/9/2018 10:27:43
End-time: 11/9/2018 10:27:43
Scope: cluster1-n1

Counter Value
-------------------------------- --------------------------------
current_latency 20267702036037us
current_ops 53787499981
current_utilization 36639819423391%                   <<<<<<  (how is this 56%?)

 

 

Similarly, how is -raw optimal_point_utilization converted to standard output?

 

EG.

statistics show -object resource_headroom_cpu:

optimal_point_utilization 88

 

statistics show -object resource_headroom_cpu -raw:

optimal_point_utilization 28078837
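For what it's worth, my understanding (not an official formula) is that -raw values are cumulative totals since boot, and percent-type counters are derived by taking two samples and dividing the counter's delta by the delta of its base counter over the interval; the base counter for current_utilization can be listed with "statistics catalog counter show -object resource_headroom_cpu" (advanced privilege). A sketch with invented numbers:

# Hypothetical raw samples taken a few seconds apart (all numbers made up):
#   sample 1: current_utilization=1000000   base=2000000
#   sample 2: current_utilization=1002800   base=2005000
$ awk 'BEGIN { u1=1000000; u2=1002800; b1=2000000; b2=2005000;
               printf "utilization = %.0f%%\n", 100*(u2-u1)/(b2-b1) }'
utilization = 56%

The same delta-over-base idea presumably applies to optimal_point_utilization, but the exact base counter should be confirmed from the counter catalog.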


Hybrid Aggregate recommendation - for DS460C with 960GB SSD and 4TB SATA

Could the experts on this forum advise me on the best practice for a hybrid aggregate with 4TB SATA and 960GB SSD on a DS460C shelf? How many SSDs are recommended for 50 x 4TB SATA disks? Best regards, Bhanoji
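For reference, a hedged sketch of enabling Flash Pool on an existing SATA aggregate using an SSD storage pool (aggregate/pool names and disk counts are placeholders; the right SSD count depends on the size of the hot working set rather than a fixed ratio):

cluster1::> storage aggregate modify -aggregate aggr_sata01 -hybrid-enabled true
cluster1::> storage pool create -storage-pool ssd_pool1 -disk-count 4
cluster1::> storage aggregate add-disks -aggregate aggr_sata01 -storage-pool ssd_pool1 -allocation-units 2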

Unjoin Nodes from a Cluster


Hi,

 

I'm wondering if it is possible to unjoin an HA pair from a cluster while it still holds data volumes. According to what I have seen in the ONTAP documentation, all data has to be evacuated before you can unjoin nodes.

 

Problem: there is not enough space on the shelves of the new nodes to evacuate all the data, and not enough rack space for more temporary shelves. Downtime is also not possible, at least not for all of the data.

 

The plan would be to add the new nodes to the cluster, migrate the data volumes that must stay online to the new nodes/shelves in the cluster, then unjoin the old nodes, turn the old controller into a disk shelf (the internal disks are not partitioned), add these shelves to the new controllers' shelf stack, and reassign the disks using the disk reassign -s -d command on the node shell.
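As a very rough sketch of that last nodeshell step (the system IDs below are placeholders, and this is not a validated cDOT procedure):

cluster1::> system node run -node new-node-01 -command sysconfig      # note the relevant System IDs first
cluster1::> system node run -node new-node-01
new-node-01> disk reassign -s 1873649281 -d 1873650492                # -s old owner sysid, -d new owner sysid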

 

Current ONTAP version is 9.4P3. Old nodes are FAS2554, new ones are FAS2720.

 

Reassigning disks worked in 7-Mode; I'm not sure if it still does in cDOT (because of the databases, cluster ring, ...). The disk reassign command is still present on the node shell.

 

Any input/idea on this would be highly appreciated.

NFS version 4.2


Hello

Does anybody know if NFS version 4.2 will be supported and, if so, in which ONTAP version, and a rough idea of the timeline?

 

Thanks

Ian

Snapmirror with encryption


Hello,

Does anybody know if we can set up a SnapMirror replication from a non-encrypted source volume to an NVE destination volume, running ONTAP 9.3?

 

And also, what are the requirements to establish end-to-end data encryption using NVE and SnapMirror between two separate NetApp clusters?
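For what it's worth, a hedged sketch of the destination-side setup that is usually involved (SVM, volume, and aggregate names are placeholders; key-manager and NVE licensing requirements should be verified for your release):

dest::> security key-manager setup
dest::> volume create -vserver svm_dst -volume vol_dst -aggregate aggr1 -size 1t -type DP -encrypt true
dest::> snapmirror create -source-path svm_src:vol_src -destination-path svm_dst:vol_dst -type XDP
dest::> snapmirror initialize -destination-path svm_dst:vol_dst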

 

Thanks !

Brocade/Twinax/AFF200/FAS2750


Hope someone can point me in the right direction.

 

I have some Brocade VDX6740 switches that currently have fibre/transceivers installed, but due to cost I want to swap to Twinax. I'm planning on Brocade active Twinax cables to join the switches together. I am trying to find out about the compatibility of using Brocade Twinax cables to connect the NetApp AFF220 and FAS2750 units to the VDX6740 switch. Does anyone have any information on this?

Integrating and Testing NetApp SolidFire with OpenShift


NetApp is a storage and data management company. It provides software, systems, and services to manage and store data, including its proprietary Data ONTAP operating system. NetApp mainly delivers unified storage solutions to data-intensive enterprises worldwide; Data ONTAP is a leading storage operating system, and the company's storage solutions span specialized hardware, software, and services, providing seamless storage management for open network environments. (From Sogou Baike)
SolidFire was founded in 2010 as an all-flash array storage vendor; its storage controllers are based on standard x86 servers and scale out to a maximum of 100 nodes. SolidFire was acquired by NetApp in December 2015, and in June 2017 NetApp launched a hyper-converged appliance based on SolidFire.
SolidFire provides distributed block storage, similar to Ceph RBD. It is very flexible, supports dynamic scale-out and scale-in, and performs well. It also offers many enterprise features, such as snapshots, group snapshots, a rich API, and very flexible QoS configuration.


Author: 潘晓华Michael
Link: https://www.jianshu.com/p/8a393c68f2c9
Source: 简书 (Jianshu)
Copyright is held by the author; for any form of reproduction, please contact the author for authorization and cite the source.
 
 

What is Trident?

NetApp is a gold member of the CNCF; Trident, which it develops, is an open-source storage provisioner and orchestrator.
Without Trident, using NetApp storage from a Kubernetes/OpenShift environment means manually creating a volume in the NetApp management interface, then creating a PV, then creating a PVC. This requires switching between two platforms and is cumbersome.
Once Trident is deployed and the corresponding StorageClass is configured, Kubernetes/OpenShift can dynamically provision PVCs directly through the StorageClass. The platform calls the NetApp device's API through the Trident controller to create the volume, automatically create the PV, and bind the PVC so Pods can use it. The whole process is automatic and transparent to platform users.
Trident itself also runs as a Pod on the Kubernetes/OpenShift platform.


 

Deploying and Using Trident on OpenShift

Preparation

  1. Log in to the cluster as system:admin
$ oc login -u system:admin
  2. Make sure the cluster can reach the SolidFire system's MVIP (management VIP) and SVIP (storage VIP)
$ telnet $MVIP 443
$ telnet $SVIP 3260
  3. Install the required base packages
$ ansible all -m package -a 'name=lsscsi,iscsi-initiator-utils,sg3_utils,device-mapper-multipath state=present'
$ ansible all -m shell -a 'mpathconf --enable --with_multipathd y'
$ ansible all -m service -a 'name=iscsid enabled=true state=started'
$ ansible all -m service -a 'name=multipathd enabled=true state=started'
$ ansible all -m service -a 'name=iscsi enabled=true state=started'

Deployment

  4. Download the installer package and unpack it
$ wget https://github.com/NetApp/trident/releases/download/v18.10.0/trident-installer-18.10.0.tar.gz
$ tar -xf trident-installer-18.10.0.tar.gz
$ cd trident-installer
  5. Configure the backend.json file
$ cp sample-input/backend-solidfire.json setup/backend.json   # then edit the settings inside
$ cat setup/backend.json
{"version": 1,"storageDriverName": "solidfire-san","Endpoint": "https://{{用户名}}:{{密码}}@{{管理VIP}}/json-rpc/11.0","SVIP": "{{存储VIP}}:3260","TenantName": "trident","UseCHAP": true,"InitiatorIFace": "default","Types": [{"Type": "Bronze", "Qos": {"minIOPS": 1000, "maxIOPS": 2000, "burstIOPS": 4000}},
              {"Type": "Silver", "Qos": {"minIOPS": 4000, "maxIOPS": 6000, "burstIOPS": 8000}},
              {"Type": "Gold", "Qos": {"minIOPS": 6000, "maxIOPS": 8000, "burstIOPS": 10000}}]
}
  6. Create the trident project
$ oc new-project trident
  7. Run the installation check (dry run)
$ ./tridentctl install --dry-run -n trident

This step simulates the full installation and then removes all of the resources it created; the simulation is a thorough check of the whole environment. The log of the run is shown below.

 

[root@master02 trident-installer]# ./tridentctl install --dry-run -n trident -d
DEBU Initialized logging.                          logLevel=debug
DEBU Running outside a pod, creating CLI-based client. 
DEBU Initialized Kubernetes CLI client.            cli=oc flavor=openshift namespace=trident version=1.11.0+d4cacc0
DEBU Validated installation environment.           installationNamespace=trident kubernetesVersion=
DEBU Deleted Kubernetes configmap.                 label="app=trident-installer.netapp.io"namespace=trident
DEBU Namespace exists.                             namespace=trident
DEBU Deleted Kubernetes object by YAML.           
DEBU Deleted installer cluster role binding.      
DEBU Deleted Kubernetes object by YAML.           
DEBU Deleted installer cluster role.              
DEBU Deleted Kubernetes object by YAML.           
DEBU Deleted installer service account.           
DEBU Removed security context constraint user.     scc=privileged user=trident-installer
DEBU Created Kubernetes object by YAML.           
INFO Created installer service account.            serviceaccount=trident-installer
DEBU Created Kubernetes object by YAML.           
INFO Created installer cluster role.               clusterrole=trident-installer
DEBU Created Kubernetes object by YAML.           
INFO Created installer cluster role binding.       clusterrolebinding=trident-installer
INFO Added security context constraint user.       scc=privileged user=trident-installer
DEBU Created Kubernetes configmap from directory.  label="app=trident-installer.netapp.io" name=trident-installer namespace=trident path=/root/trident-installer/setup
INFO Created installer configmap.                  configmap=trident-installer
DEBU Created Kubernetes object by YAML.           
INFO Created installer pod.                        pod=trident-installer
INFO Waiting for Trident installer pod to start.  
DEBU Trident installer pod not yet started, waiting.  increment=280.357322ms message="pod not yet started (Pending)"
DEBU Trident installer pod not yet started, waiting.  increment=523.702816ms message="pod not yet started (Pending)"
DEBU Trident installer pod not yet started, waiting.  increment=914.246751ms message="pod not yet started (Pending)"
DEBU Trident installer pod not yet started, waiting.  increment=1.111778662s message="pod not yet started (Pending)"
DEBU Pod started.                                  phase=Succeeded
INFO Trident installer pod started.                namespace=trident pod=trident-installer
DEBU Getting logs.                                 cmd="oc --namespace=trident logs trident-installer -f"
DEBU Initialized logging.                          logLevel=debug
DEBU Running in a pod, creating API-based client.  namespace=trident
DEBU Initialized Kubernetes API client.            cli=oc flavor=openshift namespace=trident version=v1.11.0+d4cacc0
DEBU Validated installation environment.           installationNamespace=trident kubernetesVersion=v1.11.0+d4cacc0
DEBU Parsed requested volume size.                 quantity=2Gi
DEBU Dumping RBAC fields.                          ucpBearerToken= ucpHost= useKubernetesRBAC=true
DEBU Namespace exists.                             namespace=trident
DEBU PVC does not exist.                           pvc=trident
DEBU PV does not exist.                            pv=trident
INFO Starting storage driver.                      backend=/setup/backend.json
DEBU config: {"Endpoint":"https://admin:root1234@99.248.106.82/json-rpc/11.0","InitiatorIFace":"default","SVIP":"99.248.82.55:3260","TenantName":"trident","Types":[{"Qos":{"burstIOPS":4000,"maxIOPS":2000,"minIOPS":1000},"Type":"Bronze"},{"Qos":{"burstIOPS":8000,"maxIOPS":6000,"minIOPS":4000},"Type":"Silver"},{"Qos":{"burstIOPS":10000,"maxIOPS":8000,"minIOPS":6000},"Type":"Gold"}],"UseCHAP":true,"storageDriverName":"solidfire-san","version":1} 
DEBU Storage prefix is absent, will use default prefix. 
DEBU Parsed commonConfig: {Version:1 StorageDriverName:solidfire-san BackendName: Debug:false DebugTraceFlags:map[] DisableDelete:false StoragePrefixRaw:[] StoragePrefix:<nil> SerialNumbers:[] DriverContext: LimitVolumeSize:} 
DEBU Initializing storage driver.                  driver=solidfire-san
DEBU Configuration defaults                        Size=1G StoragePrefix= UseCHAP=true
DEBU Parsed into solidfireConfig                   DisableDelete=false StorageDriverName=solidfire-san Version=1
DEBU Decoded to &{CommonStorageDriverConfig:0xc42064e0a0 TenantName:trident EndPoint:https://admin:root1234@99.248.106.82/json-rpc/11.0 SVIP:99.248.82.55:3260 InitiatorIFace:default Types:0xc4206d26e0 LegacyNamePrefix: AccessGroups:[] UseCHAP:true DefaultBlockSize:0 SolidfireStorageDriverConfigDefaults:{CommonStorageDriverConfigDefaults:{Size:1G}}} 
DEBU Set default block size.                       defaultBlockSize=512
DEBU Using SF API version from config file.        version=11.0
DEBU Initializing SolidFire API client.            cfg="{trident https://admin:root1234@99.248.106.82/json-rpc/11.0  99.248.82.55:3260 default 0xc4206d26e0  [] 512 map[]}" endpoint="https://admin:root1234@99.248.106.82/json-rpc/11.0" svip="99.248.82.55:3260"
ERRO Error detected in API response.               ID=637 code=500 message=xUnknownAccount name=xUnknownAccount
DEBU Account not found, creating.                  error="device API error: xUnknownAccount" tenantName=trident
DEBU Created account.                              accountID=0 tenantName=trident
DEBU SolidFire driver initialized.                 AccountID=2 InitiatorIFace=default
DEBU Using CHAP, skipped Volume Access Group logic.  AccessGroups="[]" SVIP="99.248.82.55:3260" UseCHAP=true driver=solidfire-san
DEBU Added pool for SolidFire backend.             attributes="map[media:{Offers: ssd} IOPS:{Min: 1000, Max: 2000} snapshots:{Offer:  true} clones:{Offer:  true} encryption:{Offer:  false} provisioningType:{Offers: thin} backendType:{Offers: solidfire-san}]" backend=solidfire_99.248.82.55 pool=Bronze
DEBU Added pool for SolidFire backend.             attributes="map[clones:{Offer:  true} encryption:{Offer:  false} provisioningType:{Offers: thin} backendType:{Offers: solidfire-san} media:{Offers: ssd} IOPS:{Min: 4000, Max: 6000} snapshots:{Offer:  true}]" backend=solidfire_99.248.82.55 pool=Silver
DEBU Added pool for SolidFire backend.             attributes="map[snapshots:{Offer:  true} clones:{Offer:  true} encryption:{Offer:  false} provisioningType:{Offers: thin} backendType:{Offers: solidfire-san} media:{Offers: ssd} IOPS:{Min: 6000, Max: 8000}]" backend=solidfire_99.248.82.55 pool=Gold
DEBU Storage driver initialized.                   driver=solidfire-san
INFO Storage driver loaded.                        driver=solidfire-san
INFO Dry run completed, no problems found.        
DEBU Received EOF from pod logs.                   container= pod=trident-installer
INFO Waiting for Trident installer pod to finish. 
DEBU Pod finished.                                 phase=Succeeded
INFO Trident installer pod finished.               namespace=trident pod=trident-installer
DEBU Deleted Kubernetes pod.                       label="app=trident-installer.netapp.io"namespace=trident
INFO Deleted installer pod.                        pod=trident-installer
DEBU Deleted Kubernetes configmap.                 label="app=trident-installer.netapp.io"namespace=trident
INFO Deleted installer configmap.                  configmap=trident-installer
INFO In-cluster installation completed.           
DEBU Deleted Kubernetes object by YAML.           
INFO Deleted installer cluster role binding.      
DEBU Deleted Kubernetes object by YAML.           
INFO Deleted installer cluster role.              
DEBU Deleted Kubernetes object by YAML.           
INFO Deleted installer service account.           
INFO Removed security context constraint user.     scc=privileged user=trident-installer
  8. Run the actual installation
$ ./tridentctl install -n trident

This step performs the real installation. It creates the serviceaccount, clusterrolebinding, configmap, the trident-installer pod (which is deleted once the Trident deployment has been rolled out), and so on; it also creates a PV and the trident PVC for initialization, and finally creates the Trident deployment, completing the installation.

  • The Trident installation supports a number of custom options (see the example after this list).
  • --etcd-image specifies the etcd image (the default is quay.io/coreos/etcd, which can be slow to download)
  • --trident-image specifies the Trident image
  • --volume-size specifies the size of Trident's persistent storage (default 2GiB)
  • --volume-name specifies the volume name (default etcd-vol)
  • --pv specifies the PV name (default trident)
  • --pvc specifies the PVC name (default trident)
  • --generate-custom-yaml exports all of the YAML it would use into the setup folder without touching the cluster
  • --use-custom-yaml deploys Trident using all of the YAML files under setup
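As a hedged illustration of combining these flags (the values are arbitrary):

$ ./tridentctl install -n trident --volume-size 4GiB --generate-custom-yaml   # only writes YAML into setup/
$ ./tridentctl install -n trident --use-custom-yaml                           # install from the (possibly edited) YAML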

The log from running ./tridentctl install -n trident is shown below.

 

[root@master02 trident-installer]# ./tridentctl install -n trident 
INFO Created installer service account.            serviceaccount=trident-installer
INFO Created installer cluster role.               clusterrole=trident-installer
INFO Created installer cluster role binding.       clusterrolebinding=trident-installer
INFO Added security context constraint user.       scc=privileged user=trident-installer
INFO Created installer configmap.                  configmap=trident-installer
INFO Created installer pod.                        pod=trident-installer
INFO Waiting for Trident installer pod to start.  
INFO Trident installer pod started.                namespace=trident pod=trident-installer
INFO Starting storage driver.                      backend=/setup/backend.json
INFO Storage driver loaded.                        driver=solidfire-san
INFO Starting Trident installation.                namespace=trident
INFO Created service account.                     
INFO Created cluster role.                        
INFO Created cluster role binding.                
INFO Added security context constraint user.       scc=anyuid user=trident
INFO Created PVC.                                 
INFO Controller serial numbers.                    serialNumbers="4BZXJB2,85Q8JB2,4BXXJB2,4BXTJB2"
INFO Created iSCSI CHAP secret.                    secret=trident-chap-solidfire-99-248-82-55-trident
INFO Created PV.                                   pv=trident
INFO Waiting for PVC to be bound.                  pvc=trident
INFO Created Trident deployment.                  
INFO Waiting for Trident pod to start.            
INFO Trident pod started.                          namespace=trident pod=trident-57ccdff48f-gtflx
INFO Waiting for Trident REST interface.          
INFO Trident REST interface is up.                 version=18.10.0
INFO Trident installation succeeded.              
INFO Waiting for Trident installer pod to finish. 
INFO Trident installer pod finished.               namespace=trident pod=trident-installer
INFO Deleted installer pod.                        pod=trident-installer
INFO Deleted installer configmap.                  configmap=trident-installer
INFO In-cluster installation completed.           
INFO Deleted installer cluster role binding.      
INFO Deleted installer cluster role.              
INFO Deleted installer service account.           
INFO Removed security context constraint user.     scc=privileged user=trident-installer
  9. After the install finishes, Trident does not automatically add the backend configured earlier; it has to be added separately. (Personally I think NetApp overthought this, since backend.json was already validated during the dry run; adding it directly during install would be more convenient.)
$ ./tridentctl -n trident create backend -f setup/backend.json
$ ./tridentctl -n trident get backend
+------------------------+----------------+--------+---------+
|          NAME          | STORAGE DRIVER | ONLINE | VOLUMES |
+------------------------+----------------+--------+---------+
| solidfire_99.248.82.55 | solidfire-san  | true   |       0 |
+------------------------+----------------+--------+---------+
  10. Add a basic StorageClass
     Replace BACKEND_TYPE in sample-input/storage-class-basic.yaml.templ with the backend's STORAGE DRIVER value (solidfire-san in this example)
$ cat sample-input/storage-class-basic.yaml.templ
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: basic
provisioner: netapp.io/trident
parameters:
  backendType: "__BACKEND_TYPE__"
$ sed "s/__BACKEND_TYPE__/solidfire-san/" sample-input/storage-class-basic.yaml.templ | occreate -f -
  11. Create a StorageClass for each Type defined in the backend
$ cat storage-class-gold.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gold
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: netapp.io/trident
parameters:
  storagePools: "solidfire_99.248.82.55:Gold"   # solidfire_99.248.82.55 is the backend name; Gold is the Type
$ oc create -f storage-class-gold.yaml

View the current StorageClasses

$ oc get sc
NAME             PROVISIONER         AGE
basic            netapp.io/trident   2h
gold (default)   netapp.io/trident   1h

 

Usage: Creating a PVC

  12. Create the first PVC
$ cat test-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    volume.beta.kubernetes.io/storage-class: gold
    volume.beta.kubernetes.io/storage-provisioner: netapp.io/trident
    trident.netapp.io/reclaimPolicy: "Retain"
  name: testpvc
  namespace: test
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
$ oc create -f test-pvc.yaml

Notes on the PVC definition:

  • volume.beta.kubernetes.io/storage-class is the StorageClass created in steps 10 and 11
  • volume.beta.kubernetes.io/storage-provisioner specifies NetApp's Trident as the provisioner
  • trident.netapp.io/reclaimPolicy sets the reclaimPolicy of the PV that gets created; the default is "Delete", and "Delete" and "Retain" are supported but "Recycle" is not
  • accessModes: because SolidFire is block storage, only ReadWriteOnce is supported

SolidFire Functional Tests

Restoring data from a snapshot

Create a snapshot
(screenshot: creating a snapshot)

Restore PVC data from an existing snapshot

Create a new PVC from the snapshot
(screenshot: selecting a snapshot to create new storage)

 

Restore a volume from an existing snapshot

 

 

View the IQN of the newly created volume
(screenshot: IQN of the new volume)
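For reference, a hedged sketch of how the same IQN can be discovered from a host instead of the UI (the SVIP below comes from the backend.json above; the output IQN is illustrative):

$ iscsiadm -m discovery -t sendtargets -p 99.248.82.55:3260
$ iscsiadm -m node      # lists discovered targets, e.g. iqn.2010-01.com.solidfire:fs69.<volume>.<id>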



Create a PV from the new volume

$ cat test-clone-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: netapp.io/trident
    volume.beta.kubernetes.io/storage-class: gold
  name: test-dd-testxx-volume
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 100Gi
  iscsi:
    chapAuthDiscovery: true
    chapAuthSession: true
    fsType: ext4
    iqn: iqn.2010-01.com.solidfire:fs69.test-dd-testxx-volume.169
    iscsiInterface: default
    lun: 0
    secretRef:
      name: trident-chap-solidfire-99-248-82-55-trident
      namespace: trident
    targetPortal: 99.248.82.55:3260
  persistentVolumeReclaimPolicy: Delete
  storageClassName: gold
$ oc create -f test-clone-pv.yaml

Create a PVC that uses the manually created PV

$ cat test-clone-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test111x
  namespace: test-dd
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi

Group snapshots

A group snapshot is similar to a snapshot, except that it captures the data of multiple volumes at the same point in time, avoiding inconsistencies between them. On restore, the data from that common point in time is restored together.

(screenshot: creating a group snapshot)
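As a rough illustration only, a group snapshot can also be requested directly through the SolidFire JSON-RPC API (the endpoint format follows backend.json above; the credentials, volume IDs, and snapshot name are placeholders):

$ curl -k -s https://admin:password@99.248.106.82/json-rpc/11.0 \
    -d '{"method": "CreateGroupSnapshot", "params": {"volumes": [101, 102], "name": "app-consistent-snap"}, "id": 1}'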
 

Cloning data from an existing PVC

Add the annotation trident.netapp.io/cloneFromPVC: test-pvc to create a new PVC based on the existing PVC test-pvc

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    trident.netapp.io/cloneFromPVC: test-pvc
  name: test-clone-pvc
  namespace: test-dd
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
  storageClassName: gold


SolidFire Performance Tests

Test environment

  • OpenShift 3.11 deployed on physical machines: 3 masters, 4 nodes
  • SolidFire: 4 nodes, model SF9605, 10 SSDs per node; roughly 50,000 IOPS per node, 200,000 IOPS maximum for the cluster
  • Each PV uses the gold storageclass: {"Type": "Gold", "Qos": {"minIOPS": 6000, "maxIOPS": 8000, "burstIOPS": 10000}}

dd tests

# Test command
$ dd if=/dev/zero of=/data/dd.test bs=4k count=200000 oflag=direct
  1. dd test with a single pod and a single PV
     Create a deployment to run the test
$ cat test0-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    volume.beta.kubernetes.io/storage-class: gold
    volume.beta.kubernetes.io/storage-provisioner: netapp.io/trident
  name: test0
  namespace: test-dd
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
$ oc create -f test0-pvc.yaml   # create the test storage
$ cat dd.yaml
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  labels:
    run: ddtest
  name: ddtest
spec:
  replicas: 1
  selector:
    run: ddtest
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        run: ddtest
    spec:
      containers:
      - command:
        - /bin/bash
        - '-c'
        - |
          #/bin/bash
          dd if=/dev/zero of=/data/out.test1 bs=4k count=200000 oflag=direct
        image: tools/iqperf:latest
        imagePullPolicy: Always
        name: ddtest
        volumeMounts:
        - mountPath: /data
          name: volume-spq10
      volumes:
      - name: volume-spq10
        persistentVolumeClaim:
          claimName: test0
  triggers:
  - type: ConfigChange
$ oc create -f dd.yaml

The log as seen in the web console:

200000+0 records in
200000+0 records out
819200000 bytes (819 MB) copied, 68.8519 s, 11.9 MB/s

Cluster I/O as seen in the NetApp management UI (only the period after 11:32 is relevant):


(screenshot: dd test with a single pod and a single PV)

 

IOPS: 2908

  2. 1 pod, 1 PV, 8 dd processes
     Update the command in the deploymentconfig from step 1 to:
...
- command:
  - '/bin/bash'
  - '-c'
  - |
    #!/bin/bash
    for i in {1..8}
    do
      dd if=/dev/zero of=/data/dd.test$i bs=4k count=200000 oflag=direct &
    done
    sleep 1000000
...
(screenshot: 1 pod, 1 PV, 8 dd processes)

 

IOPS: 10000

  3. dd test with 8 pods and 8 PVs at the same time
     Create a statefulset and use volumeClaimTemplates to create the storage in bulk
$ cat dd-statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: testdd
  namespace: test-dd
spec:
  serviceName: testdd
  replicas: 8
  selector:
    matchLabels:
      app: testdd
  template:
    metadata:
      labels:
        app: testdd
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: testdd
          command:
            - /bin/bash
            - '-c'
            - |
              #!/bin/bash
              dd if=/dev/zero of=/data/out.test1 bs=4k count=2000000 oflag=direct
          image: 'harbor.apps.it.mbcloud.com/tools/iqperf:latest'
          imagePullPolicy: Always
          volumeMounts:
            - name: data
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: gold
        resources:
          requests:
            storage: 100Gi
 
(screenshot: dd test with 8 pods and 8 PVs)

 

IOPS: 33883

  4. 8 pods, 8 PVs, 8 dd processes per pod (64 dd processes in total)
     Change the command in the statefulset from step 3 to:
...
- command:
  - '/bin/bash'
  - '-c'
  - |
    #!/bin/bash
    for i in {1..8}
    do
      dd if=/dev/zero of=/data/dd.test$i bs=4k count=200000 oflag=direct &
    done
    sleep 1000000
...
(screenshot: 8 pods, 8 PVs, 64 dd processes)


IOPS: 76832
This reaches the IOPS cap configured for the gold Type.

  5. 50 pods, 50 PVs, 3 dd processes per pod (150 dd processes in total)


Details of a single PV volume at this point:




IOPS: 205545
At this scale the cluster-wide maximum (roughly 200,000 IOPS) is the limiting factor rather than the per-volume gold cap.
The combined results are as follows:

Pods   PVs   dd processes   IOPS
1      1     1              2908
1      1     8              10000
8      8     8              33883
8      8     64             76832
50     50    150            205545

 

Database test

Test tool: mydbtest
Test configuration

$ mysql -uapp -h172.30.213.17 -papp app -e "create table t_mytest(col1 int);"
$ cat test.conf
option
name app
loop 20000
user app/app@172.30.213.17:3306:app
declare
a int 1030000
begin
#select * from t_mytest where col1 = :a;   # query
insert into t_mytest set col1 = :a;        # insert
end

Run the test

./mydbtest_64.bin query=test.conf  degree=40

Results

# Insert data
2019-01-23 17:59:35 Total  tran=20000=312/s, qtps=40000=624/s, ela=64046 ms, avg=3202 us
Summary: SQL01 exec=800000, rows=0=0/e, avg=65 us
Summary: SQL02 exec=800000, rows=800000=100/e, avg=3135 us
Summary: exec=12307/s, qtps=24615/s
# Read data after creating an index (of limited reference value)
2019-01-23 17:56:31 Total  tran=20000=3835/s, qtps=40000=7670/s, ela=5203 ms, avg=260 us
Summary: SQL01 exec=800000, rows=22668078=2833/e, avg=174 us
Summary: SQL02 exec=800000, rows=0=0/e, avg=69 us
Summary: exec=133333/s, qtps=266666/s

The insert qtps is 24615/s, which is good performance.


Is modifying port parameters (MTU, Auto-Negotiation, Duplex, Speed) disruptive?


Is modifying port parameters (MTU, Auto-Negotiation, Duplex, Speed) disruptive? If so, why? The ports are part of an interface group; one port is up and the other port in the ifgrp is down.
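For reference, a hedged sketch of the commands in question (node, port, and values are placeholders):

cluster1::> network port show -node node1 -port e0c -fields mtu,duplex-admin,speed-admin
cluster1::> network port modify -node node1 -port e0c -mtu 9000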


Any Situation to Read Data from a DP Volume?


Greeting everyone.

 

My question originates from a description mentioned in FabricPool feature,

 

https://www.netapp.com/us/media/tr-4598.pdf

On Page 8, "If read, cold data blocks on the cloud tier become hot and are written to the performance tier. "

This description is to describe read behavior of a cold block from a volume with Backup tiering policy.

 

In a normal situation, the Backup tiering policy can only be applied to a DP volume, which cannot have a junction path, so no client can read from a DP volume. For a non-DP volume, we can designate the Backup tiering policy while performing a volume move, but it reverts to the Auto tiering policy once the move finishes, so technically we do not read from a volume with the Backup tiering policy. If we create a FlexClone from the DP volume and read from the FlexClone volume, it no longer has the Backup tiering policy, so again we are not reading from a volume with the Backup tiering policy.

 

Hence the question: under what conditions can we read data from a volume with the Backup tiering policy? To the best of my knowledge, a SnapMirror cascade is one possible situation, where the data in the secondary DP volume is read and transferred to the tertiary DP volume. Is there any other situation besides that?

 

Any reply would be very much appreciated.

 

 

 

Enable space allocation command


Hello,

 

I tried to use unmap command for a vmfs datastore but I got the following error:

"Devices backing volume 556c77e1-b49cf970-f474-10604ba82a40 do not support UNMAP"

 

I read that space allocation should be enabled on the corresponding LUN in order to use this command.

I just wanted to know what the impact of the "lun set space_alloc <lun_path> enable" command is.

I mean, is it safe to run this command on a production LUN while the LUN is online and used by many users?
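For reference, a hedged sketch of the clustered ONTAP form of that setting (vserver and path are placeholders; some releases require the LUN to be offline when changing it, so check the documentation for your version):

cluster1::> lun modify -vserver svm1 -path /vol/datastore_vol/lun1 -space-allocation enabled
cluster1::> lun show -vserver svm1 -path /vol/datastore_vol/lun1 -fields space-allocation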

 

BR,

Ilir

Setup multiple peer


Hi,

 

We're trying to set up a peer relationship to a cluster that already has a peer relationship with another cluster on another VLAN. I can set up the peer from the cluster that doesn't yet have a relationship, and it stays in the pending state. If I try to set up the peer from the cluster that already has a peer relationship, I get the error "an introductory rpc to the peer address failed".
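For context, a hedged sketch of the sequence being attempted (intercluster LIF addresses are placeholders):

clusterB::> cluster peer create -peer-addrs 10.10.20.11,10.10.20.12     # from the cluster without an existing peer (ends up pending)
clusterA::> cluster peer create -peer-addrs 10.10.30.21,10.10.30.22     # from the cluster that already has a peer (fails with the RPC error)
clusterA::> cluster peer show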

 

Does anyone know if this configuration is possible at all?

 

thanks,

New TR Released: TR-4569: Security Hardening Guide for NetApp ONTAP 9


This document provides guidance and configuration settings for NetApp® ONTAP® 9 to help organizations to meet prescribed security objectives for information system confidentiality, integrity, and availability.
For more info, please click here

 

New TR Released: TR-4650: NetApp ONTAP and Splunk Enterprise


This document presents the performance and reliability validation test results for NetApp ONTAP in a Splunk Enterprise environment. It also includes storage efficiency test results for Splunk indexer data.

For more info, please click here

 

Data Ontap SMI-S Agent is not running. HTTP Error (401 Unauthorized) is showing.


Hi,

 

we have installed SMI-S Agent 4.1 on Windows 2008 R2 to enable CIM credentials for ServiceNow discovery. After successful installation on the server, the agent service is not running. I tried to start the service using "smis cimserver start" but had no success. The error shown is: "C:\PROGRA~2\Ontap\smis\pegasus\bin\cimcli Pegasus Exception: HTTP Error (401 Unauthorized).. Cmd = ni Object = ONTAP_FilerData".

vulnerability


Hi Team,

 

Looking for a solution for the vulnerabilities below; please check the attached file for details.

 

NetApp Release 8.2.3P3 7-Mode: Tue Apr 28 14:48:22 PDT 2015

 

The 'EJBInvokerServlet' and 'JMXInvokerServlet' servlets hosted on the web server on the remote host are accessible to unauthenticated users. The remote host is, therefore, affected by the following vulnerabilities:

 

  - A security bypass vulnerability exists due to improper restriction of access to the console and web management interfaces. An unauthenticated, remote attacker can exploit this, via direct requests, to bypass authentication and gain administrative access.

    (CVE-2007-1036)

 

  - A remote code execution vulnerability exists due to the JMXInvokerHAServlet and EJBInvokerHAServlet invoker servlets not properly restricting access to profiles. An unauthenticated, remote attacker can exploit this to bypass authentication and invoke MBean methods, resulting in the execution of arbitrary code.

    (CVE-2012-0874)

 

  - A remote code execution vulnerability exists in the EJBInvokerServlet and JMXInvokerServlet servlets due to the ability to post a marshalled object. An unauthenticated, remote attacker can exploit this, via a specially crafted request, to install arbitrary applications. Note that this issue is known to affect McAfee Web Reporter versions prior to or equal to version 5.2.1 as well as Symantec Workspace Streaming version 7.5.0.493 and possibly earlier.

    (CVE-2013-4810)

 

 

 

Thanks & Regards

Prajyot Katakdound

prajyot.katakdound.wg@hitachi-systems.com

 

 


Ontap select : Half duplex and NFS issues


We are trying to run a VM on an NFS datastore presented by a 2-node ONTAP Select cluster. Everything is working except that the VM only has access to its disks for about 30 seconds out of every minute. The Ethernet ports on the ONTAP Select appliance are shown as half duplex even though they are configured for full duplex. The ESXi host vmnics are full duplex.

Does anyone have an idea how to fix this issue?

Thanks!

 

Ontap select 9.5RC1

VMware ESXi, 6.7.0, 10302608

standard vswitch 

 

  Node: ONTAP-02
                                        Port: e0b
                                        Link: up
                                         MTU: 9000
             Auto-Negotiation Administrative: false
                Auto-Negotiation Operational: false
                  Duplex Mode Administrative: full
                     Duplex Mode Operational: half
                        Speed Administrative: 10000
                           Speed Operational: -
                 Flow Control Administrative: none
                    Flow Control Operational: none
                                 MAC Address: 00:a0:b8:f6:e4:fc
                                   Port Type: physical
                 Interface Group Parent Node: -
                 Interface Group Parent Port: -
                       Distribution Function: -
                               Create Policy: -
                            Parent VLAN Node: -
                            Parent VLAN Port: -
                                    VLAN Tag: -
                            Remote Device ID: -
                                IPspace Name: Default
                            Broadcast Domain: Default
                          MTU Administrative: 9000
                          Port Health Status: healthy
                   Ignore Port Health Status: false
                Port Health Degraded Reasons: -

 

 

what is the CPU Concurrency for iSCSI processing in ONTAP 9.3 on 16 or more cores


Hi,

Per this KB:
https://kb.netapp.com/app/answers/answer_view/a_id/1001217/~/faq%3A-cpu-utilization-in-data-ontap%3A-scheduling-and-monitoring-

ssan_exempt = Introduced in clustered Data ONTAP 8.2 = CPU Concurrency = 1+ CPU


According to the iSCSI Performance improvements in 9.3:
The entire iSCSI stack was re-written for ONTAP 9.3 : iSCSI (ONTAP 9.3) is now able to take advantage of numerous CPU cores.

My question is :
In ONTAP 9.3 what is the CPU Concurrency for iSCSI processing on systems such as AFF-A300 with 16 cores ?


Thanks,
-Ashwin


SVM-DR browse snapshots on destination


Hi,

 

I'm wondering if it is possible to browse/access snapshots on an SVM-DR destination (while the destination is offline)?
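One approach sometimes used on SnapMirror destinations is to clone a snapshot rather than browsing it directly (hedged sketch; names are placeholders, and whether the stopped DR SVM can actually serve the clone needs to be verified for SVM-DR):

dest::> volume clone create -vserver svm_dr -flexclone vol1_clone -parent-volume vol1 -parent-snapshot daily.2018-11-09_0010
dest::> volume mount -vserver svm_dr -volume vol1_clone -junction-path /vol1_clone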

 

Thank you!

 

FlexVol 20mb minimum


Why? I can't understand why I can't create a 10 MB FlexVol.

how to find a history of volume growth in NetApp 7 Mode


Is there a way to find the space growth history of a particular volume on a NetApp 7-Mode system?
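If there is no OnCommand/DFM history available, one low-tech option is to start collecting samples yourself (hedged sketch; the filer name, volume, and log path are placeholders):

# Cron entry on a Linux admin host, e.g. hourly:  0 * * * * /usr/local/bin/vol_growth.sh
$ cat /usr/local/bin/vol_growth.sh
#!/bin/bash
# Append a timestamped "df -h" sample for the volume to a log file
echo "$(date +%F_%T) $(ssh root@filer01 df -h /vol/vol_data | sed -n 2p)" >> /var/log/filer01_vol_data_growth.log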
