System info: FAS2720 running ONTAP 9.7P17, plus 2 x DS212C expansion shelves populated with 3.5" 10 TB SAS disks. Eight of the disks in the base unit are root/data partitioned and host the RAID-TEC root aggregates. The expansion shelves have single-partitioned disks.
What I want to do: create one data aggregate out of all the disks in order to maximize spindles. All the expansion shelf disks show as spares owned by node 1, with their root and data ownership fields showing as unowned. By default I can't create that aggregate, because the 8 base unit disks hosting the root aggregates have a different partition scheme than the rest. So I'm trying to give the other disks the same root/data partitioning as those 8 disks, but it's not working.
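For context, here's roughly how I've been checking this from the CLI (the exact commands may differ slightly from what I actually typed):

::> storage disk show -container-type spare
::> storage disk show -partition-ownership

i.e. the expansion shelf disks come up as spares owned by node 1 in the first view, and as unowned for root/data in the partition-ownership view.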
After disabling disk autoassign, here's what I'm doing for the first disk:
::> storage disk create-partition -source-disk 1.0.1 -target-disk 2.0.1
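For reference, autoassign was turned off beforehand with something along these lines (the second command is just to confirm the setting per node):

::> storage disk option modify -node * -autoassign off
::> storage disk option show -fields autoassign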
This initially works: "storage disk show ..." shows 2.0.1 with root/data partitions owned by node 1. If I switch to the aggregate creation GUI, I now see one more disk in the relevant row than before. All good. However, after a few minutes, if I run "storage disk show ..." again, 2.0.1 has lost its root/data ownership. ONTAP appears to have silently reverted my "storage disk create-partition ..." command.
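From the CLI side, I believe the same thing is visible in the spare partition list, where the new data partition appears and then drops out again after a few minutes; something like:

::> storage aggregate show-spare-disks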
To test this, I ran "storage disk create-partition ..." on all the relevant disks as quickly as possible and then created an aggregate, which worked. However, after that, "storage disk show ..." showed nothing in the aggregate field for those disks. Also, while I was able to create volumes on the aggregate via the CLI (after assigning it to the correct SVM), I couldn't do so in the GUI because the aggregate didn't appear in the list.
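To be concrete, the whole sequence was roughly as follows (aggregate, SVM, and volume names here are placeholders, and the exact options may have differed slightly from what I actually typed):

::> storage disk create-partition -source-disk 1.0.1 -target-disk 2.0.1
    (repeated for each expansion shelf disk)
::> storage aggregate create -aggregate aggr_data -node <node1> -diskcount <N>
::> storage disk show -disk 2.0.* -fields aggregate
::> vserver add-aggregates -vserver <svm1> -aggregates aggr_data
::> volume create -vserver <svm1> -volume vol_test -aggregate aggr_data -size 1TB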
I'm stumped at this point! Anyone?