Allow deletion of volumes with existing snapshots. The proposal is to integrate this with the existing volume delete path, using an additional parameter to request deletion of snapshots as well when deleting a volume.
When deleting a volume, the delete operation may fail because snapshots still exist for that volume. The caller is then forced to examine snapshot information and issue many calls to remove the snapshots, even if they have no interest in the snapshots at all and just want the volume gone.
Since snapshots are “children” of volumes in our model of the world, it is reasonable to allow a volume and its snapshots to be removed in one operation both for usability and performance reasons.
I received a request for this functionality because a project integrating with Cinder would like to be able to just delete volumes without having to handle logic for this. I think that is a reasonable point of view (just as it’s reasonable to figure that a user shouldn’t have to handle this).
Requiring back-and-forth between cinder-volume and the backend to delete a volume and its snapshots incurs two performance costs:
- Extra time spent checking the status of X requests.
- Time spent merging snapshot data into another snapshot or volume which is going to immediately be deleted.
This means that we currently force a “delete it all” operation to take more I/O and time than it really needs to. (The degree depends on the particular backend.)
A volume delete operation should handle this by default.
This is the generic/“non-optimized” case, which will work with any volume driver.
This case is for volume drivers that wish to handle mass volume/snapshot deletion in an optimized fashion.
Starting in the volume manager:

1. Check for a driver capability of ‘volume_with_snapshots_delete’. (Name TBD.) This will be a new abc driver feature. A sketch of the resulting dispatch follows.
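To make the flow concrete, here is a minimal sketch of the manager-side dispatch, assuming the ‘volume_with_snapshots_delete’ capability flag and a hypothetical driver method delete_volume_with_snapshots (both names TBD); the class shapes are illustrative, not Cinder’s actual internals::

    class InvalidVolume(Exception):
        pass


    class VolumeManager(object):
        def __init__(self, driver, db):
            self.driver = driver
            self.db = db

        def delete_volume(self, context, volume, delete_snapshots=False):
            snapshots = self.db.snapshot_get_all_for_volume(
                context, volume['id'])

            if snapshots and not delete_snapshots:
                # Today's behavior: refuse to delete a volume that
                # still has snapshots.
                raise InvalidVolume('volume %s has snapshots'
                                    % volume['id'])

            if snapshots and self.driver.capabilities.get(
                    'volume_with_snapshots_delete'):
                # Optimized case: one driver call removes the volume
                # and all of its snapshots.
                self.driver.delete_volume_with_snapshots(volume, snapshots)
            else:
                # Generic case: works with any driver; delete each
                # snapshot first, then the volume.
                for snapshot in snapshots:
                    self.driver.delete_snapshot(snapshot)
                self.driver.delete_volume(volume)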
No direct impact.
In the implementation, we need to ensure we don’t end up with inconsistent states, such as a volume in a “deleting” status that has snapshots in “available” status. Thus, a failure to delete a single snapshot in this model may cascade to marking the volume and all other associated snapshots as errored. (This is only relevant for phase 2 above; it doesn’t happen if we leave the snapshot and volume delete operations separate internally.)
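One possible shape for that cascade, as a sketch (the helper name and the ‘error_deleting’ status value are assumptions for illustration, not settled design)::

    def _cascade_error(db, context, volume, snapshots):
        # Hypothetical helper: called from an except block around the
        # driver's mass-delete call before re-raising, so we never
        # leave a 'deleting' volume with 'available' snapshots.
        db.volume_update(context, volume['id'],
                         {'status': 'error_deleting'})
        for snapshot in snapshots:
            db.snapshot_update(context, snapshot['id'],
                               {'status': 'error_deleting'})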
Add a boolean parameter “delete_snapshots” to the delete volume call, which defaults to false.
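For illustration, the API layer might accept the flag roughly as below; the controller shape, the query-string plumbing, and the ‘cinder.context’ environ key are simplifications/assumptions for this sketch::

    def _parse_bool(value):
        # Accept the usual truthy spellings from the query string.
        return str(value).lower() in ('1', 't', 'true', 'yes', 'y')


    class VolumeController(object):
        def __init__(self, volume_api):
            self.volume_api = volume_api

        def delete(self, req, id):
            # New optional parameter; defaults to False so existing
            # callers see no behavior change.
            delete_snapshots = _parse_bool(
                req.params.get('delete_snapshots', 'false'))
            context = req.environ['cinder.context']
            self.volume_api.delete(
                context, id, delete_snapshots=delete_snapshots)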
A volume delete for a volume with snapshots, which previously returned 400, will now succeed when the new parameter is set.
All snapshot/volume delete notifications will still be fired.
New --delete-snapshots parameter for volume-delete in cinderclient.
This should take whatever driver-specific steps are needed to delete the snapshots and associated volume data.
The assumption can be made that any failed snapshot delete results in a failed volume, so this does not have to account for partial failures.
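A sketch of what the new abc interface could look like (class and method names are placeholders, per the “Name TBD” note above)::

    import abc


    class VolumeWithSnapshotsDeleteDriver(abc.ABC):
        """Mixin for drivers supporting optimized mass deletion."""

        @abc.abstractmethod
        def delete_volume_with_snapshots(self, volume, snapshots):
            """Delete the volume and all of its snapshots in one
            driver operation.

            Any failed snapshot delete may be treated as failing the
            whole volume; partial failures need not be reported.
            """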
Note: None of this has to happen at a level above the volume manager since the volume manager handles all related status updates.
Rough order should be:

* Add parsing for the new parameter to the volume delete API
* Implement volume manager logic to delete everything
* Create an abc class for the new driver interface
* Implement volume manager logic to talk to the new driver interface
* Implement an optimized case for the LVM driver
Tempest tests will be added to cover this.
Need to document the new behavior of the volume delete call, as well as related client examples, etc.