Enable Rebuild for Instances in cell0

https://blueprints.launchpad.net/nova/+spec/enable-rebuild-for-instances-in-cell0

This spec summarizes the changes needed to enable the rebuilding of instances that failed to be scheduled because there were not enough resources.

Problem description

Presently, rebuilding servers in the ERROR state is allowed, as long as they have successfully started up at least once before. But if a user tries to rebuild an instance that was never launched because the scheduler failed to find a valid host due to the lack of available resources, the request fails with an InstanceInvalidState exception [1]. We are not addressing the case where the server was never launched due to exceeding the maximum number of build retries.

Use Cases

  1. As an operator, I want to be able to perform corrective actions after a server fails to be scheduled because there were not enough resources (i.e. the instance ends up in the PENDING state, if so configured). Such actions could be adding more capacity or freeing up used resources. After taking these actions, I want to be able to rebuild the server that failed.

Note

Adding the PENDING state, as well as setting instances to it, is out of the scope of this spec, as it is being addressed by another change [2].

Proposed change

The flow of the rebuild procedure for instances mapped to cell0 because of scheduling failures caused by lack of resources would be as follows (a minimal sketch of the API-side flow is given after the list):

  1. The nova-api, after identifying an instance as being in cell0, should create a new BuildRequest and update the instance mapping (cell_id=None).

  2. At this point the nova-api should also delete the instance records from the cell0 DB. If this is a soft delete [3], then after the operation completes successfully we would end up with one record of the instance in the new cell’s DB and a record of the same instance in cell0 (deleted=True). A better approach here would be to hard delete [4] the instance’s information from cell0.

  3. Then the nova-api should make an RPC API call to the conductor’s new method rebuild_instance_in_cell0. This new method’s purpose is almost (if not exactly) the same as that of the existing schedule_and_build_instances. So we could either call it internally or extract parts of its functionality and reuse them. The reason behind this is mainly to avoid calling schedule and build code in the super-conductor directly from rebuild code in the API.

  4. Finally, an RPC API call is needed from the conductor to the compute service of the selected cell. The rebuild_instance method tries to destroy an existing instance and then re-create it. In this case, since the instance was in cell0, there is nothing to destroy and re-create. So an RPC API call to the existing method build_and_run_instance seems appropriate.
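
The following is a minimal, illustrative sketch of the API-side flow above. Every class and helper in it (InstanceMapping, BuildRequest, ConductorAPI, the in-memory dicts) is a simplified stand-in for the corresponding Nova concept, not the real object or RPC interface.

  from dataclasses import dataclass, field
  from typing import Optional

  cell0_db = {}        # uuid -> instance record buried in cell0
  build_requests = {}  # uuid -> BuildRequest recreated for the rebuild

  @dataclass
  class InstanceMapping:
      instance_uuid: str
      cell_id: Optional[int]  # None means "not mapped to a cell yet"

  @dataclass
  class BuildRequest:
      instance_uuid: str
      # keypair, tags, BDMs, config_drive, ... recovered from the cell0 record
      extra: dict = field(default_factory=dict)

  class ConductorAPI:
      def rebuild_instance_in_cell0(self, instance_uuid, build_request):
          # In the proposal this is an RPC call to the conductor, which reuses
          # the schedule_and_build_instances logic and finally calls
          # build_and_run_instance on the selected compute service.
          print(f"scheduling and building {instance_uuid}")

  def rebuild_from_cell0(instance_uuid, mapping, conductor=None):
      conductor = conductor or ConductorAPI()
      # 1. Recreate the BuildRequest from the buried instance and point the
      #    instance mapping back at "no cell" (cell_id=None).
      record = cell0_db[instance_uuid]
      build_requests[instance_uuid] = BuildRequest(instance_uuid, dict(record))
      mapping.cell_id = None
      # 2. Hard delete the cell0 record so no stale copy survives the rebuild.
      del cell0_db[instance_uuid]
      # 3. Hand off to the conductor for scheduling and building.
      conductor.rebuild_instance_in_cell0(
          instance_uuid, build_requests[instance_uuid])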

Information provided by the user in the initial request, such as the keypair, trusted_image_certificates, BDMs, tags and config_drive, can be retrieved from the instance buried in cell0. Currently, there is no way to recover the requested networks while trying to rebuild the instance. To address this:

  1. A reasonable change would be to extend the RequestSpec object by adding a requested_networks field, where the requested networks will be stored (see the sketch after this list).

  2. When the scheduler fails to find a valid host for an instance and the VM goes to cell0, the list of requested networks will be stored in the RequestSpec.

  3. As soon as the rebuild procedure starts and the requested networks are retrieved, the new field will be set to None.
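
The following is a minimal sketch of this flow; RequestSpecStub and the two helper functions are hypothetical simplifications of the real RequestSpec object and the conductor/API code paths.

  from dataclasses import dataclass
  from typing import Optional

  @dataclass
  class RequestSpecStub:
      instance_uuid: str
      requested_networks: Optional[list] = None  # proposed new field

  def bury_in_cell0(spec: RequestSpecStub, requested_networks: list) -> None:
      # Scheduling failed: persist the requested networks so a later rebuild
      # from cell0 can recover them.
      spec.requested_networks = requested_networks

  def start_rebuild(spec: RequestSpecStub) -> list:
      # Rebuild starting: read the stored networks, then reset the field to
      # None as described in step 3 above.
      networks = spec.requested_networks or []
      spec.requested_networks = None
      return networks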

The same applies to personality files, which can be provided during the initial create request and which have been deprecated from the rebuild API since microversion 2.57 [5]. Since the field is not persisted, we have no way of retrieving them during a rebuild from cell0. For this we have a few alternatives:

  1. Handle personality files the same way as requested networks and persist them in the RequestSpec.

  2. Document this as a limitation of the feature: users who want to use the new rebuild functionality should not use personality files.

  3. Another option would be to track in the instance’s system_metadata whether the instance was created with personality files. Then, during a rebuild from cell0, we could check this and reject the request for instances created with personality files (a sketch of this check is given below).

There is an ongoing discussion on how to handle personality files in the mailing list [6].
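
A minimal sketch of the third alternative, assuming a hypothetical metadata key; neither the key name nor the exception type is an existing Nova name.

  def mark_personality_usage(system_metadata: dict, personality: list) -> None:
      # At create time, remember whether personality files were supplied.
      if personality:
          system_metadata['created_with_personality'] = 'True'

  def reject_cell0_rebuild_if_personality(system_metadata: dict) -> None:
      # At rebuild-from-cell0 time, refuse requests for such instances.
      if system_metadata.get('created_with_personality') == 'True':
          raise ValueError("Rebuild from cell0 is not supported for instances "
                           "created with personality files.")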

Quota Checks

During the normal build flow, there are quota checks at the API level [7] as well as at the conductor level [8]. Consider the scenario where a user has enough RAM quota for a new instance, and as soon as the instance is created it ends up in cell0 because scheduling failed.

There are two distinct cases when checking quota for instances, cores and ram:

  1. Checking quota from Nova DB

    In this case, the instance’s resources, although the instance is in cell0, will be aggregated, since the instance records are still in the DB. There is, though, a small window for a race condition when the instance gets hard deleted.

  2. Checking quota from Placement [9]

    When the instance is in cell0, there are no allocations in Placement for this consumer, meaning that the instance’s resources will not be aggregated during subsequent checks and there is no check at the API level when rebuilding.

Rechecking quota at the conductor level will make sure that the user’s quota is sufficient before proceeding with the build procedure (sketched below).
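
A minimal sketch of such a recheck, assuming simple usage/limit dictionaries; the real check would go through Nova’s quota engine, counting either from the Nova DB or from Placement as discussed above.

  class OverQuota(Exception):
      pass

  def recheck_quota(usage: dict, limits: dict, requested: dict) -> None:
      # Fail if adding the rebuilt instance would exceed any limit.
      for resource, amount in requested.items():
          allowed = limits.get(resource, float('inf'))
          if usage.get(resource, 0) + amount > allowed:
              raise OverQuota(f"{resource} quota exceeded")

  # Example: rebuilding one instance with 4 vCPUs and 8192 MB of RAM.
  recheck_quota(
      usage={'instances': 3, 'cores': 10, 'ram': 20480},
      limits={'instances': 10, 'cores': 20, 'ram': 51200},
      requested={'instances': 1, 'cores': 4, 'ram': 8192},
  )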

Between the initial build and the rebuild (from cell0), port usage might have changed. In this case, and since port quota is not checked when rebuilding from cell0, we might fail late in the compute service while trying to create the port. Although the user will not get a quick failure from the API, this is acceptable because at this point usage is already over the limit and the server would not have booted successfully anyway.

Alternatives

The user could delete the instance that failed and create a new one with the same characteristics but not the same ID. The proposed functionality is a dependency for supporting preemptible instances, where an external service automatically rebuilds the failed server after taking corrective actions. In that feature, maintaining the ID of the instance is of vital importance. This is the main reason why deleting and re-creating the server cannot be considered an acceptable alternative.

Data model impact

Add a requested_networks field to the RequestSpec object that will contain a NetworkRequestList object. Since the RequestSpec is stored as a blob (mediumtext) in the database, no schema modification is needed.
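
Because the spec is persisted as a serialized blob, the new field simply rides along in the existing column. A rough illustration of that idea (the real RequestSpec is an oslo.versionedobjects class, not a plain dict, and the values below are made up):

  import json

  spec_blob = json.dumps({
      'instance_uuid': 'example-uuid',
      'flavor': {'vcpus': 2, 'memory_mb': 4096},
      'requested_networks': [                    # proposed new field
          {'network_id': 'example-net', 'port_id': None},
      ],
  })
  # No schema migration is required: the existing column already stores
  # arbitrary serialized RequestSpec content.
  print(json.loads(spec_blob)['requested_networks'])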

REST API impact

A new API microversion is needed. Rebuilding an instance that is mapped to cell0 will continue to fail for older microversions.
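
A rough sketch of the microversion gate; the version tuple and function name are placeholders, since the actual microversion is only assigned at implementation time.

  def check_cell0_rebuild_version(requested_version: tuple,
                                  instance_in_cell0: bool,
                                  new_version: tuple = (2, 99)) -> None:
      # (2, 99) is a placeholder, not the real microversion.
      if instance_in_cell0 and requested_version < new_version:
          raise ValueError("Rebuilding an instance mapped to cell0 requires "
                           "the new API microversion.")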

Security impact

None.

Notifications impact

None.

Other end user impact

Users will be allowed to rebuild instances that failed due to the lack of resources.

Performance Impact

None.

Other deployer impact

None.

Developer impact

None.

Upgrade impact

None.

Implementation

Assignee(s)

Primary assignee:

<ttsiouts>

Other contributors:

<johnthetubaguy> <strigazi> <belmoreira>

Work Items

See Proposed change.

Dependencies

None.

Testing

In order to verify the validity of the functionality:

  1. New unit tests have to be implemented and existing ones should be adapted.

  2. New functional tests have to be implemented to verify the rebuilding of instances in cell0 and the handling of instance tags, keypairs, trusted_image_certificates etc.

  3. The new tests should take into consideration boot-from-volume (BFV) instances and the handling of BDMs.

Documentation Impact

We should update the documentation to state that rebuild is now allowed for instances that have never booted before because scheduling failed due to lack of resources.

References

Discussed at the Dublin PTG: https://etherpad.openstack.org/p/nova-ptg-rocky (#L459)

History

Revisions

Release Name    Description
------------    -----------
Rocky           Introduced
Stein           Re-proposed
Train           Re-proposed