
Feature #8186

Updated by Peter Amstutz almost 7 years ago

Design sketch. 

Currently node manager only distinguishes between cloud VM sizes. A given VM type with additional storage probably needs to be treated as a distinct "size" within node manager. Currently it uses the cloud size id, so each "size" will need to be given an ID for use by node manager which is distinct from the cloud size id.
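One way to get a distinct internal id is to wrap the cloud size in a node-manager-specific size object. A minimal sketch (the `NodeManagerSize` class and its field names are hypothetical, not the actual node manager types):

```python
# Hypothetical sketch: a node-manager-internal "size" that carries its own id,
# distinct from the cloud provider's size id, so the same cloud instance type
# can appear as several sizes (with and without extra storage).
class NodeManagerSize:
    def __init__(self, nm_id, cloud_size_id, cores, scratch, additional_storage=0):
        self.id = nm_id                       # node manager's own identifier
        self.cloud_size_id = cloud_size_id    # the provider's size id (e.g. "m4.large")
        self.cores = cores
        self.scratch = scratch                # GB of default instance storage
        self.additional_storage = additional_storage  # GB of extra attached storage

    @property
    def total_scratch(self):
        # Total scratch space the node should end up with after boot.
        return self.scratch + self.additional_storage


# Two distinct node manager "sizes" backed by the same cloud instance type:
plain = NodeManagerSize("m4.large", "m4.large", cores=2, scratch=100)
extra = NodeManagerSize("m4.large_extra_storage", "m4.large", cores=2,
                        scratch=100, additional_storage=400)
```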

At least initially, it will probably be easier if the additional sizes are defined in the node manager configuration (instead of generated on the fly). One could, for example, specify a 2 core node with default storage, and a second configuration for the same VM type with an additional 400 GB of storage. For example:

 <pre>
 [Size m4.large]
 cores = 2
 scratch = 100    # default storage

 [Size m4.large_extra_storage]
 instance_type = m4.large
 cores = 2
 scratch = 100    # default storage
 additional_storage = 400    # storage to attach as an EBS device
 </pre>

 Implementation:
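Sections in that shape can be read with Python's stdlib configparser. A sketch under the assumption that the section name doubles as the node manager size id, and that a missing instance_type means the size id is itself the cloud instance type (the `load_sizes` helper is hypothetical):

```python
import configparser

CONFIG = """
[Size m4.large]
cores = 2
scratch = 100

[Size m4.large_extra_storage]
instance_type = m4.large
cores = 2
scratch = 100
additional_storage = 400
"""

def load_sizes(text):
    """Parse [Size ...] sections into a dict keyed by node manager size id."""
    parser = configparser.ConfigParser()
    parser.read_string(text)
    sizes = {}
    for section in parser.sections():
        if not section.startswith("Size "):
            continue
        size_id = section[len("Size "):]
        opts = parser[section]
        sizes[size_id] = {
            # If instance_type is absent, the size id is the cloud instance type.
            "instance_type": opts.get("instance_type", size_id),
            "cores": opts.getint("cores"),
            "scratch": opts.getint("scratch"),
            "additional_storage": opts.getint("additional_storage", fallback=0),
        }
    return sizes

sizes = load_sizes(CONFIG)
```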

 This is configured by passing ex_blockdevicemappings to libcloud create_node(); the mapping format is documented at https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_BlockDeviceMapping.html

 Disks should be VolumeType: 'gp2' (General Purpose SSD), have DeleteOnTermination: true, and specify a VolumeSize that makes up the difference between instance storage (if any) and the required space.
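Putting those two paragraphs together, the mapping could be built like this. The keys follow the AWS BlockDeviceMapping structure linked above; the device name and the `ebs_mapping` helper are illustrative assumptions, and the exact shape libcloud's EC2 driver expects for ex_blockdevicemappings should be checked against its documentation:

```python
def ebs_mapping(required_gb, instance_storage_gb, device="/dev/xvdt"):
    """Build a block device mapping list sized to make up the difference
    between default instance storage and the required scratch space."""
    volume_gb = required_gb - instance_storage_gb
    if volume_gb <= 0:
        return []  # instance storage alone is enough; no EBS volume needed
    return [{
        "DeviceName": device,
        "Ebs": {
            "VolumeType": "gp2",          # General Purpose SSD
            "VolumeSize": volume_gb,      # GB
            "DeleteOnTermination": True,  # clean up when the node is destroyed
        },
    }]

# For the m4.large_extra_storage example: 500 GB required, 100 GB default.
mappings = ebs_mapping(500, 100)
# A libcloud EC2 create_node() call would then pass this along, roughly:
# driver.create_node(name=..., size=..., image=..., ex_blockdevicemappings=mappings)
```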

 The compute node boot scripts will discover both instance default and EBS attached storage devices and combine them into a single logical partition / file system. In the above example, after boot time configuration the resulting node would have a single 500 GB file system for scratch space.
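One way a boot script could merge the devices is LVM. A sketch that only generates the commands such a script might run; the device paths, volume group name, and the LVM approach itself are assumptions, not the actual Arvados boot scripts:

```python
def lvm_commands(devices, vg="scratch_vg", lv="scratch", mountpoint="/tmp"):
    """Generate shell commands to combine several block devices into one
    logical volume and mount it as a single scratch file system (LVM assumed)."""
    cmds = [f"pvcreate {d}" for d in devices]          # mark each device for LVM
    cmds.append(f"vgcreate {vg} {' '.join(devices)}")  # pool them in one volume group
    cmds.append(f"lvcreate -l 100%FREE -n {lv} {vg}")  # one LV spanning all space
    cmds.append(f"mkfs.ext4 /dev/{vg}/{lv}")           # single file system on top
    cmds.append(f"mount /dev/{vg}/{lv} {mountpoint}")
    return cmds

# Instance default storage plus the attached EBS volume from the example:
cmds = lvm_commands(["/dev/xvdb", "/dev/xvdt"])
```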

 When creating the node, the cloud driver will be responsible for attaching the additional disks and then ensuring that they are deleted when the node is deleted. 
