Salt Installer Features » History » Version 15

Lucas Di Pentima, 07/03/2024 09:35 PM

{{>toc}}

h2. Introduction

To be able to plan for a new Arvados deployment tool, we need to list all the features our current "salt installer" supports. In broad terms what we call the "salt installer" consists of the following parts:
h3. The "arvados-formula" salt formula

Hosted at https://github.com/arvados/arvados-formula, this code is a group of "salt":https://saltproject.io states & pillars that take care of installing Arvados packages and setting up the services needed to run a cluster. This repo also contains the "provision script", meant to enable anyone to use the @arvados-formula@ without needing a full-fledged master+minions salt installation. The provision script installs salt in "masterless mode", and it's mostly useful for the single-host use case, where someone needs a complete Arvados cluster running on a single system for testing purposes.
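
For illustration, a single-host run looks roughly like the sketch below. The repository layout, example file name and flags are assumptions based on the current documentation and may differ between releases.

<pre><code class="sh">
# Hypothetical single-host, masterless run of the provision script.
# File names and flags are illustrative; check the repo's docs for the exact
# invocation on your release.
git clone https://github.com/arvados/arvados-formula.git
cd arvados-formula
cp local.params.example local.params    # assumed example file name; set cluster prefix, domain, credentials, etc.
sudo ./provision.sh --debug --config local.params   # installs salt in masterless mode and applies the formula locally
</code></pre>
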
h3. The Terraform code

For multi-host deployments in the cloud (AWS only at the moment), we wrote a set of Terraform files that manage networking, access control, data storage and service node resources, to speed up the initial setup and allow quick modifications once it's deployed. This code outputs a set of useful data that needs to be fed as input to the installer script described below.
h3. The "installer.sh" script

In order to easily use the above in a multi-host (e.g.: production) setting, the installer script takes care of setting up a local git repository that holds the installer files, distributing those files to the hosts that will take part in a deployment, and orchestrating the execution of the provision script on each host, each one with its particular configuration. This script heavily relies on search & replace operations using @sed@ that modify templates which will in turn get applied through salt, so it gets complicated to add features when we need to manage two levels of templating.
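
As a rough illustration of the two templating levels (placeholder style and file names are made up for the example): the first pass is plain @sed@ substitution, and its output is still a Salt/Jinja template that gets rendered again when the state is applied.

<pre><code class="sh">
# Illustrative sketch of the sed-based first templating pass.
# Placeholder style and file names are assumptions, not the exact ones used.
CLUSTER=xarv1
DOMAIN=example.com

# First pass: shell-level search & replace on a template...
sed -e "s#__CLUSTER__#${CLUSTER}#g" \
    -e "s#__DOMAIN__#${DOMAIN}#g" \
    arvados_config.sls.tpl > pillars/arvados_config.sls

# ...whose output is itself a Salt/Jinja template, rendered a second time here.
sudo salt-call --local state.apply arvados
</code></pre>
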
h2. Detailed list of features

Below is the list of functionality that every part of the installer provides. We aim to list everything that will likely need to be implemented in the new version of the tool. The features are listed in the order an operator currently works through them.
h3. Terraform deployment

As suggested in the book "Terraform: Up & Running":https://www.oreilly.com/library/view/terraform-up-and/9781098116736/, the Terraform code is explicitly split into several sections to limit the "blast radius" of a potential mistake. The sections below are applied in the described order to build the complete cloud infrastructure needed to install Arvados.
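
In practice each section is a separate Terraform root module applied with the usual commands, and the final outputs are captured for the installer script. The directory names below are illustrative:

<pre><code class="sh">
# Apply each layer in order; directory names are assumptions based on the
# current source tree layout and may differ in your checkout.
for layer in vpc data-storage services; do
  (cd "$layer" && terraform init && terraform apply)
done

# Capture the outputs that the installer script needs as its inputs.
(cd services && terraform output -json) > terraform-output.json
</code></pre>
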
h4. Networking layer

# Allows the operator to deploy new or use existing network resources, like VPC, security group & subnets.
# Creates an S3 endpoint and route so that keepstore nodes have direct access.
# Sets up Internet and NAT gateways to give nodes outbound network access.
# Sets up the security group that allows communication between nodes in the VPC, and also inbound SSH & HTTP(S) access.
# Manages Route53 domain names from a customizable list of hosts, with an optional split-horizon configuration.
# Creates credentials for Let's Encrypt to be able to work with Route53 from the service nodes.
# Optionally creates Elastic IP resources for user-facing hosts (controller, workbench).
h5. Input parameters

_These are optional if not explicitly stated as required._

* AWS region (required)
* Cluster prefix (required)
* Domain name (required)
* "Private only" flag
* VPC, security group, public and private subnet IDs
* "Use RDS" flag
* RDS additional subnet ID
* List of user facing service node names
* List of internal service node names
* Node name to private IP address map
* DNS alias records to node name map
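
These parameters end up in a @terraform.tfvars@ file (or @-var@ flags) for the networking layer. A minimal sketch follows, with variable names that are assumptions rather than the exact ones used by the code:

<pre><code class="sh">
# Illustrative terraform.tfvars for the networking layer; variable names are
# assumptions and must be checked against the actual Terraform code.
cat > terraform.tfvars <<'EOF'
region_name  = "us-east-1"
cluster_name = "xarv1"
domain_name  = "example.com"

# Leave these unset to create new resources, or fill them in to reuse an
# existing VPC, security group and subnets.
# vpc_id = "vpc-0123456789abcdef0"
# sg_id  = "sg-0123456789abcdef0"

user_facing_hosts      = ["controller", "workbench"]
internal_service_hosts = ["keep0", "shell"]
EOF
terraform apply
</code></pre>
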
h4. Data layer

# Creates the S3 bucket needed for Keep blocks storage.
# Creates keepstore & compute node roles with policies that grant S3 access to the created bucket.
h5. Input parameters

* "Use external DB" flag -- Not really used by anything, but included for completeness' sake.
h4. Service layer

# Optionally creates an RDS instance as the database service, with a sensible set of default values that can be customized.
# Creates an AWS secret to hold the TLS certificate private key's decryption password (for cases where the TLS certificate is provided by the user).
# Creates a policy and instance profiles so that every service node has access to the above secret.
# Creates a policy that gives permissions to compute nodes so that EBS-autoscale filesystems work.
# Creates a policy, role & instance profile so that the dispatcher node can do its work (launching EC2 instances, listing them, etc.)
# Creates the service nodes from the list of host names defined in the networking layer, assigning the public IP addresses to the nodes that need them.
h5. Input parameters

_These are optional if not explicitly stated as required._

* SSH public key file path: so that the installer script can log into the nodes without a password.
* Node name to instance type map
* Node name to volume size map
* "Use RDS" flag
* RDS username & password, instance type, version, allocated and max storage size, backup retention period, backup before deletion and final backup name parameters.
* TLS certificate private key decryption password secret name prefix
* Username for deployment
* Instance AMI
h3. Installer script

The @installer.sh@ script provides a handful of useful features, some of which will be needed in some form in the new tool, as they are not aimed at mitigating salt shortcomings but are necessary in some or all styles of deployment.
# *Selective deployment:* Sometimes doing a quick update on a single node is enough.
# *Deployment ordering:* When doing a full deploy run, some nodes need to be updated before others. The current ordering scheme is:
## Database node
## Controller node(s): to be able to perform rolling updates on balanced controller deployments, it removes the controller node about to be updated from the balancer's pool on each iteration.
## Balancer node (if it exists)
## Everything else
# *Optional use of a jump host:* In some situations, using a reachable jump host is needed for the installer to be able to connect to internal cluster nodes like the database, shell or even keepstore. This will depend on whether the installer is run from the same network as the cluster or from the outside.
# *Secret vs non-secret configuration handling:* Secret config data includes the cluster's default admin account password, database credentials, the dispatcher's private SSH key, etc. These need to be kept separate from the rest of the configuration parameters so that they can be placed in secure storage if needed.
# *General sanity checks:* The installer script does some checks prior to a deploy run, like:
## Node connectivity and SSH access.
## TLS certificate existence when not using Let's Encrypt.
# *Cluster diagnostics test launching:* To confirm everything is working correctly, it runs @arvados-client diagnostics@ from the local host or the shell node.
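
For reference, a typical workflow with the script looks roughly like the sketch below; subcommand names and arguments follow the current documentation and may change:

<pre><code class="sh">
# Illustrative installer.sh workflow; exact arguments follow the Arvados
# install documentation and may change between releases.

./installer.sh initialize ~/arvados-setup-xarv1 ...   # set up the local git repo holding the config (elided args select the template)
./installer.sh deploy                                 # full run, in dependency order
./installer.sh deploy controller.xarv1.example.com    # selective deployment of a single node
./installer.sh diagnostics                            # run arvados-client diagnostics when done
</code></pre>
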
h4. Input parameters

h5. Config parameters

_These have default values if not explicitly stated as required._

* Cluster prefix & domain (required -- should be taken from terraform's output)
* Username for deployment
* Arvados admin's username
* Arvados admin's email (required)
* Use SSH jumphost
* AWS region (required -- should be taken from terraform's output)
* SSL mode
* "Use Let's Encrypt with Route53" flag
* Let's Encrypt AWS region (doesn't seem to be used, we should double check)
* Compute AMI ID (required)
* Compute nodes security group (required -- should be taken from terraform's output)
* Compute nodes subnet ID (required -- should be taken from terraform's output)
* Compute node AWS region
* Compute node username (the one that the dispatcher will use to control the node)
* Keep S3 AWS region
* Keep S3 bucket name
* Keepstore IAM role
* "Is TLS privkey encrypted?" flag
* TLS privkey decryption password secret name
* TLS privkey decryption password secret AWS region
* Prometheus & Grafana UI access user name & email
* Prometheus data retention time
* Node-to-roles mapping
* Arvados services external TLS ports
* Cluster internal CIDR
* Arvados services internal IP addresses
* Arvados database name
* Arvados database user name
* External database service host name or IP address
* Database version
* Controller's max workers
* Controller's request queue size
* Controller's max gateway tunnels
* Arvados release (production/development)
* Arvados version (latest or specific)
h5. Secret parameters (all required)

* Arvados admin's password
* Prometheus & Grafana UI access user's password
* Arvados Blob signing key
* Arvados management token
* Arvados system root token
* Arvados anonymous user token
* Database password
* Let's Encrypt access key ID & secret
* Arvados dispatcher's SSH private key
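
Both groups end up in shell-syntax files that the installer sources: @local.params@ for regular settings and a separate secrets file. A minimal sketch, with variable names that are illustrative rather than exact:

<pre><code class="sh">
# local.params -- non-secret settings (variable names are illustrative).
CLUSTER=xarv1
DOMAIN=example.com
INITIAL_USER=admin
INITIAL_USER_EMAIL=admin@example.com
AWS_REGION=us-east-1
USE_SSH_JUMPHOST=jump.xarv1.example.com

# local.params.secrets -- kept in a separate file so it can be placed in a
# secure store; again, names are illustrative.
INITIAL_USER_PASSWORD=changeme
DATABASE_PASSWORD=changeme
BLOB_SIGNING_KEY=randomly-generated-string
SYSTEM_ROOT_TOKEN=randomly-generated-string
</code></pre>
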
h3. Salt installer

Terraform's output data (VPC and subnet IDs, various credentials, Route53 domain name servers, etc.) gets used by the installer and provision scripts to install & configure the necessary software on each host.
h4. TLS certificate encrypted private key handling

On nodes with services accepting requests through nginx as a TLS proxy, if the TLS certificate private key is encrypted with a password, the installer deploys a series of scripts that read the configured AWS secret and feed a named pipe file inside @/run/arvados/@ with its contents, so that nginx can read the password at startup time.
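
The mechanism is roughly equivalent to the sketch below (paths and secret name are illustrative): a FIFO under @/run/arvados/@ is kept fed with the password fetched from AWS Secrets Manager, and nginx reads it through its @ssl_password_file@ directive.

<pre><code class="sh">
# Rough sketch of the named-pipe approach; the pipe path and secret id are
# illustrative, not the exact ones the installer uses.
PIPE=/run/arvados/certificate_password
SECRET_ID=xarv1-tls-privkey-password

mkdir -p /run/arvados
[ -p "$PIPE" ] || mkfifo -m 0600 "$PIPE"

# Each time nginx opens the pipe (e.g. at startup or reload), feed it the
# password fetched from AWS Secrets Manager.
while true; do
  aws secretsmanager get-secret-value \
      --secret-id "$SECRET_ID" \
      --query SecretString --output text > "$PIPE"
done
</code></pre>
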
h4. Service node roles

There's a "node-to-roles" mapping declared as part of the provision script's configuration; each role is described below:
h5. 'database' role

Can be overridden to use an external database service (like AWS RDS).

* Installs a PostgreSQL database server.
* Configures the PG user & database for Arvados, enabling the @pg_trgm@ extension (sketched below).
* Configures PG server ACLs to allow access from localhost and the websocket, keepbalance and controller nodes.
* Installs the Prometheus node and PG exporters.
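
The user & database setup this role automates boils down to something like the following; the names shown are the configurable defaults used elsewhere in this document:

<pre><code class="sh">
# Roughly what the 'database' role does through Salt states; database and
# user names are the configurable defaults, shown here for illustration.
sudo -u postgres createuser --encrypted -R -S --pwprompt arvados
sudo -u postgres createdb arvados_production -T template0 -E UTF8 -O arvados
sudo -u postgres psql -d arvados_production -c 'CREATE EXTENSION IF NOT EXISTS pg_trgm;'
</code></pre>
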
h5. 'controller' role

* Installs @nginx@, @passenger@ and the PG client libraries.
** If in "balanced mode", only sets up HTTP nginx, as the balancer will act as the TLS termination proxy.
* From the @arvados.controller@ & @arvados.api@ formula states:
** Installs rvm if required -- this won't be necessary anymore as we'll be using the distro's provided ruby packages.
** Installs @arvados-api-server@ & @arvados-controller@
** Runs the services and waits up to 2 minutes for the controller service to answer requests, so that Arvados resource creation works in future stages (see the sketch after this list).
* If using an external database service, it makes sure the @pg_trgm@ extension is enabled.
* Sets up @logrotate@ to rotate the RailsAPI's logs daily, keeping the last year of logs. This is needed because these files are not inside @/var/log/@.
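
The readiness wait is conceptually a retry loop like this sketch (endpoint, hostname and timing are illustrative):

<pre><code class="sh">
# Illustrative readiness check: poll the controller for up to ~2 minutes
# before later stages try to create Arvados resources.
for i in $(seq 1 24); do
  if curl -ksf https://controller.xarv1.example.com/arvados/v1/config >/dev/null; then
    echo "controller is answering requests"
    break
  fi
  sleep 5
done
</code></pre>
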
h5. 'monitoring' role

* Installs & configures Nginx, Prometheus, Node exporter, Blackbox exporter and Grafana.
* Nginx configuration details:
** Sets up basic authentication for the Prometheus website, as it doesn't seem to provide its own access controls (see the sketch after this list).
** Sets up custom TLS certs or installs Let's Encrypt to manage them, depending on configuration.
* Prometheus configuration details:
** Sets a configurable data retention period.
** Correctly configures multiple controller nodes in balanced configurations.
* Grafana configuration details:
** Sets up the admin user & password with @grafana-cli@
** Installs custom dashboards
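
Two of the smaller steps above, the Prometheus basic-auth protection and the Grafana admin setup, are roughly equivalent to the following (user name, path and password are illustrative):

<pre><code class="sh">
# Illustrative versions of two steps the 'monitoring' role automates.

# Basic auth in front of the Prometheus virtualhost; nginx points its
# auth_basic_user_file directive at this file.
htpasswd -c /etc/nginx/htpasswd monitoring_user

# Grafana admin password, set through grafana-cli.
grafana-cli admin reset-admin-password 'the-configured-password'
</code></pre>
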
h5. 'balancer' role

* Installs Nginx with a round-robin balanced upstream configuration.
* Sets up custom TLS certs or installs Let's Encrypt to manage them, depending on configuration.
h5. 'workbench/workbench2' role

* From @arvados.workbench2@ formula state
** Installs @arvados-workbench2@ package
* Installs & configures nginx
* Sets up custom TLS certs or installs Let's Encrypt to manage them, depending on configuration.
* Uninstalls workbench1 -- this might not be needed in future versions.
h5. 'webshell' role

* Installs an nginx virtualhost that uses the shell node's @shellinabox@ service as the upstream.
* Sets up custom TLS certs or installs Let's Encrypt to manage them, depending on configuration.
h5. 'keepproxy' role

* From @arvados.keepproxy@ formula state
** Installs @arvados-keepproxy@ and runs the service
* Installs & configures nginx
** Sets up custom TLS certs or installs Let's Encrypt to manage them, depending on configuration.
h5. 'keepweb' role

* From @arvados.keepweb@ formula state
** Installs @keep-web@ and runs the service
* Installs & configures nginx
** Sets up nginx's "download" and "collections" virtualhosts
** Sets up custom TLS certs or installs Let's Encrypt to manage them, depending on configuration.
h5. 'websocket' role

* From @arvados.websocket@ formula state
** Installs @arvados-ws@ and runs the service
* Installs & configures nginx
** Sets up custom TLS certs or installs Let's Encrypt to manage them, depending on configuration.
h5. 'dispatcher' role

* From @arvados.dispatcher@ formula state
** Installs @arvados-dispatch-cloud@ and runs the service
h5. 'keepbalance' role

* From @arvados.keepbalance@ formula state
** Installs the @keep-balance@ package and runs the service
h5. 'keepstore' role

* From @arvados.keepstore@ formula state
** Installs @keepstore@ and runs the service
h5. 'shell' role

* Installs @docker@
* Installs @sudo@ and configures it to allow password-less access to "sudo" group members.
* From @arvados.shell@ formula state
** Installs @jq@, @arvados-login-sync@, @arvados-client@, @arvados-src@, @libpam-arvados-go@, @python3-arvados-fuse@, @python3-arvados-python-client@, @python3-arvados-cwl-runner@, @python3-crunchstat-summary@ and @shellinabox@
** Installs gems: @arvados-cli@, @arvados-login-sync@
** Creates a Virtual Machine record for the shell node and sets a scoped 'login' token for it.
* Queries the API server for the created virtual machine record matching its hostname, and configures cron to run @arvados-login-sync@ with the necessary credentials.
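
The resulting cron job is roughly of the following shape (schedule, paths and placeholder values are illustrative; the real entry is generated by the Salt states with the scoped token created above):

<pre><code class="sh">
# Illustrative /etc/cron.d entry for the shell node, written as root.
cat > /etc/cron.d/arvados-login-sync <<'EOF'
SHELL=/bin/bash
ARVADOS_API_HOST=xarv1.example.com
ARVADOS_API_TOKEN=<scoped 'login' token for this VM record>
ARVADOS_VIRTUAL_MACHINE_UUID=<uuid of the shell node's VM record>
*/2 * * * * root arvados-login-sync
EOF
</code></pre>
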
h5. Default role mapping

By default the installer deploys a 4-node cluster, with only 2 of the nodes needing public IP addresses (in the case of a publicly accessible cluster):

* Controller node: @database@ & @controller@ roles
* Workbench node: @monitoring@, @workbench@, @workbench2@, @webshell@, @keepproxy@, @keepweb@, @websocket@, @dispatcher@ and @keepbalance@ roles
* Keep0 node: @keepstore@ role
* Shell node: @shell@ role