
Infrastructure deployments and configurations in the cloud are easily managed programmatically thanks to infrastructure as code (IaC). There are several tools that enable these capabilities, and many of them are platform-agnostic, but Terraform is a popular choice for organizations and engineers building their IaC pipelines. This article will highlight the NetApp Cloud Manager Terraform provider and showcase how simple it is to deploy the NetApp Cloud Manager Connector and Cloud Volumes ONTAP with it.

NetApp Cloud Manager Terraform provider overview

The netapp-cloudmanager provider was developed by NetApp to create and manage Cloud Volumes ONTAP in AWS, Azure, and Google Cloud. Before the release of this provider, you had to rely on REST API calls for automation. With the Cloud Manager Terraform provider, you now have a fully supported native module for your IaC. The provider is used to deploy and configure every component of a NetApp Cloud Volumes ONTAP (CVO) working environment, including connectors, aggregates, volumes, CIFS servers, Active Directory (AD) setup, and both single-node and high-availability (HA) environments. Using the variables exposed by the provider, you can also manage the credentials, licensing, and network configuration for the working environment.
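For example, once a working environment exists, individual volumes can also be declared through the provider. The sketch below is illustrative only: the volume name, export policy values, working environment name, and client ID are placeholders, and the exact argument names should be confirmed against the provider's registry documentation for your version.

# Illustrative sketch only: declares an NFS volume in an existing CVO working environment.
# All names, IDs, and export policy values are placeholders.
resource "netapp-cloudmanager_volume" "demo-volume" {
  provider                  = netapp-cloudmanager
  name                      = "demo_vol1"
  size                      = 10
  size_unit                 = "GB"
  volume_protocol           = "nfs"
  export_policy_type        = "custom"
  export_policy_ip          = ["10.0.0.0/16"]
  export_policy_nfs_version = ["nfs4"]
  working_environment_name  = "cvoworkingenvironment"
  client_id                 = "xxxxx"
}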

HashiCorp's Terraform allows you to build repeatable and consistent infrastructure on premises or in the public cloud using IaC. Because the NetApp Cloud Manager Terraform provider is a native provider available in the Terraform Registry, the overhead of initial setup and configuration is reduced. Even if you aren't running a Terraform Enterprise server, you can deploy Terraform on a basic Linux box and still use it to manage your CVO deployments.

Below we will focus on CVO deployments in Google Cloud, and we will use the provider to set up working environments in NetApp Cloud Manager. The same approach works with AWS and Azure as well, with slight changes to the required variables in your configuration files to match each cloud platform.

Requirements

Here is what you'll need to have set up before you get started:

  • Terraform installed on a server inside of Google Cloud or with access to your Google Cloud VPC
  • An authenticated user inside of your Google Cloud project where you will deploy
  • An account registered with NetApp Cloud Central
  • Account ID from the NetApp Cloud Manager portal
  • A NetApp Cloud Central token 
  • A Google Cloud service account email and credentials/key
  • A Google Cloud project ID and zone information for the deployment

If you have not established the necessary NetApp Cloud Manager credentials inside Google Cloud, you can find the policies here.

You will also need a few Terraform configuration files on your server where the deployment is performed:

  • variables.tf - The variables for the deployment are defined here.
  • terraform.tfvars - This assigns values to the variables in the variables.tf file.
  • main.tf - This is the main Terraform module.
  • connector.tf (optional) - This file is for creating the NetApp Cloud Connector. (In this article we will assume you already have the connector running, so the file is not required.)

Although the connector.tf file is optional, a NetApp Cloud Connector is a requirement for any CVO deployment. The connector is what allows NetApp Cloud Manager to communicate with your Google Cloud project.
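If you do need to create the connector as part of the same deployment, a connector.tf along the lines of the sketch below can be used. This is a rough outline only: the company name, service account details, and network values are placeholders, and the full argument list for the netapp-cloudmanager_connector_gcp resource is documented in the Terraform Registry.

# Rough sketch of a connector.tf; all values shown are placeholders.
resource "netapp-cloudmanager_connector_gcp" "cm-connector" {
  provider              = netapp-cloudmanager
  name                  = "cm-connector-gcp"
  project_id            = "my-gcp-project"
  zone                  = "us-east4-b"
  company               = "MyCompany"
  service_account_email = "xxxx@iam.gserviceaccount.com"
  service_account_path  = "/path/to/service-account-key.json"
  subnet_id             = "atc-routable-subnet"
  account_id            = "account-xxxxx"
}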

Below you will find sample content for the configuration files mentioned above:

  • variables.tf
variable "token" {
}
variable "subnetid" {
}
variable "region" {
}
variable "account" {
}
  • terraform.tfvars
token= "longstringofuniquecharacters"
subnetid= "atc-routable-subnet"
region= "us-east4"
account = "account-xxxxx"
  • main.tf
terraform {
  required_providers {
    netapp-cloudmanager = {
      source  = "NetApp/netapp-cloudmanager"
      version = "22.2.2"
    }
  }
}
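These files come together in the provider configuration block, which consumes the NetApp Cloud Central token defined above. A minimal sketch, assuming the netapp-cloudmanager provider's refresh_token argument, would look like this:

# Configure the netapp-cloudmanager provider with the Cloud Central refresh token
# defined in variables.tf and supplied through terraform.tfvars.
provider "netapp-cloudmanager" {
  refresh_token = var.token
}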

Deploy a single-node CVO working environment

Now that everything is configured and all of the requirements are in place, this section walks you through a single-node deployment of CVO, highlighting the variables needed for a successful deployment. The same configuration files can then be reused, making additional CVO working environments seamless and automated.

  • From your Terraform Enterprise server, or wherever you have Terraform installed, you must first initialize all of the plugins. This is easily accomplished with the command terraform init 
  • Once initialized, create a new configuration file, single-node.tf, which will be used to deploy CVO based on the configuration variables for a single-node working environment. A minimal sketch of this file appears after these steps.
  • With the single-node.tf file saved, you can run the Terraform plan and apply commands to build the environment.
    • terraform plan -out single-node
    • terraform apply "single-node"
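For reference, a minimal single-node.tf might look like the sketch below. The values are placeholders that mirror the plan output that follows, and additional arguments such as vpc_id or gcp_label blocks can be added as needed.

# Minimal sketch of single-node.tf; all IDs, names, and the password are placeholders.
resource "netapp-cloudmanager_cvo_gcp" "cl-cvo-gcp" {
  provider              = netapp-cloudmanager
  name                  = "builderssinglenode"
  project_id            = "xxxxx"
  zone                  = "us-east4-b"
  subnet_id             = var.subnetid
  gcp_service_account   = "xxxx@iam.gserviceaccount.com"
  svm_password          = "placeholder-password"
  license_type          = "capacity-paygo"
  capacity_package_name = "Essential"
  instance_type         = "n1-standard-8"
  workspace_id          = "xxxxx"
  client_id             = "xxxxx"
}
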
derek_elbert@gcp-tf:~/terraform-demo$ sudo terraform plan -out single-node

Terraform used the selected providers to generate the following execution plan. 
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # netapp-cloudmanager_cvo_gcp.cl-cvo-gcp will be created
  + resource "netapp-cloudmanager_cvo_gcp" "cl-cvo-gcp" {
      + capacity_package_name = "Essential"
      + capacity_tier         = "cloudStorage"
      + client_id             = "xxxxx"
      + data_encryption_type  = "GCP"
      + gcp_service_account   = "@iam.gserviceaccount.com"
      + gcp_volume_size       = 1
      + gcp_volume_size_unit  = "TB"
      + gcp_volume_type       = "pd-ssd"
      + id                    = (known after apply)
      + instance_type         = "n1-standard-8"
      + is_ha                 = false
      + license_type          = "capacity-paygo"
      + name                  = "builderssinglenode"
      + ontap_version         = "latest"
      + project_id            = "xxxxx"
      + subnet_id             = "vpc-subnet"
      + svm_name              = (known after apply)
      + svm_password          = (sensitive value)
      + tier_level            = "standard"
      + upgrade_ontap_version = false
      + use_latest_version    = true
      + vpc_id                = "vpc-ID"
      + workspace_id          = "xxxxx"
      + zone                  = "us-east4-b"

      + gcp_label {
          + label_key   = "atc"
          + label_value = "builders"
        }
    }

Plan: 1 to add, 0 to change, 0 to destroy.

derek_elbert@gcp-tf:~/terraform-demo$ sudo terraform apply "single-node"
netapp-cloudmanager_cvo_gcp.cl-cvo-gcp: Creating...

As you can see from the last snippet of output, the single-node working environment is deploying inside our NetApp Cloud Manager environment. We will now show you the subtle differences in the configuration if you are deploying a CVO high availability (HA) working environment.

Deploy a CVO HA working environment

The steps for deploying an HA CVO working environment are the same as the steps for a single-node deployment, except for the configuration file. The snippet below shows the variables needed to deploy an HA environment successfully. If you are deploying multiple configurations (both single-node and HA) from the same working directory, you may also want to use Terraform provider aliases to keep the two configurations separate; a short sketch follows the plan output below. The plan output for your CVO HA working environment should include the following variables from your configuration file.

Terraform used the selected providers to generate the following execution plan. 
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # netapp-cloudmanager_cvo_gcp.cl-cvo-gcp will be created
  + resource "netapp-cloudmanager_cvo_gcp" "cl-cvo-gcp" {
      + capacity_package_name = "Essential"
      + capacity_tier         = "cloudStorage"
      + client_id             = "xxxxx"
      + data_encryption_type  = "GCP"
      + gcp_service_account   = "@iam.gserviceaccount.com"
      + gcp_volume_size       = 1
      + gcp_volume_size_unit  = "TB"
      + gcp_volume_type       = "pd-ssd"
      + id                    = (known after apply)
      + instance_type         = "n1-standard-8"
      + is_ha                 = true
      + license_type          = "capacity-paygo"
      + name                  = "buildershighavailability"
      + ontap_version         = "latest"
      + project_id            = "xxxxx"
      + subnet_id             = "vpc-subnet"
      + svm_name              = (known after apply)
      + svm_password          = (sensitive value)
      + tier_level            = "standard"
      + upgrade_ontap_version = false
      + use_latest_version    = true
      + vpc_id                = "vpc-ID"
      + workspace_id          = "xxxxx"
      + zone                  = "us-east4-b"
      + node1_zone                         = "us-east4-b"
      + node2_zone                         = "us-east4-c"
      + mediator_zone                      = "us-east4-a"
      + vpc0_node_and_data_connectivity    = "vpcname"
      + vpc1_cluster_connectivity          = "vpcname"
      + vpc2_ha_connectivity               = "vpcname"
      + vpc3_data_replication              = "vpcname"
      + subnet0_node_and_data_connectivity = "vpc-subnet-name"
      + subnet1_cluster_connectivity       = "vpc-subnet-name"
      + subnet2_ha_connectivity            = "vpc-subnet-name"
      + subnet3_data_replication           = "vpc-subnet-name"

      + gcp_label {
          + label_key   = "atc"
          + label_value = "buildersHA"
        }
    }
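If the single-node and HA configurations live in the same working directory, a provider alias can keep the two resource blocks clearly separated. The following is a minimal sketch; the alias name is arbitrary and the HA arguments are abbreviated.

# Second provider configuration dedicated to the HA deployment.
provider "netapp-cloudmanager" {
  alias         = "ha"
  refresh_token = var.token
}

resource "netapp-cloudmanager_cvo_gcp" "cl-cvo-gcp-ha" {
  provider = netapp-cloudmanager.ha
  name     = "buildershighavailability"
  is_ha    = true
  # ...remaining HA arguments as shown in the plan output above...
}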

Creating SnapMirror relationships

The last component of the NetApp Cloud Manager Terraform provider we will discuss is the netapp-cloudmanager_snapmirror resource. As part of your deployments, especially in a hybrid cloud, you may want to replicate volumes from an on-premises ONTAP system to a CVO working environment. This can easily be part of your IaC workflow because SnapMirror support is built into the provider. The example below shows what your snapmirror.tf could look like should you choose to include this resource as part of the deployments described above.

resource "netapp-cloudmanager_snapmirror" "cl-snapmirror" {
  provider = netapp-cloudmanager
  source_working_environment_id = "xxxxxxxx"
  destination_working_environment_id = "xxxxxxxx"
  source_volume_name = "source"
  source_svm_name = "svm_source"
  destination_volume_name = "source_copy"
  destination_svm_name = "svm_dest"
  policy = "MirrorAllSnapshots"
  schedule = "5min"
  destination_aggregate_name = "aggr1"
  max_transfer_rate = "102400"
  client_id = "xxxxxxxxxxx"
}

Summary

Just like your on-premises NetApp ONTAP systems, NetApp CVO has many built-in automation capabilities that allow you to perform deployments using Terraform. If you are looking to use NetApp CVO as part of your data management strategy in the cloud, make sure to include the NetApp Cloud Manager Terraform provider in your IaC or DevOps toolchain. You can gain the same level of agility and flexibility with CVO that you have with many of your applications today. If you are not comfortable implementing these Terraform configurations in your existing environment, the automation and cloud data management experts at WWT are available for direct hands-on assistance.
