In this ATC Insight

Summary

Cisco Intersight is a Software as a Service (SaaS) management platform that provides a centralized way of deploying and managing Cisco UCS and Cisco HyperFlex systems and aims to significantly simplify operations and reduce operating costs. 

The Intersight platform offers flexible deployment options: it can be consumed as SaaS on Intersight.com or run on-premises as the Cisco Intersight Virtual Appliance.

During an exploratory deployment lab, WWT Technical Solutions Architect II Jeff Gargac deployed a Cisco HyperFlex cluster from a web browser using the Cisco Intersight SaaS platform. In this ATC Insight, Jeff provides a detailed walk-through of the interface, with comments on notable discoveries made during the lab.

For this effort, Jeff followed the Cisco HyperFlex Installation Guide for Intersight, which is attached under Documentation for reference.

ATC Insight

The Cisco HyperFlex (HX) solution can be deployed with the on-premises HyperFlex Installer, but that approach requires configuring both the management and storage network IP addresses on the ESXi hosts and the HyperFlex controller VMs (CVMs).

The Cisco Intersight platform offers another way to deploy the HyperFlex solution, and it promises a more automated process. You only need to configure the management network IP addresses; Intersight automatically configures link-local IP addresses (169.254.0.0/16) for the ESXi and HyperFlex storage networks.
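As a rough illustration of what that means in practice, here is a minimal sketch in Python (standard library only, placeholder address) showing the link-local range Intersight draws from for the storage and data networks:

  import ipaddress

  # The HX storage/data networks land in the IPv4 link-local range, so no
  # routable addressing plan is needed for them.
  link_local = ipaddress.ip_network("169.254.0.0/16")
  print(link_local.num_addresses)                            # 65536 addresses available
  print(ipaddress.ip_address("169.254.1.10") in link_local)  # True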

To explore how Intersight deploys HyperFlex from a web browser, we created a lab within our ATC environment and followed Cisco's HyperFlex Installation Guide for Intersight. Below is a visual account of the steps taken, including notable prompts and key observations made along the way.

Deployment Prerequisites

Before initiating the HyperFlex deployment process, configure the KVM IP address pool and create host records in DNS.
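A quick pre-flight check that those DNS records resolve can prevent a failed validation later. The sketch below uses only the Python standard library; the hostnames are hypothetical placeholders for the records you created:

  import socket

  # Hypothetical ESXi hostnames; replace with the DNS records created for the cluster.
  planned_hosts = [
      "hx-esxi-1.example.com",
      "hx-esxi-2.example.com",
      "hx-esxi-3.example.com",
  ]
  for host in planned_hosts:
      try:
          print(host, "->", socket.gethostbyname(host))
      except socket.gaierror:
          print(host, "-> no DNS record found")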

Deployment Procedure

Log into UCS Manager, click the network icon in the blue navigation bar, right-click IP Pool ext-mgmt, and then left-click Create Block of IPv4 Addresses.

 

A screen will appear with fields for the starting IP address, the size of the pool, the subnet mask, the gateway, and DNS. Each node in the cluster needs a KVM IP address.
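As a sketch of how the pool sizing works out (the addresses below are placeholders, not the lab values), the block runs from the starting IP for as many addresses as there are nodes:

  import ipaddress

  # Placeholder values; size the pool with at least one address per HX node.
  start = ipaddress.ip_address("10.1.100.50")   # "From" address entered in UCS Manager
  pool_size = 4
  block = [start + i for i in range(pool_size)]
  print(block[0], "-", block[-1])               # 10.1.100.50 - 10.1.100.53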

 

After the KVM pool has been created, Intersight will handle the rest of the installation. Browse to https://intersight.com and log into Intersight.

 

 

On the left navigation bar, click Profiles, select HyperFlex Cluster Profiles, and then click Create HyperFlex Cluster Profile.

 

This starts the cluster configuration process. 

Cluster Configuration Process

Select the Organization, Cluster Name, HyperFlex version, Type, Replication Factor, and Server Firmware version. 

Note: The replication factor cannot be changed once the cluster is created. 

After clicking Next, the settings are saved in Intersight.
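For reference, the settings captured at this step map roughly to the following; these are placeholder values for illustration, not an actual Intersight API payload:

  # Placeholder values only, not an Intersight API payload.
  cluster_settings = {
      "organization": "default",
      "cluster_name": "hx-atc-lab",
      "hx_version": "4.0(2a)",       # example HyperFlex Data Platform version
      "type": "HyperFlex with Fabric Interconnect",
      "replication_factor": 3,       # cannot be changed after the cluster is created
      "server_firmware": "4.0(4g)",  # example UCS server firmware bundle
  }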

 

At any point from now until the policies are applied, the installation can be paused by clicking Close and then resumed later without having to start over from scratch.

Cluster Configuration Policies

Now it's time to create the cluster configuration policies.  To make identification easier, policy names begin with the cluster name that was specified in the previous step. 

The first policy is the local credential policy, which specifies the passwords for ESXi and the HX controller VMs. Since our cluster came straight from the factory, we checked the factory default password box, as the preinstalled ESXi image uses the Cisco default passwords.

New passwords must be specified for both ESXi and HX. Click the "+" to the left of the next section in the list to proceed.

 

The sys-config-policy specifies Timezone, DNS suffix, DNS servers, and NTP servers.

 

The vcenter-config-policy specifies the vCenter server, its credentials, and the vSphere datacenter in which the HX cluster will be created.

 

The cluster-storage-policy is optional. It allows the cluster to be configured for VDI and Logical Availability Zones if needed.

In this case, Clean up Disk Partitions was selected. 

Note: This will destroy any data on the disks that HX uses for storage (we can do this since there is no data to be kept).

 

The auto-support-policy is also optional. It configures the email address to be used for Service Tickets.

 

The node-config-policy configures the ESXi hostname prefix and the network settings for ESXi and HX controller VMs. 

Note: Hostnames generated from the prefix might not match an enterprise naming scheme; the desired hostnames can be specified manually later.
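To illustrate how the prefix expands (hypothetical prefix and node count), the generated ESXi hostnames look roughly like this, and they can still be overridden in Nodes Configuration later:

  prefix = "hx-atc"      # hypothetical hostname prefix
  node_count = 4
  hostnames = [f"{prefix}-{n}" for n in range(1, node_count + 1)]
  print(hostnames)       # ['hx-atc-1', 'hx-atc-2', 'hx-atc-3', 'hx-atc-4']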

 

The cluster-network-policy configures individual VLANs, the KVM IP addresses, and the MAC address pool. The KVM IP range is the same range that was created earlier in UCS Manager. Jumbo frames were also enabled.
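Summarized as plain data (all values below are placeholders rather than the lab's actual settings), the inputs to this policy look something like:

  # Placeholder values; 00:25:B5 is the MAC prefix commonly used for Cisco UCS pools.
  network_policy = {
      "vlans": {"hx-inband-mgmt": 110, "hx-vm-network": 120},  # name: VLAN ID
      "kvm_ip_range": ("10.1.100.50", "10.1.100.53"),          # block created in UCS Manager
      "mac_prefix": "00:25:B5:AA",
      "jumbo_frames": True,                                    # MTU 9000 on the data path
  }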

 

The optional ext-fc-storage-policy configures external Fibre Channel storage. This section was left blank.

 

The optional ext-iscsi-storage-policy configures external iSCSI storage. This section was also left blank.

 

The optional proxy-settings-policy configures proxy settings to access the internet.

The final section is for the HyperFlex Storage Network. This section doesn't create a reusable policy because the storage network should be unique for each cluster. Like the VLANs that were specified earlier, a VLAN Name and ID must be specified. 

Click Next to proceed.

 

Nodes Assignment 

In Nodes Assignment, the HyperFlex nodes are assigned. Select the nodes to be added to the HX cluster and click Next.

 

Nodes Configuration 

In Nodes Configuration, the IP & Hostname settings are populated from the earlier settings; however, they can be configured manually if desired. This section is also where hostnames can be adjusted if the hostname prefix naming scheme didn't produce the desired names.

Click Next to proceed.

 

Summary and Results

The Summary page allows all the policies and settings to be reviewed on one page. There are two buttons at the bottom of the page, Validate and Validate & Deploy: Validate verifies the settings so the profile can be deployed later, while Validate & Deploy verifies the settings and immediately starts the cluster deployment.

 

We chose Validate & Deploy. Warnings were found during validation. Click the "+" next to the section to review.

 

The warning lets us know that both Fabric Interconnects will be rebooted.

Click Continue.

 

Intersight starts the configuration of the cluster. Click the "+" next to each section to expand it.

 

Notice that the Progress, Current Stage, and section descriptions will update with new information during the installation.

 

Our installation is almost complete and ready to go. Notice that the progress shows 100%, but there's still a task "In Progress." 

Once that finishes, click HyperFlex Clusters on the navigation bar on the left.

 

The newly deployed cluster shows as healthy and can be accessed by clicking the three dots and selecting Launch HyperFlex Connect.

It's interesting to see that Intersight allows policies from previous installations to be reused by clicking Select Policy in each section and choosing a policy. 

 

Conclusion

Cisco Intersight offers a simpler, more automated way of deploying the HyperFlex solution. In addition, reusing existing policies can simplify policy management going forward, since settings are often the same across multiple clusters. Lastly, clusters can be provisioned in advance in Intersight and then assigned to UCS hardware later. Intersight offers flexibility that isn't available in the HyperFlex Installer.

Expectations

Our expectations going into the Cisco HyperFlex deployment from Intersight included the following:

  1. The physical equipment had already been cabled correctly and the UCS domain had been created.
  2. The actual HyperFlex installation would take 2-3 hours from start to finish.
  3. Cisco documentation would be used as needed.

 
