Let me start out by saying, I'm not a NAS admin nor do I play one on TV.  All kidding aside, I'm going to give you an overview of what's new and share my impressions of the file enhancements added in version 6.3 for the FlashArray line (//X, //XL, //C) of Unified Storage products from Pure Storage.

WWT was an early beta tester for Pure Storage back in 2020, prior to the 6.0 release of the Purity//FA operating system.  Our team of subject matter experts worked closely with Pure engineering to share feedback on both the setup process and the file capabilities of the upcoming release, using our Advanced Technology Center (ATC) as a playground for the FlashArray to live in, along with testing tools that simulated user file access by creating, modifying, and deleting hundreds of thousands of files and folders.

While we felt that adding native file services to the FlashArray starting in v6.0 was needed to keep it in the same category as competing Unified Storage systems, it was missing a few features that enterprise customers relied on.  One was multi-tenancy support via different directory services, but the bigger missing feature was native file replication.  Prior to the 6.3 release, file replication was accomplished through a partnership with Komprise.  While Komprise's solution provides more than just file replication (file analytics, archiving to low-cost S3 cloud storage, and multi-site replication options for deployments), the lack of native file replication was a deal breaker for many of our customers and excluded Pure's FlashArray from their further NAS testing.

I'm happy to say that with the release of Purity//FA 6.3, native file replication is here for the FlashArray line of products.  As an Elite partner, we were invited to test out version 6.3 in our ATC on a couple of FlashArrays prior to the release.  We tested both the beta code and the final release code by creating file systems and exports for both SMB and NFS shares, filling them with real-world genomics data sets, and creating synthetic file workloads ranging in size and quantity to see how Purity handled the protection and replication of that data between FlashArrays.  True to Pure's design philosophy of keeping their solutions simple to set up, configure, and manage, creating a file system on a FlashArray, configuring the exports (shares), and creating a replica link between pods on two different FlashArrays was a breeze and took no time.  Be sure to check out the Purity//FA Files video series where I walk through just how easy it is to set up and go through some day 2 operations around self-service restores, quotas, and DR testing.
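
If you'd rather script that setup than click through the GUI, here's a minimal Python sketch of what it can look like against the FlashArray REST 2.x API.  To be clear, this is my own illustration, not Pure's documented procedure: the array address, API token, pod, file system, and policy names are all hypothetical, and the endpoint and parameter spellings should be verified against the REST reference for your Purity version.

```python
import requests

ARRAY = "https://flasharray1.example.com"   # mgmt address (hypothetical)

# REST 2.x login flow: trade an API token for a session token.
# verify=False is for lab arrays with self-signed certs only.
login = requests.post(f"{ARRAY}/api/2.4/login",
                      headers={"api-token": "YOUR-API-TOKEN"}, verify=False)
hdrs = {"x-auth-token": login.headers["x-auth-token"]}

# Create a file system inside the pod that will be replicated
# ("::" is the pod namespace separator, as with volumes).
requests.post(f"{ARRAY}/api/2.4/file-systems",
              params={"names": "prod-pod::fs01"}, headers=hdrs, verify=False)

# Create a managed directory in that file system, then export it over SMB
# using an existing export policy.  Parameter spellings here are my
# assumptions from the REST 2.x docs; check them for your Purity version.
requests.post(f"{ARRAY}/api/2.4/directories",
              params={"file_system_names": "prod-pod::fs01",
                      "names": "prod-pod::fs01:shares"},
              json={"path": "/shares"}, headers=hdrs, verify=False)
requests.post(f"{ARRAY}/api/2.4/directory-exports",
              params={"directory_names": "prod-pod::fs01:shares",
                      "export_names": "shares",
                      "policy_names": "smb-policy"},
              headers=hdrs, verify=False)
```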

Under the covers, file replication on a FlashArray uses the ActiveDR technology that was introduced in Purity//FA 6.0.

ActiveDR

ActiveDR provides near-synchronous replication of volumes (via volume snapshots) living within a Pod on one FlashArray to a Pod on a second FlashArray.  Unlike asynchronous replication, which runs at configured intervals, ActiveDR takes a best-effort approach, replicating the data as fast as it can without impacting I/O operations on the controllers.  The ActiveDR setup is similar to setting up ActiveCluster, Pure's technology for synchronous replication between two FlashArrays.  Both technologies make use of the Pods feature: you create a Pod on both the source and target FlashArrays, then create new (or import existing) volumes or filesystems in the source-side Pod.  The next step is connecting the two FlashArrays together, if you haven't already, using the management IP address and a connection key.  The final step is to create the Pod replica link over that array connection.
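
For the scripting-inclined, here's a rough Python sketch of those steps against the FlashArray REST 2.x API.  The addresses, tokens, Pod names, and connection-key value are hypothetical, and the field spellings are my reading of the REST 2.x docs, so verify them against your Purity version.

```python
import requests

SRC = "https://flasharray1.example.com"   # source array mgmt address (hypothetical)

# Log in: trade an API token for a session token (lab cert, hence verify=False).
login = requests.post(f"{SRC}/api/2.4/login",
                      headers={"api-token": "SRC-API-TOKEN"}, verify=False)
hdrs = {"x-auth-token": login.headers["x-auth-token"]}

# 1. Create the source-side Pod (repeat on the target array for "dr-pod").
requests.post(f"{SRC}/api/2.4/pods",
              params={"names": "prod-pod"}, headers=hdrs, verify=False)

# 2. Connect the two arrays using the peer's management IP and connection key.
requests.post(f"{SRC}/api/2.4/array-connections",
              json={"management_address": "10.0.0.20",      # peer mgmt IP (hypothetical)
                    "connection_key": "PEER-CONNECTION-KEY",
                    "type": "async-replication"},           # connection type: my assumption
              headers=hdrs, verify=False)

# 3. Create the Pod replica link from local "prod-pod" to remote "dr-pod".
requests.post(f"{SRC}/api/2.4/pod-replica-links",
              params={"local_pod_names": "prod-pod",
                      "remote_names": "flasharray2",
                      "remote_pod_names": "dr-pod"},
              headers=hdrs, verify=False)
```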

Once set up, ActiveDR takes snapshots of the volumes or filesystems in the Pod that's in the "promoted" state, compresses the data, sends only the changes to the replica Pod (which is in the "demoted" state), and then repeats that same process continuously.  In our lab testing, the lag time between the source and target array snapshots ranged from about 46 seconds to just over 5 minutes over a 10Gb replication connection, depending on the number of writes to the filesystem during our tests.  Your mileage may vary.
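
If you want to watch that lag yourself rather than take my word for it, the Pod replica link exposes it.  A quick sketch, assuming the same hypothetical array and token as above, and assuming the REST 2.x pod-replica-links schema reports status and lag fields:

```python
import requests

ARRAY = "https://flasharray1.example.com"   # source array (hypothetical)
login = requests.post(f"{ARRAY}/api/2.4/login",
                      headers={"api-token": "YOUR-API-TOKEN"}, verify=False)
hdrs = {"x-auth-token": login.headers["x-auth-token"]}

# Pull the replica link(s) for our Pod and print status and lag
# (lag in milliseconds, per my reading of the REST 2.x schema).
links = requests.get(f"{ARRAY}/api/2.4/pod-replica-links",
                     params={"local_pod_names": "prod-pod"},
                     headers=hdrs, verify=False).json()["items"]
for link in links:
    print(link["status"], link.get("lag"))
```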

How useful is replicating your data if you can't easily test it, or act on it, in an actual disaster recovery situation?  This is where Pure's file system replication goes above and beyond.  Because ActiveDR has the promoted Pod continuously sending data to the replica Pod on the target array, it's easy to run a DR test, or to fail over for real if the primary Pod or array is down.  The demoted Pod is always accessible on the network in a read-only state, and with the simple click of a button it becomes read-writeable and is ready for file services or for testing out your DR plans.
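
That one-button promotion maps to a single API call.  A minimal sketch, again with hypothetical names, assuming the REST 2.x pods endpoint accepts a requested_promotion_state field:

```python
import requests

DR = "https://flasharray2.example.com"   # DR-side array (hypothetical)
login = requests.post(f"{DR}/api/2.4/login",
                      headers={"api-token": "DR-API-TOKEN"}, verify=False)
hdrs = {"x-auth-token": login.headers["x-auth-token"]}

# Promote the demoted Pod on the DR array: it flips from read-only to
# read-writeable, ready for file services or a DR test.
requests.patch(f"{DR}/api/2.4/pods", params={"names": "dr-pod"},
               json={"requested_promotion_state": "promoted"},
               headers=hdrs, verify=False)
```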

Once the replica Pod is promoted, the ActiveDR replication direction does not change, and all changes from the original source-side array are still being replicated to a hidden area on the target/DR-side FlashArray.  With both Pods promoted and read-writeable, you have two options to consider, both sketched in the snippet after the list below.

(Figure: both Pods read/write)
  • Demote the target/DR-side Pod.  All changes written since it was promoted are discarded, and the data that has continued to replicate in the background becomes accessible in a read-only state on the target array.
(Figure: DR side demoted)
  • Demote the source-side Pod.  The ActiveDR Pod replication switches direction, and all changes on the target-side Pod are sent back to the original source-side Pod.
(Figure: production side demoted)
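
Both options boil down to demoting one Pod or the other.  Here's a hedged sketch of the two calls, with hypothetical array addresses, tokens, and Pod names, and the same REST 2.x field assumptions as above:

```python
import requests

def session(array, token):
    """Trade a FlashArray API token for REST 2.x session headers (lab sketch)."""
    r = requests.post(f"{array}/api/2.4/login",
                      headers={"api-token": token}, verify=False)
    return {"x-auth-token": r.headers["x-auth-token"]}

SRC, DR = "https://flasharray1.example.com", "https://flasharray2.example.com"
src_hdrs, dr_hdrs = session(SRC, "SRC-TOKEN"), session(DR, "DR-TOKEN")

# Option 1: after a DR test, demote the DR Pod again.  Changes written during
# the test are discarded and the background-replicated data comes back read-only.
requests.patch(f"{DR}/api/2.4/pods", params={"names": "dr-pod"},
               json={"requested_promotion_state": "demoted"},
               headers=dr_hdrs, verify=False)

# Option 2: for a real failover, demote the original source Pod instead.
# Replication reverses, and changes on the DR Pod flow back to the source.
requests.patch(f"{SRC}/api/2.4/pods", params={"names": "prod-pod"},
               json={"requested_promotion_state": "demoted"},
               headers=src_hdrs, verify=False)
```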

The beautiful part about the DR test and failover capabilities of ActiveDR on a Pure FlashArray is that you follow the same steps in the GUI, or the same commands via the CLI, for a test as you would for a real-world DR scenario.  There's no need to change any part of the process, or to wonder whether it will work, during an actual DR event.

Today, ActiveDR is limited to a 1:1 relationship between a source Pod and a target Pod.  You can, however, have multiple Pods on a FlashArray, each with its own ActiveDR partner Pod, configured against the same target FlashArray or against different FlashArray partners.  The diagram below shows a couple of examples of multiple Pod replications set up across FlashArrays.

(Diagram: examples of multiple Pod replica links across FlashArrays)

In the future, we're hoping ActiveDR will be expanded to support a third-copy replica (or cascading replication) to a different FlashArray or to an S3 target (CloudSnap) for the native file replication options, but this release is huge and didn't disappoint.  To see all the new features available in Purity//FA 6.3, take a look at this blog post.

To learn more about ActiveDR, take a look at the ActiveDR white paper on support.purestorage.com, or reach out to your WWT account team to see File Replication using ActiveDR in action.
