What is the best practice for configuring VMFS UNMAP for ESXi 6.7 or later?
What is UNMAP?: UNMAP (SCSI command 0x42) is the mechanism that allows a host (like ESXi) to inform the storage array that specific blocks of data are no longer in use (e.g., after a VM is deleted or moved). This is critical for Pure Storage because it allows the array to reclaim that space and maintain high data reduction ratios.
Evolution in ESXi: In versions prior to 6.5, UNMAP was a manual process executed via the CLI. ESXi 6.5 introduced Automatic Space Reclamation for VMFS-6 datastores, which runs in the background, and ESXi 6.7 added the ability to tune the reclamation rate.
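For reference, this is what the older manual process looked like from the ESXi CLI (a minimal sketch; the datastore name is a placeholder):
# Manually reclaim dead space on a VMFS datastore (pre-automatic-UNMAP workflow)
esxcli storage vmfs unmap -l Datastore01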
The Pure Storage Recommendation: Pure Storage recommends setting the reclamation priority to Auto with Low Priority.
Low Priority: This ensures that UNMAP commands are sent to the FlashArray at a steady, manageable rate (roughly 25 MB/s to 100 MB/s, depending on the ESXi version and configuration). Because FlashArray is built on a high-performance metadata engine, 'Low Priority' is more than sufficient to keep up with even high-churn environments without causing contention for active application I/O.
Why avoid High Priority (Option B)?: Setting it to high priority or using a fixed high-burst rate can lead to 'bursty' SCSI traffic. While the FlashArray can handle the load, it is considered a best practice to keep background maintenance tasks like space reclamation at a lower priority to ensure the 'Big Three' (latency, bandwidth, IOPS) for production workloads remain optimized.
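To illustrate, the reclamation setting can be checked and kept at low priority from the ESXi CLI (a minimal sketch; the datastore name is a placeholder and options may vary slightly by ESXi build):
# Show the automatic space reclamation settings for a VMFS-6 datastore
esxcli storage vmfs reclaim config get --volume-label=Datastore01
# Keep reclamation at the recommended low priority
esxcli storage vmfs reclaim config set --volume-label=Datastore01 --reclaim-priority=low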
Verification: You can verify that UNMAP is working by looking at the Data Reduction metrics in the Purity GUI or Pure1. If the 'Thin Provisioning' or 'Reclaimed' numbers are increasing after file deletions, the host is correctly communicating its freed space to the array.
Which command provides the negotiated port speed of an ethernet port?
On a Pure Storage FlashArray, Ethernet ports operate at both a physical hardware layer and a logical network configuration layer. If you need to verify the actual physical negotiated port speed of an Ethernet port (for example, verifying if a 25GbE port negotiated down to 10GbE due to switch configurations or cable limitations), you must query the hardware layer directly.
The command purehw list --all --type eth queries the physical NIC hardware components directly and reports their true link status, health, and negotiated link speed.
Here is why the other options are incorrect:
purenetwork eth list --all (B): The purenetwork command suite is primarily focused on the logical Layer 2/Layer 3 networking stack. It is used to configure and list IP addresses, subnet masks, MTU sizes (Jumbo Frames), and routing, rather than focusing on the physical hardware negotiation details of the NIC itself.
pureport list (A): The pureport command suite is specifically used for managing and viewing storage protocol target ports. An administrator would use this to list the array's Fibre Channel WWNs or iSCSI IQNs to configure host zoning or initiator connections, not to verify Ethernet link negotiation speeds.
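Put side by side, the three commands as written in the answer choices target different layers (the comments summarize what each reports on):
# Physical hardware layer: true link status, health, and negotiated link speed
purehw list --all --type eth
# Logical network layer: IP addresses, netmasks, MTU, and related configuration
purenetwork eth list --all
# Storage protocol target ports: FC WWNs and iSCSI IQNs
pureport list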
A storage administrator is troubleshooting a FlashArray that is critically low on space. They have successfully deleted and eradicated a large volume, but the used space keeps increasing.
What is a possible cause?
Logical vs. Physical Reclamation: When an administrator 'Eradicates' a volume, the FlashArray immediately removes the logical reference to that data. However, the physical blocks are not 'wiped' instantly. Instead, those blocks are marked as 'eligible for reclamation' by Purity's background Garbage Collection (GC) process.
Workload Prioritization: Purity is designed to prioritize Host I/O (production performance) over background system tasks. If the array is under an extremely high workload (high Load Meter percentage), Purity will automatically throttle the Garbage Collection process to ensure the application latency remains as low as possible.
The 'Reclamation Lag': If the incoming write rate from the hosts (new data being written) exceeds the speed at which the throttled GC process can reclaim space from the eradicated volume, the 'Used Space' metric will continue to trend upward. This is a common scenario when arrays are pushed to their performance or capacity limits simultaneously.
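One way to confirm this behavior is to sample space usage from the Purity CLI over time and watch the trend (a minimal sketch):
# Array-wide capacity, used space, and data reduction
purearray list --space
# Per-volume space, to confirm the eradicated volume no longer contributes to usage
purevol list --space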
Why Option A is incorrect: The 24-hour safeguard applies to Destroyed volumes (the 'Pending Eradication' bucket). Once an administrator manually clicks Eradicate, that safeguard is bypassed, and the space should logically be freed. If the space is still not reflecting as 'Free,' it is a back-end processing delay, not a timer delay.
Why Option C is incorrect: In the Purity Operating Environment, the array does not require the host to 'unmount' or 'disconnect' before it can reclaim space. Once the volume is destroyed and eradicated on the array side, those blocks are gone from the array's perspective, regardless of the host's state (though the host will likely experience I/O errors).
An administrator set up replicated snapshots for a protection group last week. They left the local snapshot schedule disabled.
How many snapshots are stored locally on the source array?
Replication Fundamentals: On a Pure Storage FlashArray, replication is a snapshot-based process. To replicate a Protection Group (pgroup) to a target array, the system must first create a point-in-time snapshot of the volumes within that group on the source array.
The 'Immutable' Rule: Even if the Local Snapshot Schedule is disabled, the act of replicating requires the existence of a local snapshot to serve as the 'base' or 'source' for the data transfer. Purity does not stream data directly from the active volume to the wire; it creates a snapshot and then replicates the unique blocks contained in that snapshot.
Accounting for Local Copies: When a Protection Group is configured for replication, every snapshot generated by the Replication Schedule is stored locally on the source array. These snapshots will remain on the source array until they are aged out according to the Local Retention policy (even if the local schedule itself is off, the retention policy still applies to those replicated snapshots).
Visibility: If you navigate to the Protection Group in the Purity GUI, you will see these snapshots listed under the 'Snapshots' tab. They are functionally identical to local snapshots, meaning they can be used for local clones or restores without needing to pull data back from the target array.
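On the source array, these replication-driven snapshots and the policies that govern them can also be confirmed from the CLI (a sketch; the protection group name is a placeholder and flags may vary by Purity version):
# List snapshots belonging to the protection group
purepgroup list --snap pg-prod
# Confirm which schedules are enabled and what retention applies
purepgroup list --schedule pg-prod
purepgroup list --retention pg-prod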
Why Options A and C are incorrect: Option A: If 0 snapshots were stored locally, there would be nothing to serve as the source of the replication transfer.
Option C: While Purity uses the most recent snapshot as a reference for delta-tracking, it keeps the entire history of snapshots defined by your retention policy, not just a single one.
An On-Premises ActiveCluster (AC) Mediator is installed on an ESXi server. The mediator was previously online, but when the administrator checked the status of the ActiveCluster (AC) pods, the mediator status was listed as "unreachable" for both FlashArrays in the ActiveCluster (AC) pair.
What is a possible cause of the mediator being unreachable from both FlashArrays?
The ActiveCluster Mediator (whether it is the Pure1 Cloud Mediator or the On-Premises VM) is a lightweight tie-breaker that communicates continuously with the management interfaces of both FlashArrays. If it was previously online and suddenly reports as 'unreachable' from both arrays simultaneously, the issue is almost always caused by a network interruption or firewall rule change blocking the required communication ports between the arrays' management IP addresses and the Mediator VM.
If a network firewall is suddenly configured to drop or deny outbound TCP traffic (for example on port 80 or 443, depending on whether HTTP or HTTPS is used for discovery and heartbeats) from the FlashArrays to the ESXi-hosted Mediator, the arrays fail to send their heartbeats and the mediator status drops to 'unreachable.'
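A quick check (a sketch; the CLI option and mediator IP shown are illustrative and may differ by Purity version) is to confirm the mediator status on each array and then verify that the management network can still reach the mediator on its heartbeat port:
# On each FlashArray: show the pod mediator and its reachability status
purepod list --mediator
# From a machine on the same management network: test TCP reachability to the mediator VM
nc -zv 10.0.0.50 443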
Here is why the other options are incorrect:
Fibre Channel (FC) zoning or network access has not been created properly for the host (A): The Mediator is completely independent of the front-end host storage fabric (Fibre Channel or iSCSI). Host zoning issues would prevent the ESXi server from seeing its volumes, but it would not cause the FlashArrays to lose management network connectivity to the Mediator.
The mediator does not reside within a Pure datastore (B): This is actually a strict best practice and requirement. Pure Storage explicitly states that the On-Premises Mediator VM must be deployed in a separate (third) failure domain. It should not reside on the ActiveCluster mirrored datastore, because a site-wide SAN failure would take the mediator offline exactly when it is needed most. Therefore, not residing on a Pure datastore is the correct setup, not a cause for an outage.