VirtuallyNil
Virtualisation, VMware, Linux and more
Linux VMs and OCFS2 shared physical RDM September 27, 2010
We’ve recently had the need to set up Linux virtual machines using large shared disks with the OCFS2 cluster filesystem. To complicate matters, some of the Linux servers in the cluster might run as zLinux servers on a mainframe under z/VM, all sharing the same LUNs.
I set up a couple of test virtual machines in this scenario. The first I configured with a physical RDM of the shared LUN and enabled SCSI bus sharing on the adapter. For the second test virtual machine I pointed it at the mapped VMDK file of the physical RDM from the first virtual machine, which seemed the obvious way to share the physical RDM between VMs. This all works, and both Linux servers can use the LUN and the clustered filesystem. The problem I came across is that as soon as you enable SCSI bus sharing, vMotion is disabled. I wanted the ability to migrate these virtual machines if needed, so I looked into how to achieve this.
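For reference, something like the following PowerCLI sketch reproduces that first approach: attach the first VM’s RDM pointer VMDK to the second VM and place it on a controller with physical bus sharing. The VM names, datastore path and controller type are examples, not the exact ones from my lab.

# Attach the existing RDM pointer VMDK from linux-01 to linux-02 (names are examples)
$disk = New-HardDisk -VM (Get-VM "linux-02") -DiskPath "[datastore1] linux-01/linux-01_1.vmdk"
# Place the shared disk on its own controller with physical SCSI bus sharing enabled
New-ScsiController -HardDisk $disk -Type VirtualLsiLogic -BusSharingMode Physical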
Adding config.vpxd.filter.rdmFilter in vCenter
If you only have one virtual machine and do not enable SCSI bus sharing, then vMotion is possible. The solution I came up with was to set up each virtual machine the same way, i.e. each with the LUN as a physical RDM. Of course, once you allocate the LUN to one virtual machine it is no longer visible to any other VM. The easiest way to work around this is with the config.vpxd.filter.rdmFilter configuration option, which is set in the vCenter settings under Administration > vCenter Server Settings > Advanced Settings and is detailed in this KB article. If the setting is not in the list of Advanced Settings it can be added.
As Duncan Epping rightly describes here, it’s not really wise to leave this option set to false. What I did was set the option to false, carefully allocate the LUN to the three Linux servers I was setting up, and then set it back to true.
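For anyone wanting to script the toggle, a PowerCLI sketch along these lines covers the workflow. It assumes the advanced setting already exists on the vCenter Server entity; adapt it to your own connection.

# Disable the RDM filter on the vCenter Server connection
$vc = $global:DefaultVIServer
Get-AdvancedSetting -Entity $vc -Name "config.vpxd.filter.rdmFilter" |
    Set-AdvancedSetting -Value $false -Confirm:$false
# ... allocate the shared LUN as a physical RDM to each virtual machine here ...
# Re-enable the RDM filter once the RDMs are in place
Get-AdvancedSetting -Entity $vc -Name "config.vpxd.filter.rdmFilter" |
    Set-AdvancedSetting -Value $true -Confirm:$false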
With the virtual machines set up like this, the restriction is that they cannot run on the same host, so I set anti-affinity rules to keep them apart.
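Something like the following creates the DRS anti-affinity rule; the cluster and VM names are examples only.

# Keep the three OCFS2 cluster nodes on separate hosts
New-DrsRule -Cluster (Get-Cluster "Prod-Cluster") -Name "ocfs2-nodes-apart" `
    -KeepTogether $false -VM (Get-VM "linux-01","linux-02","linux-03")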
It is also possible to do this via the command line, without the need to change the Advanced Setting, as described in this mail archive I found. I haven’t tried this method.
I don’t know if this setup is supported or not but it seems to achieve the goal we need. I’d be interested to hear any opinions on this.
Raw Device Mappings (RDM)
In the early days of ESXi, VMware provided the ability to present a volume / LUN from a backend storage array directly to a virtual machine. This technology is called a Raw Device Mapping, also known as an RDM. While the introduction of RDMs provided several key benefits for end users, those benefits are outside the scope of this document. For an in-depth review of RDMs and their benefits, it is recommended you review the Raw Device Mapping documentation from VMware.
RDMs are, in many ways, becoming obsolete with the introduction of VMware vSphere Virtual Volumes (vVols). Many of the features that RDMs provide are also available with vVols. Pure Storage recommends the use of vVols wherever possible and would recommend you read further on this topic in our Virtual Volume (vVol) documentation.
RDM Compatibility Modes
There are two compatibility modes for a Raw Device Mapping: physical and virtual. Which option you choose depends on what features are required within your environment.
Physical Compatibility Mode
An RDM used in physical compatibility mode, also known as a pass-through RDM or pRDM, exposes the physical properties of the underlying storage volume / LUN to the Guest Operating System (OS) within the virtual machine. This means that all SCSI commands issued by the Guest OS (with the exception of one) are passed directly to the underlying device, allowing the VM to take advantage of some of the lower-level storage functions that may be required.
Virtual Compatibility Mode
An RDM used in virtual compatibility mode, also known as a vRDM, virtualizes the physical properties of the underlying storage and as a result appears the same way a virtual disk file in a VMFS volume would. The only SCSI requests that are not virtualized are READ and WRITE commands, which are still sent directly to the underlying volume / LUN. vRDMs still allow for some of the same benefits as a VMDK on a VMFS datastore and are a little more flexible to move throughout the environment.
To determine which compatibility mode should be used within your environment, it is recommended you review the Difference between Physical compatibility RDMs and Virtual compatibility RDMs article from VMware.
Due to the various configurations that are required for each RDM mode, Pure Storage does not have a best practice for which to use. Both are equally supported.
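As an illustration only, the following PowerCLI sketch shows how the two modes map to the -DiskType parameter of New-HardDisk. The VM names are placeholders and the device path uses a hypothetical volume serial.

# Device path of the FlashArray volume as seen by the ESXi host (placeholder serial)
$device = "/vmfs/devices/disks/naa.624a9370<volume-serial>"
# Physical compatibility mode (pRDM)
New-HardDisk -VM (Get-VM "vm-01") -DeviceName $device -DiskType RawPhysical
# Virtual compatibility mode (vRDM)
New-HardDisk -VM (Get-VM "vm-02") -DeviceName $device -DiskType RawVirtual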
Managing Raw Device Mappings
Connecting a volume for use as a Raw Device Mapping
The process of presenting a volume to a cluster of ESXi hosts to be used as a Raw Device Mapping is no different than presenting a volume that will be used as a datastore. The most important step for presenting a volume that will be used as an RDM is ensuring that the LUN ID is consistent across all hosts in the cluster. The easiest way to accomplish this task is by connecting the volume to a host group on the FlashArray instead of individually to each host. This process is outlined in the FlashArray Configuration section of the VMware Platform Guide.
If the volume is not presented with the same LUN ID to all hosts in the ESXi cluster, then VMware may incorrectly report that a volume is not in use when it is. The VMware Knowledge Base article Storage LUNs that are already in use as an RDM appear available in the Add Storage Window explains this behavior further.
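A quick, read-only way to confirm the LUN ID is consistent is to compare the device's runtime name on every host in the cluster; the trailing L number is the LUN ID. The cluster name and canonical name below are placeholders.

# Check the runtime name (vmhbaX:C#:T#:L#) of the device on each host in the cluster
Get-Cluster "Prod-Cluster" | Get-VMHost | ForEach-Object {
    Get-ScsiLun -VmHost $_ -CanonicalName "naa.624a9370<volume-serial>" |
        Select-Object VMHost, CanonicalName, RuntimeName
}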
Identifying the underlying volume for a Raw Device Mapping
There are times where you will need to determine which volume on the FlashArray is associated with a Raw Device Mapping.
1. Right click the virtual machine and select "Edit Settings".
2. Locate the RDM you wish to inspect and expand the properties of the disk.
3. Under the disk properties, locate the "Physical LUN" section and note the vml identifier.
4. Once you have the vml identifier, you can extract the LUN ID and the volume identifier and match them to a FlashArray volume.
The vml identifier can be broken down as follows:
- fa: the LUN ID in hex (250 in decimal)
- 624a9370: the Pure Storage vendor identifier
- 8a75393becad4e430004e270: the volume serial number on the FlashArray
5. Now that we know the volume serial number and LUN ID, we can look at the FlashArray to determine which volume is backing the RDM.
In this example, the RDM is backed by the volume called "space-reclamation-test" on the FlashArray.
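As an alternative to clicking through the GUI, a PowerCLI sketch along these lines lists a VM's RDMs together with their canonical device names; the VM name is a placeholder. For a Pure Storage device, the NAA name is the vendor identifier 624a9370 followed by the volume serial number.

# List the RDM disks on a VM with their canonical (naa) and vml device names
Get-HardDisk -VM (Get-VM "SQLVM") -DiskType RawPhysical,RawVirtual |
    Select-Object Name, DiskType, ScsiCanonicalName, DeviceName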
Removing a Raw Device Mapping from a virtual machine
The process for removing a Raw Device Mapping from a virtual machine is a little different than that of removing a virtual machine disk (VMDK).
1. Right click the virtual machine and select "Edit Settings".
2. Locate the RDM you wish to remove and click the "x".
3. Ensure the box "Delete files from datastore" is checked.
4. If you no longer require the volume, you can safely disconnect it from the FlashArray and rescan the ESXi hosts.
Selecting "Delete files from datastore" does not delete the data on the volume. This step simply removes the mapping file created on the datastore that points to the underlying volume (raw device).
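A PowerCLI equivalent of the GUI steps might look like the sketch below; the VM and disk names are placeholders. The -DeletePermanently switch removes only the RDM mapping (pointer) file from the datastore, not the data on the FlashArray volume.

# Remove the RDM mapping file ("Hard disk 2" in this example) from the VM and the datastore
Get-HardDisk -VM (Get-VM "SQLVM") -Name "Hard disk 2" |
    Remove-HardDisk -DeletePermanently -Confirm:$false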
Resizing a Raw Device Mapping
Depending on which compatibility mode you have chosen for your RDM, the resize process will vary. The process outlined in Expanding the size of a Raw Device Mapping (RDM) provides an example for both physical and virtual RDMs.
Multipathing
A common question when using Raw Device Mappings (RDMs) is where the multipathing configuration should be completed. Because an RDM provides the ability for a virtual machine to access the underlying storage directly, it is often assumed that configuration within the VM itself is required. Luckily, things are not that complicated and the configuration is no different than that of a VMFS datastore. This means that the VMware Native Multipathing Plugin (NMP) is responsible for RDM path management, not the virtual machine.
This means that no MPIO configuration is required (or needed) within the virtual machines utilizing RDMs. All of the available paths are abstracted from the VM and the RDM is presented as a disk with a single path. All of the multipathing logic is handled in the lower levels of the ESXi host.
Within a Windows VM, for example, a pRDM simply appears as a single disk with a single path.
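A read-only check from PowerCLI confirms that the path policy and path count for an RDM's backing device live at the ESXi host layer; the host name and canonical name below are placeholders.

# Show the path selection policy and path count for the RDM's backing device on a host
Get-VMHost "esxi-01.example.com" |
    Get-ScsiLun -CanonicalName "naa.624a9370<volume-serial>" |
    Select-Object CanonicalName, MultipathPolicy, @{N = "Paths"; E = { ($_ | Get-ScsiLunPath).Count }}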
Multi-Writer
When utilizing RDMs, people often have questions regarding the "Sharing" option presented when adding the RDM to the virtual machine.
Since RDMs are most often used for scenarios like clustering, the concern is whether or not this value should be set. There is a fear that if it is left unspecified (which defaults to "No Sharing"), corruption of some kind can happen on the disk. This is a good mindset to have, as protecting data should always be the number one goal.
The first important thing to note here is that this option is meant for VMDKs or virtual RDMs (vRDMs) only. It is not for use with physical RDMs (pRDMs) as they are not "VMFS backed". So if your environment is utilizing physical RDMs then you do not need to worry about this setting.
If you are utilizing virtual RDMs then there is a possibility that setting this option is required, specifically if you are utilizing Oracle RAC on your virtual machines. As of this writing this is the only scenario in which multi-writer is known to be required with virtual RDMs on a Pure Storage FlashArray. VMware has provided additional information on this in their Enabling or disabling simultaneous write protection provided by VMFS using the multi-writer flag KB.
If there are questions around this topic please open a case with Pure Storage Technical Support for additional information.
Do not set multi-writer on RDMs that are going to be used in a Windows Server Failover Cluster (WSFC) as this may cause accessibility issues to the disk(s). Windows manages access to the RDMs via SCSI-3 persistent reservations and enabling multi-writer is not required.
Queue Depth
An additional benefit of utilizing a Raw Device Mapping is that each RDM has its own queue depth limit, which may in some cases provide increased performance. Because the VM is sending I/O directly to the FlashArray, it does not share a datastore-wide queue the way a VMDK does.
Aside from the potentially shared queue depth on the virtual machine's SCSI controller, each RDM has its own dedicated queue depth and works under the same rules as a datastore would. This means that if you have a Raw Device Mapping presented to a single virtual machine, the queue depth for that RDM is whatever the HBA device queue depth is configured to. Alternatively, if you present the RDM to multiple virtual machines, then the device's "No of outstanding I/Os with competing worlds" setting becomes the queue limit for that particular RDM.
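For a read-only look at the values ESXi reports for an RDM's backing device, something like the following works via PowerCLI's Get-EsxCli; the host and device names are placeholders.

# Device details include the maximum device queue depth and outstanding I/O settings
$esxcli = Get-EsxCli -VMHost (Get-VMHost "esxi-01.example.com") -V2
$esxcli.storage.core.device.list.Invoke(@{device = "naa.624a9370<volume-serial>"}) | Format-List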
For additional information on queue depths, you can refer to Understanding VMware ESXi Queuing and the FlashArray.
Unless directed by Pure Storage or VMware Support there typically is no reason to modify either of these values. The default queue depth is sufficient for most workloads.
UNMAP
One of the benefits of using RDMs is that SCSI UNMAP is a much less complicated process than it is with VMDKs. Depending on the version of ESXi you are using, there are different caveats for UNMAP to be successful with VMDKs. With RDMs, the only requirements are that the Guest OS supports SCSI UNMAP and that the feature is enabled.
Windows
If utilizing Windows, please refer to the Windows Space Reclamation KB for UNMAP requirements and how to verify that this feature is enabled.
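For example, a quick check from an elevated PowerShell prompt inside the Windows guest is the built-in fsutil query below; a returned value of 0 means delete notifications (UNMAP/TRIM) are enabled.
PS C:\> fsutil behavior query DisableDeleteNotify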
Linux
If utilizing Linux, please refer to the Reclaiming Space on Linux KB for UNMAP requirements and how to verify that this feature is enabled.
Managing RDMs with PowerShell
Pure Storage offers a PowerShell Module called PureStorage.FlashArray.VMware to help with PowerShell management of Pure Storage and VMware environments.
To install this module, run:
PS C:\> Install-Module PureStorage.FlashArray.VMware
In this module there are a few cmdlets that assist specifically with RDMs.
PS C:\> Get-Command -Name *RDM* -Module PureStorage.FlashArray.VMware.RDM -CommandType Function
CommandType     Name                         Version    Source
-----------     ----                         -------    ------
Function        Convert-PfaRDMToVvol         1.1.0.2    PureStorage.FlashArray.VMware.RDM
Function        Copy-PfaSnapshotToRDM        1.1.0.2    PureStorage.FlashArray.VMware.RDM
Function        Get-PfaConnectionFromRDM     1.1.0.2    PureStorage.FlashArray.VMware.RDM
Function        Get-PfaRDMSnapshot           1.1.0.2    PureStorage.FlashArray.VMware.RDM
Function        Get-PfaRDMVol                1.1.0.2    PureStorage.FlashArray.VMware.RDM
Function        New-PfaRDM                   1.1.0.2    PureStorage.FlashArray.VMware.RDM
Function        New-PfaRDMSnapshot           1.1.0.2    PureStorage.FlashArray.VMware.RDM
Function        Remove-PfaRDM                1.1.0.2    PureStorage.FlashArray.VMware.RDM
Function        Set-PfaRDMCapacity           1.1.0.2    PureStorage.FlashArray.VMware.RDM
For instance, to create a new RDM, run:
PS C:\> Connect-VIServer -Server vcenter-01.purestorage.com
PS C:\> $flasharray = New-PfaArray -Endpoint flasharray-01.purestorage.com -Credentials (Get-Credential)
PS C:\> $vm = Get-VM SQLVM
PS C:\> $vm | New-PfaRDM -SizeInTB 4 -FlashArray $flasharray
Replace the vCenter FQDN, FlashArray FQDN, volume size, and VM name with your own values.
Helpful Links
Windows Server Failover Cluster (WSFC)
Oracle RAC
Converting RDMs to vVols