Mount ZFS on Windows
To set up a development environment for compiling ZFS, download the free development Windows 10 image from Microsoft and create two VMs:
- Host (running Visual Studio and Kernel Debugger)
- Target (runs the compiled kernel module)
The VM image comes with Visual Studio 2017, which we use to compile the driver.
It is recommended that the VMs be given static IP addresses, as they can change IP after all the crashes, and you would have to configure the remote kernel development setup again.
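For example, a static address can be assigned from an elevated CMD on each VM (the interface name, addresses, and gateway below are placeholders and will differ on your setup):
netsh interface ipv4 set address name="Ethernet0" static 172.16.248.103 255.255.255.0 172.16.248.1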
Download the Windows Driver Kit 10 and install it on both VMs. You will need both the SDK and the WDK: download the SDK bundled with the Visual Studio 2017 Community edition first and install it; it will update the already installed Visual Studio. Then install the WDK. At the end of the installer, allow it to install the Visual Studio extension.
On the Target VM, complete the guide specified here, under the section «Prepare the target computer for provisioning», which mostly entails running:
C:\Program Files (x86)\Windows Kits\10\Remote\x64\WDK Test Target Setup x64-x64_en-us.msi
On the Host VM, continue the guide to configure Visual Studio 2017.
- Load Visual Studio 2017; there is no need to load the project yet.
- Menu > Driver > Test > Configure Devices
- Click «Add New Device»
- In «Display name:» enter «Target»
- In «Device type:» leave as «Computer»
- In «Network host name:» enter IP of Target VM, for me «172.16.248.103»
- Provisioning options: select «Provision device and choose debugger settings».
- Click «Next >»
It now confirms that it talked to the Target. Note that the «Host IP» shown here is that of the Host VM, for me «172.16.248.102», and is not to be confused with the Target IP entered on the previous screen.
Watch and wait as remote items are installed on the Target VM. It will most likely reboot the Target VM as well.
I’ve had dialog boxes pop up asking me to agree to the installation, but I am not sure they are supposed to appear. They probably shouldn’t; it would seem the setup failed to put WDKRemoteUser in the Administrators group. If that happens, use «lusrmgr.msc» to correct it.
The task «Creating system restore point» will most likely fail, and that is acceptable; however, if other tasks fail, you may need to retry until they work.
At the end of the run, the output window offers a link to the full log, which is worth reading if you encounter issues.
When things fail, I start a CMD prompt as Administrator and paste in the failing commands from the log file. It would be nice if this process just worked, though.
If your version of .NET is newer, just move along.
The Target VM should reboot and log in as «WDKRemoteUser».
It is recommended that you get Git Bash for Windows and install it.
Handling configuration errors with Visual Studio 2019 & WDK 10:
There are some issues with Visual Studio 2019 which can cause the following problem when setting up kernel debugging: ERROR: Task “Configuring kernel debugger settings (possible reboot)” failed to complete successfully. Look at the logs in the driver test group explorer for more details on the failure.
This problem is related to an MSVC debug tool location mismatch; as a workaround, use the following steps to mitigate it:
As Administrator, run the Developer Command Prompt for VS 2019 on your Host VM, then run the following commands in it:
cd /d %VCToolsRedistDir%\debug_nonredist
MKLINK /J x86\Microsoft.VC141.DebugCRT x86\Microsoft.VC142.DebugCRT
MKLINK /J x64\Microsoft.VC141.DebugCRT x64\Microsoft.VC142.DebugCRT
Retry the configuration by following the guide to configure Visual Studio 2017 mentioned above.
Host and Target VMs are now configured.
The first time you load the project, the configuration might default to ARM; you probably want to change ARM ==> X64.
Load ZFSin solution
Menu > Debug > ZFSin Properties
Configuration Properties > Debugging: «Debugging Tools for Windows - Kernel Debugger»; Remote Computer Name: Target
Configuration Properties > Driver Install > Deployment: Target Device Name: Target; [Tick] Remove previous driver versions; (•) Hardware ID Driver Update: Root\ZFSin
You can run DbgView on the Target VM to see the kernel prints on that VM.
Run the compiled driver on the Target
- Compile solution
- Menu > Debug > Start Debugging (F5)
Wait a while for VS2017 to deploy the .sys file on the Target and start it.
Optional settings for the Target VM:
If you find it frustrating to do development work while Windows Defender or Windows Update keeps running, you can disable them in gpedit.msc:
- Computer Configuration > Administrative Templates > Windows Components > Windows Defender
- Computer Configuration > Administrative Templates > Windows Components > Windows Update
✅ Compile SPL sources
- Godzillion warnings yet to be addressed
✅ Port SPL sources, atomics, mutex, kmem, condvars
- C11 _Atomics in kmem not yet handled
✅ Compile ZFS sources, stubbing out code as needed
✅ Include kernel zlib library
✅ Load and Unload SPL and ZFS code
✅ Port kernel zfs_ioctl.c to accept ioctls from userland
✅ Compile userland libspl, libzpool, libzfs, …
✅ Include pthread wrapper library
- Replaced with thin pthread.h file
✅ Include userland zlib library
✅ Port functions in libzpool, libzfs. Iterate disks, ioctl
✅ Test ioctl from zpool to talk to kernel
✅ Port kernel vdev_disk.c / vdev_file.c to issue IO
✅ Port over cmd/zfs
✅ Add ioctl calls to MOUNT and create Volume to attach
✅ Add ioctl calls to UNMOUNT and detach and delete Volume
✅ Port kernel zfs_vnops.c / zfs_vnops_windows.c
- Many special cases missing, flags to create/read/etc
✅ Correct file information (dates, size, etc)
✅ Basic DOS usage
✅ Simple Notepad text edit, executables also work.
✅ Basic drag’n’drop in Explorer
✅ zfs send / recv, file and pipe.
✅ git clone ZFS repo on ZFS mounted fs
✅ Compile ZFS on top of ZFS
❎ Scrooge McDuck style swim in cash
Design issues that need addressing.
- Windows does not handle EFI labels; for now they are parsed with libefi, and we send the offset and size along with the filename, which both libzfs and the kernel parse out and use. This works for a proof of concept.
A more proper solution would possibly be to write a thin virtual hard disk driver, which reads the EFI label and presents just the partitions.
vdev_disk.c spawns a thread to get around the fact that IoCompletionRoutine is called in a different context, and to sleep until signalled. Is there a better way to do async in Windows?
The ThreadId should be checked using PsGetCurrentThreadId(), but doing so makes zio_taskq_member(taskq_member()) crash. Investigate.
Functions in posix.c need sustenance.
The Volume created for MOUNT has something wrong with it: we are unable to query it for its mountpoint and currently have to string-compare against a list of all mounts. Possibly related is that we cannot call any of the functions to set the mountpoint in order to change it. This needs to be researched.
Find a way to get the system RAM in SPL, so we can size up kmem as expected. Currently we look up the information in the Registry. kmem should also use the Windows signal «\KernelObjects\LowMemoryCondition» to sense memory pressure.
Thinking on mount structure. Second design:
Add a dataset property WinDriveLetter, which is ignored on Unix systems. So for a simple drive-letter dataset:
zfs set driveletter=Z pool
The default when creating a new pool, AND when importing a UNIX pool, would be to set the root dataset so that it is assigned the first available drive letter. All lower datasets will be mounted inside the drive letter. If the pool’s WinDriveLetter is not set, it will mount «/pool» as «C:/pool».
Installing a binary release
The latest binary files are available at GitHub releases.
If you are running Windows 10 with Secure Boot on and/or installing an older release, you will need to enable unsigned drivers from an elevated CMD:
- bcdedit.exe -set testsigning on
- Then reboot. After the restart, «Test Mode» should be shown in the bottom right corner of the screen.
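- (If you later want to leave Test Mode again, run «bcdedit.exe -set testsigning off» from an elevated CMD and reboot.)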
After that, either:
- Run OpenZFSOnWindows.exe installer to install
- «Would you like to install this device software?» should pop up; click Install
- If installing an unsigned release, click «Install anyway» in the «unknown developer» popup
Or, if you do not want to run the installer, run this command by hand from an elevated CMD:
- zfsinstaller.exe install .\ZFSin.inf
- «Would you like to install this device software?» should pop up; click Install
- If installing an unsigned release, click «Install anyway» in the «unknown developer» popup
Run zpool.exe status to confirm it can talk to the kernel
Failure would be:
Success would be:
Creating your first pool.
The basic syntax for creating a pool is shown below. We use the pool name «tank» here, as in the OpenZFS documentation; feel free to pick your own pool name.
The default options will «mostly» work on Windows, but for best compatibility you should use a case-insensitive filesystem. A hedged sketch of the recommended creation commands is shown below.
- Creating file-based pools (see the file-backed example below)
- Creating an HDD pool
First, locate the disk name:
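A hedged sketch of these commands (the disk number, backing-file path, and flags are examples; «casesensitivity=insensitive» is the main Windows-compatibility option):
- Locate disk names: wmic diskdrive list brief
- HDD pool: zpool.exe create -O casesensitivity=insensitive tank PHYSICALDRIVE1
- File-based pool: zpool.exe create -O casesensitivity=insensitive tank \\?\C:\poolfile.bin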
Creating a ZVOL virtual hard disk
Creating a virtual hard disk (ZVOL) is done by passing «-V <size>» to the «zfs create» command.
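For example (a hedged sketch matching the 2 GB «tank/hello» volume described below):
zfs create -V 2g tank/hello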
This would create a disk of 2 GB in size, called «tank/hello». Confirm it was created with:
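For instance (a hedged example; any listing that shows volumes will do):
zfs list -t volume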
Exporting the pool
If you have finished with ZFS, or want to eject the USB or HDD that the pool resides on, it must first be exported, similar to «ejecting» a USB device before unplugging it.
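For example, using the pool name from above:
zpool export tank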
Importing a pool
If a zpool has been created on a disk partition from a different system, make sure the partition label contains «zfs». Otherwise, zpool import won’t recognize the pool and will fail with «no pools available to import».
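For example, list importable pools and then import one by name:
zpool import
zpool import tank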
Uninstalling the driver
If you used the Installer, you can browse to «C:\Program Files (x86)\OpenZFS On Windows» and run the «uninst000.exe» Uninstaller program.
You can also use «Add or Remove Programs» from the Settings menu, click on «OpenZFS On Windows-debug version x.xx», and select Uninstall.
If you did not use the Installer, you can manually uninstall it:
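Presumably this is the counterpart of the install command above (an assumption; verify the exact subcommand against your release):
zfsinstaller.exe uninstall .\ZFSin.inf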
To verify that the driver got uninstalled properly you can check «zpool.exe status».
When uninstalled with success, «zpool.exe status» should return:
If the driver is still there, it would be:
A reboot might be necessary to uninstall it completely.
You can use the registry to tune various parameters.
Also, there is kstat to dynamically change parameters.
There are nightly builds available at AppVeyor
- These builds are currently not signed and therefore require test mode to be enabled.
There are also test builds available here. These are «hotfix» builds that allow people to test specific fixes before they are ready for a release.
Best way to mount ZFS backup disk on Windows?
nickt
Still progressing through my new FreeNAS build, and trying to finalise my backup strategy. My basic plan is to backup to a pair of external backup drives (USB): one will be connected to my FreeNAS and the other will be stored at an offsite location, rotating every fortnight / month. While not essential, I’d really like to be able to read the backup drive on a low end Windows machine at the offsite location (my office).
My first thought was to format the backup drives as ZFS and use snapshots / replication with periodic scrubs scheduled. That would surely be the most robust backup strategy, but all options for reading the offsite drive on my Windows machine seem problematic one way or the other. I could:
- Use zfs-win to provide ZFS capability to Windows, but this looks ancient and forgotten
- Build a VirtualBox based FreeNAS VM on my Windows machine, but I only have 3 GB of useable RAM in total
- Build a VirtualBox based Ubuntu VM on my Windows machine and use one of the Ubuntu ZFS solutions
My Windows machine is 32 bit Win 7 on Ivy Bridge (i5) machine with 4 GB physical RAM. VirtualBox allows 64 bit guests on 32 bit hosts with adequate CPU support (which I have).
The other option is to use Crashplan / rsync to an NTFS formatted drive, but I don’t think NTFS support in FreeNAS is present and / or encouraged. Crashplan seems to have its own scrubbing methodology, which is nice, but the NTFS support is a concern.
Any suggestions? I assume rotating external USB drives in the way I have described is a fairly typical backup strategy for home NAS use cases, so I am keen to understand how others do it.
Mounting and Sharing ZFS File Systems
This section describes how mount points and shared file systems are managed in ZFS.
Managing ZFS Mount Points
By default, a ZFS file system is automatically mounted when it is created. You can determine specific mount-point behavior for a file system as described in this section.
You can also set the default mount point for a pool’s dataset at creation time by using the -m option of zpool create. For more information about creating pools, see Creating a ZFS Storage Pool.
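For example (a hedged sketch; the pool name, mount point, and device are placeholders):
# zpool create -m /export/zfs home c1t0d0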
All ZFS file systems are mounted by ZFS at boot time by using the Service Management Facility’s (SMF) svc://system/filesystem/local service. File systems are mounted under /path, where path is the name of the file system.
You can override the default mount point by using the zfs set command to set the mountpoint property to a specific path. ZFS automatically creates the specified mount point, if needed, and automatically mounts the associated file system when the zfs mount -a command is invoked, without requiring you to edit the /etc/vfstab file.
The mountpoint property is inherited. For example, if pool/home has the mountpoint property set to /export/stuff, then pool/home/user inherits /export/stuff/user for its mountpoint property value.
To prevent a file system from being mounted, set the mountpoint property to none. In addition, the canmount property can be used to control whether a file system can be mounted. For more information about the canmount property, see canmount Property.
File systems can also be explicitly managed through legacy mount interfaces by using zfs set to set the mountpoint property to legacy. Doing so prevents ZFS from automatically mounting and managing a file system. Legacy tools, including the mount and umount commands and the /etc/vfstab file, must be used instead. For more information about legacy mounts, see Legacy Mount Points.
Automatic Mount Points
When you change the mountpoint property from legacy or none to a specific path, ZFS automatically mounts the file system.
If ZFS is managing a file system but it is currently unmounted, and the mountpoint property is changed, the file system remains unmounted.
Any dataset whose mountpoint property is not legacy is managed by ZFS. In the following example, a dataset is created whose mount point is automatically managed by ZFS:
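For example (a hedged sketch; the dataset name and the illustrated output are placeholders):
# zfs create pool/filesystem
# zfs get mountpoint pool/filesystem
NAME             PROPERTY    VALUE             SOURCE
pool/filesystem  mountpoint  /pool/filesystem  default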
You can also explicitly set the mountpoint property as shown in the following example:
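For example (again a hedged sketch with placeholder names):
# zfs set mountpoint=/mnt pool/filesystem
# zfs get mountpoint pool/filesystem
NAME             PROPERTY    VALUE   SOURCE
pool/filesystem  mountpoint  /mnt    local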
When the mountpoint property is changed, the file system is automatically unmounted from the old mount point and remounted to the new mount point. Mount-point directories are created as needed. If ZFS is unable to unmount a file system due to it being active, an error is reported, and a forced manual unmount is necessary.
Legacy Mount Points
You can manage ZFS file systems with legacy tools by setting the mountpoint property to legacy. Legacy file systems must be managed through the mount and umount commands and the /etc/vfstab file. ZFS does not automatically mount legacy file systems at boot time, and the ZFS mount and umount commands do not operate on datasets of this type. The following examples show how to set up and manage a ZFS dataset in legacy mode:
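A hedged sketch of such a setup (the dataset name and mount point are placeholders):
# zfs set mountpoint=legacy tank/home/user
# mount -F zfs tank/home/user /mnt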
To automatically mount a legacy file system at boot time, you must add an entry to the /etc/vfstab file. The following example shows what the entry in the /etc/vfstab file might look like:
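A hedged sketch of such an entry (the dataset name and mount point are placeholders):
#device          device   mount  FS    fsck  mount    mount
#to mount        to fsck  point  type  pass  at boot  options
tank/home/user   -        /mnt   zfs   -     yes      -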
The device to fsck and fsck pass entries are set to - because the fsck command is not applicable to ZFS file systems. For more information about ZFS data integrity, see Transactional Semantics.
Mounting ZFS File Systems
ZFS automatically mounts file systems when file systems are created or when the system boots. Use of the zfs mount command is necessary only when you need to change mount options, or explicitly mount or unmount file systems.
The zfs mount command with no arguments shows all currently mounted file systems that are managed by ZFS. Legacy managed mount points are not displayed. For example:
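Illustrative output (the dataset names are placeholders):
# zfs mount
tank                           /tank
tank/home                      /tank/home
tank/home/user                 /tank/home/user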
You can use the -a option to mount all ZFS managed file systems. Legacy managed file systems are not mounted. For example:
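For example:
# zfs mount -a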
By default, ZFS does not allow mounting on top of a nonempty directory. To force a mount on top of a nonempty directory, you must use the -O option. For example:
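For example (the dataset name is a placeholder):
# zfs mount -O tank/home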
Legacy mount points must be managed through legacy tools. An attempt to use ZFS tools results in an error. For example:
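For example (a hedged sketch; the dataset name and exact error wording may differ):
# zfs mount tank/home/user
cannot mount 'tank/home/user': legacy mountpoint
use mount(1M) to mount this filesystem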
When a file system is mounted, it uses a set of mount options based on the property values associated with the dataset. The correlation between properties and mount options is as follows:
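A partial, illustrative mapping (standard ZFS property-to-mount-option pairs):
- atime maps to atime/noatime
- readonly maps to ro/rw
- setuid maps to setuid/nosetuid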