- Huge Pages
- OS specific setup
- 1GB huge pages (Linux only)
- RandomX Optimization Guide
- Guides
- Memory size requirements
- Huge Pages
- Hardware prefetchers MSR mod
- Cache QoS
- rx 1 Gb huge pages error #1411
- Optimal configuration of xmrig on linux for Monero Mining
Huge Pages
Huge Pages, also known as Large Pages (on Windows) and Super Pages (on BSD and macOS), are very important for almost all supported CPU-mineable algorithms: the typical hashrate increase is 20-30% when huge pages are used, and for RandomX they can increase performance by up to 50%. XMRig uses the term huge pages on all platforms, so don't be confused.
If XMRig is using huge pages, you will see lines like the one below in the miner log.
OS specific setup
On Windows you need a special privilege called SeLockMemoryPrivilege to use huge pages.
First check the miner output: if you see the line below, you already have this privilege and no additional actions are required.
If not, you have two options to obtain it; both require admin rights.
- Easiest way: run the miner as Administrator once and reboot.
- Manual configuration
On Windows 10, once you obtain the privilege, admin rights are no longer required to use huge pages, but on Windows 7 admin rights are always required. Please note that on Windows there is no way to reserve huge pages for future use, and the miner can still fail to allocate all required huge pages because other applications are using memory; if you get less than 100% of huge pages, the best option is to reboot. If you use algorithm switching heavily and want to avoid losing huge pages, you can use the option "memory-pool": true in the "cpu" object.
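A minimal config.json sketch of that option (only the relevant key is shown; other "cpu" settings are left at their defaults):
"cpu": {
    "memory-pool": true
}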
For manual configuration you must know how many huge pages you need; the general recommendation is 1280 pages per NUMA node for RandomX algorithms and 128 per system for other algorithms. Please note that 1280 pages means 2560 MB of memory will be reserved for huge pages and become unavailable for other usage; in automatic mode the miner reserves the precise number of huge pages.
Temporary (until the next reboot) huge pages reservation:
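For example, using the 1280-page recommendation above (run as root):
sysctl -w vm.nr_hugepages=1280   # 1280 x 2MB pages = 2560 MB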
Permanent huge pages reservation:
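A sketch with the same page count; writing the setting to /etc/sysctl.conf makes it apply on every boot:
echo "vm.nr_hugepages=1280" >> /etc/sysctl.conf   # persists across reboots
sysctl -p                                         # apply it now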
1GB huge pages (Linux only)
Since version 5.2.0 the miner supports 1GB huge pages for the RandomX dataset (the regular huge page size is 2MB); this feature is available only on Linux. It increases the hashrate by 1-3% (depending on the CPU) and increases memory requirements to 3GB (3 pages) per NUMA node.
By default this feature is disabled; to enable it, use the option "1gb-pages": true in the "randomx" object.
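A minimal config.json sketch:
"randomx": {
    "1gb-pages": true
}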
RandomX Optimization Guide
Guides
Memory size requirements
- 2080 MB per NUMA node for the dataset; 1 NUMA node is usually equal to 1 CPU socket, and the miner shows the number of nodes on startup.
- 256 MB for the cache on the first NUMA node.
- 256 KB of L2 cache and 2 MB of L3 cache per mining thread.
CPU cache requirements are the main reason why the miner does not use all threads on most CPUs (a very popular question). On Windows, 4GB of memory may not be enough for the system and the miner.
There are several ways to increase or reduce memory requirements:
- 1GB huge pages on Linux: increases memory requirements to 3GB (3 pages) per NUMA node and increases the hashrate by 1-3%.
- Disable NUMA support with "numa": false in the "randomx" object; the miner will use only 1 dataset, but this reduces the hashrate significantly. If you have only 1 NUMA node, this option has no effect.
- RandomX light mode: reduces memory requirements to 256 MB, but this mode is very slow. It can be enabled with "mode": "light" in the "randomx" object (see the sketch after this list).
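For illustration, a config.json sketch of the "randomx" object combining the two memory-reducing options above (enable only the ones you actually need):
"randomx": {
    "numa": false,
    "mode": "light"
}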
Multiple memory channels may be required:
- DDR3 memory is limited to about 1500-2000 H/s per channel (depending on frequency and timings)
- DDR4 memory is limited to about 4000-6000 H/s per channel (depending on frequency and timings)
Huge Pages
Huge Pages can increase RandomX performance by up to 50%; 1GB huge pages (Linux only) increase the hashrate by an additional 1-3% on top of regular huge pages.
Hardware prefetchers MSR mod
You must disable hardware prefetchers to get the optimal RandomX performance.
Cache QoS
An experimental feature that prevents CPU cores that are not mining from accessing the L3 cache, which reduces interference with mining.
rx 1 Gb huge pages error #1411
Comments
exty357 commented Dec 13, 2019
[2019-12-13 11:31:17.395] rx msr kernel module is not available
[2019-12-13 11:31:17.395] rx init dataset algo rx/loki (20 threads) seed 57605565760671d4.
[2019-12-13 11:31:17.975] rx failed to allocate RandomX dataset using 1GB pages
[2019-12-13 11:31:18.023] rx allocated 2336 MB (2080+256) huge pages 100% 1168/1168 +JIT (627 ms)
[2019-12-13 11:31:20.628] rx dataset ready (2605 ms)
[2019-12-13 11:31:20.628] cpu use profile rx (20 threads) scratchpad 2048 KB
[2019-12-13 11:31:21.011] cpu READY threads 20/20 (20) huge pages 100% 20/20 memory 40960 KB (384 ms)
I have 20 vCPU + 64 GB RAM.
cat /etc/sysctl.conf | grep huge
vm.nr_hugepages = 4096
pawelantczak commented Dec 13, 2019
You can remove the entry from sysctl.conf; xmrig will do it for you (make sure to run it with sudo).
setuidroot commented Dec 13, 2019
Typically 1GB hugepages must be enabled at boot with a (Linux) kernel boot parameter. I don't know if xmrig does this on its own (I don't see how it could, unless you had already activated 1GB hugepages at boot with the kernel cmdline parameters), unless xmrig adds the kernel boot parameters itself (I'm not sure because I've been using my own forked version that is behind on the latest commits). But even then you would have to reboot to have 1GB hugepages working, because they're created at boot, not at runtime; they are allocated at runtime, but only if they've been configured to be 1GB in size at boot.
I see that xmrig (when run as root) will allocate its own hugepages (no need for "sysctl vm.nr_hugepages=1200") as xmrig grabs hugepages automatically. The problem here is that hugepages are 2MB by default (on most systems). I'm specifically using Ubuntu/Debian for my examples here, but this should apply to other distros, give or take a few commands and/or file paths.
To set 1GB hugepages as the default hugepage size you’ll need to append some kernel boot parameters (I’ll get to that later.) First it’s best to look at /etc/sysctl.conf and comment out any vm.nr_hugepages=x you might have added to it. This is easily done with:
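For example (nano is assumed here, as it is used later in this comment):
nano /etc/sysctl.conf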
Check sysctl.conf and make sure you didn't add anything hugepage-related at the end of the file; if so, it's best to comment it out (using # in front of the line). I don't know if this is necessary, but you're going to change the kernel's default hugepage size from 2MB to 1GB, so if you have vm.nr_hugepages=1200 set at boot, hopefully you have 1200+ GB of RAM (otherwise it'll swap to disk or probably error out). I don't know what it would do, because I removed my hugepages entry from /etc/sysctl.conf beforehand. You don't need hugepages set by sysctl.conf because xmrig (when run as root) will allocate its own hugepages (so we need to set the system to 1GB hugepages so that xmrig can allocate them).
Run all my commands here as root, start with this:
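(Assumed command; any way of getting a root shell works:)
sudo -i   # gives you a root shell for the session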
Enter your sudo password and now you will be logged in as root for the shell session (Ubuntu is annoying with the non-root users and all the sudoing and password typing lol). But be careful what you type when running as root if you're new to Linux. Stick to copy/pasting my commands here and you should be fine.
Note: I also set swappiness to 0 in sysctl.conf (you want to have enough RAM to do this and use 1GB hugepages; I'd say at least 8GB of RAM, multiplied by the number of NUMA nodes, or set "numa": false in config.json). Set swappiness to zero with the command below (this makes it so that Linux won't swap memory pages to disk unless absolutely necessary; it won't be necessary if you have enough RAM). If you don't have enough RAM, maybe set swappiness=1 so it'll swap to disk before crashing or killing programs. The default swappiness is 60 (0-100 are valid) and that's too swappy for me lol. I don't want it trying to swap my hugepages to disk (although those should be locked in memory anyway).
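A sketch of how that can be done (assumed commands; the setting is persisted in sysctl.conf and applied immediately):
echo "vm.swappiness=0" >> /etc/sysctl.conf   # persist the setting
sysctl -p                                    # apply it now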
Now on to 1GB hugepages:
First you'll want to check and make sure your system has 1GB hugepage capabilities. Most newer (18.04+) Ubuntu versions do have this (I'm on kernel 5.0.0-37 currently). But check for hardware compatibility as well:
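A sketch of such a check (assumed command; grep highlights the matching flags):
grep --color -E 'pse|pdpe1gb' /proc/cpuinfo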
The command above should print out the cpu flags with "pse" and "pdpe1gb" highlighted in red. If you don't see "pdpe1gb" printed out in red, then your hardware doesn't support 1GB hugepages; if you only see "pse" in red, then it only supports regular 2MB hugepages.
Not having pdpe1gb in the cpu flags is like not having aes: it's a hardware limitation. If you do see pdpe1gb and pse, then continue on.
Next we’ll check to make sure 1GB hugepages are an option in sysfs.
You should see these 3 files:
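A sketch, assuming the standard sysfs layout (1GB pages live in a hugepages-1048576kB directory):
ls /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/
# free_hugepages  nr_hugepages  surplus_hugepages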
nr_hugepages is r/w by root; this is the file that allocates 1GB hugepages. These hugepages are per NUMA node, so if you have 2 NUMA nodes, you'll see two sets of these files (and so forth). I have 4 nodes, so I see 4 such sets of files. For me to control the number of 1GB hugepages per node, I would do something like this (example).
For this example let’s say we have 2 NUMA nodes and each node has enough RAM (3GB+ RAM) for 1GB hugepages per node. Let’s say we have xmrig’s example system with 20c/40t, 2 NUMA nodes and plenty of RAM.
We will want to allocate (at least) 3 1GB hugepages per NUMA node. This is done like so:
For the first node:
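(Sketch, assuming the standard per-node sysfs path:)
echo 3 > /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages   # 3 x 1GB pages on node0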
For the second node:
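(Same assumption, for the second node:)
echo 3 > /sys/devices/system/node/node1/hugepages/hugepages-1048576kB/nr_hugepages   # 3 x 1GB pages on node1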
If you have enough RAM you can allocate 3 1GB hugepages for each and every NUMA node like so:
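(One possible way is to loop over every node directory; a sketch:)
for n in /sys/devices/system/node/node*/hugepages/hugepages-1048576kB/nr_hugepages; do echo 3 > "$n"; done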
Oh, but you cannot write to "nr_hugepages" (even though root can write to the file, its permissions being rw-r--r--); this is because we must first activate 1GB hugepages at boot time with the kernel parameter. But once we do that, this is how you would allocate the pages per node. However, I think xmrig will allocate the 1GB hugepages automatically once we set them up at boot.
To set up 1GB hugepages, you must put them in /etc/default/grub so they'll be appended to the kernel command line (/proc/cmdline).
To do that, just add "hugepagesz=1GB default_hugepagesz=1GB hugepages=6" to GRUB_CMDLINE_LINUX.
Open it with nano.
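In other words:
nano /etc/default/grub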
Then you'll see the grub options. Go to (probably the last uncommented) line; it'll look like this:
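(An empty value is the typical stock Ubuntu default, shown here as an assumption; yours may already contain other parameters:)
GRUB_CMDLINE_LINUX=""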
Make it look like this:
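(A sketch using the hugepages=6 example from above; keep any parameters that were already there:)
GRUB_CMDLINE_LINUX="hugepagesz=1GB default_hugepagesz=1GB hugepages=6"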
Then save it (Ctrl+O in nano) and then:
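(On Ubuntu/Debian this regenerates the grub configuration:)
update-grub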
Then when you reboot, you will have 1GB hugepages activated on the kernel cmdline (this is, to my knowledge, the only way to activate 1GB hugepages; at boot time I mean, it can't be done at runtime, or maybe it can but I'm not aware of such an ability).
Once you’ve added these kernel parameters and rebooted, then xmrig should automatically find and use 1GB hugepages.
^ the last line, hugepages=X: change that number to the number of 1GB hugepages you want configured. Don't use more than you have RAM for. "hugepages=6" means 6GB of RAM (6,144 MB) for hugepages. Make sure you have enough RAM for that, otherwise change the number. Xmrig needs a minimum of 3 1GB hugepages for this to even work. You'll need at least 4GB of RAM on your system, and even that would be a tight squeeze for a full Ubuntu desktop installation, so 5-6GB of RAM is a more realistic minimum for a single NUMA node CPU.
Oh, and yes, it's "hugepagesz" with an "s" and a "z". Don't ask why lol
I’m making a shell script to enable this at boot. I’ll probably make a PR with it if it’ll help people. I’m busy testing BIOS settings at the moment though.
Optimal configuration of xmrig on linux for Monero Mining
At the moment of writing, 12th September 2017, there are not a lot of cryptocurrencies that are still possible to mine with a simple CPU. Anyway, Monero is one of them. You should find a good mining pool for Monero and also run a good and fast miner, possibly open source.
First of all, stay away from the MinerGate mining pool. In fact, we have discovered that the mining rate shown by the pool is much lower than it should be: the same power connected to another pool gives about 20-30% more hashes per second compared to MinerGate. The second problem with MinerGate is the use of closed-source software, which may lead to unknown processing on your CPU system. For these reasons, it's better to stay away. A better choice, instead, is xmrpool.eu, an anonymous pool of XMR miners that seems fair and fast.
We found that the best choice for CPU mining on the Linux command line (in this case we are using Ubuntu 16.04) is xmrig: a fast, configurable, and easy to manage miner. It can be downloaded from its GitHub page at https://github.com/xmrig/xmrig . Just decompress it and set up the configuration file. Below is an example configuration file. Please remember that for mining purposes it's better that the CPU is idle from any other job.
{
    "algo": "cryptonight",
    "av": 0,
    "background": false,
    "colors": true,
    "cpu-affinity": null,
    "cpu-priority": null,
    "donate-level": 1,
    "log-file": "/var/log/xmrig.log",
    "max-cpu-usage": 95,
    "print-time": 60,
    "retries": 5,
    "retry-pause": 5,
    "safe": false,
    "syslog": false,
    "threads": null,
    "pools": [
        {
            "url": "xmrpool.eu:3333",
            "user": "yourxmraddress+yourworkerid",
            "pass": "x",
            "keepalive": true,
            "nicehash": false
        }
    ]
}
Obviously, in the reported config file you should insert your Monero wallet address + your worker id. The worker id is simply the unique name that you give to the CPU rig you are running with the software.
If your CPU supports hugepages, it's better to enable them. The command to do that is:
sysctl -w vm.nr_hugepages=128
Also, you should run the miner as root. The command to start mining in the background is:
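A plausible invocation (a sketch; it assumes the binary and the config.json above sit in the current directory, and that xmrig's -B/--background and -c/--config options are used):
sudo ./xmrig -B -c config.json   # -B detaches the miner into the background, -c points at the config file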
At this point you should see in the logfile /var/log/xmrig.log the following notices:
[2017-09-12 08:14:31] * VERSIONS: XMRig/2.3.1 libuv/1.8.0 gcc/7.1.0
[2017-09-12 08:14:31] * HUGE PAGES: available, enabled
[2017-09-12 08:14:31] * CPU: Intel(R) Core(TM) i7-4770 CPU @ 3.40GHz (1) x64 AES-NI
[2017-09-12 08:14:31] * CPU L2/L3: 1.0 MB/8.0 MB
[2017-09-12 08:14:31] * THREADS: 4, cryptonight, av=1, donate=1%
[2017-09-12 08:14:31] * POOL #1: xmrpool.eu:3333
This means that all is OK and hugepages are available and enabled. Now you should go to the pool, insert the Monero wallet address you specified in the configuration file, and check that mining is really in progress and what hashrate your worker is actually producing.