How to find the max supported file-size of a filesystem?
For a particular directory, I need to discover the maximum file size supported by that filesystem. The filesystem in question is probably mounted from external USB media, and might be FAT32, NTFS, exfat, or ext2.
I know I could partially guess the information from mount, but I'd like a cleaner solution; besides, in the case of exfat, mount shows the filesystem type as "fuseblk".
(I am running Linux 3.2.0-4-686-pae #1 SMP Debian 3.2.51-1 i686 GNU/Linux)
getconf FILESIZEBITS path does not work for a fuseblk mount of an exfat filesystem: it returns 32, which is inaccurate. So it is not a general solution.
2 Answers
I think you can use getconf -a /path for this. Among the many values it prints there is also FILESIZEBITS. APUE says this about it:
minimum number of bits needed to represent, as a signed integer value, the maximum size of a regular file allowed in the specified directory
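For example (the mount point below is hypothetical; a result of 64 means files of up to 2^63 - 1 bytes):

    $ getconf FILESIZEBITS /mnt/usb
    64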
There is some concern that getconf does not return filesystem-specific information:
getconf isn’t in principle capable of answering such a question because it’s filesystem dependent.
That is not the case:
Edit: As the other answer says, you can use getconf FILESIZEBITS /mypath to find out the maximum number of bits a file size may have, and hence the largest size of file supported. Cross-reference this against http://en.wikipedia.org/wiki/Integer_%28computer_science%29#Common_integral_data_types for an idea of what file size (in bytes) that corresponds to.
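You can also let the shell do that arithmetic for you; a small sketch (the mount point is again hypothetical):

    bits=$(getconf FILESIZEBITS /mnt/usb)   # e.g. 64 on ext4
    echo $(( (1 << (bits - 1)) - 1 ))       # largest file size in bytes;
                                            # with bits=64 this wraps around in
                                            # shell arithmetic yet still prints 2^63 - 1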
You can also cross-reference the filesystem against a list such as http://en.wikipedia.org/wiki/Comparison_of_file_systems#Limits
df -T gives you output that may assist in identifying your filesystems accurately.
It's worth noting that other limits also exist that may be smaller than those imposed by the filesystem, such as the file size limit set by ulimit. You can query and set this with ulimit -f, but on most systems it will be "unlimited".
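For instance, in bash (where ulimit -f counts 1024-byte blocks), an illustrative session might look like this:

    $ ulimit -f
    unlimited
    $ ( ulimit -f 1024; dd if=/dev/zero of=test.bin bs=1k count=2048 )
    File size limit exceeded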
How to set a max size for a specific file in Linux
I have an application which writes to a particular log file for as long as the user session is active. What I am looking for is a way to put a cap on the size of that log file so it does not grow beyond a particular size. Two scenarios which would be useful are:
Any utility which keeps an eye on the log file and, as soon as it reaches the max size, starts truncating the file content from the start, so the application can keep appending content at the end.
Any utility by which, while creating the file, I can specify its max size, so that when the file reaches that size it simply does not grow beyond that point.
What I don't want is:
- To set up a cron job or a script which monitors the file size at some fixed interval (say, every hour) and then deletes its contents at that time.
3 Answers
As a shellscript:
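A minimal sketch of the idea: poll the file and trim it from the front when it exceeds a cap. The 10 MiB cap, the 5-second interval, and the filename are assumptions.

    #!/bin/sh
    # Trim log.txt back to MAX bytes whenever it grows larger,
    # keeping the newest data at the end of the file.
    LOG=log.txt
    MAX=$((10 * 1024 * 1024))   # 10 MiB cap (assumed value)
    while sleep 5; do
        size=$(stat -c %s "$LOG" 2>/dev/null) || continue
        if [ "$size" -gt "$MAX" ]; then
            tail -c "$MAX" "$LOG" > "$LOG.tmp" && mv "$LOG.tmp" "$LOG"
        fi
    done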
Note that you might get some race conditions which may lead to losing log-data.
How about truncate -s 10M log.txt?
Check man truncate for more details.
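Note that truncate cuts from the end of the file, not the beginning. GNU truncate can also be told to shrink a file only when it is above a threshold, using the '<' prefix on the size:

    truncate -s '<10M' log.txt   # shrink to 10M only if currently larger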
A facility that removes part of a file and then lets you append more data is almost never available on any system. It is possible, it's just not something one does. It could be done at the kernel level and be really efficient, but I've never seen that either (i.e. the kernel would simply unlink data blocks from the start of the file and keep a byte offset in the first block, as opposed to working only at page granularity).
On a Unix system, you can use mmap() and munmap() for that purpose. When your application determines that the file size went over a certain amount, it would read from the start of the file, determine the location of, for example, the 10,000th line of the log, and then memmove() the rest over to the start. Finally, it would truncate the file and reopen it in append mode. This last step is a very important step.
(A full example can be found in sendmail::dequeue() on GitHub, which includes all the error checking not shown here.)
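A rough sketch of that approach in C (an assumed reconstruction, not the original code; keep_from would be the byte offset of the first line you want to keep, and all error checking is omitted):

    #include <fcntl.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    void trim_log(const char *path, size_t keep_from)
    {
        int fd = open(path, O_RDWR);
        struct stat st;
        fstat(fd, &st);
        char *p = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);
        size_t remaining = st.st_size - keep_from;
        memmove(p, p + keep_from, remaining);  /* slide the tail to the front */
        munmap(p, st.st_size);
        ftruncate(fd, remaining);              /* drop the now-stale tail */
        close(fd);
        /* the writer must now close and reopen the log in append mode */
    }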
IMPORTANT: the memmove() call is going to be slow, especially on a rather large log file.
Note that most of the time, when a process opens a log file, it keeps it open, and that means changing the file under its feet won't do much good. Actually, in the mmap() example here, you would create a gap of many zeroes (\0 characters) between the moved data and the next write if you don't make sure to close and reopen the log (not shown in the code).
So, it's doable in code (here in C++, though you could easily get it to compile in C). But if you just want to use bash, logrotate is certainly your best bet. By default, however, at least on Ubuntu, logrotate runs only once per day. You can change that specifically for the users who use your application, or system-wide.
At least you can run it hourly by moving or copying the logrotate script like so:
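Assuming the standard Debian/Ubuntu layout:

    sudo mv /etc/cron.daily/logrotate /etc/cron.hourly/logrotate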
You can also set up a per-minute CRON entry which runs that script. To edit the root crontab file:
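    sudo crontab -e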
Then add one line:
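(The logrotate path and config file below are assumptions; adjust for your system.)

    * * * * * /usr/sbin/logrotate /etc/logrotate.conf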
Make sure to test and see that it works as expected. If you add such an entry, you could also remove the /etc/cron.daily/logrotate script so it does not run twice (once on the daily run and once on the per-minute run).
Just be aware that there is a lingering bug in CRON, as shown in my bug report to Ubuntu. It can cause memory problems when running CRON jobs very frequently (like once a minute).
Also, as mentioned with the code sample above, you must reopen the log file. Just rotating won't do you any good unless the application either reopens the log file each time it wants to write to it, or is told to rotate (i.e. close the old file and open the new one). Without that rotation kick, the application will continue to append data to the old file, no matter what it is renamed to. Unix tracks an open file by its inode, not its filename. (Under MS-Windows, you won't be able to rename the file without first closing all accesses to it; that's very annoying!)
In many cases, you either restart the whole application (because it's too dumb to know how to reopen the log), send it a signal so it reopens the log file, or the application itself somehow notices that the file changed.
If the application is not capable of knowing, restarting it will be your only option. That may be jarring for a user if it has a UI.
What is the maximum allowed filename (and folder) size with eCryptfs?
I am a new eCryptfs user and I have a very basic question that I wasn’t able to find anywhere. I am interested in using eCryptfs via my Synology NAS that uses Linux.
While trying to encrypt my folder (EXT4) via Synology's encryption app (eCryptfs), I encounter errors stating that my filename length cannot exceed 45 characters (so, no encryption).
If the limit really is 45 characters, eCryptfs may not be a usable tool for most.
What is the maximum allowed filename size when encrypting files and folders with eCryptfs? Isn't the Linux limit 255 characters?
4 Answers
Full disclosure: I am one of the authors and the current maintainer of the eCryptfs userspace utilities.
Linux has a maximum filename length of 255 characters for most filesystems (including EXT4), and a maximum path of 4096 characters.
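Both limits can be queried for a given path with getconf:

    $ getconf NAME_MAX /home
    255
    $ getconf PATH_MAX /home
    4096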
eCryptfs is a layered filesystem. It stacks on top of another filesystem such as EXT4, which is actually used to write data to the disk. eCryptfs always encrypts file contents, but it can optionally encrypt (obscure) filenames (or not).
If filenames are not encrypted, then you can safely write filenames of up to 255 characters and encrypt their contents, as the filenames written to the lower filesystem will simply match. While an attacker would not be able to read the contents of index.html or budget.xls, they would know what file names exist. That may (or may not) leak sensitive information, depending on your use case.
If filenames are encrypted, things get a little more complicated. eCryptfs prepends a bit of data to the front of the encrypted filename, so that it can identify encrypted filenames definitively. Also, the encryption itself involves "padding" the filename.
For instance, I have an encrypted file, /.bashrc. This filename is encrypted using my key to a much longer name of the form ECRYPTFS_FNEK_ENCRYPTED.<encoded string>.
Clearly, that 7-character filename now requires more than 7 characters to be encrypted. Empirically, we have found that filenames longer than 143 characters start requiring more than 255 characters to encrypt. So we (as eCryptfs upstream developers) typically recommend you limit your filenames to 140 characters.
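If you want to see what a particular mount accepts, a quick shell probe (the mount point is hypothetical):

    d=/mnt/private    # an eCryptfs mount with filename encryption enabled
    for n in 140 143 144 200 255; do
        name=$(printf 'a%.0s' $(seq 1 "$n"))
        if touch "$d/$name" 2>/dev/null; then
            echo "$n characters: ok"; rm -f "$d/$name"
        else
            echo "$n characters: rejected"
        fi
    done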
Now, all that said, the Synology NAS is a commercial product that embeds and uses eCryptfs and Linux to encrypt and secure data on the device. We (the upstream developers of eCryptfs) have nothing to do with Synology or their products, though we’re generally happy to see eCryptfs used in the wild. It seems to me that their recommendation of 45 characters is either a typographical error (from our 140 character recommendation), or simply a far more conservative estimate.
File Size Limit
What is the maximum file size limit in a 32-bit RHEL OS? Is there any OS limitation on file size? If there is a limit set, then please tell me what it will be for root and oaa.
3 Answers
File size is limited by the filesystem type, not by the OS. Typically, an OS supports several filesystems, so there is no such thing as an "OS file size limit". There are, however, limits for well-known filesystems:
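The commonly cited maximum file sizes (the exact values depend on block or cluster size):
- FAT32: 4 GiB minus 1 byte
- ext2/ext3: 16 GiB to 2 TiB
- ext4: 16 TiB
- XFS: 8 EiB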
For more than a decade, 32-bit Linux applications have been able to access files larger than 2 GiB (2^31 bytes), thanks to the implementation of large file support. The current OS limitation is 8 EiB (2^63 bytes), which shouldn't affect most of us for a while.
You would also need a file system whose own maximum file size is no lower than that.
Large File Support (LFS) is not enabled by default for either:
- 32 bit kernels
- 32 bit processes running on 64 bit kernels.
As stated in the following post, it must be explicitly enabled at compilation time (for applications, with -D_FILE_OFFSET_BITS=64). Otherwise the file size is limited to 2147483647 bytes = 2^31 - 1, the maximum value of a signed 32-bit file offset.
If you have a 32 bit RHEL you can quickly verify that with the following command:
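For example, by trying to write a 3 GB file (a guess at the intended command; note that it only hits the 2 GiB wall if the writing tool was itself built without large file support):

    dd if=/dev/zero of=bigfile bs=1M count=3000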
If you have a 64 bit RHEL and the process is compiled for 32 bit systems you have the same problem. You can verify that by running the following program:
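A minimal sketch of such a program (an assumed reconstruction, not the original):

    /* main.c: writes 1 MiB blocks until the write fails. Built with -m32
     * and without -D_FILE_OFFSET_BITS=64, it should fail with EFBIG
     * ("File too large") once the file approaches 2^31 - 1 bytes. */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        static char buf[1024 * 1024];
        memset(buf, 'x', sizeof(buf));
        FILE *f = fopen("bigfile", "w");
        if (!f) { perror("fopen"); return 1; }
        for (int i = 0; i < 3072; i++) {       /* try to write 3 GiB */
            if (fwrite(buf, 1, sizeof(buf), f) != sizeof(buf)) {
                perror("fwrite");
                fprintf(stderr, "stopped after %d MiB\n", i);
                break;
            }
        }
        fclose(f);
        return 0;
    }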
You need to compile the program as 32 bit executable:
gcc -m32 -Wall -g main.c -o main
Both programs will stop before the file reaches a size of 3 GB.
Limit the maximum size of a file in an ext4 filesystem
ext4 has a maximum filesystem size of 1 EiB and a maximum file size of 16 TiB.
However, is it possible to make the maximum file size smaller at the filesystem level? For example, I would like to disallow creating files greater than a specified value (e.g. 1 MB). How can this be achieved on ext4?
If not ext4, does any other modern filesystem support such a feature?
2 Answers
ext4 has a max_dir_size_kb mount option to limit the size of directories, but no similar option for regular files.
A process can, however, be prevented from creating a file bigger than a given limit using resource limits, as set by setrlimit() or the ulimit or limit builtins of some shells. Most systems will also let you set those limits system-wide, per user.
When a process exceeds that limit, it receives a SIGXFSZ signal. And when it ignores that signal, the operation that would have caused the file size to be exceeded (such as a write() or truncate() system call) fails with an EFBIG error.
To move that limit into the file system, one trick you could use is a FUSE (file system in user space) file system, where the user-space handler is started with that limit set. bindfs is a good candidate for that.
If you run bindfs dir dir (that is, bind dir over itself), with bindfs started as follows (zsh syntax):
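An assumed reconstruction of that invocation (zsh's limit builtin sets the filesize resource limit for the subshell, which bindfs then inherits):

    (limit filesize 1M; bindfs dir dir)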
Then any attempt to create a file bigger than 1M in that dir will fail. bindfs forwards the EFBIG error to the process writing the file.
Note that the limit only applies to regular files; it won't stop directories from growing past that limit (for instance, by creating a large number of files in them).