AWS: How to Mount S3 Bucket on EC2 Linux Instance Using IAM Role
This blog has been moved from Medium to blogs.tensult.com, where all the latest content is now published. Subscribe to our newsletter to stay updated.
We can mount an S3 bucket onto an AWS instance as a file system using s3fs, a FUSE filesystem application that allows you to mount an Amazon S3 bucket as a local file system. We can use system commands on this drive just as with any other hard disk in the system: on s3fs-mounted file systems we can simply run basic Unix commands such as cp, mv, and ls, just as on locally attached disks.
Filesystem in Userspace (FUSE) is a software interface for Unix and Unix-like operating systems that lets non-privileged users create their own file systems without editing kernel code. This is achieved by running the file system code in user space, while the FUSE module provides only a "bridge" to the actual kernel interfaces.
We considered an NFS-style solution, and Amazon now offers EFS, but EFS is costly, and the same data was also being used for our analytics solution. So we decided to use S3 to satisfy both requirements.
Follow the steps below to mount your S3 bucket on your Linux instance.
We assume that you have a running Linux EC2 (Red Hat/CentOS) instance on AWS with root access, and an S3 bucket already created that is to be mounted on the instance.
Step-1: Start from a fresh CentOS or Red Hat instance and update the system.
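The update command itself is missing from this copy; on CentOS/Red Hat it would typically be:

```shell
# Update all installed packages (use dnf instead of yum on newer releases)
sudo yum update -y
```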
Step-2: Install Required Packages
First, we will install all the dependencies for fuse and s3cmd. Install the required packages using the following command.
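The package list did not survive this copy; a sketch based on the build dependencies the s3fs-fuse project documents for CentOS (exact package names may vary by release):

```shell
# Compiler toolchain plus the libraries s3fs links against
sudo yum install -y automake autoconf gcc gcc-c++ git make \
    libcurl-devel libxml2-devel openssl-devel fuse fuse-devel
```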
Step-3: Download the s3fs source code from Git.
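The clone command is missing here; using the project's official GitHub repository it would be:

```shell
# Fetch the s3fs-fuse source code
git clone https://github.com/s3fs-fuse/s3fs-fuse.git
cd s3fs-fuse
```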
Step-4: Now compile and install the code.
The following set of commands will compile the s3fs code and install it.
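A minimal sketch of the build, assuming the standard s3fs-fuse autotools flow (the --prefix value is a choice, not taken from the original):

```shell
# Generate the build scripts, configure, compile, and install
./autogen.sh
./configure --prefix=/usr
make
sudo make install
```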
Step-5: Use the command below to check where the s3fs binary was installed.
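The check is a one-liner:

```shell
# Show the installed location of the s3fs binary
which s3fs
# typically /usr/bin/s3fs or /usr/local/bin/s3fs, depending on the --prefix
```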
Step-6: Create an IAM role for the S3 bucket.
Create an IAM role with a policy granting appropriate access to the particular bucket.
For example, my IAM role name is s3fsmountingrole and the bucket created is s3fs-demobucket.
The attached policy should grant read/write access to the bucket s3fs-demobucket.
Enter the policy name, description, and policy document as given below.
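The original policy document did not survive this copy; a minimal sketch granting read/write access to the example bucket (the actions and ARN scoping here are my assumptions, not the original document) might look like:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::s3fs-demobucket"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::s3fs-demobucket/*"
    }
  ]
}
```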
Attach the IAM role to the running instance, or launch a new instance with the role attached.
Step-7: Now create a directory, or use the path of an existing directory, and mount the S3 bucket in it.
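For example (the directory name /mys3bucket is a hypothetical choice for this sketch):

```shell
# Create a mount point for the bucket
sudo mkdir -p /mys3bucket
```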
Step-8: Now mount the S3 bucket using the IAM role by entering the following command:
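The mount command is missing from this copy; using the example role and bucket names from above, and s3fs's iam_role mount option, it would look something like (the /mys3bucket mount point is an assumed example):

```shell
# Mount the bucket, obtaining temporary credentials from the instance's IAM role
s3fs s3fs-demobucket /mys3bucket -o iam_role=s3fsmountingrole -o allow_other
```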
Step-9: Check the mounted S3 bucket. The output will be similar to that shown below, although the Used size may differ.
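The sample output is missing from this copy; the check and an illustrative result (sizes are examples only; s3fs reports a very large nominal capacity, as one of the comments further down also shows) would be:

```shell
# List mounted file systems; the s3fs entry appears at the bottom
df -h
# Filesystem      Size  Used Avail Use% Mounted on   (illustrative)
# /dev/xvda1      8.0G  1.2G  6.9G  15% /
# s3fs            256T     0  256T   0% /mys3bucket
```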
df -h shows the mounted file systems; here you can see that we have successfully mounted the S3 bucket on the EC2 instance.
Note: If the bucket already contained data and it is not visible after mounting, you may need to set the appropriate permissions in the bucket's ACL from the S3 section of the AWS Management Console.
Congrats!! You have successfully mounted your S3 bucket to your EC2 instance.
Here I explained how to mount an AWS S3 bucket on an EC2 Linux instance. For the demo I used a Red Hat machine, created an IAM role with access to the S3 bucket, and attached it to the running instance. Alternatively, you can access an S3 bucket from an EC2 instance by providing an AWS access key and secret key.
How to Mount S3 Bucket on CentOS/RHEL and Ubuntu using S3FS
S3FS is a FUSE (Filesystem in Userspace) based solution for mounting Amazon S3 buckets. We can use system commands on this drive just as with any other hard disk in the system: on s3fs-mounted file systems we can simply run basic Unix commands such as cp, mv, and ls, just as on locally attached disks.
If you would like to access S3 buckets without mounting them on the system, use the s3cmd command-line utility to manage S3 buckets. s3cmd also provides faster upload and download speeds than s3fs. To work with s3cmd, see the respective articles on installing s3cmd on Linux and on Windows.
This article will help you install S3FS and Fuse by compiling them from source, and then mount an S3 bucket on your CentOS/RHEL or Ubuntu system.
Step 1 – Remove Existing Packages
First, check whether any existing s3fs or fuse package is installed on your system. If one is installed, remove it to avoid file conflicts.
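The commands are missing from this copy; a sketch for both distributions (the exact package names, such as fuse-s3fs, are assumptions and may differ by release):

```shell
# See whether fuse or s3fs packages are already installed
rpm -qa | grep -i -e fuse -e s3fs      # CentOS/RHEL
dpkg -l | grep -i -e fuse -e s3fs      # Ubuntu
# Remove any that are found, for example:
sudo yum remove fuse fuse-s3fs         # CentOS/RHEL
sudo apt-get remove fuse               # Ubuntu
```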
Step 2: Install Required Packages
After removing the old packages, install all the dependencies for fuse and s3cmd using the following command.
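The package lists are missing here; a sketch covering both distributions this article targets (package names are my best reconstruction and may vary by release):

```shell
# CentOS/RHEL build dependencies
sudo yum install gcc libstdc++-devel gcc-c++ curl-devel libxml2-devel \
    openssl-devel automake make
# Ubuntu equivalents
sudo apt-get install build-essential libcurl4-openssl-dev libxml2-dev \
    libssl-dev pkg-config
```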
Step 3 – Download and Compile Fuse
Download and compile the latest version of the fuse source code. For this article we are using fuse version 3.5. The following set of commands will compile fuse and add the fuse module to the kernel.
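The commands themselves are missing from this copy. Note that commenters below report that fuse 3.x now builds with meson/ninja and that s3fs expects fuse 2.x, so this sketch substitutes the 2.9.7 release (the version and download URL are my substitutions, not from the original):

```shell
# Download, build, and install fuse 2.x (2.x releases still use ./configure)
cd /usr/src
wget https://github.com/libfuse/libfuse/releases/download/fuse-2.9.7/fuse-2.9.7.tar.gz
tar xzf fuse-2.9.7.tar.gz
cd fuse-2.9.7
./configure --prefix=/usr/local
make && sudo make install
# Make the new library and pkg-config files discoverable, then load the module
export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig
sudo ldconfig
sudo modprobe fuse
```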
Step 4 – Download and Compile Latest S3FS
Download and compile the latest version of the s3fs source code. For this article we are using s3fs version 1.74. After downloading, extract the archive and compile the source code on the system.
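The download and build commands are missing; a sketch using the v1.74 tag from the s3fs-fuse GitHub repository (the URL is an assumption; the original article distributed the tarball from a source that no longer exists, so check the project's releases page):

```shell
# Download, extract, and build s3fs 1.74
cd /usr/src
wget https://github.com/s3fs-fuse/s3fs-fuse/archive/refs/tags/v1.74.tar.gz
tar xzf v1.74.tar.gz
cd s3fs-fuse-1.74
./autogen.sh
./configure --prefix=/usr/local
make && sudo make install
```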
Step 5 – Setup Access Key
In order to configure s3fs, you will need the Access Key and Secret Key of your Amazon S3 account. You can generate these security keys from the security credentials page of your AWS account.
Note: Change AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY with your actual key values.
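The setup commands are missing from this copy; s3fs reads credentials from ~/.passwd-s3fs by default, in ACCESSKEY:SECRETKEY format, and requires the file to be private:

```shell
# Store the credentials in the s3fs password file
# (replace the placeholders with your actual key values)
echo "AWS_ACCESS_KEY_ID:AWS_SECRET_ACCESS_KEY" > ~/.passwd-s3fs
# s3fs refuses to run if this file is readable by other users
chmod 600 ~/.passwd-s3fs
```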
Step 6 – Mount S3 Bucket
Finally, mount your S3 bucket using the following set of commands. For this example, we are using an S3 bucket named mydbbackup and the mount point /s3mnt.
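The mount commands are missing here; this sketch mirrors the invocation that several commenters below quote (the 777 permissions follow the original tutorial's style; tighten them for production use):

```shell
# Create the cache directory and mount point
sudo mkdir -p /tmp/cache /s3mnt
sudo chmod 777 /tmp/cache /s3mnt
# Mount the bucket, using /tmp/cache as a local object cache
s3fs -o use_cache=/tmp/cache mydbbackup /s3mnt
```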
39 Comments
cd fuse-3.1.0 does not exist (only fuse-3.5.0)
[[email protected] fuse-3.5.0]# ./configure –prefix=/usr/local
-bash: ./configure: No such file or directory
any idea how to mount local S3 (not on AWS) created on PURE storage. I got the bucket name, Bucket Endpoint, Bucket Access Key and bucket Secret Access Key .
They use meson/ninja now, not make && make install or ./configure.
s3fs: HTTP: 403 Forbidden – it is likely that your credentials are invalid
Have you solved it?
Can be compatible with the cheap wasabi.com instead of amazon s3?
Do you have a solution for this problem? Run into the same problem.
regards Ronald
On Ubuntu steps 1-4 can be replaced with one command:
sudo apt-get install s3fs
Do this instead of steps 1-4 and continue from step 5. It works – tested.
Do NOT use fuse3* but stick to a fuse 2*, otherwise you will not be able to install s3fs.
I am getting below error.
fuse: warning: library too old, some operations may not not work
# rpm -qa | grep fuse
libconfuse-2.7-4.el6.x86_64
fuse-libs-2.8.3-5.el6.x86_64
#
The latest package is not available in the yum repo, though. Please suggest.
Hello, i followed your guide (on Ubuntu 14.04, Bitnami – EC2 ) , i am getting following error at “make”
/usr/lib/gcc/x86_64-linux-gnu/4.8/../../../x86_64-linux-gnu/libcurl.so: undefined reference to `[email protected]_2.4_2′
(the same undefined-reference line is repeated for many more libcurl symbols)
collect2: error: ld returned 1 exit status
make[2]: *** [s3fs] Error 1
make[2]: Leaving directory `/tmp/s3fs-fuse-1.80/src’
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/tmp/s3fs-fuse-1.80′
make: *** [all] Error 2
use this if you get “bash: s3fs: command not found”
sudo /usr/local/bin/s3fs -o use_cache=/tmp/cache mydbbackup /s3mnt
Those of you having an error on CentOS: s3fs: error while loading shared libraries: libfuse.so.2: cannot open shared object file: No such file or directory
yum install fuse-libs
Great article! Worked a treat….
The only issue was that my library path required updating, as when I ran s3fs it couldn't find the fuse dependencies.
LD_LIBRARY_PATH=/usr/local/lib
export LD_LIBRARY_PATH
That did the trick 🙂
Hi..
I have mounted S3 successfully, but when I try to cd into the mounted dirs it says "Operation not permitted".
cd: app_logs/: Operation not permitted
below is permission for dir
d———. 1 root root 1 May 6 2015 app_logs
below command used
s3fs -o use_cache=/tmp/cache s3bukcket /s3mnt
what permission i need to set for bucket or bucket folder ?
am i writing wrong command for mount?
Can you please share the steps to mount Amazon S3 bucket on windows OS for both 2008 & 2012?
Thanks & Regards,
Mehul
Thanks! This was quite helpful.
You can take a look at the docker image which I built, with S3FS and S3 bucket mounting capabilities:
https://registry.hub.docker.com/u/ihealthtechnologies/s3-mount/
Hi all, I’m receiving the following error after trying to mount the bucker:
# s3fs -o user_cache=/tmp/cache lehar-backup /s3mnt
s3fs: /lib/libfuse.so.2: version `FUSE_2.8′ not found (required by s3fs)
I downloaded fuse 2.8 and compiled it per the instructions however if there is something I’m missing I’d love to know what that is. Let me know what information you might need. Thanks for any help you can provide.
My disk is full after mount and not able to use the system. Ideally it should not take disk space.
How can i resolve the issue?
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/xvda1 10321208 10297608 0 100% /
none 847476 0 847476 0% /dev/shm
s3fs 274877906944 0 274877906944 0% /s3mnt
s3fs: HTTP: 403 Forbidden – it is likely that your credentials are invalid
Have you solved it?
I am getting error:
s3fs: unable to access MOUNTPOINT storingfiles1: No such file or directory
Hi Team,
Very good, it is working fine on my Ubuntu system.
So far so good but I get this error:
[[email protected] s3fs-1.74]# s3fs -o use_cache=/tmp/cache agarta /etc/httpd/imagestore/
bash: s3fs: command not found
any idea on how to resolve this?
Thanks for the tutorial. All worked fine, except this message: Transport endpoint is not connected. Unmounted and mounted again using the link you shared in the comments. I am using s3fs to connect to Google Cloud Storage.
Kindly share some pointers or might be issue with the permission level mentioned above.
/usr/local/lib
include ld.so.conf.d/*.conf
# ldconfig
# s3fs -o use_cache=/tmp/cache ****.******.*** /s3mnt
s3fs: error while loading shared libraries: libfuse.so.2: cannot open shared object file: No such file or directory
I have a problem, can you help me?
[[email protected]*********** s3fs-1.74]# s3fs -o use_cache=/tmp/cache ****.******.*** /s3mnt
s3fs: error while loading shared libraries: libfuse.so.2: cannot open shared object file: No such file or directory
install fuse-libs and it should solve your issue… it did in my case…
sudo yum install fuse-libs
hope that helps
yes, you could add this in the tutorial ! it solved it in my case too. Thanks