- Cross Compile files on x86 Linux host for 96Boards ARM systems
- Assumptions
- Part 1 — A simple application
- Step 1: Update 96Boards (ARM) system and Host (x86 Machine) computer
- Step 2: If you are using libsoc and/or mraa, make sure they are installed and up to date
- Step 3: Install cross compilers on host machine
- Step 4: Install package dependencies
- Step 5: Create a workspace
- Step 6: Create a helloworld.c file with your favorite editor
- Step 7: Compile, test, and run x86 file from the command line
- Step 8: Cross compile, test, and run ARM file from the command line
- Part 2 — Shared libsoc C library
Cross Compile files on x86 Linux host for 96Boards ARM systems
This three-part set of instructions will walk you through basic command-line cross compilation on a Linux x86 system for ARM 96Boards devices.
Assumptions
- Linux host system is used as the cross compiling station
- Examples were tested on fully updated Ubuntu 15.04 and 16.04 releases
- Examples depend on the latest, matching libsoc and mraa libraries being installed on both devices (x86 machine, ARM machine)
- Libraries should be built from source to ensure they are current and will match. Instructions can be found here
- Examples were tested on a DragonBoard 410c, but should work with all 96Boards
This material was covered in our 7th OpenHours session and can be paired with this blog.
Part 1 — A simple application
Here you will learn to cross compile a simple application using Linux C and C++ toolchains. Cross compilation will happen on a Linux x86 machine for 96Boards ARM device.
Step 1: Update 96Boards (ARM) system and Host (x86 Machine) computer
The image on your board/host computer might be out of date. This is possible even when using the stock images, recent downloads, or a newly flashed version of any operating system.
A few useful commands will help us make sure everything on the board is current:
- apt-get update: Downloads package lists from online repositories and “updates” them to get information on the newest versions of packages and their dependencies.
- apt-get upgrade: Fetches and installs newest package versions which currently exist on the system. APT must know about these new versions by way of apt-get update
- apt-get dist-upgrade: In addition to performing the function of upgrade, this option also intelligently handles changing dependencies with new versions of packages
Commands:
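A sketch of the three operations described above, run in sequence on both the board and the host:

```shell
# Refresh the package lists first so APT knows about the newest versions.
sudo apt-get update
# Install the newest versions of packages already on the system.
sudo apt-get upgrade
# Optionally, also handle changed dependencies with new package versions.
sudo apt-get dist-upgrade
```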
Step 2: If you are using libsoc and/or mraa, make sure they are installed and up to date
libsoc installation: Please go here for first-time libsoc installation instructions.
Update: Change directory (cd) to your libsoc source and make sure you have latest code
Commands:
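A typical update sequence, assuming libsoc was originally cloned from its upstream git repository into ~/libsoc (the path is a placeholder) and builds with autotools:

```shell
cd ~/libsoc        # placeholder: wherever you cloned libsoc
git pull           # fetch the latest code
autoreconf -i      # regenerate the build scripts
./configure
make
sudo make install
```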
mraa installation: Please go here for first-time mraa installation instructions.
Update: Change directory ( cd ) to your mraa source and make sure you have the latest code.
Commands:
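mraa builds with CMake, so the equivalent sketch (again assuming a clone in ~/mraa, a placeholder path) looks like:

```shell
cd ~/mraa          # placeholder: wherever you cloned mraa
git pull           # fetch the latest code
mkdir -p build && cd build
cmake ..
make
sudo make install
```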
Step 3: Install cross compilers on host machine
The following commands install C and C++ cross compiler toolchains for 32bit and 64bit devices. You only need the toolchain that matches your board: if your 96Board has a 64bit SoC, install only the 64bit toolchain; if it is a 32bit board, install only the 32bit toolchain. This document will use the 64bit toolchain.
For ARM 32bit toolchain
$ sudo apt-get install gcc-arm-linux-gnueabihf g++-arm-linux-gnueabihf
For ARM 64bit toolchain
$ sudo apt-get install gcc-aarch64-linux-gnu g++-aarch64-linux-gnu
Step 4: Install package dependencies
$ sudo apt-get install build-essential autoconf libtool cmake pkg-config git python-dev swig3.0 libpcre3-dev nodejs-dev
Step 5: Create a workspace
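For example, with a directory name of your choosing:

```shell
# Create a working directory for the examples and move into it.
mkdir -p ~/cross-compile-workspace
cd ~/cross-compile-workspace
```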
Step 6: Create a helloworld.c file with your favorite editor
Example (using vim text editor):
Copy and paste the following into your helloworld.c file
Save and quit ( :wq )
Step 7: Compile, test, and run x86 file from the command line
Compile:
$ gcc helloworld.c -o helloworld.x86
Test:
The printout should show an x86 binary file
Run:
The printout should read "Hello World"
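The test and run steps above amount to the following, where helloworld.x86 is the binary produced by the compile step:

```shell
file helloworld.x86   # should report an x86 ELF executable
./helloworld.x86      # should print: Hello World
```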
Step 8: Cross compile, test, and run ARM file from the command line
Cross compile:
$ aarch64-linux-gnu-gcc helloworld.c -o helloworld.arm
Test:
The printout should show an aarch64 ARM file
Run (On 96Boards):
Copy the file to the 96Boards device and run it. It should print "Hello World".
Retrieve 96Boards IP address with the following command:
Commands (from host machine):
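A sketch of the whole sequence; linaro and 192.168.1.10 are placeholders for your board's actual username and IP address:

```shell
# On the board, find its IP address:
ip addr show

# On the host, check the binary and copy it over:
file helloworld.arm                        # should report an aarch64 ELF executable
scp helloworld.arm linaro@192.168.1.10:
ssh linaro@192.168.1.10 ./helloworld.arm   # should print: Hello World
```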
If you got this far, congratulations, your basic cross compiling is working! Now let's make it more complex and add a C shared library. For the rest of this document we will assume you have installed the libsoc and mraa libraries on your 96Boards device; they must be current and ready to use.
Part 2 — Shared libsoc C library
Installing libsoc will take a bit of doing, as we have to cross compile the library and then manually install it so it does not collide with the x86 libraries. We use a staged install: the DESTDIR environment variable (below) redirects the install step into a temporary location, from which we move the files into the proper cross compile location.
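The staged install can be sketched like this, assuming the libsoc source is in ~/libsoc and using ~/libsoc-staging as the temporary DESTDIR; both paths, and the /usr/aarch64-linux-gnu destination, are assumptions to adapt to your setup:

```shell
cd ~/libsoc                 # placeholder: your libsoc source tree
autoreconf -i
# Configure for the 64bit cross toolchain installed earlier.
./configure --host=aarch64-linux-gnu --prefix=/usr
make
# Stage the install into a temporary tree instead of the real /usr.
make DESTDIR=$HOME/libsoc-staging install
# Move the staged files into the cross toolchain's own prefix.
sudo cp -r $HOME/libsoc-staging/usr/* /usr/aarch64-linux-gnu/
```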
george-hawkins / arm64.md
QEMU arm64 cloud server emulation
This is basically a rehash of an original post on CNXSoft — all credit (particularly for the Virtio device arguments used below) belongs to the author of that piece.
Determine your current username and get your current ssh public key:
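For example:

```shell
whoami                  # your current username
cat ~/.ssh/id_rsa.pub   # your current ssh public key
```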
Use these values to create a cloud.txt file replacing the username, here shown as ghawkins , and the ssh-rsa value with the values appropriate for you:
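A minimal cloud.txt along these lines; the name and ssh-rsa key shown are placeholders, so substitute your own values from the previous step:

```shell
cat > cloud.txt <<'EOF'
#cloud-config
users:
  - name: ghawkins
    ssh-authorized-keys:
      - ssh-rsa AAAAB3NzaC1yc2E... ghawkins@host
    sudo: ['ALL=(ALL) NOPASSWD:ALL']
    groups: sudo
    shell: /bin/bash
EOF
```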
Important: the #cloud-config line above is not a comment and things will fail silently without it.
Create a cloud-config disk image:
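Using cloud-localds (from the cloud-image-utils package), that looks like:

```shell
cloud-localds --disk-format qcow2 cloud.img cloud.txt
```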
Note: by default cloud-localds creates a raw image, and QEMU now complains at having to guess about such an image, so use --disk-format qcow2 to specify a well defined format that QEMU can easily consume.
Backup your image:
The QEMU launch command is somewhat more complex than for e.g. a fully virtualized, rather than emulated, setup with an x86_64 guest running on an x86_64 host.
Here is the command first:
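Assembled from the argument list that follows, the launch command looks like:

```shell
qemu-system-aarch64 -smp 2 -m 1024 -M virt -cpu cortex-a57 \
    -bios QEMU_EFI.fd -nographic \
    -device virtio-blk-device,drive=image \
    -drive if=none,id=image,file=ubuntu-16.04-server-cloudimg-arm64-uefi1.img \
    -device virtio-blk-device,drive=cloud \
    -drive if=none,id=cloud,file=cloud.img \
    -device virtio-net-device,netdev=user0 \
    -netdev user,id=user0 \
    -redir tcp:2222::22
```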
You’ll have to change ubuntu-16.04-server-cloudimg-arm64-uefi1.img if you downloaded a later image with a different name.
Now let’s look at the arguments that configure our system:
- -smp 2 — 2 (virtual) cores.
- -m 1024 — 1024MB of system memory.
- -M virt — emulate a generic QEMU ARM machine.
- -cpu cortex-a57 — the CPU model to emulate.
- -bios QEMU_EFI.fd — the BIOS firmware file to use.
- -nographic — output goes to the terminal (rather than opening a graphics capable window).
- -device virtio-blk-device,drive=image — create a Virtio block device called "image".
- -drive if=none,id=image,file=ubuntu-16.04-server-cloudimg-arm64-uefi1.img — create a drive using the "image" device and our cloud server disk image.
- -device virtio-blk-device,drive=cloud — create another Virtio block device called "cloud".
- -drive if=none,id=cloud,file=cloud.img — create a drive using the "cloud" device and our cloud-config disk image.
- -device virtio-net-device,netdev=user0 — create a Virtio network device called "user0".
- -netdev user,id=user0 — create a user mode network stack using device "user0".
- -redir tcp:2222::22 — map port 2222 on the host to port 22 (the standard ssh port) on the guest.
Here we create a generic QEMU ARM machine. You can see a complete list of possible ARM machines like so:
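For example:

```shell
qemu-system-aarch64 -M help
```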
This list seems to include all ARM machines, not just 64-bit ones. The latest versions of QEMU (but not the one that currently comes with Ubuntu 16.04 LTS) include the well-known Raspberry Pi 2 (but not the 3).
For a given machine you can then see the supported processors:
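For the virt machine used here:

```shell
qemu-system-aarch64 -M virt -cpu help
```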
Once you run the command up above to launch an emulated ARM64 machine it will take a few minutes to boot and will output something like the following:
The initial error, about "no suitable video mode found", can be ignored — we specifically set -nographic .
Eventually a login prompt will appear — which cannot be used, as in our cloud-config file we only specified key-based ssh login.
Depending on how fast various jobs (kicked off during the boot process) run, further output will appear after the login prompt appears.
The first time you launch a given system you should see output confirming that the ssh key specified up above has been installed.
And eventually you should see something like:
Now in another terminal you can log in to the newly launched cloud server:
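Using the port redirected in the launch command, and the username from your cloud.txt (ghawkins is a placeholder):

```shell
ssh -p 2222 ghawkins@localhost
```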
If all goes well you’ll log straight in without any username or password.
If you've started previous QEMU images in a similar manner then ssh may issue a dire warning like so (and refuse to let you log in):
To resolve this and remove previous details:
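Since the connection went via localhost port 2222, the stale host key entry can be removed like so:

```shell
ssh-keygen -f ~/.ssh/known_hosts -R "[localhost]:2222"
```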
When logged into the cloud server you can:
- Confirm that it's an aarch64 system:
- Confirm that it has two cores:
- Shut it down:
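One way to run the three checks above from inside the guest (the original commands may have differed):

```shell
uname -m            # should print: aarch64
nproc               # should print: 2
sudo shutdown now   # shut the guest down
```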
In the original terminal (where you launched qemu-system-aarch64 ) you can follow the shutdown process.
Note: when running sudo shutdown now the shutdown succeeds but the following error appears:
You’ll see this anytime you run sudo — to resolve it (as per Ask Ubuntu) just edit /etc/hosts and add ubuntu at the end of the existing line for the address 127.0.0.1 so you end up with something like:
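So the edited line in the guest's /etc/hosts ends up looking something like:

```
127.0.0.1 localhost ubuntu
```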
QEMU x86_64 cloud server virtualization
Get a cloud image from:
Create a cloud-config called cloud.txt , which defines who can login etc. to the virtual cloud server, and create a disk image from it. For this you need your login name on your current system, along with the public part of your current ssh key:
Copy the line contained in id_rsa.pub into the ssh-authorized-keys section and replace the username specified by name with your username.
Important: I thought #cloud-config was a comment and left it out — but without it no error is reported but you cannot login later.
Backup your image:
Note: this is a compressed qcow2 image — while it's about 320MB, the running guest will see it as 2GB (as we'll confirm later).
Now start the cloud guest:
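Assembled from the argument list that follows:

```shell
qemu-system-x86_64 -enable-kvm -smp 2 -m 1024 -nographic \
    -hda ubuntu-16.10-server-cloudimg-amd64.img \
    -hdb cloud.img \
    -redir tcp:2222::22
```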
The command line arguments:
- -enable-kvm — full virtualization (rather than emulation).
- -smp 2 — two (virtual) processors (as we’ll confirm later).
- -m 1024 — 1024MB of system memory.
- -nographic — output goes to the terminal (rather than opening a graphics capable window).
- -hda ubuntu-16.10-server-cloudimg-amd64.img — use our Ubuntu cloud image as the primary drive.
- -hdb cloud.img — use the image we created from cloud.txt as the secondary drive.
- -redir tcp:2222::22 — map port 2222 on the host to port 22 (the standard ssh port) on the guest.
Once booted you eventually get to the console getty login prompt. No one can log in here — so you need to switch to another terminal tab.
Now let's log in to the guest using the redirected port, check out a few things, and then shut down the guest:
So above using df -h we can see that the disk appears to be 2GB and with cat /proc/cpuinfo we can see that we appear to have two processors. Finally using shutdown we can get back to the command prompt in the terminal where the guest was started.
TODO: see how changing the number of virtual CPUs affects the performance of the guest.
If you redo everything from scratch with a copy of the original disk image, the guest will generate a new key to identify itself, which will cause ssh to refuse to let you reconnect due to the changed key. To remove the old key from known_hosts do:
Working out how to get this far was down to:
The Ubuntu cloud images page wasn’t as helpful as it should be:
But it does cover uncompressing the qcow2 disk image and increasing its size (2GB isn’t much) and fancier stuff like creating a delta image to keep your initial disk image in a pristine condition.
beyond2002 commented Dec 19, 2017
I tried "QEMU arm64 cloud server emulation" with "xenial-server-cloudimg-arm64-uefi1.img", but can't log in with SSH. It seems that cloud.img is not working.
minhnv-viosoft commented Mar 12, 2018
ubuntu-16.04-server-cloudimg-armhf-disk1.img
How to run armhf on QEMU? There is no EFI disk for this version.
rshrotey commented Apr 4, 2018
When I try to launch QEMU for aarch64 I get the following error:
"-netdev user,id=user0: could not set up host forwarding rule 'tcp:2222::22'".
cirosantilli commented Jan 11, 2019
gabrik commented Feb 7, 2019
Thanks for the very useful gists. I'm trying to run the same image using libvirt; any ideas on how to write the XML file?
I'm stuck on an error in passing the BIOS file.
chankim commented Jan 12, 2021
Hi, I followed this, but qemu gives me "-redir: invalid option". Without the -redir option, it gets to the login prompt, but of course I can't log in using ssh. What can I do? My qemu-system-aarch64 version is 5.1.0.
george-hawkins commented Jan 12, 2021
@chankim — I think this SO answer covers your issue.
chankim commented Jan 14, 2021
Hi, George,
Thank you for this good information. I applied your SO answer (-nic user,hostfwd=tcp::5022-:22 instead of -redir).
This time it took much longer to reach the login prompt.
And near the end I saw the message below. (Some key values are modified for this post.) I'm not sure if this was ok. (It seems ok.)
And I tried ssh -p 2222 ckim@localhost but access was denied. I also tried ssh -p 5022 ckim@localhost in vain.
I would appreciate it if you (or anyone else) can give me any suggestions. Thanks!
(I don't know why many of the lines below are struck out.)
Ubuntu 16.04.7 LTS ubuntu ttyAMA0
ubuntu login: [ 132.546717] cloud-init[1239]: Generating locales (this might take a while).
[ 136.461912] cloud-init[1239]: en_US.UTF-8. done
[ 136.469791] cloud-init[1239]: Generation complete.
[ 139.689257] cloud-init[1239]: Cloud-init v. 20.4-0ubuntu1 16.04.1 running ‘modules:config’ at Thu, 14 Jan 2021 08:44:54 +0000. Up 131.08 seconds.
ci-info: Authorized keys from /home/ckim/.ssh/authorized_keys for user ckim
ci-info: +---------+----------------------+---------+---------+
ci-info: | Keytype | Fingerprint (sha256) | Options | Comment |
ci-info: +---------+----------------------+---------+---------+
ci-info: | ssh-rsa |          ?           |    -    |    -    |
ci-info: +---------+----------------------+---------+---------+
Jan 14 08:45:06 ec2:
Jan 14 08:45:06 ec2: #############################################################
Jan 14 08:45:06 ec2: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Jan 14 08:45:06 ec2: 1024 SHA256:mFHGCCcfGk+nnlJLpTfTuOP7ydqwTS4bxn/GiR2+F7s root@ubuntu (DSA)
Jan 14 08:45:06 ec2: 256 SHA256:H1XuP8WyUffzDE8tLam168jbNECxav0bhVSMsBmxzDs root@ubuntu (ECDSA)
Jan 14 08:45:06 ec2: 256 SHA256:53+YF/q6aN7z69mFjhXDptxBo1b89/2gU3bgigHY234 root@ubuntu (ED25519)
Jan 14 08:45:06 ec2: 2048 SHA256:0Dv9EKcmMIJ9sqBgxTwBMbFcP3YPduK6Nbj55lnPqFk root@ubuntu (RSA)
Jan 14 08:45:06 ec2: -----END SSH HOST KEY FINGERPRINTS-----
Jan 14 08:45:06 ec2: #############################################################
-----BEGIN SSH HOST KEY KEYS-----
ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHXyNTYAAABBBGxXZFaUcE32JooNrxw2LkQYDxEFpblTABtSgfY3R8DYpasGreD6CQFP6L5xYk1h/EETL+08kwprOIWIUS07ftg= root@ubuntu
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGICPN2DsGch1AW+1MilQzN+yYMypAmBt71bEii03pX7 root@ubuntu
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABBAABAQDLZIy7XpwYTNjYNbLc3xgK/9rGPEoLNwQH0ETLtQuiUYp/Oy9+TblXrQu8XsEJ3AhdpGePdMs7OKExf5LJnh4F+2HZO3P7WkNYKOPvhwyF0xl7UEABuy1FmUpEo7qvdjA7kE3ez9ymF+ix1DOqlH3Wo2gol+JISkQfeJOAiawBtrTw/tl2LqKh7wRe78bZJ950vpc7UKliAGdvES+KKTJW+rds3+bVb9nHx8hZk4yR0+IP8nWTeCOS5lc4kcf2PxNDoAK/kGJ8iXBM8Kt9i9j9WYEyMAoRNxiCbFLhDUGKoWhFQLnlk0qC4Ltei35laN2yD7jIMn/vWn2SsAvesNcR root@ubuntu
-----END SSH HOST KEY KEYS-----
[ 143.845149] cloud-init[1283]: Cloud-init v. 20.4-0ubuntu1 16.04.1 running ‘modules:final’ at Thu, 14 Jan 2021 08:45:05 +0000. Up 142.20 seconds.
[ 143.858762] cloud-init[1283]: Cloud-init v. 20.4-0ubuntu1
16.04.1 finished at Thu, 14 Jan 2021 08:45:07 +0000. Datasource DataSourceNoCloud [seed=/dev/vda][dsmode=net]. Up 143.75 seconds