What is DMA in Linux?

How To Get the Host Name and Domain Name in Linux?

The host name is the identifier of a system on the network. It can be obtained in several ways in Linux, and is generally stored in the /etc/hostname file.

Get Host Name by Printing the Hostname File

Host name information is stored in the file /etc/hostname, so we can simply print its contents to the terminal with the cat command.
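A minimal sketch of printing that file (the fallback to uname -n is an addition for systems where the file is absent):

```shell
# Print the host name stored in /etc/hostname; fall back to the
# kernel's node name if the file does not exist.
cat /etc/hostname 2>/dev/null || uname -n
```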


Get Host name With hostname Command

We can get the host name with the hostname command. This prints only the host name, without any domain information.
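For example (a sketch; uname -n is used here as a fallback in case the hostname utility is not installed):

```shell
# Print the short host name of the system.
hostname 2>/dev/null || uname -n
```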


Get Fully Qualified Host name

We can get fully qualified host name which provides full name with domain information.
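On most Linux distributions the hostname utility accepts -f for the fully qualified name and -d for just the domain part (a sketch; -f can fail if no FQDN is configured, so a fallback is included):

```shell
# Print the fully qualified host name, falling back to the node name.
hostname -f 2>/dev/null || uname -n

# Print only the DNS domain part, if any is configured.
hostname -d 2>/dev/null || true
```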


Hostname File

The host name file /etc/hostname contains only the host name, possibly as a fully qualified domain name. There is no other data in this file.

Change Host name

As the hostname is stored in the /etc/hostname file, we can change it by editing this file. We will set the hostname to ubu1 with an echo command. Keep in mind that changing the /etc/hostname file requires root privileges, which can be obtained with the sudo command.
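A sketch of the change (on a real system HOSTNAME_FILE is /etc/hostname and the write needs root, e.g. `echo ubu1 | sudo tee /etc/hostname`; a temporary file is used here so the sketch can run unprivileged):

```shell
# Rewrite the hostname file with the new name "ubu1".
# HOSTNAME_FILE stands in for /etc/hostname in this sketch.
HOSTNAME_FILE="${HOSTNAME_FILE:-$(mktemp)}"
echo "ubu1" > "$HOSTNAME_FILE"
cat "$HOSTNAME_FILE"
```

Note that editing the file takes effect at the next boot; many systems also provide hostnamectl to apply the change immediately.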


Get Domain Name

The domain name is part of the system's network configuration. It can be printed with the dnsdomainname command or with hostname -d.



Linux: find out information about current domain name and host name

Q. Under Windows Server 2003 I can use Active Directory domain tools to get information about the current domain and hostname. Can you tell me the command to list the current domain name and hostname under Red Hat Enterprise Linux 5?

A. Both Linux and UNIX come with the following utilities to display the hostname / domain name:

a) hostname – show or set the system’s host name

b) domainname – show or set the system’s NIS/YP domain name


c) dnsdomainname – show the system’s DNS domain name

d) nisdomainname – show or set system’s NIS/YP domain name

e) ypdomainname – show or set the system’s NIS/YP domain name

For example, hostname is the program that is used to either set or display the current host, domain or node name of the system. These names are used by many of the networking programs to identify the machine.
$ hostname
This prints the system's host name.

The domain name is also used by NIS/YP or Internet DNS:
$ dnsdomainname
This prints the system's DNS domain name, if one is configured.



Analog Devices Wiki


DMA Operations

Introduction

Linux DMA Framework

There are two aspects of the Linux DMA framework: the DMA mapping API, which drivers use to obtain DMA (bus) addresses for memory buffers, and the DMA engine API, which drivers use to drive DMA controller hardware.

Linux DMA Mapping API

If you are writing a portable device driver, make sure to use the generic DMA APIs (for a full list please refer to the documentation):

What is a bus address

When the CPU (say, with the MMU turned off) wants to access physical memory, it puts that address on its output pins. This is a physical address.


So a bus address is the address used by a peripheral to access a certain physical address.

Generic DMA mapping guide

Please refer to the Linux kernel document DMA API HOWTO for details.

DMA APIs for SC5xx

The SC5xx processor offers a wide array of DMA capabilities.

Flow Types and Descriptor

The flow type can be defined in a CONFIG word in a descriptor, so the modes can be mixed and the resulting operation can be quite complex.

Descriptor Memory Layout

MDMA Copy Wrapper for Linux Drivers

DMA Operation for Linux Drivers

Please refer to: arch/arm/mach-sc58x/include/mach/dma.h, and arch/arm/mach-sc58x/dma.c or arch/arm/mach-sc57x/include/mach/dma.h, and arch/arm/mach-sc57x/dma.c.

DMA Example



What is DMA mapping and DMA engine in context of linux kernel?

What is DMA mapping and what is a DMA engine in the context of the Linux kernel? When can the DMA mapping API and the DMA engine API be used in a Linux device driver? Any real Linux device driver example as a reference would be great.

2 Answers

What is DMA mapping and DMA engine in context of linux kernel?

The kernel normally uses virtual addresses. Functions like kmalloc() and vmalloc() return virtual addresses, which can be stored in a void*. The virtual memory system converts these addresses to physical addresses. These physical addresses are not directly useful to drivers; drivers must use ioremap() to map the space and produce a virtual address.

If a device supports DMA, the driver sets up a buffer using kmalloc() or a similar interface, which returns a virtual address (X). The virtual memory system maps X to a physical address (Y) in system RAM. The driver can use virtual address X to access the buffer, but the device itself cannot, because DMA doesn't go through the CPU's virtual memory system. In some systems the device can only do DMA directly to a physical address; in other systems, IOMMU hardware is used to translate the DMA address (Z) to the physical address (Y).

When DMA mapping API can be used in Linux Device Driver?

The reason to use the DMA mapping API is that the driver can pass virtual address X to an interface like dma_map_single(), which sets up any required IOMMU mapping and returns the DMA address Z. The driver then tells the device to do DMA to Z, and the IOMMU maps it to the buffer at address Y in system RAM.

This explanation is taken from the kernel's DMA API HOWTO documentation.

Any real Linux Device Driver example as a reference would be great.

Inside the Linux kernel source tree you can look at drivers/dma for various real drivers.


DMA Engine API Guide

For DMA Engine usage in async_tx please see: Documentation/crypto/async-tx-api.rst

Below is a guide for device driver writers on how to use the Slave-DMA API of the DMA Engine. This guide applies to slave DMA usage only.


DMA usage

The slave DMA usage consists of the following steps:

Allocate a DMA slave channel

Set slave and controller specific parameters

Get a descriptor for transaction

Submit the transaction

Issue pending requests and wait for callback notification

The details of these operations are:

Allocate a DMA slave channel

Channel allocation is slightly different in the slave DMA context: client drivers typically need a channel from a particular DMA controller, and in some cases even a specific channel is desired. To request a channel, the dma_request_chan() API is used.

This will find and return the DMA channel associated with the 'dev' device. The association is done via DT, ACPI or a board-file based dma_slave_map matching table.

A channel allocated via this interface is exclusive to the caller, until dma_release_channel() is called.

Set slave and controller specific parameters

The next step is always to pass some specific information to the DMA driver. Most of the generic information which a slave DMA can use is in struct dma_slave_config. This allows the clients to specify DMA direction, DMA addresses, bus widths, DMA burst lengths etc. for the peripheral.

If a DMA controller has more parameters to be sent, it should embed struct dma_slave_config in its controller-specific structure. That gives flexibility to the client to pass more parameters, if required.

Please see the dma_slave_config structure definition in dmaengine.h for a detailed explanation of the struct members. Please note that the ‘direction’ member will be going away as it duplicates the direction given in the prepare call.

Get a descriptor for transaction

For slave usage the various modes of slave transfers supported by the DMA-engine are:

slave_sg: DMA a list of scatter gather buffers from/to a peripheral

dma_cyclic: Perform a cyclic DMA operation from/to a peripheral till the operation is explicitly stopped.

interleaved_dma: This is common to slave as well as M2M clients. For slave usage the address of the device's FIFO may already be known to the driver. Various types of operations can be expressed by setting appropriate values in the 'dma_interleaved_template' members. Cyclic interleaved DMA transfers are also possible if supported by the channel, by setting the DMA_PREP_REPEAT transfer flag.

A non-NULL return of this transfer API represents a “descriptor” for the given transaction.

The peripheral driver is expected to have mapped the scatterlist for the DMA operation prior to calling dmaengine_prep_slave_sg(), and must keep the scatterlist mapped until the DMA operation has completed. The scatterlist must be mapped using the DMA struct device. If a mapping needs to be synchronized later, dma_sync_*_for_*() must be called using the DMA struct device, too. So, in a normal setup, the driver maps the scatterlist with dma_map_sg() against the DMA device and then hands the mapped list to dmaengine_prep_slave_sg().

Once a descriptor has been obtained, the callback information can be added and the descriptor must then be submitted. Some DMA engine drivers may hold a spinlock between a successful preparation and submission so it is important that these two operations are closely paired.

Although the async_tx API specifies that completion callback routines cannot submit any new operations, this is not the case for slave/cyclic DMA.

For slave DMA, the subsequent transaction may not be available for submission prior to callback function being invoked, so slave DMA callbacks are permitted to prepare and submit a new transaction.

For cyclic DMA, a callback function may wish to terminate the DMA via dmaengine_terminate_async().

Therefore, it is important that DMA engine drivers drop any locks before calling the callback function which may cause a deadlock.

Note that callbacks will always be invoked from the DMA engine's tasklet, never from interrupt context.

Optional: per descriptor metadata

DMAengine provides two ways for metadata support: DESC_METADATA_CLIENT and DESC_METADATA_ENGINE.

DESC_METADATA_CLIENT: the metadata buffer is allocated/provided by the client driver and is attached to the descriptor.

DESC_METADATA_ENGINE: the metadata buffer is allocated/managed by the DMA driver. The client driver can ask for the pointer, maximum size and the currently used size of the metadata and can directly update or read it.

Because the DMA driver manages the memory area containing the metadata, clients must make sure that they do not try to access or get the pointer after their transfer completion callback has run for the descriptor. If no completion callback has been defined for the transfer, then the metadata must not be accessed after issue_pending. In other words: if the aim is to read back metadata after the transfer is completed, then the client must use a completion callback.

Client drivers can query whether a given mode is supported with dmaengine_is_metadata_mode_supported().


Depending on the used mode, client drivers must follow a different flow.

DESC_METADATA_CLIENT

For DMA_MEM_TO_DEV and DEV_MEM_TO_MEM transfers:

prepare the descriptor (dmaengine_prep_*)

construct the metadata in the client's buffer

use dmaengine_desc_attach_metadata() to attach the buffer to the descriptor

submit the transfer

For DMA_DEV_TO_MEM transfers:

prepare the descriptor (dmaengine_prep_*)

use dmaengine_desc_attach_metadata() to attach the buffer to the descriptor

submit the transfer

when the transfer is completed, the metadata should be available in the attached buffer

DESC_METADATA_ENGINE

For DMA_MEM_TO_DEV and DEV_MEM_TO_MEM transfers:

prepare the descriptor (dmaengine_prep_*)

use dmaengine_desc_get_metadata_ptr() to get the pointer to the engine's metadata area

update the metadata at the pointer

use dmaengine_desc_set_metadata_len() to tell the DMA engine the amount of data the client has placed into the metadata buffer

submit the transfer

For DMA_DEV_TO_MEM transfers:

prepare the descriptor (dmaengine_prep_*)

submit the transfer

on transfer completion, use dmaengine_desc_get_metadata_ptr() to get the pointer to the engine's metadata area

read out the metadata from the pointer

When DESC_METADATA_ENGINE mode is used the metadata area for the descriptor is no longer valid after the transfer has been completed (valid up to the point when the completion callback returns if used).

Mixed use of DESC_METADATA_CLIENT / DESC_METADATA_ENGINE is not allowed, client drivers must use either of the modes per descriptor.

Submit the transaction

Once the descriptor has been prepared and the callback information added, it must be placed on the DMA engine drivers pending queue.

This returns a cookie that can be used to check the progress of DMA engine activity via other DMA engine calls not covered in this document.

dmaengine_submit() will not start the DMA operation, it merely adds it to the pending queue. For this, see step 5, dma_async_issue_pending.

After calling dmaengine_submit() the submitted transfer descriptor (struct dma_async_tx_descriptor) belongs to the DMA engine. Consequently, the client must consider the pointer to that descriptor invalid.

Issue pending DMA requests and wait for callback notification

The transactions in the pending queue can be activated by calling the issue_pending API. If the channel is idle, then the first transaction in the queue is started and subsequent ones are queued up.

On completion of each DMA operation, the next in queue is started and a tasklet triggered. The tasklet will then call the client driver completion callback routine for notification, if set.

Further APIs

Terminating a channel, via dmaengine_terminate_sync() or dmaengine_terminate_async(), causes all activity for the DMA channel to be stopped, and may discard data in the DMA FIFO which hasn't been fully transferred. No callback functions will be called for any incomplete transfers.

Two variants of this function are available.

dmaengine_terminate_async() might not wait until the DMA has been fully stopped or until any running complete callbacks have finished. But it is possible to call dmaengine_terminate_async() from atomic context or from within a complete callback. dmaengine_synchronize() must be called before it is safe to free the memory accessed by the DMA transfer or free resources accessed from within the complete callback.

dmaengine_terminate_sync() will wait for the transfer and any running complete callbacks to finish before it returns. But the function must not be called from atomic context or from within a complete callback.

dmaengine_terminate_all() is deprecated and should not be used in new code.

dmaengine_pause() pauses activity on the DMA channel without data loss.

dmaengine_resume() resumes a previously paused DMA channel. It is invalid to resume a channel which is not currently paused.

Check Txn complete

dma_async_is_tx_complete() can be used to check the status of the channel. Please see the documentation in include/linux/dmaengine.h for a more complete description of this API.

This can be used in conjunction with dma_async_is_complete() and the cookie returned from dmaengine_submit() to check for completion of a specific DMA transaction.

Not all DMA engine drivers can return reliable information for a running DMA channel. It is recommended that DMA engine users pause or stop (via dmaengine_terminate_all()) the channel before using this API.

Synchronize termination API

dmaengine_synchronize() synchronizes the termination of the DMA channel to the current context.

This function should be used after dmaengine_terminate_async() to synchronize the termination of the DMA channel to the current context. The function will wait for the transfer and any running complete callbacks to finish before it returns.

If dmaengine_terminate_async() is used to stop the DMA channel this function must be called before it is safe to free memory accessed by previously submitted descriptors or to free any resources accessed within the complete callback of previously submitted descriptors.

The behavior of this function is undefined if dma_async_issue_pending() has been called between dmaengine_terminate_async() and this function.

© Copyright The kernel development community.

