		     Dynamic DMA mapping Guide
		     =========================

		 David S. Miller <davem@redhat.com>
		 Richard Henderson <rth@cygnus.com>
		  Jakub Jelinek <jakub@redhat.com>

This is a guide to device driver writers on how to use the DMA API
with example pseudo-code.  For a concise description of the API, see
DMA-API.txt.

                       CPU and DMA addresses

There are several kinds of addresses involved in the DMA API, and it's
important to understand the differences.

The kernel normally uses virtual addresses.  Any address returned by
kmalloc(), vmalloc(), and similar interfaces is a virtual address and can
be stored in a "void *".

The virtual memory system (TLB, page tables, etc.) translates virtual
addresses to CPU physical addresses, which are stored as "phys_addr_t" or
"resource_size_t".  The kernel manages device resources like registers as
physical addresses.  These are the addresses in /proc/iomem.  The physical
address is not directly useful to a driver; it must use ioremap() to map
the space and produce a virtual address.

I/O devices use a third kind of address: a "bus address".  If a device has
registers at an MMIO address, or if it performs DMA to read or write system
memory, the addresses used by the device are bus addresses.  In some
systems, bus addresses are identical to CPU physical addresses, but in
general they are not.  IOMMUs and host bridges can produce arbitrary
mappings between physical and bus addresses.

From a device's point of view, DMA uses the bus address space, but it may
be restricted to a subset of that space.  For example, even if a system
supports 64-bit addresses for main memory and PCI BARs, it may use an IOMMU
so devices only need to use 32-bit DMA addresses.

Here's a picture and some examples:

               CPU                  CPU                  Bus
             Virtual              Physical             Address
             Address              Address               Space
              Space                Space

            +-------+             +------+             +------+
            |       |             |MMIO  |   Offset    |      |
            |       |  Virtual    |Space |   applied   |      |
          C +-------+ --------> B +------+ ----------> +------+ A
            |       |  mapping    |      |   by host   |      |
  +-----+   |       |             |      |   bridge    |      |   +--------+
  |     |   |       |             +------+             |      |   |        |
  | CPU |   |       |             | RAM  |             |      |   | Device |
  |     |   |       |             |      |             |      |   |        |
  +-----+   +-------+             +------+             +------+   +--------+
            |       |  Virtual    |Buffer|   Mapping   |      |
          X +-------+ --------> Y +------+ <---------- +------+ Z
            |       |  mapping    | RAM  |   by IOMMU
            |       |             |      |
            |       |             |      |
            +-------+             +------+

During the enumeration process, the kernel learns about I/O devices and
their MMIO space and the host bridges that connect them to the system.  For
example, if a PCI device has a BAR, the kernel reads the bus address (A)
from the BAR and converts it to a CPU physical address (B).  The address B
is stored in a struct resource and usually exposed via /proc/iomem.  When a
driver claims a device, it typically uses ioremap() to map physical address
B at a virtual address (C).  It can then use, e.g., ioread32(C), to access
the device registers at bus address A.

If the device supports DMA, the driver sets up a buffer using kmalloc() or
a similar interface, which returns a virtual address (X).  The virtual
memory system maps X to a physical address (Y) in system RAM.  The driver
can use virtual address X to access the buffer, but the device itself
cannot because DMA doesn't go through the CPU virtual memory system.

In some simple systems, the device can do DMA directly to physical address
Y.  But in many others, there is IOMMU hardware that translates DMA
addresses to physical addresses, e.g., it translates Z to Y.  This is part
of the reason for the DMA API: the driver can give a virtual address X to
an interface like dma_map_single(), which sets up any required IOMMU
mapping and returns the DMA address Z.  The driver then tells the device to
do DMA to Z, and the IOMMU maps it to the buffer at address Y in system
RAM.

For Linux to use dynamic DMA mapping, it needs some help from the
drivers: namely, a driver has to take into account that DMA addresses
should be mapped only for the time they are actually used, and unmapped
after the DMA transfer.

Of course, the following API works even on platforms where no such
hardware exists.

Note that the DMA API works with any bus, independent of the underlying
microprocessor architecture.  You should use the DMA API rather than the
bus-specific DMA API, i.e., use the dma_map_*() interfaces rather than the
pci_map_*() interfaces.

First of all, you should make sure

#include <linux/dma-mapping.h>

is in your driver, which provides the definition of dma_addr_t.  This type
can hold any valid DMA address for the platform and should be used
everywhere you hold a DMA address returned from the DMA mapping functions.

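For example, a driver might keep such a handle in its per-device state.
A minimal sketch (the structure and field names here are illustrative,
not part of the API):

	struct mydev_priv {
		void *rx_buf;		/* CPU (virtual) address */
		dma_addr_t rx_dma;	/* same buffer, as the device sees it */
		size_t rx_len;
	};
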
			 What memory is DMA'able?

The first piece of information you must know is what kernel memory can
be used with the DMA mapping facilities.  There has been an unwritten
set of rules regarding this, and this text is an attempt to finally
write them down.

If you acquired your memory via the page allocator
(i.e. __get_free_page*()) or the generic memory allocators
(i.e. kmalloc() or kmem_cache_alloc()) then you may DMA to/from
that memory using the addresses returned from those routines.

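A hedged sketch of this common case (the size and flags are
illustrative):

	void *buf;
	dma_addr_t dma_handle;

	buf = kmalloc(4096, GFP_KERNEL);	/* slab memory: DMA'able */
	if (!buf)
		return -ENOMEM;

	/* this address may later be handed to the mapping functions: */
	dma_handle = dma_map_single(dev, buf, 4096, DMA_TO_DEVICE);
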
This means specifically that you may _not_ use the memory/addresses
returned from vmalloc() for DMA.  It is possible to DMA to the
_underlying_ memory mapped into a vmalloc() area, but this requires
walking page tables to get the physical addresses, and then
translating each of those pages back to a kernel address using
something like __va().  [ EDIT: Update this when we integrate
Gerd Knorr's generic code which does this. ]

This rule also means that you may use neither kernel image addresses
(items in data/text/bss segments), nor module image addresses, nor
stack addresses for DMA.  These could all be mapped somewhere entirely
different than the rest of physical memory.  Even if those classes of
memory could physically work with DMA, you'd need to ensure the I/O
buffers were cacheline-aligned.  Without that, you'd see cacheline
sharing problems (data corruption) on CPUs with DMA-incoherent caches.
(The CPU could write to one word, DMA would write to a different one
in the same cache line, and one of them could be overwritten.)

Also, this means that you cannot take the return of a kmap()
call and DMA to/from that.  This is similar to vmalloc().

What about block I/O and networking buffers?  The block I/O and
networking subsystems make sure that the buffers they use are valid
for you to DMA from/to.

			DMA addressing limitations

Does your device have any DMA addressing limitations?  For example, is
your device only capable of driving the low order 24 bits of address?
If so, you need to inform the kernel of this fact.

By default, the kernel assumes that your device can address the full
32 bits.  For a 64-bit capable device, this needs to be increased.
And for a device with limitations, as discussed in the previous
paragraph, it needs to be decreased.

Special note about PCI: the PCI-X specification requires PCI-X devices
to support 64-bit addressing (DAC) for all transactions.  And at least
one platform (SGI SN2) requires 64-bit consistent allocations to
operate correctly when the IO bus is in PCI-X mode.

For correct operation, you must interrogate the kernel in your device
probe routine to see if the DMA controller on the machine can properly
support the DMA addressing limitation your device has.  It is good
style to do this even if your device holds the default setting,
because this shows that you did think about these issues with respect
to your device.

The query is performed via a call to dma_set_mask_and_coherent():

	int dma_set_mask_and_coherent(struct device *dev, u64 mask);

which will query the mask for both streaming and coherent APIs together.
If you have some special requirements, then the following two separate
queries can be used instead:

	The query for streaming mappings is performed via a call to
	dma_set_mask():

		int dma_set_mask(struct device *dev, u64 mask);

	The query for consistent allocations is performed via a call
	to dma_set_coherent_mask():

		int dma_set_coherent_mask(struct device *dev, u64 mask);

Here, dev is a pointer to the device struct of your device, and mask
is a bit mask describing which bits of an address your device
supports.  It returns zero if your card can perform DMA properly on
the machine given the address mask you provided.  In general, the
device struct of your device is embedded in the bus-specific device
struct of your device.  For example, &pdev->dev is a pointer to the
device struct of a PCI device (pdev is a pointer to the PCI device
struct of your device).

If it returns non-zero, your device cannot perform DMA properly on
this platform, and attempting to do so will result in undefined
behavior.  You must either use a different mask, or not use DMA.

This means that in the failure case, you have three options:

1) Use another DMA mask, if possible (see below).
2) Use some non-DMA mode for data transfer, if possible.
3) Ignore this device and do not initialize it.

It is recommended that your driver print a kernel KERN_WARNING message
when you end up performing either #2 or #3.  In this manner, if a user
of your driver reports that performance is bad or that the device is not
even detected, you can ask them for the kernel messages to find out
exactly why.

The standard 32-bit addressing device would do something like this:

	if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32))) {
		dev_warn(dev, "mydev: No suitable DMA available\n");
		goto ignore_this_device;
	}

Another common scenario is a 64-bit capable device.  The approach here
is to try for 64-bit addressing, but back down to a 32-bit mask that
should not fail.  The kernel may fail the 64-bit mask not because the
platform is not capable of 64-bit addressing.  Rather, it may fail in
this case simply because 32-bit addressing is done more efficiently
than 64-bit addressing.  For example, Sparc64 PCI SAC addressing is
more efficient than DAC addressing.

Here is how you would handle a 64-bit capable device which can drive
all 64-bits when accessing streaming DMA:

	int using_dac;

	if (!dma_set_mask(dev, DMA_BIT_MASK(64))) {
		using_dac = 1;
	} else if (!dma_set_mask(dev, DMA_BIT_MASK(32))) {
		using_dac = 0;
	} else {
		dev_warn(dev, "mydev: No suitable DMA available\n");
		goto ignore_this_device;
	}

If a card is capable of using 64-bit consistent allocations as well,
the case would look like this:

	int using_dac, consistent_using_dac;

	if (!dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64))) {
		using_dac = 1;
		consistent_using_dac = 1;
	} else if (!dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32))) {
		using_dac = 0;
		consistent_using_dac = 0;
	} else {
		dev_warn(dev, "mydev: No suitable DMA available\n");
		goto ignore_this_device;
	}

Setting the coherent mask will always succeed with the same or a smaller
mask than the streaming mask.  However, for the rare case that a device
driver only uses consistent allocations, one would have to check the
return value from dma_set_coherent_mask().

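A sketch of that coherent-only check, following the error-handling
pattern above ("mydev" and the label are placeholders):

	if (dma_set_coherent_mask(dev, DMA_BIT_MASK(64))) {
		dev_warn(dev, "mydev: No suitable coherent DMA available\n");
		goto ignore_this_device;
	}
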
Finally, if your device can only drive the low 24 bits of
address you might do something like:

	if (dma_set_mask(dev, DMA_BIT_MASK(24))) {
		dev_warn(dev, "mydev: 24-bit DMA addressing not available\n");
		goto ignore_this_device;
	}

When dma_set_mask() or dma_set_mask_and_coherent() is successful, and
returns zero, the kernel saves away this mask you have provided.  The
kernel will use this information later when you make DMA mappings.

There is one case we are aware of that is worth mentioning here.  If
your device supports multiple functions (for example, a sound card
provides playback and record functions) and the various different
functions have _different_ DMA addressing limitations, you may wish to
probe each mask and only provide the functionality which the machine
can handle.  It is important that the last call to dma_set_mask() be
for the most specific mask.

Here is pseudo-code showing how this might be done:

	#define PLAYBACK_ADDRESS_BITS	DMA_BIT_MASK(32)
	#define RECORD_ADDRESS_BITS	DMA_BIT_MASK(24)

	struct my_sound_card *card;
	struct device *dev;

	...
	if (!dma_set_mask(dev, PLAYBACK_ADDRESS_BITS)) {
		card->playback_enabled = 1;
	} else {
		card->playback_enabled = 0;
		dev_warn(dev, "%s: Playback disabled due to DMA limitations\n",
		       card->name);
	}
	if (!dma_set_mask(dev, RECORD_ADDRESS_BITS)) {
		card->record_enabled = 1;
	} else {
		card->record_enabled = 0;
		dev_warn(dev, "%s: Record disabled due to DMA limitations\n",
		       card->name);
	}

A sound card was used as an example here because this genre of PCI
devices seems to be littered with ISA chips given a PCI front end,
and thus retaining the 16MB DMA addressing limitations of ISA.

			Types of DMA mappings

There are two types of DMA mappings:

- Consistent DMA mappings which are usually mapped at driver
  initialization, unmapped at the end and for which the hardware should
  guarantee that the device and the CPU can access the data
  in parallel and will see updates made by each other without any
  explicit software flushing.

  Think of "consistent" as "synchronous" or "coherent".

  The current default is to return consistent memory in the low 32
  bits of the DMA space.  However, for future compatibility you should
  set the consistent mask even if this default is fine for your
  driver.

  Good examples of what to use consistent mappings for are:

	- Network card DMA ring descriptors.
	- SCSI adapter mailbox command data structures.
	- Device firmware microcode executed out of
	  main memory.

  The invariant these examples all require is that any CPU store
  to memory is immediately visible to the device, and vice
  versa.  Consistent mappings guarantee this.

  IMPORTANT: Consistent DMA memory does not preclude the usage of
             proper memory barriers.  The CPU may reorder stores to
             consistent memory just as it may reorder stores to normal
             memory.  Example: if it is important for the device to
             see the first word of a descriptor updated before the
             second, you must do something like:

		desc->word0 = address;
		wmb();
		desc->word1 = DESC_VALID;

             in order to get correct behavior on all platforms.

             Also, on some platforms your driver may need to flush CPU
             write buffers in much the same way as it needs to flush
             write buffers found in PCI bridges, such as by reading a
             register's value after writing it (see the sketch after
             this list).

- Streaming DMA mappings which are usually mapped for one DMA
  transfer, unmapped right after it (unless you use dma_sync_* below)
  and for which hardware can optimize for sequential accesses.

  Think of "streaming" as "asynchronous" or "outside the coherency
  domain".

  Good examples of what to use streaming mappings for are:

	- Networking buffers transmitted/received by a device.
	- Filesystem buffers written/read by a SCSI device.

  The interfaces for using this type of mapping were designed in
  such a way that an implementation can make whatever performance
  optimizations the hardware allows.  To this end, when using
  such mappings you must be explicit about what you want to happen.

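As a hedged illustration of the posted-write flush mentioned in the
consistent-mapping notes above (the register offset MY_KICK_REG and the
ioremap()'d base ioaddr are hypothetical):

	void __iomem *ioaddr;	/* obtained earlier from ioremap() */

	iowrite32(1, ioaddr + MY_KICK_REG);	/* start the DMA engine */
	(void) ioread32(ioaddr + MY_KICK_REG);	/* read back: flushes posted writes */
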
Neither type of DMA mapping has alignment restrictions that come from
the underlying bus, although some devices may have such restrictions.
Also, systems with caches that aren't DMA-coherent will work better
when the underlying buffers don't share cache lines with other data.


		 Using Consistent DMA mappings.

To allocate and map large (PAGE_SIZE or so) consistent DMA regions,
you should do:

	dma_addr_t dma_handle;

	cpu_addr = dma_alloc_coherent(dev, size, &dma_handle, gfp);

where dev is a struct device *.  This may be called in interrupt
context with the GFP_ATOMIC flag.

Size is the length of the region you want to allocate, in bytes.

This routine will allocate RAM for that region, so it acts similarly to
__get_free_pages() (but takes size instead of a page order).  If your
driver needs regions sized smaller than a page, you may prefer using
the dma_pool interface, described below.

The consistent DMA mapping interfaces, for non-NULL dev, will by
default return a DMA address which is 32-bit addressable.  Even if the
device indicates (via DMA mask) that it may address the upper 32-bits,
consistent allocation will only return > 32-bit addresses for DMA if
the consistent DMA mask has been explicitly changed via
dma_set_coherent_mask().  This is true of the dma_pool interface as
well.

dma_alloc_coherent() returns two values: the virtual address which you
can use to access it from the CPU and dma_handle which you pass to the
card.

The CPU virtual address and the DMA address are both
guaranteed to be aligned to the smallest PAGE_SIZE order which
is greater than or equal to the requested size.  This invariant
exists (for example) to guarantee that if you allocate a chunk
which is smaller than or equal to 64 kilobytes, the extent of the
buffer you receive will not cross a 64K boundary.

To unmap and free such a DMA region, you call:

	dma_free_coherent(dev, size, cpu_addr, dma_handle);

where dev, size are the same as in the above call and cpu_addr and
dma_handle are the values dma_alloc_coherent() returned to you.
This function may not be called in interrupt context.

If your driver needs lots of smaller memory regions, you can write
custom code to subdivide pages returned by dma_alloc_coherent(),
or you can use the dma_pool API to do that.  A dma_pool is like
a kmem_cache, but it uses dma_alloc_coherent(), not __get_free_pages().
Also, it understands common hardware constraints for alignment,
like queue heads needing to be aligned on N byte boundaries.

Create a dma_pool like this:

	struct dma_pool *pool;

	pool = dma_pool_create(name, dev, size, align, boundary);

The "name" is for diagnostics (like a kmem_cache name); dev and size
are as above.  The device's hardware alignment requirement for this
type of data is "align" (which is expressed in bytes, and must be a
power of two).  If your device has no boundary crossing restrictions,
pass 0 for boundary; passing 4096 says memory allocated from this pool
must not cross 4KByte boundaries (but at that time it may be better to
use dma_alloc_coherent() directly instead).

Allocate memory from a DMA pool like this:

	cpu_addr = dma_pool_alloc(pool, flags, &dma_handle);

flags are GFP_KERNEL if blocking is permitted (not in_interrupt nor
holding SMP locks), GFP_ATOMIC otherwise.  Like dma_alloc_coherent(),
this returns two values, cpu_addr and dma_handle.

Free memory that was allocated from a dma_pool like this:

	dma_pool_free(pool, cpu_addr, dma_handle);

where pool is what you passed to dma_pool_alloc(), and cpu_addr and
dma_handle are the values dma_pool_alloc() returned.  This function
may be called in interrupt context.

Destroy a dma_pool by calling:

	dma_pool_destroy(pool);

Make sure you've called dma_pool_free() for all memory allocated
from a pool before you destroy the pool.  This function may not
be called in interrupt context.

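Putting those pieces together, a minimal sketch of a pool's lifetime
(the pool name, sizes, and labels are illustrative):

	struct dma_pool *pool;
	void *vaddr;
	dma_addr_t dma_handle;

	/* 64-byte blocks, 64-byte aligned, no boundary restriction */
	pool = dma_pool_create("mydev_desc", dev, 64, 64, 0);
	if (!pool)
		goto fail;

	vaddr = dma_pool_alloc(pool, GFP_KERNEL, &dma_handle);
	if (!vaddr)
		goto destroy;

	/* ... hand dma_handle to the device, use vaddr from the CPU ... */

	dma_pool_free(pool, vaddr, dma_handle);
destroy:
	dma_pool_destroy(pool);
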
			DMA Direction

The interfaces described in subsequent portions of this document
take a DMA direction argument, which is an integer and takes on
one of the following values:

 DMA_BIDIRECTIONAL
 DMA_TO_DEVICE
 DMA_FROM_DEVICE
 DMA_NONE

You should provide the exact DMA direction if you know it.

DMA_TO_DEVICE means "from main memory to the device"
DMA_FROM_DEVICE means "from the device to main memory"
It is the direction in which the data moves during the DMA
transfer.

You are _strongly_ encouraged to specify this as precisely
as you possibly can.

If you absolutely cannot know the direction of the DMA transfer,
specify DMA_BIDIRECTIONAL.  It means that the DMA can go in
either direction.  The platform guarantees that you may legally
specify this, and that it will work, but this may be at the
cost of performance for example.

The value DMA_NONE is to be used for debugging.  You can hold it in a
data structure before you know the precise direction; this will help
catch cases where your direction-tracking logic has failed to set
things up properly.

Another advantage of specifying this value precisely (outside of
potential platform-specific optimizations of such) is for debugging.
Some platforms actually have a write permission boolean which DMA
mappings can be marked with, much like page protections in the user
program address space.  Such platforms can and do report errors in the
kernel logs when the DMA controller hardware detects violation of the
permission setting.

Only streaming mappings specify a direction; consistent mappings
implicitly have a direction attribute setting of
DMA_BIDIRECTIONAL.

The SCSI subsystem tells you the direction to use in the
'sc_data_direction' member of the SCSI command your driver is
working on.

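For example, a SCSI driver's mapping call might pass that member
straight through.  A sketch, assuming the usual scsi_sglist() and
scsi_sg_count() accessors for the command's scatterlist:

	struct scsi_cmnd *cmd;	/* given to your queuecommand hook */
	int count;

	count = dma_map_sg(dev, scsi_sglist(cmd), scsi_sg_count(cmd),
			   cmd->sc_data_direction);
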
For Networking drivers, it's a rather simple affair.  For transmit
packets, map/unmap them with the DMA_TO_DEVICE direction
specifier.  For receive packets, just the opposite, map/unmap them
with the DMA_FROM_DEVICE direction specifier.

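A hedged sketch of both cases (skb, rx_buf, and rx_len are assumed to
come from the driver's transmit and receive paths):

	/* transmit: the CPU filled the buffer, the device reads it */
	tx_dma = dma_map_single(dev, skb->data, skb->len, DMA_TO_DEVICE);

	/* receive: the device fills the buffer, the CPU reads it */
	rx_dma = dma_map_single(dev, rx_buf, rx_len, DMA_FROM_DEVICE);
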
		  Using Streaming DMA mappings

The streaming DMA mapping routines can be called from interrupt
context.  There are two versions of each map/unmap, one which will
map/unmap a single memory region, and one which will map/unmap a
scatterlist.

To map a single region, you do:

	struct device *dev = &my_dev->dev;
	dma_addr_t dma_handle;
	void *addr = buffer->ptr;
	size_t size = buffer->len;

	dma_handle = dma_map_single(dev, addr, size, direction);
	if (dma_mapping_error(dev, dma_handle)) {
		/*
		 * reduce current DMA mapping usage,
		 * delay and try again later or
		 * reset driver.
		 */
		goto map_error_handling;
	}

and to unmap it:

	dma_unmap_single(dev, dma_handle, size, direction);

You should call dma_mapping_error() because dma_map_single() can fail
and return an error.  Not all DMA implementations support the
dma_mapping_error() interface; however, it is good practice to call it
anyway, because it invokes the generic mapping-error check and will
therefore work correctly on all DMA implementations without depending
on the specifics of the underlying one.  Using the returned address
without checking for errors could result in failures ranging from
panics to silent data corruption.  A couple of examples of incorrect
ways to check for errors that make assumptions about the underlying
DMA implementation follow; these apply to dma_map_page() as well.

Incorrect example 1:
	dma_addr_t dma_handle;

	dma_handle = dma_map_single(dev, addr, size, direction);
	if ((dma_handle & 0xffff != 0) || (dma_handle >= 0x1000000)) {
		goto map_error;
	}

Incorrect example 2:
	dma_addr_t dma_handle;

	dma_handle = dma_map_single(dev, addr, size, direction);
	if (dma_handle == DMA_ERROR_CODE) {
		goto map_error;
	}

You should call dma_unmap_single() when the DMA activity is finished, e.g.,
from the interrupt which told you that the DMA transfer is done.

Using CPU pointers like this for single mappings has a disadvantage:
you cannot reference HIGHMEM memory in this way.  Thus, there is a
map/unmap interface pair akin to dma_{map,unmap}_single().  These
interfaces deal with page/offset pairs instead of CPU pointers.
Specifically:

	struct device *dev = &my_dev->dev;
	dma_addr_t dma_handle;
	struct page *page = buffer->page;
	unsigned long offset = buffer->offset;
	size_t size = buffer->len;

	dma_handle = dma_map_page(dev, page, offset, size, direction);
	if (dma_mapping_error(dev, dma_handle)) {
		/*
		 * reduce current DMA mapping usage,
		 * delay and try again later or
		 * reset driver.
		 */
		goto map_error_handling;
	}

	...

	dma_unmap_page(dev, dma_handle, size, direction);

Here, "offset" means byte offset within the given page.

You should call dma_mapping_error() as dma_map_page() could fail and
return an error, as outlined under the dma_map_single() discussion.

You should call dma_unmap_page() when the DMA activity is finished, e.g.,
from the interrupt which told you that the DMA transfer is done.

With scatterlists, you map a region gathered from several regions by:

	int i, count = dma_map_sg(dev, sglist, nents, direction);
	struct scatterlist *sg;

	for_each_sg(sglist, sg, count, i) {
		hw_address[i] = sg_dma_address(sg);
		hw_len[i] = sg_dma_len(sg);
	}

where nents is the number of entries in the sglist.

The implementation is free to merge several consecutive sglist entries
into one (e.g. if DMA mapping is done with PAGE_SIZE granularity, any
consecutive sglist entries can be merged into one provided the first one
ends and the second one starts on a page boundary - in fact this is a huge
advantage for cards which either cannot do scatter-gather or have a very
limited number of scatter-gather entries) and returns the actual number
of sg entries it mapped them to.  On failure 0 is returned.

Then you should loop count times (note: this can be less than nents times)
and use sg_dma_address() and sg_dma_len() macros where you previously
accessed sg->address and sg->length as shown above.

To unmap a scatterlist, just call:

	dma_unmap_sg(dev, sglist, nents, direction);

Again, make sure DMA activity has already finished.

PLEASE NOTE:  The 'nents' argument to the dma_unmap_sg call must be
              the _same_ one you passed into the dma_map_sg call,
              it should _NOT_ be the 'count' value _returned_ from the
              dma_map_sg call.

Every dma_map_{single,sg}() call should have its dma_unmap_{single,sg}()
counterpart, because the DMA address space is a shared resource and
you could render the machine unusable by consuming all DMA addresses.

If you need to use the same streaming DMA region multiple times and touch
the data in between the DMA transfers, the buffer needs to be synced
properly in order for the CPU and device to see the most up-to-date and
correct copy of the DMA buffer.

So, firstly, just map it with dma_map_{single,sg}(), and after each DMA
transfer call either:

	dma_sync_single_for_cpu(dev, dma_handle, size, direction);

or:

	dma_sync_sg_for_cpu(dev, sglist, nents, direction);

as appropriate.

Then, if you wish to let the device get at the DMA area again,
finish accessing the data with the CPU, and then before actually
giving the buffer to the hardware call either:

	dma_sync_single_for_device(dev, dma_handle, size, direction);

or:

	dma_sync_sg_for_device(dev, sglist, nents, direction);

as appropriate.

After the last DMA transfer call one of the DMA unmap routines
dma_unmap_{single,sg}().  If you don't touch the data from the first
dma_map_*() call till dma_unmap_*(), then you don't have to call the
dma_sync_*() routines at all.

Here is pseudo-code which shows a situation in which you would need
to use the dma_sync_*() interfaces.

	my_card_setup_receive_buffer(struct my_card *cp, char *buffer, int len)
	{
		dma_addr_t mapping;

		mapping = dma_map_single(cp->dev, buffer, len, DMA_FROM_DEVICE);
		if (dma_mapping_error(cp->dev, mapping)) {
			/*
			 * reduce current DMA mapping usage,
			 * delay and try again later or
			 * reset driver.
			 */
			goto map_error_handling;
		}

		cp->rx_buf = buffer;
		cp->rx_len = len;
		cp->rx_dma = mapping;

		give_rx_buf_to_card(cp);
	}

	...

	my_card_interrupt_handler(int irq, void *devid, struct pt_regs *regs)
	{
		struct my_card *cp = devid;

		...
		if (read_card_status(cp) == RX_BUF_TRANSFERRED) {
			struct my_card_header *hp;

			/* Examine the header to see if we wish
			 * to accept the data.  But synchronize
			 * the DMA transfer with the CPU first
			 * so that we see updated contents.
			 */
			dma_sync_single_for_cpu(cp->dev, cp->rx_dma,
						cp->rx_len,
						DMA_FROM_DEVICE);

			/* Now it is safe to examine the buffer. */
			hp = (struct my_card_header *) cp->rx_buf;
			if (header_is_ok(hp)) {
				dma_unmap_single(cp->dev, cp->rx_dma, cp->rx_len,
						 DMA_FROM_DEVICE);
				pass_to_upper_layers(cp->rx_buf);
				make_and_setup_new_rx_buf(cp);
			} else {
				/* CPU should not write to
				 * DMA_FROM_DEVICE-mapped area,
				 * so dma_sync_single_for_device() is
				 * not needed here. It would be required
				 * for DMA_BIDIRECTIONAL mapping if
				 * the memory was modified.
				 */
				give_rx_buf_to_card(cp);
			}
		}
	}

Drivers converted fully to this interface should not use virt_to_bus() any
longer, nor should they use bus_to_virt().  Some drivers have to be changed
a little bit, because there is no longer an equivalent to bus_to_virt() in
the dynamic DMA mapping scheme - you must always store the DMA addresses
returned by the dma_alloc_coherent(), dma_pool_alloc(), and dma_map_single()
calls in your driver structures and/or in the card registers (dma_map_sg()
stores them in the scatterlist itself, if the platform supports dynamic DMA
mapping in hardware).

All drivers should be using these interfaces with no exceptions.  It
is planned to completely remove virt_to_bus() and bus_to_virt() as
they are entirely deprecated.  Some ports already do not provide these
as it is impossible to correctly support them.

			Handling Errors

DMA address space is limited on some architectures and an allocation
failure can be determined by:

- checking if dma_alloc_coherent() returns NULL or dma_map_sg() returns 0

- checking the dma_addr_t returned from dma_map_single() and dma_map_page()
  by using dma_mapping_error():

	dma_addr_t dma_handle;

	dma_handle = dma_map_single(dev, addr, size, direction);
	if (dma_mapping_error(dev, dma_handle)) {
		/*
		 * reduce current DMA mapping usage,
		 * delay and try again later or
		 * reset driver.
		 */
		goto map_error_handling;
	}

- unmapping pages that are already mapped, when a mapping error occurs
  in the middle of a multiple-page mapping attempt.  These examples are
  applicable to dma_map_page() as well.

Example 1:
	dma_addr_t dma_handle1;
	dma_addr_t dma_handle2;

	dma_handle1 = dma_map_single(dev, addr, size, direction);
	if (dma_mapping_error(dev, dma_handle1)) {
		/*
		 * reduce current DMA mapping usage,
		 * delay and try again later or
		 * reset driver.
		 */
		goto map_error_handling1;
	}
	dma_handle2 = dma_map_single(dev, addr, size, direction);
	if (dma_mapping_error(dev, dma_handle2)) {
		/*
		 * reduce current DMA mapping usage,
		 * delay and try again later or
		 * reset driver.
		 */
		goto map_error_handling2;
	}

	...

	map_error_handling2:
		dma_unmap_single(dev, dma_handle1, size, direction);
	map_error_handling1:

Example 2: (if buffers are allocated in a loop, unmap all mapped buffers when
	    mapping error is detected in the middle)

	dma_addr_t dma_addr;
	dma_addr_t array[DMA_BUFFERS];
	int save_index = 0;

	for (i = 0; i < DMA_BUFFERS; i++) {

		...

		dma_addr = dma_map_single(dev, addr, size, direction);
		if (dma_mapping_error(dev, dma_addr)) {
			/*
			 * reduce current DMA mapping usage,
			 * delay and try again later or
			 * reset driver.
			 */
			goto map_error_handling;
		}
		array[i] = dma_addr;
		save_index++;
	}

	...

	map_error_handling:

	for (i = 0; i < save_index; i++) {

		...

		dma_unmap_single(dev, array[i], size, direction);
	}

Networking drivers must call dev_kfree_skb() to free the socket buffer
and return NETDEV_TX_OK if the DMA mapping fails on the transmit hook
(ndo_start_xmit).  This means that the socket buffer is just dropped in
the failure case.

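A sketch of that transmit-hook failure path (the function name and the
surrounding driver state, such as dev and mapping, are illustrative):

	static netdev_tx_t mydev_start_xmit(struct sk_buff *skb,
					    struct net_device *netdev)
	{
		...
		mapping = dma_map_single(dev, skb->data, skb->len,
					 DMA_TO_DEVICE);
		if (dma_mapping_error(dev, mapping)) {
			dev_kfree_skb(skb);	/* drop the packet */
			return NETDEV_TX_OK;
		}
		...
	}
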
SCSI drivers must return SCSI_MLQUEUE_HOST_BUSY if the DMA mapping
fails in the queuecommand hook.  This means that the SCSI subsystem
passes the command to the driver again later.

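A corresponding queuecommand sketch (again, the names outside the DMA
and SCSI interfaces are illustrative):

	static int mydev_queuecommand(struct Scsi_Host *host,
				      struct scsi_cmnd *cmd)
	{
		...
		if (dma_mapping_error(dev, mapping))
			return SCSI_MLQUEUE_HOST_BUSY;	/* retried later */
		...
	}
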
		Optimizing Unmap State Space Consumption

On many platforms, dma_unmap_{single,page}() is simply a nop.
Therefore, keeping track of the mapping address and length is a waste
of space.  Instead of filling your drivers up with ifdefs and the like
to "work around" this (which would defeat the whole purpose of a
portable API) the following facilities are provided.

Actually, instead of describing the macros one by one, we'll
transform some example code.

1) Use DEFINE_DMA_UNMAP_{ADDR,LEN} in state saving structures.
   Example, before:

	struct ring_state {
		struct sk_buff *skb;
		dma_addr_t mapping;
		__u32 len;
	};

   after:

	struct ring_state {
		struct sk_buff *skb;
		DEFINE_DMA_UNMAP_ADDR(mapping);
		DEFINE_DMA_UNMAP_LEN(len);
	};

2) Use dma_unmap_{addr,len}_set() to set these values.
   Example, before:

	ringp->mapping = FOO;
	ringp->len = BAR;

   after:

	dma_unmap_addr_set(ringp, mapping, FOO);
	dma_unmap_len_set(ringp, len, BAR);

3) Use dma_unmap_{addr,len}() to access these values.
   Example, before:

	dma_unmap_single(dev, ringp->mapping, ringp->len,
			 DMA_FROM_DEVICE);

   after:

	dma_unmap_single(dev,
			 dma_unmap_addr(ringp, mapping),
			 dma_unmap_len(ringp, len),
			 DMA_FROM_DEVICE);

It really should be self-explanatory.  We treat the ADDR and LEN
separately, because it is possible for an implementation to only
need the address in order to perform the unmap operation.

			Platform Issues

If you are just writing drivers for Linux and do not maintain
an architecture port for the kernel, you can safely skip down
to "Closing".

1) Struct scatterlist requirements.

   Don't invent an architecture-specific struct scatterlist; just use
   <asm-generic/scatterlist.h>.  You need to enable
   CONFIG_NEED_SG_DMA_LENGTH if the architecture supports IOMMUs
   (including software IOMMU).

2) ARCH_DMA_MINALIGN

   Architectures must ensure that kmalloc'ed buffers are
   DMA-safe.  Drivers and subsystems depend on it.  If an architecture
   isn't fully DMA-coherent (i.e. hardware doesn't ensure that data in
   the CPU cache is identical to data in main memory),
   ARCH_DMA_MINALIGN must be set so that the memory allocator
   makes sure that a kmalloc'ed buffer doesn't share a cache line with
   others.  See arch/arm/include/asm/cache.h as an example.

   Note that ARCH_DMA_MINALIGN is about DMA memory alignment
   constraints.  You don't need to worry about the architecture data
   alignment constraints (e.g. the alignment constraints about 64-bit
   objects).

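   A minimal sketch of that pattern, modeled on arm's asm/cache.h (the
   shift value varies by CPU and is shown here only for illustration):

	#define L1_CACHE_SHIFT		6
	#define L1_CACHE_BYTES		(1 << L1_CACHE_SHIFT)

	/*
	 * kmalloc()'ed memory must not share a cache line with
	 * unrelated data on non-DMA-coherent hardware.
	 */
	#define ARCH_DMA_MINALIGN	L1_CACHE_BYTES
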
3) Supporting multiple types of IOMMUs

   If your architecture needs to support multiple types of IOMMUs, you
   can use include/asm-generic/dma-mapping-common.h.  It's a library
   to support the DMA API with multiple types of IOMMUs.  Lots of
   architectures (x86, powerpc, sh, alpha, ia64, microblaze and
   sparc) use it.  Choose one to see how it can be used.  If you need
   to support multiple types of IOMMUs in a single system, the example
   of x86 or powerpc helps.

			   Closing

This document, and the API itself, would not be in its current
form without the feedback and suggestions from numerous individuals.
We would like to specifically mention, in no particular order, the
following people:

	Russell King <rmk@arm.linux.org.uk>
	Leo Dagum <dagum@barrel.engr.sgi.com>
	Ralf Baechle <ralf@oss.sgi.com>
	Grant Grundler <grundler@cup.hp.com>
	Jay Estabrook <Jay.Estabrook@compaq.com>
	Thomas Sailer <sailer@ife.ee.ethz.ch>
	Andrea Arcangeli <andrea@suse.de>
	Jens Axboe <jens.axboe@oracle.com>
	David Mosberger-Tang <davidm@hpl.hp.com>