		     Dynamic DMA mapping Guide
		     =========================

		 David S. Miller <davem@redhat.com>
		 Richard Henderson <rth@cygnus.com>
		  Jakub Jelinek <jakub@redhat.com>

This is a guide to device driver writers on how to use the DMA API
with example pseudo-code.  For a concise description of the API, see
DMA-API.txt.

                       CPU and DMA addresses

There are several kinds of addresses involved in the DMA API, and it's
important to understand the differences.

The kernel normally uses virtual addresses.  Any address returned by
kmalloc(), vmalloc(), and similar interfaces is a virtual address and can
be stored in a "void *".

The virtual memory system (TLB, page tables, etc.) translates virtual
addresses to CPU physical addresses, which are stored as "phys_addr_t" or
"resource_size_t".  The kernel manages device resources like registers as
physical addresses.  These are the addresses in /proc/iomem.  The physical
address is not directly useful to a driver; it must use ioremap() to map
the space and produce a virtual address.

I/O devices use a third kind of address: a "bus address" or "DMA address".
If a device has registers at an MMIO address, or if it performs DMA to read
or write system memory, the addresses used by the device are bus addresses.
In some systems, bus addresses are identical to CPU physical addresses, but
in general they are not.  IOMMUs and host bridges can produce arbitrary
mappings between physical and bus addresses.

Here's a picture and some examples:

               CPU                  CPU                  Bus
             Virtual              Physical             Address
             Address              Address               Space
              Space                Space

            +-------+             +------+             +------+
            |       |             |MMIO  |   Offset    |      |
            |       |  Virtual    |Space |   applied   |      |
          C +-------+ --------> B +------+ ----------> +------+ A
            |       |  mapping    |      |   by host   |      |
  +-----+   |       |             |      |   bridge    |      |   +--------+
  |     |   |       |             +------+             |      |   |        |
  | CPU |   |       |             | RAM  |             |      |   | Device |
  |     |   |       |             |      |             |      |   |        |
  +-----+   +-------+             +------+             +------+   +--------+
            |       |  Virtual    |Buffer|   Mapping   |      |
          X +-------+ --------> Y +------+ <---------- +------+ Z
            |       |  mapping    | RAM  |   by IOMMU
            |       |             |      |
            |       |             |      |
            +-------+             +------+

During the enumeration process, the kernel learns about I/O devices and
their MMIO space and the host bridges that connect them to the system.  For
example, if a PCI device has a BAR, the kernel reads the bus address (A)
from the BAR and converts it to a CPU physical address (B).  The address B
is stored in a struct resource and usually exposed via /proc/iomem.  When a
driver claims a device, it typically uses ioremap() to map physical address
B at a virtual address (C).  It can then use, e.g., ioread32(C), to access
the device registers at bus address A.

If the device supports DMA, the driver sets up a buffer using kmalloc() or
a similar interface, which returns a virtual address (X).  The virtual
memory system maps X to a physical address (Y) in system RAM.  The driver
can use virtual address X to access the buffer, but the device itself
cannot because DMA doesn't go through the CPU virtual memory system.

In some simple systems, the device can do DMA directly to physical address
Y.  But in many others, there is IOMMU hardware that translates bus
addresses to physical addresses, e.g., it translates Z to Y.  This is part
of the reason for the DMA API: the driver can give a virtual address X to
an interface like dma_map_single(), which sets up any required IOMMU
mapping and returns the bus address Z.  The driver then tells the device to
do DMA to Z, and the IOMMU maps it to the buffer at address Y in system
RAM.

So that Linux can make use of dynamic DMA mapping, it needs some help
from the drivers: DMA addresses should be mapped only for the time they
are actually used, and unmapped after the DMA transfer completes.

Of course, the following API works even on platforms where no such
hardware exists.

Note that the DMA API works with any bus independent of the underlying
microprocessor architecture. You should use the DMA API rather than the
bus-specific DMA API, i.e., use the dma_map_*() interfaces rather than the
pci_map_*() interfaces.

First of all, you should make sure

#include <linux/dma-mapping.h>

is in your driver, which provides the definition of dma_addr_t.  This type
can hold any valid DMA or bus address for the platform and should be used
everywhere you hold a DMA address returned from the DMA mapping functions.
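
For example, a driver would typically keep the CPU pointer and the
dma_addr_t for a buffer side by side in its private state.  A minimal
sketch (the structure and field names are illustrative only):

	struct my_dev_state {
		void *buf;		/* virtual address, used by the CPU */
		dma_addr_t buf_dma;	/* bus/DMA address, given to the device */
		size_t buf_len;
	};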

			 What memory is DMA'able?

The first piece of information you must know is what kernel memory can
be used with the DMA mapping facilities.  There has been an unwritten
set of rules regarding this, and this text is an attempt to finally
write them down.

If you acquired your memory via the page allocator
(i.e. __get_free_page*()) or the generic memory allocators
(i.e. kmalloc() or kmem_cache_alloc()) then you may DMA to/from
that memory using the addresses returned from those routines.

This means specifically that you may _not_ use the memory/addresses
returned from vmalloc() for DMA.  It is possible to DMA to the
_underlying_ memory mapped into a vmalloc() area, but this requires
walking page tables to get the physical addresses, and then
translating each of those pages back to a kernel address using
something like __va().  [ EDIT: Update this when we integrate
Gerd Knorr's generic code which does this. ]
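
As a hedged sketch of what such a walk involves: the kernel provides
vmalloc_to_page() (declared in <linux/vmalloc.h>) to resolve one page
of a vmalloc() area.  The helper below illustrates the mechanism only
and is not a recommendation; "vaddr" must be page-aligned:

	static dma_addr_t map_one_vmalloc_page(struct device *dev, void *vaddr,
					       enum dma_data_direction dir)
	{
		/* Walk the page tables to find the underlying page... */
		struct page *page = vmalloc_to_page(vaddr);

		/* ...and map that single page for DMA.  The caller must
		 * check the result with dma_mapping_error().
		 */
		return dma_map_page(dev, page, 0, PAGE_SIZE, dir);
	}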

This rule also means that you may use neither kernel image addresses
(items in data/text/bss segments), nor module image addresses, nor
stack addresses for DMA.  These could all be mapped somewhere entirely
different than the rest of physical memory.  Even if those classes of
memory could physically work with DMA, you'd need to ensure the I/O
buffers were cacheline-aligned.  Without that, you'd see cacheline
sharing problems (data corruption) on CPUs with DMA-incoherent caches.
(The CPU could write to one word, DMA would write to a different one
in the same cache line, and one of them could be overwritten.)

Also, this means that you cannot take the return of a kmap()
call and DMA to/from that.  This is similar to vmalloc().

What about block I/O and networking buffers?  The block I/O and
networking subsystems make sure that the buffers they use are valid
for you to DMA from/to.

			DMA addressing limitations

Does your device have any DMA addressing limitations?  For example, is
your device only capable of driving the low order 24-bits of address?
If so, you need to inform the kernel of this fact.

By default, the kernel assumes that your device can address the full
32-bits.  For a 64-bit capable device, this needs to be increased.
And for a device with limitations, as discussed in the previous
paragraph, it needs to be decreased.

Special note about PCI: the PCI-X specification requires PCI-X devices to
support 64-bit addressing (DAC) for all transactions.  And at least
one platform (SGI SN2) requires 64-bit consistent allocations to
operate correctly when the IO bus is in PCI-X mode.

For correct operation, you must interrogate the kernel in your device
probe routine to see if the DMA controller on the machine can properly
support the DMA addressing limitation your device has.  It is good
style to do this even if your device holds the default setting,
because this shows that you did think about these issues wrt. your
device.

The query is performed via a call to dma_set_mask_and_coherent():

	int dma_set_mask_and_coherent(struct device *dev, u64 mask);

which will query the mask for both streaming and coherent APIs together.
If you have some special requirements, then the following two separate
queries can be used instead:

	The query for streaming mappings is performed via a call to
	dma_set_mask():

		int dma_set_mask(struct device *dev, u64 mask);

	The query for consistent allocations is performed via a call
	to dma_set_coherent_mask():

		int dma_set_coherent_mask(struct device *dev, u64 mask);

Here, dev is a pointer to the device struct of your device, and mask
is a bit mask describing which bits of an address your device
supports.  It returns zero if your card can perform DMA properly on
the machine given the address mask you provided.  In general, the
device struct of your device is embedded in the bus-specific device
struct of your device.  For example, &pdev->dev is a pointer to the
device struct of a PCI device (pdev is a pointer to the PCI device
struct of your device).

If it returns non-zero, your device cannot perform DMA properly on
this platform, and attempting to do so will result in undefined
behavior.  You must either use a different mask, or not use DMA.

This means that in the failure case, you have three options:

1) Use another DMA mask, if possible (see below).
2) Use some non-DMA mode for data transfer, if possible.
3) Ignore this device and do not initialize it.

It is recommended that your driver print a kernel KERN_WARNING message
when you end up performing either #2 or #3.  In this manner, if a user
of your driver reports that performance is bad or that the device is not
even detected, you can ask them for the kernel messages to find out
exactly why.

The standard 32-bit addressing device would do something like this:

	if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32))) {
		dev_warn(dev, "mydev: No suitable DMA available\n");
		goto ignore_this_device;
	}

Another common scenario is a 64-bit capable device.  The approach here
is to try for 64-bit addressing, but back down to a 32-bit mask that
should not fail.  The kernel may fail the 64-bit mask even on platforms
that are capable of 64-bit addressing, simply because 32-bit addressing
is done more efficiently than 64-bit addressing there.  For example,
Sparc64 PCI SAC addressing is more efficient than DAC addressing.

Here is how you would handle a 64-bit capable device which can drive
all 64-bits when accessing streaming DMA:

	int using_dac;

	if (!dma_set_mask(dev, DMA_BIT_MASK(64))) {
		using_dac = 1;
	} else if (!dma_set_mask(dev, DMA_BIT_MASK(32))) {
		using_dac = 0;
	} else {
		dev_warn(dev, "mydev: No suitable DMA available\n");
		goto ignore_this_device;
	}

If a card is capable of using 64-bit consistent allocations as well,
the case would look like this:

	int using_dac, consistent_using_dac;

	if (!dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64))) {
		using_dac = 1;
		consistent_using_dac = 1;
	} else if (!dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32))) {
		using_dac = 0;
		consistent_using_dac = 0;
	} else {
		dev_warn(dev, "mydev: No suitable DMA available\n");
		goto ignore_this_device;
	}

The coherent mask can always be set to the same or a smaller mask than
the streaming mask.  However, for the rare case that a device driver
only uses consistent allocations, the return value of
dma_set_coherent_mask() has to be checked on its own.
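
For example, such a consistent-allocations-only driver might do
something like this (a sketch following the same pattern as above):

	if (dma_set_coherent_mask(dev, DMA_BIT_MASK(32))) {
		dev_warn(dev, "mydev: No suitable coherent DMA available\n");
		goto ignore_this_device;
	}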

Finally, if your device can only drive the low 24-bits of
address you might do something like:

	if (dma_set_mask(dev, DMA_BIT_MASK(24))) {
		dev_warn(dev, "mydev: 24-bit DMA addressing not available\n");
		goto ignore_this_device;
	}

When dma_set_mask() or dma_set_mask_and_coherent() is successful, and
returns zero, the kernel saves away this mask you have provided.  The
kernel will use this information later when you make DMA mappings.

One known case is worth mentioning in this documentation.  If your
device supports multiple functions (for example a sound card that
provides playback and record functions) and the various different
functions have _different_ DMA addressing limitations, you may wish
to probe each mask and only provide the functionality which the
machine can handle.  It is important that the last call to
dma_set_mask() be for the most specific mask.

Here is pseudo-code showing how this might be done:

	#define PLAYBACK_ADDRESS_BITS	DMA_BIT_MASK(32)
	#define RECORD_ADDRESS_BITS	DMA_BIT_MASK(24)

	struct my_sound_card *card;
	struct device *dev;

	...
	if (!dma_set_mask(dev, PLAYBACK_ADDRESS_BITS)) {
		card->playback_enabled = 1;
	} else {
		card->playback_enabled = 0;
		dev_warn(dev, "%s: Playback disabled due to DMA limitations\n",
			 card->name);
	}
	if (!dma_set_mask(dev, RECORD_ADDRESS_BITS)) {
		card->record_enabled = 1;
	} else {
		card->record_enabled = 0;
		dev_warn(dev, "%s: Record disabled due to DMA limitations\n",
			 card->name);
	}

A sound card was used as an example here because this genre of PCI
devices seems to be littered with ISA chips given a PCI front end,
and thus retaining the 16MB DMA addressing limitations of ISA.

			Types of DMA mappings

There are two types of DMA mappings:

- Consistent DMA mappings which are usually mapped at driver
  initialization, unmapped at the end and for which the hardware should
  guarantee that the device and the CPU can access the data
  in parallel and will see updates made by each other without any
  explicit software flushing.

  Think of "consistent" as "synchronous" or "coherent".

  The current default is to return consistent memory in the low 32
  bits of the bus space.  However, for future compatibility you should
  set the consistent mask even if this default is fine for your
  driver.

  Good examples of what to use consistent mappings for are:

	- Network card DMA ring descriptors.
	- SCSI adapter mailbox command data structures.
	- Device firmware microcode executed out of
	  main memory.

  The invariant these examples all require is that any CPU store
  to memory is immediately visible to the device, and vice
  versa.  Consistent mappings guarantee this.

  IMPORTANT: Consistent DMA memory does not preclude the usage of
	     proper memory barriers.  The CPU may reorder stores to
	     consistent memory just as it may normal memory.  Example:
	     if it is important for the device to see the first word
	     of a descriptor updated before the second, you must do
	     something like:

		desc->word0 = address;
		wmb();
		desc->word1 = DESC_VALID;

	     in order to get correct behavior on all platforms.

	     Also, on some platforms your driver may need to flush CPU write
	     buffers in much the same way as it needs to flush write buffers
	     found in PCI bridges (such as by reading a register's value
	     after writing it).

- Streaming DMA mappings which are usually mapped for one DMA
  transfer, unmapped right after it (unless you use dma_sync_* below)
  and for which hardware can optimize for sequential accesses.

  Think of "streaming" as "asynchronous" or "outside the coherency
  domain".

  Good examples of what to use streaming mappings for are:

	- Networking buffers transmitted/received by a device.
	- Filesystem buffers written/read by a SCSI device.

  The interfaces for using this type of mapping were designed in
  such a way that an implementation can make whatever performance
  optimizations the hardware allows.  To this end, when using
  such mappings you must be explicit about what you want to happen.

Neither type of DMA mapping has alignment restrictions that come from
the underlying bus, although some devices may have such restrictions.
Also, systems with caches that aren't DMA-coherent will work better
when the underlying buffers don't share cache lines with other data.


		 Using Consistent DMA mappings.

To allocate and map large (PAGE_SIZE or so) consistent DMA regions,
you should do:

	dma_addr_t dma_handle;

	cpu_addr = dma_alloc_coherent(dev, size, &dma_handle, gfp);

where dev is a struct device *. This may be called in interrupt
context with the GFP_ATOMIC flag.

Size is the length of the region you want to allocate, in bytes.

This routine will allocate RAM for that region, so it acts similarly to
__get_free_pages() (but takes size instead of a page order).  If your
driver needs regions sized smaller than a page, you may prefer using
the dma_pool interface, described below.

The consistent DMA mapping interfaces, for non-NULL dev, will by
default return a DMA address which is 32-bit addressable.  Even if the
device indicates (via the DMA mask) that it may address the upper 32-bits,
consistent allocation will only return > 32-bit addresses for DMA if
the consistent DMA mask has been explicitly changed via
dma_set_coherent_mask().  This is true of the dma_pool interface as
well.

dma_alloc_coherent() returns two values: the virtual address which you
can use to access it from the CPU and dma_handle which you pass to the
card.

The CPU virtual address and the DMA bus address are both
guaranteed to be aligned to the smallest PAGE_SIZE order which
is greater than or equal to the requested size.  This invariant
exists (for example) to guarantee that if you allocate a chunk
which is smaller than or equal to 64 kilobytes, the extent of the
buffer you receive will not cross a 64K boundary.

To unmap and free such a DMA region, you call:

	dma_free_coherent(dev, size, cpu_addr, dma_handle);

where dev, size are the same as in the above call and cpu_addr and
dma_handle are the values dma_alloc_coherent() returned to you.
This function may not be called in interrupt context.
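
Putting allocation and freeing together, a minimal sketch of a
descriptor ring's lifetime might look like this ("struct my_desc" and
NUM_DESC are illustrative names, not part of the API):

	#define NUM_DESC	256

	struct my_desc *ring;
	dma_addr_t ring_dma;
	size_t ring_size = NUM_DESC * sizeof(struct my_desc);

	ring = dma_alloc_coherent(dev, ring_size, &ring_dma, GFP_KERNEL);
	if (!ring)
		goto err;	/* allocation failure, see "Handling Errors" */

	/* ... program ring_dma into the device, touch "ring" from the CPU ... */

	dma_free_coherent(dev, ring_size, ring, ring_dma);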

If your driver needs lots of smaller memory regions, you can write
custom code to subdivide pages returned by dma_alloc_coherent(),
or you can use the dma_pool API to do that.  A dma_pool is like
a kmem_cache, but it uses dma_alloc_coherent(), not __get_free_pages().
Also, it understands common hardware constraints for alignment,
like queue heads needing to be aligned on N byte boundaries.

Create a dma_pool like this:

	struct dma_pool *pool;

	pool = dma_pool_create(name, dev, size, align, boundary);

The "name" is for diagnostics (like a kmem_cache name); dev and size
are as above.  The device's hardware alignment requirement for this
type of data is "align" (which is expressed in bytes, and must be a
power of two).  If your device has no boundary crossing restrictions,
pass 0 for boundary; passing 4096 says memory allocated from this pool
must not cross 4KByte boundaries (but in that case it may be better to
use dma_alloc_coherent() directly instead).

Allocate memory from a DMA pool like this:

	cpu_addr = dma_pool_alloc(pool, flags, &dma_handle);

flags are GFP_KERNEL if blocking is permitted (not in_interrupt nor
holding SMP locks), GFP_ATOMIC otherwise.  Like dma_alloc_coherent(),
this returns two values, cpu_addr and dma_handle.

Free memory that was allocated from a dma_pool like this:

	dma_pool_free(pool, cpu_addr, dma_handle);

where pool is what you passed to dma_pool_alloc(), and cpu_addr and
dma_handle are the values dma_pool_alloc() returned. This function
may be called in interrupt context.

Destroy a dma_pool by calling:

	dma_pool_destroy(pool);

Make sure you've called dma_pool_free() for all memory allocated
from a pool before you destroy the pool. This function may not
be called in interrupt context.
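
Putting the pieces together, a hedged sketch of a pool of 64-byte,
16-byte-aligned command blocks (the pool name, sizes and labels are
illustrative only):

	struct dma_pool *pool;
	void *cmd;
	dma_addr_t cmd_dma;

	pool = dma_pool_create("mydev-cmds", dev, 64, 16, 0);
	if (!pool)
		goto err;

	cmd = dma_pool_alloc(pool, GFP_KERNEL, &cmd_dma);
	if (!cmd)
		goto err_destroy_pool;

	/* ... hand cmd_dma to the device, fill in cmd from the CPU ... */

	dma_pool_free(pool, cmd, cmd_dma);
	dma_pool_destroy(pool);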

			DMA Direction

The interfaces described in subsequent portions of this document
take a DMA direction argument, which is an integer and takes on
one of the following values:

 DMA_BIDIRECTIONAL
 DMA_TO_DEVICE
 DMA_FROM_DEVICE
 DMA_NONE

You should provide the exact DMA direction if you know it.

DMA_TO_DEVICE means "from main memory to the device".
DMA_FROM_DEVICE means "from the device to main memory".
It is the direction in which the data moves during the DMA
transfer.

You are _strongly_ encouraged to specify this as precisely
as you possibly can.

If you absolutely cannot know the direction of the DMA transfer,
specify DMA_BIDIRECTIONAL.  It means that the DMA can go in
either direction.  The platform guarantees that you may legally
specify this, and that it will work, but this may be at the
cost of performance for example.

The value DMA_NONE is to be used for debugging.  You can hold
this in a data structure before you come to know the precise
direction, and this will help catch cases where your direction
tracking logic has failed to set things up properly.

Another advantage of specifying this value precisely (outside of
potential platform-specific optimizations of such) is for debugging.
Some platforms actually have a write permission boolean which DMA
mappings can be marked with, much like page protections in the user
program address space.  Such platforms can and do report errors in the
kernel logs when the DMA controller hardware detects violation of the
permission setting.

Only streaming mappings specify a direction; consistent mappings
implicitly have a direction attribute setting of
DMA_BIDIRECTIONAL.

The SCSI subsystem tells you the direction to use in the
'sc_data_direction' member of the SCSI command your driver is
working on.

For networking drivers, it's a rather simple affair.  For transmit
packets, map/unmap them with the DMA_TO_DEVICE direction
specifier.  For receive packets, just the opposite: map/unmap them
with the DMA_FROM_DEVICE direction specifier.
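
For instance (a sketch only: scsi_sglist(), scsi_sg_count() and the skb
fields are standard kernel interfaces, while the surrounding variables
are illustrative):

	/* SCSI: use the direction the midlayer computed for this command. */
	count = dma_map_sg(dev, scsi_sglist(cmd), scsi_sg_count(cmd),
			   cmd->sc_data_direction);

	/* Networking, transmit path: data moves from memory to the device. */
	mapping = dma_map_single(dev, skb->data, skb->len, DMA_TO_DEVICE);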

		  Using Streaming DMA mappings

The streaming DMA mapping routines can be called from interrupt
context.  There are two versions of each map/unmap, one which will
map/unmap a single memory region, and one which will map/unmap a
scatterlist.

To map a single region, you do:

	struct device *dev = &my_dev->dev;
	dma_addr_t dma_handle;
	void *addr = buffer->ptr;
	size_t size = buffer->len;

	dma_handle = dma_map_single(dev, addr, size, direction);
	if (dma_mapping_error(dev, dma_handle)) {
		/*
		 * reduce current DMA mapping usage,
		 * delay and try again later or
		 * reset driver.
		 */
		goto map_error_handling;
	}

and to unmap it:

	dma_unmap_single(dev, dma_handle, size, direction);

You should call dma_mapping_error() because dma_map_single() could fail
and return an error.  Not all DMA implementations support the
dma_mapping_error() interface, but it is good practice to call it
anyway: it invokes the generic mapping error check interface, so the
mapping code will work correctly on all DMA implementations without
any dependency on the specifics of the underlying implementation.
Using the returned address without checking for errors could result in
failures ranging from panics to silent data corruption.  A couple of
examples of incorrect ways to check for errors, which make assumptions
about the underlying DMA implementation, follow; these apply to
dma_map_page() as well.

Incorrect example 1:
	dma_addr_t dma_handle;

	dma_handle = dma_map_single(dev, addr, size, direction);
	if ((dma_handle & 0xffff != 0) || (dma_handle >= 0x1000000)) {
		goto map_error;
	}

Incorrect example 2:
	dma_addr_t dma_handle;

	dma_handle = dma_map_single(dev, addr, size, direction);
	if (dma_handle == DMA_ERROR_CODE) {
		goto map_error;
	}

You should call dma_unmap_single() when the DMA activity is finished, e.g.,
from the interrupt which told you that the DMA transfer is done.

Using CPU pointers like this for single mappings has a disadvantage:
you cannot reference HIGHMEM memory in this way.  Thus, there is a
map/unmap interface pair akin to dma_{map,unmap}_single().  These
interfaces deal with page/offset pairs instead of CPU pointers.
Specifically:

	struct device *dev = &my_dev->dev;
	dma_addr_t dma_handle;
	struct page *page = buffer->page;
	unsigned long offset = buffer->offset;
	size_t size = buffer->len;

	dma_handle = dma_map_page(dev, page, offset, size, direction);
	if (dma_mapping_error(dev, dma_handle)) {
		/*
		 * reduce current DMA mapping usage,
		 * delay and try again later or
		 * reset driver.
		 */
		goto map_error_handling;
	}

	...

	dma_unmap_page(dev, dma_handle, size, direction);

Here, "offset" means byte offset within the given page.

You should call dma_mapping_error() as dma_map_page() could fail and
return an error, as outlined under the dma_map_single() discussion.

You should call dma_unmap_page() when the DMA activity is finished, e.g.,
from the interrupt which told you that the DMA transfer is done.

With scatterlists, you map a region gathered from several regions by:

	int i, count = dma_map_sg(dev, sglist, nents, direction);
	struct scatterlist *sg;

	for_each_sg(sglist, sg, count, i) {
		hw_address[i] = sg_dma_address(sg);
		hw_len[i] = sg_dma_len(sg);
	}

where nents is the number of entries in the sglist.

The implementation is free to merge several consecutive sglist entries
into one (e.g. if DMA mapping is done with PAGE_SIZE granularity, any
consecutive sglist entries can be merged into one provided the first one
ends and the second one starts on a page boundary - in fact this is a huge
advantage for cards which either cannot do scatter-gather or have a very
limited number of scatter-gather entries) and returns the actual number
of sg entries it mapped them to.  On failure 0 is returned.

Then you should loop count times (note: this can be less than nents times)
and use sg_dma_address() and sg_dma_len() macros where you previously
accessed sg->address and sg->length as shown above.

To unmap a scatterlist, just call:

	dma_unmap_sg(dev, sglist, nents, direction);

Again, make sure DMA activity has already finished.

PLEASE NOTE:  The 'nents' argument to the dma_unmap_sg call must be
              the _same_ one you passed into the dma_map_sg call,
              it should _NOT_ be the 'count' value _returned_ from the
              dma_map_sg call.
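
A short sketch of the correct pairing (illustrative only, error
handling omitted):

	int count;

	count = dma_map_sg(dev, sglist, nents, direction); /* may merge entries */

	/* ... program the device using the "count" mapped entries ... */

	dma_unmap_sg(dev, sglist, nents, direction);	/* nents, not count */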

Every dma_map_{single,sg}() call should have its dma_unmap_{single,sg}()
counterpart, because the bus address space is a shared resource and
you could render the machine unusable by consuming all bus addresses.

If you need to use the same streaming DMA region multiple times and touch
the data in between the DMA transfers, the buffer needs to be synced
properly in order for the CPU and device to see the most up-to-date and
correct copy of the DMA buffer.

So, firstly, just map it with dma_map_{single,sg}(), and after each DMA
transfer call either:

	dma_sync_single_for_cpu(dev, dma_handle, size, direction);

or:

	dma_sync_sg_for_cpu(dev, sglist, nents, direction);

as appropriate.

Then, if you wish to let the device get at the DMA area again,
finish accessing the data with the CPU, and then before actually
giving the buffer to the hardware call either:

	dma_sync_single_for_device(dev, dma_handle, size, direction);

or:

	dma_sync_sg_for_device(dev, sglist, nents, direction);

as appropriate.

After the last DMA transfer call one of the DMA unmap routines
dma_unmap_{single,sg}(). If you don't touch the data from the first
dma_map_*() call till dma_unmap_*(), then you don't have to call the
dma_sync_*() routines at all.

Here is pseudo code which shows a situation in which you would need
to use the dma_sync_*() interfaces.

	my_card_setup_receive_buffer(struct my_card *cp, char *buffer, int len)
	{
		dma_addr_t mapping;

		mapping = dma_map_single(cp->dev, buffer, len, DMA_FROM_DEVICE);
		if (dma_mapping_error(cp->dev, mapping)) {
			/*
			 * reduce current DMA mapping usage,
			 * delay and try again later or
			 * reset driver.
			 */
			goto map_error_handling;
		}

		cp->rx_buf = buffer;
		cp->rx_len = len;
		cp->rx_dma = mapping;

		give_rx_buf_to_card(cp);
	}

	...

	my_card_interrupt_handler(int irq, void *devid, struct pt_regs *regs)
	{
		struct my_card *cp = devid;

		...
		if (read_card_status(cp) == RX_BUF_TRANSFERRED) {
			struct my_card_header *hp;

			/* Examine the header to see if we wish
			 * to accept the data.  But synchronize
			 * the DMA transfer with the CPU first
			 * so that we see updated contents.
			 */
			dma_sync_single_for_cpu(cp->dev, cp->rx_dma,
						cp->rx_len,
						DMA_FROM_DEVICE);

			/* Now it is safe to examine the buffer. */
			hp = (struct my_card_header *) cp->rx_buf;
			if (header_is_ok(hp)) {
				dma_unmap_single(cp->dev, cp->rx_dma, cp->rx_len,
						 DMA_FROM_DEVICE);
				pass_to_upper_layers(cp->rx_buf);
				make_and_setup_new_rx_buf(cp);
			} else {
				/* CPU should not write to
				 * DMA_FROM_DEVICE-mapped area,
				 * so dma_sync_single_for_device() is
				 * not needed here. It would be required
				 * for DMA_BIDIRECTIONAL mapping if
				 * the memory was modified.
				 */
				give_rx_buf_to_card(cp);
			}
		}
	}

Drivers converted fully to this interface should not use virt_to_bus() any
longer, nor should they use bus_to_virt(). Some drivers have to be changed a
little bit, because there is no longer an equivalent to bus_to_virt() in the
dynamic DMA mapping scheme - you have to always store the DMA addresses
returned by the dma_alloc_coherent(), dma_pool_alloc(), and dma_map_single()
calls (dma_map_sg() stores them in the scatterlist itself if the platform
supports dynamic DMA mapping in hardware) in your driver structures and/or
in the card registers.

All drivers should be using these interfaces with no exceptions.  It
is planned to completely remove virt_to_bus() and bus_to_virt() as
they are entirely deprecated.  Some ports already do not provide these
as it is impossible to correctly support them.

			Handling Errors

DMA address space is limited on some architectures and an allocation
failure can be determined by:

- checking if dma_alloc_coherent() returns NULL or dma_map_sg() returns 0

- checking the dma_addr_t returned from dma_map_single() and dma_map_page()
  by using dma_mapping_error():

	dma_addr_t dma_handle;

	dma_handle = dma_map_single(dev, addr, size, direction);
	if (dma_mapping_error(dev, dma_handle)) {
		/*
		 * reduce current DMA mapping usage,
		 * delay and try again later or
		 * reset driver.
		 */
		goto map_error_handling;
	}

- unmapping pages that are already mapped, when a mapping error occurs in
  the middle of a multiple page mapping attempt.  These examples are
  applicable to dma_map_page() as well.

Example 1:
	dma_addr_t dma_handle1;
	dma_addr_t dma_handle2;

	dma_handle1 = dma_map_single(dev, addr, size, direction);
	if (dma_mapping_error(dev, dma_handle1)) {
		/*
		 * reduce current DMA mapping usage,
		 * delay and try again later or
		 * reset driver.
		 */
		goto map_error_handling1;
	}
	dma_handle2 = dma_map_single(dev, addr, size, direction);
	if (dma_mapping_error(dev, dma_handle2)) {
		/*
		 * reduce current DMA mapping usage,
		 * delay and try again later or
		 * reset driver.
		 */
		goto map_error_handling2;
	}

	...

	map_error_handling2:
		dma_unmap_single(dev, dma_handle1, size, direction);
	map_error_handling1:

Example 2: (if buffers are allocated in a loop, unmap all mapped buffers when
	    mapping error is detected in the middle)

	dma_addr_t dma_addr;
	dma_addr_t array[DMA_BUFFERS];
	int save_index = 0;

	for (i = 0; i < DMA_BUFFERS; i++) {

		...

		dma_addr = dma_map_single(dev, addr, size, direction);
		if (dma_mapping_error(dev, dma_addr)) {
			/*
			 * reduce current DMA mapping usage,
			 * delay and try again later or
			 * reset driver.
			 */
			goto map_error_handling;
		}
		array[i] = dma_addr;
		save_index++;
	}

	...

	map_error_handling:

	for (i = 0; i < save_index; i++) {

		...

		dma_unmap_single(dev, array[i], size, direction);
	}

Networking drivers must call dev_kfree_skb() to free the socket buffer
and return NETDEV_TX_OK if the DMA mapping fails on the transmit hook
(ndo_start_xmit). This means that the socket buffer is just dropped in
the failure case.
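
A hedged sketch of that transmit-hook failure path ("struct my_priv"
and "priv->dev" are illustrative; the rest is the standard netdev
interface):

	static netdev_tx_t my_start_xmit(struct sk_buff *skb,
					 struct net_device *netdev)
	{
		struct my_priv *priv = netdev_priv(netdev);
		dma_addr_t mapping;

		mapping = dma_map_single(priv->dev, skb->data, skb->len,
					 DMA_TO_DEVICE);
		if (dma_mapping_error(priv->dev, mapping)) {
			dev_kfree_skb(skb);	/* drop the packet...       */
			return NETDEV_TX_OK;	/* ...but report "handled" */
		}

		/* ... queue a descriptor carrying "mapping" to the card ... */

		return NETDEV_TX_OK;
	}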

SCSI drivers must return SCSI_MLQUEUE_HOST_BUSY if the DMA mapping
fails in the queuecommand hook. This means that the SCSI subsystem
passes the command to the driver again later.

		Optimizing Unmap State Space Consumption

On many platforms, dma_unmap_{single,page}() is simply a nop.
Therefore, keeping track of the mapping address and length is a waste
of space.  Instead of filling your drivers up with ifdefs and the like
to "work around" this (which would defeat the whole purpose of a
portable API) the following facilities are provided.

Actually, instead of describing the macros one by one, we'll
transform some example code.

1) Use DEFINE_DMA_UNMAP_{ADDR,LEN} in state saving structures.
   Example, before:

	struct ring_state {
		struct sk_buff *skb;
		dma_addr_t mapping;
		__u32 len;
	};

   after:

	struct ring_state {
		struct sk_buff *skb;
		DEFINE_DMA_UNMAP_ADDR(mapping);
		DEFINE_DMA_UNMAP_LEN(len);
	};

2) Use dma_unmap_{addr,len}_set() to set these values.
   Example, before:

	ringp->mapping = FOO;
	ringp->len = BAR;

   after:

	dma_unmap_addr_set(ringp, mapping, FOO);
	dma_unmap_len_set(ringp, len, BAR);

3) Use dma_unmap_{addr,len}() to access these values.
   Example, before:

	dma_unmap_single(dev, ringp->mapping, ringp->len,
			 DMA_FROM_DEVICE);

   after:

	dma_unmap_single(dev,
			 dma_unmap_addr(ringp, mapping),
			 dma_unmap_len(ringp, len),
			 DMA_FROM_DEVICE);

It really should be self-explanatory.  We treat the ADDR and LEN
separately, because it is possible for an implementation to only
need the address in order to perform the unmap operation.

			Platform Issues

If you are just writing drivers for Linux and do not maintain
an architecture port for the kernel, you can safely skip down
to "Closing".

1) Struct scatterlist requirements.

   Don't invent an architecture-specific struct scatterlist; just use
   <asm-generic/scatterlist.h>. You need to enable
   CONFIG_NEED_SG_DMA_LENGTH if the architecture supports IOMMUs
   (including software IOMMU).

2) ARCH_DMA_MINALIGN

   Architectures must ensure that kmalloc'ed buffers are
   DMA-safe. Drivers and subsystems depend on it. If an architecture
   isn't fully DMA-coherent (i.e. hardware doesn't ensure that data in
   the CPU cache is identical to data in main memory),
   ARCH_DMA_MINALIGN must be set so that the memory allocator
   makes sure that a kmalloc'ed buffer doesn't share a cache line with
   others. See arch/arm/include/asm/cache.h as an example.

   Note that ARCH_DMA_MINALIGN is about DMA memory alignment
   constraints. You don't need to worry about the architecture data
   alignment constraints (e.g. the alignment constraints about 64-bit
   objects).
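
   As a sketch of the idea (loosely modeled on arch/arm/include/asm/cache.h;
   the actual shift is per-architecture and often a config option):

	/* In an architecture's asm/cache.h: make kmalloc() buffers occupy
	 * whole cache lines on hardware that is not DMA-coherent.
	 */
	#define L1_CACHE_SHIFT		6
	#define L1_CACHE_BYTES		(1 << L1_CACHE_SHIFT)
	#define ARCH_DMA_MINALIGN	L1_CACHE_BYTES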

3) Supporting multiple types of IOMMUs

   If your architecture needs to support multiple types of IOMMUs, you
   can use include/asm-generic/dma-mapping-common.h. It's a
   library to support the DMA API with multiple types of IOMMUs. Lots
   of architectures (x86, powerpc, sh, alpha, ia64, microblaze and
   sparc) use it. Choose one to see how it can be used. If you need to
   support multiple types of IOMMUs in a single system, the example of
   x86 or powerpc helps.

			   Closing

This document, and the API itself, would not be in its current
form without the feedback and suggestions from numerous individuals.
We would like to specifically mention, in no particular order, the
following people:

	Russell King <rmk@arm.linux.org.uk>
	Leo Dagum <dagum@barrel.engr.sgi.com>
	Ralf Baechle <ralf@oss.sgi.com>
	Grant Grundler <grundler@cup.hp.com>
	Jay Estabrook <Jay.Estabrook@compaq.com>
	Thomas Sailer <sailer@ife.ee.ethz.ch>
	Andrea Arcangeli <andrea@suse.de>
	Jens Axboe <jens.axboe@oracle.com>
	David Mosberger-Tang <davidm@hpl.hp.com>