The QorIQ DPAA Ethernet Driver
==============================

Authors:
Madalin Bucur <madalin.bucur@nxp.com>
Camelia Groza <camelia.groza@nxp.com>

Contents
========

	- DPAA Ethernet Overview
	- DPAA Ethernet Supported SoCs
	- Configuring DPAA Ethernet in your kernel
	- DPAA Ethernet Frame Processing
	- DPAA Ethernet Features
	- DPAA IRQ Affinity and Receive Side Scaling
	- Debugging

DPAA Ethernet Overview
======================

DPAA stands for Data Path Acceleration Architecture; it is a set of
networking acceleration IPs available on several generations of SoCs,
on both PowerPC and ARM64.

The Freescale DPAA architecture consists of a series of hardware blocks
that support Ethernet connectivity. The Ethernet driver depends upon the
following drivers in the Linux kernel:

 - Peripheral Access Management Unit (PAMU) (* needed only for PPC platforms)
    drivers/iommu/fsl_*
 - Frame Manager (FMan)
    drivers/net/ethernet/freescale/fman
 - Queue Manager (QMan), Buffer Manager (BMan)
    drivers/soc/fsl/qbman

A simplified view of the dpaa_eth interfaces mapped to FMan MACs:

  dpaa_eth       /eth0\     ...       /ethN\
  driver        |      |             |      |
  -------------   ----   -----------   ----   -------------
       -Ports  / Tx  Rx \    ...    / Tx  Rx \
  FMan        |          |         |          |
       -MACs  |   MAC0   |         |   MACN   |
             /   dtsec0   \  ...  /   dtsecN   \ (or tgec)
            /              \     /              \(or memac)
  ---------  --------------  ---  --------------  ---------
      FMan, FMan Port, FMan SP, FMan MURAM drivers
  ---------------------------------------------------------
      FMan HW blocks: MURAM, MACs, Ports, SP
  ---------------------------------------------------------

The dpaa_eth relation to the QMan, BMan and FMan:
              ________________________________
  dpaa_eth   /            eth0                \
  driver    /                                  \
  ---------   -^-   -^-   -^-   ---    ---------
  QMan driver / \   / \   / \  \   /  | BMan    |
             |Rx | |Rx | |Tx | |Tx |  | driver  |
  ---------  |Dfl| |Err| |Cnf| |FQs|  |         |
  QMan HW    |FQ | |FQ | |FQs| |   |  |         |
             /   \ /   \ /   \  \ /   |         |
  ---------   ---   ---   ---   -v-    ---------
            |        FMan QMI         |         |
            | FMan HW       FMan BMI  | BMan HW |
              -----------------------   --------

where the acronyms used above (and in the code) are:
DPAA = Data Path Acceleration Architecture
FMan = DPAA Frame Manager
QMan = DPAA Queue Manager
BMan = DPAA Buffer Manager
QMI = QMan interface in FMan
BMI = BMan interface in FMan
FMan SP = FMan Storage Profiles
MURAM = Multi-user RAM in FMan
FQ = QMan Frame Queue
Rx Dfl FQ = default reception FQ
Rx Err FQ = Rx error frames FQ
Tx Cnf FQ = Tx confirmation FQs
Tx FQs = transmission frame queues
dtsec = datapath three speed Ethernet controller (10/100/1000 Mbps)
tgec = ten gigabit Ethernet controller (10 Gbps)
memac = multirate Ethernet MAC (10/100/1000/10000 Mbps)

DPAA Ethernet Supported SoCs
============================

The DPAA drivers enable the Ethernet controllers present on the following SoCs:

# PPC
P1023
P2041
P3041
P4080
P5020
P5040
T1023
T1024
T1040
T1042
T2080
T4240
B4860

# ARM
LS1043A
LS1046A

Configuring DPAA Ethernet in your kernel
========================================

To enable the DPAA Ethernet driver, the following Kconfig options are required:

# common for arch/arm64 and arch/powerpc platforms
CONFIG_FSL_DPAA=y
CONFIG_FSL_FMAN=y
CONFIG_FSL_DPAA_ETH=y
CONFIG_FSL_XGMAC_MDIO=y

# for arch/powerpc only
CONFIG_FSL_PAMU=y

# common options needed for the PHYs used on the RDBs
CONFIG_VITESSE_PHY=y
CONFIG_REALTEK_PHY=y
CONFIG_AQUANTIA_PHY=y

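With these options enabled, the DPAA Ethernet interfaces are registered
as regular network interfaces at boot. As a quick sanity check (a
sketch; interface names such as fm1-mac9 used throughout this document
depend on the platform and its device tree), one can list an interface
and query its driver binding:

	# ip link show fm1-mac9
	# ethtool -i fm1-mac9
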
DPAA Ethernet Frame Processing
==============================

On Rx, buffers for the incoming frames are retrieved from one of the three
existing buffer pools. The driver initializes and seeds these, each with
buffers of a different size: 1KB, 2KB and 4KB.

On Tx, all transmitted frames are returned to the driver through Tx
confirmation frame queues. The driver is then responsible for freeing the
buffers. In order to do this properly, a backpointer to the skb is added
to the buffer before transmission. When the buffer returns to the driver
on a confirmation FQ, the skb can be correctly consumed.

DPAA Ethernet Features
======================

Currently the DPAA Ethernet driver enables the basic features required for
a Linux Ethernet driver. Support for advanced features will be added
gradually.

The driver has Rx and Tx checksum offloading for UDP and TCP. Currently the Rx
checksum offload feature is enabled by default and cannot be controlled through
ethtool. Also, rx-flow-hash and rx-hashing were added. The addition of RSS
provides a significant performance boost in forwarding scenarios, allowing
different traffic flows received by one interface to be processed by different
CPUs in parallel.

The driver supports multiple prioritized Tx traffic classes. Priorities
range from 0 (lowest) to 3 (highest). These are mapped to HW workqueues with
strict priority levels. Each traffic class contains NR_CPUS TX queues. By
default, only one traffic class is enabled and the lowest priority Tx queues
are used. Higher priority traffic classes can be enabled with the mqprio
qdisc. skb priority levels are mapped to traffic classes as follows:

	* priorities 0 to 3 - traffic class 0 (low priority)
	* priorities 4 to 7 - traffic class 1 (medium-low priority)
	* priorities 8 to 11 - traffic class 2 (medium-high priority)
	* priorities 12 to 15 - traffic class 3 (high priority)

For example, all four traffic classes are enabled on an interface with the
following command:

tc qdisc add dev <int> root handle 1: \
	 mqprio num_tc 4 map 0 0 0 0 1 1 1 1 2 2 2 2 3 3 3 3 hw 1
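
The resulting qdisc and its priority to traffic class map can be
inspected with:

	# tc qdisc show dev <int>

Applications can then direct traffic into a given class by setting the
socket priority (e.g. via the SO_PRIORITY socket option), which
determines the skb priority used in the mapping above.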

DPAA IRQ Affinity and Receive Side Scaling
==========================================

Traffic coming in on the DPAA Rx queues or on the DPAA Tx confirmation
queues is seen by the CPU as ingress traffic on a certain portal.
The DPAA QMan portal interrupts are each affined to a certain CPU.
The same portal interrupt services all the QMan portal consumers.
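
The mapping of portal interrupts to CPUs can be inspected through the
usual procfs interfaces (a sketch; the interrupt names listed in
/proc/interrupts and the exact IRQ numbers are platform dependent):

	# grep -i qman /proc/interrupts
	# cat /proc/irq/<irq>/smp_affinity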

By default the DPAA Ethernet driver enables RSS, making use of the
DPAA FMan Parser and Keygen blocks to distribute traffic on 128
hardware frame queues using a hash on IPv4/v6 source and destination
addresses and L4 source and destination ports, if present in the
received frame. When RSS is disabled, all traffic received by a certain
interface is received on the default Rx frame queue. The default DPAA
Rx frame queues are configured to put the received traffic into a pool
channel that allows any available CPU portal to dequeue the ingress
traffic. The default frame queues have the HOLDACTIVE option set,
ensuring that traffic bursts from a certain queue are serviced by the
same CPU. This ensures a very low rate of frame reordering. A drawback
is that only one CPU at a time can service the traffic received by a
certain interface when RSS is not enabled.

To implement RSS, the DPAA Ethernet driver allocates an extra set of
128 Rx frame queues that are configured to dedicated channels, in a
round-robin manner. The mapping of the frame queues to CPUs is
hardcoded; there is no indirection table to move traffic for a certain
FQ (hash result) to another CPU. The ingress traffic arriving on one
of these frame queues will arrive at the same portal and will always
be processed by the same CPU. This ensures intra-flow order preservation
and workload distribution for multiple traffic flows.

RSS can be turned off for a certain interface using ethtool, e.g.:

	# ethtool -N fm1-mac9 rx-flow-hash tcp4 ""

To turn it back on, one needs to set rx-flow-hash for tcp4/6 or udp4/6:

	# ethtool -N fm1-mac9 rx-flow-hash udp4 sfdn

There is no independent control for individual protocols; any command
run for one of tcp4|udp4|ah4|esp4|sctp4|tcp6|udp6|ah6|esp6|sctp6 is
going to control the rx-flow-hashing for all protocols on that interface.
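
The current rx-flow-hash settings can be queried with ethtool as well,
e.g.:

	# ethtool -n fm1-mac9 rx-flow-hash tcp4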

Besides using the FMan Keygen computed hash for spreading traffic on the
128 Rx FQs, the DPAA Ethernet driver also sets the skb hash value when
the NETIF_F_RXHASH feature is on (active by default). This can be turned
on or off through ethtool, e.g.:

	# ethtool -K fm1-mac9 rx-hashing off
	# ethtool -k fm1-mac9 | grep hash
	receive-hashing: off
	# ethtool -K fm1-mac9 rx-hashing on
	Actual changes:
	receive-hashing: on
	# ethtool -k fm1-mac9 | grep hash
	receive-hashing: on

Please note that Rx hashing depends upon rx-flow-hashing being on for
that interface: turning off rx-flow-hashing will also disable rx-hashing
(without ethtool reporting it as off, since that reflects only the
NETIF_F_RXHASH feature flag).

Debugging
=========

The following statistics are exported for each interface through ethtool:

	- interrupt count per CPU
	- Rx packets count per CPU
	- Tx packets count per CPU
	- Tx confirmed packets count per CPU
	- Tx S/G frames count per CPU
	- Tx error count per CPU
	- Rx error count per CPU
	- Rx error count per type
	- congestion related statistics:
		- congestion status
		- time spent in congestion
		- number of times the device entered congestion
		- dropped packets count per cause

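A dump of all these counters for an interface can be obtained with
ethtool, e.g.:

	# ethtool -S fm1-mac9
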
The driver also exports the following information in sysfs:

	- the FQ IDs for each FQ type
	/sys/devices/platform/dpaa-ethernet.0/net/<int>/fqids

	- the IDs of the buffer pools in use
	/sys/devices/platform/dpaa-ethernet.0/net/<int>/bpids
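
These files can be read directly, substituting the actual interface
name for <int>, e.g.:

	# cat /sys/devices/platform/dpaa-ethernet.0/net/<int>/fqids
	# cat /sys/devices/platform/dpaa-ethernet.0/net/<int>/bpids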