Queue sysfs files
=================

This text file details the queue files that are located in the sysfs tree
for each block device. Note that stacked devices typically do not export
any settings, since their queue merely functions as a remapping target.
These files are the ones found in the /sys/block/xxx/queue/ directory.

Files denoted with a RO postfix are read-only, and files denoted with a
RW postfix are read-write.
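
As a quick orientation, the sketch below (Python; the device name 'sda' is
just an example, substitute any entry from /sys/block) dumps every readable
attribute in a device's queue directory:

    import os

    # Example device name; substitute any entry listed under /sys/block.
    queue_dir = "/sys/block/sda/queue"

    for name in sorted(os.listdir(queue_dir)):
        path = os.path.join(queue_dir, name)
        if not os.path.isfile(path):
            continue  # skip sub-directories such as iosched/
        try:
            with open(path) as f:
                value = f.read().strip()
        except OSError:
            value = "<unreadable>"
        print(name, "=", value)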

add_random (RW)
---------------
This file allows you to turn off the disk entropy contribution. The
default value of this file is '1' (on).

dax (RO)
--------
This file indicates whether the device supports Direct Access (DAX),
used by CPU-addressable storage to bypass the pagecache. It shows '1'
if true, '0' if not.

discard_granularity (RO)
------------------------
This shows the size of the internal allocation unit of the device in
bytes, if reported by the device. A value of '0' means the device does
not support the discard functionality.

discard_max_hw_bytes (RO)
-------------------------
Devices that support discard functionality may have internal limits on
the number of bytes that can be trimmed or unmapped in a single operation.
The discard_max_hw_bytes parameter is set by the device driver to the
maximum number of bytes that can be discarded in a single operation.
Discard requests issued to the device must not exceed this limit. A
discard_max_hw_bytes value of 0 means that the device does not support
discard functionality.

discard_max_bytes (RW)
----------------------
While discard_max_hw_bytes is the hardware limit for the device, this
setting is the software limit. Some devices exhibit large latencies when
large discards are issued; setting this value lower will make Linux issue
smaller discards and can help reduce the latencies induced by large
discard operations.
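
For example, here is a minimal sketch (hypothetical device 'sda', run as
root) that caps the software discard limit at 64 MiB, clamped to the
hardware limit:

    # Hypothetical device name; writing these files requires root.
    base = "/sys/block/sda/queue/"

    with open(base + "discard_max_hw_bytes") as f:
        hw_limit = int(f.read())

    if hw_limit == 0:
        print("device does not support discard")
    else:
        # Never exceed the hardware limit reported by the driver.
        target = min(64 * 1024 * 1024, hw_limit)
        with open(base + "discard_max_bytes", "w") as f:
            f.write(str(target))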

hw_sector_size (RO)
-------------------
This is the hardware sector size of the device, in bytes.

io_poll (RW)
------------
When read, this file shows whether polling is enabled (1) or disabled
(0). Writing '0' to this file will disable polling for this device.
Writing any non-zero value will enable this feature.

io_poll_delay (RW)
------------------
If polling is enabled, this controls what kind of polling will be
performed. It defaults to -1, which is classic polling. In this mode,
the CPU will repeatedly ask for completions without giving up any time.
If set to 0, a hybrid polling mode is used, where the kernel will attempt
to make an educated guess at when the IO will complete. Based on this
guess, the kernel will put the process issuing IO to sleep for an amount
of time, before entering a classic poll loop. This mode might be a
little slower than pure classic polling, but it will be more efficient.
If set to a value larger than 0, the kernel will put the process issuing
IO to sleep for this amount of microseconds before entering classic
polling.
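
As an illustration, a small sketch (hypothetical NVMe device 'nvme0n1',
run as root) that enables polling and selects the hybrid mode:

    # Hypothetical device; polling is typically only offered by
    # low-latency devices such as NVMe drives. Run as root.
    base = "/sys/block/nvme0n1/queue/"

    with open(base + "io_poll", "w") as f:
        f.write("1")   # any non-zero value enables polling

    with open(base + "io_poll_delay", "w") as f:
        f.write("0")   # 0 = hybrid polling; -1 = classic polling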

iostats (RW)
------------
This file is used to control (on/off) the iostats accounting of the
disk.

logical_block_size (RO)
-----------------------
This is the logical block size of the device, in bytes.

max_hw_sectors_kb (RO)
----------------------
This is the maximum number of kilobytes supported in a single data transfer.

max_integrity_segments (RO)
---------------------------
When read, this file shows the maximum number of integrity segments,
as set by the block layer, that the hardware controller can handle.

max_sectors_kb (RW)
-------------------
This is the maximum number of kilobytes that the block layer will allow
for a filesystem request. Must be smaller than or equal to the maximum
size allowed by the hardware.
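
For instance, a minimal sketch (hypothetical device 'sda', run as root)
that raises the software limit to the hardware maximum:

    # Hypothetical device name; run as root.
    base = "/sys/block/sda/queue/"

    with open(base + "max_hw_sectors_kb") as f:
        hw_max = int(f.read())  # read-only hardware limit, in kilobytes

    # max_sectors_kb must be smaller than or equal to max_hw_sectors_kb.
    with open(base + "max_sectors_kb", "w") as f:
        f.write(str(hw_max))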

max_segments (RO)
-----------------
Maximum number of segments of the device.

max_segment_size (RO)
---------------------
Maximum segment size of the device.

minimum_io_size (RO)
--------------------
This is the smallest preferred IO size reported by the device.

nomerges (RW)
-------------
This enables the user to disable the lookup logic involved with merging
IO requests in the block layer. By default (0) all merges are enabled.
When set to 1 only simple one-hit merges will be tried. When set to 2
no merge algorithms will be tried (including one-hit or more complex
tree/hash lookups).
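
A minimal sketch (hypothetical device 'sda', run as root) that turns merge
lookups off for a measurement and then restores the default:

    # Hypothetical device name; run as root.
    path = "/sys/block/sda/queue/nomerges"

    with open(path, "w") as f:
        f.write("2")   # 2 = no merge attempts at all

    # ... run the workload being measured here ...

    with open(path, "w") as f:
        f.write("0")   # 0 = all merges enabled (the default)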

nr_requests (RW)
----------------
This controls how many requests may be allocated in the block layer for
read or write requests. Note that the total allocated number may be twice
this amount, since it applies only to reads or writes (not the accumulated
sum).

To avoid priority inversion through request starvation, a request
queue maintains a separate request pool for each cgroup when
CONFIG_BLK_CGROUP is enabled, and this parameter applies to each such
per-block-cgroup request pool. In other words, if there are N block
cgroups, each request queue may have up to N request pools, each
independently regulated by nr_requests.

optimal_io_size (RO)
--------------------
This is the optimal IO size reported by the device.

physical_block_size (RO)
------------------------
This is the physical block size of the device, in bytes.

read_ahead_kb (RW)
------------------
Maximum number of kilobytes to read-ahead for filesystems on this block
device.

rotational (RW)
---------------
This file is used to state whether the device is of rotational or
non-rotational type.

rq_affinity (RW)
----------------
If this option is '1', the block layer will migrate request completions to the
cpu "group" that originally submitted the request. For some workloads this
provides a significant reduction in CPU cycles due to caching effects.

For storage configurations that need to maximize distribution of completion
processing, setting this option to '2' forces the completion to run on the
requesting cpu (bypassing the "group" aggregation logic).

scheduler (RW)
--------------
When read, this file will display the current and available IO schedulers
for this block device. The currently active IO scheduler will be enclosed
in [] brackets. Writing an IO scheduler name to this file will switch
control of this block device to that new IO scheduler. Note that writing
an IO scheduler name to this file will attempt to load that IO scheduler
module, if it isn't already present in the system.
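
For example, a small sketch (hypothetical device 'sda', run as root) that
parses the bracketed active scheduler and switches to another one listed
as available:

    import re

    # Hypothetical device name; writing requires root.
    path = "/sys/block/sda/queue/scheduler"

    with open(path) as f:
        line = f.read().strip()          # e.g. "noop deadline [cfq]"

    active = re.search(r"\[(\S+)\]", line).group(1)
    available = line.replace("[", "").replace("]", "").split()
    print("active:", active, "available:", available)

    # Switch schedulers; the kernel loads the module if needed.
    if "deadline" in available and active != "deadline":
        with open(path, "w") as f:
            f.write("deadline")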

write_cache (RW)
----------------
When read, this file will display whether the device has write back
caching enabled or not. It will return "write back" for the former
case, and "write through" for the latter. Writing to this file can
change the kernel's view of the device, but it doesn't alter the
device state. This means that it might not be safe to toggle the
setting from "write back" to "write through", since that will also
eliminate cache flushes issued by the kernel.
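
To illustrate reading this file, a short sketch (hypothetical device 'sda'):

    # Hypothetical device name.
    with open("/sys/block/sda/queue/write_cache") as f:
        mode = f.read().strip()

    if mode == "write back":
        print("volatile write cache present; the kernel issues cache flushes")
    else:  # "write through"
        print("no volatile write cache; the kernel does not issue flushes")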

write_same_max_bytes (RO)
-------------------------
This is the number of bytes the device can write in a single write-same
command. A value of '0' means write-same is not supported by this
device.

wb_lat_usec (RW)
----------------
If the device is registered for writeback throttling, then this file shows
the target minimum read latency. If this latency is exceeded in a given
window of time (see wb_window_usec), then the writeback throttling will start
scaling back writes. Writing a value of '0' to this file disables the
feature. Writing a value of '-1' to this file resets the value to the
default setting.

throttle_sample_time (RW)
-------------------------
This is the time window over which blk-throttle samples data, in
milliseconds. blk-throttle makes decisions based on these samples. A
lower sample time gives cgroups smoother throughput, but at a higher
CPU overhead. This file exists only when CONFIG_BLK_DEV_THROTTLING_LOW
is enabled.

Jens Axboe <jens.axboe@oracle.com>, February 2009