=====================
I/O statistics fields
=====================

Since 2.4.20 (and some versions before, with patches), and 2.5.45,
more extensive disk statistics have been introduced to help measure disk
activity. Tools such as ``sar`` and ``iostat`` typically interpret these and do
the work for you, but in case you are interested in creating your own
tools, the fields are explained here.

In 2.4, the information is found as additional fields in
``/proc/partitions``.  In 2.6 and later, the same information is found in two
places: one is in the file ``/proc/diskstats``, and the other is within
the sysfs file system, which must be mounted in order to obtain
the information. Throughout this document we'll assume that sysfs
is mounted on ``/sys``, although of course it may be mounted anywhere.
Both ``/proc/diskstats`` and sysfs use the same source for the information
and so should not differ.

Here are examples of these different formats::

   2.4:
      3     0   39082680 hda 446216 784926 9550688 4382310 424847 312726 5922052 19310380 0 3376340 23705160
      3     1    9221278 hda1 35486 0 35496 38030 0 0 0 0 0 38030 38030

   2.6+ sysfs:
      446216 784926 9550688 4382310 424847 312726 5922052 19310380 0 3376340 23705160
      35486 38030 38030 38030

   2.6+ diskstats:
      3     0   hda 446216 784926 9550688 4382310 424847 312726 5922052 19310380 0 3376340 23705160
      3     1   hda1 35486 38030 38030 38030

On 2.4 you might execute ``grep 'hda ' /proc/partitions``. On 2.6+, you have
a choice of ``cat /sys/block/hda/stat`` or ``grep 'hda ' /proc/diskstats``.

The advantage of one over the other is that the sysfs choice works well
if you are watching a known, small set of disks.  ``/proc/diskstats`` may
be a better choice if you are watching a large number of disks because
you'll avoid the overhead of 50, 100, or 500 or more opens/closes with
each snapshot of your disk statistics.

In 2.4, the statistics fields are those after the device name.  In
the above example, the first field of statistics would be 446216.
By contrast, in 2.6+ if you look at ``/sys/block/hda/stat``, you'll
find just the eleven fields, beginning with 446216.  If you look at
``/proc/diskstats``, the eleven fields will be preceded by the major and
minor device numbers, and device name.  Each of these formats provides
eleven fields of statistics, each meaning exactly the same things.
All fields except field 9 are cumulative since boot.  Field 9 should
go to zero as I/Os complete; all others only increase (unless they
overflow and wrap).  Yes, these are (32-bit or 64-bit) unsigned long
(native word size) numbers, and on a very busy or long-lived system they
may wrap.  Applications should be prepared to deal with that; unless
your observations are measured in large numbers of minutes or hours,
they should not wrap twice before you notice them.

Each set of stats only applies to the indicated device; if you want
system-wide stats you'll have to find all the devices and sum them all up.

Field  1 -- # of reads completed
    This is the total number of reads completed successfully.

Field  2 -- # of reads merged, field 6 -- # of writes merged
    Reads and writes which are adjacent to each other may be merged for
    efficiency.  Thus two 4K reads may become one 8K read before it is
    ultimately handed to the disk, and so it will be counted (and queued)
    as only one I/O.  This field lets you know how often this was done.

Field  3 -- # of sectors read
    This is the total number of sectors read successfully.

Field  4 -- # of milliseconds spent reading
    This is the total number of milliseconds spent by all reads (as
    measured from __make_request() to end_that_request_last()).

Field  5 -- # of writes completed
    This is the total number of writes completed successfully.

Field  6 -- # of writes merged
    See the description of field 2.

Field  7 -- # of sectors written
    This is the total number of sectors written successfully.

Field  8 -- # of milliseconds spent writing
    This is the total number of milliseconds spent by all writes (as
    measured from __make_request() to end_that_request_last()).

Field  9 -- # of I/Os currently in progress
    The only field that should go to zero.  Incremented as requests are
    given to the appropriate struct request_queue and decremented as they
    finish.

Field 10 -- # of milliseconds spent doing I/Os
    This field increases so long as field 9 is nonzero.

Field 11 -- weighted # of milliseconds spent doing I/Os
    This field is incremented at each I/O start, I/O completion, I/O
    merge, or read of these stats by the number of I/Os in progress
    (field 9) times the number of milliseconds spent doing I/O since the
    last update of this field.  This can provide an easy measure of both
    I/O completion time and the backlog that may be accumulating.
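For illustration only, here is a small Python sketch of the sort of
arithmetic a monitoring tool performs on these eleven fields (it is not
how ``iostat`` or ``sar`` are actually implemented): it takes two snapshots
of ``/sys/block/<device>/stat`` and turns the deltas into rates.  The field
labels, helper names, and the example device ``sda`` are inventions of this
sketch, and sector counts are assumed to be in the usual 512-byte units::

   #!/usr/bin/env python3
   # Illustrative only: derive rates from two snapshots of the eleven
   # fields described above.  Field labels are this sketch's own names,
   # not kernel identifiers; the eleven-field layout is assumed.
   import time

   FIELDS = [
       "reads_completed", "reads_merged", "sectors_read", "read_ms",
       "writes_completed", "writes_merged", "sectors_written", "write_ms",
       "ios_in_progress",   # field 9: instantaneous, not cumulative
       "io_ms",             # field 10: grows while field 9 is nonzero
       "weighted_io_ms",    # field 11: field 9 x elapsed ms, accumulated
   ]

   def snapshot(dev):
       with open("/sys/block/%s/stat" % dev) as f:
           return dict(zip(FIELDS, map(int, f.read().split())))

   def rates(dev, interval=1.0):
       before = snapshot(dev)
       time.sleep(interval)
       after = snapshot(dev)
       d = {k: after[k] - before[k] for k in FIELDS}
       ms = interval * 1000.0
       return {
           "reads/s":  d["reads_completed"] / interval,
           "writes/s": d["writes_completed"] / interval,
           # sector counts assumed to be 512-byte units
           "read_kB/s":  d["sectors_read"] * 512 / 1024.0 / interval,
           "write_kB/s": d["sectors_written"] * 512 / 1024.0 / interval,
           # fraction of the interval during which I/Os were in flight
           "util": min(max(d["io_ms"] / ms, 0.0), 1.0),
           # average number of requests in flight over the interval
           "avg_queue": max(d["weighted_io_ms"], 0) / ms,
       }

   if __name__ == "__main__":
       print(rates("sda"))   # "sda" is only an example device name

Note that a single snapshot of field 9 is already meaningful on its own,
whereas the cumulative fields only become useful once you difference two
snapshots as above.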
To avoid introducing performance bottlenecks, no locks are held while
modifying these counters.  This implies that minor inaccuracies may be
introduced when changes collide, so (for instance) adding up all the
read I/Os issued per partition should equal those made to the disks ...
but due to the lack of locking it may only be very close.

In 2.6+, there are counters for each CPU, which make the lack of locking
almost a non-issue.  When the statistics are read, the per-CPU counters
are summed (possibly overflowing the unsigned long variable they are
summed to) and the result given to the user.  There is no convenient
user interface for accessing the per-CPU counters themselves.

Disks vs Partitions
-------------------

There were significant changes between 2.4 and 2.6+ in the I/O subsystem.
As a result, some statistic information disappeared.  The translation from
a disk address relative to a partition to the disk address relative to
the host disk happens much earlier.  All merges and timings now happen
at the disk level rather than at both the disk and partition level as
in 2.4.  Consequently, you'll see a different statistics output on 2.6+ for
partitions from that for disks.  There are only *four* fields available
for partitions on 2.6+ machines.  This is reflected in the examples above.

Field  1 -- # of reads issued
    This is the total number of reads issued to this partition.

Field  2 -- # of sectors read
    This is the total number of sectors requested to be read from this
    partition.

Field  3 -- # of writes issued
    This is the total number of writes issued to this partition.

Field  4 -- # of sectors written
    This is the total number of sectors requested to be written to
    this partition.

Note that since the address is translated to a disk-relative one, and no
record of the partition-relative address is kept, the subsequent success
or failure of the read cannot be attributed to the partition.  In other
words, the number of reads for partitions is counted slightly before the
time of queuing for partitions, and at completion for whole disks.  This is
a subtle distinction that is probably uninteresting for most cases.

More significant is the error induced by counting the numbers of
reads/writes before merges for partitions and after for disks.  Since a
typical workload usually contains a lot of successive and adjacent requests,
the number of reads/writes issued can be several times higher than the
number of reads/writes completed.

In 2.6.25, the full statistic set is again available for partitions, and
disk and partition statistics are consistent again.  Since we still don't
keep a record of the partition-relative address, an operation is attributed
to the partition which contains the first sector of the request after the
eventual merges.  As requests can be merged across partitions, this could
lead to some (probably insignificant) inaccuracy.
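The practical effect of counting before merges for partitions and after
merges for disks can be seen by summing a disk's per-partition read counts
and comparing the total with the whole-disk figure.  The sketch below
(Python again; the helper names and the example device ``sda`` are only
illustrative) does that using the first stat field of each file::

   #!/usr/bin/env python3
   # Illustrative only: compare field 1 summed across a disk's
   # partitions with field 1 of the whole disk.
   import os

   def first_field(path):
       with open(path) as f:
           return int(f.read().split()[0])

   def partition_read_sum(disk):
       base = "/sys/block/%s" % disk
       total = 0
       for name in os.listdir(base):
           part = os.path.join(base, name)
           # partition directories carry a "partition" attribute in sysfs
           if os.path.isdir(part) and \
                   os.path.exists(os.path.join(part, "partition")):
               total += first_field(os.path.join(part, "stat"))
       return total

   if __name__ == "__main__":
       disk = "sda"          # example device name only
       print("whole disk reads:   ", first_field("/sys/block/%s/stat" % disk))
       print("sum over partitions:", partition_read_sum(disk))

On 2.6 kernels earlier than 2.6.25 the partition figure counts reads
*issued* (before merging), so the partition total can be several times
larger than the disk's completed count; from 2.6.25 on the two should agree
closely, differing only by requests that do not fall within any partition
and by the small lock-free imprecision described earlier.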
Additional notes
----------------

In 2.6+, sysfs is not mounted by default.  If your distribution of
Linux hasn't added it already, here's the line you'll want to add to
your ``/etc/fstab``::

   none /sys sysfs defaults 0 0


In 2.6+, all disk statistics were removed from ``/proc/stat``.  In 2.4, they
appear in both ``/proc/partitions`` and ``/proc/stat``, although the ones in
``/proc/stat`` take a very different format from those in
``/proc/partitions`` (see proc(5), if your system has it.)

-- ricklind@us.ibm.com