ORANGEFS
========

OrangeFS is an LGPL userspace scale-out parallel storage system. It is
ideal for large storage problems faced by HPC, BigData, streaming video,
genomics and bioinformatics.

Orangefs, originally called PVFS, was first developed in 1993 by
Walt Ligon and Eric Blumer as a parallel file system for Parallel
Virtual Machine (PVM) as part of a NASA grant to study the I/O patterns
of parallel programs.

Orangefs features include:

  * Distributes file data among multiple file servers
  * Supports simultaneous access by multiple clients
  * Stores file data and metadata on servers using local file system
    and access methods
  * Userspace implementation is easy to install and maintain
  * Direct MPI support
  * Stateless


MAILING LIST
============

http://beowulf-underground.org/mailman/listinfo/pvfs2-users


DOCUMENTATION
=============

http://www.orangefs.org/documentation/


USERSPACE FILESYSTEM SOURCE
===========================

http://www.orangefs.org/download

Orangefs versions prior to 2.9.3 are not compatible with the
upstream version of the kernel client.


BUILDING THE USERSPACE FILESYSTEM ON A SINGLE SERVER
====================================================

You can omit --prefix if you don't care that things are sprinkled around
in /usr/local. As of version 2.9.6, Orangefs uses Berkeley DB by
default; the default will probably change to lmdb soon.

  ./configure --prefix=/opt/ofs --with-db-backend=lmdb

  make

  make install

Create an orangefs config file:

  /opt/ofs/bin/pvfs2-genconfig /etc/pvfs2.conf

For "Enter hostnames", use the actual hostname; don't let it default to
localhost.

Create a pvfs2tab file in /etc:

  cat /etc/pvfs2tab
  tcp://myhostname:3334/orangefs /mymountpoint pvfs2 defaults,noauto 0 0

Create the mount point you specified in the tab file if needed:

  mkdir /mymountpoint

Bootstrap the server:

  /opt/ofs/sbin/pvfs2-server /etc/pvfs2.conf -f

Start the server:

  /opt/ofs/sbin/pvfs2-server /etc/pvfs2.conf

Now the server is running. At this point you might like to prove that
things are working with:

  /opt/ofs/bin/pvfs2-ls /mymountpoint

If things seem to be working, turn on the client core:

  /opt/ofs/sbin/pvfs2-client -p /opt/ofs/sbin/pvfs2-client-core

Mount your filesystem:

  mount -t pvfs2 tcp://myhostname:3334/orangefs /mymountpoint


OPTIONS
=======

The following mount options are accepted (an example invocation follows
the list):

  acl
    Allow the use of Access Control Lists on files and directories.

  intr
    Some operations between the kernel client and the user space
    filesystem can be interruptible, such as changes in debug levels
    and the setting of tunable parameters.

  local_lock
    Enable POSIX locking from the perspective of "this" kernel. The
    default file_operations lock action is to return ENOSYS. POSIX
    locking kicks in if the filesystem is mounted with -o local_lock.
    Distributed locking is being worked on for the future.
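For example, to mount with ACLs and local POSIX locking enabled, using
the hostname and mount point from the build example above:

  mount -t pvfs2 -o acl,local_lock tcp://myhostname:3334/orangefs /mymountpoint
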
DEBUGGING
=========

If you want the debug (GOSSIP) statements in a particular
source file (inode.c for example) to go to syslog:

  echo inode > /sys/kernel/debug/orangefs/kernel-debug

No debugging (the default):

  echo none > /sys/kernel/debug/orangefs/kernel-debug

Debugging from several source files:

  echo inode,dir > /sys/kernel/debug/orangefs/kernel-debug

All debugging:

  echo all > /sys/kernel/debug/orangefs/kernel-debug

Get a list of all debugging keywords:

  cat /sys/kernel/debug/orangefs/debug-help


PROTOCOL BETWEEN KERNEL MODULE AND USERSPACE
============================================

Orangefs is a user space filesystem and an associated kernel module.
We'll just refer to the user space part of Orangefs as "userspace"
from here on out. Orangefs descends from PVFS, and userspace code
still uses PVFS for function and variable names. Userspace typedefs
many of the important structures. Function and variable names in
the kernel module have been transitioned to "orangefs", and the Linux
kernel coding style avoids typedefs, so kernel module structures that
correspond to userspace structures are not typedefed.

The kernel module implements a pseudo device that userspace
can read from and write to. Userspace can also manipulate the
kernel module through the pseudo device with ioctl.

THE BUFMAP:

At startup userspace allocates two page-size-aligned (posix_memalign)
mlocked memory buffers, one used for IO and one used for readdir
operations. The IO buffer is 41943040 bytes and the readdir buffer is
4194304 bytes. Each buffer contains logical chunks, or partitions, and
a pointer to each buffer is added to its own PVFS_dev_map_desc structure
which also describes its total size, as well as the size and number of
the partitions.
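In outline, the userspace side of the IO-buffer setup could look like
the sketch below. The struct, function, and parameter names here
(dev_map_desc, map_io_buffer, devfd, map_ioctl) are illustrative
stand-ins, not the real PVFS identifiers:

  #include <stdint.h>
  #include <stdlib.h>
  #include <sys/ioctl.h>
  #include <sys/mman.h>

  struct dev_map_desc {        /* stand-in for PVFS_dev_map_desc */
          void *ptr;           /* start of the buffer */
          int32_t total_size;  /* 41943040 for the IO buffer */
          int32_t size;        /* partition size: 4194304 */
          int32_t count;       /* number of partitions: 10 */
  };

  static int map_io_buffer(int devfd, unsigned long map_ioctl)
  {
          struct dev_map_desc desc;
          void *buf;

          /* page-size-aligned, mlocked, 10 x 4194304 bytes */
          if (posix_memalign(&buf, 4096, 10 * 4194304))
                  return -1;
          if (mlock(buf, 10 * 4194304))
                  return -1;

          desc.ptr = buf;
          desc.total_size = 10 * 4194304;
          desc.size = 4194304;
          desc.count = 10;

          /* the kernel module copy_from_user()s desc and uses it
           * to initialize its bufmap */
          return ioctl(devfd, map_ioctl, &desc);
  }
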
A pointer to the IO buffer's PVFS_dev_map_desc structure is sent to a
mapping routine in the kernel module with an ioctl. The structure is
copied from user space to kernel space with copy_from_user and is used
to initialize the kernel module's "bufmap" (struct orangefs_bufmap),
which then contains:

  * refcnt - a reference counter
  * desc_size - PVFS2_BUFMAP_DEFAULT_DESC_SIZE (4194304) - the IO buffer's
    partition size, which represents the filesystem's block size and
    is used for s_blocksize in super blocks.
  * desc_count - PVFS2_BUFMAP_DEFAULT_DESC_COUNT (10) - the number of
    partitions in the IO buffer.
  * desc_shift - log2(desc_size), used for s_blocksize_bits in super blocks.
  * total_size - the total size of the IO buffer.
  * page_count - the number of 4096 byte pages in the IO buffer.
  * page_array - a pointer to page_count * (sizeof(struct page*)) bytes
    of kcalloced memory. This memory is used as an array of pointers
    to each of the pages in the IO buffer through a call to get_user_pages.
  * desc_array - a pointer to desc_count * (sizeof(struct orangefs_bufmap_desc))
    bytes of kcalloced memory. This memory is further initialized:

      user_desc is the kernel's copy of the IO buffer's ORANGEFS_dev_map_desc
      structure. user_desc->ptr points to the IO buffer.

        pages_per_desc = bufmap->desc_size / PAGE_SIZE
        offset = 0

        bufmap->desc_array[0].page_array = &bufmap->page_array[offset]
        bufmap->desc_array[0].array_count = pages_per_desc = 1024
        bufmap->desc_array[0].uaddr = (user_desc->ptr) + (0 * 1024 * 4096)
        offset += 1024

                .
                .
                .

        bufmap->desc_array[9].page_array = &bufmap->page_array[offset]
        bufmap->desc_array[9].array_count = pages_per_desc = 1024
        bufmap->desc_array[9].uaddr = (user_desc->ptr) + (9 * 1024 * 4096)
        offset += 1024

  * buffer_index_array - a desc_count sized array of ints, used to
    indicate which of the IO buffer's partitions are available to use
    (see the sketch after this list).
  * buffer_index_lock - a spinlock to protect buffer_index_array during
    update.
  * readdir_index_array - a five (ORANGEFS_READDIR_DEFAULT_DESC_COUNT)
    element int array used to indicate which of the readdir buffer's
    partitions are available to use.
  * readdir_index_lock - a spinlock to protect readdir_index_array during
    update.
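For illustration, claiming a free IO-buffer partition index against
buffer_index_array might look like the sketch below; the types are
simplified, and the real kernel code (fs/orangefs/orangefs-bufmap.c)
also waits for a partition to free up when all of them are busy:

  /* Sketch only: find and claim a free partition index.  The caller
   * is assumed to hold buffer_index_lock across the scan. */
  static int get_free_partition(int *buffer_index_array, int desc_count)
  {
          int i;

          for (i = 0; i < desc_count; i++) {
                  if (buffer_index_array[i] == 0) {
                          buffer_index_array[i] = 1;  /* mark in use */
                          return i;                   /* partition index */
                  }
          }
          return -1;  /* every partition is busy right now */
  }
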
OPERATIONS:

The kernel module builds an "op" (struct orangefs_kernel_op_s) when it
needs to communicate with userspace. Part of the op contains the "upcall"
which expresses the request to userspace. Part of the op eventually
contains the "downcall" which expresses the results of the request.

The slab allocator is used to keep a cache of op structures handy.

At init time the kernel module defines and initializes a request list
and an in_progress hash table to keep track of all the ops that are
in flight at any given time.

Ops are stateful:

  * unknown  - op was just initialized
  * waiting  - op is on request_list (upward bound)
  * inprogr  - op is in progress (waiting for downcall)
  * serviced - op has matching downcall; ok
  * purged   - op has to start a timer since client-core
               exited uncleanly before servicing op
  * given up - submitter has given up waiting for it
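Rendered as a C enum, the state machine might look like this sketch
(names and values here are illustrative; the kernel's own definition
is the orangefs_vfs_op_states enum in fs/orangefs/orangefs-kernel.h):

  /* Illustrative sketch of the op state machine described above. */
  enum op_state {
          OP_UNKNOWN  = 0,       /* op was just initialized */
          OP_WAITING  = 1 << 0,  /* on the request list, upward bound */
          OP_INPROGR  = 1 << 1,  /* upcall read; waiting for downcall */
          OP_SERVICED = 1 << 2,  /* matching downcall arrived; ok */
          OP_PURGED   = 1 << 3,  /* client-core died before servicing */
          OP_GIVEN_UP = 1 << 4,  /* submitter stopped waiting */
  };
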
When some arbitrary userspace program needs to perform a
filesystem operation on Orangefs (readdir, I/O, create, whatever)
an op structure is initialized and tagged with a distinguishing ID
number. The upcall part of the op is filled out, and the op is
passed to the "service_operation" function.

Service_operation changes the op's state to "waiting", puts
it on the request list, and signals the Orangefs file_operations.poll
function through a wait queue. Userspace is polling the pseudo-device
and thus becomes aware of the upcall request that needs to be read.

When the Orangefs file_operations.read function is triggered, the
request list is searched for an op that seems ready-to-process.
The op is removed from the request list. The tag from the op and
the filled-out upcall struct are copy_to_user'ed back to userspace.

If any of these copy_to_user calls (and some additional protocol ones)
fail, the op's state is set to "waiting" and the op is added back to
the request list. Otherwise, the op's state is changed to "in progress",
and the op is hashed on its tag and put onto the end of a list in the
in_progress hash table at the index the tag hashed to.

When userspace has assembled the response to the upcall, it
writes the response, which includes the distinguishing tag, back to
the pseudo device in a series of io_vecs. This triggers the Orangefs
file_operations.write_iter function to find the op with the associated
tag and remove it from the in_progress hash table. As long as the op's
state is not "canceled" or "given up", its state is set to "serviced".
The file_operations.write_iter function returns to the waiting vfs,
and back to service_operation through wait_for_matching_downcall.

Service_operation returns to its caller with the op's downcall
part (the response to the upcall) filled out.

The "client-core" is the bridge between the kernel module and
userspace. The client-core is a daemon. The client-core has an
associated watchdog daemon. If the client-core is ever signaled
to die, the watchdog daemon restarts the client-core. Even though
the client-core is restarted "right away", there is a period of
time during such an event that the client-core is dead. A dead
client-core can't be triggered by the Orangefs file_operations.poll
function. Ops that pass through service_operation during a "dead spell"
can time out on the wait queue, and one attempt is made to recycle them.
Obviously, if the client-core stays dead too long, the arbitrary
userspace processes trying to use Orangefs will be negatively affected.
Waiting ops that can't be serviced will be removed from the request
list and have their states set to "given up". In-progress ops that
can't be serviced will be removed from the in_progress hash table and
have their states set to "given up".

Readdir and I/O ops are atypical with respect to their payloads.

  - readdir ops use the smaller of the two pre-allocated pre-partitioned
    memory buffers. The readdir buffer is only available to userspace.
    The kernel module obtains an index to a free partition before
    launching a readdir op. Userspace deposits the results into the
    indexed partition and then writes them back to the pvfs device.

  - io (read and write) ops use the larger of the two pre-allocated
    pre-partitioned memory buffers. The IO buffer is accessible from
    both userspace and the kernel module. The kernel module obtains an
    index to a free partition before launching an io op. The kernel
    module deposits write data into the indexed partition, to be
    consumed directly by userspace. Userspace deposits the results of
    read requests into the indexed partition, to be consumed directly
    by the kernel module.

Responses to kernel requests are all packaged in pvfs2_downcall_t
structs. Besides a few other members, pvfs2_downcall_t contains a
union of structs, each of which is associated with a particular
response type.

The several members outside of the union are:

  - int32_t type - type of operation.
  - int32_t status - return code for the operation.
  - int64_t trailer_size - 0 unless readdir operation.
  - char *trailer_buf - initialized to NULL, used during readdir
    operations.

The appropriate member inside the union is filled out for any
particular response:

  PVFS2_VFS_OP_FILE_IO
    fill a pvfs2_io_response_t

  PVFS2_VFS_OP_LOOKUP
    fill a PVFS_object_kref

  PVFS2_VFS_OP_CREATE
    fill a PVFS_object_kref

  PVFS2_VFS_OP_SYMLINK
    fill a PVFS_object_kref

  PVFS2_VFS_OP_GETATTR
    fill in a PVFS_sys_attr_s (tons of stuff the kernel doesn't need)
    fill in a string with the link target when the object is a symlink.

  PVFS2_VFS_OP_MKDIR
    fill a PVFS_object_kref

  PVFS2_VFS_OP_STATFS
    fill a pvfs2_statfs_response_t with useless info <g>. It is hard for
    us to know, in a timely fashion, these statistics about our
    distributed network filesystem.

  PVFS2_VFS_OP_FS_MOUNT
    fill a pvfs2_fs_mount_response_t which is just like a PVFS_object_kref
    except its members are in a different order and "__pad1" is replaced
    with "id".

  PVFS2_VFS_OP_GETXATTR
    fill a pvfs2_getxattr_response_t

  PVFS2_VFS_OP_LISTXATTR
    fill a pvfs2_listxattr_response_t

  PVFS2_VFS_OP_PARAM
    fill a pvfs2_param_response_t

  PVFS2_VFS_OP_PERF_COUNT
    fill a pvfs2_perf_count_response_t

  PVFS2_VFS_OP_FSKEY
    fill a pvfs2_fs_key_response_t

  PVFS2_VFS_OP_READDIR
    jam everything needed to represent a pvfs2_readdir_response_t into
    the readdir buffer descriptor specified in the upcall.

Userspace uses writev() on /dev/pvfs2-req to pass responses to the
requests made by the kernel side.

A buffer_list containing:

  - a pointer to the prepared response to the request from the
    kernel (struct pvfs2_downcall_t).
  - and also, in the case of a readdir request, a pointer to a
    buffer containing descriptors for the objects in the target
    directory.

...is sent to the function (PINT_dev_write_list) which performs
the writev.

PINT_dev_write_list has a local iovec array: struct iovec io_array[10];

The first four elements of io_array are initialized like this for all
responses:

  io_array[0].iov_base = address of local variable "proto_ver" (int32_t)
  io_array[0].iov_len = sizeof(int32_t)

  io_array[1].iov_base = address of global variable "pdev_magic" (int32_t)
  io_array[1].iov_len = sizeof(int32_t)

  io_array[2].iov_base = address of parameter "tag" (PVFS_id_gen_t)
  io_array[2].iov_len = sizeof(int64_t)

  io_array[3].iov_base = address of out_downcall member (pvfs2_downcall_t)
                         of global variable vfs_request (vfs_request_t)
  io_array[3].iov_len = sizeof(pvfs2_downcall_t)

Readdir responses initialize the fifth element of io_array like this:

  io_array[4].iov_base = contents of member trailer_buf (char *)
                         from out_downcall member of global variable
                         vfs_request
  io_array[4].iov_len = contents of member trailer_size (PVFS_size)
                        from out_downcall member of global variable
                        vfs_request
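Condensed into C, the assembly and writev just described might look
like the sketch below; the function name and parameter list are
illustrative stand-ins for the real PINT_dev_write_list:

  #include <stdint.h>
  #include <sys/types.h>
  #include <sys/uio.h>

  /* Illustrative stand-in for PINT_dev_write_list: package the fixed
   * protocol header, the tag, the downcall, and (for readdir) the
   * trailer into iovecs and write them to the pseudo device. */
  ssize_t write_downcall(int devfd, int32_t proto_ver, int32_t pdev_magic,
                         int64_t tag, void *downcall, size_t downcall_size,
                         void *trailer_buf, size_t trailer_size)
  {
          struct iovec io_array[5];
          int count = 4;

          io_array[0].iov_base = &proto_ver;
          io_array[0].iov_len = sizeof(int32_t);
          io_array[1].iov_base = &pdev_magic;
          io_array[1].iov_len = sizeof(int32_t);
          io_array[2].iov_base = &tag;
          io_array[2].iov_len = sizeof(int64_t);
          io_array[3].iov_base = downcall;
          io_array[3].iov_len = downcall_size;

          if (trailer_size) {             /* readdir responses only */
                  io_array[4].iov_base = trailer_buf;
                  io_array[4].iov_len = trailer_size;
                  count = 5;
          }

          return writev(devfd, io_array, count);
  }
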
Orangefs exploits the dcache in order to avoid sending redundant
requests to userspace. We keep object inode attributes up-to-date with
orangefs_inode_getattr. Orangefs_inode_getattr uses two arguments to
help it decide whether or not to update an inode: "new" and "bypass".
Orangefs keeps private data in an object's inode that includes a short
timeout value, getattr_time, which allows any iteration of
orangefs_inode_getattr to know how long it has been since the inode was
updated. When the object is not new (new == 0) and the bypass flag is
not set (bypass == 0), orangefs_inode_getattr returns without updating
the inode if getattr_time has not timed out. Getattr_time is updated
each time the inode is updated.

Creation of a new object (file, dir, sym-link) includes the evaluation
of its pathname, resulting in a negative directory entry for the object.
A new inode is allocated and associated with the dentry, turning it from
a negative dentry into a "productive full member of society". Orangefs
obtains the new inode from Linux with new_inode() and associates
the inode with the dentry by sending the pair back to Linux with
d_instantiate().

The evaluation of a pathname for an object resolves to its corresponding
dentry. If there is no corresponding dentry, one is created for it in
the dcache. Whenever a dentry is modified or verified, Orangefs stores a
short timeout value in the dentry's d_time, and the dentry will be
trusted for that amount of time. Orangefs is a network filesystem, and
objects can potentially change out-of-band with respect to any
particular Orangefs kernel module instance, so trusting a dentry is
risky. The alternative to trusting dentries is to always obtain the
needed information from userspace - at least a trip to the client-core,
maybe to the servers. Obtaining information from a dentry is cheap;
obtaining it from userspace is relatively expensive, hence the
motivation to use the dentry when possible.

The timeout values d_time and getattr_time are jiffy based, and the
code is designed to avoid the jiffy-wrap problem:

"In general, if the clock may have wrapped around more than once, there
is no way to tell how much time has elapsed. However, if the times t1
and t2 are known to be fairly close, we can reliably compute the
difference in a way that takes into account the possibility that the
clock may have wrapped between times."

                 from course notes by instructor Andy Wang
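
A self-contained sketch of that wrap-safe comparison; the kernel
packages the same trick as the time_before()/time_after() macros in
<linux/jiffies.h>:

  /* Wrap-safe "t1 is before t2" for a free-running unsigned counter.
   * The signed subtraction gives the right answer even if the counter
   * wrapped between the two samples, as long as they are less than
   * half the counter's range apart. */
  static inline int before(unsigned long t1, unsigned long t2)
  {
          return (long)(t1 - t2) < 0;
  }

  /* e.g. cached attributes could be treated as fresh while
   * before(jiffies, getattr_time) holds, with getattr_time having
   * been set to jiffies plus the timeout when the inode was last
   * updated. */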