Based on kernel version 4.15.

1	=======================
2	Kernel Probes (Kprobes)
3	=======================
5	:Author: Jim Keniston <jkenisto@us.ibm.com>
6	:Author: Prasanna S Panchamukhi <prasanna.panchamukhi@gmail.com>
7	:Author: Masami Hiramatsu <mhiramat@redhat.com>
11	  1. Concepts: Kprobes, and Return Probes
12	  2. Architectures Supported
13	  3. Configuring Kprobes
14	  4. API Reference
15	  5. Kprobes Features and Limitations
16	  6. Probe Overhead
17	  7. TODO
18	  8. Kprobes Example
19	  9. Kretprobes Example
20	  10. Deprecated Features
21	  Appendix A: The kprobes debugfs interface
22	  Appendix B: The kprobes sysctl interface
24	Concepts: Kprobes and Return Probes
25	=========================================
27	Kprobes enables you to dynamically break into any kernel routine and
28	collect debugging and performance information non-disruptively. You
29	can trap at almost any kernel code address [1]_, specifying a handler
30	routine to be invoked when the breakpoint is hit.
.. [1] some parts of the kernel code cannot be trapped, see
       :ref:`kprobes_blacklist`
35	There are currently two types of probes: kprobes, and kretprobes
36	(also called return probes).  A kprobe can be inserted on virtually
37	any instruction in the kernel.  A return probe fires when a specified
38	function returns.
40	In the typical case, Kprobes-based instrumentation is packaged as
41	a kernel module.  The module's init function installs ("registers")
42	one or more probes, and the exit function unregisters them.  A
43	registration function such as register_kprobe() specifies where
44	the probe is to be inserted and what handler is to be called when
45	the probe is hit.
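
For illustration, such a module might look like the following sketch (the
probed symbol and the handler body are hypothetical; see
samples/kprobes/kprobe_example.c for the complete, maintained example)::

	#include <linux/kernel.h>
	#include <linux/module.h>
	#include <linux/kprobes.h>

	/* called just before the probed instruction is executed */
	static int my_pre_handler(struct kprobe *p, struct pt_regs *regs)
	{
		pr_info("kprobe hit at %p\n", p->addr);
		return 0;
	}

	static struct kprobe my_kprobe = {
		.symbol_name	= "do_sys_open",	/* hypothetical probe target */
		.pre_handler	= my_pre_handler,
	};

	static int __init my_module_init(void)
	{
		/* returns 0 on success, negative errno on failure */
		return register_kprobe(&my_kprobe);
	}

	static void __exit my_module_exit(void)
	{
		unregister_kprobe(&my_kprobe);
	}

	module_init(my_module_init);
	module_exit(my_module_exit);
	MODULE_LICENSE("GPL");
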
47	There are also ``register_/unregister_*probes()`` functions for batch
48	registration/unregistration of a group of ``*probes``. These functions
49	can speed up unregistration process when you have to unregister
50	a lot of probes at once.
52	The next four subsections explain how the different types of
53	probes work and how jump optimization works.  They explain certain
54	things that you'll need to know in order to make the best use of
55	Kprobes -- e.g., the difference between a pre_handler and
56	a post_handler, and how to use the maxactive and nmissed fields of
57	a kretprobe.  But if you're in a hurry to start using Kprobes, you
58	can skip ahead to :ref:`kprobes_archs_supported`.
60	How Does a Kprobe Work?
61	-----------------------
63	When a kprobe is registered, Kprobes makes a copy of the probed
64	instruction and replaces the first byte(s) of the probed instruction
65	with a breakpoint instruction (e.g., int3 on i386 and x86_64).
67	When a CPU hits the breakpoint instruction, a trap occurs, the CPU's
68	registers are saved, and control passes to Kprobes via the
69	notifier_call_chain mechanism.  Kprobes executes the "pre_handler"
70	associated with the kprobe, passing the handler the addresses of the
71	kprobe struct and the saved registers.
73	Next, Kprobes single-steps its copy of the probed instruction.
74	(It would be simpler to single-step the actual instruction in place,
75	but then Kprobes would have to temporarily remove the breakpoint
76	instruction.  This would open a small time window when another CPU
77	could sail right past the probepoint.)
79	After the instruction is single-stepped, Kprobes executes the
80	"post_handler," if any, that is associated with the kprobe.
81	Execution then continues with the instruction following the probepoint.
83	Return Probes
84	-------------
86	How Does a Return Probe Work?
87	^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
89	When you call register_kretprobe(), Kprobes establishes a kprobe at
90	the entry to the function.  When the probed function is called and this
91	probe is hit, Kprobes saves a copy of the return address, and replaces
92	the return address with the address of a "trampoline."  The trampoline
93	is an arbitrary piece of code -- typically just a nop instruction.
94	At boot time, Kprobes registers a kprobe at the trampoline.
96	When the probed function executes its return instruction, control
97	passes to the trampoline and that probe is hit.  Kprobes' trampoline
98	handler calls the user-specified return handler associated with the
99	kretprobe, then sets the saved instruction pointer to the saved return
100	address, and that's where execution resumes upon return from the trap.
102	While the probed function is executing, its return address is
103	stored in an object of type kretprobe_instance.  Before calling
104	register_kretprobe(), the user sets the maxactive field of the
105	kretprobe struct to specify how many instances of the specified
106	function can be probed simultaneously.  register_kretprobe()
107	pre-allocates the indicated number of kretprobe_instance objects.
109	For example, if the function is non-recursive and is called with a
110	spinlock held, maxactive = 1 should be enough.  If the function is
111	non-recursive and can never relinquish the CPU (e.g., via a semaphore
112	or preemption), NR_CPUS should be enough.  If maxactive <= 0, it is
113	set to a default value.  If CONFIG_PREEMPT is enabled, the default
114	is max(10, 2*NR_CPUS).  Otherwise, the default is NR_CPUS.
116	It's not a disaster if you set maxactive too low; you'll just miss
117	some probes.  In the kretprobe struct, the nmissed field is set to
118	zero when the return probe is registered, and is incremented every
119	time the probed function is entered but there is no kretprobe_instance
120	object available for establishing the return probe.
122	Kretprobe entry-handler
123	^^^^^^^^^^^^^^^^^^^^^^^
125	Kretprobes also provides an optional user-specified handler which runs
126	on function entry. This handler is specified by setting the entry_handler
127	field of the kretprobe struct. Whenever the kprobe placed by kretprobe at the
128	function entry is hit, the user-defined entry_handler, if any, is invoked.
129	If the entry_handler returns 0 (success) then a corresponding return handler
130	is guaranteed to be called upon function return. If the entry_handler
131	returns a non-zero error then Kprobes leaves the return address as is, and
132	the kretprobe has no further effect for that particular function instance.
134	Multiple entry and return handler invocations are matched using the unique
135	kretprobe_instance object associated with them. Additionally, a user
136	may also specify per return-instance private data to be part of each
137	kretprobe_instance object. This is especially useful when sharing private
138	data between corresponding user entry and return handlers. The size of each
139	private data object can be specified at kretprobe registration time by
140	setting the data_size field of the kretprobe struct. This data can be
141	accessed through the data field of each kretprobe_instance object.
In case the probed function is entered but there is no kretprobe_instance
144	object available, then in addition to incrementing the nmissed count,
145	the user entry_handler invocation is also skipped.
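
As a sketch of how these pieces fit together (the probed symbol is
hypothetical; samples/kprobes/kretprobe_example.c is the maintained
example), an entry handler can stash a timestamp in the per-instance data
and the return handler can report the elapsed time::

	#include <linux/kernel.h>
	#include <linux/module.h>
	#include <linux/kprobes.h>
	#include <linux/ktime.h>

	/* per-instance private data, sized via data_size below */
	struct my_data {
		ktime_t entry_stamp;
	};

	static int entry_handler(struct kretprobe_instance *ri, struct pt_regs *regs)
	{
		struct my_data *data = (struct my_data *)ri->data;

		data->entry_stamp = ktime_get();
		return 0;	/* 0 => the return handler will be called */
	}

	static int ret_handler(struct kretprobe_instance *ri, struct pt_regs *regs)
	{
		struct my_data *data = (struct my_data *)ri->data;
		s64 delta = ktime_to_ns(ktime_sub(ktime_get(), data->entry_stamp));

		pr_info("%s took %lld ns\n", ri->rp->kp.symbol_name, delta);
		return 0;
	}

	static struct kretprobe my_kretprobe = {
		.kp.symbol_name	= "do_sys_open",	/* hypothetical probe target */
		.entry_handler	= entry_handler,
		.handler	= ret_handler,
		.data_size	= sizeof(struct my_data),
		.maxactive	= 20,
	};

	static int __init my_init(void)
	{
		return register_kretprobe(&my_kretprobe);
	}

	static void __exit my_exit(void)
	{
		unregister_kretprobe(&my_kretprobe);
		pr_info("missed %d instances\n", my_kretprobe.nmissed);
	}

	module_init(my_init);
	module_exit(my_exit);
	MODULE_LICENSE("GPL");
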
147	.. _kprobes_jump_optimization:
149	How Does Jump Optimization Work?
150	--------------------------------
If your kernel is built with CONFIG_OPTPROBES=y (currently this flag
is automatically set to 'y' on x86/x86-64 for non-preemptive kernels) and
154	the "debug.kprobes_optimization" kernel parameter is set to 1 (see
155	sysctl(8)), Kprobes tries to reduce probe-hit overhead by using a jump
156	instruction instead of a breakpoint instruction at each probepoint.
158	Init a Kprobe
159	^^^^^^^^^^^^^
161	When a probe is registered, before attempting this optimization,
162	Kprobes inserts an ordinary, breakpoint-based kprobe at the specified
163	address. So, even if it's not possible to optimize this particular
164	probepoint, there'll be a probe there.
166	Safety Check
167	^^^^^^^^^^^^
169	Before optimizing a probe, Kprobes performs the following safety checks:
171	- Kprobes verifies that the region that will be replaced by the jump
172	  instruction (the "optimized region") lies entirely within one function.
173	  (A jump instruction is multiple bytes, and so may overlay multiple
174	  instructions.)
176	- Kprobes analyzes the entire function and verifies that there is no
177	  jump into the optimized region.  Specifically:
179	  - the function contains no indirect jump;
180	  - the function contains no instruction that causes an exception (since
181	    the fixup code triggered by the exception could jump back into the
182	    optimized region -- Kprobes checks the exception tables to verify this);
183	  - there is no near jump to the optimized region (other than to the first
184	    byte).
186	- For each instruction in the optimized region, Kprobes verifies that
187	  the instruction can be executed out of line.
189	Preparing Detour Buffer
190	^^^^^^^^^^^^^^^^^^^^^^^
192	Next, Kprobes prepares a "detour" buffer, which contains the following
193	instruction sequence:
195	- code to push the CPU's registers (emulating a breakpoint trap)
- a call to the trampoline code which calls the user's probe handlers.
197	- code to restore registers
198	- the instructions from the optimized region
199	- a jump back to the original execution path.
201	Pre-optimization
202	^^^^^^^^^^^^^^^^
204	After preparing the detour buffer, Kprobes verifies that none of the
205	following situations exist:
207	- The probe has a post_handler.
208	- Other instructions in the optimized region are probed.
209	- The probe is disabled.
211	In any of the above cases, Kprobes won't start optimizing the probe.
212	Since these are temporary situations, Kprobes tries to start
213	optimizing it again if the situation is changed.
215	If the kprobe can be optimized, Kprobes enqueues the kprobe to an
216	optimizing list, and kicks the kprobe-optimizer workqueue to optimize
217	it.  If the to-be-optimized probepoint is hit before being optimized,
218	Kprobes returns control to the original instruction path by setting
219	the CPU's instruction pointer to the copied code in the detour buffer
220	-- thus at least avoiding the single-step.
222	Optimization
223	^^^^^^^^^^^^
225	The Kprobe-optimizer doesn't insert the jump instruction immediately;
226	rather, it calls synchronize_sched() for safety first, because it's
227	possible for a CPU to be interrupted in the middle of executing the
228	optimized region [3]_.  As you know, synchronize_sched() can ensure
229	that all interruptions that were active when synchronize_sched()
230	was called are done, but only if CONFIG_PREEMPT=n.  So, this version
231	of kprobe optimization supports only kernels with CONFIG_PREEMPT=n [4]_.
233	After that, the Kprobe-optimizer calls stop_machine() to replace
234	the optimized region with a jump instruction to the detour buffer,
235	using text_poke_smp().
237	Unoptimization
238	^^^^^^^^^^^^^^
240	When an optimized kprobe is unregistered, disabled, or blocked by
241	another kprobe, it will be unoptimized.  If this happens before
242	the optimization is complete, the kprobe is just dequeued from the
243	optimized list.  If the optimization has been done, the jump is
244	replaced with the original code (except for an int3 breakpoint in
245	the first byte) by using text_poke_smp().
.. [3] Please imagine that the 2nd instruction is interrupted and then
   the optimizer replaces the 2nd instruction with the jump *address*
   while the interrupt handler is running. When the interrupt handler
   returns to the original address, there is no valid instruction there,
   which causes unexpected results.
253	.. [4] This optimization-safety checking may be replaced with the
254	   stop-machine method that ksplice uses for supporting a CONFIG_PREEMPT=y
255	   kernel.
257	NOTE for geeks:
258	The jump optimization changes the kprobe's pre_handler behavior.
259	Without optimization, the pre_handler can change the kernel's execution
260	path by changing regs->ip and returning 1.  However, when the probe
261	is optimized, that modification is ignored.  Thus, if you want to
262	tweak the kernel's execution path, you need to suppress optimization,
263	using one of the following techniques:
265	- Specify an empty function for the kprobe's post_handler or break_handler.
267	or
269	- Execute 'sysctl -w debug.kprobes_optimization=n'
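
For example, a sketch of the first technique (the target and replacement
functions are hypothetical, and the regs->ip assignment is x86-specific)::

	/* hypothetical function to divert execution to, defined elsewhere */
	void my_replacement_func(void);

	static int redirect_pre_handler(struct kprobe *p, struct pt_regs *regs)
	{
		/* x86: point the saved instruction pointer at the replacement */
		regs->ip = (unsigned long)my_replacement_func;
		return 1;	/* non-zero: we changed the execution path */
	}

	/* an empty post_handler keeps this probepoint from being optimized */
	static void dummy_post_handler(struct kprobe *p, struct pt_regs *regs,
				       unsigned long flags)
	{
	}

	static struct kprobe redirect_kprobe = {
		.symbol_name	= "some_kernel_function",	/* hypothetical */
		.pre_handler	= redirect_pre_handler,
		.post_handler	= dummy_post_handler,
	};
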
271	.. _kprobes_blacklist:
273	Blacklist
274	---------
Kprobes can probe most of the kernel except itself. This means
that there are some functions that kprobes cannot probe. Probing
(trapping) such functions can cause a recursive trap (e.g. a double
fault) or cause the nested probe handler never to be called.
Kprobes manages such functions as a blacklist.
If you want to add a function to the blacklist, you just need
to (1) include linux/kprobes.h and (2) use the NOKPROBE_SYMBOL() macro
to specify the blacklisted function.
Kprobes checks the given probe address against the blacklist and
rejects registration if the address is in the blacklist.
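
For example, a sketch of blacklisting a (hypothetical) helper function::

	#include <linux/kprobes.h>

	/* this function must never be probed, e.g. because it is called
	 * from the kprobe handling path itself */
	static int my_fragile_helper(unsigned long addr)
	{
		return addr != 0;
	}
	NOKPROBE_SYMBOL(my_fragile_helper);
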
287	.. _kprobes_archs_supported:
289	Architectures Supported
290	=======================
292	Kprobes and return probes are implemented on the following
293	architectures:
295	- i386 (Supports jump optimization)
296	- x86_64 (AMD-64, EM64T) (Supports jump optimization)
297	- ppc64
298	- ia64 (Does not support probes on instruction slot1.)
299	- sparc64 (Return probes not yet implemented.)
300	- arm
301	- ppc
302	- mips
303	- s390
305	Configuring Kprobes
306	===================
308	When configuring the kernel using make menuconfig/xconfig/oldconfig,
309	ensure that CONFIG_KPROBES is set to "y". Under "General setup", look
310	for "Kprobes".
312	So that you can load and unload Kprobes-based instrumentation modules,
313	make sure "Loadable module support" (CONFIG_MODULES) and "Module
314	unloading" (CONFIG_MODULE_UNLOAD) are set to "y".
316	Also make sure that CONFIG_KALLSYMS and perhaps even CONFIG_KALLSYMS_ALL
317	are set to "y", since kallsyms_lookup_name() is used by the in-kernel
318	kprobe address resolution code.
320	If you need to insert a probe in the middle of a function, you may find
321	it useful to "Compile the kernel with debug info" (CONFIG_DEBUG_INFO),
322	so you can use "objdump -d -l vmlinux" to see the source-to-object
323	code mapping.
325	API Reference
326	=============
328	The Kprobes API includes a "register" function and an "unregister"
329	function for each type of probe. The API also includes "register_*probes"
330	and "unregister_*probes" functions for (un)registering arrays of probes.
331	Here are terse, mini-man-page specifications for these functions and
332	the associated probe handlers that you'll write. See the files in the
333	samples/kprobes/ sub-directory for examples.
335	register_kprobe
336	---------------
338	::
340		#include <linux/kprobes.h>
341		int register_kprobe(struct kprobe *kp);
343	Sets a breakpoint at the address kp->addr.  When the breakpoint is
344	hit, Kprobes calls kp->pre_handler.  After the probed instruction
is single-stepped, Kprobes calls kp->post_handler.  If a fault
346	occurs during execution of kp->pre_handler or kp->post_handler,
347	or during single-stepping of the probed instruction, Kprobes calls
kp->fault_handler.  Any or all handlers can be NULL. If kp->flags
has KPROBE_FLAG_DISABLED set, that kp will be registered but disabled,
so its handlers aren't hit until enable_kprobe(kp) is called.
352	.. note::
354	   1. With the introduction of the "symbol_name" field to struct kprobe,
355	      the probepoint address resolution will now be taken care of by the kernel.
356	      The following will now work::
358		kp.symbol_name = "symbol_name";
360	      (64-bit powerpc intricacies such as function descriptors are handled
361	      transparently)
363	   2. Use the "offset" field of struct kprobe if the offset into the symbol
364	      to install a probepoint is known. This field is used to calculate the
365	      probepoint.
367	   3. Specify either the kprobe "symbol_name" OR the "addr". If both are
368	      specified, kprobe registration will fail with -EINVAL.
   4. With CISC architectures (such as i386 and x86_64), the kprobes code
      does not validate whether kprobe.addr lies on an instruction boundary.
372	      Use "offset" with caution.
374	register_kprobe() returns 0 on success, or a negative errno otherwise.
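
For example, a sketch that registers a probe by symbol name plus an offset,
reusing the module skeleton shown earlier (the symbol, offset and handler
are hypothetical; the offset must fall on an instruction boundary)::

	static struct kprobe kp = {
		.symbol_name	= "do_sys_open",	/* hypothetical symbol */
		.offset		= 0x10,		/* hypothetical offset into the symbol */
		.pre_handler	= my_pre_handler,	/* handler as in the skeleton above */
	};

	static int __init my_probe_init(void)
	{
		int ret = register_kprobe(&kp);

		if (ret < 0)
			pr_err("register_kprobe failed: %d\n", ret);
		return ret;
	}
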
376	User's pre-handler (kp->pre_handler)::
378		#include <linux/kprobes.h>
379		#include <linux/ptrace.h>
380		int pre_handler(struct kprobe *p, struct pt_regs *regs);
382	Called with p pointing to the kprobe associated with the breakpoint,
383	and regs pointing to the struct containing the registers saved when
384	the breakpoint was hit.  Return 0 here unless you're a Kprobes geek.
386	User's post-handler (kp->post_handler)::
388		#include <linux/kprobes.h>
389		#include <linux/ptrace.h>
390		void post_handler(struct kprobe *p, struct pt_regs *regs,
391				  unsigned long flags);
393	p and regs are as described for the pre_handler.  flags always seems
394	to be zero.
396	User's fault-handler (kp->fault_handler)::
398		#include <linux/kprobes.h>
399		#include <linux/ptrace.h>
400		int fault_handler(struct kprobe *p, struct pt_regs *regs, int trapnr);
402	p and regs are as described for the pre_handler.  trapnr is the
403	architecture-specific trap number associated with the fault (e.g.,
404	on i386, 13 for a general protection fault or 14 for a page fault).
405	Returns 1 if it successfully handled the exception.
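
For example, a minimal fault-handler sketch that just reports the trap and
lets the kernel handle the fault normally::

	static int my_fault_handler(struct kprobe *p, struct pt_regs *regs, int trapnr)
	{
		pr_info("fault_handler: probe at %p, trap #%d\n", p->addr, trapnr);
		/* return 0 so the kernel's own fault handling still runs */
		return 0;
	}
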
407	register_kretprobe
408	------------------
410	::
412		#include <linux/kprobes.h>
413		int register_kretprobe(struct kretprobe *rp);
415	Establishes a return probe for the function whose address is
416	rp->kp.addr.  When that function returns, Kprobes calls rp->handler.
417	You must set rp->maxactive appropriately before you call
418	register_kretprobe(); see "How Does a Return Probe Work?" for details.
420	register_kretprobe() returns 0 on success, or a negative errno
421	otherwise.
423	User's return-probe handler (rp->handler)::
425		#include <linux/kprobes.h>
426		#include <linux/ptrace.h>
427		int kretprobe_handler(struct kretprobe_instance *ri,
428				      struct pt_regs *regs);
430	regs is as described for kprobe.pre_handler.  ri points to the
431	kretprobe_instance object, of which the following fields may be
432	of interest:
434	- ret_addr: the return address
435	- rp: points to the corresponding kretprobe object
436	- task: points to the corresponding task struct
437	- data: points to per return-instance private data; see "Kretprobe
438		entry-handler" for details.
440	The regs_return_value(regs) macro provides a simple abstraction to
441	extract the return value from the appropriate register as defined by
442	the architecture's ABI.
444	The handler's return value is currently ignored.
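
For example, a return-handler sketch that logs the probed function's return
value (assuming the kretprobe was registered via kp.symbol_name)::

	static int my_ret_handler(struct kretprobe_instance *ri, struct pt_regs *regs)
	{
		unsigned long retval = regs_return_value(regs);

		pr_info("%s returned %lu to %pS\n",
			ri->rp->kp.symbol_name, retval, ri->ret_addr);
		return 0;	/* the return value is currently ignored */
	}
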
446	unregister_*probe
447	------------------
449	::
451		#include <linux/kprobes.h>
452		void unregister_kprobe(struct kprobe *kp);
453		void unregister_kretprobe(struct kretprobe *rp);
455	Removes the specified probe.  The unregister function can be called
456	at any time after the probe has been registered.
458	.. note::
   If the functions find an incorrect probe (e.g. an unregistered probe),
461	   they clear the addr field of the probe.
463	register_*probes
464	----------------
466	::
468		#include <linux/kprobes.h>
469		int register_kprobes(struct kprobe **kps, int num);
470		int register_kretprobes(struct kretprobe **rps, int num);
472	Registers each of the num probes in the specified array.  If any
473	error occurs during registration, all probes in the array, up to
474	the bad probe, are safely unregistered before the register_*probes
475	function returns.
- kps/rps: an array of pointers to ``*probe`` data structures
- num: the number of array entries.
480	.. note::
   You have to allocate (or define) an array of pointers and set all
483	   of the array entries before using these functions.
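
For example, a sketch that registers (and later unregisters) two kprobes
with a single call (the symbols and the shared handler are hypothetical)::

	static struct kprobe kp1 = {
		.symbol_name	= "do_sys_open",	/* hypothetical */
		.pre_handler	= my_pre_handler,
	};
	static struct kprobe kp2 = {
		.symbol_name	= "do_fork",		/* hypothetical */
		.pre_handler	= my_pre_handler,
	};
	static struct kprobe *my_kprobes[] = { &kp1, &kp2 };

	static int __init my_init(void)
	{
		/* on error, probes registered so far are unregistered again */
		return register_kprobes(my_kprobes, ARRAY_SIZE(my_kprobes));
	}

	static void __exit my_exit(void)
	{
		unregister_kprobes(my_kprobes, ARRAY_SIZE(my_kprobes));
	}
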
485	unregister_*probes
486	------------------
488	::
490		#include <linux/kprobes.h>
491		void unregister_kprobes(struct kprobe **kps, int num);
492		void unregister_kretprobes(struct kretprobe **rps, int num);
494	Removes each of the num probes in the specified array at once.
496	.. note::
   If the functions find some incorrect probes (e.g. unregistered
499	   probes) in the specified array, they clear the addr field of those
500	   incorrect probes. However, other probes in the array are
501	   unregistered correctly.
503	disable_*probe
504	--------------
506	::
508		#include <linux/kprobes.h>
509		int disable_kprobe(struct kprobe *kp);
510		int disable_kretprobe(struct kretprobe *rp);
Temporarily disables the specified ``*probe``. You can enable it again by using
enable_*probe(). You must specify a probe that has already been registered.
515	enable_*probe
516	-------------
518	::
520		#include <linux/kprobes.h>
521		int enable_kprobe(struct kprobe *kp);
522		int enable_kretprobe(struct kretprobe *rp);
Enables a ``*probe`` which has been disabled by disable_*probe(). You must
specify a probe that has already been registered.
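
For example, a sketch that temporarily mutes an already-registered kprobe
and later re-arms it (kp is assumed to be a kprobe registered as shown
earlier)::

	/* kp must already have been registered with register_kprobe() */
	static void pause_my_probe(bool pause)
	{
		/* disable_kprobe(): handlers stop firing, probe stays registered;
		 * enable_kprobe(): handlers start firing again */
		int ret = pause ? disable_kprobe(&kp) : enable_kprobe(&kp);

		if (ret < 0)
			pr_err("%s kprobe failed: %d\n",
			       pause ? "disabling" : "enabling", ret);
	}
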
527	Kprobes Features and Limitations
528	================================
530	Kprobes allows multiple probes at the same address. Also,
531	a probepoint for which there is a post_handler cannot be optimized.
So if you install a kprobe with a post_handler at an optimized
probepoint, the probepoint will be unoptimized automatically.
535	In general, you can install a probe anywhere in the kernel.
536	In particular, you can probe interrupt handlers.  Known exceptions
537	are discussed in this section.
539	The register_*probe functions will return -EINVAL if you attempt
540	to install a probe in the code that implements Kprobes (mostly
541	kernel/kprobes.c and ``arch/*/kernel/kprobes.c``, but also functions such
542	as do_page_fault and notifier_call_chain).
544	If you install a probe in an inline-able function, Kprobes makes
545	no attempt to chase down all inline instances of the function and
546	install probes there.  gcc may inline a function without being asked,
547	so keep this in mind if you're not seeing the probe hits you expect.
549	A probe handler can modify the environment of the probed function
550	-- e.g., by modifying kernel data structures, or by modifying the
551	contents of the pt_regs struct (which are restored to the registers
552	upon return from the breakpoint).  So Kprobes can be used, for example,
553	to install a bug fix or to inject faults for testing.  Kprobes, of
554	course, has no way to distinguish the deliberately injected faults
555	from the accidental ones.  Don't drink and probe.
557	Kprobes makes no attempt to prevent probe handlers from stepping on
558	each other -- e.g., probing printk() and then calling printk() from a
559	probe handler.  If a probe handler hits a probe, that second probe's
560	handlers won't be run in that instance, and the kprobe.nmissed member
561	of the second probe will be incremented.
563	As of Linux v2.6.15-rc1, multiple handlers (or multiple instances of
564	the same handler) may run concurrently on different CPUs.
566	Kprobes does not use mutexes or allocate memory except during
567	registration and unregistration.
569	Probe handlers are run with preemption disabled.  Depending on the
architecture and optimization state, handlers may also run with
interrupts disabled (on x86/x86-64, for example, kretprobe handlers
and optimized kprobe handlers run with interrupts enabled).  In any case,
573	your handler should not yield the CPU (e.g., by attempting to acquire
574	a semaphore).
576	Since a return probe is implemented by replacing the return
577	address with the trampoline's address, stack backtraces and calls
578	to __builtin_return_address() will typically yield the trampoline's
579	address instead of the real return address for kretprobed functions.
580	(As far as we can tell, __builtin_return_address() is used only
581	for instrumentation and error reporting.)
583	If the number of times a function is called does not match the number
584	of times it returns, registering a return probe on that function may
produce undesirable results. In such a case, a line::

	kretprobe BUG!: Processing kretprobe d000000000041aa8 @ c00000000004f48c

gets printed. With this information, one will be able to correlate the
exact instance of the kretprobe that caused the problem. We have the
do_exit() case covered. do_execve() and do_fork() are not an issue.
590	We're unaware of other specific cases where this could be a problem.
592	If, upon entry to or exit from a function, the CPU is running on
593	a stack other than that of the current task, registering a return
594	probe on that function may produce undesirable results.  For this
595	reason, Kprobes doesn't support return probes (or kprobes)
596	on the x86_64 version of __switch_to(); the registration functions
597	return -EINVAL.
599	On x86/x86-64, since the Jump Optimization of Kprobes modifies
600	instructions widely, there are some limitations to optimization. To
601	explain it, we introduce some terminology. Imagine a 3-instruction
sequence consisting of two 2-byte instructions and one 3-byte
603	instruction.
605	::
607			IA
608			|
609		[-2][-1][0][1][2][3][4][5][6][7]
610			[ins1][ins2][  ins3 ]
611			[<-     DCR       ->]
612			[<- JTPR ->]
614		ins1: 1st Instruction
615		ins2: 2nd Instruction
616		ins3: 3rd Instruction
617		IA:  Insertion Address
618		JTPR: Jump Target Prohibition Region
619		DCR: Detoured Code Region
621	The instructions in DCR are copied to the out-of-line buffer
622	of the kprobe, because the bytes in DCR are replaced by
623	a 5-byte jump instruction. So there are several limitations.
625	a) The instructions in DCR must be relocatable.
626	b) The instructions in DCR must not include a call instruction.
627	c) JTPR must not be targeted by any jump or call instruction.
628	d) DCR must not straddle the border between functions.
630	Anyway, these limitations are checked by the in-kernel instruction
631	decoder, so you don't need to worry about that.
633	Probe Overhead
634	==============
636	On a typical CPU in use in 2005, a kprobe hit takes 0.5 to 1.0
637	microseconds to process.  Specifically, a benchmark that hits the same
638	probepoint repeatedly, firing a simple handler each time, reports 1-2
639	million hits per second, depending on the architecture.  A return-probe
640	hit typically takes 50-75% longer than a kprobe hit.
641	When you have a return probe set on a function, adding a kprobe at
642	the entry to that function adds essentially no overhead.
644	Here are sample overhead figures (in usec) for different architectures::
646	  k = kprobe; r = return probe; kr = kprobe + return probe
647	  on same function
649	  i386: Intel Pentium M, 1495 MHz, 2957.31 bogomips
650	  k = 0.57 usec; r = 0.92; kr = 0.99
652	  x86_64: AMD Opteron 246, 1994 MHz, 3971.48 bogomips
653	  k = 0.49 usec; r = 0.80; kr = 0.82
655	  ppc64: POWER5 (gr), 1656 MHz (SMT disabled, 1 virtual CPU per physical CPU)
656	  k = 0.77 usec; r = 1.26; kr = 1.45
658	Optimized Probe Overhead
659	------------------------
661	Typically, an optimized kprobe hit takes 0.07 to 0.1 microseconds to
662	process. Here are sample overhead figures (in usec) for x86 architectures::
664	  k = unoptimized kprobe, b = boosted (single-step skipped), o = optimized kprobe,
665	  r = unoptimized kretprobe, rb = boosted kretprobe, ro = optimized kretprobe.
667	  i386: Intel(R) Xeon(R) E5410, 2.33GHz, 4656.90 bogomips
668	  k = 0.80 usec; b = 0.33; o = 0.05; r = 1.10; rb = 0.61; ro = 0.33
670	  x86-64: Intel(R) Xeon(R) E5410, 2.33GHz, 4656.90 bogomips
671	  k = 0.99 usec; b = 0.43; o = 0.06; r = 1.24; rb = 0.68; ro = 0.30
673	TODO
674	====
676	a. SystemTap (http://sourceware.org/systemtap): Provides a simplified
677	   programming interface for probe-based instrumentation.  Try it out.
678	b. Kernel return probes for sparc64.
679	c. Support for other architectures.
680	d. User-space probes.
681	e. Watchpoint probes (which fire on data references).
683	Kprobes Example
684	===============
686	See samples/kprobes/kprobe_example.c
688	Kretprobes Example
689	==================
691	See samples/kprobes/kretprobe_example.c
693	For additional information on Kprobes, refer to the following URLs:
695	- http://www-106.ibm.com/developerworks/library/l-kprobes.html?ca=dgr-lnxw42Kprobe
696	- http://www.redhat.com/magazine/005mar05/features/kprobes/
697	- http://www-users.cs.umn.edu/~boutcher/kprobes/
698	- http://www.linuxsymposium.org/2006/linuxsymposium_procv2.pdf (pages 101-115)
700	Deprecated Features
701	===================
Jprobes is now a deprecated feature. People who depend on it should
migrate to other tracing features or use older kernels. Please consider
migrating your tool to one of the following options:
707	- Use trace-event to trace target function with arguments.
  trace-event is a low-overhead, statically defined event interface (with
  almost no visible overhead when it is off). You can define new events
  and trace them via ftrace or any other tracing tools.
  See the following URLs:
715	    - https://lwn.net/Articles/379903/
716	    - https://lwn.net/Articles/381064/
717	    - https://lwn.net/Articles/383362/
719	- Use ftrace dynamic events (kprobe event) with perf-probe.
  If you build your kernel with debug info (CONFIG_DEBUG_INFO=y), you can
  find which register or stack slot is assigned to which local variable or
  argument by using perf-probe, and set up a new event to trace it.
  See the following documents:
727	  - Documentation/trace/kprobetrace.txt
728	  - Documentation/trace/events.txt
729	  - tools/perf/Documentation/perf-probe.txt
732	The kprobes debugfs interface
733	=============================
With recent kernels (> 2.6.20), the list of registered kprobes is visible
under the /sys/kernel/debug/kprobes/ directory (assuming debugfs is mounted
at /sys/kernel/debug).
739	/sys/kernel/debug/kprobes/list: Lists all registered probes on the system::
741		c015d71a  k  vfs_read+0x0
742		c03dedc5  r  tcp_v4_rcv+0x0
744	The first column provides the kernel address where the probe is inserted.
745	The second column identifies the type of probe (k - kprobe and r - kretprobe)
746	while the third column specifies the symbol+offset of the probe.
747	If the probed function belongs to a module, the module name is also
specified. The following columns show the probe status. If the probe is on
a virtual address that is no longer valid (module init sections, module
virtual addresses that correspond to modules that have been unloaded),
751	such probes are marked with [GONE]. If the probe is temporarily disabled,
752	such probes are marked with [DISABLED]. If the probe is optimized, it is
753	marked with [OPTIMIZED]. If the probe is ftrace-based, it is marked with
754	[FTRACE].
756	/sys/kernel/debug/kprobes/enabled: Turn kprobes ON/OFF forcibly.
758	Provides a knob to globally and forcibly turn registered kprobes ON or OFF.
By default, all kprobes are enabled. Echoing "0" to this file disarms all
registered probes until a "1" is echoed back to it. Note that this knob
just disarms and arms all kprobes and doesn't change each probe's disabled
state. This means that disabled kprobes (marked [DISABLED]) will not be
enabled if you turn ON all kprobes by this knob.
766	The kprobes sysctl interface
767	============================
769	/proc/sys/debug/kprobes-optimization: Turn kprobes optimization ON/OFF.
771	When CONFIG_OPTPROBES=y, this sysctl interface appears and it provides
772	a knob to globally and forcibly turn jump optimization (see section
773	:ref:`kprobes_jump_optimization`) ON or OFF. By default, jump optimization
774	is allowed (ON). If you echo "0" to this file or set
775	"debug.kprobes_optimization" to 0 via sysctl, all optimized probes will be
776	unoptimized, and any new probes registered after that will not be optimized.
778	Note that this knob *changes* the optimized state. This means that optimized
probes (marked [OPTIMIZED]) will be unoptimized (the [OPTIMIZED] tag will be
removed). If the knob is turned on, they will be optimized again.