
Modify broken links related to Theory, Data Structures, Misc and so on

pull/583/head
Dongliang Mu
commit 49c8951eb8

Concepts/linux-cpu-3.md (+3 -3)

@@ -213,7 +213,7 @@ If you are interested, you can find these sections in the `arch/x86/kernel/vmlin
 }
 ```
 
-If you are not familiar with this then you can know more about [linkers](https://en.wikipedia.org/wiki/Linker_%28computing%29) in the special [part](https://0xax.gitbooks.io/linux-insides/content/Misc/linkers.html) of this book.
+If you are not familiar with this then you can know more about [linkers](https://en.wikipedia.org/wiki/Linker_%28computing%29) in the special [part](https://0xax.gitbooks.io/linux-insides/content/Misc/linux-misc-3.html) of this book.
 
 As we just saw, the `do_initcall_level` function takes one parameter - level of `initcall` and does following two things: First of all this function parses the `initcall_command_line` which is copy of usual kernel [command line](https://www.kernel.org/doc/Documentation/kernel-parameters.txt) which may contain parameters for modules with the `parse_args` function from the [kernel/params.c](https://github.com/torvalds/linux/blob/16f73eb02d7e1765ccab3d2018e0bd98eb93d973/kernel/params.c) source code file and call the `do_on_initcall` function for each level:
 
@@ -387,9 +387,9 @@ Links
 * [symbols concatenation](https://gcc.gnu.org/onlinedocs/cpp/Concatenation.html)
 * [GCC](https://en.wikipedia.org/wiki/GNU_Compiler_Collection)
 * [Link time optimization](https://gcc.gnu.org/wiki/LinkTimeOptimization)
-* [Introduction to linkers](https://0xax.gitbooks.io/linux-insides/content/Misc/linkers.html)
+* [Introduction to linkers](https://0xax.gitbooks.io/linux-insides/content/Misc/linux-misc-3.html)
 * [Linux kernel command line](https://www.kernel.org/doc/Documentation/kernel-parameters.txt)
 * [Process identifier](https://en.wikipedia.org/wiki/Process_identifier)
 * [IRQs](https://en.wikipedia.org/wiki/Interrupt_request_%28PC_architecture%29)
 * [rootfs](https://en.wikipedia.org/wiki/Initramfs)
-* [previous part](https://0xax.gitbooks.io/linux-insides/content/Concepts/cpumask.html)
+* [previous part](https://0xax.gitbooks.io/linux-insides/content/Concepts/linux-cpu-2.html)

Concepts/linux-cpu-4.md (+1 -1)

@@ -366,4 +366,4 @@ Links
 * [system call](https://0xax.gitbooks.io/linux-insides/content/SysCall/linux-syscall-1.html)
 * [init_module system call](http://man7.org/linux/man-pages/man2/init_module.2.html)
 * [delete_module](http://man7.org/linux/man-pages/man2/delete_module.2.html)
-* [previous part](https://0xax.gitbooks.io/linux-insides/content/Concepts/initcall.html)
+* [previous part](https://0xax.gitbooks.io/linux-insides/content/Concepts/linux-cpu-3.html)

DataStructures/linux-datastructures-3.md (+2 -2)

@@ -13,7 +13,7 @@ Besides these two files, there is also architecture-specific header file which p
 
 * [arch/x86/include/asm/bitops.h](https://github.com/torvalds/linux/blob/16f73eb02d7e1765ccab3d2018e0bd98eb93d973/arch/x86/include/asm/bitops.h)
 
-header file. As I just wrote above, the `bitmap` is heavily used in the Linux kernel. For example a `bit array` is used to store set of online/offline processors for systems which support [hot-plug](https://www.kernel.org/doc/Documentation/cpu-hotplug.txt) cpu (more about this you can read in the [cpumasks](https://0xax.gitbooks.io/linux-insides/content/Concepts/cpumask.html) part), a `bit array` stores set of allocated [irqs](https://en.wikipedia.org/wiki/Interrupt_request_%28PC_architecture%29) during initialization of the Linux kernel and etc.
+header file. As I just wrote above, the `bitmap` is heavily used in the Linux kernel. For example a `bit array` is used to store set of online/offline processors for systems which support [hot-plug](https://www.kernel.org/doc/Documentation/cpu-hotplug.txt) cpu (more about this you can read in the [cpumasks](https://0xax.gitbooks.io/linux-insides/content/Concepts/linux-cpu-2.html) part), a `bit array` stores set of allocated [irqs](https://en.wikipedia.org/wiki/Interrupt_request_%28PC_architecture%29) during initialization of the Linux kernel and etc.
 
 So, the main goal of this part is to see how `bit arrays` are implemented in the Linux kernel. Let's start.
 
@@ -365,7 +365,7 @@ Links
 * [linked data structures](https://en.wikipedia.org/wiki/Linked_data_structure)
 * [tree data structures](https://en.wikipedia.org/wiki/Tree_%28data_structure%29)
 * [hot-plug](https://www.kernel.org/doc/Documentation/cpu-hotplug.txt)
-* [cpumasks](https://0xax.gitbooks.io/linux-insides/content/Concepts/cpumask.html)
+* [cpumasks](https://0xax.gitbooks.io/linux-insides/content/Concepts/linux-cpu-2.html)
 * [IRQs](https://en.wikipedia.org/wiki/Interrupt_request_%28PC_architecture%29)
 * [API](https://en.wikipedia.org/wiki/Application_programming_interface)
 * [atomic operations](https://en.wikipedia.org/wiki/Linearizability)

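The retargeted lines in this file refer readers to the book's bitmaps/cpumasks part. For orientation only (this sketch is not part of the commit or of the book's text), a minimal illustration of the kernel's `bit array` API with `DECLARE_BITMAP`, `set_bit` and `test_bit`; the `used_vectors_demo` name and the 256-bit size are invented for the example:

```C
#include <linux/bitmap.h>
#include <linux/bitops.h>
#include <linux/printk.h>

/* hypothetical bit array: one bit per vector, all bits start cleared */
static DECLARE_BITMAP(used_vectors_demo, 256);

static void mark_vector(unsigned int nr)
{
	/* atomically set bit `nr` in the array */
	set_bit(nr, used_vectors_demo);

	if (test_bit(nr, used_vectors_demo))
		pr_info("vector %u is now marked as used\n", nr);
}
```
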
Initialization/linux-initialization-1.md (+4 -4)

@@ -88,7 +88,7 @@ After we got the address of the `startup_64`, we need to do a check that this ad
 	jnz	bad_address
 ```
 
-Here we just compare low part of the `rbp` register with the complemented value of the `PMD_PAGE_MASK`. The `PMD_PAGE_MASK` indicates the mask for `Page middle directory` (read [paging](http://0xax.gitbooks.io/linux-insides/content/Theory/Paging.html) about it) and defined as:
+Here we just compare low part of the `rbp` register with the complemented value of the `PMD_PAGE_MASK`. The `PMD_PAGE_MASK` indicates the mask for `Page middle directory` (read [Paging](https://0xax.gitbooks.io/linux-insides/content/Theory/linux-theory-1.html) about it) and defined as:
 
 ```C
 #define PMD_PAGE_MASK           (~(PMD_PAGE_SIZE-1))
@@ -163,7 +163,7 @@ Looks hard, but it isn't. First of all let's look at the `early_level4_pgt`. It
                          _PAGE_ACCESSED | _PAGE_DIRTY)
 ```
 
-You can read more about it in the [paging](http://0xax.gitbooks.io/linux-insides/content/Theory/Paging.html) part.
+You can read more about it in the [Paging](https://0xax.gitbooks.io/linux-insides/content/Theory/linux-theory-1.html) part.
 
 The `level3_kernel_pgt` - stores two entries which map kernel space. At the start of it's definition, we can see that it is filled with zeros `L3_START_KERNEL` or `510` times. Here the `L3_START_KERNEL` is the index in the page upper directory which contains `__START_KERNEL_map` address and it equals `510`. After this, we can see the definition of the two `level3_kernel_pgt` entries: `level2_kernel_pgt` and `level2_fixmap_pgt`. First is simple, it is page table entry which contains pointer to the page middle directory which maps kernel space and it has:
 
@@ -485,7 +485,7 @@ INIT_PER_CPU(gdt_page);
 
 As we got `init_per_cpu__gdt_page` in `INIT_PER_CPU_VAR` and `INIT_PER_CPU` macro from linker script will be expanded we will get offset from the `__per_cpu_load`. After this calculations, we will have correct base address of the new GDT.
 
-Generally per-CPU variables is a 2.6 kernel feature. You can understand what it is from its name. When we create `per-CPU` variable, each CPU will have its own copy of this variable. Here we creating `gdt_page` per-CPU variable. There are many advantages for variables of this type, like there are no locks, because each CPU works with its own copy of variable and etc... So every core on multiprocessor will have its own `GDT` table and every entry in the table will represent a memory segment which can be accessed from the thread which ran on the core. You can read in details about `per-CPU` variables in the [Theory/per-cpu](http://0xax.gitbooks.io/linux-insides/content/Concepts/per-cpu.html) post.
+Generally per-CPU variables is a 2.6 kernel feature. You can understand what it is from its name. When we create `per-CPU` variable, each CPU will have its own copy of this variable. Here we creating `gdt_page` per-CPU variable. There are many advantages for variables of this type, like there are no locks, because each CPU works with its own copy of variable and etc... So every core on multiprocessor will have its own `GDT` table and every entry in the table will represent a memory segment which can be accessed from the thread which ran on the core. You can read in details about `per-CPU` variables in the [Theory/per-cpu](http://0xax.gitbooks.io/linux-insides/content/Concepts/linux-cpu-1.html) post.
 
 As we loaded new Global Descriptor Table, we reload segments as we did it every time:
 
@@ -614,7 +614,7 @@ Links
 --------------------------------------------------------------------------------
 
 * [Model Specific Register](http://en.wikipedia.org/wiki/Model-specific_register)
-* [Paging](http://0xax.gitbooks.io/linux-insides/content/Theory/Paging.html)
+* [Paging](http://0xax.gitbooks.io/linux-insides/content/Theory/linux-theory-1.html)
 * [Previous part - Kernel decompression](http://0xax.gitbooks.io/linux-insides/content/Booting/linux-bootstrap-5.html)
 * [NX](http://en.wikipedia.org/wiki/NX_bit)
 * [ASLR](http://en.wikipedia.org/wiki/Address_space_layout_randomization)

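The third hunk above is where the book explains per-CPU variables. For readers following the retargeted link, a minimal, hypothetical sketch of how such a variable is declared and accessed (the `demo_counter` name and the helper are invented; this is not code from the book or from this commit):

```C
#include <linux/percpu.h>
#include <linux/printk.h>

/* every possible CPU gets its own independent copy of this counter */
static DEFINE_PER_CPU(unsigned long, demo_counter);

static void touch_counter(void)
{
	/* operate on the copy belonging to the CPU we are currently
	 * running on, so no lock is needed */
	this_cpu_inc(demo_counter);
	pr_info("local copy is now %lu\n", this_cpu_read(demo_counter));
}
```
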
Initialization/linux-initialization-4.md (+2 -2)

@@ -241,7 +241,7 @@ For now it is just zero. If the `CONFIG_DEBUG_PREEMPT` configuration option is d
 #define raw_smp_processor_id() (this_cpu_read(cpu_number))
 ```
 
-`this_cpu_read` as many other function like this (`this_cpu_write`, `this_cpu_add` and etc...) defined in the [include/linux/percpu-defs.h](https://github.com/torvalds/linux/blob/16f73eb02d7e1765ccab3d2018e0bd98eb93d973/include/linux/percpu-defs.h) and presents `this_cpu` operation. These operations provide a way of optimizing access to the [per-cpu](http://0xax.gitbooks.io/linux-insides/content/Theory/per-cpu.html) variables which are associated with the current processor. In our case it is `this_cpu_read`:
+`this_cpu_read` as many other function like this (`this_cpu_write`, `this_cpu_add` and etc...) defined in the [include/linux/percpu-defs.h](https://github.com/torvalds/linux/blob/16f73eb02d7e1765ccab3d2018e0bd98eb93d973/include/linux/percpu-defs.h) and presents `this_cpu` operation. These operations provide a way of optimizing access to the [per-cpu](http://0xax.gitbooks.io/linux-insides/content/Concepts/linux-cpu-1.html) variables which are associated with the current processor. In our case it is `this_cpu_read`:
 
 ```
 __pcpu_size_call_return(this_cpu_read_, pcp)
@@ -346,7 +346,7 @@ static inline int __check_is_bitmap(const unsigned long *bitmap)
 
 Yeah, it just returns `1` every time. Actually we need in it here only for one purpose: at compile time it checks that the given `bitmap` is a bitmap, or in other words it checks that the given `bitmap` has a type of `unsigned long *`. So we just pass `cpu_possible_bits` to the `to_cpumask` macro for converting the array of `unsigned long` to the `struct cpumask *`. Now we can call `cpumask_set_cpu` function with the `cpu` - 0 and `struct cpumask *cpu_possible_bits`. This function makes only one call of the `set_bit` function which sets the given `cpu` in the cpumask. All of these `set_cpu_*` functions work on the same principle.
 
-If you're not sure that this `set_cpu_*` operations and `cpumask` are not clear for you, don't worry about it. You can get more info by reading the special part about it - [cpumask](http://0xax.gitbooks.io/linux-insides/content/Concepts/cpumask.html) or [documentation](https://www.kernel.org/doc/Documentation/cpu-hotplug.txt).
+If you're not sure that this `set_cpu_*` operations and `cpumask` are not clear for you, don't worry about it. You can get more info by reading the special part about it - [cpumask](http://0xax.gitbooks.io/linux-insides/content/Concepts/linux-cpu-2.html) or [documentation](https://www.kernel.org/doc/Documentation/cpu-hotplug.txt).
 
 As we activated the bootstrap processor, it's time to go to the next function in the `start_kernel.` Now it is `page_address_init`, but this function does nothing in our case, because it executes only when all `RAM` can't be mapped directly.
 

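The second hunk above talks about `cpumask_set_cpu` and the `set_cpu_*` helpers. As a hedged sketch of the same idea only (the mask and function names are invented, not taken from the book or this commit):

```C
#include <linux/cpumask.h>
#include <linux/printk.h>

static struct cpumask demo_mask;

static void demo_cpumask(void)
{
	unsigned int cpu;

	/* mark the bootstrap processor (cpu 0) in our private mask */
	cpumask_set_cpu(0, &demo_mask);

	if (cpumask_test_cpu(0, &demo_mask))
		pr_info("cpu 0 is set in demo_mask\n");

	/* walk every processor that could ever appear in this system */
	for_each_possible_cpu(cpu)
		pr_info("cpu %u is possible\n", cpu);
}
```
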
Initialization/linux-initialization-6.md (+2 -2)

@@ -128,7 +128,7 @@ int __init acpi_mps_check(void)
 }
 ```
 
-It checks the built-in `MPS` or [MultiProcessor Specification](http://en.wikipedia.org/wiki/MultiProcessor_Specification) table. If `CONFIG_X86_LOCAL_APIC` is set and `CONFIG_x86_MPPAARSE` is not set, `acpi_mps_check` prints warning message if the one of the command line options: `acpi=off`, `acpi=noirq` or `pci=noacpi` passed to the kernel. If `acpi_mps_check` returns `1` it means that we disable local [APIC](http://en.wikipedia.org/wiki/Advanced_Programmable_Interrupt_Controller) and clear `X86_FEATURE_APIC` bit in the of the current CPU with the `setup_clear_cpu_cap` macro. (more about CPU mask you can read in the [CPU masks](http://0xax.gitbooks.io/linux-insides/content/Concepts/cpumask.html)).
+It checks the built-in `MPS` or [MultiProcessor Specification](http://en.wikipedia.org/wiki/MultiProcessor_Specification) table. If `CONFIG_X86_LOCAL_APIC` is set and `CONFIG_x86_MPPAARSE` is not set, `acpi_mps_check` prints warning message if the one of the command line options: `acpi=off`, `acpi=noirq` or `pci=noacpi` passed to the kernel. If `acpi_mps_check` returns `1` it means that we disable local [APIC](http://en.wikipedia.org/wiki/Advanced_Programmable_Interrupt_Controller) and clear `X86_FEATURE_APIC` bit in the of the current CPU with the `setup_clear_cpu_cap` macro. (more about CPU mask you can read in the [CPU masks](http://0xax.gitbooks.io/linux-insides/content/Concepts/linux-cpu-2.html)).
 
 Early PCI dump
 --------------------------------------------------------------------------------
@@ -535,7 +535,7 @@ Links
 * [NX bit](http://en.wikipedia.org/wiki/NX_bit)
 * [Documentation/kernel-parameters.txt](https://github.com/torvalds/linux/blob/16f73eb02d7e1765ccab3d2018e0bd98eb93d973/Documentation/kernel-parameters.txt)
 * [APIC](http://en.wikipedia.org/wiki/Advanced_Programmable_Interrupt_Controller)
-* [CPU masks](http://0xax.gitbooks.io/linux-insides/content/Concepts/cpumask.html)
+* [CPU masks](http://0xax.gitbooks.io/linux-insides/content/Concepts/linux-cpu-2.html)
 * [Linux kernel memory management](http://0xax.gitbooks.io/linux-insides/content/MM/index.html)
 * [PCI](http://en.wikipedia.org/wiki/Conventional_PCI)
 * [e820](http://en.wikipedia.org/wiki/E820)

Initialization/linux-initialization-7.md (+3 -3)

@@ -320,7 +320,7 @@ if (acpi_lapic && early)
    return;
 ```
 
-Here we can see that multiprocessor configuration was found in the `smp_scan_config` function or just return from the function if not. The next check is `acpi_lapic` and `early`. And as we did this checks, we start to read the `SMP` configuration. As we finished reading it, the next step is - `prefill_possible_map` function which makes preliminary filling of the possible CPU's `cpumask` (more about it you can read in the [Introduction to the cpumasks](http://0xax.gitbooks.io/linux-insides/content/Concepts/cpumask.html)).
+Here we can see that multiprocessor configuration was found in the `smp_scan_config` function or just return from the function if not. The next check is `acpi_lapic` and `early`. And as we did this checks, we start to read the `SMP` configuration. As we finished reading it, the next step is - `prefill_possible_map` function which makes preliminary filling of the possible CPU's `cpumask` (more about it you can read in the [Introduction to the cpumasks](http://0xax.gitbooks.io/linux-insides/content/Concepts/linux-cpu-2.html)).
 
 The rest of the setup_arch
 --------------------------------------------------------------------------------
@@ -334,7 +334,7 @@ That's all, and now we can back to the `start_kernel` from the `setup_arch`.
 Back to the main.c
 ================================================================================
 
-As I wrote above, we have finished with the `setup_arch` function and now we can back to the `start_kernel` function from the [init/main.c](https://github.com/torvalds/linux/blob/16f73eb02d7e1765ccab3d2018e0bd98eb93d973/init/main.c). As you may remember or saw yourself, `start_kernel` function as big as the `setup_arch`. So the couple of the next part will be dedicated to learning of this function. So, let's continue with it. After the `setup_arch` we can see the call of the `mm_init_cpumask` function. This function sets the [cpumask](http://0xax.gitbooks.io/linux-insides/content/Concepts/cpumask.html) pointer to the memory descriptor `cpumask`. We can look on its implementation:
+As I wrote above, we have finished with the `setup_arch` function and now we can back to the `start_kernel` function from the [init/main.c](https://github.com/torvalds/linux/blob/16f73eb02d7e1765ccab3d2018e0bd98eb93d973/init/main.c). As you may remember or saw yourself, `start_kernel` function as big as the `setup_arch`. So the couple of the next part will be dedicated to learning of this function. So, let's continue with it. After the `setup_arch` we can see the call of the `mm_init_cpumask` function. This function sets the [cpumask](http://0xax.gitbooks.io/linux-insides/content/Concepts/linux-cpu-2.html) pointer to the memory descriptor `cpumask`. We can look on its implementation:
 
 ```C
 static inline void mm_init_cpumask(struct mm_struct *mm)
@@ -379,7 +379,7 @@ static void __init setup_command_line(char *command_line)
 
 Here we can see that we allocate space for the three buffers which will contain kernel command line for the different purposes (read above). And as we allocated space, we store `boot_command_line` in the `saved_command_line` and `command_line` (kernel command line from the `setup_arch`) to the `static_command_line`.
 
-The next function after the `setup_command_line` is the `setup_nr_cpu_ids`. This function setting `nr_cpu_ids` (number of CPUs) according to the last bit in the `cpu_possible_mask` (more about it you can read in the chapter describes [cpumasks](http://0xax.gitbooks.io/linux-insides/content/Concepts/cpumask.html) concept). Let's look on its implementation:
+The next function after the `setup_command_line` is the `setup_nr_cpu_ids`. This function setting `nr_cpu_ids` (number of CPUs) according to the last bit in the `cpu_possible_mask` (more about it you can read in the chapter describes [cpumasks](http://0xax.gitbooks.io/linux-insides/content/Concepts/linux-cpu-2.html) concept). Let's look on its implementation:
 
 ```C
 void __init setup_nr_cpu_ids(void)

Initialization/linux-initialization-8.md (+5 -5)

@@ -6,7 +6,7 @@ Scheduler initialization
 
 This is the eighth [part](http://0xax.gitbooks.io/linux-insides/content/Initialization/index.html) of the Linux kernel initialization process chapter and we stopped on the `setup_nr_cpu_ids` function in the [previous part](https://github.com/0xAX/linux-insides/blob/master/Initialization/linux-initialization-7.md).
 
-The main point of this part is [scheduler](http://en.wikipedia.org/wiki/Scheduling_%28computing%29) initialization. But before we will start to learn initialization process of the scheduler, we need to do some stuff. The next step in the [init/main.c](https://github.com/torvalds/linux/blob/16f73eb02d7e1765ccab3d2018e0bd98eb93d973/init/main.c) is the `setup_per_cpu_areas` function. This function setups memory areas for the `percpu` variables, more about it you can read in the special part about the [Per-CPU variables](http://0xax.gitbooks.io/linux-insides/content/Concepts/per-cpu.html). After `percpu` areas is up and running, the next step is the `smp_prepare_boot_cpu` function.
+The main point of this part is [scheduler](http://en.wikipedia.org/wiki/Scheduling_%28computing%29) initialization. But before we will start to learn initialization process of the scheduler, we need to do some stuff. The next step in the [init/main.c](https://github.com/torvalds/linux/blob/16f73eb02d7e1765ccab3d2018e0bd98eb93d973/init/main.c) is the `setup_per_cpu_areas` function. This function setups memory areas for the `percpu` variables, more about it you can read in the special part about the [Per-CPU variables](http://0xax.gitbooks.io/linux-insides/content/Concepts/linux-cpu-1.html). After `percpu` areas is up and running, the next step is the `smp_prepare_boot_cpu` function.
 
 This function does some preparations for [symmetric multiprocessing](http://en.wikipedia.org/wiki/Symmetric_multiprocessing). Since this function is architecture specific, it is located in the [arch/x86/include/asm/smp.h](https://github.com/torvalds/linux/blob/16f73eb02d7e1765ccab3d2018e0bd98eb93d973/arch/x86/include/asm/smp.h#L78) Linux kernel header file. Let's look at the definition of this function:
 
@@ -107,7 +107,7 @@ DEFINE_PER_CPU_PAGE_ALIGNED(struct gdt_page, gdt_page) = { .gdt = {
     ...
 ```
 
-more about `percpu` variables you can read in the [Per-CPU variables](http://0xax.gitbooks.io/linux-insides/content/Concepts/per-cpu.html) part. As we got address and size of the `GDT` descriptor we reload `GDT` with the `load_gdt` which just execute `lgdt` instruct and load `percpu_segment` with the following function:
+more about `percpu` variables you can read in the [Per-CPU variables](http://0xax.gitbooks.io/linux-insides/content/Concepts/linux-cpu-1.html) part. As we got address and size of the `GDT` descriptor we reload `GDT` with the `load_gdt` which just execute `lgdt` instruct and load `percpu_segment` with the following function:
 
 ```C
 void load_percpu_segment(int cpu) {
@@ -230,7 +230,7 @@ pid_hash = alloc_large_system_hash("PID", sizeof(*pid_hash), 0, 18,
 ```
 
 The number of elements of the `pid_hash` depends on the `RAM` configuration, but it can be between `2^4` and `2^12`. The `pidhash_init` computes the size
-and allocates the required storage (which is `hlist` in our case - the same as [doubly linked list](http://0xax.gitbooks.io/linux-insides/content/DataStructures/dlist.html), but contains one pointer instead on the [struct hlist_head](https://github.com/torvalds/linux/blob/16f73eb02d7e1765ccab3d2018e0bd98eb93d973/include/linux/types.h)]. The `alloc_large_system_hash` function allocates a large system hash table with `memblock_virt_alloc_nopanic` if we pass `HASH_EARLY` flag (as it in our case) or with `__vmalloc` if we did no pass this flag.
+and allocates the required storage (which is `hlist` in our case - the same as [doubly linked list](https://0xax.gitbooks.io/linux-insides/content/DataStructures/linux-datastructures-1.html), but contains one pointer instead on the [struct hlist_head](https://github.com/torvalds/linux/blob/16f73eb02d7e1765ccab3d2018e0bd98eb93d973/include/linux/types.h)]. The `alloc_large_system_hash` function allocates a large system hash table with `memblock_virt_alloc_nopanic` if we pass `HASH_EARLY` flag (as it in our case) or with `__vmalloc` if we did no pass this flag.
 
 The result we can see in the `dmesg` output:
 
@@ -555,7 +555,7 @@ If you have any questions or suggestions write me a comment or ping me at [twitt
 Links
 --------------------------------------------------------------------------------
 
-* [CPU masks](http://0xax.gitbooks.io/linux-insides/content/Concepts/cpumask.html)
+* [CPU masks](http://0xax.gitbooks.io/linux-insides/content/Concepts/linux-cpu-2.html)
 * [high-resolution kernel timer](https://www.kernel.org/doc/Documentation/timers/hrtimers.txt)
 * [spinlock](http://en.wikipedia.org/wiki/Spinlock)
 * [Run queue](http://en.wikipedia.org/wiki/Run_queue)
@@ -565,7 +565,7 @@ Links
 * [Linux kernel hotplug documentation](https://www.kernel.org/doc/Documentation/cpu-hotplug.txt)
 * [IRQ](http://en.wikipedia.org/wiki/Interrupt_request_%28PC_architecture%29)
 * [Global Descriptor Table](http://en.wikipedia.org/wiki/Global_Descriptor_Table)
-* [Per-CPU variables](http://0xax.gitbooks.io/linux-insides/content/Concepts/per-cpu.html)
+* [Per-CPU variables](http://0xax.gitbooks.io/linux-insides/content/Concepts/linux-cpu-1.html)
 * [SMP](http://en.wikipedia.org/wiki/Symmetric_multiprocessing)
 * [RCU](http://en.wikipedia.org/wiki/Read-copy-update)
 * [CFS Scheduler documentation](https://www.kernel.org/doc/Documentation/scheduler/sched-design-CFS.txt)

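The third hunk above contrasts `hlist` with the ordinary doubly linked list when describing `pid_hash`. A minimal, hypothetical sketch of the difference (the bucket and item names are invented for illustration and are not from the book or this commit):

```C
#include <linux/list.h>

/* struct list_head carries two pointers per head, while hlist_head carries
 * only a single `first` pointer - which is why hlist is preferred for hash
 * table buckets such as pid_hash */
static HLIST_HEAD(demo_bucket);

struct demo_item {
	int key;
	struct hlist_node node;
};

static void demo_add(struct demo_item *item)
{
	/* insert at the head of the bucket's singly headed chain */
	hlist_add_head(&item->node, &demo_bucket);
}
```
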
Initialization/linux-initialization-9.md (+4 -4)

@@ -38,7 +38,7 @@ In the first implementation of the `preempt_disable` we increment this `__preemp
 #define preempt_count_add(val)  __preempt_count_add(val)
 ```
 
-where `preempt_count_add` calls the `raw_cpu_add_4` macro which adds `1` to the given `percpu` variable (`__preempt_count`) in our case (more about `precpu` variables you can read in the part about [Per-CPU variables](http://0xax.gitbooks.io/linux-insides/content/Concepts/per-cpu.html)). Ok, we increased `__preempt_count` and the next step we can see the call of the `barrier` macro in the both macros. The `barrier` macro inserts an optimization barrier. In the processors with `x86_64` architecture independent memory access operations can be performed in any order. That's why we need the opportunity to point compiler and processor on compliance of order. This mechanism is memory barrier. Let's consider a simple example:
+where `preempt_count_add` calls the `raw_cpu_add_4` macro which adds `1` to the given `percpu` variable (`__preempt_count`) in our case (more about `precpu` variables you can read in the part about [Per-CPU variables](http://0xax.gitbooks.io/linux-insides/content/Concepts/linux-cpu-1.html)). Ok, we increased `__preempt_count` and the next step we can see the call of the `barrier` macro in the both macros. The `barrier` macro inserts an optimization barrier. In the processors with `x86_64` architecture independent memory access operations can be performed in any order. That's why we need the opportunity to point compiler and processor on compliance of order. This mechanism is memory barrier. Let's consider a simple example:
 
 ```C
 preempt_disable();
@@ -127,7 +127,7 @@ The next step is [RCU](http://en.wikipedia.org/wiki/Read-copy-update) initializa
 
 In the first case `rcu_init` will be in the [kernel/rcu/tiny.c](https://github.com/torvalds/linux/blob/16f73eb02d7e1765ccab3d2018e0bd98eb93d973/kernel/rcu/tiny.c) and in the second case it will be defined in the [kernel/rcu/tree.c](https://github.com/torvalds/linux/blob/16f73eb02d7e1765ccab3d2018e0bd98eb93d973/kernel/rcu/tree.c). We will see the implementation of the `tree rcu`, but first of all about the `RCU` in general.
 
-`RCU` or read-copy update is a scalable high-performance synchronization mechanism implemented in the Linux kernel. On the early stage the linux kernel provided support and environment for the concurrently running applications, but all execution was serialized in the kernel using a single global lock. In our days linux kernel has no single global lock, but provides different mechanisms including [lock-free data structures](http://en.wikipedia.org/wiki/Concurrent_data_structure), [percpu](http://0xax.gitbooks.io/linux-insides/content/Concepts/per-cpu.html) data structures and other. One of these mechanisms is - the `read-copy update`. The `RCU` technique is designed for rarely-modified data structures. The idea of the `RCU` is simple. For example we have a rarely-modified data structure. If somebody wants to change this data structure, we make a copy of this data structure and make all changes in the copy. In the same time all other users of the data structure use old version of it. Next, we need to choose safe moment when original version of the data structure will have no users and update it with the modified copy.
+`RCU` or read-copy update is a scalable high-performance synchronization mechanism implemented in the Linux kernel. On the early stage the linux kernel provided support and environment for the concurrently running applications, but all execution was serialized in the kernel using a single global lock. In our days linux kernel has no single global lock, but provides different mechanisms including [lock-free data structures](http://en.wikipedia.org/wiki/Concurrent_data_structure), [percpu](http://0xax.gitbooks.io/linux-insides/content/Concepts/linux-cpu-1.html) data structures and other. One of these mechanisms is - the `read-copy update`. The `RCU` technique is designed for rarely-modified data structures. The idea of the `RCU` is simple. For example we have a rarely-modified data structure. If somebody wants to change this data structure, we make a copy of this data structure and make all changes in the copy. In the same time all other users of the data structure use old version of it. Next, we need to choose safe moment when original version of the data structure will have no users and update it with the modified copy.
 
 Of course this description of the `RCU` is very simplified. To understand some details about `RCU`, first of all we need to learn some terminology. Data readers in the `RCU` executed in the [critical section](http://en.wikipedia.org/wiki/Critical_section). Every time when data reader get to the critical section, it calls the `rcu_read_lock`, and `rcu_read_unlock` on exit from the critical section. If the thread is not in the critical section, it will be in state which called - `quiescent state`. The moment when every thread is in the `quiescent state` called - `grace period`. If a thread wants to remove an element from the data structure, this occurs in two steps. First step is `removal` - atomically removes element from the data structure, but does not release the physical memory. After this thread-writer announces and waits until it is finished. From this moment, the removed element is available to the thread-readers. After the `grace period` finished, the second step of the element removal will be started, it just removes the element from the physical memory.
 
@@ -378,7 +378,7 @@ Ok, we already passed the main theme of this part which is `RCU` initialization,
 
 After we initialized `RCU`, the next step which you can see in the [init/main.c](https://github.com/torvalds/linux/blob/16f73eb02d7e1765ccab3d2018e0bd98eb93d973/init/main.c) is the - `trace_init` function. As you can understand from its name, this function initialize [tracing](http://en.wikipedia.org/wiki/Tracing_%28software%29) subsystem. You can read more about linux kernel trace system - [here](http://elinux.org/Kernel_Trace_Systems).
 
-After the `trace_init`, we can see the call of the `radix_tree_init`. If you are familiar with the different data structures, you can understand from the name of this function that it initializes kernel implementation of the [Radix tree](http://en.wikipedia.org/wiki/Radix_tree). This function is defined in the [lib/radix-tree.c](https://github.com/torvalds/linux/blob/16f73eb02d7e1765ccab3d2018e0bd98eb93d973/lib/radix-tree.c) and you can read more about it in the part about [Radix tree](https://0xax.gitbooks.io/linux-insides/content/DataStructures/radix-tree.html).
+After the `trace_init`, we can see the call of the `radix_tree_init`. If you are familiar with the different data structures, you can understand from the name of this function that it initializes kernel implementation of the [Radix tree](http://en.wikipedia.org/wiki/Radix_tree). This function is defined in the [lib/radix-tree.c](https://github.com/torvalds/linux/blob/16f73eb02d7e1765ccab3d2018e0bd98eb93d973/lib/radix-tree.c) and you can read more about it in the part about [Radix tree](https://0xax.gitbooks.io/linux-insides/content/DataStructures/linux-datastructures-2.html).
 
 In the next step we can see the functions which are related to the `interrupts handling` subsystem, they are:
 
@@ -423,7 +423,7 @@ Links
 * [integer ID management](https://lwn.net/Articles/103209/)
 * [Documentation/memory-barriers.txt](https://www.kernel.org/doc/Documentation/memory-barriers.txt)
 * [Runtime locking correctness validator](https://www.kernel.org/doc/Documentation/locking/lockdep-design.txt)
-* [Per-CPU variables](http://0xax.gitbooks.io/linux-insides/content/Concepts/per-cpu.html)
+* [Per-CPU variables](http://0xax.gitbooks.io/linux-insides/content/Concepts/linux-cpu-1.html)
 * [Linux kernel memory management](http://0xax.gitbooks.io/linux-insides/content/MM/index.html)
 * [slab](http://en.wikipedia.org/wiki/Slab_allocation)
 * [i2c](http://en.wikipedia.org/wiki/I%C2%B2C)

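The second hunk above gives the book's general description of `RCU`. For orientation only, a small hypothetical reader/writer pair built on the public RCU API (`rcu_read_lock`, `rcu_dereference`, `rcu_assign_pointer`, `synchronize_rcu`); the `demo_cfg` structure and function names are invented for the example:

```C
#include <linux/errno.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct demo_cfg {
	int value;
};

static struct demo_cfg __rcu *global_cfg;

/* reader: cheap critical section that never blocks the writer */
static int demo_read(void)
{
	struct demo_cfg *cfg;
	int v = -1;

	rcu_read_lock();
	cfg = rcu_dereference(global_cfg);
	if (cfg)
		v = cfg->value;
	rcu_read_unlock();

	return v;
}

/* writer: publish a new copy, wait for a grace period, free the old one */
static int demo_update(int value)
{
	struct demo_cfg *new_cfg, *old_cfg;

	new_cfg = kmalloc(sizeof(*new_cfg), GFP_KERNEL);
	if (!new_cfg)
		return -ENOMEM;
	new_cfg->value = value;

	old_cfg = rcu_dereference_protected(global_cfg, 1);
	rcu_assign_pointer(global_cfg, new_cfg);
	synchronize_rcu();
	kfree(old_cfg);

	return 0;
}
```
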
Interrupts/linux-interrupts-1.md (+1 -1)

@@ -306,7 +306,7 @@ union irq_stack_union {
 
 The first `irq_stack` field is a 16 kilobytes array. Also you can see that `irq_stack_union` contains a structure with the two fields:
 
-* `gs_base` - The `gs` register always points to the bottom of the `irqstack` union. On the `x86_64`, the `gs` register is shared by per-cpu area and stack canary (more about `per-cpu` variables you can read in the special [part](http://0xax.gitbooks.io/linux-insides/content/Concepts/per-cpu.html)).  All per-cpu symbols are zero based and the `gs` points to the base of the per-cpu area. You already know that [segmented memory model](http://en.wikipedia.org/wiki/Memory_segmentation) is abolished in the long mode, but we can set the base address for the two segment registers - `fs` and `gs` with the [Model specific registers](http://en.wikipedia.org/wiki/Model-specific_register) and these registers can be still be used as address registers. If you remember the first [part](http://0xax.gitbooks.io/linux-insides/content/Initialization/linux-initialization-1.html) of the Linux kernel initialization process, you can remember that we have set the `gs` register:
+* `gs_base` - The `gs` register always points to the bottom of the `irqstack` union. On the `x86_64`, the `gs` register is shared by per-cpu area and stack canary (more about `per-cpu` variables you can read in the special [part](http://0xax.gitbooks.io/linux-insides/content/Concepts/linux-cpu-1.html)).  All per-cpu symbols are zero based and the `gs` points to the base of the per-cpu area. You already know that [segmented memory model](http://en.wikipedia.org/wiki/Memory_segmentation) is abolished in the long mode, but we can set the base address for the two segment registers - `fs` and `gs` with the [Model specific registers](http://en.wikipedia.org/wiki/Model-specific_register) and these registers can be still be used as address registers. If you remember the first [part](http://0xax.gitbooks.io/linux-insides/content/Initialization/linux-initialization-1.html) of the Linux kernel initialization process, you can remember that we have set the `gs` register:
 
 ```assembly
 	movl	$MSR_GS_BASE,%ecx

Interrupts/linux-interrupts-10.md (+3 -3)

@@ -346,7 +346,7 @@ common_interrupt:
 	interrupt do_IRQ
 ```
 
-The macro `interrupt` defined in the same source code file and saves [general purpose](https://en.wikipedia.org/wiki/Processor_register) registers on the stack, change the userspace `gs` on the kernel with the `SWAPGS` assembler instruction if need, increase [per-cpu](http://0xax.gitbooks.io/linux-insides/content/Concepts/per-cpu.html) - `irq_count` variable that shows that we are in interrupt and call the `do_IRQ` function. This function defined in the [arch/x86/kernel/irq.c](https://github.com/torvalds/linux/blob/16f73eb02d7e1765ccab3d2018e0bd98eb93d973/arch/x86/kernel/irq.c) source code file and handles our device interrupt. Let's look at this function. The `do_IRQ` function takes one parameter - `pt_regs` structure that stores values of the userspace registers:
+The macro `interrupt` defined in the same source code file and saves [general purpose](https://en.wikipedia.org/wiki/Processor_register) registers on the stack, change the userspace `gs` on the kernel with the `SWAPGS` assembler instruction if need, increase [per-cpu](http://0xax.gitbooks.io/linux-insides/content/Concepts/linux-cpu-1.html) - `irq_count` variable that shows that we are in interrupt and call the `do_IRQ` function. This function defined in the [arch/x86/kernel/irq.c](https://github.com/torvalds/linux/blob/16f73eb02d7e1765ccab3d2018e0bd98eb93d973/arch/x86/kernel/irq.c) source code file and handles our device interrupt. Let's look at this function. The `do_IRQ` function takes one parameter - `pt_regs` structure that stores values of the userspace registers:
 
 ```C
 __visible unsigned int __irq_entry do_IRQ(struct pt_regs *regs)
@@ -413,7 +413,7 @@ We already know that when an `IRQ` finishes its work, deferred interrupts will b
 Exit from interrupt
 --------------------------------------------------------------------------------
 
-Ok, the interrupt handler finished its execution and now we must return from the interrupt. When the work of the `do_IRQ` function will be finsihed, we will return back to the assembler code in the [arch/x86/entry/entry_64.S](https://github.com/torvalds/linux/blob/16f73eb02d7e1765ccab3d2018e0bd98eb93d973/arch/x86/entry_entry_64.S) to the `ret_from_intr` label. First of all we disable interrupts with the `DISABLE_INTERRUPTS` macro that expands to the `cli` instruction and decreases value of the `irq_count` [per-cpu](http://0xax.gitbooks.io/linux-insides/content/Concepts/per-cpu.html) variable. Remember, this variable had value - `1`, when we were in interrupt context:
+Ok, the interrupt handler finished its execution and now we must return from the interrupt. When the work of the `do_IRQ` function will be finsihed, we will return back to the assembler code in the [arch/x86/entry/entry_64.S](https://github.com/torvalds/linux/blob/16f73eb02d7e1765ccab3d2018e0bd98eb93d973/arch/x86/entry_entry_64.S) to the `ret_from_intr` label. First of all we disable interrupts with the `DISABLE_INTERRUPTS` macro that expands to the `cli` instruction and decreases value of the `irq_count` [per-cpu](http://0xax.gitbooks.io/linux-insides/content/Concepts/linux-cpu-1.html) variable. Remember, this variable had value - `1`, when we were in interrupt context:
 
 ```assembly
 DISABLE_INTERRUPTS(CLBR_NONE)
@@ -469,7 +469,7 @@ Links
 * [APIC](https://en.wikipedia.org/wiki/Advanced_Programmable_Interrupt_Controller)
 * [GNU assembler](https://en.wikipedia.org/wiki/GNU_Assembler)
 * [Processor register](https://en.wikipedia.org/wiki/Processor_register)
-* [per-cpu](http://0xax.gitbooks.io/linux-insides/content/Concepts/per-cpu.html)
+* [per-cpu](http://0xax.gitbooks.io/linux-insides/content/Concepts/linux-cpu-1.html)
 * [pid](https://en.wikipedia.org/wiki/Process_identifier)
 * [device tree](https://en.wikipedia.org/wiki/Device_tree)
 * [system calls](https://en.wikipedia.org/wiki/System_call)

Interrupts/linux-interrupts-2.md (+1 -1)

@@ -245,7 +245,7 @@ static inline void boot_init_stack_canary(void)
 #endif
 ```
 
-If the `CONFIG_CC_STACKPROTECTOR` kernel configuration option is set, the `boot_init_stack_canary` function starts from the check stat `irq_stack_union` that represents [per-cpu](http://0xax.gitbooks.io/linux-insides/content/Concepts/per-cpu.html) interrupt stack has offset equal to forty bytes from the `stack_canary` value:
+If the `CONFIG_CC_STACKPROTECTOR` kernel configuration option is set, the `boot_init_stack_canary` function starts from the check stat `irq_stack_union` that represents [per-cpu](https://0xax.gitbooks.io/linux-insides/content/Concepts/linux-cpu-1.html) interrupt stack has offset equal to forty bytes from the `stack_canary` value:
 
 ```C
 #ifdef CONFIG_X86_64

Interrupts/linux-interrupts-3.md (+1 -1)

@@ -516,7 +516,7 @@ Links
 * [system call](http://en.wikipedia.org/wiki/System_call)
 * [swapgs](http://www.felixcloutier.com/x86/SWAPGS.html)
 * [SIGTRAP](https://en.wikipedia.org/wiki/Unix_signal#SIGTRAP)
-* [Per-CPU variables](http://0xax.gitbooks.io/linux-insides/content/Concepts/per-cpu.html)
+* [Per-CPU variables](http://0xax.gitbooks.io/linux-insides/content/Concepts/linux-cpu-1.html)
 * [kgdb](https://en.wikipedia.org/wiki/KGDB)
 * [ACPI](https://en.wikipedia.org/wiki/Advanced_Configuration_and_Power_Interface)
 * [Previous part](http://0xax.gitbooks.io/linux-insides/content/Interrupts/index.html)

Interrupts/linux-interrupts-4.md (+2 -2)

@@ -300,7 +300,7 @@ In the next step we fill the `used_vectors` array which defined in the [arch/x86
 DECLARE_BITMAP(used_vectors, NR_VECTORS);
 ```
 
-of the first `32` interrupts (more about bitmaps in the Linux kernel you can read in the part which describes [cpumasks and bitmaps](http://0xax.gitbooks.io/linux-insides/content/Concepts/cpumask.html))
+of the first `32` interrupts (more about bitmaps in the Linux kernel you can read in the part which describes [cpumasks and bitmaps](http://0xax.gitbooks.io/linux-insides/content/Concepts/linux-cpu-2.html))
 
 ```C
 for (i = 0; i < FIRST_EXTERNAL_VECTOR; i++)
@@ -459,7 +459,7 @@ Links
 * [x87 FPU](https://en.wikipedia.org/wiki/X86_instruction_listings#x87_floating-point_instructions)
 * [MCE exception](https://en.wikipedia.org/wiki/Machine-check_exception)
 * [SIMD](https://en.wikipedia.org/?title=SIMD)
-* [cpumasks and bitmaps](http://0xax.gitbooks.io/linux-insides/content/Concepts/cpumask.html)
+* [cpumasks and bitmaps](http://0xax.gitbooks.io/linux-insides/content/Concepts/linux-cpu-2.html)
 * [NX](https://en.wikipedia.org/wiki/NX_bit)
 * [Task State Segment](https://en.wikipedia.org/wiki/Task_state_segment)
 * [Previous part](https://0xax.gitbooks.io/linux-insides/content/Interrupts/linux-interrupts-3.html)

Interrupts/linux-interrupts-6.md (+2 -2)

@@ -260,7 +260,7 @@ Now let's look on the `do_nmi` exception handler. This function defined in the [
 * address of the `pt_regs`;
 * error code.
 
-as all exception handlers. The `do_nmi` starts from the call of the `nmi_nesting_preprocess` function and ends with the call of the `nmi_nesting_postprocess`. The `nmi_nesting_preprocess` function checks that we likely do not work with the debug stack and if we on the debug stack set the `update_debug_stack` [per-cpu](http://0xax.gitbooks.io/linux-insides/content/Concepts/per-cpu.html) variable to `1` and call the `debug_stack_set_zero` function from the [arch/x86/kernel/cpu/common.c](https://github.com/torvalds/linux/blob/16f73eb02d7e1765ccab3d2018e0bd98eb93d973/arch/x86/kernel/cpu/common.c). This function increases the `debug_stack_use_ctr` per-cpu variable and loads new `Interrupt Descriptor Table`:
+as all exception handlers. The `do_nmi` starts from the call of the `nmi_nesting_preprocess` function and ends with the call of the `nmi_nesting_postprocess`. The `nmi_nesting_preprocess` function checks that we likely do not work with the debug stack and if we on the debug stack set the `update_debug_stack` [per-cpu](http://0xax.gitbooks.io/linux-insides/content/Concepts/linux-cpu-1.html) variable to `1` and call the `debug_stack_set_zero` function from the [arch/x86/kernel/cpu/common.c](https://github.com/torvalds/linux/blob/16f73eb02d7e1765ccab3d2018e0bd98eb93d973/arch/x86/kernel/cpu/common.c). This function increases the `debug_stack_use_ctr` per-cpu variable and loads new `Interrupt Descriptor Table`:
 
 ```C
 static inline void nmi_nesting_preprocess(struct pt_regs *regs)
@@ -473,7 +473,7 @@ Links
 * [Global Descriptor Table](https://en.wikipedia.org/wiki/Global_Descriptor_Table)
 * [stack frame](https://en.wikipedia.org/wiki/Call_stack)
 * [Model Specific regiser](https://en.wikipedia.org/wiki/Model-specific_register)
-* [percpu](http://0xax.gitbooks.io/linux-insides/content/Concepts/per-cpu.html)
+* [percpu](http://0xax.gitbooks.io/linux-insides/content/Concepts/linux-cpu-1.html)
 * [RCU](https://en.wikipedia.org/wiki/Read-copy-update)
 * [MPX](https://en.wikipedia.org/wiki/Intel_MPX)
 * [x87 FPU](https://en.wikipedia.org/wiki/X87)

Interrupts/linux-interrupts-7.md (+8 -8)

@@ -95,7 +95,7 @@ More about this will be in the another chapter about the `NUMA`. The next step a
 init_irq_default_affinity();
 ```
 
-function. The `init_irq_default_affinity` function defined in the same source code file and depends on the `CONFIG_SMP` kernel configuration option allocates a given [cpumask](http://0xax.gitbooks.io/linux-insides/content/Concepts/cpumask.html) structure (in our case it is the `irq_default_affinity`):
+function. The `init_irq_default_affinity` function defined in the same source code file and depends on the `CONFIG_SMP` kernel configuration option allocates a given [cpumask](http://0xax.gitbooks.io/linux-insides/content/Concepts/linux-cpu-2.html) structure (in our case it is the `irq_default_affinity`):
 
 ```C
 #if defined(CONFIG_SMP)
@@ -207,7 +207,7 @@ for (i = 0; i < count; i++) {
 
 We are going through the all interrupt descriptors and do the following things:
 
-First of all we allocate [percpu](http://0xax.gitbooks.io/linux-insides/content/Concepts/per-cpu.html) variable for the `irq` kernel statistic with the `alloc_percpu` macro. This macro allocates one instance of an object of the given type for every processor on the system. You can access kernel statistic from the userspace via `/proc/stat`:
+First of all we allocate [percpu](https://0xax.gitbooks.io/linux-insides/content/Concepts/linux-cpu-1.html) variable for the `irq` kernel statistic with the `alloc_percpu` macro. This macro allocates one instance of an object of the given type for every processor on the system. You can access kernel statistic from the userspace via `/proc/stat`:
 
 ```
 ~$ cat /proc/stat
@@ -221,7 +221,7 @@ cpu3 26648 8 6931 678891 414 0 244 0 0 0
 ...
 ```
 
-Where the sixth column is the servicing interrupts. After this we allocate [cpumask](http://0xax.gitbooks.io/linux-insides/content/Concepts/cpumask.html) for the given irq descriptor affinity and initialize the [spinlock](https://en.wikipedia.org/wiki/Spinlock) for the given interrupt descriptor. After this before the [critical section](https://en.wikipedia.org/wiki/Critical_section), the lock will be acquired with a call of the `raw_spin_lock` and unlocked with the call of the `raw_spin_unlock`. In the next step we call the `lockdep_set_class` macro which set the [Lock validator](https://lwn.net/Articles/185666/) `irq_desc_lock_class` class for the lock of the given interrupt descriptor. More about `lockdep`, `spinlock` and other synchronization primitives will be described in the separate chapter.
+Where the sixth column is the servicing interrupts. After this we allocate [cpumask](http://0xax.gitbooks.io/linux-insides/content/Concepts/linux-cpu-2.html) for the given irq descriptor affinity and initialize the [spinlock](https://en.wikipedia.org/wiki/Spinlock) for the given interrupt descriptor. After this before the [critical section](https://en.wikipedia.org/wiki/Critical_section), the lock will be acquired with a call of the `raw_spin_lock` and unlocked with the call of the `raw_spin_unlock`. In the next step we call the `lockdep_set_class` macro which set the [Lock validator](https://lwn.net/Articles/185666/) `irq_desc_lock_class` class for the lock of the given interrupt descriptor. More about `lockdep`, `spinlock` and other synchronization primitives will be described in the separate chapter.
 
 In the end of the loop we call the `desc_set_defaults` function from the [kernel/irq/irqdesc.c](https://github.com/torvalds/linux/blob/16f73eb02d7e1765ccab3d2018e0bd98eb93d973/kernel/irq/irqdesc.c). This function takes four parameters:
 
@@ -275,7 +275,7 @@ desc->owner = owner;
 ...
 ```
 
-After this we go through the all [possible](http://0xax.gitbooks.io/linux-insides/content/Concepts/cpumask.html) processor with the [for_each_possible_cpu](https://github.com/torvalds/linux/blob/16f73eb02d7e1765ccab3d2018e0bd98eb93d973/include/linux/cpumask.h#L714) helper and set the `kstat_irqs` to zero for the given interrupt descriptor:
+After this we go through the all [possible](http://0xax.gitbooks.io/linux-insides/content/Concepts/linux-cpu-2.html) processor with the [for_each_possible_cpu](https://github.com/torvalds/linux/blob/16f73eb02d7e1765ccab3d2018e0bd98eb93d973/include/linux/cpumask.h#L714) helper and set the `kstat_irqs` to zero for the given interrupt descriptor:
 
 ```C
 	for_each_possible_cpu(cpu)
@@ -413,7 +413,7 @@ if (WARN_ON(initcnt > IRQ_BITMAP_BITS))
     initcnt = IRQ_BITMAP_BITS;
 ```
 
-where `IRQ_BITMAP_BITS` is equal to the `NR_IRQS` if the `CONFIG_SPARSE_IRQ` is not set and `NR_IRQS + 8196` in other way. In the next step we are going over all interrupt descriptors which need to be allocated in the loop and allocate space for the descriptor and insert to the `irq_desc_tree` [radix tree](http://0xax.gitbooks.io/linux-insides/content/DataStructures/radix-tree.html):
+where `IRQ_BITMAP_BITS` is equal to the `NR_IRQS` if the `CONFIG_SPARSE_IRQ` is not set and `NR_IRQS + 8196` in other way. In the next step we are going over all interrupt descriptors which need to be allocated in the loop and allocate space for the descriptor and insert to the `irq_desc_tree` [radix tree](https://0xax.gitbooks.io/linux-insides/content/DataStructures/linux-datastructures-2.html):
 
 ```C
 for (i = 0; i < initcnt; i++) {
@@ -446,8 +446,8 @@ Links
 * [IRQ](https://en.wikipedia.org/wiki/Interrupt_request_%28PC_architecture%29)
 * [numa](https://en.wikipedia.org/wiki/Non-uniform_memory_access)
 * [Enum type](https://en.wikipedia.org/wiki/Enumerated_type)
-* [cpumask](http://0xax.gitbooks.io/linux-insides/content/Concepts/cpumask.html)
-* [percpu](http://0xax.gitbooks.io/linux-insides/content/Concepts/per-cpu.html)
+* [cpumask](http://0xax.gitbooks.io/linux-insides/content/Concepts/linux-cpu-2.html)
+* [percpu](http://0xax.gitbooks.io/linux-insides/content/Concepts/linux-cpu-1.html)
 * [spinlock](https://en.wikipedia.org/wiki/Spinlock)
 * [critical section](https://en.wikipedia.org/wiki/Critical_section)
 * [Lock validator](https://lwn.net/Articles/185666/)
@@ -457,5 +457,5 @@ Links
 * [Intel 8259](https://en.wikipedia.org/wiki/Intel_8259)
 * [PIC](https://en.wikipedia.org/wiki/Programmable_Interrupt_Controller)
 * [MultiProcessor Configuration Table](https://en.wikipedia.org/wiki/MultiProcessor_Specification)
-* [radix tree](http://0xax.gitbooks.io/linux-insides/content/DataStructures/radix-tree.html)
+* [radix tree](https://0xax.gitbooks.io/linux-insides/content/DataStructures/linux-datastructures-2.html)
 * [dmesg](https://en.wikipedia.org/wiki/Dmesg)

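Several retargeted lines in this file point at the book's radix tree part (used for the `irq_desc_tree`). As a rough, hypothetical sketch of the classic kernel radix tree API from kernels of that era (`RADIX_TREE`, `radix_tree_insert`, `radix_tree_lookup`); the tree and item names are invented and this is not code from the book or this commit:

```C
#include <linux/errno.h>
#include <linux/radix-tree.h>

/* hypothetical tree mapping an irq number to an opaque descriptor pointer */
static RADIX_TREE(demo_tree, GFP_KERNEL);

static int demo_store(unsigned long irq, void *desc)
{
	int err;

	/* insert the descriptor keyed by the irq number */
	err = radix_tree_insert(&demo_tree, irq, desc);
	if (err)
		return err;

	/* look the descriptor up again by its key */
	return radix_tree_lookup(&demo_tree, irq) == desc ? 0 : -ENOENT;
}
```
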
Interrupts/linux-interrupts-8.md (+3 -3)

@@ -6,7 +6,7 @@ Non-early initialization of the IRQs
 
 This is the eighth part of the Interrupts and Interrupt Handling in the Linux kernel [chapter](http://0xax.gitbooks.io/linux-insides/content/Interrupts/index.html) and in the previous [part](https://0xax.gitbooks.io/linux-insides/content/Interrupts/linux-interrupts-7.html) we started to dive into the external hardware [interrupts](https://en.wikipedia.org/wiki/Interrupt_request_%28PC_architecture%29). We looked on the implementation of the `early_irq_init` function from the [kernel/irq/irqdesc.c](https://github.com/torvalds/linux/blob/16f73eb02d7e1765ccab3d2018e0bd98eb93d973/kernel/irq/irqdesc.c) source code file and saw the initialization of the `irq_desc` structure in this function. Remind that `irq_desc` structure (defined in the [include/linux/irqdesc.h](https://github.com/torvalds/linux/blob/16f73eb02d7e1765ccab3d2018e0bd98eb93d973/include/linux/irqdesc.h#L46) is the foundation of interrupt management code in the Linux kernel and represents an interrupt descriptor. In this part we will continue to dive into the initialization stuff which is related to the external hardware interrupts.
 
-Right after the call of the `early_irq_init` function in the [init/main.c](https://github.com/torvalds/linux/blob/16f73eb02d7e1765ccab3d2018e0bd98eb93d973/init/main.c) we can see the call of the `init_IRQ` function. This function is architecture-specific and defined in the [arch/x86/kernel/irqinit.c](https://github.com/torvalds/linux/blob/16f73eb02d7e1765ccab3d2018e0bd98eb93d973/kernel/irqinit.c). The `init_IRQ` function makes initialization of the `vector_irq` [percpu](http://0xax.gitbooks.io/linux-insides/content/Concepts/per-cpu.html) variable that defined in the same [arch/x86/kernel/irqinit.c](https://github.com/torvalds/linux/blob/16f73eb02d7e1765ccab3d2018e0bd98eb93d973/kernel/irqinit.c) source code file:
+Right after the call of the `early_irq_init` function in the [init/main.c](https://github.com/torvalds/linux/blob/16f73eb02d7e1765ccab3d2018e0bd98eb93d973/init/main.c) we can see the call of the `init_IRQ` function. This function is architecture-specific and defined in the [arch/x86/kernel/irqinit.c](https://github.com/torvalds/linux/blob/16f73eb02d7e1765ccab3d2018e0bd98eb93d973/kernel/irqinit.c). The `init_IRQ` function makes initialization of the `vector_irq` [percpu](https://0xax.gitbooks.io/linux-insides/content/Concepts/linux-cpu-1.html) variable that defined in the same [arch/x86/kernel/irqinit.c](https://github.com/torvalds/linux/blob/16f73eb02d7e1765ccab3d2018e0bd98eb93d973/kernel/irqinit.c) source code file:
 
 ```C
 ...
@@ -28,7 +28,7 @@ where `NR_VECTORS` is count of the vector number and as you can remember from th
 #define NR_VECTORS                       256
 ```
 
-So, in the start of the `init_IRQ` function we fill the `vector_irq` [percpu](http://0xax.gitbooks.io/linux-insides/content/Concepts/per-cpu.html) array with the vector number of the `legacy` interrupts:
+So, in the start of the `init_IRQ` function we fill the `vector_irq` [percpu](http://0xax.gitbooks.io/linux-insides/content/Concepts/linux-cpu-1.html) array with the vector number of the `legacy` interrupts:
 
 ```C
 void __init init_IRQ(void)
@@ -521,7 +521,7 @@ Links
 --------------------------------------------------------------------------------
 
 * [IRQ](https://en.wikipedia.org/wiki/Interrupt_request_%28PC_architecture%29)
-* [percpu](http://0xax.gitbooks.io/linux-insides/content/Concepts/per-cpu.html)
+* [percpu](http://0xax.gitbooks.io/linux-insides/content/Concepts/linux-cpu-1.html)
 * [x86_64](https://en.wikipedia.org/wiki/X86-64)
 * [Intel 8259](https://en.wikipedia.org/wiki/Intel_8259)
 * [Programmable Interrupt Controller](https://en.wikipedia.org/wiki/Programmable_Interrupt_Controller)

Interrupts/linux-interrupts-9.md (+4 -4)

@@ -227,7 +227,7 @@ void __init softirq_init(void)
227 227
 }
228 228
 ```
229 229
 
230
-We can see definition of the integer `cpu` variable at the beginning of the `softirq_init` function. Next we will use it as parameter for the `for_each_possible_cpu` macro that goes through the all possible processors in the system. If the `possible processor` is the new terminology for you, you can read more about it the [CPU masks](http://0xax.gitbooks.io/linux-insides/content/Concepts/cpumask.html) chapter. In short words, `possible cpus` is the set of processors that can be plugged in anytime during the life of that system boot. All `possible processors` stored in the `cpu_possible_bits` bitmap, you can find its definition in the [kernel/cpu.c](https://github.com/torvalds/linux/blob/16f73eb02d7e1765ccab3d2018e0bd98eb93d973/kernel/cpu.c):
230
+We can see the definition of the integer `cpu` variable at the beginning of the `softirq_init` function. Next we will use it as a parameter for the `for_each_possible_cpu` macro that goes through all possible processors in the system. If `possible processor` is new terminology for you, you can read more about it in the [CPU masks](http://0xax.gitbooks.io/linux-insides/content/Concepts/linux-cpu-2.html) chapter. In short, `possible cpus` is the set of processors that can be plugged in at any time during the life of that system boot. All `possible processors` are stored in the `cpu_possible_bits` bitmap; you can find its definition in [kernel/cpu.c](https://github.com/torvalds/linux/blob/16f73eb02d7e1765ccab3d2018e0bd98eb93d973/kernel/cpu.c):
231 231
 
232 232
 ```C
233 233
 static DECLARE_BITMAP(cpu_possible_bits, CONFIG_NR_CPUS) __read_mostly;
@@ -237,7 +237,7 @@ static DECLARE_BITMAP(cpu_possible_bits, CONFIG_NR_CPUS) __read_mostly;
237 237
 const struct cpumask *const cpu_possible_mask = to_cpumask(cpu_possible_bits);
238 238
 ```
239 239
 
240
-Ok, we defined the integer `cpu` variable and go through the all possible processors with the `for_each_possible_cpu` macro and makes initialization of the two following [per-cpu](http://0xax.gitbooks.io/linux-insides/content/Concepts/per-cpu.html) variables:
240
+Ok, we defined the integer `cpu` variable, and with the `for_each_possible_cpu` macro we go through all possible processors and initialize the two following [per-cpu](http://0xax.gitbooks.io/linux-insides/content/Concepts/linux-cpu-1.html) variables (a simplified user-space sketch of this pattern follows the list):
241 241
 
242 242
 * `tasklet_vec`;
243 243
 * `tasklet_hi_vec`;
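
A user-space sketch of the `possible` cpu bitmap and the `for_each_possible_cpu` iteration may make the pattern clearer. It is only an analogy: `NR_CPUS`, the chosen bits and the tiny `tasklet_vec` stand-in are invented for the example.

```C
#include <stdio.h>

#define NR_CPUS        8
#define BITS_PER_LONG  (8 * sizeof(unsigned long))

static unsigned long cpu_possible_bits[(NR_CPUS + BITS_PER_LONG - 1) / BITS_PER_LONG];

static int cpu_possible(int cpu)
{
	return (cpu_possible_bits[cpu / BITS_PER_LONG] >> (cpu % BITS_PER_LONG)) & 1UL;
}

static void set_cpu_possible(int cpu)
{
	cpu_possible_bits[cpu / BITS_PER_LONG] |= 1UL << (cpu % BITS_PER_LONG);
}

/* rough analogue of the kernel's for_each_possible_cpu(cpu) */
#define for_each_possible_cpu(cpu) \
	for ((cpu) = 0; (cpu) < NR_CPUS; (cpu)++) \
		if (cpu_possible(cpu))

/* stand-in for the per-cpu tasklet_vec heads initialized in softirq_init */
static struct { void *head; void **tail; } tasklet_vec[NR_CPUS];

int main(void)
{
	int cpu;

	set_cpu_possible(0);
	set_cpu_possible(1);
	set_cpu_possible(4);   /* may be plugged in later, present or not */

	for_each_possible_cpu(cpu) {
		tasklet_vec[cpu].tail = &tasklet_vec[cpu].head;
		printf("initialized tasklet_vec for possible cpu %d\n", cpu);
	}
	return 0;
}
```
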
@@ -520,7 +520,7 @@ Links
520 520
 * [initcall](http://www.compsoc.man.ac.uk/~moz/kernelnewbies/documents/initcall/index.html)
521 521
 * [IF](https://en.wikipedia.org/wiki/Interrupt_flag)
522 522
 * [eflags](https://en.wikipedia.org/wiki/FLAGS_register)
523
-* [CPU masks](http://0xax.gitbooks.io/linux-insides/content/Concepts/cpumask.html)
524
-* [per-cpu](http://0xax.gitbooks.io/linux-insides/content/Concepts/per-cpu.html)
523
+* [CPU masks](http://0xax.gitbooks.io/linux-insides/content/Concepts/linux-cpu-2.html)
524
+* [per-cpu](https://0xax.gitbooks.io/linux-insides/content/Concepts/linux-cpu-1.html)
525 525
 * [Workqueue](https://github.com/torvalds/linux/blob/16f73eb02d7e1765ccab3d2018e0bd98eb93d973/Documentation/workqueue.txt)
526 526
 * [Previous part](https://0xax.gitbooks.io/linux-insides/content/Interrupts/linux-interrupts-8.html)

+ 1
- 1
MM/linux-mm-2.md View File

@@ -535,5 +535,5 @@ Links
535 535
 * [e820](http://en.wikipedia.org/wiki/E820)
536 536
 * [Memory management unit](http://en.wikipedia.org/wiki/Memory_management_unit)
537 537
 * [TLB](http://en.wikipedia.org/wiki/Translation_lookaside_buffer)
538
-* [Paging](http://0xax.gitbooks.io/linux-insides/content/Theory/Paging.html)
538
+* [Paging](https://0xax.gitbooks.io/linux-insides/content/Theory/linux-theory-1.html)
539 539
 * [Linux kernel memory management Part 1.](http://0xax.gitbooks.io/linux-insides/content/MM/linux-mm-1.html)

+ 6
- 7
MM/linux-mm-3.md View File

@@ -148,7 +148,7 @@ Ok, so we know that `kmemcheck` provides mechanism to check usage of `uninitiali
148 148
 struct my_struct *my_struct = kmalloc(sizeof(struct my_struct), GFP_KERNEL);
149 149
 ```
150 150
 
151
-or in other words somebody wants to access a [page](https://en.wikipedia.org/wiki/Page_%28computer_memory%29), a [page fault](https://en.wikipedia.org/wiki/Page_fault) exception is generated. This is achieved by the fact that the `kmemcheck` marks memory pages as `non-present` (more about this you can read in the special part which is devoted to [paging](https://0xax.gitbooks.io/linux-insides/content/Theory/Paging.html)). If a `page fault` exception is occurred, the exception handler knows about it and in a case when the `kmemcheck` is enabled it transfers control to it. After the `kmemcheck` will finish its checks, the page will be marked as `present` and the interrupted code will be able to continue execution. There is little subtlety in this chain. When the first instruction of interrupted code will be executed, the `kmemcheck` will mark the page as `non-present` again. In this way next access to memory will be caught again.
151
+or in other words when somebody wants to access a [page](https://en.wikipedia.org/wiki/Page_%28computer_memory%29), a [page fault](https://en.wikipedia.org/wiki/Page_fault) exception is generated. This is achieved by the fact that `kmemcheck` marks memory pages as `non-present` (you can read more about this in the special part which is devoted to [Paging](https://0xax.gitbooks.io/linux-insides/content/Theory/linux-theory-1.html)). If a `page fault` exception occurs, the exception handler knows about it and, in the case when `kmemcheck` is enabled, transfers control to it. After `kmemcheck` finishes its checks, the page will be marked as `present` and the interrupted code will be able to continue execution. There is a little subtlety in this chain: when the first instruction of the interrupted code is executed, `kmemcheck` will mark the page as `non-present` again. In this way the next access to memory will be caught again.
152 152
 
153 153
 We just considered the `kmemcheck` mechanism from theoretical side. Now let's consider how it is implemented in the Linux kernel.
154 154
 
@@ -190,7 +190,7 @@ early_param("kmemcheck", param_kmemcheck);
190 190
 
191 191
 As we already saw, the `param_kmemcheck` may have one of the following values: `0` (disabled), `1` (enabled) or `2` (one-shot). The implementation of the `param_kmemcheck` is pretty simple. We just convert the string value of the `kmemcheck` command line option to its integer representation and set it to the `kmemcheck_enabled` variable.
192 192
 
193
-The second stage will be executed during initialization of the Linux kernel, rather during initialization of early [initcalls](https://0xax.gitbooks.io/linux-insides/content/Concepts/initcall.html). The second stage is represented by the `kmemcheck_init`:
193
+The second stage will be executed during the initialization of the Linux kernel, or rather during the initialization of early [initcalls](https://0xax.gitbooks.io/linux-insides/content/Concepts/linux-cpu-3.html). The second stage is represented by the `kmemcheck_init` function:
194 194
 
195 195
 ```C
196 196
 int __init kmemcheck_init(void)
@@ -296,7 +296,7 @@ __do_page_fault(struct pt_regs *regs, unsigned long error_code,
296 296
 }
297 297
 ```
298 298
 
299
-The `kmemcheck_active` gets `kmemcheck_context` [per-cpu](https://0xax.gitbooks.io/linux-insides/content/Concepts/per-cpu.html) structure and return the result of comparison of the `balance` field of this structure with zero:
299
+The `kmemcheck_active` function gets the `kmemcheck_context` [per-cpu](https://0xax.gitbooks.io/linux-insides/content/Concepts/linux-cpu-1.html) structure and returns the result of comparing the `balance` field of this structure with zero:
300 300
 
301 301
 ```
302 302
 bool kmemcheck_active(struct pt_regs *regs)
@@ -422,13 +422,12 @@ Links
422 422
 * [memory leaks](https://en.wikipedia.org/wiki/Memory_leak)
423 423
 * [kmemcheck documentation](https://www.kernel.org/doc/Documentation/kmemcheck.txt)
424 424
 * [valgrind](https://en.wikipedia.org/wiki/Valgrind)
425
-* [paging](https://0xax.gitbooks.io/linux-insides/content/Theory/Paging.html)
425
+* [Paging](https://0xax.gitbooks.io/linux-insides/content/Theory/linux-theory-1.html)
426 426
 * [page fault](https://en.wikipedia.org/wiki/Page_fault)
427
-* [initcalls](https://0xax.gitbooks.io/linux-insides/content/Concepts/initcall.html)
427
+* [initcalls](https://0xax.gitbooks.io/linux-insides/content/Concepts/linux-cpu-3.html)
428 428
 * [opcode](https://en.wikipedia.org/wiki/Opcode)
429 429
 * [translation lookaside buffer](https://en.wikipedia.org/wiki/Translation_lookaside_buffer)
430
-* [per-cpu variables](https://0xax.gitbooks.io/linux-insides/content/Concepts/per-cpu.html)
430
+* [per-cpu variables](https://0xax.gitbooks.io/linux-insides/content/Concepts/linux-cpu-1.html)
431 431
 * [flags register](https://en.wikipedia.org/wiki/FLAGS_register)
432 432
 * [tasklet](https://0xax.gitbooks.io/linux-insides/content/Interrupts/linux-interrupts-9.html)
433
-* [Paging](http://0xax.gitbooks.io/linux-insides/content/Theory/Paging.html)
434 433
 * [Previous part](https://0xax.gitbooks.io/linux-insides/content/MM/linux-mm-2.html)
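
The `non-present` trick described in this part can be imitated in user space with `mprotect` and a `SIGSEGV` handler, which may make the flow easier to picture. The sketch below is only an analogy, and it deliberately skips the last step: real `kmemcheck` re-protects the page right after the single faulting instruction, while this program leaves it accessible.

```C
#define _GNU_SOURCE
#include <signal.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

static long page_size;
static void *watched_page;

/* plays the role of the page fault handler hook described above */
static void on_fault(int sig, siginfo_t *info, void *ctx)
{
	uintptr_t addr = (uintptr_t)info->si_addr;
	uintptr_t page = addr & ~((uintptr_t)page_size - 1);

	(void)sig; (void)ctx;
	if ((void *)page != watched_page)
		_exit(1);                       /* a real, unexpected crash */

	/* here kmemcheck would check whether the accessed bytes were
	 * initialized; we simply make the page accessible so that the
	 * faulting instruction can be restarted */
	mprotect((void *)page, page_size, PROT_READ | PROT_WRITE);
}

int main(void)
{
	struct sigaction sa;
	volatile char *p;

	page_size = sysconf(_SC_PAGESIZE);

	memset(&sa, 0, sizeof(sa));
	sa.sa_sigaction = on_fault;
	sa.sa_flags = SA_SIGINFO;
	sigemptyset(&sa.sa_mask);
	sigaction(SIGSEGV, &sa, NULL);

	/* the user-space stand-in for a page marked as non-present */
	watched_page = mmap(NULL, page_size, PROT_NONE,
			    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	p = watched_page;
	p[0] = 42;                              /* faults, handler unprotects, retried */
	printf("value after the trapped access: %d\n", p[0]);
	return 0;
}
```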

+ 2
- 2
SyncPrim/linux-sync-2.md View File

@@ -101,7 +101,7 @@ In the previous [part](https://0xax.gitbooks.io/linux-insides/content/SyncPrim/l
101 101
 
102 102
 The topic of this part is `queued spinlocks`. This approach may help to solve both of these problems. `Queued spinlocks` allow each processor to use its own memory location to spin. The basic principle of a queue-based spinlock can best be understood by studying a classic queue-based spinlock implementation called the [MCS](http://www.cs.rochester.edu/~scott/papers/1991_TOCS_synch.pdf) lock. Before we look at the implementation of `queued spinlocks` in the Linux kernel, we will try to understand what an `MCS` lock is.
103 103
 
104
-The basic idea of the `MCS` lock is in that as I already wrote in the previous paragraph, a thread spins on a local variable and each processor in the system has its own copy of these variable. In other words this concept is built on top of the [per-cpu](https://0xax.gitbooks.io/linux-insides/content/Concepts/per-cpu.html) variables concept in the Linux kernel.
104
+The basic idea of the `MCS` lock is that, as I already wrote in the previous paragraph, a thread spins on a local variable and each processor in the system has its own copy of this variable. In other words this concept is built on top of the [per-cpu](https://0xax.gitbooks.io/linux-insides/content/Concepts/linux-cpu-1.html) variables concept in the Linux kernel.
105 105
 
106 106
 When the first thread wants to acquire the lock, it registers itself in the `queue`, or in other words it is added to the special `queue`, and acquires the lock because it is free for now. When the second thread wants to acquire the same lock before the first thread releases it, this thread adds its own copy of the lock variable to this `queue`. In this case the first thread's entry will contain a `next` field which points to the second thread. From this moment, the second thread will wait until the first thread releases its lock and notifies the `next` thread about this event. At that point the first thread is deleted from the `queue` and the second thread becomes the owner of the lock.
107 107
 
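To make the queue idea concrete, below is a minimal user-space sketch of an MCS-style lock built on C11 atomics. It only illustrates the algorithm from the MCS paper linked above; it is not the kernel's queued spinlock code, where the per-waiter node lives in per-cpu data rather than in a caller-provided structure.

```C
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

/* one node per waiter; in the kernel the analogous node lives in per-cpu data */
struct mcs_node {
	_Atomic(struct mcs_node *) next;
	atomic_bool                locked;
};

struct mcs_lock {
	_Atomic(struct mcs_node *) tail;   /* last waiter in the queue */
};

void mcs_lock_acquire(struct mcs_lock *lock, struct mcs_node *node)
{
	struct mcs_node *prev;

	atomic_store(&node->next, NULL);
	atomic_store(&node->locked, true);

	/* append ourselves to the queue in one atomic step */
	prev = atomic_exchange(&lock->tail, node);
	if (!prev)
		return;                    /* the queue was empty: lock acquired */

	/* link behind the previous waiter and spin on our *own* flag */
	atomic_store(&prev->next, node);
	while (atomic_load(&node->locked))
		;                          /* local spinning, no cache line bouncing */
}

void mcs_lock_release(struct mcs_lock *lock, struct mcs_node *node)
{
	struct mcs_node *next = atomic_load(&node->next);

	if (!next) {
		struct mcs_node *expected = node;

		/* nobody visible behind us: try to mark the queue empty */
		if (atomic_compare_exchange_strong(&lock->tail, &expected, NULL))
			return;

		/* a successor is linking itself in right now; wait for it */
		while (!(next = atomic_load(&node->next)))
			;
	}

	/* hand the lock over by clearing the successor's local flag */
	atomic_store(&next->locked, false);
}
```

Each locker passes its own `struct mcs_node`, so every waiter spins on memory that belongs to it alone; handing the lock over is a single store to the successor's `locked` flag.
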
@@ -477,7 +477,7 @@ Links
477 477
 * [API](https://en.wikipedia.org/wiki/Application_programming_interface)
478 478
 * [Test and Set](https://en.wikipedia.org/wiki/Test-and-set)
479 479
 * [MCS](http://www.cs.rochester.edu/~scott/papers/1991_TOCS_synch.pdf)
480
-* [per-cpu variables](https://0xax.gitbooks.io/linux-insides/content/Concepts/per-cpu.html)
480
+* [per-cpu variables](https://0xax.gitbooks.io/linux-insides/content/Concepts/linux-cpu-1.html)
481 481
 * [atomic instruction](https://en.wikipedia.org/wiki/Linearizability)
482 482
 * [CMPXCHG instruction](http://x86.renejeschke.de/html/file_module_x86_id_41.html) 
483 483
 * [LOCK instruction](http://x86.renejeschke.de/html/file_module_x86_id_159.html)

+ 4
- 4
SyncPrim/linux-sync-3.md View File

@@ -76,7 +76,7 @@ The `__SEMAPHORE_INITIALIZER` macro takes the name of the future `semaphore` str
76 76
 #define __ARCH_SPIN_LOCK_UNLOCKED       { { 0 } }
77 77
 ```
78 78
 
79
-The last two fields of the `semaphore` structure `count` and `wait_list` are initialized with the given value which represents count of available resources and empty [list](https://0xax.gitbooks.io/linux-insides/content/DataStructures/dlist.html).
79
+The last two fields of the `semaphore` structure, `count` and `wait_list`, are initialized with the given value, which represents the count of available resources, and an empty [list](https://0xax.gitbooks.io/linux-insides/content/DataStructures/linux-datastructures-1.html).
80 80
 
81 81
 The second way to initialize a `semaphore` structure is to pass the `semaphore` and number of available resources to the `sema_init` function which is defined in the [include/linux/semaphore.h](https://github.com/torvalds/linux/blob/16f73eb02d7e1765ccab3d2018e0bd98eb93d973/include/linux/semaphore.h) header file:
82 82
 
@@ -184,7 +184,7 @@ The first represents current task for the local processor which wants to acquire
184 184
 #define current get_current()
185 185
 ```
186 186
 
187
-Where the `get_current` function returns value of the `current_task` [per-cpu](https://0xax.gitbooks.io/linux-insides/content/Concepts/per-cpu.html) variable:
187
+Where the `get_current` function returns the value of the `current_task` [per-cpu](https://0xax.gitbooks.io/linux-insides/content/Concepts/linux-cpu-1.html) variable:
188 188
 
189 189
 ```C
190 190
 DECLARE_PER_CPU(struct task_struct *, current_task);
@@ -342,10 +342,10 @@ Links
342 342
 * [preemption](https://en.wikipedia.org/wiki/Preemption_%28computing%29)
343 343
 * [deadlocks](https://en.wikipedia.org/wiki/Deadlock)
344 344
 * [scheduler](https://en.wikipedia.org/wiki/Scheduling_%28computing%29)
345
-* [Doubly linked list in the Linux kernel](https://0xax.gitbooks.io/linux-insides/content/DataStructures/dlist.html)
345
+* [Doubly linked list in the Linux kernel](https://0xax.gitbooks.io/linux-insides/content/DataStructures/linux-datastructures-1.html)
346 346
 * [jiffies](https://0xax.gitbooks.io/linux-insides/content/Timers/linux-timers-1.html)
347 347
 * [interrupts](https://en.wikipedia.org/wiki/Interrupt)
348
-* [per-cpu](https://0xax.gitbooks.io/linux-insides/content/Concepts/per-cpu.html)
348
+* [per-cpu](https://0xax.gitbooks.io/linux-insides/content/Concepts/linux-cpu-1.html)
349 349
 * [bitmask](https://en.wikipedia.org/wiki/Mask_%28computing%29)
350 350
 * [SIGKILL](https://en.wikipedia.org/wiki/Unix_signal#SIGKILL)
351 351
 * [errno](https://en.wikipedia.org/wiki/Errno.h)
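
As a rough user-space analogy of the `count`/`wait_list` pair discussed in this part, here is a toy counting semaphore where a pthread condition variable plays the role of the wait list. The names are invented for the example and this is not the kernel's implementation.

```C
#include <pthread.h>

/* toy semaphore: `count` is the number of available resources and the
 * condition variable plays the role of wait_list, where callers that
 * found count == 0 sleep until somebody releases a resource */
struct toy_semaphore {
	pthread_mutex_t lock;
	pthread_cond_t  wait;
	unsigned int    count;
};

void toy_sema_init(struct toy_semaphore *sem, unsigned int count)
{
	pthread_mutex_init(&sem->lock, NULL);
	pthread_cond_init(&sem->wait, NULL);
	sem->count = count;
}

void toy_down(struct toy_semaphore *sem)       /* acquire a resource */
{
	pthread_mutex_lock(&sem->lock);
	while (sem->count == 0)
		pthread_cond_wait(&sem->wait, &sem->lock);
	sem->count--;
	pthread_mutex_unlock(&sem->lock);
}

void toy_up(struct toy_semaphore *sem)         /* release a resource */
{
	pthread_mutex_lock(&sem->lock);
	sem->count++;
	pthread_cond_signal(&sem->wait);
	pthread_mutex_unlock(&sem->lock);
}
```

Compile with `-lpthread`; `toy_sema_init(&sem, n)` corresponds to a semaphore protecting `n` instances of a resource.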

+ 4
- 4
SyncPrim/linux-sync-4.md View File

@@ -114,7 +114,7 @@ macro. Let's consider implementation of this macro. As we may see, the `DEFINE_M
114 114
 }
115 115
 ```
116 116
 
117
-This macro is defined in the [same](https://github.com/torvalds/linux/blob/16f73eb02d7e1765ccab3d2018e0bd98eb93d973/include/linux/mutex.h) header file and as we may understand it initializes fields of the `mutex` structure the initial values. The `count` field get initialized with the `1` which represents `unlocked` state of a mutex. The `wait_lock` [spinlock](https://en.wikipedia.org/wiki/Spinlock) get initialized to the unlocked state and the last field `wait_list` to empty [doubly linked list](https://0xax.gitbooks.io/linux-insides/content/DataStructures/dlist.html).
117
+This macro is defined in the [same](https://github.com/torvalds/linux/blob/16f73eb02d7e1765ccab3d2018e0bd98eb93d973/include/linux/mutex.h) header file and, as we may understand, it initializes the fields of the `mutex` structure with initial values. The `count` field gets initialized to `1`, which represents the `unlocked` state of a mutex. The `wait_lock` [spinlock](https://en.wikipedia.org/wiki/Spinlock) gets initialized to the unlocked state and the last field, `wait_list`, to an empty [doubly linked list](https://0xax.gitbooks.io/linux-insides/content/DataStructures/linux-datastructures-1.html).
118 118
 
119 119
 The second approach allows us to initialize a `mutex` dynamically. To do this we need to call the `__mutex_init` function from the [kernel/locking/mutex.c](https://github.com/torvalds/linux/blob/16f73eb02d7e1765ccab3d2018e0bd98eb93d973/kernel/locking/mutex.c) source code file. Actually, the `__mutex_init` function rarely called directly. Instead of the `__mutex_init`, the:
120 120
 
@@ -176,7 +176,7 @@ We may see the call of the `might_sleep` macro from the [include/linux/kernel.h]
176 176
 
177 177
 After the `might_sleep` macro, we may see the call of the `__mutex_fastpath_lock` function. This function is architecture-specific and as we consider [x86_64](https://en.wikipedia.org/wiki/X86-64) architecture in this book, the implementation of the `__mutex_fastpath_lock` is located in the [arch/x86/include/asm/mutex_64.h](https://github.com/torvalds/linux/blob/16f73eb02d7e1765ccab3d2018e0bd98eb93d973/arch/x86/include/asm/mutex_64.h) header file. As we may understand from the name of the `__mutex_fastpath_lock` function, this function will try to acquire lock in a fast path or in other words this function will try to decrement the value of the `count` of the given mutex.
178 178
 
179
-Implementation of the `__mutex_fastpath_lock` function consists from two parts. The first part is [inline assembly](https://0xax.gitbooks.io/linux-insides/content/Theory/asm.html) statement. Let's look at it:
179
+The implementation of the `__mutex_fastpath_lock` function consists of two parts. The first part is an [inline assembly](https://0xax.gitbooks.io/linux-insides/content/Theory/linux-theory-3.html) statement. Let's look at it:
180 180
 
181 181
 ```C
182 182
 asm_volatile_goto(LOCK_PREFIX "   decl %0\n"
@@ -429,9 +429,9 @@ Links
429 429
 * [lock validator](https://www.kernel.org/doc/Documentation/locking/lockdep-design.txt)
430 430
 * [Atomic](https://en.wikipedia.org/wiki/Linearizability)
431 431
 * [MCS lock](http://www.cs.rochester.edu/~scott/papers/1991_TOCS_synch.pdf)
432
-* [Doubly linked list](https://0xax.gitbooks.io/linux-insides/content/DataStructures/dlist.html)
432
+* [Doubly linked list](https://0xax.gitbooks.io/linux-insides/content/DataStructures/linux-datastructures-1.html)
433 433
 * [x86_64](https://en.wikipedia.org/wiki/X86-64)
434
-* [Inline assembly](https://0xax.gitbooks.io/linux-insides/content/Theory/asm.html)
434
+* [Inline assembly](https://0xax.gitbooks.io/linux-insides/content/Theory/linux-theory-3.html)
435 435
 * [Memory barrier](https://en.wikipedia.org/wiki/Memory_barrier)
436 436
 * [Lock instruction](http://x86.renejeschke.de/html/file_module_x86_id_159.html)
437 437
 * [JNS instruction](http://unixwiz.net/techtips/x86-jumps.html)
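
The fast path behaviour described in this part (`count` starts at `1`, locking is a single atomic decrement, and only a contended mutex falls into the slow path) can be modelled with C11 atomics. This is a toy model under those assumptions, not the kernel's code, and the slow path is reduced to a placeholder.

```C
#include <stdatomic.h>
#include <stdio.h>

/* 1: unlocked, 0: locked, negative: locked with (possible) waiters */
struct toy_mutex {
	atomic_int count;
};

static void toy_mutex_lock(struct toy_mutex *m)
{
	/* fast path: one atomic decrement, analogous to the `decl` above */
	if (atomic_fetch_sub(&m->count, 1) == 1)
		return;                  /* the mutex was free, we own it now */

	/* slow path placeholder: the kernel would put the task on
	 * wait_list and go to sleep here */
	printf("contended: entering the slow path\n");
}

static void toy_mutex_unlock(struct toy_mutex *m)
{
	/* fast path of unlock: the old value 0 means there were no waiters */
	if (atomic_fetch_add(&m->count, 1) == 0)
		return;

	/* slow path placeholder: wake up the first task from wait_list */
	printf("waiters present: waking one up\n");
}

int main(void)
{
	struct toy_mutex m = { 1 };

	toy_mutex_lock(&m);              /* fast path, count: 1 -> 0 */
	toy_mutex_unlock(&m);            /* fast path, count: 0 -> 1 */
	return 0;
}
```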

+ 4
- 4
SyncPrim/linux-sync-5.md View File

@@ -59,7 +59,7 @@ config RWSEM_GENERIC_SPINLOCK
59 59
 
60 60
 So, as this [book](https://0xax.gitbooks.io/linux-insides/content) describes only [x86_64](https://en.wikipedia.org/wiki/X86-64) architecture related stuff, we will skip the case when the `CONFIG_RWSEM_GENERIC_SPINLOCK` kernel configuration is enabled and consider definition of the `rw_semaphore` structure only from the [include/linux/rwsem.h](https://github.com/torvalds/linux/blob/16f73eb02d7e1765ccab3d2018e0bd98eb93d973/include/linux/rwsem.h) header file.
61 61
 
62
-If we will take a look at the definition of the `rw_semaphore` structure, we will notice that first three fields are the same that in the `semaphore` structure. It contains `count` field which represents amount of available resources, the `wait_list` field which represents [doubly linked list](https://0xax.gitbooks.io/linux-insides/content/DataStructures/dlist.html) of processes which are waiting to acquire a lock and `wait_lock` [spinlock](https://en.wikipedia.org/wiki/Spinlock) for protection of this list. Notice that `rw_semaphore.count` field is `long` type unlike the same field in the `semaphore` structure.
62
+If we take a look at the definition of the `rw_semaphore` structure, we will notice that the first three fields are the same as in the `semaphore` structure. It contains the `count` field, which represents the amount of available resources, the `wait_list` field, which represents a [doubly linked list](https://0xax.gitbooks.io/linux-insides/content/DataStructures/linux-datastructures-1.html) of processes which are waiting to acquire a lock, and the `wait_lock` [spinlock](https://en.wikipedia.org/wiki/Spinlock) for protection of this list. Notice that the `rw_semaphore.count` field is of `long` type, unlike the same field in the `semaphore` structure.
63 63
 
64 64
 The `count` field of a `rw_semaphore` structure may have following values:
65 65
 
@@ -240,7 +240,7 @@ static inline void __down_write_nested(struct rw_semaphore *sem, int subclass)
240 240
 }
241 241
 ```
242 242
 
243
-As for other synchronization primitives which we saw in this chapter, usually `lock/unlock` functions consists only from an [inline assembly](https://0xax.gitbooks.io/linux-insides/content/Theory/asm.html) statement. As we may see, in our case the same for `__down_write_nested` function. Let's try to understand what does this function do. The first line of our assembly statement is just a comment, let's skip it. The second like contains `LOCK_PREFIX` which will be expanded to the [LOCK](http://x86.renejeschke.de/html/file_module_x86_id_159.html) instruction as we already know. The next [xadd](http://x86.renejeschke.de/html/file_module_x86_id_327.html) instruction executes `add` and `exchange` operations. In other words, `xadd` instruction adds value of the `RWSEM_ACTIVE_WRITE_BIAS`:
243
+As for other synchronization primitives which we saw in this chapter, the `lock/unlock` functions usually consist only of an [inline assembly](https://0xax.gitbooks.io/linux-insides/content/Theory/linux-theory-3.html) statement. As we may see, in our case the same is true for the `__down_write_nested` function. Let's try to understand what this function does. The first line of our assembly statement is just a comment, let's skip it. The second line contains `LOCK_PREFIX`, which will be expanded to the [LOCK](http://x86.renejeschke.de/html/file_module_x86_id_159.html) instruction as we already know. The next [xadd](http://x86.renejeschke.de/html/file_module_x86_id_327.html) instruction executes `add` and `exchange` operations. In other words, the `xadd` instruction adds the value of the `RWSEM_ACTIVE_WRITE_BIAS`:
244 244
 
245 245
 ```C
246 246
 #define RWSEM_ACTIVE_WRITE_BIAS         (RWSEM_WAITING_BIAS + RWSEM_ACTIVE_BIAS)
@@ -422,12 +422,12 @@ Links
422 422
 * [Semaphore](https://en.wikipedia.org/wiki/Semaphore_%28programming%29)
423 423
 * [Mutex](https://en.wikipedia.org/wiki/Mutual_exclusion)
424 424
 * [x86_64 architecture](https://en.wikipedia.org/wiki/X86-64)
425
-* [Doubly linked list](https://0xax.gitbooks.io/linux-insides/content/DataStructures/dlist.html)
425
+* [Doubly linked list](https://0xax.gitbooks.io/linux-insides/content/DataStructures/linux-datastructures-1.html)
426 426
 * [MCS lock](http://www.cs.rochester.edu/~scott/papers/1991_TOCS_synch.pdf)
427 427
 * [API](https://en.wikipedia.org/wiki/Application_programming_interface)
428 428
 * [Linux kernel lock validator](https://www.kernel.org/doc/Documentation/locking/lockdep-design.txt)
429 429
 * [Atomic operations](https://en.wikipedia.org/wiki/Linearizability)
430
-* [Inline assembly](https://0xax.gitbooks.io/linux-insides/content/Theory/asm.html)
430
+* [Inline assembly](https://0xax.gitbooks.io/linux-insides/content/Theory/linux-theory-3.html)
431 431
 * [XADD instruction](http://x86.renejeschke.de/html/file_module_x86_id_327.html)
432 432
 * [LOCK instruction](http://x86.renejeschke.de/html/file_module_x86_id_159.html)
433 433
 * [Previous part](https://0xax.gitbooks.io/linux-insides/content/SyncPrim/linux-sync-4.html)
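
The `xadd`-based counting scheme can be modelled in a few lines: readers add a small value, a writer adds a large bias, and the value fetched by the atomic add tells the caller whether the fast path succeeded. The constants and names below are invented for the illustration and do not match the kernel's `RWSEM_*` values.

```C
#include <stdatomic.h>
#include <stdio.h>

#define READER_BIAS  1L
#define WRITER_BIAS  (-65536L)    /* illustrative only */

static atomic_long count = 0;     /* 0 means: free */

static int try_down_read(void)
{
	/* "xadd": add and fetch the old value in one atomic step */
	long old = atomic_fetch_add(&count, READER_BIAS);
	return old >= 0;          /* a negative old value: a writer is active */
}

static int try_down_write(void)
{
	long old = atomic_fetch_add(&count, WRITER_BIAS);
	return old == 0;          /* anything else: readers or a writer hold it */
}

int main(void)
{
	printf("reader acquired: %d\n", try_down_read());   /* 1 */
	printf("writer acquired: %d\n", try_down_write());  /* 0: must take the slow path */
	return 0;
}
```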

+ 2
- 2
SysCall/linux-syscall-2.md View File

@@ -210,7 +210,7 @@ This macro is defined in the [arch/x86/include/asm/irqflags.h](https://github.co
210 210
 #define SWAPGS_UNSAFE_STACK	swapgs
211 211
 ```
212 212
 
213
-which exchanges the current GS base register value with the value contained in the `MSR_KERNEL_GS_BASE ` model specific register. In other words we moved it on to the kernel stack. After this we point the old stack pointer to the `rsp_scratch` [per-cpu](http://0xax.gitbooks.io/linux-insides/content/Concepts/per-cpu.html) variable and setup the stack pointer to point to the top of stack for the current processor:
213
+which exchanges the current GS base register value with the value contained in the `MSR_KERNEL_GS_BASE` model specific register. In other words we move on to the kernel stack. After this we save the old stack pointer in the `rsp_scratch` [per-cpu](http://0xax.gitbooks.io/linux-insides/content/Concepts/linux-cpu-1.html) variable and set the stack pointer to point to the top of the stack for the current processor:
214 214
 
215 215
 ```assembly
216 216
 movq	%rsp, PER_CPU_VAR(rsp_scratch)
@@ -402,7 +402,7 @@ Links
402 402
 * [instruction pointer](https://en.wikipedia.org/wiki/Program_counter)
403 403
 * [flags register](https://en.wikipedia.org/wiki/FLAGS_register)
404 404
 * [Global Descriptor Table](https://en.wikipedia.org/wiki/Global_Descriptor_Table)
405
-* [per-cpu](http://0xax.gitbooks.io/linux-insides/content/Concepts/per-cpu.html)
405
+* [per-cpu](http://0xax.gitbooks.io/linux-insides/content/Concepts/linux-cpu-1.html)
406 406
 * [general purpose registers](https://en.wikipedia.org/wiki/Processor_register)
407 407
 * [ABI](https://en.wikipedia.org/wiki/Application_binary_interface)
408 408
 * [x86_64 C ABI](http://www.x86-64.org/documentation/abi.pdf)

+ 2
- 2
SysCall/linux-syscall-3.md View File

@@ -252,7 +252,7 @@ Here we can see that [uname](https://en.wikipedia.org/wiki/Uname) util was linke
252 252
 * `libc.so.6`;
253 253
 * `ld-linux-x86-64.so.2`.
254 254
 
255
-The first provides `vDSO` functionality, the second is `C` [standard library](https://en.wikipedia.org/wiki/C_standard_library) and the third is the program interpreter (more about this you can read in the part that describes [linkers](http://0xax.gitbooks.io/linux-insides/content/Misc/linkers.html)). So, the `vDSO` solves limitations of the `vsyscall`. Implementation of the `vDSO` is similar to `vsyscall`.
255
+The first provides `vDSO` functionality, the second is the `C` [standard library](https://en.wikipedia.org/wiki/C_standard_library) and the third is the program interpreter (you can read more about this in the part that describes [linkers](https://0xax.gitbooks.io/linux-insides/content/Misc/linux-misc-3.html)). So, the `vDSO` solves the limitations of `vsyscall`. The implementation of the `vDSO` is similar to `vsyscall`.
256 256
 
257 257
 Initialization of the `vDSO` occurs in the `init_vdso` function that is defined in the [arch/x86/entry/vdso/vma.c](https://github.com/torvalds/linux/blob/16f73eb02d7e1765ccab3d2018e0bd98eb93d973/arch/x86/entry/vdso/vma.c) source code file. This function starts from the initialization of the `vDSO` images for 32-bit and 64-bit, depending on the `CONFIG_X86_X32_ABI` kernel configuration option:
258 258
 
@@ -399,5 +399,5 @@ Links
399 399
 * [instruction pointer](https://en.wikipedia.org/wiki/Program_counter)
400 400
 * [stack pointer](https://en.wikipedia.org/wiki/Stack_register)
401 401
 * [uname](https://en.wikipedia.org/wiki/Uname)
402
-* [Linkers](http://0xax.gitbooks.io/linux-insides/content/Misc/linkers.html)
402
+* [Linkers](https://0xax.gitbooks.io/linux-insides/content/Misc/linux-misc-3.html)
403 403
 * [Previous part](https://0xax.gitbooks.io/linux-insides/content/SysCall/linux-syscall-2.html)

+ 2
- 2
SysCall/linux-syscall-4.md View File

@@ -334,7 +334,7 @@ if (!elf_phdata)
334 334
 	goto out;
335 335
 ```
336 336
 
337
-that describes [segments](https://en.wikipedia.org/wiki/Memory_segmentation). Read the `program interpreter` and libraries that linked with the our executable binary file from disk and load it to memory. The `program interpreter` specified in the `.interp` section of the executable file and as you can read in the part that describes [Linkers](http://0xax.gitbooks.io/linux-insides/content/Misc/linkers.html) it is - `/lib64/ld-linux-x86-64.so.2` for the `x86_64`. It setups the stack and map `elf` binary into the correct location in memory. It maps the [bss](https://en.wikipedia.org/wiki/.bss) and the [brk](http://man7.org/linux/man-pages/man2/sbrk.2.html) sections and does many many other different things to prepare executable file to execute.
337
+that describes [segments](https://en.wikipedia.org/wiki/Memory_segmentation). It reads the `program interpreter` and the libraries that are linked with our executable binary file from disk and loads them into memory. The `program interpreter` is specified in the `.interp` section of the executable file, and as you can read in the part that describes [Linkers](https://0xax.gitbooks.io/linux-insides/content/Misc/linux-misc-3.html), it is `/lib64/ld-linux-x86-64.so.2` for `x86_64`. It sets up the stack and maps the `elf` binary into the correct location in memory. It maps the [bss](https://en.wikipedia.org/wiki/.bss) and the [brk](http://man7.org/linux/man-pages/man2/sbrk.2.html) sections and does many other different things to prepare the executable file for execution.
338 338
 
339 339
 In the end of the execution of the `load_elf_binary` we call the `start_thread` function and pass three arguments to it:
340 340
 
@@ -424,7 +424,7 @@ Links
424 424
 * [Alpha](https://en.wikipedia.org/wiki/DEC_Alpha)
425 425
 * [FDPIC](http://elinux.org/UClinux_Shared_Library#FDPIC_ELF)
426 426
 * [segments](https://en.wikipedia.org/wiki/Memory_segmentation)
427
-* [Linkers](http://0xax.gitbooks.io/linux-insides/content/Misc/linkers.html)
427
+* [Linkers](https://0xax.gitbooks.io/linux-insides/content/Misc/linux-misc-3.html)
428 428
 * [Processor register](https://en.wikipedia.org/wiki/Processor_register)
429 429
 * [instruction pointer](https://en.wikipedia.org/wiki/Program_counter)
430 430
 * [Previous part](https://0xax.gitbooks.io/linux-insides/content/SysCall/linux-syscall-3.html)

+ 4
- 4
Timers/linux-timers-3.md View File

@@ -102,7 +102,7 @@ void __init tick_broadcast_init(void)
102 102
 }
103 103
 ```
104 104
 
105
-As we can see, the `tick_broadcast_init` function allocates different [cpumasks](https://0xax.gitbooks.io/linux-insides/content/Concepts/cpumask.html) with the help of the `zalloc_cpumask_var` function. The `zalloc_cpumask_var` function defined in the [lib/cpumask.c](https://github.com/torvalds/linux/blob/16f73eb02d7e1765ccab3d2018e0bd98eb93d973/lib/cpumask.c) source code file and expands to the call of the following function:
105
+As we can see, the `tick_broadcast_init` function allocates different [cpumasks](https://0xax.gitbooks.io/linux-insides/content/Concepts/linux-cpu-2.html) with the help of the `zalloc_cpumask_var` function. The `zalloc_cpumask_var` function is defined in the [lib/cpumask.c](https://github.com/torvalds/linux/blob/16f73eb02d7e1765ccab3d2018e0bd98eb93d973/lib/cpumask.c) source code file and expands to the call of the following function:
106 106
 
107 107
 ```C
108 108
 bool zalloc_cpumask_var(cpumask_var_t *mask, gfp_t flags)
@@ -407,7 +407,7 @@ for_each_cpu(cpu, tick_nohz_full_mask)
407 407
 	context_tracking_cpu_set(cpu);
408 408
 ```
409 409
 
410
-The `context_tracking_cpu_set` function defined in the [kernel/context_tracking.c](https://github.com/torvalds/linux/blob/16f73eb02d7e1765ccab3d2018e0bd98eb93d973/kernel/context_tracking.c) source code file and main point of this function is to set the `context_tracking.active` [percpu](https://0xax.gitbooks.io/linux-insides/content/Concepts/per-cpu.html) variable to `true`. When the `active` field will be set to `true` for the certain processor, all [context switches](https://en.wikipedia.org/wiki/Context_switch) will be ignored by the Linux kernel context tracking subsystem for this processor.
410
+The `context_tracking_cpu_set` function is defined in the [kernel/context_tracking.c](https://github.com/torvalds/linux/blob/16f73eb02d7e1765ccab3d2018e0bd98eb93d973/kernel/context_tracking.c) source code file and the main point of this function is to set the `context_tracking.active` [percpu](https://0xax.gitbooks.io/linux-insides/content/Concepts/linux-cpu-1.html) variable to `true`. When the `active` field is set to `true` for a certain processor, all [context switches](https://en.wikipedia.org/wiki/Context_switch) will be ignored by the Linux kernel context tracking subsystem for this processor.
411 411
 
412 412
 That's all. This is the end of the `tick_nohz_init` function. After this `NO_HZ` related data structures will be initialized. We didn't see API of the `NO_HZ` mode, but will see it soon.
413 413
 
@@ -433,12 +433,12 @@ Links
433 433
 * [CPU idle](https://en.wikipedia.org/wiki/Idle_%28CPU%29)
434 434
 * [power management](https://en.wikipedia.org/wiki/Power_management)
435 435
 * [NO_HZ documentation](https://github.com/torvalds/linux/blob/16f73eb02d7e1765ccab3d2018e0bd98eb93d973/Documentation/timers/NO_HZ.txt)
436
-* [cpumasks](https://0xax.gitbooks.io/linux-insides/content/Concepts/cpumask.html)
436
+* [cpumasks](https://0xax.gitbooks.io/linux-insides/content/Concepts/linux-cpu-2.html)
437 437
 * [high precision event timer](https://en.wikipedia.org/wiki/High_Precision_Event_Timer)
438 438
 * [irq](https://en.wikipedia.org/wiki/Interrupt_request_%28PC_architecture%29)
439 439
 * [IPI](https://en.wikipedia.org/wiki/Inter-processor_interrupt)
440 440
 * [CPUID](https://en.wikipedia.org/wiki/CPUID)
441 441
 * [APIC](https://en.wikipedia.org/wiki/Advanced_Programmable_Interrupt_Controller)
442
-* [percpu](https://0xax.gitbooks.io/linux-insides/content/Concepts/per-cpu.html)
442
+* [percpu](https://0xax.gitbooks.io/linux-insides/content/Concepts/linux-cpu-1.html)
443 443
 * [context switches](https://en.wikipedia.org/wiki/Context_switch)
444 444
 * [Previous part](https://0xax.gitbooks.io/linux-insides/content/Timers/linux-timers-2.html)
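
The job of `zalloc_cpumask_var` mentioned in this part, getting a zero-filled bitmap with one bit per possible processor, maps to a few lines of ordinary C. The sketch below is a user-space stand-in with a fixed `NR_CPUS`; it ignores the `CONFIG_CPUMASK_OFFSTACK` machinery of the real implementation, and the helper names only mimic the kernel's API.

```C
#include <stdio.h>
#include <stdlib.h>

#define NR_CPUS        64
#define BITS_PER_LONG  (8 * sizeof(unsigned long))
#define CPUMASK_LONGS  ((NR_CPUS + BITS_PER_LONG - 1) / BITS_PER_LONG)

typedef unsigned long *cpumask_var_t;

/* rough analogue of zalloc_cpumask_var: allocate a zeroed CPU bitmap */
static int zalloc_cpumask_var(cpumask_var_t *mask)
{
	*mask = calloc(CPUMASK_LONGS, sizeof(unsigned long));
	return *mask != NULL;
}

static void cpumask_set_cpu(int cpu, cpumask_var_t mask)
{
	mask[cpu / BITS_PER_LONG] |= 1UL << (cpu % BITS_PER_LONG);
}

static int cpumask_test_cpu(int cpu, const unsigned long *mask)
{
	return (mask[cpu / BITS_PER_LONG] >> (cpu % BITS_PER_LONG)) & 1UL;
}

int main(void)
{
	cpumask_var_t tick_broadcast_mask;

	if (!zalloc_cpumask_var(&tick_broadcast_mask))
		return 1;

	cpumask_set_cpu(3, tick_broadcast_mask);
	printf("cpu 3 in mask: %d, cpu 5 in mask: %d\n",
	       cpumask_test_cpu(3, tick_broadcast_mask),
	       cpumask_test_cpu(5, tick_broadcast_mask));

	free(tick_broadcast_mask);
	return 0;
}
```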

+ 4
- 4
Timers/linux-timers-4.md View File

@@ -75,7 +75,7 @@ static void __init init_timer_cpus(void)
75 75
 }
76 76
 ```
77 77
 
78
-If you do not know or do not remember what is it a `possible` cpu, you can read the special [part](https://0xax.gitbooks.io/linux-insides/content/Concepts/cpumask.html) of this book which describes `cpumask` concept in the Linux kernel. In short words, a `possible` processor is a processor which can be plugged in anytime during the life of the system.
78
+If you do not know or do not remember what is it a `possible` cpu, you can read the special [part](https://0xax.gitbooks.io/linux-insides/content/Concepts/linux-cpu-2.html) of this book which describes `cpumask` concept in the Linux kernel. In short words, a `possible` processor is a processor which can be plugged in anytime during the life of the system.
79 79
 
80 80
 The `init_timer_cpu` function does main work for us, namely it executes initialization of the `tvec_base` structure for each processor. This structure defined in the [kernel/time/timer.c](https://github.com/torvalds/linux/blob/16f73eb02d7e1765ccab3d2018e0bd98eb93d973/kernel/time/timer.c) source code file and stores data related to a `dynamic` timer for a certain processor. Let's look on the definition of this structure:
81 81
 
@@ -136,7 +136,7 @@ static void __init init_timer_cpu(int cpu)
136 136
 }
137 137
 ```
138 138
 
139
-The `tvec_bases` represents [per-cpu](https://0xax.gitbooks.io/linux-insides/content/Concepts/per-cpu.html) variable which represents main data structure for a dynamic timer for a given processor. This `per-cpu` variable defined in the same source code file:
139
+The `tvec_bases` is a [per-cpu](https://0xax.gitbooks.io/linux-insides/content/Concepts/linux-cpu-1.html) variable which represents the main data structure for a dynamic timer for a given processor. This `per-cpu` variable is defined in the same source code file:
140 140
 
141 141
 ```C
142 142
 static DEFINE_PER_CPU(struct tvec_base, tvec_bases);
@@ -418,10 +418,10 @@ Links
418 418
 * [IP](https://en.wikipedia.org/wiki/Internet_Protocol)
419 419
 * [netfilter](https://en.wikipedia.org/wiki/Netfilter)
420 420
 * [network](https://en.wikipedia.org/wiki/Computer_network)
421
-* [cpumask](https://0xax.gitbooks.io/linux-insides/content/Concepts/cpumask.html)
421
+* [cpumask](https://0xax.gitbooks.io/linux-insides/content/Concepts/linux-cpu-2.html)
422 422
 * [interrupt](https://en.wikipedia.org/wiki/Interrupt)
423 423
 * [jiffies](https://0xax.gitbooks.io/linux-insides/content/Timers/linux-timers-1.html)
424
-* [per-cpu](https://0xax.gitbooks.io/linux-insides/content/Concepts/per-cpu.html)
424
+* [per-cpu](https://0xax.gitbooks.io/linux-insides/content/Concepts/linux-cpu-1.html)
425 425
 * [spinlock](https://en.wikipedia.org/wiki/Spinlock)
426 426
 * [procfs](https://en.wikipedia.org/wiki/Procfs)
427 427
 * [previous part](https://0xax.gitbooks.io/linux-insides/content/Timers/linux-timers-3.html)

+ 2
- 2
Timers/linux-timers-5.md View File

@@ -130,7 +130,7 @@ The next two fields `shift` and `mult` are familiar to us. They will be used to
130 130
 #define cpumask_of(cpu) (get_cpu_mask(cpu))
131 131
 ```
132 132
 
133
-Where the `get_cpu_mask` returns the cpumask containing just a given `cpu` number. More about `cpumasks` concept you may read in the [CPU masks in the Linux kernel](https://0xax.gitbooks.io/linux-insides/content/Concepts/cpumask.html) part. In the last four lines of code we set callbacks for the clock event device suspend/resume, device shutdown and update of the clock event device state.
133
+Where the `get_cpu_mask` returns the cpumask containing just the given `cpu` number. You can read more about the `cpumasks` concept in the [CPU masks in the Linux kernel](https://0xax.gitbooks.io/linux-insides/content/Concepts/linux-cpu-2.html) part. In the last four lines of code we set callbacks for clock event device suspend/resume, device shutdown and update of the clock event device state.
134 134
 
135 135
 After we finished with the initialization of the `at91sam926x` periodic timer, we can register it by the call of the following functions:
136 136
 
@@ -409,7 +409,7 @@ Links
409 409
 * [local APIC](https://en.wikipedia.org/wiki/Advanced_Programmable_Interrupt_Controller)
410 410
 * [C3 state](https://en.wikipedia.org/wiki/Advanced_Configuration_and_Power_Interface#Device_states) 
411 411
 * [Periodic Interval Timer (PIT) for at91sam926x](http://www.atmel.com/Images/doc6062.pdf)
412
-* [CPU masks in the Linux kernel](https://0xax.gitbooks.io/linux-insides/content/Concepts/cpumask.html)
412
+* [CPU masks in the Linux kernel](https://0xax.gitbooks.io/linux-insides/content/Concepts/linux-cpu-2.html)
413 413
 * [deadlock](https://en.wikipedia.org/wiki/Deadlock)
414 414
 * [CPU hotplug](https://www.kernel.org/doc/Documentation/cpu-hotplug.txt)
415 415
 * [previous part](https://0xax.gitbooks.io/linux-insides/content/Timers/linux-timers-3.html)
