qubes-linux-kernel/patches.suse/sched-revert-latency-defaults

From: Suresh Jayaraman <sjayaraman@suse.de>
Subject: Revert sched latency defaults
References: bnc#557307
Patch-mainline: Never

The upstream commit 172e082a91 re-tuned the sched latency defaults to better
suit desktop workloads. This hurt server workloads. So revert the latency
defaults to values similar to SLE11 GM to avoid several performance
regressions.
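
For reference, a kernel built with CONFIG_SCHED_DEBUG exposes these tunables
under /proc/sys/kernel/, so the effective values can be checked (or overridden)
at run time without a rebuild. A minimal userspace sketch, assuming the 2.6.32
sysctl names:

  /*
   * Read back the effective CFS latency tunables.  The boot-time
   * (1 + ilog(ncpus)) scaling mentioned in the comments below is
   * already applied to the values read here.
   */
  #include <stdio.h>

  int main(void)
  {
      static const char *files[] = {
          "/proc/sys/kernel/sched_latency_ns",
          "/proc/sys/kernel/sched_min_granularity_ns",
          "/proc/sys/kernel/sched_wakeup_granularity_ns",
      };
      int i;

      for (i = 0; i < 3; i++) {
          FILE *f = fopen(files[i], "r");
          unsigned long long ns;

          if (!f) {
              perror(files[i]);
              continue;
          }
          if (fscanf(f, "%llu", &ns) == 1)
              printf("%s = %llu ns\n", files[i], ns);
          fclose(f);
      }
      return 0;
  }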

Also, turn FAIR_SLEEPERS off and NORMALIZED_SLEEPER on. The latency tunables
above seem to be most effective with this combination of feature settings.
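
The two feature bits can likewise be flipped at run time for testing, via the
debugfs file sched_features on CONFIG_SCHED_DEBUG kernels. A sketch applying
the same settings as this patch, assuming debugfs is mounted at
/sys/kernel/debug (writing "NO_<FEAT>" clears a bit, "<FEAT>" sets it, one
token per write):

  #include <stdio.h>

  /* Write one feature token per open/write/close cycle, since the
   * kernel parses a single token per write. */
  static int set_feat(const char *tok)
  {
      FILE *f = fopen("/sys/kernel/debug/sched_features", "w");

      if (!f) {
          perror("sched_features");
          return -1;
      }
      fprintf(f, "%s", tok);
      return fclose(f);
  }

  int main(void)
  {
      /* FAIR_SLEEPERS off, NORMALIZED_SLEEPER on, as in the hunks below. */
      return set_feat("NO_FAIR_SLEEPERS") || set_feat("NORMALIZED_SLEEPER");
  }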

The sysbench, dbench and SPECjbb results showed much better performance with
these changes.

The interbench results didn't show any user-visible impact, and I expect
desktop workloads won't be affected much. I am not aware of any reports of
this tuning adversely affecting any specific workload.

Signed-off-by: Suresh Jayaraman <sjayaraman@suse.de>
---
 kernel/sched_fair.c     |   12 ++++++------
 kernel/sched_features.h |    4 ++--
 2 files changed, 8 insertions(+), 8 deletions(-)

Index: linux-2.6.32-master/kernel/sched_fair.c
===================================================================
--- linux-2.6.32-master.orig/kernel/sched_fair.c
+++ linux-2.6.32-master/kernel/sched_fair.c
@@ -24,7 +24,7 @@
 
 /*
  * Targeted preemption latency for CPU-bound tasks:
- * (default: 5ms * (1 + ilog(ncpus)), units: nanoseconds)
+ * (default: 20ms * (1 + ilog(ncpus)), units: nanoseconds)
  *
  * NOTE: this latency value is not the same as the concept of
  * 'timeslice length' - timeslices in CFS are of variable length
@@ -34,13 +34,13 @@
  * (to see the precise effective timeslice length of your workload,
  * run vmstat and monitor the context-switches (cs) field)
  */
-unsigned int sysctl_sched_latency = 5000000ULL;
+unsigned int sysctl_sched_latency = 20000000ULL;
 
 /*
  * Minimal preemption granularity for CPU-bound tasks:
- * (default: 1 msec * (1 + ilog(ncpus)), units: nanoseconds)
+ * (default: 4 msec * (1 + ilog(ncpus)), units: nanoseconds)
  */
-unsigned int sysctl_sched_min_granularity = 1000000ULL;
+unsigned int sysctl_sched_min_granularity = 4000000ULL;
 
 /*
  * is kept at sysctl_sched_latency / sysctl_sched_min_granularity
@@ -63,13 +63,13 @@ unsigned int __read_mostly sysctl_sched_
 
 /*
  * SCHED_OTHER wake-up granularity.
- * (default: 1 msec * (1 + ilog(ncpus)), units: nanoseconds)
+ * (default: 5 msec * (1 + ilog(ncpus)), units: nanoseconds)
  *
  * This option delays the preemption effects of decoupled workloads
  * and reduces their over-scheduling. Synchronous workloads will still
  * have immediate wakeup/sleep latencies.
  */
-unsigned int sysctl_sched_wakeup_granularity = 1000000UL;
+unsigned int sysctl_sched_wakeup_granularity = 5000000UL;
 
 const_debug unsigned int sysctl_sched_migration_cost = 500000UL;
 
Index: linux-2.6.32-master/kernel/sched_features.h
===================================================================
--- linux-2.6.32-master.orig/kernel/sched_features.h
+++ linux-2.6.32-master/kernel/sched_features.h
@@ -3,7 +3,7 @@
  * considers the task to be running during that period. This gives it
  * a service deficit on wakeup, allowing it to run sooner.
  */
-SCHED_FEAT(FAIR_SLEEPERS, 1)
+SCHED_FEAT(FAIR_SLEEPERS, 0)
 
 /*
  * Only give sleepers 50% of their service deficit. This allows
@@ -17,7 +17,7 @@ SCHED_FEAT(GENTLE_FAIR_SLEEPERS, 1)
  * longer period, and lighter task an effective shorter period they
  * are considered running.
  */
-SCHED_FEAT(NORMALIZED_SLEEPER, 0)
+SCHED_FEAT(NORMALIZED_SLEEPER, 1)
 
 /*
  * Place new tasks ahead so that they do not starve already running