Clock synchronization mechanism rewritten to use systemd-timesyncd instead of ntpdate; at the moment this requires:
- modifying /etc/qubes-rpc/policy/qubes.GetDate to redirect GetDate calls to the designated clockvm
- enabling the clocksync service in the clockvm (qvm-features <clockvm-name> service/clocksync true); see the example after this list
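For example, with a clockvm named sys-net (the VM name is illustrative and the policy line is a sketch, not the shipped default):

    # /etc/qubes-rpc/policy/qubes.GetDate - redirect GetDate calls to the clockvm
    $anyvm  dom0  allow,target=sys-net

    # in dom0: enable the clocksync service in that VM
    qvm-features sys-net service/clocksync true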
Works as specified in the issue listed below, except that:
- each VM syncs with the clockvm after boot and every 6h
- the clockvm syncs time with the Internet using systemd-timesyncd
- dom0 syncs itself with the clockvm every 1h (using cron; see the sketch after this list)
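A hypothetical cron.d entry illustrating the hourly dom0 sync (the file path and command name are assumptions, not taken from this message):

    # /etc/cron.d/qubes-sync-clock (hypothetical) - run at minute 0 of every hour
    0 * * * * root /usr/bin/qvm-sync-clock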
Fixes QubesOS/qubes-issues#1230
There was a second place with exactly the same bug. See
dad208a "qrexec: fix pending requests cleanup code" for details.
Fixes QubesOS/qubes-issues#2699
The qubes-dom0-update script uses the qvm-run tool, which is in the
qubes-core-admin-client package (python3-qubesadmin isn't enough).
Also, this should fix the package installation order during install: the
template needs to be installed after qubes-core-admin-client (for the
qvm-template-postprocess tool). But we can't add this dependency there
directly, as it would not work on Qubes < 4.0.
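An illustrative RPM spec fragment for the direct part of the fix (which package ships qubes-dom0-update is an assumption here):

    # in the spec of the package shipping qubes-dom0-update:
    # pull in the tool the script shells out to
    Requires: qubes-core-admin-client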
- don't call the removed qvm-sync-clock
- use qvm-start --skip-if-running instead of qvm-run ... true to start
  a VM (see the example after this list)
- update qvm-run options
- use dnf directly, not through the compatibility wrapper
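For example, where the script previously started a VM by running a no-op command in it, it now uses the dedicated option (VM name illustrative):

    # old: force VM startup by running 'true' inside it
    qvm-run fedora-25 true
    # new: start the VM only if it is not already running
    qvm-start --skip-if-running fedora-25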
Use UDEV_DISABLE_PERSISTENT_STORAGE_RULES_FLAG instead, which is
available since systemd 231.
- Do not merge to branches where dom0 is older than Fedora 25.
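A minimal sketch of how such a flag can be set from a udev rule (the file name and match are illustrative; only the flag name comes from this message):

    # /etc/udev/rules.d/00-qubes-ignore-devices.rules (hypothetical); must run
    # before systemd's 60-persistent-storage.rules, which honours the flag since v231
    SUBSYSTEM=="block", ENV{UDEV_DISABLE_PERSISTENT_STORAGE_RULES_FLAG}="1"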
/usr/lib/* is a place only for some auxiliary binaries. While in the
majority of cases qrexec-client and qrexec-daemon are called from other
scripts, it is also valid to call them directly.
Prior to this commit, if the Linux kernel's Xen-related components were
built into the kernel (as opposed to being built as kernel modules), then
dracut module initialization would fail during generation of the initial
ramdisk image.
This commit corrects the issue by guarding the module handling with an
if/then block.
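A minimal sketch of such a guard in a dracut module-setup.sh (module names illustrative; this is not the actual patch):

    installkernel() {
        local mod
        for mod in xen-blkfront xen-netfront; do
            # modinfo succeeds only for loadable modules; built-in (or
            # absent) drivers are skipped instead of failing the build
            if modinfo -k "$kernel" "$mod" >/dev/null 2>&1; then
                instmods "$mod"
            fi
        done
    }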
Signed-off-by: M. Vefa Bicakci <m.v.b@runbox.com>
There was a logic error in the pending requests cleanup code, causing
policy_pending_max to be set to 0 even if there were more pending
requests. This effectively limited the maximum number of pending requests
to 1 after some system uptime, because policy_pending_max set to 0 makes
the code look only at the first pending request slot.
While at it, remove the outdated FIXME comment; this bug was actually in
the code implementing that FIXME.
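An illustrative reconstruction of the bug pattern in C (not the actual qrexec source; names follow the description above):

    #define MAX_PENDING 256
    static struct { int used; } policy_pending[MAX_PENDING];
    static int policy_pending_max;

    /* freeing a slot: the highest-used-slot marker must be recomputed by
     * scanning downward, not simply reset to 0 */
    static void free_pending_slot(int i) {
        policy_pending[i].used = 0;
        if (i == policy_pending_max) {
            /* the buggy version effectively reset policy_pending_max to 0
             * here, hiding any still-pending requests in higher slots */
            while (policy_pending_max > 0 &&
                   !policy_pending[policy_pending_max].used)
                policy_pending_max--;
        }
    }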
Fixes QubesOS/qubes-issues#2699
The user can close windows of a VM while it's paused, and cannot
accidentally harm dom0 with errant clicking.
Discussion in https://github.com/QubesOS/qubes-issues/issues/881
Thanks to rustybird for the suggested implementation.
Design documentation says:
'note string dom0 does not match the $anyvm pattern; all other names do'
This behaviour was broken, because 'is not' in Python compares object
identity, not string values. In theory this could result in some service
being erroneously allowed to execute in dom0, but in practice such
services are not installed in dom0 at all, so the only impact was a
misleading error message.
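An illustrative Python snippet of the pitfall (not the actual policy code): 'is not' tests object identity, so a string equal to "dom0" that was built at runtime can still be "is not" the literal:

    source = "".join(["dom", "0"])   # equal to "dom0", but a distinct object
    print(source is not "dom0")      # True  - identity differs (the bug)
    print(source != "dom0")          # False - values are equal (the fix)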
Fixes QubesOS/qubes-issues#2031
Reported by @Jeeppler