Networking is not allowed on the main thread, and on some devices with
StrictMode on, even the creation of the HTTP handler is enough to trigger an
exception (i.e. even if it's not used from that thread).
This change moves the creation itself to a separate thread, which fixes the issue.
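A minimal sketch of the idea, assuming OkHttp as the HTTP library; the
class and method names here are hypothetical:

```java
import okhttp3.OkHttpClient;

public class HttpClientHolder {
    private static volatile OkHttpClient client;

    // Build the client on a worker thread so that StrictMode never sees
    // any network-related work (not even construction) on the main thread.
    public static void initAsync() {
        new Thread(() -> client = new OkHttpClient.Builder().build()).start();
    }
}
```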
HTTP requests and responses are logged when logging to file. Until now,
only the existence of requests was logged. With this change, the content
and headers of requests and responses are also printed to the log.
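OkHttp's logging interceptor supports exactly this distinction; a hedged
sketch, with the java.util.logging target standing in for the app's
actual file logger:

```java
import java.util.logging.Logger;
import okhttp3.OkHttpClient;
import okhttp3.logging.HttpLoggingInterceptor;

public class LoggingClientFactory {
    public static OkHttpClient build() {
        HttpLoggingInterceptor logging = new HttpLoggingInterceptor(
                message -> Logger.getLogger("okhttp").info(message));
        // BASIC logs only request/response lines; BODY adds headers and content.
        logging.setLevel(HttpLoggingInterceptor.Level.BODY);
        return new OkHttpClient.Builder().addInterceptor(logging).build();
    }
}
```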
Before this change we were printing added/changed contacts and groups
to the adb log. This is not a big deal on its own, but now that we
have ACRA, we share these logs on crash (if the user approves), so it's
better to remove personal information to make sure it's not
accidentally shared.
Android annoyingly kills sync managers that don't generate a significant
amount of network traffic within a given minute. This means that if we
have a lot of entries to prepare for pushing, the system may kill us
before we ever reach the network. We were already sending in chunks
for network performance, but now the whole process works in chunks.
This should fix an issue reported by a user who imported a significant
number of contacts in one go.
This is similar to the issue fixed for fetch in:
f7104bbcef
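A hypothetical sketch of the chunked pipeline (names and chunk size are
illustrative, not the app's actual values): prepare and push one chunk
at a time so network traffic happens regularly throughout the sync:

```java
import java.util.List;

public class ChunkedPusher {
    private static final int CHUNK_SIZE = 30; // illustrative value

    interface Pusher<T> {
        void prepare(List<T> chunk); // e.g. encrypt/serialize
        void push(List<T> chunk);    // network round trip
    }

    // Prepare only one chunk before pushing it, instead of preparing
    // everything up front, so the sync manager is never network-idle
    // long enough for Android to kill it.
    static <T> void pushInChunks(List<T> entries, Pusher<T> pusher) {
        for (int i = 0; i < entries.size(); i += CHUNK_SIZE) {
            List<T> chunk =
                    entries.subList(i, Math.min(i + CHUNK_SIZE, entries.size()));
            pusher.prepare(chunk);
            pusher.push(chunk);
        }
    }
}
```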
We were doing it to make sure we don't get overridden by
server changes. But we already changed this behaviour in
the past, so the call did nothing except slow down the sync.
This was causing issues in some cases when importing from a Google
account, because we were getting weird UIDs.
It was also a problem when importing from other sources that
reported weird UIDs.
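The commit doesn't spell out the mechanics, but one plausible shape of
such a fix is to stop trusting imported UIDs and generate fresh ones
when they look malformed; a purely hypothetical sketch:

```java
import java.util.UUID;

public class UidSanitizer {
    // Replace missing or suspicious-looking UIDs with a freshly
    // generated UUID instead of trusting the imported value.
    static String sanitizeUid(String uid) {
        if (uid == null || !uid.matches("[A-Za-z0-9@._:+-]+")) {
            return UUID.randomUUID().toString();
        }
        return uid;
    }
}
```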
This will make it easier to identify and fix crashes.
Until now we relied on users to figure out on their own that the app had
crashed and to gather debug info manually. This didn't work well,
especially in flows like "import", where they would just assume the
import finished successfully even when there was a crash.
This change makes it so that whenever there's a crash, the email app is
opened with a template email and the stack trace attached.
This should make it easier for us to detect and fix issues.
Important to note: nothing is sent automatically.
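In the app this is handled by ACRA (mentioned earlier); below is a
minimal hand-rolled sketch of the same "compose an email, send nothing
automatically" idea, with a placeholder address:

```java
import android.content.Context;
import android.content.Intent;
import java.io.PrintWriter;
import java.io.StringWriter;

public class EmailCrashReporter implements Thread.UncaughtExceptionHandler {
    private final Context context;

    EmailCrashReporter(Context context) {
        this.context = context;
    }

    @Override
    public void uncaughtException(Thread thread, Throwable ex) {
        StringWriter trace = new StringWriter();
        ex.printStackTrace(new PrintWriter(trace));

        // Opens the user's email app with a pre-filled report.
        // Nothing leaves the device unless the user taps "send".
        Intent intent = new Intent(Intent.ACTION_SEND)
                .setType("message/rfc822")
                .putExtra(Intent.EXTRA_EMAIL, new String[]{"reports@example.com"})
                .putExtra(Intent.EXTRA_SUBJECT, "Crash report")
                .putExtra(Intent.EXTRA_TEXT, trace.toString())
                .addFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
        context.startActivity(intent);
    }
}
```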
Some device manufacturers (I'm looking at you, Xiaomi!) made changes
to Android that break content providers and other background apps. This
affects a few apps, including DAVdroid, from which EteSync is derived.
This change attempts to automatically detect such devices, alert users,
and point them to the relevant FAQ entry.
I've already had to deal with a few bug reports stemming from this
issue, so it's good to have this handled automatically.
This addresses #22
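A hedged sketch of how such detection might look; the manufacturer list
and class name are illustrative, not the app's actual logic:

```java
import android.os.Build;
import java.util.Locale;

public class BrokenVendorCheck {
    // Manufacturers known to ship Android builds that aggressively kill
    // background apps and break content providers. Illustrative list.
    private static final String[] KNOWN_BROKEN = {"xiaomi"};

    static boolean isKnownBrokenManufacturer() {
        String manufacturer = Build.MANUFACTURER.toLowerCase(Locale.ROOT);
        for (String broken : KNOWN_BROKEN) {
            if (manufacturer.contains(broken)) {
                return true; // show a notification linking to the FAQ
            }
        }
        return false;
    }
}
```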
Groups are saved as separate vCards. We removed support for groups to
speed up development and deferred adding them back until there was
demand.
There is demand now, and on top of that, the missing support broke the
sync altogether, not just group syncing.
Many thanks to "359" (this user's preferred alias) for investigating and
reporting this issue.
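For illustration, a sketch with the ez-vcard library of what a
group-as-its-own-vCard looks like: KIND:group plus MEMBER references to
the member contacts' UIDs. (vCard 3.0 pipelines typically use the
X-ADDRESSBOOKSERVER-* extensions instead; the exact form the app emits
isn't specified here.)

```java
import ezvcard.Ezvcard;
import ezvcard.VCard;
import ezvcard.VCardVersion;
import ezvcard.property.Kind;
import ezvcard.property.Member;

public class GroupVCardExample {
    static String buildGroupVCard() {
        VCard group = new VCard();
        group.setKind(Kind.group());  // this vCard is a group, not a person
        group.setFormattedName("Family");
        // Members are referenced by the UIDs of their own vCards.
        group.addMember(new Member("urn:uuid:03a0e51f-d1aa-4385-8a53-e29025acd8af"));
        return Ezvcard.write(group).version(VCardVersion.V4_0).go();
    }
}
```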
Due to a logical issue in the code, new journal entries were added to the
local cache as soon as they were created locally, rather than after they
had been added to the server. Under normal circumstances this doesn't
pose a problem; however, when pushing to the server fails, the local
cache would contain the new entries as if they were saved on the server,
causing the app to think the server had been corrupted (as entries
should never be removed from the server) and halt the sync.
This change makes it so that the entries are saved to the local cache
only after they've been saved on the server.
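A minimal sketch of the corrected ordering, with hypothetical interfaces
standing in for the app's server and cache layers:

```java
import java.io.IOException;
import java.util.List;

public class EntryPusher {
    interface Server { void putEntries(List<String> entries) throws IOException; }
    interface Cache  { void addEntries(List<String> entries); }

    private final Server server;
    private final Cache cache;

    EntryPusher(Server server, Cache cache) {
        this.server = server;
        this.cache = cache;
    }

    // The cache is updated only after the server acknowledged the push.
    // If putEntries() throws, the cache still matches the server's state
    // and the sync can simply be retried.
    void push(List<String> newEntries) throws IOException {
        server.putEntries(newEntries);
        cache.addEntries(newEntries);
    }
}
```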
Note: this was not spotted until now because it relies on an unfortunate
and specific sequence of events. It only happens when creating journal
entries and then, while syncing them, successfully connecting to the
server to fetch the journal list and the journal's content, but then
failing exactly when pushing the new entries.
Many thanks to "359" (this user's preferred alias) for reporting the
issue that resulted in this fix.
I assumed the lifecycles of the fragment and the task were tied because
both are tied to the same instance, but it turns out I was wrong. We need
to cancel tasks explicitly.
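A sketch of the explicit cancellation, using the era's AsyncTask API;
the class and field names are hypothetical:

```java
import android.app.Fragment;
import android.os.AsyncTask;

public class JournalFragment extends Fragment {
    private AsyncTask<?, ?, ?> fetchTask; // hypothetical field

    @Override
    public void onDestroy() {
        super.onDestroy();
        // The task outlives the fragment unless cancelled explicitly;
        // otherwise its callbacks may fire into a destroyed fragment.
        if (fetchTask != null) {
            fetchTask.cancel(true);
        }
    }
}
```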
This patch changes the fetching so that if the last fetch returned fewer
entries than the limit, we don't try to fetch again, because we already
know there are none left.
Before this commit we used to fetch the whole journal entry list in one
go, which caused issues in two cases:
1. On slow internet connections the download may fail.
2. With big journals: Android interrupts sync managers if they don't
perform any significant network traffic for over a minute[1],
and because we would first download and only then process, we would
sometimes hit this threshold.
The chunk size is currently set to 50.
1: https://developer.android.com/reference/android/content/AbstractThreadedSyncAdapter.html
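A hypothetical sketch of the chunked fetch loop, also folding in the
early-exit optimization from the commit above; the names are stand-ins
for the app's actual API:

```java
import java.util.ArrayList;
import java.util.List;

public class ChunkedFetcher {
    private static final int CHUNK_SIZE = 50;

    interface EntryService {
        // Returns at most `limit` entries that come after `lastUid`
        // (null means "from the beginning").
        List<String> fetch(String lastUid, int limit);
    }

    static List<String> fetchAll(EntryService service) {
        List<String> all = new ArrayList<>();
        String lastUid = null;
        while (true) {
            List<String> chunk = service.fetch(lastUid, CHUNK_SIZE);
            all.addAll(chunk);
            // A short chunk means the journal is exhausted; skip the
            // extra round trip that would just return an empty list.
            if (chunk.size() < CHUNK_SIZE) {
                break;
            }
            lastUid = chunk.get(chunk.size() - 1);
        }
        return all;
    }
}
```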
I set it to 2048 following the NIST recommendations[1], which deem 2048
acceptable, but as Dominik Schürmann pointed out, it's probably a
better idea to set it to 3072.
Users who already have a 2048-bit key pair won't be affected, while users
who don't will have a 3072-bit key created for them. Users with different
key lengths can interact with each other without any issues.
1: https://www.keylength.com/en/4/
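A hedged sketch using the standard java.security API (the app's actual
crypto layer may differ):

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.NoSuchAlgorithmException;

public class AsymmetricKeyGen {
    // Generate a 3072-bit RSA key pair for new accounts. Existing
    // 2048-bit pairs keep working; key lengths can be mixed freely.
    static KeyPair generate() throws NoSuchAlgorithmException {
        KeyPairGenerator generator = KeyPairGenerator.getInstance("RSA");
        generator.initialize(3072);
        return generator.generateKeyPair();
    }
}
```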
This change makes clicking on a journal item in the list open it in a
separate activity. At the moment this just makes for a slightly nicer
presentation. In the future we will change it to show the data in a
nicely formatted way instead of a raw dump of the vObject.
There was an issue where, on the first load, the URL was only checked
after a redirect (if there was one). This meant that, for example, the
dashboard would open in the app, because you'd be redirected to the
login page.
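The likely underlying cause is that WebViewClient's
shouldOverrideUrlLoading() fires for redirects and later navigations but
not for the initial loadUrl() call, so the check must also run before
the first load; a hypothetical sketch:

```java
import android.content.Context;
import android.content.Intent;
import android.net.Uri;
import android.webkit.WebView;

public class WebViewOpener {
    // shouldOverrideUrlLoading() is not invoked for the initial
    // loadUrl() call, only for subsequent navigations and redirects,
    // so the same check has to run up front as well.
    static void openUrl(Context context, WebView view, String url) {
        if (isAllowedInApp(url)) {
            view.loadUrl(url);
        } else {
            context.startActivity(new Intent(Intent.ACTION_VIEW, Uri.parse(url)));
        }
    }

    private static boolean isAllowedInApp(String url) {
        return url.contains("/login"); // stand-in for the real check
    }
}
```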
Account migration works in most cases, though while testing I managed to
get it to fail on some rare occasions. This commit adds a check that
verifies that the number of contacts we think we migrated equals the
number of contacts we actually have after the migration.
If the check fails, it presents the user with a notification that opens
the relevant FAQ entry on the EteSync website.
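A hypothetical sketch of the verification step; the Notifier interface
stands in for the app's notification code:

```java
public class MigrationVerifier {
    interface Notifier {
        void notifyWithFaqLink(String message);
    }

    // Compare the count we expected to migrate with what is actually
    // there afterwards; on mismatch, point the user at the FAQ.
    static void verify(int expectedCount, int actualCount, Notifier notifier) {
        if (expectedCount != actualCount) {
            notifier.notifyWithFaqLink("Contact migration may be incomplete: "
                    + "expected " + expectedCount + ", found " + actualCount);
        }
    }
}
```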