* Gravity performance improvements.
Signed-off-by: DL6ER <dl6er@dl6er.de>
* Do not move downloaded lists into migration_backup directory.
Signed-off-by: DL6ER <dl6er@dl6er.de>
* Do not (strictly) sort domains. Random-leaf access is faster than always-last-leaf access (on average).
Signed-off-by: DL6ER <dl6er@dl6er.de>
* Append instead of overwrite gravity_new collection list.
Signed-off-by: DL6ER <dl6er@dl6er.de>
* Rename table gravity_new to gravity_temp to clarify that this is only an intermediate table.
Signed-off-by: DL6ER <dl6er@dl6er.de>
* Add timers for each of the calls to compute intense parts. They are to be removed before this finally hits the release/v5.0 branch.
Signed-off-by: DL6ER <dl6er@dl6er.de>
* Fix legacy list files import. It currently doesn't work when the gravity database has already been updated to use the single domainlist table.
Signed-off-by: DL6ER <dl6er@dl6er.de>
* Simplify database_table_from_file(), remove all calls to this function for gravity list downloads.
Signed-off-by: DL6ER <dl6er@dl6er.de>
* Update gravity.db.sql to version 10 to have newly created databases already reflect the most recent state.
Signed-off-by: DL6ER <dl6er@dl6er.de>
* Create second gravity database and swap them on success. This has a number of advantages such as instantaneous gravity updates (as seen from FTL) and always available gravity blocking. Furthermore, this saves disk space as the old database is removed on completion.
* Add timing output for the database swapping SQLite3 call.
Signed-off-by: DL6ER <dl6er@dl6er.de>
* Explicitly generate index as a separate process.
Signed-off-by: DL6ER <dl6er@dl6er.de>
* Remove time measurements.
Signed-off-by: DL6ER <dl6er@dl6er.de>
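The "create a second gravity database and swap them on success" commit above can be sketched with plain files, independent of SQLite. The paths below are illustrative stand-ins, not the actual Pi-hole locations:

```shell
#!/bin/sh
# Sketch of the build-then-swap idea: build the new database under a
# temporary name, then replace the live one only after the build succeeded.
workdir="$(mktemp -d)"
db="${workdir}/gravity.db"        # stand-in for the live database
tmp="${workdir}/gravity_temp.db"  # stand-in for the freshly built database

echo "old data" > "${db}"
echo "new data" > "${tmp}"

# Consumers keep reading the old file until this point; afterwards they see
# the complete new one. The rm/mv pair mirrors what the script does (a bare
# mv on the same filesystem would replace the target atomically by itself).
rm "${db}"
mv "${tmp}" "${db}"
cat "${db}"   # -> new data
```

This is also why the update appears instantaneous from FTL's point of view: no reader ever observes a half-built database.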
# Copy data from old to new database file and swap them
gravity_swap_databases() {
  local str
  str="Building tree"
  echo -ne "  ${INFO} ${str}..."

  # The index is intentionally not UNIQUE as poor quality adlists may contain domains more than once
  output=$( { sqlite3 "${gravityTEMPfile}" "CREATE INDEX idx_gravity ON gravity (domain, adlist_id);"; } 2>&1 )
  status="$?"

  if [[ "${status}" -ne 0 ]]; then
    echo -e "\\n  ${CROSS} Unable to build gravity tree in ${gravityTEMPfile}\\n  ${output}"
    return 1
  fi
  echo -e "${OVER}  ${TICK} ${str}"

  str="Swapping databases"
  echo -ne "  ${INFO} ${str}..."

  output=$( { sqlite3 "${gravityDBfile}" < "${gravityDBcopy}"; } 2>&1 )
  status="$?"

  if [[ "${status}" -ne 0 ]]; then
    echo -e "\\n  ${CROSS} Unable to copy data from ${gravityDBfile} to ${gravityTEMPfile}\\n  ${output}"
    return 1
  fi
  echo -e "${OVER}  ${TICK} ${str}"

  # Swap databases and remove old database
  rm "${gravityDBfile}"
  mv "${gravityTEMPfile}" "${gravityDBfile}"
}

# Truncate a given table in the gravity database
database_truncate_table() {
  local table
  table="${1}"

  output=$( { printf ".timeout 30000\\nDELETE FROM %s;" "${table}" | sqlite3 "${gravityDBfile}"; } 2>&1 )
  status="$?"

  if [[ "${status}" -ne 0 ]]; then
    echo -e "\\n  ${CROSS} Unable to truncate ${table} database ${gravityDBfile}\\n  ${output}"
    return 1
  fi
  return 0
}

# Update timestamp when the gravity table was last updated successfully
update_gravity_timestamp() {
  output=$( { printf ".timeout 30000\\nINSERT OR REPLACE INTO info (property,value) values ('updated',cast(strftime('%%s', 'now') as int));" | sqlite3 "${gravityTEMPfile}"; } 2>&1 )
  status="$?"

  if [[ "${status}" -ne 0 ]]; then
    echo -e "\\n  ${CROSS} Unable to update gravity timestamp in database ${gravityTEMPfile}\\n  ${output}"
    return 1
  fi
  return 0
}
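The functions above all rely on the same capture idiom: run the pipeline inside a brace group, merge stderr into stdout, and record both the combined output and the exit status. A minimal demonstration of the idiom, using `tr` in place of `sqlite3`:

```shell
#!/bin/sh
# Capture combined stdout+stderr of a pipeline in "output" and its exit
# status in "status", exactly as gravity.sh does around its sqlite3 calls.
output=$( { printf 'hello\n' | tr '[:lower:]' '[:upper:]'; } 2>&1 )
status="$?"

if [ "${status}" -ne 0 ]; then
  # On failure, "output" holds the error text for the user-facing message
  echo "command failed: ${output}"
  exit 1
fi
echo "${output}"  # -> HELLO
```

The brace group matters: without it, the `2>&1` would apply only to the last command of the pipeline rather than to the pipeline as a whole.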
# Import domains from file and store them in the specified database table
database_table_from_file() {
  # Define locals
  local table source backup_path backup_file tmpFile type arg
  table="${1}"
  source="${2}"
  arg="${3}"
  backup_path="${piholeDir}/migration_backup"
  backup_file="${backup_path}/$(basename "${2}")"

  # Truncate table only if not gravity (we add multiple times to this table)
  if [[ "${table}" != "gravity" ]]; then
    database_truncate_table "${table}"
  fi

  tmpFile="$(mktemp -p "/tmp" --suffix=".gravity")"

  local timestamp
  timestamp="$(date --utc +'%s')"

  # Apply format for white-, blacklist, regex, and adlist tables
  # Read file line by line
  local inputfile
  local rowid
  declare -i rowid
  rowid=1

  if [[ "${table}" == "gravity" ]]; then
    # Append ,${arg} to every line and then remove blank lines before import
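The transformation that last comment describes can be sketched as a small pipeline. This is a hypothetical illustration, not the exact commands from gravity.sh; `arg=5` is an invented adlist ID:

```shell
#!/bin/sh
# Strip blank lines from a downloaded list, then append ",${arg}" (the
# adlist ID) to each remaining domain so the rows can be bulk-imported
# into the gravity table as "domain,adlist_id" pairs.
arg=5
result="$(printf 'example.com\n\nads.example.net\n' \
  | grep -v '^[[:space:]]*$' \
  | sed "s/$/,${arg}/")"
echo "${result}"
# -> example.com,5
#    ads.example.net,5
```

Doing this as a stream edit before import keeps the per-line work out of the SQL layer, which fits the performance goal stated in the first commit.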