[v2,3/3] importer: Purge any redundant entries

Message ID 8a59a82a-ba19-d1d2-0a1d-87fc72195f16@ipfire.org
State Accepted
Commit c2cc55d5a6875c3838f060032eaed89dcfb92ef6
Series [v2,1/3] location-importer.in: avoid violating NOT NULL constraints during JOIN

Commit Message

Peter Müller Sept. 20, 2020, 7:21 p.m. UTC
When importing inetnums, we might import various small networks
that are not relevant for us as long as they do not have a country
code different from that of their parent network.

Therefore, we delete all of these entries to keep the database
smaller without losing any information. The second version of this
patch introduces a SQL statement that is parallelised across all
available CPUs, whereas the DELETE statement of the first version
took ages to complete.
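
The cleanup is built around PostgreSQL's inet containment operator:
"a << b" is true when network a lies strictly inside network b. The
following self-contained sketch mirrors the candidate selection from
the patch below; the table layout is taken from the patch, while the
sample rows are invented purely for illustration:

    -- Sample data for illustration only; the real table is populated
    -- by the importer.
    CREATE TEMPORARY TABLE networks (
        network inet NOT NULL,
        country text
    );

    INSERT INTO networks (network, country) VALUES
        ('10.0.0.0/8',  'DE'),
        ('10.1.0.0/16', 'DE'),  -- same country as the covering /8
        ('10.2.0.0/16', 'AT');  -- different country

    -- A row is selected if another row with the same country code lies
    -- strictly inside it; on this sample data the query returns
    -- 10.0.0.0/8, because 10.1.0.0/16 is contained in it and shares
    -- its country code.
    SELECT candidates.network FROM networks candidates
    WHERE EXISTS (
        SELECT FROM networks
        WHERE networks.network << candidates.network
        AND networks.country = candidates.country
    );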

However, cleaning up this data still takes about 26 hours (!) on
our location02 testing machine, which makes daily updates of the
location database impossible as far as we currently know.

real    1521m30.620s
user    38m45.521s
sys     9m6.027s

Special thanks go to Michael for spending numerous hours
on this, setting up a testing environment, doing PostgreSQL magic
and providing helpful advice while debugging.

Partially fixes: #12458

Cc: Michael Tremer <michael.tremer@ipfire.org>
Signed-off-by: Peter Müller <peter.mueller@ipfire.org>
---
 src/python/location-importer.in | 22 +++++++++++++++++++++-
 1 file changed, 21 insertions(+), 1 deletion(-)
  

Comments

Michael Tremer Sept. 24, 2020, 10:22 a.m. UTC | #1
Hello,

I had to revert this patch and the previous one.

For some reason, it is impossible to generate a new database, even though I made more processor cores and memory available to the system.

Please review this and try to find another approach that has less of an impact on the database system and that enables us to run a database update within about an hour.

Best,
-Michael

> On 20 Sep 2020, at 20:21, Peter Müller <peter.mueller@ipfire.org> wrote:
> 
> When importing inetnums, we might import various small networks
> that are not relevant for us as long as they do not have a country
> code different from that of their parent network.
> 
> Therefore, we delete all of these entries to keep the database
> smaller without losing any information. The second version of this
> patch introduces a SQL statement that is parallelised across all
> available CPUs, whereas the DELETE statement of the first version
> took ages to complete.
> 
> However, cleaning up this data still takes about 26 hours (!) on
> our location02 testing machine, which makes daily updates of the
> location database impossible as far as we currently know.
> 
> real    1521m30.620s
> user    38m45.521s
> sys     9m6.027s
> 
> Special thanks go to Michael for spending numerous hours
> on this, setting up a testing environment, doing PostgreSQL magic
> and providing helpful advice while debugging.
> 
> Partially fixes: #12458
> 
> Cc: Michael Tremer <michael.tremer@ipfire.org>
> Signed-off-by: Peter Müller <peter.mueller@ipfire.org>
> ---
> src/python/location-importer.in | 22 +++++++++++++++++++++-
> 1 file changed, 21 insertions(+), 1 deletion(-)
> 
> diff --git a/src/python/location-importer.in b/src/python/location-importer.in
> index e3a07a0..1467923 100644
> --- a/src/python/location-importer.in
> +++ b/src/python/location-importer.in
> @@ -374,7 +374,27 @@ class CLI(object):
> 				INSERT INTO autnums(number, name)
> 					SELECT _autnums.number, _organizations.name FROM _autnums
> 						JOIN _organizations ON _autnums.organization = _organizations.handle
> -				ON CONFLICT (number) DO UPDATE SET name = excluded.name;
> +				ON CONFLICT (number) DO UPDATE SET name = excluded.name
> +			""")
> +
> +			self.db.execute("""
> +				--- Purge any redundant entries
> +				CREATE TEMPORARY TABLE _garbage ON COMMIT DROP
> +				AS
> +				SELECT network FROM networks candidates
> +				WHERE EXISTS (
> +					SELECT FROM networks
> +					WHERE
> +						networks.network << candidates.network
> +					AND
> +						networks.country = candidates.country
> +				);
> +
> +				CREATE UNIQUE INDEX _garbage_search ON _garbage USING BTREE(network);
> +
> +				DELETE FROM networks WHERE EXISTS (
> +					SELECT FROM _garbage WHERE networks.network = _garbage.network
> +				);
> 			""")
> 
> 		# Download all extended sources
> -- 
> 2.26.2
  

Patch

diff --git a/src/python/location-importer.in b/src/python/location-importer.in
index e3a07a0..1467923 100644
--- a/src/python/location-importer.in
+++ b/src/python/location-importer.in
@@ -374,7 +374,27 @@  class CLI(object):
 				INSERT INTO autnums(number, name)
 					SELECT _autnums.number, _organizations.name FROM _autnums
 						JOIN _organizations ON _autnums.organization = _organizations.handle
-				ON CONFLICT (number) DO UPDATE SET name = excluded.name;
+				ON CONFLICT (number) DO UPDATE SET name = excluded.name
+			""")
+
+			self.db.execute("""
+				--- Purge any redundant entries
+				CREATE TEMPORARY TABLE _garbage ON COMMIT DROP
+				AS
+				SELECT network FROM networks candidates
+				WHERE EXISTS (
+					SELECT FROM networks
+					WHERE
+						networks.network << candidates.network
+					AND
+						networks.country = candidates.country
+				);
+
+				CREATE UNIQUE INDEX _garbage_search ON _garbage USING BTREE(network);
+
+				DELETE FROM networks WHERE EXISTS (
+					SELECT FROM _garbage WHERE networks.network = _garbage.network
+				);
 			""")
 
 		# Download all extended sources
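
Since the patch was reverted for performance reasons, it may help to
gauge the cost of the candidate selection on its own before attempting
another full run. The following is only a diagnostic sketch, not part
of the patch; it assumes direct psql access to the importer's database
and the networks(network, country) table used above, and it neither
materialises the temporary table nor deletes anything:

    -- Ask the planner for its estimated plan of the candidate
    -- selection alone (plain EXPLAIN does not execute the query).
    EXPLAIN
    SELECT network FROM networks candidates
    WHERE EXISTS (
        SELECT FROM networks
        WHERE networks.network << candidates.network
        AND networks.country = candidates.country
    );

    -- Count how many rows the cleanup would remove, without deleting
    -- anything.
    SELECT COUNT(*) FROM networks candidates
    WHERE EXISTS (
        SELECT FROM networks
        WHERE networks.network << candidates.network
        AND networks.country = candidates.country
    );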