Task #4235

Story #4230: CN System Metadata needs tidying

Task #4234: Refactor Metacat DAO to use bulk data transfer calls

Enable bulk read of system metadata from all 3 CNs

Added by Chris Jones about 10 years ago. Updated over 9 years ago.

Status: Closed
Priority: Normal
Assignee:
Category: d1_replication_auditor
Target version: -
Start date: 2014-01-24
Due date:
% Done: 100%
Milestone: None
Product Version: *
Story Points:
Sprint:

Description

We need access to the following tables from the Metacat database on each CN:

identifier
systemmetadata
smreplicationpolicy
smreplicationstatus
xml_access

Enable transfer of these tables to the controlling host (likely via pg_dump), and load them into Metacat on the controlling host (likely via pg_restore), using table-name prefixes (such as unm_, orc_, ucsb_) to distinguish the source CN of each table.
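The prefixing step could be done with a sed rewrite over the plain-SQL dump. A minimal sketch, assuming GNU sed and plain-format dumps; the helper name and the exact rewrite rules are illustrative, not the actual implementation:

```shell
# Hypothetical helper: rewrite occurrences of a table name in a plain-SQL
# pg_dump so the table is created and loaded under a CN-prefixed name.
prefix_table() {
    # $1 = original table name, $2 = CN prefix (unm_, orc_, ucsb_)
    sed -e "s/\b$1\b/$2$1/g"
}

# Example: a COPY statement as it appears in a dump of the identifier table
echo "COPY identifier (guid, docid, rev) FROM stdin;" | prefix_table identifier unm_
# prints: COPY unm_identifier (guid, docid, rev) FROM stdin;
```

Note that pg_restore only reads pg_dump's custom/tar/directory formats; plain-SQL dumps like these would be loaded on the controlling host with psql -f instead.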

History

#1 Updated by Chris Jones about 10 years ago

I've written scripts (~postgres/tx_sysmeta/tx_sysmeta_tables.sh) on all 3 production CNs that use pg_dump to dump the database tables. The scripts use sed to replace DDL statements and add table prefixes (unm_, orc_, ucsb_) in the SQL dump files, and then use rsync over ssh with cert-based auth to transfer the bulk data to the controlling machine. At the moment, it is sending the SQL data to cn-sandbox-unm-1 while testing.
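The flow described above can be sketched as follows. This is a hypothetical reconstruction, not the real tx_sysmeta_tables.sh: paths, the staging directory, and the sed rule are assumptions, and the commands are printed rather than executed so the sketch is safe to run anywhere:

```shell
#!/bin/sh
# Dry-run sketch of the per-CN dump-and-transfer script (UNM shown);
# prints the commands it would run instead of executing them.
TABLES="identifier systemmetadata smreplicationpolicy smreplicationstatus xml_access"
PREFIX=unm_                 # orc_ / ucsb_ on the other two CNs
OUT=/tmp/tx_sysmeta         # assumed staging directory

emit_commands() {
    for t in $TABLES; do
        # Dump one table, rewrite table names to the prefixed form
        # (simplified sed; the real script also rewrites DDL statements),
        # and stage the SQL file for transfer.
        printf '%s\n' "pg_dump -t $t metacat | sed s/$t/$PREFIX$t/g > $OUT/$PREFIX$t.sql"
    done
    # rsync the staged dumps over ssh (cert-based auth) to the controlling host
    printf '%s\n' "rsync -az -e ssh $OUT/ postgres@cn-sandbox-unm-1:/tmp/tx_sysmeta/"
}

emit_commands
```

One command per table plus a single rsync keeps the transfer restartable: a failed sync can be rerun without re-dumping the tables.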

#2 Updated by Robert Waltz over 9 years ago

  • Status changed from In Progress to Closed
  • Remaining hours set to 0.0
