DataONE Tasks: Issues | https://redmine.dataone.org/ | 2019-03-12T16:54:08Z
CN REST - Task #8778 (New): Ensure SystemMetadata replica auditing updates are saved and broadcast | https://redmine.dataone.org/issues/8778 | 2019-03-12T16:54:08Z | Chris Jones (cjones@nceas.ucsb.edu)
<p>In <code>MNAuditTask.call()</code>, we process a batch of pids in need of auditing per Member Node. For each <code>pid</code> in the <code>auditPIDs</code> list, we call <code>MN.getChecksum()</code>. Regardless of success or failure, we set the <code>Replica.replicaVerified</code> date in the <code>SystemMetadata</code>. However, the task holds a copy of the system metadata from <code>hzSystemMetadata.get()</code>, and it never subsequently calls <code>hzSystemMetadata.put(pid, sysmeta)</code>. This means that while we are auditing content, we may not be recording the results at all! I need to look at the code more closely to see if we make an API call to <code>CN.updateSystemMetadata()</code> elsewhere, but I would expect the <code>MNAuditTask</code> to do this. Also, if this happens in the task, we also need to broadcast the system metadata change to the authoritative MN and all replica MNs. Lastly, we need to update the <code>serialVersion</code> field so the other CNs can tell which replica list is most recent.</p>
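<p>A minimal sketch of the missing persist step, using hypothetical simplified stand-ins for the real DataONE types (the class and map names below are illustrative, not the actual API):</p>

```java
import java.math.BigInteger;
import java.util.Date;
import java.util.HashMap;
import java.util.Map;

// Hypothetical, simplified stand-in for the real SystemMetadata type.
class AuditedSysMeta {
    BigInteger serialVersion = BigInteger.ZERO;
    Date replicaVerified;
}

public class AuditPersistSketch {
    // Stand-in for the hzSystemMetadata distributed map.
    static final Map<String, AuditedSysMeta> hzSystemMetadata = new HashMap<>();

    static void recordAuditResult(String pid) {
        AuditedSysMeta sysmeta = hzSystemMetadata.get(pid);
        sysmeta.replicaVerified = new Date();
        // Bump serialVersion so the other CNs can tell which replica list is newest.
        sysmeta.serialVersion = sysmeta.serialVersion.add(BigInteger.ONE);
        // Without this put(), the audit result lives only in the task's local copy.
        hzSystemMetadata.put(pid, sysmeta);
        // A real implementation would also broadcast the change to the
        // authoritative MN and all replica MNs.
    }

    public static void main(String[] args) {
        hzSystemMetadata.put("urn:pid:example", new AuditedSysMeta());
        recordAuditResult("urn:pid:example");
        System.out.println(hzSystemMetadata.get("urn:pid:example").serialVersion); // 1
    }
}
```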
CN REST - Task #8777 (New): Configure CN to audit objects greater than 1GB | https://redmine.dataone.org/issues/8777 | 2019-03-12T16:47:42Z | Chris Jones (cjones@nceas.ucsb.edu)
<p>The replication auditor currently skips objects larger than 1GB. There are currently 4 objects greater than 1TB in size and 3,588 objects greater than 1GB, both very small counts compared to the 2,769,111 objects under 1GB in the network. Nonetheless, they should still be audited if feasible. The limiting factor is likely HTTP timeout limits during the call to <code>MN.getChecksum()</code>. For reference, I'm seeing the following general times for calculating MD5 and SHA-1 checksums:</p>
<pre>Size    MD5       SHA-1
-----   --------  --------
1GB     00m02.5s  00m02.6s
10GB    00m25.9s  00m30.0s
100GB   03m28.0s  04m01.8s
1TB     50m14.2s  67m38.6s
</pre>
<p>10GB and 100GB objects seem quite feasible if we set the HTTP client timeout to > 5 minutes, whereas the few > 1TB files may be challenging just due to the timeouts. The other factor is that the <code>AbstractReplicationAuditor</code> sets a default timeout of 60 seconds, and if the task future doesn't return in that time frame, the future gets cancelled. So the HTTP timeout and this task timeout need to be increased and coordinated in order to handle larger object auditing.</p>
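<p>One way to coordinate the two timeouts is to derive both from object size. The helper below is only a sketch under assumed numbers: the helper name, the ~5 MB/s throughput floor, and the 2x safety margin are all assumptions, not values from the CN codebase. Only the 60-second default comes from the ticket above.</p>

```java
public class AuditTimeoutSketch {
    static final long MIN_TIMEOUT_SECONDS = 60;           // current default future timeout
    static final long ASSUMED_THROUGHPUT_BPS = 5_000_000; // assumed conservative ~5 MB/s checksum rate

    // Use the same value for both the HTTP client timeout and the audit-task
    // future timeout, so the future is not cancelled before the HTTP call can finish.
    static long timeoutSecondsFor(long objectSizeBytes) {
        long estimatedSeconds = objectSizeBytes / ASSUMED_THROUGHPUT_BPS;
        return Math.max(MIN_TIMEOUT_SECONDS, 2 * estimatedSeconds); // 2x safety margin
    }

    public static void main(String[] args) {
        System.out.println(timeoutSecondsFor(1_000_000_000L));     // 1GB object
        System.out.println(timeoutSecondsFor(1_000_000_000_000L)); // 1TB object
    }
}
```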
CN REST - Task #8776 (New): Set valid replica status to completed | https://redmine.dataone.org/issues/8776 | 2019-03-12T15:57:36Z | Chris Jones (cjones@nceas.ucsb.edu)
<p>In <code>MNAuditTask.call()</code> we audit replica checksums, but on success, we only set the <code>Replica.replicaVerified</code> field to the current date. We don't set the <code>Replica.replicationStatus</code> field to <code>COMPLETED</code>. This is an issue because the <code>Replica</code> entry in the <code>SystemMetadata</code> may have been set to <code>FAILED</code> or <code>INVALIDATED</code>, but may now be valid, and so would need to be updated.</p>
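<p>A sketch of the proposed success path, with hypothetical simplified types standing in for the real <code>Replica</code>:</p>

```java
import java.util.Date;

public class ReplicaStatusSketch {
    enum ReplicationStatus { COMPLETED, FAILED, INVALIDATED }

    // Hypothetical stand-in for the real Replica entry in SystemMetadata.
    static class Replica {
        ReplicationStatus status = ReplicationStatus.FAILED;
        Date verified;
    }

    static void markAuditSuccess(Replica replica) {
        replica.verified = new Date();                    // what the task does today
        replica.status = ReplicationStatus.COMPLETED;     // the currently missing step
    }

    public static void main(String[] args) {
        Replica replica = new Replica(); // previously marked FAILED
        markAuditSuccess(replica);
        System.out.println(replica.status); // COMPLETED
    }
}
```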
CN REST - Story #8757 (New): Fix getChecksum() in MNAuditTask to use dynamic checksum algorithms | https://redmine.dataone.org/issues/8757 | 2019-01-14T16:46:33Z | Chris Jones (cjones@nceas.ucsb.edu)
<p>The <code>MNAuditTask.call()</code> method is hardcoded to use <code>MD5</code> checksums on line 277. It requests the Member Node to generate an <code>MD5</code> checksum, and then compares that checksum to the one stated in the Coordinating Node's cached <code>SystemMetadata.checksum</code> field for the object. This will obviously fail for objects that were submitted using <code>SHA-1</code> or other algorithms.</p>
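<p>A sketch of the fix using <code>java.security.MessageDigest</code>, with the algorithm taken from the cached system metadata rather than hardcoded (the helper name is illustrative; the real fix would pass the declared algorithm into the <code>MN.getChecksum()</code> request):</p>

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class DynamicChecksumSketch {
    // Compute a lowercase hex digest using whatever algorithm the
    // SystemMetadata declares (e.g. "MD5", "SHA-1").
    static String hexDigest(byte[] data, String algorithm) throws NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance(algorithm);
        StringBuilder sb = new StringBuilder();
        for (byte b : md.digest(data)) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        byte[] object = "test".getBytes(StandardCharsets.UTF_8);
        // The algorithm would come from sysmeta's checksum, not a constant:
        String declaredAlgorithm = "SHA-1";
        System.out.println(hexDigest(object, declaredAlgorithm));
    }
}
```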
CN REST - Story #8756 (New): Ensure replica auditor is effective | https://redmine.dataone.org/issues/8756 | 2019-01-12T20:25:18Z | Chris Jones (cjones@nceas.ucsb.edu)
<p>The replication auditor service is currently configured to audit all objects every 90 days. As documented in <a class="issue tracker-4 status-1 priority-4 priority-default child" title="Story: Replica Auditing service is throwing errors (New)" href="https://redmine.dataone.org/issues/8582">#8582</a>, the auditor is not working correctly. While the errors being thrown that are described in that ticket seem to be limited to <code>pid</code>s with certain characters in them, I think the whole auditor process is not keeping up with our content.</p>
<p>Looking at the number of objects on each member node that haven't been audited in the last 90 days, auditing is well behind (if we consider it working at all):</p>
<pre>SELECT sm.authoritive_member_node, count(smr.guid) AS count
FROM systemmetadata sm INNER JOIN smreplicationstatus smr
ON sm.guid = smr.guid
WHERE
smr.member_node != 'urn:node:CN' AND
sm.date_uploaded < (SELECT CURRENT_DATE - interval '90 days') AND
smr.date_verified < (SELECT CURRENT_DATE - interval '90 days')
GROUP BY sm.authoritive_member_node
ORDER BY count DESC;
authoritive_member_node | count
-------------------------+--------
urn:node:ARCTIC | 771872
urn:node:PANGAEA | 507456
urn:node:LTER | 416339
urn:node:DRYAD | 374439
urn:node:CDL | 242115
urn:node:PISCO | 235791
urn:node:KNB | 86075
urn:node:TDAR | 75639
urn:node:NCEI | 50974
urn:node:USGS_SDC | 40290
urn:node:TERN | 31671
urn:node:ESS_DIVE | 28830
urn:node:NMEPSCOR | 16042
urn:node:GOA | 9266
urn:node:IARC | 7677
urn:node:NRDC | 6673
urn:node:TFRI | 6478
urn:node:PPBIO | 3464
urn:node:ORNLDAAC | 3328
urn:node:FEMC | 2430
urn:node:EDI | 2098
urn:node:GRIIDC | 2065
urn:node:mnTestKNB | 2010
urn:node:SANPARKS | 2008
urn:node:ONEShare | 1874
urn:node:R2R | 1787
urn:node:USGSCSAS | 1151
urn:node:EDACGSTORE | 1075
urn:node:US_MPC | 1032
urn:node:RW | 970
urn:node:KUBI | 516
urn:node:NEON | 487
urn:node:LTER_EUROPE | 343
urn:node:IOE | 279
urn:node:RGD | 273
urn:node:ESA | 272
urn:node:NKN | 218
urn:node:OTS_NDC | 126
urn:node:BCODMO | 115
urn:node:SEAD | 90
urn:node:mnTestNKN | 50
urn:node:EDORA | 28
urn:node:ONEShare.pem | 22
urn:node:CLOEBIRD | 17
urn:node:mnTestBCODMO | 11
urn:node:USANPN | 10
urn:node:mnTestTDAR | 10
urn:node:MyMemberNode | 1
</pre>
<p>The table above represents the number of un-audited objects (in the last 90 days), but I get the feeling that the auditor isn't able to audit any of the content it is charged to audit given 1) the frequency, 2) the number of threads allotted, and 3) the configured batch count (seems way too low). <del>Note that this query excludes replicated content - this is just the original objects</del> (After looking at my query again, I think the join is including all replicas - the total is 2,935,787, which is greater than the total objects in the system (2,751,136), so this query needs to be refined).</p>
<p>We need to evaluate the true effectiveness of the auditor. Some strategies may include: 1) looking to see if we may be in an infinite loop processing a few <code>pid</code>s due to the issues in <a class="issue tracker-4 status-1 priority-4 priority-default child" title="Story: Replica Auditing service is throwing errors (New)" href="https://redmine.dataone.org/issues/8582">#8582</a>, 2) seeing if we can increase the batch size by increasing the total threads allocated in the executor, and 3) deciding if we need to offload the process from the CNs and distribute the workload across a cluster of workers that can do the auditing faster. Needs some thought and discussion.</p>
Member Nodes - Task #8697 (New): ESSDIVE: anonymous download issue | https://redmine.dataone.org/issues/8697 | 2018-09-13T19:40:48Z | Amy Forrester (aforres4@utk.edu)
<p>We do have one small issue that we will want to discuss with you: figuring out whether we can deal with anonymous download of our data. We had promised our users that they would be notified about downloads and who downloaded. We had not considered that replication of the data itself into DataONE would violate that promise. I don't think it changes our join schedule or our enthusiasm for joining; it just means we have an issue to work out soon.</p>
Infrastructure - Bug #8641 (New): Any change to SystemMetadata causes a new replication task to b... | https://redmine.dataone.org/issues/8641 | 2018-07-04T12:28:08Z | Dave Vieglais (dave.vieglais@gmail.com)
<p>The hazelcast event listener implemented by ReplicationEventListener basically does:</p>
<pre>ReplicationEventListener.entryUpdated()
    if isAuthoritativeReplicaValid()
        createReplicationTask()
</pre>
<p><code>isAuthoritativeReplicaValid()</code> checks whether the replication status for the Authoritative MN is <code>complete</code>.</p>
<p>Hence, any update or add event on the systemmetadata map in Hazelcast will trigger addition of a replication task if the authoritative MN has a completed replica, even if replication is not allowed for the object. This causes a significant number of entries to be added to the replication task queue even though those tasks will never do anything as they will be later rejected.</p>
<p>It would be appropriate in <code>entryUpdated()</code> to also check whether replication of the object is allowed. The overhead would be minimal since a copy of the system metadata is already available in <code>entryUpdated()</code>. The same logic should also be added to <code>entryAdded()</code>.</p>
<p><code>ReplicationManager</code> implements <code>boolean isAllowed(SystemMetadata sysmeta)</code> which should do the job.</p>
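<p>The proposed guard might look roughly like this, with hypothetical simplified types (the two boolean fields stand in for the results of <code>isAuthoritativeReplicaValid()</code> and <code>ReplicationManager.isAllowed()</code>):</p>

```java
public class ReplicationEventSketch {
    // Hypothetical stand-in for the system metadata arriving with the map event.
    static class ObjectSysMeta {
        boolean authoritativeReplicaValid;
        boolean replicationAllowed; // what ReplicationManager.isAllowed(sysmeta) would return
    }

    static int tasksQueued = 0;

    // The same guard should also be applied in entryAdded().
    static void entryUpdated(ObjectSysMeta sysmeta) {
        if (sysmeta.authoritativeReplicaValid && sysmeta.replicationAllowed) {
            tasksQueued++; // stands in for createReplicationTask()
        }
    }

    public static void main(String[] args) {
        ObjectSysMeta disallowed = new ObjectSysMeta();
        disallowed.authoritativeReplicaValid = true;
        disallowed.replicationAllowed = false;
        entryUpdated(disallowed); // rejected up front; nothing enters the task queue
        System.out.println(tasksQueued); // 0
    }
}
```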
Infrastructure - Bug #8640 (New): Replication includes "down" nodes as replication targets | https://redmine.dataone.org/issues/8640 | 2018-07-04T11:29:05Z | Dave Vieglais (dave.vieglais@gmail.com)
<p>The call sequence to get the list of target nodes is roughly:</p>
<pre>ReplicationManager.processPid()
ReplicationManager.getPotentialTargetNodes()
ReplicationManager.getNodeReferences()
NodeRegistryServiceImpl.listNodes()
NodeFacade.getApprovedNodeList()
NodeAccess.getApprovedNodeList()
</pre>
<p>It does not appear that the <code>up/down</code> status of a node is examined. The appropriate place to do this seems to be <code>ReplicationManager.getPotentialTargetNodes()</code>, as this is where previous attempts are examined and a node is rejected if too many failures are reported in a time period. Adding a check for node <code>up/down</code> status there is logical.</p>
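<p>A sketch of such a filter, with hypothetical simplified types standing in for the registry's node descriptions:</p>

```java
import java.util.ArrayList;
import java.util.List;

public class TargetNodeFilterSketch {
    // Hypothetical stand-in for an approved node and its current state.
    static class Node {
        final String id;
        final boolean up; // from the node's state in the registry
        Node(String id, boolean up) { this.id = id; this.up = up; }
    }

    // Alongside the existing failure-count check, skip nodes marked down.
    static List<Node> getPotentialTargetNodes(List<Node> approvedNodes) {
        List<Node> targets = new ArrayList<>();
        for (Node node : approvedNodes) {
            if (node.up) {
                targets.add(node);
            }
        }
        return targets;
    }

    public static void main(String[] args) {
        List<Node> approved = new ArrayList<>();
        approved.add(new Node("urn:node:UP1", true));
        approved.add(new Node("urn:node:DOWN1", false));
        System.out.println(getPotentialTargetNodes(approved).size()); // 1
    }
}
```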
Infrastructure - Story #8639 (New): Replication performance is too slow to service demand | https://redmine.dataone.org/issues/8639 | 2018-07-04T11:17:55Z | Dave Vieglais (dave.vieglais@gmail.com)
<p>The replication process is operating too slowly to service demand, resulting in lengthy delays in the completion of replication tasks for new and changed content.</p>
<p>This is particularly apparent in the stage environment where perhaps the number of orphaned objects and deprecated / defunct nodes is interfering with expected behaviors.</p>
<p>Goal of this story is to identify and address the immediate issues. Any significant refactoring of the replication process should be captured under another story / epic.</p>
Infrastructure - Feature #5145 (New): Consider including cert subject(s) in NotAuthorized exceptions | https://redmine.dataone.org/issues/5145 | 2014-04-30T13:21:36Z | Roger Dahl (dahl@unm.edu)
<p>When a call fails with a NotAuthorized exception, including the certificate subject(s) in the description makes it easy for the client to determine whether it was using the right certificate.</p>
Infrastructure - Bug #4211 (New): Potential race condition between archive and replication | https://redmine.dataone.org/issues/4211 | 2013-12-20T18:59:04Z | Skye Roseboom (sroseboo@dataone.unm.edu)
<p>Current implementation of CNodeService.archive() does not appear to increment the serial version of the system metadata.</p>
<p>This implies a race condition: a change made by archive (either via sync or directly by a user) could be overwritten by updates from replication (or other services in the future). Since the serial version is not updated by archive, the replication process may not recognize that there has been an update from archive, and could potentially overwrite the archive flag.</p>
<p>A possible short-term solution for protecting the 'archive' flag would be to implement a 'business rule' enforcing that once <code>SystemMetadata.archive</code> is set to 'true', no subsequent update is allowed to reset it to 'false'. This would prevent the race condition between updates from overwriting the 'archive' flag. Note that this change would also PREVENT an archived document from being 'unarchived' in the future (even when requested by the user?).</p>
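<p>The rule itself reduces to a one-line state-transition check; a sketch with a hypothetical method name:</p>

```java
public class ArchiveRuleSketch {
    // Once archived is true, reject any update that would flip it back to
    // false; every other transition is permitted.
    static boolean isArchiveTransitionAllowed(boolean storedArchived, boolean incomingArchived) {
        return !(storedArchived && !incomingArchived); // forbid true -> false only
    }

    public static void main(String[] args) {
        System.out.println(isArchiveTransitionAllowed(true, false));  // un-archiving rejected
        System.out.println(isArchiveTransitionAllowed(false, true));  // archiving allowed
        System.out.println(isArchiveTransitionAllowed(true, true));   // no change, allowed
    }
}
```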
<p>Further discussion is likely needed to determine the best layer to insert this type of business logic enforcement.</p>
Infrastructure - Bug #3700 (New): MNodeService.replicate will not request new object if existing ... | https://redmine.dataone.org/issues/3700 | 2013-04-01T19:16:40Z | Skye Roseboom (sroseboo@dataone.unm.edu)
<p>Discovered during auditing testing:</p>
<p>MNodeService.replicate first attempts to retrieve a local copy of the object. If one is found, the object is not re-retrieved from the source node. The checksum is then calculated against the local object; if it is invalid, a ServiceFailure is thrown. This effectively prevents the CN from asking an MN to re-replicate a replica that is invalid.</p>
<p>I think the same thing happens if a local id exists but the object itself is not found. This could be the situation where the MN has deleted the object bytes but still has the metadata, preventing the CN from asking this MN to re-replicate the object.</p>
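<p>A sketch of the suggested behavior, using a hypothetical simplified model (string equality stands in for checksum validation, and the map for local storage):</p>

```java
import java.util.HashMap;
import java.util.Map;

public class ReReplicateSketch {
    static final Map<String, String> localObjects = new HashMap<>(); // pid -> content
    static int remoteRequests = 0;

    // Instead of throwing when a local copy exists but is invalid (or its
    // bytes are gone), discard the stale entry and fall through to a fresh
    // replication request.
    static void replicate(String pid, String expectedContent) {
        String local = localObjects.get(pid);
        if (local != null && local.equals(expectedContent)) {
            return; // valid local copy, nothing to do
        }
        // Current behavior would throw ServiceFailure here; the fix is to
        // drop the bad copy and re-request the object from the source MN.
        localObjects.remove(pid);
        remoteRequests++; // stands in for pulling the object from the source MN
        localObjects.put(pid, expectedContent);
    }

    public static void main(String[] args) {
        localObjects.put("urn:pid:x", "corrupt bytes");
        replicate("urn:pid:x", "good bytes");
        System.out.println(remoteRequests);              // 1
        System.out.println(localObjects.get("urn:pid:x"));
    }
}
```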
DataONE API - Bug #3658 (New): Deleting objects breaks obsoletes chain traversal | https://redmine.dataone.org/issues/3658 | 2013-03-13T23:02:19Z | Rob Nahf (rnahf@epscor.unm.edu)
<p>A deleted object can be at the tail, head, or middle of an obsoletes chain. Once it is removed, assuming its sysmeta is also removed, the chain is no longer fully traversable unless the <code>obsoletes</code> and <code>obsoletedBy</code> fields of its direct neighbors in the chain are repointed. Additionally, if the deletion was at the head of the chain, the chain cannot be added to, because the latest remaining item already has its <code>obsoletedBy</code> field populated.</p>
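<p>A sketch of the repointing that would keep the chain traversable, using a hypothetical minimal model of the two sysmeta fields:</p>

```java
import java.util.HashMap;
import java.util.Map;

public class ObsoletesChainSketch {
    // Hypothetical minimal model of the relevant SystemMetadata fields.
    static class SysMeta {
        String obsoletes;    // pid of the older version, or null
        String obsoletedBy;  // pid of the newer version, or null
    }

    static final Map<String, SysMeta> chain = new HashMap<>();

    // On delete, repoint the neighbors so the chain stays traversable
    // and the (possibly new) head can still be obsoleted later.
    static void delete(String pid) {
        SysMeta removed = chain.remove(pid);
        if (removed.obsoletes != null) {
            chain.get(removed.obsoletes).obsoletedBy = removed.obsoletedBy;
        }
        if (removed.obsoletedBy != null) {
            chain.get(removed.obsoletedBy).obsoletes = removed.obsoletes;
        }
    }

    static SysMeta link(String obsoletes, String obsoletedBy) {
        SysMeta s = new SysMeta();
        s.obsoletes = obsoletes;
        s.obsoletedBy = obsoletedBy;
        return s;
    }

    public static void main(String[] args) {
        chain.put("v1", link(null, "v2"));
        chain.put("v2", link("v1", "v3"));
        chain.put("v3", link("v2", null));
        delete("v2"); // remove the middle of the chain
        System.out.println(chain.get("v1").obsoletedBy); // v3
        System.out.println(chain.get("v3").obsoletes);   // v1
    }
}
```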
Infrastructure - Task #3510 (New): Issue certificates from D1TestIntCA for sandbox environment no... | https://redmine.dataone.org/issues/3510 | 2013-01-23T16:27:32Z | Chris Jones (cjones@nceas.ucsb.edu)
<p>Reissue and install certificates in the sandbox environment.</p>
Infrastructure - Task #3509 (New): Issue certificates from D1TestIntCA for dev environment nodes,... | https://redmine.dataone.org/issues/3509 | 2013-01-23T16:26:48Z | Chris Jones (cjones@nceas.ucsb.edu)
<p>Reissue and install certificates in the development environment.</p>