DataONE Tasks: Issues
https://redmine.dataone.org/ 2020-08-06T00:06:07Z
Redmine CN REST - Bug #8867 (New): CNCore.listChecksumAlgorithms() returns incorrect list
https://redmine.dataone.org/issues/8867 2020-08-06T00:06:07Z Matthew Jones (jones@nceas.ucsb.edu)
<p>The definition of the ChecksumAlgorithm type in SystemMetadata allows any checksum algorithm listed in the Library of Congress vocabulary, but the current CNCore.listChecksumAlgorithms() implementation returns only two: MD5 and SHA-1. This needs to be corrected to include the full list of supported algorithms (see <a href="http://id.loc.gov/vocabulary/preservation/cryptographicHashFunctions.html">http://id.loc.gov/vocabulary/preservation/cryptographicHashFunctions.html</a>).</p>
<p>The implementation of this is in a property file, which needs to be updated with the correct list. The file (d1_cn_rest/src/test/resources/org/dataone/configuration/node.properties) currently contains:</p>
<p><code>cn.checksumAlgorithmList=SHA-1;MD5</code></p>
<p>But it should also contain the other valid algorithms from the LoC vocabulary.</p>
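<p>The property is a simple semicolon-delimited list, so an updated value can be sanity-checked with a few lines of code. A minimal sketch (the property name comes from the issue; the expanded value shown is an illustrative subset of the LoC vocabulary, not the authoritative list):</p>

```python
def parse_algorithm_list(prop_value):
    """Parse the semicolon-delimited cn.checksumAlgorithmList value."""
    return [a.strip() for a in prop_value.split(";") if a.strip()]

# Current value, per the issue:
current = parse_algorithm_list("SHA-1;MD5")

# Illustrative expanded value (subset of the LoC vocabulary):
expanded = parse_algorithm_list("SHA-1;MD5;SHA-256;SHA-384;SHA-512")
```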
Infrastructure - Story #8823 (New): Recent Apache and OpenSSL combinations break connectivity on ...
https://redmine.dataone.org/issues/8823 2019-06-19T02:03:44Z Dave Vieglais (dave.vieglais@gmail.com)
<p>The latest Ubuntu 18.04 release of Apache is 2.4.29 and OpenSSL is 1.1.1.</p>
<p>This combination creates a significant delay in TLS renegotiation that results from the Apache config option on the CNs:</p>
<pre>SSLVerifyClient none
<Location "/cn">
<If " ! ( %{HTTP_USER_AGENT} =~ /(windows|chrome|mozilla|safari|webkit)/i )">
SSLVerifyClient optional
</If>
</Location>
</pre>
<p>This configuration is intended to disable client certificate authentication for web browsers while allowing it for other clients. The approach worked fine with older Apache / OpenSSL versions, but the new combination introduces a several-second wait when the server discovers the client is not a web browser and tells it to reconnect with the option of including a client certificate.</p>
<p>The latest released version of Apache is 2.4.39, which is available through a PPA maintained by a Debian developer. It has been installed so far on dev-2, sandbox, stage, and stage-2 with the following process:</p>
<pre>sudo add-apt-repository ppa:ondrej/apache2
sudo apt update
sudo apt dist-upgrade
sudo systemctl restart apache2
</pre>
<p>This installs Apache 2.4.39 and OpenSSL 1.1.1c, which appears to resolve the bug in the 2.4.29 / 1.1.1 combination.</p>
<p>One issue with the update is that Apache now offers TLSv1.3 by default. That is welcome, except that it appears to cause problems: at least some Python clients fail to connect and receive a 403 error. For example:</p>
<pre>$ python3
>>> import requests
>>> r = requests.get("https://cn-sandbox-ucsb-1.test.dataone.org/cn/v2/monitor/ping")
>>> r.status_code
403
</pre>
<p>That TLSv1.3 is the problem was verified on cn-stage-unm-2 by configuring Apache with:</p>
<pre> SSLProtocol all -TLSv1.3 -SSLv2 -SSLv3
</pre>
<p>to disable TLSv1.3. After this change the Python client was able to connect as expected.</p>
<p>A client-side workaround has not yet been researched.</p>
<p>It is not clear if this issue applies to other clients such as R and Java, so until we learn one way or the other, TLSv1.3 will be disabled on the CNs.</p>
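<p>For clients that cannot wait for a server-side change, one possible mitigation (a sketch only; not tested against the CNs) is to cap the negotiated protocol at TLS 1.2 on the client side. With Python 3.7+ this takes only a standard-library SSL context:</p>

```python
import ssl

# Build a default client context, then cap the maximum protocol version at
# TLS 1.2 so no TLSv1.3 session is negotiated (requires Python 3.7+).
ctx = ssl.create_default_context()
ctx.maximum_version = ssl.TLSVersion.TLSv1_2

# The context can then be passed to urllib / http.client, e.g.:
# urllib.request.urlopen(
#     "https://cn-sandbox-ucsb-1.test.dataone.org/cn/v2/monitor/ping",
#     context=ctx)
```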
<p><del>This issue will likely apply to Member Nodes as well once TLSv1.3 is generally available or if MNs choose to install Apache 2.4.39.</del> CORRECTION: this issue only applies when attempting to renegotiate TLS after headers have been transferred, so it will not typically apply to an MN.</p>
Infrastructure - Bug #8722 (New): Object in search index but systemmetadata is not available.
https://redmine.dataone.org/issues/8722 2018-10-01T18:38:25Z Dave Vieglais (dave.vieglais@gmail.com)
<p>The object: <code>urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2</code> is a CSV file. </p>
<p>Both resolve and getSystemMetadata return a 404 for that object. It is, however, available in the search index:</p>
<p><a href="https://cn.dataone.org/cn/v2/query/solr/?start=0&rows=10&fl=*&q=id%3A%22urn%5C%3Auuid%5C%3A4923cca4%5C-c155%5C-4edc%5C-b901%5C-f6e3b4f2e7b2%22">https://cn.dataone.org/cn/v2/query/solr/?start=0&rows=10&fl=*&q=id%3A%22urn%5C%3Auuid%5C%3A4923cca4%5C-c155%5C-4edc%5C-b901%5C-f6e3b4f2e7b2%22</a></p>
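<p>The query above backslash-escapes the ':' and '-' characters in the PID, which are special to the Solr query parser. A small helper to build such a query (a sketch; it reproduces the escaping used in the URL above):</p>

```python
import re

def solr_id_query(pid):
    """Build a Solr id query, backslash-escaping ':' and '-' in the PID."""
    escaped = re.sub(r"([:\-])", r"\\\1", pid)
    return 'id:"%s"' % escaped

query = solr_id_query("urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2")
```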
<p>cn-synchronization.log* reports:</p>
<pre>cn-synchronization.log.32:[ INFO] 2018-09-21 15:37:32,418 [ProcessDaemonTask2] (SyncObjectTask:executeTransferObjectTask:293) Task-urn:node:KNB-urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 received
cn-synchronization.log.32:[ INFO] 2018-09-21 15:37:32,418 [ProcessDaemonTask2] (SyncObjectTask:executeTransferObjectTask:310) Task-urn:node:KNB-urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 submitted for execution
cn-synchronization.log.32:[ INFO] 2018-09-21 15:37:32,418 [SynchronizeTask322] (V2TransferObjectTask:call:202) Task-urn:node:KNB-urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 - Locking task, attempt 1
cn-synchronization.log.32:[ INFO] 2018-09-21 15:37:32,469 [SynchronizeTask322] (V2TransferObjectTask:call:207) Task-urn:node:KNB-urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 - Processing SyncObject
cn-synchronization.log.32:[ INFO] 2018-09-21 15:37:32,575 [SynchronizeTask322] (V2TransferObjectTask:retrieveMNSystemMetadata:317) Task-urn:node:KNB-urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 - Retrieved SystemMetadata Identifier:urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 from node urn:node:KNB for ObjectInfo Identifier urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2
cn-synchronization.log.32:[ INFO] 2018-09-21 15:37:36,547 [SynchronizeTask322] (V2TransferObjectTask:createObject:730) Task-urn:node:KNB-urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 - Start CreateObject
cn-synchronization.log.32:[ INFO] 2018-09-21 15:37:40,388 [SynchronizeTask322] (V2TransferObjectTask:call:234) Task-urn:node:KNB-urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 - Unlocked Pid.
cn-synchronization.log.32:[ERROR] 2018-09-21 15:37:40,388 [SynchronizeTask322] (V2TransferObjectTask:call:269) Task-urn:node:KNB-urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 - UnrecoverableException: urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 cn.createObject failed: The identifier is already in use by an existing object. - InvalidRequest - The identifier is already in use by an existing object.
cn-synchronization.log.32:org.dataone.cn.batch.exceptions.UnrecoverableException: urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 cn.createObject failed: The identifier is already in use by an existing object.
cn-synchronization.log.32:[ WARN] 2018-09-21 15:37:40,389 [SynchronizeTask322] (SyncFailedTask:submitSynchronizationFailed:116) Task-urn:node:KNB-urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 - SynchronizationFailed: detail code: 6001 id:urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 nodeId:urn:node:CNUCSB1 description:Synchronization task of [PID::] urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 [::PID] failed. Cause: InvalidRequest: The identifier is already in use by an existing object.
cn-synchronization.log.32:[ INFO] 2018-09-21 15:37:40,460 [SynchronizeTask322] (V2TransferObjectTask:call:294) Task-urn:node:KNB-urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 - exiting with callState: FAILED
cn-synchronization.log.32:[ INFO] 2018-09-21 15:37:40,461 [ProcessDaemonTask2] (SyncObjectTask:reapFutures:372) Task-urn:node:KNB-urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 SyncObjectState: FAILED
cn-synchronization.log.33:[ INFO] 2018-09-21 13:37:17,203 [ProcessDaemonTask2] (SyncObjectTask:executeTransferObjectTask:293) Task-urn:node:KNB-urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 received
cn-synchronization.log.33:[ INFO] 2018-09-21 13:37:17,204 [ProcessDaemonTask2] (SyncObjectTask:executeTransferObjectTask:310) Task-urn:node:KNB-urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 submitted for execution
cn-synchronization.log.33:[ INFO] 2018-09-21 13:37:17,205 [SynchronizeTask37] (V2TransferObjectTask:call:202) Task-urn:node:KNB-urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 - Locking task, attempt 1
cn-synchronization.log.33:[ INFO] 2018-09-21 13:37:17,255 [SynchronizeTask37] (V2TransferObjectTask:call:207) Task-urn:node:KNB-urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 - Processing SyncObject
cn-synchronization.log.33:[ INFO] 2018-09-21 13:37:17,462 [SynchronizeTask37] (V2TransferObjectTask:retrieveMNSystemMetadata:317) Task-urn:node:KNB-urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 - Retrieved SystemMetadata Identifier:urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 from node urn:node:KNB for ObjectInfo Identifier urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2
cn-synchronization.log.33:[ INFO] 2018-09-21 13:37:21,034 [SynchronizeTask37] (V2TransferObjectTask:createObject:730) Task-urn:node:KNB-urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 - Start CreateObject
cn-synchronization.log.33:[ INFO] 2018-09-21 13:37:25,672 [SynchronizeTask37] (V2TransferObjectTask:call:234) Task-urn:node:KNB-urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 - Unlocked Pid.
cn-synchronization.log.33:[ERROR] 2018-09-21 13:37:25,673 [SynchronizeTask37] (V2TransferObjectTask:call:269) Task-urn:node:KNB-urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 - UnrecoverableException: urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 cn.createObject failed: The identifier is already in use by an existing object. - InvalidRequest - The identifier is already in use by an existing object.
cn-synchronization.log.33:org.dataone.cn.batch.exceptions.UnrecoverableException: urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 cn.createObject failed: The identifier is already in use by an existing object.
cn-synchronization.log.33:[ WARN] 2018-09-21 13:37:25,674 [SynchronizeTask37] (SyncFailedTask:submitSynchronizationFailed:116) Task-urn:node:KNB-urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 - SynchronizationFailed: detail code: 6001 id:urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 nodeId:urn:node:CNUCSB1 description:Synchronization task of [PID::] urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 [::PID] failed. Cause: InvalidRequest: The identifier is already in use by an existing object.
cn-synchronization.log.33:[ INFO] 2018-09-21 13:37:25,740 [SynchronizeTask37] (V2TransferObjectTask:call:294) Task-urn:node:KNB-urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 - exiting with callState: FAILED
cn-synchronization.log.33:[ INFO] 2018-09-21 13:37:25,740 [ProcessDaemonTask2] (SyncObjectTask:reapFutures:372) Task-urn:node:KNB-urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 SyncObjectState: FAILED
</pre>
<p>We need to determine how an object could be added to the search index and apparently replicated, yet not exist in the system metadata store.</p>
Infrastructure - Bug #8696 (New): double indexing of a resource map and another not processed bec...
https://redmine.dataone.org/issues/8696 2018-09-12T00:18:51Z Rob Nahf (rnahf@epscor.unm.edu)
<p>In production, the ORE 'a1a0e96a-3cde-4f3c-829c-29650b09f22b' was not processed because one of its members was also referenced by the ORE it obsoleted, 'dc39515e-440b-4673-9f63-962c7374bf48'. The task failed without being requeued. The log output is below.</p>
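<p>From the log messages, the readiness check appears to poll roughly ten times at 0.5-second intervals and then give up permanently rather than requeueing the task. A reconstruction of that behavior (names and structure are inferred from the log output, not taken from the actual IndexTaskProcessor source):</p>

```python
import time

def check_readiness(is_member_locked, pid, ref_id, max_tries=10, delay=0.5):
    """Wait up to max_tries * delay seconds for another thread to release a
    shared member, then raise; the caller does not requeue the task."""
    for _ in range(max_tries):
        if not is_member_locked(ref_id):
            return True
        time.sleep(delay)
    raise RuntimeError(
        "waited for another thread indexing a resource map that references "
        "%s; giving up on indexing %s" % (ref_id, pid))
```

<p>A fix could requeue the task (or retry with backoff) instead of raising.</p>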
<pre>rnahf@cn-orc-1:/var/log/dataone/index$ grep a1a0e96a-3cde-4f3c-829c-29650b09f22b cn-index-processor-daemon.log.*
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:27,384 (IndexTaskProcessor:saveTask:865) IndexTaskProcess.saveTask save the index task a1a0e96a-3cde-4f3c-829c-29650b09f22b
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:27,384 (IndexTaskProcessor:getNextIndexTask:610) Start of indexing pid: a1a0e96a-3cde-4f3c-829c-29650b09f22b
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:34,832 (IndexTaskProcessor:getNextIndexTask:664) the original index task - IndexTask [id=18085996, pid=a1a0e96a-3cde-4f3c-829c-29650b09f22b, formatid=http://www.openarchives.org/ore/terms, objectPath=/var/metacat/data/autogen.2018091015425874434.1, dateSysMetaModified=1536087134490, deleted=false, taskModifiedDate=1536619467383, priority=3, status=IN PROCESS]
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:34,832 (IndexTaskProcessor:getNextIndexTask:671) the new index task - IndexTask [id=18085996, pid=a1a0e96a-3cde-4f3c-829c-29650b09f22b, formatid=http://www.openarchives.org/ore/terms, objectPath=/var/metacat/data/autogen.2018091015425874434.1, dateSysMetaModified=1536087134490, deleted=false, taskModifiedDate=1536619467383, priority=3, status=IN PROCESS]
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:34,901 (IndexTaskProcessor:checkReadinessProcessResourceMap:369) ###################Another resource map is process the referenced id ee73cf7f-1005-4b89-bab9-3a7fa01d27c6 as well. So the thread to process id a1a0e96a-3cde-4f3c-829c-29650b09f22b has to wait 0.5 seconds.
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:35,402 (IndexTaskProcessor:checkReadinessProcessResourceMap:369) ###################Another resource map is process the referenced id ee73cf7f-1005-4b89-bab9-3a7fa01d27c6 as well. So the thread to process id a1a0e96a-3cde-4f3c-829c-29650b09f22b has to wait 0.5 seconds.
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:35,902 (IndexTaskProcessor:checkReadinessProcessResourceMap:369) ###################Another resource map is process the referenced id ee73cf7f-1005-4b89-bab9-3a7fa01d27c6 as well. So the thread to process id a1a0e96a-3cde-4f3c-829c-29650b09f22b has to wait 0.5 seconds.
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:36,402 (IndexTaskProcessor:checkReadinessProcessResourceMap:369) ###################Another resource map is process the referenced id ee73cf7f-1005-4b89-bab9-3a7fa01d27c6 as well. So the thread to process id a1a0e96a-3cde-4f3c-829c-29650b09f22b has to wait 0.5 seconds.
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:36,903 (IndexTaskProcessor:checkReadinessProcessResourceMap:369) ###################Another resource map is process the referenced id ee73cf7f-1005-4b89-bab9-3a7fa01d27c6 as well. So the thread to process id a1a0e96a-3cde-4f3c-829c-29650b09f22b has to wait 0.5 seconds.
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:37,403 (IndexTaskProcessor:checkReadinessProcessResourceMap:369) ###################Another resource map is process the referenced id ee73cf7f-1005-4b89-bab9-3a7fa01d27c6 as well. So the thread to process id a1a0e96a-3cde-4f3c-829c-29650b09f22b has to wait 0.5 seconds.
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:37,903 (IndexTaskProcessor:checkReadinessProcessResourceMap:369) ###################Another resource map is process the referenced id ee73cf7f-1005-4b89-bab9-3a7fa01d27c6 as well. So the thread to process id a1a0e96a-3cde-4f3c-829c-29650b09f22b has to wait 0.5 seconds.
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:38,403 (IndexTaskProcessor:checkReadinessProcessResourceMap:369) ###################Another resource map is process the referenced id ee73cf7f-1005-4b89-bab9-3a7fa01d27c6 as well. So the thread to process id a1a0e96a-3cde-4f3c-829c-29650b09f22b has to wait 0.5 seconds.
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:38,904 (IndexTaskProcessor:checkReadinessProcessResourceMap:369) ###################Another resource map is process the referenced id ee73cf7f-1005-4b89-bab9-3a7fa01d27c6 as well. So the thread to process id a1a0e96a-3cde-4f3c-829c-29650b09f22b has to wait 0.5 seconds.
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:39,404 (IndexTaskProcessor:checkReadinessProcessResourceMap:369) ###################Another resource map is process the referenced id ee73cf7f-1005-4b89-bab9-3a7fa01d27c6 as well. So the thread to process id a1a0e96a-3cde-4f3c-829c-29650b09f22b has to wait 0.5 seconds.
cn-index-processor-daemon.log.6:[ERROR] 2018-09-10 22:44:39,904 (IndexTaskProcessor:checkReadinessProcessResourceMap:384) We waited for another thread to finish indexing a resource map which has the referenced id ee73cf7f-1005-4b89-bab9-3a7fa01d27c6 for a while. Now we quited and can't index id a1a0e96a-3cde-4f3c-829c-29650b09f22b
cn-index-processor-daemon.log.6:[ERROR] 2018-09-10 22:44:39,904 (IndexTaskProcessor:processTask:297) Unable to process task for pid: a1a0e96a-3cde-4f3c-829c-29650b09f22b
cn-index-processor-daemon.log.6:java.lang.Exception: We waited for another thread to finish indexing a resource map which has the referenced id ee73cf7f-1005-4b89-bab9-3a7fa01d27c6 for a while. Now we quited and can't index id a1a0e96a-3cde-4f3c-829c-29650b09f22b
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:39,906 (IndexTaskProcessor:newOrFailedIndexTaskExists:890) IndexTaskProcess.newOrFailedIndexTaskExists for id a1a0e96a-3cde-4f3c-829c-29650b09f22b
rnahf@cn-orc-1:/var/log/dataone/index$ date
Tue Sep 11 23:46:56 UTC 2018
rnahf@cn-orc-1:/var/log/dataone/index$ grep dc39515e-440b-4673-9f63-962c7374bf48 cn-index-processor-daemon.log.*
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:12,133 (HZEventFilter:filter:127) HZEventFilter.filter - the system metadata for the index event shows shows dc39515e-440b-4673-9f63-962c7374bf48 having a newer version than the SOLR server. So this event should be granted for indexing.
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:13,347 (HZEventFilter:filter:127) HZEventFilter.filter - the system metadata for the index event shows shows dc39515e-440b-4673-9f63-962c7374bf48 having a newer version than the SOLR server. So this event should be granted for indexing.
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:18,677 (IndexTaskProcessor:saveTask:865) IndexTaskProcess.saveTask save the index task dc39515e-440b-4673-9f63-962c7374bf48
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:18,677 (IndexTaskProcessor:getNextIndexTask:610) Start of indexing pid: dc39515e-440b-4673-9f63-962c7374bf48
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:25,783 (IndexTaskProcessor:getNextIndexTask:664) the original index task - IndexTask [id=18086020, pid=dc39515e-440b-4673-9f63-962c7374bf48, formatid=http://www.openarchives.org/ore/terms, objectPath=/var/metacat/data/autogen.2017072514144216470.1, dateSysMetaModified=1536087137440, deleted=false, taskModifiedDate=1536619458675, priority=3, status=IN PROCESS]
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:25,783 (IndexTaskProcessor:getNextIndexTask:671) the new index task - IndexTask [id=18086020, pid=dc39515e-440b-4673-9f63-962c7374bf48, formatid=http://www.openarchives.org/ore/terms, objectPath=/var/metacat/data/autogen.2017072514144216470.1, dateSysMetaModified=1536087137440, deleted=false, taskModifiedDate=1536619458675, priority=3, status=IN PROCESS]
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:27,221 (IndexTaskProcessor:processTask:284) *********************start to process update index task for dc39515e-440b-4673-9f63-962c7374bf48 in thread 20
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:43,513 (IndexTaskProcessor:processTask:288) *********************end to process update index task for dc39515e-440b-4673-9f63-962c7374bf48 in thread 20
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:43,519 (IndexTaskProcessor:processTask:315) Indexing complete for pid: dc39515e-440b-4673-9f63-962c7374bf48
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:46:11,604 (IndexTaskProcessor:saveTask:865) IndexTaskProcess.saveTask save the index task dc39515e-440b-4673-9f63-962c7374bf48
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:46:11,604 (IndexTaskProcessor:getNextIndexTask:610) Start of indexing pid: dc39515e-440b-4673-9f63-962c7374bf48
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:46:18,731 (IndexTaskProcessor:getNextIndexTask:664) the original index task - IndexTask [id=18086015, pid=dc39515e-440b-4673-9f63-962c7374bf48, formatid=http://www.openarchives.org/ore/terms, objectPath=/var/metacat/data/autogen.2017072514144216470.1, dateSysMetaModified=1536087137440, deleted=false, taskModifiedDate=1536619571603, priority=3, status=IN PROCESS]
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:46:18,732 (IndexTaskProcessor:getNextIndexTask:671) the new index task - IndexTask [id=18086015, pid=dc39515e-440b-4673-9f63-962c7374bf48, formatid=http://www.openarchives.org/ore/terms, objectPath=/var/metacat/data/autogen.2017072514144216470.1, dateSysMetaModified=1536087137440, deleted=false, taskModifiedDate=1536619571603, priority=3, status=IN PROCESS]
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:46:20,164 (IndexTaskProcessor:processTask:284) *********************start to process update index task for dc39515e-440b-4673-9f63-962c7374bf48 in thread 20
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:46:36,252 (IndexTaskProcessor:processTask:288) *********************end to process update index task for dc39515e-440b-4673-9f63-962c7374bf48 in thread 20
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:46:36,255 (IndexTaskProcessor:processTask:315) Indexing complete for pid: dc39515e-440b-4673-9f63-962c7374bf48
cn-index-processor-daemon.log.7:[ INFO] 2018-09-10 21:44:09,798 (HZEventFilter:compareRaplicaList:256) HZEventFilter.compareReplicaList - the system metadata for the index event shows dc39515e-440b-4673-9f63-962c7374bf48 having the same replica list as the solr doc.
cn-index-processor-daemon.log.7:[ INFO] 2018-09-10 21:44:09,798 (HZEventFilter:filter:164) HZEventFilter.filter - the system metadata for the index event shows dc39515e-440b-4673-9f63-962c7374bf48 having the same modification date as the SOLR server. Also both have the same replica list. So this event has been filtered out for indexing (no indexing).
rnahf@cn-orc-1:/var/log/dataone/index$
</pre>
Infrastructure - Bug #8500 (New): Geohash not calculated properly
https://redmine.dataone.org/issues/8500 2018-03-14T17:06:52Z Dave Vieglais (dave.vieglais@gmail.com)
<p>The metadata with pid = <code>{39A912EE-E2DC-4F50-A3DE-C0EF04BC1E88}</code> has bounding coords correctly indexed:</p>
<pre> <float name="eastBoundCoord">166.657</float>
<float name="westBoundCoord">-178.378</float>
<float name="southBoundCoord">-14.5596</float>
<float name="northBoundCoord">28.4536</float>
</pre>
<p>but the computed geohash is wrong:</p>
<pre> <arr name="geohash_9">
<str>ec5zd8st0</str>
</arr>
<arr name="geohash_1">
<str>e</str>
</arr>
<arr name="geohash_2">
<str>ec</str>
</arr>
<arr name="geohash_3">
<str>ec5</str>
</arr>
<arr name="geohash_4">
<str>ec5z</str>
</arr>
<arr name="geohash_5">
<str>ec5zd</str>
</arr>
<arr name="geohash_6">
<str>ec5zd8</str>
</arr>
<arr name="geohash_7">
<str>ec5zd8s</str>
</arr>
<arr name="geohash_8">
<str>ec5zd8st</str>
</arr>
</pre>
<p>Investigate whether this is a systemic issue or one peculiar to certain types of metadata or to the representation of coordinates within the metadata.</p>
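<p>One plausible cause (a hypothesis, not confirmed): the geohash is computed from a naive midpoint of the bounding coordinates, and this extent wraps the antimeridian. The bad geohash <code>ec5zd8st0</code> decodes to approximately (6.95, -5.86), which is exactly the naive midpoint of the indexed bounds. A sketch of midpoint computation with and without antimeridian handling:</p>

```python
def naive_midpoint(west, east, south, north):
    """Midpoint of a bounding box, ignoring antimeridian wrapping."""
    return ((south + north) / 2.0, (west + east) / 2.0)

def wrapped_midpoint(west, east, south, north):
    """Midpoint treating west > east as an extent crossing the antimeridian."""
    if west > east:
        east += 360.0
    lon = (west + east) / 2.0
    if lon > 180.0:
        lon -= 360.0
    return ((south + north) / 2.0, lon)

# Indexed bounds from the record above:
lat, lon = naive_midpoint(-178.378, 166.657, -14.5596, 28.4536)
# roughly (6.95, -5.86): consistent with the bad geohash ec5zd8st0

# If the true extent crosses the antimeridian (west=166.657, east=-178.378):
lat2, lon2 = wrapped_midpoint(166.657, -178.378, -14.5596, 28.4536)
# roughly (6.95, 174.14), in the Pacific
```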
Infrastructure - Story #8173 (New): add checks for retrograde systemMetadata changes
https://redmine.dataone.org/issues/8173 2017-09-01T19:42:33Z Rob Nahf (rnahf@epscor.unm.edu)
<p>With the ability to prioritize tasks and the introduction of parallelized index task processing, the effective queue is no longer guaranteed to be time-ordered. If two valid system metadata changes result in two tasks and the later change reaches the index first, the earlier task should be rejected, as its changes are out of date.</p>
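<p>A minimal sketch of such a check, assuming each task carries the sysmeta modification timestamp (the dateSysMetaModified field seen in index task logs) and the index records the timestamp it last applied; names here are hypothetical:</p>

```python
def should_apply_task(task_modified, indexed_modified):
    """Apply an index task only if its sysmeta modification time is not
    older than the version already in the index (None = not yet indexed)."""
    return indexed_modified is None or task_modified >= indexed_modified
```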
Infrastructure - Bug #8076 (New): sysmeta can not be retrieved for some objects
https://redmine.dataone.org/issues/8076 2017-04-22T01:52:05Z Dave Vieglais (dave.vieglais@gmail.com)
<p>It appears that a (possibly large) number of system metadata records may be invalid due to the JiBX/JAXB transition.</p>
<p>For example:</p>
<pre>d1listobjects -x "2014-03-25" -y "2014-03-27" -C 500 -p 100</pre>
<pre>000000: 744 Bytes 2014-03-25T22:05:05Z text/csv ark:/13030/m50000sp/1/cadwsap-s3600587-002-main.csv
000001: 20.9 KiB 2014-03-25T22:04:57Z application/pdf ark:/13030/m50000sp/1/cadwsap-s3600587-002.pdf
000002: 302 Bytes 2014-03-25T22:05:09Z text/csv ark:/13030/m50000sp/1/cadwsap-s3600587-002-vuln.csv
000003: 4.8 KiB 2014-03-25T22:05:01Z FGDC-STD-001-1998 ark:/13030/m50000sp/1/cadwsap-s3600587-002.xml
000004: 4.0 KiB 2014-03-25T22:05:12Z http://www.openarchives.org/ore/terms ark:/13030/m50000sp/1/mrt-dataone-map.rdf
000005: 737 Bytes 2014-03-25T22:05:25Z text/csv ark:/13030/m50000t4/1/cadwsap-s1610004-004-main.csv
000006: 22.8 KiB 2014-03-25T22:05:15Z application/pdf ark:/13030/m50000t4/1/cadwsap-s1610004-004.pdf
000007: 1.7 KiB 2014-03-25T22:05:30Z text/csv ark:/13030/m50000t4/1/cadwsap-s1610004-004-vuln.csv
000008: 4.8 KiB 2014-03-25T22:05:23Z FGDC-STD-001-1998 ark:/13030/m50000t4/1/cadwsap-s1610004-004.xml
000009: 4.0 KiB 2014-03-25T22:05:35Z http://www.openarchives.org/ore/terms ark:/13030/m50000t4/1/mrt-dataone-map.rdf
000010: 750 Bytes 2014-03-25T22:05:46Z text/csv ark:/13030/m50000vk/1/cadwsap-s4300630-002-main.csv
000011: 22.9 KiB 2014-03-25T22:05:38Z application/pdf ark:/13030/m50000vk/1/cadwsap-s4300630-002.pdf
000012: 2.2 KiB 2014-03-25T22:05:50Z text/csv ark:/13030/m50000vk/1/cadwsap-s4300630-002-vuln.csv
000013: 4.8 KiB 2014-03-25T22:05:43Z FGDC-STD-001-1998 ark:/13030/m50000vk/1/cadwsap-s4300630-002.xml
000014: 4.0 KiB 2014-03-25T22:05:54Z http://www.openarchives.org/ore/terms ark:/13030/m50000vk/1/mrt-dataone-map.rdf
000015: 698 Bytes 2014-03-25T22:06:05Z text/csv ark:/13030/m50000w1/1/cadwsap-s1502277-001-main.csv</pre>
<p>...</p>
Infrastructure - Bug #7919 (New): unloadable system metadata in CNs by Hazelcast
https://redmine.dataone.org/issues/7919 2016-10-26T16:22:56Z Rob Nahf (rnahf@epscor.unm.edu)
<p>Looking through the metacat logs, I found many instances where the HzSystemMetadataMap could not load system metadata for particular PIDs. Most (~1200) had "dryad" in the PID, but another ~140 are from elsewhere.</p>
<p>A random sample showed that the system metadata could not be retrieved via /meta, although the PID could be retrieved from the Dryad MN.</p>
<p>This appears to be another type of half-created content on the CN.</p>
<pre>rnahf@cn-ucsb-1:~$ grep 'could not load system metadata' /var/metacat/logs/metacat.log | cut -c60- | sort | uniq | grep -v dryad | wc -l
139
rnahf@cn-ucsb-1:~$ grep 'could not load system metadata' /var/metacat/logs/metacat.log | cut -c60- | sort | uniq | grep dryad | wc -l
1216</pre>
Infrastructure - Bug #7601 (New): CN checksum inconsistencies
https://redmine.dataone.org/issues/7601 2016-01-21T20:09:28Z Ben Leinfelder (leinfelder@nceas.ucsb.edu)
<p>While transferring test data from production to the sandbox-2 environment I noticed failures for a group of PIDs. I'll use one as an example (doi_10.5066_F71C1TV7):<br>
<a href="https://cn.dataone.org/cn/v2/meta/doi_10.5066_F71C1TV7">https://cn.dataone.org/cn/v2/meta/doi_10.5066_F71C1TV7</a></p>
<p>CN.SystemMetadata reports the checksum as:</p>
<pre>46178da6192263921eb755940d716725</pre>
<p>whereas calculating it from disk gives:</p>
<pre>MD5(/var/metacat/documents/autogen.2013062508395355978.1)= efc11787f789b45db29999fb4bd8d745</pre>
<p>The byte size is also off. System metadata reports:</p>
<pre>16739</pre>
<p>but on disk:</p>
<pre>-rw-r--r-- 1 tomcat7 tomcat7 16529 Jun 25 2013 /var/metacat/documents/autogen.2013062508395355978.1</pre>
<p>There are ~70 similar PIDs (perhaps more) from our test corpus that have these issues. They are from the now-defunct USGS MN.</p>
<p>I'm not sure what our strategy should be, since the original MN is no longer online and we cannot retrieve the "original" bytes from it.</p>
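<p>Re-verifying on-disk bytes against the reported values can be scripted; a sketch using only the standard library (the path and checksum mentioned in the comment are from the example above):</p>

```python
import hashlib

def file_md5(path):
    """Stream a file and return its MD5 hex digest, for comparison with
    the checksum recorded in CN system metadata."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# e.g. file_md5("/var/metacat/documents/autogen.2013062508395355978.1")
# would be compared against the reported 46178da6192263921eb755940d716725
```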
Infrastructure - Bug #4674 (New): Ask Judith, Mike and Virgina Perez.2.1 to obsolete those pids w...
https://redmine.dataone.org/issues/4674 2014-03-31T18:02:41Z Jing Tao (tao@nceas.ucsb.edu)
<p>doi:10.5063/AA/Virginia Perez.2.1<br>
judith botha.1.1<br>
judith botha.2.1<br>
judith kruger.1.1<br>
judith kruger.2.1<br>
judith kruger.3.1<br>
judith kruger.4.1<br>
judith kruger.5.1<br>
doi:10.6085/AA/ SHLX00_XXXITV2XLSR03_20111128.40.1 (PISCO)</p>
Infrastructure - Task #4210 (Testing): Metacat does not set serialVersion correctly in CNodeServi...
https://redmine.dataone.org/issues/4210 2013-12-20T15:22:50Z Chris Jones (cjones@nceas.ucsb.edu)
<p>For DATA and METADATA objects, CNodeService.archive() and D1NodeService.archive(), respectively, don't increment the serialVersion field. Check this for delete() as well. D1NodeService delegates to DocumentImpl to call the HZ put() method, so the fix needs to go there as well as in CNodeService.</p>
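<p>The intended behavior can be sketched as follows (a Python sketch of the Java fix; the store abstraction and field names are hypothetical): archive must increment serialVersion before the Hazelcast put:</p>

```python
def archive(store, pid):
    """Mark an object archived, incrementing serialVersion before the put
    (sketch of the intended behavior; store and field names are hypothetical)."""
    sm = store[pid]
    sm["archived"] = True
    sm["serialVersion"] = sm.get("serialVersion", 0) + 1
    store[pid] = sm  # corresponds to the HZ map put()
    return sm
```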
Infrastructure - Bug #3675 (New): package relationships not available for archived objects
https://redmine.dataone.org/issues/3675 2013-03-20T19:12:28Z Rob Nahf (rnahf@epscor.unm.edu)
<p>Currently, records for obsoleted items are maintained in the Solr index, so their resourceMap, documents, and documentedBy relationships remain available and people can "investigate the past". However, those same relationships are not available for archived items, leading to an incomplete solution for this use case (accessing package relationships of out-of-date content).</p>
<p>Archive is used to limit discoverability, but it also eliminates the ability to navigate the package relationships. </p>
<p>Note: archive is intended to be used when the owner does not want to update the object but simply wants to remove it. However, nothing prevents the owner from archiving obsoleted content. So, in fact, the ability to navigate the package relationships of out-of-date content cannot be guaranteed; it is subject to the individual data management practices of content owners.</p>
Member Nodes - MNDeployment #3521 (Operational): SEAD Member Node
https://redmine.dataone.org/issues/3521 2013-01-25T21:19:12Z Rebecca Koskela (rkoskela@unm.edu)
<p>SEAD (Sustainable Environment - Actionable Data), another DataNet, would like to become a DataONE Member Node<br>
(<a href="http://sead-data.net/">http://sead-data.net/</a>)</p>
Infrastructure - Bug #3492 (In Progress): Invalid PIDs in production (whitespace)
https://redmine.dataone.org/issues/3492 2013-01-17T15:13:44Z Dave Vieglais (dave.vieglais@gmail.com)
<p>Recording this for future reference. </p>
<p>There are 15 PIDs in the production environment that contain whitespace. This appears to have no functional effect: sysmeta and objects can be retrieved, so no action is required other than ensuring no more sneak in.</p>
<p>The PIDs in question are:</p>
<h2>guid</h2>
<p>doi:10.5063/AA/Virginia Perez.2.1<br>
judith kruger.3.1<br>
judith kruger.4.1<br>
judith botha.1.1<br>
judith kruger.1.1<br>
judith kruger.2.1<br>
judith kruger.5.1<br>
judith botha.2.1<br>
resourceMap_Lin Cheng-Jung.1.1<br>
resourceMap_Lin Cheng-Jung.1.2<br>
resourceMap_Lin Cheng-Jung.1.3<br>
Lin Cheng-Jung.1.1<br>
Lin Cheng-Jung.1.2<br>
Lin Cheng-Jung.1.3<br>
doi:10.6085/AA/ SHLX00_XXXITV2XLSR03_20111128.40.1</p>
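<p>To "ensure no more sneak in", a guard like the following could be applied at PID validation time (a sketch, not the actual validation code):</p>

```python
import re

def pid_contains_whitespace(pid):
    """True if the PID contains any whitespace (space, tab, newline, ...)."""
    return re.search(r"\s", pid) is not None
```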
Member Nodes - MNDeployment #3118 (Operational): Dryad Member Node
https://redmine.dataone.org/issues/3118 2012-08-05T17:05:51Z Dave Vieglais (dave.vieglais@gmail.com)
<p>The Dryad MN will operate as a tier 1 member node.</p>
<p>Base_URL: <a href="https://datadryad.org/mn">https://datadryad.org/mn</a><br>
Node_ID: urn:node:DRYAD<br>
Deployment_Contact: Ryan Scherle<br>
Software: Custom on modified DSpace (Dryad)<br>
Target_Tier: 1<br>
Content_Volume_GB: 20</p>