DataONE Tasks: Issues - https://redmine.dataone.org/ (feed generated 2020-08-06T00:06:07Z)
Redmine CN REST - Bug #8867 (New): CNCore.listChecksumAlgorithms() returns incorrect list
https://redmine.dataone.org/issues/8867 (2020-08-06T00:06:07Z, Matthew Jones &lt;jones@nceas.ucsb.edu&gt;)
<p>The definition of the ChecksumAlgorithm type in SystemMetadata allows any checksum algorithm listed in the Library of Congress vocab. But the current CNCore.listChecksumAlgorithms() implementation only returns two, MD5 and SHA-1. Need to correct this to include the full list of supported algorithms (see <a href="http://id.loc.gov/vocabulary/preservation/cryptographicHashFunctions.html">http://id.loc.gov/vocabulary/preservation/cryptographicHashFunctions.html</a>).</p>
<p>The implementation of this is in a property file, which needs to be updated with the correct list. The file (d1_cn_rest/src/test/resources/org/dataone/configuration/node.properties) currently contains:</p>
<p><code>cn.checksumAlgorithmList=SHA-1;MD5</code></p>
<p>It should instead contain all of the valid algorithms from the LoC vocabulary.</p>
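<p>As a cross-check, an expanded list can be validated against what a runtime can actually compute. A minimal Python sketch; the candidate names below are illustrative, not the authoritative LoC vocabulary, so confirm them against the LoC page linked above:</p>

```python
import hashlib

# Illustrative candidates only -- verify against the LoC vocabulary.
CANDIDATES = ["MD5", "SHA-1", "SHA-256", "SHA-384", "SHA-512"]

def checksum(data, algorithm):
    # hashlib uses lowercase names without the hyphen, e.g. "SHA-256" -> "sha256"
    h = hashlib.new(algorithm.replace("-", "").lower())
    h.update(data)
    return h.hexdigest()

# Every candidate should be computable before it goes into the property file.
for name in CANDIDATES:
    checksum(b"", name)
```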
Member Nodes - Task #8829 (New): get DKAN connected to GMN
https://redmine.dataone.org/issues/8829 (2019-07-23T16:41:33Z, Amy Forrester &lt;aforres4@utk.edu&gt;)
<p>Matt J talked to Carrie: they recently installed DKAN and are now having trouble programmatically uploading to GMN the way Ed had been doing it. She was interested in MetacatUI and in the possibility of a web-based form for uploading content; NKN was unaware that this might be possible with GMN and Metacat. She also had lots of good comments about our new service models. We should definitely follow up.</p>
<p>At a minimum we need to help them get DKAN connected to GMN, as none of their new content is making it into GMN.</p>
Infrastructure - Story #8823 (New): Recent Apache and OpenSSL combinations break connectivity on ...
https://redmine.dataone.org/issues/8823 (2019-06-19T02:03:44Z, Dave Vieglais &lt;dave.vieglais@gmail.com&gt;)
<p>The latest Ubuntu 18.04 release of Apache is 2.4.29 and OpenSSL is 1.1.1.</p>
<p>This combination creates a significant delay in TLS renegotiation that results from the Apache config option on the CNs:</p>
<pre>SSLVerifyClient none
<Location "/cn">
<If " ! ( %{HTTP_USER_AGENT} =~ /(windows|chrome|mozilla|safari|webkit)/i )">
SSLVerifyClient optional
</If>
</Location>
</pre>
<p>This configuration is intended to disable client certificate authentication for web browsers while allowing it for other clients. It worked fine on older Apache / OpenSSL releases, but the new combination introduces a several-second wait when the server discovers the client is not a web browser and tells it to reconnect with the option of presenting a client certificate.</p>
<p>The latest released version of Apache is 2.4.39, which is available through a third-party PPA. It has been installed so far on dev-2, sandbox, stage, and stage-2 with the process:</p>
<pre>sudo add-apt-repository ppa:ondrej/apache2
sudo apt update
sudo apt dist-upgrade
sudo systemctl restart apache2
</pre>
<p>This installs Apache 2.4.39 and OpenSSL 1.1.1c, which appears to resolve the bug in the 2.4.29 / 1.1.1 combination.</p>
<p>One issue with the update is that Apache now offers TLSv1.3 by default. That is generally desirable, but it appears to break at least some Python clients, which fail to connect and receive a 403 error. For example:</p>
<pre>$ python3
>>> import requests
>>> r = requests.get("https://cn-sandbox-ucsb-1.test.dataone.org/cn/v2/monitor/ping")
>>> r.status_code
403
</pre>
<p>That TLSv1.3 is the problem was verified with cn-stage-unm-2 by configuring Apache with:</p>
<pre> SSLProtocol all -TLSv1.3 -SSLv2 -SSLv3
</pre>
<p>to disable TLSv1.3. After this change the Python client was able to connect as expected.</p>
<p>A client-side workaround has not yet been researched.</p>
<p>It is not clear if this issue applies to other clients such as R and Java, so until we learn one way or the other, TLSv1.3 will be disabled on the CNs.</p>
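<p>Until then, a possible client-side mitigation (a sketch only, not verified against the CNs; requires Python 3.7+ for <code>ssl.TLSVersion</code>) is to cap the negotiated protocol at TLSv1.2:</p>

```python
import ssl
import urllib.request

# Build a normal certificate-verifying context, but refuse to negotiate
# TLSv1.3 so the handshake avoids the problematic renegotiation path.
ctx = ssl.create_default_context()
ctx.maximum_version = ssl.TLSVersion.TLSv1_2

# Hypothetical usage against a CN ping endpoint:
# with urllib.request.urlopen(
#         "https://cn-sandbox-ucsb-1.test.dataone.org/cn/v2/monitor/ping",
#         context=ctx) as r:
#     print(r.status)
```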
<p><del>This issue will likely apply to Member Nodes as well once TLSv1.3 is generally available or if MNs choose to install Apache 2.4.39.</del> CORRECTION: this issue only applies when attempting to renegotiate TLS after headers have been transferred, so it will not typically affect a MN.</p>
Infrastructure - Bug #8722 (New): Object in search index but system metadata is not available
https://redmine.dataone.org/issues/8722 (2018-10-01T18:38:25Z, Dave Vieglais &lt;dave.vieglais@gmail.com&gt;)
<p>The object: <code>urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2</code> is a CSV file. </p>
<p>resolve and getSystemMetadata both return a 404 for that object. It is, however, present in the search index:</p>
<p><a href="https://cn.dataone.org/cn/v2/query/solr/?start=0&rows=10&fl=*&q=id%3A%22urn%5C%3Auuid%5C%3A4923cca4%5C-c155%5C-4edc%5C-b901%5C-f6e3b4f2e7b2%22">https://cn.dataone.org/cn/v2/query/solr/?start=0&rows=10&fl=*&q=id%3A%22urn%5C%3Auuid%5C%3A4923cca4%5C-c155%5C-4edc%5C-b901%5C-f6e3b4f2e7b2%22</a></p>
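<p>For reference, the query URL above is just the PID with Solr's special characters backslash-escaped inside an <code>id:"..."</code> term. A small sketch (the helper name is ours) that reproduces it:</p>

```python
import urllib.parse

SOLR_BASE = "https://cn.dataone.org/cn/v2/query/solr/"

def solr_id_query(pid):
    # Backslash-escape ':' and '-' so Solr treats them literally in the id term.
    escaped = pid.replace(":", r"\:").replace("-", r"\-")
    params = {"start": 0, "rows": 10, "fl": "*", "q": 'id:"%s"' % escaped}
    return SOLR_BASE + "?" + urllib.parse.urlencode(params)

url = solr_id_query("urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2")
```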
<p>cn-synchronization.log* reports:</p>
<pre>cn-synchronization.log.32:[ INFO] 2018-09-21 15:37:32,418 [ProcessDaemonTask2] (SyncObjectTask:executeTransferObjectTask:293) Task-urn:node:KNB-urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 received
cn-synchronization.log.32:[ INFO] 2018-09-21 15:37:32,418 [ProcessDaemonTask2] (SyncObjectTask:executeTransferObjectTask:310) Task-urn:node:KNB-urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 submitted for execution
cn-synchronization.log.32:[ INFO] 2018-09-21 15:37:32,418 [SynchronizeTask322] (V2TransferObjectTask:call:202) Task-urn:node:KNB-urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 - Locking task, attempt 1
cn-synchronization.log.32:[ INFO] 2018-09-21 15:37:32,469 [SynchronizeTask322] (V2TransferObjectTask:call:207) Task-urn:node:KNB-urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 - Processing SyncObject
cn-synchronization.log.32:[ INFO] 2018-09-21 15:37:32,575 [SynchronizeTask322] (V2TransferObjectTask:retrieveMNSystemMetadata:317) Task-urn:node:KNB-urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 - Retrieved SystemMetadata Identifier:urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 from node urn:node:KNB for ObjectInfo Identifier urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2
cn-synchronization.log.32:[ INFO] 2018-09-21 15:37:36,547 [SynchronizeTask322] (V2TransferObjectTask:createObject:730) Task-urn:node:KNB-urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 - Start CreateObject
cn-synchronization.log.32:[ INFO] 2018-09-21 15:37:40,388 [SynchronizeTask322] (V2TransferObjectTask:call:234) Task-urn:node:KNB-urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 - Unlocked Pid.
cn-synchronization.log.32:[ERROR] 2018-09-21 15:37:40,388 [SynchronizeTask322] (V2TransferObjectTask:call:269) Task-urn:node:KNB-urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 - UnrecoverableException: urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 cn.createObject failed: The identifier is already in use by an existing object. - InvalidRequest - The identifier is already in use by an existing object.
cn-synchronization.log.32:org.dataone.cn.batch.exceptions.UnrecoverableException: urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 cn.createObject failed: The identifier is already in use by an existing object.
cn-synchronization.log.32:[ WARN] 2018-09-21 15:37:40,389 [SynchronizeTask322] (SyncFailedTask:submitSynchronizationFailed:116) Task-urn:node:KNB-urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 - SynchronizationFailed: detail code: 6001 id:urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 nodeId:urn:node:CNUCSB1 description:Synchronization task of [PID::] urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 [::PID] failed. Cause: InvalidRequest: The identifier is already in use by an existing object.
cn-synchronization.log.32:[ INFO] 2018-09-21 15:37:40,460 [SynchronizeTask322] (V2TransferObjectTask:call:294) Task-urn:node:KNB-urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 - exiting with callState: FAILED
cn-synchronization.log.32:[ INFO] 2018-09-21 15:37:40,461 [ProcessDaemonTask2] (SyncObjectTask:reapFutures:372) Task-urn:node:KNB-urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 SyncObjectState: FAILED
cn-synchronization.log.33:[ INFO] 2018-09-21 13:37:17,203 [ProcessDaemonTask2] (SyncObjectTask:executeTransferObjectTask:293) Task-urn:node:KNB-urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 received
cn-synchronization.log.33:[ INFO] 2018-09-21 13:37:17,204 [ProcessDaemonTask2] (SyncObjectTask:executeTransferObjectTask:310) Task-urn:node:KNB-urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 submitted for execution
cn-synchronization.log.33:[ INFO] 2018-09-21 13:37:17,205 [SynchronizeTask37] (V2TransferObjectTask:call:202) Task-urn:node:KNB-urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 - Locking task, attempt 1
cn-synchronization.log.33:[ INFO] 2018-09-21 13:37:17,255 [SynchronizeTask37] (V2TransferObjectTask:call:207) Task-urn:node:KNB-urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 - Processing SyncObject
cn-synchronization.log.33:[ INFO] 2018-09-21 13:37:17,462 [SynchronizeTask37] (V2TransferObjectTask:retrieveMNSystemMetadata:317) Task-urn:node:KNB-urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 - Retrieved SystemMetadata Identifier:urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 from node urn:node:KNB for ObjectInfo Identifier urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2
cn-synchronization.log.33:[ INFO] 2018-09-21 13:37:21,034 [SynchronizeTask37] (V2TransferObjectTask:createObject:730) Task-urn:node:KNB-urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 - Start CreateObject
cn-synchronization.log.33:[ INFO] 2018-09-21 13:37:25,672 [SynchronizeTask37] (V2TransferObjectTask:call:234) Task-urn:node:KNB-urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 - Unlocked Pid.
cn-synchronization.log.33:[ERROR] 2018-09-21 13:37:25,673 [SynchronizeTask37] (V2TransferObjectTask:call:269) Task-urn:node:KNB-urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 - UnrecoverableException: urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 cn.createObject failed: The identifier is already in use by an existing object. - InvalidRequest - The identifier is already in use by an existing object.
cn-synchronization.log.33:org.dataone.cn.batch.exceptions.UnrecoverableException: urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 cn.createObject failed: The identifier is already in use by an existing object.
cn-synchronization.log.33:[ WARN] 2018-09-21 13:37:25,674 [SynchronizeTask37] (SyncFailedTask:submitSynchronizationFailed:116) Task-urn:node:KNB-urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 - SynchronizationFailed: detail code: 6001 id:urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 nodeId:urn:node:CNUCSB1 description:Synchronization task of [PID::] urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 [::PID] failed. Cause: InvalidRequest: The identifier is already in use by an existing object.
cn-synchronization.log.33:[ INFO] 2018-09-21 13:37:25,740 [SynchronizeTask37] (V2TransferObjectTask:call:294) Task-urn:node:KNB-urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 - exiting with callState: FAILED
cn-synchronization.log.33:[ INFO] 2018-09-21 13:37:25,740 [ProcessDaemonTask2] (SyncObjectTask:reapFutures:372) Task-urn:node:KNB-urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 SyncObjectState: FAILED
</pre>
<p>Need to determine how an object could be added to the search index and apparently replicated, yet not exist in the system metadata store.</p>
Infrastructure - Bug #8696 (New): double indexing of a resource map and another not processed bec...
https://redmine.dataone.org/issues/8696 (2018-09-12T00:18:51Z, Rob Nahf &lt;rnahf@epscor.unm.edu&gt;)
<p>In production, the ORE 'a1a0e96a-3cde-4f3c-829c-29650b09f22b' was not processed because one of its members was also referenced by the ORE it obsoletes, 'dc39515e-440b-4673-9f63-962c7374bf48'. The task failed without being requeued. Below is the log output.</p>
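<p>The log shows the readiness check waiting ten times for 0.5 seconds and then failing the task permanently. A sketch of the fix this report suggests (hypothetical names, not the actual IndexTaskProcessor API): requeue the task on timeout instead of erroring out.</p>

```python
import time

def check_readiness_or_requeue(task, ref_ids_in_process, requeue,
                               max_waits=10, delay=0.5):
    """Wait for an overlapping resource-map task; requeue instead of failing."""
    for _ in range(max_waits):
        if not (task.referenced_ids & ref_ids_in_process):
            return True                    # no overlap, safe to index now
        time.sleep(delay)
    requeue(task)                          # put the task back instead of erroring
    return False
```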
<pre>rnahf@cn-orc-1:/var/log/dataone/index$ grep a1a0e96a-3cde-4f3c-829c-29650b09f22b cn-index-processor-daemon.log.*
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:27,384 (IndexTaskProcessor:saveTask:865) IndexTaskProcess.saveTask save the index task a1a0e96a-3cde-4f3c-829c-29650b09f22b
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:27,384 (IndexTaskProcessor:getNextIndexTask:610) Start of indexing pid: a1a0e96a-3cde-4f3c-829c-29650b09f22b
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:34,832 (IndexTaskProcessor:getNextIndexTask:664) the original index task - IndexTask [id=18085996, pid=a1a0e96a-3cde-4f3c-829c-29650b09f22b, formatid=http://www.openarchives.org/ore/terms, objectPath=/var/metacat/data/autogen.2018091015425874434.1, dateSysMetaModified=1536087134490, deleted=false, taskModifiedDate=1536619467383, priority=3, status=IN PROCESS]
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:34,832 (IndexTaskProcessor:getNextIndexTask:671) the new index task - IndexTask [id=18085996, pid=a1a0e96a-3cde-4f3c-829c-29650b09f22b, formatid=http://www.openarchives.org/ore/terms, objectPath=/var/metacat/data/autogen.2018091015425874434.1, dateSysMetaModified=1536087134490, deleted=false, taskModifiedDate=1536619467383, priority=3, status=IN PROCESS]
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:34,901 (IndexTaskProcessor:checkReadinessProcessResourceMap:369) ###################Another resource map is process the referenced id ee73cf7f-1005-4b89-bab9-3a7fa01d27c6 as well. So the thread to process id a1a0e96a-3cde-4f3c-829c-29650b09f22b has to wait 0.5 seconds.
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:35,402 (IndexTaskProcessor:checkReadinessProcessResourceMap:369) ###################Another resource map is process the referenced id ee73cf7f-1005-4b89-bab9-3a7fa01d27c6 as well. So the thread to process id a1a0e96a-3cde-4f3c-829c-29650b09f22b has to wait 0.5 seconds.
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:35,902 (IndexTaskProcessor:checkReadinessProcessResourceMap:369) ###################Another resource map is process the referenced id ee73cf7f-1005-4b89-bab9-3a7fa01d27c6 as well. So the thread to process id a1a0e96a-3cde-4f3c-829c-29650b09f22b has to wait 0.5 seconds.
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:36,402 (IndexTaskProcessor:checkReadinessProcessResourceMap:369) ###################Another resource map is process the referenced id ee73cf7f-1005-4b89-bab9-3a7fa01d27c6 as well. So the thread to process id a1a0e96a-3cde-4f3c-829c-29650b09f22b has to wait 0.5 seconds.
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:36,903 (IndexTaskProcessor:checkReadinessProcessResourceMap:369) ###################Another resource map is process the referenced id ee73cf7f-1005-4b89-bab9-3a7fa01d27c6 as well. So the thread to process id a1a0e96a-3cde-4f3c-829c-29650b09f22b has to wait 0.5 seconds.
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:37,403 (IndexTaskProcessor:checkReadinessProcessResourceMap:369) ###################Another resource map is process the referenced id ee73cf7f-1005-4b89-bab9-3a7fa01d27c6 as well. So the thread to process id a1a0e96a-3cde-4f3c-829c-29650b09f22b has to wait 0.5 seconds.
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:37,903 (IndexTaskProcessor:checkReadinessProcessResourceMap:369) ###################Another resource map is process the referenced id ee73cf7f-1005-4b89-bab9-3a7fa01d27c6 as well. So the thread to process id a1a0e96a-3cde-4f3c-829c-29650b09f22b has to wait 0.5 seconds.
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:38,403 (IndexTaskProcessor:checkReadinessProcessResourceMap:369) ###################Another resource map is process the referenced id ee73cf7f-1005-4b89-bab9-3a7fa01d27c6 as well. So the thread to process id a1a0e96a-3cde-4f3c-829c-29650b09f22b has to wait 0.5 seconds.
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:38,904 (IndexTaskProcessor:checkReadinessProcessResourceMap:369) ###################Another resource map is process the referenced id ee73cf7f-1005-4b89-bab9-3a7fa01d27c6 as well. So the thread to process id a1a0e96a-3cde-4f3c-829c-29650b09f22b has to wait 0.5 seconds.
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:39,404 (IndexTaskProcessor:checkReadinessProcessResourceMap:369) ###################Another resource map is process the referenced id ee73cf7f-1005-4b89-bab9-3a7fa01d27c6 as well. So the thread to process id a1a0e96a-3cde-4f3c-829c-29650b09f22b has to wait 0.5 seconds.
cn-index-processor-daemon.log.6:[ERROR] 2018-09-10 22:44:39,904 (IndexTaskProcessor:checkReadinessProcessResourceMap:384) We waited for another thread to finish indexing a resource map which has the referenced id ee73cf7f-1005-4b89-bab9-3a7fa01d27c6 for a while. Now we quited and can't index id a1a0e96a-3cde-4f3c-829c-29650b09f22b
cn-index-processor-daemon.log.6:[ERROR] 2018-09-10 22:44:39,904 (IndexTaskProcessor:processTask:297) Unable to process task for pid: a1a0e96a-3cde-4f3c-829c-29650b09f22b
cn-index-processor-daemon.log.6:java.lang.Exception: We waited for another thread to finish indexing a resource map which has the referenced id ee73cf7f-1005-4b89-bab9-3a7fa01d27c6 for a while. Now we quited and can't index id a1a0e96a-3cde-4f3c-829c-29650b09f22b
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:39,906 (IndexTaskProcessor:newOrFailedIndexTaskExists:890) IndexTaskProcess.newOrFailedIndexTaskExists for id a1a0e96a-3cde-4f3c-829c-29650b09f22b
rnahf@cn-orc-1:/var/log/dataone/index$ date
Tue Sep 11 23:46:56 UTC 2018
rnahf@cn-orc-1:/var/log/dataone/index$ grep dc39515e-440b-4673-9f63-962c7374bf48 cn-index-processor-daemon.log.*
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:12,133 (HZEventFilter:filter:127) HZEventFilter.filter - the system metadata for the index event shows shows dc39515e-440b-4673-9f63-962c7374bf48 having a newer version than the SOLR server. So this event should be granted for indexing.
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:13,347 (HZEventFilter:filter:127) HZEventFilter.filter - the system metadata for the index event shows shows dc39515e-440b-4673-9f63-962c7374bf48 having a newer version than the SOLR server. So this event should be granted for indexing.
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:18,677 (IndexTaskProcessor:saveTask:865) IndexTaskProcess.saveTask save the index task dc39515e-440b-4673-9f63-962c7374bf48
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:18,677 (IndexTaskProcessor:getNextIndexTask:610) Start of indexing pid: dc39515e-440b-4673-9f63-962c7374bf48
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:25,783 (IndexTaskProcessor:getNextIndexTask:664) the original index task - IndexTask [id=18086020, pid=dc39515e-440b-4673-9f63-962c7374bf48, formatid=http://www.openarchives.org/ore/terms, objectPath=/var/metacat/data/autogen.2017072514144216470.1, dateSysMetaModified=1536087137440, deleted=false, taskModifiedDate=1536619458675, priority=3, status=IN PROCESS]
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:25,783 (IndexTaskProcessor:getNextIndexTask:671) the new index task - IndexTask [id=18086020, pid=dc39515e-440b-4673-9f63-962c7374bf48, formatid=http://www.openarchives.org/ore/terms, objectPath=/var/metacat/data/autogen.2017072514144216470.1, dateSysMetaModified=1536087137440, deleted=false, taskModifiedDate=1536619458675, priority=3, status=IN PROCESS]
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:27,221 (IndexTaskProcessor:processTask:284) *********************start to process update index task for dc39515e-440b-4673-9f63-962c7374bf48 in thread 20
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:43,513 (IndexTaskProcessor:processTask:288) *********************end to process update index task for dc39515e-440b-4673-9f63-962c7374bf48 in thread 20
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:43,519 (IndexTaskProcessor:processTask:315) Indexing complete for pid: dc39515e-440b-4673-9f63-962c7374bf48
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:46:11,604 (IndexTaskProcessor:saveTask:865) IndexTaskProcess.saveTask save the index task dc39515e-440b-4673-9f63-962c7374bf48
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:46:11,604 (IndexTaskProcessor:getNextIndexTask:610) Start of indexing pid: dc39515e-440b-4673-9f63-962c7374bf48
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:46:18,731 (IndexTaskProcessor:getNextIndexTask:664) the original index task - IndexTask [id=18086015, pid=dc39515e-440b-4673-9f63-962c7374bf48, formatid=http://www.openarchives.org/ore/terms, objectPath=/var/metacat/data/autogen.2017072514144216470.1, dateSysMetaModified=1536087137440, deleted=false, taskModifiedDate=1536619571603, priority=3, status=IN PROCESS]
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:46:18,732 (IndexTaskProcessor:getNextIndexTask:671) the new index task - IndexTask [id=18086015, pid=dc39515e-440b-4673-9f63-962c7374bf48, formatid=http://www.openarchives.org/ore/terms, objectPath=/var/metacat/data/autogen.2017072514144216470.1, dateSysMetaModified=1536087137440, deleted=false, taskModifiedDate=1536619571603, priority=3, status=IN PROCESS]
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:46:20,164 (IndexTaskProcessor:processTask:284) *********************start to process update index task for dc39515e-440b-4673-9f63-962c7374bf48 in thread 20
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:46:36,252 (IndexTaskProcessor:processTask:288) *********************end to process update index task for dc39515e-440b-4673-9f63-962c7374bf48 in thread 20
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:46:36,255 (IndexTaskProcessor:processTask:315) Indexing complete for pid: dc39515e-440b-4673-9f63-962c7374bf48
cn-index-processor-daemon.log.7:[ INFO] 2018-09-10 21:44:09,798 (HZEventFilter:compareRaplicaList:256) HZEventFilter.compareReplicaList - the system metadata for the index event shows dc39515e-440b-4673-9f63-962c7374bf48 having the same replica list as the solr doc.
cn-index-processor-daemon.log.7:[ INFO] 2018-09-10 21:44:09,798 (HZEventFilter:filter:164) HZEventFilter.filter - the system metadata for the index event shows dc39515e-440b-4673-9f63-962c7374bf48 having the same modification date as the SOLR server. Also both have the same replica list. So this event has been filtered out for indexing (no indexing).
rnahf@cn-orc-1:/var/log/dataone/index$
</pre>
Infrastructure - Bug #8655 (New): Synchronization died with OOM
https://redmine.dataone.org/issues/8655 (2018-07-13T11:24:16Z, Dave Vieglais &lt;dave.vieglais@gmail.com&gt;)
<p>d1-processing became unresponsive. The cn-synchronization log showed:</p>
<pre>[ERROR] 2018-07-12 18:28:26,875 [ProcessDaemonTask1] (SyncObjectTaskManager:run:84) java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: Java heap space
java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: Java heap space
        at java.util.concurrent.FutureTask.report(FutureTask.java:122)
        at java.util.concurrent.FutureTask.get(FutureTask.java:192)
        at org.dataone.cn.batch.synchronization.SyncObjectTaskManager.run(SyncObjectTaskManager.java:76)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.OutOfMemoryError: Java heap space
[ INFO] 2018-07-12 18:28:49,862 [ProcessDaemonTask1] (SyncObjectTaskManager:run:110) SyncObjectTaskManager Complete
[ WARN] 2018-07-12 20:41:15,788 [hz.client.2.Listener] (NodeTopicListener:onMessage:68) urn:node:OTS_NDC- NodeTopicListener Disabl
</pre>
<p>d1-processing is running with:</p>
<pre>-Djava.awt.headless=true -XX:UseParallelGC -Xmx4096M -Xms1024M -Xss1280k -XX:MaxPermSize=512M
</pre>
Member Nodes - Bug #8622 (New): IOE repository is not responding
https://redmine.dataone.org/issues/8622 (2018-06-18T19:04:55Z, Dave Vieglais &lt;dave.vieglais@gmail.com&gt;)
<p>Possibly related to <a class="issue tracker-4 status-1 priority-4 priority-default child parent" title="Story: Upgrade Member Node to current version of Metacat (IOE) (New)" href="https://redmine.dataone.org/issues/8244">#8244</a></p>
<p>The IOE repository is not responding at the expected baseURL of:</p>
<p><a href="https://data.rcg.montana.edu/catalog/d1/mn">https://data.rcg.montana.edu/catalog/d1/mn</a></p>
<p>and instead returns an error that an SSL connection is not available. Attempting http instead of https returns a 404 error.</p>
<p>Contact the server administrator and have them check the service availability.</p>
Infrastructure - Bug #8500 (New): Geohash not calculated properly
https://redmine.dataone.org/issues/8500 (2018-03-14T17:06:52Z, Dave Vieglais &lt;dave.vieglais@gmail.com&gt;)
<p>The metadata with pid = <code>{39A912EE-E2DC-4F50-A3DE-C0EF04BC1E88}</code> has bounding coords correctly indexed:</p>
<pre> <float name="eastBoundCoord">166.657</float>
<float name="westBoundCoord">-178.378</float>
<float name="southBoundCoord">-14.5596</float>
<float name="northBoundCoord">28.4536</float>
</pre>
<p>but the computed geohash is wrong:</p>
<pre> <arr name="geohash_9">
<str>ec5zd8st0</str>
</arr>
<arr name="geohash_1">
<str>e</str>
</arr>
<arr name="geohash_2">
<str>ec</str>
</arr>
<arr name="geohash_3">
<str>ec5</str>
</arr>
<arr name="geohash_4">
<str>ec5z</str>
</arr>
<arr name="geohash_5">
<str>ec5zd</str>
</arr>
<arr name="geohash_6">
<str>ec5zd8</str>
</arr>
<arr name="geohash_7">
<str>ec5zd8s</str>
</arr>
<arr name="geohash_8">
<str>ec5zd8st</str>
</arr>
</pre>
<p>Investigate whether this is a systemic issue or peculiar to certain types of metadata or to how coordinates are represented within the metadata.</p>
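<p>One plausible cause worth testing (an assumption, not a confirmed diagnosis): the bounding box crosses the antimeridian (westBoundCoord &gt; eastBoundCoord), and hashing the naive numeric midpoint of such a box lands near the Gulf of Guinea rather than the Pacific. A sketch (our code, not the indexer's) reproduces the <code>ec5...</code> prefix seen above:</p>

```python
BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

def geohash(lat, lon, precision=9):
    """Standard geohash encoding (longitude bit comes first)."""
    lat_rng, lon_rng = [-90.0, 90.0], [-180.0, 180.0]
    out, bits, nbits, even = [], 0, 0, True
    while len(out) < precision:
        rng, val = (lon_rng, lon) if even else (lat_rng, lat)
        mid = (rng[0] + rng[1]) / 2.0
        if val >= mid:
            bits = bits * 2 + 1
            rng[0] = mid
        else:
            bits = bits * 2
            rng[1] = mid
        even = not even
        nbits += 1
        if nbits == 5:
            out.append(BASE32[bits])
            bits = nbits = 0
    return "".join(out)

# Naive midpoint of the indexed bounding box; because west > east the box
# crosses the antimeridian, and the true center is near 174.14 E, not -5.86.
lat_mid = (28.4536 + (-14.5596)) / 2   # 6.947
lon_mid = (166.657 + (-178.378)) / 2   # -5.8605
```

<p>With these midpoints, <code>geohash(lat_mid, lon_mid)</code> begins with <code>ec5</code>, matching the bad index values, which supports the naive-midpoint hypothesis.</p>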
Infrastructure - Story #8173 (New): add checks for retrograde systemMetadata changes
https://redmine.dataone.org/issues/8173 (2017-09-01T19:42:33Z, Rob Nahf &lt;rnahf@epscor.unm.edu&gt;)
<p>With the ability to prioritize tasks and the introduction of parallelized index task processing, the effective queue is no longer guaranteed to be time-ordered. If two valid system metadata changes produce two tasks and the later change reaches the index first, the earlier task should be rejected, as its changes are out of date.</p>
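<p>A minimal sketch of such a check (hypothetical names; the real comparison would presumably use the task's <code>dateSysMetaModified</code> against what the index currently holds):</p>

```python
def should_apply(task_sysmeta_modified, indexed_sysmeta_modified):
    """Reject retrograde updates: apply only if the task's system metadata
    is at least as new as what is already indexed (None = not indexed yet)."""
    if indexed_sysmeta_modified is None:
        return True
    return task_sysmeta_modified >= indexed_sysmeta_modified
```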
Infrastructure - Bug #8076 (New): sysmeta can not be retrieved for some objects
https://redmine.dataone.org/issues/8076 (2017-04-22T01:52:05Z, Dave Vieglais &lt;dave.vieglais@gmail.com&gt;)
<p>It appears that a (possibly large) number of system metadata records may be invalid due to the jibx/jaxb transition.</p>
<p>For example:</p>
<pre>d1listobjects -x "2014-03-25" -y "2014-03-27" -C 500 -p 100

000000: 744 Bytes 2014-03-25T22:05:05Z text/csv ark:/13030/m50000sp/1/cadwsap-s3600587-002-main.csv
000001: 20.9 KiB 2014-03-25T22:04:57Z application/pdf ark:/13030/m50000sp/1/cadwsap-s3600587-002.pdf
000002: 302 Bytes 2014-03-25T22:05:09Z text/csv ark:/13030/m50000sp/1/cadwsap-s3600587-002-vuln.csv
000003: 4.8 KiB 2014-03-25T22:05:01Z FGDC-STD-001-1998 ark:/13030/m50000sp/1/cadwsap-s3600587-002.xml
000004: 4.0 KiB 2014-03-25T22:05:12Z http://www.openarchives.org/ore/terms ark:/13030/m50000sp/1/mrt-dataone-map.rdf
000005: 737 Bytes 2014-03-25T22:05:25Z text/csv ark:/13030/m50000t4/1/cadwsap-s1610004-004-main.csv
000006: 22.8 KiB 2014-03-25T22:05:15Z application/pdf ark:/13030/m50000t4/1/cadwsap-s1610004-004.pdf
000007: 1.7 KiB 2014-03-25T22:05:30Z text/csv ark:/13030/m50000t4/1/cadwsap-s1610004-004-vuln.csv
000008: 4.8 KiB 2014-03-25T22:05:23Z FGDC-STD-001-1998 ark:/13030/m50000t4/1/cadwsap-s1610004-004.xml
000009: 4.0 KiB 2014-03-25T22:05:35Z http://www.openarchives.org/ore/terms ark:/13030/m50000t4/1/mrt-dataone-map.rdf
000010: 750 Bytes 2014-03-25T22:05:46Z text/csv ark:/13030/m50000vk/1/cadwsap-s4300630-002-main.csv
000011: 22.9 KiB 2014-03-25T22:05:38Z application/pdf ark:/13030/m50000vk/1/cadwsap-s4300630-002.pdf
000012: 2.2 KiB 2014-03-25T22:05:50Z text/csv ark:/13030/m50000vk/1/cadwsap-s4300630-002-vuln.csv
000013: 4.8 KiB 2014-03-25T22:05:43Z FGDC-STD-001-1998 ark:/13030/m50000vk/1/cadwsap-s4300630-002.xml
000014: 4.0 KiB 2014-03-25T22:05:54Z http://www.openarchives.org/ore/terms ark:/13030/m50000vk/1/mrt-dataone-map.rdf
000015: 698 Bytes 2014-03-25T22:06:05Z text/csv ark:/13030/m50000w1/1/cadwsap-s1502277-001-main.csv
...
</pre>
Infrastructure - Bug #7919 (New): unloadable system metadata in CNs by Hazelcast
https://redmine.dataone.org/issues/7919 (2016-10-26T16:22:56Z, Rob Nahf &lt;rnahf@epscor.unm.edu&gt;)
<p>Looking through the metacat logs, I found many instances where the HzSystemMetadataMap could not load system metadata for particular pids. Most had dryad in the pid (~1216), and another 139 are from elsewhere.</p>
<p>A random sample showed that the system metadata could not be retrieved via /meta, although the pids could be retrieved from the Dryad MN.</p>
<p>This appears to be another type of half-created content on the CN.</p>
<pre>rnahf@cn-ucsb-1:~$ grep 'could not load system metadata' /var/metacat/logs/metacat.log | cut -c60- | sort | uniq | grep -v dryad | wc -l
139
rnahf@cn-ucsb-1:~$ grep 'could not load system metadata' /var/metacat/logs/metacat.log | cut -c60- | sort | uniq | grep dryad | wc -l
1216
</pre>
Python GMN - Bug #7740 (New): GMN fails to write system metadata during a refresh under unknown c...
https://redmine.dataone.org/issues/7740 (2016-04-14T14:56:22Z, Mark Servilla &lt;mark.servilla@gmail.com&gt;)
<p>Some MNs (NMEPSCOR and TERN, for example) are reporting that GMN fails to write system metadata during a refresh. It is not clear under what conditions this occurs.</p>
<pre>2016-03-24 06:00:15 ERROR SysMetaRefresher process_system_metadata_refresh_queue 30401 140171679168320 System Metadata update failed with internal exception:
Traceback (most recent call last):
  File "/var/local/dataone/gmn/local/lib/python2.7/site-packages/service/mn/management/commands/process_system_metadata_refresh_queue.py", line 161, in _process_refresh_task
    self._refresh(task)
  File "/var/local/dataone/gmn/local/lib/python2.7/site-packages/service/mn/management/commands/process_system_metadata_refresh_queue.py", line 174, in _refresh
    self._update_sys_meta(sys_meta)
  File "/var/local/dataone/gmn/local/lib/python2.7/site-packages/service/mn/management/commands/process_system_metadata_refresh_queue.py", line 224, in _update_sys_meta
    mn.auth.set_access_policy(pid, sys_meta.accessPolicy)
  File "/var/local/dataone/gmn/lib/python2.7/site-packages/service/mn/auth.py", line 164, in set_access_policy
    with mn.sysmeta_store.sysmeta(pid, sci_obj.serial_version) as sysmeta:
  File "/var/local/dataone/gmn/lib/python2.7/site-packages/service/mn/sysmeta_store.py", line 97, in __init__
    with open(sysmeta_path, 'rb') as f:
IOError: [Errno 2] No such file or directory: '/var/local/dataone/gmn/lib/python2.7/site-packages/service/./stores/sysmeta/112/129/aekos.org.au%2Fcollection%2Fnsw.gov.au%2Fnsw_atlas%2Fvis_flora_module%2FP_ADHO4FB3.20150515.5'
</pre>
Python GMN - Bug #7738 (New): GMN perpetually retries system metadata updates after failures
https://redmine.dataone.org/issues/7738 (2016-04-14T14:26:33Z, Mark Servilla &lt;mark.servilla@gmail.com&gt;)
<p>It is believed that GMN perpetually attempts to update system metadata even when processing fails for specific system metadata objects. More robust logic needs to be implemented so that continuous update attempts (and the resulting copious system metadata calls to the CN) are averted.</p>
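<p>A sketch of what that more robust logic might look like (a hypothetical helper, not GMN's actual code): bounded attempts with exponential backoff, after which the failure is surfaced instead of the task being retried forever.</p>

```python
import time

def refresh_with_backoff(task, do_refresh, max_attempts=5, base_delay=1.0):
    """Try a sysmeta refresh a bounded number of times, then give up."""
    for attempt in range(max_attempts):
        try:
            return do_refresh(task)
        except Exception:
            if attempt == max_attempts - 1:
                raise                      # mark failed; stop hammering the CN
            time.sleep(base_delay * 2 ** attempt)
```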
Infrastructure - Bug #4674 (New): Ask Judith, Mike and Virginia Perez.2.1 to obsolete those pids w...
https://redmine.dataone.org/issues/4674 (2014-03-31T18:02:41Z, Jing Tao &lt;tao@nceas.ucsb.edu&gt;)
<p>doi:10.5063/AA/Virginia Perez.2.1<br>
judith botha.1.1<br>
judith botha.2.1<br>
judith kruger.1.1<br>
judith kruger.2.1<br>
judith kruger.3.1<br>
judith kruger.4.1<br>
judith kruger.5.1<br>
doi:10.6085/AA/ SHLX00_XXXITV2XLSR03_20111128.40.1 (PISCO)</p>
Infrastructure - Bug #3675 (New): package relationships not available for archived objects
https://redmine.dataone.org/issues/3675 (2013-03-20T19:12:28Z, Rob Nahf &lt;rnahf@epscor.unm.edu&gt;)
<p>Currently, records for obsoleted items are maintained in the Solr index, so their resourceMap, documents, and documentedBy relationships remain available and people can "investigate the past". However, those same relationships are not available for archived items, leaving an incomplete solution for this use case (accessing the package relationships of out-of-date content).</p>
<p>Archive is used to limit discoverability, but it also eliminates the ability to navigate the package relationships. </p>
<p>Note: archive is intended for cases where the owner does not want to update the object but simply to remove it. However, nothing prevents the owner from archiving obsoleted content. So, in fact, the ability to navigate the package relationships of out-of-date content cannot be guaranteed, and is subject to the individual data management practices of content owners.</p>