DataONE Tasks: Issues
https://redmine.dataone.org/ | updated 2019-07-23T16:41:33Z
Member Nodes - Task #8829 (New): get DKAN connected to GMN
https://redmine.dataone.org/issues/8829 | 2019-07-23T16:41:33Z | Amy Forrester (aforres4@utk.edu)
<p>Matt J talked to Carrie -- they recently installed DKAN and are now having trouble programmatically uploading to GMN as Ed had been doing. She was interested in MetacatUI and in the possibility of a web-based form for uploading content. NKN was unaware that this might be possible with GMN and Metacat. She also had lots of good comments about our new service models. We should definitely follow up.</p>
<p>At a minimum we need to help them get DKAN connected to GMN, as none of their new content is making it into GMN.</p>
Infrastructure - Story #8823 (New): Recent Apache and OpenSSL combinations break connectivity on ...
https://redmine.dataone.org/issues/8823 | 2019-06-19T02:03:44Z | Dave Vieglais (dave.vieglais@gmail.com)
<p>The latest Ubuntu 18.04 release of Apache is 2.4.29 and OpenSSL is 1.1.1.</p>
<p>This combination creates a significant delay in TLS renegotiation that results from the Apache config option on the CNs:</p>
<pre>SSLVerifyClient none
<Location "/cn">
<If " ! ( %{HTTP_USER_AGENT} =~ /(windows|chrome|mozilla|safari|webkit)/i )">
SSLVerifyClient optional
</If>
</Location>
</pre>
<p>This configuration is intended to disable client certificate authentication for web browsers while allowing it for other clients. The approach worked fine with older Apache / OpenSSL combinations, but the new combination introduces a several-second wait when the server discovers the client is not a web browser and tells it to reconnect with the option of including a client certificate.</p>
<p>The latest released version of Apache is 2.4.39, which is available through a widely used third-party PPA. It has been installed so far on dev-2, sandbox, stage, and stage-2 with the following process:</p>
<pre>sudo add-apt-repository ppa:ondrej/apache2
sudo apt update
sudo apt dist-upgrade
sudo systemctl restart apache2
</pre>
<p>This installs Apache 2.4.39 and OpenSSL 1.1.1c, which appears to resolve the bug in the 2.4.29 / 1.1.1 combination.</p>
<p>One issue with the update is that Apache now offers TLSv1.3 by default. That is generally desirable, except that it appears to break at least some Python clients, which fail to connect and receive a 403 error. For example:</p>
<pre>$ python3
>>> import requests
>>> r = requests.get("https://cn-sandbox-ucsb-1.test.dataone.org/cn/v2/monitor/ping")
>>> r.status_code
403
</pre>
<p>That TLSv1.3 is the problem was verified on cn-stage-unm-2 by configuring Apache with:</p>
<pre> SSLProtocol all -TLSv1.3 -SSLv2 -SSLv3
</pre>
<p>to disable TLSv1.3. After this change the Python client was able to connect as expected.</p>
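<p>For a Python client that cannot wait for the server-side change, a minimal client-side sketch (not from the issue) is to cap the client's own TLS version at 1.2 using the standard ssl module:</p>

```python
import ssl

def tls12_capped_context():
    # Cap the client at TLS 1.2 so the server never negotiates TLS 1.3.
    # This is a client-side analogue of the server-side SSLProtocol change
    # described above; whether it is acceptable depends on local policy.
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    ctx.maximum_version = ssl.TLSVersion.TLSv1_2
    return ctx
```

<p>The resulting context can be passed to urllib.request.urlopen(url, context=ctx); requests would need a custom transport adapter to use it.</p>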
<p>A workaround that would allow TLSv1.3 to remain enabled has not yet been researched.</p>
<p>It is not clear if this issue applies to other clients such as R and Java, so until we learn one way or the other, TLSv1.3 will be disabled on the CNs.</p>
<p><del>This issue will likely apply to Member Nodes as well once TLSv1.3 is generally available or if MNs choose to install Apache 2.4.39.</del> CORRECTION: this issue only applies when attempting to renegotiate TLS after headers have been transferred, so it will not typically apply to an MN.</p>
Infrastructure - Bug #8722 (New): Object in search index but system metadata is not available
https://redmine.dataone.org/issues/8722 | 2018-10-01T18:38:25Z | Dave Vieglais (dave.vieglais@gmail.com)
<p>The object: <code>urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2</code> is a CSV file. </p>
<p>Resolve and getSystemMetadata return a 404 for that object. It is, however, available in the search index:</p>
<p><a href="https://cn.dataone.org/cn/v2/query/solr/?start=0&rows=10&fl=*&q=id%3A%22urn%5C%3Auuid%5C%3A4923cca4%5C-c155%5C-4edc%5C-b901%5C-f6e3b4f2e7b2%22">https://cn.dataone.org/cn/v2/query/solr/?start=0&rows=10&fl=*&q=id%3A%22urn%5C%3Auuid%5C%3A4923cca4%5C-c155%5C-4edc%5C-b901%5C-f6e3b4f2e7b2%22</a></p>
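<p>The Solr query above escapes the PID's reserved characters with backslashes and then percent-encodes the whole q parameter. A sketch of how that query string can be built (helper name is mine):</p>

```python
import re
from urllib.parse import quote

def solr_id_query(pid):
    # Backslash-escape Solr query special characters in the PID, then
    # percent-encode the whole q parameter, reproducing the query style
    # used in the URL above.
    escaped = re.sub(r'([+\-!(){}\[\]^"~*?:\\/ ])', r'\\\1', pid)
    return quote('id:"%s"' % escaped, safe='')
```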
<p>cn-synchronization.log* reports:</p>
<pre>cn-synchronization.log.32:[ INFO] 2018-09-21 15:37:32,418 [ProcessDaemonTask2] (SyncObjectTask:executeTransferObjectTask:293) Task-urn:node:KNB-urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 received
cn-synchronization.log.32:[ INFO] 2018-09-21 15:37:32,418 [ProcessDaemonTask2] (SyncObjectTask:executeTransferObjectTask:310) Task-urn:node:KNB-urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 submitted for execution
cn-synchronization.log.32:[ INFO] 2018-09-21 15:37:32,418 [SynchronizeTask322] (V2TransferObjectTask:call:202) Task-urn:node:KNB-urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 - Locking task, attempt 1
cn-synchronization.log.32:[ INFO] 2018-09-21 15:37:32,469 [SynchronizeTask322] (V2TransferObjectTask:call:207) Task-urn:node:KNB-urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 - Processing SyncObject
cn-synchronization.log.32:[ INFO] 2018-09-21 15:37:32,575 [SynchronizeTask322] (V2TransferObjectTask:retrieveMNSystemMetadata:317) Task-urn:node:KNB-urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 - Retrieved SystemMetadata Identifier:urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 from node urn:node:KNB for ObjectInfo Identifier urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2
cn-synchronization.log.32:[ INFO] 2018-09-21 15:37:36,547 [SynchronizeTask322] (V2TransferObjectTask:createObject:730) Task-urn:node:KNB-urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 - Start CreateObject
cn-synchronization.log.32:[ INFO] 2018-09-21 15:37:40,388 [SynchronizeTask322] (V2TransferObjectTask:call:234) Task-urn:node:KNB-urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 - Unlocked Pid.
cn-synchronization.log.32:[ERROR] 2018-09-21 15:37:40,388 [SynchronizeTask322] (V2TransferObjectTask:call:269) Task-urn:node:KNB-urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 - UnrecoverableException: urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 cn.createObject failed: The identifier is already in use by an existing object. - InvalidRequest - The identifier is already in use by an existing object.
cn-synchronization.log.32:org.dataone.cn.batch.exceptions.UnrecoverableException: urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 cn.createObject failed: The identifier is already in use by an existing object.
cn-synchronization.log.32:[ WARN] 2018-09-21 15:37:40,389 [SynchronizeTask322] (SyncFailedTask:submitSynchronizationFailed:116) Task-urn:node:KNB-urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 - SynchronizationFailed: detail code: 6001 id:urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 nodeId:urn:node:CNUCSB1 description:Synchronization task of [PID::] urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 [::PID] failed. Cause: InvalidRequest: The identifier is already in use by an existing object.
cn-synchronization.log.32:[ INFO] 2018-09-21 15:37:40,460 [SynchronizeTask322] (V2TransferObjectTask:call:294) Task-urn:node:KNB-urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 - exiting with callState: FAILED
cn-synchronization.log.32:[ INFO] 2018-09-21 15:37:40,461 [ProcessDaemonTask2] (SyncObjectTask:reapFutures:372) Task-urn:node:KNB-urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 SyncObjectState: FAILED
cn-synchronization.log.33:[ INFO] 2018-09-21 13:37:17,203 [ProcessDaemonTask2] (SyncObjectTask:executeTransferObjectTask:293) Task-urn:node:KNB-urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 received
cn-synchronization.log.33:[ INFO] 2018-09-21 13:37:17,204 [ProcessDaemonTask2] (SyncObjectTask:executeTransferObjectTask:310) Task-urn:node:KNB-urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 submitted for execution
cn-synchronization.log.33:[ INFO] 2018-09-21 13:37:17,205 [SynchronizeTask37] (V2TransferObjectTask:call:202) Task-urn:node:KNB-urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 - Locking task, attempt 1
cn-synchronization.log.33:[ INFO] 2018-09-21 13:37:17,255 [SynchronizeTask37] (V2TransferObjectTask:call:207) Task-urn:node:KNB-urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 - Processing SyncObject
cn-synchronization.log.33:[ INFO] 2018-09-21 13:37:17,462 [SynchronizeTask37] (V2TransferObjectTask:retrieveMNSystemMetadata:317) Task-urn:node:KNB-urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 - Retrieved SystemMetadata Identifier:urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 from node urn:node:KNB for ObjectInfo Identifier urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2
cn-synchronization.log.33:[ INFO] 2018-09-21 13:37:21,034 [SynchronizeTask37] (V2TransferObjectTask:createObject:730) Task-urn:node:KNB-urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 - Start CreateObject
cn-synchronization.log.33:[ INFO] 2018-09-21 13:37:25,672 [SynchronizeTask37] (V2TransferObjectTask:call:234) Task-urn:node:KNB-urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 - Unlocked Pid.
cn-synchronization.log.33:[ERROR] 2018-09-21 13:37:25,673 [SynchronizeTask37] (V2TransferObjectTask:call:269) Task-urn:node:KNB-urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 - UnrecoverableException: urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 cn.createObject failed: The identifier is already in use by an existing object. - InvalidRequest - The identifier is already in use by an existing object.
cn-synchronization.log.33:org.dataone.cn.batch.exceptions.UnrecoverableException: urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 cn.createObject failed: The identifier is already in use by an existing object.
cn-synchronization.log.33:[ WARN] 2018-09-21 13:37:25,674 [SynchronizeTask37] (SyncFailedTask:submitSynchronizationFailed:116) Task-urn:node:KNB-urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 - SynchronizationFailed: detail code: 6001 id:urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 nodeId:urn:node:CNUCSB1 description:Synchronization task of [PID::] urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 [::PID] failed. Cause: InvalidRequest: The identifier is already in use by an existing object.
cn-synchronization.log.33:[ INFO] 2018-09-21 13:37:25,740 [SynchronizeTask37] (V2TransferObjectTask:call:294) Task-urn:node:KNB-urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 - exiting with callState: FAILED
cn-synchronization.log.33:[ INFO] 2018-09-21 13:37:25,740 [ProcessDaemonTask2] (SyncObjectTask:reapFutures:372) Task-urn:node:KNB-urn:uuid:4923cca4-c155-4edc-b901-f6e3b4f2e7b2 SyncObjectState: FAILED
</pre>
<p>We need to determine how an object could be added to the search index and apparently replicated, yet not exist in the system metadata store.</p>
Infrastructure - Bug #8696 (New): double indexing of a resource map and another not processed bec...
https://redmine.dataone.org/issues/8696 | 2018-09-12T00:18:51Z | Rob Nahf (rnahf@epscor.unm.edu)
<p>In production, the ORE 'a1a0e96a-3cde-4f3c-829c-29650b09f22b' was not processed because a member was also referenced by the ORE it obsoleted, 'dc39515e-440b-4673-9f63-962c7374bf48'. The task failed without being requeued. Below is the log output.</p>
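<p>The wait-and-give-up behavior visible in the log below can be sketched as a simple polling loop; returning False marks the point where, per this report, the task should be requeued instead of dropped (function and parameter names are illustrative, not from the indexer code):</p>

```python
import time

def wait_for_shared_member(is_busy, attempts=10, delay=0.5):
    # Poll while another thread is indexing a resource map that shares a
    # referenced member; give up after `attempts` tries. A False return
    # should trigger requeueing rather than silently failing the task.
    for _ in range(attempts):
        if not is_busy():
            return True
        time.sleep(delay)
    return False
```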
<pre>rnahf@cn-orc-1:/var/log/dataone/index$ grep a1a0e96a-3cde-4f3c-829c-29650b09f22b cn-index-processor-daemon.log.*
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:27,384 (IndexTaskProcessor:saveTask:865) IndexTaskProcess.saveTask save the index task a1a0e96a-3cde-4f3c-829c-29650b09f22b
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:27,384 (IndexTaskProcessor:getNextIndexTask:610) Start of indexing pid: a1a0e96a-3cde-4f3c-829c-29650b09f22b
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:34,832 (IndexTaskProcessor:getNextIndexTask:664) the original index task - IndexTask [id=18085996, pid=a1a0e96a-3cde-4f3c-829c-29650b09f22b, formatid=http://www.openarchives.org/ore/terms, objectPath=/var/metacat/data/autogen.2018091015425874434.1, dateSysMetaModified=1536087134490, deleted=false, taskModifiedDate=1536619467383, priority=3, status=IN PROCESS]
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:34,832 (IndexTaskProcessor:getNextIndexTask:671) the new index task - IndexTask [id=18085996, pid=a1a0e96a-3cde-4f3c-829c-29650b09f22b, formatid=http://www.openarchives.org/ore/terms, objectPath=/var/metacat/data/autogen.2018091015425874434.1, dateSysMetaModified=1536087134490, deleted=false, taskModifiedDate=1536619467383, priority=3, status=IN PROCESS]
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:34,901 (IndexTaskProcessor:checkReadinessProcessResourceMap:369) ###################Another resource map is process the referenced id ee73cf7f-1005-4b89-bab9-3a7fa01d27c6 as well. So the thread to process id a1a0e96a-3cde-4f3c-829c-29650b09f22b has to wait 0.5 seconds.
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:35,402 (IndexTaskProcessor:checkReadinessProcessResourceMap:369) ###################Another resource map is process the referenced id ee73cf7f-1005-4b89-bab9-3a7fa01d27c6 as well. So the thread to process id a1a0e96a-3cde-4f3c-829c-29650b09f22b has to wait 0.5 seconds.
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:35,902 (IndexTaskProcessor:checkReadinessProcessResourceMap:369) ###################Another resource map is process the referenced id ee73cf7f-1005-4b89-bab9-3a7fa01d27c6 as well. So the thread to process id a1a0e96a-3cde-4f3c-829c-29650b09f22b has to wait 0.5 seconds.
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:36,402 (IndexTaskProcessor:checkReadinessProcessResourceMap:369) ###################Another resource map is process the referenced id ee73cf7f-1005-4b89-bab9-3a7fa01d27c6 as well. So the thread to process id a1a0e96a-3cde-4f3c-829c-29650b09f22b has to wait 0.5 seconds.
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:36,903 (IndexTaskProcessor:checkReadinessProcessResourceMap:369) ###################Another resource map is process the referenced id ee73cf7f-1005-4b89-bab9-3a7fa01d27c6 as well. So the thread to process id a1a0e96a-3cde-4f3c-829c-29650b09f22b has to wait 0.5 seconds.
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:37,403 (IndexTaskProcessor:checkReadinessProcessResourceMap:369) ###################Another resource map is process the referenced id ee73cf7f-1005-4b89-bab9-3a7fa01d27c6 as well. So the thread to process id a1a0e96a-3cde-4f3c-829c-29650b09f22b has to wait 0.5 seconds.
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:37,903 (IndexTaskProcessor:checkReadinessProcessResourceMap:369) ###################Another resource map is process the referenced id ee73cf7f-1005-4b89-bab9-3a7fa01d27c6 as well. So the thread to process id a1a0e96a-3cde-4f3c-829c-29650b09f22b has to wait 0.5 seconds.
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:38,403 (IndexTaskProcessor:checkReadinessProcessResourceMap:369) ###################Another resource map is process the referenced id ee73cf7f-1005-4b89-bab9-3a7fa01d27c6 as well. So the thread to process id a1a0e96a-3cde-4f3c-829c-29650b09f22b has to wait 0.5 seconds.
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:38,904 (IndexTaskProcessor:checkReadinessProcessResourceMap:369) ###################Another resource map is process the referenced id ee73cf7f-1005-4b89-bab9-3a7fa01d27c6 as well. So the thread to process id a1a0e96a-3cde-4f3c-829c-29650b09f22b has to wait 0.5 seconds.
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:39,404 (IndexTaskProcessor:checkReadinessProcessResourceMap:369) ###################Another resource map is process the referenced id ee73cf7f-1005-4b89-bab9-3a7fa01d27c6 as well. So the thread to process id a1a0e96a-3cde-4f3c-829c-29650b09f22b has to wait 0.5 seconds.
cn-index-processor-daemon.log.6:[ERROR] 2018-09-10 22:44:39,904 (IndexTaskProcessor:checkReadinessProcessResourceMap:384) We waited for another thread to finish indexing a resource map which has the referenced id ee73cf7f-1005-4b89-bab9-3a7fa01d27c6 for a while. Now we quited and can't index id a1a0e96a-3cde-4f3c-829c-29650b09f22b
cn-index-processor-daemon.log.6:[ERROR] 2018-09-10 22:44:39,904 (IndexTaskProcessor:processTask:297) Unable to process task for pid: a1a0e96a-3cde-4f3c-829c-29650b09f22b
cn-index-processor-daemon.log.6:java.lang.Exception: We waited for another thread to finish indexing a resource map which has the referenced id ee73cf7f-1005-4b89-bab9-3a7fa01d27c6 for a while. Now we quited and can't index id a1a0e96a-3cde-4f3c-829c-29650b09f22b
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:39,906 (IndexTaskProcessor:newOrFailedIndexTaskExists:890) IndexTaskProcess.newOrFailedIndexTaskExists for id a1a0e96a-3cde-4f3c-829c-29650b09f22b
rnahf@cn-orc-1:/var/log/dataone/index$ date
Tue Sep 11 23:46:56 UTC 2018
rnahf@cn-orc-1:/var/log/dataone/index$ grep dc39515e-440b-4673-9f63-962c7374bf48 cn-index-processor-daemon.log.*
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:12,133 (HZEventFilter:filter:127) HZEventFilter.filter - the system metadata for the index event shows shows dc39515e-440b-4673-9f63-962c7374bf48 having a newer version than the SOLR server. So this event should be granted for indexing.
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:13,347 (HZEventFilter:filter:127) HZEventFilter.filter - the system metadata for the index event shows shows dc39515e-440b-4673-9f63-962c7374bf48 having a newer version than the SOLR server. So this event should be granted for indexing.
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:18,677 (IndexTaskProcessor:saveTask:865) IndexTaskProcess.saveTask save the index task dc39515e-440b-4673-9f63-962c7374bf48
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:18,677 (IndexTaskProcessor:getNextIndexTask:610) Start of indexing pid: dc39515e-440b-4673-9f63-962c7374bf48
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:25,783 (IndexTaskProcessor:getNextIndexTask:664) the original index task - IndexTask [id=18086020, pid=dc39515e-440b-4673-9f63-962c7374bf48, formatid=http://www.openarchives.org/ore/terms, objectPath=/var/metacat/data/autogen.2017072514144216470.1, dateSysMetaModified=1536087137440, deleted=false, taskModifiedDate=1536619458675, priority=3, status=IN PROCESS]
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:25,783 (IndexTaskProcessor:getNextIndexTask:671) the new index task - IndexTask [id=18086020, pid=dc39515e-440b-4673-9f63-962c7374bf48, formatid=http://www.openarchives.org/ore/terms, objectPath=/var/metacat/data/autogen.2017072514144216470.1, dateSysMetaModified=1536087137440, deleted=false, taskModifiedDate=1536619458675, priority=3, status=IN PROCESS]
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:27,221 (IndexTaskProcessor:processTask:284) *********************start to process update index task for dc39515e-440b-4673-9f63-962c7374bf48 in thread 20
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:43,513 (IndexTaskProcessor:processTask:288) *********************end to process update index task for dc39515e-440b-4673-9f63-962c7374bf48 in thread 20
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:43,519 (IndexTaskProcessor:processTask:315) Indexing complete for pid: dc39515e-440b-4673-9f63-962c7374bf48
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:46:11,604 (IndexTaskProcessor:saveTask:865) IndexTaskProcess.saveTask save the index task dc39515e-440b-4673-9f63-962c7374bf48
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:46:11,604 (IndexTaskProcessor:getNextIndexTask:610) Start of indexing pid: dc39515e-440b-4673-9f63-962c7374bf48
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:46:18,731 (IndexTaskProcessor:getNextIndexTask:664) the original index task - IndexTask [id=18086015, pid=dc39515e-440b-4673-9f63-962c7374bf48, formatid=http://www.openarchives.org/ore/terms, objectPath=/var/metacat/data/autogen.2017072514144216470.1, dateSysMetaModified=1536087137440, deleted=false, taskModifiedDate=1536619571603, priority=3, status=IN PROCESS]
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:46:18,732 (IndexTaskProcessor:getNextIndexTask:671) the new index task - IndexTask [id=18086015, pid=dc39515e-440b-4673-9f63-962c7374bf48, formatid=http://www.openarchives.org/ore/terms, objectPath=/var/metacat/data/autogen.2017072514144216470.1, dateSysMetaModified=1536087137440, deleted=false, taskModifiedDate=1536619571603, priority=3, status=IN PROCESS]
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:46:20,164 (IndexTaskProcessor:processTask:284) *********************start to process update index task for dc39515e-440b-4673-9f63-962c7374bf48 in thread 20
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:46:36,252 (IndexTaskProcessor:processTask:288) *********************end to process update index task for dc39515e-440b-4673-9f63-962c7374bf48 in thread 20
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:46:36,255 (IndexTaskProcessor:processTask:315) Indexing complete for pid: dc39515e-440b-4673-9f63-962c7374bf48
cn-index-processor-daemon.log.7:[ INFO] 2018-09-10 21:44:09,798 (HZEventFilter:compareRaplicaList:256) HZEventFilter.compareReplicaList - the system metadata for the index event shows dc39515e-440b-4673-9f63-962c7374bf48 having the same replica list as the solr doc.
cn-index-processor-daemon.log.7:[ INFO] 2018-09-10 21:44:09,798 (HZEventFilter:filter:164) HZEventFilter.filter - the system metadata for the index event shows dc39515e-440b-4673-9f63-962c7374bf48 having the same modification date as the SOLR server. Also both have the same replica list. So this event has been filtered out for indexing (no indexing).
rnahf@cn-orc-1:/var/log/dataone/index$
</pre>
Infrastructure - Bug #8655 (New): Synchronization died with OOM
https://redmine.dataone.org/issues/8655 | 2018-07-13T11:24:16Z | Dave Vieglais (dave.vieglais@gmail.com)
<p>d1-processing became unresponsive. The cn-synchronization log showed:</p>
<pre>[ERROR] 2018-07-12 18:28:26,875 [ProcessDaemonTask1] (SyncObjectTaskManager:run:84) java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: Java heap space
java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: Java heap space
        at java.util.concurrent.FutureTask.report(FutureTask.java:122)
        at java.util.concurrent.FutureTask.get(FutureTask.java:192)
        at org.dataone.cn.batch.synchronization.SyncObjectTaskManager.run(SyncObjectTaskManager.java:76)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.OutOfMemoryError: Java heap space
[ INFO] 2018-07-12 18:28:49,862 [ProcessDaemonTask1] (SyncObjectTaskManager:run:110) SyncObjectTaskManager Complete
[ WARN] 2018-07-12 20:41:15,788 [hz.client.2.Listener] (NodeTopicListener:onMessage:68) urn:node:OTS_NDC- NodeTopicListener Disabl</pre>
<p>d1-processing is running with:</p>
<pre>-Djava.awt.headless=true -XX:UseParallelGC -Xmx4096M -Xms1024M -Xss1280k -XX:MaxPermSize=512M</pre>
Member Nodes - Bug #8622 (New): IOE repository is not responding
https://redmine.dataone.org/issues/8622 | 2018-06-18T19:04:55Z | Dave Vieglais (dave.vieglais@gmail.com)
<p>Possibly related to <a class="issue tracker-4 status-1 priority-4 priority-default child parent" title="Story: Upgrade Member Node to current version of Metacat (IOE) (New)" href="https://redmine.dataone.org/issues/8244">#8244</a></p>
<p>The IOE repository is not responding at the expected baseURL of:</p>
<p><a href="https://data.rcg.montana.edu/catalog/d1/mn">https://data.rcg.montana.edu/catalog/d1/mn</a></p>
<p>and instead returns an error indicating that an SSL connection is not available. Attempting to use http instead of https returns a 404 error.</p>
<p>Contact the server administrator and have them check the service availability.</p>
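<p>A minimal availability probe one could run before contacting the administrator, assuming the node exposes a ping endpoint under its baseURL (the v2 path here is an assumption for this node):</p>

```python
import urllib.request

def mn_ping_ok(base_url, timeout=10):
    # Probe the node's ping endpoint; any connection, TLS, or HTTP error
    # is reported as the service being unavailable.
    url = base_url.rstrip("/") + "/v2/monitor/ping"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False
```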
Python GMN - Bug #7740 (New): GMN fails to write system metadata during a refresh under unknown c...
https://redmine.dataone.org/issues/7740 | 2016-04-14T14:56:22Z | Mark Servilla (mark.servilla@gmail.com)
<p>Some MNs (NMEPSCOR and TERN, for example) are reporting that GMN fails to write system metadata during a refresh. It is not clear under what conditions this occurs.</p>
<pre>2016-03-24 06:00:15 ERROR SysMetaRefresher process_system_metadata_refresh_queue 30401 140171679168320 System Metadata update failed with internal exception:
Traceback (most recent call last):
  File "/var/local/dataone/gmn/local/lib/python2.7/site-packages/service/mn/management/commands/process_system_metadata_refresh_queue.py", line 161, in _process_refresh_task
    self._refresh(task)
  File "/var/local/dataone/gmn/local/lib/python2.7/site-packages/service/mn/management/commands/process_system_metadata_refresh_queue.py", line 174, in _refresh
    self._update_sys_meta(sys_meta)
  File "/var/local/dataone/gmn/local/lib/python2.7/site-packages/service/mn/management/commands/process_system_metadata_refresh_queue.py", line 224, in _update_sys_meta
    mn.auth.set_access_policy(pid, sys_meta.accessPolicy)
  File "/var/local/dataone/gmn/lib/python2.7/site-packages/service/mn/auth.py", line 164, in set_access_policy
    with mn.sysmeta_store.sysmeta(pid, sci_obj.serial_version) as sysmeta:
  File "/var/local/dataone/gmn/lib/python2.7/site-packages/service/mn/sysmeta_store.py", line 97, in __init__
    with open(sysmeta_path, 'rb') as f:
IOError: [Errno 2] No such file or directory: '/var/local/dataone/gmn/lib/python2.7/site-packages/service/./stores/sysmeta/112/129/aekos.org.au%2Fcollection%2Fnsw.gov.au%2Fnsw_atlas%2Fvis_flora_module%2FP_ADHO4FB3.20150515.5'</pre>
Python GMN - Bug #7738 (New): GMN perpetually retries system metadata updates after failures
https://redmine.dataone.org/issues/7738 | 2016-04-14T14:26:33Z | Mark Servilla (mark.servilla@gmail.com)
<p>It is believed that GMN perpetually attempts to update system metadata even when the update processing fails for specific system metadata objects. More robust retry logic needs to be implemented so that continuous attempts to update system metadata (and the resulting copious calls to the CN for system metadata) are averted.</p>
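<p>One common shape for such logic is exponential backoff with a ceiling, so a persistently failing object is retried less and less often instead of on every pass. A sketch (the numbers are illustrative, not a GMN setting):</p>

```python
def next_retry_delay(attempt, base=60, cap=86400):
    # Delay in seconds before the next retry: 60, 120, 240, ...
    # doubling per failed attempt, capped at one day.
    return min(base * (2 ** attempt), cap)
```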
Python GMN - Bug #7651 (New): GMN returns two different dateSysMetadataModified dateTime stamps f...
https://redmine.dataone.org/issues/7651 | 2016-02-22T23:42:19Z | Mark Servilla (mark.servilla@gmail.com)
<p>GMN returns two different dateSysMetadataModified dateTime stamps for the same object: one in MNRead.getSystemMetadata() and the other in MNRead.listObjects():</p>
<pre>$ curl -s -X GET "https://gmn.lternet.edu/mn/v1/meta/doi:10.6073/AA/knb-lter-bes.417.47" | xml fo
<?xml version="1.0"?>
<ns1:systemMetadata xmlns:ns1="http://ns.dataone.org/service/types/v1">
  <serialVersion>4</serialVersion>
  <identifier>doi:10.6073/AA/knb-lter-bes.417.47</identifier>
  <formatId>eml://ecoinformatics.org/eml-2.0.1</formatId>
  <size>11435</size>
  <checksum algorithm="MD5">84a5a7f446bed53b2a5f9090dfc2e7d9</checksum>
  <submitter>CN=urn:node:LTER,DC=dataone,DC=org</submitter>
  <rightsHolder>uid=BES,o=LTER,dc=ecoinformatics,dc=org</rightsHolder>
  <accessPolicy>
    <allow>
      <subject>public</subject>
      <permission>read</permission>
    </allow>
    <allow>
      <subject>uid="BES",o=lter,dc=ecoinformatics,dc=org</subject>
      <permission>read</permission>
      <permission>write</permission>
      <permission>changePermission</permission>
    </allow>
  </accessPolicy>
  <obsoletes>doi:10.6073/AA/knb-lter-bes.417.46</obsoletes>
  <obsoletedBy>doi:10.6073/AA/knb-lter-bes.417.48</obsoletedBy>
  <dateUploaded>2015-03-06T07:09:13.931577</dateUploaded>
  <dateSysMetadataModified>2016-01-13T13:01:44.579484Z</dateSysMetadataModified>
  <originMemberNode>urn:node:LTER</originMemberNode>
  <authoritativeMemberNode>urn:node:LTER</authoritativeMemberNode>
</ns1:systemMetadata></pre>
<p>and</p>
<pre>$ curl -s -X GET "https://gmn.lternet.edu/mn/v1/object?fromDate=2012-06-26T10:40:00.000+00:00&toDate=2012-06-26T10:45:00.000+00:00" | xml fo
<?xml version="1.0"?>
<ns1:objectList xmlns:ns1="http://ns.dataone.org/service/types/v1">
  <objectInfo>
    <identifier>doi:10.6073/AA/knb-lter-bes.417.47</identifier>
    <formatId>eml://ecoinformatics.org/eml-2.0.1</formatId>
    <checksum algorithm="MD5">84a5a7f446bed53b2a5f9090dfc2e7d9</checksum>
    <dateSysMetadataModified>2012-06-26T10:41:36.229</dateSysMetadataModified>
    <size>11435</size>
  </objectInfo>
</ns1:objectList></pre>
<p>This state occurs after the GMN receives an MNAuthorization.systemMetadataChanged() call from the CN to refresh system metadata.</p>
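<p>A small sketch of the consistency check implied by this report; note the two stamps above also differ in form (one has a zone designator, one does not), so treating a naive stamp as UTC is an assumption:</p>

```python
from datetime import datetime, timezone

def parse_d1_ts(ts):
    # Parse an ISO 8601 stamp; assume UTC when no zone designator is
    # present (GMN emits both 'Z'-suffixed and naive stamps above).
    dt = datetime.fromisoformat(ts.replace("Z", "+00:00"))
    if dt.tzinfo is None:
        dt = dt.replace(tzinfo=timezone.utc)
    return dt

def timestamps_consistent(meta_ts, list_ts):
    # True when getSystemMetadata() and listObjects() agree.
    return parse_d1_ts(meta_ts) == parse_d1_ts(list_ts)
```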
Infrastructure - Bug #7601 (New): CN checksum inconsistencies
https://redmine.dataone.org/issues/7601 | 2016-01-21T20:09:28Z | Ben Leinfelder (leinfelder@nceas.ucsb.edu)
<p>While transferring test data from production to the sandbox-2 environment I noticed failures for a group of pids. I'll use an example to illustrate (doi_10.5066_F71C1TV7):<br>
<a href="https://cn.dataone.org/cn/v2/meta/doi_10.5066_F71C1TV7">https://cn.dataone.org/cn/v2/meta/doi_10.5066_F71C1TV7</a></p>
<p>CN.SystemMetadata reports the checksum as:</p>
<pre>46178da6192263921eb755940d716725</pre>
<p>whereas calculating it from the object on disk gives:</p>
<pre>MD5(/var/metacat/documents/autogen.2013062508395355978.1)= efc11787f789b45db29999fb4bd8d745</pre>
<p>The byte size is also off. CN.SystemMetadata reports 16739, while on disk:</p>
<pre>-rw-r--r-- 1 tomcat7 tomcat7 16529 Jun 25 2013 /var/metacat/documents/autogen.2013062508395355978.1</pre>
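<p>The disk-side values above can be reproduced with a short streaming checksum helper (a sketch, not CN code):</p>

```python
import hashlib

def md5_hexdigest(path, chunk_size=65536):
    # Stream the object from disk in chunks and compute its MD5 hex
    # digest, for comparison with the CN.SystemMetadata checksum.
    md5 = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk_size), b""):
            md5.update(block)
    return md5.hexdigest()
```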
<p>There are ~70 similar pids that have issues (perhaps more) from our test corpus. They are from the now defunct USGS MN.</p>
<p>I'm not sure what our strategy should be, since the original MN is no longer online and we cannot retrieve the "original" bytes from it.</p>
Member Nodes - Task #7046 (New): Certificate DC=org,DC=dataone,CN=osu.piscoweb.org expires in Pro...
https://redmine.dataone.org/issues/7046 | 2015-04-15T19:06:20Z | Dave Vieglais (dave.vieglais@gmail.com)
<p>Client certificate expires on May 17.</p>
<p>Need to:</p>
<ul>
<li>contact the Node Administrator and prepare for updating the certificate.</li>
<li>generate the certificate (Mark, Dave, or Chris)</li>
<li>guide the MN admin for certificate installation</li>
</ul>
<p><a href="https://repository.dataone.org/software/tools/trunk/ca/calendar.html">https://repository.dataone.org/software/tools/trunk/ca/calendar.html</a></p>
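<p>A small helper for tracking expiry dates like this one, assuming the <code>May 17 00:00:00 2015 GMT</code> format printed by <code>openssl x509 -noout -enddate</code> (sketch; not part of the CA tooling linked above):</p>

```python
from datetime import datetime, timezone

def days_until_expiry(not_after, now=None):
    # Parse a notAfter date such as "May 17 00:00:00 2015 GMT" and
    # return whole days remaining; negative means already expired.
    expires = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    expires = expires.replace(tzinfo=timezone.utc)
    now = now or datetime.now(timezone.utc)
    return (expires - now).days
```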
Member Nodes - MNDeployment #6853 (New): OBIS (Ocean Biogeographic Information System)
https://redmine.dataone.org/issues/6853 | 2015-02-13T17:49:50Z | Laura Moyers (lmoyers1@utk.edu)
<p><a href="http://www.iobis.org/">http://www.iobis.org/</a></p>
<p>OBIS is interested in working with DataONE at some point in the future. It has some similarity to GBIF in that it aggregates data from many different ocean biodiversity nodes (some are regional or country based and others are theme-based). If we add GBIF, we presumably also get the OBIS data. </p>
<p>After we add GBIF, we can revisit OBIS and see where they are with respect to participation with DataONE (there is an OBIS-USA component which might be a good place to start). </p>
<p>Bill Michener is our POC as he has quarterly calls with the OBIS advisory group.</p>
Infrastructure - Bug #4674 (New): Ask Judith, Mike and Virgina Perez.2.1 to obsolete those pids w...
https://redmine.dataone.org/issues/4674 | 2014-03-31T18:02:41Z | Jing Tao (tao@nceas.ucsb.edu)
<p>doi:10.5063/AA/Virginia Perez.2.1<br>
judith botha.1.1<br>
judith botha.2.1<br>
judith kruger.1.1<br>
judith kruger.2.1<br>
judith kruger.3.1<br>
judith kruger.4.1<br>
judith kruger.5.1<br>
doi:10.6085/AA/ SHLX00_XXXITV2XLSR03_20111128.40.1 (PISCO)</p>
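<p>The pids above contain embedded spaces, which the DataONE identifier rules disallow (identifiers must be non-empty, at most 800 characters, and contain no whitespace). A sketch of that check:</p>

```python
def is_valid_d1_identifier(pid):
    # Non-empty, at most 800 characters, no whitespace anywhere;
    # the pids listed above fail on the whitespace rule.
    return 0 < len(pid) <= 800 and not any(ch.isspace() for ch in pid)
```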
Member Nodes - Task #3906 (New): Update malformed Resource Maps
https://redmine.dataone.org/issues/3906 | 2013-08-09T17:50:27Z | Rob Nahf (rnahf@epscor.unm.edu)
<p>Update all existing resource maps in Merritt and ONShare MNs so that URIs are used instead of object-literals, to create valid resource maps. </p>
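<p>A minimal sketch of the distinction being fixed here: a resource map member reference is valid when it is a URI (it has a scheme), not a bare object literal (helper name is mine):</p>

```python
from urllib.parse import urlparse

def is_uri_reference(value):
    # A URI reference has a scheme (https:, doi:, ...); a bare string
    # like "my data object" is an object literal, not a URI.
    return bool(urlparse(value).scheme)
```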
Member Nodes - MNDeployment #3664 (New): Landcare Research New Zealand
https://redmine.dataone.org/issues/3664 | 2013-03-14T20:11:25Z | Laura Moyers (lmoyers1@utk.edu)
<p>From DataONE website "contact us" - HowtoBecomeMN, EnsureCorrectSetUpFromStart, ConsideringBecomingMN</p>
<p>Website: <a href="http://www.landcareresearch.co.nz/home">http://www.landcareresearch.co.nz/home</a><br>
Entity: Landcare Research New Zealand<br>
POC: <a href="mailto:mcglinchya@landcareresearch.co.nz">mcglinchya@landcareresearch.co.nz</a> <br>
Date of inquiry: 11/11/12<br><br>
Responder: Rebecca Koskela<br>
Date of response: 11/12/2012<br><br>
Response: John Cobb will be in touch. </p>