DataONE Tasks: Issues (https://redmine.dataone.org/, retrieved 2018-09-12T00:18:51Z)
Infrastructure - Bug #8696 (New): double indexing of a resource map and another not processed bec...
https://redmine.dataone.org/issues/8696 - 2018-09-12T00:18:51Z - Rob Nahf (rnahf@epscor.unm.edu)
<p>In production, the ORE 'a1a0e96a-3cde-4f3c-829c-29650b09f22b' was not processed because one of its members was also referenced by the ORE it obsoleted, 'dc39515e-440b-4673-9f63-962c7374bf48'. The task failed without being requeued. Log output below:</p>
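<p>For reference, the wait-then-fail behavior visible in the log below amounts to a bounded polling loop with no requeue on failure. A minimal Python sketch (hypothetical names, not the actual IndexTaskProcessor code):</p>

```python
import time

def wait_for_member_release(is_locked, pid, max_tries=10, delay=0.5):
    """Poll until another thread releases the shared member id, or give up.

    Mirrors the checkReadinessProcessResourceMap pattern in the log: after
    max_tries * delay seconds the task raises, and nothing requeues it.
    """
    for _ in range(max_tries):
        if not is_locked(pid):
            return True
        time.sleep(delay)
    raise RuntimeError("gave up waiting; task for %s is not requeued" % pid)
```

<p>A fix would presumably requeue the task (or extend the wait) instead of raising and abandoning it.</p>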
<pre>rnahf@cn-orc-1:/var/log/dataone/index$ grep a1a0e96a-3cde-4f3c-829c-29650b09f22b cn-index-processor-daemon.log.*
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:27,384 (IndexTaskProcessor:saveTask:865) IndexTaskProcess.saveTask save the index task a1a0e96a-3cde-4f3c-829c-29650b09f22b
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:27,384 (IndexTaskProcessor:getNextIndexTask:610) Start of indexing pid: a1a0e96a-3cde-4f3c-829c-29650b09f22b
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:34,832 (IndexTaskProcessor:getNextIndexTask:664) the original index task - IndexTask [id=18085996, pid=a1a0e96a-3cde-4f3c-829c-29650b09f22b, formatid=http://www.openarchives.org/ore/terms, objectPath=/var/metacat/data/autogen.2018091015425874434.1, dateSysMetaModified=1536087134490, deleted=false, taskModifiedDate=1536619467383, priority=3, status=IN PROCESS]
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:34,832 (IndexTaskProcessor:getNextIndexTask:671) the new index task - IndexTask [id=18085996, pid=a1a0e96a-3cde-4f3c-829c-29650b09f22b, formatid=http://www.openarchives.org/ore/terms, objectPath=/var/metacat/data/autogen.2018091015425874434.1, dateSysMetaModified=1536087134490, deleted=false, taskModifiedDate=1536619467383, priority=3, status=IN PROCESS]
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:34,901 (IndexTaskProcessor:checkReadinessProcessResourceMap:369) ###################Another resource map is process the referenced id ee73cf7f-1005-4b89-bab9-3a7fa01d27c6 as well. So the thread to process id a1a0e96a-3cde-4f3c-829c-29650b09f22b has to wait 0.5 seconds.
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:35,402 (IndexTaskProcessor:checkReadinessProcessResourceMap:369) ###################Another resource map is process the referenced id ee73cf7f-1005-4b89-bab9-3a7fa01d27c6 as well. So the thread to process id a1a0e96a-3cde-4f3c-829c-29650b09f22b has to wait 0.5 seconds.
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:35,902 (IndexTaskProcessor:checkReadinessProcessResourceMap:369) ###################Another resource map is process the referenced id ee73cf7f-1005-4b89-bab9-3a7fa01d27c6 as well. So the thread to process id a1a0e96a-3cde-4f3c-829c-29650b09f22b has to wait 0.5 seconds.
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:36,402 (IndexTaskProcessor:checkReadinessProcessResourceMap:369) ###################Another resource map is process the referenced id ee73cf7f-1005-4b89-bab9-3a7fa01d27c6 as well. So the thread to process id a1a0e96a-3cde-4f3c-829c-29650b09f22b has to wait 0.5 seconds.
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:36,903 (IndexTaskProcessor:checkReadinessProcessResourceMap:369) ###################Another resource map is process the referenced id ee73cf7f-1005-4b89-bab9-3a7fa01d27c6 as well. So the thread to process id a1a0e96a-3cde-4f3c-829c-29650b09f22b has to wait 0.5 seconds.
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:37,403 (IndexTaskProcessor:checkReadinessProcessResourceMap:369) ###################Another resource map is process the referenced id ee73cf7f-1005-4b89-bab9-3a7fa01d27c6 as well. So the thread to process id a1a0e96a-3cde-4f3c-829c-29650b09f22b has to wait 0.5 seconds.
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:37,903 (IndexTaskProcessor:checkReadinessProcessResourceMap:369) ###################Another resource map is process the referenced id ee73cf7f-1005-4b89-bab9-3a7fa01d27c6 as well. So the thread to process id a1a0e96a-3cde-4f3c-829c-29650b09f22b has to wait 0.5 seconds.
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:38,403 (IndexTaskProcessor:checkReadinessProcessResourceMap:369) ###################Another resource map is process the referenced id ee73cf7f-1005-4b89-bab9-3a7fa01d27c6 as well. So the thread to process id a1a0e96a-3cde-4f3c-829c-29650b09f22b has to wait 0.5 seconds.
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:38,904 (IndexTaskProcessor:checkReadinessProcessResourceMap:369) ###################Another resource map is process the referenced id ee73cf7f-1005-4b89-bab9-3a7fa01d27c6 as well. So the thread to process id a1a0e96a-3cde-4f3c-829c-29650b09f22b has to wait 0.5 seconds.
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:39,404 (IndexTaskProcessor:checkReadinessProcessResourceMap:369) ###################Another resource map is process the referenced id ee73cf7f-1005-4b89-bab9-3a7fa01d27c6 as well. So the thread to process id a1a0e96a-3cde-4f3c-829c-29650b09f22b has to wait 0.5 seconds.
cn-index-processor-daemon.log.6:[ERROR] 2018-09-10 22:44:39,904 (IndexTaskProcessor:checkReadinessProcessResourceMap:384) We waited for another thread to finish indexing a resource map which has the referenced id ee73cf7f-1005-4b89-bab9-3a7fa01d27c6 for a while. Now we quited and can't index id a1a0e96a-3cde-4f3c-829c-29650b09f22b
cn-index-processor-daemon.log.6:[ERROR] 2018-09-10 22:44:39,904 (IndexTaskProcessor:processTask:297) Unable to process task for pid: a1a0e96a-3cde-4f3c-829c-29650b09f22b
cn-index-processor-daemon.log.6:java.lang.Exception: We waited for another thread to finish indexing a resource map which has the referenced id ee73cf7f-1005-4b89-bab9-3a7fa01d27c6 for a while. Now we quited and can't index id a1a0e96a-3cde-4f3c-829c-29650b09f22b
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:39,906 (IndexTaskProcessor:newOrFailedIndexTaskExists:890) IndexTaskProcess.newOrFailedIndexTaskExists for id a1a0e96a-3cde-4f3c-829c-29650b09f22b
rnahf@cn-orc-1:/var/log/dataone/index$ date
Tue Sep 11 23:46:56 UTC 2018
rnahf@cn-orc-1:/var/log/dataone/index$ grep dc39515e-440b-4673-9f63-962c7374bf48 cn-index-processor-daemon.log.*
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:12,133 (HZEventFilter:filter:127) HZEventFilter.filter - the system metadata for the index event shows shows dc39515e-440b-4673-9f63-962c7374bf48 having a newer version than the SOLR server. So this event should be granted for indexing.
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:13,347 (HZEventFilter:filter:127) HZEventFilter.filter - the system metadata for the index event shows shows dc39515e-440b-4673-9f63-962c7374bf48 having a newer version than the SOLR server. So this event should be granted for indexing.
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:18,677 (IndexTaskProcessor:saveTask:865) IndexTaskProcess.saveTask save the index task dc39515e-440b-4673-9f63-962c7374bf48
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:18,677 (IndexTaskProcessor:getNextIndexTask:610) Start of indexing pid: dc39515e-440b-4673-9f63-962c7374bf48
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:25,783 (IndexTaskProcessor:getNextIndexTask:664) the original index task - IndexTask [id=18086020, pid=dc39515e-440b-4673-9f63-962c7374bf48, formatid=http://www.openarchives.org/ore/terms, objectPath=/var/metacat/data/autogen.2017072514144216470.1, dateSysMetaModified=1536087137440, deleted=false, taskModifiedDate=1536619458675, priority=3, status=IN PROCESS]
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:25,783 (IndexTaskProcessor:getNextIndexTask:671) the new index task - IndexTask [id=18086020, pid=dc39515e-440b-4673-9f63-962c7374bf48, formatid=http://www.openarchives.org/ore/terms, objectPath=/var/metacat/data/autogen.2017072514144216470.1, dateSysMetaModified=1536087137440, deleted=false, taskModifiedDate=1536619458675, priority=3, status=IN PROCESS]
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:27,221 (IndexTaskProcessor:processTask:284) *********************start to process update index task for dc39515e-440b-4673-9f63-962c7374bf48 in thread 20
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:43,513 (IndexTaskProcessor:processTask:288) *********************end to process update index task for dc39515e-440b-4673-9f63-962c7374bf48 in thread 20
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:43,519 (IndexTaskProcessor:processTask:315) Indexing complete for pid: dc39515e-440b-4673-9f63-962c7374bf48
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:46:11,604 (IndexTaskProcessor:saveTask:865) IndexTaskProcess.saveTask save the index task dc39515e-440b-4673-9f63-962c7374bf48
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:46:11,604 (IndexTaskProcessor:getNextIndexTask:610) Start of indexing pid: dc39515e-440b-4673-9f63-962c7374bf48
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:46:18,731 (IndexTaskProcessor:getNextIndexTask:664) the original index task - IndexTask [id=18086015, pid=dc39515e-440b-4673-9f63-962c7374bf48, formatid=http://www.openarchives.org/ore/terms, objectPath=/var/metacat/data/autogen.2017072514144216470.1, dateSysMetaModified=1536087137440, deleted=false, taskModifiedDate=1536619571603, priority=3, status=IN PROCESS]
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:46:18,732 (IndexTaskProcessor:getNextIndexTask:671) the new index task - IndexTask [id=18086015, pid=dc39515e-440b-4673-9f63-962c7374bf48, formatid=http://www.openarchives.org/ore/terms, objectPath=/var/metacat/data/autogen.2017072514144216470.1, dateSysMetaModified=1536087137440, deleted=false, taskModifiedDate=1536619571603, priority=3, status=IN PROCESS]
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:46:20,164 (IndexTaskProcessor:processTask:284) *********************start to process update index task for dc39515e-440b-4673-9f63-962c7374bf48 in thread 20
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:46:36,252 (IndexTaskProcessor:processTask:288) *********************end to process update index task for dc39515e-440b-4673-9f63-962c7374bf48 in thread 20
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:46:36,255 (IndexTaskProcessor:processTask:315) Indexing complete for pid: dc39515e-440b-4673-9f63-962c7374bf48
cn-index-processor-daemon.log.7:[ INFO] 2018-09-10 21:44:09,798 (HZEventFilter:compareRaplicaList:256) HZEventFilter.compareReplicaList - the system metadata for the index event shows dc39515e-440b-4673-9f63-962c7374bf48 having the same replica list as the solr doc.
cn-index-processor-daemon.log.7:[ INFO] 2018-09-10 21:44:09,798 (HZEventFilter:filter:164) HZEventFilter.filter - the system metadata for the index event shows dc39515e-440b-4673-9f63-962c7374bf48 having the same modification date as the SOLR server. Also both have the same replica list. So this event has been filtered out for indexing (no indexing).
rnahf@cn-orc-1:/var/log/dataone/index$
</pre>
Infrastructure - Bug #8655 (New): Synchronization died with OOM
https://redmine.dataone.org/issues/8655 - 2018-07-13T11:24:16Z - Dave Vieglais (dave.vieglais@gmail.com)
<p>d1-processing became unresponsive. The cn-synchronization log showed:</p>
<pre>[ERROR] 2018-07-12 18:28:26,875 [ProcessDaemonTask1] (SyncObjectTaskManager:run:84) java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: Java heap space
java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: Java heap space
        at java.util.concurrent.FutureTask.report(FutureTask.java:122)
        at java.util.concurrent.FutureTask.get(FutureTask.java:192)
        at org.dataone.cn.batch.synchronization.SyncObjectTaskManager.run(SyncObjectTaskManager.java:76)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.OutOfMemoryError: Java heap space
[ INFO] 2018-07-12 18:28:49,862 [ProcessDaemonTask1] (SyncObjectTaskManager:run:110) SyncObjectTaskManager Complete
[ WARN] 2018-07-12 20:41:15,788 [hz.client.2.Listener] (NodeTopicListener:onMessage:68) urn:node:OTS_NDC- NodeTopicListener Disabl</pre>
<p>d1-processing is running with:</p>
<pre>-Djava.awt.headless=true -XX:+UseParallelGC -Xmx4096M -Xms1024M -Xss1280k -XX:MaxPermSize=512M</pre>
Infrastructure - Story #8525 (In Progress): timeout exceptions thrown from Hazelcast disable sync...
https://redmine.dataone.org/issues/8525 - 2018-03-27T22:36:54Z - Rob Nahf (rnahf@epscor.unm.edu)
<p>Very occasionally, synchronization disables itself when RuntimeExceptions bubble up. The most common of these is when the Hazelcast client seemingly disconnects, or can't complete an operation, and a java.util.concurrent.TimeoutException is thrown.</p>
<p>These are usually due to network problems, as evidenced by timeout exceptions appearing in both the Metacat hazelcast-storage.log files as well as d1-processing logs.</p>
<p>Temporary problems like this should be recoverable, so a retry or bypass for these timeouts should be implemented. It's not clear whether a new HazelcastClient needs to be instantiated or whether the same client remains usable. (Is the client tightly bound to a session, or does it recover?) If a new client is needed, preliminary searching through the code indicates that HazelcastClientFactory.getProcessingClient() is only used in a few places, so refactoring it is tractable: the singleton behavior it uses can be sidestepped by removing the method and replacing it with a getLock() wrapper method (that seems to be the dominant use case for it). See the newer SyncQueueFacade in d1_synchronization for guidance on that. If the client is never exposed, it can be refreshed as needed.</p>
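<p>Whatever client-handling approach is chosen, the retry itself could be as simple as the following sketch (illustrative Python, not the d1_synchronization code; the <code>op</code> callable stands in for a Hazelcast map/queue operation):</p>

```python
import time

def with_retry(op, retries=3, base_delay=0.5):
    """Run op(); on a transient TimeoutError, retry with exponential backoff.

    Only the timeout is retried; other exceptions (and the final timeout)
    still propagate, so real failures are not silently swallowed.
    """
    for attempt in range(retries):
        try:
            return op()
        except TimeoutError:
            if attempt == retries - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```

<p>The point of the bounded backoff is that a brief network blip no longer bubbles a RuntimeException up far enough to disable synchronization.</p>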
<pre>root@cn-unm-1:/var/metacat/logs# grep FATAL hazelcast-storage.log.1
[FATAL] 2018-03-27 03:15:19,380 (BaseManager$2:run:1402) [64.106.40.6]:5701 [DataONE] Caught error while calling event listener; cause: [CONCURRENT_MAP_CONTAINS_KEY] Operation Timeout (with no response!): 0
</pre><pre>[ERROR] 2018-03-27 03:15:19,781 [ProcessDaemonTask1] (SyncObjectTaskManager:run:84) java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.util.concurrent
.TimeoutException: [CONCURRENT_MAP_REMOVE] Operation Timeout (with no response!): 0
java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.util.concurrent.TimeoutException: [CONCURRENT_MAP_REMOVE] Operation Timeout (with no response!): 0
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at org.dataone.cn.batch.synchronization.SyncObjectTaskManager.run(SyncObjectTaskManager.java:76)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.RuntimeException: java.util.concurrent.TimeoutException: [CONCURRENT_MAP_REMOVE] Operation Timeout (with no response!): 0
at com.hazelcast.impl.ClientServiceException.readData(ClientServiceException.java:63)
at com.hazelcast.nio.Serializer$DataSerializer.read(Serializer.java:104)
at com.hazelcast.nio.Serializer$DataSerializer.read(Serializer.java:79)
at com.hazelcast.nio.AbstractSerializer.toObject(AbstractSerializer.java:121)
at com.hazelcast.nio.AbstractSerializer.toObject(AbstractSerializer.java:156)
at com.hazelcast.client.ClientThreadContext.toObject(ClientThreadContext.java:72)
at com.hazelcast.client.IOUtil.toObject(IOUtil.java:34)
at com.hazelcast.client.ProxyHelper.getValue(ProxyHelper.java:186)
at com.hazelcast.client.ProxyHelper.doOp(ProxyHelper.java:146)
at com.hazelcast.client.ProxyHelper.doOp(ProxyHelper.java:140)
at com.hazelcast.client.QueueClientProxy.innerPoll(QueueClientProxy.java:115)
at com.hazelcast.client.QueueClientProxy.poll(QueueClientProxy.java:111)
at org.dataone.cn.batch.synchronization.type.SyncQueueFacade.poll(SyncQueueFacade.java:231)
at org.dataone.cn.batch.synchronization.tasks.SyncObjectTask.call(SyncObjectTask.java:131)
at org.dataone.cn.batch.synchronization.tasks.SyncObjectTask.call(SyncObjectTask.java:73)
</pre>
Infrastructure - Bug #7919 (New): unloadable system metadata in CNs by Hazelcast
https://redmine.dataone.org/issues/7919 - 2016-10-26T16:22:56Z - Rob Nahf (rnahf@epscor.unm.edu)
<p>Looking through the Metacat logs, I found many instances where the HzSystemMetadataMap could not load system metadata for particular pids. Most had dryad in the pid (~1200), but another ~130 are from elsewhere.</p>
<p>A random sample showed that the system metadata could not be retrieved via /meta, although the same pids could still be retrieved from the Dryad MN.</p>
<p>This appears to be another type of half-created content on the CN.</p>
<pre>rnahf@cn-ucsb-1:~$ grep 'could not load system metadata' /var/metacat/logs/metacat.log | cut -c60- | sort | uniq | grep -v dryad | wc -l
139
rnahf@cn-ucsb-1:~$ grep 'could not load system metadata' /var/metacat/logs/metacat.log | cut -c60- | sort | uniq | grep dryad | wc -l
1216</pre>
Python GMN - Bug #7740 (New): GMN fails to write system metadata during a refresh under unknown c...
https://redmine.dataone.org/issues/7740 - 2016-04-14T14:56:22Z - Mark Servilla (mark.servilla@gmail.com)
<p>Some MNs (NMEPSCOR and TERN, for example) are reporting that GMN fails to write system metadata during a refresh. It is not clear under what conditions this occurs.</p>
<pre>2016-03-24 06:00:15 ERROR SysMetaRefresher process_system_metadata_refresh_queue 30401 140171679168320 System Metadata update failed with internal exception:
Traceback (most recent call last):
  File "/var/local/dataone/gmn/local/lib/python2.7/site-packages/service/mn/management/commands/process_system_metadata_refresh_queue.py", line 161, in _process_refresh_task
    self._refresh(task)
  File "/var/local/dataone/gmn/local/lib/python2.7/site-packages/service/mn/management/commands/process_system_metadata_refresh_queue.py", line 174, in _refresh
    self._update_sys_meta(sys_meta)
  File "/var/local/dataone/gmn/local/lib/python2.7/site-packages/service/mn/management/commands/process_system_metadata_refresh_queue.py", line 224, in _update_sys_meta
    mn.auth.set_access_policy(pid, sys_meta.accessPolicy)
  File "/var/local/dataone/gmn/lib/python2.7/site-packages/service/mn/auth.py", line 164, in set_access_policy
    with mn.sysmeta_store.sysmeta(pid, sci_obj.serial_version) as sysmeta:
  File "/var/local/dataone/gmn/lib/python2.7/site-packages/service/mn/sysmeta_store.py", line 97, in __init__
    with open(sysmeta_path, 'rb') as f:
IOError: [Errno 2] No such file or directory: '/var/local/dataone/gmn/lib/python2.7/site-packages/service/./stores/sysmeta/112/129/aekos.org.au%2Fcollection%2Fnsw.gov.au%2Fnsw_atlas%2Fvis_flora_module%2FP_ADHO4FB3.20150515.5'</pre>
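<p>Until the root cause is known, one mitigation sketch (hypothetical, not the current GMN code) is to treat a missing sysmeta file as a skippable task rather than letting the IOError abort the refresh run:</p>

```python
import logging
import os

log = logging.getLogger("sysmeta_refresh")

def read_sysmeta(sysmeta_path):
    """Return the sysmeta bytes, or None when the backing file is missing.

    A refresh task whose sysmeta file has vanished gets logged and skipped
    instead of killing the whole refresh queue with an uncaught IOError.
    """
    if not os.path.isfile(sysmeta_path):
        log.error("sysmeta file missing, skipping refresh: %s", sysmeta_path)
        return None
    with open(sysmeta_path, "rb") as f:
        return f.read()
```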
Python GMN - Bug #7738 (New): GMN perpetually retries system metadata updates after failures
https://redmine.dataone.org/issues/7738 - 2016-04-14T14:26:33Z - Mark Servilla (mark.servilla@gmail.com)
<p>It is believed that GMN perpetually attempts to update system metadata even when the update fails for specific system metadata objects. More robust logic is needed so that continuous update attempts (and the copious resulting calls to the CN for system metadata) are averted.</p>
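<p>A bounded-retry scheme along these lines might be what's needed (Python sketch; names are hypothetical, not GMN's actual queue code):</p>

```python
def process_refresh_queue(queue, refresh, max_attempts=3):
    """Retry failed refreshes a bounded number of times.

    pids that keep failing are moved to a dead-letter list for later
    inspection instead of being retried (and hammering the CN) forever.
    """
    attempts, dead = {}, []
    while queue:
        pid = queue.pop(0)
        try:
            refresh(pid)
        except Exception:
            attempts[pid] = attempts.get(pid, 0) + 1
            if attempts[pid] < max_attempts:
                queue.append(pid)   # retry later, up to the cap
            else:
                dead.append(pid)    # give up; surface for manual review
    return dead
```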
Python GMN - Bug #7651 (New): GMN returns two different dateSysMetadataModified dateTime stamps f...
https://redmine.dataone.org/issues/7651 - 2016-02-22T23:42:19Z - Mark Servilla (mark.servilla@gmail.com)
<p>GMN returns two different dateSysMetadataModified dateTime stamps for the same object: one in MNRead.getSystemMetadata() and the other in MNRead.listObjects():</p>
<p>curl -s -X GET <a href="https://gmn.lternet.edu/mn/v1/meta/doi:10.6073/AA/knb-lter-bes.417.47">https://gmn.lternet.edu/mn/v1/meta/doi:10.6073/AA/knb-lter-bes.417.47</a> | xml fo</p>
<pre><?xml version="1.0"?>
<ns1:systemMetadata xmlns:ns1="http://ns.dataone.org/service/types/v1">
  <serialVersion>4</serialVersion>
  <identifier>doi:10.6073/AA/knb-lter-bes.417.47</identifier>
  <formatId>eml://ecoinformatics.org/eml-2.0.1</formatId>
  <size>11435</size>
  <checksum>84a5a7f446bed53b2a5f9090dfc2e7d9</checksum>
  <submitter>CN=urn:node:LTER,DC=dataone,DC=org</submitter>
  <rightsHolder>uid=BES,o=LTER,dc=ecoinformatics,dc=org</rightsHolder>
  <accessPolicy>
    <allow>
      <subject>public</subject>
      <permission>read</permission>
    </allow>
    <allow>
      <subject>uid="BES",o=lter,dc=ecoinformatics,dc=org</subject>
      <permission>read</permission>
      <permission>write</permission>
      <permission>changePermission</permission>
    </allow>
  </accessPolicy>
  <obsoletes>doi:10.6073/AA/knb-lter-bes.417.46</obsoletes>
  <obsoletedBy>doi:10.6073/AA/knb-lter-bes.417.48</obsoletedBy>
  <dateUploaded>2015-03-06T07:09:13.931577</dateUploaded>
  <dateSysMetadataModified>2016-01-13T13:01:44.579484Z</dateSysMetadataModified>
  <originMemberNode>urn:node:LTER</originMemberNode>
  <authoritativeMemberNode>urn:node:LTER</authoritativeMemberNode>
</ns1:systemMetadata></pre>
<p>and</p>
<p>curl -s -X GET "<a href="https://gmn.lternet.edu/mn/v1/object?fromDate=2012-06-26T10:40:00.000+00:00&toDate=2012-06-26T10:45:00.000+00:00">https://gmn.lternet.edu/mn/v1/object?fromDate=2012-06-26T10:40:00.000+00:00&toDate=2012-06-26T10:45:00.000+00:00</a>" | xml fo</p>
<pre><?xml version="1.0"?>
<ns1:objectList xmlns:ns1="http://ns.dataone.org/service/types/v1">
  <objectInfo>
    <identifier>doi:10.6073/AA/knb-lter-bes.417.47</identifier>
    <formatId>eml://ecoinformatics.org/eml-2.0.1</formatId>
    <checksum>84a5a7f446bed53b2a5f9090dfc2e7d9</checksum>
    <dateSysMetadataModified>2012-06-26T10:41:36.229</dateSysMetadataModified>
    <size>11435</size>
  </objectInfo>
</ns1:objectList></pre>
<p>This state occurs after the GMN receives an MNAuthorization.systemMetadataChanged() call from the CN to refresh system metadata.</p>
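<p>For concreteness, the two reported values are years apart, not a formatting quirk. Parsing them side by side (Python sketch; the handling of the trailing 'Z' as a UTC marker is an assumption):</p>

```python
from datetime import datetime

def parse_d1_date(stamp):
    """Parse the timestamps as reported, treating a trailing 'Z' as UTC."""
    return datetime.strptime(stamp.rstrip("Z"), "%Y-%m-%dT%H:%M:%S.%f")

# the two values reported for the same object in this issue
meta_date = parse_d1_date("2016-01-13T13:01:44.579484Z")  # MNRead.getSystemMetadata()
list_date = parse_d1_date("2012-06-26T10:41:36.229")      # MNRead.listObjects()
```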
Infrastructure - Bug #4674 (New): Ask Judith, Mike and Virgina Perez.2.1 to obsolete those pids w...
https://redmine.dataone.org/issues/4674 - 2014-03-31T18:02:41Z - Jing Tao (tao@nceas.ucsb.edu)
<p>doi:10.5063/AA/Virginia Perez.2.1<br>
judith botha.1.1<br>
judith botha.2.1<br>
judith kruger.1.1<br>
judith kruger.2.1<br>
judith kruger.3.1<br>
judith kruger.4.1<br>
judith kruger.5.1<br>
doi:10.6085/AA/ SHLX00_XXXITV2XLSR03_20111128.40.1 (PISCO)</p>
Infrastructure - Task #4210 (Testing): Metacat does not set serialVersion correctly in CNodeServi...
https://redmine.dataone.org/issues/4210 - 2013-12-20T15:22:50Z - Chris Jones (cjones@nceas.ucsb.edu)
<p>For DATA and METADATA, CNodeService.archive() and D1NodeService.archive(), respectively, don't increment the serialVersion field. Check this for delete() as well. D1NodeService delegates to DocumentImpl to call the HZ put() method, so the fix needs to be there, and in CNodeService.</p>
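<p>The intended behavior can be sketched as follows (illustrative Python stand-in for the Java fix; the dict-backed map stands in for the Hazelcast system metadata map):</p>

```python
def archive(sysmeta_map, pid):
    """Archive an object and bump its serialVersion (the missing step)."""
    sm = dict(sysmeta_map[pid])
    sm["archived"] = True
    sm["serialVersion"] += 1   # archive() currently omits this increment
    sysmeta_map[pid] = sm      # the put() propagates the change to the map
    return sm
```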
Member Nodes - Task #3906 (New): Update malformed Resource Maps
https://redmine.dataone.org/issues/3906 - 2013-08-09T17:50:27Z - Rob Nahf (rnahf@epscor.unm.edu)
<p>Update all existing resource maps in Merritt and ONShare MNs so that URIs are used instead of object-literals, to create valid resource maps. </p>
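<p>A quick way to spot the malformed maps is to check whether the object term of each statement is a URI reference or a quoted literal. A minimal sketch over N-Triples input (hypothetical helper, assuming the resource maps can be serialized to N-Triples):</p>

```python
def object_is_uri(ntriple_line):
    """Return True when the object term of an N-Triples statement is a
    URI reference (<...>) rather than a quoted literal ("...")."""
    subj, pred, obj = ntriple_line.rstrip(" .\n").split(None, 2)
    return obj.startswith("<") and obj.endswith(">")
```

<p>Lines where this returns False for an ore:aggregates statement are the ones that need rewriting with URIs.</p>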
Member Nodes - MNDeployment #3521 (Operational): SEAD Member Node
https://redmine.dataone.org/issues/3521 - 2013-01-25T21:19:12Z - Rebecca Koskela (rkoskela@unm.edu)
<p>SEAD (Sustainable Environment - Actionable Data), another DataNet, would like to become a DataONE Member Node<br>
(<a href="http://sead-data.net/">http://sead-data.net/</a>)</p>
Infrastructure - Bug #3492 (In Progress): Invalid PIDs in production (whitespace)
https://redmine.dataone.org/issues/3492 - 2013-01-17T15:13:44Z - Dave Vieglais (dave.vieglais@gmail.com)
<p>Recording this for future reference. </p>
<p>There are nine PIDs in the production environment that contain whitespace. This appears to have no functional effect - sysmeta and objects can be retrieved so no action is required other than to ensure no more sneak in.</p>
<p>The PIDs in question are:</p>
<h2>guid</h2>
<p>doi:10.5063/AA/Virginia Perez.2.1<br>
judith kruger.3.1<br>
judith kruger.4.1<br>
judith botha.1.1<br>
judith kruger.1.1<br>
judith kruger.2.1<br>
judith kruger.5.1<br>
judith botha.2.1<br>
resourceMap_Lin Cheng-Jung.1.1<br>
resourceMap_Lin Cheng-Jung.1.2<br>
resourceMap_Lin Cheng-Jung.1.3<br>
Lin Cheng-Jung.1.1<br>
Lin Cheng-Jung.1.2<br>
Lin Cheng-Jung.1.3<br>
doi:10.6085/AA/ SHLX00_XXXITV2XLSR03_20111128.40.1</p>
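<p>Ensuring no more sneak in amounts to rejecting identifiers containing whitespace at create/register time. A minimal check (sketch; the full identifier rules DataONE enforces are broader than this one predicate):</p>

```python
import re

def pid_is_valid(pid):
    """Reject empty identifiers and any identifier containing whitespace."""
    return bool(pid) and re.search(r"\s", pid) is None
```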
Member Nodes - MNDeployment #3118 (Operational): Dryad Member Node
https://redmine.dataone.org/issues/3118 - 2012-08-05T17:05:51Z - Dave Vieglais (dave.vieglais@gmail.com)
<p>The Dryad MN will operate as a tier 1 member node.</p>
<p>Base_URL: <a href="https://datadryad.org/mn">https://datadryad.org/mn</a><br>
Node_ID: urn:node:DRYAD<br>
Deployment_Contact: Ryan Scherle<br>
Software: Custom on modified DSpace (Dryad)<br>
Target_Tier: 1<br>
Content_Volume_GB: 20</p>
Infrastructure - Task #1556 (In Progress): Interns mailing list
https://redmine.dataone.org/issues/1556 - 2011-05-12T20:19:01Z - Amber Budden (aebudden@gmail.com)
<p>Can you please clear the current 'interns' mailing list in preparation for use in 2011? It currently sends out to last year's interns. Once the new interns have set up Plone accounts I will submit a new request to have them added.</p>
Requirements - Requirement #592 (New): DataONE needs to synchronize metadata between MNs and CNs
https://redmine.dataone.org/issues/592 - 2010-04-23T00:53:56Z - Matthew Jones (jones@nceas.ucsb.edu)
<p>DataONE functions by having CNs that provide a central index of all metadata in the system. This allows for rapid search of the whole network while maintaining the autonomy of MNs.</p>
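<p>In outline, synchronization is an incremental harvest of each MN's change list into the CN index. A simplified sketch (hypothetical names; <code>mn_list_objects</code> stands in for MNRead.listObjects()):</p>

```python
def harvest(mn_list_objects, last_harvest):
    """One synchronization pass: pull records modified since the last harvest.

    mn_list_objects(from_date) yields (pid, date_modified) pairs; the
    returned watermark becomes the from_date for the next pass.
    """
    newest = last_harvest
    harvested = []
    for pid, modified in mn_list_objects(last_harvest):
        harvested.append(pid)   # index this record on the CN
        if modified > newest:
            newest = modified   # advance the harvest watermark
    return harvested, newest
```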