DataONE Tasks: Issues
https://redmine.dataone.org/
2020-08-06T00:06:07Z
Redmine CN REST - Bug #8867 (New): CNCore.listChecksumAlgorithms() returns incorrect list
https://redmine.dataone.org/issues/8867
2020-08-06T00:06:07Z
Matthew Jones <jones@nceas.ucsb.edu>
<p>The definition of the ChecksumAlgorithm type in SystemMetadata allows any checksum algorithm listed in the Library of Congress vocabulary, but the current CNCore.listChecksumAlgorithms() implementation returns only two: MD5 and SHA-1. This needs to be corrected to include the full list of supported algorithms (see <a href="http://id.loc.gov/vocabulary/preservation/cryptographicHashFunctions.html">http://id.loc.gov/vocabulary/preservation/cryptographicHashFunctions.html</a>).</p>
<p>The list is implemented in a property file, which needs to be updated with the correct set of algorithms. The file (d1_cn_rest/src/test/resources/org/dataone/configuration/node.properties) currently contains:</p>
<p><code>cn.checksumAlgorithmList=SHA-1;MD5</code></p>
<p>But it should contain all of the other valid algorithms as well from the LoC.</p>
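<p>As an illustrative sketch only — the algorithm names below are drawn from the LoC vocabulary, and the exact set the CNs can actually compute should be confirmed before changing the property — the expanded list might look like:</p>
<pre>cn.checksumAlgorithmList=SHA-1;MD5;SHA-256;SHA-384;SHA-512;MD2;CRC32;HAVAL;TIGER;WHIRLPOOL
</pre>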
Infrastructure - Task #8824 (In Progress): Install Apache 2.4.39 and disable TLSv1.3 on CNs
https://redmine.dataone.org/issues/8824
2019-06-19T02:10:27Z
Dave Vieglais <dave.vieglais@gmail.com>
<p>Apache 2.4.39 is needed to avoid a lengthy connection time; however, it offers TLSv1.3, which seems to break at least the Python client.</p>
<p>Task is to install 2.4.39 but disable TLSv1.3 for now with:</p>
<pre>SSLProtocol all -TLSv1.3 -SSLv2 -SSLv3
</pre>
<p>in <code>/etc/apache2/mods-enabled/ssl.conf</code></p>
Infrastructure - Story #8823 (New): Recent Apache and OpenSSL combinations break connectivity on ...
https://redmine.dataone.org/issues/8823
2019-06-19T02:03:44Z
Dave Vieglais <dave.vieglais@gmail.com>
<p>The latest Ubuntu 18.04 release of Apache is 2.4.29 and OpenSSL is 1.1.1.</p>
<p>This combination creates a significant delay in TLS renegotiation that results from the Apache config option on the CNs:</p>
<pre>SSLVerifyClient none
<Location "/cn">
<If " ! ( %{HTTP_USER_AGENT} =~ /(windows|chrome|mozilla|safari|webkit)/i )">
SSLVerifyClient optional
</If>
</Location>
</pre>
<p>This configuration is intended to disable client certificate authentication for web browsers while allowing it for other clients. The approach worked fine on older Apache / OpenSSL combinations, but the new combination introduces a several-second wait when the server discovers the client is not a web browser and tells it to reconnect with the option of including a client certificate.</p>
<p>The latest released version of Apache is 2.4.39 and this is available through a PPA intended for Debian developers. This has been installed so far on dev-2, sandbox, stage, and stage-2 with the process:</p>
<pre>sudo add-apt-repository ppa:ondrej/apache2
sudo apt update
sudo apt dist-upgrade
sudo systemctl restart apache2
</pre>
<p>This installs Apache 2.4.39 and OpenSSL 1.1.1c which appears to resolve the apparent bug in the 2.4.29 / 1.1.1 combination.</p>
<p>One issue with the update is that Apache now offers TLSv1.3 by default, which would be welcome except that it appears to cause problems: at least Python clients fail to connect and receive a 403 error. For example:</p>
<pre>$ python3
>>> import requests
>>> r = requests.get("https://cn-sandbox-ucsb-1.test.dataone.org/cn/v2/monitor/ping")
>>> r.status_code
403
</pre>
<p>That TLSv1.3 is the problem was verified on cn-stage-unm-2 by configuring Apache with:</p>
<pre> SSLProtocol all -TLSv1.3 -SSLv2 -SSLv3
</pre>
<p>to disable TLSv1.3. After this change the Python client was able to connect as expected.</p>
<p>The root cause of the TLSv1.3 failure has not yet been researched.</p>
<p>It is not clear if this issue applies to other clients such as R and Java, so until we learn one way or the other, TLSv1.3 will be disabled on the CNs.</p>
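<p>Until the root cause is understood, a Python client can also sidestep the problem on its own side by capping the negotiated protocol at TLSv1.2. A minimal sketch using the standard-library <code>ssl</code> module (Python 3.7+; the hostname in the comment is the sandbox CN from the example above, and note that <code>requests</code> needs an adapter to accept a custom context, so <code>http.client</code> is shown instead):</p>

```python
import ssl

# Build a client context that refuses to negotiate TLSv1.3,
# forcing the handshake down to TLSv1.2 or lower.
ctx = ssl.create_default_context()
ctx.maximum_version = ssl.TLSVersion.TLSv1_2

# The context can then be passed to http.client / urllib, e.g.:
# import http.client
# conn = http.client.HTTPSConnection(
#     "cn-sandbox-ucsb-1.test.dataone.org", context=ctx)
# conn.request("GET", "/cn/v2/monitor/ping")
```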
<p>--This issue will likely apply to Member Nodes as well once TLSv1.3 is generally available or if MNs choose to install Apache 2.4.39.-- CORRECTION: this issue only applies when attempting to renegotiate TLS after headers have been transferred, so will not typically apply to a MN.</p>
Infrastructure - Bug #8696 (New): double indexing of a resource map and another not processed bec...
https://redmine.dataone.org/issues/8696
2018-09-12T00:18:51Z
Rob Nahf <rnahf@epscor.unm.edu>
<p>In production, the ORE 'a1a0e96a-3cde-4f3c-829c-29650b09f22b' was not processed because one of its members was also referenced by the ORE it obsoleted, 'dc39515e-440b-4673-9f63-962c7374bf48'. The task failed without being requeued. Below is the log output.</p>
<pre>rnahf@cn-orc-1:/var/log/dataone/index$ grep a1a0e96a-3cde-4f3c-829c-29650b09f22b cn-index-processor-daemon.log.*
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:27,384 (IndexTaskProcessor:saveTask:865) IndexTaskProcess.saveTask save the index task a1a0e96a-3cde-4f3c-829c-29650b09f22b
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:27,384 (IndexTaskProcessor:getNextIndexTask:610) Start of indexing pid: a1a0e96a-3cde-4f3c-829c-29650b09f22b
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:34,832 (IndexTaskProcessor:getNextIndexTask:664) the original index task - IndexTask [id=18085996, pid=a1a0e96a-3cde-4f3c-829c-29650b09f22b, formatid=http://www.openarchives.org/ore/terms, objectPath=/var/metacat/data/autogen.2018091015425874434.1, dateSysMetaModified=1536087134490, deleted=false, taskModifiedDate=1536619467383, priority=3, status=IN PROCESS]
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:34,832 (IndexTaskProcessor:getNextIndexTask:671) the new index task - IndexTask [id=18085996, pid=a1a0e96a-3cde-4f3c-829c-29650b09f22b, formatid=http://www.openarchives.org/ore/terms, objectPath=/var/metacat/data/autogen.2018091015425874434.1, dateSysMetaModified=1536087134490, deleted=false, taskModifiedDate=1536619467383, priority=3, status=IN PROCESS]
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:34,901 (IndexTaskProcessor:checkReadinessProcessResourceMap:369) ###################Another resource map is process the referenced id ee73cf7f-1005-4b89-bab9-3a7fa01d27c6 as well. So the thread to process id a1a0e96a-3cde-4f3c-829c-29650b09f22b has to wait 0.5 seconds.
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:35,402 (IndexTaskProcessor:checkReadinessProcessResourceMap:369) ###################Another resource map is process the referenced id ee73cf7f-1005-4b89-bab9-3a7fa01d27c6 as well. So the thread to process id a1a0e96a-3cde-4f3c-829c-29650b09f22b has to wait 0.5 seconds.
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:35,902 (IndexTaskProcessor:checkReadinessProcessResourceMap:369) ###################Another resource map is process the referenced id ee73cf7f-1005-4b89-bab9-3a7fa01d27c6 as well. So the thread to process id a1a0e96a-3cde-4f3c-829c-29650b09f22b has to wait 0.5 seconds.
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:36,402 (IndexTaskProcessor:checkReadinessProcessResourceMap:369) ###################Another resource map is process the referenced id ee73cf7f-1005-4b89-bab9-3a7fa01d27c6 as well. So the thread to process id a1a0e96a-3cde-4f3c-829c-29650b09f22b has to wait 0.5 seconds.
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:36,903 (IndexTaskProcessor:checkReadinessProcessResourceMap:369) ###################Another resource map is process the referenced id ee73cf7f-1005-4b89-bab9-3a7fa01d27c6 as well. So the thread to process id a1a0e96a-3cde-4f3c-829c-29650b09f22b has to wait 0.5 seconds.
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:37,403 (IndexTaskProcessor:checkReadinessProcessResourceMap:369) ###################Another resource map is process the referenced id ee73cf7f-1005-4b89-bab9-3a7fa01d27c6 as well. So the thread to process id a1a0e96a-3cde-4f3c-829c-29650b09f22b has to wait 0.5 seconds.
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:37,903 (IndexTaskProcessor:checkReadinessProcessResourceMap:369) ###################Another resource map is process the referenced id ee73cf7f-1005-4b89-bab9-3a7fa01d27c6 as well. So the thread to process id a1a0e96a-3cde-4f3c-829c-29650b09f22b has to wait 0.5 seconds.
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:38,403 (IndexTaskProcessor:checkReadinessProcessResourceMap:369) ###################Another resource map is process the referenced id ee73cf7f-1005-4b89-bab9-3a7fa01d27c6 as well. So the thread to process id a1a0e96a-3cde-4f3c-829c-29650b09f22b has to wait 0.5 seconds.
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:38,904 (IndexTaskProcessor:checkReadinessProcessResourceMap:369) ###################Another resource map is process the referenced id ee73cf7f-1005-4b89-bab9-3a7fa01d27c6 as well. So the thread to process id a1a0e96a-3cde-4f3c-829c-29650b09f22b has to wait 0.5 seconds.
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:39,404 (IndexTaskProcessor:checkReadinessProcessResourceMap:369) ###################Another resource map is process the referenced id ee73cf7f-1005-4b89-bab9-3a7fa01d27c6 as well. So the thread to process id a1a0e96a-3cde-4f3c-829c-29650b09f22b has to wait 0.5 seconds.
cn-index-processor-daemon.log.6:[ERROR] 2018-09-10 22:44:39,904 (IndexTaskProcessor:checkReadinessProcessResourceMap:384) We waited for another thread to finish indexing a resource map which has the referenced id ee73cf7f-1005-4b89-bab9-3a7fa01d27c6 for a while. Now we quited and can't index id a1a0e96a-3cde-4f3c-829c-29650b09f22b
cn-index-processor-daemon.log.6:[ERROR] 2018-09-10 22:44:39,904 (IndexTaskProcessor:processTask:297) Unable to process task for pid: a1a0e96a-3cde-4f3c-829c-29650b09f22b
cn-index-processor-daemon.log.6:java.lang.Exception: We waited for another thread to finish indexing a resource map which has the referenced id ee73cf7f-1005-4b89-bab9-3a7fa01d27c6 for a while. Now we quited and can't index id a1a0e96a-3cde-4f3c-829c-29650b09f22b
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:39,906 (IndexTaskProcessor:newOrFailedIndexTaskExists:890) IndexTaskProcess.newOrFailedIndexTaskExists for id a1a0e96a-3cde-4f3c-829c-29650b09f22b
rnahf@cn-orc-1:/var/log/dataone/index$ date
Tue Sep 11 23:46:56 UTC 2018
rnahf@cn-orc-1:/var/log/dataone/index$ grep dc39515e-440b-4673-9f63-962c7374bf48 cn-index-processor-daemon.log.*
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:12,133 (HZEventFilter:filter:127) HZEventFilter.filter - the system metadata for the index event shows shows dc39515e-440b-4673-9f63-962c7374bf48 having a newer version than the SOLR server. So this event should be granted for indexing.
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:13,347 (HZEventFilter:filter:127) HZEventFilter.filter - the system metadata for the index event shows shows dc39515e-440b-4673-9f63-962c7374bf48 having a newer version than the SOLR server. So this event should be granted for indexing.
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:18,677 (IndexTaskProcessor:saveTask:865) IndexTaskProcess.saveTask save the index task dc39515e-440b-4673-9f63-962c7374bf48
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:18,677 (IndexTaskProcessor:getNextIndexTask:610) Start of indexing pid: dc39515e-440b-4673-9f63-962c7374bf48
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:25,783 (IndexTaskProcessor:getNextIndexTask:664) the original index task - IndexTask [id=18086020, pid=dc39515e-440b-4673-9f63-962c7374bf48, formatid=http://www.openarchives.org/ore/terms, objectPath=/var/metacat/data/autogen.2017072514144216470.1, dateSysMetaModified=1536087137440, deleted=false, taskModifiedDate=1536619458675, priority=3, status=IN PROCESS]
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:25,783 (IndexTaskProcessor:getNextIndexTask:671) the new index task - IndexTask [id=18086020, pid=dc39515e-440b-4673-9f63-962c7374bf48, formatid=http://www.openarchives.org/ore/terms, objectPath=/var/metacat/data/autogen.2017072514144216470.1, dateSysMetaModified=1536087137440, deleted=false, taskModifiedDate=1536619458675, priority=3, status=IN PROCESS]
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:27,221 (IndexTaskProcessor:processTask:284) *********************start to process update index task for dc39515e-440b-4673-9f63-962c7374bf48 in thread 20
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:43,513 (IndexTaskProcessor:processTask:288) *********************end to process update index task for dc39515e-440b-4673-9f63-962c7374bf48 in thread 20
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:44:43,519 (IndexTaskProcessor:processTask:315) Indexing complete for pid: dc39515e-440b-4673-9f63-962c7374bf48
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:46:11,604 (IndexTaskProcessor:saveTask:865) IndexTaskProcess.saveTask save the index task dc39515e-440b-4673-9f63-962c7374bf48
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:46:11,604 (IndexTaskProcessor:getNextIndexTask:610) Start of indexing pid: dc39515e-440b-4673-9f63-962c7374bf48
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:46:18,731 (IndexTaskProcessor:getNextIndexTask:664) the original index task - IndexTask [id=18086015, pid=dc39515e-440b-4673-9f63-962c7374bf48, formatid=http://www.openarchives.org/ore/terms, objectPath=/var/metacat/data/autogen.2017072514144216470.1, dateSysMetaModified=1536087137440, deleted=false, taskModifiedDate=1536619571603, priority=3, status=IN PROCESS]
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:46:18,732 (IndexTaskProcessor:getNextIndexTask:671) the new index task - IndexTask [id=18086015, pid=dc39515e-440b-4673-9f63-962c7374bf48, formatid=http://www.openarchives.org/ore/terms, objectPath=/var/metacat/data/autogen.2017072514144216470.1, dateSysMetaModified=1536087137440, deleted=false, taskModifiedDate=1536619571603, priority=3, status=IN PROCESS]
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:46:20,164 (IndexTaskProcessor:processTask:284) *********************start to process update index task for dc39515e-440b-4673-9f63-962c7374bf48 in thread 20
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:46:36,252 (IndexTaskProcessor:processTask:288) *********************end to process update index task for dc39515e-440b-4673-9f63-962c7374bf48 in thread 20
cn-index-processor-daemon.log.6:[ INFO] 2018-09-10 22:46:36,255 (IndexTaskProcessor:processTask:315) Indexing complete for pid: dc39515e-440b-4673-9f63-962c7374bf48
cn-index-processor-daemon.log.7:[ INFO] 2018-09-10 21:44:09,798 (HZEventFilter:compareRaplicaList:256) HZEventFilter.compareReplicaList - the system metadata for the index event shows dc39515e-440b-4673-9f63-962c7374bf48 having the same replica list as the solr doc.
cn-index-processor-daemon.log.7:[ INFO] 2018-09-10 21:44:09,798 (HZEventFilter:filter:164) HZEventFilter.filter - the system metadata for the index event shows dc39515e-440b-4673-9f63-962c7374bf48 having the same modification date as the SOLR server. Also both have the same replica list. So this event has been filtered out for indexing (no indexing).
rnahf@cn-orc-1:/var/log/dataone/index$
</pre>
Infrastructure - Story #8525 (In Progress): timeout exceptions thrown from Hazelcast disable sync...
https://redmine.dataone.org/issues/8525
2018-03-27T22:36:54Z
Rob Nahf <rnahf@epscor.unm.edu>
<p>Very occasionally, synchronization disables itself when RuntimeExceptions bubble up. The most common of these is when the Hazelcast client seemingly disconnects, or can't complete an operation, and a java.util.concurrent.TimeoutException is thrown.</p>
<p>These are usually due to network problems, as evidenced by timeout exceptions appearing in both the Metacat hazelcast-storage.log files as well as d1-processing logs.</p>
<p>Temporary problems like this should be recoverable, so a retry or bypass for those timeouts should be implemented. It is not clear whether a new HazelcastClient should be instantiated or whether the same client remains usable. (Is the client tightly bound to a session, or does it recover?) If a new client is needed, preliminary searching through the code indicates that HazelcastClientFactory.getProcessingClient() is only used in a few places, and the singleton behavior it relies on can be sidestepped by removing the method and replacing it with a getLock() wrapper method (that seems to be the dominant use case for it). See the newer SyncQueueFacade in d1_synchronization for guidance on that. If the client is never exposed, it can be refreshed as needed.</p>
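<p>The proposed retry could be a bounded wrapper around the Hazelcast call, so a transient timeout is retried a few times before the exception is allowed to bubble up and disable synchronization. A language-agnostic sketch in Python (the function names and retry counts are hypothetical; the real change would live in the d1_synchronization Java code, with <code>TimeoutError</code> standing in for java.util.concurrent.TimeoutException):</p>

```python
import time

def call_with_retry(op, retries=3, delay_seconds=1.0):
    """Invoke op(), retrying on timeout instead of letting the
    exception bubble up and disable synchronization."""
    last_error = None
    for _attempt in range(retries):
        try:
            return op()
        except TimeoutError as e:
            last_error = e
            time.sleep(delay_seconds)
    # All attempts timed out; now it is a real failure.
    raise last_error

# Hypothetical example: a queue poll that times out twice, then succeeds.
calls = {"n": 0}
def flaky_poll():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("Operation Timeout (with no response!)")
    return "task-123"
```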
<pre>root@cn-unm-1:/var/metacat/logs# grep FATAL hazelcast-storage.log.1
[FATAL] 2018-03-27 03:15:19,380 (BaseManager$2:run:1402) [64.106.40.6]:5701 [DataONE] Caught error while calling event listener; cause: [CONCURRENT_MAP_CONTAINS_KEY] Operation Timeout (with no response!): 0
</pre><pre>[ERROR] 2018-03-27 03:15:19,781 [ProcessDaemonTask1] (SyncObjectTaskManager:run:84) java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.util.concurrent
.TimeoutException: [CONCURRENT_MAP_REMOVE] Operation Timeout (with no response!): 0
java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.util.concurrent.TimeoutException: [CONCURRENT_MAP_REMOVE] Operation Timeout (with no response!): 0
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at org.dataone.cn.batch.synchronization.SyncObjectTaskManager.run(SyncObjectTaskManager.java:76)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.RuntimeException: java.util.concurrent.TimeoutException: [CONCURRENT_MAP_REMOVE] Operation Timeout (with no response!): 0
at com.hazelcast.impl.ClientServiceException.readData(ClientServiceException.java:63)
at com.hazelcast.nio.Serializer$DataSerializer.read(Serializer.java:104)
at com.hazelcast.nio.Serializer$DataSerializer.read(Serializer.java:79)
at com.hazelcast.nio.AbstractSerializer.toObject(AbstractSerializer.java:121)
at com.hazelcast.nio.AbstractSerializer.toObject(AbstractSerializer.java:156)
at com.hazelcast.client.ClientThreadContext.toObject(ClientThreadContext.java:72)
at com.hazelcast.client.IOUtil.toObject(IOUtil.java:34)
at com.hazelcast.client.ProxyHelper.getValue(ProxyHelper.java:186)
at com.hazelcast.client.ProxyHelper.doOp(ProxyHelper.java:146)
at com.hazelcast.client.ProxyHelper.doOp(ProxyHelper.java:140)
at com.hazelcast.client.QueueClientProxy.innerPoll(QueueClientProxy.java:115)
at com.hazelcast.client.QueueClientProxy.poll(QueueClientProxy.java:111)
at org.dataone.cn.batch.synchronization.type.SyncQueueFacade.poll(SyncQueueFacade.java:231)
at org.dataone.cn.batch.synchronization.tasks.SyncObjectTask.call(SyncObjectTask.java:131)
at org.dataone.cn.batch.synchronization.tasks.SyncObjectTask.call(SyncObjectTask.java:73)
</pre>
Infrastructure - Bug #8195 (In Progress): Tomcat out of memory error
https://redmine.dataone.org/issues/8195
2017-10-17T22:19:48Z
Dave Vieglais <dave.vieglais@gmail.com>
<p>Tomcat crashed on cn-orc-1 with an out of memory error:</p>
<pre>INFO: [160.36.13.150]:5703 [DataONE] 5703 accepted socket connection from /0:0:0:0:0:0:0:1:35336
Oct 17, 2017 8:44:38 PM com.hazelcast.nio.Connection
INFO: [160.36.13.150]:5703 [DataONE] Connection [/0:0:0:0:0:0:0:1:35336] lost. Reason: java.io.EOFException[null]
Oct 17, 2017 8:46:01 PM com.hazelcast.client.ProxyHelper
INFO: There is no response for Call [76063] operation=GET_MEMBERS in 5 seconds.
Exception in thread "ContainerBackgroundProcessor[StandardEngine[Catalina]]" java.lang.OutOfMemoryError: Java heap space
20171017-20:57:05: [WARN]: [160.36.13.150]:5702 [hzProcess] Master node has timed out its heartbeat and will be removed [com.hazelcast.cluster.ClusterManager]
Oct 17, 2017 8:45:47 PM com.hazelcast.client.ConnectionManager
WARNING: Server didn't respond to client's ping call within 10 seconds!
Oct 17, 2017 8:51:41 PM com.hazelcast.client.ProxyHelper
INFO: There is no response for Call [76063] operation=GET_MEMBERS in 10 seconds.
Oct 17, 2017 8:57:09 PM com.hazelcast.nio.Connection
INFO: [160.36.13.150]:5703 [DataONE] Connection [Address[128.111.54.80]:5703] lost. Reason: java.io.EOFException[null]
In D1URLFilter.
HTTP Verb: GET
original pathInfo: /object
original requestURI: /metacat/d1/cn/v1/object
stripping /metacat/d1/cn/v1 from requestURI
new pathinfo: /object
Exception in thread "Thread-77686" java.lang.OutOfMemoryError: Java heap space
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "ajp-bio-8009-exec-49"
Exception in thread "Timer-3" java.lang.OutOfMemoryError: Java heap space
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "FileWatchdog"
Exception in thread "hz.dataonePortalInstance.ServiceThread" java.lang.OutOfMemoryError: Java heap space
Exception in thread "ajp-bio-8009-exec-5" java.lang.OutOfMemoryError: Java heap space
Oct 17, 2017 9:03:27 PM com.hazelcast.nio.InOutSelector
WARNING: [160.36.13.150]:5703 [DataONE] Selector exception at hz.dataonePortalInstance.IO.thread-2, cause= java.lang.OutOfMemoryError: Java heap space
java.lang.OutOfMemoryError: Java heap space
Exception in thread "ajp-bio-8009-exec-307" java.lang.OutOfMemoryError: Java heap space
Exception in thread "hz._hzInstance_1_DataONE.cached.thread-6293" java.lang.NullPointerException
Exception in thread "hz._hzInstance_1_DataONE.cached.thread-6289" Oct 17, 2017 9:08:42 PM com.hazelcast.nio.InOutSelector
WARNING: [160.36.13.150]:5703 [DataONE] Selector exception at hz.dataonePortalInstance.IO.thread-1, cause= java.lang.NullPointerException
java.lang.NullPointerException
at org.apache.catalina.loader.WebappClassLoader.findResourceInternal(WebappClassLoader.java:3124)
at org.apache.catalina.loader.WebappClassLoader.findClassInternal(WebappClassLoader.java:2901)
at org.apache.catalina.loader.WebappClassLoader.findClass(WebappClassLoader.java:1209)
at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1689)
at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1570)
at com.hazelcast.impl.OutOfMemoryErrorDispatcher$DefaultOutOfMemoryHandler.onOutOfMemory(OutOfMemoryErrorDispatcher.java:79)
at com.hazelcast.impl.OutOfMemoryErrorDispatcher.onOutOfMemory(OutOfMemoryErrorDispatcher.java:70)
at com.hazelcast.nio.NodeIOService.onOutOfMemory(NodeIOService.java:56)
at com.hazelcast.nio.AbstractSelectionHandler.handleSocketException(AbstractSelectionHandler.java:58)
at com.hazelcast.nio.ReadHandler.handle(ReadHandler.java:92)
at com.hazelcast.nio.InOutSelector.run(InOutSelector.java:142)
at java.lang.Thread.run(Thread.java:748)
Exception in thread "hz.dataonePortalInstance.cached.thread-5" at org.apache.catalina.loader.WebappClassLoader.findResourceInternal(WebappClassLoader.java:3124)
at org.apache.catalina.loader.WebappClassLoader.findClassInternal(WebappClassLoader.java:2901)
at org.apache.catalina.loader.WebappClassLoader.findClass(WebappClassLoader.java:1209)
at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1689)
at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1570)
at com.hazelcast.impl.OutOfMemoryErrorDispatcher$DefaultOutOfMemoryHandler.onOutOfMemory(OutOfMemoryErrorDispatcher.java:79)
at com.hazelcast.impl.OutOfMemoryErrorDispatcher.onOutOfMemory(OutOfMemoryErrorDispatcher.java:70)
at com.hazelcast.impl.ThreadContext.get(ThreadContext.java:67)
at com.hazelcast.impl.ExecutorManager.threadPoolBeforeExecute(ExecutorManager.java:396)
at com.hazelcast.impl.ExecutorManager.access$000(ExecutorManager.java:44)
at com.hazelcast.impl.ExecutorManager$1.beforeExecute(ExecutorManager.java:87)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
at com.hazelcast.impl.ExecutorThreadFactory$1.run(ExecutorThreadFactory.java:38)
Oct 17, 2017 9:09:03 PM com.hazelcast.nio.InOutSelector
WARNING: [160.36.13.150]:5703 [DataONE] unhandled exception in hz.dataonePortalInstance.IO.thread-2
java.lang.NullPointerException
at org.apache.catalina.loader.WebappClassLoader.findResourceInternal(WebappClassLoader.java:3124)
at org.apache.catalina.loader.WebappClassLoader.findClassInternal(WebappClassLoader.java:2901)
at org.apache.catalina.loader.WebappClassLoader.findClass(WebappClassLoader.java:1209)
at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1689)
at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1570)
at com.hazelcast.impl.OutOfMemoryErrorDispatcher$DefaultOutOfMemoryHandler.onOutOfMemory(OutOfMemoryErrorDispatcher.java:79)
at com.hazelcast.impl.OutOfMemoryErrorDispatcher.onOutOfMemory(OutOfMemoryErrorDispatcher.java:70)
at com.hazelcast.nio.NodeIOService.onOutOfMemory(NodeIOService.java:56)
at com.hazelcast.nio.InOutSelector.handleSelectorException(InOutSelector.java:171)
at com.hazelcast.nio.InOutSelector.run(InOutSelector.java:150)
at java.lang.Thread.run(Thread.java:748)
java.lang.NullPointerException
at org.apache.catalina.loader.WebappClassLoader.findResourceInternal(WebappClassLoader.java:3124)
Oct 17, 2017 9:09:21 PM com.hazelcast.nio.Connection
INFO: [160.36.13.150]:5703 [DataONE] Connection [Address[64.106.40.6]:5703] lost. Reason: java.lang.IllegalArgumentException[null]
Exception in thread "hz._hzInstance_1_DataONE.cached.thread-6292" Exception in thread "hz._hzInstance_1_DataONE.cached.thread-6285" Oct 17, 2017 9:09:28 PM com.hazelcast.nio.InOutSelector
WARNING: [160.36.13.150]:5703 [DataONE] Selector exception at hz.dataonePortalInstance.IO.thread-1, cause= java.lang.NullPointerException
java.lang.NullPointerException
at org.apache.catalina.loader.WebappClassLoader.findResourceInternal(WebappClassLoader.java:3124)
at org.apache.catalina.loader.WebappClassLoader.findClassInternal(WebappClassLoader.java:2901)
at org.apache.catalina.loader.WebappClassLoader.findClass(WebappClassLoader.java:1209)
at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1689)
at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1570)
at com.hazelcast.nio.NodeIOService.disconnectExistingCalls(NodeIOService.java:180)
at com.hazelcast.nio.Connection.close(Connection.java:218)
at com.hazelcast.nio.AbstractSelectionHandler.handleSocketException(AbstractSelectionHandler.java:63)
at com.hazelcast.nio.ReadHandler.handle(ReadHandler.java:92)
at com.hazelcast.nio.InOutSelector.run(InOutSelector.java:142)
at java.lang.Thread.run(Thread.java:748)
at org.apache.catalina.loader.WebappClassLoader.findClassInternal(WebappClassLoader.java:2901)
at org.apache.catalina.loader.WebappClassLoader.findClass(WebappClassLoader.java:1209)
at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1689)
at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1570)
at com.hazelcast.impl.OutOfMemoryErrorDispatcher$DefaultOutOfMemoryHandler.onOutOfMemory(OutOfMemoryErrorDispatcher.java:79)
at com.hazelcast.impl.OutOfMemoryErrorDispatcher.onOutOfMemory(OutOfMemoryErrorDispatcher.java:70)
at com.hazelcast.impl.ExecutorThreadFactory$1.run(ExecutorThreadFactory.java:40)
java.lang.NullPointerException
at org.apache.catalina.loader.WebappClassLoader.findResourceInternal(WebappClassLoader.java:3124)
at org.apache.catalina.loader.WebappClassLoader.findClassInternal(WebappClassLoader.java:2901)
at org.apache.catalina.loader.WebappClassLoader.findClass(WebappClassLoader.java:1209)
at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1689)
at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1570)
Exception in thread "hz._hzInstance_1_DataONE.cached.thread-6288" Exception in thread "hz._hzInstance_1_DataONE.cached.thread-6290" Exception in thread "hz._hzInstance_1_DataONE.cached.thread-6284" Exception in thread "hz._hzInstance_1_DataONE.cached.thread-6286" at com.hazelcast.impl.OutOfMemoryErrorDispatcher$DefaultOutOfMemoryHandler.onOutOfMemory(OutOfMemoryErrorDispatcher.java:79)
at com.hazelcast.impl.OutOfMemoryErrorDispatcher.onOutOfMemory(OutOfMemoryErrorDispatcher.java:70)
at com.hazelcast.impl.ThreadContext.get(ThreadContext.java:67)
at com.hazelcast.impl.ExecutorManager.threadPoolBeforeExecute(ExecutorManager.java:396)
at com.hazelcast.impl.ExecutorManager.access$000(ExecutorManager.java:44)
at com.hazelcast.impl.ExecutorManager$1.beforeExecute(ExecutorManager.java:87)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
at com.hazelcast.impl.ExecutorThreadFactory$1.run(ExecutorThreadFactory.java:38)
java.lang.OutOfMemoryError: Java heap space
Exception in thread "hz._hzInstance_1_DataONE.cached.thread-6282" java.lang.NullPointerException
at org.apache.catalina.loader.WebappClassLoader.findResourceInternal(WebappClassLoader.java:3124)
at org.apache.catalina.loader.WebappClassLoader.findClassInternal(WebappClassLoader.java:2901)
at org.apache.catalina.loader.WebappClassLoader.findClass(WebappClassLoader.java:1209)
at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1689)
at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1570)
at com.hazelcast.impl.OutOfMemoryErrorDispatcher$DefaultOutOfMemoryHandler.onOutOfMemory(OutOfMemoryErrorDispatcher.java:79)
at com.hazelcast.impl.OutOfMemoryErrorDispatcher.onOutOfMemory(OutOfMemoryErrorDispatcher.java:70)
at com.hazelcast.impl.ExecutorThreadFactory$1.run(ExecutorThreadFactory.java:40)
java.lang.NullPointerException
at org.apache.catalina.loader.WebappClassLoader.findResourceInternal(WebappClassLoader.java:3124)
at org.apache.catalina.loader.WebappClassLoader.findClassInternal(WebappClassLoader.java:2901)
at org.apache.catalina.loader.WebappClassLoader.findClass(WebappClassLoader.java:1209)
at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1689)
at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1570)
at com.hazelcast.impl.OutOfMemoryErrorDispatcher$DefaultOutOfMemoryHandler.onOutOfMemory(OutOfMemoryErrorDispatcher.java:79)
at com.hazelcast.impl.OutOfMemoryErrorDispatcher.onOutOfMemory(OutOfMemoryErrorDispatcher.java:70)
at com.hazelcast.impl.ThreadContext.get(ThreadContext.java:67)
at com.hazelcast.impl.ExecutorManager.threadPoolBeforeExecute(ExecutorManager.java:396)
at com.hazelcast.impl.ExecutorManager.access$000(ExecutorManager.java:44)
Exception in thread "hz._hzInstance_1_DataONE.IO.thread-Acceptor" at com.hazelcast.impl.ExecutorManager$1.beforeExecute(ExecutorManager.java:87)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
at com.hazelcast.impl.ExecutorThreadFactory$1.run(ExecutorThreadFactory.java:38)
java.lang.NullPointerException
at org.apache.catalina.loader.WebappClassLoader.findResourceInternal(WebappClassLoader.java:3124)
at org.apache.catalina.loader.WebappClassLoader.findClassInternal(WebappClassLoader.java:2901)
at org.apache.catalina.loader.WebappClassLoader.findClass(WebappClassLoader.java:1209)
at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1689)
at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1570)
at com.hazelcast.impl.OutOfMemoryErrorDispatcher$DefaultOutOfMemoryHandler.onOutOfMemory(OutOfMemoryErrorDispatcher.java:79)
at com.hazelcast.impl.OutOfMemoryErrorDispatcher.onOutOfMemory(OutOfMemoryErrorDispatcher.java:70)
at com.hazelcast.impl.ThreadContext.get(ThreadContext.java:67)
at com.hazelcast.impl.ExecutorManager.threadPoolBeforeExecute(ExecutorManager.java:396)
at com.hazelcast.impl.ExecutorManager.access$000(ExecutorManager.java:44)
at com.hazelcast.impl.ExecutorManager$1.beforeExecute(ExecutorManager.java:87)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
at com.hazelcast.impl.ExecutorThreadFactory$1.run(ExecutorThreadFactory.java:38)
java.lang.NullPointerException
at org.apache.catalina.loader.WebappClassLoader.findResourceInternal(WebappClassLoader.java:3124)
at org.apache.catalina.loader.WebappClassLoader.findClassInternal(WebappClassLoader.java:2901)
at org.apache.catalina.loader.WebappClassLoader.findClass(WebappClassLoader.java:1209)
at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1689)
at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1570)
at com.hazelcast.impl.OutOfMemoryErrorDispatcher$DefaultOutOfMemoryHandler.onOutOfMemory(OutOfMemoryErrorDispatcher.java:79)
at com.hazelcast.impl.OutOfMemoryErrorDispatcher.onOutOfMemory(OutOfMemoryErrorDispatcher.java:70)
at com.hazelcast.nio.SocketAcceptor.run(SocketAcceptor.java:66)
at java.lang.Thread.run(Thread.java:748)
Oct 17, 2017 9:11:29 PM com.hazelcast.nio.SocketAcceptor
INFO: [160.36.13.150]:5703 [DataONE] 5703 is accepting socket connection from /0:0:0:0:0:0:0:1:46066
In D1URLFilter.
HTTP Verb: GET
original pathInfo: /log
original requestURI: /metacat/d1/cn/v2/log
stripping /metacat/d1/cn/v2 from requestURI
new pathinfo: /log
Oct 17, 2017 9:11:33 PM com.hazelcast.nio.ConnectionManager
INFO: [160.36.13.150]:5703 [DataONE] 5703 accepted socket connection from /0:0:0:0:0:0:0:1:46066
Oct 17, 2017 9:11:49 PM com.hazelcast.nio.SocketAcceptor
INFO: [160.36.13.150]:5703 [DataONE] 5703 is accepting socket connection from /0:0:0:0:0:0:0:1:46311
Oct 17, 2017 9:11:49 PM com.hazelcast.nio.ConnectionManager
INFO: [160.36.13.150]:5703 [DataONE] 5703 accepted socket connection from /0:0:0:0:0:0:0:1:46311
Oct 17, 2017 9:11:49 PM com.hazelcast.nio.Connection
INFO: [160.36.13.150]:5703 [DataONE] Connection [/0:0:0:0:0:0:0:1:46311] lost. Reason: java.io.EOFException[null]
20171017-21:12:10: [WARN]: [160.36.13.150]:5702 [hzProcess] Master node has timed out its heartbeat and will be removed [com.hazelcast.cluster.ClusterManager]
Oct 17, 2017 9:12:10 PM com.hazelcast.nio.SocketAcceptor
INFO: [160.36.13.150]:5703 [DataONE] 5703 is accepting socket connection from /0:0:0:0:0:0:0:1:46487
Oct 17, 2017 9:12:10 PM com.hazelcast.nio.ConnectionManager
INFO: [160.36.13.150]:5703 [DataONE] 5703 accepted socket connection from /0:0:0:0:0:0:0:1:46487
Oct 17, 2017 9:12:10 PM com.hazelcast.nio.SocketAcceptor
INFO: [160.36.13.150]:5703 [DataONE] 5703 is accepting socket connection from /0:0:0:0:0:0:0:1:46489
Oct 17, 2017 9:12:10 PM com.hazelcast.nio.ConnectionManager
INFO: [160.36.13.150]:5703 [DataONE] 5703 accepted socket connection from /0:0:0:0:0:0:0:1:46489
Oct 17, 2017 9:12:13 PM com.hazelcast.nio.SocketAcceptor
INFO: [160.36.13.150]:5703 [DataONE] 5703 is accepting socket connection from /0:0:0:0:0:0:0:1:46488
Oct 17, 2017 9:12:13 PM com.hazelcast.nio.ConnectionManager
INFO: [160.36.13.150]:5703 [DataONE] 5703 accepted socket connection from /0:0:0:0:0:0:0:1:46488
20171017-21:12:22: [WARN]: [160.36.13.150]:5702 [hzProcess] ExecutorService is rejecting an execution. java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@732a0f33 [com.hazelcast.impl.ExecutorManager]
Oct 17, 2017 9:12:26 PM com.hazelcast.nio.Connection
INFO: [160.36.13.150]:5703 [DataONE] Connection [/0:0:0:0:0:0:0:1:46488] lost. Reason: java.io.EOFException[null]
20171017-21:12:26: [WARN]: [160.36.13.150]:5702 [hzProcess] ExecutorService is rejecting an execution. com.hazelcast.impl.SplitBrainHandler$1@1af304f [com.hazelcast.impl.ExecutorManager]
20171017-21:12:29: [WARN]: [160.36.13.150]:5702 [hzProcess] This is the master node and received a PartitionRuntimeState from Address[128.111.54.80]:5702. Ignoring incoming state! [com.hazelcast.impl.PartitionManager]
Oct 17, 2017 9:12:36 PM com.hazelcast.nio.Connection
INFO: [160.36.13.150]:5703 [DataONE] Connection [/0:0:0:0:0:0:0:1:46487] lost. Reason: java.io.EOFException[null]
In D1URLFilter.
HTTP Verb: GET
original pathInfo: /log
original requestURI: /metacat/d1/cn/v2/log
stripping /metacat/d1/cn/v2 from requestURI
new pathinfo: /log
20171017-21:13:09: [WARN]: [160.36.13.150]:5702 [hzProcess] hz.hzProcessInstance.IO.thread-2 Closing socket to endpoint Address[128.111.54.80]:5702, Cause:java.lang.OutOfMemoryError: Java heap space [com.hazelcast.nio.ReadHandler]
java.lang.OutOfMemoryError: Java heap space
at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
at com.hazelcast.nio.DataHolder.<init>(DataHolder.java:28)
at com.hazelcast.nio.Packet.read(Packet.java:379)
at com.hazelcast.nio.SocketPacketReader$DefaultPacketReader.readPacket(SocketPacketReader.java:88)
at com.hazelcast.nio.SocketPacketReader.read(SocketPacketReader.java:62)
at com.hazelcast.nio.ReadHandler.handle(ReadHandler.java:85)
at com.hazelcast.nio.InOutSelector.run(InOutSelector.java:142)
at java.lang.Thread.run(Thread.java:748)
20171017-21:13:09: [WARN]: [160.36.13.150]:5702 [hzProcess] hz.hzProcessInstance.IO.thread-1 Closing socket to endpoint Address[64.106.40.6]:5702, Cause:java.lang.OutOfMemoryError: Java heap space [com.hazelcast.nio.ReadHandler]
java.lang.OutOfMemoryError: Java heap space
In D1URLFilter.
HTTP Verb: GET
original pathInfo: /log
original requestURI: /metacat/d1/cn/v2/log
stripping /metacat/d1/cn/v2 from requestURI
new pathinfo: /log
In D1URLFilter.
HTTP Verb: GET
In D1URLFilter.
HTTP Verb: GET
original pathInfo: /object
original requestURI: /metacat/d1/cn/v1/object
stripping /metacat/d1/cn/v1 from requestURI
new pathinfo: /object
original pathInfo: /log
original requestURI: /metacat/d1/cn/v2/log
stripping /metacat/d1/cn/v2 from requestURI
new pathinfo: /log
In D1URLFilter.
HTTP Verb: GET
original pathInfo: /log
original requestURI: /metacat/d1/cn/v2/log
stripping /metacat/d1/cn/v2 from requestURI
new pathinfo: /log
In D1URLFilter.
HTTP Verb: GET
original pathInfo: /log
original requestURI: /metacat/d1/cn/v2/log
stripping /metacat/d1/cn/v2 from requestURI
new pathinfo: /log
java.lang.NullPointerException
at org.apache.catalina.loader.WebappClassLoader.findResourceInternal(WebappClassLoader.java:3124)
at org.apache.catalina.loader.WebappClassLoader.findClassInternal(WebappClassLoader.java:2901)
at org.apache.catalina.loader.WebappClassLoader.findClass(WebappClassLoader.java:1209)
at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1689)
at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1570)
at edu.ucsb.nceas.metacat.replication.ReplicationService.getURLStream(ReplicationService.java:2341)
at edu.ucsb.nceas.metacat.replication.ReplicationService.getURLContent(ReplicationService.java:2299)
at edu.ucsb.nceas.metacat.replication.ReplicationService.handleForceReplicateDataFileRequest(ReplicationService.java:936)
at edu.ucsb.nceas.metacat.replication.ReplicationServlet.handleGetOrPost(ReplicationServlet.java:157)
at edu.ucsb.nceas.metacat.replication.ReplicationServlet.doGet(ReplicationServlet.java:87)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:620)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:303)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:220)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:122)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:501)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:170)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:98)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:116)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:408)
at org.apache.coyote.ajp.AjpProcessor.process(AjpProcessor.java:193)
at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:607)
at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:313)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
java.lang.NullPointerException
In D1URLFilter.
HTTP Verb: GET
original pathInfo: /object
original requestURI: /metacat/d1/cn/v1/object
stripping /metacat/d1/cn/v1 from requestURI
new pathinfo: /object
</pre> Infrastructure - Story #8173 (New): add checks for retrograde systemMetadata changeshttps://redmine.dataone.org/issues/81732017-09-01T19:42:33ZRob Nahfrnahf@epscor.unm.edu
<p>With the ability to prioritize tasks and the introduction of parallelized index task processing, the effective queue is no longer guaranteed to be time-ordered. If two valid system metadata changes produce two index tasks and the later change reaches the index first, the earlier task should be rejected, since its changes are out of date.</p>
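<p>The rejection rule described above can be sketched as a small guard. This is a minimal illustration with hypothetical names, not the actual index-processor API: it tracks the newest system-metadata modification date applied per pid and rejects any task carrying an older (or equal, i.e. duplicate) date.</p>

```java
import java.util.Date;
import java.util.HashMap;
import java.util.Map;

/**
 * Hypothetical sketch of a retrograde-change guard for index tasks:
 * remember the newest dateSysMetadataModified applied per pid and
 * reject tasks that arrive out of order.
 */
public class RetrogradeGuard {
    private final Map<String, Date> lastApplied = new HashMap<>();

    /** Returns true if the task may be applied; false if it is retrograde. */
    public synchronized boolean accept(String pid, Date dateSysMetadataModified) {
        Date prev = lastApplied.get(pid);
        if (prev != null && !dateSysMetadataModified.after(prev)) {
            return false; // an equal or newer change was already indexed
        }
        lastApplied.put(pid, dateSysMetadataModified);
        return true;
    }
}
```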
Infrastructure - Task #8026 (Testing): Portal reaches the maximum capacity in a poolhttps://redmine.dataone.org/issues/80262017-02-27T17:00:59ZJing Taotao@nceas.ucsb.edu
<p>After the portal has run for a while, CILogon authentication stops working because a connection pool reaches its maximum capacity, and Tomcat must be restarted to recover.<br>
type Exception report</p>
<p>message An exception occurred processing JSP page /setup.jsp at line 15</p>
<p>description The server encountered an internal error that prevented it from fulfilling this request.</p>
<p>exception</p>
<p>org.apache.jasper.JasperException: An exception occurred processing JSP page /setup.jsp at line 15</p>
<p>12: <%<br>
13: <br>
14: // get the certificate, if we have it<br>
15: X509Certificate certificate = PortalCertificateManager.getInstance().getCertificate(request);<br>
16: PrivateKey key = PortalCertificateManager.getInstance().getPrivateKey(request);<br>
17: <br>
18: // if we don't have a certificate, then we aren't logged in</p>
<p>Stacktrace:<br>
org.apache.jasper.servlet.JspServletWrapper.handleJspException(JspServletWrapper.java:568)<br>
org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:470)<br>
org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:395)<br>
org.apache.jasper.servlet.JspServlet.service(JspServlet.java:339)<br>
javax.servlet.http.HttpServlet.service(HttpServlet.java:727)<br>
com.hazelcast.web.WebFilter.doFilter(WebFilter.java:598)</p>
<p>root cause</p>
<p>edu.uiuc.ncsa.security.core.util.PoolException: Error: Maximum capacity of 1000 elements has been exceeded<br>
edu.uiuc.ncsa.security.core.util.Pool.realCreate(Pool.java:75)<br>
edu.uiuc.ncsa.security.core.util.Pool.pop(Pool.java:63)<br>
edu.uiuc.ncsa.security.storage.sql.SQLDatabase.getConnection(SQLDatabase.java:23)<br>
edu.uiuc.ncsa.security.storage.sql.SQLStore.get(SQLStore.java:194)<br>
edu.uiuc.ncsa.security.storage.sql.SQLStore.get(SQLStore.java:35)<br>
edu.uiuc.ncsa.myproxy.oa4mp.client.storage.AssetStoreUtil.get(AssetStoreUtil.java:43)<br>
edu.uiuc.ncsa.myproxy.oa4mp.client.storage.SQLAssetStore.get(SQLAssetStore.java:34)<br>
org.dataone.portal.PortalCertificateManager.getCredentials(PortalCertificateManager.java:196)<br>
org.dataone.portal.PortalCertificateManager.getCredentials(PortalCertificateManager.java:235)<br>
org.dataone.portal.PortalCertificateManager.getCertificate(PortalCertificateManager.java:157)<br>
org.apache.jsp.account_jsp._jspService(account_jsp.java:100)<br>
org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:70)<br>
javax.servlet.http.HttpServlet.service(HttpServlet.java:727)<br>
org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:432)<br>
org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:395)<br>
org.apache.jasper.servlet.JspServlet.service(JspServlet.java:339)<br>
javax.servlet.http.HttpServlet.service(HttpServlet.java:727)<br>
com.hazelcast.web.WebFilter.doFilter(WebFilter.java:598)</p>
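<p>The usual cause of a "Maximum capacity of N elements has been exceeded" pool error is that borrowed objects are never returned, so the pool keeps creating new ones until it hits its cap. A minimal, self-contained sketch (not the edu.uiuc.ncsa Pool implementation) reproduces that failure mode and shows why every borrow must be paired with a release, e.g. in a finally block:</p>

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Supplier;

/**
 * Minimal bounded pool illustrating the failure mode: if borrowed
 * objects are never released, creation eventually exceeds maxCapacity
 * and borrow() fails, mirroring the PoolException in the stack trace.
 */
public class BoundedPool<T> {
    private final Deque<T> free = new ArrayDeque<>();
    private final int maxCapacity;
    private final Supplier<T> factory;
    private int created = 0;

    public BoundedPool(int maxCapacity, Supplier<T> factory) {
        this.maxCapacity = maxCapacity;
        this.factory = factory;
    }

    public synchronized T borrow() {
        if (!free.isEmpty()) return free.pop();
        if (created >= maxCapacity) {
            throw new IllegalStateException(
                "Maximum capacity of " + maxCapacity + " elements has been exceeded");
        }
        created++;
        return factory.get();
    }

    /** Callers must invoke this (typically in a finally block) or the pool leaks. */
    public synchronized void release(T obj) {
        free.push(obj);
    }
}
```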
Infrastructure - Bug #7601 (New): CN checksum inconsistencieshttps://redmine.dataone.org/issues/76012016-01-21T20:09:28ZBen Leinfelderleinfelder@nceas.ucsb.edu
<p>While transferring test data from production to the sandbox-2 environment, I noticed failures for a group of pids.<br>
I'll use one pid as an example (doi_10.5066_F71C1TV7):<br>
<a href="https://cn.dataone.org/cn/v2/meta/doi_10.5066_F71C1TV7">https://cn.dataone.org/cn/v2/meta/doi_10.5066_F71C1TV7</a><br>
CN.SystemMetadata reports checksum as:<br>
<br>
46178da6192263921eb755940d716725</p>
<p>Whereas calculating it from disk gives this:<br>
<br>
MD5(/var/metacat/documents/autogen.2013062508395355978.1)= efc11787f789b45db29999fb4bd8d745</p>
<p>The byte size is also off.<br>
<br>
16739<br>
<br>
On disk:<br>
<br>
-rw-r--r-- 1 tomcat7 tomcat7 16529 Jun 25 2013 /var/metacat/documents/autogen.2013062508395355978.1</p>
<p>There are ~70 similar pids (perhaps more) in our test corpus with these issues. They are all from the now-defunct USGS MN.</p>
<p>I'm not sure what our strategy should be, since the original MN is no longer online and we cannot retrieve the "original" bytes from it.</p>
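<p>To audit the remaining pids, the on-disk checksum can be recomputed and compared with the value reported in system metadata. A small sketch of that recomputation (standard-library only; the comparison against CN.getSystemMetadata() output is left to the caller):</p>

```java
import java.io.IOException;
import java.io.InputStream;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

/**
 * Sketch: recompute a stream's MD5 the way `openssl md5` does, so it
 * can be compared against the checksum recorded in system metadata.
 */
public class ChecksumCheck {
    public static String md5Hex(InputStream in)
            throws IOException, NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance("MD5");
        byte[] buf = new byte[8192];
        int n;
        while ((n = in.read(buf)) != -1) {
            md.update(buf, 0, n);
        }
        StringBuilder sb = new StringBuilder();
        for (byte b : md.digest()) {
            sb.append(String.format("%02x", b)); // lowercase hex, like openssl
        }
        return sb.toString();
    }
}
```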
Infrastructure - Bug #4674 (New): Ask Judith, Mike and Virgina Perez.2.1 to obsolete those pids w...https://redmine.dataone.org/issues/46742014-03-31T18:02:41ZJing Taotao@nceas.ucsb.edu
<p>doi:10.5063/AA/Virginia Perez.2.1<br>
judith botha.1.1<br>
judith botha.2.1<br>
judith kruger.1.1<br>
judith kruger.2.1<br>
judith kruger.3.1<br>
judith kruger.4.1<br>
judith kruger.5.1<br>
doi:10.6085/AA/ SHLX00_XXXITV2XLSR03_20111128.40.1 (PISCO)</p>
Infrastructure - Task #4210 (Testing): Metacat does not set serialVersion correctly in CNodeServi...https://redmine.dataone.org/issues/42102013-12-20T15:22:50ZChris Jonescjones@nceas.ucsb.edu
<p>For DATA and METADATA, CNodeService.archive() and D1NodeService.archive(), respectively, don't increment the serialVersion field. Check this for delete() as well. D1NodeService delegates to DocumentImpl to call the HZ put() method, so the fix needs to be there, and in CNodeService.</p>
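<p>The shape of the fix is simply that archive() must bump serialVersion before persisting, as the update paths already do. A hypothetical minimal sketch (not the actual Metacat classes, which persist via Hazelcast and DocumentImpl):</p>

```java
import java.math.BigInteger;

/**
 * Hypothetical minimal system-metadata holder illustrating the fix:
 * archive() increments serialVersion along with setting the flag.
 */
public class SysMeta {
    private BigInteger serialVersion = BigInteger.ZERO;
    private boolean archived = false;

    public void archive() {
        archived = true;
        // the missing step: any sysmeta mutation must increment serialVersion
        serialVersion = serialVersion.add(BigInteger.ONE);
    }

    public BigInteger getSerialVersion() { return serialVersion; }
    public boolean isArchived() { return archived; }
}
```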
Member Nodes - MNDeployment #3521 (Operational): SEAD Member Nodehttps://redmine.dataone.org/issues/35212013-01-25T21:19:12ZRebecca Koskelarkoskela@unm.edu
<p>SEAD (Sustainable Environment - Actionable Data), another DataNet, would like to become a DataONE Member Node<br>
(<a href="http://sead-data.net/">http://sead-data.net/</a>)</p>
Infrastructure - Bug #3492 (In Progress): Invalid PIDs in production (whitespace)https://redmine.dataone.org/issues/34922013-01-17T15:13:44ZDave Vieglaisdave.vieglais@gmail.com
<p>Recording this for future reference. </p>
<p>There are fifteen PIDs in the production environment that contain whitespace. This appears to have no functional effect - sysmeta and objects can still be retrieved - so no action is required beyond ensuring that no more sneak in.</p>
<p>The PIDs in question are:</p>
<a name="guid"></a>
<h2>guid</h2>
<p>doi:10.5063/AA/Virginia Perez.2.1<br>
judith kruger.3.1<br>
judith kruger.4.1<br>
judith botha.1.1<br>
judith kruger.1.1<br>
judith kruger.2.1<br>
judith kruger.5.1<br>
judith botha.2.1<br>
resourceMap_Lin Cheng-Jung.1.1<br>
resourceMap_Lin Cheng-Jung.1.2<br>
resourceMap_Lin Cheng-Jung.1.3<br>
Lin Cheng-Jung.1.1<br>
Lin Cheng-Jung.1.2<br>
Lin Cheng-Jung.1.3<br>
doi:10.6085/AA/ SHLX00_XXXITV2XLSR03_20111128.40.1</p>
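<p>Ensuring no more sneak in amounts to a whitespace check at ingest time. A sketch of such a guard (hypothetical helper, standard-library only; whitespace of any kind, including tabs and line breaks, is rejected):</p>

```java
/**
 * Sketch of an identifier guard: reject pids that are empty or contain
 * any whitespace character, so entries like the ones listed above
 * cannot be registered.
 */
public class PidValidator {
    public static boolean isValid(String pid) {
        if (pid == null || pid.isEmpty()) {
            return false;
        }
        for (int i = 0; i < pid.length(); i++) {
            if (Character.isWhitespace(pid.charAt(i))) {
                return false; // space, tab, or line break embedded in the pid
            }
        }
        return true;
    }
}
```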
Member Nodes - MNDeployment #3118 (Operational): Dryad Member Nodehttps://redmine.dataone.org/issues/31182012-08-05T17:05:51ZDave Vieglaisdave.vieglais@gmail.com
<p>The Dryad MN will operate as a tier 1 member node.</p>
<p>Base_URL: <a href="https://datadryad.org/mn">https://datadryad.org/mn</a><br>
Node_ID: urn:node:DRYAD<br>
Deployment_Contact: Ryan Scherle<br>
Software: Custom on modified DSpace (Dryad)<br>
Target_Tier: 1<br>
Content_Volume_GB: 20</p>
Infrastructure - Task #1556 (In Progress): Interns mailing listhttps://redmine.dataone.org/issues/15562011-05-12T20:19:01ZAmber Buddenaebudden@gmail.com
<p>Can you please clear the current 'interns' mailing list in preparation for use in 2011? It currently sends to last year's interns. Once the new interns have set up Plone accounts, I will submit a new request to have them added.</p>